Contemplating a job change can be exciting, but it can also be difficult when it means leaving a place you actually enjoy working. I put this post together as a retrospective on why I decided to join InfluxData to lead the products organization.
Welcome to 2017!
What does the change in the calendar mean? Well, in addition to a lot of fresh faces temporarily showing up at the gym, it is a time of reflection for many of us as we look back on the past year or two and think about the future. For the past three years, I was incredibly lucky to lead the product management, documentation, and user experience design team at Hortonworks. I am grateful to have had the chance to be part of a wonderful organization during a period of tremendous growth. It was challenging professionally, but we managed to balance that with some fun times (sales kick-offs, offsite meetings, and the various Hadoop Summits) as well. I remain a firm believer in the technology and the people executing the overall business strategy. All of that sounds pretty positive, so why the switch? The primary reason was…time.
Working with various technologies during my career, the one constant has been the necessity of dealing with time. During my days as a junior consultant working on a financial services project, we created a new system for pricing mutual funds on a daily basis. Back then, we needed to receive and process tens of thousands of security prices along with the associated changes in positions across hundreds of mutual funds. It meant reviewing and clearing exceptions spotted after the data was received from various clearing houses and then accurately calculating the net asset value (NAV) for each fund in order to get them to the newspapers (remember those?) by 3:00pm PT for publication the next day. But what it really meant was doing all of that as fast as possible on commodity hardware, replacing an aging mainframe-based system. We needed to deliver a system that produced better results, faster than the mainframe, while allowing for future scalability. No small task…and yet, we did successfully deliver that system and put it into production. That experience gave me an appreciation for dealing with time-based data, significant time constraints, data feeds from multiple providers, and big data…back before that term was fashionable.
Fast forward to the 21st century, where time is a critical data element captured by every interaction with mobile applications, every device (think the Internet of Things), and really any electronic interchange of information. It struck me that time should not be treated as one of those “extra columns” that we simply stash in a database table or stamp into a log file. Time is a quantity that we should orient ourselves and our business data around, and in so doing, build new systems and applications that exploit the uniqueness of that orientation. Time is a critical element that ties information together and provides meaningful context, and I believe that treating it as a first-class citizen within a technology platform can deliver some powerful results. That feels like a pretty big deal.
As more time-based data is generated that doesn’t easily fit the traditional database table/column structure, the drive by many organizations to capture, store, and analyze this data to their advantage is increasing as well. The cost-effective storage and analysis of this “new data” is one of the key drivers for the adoption of technologies like Hadoop. Organizations need to bring together vast amounts of both traditional and new data in order to attempt to make sense of it all. But beyond the cost-effectiveness of doing so, there is associated complexity in achieving this data nirvana, both technically and organizationally.
From a technical perspective, I once had an engineer tell me that the installation of a product we were working on was complex “because it was enterprise software.” My response was, “Just because it’s enterprise software doesn’t mean it needs to be complicated to install.” The point here is that while there are difficult technical challenges to overcome, product vendors should be incredibly mindful of the additional technical complexity of the solutions being introduced to address those challenges. Introducing a raft of technologies that deliver some new insight or value is great, but if the added overhead and complexity of installing, maintaining, and operating the solution is significant, it detracts from the overall value of what was delivered. What’s worse is shifting problems from one group to another within larger IT organizations, which can definitely occur when the technical complexity of the solution is high.
From an organizational perspective, enterprises are attempting to eliminate complexity and the shifting of burdens (and costs) between IT groups by adopting cloud-based platforms and solutions. Great move…but this also requires a rethink of which technologies you are using and for what purpose. Are there cloud-native technologies already present that you should take advantage of to further reduce complexity? If you take a pure “lift-and-shift” approach, you might take some complexity out, but end up with new challenges and a sub-optimal, and potentially more costly, solution within the cloud itself.
We are just over 20 years into Java’s use within the IT landscape. Java will certainly continue to be relevant and is likely to maintain its position as the number-one overall platform for software development for some time to come, but I’ve been looking (waiting?) for a new, viable server-side language to emerge. The one that has looked most promising is Go. Go 1.0 was released in early 2012, and the language has developed quite a vibrant, active, and rapidly growing community. While Go is still maturing, it provides a simplicity and speed that Java seems to have lost along the way. As I thought about what kinds of technologies I wanted to be involved with “next,” I figured that Go would be a part of whatever organization I joined.
Location, location, location…
Last, but not least, I have called San Francisco home for quite some time, and yet I’ve spent at least 13 years driving up and down the peninsula to the various tech companies that I’ve been a part of. While there has been a steady supply of consumer-centric software companies in San Francisco for years, infrastructure software seemed to disappear after the dot-com bust. However, in recent years there has been a bit of a renaissance of infrastructure platforms appearing right in my backyard. Companies like Mesosphere, CoreOS, HashiCorp, and Meteor (among many others) all call San Francisco home and are led by intelligent and dedicated professionals. As a shout-out to my friends and former colleagues at HashiCorp (also backed by Mayfield), they have a great group of Go developers there as well! So, after 13 very long years spent commuting on the least crowded freeway that I could find, it was time…time to rediscover working and living in the city by the bay.
I was introduced to CEO Evan Kaplan along with the co-founders (Paul and Todd) of InfluxData, and it felt like a good fit almost immediately. The combination of technology vision with a focus on building products that dramatically reduce time to value for developers and broadly solve for developer happiness really struck a chord with me. The TICK stack is written in Go, and its emphasis on gathering, storing, and analyzing time series data to address challenges ranging from DevOps monitoring to the Internet of Things (IoT) and real-time analytics felt like precisely where I was heading. The InfluxData team has created some great technology, and I’m excited to join the team and look forward to bringing my experiences from the past few years to bear as we take InfluxDB Enterprise and InfluxDB Cloud forward.
For those of you from the InfluxData community who haven’t met me or don’t know me, I thought it might be nice to provide a little background on how I got here and what you can expect from me as we work together on products.
I’ve been in product management for enterprise software products for the last 16 years. I have worked on both open-source and closed-source software and their commercial counterparts. My last startup experience was focused on management and monitoring of web service integrations and that company, Talking Blocks, was acquired by HP. After a number of years at HP where I focused on web services governance, management and testing, I spent 4 years at Oracle leading the outbound product management team covering the Business Process Management Suite, SOA Suite offerings including Oracle Event Processing, and the Data Integration Suite. Before InfluxData, I had the distinct pleasure of working with an extraordinarily talented and driven team at Hortonworks. In addition to working with a great team, I had the opportunity to work on big data related technologies via Apache Hadoop and the entire open source ecosystem of projects which leverage Hadoop. That experience was also my first time working alongside an open source community as we planned, prioritized, and innovated on the technology.
My Philosophy on Building Great Products
For startups approaching the enterprise software space, and for those focused on building infrastructure platforms, I think about four key areas that need to be addressed as software is developed:
- Core capabilities — every product has a set of unique differentiators that make up the “core” of what it does. As a product manager, you want to listen to the customer and the market and extend your lead in these core capabilities versus competitors with each release. In addition, you want to keep a watchful eye on your competitors, incorporating the things that become “table stakes” in deals and that can become deal killers if you fail to address them.
- Extensibility — for infrastructure products in particular, it is important to give developers the ability to extend the capabilities of your technology. This allows customers and partners to take advantage of your unique capabilities and leverage your technology in ways that you might never have dreamt possible. It also comes in handy when you have feature gaps: an imaginative professional services team can use those extensions on behalf of customers to deliver features sooner than waiting for them to be baked into the core.
- Enterprise Readiness — having worked with so many enterprises during my career, I know there is a common set of expectations around how technology is consumed and what the software vendor must provide to achieve broader adoption. These are the really “non-sexy” capabilities around operations, security, governance, auditing, supportability, and more. But they are critical for running these products in mission-critical environments and for meeting the minimum requirements placed on organizations via government rules and regulations.
- Consumability — last, but certainly not least, I am a big believer in user-centric design. Understanding who the end user is, how they plan to address the challenges they have, and what they will do next within the product (or outside it) is really critical in designing and developing products that customers and developers love. The products we build need to be consumable by real people, and we need to appreciate the value of things like documentation, user experience design, training and tutorials, and generally speeding the time to value.
With that as some context, you can expect me to be a big advocate for customers and community members alike. I like to listen and understand how you use the products and technologies that we produce. The software we create needs to be simple and approachable; users of our software shouldn’t need a PhD in computer science. Making the complex simple is something I’m very passionate about. I’m looking forward to engaging with the community of folks who have already embraced the TICK stack and seeing where we can go from here. You can follow me on Twitter @thallinflux.