Author: OSEN

  • Avoiding Unnecessary Risk – Rules for CEOs

    Found an interesting article at “The C Suite” on the topic “CEO’s ignorance of open source software use places their business at risk”. While some of the article is a bit “FUDdy” – the author works for a company that sells risk management and mitigation, so there’s a greatest hits of open source vulnerabilities – there were also some eye-opening bits of data. To wit:

    As much as 50 percent of the code found in most commercial software packages is open source.  Most software engineers use open source components to expedite their work – but they do not track what they use, understand their legal obligations for using that code, or the software vulnerability risk it may contain.

    We all know that developers use whatever’s available and don’t ask permission. That is not a surprise. What stood out to me was that the amount of open source code in commercial software was anywhere near 50%. Holy moly. That’s a lot of things to keep track of. When I first started this site, I had an inkling that pretty much all products consume some open source code, and I thought there should be some discussion around best practices for doing so, but I had no idea it was that pervasive. Even I, open source product person, am surprised sometimes by the near ubiquity of open source software in commercial products.

    I think we’re moving beyond simply using open source software. I think we’ll see a marked shift towards optimization of usage and figuring out models to justify participation and collaboration. At least, that’s my hope. Look for more thoughts on this very subject coming up on this site soon.

  • Product Development in the Age of Cloud Native

    In defense of the community distribution

    Ever since the mass adoption of Agile development techniques and devops philosophies that attempt to eradicate organizational silos, there’s been a welcome discussion on how to optimize development for continuous delivery on a massive scale. Some of the better known adages that have taken root as a result of this shift include “deploy in production after checking in code” (feasible due to the rigorous upfront testing required in this model), “infrastructure as code”, and a host of others that, taken out of context, would lead one down the path of chaos and mayhem. Indeed, the shift towards devops and agile methodologies and away from “waterfall” has led to a much needed evaluation of all processes around product and service delivery that were taken as a given in the very recent past.

    In a cloud native world, where workloads and infrastructure are all geared towards applications that spend their entire life cycle in a cloud environment, one of the first shifts was towards lightning fast release cycles. No longer would dev and ops negotiate six-month chunks of time to ensure safe deployment in production of major application upgrades. No, in a cloud native world, you deploy incremental changes in production whenever needed. And because the dev and test environments have been automated to the extreme, the pipeline for application delivery in production is much shorter and can be triggered by the development team, without needing to wait for a team of ops specialists to clear away obstacles and build out infrastructure – that’s already done.
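
    To make the “deploy in production after checking in code” adage concrete, here is a minimal, hypothetical Python sketch of the kind of commit-triggered gate such a pipeline relies on. It is illustrative only – the function names, commands, and registry tag are assumptions, not any particular CI system’s API – but it shows the shape of the idea: every check-in runs the full test suite, and only a green run promotes the build to production.

        # Minimal sketch of a commit-triggered delivery pipeline.
        # Stage names, commands, and the registry tag are illustrative assumptions,
        # not a real CI system's API.
        import subprocess
        import sys


        def run_tests() -> bool:
            """Run the automated test suite; this is the only gate before production."""
            result = subprocess.run(["pytest", "-q"])  # assumes the project uses pytest
            return result.returncode == 0


        def build_artifact() -> str:
            """Package the service as a container image and return its tag."""
            tag = "registry.example.com/app:latest"  # placeholder registry/tag
            subprocess.run(["docker", "build", "-t", tag, "."], check=True)
            return tag


        def deploy_to_production(tag: str) -> None:
            """Push the artifact; the infrastructure is already provisioned ('as code')."""
            subprocess.run(["docker", "push", tag], check=True)


        def on_commit() -> None:
            """Entry point a CI system would invoke on every check-in."""
            if not run_tests():
                sys.exit("Tests failed - nothing ships.")
            deploy_to_production(build_artifact())


        if __name__ == "__main__":
            on_commit()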

    Let me be clear: this is all good stuff. The tension between dev and ops that has been the source of frustration over the centuries has left significant organizational scar tissue in the form of burdensome regulations enforced by ops teams and rigid hierarchies which serve to stifle innovation and prevent rapid changes. This is anathema, of course, to the whole point of agile and directly conflicts with the demands of modern organizations to move quickly. As a result, a typical legacy development pathway may have looked like this:

    [Figure 1: 3-stage development process, from open source components to 1st software integration to release-ready product]

    In the eyes of agile adherents, this is heretical. Why would you waste effort creating release branches solely for the purpose of branching again and going through another round of testing? This smacks of inefficiency. In a cloud native world, developers would rather cut out the middle step entirely, create a better, comprehensive testing procedure, and optimize the development pipeline for fast delivery of updated code. Or as Donnie Berkholz put it: this model implies waterfall development. What a cloud native practitioner strives for is a shortened release cycle more akin to this:

    [Figure 2: 2-stage process, from open source components to deployable product]

    Of course, if you’ve read my series about building business on open source products and services, you’ll note that I’m a big advocate for the 3-stage process identified in Figure 1. So what gives? Is it hopelessly inefficient, a casualty of the past, consigned to the ash heap of history? I’ll introduce a term here to describe why I firmly believe in the middle step: orthogonal innovation.

    Orthogonal Innovation

    In a perfect world, innovation could be perfectly described before it happens, and the process for creating it would take place within well-defined constructs. The problem is that innovation happens all the time, due to the psychological concept of mental incubation, where ideas fester inside the brain for some indeterminate period of time until they find their way into a conscious state, producing an “Aha!” moment. Innovation is very much conjoined with happenstance and timing. People spawn innovative ideas all the time, but the vast majority of them never take hold.

    As I wrote in It Was Never About Innovation, the purpose of communities created around software freedom and open source was never to create the most innovative ecosystems in human history – that was just an accidental byproduct. By creating rules that mandated that all parties in an ecosystem be relative equals, the stage was set for massively scalable innovation. If one were to look at product lifecycles solely from the point of view of engineering efficiency, then yes, the middle piece of the 3-stage pathway looks extraneous. However, the assumption made is that a core development team is responsible for all necessary innovation, and no more is required. That model also assumes that a given code base has a single purpose and a single customer set. I would argue that the purpose of the middle stage is to expose software to new use cases and people who would have a different perspective from the primary users or developers of a single product. Furthermore, once you expose this middle step to more people, they need a way to iterate on further developments for that code base – developments that may run contrary to the goals of the core development team and its customers. Let’s revisit the 3-stage process:

    [Figure 3: the 3-stage process, revisited]

    In this diagram, each stage is important for different reasons. The components on the left represent raw open source supply chain components that form the basis for the middle stage. The middle stage serves multiple entities in the ecosystem that springs up around the code base and is a “safe space” where lots of new things are tried, without introducing risk into the various downstream products. You can see echoes of this in many popular open source-based products and services. Consider the Pivotal Cloud Foundry process, as explained by James Watters in this podcast with Andreessen Horowitz: raw open source components -> CloudFoundry.org -> Pivotal Cloud Foundry, with multiple derivatives from CloudFoundry.org, including IBM’s.

    As I’ve mentioned elsewhere, this also describes the RHEL process: raw components -> Fedora -> RHEL. And it’s the basis on which Docker is spinning up the Moby community. Once you’ve defined that middle space, there are many other fun things you can do, including building an identity for that middle distribution, which is what many open source communities have formed around. This process works just as well from an InnerSource perspective, except in that case the downstream products’ customers are internal, and there are multiple groups within your organization deriving products and services from the core code base in the middle stage. Opening up the middle stage to inspection and modification increases the surface area for innovation and gives breathing room for the crazier ideas to take shape, possibly leading to their becoming slightly less crazy and useful for the other downstream participants.

    For more on this, come to our meetup at the Microsoft NERD center in Cambridge, MA on May 23, where I’ll present on this subject.

    Addendum: none of the above necessitates a specific development methodology. It could be agile, waterfall, pair programming or any other methodology du jour – it’s immaterial. What matters is constructing a process that allows for multiple points of innovation and iterative construction, even – or especially – where it doesn’t serve the aims of a specific downstream product. You want a fresh perspective, and to get that, you have to allow those with different goals to participate in the process.

  • Meetup 5/25 – Product Delivery in the Age of Cloud Native

    We have secured space at the Microsoft NERD center in Cambridge, MA, for a meetup on May 23.

    We’ll talk about product management in cloud native environments, which basically means the intersection of open source, devops, and continuous integration as it pertains to automated service/product delivery.

    So bring your devops hat and get ready to think about risk vectors and how to manage them in this kind of environment. Should be fun!

  • OSEN Podcast, CLS Edition – Jono Bacon

    We had a great talk with Jono Bacon, community leader extraordinaire. Jono spent many years as the Ubuntu community leader, founded the Community Leadership Summit (CLS – now taking place in Austin, TX, as we speak), wrote the book The Art of Community, and has now started his own consulting practice, Jono Bacon Consulting.

    We talked about all things community-related, including the intersection between community development, devops, and product management. It was a great discussion, and I hope you enjoy listening as much as we enjoyed talking.

    [youtube https://www.youtube.com/watch?v=jsJwBR7HzFs]

  • Podcast: Stephen Walli and Rikki Endsley

    Stephen and Rikki stopped by the OSEN studios (haha) to talk about open source trends, product management, and why there is only one Red Hat.

    [youtube https://www.youtube.com/watch?v=gKbWix1QJ5E]

    Rikki Endsley is the guru who runs the community for OpenSource.com – and does a whale of a job. Stephen is an open source engineering consultant at Docker and blogs for OSEN and at Medium.

  • First Podcast: Tim Mackey, Black Duck

    I spoke with Tim Mackey, Technology Evangelist from Black Duck. Tim spent a few years at Citrix working on Xen Server and Cloudstack, where he, like me and many others, started thinking about how to get code from project to product. Tim and I talked about open source risk management, the current state of IT and open source, Xen vs. KVM flashbacks, and more.

    [youtube https://www.youtube.com/watch?v=4BbN5XXGCrc]

    For more on Black Duck:
    https://blackducksoftware.com/

    Opening music: “Hey Now” by MK2
    https://www.youtube.com/watch?v=E8rPgJwgmOI

  • Supply Chain Case Study: Canonical and Ubuntu

    I love talking about supply chain management in an open source software context, especially as it applies to managing collaborative processes between upstream projects and their downstream products. In the article linked above, I called out a couple of examples of supply chain management: an enterprise OpenStack distribution and a container management product utilizing Kubernetes and Docker for upstream platforms.

    What about anti-patterns or things to avoid? There are several we could call out. At the risk of picking on someone I like, I’ll choose Canonical simply because they’ve been in the headlines recently for changes they’ve made to their organization, cutting back on some efforts and laying off some people. As I look at Canonical from a product offering perspective, there’s a lot they got right, which others could benefit from. But they also made many mistakes, some of which could have been avoided. First, the good.

    What Canonical Got Right About Supply Chains

    When the Ubuntu distribution first started in 2004, it made an immediate impact; the kind of impact that would frankly be impossible today for a Linux distribution. Remember what was happening at the time: many, many Red Hat Linux distribution users were feeling left out in the cold by Red Hat’s then groundbreaking decision to fork their efforts into a community stream and a product stream. One of the prevailing opinions at the time was that Fedora would be treated like the red-headed stepchild and starved for resources. Unfortunately, Red Hat played into that fear by… initially treating Fedora like the red-headed stepchild and almost willfully sabotaging their own community efforts. (For a good run-down of the 2004 zeitgeist and some LOL-level hilarity, see this page on LWN.)

    Ubuntu never had that problem. From the very outset, there was never any doubt that Mark Shuttleworth and crew meant what they said when they set out to deliver an easy-to-use, free distribution. Lots of people tried to do that, but Ubuntu went about it more intelligently and made a lot more progress than its predecessors. Where did they succeed where others failed?

    1. They chose a great upstream platform. Instead of building something from scratch (which would have taken forever) or using the abandoned Red Hat Linux or even Mandrake, which were both going through awkward transitional phases (one to Fedora and the other out of the Red Hat orbit), they built Ubuntu on a rock-solid, dependable, community-maintained Linux distribution: Debian. openSUSE was not yet a thing, and building on SuSE Linux would have tied Ubuntu to the fortunes of SuSE and Novell, which would have been a bad idea. Slackware was long in the tooth, even then. Debian had its own challenges, not the least of which was a clash of cultures between free culture diehards and a group of people starting a for-profit entity around Debian, but it worked for Ubuntu’s purposes. It was also a challenge to install, which provided a great opportunity for an upstart like Ubuntu.
    2. Their supply chain was highly efficient, which is directly related to the above. Contrast this to what I’ll say below, but in the case of the base platform they started from, the software supply chain that made up Ubuntu was reliable and something its developers and users could depend on.
    3. They invested in the user experience and community. Ubuntu classified itself, at least back then, as “Linux for humans”, which spoke to the fact that, up until then, using Linux was an esoteric and mistake-prone set of tasks. It was the sort of thing you did in your spare time if you were a CS or EE major looking to construct a new toy. Ubuntu changed all that. They made Linux much easier than any previous desktop Linux initiative. From a supply chain perspective, they did this great UX work as participants in the greater Gnome community. I realize some Gnome folks may blanch at that statement, but by and large, the Ubuntu developers were very much seen as Gnome people, and they were making contributions to the greater effort.
    4. They scaled globally, from the beginning. It was awe-inspiring to see all the local communities (LoCos, in Ubuntu parlance) spring up around the world dedicated to evangelizing Ubuntu and supporting its users. This happened organically, with Canonical providing support in the form of tooling, broad messaging, and in some cases, on the ground resources. Ubuntu also employed a formidable community team helmed by Jono Bacon, who accelerated Ubuntu’s growth (I once chided Jono on a community managers panel at OSCON about how easy he had it as the Ubuntu community lead. I still chuckle over that). One cannot emphasize enough that when this massive global community kicks into gear, the effect on upstream supply chains is tremendous. A hefty number of these global users and developers also became participants in many of the upstream communities that fed into Ubuntu. It’s one of the great examples of increased participation yielding positive results for everyone in the ecosystem, including Canonical.
    5. They were early adopters of “cloud native” workloads. As Simon Wardley will never let us forget, Canonical bought into cloud-based workloads before any other Linux vendor. It was their work in 2008-2009 that really cemented Ubuntu’s status as *the* primary and default platform for all new cloud and server-based technologies, which continues to this day. Even now, if a new project wants to get early adoption, they release .DEBs on Ubuntu and make sure the software builds properly for those Ubuntu users and developers who download the source code. It gives Ubuntu and Canonical an incredible advantage. Again, from a supply chain perspective, this was gold. It meant that the upstream supply chain for cloud native tools was heavily Ubuntu-centric, wrapping it all up with a nice bow.

    Where it Went Pear-shaped

    In writing about everything they got right, I am of course using the rhetorical device of setting up the reader for a barrage of things they got wrong. This is that section. For all of their incredible acumen at scaling out a global community around a Linux distribution, they failed to learn from their supply chain success, and instead started down the path of NIH (not invented here) syndrome. You’ll see lots of critiques elsewhere pertaining to other challenges at Canonical, but I’ll focus on their supply chain strategy, and how it failed them.

    1. Launchpad. The first sign that Canonical was moving away from its established supply chain methodology was when it released Launchpad, a web-based service for developers to create, share, and collaborate on software projects. It also featured an auto-build service and an easy way to release and manage unofficial builds of bleeding edge software: the “Personal Package Archive” or PPA. The service was great for its time and highly ambitious. And when Canonical announced they were open-sourcing it, even better. But there were problems: maintaining a code base for a service as complex as Launchpad is really difficult. Even with an entire company devoted to such a concept, there are still major challenges. There were a couple of ways to deal with that complexity: upstream as much as possible to defray the cost of maintaining the code and/or create a long-term revenue model around Launchpad to sustain its development. Canonical did neither. In fact, it was the worst of both worlds: they neither upstreamed the project nor created a revenue model to sustain development. In other words, Launchpad became a proverbial albatross around the neck, both in terms of technical debt to be maintained solely by the Launchpad team and in the lack of funding for future development. It was the first sign that Canonical was on the wrong track from a business strategy viewpoint. The polished user experience that Ubuntu users came to expect from their software was missing from Launchpad, giving GitHub the opportunity to build something larger.
    2. Juju. It might be premature to write off Juju entirely, but it hasn’t become the force Canonical and Ubuntu intended it to be. Written at a time when Puppet and Chef were the young upstarts, and Ansible was but a gleam in Cobbler’s eye, Juju was Canonical’s answer to the problem of configuration management in the age of cloud. It might have had a better chance if Canonical had decided to be a bit looser with its user base. Puppet and Chef, for example, were always available on a variety of platforms, whereas Juju was specifically tied to Ubuntu. And while Ubuntu became the de facto standard for building cloud tools, the enterprise was still dominated by Windows, Unix, and RHEL. Developers may have built many tools using Ubuntu, but they deployed in production on other platforms, where Juju was not to be found. If you were an enterprising young devops professional, going with a Juju-only approach meant cutting off your air supply. Because it was Ubuntu-only, and because it was never a core part of the Debian upstream community, the impact made by Juju was limited. Canonical was unable to build a collaborative model with other developer communities, which would have improved the supply chain efficiency, and they weren’t able to use it to add value to a revenue-generating product, because their capacity for generating server revenue was limited. It’s another case of great software hobbled by a poor business strategy.
    3. Unity. If Launchpad and Juju gave observers an inkling that Canonical was going off the rails, the launch of Unity confirmed it. From the beginning, Canonical was always a participant in the Gnome desktop community. This made sense, because Ubuntu had always featured a desktop environment based on Gnome. At some point, Canonical decided they could go faster and farther if they ditched this whole Gnome community thing and went their own way. As with Launchpad and Juju, this makes sense if you’re able to generate enough revenue to sustain development over time with a valid business model. I personally liked Unity, but the decision to go with it over stock Gnome 3 drove an even larger wedge between Ubuntu and the rest of the Linux desktop ecosystem. Once again, Ubuntu packagers and developers were caught in a bubble without the support of an upstream community to stabilize the supply chain. This meant that, once again, Canonical developers were the sole developers and maintainers of the software, further straining an already stretched resource.
    4. Mir. I don’t actually know the origins of Mir. Frankly, it doesn’t matter. What you need to know is this: the open source technology world participated in the Wayland project, whose goal was to build a modern successor to the venerable X.org display server, and Canonical decided to build Mir, instead. The end. Now, Mir and Unity are, for all intents and purposes, mothballed, and Wayland is the clear winner on the desktop front. Supply chains: learn them, live them, love them – or else.
    5. Ubuntu mobile / Ubuntu phone. The mobile space is extremely difficult because the base hardware platforms are always proprietary, as mandated by the large carriers who set the rules for the entire ecosystem. It’s even more difficult to navigate when you’re launching a product that’s not in your area of expertise, and you try to go to market without a strong ecosystem of partners. The iPhone had AT&T in its corner. The Ubuntu phone had… I’m not even sure.  Ubuntu phone and the mobile OS that ran on it were DOA, and they should have understood that much sooner than they did.
    6. Ubuntu platform itself. I know, I spent the first half of this article talking up the great success of Ubuntu, but there is one place where it never excelled: it never became a large enough revenue generator to sustain the many other projects under development. There was also never a coherent strategy, product-wise, around what Ubuntu should grow up to become. Was it a cloud platform? Mobile platform? Enterprise server? Developer workstation? And there was never an over-arching strategy with respect to the complementary projects built on top of Ubuntu. There was never a collective set of tools designed to create the world’s greatest cloud platform. Or enterprise server. Or any of the other choices. Canonical tried to make Ubuntu the pathway to any number of destinations, but without the product discipline to “just say no” at the appropriate time.

    I get no joy from listing the failings of Canonical. I remain a great fan of what they accomplished on the community front, which, as far as I can tell, is without parallel. Not many companies can claim with any credibility that they fostered a massive, global community of users and developers that numbered in the millions and covered every country and continent on the planet, driven by organic growth and pursued with a religious zeal. That is no small feat and should be celebrated. My hope is that this is what Ubuntu, Canonical, and yes, Mark Shuttleworth, are known for, and not for any business shortcomings.

    I’m not suggesting that a company cannot be successful without building out an upstream supply chain – there are far too many counter-examples to claim that. What I am suggesting is that if you have limited resources, and you choose to build out so many products, you’re going to need the leverage that comes from massive global participation. If Canonical had chosen to focus on one of the above initiatives, you could argue that supply chain would not have been as important. I will note, for the record, that none of the challenges listed above are related to the fact that they were open source. Rather, to sustain their development, they needed much broader adoption. To sustain that model, they would have had to create a successful product with high revenue growth, which never came. The lesson: if you want more control over your software products, you need an accompanying product strategy that supports it. If I were at Canonical, I would have pushed for a much more aggressive upstream strategy to get more benefits from broader open source participation.

  • An Open Letter to Docker About Moby

    Congratulations, Docker. You’ve taken the advice of many and gone down the path of Fedora / RHEL. Welcome to the world of upstream/downstream product management, with community participation a core component of supply chain management. You’ve also unleashed a clever governance hack that cements your container technology as the property of Docker, rather than letting other vendors define it as an upstream technology for everyone. Much like Red Hat used Fedora to stake its claim as owner of an upstream community. I’ll bet the response to this was super positive, and everyone understood your intent perfectly! Oh…

    So yes, the comparison to Fedora/RHEL is spot on, but you should also remember something from that experiment: at first, everyone *hated* it. The general take from the extended Linux community at the time was that Red Hat was abandoning community Linux in an effort to become “the Microsoft of Linux”. Remember, this level of dissatisfaction is why CentOS exists today. And the Fedora community rollout didn’t exactly win any awards for precise execution. At first, there was “Fedora Core”, and it was quite limited and not a smashing success. This was one of the reasons Ubuntu became as successful as it did: it was able to capitalize on the suboptimal Fedora introduction. Over time, however, Red Hat continued to invest in Fedora as a strategic community brand, and it became a valuable staging ground for leading edge technologies from the upstream open source world, much like Moby could be a staging area for Docker.

    But here’s the thing, Docker: you need to learn from previous mistakes and get this right. By waiting so long to make this move, you’ve increased the level of risk you’re taking on, which could have been avoided. If you get it wrong, you’re going to see a lot of community pressure to fork Moby or Docker and create another community distribution outside of your sphere of influence. The Fedora effort frittered away a lot of goodwill from the Linux community by not creating an easy to use, out of the box distribution at first. And the energy from that disenchantment went to Ubuntu, leaving Fedora in a position to play catch-up. That Red Hat was able to recover and build a substantial base of RHEL customers is a testament to their ability to execute on product management. However, Ubuntu was able to become the de facto developer platform for the cloud by capitalizing on Fedora’s missteps, putting them on the inside track for new cloud, IoT, and container technologies over the years. My point is this: missteps in either strategy or execution have a large and lasting impact.

    So listen up, Docker. You need to dedicate tremendous resources right now to the Moby effort to make sure that it’s easy to use, navigate, and most importantly, ensure that your community understands its purpose. Secondly, and almost as importantly, you need to clearly communicate your intent around Docker CE and EE. There is no room for confusion around the difference between Moby and Docker *E. Don’t be surprised if you see a CentOS equivalent to Docker CE and/or EE soon, even though you’re probably trying to prevent that with a freely available commercial offering. Don’t worry, it will only prove your model, not undermine it, because no one can do Docker better than Docker. Take that last bit to heart, because far too many companies have failed because they feared the success of their free stuff. In this case, that would be a colossal unforced error.

  • Why Project Moby is a Brilliant Move by Docker

    On Tuesday, Solomon Hykes, Docker’s CTO and co-founder, unleashed the Moby Project on the world. I’ll admit I didn’t fully grasp its significance at first. This might have something to do with being on vacation in Cape Cod and not being at DockerCon, but I digress. It wasn’t until I read this Twitter thread from Kelsey Hightower that something clicked.

    And then it dawned on me – Docker was taking a page out of the Red Hat playbook and fully embracing the upstream supply chain model. In 2003, Red Hat decided it needed to focus on enterprise subscriptions and moved away from its free commercial Linux, the venerable Red Hat Linux. In its place, Red Hat created Red Hat Enterprise Linux (RHEL), and then a funny thing happened: its employees rebelled and created the Fedora community and project, designed to be a community Linux distribution. This turned out to be a brilliant move. Forward-looking technology and innovation happened in the Fedora community, and then it went through a round of hardening, polishing, integration with other Red Hat platforms, and bug fixing before being released under the RHEL brand. The more complex Red Hat’s product offerings became, the more valuable this model proved.

    [Diagram: Red Hat product supply chain – raw open source components -> Fedora -> RHEL]

    The container ecosystem shares much with the Linux ecosystem, because that’s where it came from. One of the criticisms of Docker, much like Red Hat, is that they’re “trying to control the entire ecosystem”. I may have uttered that phrase from time to time, under my breath. The Moby Project, in my opinion only, is a direct response to that. As Solomon mentioned in his blog announcement:

    In order to be able to build and ship these specialized editions in a relatively short time, with small teams, in a scalable way, without having to reinvent the wheel; it was clear we needed a new approach.

    Yes, any successful ecosystem becomes extremely difficult to manage over time, which is why you end up giving up control, without giving up your value proposition. This is also probably why you’ve seen Docker become more engaged on the CNCF front and why they drove the OCI formation. As David Nalley likes to say, this is the “hang on loosely, but don’t let go” approach to community-building.

    There’s also the branding and trademark benefit. Just as with Fedora and RHEL, separating the branding streams now means that community-minded people know where to go: Project Moby. And prospective customers and partners also know where to go: Docker.com. It’s a great way to let your audiences self-select.

    Docker decided to take the next step and embrace the open source supply chain model. This is a good thing.

  • How Silicon Valley Ruined Open Source Business

    Back in the early days of open source software, we were constantly looking for milestones to indicate how far we had progressed. Major vendor support: check (Oracle and IBM in 1998). An open source IPO: check (Red Hat and VA Linux in 1999). Major trade show: check (LinuxWorld in 1999). And then, of course, a venture-backed startup category: check (beginning with Cygnus, Sendmail, VA, and Red Hat in the late 90’s, followed by a slew of others, especially after the dot bomb faded after 2003). And then, of course, our first VC superstar: check (Peter Fenton). Unfortunately, VC involvement came with a hefty price.

    Remember, this was a world where CNBC pundits openly questioned how Red Hat could make money after they “gave away all their IP”. (Spoiler alert: they didn’t. That’s why it’s so funny.) So when venture capitalists got into the game, they started inflicting poor decisions on startup founders, mostly centered on the following conceits:

    1. OMG you’re giving away your IP. You have to hold some back! How do you expect to make money?
    2. Here’s a nice business plan I just got from my pal who’s from Wharton. He came up with this brilliant idea: we call it ‘Open Core’!
    3. Build a community – so we can upsell them!
    4. Freeloaders are useless

    A VC’s view of open source at the time was simplistic: a vendor should be able to recoup costs by charging for an enterprise product that takes advantage of the many stooges dumb enough to take in the free “community” product. In this view, a community is mostly a marketing ploy designed for a captive audience that had nowhere else to go. For many reasons, this is the view that the VC community embraced in the mid-2000s. My hypothesis: when it didn’t work, it soured the relationship between investors and open source, which manifests itself in lost opportunities to this day.

    What should have been obvious from the beginning – source code is not product – has only recently begun to get airplay. Instead, we’ve been forced to endure a barrage of bad diagnoses of failures and bad advice for startup founders. It’s so bad that even our open source business heroes don’t think you can fully embrace open source if you want to make money. The following is from Mike Olson, mostly known for his business acumen with Cloudera and Sleepycat:

    The list of successful stand-alone open source vendors that emerged over that period is easy to write, because the only company on it is Red Hat. The rest have failed to scale or been swallowed.

    …The moral of that story is that it’s pretty hard to build a successful, stand-alone open source company. Notably, no support- or services-only business model has ever made the cut.

    As I have mentioned early and often, as has Stephen Walli, a project is not a product, and vice-versa, and it’s this conflation of the two that is a profound disservice to startups, developers, and yes, investors. Here’s the bottom line: you want to make money in a tech startup? Make a winning solution that offers value to customers for a price. This applies whether you’re talking about an open source, proprietary, or hybrid solution. This is hard to do, regardless of how you make the sausage. Mike Olson is a standup guy, and I hope he doesn’t take this personally, but he’s wrong. It’s not that “it’s pretty hard to build a successful, stand-alone open source company.” Rather, it’s hard to build a successful stand-alone company in *any* context. But for some reason, we notice the open source failures more than the others.

    The failures are particularly notable for how they failed, and how little has been done to diagnose what went wrong, other than “They gave away all their IP!” In the vast majority of cases, these startups were poorly received because they either a.) had a terrible product or b.) constrained their community to the point of cutting off their own air supply. There were, of course, notable exceptions. While it wasn’t my idea of the best way to do it, MySQL turned out pretty well, all things considered. The point is, don’t judge startups based on their development model; judge them on whether they have a compelling offering that customers want.

    While investors in 2017 are smarter than their 2005 cousins, they still have blinders when it comes to open source. They have distanced themselves from the open core pretenders, but in the process, they’ve also distanced themselves from potential Red Hats. Part of this is due to an overall industry trend towards *aaS and cloud-based services, but even so, any kind of emphasis on pure open source product development is strongly discouraged. If I’m a *aaS-based startup today and I approach an investor, I’m not going to lead off with, “and we’re pushing all of our code that runs our internal services to a publicly-accessible GitHub account!” Unless, of course, I wanted to see ghastly reactions.

    This seems like a missed opportunity: if ever there was a way to safely engage in open source development models while still maintaining your product development speed and agility, a *aaS model is it. After all, winning at *aaS means winning at superior operations and uptime, which has zilch to do with source code. And yet, we’re seeing the opposite: most of the startups that do full open source development are the ones that release a “physical software download”, while the SaaS startups run away scared, despite the leverage that a SaaS play could have if they were to go full-throttle in open source development.

    It’s gotten to the point that when I advise startups, I tell them not to emphasize their open source development, unless they’re specifically asked about it. Why bother? They’re just going to be subjected to a few variations on the theme of “but how are you going to get paid?” Better to focus on your solution, how it wins against the competition, and how customers are falling over themselves to get it. Perhaps that’s how it always should have been, but it strikes me as odd that your choice of development model, which is indirectly and tangentially related to how you move product, should be such a determining factor in whether your startup gets funded, even more so than whether you have a potentially winning solution. It’s almost as if there’s an “open source tax” that investors factor into any funding decision involving open source. As in, “sure, that’s a nice product, but ooooh, I have to factor in the ‘open source tax’ I’ll have to pay because I can’t get maximum extraction of revenue.”

    There are, of course, notable exceptions. I can think of a few venture-backed startups with an open source focus. But often the simple case is ignored in favor of some more complex scheme with less of a chance for success. Red Hat has a very simple sales model: leverage its brand and sell subscriptions to its software products. The end. Now compare that to how some startups today position their products and try to sell solutions. It seems, at least from afar, overly complicated. I suspect this is because, as part of their funding terms, they’re trying to overcome their investors’ perceptions of the open source tax. Sure, you can try to build the world’s greatest software supply house and delivery mechanism, capitalizing on every layer of tooling built on top of the base platform. Or you can, you know, sell subscriptions to your certified platform. One is a home run attempt. The other is a base hit. Guess which way investors have pushed?