Blog

  • OSEN Podcast, CLS Edition – Jono Bacon

    We had a great talk with Jono Bacon, community leader extraordinaire. Jono spent many years as the Ubuntu community leader, founded the Community Leadership Summit (CLS – now taking place in Austin, TX, as we speak), wrote the book The Art of Community, and has now started his own consulting practice, Jono Bacon Consulting.

    We talked about all things community-related, including the intersection between community development, devops, and product management. It was a great discussion, and I hope you enjoy listening as much as we enjoyed talking.

    [youtube https://www.youtube.com/watch?v=jsJwBR7HzFs]

  • Podcast: Stephen Walli and Rikki Endsley

    Stephen and Rikki stopped by the OSEN studios (haha) to talk about open source trends, product management, and why there is only one Red Hat.


    [youtube https://www.youtube.com/watch?v=gKbWix1QJ5E]

    Rikki Endsley is the guru who runs the community for Opensource.com – and does a whale of a job. Stephen is an open source engineering consultant at Docker and blogs for OSEN and at Medium.

  • First Podcast: Tim Mackey, Black Duck

    I spoke with Tim Mackey, Technology Evangelist from Black Duck. Tim spent a few years at Citrix working on XenServer and CloudStack, where he, like me and many others, started thinking about how to get code from project to product. Tim and I talked about open source risk management, the current state of IT and open source, Xen vs. KVM flashbacks, and more.

    [youtube https://www.youtube.com/watch?v=4BbN5XXGCrc]

    For more on Black Duck:
    https://blackducksoftware.com/

    Opening music: “Hey Now” by MK2
    https://www.youtube.com/watch?v=E8rPgJwgmOI

  • Supply Chain Case Study: Canonical and Ubuntu

    I love talking about supply chain management in an open source software context, especially as it applies to managing collaborative processes between upstream projects and their downstream products. In the article linked above, I called out a couple of examples of supply chain management: an enterprise OpenStack distribution and a container management product utilizing Kubernetes and Docker for upstream platforms.

    What about anti-patterns or things to avoid? There are several we could call out. At the risk of picking on someone I like, I’ll choose Canonical simply because they’ve been in the headlines recently for changes they’ve made to their organization, cutting back on some efforts and laying off some people. As I look at Canonical from a product offering perspective, there’s a lot they got right, which others could benefit from. But they also made many mistakes, some of which could have been avoided. First, the good.

    What Canonical Got Right About Supply Chains

    When the Ubuntu distribution first started in 2004, it made an immediate impact; the kind of impact that would frankly be impossible today for a Linux distribution. Remember what was happening at the time: many, many Red Hat Linux users were feeling left out in the cold by Red Hat’s then-groundbreaking decision to fork their efforts into a community stream and a product stream. One of the prevailing opinions at the time was that Fedora would be treated like the red-headed stepchild and starved for resources. Unfortunately, Red Hat played into that fear by… initially treating Fedora like the red-headed stepchild and almost willfully sabotaging their own community efforts. (For a good run-down of the 2004 zeitgeist and some LOL-level hilarity, see this page on LWN.)

    Ubuntu never had that problem. From the very outset, there was never any doubt that Mark Shuttleworth and crew meant what they said when they set out to deliver an easy-to-use, free distribution. Lots of people tried to do that, but Ubuntu went about it more intelligently and made a lot more progress than its predecessors. Where did they succeed where others failed?

    1. They chose a great upstream platform. Instead of building something from scratch (which would have taken forever) or using the abandoned Red Hat Linux or even Mandrake, which were both going through awkward transitional phases (one to Fedora and the other out of the Red Hat orbit), they built Ubuntu on a rock-solid, dependable, community-maintained Linux distribution: Debian. openSUSE was not yet a thing, and building on SuSE Linux would have tied Ubuntu to the fortunes of SuSE and Novell, which would have been a bad idea. Slackware was long in the tooth, even then. Debian had its own challenges, not the least of which was a clash of cultures between free culture diehards and a group of people starting a for-profit entity around Debian, but it worked for Ubuntu’s purposes. Debian was also a challenge to install, which provided a great opportunity for an upstart like Ubuntu.
    2. Their supply chain was highly efficient, which is directly related to the above. Contrast this to what I’ll say below, but in the case of the base platform they started from, the software supply chain that made up Ubuntu was reliable and something its developers and users could depend on.
    3. They invested in the user experience and community. Ubuntu classified itself, at least back then, as “Linux for humans”, which spoke to the fact that, up until then, using Linux was an esoteric and mistake-prone set of tasks. It was the sort of thing you did in your spare time if you were a CS or EE major looking to construct a new toy. Ubuntu changed all that. They made Linux much easier than any previous desktop Linux initiative. From a supply chain perspective, they did this great UX work as participants in the greater Gnome community. I realize some Gnome folks may blanch at that statement, but by and large, Ubuntu was very much depicted as Gnome people, and they were making contributions to the greater effort.
    4. They scaled globally, from the beginning. It was awe-inspiring to see all the local communities (LoCos, in Ubuntu parlance) spring up around the world dedicated to evangelizing Ubuntu and supporting its users. This happened organically, with Canonical providing support in the form of tooling, broad messaging, and in some cases, on the ground resources. Ubuntu also employed a formidable community team helmed by Jono Bacon, who accelerated Ubuntu’s growth (I once chided Jono on a community managers panel at OSCON about how easy he had it as the Ubuntu community lead. I still chuckle over that). One cannot emphasize enough that when this massive global community kicks into gear, the effect on upstream supply chains is tremendous. A hefty number of these global users and developers also became participants in many of the upstream communities that fed into Ubuntu. It’s one of the great examples of increased participation yielding positive results for everyone in the ecosystem, including Canonical.
    5. They were early adopters of “cloud native” workloads. As Simon Wardley will never let us forget, Canonical bought into cloud-based workloads before any other Linux vendor. It was their work in 2008-2009 that really cemented Ubuntu’s status as *the* primary and default platform for all new cloud and server-based technologies, which continues to this day. Even now, if a new project wants to get early adoption, they release .DEBs on Ubuntu and make sure it builds properly for those Ubuntu users and developers who download the source code. It gives Ubuntu and Canonical an incredible advantage. Again, from a supply chain perspective, this was gold. It meant that the upstream supply chain for cloud native tools was heavily Ubuntu-centric, wrapping it all together with a nice bow.
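    The .deb release path mentioned in point 5 is simple enough to sketch. Everything below is illustrative: the package name, maintainer, and script are made up, and real Ubuntu packaging goes through debhelper and a proper debian/ tree rather than raw dpkg-deb; this is just the minimal shape of a binary package.

```shell
# Illustrative only: a toy binary package built with raw dpkg-deb.
# The package name and maintainer are placeholders.
set -e
mkdir -p hello-osen_1.0-1/DEBIAN hello-osen_1.0-1/usr/bin

# The control file carries the package metadata apt relies on.
cat > hello-osen_1.0-1/DEBIAN/control <<'EOF'
Package: hello-osen
Version: 1.0-1
Section: utils
Priority: optional
Architecture: all
Maintainer: Example Maintainer <maintainer@example.com>
Description: Toy package sketching the .deb release path
EOF

# The payload: one trivial script installed to /usr/bin.
cat > hello-osen_1.0-1/usr/bin/hello-osen <<'EOF'
#!/bin/sh
echo "hello from a .deb"
EOF
chmod 755 hello-osen_1.0-1/usr/bin/hello-osen

# Produces hello-osen_1.0-1.deb in the current directory.
dpkg-deb --build hello-osen_1.0-1
```

    Shipping a file like this (or, better, publishing it through a PPA) is the low-friction on-ramp that made Ubuntu the default target for new projects.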

    Where it Went Pear-shaped

    In writing about everything they got right, I am of course using the rhetorical device of setting up the reader for a barrage of things they got wrong. This is that section. For all of their incredible acumen at scaling out a global community around a Linux distribution, they failed to learn from their supply chain success, and instead started down the path of NIH syndrome. You’ll see lots of critiques elsewhere pertaining to other challenges at Canonical, but I’ll focus on their supply chain strategy, and how it failed them.

    1. Launchpad. The first sign that Canonical was moving away from its established supply chain methodology was the release of Launchpad, a web-based service for developers to create, share, and collaborate on software projects. It also featured an auto-build service and an easy way to release and manage unofficial builds of bleeding edge software: the “Personal Package Archive” or PPA. The service was great for its time and highly ambitious. And when Canonical announced they were open-sourcing it, even better. But there were problems: maintaining a code base for a service as complex as Launchpad is really difficult. Even with an entire company devoted to such a concept, there are still major challenges. There were two ways to deal with that complexity: upstream as much as possible to defray the cost of maintaining the code, and/or create a long-term revenue model around Launchpad to sustain its development. Canonical did neither, landing in the worst of both worlds: Launchpad became a proverbial albatross around the neck, both in terms of technical debt to be maintained solely by the Launchpad team and in the lack of funding for future development. It was the first sign that Canonical was on the wrong track from a business strategy viewpoint. The polished user experience that Ubuntu users came to expect from their software was missing from Launchpad, giving GitHub the opportunity to build something larger.
    2. Juju. It might be premature to write off Juju entirely, but it hasn’t become the force Canonical and Ubuntu intended it to be. Written at a time when Puppet and Chef were the young upstarts, and Ansible was but a gleam in Cobbler’s eye, Juju was Canonical’s answer to the problem of configuration management in the age of cloud. It might have had a better chance if Canonical had been a bit less restrictive about which platforms it supported. Puppet and Chef, for example, were always available on a variety of platforms, whereas Juju was specifically tied to Ubuntu. And while Ubuntu became the de facto standard for building cloud tools, the enterprise was still dominated by Windows, Unix, and RHEL. Developers may have built many tools using Ubuntu, but they deployed in production on other platforms, where Juju was not to be found. If you were an enterprising young devops professional, going with a Juju-only approach meant cutting off your air supply. Because it was Ubuntu-only, and because it was never a core part of the Debian upstream community, the impact made by Juju was limited. Canonical was unable to build a collaborative model with other developer communities, which would have improved the supply chain efficiency, and they weren’t able to use it to add value to a revenue-generating product, because their capacity for generating server revenue was limited. It’s another case of great software hobbled by a poor business strategy.
    3. Unity. If Launchpad and Juju gave observers an inkling that Canonical was going off the rails, the launch of Unity confirmed it. From the beginning, Canonical was always a participant in the Gnome desktop community. This made sense, because Ubuntu had always featured a desktop environment based on Gnome. At some point, Canonical decided they could go faster and farther if they ditched this whole Gnome community thing and went their own way. As with Launchpad and Juju, this makes sense if you’re able to generate enough revenue to sustain development over time with a valid business model. I personally liked Unity, but the decision to go with it over stock Gnome 3 drove an even larger wedge between Ubuntu and the rest of the Linux desktop ecosystem. Once again, Ubuntu packagers and developers were caught in a bubble without the support of an upstream community to stabilize the supply chain. This meant that, once again, Canonical developers were the sole developers and maintainers of the software, further straining an already stretched resource.
    4. Mir. I don’t actually know the origins of Mir. Frankly, it doesn’t matter. What you need to know is this: the open source technology world participated in the Wayland project, whose goal was to build a modern successor to the venerable X.org display server, and Canonical decided to build Mir, instead. The end. Now, Mir and Unity are, for all intents and purposes, mothballed, and Wayland is the clear winner on the desktop front. Supply chains: learn them, live them, love them – or else.
    5. Ubuntu mobile / Ubuntu phone. The mobile space is extremely difficult because the base hardware platforms are always proprietary, as mandated by the large carriers who set the rules for the entire ecosystem. It’s even more difficult to navigate when you’re launching a product that’s not in your area of expertise, and you try to go to market without a strong ecosystem of partners. The iPhone had AT&T in its corner. The Ubuntu phone had… I’m not even sure. Ubuntu phone and the mobile OS that ran on it were DOA, and Canonical should have understood that much sooner than they did.
    6. Ubuntu platform itself. I know, I spent the first half of this article talking up the great success of Ubuntu, but there is one place where it never excelled: it never became a large enough revenue generator to sustain the many other projects under development. There was also never a coherent strategy, product-wise, around what Ubuntu should grow up to become. Was it a cloud platform? Mobile platform? Enterprise server? Developer workstation? And there was never an over-arching strategy with respect to the complementary projects built on top of Ubuntu. There was never a collective set of tools designed to create the world’s greatest cloud platform. Or enterprise server. Or any of the other choices. Canonical tried to make Ubuntu the pathway to any number of destinations, but without the product discipline to “just say no” at the appropriate time.
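    Point 1 above mentioned PPAs in passing; for readers who never used Launchpad, the PPA workflow it pioneered is still how much bleeding edge software reaches Ubuntu users today, and it shows what Launchpad got right as a supply chain tool. A typical consumption flow looks like this (the PPA and package names are made-up placeholders, and the commands assume an Ubuntu machine):

```shell
# Subscribing to a hypothetical PPA on Ubuntu.
# "someuser/some-tool" is a placeholder, not a real archive.
sudo add-apt-repository ppa:someuser/some-tool

# Refresh package indexes so apt sees the new archive.
sudo apt-get update

# Install the bleeding edge build straight from the PPA.
sudo apt-get install some-tool
```

    Three commands to get nightly builds of someone’s pet project, with signed packages and automatic updates; that ease of distribution is a big part of why the cloud-tools supply chain coalesced around Ubuntu.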

    I get no joy from listing the failings of Canonical. I remain a great fan of what they accomplished on the community front, which, as far as I can tell, is without parallel. Not many companies can claim with any credibility that they fostered a massive, global community of users and developers that numbered in the millions and covered every country and continent on the planet, driven by organic growth and pursued with a religious zeal. That is no small feat and should be celebrated. My hope is that this is what Ubuntu, Canonical, and yes, Mark Shuttleworth, are known for, and not for any business shortcomings.

    I’m not suggesting that a company cannot be successful without building out an upstream supply chain – there are far too many counter-examples to claim that. What I am suggesting is that if you have limited resources, and you choose to build out so many products, you’re going to need the leverage that comes from massive global participation. If Canonical had chosen to focus on just one of the above initiatives, you could argue that supply chain leverage would not have been as important. I will note, for the record, that none of the challenges listed above are related to the fact that they were open source. Rather, to sustain their development, they needed much broader adoption. In order to sustain that model, they would have had to create a successful product with rapidly growing revenue, which never came. The lesson: if you want more control over your software products, you need an accompanying product strategy that supports it. If I were at Canonical, I would have pushed for a much more aggressive upstream strategy to get more benefits from broader open source participation.

  • An Open Letter to Docker About Moby

    Congratulations, Docker. You’ve taken the advice of many and gone down the path of Fedora / RHEL. Welcome to the world of upstream/downstream product management, with community participation a core component of supply chain management. You’ve also unleashed a clever governance hack that cements your container technology as the property of Docker, rather than letting other vendors define it as an upstream technology for everyone, much like Red Hat used Fedora to stake its claim as owner of an upstream community. I’ll bet the response to this was super positive, and everyone understood your intent perfectly! Oh…

    So yes, the comparison to Fedora/RHEL is spot on, but you should also remember something from that experiment: at first, everyone *hated* it. The general take from the extended Linux community at the time was that Red Hat was abandoning community Linux in an effort to become “the Microsoft of Linux”. Remember, this level of dissatisfaction is why CentOS exists today. And the Fedora community rollout didn’t exactly win any awards for precise execution. At first, there was “Fedora Core”, and it was quite limited and not a smashing success. This was one of the reasons that Ubuntu became as successful as it did, because they were able to capitalize on the suboptimal Fedora introduction. Over time, however, Red Hat continued to invest in Fedora as a strategic community brand, and it became a valuable staging ground for leading edge technologies from the upstream open source world, much like Moby could be a staging area for Docker.

    But here’s the thing, Docker: you need to learn from previous mistakes and get this right. By waiting so long to make this move, you’ve increased the level of risk you’re taking on, which could have been avoided. If you get it wrong, you’re going to see a lot of community pressure to fork Moby or Docker and create another community distribution outside of your sphere of influence. The Fedora effort frittered away a lot of good will from the Linux community by not creating an easy to use, out of the box distribution at first. And the energy from that disenchantment went to Ubuntu, leaving Fedora in a position to play catch-up. That Red Hat was able to recover and build a substantial base of RHEL customers is a testament to their ability to execute on product management. However, Ubuntu was able to become the de facto developer platform for the cloud by capitalizing on Fedora’s missteps, putting them on the inside track for new cloud, IoT, and container technologies over the years. My point is this: missteps in either strategy or execution have a large and lasting impact.

    So listen up, Docker. You need to dedicate tremendous resources right now to the Moby effort to make sure that it’s easy to use, navigate, and most importantly, ensure that your community understands its purpose. Secondly, and almost as importantly, you need to clearly communicate your intent around Docker CE and EE. There is no room for confusion around the difference between Moby and Docker *E. Don’t be surprised if you see a CentOS equivalent to Docker CE and/or EE soon, even though you’re probably trying to prevent that with a freely available commercial offering. Don’t worry, it will only prove your model, not undermine it, because no one can do Docker better than Docker. Take that last bit to heart, because far too many companies have failed because they feared the success of their free stuff. In this case, that would be a colossal unforced error.


  • “Every evangelist of yesteryear is now a Community Manager ….”

    This post first appeared on Medium. It is reprinted here with permission. 

    OH: “Every evangelist of yesteryear is now a Community Manager … at least on their biz card.”

    This statement best captures a question that comes up regularly in the open source community world when you have corporations involved. Does your community manager report to engineering or marketing? It captures a number of assumptions quite nicely.

    First, the concept of a “community manager” does imply a certain amount of corporate structure, regardless of whether it’s for profit or some form of non-profit. If the open source licensed project is in the wild, then it probably doesn’t have the size and adoption to require people with such titles. Such well-run project communities are self-organizing. As they grow and there are more things to do than vet code and commit, they accommodate new roles. They may even form council-like sub-organizations (e.g. documentation). But for the most part, the work is in-the-small, and it’s organic. The structure of well-run “pre-corporate” projects is in process discipline around such work as contributions, issue tracking, and build management.

    When projects grow and evolve to the point that companies want to get involved, using the project software in their products and services, then the project “property” needs to be protected differently. The software project already has a copyright (because it’s the law) and is covered by an open source license (because that social contract enables the collaboration), but trademarks and provenance can quickly become important. Companies have different risk profiles. A solution to such corporate concerns can be to wrap a non-profit structure around the project. This can mean the project chooses to join an existing foundation like the Apache Software Foundation or the Eclipse Foundation, or it could form its own foundation (e.g. the Gnome Foundation). In return for the perceived added overhead to the original community, it enables company employees to more actively contribute. (The code base for the Apache httpd project doubled in the first few months after the ASF formed.)

    A community manager implies administrative structure and discipline around coordinating people for growth, rather than the software construction discipline the early project required for its growth. But a foundation often brings the sort of administrative structure for events and communications such that folks in the project (or projects) still don’t have a title of “community manager.”

    Community managers are a corporate thing. And I believe they start showing up when either a project becomes a company (e.g. Apache Mesos into Mesosphere), a company wants to create a project or turn a product into a project (e.g. Hewlett Packard Enterprise with OpenSwitch), or a company wants to create a distribution of a project (e.g. Canonical with Ubuntu, Red Hat with Fedora, Cloudera with Apache Hadoop). And this is implied in the original statement about “biz cards” and questions of marketing versus engineering.

    Software companies have long understood developer networks. MSDN, the Oracle Developer Network, and the IBM Developer Network have been around for decades. They are broadcast communities carrying marketing messages to the faithful. They are run by Developer Evangelists and Developer Advocates. MVP programs were created to identify and support non-employees acting in an evangelism role. These networks are product marketing programs. They tightly control access to product engineers, who are allowed to appear at conferences and encouraged to write blog posts. These networks are the antithesis of the conversation that is a high-functioning open source community.

    I believe companies with long histories building developer networks (or employees that have such experience at new companies) make the mistake of thinking open source “community managers” belong in product marketing. They are probably using the wrong metrics to measure and understand (and hopefully support) their communities. They are falling into the classic trap of confusing community with customer, and project with product.

    Liberally licensed, collaboratively developed software (i.e. open source) is an engineering economic imperative. Because of that reality, I believe the community management/enablement role belongs in engineering. If a company is enlightened enough to have a product management team that is engineering-focused (not marketing-focused), then the community manager fits into that team as well.

    This is a working theory for me, consistent with the successes and failures I’ve observed over the past 25 years. I would love folks’ feedback, sharing their experiences or expanding upon my observations.

  • Why Project Moby is a Brilliant Move by Docker

    On Tuesday, Solomon Hykes, Docker’s CTO and co-founder, unleashed the Moby Project on the world. I’ll admit I didn’t fully grasp its significance at first. This might have something to do with being on vacation in Cape Cod and not being at DockerCon, but I digress. It wasn’t until I read a Twitter thread from Kelsey Hightower that something clicked.

    And then it dawned on me – Docker was taking a page out of the Red Hat playbook and fully embracing the upstream supply chain model. In 2003, Red Hat decided it needed to focus on enterprise subscriptions and moved away from its free commercial Linux, the venerable Red Hat Linux. In its place, Red Hat created Red Hat Enterprise Linux (RHEL), and then a funny thing happened: its employees rebelled and created the Fedora community and project, designed to be a community Linux distribution. This turned out to be a brilliant move. Forward looking technology and innovation happened in the Fedora community, and then it went through a series of hardening, polish, integration with other Red Hat platforms and bug fixes before being released under the RHEL brand. The more complex Red Hat’s product offerings became, the more valuable this model proved.

    [Diagram: Red Hat product supply chain]

    The container ecosystem shares much with the Linux ecosystem, because that’s where it came from. One of the criticisms of Docker, much like Red Hat, is that they’re “trying to control the entire ecosystem”. I may have uttered that phrase from time to time, under my breath. The Moby Project, in my opinion only, is a direct response to that. As Solomon mentioned in his blog announcement:

    In order to be able to build and ship these specialized editions in a relatively short time, with small teams, in a scalable way, without having to reinvent the wheel, it was clear we needed a new approach.

    Yes, any successful ecosystem becomes extremely difficult to manage over time, which is why you end up giving up control, without giving up your value proposition. This is also probably why you’ve seen Docker become more engaged on the CNCF front and why they drove the OCI formation. As David Nalley likes to say, this is the “hang on loosely, but don’t let go” approach to community-building.

    There’s also the branding and trademark benefit. Just as with Fedora and RHEL, separating the branding streams now means that community-minded people know where to go: Project Moby. And prospective customers and partners also know where to go: Docker.com. It’s a great way to let your audiences self-select.

    Docker decided to take the next step and embrace the open source supply chain model. This is a good thing.

  • How Silicon Valley Ruined Open Source Business

    Back in the early days of open source software, we were constantly looking for milestones to indicate how far we had progressed. Major vendor support: check (Oracle and IBM in 1998). An open source IPO: check (Red Hat and VA Linux in 1999). Major trade show: check (LinuxWorld in 1999). A venture-backed startup category: check (beginning with Cygnus, Sendmail, VA, and Red Hat in the late 90’s, followed by a slew of others, especially after the dot bomb faded after 2003). And then, of course, our first VC superstar: check (Peter Fenton). Unfortunately, VC involvement came with a hefty price.

    Remember, this was a world where CNBC pundits openly questioned how Red Hat could make money after they “gave away all their IP”. (Spoiler alert: they didn’t. That’s why it’s so funny.) So when venture capitalists got into the game, they started inflicting poor decisions on startup founders, mostly centered on the following conceits:

    1. OMG you’re giving away your IP. You have to hold some back! How do you expect to make money?
    2. “Here’s a nice business plan I just got from my pal who’s from Wharton. He came up with this brilliant idea: we call it ‘Open Core’!”
    3. Build a community – so we can upsell them!
    4. Freeloaders are useless.

    A VC’s view of open source at the time was simplistic, limited mostly to the notion that a vendor should be able to recoup costs by charging for an enterprise product built on the backs of the many stooges dumb enough to take the free “community” product. In this view, a community is mostly a marketing ploy designed for a captive audience that had nowhere else to go. For many reasons, this is the view that the VC community embraced in the mid-2000’s. My hypothesis: when it didn’t work, it soured the relationship between investors and open source, which manifests itself in lost opportunities to this day.

    What should have been obvious from the beginning – source code is not product – has only recently begun to get airplay. Instead, we’ve been forced to endure a barrage of bad diagnoses of failures and bad advice for startup founders. It’s so bad that even our open source business heroes don’t think you can fully embrace open source if you want to make money. The following is from Mike Olson, mostly known for his business acumen with Cloudera and Sleepycat:

    The list of successful stand-alone open source vendors that emerged over that period is easy to write, because the only company on it is Red Hat. The rest have failed to scale or been swallowed.

    …The moral of that story is that it’s pretty hard to build a successful, stand-alone open source company. Notably, no support- or services-only business model has ever made the cut.

    As I have mentioned early and often, as has Stephen Walli, a project is not a product, and vice-versa, and it’s this conflation of the two that is a profound disservice to startups, developers, and yes, investors. Here’s the bottom line: you want to make money in a tech startup? Make a winning solution that offers value to customers for a price. This applies whether you’re talking about an open source, proprietary, or hybrid solution. This is hard to do, regardless of how you make the sausage. Mike Olson is a standup guy, and I hope he doesn’t take this personally, but he’s wrong. It’s not that “it’s pretty hard to build a successful, stand-alone open source company.” Rather, it’s hard to build a successful stand-alone company in *any* context. But for some reason, we notice the open source failures more than the others.

    The failures are particularly notable for how they failed, and how little has been done to diagnose what went wrong, other than “They gave away all their IP!” In the vast majority of cases, these startups were poorly received because they either a.) had a terrible product or b.) constrained their community to the point of cutting off their own air supply. There were, of course, notable exceptions. While it wasn’t my idea of the best way to do it, MySQL turned out pretty well, all things considered. The point is, don’t judge startups based on their development model; judge them on whether they have a compelling offering that customers want.

    While investors in 2017 are smarter than their 2005 cousins, they still have blinders when it comes to open source. They have distanced themselves from the open core pretenders, but in the process, they’ve also distanced themselves from potential Red Hats. Part of this is due to an overall industry trend towards *aaS and cloud-based services, but even so, any kind of emphasis on pure open source product development is strongly discouraged. If I’m a *aaS-based startup today and I approach an investor, I’m not going to lead off with, “and we’re pushing all of our code that runs our internal services to a publicly-accessible GitHub account!” Unless, of course, I wanted to see ghastly reactions.

    This seems like a missed opportunity: if ever there was a time to safely engage in open source development models while still maintaining your product development speed and agility, using a *aaS model is a great way to do it. After all, winning at *aaS means winning at superior operations and uptime, which has zilch to do with source code. And yet, we’re seeing the opposite: most of the startups that do full open source development are the ones that release a “physical software download,” while the SaaS startups run away scared, despite the leverage a SaaS play could gain by going full-throttle on open source development.

    It’s gotten to the point that when I advise startups, I tell them not to emphasize their open source development, unless they’re specifically asked about it. Why bother? They’re just going to be subjected to a few variations on the theme of “but how are you going to get paid?” Better to focus on your solution, how it wins against the competition, and how customers are falling over themselves to get it. Perhaps that’s how it always should have been, but it strikes me as odd that your choice of development model, which is only indirectly and tangentially related to how you move product, should be such a determining factor in whether your startup gets funded, even more so than whether you have a potentially winning solution. It’s almost as if there’s an “open source tax” that investors factor into any funding decision involving open source. As in, “sure, that’s a nice product, but ooooh, I have to factor in the ‘open source tax’ I’ll have to pay because I can’t get maximum revenue extraction.”

    There are, of course, notable exceptions. I can think of a few venture-backed startups with an open source focus. But often the simple case is ignored in favor of some more complex scheme with less of a chance for success. Red Hat has a very simple sales model: leverage its brand and sell subscriptions to its software products. The end. Now compare that to how some startups today position their products and try to sell solutions. It seems, at least from afar, overly complicated. I suspect this is because, as part of their funding terms, they’re trying to overcome their investors’ perceptions of the open source tax. Sure, you can try to build the world’s greatest software supply house and delivery mechanism, capitalizing on every layer of tooling built on top of the base platform. Or you can, you know, sell subscriptions to your certified platform. One is a home run attempt. The other is a base hit. Guess which way investors have pushed?

  • Ask Not What Your Community Can Do For You

    This post first appeared on Medium. It is reprinted here with permission. 

    During his inaugural address on Jan. 20, 1961, U.S. President John F. Kennedy issued the challenge, “And so, my fellow Americans: ask not what your country can do for you — ask what you can do for your country.” On its face, it was a challenge to citizens to contribute to the public good. But I think there’s a bigger message here: we have to work together as a society to improve our collective state. And because of that, I think the statement has a lot to tell us about community building in general, and about open source communities in particular.

    Publishing software with an open source license is the definitive step of creating an open source project. The creation of such collaborative licensing is Richard Stallman’s brilliant hack on the system of copyright law, back in the early 1980s. Because software is covered by copyright, a user of the software must honor that copyright by adhering to its license or otherwise asking for permission; the hack is to write licenses that say, “do whatever you will with this software, while honoring these clauses in return.” It is the perfect hack on a software copyright system that would otherwise collapse under its own weight when the dynamic nature of software encountered the friction-free publishing channel of the Internet. But while publishing software this way is the definitive step of creating an open source project, it doesn’t create a community.

    While many societal norms and interactions are imposed by the choice of where you live, from the country down to your street address, on the Internet those choices and interactions play out differently. Choosing an online community carries far less economic friction than choosing one in the world of bricks and mortar. So does leaving one. As an online community, you need to attract folks differently if they’re going to engage. You need to make it easy for the community to do the things they want to do. They aren’t coming to help you build your community, at least not at first. They’re coming because the project (the neighborhood) is relevant to their own needs and goals.

    And there are three sorts of folks that join your community.

    • There are the folks that just want to live there using your software.
    • There are the folks living there that are happy to help in small ways, letting you know where potholes are forming that they almost hit every day, or about a vacant lot that’s unsafe.
    • And there are the folks living there, that let you know about that vacant lot, and that are happy to help clean it up, contribute a park bench, or organize the neighborhood party to celebrate after the cleanup is done.

    And you need to make it easy in three different ways for those three sorts of folks in your community, because the groups are nested inside one another. People don’t build parks in vacant lots where they don’t live. They’re very unlikely to notice the potholes in a meaningful way if they don’t live on the street. They won’t move into your neighborhood in the first place if it’s cluttered, unorganized, and doesn’t provide any guidance for where the schools and shops are, or when garbage collection happens and how recycling works there. And they certainly don’t move into neighborhoods in which they can’t afford the costs (in time or money).

    Growing and scaling a successful open source software project requires building three on-ramps: first for users, then for developers, and ultimately for contributors.

    If it is not blindingly easy to download the software, install it to a known state, and use it in some meaningful way, you will not encourage a growing set of neighbors. If the developers in that group of neighbors can’t easily build the software to a known state so they can effect their own changes, they will look elsewhere for easier-to-use solutions. They will move out and move on. If you don’t make it easy to contribute those developers’ changes back to the project, it becomes a permanently growing collection of expensive forks that doesn’t harden the way good software projects do.
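    The three on-ramps are usually signposted by files at the root of a project’s repository. As a rough illustration only (the role-to-file mapping and the `check_onramps` helper are my own, not anything the post prescribes), a quick self-audit might look like:

    ```python
    # Hypothetical on-ramp audit -- the mapping below is illustrative, not a
    # standard: a README for users, a Makefile (or build docs) for developers,
    # and a CONTRIBUTING guide for would-be contributors.
    from pathlib import Path

    ONRAMPS = {
        "user": "README.md",               # download, install, and use
        "developer": "Makefile",           # build to a known state
        "contributor": "CONTRIBUTING.md",  # send changes back upstream
    }

    def check_onramps(repo_root="."):
        """Return which of the three on-ramps a project has signposted."""
        root = Path(repo_root)
        return {role: (root / name).exists() for role, name in ONRAMPS.items()}

    if __name__ == "__main__":
        for role, present in check_onramps().items():
            print(f"{role:12} {'ok' if present else 'missing'}")
    ```

    The specific files matter less than the principle: each nested group should find its first steps obvious within minutes of arriving.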

    In the 1990s, we had a ten-minute rule of thumb for packaged PC software: from pulling the shrink-wrap off the box to doing something meaningful. When you download an app to your phone, how long do you work at it until it behaves as advertised or you abandon it? The same sort of thinking needs to apply to your software project’s users. So too for the developers in that group of users who will want to adapt the project to their own needs. So too for your eventual contributors out of that group of developers.

    Well-run, successful open source software communities make the costs of using, developing, and contributing easy to bear, and they work to continue making it easier. Publishing a piece of software on the Internet with an open source license is easy. Growing a community takes work, but the value in the hardened, evolved software for all is easy to see in successful communities. So ask not what your community can do for you; ask what you can do for your community.

  • Updated: OSEN Meetup in Cambridge, MA 4/25

    Update: Red Hat is sponsoring food and drinks, and the CIC is sponsoring meeting space. Our agenda is as follows:

    • 6pm – Introductions, food and drinks
    • 6:30pm – Open Source Business Models, Dave Neary, Red Hat
    • 7pm – From Project to Product, John Mark Walker, Dell EMC and OSEN
    • 7:30pm – Learning from the OpenNebula Example, Ignacio Llorente, OpenNebula

    We just formed a meetup group for the Boston, MA area – read about it at meetup.com.

    So what’s this about? In this first meetup, we’ll talk about taking code from project to product – what do you need to know when undergoing this journey?

    Who should attend:

    • CIOs/CTOs who need to understand the dynamics of open source ecosystems and their impact on supply chains

    • Product managers who want to understand modern methods for product development

    • Founders who have a great idea and need to efficiently build on platforms of innovation

    • Investors who want to understand what to look for in their startup portfolio

    • Anyone, anywhere, who wants to take an open source project and build a solution