Tag: business models

  • The New Open Source Playbook – Platforms Part Deux

    (This is the 2nd post in a series. Part 1 is here)

    I was all set to make this 2nd post about open core and innovation on the edge, and then I realized that I should probably explore the concept of “lift” in a bit more detail. Specifically, if you’re looking for your platform strategy to give your technology products lift, what does that mean exactly? This goes back to the idea that a rising tide lifts all boats. If you think of a rising tide as a growing community of users or developers, and the boat is your particular software project, then you want a strategy where your project benefits from a larger community. A dynamic, growing community will be able to support several “boats” – products, projects, platforms, et al. A good example of this is the Kubernetes community, which is the flagship project of the Cloud Native Computing Foundation (CNCF).

    How Do We Generate Lift?

    There are 2 basic types of lift you will be looking for – user lift, or getting more people to adopt your platform, and developer lift, where more developers are contributing to your platform. The former gets more people familiar with your particular technology, providing the basis for potential future customers, and the latter allows you to reduce your engineering cost and potentially benefit from new ideas that you didn’t think of. This means that the community or ecosystem you align with depends on the goals for your platform. If you want more users, that is a very different community strategy from wanting more collaborators. Many startups conflate these strategies, which means they don’t always get the results they’re looking for.

    Let’s assume that you have a potential platform that is categorized in the same cloud native space as Kubernetes. And let’s assume that you’ve determined that the best strategy to maximize your impact is to open source your platform. Does that mean you should put your project in the CNCF? It depends! Let’s assume that your product will target infosec professionals, and you want to get feedback on usage patterns for common security use cases. In that case, the Kubernetes or CNCF communities may not be the best fit. If you want security professionals getting familiar with and adopting your platform, you may want to consider security-focused communities, such as those that have formed around SBOM, compliance, and scanning projects. Or perhaps you do want to see how devops or cloud computing professionals would use your platform to improve their security risk, in which case Kubernetes or CNCF make sense. Your target audience will determine what community is the best fit.

    Another scenario: let’s assume that your platform is adjacent to Kubernetes and you think it’s a good candidate for collaboration with multiple entities with a vested interest in your project’s success. In that case, you need developers with working knowledge of Kubernetes architecture, and the Kubernetes community is definitely where you want your project to be incubated. It’s not always so straightforward, however. If you’re primarily looking for developers who will extend your platform, making use of your interfaces and APIs, then perhaps it doesn’t matter if they have working knowledge of Kubernetes. Maybe in this case, you would do well to understand developer use cases and which vertical markets or industries your platform appeals to, and then follow a different community trail. Platform-community fit for your developer strategy is a more nuanced decision than product-market fit. The former is much more multi-dimensional than the latter.

    If you have decided that developers are key to your platform strategy, you have to decide what kind of developers you’re looking for: those that will *extend* your platform; those that will contribute to your core platform; or those that will use or embed your platform. That will determine the type of lift you need and what community(ies) to align with.

    One more example: You’re creating a platform that you believe will transform the cybersecurity industry, and you want developers that will use and extend your platform. You may at first be attracted to security-focused communities, but then you discover a curious thing: cybersecurity professionals don’t seem fond of your platform and haven’t adopted it at the scale you expect or need. Does this mean your platform sucks? Not always – it could be that these professionals are highly opinionated and have already made up their minds about desired platforms to base their efforts on. However, it turns out that your platform helps enterprise developers be more secure. Furthermore, you notice that within your enterprise developer community, there is overlap with the PyTorch community, which is not cybersecurity focused. This could be an opportunity to pivot on your adoption strategy and go where your community is leading: PyTorch. Perhaps that is a more ideal destination for community alignment purposes. You can, however, do some testing within the PyTorch community before making a final decision.

    Learn From My Example: Hyperic

    Hyperic was a systems management monitoring tool. These days we would put it in the “observability” category, but that term didn’t exist at the time (2006). The Hyperic platform was great for monitoring Java applications. It was open core, so we focused on adoption by enterprise developers and not contributions. We thought we had a great execution strategy to build a global user base that would use Hyperic as the basis for all of their general purpose application monitoring needs. From a community strategy perspective, we wanted Hyperic to be ubiquitous, used in every data center where applications were deployed and managed. We had a great tagline, too: “All Systems Go”. But there was a problem: although Hyperic could be used to monitor any compute instance, it really shined when used with Java applications. Focusing on general systems management put us in the same bucket, product-wise, as other general use systems management tools, none of which could differentiate themselves from the others. If we had decided to place more of our community focus on Java developers, we could have ignored all of the general purpose monitoring and focused on delivering great value for our core audience: Java development communities. Our platform-community fit wasn’t aligned properly, and as a result, we did not get the lift we were expecting. This meant that our sales team had to work harder to find opportunities, putting a drag on our revenue and overall momentum. Lesson learned…

    When attempting a platform execution strategy, and you’re going the open source route, platform-community fit is paramount. Without it, you won’t get the lift you’re expecting. You can always change up your community alignment strategy later, but it’s obviously better if you get it right the first time.

  • The New Open Source Playbook

    (This is the first in a series)

    For the last few years, the world of commercial open source has been largely dormant, with few startup companies making a splash with new open source products. Or if companies did make a splash, it was for the wrong reasons – see, e.g., HashiCorp’s Terraform rug pull. It got to the point that Jeff Geerling declared that “Corporate Open Source is Dead”, and honestly, I would have agreed with him. It seemed that the age of startups pushing new open source projects and building a business around them was a thing of the past. To be clear, I always thought that it was naive to think that you could simply charge money for a rebuild of open source software, but the fact that startups were always trying showed that there was momentum behind the idea of using open source to build a business.

    And then a funny thing happened – a whole lot of new energy (and money) started flowing into new nascent companies looking to make a mark in… stop me if you’ve heard this one… generative AI. Or to put it in other words, some combination of agents built on LLMs that attempted to solve some automation problem, usually in the category of software development or delivery. It turns out that when there’s lots of competition for users, especially when those users are themselves developers, a solid open source strategy can make the difference between surviving and thriving. In light of this newfound enthusiasm for open source and startups, I thought I’d write a handy guide for startups looking to incorporate open source strategy into their developer go to market playbook. Except in this version, I will incorporate nuances specific to our emerging agentic world.

    To start down this path, I recommend that startup founders look at 3 layers of open source go to market strategy: platform ecosystem (stuff you co-develop), open core (stuff you give away but keep IP), and product focus (stuff you only allow paying customers to use). That last category, product focus, can be on-prem, cloud hosted, or SaaS services – it won’t matter, ultimately. Remember, this is about how to create compelling products that people will pay for, helping you establish a business. There are ways to use open source principles that can help you reach that goal, but proceed carefully. You can derail your product strategy by making the wrong choices.

    Foundation: the Platform Ecosystem Play

    When thinking about open source strategy, many founders thought they could release open source code and get other developers to work on their code for free as a new model of outsourcing. This almost never works as the startup founders imagined. What does end up happening is that a startup releases open source code and their target audience happily uses the code for free, often not contributing back, causing a number of startups to question why they went down the open source path to begin with. Don’t be like them.

    The way to think of this is within the concept of engineering economics. What is the most efficient means to produce the foundational parts of your software?

    • If the answer is by basing your platform on existing open source projects, then you figure out how to do that while protecting your intellectual property. This usually means focusing on communities and projects under the auspices of a neutral 3rd party, such as the Eclipse or Linux Foundation.
    • If the answer is by creating a new open source platform that you expect to attract significant interest from other technology entities, then you test product-market fit with prospective collaborators and organizations with a vested interest in your project. Note: this is a risky strategy requiring a thoughtful approach and ruthless honesty about your prospects. The most successful examples of this, such as Kubernetes, showed strong demand from the outset and their creation was a result of market pull, not a push.
    • If the answer is that you don’t need external developers contributing to your core platform, but you do need end users and data on product-market fit, then you look into either an open core approach, or you create a free product that gives the platform away for free but not necessarily under an open source license. This is usually for the cases where you need developers to use or embed your product, but you don’t need them contributing directly. This is the “innovation on the edge” approach.
    • Or, if the answer is that you’ll make better progress by going it alone, then you do that and you don’t give it a 2nd thought. The goal is to use the most efficient means to produce your platform or foundational software, not score points on hacker news.

    Many startups through the years have been tripped up by this step, misguidedly believing that their foundational software was so great that once they released it, thousands of developers would fall over each other to contribute to a project.

    In the world of LLMs and generative AI, there is an additional consideration: do you absolutely need the latest models from Google, OpenAI, or elsewhere, or can you get by with slightly older models less constrained by usage restrictions? Can you use your own training and weights with off-the-shelf open source models? If you’re building a product that relies on agentic workflows, you’ll have to consider end user needs and preferences, but you’ll also have to protect yourself from downstream usage constraints, which could hit you if you reach certain thresholds of popularity. When starting out, I wholeheartedly recommend having as few constraints as possible, opting for open source models wherever you can, but also giving your end users the choice if they have existing accounts with larger providers. This is where it helps to have a platform approach that helps you address product-ecosystem fit as early as possible. If you can build momentum while architecting your platform around open source models and model orchestration tools, your would-be platform contributors will let you know that early on. Having an open source platform approach will help you guide your development in the right direction. Building your platform or product foundation around an existing open source project will be even more insightful, because that community will likely already have established AI preferences, helping make the decision for you.

    To summarize, find the ecosystem that best fits your goals and product plans and try to build your platform strategy within a community in that ecosystem, preferably on an existing project; barring that, create your own open source platform but maintain close proximity to adjacent communities and ecosystems, looking for lift from common users that will help determine platform-ecosystem fit; or build an open core platform, preferably with a set of potential users from an existing community or ecosystem who will innovate on the edge, using your APIs and interfaces; if none of those apply, build your own free-to-use proprietary platform but maintain a line-of-sight to platform-ecosystem fit. No matter how you choose to build or shape a platform, you will need actual users to provide lift for your overall product strategy. You can get that lift from core contributors, innovators on the edge, or adoption from your target audience, or some combination of these. How you do that depends on your needs and the expectations of your target audience.

    Up Next: open core on the edge and free products.

  • Transform Your Business with Open Source Entrepreneurship


    This is a webinar I did for the Linux Foundation earlier this month. If you missed it, you can catch it on demand!


  • Kite Demonstrates Continuing Toxicity of Silicon Valley

    One of the most frustrating parts of being in open source circles is battling the conventional wisdom in the Valley that open source is just another way to do marketing. It’s complicated by the fact that being a strong open source participant can greatly aid marketing efforts, so it’s not as if marketing activities are completely unrelated to open source processes. But then something happens that so aptly demonstrates what we mean when we say that Silicon Valley has largely been a poisonous partner for open source efforts. Which brings me to this week’s brouhaha around a silly valley startup looking to “Make money fast!” by glomming onto the success of open source projects.

    To quote from the article:

    After being hired by Kite, @abe33 made an update to Minimap. The update was titled “Implement Kite promotion,” and it appeared to look at a user’s code and insert links to related pages on Kite’s website. Kite called this a useful feature. Programmers said it was not useful and was therefore just an ad for an unrelated service, something many programmers would consider a violation of the open-source spirit.

    It’s the “stealing underpants” business model all over again.

    1. Get users and “move the needle”
    2. ?
    3. Profit!

    Step 1 above is why we actually have valley poseurs who unironically refer to themselves as “growth hackers.” Only in the valley.

    The really sad part of this is that the methodology outlined above is terrible, not just because it’s unethical, but because it’s counterproductive to what Kite wants to accomplish. As I’ve mentioned countless times before, a project is not a product, and trying to turn it into one kills the project. The best way to make money on open source is to, big surprise, make a great product that incorporates it in a way that adds value to the customer. In this example, this means taking projects like minimap and autocomplete-python, producing commercial versions of them, and making them part of an existing product or offering them up as separate downloads – from the company site or part of a commercial distribution.

    The worst part of all this is there are still investors and business folks who think that doing what Kite did is the only way to make money from an open source project. It’s not. It’s a terrible maneuver from both an ethics and a product development standpoint. It’s once again conflating open source with marketing, which is one of the reasons I started this site – it’s an unforced error and should be part of any “open source product 101” curriculum.

  • Bombshell Report: 200% ROI on Open Source Participation


    From the World Bank Data Blog: in a stunning report from the OpenDRI, an initiative sponsored by the Global Facility for Disaster Reduction and Recovery (GFDRR), researchers looked into whether it was possible to quantify the benefit of contributing to and participating in open source communities and found the conservative estimate to be on the order of 200% ROI. This is astounding. We in the open source world have often wondered how, exactly, a business can benefit from participating in upstream open source communities.

    On one hand, I don’t want to overplay the importance of ROI, because there are many tangible and intangible benefits from open source participation, which should not be downplayed. On the other hand, if there’s an appreciable, direct ROI to open source participation, this is something to make some noise about.

    The Data Blog from the World Bank has a nice summary of the research. It discusses this specific report that covers 7 years of development on the GeoNode software project. There’s also a nice primer on open source best practices for those new to the subject matter.

  • “Every evangelist of yesteryear is now a Community Manager ….”

    This post first appeared on Medium. It is reprinted here with permission. 

    OH: “Every evangelist of yesteryear is now a Community Manager … at least on their biz card.”

    This statement best captures a question that comes up regularly in the open source community world when you have corporations involved. Does your community manager report to engineering or marketing? It captures a number of assumptions quite nicely.

    First, the concept of a “community manager” does imply a certain amount of corporate structure regardless of whether it’s for profit or some form of non-profit. If the open source licensed project is in the wild then it probably doesn’t have the size and adoption to require people with such titles. Such well-run project communities are self-organizing. As they grow and there are more things to do than vet code and commit, they accommodate new roles. They may even form council-like sub-organizations (e.g. documentation). But for the most part, the work is in-the-small, and it’s organic. The structure of well run “pre-corporate” projects is in process discipline around such work as contributions, issue tracking, and build management.

    When projects grow and evolve to the point that companies want to get involved, using the project software in their products and services, then the project “property” needs to be protected differently. The software project already has a copyright (because it’s the law) and is covered by an open source license (because that social contract enables the collaboration), but trademarks and provenance can quickly become important. Companies have different risk profiles. A solution to such corporate concerns can be to wrap a non-profit structure around the project. This can mean the project chooses to join an existing foundation like the Apache Software Foundation or the Eclipse Foundation, or it could form its own foundation (e.g. the Gnome Foundation). In return for the perceived added overhead to the original community, it enables company employees to more actively contribute. (The code base for the Apache httpd project doubled in the first few months after the ASF formed.)

    A community manager implies more administrative structure and discipline around people coordination for growth, than the necessary software construction discipline that the early project required for growth. But a foundation often brings the sort of administrative structure for events and communications such that folks in the project (or projects) still don’t have a title of “community manager.”

    Community managers are a corporate thing. And I believe they start showing up when either a project becomes a company (e.g. Apache Mesos into Mesosphere), a company wants to create a project or turn a product into a project (e.g. Hewlett Packard Enterprise with OpenSwitch), or a company wants to create a distribution of a project (e.g. Canonical with Ubuntu, Red Hat with Fedora, Cloudera with Apache Hadoop). And this is implied in the original statement about “biz cards” and questions of marketing versus engineering.

    Software companies have long understood developer networks. MSDN, the Oracle Developer Network, and the IBM Developer Network have been around for decades. They are broadcast communities carrying marketing messages to the faithful. They were run by Developer Evangelists and Developer Advocates. MVP programs were created to identify and support non-employees acting in an evangelism role. These networks are product marketing programs. They tightly control access to product engineers, who are allowed to appear at conferences and encouraged to write blog posts. These networks are the antithesis of the conversation that is a high functioning open source community.

    I believe companies with long histories building developer networks (or employees that have such experience at new companies) make the mistake of thinking open source “community managers” belong in product marketing. They are probably using the wrong metrics to measure and understand (and hopefully support) their communities. They are falling into the classic trap of confusing community with customer, and project with product.

    Liberally-licensed, collaboratively-developed software (i.e., open source) is an engineering economic imperative. Because of that reality, I believe the community management/enablement role belongs in engineering. If a company is enlightened enough to have a product management team that is engineering focused (not marketing focused), then the community manager fits into that team as well.

    This is a working theory for me, consistent with the successes and failures I’ve observed over the past 25 years. I would love folks’ feedback, sharing their experiences or expanding upon my observations.

  • There is NO Open Source Business Model

    Note: the following was first published on medium.com by Stephen Walli. It is reprinted here with his permission.

    Preface: It has been brought to my attention by friends and trusted advisors that a valid interpretation of my point below is that open source is ultimately about “grubby commercialism”, and altruism equals naïveté. That was not my intent. I believe that economics is about behaviour not money. I believe in Drucker (a company exists to create a market for the solution), not Friedman (a company exists to provide a return to shareholders). I believe in the Generous Man. I believe in Rappaport’s solution to the Prisoner’s Dilemma to always start with the most generous choice. I believe we’ve known how communities work since you had a campfire and I wanted to sit beside it. I had the pleasure of watching Bob Young give a talk today at “All Things Open” where he reiterated that a successful company always focuses on the success of its customers. I think that was stamped on Red Hat’s DNA from its founding, and continues to contribute to its success with customers today. I believe sharing good software is the only way to make all of us as successful as we can be as a tribe. I believe there is no scale in software without discipline.

    The open source definition is almost 20 years old. Red Hat at 22 is a $2B company. MySQL and JBoss have had great acquisition exits. Cloudera and Hortonworks are well on their way to becoming the next billion dollar software companies. But I would like to observe that despite these successes, there is no open source business model.


    I completely believe in the economic value of liberally-licensed collaboratively-developed software. We’ve shared software since we’ve developed software, all the way back into the late 40s and early 50s. This is because writing good software is inherently hard work. We’ve demonstrated that software reviews find more bugs than testing, so building a software development culture of review creates better software. Much of the invention in software engineering and programming systems has been directed towards re-use and writing more and better software in fewer lines of code. Software can’t scale without discipline and rigour in how it’s built and deployed. Software is inherently dynamic, and this dynamism has become clear in an Internet connected world. Well-run, disciplined, liberally-licensed collaborative communities seem to solve for these attributes of software and its development better than other ways of developing, evolving, and maintaining it. There is an engineering economic imperative behind open source software.

    Here’s an example using open source that I believe closely demonstrates that reality.

    Interix was a product in the late 90s that provided the UNIX face on Windows NT. It encompassed ~300 software packages covered by 25 licenses, plus a derivative of the Microsoft POSIX subsystem, plus our own code. This was before the open source definition. We started with the 4.4BSD-Lite distro because that’s what the AT&T/USL lawyers said we could use. The gcc compiler suite would provide critical support for our tool chain as well as an SDK to enable customers to port their UNIX application base to Windows NT.

    It took a senior compiler developer on the order of 6–8 months to port gcc into the Interix environment. It was a little more work when you include testing and integration, etc., so round it up to on the order of $100K. The gcc suite was about 750K lines of code in those days, which the COCOMO calculation suggests represented $10M-$20M of value, depending on how much folks were earning. So that’s roughly two orders of magnitude in cost savings versus writing a compiler suite on our own. On top of that, this was a well-maintained, robust, hardened compiler suite, not a new creation built from scratch in a vacuum. That is the benefit of using open source. You can see a similar net return on the 10% year-on-year investment Red Hat makes in their Linux kernel contributions as they deliver Fedora and RHEL. Of course, with Interix, we were now living on a fork, which meant we were drifting further away from the functionality and fixes on the mainline.
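    The COCOMO arithmetic above is easy to reproduce. Here is a minimal sketch using the basic COCOMO effort formula with Boehm's published organic-mode coefficients (a=2.4, b=1.05); the $50K-$100K loaded cost per person-year is my own assumption, chosen only to show how the post's $10M-$20M range falls out of a 750K-line code base:

```python
import math

def cocomo_basic_effort(kloc, a=2.4, b=1.05):
    """Basic COCOMO effort estimate in person-months.

    a=2.4, b=1.05 are Boehm's coefficients for an 'organic' project
    (small teams, familiar problem domain)."""
    return a * kloc ** b

# gcc circa the late 90s: ~750K lines of code
effort_pm = cocomo_basic_effort(750)   # ~2,500 person-months
effort_py = effort_pm / 12             # ~209 person-years

# Assumed loaded cost per developer-year (not from the original post):
for cost_per_year in (50_000, 100_000):
    value_m = effort_py * cost_per_year / 1e6
    print(f"${value_m:.1f}M at ${cost_per_year:,}/person-year")
```

    Running this yields roughly $10M at $50K/person-year and $21M at $100K/person-year – the same order-of-magnitude range the post cites against the ~$100K porting cost.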

    The back-of-the-envelope estimate suggested that every new major revision of gcc would cost us another 6+ months to re-integrate, but if we could get our changes contributed back into the mainline code base, we were probably looking at a month of integration testing instead. So from ~$100K we’re approaching $10K-$20K, so possibly another order of magnitude cheaper by not living on a fork. We approached Cygnus Solutions as they were the premier gcc engineering team with several gcc committers. The price to integrate quoted to us was ~$120K, but they were successfully oversubscribed with enough other gcc work that they couldn’t begin for 14 months. Ada Core Technologies on the other hand would only charge ~$40K and could begin the following month. It was a very easy decision. (We were not in a position to participate directly in the five communities hiding under the gcc umbrella. While some projects respected the quality of engineering we were trying to contribute, others were hostile to the fact we were working on that Microsoft s***. There’s no pleasing some people.)

    This wasn’t contributing back out of altruism. It was engineering economics. It was the right thing to do, and contributed back to the hardening of the compiler suite we were using ourselves. It was what makes well run open source projects work. I would argue that individuals make similar decisions because having your name on key contribution streams in the open source world is some of the best advertising and resume content you can provide as a developer on your ability to get work done, in a collaborative engineering setting, and demonstrating you well understand a technology base. It’s the fact with which you can lead in an interview. And it’s fun. It’s interesting and challenging in all the right ways. If you’re a good developer or interested in improving your skills, why wouldn’t you participate and increase your own value and skills?

    Well run open source software communities are interesting buckets of technology. If they evolve to a particular size they become ecosystems of products, services (support, consulting, training), books and other related-content. To use an organic model, open source is trees, out of which people create lumber, out of which they build a myriad of other products.

    Red Hat is presented as the epitome of an open source company. When I look at Red Hat, I don’t see an open source company. I see a software company that has had three CEOs making critical business decisions in three different market contexts as they grow a company focused on their customers. Bob Young founded a company building a Linux distro in the early days of Linux. He was focused on the Heinz ketchup model of branding. When you thought “Linux”, Bob wanted the next words in your head to be “Red Hat.” And this was the initial growth of Red Hat Linux in the early days of the Internet and through the building of the Internet bubble. It was all about brand management. Red Hat successfully took key rounds of funding, and successfully went public in 1999. The Red Hat stock boomed.

    Matt Szulik took over the reins as CEO that Fall. Within a couple of years the Internet bubble burst and the stock tumbled from ~$140 down to $3.50. Over the next couple of years, Red Hat successfully made the pivot to the server market. RHEL was born. Soon after, Fedora was delivered such that a Red Hat focused developer community would have an active place to collaborate while Red Hat maintained stability for enterprise customers on RHEL. They successfully crossed Moore’s Chasm in financial services. JBoss was acquired for $350M to provide enterprise middleware. Red Hat went after the UNIX ISV community before the other Linux distro vendors realized it was a race.

    In 2008, Jim Whitehurst took over the helm. In Whitehurst, they had a successful executive that had navigated running an airline through its Chapter 11 restructuring. So he knows how to grow and maintain employee morale, while managing costs, and keeping customers happy in the viciously competitive, cutthroat market of commercial air travel. He arrives at Red Hat just in time for the economic collapse of 2008. Perfect. But he has also led them through steady stock growth since joining.

    Through its history, Red Hat has remained focused on solving their customers’ problems. Harvard economist Theodore Levitt once observed that a customer didn’t want a quarter inch drill, what they wanted was a quarter inch hole. While lots of competing Linux distro companies tried to be the best Linux distro, Red Hat carefully positioned themselves not as the best Linux but as an enterprise quality, inexpensive alternative to Solaris on expensive SPARC machines in the data centre.

    Red Hat certainly uses open source buckets of technology to shape their products and services, but it’s not a different business model from the creation of DEC Ultrix or Sun SunOS out of the BSD world, or the collaborative creation of OSF/1 and the evolution of DEC Ultrix and IBM AIX, or the evolution of SunOS to Solaris from a licensed System V base. At what point did Windows NT cease to be a Microsoft product with the addition of thousands of third party licensed pieces of software including the Berkeley sockets technology?

    When companies share their own source code out of which they build their products and services, and attempt to develop their own collaborative communities, they gain different benefits. Their technology becomes stickier with customers and potential future customers. They gain advocates and experts. It builds inertia around the technology. The technology is hardened. Depending on the relationship between the bucket of technology and their products, they can evolve strong complements to their core offerings.

    The engineering economic effects may not be as great as pulling from a well run external bucket of technology, but the other developer effects make up for the investment in a controlled and owned community. It’s why companies like IBM, Intel, Microsoft, and Oracle all invest heavily in their developer networks regardless of the fact these historically had nothing to do with open source licensing. It creates stickiness. Red Hat gains different benefits from their engineering investments in Linux, their development of the Fedora community, and the acquisition of the JBoss technology, experts, and customers.

    I believe open source licensed, disciplined, collaborative development communities will prove to be the best software development and maintenance methodology over the long term. It’s created a myriad of robust adaptive building blocks that have become central to modern life in a world that runs on software. But folks should never confuse the creation of those building blocks with the underlying business of solving customer problems in a marketplace. There is no “open source business model.”