  • Change Agents in Large Organizations

    “Everybody has plans until they get hit the first time.” — Mike Tyson

    An economist was once asked, “If you were stranded on a desert island, how would you survive?” The economist pondered this great question for some time and then proudly ventured his answer: “Assume a perfect world…” — old joke about economists

    I am not known for my love of business or management books; quite the opposite, actually. When I try to articulate why, it generally comes down to boredom and a decided lack of enthusiasm for the subject. It’s not that I don’t appreciate the appeal of business topics or the act of conducting business. Far from it. It’s more that there’s an utter futility to the idea that we can do it better. I led off this post with the 2 quotes above to illustrate my reasoning. They seem to come from very different points of view, and yet they are in my mind very much related:

    • Business books are not reflective of lived experiences or real world incentives, just like the economist in the above example, and…
    • They’re hopelessly naive and unable to account for what happens the first time a practitioner gets “punched in the face” (not literally, of course, or at least not usually)

    Both quotes illustrate the difficulty of putting a plan into action, either because you didn’t account for resistance (Mike Tyson) or for real-world constraints (the economist). I dislike business books for the same reason I dislike management consultants, strategists, market analysts, pundits, and any other pointy-haired expert who tries to tell me how to do my job better: because their words prove to be almost useless against the realities of functioning, much less thriving, in a real-life bureaucratic system. With that in mind, I’m now going to do what I probably never should: give advice on how to be a change agent in large bureaucratic organizations. Given what I wrote above, you could be forgiven for asking, “Why?” The answer is rather simple: despite all my experience, which tells me I should really know better, at the end of the day, I naively follow an insatiable desire to drive change. Knowing better or not, it doesn’t stop me from trying. The urge toward futile resistance against the Borg is buried deep, deep inside my psyche. It’s terminal, I’m afraid.

    Never, Ever be a Change Agent

    The first thing to know about being a change agent is to not be one. Just don’t do it. No one is ever going to congratulate you on your futility or give you an award for repeatedly beating your head against countless walls. Just don’t. The best that can happen is that somebody else advances themselves based on your ideas, picks up the pieces after you’ve been beaten to a pulp, and then gets the accolades. The worst that can happen is that nobody ever listens to you at all and you toil away in silence and isolation, walled off from doing any real damage. Note that getting fired is not the worst outcome. In fact, it’s a great way to confirm you were on to something and extricate yourself from a terrible situation, preventing you from expending energy on fruitless efforts that nobody will acknowledge. Getting fired is a merciful end. Futilely trudging along with no end in sight? Now that’s inhumane torture.

    To successfully make changes in an organization, think very carefully about who you need to convince and bring along for the ride: your upper management. That’s right, the very people who have benefited from the existing systems. Remind me, what is their incentive for changing? Their interest goes only as far as their incentives. To be a successful change agent, you have to convince them that change is in their interest, and that’s a pretty high bar. For that to happen, your leaders have to be in such a position that they see an urgent need for change that will benefit the organization – but also simultaneously benefit them. The stars have to be aligned just so, and you will need to take extra care to spot the right opportunities. I cannot emphasize this point enough: the stars do not tend to align except in particular circumstances. You have to learn to be very adept at spotting those particular circumstances. As I said, most of the time it’s not worth it.

    For the remainder of this post, I’m going to assume that you are disregarding my hard-earned, well-considered advice and have chosen to proceed on this adventure in masochism.

    Ok, Fine. You’re a Change Agent. Now What?

    The first thing to know about large organizations is that they never fail. Where most change agents go wrong is they erroneously assume that organizations are failing. If you are asking the question, “why is this organization failing?” know that you are asking precisely the wrong question. Organizations behave exactly as they are designed, and one thing they are designed to do is to resist change. When you zoom out and consider the possible outcomes of an organization’s lifecycle, this is not a bad thing. A long-lived organization will need to be able to survive the whims and myopia of misguided leaders as well as the ups and downs of its industry and market. Resistance to change is a design feature, not a bug. Organizations are so good at self-perpetuation that they are quite adept at identifying and neutralizing potential threats, i.e., people who want to change things. How this happens depends on the environment, from putting you on projects that keep you tied up (and away from the things management cares about) to just flushing you out entirely if you prove too meddlesome.

    This is why I get annoyed with most attempts to effect change: they assume that organizations need to be better, and they assume that their pet project is a way to do that. Thus we have movements like Agile and DevOps, which started off as a means to change organizations and eventually were subsumed by the beast, becoming tools that organizations use to perpetuate their existence without actually changing anything. The authors of the Agile manifesto wanted to change how technology worked in organizations, but what they actually did was give large organizations the language they needed to perpetuate the same incentive structures and bureaucracy they always had. DevOps was going to put Agile methodology into practice and empower (whatever that means) technologists to take a larger stake in the business. I’m pretty sure CIOs are still laughing about that one. In the meantime, we still get design by committee, the inability to make decisions, and endless red tape to prevent change from actually happening. Again, this isn’t necessarily bad from the perspective of a business built to last; it’s just really annoying if you expect things to move after you push. My advice: adjust your expectations.

    Incentives and Human Behavior

    The reason most change initiatives fail is that they don’t account for the reality of incentives and the influence of human behavior. Large organizations have evolved intricate systems of carrots and sticks to reward certain behaviors and punish or at least discourage behaviors deemed impolitic. Want to know why teams don’t collaborate across organizations? Because they’re not rewarded for doing so. Why do leaders’ edicts get ignored? Because teams are incentivized to stay the course and not switch abruptly.

    Agile failed in its original intent because it naively assumed that incentives would be aligned with faster development and delivery of technology. What it failed to account for was that any change in a complex system incurs a premium, a tax on the change. Any change to a legacy system with complex operations will have unknown consequences and therefore unknown costs. The whole point of running a business effectively is to be able to predict P&L with some accuracy. Changes to legacy systems incur risk, which disincentivizes organizations from adopting them at scale. Thus Agile morphed from accelerated development and delivery into a different type of bureaucracy that served the same purpose as the old one: preventing change. Except now it uses fancy words like “scrums”, “standups”, and “story points”. As Charles Munger put it, “show me the incentive, and I’ll show you the outcome.” If the avoidance of risk is incentivized and rewarded, then practitioners in your organization will adopt that as their guiding principle. If your employees get promoted for finishing their pet projects and not for collaborating across the organization, guess what they will choose to do with their time?

    It’s this naive disregard of humanity that dooms so many change initiatives. Not everyone wants to adopt your particular changes, and there may be valid reasons for that. Not everyone wants to be part of an initiative that forever changes an organization. Some people just want to draw a paycheck and go home. To them, change represents risk to their future employment. Any change initiative has to acknowledge one universal aspect of humanity: to most people, change is scary. Newsflash: some people don’t tie their identities to their jobs. I envy them, honestly. And still others aren’t motivated to change their organization. They are just fine with the way things are.

    Parasite-Host Analogy

    And how do organizations prevent change? By engaging in what I call the “host immune response.” If you’re familiar with germ theory and disease pathology, you know that most organisms have evolved the means to prevent external threats from causing too much harm. Mammals produce mucus, which surrounds viruses and bacteria in slimy goo to prepare for expulsion from the body, preventing these organisms from multiplying internally and causing damage to organs. Or the host will wall off an intruder, not eradicating or expelling it, just allowing it to exist where it can’t do any damage, like a cyst. Or an open source community.

    Within this host immune response and parasite analogy, there lies the secret to potential success: symbiosis. If you dust off your high school biology textbook (and really, who doesn’t?), you’ll recall that symbiosis is the result of 2 species developing a mutually beneficial relationship. Nature provides numerous examples of parasitic relationships evolving into symbiosis: some barnacle species and whales; some intestinal worms and mammals; etc, etc. In this analogy, you the change agent are the parasite, and the organization you work for is the host. The trick is for the parasite to evade getting ejected from the host. To do that, the parasite has to be visible enough for its benefits to be felt, but not so visible as to inflame the host. It’s quite the trick to pull off. To put this into more practical terms, don’t announce yourself too loudly, and get in the habit of showing, not telling.

    Oh dear… I’ve now shifted into the mode of giving you a ray of hope. I’m terribly sorry. I fear that my terminal case of unbridled optimism has now reared its ugly head. Fine. Even though it’s probably pointless and a lost cause, and you’re only signing up for more pain, there are some things you can do to improve your chances of success from 0% to… 0.5%?

    Show, Don’t Tell

    There are few things large organizations detest more than a loud, barking dog. The surest route to failure is to raise the expectations of everyone around you, and that is exactly what happens when you talk about your vision and plant the seeds of hope.

    Stop. Talking.

    Open source projects serve as a great point of reference here. Sure, many large open source projects undergo some amount of planning, usually in the form of a backlog of features they want to implement. Most well-run, large open source projects have a set of procedures and guidelines for how to propose new features and then present them to the core team as well as the community-at-large. Generally speaking, they do not write reams of text in the form of product requirements documents. They will look at personas and segmentation. They will create diagrams that show workflows. But generally speaking, they lead with code. Documentation and diagrams tend to happen after the fact. Yes, they will write or contribute to specifications, especially if their project requires in-depth integration or collaboration with another project, but the emphasis is on releasing code and building out the project. Because open source projects are my point of reference, imagine my surprise when I started working in large organizations and discovered that most of them do precisely the opposite. They write tomes of text about what they are thinking of building and what they wish to build, before they ever start to actually build it. This runs counter to everything I’ve learned working in open source communities. Given my above points about not changing too much too quickly, what is a change agent to do?

    Prototype. Bootstrap. Iterate.

    Open Source innovation tells us that the secret to success is to lead with the code. You want to lead change? Don’t wait for something to be perfect. Do your work in the open. Show your work transparently. Iterate rapidly and demonstrate continuously. Others will want to create the PRDs, the architectural documents, the white papers, and the other endless reams of text that no one will ever read. Let them. It’s a necessary step – remember, your job is to not trigger the host immune response. You can do that by letting the usual processes continue. What you are going to do is make sure that the plans being written out are represented in the code, in a form that’s accessible to your target audience, as quickly as possible – and that you get it in front of that audience as soon as it’s available. Without a working representation of what is being proposed, your vision is wishful thinking and vaporware.

    The reasons are simple: if you expend your time and energy building up expectations for something that doesn’t exist yet, you risk letting the imaginations of your customers and stakeholders run wild. By limiting your interactions to demonstrations of what exists, the conversation remains grounded in reality. If you continuously present a grand vision of “the future” you will set the stage for allowing perfect to be the enemy of good. Your customers will have a moving target in their minds that you will never be able to satisfy. By building up expectations and attempting to meet them, you are setting the stage for failure. But with continuous iteration, you help to prevent expectations from exceeding what you are capable of delivering. There’s also the added benefit of showing continuous progress.

    Borrowing from the open source playbook is a smart way to lead change in an organization, and it doesn’t necessarily need to be limited to code or software. Continuous iteration of a product or service being delivered can apply to documentation, process design, or anything that requires multi-stage delivery. By being transparent with your customers and stakeholders and bringing them with you on the journey, you give them an ownership stake in the process. This ownership stake can incentivize them to collaborate more deeply, moving beyond customer into becoming a full-fledged partner. This continuous iteration and engagement builds trust, which helps prevent the host from treating you like a parasite and walling you off.

    Remember, most people and organizations don’t like change. It scares them. By progressing iteratively, your changesets become more manageable as well as palatable and acceptable. This is the way to make your changes seem almost unnoticeable, under the radar, and yet very effective, ultimately arriving at your desired outcome.

    Prototype. Bootstrap. Iterate.

  • The New Open Source Playbook – Platforms Part Deux

    (This is the 2nd post in a series. Part 1 is here)

    I was all set to make this 2nd post about open core and innovation on the edge, and then I realized that I should probably explore the concept of “lift” in a bit more detail. Specifically, if you’re looking for your platform strategy to give your technology products lift, what does that mean exactly? This goes back to the idea that a rising tide lifts all boats. If you think of a rising tide as a growing community of users or developers, and the boat is your particular software project, then you want a strategy where your project benefits from a larger community. A dynamic, growing community will be able to support several “boats” – products, projects, platforms, et al. A good example of this is the community around Kubernetes, the flagship project of the Cloud Native Computing Foundation (CNCF).

    How Do We Generate Lift?

    There are 2 basic types of lift you will be looking for – user lift, or getting more people to adopt your platform, and developer lift, where more developers are contributing to your platform. The former gets more people familiar with your particular technology, providing the basis for potential future customers, and the latter allows you to reduce your engineering cost and potentially benefit from new ideas that you didn’t think of. This means that the community or ecosystem you align with depends on the goals for your platform. If you want more users, that is a very different community strategy from wanting more collaborators. Many startups conflate these strategies, which means they don’t always get the results they’re looking for.

    Let’s assume that you have a potential platform that is categorized in the same cloud native space as Kubernetes. And let’s assume that you’ve determined that the best strategy to maximize your impact is to open source your platform. Does that mean you should put your project in the CNCF? It depends! Let’s assume that your product will target infosec professionals, and you want to get feedback on usage patterns for common security use cases. In that case, the Kubernetes or CNCF communities may not be the best fit. If you want security professionals getting familiar with and adopting your platform, you may want to consider security-focused communities, such as those that have formed around SBOM, compliance, and scanning projects. Or perhaps you do want to see how devops or cloud computing professionals would use your platform to reduce their security risk, in which case Kubernetes or CNCF make sense. Your target audience will determine which community is the best fit.

    Another scenario: let’s assume that your platform is adjacent to Kubernetes and you think it’s a good candidate for collaboration with multiple entities with a vested interest in your project’s success. In that case, you need developers with working knowledge of Kubernetes architecture, and the Kubernetes community is definitely where you want your project to be incubated. It’s not always so straightforward, however. If you’re primarily looking for developers who will extend your platform, making use of your interfaces and APIs, then perhaps it doesn’t matter if they have working knowledge of Kubernetes. Maybe in this case, you would do well to understand developer use cases and which vertical markets or industries your platform appeals to, and then follow a different community trail. Platform-community fit for your developer strategy is a more nuanced decision than product-market fit. The former is much more multi-dimensional than the latter.

    If you have decided that developers are key to your platform strategy, you have to decide what kind of developers you’re looking for: those that will *extend* your platform; those that will contribute to your core platform; or those that will use or embed your platform. That will determine the type of lift you need and what community(ies) to align with.

    One more example: you’re creating a platform that you believe will transform the cybersecurity industry, and you want developers who will use and extend your platform. You may at first be attracted to security-focused communities, but then you discover a curious thing: cybersecurity professionals don’t seem fond of your platform and haven’t adopted it at the scale you expect or need. Does this mean your platform sucks? Not necessarily – it could be that these professionals are highly opinionated and have already made up their minds about the platforms they want to base their efforts on. However, it turns out that your platform helps enterprise developers be more secure. Furthermore, you notice that within your enterprise developer community, there is overlap with the PyTorch community, which is not cybersecurity focused. This could be an opportunity to pivot on your adoption strategy and go where your community is leading: PyTorch. Perhaps that is a more ideal destination for community alignment purposes. You can do some testing within the PyTorch community before making a final decision.

    Learn From My Example: Hyperic

    Hyperic was a systems management monitoring tool. These days we would put it in the “observability” category, but that term didn’t exist at the time (2006). The Hyperic platform was great for monitoring Java applications. It was open core, so we focused on adoption by enterprise developers and not contributions. We thought we had a great execution strategy to build a global user base that would use Hyperic as the basis for all of their general purpose application monitoring needs. From a community strategy perspective, we wanted Hyperic to be ubiquitous, used in every data center where applications were deployed and managed. We had a great tag line, too: “All Systems Go”. But there was a problem: although Hyperic could be used to monitor any compute instance, it really shined when used with Java applications. Focusing on general systems management put us in the same bucket, product-wise, as other general use systems management tools, none of which were able to differentiate themselves from one another. If we had decided to place more of our community focus on Java developers, we could have ignored all of the general purpose monitoring and focused on delivering great value for our core audience: Java development communities. Our platform-community fit wasn’t aligned properly, and as a result, we did not get the lift we were expecting. That meant our sales team had to work harder to find opportunities, which put a drag on our revenue and overall momentum. Lesson learned…

    When attempting a platform execution strategy, and you’re going the open source route, platform-community fit is paramount. Without it, you won’t get the lift you’re expecting. You can always change up your community alignment strategy later, but it’s obviously better if you get it right the first time.

  • The New Open Source Playbook

    (This is the first in a series)

    For the last few years, the world of commercial open source has been largely dormant, with few startup companies making a splash with new open source products. Or if companies did make a splash, it was for the wrong reasons; see, e.g., HashiCorp’s Terraform rug pull. It got to the point that Jeff Geerling declared that “Corporate Open Source is Dead”, and honestly, I would have agreed with him. It seemed that the age of startups pushing new open source projects and building a business around them was a thing of the past. To be clear, I always thought it was naive to think you could simply charge money for a rebuild of open source software, but the fact that startups kept trying showed that there was momentum behind the idea of using open source to build a business.

    And then a funny thing happened – a whole lot of new energy (and money) started flowing into nascent companies looking to make a mark in… stop me if you’ve heard this one… generative AI. Or to put it in other words, some combination of agents built on LLMs that attempt to solve some automation problem, usually in the category of software development or delivery. It turns out that when there’s lots of competition for users, especially when those users are themselves developers, a solid open source strategy can make the difference between surviving and thriving. In light of this newfound enthusiasm for open source and startups, I thought I’d write a handy guide for startups looking to incorporate open source strategy into their developer go-to-market playbook. Except in this version, I will incorporate nuances specific to our emerging agentic world.

    To start down this path, I recommend that startup founders look at 3 layers of open source go-to-market strategy: platform ecosystem (stuff you co-develop), open core (stuff you give away but keep IP), and product focus (stuff you only allow paying customers to use). That last category, product focus, can be on-prem, cloud hosted, or SaaS services – it won’t matter, ultimately. Remember, this is about how to create compelling products that people will pay for, helping you establish a business. There are ways to use open source principles that can help you reach that goal, but proceed carefully. You can derail your product strategy by making the wrong choices.

    Foundation: the Platform Ecosystem Play

    When thinking about open source strategy, many founders thought they could release open source code and get other developers to work on their code for free as a new model of outsourcing. This almost never works as the startup founders imagined. What does end up happening is that a startup releases open source code and their target audience happily uses the code for free, often not contributing back, causing a number of startups to question why they went down the open source path to begin with. Don’t be like them.

    The way to think of this is within the concept of engineering economics. What is the most efficient means to produce the foundational parts of your software?

    • If the answer is by basing your platform on existing open source projects, then you figure out how to do that while protecting your intellectual property. This usually means focusing on communities and projects under the auspices of a neutral 3rd party, such as the Eclipse or Linux Foundation.
    • If the answer is by creating a new open source platform that you expect to attract significant interest from other technology entities, then you test product-market fit with prospective collaborators and organizations with a vested interest in your project. Note: this is a risky strategy requiring a thoughtful approach and ruthless honesty about your prospects. The most successful examples of this, such as Kubernetes, showed strong demand from the outset and their creation was a result of market pull, not a push.
    • If the answer is that you don’t need external developers contributing to your core platform, but you do need end users and data on product-market fit, then you look into either an open core approach, or a free product that gives the platform away, though not necessarily under an open source license. This is usually for cases where you need developers to use or embed your product, but you don’t need them contributing directly. This is the “innovation on the edge” approach.
    • Or, if the answer is that you’ll make better progress by going it alone, then you do that and you don’t give it a 2nd thought. The goal is to use the most efficient means to produce your platform or foundational software, not score points on Hacker News.

    Many startups through the years have been tripped up by this step, misguidedly believing that their foundational software was so great that, once released, thousands of developers would fall over each other to contribute to the project.

    In the world of LLMs and generative AI, there is an additional consideration: do you absolutely need the latest models from Google, OpenAI, or elsewhere, or can you get by with slightly older models less constrained by usage restrictions? Can you use your own training and weights with off-the-shelf open source models? If you’re building a product that relies on agentic workflows, you’ll have to consider end user needs and preferences, but you’ll also have to protect yourself from downstream usage constraints, which could hit you if you reach certain thresholds of popularity. When starting out, I wholeheartedly recommend having as few constraints as possible, opting for open source models wherever you can, but also giving your end users the choice if they have existing accounts with larger providers. This is where it helps to have a platform approach that helps you address product-ecosystem fit as early as possible. If you can build momentum while architecting your platform around open source models and model orchestration tools, your would-be platform contributors will let you know that early on. Having an open source platform approach will help you guide your development in the right direction. Building your platform or product foundation around an existing open source project will be even more insightful, because that community will likely already have established AI preferences, helping make the decision for you.
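
    To make the “give your end users the choice” point concrete, here is a minimal sketch (in Python) of what that architecture can look like: one small interface for model access, an open source model as the default, and a hosted provider only when the user supplies their own account. Every name, field, and default below is a hypothetical illustration, not a prescription or a real API.

    ```python
    # A sketch of provider choice behind one small interface. All names here
    # (TextModel, OpenWeightsModel, HostedProviderModel) are invented for
    # illustration; the stubs stand in for real inference/API calls.
    from dataclasses import dataclass
    from typing import Protocol


    class TextModel(Protocol):
        """Anything that can complete a prompt; the product codes only to this."""
        def complete(self, prompt: str) -> str: ...


    @dataclass
    class OpenWeightsModel:
        """Stand-in for a locally hosted open source model."""
        model_path: str

        def complete(self, prompt: str) -> str:
            # A real implementation would run local inference; stubbed for the sketch.
            return f"[local:{self.model_path}] completion for {prompt!r}"


    @dataclass
    class HostedProviderModel:
        """Stand-in for a user-supplied account with a large hosted provider."""
        api_key: str
        endpoint: str

        def complete(self, prompt: str) -> str:
            # A real implementation would call the provider's API; stubbed for the sketch.
            return f"[{self.endpoint}] completion for {prompt!r}"


    def build_model(user_config: dict) -> TextModel:
        """Default to open weights; use a hosted provider only if the user opts in."""
        if user_config.get("provider_api_key"):
            return HostedProviderModel(
                api_key=user_config["provider_api_key"],
                endpoint=user_config.get("provider_endpoint", "https://provider.example"),
            )
        return OpenWeightsModel(model_path=user_config.get("model_path", "./models/default"))


    if __name__ == "__main__":
        model = build_model({})  # no key supplied, so the open model is the default
        print(model.complete("summarize this changeset"))
    ```

    The point of the sketch is the seam, not the stubs: because the rest of the product codes only against TextModel, swapping model providers later doesn’t ripple through the codebase or lock you into one vendor’s constraints.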

    To summarize, find the ecosystem that best fits your goals and product plans, and then:

    • Try to build your platform strategy within a community in that ecosystem, preferably on an existing project.
    • Barring that, create your own open source platform but maintain close proximity to adjacent communities and ecosystems, looking for lift from common users that will help determine platform-ecosystem fit.
    • Or build an open core platform, preferably with a set of potential users from an existing community or ecosystem who will innovate on the edge, using your APIs and interfaces.
    • If none of those apply, build your own free-to-use proprietary platform but maintain a line-of-sight to platform-ecosystem fit.

    No matter how you choose to build or shape a platform, you will need actual users to provide lift for your overall product strategy. You can get that lift from core contributors, innovators on the edge, adoption by your target audience, or some combination of these. How you do that depends on your needs and the expectations of your target audience.

    Up Next: open core on the edge and free products.

  • Open Source is About to Undergo Substantial Change

    …And Most Open Source Communities Aren’t Ready

    It’s probably gauche to talk about “AI” by now. AI this… AI that… and most of the time, what we’re really talking about is predictive text machines, aka LLMs. But today I want to talk about what I see happening in the open source world, how I see things changing in the not too distant future, and how much of that will be shaped by these predictive text machines, aka… LLMs. The agentic world is growing very quickly, and even if the large LLMs are starting to plateau, the LLM-backed services are still accelerating in their product growth for the simple reason that developers are figuring out how to add rules engines and orchestration platforms to build out targeted vertical services (think tools for reading radiology and MRI scans, for example). A great analogy from computing history for this shift from LLMs to agentic “SLMs” is the shift in emphasis from the single CPU as the measure of compute power to multi-core CPUs along with faster RAM, NVMe, larger onboard caches, and of course, GPUs. When we think about compute power today, we don’t refer to the chip speed, which is a far cry from the late ’90s and early 2000s. Believe it or not, kids, there was a time when many people thought that Moore’s law applied to the clock speed of a CPU.

    For some time now, source code has been of little value. There’s so much of it. Nobody buys source code. I’ve made this point before in a series of posts on the subject. 20 years ago, I noted how internet collaboration was driving down the price of software because of the ubiquity of source code and the ability to collaborate beyond geographic borders. This trend, which has been unceasing now for 25+ years, has hit an inflection point and is accelerating beyond its previous rate. This is, of course, because of the oncoming train that is AI, or more specifically, agentic LLM-based systems that are starting to write more and more of our source code. Before I get into the full ramifications of What This Means for Open Source (tm), let me review the 2 previous transformative eras in tech that played a pivotal role in bringing us to this point: open source and cloud.

    Open Source Accelerated the Speed of Development

    A long, long time ago, software vendors had long release cycles, and customers had no choice but to wait 1-2 years, or longer depending on the industry, for the long cycle of dev, test, and release to complete. And then a funny thing happened: more people got online and suddenly created a flurry of core tools, libraries, and systems that gave application developers the ultimate freedom to create whatever they wanted without interference from gatekeepers. I cannot over-emphasize the impact this had on software vendors. At first, it involved a tradeoff: vendors were happy to use the free tools and development platforms, because they saw a way to gain a market edge and deliver faster. At the same time, startups also saw an opportunity to capitalize on this development and quickly create companies that could compete with incumbents. In the late ’90s, this meant grabbing as much cash as possible from investors in the hopes of having an IPO. All of this meant that for every advance software vendors embraced from the open source world, they were also effectively writing checks that future competitors would cash. That required established vendors to release even more quickly – lather, rinse, repeat – and find vertical markets where they could build moats.

    Cloud Accelerated the Speed of Delivery

    If open source accelerated the speed of development, the emergence of what became “cloud technologies” enabled the delivery of software at a speed and scale previously thought to be impossible. Several smart companies in the mid-2000s saw this development and started to enact plans that would capitalize on the trend to outsource computing infrastructure. The companies most famous for leading the charge were Amazon, which created AWS in 2006; Netflix, which embraced AWS at an early stage; Google, which created Borg, the predecessor to Kubernetes; and Salesforce, which created its cloud-based PaaS, Force.com, in 2009. Where open source gave small growing companies a chance to compete, cloud did the same, but at a price. Established software vendors started moving to cloud-based systems that allowed them to deliver solutions to customers more quickly, and startups embraced cloud because they could avoid capital expenditures for data center maintenance. Concurrently, open source software continued to develop at a fast pace for the simple reason that it enabled the fast development of technologies that powered cloud delivery. Similar to open source, the emergence of cloud led directly to faster release cycles and increasing competition. Unlike open source, however, cloud computing allowed established cloud companies to build out hegemonic systems designed to exact higher rental fees over time, pulling customers deeper into dependencies that are increasingly difficult to unravel. Software vendors that thought open source developers were the architects of their demise in the early 2000s hadn’t yet met Amazon.

    All of these developments and faster release cycles led to a lot more source code being written and shared, with GitHub.com emerging as the preferred source code management system for open source communities. (Pour one out for SourceForge.net, which should have captured this market but didn’t.) Sometimes this led companies to think that maybe their business wasn’t cut out for this world of source code sharing, so they began a retrenchment from their open source commitments. I predicted that this would have little impact on their viability as a business, and I was right. If only they had asked me, but I digress…

    All of this brings us to our present moment, where source code is less valuable than ever. And in a world where the value of something keeps depreciating, how do we ensure that the rules of engagement remain fair for all parties?

    Sorry Doubters: AI Will Change Everything

    If open source accelerated development and cloud accelerated delivery, then AI is accelerating both, simultaneously. Code generation tools are accelerating the total growth of source code; code generation tools are accelerating the ongoing trend of blurring the boundary between hardware and software; and code generation tools are (potentially) creating automated systems that deliver solutions more quickly. That last one has not yet been realized, but with the continuing growth of agentic workflows, orchestrators, and rules engines, I would bet my last investment dollar on that trend realizing its potential sooner rather than later.

    What does this portend? I think it means we will need to craft new methods of managing and governing all of this source code. I think it means that the rules of collaboration are going to change to reflect shifting definitions of openness and fairness in collaboration. I think it means that previously staid industries (read: semiconductors) are facing increasing pressure in the form of power consumption, speed of data flow, and increasingly virtualized capabilities that have always lived close to the silicon. And I think a whole lot of SaaS and cloud native vendors are about to understand what it means to lose your “moat”. The rise of agentic systems is going to push new boundaries and flip entire industries on their heads. But for the purpose of this essay, I’m going to focus on what it means for the rules of collaboration.

    What is the Definition of Open Source?

    For many years, the definition of open source has been housed and governed by the Open Source Initiative (OSI). Written in the post-Cold War era of open borders and free trade, it’s a document very much of its time. In the intervening years, much has happened. Open source proliferation happened, and many licenses were approved by the OSI as meeting the requirements of the Open Source Definition (OSD). State-sponsored malware happened, sometimes inflicting damage on the perceived safety of open source software. Cloud happened, and many open source projects were used in the creation of “cloud-native” technologies. And now LLM-based agentic systems are happening. I mention all of this to ask: in what context is it appropriate to consider changes to the OSD?

    One of the reasons open source governance proved to be so popular is that it paved the way for innovation. Allow me to quote my own definition of innovation:

    Innovation cannot be sought out and achieved directly. It’s like happiness. It has to emerge from laying the foundation and establishing the rules that enable it to flourish.

    In open source communities and ecosystems, every stakeholder has a seat at the table, whether they are individuals, companies, governments, or any other body with a vested interest. That is the secret of its success. When you read the 10 tenets of the OSD, it boils down to “establishing the rules of collaboration that ensure fairness for all participants.” Basically, it’s about establishing and defending the rights of stakeholders, namely the ability to modify and distribute derivative works. In the traditional world of source code, this is pretty straightforward. Software is distributed. Software has a license. Users are held to the requirements of that license. We already saw the first cracks in this system when cloud computing emerged, because the act of distributing… sorry, “conveying” software changed significantly once software was delivered over a network. And the idea of derivative works was formed at a time when software was compiled with shared library binaries (.so and .dll) that were pulled directly into a software build. Those ideas have become more quaint over time, and the original provisions of the OSD have become increasingly exploitable over the years. What use is a software license when we don’t technically “use software”? We chose not to deal with this issue, pretending that nothing had changed. For the most part, open source continued to flourish, and more open source projects continued to fuel the cloud computing industry.

    But now we’re bracing for another change. How do we govern software when we can’t even know if it was written by humans? Agentic systems can now modify and write new source code with little human intervention. I will not comment on whether this is a good idea, merely that it is happening. Agentic systems can take the output of cloud-based services and write entire applications that mimic their feature sets. Does that meet the definition of open source? Does it violate the EULA of a cloud service? And if companies can recreate entire code bases of projects based only on the requirements of applications that use them, does that violate the terms of reciprocal licenses like the GPL? And this is before we even get to the issues of copyright pertaining to all the source code that had to feed the models in order to write code.

    If we turn back to answering the question “how do we protect the rights and ensure the fairness of all participants”, how do we prepare for these changes? I think a few things are in order:

    • The right to reverse engineer must be protected to meet the definition of Open Source. This means that the ability to recreate, modify, and redistribute a model, cloud service, or really anything in technology that we use has to be protected. For years, cloud providers have built complexity into their services that makes them very difficult to replicate at scale. That is now changing, and it is a good thing.
    • This also means that the ability to recreate, modify, and redistribute models must be protected if they are to use the moniker of Open Source.
    • Agents must abide by licensing terms in order to be categorized as open source. If you call your agentic systems open source, they must be able to interpret and abide by software licenses. This effectively means that all agentic systems will need to include a compliance persona in order to meet the definition of Open Source.
    • Maintainers of Open Source projects must have a way to quickly dismiss the output of agentic systems that file bug and vulnerability reports. This means that in order to meet the open source definition, agentic systems that fit in that category will have to abide by a standard that maintainers use to signal their willingness to accept input from agents. If maintainers decline, then agentic systems will either avoid these projects, or push their inputs and changes into forked repos maintained elsewhere. (I sketch what such a signal could look like just after this list.)
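
    Since that last point proposes an actual mechanism, here is a minimal sketch, in Python, of how a maintainer opt-in signal could work. To be clear, no such standard exists today: the file name, fields, and values below are all invented for illustration.

    ```python
    # Hypothetical maintainer signal: a small policy file in the repo root that
    # a well-behaved agent checks before filing anything. AGENT-POLICY.json and
    # its fields are invented for this sketch; no such standard exists today.
    import json
    from pathlib import Path

    POLICY_FILE = "AGENT-POLICY.json"  # hypothetical, by analogy to robots.txt

    def agent_may_file_reports(repo_root: str) -> bool:
        """Return True only if maintainers have explicitly opted in to agent-filed reports."""
        policy_path = Path(repo_root) / POLICY_FILE
        if not policy_path.exists():
            return False  # no signal means no consent
        policy = json.loads(policy_path.read_text())
        return bool(policy.get("accept-agent-reports", False))

    if __name__ == "__main__":
        # Example policy a maintainer might commit to the repo root:
        # { "accept-agent-reports": false, "agent-contact": "mailto:maintainers@example.org" }
        if agent_may_file_reports("."):
            print("Maintainers accept agent-filed reports; proceed.")
        else:
            print("No opt-in found; route changes to a fork instead.")
    ```

    The design choice mirrors robots.txt: the default is no consent, and the burden of checking falls on the agent, not the maintainer.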

    These are just a few ideas. The bottom line is that the open source ethos guarantees all stakeholders a seat at the table, and we must be willing to make changes to our governing rules in order to ensure fairness for all parties. To do otherwise is to shirk our responsibility and pretend like it’s still 1999. No change to the open source definition should be taken lightly, but as the governing document that protects the rights of those who participate in open source communities, we need to make sure that it doesn’t become more easily exploitable by monopolistic companies and those who wish to extort community members or commit harmful acts.

    Open Source communities and maintainers are not yet prepared for these changes, and it’s our job as community members to make sure that these communities, the backbone of open source innovation, remain vibrant and strong.

  • There is No Open Source Community

    In January 2006, I published this article on O’Reilly’s OnLAMP.com site, which was recently shut down. I’ve always been proud of this essay, because I think I got a lot right. I’m republishing it now in the hopes that it will continue to educate others – and perhaps allow readers to critically evaluate where I fell short in my arguments. The central thesis is here:

    The commoditization of software and a gradual, long-term reduction in price have played far more important roles than previously recognized. Business strategy designed to leverage open source should focus more on economies of scale (in terms of user and developer bases) and less on pleasing a mythical, monolithic community.

    Basically, stop treating open source as a social movement, because it’s not. This false assumption has caused much harm to software developers and users alike (more on that in a follow-up article). However, while I’m busy patting myself on the back for writing about software commoditization, I missed something fairly big: source code itself is essentially worthless. This may have actually been more important than the price of software.

  • Podcast: Shane Coughlan of OpenChain

    Shane Coughlan is the founder and manager of the OpenChain Project, which “builds trust in open source by making open source license compliance simpler and more consistent.” As any software asset management person can tell you, open source license compliance makes them go cross-eyed. My opinion has always been that this is due to a lack of information outside the immediate sphere of open source developers. The OpenChain Project aims to remedy that, and in this podcast we talked about the challenges of doing so. It’s a great listen!

  • Is Open Source More Risky?

    There’s been a long-running debate over open source and security, and it goes something like this:

    Pro: Open source is awesome! Given enough eyes, all bugs are shallow. This is why open source software is inherently more secure.

    Con: Hackers can see the code! They’ll look at the source code and find ways to exploit it. This is why open source software is inherently more insecure.

    And on and on… ad nauseam. There are a variety of studies that each side can cite to help state their case. The problem, as I see it, is that we’re not even talking about the same thing. If someone says open source software is more or less secure, what are they actually talking about? Do they mean software you download from the web and push into production? Or do they mean vendor-supported solutions? Unless we can agree on that, any further discussion is pointless.

    Open Source Products

    So let’s shift the conversation to an apples vs. apples comparison so that we’re discussing the same things. According to a survey by Black Duck, upwards of 96% of commercial software solutions use open source software to some extent. This means virtually *all* new software solutions use open source software. So, when someone argues whether open source is more or less secure, the question to ask is, “more or less secure than *what*?” Because as we can see, the number of software solutions that *don’t* use open source software is rapidly dwindling.

    To save everyone’s breath, let’s change the dynamics of this conversation. Let’s compare “raw” upstream open source code vs. supported software solutions backed by a vendor. As I’ve mentioned before, you can do the former, but it helps if you’re Amazon, Google or Facebook and have an army of engineers and product managers to manage risk. Since most of us aren’t Amazon, Google or Facebook, we usually use a vendor. There are, of course, many grey areas in-between. If you choose to download “raw” code and deploy in production, there are naturally many best practices you should adopt to ensure reliability, including developing contingency plans for when it all goes pear-shaped. Most people choose some hybrid approach, where core, business-critical technologies come with vendor backing, and everything else is on a case-by-case basis.

    So, can we please stop talking about “open source vs. proprietary”? We should agree that this phrasing is inherently anachronistic. Instead, let’s talk about “managed” vs. “unmanaged” solutions and have a sane, productive discussion that can actually lead us forward.

  • Kite Demonstrates Continuing Toxicity of Silicon Valley

    One of the most frustrating parts of being in open source circles is battling the conventional wisdom in the Valley that open source is just another way to do marketing. It’s complicated by the fact that being a strong open source participant can greatly aid marketing efforts, so it’s not as if marketing activities are completely unrelated to open source processes. But then something happens that so aptly demonstrates what we mean when we say that Silicon Valley has largely been a poisonous partner for open source efforts. Which brings me to this week’s brouhaha around a silly valley startup looking to “Make money fast!” by glomming onto the success of open source projects.

    To quote from the article:

    After being hired by Kite, @abe33 made an update to Minimap. The update was titled “Implement Kite promotion,” and it appeared to look at a user’s code and insert links to related pages on Kite’s website. Kite called this a useful feature. Programmers said it was not useful and was therefore just an ad for an unrelated service, something many programmers would consider a violation of the open-source spirit.

    It’s the “stealing underpants” business model all over again.

    1. Get users and “move the needle”
    2. ?
    3. Profit!

    Step 1 above is why we actually have valley poseurs who unironically refer to themselves as “growth hackers.” Only in the valley.

    The really sad part of this is that the methodology outlined above is terrible, not just because it’s unethical, but because it’s counterproductive to what Kite wants to accomplish. As I’ve mentioned countless times before, a project is not a product, and trying to turn it into one kills the project. The best way to make money on open source is to, big surprise, make a great product that incorporates it in a way that adds value to the customer. In this example, that means taking projects like minimap and autocomplete-python, producing commercial versions of them, and making them part of an existing product or offering them up as separate downloads – from the company site or as part of a commercial distribution.

    The worst part of all this is that there are still investors and business folks who think that doing what Kite did is the only way to make money from an open source project. It’s not. It’s a terrible maneuver from both an ethics and a product development standpoint. It’s once again conflating open source with marketing, which is one of the reasons I started this site – it’s an unforced error and should be part of any “open source product 101” curriculum.

  • Red Hat’s Secret Sauce

    This is a guest post by Paul Cormier, President, Products and Technologies, Red Hat. It was originally posted on the Red Hat blog.

    Open source software is, in fact, eating the world. It is a de facto model for innovation, and technology as we know it would look vastly different without it. On a few occasions, over the past several years, software industry observers have asked whether there will ever be another Red Hat. Others have speculated that due to the economics of open source software, there will never be another Red Hat. Having just concluded another outstanding fiscal year, and with the perspective of more than 15 years leading Red Hat’s Products and Technologies division, I thought it might be a good time to provide my own views on what actually makes Red Hat Red Hat.

    Commitment to open source

    Red Hat is the world’s leading provider of open source software solutions. Red Hat’s deep commitment to the open source community and open source development model is the key to our success. We don’t just sell open source software, we are leading contributors to hundreds of open source projects that drive these solutions. While open source was once viewed as a driver of commoditization and lower costs, today open source is literally the source of innovation in every area of technology, including cloud computing, containers, big data, mobile, IoT and more.

    Red Hat is best known for our leadership in the Linux communities that drive our flagship product, Red Hat Enterprise Linux, including our role as a top contributor to the Linux kernel. While the kernel is the core of any Linux distribution, there are literally thousands of other open source components that make up a Linux distribution like Red Hat Enterprise Linux, and you will find Red Hatters, as well as non-Red Hatters, leading and contributing across many of these projects. It’s also important to note that Red Hat’s contributions to Linux don’t just power Red Hat Enterprise Linux, but also every single Linux distribution on the planet – including those of our biggest competitors. This is the beauty of the open source development model, where collaboration drives innovation even among competitors.

    Today, Red Hat doesn’t just lead in Linux, we are leaders in many different communities. This includes well-known projects like the docker container engine, Kubernetes and OpenStack, which are among the fastest growing open source projects of the last several years. Red Hat has been a top contributor to all of these projects since their inception and brings them to market in products like Red Hat Enterprise Linux, Red Hat OpenShift Container Platform and Red Hat OpenStack Platform. Red Hat’s contributions also power competing solutions from the likes of SUSE, Canonical, Mirantis, Docker Inc., CoreOS and more.

    The list of communities Red Hat contributes to includes many more projects like Fedora, OpenJDK, Wildfly, Hibernate, Apache ActiveMQ, Apache Camel, Ansible, Gluster, Ceph, ManageIQ and many, many more. These power Red Hat’s entire enterprise software portfolio. This represents thousands of developers and millions of man-hours per year that Red Hat commits to the open source community. Red Hat also commits to keeping our commercial products 100% pure open source. Even when we acquire a proprietary software company, we commit to releasing all of its code as open source. We don’t believe in open core models, or in being just consumers but not contributors to the projects we depend on. We do this because we still believe, to our core, that the open source development model is THE best model to foster innovation, faster.

    As I told one reporter last week, some companies have endeavored to only embrace ‘open’ where it benefits them, such as open core models. Half open is half closed, limiting the benefits of a fully open source model. This is not the Red Hat way.

    This commitment to contribution translates to knowledge, leadership and influence in the communities we participate in. This then translates directly to the value we are able to provide to customers. When customers encounter a critical issue, we are as likely as anyone to employ the developers who can fix it. When customers request new features or identify new use cases, we work with the relevant communities to drive and champion those requests. When customers or partners want to become contributors themselves, we even encourage and help guide their contributions. This is how we gain credibility and create value for ourselves and the customers we serve. This is what makes Red Hat Red Hat.

    Products, not projects

    Open source is a development model, not a business model. Red Hat is in the enterprise software business and is a leading provider to the Global 500. Enterprise customers need products, not projects, and it’s incumbent on vendors to know the difference. Open source projects are hotbeds of innovation and thrive on constant change. These projects are where the development is done and where that constant change happens. Enterprise customers value this innovation, but they also rely on the stability and long-term support that a product can give. The stable, supported foundation of a product is what then enables those customers to deliver their own innovations and serve their own customers.

    Too often, we see open source companies who don’t understand the difference between projects and products. In fact, many go out of their way to conflate the two. In a rush to deliver the latest and greatest innovations, as packaged software or public cloud services, these companies end up delivering solutions that lack the stability, reliability, scalability, compatibility and all the other “ilities” or non-functional requirements that enterprise customers rely on to run their mission-critical applications.

    Red Hat understands the difference between projects and products. When we first launched Red Hat Enterprise Linux, open source was a novelty in the enterprise. Some even viewed it as a cancer. In its earliest days, few believed that Linux and open source software would one day power everything from hospitals, banks and stock exchanges, to airplanes, ships and submarines. Today open source is the default choice for these and many other critical systems. And while these systems thrive on the innovation that open source delivers, they rely on vendors like Red Hat to deliver the quality that these systems demand.

    Collaborating for community and customer success

    Red Hat’s customers are our lifeblood. Their success is our success. Just as we thrive on collaboration in open source communities, that same spirit of collaboration drives our relationships with our customers. We help customers consume the innovation of open source-developed software and use it to drive innovation in their own businesses. Customers appreciate our willingness to work with them to solve their most difficult challenges. They value the open source ethos of transparency, community and collaboration. They trust Red Hat to work in their best interests and the best interests of the open source community.

    Too often open source vendors are forced to put commercial concerns over the interests of customers and the open source communities that enable their solutions. This doesn’t serve them or their customers well. It can lead to poor decision making in the best case and fractured communities in the worst case. Sometimes these fractures are repaired and the community emerges stronger, as we saw recently with Node.js. Other times, when fractures are beyond repair, new communities take the place of existing ones, as we have seen with Jenkins and MariaDB. Usually, we see that open source innovation marches forward, but this fragmentation only serves to put vendors and their customers at risk.

    Red Hat believes in collaborating openly with both customers and the open source community. It’s that collaboration that brings forward new ideas and creative solutions to the most difficult problems. We work with the community to identify solutions and find common ground to avoid fragmentation. Through the newly launched Red Hat Open Innovation Labs we are bringing that knowledge and experience directly to our customers.

    The next Red Hat

    Will there be another Red Hat? I hope and expect that there will be several. Open source is now the proven methodology for developing software. The days of enterprises relying strictly on proprietary software have ended. The problems that we have to solve in the complexities of today’s world are too big for just one company. Vendors may deliver solutions in different ways, address different market needs and/or serve different customers – but I believe that open source will be at the heart of what they do. We see open source at the core of leading solutions from both the major cloud providers and leading independent software vendors. But open source is a commitment, not a convenience, and innovative open source projects do not always lead to successful open source software companies.

    Today, we strive not only to be the Red Hat of Linux, but also the Red Hat of containers, the Red Hat of OpenStack, the Red Hat of middleware, virtualization, storage and a whole lot more. Many of these businesses, taken independently, would be among the fastest growing technology companies in the world. They are succeeding because of the strong foundation we’ve built with Red Hat Enterprise Linux, but also because we’ve followed the same Red Hat Enterprise Linux playbook of commitment to the open source community, knowing the difference between products and projects, and collaborating for community and customer success – across all of our businesses. That’s what makes us Red Hat.

  • There is NO Open Source Business Model

    Note: the following was first published on medium.com by Stephen Walli. It is reprinted here with his permission.

    Preface: It has been brought to my attention by friends and trusted advisors that a valid interpretation of my point below is that open source is ultimately about “grubby commercialism”, and altruism equals naïveté. That was not my intent. I believe that economics is about behaviour, not money. I believe in Drucker (a company exists to create a market for the solution), not Friedman (a company exists to provide a return to shareholders). I believe in the Generous Man. I believe in Rapoport’s solution to the Prisoner’s Dilemma: always start with the most generous choice. I believe we’ve known how communities work since you had a campfire and I wanted to sit beside it. I had the pleasure of watching Bob Young give a talk today at “All Things Open” where he reiterated that a successful company always focuses on the success of its customers. I think that was stamped on Red Hat’s DNA from its founding, and continues to contribute to its success with customers today. I believe sharing good software is the only way to make all of us as successful as we can be as a tribe. I believe there is no scale in software without discipline.

    The open source definition is almost 20 years old. Red Hat at 22 is a $2B company. MySQL and JBoss have had great acquisition exits. Cloudera and Hortonworks are well on their way to becoming the next billion dollar software companies. But I would like to observe that despite these successes, there is no open source business model.


    I completely believe in the economic value of liberally-licensed, collaboratively-developed software. We’ve shared software for as long as we’ve developed software, all the way back into the late 40s and early 50s. This is because writing good software is inherently hard work. We’ve demonstrated that software reviews find more bugs than testing, so building a software development culture of review creates better software. Much of the invention in software engineering and programming systems has been directed towards re-use and writing more and better software in fewer lines of code. Software can’t scale without discipline and rigour in how it’s built and deployed. Software is inherently dynamic, and this dynamism has become clear in an Internet-connected world. Well-run, disciplined, liberally-licensed collaborative communities seem to solve for these attributes of software and its development better than other ways of developing, evolving, and maintaining it. There is an engineering economic imperative behind open source software.

    Here’s an example using open source that I believe closely demonstrates that reality.

    Interix was a product in the late 90s that provided the UNIX face on Windows NT. It encompassed ~300 software packages covered by 25 licenses, plus a derivative of the Microsoft POSIX subsystem, plus our own code. This was before the open source definition. We started with the 4.4BSD-Lite distro because that’s what the AT&T/USL lawyers said we could use. The gcc compiler suite would provide critical support for our tool chain as well as an SDK to enable customers to port their UNIX application base to Windows NT.

    It took a senior compiler developer on the order of 6–8 months to port gcc into the Interix environment. It was a little more work when you include testing and integration, etc., so round it up to on the order of $100K. The gcc suite was about 750K lines of code in those days, which the COCOMO calculation suggests represents $10M-$20M of value, depending on how much folks were earning. So that’s roughly two orders of magnitude in cost savings versus writing a compiler suite on our own. And this was a well-maintained, robust, hardened compiler suite, not something created from scratch in a vacuum. That is the benefit of using open source. You can see a similar net return on the 10% year-on-year investment Red Hat makes in their Linux kernel contributions as they deliver Fedora and RHEL. Of course, with Interix, we were now living on a fork, which meant we were drifting further away from the functionality and fixes on the mainline.
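
    That COCOMO figure is easy to reproduce. Here is a minimal sketch of the basic COCOMO “organic mode” calculation for a code base of that size; the 2.4 and 1.05 coefficients are the standard basic-COCOMO values, while the $50K-$100K fully loaded cost per developer-year is an illustrative assumption of mine, not a figure from the text.

    ```python
    # Basic COCOMO, "organic" mode: effort (person-months) = 2.4 * KLOC^1.05.
    # Applied to the ~750 KLOC gcc suite of the era. The per-developer-year
    # cost range is an assumption for illustration only.

    KLOC = 750

    effort_person_months = 2.4 * KLOC ** 1.05
    effort_person_years = effort_person_months / 12

    for annual_cost in (50_000, 100_000):
        value = effort_person_years * annual_cost
        print(f"~{effort_person_years:.0f} person-years -> ${value / 1e6:.0f}M "
              f"at ${annual_cost:,}/yr")

    # Prints ~209 person-years -> $10M at $50,000/yr and $21M at $100,000/yr,
    # consistent with the $10M-$20M range above -- versus ~$100K to port it.
    ```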

    The back-of-the-envelope estimate suggested that every new major revision of gcc would cost us another 6+ months to re-integrate, but if we could get our changes contributed back into the mainline code base, we were probably looking at a month of integration testing instead. So from ~$100K we would be down to $10K-$20K – possibly another order of magnitude cheaper by not living on a fork. We approached Cygnus Solutions, as they were the premier gcc engineering team with several gcc committers. The price they quoted us to integrate the changes was ~$120K, but they were so successfully oversubscribed with other gcc work that they couldn’t begin for 14 months. Ada Core Technologies, on the other hand, would only charge ~$40K and could begin the following month. It was a very easy decision. (We were not in a position to participate directly in the five communities hiding under the gcc umbrella. While some projects respected the quality of engineering we were trying to contribute, others were hostile to the fact we were working on that Microsoft s***. There’s no pleasing some people.)
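
    To make the trade-off explicit, here is a minimal sketch comparing the cumulative cost of living on the fork against contributing back, using the dollar figures above; the number of major gcc revisions is an assumption chosen only to show where upstreaming pays off.

    ```python
    # Figures quoted in the text: ~$100K to re-integrate our changes for each
    # major gcc revision while on a fork, versus a one-time ~$40K contribution
    # contract (Ada Core's quote) plus integration testing per revision once
    # the changes land on the mainline. The revision count is illustrative.

    FORK_COST_PER_REVISION = 100_000
    UPSTREAM_ONE_TIME = 40_000
    UPSTREAM_COST_PER_REVISION = 20_000  # upper end of the $10K-$20K range

    for revisions in range(1, 6):
        fork_total = FORK_COST_PER_REVISION * revisions
        upstream_total = UPSTREAM_ONE_TIME + UPSTREAM_COST_PER_REVISION * revisions
        print(f"{revisions} revision(s): fork ${fork_total:,} "
              f"vs upstream ${upstream_total:,}")

    # Upstreaming already wins at the first major revision ($100K vs $60K),
    # and the gap widens with every release after that.
    ```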

    This wasn’t contributing back out of altruism. It was engineering economics. It was the right thing to do, and it contributed back to the hardening of the compiler suite we were using ourselves. It is what makes well-run open source projects work. I would argue that individuals make similar decisions: having your name on key contribution streams in the open source world is some of the best advertising and resume content a developer can have, demonstrating your ability to get work done in a collaborative engineering setting and a solid understanding of a technology base. It’s the fact you can lead with in an interview. And it’s fun. It’s interesting and challenging in all the right ways. If you’re a good developer or interested in improving your skills, why wouldn’t you participate and increase your own value and skills?

    Well-run open source software communities are interesting buckets of technology. If they evolve to a particular size, they become ecosystems of products, services (support, consulting, training), books and other related content. To use an organic model, open source is trees, out of which people create lumber, out of which they build a myriad of other products.

    Red Hat is presented as the epitome of an open source company. When I look at Red Hat, I don’t see an open source company. I see a software company that has had three CEOs making critical business decisions in three different market contexts as they grew a company focused on its customers. Bob Young founded a company building a Linux distro in the early days of Linux. He was focused on the Heinz ketchup model of branding: when you thought “Linux”, Bob wanted the next words in your head to be “Red Hat.” And this was the initial growth of Red Hat Linux in the early days of the Internet and through the building of the Internet bubble. It was all about brand management. Red Hat took key rounds of funding and went public in 1999. The Red Hat stock boomed.

    Matt Szulik took over the reins as CEO that fall. Within a couple of years the Internet bubble burst and the stock tumbled from ~$140 down to $3.50. Over the next couple of years, Red Hat successfully made the pivot to servers: RHEL was born. Soon after, Fedora was delivered so that a Red Hat-focused developer community would have an active place to collaborate while Red Hat maintained stability for enterprise customers on RHEL. They successfully crossed Moore’s chasm in financial services. JBoss was acquired for $350M to provide enterprise middleware. Red Hat went after the UNIX ISV community before the other Linux distro vendors realized it was a race.

    In 2008, Jim Whitehurst took over the helm. In Whitehurst, they had a successful executive who had navigated running an airline through its Chapter 11 restructuring. He knew how to grow and maintain employee morale, manage costs, and keep customers happy in the viciously competitive market of commercial air travel. He arrived at Red Hat just in time for the economic collapse of 2008. Perfect. But he has also led them through steady stock growth since joining.

    Through its history, Red Hat has remained focused on solving its customers’ problems. Harvard economist Theodore Levitt once observed that a customer doesn’t want a quarter-inch drill; what they want is a quarter-inch hole. While lots of competing Linux distro companies tried to be the best Linux distro, Red Hat carefully positioned themselves not as the best Linux but as an enterprise-quality, inexpensive alternative to Solaris on expensive SPARC machines in the data centre.

    Red Hat certainly uses open source buckets of technology to shape their products and services, but it’s not a different business model from the creation of DEC Ultrix or Sun SunOS out of the BSD world, or the collaborative creation of OSF/1 and the evolution of DEC Ultrix and IBM AIX, or the evolution of SunOS to Solaris from a licensed System V base. At what point did Windows NT cease to be a Microsoft product with the addition of thousands of third party licensed pieces of software including the Berkeley sockets technology?

    When companies share their own source code out of which they build their products and services, and attempt to develop their own collaborative communities, they gain different benefits. Their technology becomes stickier with customers and potential future customers. They gain advocates and experts. It builds inertia around the technology. The technology is hardened. Depending on the relationship between the bucket of technology and their products, they can evolve strong complements to their core offerings.

    The engineering economic effects may not be as great as pulling from a well-run external bucket of technology, but the other developer effects make up for the investment in a controlled and owned community. It’s why companies like IBM, Intel, Microsoft, and Oracle all invest heavily in their developer networks, regardless of the fact that these networks historically had nothing to do with open source licensing. It creates stickiness. Red Hat gains different benefits from their engineering investments in Linux, their development of the Fedora community, and the acquisition of the JBoss technology, experts, and customers.

    I believe open source licensed, disciplined, collaborative development communities will prove to be the best software development and maintenance methodology over the long term. It’s created a myriad of robust adaptive building blocks that have become central to modern life in a world that runs on software. But folks should never confuse the creation of those building blocks with the underlying business of solving customer problems in a marketplace. There is no “open source business model.”