Tag: featured

  • Open Source in a Post-Agentic World

    I haven’t seen so much anxiety permeating the world of technology since the dot-bomb implosion of 2000-2001. And anxiety is everywhere right now. Software developers are worried about losing their jobs. Venture capitalists are wondering whether they will even be needed when 2 vibe coders can build complete apps in days or weeks without any funding. Startup founders are worried about building a “moat” around their business when autonomous agents can reverse-engineer and reproduce their features at blinding speed. And open source maintainers are worried about keeping their heads when autonomous agents are sending an inordinate number of pull requests, many of which are substandard and should be disregarded.

    A number of people have opined that “the end of open source is nigh”. One article from The Register highlighted an example demonstrating how agentic development could change the face of open source forever by killing the very essence of software licensing, open source and otherwise. The choicest comments came from Bruce Perens, who declared that “the entire economics of software development are dead, gone, over, kaput!” To demonstrate the degree of change, Perens enlisted the aid of an agentic engineering platform to reverse-engineer and copy an SRE platform, declaring, “I am the Harry Potter of software!” as if waving a magic wand and summoning a new platform into being. Dan Lorenc, co-founder and CEO of Chainguard, was a bit more circumspect in his outlook, offering that open source platforms would get much-needed improvements, and that agentic engineering is great at one-shot software outcomes but not so great at sustaining efforts that add value over time. In the end, nobody really knows, but hey, that never stopped me before! So let me offer my take, which you no doubt were awaiting with bated breath…

    No, this is not the end of open source

    Let me just cut to the chase and say that open source is not ending, not by a long shot, but it will definitely change and may not be recognizable to those of us who grew up on hand-crafted, artisanal (organic!) source code. Licensing will almost certainly change, and the medium of exchange, source code, will undergo significant change as well, with the points of collaboration more resembling writing tutorials and language exams than software. I went through some of this in my previous posts about upcoming changes, including the potential death of source code, the inevitable changes to business models, and the increasing importance of open source platforms. There are valid concerns to be sure, and change can be difficult, especially when asymmetric change affects people differently depending on where they sit in a given ecosystem or technology lifecycle.

    How Will Open Source Change?

    I have been pretty adamant over the past few months that open source and innersource, while about to undergo significant change, would emerge as more important than ever. Ok, so what exactly will change and how? How is open source going to survive, and what will it look like?

    For one thing, we always wanted software tools to progress to the point where the developer interface was not something that required arcane and esoteric syntax, but something that more resembled human language. LLMs and agentic tools are the great enabler here. This is to be celebrated. We should be thankful that we can summon systems into being without worrying about obscure reference pointers, poorly implemented semaphores, and race conditions. I’m making the assumption that the current crop of agentic systems is good enough to avoid those mistakes or correct them if needed.

    What this means in practice is that well-written instructions, user stories, and specifications will be the driving force of all software development. The implications of this are momentous – your philosophy and comparative literature graduates may be better at this than your friends who are well-versed in a particular language syntax. Collaborating on prompts and specifications will look much different from today’s code pull requests, but the act will be very similar: developers with different ideas will be able to script them out and try them in record time, comparing results and deciding on the best solution. Once they’ve written and tested the specifications and program narratives, they may not even need to submit the pull requests – they’ll just have their agents do that. And who is reviewing the pull requests as submitted? Those would be other agents. The humans in the loop will be evaluating results, comparing multiple tests and determining which is the best solution. Because writing and testing code is now as easy as a simple command to multiple agents, open source collaborators will be able to run as many concurrent tests as they want, depending on their infrastructure capacity. The collaboration will still be there. The ideation will still be there. But the implementation will change.
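
    To make this concrete, here is a minimal sketch of what such a spec-to-agents workflow might look like. Everything here is hypothetical: implement() and score() are placeholders for whatever agentic platform and test harness a project actually uses; only the fan-out-and-compare shape is the point.

        import asyncio

        # Hypothetical agent backends; in practice these would be calls to an
        # agentic engineering platform, each attempting the same specification.
        AGENTS = ["agent-a", "agent-b", "agent-c"]

        async def implement(agent: str, spec: str) -> str:
            # Placeholder for the real agent call; returns a branch name.
            await asyncio.sleep(0)
            return f"{agent}/attempt"

        async def score(branch: str) -> float:
            # Placeholder for running the project's test suite and benchmarks.
            await asyncio.sleep(0)
            return 0.0

        async def best_implementation(spec: str) -> str:
            # Fan the same specification out to several agents concurrently...
            branches = await asyncio.gather(*(implement(a, spec) for a in AGENTS))
            # ...score every result, then surface only the winner for human review.
            scores = await asyncio.gather(*(score(b) for b in branches))
            return max(zip(scores, branches))[1]

        print(asyncio.run(best_implementation("Parse RSS feeds and dedupe entries")))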

    I have seen some developers question why we even need reusable software when agents can simply rewrite anything at will. This can get tricky, because many simple, single-maintainer libraries could easily be rewritten by an agent in the course of developing software. Given the number of single-maintainer libraries that involve burned-out developers who don’t get paid for their work, this may not be a bad thing. But that doesn’t mean that maintainers will simply go away. It means that maintainers may not care about single libraries anymore; they will be managing and maintaining tool suites, large infrastructure systems, and large platforms. Single maintainers will no longer manage just a library; they will band together and manage technology ecosystems, and agentic engineering platforms will enable them to do that more effectively than ever.

    Everything Comes at a Price

    This is not to say that everything will be peachy keen with no consequences. For one thing, our massive data center buildout will have untold environmental ramifications, and as developers, we would be remiss if we did not account for the external costs of our work. Our agentic systems also come with systemic bias that is difficult to foresee and weed out as we build interfaces meant for humans. There are groups of workers who will be out of a job if current trends continue. And, of course, our new agentic systems have already been used to conduct mass surveillance and war on an industrial scale. These are just some of the societal costs that will come with our “great transformation”, but there are other, smaller-scale costs as well, and those are also worth exploring.

    We have already seen open source maintainers inundated with “slop PRs” submitted by agents. Some maintainers have elected to simply close their projects to all outside pull requests. You may call them luddites or make fun of them, but I have great sympathy, because they never signed up for this. It’s clear to me that the age of personally reviewing every incoming pull request is probably drawing to a close sooner rather than later, but right now we live in a liminal period where we’ve only begun the transition. Until we work out a community standard for both submitting and receiving agentic pull requests, we’re going to be awkwardly moving forward, often blindly, as best as we can muster, feeling our way through. This will no doubt accelerate the burnout rate of open source maintainers, and some projects will likely disappear as a result, bringing about some degree of chaos to the ecosystem.

    As I like to tell my kids, everything comes at a price. There are going to be some painful transitions, and not everyone will make it through unscathed. Some people will lose their jobs and decide that this agentic world isn’t worth the trouble. Some will be energized by how quickly they can now build things. And still others will suffer from “AI exhaustion” and “AI mania”, two phenomena that we’re only just now starting to see. We still don’t quite understand the human cost of subjecting people to these tools. But I don’t really see an alternative at the moment – the world seems to be rushing headlong towards the great agentic transformation, and I don’t see much standing in the way. My advice is to get used to it and learn as much as possible about it.

  • The New Open Source Playbook – Products and Customers in an Agentic Engineering World

    Thus far in this series, I’ve focused on various ways to align with ecosystems and communities and create or integrate with platforms. This is designed to maximize the engineering economics of your business, reducing costs, outsourcing maintenance, and benefiting from innovation that comes from outside your employer or core engineering team. But if you’re running a business, you’re probably asking, “that’s great, but how do I make money?” In the past, my snarky answer was, “create a great product that reduces your customers’ pain and saves them time. Duh…” But as time goes on, I’ve realized that what they’re really asking is how to benefit from open source innovation without giving away your core value for free. That is to say, how do you do this open source stuff and still create a moat that prevents competitors from stealing your milkshake while you establish lucrative business relationships with your customers and partners?

    Open Source Hierarchy of Products

    [Figure: a pyramid with 3 layers – top: “Paid Product”; middle: “Free Product or Open Core”; bottom: “Open Source Platform, Neutral 3rd-Party Governance”.]

    Thus far in this series, I’ve focused on the lower parts of the above pyramid. In this post, I’m going to focus on the upper parts. The lower third, which focuses on platforms, is about cost, the bottom line, and generating enough innovation to provide lift to the 2 upper layers. Platforms are about engineering economics – how do I accelerate innovation for less money than I would spend if I did it all myself? It’s about delegation, ecosystem integration, neutral 3rd parties, and open governance. The 2 upper layers are about taking the platform innovation and applying it to customer use cases: going to market and showing product-market fit. The bottom layer is a shared resource. The top layers are all yours. Even then, there’s an art to constructing your products to give you the best chance to thrive. You’ll notice that I break this section into 2 layers and not one. Even when the product is 100% yours, there’s a need to diversify your customer base and think about the multiple personas you want to bring into your fold.

    The “Freemium” or Open Core Layer

    No product category has been as poorly understood as open core or other “free to use” products. In the early to mid 2000s, there was a simple model for getting investors to put money into a startup: take an established open source project and “commercialize” it, stripping it of just enough features so that you could convince users to convert into paying customers in order to get the “creamy frosting” of paid features. This model produced a smattering of successes, but most of the companies that tried it failed. Invariably, the paid product would compete with the free version, incentivizing company leaders to put more and more features into the paid version and fewer into the free one. The end result was a bunch of unhappy users who abandoned the project, blunting whatever momentum the commercial product may have had. I do not recommend this approach.

    These days, I think about core platforms like Kubernetes, with free products built around them, such as the many freely available but commercial Kubernetes distributions, and then the for-pay vertical applications built on top of those. Each layer of the product stack is designed for a different audience and fulfills a different purpose. No one is going to take plain, vanilla Kubernetes and sell you the software bits, but they might provide an easy-to-use bundled version with some limitations for personal use, and then sell you a full product with proprietary extensions and plugins. The base platform from the Cloud Native Computing Foundation is designed for and by core contributors; the free bundle or distribution is for end users or “developer users” who want to try it out or use it for limited applications; and the commercial bundle with for-pay plugins and extensions is for customers with specific needs and little time for implementation. All are segments with different needs, and all have value in the Kubernetes ecosystem, with vendors tailoring their solutions to various use cases.

    In some cases, the free product skips the base platform entirely and is its own entity. One example of this is Splunk, which gave away a proprietary and limited but free product and provided a convenient means for customers to buy the full version. Splunk avoided the fate of the open core failures by ensuring that its free product always had an audience and always provided value, even for users who didn’t pay for it. The founders of Splunk debated whether to open source their product and ultimately decided they could deliver value to non-paying users without open sourcing – and they were proven correct. Because they never needed outside contributors to reduce costs, and because they could sustain the innovation required to land paying customers, open source wasn’t as compelling for their product strategy. Keep this in mind when I discuss agentic products below.

    Having a free product can make the difference between surviving and thriving, but you must be thoughtful about your goals and mindful of the drawbacks of different approaches. There are a few things you should keep in mind:

    • All free products should provide something of value for customers who don’t pay. There are some customers who will never ever pay for your product. Are you ok with them leaving your sphere of influence and going elsewhere? What is the value of growing your brand recognition? Can you do that without a free product?
    • Your free product is your intellectual property. The platform is the place for neutral 3rd party governance. Your free product is yours to do with as you please, whether it’s released under an open source license or not. Of course, it’s best to treat your community with respect: your free product is there to create brand ambassadors who will vouch for your company.
    • A free product with an open source license can be beneficial to your overall product strategy. It is an expression of transparency and trust that your customers will appreciate, and you can protect yourself through copyright and trademark law. It can also accelerate your brand recognition and growth in ways that a typical proprietary free product cannot – but not always. And therein lies the rub: it depends on who your customers are and what they expect. You have to decide whether the benefits outweigh the costs.
    • If you view your free product as competition to your paid version, you’ve already failed. Either you fail to understand the value of a free product, or you’ve implemented your product strategy poorly. Either way, you would do well to take a step back and rethink your strategy. Hopefully, you see this in time to course correct.

    The Paid Product

    The interesting part of paid products is that there are so many potential avenues to take. Whereas platforms and free products are relatively straightforward, paid products can take on a variety of shapes, sizes, and types: *-as-a-service, software bundles, paid consultation services, vertical integrations, vertical customer use cases, etc. This makes it easier to separate out the core value proposition of your paid solution, but it also makes it trickier to establish a conduit from free to paid. For example, if your solution is SaaS, does it make sense for your free product to be a downloadable open source software bundle? Possibly – there is enough market differentiation that the free product will not detract from the SaaS experience, but usually, you want the free version to be easy to use so that your technology becomes more ubiquitous. A difficult-to-configure software bundle would take significant effort for you to maintain and may not add enough of a benefit to justify the expense. Then again, if a free bundle enables other businesses to embed your technology and become potential OEM partners, it could allow you to expand your business in ways you hadn’t thought of. As long as giving away your product adds value to your overall product strategy and accelerates the growth of your paid solution, it’s justifiable.

    The Agentic Wrinkle

    I’ve argued in the past that agentic engineering was going to change the open source landscape significantly – there will be more open source software, not less, and a growing number of companies will need a solid open source strategy, probably more than ever before. I wrote this series for 2 main reasons:

    1. Large numbers of startup founders are taking a crash course as we speak in open source ecosystems and strategies. I want them to think through their approaches, consider what they want to achieve, and decide whether an open source approach will benefit them.
    2. In a world where autonomous software agents will write an increasing share of our source code, rules of transparency and governance in software collaboration are more important than ever. The risks are also higher than ever. This is a world where your competitors can copy your features almost as soon as you release them. How are you going to protect your business?

    Agentic engineering holds great promise for entrepreneurs. I’ve seen companies with just 2 co-founders deliver a ready-to-order product without needing to hire a team of developers. This is astounding! But I’ve also seen startups get attacked by no-innovation companies that only repackage their code and still get millions in investment dollars. The emergence of agentic engineering tips the scales in a few interesting ways.

    • Platforms are still valuable. In fact, having a neutral location for platform development may be more valuable than ever – a dynamic, growing platform will also attract agentic development, which means the platforms will become more dynamic and robust, providing more growth fuel for your intellectual property.
    • Protect your intellectual property. Releasing a free product as open source may actually be safer than releasing a proprietary version with no source code. Open source code released under your trademark and copyright gives you a way to audit what competitors release. Embedding clues within your code will help you determine whether other companies have rebranded your intellectual property, whereas an agent reverse-engineering the features of your proprietary product will be almost undetectable. (See the sketch after this list.)
    • You will have to adapt. For every startup out there: the game has changed. Our entire way of designing, building, testing, and delivering software has changed forever and is being rewritten before our eyes. Entire platforms will be torn down and replaced by new ones with incredible speed. If you haven’t adopted this methodology, you will be left behind.
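
    To illustrate the “embedded clues” idea from the list above, here is a deliberately simplistic, hypothetical sketch. Real fingerprinting would be subtler – distinctive identifiers, test fixtures, formatting quirks – but the shape of the idea is the same: plant a marker that passes as an ordinary constant, then audit for it later.

        import hashlib

        def make_canary(company: str, release: str) -> str:
            # Derive an innocuous-looking constant from your name and release.
            # Scatter it through your codebase as an ordinary "magic number".
            return hashlib.sha256(f"{company}:{release}".encode()).hexdigest()[:12]

        def contains_canary(source_text: str, company: str, release: str) -> bool:
            # Audit someone else's published source for your marker.
            return make_canary(company, release) in source_text

        # If make_canary("ExampleCo", "2.4.1") shows up verbatim in a competitor's
        # "original" release, you have evidence that your code was rebranded.
        print(make_canary("ExampleCo", "2.4.1"))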

    There are some incredible challenges ahead. In the past, companies could separate their free from paid products through data: the software was free, but the data or “content” was what customers paid for. In an agentic world, data is a core part of any product – there is no such thing as a software-only solution. And in a world where agents can regenerate content with striking speed, data is no longer the product moat it once was. Tech vendors will have to learn how to deliver free agentic tools, complete with data, that still provide an avenue for conversion to paid, commercial solutions.

    As you think through your product strategy, consider these questions:

    • Platforms: What is your platform strategy? Where is collaboration within an ecosystem helpful?
    • Free products: What can you give away for free that will accelerate your growth strategy?
    • Paid products: How can you create a compelling product over and above what’s available for free?
    • Agentic engineering: How will you benefit from an agentic world? How do you protect your value proposition?
  • Change Agents in Large Organizations

    “Everyone has a plan until they get punched in the mouth.” — Mike Tyson

    An economist was once asked, “If you were stranded on a desert island, how would you survive?” The economist pondered this great question for some time and then proudly ventured his answer: “Assume a perfect world…” — old joke about economists

    I am not known for my love of business or management books; quite the opposite, actually. When I try to articulate why, it generally comes down to boredom and a decided lack of enthusiasm for the subject. It’s not that I don’t appreciate the appeal of business topics or the act of conducting business. Far from it. It’s more that there’s an utter futility to the idea that we can do it better. I led off this post with the 2 quotes above to illustrate my reasoning. They seem to come from very different points of view, and yet, in my mind, they are very much related:

    • Business books are not reflective of lived experience or real-world incentives, just like the economist in the joke above, and…
    • they’re hopelessly naive, unable to account for what happens the first time a practitioner gets “punched in the face” (not literally, of course, or at least not usually)

    Both quotes illustrate the difficulty of putting a plan into action, either because you didn’t account for resistance (Mike Tyson) or because you ignored real-world constraints (the economist). I dislike business books for the same reason I dislike management consultants, strategists, market analysts, pundits, and any other pointy-haired expert who tries to tell me how to do my job better: because their words prove almost useless against the realities of functioning, much less thriving, in a real-life bureaucratic system. With that in mind, I’m now going to do what I probably never should: give advice on how to be a change agent in large bureaucratic organizations. Given what I wrote above, you could be forgiven for asking, “Why?” The answer is rather simple: despite all my experience, which tells me I should really know better, at the end of the day I naively follow an insatiable desire to drive change. Knowing better doesn’t stop me from trying. The act of futile resistance against the borg is buried deep, deep inside my psyche. It’s terminal, I’m afraid.

    Never, Ever be a Change Agent

    The first thing to know about being a change agent is not to be one. Just don’t do it. No one is ever going to congratulate you on your futility or give you an award for repeatedly beating your head against countless walls. Just don’t. The best that can happen is that somebody else advances themselves based on your ideas, picks up the pieces after you’ve been beaten to a pulp, and then gets the accolades. The worst that can happen is that nobody ever listens to you at all and you toil away in silence and isolation, walled off from doing any real damage. Note that getting fired is not the worst outcome. In fact, it’s a great way to confirm you were on to something and extricate yourself from a terrible situation, preventing you from expending energy on fruitless efforts that nobody will acknowledge. Getting fired is a merciful end. Futilely trudging along with no end in sight? Now that’s inhumane torture.

    To successfully make changes in an organization, think very carefully about who you need to convince and bring along for the ride: your upper management. That’s right, the very people who have benefited from the existing systems. Remind me, what is their incentive for changing? Their interest goes only as far as their incentives. To be a successful change agent, you have to convince them that change is in their interest, and that’s a pretty high bar. Your leaders have to be in such a position that they see an urgent need for change that will benefit the organization – but also simultaneously benefit them. The stars have to be aligned just so, and you will need to take extra care to spot the right opportunities. I cannot emphasize this point enough: the stars do not tend to align except in particular circumstances. You have to learn to be very adept at spotting those particular circumstances. As I said, most of the time it’s not worth it.

    For the remainder of this post, I’m going to assume that you are disregarding my hard-earned, well-considered advice and have chosen to proceed on this adventure in masochism.

    Ok, Fine. You’re a Change Agent. Now What?

    The first thing to know about large organizations is that they never fail. Where most change agents go wrong is in erroneously assuming that organizations are failing. If you are asking the question, “why is this organization failing?”, know that you are asking precisely the wrong question. Organizations behave exactly as they are designed to, and one thing they are designed to do is resist change. When you zoom out and consider the possible outcomes of an organization’s lifecycle, this is not a bad thing. A long-lived organization needs to survive the whims and myopia of misguided leaders as well as the ups and downs of its industry and market. Resistance to change is a design feature, not a bug. Organizations are so good at self-perpetuation that they are quite adept at identifying and neutralizing potential threats, i.e., people who want to change things. How this happens depends on the environment, from putting you on projects that keep you tied up (and away from the things management cares about) to just flushing you out entirely if you prove too meddlesome.

    This is why I get annoyed with most attempts to effect change: they assume that organizations need to be better, and they assume that their pet project is the way to do that. Thus we have movements like Agile and DevOps, which started off as ways to change organizations and were eventually subsumed by the beast, becoming tools that organizations use to perpetuate their existence without actually changing anything. The authors of the Agile manifesto wanted to change how technology worked in organizations, but what they actually did was give large organizations the language they needed to perpetuate the same incentive structures and bureaucracy they always had. DevOps was going to put Agile methodology into practice and empower (whatever that means) technologists to take a larger stake in the business. I’m pretty sure CIOs are still laughing about that one. In the meantime, we still get design by committee, the inability to make decisions, and endless red tape to prevent change from actually happening. Again, this isn’t necessarily bad from the perspective of a business built to last; it’s just really annoying if you expect things to move after you push. My advice: adjust your expectations.

    Incentives and Human Behavior

    The reason most change initiatives fail is because they don’t account for the reality of incentives and the influence of human behavior. Large organizations have evolved intricate systems of carrots and sticks to reward certain behaviors and punish or at least discourage behaviors deemed impolitic. Want to know why teams don’t collaborate across organizations? Because they’re not rewarded for doing so. Why do leaders’ edicts get ignored? Because teams are incentivized to stay the course and not switch abruptly.

    Agile failed in its original intent because it naively assumed that incentives would be aligned with faster development and delivery of technology. What it failed to calculate was that any change in a complex system incurs a premium cost, a tax for making the change. Any change to a legacy system with complex operations will have unknown consequences and therefore unknown costs. The whole point of running a business effectively is to be able to predict P&L with some accuracy. Changes to legacy systems carry inherent risk, which disincentivizes organizations from adopting them at scale. Thus Agile morphed from accelerated development and delivery into a different type of bureaucracy serving the same purpose as the old one: preventing change. Except now it uses fancy words like “scrums”, “standups”, and “story points”. As Charles Munger put it, “Show me the incentive and I’ll show you the outcome.” If the avoidance of risk is incentivized and rewarded, then practitioners in your organization will adopt that as their guiding principle. If your employees get promoted for finishing their pet projects and not for collaborating across the organization, guess what they will choose to do with their time?

    It’s this naive disregard of humanity that dooms so many change initiatives. Not everyone wants to adopt your particular changes, and there may be valid reasons for that. Not everyone wants to be part of an initiative that forever changes an organization. Some people just want to draw a paycheck and go home. To them, change represents risk to their future employment. Any change initiative has to acknowledge one universal aspect of humanity: to most people, change is scary. Newsflash: some people don’t tie their identities to their jobs. I envy them, honestly. And still others aren’t motivated to change their organization. They are just fine with the way things are.

    Parasite-Host Analogy

    And how do organizations prevent change? By engaging in what I call the “host immune response.” If you’re familiar with germ theory and disease pathology, you know that most organisms have evolved means to prevent external threats from causing too much harm. Mammals produce mucus, which surrounds viruses and bacteria in slimy goo to prepare them for expulsion from the body, preventing these organisms from multiplying internally and damaging organs. Or the host will wall off an intruder, not eradicating or expelling it, just allowing it to exist where it can’t do any damage, like a cyst. Or an open source community.

    Within this host immune response and parasite analogy lies the secret to potential success: symbiosis. If you remember your high school biology textbook (and really, who doesn’t?), you’ll recall that symbiosis is the result of 2 species developing a mutually beneficial relationship. Nature provides numerous examples of parasitic relationships evolving into symbiosis: some barnacle species and whales; some intestinal worms and mammals; etc. In this analogy, you, the change agent, are the parasite, and the organization you work for is the host. The trick is for the parasite to avoid getting ejected from the host. To do that, the parasite has to be visible enough for its benefits to be felt, but not so visible as to inflame the host. It’s quite the trick to pull off. To put this in more practical terms: don’t announce yourself too loudly, and get in the habit of showing, not telling.

    Oh dear… I’ve now shifted into the mode of giving you a ray of hope. I’m terribly sorry. I fear that my terminal case of unbridled optimism has now reared its ugly head. Fine. Even though it’s probably pointless and a lost cause, and you’re only signing up for more pain, there are some things you can do to improve your chances of success from 0% to… 0.5%?

    Show, Don’t Tell

    There are few things large organizations detest more than a loud, barking dog. The surest route to failure is to raise the expectations of everyone around you, which is exactly what happens when you talk up your vision and plant the seeds of hope.

    Stop. Talking.

    Open source projects serve as a great point of reference here. Sure, many large open source projects undergo some amount of planning, usually in the form of a backlog of features they want to implement. Most well-run, large open source projects have a set of procedures and guidelines for how to propose new features and present them to the core team as well as the community at large. Generally speaking, they do not write reams of text in the form of product requirements documents. They will look at personas and segmentation. They will create diagrams that show workflows. But generally speaking, they lead with code. Documentation and diagrams tend to happen after the fact. Yes, they will write or contribute to specifications, especially if a project requires in-depth integration or collaboration with another project, but the emphasis is on releasing code and building out the project. With open source as my point of reference, imagine my surprise when I started working in large organizations and discovered that most of them do precisely the opposite. They write tomes of text about what they are thinking of building and what they wish to build, before they ever start to actually build it. This runs counter to everything I’ve learned working in open source communities. Given my above points about not changing too much too quickly, what is a change agent to do?

    Prototype. Bootstrap. Iterate.

    Open source innovation tells us that the secret to success is to lead with the code. You want to lead change? Don’t wait for something to be perfect. Do your work in the open. Show your work transparently. Iterate rapidly and demonstrate continuously. Others will want to create the PRDs, the architectural documents, the white papers, and the other endless reams of text that no one will ever read. Let them. It’s a necessary step – remember, your job is to not trigger the host immune response, and you can do that by letting the usual processes continue. What you are going to do is make sure that the plans being written are represented in code, in a form that’s accessible to your target audience, and that you get it in front of that audience as soon as it’s available. Without a working representation of what is being proposed, your vision is wishful thinking and vaporware.

    The reasons are simple: if you expend your time and energy building up expectations for something that doesn’t exist yet, you risk letting the imaginations of your customers and stakeholders run wild. By limiting your interactions to demonstrations of what exists, the conversation remains grounded in reality. If you continuously present a grand vision of “the future”, you will allow perfect to become the enemy of good: your customers will have a moving target in their minds that you will never be able to satisfy. By building up expectations and then attempting to meet them, you are setting the stage for failure. With continuous iteration, on the other hand, you help prevent expectations from exceeding what you are capable of delivering. There’s also the added benefit of showing continuous progress.

    Borrowing from the open source playbook is a smart way to lead change in an organization, and it doesn’t necessarily need to be limited to code or software. Continuous iteration of a product or service being delivered can apply to documentation, process design, or anything that requires multi-stage delivery. By being transparent with your customers and stakeholders and bringing them with you on the journey, you give them an ownership stake in the process. This ownership stake can incentivize them to collaborate more deeply, moving beyond customer into becoming a full-fledged partner. This continuous iteration and engagement builds trust, which helps prevent the host from treating you like a parasite and walling you off.

    Remember, most people and organizations don’t like change. It scares them. By progressing iteratively, your changesets become more manageable, as well as more palatable to the organization. This is the way to make your changes seem almost unnoticeable, flying under the radar and yet very effective, ultimately arriving at your desired outcome.

    Prototype. Bootstrap. Iterate.

  • The New Open Source Playbook – Platforms Part Deux

    (This is the 2nd post in a series. Part 1 is here)

    I was all set to make this 2nd post about open core and innovation on the edge, and then I realized that I should probably explore the concept of “lift” in a bit more detail. Specifically, if you’re looking for your platform strategy to give your technology products lift, what does that mean exactly? This goes back to the idea that a rising tide lifts all boats. If you think of the rising tide as a growing community of users or developers, and the boat as your particular software project, then you want a strategy where your project benefits from a larger community. A dynamic, growing community will be able to support several “boats” – products, projects, platforms, et al. A good example of this is the community around Kubernetes, the flagship project of the Cloud Native Computing Foundation (CNCF).

    How Do We Generate Lift?

    There are 2 basic types of lift you will be looking for – user lift, where more people adopt your platform, and developer lift, where more developers contribute to your platform. The former gets more people familiar with your particular technology, providing a base of potential future customers, and the latter allows you to reduce your engineering costs and potentially benefit from new ideas that you didn’t think of. This means that the community or ecosystem you align with depends on the goals for your platform. If you want more users, that is a very different community strategy from wanting more collaborators. Many startups conflate these strategies, which means they don’t always get the results they’re looking for.

    Let’s assume that you have a potential platform categorized in the same cloud native space as Kubernetes. And let’s assume that you’ve determined that the best strategy to maximize your impact is to open source your platform. Does that mean you should put your project in the CNCF? It depends! Let’s assume that your product will target infosec professionals, and you want to get feedback on usage patterns for common security use cases. In that case, the Kubernetes or CNCF communities may not be the best fit. If you want security professionals getting familiar with and adopting your platform, you may want to consider security-focused communities, such as those that have formed around SBOM, compliance, and scanning projects. Or perhaps you do want to see how devops or cloud computing professionals would use your platform to improve their security posture, in which case Kubernetes or CNCF make sense. Your target audience will determine which community is the best fit.

    Another scenario: let’s assume that your platform is adjacent to Kubernetes and you think it’s a good candidate for collaboration with multiple entities with a vested interest in your project’s success. In that case, you need developers with working knowledge of Kubernetes architecture, and the Kubernetes community is definitely where you want your project to be incubated. It’s not always so straightforward, however. If you’re primarily looking for developers who will extend your platform, making use of your interfaces and APIs, then perhaps it doesn’t matter if they have working knowledge of Kubernetes. Maybe in this case, you would do well to understand developer use cases and which vertical markets or industries your platform appeals to, and then follow a different community trail. Platform-community fit for your developer strategy is a more nuanced decision than product-market fit. The former is much more multi-dimensional than the latter.

    If you have decided that developers are key to your platform strategy, you have to decide what kind of developers you’re looking for: those that will *extend* your platform; those that will contribute to your core platform; or those that will use or embed your platform. That will determine the type of lift you need and what community(ies) to align with.

    One more example: you’re creating a platform that you believe will transform the cybersecurity industry, and you want developers who will use and extend your platform. You may at first be attracted to security-focused communities, but then you discover a curious thing: cybersecurity professionals don’t seem fond of your platform and haven’t adopted it at the scale you expect or need. Does this mean your platform sucks? Not always – it could be that these professionals are highly opinionated and have already made up their minds about which platforms to base their efforts on. However, it turns out that your platform helps enterprise developers be more secure. Furthermore, you notice that within your enterprise developer community, there is overlap with the PyTorch community, which is not cybersecurity focused. This could be an opportunity to pivot on your adoption strategy and go where your community is leading: PyTorch. Perhaps that is a more ideal destination for community alignment purposes. Before committing, however, you can do some testing within the PyTorch community.

    Learn From My Example: Hyperic

    Hyperic was a systems management and monitoring tool. These days we would put it in the “observability” category, but that term didn’t exist at the time (2006). The Hyperic platform was great for monitoring Java applications. It was open core, so we focused on adoption by enterprise developers, not contributions. We thought we had a great execution strategy to build a global user base that would use Hyperic as the basis for all of their general-purpose application monitoring needs. From a community strategy perspective, we wanted Hyperic to be ubiquitous, used in every data center where applications were deployed and managed. We had a great tag line, too: “All Systems Go”. But there was a problem: although Hyperic could be used to monitor any compute instance, it really shined when used with Java applications. Focusing on general systems management put us in the same bucket, product-wise, as other general-use systems management tools, none of which were able to differentiate themselves from one another. If we had placed more of our community focus on Java developers, we could have ignored all of the general-purpose monitoring and focused on delivering great value for our core audience: Java development communities. Our platform-community fit wasn’t aligned properly, and as a result, we did not get the lift we were expecting. That meant our sales team had to work harder to find opportunities, which put a drag on our revenue and overall momentum. Lesson learned…

    If you’re attempting a platform execution strategy and going the open source route, platform-community fit is paramount. Without it, you won’t get the lift you’re expecting. You can always change your community alignment strategy later, but it’s obviously better if you get it right the first time.

  • The New Open Source Playbook

    (This is the first in a series)

    For the last few years, the world of commercial open source has been largely dormant, with few startup companies making a splash with new open source products. Or if companies did make a splash, it was for the wrong reasons – see, e.g., HashiCorp’s Terraform rug pull. It got to the point that Jeff Geerling declared that “Corporate Open Source is Dead”, and honestly, I would have agreed with him. It seemed that the age of startups pushing new open source projects and building a business around them was a thing of the past. To be clear, I always thought it was naive to believe you could simply charge money for a rebuild of open source software, but the fact that startups kept trying showed that there was momentum behind the idea of using open source to build a business.

    And then a funny thing happened – a whole lot of new energy (and money) started flowing into nascent companies looking to make a mark in… stop me if you’ve heard this one… generative AI. Or to put it in other words, some combination of agents built on LLMs that attempt to solve some automation problem, usually in the category of software development or delivery. It turns out that when there’s lots of competition for users, especially when those users are themselves developers, a solid open source strategy can make the difference between surviving and thriving. In light of this newfound enthusiasm for open source among startups, I thought I’d write a handy guide for startups looking to incorporate open source strategy into their developer go-to-market playbook. Except in this version, I will incorporate nuances specific to our emerging agentic world.

    To start down this path, I recommend that startup founders look at 3 layers of open source go-to-market strategy: platform ecosystem (stuff you co-develop), open core (stuff you give away but keep the IP for), and product focus (stuff you only allow paying customers to use). That last category, product focus, can be on-prem, cloud-hosted, or SaaS – it won’t matter, ultimately. Remember, this is about how to create compelling products that people will pay for, helping you establish a business. There are ways to use open source principles that can help you reach that goal, but proceed carefully. You can derail your product strategy by making the wrong choices.

    Foundation: the Platform Ecosystem Play

    When thinking about open source strategy, many founders have thought they could release open source code and get other developers to work on it for free, as a new model of outsourcing. This almost never works the way founders imagine. What ends up happening is that a startup releases open source code and their target audience happily uses it for free, often without contributing back, leading a number of startups to question why they went down the open source path to begin with. Don’t be like them.

    The way to think of this is within the concept of engineering economics. What is the most efficient means to produce the foundational parts of your software?

    • If the answer is by basing your platform on existing open source projects, then you figure out how to do that while protecting your intellectual property. This usually means focusing on communities and projects under the auspices of a neutral 3rd party, such as the Eclipse or Linux Foundation.
    • If the answer is by creating a new open source platform that you expect to attract significant interest from other technology entities, then you test product-market fit with prospective collaborators and organizations with a vested interest in your project. Note: this is a risky strategy requiring a thoughtful approach and ruthless honesty about your prospects. The most successful examples of this, such as Kubernetes, showed strong demand from the outset and their creation was a result of market pull, not a push.
    • If the answer is that you don’t need external developers contributing to your core platform, but you do need end users and data on product-market fit, then you look into either an open core approach or a free product that gives the platform away, though not necessarily under an open source license. This is usually for cases where you need developers to use or embed your product, but you don’t need them contributing directly. This is the “innovation on the edge” approach.
    • Or, if the answer is that you’ll make better progress by going it alone, then you do that and don’t give it a 2nd thought. The goal is to use the most efficient means to produce your platform or foundational software, not to score points on Hacker News.

    Many startups through the years have been tripped up by this step, misguidedly believing that their foundational software was so great that, once they released it, thousands of developers would climb over one another to contribute.

    In the world of LLMs and generative AI, there is an additional consideration: do you absolutely need the latest models from Google, OpenAI, or elsewhere, or can you get by with slightly older models less constrained by usage restrictions? Can you use your own training and weights with off-the-shelf open source models? If you’re building a product that relies on agentic workflows, you’ll have to consider end-user needs and preferences, but you’ll also have to protect yourself from downstream usage constraints, which could hit you if you reach certain thresholds of popularity. When starting out, I wholeheartedly recommend having as few constraints as possible, opting for open source models whenever possible, but also giving your end users the choice if they have existing accounts with larger providers. This is where a platform approach helps you address product-ecosystem fit as early as possible. If you can build momentum while architecting your platform around open source models and model orchestration tools, your would-be platform contributors will let you know early on. Having an open source platform approach will help you guide your development in the right direction. Building your platform or product foundation around an existing open source project will be even more insightful, because that community will likely already have established AI preferences, helping make the decision for you.
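
    As a minimal sketch of what “few constraints, user choice” might look like in practice, consider keeping the model behind a single interface, so the default is an open-weights model and a commercial provider is strictly opt-in. Every name here is hypothetical; the endpoint and calls are placeholders, not any particular vendor’s API.

        from typing import Optional, Protocol

        class ChatModel(Protocol):
            # The only surface the rest of the product is allowed to depend on.
            def complete(self, prompt: str) -> str: ...

        class OpenWeightsModel:
            # Default: a self-hosted open-weights model behind a local endpoint.
            def __init__(self, endpoint: str = "http://localhost:8000/v1"):
                self.endpoint = endpoint

            def complete(self, prompt: str) -> str:
                # Placeholder for the actual HTTP call to your inference server.
                return f"[open-weights completion via {self.endpoint}]"

        class BYOProviderModel:
            # Opt-in: the user's existing account with a larger provider.
            def __init__(self, api_key: str):
                self.api_key = api_key

            def complete(self, prompt: str) -> str:
                # Placeholder for the commercial provider's API call.
                return "[commercial-provider completion]"

        def pick_model(user_api_key: Optional[str] = None) -> ChatModel:
            # Open source by default; commercial only when the user brings a key.
            return BYOProviderModel(user_api_key) if user_api_key else OpenWeightsModel()

        print(pick_model().complete("Summarize this changelog"))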

    To summarize, pick the approach that best fits your goals and product plans:

    • Find the ecosystem that matches your goals and build your platform strategy within a community in that ecosystem, preferably on an existing project.
    • Barring that, create your own open source platform but maintain close proximity to adjacent communities and ecosystems, looking for lift from common users to help determine platform-ecosystem fit.
    • Or build an open core platform, preferably with a set of potential users from an existing community or ecosystem who will innovate on the edge, using your APIs and interfaces.
    • If none of those apply, build your own free-to-use proprietary platform but maintain a line of sight to platform-ecosystem fit.

    No matter how you choose to build or shape a platform, you will need actual users to provide lift for your overall product strategy. You can get that lift from core contributors, innovators on the edge, adoption by your target audience, or some combination of these. How you do that depends on your needs and the expectations of your target audience.

    Up Next: open core on the edge and free products.

  • Open Source is About to Undergo Substantial Change

    …And Most Open Source Communities Aren’t Ready

    It’s probably gauche to talk about “AI” by now. AI this… AI that… and most of the time, what we’re really talking about is predictive text machines, aka LLMs. But today I want to talk about what I see happening in the open source world, how I see things changing in the not-too-distant future, and how much of that will be shaped by these predictive text machines. The agentic world is growing very quickly, and even if the largest LLMs are starting to plateau, LLM-backed services are still accelerating in their product growth, for the simple reason that developers are figuring out how to add rules engines and orchestration platforms to build out targeted vertical services (think tools for reading radiology and MRI scans, for example). A great analogy from computing history for this shift from LLMs to agentic “SLMs” is the shift in emphasis from the single CPU as the definition of compute power to multi-core CPUs along with faster RAM, NVMe, larger onboard caches, and of course, GPUs. When we think about compute power today, we don’t refer to the chip speed, which is a far cry from the late ’90s and early 2000s. Believe it or not, kids, there was a time when many people thought that Moore’s law applied to the clock speed of a CPU.

    For some time now, source code has been of little value. There’s so much of it. Nobody buys source code. I’ve made this point before in a series of posts on the subject. 20 years ago, I noted how internet collaboration was driving down the price of software because of the ubiquity of source code and the ability to collaborate beyond geographic borders. This trend, unceasing now for 25+ years, has hit an inflection point and is accelerating beyond its previous rate. This is, of course, because of the oncoming train that is AI, or more specifically, agentic LLM-based systems that are starting to write more and more of our source code. Before I get into the full ramifications of What This Means for Open Source (tm), let me review the 2 previous transformative eras in tech that played a pivotal role in bringing us to this point: open source and cloud.

    Open Source Accelerated the Speed of Development

    A long, long time ago, software vendors had long release cycles, and customers had no choice but to wait 1-2 years, or longer depending on the industry, for the long cycle of dev, test, and release to complete. And then a funny thing happened: more people got online and suddenly created a flurry of core tools, libraries, and systems that gave application developers the ultimate freedom to create whatever they wanted without interference from gate-keepers. I cannot over-emphasize the impact this had on software vendors. At first, it involved a tradeoff: vendors were happy to use the free tools and development platforms, because they saw a way to gain a market edge and deliver faster. At the same time, startups also saw an opportunity to capitalize on this development and quickly create companies that could compete with incumbents. In the late 90s, this meant grabbing as much cash as possible from investors in the hopes of having an IPO. All of this meant that for every advance software vendors embraced from the open source world, they were also effectively writing checks that future competitors would cash, which required that established vendors release even more quickly, lather, rinse, repeat, and find vertical markets where they could build moats.

    Cloud Accelerated the Speed of Delivery

    If open source accelerated the speed of development, the emergence of what became “cloud technologies” enabled the delivery of software at a speed and scale previously thought to be impossible. Several smart companies in the mid-2000s saw this development and started to enact plans to capitalize on the trend of outsourcing computing infrastructure. The companies most famous for leading the charge were Amazon, which created AWS in 2006; Netflix, which embraced AWS at an early stage; Google, which created Borg, the predecessor to Kubernetes; and Salesforce, which created its cloud-based PaaS, Force.com, in 2009. Where open source gave small, growing companies a chance to compete, cloud did the same, but also at a price. Established software vendors started moving to cloud-based systems that allowed them to deliver solutions to customers more quickly, and startups embraced cloud because they could avoid capital expenditures for data center maintenance. Concurrently, open source software continued to develop at a fast pace for the simple reason that it enabled the fast development of technologies that powered cloud delivery. Similar to open source, the emergence of cloud led directly to faster release cycles and increasing competition. Unlike open source, however, cloud computing allowed established cloud companies to build out hegemonic systems designed to exact higher rental fees over time, pulling customers deeper into dependencies that are increasingly difficult to unravel. Software vendors that thought open source developers were the architects of their demise in the early 2000s hadn’t yet met Amazon.

    All of these developments and faster release cycles led to a lot more source code being written and shared, with GitHub.com emerging as the preferred source code management system for open source communities. (Pour one out for Sourceforge.net, which should have captured this market but didn’t.) Sometimes this led companies to think that maybe their business wasn’t cut out for this world of source code sharing, so they began a retrenchment from their open source commitments. I predicted that this retrenchment would have little impact on their viability as a business, and I was right. If only they had asked me, but I digress…

    All of this brings us to our present moment, where source code is less valuable than ever. And in a world where the value of something is depreciating, how do we ensure that the rules of engagement remain fair for all parties?

    Sorry Doubters: AI Will Change Everything

    If open source accelerated development and cloud accelerated delivery, then AI is accelerating both, simultaneously. Code generation tools are accelerating the total growth of source code; they are accelerating the ongoing blurring of the boundary between hardware and software; and they are (potentially) creating automated systems that deliver solutions more quickly. That last one has not yet been realized, but with the continuing growth of agentic workflows, orchestrators, and rules engines, I would bet my last investment dollar on that trend realizing its potential sooner rather than later.

    What does this portend? I think it means we will need to craft new methods of managing and governing all of this source code. I think it means that rules of collaboration are going to change to reflect shifting definitions of openness and fairness in collaboration. I think it means that previously staid industries (read: semiconductors) are facing increasing pressure in the form of power consumption, speed of data flow, and increasingly virtualized capabilities that have always lived close to the silicon. And I think a whole lot of SaaS and cloud native vendors are about to understand what it means to lose your “moat”. The rise of agentic systems is going to push new boundaries and flip entire industries on their heads. But for the purpose of this essay, I’m going to focus on what it means for rules of collaboration.

    What is the Definition of Open Source?

    For many years, the definition of open source has been housed and governed by the Open Source Initiative (OSI). Written in the post-Cold War era of open borders and free trade, it’s a document very much of its time. In the intervening years, much has happened. Open source proliferation happened, and many licenses were approved by the OSI as meeting the requirements of the Open Source Definition (OSD). State-sponsored malware happened, sometimes inflicting damage on the perceived safety of open source software. Cloud happened, and many open source projects were used in the creation of “cloud-native” technologies. And now LLM-based agentic systems are happening. I mention all of this to ask: in what context is it appropriate to consider changes to the OSD?

    One of the reasons open source governance proved to be so popular is that it paved the way for innovation. Allow me to quote my own definition of innovation:

    Innovation cannot be sought out and achieved directly. It’s like happiness. It emerges when you lay the foundation and establish the rules that enable it to flourish.

    In open source communities and ecosystems, every stakeholder has a seat at the table, whether they are individuals, companies, governments, or any other body with a vested interest. That is the secret of its success. When you read the 10 tenets of the OSD, it boils down to “establishing the rules of collaboration that ensure fairness for all participants.” Basically, it’s about establishing and defending the rights of stakeholders, namely the ability to modify and distribute derivative works. In the traditional world of source code, this is pretty straightforward. Software is distributed. Software has a license. Users are held to the requirements of that license. We already saw the first cracks in this system when cloud computing emerged, because the act of distributing… sorry, “conveying”… software changed significantly once software was consumed over a network rather than installed locally. And the idea of derivative works was formed at a time when software was compiled with shared library binaries (.so and .dll) that were pulled directly into a software build. Those notions have grown quaint over time, and the original terms of the OSD have become increasingly exploitable. What use is a software license when we don’t technically “use software”? We mostly chose not to deal with this issue, pretending that nothing had changed. For the most part, open source continued to flourish, and more open source projects continued to fuel the cloud computing industry.

    But now we’re bracing for another change. How do we govern software when we can’t even know whether it was written by humans? Agentic systems can now modify and write new source code with little human intervention. I will not comment on whether this is a good idea, merely that it is happening. Agentic systems can take the output of cloud-based services and write entire applications that mimic their feature set. Does that meet the definition of open source? Does it violate the EULA of a cloud service? And if companies can recreate the entire code base of a project based only on the requirements of applications that use it, does that violate the terms of reciprocal licenses like the GPL? And this is before we even get to the issues of copyright pertaining to all the source code that had to feed the models in order to write code.

    If we return to answering the question “how do we protect the rights and ensure the fairness of all participants”, how do we prepare for these changes? I think a few things are in order:

    • The right to reverse engineer must be protected to meet the definition of Open Source. This means that the ability to recreate, modify, and redistribute a model, cloud service, or really anything in technology that we use has to be protected. For years, cloud providers have built complexity into their services that makes them very difficult to replicate at scale. That is now changing, and it is a good thing.
    • This means that the ability to recreate, modify, and redistribute models must also be protected if they are to carry the moniker of Open Source.
    • Agents must abide by licensing terms in order to be categorized as open source. If you call your agentic systems open source, they must be able to interpret and abide by software licenses. This effectively means that all agentic systems will need to include a compliance persona in order to meet the definition of Open Source (a rough sketch of such a gate follows this list).
    • Maintainers of Open Source projects must have a way to quickly dismiss the output of agentic systems that file bug and vulnerability reports. This means that in order to meet the open source definition, agentic systems that fit in that category will have to abide by a standard that maintainers use to signal their willingness to accept input from agents. If maintainers decline, then agentic systems will either avoid these projects, or push their inputs and changes into forked repos maintained elsewhere.
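
    To make the “compliance persona” idea concrete, here is a minimal sketch, in Python, of the kind of pre-flight gate a code-writing agent might run before opening a pull request. Everything in it is an assumption for illustration: the AGENTS-POLICY file, its “agents: deny” convention, and the bare-SPDX-identifier LICENSE format are hypothetical, not existing standards.

    ```python
    # Hypothetical pre-flight gate for a code-writing agent. The
    # AGENTS-POLICY file and its "agents: deny" convention are
    # invented here for illustration; no such standard exists today.
    from pathlib import Path

    # Licenses this (hypothetical) agent is configured to honor.
    UNDERSTOOD_LICENSES = {"MIT", "Apache-2.0", "BSD-3-Clause", "GPL-3.0-only"}

    def agent_may_contribute(repo_root: str) -> bool:
        """True only if the maintainers have not opted out of agent
        input AND the repo declares a license the agent can honor."""
        root = Path(repo_root)

        # 1. Respect a maintainer opt-out signal.
        policy = root / "AGENTS-POLICY"
        if policy.exists():
            lines = policy.read_text().splitlines()
            if lines and lines[0].strip().lower() == "agents: deny":
                return False

        # 2. Refuse to touch code whose license the agent cannot
        #    interpret; abstaining is safer than violating terms.
        #    We assume the LICENSE file's first line is a bare SPDX
        #    identifier (a further simplification; real license
        #    detection is much harder).
        license_file = root / "LICENSE"
        if not license_file.exists():
            return False
        declared = license_file.read_text().splitlines()
        return bool(declared) and declared[0].strip() in UNDERSTOOD_LICENSES
    ```

    A real standard would need far more nuance (per-path rules, rate limits, provenance requirements), but even a gate this crude would give maintainers a machine-readable way to opt out.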

    These are just a few ideas. The bottom line is that the open source ethos guarantees all stakeholders a seat at the table, and we must be willing to make changes to our governing rules in order to ensure fairness for all parties. To do otherwise is to shirk our responsibility and pretend it’s still 1999. No change to the open source definition should be taken lightly, but as the governing document that protects the rights of those who participate in open source communities, we need to make sure it doesn’t become more easily exploitable by monopolistic companies, or by those who wish to extort community members or commit harmful acts.

    Open Source communities and maintainers are not yet prepared for these changes, and it’s our job as community members to make sure that these communities, the backbone of open source innovation, remain vibrant and strong.

  • AI Native and the Open Source Supply Chain

    I recently wrote 2 essays on the subject of AI Native Automation over on the AINT blog. The gist of them is simple, and it’s their latter point, that open source ecosystems and InnerSource rules are about to become more important than ever, that I want to dive a bit deeper into here. But first, a disclaimer:

    We have no idea what the ultimate impact of "AI" is to the world, but there are some profoundly negative ramifications that we can see today: misinformation, bigotry and bias at scale, deep fakes, rampant surveillance, obliteration of privacy, increasing carbon pollution, destruction of water reservoirs, etc. etc. It would be irresponsible not to mention this in any article about what we call today "AI". Please familiarize yourself with DAIR and its founder, Dr. Timnit Gebru.

    When I wrote that open source ecosystems and InnerSource rules were about to become more important than ever, I meant that as a warning, not a celebration. If we want a positive outcome, we’ll have to make sure that our various code-writing agents and models subscribe to agreed-upon rules of engagement. The good news is that we now have over 25 years of practice running open source projects at scale, which gives us a basis to police whatever is about to come next. The bad news is that open source maintainers are already overwhelmed as it is, and they will need serious help to address what is going to be an onslaught of “slop”. This means that 3rd party mediators will need to step up their game to help maintainers, which is a blessing and a curse. I’m glad that we have large organizations in the world to help with the non-coding aspects of legal protections, licensing, and project management. But I’m also wary of large multi-national tech companies wielding even more power over something as critical to the functioning of society as global software infrastructure.

    We already see stressors from the proliferation of code bots today: too many incoming contributions that are – to be frank – of dubious quality; new malware vectors such as “slopsquatting”; malicious data injections that turn bots into zombie bad actors; malicious bots that probe code repos for opportunities to slip in backdoors; etc. – it’s an endless list, and we don’t yet know the extent to which state-sponsored actors are going to use these new technologies to engage in malicious activity. It is a scary emerging world. On one hand, I look forward to seeing what AI Native automation can accomplish. But on the other, we don’t quite understand the game we’re now playing.

    Here are some of the ways in which we are ill-prepared for the brave new world of AI Native:

    • Code repositories can be created, hosted, and forked by bots with no means to determine provenance
    • Artifact repositories can have new projects created by bots, with software available for download before anyone realizes that no humans were in the loop
    • Even legitimate projects that use models are vulnerable to malicious data injections, with no reliable way to prove data origins
    • CVEs can now be created by bots, inundating projects with a flood of false positives that can only be weeded out through time-consuming manual checks
    • Or, perhaps the CVE reports are legitimate, and now bots scanning for new ones can immediately find a way to exploit one (or many) of them and inject malware into an unsuspecting project

    The list goes on… I fear we’ve only scratched the surface of what lies ahead. The only way we can combat this is through the community engagement powers that we’ve built over the past 25-30 years. Some rules and behaviors will need to change, but communities have a remarkable ability to adapt, and that’s what is required. I can think of a few things that will limit the damage:

    • Public key infrastructure and key signing: public key signing has been around for a long time, but we still don’t have enough developers who are serious about it. We need to get very serious very quickly about the provenance of every actor in every engagement. Contributed patches can only come from someone with a verified key. Projects on package repositories can only be trusted if posted by a verified user via their public keys. Major repositories have started to do some of this, but they need to get much more aggressive about enforcing it. /me sideeyes GitHub and PyPI
    • Signed artifacts: similar to the above – every software artifact and package must have a verified signature to prove its provenance, else you should never use it. If implemented correctly, a verified package on pypi.org will have 2 ways to verify its authenticity: the key of the person posting it, and the signature of the artifact itself. (A minimal sketch of this two-part check follows this list.)
    • Recognize national borders: I know many folks in various open source communities don’t want to hear this, but the fact is that code that emanates from rogue states cannot be trusted. I don’t care if your best friend in Russia has been the most prolific member of your software project. You have no way of knowing if they have been compromised or blackmailed. Sorry, they cannot have write access. We can no longer ignore international politics when we “join us now and share the software”. You will not be free, hackers. I have to applaud the actions of The Linux Foundation and their legal chief, Michael Dolan. I believe this was true even before the age of AI slop, but the emergence of AI Native technologies makes it that much more critical.
    • Trust no one, Mulder: And finally, if you have a habit of pulling artifacts directly from the internet in real time for your super-automated devops foo, stop that. Now. Like… you should have already eliminated that practice, but now you really need to stop. If you don’t have a global policy for pushing all downloads through a centralized proxy repository – with the assumption that you’re checking every layer of your downloads (see the digest-pinning sketch after this list) – you are asking for trouble from the bot madness.
    • Community powered: It’s not all paranoid, bad stuff. Now is a great opportunity for tech companies, individual developers, enterprises, and software foundations to work out a community protocol that limits the damage. All of these actors can sign on to a declaration of rules they will follow to quarantine known bad actors and exchange vital information for the purpose of improving security for everyone. This is an opportunity for The Linux Foundation, Eclipse, and the Open Source Initiative to unite our communities and show some leadership.
    • Bots detecting bots: I was very hesitant to list this one, because I can feel the reactions from some people, but I do believe that we will need bots, agents, and models to help us with threat detection and mitigation.
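
    For the key-signing and signed-artifact bullets above, here is a minimal sketch of that two-part check, using Ed25519 via the widely used Python cryptography package. To be clear about assumptions: this is not how PyPI or GitHub actually implement trust (real-world efforts such as Sigstore add identity binding, key distribution, and transparency logs); it only illustrates the primitive of verifying artifact bytes against a publisher’s key, and the artifact bytes here are a stand-in.

    ```python
    # Minimal sketch of artifact signing and verification with Ed25519,
    # using the "cryptography" package (pip install cryptography).
    # Illustrative only: real systems layer identity, key distribution,
    # and transparency logs (e.g. Sigstore) on top of this primitive.
    from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey
    from cryptography.exceptions import InvalidSignature

    # Publisher side: generate a key pair and sign the artifact bytes.
    private_key = Ed25519PrivateKey.generate()
    public_key = private_key.public_key()

    artifact = b"stand-in for the bytes of mypackage-1.0.0.tar.gz"
    signature = private_key.sign(artifact)

    # Consumer side: verify the bytes against the publisher's public
    # key BEFORE installing or executing anything from the artifact.
    try:
        public_key.verify(signature, artifact)
        print("signature OK: artifact matches the publisher's key")
    except InvalidSignature:
        print("REJECT: artifact was tampered with, or wrong key")

    # Flipping even one byte must fail verification.
    try:
        public_key.verify(signature, artifact + b"!")
    except InvalidSignature:
        print("tampered artifact correctly rejected")
    ```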
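
    And for the “trust no one” bullet, a sketch of the checking you would want at the proxy boundary: pin the expected SHA-256 digest of every artifact you allow in, and reject anything that does not match. The allowlist format below is an assumption for illustration; for Python dependencies, pip’s --require-hashes mode is the production-grade version of the same idea.

    ```python
    # Sketch of digest pinning at a download boundary. The allowlist
    # format is hypothetical; pip's --require-hashes mode implements
    # the same idea for real Python dependency trees.
    import hashlib

    # Pinned digests for every artifact we permit. The value below is
    # the SHA-256 of the empty string, purely so this sketch self-checks.
    ALLOWED = {
        "mypackage-1.0.0.tar.gz":
            "e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855",
    }

    def admit(filename: str, payload: bytes) -> bool:
        """Admit a downloaded artifact only if its SHA-256 digest matches
        the pinned value; anything not on the allowlist is rejected."""
        expected = ALLOWED.get(filename)
        if expected is None:
            return False  # unknown artifact: never pass it through
        return hashlib.sha256(payload).hexdigest() == expected

    assert admit("mypackage-1.0.0.tar.gz", b"")          # digest matches pin
    assert not admit("mypackage-1.0.0.tar.gz", b"evil")  # digest mismatch
    assert not admit("unknown-2.0.tar.gz", b"")          # not on allowlist
    ```

    Signatures prove who published an artifact; digest pinning proves you got exactly the bytes you vetted. A serious proxy policy enforces both.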

    I have always believed in the power of communities to take positive actions for the greater good, and now is the perfect time to put that belief to the test. If we’re successful, we can enjoy revamped ecosystems, improved upon by our AI Native automation platforms: safer ecosystems that can more easily detect malicious actors, and communities that can add new tech capabilities faster than ever. In short, if we adapt appropriately, we can accelerate the innovations that open source communities have already excelled at. In a previous essay, I mentioned how the emergence of cloud computing was both a result of and an accelerant of open source software. The same is true of AI Native automation. It will inject more energy into open source ecosystems and take them places we didn’t know were possible. But what we must never forget is that not all of these possibilities are good.