Tag: llms

  • The New Open Source Playbook – Products and Customers in an Agentic Engineering World

    Thus far in this series, I’ve focused on various ways to align with ecosystems and communities and create or integrate with platforms. This is designed to maximize the engineering economics of your business, reducing costs, outsourcing maintenance, and benefiting from innovation that comes from outside your employer or core engineering team. But if you’re running a business, you’re probably asking, “that’s great, but how do I make money?” In the past, my snarky answer was, “create a great product that reduces your customers’ pain and saves them time. Duh…” But as time goes on, I’ve realized that what they’re really asking is how to benefit from open source innovation without giving away your core value for free. That is to say, how do you do this open source stuff and still create a moat that prevents competitors from stealing your milkshake while you establish lucrative business relationships with your customers and partners?

    Open Source Hierarchy of Products

    A pyramid with 3 layers: "Paid Product" at the top, "Free Product or Open Core" in the middle, and "Open Source Platform, Neutral 3rd Party Governance" at the bottom.

    Thus far in this series, I’ve focused on the lower parts of the above pyramid. In this post, I’m going to focus on the upper parts. The lower third, which focuses on platforms, is about cost, the bottom line, and generating enough innovation to provide lift to the 2 upper layers. Platforms are about engineering economics – how do I accelerate innovation for less money than I would spend doing it all myself? It’s about delegation, ecosystem integration, neutral 3rd parties, and open governance. The 2 upper layers are about taking the platform innovation and applying it to customer use cases; going to market and showing product-market fit. The bottom layer is a shared resource. The top layers are all yours. Even then, there’s an art to constructing your products to give you the best chance to thrive. You’ll notice that I break this section into 2 layers and not one: even when the product is 100% yours, there’s a need to diversify your customer base and think about the multiple personas you want to bring into your fold.

    The “Freemium” or Open Core Layer

    No product category has been as poorly understood as open core or other “free to use” products. In the early to mid 2000s, there was a simple model for getting investors to put money into a startup: take an established open source project and “commercialize” it, stripping it of just enough features so that you could convince users to convert into paying customers in order to get the “creamy frosting” of paid features. This model produced a smattering of successes, but most of the companies who tried it failed. Invariably, the paid product would compete with the free version, thus incentivizing the company leaders to put more and more features into the paid version and less into the free one. The end result was a bunch of unhappy users who abandoned the project and blunted whatever momentum the commercial product may have had. I do not recommend this approach.

    These days, I think about core platforms like Kubernetes, with free products built around it, such as the many freely available but commercial Kubernetes distributions, and then the for-pay vertical applications built on that. Each layer of the product stack is designed for a different audience and fulfills a different purpose. No one is going to take plain, vanilla Kubernetes and sell you the software bits, but they might provide an easy-to-use bundled version with some limitations for personal use, and then sell you a full product with proprietary extensions and plugins. The base platform from the Cloud Native Computing Foundation is designed for and by core contributors; the free bundle or distribution is for end users or “developer users” who want to try it out or use it for limited applications; and the commercial bundle with for-pay plugins and extensions is for customers with specific needs and little time for implementation. All are segments with different needs, and all have value in the Kubernetes ecosystem, with vendors tailoring their solutions to various use cases.

    In some cases, the free product skips the base platform entirely and is its own entity. One example of this is Splunk, which gave away a proprietary and limited but free product and provided a convenient means for customers to buy the full version. Splunk avoided the fate of the open core failures by ensuring that its free product always had an audience and always provided value, even for users who didn’t pay for it. The founders of Splunk debated whether to open source their product and ultimately decided they could deliver value to free users without open sourcing – and they were proven correct. Because they never needed outside contributors to reduce costs, and because they could sustain the innovation required to land paying customers, open source wasn’t as compelling for their product strategy. Keep this in mind when I discuss agentic products below.

    Having a free product can make the difference between surviving and thriving, but you must be thoughtful of your goals and mindful of the drawbacks of different approaches. There are a couple of things you should keep in mind:

    • All free products should provide something of value for customers who don’t pay. There are some customers who will never ever pay for your product. Are you ok with them leaving your sphere of influence and going elsewhere? What is the value of growing your brand recognition? Can you do that without a free product?
    • Your free product is your intellectual property. The platform is the place for neutral 3rd party governance. Your free product is yours to do with as you please, whether it’s released under an open source license or not. Of course, it’s best to treat your community with respect: your free product is there to create brand ambassadors who will vouch for your company.
    • A free product with an open source license can be beneficial to your overall product strategy. You have to decide whether the benefits outweigh the costs. It is an expression of transparency and trust that your customers will appreciate. And you can protect yourself through copyright and trademark law. It can also accelerate your brand recognition and growth in ways that a typical proprietary free product cannot, but not always. And therein lies the rub: It depends on who your customers are and their expectations.
    • If you view your free product as competition to your paid version, you’ve already failed. Either you fail to understand the value of a free product, or you’ve implemented your product strategy poorly. Either way, you would do well to take a step back and rethink your strategy. Hopefully, you see this in time to course correct.

    The Paid Product

    The interesting part of paid products is that there are so many potential avenues to take. Whereas platforms and free products are relatively straightforward, paid products can take on a variety of shapes, sizes, and types: *-as-a-service; software bundles; paid consultation services; vertical integration; vertical customer use cases; etc. This makes it easier to separate out the core value proposition of your paid solution, but it also makes it trickier to establish a conduit from free to paid. For example, if your solution is SaaS, does it make sense for your free product to be a downloadable open source software bundle? Possibly – there is enough market differentiation that the free product will not detract from the SaaS experience, but usually, you want the free version to be easy to use so that your technology becomes more ubiquitous. A difficult-to-configure software bundle would take significant effort for you to maintain and may not add enough of a benefit to justify the expense. Then again, if a free bundle enables other businesses to embed your technology and become potential OEM partners, it could allow you to expand your business in ways you hadn’t thought of. As long as giving away your product adds value to your overall product strategy and accelerates the growth of your paid solution, it’s justifiable.

    The Agentic Wrinkle

    I’ve argued in the past that agentic engineering was going to change the open source landscape significantly – there will be more open source software, not less, and a growing number of companies will need a solid open source strategy, probably more than ever before. I wrote this series for 2 main reasons:

    1. Large numbers of startup founders are taking a crash course as we speak in open source ecosystems and strategies. I want them to think through their approaches, consider what they want to achieve, and decide whether an open source approach will benefit them.
    2. In a world where autonomous software agents will write an increasing share of our source code, rules of transparency and governance in software collaboration are more important than ever. The risks are also higher than ever. This is a world where your competitors can copy your features almost as soon as you release them. How are you going to protect your business?

    Agentic engineering holds great promise for entrepreneurs. I’ve seen companies with just 2 co-founders deliver a ready-to-order product without needing to hire a team of developers. This is astounding! But I’ve also seen startups get attacked by no-innovation companies that only repackage their code and still get millions in investment dollars. The emergence of agentic engineering tips the scales in a few interesting ways.

    • Platforms are still valuable. In fact, having a neutral location for platform development may be more valuable than ever – a dynamic, growing platform will also attract agentic development, which means the platforms will become more dynamic and robust, providing more growth fuel for your intellectual property.
    • Protect your intellectual property. Releasing a free product as open source may actually be safer than a proprietary version with no source code. Open source code released under your trademark and copyright gives you a way to audit what competitors release. Embedding clues within your code will help you determine whether other companies rebranded your intellectual property, whereas an agent reverse-engineering the features of your proprietary product will be almost undetectable. (A sketch of what that marker-based auditing could look like follows this list.)
    • You will have to adapt. For every startup out there: the game has changed. Our entire way of designing, building, testing, and delivering software has changed forever and is still being reinvented. Entire platforms will be torn down and replaced by new ones with incredible speed. If you haven’t adopted this methodology, you will be left behind.
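
    To make that auditing idea concrete, here’s a minimal sketch, assuming you’ve planted unique, functionally inert marker strings in your open source release. Every marker value and path below is hypothetical – the point is that markers in readable source are easy to scan for, while features regenerated by an agent from your proprietary product leave no such trail.

    ```python
    # Sketch: scan a third-party code drop for "canary" markers you planted
    # in your own open source release. The marker values here are made up --
    # in practice you'd generate unique, innocuous-looking identifiers.
    from pathlib import Path

    # Hypothetical canaries: distinctive identifiers or constants that serve
    # no functional purpose but are unique to your codebase.
    CANARIES = [
        "x7f_cache_warmup_delta",            # odd internal identifier
        "3f9c2a1e-77b4-4d2a-9e11-0000c0ffee",  # planted UUID constant
    ]

    def scan_tree(root: str) -> list[tuple[str, str]]:
        """Return (file, canary) pairs found anywhere under root."""
        hits = []
        for path in Path(root).rglob("*"):
            if not path.is_file():
                continue
            try:
                text = path.read_text(errors="ignore")
            except OSError:
                continue  # unreadable file: skip it
            for marker in CANARIES:
                if marker in text:
                    hits.append((str(path), marker))
        return hits

    if __name__ == "__main__":
        # "./suspect-release" is a placeholder for a competitor's code drop.
        for file, marker in scan_tree("./suspect-release"):
            print(f"possible rebrand: {marker!r} found in {file}")
    ```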

    There are some incredible challenges ahead. In the past, companies could separate their free products from paid ones through data: the software was free, but the data or “content” was what customers paid for. In an agentic world, data is a core part of any product – there is no such thing as a software-only solution. And in a world where agents can regenerate content with striking speed, data is no longer the product moat it once was. Tech vendors will have to learn how to deliver free agentic tools, complete with data, that still provide an avenue for conversion to paid, commercial solutions.

    As you think through your product strategy, consider these questions:

    • Platforms: What is your platform strategy? Where is collaboration within an ecosystem helpful?
    • Free products: What can you give away for free that will accelerate your growth strategy?
    • Paid products: How can you create a compelling product over and above what’s available for free?
    • Agentic engineering: How will you benefit from an agentic world? How do you protect your value proposition?
  • Open Source is About to Undergo Substantial Change

    …And Most Open Source Communities Aren’t Ready

    It’s probably gauche to talk about “AI” by now. AI this… AI that… and most of the time, what we’re really talking about is predictive text machines, aka LLMs. But today I want to talk about what I see happening in the open source world, how I see things changing in the not-too-distant future, and how much of that will be shaped by these predictive text machines, aka… LLMs. The agentic world is growing very quickly, and even if the large LLMs are starting to plateau, the LLM-backed services are still accelerating in their product growth for the simple reason that developers are figuring out how to add rules engines and orchestration platforms to build out targeted vertical services (think tools for reading radiology and MRI scans, for example). A great analogy from computing history for this shift from LLMs to agentic “SLMs” is the shift in emphasis from the single CPU as the definition of compute power to the emergence of multi-core CPUs along with faster RAM, NVMe, larger onboard caches, and of course, GPUs. When we think about compute power today, we don’t refer to clock speed, which is a far cry from the late 90s and early 2000s. Believe it or not, kids, there was a time when many people thought that Moore’s law applied to the clock speed of a CPU.
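
    As a concrete (if toy) illustration of that rules-engine pattern, here’s a minimal sketch in Python. Everything in it is hypothetical – call_llm() stands in for whatever model endpoint a real service would use – but it shows the basic shape: the model drafts, and deterministic rules constrain what the vertical service actually returns.

    ```python
    # A sketch of the rules-engine-around-an-LLM pattern, using a
    # hypothetical radiology-report service. call_llm() is a stand-in
    # for a real model call, not any actual API.
    import re

    def call_llm(prompt: str) -> str:
        # Stand-in for a real model endpoint; returns a canned draft here
        # so the sketch runs end to end.
        return "Diagnosis confirmed: small nodule in the left upper lobe."

    # Deterministic post-processing rules: the model proposes, the rules dispose.
    OVERCONFIDENT = re.compile(r"\b(diagnosis confirmed|guaranteed)\b", re.IGNORECASE)

    def radiology_summary(findings: str) -> str:
        draft = call_llm(f"Summarize these imaging findings:\n{findings}")
        # Rule 1: never let the service overstate certainty.
        draft = OVERCONFIDENT.sub("[flagged for radiologist review]", draft)
        # Rule 2: always append a human-in-the-loop disclaimer.
        return draft + "\nPreliminary output; requires radiologist sign-off."

    if __name__ == "__main__":
        print(radiology_summary("3mm nodule, LUL, unchanged from prior study."))
    ```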

    For some time now, source code has been of little value. There’s so much of it. Nobody buys source code. I’ve made this point before in a series of posts on the subject. 20 years ago, I noted how internet collaboration was driving down the price of software because of the ubiquity of source code and the ability to collaborate beyond geographic borders. This trend, which has been unceasing for 25+ years, has hit an inflection point and is accelerating beyond its previous rate. This is, of course, because of the oncoming train that is AI, or more specifically, agentic LLM-based systems that are starting to write more and more of our source code. Before I get into the full ramifications of What This Means for Open Source (tm), let me review the 2 previous transformative eras in tech that played a pivotal role in bringing us to this point: open source and cloud.

    Open Source Accelerated the Speed of Development

    A long, long time ago, software vendors had long release cycles, and customers had no choice but to wait 1-2 years, or longer depending on the industry, for the long cycle of dev, test, and release to complete. And then a funny thing happened: more people got online and suddenly created a flurry of core tools, libraries, and systems that gave application developers the ultimate freedom to create whatever they wanted without interference from gatekeepers. I cannot over-emphasize the impact this had on software vendors. At first, it involved a tradeoff: vendors were happy to use the free tools and development platforms, because they saw a way to gain a market edge and deliver faster. At the same time, startups also saw an opportunity to capitalize on this development and quickly create companies that could compete with incumbents. In the late 90s, this meant grabbing as much cash as possible from investors in the hopes of having an IPO. All of this meant that for every advance software vendors embraced from the open source world, they were also effectively writing checks that future competitors would cash – forcing established vendors to release even more quickly (lather, rinse, repeat) and to find vertical markets where they could build moats.

    Cloud Accelerated the Speed of Delivery

    If open source accelerated the speed of development, the emergence of what became “cloud technologies” enabled the delivery of software at a speed and scale previously thought to be impossible. Several smart companies in the mid-2000s saw this development and started to enact plans that would capitalize on the trend to outsource computing infrastructure. The companies most famous for leading the charge were Amazon, which created AWS in 2006, Netflix, which embraced AWS at an early stage, Google, which created Borg, the predecessor to Kubernetes, and Salesforce, which created its cloud-based PaaS, Force.com, in 2009. Where open source gave small growing companies a chance to compete, cloud did the same, but at a price. Established software vendors started moving to cloud-based systems that allowed them to deliver solutions to customers more quickly, and startups embraced cloud because they could avoid capital expenditures for data center maintenance. Concurrently, open source software continued to develop at a fast pace for the simple reason that it enabled the fast development of technologies that powered cloud delivery. Similar to open source, the emergence of cloud led directly to faster release cycles and increasing competition. Unlike open source, however, cloud computing allowed established cloud companies to build out hegemonic systems designed to exact higher rental fees over time, pulling customers deeper into dependencies that are increasingly difficult to unravel. Software vendors that thought open source developers were the architects of their demise in the early 2000s hadn’t yet met Amazon.

    All of these developments and faster release cycles led to a lot more source code being written and shared, with GitHub.com emerging as the preferred source code management system for open source communities. (Pour one out for Sourceforge.net, which should have captured this market but didn’t.) Sometimes this led companies to think that maybe their business wasn’t cut out for this world of source code sharing, so they began a retrenchment from their open source commitments. I predicted that this retrenchment would have little impact on their viability as a business, and I was right. If only they had asked me, but I digress…

    All of this brings us to our present moment, where source code is less valuable than ever. And when something’s value is depreciating, how do we ensure that the rules of engagement remain fair for all parties?

    Sorry Doubters: AI Will Change Everything

    If open source accelerated development and cloud accelerated delivery, then AI is accelerating both, simultaneously. Code generation tools are accelerating the total growth of source code; code generation tools are accelerating the ongoing trend of blurring the boundary between hardware and software; and code generation tools are (potentially) creating automated systems that deliver solutions more quickly. That last one has not yet been realized, but with the continuing growth of agentic workflows, orchestrators, and rules engines, I would bet my last investment dollar on that trend realizing its potential sooner rather than later.

    What does this portend? I think it means we will need to craft new methods of managing and governing all of this source code. I think it means that rules of collaboration are going to change to reflect shifting definitions of openness and fairness in collaboration. I think it means that previously staid industries (read: semiconductors) are facing increasing pressure in the form of power consumption, speed of data flow, and increasingly virtualized capabilities that have always lived close to the silicon. And I think a whole lot of SaaS and cloud native vendors are about to understand what it means to lose your “moat”. The rise of agentic systems is going to push new boundaries and flip entire industries on their heads. But for the purpose of this essay, I’m going to focus on what it means for rules of collaboration.

    What is the Definition of Open Source?

    For many years, the definition of open source has been housed and governed by the Open Source Initiative (OSI). Written in the post-Cold War era of open borders and free trade, it’s a document very much of its time. In the intervening years, much has happened. Open source proliferation happened, and many licenses were approved by the OSI as meeting the requirements of the Open Source Definition (OSD). State-sponsored malware has happened, sometimes inflicting damage on the perceived safety of open source software. Cloud happened, and many open source projects were used in the creation of “cloud-native” technologies. And now LLM-based agentic systems are happening. I mention all of this to ask: in what context is it appropriate to consider changes to the OSD?

    One of the reasons open source governance proved to be so popular is that it paved the way for innovation. Allow me to quote my own definition of innovation:

    Innovation cannot be sought out and achieved. It’s like happiness. It has to be achieved by laying the foundation and establishing the rules that enable it to flourish.

    In open source communities and ecosystems, every stakeholder has a seat at the table, whether they are individuals, companies, governments, or any other body with a vested interest. That is the secret of its success. When you read the 10 tenets of the OSD, it boils down to “establishing the rules of collaboration that ensure fairness for all participants.” Basically, it’s about establishing and defending the rights of stakeholders, namely the ability to modify and distribute derivative works. In the traditional world of source code, this is pretty straightforward. Software is distributed. Software has a license. Users are held to the requirements of that license. We saw the first cracks in this system when cloud computing emerged, because the act of distributing… sorry, “conveying” software changed significantly once software was consumed over a network instead of installed locally. And the idea of derivative works was formed at a time when software was compiled with shared library binaries (.so and .dll) that were pulled directly into a software build. Those ideas have become more quaint over time, and the original ideas of the OSD have become increasingly exploitable over the years. What use is a software license when we don’t technically “use software”? We chose not to deal with this issue, pretending that nothing had changed. For the most part, open source continued to flourish, and more open source projects continued to fuel the cloud computing industry.

    But now we’re bracing for another change. How do we govern software when we can’t even know if it was written by humans? Agentic systems can now modify and write new source code with little human intervention. I will not comment on whether this is a good idea, merely that it is happening. Agentic systems can take the output of cloud-based services and write entire applications that mimic their feature set. Does that meet the definition of open source? Does it violate the EULA of a cloud service? And if companies can recreate the entire code base of a project based only on the requirements of applications that use it, does that violate the terms of reciprocal licenses like the GPL? And this is before we even get to the issues of copyright pertaining to all the source code that had to feed the models in order to write code.

    If we circle back to answering the question “how do we protect the rights and ensure the fairness of all participants?”, how do we prepare for these changes? I think a few things are in order:

    • The right to reverse engineer must be protected to meet the definition of Open Source. This means that the ability to recreate, modify, and redistribute a model, cloud service, or really anything in technology that we use has to be protected. For years, cloud providers have built complexity into their services that makes them very difficult to replicate at scale. That is now changing, and it is a good thing.
    • This also means that the ability to recreate, modify, and redistribute models must also be protected if it uses the moniker of Open Source.
    • Agents must abide by licensing terms in order to be categorized as open source. If you call your agentic systems open source, they must be able to interpret and abide by software licenses. This effectively means that all agentic systems will need to include a compliance persona in order to meet the definition of Open Source (a sketch of what such a gate might look like follows this list).
    • Maintainers of Open Source projects must have a way to quickly dismiss the output of agentic systems that file bug and vulnerability reports. This means that in order to meet the open source definition, agentic systems that fit in that category will have to abide by a standard that maintainers use to signal their willingness to accept input from agents. If maintainers decline, then agentic systems will either avoid these projects, or push their inputs and changes into forked repos maintained elsewhere.
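
    To illustrate the last two points, here’s a minimal sketch of an agent-side gate. Everything in it is an assumption: there is no standard LICENSE.spdx or AGENT-POLICY convention today, so treat this as the shape a compliance persona might take, not an existing mechanism.

    ```python
    # Sketch of an agent-side "compliance persona": before opening a PR,
    # the agent checks (a) the project's license against an allowlist and
    # (b) a hypothetical maintainer opt-in file. Both file conventions
    # below are invented for illustration.
    from pathlib import Path

    ALLOWED_LICENSES = {"MIT", "Apache-2.0", "BSD-3-Clause", "GPL-3.0-only"}

    def may_contribute(repo_root: str) -> bool:
        root = Path(repo_root)

        # (a) License gate: read an SPDX identifier from a hypothetical stub.
        spdx_file = root / "LICENSE.spdx"  # invented convention
        if not spdx_file.exists():
            return False  # unknown terms: stay out
        if spdx_file.read_text().strip() not in ALLOWED_LICENSES:
            return False

        # (b) Maintainer signal: proceed only on an explicit opt-in.
        policy = root / "AGENT-POLICY"  # invented convention
        return policy.exists() and \
            "accept-agent-contributions: yes" in policy.read_text()

    if __name__ == "__main__":
        print(may_contribute("./some-project"))
    ```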

    These are just a few ideas. The bottom line is that the open source ethos guarantees all stakeholders a seat at the table, and we must be willing to make changes to our governing rules in order to ensure fairness for all parties. To do otherwise is to shirk our responsibility and pretend like it’s still 1999. No change to the open source definition should be taken lightly, but the OSD is the governing document that protects the rights of those who participate in open source communities, and we need to make sure that it doesn’t become more easily exploitable by monopolistic companies and those that wish to extort community members or commit harmful acts.

    Open Source communities and maintainers are not yet prepared for these changes, and it’s our job as community members to make sure that these communities, the backbone of open source innovation, remain vibrant and strong.

  • AI Native and the Open Source Supply Chain

    I recently wrote 2 essays on the subject of AI Native Automation over on the AINT blog. The gist of them is simple:

    It’s that latter point that I want to dive a bit deeper into here, but first a disclaimer:

    We have no idea what the ultimate impact of "AI" is on the world, but there are some profoundly negative ramifications that we can see today: misinformation, bigotry and bias at scale, deep fakes, rampant surveillance, obliteration of privacy, increasing carbon pollution, destruction of water reservoirs, etc. etc. It would be irresponsible not to mention this in any article about what we call today "AI". Please familiarize yourself with DAIR and its founder, Dr. Timnit Gebru.

    When I wrote that open source ecosystems and InnerSource rules were about to become more important than ever, I meant that as a warning, not a celebration. If we want a positive outcome, we’ll have to make sure that our various code-writing agents and models subscribe to agreed-upon rules of engagement. The good news is that we now have over 25 years of practice running open source projects at scale, which gives us a basis to police whatever is about to come next. The bad news is that open source maintainers are already overwhelmed as it is, and they will need some serious help to address what is going to be an onslaught of “slop”. This means that 3rd party mediators will need to step up their game to help maintainers, which is a blessing and a curse. I’m glad that we have large organizations in the world to help with the non-coding aspects of legal protections, licensing, and project management. But I’m also wary of large multi-national tech companies wielding even more power over something as critical to the functioning of society as global software infrastructure.

    We already see stressors from the proliferation of code bots today: too many incoming contributions that are – to be frank – of dubious quality; new malware vectors such as “slopsquatting”; malicious data injections that turn bots into zombie bad actors; malicious bots that probe code repos for opportunities to slip in backdoors; etc – it’s an endless list, and we don’t yet even know the extent to which state-sponsored actors are going to use these new technologies to engage in malicious activity. It is a scary emerging world. On one hand, I look forward to seeing what AI Native automation can accomplish. But on the other, we don’t quite understand the game we’re now playing.

    Here are just some of the ways that we are ill-prepared for the brave new world of AI Native:

    • Code repositories can be created, hosted, and forked by bots with no means to determine provenance
    • Artifact repositories can have new projects created by bots, with software available for download before anyone realizes that no humans are in the loop
    • Even legitimate projects that use models are vulnerable to malicious data injections, with no reliable way to prove data origins
    • CVEs can now be created by bots, inundating projects with a multitude of false positives that can only be weeded out through time-consuming manual checks
    • Or, perhaps the CVE reports are legitimate, and now bots scanning for new ones can immediately find a way to exploit one (or many) of them and inject malware into an unsuspecting project

    The list goes on… I fear we’ve only scratched the surface of what lies ahead. The only way we can combat this is through the community engagement powers that we’ve built over the past 25-30 years. Some rules and behaviors will need to change, but communities have a remarkable ability to adapt, and that’s what is required. I can think of a few things that will limit the damage:

    • Public key architecture and key signing: public key signing has been around for a long time, but we still don’t have enough developers who are serious about it. We need to get very serious very quickly about the provenance of every actor in every engagement. Contributed patches can only come from someone with a verified key. Projects on package repositories can only be trusted if posted by a verified user via their public keys. Major repositories have started to do some of this, but they need to get much more aggressive about enforcing it. /me sideeyes GitHub and PyPI
    • Signed artifacts: similar to the above – every software artifact and package must have a verified signature to prove its provenance, else you should never ever use it. If implemented correctly, a verified package on pypi.org will have 2 ways to verify its authenticity: the key of the person posting it, and the signature of the artifact itself. (A sketch of the consuming side follows this list.)
    • Recognize national borders: I know many folks in various open source communities don’t want to hear this, but the fact is that code that emanates from rogue states cannot be trusted. I don’t care if your best friend in Russia has been the most prolific member of your software project. You have no way of knowing if they have been compromised or blackmailed. Sorry, they cannot have write access. We can no longer ignore international politics when we “join us now and share the software”. You will not be free, hackers. I have to applaud the actions of The Linux Foundation and their legal chief, Michael Dolan. I believe this was true even before the age of AI slop, but the emergence of AI Native technologies makes it that much more critical.
    • Trust no one, Mulder: And finally, if you have a habit of pulling artifacts directly from the internet in real time for your super automated devops foo, stop that. Now. Like… you should have already eliminated that practice, but now you really need to stop. If you don’t have a global policy for pushing all downloads through a centralized proxy repository – with the assumption that you’re checking every layer of your downloads – you are asking for trouble from the bot madness.
    • Community powered: It’s not all paranoid, bad stuff. Now is a great opportunity for tech companies, individual developers, enterprises, and software foundations to work out a community protocol that will limit the damage. All of these actors can sign on to a declaration of rules they will follow to quarantine known bad actors and exchange vital information for the purpose of improving security for everyone. This is an opportunity for The Linux Foundation, Eclipse, and the Open Source Initiative to unite our communities and show some leadership.
    • Bots detecting bots: I was very hesitant to list this one, because I can feel the reactions from some people, but I do believe that we will need bots, agents, and models to help us with threat detection and mitigation.
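
    To make the key-signing and signed-artifact points concrete, here’s a minimal sketch of the consuming side. It shells out to the real gpg command line to check a detached signature against a locally trusted keyring; the filenames are illustrative, and ecosystems are converging on Sigstore-style attestations rather than raw GPG, so treat this as the shape of the check rather than the one true mechanism.

    ```python
    # Sketch: refuse to use any downloaded artifact without a valid detached
    # signature from a key already in your trusted keyring. Filenames are
    # illustrative placeholders.
    import subprocess
    import sys

    def verify_artifact(artifact: str, signature: str) -> bool:
        """Return True only if the detached signature checks out against
        a key present in the local gpg keyring."""
        result = subprocess.run(
            ["gpg", "--verify", signature, artifact],
            capture_output=True,
            text=True,
        )
        # gpg exits non-zero on a bad or untrusted signature.
        return result.returncode == 0

    if __name__ == "__main__":
        if not verify_artifact("package-1.2.3.tar.gz",
                               "package-1.2.3.tar.gz.asc"):
            sys.exit("unverified artifact: refusing to install")
        print("signature OK")
    ```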

    I have always believed in the power of communities to take positive actions for the greater good, and now is the perfect time to put that belief to the test. If we’re successful, we will enjoy revamped, safer ecosystems – improved upon by our AI Native automation platforms – that can more easily detect malicious actors, and communities that can add new tech capabilities faster than ever. In short, if we adapt appropriately, we can accelerate the innovations that open source communities have already excelled at. In a previous essay, I mentioned how the emergence of cloud computing was both a result of and an accelerant of open source software. The same is true of AI Native automation. It will inject more energy into open source ecosystems and take them places we didn’t know were possible. But what we must never forget is that not all these possibilities are good.