Tag: ai

  • The New Open Source Playbook

    (This is the first in a series)

    For the last few years, the world of commercial open source has been largely dormant, with few startups making a splash with new open source products. And when companies did make a splash, it was often for the wrong reasons; see, e.g., HashiCorp’s Terraform rug pull. It got to the point that Jeff Geerling declared that “Corporate Open Source is Dead,” and honestly, I would have agreed with him. It seemed that the age of startups pushing new open source projects and building a business around them was a thing of the past. To be clear, I always thought it was naive to believe you could simply charge money for a rebuild of open source software, but the fact that startups kept trying showed there was momentum behind the idea of using open source to build a business.

    And then a funny thing happened – a whole lot of new energy (and money) started flowing into nascent companies looking to make a mark in… stop me if you’ve heard this one… generative AI. Or to put it another way: some combination of agents built on LLMs attempting to solve an automation problem, usually in the category of software development or delivery. It turns out that when there’s lots of competition for users, especially when those users are themselves developers, a solid open source strategy can make the difference between surviving and thriving. In light of this newfound enthusiasm for open source among startups, I thought I’d write a handy guide for startups looking to incorporate open source strategy into their developer go-to-market playbook. Except in this version, I will incorporate nuances specific to our emerging agentic world.

    To start down this path, I recommend that startup founders look at 3 layers of open source go-to-market strategy: platform ecosystem (stuff you co-develop), open core (stuff you give away but whose IP you keep), and product focus (stuff you only allow paying customers to use). That last category, product focus, can be on-prem, cloud hosted, or SaaS – ultimately, it won’t matter. Remember, this is about how to create compelling products that people will pay for, helping you establish a business. There are ways to use open source principles to reach that goal, but proceed carefully: you can derail your product strategy by making the wrong choices.

    Foundation: the Platform Ecosystem Play

    When thinking about open source strategy, many founders have assumed they could release open source code and get other developers to work on it for free – a new model of outsourcing. This almost never works the way founders imagine. What ends up happening is that a startup releases open source code, its target audience happily uses the code for free – often without contributing back – and the startup is left questioning why it went down the open source path to begin with. Don’t be like them.

    The way to think of this is within the concept of engineering economics. What is the most efficient means to produce the foundational parts of your software?

    • If the answer is by basing your platform on existing open source projects, then you figure out how to do that while protecting your intellectual property. This usually means focusing on communities and projects under the auspices of a neutral 3rd party, such as the Eclipse or Linux Foundation.
    • If the answer is by creating a new open source platform that you expect to attract significant interest from other technology entities, then you test product-market fit with prospective collaborators and organizations with a vested interest in your project. Note: this is a risky strategy requiring a thoughtful approach and ruthless honesty about your prospects. The most successful examples of this, such as Kubernetes, showed strong demand from the outset and their creation was a result of market pull, not a push.
    • If the answer is that you don’t need external developers contributing to your core platform, but you do need end users and data on product-market fit, then you look into either an open core approach, or you create a product that gives the platform away for free, though not necessarily under an open source license. This is usually for the cases where you need developers to use or embed your product, but you don’t need them contributing directly. This is the “innovation on the edge” approach.
    • Or, if the answer is that you’ll make better progress by going it alone, then you do that and you don’t give it a 2nd thought. The goal is to use the most efficient means to produce your platform or foundational software, not score points on Hacker News.

    Many startups through the years have been tripped up by this step, misguidedly believing that their foundational software was so great that once they released it, thousands of developers would fall over each other to contribute to the project.

    In the world of LLMs and generative AI, there is an additional consideration: do you absolutely need the latest models from Google, OpenAI, or elsewhere, or can you get by with slightly older models less constrained by usage restrictions? Can you use your own training and weights with off-the-shelf open source models? If you’re building a product that relies on agentic workflows, you’ll have to consider end user needs and preferences, but you’ll also have to protect yourself from downstream usage constraints, which could hit you if you reach certain thresholds of popularity. When starting out, I wholeheartedly recommend having as few constraints as possible: opt for open source models whenever possible, but give your end users the choice if they have existing accounts with larger providers. This is where a platform approach helps you address product-ecosystem fit as early as possible. If you can build momentum while architecting your platform around open source models and model orchestration tools, your would-be platform contributors will let you know early on. An open source platform approach will help you guide your development in the right direction. Building your platform or product foundation on an existing open source project will be even more insightful, because that community will likely already have established AI preferences, helping make the decision for you.
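
    To make the model-choice point concrete, here is a minimal sketch in Python of a swappable model layer. Every name in it (ModelBackend, pick_backend, the endpoint URL) is illustrative rather than any particular framework’s API; the idea is simply that the open source model is the default, and a larger provider is an opt-in the user brings their own account for.

      from typing import Optional, Protocol

      class ModelBackend(Protocol):
          def complete(self, prompt: str) -> str:
              """Return a completion for the given prompt."""
              ...

      class OpenSourceBackend:
          """Default: a self-hosted open source model, e.g. behind one of the
          OpenAI-compatible endpoints that servers like vLLM or llama.cpp expose."""
          def __init__(self, endpoint: str = "http://localhost:8000/v1"):
              self.endpoint = endpoint

          def complete(self, prompt: str) -> str:
              raise NotImplementedError("HTTP call to the local model elided")

      class HostedBackend:
          """Opt-in: a larger provider, used only when the end user supplies
          their own account credentials."""
          def __init__(self, api_key: str):
              self.api_key = api_key

          def complete(self, prompt: str) -> str:
              raise NotImplementedError("call to the hosted provider elided")

      def pick_backend(user_api_key: Optional[str] = None) -> ModelBackend:
          # Fewest constraints by default; escalate only on explicit user choice.
          return HostedBackend(user_api_key) if user_api_key else OpenSourceBackend()

    Keeping your orchestration layer pointed at an interface like this, rather than at one vendor’s SDK, is what preserves your optionality if usage constraints start to bite later.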

    To summarize: find the ecosystem that best fits your goals and product plans and try to build your platform strategy within a community in that ecosystem, preferably on an existing project; barring that, create your own open source platform but maintain close proximity to adjacent communities and ecosystems, looking for lift from common users that will help determine platform-ecosystem fit; or build an open core platform, preferably with a set of potential users from an existing community or ecosystem who will innovate on the edge using your APIs and interfaces; if none of those apply, build your own free-to-use proprietary platform but maintain a line of sight to platform-ecosystem fit. No matter how you choose to build or shape a platform, you will need actual users to provide lift for your overall product strategy. You can get that lift from core contributors, innovators on the edge, adoption by your target audience, or some combination of these. How you do it depends on your needs and the expectations of your target audience.

    Up Next: open core on the edge and free products.

  • Open Source is About to Undergo Substantial Change

    …And Most Open Source Communities Aren’t Ready

    It’s probably gauche to talk about “AI” by now. AI this… AI that… and most of the time, what we’re really talking about is predictive text machines, aka LLMs. But today I want to talk about what I see happening in the open source world, how I see things changing in the not too distant future, and how much of that will be shaped by these predictive text machines, aka… LLMs. The agentic world is growing very quickly, and even if the large LLMs are starting to plateau, the LLM-backed services are still accelerating in their product growth, for the simple reason that developers are figuring out how to add rules engines and orchestration platforms to build out targeted vertical services (think tools for reading radiology and MRI scans, for example). A great analogy from computing history for this shift from LLMs to agentic “SLMs” is the shift in emphasis from the single CPU defining compute power to the emergence of multi-core CPUs, along with faster RAM, NVMe storage, larger onboard caches, and of course, GPUs. When we think about compute power today, we don’t refer to the chip speed – a far cry from the late ’90s and early 2000s. Believe it or not, kids, there was a time when many people thought that Moore’s law applied to the clock speed of a CPU.

    For some time now, source code has been of little value. There’s so much of it. Nobody buys source code. I’ve made this point before in a series of posts on the subject. 20 years ago, I noted how internet collaboration was driving down the price of software because of the ubiquity of source code and the ability to collaborate beyond geographic borders. This trend, unceasing for 25+ years, has hit an inflection point and is accelerating beyond its previous rate. This is, of course, because of the oncoming train that is AI – or more specifically, agentic LLM-based systems that are starting to write more and more of our source code. Before I get into the full ramifications of What This Means for Open Source (tm), let me review the 2 previous transformative eras in tech that played a pivotal role in bringing us to this point: open source and cloud.

    Open Source Accelerated the Speed of Development

    A long, long time ago, software vendors had long release cycles, and customers had no choice but to wait 1-2 years, or longer depending on the industry, for the long cycle of dev, test, and release to complete. And then a funny thing happened: more people got online and created a flurry of core tools, libraries, and systems that gave application developers the ultimate freedom to create whatever they wanted without interference from gatekeepers. I cannot over-emphasize the impact this had on software vendors. At first, it involved a tradeoff: vendors were happy to use the free tools and development platforms, because they saw a way to gain a market edge and deliver faster. At the same time, startups saw an opportunity to capitalize on this development and quickly create companies that could compete with incumbents. In the late 90s, this meant grabbing as much cash as possible from investors in the hopes of having an IPO. All of this meant that for every advance software vendors embraced from the open source world, they were also effectively writing checks that future competitors would cash, which required established vendors to release even more quickly – lather, rinse, repeat – and find vertical markets where they could build moats.

    Cloud Accelerated the Speed of Delivery

    If open source accelerated the speed of development, the emergence of what became “cloud technologies” enabled the delivery of software at a speed and scale previously thought impossible. Several smart companies in the mid-2000s saw this development and started to enact plans to capitalize on the trend of outsourcing computing infrastructure. The companies most famous for leading the charge were Amazon, which created AWS in 2006; Netflix, which embraced AWS at an early stage; Google, which created Borg, the predecessor to Kubernetes; and Salesforce, which created its cloud-based PaaS, Force.com, in the late 2000s. Where open source gave small, growing companies a chance to compete, cloud did the same – but at a price. Established software vendors started moving to cloud-based systems that allowed them to deliver solutions to customers more quickly, and startups embraced cloud because they could avoid capital expenditures for data center maintenance. Concurrently, open source software continued to develop at a fast pace for the simple reason that it enabled the fast development of technologies that powered cloud delivery. Like open source, the emergence of cloud led directly to faster release cycles and increasing competition. Unlike open source, however, cloud computing allowed established cloud companies to build out hegemonic systems designed to exact higher rental fees over time, pulling customers deeper into dependencies that are increasingly difficult to unravel. Software vendors that thought open source developers were the architects of their demise in the early 2000s hadn’t yet met Amazon.

    All of these developments and faster release cycles led to a lot more source code being written and shared, with GitHub.com emerging as the preferred source code management system for open source communities. (Pour one out for Sourceforge.net, which should have captured this market but didn’t.) Sometimes this led companies to think that maybe their business wasn’t cut out for this world of source code sharing, so they began a retrenchment from their open source commitments. I predicted that this retrenchment would have little impact on their viability as a business, and I was right. If only they had asked me, but I digress…

    All of this brings us to our present moment, where source code is less valuable than ever. And in a world of depreciating value, how do we ensure that the rules of engagement remain fair for all parties?

    Sorry Doubters: AI Will Change Everything

    If open source accelerated development and cloud accelerated delivery, then AI is accelerating both, simultaneously. Code generation tools are accelerating the total growth of source code; code generation tools are accelerating the ongoing blurring of the boundary between hardware and software; and code generation tools are (potentially) creating automated systems that deliver solutions more quickly. That last one has not yet been realized, but with the continuing growth of agentic workflows, orchestrators, and rules engines, I would bet my last investment dollar on that trend realizing its potential sooner rather than later.

    What does this portend? I think it means we will need to craft new methods of managing and governing all of this source code. I think it means that the rules of collaboration are going to change to reflect shifting definitions of openness and fairness in collaboration. I think it means that previously staid industries (read: semiconductors) are facing increasing pressure in the form of power consumption, speed of data flow, and increasingly virtualized capabilities that have always lived close to the silicon. And I think a whole lot of SaaS and cloud native vendors are about to understand what it means to lose your “moat”. The rise of agentic systems is going to push new boundaries and flip entire industries on their heads. But for the purpose of this essay, I’m going to focus on what it means for the rules of collaboration.

    What is the Definition of Open Source?

    For many years, the definition of open source has been housed and governed by the Open Source Initiative (OSI). Written in the post-Cold War era of open borders and free trade, it’s a document very much of its time. In the intervening years, much has happened. Open source proliferation happened, and many licenses were approved by the OSI as meeting the requirements of the Open Source Definition (OSD). State-sponsored malware happened, sometimes inflicting damage on the perceived safety of open source software. Cloud happened, and many open source projects were used in the creation of “cloud-native” technologies. And now LLM-based agentic systems are happening. I mention all of this to ask: in what context is it appropriate to consider changes to the OSD?

    One of the reasons open source governance proved to be so popular is that it paved the way for innovation. Allow me to quote my own definition of innovation:

    Innovation cannot be sought out and achieved. It’s like happiness. It has to be achieved by laying the foundation and establishing the rules that enable it to flourish.

    In open source communities and ecosystems, every stakeholder has a seat at the table, whether they are individuals, companies, governments, or any other body with a vested interest. That is the secret of its success. When you read the 10 tenets of the OSD, it boils down to “establishing the rules of collaboration that ensure fairness for all participants.” Basically, it’s about establishing and defending the rights of stakeholders, namely the ability to modify and distribute derivative works. In the traditional world of source code, this is pretty straightforward: software is distributed; software has a license; users are held to the requirements of that license. We saw the first cracks in this system when cloud computing emerged, because the act of distributing – sorry, “conveying” – software changed significantly once software was consumed over a network rather than installed locally. And the idea of derivative works was formed at a time when software was compiled with shared library binaries (.so and .dll) that were pulled directly into a software build. Those ideas have become more quaint over time, and the original ideas of the OSD have become increasingly exploitable. What use is a software license when we don’t technically “use” the software? We chose not to deal with this issue, pretending that nothing had changed. For the most part, open source continued to flourish, and more open source projects continued to fuel the cloud computing industry.

    But now we’re bracing for another change. How do we govern software when we can’t even know if it was written by humans? Agentic systems can now modify and write new source code with little human intervention. I will not comment on whether this is a good idea, merely note that it is happening. Agentic systems can take the output of cloud-based services and write entire applications that mimic their feature sets. Does that meet the definition of open source? Does it violate the EULA of a cloud service? And if companies can recreate entire code bases of projects based only on the requirements of the applications that use them, does that violate the terms of reciprocal licenses like the GPL? And this is before we even get to the copyright issues pertaining to all the source code that had to feed the models in order for them to write code.

    If we turn back to answering the question “how do we protect the rights and ensure the fairness of all participants?”, how do we prepare for these changes? I think a few things are in order:

    • The right to reverse engineer must be protected to meet the definition of Open Source. This means that the ability to recreate, modify, and redistribute a model, cloud service, or really anything in technology that we use, has to be protected. For years, cloud providers have built in complexity in their services that makes them very difficult to replicate at scale. That is now changing, and it is a good thing.
    • This also means that the ability to recreate, modify, and redistribute models must also be protected if it uses the moniker of Open Source.
    • Agents must abide by licensing terms in order to be categorized as open source. If you call your agentic system open source, it must be able to interpret and abide by software licenses. This effectively means that all agentic systems will need to include a compliance persona in order to meet the definition of Open Source.
    • Maintainers of Open Source projects must have a way to quickly dismiss the output of agentic systems that file bug and vulnerability reports. This means that in order to meet the open source definition, agentic systems in that category will have to abide by a standard that maintainers use to signal their willingness to accept input from agents (a minimal sketch of such a signal follows this list). If maintainers decline, then agentic systems will either avoid those projects or push their inputs and changes into forked repos maintained elsewhere.
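
    To make that last point concrete, here is a minimal sketch, assuming a hypothetical robots.txt-style file named .agent-policy at a repository’s root. No such standard exists today – the file name and format are invented for illustration – but a compliant agent would check the maintainers’ signal before filing reports or opening pull requests:

      from pathlib import Path

      def agent_contributions_allowed(repo_root: str) -> bool:
          """Check the maintainers' (hypothetical) .agent-policy signal.
          Absence of a signal means 'no', not 'yes'."""
          policy = Path(repo_root) / ".agent-policy"
          if not policy.exists():
              return False  # default to declining, per the argument above
          for line in policy.read_text().splitlines():
              # One directive per line, e.g. "contributions: allowed"
              key, _, value = line.partition(":")
              if key.strip() == "contributions":
                  return value.strip() == "allowed"
          return False

    The format is trivial; the governance agreement to honor it is the hard part.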

    These are just a few ideas. The bottom line is that the open source ethos guarantees all stakeholders a seat at the table, and we must be willing to make changes to our governing rules in order to ensure fairness for all parties. To do otherwise is to shirk our responsibility and pretend it’s still 1999. No change to the open source definition should be taken lightly, but as the governing document that protects the rights of those who participate in open source communities, we need to make sure it doesn’t become more easily exploitable by monopolistic companies and those who wish to extort community members or commit harmful acts.

    Open Source communities and maintainers are not yet prepared for these changes, and it’s our job as community members to make sure that these communities, the backbone of open source innovation, remain vibrant and strong.

  • Open Source, AI, and the Global War on Fascism

    (This was originally posted on medium.com)

    I have been struggling recently with where to direct my focus and what I could write about that would add something material to the ongoing debates on “AI”, technology, and politics. Thanks to my friend Randy Bias for this post that inspired me to follow up:

    Screenshot of Randy Bias post on LinkedIn “I notice that a lot of the open source world gets uncomfortable when I start talking about how geopolitics is now creating challenges for open source. I don’t understand this. It’s provably true. Even things at the margins, like the Llama 4 release, which is technically not ‘open’ has a restriction against EU usage. We *must* talk about the geopolitical realities and look for solutions rather than letting us be driven by realtime political trends…”

    This post triggered a few thoughts I’ve been having on the subject. Namely, that open source was born at a time that coincided with the apex of neoliberal thought, an era of free trade, borderless communication and collaboration, and other naive ideologies stemming from the old adage that “information wants to be free.” Open source, along with its immediate forebear, free software, carried a techno-libertarian streak that proliferated throughout the movement. Under the open source umbrella was a wide array of factions: the original free software political movement, libertarian entrepreneurs and investors, anarcho-capitalists, political liberals and progressives, and a hodgepodge of many others who came around to see the value of faster collaboration enabled by the internet. There was significant overlap amongst the factions, and the coalition held as long as they shared mutual goals.

    From 1998, when the term “open source” was coined, until the early 2010s, this coalition held strong, accomplishing much through robust collaboration between large tech companies, startup entrepreneurs, investors, independent developers, general-purpose computer owners, and non-profit software foundations. This was the time when organizations like the Linux Foundation, the Apache Software Foundation, and the Eclipse Foundation found their footing and began organizing increasingly larger swaths of the industry around open source communities. The coalition started to fray in the early 2010s for a number of reasons, including the rise of cloud computing and smartphones, and the overall decline of free trade as a guiding principle shared by most mainstream political factions.

    Open source grew in importance along with the world wide web, which was the other grand manifestation of the apex of neoliberal thought and the free trade era. These co-evolving movements, open source and the advocacy for the world wide web, were fueled by the belief, now debunked, that giving groups of people unfettered access to each other would result in a more educated public, greater understanding between groups, and a decline in conflicts and perhaps even war. The nation state, some thought, was starting to outlive its purpose and would soon slide into the dustbin of history. (side note: you have not lived until an open source community member unironically labels you a “statist”)

    For a long time, open source participants happily continued down the path of borderless collaboration, falsely believing that the political earthquake that started in the mid-2010s would somehow leave them untouched. This naivety ignored several simultaneous trends that spelled the end of an era: Russian influence peddling; Brexit; the election of Trump; Chinese censorship, surveillance, and state-sponsored hacking; and a global resurgence of illiberal, authoritarian governments. But even if one could ignore all of those geopolitical trends and movements, the technology industry alone should have signaled the end of an era. The proliferation of cryptocurrency, the growth of “AI”, and the use of open source tools to build data exploitation schemes should have been obvious clues that the geopolitical world was crashing our party. This blithe ignorance came to a screeching halt when a Microsoft employee discovered that state-sponsored hackers had infiltrated an open source project, XZ Utils, installing a targeted backdoor 3 years after assuming ownership of the project.

    One cannot overstate the impact of this event. For the first time, we had to actively monitor the threats from nation states wanting to exploit our open source communities to achieve geopolitical goals. The reactions were varied. After some time, the Linux Foundation finally admitted that it could no longer ignore the origins of its contributors, demoting the status of some Russian contributors. At the other end of the spectrum is Amanda Brock, who prefers to stay ensconced in her neoliberal bubble, unperturbed by the realities of our modern political landscape.

    Amanda Brock, CEO of OpenUK, described the decision to remove Russian developers from patching the Linux kernel as “alarming”. In a LinkedIn post, she said: “At its heart, open source allows anyone to participate for any purpose. But as we have seen adoption of open source at scale in recent years, to the point where over 90% of the active codebases used by companies have dependencies on open source software, it’s understandable that concerns about risk have been raised by governments.”

    One thing must be clear by now: we find ourselves knee-deep in a global conflict with fascist regimes who are united in their attempts to undermine free republics and democracies. As we speak, these regimes are looking to use open source communities and projects to accomplish their aims. They’ve done it with blockchains and cryptocurrencies. They’ve done it with malware. They’ve done it with the erosion of privacy and the unholy alliance of surveillance capitalism and state-sponsored surveillance. And they’re continuing to do it with the growth of the TESCREAL movement and the implementation of bias and bigotry through the mass adoption of AI tools. This is part and parcel of a plan to upend free thought and subjugate millions of people through the implementation of a techno oligarchy. I don’t doubt the utility of many of these tools — I myself use some of them. But I also cannot ignore how these data sets and tools have become beachheads for the world’s worst people. When Meta, Google, Microsoft or other large tech companies announce their support of fascism and simultaneously release new AI models that don’t disclose their data sets or data origins, we cannot know for sure what biases have been embedded. The only way we could know for sure is if we could inspect the raw data sources themselves, as well as the training scripts that were run on those data sets. The fact that we don’t have that information for any of these popular AI models means that we find ourselves vulnerable to the aims of global conglomerates and the governments they are working in tandem with. This is not where we want to be.

    From where I stand, the way forward is clear: we must demand complete transparency of all data sources we use. We must demand complete transparency in how the models were trained on this data. To that end, I have been disappointed by almost every organization responsible for governing open source and AI ecosystems, from the Linux Foundation to the Open Source Initiative. None of them seem to truly understand the moment we are in, and none of them seem to be prepared for the consequences of inaction. While I do applaud the Linux Foundation’s application of scrutiny to core committers to its projects, they do seem to have missed the boat on the global fascist movement that threatens our very existence.

    We have to demand that the organizations that represent us do better. We must demand that they recognize and meet the moment, because so far they have not.

  • AI Native and the Open Source Supply Chain

    I recently wrote 2 essays on the subject of AI Native Automation over on the AINT blog. The gist of them is simple:

    It’s that latter point that I want to dive a bit deeper into here, but first a disclaimer:

    We have no idea what the ultimate impact of "AI" will be on the world, but there are some profoundly negative ramifications that we can see today: misinformation, bigotry and bias at scale, deep fakes, rampant surveillance, the obliteration of privacy, increasing carbon pollution, the destruction of water reservoirs, etc. It would be irresponsible not to mention this in any article about what we today call "AI". Please familiarize yourself with DAIR and its founder, Dr. Timnit Gebru.

    When I wrote that open source ecosystems and InnerSource rules were about to become more important than ever, I meant it as a warning, not a celebration. If we want a positive outcome, we’ll have to make sure that our various code-writing agents and models subscribe to agreed-upon rules of engagement. The good news is that we now have over 25 years of practice running open source projects at scale, which gives us a basis to police whatever is about to come next. The bad news is that open source maintainers are already overwhelmed as it is, and they will need some serious help to address what is going to be an onslaught of “slop”. This means that 3rd party mediators will need to step up their game to help maintainers, which is a blessing and a curse. I’m glad that we have large organizations in the world to help with the non-coding aspects of legal protections, licensing, and project management. But I’m also wary of large multi-national tech companies wielding even more power over something as critical to the functioning of society as global software infrastructure.

    We already see stressors from the proliferation of code bots today: too many incoming contributions that are – to be frank – of dubious quality; new malware vectors such as “slopsquatting”; malicious data injections that turn bots into zombie bad actors; malicious bots that probe code repos for opportunities to slip in backdoors; etc. – it’s an endless list, and we don’t yet even know the extent to which state-sponsored actors are going to use these new technologies to engage in malicious activity. It is a scary emerging world. On one hand, I look forward to seeing what AI Native automation can accomplish. But on the other, we don’t quite understand the game we’re now playing.

    Here are all the ways that we are ill prepared for the brave new world of AI Native:

    • Code repositories can be created, hosted, and forked by bots with no means to determine provenance
    • Artifact repositories can have new projects created by bots, with software available for download before anyone realizes that no humans are in the loop
    • Even legitimate projects that use models are vulnerable to malicious data injections, with no reliable way to prove data origins
    • CVEs can now be created by bots, inundating projects with a multitude of false positives that can only be weeded out by time-consuming manual checks
    • Or, perhaps the CVE reports are legitimate, and now bots scanning for new ones can immediately find a way to exploit one (or many) of them and inject malware into an unsuspecting project

    The list goes on… I fear we’ve only scratched the surface of what lies ahead. The only way we can combat this is through the community engagement powers that we’ve built over the past 25-30 years. Some rules and behaviors will need to change, but communities have a remarkable ability to adapt, and that’s what is required. I can think of a few things that will limit the damage:

    • Public key architecture and key signing: public key signing has been around for a long time, but we still don’t have enough developers who are serious about it. We need to get very serious very quickly about the provenance of every actor in every engagement. Contributed patches can only come from someone with a verified key. Projects on package repositories can only be trusted if posted by a verified user via their public keys. Major repositories have started to do some of this, but they need to get much more aggressive about enforcing it. /me side-eyes GitHub and PyPI
    • Signed artifacts: similar to the above – every software artifact and package must have a verified signature to prove its provenance, or you should never, ever use it. If implemented correctly, a verified package on pypi.org will have 2 ways to verify its authenticity: the key of the person posting it, and the signature of the artifact itself. (A minimal sketch of this kind of check follows this list.)
    • Recognize national borders: I know many folks in various open source communities don’t want to hear this, but the fact is that code that emanates from rogue states cannot be trusted. I don’t care if your best friend in Russia has been the most prolific member of your software project. You have no way of knowing if they have been compromised or blackmailed. Sorry, they cannot have write access. We can no longer ignore international politics when we “join us now and share the software”. You will not be free, hackers. I have to applaud the actions of The Linux Foundation and their legal chief, Michael Dolan. I believe this was true even before the age of AI slop, but the emergence of AI Native technologies makes it that much more critical.
    • Trust no one, Mulder: And finally, if you have a habit of pulling artifacts directly from the internet in real time for your super-automated devops foo, stop that. Now. Like… you should have already eliminated that practice, but now you really need to stop. If you don’t have a global policy of pushing all downloads through a centralized proxy repository – with the assumption that you’re checking every layer of your downloads – you are asking for trouble from the bot madness.
    • Community powered: It’s not all paranoid, bad stuff. Now is a great opportunity for tech companies, individual developers, enterprises, and software foundations to work out a community protocol that will limit the damage. All of these actors can sign on to a declaration of rules they will follow to limit the damage, quarantine known bad actors, and exchange vital information for the purpose of improving security for everyone. This is an opportunity for The Linux Foundation, Eclipse, and the Open Source Initiative to unite our communities and show some leadership.
    • Bots detecting bots: I was very hesitant to list this one, because I can feel the reactions from some people, but I do believe that we will need bots, agents, and models to help us with threat detection and mitigation.
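
    On the signing points above, here is a minimal sketch of the “never use an unverified artifact” rule, shelling out to stock GnuPG. The file names are placeholders, and GPG stands in here for whichever signing scheme a given repository actually uses (Sigstore-style signatures, attestations, and so on). In practice, a check like this belongs in your centralized proxy repository or CI gate, not ad hoc on a laptop:

      import subprocess
      import sys

      def verify_artifact(artifact: str, detached_sig: str) -> bool:
          """Return True only if the detached signature validates against
          a key already present in the local trusted keyring."""
          result = subprocess.run(
              ["gpg", "--verify", detached_sig, artifact],
              capture_output=True,
              text=True,
          )
          return result.returncode == 0

      if __name__ == "__main__":
          # Placeholder file names; wire this into your download proxy instead.
          if not verify_artifact("package-1.0.tar.gz", "package-1.0.tar.gz.asc"):
              sys.exit("refusing to install: signature verification failed")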

    I have always believed in the power of communities to take positive actions for the greater good, and now is the perfect time to put that belief to the test. If we’re successful, we can enjoy revamped ecosystems improved upon by our AI Native automation platforms: safer ecosystems that can more easily detect malicious actors, and successful communities that can add new tech capabilities faster than ever. In short, if we adapt appropriately, we can accelerate the innovations that open source communities have already excelled at. In a previous essay, I mentioned how the emergence of cloud computing was both a result of and an accelerant of open source software. The same is true of AI Native automation. It will inject more energy into open source ecosystems and take them places we didn’t know were possible. But we must never forget that not all of these possibilities are good.

  • The Rise of AI Native Automation

    I conceived of this blog in 2023 as a skeptic’s outpost, demystifying AI hype and taking it down whenever necessary. I had no interest in fueling a speculative bubble, but as a technologist at heart, I’m always interested in seeing what’s coming down the road. This is my way of saying that this post is not about poking holes in the current hype cycle, but rather about taking a look at what I see developing in the application development world. Because I see major changes afoot, and I’m not sure folks are ready for them.

    No, we’re not all losing our jobs

    This is one of those zombie lies that won’t die. Turns out, sprinkling AI sauce into your IT and build environments makes humans in the loop even more important than before. Humans who understand how to build things; humans who understand how to communicate; humans who know how to write effectively; and humans who can conceive of a bigger picture and break it down into lanes of delivery. In other words, the big winners in an AI world are full-stack humans (I stole this from Michelle Yi. I hope she doesn’t mind) – a career in tech has never been more accessible to a humanities degree holder than now.

    The big losers are the code monkeys who crank out indecipherable code and respond in monosyllabic grunts to anyone who deigns to ask – those “10x” engineers that everyone was lauding just 3 years ago. We already knew source code wasn’t that valuable, and now it’s worth less than ever.

    AI and DevOps

    I had thought, until recently, that as AI infiltrated more and more of application development, the day would come when developers would need to integrate their model development into the standard tools and platforms commonplace in our devops environments. Eventually, all AI development would succumb to the established platforms and tools we’ve grown to know and love, the ones that make up our build, test, release, and monitor application lifecycle. I assumed there would be a great convergence. I still believe that, but I think I had the direction wrong. AI isn’t getting shoehorned into DevOps; DevOps is being shoehorned into AI. The tools we use today for infrastructure as code, continuous integration, testing, and releasing are not going to suddenly gain relevance in the AI developer’s world. A new class of AI-native tools is going to grow and (mostly) obliterate the tools that came before. These tools will both use trained models to get better at the build-test-release application development lifecycle and deploy apps that use models and agents as central features. It will be models all the way down, from the applications being developed to the infrastructure used to deploy, monitor, and improve them.

    Ask yourself a question: why do I need a human to write Terraform modules? They’re just rule sets with logic that define guardrails for how infrastructure gets deployed and in what sequence. But let’s take that one step further: if I train my models and agents to interact with my deployment environments directly – K8s, EC2, et al. – why do I need Terraform at all? Training a model to interact directly with the deployment environments gives it the means to master any number of rulesets for deployments. Same thing with CI tools: training models to manage the build and release processes can proceed without the need for CI platforms. The model orchestrators will be the CI. A LangChain-based product is a lot better positioned for this than CircleCI or Jenkins. The eye-opener for me has been the rise of standards like MCP, A2A, and the like. Now that we are actively defining the interfaces between models, agents, and each other, it’s a short hop, skip, and a jump to AI-native composite apps that fill our clouds and data centers, combined with AI-native composite platforms that build, monitor, and tweak the infrastructure that hosts them.
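
    To make that concrete, here is a small sketch using the MCP Python SDK’s FastMCP server. The scale_deployment tool is my own invention for this example – it just wraps kubectl – but it shows how little glue is needed before an agent can drive a deployment environment directly, with no IaC layer in between. Anything like this would obviously need authentication and guardrails in front of it:

      import subprocess

      from mcp.server.fastmcp import FastMCP

      mcp = FastMCP("deploy-tools")

      @mcp.tool()
      def scale_deployment(name: str, replicas: int, namespace: str = "default") -> str:
          """Scale a Kubernetes deployment to the requested replica count."""
          result = subprocess.run(
              ["kubectl", "scale", f"deployment/{name}",
               f"--replicas={replicas}", "-n", namespace],
              capture_output=True,
              text=True,
          )
          return result.stdout or result.stderr

      if __name__ == "__main__":
          mcp.run()  # serves the tool over MCP's default stdio transport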

    AI Native Tools are Coming

    Once you fully understand the potential power of model-based development and agent-based delivery and fulfillment, you begin to realize just how much of the IT and devops world is about to be flipped on its head. Model management platforms, model orchestrators, and the like become a lot more prominent in this world, and the winners in the new infrastructure arms race will be those tools that take the most advantage of these feature sets. Moreover, when you consider the general lifespan of platforms and the longevity of the current set of tools most prevalent in today’s infrastructure, you get the impression that the time for the next shift has begun. Hardly any of today’s most popular tools were in use prior to 2010.

    DevOps tools have followed a general pattern over the past 30 years, starting with the beginning of what I’ll call the “web era”:

    [Figure: timeline of machine automation in the web era – “custom scripts on bare metal” (1995–2010), “automated IaC, CI ‘cloud native’” (2010–2025), and “MCP, agentic automation ‘AI Native’” (2025–????)]
    Progression of automation in the web era

    The current crop of automation tools is now becoming “legacy,” and their survival rests on how well they acclimate to an AI Native application development world. Even what we call “MLOps” was mostly running the AI playbook in a CI, “cloud native” DevOps world. Either MLOps platforms adapt and move to an AI Native context, or they will be relegated to legacy status (my prediction). I don’t think we yet know what AI Native tools will look like in 5 years, but if the speed of MCP adoption is any indicator, the transition will happen much more quickly than we anticipate. This is perhaps due to the possibility that well-designed agentic systems can be used to help architect these new AI Native systems.

    I also don’t think this will be any great glorious era of amazing transformation. Any migration to new systems will bring about a host of new challenges and obstacles, especially in the security space. I shudder to think of all the new threat vectors that are emerging as we speak to take advantage of these automated interfaces to core infrastructure. But that’s ok, we’ll design agentic security systems that will work 24/7 to thwart these threats! What could possibly go wrong??? And then there are all the other problems that have already been discussed by the founders of DAIR: bias, surveillance, deep fakes, the proliferation of misinformation, et al. We cannot count on AI Native systems to design inclusive human interfaces or prevent malicious ones. In fact, without proper human governance, AI native systems will accelerate and maximize these problems.

    In part 2, I’ll examine the impact of AI Native on development ecosystems, open source, and our (already poor) systems of governance for technology.

  • Will It Let Me Fire Some Guys?

    Cory Doctorow published an excellent essay in Locus about the AI bubble and what will happen when (not if) it goes “bloop”, as bubbles are wont to do. Namely: the money in the AI ecosystem is only sustainable if it allows programs to replace people, and given the prevalence of high-risk applications, that seems highly unlikely. I think he’s absolutely right – go read that first.

    Ok, done? Cool…

    Reading Cory’s essay jogged my memory about some experiences I’ve had over my tech career. The first thought that came to mind was: haven’t we been through this before? Yes, we have. Several times. And each time we learn the same lesson the hard way: paradigm-shifting tech transformations do not, in fact, result in large reductions of workers. Sometimes there may be displacement and reallocation, but never large reductions. No, large reductions happen when businesses decide it’s time to trim across the board or exit certain businesses altogether.

    One particular moment from my career came to mind. I was a product manager at a large storage vendor. We had assembled a small group of large-company CTOs and were telling them about our latest roadmap for storage management automation. We had launched an automation product 3 years prior, and we wanted to assure them that we were committed to continuing our investment (spoiler alert: we were not, in fact, committed to that). So we went through the song and dance about all the great new things we were bringing to the product suite, about how it would solve problems and help our customers be more productive.

    One exchange with a particular CTO is forever seared into my memory. He began by carefully choosing his words, mindful of their impact, but finally said what was really on his mind, and likely on the minds of the rest of the group as well: “Will this let me fire some guys?” I was unprepared for the question. We had just spent the last 2 hours talking about increased productivity and efficiency from automation, so he drew what seemed to him a very logical conclusion: if the product was as efficient and productive as we claimed, then surely he would be able to reduce staff. We hemmed and hawed and finally admitted that, no, we could not guarantee that it would, in his words, let him “fire some guys.” It was as if the air completely left the room. Whatever we said after that didn’t really matter, because it wouldn’t be the magic bullet that let everyone fire a bunch of staff.

    This is a lesson that we keep learning and unlearning, over and over again. Remember cloud? Remember how that spelled the end of sysadmins and half of IT staff? Yeah, they’re still here, but their job titles have changed. Just because you moved things to the cloud doesn’t mean you can be hands-off – you still need people to manage things. Remember Uber? None of these gazillion-dollar-swallowing enterprises or sub-industries of tech have generated anywhere near their original perceived value. And don’t even get me started on crypto, which never had any actual value. Cory’s point is the same: do you really think hospitals are going to fire their radiologists and put all patient screening and lab results in the hands of a machine learning (ahem: advanced pattern recognition) bot? Of course not. And so, a hospital administrator will ask, what’s the point? Do you really believe that hospitals are going to add tens or even hundreds of thousands of dollars to their annual budgets to have both bots AND people? Don’t be absurd. They’ll be happy to make use of some free tools provided by bots, but the humans in the loop will remain. Cory’s other example was self-driving cars. Do you think taxi or other transportation companies are going to pay both drivers (remote or otherwise) and bots for transit services? Be serious. And yet, that’s the only logical outcome, because there is no universe where humans will be taken out of this very high-risk loop.

    The problem is that this is no justification for the billions of dollars being invested in this space. End user companies will happily make use of free tools, keep their humans, and spend as little as possible on tech. That part will not change. So who, then, is going to justify the scope of current investments? No one. That’s why it’s a bubble. Cory’s right. The only thing that remains to be seen is who gets harmed in the aftermath and how badly.

    The intended buyers of this technology are going to ask the same question as that CTO from years ago: will it let me fire some guys? The answer is no. It is always no.