Category: commentary

  • Open Source, AI, and the Global War on Fascism

    (This was originally posted on medium.com)

    I have been struggling recently with where to direct my focus and what I could write about that would add something material to the ongoing debates on “AI”, technology, and politics. Thanks to my friend Randy Bias for this post that inspired me to follow up:

    Screenshot of Randy Bias post on LinkedIn “I notice that a lot of the open source world gets uncomfortable when I start talking about how geopolitics is now creating challenges for open source. I don’t understand this. It’s provably true. Even things at the margins, like the Llama 4 release, which is technically not ‘open’ has a restriction against EU usage. We *must* talk about the geopolitical realities and look for solutions rather than letting us be driven by realtime political trends…”

    This post triggered a few thoughts I’ve been having on the subject. Namely, that open source was born at a time that coincided with the apex of neoliberal thought, corresponding with free trade, borderless communication and collaboration, and other naive ideologies stemming from the old adage “information wants to be free”. Open source, along with its immediate forebear, free software, carried with it a techno-libertarian streak that proliferated throughout the movement. Within the open source umbrella there was a wide array of diverse factions: the original free software political movement, libertarian entrepreneurs and investors, anarcho-capitalists, political liberals and progressives, and a hodgepodge of many others who came around to see the value of faster collaboration enabled by the internet. There was significant overlap amongst the factions, and the coalition held as long as they shared mutual goals.

    From 1998, when the term “open source” was coined, until the early 2010s, this coalition held strong, accomplishing much through robust collaboration between large tech companies, startup entrepreneurs, investors, independent developers, general-purpose computer owners, and non-profit software foundations. This was the time when organizations like the Linux Foundation, the Apache Software Foundation, and the Eclipse Foundation found their footing and began organizing increasingly larger swaths of the industry around open source communities. The coalition started to fray in the early 2010s for a number of reasons, including the rise of cloud computing and smartphones, and the overall decline of free trade as a guiding principle shared by most mainstream political factions.

    Open source grew in importance along with the world wide web, which was the other grand manifestation of the apex of neoliberal thought and the free trade era. These co-evolving movements, open source and the advocacy for the world wide web, were fueled by the belief, now debunked, that giving groups of people unfettered access to each other would result in a more educated public, greater understanding between groups, and a decline in conflicts and perhaps even war. The nation state, some thought, was starting to outlive its purpose and would soon slide into the dustbin of history. (side note: you have not lived until an open source community member unironically labels you a “statist”)

    For a long time, open source participants happily continued down the path of borderless collaboration, falsely believing that the political earthquake that started in the mid-2010s would somehow leave them untouched. This naivety ignored several simultaneous trends that spelled the end of an era: Russian influence peddling; Brexit; the election of Trump; Chinese censorship, surveillance and state-sponsored hacking; and a global resurgence of illiberal, authoritarian governments. But even if one could ignore all of those geopolitical trends and movements, the technology industry alone should have signaled the end of an era. The proliferation of cryptocurrency, the growth of “AI”, and the use of open source tools to build data exploitation schemes should have been obvious clues that the geopolitical world was crashing our party. This blithe ignorance came to a screeching halt when a Microsoft employee discovered that state-sponsored hackers had infiltrated an open source project, XZ Utils, installing a targeted backdoor three years after assuming ownership of the project.

    One cannot overstate the impact of this event. For the first time, we had to actively monitor the threats from nation states wanting to exploit our open source communities to achieve geopolitical goals. The reactions were varied. After some time, the Linux Foundation finally admitted that it could no longer ignore the origins of its contributors, demoting the status of some Russian contributors. At the other end of the spectrum is Amanda Brock, who prefers to stay ensconced in her neoliberal bubble, unperturbed by the realities of our modern political landscape.

    Amanda Brock, CEO of OpenUK, described the decision to remove Russian developers from patching the Linux kernel as “alarming”. In a LinkedIn post, she said: “At its heart, open source allows anyone to participate for any purpose. But as we have seen adoption of open source at scale in recent years, to the point where over 90% of the active codebases used by companies have dependencies on open source software, it’s understandable that concerns about risk have been raised by governments.”

    One thing must be clear by now: we find ourselves knee-deep in a global conflict with fascist regimes who are united in their attempts to undermine free republics and democracies. As we speak, these regimes are looking to use open source communities and projects to accomplish their aims. They’ve done it with blockchains and cryptocurrencies. They’ve done it with malware. They’ve done it with the erosion of privacy and the unholy alliance of surveillance capitalism and state-sponsored surveillance. And they’re continuing to do it with the growth of the TESCREAL movement and the implementation of bias and bigotry through the mass adoption of AI tools. This is part and parcel of a plan to upend free thought and subjugate millions of people through the implementation of a techno oligarchy. I don’t doubt the utility of many of these tools — I myself use some of them. But I also cannot ignore how these data sets and tools have become beachheads for the world’s worst people. When Meta, Google, Microsoft or other large tech companies announce their support of fascism and simultaneously release new AI models that don’t disclose their data sets or data origins, we cannot know for sure what biases have been embedded. The only way we could know for sure is if we could inspect the raw data sources themselves, as well as the training scripts that were run on those data sets. The fact that we don’t have that information for any of these popular AI models means that we find ourselves vulnerable to the aims of global conglomerates and the governments they are working in tandem with. This is not where we want to be.

    From where I stand, the way forward is clear: we must demand complete transparency of all data sources we use. We must demand complete transparency in how the models were trained on this data. To that end, I have been disappointed by almost every organization responsible for governing open source and AI ecosystems, from the Linux Foundation to the Open Source Initiative. None of them seem to truly understand the moment we are in, and none of them seem to be prepared for the consequences of inaction. While I do applaud the Linux Foundation’s application of scrutiny to core committers to its projects, they do seem to have missed the boat on the global fascist movement that threatens our very existence.

    We have to demand that the organizations that represent us do better. We must demand that they recognize and meet the moment, because so far they have not.

  • AI Native and the Open Source Supply Chain

    I recently wrote 2 essays on the subject of AI Native Automation over on the AINT blog. The gist of them is simple:

    It’s that latter point that I want to dive a bit deeper into here, but first a disclaimer:

    We have no idea what the ultimate impact of "AI" on the world will be, but there are some profoundly negative ramifications that we can see today: misinformation, bigotry and bias at scale, deep fakes, rampant surveillance, obliteration of privacy, increasing carbon pollution, destruction of water reservoirs, etc. etc. It would be irresponsible not to mention this in any article about what we call today "AI". Please familiarize yourself with DAIR and its founder, Dr. Timnit Gebru.

    When I wrote that open source ecosystems and InnerSource rules were about to become more important than ever, I meant that as a warning, not a celebration. If we want a positive outcome, we’ll have to make sure that our various code-writing agents and models subscribe to various agreed-upon rules of engagement. The good news is that we now have over 25 years of practice running open source projects at scale, which gives us a basis for policing whatever is about to come next. The bad news is that open source maintainers are already overwhelmed as it is, and they will need some serious help to address what is going to be an onslaught of “slop”. This means that third-party mediators will need to step up their game to help maintainers, which is a blessing and a curse. I’m glad that we have large organizations in the world to help with the non-coding aspects of legal protections, licensing, and project management. But I’m also wary of large multi-national tech companies wielding even more power over something as critical to the functioning of society as global software infrastructure.

    We already see stressors from the proliferation of code bots today: too many incoming contributions that are – to be frank – of dubious quality; new malware vectors such as “slopsquatting”; malicious data injections that turn bots into zombie bad actors; malicious bots that probe code repos for opportunities to slip in backdoors; etc – it’s an endless list, and we don’t yet even know the extent to which state-sponsored actors are going to use these new technologies to engage in malicious activity. It is a scary emerging world. On one hand, I look forward to seeing what AI Native automation can accomplish. But on the other, we don’t quite understand the game we’re now playing.

    Here are all the ways that we are ill prepared for the brave new world of AI Native:

    • Code repositories can be created, hosted, and forked by bots with no means to determine provenance
    • Artifact repositories can have new projects created by bots, with software available for download before anyone realizes that no humans are in the loop
    • Even legitimate projects that use models are vulnerable to malicious data injections, with no reliable way to prove data origins
    • CVEs can now be created by bots, inundating projects with a multitude of false positives that can only be weeded out through time-consuming manual checks
    • Or, perhaps the CVE reports are legitimate, and now bots scanning for new ones can immediately find a way to exploit one (or many) of them and inject malware into an unsuspecting project

    The list goes on… I fear we’ve only scratched the surface of what lies ahead. The only way we can combat this is through the community engagement powers that we’ve built over the past 25-30 years. Some rules and behaviors will need to change, but communities have a remarkable ability to adapt, and that’s what is required. I can think of a few things that will limit the damage:

    • Public key infrastructure and key signing: public key signing has been around for a long time, but we still don’t have enough developers who are serious about it. We need to get very serious very quickly about the provenance of every actor in every engagement. Contributed patches can only come from someone with a verified key. Projects on package repositories can only be trusted if posted by a verified user via their public keys. Major repositories have started to do some of this, but they need to get much more aggressive about enforcing it. /me sideeyes GitHub and PyPI
    • Signed artifacts: similar to the above – every software artifact and package must have a verified signature to prove its provenance, else you should never ever use it. If implemented correctly, a verified package on pypi.org will have 2 ways to verify its authenticity: the key of the person posting it, and the signature of the artifact itself. (A minimal sketch of this kind of consumer-side check appears after this list.)
    • Recognize national borders: I know many folks in various open source communities don’t want to hear this, but the fact is that code that emanates from rogue states cannot be trusted. I don’t care if your best friend in Russia has been the most prolific member of your software project. You have no way of knowing if they have been compromised or blackmailed. Sorry, they cannot have write access. We can no longer ignore international politics when we “join us now and share the software”. You will not be free, hackers. I have to applaud the actions of The Linux Foundation and their legal chief, Michael Dolan. I believe this was true even before the age of AI slop, but the emergence of AI Native technologies makes it that much more critical.
    • Trust no one, Mulder: And finally, if you have a habit of pulling artifacts directly from the internet in real time for your super automated devops foo, stop that. Now. Like… you should have eliminated that practice already, but now you really need to stop. If you don’t have a global policy for pushing all downloads through a centralized proxy repository – with the assumption that you’re checking every layer of your downloads – you are asking for trouble from the bot madness. (A minimal proxy configuration sketch also appears after this list.)
    • Community powered: It’s not all paranoid, bad stuff. Now is a great opportunity for tech companies, individual developers, enterprises, and software foundations to work out a community protocol that will limit the damage. All of these actors can sign on to a declaration of the rules they will follow to quarantine known bad actors and exchange vital information for the purpose of improving security for everyone. This is an opportunity for The Linux Foundation, Eclipse, and the Open Source Initiative to unite our communities and show some leadership.
    • Bots detecting bots: I was very hesitant to list this one, because I can feel the reactions from some people, but I do believe that we will need bots, agents, and models to help us with threat detection and mitigation.
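
    To make the signing and artifact bullets concrete, here’s a minimal sketch, not a hardened tool, of the consumer-side check: refuse to use a downloaded artifact unless its SHA-256 digest matches a value you pinned yourself and a detached GPG signature verifies against a key already in your keyring. The file names and the pinned digest are placeholders of my own, not anything a particular repository mandates.

    ```python
    # A rough sketch, not a hardened tool: verify a downloaded artifact before use.
    # Assumes a detached GPG signature is published next to each artifact and that
    # you pin the expected SHA-256 digest somewhere you control (e.g. a lockfile).
    import hashlib
    import subprocess
    import sys


    def sha256_of(path: str) -> str:
        digest = hashlib.sha256()
        with open(path, "rb") as handle:
            for chunk in iter(lambda: handle.read(1 << 16), b""):
                digest.update(chunk)
        return digest.hexdigest()


    def verify_artifact(artifact: str, detached_sig: str, pinned_sha256: str) -> bool:
        # 1. Integrity: the bytes we downloaded are the bytes we pinned.
        if sha256_of(artifact) != pinned_sha256.lower():
            print("checksum mismatch -- do not use this artifact")
            return False
        # 2. Provenance: a key already in our keyring signed exactly these bytes.
        #    gpg exits non-zero if the signature is bad or the signing key is unknown.
        result = subprocess.run(["gpg", "--verify", detached_sig, artifact])
        if result.returncode != 0:
            print("signature verification failed -- do not use this artifact")
            return False
        return True


    if __name__ == "__main__":
        # usage: python verify_artifact.py <artifact> <detached.sig> <sha256>
        artifact_path, sig_path, expected_digest = sys.argv[1:4]
        sys.exit(0 if verify_artifact(artifact_path, sig_path, expected_digest) else 1)
    ```

    And for the “trust no one” bullet: in a Python shop, the simplest version of a centralized-proxy policy is to point pip at an internal proxy index (Artifactory, Nexus, devpi, or similar) so that the proxy, not every laptop and CI job, is where downloads get checked and quarantined. A minimal sketch, assuming a hypothetical internal hostname:

    ```ini
    # /etc/pip.conf (or ~/.config/pip/pip.conf) -- minimal sketch of a proxy policy.
    # Route every `pip install` through an internal proxy index instead of the
    # public internet; the hostname below is a placeholder for your own proxy.
    [global]
    index-url = https://pypi-proxy.internal.example.com/simple
    ```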

    I have always believed in the power of communities to take positive actions for the greater good, and now is the perfect time to put that belief to the test. If we’re successful, we can enjoy revamped ecosystems improved upon by our AI Native automation platforms: safer ecosystems that can more easily detect malicious actors, and communities that can add new tech capabilities faster than ever. In short, if we adapt appropriately, we can accelerate the innovations that open source communities have already excelled at. In a previous essay, I mentioned how the emergence of cloud computing was both a result of and an accelerant for open source software. The same is true of AI Native automation. It will inject more energy into open source ecosystems and take them places we didn’t know were possible. But we must never forget that not all of these possibilities are good.

  • The Revenge of the Linux Distribution

    Some things appear in hindsight as blindingly obvious. And to some of us, perhaps they seemed obvious to the world even at the time. The observations of Copernicus and Galileo come to mind. To use a lesser example, let’s think back to the late 2000s and early 2010s when this new-fangled methodology called “devops” started to take shape. This was at a moment in time when “just-in-time” (JIT) was all the rage, and just-in-time continuous integration (CI) was following the same path as just-in-time inventory and manufacturing. And just as JIT inventory management had weaknesses that were exposed later (supply chain shocks), the weak points of JIT CI were exposed in recent security incidents. But it wasn’t always thus – let’s roll back the clock even further, shall we?

    Party like it’s 1999

    Back when Linux was first making headway towards “crossing the chasm” in the late 90s, Linux distributions were state of the art. After all, how else could anyone keep track of all the system tools, core libraries, and language runtime dependencies without a curated set of software packaged up as part of a Linux distribution? Making all this software work together from scratch was quite difficult, so thank goodness for the fine folks at Red Hat, SuSE, Caldera, Debian, and Slackware for creating ready-made platforms that developers could rely on for consistency and reliability. They featured core packages by default that would enable anyone, so long as they had hardware and bandwidth, to run their own application development shop and then deliver those custom apps on the very same operating systems, in one consistent dev-run-test-deploy workflow. They were almost too good – so good, in fact, that developers and sysadmins (ahem, sorry… “devops engineers”) started to take them for granted. The heyday of the Linux distribution was probably 2006, when Ubuntu Linux, which was based on Debian, became a global phenomenon, reaching millions of users. But then a funny thing happened… with advances in software automation, the venerable Linux distribution started to feel aged, an artifact from a bygone time when packaging, development, and deployment were all manual processes, handled by hand-crafted scripts created with love by systems curmudgeons who rarely saw the light of day.

    The Age of DevOps

    With advances made in systems automation, a question began to be asked, reaching a crescendo in the early to mid-2010s: “why do we need Linux distributions if I can pull any language runtime dependency I need at a moment’s notice from a set of freely available repositories of artifacts pre-built for my operating system and chip architecture?” Honestly, it was a compelling question, although it did lead to iconic graphics like a certain well-known one from XKCD.

    For a while it was so easy. Sure, give me a stripped down platform to start with, but then get the operating system out of the way, and let me design the application development and deployment layers. After all, any competent developer can assemble the list of dependencies they will need in their application. Why do I need Red Hat to curate it for me? Especially when their versions are so out of date? The rise of Docker and the race to strip down containers was a perfect example of this ethos.

    A few incidents demonstrated the early limitations of this methodology, but for the most part the trend continued apace and persists to this day. But now it feels like something has changed. It feels like curation is suddenly back in vogue. Because of the risks from typo-squatting, social engineering hacks, and other means of exploiting gaps in supply chain security, I think we’ve reached something of a sea change. In a world where the “zero trust” buzzword has taken firm hold, it’s no longer in vogue to simply trust that the dependencies you download from a public repository are safe to use. To compensate, we’ve resorted to a number of code scanners, metadata aggregators, and risk scoring algorithms to determine whether a particular piece of software is relatively “safe”. I wonder if we’re missing the obvious here.
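
    To make that concrete, here’s a rough sketch of what one of those “scanner” lookups boils down to: ask a public vulnerability database whether a pinned dependency has known advisories. This one uses the OSV.dev query API; the package name and version are placeholders for whatever sits in your lockfile, and real scanners layer scoring, license checks, and metadata heuristics on top of lookups like this.

    ```python
    # A simplified sketch of the "scanner" step: query OSV.dev for known
    # advisories against a pinned dependency. Real tools add scoring, caching,
    # and policy on top of lookups like this one.
    import json
    import urllib.request

    OSV_QUERY_URL = "https://api.osv.dev/v1/query"


    def known_vulnerabilities(name: str, version: str, ecosystem: str = "PyPI") -> list:
        payload = json.dumps({
            "version": version,
            "package": {"name": name, "ecosystem": ecosystem},
        }).encode("utf-8")
        request = urllib.request.Request(
            OSV_QUERY_URL,
            data=payload,
            headers={"Content-Type": "application/json"},
        )
        with urllib.request.urlopen(request) as response:
            body = json.load(response)
        return body.get("vulns", [])


    if __name__ == "__main__":
        # Placeholder coordinates -- substitute an entry from your own lockfile.
        for advisory in known_vulnerabilities("example-package", "1.0.0"):
            print(advisory.get("id"), advisory.get("summary", ""))
    ```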

    Are We Reinventing the Wheel?

    Linux distributions never went away, of course. They’ve been around the whole time, relegated to the uncool corner of the club, but they’re still here. I’m wondering if now is the moment for their return as the primary platform for application development. One of the perennial struggles of keeping a distribution up to date has been the sheer number of libraries one has to curate and oversee, which runs into the tens of thousands. Here’s where the story of automation can come back and play a role in the rebirth of the distribution. It turns out that the very same automation tools that led some IT shops to get too far out over their skis and place their organizations at risk also allow Linux distributions to operate with more agility. Whereas in the past distributions struggled to keep pace, automated workflows now allow curation to operate quickly enough for most enterprise developers. Theoretically, this level of automated curation could be performed by enterprise IT, and indeed it is at some places. But for teams who don’t have expertise in open source maintainership or open source packaging, the risk is hard to gauge.

    Is It Time for a Comeback?

    I don’t know for a fact that Linux distributions are poised to return to the center of application development, but I do know that much of what we’re doing to isolate and mitigate risk – security scanning, dependency curation, policy enforcement, and scorecards – feels an awful lot like what you get “out of the box” with a distribution. Enterprise IT has moved to a different delivery model than what existed previously, and moving away from that is not trivial. But if I were looking to start an organization or team from scratch, and I wanted to reduce the risk of supply chain attacks, I would probably elect to outsource risk mitigation to a curated distribution as much as possible.
  • Whither the OSPO?

    I read Dirk Riehle’s recent post on the OSPO Lifecycle, and it conjured up some thoughts that I’ve had recently and have been meaning to write down. Something has been bothering me about the concept of Open Source Program Offices (OSPOs) within corporations and where they fit in value stream discussions, especially since a few OSPOs suffered waves of layoffs and saw a reduction in scope. As a professional OSPO guy, I certainly took notice, and it made me think. In Dirk’s post, he points out that the OSPO provides an important leadership function, mostly at the start. Over time, as the company’s open source involvement matures, the OSPO reaches an inflection point and transitions from a thought leadership role to one of coordination and support. The mature OSPO performs a support function for open source governance and compliance, as well as procedural guidance for the few lucky engineers who get to actively participate in external communities. This makes sense if you think of the OSPO as an atomic entity, riding a 5-year lifecycle from inception to “business as usual”.

    But what if OSPOs are not atomic entities? When I think about how OSPOs function, what is often missed is their role in developer productivity. Back when OSPOs were first stood up inside tech vendors, before they were even called OSPOs, a big incentive was vendors wanting to capture value from software produced by collaborative communities. Vendors wanted to be able to reliably use community-produced software embedded within products that they sold. This required a different view of supply chain and product management than had ever existed before, and OSPOs were the chosen vehicle for doing so. Along the way, these vendors discovered that an additional source of value was learning how to collaborate in an open source way. Suddenly, they weren’t just pulling software down from communities, they were actively collaborating with those communities. What OSPOs helped vendors achieve was producing products using the principles of open source collaboration. To me, the enablement of community collaboration and the embrace of open source principles was always the primary value of an OSPO. In that light, to constrain the scope of an OSPO to one of coordination and support is to miss the primary opportunities for value.

    What’s in a Name?

    I think a maturing OSPO needs a name that reflects its aspirational scope. If the ultimate value of an OSPO is measured in developer productivity, then perhaps what’s holding it back is the name. A “program office” may seem like an interesting place to invest if you’re a tech vendor, but the words “program office” have a very different meaning inside large enterprises, one largely associated with bureaucratic functions.

    One of the messages I have incorporated into a lot of my talks since 2013 is that open source communities have been the greatest source of innovation for over two decades, going back to the Linux boom of the late ’90s. Any large enterprise would do well to at least attempt to replicate the success of open source communities and instill open source principles into its engineering teams. And if you can expand your “shift left” methodologies to include open source supply chains in your SDLC, then you benefit directly from the innovation produced by these communities. This is where an OSPO can add the most value, if that value is recognized and invested in. I don’t know exactly what the name should be, but since accelerated innovation and higher developer productivity are the end goals, the name should reflect that.

    I think when OSPOs grow up, they should become Centers of Innovation and Developer Productivity. Let’s face it, the term “open source” doesn’t grab people like it used to. It became what we always thought it would be – a means to an end. A tool. Instead, let’s focus on the outcome we’re looking to drive: Innovation and Developer Productivity.

  • Why is This Site Called Pro-Life?

    You may have noticed the name of this blog and wondered what this is all about. Am I going to scream at you that abortion is murder and that we must stop the baby killers? No. Well… unless the subject is infant and maternal mortality in the United States, in which case I will tell you that our terrible racist healthcare “system” and lack of reproductive rights do in fact put babies, and their mamas, at risk. The United States leads the industrialized world in infant and maternal mortality, and not in a good way.

    There are a number of reasons why this is the case:

    • Lack of comprehensive health care – the US leads the world in bankruptcies from illness
    • Rampant poverty, especially among younger women of color of childbearing age
    • High rates of unwanted pregnancies (for a number of reasons – will go into detail in a future blog post)
    • Relatively poor health: high rates of diabetes and other chronic, debilitating health issues, as well as the lowest life expectancy among industrialized countries
    • Lack of prenatal care (will address this in the future – know that this is connected to the US’ overall rejection of reproductive rights for women)

    For every point made above, there is a readily available solution. In fact, every other industrialized nation has solved these problems, and it would be relatively easy for the US to address them. The irony is that those most opposed to abortion – those with the gall to call themselves “pro-life” – have resisted every opportunity to address any of the above issues. Every. Single. Time. Indeed, they are the ones most vehemently opposed to addressing these problems. Sickening, no? Isn’t it odd that those who call themselves “pro-life” are actually ensuring that more women and children die?

    One of the reasons I started this blog and named it “We Are Pro-Life” is because we, those of us who actually care about people in our communities, we are the real pro-life advocates. We are the ones who advocate for trans lives. We are the ones who defend black lives. We are the ones with the core belief that everyone is equal in the eyes of our creator.

    We. Are. Pro. Life.

    Not those other clowns.

  • There is No Open Source Community

    In January 2006, I published this article on O’Reilly’s OnLAMP.com site, which was recently shut down. I’ve always been proud of this essay, because I think I got a lot right. I’m republishing it now in the hopes that it will continue to educate others – and perhaps allow others to critically evaluate where I fell short in my arguments. The central thesis is here:

    The commoditization of software and a gradual, long-term reduction in price have played far more important roles than previously recognized. Business strategy designed to leverage open source should focus more on economies of scale (in terms of user and developer bases) and less on pleasing a mythical, monolithic community.

    Basically, stop treating open source as a social movement, because it’s not. This false assumption has caused much harm to software developers and users alike (more on that in a follow-up article). However, while I’m busy patting myself on the back for writing about software commoditization, I missed something fairly big: source code itself is essentially worthless. This may actually have been more important than the price of software.

  • Open Source and SaaS

    Now that I work in an engineering environment tailored for SaaS development, I’ve developed a better understanding of the challenges SaaS companies face when open sourcing their code. I wrote it up for OpenSource.com in a 2-part article, “How to decide whether to open source your SaaS solution.”

    Some tidbits:

    The decision to open source code requires a fair bit of planning if you want to do it right, especially when it comes to user support and documentation. In the case of SaaS, the required planning is different, although it shares some factors with any open source effort. In my series, How to Make Money from Open Source Platforms, I focused on software that exists solely to be deployed on a computer, whether on a local machine, in a data center, or in a cloud platform (yes, I know the last two are redundant).

    There was a simple reason for this focus: It was what I knew. In my career, I have always worked with software, of both commercial and community origin, that was meant to be installed somewhere. Now I work directly with engineers who take software designed to work solely on their website and with their particular infrastructure, automation, and orchestration. The fact that they have been able to take this software and offer it to others in a way that is not only usable but can actually power other businesses is a testament to their commitment to an open source world.

    This article attempts to summarize their experience and point out lessons to draw from it. I’ll also try to identify how open source efforts relate to business and product strategy for SaaS models.

    I try to go into some level of detail, using my favorite tool: supply chain funnel analysis. If you’re looking into taking your SaaS code open source, I hope this helps you.

    Read the full article

  • TechRepublic: Open Source and Corporate Funding

    I have more to say about this. See the original article on TechRepublic.

    The basic argument goes like this: “individual developers working in their mom’s basement no longer drive open source development! Now it’s all about the corporate $$$$.” My initial thought is “duh”. I’ve always felt that the narrative about a decentralized army creating amazing software that undermined large vendors was entirely wrong. So it’s not that open source is “increasingly” about corporate funding – it was *always* about corporate funding. And as I’ve mentioned elsewhere, open source is not free software. Free software, also known as software freedom, has been about the rights of individual developers and users against the IP cabal of the TIC (techno-industrial complex). Open source was about, “yeah, that’s great – but how can I profit from that?”

    So congrats to TechRepublic for being about 15 years behind. I guess?

  • Is Open Source More Risky?

    There’s been a long-running debate over open source and security, and it goes something like this:

    Pro: Open source is awesome! Given enough eyes, all bugs are shallow. This is why open source software is inherently more secure.

    Con: Hackers can see the code! They’ll look at the source code and find ways to exploit it. This is why open source software is inherently more insecure.

    And on and on… ad nauseam. There are a variety of studies that each side can cite to make their case. The problem, as I see it, is that we’re not even talking about the same thing. If someone says open source software is more or less secure, what are they actually talking about? Do they mean software you download from the web and push into production? Or do they mean vendor-supported solutions? Unless we can agree on that, any further discussion is pointless.

    Open Source Products

    So let’s shift the conversation to an apples vs. apples comparison so that we’re discussing the same things. According to a survey by Black Duck, upwards of 96% of commercial software solutions use open source software to some extent. This means virtually *all* new software solutions use open source software. So, when someone argues whether open source is more or less secure, the question to ask is, “more or less secure than *what*?” Because as we can see, the number of software solutions that *don’t* use open source software is rapidly dwindling.

    To save everyone’s breath, let’s change the dynamics of this conversation. Let’s compare “raw” upstream open source code vs. supported software solutions backed by a vendor. As I’ve mentioned before, you can do the former, but it helps if you’re Amazon, Google or Facebook and have an army of engineers and product managers to manage risk. Since most of us aren’t Amazon, Google or Facebook, we usually use a vendor. There are, of course, many grey areas in-between. If you choose to download “raw” code and deploy in production, there are naturally many best practices you should adopt to ensure reliability, including developing contingency plans for when it all goes pear-shaped. Most people choose some hybrid approach, where core, business-critical technologies come with vendor backing, and everything else is on a case-by-case basis.

    So, can we please stop talking about “open source vs. proprietary”? We should agree that this phrasing is inherently anachronistic. Instead, let’s talk about “managed” vs. “unmanaged” solutions and have a sane, productive discussion that can actually lead us forward.

  • DevOps is not enough

    Or: My source code is your platform, and vice-versa.

    https://twitter.com/i/moments/897859467529912321

    https://twitter.com/johnmark/status/897837253946466304