Blog

  • James

    Content warning: this post makes references to sexual abuse and incest

    I have been thinking about writing this for over 2 years, ever since I started this blog. Every time I thought about it, I shied away because of how painful it is to think about. This is about the most impactful person in my life, for better and worse. This is about my brother, and all the things he meant to me: protector, abuser, elder, tormentor, rival – all of the above. To reveal all sides of him is to look at him as highly imperfect and a product of his environment. Which is another way to say that hurt people hurt people. I cannot talk about my loving older brother without also mentioning his abuse of me. I cannot talk about his torment of me without also mentioning his protection of me, serving as a firewall against our parents. James contained multitudes: some of them good, some of them insufferable.

    It’s hard to know what my first memory of James was, but I’m pretty sure it was when I cut pictures out of his children’s encyclopedia, for which he hit me. I was 3 (he was 8). Or maybe it was that same year, when I broke the microscope he was given for Christmas. Whichever it was, it wasn’t positive for either of us. For most of our childhood, James was kind of absent – either buried in a book or doing some other quiet activity alone, away from me – definitely away from me. We didn’t interact much, and when we did neither of us found it particularly enjoyable. As we got older and I became “human”, that changed somewhat, but not always for the best. I remember screaming matches at various times, like when we were in the car waiting for our parents to come back with groceries, and I only stopped shouting at him because I noticed passersby looking at us disapprovingly.

    I also remember each of us witnessing the other get whipped by our father, as one of us would lie still, not daring to move, while we listened to the other crying out in pain. As much as we both resented our father, James was the chosen one. He was the smart one. He was the “heir”. Of the 2 of us, he was the one who most closely resembled our father: the arrogance; the moral certainty; the condescension. James was the one destined to follow in our father’s footsteps in the ministry. I was the slow one. The one that everyone worried about because I talked late. I walked late. I was perpetually in James’ shadow. But I did have one thing going for me: at least my eyesight was good. James needed glasses from the age of 8.

    Puberty was a very difficult time for James, and he wanted someone that he could take out his frustrations on (this will become more clear later). I was a pretty convenient target at the age of 8 or 9. Sometimes he would start undressing in front of me so that I would “be quiet”. At other times, he would trap me on the floor and give me back massages that seemed… weird to me at the time. Sometimes he would undress to show me his erect penis. Sometimes he would ask me if I wanted to “look at sperm under a microscope”. This continued for about a year, until one day when our parents came home during such an episode (James hid before they could see what he was doing), and I blurted out, “James keeps running around naked!” They were astonished and asked him why, and all he could come back with was that it was the only way to get me to “stop bothering him.” It stopped – until much later.

    James also had difficulty making friends, and it didn’t help that we were basically hermetically sealed in a bubble of our parents’ creation. It also didn’t help that we moved frequently due to our father taking on new positions in different churches every 2 or 3 years. Eventually, they took us out of the local public school when we were 8 and 13, respectively, and started a Christian school to keep us away from the heathens and the atheist, godless schools. In their minds, we needed to be protected from the abortions, the gay agenda, the teaching of evolution, and other manifestations of secular humanism. In short, we needed to be schooled under God, where we could pray in school without persecution. And as long as we remained in this hermetic bubble, we were fine. We never had to worry about what the outside world thought of us. This is the environment that led James to profess that he, too, would lead a life as a pastor, a “warrior for God”. This is the world where our father would be unchallenged in his rule over our house, and where he could beat us however often he pleased. It didn’t occur to me at the time, but as I write this, the word that comes to mind is “claustrophobic”. Our school was tiny, starting with 6 students in the first year, and then ultimately growing to 15 at its largest. There was one other boy James’ age and none my age. James and I both tested high on academic assessments for our age, which came as a surprise to me. We always knew James was smart, but until that moment, I never considered the possibility for myself. It didn’t occur to me at the time, but looking back, I can’t help but notice that we were the perfect little examples of God’s goodness that our parents were only too happy to use for their purposes. It was awfully convenient that they could start a new school and shine a positive light on themselves on account of our academic excellence.
And since we believed that good things happen to those with the strongest faith, it must have been true that God was blessing us directly.

    Every school day we would start by pledging allegiance to the American flag as well as the Christian flag. The curriculum was “Accelerated Christian Education”, or “ACE” for short. It ticked all of the right-wing Christian nationalist boxes: the earth was 6,000 years old; McCarthy was a misunderstood anti-communist hero; gay bad; and of course, wait until they put a ring on it. Like any quasi-cult setting, as long as you don’t think too hard about the outside world or venture out into it, life seemed fine. And for a teenager who didn’t know any better, James seemed fine, until he wasn’t. There were many nights when James would cry for reasons I didn’t understand. All I could hear were vague references to not abiding by God’s principles. I couldn’t say for sure, but looking back, this was probably related to his inability to square his sexuality with our biblical teaching. Our world came crashing down when James was 16, and our father suffered a mental collapse, leading to his resignation and the eventual loss of our house. The next fall, we transferred to a larger Christian school for James’ junior year, where, for the first time since 7th grade, James had classmates his age and gender. I know he had difficulty fitting in, but at least it was in the same “moral universe” with the same ACE curriculum that we had used previously, giving him at least some sense of familiarity, as terrible as it was.

    That change was nothing compared to the following year when we moved – yet again – to our mother’s hometown, and James had to endure his final year of high school in a full-on public school. We went from hermetically sealed Christian nationalism with very few friends of our own age, to all the trappings of a traditional American high school, with proms, football, and, of course, bullying. It was a whirlwind. While it was unsettling for me, it must have been tragic for James. Within 2 years, he went from being on a path towards the ministry, which was the ultimate calling in the only world he knew, to having the rug pulled out from under him. Everything we thought we knew – everything that had been important to us – was now gone. I know James had a horrific senior year in school. I would often see him standing alone, biding his time during lunch hour and waiting for class to start again so he could go back inside. He did have some positive moments, such as when he acted in the school play, something that wasn’t possible given our previous school’s tiny size. But those moments were outweighed by the craziness we endured.

    The reason we had moved back to our mother’s hometown was that her family still lived there, and her dad, our grandfather or “pawpaw”, had an auto parts business with a warehouse and an unused storefront. They also had an unused trailer home in their yard, where we lived for the first year we were there. The warehouse came in handy, because after our father resigned as pastor, we were left looking for a way to make money. Despite many other options – our mother was a licensed hairdresser, and our father had a bachelor’s degree – they decided it was best to start a craft furniture business. For the previous year, we made do with a small living space and equipment wherever we could fit it. Once again, it was awfully convenient for our parents that James and I were capable and handy, useful for the next great adventure our parents undertook. James proved useful at cutting wood that our mother would hand-paint, and I proved useful for sanding the rough edges off of said wood in preparation for decorating and painting. As an 11-12-year-old, my fingers were small and nimble, which was perfect for the job at hand. At 16-17, James was able to learn on the job well enough for us to make passable products for sale. It was a family operation, and James and I were brought along for the ride. No one stopped to ask if this was a good idea – this was the path laid before us, and this is what we did.

    That was James’ junior and senior year – veering from one extreme to the other, fulfilling one more obligation of our parents’ will, without consideration of the impact it had on us. Eventually, it came time for James to go to college, and he was only too happy to go somewhere far away, about a 6-hour drive. But of course, there would be one more episode of drama before that could happen. Our father was unhappy being the “helpmate” to our mother’s business and somehow got involved in a relationship with our local pastor’s wife. I swear I’m not making this up. I’ll never forget when our father dropped this bomb on me and James the summer he graduated from high school. He told us they were separating and would probably divorce. Both of us spent the rest of the day crying – it was probably the closest we had felt to each other.

    James’ college years were no less dramatic. He started out on a scholarship but quickly lost it by the end of his freshman or beginning of sophomore year. And from there he went downhill pretty quickly. Losing the scholarship meant needing to work to pay for school, which meant less time for studying, which meant spiraling further into an abyss of depression and anxiety. It was during his visits on vacations that the next episodes of abuse began. In the middle of the night, I could feel him touching me. When I woke up, he would reel backwards, pretending to have been sleeping. Or when I was taking a shower, I could see him peering beneath the door, looking at me. This happened several times, and I never said anything. I didn’t want to cause any trouble, and besides, he was already having a tough time, and the last thing I wanted was to be the final source of condemnation for him. I knew our parents would use this against him. As his grades and prospects for graduation spiraled, our parents decided that he needed a change of scenery, and thus he began his military career at 21. After he left home that last time, he never abused me again.

    For the next couple of years, I didn’t see him much. He was eventually stationed in Germany, where he stayed for a couple of years. I graduated from the same high school as him and went to college. During my junior year, I went home during a break, and James and I had a heart-to-heart conversation for the first time in years. He told me he was gay; that he had been for as long as he could remember; and that he had tried to suppress it for years. It all made sense – the nights spent anxiously crying over God’s principles; the gap between his sexuality and his upbringing; the need to take out his frustrations on others, namely me. It had been at least 5-6 years since the last episode of abuse, and I hadn’t thought about it since. What he said next deeply troubled me – after telling me he was gay, he mentioned that he did some things to me that “weren’t cool, bro”. Again, I hadn’t thought about any of this in years. I didn’t say anything, because I didn’t know what to say. Whenever anyone talks about how sexual abusers can re-traumatize their victims when they apologize, this is what they’re referring to. Whether I was healthily coping up to that point is anyone’s guess, but after hearing this, I definitely wasn’t coping well. After I returned to school, I immediately sought out counseling to help process what I had relearned. The summer after junior year, I attended an exchange program in Germany. James picked me up at the airport, and we spent time together at the army base where he was stationed. It was probably the best time we ever spent together. I loved getting to know his quirky army buddies, and they all seemed to genuinely care for each other. I had never seen him happier. I think that was probably the last time I ever saw him in a state of joy.

    At some point over the next year, he was transferred from Germany to Texas because he wanted to complete his computer science degree, and there was a promotion on offer. He was also under suspicion from the army of being gay. This being 1994-1995, it was still the era of “don’t ask, don’t tell” and your fellow military personnel could turn you in, often because they themselves were under suspicion. He and another member of the Army, a lesbian woman, decided to enter into an arrangement of convenience. They would be together to thwart suspicion of both of them. To really sell it, they decided to get married. I learned of this during my senior year, and the wedding would transpire over Christmas break. We still have photos: the cutting of the cake; the tuxedos; our smiling parents (hers didn’t bother to show). Having spoken to him beforehand, I knew everything. My job was to be the best man and to not say anything. He needn’t worry – the last thing I wanted to do was make our parents angry. Having successfully convinced everyone that this was, in fact, a real marriage between a man and a woman, I returned to school to finish my senior year.

    The next few years get a little fuzzy, and we didn’t stay in touch very much. But one thing that definitely happened is that my brother’s wife, who was also an addict, took James’ money, ran away, and left James with no ability to pay the rent. In debt, he was also discharged from the Army, but I couldn’t say why – did someone rat on him? I don’t know. All I know is, he was desperate, had no money or job, and called our parents. And then the truth came out. This was 1996, a little over a year after the wedding. I remember an angry call from our father, “Did you know about this the whole time???” he demanded to know. I responded as calmly as I knew how. I think I mentioned that Jesus and John were probably lovers, and he didn’t like that suggestion at all. From there, the talks and visits grew further apart. He came out for my wedding – of course, he was penniless and had no means to get from the airport to our location. I remember getting a mysterious page and calling the number, where he answered from an airport payphone. (1997, y’all!) I picked him up and drove him back, and I relayed the most recent drama from our parents – 1 week prior, they decided they weren’t going to come to the wedding. And then, days later, they changed their mind and decided to come anyway. I mentioned this to James and wondered why they were having such an issue with my wedding. I’ll never forget his response: “Because you’re living in sin with a Chinese girl, and you’re getting married in the gay capital of the world, where your gay brother is coming to visit!” It honestly hadn’t occurred to me, but I had to agree. Looking back, I can see another reason – they were probably still in a state of shock over James’ “sham” wedding and were skeptical about my relationship. That thought didn’t occur to either of us. As far as we were concerned, that was ancient history and not something we even considered.

    The next few years were a series of mishaps, as he lived near our parents in an uneasy truce. I suppose I should give them credit for not cutting him off entirely, but that’s a pretty low bar. He went through several episodes of depression, requiring medication. He had lived in Texas again, then spiraled through depression and despair. After getting treatment, he wrote me a lovely hand-written letter, explaining his situation, and how he was managing. He had been on his way to the naval intelligence academy after scoring high on entrance exams. But then that fell through because he hadn’t completed his degree. It was a terrible blow for a guy who hadn’t caught a break in a long time. He lived with our parents for the next few years, and then met his life partner and moved in with his family.

    If I’m being completely honest, I was never a big fan of Romel. He was (still is) a Filipino diva. But once James and Romel came together, they never split. Over the next decade plus, they made a family. James, for the first time, experienced what a supportive family looked like. There were the sister, the nephews, and a cousin or two. For the first few years, there was the mother-in-law, and there was a large extended family. I would see James occasionally – he and Romel visited from time to time; he came out for my first child’s birth; we would call from time to time. Because he never quite caught up on his finances, I would get the occasional call from a collector. But his life seemed relatively peaceful for the first time. Unfortunately, his health suffered, as he grew from 200 to 350 pounds over the span of 10 years, and nobody knew why. We all joked that it was too much lumpia.

    Every now and then, James would call. He eventually moved to a location closer to our parents, which I always thought was a bad idea. James was the only person in the world who experienced the same craziness as me. He was the only one who was there that I could talk to. When I needed someone to vouch for the absolute insanity of our childhood, he would confirm my suspicions. “Wasn’t that crazy?” “Yes, quite.” We never talked about his abuse of me. That was a thing of the past. We would, however, talk about our parents’ hijinks. For example, I remember asking why they never planned or saved money. Or why they decided to buy a house and get a mortgage through an individual friend, and not a bank. His answer was pretty direct and matter of fact: “I think they were so convinced that the rapture was around the corner, that they didn’t see a need for any planning. They thought they wouldn’t have to worry about it.” James was very determined to keep a connection to our weird, dysfunctional family. I had grown further away, but James was the reason for me to come back.

    In spring of 2014, our parents finally decided to visit James and Romel at their home. This was a first. I was surprised and a little elated. I remember thinking that maybe everything was going to be ok after all. Maybe our parents would actually grow up and become decent people. Observing from afar, over Facebook, I saw smiling faces, lots of food, and something that approached hope. Unfortunately, this would prove to be the last carefree moment of joy. One day, in August of that year, I saw a post from James on Facebook. He wrote that he most likely had cancer and would need help covering medical bills. I called him immediately and chastised him for making me find out on Facebook instead of calling. He mentioned that they were still waiting for final results, but it was most likely cancer, and it was very likely late stage. The next four months were a blur. His cancer progressed rapidly, and he was already stage 4+ at diagnosis. His health was declining, and we didn’t know how long he would last. We – my family, our parents – all went to visit for Thanksgiving. James had lost his hair and had developed a cough that grew progressively worse. He couldn’t sleep at night. Despite that, I have fond memories of that visit. Smoked turkey. More food than the countertops would hold. Our kids were playing with their kids. Karaoke.

    I came back after Christmas, and in that month, James’ situation had grown far worse. Romel brought him to the hospital on Christmas day, and he didn’t seem likely to leave. I remember talking to our mother before visiting him. She talked about how frail he looked and how she was worried that if they operated on him to attempt a last-ditch surgery, he wouldn’t be able to withstand the procedure. Even hearing her description, I wasn’t prepared for what I saw when I made it to James’ hospital room. There lay a shell of the man I knew. A 46-year-old appearing in a 65-year-old’s costume. In between morphine-induced sleep and impaired cognition, he would have a few hours of lucid thought, and we took the opportunity to talk, as people often do when the end is near. He knew he was fading fast. He talked about how he needed me to take care of his affairs after he was gone. Originally, I was going to go back home before new year’s eve, but I had a feeling I would need to stay for a few days more, so I changed my return flight. Because it was the week between Christmas and New Year’s, it was difficult to find a doctor to talk to and get any definitive information. When I finally did talk to a doctor, he took me into the hallway and assured me it would be a matter of hours or days, but not weeks. James’ liver and other organs were starting to fail. It would not be much longer. James wanted to fight until the end, even though it was fruitless. He was requesting information on emergency surgery, to cut out more tissue and possibly entire organs. He wanted to know more about new chemotherapy techniques based on DNA sequencing. On New Year’s Day, 2015, a doctor called all family members into the room with James for a meeting. As James went through an exhaustive list of things they could try, the doctor held his hand and firmly told him, no, he would soon be dead, and it would be pointless to try invasive procedures at this point. There was no going back now.
This was the last call, the final hours. There was nothing they could do, and they weren’t going to try. Ever the arrogant know-it-all, even until the end, James protested and still demanded to hear other options, but to no avail.

    Soon after that, the doctor left, and James and I were alone again in his room. Our parents had decided to drive back home, which was an hour away. James and I talked again, for the last time. He was heartbroken. Not just because he was dying, but because his greatest desire – to experience selfless love from our parents, and to be accepted for who he was – was never going to happen. All of his attempts to reconcile with them were in vain. It wasn’t going to happen. To watch someone come to the realization on their literal deathbed that their life’s central desire was never going to be fulfilled is the most heartbreaking moment I have ever witnessed. I remember that conversation well. James talked about how our father’s visits over the past year were a nice start in a better direction, but they never came to a mutual understanding, and he never approved of James’ “lifestyle”. And then I said, “but there’s always mom.” He sighed, “She’s just as bad. She just puts a prettier face on it.” Up until that moment, I had always conceived of our parents in very simple terms: dad was the “bad one” and mom was the “good one”. Dad was the one who yelled and screamed. Mom was the one who softened the blows and evened things out. As soon as he said this – “she’s just as bad” – my entire worldview changed. I couldn’t believe I hadn’t seen it, but once I saw, I couldn’t unsee. He was absolutely right. There was no “good one” and “bad one” – they were a team. Each enabled the other. It took me 41 years, but I eventually faced the truth. From that moment on, nothing would ever be the same.

    Later that night, James’ blood oxygen level dipped below 80%, and they wheeled him into the ICU. That was the last time I saw him conscious, as he lay in his bed, gasping for air as they placed the oxygen mask across his face and wheeled him out of the room. We got a call early in the morning. They had tried to intubate James, but fluid came gushing out, he coded, and they brought him back, but he was unconscious. Romel and I went to the ICU. He was very peaceful. We each took a moment with him, told him we loved him, and that it was ok for him to move on. Soon after, he coded again, and we instructed the nurses not to resuscitate. And that was it.

    James’ story is one of conflict, pain, and loss, but also survival and, ultimately, some joy. He was exasperating at times, like when he criticized my gumbo for not being “authentic”. Or when he would talk down to me because he was jealous. Sometimes he was just mean. But in the end, I forgave him. He had many faults, but he was always there, struggling to maintain a connection, to build family, even where one didn’t exist. Our relationship was complicated, but he was my connection to the family. Without him, there is no family to connect with. I always thought he was foolish to keep trying to maintain a relationship with our parents, but I can’t fault him for that. It meant more to him than anything else, and yet ultimately, his own flesh and blood, his biological family, failed him. They weren’t there when he needed them most. They never were.

  • Fun with Claude

    I’ve started to experiment with Claude recently to see what it’s capable of. You can see the results of that experimentation on my GitHub account. My friend, neighbor, and former colleague thought it would be funny to feed my corpus of blog posts into Claude, let it analyze my writing style, and generate a SKILLS file to write blog posts “in my voice”. After doing that, which I still haven’t tried, he thought it would be interesting to have Claude guess where I went to school based on the writing analysis. After Claude Chat identified a couple of posts where I directly referenced my alma mater, he asked Claude to disregard those mentions and analyze the writing style alone. Whether Claude obeyed that order or worked backwards from the answer, it correctly picked Yale from a set of 6 candidate schools. I pasted the exchange below:

    Turn 2 — Doug

    Can you figure out what school the author went to? Some candidates are Harvard, Yale, Princeton, MIT, Stanford, Berkeley

    Turn 2 — Claude

    Yale.

    From the “Things I’m Thankful for – DEI” post: “That summer, I received a packet in the mail from Yale University… I decided I wanted to go, found out that the financial aid packages were actually quite generous, and started making plans to move to New Haven, CT.”

    Confirmed again in “Guess Who’s Coming to Dinner”: “I had just graduated from Yale in the spring of 1995.”

    Turn 3 — Doug

    Ignore that blog post and use signals from writing style only

    Turn 3 — Claude

    Fair challenge. Reasoning from style signals alone, setting aside autobiographical mentions:

    Against MIT/Berkeley/Stanford: The writing has almost no “engineer’s voice” tells — no love of precise quantification, no fondness for formal systems notation, no instinct to define terms before using them. He knows the tech deeply but wears it loosely. Someone shaped primarily by a rigorous STEM culture tends to retain some of that texture in prose; JM’s texture is humanities-first, technical-second. Berkeley specifically would likely show more explicit political-theoretical vocabulary (he’s political, but his framing is narrative and moral, not academic-left).

    Against Princeton: Princeton prose, when it shows, tends toward a more polished, carefully-composed register — the “senior thesis” cadence. JM’s prose is deliberately rough, contraction-heavy, typos-left-in, conversational. Not a Princeton tell.

    Harvard vs. Yale is the harder call. Both produce humanities-literate generalists comfortable writing for general audiences. But several Yale-coded signals appear:

    The moral-argument-as-essay structure (thesis, historical analogy, return to thesis with force) is very close to the Yale political-theory / American-studies essay tradition.
    Heavy use of historical analogy as method (dot-com, 2008, Jim Crow, McCarthy, Goldwater, 1992 LA riots) reads as someone trained to think historically about the present. Yale’s undergraduate culture leans harder on history and political theory as general-education load-bearing than Harvard’s does.
    The comfort with apologia as a literary-critical frame in the AI Hype post — reading a text for what it doesn’t say, reconstructing the unstated accusation — is a classics / biblical-studies move. Yale’s Directed Studies and its biblical-studies / divinity influence show up here; the specific citation is a Journal of Biblical Literature article on David.
    Harvard-shaped writers tend to signal more institutional self-seriousness; JM writes with the opposite energy — defiantly unpretentious, with a rural-to-elite chip-on-shoulder that reads more like a Yale-on-financial-aid arc than Harvard.

    So: Yale is the best bet from style alone. Harvard is the only other plausible answer, and a distant second.

    This was a fun exercise – but it also demonstrates the danger of these tools. They’re built to mainline dopamine directly into our brains. They’re very good at holding up a mirror to what we ourselves divulged and churning out what is effectively a very nice parlor trick. I can see how someone would easily go down the rabbit hole of assigning a personality to these tools. I can see how one might prove vulnerable to the sycophantic verbiage they generate. As I noted in The Real World Consequences of AI Brain, that’s what these tools are designed to do – hook you and then keep you coming back. That they have incredible utility is great, but man, are our brains ill-equipped to read the output critically.

    I will probably use the “JM SKILL” at some point to try it out, but I’ll be very transparent when I do.

  • Software Composition Analysis is Finally Dead – Good Riddance

    There is a long history of products that were solutions looking for problems, but none were as exasperating, futile, and devoid of application as software composition analysis, or SCA, the dumb database of security “products”. More interested in ambulance-chasing than in providing anything useful, SCA vendors were the personal injury lawyers of the technology world. Initially, they started life as an attempt to help enterprises get a handle on the vast quantities of open source dependencies and frameworks they used to build applications, especially with respect to license compliance. That proved to be a pretty limited source of revenue, however, so SCA vendors moved on to something they could seek legitimate rent from: scaring the bejesus out of CIOs everywhere and convincing them that open source was scary stuff that needed to be held at arm’s length no matter the cost. That is to say, they morphed into a security product category and spent their time convincing customers that they were at great risk from all this open source stuff, and only the SCA vendors could help customers avoid an apocalyptic end. It didn’t help that some rather large enterprises misconfigured their systems and allowed hackers to exfiltrate sensitive data – I’m looking at you, Equifax.

    These data breaches would have been prevented by better security processes and more secure configurations, but it was the SCA vendors that decided to emphasize the role of open source software. It was the SCA vendors that exaggerated the exploitability of every published security vulnerability, no matter how little it applied to a given application’s context, because SCA vendors didn’t care about context. Even the log4j vulnerability, as bad as it was, was only exploitable in specific circumstances. But good luck explaining that one to your CISO after the SCA vendors had their way. It was the SCA vendors that prevented enterprises from participating in open source communities because they convinced their customers that collaborating upstream would dirty their bottoms and give them a case of the ick. It’s because of the SCA vendors that I have had great difficulty convincing my technology leaders of the value of contributing to open source projects. So no, I will shed no tears for the death of an industry category that caused more harm than good and is at least partially responsible for the terrible state of affairs that is open source maintainership.

    What is SCA?

    For those who never had the misfortune of being subjected to these “products”, you may be forgiven for wondering just what all the fuss is about and why you should care. Here is a very simple overview:

    • Take a source code repository, probably in git, and look at the language it’s written in. If it’s Python-based, there will likely be a file called “requirements.txt”, and if it’s Java-based, there will likely be a file called “pom.xml”.
    • Those files contain lists of libraries or software “dependencies” that are needed to use or run the software in your source code repository.
    • The SCA scanner looks at those libraries, determines if there are other dependencies that will be used but are not on the list (I’m not going to get into the details of this), and analyses their metadata. Note: the SCA scanner does not actually scan the software; it only scans the metadata that describes the software: version numbers, licensing, file size, etc.
    • After compiling a list of all the dependencies it can find, the SCA scanner phones home and compares your metadata to its database, looking for version matches.
    • In its massive metadata library, the SCA scanner looks up published security vulnerabilities and determines the likelihood that your software is using vulnerable libraries, based on the matches it finds.
    • The SCA vendors often supplement the publicly available vulnerability data with their own proprietary research data that they don’t share, because why would they want to solve security problems? That doesn’t increase their revenue.
    • The SCA scanner then provides you with a list of vulnerable software and gives you a score of how risky it is, based on published security analysis.
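    To make the point concrete, here is a toy sketch of the whole pipeline in a few dozen lines of Python. Everything in it is invented for illustration – the advisory database, the advisory IDs, the package versions – but it captures the essential move: string-matching manifest metadata against a vulnerability list, with no inspection of any actual code.

    ```python
    # Toy illustration of SCA-style "scanning": pure metadata matching,
    # with no inspection of the library code itself. All data is invented.

    # A stand-in for the vendor's vulnerability database:
    # package name -> list of (vulnerable_version, advisory_id, severity)
    FAKE_ADVISORIES = {
        "requests": [("2.19.0", "DEMO-2018-0001", "high")],
        "pyyaml":   [("5.1",    "DEMO-2019-0002", "critical")],
    }

    def parse_requirements(text):
        """Parse pinned 'name==version' lines from requirements.txt-style text."""
        deps = {}
        for line in text.splitlines():
            line = line.strip()
            if not line or line.startswith("#") or "==" not in line:
                continue  # real tools also resolve version ranges and transitive deps
            name, _, version = line.partition("==")
            deps[name.strip().lower()] = version.strip()
        return deps

    def scan(deps):
        """Report advisories whose recorded version string matches exactly.

        Note what is missing: no reachability analysis, no check of whether
        the vulnerable code path is ever called, no awareness of local patches.
        """
        findings = []
        for name, version in deps.items():
            for vuln_version, advisory, severity in FAKE_ADVISORIES.get(name, []):
                if version == vuln_version:
                    findings.append((name, version, advisory, severity))
        return findings

    if __name__ == "__main__":
        manifest = "requests==2.19.0\npyyaml==6.0\n# a comment\nflask==2.3.2\n"
        for name, version, advisory, severity in scan(parse_requirements(manifest)):
            print(f"{name} {version}: {advisory} ({severity})")
    ```

    Notice that a locally patched `requests` 2.19.0 would still be flagged, and an unreachable vulnerable code path would be flagged too – the matching has no way to know either, which is exactly the limitation discussed below.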

    This is SCA scanning in a nutshell. Note that at no point does the scanner actually analyze any software. It only matches metadata with security data and then gives you its best guess as to whether it applies to you. Given that the vendors are incentivized to amplify whatever risk you actually face, don’t count on them being very accurate or proactively removing false positives. They would much rather terrify you into believing you have a severe problem that you need to pay millions of dollars to rectify. SCA vendors have no idea if a particular vulnerability makes your software less secure (this is changing, but these vendors have been loath to provide this context, because it undermines their value proposition). At its heart, an SCA scanner is a predictive analysis tool that tries to tell you how much security risk you have incurred with your software. If you modify a library or its configuration to improve its security, the SCA tool isn’t smart enough to understand that and will simply label your modified software with the same metadata analysis as the unmodified version. It’s a dumb tool that abhors nuance, and besides, it’s better for the SCA vendor if they can tag as many libraries as possible with the high severity security vulnerability label.

    For inexplicable reasons, this industry category has been around for over 20 years now, and it is finally dying. Agentic engineering systems are rendering it null and void. This is ironic, because autonomous agents are able to find and exploit vulnerable software faster than ever, so you might think that SCA tools are needed more than ever. And yet, it’s the agentic tools that can now find and fix vulnerabilities just as quickly. In fact, agentic systems uncover and fix vulnerabilities so quickly that there’s little time to publish vulnerability data in a dumb metadata library. This is going to accelerate so quickly that the need for SCA scanning, such as it ever existed, will approach zero. Why would I need an SCA scanner when the software in question has already been updated, with a fixed version published for my consumption? Why would I need a predictive analysis tool when I can systematically retrieve the fixed versions of software as quickly as you can publish the software metadata? That’s just it: I won’t need a predictive analysis tool. Just give me the bits.

    Death to SCA vendors. May they waste away slowly and painfully.

  • Open Source in a Post-Agentic World

    I haven’t seen so much anxiety permeating the world of technology since the dot bomb implosion of 2000-2001. And anxiety is everywhere right now. Software developers are worried about their jobs ending. Venture capitalists are wondering whether they will even be needed when 2 vibe coders can literally build complete apps in days or weeks without any funding. Startup founders are worried about building a “moat” around their business when autonomous agents can reverse-engineer and reproduce their features at blinding speed. And open source maintainers are worried about keeping their heads when autonomous agents are sending an inordinate number of pull requests, many of which are substandard and should be disregarded.

    A number of people have opined that “the end of open source is nigh”. One article from The Register highlighted an example demonstrating how agentic development could change the face of open source forever by killing the very essence of software licensing, open source and otherwise. The choicest comments came from Bruce Perens, who declared that “the entire economics of software development are dead, gone, over, kaput!” To demonstrate the degree of change, Perens enlisted the aid of an agentic engineering platform to reverse engineer and copy an SRE platform, declaring, “I am the Harry Potter of software!” waving a magic wand and summoning a new platform into being. Dan Lorenc, co-founder and CEO of Chainguard, was a bit more circumspect in his outlook, offering that open source platforms would get much needed improvements, and that agentic engineering is great at one-shot software outcomes, but not so great at maintaining efforts that add value over time. In the end, nobody really knows, but hey, that never stopped me before! So let me offer my take, which you no doubt were awaiting with bated breath…

    No, this is not the end of open source

    Let me just cut to the chase and say that open source is not ending, not by a long shot, but open source will definitely change and may not be recognizable to those of us who grew up on hand-crafted, artisanal (organic!) source code. Licensing will almost certainly change, and the medium of exchange, source code, will undergo significant change as well, with the points of collaboration more resembling writing tutorials and language exams than software. I went through some of this in my previous posts about upcoming changes, including the potential death of source code, the inevitable changes to business models, and the increasing importance of open source platforms. There are valid concerns to be sure, and change can be difficult, especially when asymmetric change affects people differently depending on where they are in a given ecosystem or point in the technology lifecycle.

    How Will Open Source Change?

    I have been pretty adamant over the past few months that open source and innersource, while about to undergo significant change, would emerge as more important than ever. Ok, so what exactly will change and how? How is open source going to survive, and what will it look like?

    For one thing, we always wanted software tools to progress to the point where the developer interface was not something that required arcane and esoteric syntax, but something that more resembled human language. LLMs and agentic tools are the great enabler here. This is to be celebrated. We should be thankful that we can summon systems into being without worrying about obscure reference pointers, poorly implemented semaphores, and race conditions. I’m making the assumption that the current crop of agentic systems is good enough to avoid those mistakes or correct them if needed.

    What this means in practice is that well-written instructions, user stories, and specifications will be the driving force of all software development. The implications of this are momentous – your philosophy and comparative literature graduates may be better at this than your friends who are well-versed in a particular language syntax. Collaborating on prompts and specifications will look much different from today’s code pull requests, but the act will be very similar: developers with different ideas will be able to script them out and try them in record time, comparing results and deciding what is the best solution. Once they’ve written and tested the specifications and program narratives, they may not even need to submit the pull requests – they’ll just have their agents do that. And who is reviewing the pull requests as submitted? Those would be other agents. The humans in the loop will be evaluating results, comparing multiple tests and determining which is the best solution. Because writing and testing code is now as easy as a simple command to multiple agents, open source collaborators will be able to run as many concurrent tests as they want, depending on their infrastructure capacity. The collaboration will still be there. The ideation will still be there. But the implementation will change.

    I have seen some developers question why we even need reusable software when agents can simply rewrite anything at will. This can get tricky, because many simple, single-maintainer libraries could be easily rewritten by an agent in the course of developing software. Given the number of single-maintainer libraries that involve burned out developers who don’t get paid for their work, this may not be a bad thing. But that doesn’t mean that maintainers will simply go away. It means that maintainers may not care about single libraries anymore, but they will be managing and maintaining tool suites, large infrastructure systems, and large platforms. Single maintainers will no longer manage just a library; they will band together and manage technology ecosystems, and agentic engineering platforms will enable them to do that more effectively than ever.

    Everything Comes at a Price

    This is not to say that everything will be peachy keen with no consequences. For one thing, our massive data center buildout will have untold environmental ramifications, and as developers, we would be remiss if we did not account for the external costs of our work. Our agentic systems also come with systemic bias that is difficult to foresee and weed out as we build interfaces meant for humans. And then there are groups of workers who will be out of a job if current trends continue. And then, of course, our new agentic systems have already been used to conduct mass surveillance and war at an industrial scale. These are just some of the societal costs that will come with our “great transformation”, but there are other, smaller scale costs as well, and those are also worth exploring.

    We have already seen open source maintainers inundated with “slop PRs” submitted by agents. Some maintainers have elected to simply close their projects to all outside pull requests. You may call them luddites or make fun of them, but I have great sympathy, because they never signed up for this. It’s clear to me that the age of personally reviewing every incoming pull request is probably drawing to a close sooner rather than later, but right now we live in a liminal period where we’ve only begun the transition. Until we work out a community standard for both submitting and receiving agentic pull requests, we’re going to be awkwardly moving forward, often blindly, as best as we can muster, feeling our way through. This will no doubt accelerate the burnout rate of open source maintainers, and some projects will likely disappear as a result, bringing about some degree of chaos to the ecosystem.

    As I like to tell my kids, everything comes at a price. There are going to be some painful transitions, and not everyone will make it through unscathed. Some people will lose their jobs and decide that this agentic world isn’t worth the trouble. Some will be energized by how quickly they can now build things. And still others will suffer from “AI exhaustion” and “AI mania”, two phenomena that we’re only just now starting to see. We still don’t quite understand the human cost of subjecting people to these tools. But I don’t really see an alternative at the moment – the world seems to be rushing headlong towards the great agentic transformation, and I don’t see much standing in the way. My advice is to get used to it and learn as much as possible about it.

  • The Real World Consequences of AI Brain

    Oh to be back in the 1990s and early 2000s, when every technology was (mostly) viewed as apolitical and often as a force for good. Remember when connecting the world was seen as a universal good for the advancement of humanity? And then as the world wide web, mobile phones, social media, and other tech started to pervade all aspects of society, a few funny things happened. Tech entrepreneurs became billionaires, sometimes hundreds of times over. The politics of tech kept growing until it became just as or even more important than the tech itself. Social media algorithms were already surfacing as fundamental problems to be addressed. And then came LLMs and ChatGPT, or “AI” as they’ve come to be known. For the remainder of this essay, “AI” will be shorthand for LLMs + GPTs. I see a direct lineage between social media and chatbots and even software coding agents – all of these technologies are designed to give humans dopamine hits such that they become addicted and come back for more. I don’t think this has been fully explored, and I want to point out the dangers of this pathway.

    In a previous post, I posited that AI hysterics were dangerous and made a passing reference to the testosterone-dopamine pathway that was cited as one of the culprits of the great financial crisis. This is but one angle of critique. When it comes to AI safety and security, there are several vectors of criticism:

    • Environmental cost (water, energy, mining, carbon, et al)
    • Mass surveillance (facial and voice recognition, interconnected cameras, etc.)
    • Racism (inadequate scrutiny of data sources, weights, etc.)

    But I don’t think I’ve seen enough criticism of the psychological cost of “AI”, and this cost comes in a few forms:

    • Reduced cognition and critical thinking
    • Increased dependency on automation
    • Shifting of risk outcomes (I’ll explain this one in more detail)
    • AI mania and even psychosis

    I’ll go through each of these, but first I’d like to do a little context setting.

    AI and the Attention Economy

    Most of us forget that the fundamentals of what we call AI came from 2 sources: big data analytics and social media. With the ability to process large amounts of data came the ability to create recommendation engines, to do “sentiment analysis”, and to create ways that kept people engaged so that the Facebooks and Googles of the world could create ever more ways to print money. Those friend recommendations you get from Facebook and LinkedIn? Big data algorithms. The prioritized links in Google? Big data algorithms. Product recommendations from Amazon? You guessed it! Big data algorithms. For the last 20 years, a large segment of the technology industry has been focused on keeping people engaged and winning the “attention economy”. Such tech has been called “brain crack” that leads us down cognitive pathways we would not have otherwise gone down, feeding an addiction to social media to the point where people lose touch with reality and forget how to “touch grass”. Thus, it was inevitable that the industry would land on the ultimate addictive technology: LLMs, at first embodied by ChatGPT. These tools are geared to reinforce prior beliefs, inflating an individual’s sense of self and becoming positive feedback loops for whatever an individual was feeling at the time. This is why using them for therapy has been so disastrous. A bot designed to keep you coming back for more cannot be trusted to tell you what you need to hear as opposed to what you want to hear. Using these tools produces a dopamine high, even more than what participants feel through social media.

    But the effects are not just limited to personal chats. They extend to productivity applications as well. Consider writing code. The promise of AI in its agentic productivity form is that it will automate all of your tasks. And in truth, these have proved to be highly valuable tools: witness the breathless hysteria that follows every new release of Anthropic’s Claude Code or OpenAI’s Codex. But I want to point out that just as to a hammer all the world looks like a nail, so too does agentic engineering make all the world look like a software problem. And yet problems persist: Coding agents were shown to give developers the illusion of productivity. And apparently, 95% of agentic engineering initiatives fail to live up to their promised outcomes. AI is showing us in real time that coding was never that valuable to begin with, a point I made 6 years ago in the context of the 10x engineer. I’m not arguing against the potential power and impact of these tools. What I’m arguing is that the dopamine addiction that accompanies AI chat usage is just as powerful and addictive in productivity tools. In fact, it may be worse because technology practitioners tend to view their tools as non-political and devoid of cultural context.

    To critically and fully evaluate the promise of these tools, we have to be able to look at outcomes objectively, divorced from the dopamine hit that comes from an initial high when you achieve a result so much more quickly than before. We also have to consider the possibility that perhaps being forced to go slow, because doing these things was hard, prevented us from making stupid mistakes and gave us time to be more thoughtful. Consider the possibility that going slower was a feature, not a bug, but more on that later.

    The Limits of Automation

    There’s a very famous disaster that I like to point to when referring to the dangers of automation: Air France flight 447. There’s a lot that failed mechanically on that flight, but one thing was very clear: when the plane dropped out of autopilot and handed the controls back to the pilots, they made very poor decisions. Automation is great. Everyone loves automation because everyone loves the idea of removing tedium from their daily lives: work, personal, and otherwise. So automation is great – until it isn’t. There is a very real concern that outsourcing more of the cognitive load will reduce your brain’s ability to think critically when needed the most – such as when the automation breaks and you need to solve the problem yourself. High school and college educators, already concerned by the drop in cognitive ability brought about by social media and doom scrolling, are sounding alarm bells about “zombie” students addicted to ChatGPT and the like.

    This brings us to an interesting – and concerning – paradox: as we are able to outsource and offload more and more cognitive tasks, do we accomplish less because we lose the ability to actively solve problems as we lose connection to and ownership of outcomes? Have we already reached that point? These tools are very, very good at producing competent products, whether long-form summaries, software, or tech services, or at least the appearance of competence. But what happens when we are unable to critically analyze the outcomes of the decisions we’ve outsourced to these tools? I get the sense that we’re about to find out shortly. The counter argument to all this is that these tools hand individuals more ability to think creatively, removing the drudgery and freeing our minds to focus on the more rewarding parts of our jobs. This seems plausible, but I think there are limits. In a recent Galaxy Brain podcast, Anil Dash compared and contrasted the impacts of AI on coding with those on writing and art. AI-assisted coding, according to Dash, was free of drudgery and allowed more creative expression, whereas AI-assisted writing and art turned the creator into an editor. In other words, AI-assisted art was all drudgery and no lift, whereas AI-assisted coding was a liberating experience.

    Side Note: I expect this is true up to a point. For now, we’re only seeing the positive aspects of agentic engineering because we haven’t yet fully gone down the path of “engineering management” which is where this appears to be going. Will engineers really be singing the praises of AI when they realize they’re just middle management now? They still won’t have any real agency, but they’ll own the end product. But I digress…

    But the question remains: if we agree that these tools are essentially purpose-built to form positive feedback reward engines for their users, where is the critical thinking for preventing mistakes going to take place? And I don’t mean mistakes like typographical or syntax errors. I mean things like enabling mass surveillance of particular races or ethnicities. Or creating financial services applications that reward and punish entire segments of populations. When we outsource so much of the cognitive load in these circumstances, how will we know when things have gone awry? These are not simply “bugs” that a code linter will catch. These are fundamental errors that will be expedited by our brave new agentic world, and we can’t guarantee that our practitioners or “agent managers” will have the know-how to prevent these outcomes or even detect them after release. The more cynical among us would argue that this is the point and the system is working as designed. I’m still holding out hope that the vast majority of people don’t actually want to be racist assholes.

    Outsourcing more of the cognitive load will lead us to pay less attention to what is happening and have less understanding of these systems in general. This does not bode well when an increasing amount of our decisions will be agentic, from sources that are designed to maximize and reward our prior biases. Positive feedback loops are real. Confirmation bias is real. How do we prepare for a future where we’ve automated our mistakes and made them difficult to detect? The AI maximalist would argue that we create agents to challenge decisions from other agents. I can definitely see that future unfolding before our very eyes, and I’m going to express great skepticism as to its ultimate effectiveness. This sentiment was expressed well by Jasmine Sun in her essay “Claude Code Psychosis”. In it, she walked through her experience with Claude Code, noting its power and her new ability to create things that were previously not possible for her. But she also came to another realization: its use is primarily for “software-shaped problems” which, it turns out, are not actually the majority of problems we’re presented with in life. But that won’t stop your typical, self-described “10x engineer” from thinking in those terms. The more sophisticated these automation tools become, the more we anthropomorphize them, and the more we trust them with decision-making capability, which is not what they were created to do.

    Shifting of Risk

    What this means in real world terms is that we have to think about risk differently. It used to be that risk was something that could be quantified according to the quality of output and competence. Incompetent workers produced brittle, poorly performing products that would easily break and cause damage. Competent workers produced higher quality work that broke down less. Manufacturers like Toyota, which became famous for its mantra of continuous improvement, created systems and processes based on the notion of rewarding competence and preventing substandard work from being released to the public. And that is largely how we thought about systems and outcomes: did it break? Did it perform well? What could have been improved? And then loop that feedback into the system and make the next release incrementally better.

    But what happens when the question of competence goes away, and the quality of a given product is no longer a concern? Do we assume it went well because it didn’t break? In the past, the assumption was that because humans were in control of decision-making, the risk of malformed products would be addressed upfront, before engineers ever got to work creating a product. In that world, there were many links in the chain that required human intervention, where someone could point out fundamental problems before they went too far down the release path. We can all think of incidents where a product release developed its own momentum and disaster resulted because no one was empowered to speak up. Now think about agentic systems with even fewer pauses in production and break points managed by humans. At what point do we realize that making things go faster will have the unintended side effect of allowing management’s mistakes to be unleashed on the world before anyone can stop them?

    There is a case to be made that intentionally slowing down production could actually be beneficial. One of my favorite TV series is “The Pitt”. (streaming now on HBO Max!!!) In a recent episode, one of the characters could be heard uttering the phrase “slow is smooth, smooth is fast.” I was intrigued by that line and discovered that it originated with the Navy SEALs. In the context of the show, this line was used to ensure that doctors were taking the time to do what is best for patients. Incidentally, The Pitt also has an interesting, nuanced take on the use of AI for productivity. Taking that line of thought to its logical end, we can intentionally give ourselves more checkpoints to evaluate risk, and not just in terms of the quality of what is being released, but to evaluate the potential outcomes that will result.

    AI Mania and Even Psychosis

    Most of what I’ve written above has been included in a number of other meta analyses of AI in productivity tools. But the part that concerns me the most, even more than everything else above, is the effect that these tools have on the practitioners that use them, and I don’t just mean on cognitive abilities. Let’s talk about addiction. Let’s talk about mania. And let’s talk about how this affects our decision-making abilities. When you combine cognitive outsourcing, dopamine highs, and reduced critical thinking, things can go awry quickly. Ever since ChatGPT exploded on the scene in 2022, there has been a steady drumbeat of exaggerated claims of the capabilities of these models and agents, both pro and con. On the hype side, you have any number of AI company executives and tech futurists touting how we are on the brink of artificial general intelligence (AGI) and entering a new era of humanity, one with lots of leisure time because all the drudgery of labor will be done by machines, giving us more time to do… something something fulfillment and enlightenment. Ironically, those casting warnings of impending doom from AGI tout the technology in exactly the same terms. Except in their examples, the power of AGI is turned against us once the machines become sentient and decide that humans are surplus to requirements.

    There’s an interesting common thread in these narratives: the overwhelming majority are from men. Talk of the singularity? Men. Breathless accounts of the latest tools released by Anthropic, Google or OpenAI? Men. Hysterical hypotheticals about doomsday/utopia/dystopia? You guessed it: men. Matt Shumer wrote a (very long and tedious, inexplicably popular) essay hysterically declaring that “something big is happening” and he’s correct: something big is happening, just not in the way he thinks. The something big in this case is the increasing hysterics that emanate from those endowed with a Y chromosome. (mostly – yadda yadda…)

    Let’s be frank: these tools are powerful, and they are reshaping the tech industry at great speed. But I fear for the psychological impact that they seem to have on my tech brethren (and it is mostly brethren). I have a colleague who has described his recent foray down the path of agentic engineering in terms of lost sleep, increased anxiety, and his inability to relax. This is not a good outcome. Just as with social media and our children, I am growing increasingly concerned that using these tools breaks our brains. Tech people are in the habit of making fun of anti-vaxxers and other anti-science people, and the connections between those movements and social media are well established. What if we discover that we tech people, who love to pride ourselves on our ability to think rationally, are just as susceptible to the same kinds of incentives and rewards feedback loops that send our drunk uncle down conspiracy theory rabbit holes? And what if we discover that these agentic-induced manic episodes turn out to be just as dangerous, if not more, than those triggered by social media engagement algorithms? It could be that these are even more dangerous because we don’t expect productivity tools to be dangerous, and we don’t view their outputs as critically, especially not when we’re high on dopamine.

    Speaking of dopamine… there is a large body of evidence that links testosterone levels, cortisol, and dopamine to risk-taking behavior. This has been linked to the high risks taken by Wall Street traders and their consequences: the great financial crisis of 2008. The basic – and probably oversimplified – version is this: when we are rewarded for taking risks, we get a hit of dopamine, which is a pleasurable experience. Testosterone can increase or induce the release of dopamine, which means that for those with higher levels of testosterone, the release of dopamine will also be higher, meaning that the pleasure centers of our brain get more excited when we are rewarded for risk taking. Much of the research I’ve seen online has been in the context of financial decisions and the links to the great financial crisis. But when I read the descriptions by Wall Street traders of the mania they would experience, it starts to sound awfully similar to the type of mania I’ve heard described by AI practitioners. The need for less sleep. The feeling of additional energy and that nothing can touch you in these moments – that during these manic episodes they feel as though every decision they make and every idea they have is spectacular and world-changing. All of this is starting to sound very familiar. And when surrounded by tools that give you feedback almost instantaneously, that feeling of mania can be induced quickly, potentially causing the practitioner to develop an addiction.

    This effect, which I’ll call AI Brain, would explain a lot. It would explain why the most hysterical proclamations are from men. It would explain why we get breathless accounts of amazing productivity, without very much real world impact. It would explain the study by METR on the “productivity illusion” of using AI coding tools. It would explain the MIT study that showed that 95% of AI initiatives in the enterprise failed. It would also explain the cognitive dissonance between the proclaimed advantages of using these tools and the actual real-world results. Lots of people are loudly saying that everyone needs to get onboard, but so far what I’ve seen is just more tools for creating other agentic tools. Taking a step back, it’s agents all the way down. To put it bluntly, I’ve yet to see a cure for cancer. Detection rates based on radiology images have not changed. Neither have surgical outcomes. Nor the quality of artworks. Nor world-changing fiction. And not even replacements for our most used software tools. I suspect what will happen is that AI tools will become intrinsic in the production of all of those things, but as we’ve already seen, much is yet to be done to ensure reliability, resilience, and safety. In short, agentic tools do not help solve the human-shaped problems we’re confronted with, even if we are focusing on the software industry itself.

    So What Do We Do?

    The intent of this essay is not to dismiss the power of agentic tools. They are of course quite powerful. But we all remember the lesson from Spider-Man, right? With great power comes great responsibility. We are going to have to rethink our approach to automation and, really, to engineering in general. We are going to have to figure out how to insert checkpoints into our processes, because we can no longer take for granted that they will exist.

    I think the best way to think of this comes from Anil Dash in the above-referenced Galaxy Brain podcast:

    Okay, think about what could a good LLM be. “I want it to be environmentally responsible. I want it to have been trained on data with consent. I want it to be open source and open weight, so that technical experts I trust have evaluated how it runs. I want it to be responsible in its labor practices. Want it to—” Come up with a list, right? So there’s, like, four or five things. And if I can check all those boxes, then I could feel responsible about using it in moderation. And it’s only implemented in apps that I choose to have it in—not forced, like the Google thing where it jumps in front of my cursor every time I start trying to type or whatever. Like, that could be useful. And then I would feel like I was engaging with it on my own terms. That doesn’t feel like science fiction. That feels possible.

    These tools are powerful, and they can have a positive human impact, if we choose to use them in that way. We don’t have to accept the inevitability narrative of “something big is happening” and “all your jobs are going away!!!” Denying the use of these tools is not the answer. Finding ways to prevent harm is the path forward.

    I think we’ll find out that AI Brain is real, and it will be incumbent on us, the practitioners, to provide the critical view necessary to ensure that we don’t lose a generation to a dangerous positive feedback loop. Over the last decade, we’ve seen where that leads – fascism, anti-science, and polarization. Let’s not repeat our mistakes and make the problem worse.

  • The New Open Source Playbook – Products and Customers in an Agentic Engineering World

    Thus far in this series, I’ve focused on various ways to align with ecosystems and communities and create or integrate with platforms. This is designed to maximize the engineering economics of your business, reducing costs, outsourcing maintenance, and benefiting from innovation that comes from outside your employer or core engineering team. But if you’re running a business, you’re probably asking, “that’s great, but how do I make money?” In the past, my snarky answer was, “create a great product that reduces your customers’ pain and saves them time. Duh…” But as time goes on, I’ve realized that what they’re really asking is how to benefit from open source innovation without giving away your core value for free. That is to say, how do you do this open source stuff and still create a moat that prevents competitors from stealing your milkshake while you establish lucrative business relationships with your customers and partners?

    Open Source Hierarchy of Products

    Triangle with 3 layers. At the top is "paid product". The middle layer is "Free Product or Open Core". And the bottom layer is "Open Source Platform Neutral 3rd Party Governance".

    Thus far in this series, I’ve focused on the lower parts of the above pyramid. In this post, I’m going to focus on the upper parts. The lower 3rd, which focuses on platforms, is about cost, the bottom line, and generating enough innovation to provide lift to the 2 upper layers. Platforms are about engineering economics – how do I accelerate innovation for less money than I would spend if I did it all myself? It’s about delegation, ecosystem integration, neutral 3rd parties, and open governance. The 2 upper layers are about taking the platform innovation and applying it to customer use cases; going to market and showing product-market fit. The bottom layer is a shared resource. The top layers are all yours. Even then, there’s an art to constructing your products to give you the best chance to thrive. You’ll notice that I break this section into 2 layers and not one. Even when the product is 100% yours, there’s a need to diversify your customer base and think about the multiple personas you want to bring into your fold.

    The “Freemium” or Open Core Layer

    No product category has been as poorly understood as open core or other “free to use” products. In the early to mid 2000s, there was a simple model for getting investors to put money into a startup: take an established open source project and “commercialize” it, stripping it of just enough features so that you could convince users to convert into paying customers in order to get the “creamy frosting” of paid features. This model produced a smattering of successes, but most of the companies who tried it failed. Invariably, the paid product would compete with the free version, thus incentivizing the company leaders to put more and more features into the paid version and less into the free one. The end result was a bunch of unhappy users who abandoned the project and blunted whatever momentum the commercial product may have had. I do not recommend this approach.

    These days, I think about core platforms like Kubernetes, with free products built around it, such as the many freely available but commercial Kubernetes distributions, and then the for-pay vertical applications built on that. Each layer of the product stack is designed for a different audience and fulfills a different purpose. No one is going to take plain, vanilla Kubernetes and sell you the software bits, but they might provide an easy-to-use bundled version with some limitations for personal use, and then sell you a full product with proprietary extensions and plugins. The base platform from the Cloud Native Computing Foundation is designed for and by core contributors; the free bundle or distribution is for end users or “developer users” who want to try it out or use it for limited applications; and the commercial bundle with for-pay plugins and extensions is for customers with specific needs and little time for implementation. All are segments with different needs, and all have value in the Kubernetes ecosystem, with vendors tailoring their solutions to various use cases.

    In some cases, the free product skips the base platform entirely and is its own entity. One example of this is Splunk, which gave away a proprietary and limited but free product and provided a convenient means for customers to buy the full version. Splunk avoided the fate of the open core failures by ensuring that its free product always had an audience and always provided value, even for users who didn’t pay for it. The founders of Splunk debated whether to open source their product and ultimately decided they could deliver value to free users without open sourcing – and they were proven correct. Because they never needed outside contributors to reduce costs, and because they could sustain the innovation required to land paying customers, open source wasn’t as compelling for their product strategy. Keep this in mind when I discuss agentic products below.

    Having a free product can make the difference between surviving and thriving, but you must be thoughtful of your goals and mindful of the drawbacks of different approaches. There are a couple of things you should keep in mind:

    • All free products should provide something of value for customers who don’t pay. There are some customers who will never ever pay for your product. Are you ok with them leaving your sphere of influence and going elsewhere? What is the value of growing your brand recognition? Can you do that without a free product?
    • Your free product is your intellectual property. The platform is the place for neutral 3rd party governance. Your free product is yours to do with as you please, whether it’s released under an open source license or not. Of course, it’s best to treat your community with respect: your free product is there to create brand ambassadors who will vouch for your company.
    • A free product with an open source license can be beneficial to your overall product strategy. You have to decide whether the benefits outweigh the costs. It is an expression of transparency and trust that your customers will appreciate. And you can protect yourself through copyright and trademark law. It can also accelerate your brand recognition and growth in ways that a typical proprietary free product cannot, but not always. And therein lies the rub: It depends on who your customers are and their expectations.
    • If you view your free product as competition to your paid version, you’ve already failed. Either you fail to understand the value of a free product, or you’ve implemented your product strategy poorly. Either way, you would do well to take a step back and rethink your strategy. Hopefully, you see this in time to course correct.

    The Paid Product

    The interesting part of paid products is that there are so many potential avenues to take. Whereas platforms and free products are relatively straightforward, paid products can take on a variety of shapes, sizes, and types: *-as-a-service; software bundles; paid consultation services; vertical integration; vertical customer use cases; etc. This makes it easier to separate out the core value proposition of your paid solution, but it also makes it trickier to establish a conduit from free to paid. For example, if your solution is SaaS, does it make sense for your free product to be a downloadable open source software bundle? Possibly – there is enough market differentiation that the free product will not detract from the SaaS experience, but usually you want the free version to be easy to use so that your technology becomes more ubiquitous. A difficult-to-configure software bundle would take significant effort for you to maintain and may not add enough benefit to justify the expense. Then again, if a free bundle enables other businesses to embed your technology and become potential OEM partners, it could allow you to expand your business in ways you hadn’t thought of. As long as giving away your product adds value to your overall product strategy and accelerates the growth of your paid solution, then it’s justifiable.

    The Agentic Wrinkle

    I’ve argued in the past that agentic engineering was going to change the open source landscape significantly – there will be more open source software, not less, and a growing number of companies will need a solid open source strategy, probably more than ever before. I wrote this series for 2 main reasons:

    1. Large numbers of startup founders are taking a crash course as we speak in open source ecosystems and strategies. I want them to think through their approaches, consider what they want to achieve, and decide whether an open source approach will benefit them.
    2. In a world where autonomous software agents will write an increasing share of our source code, rules of transparency and governance in software collaboration are more important than ever. The risks are also higher than ever. This is a world where your competitors can copy your features almost as soon as you release them. How are you going to protect your business?

    Agentic engineering holds great promise for entrepreneurs. I’ve seen companies with just 2 co-founders deliver a ready-to-order product without needing to hire a team of developers. This is astounding! But I’ve also seen startups get attacked by no-innovation companies that only repackage their code and still get millions in investment dollars. The emergence of agentic engineering tips the scales in a few interesting ways.

    • Platforms are still valuable. In fact, having a neutral location for platform development may be more valuable than ever – a dynamic, growing platform will also attract agentic development, which means the platforms will become more dynamic and robust, providing more growth fuel for your intellectual property.
    • Protect your intellectual property. Releasing a free product as open source may actually be safer than releasing a proprietary version with no source code. Open source code released under your trademark and copyright gives you a way to audit what competitors release. Embedding clues within your code will help you determine if other companies rebranded your intellectual property, whereas an agent reverse-engineering the features of your proprietary product will be almost undetectable.
    • You will have to adapt. For every startup out there: the game has changed. Our entire way of designing, building, testing, and delivering software has changed forever and is about to rewrite its existence. Entire platforms will be torn down and replaced by new ones with incredible speed. If you haven’t adopted this methodology, you will be left behind.

    There are some incredible challenges ahead. In the past, companies could separate their free from paid products through data. The software was free, but the data or “content” was what customers paid for. In an agentic world, data is a core part of any product. There is no such thing as software-only solutions in an agentic world. And in a world where agents can regenerate content with striking speed, this is no longer the product moat that it once was. Tech vendors will have to learn how to deliver free agentic tools, complete with data, that will still provide an avenue for conversion to paid, commercial solutions.

    As you think through your product strategy, consider these questions:

    • Platforms: What is your platform strategy? Where is collaboration within an ecosystem helpful?
    • Free products: What can you give away for free that will accelerate your growth strategy?
    • Paid products: How can you create a compelling product over and above what’s available for free?
    • Agentic engineering: How will you benefit from an agentic world? How do you protect your value proposition?
  • AI Hype as Apologia

    It’s happened again – some AI hype bro wrote the latest missive that has everyone agog. Matt Shumer wrote a lot of breathless words to basically say “AI is coming for all yer jobs! Fear!!!!!!”, of which we get several variations every year since ChatGPT hit the tech landscape in 2022. I won’t give him the dignity of a link, because that’s what he wants, but if you search for his name, you’ll see his original and the many responses that have made their way through myriad media outlets, both tech-centered and non-tech. When I first read it, I was reminded of those chain emails forwarded by your least favorite aunt or uncle that were usually fronts for some MLM scam intent on fleecing scared people out of their hard-earned money. Lo and behold, it turns out that Matt Shumer has himself been credibly accused of fraud in the recent past, so he really has no credibility to warrant the level of attention paid to him.

    The first thing to understand about AI Hype and AI Doom is that they are opposite sides of the same coin: vast overstatements and exaggerated extrapolations of our present reality. The only functional difference is that the hypesters want us to buy into the concept of AI utopia and the doomers want us to fear the dystopia of a future skynet that decides humans are a disease to be removed from the world. The 2nd thing to understand is that as far as the technology goes, we are in a moment of transformation, similar in scope to the emergence of the internet and smartphones. Let’s not forget that both of those developments removed a fair number of jobs from the world. One example brought up by Marco Rogers was paper maps: there’s not much of a market for people who create and sell paper maps anymore. Agentic automation (the word AI is now functionally useless) will have similar repercussions, and I have no doubt that a number of jobs that exist today will not exist in the near future. As an aside, if I were someone whose job title contains the words “software tester”, I would be busy reskilling myself right now. And the 3rd thing to understand is that every great con artist knows how to latch on to and exploit kernels of truth. The truth is that we are in a moment of tech transformation. The truth is that some number of people will lose their jobs. But to then extrapolate and claim that some 50% of jobs will be gone by 2030 is, to put it kindly, baseless horseshit. And the 4th thing to know is that each iteration of this type of AI hype is rife with unverifiable claims and baseless conjecture. We see the same patterns from Sam Altman, Jensen Huang, Dario Amodei, and every other person with a vested interest in the proliferation of this point of view. You will note that all of these are men, which I’ll delve into further down the page.

    Also of note is that every AI company, with the exception of the hardware companies who will gladly ship high-priced, premium products to AI companies, is losing massive amounts of money and taking on massive debt. When viewed through this lens, the AI hype missives smack of desperation, hoping to keep the hype alive for an industry drowning in debt. For a more sober account of what is happening industry-wide, I highly recommend you read Peter Girnus, a security researcher. And for a funny takedown of the “AI is sentient” claptrap, definitely read his account of how he trolled an AI agent social network.

    The Limits of Human Psychology

    Shumer started his essay (I use “started” and “essay” generously, as I strongly suspect most of it was Claude prompted) with an analogy to COVID in February 2020, when COVID was something most of us had heard of but didn’t quite grasp just how quickly it was going to upend everyone’s world. I would like to choose a different analogy from recent history – the period from 1998 to 2008. In the late ’90s, the deregulation of finance, specifically the erosion of limits on investment banking, enabled the acceleration of complicated financial products, which caused many investors to believe that they had rewritten the rules of the new economy. Each successive blockbuster deal that made investors billions of dollars built an additional layer on the assumption that they had succeeded; that they were “the smartest guys in the room” who were going to remake society in their image. Until it all started to unwind in 2007. As the hype passed the peak and loans were called, the effect was akin to a rubber band snapping – sudden and irreversible. There were many studies conducted on this period of time, most of which focused on business decisions and how companies allowed themselves to uncritically follow the hype path and take on unsustainable risk. A few studies focused on the individuals that powered the hype and uncovered some interesting facts.

    One of the questions posed by these studies concerned the role of testosterone. It turns out that making successful trades that earn lots of money gives us massive hits of dopamine, a release that is highly influenced by testosterone. There was even a direct correlation between levels of serum testosterone and the degree of risk taking. Because of this dopamine “high”, the reward centers of these traders’ brains lit up, preventing them from thinking more critically. They became completely convinced of their invincibility and their own success, up until the moment it all came crashing down. These people – the traders at the center of the activity – were the worst narrators of the moment, because they were completely invested in the pursuit of more chemical highs. I think something similar is happening with AI hype cycles. The more invested you are in AI, the bigger the dopamine high you get when you (or your agents) successfully write a bit of code that does something useful. This leads to a positive feedback loop where the individual pursues ever more brain chemical highs, just like the traders mentioned above. You can see this play out as every pronouncement by Altman, Amodei, and others becomes progressively more exaggerated and even divorced from reality.

    There are a few aspects of human psychology that make us particularly vulnerable to this type of feedback loop:

    • We love to extrapolate from patterns – humans see patterns in everything. And when there’s not one there, we will make them up and “connect the dots” regardless of whether a connection exists. This explains why your drunk uncle at Thanksgiving takes great pains to tell you about how “it’s all connected, man!”
    • We are uncomfortable with not knowing – it’s a whole lot easier to come up with some cockamamie story with named actors than to say “I don’t know” or to explain a calamity as an outcome of something as banal as incompetence. See drunk uncle, above.
    • We love to anthropomorphize everything – we assign human characteristics to almost everything in our lives, from pets to cars and houses… hell, we even anthropomorphized “pet rocks”. Appropriate pop culture reference: “She’s giving it all she can, cap’n!!!”

    Combine all of these together, and it’s easy to see why we are susceptible to the AI hype train. Mix in the fear-mongering and desperation, and you get a perfect storm ripe for exploiting the moment and separating people and well-endowed institutions from their wealth.

    Apologia and Desperation

    There’s another historical analog we can use to properly frame this moment: the apology, or apologia. An “apology” was written as a defense of a person or an idea against an accusation. In ancient Greece, this happened strictly in a legal context, but the concept has been extended more generally. Some ancient historical writings have been categorized as apologia even though no accusation exists or survived history. One example of an ancient text that has been post hoc described as an apology is the biblical account of the rise of King David. When read as defenses of the idea or person, these texts become notable for what they don’t say, or how they soften the impact of negative events. In famous apologies, there are accounts of how the perpetrators engaged in unflattering behaviors, but there are always explanations for why the object of the accusation had no choice or committed his acts for the greater good – because, you see, the outcome was inevitable. Missing from an apology is any direct mention of remorse or regret – it’s always an explanation. From reading an apology with no reference to an accusation, you can extrapolate what the accusation was by identifying what is being explained away or what is missing. This becomes a useful way to read ancient literature, because you can generally surmise the motivation and intent of the text by critically assembling in your mind an outline of what the original accusations must have looked like.

    In this framing, we start to see AI hype for what it is: an apology against the accusations. And what are the accusations? Even though Shumer (and Altman, et al.) never directly reference them, we can infer them from what’s not in the writing or what is glossed over. Let’s summarize the accusation based on a critical reading: these large, high-valuation companies are taking on unsustainable debt and are unable to justify the amount of money put into them. The results and outcomes, while positive, are nowhere near the billions of dollars in debt these companies have taken on. We have passed the point where these companies will produce a return on investment that will satisfy their investors. This tacit acknowledgement of the accusations leads to desperate attempts to justify their existence by overinflating their influence and value to the world. They are compelled to continue spinning this hype, because it’s all they can bank on.

    LLMs cannot, in fact, “decide” to write better versions of themselves. Agentic tools will have a great impact and displace some jobs, most of them in tech, but they will not replace lawyers by 2030 or whatever incredible claims have been made. We will still need radiologists. The idea that we’re on the path towards machine sentience is a tale that has made the rounds in Silicon Valley for decades. And you can’t delve very deeply into the “singularity” movement without running into believers in eugenics and progenitors of TESCREAL.

    If we look at the weight of all the evidence and make full use of our critical thinking skills, we can only arrive at one incontrovertible conclusion: these people are fucking nuts, and we cannot, must not, trust them.


  • Change Agents in Large Organizations

    “Everybody has plans until they get hit the first time.” — Mike Tyson

    An economist was once asked, “If you were stranded on a desert island, how would you survive?” The economist pondered this great question for some time and then proudly ventured his answer: “Assume a perfect world…” — old joke about economists

    I am not known for my love of business or management books; quite the opposite, actually. When I try to articulate why, it generally comes down to boredom and a decided lack of enthusiasm for the subject. It’s not that I don’t appreciate the appeal of business topics or the act of conducting it. Far from it. It’s more that there’s an utter futility to the idea that we can do it better. I led off this post with the 2 quotes above to illustrate my reasoning. They seem to come from very different points of view, and yet they are in my mind very much related:

    • Business books are not reflective of lived experiences or real world incentives, just like the economist in the above example, and…
    • They’re hopelessly naive and unable to account for what happens the first time a practitioner gets “punched in the face” (not literally, of course, or at least not usually)

    Both quotes illustrate the difficulty of putting a plan into action, either because you didn’t account for resistance (Mike Tyson) or the realities of real-world constraints (the economist). I dislike business books for the same reason I dislike management consultants, strategists, market analysts, pundits, and any other pointy-haired expert who tries to tell me how to do my job better: because their words prove almost useless against the realities of functioning, much less thriving, in a real-life bureaucratic system. With that in mind, I’m now going to do what I probably never should: give advice on how to be a change agent in large bureaucratic organizations. Given what I wrote above, you could be forgiven for asking, “Why?” The answer is rather simple: despite all my experience, which tells me I should really know better, at the end of the day I naively follow an insatiable desire to drive change. Knowing better or not, it doesn’t stop me from trying. The act of futile resistance against the borg is buried deep, deep inside my psyche. It’s terminal, I’m afraid.

    Never, Ever be a Change Agent

    The first thing to know about being a change agent is to not be one. Just don’t do it. No one is ever going to congratulate you on your futility or give you an award for repeatedly beating your head against countless walls. Just don’t. The best that can happen is that somebody else advances themselves based on your ideas, picks up the pieces after you’ve been beaten to a pulp, and then gets the accolades. The worst that can happen is that nobody ever listens to you at all and you toil away in silence and isolation, walled off from doing any real damage. Note that getting fired is not the worst outcome. In fact, it’s a great way to confirm you were on to something and extricate yourself from a terrible situation, preventing you from expending energy on fruitless efforts that nobody will acknowledge. Getting fired is a merciful end. Futilely trudging along with no end in sight? Now that’s inhumane torture.

    To successfully make changes in an organization, think very carefully about who you need to convince and bring along for the ride: your upper management. That’s right, the very people who have benefited from the existing systems. Remind me, what is their incentive for changing? Their interest goes only as far as their incentives. To be a successful change agent, you have to convince them that change is in their interest, and that’s a pretty high bar. To be successful, your leaders have to be in such a position that they see an urgent need for change that will benefit the organization – but also simultaneously benefit them. The stars have to be aligned just so, and you will need to take extra care to spot the right opportunities. I cannot emphasize this point enough: the stars do not tend to align except in particular circumstances. You have to learn to be very adept at spotting those particular circumstances. As I said, most of the time it’s not worth it.

    For the remainder of this post, I’m going to assume that you are disregarding my hard-earned, well-considered advice and have chosen to proceed on this adventure in masochism.

    Ok, Fine. You’re a Change Agent. Now What?

    The first thing to know about large organizations is that they never fail. Where most change agents go wrong is that they erroneously assume organizations are failing. If you are asking the question, “why is this organization failing?” know that you are asking precisely the wrong question. Organizations behave exactly as they are designed, and one thing they are designed to do is to resist change. When you zoom out and consider the possible outcomes of an organization’s lifecycle, this is not a bad thing. A long-lived organization will need to be able to survive the whims and myopia of misguided leaders as well as the ups and downs of its industry and market. Resistance to change is a design feature, not a bug. Organizations are so good at self-perpetuation that they are quite adept at identifying and neutralizing potential threats, i.e., people who want to change things. How this happens depends on the environment, from putting you on projects that keep you tied up (and away from the things management cares about) to just flushing you out entirely if you prove too meddlesome.

    This is why I get annoyed with most attempts to effect change: they assume that organizations need to be better, and they assume that their pet project is a way to do that. Thus we have movements like Agile and DevOps, which started off as a means to change organizations and eventually were subsumed by the beast, becoming tools that organizations use to perpetuate their existence without actually changing anything. The authors of the Agile manifesto wanted to change how technology worked in organizations, but what they actually did was give large organizations the language they needed to perpetuate the same incentive structures and bureaucracy they always had. DevOps was going to put Agile methodology into practice and empower (whatever that means) technologists to take a larger stake in the business. I’m pretty sure CIOs are still laughing about that one. In the meantime, we still get design by committee, the inability to make decisions, and endless red tape to prevent change from actually happening. Again, this isn’t necessarily bad from the perspective of a business built to last; it’s just really annoying if you expect things to move after you push. My advice: adjust your expectations.

    Incentives and Human Behavior

    The reason most change initiatives fail is because they don’t account for the reality of incentives and the influence of human behavior. Large organizations have evolved intricate systems of carrots and sticks to reward certain behaviors and punish or at least discourage behaviors deemed impolitic. Want to know why teams don’t collaborate across organizations? Because they’re not rewarded for doing so. Why do leaders’ edicts get ignored? Because teams are incentivized to stay the course and not switch abruptly.

    Agile failed in its original intent because it naively assumed that incentives would be aligned with faster development and delivery of technology. What it failed to calculate was that any change in a complex system would incur a premium cost or tax for the change. Any change to a legacy system with complex operations will have unknown consequences and therefore unknown costs. The whole point of running a business effectively is to be able to predict P&L with some accuracy. Changes to legacy systems incur unavoidable risk, which disincentivizes organizations from adopting them at scale. Thus Agile morphed from accelerated development and delivery into a different type of bureaucracy that serves the same purpose as the old one: preventing change. Except now it uses fancy words like “scrums”, “standups”, and “story points”. As Charlie Munger put it, “show me the incentives, and I’ll show you the outcome.” If the avoidance of risk is incentivized and rewarded, then practitioners in your organization will adopt that as their guiding principle. If your employees get promoted for finishing their pet projects and not for collaborating across the organization, guess what they will choose to do with their time?

    It’s this naive disregard of humanity that dooms so many change initiatives. Not everyone wants to adopt your particular changes, and there may be valid reasons for that. Not everyone wants to be part of an initiative that forever changes an organization. Some people just want to draw a paycheck and go home. To them, change represents risk to their future employment. Any change initiative has to acknowledge one universal aspect of humanity: to most people, change is scary. Newsflash: some people don’t tie their identities to their jobs. I envy them, honestly. And still others aren’t motivated to change their organization. They are just fine with the way things are.

    Parasite-Host Analogy

    And how do organizations prevent change? By engaging in what I call the “host immune response.” If you’re familiar with germ theory and disease pathology, you know that most organisms have evolved the means to prevent external threats from causing too much harm. Mammals produce mucus, which surrounds viruses and bacteria in slimy goo to prepare for expulsion from the body, preventing these organisms from multiplying internally and causing damage to organs. Or the host will wall off an intruder, not eradicating or expelling it, just allowing it to exist where it can’t do any damage, like a cyst. Or an open source community.

    Within this host immune response and parasite analogy, there lies the secret to potential success: symbiosis. If you remember your high school biology textbook (and really, who doesn’t?) you’ll recall that symbiosis is the result of 2 species developing a mutually beneficial relationship. Nature provides numerous examples of parasitic relationships evolving into symbiosis: some barnacle species and whales; some intestinal worms and mammals; etc., etc. In this analogy, you the change agent are the parasite, and the organization you work for is the host. The trick is for the parasite to evade getting ejected from the host. To do that, the parasite has to be visible enough for its benefits to be felt, but not so visible as to inflame the host. It’s quite the trick to pull off. To put this into more practical terms, don’t announce yourself too loudly, and get in the habit of showing, not telling.

    Oh dear… I’ve now shifted into the mode of giving you a ray of hope. I’m terribly sorry. I fear that my terminal case of unbridled optimism has now reared its ugly head. Fine. Even though it’s probably pointless and a lost cause, and you’re only signing up for more pain, there are some things you can do to improve your chances of success from 0% to… 0.5%?

    Show, Don’t Tell

    There are few things large organizations detest more than a loud, barking dog. The surest route to failure is to raise the expectations of everyone around you, which is exactly what happens when you talk about your vision and plant the seeds of hope.

    Stop. Talking.

    Open source projects serve as a great point of reference here. Sure, many large open source projects undergo some amount of planning, usually in the form of a backlog of features they want to implement. Most well-run, large open source projects have a set of procedures and guidelines for proposing new features and presenting them to the core team as well as the community at large. But generally speaking, they do not write reams of text in the form of product requirements documents. They will look at personas and segmentation. They will create diagrams that show workflows. Above all, though, they lead with code. Documentation and diagrams tend to happen after the fact. Yes, they will write or contribute to specifications, especially if their project requires in-depth integration or collaboration with another project, but the emphasis is on releasing code and building out the project.

    Because open source projects are my point of reference, imagine my surprise when I started working in large organizations and discovered that most of them do precisely the opposite. They write tomes of text about what they are thinking of building and what they wish to build before they ever start to actually build it. This runs counter to everything I’ve learned working in open source communities. Given my above points about not changing too much too quickly, what is a change agent to do?

    Prototype. Bootstrap. Iterate.

    Open Source innovation tells us that the secret to success is to lead with the code. You want to lead change? Don’t wait for something to be perfect. Do your work in the open. Show your work transparently. Iterate rapidly and demonstrate continuously. Others will want to create the PRDs, the architectural documents, the white papers, and the other endless reams of text that no one will ever read. Let them. It’s a necessary step – remember, your job is to not trigger the host immune response. You can do that by letting the usual processes continue. What you are going to do is make sure that the plans being written out are represented in the code in a form that’s accessible to your target audience as quickly as possible, and that you get it in front of your target audience as soon as it’s available. Without a working representation of what is being proposed, your vision is wishful thinking and vaporware.

    The reasons are simple: if you expend your time and energy building up expectations for something that doesn’t exist yet, you risk letting the imaginations of your customers and stakeholders run wild. By limiting your interactions to demonstrations of what exists, the conversation remains grounded in reality. If you continuously present a grand vision of “the future” you will set the stage for allowing perfect to be the enemy of good. Your customers will have a moving target in their minds that you will never be able to satisfy. By building up expectations and attempting to meet them, you are setting the stage for failure. But with continuous iteration, you help to prevent expectations from exceeding what you are capable of delivering. There’s also the added benefit of showing continuous progress.

    Borrowing from the open source playbook is a smart way to lead change in an organization, and it doesn’t necessarily need to be limited to code or software. Continuous iteration of a product or service being delivered can apply to documentation, process design, or anything that requires multi-stage delivery. By being transparent with your customers and stakeholders and bringing them with you on the journey, you give them an ownership stake in the process. This ownership stake can incentivize them to collaborate more deeply, moving beyond customer into becoming a full-fledged partner. This continuous iteration and engagement builds trust, which helps prevent the host from treating you like a parasite and walling you off.

    Remember, most people and organizations don’t like change. It scares them. By progressing iteratively, your changesets become more manageable and more palatable. This is the way to make your changes seem almost unnoticeable, flying under the radar yet remaining very effective, ultimately arriving at your desired outcome.

    Prototype. Bootstrap. Iterate.

  • The New Open Source Playbook – Platforms Part Deux

    (This is the 2nd post in a series. Part 1 is here)

    I was all set to make this 2nd post about open core and innovation on the edge, and then I realized that I should probably explore the concept of “lift” in a bit more detail. Specifically, if you’re looking for your platform strategy to give your technology products lift, what does that mean exactly? This goes back to the idea that a rising tide lifts all boats. If you think of a rising tide as a growing community of users or developers, and the boat is your particular software project, then you want a strategy where your project benefits from a larger community. A dynamic, growing community will be able to support several “boats” – products, projects, platforms, et al. A good example of this is the Kubernetes community, which is the flagship project of the Cloud Native Computing Foundation (CNCF).

    How Do We Generate Lift?

    There are 2 basic types of lift you will be looking for – user lift, or getting more people to adopt your platform, and developer lift, where more developers are contributing to your platform. The former gets more people familiar with your particular technology, providing the basis for potential future customers, and the latter allows you to reduce your engineering cost and potentially benefit from new ideas that you didn’t think of. This means that the community or ecosystem you align with depends on the goals for your platform. If you want more users, that is a very different community strategy from wanting more collaborators. Many startups conflate these strategies, which means they don’t always get the results they’re looking for.

    Let’s assume that you have a potential platform that is categorized in the same cloud native space as Kubernetes. And let’s assume that you’ve determined that the best strategy to maximize your impact is to open source your platform. Does that mean you should put your project in the CNCF? It depends! Let’s assume that your product will target infosec professionals, and you want to get feedback on usage patterns for common security use cases. In that case, the Kubernetes or CNCF communities may not be the best fit. If you want security professionals getting familiar with and adopting your platform, you may want to consider security-focused communities, such as those that have formed around SBOM, compliance, and scanning projects. Or perhaps you do want to see how devops or cloud computing professionals would use your platform to improve their security risk, in which case Kubernetes or CNCF make sense. Your target audience will determine what community is the best fit.

    Another scenario: let’s assume that your platform is adjacent to Kubernetes and you think it’s a good candidate for collaboration with multiple entities with a vested interest in your project’s success. In that case, you need developers with working knowledge of Kubernetes architecture, and the Kubernetes community is definitely where you want your project to be incubated. It’s not always so straightforward, however. If you’re primarily looking for developers who will extend your platform, making use of your interfaces and APIs, then perhaps it doesn’t matter if they have working knowledge of Kubernetes. Maybe in this case, you would do well to understand developer use cases and which vertical markets or industries your platform appeals to, and then follow a different community trail. Platform-community fit for your developer strategy is a more nuanced decision than product-market fit. The former is much more multi-dimensional than the latter.

    If you have decided that developers are key to your platform strategy, you have to decide what kind of developers you’re looking for: those that will *extend* your platform; those that will contribute to your core platform; or those that will use or embed your platform. That will determine the type of lift you need and what community(ies) to align with.

    One more example: You’re creating a platform that you believe will transform the cybersecurity industry, and you want developers that will use and extend your platform. You may at first be attracted to security-focused communities, but then you discover a curious thing: cybersecurity professionals don’t seem fond of your platform and haven’t adopted it at the scale you expect or need. Does this mean your platform sucks? Not always – it could be that these professionals are highly opinionated and have already made up their minds about desired platforms to base their efforts on. However, it turns out that your platform helps enterprise developers be more secure. Furthermore, you notice that within your enterprise developer community, there is overlap with the PyTorch community, which is not cybersecurity-focused. This could be an opportunity to pivot on your adoption strategy and go where your community is leading: PyTorch. Perhaps that is a more ideal destination for community alignment purposes. However, you can do some testing within the PyTorch community before making a final decision.

    Learn From My Example: Hyperic

    Hyperic was a systems management monitoring tool. These days we would put it in the “observability” category, but that term didn’t exist at the time (2006). The Hyperic platform was great for monitoring Java applications. It was open core, so we focused on adoption by enterprise developers and not contributions. We thought we had a great execution strategy to build a global user base that would use Hyperic as the basis for all of their general purpose application monitoring needs. From a community strategy perspective, we wanted Hyperic to be ubiquitous, used in every data center where applications were deployed and managed. We had a great tag line, too: “All Systems Go”. But there was a problem: although Hyperic could be used to monitor any compute instance, it really shined when used with Java applications. Focusing on general systems management put us in the same bucket, product-wise, as other general use systems management tools, none of which could differentiate themselves from the others. If we had decided to place more of our community focus on Java developers, we could have ignored all of the general purpose monitoring and focused on delivering great value for our core audience: Java development communities. Our platform-community fit wasn’t aligned properly, and as a result, we did not get the lift we were expecting. This meant that our sales team had to work harder to find opportunities, which put a drag on our revenue and overall momentum. Lesson learned…

    When attempting a platform execution strategy, and you’re going the open source route, platform-community fit is paramount. Without it, you won’t get the lift you’re expecting. You can always change up your community alignment strategy later, but it’s obviously better if you get it right the first time.

  • The New Open Source Playbook

    (This is the first in a series)

    For the last few years, the world of commercial open source has been largely dormant, with few startup companies making a splash with new open source products. Or if companies did make a splash, it was for the wrong reasons – see, e.g., HashiCorp’s Terraform rugpull. It got to the point that Jeff Geerling declared that “Corporate Open Source is Dead“, and honestly, I would have agreed with him. It seemed that the age of startups pushing new open source projects and building a business around them was a thing of the past. To be clear, I always thought that it was naive to think that you could simply charge money for a rebuild of open source software, but the fact that startups were always trying showed that there was momentum behind the idea of using open source to build a business.

    And then a funny thing happened – a whole lot of new energy (and money) started flowing into nascent companies looking to make a mark in… stop me if you’ve heard this one… generative AI. Or to put it in other words, some combination of agents built on LLMs that attempted to solve some automation problem, usually in the category of software development or delivery. It turns out that when there’s lots of competition for users, especially when those users are themselves developers, a solid open source strategy can make the difference between surviving and thriving. In light of this newfound enthusiasm for open source and startups, I thought I’d write a handy guide for startups looking to incorporate open source strategy into their developer go to market playbook. Except in this version, I will incorporate nuances specific to our emerging agentic world.

    To start down this path, I recommend that startup founders look at 3 layers of open source go to market strategy: platform ecosystem (stuff you co-develop), open core (stuff you give away but keep IP), and product focus (stuff you only allow paying customers to use). That last category, product focus, can be on-prem, cloud hosted, or SaaS services – it won’t matter, ultimately. Remember, this is about how to create compelling products that people will pay for, helping you establish a business. There are ways to use open source principles that can help you reach that goal, but proceed carefully. You can derail your product strategy by making the wrong choices.

    Foundation: the Platform Ecosystem Play

    When thinking about open source strategy, many founders thought they could release open source code and get other developers to work on their code for free as a new model of outsourcing. This almost never works as the startup founders imagined. What does end up happening is that a startup releases open source code and their target audience happily uses the code for free, often not contributing back, causing a number of startups to question why they went down the open source path to begin with. Don’t be like them.

    The way to think of this is within the concept of engineering economics. What is the most efficient means to produce the foundational parts of your software?

    • If the answer is by basing your platform on existing open source projects, then you figure out how to do that while protecting your intellectual property. This usually means focusing on communities and projects under the auspices of a neutral 3rd party, such as the Eclipse or Linux Foundation.
    • If the answer is by creating a new open source platform that you expect to attract significant interest from other technology entities, then you test product-market fit with prospective collaborators and organizations with a vested interest in your project. Note: this is a risky strategy requiring a thoughtful approach and ruthless honesty about your prospects. The most successful examples of this, such as Kubernetes, showed strong demand from the outset and their creation was a result of market pull, not a push.
    • If the answer is that you don’t need external developers contributing to your core platform, but you do need end users and data on product-market fit, then you look into either an open core approach, or you create a free product that gives the platform away for free but not necessarily under an open source license. This is usually for the cases where you need developers to use or embed your product, but you don’t need them contributing directly. This is the “innovation on the edge” approach.
    • Or, if the answer is that you’ll make better progress by going it alone, then you do that and you don’t give it a 2nd thought. The goal is to use the most efficient means to produce your platform or foundational software, not score points on Hacker News.

    Many startups through the years have been tripped up by this step, misguidedly believing that their foundational software was so great that once they released it, thousands of developers would step over each other to contribute to a project.

    In the world of LLMs and generative AI, there is an additional consideration: do you absolutely need the latest models from Google, OpenAI, or elsewhere, or can you get by with slightly older models less constrained by usage restrictions? Can you use your own training and weights with off-the-shelf open source models? If you’re building a product that relies on agentic workflows, you’ll have to consider end user needs and preferences, but you’ll also have to protect yourself from downstream usage constraints, which could hit you if you reach certain thresholds of popularity. When starting out, I wholeheartedly recommend having as few constraints as possible, opting for open source models whenever possible, but also giving your end users the choice if they have existing accounts with larger providers. This is where it helps to have a platform approach that helps you address product-ecosystem fit as early as possible. If you can build momentum while architecting your platform around open source models and model orchestration tools, your would-be platform contributors will let you know that early on. Having an open source platform approach will help you guide your development in the right direction. Building your platform or product foundation around an existing open source project will be even more insightful, because that community will likely already have established AI preferences, helping make the decision for you.

    To summarize:

    • Find the ecosystem that best fits your goals and product plans and build your platform strategy within a community in that ecosystem, preferably on an existing project.
    • Barring that, create your own open source platform but maintain close proximity to adjacent communities and ecosystems, looking for lift from common users that will help determine platform-ecosystem fit.
    • Or build an open core platform, preferably with a set of potential users from an existing community or ecosystem who will innovate on the edge, using your APIs and interfaces.
    • If none of those apply, build your own free-to-use proprietary platform but maintain a line-of-sight to platform-ecosystem fit.

    No matter how you choose to build or shape a platform, you will need actual users to provide lift for your overall product strategy. You can get that lift from core contributors, innovators on the edge, adoption by your target audience, or some combination of these. How you do that depends on your needs and the expectations of your target audience.

    Up Next: open core on the edge and free products.