Author: johnmark

  • Change Agents in Large Organizations

    “Everybody has plans until they get hit the first time.” — Mike Tyson

    An economist was once asked, “If you were stranded on a desert island, how would you survive?” The economist pondered this great question for some time and then proudly ventured his answer: “Assume a perfect world…” — old joke about economists

    I am not known for my love of business or management books; quite the opposite, actually. When I try to articulate why, it generally comes down to boredom and a decided lack of enthusiasm for the subject. It’s not that I don’t appreciate the appeal of business topics or the act of conducting business. Far from it. It’s more that there’s an utter futility to the idea that we can do it better. I led off this post with the 2 quotes above to illustrate my reasoning. They seem to come from very different points of view, and yet they are in my mind very much related:

    • Business books are not reflective of lived experiences or real world incentives, just like the economist in the above example, and…
    • They’re hopelessly naive and unable to account for what happens the first time a practitioner gets “punched in the face” (not literally, of course, or at least not usually)

    Both quotes illustrate the difficulty of putting a plan into action, either because you didn’t account for resistance (Mike Tyson) or for the constraints of the real world (the economist). I dislike business books for the same reason I dislike management consultants, strategists, market analysts, pundits, and any other pointy-haired expert who tries to tell me how to do my job better: because their words prove to be almost useless against the realities of functioning, much less thriving, in a real-life bureaucratic system. With that in mind, I’m now going to do what I probably never should: give advice on how to be a change agent in large bureaucratic organizations. Given what I wrote above, you could be forgiven for asking, “Why?” The answer is rather simple: despite all my experience, which tells me I should really know better, at the end of the day, I naively follow an insatiable desire to drive change. Knowing better doesn’t stop me from trying. The act of futile resistance against the borg is buried deep, deep inside my psyche. It’s terminal, I’m afraid.

    Never, Ever be a Change Agent

    The first thing to know about being a change agent is to not be one. Just don’t do it. No one is ever going to congratulate you on your futility or give you an award for repeatedly beating your head against countless walls. Just don’t. The best that can happen is that somebody else advances themselves based on your ideas, picks up the pieces after you’ve been beaten to a pulp, and then gets the accolades. The worst that can happen is that nobody ever listens to you at all and you toil away in silence and isolation, walled off from doing any real damage. Note that getting fired is not the worst outcome. In fact, it’s a great way to confirm you were on to something and extricate yourself from a terrible situation, preventing you from expending energy on fruitless efforts that nobody will acknowledge. Getting fired is a merciful end. Futilely trudging along with no end in sight? Now that’s inhumane torture.

    To successfully make changes in an organization, think very carefully about who you need to convince and bring along for the ride: your upper management. That’s right, the very people who have benefited from the existing systems. Remind me, what is their incentive for changing? Their interest goes only as far as their incentives. To be a successful change agent, you have to convince them that change is in their interest, and that’s a pretty high bar. To be successful, your leaders have to be in such a position that they see an urgent need for change that will benefit the organization – but also simultaneously benefit them. The stars have to be aligned just so, and you will need to take extra care to spot the right opportunities. I cannot emphasize this point enough: the stars do not tend to align except in particular circumstances. You have to learn to be very adept at spotting those particular circumstances. As I said, most of the time it’s not worth it.

    For the remainder of this post, I’m going to assume that you are disregarding my hard-earned, well-considered advice and have chosen to proceed on this adventure in masochism.

    Ok, Fine. You’re a Change Agent. Now What?

    The first thing to know about large organizations is that they never fail. Where most change agents go wrong is they erroneously assume that organizations are failing. If you are asking the question, “why is this organization failing?” know that you are asking precisely the wrong question. Organizations behave exactly as they are designed, and one thing they are designed to do is to resist change. When you zoom out and consider the possible outcomes of an organization’s lifecycle, this is not a bad thing. A long-lived organization will need to be able to survive the whims and myopia of misguided leaders as well as the ups and downs of its industry and market. Resistance to change is a design feature, not a bug. Organizations are so good at self-perpetuation that they are quite adept at identifying and neutralizing potential threats, i.e., people who want to change things. How this happens depends on the environment, from putting you on projects that keep you tied up (and away from the things management cares about) to just flushing you out entirely if you prove too meddlesome.

    This is why I get annoyed with most attempts to effect change: they assume that organizations need to be better, and they assume that their pet project is a way to do that. Thus we have movements like Agile and DevOps, which started off as a means to change organizations and eventually were subsumed by the beast, becoming tools that organizations use to perpetuate their existence without actually changing anything. The authors of the Agile Manifesto wanted to change how technology worked in organizations, but what they actually did was give large organizations the language they needed to perpetuate the same incentive structures and bureaucracy they always had. DevOps was going to put Agile methodology into practice and empower (whatever that means) technologists to take a larger stake in the business. I’m pretty sure CIOs are still laughing about that one. In the meantime, we still get design by committee, the inability to make decisions, and endless red tape to prevent change from actually happening. Again, this isn’t necessarily bad from the perspective of a business built to last; it’s just really annoying if you expect things to move after you push. My advice: adjust your expectations.

    Incentives and Human Behavior

    The reason most change initiatives fail is because they don’t account for the reality of incentives and the influence of human behavior. Large organizations have evolved intricate systems of carrots and sticks to reward certain behaviors and punish or at least discourage behaviors deemed impolitic. Want to know why teams don’t collaborate across organizations? Because they’re not rewarded for doing so. Why do leaders’ edicts get ignored? Because teams are incentivized to stay the course and not switch abruptly.

    Agile failed in its original intent because it naively assumed that incentives would be aligned with faster development and delivery of technology. What it failed to calculate was that any change in a complex system would incur a premium cost or tax for the change. Any change to a legacy system with complex operations will have unknown consequences and therefore unknown costs. The whole point of running a business effectively is to be able to predict P&L with some accuracy. Changes to legacy systems incur inherent risk, which disincentivizes organizations from adopting them at scale. Thus Agile morphed from accelerated development and delivery into a different type of bureaucracy that serves the same purpose as the old one: preventing change. Except now it uses fancy words like “scrums”, “standups”, and “story points”. As Charles Munger put it, “show me the incentives, and I’ll show you the outcome.” If the avoidance of risk is incentivized and rewarded, then practitioners in your organization will adopt that as their guiding principle. If your employees get promoted for finishing their pet projects and not for collaborating across the organization, guess what they will choose to do with their time?

    It’s this naive disregard of humanity that dooms so many change initiatives. Not everyone wants to adopt your particular changes, and there may be valid reasons for that. Not everyone wants to be part of an initiative that forever changes an organization. Some people just want to draw a paycheck and go home. To them, change represents risk to their future employment. Any change initiative has to acknowledge one universal aspect of humanity: to most people, change is scary. Newsflash: some people don’t tie their identities to their jobs. I envy them, honestly. And still others aren’t motivated to change their organization. They are just fine with the way things are.

    Parasite-Host Analogy

    And how do organizations prevent change? By engaging in what I call the “host immune response.” If you’re familiar with germ theory and disease pathology, you know that most organisms have evolved the means to prevent external threats from causing too much harm. Mammals produce mucus, which surrounds viruses and bacteria in slimy goo to prepare for expulsion from the body, preventing these organisms from multiplying internally and causing damage to organs. Or the host will wall off an intruder, not eradicating or expelling it, just allowing it to exist where it can’t do any damage, like a cyst. Or an open source community.

    Within this host immune response and parasite analogy, there lies the secret to potential success: symbiosis. If you think back to your high school biology textbook (and really, who doesn’t?), you’ll recall that symbiosis is the result of 2 species developing a mutually beneficial relationship. Nature provides numerous examples of parasitic relationships evolving into symbiosis: some barnacle species and whales; some intestinal worms and mammals; etc, etc. In this analogy, you, the change agent, are the parasite, and the organization you work for is the host. The trick is for the parasite to evade getting ejected from the host. To do that, the parasite has to be visible enough for its benefits to be felt, but not so visible as to inflame the host. It’s quite the trick to pull off. To put this into more practical terms, don’t announce yourself too loudly, and get in the habit of showing, not telling.

    Oh dear… I’ve now shifted into the mode of giving you a ray of hope. I’m terribly sorry. I fear that my terminal case of unbridled optimism has now reared its ugly head. Fine. Even though it’s probably pointless and a lost cause, and you’re only signing up for more pain, there are some things you can do to improve your chances of success from 0% to… 0.5%?

    Show, Don’t Tell

    There are few things large organizations detest more than a loud, barking dog. The surest route to failure is to raise the expectations of everyone around you. That is exactly what happens when you talk about your vision and plant the seeds of hope.

    Stop. Talking.

    Open source projects serve as a great point of reference here. Sure, many large open source projects undergo some amount of planning, usually in the form of a backlog of features they want to implement. Most well-run, large open source projects have a set of procedures and guidelines for how to propose new features and then present them to the core team as well as the community-at-large. Generally speaking, they do not write reams of text in the form of product requirements documents. They will look at personas and segmentation. They will create diagrams that show workflows. But generally speaking, they lead with code. Documentation and diagrams tend to happen after the fact. Yes, they will write or contribute to specifications, especially if their project requires in-depth integration or collaboration with another project, but the emphasis is on releasing code and building out the project. Open source projects serve as my point of reference, so imagine my surprise when I started working in large organizations and discovered that most of them do precisely the opposite. They write tomes of text about what they are thinking of building and what they wish to build, before they ever start to actually build it. This runs counter to everything I’ve learned working in open source communities. Given my above points about not changing too much too quickly, what is a change agent to do?

    Prototype. Bootstrap. Iterate.

    Open source innovation tells us that the secret to success is to lead with the code. You want to lead change? Don’t wait for something to be perfect. Do your work in the open. Show your work transparently. Iterate rapidly and demonstrate continuously. Others will want to create the PRDs, the architectural documents, the white papers, and the other endless reams of text that no one will ever read. Let them. It’s a necessary step – remember, your job is to not trigger the host immune response. You can do that by letting the usual processes continue. What you are going to do is make sure that the plans being written out are represented in the code in a form that’s accessible to your target audience as quickly as possible, and that you get it in front of them as soon as it’s available. Without a working representation of what is being proposed, your vision is wishful thinking and vaporware.

    The reasons are simple: if you expend your time and energy building up expectations for something that doesn’t exist yet, you risk letting the imaginations of your customers and stakeholders run wild. By limiting your interactions to demonstrations of what exists, the conversation remains grounded in reality. If you continuously present a grand vision of “the future” you will set the stage for allowing perfect to be the enemy of good. Your customers will have a moving target in their minds that you will never be able to satisfy. By building up expectations and attempting to meet them, you are setting the stage for failure. But with continuous iteration, you help to prevent expectations from exceeding what you are capable of delivering. There’s also the added benefit of showing continuous progress.

    Borrowing from the open source playbook is a smart way to lead change in an organization, and it doesn’t necessarily need to be limited to code or software. Continuous iteration of a product or service being delivered can apply to documentation, process design, or anything that requires multi-stage delivery. By being transparent with your customers and stakeholders and bringing them with you on the journey, you give them an ownership stake in the process. This ownership stake can incentivize them to collaborate more deeply, moving beyond customer into becoming a full-fledged partner. This continuous iteration and engagement builds trust, which helps prevent the host from treating you like a parasite and walling you off.

    Remember, most people and organizations don’t like change. It scares them. By progressing iteratively, your changesets become more manageable as well as palatable and acceptable. This is the way to make your changes seem almost unnoticeable, under the radar, and yet very effective, ultimately arriving at your desired outcome.

    Prototype. Bootstrap. Iterate.

  • The New Open Source Playbook – Platforms Part Deux

    (This is the 2nd post in a series. Part 1 is here)

    I was all set to make this 2nd post about open core and innovation on the edge, and then I realized that I should probably explore the concept of “lift” in a bit more detail. Specifically, if you’re looking for your platform strategy to give your technology products lift, what does that mean exactly? This goes back to the idea that a rising tide lifts all boats. If you think of a rising tide as a growing community of users or developers, and the boat is your particular software project, then you want a strategy where your project benefits from a larger community. A dynamic, growing community will be able to support several “boats” – products, projects, platforms, et al. A good example of this is the community around Kubernetes, the flagship project of the Cloud Native Computing Foundation (CNCF).

    How Do We Generate Lift?

    There are 2 basic types of lift you will be looking for – user lift, or getting more people to adopt your platform, and developer lift, where more developers are contributing to your platform. The former gets more people familiar with your particular technology, providing the basis for potential future customers, and the latter allows you to reduce your engineering cost and potentially benefit from new ideas that you didn’t think of. This means that the community or ecosystem you align with depends on the goals for your platform. If you want more users, that is a very different community strategy from wanting more collaborators. Many startups conflate these strategies, which means they don’t always get the results they’re looking for.

    Let’s assume that you have a potential platform that is categorized in the same cloud native space as Kubernetes. And let’s assume that you’ve determined that the best strategy to maximize your impact is to open source your platform. Does that mean you should put your project in the CNCF? It depends! Let’s assume that your product will target infosec professionals, and you want to get feedback on usage patterns for common security use cases. In that case, the Kubernetes or CNCF communities may not be the best fit. If you want security professionals getting familiar with and adopting your platform, you may want to consider security-focused communities, such as those that have formed around SBOM, compliance, and scanning projects. Or perhaps you do want to see how devops or cloud computing professionals would use your platform to improve their security risk, in which case Kubernetes or CNCF make sense. Your target audience will determine what community is the best fit.

    Another scenario: let’s assume that your platform is adjacent to Kubernetes and you think it’s a good candidate for collaboration with multiple entities with a vested interest in your project’s success. In that case, you need developers with working knowledge of Kubernetes architecture, and the Kubernetes community is definitely where you want your project to be incubated. It’s not always so straightforward, however. If you’re primarily looking for developers who will extend your platform, making use of your interfaces and APIs, then perhaps it doesn’t matter if they have working knowledge of Kubernetes. Maybe in this case, you would do well to understand developer use cases and which vertical markets or industries your platform appeals to, and then follow a different community trail. Platform-community fit for your developer strategy is a more nuanced decision than product-market fit. The former is much more multi-dimensional than the latter.

    If you have decided that developers are key to your platform strategy, you have to decide what kind of developers you’re looking for: those that will *extend* your platform; those that will contribute to your core platform; or those that will use or embed your platform. That will determine the type of lift you need and what community(ies) to align with.

    One more example: You’re creating a platform that you believe will transform the cybersecurity industry, and you want developers that will use and extend your platform. You may at first be attracted to security-focused communities, but then you discover a curious thing: cybersecurity professionals don’t seem fond of your platform and haven’t adopted it at the scale you expect or need. Does this mean your platform sucks? Not always – it could be that these professionals are highly opinionated and have already made up their minds about desired platforms to base their efforts on. However, it turns out that your platform helps enterprise developers be more secure. Furthermore, you notice that within your enterprise developer community, there is overlap with the PyTorch community, which is not cybersecurity focused. This could be an opportunity to pivot on your adoption strategy and go where your community is leading: PyTorch. Perhaps that is a more ideal destination for community alignment purposes. You can, however, do some testing within the PyTorch community before making a final decision.

    Learn From My Example: Hyperic

    Hyperic was a systems management monitoring tool. These days we would put it in the “observability” category, but that term didn’t exist at the time (2006). The Hyperic platform was great for monitoring Java applications. It was open core, so we focused on adoption by enterprise developers and not contributions. We thought we had a great execution strategy to build a global user base that would use Hyperic as the basis for all of their general purpose application monitoring needs. From a community strategy perspective, we wanted Hyperic to be ubiquitous, used in every data center where applications were deployed and managed. We had a great tag line, too: “All Systems Go”. But there was a problem: although Hyperic could be used to monitor any compute instance, it really shined when used with Java applications. Focusing on general systems management put us in the same bucket, product-wise, as other general use systems management tools, none of which were able to differentiate themselves from one another. If we had decided to place more of our community focus on Java developers, we could have ignored all of the general purpose monitoring and focused on delivering great value for our core audience: Java development communities. Our platform-community fit wasn’t aligned properly, and as a result, we did not get the lift we were expecting. This meant that our sales team had to work harder to find opportunities, which put a drag on our revenue and overall momentum. Lesson learned…

    When attempting a platform execution strategy, and you’re going the open source route, platform-community fit is paramount. Without it, you won’t get the lift you’re expecting. You can always change up your community alignment strategy later, but it’s obviously better if you get it right the first time.

  • The New Open Source Playbook

    (This is the first in a series)

    For the last few years, the world of commercial open source has been largely dormant, with few startup companies making a splash with new open source products. Or if companies did make a splash, it was for the wrong reasons – see, e.g., HashiCorp’s Terraform rug pull. It got to the point that Jeff Geerling declared that “Corporate Open Source is Dead”, and honestly, I would have agreed with him. It seemed that the age of startups pushing new open source projects and building a business around them was a thing of the past. To be clear, I always thought that it was naive to think that you could simply charge money for a rebuild of open source software, but the fact that startups were always trying showed that there was momentum behind the idea of using open source to build a business.

    And then a funny thing happened – a whole lot of new energy (and money) started flowing into nascent companies looking to make a mark in… stop me if you’ve heard this one… generative AI. Or to put it in other words, some combination of agents built on LLMs that attempted to solve some automation problem, usually in the category of software development or delivery. It turns out that when there’s lots of competition for users, especially when those users are themselves developers, a solid open source strategy can make the difference between surviving and thriving. In light of this newfound enthusiasm for open source and startups, I thought I’d write a handy guide for startups looking to incorporate open source strategy into their developer go-to-market playbook. Except in this version, I will incorporate nuances specific to our emerging agentic world.

    To start down this path, I recommend that startup founders look at 3 layers of open source go-to-market strategy: platform ecosystem (stuff you co-develop), open core (stuff you give away but keep IP), and product focus (stuff you only allow paying customers to use). That last category, product focus, can be on-prem, cloud hosted, or SaaS services – it won’t matter, ultimately. Remember, this is about how to create compelling products that people will pay for, helping you establish a business. There are ways to use open source principles that can help you reach that goal, but proceed carefully. You can derail your product strategy by making the wrong choices.

    Foundation: the Platform Ecosystem Play

    When thinking about open source strategy, many founders thought they could release open source code and get other developers to work on their code for free as a new model of outsourcing. This almost never works as the startup founders imagined. What does end up happening is that a startup releases open source code and their target audience happily uses the code for free, often not contributing back, causing a number of startups to question why they went down the open source path to begin with. Don’t be like them.

    The way to think of this is within the concept of engineering economics. What is the most efficient means to produce the foundational parts of your software?

    • If the answer is by basing your platform on existing open source projects, then you figure out how to do that while protecting your intellectual property. This usually means focusing on communities and projects under the auspices of a neutral 3rd party, such as the Eclipse or Linux Foundation.
    • If the answer is by creating a new open source platform that you expect to attract significant interest from other technology entities, then you test product-market fit with prospective collaborators and organizations with a vested interest in your project. Note: this is a risky strategy requiring a thoughtful approach and ruthless honesty about your prospects. The most successful examples of this, such as Kubernetes, showed strong demand from the outset and their creation was a result of market pull, not a push.
    • If the answer is that you don’t need external developers contributing to your core platform, but you do need end users and data on product-market fit, then you look into either an open core approach, or you create a free product that gives the platform away, though not necessarily under an open source license. This is usually for the cases where you need developers to use or embed your product, but you don’t need them contributing directly. This is the “innovation on the edge” approach.
    • Or, if the answer is that you’ll make better progress by going it alone, then you do that and you don’t give it a 2nd thought. The goal is to use the most efficient means to produce your platform or foundational software, not score points on hacker news.

    Many startups through the years have been tripped up by this step, misguidedly believing that their foundational software was so great that once they released it, thousands of developers would fall over each other to contribute to the project.

    In the world of LLMs and generative AI, there is an additional consideration: do you absolutely need the latest models from Google, OpenAI, or elsewhere, or can you get by with slightly older models less constrained by usage restrictions? Can you use your own training and weights with off-the-shelf open source models? If you’re building a product that relies on agentic workflows, you’ll have to consider end user needs and preferences, but you’ll also have to protect yourself from downstream usage constraints, which could hit you if you reach certain thresholds of popularity. When starting out, I wholeheartedly recommend having as few constraints as possible, opting for open source models whenever possible, but also giving your end users the choice if they have existing accounts with larger providers. This is where it helps to have a platform approach that lets you address product-ecosystem fit as early as possible. If you can build momentum while architecting your platform around open source models and model orchestration tools, your would-be platform contributors will let you know that early on. Having an open source platform approach will help you guide your development in the right direction. Building your platform or product foundation around an existing open source project will be even more insightful, because that community will likely already have established AI preferences, helping make the decision for you.

    To summarize:

    • Find the ecosystem that best fits your goals and product plans and try to build your platform strategy within a community in that ecosystem, preferably on an existing project.
    • Barring that, create your own open source platform but maintain close proximity to adjacent communities and ecosystems, looking for lift from common users that will help determine platform-ecosystem fit.
    • Or build an open core platform, preferably with a set of potential users from an existing community or ecosystem who will innovate on the edge, using your APIs and interfaces.
    • If none of those apply, build your own free-to-use proprietary platform but maintain a line-of-sight to platform-ecosystem fit.

    No matter how you choose to build or shape a platform, you will need actual users to provide lift for your overall product strategy. You can get that lift from core contributors, innovators on the edge, adoption from your target audience, or some combination of these. How you do that depends on your needs and the expectations of your target audience.

    Up Next: open core on the edge and free products.

  • Open Source is About to Undergo Substantial Change

    …And Most Open Source Communities Aren’t Ready

    It’s probably gauche to talk about “AI” by now. AI this… AI that… and most of the time, what we’re really talking about is predictive text machines, aka LLMs. But today I want to talk about what I see happening in the open source world, and how I see things changing in the not-too-distant future, and how much of that will be shaped by these predictive text machines, aka… LLMs. The agentic world is growing very quickly, and even if the large LLMs are starting to plateau, the LLM-backed services are still accelerating in their product growth for the simple reason that developers are figuring out how to add rules engines and orchestration platforms to build out targeted vertical services (think tools for reading radiology and MRI scans, for example). A great analogy from computing history for this shift from LLMs to agentic “SLMs” is the shift in emphasis from the single CPU as the measure of compute power to the emergence of multi-core CPUs along with faster RAM, NVMe, larger onboard caches, and of course, GPUs. When we think about compute power today, we don’t refer to the chip speed, which is a far cry from the late ’90s and early 2000s. Believe it or not, kids, there was a time when many people thought that Moore’s law applied to the clock speed on a CPU.

    For some time now, source code has been of little value. There’s so much of it. Nobody buys source code. I’ve made this point before in a series of posts on the subject. Twenty years ago, I noted how internet collaboration was driving down the price of software because of the ubiquity of source code and the ability to collaborate beyond geographic borders. This trend, which has been unceasing now for 25+ years, has hit an inflection point and is accelerating beyond the previous rate. This is, of course, because of the oncoming train that is AI, or more specifically, agentic LLM-based systems that are starting to write more and more of our source code. Before I get into the full ramifications of What This Means for Open Source (tm), let me review the 2 previous transformative eras in tech that played a pivotal role in bringing us to this point: open source and cloud.

    Open Source Accelerated the Speed of Development

    A long, long time ago, software vendors had long release cycles, and customers had no choice but to wait 1-2 years, or longer depending on the industry, for the long cycle of dev, test, and release to complete. And then a funny thing happened: more people got online and suddenly created a flurry of core tools, libraries, and systems that gave application developers the ultimate freedom to create whatever they wanted without interference from gate-keepers. I cannot over-emphasize the impact this had on software vendors. At first, it involved a tradeoff: vendors were happy to use the free tools and development platforms, because they saw a way to gain a market edge and deliver faster. At the same time, startups also saw an opportunity to capitalize on this development and quickly create companies that could compete with incumbents. In the late 90s, this meant grabbing as much cash as possible from investors in the hopes of having an IPO. All of this meant that for every advance software vendors embraced from the open source world, they were also effectively writing checks that future competitors would cash, which required that established vendors release even more quickly, lather, rinse, repeat, and find vertical markets where they could build moats.

    Cloud Accelerated the Speed of Delivery

    If open source accelerated the speed of development, the emergence of what became “cloud technologies” enabled the delivery of software at a speed and scale previously thought to be impossible. Several smart companies in the mid-2000s saw this development and started to enact plans that would capitalize on the trend to outsource computing infrastructure. The companies most famous for leading the charge were Amazon, which created AWS in 2006, Netflix, which embraced AWS at an early stage, Google, which created Borg, the predecessor to Kubernetes, and Salesforce, which created its cloud-based PaaS, Force.com, in 2009. Where open source gave small, growing companies a chance to compete, cloud did the same, but also at a price. Established software vendors started moving to cloud-based systems that allowed them to deliver solutions to customers more quickly, and startups embraced cloud because they could avoid capital expenditures for data center maintenance. Concurrently, open source software continued to develop at a fast pace for the simple reason that it enabled the fast development of technologies that powered cloud delivery. Similar to open source, the emergence of cloud led directly to faster release cycles and increasing competition. Unlike open source, however, cloud computing allowed established cloud companies to build out hegemonic systems designed to exact higher rental fees over time, pulling customers deeper into dependencies that are increasingly difficult to unravel. Software vendors that thought open source developers were the architects of their demise in the early 2000s hadn’t yet met Amazon.

    All of these developments and faster release cycles led to a lot more source code being written and shared, with GitHub.com emerging as the preferred source code management system for open source communities. (Pour one out for Sourceforge.net, which should have captured this market but didn’t.) Sometimes this led companies to think that maybe their business wasn’t cut out for this world of source code sharing, so they began a retrenchment from their open source commitments. I predicted that this retrenchment would have little impact on their viability as a business, and I was right. If only they had asked me, but I digress…

    All of this brings us to our present moment, where source code is less valuable than ever. And in a world where something’s value is depreciating, how do we ensure that the rules of engagement remain fair for all parties?

    Sorry Doubters: AI Will Change Everything

    If open source accelerated development and cloud accelerated delivery, then AI is accelerating both, simultaneously. Code generation tools are accelerating the total growth of source code; code generation tools are accelerating the ongoing trend of blurring the boundary between hardware and software; and code generation tools are (potentially) creating automated systems that deliver solutions more quickly. That last one has not yet been realized, but with the continuing growth of agentic workflows, orchestrators, and rules engines, I would bet my last investment dollar on that trend realizing its potential sooner rather than later.

    What does this portend? I think it means we will need to craft new methods of managing and governing all of this source code. I think it means that rules of collaboration are going to change to reflect shifting definitions of openness and fairness in collaboration. I think it means that previously staid industries (read: semiconductors) are facing increasing pressure in the form of power consumption, speed of data flow, and increasingly virtualized capabilities that have always lived close to the silicon. And I think a whole lot of SaaS and cloud native vendors are about to understand what it means to lose your “moat”. The rise of agentic systems is going to push new boundaries and flip entire industries on their heads. But for the purpose of this essay, I’m going to focus on what it means for rules of collaboration.

    What is the Definition of Open Source?

    For many years, the definition of open source has been housed and governed by the Open Source Initiative (OSI). Written in the post-Cold War era of open borders and free trade, it’s a document very much of its time. In the intervening years, much has happened. Open source proliferation happened, and many licenses were approved by the OSI as meeting the requirements of the Open Source Definition (OSD). State-sponsored malware has happened, sometimes inflicting damage on the perceived safety of open source software. Cloud happened, and many open source projects were used in the creation of “cloud-native” technologies. And now LLM-based agentic systems are happening. I mention all of this to ask: in what context is it appropriate to consider changes to the OSD?

    One of the reasons open source governance proved to be so popular is that it paved the way for innovation. Allow me to quote my own definition of innovation:

    Innovation cannot be sought out and achieved. It’s like happiness. It has to be achieved by laying the foundation and establishing the rules that enable it to flourish.

    In open source communities and ecosystems, every stakeholder has a seat at the table, whether they are individuals, companies, governments, or any other body with a vested interest. That is the secret of its success. When you read the 10 tenets of the OSD, it boils down to “Establishing the rules of collaboration that ensure fairness for all participants.” Basically, it’s about establishing and defending the rights of stakeholders, namely the ability to modify and distribute derivative works. In the traditional world of source code, this is pretty straightforward. Software is distributed. Software has a license. Users are held to the requirements of that license. We already saw the first cracks in this system when cloud computing emerged, because the act of distributing… sorry, “conveying” software changed significantly once software was used over a network rather than shipped to the user. And the idea of derivative works was formed at a time when software was compiled with shared library binaries (.so and .dll) that were pulled directly into a software build. Those ideas have become more quaint over time, and the original ideas of the OSD have become increasingly exploitable over the years. What use is a software license when we don’t technically “use software”? We chose to not deal with this issue by pretending that it hadn’t changed. For the most part, open source continued to flourish, and more open source projects continued to fuel the cloud computing industry.

    But now we’re bracing for another change. How do we govern software when we can’t even know if it was written by humans? Agentic systems can now modify and write new source code with little human intervention. I will not comment on whether this is a good idea, merely that it is happening. Agentic systems can take the output of cloud-based services and write entire applications that mimic their feature set. Does that meet the definition of open source? Does it violate the EULA of a cloud service? And if companies can recreate entire code bases of projects based only on the requirements of the applications that use them, does that violate the terms of reciprocal licenses like the GPL? And this is before we even get to the issues of copyright pertaining to all the source code that had to feed the models in order to write code.

    If we come back to answering the question “how do we protect the rights and ensure the fairness of all participants”, how do we prepare for these changes? I think a few things are in order:

    • The right to reverse engineer must be protected to meet the definition of Open Source. This means that the ability to recreate, modify, and redistribute a model, cloud service, or really anything in technology that we use, has to be protected. For years, cloud providers have built complexity into their services that makes them very difficult to replicate at scale. That is now changing, and it is a good thing.
    • This also means that the ability to recreate, modify, and redistribute models must also be protected if it uses the moniker of Open Source.
    • Agents must abide by licensing terms in order to be categorized as open source. If you call your agentic systems open source, they must be able to interpret and abide by software licenses. This effectively means that all agentic systems will need to include a compliance persona in order to meet the definition of Open Source.
    • Maintainers of Open Source projects must have a way to quickly dismiss the output of agentic systems that file bug and vulnerability reports. This means that in order to meet the open source definition, agentic systems that fit in that category will have to abide by a standard that maintainers use to signal their willingness to accept input from agents. If maintainers decline, then agentic systems will either avoid these projects, or push their inputs and changes into forked repos maintained elsewhere.

    These are just a couple of ideas. The bottom line is that the open source ethos guarantees all stakeholders a seat at the table, and we must be willing to make changes to our governing rules in order to ensure fairness for all parties. To do otherwise is to shirk our responsibility and pretend like it’s still 1999. No change to the open source definition should be taken lightly, but as the governing document that protects the rights of those who participate in open source communities, we need to make sure that it doesn’t become more easily exploitable by monopolistic companies and those that wish to extort from community members or commit harmful acts.

    Open Source communities and maintainers are not yet prepared for these changes, and it’s our job as community members to make sure that these communities, the backbone of open source innovation, remain vibrant and strong.

  • Tales of Privilege

    Here’s my latest on Medium:

    Much has been said recently about privilege and, specifically, white male privilege. How it feeds into the success of many people, especially those who benefit from institutions that privilege whiteness, maleness, and more specifically, maleness that falls within the strict bounds of gender and sexuality norms. It has been said that it’s impossible to separate the role of privilege from one’s success. That they are tightly coupled, and to suggest that one can have success without acknowledging the role of social privilege is highly disingenuous and tantamount to thievery. In other words, check your privilege. This is an attempt to put my story in this context and to show how conventional wisdom doesn’t always apply to individual stories.

    Read the complete post.

  • Episode 8: A New Beginning

    Some of you know that I recently left Red Hat. There are multiple reasons for this, mostly to do with a wonderful opportunity that came my way (more on that later).

    First, Red Hat. I learned more in my 4 years there than at any other time in my career. I went from being just another community manager to someone who learned how to grow a community into a global ecosystem, essentially functioning as chief executive, CMO, and head of alliances for the Gluster Community for three years. It was an awesome job – and came with awesome responsibilities. Red Hat separates its community and product operations into “church” and “state.” There is a huge benefit to this: those on the open source (or “church”) side function independently and are authorized to make decisions on behalf of their respective communities with little meddling from the business or “state” side of the company. This allowed me great latitude in running the Gluster community and was a welcome difference from previous roles in other companies. After four years, however, I had outgrown this model and wanted to take on more of a hybrid business-community-product role. I was also ready to take on more responsibility.

    And now, what you really want to know – where did I go? I’m so glad you asked!

    I wrote a series of articles at Linux.com where I explored the art of open source product management, which should give you an idea of what’s been on my mind. I ended up speaking with a few companies about various opportunities, and in the end, I chose the one that felt right: EMC. Every company I spoke to ticked off all the checkboxes, but EMC seemed like the ideal fit for all sorts of reasons – some business, some personal and family. So here I am in the Advanced Software Division as the Director of Open Source Programs! First order of business is building out the ecosystem and product space around CoprHD, EMC’s first major foray into the wily world of open source.

    But there’s more than just community and ecosystem development to work on – there are a host of best practices to wrangle, the open source way to institutionalize, and much more. As I’ve written before, making software in the open source way requires a cultural change, and it’s much more than simply pasting a license and pushing to GitHub. I’ll be building programs that make the relationship between community and product, i.e., church and state, more efficient. There’s much to do, and it’s a fun challenge. Onward and upward!

  • El-Deko – Why Containers Are Worth the Hype

    Video: http://www.youtube.com/watch?v=vqtnG1TBdxM

    Video above from Kubernetes 1.0 Launch event at OSCON

    In the above video, I attempted to put Red Hat’s container efforts into a bit of context, especially with respect to our history of Linux platform development. Having now watched the above video (they forced me to watch!) I thought it would be good to expound on what I discussed in the video.

    Admit it, you’ve read one of the umpteen millions of articles breathlessly talking about the new Docker/Kubernetes/Flannel/CoreOS/whatever hotness and thought to yourself, Wow, is this stuff overhyped. There is some truth to that knee-jerk reaction, and the buzzworthiness of all things container-related should give one pause – It’s turt^H^H^H^Hcontainers all the way down!

    I myself have often thought how much fun it would be to write the Silicon Valley buzzword-compliant slide deck, with all of the insane things that have passed for “technical content” over the years, from Java to Docker and lots of other nonsense in between. But this blog post is not about how overhyped the container oeuvre is, but rather why it’s getting all the hype and why – and it hurts to write this – it’s actually deserved.

    IT, from the beginning, has been about doing more, faster. This edict has run the gamut from mainframes and microcomputers to PCs, tablets, and phones. From timeshare computing to client-server to virtualization and cloud computing, the quest for this most nebulous of holy grails, efficiency, has taken many forms over the years, in some cases fruitful and in others, meh.

    More recently, efficiency has taken the form of automation at scale, specifically in the realm of cloud computing and big data technologies. But there has been some difficulty with this transition:

    • The preferred unit of currency for cloud computing, the venerable virtual machine, has proved to be a tad overweight for this transformation.
    • Not all clouds are public clouds. The cloudies want to pretend that everyone wants to move to public cloud infrastructure NowNowNow. They are wrong.
    • Existing management frameworks were not built for cloud workloads. It’s extremely difficult to get a holistic view of your infrastructure, from on-premises workloads to cloud-based SaaS applications and deployments on IaaS infrastructure.

    While cloud computing has had great momentum for a few years now and shows no signs of stopping, its transformative power over IT remains incomplete. To complete the cloudification of IT, the above problems need to be solved, which involves rewriting the playbook for enterprise workloads to account for on-premises, hybrid and, yes, public cloud workloads. The entire pathway from dev to ops is currently undergoing the most disruption since the transition from mainframe to client-server. We’re a long way from the days when LAMP was a thing, and software running on bare metal was the only means of deploying applications. Aside from the “L”, the rest of the LAMP stack has been upended, with its replacements still in the formative stages.

    While we may not know precisely what the new stack will be, we can now make some pretty educated guesses:

    • Linux (duh): It’s proved remarkably flexible, regardless of what new workload is introduced. Remember when Andy Tanenbaum tried to argue in 1992 that monolithic kernels couldn’t possibly provide the modularity required for modern operating systems?
    • Docker: The preferred container format for packaging applications. I realize this is now called the Open Container Format, but most people will know it as Docker.
    • Kubernetes: The preferred orchestration framework. There are others in the mix, but Kubernetes seems to have the inside track, although its use certainly doesn’t preclude Mesos, et al. One can see a need for multiple, although Kube seems to be “core”.
    • OpenShift: There’s exactly one open source application management platform for the Docker and Kubernetes universe, and that’s OpenShift. No other full-featured open source PaaS is built on these core building blocks.

    In the interest of marketers everywhere, I give you the “LDKO” or “El-deko” stack. You’re welcome.

    Why This is a Thing

    The drive to efficiency has meant extending the life of existing architecture, while spinning up new components that can work with, rather than against, current infrastructure. After it became apparent to the vast majority of IT pros that applications would need to straddle the on-premises and public cloud worlds, the search was on for the best way to do this.

    Everyone has AWS instances; everyone has groups of virtual machines; and everyone has bare metal systems in multiple locations. How do we create applications that can run on the maximum number of platforms, thus giving devops folks the most choices in where and how to deploy infrastructure at scale? And how do we make it easy for developers to package and ship applications to run on said infrastructure?

    At Red Hat, we embraced both Docker and Kubernetes early on, because we recognized their ability to deliver value in a number of contexts, regardless of platform. By collaborating with their respective upstream communities, and then rewriting OpenShift to take advantage of them, we were able to create a streamlined process that allowed both dev and ops to focus on their core strengths and deliver value at a higher level than ever before. The ability to build, package, distribute, deploy, and manage applications at scale has been the goal from the beginning, and with these budding technologies, we can now do it more efficiently than ever before.

    Atomic: Container Infrastructure for the DevOps Pro

    In the interests of utilizing the building blocks above, it was clear that we needed to retool our core platform to be “container-ready,” hence Project Atomic and its associated technologies:

    • Atomic Host: The core platform or “host” for containers and container orchestration. We needed a stripped-down version of our Linux distributions to support lightweight container management. You can now use RHEL, CentOS, and Fedora versions of Atomic Host images to provide your container environment. The immutability of Atomic Host and its atomic update feature provides a secure environment to run container-based workloads.
    • Atomic CLI: This enables users to quickly perform administrative functions on Atomic Host, including installing and running containers as well as performing an Atomic Host update.
    • Atomic App: Our implementation of the Nulecule application specification, allowing developers to define and package an application and operations to then deploy and manage that application. This gives enterprises the advantage of a seamless, iterative methodology to complete their application development pipeline. Atomic App supports OpenShift, Kubernetes, and Just Plain Docker as orchestration targets out of the box with the ability to easily add more.
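
    To make the idea of “define once, target multiple orchestrators” a bit more concrete, here is a minimal sketch of the general shape such an application definition takes. To be clear, the field names and values below are illustrative assumptions for the sake of the sketch, not the canonical Nulecule schema – see projectatomic.io for the real thing.

      # Illustrative only: a Nulecule-style application definition expressed as
      # Python data and emitted as YAML. Field names here are assumptions made
      # for this sketch, not the official specification.
      import yaml  # PyYAML

      app_definition = {
          "specversion": "0.0.2",  # assumed version string for illustration
          "id": "example-web-app",
          "metadata": {"name": "Example Web App", "appversion": "1.0.0"},
          # Parameters the deployer can set at install time, e.g. which
          # orchestration provider (OpenShift, Kubernetes, plain Docker) to target.
          "params": [
              {"name": "provider", "default": "kubernetes"},
          ],
          # The graph lists the application's components, each pointing at
          # provider-specific deployment artifacts.
          "graph": [
              {
                  "name": "web-frontend",
                  "artifacts": {
                      "kubernetes": ["file://artifacts/kubernetes/frontend.yaml"],
                      "docker": ["file://artifacts/docker/frontend-run.sh"],
                  },
              },
          ],
      }

      print(yaml.safe_dump(app_definition, sort_keys=False))

    However the fields are ultimately spelled, the design choice is the point: one packaged definition, with the orchestration target left as a parameter, which is what lets Atomic App hand the same application to OpenShift, Kubernetes, or plain Docker.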

    Putting It All Together

    As demonstrated in the graphic below, the emerging stack is very different from your parents’ Linux. It takes best of breed open source technologies and pieces them together into a cloud native fabric worthy of the DevOps moniker.

    El-Deko in All Its Glory

    [Diagram: the El-Deko stack]

    With our collaboration in the Docker and Kubernetes communities, as well as our rebuild of OpenShift and the introduction of Project Atomic, we are creating a highly efficient dev to ops pipeline that enterprises can use to deliver more value to their respective businesses. It also gives enterprises more choice:

    • Want to use your own orchestration framework? You can add that parameter to your Nulecule app definition and dependency graph.
    • Want to use another container format? Add it to your Nulecule file.
    • Want to package an application so that it can run on Atomic Host, Just Plain Docker, or OpenShift? Package it with Atomic App.
    • Want an application management platform that utilizes all this cool stuff and doesn’t force you to manage every detail? OpenShift is perfect for that.
    • Need to manage and automate your container infrastructure side-by-side with the rest of your infrastructure? ManageIQ is emerging as an ideal open source management platform for containers – in addition to your cloud and virtualization technologies.

    As our container story evolves, we’re creating a set of technologies useful to every enterprise in the world, whether developer or operations-centric (or both). The IT world is changing quickly, but we’re pulling it together in a way that works for you.

    Where to Learn More

    There are myriad ways to learn more about the tools mentioned above:

    • projectatomic.io – All the Atomic stuff, in one place
    • openshift.org – Learn about the technology that powers the next version of OpenShift.com and download OpenShift Origin
    • manageiq.org – ManageIQ now includes container management, especially for Kubernetes as well as OpenShift users

    We will also present talks at many upcoming events that you will want to take advantage of:

  • Survivor’s Guilt

    John Goebel

    Now that I’ve had two gastro tests with negative results, I feel safe in saying that I don’t have any serious gastrointestinal diseases or cancer. I feel some combination of relief and a tad of survivor’s guilt. In addition to my brother, I’ve had other friends and family succumb to gastric and colon cancers over the years.

    It all seems like such a crap shoot – some of us “win” the genetic lottery of cancer mutations, and some of us survive with decent health – for now, at least. It raises the question: why do some of us stay in good health while others have the incredible bad luck, through no fault of their own, of getting a terminal illness? In these past few months since James was diagnosed with gastric cancer, I have often wondered what I have done to deserve my (thus far) decent state of health. The reality is that I’ve done nothing – I don’t regularly exercise and I don’t pay much attention to what or how much I eat. It all feels grossly unfair.

    I’ll never forget when John Goebel told me he had been diagnosed with colon cancer. It blew my mind. Here was this 38-year-old who was the epitome of good health: he ate right, exercised, and looked great. He looked 10 years younger than his age. It seemed like such a cruel joke that he would be the one to leave behind his family while many of us with poor lifestyle habits have the luxury of seeing our children grow up.

    Then again, I could be diagnosed tomorrow with some terminal illness or die in a horrible accident, rendering this post entirely moot. If the last 6 months have taught me anything, it’s that these things can change rather rapidly.

  • Open source more about process than licensing

    It is a testament to the success of the Open Source Initiative’s (OSI) branding campaign for open source software that “open source” and “licensing” are functionally synonymous. To the extent that people are familiar with open source software, they know it as source code released under a license that lets anyone see the “crown jewels” of a program, as opposed to an opaque binary, a black box that hides its underpinnings.

    This well-trodden trope has dominated the mainstream view of open source software since Eric Raymond pushed it into the public consciousness over 15 years ago. But taking a previously proprietary code base and transitioning it to an open source project makes one seriously question any previous assumptions about code and licensing. It is that undertaking that leads one to appreciate the values of process and governance. After seeing that transition from closed to open firsthand, I am convinced that the choice of whether to release code as a proprietary or open source project leads to fundamental changes in the end product, a divergence that is very difficult to roll back.

    From the point of view of most people, the software license is the most important aspect of releasing open source software, but in my opinion, licensing ranks somewhere below user experience, workflows, and integration into existing data center technologies. Nowhere is the difference between what is “known” (licensing) and what actually matters (user workflows) more clear than in the fearful eyes of a development team tasked with transforming their proprietary product into an open source project. In fact, the development methodology chosen by the engineers has a direct impact on what type of software is produced. If an open source development model is chosen from the beginning, one can be reasonably sure that the end product will be relatively portable and will plug into the most commonly used environments. If a proprietary model is chosen, it’s very easy for the developers to take cheap shortcuts that result in short-term gain and long-term pain, and that’s precisely what often happens.

    To the extent that people think about these things at all, the common perception is that the change involves a simple search and replace, maybe the removal of 3rd party software, uploading to a public repository, and presto! Fork me on GitHub! But nothing could be further from the truth. What most people miss about open source software is that it’s much more about process, control, and administration than about licenses. As I argued in It Was Never About Innovation, the key to the success of open source software is not the desire for innovation but rather the fact that all players in open source ecosystems are on a level playing field. Customers, outside developers, freeloaders: they all have a seat at the table and can exert influence on a project by leveraging the community equity they have built up over time through their contributions. This is in stark contrast to proprietary development models, where developers can essentially do whatever they want as long as they create an end product that meets the expectations of the Product Requirements Document (PRD) supplied by product management.

    This is where the difference between open source and proprietary development comes into stark relief. The open process that accompanies open source development helps ensure that the software will integrate into any given environment and that certain bad habits are avoided; the two go hand-in-hand. Proprietary software development, for example, often results in software that is monolithic in nature, with a minimum of dependencies on system software, and bundled with its own set of libraries and tools. This gives developers the leeway to do whatever they want, often employing specific versions of libraries, reinventing various wheels, and generally veering far from the path of creating software that works well in a broader context.

    Open source software developers, by contrast, have no such luxury. From day one, their users demand the ultimate in flexibility, integration, and conformance to standard data center practices. This means using existing tools and libraries whenever possible and baking into the process the idea that your software will be a cog in a much larger data center machine. Note that nowhere did I mention that open source development is faster or more innovative, although it can be. On one hand, proprietary developers love the fact that they have complete control over the end product and don’t have to deal with annoyances, such as customer demands that their precious software honor existing workflows. On the other hand, end users love the fact that their open source deployments likely have a long history of use within large data centers and that those previous users made sure the software was to their liking.

    Both of these approaches come at a cost: open source development may actually be slower at particular times in its life-cycle due to some overhead costs that are inherent to the model, and proprietary development, while perhaps faster, sends the developer team down the road of maintenance hell, needing to endlessly maintain the bits of glue that generally come for free in open source development. The overwhelming evidence of late suggests that the open source approach is far more effective in the data center.

    Suppose that your team went down the road of proprietary development but eventually came to the conclusion that they could win over more users with an open source approach—what then? Here lies the conundrum: the process of undoing the proprietary process and imbuing a project with the open source sauce is spectacularly difficult. Many otherwise knowledgeable people in the tech industry have no idea just how much change is involved. Hell, most engineers have no idea what’s actually involved in switching horses midstream. To engage in the process means necessarily losing valuable development time while taking up tasks that developers feel are, frankly, beneath them. To change software from a monolithic, proprietary code base to one that plays well with others is a gargantuan task.

    “But wait!” I can hear you say. “Can’t they just release whatever they have under an open source license and then take care of the other stuff later?” Sure, they can, but the end result will likely be disappointing at best and a colossal disaster at worst. For starters, mere mortals won’t be able to even install the software, much less build it from source. There are several tricks developers play to make black box monolithic products work for their end users, and those tricks make the products terrible for open source community-building:

    • Highly customized build environments and tools. This is the #1 reason why the majority of proprietary software cannot simply be set loose as open source: it’s completely unusable by anyone except the developer team that built it. In the open source world, there are a few standard ways to build software. None of them is great at producing the most highly optimized executables, but they’re great for giving developers a simple, standardized way to build and distribute software. The process of making your proprietary software build with standardized open source build tools is probably non-trivial. Open source projects, by contrast, came out of the crib compiling with GCC.

    • 3rd party libraries, also proprietary, that you do not have permission to include in your open source code. Even if your code can build with GNU autotools and GCC, to use one example, you probably have to rewrite some not-insignificant portion of it. Your developers will be spending their time ripping out and replacing pieces of code rather than implementing new features. The extent varies from project to project, but it afflicts the vast majority of projects going from closed to open.

    • Bad security practices. When developers think nobody else is looking, they do all sorts of crazy things. And as long as features are delivered on schedule, nobody bats an eye. It is this primacy of feature development over code quality that can result in some horrendous security holes. Obvious exceptions aside, *cough*heartbleed*cough*, there is plenty of evidence that open source software is more secure than its proprietary counterparts.

    • Bad coding practices and magical unicorn libraries. For the same reasons as above, i.e., feature primacy and nobody looking, developers tend to grab the latest and greatest from other software packages, especially runtime scripting engines, libraries, and tools. They take the code, modify it, and then they have an end product that works. For now. This is great if you’re on a deadline and your code must work by midnight, and it’s approaching 23:30. The problem, however, is that the product will live long after midnight tonight, and you will be responsible for maintaining, updating, and syncing your pristine unicorn library with upstream code that will inevitably diverge from what you modified. This is terrible for everyone, developers and admins included. Imagine the poor sod in operations assigned to install and maintain someone’s late-night “innovations”.

    All of the above leads product teams to one obvious conclusion: package and distribute the software so that it runs as far removed as possible from the system on which it resides, usually as a bloated virtual appliance or at least as a self-contained application that relies on the bare minimum of system libraries. Windows admins should take a look at their Program Files directory sometime. Or better yet, don’t. All of this, taken together, adds up to an end product that is extremely difficult to release as open source software.

    Some ops people might think that an appliance is easier for them to deploy and maintain, but more often, they hold their nose in order to use the thing. They will tolerate such an approach if the software actually makes their jobs easier, but they won’t like it. All of the ops people I know, and I used to be one, prefer that the software they deploy conform to their existing processes and workflows, not force them to create new ones.

    Put another way: would your software exist in its current form if it started life as an open source project? Or would end users have demanded a different approach?

    Open source is about process much more than license, and everyone in an open source community has the ability to influence those processes. Projects that start out as open source have many characteristics baked in from the beginning that often, though not always, save developers from their own worst instincts. If you elect to reverse course and move to the open source model, understand what the change entails: it is a minefield, laden with challenges that will be new to your development team, who are unaccustomed to seeing their practices challenged, don’t particularly relish direct customer feedback, and are entirely uncomfortable with the idea of others reading over their shoulder as they write code. The amount of effort required to change from proprietary to open source processes is probably on the same order as going from waterfall to agile development.

    Example: ManageIQ

    When Red Hat acquired ManageIQ in late 2012, it was with the understanding that the code would be open sourced—eventually. However, there were several things standing in the way of that:

    1. Many of the User Interface (UI) scripts and libraries were proprietary, 3rd party tools.

    2. The software was distributed as an encrypted virtual machine.

    3. ManageIQ was and is a Rails app, and some of the accompanying Ruby gems were modified from their upstream sources to implement some specific features.

    #1 meant that many parts of the code, particularly in the UI, had to be ripped out and either replaced with an open source library or rewritten. This took quite a bit of time, but was something that had to be done to release the code.

    #2 is not something one can do in an open source project, which struck fear into the hearts of the development team. Some changes to the code were necessary once it lost the (false) sense of security that came with being distributed as an encrypted appliance.

    #3 meant that the developer team had to carry forward its modifications to custom gems, which was becoming a burdensome chore and would only get worse over time. The developer team is still in the process of fixing this, but I’m happy to report that we’ve hired a strong Ruby developer, Aaron Patterson, who will, among other things, maintain the team’s changes to upstream gems and prevent future forks and divergence. He’ll also lead the effort to convert ManageIQ to Ruby on Rails 4.

    Conclusion

    Be considerate of your developers and the challenges ahead of them. Hopefully they understand that the needed changes will ultimately result in a better end product. It comes at a price but has its own rewards, too. And never forget to remind folks that choosing an open source approach from the beginning would have obviated this pain.

  • The ManageIQ Design Summit – a small, intimate gathering of cloud experts

    We’re happy to announce the preliminary agenda for the upcoming ManageIQ Design Summit, a 2-day event on October 7 & 8 in Montvale, NJ. Be sure to RSVP soon, as space is very limited. As the title suggests, it’s a small, intimate gathering of cloud experts: people interested in pushing the limits of ManageIQ and setting the roadmap for development. If you’re a ManageIQ user who wants to learn how to make the most of its automation and orchestration capabilities, there will be plenty for you, too:

    • Tour the new RESTful APIs released in Anand (a minimal sketch follows this list)
    • Create reusable components for automation and orchestration of your hybrid cloud infrastructure
    • Hack rooms for those who want to dive in
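
    For anyone who has not yet poked at the RESTful API, here is a minimal sketch, assuming a reachable ManageIQ appliance and valid credentials. The host name and credentials are placeholders, and while the /api and /api/vms paths are typical of the API introduced in Anand, check the ManageIQ documentation for the authoritative endpoint list.

    ```python
    #!/usr/bin/env python3
    """Minimal sketch of querying a ManageIQ RESTful API.

    Assumes a reachable appliance and valid credentials; the host, user,
    and password below are placeholders. Endpoint paths are typical of the
    API introduced in the Anand release, but verify them against the docs.
    """
    import requests

    MIQ_HOST = "https://manageiq.example.com"  # placeholder appliance address
    AUTH = ("admin", "changeme")               # placeholder credentials

    # Entry point: lists the collections the API exposes.
    entry = requests.get(MIQ_HOST + "/api", auth=AUTH, verify=False)
    entry.raise_for_status()
    print([c.get("name") for c in entry.json().get("collections", [])])

    # Fetch the virtual machine inventory (count plus resource hrefs).
    vms = requests.get(MIQ_HOST + "/api/vms", auth=AUTH, verify=False)
    vms.raise_for_status()
    print(vms.json().get("count"))
    ```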

    The proud sponsors of the event are Red Hat and Booz Allen Hamilton. I’ve been told to be on the lookout for a new open source cloud broker project from the Booz Allen engineers.

    We look forward to seeing you there!