Blog

  • Red Hat’s Secret Sauce

    This is a guest post by Paul Cormier, President, Products and Technologies, Red Hat. It was originally posted on the Red Hat blog.

Open source software is, in fact, eating the world. It is a de facto model for innovation, and technology as we know it would look vastly different without it. On a few occasions over the past several years, software industry observers have asked whether there will ever be another Red Hat. Others have speculated that due to the economics of open source software, there will never be another Red Hat. Having just concluded another outstanding fiscal year, and with the perspective of more than 15 years leading Red Hat’s Products and Technologies division, I thought it might be a good time to provide my own views on what actually makes Red Hat Red Hat.

    Commitment to open source

Red Hat is the world’s leading provider of open source software solutions. Red Hat’s deep commitment to the open source community and open source development model is the key to our success. We don’t just sell open source software, we are leading contributors to hundreds of open source projects that drive these solutions. While open source was once viewed as a driver of commoditization and lower costs, today open source is literally the source of innovation in every area of technology, including cloud computing, containers, big data, mobile, IoT and more.

    Red Hat is best known for our leadership in the Linux communities that drive our flagship product, Red Hat Enterprise Linux, including our role as a top contributor to the Linux kernel. While the kernel is the core of any Linux distribution, there are literally thousands of other open source components that make up a Linux distribution like Red Hat Enterprise Linux, and you will find Red Hatters, as well as non-Red Hatters, leading and contributing across many of these projects. It’s also important to note that Red Hat’s contributions to Linux don’t just power Red Hat Enterprise Linux, but also every single Linux distribution on the planet – including those of our biggest competitors. This is the beauty of the open source development model, where collaboration drives innovation even among competitors.

    Today, Red Hat doesn’t just lead in Linux, we are leaders in many different communities. This includes well-known projects like the docker container engine, Kubernetes and OpenStack, which are among the fastest growing open source projects of the last several years. Red Hat has been a top contributor to all of these projects since their inception and brings them to market in products like Red Hat Enterprise Linux, Red Hat OpenShift Container Platform and Red Hat OpenStack Platform. Red Hat’s contributions also power competing solutions from the likes of SUSE, Canonical, Mirantis, Docker Inc., CoreOS and more.

The list of communities Red Hat contributes to includes many more projects like Fedora, OpenJDK, Wildfly, Hibernate, Apache ActiveMQ, Apache Camel, Ansible, Gluster, Ceph, ManageIQ and many, many more. These power Red Hat’s entire enterprise software portfolio. This represents thousands of developers and millions of man-hours per year that Red Hat commits to the open source community. Red Hat also commits to keeping our commercial products 100% pure open source. Even when we acquire a proprietary software company, we commit to releasing all of its code as open source. We don’t believe in open core models, or in being just consumers but not contributors to the projects we depend on. We do this because we believe to our core that the open source development model is THE best model to foster innovation, faster.

    As I told one reporter last week, some companies have endeavored to only embrace ‘open’ where it benefits them, such as open core models. Half open is half closed, limiting the benefits of a fully open source model. This is not the Red Hat way.

    This commitment to contribution translates to knowledge, leadership and influence in the communities we participate in. This then translates directly to the value we are able to provide to customers. When customers encounter a critical issue, we are as likely as anyone to employ the developers who can fix it. When customers request new features or identify new use cases, we work with the relevant communities to drive and champion those requests. When customers or partners want to become contributors themselves, we even encourage and help guide their contributions. This is how we gain credibility and create value for ourselves and the customers we serve. This is what makes Red Hat Red Hat.

    Products not projects

Open source is a development model, not a business model. Red Hat is in the enterprise software business and is a leading provider to the Global 500. Enterprise customers need products, not projects, and it’s incumbent on vendors to know the difference. Open source projects are hotbeds of innovation and thrive on constant change; they are where the development happens. Enterprise customers value this innovation, but they also rely on the stability and long-term support that a product provides. The stable, supported foundation of a product is what then enables those customers to deliver their own innovations and serve their own customers.

Too often, we see open source companies that don’t understand the difference between projects and products. In fact, many go out of their way to conflate the two. In a rush to deliver the latest and greatest innovations, as packaged software or public cloud services, these companies end up delivering solutions that lack the stability, reliability, scalability, compatibility and all the other “ilities” – the non-functional requirements that enterprise customers rely on to run their mission-critical applications.

    Red Hat understands the difference between projects and products. When we first launched Red Hat Enterprise Linux, open source was a novelty in the enterprise. Some even viewed it as a cancer. In its earliest days, few believed that Linux and open source software would one day power everything from hospitals, banks and stock exchanges, to airplanes, ships and submarines. Today open source is the default choice for these and many other critical systems. And while these systems thrive on the innovation that open source delivers, they rely on vendors like Red Hat to deliver the quality that these systems demand.

    Collaborating for community and customer success

Red Hat’s customers are our lifeblood. Their success is our success. Just as we thrive on collaboration in open source communities, that same spirit of collaboration drives our relationships with our customers. We help customers consume open source innovation and apply it to drive innovation in their own businesses. Customers appreciate our willingness to work with them to solve their most difficult challenges. They value the open source ethos of transparency, community and collaboration. They trust Red Hat to work in their best interests and the best interests of the open source community.

    Too often open source vendors are forced to put commercial concerns over the interests of customers and the open source communities that enable their solutions. This doesn’t serve them or their customers well. It can lead to poor decision making in the best case and fractured communities in the worst case. Sometimes these fractures are repaired and the community emerges stronger, as we saw recently with Node.js. Other times, when fractures are beyond repair, new communities take the place of existing ones, as we have seen with Jenkins and MariaDB. Usually, we see that open source innovation marches forward, but this fragmentation only serves to put vendors and their customers at risk.

    Red Hat believes in collaborating openly with both customers and the open source community. It’s that collaboration that brings forward new ideas and creative solutions to the most difficult problems. We work with the community to identify solutions and find common ground to avoid fragmentation. Through the newly launched Red Hat Open Innovation Labs we are bringing that knowledge and experience directly to our customers.

    The next Red Hat

Will there be another Red Hat? I hope and expect that there will be several. Open source is now the proven methodology for developing software. The days of enterprises relying strictly on proprietary software have ended. The problems we have to solve in today’s complex world are too big for just one company. Vendors may deliver solutions in different ways, address different market needs and/or serve different customers – but I believe that open source will be at the heart of what they do. We see open source at the core of leading solutions from both the major cloud providers and leading independent software vendors. But open source is a commitment, not a convenience, and innovative open source projects do not always lead to successful open source software companies.

    Today, we strive not only to be the Red Hat of Linux, but also the Red Hat of containers, the Red Hat of OpenStack, the Red Hat of middleware, virtualization, storage and a whole lot more. Many of these businesses, taken independently, would be among the fastest growing technology companies in the world. They are succeeding because of the strong foundation we’ve built with Red Hat Enterprise Linux, but also because we’ve followed the same Red Hat Enterprise Linux playbook of commitment to the open source community, knowing the difference between products and projects, and collaborating for community and customer success – across all of our businesses. That’s what makes us Red Hat.

  • There is NO Open Source Business Model

    Note: the following was first published on medium.com by Stephen Walli. It is reprinted here with his permission.

    Preface: It has been brought to my attention by friends and trusted advisors that a valid interpretation of my point below is that open source is ultimately about “grubby commercialism”, and altruism equals naïveté. That was not my intent. I believe that economics is about behaviour not money. I believe in Drucker (a company exists to create a market for the solution), not Friedman (a company exists to provide a return to shareholders). I believe in the Generous Man. I believe in Rappaport’s solution to the Prisoner’s Dilemma to always start with the most generous choice. I believe we’ve known how communities work since you had a campfire and I wanted to sit beside it. I had the pleasure of watching Bob Young give a talk today at “All Things Open” where he reiterated that a successful company always focuses on the success of its customers. I think that was stamped on Red Hat’s DNA from its founding, and continues to contribute to its success with customers today. I believe sharing good software is the only way to make all of us as successful as we can be as a tribe. I believe there is no scale in software without discipline.

    The open source definition is almost 20 years old. Red Hat at 22 is a $2B company. MySQL and JBoss have had great acquisition exits. Cloudera and Hortonworks are well on their way to becoming the next billion dollar software companies. But I would like to observe that despite these successes, there is no open source business model.


    I completely believe in the economic value of liberally-licensed collaboratively-developed software. We’ve shared software since we’ve developed software, all the way back into the late 40s and early 50s. This is because writing good software is inherently hard work. We’ve demonstrated that software reviews find more bugs than testing, so building a software development culture of review creates better software. Much of the invention in software engineering and programming systems has been directed towards re-use and writing more and better software in fewer lines of code. Software can’t scale without discipline and rigour in how it’s built and deployed. Software is inherently dynamic, and this dynamism has become clear in an Internet connected world. Well-run, disciplined, liberally-licensed collaborative communities seem to solve for these attributes of software and its development better than other ways of developing, evolving, and maintaining it. There is an engineering economic imperative behind open source software.

    Here’s an example using open source that I believe closely demonstrates that reality.

    Interix was a product in the late 90s that provided the UNIX face on Windows NT. It encompassed ~300 software packages covered by 25 licenses, plus a derivative of the Microsoft POSIX subsystem, plus our own code. This was before the open source definition. We started with the 4.4BSD-Lite distro because that’s what the AT&T/USL lawyers said we could use. The gcc compiler suite would provide critical support for our tool chain as well as an SDK to enable customers to port their UNIX application base to Windows NT.

It took a senior compiler developer on the order of 6–8 months to port gcc into the Interix environment. It was a little more work when you include testing and integration, etc., so round it up to on the order of $100K. The gcc suite was about 750K lines of code in those days, which the COCOMO calculation suggests represented $10M–$20M of value, depending on how much folks were earning. So that’s roughly two orders of magnitude in cost savings compared to writing a compiler suite on our own. And this was a well-maintained, robust, hardened compiler suite, not something built from scratch in a vacuum. That is the benefit of using open source. You can see a similar net return on the 10% year-on-year investment Red Hat makes in its Linux kernel contributions as it delivers Fedora and RHEL. Of course, with Interix, we were now living on a fork. This meant we were drifting further away from the functionality and fixes on the mainline.
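The COCOMO figure above is easy to reproduce. Here is a rough sketch in Python using the basic COCOMO “organic” coefficients (a=2.4, b=1.05) and an assumed loaded cost of $50K–$100K per developer-year; both the coefficients and the salary range are illustrative assumptions, not figures from the original post:

```python
# Back-of-the-envelope COCOMO (basic model, "organic" mode) estimate for
# the ~750 KLOC gcc suite discussed above. Coefficients and salary
# figures are illustrative assumptions.
def cocomo_effort_person_months(kloc, a=2.4, b=1.05):
    """Basic COCOMO: estimated effort in person-months for `kloc` KLOC."""
    return a * kloc ** b

effort_pm = cocomo_effort_person_months(750)  # roughly 2,500 person-months
effort_years = effort_pm / 12                 # roughly 210 person-years

# At a loaded cost of $50K-$100K per developer-year, the replacement
# value lands right in the $10M-$20M range cited in the post.
low, high = effort_years * 50_000, effort_years * 100_000
print(f"{effort_pm:,.0f} person-months (~{effort_years:,.0f} person-years)")
print(f"Replacement value: ${low / 1e6:.0f}M - ${high / 1e6:.0f}M")
```

Against the ~$100K actually spent porting gcc, this is where the “two orders of magnitude” savings comes from.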

The back-of-the-envelope estimate suggested that every new major revision of gcc would cost us another 6+ months to re-integrate, but if we could get our changes contributed back into the mainline code base, we were probably looking at a month of integration testing instead. So from ~$100K we’re approaching $10K–$20K – possibly another order of magnitude cheaper by not living on a fork. We approached Cygnus Solutions as they were the premier gcc engineering team with several gcc committers. The price to integrate quoted to us was ~$120K, but they were successfully oversubscribed with enough other gcc work that they couldn’t begin for 14 months. Ada Core Technologies on the other hand would only charge ~$40K and could begin the following month. It was a very easy decision. (We were not in a position to participate directly in the five communities hiding under the gcc umbrella. While some projects respected the quality of engineering we were trying to contribute, others were hostile to the fact we were working on that Microsoft s***. There’s no pleasing some people.)

    This wasn’t contributing back out of altruism. It was engineering economics. It was the right thing to do, and contributed back to the hardening of the compiler suite we were using ourselves. It was what makes well run open source projects work. I would argue that individuals make similar decisions because having your name on key contribution streams in the open source world is some of the best advertising and resume content you can provide as a developer on your ability to get work done, in a collaborative engineering setting, and demonstrating you well understand a technology base. It’s the fact with which you can lead in an interview. And it’s fun. It’s interesting and challenging in all the right ways. If you’re a good developer or interested in improving your skills, why wouldn’t you participate and increase your own value and skills?

    Well run open source software communities are interesting buckets of technology. If they evolve to a particular size they become ecosystems of products, services (support, consulting, training), books and other related-content. To use an organic model, open source is trees, out of which people create lumber, out of which they build a myriad of other products.

    Red Hat is presented as the epitome of an open source company. When I look at Red Hat, I don’t see an open source company. I see a software company that has had three CEOs making critical business decisions in three different market contexts as they grow a company focused on their customers. Bob Young founded a company building a Linux distro in the early days of Linux. He was focused on the Heinz ketchup model of branding. When you thought “Linux”, Bob wanted the next words in your head to be “Red Hat.” And this was the initial growth of Red Hat Linux in the early days of the Internet and through the building of the Internet bubble. It was all about brand management. Red Hat successfully took key rounds of funding, and successfully went public in 1999. The Red Hat stock boomed.

Matthew Szulik took over the reins as CEO that fall. Within a couple of years the Internet bubble burst and the stock tumbled from ~$140 down to $3.50. Over the next couple of years, Red Hat successfully made the pivot to the server market. RHEL was born. Soon after, Fedora was delivered so that a Red Hat-focused developer community would have an active place to collaborate while Red Hat maintained stability for enterprise customers on RHEL. They successfully crossed Moore’s chasm in financial services. JBoss was acquired for $350M to provide enterprise middleware. Red Hat went after the UNIX ISV community before the other Linux distro vendors realized it was a race.

In 2008, Jim Whitehurst took over the helm. In Whitehurst, they had a successful executive who had navigated running an airline through its Chapter 11 restructuring. So he knows how to grow and maintain employee morale while managing costs and keeping customers happy in the cutthroat market of commercial air travel. He arrives at Red Hat just in time for the economic collapse of 2008. Perfect. But he has also led them through steady stock growth since joining.

Through its history, Red Hat has remained focused on solving their customers’ problems. Harvard economist Theodore Levitt once observed that a customer didn’t want a quarter-inch drill; what they wanted was a quarter-inch hole. While lots of competing Linux distro companies tried to be the best Linux distro, Red Hat carefully positioned themselves not as the best Linux but as an enterprise-quality, inexpensive alternative to Solaris on expensive SPARC machines in the data centre.

    Red Hat certainly uses open source buckets of technology to shape their products and services, but it’s not a different business model from the creation of DEC Ultrix or Sun SunOS out of the BSD world, or the collaborative creation of OSF/1 and the evolution of DEC Ultrix and IBM AIX, or the evolution of SunOS to Solaris from a licensed System V base. At what point did Windows NT cease to be a Microsoft product with the addition of thousands of third party licensed pieces of software including the Berkeley sockets technology?

    When companies share their own source code out of which they build their products and services, and attempt to develop their own collaborative communities, they gain different benefits. Their technology becomes stickier with customers and potential future customers. They gain advocates and experts. It builds inertia around the technology. The technology is hardened. Depending on the relationship between the bucket of technology and their products, they can evolve strong complements to their core offerings.

    The engineering economic effects may not be as great as pulling from a well run external bucket of technology, but the other developer effects make up for the investment in a controlled and owned community. It’s why companies like IBM, Intel, Microsoft, and Oracle all invest heavily in their developer networks regardless of the fact these historically had nothing to do with open source licensing. It creates stickiness. Red Hat gains different benefits from their engineering investments in Linux, their development of the Fedora community, and the acquisition of the JBoss technology, experts, and customers.

    I believe open source licensed, disciplined, collaborative development communities will prove to be the best software development and maintenance methodology over the long term. It’s created a myriad of robust adaptive building blocks that have become central to modern life in a world that runs on software. But folks should never confuse the creation of those building blocks with the underlying business of solving customer problems in a marketplace. There is no “open source business model.”

  • Dear CIO Mag: 2005 called, wants its article back

I can’t believe that journalists still, despite a wealth of resources at their fingertips, continue to get the story completely wrong about building a business on open source software. I’ve never met Paul Rubens, but he should never be allowed to write on the subject again. Witness his article “How to Make Money From Open Source Software”.

But how easy is it really to establish an open source startup that makes money? For every success story like Red Hat there are companies like Cyanogen that fail to thrive and projects that are abandoned.

That’s in the opening paragraph and is a pretty unsatisfactory way to begin an article. He comes out assuming that making a business or product on open source software is somehow different from any other effort to create a business. Spoiler alert: it’s hard. There’s a reason most startups, open source or otherwise, fail. It’s just not an easy thing to do. This is why we worship the successful founders and their companies (sometimes undeservedly). Because they succeeded where so many other smart, capable people failed. And then he immediately goes to a comparison between Red Hat and Cyanogen, two things that couldn’t be more different and weren’t using the same product or business model.

    I’ve said it before: this idea that Red Hat is some magic unicorn whose success cannot be repeated is entirely fallacious. It’s not that Red Hat’s model can’t be repeated, it’s that no one else has even tried.

Rubens then goes on to quote Sam Myers from Balderton Capital (Why? We’re never told why we should listen to this person, other than that he’s a VC dude. Balderdash, I say):

It’s tempting to believe that the Red Hat business model, which is based around selling subscriptions for support to a maintained and tested version of Linux (or a closely related model that offers consultancy and customization to an open source software solution as well as support and maintenance), is the most viable way to make money from open source software. But Sam Myers, a principal at Balderton Capital, a technology venture capital company, says that most open source startups are unlikely to succeed using these business models.

    Here is Rubens setting up the non-point that Red Hat is a magic unicorn, unseen in the wild. What’s wrong here? He gets the model entirely wrong. Red Hat doesn’t sell “subscriptions for support”. Red Hat sells a product, of which one feature is risk reduction and support. This is a common mistake, and it’s very irritating that it’s still made in 2017.

    “Despite Red Hat, it is actually quite challenging to make money selling customization, support and consultancy,” Myers says. “Why? Because it is head-count driven, the model doesn’t scale, and you get low renewals. And you have competition from other consultancies.”

Newsflash: this is not what Red Hat sells. Again, Red Hat sells a product that customers buy because it delivers value to them. They are not consultants. Can these people read? Do they actually talk to Red Hat customers? But wait, it gets worse:

    Myers admits that the subscription model can occasionally be successful, but asserts that a more promising business model is to build a product line around an open source core. This can involve developing premium software modules that add features to the core open source software or, alternatively, building supporting applications that complement the core.

Dear God. This is the open core model. It is a highly unsuccessful model, despite repeated efforts over the years to make it work. Most sane people came to understand its limitations years ago. Unfortunately, some VCs still push startups toward this approach because it allows them to maintain tighter control over the IP, because they make the mistake of equating code with product. Incidentally, it’s the failure of the open core model that has led some VCs to simply not invest in open source startups, which is a tragedy.

    Rubens does manage to get in a quote from Allison Randal, which is the only intelligent part of this article:

…there is no “best” open source business model, and Allison Randal, president of the Open Source Initiative, says that open source startups should avoid searching for one. “The mistake people make is thinking about an open source business model. They should be thinking about a business model and how open source software fits into that,” she says. “VCs are only beginning to understand open source and how to make money, but the way is the same as for any other business: by offering better value and making customers happy.”

    This is what you call “burying the lede.” This is the only takeaway worth taking away from the entire article. Unfortunately, Rubens sets it up as the secondary source. He would have done much better to base the majority of the article on her years of experience building open source communities and products. Randal went on to note the difference between upstream communities and downstream product management:

Randal says that while most communities don’t mind a company trying to monetize a project, it is key that the community still has a life of its own — in the way that Red Hat has fostered the Fedora community. “What drives a community away is when you take the wind out of its sails and it feels taken over,” she says. Randal adds that little things can make a big difference: if Cyanogen Inc. had chosen a different name (in place of Cyanogen OS) for its commercial product, which was based on the Cyanogen Mod project, then the community may not have felt so offended by it, she says.

    Exactly. This isn’t hard. On the other hand, there must be something very counter-intuitive about it, because very few actually get it.

My question to Rubens would be: there are many, many people who have spent years in the trenches building open source products and processes. Why didn’t you base the article on their experience? It’s 2017, for Pete’s sake. Why can’t we discuss this intelligently?


  • Bitnami Enters Kubernetes, Cloud Native Race

Bitnami has officially entered the highly competitive race for cloud native platforms, acquiring Skippbox and joining the Cloud Native Computing Foundation. To those who have been observing Bitnami over the years, this move is not a surprise. Previously known as the world’s #1 service for launching apps in AWS, it seems a natural move for Bitnami to add app management to its portfolio. From the press release:

    The acquisition of Skippbox isn’t Bitnami’s only investment into the Kubernetes ecosystem. She noted her company has been investing significantly in the Helm project for Kubernetes and has two committers to the project. Helm is a tool for managing Kubernetes Charts, with Charts being packages of pre-configured Kubernetes resources.

    That is what you call “burying the lede.” This is the continuation of a year-long effort to create products in the cloud native application management space. On the same day, Bitnami also announced it joined the CNCF. I think it’s safe to say that Bitnami sees a strong future in Kubernetes-based cloud native management services.

  • Managing Your Supply Chain

Depending on open source software introduces some challenges for those looking to create products or services derived from upstream open source components. There’s a lot to consider regarding risk management, engineering efficiency, and how to influence the nebulous upstream open source world – and why you should. Original content was published at opensource.com.


  • How to Make Money with Open Source Platforms

    I wrote this 4-part series at linux.com in mid-2015 about creating products based on open source platforms, and how to provide value to a paying customer. The response was great! Read for yourself below:

    In the above series, you’ll read about different product-based business models and which are best, as well as which are not. You’ll read about the difference between “open core” and other viable types of open source-based products. But mostly, you’ll read about the inherent value of open source platforms and how customers will pay to manage their risk.

  • Episode 8: A New Beginning

    Some of you know that I recently left Red Hat. There are multiple reasons for this, mostly to do with a wonderful opportunity that came my way (more on that later).

    First, Red Hat. I learned more in my 4 years there than at any other time in my career. I went from being just another community manager to someone who learned how to grow a community into a global ecosystem, essentially functioning as chief executive, CMO, and head of alliances for the Gluster Community for three years. It was an awesome job – and came with awesome responsibilities. Red Hat separates its community and product operations into “church” and “state.” There is a huge benefit to this: those on the open source (or “church”) side function independently and are authorized to make decisions on behalf of their respective communities with little meddling from the business or “state” side of the company. This allowed me great latitude in running the Gluster community and was a welcome difference from previous roles in other companies. After four years, however, I had outgrown this model and wanted to take on more of a hybrid business-community-product role. I was also ready to take on more responsibility.

    And now, what you really want to know – where did I go? I’m so glad you asked!

    I wrote a series of articles at Linux.com where I explored the art of open source product management, which should give you an idea of what’s been on my mind. I ended up speaking with a few companies about various opportunities, and in the end, I chose the one that felt right: EMC. Every company I spoke to ticked off all the checkboxes, but EMC seemed like the ideal fit for all sorts of reasons – some business, some personal and family. So here I am in the Advanced Software Division as the Director of Open Source Programs! First order of business is building out the ecosystem and product space around CoprHD, EMC’s first major foray into the wily world of open source.

But there’s more than just community and ecosystem development to work on – there are a host of best practices to wrangle, institutionalizing the open source way, and much more. As I’ve written before, making software in the open source way requires a cultural change, and it’s much more than simply pasting a license and pushing to GitHub. I’ll be building programs that make the relationship between community and product, i.e. church and state, more efficient. There’s much to do, and it’s a fun challenge. Onward and upward!

  • El-Deko – Why Containers Are Worth the Hype

    [Video: Kubernetes 1.0 launch event at OSCON – http://www.youtube.com/watch?v=vqtnG1TBdxM]

    In the video above, I attempted to put Red Hat’s container efforts into a bit of context, especially with respect to our history of Linux platform development. Having now watched it (they forced me to watch!), I thought it would be good to expound on what I discussed.

    Admit it, you’ve read one of the umpteen million articles breathlessly talking about the new Docker/Kubernetes/Flannel/CoreOS/whatever hotness and thought to yourself, wow, is this stuff overhyped. There is some truth to that knee-jerk reaction, and the buzzworthiness of all things container-related should give one pause – it’s turt^H^H^H^Hcontainers all the way down!

    I myself have often thought how much fun it would be to write the Silicon Valley buzzword-compliant slide deck, with all of the insane things that have passed for “technical content” over the years, from Java to Docker and lots of other nonsense in between. But this blog post is not about how overhyped the container oeuvre is, but rather about why it’s getting all the hype and why – and it hurts to write this – the hype is actually deserved.

    IT, from the beginning, has been about doing more, faster. This edict has run the gamut from mainframes and microcomputers to PCs, tablets, and phones. From timeshare computing to client-server to virtualization and cloud computing, the quest for this most nebulous of holy grails, efficiency, has taken many forms over the years, in some cases fruitful and in others, meh.

    More recently, efficiency has taken the form of automation at scale, specifically in the realm of cloud computing and big data technologies. But there has been some difficulty with this transition:

    • The preferred unit of currency for cloud computing, the venerable virtual machine, has proved to be a tad overweight for this transformation.
    • Not all clouds are public clouds. The cloudies want to pretend that everyone wants to move to public cloud infrastructure NowNowNow. They are wrong.
    • Existing management frameworks were not built for cloud workloads. It’s extremely difficult to get a holistic view of your infrastructure, from on-premises workloads to cloud-based SaaS applications and deployments on IaaS infrastructure.

    While cloud computing has had great momentum for a few years now and shows no signs of stopping, its transformative power over IT remains incomplete. To complete the cloudification of IT, the above problems need to be solved, which involves rewriting the playbook for enterprise workloads to account for on-premises, hybrid and, yes, public cloud workloads. The entire pathway from dev to ops is currently undergoing the most disruption since the transition from mainframe to client-server. We’re a long way from the days when LAMP was a thing, and software running on bare metal was the only means of deploying applications. Aside from the “L”, the rest of the LAMP stack has been upended, with its replacements still in the formative stages.

    While we may not know precisely what the new stack will be, we can now make some pretty educated guesses:

    • Linux (duh): It’s proved remarkably flexible, regardless of what new workload is introduced. Remember when Andy Tanenbaum tried to argue in 1992 that monolithic kernels couldn’t possibly provide the modularity required for modern operating systems?
    • Docker: The preferred container format for packaging applications. I realize this is now called the Open Container Format, but most people will know it as Docker.
    • Kubernetes: The preferred orchestration framework. There are others in the mix, but Kubernetes seems to have the inside track, although its use certainly doesn’t preclude Mesos, et al. One can see a need for multiple, although Kube seems to be “core”.
    • OpenShift: There’s exactly one open source application management platform for the Docker and Kubernetes universe, and that’s OpenShift. No other full-featured open source PaaS is built on these core building blocks.

    In the interest of marketers everywhere, I give you the “LDKO” or “El-deko” stack. You’re welcome.
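    To make the “D” layer of the stack concrete, here is a minimal sketch of a Dockerfile that packages a trivial web server into a container image. The base image, file paths, and image contents are illustrative only, not a recommendation:

    ```dockerfile
    # Illustrative Dockerfile: package a static site behind Apache httpd.
    FROM centos:7

    # Install httpd and trim the package cache to keep the image small.
    RUN yum install -y httpd && yum clean all

    # Ship the application content into the image.
    COPY index.html /var/www/html/

    EXPOSE 80

    # Run httpd in the foreground so the container stays alive.
    CMD ["/usr/sbin/httpd", "-DFOREGROUND"]
    ```

    Built with `docker build -t myapp .`, the resulting image can run anywhere the “L” layer – Linux plus a container runtime – is present.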

    Why This is a Thing

    The drive to efficiency has meant extending the life of existing architecture, while spinning up new components that can work with, rather than against, current infrastructure. After it became apparent to the vast majority of IT pros that applications would need to straddle the on-premises and public cloud worlds, the search was on for the best way to do this.

    Everyone has AWS instances; everyone has groups of virtual machines; and everyone has bare metal systems in multiple locations. How do we create applications that can run on the maximum number of platforms, thus giving devops folks the most choices in where and how to deploy infrastructure at scale? And how do we make it easy for developers to package and ship applications to run on said infrastructure?

    At Red Hat, we embraced both Docker and Kubernetes early on, because we recognized their ability to deliver value in a number of contexts, regardless of platform. By collaborating with their respective upstream communities, and then rewriting OpenShift to take advantage of them, we were able to create a streamlined process that allowed both dev and ops to focus on their core strengths and deliver value at a higher level than ever before. The ability to build, package, distribute, deploy, and manage applications at scale has been the goal from the beginning, and with these budding technologies, we can now do it more efficiently than ever before.
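    As a rough sketch of what handing work to the orchestrator looks like, this is approximately a minimal pod definition in the Kubernetes 1.0 (`v1`) API; the image name is hypothetical:

    ```yaml
    # Minimal illustrative Kubernetes v1 pod manifest.
    apiVersion: v1
    kind: Pod
    metadata:
      name: myapp
      labels:
        app: myapp
    spec:
      containers:
        - name: myapp
          # Hypothetical image built and pushed earlier in the pipeline.
          image: registry.example.com/myapp:1.0
          ports:
            - containerPort: 8080
    ```

    Submitted with `kubectl create -f pod.yaml`, this delegates scheduling and lifecycle management to the orchestrator rather than to hand-rolled scripts – exactly the dev-to-ops handoff described above.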

    Atomic: Container Infrastructure for the DevOps Pro

    In the interests of utilizing the building blocks above, it was clear that we needed to retool our core platform to be “container-ready,” hence Project Atomic and its associated technologies:

    • Atomic Host: The core platform or “host” for containers and container orchestration. We needed a stripped-down version of our Linux distributions to support lightweight container management. You can now use RHEL, CentOS, and Fedora versions of Atomic Host images to provide your container environment. The immutability of Atomic Host and its atomic update feature provides a secure environment to run container-based workloads.
    • Atomic CLI: This enables users to quickly perform administrative functions on Atomic Host, including installing and running containers as well as performing an Atomic Host update.
    • Atomic App: Our implementation of the Nulecule application specification, allowing developers to define and package an application and operations to then deploy and manage that application. This gives enterprises the advantage of a seamless, iterative methodology to complete their application development pipeline. Atomic App supports OpenShift, Kubernetes, and Just Plain Docker as orchestration targets out of the box with the ability to easily add more.
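    For a sense of what a Nulecule application definition looks like, here is a skeletal sketch. The field layout follows the Nulecule examples of the time, but the ids, parameters, and artifact paths are purely illustrative:

    ```yaml
    # Skeletal, illustrative Nulecule file.
    ---
    specversion: 0.0.2
    id: myapp-atomicapp

    metadata:
      name: My App

    graph:
      - name: myapp
        params:
          - name: image
            default: registry.example.com/myapp:1.0
        artifacts:
          # Same application, multiple orchestration targets.
          kubernetes:
            - file://artifacts/kubernetes/myapp-pod.yaml
          docker:
            - file://artifacts/docker/myapp-run
    ```

    The per-provider `artifacts` section is what lets Atomic App deploy one application definition to OpenShift, Kubernetes, or Just Plain Docker.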

    Putting It All Together

    As demonstrated in the graphic below, the emerging stack is very different from your parents’ Linux. It takes best-of-breed open source technologies and pieces them together into a cloud-native fabric worthy of the DevOps moniker.

    El-Deko in All Its Glory

    [Image: the El-Deko stack]

    With our collaboration in the Docker and Kubernetes communities, as well as our rebuild of OpenShift and the introduction of Project Atomic, we are creating a highly efficient dev to ops pipeline that enterprises can use to deliver more value to their respective businesses. It also gives enterprises more choice:

    • Want to use your own orchestration framework? You can add that parameter to your Nulecule app definition and dependency graph.
    • Want to use another container format? Add it to your Nulecule file.
    • Want to package an application so that it can run on Atomic Host, Just Plain Docker, or OpenShift? Package it with Atomic App.
    • Want an application management platform that utilizes all this cool stuff and doesn’t force you to manage every detail? OpenShift is perfect for that.
    • Need to manage and automate your container infrastructure side-by-side with the rest of your infrastructure? ManageIQ is emerging as an ideal open source management platform for containers – in addition to your cloud and virtualization technologies.

    As our container story evolves, we’re creating a set of technologies useful to every enterprise in the world, whether developer or operations-centric (or both). The IT world is changing quickly, but we’re pulling it together in a way that works for you.

    Where to Learn More

    There are myriad ways to learn more about the tools mentioned above:

    • projectatomic.io – All the Atomic stuff, in one place
    • openshift.org – Learn about the technology that powers the next version of OpenShift.com and download OpenShift Origin
    • manageiq.org – ManageIQ now includes container management, especially for Kubernetes as well as OpenShift users

    We will also be presenting talks at many upcoming events that you will want to take advantage of.

  • Survivor’s Guilt

    John Goebel

    Now that I’ve had two gastro tests come back with negative results, I feel safe in saying that I don’t have any serious gastrointestinal disease or cancer. I feel a combination of relief and a tad of survivor’s guilt. In addition to my brother, I’ve had other friends and family succumb to gastric and colon cancers over the years.

    It all seems like such a crap shoot – some of us “win” the genetic lottery of cancer mutations, and some of us survive with decent health – for now, at least. It raises the question: why do some of us stay in good health while others have the incredible bad luck, through no fault of their own, of getting a terminal illness? In these past few months since James was diagnosed with gastric cancer, I have often wondered what I have done to deserve my (thus far) decent state of health. The reality is that I’ve done nothing – I don’t regularly exercise, and I don’t pay much attention to what or how much I eat. It all feels grossly unfair.

    I’ll never forget when John Goebel told me he had been diagnosed with colon cancer. It blew my mind. Here was this 38-year-old who was the epitome of good health: ate right, exercised, and looked great. He looked 10 years younger than his age. It seemed like such a cruel joke that he would be the one to leave behind his family while many of us with poor lifestyle habits have the luxury of seeing our children grow up.

    Then again, I could be diagnosed tomorrow with some terminal illness or die in a horrible accident, rendering this post entirely moot. If the last 6 months have taught me anything, it’s that these things can change rather rapidly.

  • Open source more about process than licensing

    It is a testament to the success of the Open Source Initiative’s (OSI) branding campaign for open source software that “open source” and “licensing” are functionally synonymous. To the extent that people are familiar with open source software, it is the source code released under a license that lets anyone see the “crown jewels” of a software program as opposed to an opaque binary, or black box that hides its underpinnings.

    This well-trodden trope has dominated the mainstream view of open source software since Eric Raymond pushed it into the public consciousness over 15 years ago. But taking a previously proprietary code base and transitioning it to an open source project makes one seriously question any previous assumptions about code and licensing. It is that undertaking that leads one to appreciate the values of process and governance. After seeing that transition from closed to open firsthand, I am convinced that the choice of whether to release code as a proprietary or open source project leads to fundamental changes in the end product, a divergence that is very difficult to roll back.

    From the point of view of most people, the software license is the most important aspect of releasing open source software, but in my opinion, licensing ranks somewhere below user experience, workflows, and integration into existing data center technologies. Nowhere is this difference, between what is “known” (licensing) and what is the actual reality (user workflows), clearer than in the fearful eyes of a development team tasked with transforming their proprietary product into an open source project. In fact, the development methodology chosen by the engineers has a direct impact on what type of software is produced. If an open source development model is chosen from the beginning, one can be reasonably sure that the end product will be relatively portable and will plug into the most commonly used environments. If a proprietary model is chosen, it’s very easy for the developers to take cheap shortcuts that result in short-term gain and long-term pain—and that’s precisely what often happens.

    To the extent that people think of these things, the common perception is that this change involves a simple search and replace, maybe the removal of 3rd-party software, uploading to a public repository, and presto! Fork me on GitHub! But nothing could be further from the truth. What most people miss about open source software is that it’s much more about process, control, and administration than software licenses. As I argued in It Was Never About Innovation, the key to the success of open source software is not the desire for innovation but rather the fact that all players in open source ecosystems are on a level playing field. Customers, outside developers, freeloaders—they all have a seat at the table and can exert influence on a project by leveraging the community equity they have built up over time by contributing in various ways. This is in stark contrast to proprietary development models, where developers can essentially do whatever they want as long as they create an end product that meets the expectations of the Product Requirements Document (PRD) supplied by product management.

    This is where the difference between open source and proprietary development comes into stark relief. The open process that accompanies open source development will help to ensure that the software will likely integrate into any given environment and that some bad habits are often avoided. These two things go hand-in-hand. For example, proprietary software development often results in software that is monolithic in nature with a minimum of dependencies on system software and often bundled with its own set of libraries and tools. This gives developers the leeway to do whatever they want, often employing specific versions of libraries, reinventing various wheels, and generally veering far from the path of creating software that works well in a broader context.

    Open source software developers, by contrast, have no such luxury. From day one, their users demand the ultimate in flexibility, integration, and conformance to standard data center systems practices. This means the utilization of existing tools and libraries whenever possible, baking into the process the idea that your software will be a cog in a much larger data center machine. Note that nowhere did I mention that open source development was faster or more innovative, although it can be. On one hand, developers love the fact that they have complete control over the end product and don’t have to deal with annoyances, such as customer demands that their precious software honor their existing workflows. On the other hand, end users love the fact that their open source deployments likely have a long history of use within large data centers and that those previous users made sure the software was to their liking.

    Both of these approaches come at a cost: open source development may actually be slower at particular times in its life-cycle due to some overhead costs that are inherent to the model, and proprietary development, while perhaps faster, sends the developer team down the road of maintenance hell, needing to endlessly maintain the bits of glue that generally come for free in open source development. The overwhelming evidence of late suggests that the open source approach is far more effective in the data center.

    Suppose that your team went down the road of proprietary development but eventually came to the conclusion that they could win over more users with an open source approach—what then? Here lies the conundrum: the process of undoing the proprietary process and imbuing a project with the open source sauce is spectacularly difficult. Many otherwise knowledgeable people in the tech industry have no idea just how much change is involved. Hell, most engineers have no idea what’s actually involved in switching horses midstream. To engage in the process means necessarily losing valuable development time while taking up tasks that developers feel are, frankly, beneath them. To change software from a monolithic, proprietary code base to one that plays well with others is a gargantuan task.

    “But wait!,” I can hear you say. “Can’t they just release whatever they have under an open source license and then take care of the other stuff later?” Sure, they can, but the end result will likely be disappointing at best, and a colossal disaster at worst. For starters, mere mortals won’t be able to even install the software, much less build it from source. There are several tricks developers play to make black box monolithic products work for their end users that make it terrible for open source community-building:

    • Highly customized build environment and tools. This is the #1 reason why the majority of proprietary software cannot simply be set loose as open source: it’s completely unusable to all except the developer team that built it. When developing open source software, there are a few standard ways to build software. All of them are terrible at producing highly optimized executable programs for running at the highest level of efficiency, but they’re great for giving developers a simple, standardized way to build and distribute software. The process of making your proprietary software build with standardized open source build tools is probably non-trivial. Open source projects, by contrast, came out of the crib compiling with GCC.

    • 3rd party libraries, also proprietary, that you do not have permission to include in your open source code. Even if your code can build with GNU autotools and GCC, to use one example, you probably have to rewrite some not-insignificant portion of the code. This takes time and effort away from your developers who will be spending time ripping and replacing many pieces of code and not implementing new features. This varies from project to project, but it afflicts the vast majority of projects going from closed to open.

    • Bad security practices. When developers think nobody else is looking, they do all sorts of crazy things. And as long as features are developed on schedule, nobody bats an eye. It is this primacy of feature development over code quality that can result in some horrendous security holes. Obvious exceptions aside, *cough*heartbleed*cough*, there is lots of evidence that open source software is more secure than its proprietary counterparts.

    • Bad coding practices and magical unicorn libraries. For the same reasons as above, i.e., feature primacy and nobody’s looking, developers tend to work with the latest and greatest from other software packages, especially when it comes to runtime scripting engines, libraries, and tools. They take the code, modify it, and then they have an end product that works. For now. This is great if you’re on a deadline and your code must work by midnight, and it’s approaching 23:30. The problem, however, is that the product will live long after midnight tonight, and you will be responsible for maintaining, updating, and syncing your pristine unicorn library with code that will inevitably diverge from what you modified. This is terrible for everyone, developers and admins included. Imagine the poor sod in operations assigned to installing and maintaining someone’s late-night “innovations”.
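    To illustrate the “standard ways to build software” mentioned in the first bullet above, here is about the smallest possible GNU autotools setup for a C program – unglamorous, but any developer or packager knows exactly what to do with it. Project and file names are illustrative:

    ```
    # configure.ac -- minimal illustrative autoconf input
    AC_INIT([hello], [1.0])
    AM_INIT_AUTOMAKE([foreign])
    AC_PROG_CC
    AC_CONFIG_FILES([Makefile])
    AC_OUTPUT

    # Makefile.am -- one program, one source file
    bin_PROGRAMS = hello
    hello_SOURCES = hello.c
    ```

    With those two files in place, `autoreconf -i && ./configure && make` produces a binary on any reasonably standard system – the opposite of a build that only works inside one team’s bespoke environment.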

    All of the above leads product teams to one obvious conclusion: package and distribute the software in such a way that it runs as far removed as possible from the system on which it resides, usually in the form of a bloated virtual appliance or at least in the form of a self-contained application that relies on the bare minimum of system libraries. Windows admins should take a look at their Program Files directory sometime. Or better yet, don’t. All of this, taken together, adds up to an end product that is extremely difficult to release as open source software.

    Some ops people might think that an appliance is easier for them to deploy and maintain, but more often, they hold their nose in order to use the thing. They will tolerate such an approach if the software actually makes their jobs easier, but they won’t like it. All of the ops people I know, and I used to be one, prefer that the software they deploy conform to their existing processes and workflows, not force them to create new ones.

    Put another way: would your software exist in its current form if it started life as an open source project? Or would end users have demanded a different approach?

    Open source is about process much more than license, and everyone in an open source community has the ability to influence those processes. Projects that start out as open source have many characteristics baked in from the beginning that often, though not always, save developers from their own worst instincts. If you elect to reverse course and move to the open source model, understand what this change entails—it is a minefield, laden with challenges that will be new to your development team, who are unaccustomed to seeing their practices challenged, don’t particularly relish direct customer feedback, and are entirely uncomfortable with the idea of others reading over their shoulder as they write code. The amount of effort to change from proprietary to open source processes is probably on the same order as going from waterfall to agile development.

    Example: ManageIQ

    When Red Hat acquired ManageIQ in late 2012, it was with the understanding that the code would be open sourced—eventually. However, there were several things standing in the way of that:

    1. Many of the User Interface (UI) scripts and libraries were proprietary, 3rd party tools.

    2. The software was distributed as an encrypted virtual machine.

    3. ManageIQ was and is a Rails app, and some of the accompanying Ruby gems were modified from their upstream sources to implement some specific features.

    #1 meant that many parts of the code, particularly in the UI, had to be ripped out and either replaced with an open source library or rewritten. This took quite a bit of time, but was something that had to be done to release the code.

    #2 is not something one can do in an open source project, a realization that struck fear into the hearts of the development team. Some changes to the code were necessary after losing the (false) sense of security that came with distributing the software in an encrypted appliance.

    #3 meant that the developer team had to carry forward its modifications to custom gems, which was becoming a burdensome chore and would only get worse over time. The developer team is still in the process of fixing this, but I’m happy to report that we’ve hired a strong Ruby developer, Aaron Patterson, who will, among other things, maintain the team’s changes to upstream gems and prevent future forks and divergence. He’ll also lead the effort to convert ManageIQ to Ruby on Rails 4.
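    To make the gem problem concrete: carrying modified gems typically means pointing the Gemfile at a fork, which must then be rebased against upstream forever after. The gem name, URL, and branch below are hypothetical:

    ```ruby
    # Gemfile excerpt -- hypothetical fork carrying local patches.
    # Every upstream release now requires rebasing the 'custom-patches' branch.
    gem 'some_gem', git: 'https://github.com/example/some_gem.git',
                    branch: 'custom-patches'
    ```

    Getting those modifications accepted upstream, as described above, is what makes lines like this disappear from the Gemfile.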

    Conclusion

    Be considerate of your developers and the challenges ahead of them. Hopefully they understand that the needed changes will ultimately result in a better end product. It comes at a price but has its own rewards, too. And never forget to remind folks that choosing an open source approach from the beginning would have obviated this pain.
