Tag: community

  • The New Open Source Playbook – Platforms Part Deux

    (This is the 2nd post in a series. Part 1 is here)

    I was all set to make this 2nd post about open core and innovation on the edge, and then I realized that I should probably explore the concept of “lift” in a bit more detail. Specifically, if you’re looking for your platform strategy to give your technology products lift, what does that mean exactly? This goes back to the idea that a rising tide lifts all boats. If you think of a rising tide as a growing community of users or developers, and the boat is your particular software project, then you want a strategy where your project benefits from a larger community. A dynamic, growing community will be able to support several “boats” – products, projects, platforms, et al. A good example of this is the community around Kubernetes, the flagship project of the Cloud Native Computing Foundation (CNCF).

    How Do We Generate Lift?

    There are 2 basic types of lift you will be looking for – user lift, or getting more people to adopt your platform, and developer lift, where more developers are contributing to your platform. The former gets more people familiar with your particular technology, providing the basis for potential future customers, and the latter allows you to reduce your engineering cost and potentially benefit from new ideas that you didn’t think of. This means that the community or ecosystem you align with depends on the goals for your platform. If you want more users, that is a very different community strategy from wanting more collaborators. Many startups conflate these strategies, which means they don’t always get the results they’re looking for.

    Let’s assume that you have a potential platform that is categorized in the same cloud native space as Kubernetes. And let’s assume that you’ve determined that the best strategy to maximize your impact is to open source your platform. Does that mean you should put your project in the CNCF? It depends! Let’s assume that your product will target infosec professionals, and you want to get feedback on usage patterns for common security use cases. In that case, the Kubernetes or CNCF communities may not be the best fit. If you want security professionals getting familiar with and adopting your platform, you may want to consider security-focused communities, such as those that have formed around SBOM, compliance, and scanning projects. Or perhaps you do want to see how devops or cloud computing professionals would use your platform to improve their security risk, in which case Kubernetes or CNCF make sense. Your target audience will determine what community is the best fit.

    Another scenario: let’s assume that your platform is adjacent to Kubernetes and you think it’s a good candidate for collaboration with multiple entities with a vested interest in your project’s success. In that case, you need developers with working knowledge of Kubernetes architecture, and the Kubernetes community is definitely where you want your project to be incubated. It’s not always so straightforward, however. If you’re primarily looking for developers who will extend your platform, making use of your interfaces and APIs, then perhaps it doesn’t matter if they have working knowledge of Kubernetes. Maybe in this case, you would do well to understand developer use cases and which vertical markets or industries your platform appeals to, and then follow a different community trail. Platform-community fit for your developer strategy is a more nuanced decision than product-market fit. The former is much more multi-dimensional than the latter.

    If you have decided that developers are key to your platform strategy, you have to decide what kind of developers you’re looking for: those that will *extend* your platform; those that will contribute to your core platform; or those that will use or embed your platform. That will determine the type of lift you need and what community(ies) to align with.

    One more example: You’re creating a platform that you believe will transform the cybersecurity industry, and you want developers that will use and extend your platform. You may at first be attracted to security-focused communities, but then you discover a curious thing: cybersecurity professionals don’t seem fond of your platform and haven’t adopted it at the scale you expect or need. Does this mean your platform sucks? Not always – it could be that these professionals are highly opinionated and have already made up their minds about desired platforms to base their efforts on. However, it turns out that your platform helps enterprise developers be more secure. Furthermore, you notice that within your enterprise developer community, there is overlap with the PyTorch community, which is not cybersecurity focused. This could be an opportunity to pivot on your adoption strategy and go where your community is leading: PyTorch. Perhaps that is a more ideal destination for community alignment purposes. You can, of course, do some testing within the PyTorch community before making a final decision.

    Learn From My Example: Hyperic

    Hyperic was a systems management monitoring tool. These days we would put it in the “observability” category, but that term didn’t exist at the time (2006). The Hyperic platform was great for monitoring Java applications. It was open core, so we focused on adoption by enterprise developers and not contributions. We thought we had a great execution strategy to build a global user base that would use Hyperic as the basis for all of their general purpose application monitoring needs. From a community strategy perspective, we wanted Hyperic to be ubiquitous, used in every data center where applications were deployed and managed. We had a great tag line, too: “All Systems Go”. But there was a problem: although Hyperic could be used to monitor any compute instance, it really shined when used with Java applications. Focusing on general systems management put us in the same bucket, product-wise, as other general use systems management tools, none of which were able to differentiate themselves from one another. If we had decided to place more of our community focus on Java developers, we could have ignored all of the general purpose monitoring and focused on delivering great value for our core audience: Java development communities. Our platform-community fit wasn’t aligned properly, and as a result, we did not get the lift we were expecting. This meant that our sales team had to work harder to find opportunities, which put a drag on our revenue and overall momentum. Lesson learned…

    If you’re attempting a platform execution strategy and going the open source route, platform-community fit is paramount. Without it, you won’t get the lift you’re expecting. You can always change up your community alignment strategy later, but it’s obviously better if you get it right the first time.

  • Moving on From Gluster

    All good things must come to an end. I can say with no equivocation that the last three years have been more rewarding, from a work perspective, than any other job I’ve ever had. When I accepted this challenge in May 2011, I had no idea that the project and community would blossom as they have. I had no idea how many great people were already in place to push this project to the forefront of open source development. I had no idea how many great partners we would find who share our vision for open source storage. I also, of course, didn’t know that Gluster, Inc. would be acquired within months of my arrival, which drastically increased the velocity of the project and community. I didn’t know any of that – what I did know was that there was a pretty cool project called GlusterFS and it seemed like the way forward for storage.

    After we were acquired, we knew there would be a bit of angst from the community about whether we would still actively support other distributions outside of the Red Hat arena. I’m proud to say that we have done that, with active contributions from various community members for Ubuntu, Debian, NetBSD and OpenSUSE builds. We always strove to make gluster.org a truly open community and, in some respects, “bigger than Red Hat.”

    Along the way, we created a board consisting of active community members and organizations. We made the project more transparent and active than ever. We greatly increased the degree to which the community is a collaborative experience that extends beyond the immediate development team. And we greatly increased the reach and scope of the open source storage ecosystem. I can look back and feel privileged to have worked with such amazing visionaries, developers and community evangelists.

    Now it’s time to turn the Gluster community over to someone who can build on what we’ve done and take it even further. I’m staying at Red Hat but moving on to other projects and communities. The ideal candidate should know their way around open source projects and communities, should have an unyielding desire to push things beyond the status quo, should know a thing or two about business strategy, and should understand how to identify which organizations should invest in a community and sell them on the vision. As I’ve mentioned before, today’s community leaders are the equivalent of startup executives, having to mesh together product management and marketing, business development and strategy, sales and messaging into a cohesive whole.

    Are you the next Gluster Community Leader? Drop me a line on IRC – I’m “johnmark” on the Freenode network.

  • Citrix and Harvard FASRC Join Gluster Community; Board Expands

    Citrix, Harvard University FASRC and long-time contributors join the Gluster Community Board to drive the direction of open software-defined storage

    February 5, 2014 – The Gluster Community, the leading community for open software-defined storage, announced today that two new organizations have signed letters of intent to join: Citrix, Inc. and Harvard University’s Faculty of Arts and Sciences Research Computing (FASRC) group. This marks the third major expansion of the Gluster Community in governance and projects since mid-2013. Monthly downloads of GlusterFS have tripled since the beginning of 2013, and traffic to gluster.org has increased by over 50% over the previous year. There are now 45 projects on the Gluster Forge and more than 200 developers, with integrations either completed or in the works for Swift, CloudStack, OpenStack Cinder, Ganeti, Archipelago, Xen, QEMU/KVM, Ganesha, the Java platform, and Samba, with more to come in 2014.

    Citrix and FASRC will be represented by Mark Hinkle, Senior Director of Open Source Solutions, and James Cuff, Assistant Dean for Research Computing, respectively, joining two individual contributors: Anand Avati, Lead GlusterFS Architect, and Theron Conrey, a contributing speaker, blogger and leading advocate for converged infrastructure. Rounding out the Gluster Community Board are Xavier Hernandez (DataLab), Marc Holmes (Hortonworks), Vin Sharma (Intel), Jim Zemlin (The Linux Foundation), Keisuke Takahashi (NTTPC), Lance Albertson (The Open Source Lab at Oregon State University), John Mark Walker (Red Hat), Louis Zuckerman, Joe Julian, and David Nalley.

    Citrix

    Citrix has become a major innovator in the cloud and virtualization markets. They will drive ongoing efforts to integrate GlusterFS with CloudStack (https://forge.gluster.org/cloudstack-gluster) and the Xen hypervisor. Citrix is also sponsoring Gluster Community events, including a Gluster Cloud Night at their facility in Santa Clara, California on March 18.

    Harvard FASRC

    The research computing group at Harvard has one of the largest known deployments of GlusterFS in the world, pushing GlusterFS beyond previously established limits. Their involvement in testing and development has been invaluable for advancing the usability and stability of GlusterFS.

    Anand Avati

    Anand Avati was employee number 3 at Gluster, Inc. in 2007 and has been the most prolific contributor to the GlusterFS code base as well as its most significant architect over the years. He is primarily responsible for setting the roadmap for the GlusterFS project. Avati is employed by Red Hat but is an individual contributor for the board.

    Theron Conrey

    Theron became involved in the Gluster community when he started experimenting with the integration between oVirt (http://ovirt.org/) and GlusterFS. Long a proponent of converged infrastructure, Theron brings years of expertise from his stints at VMware and Nexenta.

    Supporting Quotes

    John Mark Walker, Gluster Community Leader, Red Hat

    “The additions of Citrix and Harvard FASRC to the Gluster Community show that we continue to build momentum in the software-defined storage space. With the continuing integration with all cloud and big data technologies, including the Xen Hypervisor and CloudStack, we are building the default platform for modern data workloads.”

    Mark Hinkle, Senior Director, Open Source Solutions, Citrix

    “We see an ever-increasing hunger for storage solutions that have design points that mirror those in our open source and enterprise cloud computing efforts. Our goal is to enable many kinds of storage with varying levels of utility, and we see GlusterFS as helping to pioneer new advances in this area. As an active participant in the open source community, we want to make sure projects that we sponsor, like Apache CloudStack and the Linux Foundation’s Xen Project, are enabled to collaborate with such technologies to best serve our users.”

    James Cuff, Assistant Dean for Research Computing, Harvard University

    “As long-term advocates of both open source and open science initiatives at scale, Research Computing is particularly excited to participate on the Gluster Community Governing Board. We really look forward to further accelerating science and discovery through this important and vibrant community collaboration.”


    ***The OpenStack mark is either a registered trademark/service mark or trademark/service mark of the OpenStack Foundation, in the United States and other countries, and is used with the OpenStack Foundation’s permission. We are not affiliated with, endorsed or sponsored by the OpenStack Foundation, or the OpenStack community.

    ***Gluster and GlusterFS are trademarks of Red Hat, Inc.

    ***Xen and Linux are trademarks of The Linux Foundation

    ***Apache CloudStack is a trademark of the Apache Software Foundation

  • Gluster Cloud Night Amsterdam

    Join us on March 4 for the Gluster Community seminar and learn how to improve your storage.
    This half day seminar brings you in-depth presentations, use cases, demos and developer content presented by Gluster Community experts.

    REGISTRATION

    Register today for this free half-day seminar and reserve your seat, since spaces are limited. Click here to register.

    We look forward to meeting you on March 4th!

    AGENDA

    13:30 – 13:45 Registration
    13:45 – 14:15 The State of the Gluster Community
    14:15 – 15:30 GlusterFS for SysAdmins, Niels de Vos, Red Hat
    15:30 – 15:45 Break
    15:45 – 16:30 Adventures in Cloud Storage with OpenStack and GlusterFS
    Tycho Klitsee, Technical Consultant and Co-owner, Kratz Business Solutions
    16:30 – 17:15 Gluster Forge Demos, Fred van Zwieten, Technical Engineer, VX Company and Marcel Hergaarden, Red Hat
    17:15 – 18:00 Networking Drinks

  • Gluster Spotlight on James Shubin: Puppet-Gluster, Vagrant and GlusterFS Automation

    [youtube http://www.youtube.com/watch?v=SV1_RpZssGk&w=560&h=315]

    ***UPDATE: Due to weather-related flight cancellations and rebooking, we had to push this back to Thursday, January 23, at noon PST/3pm EST/20:00 GMT***

    James Shubin is known in the Gluster community for his work on the Puppet-Gluster module.

    Recently, he’s begun mixing powerful cocktails of Puppet and Vagrant to create recipes for automated Gluster deployments. See, e.g.:

    Building Base Images for Vagrant with a Makefile

    and

    Testing GlusterFS During GlusterFest

    This will be quite a fun spotlight, and very much worth your while. As usual, join the #gluster-meeting channel on the Freenode IRC network to participate in the live Q&A.

    About Gluster Spotlight

    Gluster Spotlight is a weekly Q&A show featuring the most exciting movers and shakers in the Gluster Community. If you don’t catch them live, you can always watch the recordings later.

  • GlusterFest Weekend is Here – Jan 17 – 20

    As I mentioned yesterday, the GlusterFest is nigh. This time, we’ll break out testing into two types:
    • Performance testing
    • Feature testing
    To learn about the GlusterFest and what it is, visit the GlusterFest home at gluster.org/gfest
    Remember that if you file a bug that is verified by the Gluster QE team, you’ll win a t-shirt plus other swag.

    PERFORMANCE TESTING

    We are lucky in that two individuals have stepped up with tools to help with performance testing. One is James Shubin with his Puppet-Gluster module:
    https://forge.gluster.org/puppet-gluster/

    Together with his blog posts on puppet-gluster + vagrant, you should have an easy way to deploy GlusterFS:
    Automatically deploying GlusterFS with Vagrant and Puppet

    Also, Ben England recently released some code for his Smallfile performance testing project, which targets metadata-intensive workloads:

    http://forge.gluster.org/smallfile-performance-testing

    He also wrote up a nice primer on performance testing on the Gluster.org wiki that discusses iozone, smallfile, and how to utilize performance testing in general:
    http://www.gluster.org/community/documentation/index.php/Performance_Testing
    Please follow the instructions on the GlusterFest page (gluster.org/gfest) and report your results there. Some of the test results are quite large, so you will want to post them on a separate page, either on the Gluster.org wiki or on the paste site of your choosing, such as fpaste.org.
    Please file any bugs, report them on the gluster-devel list, and provide links on the GlusterFest page.
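
    If you’d like to script your smallfile runs so the output is easy to paste into a results page, here is a minimal sketch of one way to do it. It is an illustration only, not part of the official GlusterFest tooling: it assumes smallfile_cli.py sits in your current directory, that a GlusterFS volume is already mounted at /mnt/glusterfs, and that the flags shown are the commonly used ones – check the smallfile README for the exact options in your checkout.

        # Minimal sketch (not the official GlusterFest harness): run a smallfile
        # workload against a mounted GlusterFS volume and capture the output so it
        # can be pasted onto a results page. Paths, defaults, and flags below are
        # assumptions - adjust them to match your environment and the smallfile README.
        import subprocess

        def run_smallfile(top="/mnt/glusterfs/smallfile-test", threads=4,
                          files=1000, operation="create"):
            cmd = [
                "python", "smallfile_cli.py",
                "--top", top,               # directory on the GlusterFS mount to exercise
                "--threads", str(threads),  # number of workload-generating threads
                "--files", str(files),      # files per thread
                "--operation", operation,   # e.g. create, read, append, delete
            ]
            # check_output raises CalledProcessError if the run fails
            return subprocess.check_output(cmd).decode()

        if __name__ == "__main__":
            # Copy this output onto your GlusterFest results page or fpaste.org
            print(run_smallfile())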

    FEATURE TESTING

    In addition to performance, we have new features in 3.5 which need some further testing. Please follow the instructions on the GlusterFest page and add your results there. Some of the developers were kind enough to include testing scenarios with their feature pages. If you want your feature to be tested but didn’t supply any testing information, please add that now.

    The GlusterFest begins at 00:00 GMT/UTC (today, January 17) and ends at 23:59 GMT/UTC on Monday, January 20.
    Rev your engines and get ready for some testing!

  • Gluster Hangout with Daniel Mons from Cutting Edge

    [youtube http://www.youtube.com/watch?v=Ep4C2XWsG8o&w=560&h=315]

    Dan Mons came across GlusterFS at his job with Cutting Edge, a VFX company. He needed lots of storage space that was available to many different users – and he needed it to be able to expand on demand. That it was free and ran on commodity systems was a big plus.

    Come join us as we learn from Dan and pepper him with lots of questions. We’ll be at a special time this week because Dan is in Oz – 5pm Pacific US/8pm Eastern US/01:00 GMT

    Follow along on YouTube in the video above and ask questions in #gluster-meeting on the Freenode IRC network (irc.freenode.net or irc.gnu.org, among others).

  • Hangout with Semiosis (Louis Z) Today – Gluster on AWS, Java Filesystem and more

    In about 90 minutes, Louis Zuckerman and I will be “hanging out” and talking about how he came to deploy GlusterFS on AWS, and why he’s developing a Java Filesystem integration with GlusterFS. I’ll post the embedded YouTube link here when we’re about to go live. Hangout starts at 11am EST, 8am PST, 16:00 GMT – follow along on YouTube and ask questions in #gluster-meeting on IRC.gnu.org.


    [youtube http://www.youtube.com/watch?v=usoY_FPc2EY&w=560&h=315]

  • The Tyranny of the Clouds

    Or “How I learned to start worrying and never trust the cloud.”

    The Clouderati have been derping for some time now about how we’re all going towards the public cloud and “private cloud” will soon become a distant, painful memory, much like electric generators filled the gap before power grids became the norm. They seem far too glib about that prospect, and frankly, they should know better. When the Clouderati see the inevitability of the public cloud, their minds turn to the unicorns and rainbows that are sure to follow. When I think of the inevitability of the public cloud, my mind strays to “The Empire Strikes Back” and who’s going to end up as Han Solo. When the Clouderati extol the virtues of public cloud providers, they prove to be very useful idiots advancing service providers’ aims, sort of the Lando Calrissians of the cloud wars. I, on the other hand, see an empire striking back at end users and developers, taking away our hard-fought gains made from the proliferation of free/open source software. That “the empire” is doing this *with* free/open source software just makes it all the more painful an irony to bear.

    I wrote previously that It Was Never About Innovation, and that article was set up to lead to this one, which is all about the cloud. I can still recall talking to Nicholas Carr about his new book at the time, “The Big Switch”, all about how we were heading towards a future of utility computing, and what that would portend. Nicholas saw the same trends the Clouderati did, except a few years earlier, and came away with a much different impression. Where the Clouderati are bowled over by Technology! and Innovation!, Nicholas saw a harbinger of potential harm and warned of a potential economic calamity as a result. While I also see a potential calamity, it has less to do with economic stagnation and more to do with the loss of both freedom and equality.

    The virtuous cycle I mentioned in the previous article does not exist when it comes to abstracting software over a network, into the cloud, and away from the end user and developer. In the world of cloud computing, there is no level playing field – at least, not at the moment. Customers are at the mercy of service providers and operators, and there are no “four freedoms” to fall back on.

    When several of us co-founded the Open Cloud Initiative (OCI), it was with the intent, as Simon Phipps so eloquently put it, of projecting the four freedoms onto the cloud. There have been attempts to mandate additional terms in licensing that would force service providers to participate in a level playing field. See, for example, the great debates over “closing the web services loophole” as we called it then, during the process to create the successor to the GNU General Public License version 2. Unfortunately, while we didn’t yet realize it, we didn’t have the same leverage as we had when software was something that you installed and maintained on a local machine.

    The Way to the Open Cloud

    Many “open cloud” efforts have come and gone over the years, none of them leading to anything of substance or gaining traction where it matters. Bradley Kuhn helped drive the creation of the Affero GPL version 3, which set out to define what software distribution and conveyance mean in a web-driven world, but the rest of the world has been slow to adopt it because, again, service providers have no economic incentive to do so. Where we find ourselves today is a world without a level playing field, which will, in my opinion, stifle creativity and, yes, innovation. It is this desire for “innovation” that drives the service providers to behave as they do, although as you might surmise, I do not think that word means what they think it means. As in many things, service providers want to be the arbiters of said innovation without letting those dreaded freeloaders have much of a say. Worse yet, they create services that push freeloaders into becoming part of the product – not a participant in the process that drives product direction. (I know, I know: yes, users can get together and complain or file bugs, but they cannot mandate anything over the providers.)

    Most surprising is that the closed cloud is aided and abetted by well-intentioned, but ultimately harmful actors. If you listen to the Clouderati, public cloud providers are the wonderful innovators in the space, along with heaping helpings of concern trolling over OpenStack’s future prospects. And when customers lose because a cloud company shuts its doors, the Clouderati can’t be bothered to bring themselves to care: c’est la vie and let them eat cake. The problem is that too many of the Clouderati think that Innovation! is an end in itself, without thinking of ground rules or a “bill of rights” for the cloud. Innovation! and Technology! must rule all, and therefore the most innovative take all, and anything else is counter-productive or hindering the “free market”. This is what happens when the libertarian-minded carry prejudiced notions of what enabled open source success without understanding what made it possible: the establishment and codification of rights and freedoms. None of the Clouderati are evil, freedom-stealing, or greedy, per se, but their actions serve to enable those who are. Because they think solely in terms of Innovation! and Technology!, they set the stage for some companies to dominate the cloud space without any regard for establishing a level playing field.

    Let us enumerate the essential items for open innovation:

    1. Set of ground rules by which everyone must abide, e.g. the four freedoms
    2. Level playing field where every participant is a stakeholder in a collaborative effort
    3. Economic incentives for participation

    These will be vigorously opposed by those who argue that establishing such a list is too restrictive for innovation to happen, because… free market! The irony is that establishing such rules enabled Open Source communities to become the engine that runs the world’s economy. Let us take each and discuss its role in creating the open cloud.

    Ground Rules

    We have already established the irony that the four freedoms led to the creation of software that was used as the infrastructure for creating proprietary cloud services. What if the four freedoms were tweaked for cloud services? As a reminder, here are the four freedoms:

    • The freedom to run the program, for any purpose (freedom 0).
    • The freedom to study how the program works, and change it so it does your computing as you wish (freedom 1).
    • The freedom to redistribute copies so you can help your neighbor (freedom 2).
    • The freedom to distribute copies of your modified versions to others (freedom 3).

    If we rewrote this to apply to cloud services, how much would need to change? I made an attempt at this, and it turns out that only a couple of words need to change:

    • The freedom to run the program or service, for any purpose (freedom 0).
    • The freedom to study how the service works, and change it so it does your computing as you wish (freedom 1).
    • The freedom to implement and redistribute copies so you can help your neighbor (freedom 2).
    • The freedom to implement your modified versions for others (freedom 3).

    Freedom 0 adds “or service” to denote that we’re not just talking about a single program, but a set of programs that act in concert to deliver a service.

    Freedom 1 allows end users and developers to peek under the hood.

    Freedom 2 adds “implement and” to remind us that the software alone is not much use – the data forms a crucial part of any service.

    Freedom 3 also changes “distribute copies of” to “implement” because of the fundamental role that data plays in any service. Distributing copies of software in this case doesn’t help anyone without also adding the capability of implementing the modified service, data and all.

    Establishing these rules will be met, of course, with howls of rancor from the established players in the market, as it should be.

    Level Playing Field

    With the establishment of the service-oriented freedoms above, we have the foundation for a level playing field with actors from all sides having a stake in each other’s success. Each of the enumerated freedoms serves to establish a managed ecosystem, rather than a winner-take-all pillage and plunder system. This will be countered by the argument that if we hinder the development of innovative companies, won’t we a) hinder economic growth in general and b) socialism!

    In the first case, there is a very real threat from a winner-take-all system. In its formative stages, when everyone has the economic incentive to innovate (there’s that word again!), everyone wins. Companies create and disrupt each other, and everyone else wins by utilizing the creations of those companies. But there’s a well-known consequence of this activity: each actor will try to build in the ability to retain customers at all costs. We have seen this happen in many markets, such as the creation of proprietary, undocumented data formats in the office productivity market. And we have seen it in the cloud, with the creation of proprietary APIs that lock in customers to a particular service offering. This, too, chokes off economic development and, eventually, innovation.

    At first, this lock-in happens via the creation of new products and services which usually offer new features that enable customers to be more productive and agile. Over time, however, once the lock-in is established, customers find that their long-term margins are not in their favor, and moving to another platform proves too costly and time-consuming. If all vendors are equal, this may not be so bad, because vendors have an incentive to lure customers away from their existing providers, and the market becomes populated by vendors competing for customers, acting in their interest. Allow one vendor to establish a larger share than others, and this model breaks down. In a monopoly situation, the incumbent vendor has many levers to lock in their customers, making the transition cost too high to switch to another provider. In cloud computing, this winner-take-all effect is magnified by the massive economies of scale enjoyed by the incumbent providers. Thus, the customer is unable to be as innovative as they could be due to their vendor’s lock-in schemes.

    If you believe in unfettered Innovation! at all costs, then you must also understand the very real economic consequences of vendor lock-in. By creating a level playing field through the establishment of ground rules that ensure freedom, a sustainable and innovative market is at least feasible. Without that, an unfettered winner-take-all approach will invariably result in the loss of freedom and, consequently, agility and innovation.

    Economic Incentives

    This is the hard one. We have already established that open source ecosystems work because all actors have an incentive to participate, but we have not established whether the same incentives apply here. In the open source software world, developers participate because they have to: the price of software is always dropping, and customers enjoy open source software too much to give it up for anything else. One thing that may be in our favor is the distinct lack of profits in the cloud computing space, although that changes once you include services built on cloud computing architectures.

    If we focus on infrastructure as a service (IaaS) and platform as a service (PaaS), the primary gateways to creating cloud-based services, then the margins and profits are quite low. This market is, by its nature, open to competition because the race is on to lure as many developers and customers as possible to the respective platform offerings. The danger, however, is that one particular service provider becomes able to offer proprietary services that give it leverage over the others, establishing the lock-in levers needed to pound the competition into oblivion.

    In contrast to basic infrastructure, the profit margins of proprietary products built on top of cloud infrastructure have been growing for some time, which incentivizes the IaaS and PaaS vendors to keep stacking proprietary services on top of their basic infrastructure. This results in a situation where increasing numbers of people and businesses have happily donated their most important business processes and workflows to these service providers. If any of them grow unhappy with the service, they cannot easily switch, because no competitor would have access to the same data or implementation of that service. In this case, not only is there a high cost associated with moving to another service, there is the distinct loss of utility (and revenue) that the customer would experience. There is a cost that comes from entrusting so much of your business to single points of failure with no known mechanism for migrating to a competitor.

    In this model, there is no incentive for service providers to voluntarily open up their data or services to other service providers. There is, however, an incentive for competing service providers to be more open with their products. One possible solution could be to create an Open Cloud certification that would allow services that abide by the four freedoms in the cloud to differentiate themselves from the rest of the pack. If enough service providers signed on, it would lead to a network effect adding pressure to those providers who don’t abide by the four freedoms. This is similar to the model established by the Free Software Foundation and, although the GNU people would be loath to admit it, the Open Source Initiative. The OCI’s goal was to ultimately create this, but we have not yet been able to follow through on those efforts.

    Conclusion

    We have a pretty good idea why open source succeeded, but we don’t know if the open cloud will follow the same path. At the moment, end users and developers have little leverage in this game. One possibility would be if end users chose, at massive scale, to use services that adhered to open cloud principles, but we are a long way away from this reality. Ultimately, in order for the open cloud to succeed, there must be economic incentives for all parties involved. Perhaps pricing demands will drive some of the lower rung service providers to adopt more open policies. Perhaps end users will flock to those service providers, starting a new virtuous cycle. We don’t yet know. What we do know is that attempts to create Innovation! will undoubtedly lead to a stacked deck and a lack of leverage for those who rely on these services.

    If we are to resolve this problem, it can’t be about innovation for innovation’s sake – it must be, once again, about freedom.


  • Do Open Source Communities Have a Social Responsibility?

    This post continues my holiday detour into things not necessarily tech related. Forgive me this indulgence – there is at least one more post I’ll make in a similar vein.

    Open Source communities are different. At least, I’ve always felt that they are. Think of the term “community manager.” If you’re a community manager in an open source community, your responsibilities include, but are not limited to: product management, project management, enabling your employer’s competition, enabling people’s success without their paying you, marketing strategy and vision, product strategy and vision, people management (aka cat-herding), event management, and even, sometimes, basic IT administration and web development. If you ask a community manager in some other industry, they do anywhere from half of those things to, at most, three-quarters. But even the most capable community manager in a non-open source field will not do at least two of the things mentioned: enabling your competitors and enabling “freeloaders”. (Before anyone says anything – no, enabling non-paying contributors to upload free content that your employer uses to rake in ad revenue doesn’t count for the latter. That’s called tricking people into contributing free labor to a product you sell.)

    So it would seem that Open Source community management is a different beast, a much more comprehensive set of duties and, dare I say it, a proving ground for executive leadership. There are other differences, too, that make the scope of open source communities broader and more expansive. Beginning with the GNU project and the Free Software Foundation, the roots of open source are enmeshed with social responsibility, but do modern open source communities continue to carry that flame?

    One of the things that attracted me to open source communities in the beginning was the sense that by participating in them, I was making the world a better place. And by that, I don’t mean in the Steve Jobs sense, where “making the world a better place” means “anything that fattens my wallet and strips people of their information rights.” I mean actually creating something that adds value to others without expecting any form of monetary remuneration. Others have called this a “gift economy” but I’m not sure that’s exactly correct. I mean, I’ve always been paid for my open source work, which is different from other social advocates who literally make nothing for their efforts. Regardless, there’s a sense that I’m enabling a better world while also drawing a nice paycheck, which certainly beats making the world crappier while drawing an even bigger paycheck.

    Anyway, throughout my open source community career, I’ve seen all sorts of social causes at work: bridging the digital divide, defining information rights and, more recently, gender and ethnic equality in technology. Because of our social activism roots, the question becomes: how much responsibility do we have as open source advocates to carry the torch for related causes? Take the Ada Initiative, for example. Does it not behoove us to do our part for gender equality in high tech? How many open source conferences have you been to that were >90% male? Does saying that “well, the code is open, so anyone can participate” really cut it? If we’re really going to address the problem of the digital divide, does it not make sense to more aggressively recruit women and under-represented minorities into the fold?

    If we really want to rid the world of proprietary software, I don’t see how we can do that without adding in people who currently do not actively participate in open source communities. There’s also been a disturbing trend whereby the more commercial communities have begun to separate themselves from the communities with more social activism roots, dividing the hippies from the money-makers. As I noted in my previous post, the hippies were right the whole time about the four freedoms, so perhaps we should listen to them more closely on these other issues? Think about it – if we more aggressively recruit from under-represented portions of society, would that not add a much-needed influx of talent and ambition? Would that not, then, make our communities that much more dynamic and productive? I’ve always held that economics has a long-term liberal bias, and I think this is an opportunity to put that maxim to the test.

    This holiday season, let’s think about the social responsibility of open source communities and their participants. Let’s think about ways we can bring the under-represented into the fold.