Category: Syndicated

  • El-Deko – Why Containers Are Worth the Hype

    [youtube http://www.youtube.com/watch?v=vqtnG1TBdxM&w=560&h=315]

    Video above from Kubernetes 1.0 Launch event at OSCON

    In the video above, I attempted to put Red Hat’s container efforts into context, especially with respect to our history of Linux platform development. Having now watched it back (they forced me to watch!), I thought it would be good to expand on what I discussed.

    Admit it, you’ve read one of the umpteen million articles breathlessly talking about the new Docker/Kubernetes/Flannel/CoreOS/whatever hotness and thought to yourself, “Wow, is this stuff overhyped.” There is some truth to that knee-jerk reaction, and the buzzworthiness of all things container-related should give one pause – it’s turt^H^H^H^Hcontainers all the way down!

    I myself have often thought how much fun it would be to write the Silicon Valley buzzword-compliant slide deck, with all of the insane things that have passed for “technical content” over the years, from Java to Docker and lots of other nonsense in between. But this blog post is not about how overhyped the container oeuvre is, but rather why it’s getting all the hype and why – and it hurts to write this – the hype is actually deserved.

    IT, from the beginning, has been about doing more, faster. This edict has run the gamut from mainframes and microcomputers to PCs, tablets, and phones. From timeshare computing to client-server to virtualization and cloud computing, the quest for this most nebulous of holy grails, efficiency, has taken many forms over the years, in some cases fruitful and in others, meh.

    More recently, efficiency has taken the form of automation at scale, specifically in the realm of cloud computing and big data technologies. But there has been some difficulty with this transition:

    • The preferred unit of currency for cloud computing, the venerable virtual machine, has proved to be a tad overweight for this transformation.
    • Not all clouds are public clouds. The cloudies want to pretend that everyone wants to move to public cloud infrastructure NowNowNow. They are wrong.
    • Existing management frameworks were not built for cloud workloads. It’s extremely difficult to get a holistic view of your infrastructure, from on-premises workloads to cloud-based SaaS applications and deployments on IaaS infrastructure.

    While cloud computing has had great momentum for a few years now and shows no signs of stopping, its transformative power over IT remains incomplete. To complete the cloudification of IT, the above problems need to be solved, which means rewriting the playbook for enterprise workloads to account for on-premises, hybrid and, yes, public cloud workloads. The entire pathway from dev to ops is currently undergoing its most disruptive shift since the transition from mainframe to client-server. We’re a long way from the days when LAMP was a thing and software running on bare metal was the only means of deploying applications. Aside from the “L”, the rest of the LAMP stack has been upended, with its replacements still in their formative stages.

    While we may not know precisely what the new stack will be, we can now make some pretty educated guesses:

    • Linux (duh): It’s proved remarkably flexible, regardless of what new workload is introduced. Remember when Andy Tanenbaum tried to argue in 1992 that monolithic kernels couldn’t possibly provide the modularity required for modern operating systems?
    • Docker: The preferred container format for packaging applications. I realize this is now called the Open Container Format, but most people will know it as Docker.
    • Kubernetes: The preferred orchestration framework. There are others in the mix, but Kubernetes seems to have the inside track, although its use certainly doesn’t preclude Mesos, et al. One can see a need for multiple frameworks, although Kube seems to be “core”. (A minimal packaging-and-deployment sketch follows below.)
    • OpenShift: There’s exactly one open source application management platform for the Docker and Kubernetes universe, and that’s OpenShift. No other full-featured open source PaaS is built on these core building blocks.

    In the interest of marketers everywhere, I give you the “LDKO” or “El-deko” stack. You’re welcome.
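    To make the El-Deko stack a little more concrete, here is a minimal, hypothetical flow through it: package an application as a Docker image on a Linux host, then hand it to Kubernetes to run. The image name and registry are made up, and the exact kubectl subcommands have shifted between releases, so treat this as a sketch rather than a reference:

        # Build and publish the application as a Docker image (names are hypothetical)
        $ docker build -t registry.example.com/myapp:1.0 .
        $ docker push registry.example.com/myapp:1.0

        # Ask Kubernetes to run it, scale it out, and check on the pods
        $ kubectl run myapp --image=registry.example.com/myapp:1.0
        $ kubectl scale rc myapp --replicas=3
        $ kubectl get pods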

    Why This is a Thing

    The drive to efficiency has meant extending the life of existing architecture, while spinning up new components that can work with, rather than against, current infrastructure. After it became apparent to the vast majority of IT pros that applications would need to straddle the on-premises and public cloud worlds, the search was on for the best way to do this.

    Everyone has AWS instances; everyone has groups of virtual machines; and everyone has bare metal systems in multiple locations. How do we create applications that can run on the maximum number of platforms, thus giving devops folks the most choices in where and how to deploy infrastructure at scale? And how do we make it easy for developers to package and ship applications to run on said infrastructure?

    At Red Hat, we embraced both Docker and Kubernetes early on, because we recognized their ability to deliver value in a number of contexts, regardless of platform. By collaborating with their respective upstream communities, and then rewriting OpenShift to take advantage of them, we were able to create a streamlined process that allowed both dev and ops to focus on their core strengths and deliver value at a higher level than ever before. The ability to build, package, distribute, deploy, and manage applications at scale has been the goal from the beginning, and with these budding technologies, we can now do it more efficiently than ever before.

    Atomic: Container Infrastructure for the DevOps Pro

    In the interests of utilizing the building blocks above, it was clear that we needed to retool our core platform to be “container-ready,” hence Project Atomic and its associated technologies:

    • Atomic Host: The core platform or “host” for containers and container orchestration. We needed a stripped-down version of our Linux distributions to support lightweight container management. You can now use RHEL, CentOS, and Fedora versions of Atomic Host images to provide your container environment. The immutability of Atomic Host and its atomic update feature provide a secure environment for running container-based workloads.
    • Atomic CLI: This enables users to quickly perform administrative functions on Atomic Host, including installing and running containers as well as performing an Atomic Host update (see the sketch after this list).
    • Atomic App: Our implementation of the Nulecule application specification, allowing developers to define and package an application, and operations teams to then deploy and manage it. This gives enterprises the advantage of a seamless, iterative methodology for completing their application development pipeline. Atomic App supports OpenShift, Kubernetes, and Just Plain Docker as orchestration targets out of the box, with the ability to easily add more.
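    As a rough illustration of the Atomic CLI bullet above, a typical session on an Atomic Host might look like the following. The image name is hypothetical, and subcommand syntax has varied a bit between releases, so check the Project Atomic docs for the authoritative form:

        # Pull the host forward to the next image atomically, or roll it back
        $ sudo atomic host upgrade
        $ sudo atomic host rollback

        # Install and run a containerized service using the metadata in its image
        $ sudo atomic install registry.example.com/mydb
        $ sudo atomic run registry.example.com/mydb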

    Putting It All Together

    As shown in the graphic below, the emerging stack is very different from your parents’ Linux. It takes best-of-breed open source technologies and pieces them together into a cloud-native fabric worthy of the DevOps moniker.

    El-Deko in All Its Glory

    [Image: the El-Deko stack diagram]

    With our collaboration in the Docker and Kubernetes communities, as well as our rebuild of OpenShift and the introduction of Project Atomic, we are creating a highly efficient dev to ops pipeline that enterprises can use to deliver more value to their respective businesses. It also gives enterprises more choice:

    • Want to use your own orchestration framework? You can add that parameter to your Nulecule app definition and dependency graph.
    • Want to use another container format? Add it to your Nulecule file.
    • Want to package an application so that it can run on Atomic Host, Just Plain Docker, or OpenShift? Package it with Atomic App (a rough sketch follows this list).
    • Want an application management platform that utilizes all this cool stuff and doesn’t force you to manage every detail? OpenShift is perfect for that.
    • Need to manage and automate your container infrastructure side-by-side with the rest of your infrastructure? ManageIQ is emerging as an ideal open source management platform for containers – in addition to your cloud and virtualization technologies.
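    For a feel of what that choice looks like in practice, here is a heavily hedged sketch of running a Nulecule-packaged application with the Atomic App tooling. The image name is hypothetical, and the answer-file keys shown in the comments are from memory of the Nulecule/Atomic App docs, so verify them against projectatomic.io before relying on them:

        # Run a Nulecule-packaged app; Atomic App reads the app definition
        # from the image and deploys it to the configured provider
        $ atomicapp run registry.example.com/myapp-atomicapp

        # Switching orchestration targets is a matter of answering the app's
        # parameters differently, e.g. with an answers file along the lines of:
        #   [general]
        #   provider = kubernetes    # or openshift, or docker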

    As our container story evolves, we’re creating a set of technologies useful to every enterprise in the world, whether developer or operations-centric (or both). The IT world is changing quickly, but we’re pulling it together in a way that works for you.

    Where to Learn More

    There are myriad ways to learn more about the tools mentioned above:

    • projectatomic.io – All the Atomic stuff, in one place
    • openshift.org – Learn about the technology that powers the next version of OpenShift.com and download OpenShift Origin
    • manageiq.org – ManageIQ now includes container management, especially for Kubernetes as well as OpenShift users

    We will also present talks at many upcoming events that you will want to take advantage of:

  • Open source more about process than licensing

    It is a testament to the success of the Open Source Initiative’s (OSI) branding campaign for open source software that “open source” and “licensing” are functionally synonymous. To the extent that people are familiar with open source software, it is the source code released under a license that lets anyone see the “crown jewels” of a software program as opposed to an opaque binary, or black box that hides its underpinnings.

    This well-trodden trope has dominated the mainstream view of open source software since Eric Raymond pushed it into the public consciousness over 15 years ago. But taking a previously proprietary code base and transitioning it to an open source project makes one seriously question any previous assumptions about code and licensing. It is that undertaking that leads one to appreciate the values of process and governance. After seeing that transition from closed to open firsthand, I am convinced that the choice of whether to release code as a proprietary or open source project leads to fundamental changes in the end product, a divergence that is very difficult to roll back.

    From the point of view of most people, the software license is the most important aspect of releasing open source software, but in my opinion, licensing ranks somewhere below user experience, workflows, and integration into existing data center technologies. Nowhere is this gap between what is “known” (licensing) and what actually matters (user workflows) more clear than in the fearful eyes of the development team tasked with transforming their proprietary product into an open source project. In fact, the development methodology chosen by the engineers has a direct impact on what type of software is produced. If an open source development model is chosen from the beginning, one can be reasonably sure that the end product will be relatively portable and will plug into the most commonly used environments. If a proprietary model is chosen, it’s very easy for the developers to take cheap shortcuts that result in short-term gain and long-term pain—and that’s precisely what often happens.

    To the extent that people think about these things, the common perception is that this change involves a simple search and replace, maybe the removal of 3rd party software, uploading to a public repository, and presto! Fork me on GitHub! But nothing could be further from the truth. What most people miss about software is that it’s much more about process, control, and administration than software licenses. As I argued in It Was Never About Innovation, the key to the success of open source software is not the desire for innovation but rather the fact that all players in open source ecosystems are on a level playing field. Customers, outside developers, freeloaders—they all have a seat at the table and can exert influence on a project by leveraging the community equity they have built up over time by contributing in various ways. This is in stark contrast to proprietary development models, where developers can essentially do whatever they want as long as they create an end product that meets the expectations of the Product Requirements Document (PRD) supplied by product management.

    This is where the difference between open source and proprietary development comes into stark relief. The open process that accompanies open source development helps ensure that the software will integrate into almost any environment and that some bad habits are avoided. These two things go hand in hand. For example, proprietary software development often results in software that is monolithic in nature, with a minimum of dependencies on system software and often bundled with its own set of libraries and tools. This gives developers the leeway to do whatever they want, often employing specific versions of libraries, reinventing various wheels, and generally veering far from the path of creating software that works well in a broader context.

    Open source software developers, by contrast, have no such luxury. From day one, their users demand the ultimate in flexibility, integration, and conformance to standard data center practices. This means using existing tools and libraries whenever possible, baking into the process the idea that your software will be a cog in a much larger data center machine. Note that nowhere did I mention that open source development is faster or more innovative, although it can be. On one hand, proprietary developers love the fact that they have complete control over the end product and don’t have to deal with annoyances, such as customers demanding that their precious software honor existing workflows. On the other hand, end users love the fact that their open source deployments likely have a long history of use within large data centers and that those previous users made sure the software was to their liking.

    Both of these approaches come at a cost: open source development may actually be slower at particular times in its life-cycle due to some overhead costs that are inherent to the model, and proprietary development, while perhaps faster, sends the developer team down the road of maintenance hell, needing to endlessly maintain the bits of glue that generally come for free in open source development. The overwhelming evidence of late suggests that the open source approach is far more effective in the data center.

    Suppose that your team went down the road of proprietary development but eventually came to the conclusion that they could win over more users with an open source approach—what then? Here lies the conundrum: the process of undoing the proprietary process and imbuing a project with the open source sauce is spectacularly difficult. Many otherwise knowledgeable people in the tech industry have no idea just how much change is involved. Hell, most engineers have no idea what’s actually involved in switching horses midstream. To engage in the process means necessarily losing valuable development time while taking up tasks that developers feel are, frankly, beneath them. To change software from a monolithic, proprietary code base to one that plays well with others is a gargantuan task.

    “But wait!” I can hear you say. “Can’t they just release whatever they have under an open source license and then take care of the other stuff later?” Sure, they can, but the end result will likely be disappointing at best and a colossal disaster at worst. For starters, mere mortals won’t be able to even install the software, much less build it from source. There are several tricks developers play to make black box monolithic products work for their end users that make them terrible for open source community-building:

    • Highly customized build environments and tools. This is the #1 reason why the majority of proprietary software cannot simply be set loose as open source: it’s completely unusable to everyone except the developer team that built it. In the open source world, there are a few standard ways to build software. None of them is great at producing exquisitely optimized executables, but they’re great at giving developers a simple, standardized way to build and distribute software (a minimal sketch of such a build follows this list). The process of making your proprietary software build with standardized open source build tools is probably non-trivial. Open source projects, by contrast, came out of the crib compiling with GCC.

    • 3rd party libraries, also proprietary, that you do not have permission to include in your open source code. Even if your code can build with GNU autotools and GCC, to use one example, you probably have to rewrite some not-insignificant portion of the code. This takes time and effort away from your developers, who will be ripping and replacing many pieces of code rather than implementing new features. The scale varies from project to project, but it afflicts the vast majority of projects going from closed to open.

    • Bad security practices. When developers think nobody else is looking, they do all sorts of crazy things. And as long as features are delivered on schedule, nobody bats an eye. It is this primacy of feature development over code quality that can result in some horrendous security holes. Obvious exceptions aside, *cough*heartbleed*cough*, there is plenty of evidence that open source software is more secure than its proprietary counterparts.

    • Bad coding practices and magical unicorn libraries. For the same reasons as above, i.e. feature primacy and nobody’s looking, developers tend to grab the latest and greatest from other software packages, especially runtime scripting engines, libraries, and tools. They take the code, modify it, and then they have an end product that works. For now. This is great if you’re on a deadline and your code must work by midnight, and it’s approaching 23:30. The problem, however, is that the product will live long after midnight tonight, and you will be responsible for maintaining, updating, and syncing your pristine unicorn library with upstream code that will inevitably diverge from what you modified. This is terrible for everyone, developers and admins included. Imagine the poor sod in operations assigned to installing and maintaining someone’s late-night “innovations”.
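    To illustrate the first bullet above: the “standard way” to build an open source project is usually something every packager and admin already knows by heart. A minimal GNU autotools flow, with a made-up project name, looks like this:

        # Unpack a release tarball and do the standard autotools dance
        $ tar xf myproject-1.0.tar.gz && cd myproject-1.0
        $ ./configure --prefix=/usr/local
        $ make
        $ sudo make install

        # From a raw git checkout, regenerate the build scripts first
        $ autoreconf -i && ./configure && make

    Getting a proprietary code base with a custom build system to behave like this is exactly the non-trivial work described above.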

    All of the above leads product teams to one obvious conclusion: package and distribute the software in such a way that it runs as far removed as possible from the system on which it resides, usually in the form of a bloated virtual appliance or at least in the form of a self-contained application that relies on the bare minimum of system libraries. Windows admins should take a look at their Program Files directory sometime. Or better yet, don’t. All of this, taken together, adds up to an end product that is extremely difficult to release as open source software.

    Some ops people might think that an appliance is easier for them to deploy and maintain, but more often, they hold their nose in order to use the thing. They will tolerate such an approach if the software actually makes their jobs easier, but they won’t like it. All of the ops people I know, and I used to be one, prefer that the software they deploy conform to their existing processes and workflows, not force them to create new ones.

    Put another way: would your software exist in its current form if it started life as an open source project? Or would end users have demanded a different approach?

    Open source is about process much more than license, and everyone in an open source community has the ability to influence those processes. Projects that start out as open source have many characteristics baked in from the beginning that often, though not always, save developers from their own worst instincts. If you elect to reverse course and move to the open source model, understand what this change entails—it is a minefield, laden with challenges that will be new to your development team, who are unaccustomed to seeing their practices challenged, don’t particularly relish direct customer feedback, and are entirely uncomfortable with the idea of others reading over their shoulder as they write code. The amount of effort to change from proprietary to open source processes is probably on the same order as going from waterfall to agile development.

    Example: ManageIQ

    When Red Hat acquired ManageIQ in late 2012, it was with the understanding that the code would be open sourced—eventually. However, there were several things standing in the way of that:

    1. Many of the User Interface (UI) scripts and libraries were proprietary, 3rd party tools.

    2. The software was distributed as an encrypted virtual machine.

    3. ManageIQ was and is a Rails app, and some of the accompanying Ruby gems were modified from their upstream sources to implement some specific features.

    #1 meant that many parts of the code, particularly in the UI, had to be ripped out and either replaced with an open source library or rewritten. This took quite a bit of time, but was something that had to be done to release the code.

    #2 is simply not something one can do in an open source project, a realization that struck fear into the hearts of the development team. Some changes to the code were necessary once we lost the (false) sense of security that came with distributing the software as an encrypted appliance.

    #3 meant that the team had to carry forward its modifications to custom gems, which was becoming a burdensome chore and would only get worse over time. The team is still in the process of fixing this, but I’m happy to report that we’ve hired a strong Ruby developer, Aaron Patterson, who will, among other things, maintain the team’s changes to upstream gems and prevent future forks and divergence. He’ll also lead the effort to convert ManageIQ to Ruby on Rails 4.

    Conclusion

    Be considerate of your developers and the challenges ahead of them. Hopefully they understand that the needed changes will ultimately result in a better end product. It comes at a price but has its own rewards, too. And never forget to remind folks that choosing an open source approach from the beginning would have obviated this pain.

  • The ManageIQ Design Summit – a small intimate gathering of cloud experts

    We’re happy to announce the preliminary agenda for the upcoming ManageIQ Design Summit, a 2-day event on October 7 & 8 in Montvale, NJ. Be sure to RSVP soon, as space is very limited. As mentioned in the title, it’s a small intimate gathering of cloud experts, those interested in pushing the limits of ManageIQ and setting the roadmap for development. If you’re a ManageIQ user who wants to learn how to make the most of its automation and orchestration capabilities, then there will be plenty for you, too:

    • Tour the new RESTful APIs released in Anand
    • Create reusable components for automation and orchestration of your hybrid cloud infrastructure
    • Hack rooms for those who want to dive in

    The proud sponsors of the event are Red Hat and Booz Allen Hamilton. I’ve been told to be on the lookout for a new open source cloud broker project from the Booz Allen engineers.

    Looking forward to seeing you there!

  • Moving on From Gluster

    All good things must come to an end. I can say with no equivocation that the last three years have been more rewarding, from a work perspective, than any other job I’ve ever had. When I accepted this challenge in May 2011, I had no idea that the project and community would blossom as they have. I had no idea how many great people were already in place to push this project to the forefront of open source development. I had no idea how many great partners we would find who share our vision for open source storage. I also, of course, didn’t know that Gluster, Inc. would be acquired within months of my arrival, which drastically increased the velocity of the project and community. I didn’t know any of that – what I did know was that there was a pretty cool project called GlusterFS, and it seemed like the way forward for storage.

    After we were acquired, we knew there would be a bit of angst from the community about whether we would still actively support other distributions outside of the Red Hat arena. I’m proud to say that we have done that, with active contributions from various community members for Ubuntu, Debian, NetBSD and OpenSUSE builds. We always strove to make gluster.org a truly open community and, in some respects, “bigger than Red Hat.”

    Along the way, we created a board consisting of active community members and organizations. We made the project more transparent and active than ever. We greatly increased the degree to which the community is a collaborative effort extending beyond the immediate development team. And we greatly increased the reach and scope of the open source storage ecosystem. I can look back and feel privileged to have worked with such amazing visionaries, developers and community evangelists.

    Now it’s time to turn the Gluster community over to someone who can build on what we’ve done and take it even further. I’m staying at Red Hat but moving on to other projects and communities. The ideal candidate should know their way around open source projects and communities, should have an unyielding desire to push things beyond the status quo, should know a thing or two about business strategy, and should understand how to identify which organizations should invest in a community and sell them on the vision. As I’ve mentioned before, today’s community leaders are the equivalent of startup executives, having to mesh together product management and marketing, business development and strategy, sales and messaging into a cohesive whole.

    Are you the next Gluster Community Leader? Drop me a line on IRC – I’m “johnmark” on the Freenode network.

  • The Rise of Open Source Analytics Software

    I was pleased to read about the progress of Graylog2, ElasticSearch, Kibana, et al. in the past year. Machine data analysis has been a growing area of interest for some time now, as traditional monitoring and systems management tools aren’t capable of keeping up with All of the Things that make up many modern workloads. And then there are the more general purpose, “big data” platforms like Hadoop along with the new in-memory upstarts sprouting up around the BDAS stack. Right now is a great time to be a data analytics person, because there has never in the history of computing been a richer set of open source tools to work with.

    There’s a functional difference between what I call data processing platforms, such as Hadoop and BDAS, and data search and presentation layers, such as what you find with the ELK stack (ElasticSearch, Logstash and Kibana). While Hadoop, BDAS, et al. are great for processing extremely large data sets, they’re mostly useful as platforms for people Who Know What They’re Doing (TM), i.e. math and science PhDs and analytics groups within larger companies. But the search and presentation layers are, to me, where the interesting work is taking place these days: they’re where Joe and Jane programmer and operations person are going to make their mark on their organization. And many of the modern tools for data presentation can take data sets from a number of sources: log data, JSON, various forms of XML, event data piped directly over sockets or some other forwarding mechanism. This is why there’s a burgeoning market around tools that integrate with Hadoop and other platforms.

    There’s one aspect of data search and presentation layers that has largely gone unmentioned. Everyone tends to focus on the software, and if it’s open source, that gets a strong mention. No one, however, seems to focus on the parts that are most important: data formats, data availability and data reuse. The best part about open source analytics tools is that, by definition, the data outputs must also be openly defined and available for consumption by other tools and platforms. This is in stark contrast to traditional systems management tools and even some modern ones. The most exciting premise of open source tooling in this area is freedom from the dreaded data roach motel model, where data goes in but doesn’t come out unless you pay for the privilege of accessing the data you already own. Recently, I’ve taken to calling it the “skunky data model” and imploring people to “de-skunk their data.”
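    As a small illustration of that openness, consider ElasticSearch’s HTTP interface (the index and field names below are made up, and the URL style reflects the 1.x-era API): the same documented interface that accepts your data will hand it straight back to you, or to any other tool that speaks HTTP and JSON, with no toll booth in between.

        # Index a JSON log event over the open HTTP API
        $ curl -XPOST 'http://localhost:9200/logs/event' \
            -d '{"timestamp": "2014-09-01T12:00:00Z", "level": "ERROR", "msg": "disk full"}'

        # ...and pull it back out again, on your own terms
        $ curl -XGET 'http://localhost:9200/logs/_search?q=level:ERROR&pretty'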

    Last year, the Red Hat Storage folks came up with the tagline “Liberate Your Information.” Yes, I know, it sounds hokey and like marketing double-speak, but the concept is very real. There are, today, many users, developers and customers trapped in the data roach motel who cannot get out, because they made the (poor) decision to go with a vendor that didn’t have their needs in mind. The best way to prevent this outcome is to go with an open source solution because, again, by definition, it is impossible to create an open source solution that produces proprietary data – with the source open to the world, there is no way to hide how the data is indexed, managed, and stored.

    In the past, one of the problems was that there simply weren’t a whole lot of choices for would-be customers. Luckily, we now have a wealth of options to choose from. As always, I recommend that those looking for solutions in this area go with a vendor that has their interests at heart. Go with a vendor that will allow you to access your data on your terms. Go with a vendor that gives you the means to fire them if they’re not a good partner for you. I think it’s no exaggeration to say that the only way to guarantee this freedom is to go with an open source solution.

    Further reading:

     

  • Citrix and Harvard FASRC Join Gluster Community; Board Expands

    Citrix, Harvard University FASRC and long-time contributors join the Gluster Community Board to drive the direction of open software-defined storage

    February 5, 2014 – The Gluster Community, the leading community for open software-defined storage, announced today that two new organizations have signed letters of intent to join: Citrix, Inc. and Harvard University’s Faculty of Arts and Sciences Research Computing (FASRC) group. This marks the third major expansion of the Gluster Community in governance and projects since mid-2013. Monthly downloads of GlusterFS have tripled since the beginning of 2013, and traffic to gluster.org has increased by over 50% over the previous year. There are now 45 projects on the Gluster Forge and more than 200 developers, with integrations either completed or in the works for Swift, CloudStack, OpenStack Cinder, Ganeti, Archipelago, Xen, QEMU/KVM, Ganesha, the Java platform, and Samba, with more to come in 2014.

    Citrix and FASRC will be represented by Mark Hinkle, Senior Director of Open Source Solutions, and James Cuff, Assistant Dean for Research Computing, respectively, joining two individual contributors: Anand Avati, Lead GlusterFS Architect, and Theron Conrey, a contributing speaker, blogger and leading advocate for converged infrastructure. Rounding out the Gluster Community Board are Xavier Hernandez (DataLab), Marc Holmes (Hortonworks), Vin Sharma (Intel), Jim Zemlin (The Linux Foundation), Keisuke Takahashi (NTTPC), Lance Albertson (The Open Source Lab at Oregon State University), John Mark Walker (Red Hat), Louis Zuckerman, Joe Julian, and David Nalley.

    Citrix

    Citrix has become a major innovator in the cloud and virtualization markets. They will drive ongoing efforts to integrate GlusterFS with CloudStack (https://forge.gluster.org/cloudstack-gluster) and the Xen hypervisor. Citrix is also sponsoring Gluster Community events, including a Gluster Cloud Night at their facility in Santa Clara, California on March 18.

    Harvard FASRC

    The research computing group at Harvard has one of the largest known deployments of GlusterFS in the world, pushing GlusterFS beyond previously established limits. Their involvement in testing and development has been invaluable for advancing the usability and stability of GlusterFS.

    Anand Avati

    Anand Avati was employee number 3 at Gluster, Inc. in 2007 and has been the most prolific contributor to the GlusterFS code base as well as its most significant architect over the years. He is primarily responsible for setting the roadmap for the GlusterFS project. Avati is employed by Red Hat but is an individual contributor for the board.

    Theron Conrey

    Theron became involved in the Gluster community when he started experimenting with the integration between oVirt (http://ovirt.org/) and GlusterFS. Long a proponent of converged infrastructure, Theron brings years of expertise from his stints at VMware and Nexenta.

    Supporting Quotes

    John Mark Walker, Gluster Community Leader, Red Hat

    “The additions of Citrix and Harvard FASRC to the Gluster Community show that we continue to build momentum in the software-defined storage space. With the continuing integration with all cloud and big data technologies, including the Xen Hypervisor and CloudStack, we are building the default platform for modern data workloads.”

    Mark Hinkle, Senior Director, Open Source Solutions, Citrix

    “We see an ever-increasing hunger for storage solutions that have design points that mirror those in our open source and enterprise cloud computing efforts. Our goal is to enable many kinds of storage with varying levels of utility and we see GlusterFS as helping to pioneer new advances in this area. As an active participant in the open source community we want to make sure projects that we sponsor like Apache CloudStack and the Linux Foundation’s Xen Project are enabled to collaborate with such technologies to best serve our users.”

    James Cuff, Assistant Dean for Research Computing, Harvard University

    “As long-term advocates of both open source and open science initiatives at scale, Research Computing are particularly excited to participate on the Gluster Community Governing Board. We really look forward to further accelerating science and discovery through this important and vibrant community collaboration.”

     

    ***The OpenStack mark is either a registered trademark/service mark or trademark/service mark of the OpenStack Foundation, in the United States and other countries, and is used with the OpenStack Foundation’s permission. We are not affiliated with, endorsed or sponsored by the OpenStack Foundation, or the OpenStack community

    ***Gluster and GlusterFS are trademarks of Red Hat, Inc.

    ***Xen and Linux are trademarks of The Linux Foundation

    ***Apache Cloudstack is a trademark of the Apache Software Foundation

  • Gluster Cloud Night Amsterdam

    Join us on March 4 for the Gluster Community seminar and learn how to improve your storage. This half-day seminar brings you in-depth presentations, use cases, demos and developer content presented by Gluster Community experts.

    REGISTRATION

    Register today for this free half-day seminar and reserve your seat since spaces are limited. Click here to register.

    We look forward to meeting you on March 4th!

    AGENDA

    13:30 – 13:45 Registration
    13:45 – 14:15 The State of the Gluster Community
    14:15 – 15:30 GlusterFS for SysAdmins, Niels de Vos, Red Hat
    15:30 – 15:45 Break
    15:45 – 16:30 Adventures in Cloud Storage with OpenStack and GlusterFS, Tycho Klitsee, Technical Consultant and Co-owner, Kratz Business Solutions
    16:30 – 17:15 Gluster Forge Demos, Fred van Zwieten, Technical Engineer, VX Company, and Marcel Hergaarden, Red Hat
    17:15 – 18:00 Networking Drinks

  • Gluster Spotlight on James Shubin: Puppet-Gluster, Vagrant and GlusterFS Automation

    [youtube http://www.youtube.com/watch?v=SV1_RpZssGk&w=560&h=315]

    ***UPDATE: Due to weather-related flight cancelations and rebooking, we had to push this back to Thursday, January 23, at noon PST/3pm EST/20:00 GMT***

    James Shubin is known in the Gluster community for his work on the Puppet-Gluster module.

    Recently, he’s begun mixing powerful cocktails of Puppet and Vagrant to create recipes for automated Gluster deployments. See, e.g.:

    Building Base Images for Vagrant with a Makefile

    and

    Testing GlusterFS During GlusterFest

    This will be quite a fun spotlight, and very much worth your while. As usual, join the #gluster-meeting channel on the Freenode IRC network to participate in the live Q&A.

    About Gluster Spotlight

    Gluster Spotlight is a weekly Q&A show featuring the most exciting movers and shakers in the Gluster Community. If you don’t catch them live, you can always watch the recordings later.

  • GlusterFest Weekend is Here – Jan 17 – 20

    As I mentioned yesterday, the GlusterFest is nigh. This time, we’ll break out testing into two types:

    • Performance testing
    • Feature testing

    To learn about the GlusterFest and what it is, visit the GlusterFest home at gluster.org/gfest.

    Remember that if you file a bug that is verified by the Gluster QE team, you’ll win a t-shirt plus other swag.

    PERFORMANCE TESTING

    We are lucky in that two individuals have stepped up with tools to help with performance testing. One is James Shubin with his Puppet-Gluster module:
    https://forge.gluster.org/puppet-gluster/

    Together with his blog posts on puppet-gluster + vagrant, you should have an easy way to deploy GlusterFS:
    Automatically deploying GlusterFS with Vagrant and Puppet

    Also, Ben England recently released some code for his Smallfile performance testing project, which targets metadata-intensive workloads:

    http://forge.gluster.org/smallfile-performance-testing
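    As a rough example, a metadata-heavy smallfile run against a GlusterFS mount might look something like the following. The flag names are from memory of the smallfile README, so treat them as assumptions and check the project page above for the authoritative syntax:

        # Create many small files from several threads against a GlusterFS mount
        $ python smallfile_cli.py --top /mnt/glusterfs/smf \
            --operation create --threads 8 --files 10000 --file-size 4

        # Read them back to exercise the metadata path from the other direction
        $ python smallfile_cli.py --top /mnt/glusterfs/smf \
            --operation read --threads 8 --files 10000 --file-size 4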

    He also wrote up a nice primer on performance testing on the Gluster.org wiki that discusses iozone, smallfile, and how to utilize performance testing in general:
    http://www.gluster.org/community/documentation/index.php/Performance_Testing

    Please follow the instructions on the GlusterFest page (gluster.org/gfest) and report your results there. Some of the test results are quite large, so you will want to report them on a separate page, either on the Gluster.org wiki or on the paste site of your choosing, such as fpaste.org.

    Please file any bugs and report them on the gluster-devel list, as well as providing links on the GlusterFest page.

    FEATURE TESTING

    In addition to performance, we have new features in 3.5 which need some further testing. Please follow the instructions on the GlusterFest page and add your results there. Some of the developers were kind enough to include testing scenarios with their feature pages. If you want your feature to be tested but didn’t supply any testing information, please add it now.

    The GlusterFest begins at 00:00 GMT/UTC (today, January 17) and ends at 23:59 GMT/UTC on Monday, January 20.
    Rev your engines and get ready for some testing!

  • GlusterFS 3.5 Beta + GlusterFest Weekend

    The first GlusterFS 3.5 Beta is here! See what features made it in over at the 3.5 planning page. Here are some of the marquee features:

    With this first beta, we’ll kick off the next GlusterFest weekend! It begins on Friday, January 17 at 00:00 GMT and continues through Monday, January 20 at 23:59 GMT. Set your clocks!