Learn firsthand about OpenStack, its challenges and opportunities, market adoption, and Cloudscaling’s engagement in the community. My goal is to interview all 24 members of the OpenStack board, and I will post these talks sequentially at Dell TechCenter. To make the interviews easier to read, I have structured each one around my (subjectively chosen) most important takeaways. Enjoy reading!
#1 Takeaway: OpenStack’s fundamental architecture is very sound
Rafael: Let’s start with a very basic question, Randy. What is the value that OpenStack brings to the table?
Randy: The core value of OpenStack is that its fundamental architecture is very sound. If you want to build Infrastructure as a Service (IaaS) in a scale-out manner, you need an asynchronous, loosely coupled, message-based type of solution, so that you can create a distributed software system.
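To make that architectural point concrete, here is a toy sketch (plain Python, purely illustrative; the names are invented and real OpenStack uses an AMQP broker and RPC layer for this) of two loosely coupled services exchanging asynchronous messages through a queue:

```python
import queue
import threading

# A shared message bus decouples sender from receiver: the API service
# enqueues work and returns immediately, while a worker consumes
# messages at its own pace. Purely illustrative, not OpenStack code.
bus = queue.Queue()
results = []

def api_service():
    # Fire-and-forget: publish a request without waiting for the worker.
    bus.put({"action": "boot_instance", "instance_id": "vm-1"})
    bus.put(None)  # sentinel so the worker knows to stop

def compute_worker():
    while True:
        msg = bus.get()
        if msg is None:
            break
        results.append(f"handled {msg['action']} for {msg['instance_id']}")

w = threading.Thread(target=compute_worker)
w.start()
api_service()
w.join()
print(results)  # ['handled boot_instance for vm-1']
```

Because neither side calls the other directly, each service can be scaled, restarted or replaced independently, which is the property Randy is pointing at.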
We looked at a number of different technologies in the marketplace today, including CloudStack and OpenNebula, and each was missing one of those two properties. In the case of CloudStack, where we did one of the three largest deployments, the code base is monolithic: all the code runs in one place. And in the case of OpenNebula you have distributed software, but it lacks an asynchronous message-passing paradigm.
#2 Takeaway: OpenStack’s key accomplishments are: 1) high-profile contributors, 2) a regular release cycle, 3) governance by a foundation and 4) a broad suite of services
Rafael: What are the key accomplishments of OpenStack so far?
Randy: First, a number of high-profile contributors like AT&T, HP and Dell. Second, a regular cadence in the major release cycle, which is a key factor in any large open source project. OpenStack has a six-month cadence with six major releases so far, and it’s becoming better and better with every new version. Third, the formation of the OpenStack Foundation, a single neutral entity that can support OpenStack and its mission, rather than being owned by any one organization. Fourth, OpenStack branched out from the core of Nova and Swift into Nova, Cinder, Horizon, Glance, Swift, Quantum and Keystone in the current OpenStack Folsom release. With that, OpenStack has a full end-to-end suite of services you can pick from, whereas other IaaS open source projects focus on a single piece of the problem, like compute or networking.
#3 Takeaway: OpenStack needs a broader market adoption and a clear mission
Rafael: … and what still needs to be worked on in OpenStack, Randy?
Randy: We still need significantly more adoption in the marketplace, although I don’t know how to quantify that well. A lot of people are in the process of evaluating OpenStack, and I think that a broader market adoption is just a matter of time.
Also, since the formation of the OpenStack Foundation, there’s an opportunity to re-evaluate what the mission for OpenStack should be. Initially the mission for OpenStack was defined by Rackspace. Now that Rackspace doesn’t own OpenStack anymore, the project’s mission needs to be clarified.
#4 Takeaway: The OpenStack Foundation has two options to govern the project: 1) a hands-off and 2) a hands-on approach
Rafael: What are viable options for OpenStack?
Randy: One option would be a similar approach to how the Linux Foundation exists in the market place.
The Linux Foundation doesn’t really say what Linux should be. It allows the marketplace and the key contributors to make those decisions themselves. Linux has a reasonable amount of standardization across distributions, yet all of them are designed for completely different purposes. Red Hat Enterprise Linux is seen by many as designed for servers in the way that Ubuntu is for desktops. There are even more specialized Linux distributions for things like embedded systems.
That sort of hands-off approach would mean that OpenStack becomes a framework that anybody in the ecosystem can use to build all sorts of clouds. In that case market forces would determine which distributions rise to the top and which don’t … just as is the case with Linux.
Alternatively, the OpenStack Foundation could take a hands-on approach and make top-down determinations about OpenStack standards … what OpenStack supports, which other cloud software it’s compatible with, etc. That approach might get us to interoperability with other clouds such as Amazon Web Services or Google Compute Engine sooner, but it may alienate those members of the community who don’t want to go in the direction the OpenStack Foundation and Technical Committee decide makes sense. Those folks might then want to build their own system, and that creates a threat of forking OpenStack.
Rafael: Hands-off or hands-on: which approach would you prefer for OpenStack?
Randy: I have seen in the past that standards and interoperability seem to bubble up through market adoption. Once customers decide what they want, developers can respond to that rather than trying to predict what the market might want. We all know: human beings are the worst predictors of the future (laughs).
#5 Takeaway: Cloudscaling focuses on improving OpenStack’s computing capabilities and API compatibility with other public clouds
Rafael: Let’s talk a bit about Open Cloud System – your company’s OpenStack distribution. What makes it unique in the marketplace?
Randy: Before I answer that question, let me explain briefly that we use 100% stock OpenStack in our product, Open Cloud System. We don’t modify it, we don’t fork it. Our team has a background in building large, scale-out, production-grade systems that are compatible with other clouds, and that’s our primary focus with OpenStack. We’ve built an integrated system solution rather than a bunch of separate components.
OpenStack, from our point of view, is a great technology, but it’s a little bit like the Linux kernel. You probably wouldn’t take it as-is and run it in production. Just as with the Linux kernel, you actually have to do a number of things to make it robust and production-ready … and by production-ready we mean a focus on availability, security, performance and maintainability.
OpenStack has a lot of configuration parameters, and we simply took advantage of that to provide functionality that isn’t available in default OpenStack. Let me give you one example.
Default OpenStack comes with Nova, the compute component. It has two deployment modes. The first is a centralized service that all of your network traffic goes through. It has VLANs behind it, and every tenant of your cloud is on a VLAN. The challenge is that you have a central choke point for all your traffic. Regardless of how fast your switch fabric is … you’re driving traffic for all your VMs through this central Linux box, which is obviously not ideal.
The second deployment mode is distributed, where you are also using VLANs … with your Nova network controller running on all your hypervisors. Obviously there are security concerns around that. But setting that aside, you end up making some compromises, because the Nova network controller architecture puts both public and private IPs on your internal switch fabric. A lot of people don’t like to do that; they prefer to keep their public IPs at the edge of the network. In addition, you get weird bugs that crop up, because network address translation happens on each of the hypervisor nodes instead of at the edge of your network, as you would normally expect. There is currently a bug in OpenStack that is not fixable because of this architecture, and it prevents you from using floating IPs.
This is really clunky. Open Cloud System takes a very different approach. We don’t use any VLANs. We use Layer 3 network routing just like Amazon Web Services does. Our networking model looks exactly like Amazon Web Services Elastic Compute Cloud (EC2).
We have a separate NAT service that runs at the edge of the system. Your public and private IPs are not running together on the switch fabric. We have a distributed DHCP service that runs on all hypervisors; the Nova network controller is off to the side, and no network traffic goes through it anymore. Because of that we get full end-to-end bandwidth between all VMs and tenants, and full throughput for internet ingress and egress.
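As a rough illustration of the two stock Nova network modes Randy contrasts, the relevant nova.conf options (flag names as they existed in the Folsom-era nova-network; verify against your release’s documentation) look roughly like this:

```ini
# Mode 1 - centralized: a single nova-network node (VlanManager) acts as
# the gateway for all tenant VLANs, so every packet crosses one Linux box.
network_manager = nova.network.manager.VlanManager
multi_host = False

# Mode 2 - distributed ("multi-host"): the network service runs on every
# hypervisor, removing the choke point but putting NAT and public IPs on
# each compute node.
multi_host = True
```

Open Cloud System sidesteps both modes, as Randy describes, by moving NAT to a dedicated edge service and routing at Layer 3 instead of bridging VLANs.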
Rafael: Can you give some more examples of what your distribution does differently than raw OpenStack?
Randy: Sure. Default OpenStack comes with RabbitMQ as the messaging broker, and we felt that created a single point of failure, which didn’t make sense to us, especially in larger deployments. So we came up with an “Alternative Approach to OpenStack Nova RPC Messaging.”
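The gist of a brokerless alternative is that services talk to each other point-to-point instead of through a central broker. This can be sketched with plain sockets (a hypothetical illustration in stdlib Python, not Cloudscaling’s actual driver; ports and message format are invented):

```python
import socket
import threading

# Brokerless request/reply: each service owns its own listening socket
# and peers connect to it directly, so no single broker sits in the
# message path and can take the whole control plane down with it.
def compute_service(sock):
    conn, _ = sock.accept()
    with conn:
        request = conn.recv(1024).decode()
        conn.sendall(f"ack:{request}".encode())

server = socket.socket()
server.bind(("127.0.0.1", 0))  # bind to any free local port
server.listen(1)
port = server.getsockname()[1]

t = threading.Thread(target=compute_service, args=(server,))
t.start()

# A peer connects straight to the service it needs: point to point.
client = socket.socket()
client.connect(("127.0.0.1", port))
client.sendall(b"boot_instance")
reply = client.recv(1024).decode()
client.close()
t.join()
server.close()
print(reply)  # ack:boot_instance
```

A broker still buys you queuing and fan-out, so the trade-off is availability versus features; the point of the sketch is only that removing the broker removes the single point of failure Randy mentions.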
We also spent time pushing code back into OpenStack for API compatibility; we do a lot of work on the AWS EC2 API, and we recently announced that we are providing a set of APIs compatible with Google Compute Engine. Now people have a choice when they use the OpenStack compute project, Nova: the OpenStack native API, the AWS API or the GCE API.
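The idea of several API frontends driving one compute backend can be caricatured as follows (all function and field names here are invented for illustration; the real OpenStack, EC2 and GCE APIs are far richer):

```python
# One backend, several API "dialects" in front of it. Each frontend
# translates its own request shape into the same backend call.
def backend_boot(image, flavor):
    return {"id": "vm-1", "image": image, "flavor": flavor}

def native_api(body):    # OpenStack-style request shape (hypothetical)
    return backend_boot(body["imageRef"], body["flavorRef"])

def ec2_api(params):     # AWS EC2-style request shape (hypothetical)
    return backend_boot(params["ImageId"], params["InstanceType"])

def gce_api(body):       # GCE-style request shape (hypothetical)
    return backend_boot(body["sourceImage"], body["machineType"])

# All three frontends produce the same result from the same backend:
a = native_api({"imageRef": "ubuntu", "flavorRef": "m1.small"})
b = ec2_api({"ImageId": "ubuntu", "InstanceType": "m1.small"})
c = gce_api({"sourceImage": "ubuntu", "machineType": "m1.small"})
assert a == b == c
```

This is why API compatibility can live at the edge without forking Nova: only the request translation differs, not the compute logic underneath.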
#6 Takeaway: Early adopters such as the finance sector are embracing OpenStack, businesses with less sophisticated IT needs might jump in later
Rafael: Randy, we talked very briefly about market adoption earlier. Let’s dwell a bit more on that. Who are the early adopters and do you see signs of OpenStack going mainstream?
Randy: Cloud computing is a new paradigm in computing, driven by large-scale web companies. When you look at Google, Amazon, Facebook or Twitter data centers, they don’t build systems that look like traditional enterprise data centers.
OpenStack will be broadly adopted by the enterprise over time. Look at other open source projects such as Hadoop … it gives the average enterprise a capability that previously only Google had with MapReduce. Many companies are already comfortable with Hadoop, and I believe the same will happen with OpenStack over time.
In general, we see two sets of customers. You have companies that see IT as a competitive advantage. They will refresh their datacenter infrastructure over the next 5 to 10 years, and they will embrace models such as OpenStack and Hadoop. Financial services companies are already very much involved with both open source projects. Usually they are a good indicator for other industries to follow later.
And then you have companies for which IT is not a significant competitive advantage, such as shipping and logistics … they are just tracking where their ships or containers are, and they don’t have a significant need for sophisticated IT solutions. I think that over time those companies might adopt public clouds designed to run their workloads.
#7 Takeaway: Getting the software & solutions provider DNA into Dell is crucial for the future
Rafael: Last question, Randy. How do you view Dell in the OpenStack game?
Randy: We are currently in the middle of a fundamental change in IT, comparable to the transition from mainframe to enterprise computing. A lot of the mainframe companies didn’t survive. Those that did, like IBM, took a very close look at emerging technologies and changed their business model.
Dell is making a lot of changes in order to become a software and solution company. I think being a solution provider is the only way for a company like Dell, which is traditionally a hardware supplier, to survive in the future.
All the right moves have been made at Dell, and it’s more a question of execution: How do you get the software DNA into Dell? How do you get system thinking DNA into the business? It’s all about finding answers to those questions.
Rafael: Thank you very much for this interview, Randy.
Randy: You’re welcome!
Company website: http://www.cloudscaling.com/
Company blog: http://www.cloudscaling.com/blog/
Feedback Twitter: @RafaelKnuth Email: email@example.com