Kelly Boeckman, Senior Product Marketing Manager, SolidFire
Virtualization has arguably been the biggest shift in IT of the last 10 years. It simplified IT management by removing the silos between servers, storage, and networking, and it allowed applications to be engineered so they need not rely on any particular underlying hardware platform: they can move seamlessly in and out of the cloud.
However, organizations now want the ability to deploy application changes in real time—ideally from development to a live environment at the click of a button—which is something that can’t be done with virtualization on its own. Demands on IT environments are ever increasing, causing headaches for the IT department because existing technologies and resources are typically strained. As a result, we are at a transitional period within IT, where the main focus is on increased use of automation and full infrastructure orchestration. To address complex automation needs, many people are turning to OpenStack.
According to an OpenStack User Survey, a key driver for adopting OpenStack is its ability to bring innovation to the business: orchestrating IT delivery means less infrastructure maintenance, so more time can be spent innovating, delivering value, and differentiating the business. OpenStack is also open technology with no vendor lock-in, attractive qualities because both help cut costs by eliminating license fees and large up-front investments.
"Determining your use case and workloads for OpenStack is a critical first step; decisions on distributions and infrastructure will flow from that key initial decision"
As a result, OpenStack is being seen as a major part of the next-generation data center, perhaps even its universal operating system. Indeed, ask anyone who’s worked with OpenStack and they’ll tell you of the huge success and benefits they’ve had from their initial deployments. Development and test (dev/test) is currently one of the most popular use cases for OpenStack. Fully orchestrating the infrastructure environment to enable rapid, self-service deployment of IT resources accelerates the development, build, test, and release processes. It can also positively affect a business’s bottom line by reducing the need for shadow IT and making better use of on-premises resources. Running web applications and providing databases as a service are also popular use cases.
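In practice, that kind of self-service deployment is typically driven through OpenStack’s orchestration service, Heat. As a minimal sketch (the resource and parameter names here are illustrative, not taken from any particular deployment), a HOT template that lets a developer spin up a test volume on demand might look like:

```yaml
heat_template_version: 2016-04-08
description: Minimal dev/test stack - a single Cinder volume on demand

parameters:
  vol_size:
    type: number
    default: 10        # size in GB; small default for test workloads

resources:
  test_volume:
    type: OS::Cinder::Volume
    properties:
      size: { get_param: vol_size }

outputs:
  volume_id:
    description: ID of the provisioned volume
    value: { get_resource: test_volume }
```

A developer can launch this with `openstack stack create -t dev_volume.yaml dev-stack` and tear it down just as quickly when the test run is finished, with no ticket to the storage team in between.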
But what happens when you want to deploy OpenStack more widely—or even in production—in an enterprise, where reliability, high availability and guaranteed performance are key? When you look at large enterprises like eBay, MercadoLibre and PayPal, they’re operating at such a scale that they don’t want arduous management layers between OpenStack and the storage system, or to waste time manually setting performance levels and managing volumes. They need to choose an infrastructure that integrates so well with OpenStack that they are free to focus on the core of their business.
Don’t forget your data
An endless number of architectures is available when building your OpenStack infrastructure for the first time. Determining your use case and workloads for OpenStack is a critical first step; decisions on distributions and infrastructure will flow from that key initial decision. Should you try DIY OpenStack, go with a vendor’s distribution, or opt for a managed OpenStack service? The good news is that companies like Red Hat, Mirantis, Cisco MetaPod and Platform9 are making it easier to fit the storage and cloud layers together and remove complexity. These solutions are a good starting point for building your OpenStack cloud and, through their integrations with a variety of storage solutions, can help you accelerate deployment.
But if you want your OpenStack deployment to be more automated, you can only achieve this by selecting a storage layer that supports full automation and is tightly integrated with OpenStack’s storage services. Additionally, betting that a single pooled storage system can capably consolidate all three tiers of storage will likely end badly. Any discussion of distributed storage solutions for the cloud should include commercial options alongside open source ones.
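To make this concrete: with a backend that exposes quality-of-service controls through Cinder, performance levels can be set once as policy rather than managed volume by volume. A rough sketch using the standard OpenStack CLI (the `gold` naming and the `minIOPS`/`maxIOPS` keys are illustrative; the exact QoS property keys depend on your storage driver):

```shell
# Define a QoS spec that the backend enforces (property keys are driver-specific)
openstack volume qos create gold-qos --consumer back-end \
    --property minIOPS=1000 --property maxIOPS=5000

# Tie the spec to a volume type so it becomes self-service policy
openstack volume type create gold
openstack volume qos associate gold-qos gold

# Every volume of this type now inherits the performance policy automatically
openstack volume create --type gold --size 100 app-db-volume
```

Once the volume type exists, users and orchestration tools simply request it; nobody hand-tunes performance on individual volumes.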
A few considerations to keep in mind during this process include:
1. Have you chosen the right storage type—block or object—for your particular workload?
2. Will the storage scale out linearly as you grow?
3. Is the storage proven with OpenStack at scale?
4. How easy is it to resolve things like performance issues and bottlenecks?
5. Is the storage interoperable with other vendors in an OpenStack deployment?
6. Can the storage be deployed quickly and easily in OpenStack?
As the IT industry shifts from the classic monolithic and static operational models into the new era of dynamic, agile workflows of cloud computing, it’s time to look beyond traditional storage hardware architectures and consider products that are built from the ground up for the next generation of applications and operations.
Unfortunately, many people forget that getting the storage layer right is essential to a successful OpenStack deployment. Storage may sit at the bottom of the stack, but it shouldn’t be your last consideration. With a whole host of storage types and vendors to choose from, hopefully this list will help you narrow the search and select the best storage layer to support your business’s infrastructure orchestration strategy.