Inside HPE Synergy, the first “composable infrastructure” system designed for both stability and on-demand innovation
In this article...
- HPE Synergy was launched to provide both the stability for core business operations in traditional IT and the agility for revenue-driving cloud applications, all on the same platform
- Here, HPE execs discuss how the company thinks differently about IT infrastructure
WANT TO KNOW MORE?
- The world of the IT department is undergoing a fundamental change in how it does business. Read the “HPE Synergy for Dummies” eBook to understand how your organization can transform its IT function to reduce data center complexity and maximize business outcomes.
Today’s enterprise IT managers should be pitied. The growing ubiquity of sensors, apps and mobility is unleashing a tsunami of data that strains companies’ infrastructure and upends methodical workflows, and IT managers are scrambling to adapt to this new reality with inadequate, outdated tools and processes. The system is stressed and IT managers are stressed. Reliable IT is no longer enough. The infrastructure needs to become more flexible and agile, capable of responding immediately to new business opportunities, much like IT managers themselves.
Over the past three years, hundreds of employees throughout the newly launched Hewlett Packard Enterprise (HPE) have been working on a top-secret project designed to solve this very problem. HPE Synergy, announced this week at Discover 2015 London, is an industry first. Unlike traditional infrastructure platforms, it’s designed to operate as a hybrid. It simultaneously provides the stability for core business operations while also seamlessly adapting to enable application development and management all on the same platform.
“This is truly an experiential product that addresses people’s frustrations in the data center.”
“This is truly an experiential product that addresses people’s frustrations in the data center,” says Paul Miller, vice president of marketing for HPE’s Converged Data Center Infrastructure business. “We have hundreds of customers who are dying to get their hands on this.”
What’s the future of infrastructure management? Where does Synergy fit in? Here, Miller is joined by Ric Lewis, senior vice president and general manager of HPE’s Converged Data Center Infrastructure business, and Neil MacDonald, vice president and general manager of HPE’s BladeSystem business, to share their insights.
Why do IT executives need to think about infrastructure differently?
Ric Lewis: Historically, IT has been in the back room, and the metric of success has been, “If I never hear about it, it’s all good. My databases are running, my customer ERP (enterprise resource planning) application software is working, so I can run my business.” That’s IT of the past. IT of the future is the face of the company, where customers are shopping or choosing their airline seats or checking their bank balance—all on their phones. The app is the business.
In the Idea Economy, as HPE calls the current age of rapid digital innovation, IT is a growing piece of how a company differentiates itself, how it generates revenue and, more importantly, how it generates profit. So all of these CIOs are going from supporting the business to creating new value for the business. Yet the traditional expectations are still there: your infrastructure has to be reliable and cost-effective. It can’t go down.
Neil MacDonald: Too much time, energy and money are being spent keeping the plumbing running at the expense of innovating in services that bring new customers to the business or deliver a better experience to existing customers. IT leaders now have a seat at the executive table, and they’re being asked, “How can you transform our business?” They need new tools.
Why can’t the existing infrastructure perform on these two levels?
Paul Miller: Take a client of ours, a major energy company. Like a lot of businesses, they’ve built their IT over 20 years, stitching together various systems with custom code. It helps them find the energy, transport that energy to a refinery, manage shipments to a point of distribution and eventually to the point of sale. That system runs their business. So it’s very stable—it has to be—but it is also very rigid. You don’t want to touch it very often, maybe once or twice a year for an update.
At the same time, in order to remain competitive, the company is digitizing the entire supply chain by putting sensors in oil fields, gas fields and refineries. They’re collecting this new data and pushing it out on mobile apps to engineers so they can find energy quicker, and to the sales force so they can sell more efficiently and at higher margins. And companies are aiming to continuously update these apps. To do that, you need a very dynamic infrastructure, but what they have now is not built to support these two very different needs.
How does HPE Synergy resolve these competing demands?
Miller: Analysts will tell you that you need two different sets of tools. At HPE we think differently. We designed a completely new class of infrastructure, what we call “composable infrastructure,” which offers a fluid pool of resources. This is an entirely new way of thinking about infrastructure within the data center.
A traditional data center has database servers, exchange servers, application servers and web servers, and they’re all designed with static ratios. The database server might have 10 increments of memory, 20 of storage and five of compute. You build each system around fixed increments, but over-provision in case the database might need more in the future. This over-provisioning strands unused resources in silos, where no other workload can reach them. In a recent article, the Wall Street Journal estimated that there are 10 million of these zombie servers around the world, drawing four gigawatts of unneeded power, enough to power 3.2 million households.
So this fluid pool of resources removes those silos?
Miller: Right. We’ve created a tool that allows you to grab what you need on demand. You take however much memory, storage and compute a given workload requires, and when you’re done, the system returns it to the pool. This single pool means you’re getting greater utilization out of your existing infrastructure. It’s far more efficient.
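The grab-on-demand, return-when-done behavior Miller describes can be sketched in a few lines. This is a purely illustrative model, not HPE code; the `ResourcePool` class, its resource names and its units are all hypothetical.

```python
# Illustrative sketch of a fluid resource pool (hypothetical, not HPE code).
# Workloads draw compute, memory and storage on demand and return them when
# done, instead of stranding capacity in fixed per-server silos.

class ResourcePool:
    def __init__(self, compute, memory_gb, storage_gb):
        self.free = {"compute": compute, "memory_gb": memory_gb,
                     "storage_gb": storage_gb}

    def allocate(self, **need):
        """Grab resources for a workload; fail if the pool can't cover it."""
        if any(self.free[k] < v for k, v in need.items()):
            raise RuntimeError("insufficient capacity in pool")
        for k, v in need.items():
            self.free[k] -= v
        return dict(need)  # a handle the workload keeps until release

    def release(self, grant):
        """Return a finished workload's resources to the shared pool."""
        for k, v in grant.items():
            self.free[k] += v

pool = ResourcePool(compute=100, memory_gb=512, storage_gb=2000)
grant = pool.allocate(compute=5, memory_gb=10, storage_gb=20)
pool.release(grant)  # capacity is immediately available to the next workload
```

Because every workload draws from and returns to the same pool, utilization rises without buying new hardware, which is the efficiency argument Miller makes above.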
MacDonald: The analogy I like is different management styles. A micro manager tells everybody what to do and how to complete each task. That's how we manage IT infrastructure today—managing each individual device. It’s not an effective or scalable approach for people or for infrastructure. But if you practice management by objectives, you tell your folks, “This is the objective you're trying to achieve,” and you rely on their intelligence to figure out how best to achieve it. That’s what we’re doing with infrastructure. Its inherent intelligence composes the necessary resources.
Give me an example of how it could change a business.
Lewis: If you think about a modern sports stadium, it’s highly dynamic. You’re running a world-class supply chain, making sure all the food and drinks are ordered and delivered, and making sure vendors are being paid. At the same time, you’ve got an enormous Wi-Fi haven with apps for fans to look up the shortest line for food, which helps the stadium predict a rush on hot dogs or beer.
And then you might put on the big screen, "Log into our app to vote for today’s seventh-inning stretch song." All of a sudden, 60,000 or so people are logging on. The resource load on that app goes from almost none to huge and then back to none once voting is over. It’s terrible to size your infrastructure for that one little burst and pay for all that capacity. With Synergy, your infrastructure adapts automatically to demand.
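The stadium burst Lewis describes is the classic case for sizing to current demand rather than to the peak. A minimal sketch of that logic, with entirely hypothetical sizing numbers (the real ratio of users to instances would depend on the app):

```python
# Illustrative autoscaling sketch (hypothetical, not Synergy code): size
# capacity to the demand of the moment instead of provisioning for the peak.

def capacity_needed(active_users, users_per_instance=5000, minimum=1):
    """Instances required right now, never below a small idle baseline."""
    return max(minimum, -(-active_users // users_per_instance))  # ceiling division

# Demand goes from idle -> 60,000 voters -> idle again.
for users in (0, 60_000, 0):
    print(users, "users ->", capacity_needed(users), "instances")
```

Statically provisioning for the 60,000-user spike would leave that capacity stranded the rest of the season; composing it from a shared pool only for the minutes of the vote is the alternative the interview is describing.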
What’s the key to simplifying infrastructure management even as the overall system is getting more complex?
Lewis: One of the breakthroughs here is the unified API (application programming interface). Instead of separate APIs for networking, storage and so on, each requiring individual commands and a whole sequence of changes you have to carefully manage, you can do all of that with one line of code. You can give very simple commands for the whole infrastructure. Essentially, “I need four CPUs with this specific OS image and 100 gigabytes of storage.” The system then figures out where those resources are and how they’re assigned, and does that dynamically. The API is central to this idea of composability.
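Lewis’s one-line request might look something like the sketch below. The `compose` function, its field names and the returned plan are hypothetical, written in the spirit of a unified infrastructure API rather than copied from the actual Synergy interface.

```python
# Hypothetical sketch of "one line of code" composition. The compose()
# entry point and spec fields are illustrative only, not the real API.

def compose(spec):
    """Pretend unified-API call: one declarative request provisions everything.

    In a composable system this single request replaces separate server,
    storage and network provisioning steps; here we just echo the plan the
    system would carry out on the caller's behalf.
    """
    return [
        f"reserve {spec['cpus']} CPUs",
        f"apply OS image {spec['os_image']}",
        f"attach {spec['storage_gb']} GB of storage",
    ]

# Lewis's example request, expressed as one call:
plan = compose({"cpus": 4, "os_image": "rhel-7.2", "storage_gb": 100})
for step in plan:
    print(step)
```

The design point is that the caller states the objective and the system works out placement, which is exactly the management-by-objectives analogy MacDonald draws earlier.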
Miller: When you talk to all the different constituents using the system—the developers running or creating new apps, the lines of business that want to add promotions within an app—they don’t want to mess with the actual infrastructure; they just want it to work. So we designed infrastructure as code; we made it programmable.
Meanwhile, the software-defined intelligence hides all of the complexity. We automated the updates and patches so the system manages the life cycle of the devices. That way, the infrastructure is always there and it can deploy apps quickly.
“What used to take months, now takes days. What used to take days, now takes minutes.”
How would you characterize the improvement in terms of speed?
Miller: What used to take months, now takes days. What used to take days, now takes minutes. This is the speed of the cloud but on the physical systems within data centers.
MacDonald: Customers generally have specialists managing each silo—servers, storage devices, networking and so on. It’s a very expensive and very slow operational model. Any changes involve all of these different experts and all of those workflow hand-offs take time. Not to mention, all of this is done by humans and humans make mistakes. With HPE Synergy, the system itself takes care of that for you.
Why isn’t the cloud a sufficient alternative on its own?
Lewis: You’ve got lines of business saying, "I want to run this promotion," but because they don't have time to procure a bunch of servers, software and storage, or to hire an external company, they run to the public cloud to crank out an app. But this process is very expensive because you’re paying for all that extra capacity. And for security and compliance reasons, you may not want to put customers’ data and mission-critical stuff in the public cloud. You want to hold that close, in your on-premises environment.
When did you know this composable infrastructure worked? What was that moment like?
Lewis: Early last summer, we took all the hardware and prototypes and set them up in a demo room in Palo Alto. We brought in Meg Whitman (HPE’s CEO) and the executive staff and had Ken Blue, the R&D leader, show what it looks like when you compose different pieces of infrastructure to run a workload.
We had been talking about how this would work, but I hadn't actually seen the full demo. When Ken finished, there was this brief pause of stunned silence between Meg and the whole team. And I was quiet too—it really blew me away. But then I couldn’t help myself and I blurted out, “That’s really cool!” and everyone else chimed in. It all finally seemed real. Like, wow, we can really do this. And it’s so different from any experience customers have today with their infrastructure. That’s what makes it incredibly exciting.