Hyperconverged is now mainstream. Why?
Both HPE and Dell EMC announced triple-digit growth of hyperconverged solutions in their latest quarterly results. Nutanix more than doubled quarterly revenue year over year. VMware vSAN, barely three years old, has crossed the 10,000-customer mark. Companies are clearly adopting hyperconverged solutions at a blistering pace, but why now? The answer, I believe, can be found in the timing of three major trends in technology. In no particular order, they boil down to hardware innovations, software maturity, and public cloud.
If you travel back to 2011, when Nutanix hit the market (arguably the first mover in hyperconverged), SSDs held hundreds of gigabytes, hard drives topped out around 1TB, and most two-socket servers had 8-16 cores and less than 512GB of RAM. Network connections mainly consisted of multiple 1Gb NICs connected to several VLANs. As respectable as those specs may have been in 2011, there were valid concerns about moving high-end storage functions inside these servers while maintaining high performance. From a physical space standpoint, how on earth were you going to build a capable 100TB SAN with such tiny components? It was technically possible, but good luck convincing the storage and finance teams. With CPU and memory at a premium, how could you absorb high-level storage functions without impacting performance and efficiency? The result: early hyperconverged use cases revolved heavily around workloads that fit nicely within these constraints, such as VDI and ROBO (remote office/branch office).
Fast-forward to 2016-17, and the hardware landscape looks far different. A two-socket server may have 40+ cores, 1TB+ of RAM, and multiple 10Gb NICs. Hard drives and SSDs have ballooned to 10TB and 3TB respectively. We also have 12Gb/s SAS and blazingly fast NVMe SSDs. With very little effort, you could spec hyperconverged nodes with 48 cores, 1TB of RAM, 30TB of flash, and dual 10Gb NICs. As far as physical capabilities go:
Disk capacity? Check!
Disk performance? Check!
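To make those checkmarks concrete, here is a rough back-of-the-envelope capacity sketch. The node specs, replication factor, and data reduction ratio below are hypothetical round numbers for illustration, not vendor sizing guidance.

```python
# Illustrative only: rough usable-capacity math for a hyperconverged
# cluster built from nodes like those described above. All inputs
# (node count, flash per node, copies, reduction ratio) are assumptions.

def usable_capacity_tb(nodes, flash_per_node_tb, replication_factor, data_reduction_ratio):
    """Effective capacity after replication overhead and data reduction."""
    raw_tb = nodes * flash_per_node_tb
    # A replication factor of 2 keeps two full copies of every block,
    # so usable space is raw capacity divided by the copy count.
    after_replication = raw_tb / replication_factor
    # Dedup and compression multiply the logical data that fits.
    return after_replication * data_reduction_ratio

# A hypothetical 4-node all-flash cluster: 30TB of flash per node,
# two data copies, and a conservative 2:1 reduction ratio.
print(usable_capacity_tb(4, 30, 2, 2.0))  # 120.0 TB effective
```

Even with half the raw capacity consumed by a second data copy, a modest reduction ratio puts a small cluster well past the 100TB mark that seemed implausible in 2011.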
The most conservative engineers in most IT organizations are the storage administrators. They know all too well the ramifications of an unreliable storage platform, which is why storage startups take time to crack large enterprise customers. VMware has a long history of delivering reliable, enterprise-grade clusterware; however, until 2014 it left storage functions to third-party hardware and software vendors. Nutanix, on the other hand, came out of the gate with a NO SAN mantra. Anyone else remember the “NO SAN” stickers Nutanix handed out at tech conferences? The challenge for Nutanix was that it was a new player in the storage space and had to compete against well-known, trusted storage platforms. SimpliVity was in a similar situation when it hit the market in 2012.
Fast-forward to 2017: Nutanix and SimpliVity (now part of HPE) have continued to innovate, iterate, and operate in some of the largest IT environments in the market. VMware has carried out an aggressive yet deliberate rollout of vSAN, taking advantage of its massive customer base. Distributing storage volumes across 8-, 16-, or 64-node clusters is no easy feat, yet all three vendors have proven themselves on the big stage as reliable enterprise solutions.
In addition to reliability, other key features have helped expand the use cases for hyperconverged. Two big ones are compression and deduplication. Yes, these are primarily billed as cost-saving features, but they also squeeze more data into a smaller physical footprint. Another key benefit of compression and deduplication is that they make all-flash hyperconverged solutions economically feasible. If you can build an all-flash hyperconverged system for the same cost as hybrid or less, why wouldn't you?
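The economics behind that last question come down to simple division: data reduction lowers the effective cost per logical gigabyte stored. The prices and reduction ratios below are invented round numbers for illustration only, not real market figures.

```python
# Hypothetical illustration of all-flash vs. hybrid economics.
# Raw $/GB figures and reduction ratios are assumptions, not quotes.

def effective_cost_per_gb(raw_cost_per_gb, data_reduction_ratio):
    """Cost per logical (post-dedup/compression) gigabyte stored."""
    return raw_cost_per_gb / data_reduction_ratio

# Flash at a higher raw price but with a 3:1 dedup+compression ratio
# can land below a hybrid tier that reduces poorly.
all_flash = effective_cost_per_gb(0.45, 3.0)  # 0.15 per logical GB
hybrid = effective_cost_per_gb(0.20, 1.2)     # ~0.167 per logical GB
print(all_flash < hybrid)  # True
```

The point is not the specific numbers but the shape of the math: a strong reduction ratio can more than offset flash's raw price premium.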
Another area of innovation in the hyperconverged ecosystem is disaster recovery. HPE SimpliVity has made disaster recovery and WAN optimization a cornerstone of its solution. Nutanix and VMware offer replication and snapshots with both on-prem and cloud destinations. Dell EMC has combined native VMware vSAN with additional disaster recovery and cloud-based storage technologies in its VxRail product. The net benefit: consolidating onto a hyperconverged platform lets you eliminate additional standalone technologies.
What does public cloud have to do with on-prem IT infrastructure? Lots!
Public cloud has made it abundantly clear that IT consumers desperately want streamlined access to technology services and care far less about the infrastructure those services run on. They want to request resources with the click of a mouse or the execution of a command. It's unrealistic to think that every company can simply lift and shift years of technology growth to AWS or Azure and turn off the lights in the datacenter. However, IT teams are under tremendous pressure to deliver the AWS experience. Hyperconverged is the fundamental first step in moving IT away from managing components and toward managing a platform.
Probably the biggest effect of adopting hyperconverged infrastructure is on IT operations. Consolidating to a single platform allows a greater focus on automation. Time saved on refresh cycles, upgrades, and interoperability headaches can be reinvested in automation and in delivering more cloud-like infrastructure to IT consumers. Just as importantly, it gives IT the time to truly unify on-prem infrastructure with the large public cloud vendors for a more effective hybrid cloud strategy. Hyperconverged, in this sense, can be viewed as a key enabler for public cloud. Where else would you find the time?
These three trends are here now and show no signs of slowing down. Hyperconverged, like server virtualization and flash storage before it, is proving to be a disruptive force in the IT landscape. This article focused on the key enablers of the technology. What's far more exciting are the outcomes on the other side of the decision: disaster recovery, automation, scale, hybrid cloud, and many more. We'll drill down into these areas in future blog posts.