What does Hyperconvergence mean and how is the data centre changing?
Hyperconvergence is the buzzword claiming to bring storage in line with the demands of the 21st century. But is it all hype, or will it actually change the way your business operates? Simply put, hyperconvergence is the collapsing of all storage into one multivariable solution, generally focused on hardware. However, there are important differences between hyperconvergence, convergence and the attachment of 'architecture' or 'infrastructure' to either word. Beyond that, if you look at what is really changing in storage and networking, there are two broad trends worth understanding that are related but distinct. One is specific to hardware and 'hyperconvergence'; the other is focused on software and administrative control. This article will stake out definitions around hyperconvergence, convergence and unified/software-defined storage. Particular vendors disagree on these definitions, but this article will aid your understanding of how the data centre is changing and improve your ability to make future purchasing decisions.

The Terminology of Hyperconvergence
Staking out space in an industry plagued by buzzwords.
Hyperconvergence should strictly be considered the collapsing of distinct hardware provisions for CPU, networking, memory and storage into a single, interchangeable piece of kit. However, this hardware development has been accompanied by the growth of à la carte software-based management tools that unify your command and control capabilities over storage units, virtualised or not. These two phenomena are easily confused because 'hyperconverged' hardware is software-defined insofar as the designation of different components is done virtually. These hardware platforms also come with software management packages that can be used to manage non-hyperconverged hardware. Hyperconverged infrastructure [HCI] is a more specific term that directly references the hardware component of hyperconvergence. Software-defined storage [SDS] is a more accurate term for the trend towards unified, virtual control systems for data storage and the development of storage software that is truly hardware-agnostic. This is, however, sometimes called hyperconverged architecture, furthering the confusion. It is much simpler to think of these two capabilities as distinct trends with their own utility to any given business.

Converged Infrastructure and the Traditional Data Centre
Vendor-led attempts to solve compatibility issues.
Traditionally, the data centre comprises servers, networking and storage components. These pieces of hardware are specialised and can be purchased separately, then integrated into the network. In theory, this enables maximum flexibility. In practice, it creates compatibility challenges that IT departments spend a great deal of time attempting to resolve and manage. The first true endeavour by a vendor to solve this problem came in 2008, when Oracle, in partnership with HP, announced their 'Database Machine': a pre-integrated box that converged networking, storage and server components into a single, tested, off-the-shelf solution. The following year, EMC, Cisco and NetApp followed suit. These products are known as converged infrastructure [CI]. They are built from traditional data centre hardware, but come pre-designed for compatibility within a single pre-built chassis for ease of implementation and management, and are often augmented with software management tools. All the major infrastructure providers now have converged offerings. The converged infrastructure market sat at $11.7 billion in 2016 and is predicted to rise to $76.2 billion by 2025*. These solutions can be thought of as proprietary switches attached to traditional disk arrays and servers in a takeaway box. They generally operate as infrastructure-as-a-service [IaaS] and are designed as scale-up solutions: new converged hardware from the same vendor can be snapped onto the system when needed. The problem with converged infrastructure is that it locks you into hardware choices, does little to solve compatibility issues between different converged offerings and is difficult to scale out. Pre-built converged solutions also come with hefty price tags. You avoid the expense and time of an in-house IT project to assess the compatibility of best-in-breed hardware.
However, you ultimately pay someone else for already having done that.

- Converged Infrastructure [CI] takes networking, storage and CPU hardware pre-tested for compatibility and delivers it as a package
- This can lock you into hardware choices and is expensive for what you get
The Birth of Hyperconvergence
Making things cheaper
Hyperconvergence, or hyperconverged infrastructure [HCI], takes the 'all-in-one-box' concept of converged infrastructure [CI] one step further. These 'nodes' are not simply different pieces of kit packaged together: each is a single piece of hardware that is subdivided virtually to perform compute, network and storage tasks. The HCI market is predicted to grow at a CAGR of 42% between 2016 and 2023, to a market size of over $17 billion*. The drive towards hyperconvergence was mostly spurred by cost. These systems still use proprietary switches for networking, but hyperconverged systems are server SANs: multiple servers on which a hypervisor is installed, with local storage aggregated into a shared pool and delegated virtually. This format uses a portion of each server's CPU and RAM for management, diminishing its capacity when compared with CI in like-for-like testing*. However, the overall cost savings in setup, deployment and maintenance (compared to CI) make it financially reasonable to deploy a few extra nodes to run the same number of VMs. HCI nodes are not only smaller than traditional hardware or CI; they also use less energy to run*. Upfront costs will be higher than traditional hardware, but maintenance and operational costs will be lower. Pivot3, Nutanix, SimpliVity [HPE], VxRail [EMC], VMware, Cisco, Scale Computing and Gridstore are all big players in this space. These products are designed to scale out through the addition of appliance modules.

- Hyperconverged hardware is a single appliance, not just a single chassis
- HCI is:
  - a server SAN
  - a lot cheaper than CI, although the nature of the hardware means it is not as powerful when compared like-for-like
  - more expensive to purchase than traditional infrastructure, but with a smaller data centre footprint, lower energy costs and easier scaling
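The server-SAN idea described above can be made concrete with a toy model. The sketch below is purely illustrative (it is not any vendor's actual API, and the overhead fractions are assumed figures): each node contributes its local disk to one shared pool, while a slice of its CPU and RAM is held back for the management hypervisor.

```python
# Toy model of an HCI "server SAN". Illustrative only; the management
# overhead fractions are assumptions, not vendor-published figures.

MGMT_CPU_FRACTION = 0.10   # assumed share of each node's CPU used by management
MGMT_RAM_FRACTION = 0.15   # assumed share of each node's RAM used by management

class Node:
    def __init__(self, cpu_cores, ram_gb, disk_tb):
        self.cpu_cores = cpu_cores
        self.ram_gb = ram_gb
        self.disk_tb = disk_tb

class Cluster:
    """Aggregates node-local storage into one shared pool."""
    def __init__(self):
        self.nodes = []

    def add_node(self, node):
        # Scale-out: snapping on another appliance grows compute
        # AND storage together, the coupling discussed below.
        self.nodes.append(node)

    def usable_cpu(self):
        return sum(n.cpu_cores * (1 - MGMT_CPU_FRACTION) for n in self.nodes)

    def usable_ram(self):
        return sum(n.ram_gb * (1 - MGMT_RAM_FRACTION) for n in self.nodes)

    def storage_pool(self):
        # All local disks are presented as a single virtual pool.
        return sum(n.disk_tb for n in self.nodes)

cluster = Cluster()
for _ in range(3):
    cluster.add_node(Node(cpu_cores=32, ram_gb=256, disk_tb=20))

print(cluster.storage_pool())   # 60 TB pooled from three nodes
print(cluster.usable_cpu())     # 86.4 cores left after management overhead
```

Note that the only way to grow this cluster is `add_node`, which adds CPU, RAM and disk in fixed proportion; this is exactly why like-for-like comparisons against CI show reduced capacity per box, and why HCI pricing assumes you simply deploy a node or two more.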
Variations on Hyperconvergence
All-in-one vs. options
There are two main variations on hyperconvergence*. Companies such as Nutanix, SimpliVity (HPE) and Scale Computing provide pre-manufactured solutions tailored to different segments of the market. Products such as VMware's vSAN and StoreVirtual (also HPE) let you construct your own hyperconverged system more flexibly. These solutions are where hyperconverged infrastructure and software-defined storage (or hyperconverged architecture) start to really blur together. When building your own network, you can often mix and match hardware, drive configurations and management software. However, this brings back the necessity to troubleshoot compatibility. As with all IT, this is the price you pay for options. These 'build-it-yourself' solutions, however, offer one other very positive feature: the ability to sidestep scaling issues that plague out-of-the-box hyperconverged systems. Ironically, considering scaling is a major selling point of both CI and HCI, some HCI systems do not allow you to upgrade CPU and storage capabilities independently. If a cluster is running low on either, the only solution is to add a new node that is pre-configured to provide both. This can force you to over-provision one or the other in order to meet demand, so it is something to investigate regarding particular products. There have been moves by vendors to completely decouple these capabilities: VMware offers the ability to purchase storage-only nodes, and Datrium provides compute-only and storage-only options when scaling. This has been dubbed 'open convergence', but here we are really straying into vendor-fuelled marketing confusion. To make things even murkier, some vendors insist that decoupled hyperconvergence is actually just converged infrastructure. However, this misses the point when it comes to the type of hardware being used.

Problems for Hyperconverged Infrastructure [HCI]
A drawback to HCI has historically been performance standardisation. Because all resources are pooled, it can be difficult to guarantee the allocation of resources to different workloads. This is particularly relevant when running virtual desktop infrastructure [VDI], something hyperconverged systems are often purchased to handle*. However, this is not necessarily prohibitive: HCI banks on over-provisioning enabled by cost reductions in deployment and maintenance. One challenge to think about, particularly as a legacy organisation already running siloed IT departments, is the difficulty of combining operations that had previously been separate. Ironically, this is a challenge borne of the very problem that converged and hyperconverged systems are designed to solve. However, the political and logistical difficulty should not be overlooked, particularly when planning to carry on using legacy hardware and management tools alongside CI or HCI segments.

- Performance standardisation can be an issue for HCI
- Consider the pragmatic difficulties of changing your existing organisational silos when purchasing HCI or CI products
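The usual answer to the performance-standardisation problem is some form of resource reservation: a workload is only admitted onto the pool if its guaranteed share still fits. The sketch below is a simplified admission-control routine under assumed numbers, not any hypervisor's real scheduler, but it shows why a guarantee-based approach prevents a heavy workload such as a VDI boot storm from starving its neighbours.

```python
# Toy admission control over a pooled HCI cluster. Illustrative only;
# workload names and sizes are invented for the example.

class ResourcePool:
    def __init__(self, cpu_cores, ram_gb):
        self.free_cpu = cpu_cores
        self.free_ram = ram_gb
        self.reservations = {}

    def reserve(self, workload, cpu_cores, ram_gb):
        """Admit a workload only if its guaranteed share still fits."""
        if cpu_cores > self.free_cpu or ram_gb > self.free_ram:
            return False                      # refuse rather than over-commit
        self.free_cpu -= cpu_cores
        self.free_ram -= ram_gb
        self.reservations[workload] = (cpu_cores, ram_gb)
        return True

pool = ResourcePool(cpu_cores=86, ram_gb=650)
assert pool.reserve("vdi-desktops", cpu_cores=60, ram_gb=480)
assert pool.reserve("database", cpu_cores=20, ram_gb=128)
# A third heavy workload is refused: its guarantee no longer fits the pool.
assert not pool.reserve("analytics", cpu_cores=20, ram_gb=64)
```

The trade-off is the one the article describes: refusing admission protects the workloads already running, but it means buying enough nodes to hold every guarantee, which is exactly the over-provisioning that HCI's lower per-node costs are meant to make palatable.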