Hyperconvergence and The Future of The Data Centre

Hyperconvergence

What does Hyperconvergence mean and how is the data centre changing?

Hyperconvergence is the buzzword claiming to bring storage in line with the demands of the 21st century. But, is it all hype? Or will it actually change the way your business operates?

Put simply, hyperconvergence is the collapsing of all storage into one multi-purpose solution — generally focused on hardware. However, there are important differences between hyperconvergence, convergence and the use of ‘architecture’ or ‘infrastructure’ after either word.

Beyond that, when looking at what is really changing in storage and networking, there are two broad trends worth understanding that are related, but distinct. One is specific to hardware and ‘hyperconvergence’, while the other is focused on software and administration control.

This article will stake out definitions around hyperconvergence, convergence and unified/software-defined storage. Particular vendors disagree on these definitions. However, this article will aid your understanding of how the data centre is changing and improve your ability to make future purchasing decisions.  

The Terminology of Hyperconvergence

Staking out space in an industry plagued by buzzwords.

Hyperconvergence should strictly be considered the collapsing of distinct hardware provisions for CPU, networking, memory and storage into a single and interchangeable piece of kit.

However, this hardware development has accompanied the growth of à la carte software-based management tools that unify your command and control capabilities over storage units — virtualized or not.

These two phenomena become easily confused because the ‘hyperconverged’ hardware is software-defined insofar as the designation of different components is done virtually. These hardware platforms also come with software management packages that can be used to manage non-hyperconverged hardware.

Hyperconverged infrastructure [HCI] is a more specific term that directly references the hardware component of hyperconvergence.

Software-defined-storage [SDS] is a more accurate term to describe the trends towards unified and virtual control systems for data storage and the development of storage software that is truly hardware agnostic. This is, however, sometimes called hyperconverged architecture — furthering the confusion.

It is much simpler, however, to think of these two capabilities as distinct trends with their own utility to any given business.

Converged Infrastructure and the Traditional Data Centre

Vendor-led attempts to solve compatibility issues.

Traditionally, the data centre comprises servers, networking and storage components. These pieces of hardware are specialised; they can be purchased separately and then integrated into the network. In theory, this enables maximum flexibility. In practice, it creates compatibility challenges that IT departments spend a lot of time attempting to resolve and manage.

The first true endeavour by a vendor to solve this problem began in 2008 when Oracle, in partnership with HP, announced their ‘Database Machine’. This was a pre-integrated box that converged networking, storage and server components into a single, tested and off-the-shelf solution. The following year, EMC, Cisco and NetApp followed suit.  

These products are known as converged infrastructure [CI]. They are built from traditional data centre hardware. However, they come pre-designed for compatibility within a single pre-built chassis for ease of implementation and management. They are also often augmented with software management tools.  

All the major infrastructure providers now have converged offerings. The converged infrastructure market sat at $11.7 billion in 2016 and is predicted to rise to $76.2 billion by 2025*.
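As a quick sanity check on that forecast, the implied compound annual growth rate follows from the two headline figures. The figures come from the report cited in the sources; the calculation below is just arithmetic:

```python
# Implied CAGR for the converged infrastructure market:
# $11.7bn in 2016 growing to a forecast $76.2bn by 2025.
start_bn, end_bn = 11.7, 76.2
years = 2025 - 2016  # nine years of growth

cagr = (end_bn / start_bn) ** (1 / years) - 1
print(f"Implied CAGR: {cagr:.1%}")  # roughly 23% per year
```

A sustained growth rate in that range is what "collapsing storage into one box" is competing for.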

These solutions can be thought of as proprietary switches attached to traditional disk arrays and servers in a takeaway box. These systems generally operate as infrastructure-as-a-service [IaaS]. They are designed as scale-up solutions. New converged hardware from the same vendor can be snapped onto the system when needed.

The problem with converged infrastructure is that it locks you into hardware choices. It does little to solve compatibility issues between different converged offerings, and it is difficult to scale out. Pre-built converged solutions also come with hefty price tags. You get to avoid the expense and time of an in-house IT project to assess the compatibility of best-in-breed hardware. However, you ultimately pay someone else for already having done that.

  • Converged Infrastructure [CI] takes networking, storage and CPU hardware pre-tested for compatibility and delivers it as a package
  • This can lock you into hardware choices and is expensive for what you get  

The Birth of Hyperconvergence

Making things cheaper

Hyperconvergence — or hyperconverged infrastructure [HCI] — takes the ‘all-in-one-box’ concept of converged infrastructure [CI] one step further. A node is not simply different pieces of kit packaged together; it is a single piece of hardware that is subdivided virtually to perform compute, networking and storage tasks.

It is predicted that the HCI market will grow with a CAGR of 42% between 2016 and 2023 to a market size of over $17 billion*.  

The drive towards hyperconvergence was mostly spurred by cost. These systems still use proprietary switches for networking. However, hyperconverged systems are server SANs — multiple servers on which a hypervisor is installed. Local storage is aggregated into a shared pool and allocated virtually.
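The server-SAN idea can be sketched with a toy model. This is purely illustrative; real HCI platforms do this inside the hypervisor, not in application code. Each node contributes its local disks to a cluster-wide pool, and volumes are carved out of the pool as a whole rather than from any individual node:

```python
# Toy model of a server SAN: each node contributes local storage
# to one shared pool, and volumes draw on the pool as a whole.
class Node:
    def __init__(self, name, local_storage_tb):
        self.name = name
        self.local_storage_tb = local_storage_tb

class ServerSAN:
    def __init__(self, nodes):
        self.nodes = nodes
        self.allocated_tb = 0.0

    @property
    def pool_tb(self):
        # Aggregate capacity across every node's local disks.
        return sum(n.local_storage_tb for n in self.nodes)

    def provision(self, tb):
        # A volume is allocated from the pool, not from a specific node.
        if self.allocated_tb + tb > self.pool_tb:
            raise ValueError("pool exhausted; add a node to scale out")
        self.allocated_tb += tb
        return self.pool_tb - self.allocated_tb  # remaining capacity

san = ServerSAN([Node("node1", 10), Node("node2", 10), Node("node3", 10)])
print(san.provision(24))  # 6.0 TB left in the 30 TB pool
```

Scaling out is then just appending another `Node` to the cluster, which is exactly the appeal of the appliance-module approach described below.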

This format uses a portion of the server’s CPU and RAM for management, diminishing its capacity when compared with CI in like-for-like testing*. However, the overall cost savings in setup, deployment and maintenance (compared to CI) make it financially reasonable to deploy a few extra nodes to run the same number of VMs.  
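A rough way to reason about that trade-off (all figures below are hypothetical, for illustration only): discount each node's raw VM capacity by the management overhead, then see how many extra nodes the same workload needs.

```python
import math

# Hypothetical figures: each node could host 40 VMs on raw capacity,
# but the hypervisor/management layer consumes ~10% of CPU and RAM.
VMS_PER_NODE_RAW = 40
MANAGEMENT_OVERHEAD = 0.10

effective_per_node = VMS_PER_NODE_RAW * (1 - MANAGEMENT_OVERHEAD)  # 36 VMs

vms_required = 500
nodes = math.ceil(vms_required / effective_per_node)
print(nodes)  # 14 nodes, versus the 13 the raw figure would suggest
```

One extra node for every few hundred VMs is the kind of gap that lower per-node setup and maintenance costs can absorb.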

HCI nodes are not only smaller than traditional hardware or CI, but they also use less energy to run*. The upfront costs will be higher than traditional hardware, however, maintenance and operational costs will be less.   

Pivot3, Nutanix, SimpliVity [HPE], VxRail [EMC], VMware, Cisco, Scale Computing and Gridstore are all big players in this space. These products are designed to scale out through the addition of appliance modules.

  • Hyperconverged hardware is a single appliance, not just a single chassis
  • HCI is:
    • a server SAN
    • much cheaper than CI, although the nature of the hardware means it is not as powerful when compared like-for-like
    • more expensive to purchase than traditional infrastructure, but has a smaller data centre footprint, lower energy costs and is easier to scale

Variations on Hyperconvergence

All-in-one vs. options

There are two main variations on hyperconvergence*. Companies such as Nutanix, SimpliVity (HPE) and Scale Computing provide pre-manufactured solutions that are tailored to different segments of the market. Products such as VMware’s vSAN and StoreVirtual (also HPE) provide the ability to construct your own hyperconverged system more flexibly.

These solutions are where hyperconverged infrastructure and software-defined-storage (or hyperconverged architecture) start to really blur together. When building your own network, you can often mix and match hardware, drive configurations and management software. However, this does bring back the necessity to troubleshoot compatibility. Like with all IT, this is the price you pay for options.   

These ‘build-it-yourself’ solutions, however, offer one other very positive feature — the ability to sidestep scaling issues that plague out-of-the-box hyperconverged systems.

Ironically, considering scaling is a major selling point of both CI and HCI, some HCI systems do not allow you to upgrade CPU and storage capabilities independently. If a cluster is running low on either, the only solution is to add a new node that is pre-configured to provide both. This can force you to over-provision one or the other in order to meet demand. This is something you need to investigate for particular products.
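The over-provisioning effect is easy to quantify with a hypothetical node spec (the figures are invented for illustration): when CPU and storage can only be added together, the resource you need less of arrives with every node anyway.

```python
import math

# Hypothetical fixed node spec: every node added brings both resources.
CORES_PER_NODE = 32
STORAGE_TB_PER_NODE = 20

def nodes_needed(cores_required, storage_tb_required):
    # Coupled scaling: must satisfy whichever demand needs more nodes.
    by_cpu = math.ceil(cores_required / CORES_PER_NODE)
    by_storage = math.ceil(storage_tb_required / STORAGE_TB_PER_NODE)
    return max(by_cpu, by_storage)

# Storage-heavy workload: 64 cores but 200 TB of capacity needed.
n = nodes_needed(64, 200)
surplus_cores = n * CORES_PER_NODE - 64
print(n, surplus_cores)  # 10 nodes, 256 cores over-provisioned
```

Decoupled (storage-only or compute-only) nodes, discussed next, exist precisely to eliminate that surplus.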

There have been moves by vendors to completely decouple these capabilities. VMware offers the ability to purchase storage-only nodes, and Datrium provides compute-only and storage-only options when scaling. This has been dubbed ‘open-convergence’. But, this is really starting to stray into vendor-fuelled marketing confusion.

To make things even murkier, some vendors insist that decoupled hyperconvergence is actually just converged infrastructure. However, this is missing the point when it comes to the type of hardware being used.  

Problems for Hyperconverged Infrastructure [HCI]

A drawback to HCI has historically been performance standardisation. Because all resources are pooled, it can be difficult to guarantee the allocation of resources to different workloads. This is particularly relevant when running virtual-desktop-infrastructure [VDI], something hyperconverged systems are often purchased to handle*. However, this is not necessarily prohibitive. HCI banks on over-provisioning enabled by cost reductions for deployment and maintenance.

One challenge to think about, particularly as a legacy organisation already running siloed IT departments, is the difficulty of combining operations that had previously been separate. Ironically, this is a challenge borne out of the very problem that converged and hyperconverged systems are designed to solve. However, the political and logistical difficulty of this should not be overlooked — particularly when planning to carry on using legacy hardware and management tools in addition to CI or HCI segments.      

  • Performance standardisation can be an issue for HCI
  • Consider the pragmatic difficulties of changing your existing organisational silos when purchasing HCI or CI products

Software Solution and the Hyperconverged Data Centre     

The real driver of change is proprietary software that enables the easy management of storage, the reprovisioning of hardware and single-point control over a multitude of systems.

A decade ago, a whole company could be founded on providing quality snapshots, compression, deduplication, data protection or hybrid/tiered storage solutions. These have broadly become minimum barriers to entry in the modern market. This applies to the hyperconverged, converged and traditional data centre products.

Companies like Nimble, NetApp and EMC provide their software services independently of hardware purchases, and there is a move across the industry to accommodate a multitude of hardware choices under a single command and control system. In a way, all the moves to unify hardware have been accompanied by software changes that make it easier than ever to mix and match, as in the traditional legacy approach.

This change aims to create truly hardware-agnostic systems that can preside over different pieces of hardware and network configurations.

The problem that has arisen for the SAN in particular, over recent decades, is connectivity across distances and an explosion of different ad hoc storage devices installed to accommodate the volume of data generated by digitisation and the random I/O patterns of VDI.

Administrators have ended up managing multiple SANs, a NAS and server-side DAS. Software-defined-storage [SDS] is as much about unifying networks as hardware.

All of these storage innovations are sometimes called hyperscaling — a system optimised to use any and all technology to maximise capacity*.

Hyperscaling also integrates the other disruptor — the cloud. This is the true replacement for the data centre. Many SAN and NAS providers operate hybrid-cloud options. The primary issues with cloud storage, however, are reliability, speed and bandwidth availability. What the software changes have done is make it a lot easier to tie multiple storage solutions together in a unified platform. This can include or exclude hyperconverged and converged infrastructure.

SUMMARY: Hyperconvergence and Converged Infrastructure Can Help You, But Make Sure They Don’t Become Silos unto Themselves

The data centre is changing, and hyperconvergence is part of that evolution. However, the changes happening go beyond hardware. IT shops have altered the hardware and networking they use because the way we use the data centre has changed.

Hyperconverged infrastructure [HCI] has a role in solving these challenges, not least because it offers the ability to quickly and easily scale out. Hyperconvergence also has direct roots in trying to accommodate virtual-desktop-infrastructure [VDI]. The speed of scaling offered by HCI is a significant attraction to companies operating VMs.

The evolution of HCI from converged infrastructure was an obvious path, and it offers cost saving when looking for infrastructure pre-tested for compatibility. The problem for any business integrating these solutions into an existing storage environment is that you are unlikely to replace everything you have in one fell swoop. This will create multiple environments you must align to obtain the truly unified system both HCI and CI promise.

Software-defined-storage is a solution that claims to solve this problem, among others. Picking a single proprietary piece of software to manage disparate hardware provisions moves the one permanent choice away from hardware and to administration. This is a powerful option most businesses should pursue.

Although HCI is not perfect, there are obvious foreshadowings of the future of storage within the concept. The major flaw with anything ‘off-the-shelf’ is that there are often redundancies or inefficiencies in the design, particularly when compared to what you will actually use it for. However, as technology improves and costs decrease, this becomes an easily accommodated problem that brings advantages in convenience.

Think about how personal computers and mobile technology work. If you are willing to buy all of the parts yourself, you could build a laptop (or, more realistically, a desktop) from scratch for less money, optimised for how you want to use it. But, that kind of approach is thoroughly relegated to enthusiasts. It simply isn’t worth the effort when you can buy a generalised solution.

That is the direction the data centre is headed — generalised and scalable hardware optimised for nothing but suited for everything*. That is hyperconvergence in a nutshell. Getting the administration right with software-defined-storage is the piece needed to make it actually convenient.        

Sources:

* Converged Infrastructure Market (Components – Server, Storage, Networking, Software, and Services; Architecture Type – Pre-configured and Customized; End-use Industry – BFSI, Telecommunication and IT, Manufacturing, and Healthcare) – Global Industry Analysis Size Share Growth Trends and Forecast 2017 – 2025
* Hyper-Converged Infrastructure (HCI) – Global Market Outlook (2017-2023)
* Hyperconverged Infrastructure 101: A short primer about HCI
* What is hyperconvergence?
* Hyper-converged systems: What you need to know about this hot virtualization topic
* Hyper convergence
* hyperscale storage
* SDI wars: WTF is software defined infrastructure

Rob Townsend

Rob is a co-founder at Nexstor and has dedicated his career to helping a range of organisations from SME to Enterprise to get ahead of the game when it comes to their compute, storage and data needs.
