A Hyperconverged Head-to-Head
Hyperconvergence has held buzzword status for a number of years. Put simply, it is a software-defined approach to storage management that combines storage, compute, virtualisation and sometimes networking technologies in one physical unit managed as a single system. But are all hyperconverged systems created equal? Most hyperconvergence companies claim that their product will solve storage needs simply, breaking down silos and offering straightforward scaling, but if that were the case, it wouldn't really matter which one we chose. In short: no, the systems are not all the same. The answer is a little more complicated than that, though, with one of the biggest problems being a lack of consistency in how vendors talk about HCI software. There are several other terms for it, including:
- Hyperconverged architecture
- ‘converged infrastructure’ (or CI)
- HCIS — hyperconverged integrated systems
SimpliVity
- Founded in 2008
- An early independent leader in hyperconverged infrastructure
- Acquired by HPE in 2017
- Now the leading hyperconverged solution within the wider HPE storage lineup, with a feature set that includes:
- WAN optimisation
- VM management
- data protection
- cloud integration
- deduplication
- compression
- backup and caching within a scale-out architecture
NetApp
- Founded in the early 1990s
- Led the way in NAS technology
- A long-time dominant force in enterprise storage
- Now the second largest vendor in external enterprise storage — $890 million in 1Q18 revenue.
Hyperconvergence: What Is It And Why Do We Need It?
The data centre has traditionally been made up of separate server, network and storage components. These pieces of hardware are specialised and can be purchased separately, which gives buyers flexibility in how they build out their environment. As you can imagine, though, this also creates compatibility challenges. Since the late 2000s, storage vendors have sold 'pre-integrated' boxes that converged network storage and servers into a single solution that can be used straight off the shelf. This is what we came to know as CI, or converged infrastructure.

HCI goes a step further: rather than packaging separately purchased pieces of kit together, it combines all of these nodes under one smart piece of software, delivered as a single complete package that is virtually subdivided. Why? In short, it is much cheaper to build hardware this way than to assemble it from numerous parts, and HCI hardware is also smaller and uses less energy, so upfront costs are generally lower than for traditional hardware and you save on maintenance and operational costs as well. The trade-off is that a portion of each server's CPU and RAM is used to manage the system, slightly reducing its capacity compared with CI in like-for-like testing. Regardless, the cost savings generally make it a worthwhile swap.
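To make that controller overhead concrete, here is a minimal back-of-the-envelope sketch in Python. The node size and the 10% CPU / 32 GB RAM overhead figures are purely illustrative assumptions, not figures from either vendor; real overhead varies by product and configuration.

```python
# Rough comparison of usable capacity on a CI node vs. an HCI node.
# All figures are illustrative assumptions, not vendor specifications.

RAW_CORES = 32           # physical cores per node (assumed)
RAW_RAM_GB = 512         # RAM per node in GB (assumed)

# Assumed overhead of the storage/management software that runs on each HCI node.
HCI_CPU_OVERHEAD = 0.10      # ~10% of cores reserved for the controller (assumed)
HCI_RAM_OVERHEAD_GB = 32     # RAM reserved for the controller service (assumed)

def usable(cores, ram_gb, cpu_overhead=0.0, ram_overhead_gb=0.0):
    """Return (usable_cores, usable_ram_gb) after management overhead."""
    return cores * (1 - cpu_overhead), ram_gb - ram_overhead_gb

# CI: storage is handled by a separate array, so the node carries no local overhead.
ci_cores, ci_ram = usable(RAW_CORES, RAW_RAM_GB)
# HCI: the storage controller runs on the node itself.
hci_cores, hci_ram = usable(RAW_CORES, RAW_RAM_GB,
                            HCI_CPU_OVERHEAD, HCI_RAM_OVERHEAD_GB)

print(f"CI node : {ci_cores:.1f} cores, {ci_ram:.0f} GB RAM usable")
print(f"HCI node: {hci_cores:.1f} cores, {hci_ram:.0f} GB RAM usable")
```

With these assumed numbers the HCI node presents slightly less capacity to workloads than an identical CI node, which is the like-for-like difference described above.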
Why NetApp Isn't 'Technically' Hyperconverged
NetApp HCI is built from industry-standard, high-density chassis, each housing four server nodes in two rack units. This design delivers a simple hardware foundation for a simple, dynamic, scale-out storage experience, which is what HCI is all about. However, rather than using a hypervisor to run the requisite storage management processes, NetApp dedicates specific nodes to act as servers running its SolidFire OS. This segregation has led some to call the solution 'converged infrastructure', or even 'disaggregated software-defined architecture'. The reality, however, is that the offering delivers an outcome nearly identical to that of 'true' HCI; it just gets there in a unique way. Some would even say that NetApp's unique approach is advantageous, and the line between what is HCI and what is CI is blurring in other regards as well.
SimpliVity vs. NetApp: Do Their Differences Matter?
These days, hyperconverged vendors are selling more flexible solutions than ever by breaking their integrated blocks apart. Although HCI grew out of a desire to simplify scaling, it initially brought limitations of its own. Different HCI nodes come with different provisions for storage and compute, and traditional nodes, which bundle at least some of both functions, can force buyers to overprovision one resource when they only need to expand the other. The first answer was to create tailored packages designed to accommodate different types of users, but the industry has steadily progressed towards the more customisable solution of offering storage-only nodes, something that SimpliVity now provides. Effectively, this brings us back to where we started: converged infrastructure and NetApp's hybrid solution. The difference is that NetApp also allows you to scale compute without having to buy any additional storage, at the cost of some refusing to call the solution 'true' hyperconvergence. The reality is that, whatever you want to call them, NetApp HCI and SimpliVity are very comparable products that compete for the same customers in spite of their differences.
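As a rough illustration of the overprovisioning problem, the sketch below compares how many fixed-ratio nodes you would need against a mix of compute-only and storage-only nodes for the same demand. The node capacities and demand figures are invented for the example and do not correspond to any vendor's actual SKUs.

```python
import math

# Illustrative node capacities (invented figures, not vendor SKUs).
NODE_COMPUTE = 32      # vCPUs per combined compute-and-storage node
NODE_STORAGE = 20      # TB of usable storage per combined node

# Hypothetical demand: lots of storage, modest compute.
need_compute = 64      # vCPUs required
need_storage = 200     # TB required

# Fixed-ratio nodes: buy enough to satisfy whichever resource needs more nodes.
combined_nodes = max(math.ceil(need_compute / NODE_COMPUTE),
                     math.ceil(need_storage / NODE_STORAGE))
wasted_vcpus = combined_nodes * NODE_COMPUTE - need_compute

# Disaggregated model: scale compute and storage independently.
compute_nodes = math.ceil(need_compute / NODE_COMPUTE)
storage_nodes = math.ceil(need_storage / NODE_STORAGE)

print(f"Fixed-ratio nodes needed: {combined_nodes} "
      f"(overprovisioned by {wasted_vcpus} unused vCPUs)")
print(f"Disaggregated mix       : {compute_nodes} compute + {storage_nodes} storage nodes")
```

In this invented scenario the fixed-ratio approach forces you to buy far more compute than you need just to reach the storage target, which is exactly the pressure that storage-only and compute-only nodes are meant to relieve.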
Conclusion: Which One Should You Use?
Looking at what you already have available to you is probably the easiest way to decide whether you should adopt SimpliVity or NetApp. Both have their pros and cons, and both are unique and similar in various ways. Remember that although hyperconvergence is about easy scaling and breaking down silos, hyperconverged storage segments can very much end up as silos of their own; it is only within the overall hyperconverged system that they are simple to scale and integrate. Whichever new piece of hardware you choose, integrating it will be just as tough as integrating any other new piece of kit. The biggest benefits of NetApp come from looking at the bigger picture and considering the offering as a whole. NetApp HCI sits within a larger, non-hyperconverged storage family and can easily integrate with other SolidFire products. NetApp also offers ONTAP, its other software solution, which can integrate with almost any third-party hardware using FlexArray Virtualisation. If you are already invested in that ecosystem, NetApp HCI is the obvious choice. In short, NetApp is a fully integrated system that:
- Delivers greater cost efficiency and agility
- Increases visibility and control
- Simplifies management
- Delivers predictable performance
- Streamlines storage provisioning with on-demand thin provisioning
- Provides space-saving FlexClone copies for development, testing, rapid virtual machine deployment and disaster recovery