HPE GreenLake Releases Alletra MP, A New Storage Architecture
Earlier this year, HPE announced its new “store, manage, protect” strategy. Part of that announcement was the release of HPE GreenLake for Block Storage and HPE GreenLake for File Storage, both powered by a newly developed HPE storage architecture: Alletra MP. HPE's storage leadership regards this architecture as revolutionary for the market, combining a disaggregated design that scales performance and capacity independently, cloud-born management, and multiprotocol platform personalities.
With Nexstor positioned as one of HPE's leading storage partners, we were keen to become early adopters. Utilising this strong relationship, we became the first UK HPE partner authorised to receive an Alletra MP demo system designed for HPE GreenLake for Block. We have written the following short blog series to share our insights and findings as a show and tell.
What Is HPE GreenLake For Block Storage?
To answer this, I think the best place to start is to explain what HPE GreenLake is. The messaging has evolved over the last few years; however, the simple answer is that it's a single platform experience for all HPE products, solutions and services, and the much-desired single destination for new device onboarding, asset management, configuration and maintenance.
This cloud experience combines all routine and non-routine tasks in one MFA-secured platform for any solution consumed in any way, be it paid upfront, rental, subscription or flexible consumption. The HPE GreenLake platform provides the functionality to run the initial setup wizards and, potentially, the orchestration of the solution, limiting the time and effort required for crucial elements of the physical installation process. With the management capabilities of HPE GreenLake, the platform also enables the following:
⦁ Review capacity, health and performance metrics
⦁ Add, change and delete configuration items
HPE's 'Journey to One' vision is to consolidate its various websites and portals into two locations: support.hpe.com and cloud.hpe.com.
Alletra MP for Block Explained
Alletra MP, the latest storage hardware platform released under HPE Alletra, powers HPE GreenLake for Block Storage. While the MP stands for multiprotocol, remember that this isn't unified storage: the Alletra MP cannot run both Block and File on the same system. Instead, Alletra MP comes in two flavours (or personalities): Block or File. The two flavours are based on the same hardware and architecture; the only differences are the OS each ships with and, potentially, the connectivity media. So Alletra MP can technically be considered multiprotocol because it can be either Block or File. What's especially valuable about this platform is its simplified, standardised “Lego block” architecture and how it enables significantly faster delivery times, addressing the long hardware lead times the IT industry has been grappling with of late.
Here's how it works. UK channel partners can hold stock of Alletra MP base units. When a solution needs to be shipped to a site, the personality matching the solution type, Block or File, is loaded onto the hardware before it ships. This gives HPE customers a far simpler yet more dynamic mechanism for expanding or deploying storage, whether it's block (iSCSI or FC) or file (NFS, SMB or S3): a way of working that hasn't been seen before.

Following the success of HPE Alletra dHCI, the term 'disaggregated' has resonated strongly with HPE customers. With this new storage architecture, HPE goes a step further with a disaggregated storage infrastructure that separates an array's storage capacity from its storage compute resources. These disaggregated 'nodes' can be tailored to each customer's use and combined into a logical storage pool, with storage resources provisioned to specific server instances.

And the Alletra MP isn't just a storage array. It's an entirely new architecture based on a scale-out disaggregated design utilising an NVMe storage fabric. Its building blocks are entirely modular, ranging from a single standalone controller system, through a controller system with directly attached JBOFs (Just a Bunch of Flash), up to a RoCEv2 (RDMA over Converged Ethernet) switched fabric with multiple controller systems and JBOFs all working in unison to service storage traffic.

What's great about this is that we can start with a minimal single-node system and quickly scale capacity and performance. Resiliency is increased because data is distributed and accessible by many constituent parts; if one node is lost, the system can find another path to its data. Management is streamlined because the entire system can be addressed as a whole, and resources can be added, upgraded or replaced without disruption.

Let's dive further into what HPE's disaggregated storage architecture has to offer.
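First, to make the independent-scaling idea concrete, here's a minimal Python sketch of a pool in which compute (controller nodes) and capacity (JBOFs) grow separately. Every class and method name here is my own illustration of the concept, not anything from HPE's software:

```python
from dataclasses import dataclass, field

@dataclass
class ControllerNode:
    """Compute building block: processes I/O but holds no user data."""
    cores: int

@dataclass
class Jbof:
    """Capacity building block: just a bunch of flash drives (sizes in TB)."""
    drives_tb: list

@dataclass
class DisaggregatedPool:
    controllers: list = field(default_factory=list)
    jbofs: list = field(default_factory=list)

    def add_performance(self, cores):
        """Scale compute without touching capacity."""
        self.controllers.append(ControllerNode(cores))

    def add_capacity(self, drives_tb):
        """Scale capacity without touching compute."""
        self.jbofs.append(Jbof(drives_tb))

    @property
    def raw_tb(self):
        return sum(sum(j.drives_tb) for j in self.jbofs)

    @property
    def total_cores(self):
        return sum(c.cores for c in self.controllers)

# Start minimal, then grow either dimension independently.
pool = DisaggregatedPool()
pool.add_performance(cores=16)       # one controller node
pool.add_capacity([15.36] * 24)      # one fully populated JBOF shelf
pool.add_capacity([15.36] * 24)      # more capacity, same compute
print(f"{pool.total_cores} cores serving {pool.raw_tb:.1f} TB raw")
```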
Robust Controller Node Hardware
The C-node chassis is essentially a two-controller, active-active, all-NVMe system in a single 2U chassis. It currently ships with eight- or 16-core processors, depending on the performance requirement.

The architecture is designed with performance and availability in mind, built around an “all-active” design in which all components engage in I/O processing. Multiple layers of abstraction ensure data is laid out on the physical NVMe drives to maximise efficiency in data migration, snapshotting, performance and wear. The all-active design favours system-wide striping and automatically stripes volumes across all system resources, delivering predictably high levels of performance.

Each Alletra MP storage enclosure can support up to 24 dual-ported NVMe drives and four OCP slots, up to two of which can be used for host I/O and two for southbound storage fabric connectivity (expansion enclosures). Currently, the Alletra MP for Block supports the Fibre Channel and NVMe/FC protocols, simultaneously.

Each controller can be equipped with an eight- or 16-core processor and 256GB of RAM, and is connected to its partner via 25Gb cluster interlinks using a low-overhead RDMA protocol. The chassis has eight populated DIMM slots and two M.2 boot devices.
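As a small aside on what “system-wide striping” means, here's a toy Python illustration of a volume being split into chunks and laid out round-robin across all 24 drives, so every drive shares the load. The chunk size and layout scheme are invented for illustration; they are not HPE's actual values or algorithm:

```python
# Toy system-wide striping: a volume is split into fixed-size chunks
# placed round-robin across every drive, so all 24 drives (and both
# controllers) share the I/O. Values below are assumptions of mine.
DRIVES = 24
CHUNK_MB = 256  # assumed chunk granularity

def stripe_layout(volume_gb):
    """Return {drive_index: chunk_count} for a round-robin layout."""
    chunks = (volume_gb * 1024) // CHUNK_MB
    layout = {d: 0 for d in range(DRIVES)}
    for c in range(chunks):
        layout[c % DRIVES] += 1
    return layout

layout = stripe_layout(volume_gb=100)   # 100GB volume -> 400 chunks
# Near-even spread: every drive holds 16 or 17 chunks.
print(min(layout.values()), max(layout.values()))
```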
An Enhanced Installation Experience
I've installed plenty of HPE storage arrays over the years, mostly HPE Nimble and Alletra arrays, among others, so I had high hopes that the installation experience would be good. More recently, HPE arrays must be registered in the HPE GreenLake DSCC (Data Services Cloud Console) as a prerequisite to running the configuration wizard, and the Alletra MP is no different.

Once the array is registered in DSCC, we have the option of running the initial setup either via the HPE discovery tool, a small utility you run on a workstation to discover arrays on the same LAN segment (very similar to the previous HPE Storage Setup Manager, if you have used that before), or via the Bluetooth-based HPE Storage Connection mobile app (which runs on iOS or Android). I opted for the Bluetooth option, which involves connecting the Bluetooth dongle to the array and scanning for devices in the HPE Storage Connection app. I must admit I didn't expect this to go smoothly, but the process was actually a dream. Once the app has discovered the unconfigured array, you simply enter the network details you want to configure the array with, select deploy, and off it goes: it sets up the management configuration, contacts the DSCC, confirms that the array is registered, and that's that.
Next, we had to return to the DSCC and finish the initial setup; this was nice and straightforward. Run through the six steps and you're done:
⦁ Welcome: Provide some background information to get your setup started.
⦁ Domain
⦁ Time
⦁ Attributes
⦁ System
⦁ Review and Finalise
From this point, you can either log in to the array GUI or use the DSCC to manage the array.
I opted to use the local GUI on this occasion, more out of habit than for any other reason.
An excellent tutorial built into the GUI then takes you through the storage configuration steps, though it's really just a case of defining the host sets, creating the application sets, and then the volumes themselves.
It's also important to note that there is no need to create and manage RAID sets or disk groups (or whatever terminology you prefer); the system automatically sets up and manages the data distribution. We simply define the application sets (consistency groups) and the volumes that make up each consistency group.
Creating volumes has a familiar feel if you've used previous HPE storage products, with the concept of “application sets”: you define what the application set will be used to store, and the system tunes the volumes based on that intention. Then you create and add the volumes, which is simply a case of setting the size and choosing whether to enable data reduction (data reduction being both inline deduplication and compression).
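For anyone who would rather script this than click through the GUI, the shape of the request looks something like the sketch below. To be clear, the endpoint path, field names and host are placeholders I've made up to illustrate an intent-style volume creation (application set, size, data reduction flag); they are not the real DSCC or array API:

```python
import requests

ARRAY = "https://array.example.local"  # placeholder management address
TOKEN = "REPLACE-ME"                   # assume a session token already obtained

# Hypothetical payload: one volume in a "sql-prod" application set,
# 500GiB, with inline deduplication and compression enabled.
payload = {
    "applicationSet": "sql-prod",
    "appType": "Microsoft SQL Server",
    "name": "sql-data-01",
    "sizeGiB": 500,
    "dataReduction": True,
}

resp = requests.post(
    f"{ARRAY}/api/v1/volumes",  # invented endpoint, for shape only
    json=payload,
    headers={"Authorization": f"Bearer {TOKEN}"},
    timeout=30,
)
resp.raise_for_status()
print(resp.json())
```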
Snapshot and replication scheduling can be defined at this time, too. It's possible to configure a layered approach to snapshotting, with multiple schedules running on the same application set to achieve a grandfather-father-son type arrangement, if required.
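To illustrate what a layered grandfather-father-son arrangement works out to in practice, here's a small Python sketch that computes which snapshots a three-schedule policy would retain. The retention counts are example values of mine, not Alletra MP defaults:

```python
from datetime import date, timedelta

# Example GFS policy: keep 7 daily "sons", 4 weekly "fathers" and
# 12 monthly "grandfathers". Counts are illustrative only.
def gfs_keep(snap_dates):
    keep = set()
    snaps = sorted(snap_dates, reverse=True)            # newest first
    keep.update(snaps[:7])                              # last 7 dailies
    weeklies = [d for d in snaps if d.weekday() == 6]   # Sunday snaps
    keep.update(weeklies[:4])                           # last 4 weeklies
    monthlies = [d for d in snaps if d.day == 1]        # 1st-of-month snaps
    keep.update(monthlies[:12])                         # last 12 monthlies
    return keep

today = date(2023, 9, 1)
year_of_snaps = [today - timedelta(days=i) for i in range(365)]
print(f"{len(gfs_keep(year_of_snaps))} of 365 snapshots retained")
```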
Apart from any Ethernet or Fibre Channel switch configuration needed to support the installation, that was that: the system was set up and storage configured and presented to hosts. In this instance, I did have to log into vSphere and rescan storage before creating a VMFS volume.
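If you prefer to script that last vSphere step, a pyVmomi sketch along these lines would do the rescan. The vCenter address and credentials are placeholders, and this assumes network access to vCenter from wherever it runs:

```python
import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

# Placeholder connection details.
ctx = ssl._create_unverified_context()
si = SmartConnect(host="vcenter.example.local",
                  user="administrator@vsphere.local",
                  pwd="********", sslContext=ctx)
try:
    content = si.RetrieveContent()
    view = content.viewManager.CreateContainerView(
        content.rootFolder, [vim.HostSystem], True)
    for host in view.view:
        storage = host.configManager.storageSystem
        storage.RescanAllHba()   # pick up newly presented LUNs
        storage.RescanVmfs()     # refresh VMFS datastores
        print(f"rescanned {host.name}")
    view.DestroyView()
finally:
    Disconnect(si)
```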
One final note: the Alletra MP disk inquiry string is displayed as a “3PARdata Fibre Channel Disk”, which can potentially be a little confusing if you are not expecting it, especially if you already have 3PAR storage in your environment!
On-System GUI
At the time of writing, certain tasks still require the on-system GUI.
The first thing that strikes me about the on-system GUI (local web interface) is how straightforward and user-friendly it is. All of the high-level status information you’d expect to see is available right on the dashboard page.
For example, most objects on the dashboard can be selected to view more detailed information, such as data trending and more granular system status.
With that said, I was slightly disappointed that there isn’t a real-time performance monitoring page; perhaps that’s something for the future.
Encryption at rest is configurable using either local key or external key management. The system supports local and LDAP-authenticated users, covering AD, OpenLDAP and Red Hat Directory Server.
I won’t spend more time on this because, as I said, the preferred management method is via the DSCC.
GreenLake Data Services Cloud Console (DSCC)
DSCC is part of the GreenLake application suite, alongside Aruba Central, Compute Ops Management and HPE GreenLake Central. DSCC allows us to manage our Alletra storage arrays from the GreenLake platform. This isn't new or unique to the Alletra MP, as it's available across most of the HPE Alletra storage range (5k, 6k and 9k). However, it is an essential piece of the solution and is regularly updated with new features.
The DSCC includes several tiles from which you can access various functionality. The two tiles relevant to the Alletra MP for Block are “Data Ops Manager” and “Block Storage”. I’ll quickly go through each in turn so you get a feel for what each one does.
Data Ops Manager
The Data Ops Manager is the place to be for all things data. It's an area where you can view information on your systems, configure data access groups, and link into the Block Storage and File Storage areas.
The dashboard gives you a good overview of what's going on in your storage environment. It's a place to view insights into your systems, such as IOPS, latency, average block size, system headroom and much more. The headroom section is a particularly excellent addition, letting administrators see how much more load can be added to a system before it hits its ceiling.
The various views allow you to look at your systems over pretty much any time frame you like, so you can see trends over time, which helps with capacity planning.
I also like that the performance metrics can be viewed per host and per host port. This could be useful when diagnosing performance-related issues.
Block Storage
The Block Storage section is where you can view and provision storage. Our demo GreenLake environment has two storage arrays: an Alletra 6030 all-flash array and the new Alletra MP for Block array.
This is where DSCC comes into particular use. When you have multiple storage arrays that you need to manage — whether on the same site or distributed across the globe — you can log into your own instance of DSCC and start provisioning. There’s no need to hop around various VPNs and/or user interfaces to access a particular array you wish to work with; it’s all available under the DSCC.
Volumes are created using the concept of “intent-based provisioning”: the platform helps you place volumes on the appropriate system based on the application type and the expected performance requirement. The system quickly checks that you have selected a host group that is visible to the array and that the array has enough capacity. Additional options include Quality of Service at the volume-set level, with five levels from Low through to High, which kicks in when there is contention.
The best feature of intent-based provisioning is that it draws on the system headroom statistics. It recommends the best fit for workload placement, taking into account the available headroom across each system, then simulates the application workload pattern to estimate the headroom that would remain after placement.
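To give a feel for the gist of that logic, here's a toy Python sketch that picks whichever array would have the most headroom left after absorbing a new workload. The numbers and the naive subtract-a-cost “simulation” are my own simplifications; HPE's actual modelling of workload patterns is, of course, far more sophisticated:

```python
from dataclasses import dataclass

@dataclass
class Array:
    name: str
    headroom_pct: float  # remaining performance headroom, 0-100

def simulate_after_placement(arr, workload_cost_pct):
    """Naive stand-in for workload simulation: subtract an estimated cost."""
    return arr.headroom_pct - workload_cost_pct

def best_fit(arrays, workload_cost_pct):
    """Pick the array with the most headroom remaining after placement."""
    viable = [a for a in arrays
              if simulate_after_placement(a, workload_cost_pct) > 0]
    if not viable:
        raise RuntimeError("no array has enough headroom for this workload")
    return max(viable,
               key=lambda a: simulate_after_placement(a, workload_cost_pct))

fleet = [Array("alletra-6030", 22.0), Array("alletra-mp", 61.0)]
choice = best_fit(fleet, workload_cost_pct=15.0)
print(f"place workload on {choice.name}")   # -> alletra-mp
```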
Personally, I think this is very cool. Good work, HPE!
So, to summarise, it’s an easy-to-use interface with some powerful features. I really like it and am looking forward to seeing what functionality is added in the future.
A Final Note On Performance
I did carry out several performance tests, just because that’s what you do when you’ve got a new toy, right? You see how fast it can go.
HPE provides performance data in its sizing tools, based on various metrics and system configurations, so I based my tests on those metrics. I won't document the results here, as they are unofficial, but I'd summarise by saying that the results I saw improved on the official figures.
I was testing on our small loan system, but the GreenLake for Block system is so modular and scalable that a configuration can be matched to most use cases and workloads.
Just as our loan unit arrived at Nexstor, HPE announced the latest release of GreenLake for Block Storage to partners. GreenLake for Block is being released in a controlled, staged manner. Release 1 included support for 8-core or 16-core controller nodes with FC or FC/NVMe host ports and no option for JBOF expansion chassis.
Release 2 builds on that and now supports up to 32-core controller nodes with up to two directly connected JBOF expansion chassis.
Support for iSCSI has been added in this release, meaning we now have the following options for host connectivity:
⦁ 32/64Gb FC or FC/NVMe (one or two quad-port HBAs per controller)
⦁ 10/25GbE iSCSI (two ports per controller)
⦁ 100GbE dual-port HBA for backend connectivity to JBOFs
JBOF Hardware
The JBOFs are 2U expansion enclosures, each fitted with two JBOF nodes (controllers) running eight-core processors, 64GB of RAM and a dual-port 100GbE HBA for internode connectivity. Each enclosure accommodates up to 24 NVMe drives, the same as the controller enclosures, in 1.92, 3.84, 7.68 or 15.36TB sizes.
The switched architecture isn’t here yet; however, that shouldn’t be too far away.
In this latest release, the maximum effective capacity for a system is 1.79PiB (based on a 3:1 deduplication ratio). This will increase over the coming months as further releases are staged. Bear in mind, too, that this figure is based on enclosure-level availability, ensuring that a single enclosure failure does not result in data loss.
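As a back-of-the-envelope sanity check on how an “effective” figure like that decomposes, the quick calculation below works from the drive specs above and the quoted 3:1 reduction ratio. Everything else (RAID and sparing overhead, usable fraction) is unknown to me and deliberately left out:

```python
TIB_PER_TB = 1000**4 / 1024**4   # decimal TB -> binary TiB

# One enclosure fully populated with the largest drives.
raw_tb = 24 * 15.36              # 368.64 TB raw per enclosure
raw_tib = raw_tb * TIB_PER_TB    # ~335.3 TiB

# Work backwards from the quoted system maximum.
effective_pib = 1.79             # quoted max effective capacity
dedupe_ratio = 3.0               # quoted reduction assumption
usable_pib = effective_pib / dedupe_ratio   # ~0.60 PiB

print(f"{raw_tib:.1f} TiB raw per max-size enclosure")
print(f"{usable_pib:.2f} PiB usable implied before 3:1 reduction")
```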
I’ve added a screenshot below showing the current performance and capacity maximums for reference.
I won’t list out all the possible configurations and performance permutations; however, to give you an idea of equivalent storage systems in the current HPE portfolio:
⦁ 8-core 2-node Alletra MP = 3PAR 2-node 7200 through to 2-node 8400
⦁ 16-core 2-node Alletra MP = 3PAR 4-node 7400 through to 2-node 8450, Primera 2-node A630
⦁ 32-core 2-node Alletra MP = 3PAR 4-node 8450, 3PAR 2-node 9450, 3PAR 2-node 20K, Primera 2-node A650, Primera 2-node A670
As I keep saying, I expect to see further enhancements in performance and scale into next year.
Replication And Migration Interoperability
The Alletra MP for Block offers data migration interoperability from HPE's 3PAR, Primera and Alletra 9000 arrays using Peer Motion, i.e. system-to-system migration without a third-party tool. This will be useful when replacing those arrays with the Alletra MP. Currently, the only supported transport method is Fibre Channel.
HPE Alletra MP For Block: More To Come…
So, this brings my short blog on the HPE Alletra MP for Block to an end. I will say that I've barely scratched the surface of what it's capable of. One thing is for sure: I believe this will be an epic product, and I'm looking forward to seeing how it develops.
For more information, please do not hesitate to contact a friendly member of our team by clicking here.