Hewlett Packard Enterprise (HPE) made some exciting announcements at Discover Barcelona 2023, the tech vendor's edge-to-cloud conference. HPE hybrid cloud was the event's highlight, with new and upgraded offerings centered around artificial intelligence (AI).
HPE is betting big on enterprises' need to combine on-premises infrastructure with private and public clouds, given the growing interest in AI development and stringent data security regulations. It has revamped its storage platform, HPE GreenLake, to enable more seamless hybrid cloud adoption and to support training AI models.
In this article, we break down the key announcements from the event and what they mean for enterprises.
The company is improving GreenLake, its all-flash storage platform, to better support AI workloads, especially generative AI model training. In other words, the storage platform can now handle compute-intensive workloads more effectively, thanks to integration with NVIDIA's Quantum-2 InfiniBand networking platform.
This announcement aligns with the general trend in the IT industry in 2023, with several vendors revamping their storage products and services. Earlier this year, IBM announced its storage platform at the IBM Storage Summit, highlighting the renewed importance of data storage in the current race for AI supremacy.
While discussing the platform with an analyst at SiliconANGLE, Joseph George, global vice president of strategic alliances at HPE, described how it can benefit ambitious enterprises.
“Imagine just a layer where you’re taking applications, data, users, et cetera and putting it into a portal with HPE GreenLake and then determining where these things need to reside at that point in time with the flexibility to manage portability of things over time,” he said.
Brian Falvey, vice president of sales for HPE GreenLake in North America, explained how GreenLake could support what is currently the most popular AI use case: large language models (LLMs).
“What HPE GreenLake for Large Language Models does is, it’s basically a multi-tenant cloud. It’s like a public cloud environment where you can log in, put it to use, use the infrastructure, use the tools, use our expertise, develop your model and then shut it off. You don’t have to make the big investment. I think it’s pretty noble in democratizing access to AI,” Falvey said.
The tech vendor also plans to launch a supercomputer-powered cloud service to support advanced AI modeling. While initially focused on LLMs, the company plans to expand to other use cases such as healthcare, finance, transportation, and climate modeling.
HPE is also engaging with the biggest name in AI, NVIDIA, collaborating on hardware and software for generative AI in enterprise data centers. The result will be a full-stack solution tailored to enterprise needs, making it easier for enterprises to tap into this segment of AI with infrastructure built to support it.
It's designed to be customizable and intuitive, and it can be used with private data on on-premises hardware or in the cloud. A rack-scale architecture offers optimizations by bringing the different components together as a single entity in one server rack.
The solution will be based on HPE’s ProLiant Compute DL380a servers with NVIDIA L40S GPUs and BlueField-3 DPUs.
NVIDIA is also collaborating with Dell to offer enterprises an on-premise generative AI solution (Project Helix). HPE’s offering would essentially be a competitor to Dell’s. Interestingly, NVIDIA is part of both solutions, showing how the GPU maker has become an indispensable part of the AI revolution.
Another AI-related announcement at the event was about HPE’s Machine Learning Development Environment Software. It will now be a managed service, accessible on AWS and other cloud providers.
The managed service provides a comprehensive, adaptable, cloud-managed experience designed specifically for AI/ML model training, supporting every stage of a company's AI/ML journey and helping it accelerate GenAI initiatives.
Delivered through the cloud, the software removes much of the complexity and operational overhead associated with model training, which speeds up model development. New generative AI studio capabilities aim to boost AI adoption further by enabling rapid prototyping and testing of models.
In other words, the managed service aims to future-proof AI/ML model training infrastructure while reducing the staffing and processing burden of managing it.
The updated AI platform incorporates HPE Ezmeral Software, which has been enhanced to streamline data, analytics, and AI tasks for businesses. The improvements simplify and speed up these processes and allow users to run both cloud-native and non-cloud-native applications in containers.
The platform is designed for seamless operation across diverse cloud environments and introduces key updates, including a more efficient hybrid data lakehouse. This improvement optimizes GPU and CPU usage, facilitating smoother data management and analysis.
HPE Ezmeral Unified Analytics Software integrates with HPE Machine Learning Development Environment Software to enhance model training and tuning, while improved GPU allocation management delivers better performance across a variety of workloads.
Furthermore, Ezmeral supports additional third-party tools, such as Whylogs for model monitoring and Voltron Data for accelerated GPU-based data queries.
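To give a sense of what model and data monitoring with Whylogs looks like in practice, here is a minimal sketch using the open-source whylogs Python library to profile a batch of predictions. The column names and values are illustrative only, and the snippet does not reflect the specifics of how Ezmeral wires the tool in.

```python
# Minimal sketch of data/model monitoring with the open-source whylogs library.
# Column names and values are illustrative; Ezmeral's integration details may differ.
import pandas as pd
import whylogs as why

# A small batch of model outputs alongside ground-truth labels (hypothetical data)
batch = pd.DataFrame({
    "prediction_score": [0.91, 0.22, 0.68, 0.40],
    "label": [1, 0, 1, 0],
})

# Profile the batch: whylogs captures lightweight statistics (counts, distributions,
# missing values) that can be compared across batches to spot drift or data-quality issues.
results = why.log(batch)
profile_view = results.view()

# Inspect the summary statistics as a DataFrame
print(profile_view.to_pandas())
```

Profiles like this are typically generated per batch and compared over time, which is what makes the approach useful for catching data drift before it degrades model quality.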
The announcements and discussions at HPE Discover Barcelona reiterate the focus on HPE hybrid cloud, as the company sees growing demand for hybrid architectures across enterprises in various industries. Its leadership made clear that it believes on-premises infrastructure is still crucial for enterprises, especially now that many want to adopt AI, particularly generative AI, and train their own models.
As expected, AI is at the center of it all, whether it's the upgrades to the GreenLake platform or the collaboration with NVIDIA. HPE, like many other tech vendors, is doubling down on efforts to offer clients AI solutions spanning both hardware and software.
From a strictly business point of view, the developments make a lot of sense. Not only is the competition investing heavily in its own AI offerings, but HPE's earnings data also show promise at the edge: fourth-quarter 2023 revenue was down from the prior year, but edge computing revenue was up 41 percent.
The landscape is evolving, and HPE appears poised to help shape a more accessible and innovative future for hybrid cloud and AI integration.
If you’re looking to refresh your data center and embrace AI, you have to start with the right equipment. It’s an expensive move, but given the proven efficiency AI can bring to operations, it may just be worth it. PivIT can help you procure the latest AI servers from major brands to kickstart your AI strategy.