Hewlett Packard Enterprise (HPE) is expanding its artificial intelligence (AI) efforts with a series of new initiatives announced today at the HPE Discover Barcelona 2023 event.

The updates include an expanded partnership with Nvidia that involves both hardware and software to optimize AI for enterprise workloads. The HPE Machine Learning Development Environment (MLDE), which was first released in 2022, is being enhanced with new features to help enterprises consume, customize and create AI models. HPE is also extending MLDE to be available as a managed service running on AWS and Google Cloud. Additionally, HPE is boosting its own cloud efforts for AI, with new AI-optimized instances in HPE GreenLake and increased file-storage performance in support of AI workloads.

The new updates are all designed to support HPE's vision for a full-stack AI-native architecture optimized from hardware to software. Modern enterprise AI workloads are extremely computationally intensive, require data as a first-class input, and need massive scale for processing.

"Our view at HPE is that AI requires a fundamentally different architecture, because the workload is fundamentally different than the classic transaction processing and web services workloads that have become so dominant in computing over the last couple of decades," Evan Sparks, VP/GM of AI Solutions and Supercomputing Cloud at HPE, said in a briefing with press and analysts.

HPE also detailed its AI-Native Architecture Vision, designed to support the deployment of both traditional machine learning (ML) and emerging deep learning (DL) models at the edge. The AI-Native Architecture Vision is an integrated and secure architecture that provides compute, storage and software layers to enable easy deployment of ML and DL workloads at the edge.

The architecture provides access to integrated data sources of multiple types, such as images, video, text and sensor data; develops algorithms and deploys them to AI service nodes; and runs AI applications at the edge. AI application developers can assign ML or DL models to multiple nodes, which reduces deployment time, simplifies operation and improves usability.

The architecture includes HPE Edgeline servers that integrate with HPE AI software and compute nodes. HPE Edgeline servers are built for the most challenging edge computing environments, featuring processors suited to demanding AI workloads and ML/DL acceleration. The architecture also features HPE SimpliVity's hyperconverged management, which provides a simple, unified platform for data and process control, automation, system health monitoring, and advanced analytics.

The AI-Native Architecture Vision integrates with HPE's AI-enabled services, which provide the capability to deploy and manage ML and DL models remotely, enabling unmanned autonomous operations. In addition, HPE's integrated DevOps environment provides a comprehensive suite of tools for the development, testing and deployment of AI applications.

With the combination of HPE's AI-Native Architecture Vision and AI-enabled services, customers gain an integrated, secure architecture for quickly and securely deploying ML and DL models to the edge, along with a DevOps environment for rapidly developing, testing and deploying AI applications.

HPE's AI-Native Architecture Vision is a significant step for organizations looking to take advantage of machine learning and deep learning at the edge. HPE's approach to AI-driven architecture and its integrated services give organizations the capabilities needed to quickly and securely deploy models at the edge, providing a competitive advantage.