The WEKA Data Platform is Now Certified as a High-Performance Data Store Solution with the NVIDIA Cloud Partner Reference Architecture for GPU Compute Infrastructure
WekaIO (WEKA), the AI-native data platform company, today announced that the WEKA® Data Platform has been certified as a high-performance data store for NVIDIA Partner Network Cloud Partners. With this certification, NVIDIA Cloud Partners can now leverage the WEKA Data Platform’s exceptional performance, scalability, operational efficiency, and ease of use through the jointly validated WEKA Reference Architecture for NVIDIA Cloud Partners using NVIDIA HGX H100 systems.
The NVIDIA Cloud Partner reference architecture provides a comprehensive, full-stack hardware and software solution for cloud providers to offer AI services and workflows for different use cases. WEKA’s storage certification ensures that WEKApod™ appliances and hardware from WEKA-qualified server partners meet NVIDIA Cloud Partner high-performance storage (HPS) specifications for AI cloud environments.
The certification highlights the WEKA Data Platform’s ability to provide powerful performance at scale and accelerate AI workloads. It delivers up to 48GBps of read throughput and over 46GBps of write throughput on a single HGX H100 system and supports up to 32,000 NVIDIA GPUs in a single NVIDIA Spectrum-X Ethernet networked cluster. NVIDIA Cloud Partners can now confidently pair the WEKA Data Platform with large-scale AI infrastructure deployments powered by NVIDIA GPUs to help their customers rapidly deploy and scale AI projects.
“AI innovators are increasingly turning to hyperscale and specialty cloud providers to fuel model training and inference and build their advanced computing projects,” said Nilesh Patel, chief product officer at WEKA. “WEKA’s certified reference architecture enables NVIDIA Cloud Partners and their customers to now deploy a fully validated, AI-native data management solution that can help to improve time-to-outcome metrics while significantly reducing power and data center infrastructure costs.”
The AI Revolution Is Driving Surging Demand for Specialty Cloud Solutions
Global demand for next-generation GPU access has surged as organizations move to rapidly adopt generative AI and gain a competitive edge across a wide spectrum of use cases. This has spurred the rise of a new breed of specialty AI cloud service providers that offer wide GPU access by providing accelerated computing and AI infrastructure solutions to organizations of every size and in every industry. As enterprise AI projects converge training, inference, and retrieval-augmented generation (RAG) workflows on larger GPU environments, these cloud providers often face significant data management challenges, such as data integration and portability, minimizing latency, and controlling costs through efficient GPU utilization.
WEKA’s AI-native data platform optimizes and accelerates data pipelines, helping ensure GPUs are continuously saturated with data to achieve maximum utilization, streamline AI model training and inference, and accelerate performance-intensive workloads. It provides a simplified, zero-tuning storage experience that optimizes performance across all I/O profiles, helping cloud providers simplify AI workflows to reduce data management complexity and staff overhead.
Many NVIDIA Cloud Partners are also building their service offerings with sustainability in mind, employing energy-efficient technologies and sustainable AI practices to reduce their environmental impact. The WEKA Data Platform dramatically improves GPU efficiency and the efficacy of AI model training and inference, which can help cloud service providers avoid 260 tons of CO2e per petabyte of data stored. This can further reduce their data centers’ energy and carbon footprints and the environmental impact of customers’ AI and HPC initiatives.
“The WEKA Data Platform is crucial in optimizing the performance of Yotta’s Shakti Cloud, India’s fastest AI supercomputing infrastructure. Shakti Cloud allows us to provide scalable GPU services to enterprises of all sizes, democratizing access to high-performance computing resources and enabling businesses to fully harness AI through our extensive NVIDIA H100 GPU fleet. With this enhancement, our customers can efficiently run real-time generative AI on trillion-parameter language models,” said Sunil Gupta, Co-founder, Managing Director, and CEO of Yotta Data Services, an NVIDIA Cloud Partner. “At Yotta, we are deeply committed to balancing data center growth with sustainability and energy efficiency. We are dedicated to deploying energy-efficient AI technologies to minimize the environmental impact of our data centers while continuing to scale our infrastructure to meet the growing demand. WEKA is instrumental in helping us achieve this objective.”
Key Benefits of WEKA’s Reference Architecture for NVIDIA Cloud Partners include:
— Exceptional Performance: Validated high throughput and low latency help to reduce AI model training and inference wall clock time from days to hours, providing up to 48GBps of read throughput and over 46GBps of write throughput for a single HGX H100 system.
— Maximum GPU Utilization: WEKA delivers consistent performance and linear scalability across all HGX H100 systems, optimizing data pipelines to improve GPU utilization by up to 20x, resulting in fewer GPUs needed for high-traffic workloads while maximizing performance.
— Service Provider-level Multi-tenancy: Secure access controls and virtual composable clusters offer resource separation and independent encryption to preserve customer privacy and performance.
— Eliminate Checkpoint Stalls: Scalable, low-latency checkpointing is crucial for large-scale model training, mitigating risks and providing operational predictability.
— Massive Scale: Supports up to 32,000 NVIDIA H100 GPUs and an exabyte of capacity within a single namespace across an NVIDIA Spectrum-X Ethernet backbone to scale to meet the needs of any deployment size.
— Simplified Operations: Zero-tuning architecture provides linear scaling of metadata and data services and streamlines the design, deployment, and management of diverse, multi-workload cloud environments.
— Reduced Complexity & Enhanced Efficiency: WEKA delivers class-leading performance in one-tenth the data center footprint and cabling compared to competing solutions, reducing infrastructure complexity, storage and energy costs, and the associated environmental impact to promote more sustainable use of AI.
To learn more about the WEKA reference architecture for NVIDIA Cloud Partners, visit https://www.weka.io/company/partners/technology-alliance-partners/nvidia.
To explore how WEKA can enhance GPU acceleration, visit: https://www.weka.io/data-platform/solutions/gpu-acceleration/.
About WEKA
WEKA is architecting a new approach to the enterprise data stack built for the AI era. The WEKA® Data Platform sets the standard for AI infrastructure with a cloud and AI-native architecture that can be deployed anywhere, providing seamless data portability across on-premises, cloud, and edge environments. It transforms legacy data silos into dynamic data pipelines that accelerate GPUs, AI model training and inference, and other performance-intensive workloads, enabling them to work more efficiently, consume less energy, and reduce associated carbon emissions. WEKA helps the world’s most innovative enterprises and research organizations overcome complex data challenges to reach discoveries, insights, and outcomes faster and more sustainably – including 12 of the Fortune 50. Visit www.weka.io to learn more, or connect with WEKA on LinkedIn, X, and Facebook.
WEKA and the WEKA logo are registered trademarks of WekaIO, Inc. Other trade names used herein may be trademarks of their respective owners.
View original content to download multimedia: https://www.prnewswire.com/news-releases/weka-achieves-nvidia-cloud-network-partner-certification-302254748.html
SOURCE WekaIO