Optimised Storage for all AI Workloads
GPU-accelerated computing only works as intended if the GPUs in question can receive data quickly and consistently enough to remain fully utilised, significantly shortening the time needed to reach results. AI training places particularly heavy demands on the attached storage, so the Scan AI team has created a portfolio of options to deliver data at high speed to the GPU servers.
Optimised for GPU Acceleration
Maximum GPU Utilisation
The majority of servers designed for AI workloads contain multiple GPUs, and the key is to keep them working as hard as possible for as much of the time as possible. The correct all-flash storage enables data transfer at a rate sufficient to keep the combined GPU memory consistently saturated with data, so that results are achieved in the fastest possible time frame.
Storage appliances using technologies such as NVMe interfaces ensure that data is transferred not only at an incredible rate, but also with minimal latency from command to action. This is another factor in delivering lightning-fast storage capability, and one that should not be overlooked when weighing budget against the required time to results.
The nature of AI projects is that expansion of GPU capability is very likely - put simply, you'll need more servers. When this happens, however, you don't want to have to replace your storage at the same time. All our options are capable of supporting multiple multi-GPU appliances, so scalability isn't an issue in the short term, and capacity can be added to the storage in the longer term too.
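As a rough illustration of the sizing involved, the sustained read bandwidth the storage must deliver can be estimated from the per-GPU data ingest rate. The sketch below uses entirely hypothetical workload figures (samples per second, sample size) chosen only to show the arithmetic, not measured values for any appliance:

```python
def required_storage_bandwidth_gbps(num_gpus: int,
                                    samples_per_sec_per_gpu: float,
                                    avg_sample_mb: float) -> float:
    """Estimate the sustained read bandwidth (GB/s) the storage must
    deliver to keep every GPU's input pipeline full."""
    bytes_per_sec = num_gpus * samples_per_sec_per_gpu * avg_sample_mb * 1e6
    return bytes_per_sec / 1e9

# Example: an 8-GPU server where each GPU consumes 2,000 preprocessed
# samples per second at roughly 0.5 MB each (illustrative numbers only)
print(required_storage_bandwidth_gbps(8, 2000, 0.5))  # 8.0 GB/s
```

Doubling the GPU count doubles the requirement, which is why the options above are sized to feed multiple multi-GPU servers from a single storage system.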
NetApp AI Storage Solutions
The NetApp ONTAP AI architecture delivers groundbreaking performance. It comprises a NetApp AFF A-Series appliance - an all-flash fabric-attached storage system - linked to one or more NVIDIA DGX or HGX servers by NVIDIA Mellanox switches. The system has been designed, tested and validated to deliver excellent training and inferencing performance.
The AFF A-series systems support end-to-end NVMe technologies, from NVMe-attached SSDs to front-end NVMe over Fibre Channel (NVMe/FC) host connectivity. These systems deliver the industry’s lowest latency for an enterprise all-flash array, making them a superior choice for driving the most demanding workloads and applications. With a simple software upgrade to the modern NVMe/FC SAN infrastructure, you can drive more workloads with faster response times, without disruption or data migration.
PNY AI Storage Solutions
PNY, NVIDIA’s global partner, has developed storage solutions from the ground up for AI workloads, optimised for NVIDIA DGX and HGX servers. They deliver ultra-low latency and tremendous bandwidth at a price that allows more investment in GPU resources and less in expensive, slower storage.
PNY All-Flash Solutions
Based on the AMD server platform, the PNY all-flash appliances aren’t like most storage solutions, where the initial investment dictates future growth and performance, often forcing overspend to cover potential future growth. PNY’s NVMesh design and software-defined platforms can scale in stages to suit your project without any limitation. Just purchase the capacity and performance you need today, secure in the knowledge that as you scale, so can your capacity and performance.
DDN AI Storage Solutions
The DDN A³I (Accelerated, Any-Scale AI) architecture breaks new ground for AI and deep learning. Engineered from the ground up for the AI-enabled datacentre, DDN A³I solutions with NVIDIA DGX and HGX servers accelerate end-to-end data pipelines for AI workloads of any scale. They are designed to provide extreme amounts of performance and capacity backed by a jointly engineered, validated architecture.
DDN A³I Series
DDN A³I series appliances are optimised with NVIDIA DGX servers at every layer of hardware and software to ensure data delivery and storage is fast, responsive and reliable. To meet the requirements of a variety of workloads, DDN A³I leverages the DDN AI200 and AI7990 storage appliances. The AI200 is an all-NVMe, fully-integrated parallel file storage appliance that delivers 20GB/s of throughput and over 350K IOPS to applications; both appliances are specifically optimised to keep GPU computing resources fully utilised, ensuring maximum efficiency while easily managing tough data operations.
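The two headline AI200 figures quoted above imply an average I/O size at which both limits are reached at once - a useful way to reason about whether a workload will be bandwidth-bound or IOPS-bound. A quick back-of-the-envelope check:

```python
# Figures quoted above for the DDN AI200
throughput_bytes_per_sec = 20 * 10**9   # 20 GB/s
iops = 350_000                          # over 350K IOPS

# The I/O size at which both ceilings are hit simultaneously:
# larger requests saturate bandwidth first, smaller ones saturate IOPS.
balance_point_kb = throughput_bytes_per_sec / iops / 1000
print(round(balance_point_kb, 1))  # 57.1 (KB per request)
```

Workloads issuing requests much larger than this will be limited by the 20GB/s throughput ceiling; those issuing much smaller requests will be limited by IOPS instead.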
Dell-EMC AI Storage Solutions
Dell EMC offers a number of pre-validated PowerScale and Isilon architectures designed for enterprises that want to manage their AI productivity, not their storage. The storage systems are powerful yet simple to install, manage, and scale to virtually any size, and have been optimised and tested with NVIDIA DGX and HGX servers at every stage of their development.
The PowerScale family includes platforms configured with the PowerScale OneFS operating system. OneFS provides the intelligence behind the highly scalable, high-performance modular storage solution that can grow with your business. The new PowerScale all-flash platforms co-exist seamlessly in the same cluster with your existing Isilon nodes to drive your traditional and modern applications.
Dell EMC Isilon scale-out data storage solutions are powerful, yet simple to scale and manage, no matter how large your unstructured data environment becomes. Unlike traditional storage, Isilon solutions are flexible, efficient and stay simple regardless of how much storage capacity and performance is required, or how your AI requirements change.
IBM AI Storage Solutions
Data is the fuel that powers AI, but it can become trapped or stored in a way that makes it difficult or cost-prohibitive to maintain or expand. Customers need to unlock that data so it can flow from edge to inference within a simple and cost-effective infrastructure. IBM Storage for AI makes data simple and accessible for a hybrid multi-cloud infrastructure with AI storage solutions that fit your business model.
Elastic Storage System
The IBM Elastic Storage System (ESS) is a modern implementation of software-defined storage, making it easier for you to deploy fast, highly scalable storage for AI and big data. With the lightning-fast NVMe storage technology and industry-leading file management capabilities of the IBM Spectrum Scale platform, ESS 3000 nodes can grow to vast scale and are designed to seamlessly support a POWER9 server deployment.