A³I - Accelerated, Any-Scale AI
DDN A³I storage solutions are fully optimised to accelerate machine learning and artificial intelligence (AI) applications, streamlining deep learning workflows for greater productivity. A³I solutions harness the knowledge gained from customer-proven deployments to make AI-powered innovation easy. A³I is a turnkey AI data storage infrastructure designed for rapid deployment, delivering faster performance, effortless scale, and simplified operations through deeper integration, all backed by data-at-scale experts.
Faster AI Performance
DDN A³I solutions are fully optimised to deliver massive performance acceleration to applications. The DDN shared parallel architecture keeps GPUs fully saturated, so compute cycles are spent on productive AI work rather than waiting on I/O.
DDN A³I solutions enable all phases of deep learning concurrently and continuously from a shared unified namespace. This makes it easy to collect and access large amounts of data from a variety of sources, and rapidly iterate through training, validation, inference and simulation.
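To make the shared-namespace idea concrete, here is a minimal sketch in which each workflow phase simply operates on the same filesystem path, with no copy step between ingest, training, and inference. The root path, file layout, and phase functions are hypothetical illustrations, not part of any DDN API; a temporary directory stands in for the parallel file system mount point.

```python
import json
import pathlib
import tempfile

# Hypothetical shared namespace root; on a real deployment this would be
# the parallel file system mount point, not a temporary directory.
SHARED_ROOT = pathlib.Path(tempfile.mkdtemp())

def ingest(samples):
    """Collect raw data from a source into the shared namespace."""
    raw = SHARED_ROOT / "raw"
    raw.mkdir(exist_ok=True)
    for i, s in enumerate(samples):
        (raw / f"sample_{i}.json").write_text(json.dumps(s))

def train():
    """Training reads the same files ingestion wrote -- no staging copy."""
    raw = SHARED_ROOT / "raw"
    data = [json.loads(p.read_text()) for p in sorted(raw.glob("*.json"))]
    model = {"mean": sum(d["x"] for d in data) / len(data)}
    (SHARED_ROOT / "model.json").write_text(json.dumps(model))
    return model

def infer(x):
    """Inference loads the checkpoint from the same namespace."""
    model = json.loads((SHARED_ROOT / "model.json").read_text())
    return x - model["mean"]

ingest([{"x": 1.0}, {"x": 3.0}])
model = train()
print(infer(2.0))  # every phase works against one shared namespace
```

Because every phase addresses one namespace, iterating between training, validation, and inference is a matter of re-running a stage, not of moving data between storage tiers.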
DDN A³I scales seamlessly in capacity, performance and capability to match evolving workflow needs. Configurations can start small and expand in multiple dimensions to achieve optimal technical and economic benefits.
DDN A³I Parallel File Storage Appliances
To meet the requirements of a variety of workloads, the DDN A³I architecture combines an NVMe appliance with a hybrid appliance. The NVMe devices are fully integrated parallel file storage appliances that deliver 20GB/s of throughput and over 350K IOPS to applications, while the hybrid parallel file storage appliance integrates both flash and deeply expandable capacity disks in a unified system for simplicity and flexibility. This integration makes it easy to colocate hot training data and large data libraries while maintaining optimal system efficiency.
DDN A³I & DGX A100
At the heart of DDN A³I is the NVIDIA DGX A100, a universal building block for datacentre AI that supports deep learning training, inference, data science, and other high-performance workloads from a single platform. Each DGX A100 system is powered by eight NVIDIA A100 Tensor Core GPUs and integrates the latest high-speed NVIDIA networking interconnects.
The DDN AI200X is an all-NVMe flash appliance optimised for demanding AI workloads. Dual active controllers provide up to 24GB/s of throughput and 1.5M IOPS to clients via four EDR/HDR100 InfiniBand or 100GbE ports. The AI200X is an ideal companion for single-appliance GPU compute deployments, and scales seamlessly to match any capacity and performance requirement.
The AI200X is available in 32TB, 64TB, 128TB and 256TB capacities.
The DDN AI400X is an all-NVMe flash appliance that delivers the fastest performance for AI workflows at any scale. Dual active controllers provide up to 48GB/s of throughput and 3M IOPS to clients via eight EDR/HDR100 InfiniBand or 100GbE ports. The AI400X lets you run workloads against all of your data, wherever it lives, accelerating your most demanding workflows and applications.
The AI400X is available in 32TB, 64TB, 128TB and 256TB capacities.
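As a rough illustration of how the quoted per-appliance throughput composes at scale, the back-of-envelope calculation below estimates how many AI400X units would be needed to feed a group of DGX A100 systems. The 48GB/s figure comes from the specification above; the per-DGX bandwidth budget is a hypothetical planning number, not a DDN sizing recommendation.

```python
import math

AI400X_THROUGHPUT_GBPS = 48  # quoted aggregate throughput per AI400X
TARGET_PER_DGX_GBPS = 16     # hypothetical bandwidth budget per DGX A100

def appliances_needed(num_dgx: int) -> int:
    """Round up to the number of AI400X units covering the aggregate target."""
    return math.ceil(num_dgx * TARGET_PER_DGX_GBPS / AI400X_THROUGHPUT_GBPS)

print(appliances_needed(4))  # 4 x 16 = 64 GB/s, so two 48GB/s appliances
```

Real sizing also depends on IOPS, capacity, and network topology, so a calculation like this is only a starting point for a conversation with the vendor.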
The DDN AI7990X is a hybrid storage appliance built for flexibility, allowing performance flash and large-capacity disks to be intermixed in a single high-density system. Dual active controllers in the AI7990X provide up to 24GB/s of throughput and 800K IOPS to clients via four EDR/HDR100 InfiniBand or 100GbE ports.
The AI7990X is available in 1PB, 2PB and 4PB capacities.