Scan code: LN112524 Manufacturer code: TCSDGXA1H80-PB

PNY NVIDIA DGX A100 P3687 640GB AI Server System

PNY NVIDIA DGX A100 Deep Learning AI System, A100 GPUs, 640GB, 5 petaFLOPS TF32 Performance

£264,797.40 | Item currently awaiting an ETA
Pre-order
From £7,231.97 / month (APR 14.9%)

Finance options (Instalment Rate: 14.9% p.a. fixed; Cost of purchase: £264,797.40):
  • 12 months: £23,771.64 / month, total inc. interest £285,259.59
  • 24 months: £12,709.91 / month, total inc. interest £305,037.85
  • 36 months: £9,046.27 / month, total inc. interest £325,665.46
  • 48 months: £7,231.97 / month, total inc. interest £347,134.52
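For reference, the quoted instalments are consistent with a standard amortised repayment at the quoted 14.9% fixed rate over 12, 24, 36 and 48 months. The Python sketch below reproduces them approximately; the retailer's exact formula, fees and rounding are not published, so treat the output as indicative only.

```python
# Hedged sketch: approximate the instalment plans above with a standard
# amortised-loan (annuity) formula. The retailer's exact method, fees and
# rounding are assumptions here, so expect small differences from the
# quoted figures.
principal = 264_797.40   # cost of purchase (GBP)
apr = 0.149              # 14.9% p.a. (assumed effective annual rate)

monthly_rate = (1 + apr) ** (1 / 12) - 1   # equivalent monthly rate

for months in (12, 24, 36, 48):
    payment = principal * monthly_rate / (1 - (1 + monthly_rate) ** -months)
    print(f"{months} months: ~£{payment:,.2f}/month, "
          f"total ~£{payment * months:,.2f}")
```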
If you are approved for a credit limit with PayPal Credit and use it for future purchases, the APR for those purchases won't be more than 21.9% and may be even lower.

Representative Example:

Purchase Rate: 21.9% p.a. (variable)

Representative: 21.9% APR (variable)

Assumed Credit Limit: £1,200

Subject to status. Terms and Conditions apply.
Spend over £280 to Buy Now, Pay Later, or spread the cost of your purchase over 24-48 months of finance with a 10% deposit.
Delivery Options
  • Delivery by DPD to your specified address | £11.50
    Have your parcel delivered by DPD to your specified address. Receive SMS with one-hour delivery window.
    Weekend, timed and European delivery options are available at checkout.
  • Collect instore from our Bolton store, BL6 6PE | Free
    Order online, collect from our Bolton store (25-28 Enterprise Park, Middlebrook, Horwich, Bolton, BL6 6PE).

Services
  • 48HR REPLACEMENT: If you need to return this item, your replacement will be dispatched within 2 working days of your product arriving back at Scan.
Product Overview

The Universal System for AI Infrastructure

Every business needs to transform using artificial intelligence (AI), not only to survive but to thrive in challenging times. However, the enterprise requires a platform for AI infrastructure that improves upon traditional approaches, which historically involved slow compute architectures siloed by analytics, training, and inference workloads. The old approach created complexity, drove up costs, constrained the speed of scaling, and was not ready for modern AI. Enterprises, developers, data scientists, and researchers need a new platform that unifies all AI workloads, simplifying infrastructure and accelerating ROI.


The Universal System for Every AI Workload

NVIDIA DGX™ A100 is the universal system for all AI workloads, from analytics to training to inference. DGX A100 sets a new bar for compute density, packing 5 petaFLOPS of AI performance into a 6U form factor and replacing legacy compute infrastructure with a single, unified system. DGX A100 also offers the unprecedented ability to deliver fine-grained allocation of computing power, using the Multi-Instance GPU (MIG) capability in the NVIDIA A100 Tensor Core GPU, which enables administrators to assign resources that are right-sized for specific workloads. Available with up to 640 gigabytes (GB) of total GPU memory, which increases performance in large-scale training jobs up to 3X and doubles the size of MIG instances, DGX A100 can tackle the largest and most complex jobs along with the simplest and smallest. Running the DGX software stack with optimised software from NGC, the combination of dense compute power and complete workload flexibility makes DGX A100 an ideal choice for both single-node deployments and large-scale Slurm and Kubernetes clusters deployed with NVIDIA DeepOps.
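As a rough illustration of how right-sized resources look from a job's point of view, the sketch below (assuming PyTorch is available on the system) simply enumerates the CUDA devices visible to the process; on a MIG-partitioned DGX A100 these may be individual MIG slices rather than full GPUs, depending on how the administrator exposes them, for example via CUDA_VISIBLE_DEVICES.

```python
# Minimal sketch (assumes PyTorch is installed): list the CUDA devices this
# job can see. On a MIG-enabled DGX A100 these may be MIG instances rather
# than full A100 GPUs, depending on how the administrator has partitioned
# the system and exposed devices (e.g. via CUDA_VISIBLE_DEVICES).
import torch

if not torch.cuda.is_available():
    raise SystemExit("No CUDA devices are visible to this process")

for i in range(torch.cuda.device_count()):
    props = torch.cuda.get_device_properties(i)
    print(f"device {i}: {props.name}, "
          f"{props.total_memory / 2**30:.1f} GiB, "
          f"{props.multi_processor_count} SMs")
```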


Direct Access to NVIDIA DGXperts

NVIDIA DGX A100 is more than a server. It’s a complete hardware and software platform built upon the knowledge gained from the world’s largest DGX proving ground—NVIDIA DGX SATURNV—and backed by thousands of DGXperts at NVIDIA. DGXperts are AI-fluent practitioners who offer prescriptive guidance and design expertise to help fast track AI transformation. They've built a wealth of know-how and experience over the last decade to help maximise the value of your DGX investment. DGXperts help ensure that critical applications get up and running quickly, and stay running smoothly, for dramatically improved time to insights.


Fastest Time to Solution

NVIDIA DGX A100 features eight NVIDIA A100 Tensor Core GPUs, which deliver unmatched acceleration, and is fully optimised for NVIDIA CUDA-X™ software and the end-to-end NVIDIA data centre solution stack. NVIDIA A100 GPUs bring a new precision, Tensor Float 32 (TF32), which works just like FP32 but provides 20X more floating-point operations per second (FLOPS) for AI than the previous generation. Best of all, no code changes are required to achieve this speedup. And when using NVIDIA’s automatic mixed precision with FP16, A100 offers an additional 2X boost to performance with just one additional line of code.
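Both claims map onto framework-level switches. The hedged PyTorch sketch below shows TF32 being enabled explicitly (on Ampere GPUs recent framework builds typically use TF32 for FP32 matrix maths by default, hence "no code changes") and automatic mixed precision wrapped around a forward pass; the tiny model and data here are placeholders, not part of the DGX software stack.

```python
# Hedged PyTorch sketch: TF32 needs no model changes (shown here as explicit
# opt-in flags), and automatic mixed precision is essentially one extra line
# around the forward pass. The model and data below are placeholders.
import torch

torch.backends.cuda.matmul.allow_tf32 = True   # TF32 for FP32 matmuls
torch.backends.cudnn.allow_tf32 = True         # TF32 inside cuDNN

model = torch.nn.Linear(1024, 1024).cuda()     # placeholder model
optimizer = torch.optim.SGD(model.parameters(), lr=1e-3)
scaler = torch.cuda.amp.GradScaler()           # loss scaling for FP16
data = torch.randn(64, 1024, device="cuda")
target = torch.randn(64, 1024, device="cuda")

with torch.autocast(device_type="cuda", dtype=torch.float16):  # the extra line
    loss = torch.nn.functional.mse_loss(model(data), target)

scaler.scale(loss).backward()
scaler.step(optimizer)
scaler.update()
```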

The A100 80GB GPU doubles the high-bandwidth memory from 40GB (HBM2) to 80GB (HBM2e) and increases GPU memory bandwidth by 30 percent over the A100 40GB GPU, making it the world's first GPU with over 2 terabytes per second (TB/s) of memory bandwidth.

DGX A100 also debuts the third generation of NVIDIA® NVLink®, which doubles the GPU-to-GPU direct bandwidth to 600 gigabytes per second (GB/s), almost 10X higher than PCIe Gen 4, and a new NVIDIA NVSwitch™ that’s 2X faster than the last generation. This unprecedented power delivers the fastest time to solution, allowing users to tackle challenges that weren't possible or practical before.
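For a rough sense of scale, the arithmetic behind the "almost 10X" comparison is sketched below; the PCIe Gen 4 x16 figure used (about 32 GB/s per direction, roughly 64 GB/s bidirectional) is an assumed nominal value rather than a number quoted on this page.

```python
# Back-of-the-envelope sketch of the interconnect comparison above. The PCIe
# Gen 4 x16 number (~32 GB/s per direction, ~64 GB/s bidirectional) is an
# assumed nominal figure, so the ratio is approximate.
nvlink3_per_gpu = 600        # GB/s, third-generation NVLink, quoted above
pcie4_x16_bidir = 64         # GB/s, assumed nominal PCIe Gen 4 x16 figure

print(f"NVLink 3 vs PCIe Gen 4 x16: ~{nvlink3_per_gpu / pcie4_x16_bidir:.1f}x")
# prints roughly 9.4x, i.e. "almost 10X"
```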


The World’s Most Secure AI System for Enterprise

NVIDIA DGX A100 delivers the most robust security posture for your AI enterprise, with a multi-layered approach that secures all major hardware and software components. Stretching across the baseboard management controller (BMC), CPU board, GPU board, self-encrypted drives, and secure boot, DGX A100 has security built in, allowing IT to focus on operationalising AI rather than spending time on threat assessment and mitigation.


Unmatched Data Centre Scalability with NVIDIA Mellanox

With the fastest I/O architecture of any DGX system, NVIDIA DGX A100 is the foundational building block for large AI clusters like NVIDIA DGX SuperPOD™, the enterprise blueprint for scalable AI infrastructure. DGX A100 features eight single-port NVIDIA Mellanox® ConnectX®-6 VPI HDR InfiniBand adapters for clustering and up to two dual-port ConnectX-6 VPI Ethernet adapters for storage and networking, all capable of 200Gb/s.

The combination of massive GPU-accelerated compute with state-of-the-art networking hardware and software optimisations means DGX A100 can scale to hundreds or thousands of nodes to meet the biggest challenges, such as conversational AI and large-scale image classification.


Proven Infrastructure Solutions Built with Trusted Data Centre Leaders

In combination with leading storage and networking technology providers, a portfolio of infrastructure solutions is available that incorporates the best of the NVIDIA DGX POD™ reference architecture. Delivered as fully integrated, ready-to-deploy offerings through our NVIDIA Partner Network (NPN), these solutions simplify and accelerate data centre AI deployments.

Features

  • The world's largest 7nm chip. Built on a 7nm process, the NVIDIA Ampere architecture delivers hugely powerful GPUs that go beyond what Moore's law promises.
  • 3rd-generation NVLink and NVSwitch. NVIDIA® NVLink® provides the high-speed interconnect that lets multiple GPUs act as a single accelerator, while NVIDIA® NVSwitch™ incorporates multiple NVLinks to deliver full GPU-to-GPU communication at full NVLink speed.
  • 3rd-generation Tensor Cores. The NVIDIA Ampere architecture provides a huge performance boost and delivers new precisions to cover the full spectrum required by researchers (TF32, FP64, FP16, INT8 and INT4), accelerating and simplifying AI adoption and extending the power of NVIDIA Tensor Cores to HPC.
  • Sparsity acceleration. The third-generation Tensor Cores in NVIDIA A100 GPUs take advantage of fine-grained sparsity in network weights, offering up to 2X the maximum throughput of dense math without sacrificing the accuracy of the matrix multiply-accumulate operations at the heart of deep learning (see the sketch after this list).
  • New Multi-Instance GPU (MIG). The new MIG feature allows the NVIDIA A100 GPU to be securely partitioned into up to seven separate GPU instances for CUDA applications, giving multiple users separate GPU resources for optimal GPU utilisation.
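On the sparsity point, A100's Tensor Cores exploit a 2:4 structured pattern in which at most two of every four consecutive weights are non-zero. The PyTorch sketch below only illustrates what pruning a weight matrix to that pattern looks like; it does not invoke the sparse Tensor Core kernels that deliver the actual speedup, and the toy matrix is an assumption for illustration.

```python
# Illustrative sketch of 2:4 structured sparsity: in every group of 4
# consecutive weights, keep the 2 largest-magnitude values and zero the rest.
# This only shows the weight pattern; it does not use the sparse Tensor Core
# kernels that provide the real throughput gain.
import torch

weights = torch.randn(8, 16)              # toy dense weight matrix (assumed)
groups = weights.reshape(-1, 4)           # groups of 4 consecutive weights

keep = groups.abs().topk(k=2, dim=1).indices             # indices to preserve
mask = torch.zeros_like(groups).scatter_(1, keep, 1.0)   # 1.0 where kept
sparse_weights = (groups * mask).reshape(weights.shape)

# Every group of 4 now holds exactly 2 non-zero weights (2:4 sparsity).
print("non-zeros per group:", mask.sum(dim=1).unique().tolist())
```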

Specifications
Edition DGX A100
Form Factor 6U
CPU Included Yes
CPU Type 2x AMD EPYC™ 7002 Series
CPU Model 7742
Architecture Infinity / Zen 2
Manufacturing Process 7 nm
Unlocked CPU No
Core 64 Core
No. of Threads 128
Clock Speed 2.25 GHz
Burst Frequency 3.4 GHz
CPU Cache 256MB
CPU Cooler  
Max. TDP 225 W
CPU Socket SP3 x2
Chipset N/A
Motherboard Form Factor Bespoke/Custom
Motherboard Model  
Memory Type  
Memory Included Yes
Memory Capacity Included 2TB
Memory Edition  
Memory Channels  
Memory Type (ECC)  
Memory Type (R/U)  
Memory Speed (MHz)  
Max. Memory Capacity  
On-board/Integrated Graphics No
Processor/Motherboard Graphics Chip N/A
Discrete GPU Yes
Discrete GPU Chipset 8x NVIDIA A100
GPU Architecture Ampere
GPU Manufacturing Process  
GPU Cores/Streams/Execution Units  
GPU Tensor Cores N/A
Base GPU Speed N/A
Max. GPU Speed N/A
GPU Memory Size 80GB per GPU (640GB total)
GPU Memory Type  
GPU Memory Clock  
GPU Memory Bus  
Discrete GPU Cooler Air Cooled
Multi-GPU Configuration No
Max. GPU Resolution  
Graphics Interface NVSwitch
Expansion Slots N/A
SATA N/A
HDD Storage Included No
HDD Storage Type  
HDD Storage Capacity N/A
Optical Drive None
Drive Bays N/A
Backplane I/O  
RAID Support  
Internal I/O N/A
Network Interface Type  
Network Chip/Modules  
Front Panel  
Back Panel  
Cooling Fans  
Keyboard / Mouse Included No
Remote Control Included No
VESA Mountable N/A
PSU Wattage  
PSU Efficiency Rating  
PSU Redundancy N/A
Operating System Included No
Operating System N/A
Colour Grey
Dimensions 482.3 x 264 x 871.1 (mm WxHxD)
Weight 123.16 kg
Includes  
Notes  
Storage Drives
SSD Storage Included No
SSD Storage Type  
SSD Storage Capacity  
Additional Information
Scan Code LN112524
Model Number TCSDGXA1H80-PB
Warranty

Please note your statutory rights are not affected.

For further information regarding Scan's warranty procedure please see our terms and conditions

Warranty information has not yet been updated for this product. For further information please call 0871 472 4747 or 01204 474747.

TekSpek Guides
NVIDIA DGX-2 Supercomputer
Date Issued: 3rd Sep 2018

The increase in processing speed for computers has opened up new avenues of research and provided deeper insights into solutions for complex problems. Such power has been harnessed to accelerate research into medicine, better predict weather patterns, perform hugely complex calculations for the oil and gas industry, and a whole host more.

Read More

NVIDIA DGX-1 Supercomputer
Date Issued: 21st Jul 2017

The increase in processing speed for computers has opened up new avenues of research and provided deeper insights into solutions for complex problems. Such power has been harnessed to accelerate research into medicine, better predict weather patterns, perform hugely complex calculations for the oil and gas industry, and a whole host more.

Read More
