Scan AI

Get in touch with our AI team.

NVIDIA GTC

Find out about the latest innovations in AI at the NVIDIA GPU Technology Conference.

 


NVIDIA GTC March 2024

The Scan AI team were at GTC showcasing a virtual walkthrough of Lilith, an XR Dance Experience powered by AI running on the Scan Cloud platform. We also showcased a Specsavers retail store design created using NVIDIA Omniverse. See all the highlights in our videos below and learn more about the latest NVIDIA announcements from Jensen Huang’s keynote.

You can also get in touch with the Scan AI team about anything regarding GTC or AI.

ENQUIRE NOW

Key GTC Announcements

Be the first to know

Discover the next-gen Blackwell GPU architecture and AI appliances below and register your interest to be the first in the queue when more information is available.

ENQUIRE NOW
 

Blackwell GPU Architecture

The latest Blackwell GPU architecture has been designed for building and training generative AI models. Blackwell delivers up to 30x faster inferencing, 4x faster training and up to 25x lower TCO than its predecessor, Hopper. Multiple Blackwell-based GPUs will be available from late 2024.

 

800Gb/s Networking

AI systems need rapid access to data, so NVIDIA announced an ecosystem of 800Gb/s networking products, doubling the throughput of today’s fastest networks. Key products include the Quantum-X800 InfiniBand and Spectrum-X800 Ethernet switches, plus ConnectX-8 SuperNICs, planned for launch in late 2024.

 

DGX B200

The latest DGX AI appliance, the DGX B200, features eight B200 GPUs with a total of 1.44TB of GPU memory, along with two Intel Xeon CPUs and 400Gb/s networking. The DGX B200 is expected to launch in late 2024.

 

HGX B200 & B100

B200 and B100 GPUs will also be available in custom-built GPU servers based on the HGX platform. Expect up to 15x faster inferencing than their H100-based predecessors. Keep an eye out for these systems in late 2024.

 

DGX SuperPOD GB200

Delivering the ultimate in performance for LLM and generative AI projects, SuperPODs feature 36 GB200 Superchips per rack, connected via fifth-generation NVLink. Planned availability is late 2024.

 

NIM Microservices

NIM is a new collection of pre-built containers that dramatically speed up the deployment of generative AI models. NIM is available exclusively via NVIDIA AI Enterprise, which is bundled as standard with NVIDIA DGX appliances and select GPUs and is also available as a standalone licence.
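Once deployed, a NIM LLM container serves an OpenAI-compatible REST API. As a minimal sketch, the snippet below builds the JSON body you would POST to a locally running NIM; the endpoint URL and model identifier are assumptions for illustration, not details from this page.

```python
# Sketch of a request body for a NIM LLM container's OpenAI-compatible
# chat-completions endpoint. The URL and model name below are assumptions --
# check the documentation for the specific NIM you deploy.
import json

url = "http://localhost:8000/v1/chat/completions"  # assumed local NIM endpoint

payload = {
    "model": "meta/llama3-8b-instruct",  # hypothetical model identifier
    "messages": [
        {"role": "user", "content": "Summarise the Blackwell announcements."}
    ],
    "max_tokens": 128,
}

# Serialise to the JSON body that would be POSTed to the endpoint above
body = json.dumps(payload)
print(body)
```

Because the endpoint follows the OpenAI schema, existing OpenAI-compatible client libraries can typically be pointed at a NIM by changing only the base URL.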

 

cuOpt Microservice

The cuOpt microservice accelerates operations optimisation, enabling faster and better decisions in areas such as logistics and supply chains. cuOpt is available exclusively via NVIDIA AI Enterprise, which is bundled as standard with NVIDIA DGX appliances and select GPUs and is also available as a standalone licence.
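To make the problem class concrete: routing optimisation means choosing the order in which to visit delivery stops so as to minimise total distance. The toy example below solves one such problem with a plain nearest-neighbour heuristic, purely for illustration; it does not use cuOpt's API, which runs GPU-accelerated solvers behind a microservice.

```python
# Toy delivery-routing problem of the kind cuOpt targets, solved with a
# simple nearest-neighbour heuristic (illustration only, not cuOpt's API).
import math

def nearest_neighbour_route(depot, stops):
    """Visit every stop starting from the depot, always moving to the
    closest unvisited stop. Returns the visit order and total distance."""
    route = [depot]
    remaining = list(stops)
    total = 0.0
    while remaining:
        # Pick the unvisited stop closest to the current position
        nxt = min(remaining, key=lambda p: math.dist(route[-1], p))
        total += math.dist(route[-1], nxt)
        route.append(nxt)
        remaining.remove(nxt)
    return route, total

depot = (0.0, 0.0)
stops = [(2.0, 0.0), (2.0, 2.0), (0.0, 2.0)]
route, dist = nearest_neighbour_route(depot, stops)
print(route)  # visit order: [(0.0, 0.0), (2.0, 0.0), (2.0, 2.0), (0.0, 2.0)]
print(dist)   # total distance: 6.0
```

Heuristics like this scale poorly as fleets and constraints grow, which is the gap GPU-accelerated solvers such as cuOpt are designed to fill.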

 

Omniverse Cloud APIs

A new collection of five APIs that extend the capabilities of Omniverse for creating digital twins and simulating autonomous machines such as robots and self-driving vehicles. Expect to see new applications from ISVs such as Ansys, Cadence, Dassault Systèmes, Hexagon, Microsoft, Rockwell Automation, Siemens and Trimble.

 

Project GR00T

A new general-purpose foundation model designed to drive breakthroughs in robotics and embodied AI. Project GR00T will be accelerated by the new Jetson Thor SoC, based on the Blackwell architecture, and the Isaac robotics platform.

Interviews from GTC
Keynote highlights from previous GTCs