This collection of applications - the majority provided free-of-charge via NGC - delivers pre-trained and configured frameworks, libraries and content designed to accelerate AI research for specific use cases. Click the categories to learn more.
These healthcare and life sciences applications are designed for researchers looking to train AI models for medical imaging, genomics, patient monitoring and drug discovery.
AI Accelerated Healthcare
NVIDIA Clara is a family of healthcare applications aimed at AI-powered imaging, genomics, and the development and deployment of smart sensors. It includes full-stack GPU-accelerated libraries, SDKs and reference applications for developers, data scientists and researchers to create real-time, secure and scalable solutions.
Clara Holoscan is an AI computing platform for medical devices that combines hardware systems for low-latency sensor and network connectivity with optimised libraries for data processing and AI, and accelerated core micro-services for streaming, imaging and other applications—from embedded to edge to cloud.
The Clara Parabricks application provides both enterprise-grade, turnkey, GPU-accelerated sequencing software and a technology stack for developers to build applications for high-performance computing, deep learning and data analytics in genomics.
Clara Discovery is a collection of frameworks, applications, and AI models that, together, accelerate drug discovery, supporting research in genomics, microscopy, virtual screening, computational chemistry, visualisation, clinical imaging and more.
Clara Guardian is an application framework that brings video analytics and conversational AI capabilities to hospitals, simplifying the development and deployment of smart sensors to enhance clinical experiences.
The Clara Imaging application accelerates medical imaging AI workflows with open-source frameworks, AI-assisted annotation, AI inference and pre-trained models.
These audio and video applications are designed to deliver automated speech recognition, text-to-speech, speech-to-text or speech-to-speech translation, and natural language processing (NLP) for uses such as virtual assistants, animated characters, video-conferencing and virtual collaboration.
NVIDIA Maxine is a suite of GPU-accelerated SDKs that reinvent audio and video communications with AI, elevating standard microphones and cameras for clear online communications. Maxine provides state-of-the-art real-time AI audio, video and augmented reality features that can be built into customisable, end-to-end deep learning pipelines.
NVIDIA Riva is aimed at building speech AI applications customised for your use case, delivering real-time services such as virtual assistants, call-centre agent assist and video conferencing. Riva components are customisable, so you can adapt them for your use case and industry and deploy them in any cloud, on-premises or at the edge.
NVIDIA NeMo is an open-source application for developers to build, train and fine-tune GPU-accelerated speech models for real-time automated speech recognition (ASR), natural language processing (NLP) and text-to-speech (TTS) applications such as video call transcriptions, intelligent video assistants, and automated call centre support across healthcare, finance, retail and telecommunications.
The NVIDIA Avatar application enables developers and artists to generate, animate, simulate and render state-of-the-art interactive avatars as well as the experiences that use them. Key uses are in game design, virtual assistants and video communication.
Real Time Collaboration
NVIDIA Omniverse is a collaboration and scalable multi-GPU, real-time, true-to-reality simulation platform. Omniverse revolutionises the way you can create and develop as individuals and work together as teams, bringing more creative possibilities and efficiency to 3D creators and developers.
These applications are designed to accelerate and enhance robotics research - from development to simulation, through to deployment.
The NVIDIA Isaac suite of applications aids the building and deployment of commercial-grade, AI-powered robots. Isaac is a toolkit of building blocks and tools that accelerate robot development, such as GPU-accelerated algorithms and deep neural networks (DNNs) for perception and planning, plus machine learning workflows for supervised and reinforcement learning.
The Isaac for Autonomous Mobile Robots (AMR) platform extends NVIDIA Isaac capabilities for developers building and deploying robotics applications by bringing mapping, site analytics and fleet and route optimisation onto NVIDIA EGX servers. This platform helps enhance and accelerate AMR applications for the logistics industry in applications ranging from warehouses to retail.
Isaac Sim provides developers with a data cockpit to synthetically generate datasets for machine learning (ML) models from easy-to-understand parameters. Deployed within Omniverse, Isaac Sim generates synthetic data that can be used to train DNNs running on an AMR. This helps developers build and deploy AI-enabled robots that operate safely and avoid common mishaps.
These applications are designed to aid the development and visualisation of datasets, the building of recommendation systems, and the work of researchers, engineers and designers in physics-driven modelling such as fluid dynamics, molecular dynamics, climate models and mechanics.
Driverless AI by H2O.ai is a machine learning platform that automates many of the most difficult data science and machine learning workflows, such as feature engineering, model validation, model tuning, model selection and model deployment. It enables the rapid development of hundreds of machine learning models to help your business mitigate risks and maximise revenue potential.
NVIDIA Modulus is a neural network framework that blends the power of physics in the form of governing partial differential equations (PDEs) with data to build high-fidelity, parameterised surrogate models with near-real-time latency. Whether you’re looking to get started with AI-driven physics problems or designing digital twin models for complex non-linear, multi-physics systems, NVIDIA Modulus can be used independently or within Omniverse.
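As a minimal sketch of the physics-informed idea behind Modulus—note this is plain NumPy and not the Modulus API—the example below blends a governing equation with a data constraint. It fits a polynomial surrogate to the ODE u'(x) + u(x) = 0 with the condition u(0) = 1, by least squares over collocation points:

```python
import numpy as np

# Surrogate: u(x) ~ sum_k c_k * x^k, with coefficients chosen so the
# physics residual u'(x) + u(x) is ~0 at collocation points and u(0) = 1.
deg = 8
xs = np.linspace(0.0, 2.0, 40)                 # collocation points

V = np.vander(xs, deg + 1, increasing=True)    # u terms:  column k is x^k
dV = np.zeros_like(V)
dV[:, 1:] = V[:, :-1] * np.arange(1, deg + 1)  # u' terms: column k is k*x^(k-1)

A_pde = dV + V                                 # rows encode u'(x) + u(x) = 0
b_pde = np.zeros(len(xs))

# One heavily weighted "data" row enforces the initial condition u(0) = 1.
A_data = np.vander([0.0], deg + 1, increasing=True) * 100.0
b_data = np.array([100.0])

A = np.vstack([A_pde, A_data])
b = np.concatenate([b_pde, b_data])
coef, *_ = np.linalg.lstsq(A, b, rcond=None)

def u(x):
    return np.vander(np.atleast_1d(x), deg + 1, increasing=True) @ coef

print(float(u(1.0)))   # close to the exact solution exp(-1) ~ 0.368
```

Modulus applies the same residual-plus-data loss idea at far larger scale, replacing the polynomial with a neural network and the hand-coded derivatives with automatic differentiation over parameterised PDEs.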
NVIDIA Merlin is an open-source application for building high-performing recommender systems at scale. It includes libraries, methods and tools that streamline the building of recommenders by addressing common preprocessing, feature engineering, training and inference challenges. Each component of the Merlin pipeline is optimised to support hundreds of terabytes of data, all accessible through easy-to-use APIs.
These applications are designed to aid the development of smart and automated infrastructure projects including smart cities, factories, airports and autonomous vehicles.
NVIDIA Metropolis is an end-to-end application framework that combines pre-trained models and training and optimisation tools with common video cameras and sensors to deliver AI-enabled video analytics, providing operational efficiency and safety applications across a broad range of industries—including retail analytics, city traffic management, airport operations and automated factories.
The NVIDIA DRIVE application for the development of autonomous vehicles covers everything from the car to the datacentre. It includes highly automated supervised driving and an AI cockpit, empowering developers to efficiently build and deploy a variety of state-of-the-art features, including perception, localisation and mapping, planning and control, driver monitoring and natural language processing.
NVIDIA DRIVE Sim uses high-fidelity and physically accurate simulation to create a safe, scalable and cost-effective way to deploy self-driving vehicles. It delivers a powerful, cloud-based computing platform capable of generating a wide range of real-world scenarios for AV development and validation, in either a standalone environment or deployed within Omniverse.
These applications are designed to ensure maximum utilisation of GPU hardware. They work by allowing GPU resources to be shared, allocated and segregated across users and tasks as demand requires.
Sharing a GPU
Multi-Instance GPU (MIG) expands the performance and value of NVIDIA H100, A100, and A30 Tensor Core GPUs. MIG can partition the GPU into as many as seven instances, each fully isolated with its own high-bandwidth memory, cache, and compute cores. This gives administrators the ability to support every workload, from the smallest to the largest, with guaranteed quality of service (QoS), extending the reach of accelerated computing resources to every user.
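On a MIG-capable GPU, partitioning is driven from the `nvidia-smi` command line. A typical admin session looks roughly like the sketch below; the profile IDs are hardware-dependent examples, so the supported profiles should always be listed first:

```shell
# Enable MIG mode on GPU 0 (requires admin rights; may need a GPU reset)
sudo nvidia-smi -i 0 -mig 1

# List the GPU instance profiles this card supports, with their IDs
sudo nvidia-smi mig -lgip

# Create two GPU instances from a chosen profile ID (here 19, as an
# example only) and, with -C, a default compute instance inside each
sudo nvidia-smi mig -cgi 19,19 -C

# Show the resulting GPU instances and their isolated memory/compute
sudo nvidia-smi mig -lgi
```

Each created instance then appears as its own device, so schedulers and containers can be pointed at a single slice with guaranteed QoS rather than the whole GPU.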
The Run:ai Atlas software platform decouples data science workloads from the underlying hardware - regardless of what hardware you have. By pooling resources and applying an advanced scheduling mechanism to data science workflows, Run:ai greatly increases the ability to fully utilise all available resources, essentially creating unlimited compute. Data scientists can increase the number of experiments they run, speed time to results and ultimately meet the business goals of their AI initiatives.
Find out more