The Language of Deep Learning

To begin a deep learning project you need some understanding of programming languages and techniques. However, rather than writing everything from scratch, deep learning frameworks and libraries can be employed to automate and structure many of the tasks.

Frameworks are pre-compiled collections of scripted libraries and models, and provide the easiest way to start an AI project. NVIDIA provides multiple GPU-accelerated frameworks via its NGC (NVIDIA GPU Cloud) web portal.

Alternatively, you can download individual libraries such as cuDNN and NCCL from NGC; these require more coding experience, but provide greater control than pre-scripted frameworks.

If you prefer full control of the AI model creation then you can write your own code using popular languages such as Python, R, Java and C++. Python is the most popular programming language for deep learning thanks to its simple, easy-to-learn syntax, which makes algorithms straightforward to implement.
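As an illustration of this simplicity, a complete gradient-descent fit of a one-weight linear model can be written in a few lines of plain Python, with no framework at all (a minimal sketch using made-up toy data, purely for illustration):

```python
# Minimal sketch: fit y = w * x to toy data with gradient descent,
# using nothing but plain Python.

data = [(1.0, 2.0), (2.0, 4.0), (3.0, 6.0)]  # toy pairs (x, y) where y = 2x

w = 0.0    # initial weight guess
lr = 0.05  # learning rate

for _ in range(200):
    # Gradient of the mean squared error 0.5 * (w*x - y)^2 with respect to w
    grad = sum((w * x - y) * x for x, y in data) / len(data)
    w -= lr * grad  # gradient-descent update

print(round(w, 3))  # converges towards the true slope, 2.0
```

Frameworks automate exactly this kind of loop (gradients, updates, batching) at scale, which is why so little extra language knowledge is needed to use them.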


Orchestration Tools

Docker is a containerisation tool designed to make it easier to create, deploy, and run applications by using containers. Containers allow a developer to package up an application with all of the parts it needs, such as data, libraries and other dependencies, and deploy it as one package. Containers also allow multiple models to be deployed in isolation from one another, meaning that any changes or optimisations made within a container will not impact the host operating system or other containers. Kubernetes, on the other hand, is an orchestration tool: it performs a similar containerisation task but manages workloads across a whole cluster, whereas Docker on its own runs on a single node.


Deep Learning Frameworks

Caffe2
Caffe2 is a deep learning framework designed to easily express all model types, for example CNN, RNN and more, in a friendly Python-based API, and execute them using a highly efficient C++ and CUDA back-end. Users have the flexibility to assemble their model using combinations of high-level and expressive operations in Python, allowing for easy visualisation, or to serialise the created model and directly use the underlying C++ implementation. Caffe2 supports single and multi-GPU execution, along with support for multi-node execution.
Interfaces: Python, C++ | Multi-GPU Ready: Yes

Microsoft Cognitive Toolkit
The Microsoft Cognitive Toolkit, formerly known as CNTK, is a unified deep learning toolkit that describes neural networks as a series of computational steps via a directed graph. In this directed graph, leaf nodes represent input values or network parameters, while other nodes represent matrix operations upon their inputs.
Interfaces: C++ | Multi-GPU Ready: No

MATLAB
MATLAB makes deep learning easy for engineers, scientists and domain experts. With tools and functions for managing and labelling large data sets, MATLAB also offers specialised toolboxes for working with machine learning, neural networks, computer vision and automated driving. With just a few lines of code, MATLAB lets you create and visualise models, and deploy models to servers and embedded devices without being an expert. MATLAB also enables users to generate high-performance CUDA code for deep learning and vision applications automatically from MATLAB code.
Interfaces: Python | Multi-GPU Ready: Yes

MXNet
MXNet is a deep learning framework designed for both efficiency and flexibility. It allows you to mix symbolic and imperative programming to maximise efficiency and productivity. At its core is a dynamic dependency scheduler that automatically parallelises both symbolic and imperative operations on the fly. A graph optimisation layer on top of that makes symbolic execution fast and memory efficient. The library is portable and lightweight, and it scales to multiple GPUs and multiple machines.
Interfaces: Python, R, C++, Julia | Multi-GPU Ready: Yes

Caffe
Caffe is developed by the Berkeley Vision and Learning Center (BVLC) and by community contributors. NVIDIA Caffe, also known as NVCaffe, is an NVIDIA-maintained fork of BVLC Caffe tuned for NVIDIA GPUs, particularly in multi-GPU configurations.
Interfaces: Python, C++ | Multi-GPU Ready: Yes

PyTorch
PyTorch is a Python package that provides two high-level features: tensor computation (like NumPy) with strong GPU acceleration, and deep neural networks built on a tape-based autograd system. You can reuse your favourite Python packages such as NumPy, SciPy and Cython to extend PyTorch when needed.
Interfaces: Python | Multi-GPU Ready: Yes

TensorFlow
TensorFlow is an open source software library for numerical computation using data flow graphs. Nodes in the graph represent mathematical operations, while the graph edges represent the multidimensional data arrays (tensors) that flow between them. This flexible architecture lets you deploy computation to one or more CPUs or GPUs in a desktop, server or mobile device without rewriting code. For visualisation, TensorFlow offers TensorBoard, a suite of visualisation tools.
Interfaces: Python, C++ | Multi-GPU Ready: Yes

Chainer
Chainer is a Python-based deep learning framework aiming at flexibility. It provides automatic differentiation APIs based on the define-by-run approach, also known as dynamic computational graphs, as well as object-oriented high-level APIs to build and train neural networks. It supports CUDA and cuDNN using CuPy for high-performance training and inference.
Interfaces: Python | Multi-GPU Ready: Yes

PaddlePaddle
PaddlePaddle provides an intuitive and flexible interface for loading data and specifying model structures. It supports CNN, RNN, multiple variants and configures complicated deep models easily. It also provides extremely optimised operations, memory recycling and network communication. PaddlePaddle makes it easy to scale heterogeneous computing resources and storage to accelerate the training process.
Interfaces: Python | Multi-GPU Ready: Yes
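The tape-based, define-by-run autograd style mentioned for PyTorch and Chainer can be illustrated with a toy sketch in plain Python. This is purely illustrative and not how either framework is actually implemented; each arithmetic operation records a backward step on a "tape", and calling backward() replays the tape in reverse to accumulate gradients via the chain rule:

```python
# Toy sketch of tape-based automatic differentiation (define-by-run).
# Every operation appends a backward step to a global tape as the
# forward code runs; backward() replays the tape in reverse.

TAPE = []  # records backward steps in forward-execution order

class Var:
    def __init__(self, value):
        self.value = value
        self.grad = 0.0

    def __mul__(self, other):
        out = Var(self.value * other.value)
        def backward():  # chain rule for multiplication
            self.grad += other.value * out.grad
            other.grad += self.value * out.grad
        TAPE.append(backward)
        return out

    def __add__(self, other):
        out = Var(self.value + other.value)
        def backward():  # addition passes the gradient through unchanged
            self.grad += out.grad
            other.grad += out.grad
        TAPE.append(backward)
        return out

    def backward(self):
        self.grad = 1.0
        for step in reversed(TAPE):
            step()

# The graph is defined simply by running ordinary Python code:
x = Var(3.0)
y = Var(4.0)
z = x * y + x   # z = x*y + x
z.backward()
print(x.grad, y.grad)  # dz/dx = y + 1 = 5.0, dz/dy = x = 3.0
```

Because the graph is built as the code executes, ordinary Python control flow (loops, conditionals) can shape the network on every run, which is the key appeal of the dynamic-graph approach.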

The NGC Cloud

The NVIDIA GPU Cloud (NGC) is the hub for GPU-optimised software for deep learning, machine learning, and high-performance computing (HPC) that takes care of all the plumbing so data scientists, developers, and researchers can focus on building solutions, gathering insights, and delivering business value.


Accelerate Time-to-Solution

The NGC catalog accelerates productivity with easy-to-deploy, optimized AI frameworks and HPC application containers, so users can focus on building their solutions.


Simplify AI Adoption

The NGC catalog lowers the barrier to AI adoption by taking care of the heavy lifting (expertise, time, compute resources) with pre-trained models and workflows with best-in-class accuracy and performance.


Run Anywhere You Have NVIDIA GPUs

Run software from the NGC catalog on-prem, in the cloud, at the edge, or in hybrid and multi-cloud deployments. NGC catalog software can be deployed on bare metal servers, on Kubernetes or in virtualized environments, maximizing GPU utilization and the portability and scalability of applications.


Deploy NGC Software with Confidence

Enterprise-grade support for NVIDIA-Certified Systems provides direct access to NVIDIA's AI experts, minimizing risk and maximizing system utilization and user productivity.

NGC offers a comprehensive catalog of GPU-accelerated software for deep learning, machine learning, and HPC. NGC containers deliver powerful and easy-to-deploy software proven to deliver the fastest results. By taking care of the plumbing, NGC enables users to focus on building lean models, producing optimal solutions and gathering faster insights.


Optimised AI Software

NGC empowers researchers, data scientists, and developers with performance-engineered containers featuring AI software like TensorFlow, PyTorch, MXNet, NVIDIA TensorRT, RAPIDS and more. These pre-integrated containers feature the record-setting NVIDIA AI software stack, including the NVIDIA CUDA Toolkit and NVIDIA deep learning libraries.

Powerful HPC Applications

The NGC catalog also hosts containers for widely used GPU-accelerated HPC applications, giving researchers quick access to proven, optimized software for compute-intensive workloads.

Enhanced HPC Visualisation

Visualisation is an integral part of scientific computing and data analysis workflows. NGC provides HPC visualisation containers for easy access to the tools that deliver real-time interactivity and quality visuals for better and faster discoveries, driving user productivity.

The catalogue of NVIDIA-optimised deep learning software containers is available to everyone, at no cost, for use across supported platforms. From 3XS deep learning workstations to NVIDIA data science workstations, the DGX family and bespoke GPU-accelerated servers, all the software is containerised, allowing for portability of deep learning jobs across environments and reducing the overhead typically required to scale AI workloads.

GPU Accelerated Hardware

As trusted AI advisors, the Scan AI team has put together a portfolio of systems that addresses all budgets and scenarios for deep learning projects in any industry sector, offering a range of NVIDIA GPUs from the GeForce, Quadro, Tesla and Jetson families in various configurations across workstation, server and embedded options.
