High Performance Solutions for Inferencing
Deep learning and AI are driving competitive advantage in a growing number of organisations - from automating business processes and gaining insight through data analysis, to engaging with customers and employees. NVIDIA Jetson is the world's leading platform for embedded applications - comprising small form-factor, high-performance compute modules, the NVIDIA JetPack SDK (Software Development Kit) for accelerating software, and an ecosystem of sensors, services and third-party products to speed up development.
Every 3XS Jetson inferencing system is highly configurable and expandable to suit your operational environment - from the smallest form factors to weatherproof and ruggedised solutions built to cope with any conditions.
NVIDIA Jetson Modules
Each NVIDIA Jetson module is a complete System on Module (SOM) including CPU, GPU, memory, power management, high-speed interfaces and more. Jetson modules are available in various combinations of performance, power-efficiency, and form factor so they can be used by customers across all industries. Jetson ecosystem partners provide software, hardware design services, and off-the-shelf compatible products from carrier boards to full systems, so you can get to market faster with embedded AI and edge devices. Furthermore, Jetson is compatible with the same CUDA and NVIDIA GPU Cloud (NGC) AI software stack and workflows used across the DGX, HGX, EGX and RTX GPU platforms, but delivers the performance and power-efficiency customers need to build software-defined edge appliances and autonomous machines.
Jetson AGX Orin
The Jetson AGX Orin features an NVIDIA Ampere architecture GPU, ARM Cortex-A78AE CPU and 32GB of memory, and is capable of 200 TOPS of inferencing performance. At just 100mm x 87mm, the Jetson AGX Orin module consumes as little as 15W, or up to a maximum of 50W, and maintains form-factor and pin-compatibility with Jetson AGX Xavier while providing up to 6x the performance. This makes it the solution of choice for next-gen autonomous machines such as delivery and logistics robots.
Jetson Orin NX
The Jetson Orin NX features an NVIDIA Ampere architecture GPU, ARM Cortex-A78AE CPU and 16GB of memory, and is capable of 100 TOPS of inferencing performance. At just 70mm x 45mm, the Jetson Orin NX module consumes as little as 10W, or up to a maximum of 25W, and maintains form-factor and pin-compatibility with Jetson Xavier NX while providing up to 5x the performance. This makes it the solution of choice for next-gen autonomous machines such as delivery and logistics robots.
Jetson AGX Xavier
The Jetson AGX Xavier features an NVIDIA Volta architecture GPU, ARM Carmel CPU and 32GB of memory, and is capable of 32 TOPS of inferencing performance. At just 100mm x 87mm, the Jetson AGX Xavier module consumes as little as 10W of power. This makes it the solution of choice for handling the visual odometry, sensor fusion, localisation and mapping, obstacle detection, and path-planning algorithms critical for robots.
Jetson Xavier NX
The Jetson Xavier NX features an NVIDIA Volta architecture GPU, ARM Carmel CPU and 8GB of memory, and is capable of 21 TOPS of inferencing performance. At just 70mm x 45mm, the Jetson Xavier NX module consumes as little as 10W, and is ideal for commercial robots, medical instruments, smart cameras, high-resolution sensors, automated optical inspection, smart factories, and other embedded IoT systems.
Jetson TX2
The Jetson TX2 module features an NVIDIA Pascal architecture GPU, ARM Cortex-A57 CPU and up to 8GB of memory, and is capable of up to 1.33 TFLOPS of inferencing performance. Available in either 69.6mm x 45mm or 50mm x 87mm, and consuming just 7.5W of power, the Jetson TX2 series modules are ideal for real-time processing in applications where bandwidth and latency can be an issue, including factory robots, commercial drones, and enterprise collaboration devices.
Jetson Nano
The Jetson Nano module features an NVIDIA Maxwell architecture GPU, ARM Cortex-A57 CPU and up to 4GB of memory, and is capable of 472 GFLOPS of inferencing performance. At just 70mm x 45mm and consuming between 5 and 10W of power, the Jetson Nano series modules are ideal for deploying AI to devices at the edge across multiple industries - from smart cities and factories to agriculture and robotics.
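The power figures quoted above are often the first filter when choosing a module for an embedded design. As a minimal sketch only, the comparison can be expressed as a simple lookup - the figures are taken from the module descriptions above, while the table and helper function are illustrative and not any NVIDIA tool or API:

```python
# Hypothetical helper for shortlisting Jetson modules by power budget.
# The speed and wattage figures are the quoted peak inferencing speeds
# and minimum power draws from the module descriptions above; note that
# TOPS (INT8) and TFLOPS/GFLOPS (floating point) are not directly
# comparable units.

# (module name, quoted peak inferencing speed, minimum power draw in watts)
JETSON_MODULES = [
    ("Jetson AGX Orin",   "200 TOPS",    15.0),
    ("Jetson Orin NX",    "100 TOPS",    10.0),
    ("Jetson AGX Xavier", "32 TOPS",     10.0),
    ("Jetson Xavier NX",  "21 TOPS",     10.0),
    ("Jetson TX2",        "1.33 TFLOPS",  7.5),
    ("Jetson Nano",       "472 GFLOPS",   5.0),
]

def modules_within_budget(budget_watts: float) -> list[str]:
    """Return the modules whose minimum power draw fits the budget."""
    return [name for name, _speed, watts in JETSON_MODULES
            if watts <= budget_watts]

print(modules_within_budget(7.5))  # only TX2 and Nano fit a 7.5W budget
```

In practice the shortlist would also weigh form factor, memory and pin-compatibility, but a power-first filter like this mirrors how the module range is tiered.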
Software for Jetson GPUs
The NVIDIA JetPack SDK enables access to all Jetson hardware features and lets developers bring the world-standard NVIDIA AI software stack to their next-generation edge AI devices. It includes the Jetson Linux driver package, together with libraries, samples, and tools to accelerate AI inference, computer vision, graphics, multimedia, and more.
In addition to the JetPack SDK, a whole host of pre-trained AI models is available within NGC. Many AI applications have common needs - classification, object detection, language translation, text-to-speech, recommender engines, sentiment analysis and more. These pre-trained models are performance-optimised and ready to be fine-tuned with customer datasets, reducing the time and cost of production AI software development. They are further complemented by frameworks, including Isaac for robotics, Clara for healthcare, and Metropolis for smart cities, designed to accelerate the development of vertical-specific applications.
Managing Inferencing Deployments
NVIDIA Fleet Command is a cloud service that securely deploys, manages, and scales AI applications across distributed edge infrastructure, simplifying deployment and centralising management of Jetson-based edge AI devices. Once a system is installed at a physical location, its end-to-end lifecycle can be managed by Fleet Command, making it easy to provision, update, and monitor AI at remote locations. From retailers building intelligent stores, to hospitals using AI to improve patient care, to manufacturers looking to optimise operations, Fleet Command makes managing AI at the edge easy.
Move from software installation to edge deployment in just a few clicks. The tested and optimised solution streamlines setup and management and can run even the most intensive applications at the edge.
Deploy AI to any location with ease. The simplified interface provides centralised AI management and one-touch provisioning, so you can deploy and scale applications to numerous locations.
Simplify your AI lifecycle management with features designed for ease of use. Update applications over the air, scale applications, and monitor AI health to optimise your deployments and maintenance.
Ensure application data is always protected, from cloud to edge. Fleet Command follows leading security protocols, encrypting data in transit and at rest and providing constant monitoring.
NVIDIA Jetson Systems
As we’ve mentioned, there are almost limitless uses of AI inferencing technology in the field across many industry verticals - whether in monitoring units containing cameras or sensors across a smart city environment, or embedded within robotic devices on a factory production line. Each scenario requires its own set of features - low power consumption, weatherproof housing, a wide range of interfaces for multiple inputs, or 5G connectivity for truly autonomous locations. Our 3XS systems division has partnered with leading industrial PC manufacturer Advantech and can provide customisable solutions to address any inferencing project.
The Advantech range from Scan AI includes various models designed for specific environments and tailored for particular inferencing workloads. Many are modular, allowing expansion through the i-Module expansion chassis or the i-Door interface to add PCIe slots, extra connectivity or control modules.