Scan's TekSpek

Our Aim
To provide you with an overview of new and existing technologies, hopefully helping you understand the changes in the technology. Together with the overviews, we hope to bring topical issues to light from a series of independent reviewers, saving you the time and hassle of fact-finding over the web.

We will over time provide you with quality content which you can browse and subscribe to at your leisure.

TekSpek GPU - Graphics
NVIDIA Deep Learning

Overall improvements in computing performance have naturally paved the way for advancements in specific areas. One such segment is called machine learning, born from pattern recognition and advances in artificial intelligence.

This field is mature enough to now be further subdivided into specialised fields. One such specialism is deep learning, considered to be the fastest growing area of machine learning.

Deep learning helps make sense of data such as sound, text and images by using deep neural networks (DNNs), which work by storing and analysing experiential data. The key is the learning part: the neural network learns to interpret unseen data in the context of already-seen data. In other words, deep learning uses what it knows already to predict what it is about to see.
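
As a loose illustration of that "learn from seen data, predict the unseen" idea, here is a minimal sketch of a tiny neural network. It uses the PyTorch framework purely for convenience (the TekSpek itself does not name a framework), and the toy task and all sizes are hypothetical.

    # Minimal, illustrative sketch: a tiny network that learns from "already-seen"
    # examples and then makes predictions on data it has never seen.
    import torch
    import torch.nn as nn

    torch.manual_seed(0)
    seen_data = torch.randn(256, 2)                            # experiential (training) data
    seen_labels = (seen_data[:, 1] > seen_data[:, 0]).long()   # toy rule to be learned

    model = nn.Sequential(                                     # a very small "deep" network
        nn.Linear(2, 16), nn.ReLU(),
        nn.Linear(16, 2),
    )
    optimiser = torch.optim.SGD(model.parameters(), lr=0.1)
    loss_fn = nn.CrossEntropyLoss()

    for _ in range(200):                                       # the "learning part"
        optimiser.zero_grad()
        loss = loss_fn(model(seen_data), seen_labels)
        loss.backward()
        optimiser.step()

    unseen = torch.randn(4, 2)                                 # data the network has not seen
    print(model(unseen).argmax(dim=1))                         # predictions from what it learned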

Multiple layers within neural networks

So how does this work in practice? A good example of the benefits of deep learning is facial recognition. A deep-learning computer builds up recognition in steps, with the knowledge gained from each step providing closer insight into the final face.

These steps become more complex as we go deeper into the hierarchy. The first step might be to look at an image for edges. A second, deeper step then associates various edges with a known shape - a triangle or rectangle, for example. It's this relatedness that enables learning through training. Moving deeper still, a collection of these shapes can characterise a feature familiar to humans, such as an eye, mouth or ear.

Building from the ground up, from the simplest objects to detailed features that are immediately recognisable to humans, it is the associativity between multiple neural layers that helps a machine understand a face. If the machine knows that a particular arrangement of shapes makes up an eye, then it understands that the eye is part of a face, and the face is part of a human.

A deep-learning machine sees a final picture as a collection of related layers that it can recognise and piece together. Detail from simple layers, such as edges and shapes, enables it to interpret these as the foundations of a face. The end result is significantly more accurate if the deep-learning machine is trained effectively to recognise patterns - a basic form of artificial intelligence, if you will.
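
The layered build-up described above maps naturally onto stacked convolutional layers. The sketch below, again using PyTorch purely for illustration, shows a small stack whose successive layers loosely correspond to the edge, shape and facial-feature stages; the layer counts and image size are arbitrary assumptions rather than anything NVIDIA prescribes.

    # Illustrative only: each deeper layer combines the outputs of the previous one,
    # mirroring the edge -> shape -> facial-feature hierarchy described in the text.
    import torch
    import torch.nn as nn

    face_detector = nn.Sequential(
        # Layer 1: low-level filters, typically responding to edges and gradients
        nn.Conv2d(1, 8, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
        # Layer 2: combinations of edges, i.e. simple shapes such as corners and curves
        nn.Conv2d(8, 16, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
        # Layer 3: combinations of shapes, resembling parts such as eyes, mouths or ears
        nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
        # Final step: decide whether the assembled arrangement of parts is a face
        nn.Flatten(), nn.Linear(32 * 8 * 8, 2),
    )

    dummy_image = torch.randn(1, 1, 64, 64)   # a single 64x64 greyscale image
    print(face_detector(dummy_image).shape)   # torch.Size([1, 2]) - face / not-a-face scores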

The way in which deep learning uses known data to extrapolate what it is likely to see next makes it ideal for a multitude of scenarios. Speech recognition is a prime example, as are natural language processing and bioinformatics.

Powerful GPUs needed for effective deep learning

Yet while deep learning has held obvious appeal for a while now, the computational power to run applications in real time has been lacking. The huge computational burden of hierarchical processing had stifled progress until general-purpose compute on graphics cards came along. Modern graphics cards use massively parallel architectures that break the deep-learning workload down into smaller segments and compute them simultaneously.

Having teraflops of compute performance on tap gives GPUs the innate ability to process deep learning quickly and efficiently. What would take days on older CPUs can be done in real time on an array of GPUs, hugely accelerating the development of this specialism within machine learning.
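
To get a feel for that speed-up, the hedged sketch below times one large matrix multiplication, the core operation in DNN training, on the CPU and then, if a CUDA-capable card is present, on the GPU. The matrix size is an arbitrary choice and the exact figures will vary with hardware.

    # Rough, illustrative timing of the same workload on CPU and GPU (via PyTorch).
    import time
    import torch

    a = torch.randn(4096, 4096)
    b = torch.randn(4096, 4096)

    start = time.time()
    a @ b                                     # matrix multiply on the CPU
    cpu_time = time.time() - start

    if torch.cuda.is_available():
        a_gpu, b_gpu = a.cuda(), b.cuda()     # copy the data to the graphics card
        torch.cuda.synchronize()              # GPU kernels run asynchronously
        start = time.time()
        a_gpu @ b_gpu                         # the same multiply across thousands of GPU cores
        torch.cuda.synchronize()
        gpu_time = time.time() - start
        print(f"CPU: {cpu_time:.3f}s  GPU: {gpu_time:.3f}s")
    else:
        print(f"CPU: {cpu_time:.3f}s  (no CUDA-capable GPU detected)")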

Having amply powerful hardware is only one facet of solving the deep-learning problem. The GPUs need to be programmed effectively, and the networks trained properly, to realise most of their potential. NVIDIA is the undisputed leader in promoting deep learning as a discipline in itself.

NVIDIA raising the deep-learning bar

A three-pronged approach is used to tackle the problem of computing deep learning on an NVIDIA GPU. The first is DIGITS, a robust, open-source framework that makes DNNs easier to design for real-time processing by visualising, monitoring and managing DNN topologies, particularly on multi-GPU systems.

NVIDIA has also developed a specific GPU-accelerated library for easier DNN adoption. Known as the CUDA Deep Neural Network library (cuDNN), it integrates highly optimised routines that take away the need for low-level performance tuning. Instead, data scientists can devote that time to designing and training their neural networks, leading to easier deployment and greater efficiency.
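
In practice most data scientists never call cuDNN directly: deep-learning frameworks invoke its optimised routines behind the scenes. The short sketch below assumes a PyTorch installation with CUDA and cuDNN available, and simply shows how little user-side tuning code is involved; it is not NVIDIA sample code.

    # cuDNN is consumed indirectly: the framework dispatches convolutions to its
    # optimised kernels, so no low-level performance tuning is written by hand.
    import torch
    import torch.nn as nn

    print(torch.backends.cudnn.is_available())  # True when cuDNN is installed
    torch.backends.cudnn.benchmark = True       # let cuDNN pick the fastest convolution algorithm

    conv = nn.Conv2d(3, 64, kernel_size=3, padding=1)
    if torch.cuda.is_available():
        conv = conv.cuda()
        out = conv(torch.randn(8, 3, 224, 224, device="cuda"))  # executed by cuDNN kernels
        print(out.shape)                                         # torch.Size([8, 64, 224, 224])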

Building on CUDA, the latest CUDA Toolkit 7.5 provides additional software libraries that accelerate the design and training of DNNs. No other GPU designer has such an extensive arsenal of tools and expertise for this exciting specialism within machine learning.

We've only just touched on the potential for deep learning to revolutionise a number of important industries, such as security, business and music. NVIDIA is making great strides in optimising its latest Maxwell architecture to harness the possibilities this new method of computing opens up.

NVIDIA is keen to promote the benefits of deep learning on its latest GPUs but, understandably, is only willing to partner with companies that fully understand and are able to support the technology. Scan Computers is the only UK retailer to have a full NVIDIA Deep Learning licence, so get in touch if you want to learn more about how Scan can help your company with NVIDIA deep learning.