Why develop AI projects in the cloud?

Deep learning and AI deployments sit at the bleeding edge of IT infrastructure as we know it. They evolve at an incredibly fast rate and are always designed around optimal throughput from data storage to GPU, and then concerned with maximising GPU utilisation to deliver the fastest time to results and increase the return on investment in your purchased hardware.

But what if your project isn’t yet at the 24/7 stage, and a hardware investment is too costly to justify unless it is constantly working to deliver results? And how can you justify investing in such hardware knowing it may need to be upgraded within a short time to ensure you remain at the forefront of development?

A cloud environment delivers access to the latest GPU-related technologies and the fastest data management solutions - optimised and maintained by the experts within the Scan Business AI team.

High Performance



Cost Effective

Why choose Scan AI Cloud?

Traditionally, cloud environments have been built with CPU performance as the central focus, using multiple smaller servers to deliver the required processing workloads, with the associated software designed around maximising results on CPU.

The Scan AI Cloud has been designed from the ground up to take advantage of GPU acceleration, and includes optimised appliances at every stage of the architecture to reduce bottlenecks and ensure the scalability to deal with the most intensive and complex deployments.

Whichever compute model you choose, you can rest assured that you are accessing a completely secure environment offering physical segregation from other users, as no device resource is shared in the Scan AI Cloud. This has been done not only with sensitive data in mind, but to ensure that high-performance components such as the NVIDIA DGX-1 and NetApp all-flash storage perform at their optimum level and deliver the results you are expecting.

But perhaps most importantly, to ensure complete data integrity and security, we have partnered with Cyxtera – a best-in-class hosting provider - to locate our cloud architectures in several datacentres allowing direct access for those occasions where online uploading of data would be prohibitive due to size or sensitivity.

Global data center footprint with integrated security capabilities

Unified network security & deep analytics across hybrid architectures

On-premises security and analytics capabilities via software and as-a-service delivery

Specialty services for key verticals (financial services, public sector, healthcare)

Who Are Cyxtera?

By combining a modern, hybrid-ready information security fabric with an uncommonly diverse and distributed footprint of 57 world-class datacentres, Cyxtera delivers a secure platform for connecting and protecting dedicated infrastructure, private clouds and public clouds. For enterprises, government agencies and service providers, Cyxtera offers powerful, secure IT infrastructure capabilities paired with agile, dynamic software-defined security.

It is this experience and pedigree that has led us to choose Cyxtera for Scan’s AI Cloud platform - one with security integrated as a core service offering into the facilities that sit at the nexus of enterprise IT and the cloud.

Suggested Cloud AI Architecture Hardware
Network: 1/10 Gb/s connection from the Internet
Security: Firewall servers
Switching: Mellanox switches
GPU data processing: DGX systems / RTX & Quadro systems
Data storage layer: NetApp storage

The AI Journey




Your AI journey starts at the development stage, with the testing and optimisation of your code. Whilst this stage still benefits from powerful hardware, there is an obvious price/performance consideration - how fast do you need your results?

Pre-installed Frameworks

The class-leading hardware configurations we offer come pre-installed with a range of common and widely used deep learning frameworks. Deep learning frameworks are systems for the training and deployment of deep learning networks, providing the flexibility to design and adapt a network to your specific task. They allow you to hit the ground running - to prototype and test ideas and applications without the considerable overhead of writing significant amounts of your own code. All the major frameworks use cuDNN, so you can rely on optimised code, and each has a community of developers and users who can help you not only get the most from the framework you choose, but also guide you in modifying it to provide new features your application may require.
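To illustrate how little code these frameworks require to get started, here is a minimal sketch - assuming a TensorFlow 2.x installation - that traces a simple Python function into the kind of data flow graph TensorFlow executes:

```python
import tensorflow as tf

# tf.function traces this Python function into a data flow graph,
# which TensorFlow can then optimise and run on CPU or GPU.
@tf.function
def sum_of_squares(x):
    return tf.reduce_sum(x * x)

x = tf.constant([2.0, 3.0])
print(sum_of_squares(x))  # 2^2 + 3^2 = 13.0
```

The same function runs unchanged on CPU or GPU hardware - the portability that makes these frameworks so productive for prototyping.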

Deep Learning Frameworks
Framework
Caffe2 Caffe2 is a deep learning framework designed to easily express all model types - for example CNNs, RNNs and more - in a friendly Python-based API, and to execute them using a highly efficient C++ and CUDA back-end. Users have the flexibility to assemble their model using combinations of high-level and expressive operations in Python for easy visualisation, or to serialise the created model and use the underlying C++ implementation directly. Caffe2 supports single and multi-GPU execution, along with support for multi-node execution.
Microsoft Cognitive Toolkit The Microsoft Cognitive Toolkit, formerly known as CNTK, is a unified deep-learning toolkit that describes neural networks as a series of computational steps via a directed graph. In this directed graph, leaf nodes represent input values or network parameters, while other nodes represent matrix operations upon their inputs.
MATLAB MATLAB makes deep learning easy for engineers, scientists and domain experts. With tools and functions for managing and labeling large data sets, MATLAB also offers specialised toolboxes for working with machine learning, neural networks, computer vision, and automated driving. With just a few lines of code, MATLAB lets you create and visualise models, and deploy models to servers and embedded devices without being an expert. MATLAB also enables users to generate high-performance CUDA code for deep learning and vision applications automatically from MATLAB code.
MXNet MXNet is a deep learning framework designed for both efficiency and flexibility. It allows you to mix symbolic and imperative programming to maximise efficiency and productivity.

At its core is a dynamic dependency scheduler that automatically parallelises both symbolic and imperative operations on the fly. A graph optimisation layer on top of that makes symbolic execution fast and memory efficient. The library is portable and lightweight, and it scales to multiple GPUs and multiple machines.
NVIDIA Caffe Caffe is developed by the Berkeley Vision and Learning Center (BVLC) and by community contributors. NVIDIA Caffe, also known as NVCaffe, is an NVIDIA-maintained fork of BVLC Caffe tuned for NVIDIA GPUs, particularly in multi-GPU configurations.
PyTorch PyTorch is a Python package that provides two high-level features: tensor computation (like NumPy) with strong GPU acceleration, and deep neural networks built on a tape-based autograd system. You can reuse your favourite Python packages such as NumPy, SciPy and Cython to extend PyTorch when needed.
TensorFlow TensorFlow is an open source software library for numerical computation using data flow graphs. Nodes in the graph represent mathematical operations, while the graph edges represent the multidimensional data arrays (tensors) that flow between them. This flexible architecture lets you deploy computation to one or more CPUs or GPUs in a desktop, server or mobile device without rewriting code. For visualisation, TensorFlow offers TensorBoard, a suite of visualisation tools.
Chainer Chainer is a Python-based deep learning framework aiming at flexibility. It provides automatic differentiation APIs based on the define-by-run approach, also known as dynamic computational graphs, as well as object-oriented high-level APIs to build and train neural networks. It supports CUDA and cuDNN via CuPy for high-performance training and inference.
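Several of the frameworks above - PyTorch and Chainer in particular - are built around define-by-run automatic differentiation. A minimal sketch of the idea, assuming a standard PyTorch installation:

```python
import torch

# Operations on tensors created with requires_grad=True are recorded
# on a "tape"; backward() replays it to compute gradients automatically.
x = torch.tensor([2.0, 3.0], requires_grad=True)
y = (x ** 2).sum()  # y = x1^2 + x2^2
y.backward()        # dy/dx = 2x

print(x.grad)  # tensor([4., 6.])
```

Because the graph is rebuilt on every forward pass, ordinary Python control flow - loops and conditionals - can drive the network structure, which is what makes these frameworks so flexible for research.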

Software to complement the hardware

In addition to all this class-leading hardware, we also offer the option to use in-memory or GPU-accelerated software solutions, supported by the following partners, to provide simple ways to optimise AI processes and to visualise and analyse datasets, no matter how large.


GPU-accelerated analytics applications are now available in the NVIDIA DGX container registry and NVIDIA GPU Cloud (NGC). These applications, including H2O.ai, give customers the ability to extract insights in milliseconds.

Find out more


Kinetica’s distributed, in-memory database on NVIDIA DGX-1 and NVIDIA GPUs provides real-time analytics on data in motion and at rest, delivering 10-100x faster performance at 1/10 of the cost of traditional systems.

Find out more


OmniSci is a GPU-accelerated platform with an open-source SQL engine called OmniSci Core and an integrated visualisation system called OmniSci Immerse. OmniSci Core is now available on DGX systems and in NGC.

Find out more

Planet AI

Planet AI are a team of scientists and engineers with deep roots in AI, Machine Learning and Cognitive Computing undertaking their own ambitious research projects towards Deep Universal Sequence Understanding.

Find out more

Fully Managed and Supported Infrastructure

Unlike some other remote GPU-accelerated services, our cloud environment is supported by a full team of expert consultants - including data scientists, Linux engineers and hardware architects - to support your AI development, whether it be a proof-of-concept trial, project planning or full deployment. These specialist teams are available in half-day increments to hand-hold your deep learning projects, or simply to complement your in-house experts as and when you require.

Data Scientist Consultancy

For help and advice with all frameworks, algorithms and interpretation of your deep learning and AI results.

Infrastructure Engineer Consultancy

For help in hardware configuration or software (Linux) set up of your deep learning environment.

Security Consultancy

For help with the security and regulatory compliance aspects of your deep learning project.

Project Consultancy

Ongoing support for longer-term projects that may encompass skill sets from all of the above areas.

Get Scan AI Cloud Discounts when you enrol in the NVIDIA Deep Learning Institute courses

Scan is pleased to announce that on top of our NVIDIA Elite Solution Provider status, we now have certified instructors to host NVIDIA Deep Learning Institute (DLI) workshops. All of the learning and expertise gained using NVIDIA DGX systems, alongside the other elements of our AI ecosystem, places us in a great position to educate and train developers in how to apply deep learning. Furthermore, we've teamed up with Manchester Science Partnerships to deliver these workshops in a cutting-edge environment in the heart of Manchester, making them easily accessible to all. Enrolling on a DLI course makes you eligible for discounted rates when using the Scan AI Cloud.

Find out more

* NVIDIA DLI courses must be delivered by Scan to qualify for discount
