A Scan managed secure hosting environment delivers access to the latest GPU technologies and the fastest data management solutions, optimised and maintained by the experts within the Scan AI team.

Deep learning and AI deployments sit at the bleeding edge of IT infrastructure. They evolve at an incredibly fast rate and are always designed around optimal throughput from data storage to GPU, maximising GPU utilisation to achieve the fastest time to results and increase the return on investment on your purchased hardware.

NVIDIA CSP Program

NVIDIA certifies Scan as an Elite Solution Provider, and through NVIDIA's Cloud Service Provider program Scan, together with our data centre partners, can offer an enhanced managed service.

Why Scan managed hosting?

The Scan AI Cloud has been designed from the ground up to take advantage of GPU acceleration and includes optimised appliances at every stage of the architecture to reduce bottlenecks and ensure scalability for the most intensive and complex deployments.

Managed solutions

Secure managed solutions allow you to store data in a data centre without the risk of uploading it to hyperscalers.

AI specialists

Our Deep Learning and AI specialists can take responsibility for infrastructure management and enhance your in-house technical skills.

Low cost

Rental or operational cost models on the latest infrastructure provide a more flexible and cost-effective solution.

Unique access

Dedicated, uncontended high-performance access to Deep Learning & AI platforms.

Secure

Platform security with physical segregation and no resource sharing, accessible through a range of secure connectivity options.

No-risk integration

Scalable platform without the challenges and risks of later integration.

Fully Managed and Supported Infrastructure

Unlike some other remote GPU-accelerated services, our cloud environment is supported by a full team of expert consultants, including data scientists, Linux engineers and hardware architects, to support your AI development, whether it be a proof-of-concept trial, project planning or full deployment. These specialist teams are available in half-day increments to hand-hold your deep learning projects, or simply to complement your in-house experts as and when you require.

Data Scientist Consultancy

For help and advice with all frameworks, algorithms and interpretation of your deep learning and AI results.

Infrastructure Engineer Consultancy

For help with hardware configuration or software (Linux) setup of your deep learning environment.

Security Consultancy

For help with the security and regulatory compliance aspects of your deep learning project.

Project Consultancy

Ongoing support for longer-term projects that may encompass skill sets from all of the above areas.

The AI Journey

Deep Learning and AI have increasing relevance and growth in many industry sectors, as the technology advances and offers ever faster time to results and greater depth of insight from data. Working with key partners, Scan is a leading supplier within this market and works closely with our customers along their AI Journey, categorising the process into three main areas:

Development

Training

Inferencing

GPU Compute: Data Science Workstations & DGX Workstation, NVIDIA DGX-1 & DGX-2, NVIDIA Tesla T4 Servers
Storage: When training with Deep Learning & AI, a key part of a successful platform is the use of low-latency, high-throughput flash storage technology. Scan has partnerships with all of the major vendors to support any enterprise requirement for storage compliance and performance.
Networking: Scan managed hosted solutions standardise on 100Gb Ethernet and 100Gb InfiniBand, using best-in-class Mellanox solutions for high-performance networking to ensure reliable transfer of data at high speeds.

The Scan AI ecosystem

Scan provides the GPU compute, networking and storage as required.


Global data center footprint with integrated security capabilities

Unified network security & deep analytics across hybrid architectures

On-Premises security and analytics capabilities via software and as-a-service delivery

Specialty services for key verticals (financial services, public sector, healthcare)

Who Are Cyxtera?

By combining a modern, hybrid-ready information security fabric with an uncommonly diverse and distributed footprint of 57 world-class data centres, Cyxtera delivers a secure platform for connecting and protecting dedicated infrastructure, private clouds and public clouds. For enterprises, government agencies and service providers, Cyxtera offers powerful, secure IT infrastructure capabilities paired with agile, dynamic software-defined security.

It is this experience and pedigree that has led us to choose Cyxtera for Scan's AI Cloud platform, one with security integrated as a core service offering in facilities that sit at the nexus of enterprise IT and the cloud.

Pre-installed Frameworks

The class-leading hardware configurations we offer come pre-installed with a range of common and widely used deep learning frameworks. Deep learning frameworks are systems for the training and deployment of deep learning networks which provide the flexibility to design and adapt the network to your specific task. They allow you to hit the ground running, prototyping and testing ideas and applications without the considerable overhead of writing significant amounts of your own code. All the major frameworks use cuDNN, so you can rely on optimised code, and each one has a community of developers and users who can help you not only get the most from the framework you choose but also guide you in modifying the framework to provide new features you may require for your application.

Deep Learning Frameworks
Caffe2 is a deep-learning framework designed to easily express all model types, for example CNN, RNN and more, in a friendly Python-based API, and execute them using a highly efficient C++ and CUDA back-end. Users have the flexibility to assemble their model using combinations of high-level and expressive operations in Python, allowing for easy visualisation, or to serialise the created model and directly use the underlying C++ implementation. Caffe2 supports single- and multi-GPU execution, along with support for multi-node execution.
The Microsoft Cognitive Toolkit, formerly known as CNTK, is a unified deep-learning toolkit that describes neural networks as a series of computational steps via a directed graph. In this directed graph, leaf nodes represent input values or network parameters, while other nodes represent matrix operations upon their inputs.
MATLAB makes deep learning easy for engineers, scientists and domain experts. With tools and functions for managing and labelling large data sets, MATLAB also offers specialised toolboxes for working with machine learning, neural networks, computer vision and automated driving. With just a few lines of code, MATLAB lets you create and visualise models, and deploy models to servers and embedded devices without being an expert. MATLAB also enables users to generate high-performance CUDA code for deep learning and vision applications automatically from MATLAB code.
MXNet is a deep learning framework designed for both efficiency and flexibility. It allows you to mix symbolic and imperative programming styles to maximise efficiency and productivity.

At its core is a dynamic dependency scheduler that automatically parallelises both symbolic and imperative operations on the fly. A graph optimisation layer on top of that makes symbolic execution fast and memory efficient. The library is portable and lightweight, and it scales to multiple GPUs and multiple machines.
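
As an illustration of that mix of imperative and symbolic styles, below is a minimal sketch using MXNet's NDArray and Gluon APIs. It assumes the mxnet Python package is installed and shows hybridize() compiling an imperatively defined network into a symbolic graph.

    # Imperative: NDArray operations execute eagerly, much like NumPy
    from mxnet import nd
    from mxnet.gluon import nn

    a = nd.ones((2, 3))
    print((a * 2 + 1).asnumpy())

    # Symbolic: hybridize() converts the same network into a static graph
    # that the dependency scheduler can optimise and parallelise
    net = nn.HybridSequential()
    net.add(nn.Dense(4, activation="relu"), nn.Dense(2))
    net.initialize()
    net.hybridize()
    print(net(nd.ones((2, 3))).shape)   # (2, 2)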
Caffe is developed by the Berkeley Vision and Learning Center (BVLC) and by community contributors. NVIDIA Caffe, also known as NVCaffe, is an NVIDIA-maintained fork of BVLC Caffe tuned for NVIDIA GPUs, particularly in multi-GPU configurations.
PyTorch is a Python package that provides two high-level features: tensor computation (like NumPy) with strong GPU acceleration, and deep neural networks built on a tape-based autograd system. You can reuse your favourite Python packages such as NumPy, SciPy and Cython to extend PyTorch when needed.
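
The short sketch below illustrates those two features, GPU-accelerated tensor computation and tape-based autograd. It is a minimal example assuming a standard PyTorch installation, falling back to the CPU if no GPU is present.

    import torch

    # Tensor computation (NumPy-like), placed on a GPU if one is available
    device = "cuda" if torch.cuda.is_available() else "cpu"
    x = torch.randn(3, 3, device=device)
    y = x @ x.t()                     # matrix multiply on the chosen device

    # Tape-based autograd: operations on tensors with requires_grad=True
    # are recorded, and backward() replays the tape to compute gradients
    w = torch.randn(3, requires_grad=True, device=device)
    loss = (w ** 2).sum()
    loss.backward()
    print(w.grad)                     # dloss/dw = 2 * w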
TensorFlow is an open source software library for numerical computation using data flow graphs. Nodes in the graph represent mathematical operations, while the graph edges represent the multidimensional data arrays (tensors) that flow between them. This flexible architecture lets you deploy computation to one or more CPUs or GPUs in a desktop, server or mobile device without rewriting code. For visualisation, TensorFlow offers TensorBoard, a suite of visualisation tools.
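
As a minimal sketch of the data flow graph model, the example below builds a small graph and then executes it. It assumes a TensorFlow 1.x installation, where graphs are run explicitly inside a session.

    import tensorflow as tf

    # Nodes are operations; the edges between them carry tensors
    a = tf.constant([[1.0, 2.0], [3.0, 4.0]], name="a")
    b = tf.constant([[1.0, 1.0], [0.0, 1.0]], name="b")
    c = tf.matmul(a, b, name="c")     # defines a graph node, nothing runs yet

    # The graph executes only when run inside a session, on CPU or GPU
    with tf.Session() as sess:
        print(sess.run(c))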
Chainer is a Python-based deep learning framework aiming at flexibility. It provides automatic differentiation APIs based on the define-by-run approach, also known as dynamic computational graphs, as well as object-oriented high-level APIs to build and train neural networks. It supports CUDA and cuDNN using CuPy for high-performance training and inference.
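
To show what define-by-run means in practice, here is a minimal sketch assuming the chainer and numpy packages are installed: the computational graph is recorded as ordinary Python code executes, and backward() traverses that recorded graph.

    import numpy as np
    from chainer import Variable
    import chainer.functions as F

    x = Variable(np.array([[1.0, 2.0, 3.0]], dtype=np.float32))
    y = F.sum(x ** 2)   # the graph is built on the fly as this line runs
    y.backward()        # gradients flow back through the recorded graph
    print(x.grad)       # [[2. 4. 6.]]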