40-Port Non-Blocking Managed HDR 200Gb/s InfiniBand Smart Switch
Mellanox provides the world’s smartest switch, enabling in-network computing through the Co-Design Scalable Hierarchical Aggregation and Reduction Protocol (SHARP)™ technology. QM8700 has the highest fabric performance available in the market with up to 16Tb/s of non-blocking bandwidth with sub-130ns port-to-port latency.
Scaling-Out Data Centres with HDR 200G InfiniBand
Faster servers, combined with high-performance storage and applications that use increasingly complex computations, are causing data bandwidth requirements to spiral upward. As servers are deployed with next-generation processors, High-Performance Computing (HPC) environments and Enterprise Data Centres (EDCs) will need every last bit of bandwidth, delivered by Mellanox’s next generation of HDR InfiniBand high-speed smart switches.
World’s Smartest Switch
Built with the Mellanox Quantum InfiniBand switch device, the QM8700 provides up to forty 200Gb/s ports, with full bi-directional bandwidth per port. These stand-alone switches are an ideal choice for top-of-rack leaf connectivity or for building small to extremely large clusters.
QM8700 is the world’s smartest network switch, designed to enable in-network computing through the Co-Design Scalable Hierarchical Aggregation and Reduction Protocol (SHARP)™ technology. The Co-Design architecture enables the usage of all active data centre devices to accelerate the communications frameworks using embedded hardware, resulting in order-of-magnitude improvements in application performance.
QM8700 enables efficient computing with features such as static routing, adaptive routing, congestion control and enhanced VL mapping to enable modern topologies (SlimFly, Dragonfly+, 6DT). These ensure the maximum effective fabric bandwidth by eliminating congestion hot spots.
Collective Communication Acceleration
Collective communication describes communication patterns in which all members of a group of communication endpoints participate. Collective communications are commonly used in HPC protocols such as MPI and SHMEM.
The Mellanox Quantum switch improves the performance of selected collective operations by processing the data as it traverses the network, eliminating the need to send data multiple times between endpoints.
It also supports the aggregation of large data vectors at wire speed to enable MPI large vector reduction operations, which are crucial for machine learning applications.
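As a rough illustration of why in-network reduction helps, the following Python sketch (not Mellanox code; the radix-40 tree and the message-counting model are illustrative assumptions) sums per-endpoint values through a tree of switches that aggregate partial results as data passes through, so each value crosses each level of the fabric only once:

```python
def tree_reduce(values, radix=40):
    """Sum per-endpoint values through a tree of radix-`radix` switches,
    counting one message per link traversal. Each switch aggregates its
    children's partial sums in-network, so no endpoint has to receive,
    reduce and re-send intermediate data (the idea behind SHARP)."""
    level = list(values)
    messages = 0
    while len(level) > 1:
        nxt = []
        for i in range(0, len(level), radix):
            group = level[i:i + radix]
            messages += len(group)   # each child sends one partial result up
            nxt.append(sum(group))   # the switch reduces as data traverses it
        level = nxt
    return level[0], messages

# 80 HDR100 endpoints, each contributing the value 1:
total, msgs = tree_reduce([1] * 80)
print(total, msgs)  # 80 82
```

With a host-based reduction tree, the same data would enter and leave endpoint NICs at every level; here each partial result crosses each switch stage exactly once before the final value is distributed back.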
HDR100
QM8700, together with the Mellanox ConnectX®-6 adapter card, supports HDR100. By splitting each 4-lane HDR port into two 2-lane 100Gb/s ports, the QM8700 can support up to 80 ports of 100G, making it the densest TOR switch available in the market. This is a perfect solution for double-dense racks with more than 40 servers per rack, and it also helps small to medium deployments avoid scaling to a 3-level fat tree, lowering power, latency and space.
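The port arithmetic behind HDR100 splitting can be checked directly. A minimal sketch; the lane count and per-lane rate are standard HDR figures, not taken from this document:

```python
LANES_PER_PORT = 4   # each QSFP56 HDR port carries four lanes
GBPS_PER_LANE = 50   # HDR signalling runs at 50 Gb/s per lane

hdr_ports = 40
port_speed = LANES_PER_PORT * GBPS_PER_LANE           # 200 Gb/s per HDR port
hdr100_ports = hdr_ports * 2                          # each port splits into two 2-lane ports
hdr100_speed = (LANES_PER_PORT // 2) * GBPS_PER_LANE  # 100 Gb/s per HDR100 port

# Aggregate switch throughput counts both directions of every port:
aggregate_tbps = hdr_ports * port_speed * 2 / 1000

print(hdr100_ports, hdr100_speed, aggregate_tbps)  # 80 100 16.0
```

The 16Tb/s aggregate figure quoted for the switch is thus the full-duplex total across all forty 200Gb/s ports.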
Management
The QM8700’s x86 ComEx Broadwell CPU comes with an on-board subnet manager, enabling simple, out-of-the-box bring-up for up to 2K nodes in the fabric. Running the MLNX-OS® software package, it delivers full chassis management through CLI, WebUI, SNMP or JSON interfaces.
QM8700 also incorporates Mellanox Unified Fabric Manager software for managing scale-out InfiniBand computing environments, enabling efficient provisioning, health indications and monitoring of the cluster. UFM® ensures that the fabric is up and running at maximum performance at all times.
Performance
• 40 x HDR 200Gb/s ports in a 1U switch.
• 80 x HDR100 100Gb/s ports (using splitter cables).
• 16Tb/s aggregate switch throughput.
• Sub-130ns switch latency.
Optimised Design
• 1+1 redundant & hot-swappable power.
• N+1 redundant & hot-swappable fans.
• 80 Gold+ and Energy Star certified power supplies.
• x86 ComEx Broadwell CPU.
Advanced Design
• Adaptive routing.
• Congestion control.
• Collective offloads (Mellanox SHARP technology).
• VL mapping (VL2VL).