Mellanox ConnectX-5 VPI Adapter Card, Single Port QSFP28, PCIe3.0 x16
1 Port Mellanox MCX555A-ECAT ConnectX-5 VPI Adapter Card, QSFP28, PCIe3.0 x16
Delivery by DPD to your specified address. Receive an SMS with a one-hour delivery window. Weekend, timed, and European delivery options are available at checkout.
Order online and collect from our Bolton store (25-28 Enterprise Park, Middlebrook, Horwich, Bolton, BL6 6PE).
Protect against installation damage for 28 days.
ScanProtect is an enhanced warranty specifically designed and offered by Scan to reduce the risk of any mishap or damage to components during installation.
Our aim with ScanProtect is to encourage our customers to upgrade or build their own PC confidently. For a small additional fee our customers are provided with complete peace of mind.
ConnectX-5 VPI Card – 100Gb/s InfiniBand & Ethernet Adapter Card
ConnectX-5 with Virtual Protocol Interconnect® supports two ports of 100Gb/s InfiniBand and Ethernet connectivity, sub-600ns latency, and very high message rate, plus PCIe switch and NVMe over Fabric offloads, providing the highest performance and most flexible solution for the most demanding applications and markets: Machine Learning, Data Analytics, and more.
ConnectX-5 delivers high bandwidth, low latency, and high computation efficiency for high-performance, data-intensive, and scalable compute and storage platforms. ConnectX-5 enhances HPC infrastructures by providing MPI, SHMEM/PGAS, and Rendezvous Tag Matching offloads, hardware support for out-of-order RDMA Write and Read operations, and additional Network Atomic and PCIe Atomic operations support.
ConnectX-5 VPI utilises both IBTA RDMA (Remote Direct Memory Access) and RoCE (RDMA over Converged Ethernet) technologies, delivering low latency and high performance. ConnectX-5 enhances RDMA network capabilities by complementing the switch Adaptive Routing capabilities and supporting out-of-order data delivery while maintaining ordered completion semantics, providing multipath reliability and efficient support for all network topologies, including DragonFly and DragonFly+.
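The out-of-order delivery with ordered completion semantics described above can be illustrated in software. This is a conceptual sketch, not the adapter's implementation: data arriving over multiple paths is placed immediately, but completions are released to the consumer only as a contiguous in-order prefix. All names here are illustrative.

```python
from typing import Dict, List


class OrderedCompletionBuffer:
    """Illustrative model: accept out-of-order arrivals, report
    completions strictly in sequence order (what ConnectX-5 does in
    hardware for multipath/adaptive-routed RDMA traffic)."""

    def __init__(self) -> None:
        self.next_seq = 0
        self.arrived: Dict[int, bytes] = {}

    def on_arrival(self, seq: int, data: bytes) -> List[int]:
        """Record an arrival; return sequence numbers now completable in order."""
        self.arrived[seq] = data              # data placed out of order
        completions: List[int] = []
        while self.next_seq in self.arrived:  # release the contiguous prefix
            completions.append(self.next_seq)
            del self.arrived[self.next_seq]
            self.next_seq += 1
        return completions
```

A packet for sequence 1 arriving before sequence 0 is buffered; once 0 arrives, both complete together, preserving ordered semantics for the consumer.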
ConnectX-5 also supports Burst Buffer offload for background checkpointing without interfering with the main CPU operations, and the innovative Dynamically Connected Transport (DCT) service to ensure extreme scalability for compute and storage systems.
NVMe storage devices are gaining popularity, offering very fast storage access. The evolving NVMe over Fabrics (NVMe-oF) protocol leverages RDMA connectivity for remote access. ConnectX-5 offers further enhancements by providing NVMe-oF target offloads, enabling very efficient NVMe storage access with no CPU intervention, and thus improved performance and lower latency.
Moreover, the embedded PCIe switch enables customers to build standalone storage or Machine Learning appliances. As with the earlier generations of ConnectX adapters, standard block and file access protocols can leverage RoCE for high-performance storage access. A consolidated compute and storage network achieves significant cost-performance advantages over multi-fabric networks.
ConnectX-5 enables an innovative storage rack design, Host Chaining, by which different servers can interconnect directly without involving the Top of the Rack (ToR) switch. Alternatively, the Multi-Host technology that was first introduced with ConnectX-4 can be used. Mellanox Multi-Host™ technology, when enabled, allows multiple hosts to be connected into a single adapter by separating the PCIe interface into multiple and independent interfaces. With the various new rack design alternatives, ConnectX-5 lowers the total cost of ownership (TCO) in the data centre by reducing CAPEX (cables, NICs, and switch port expenses), and by reducing OPEX by cutting down on switch port management and overall power usage.
Cloud and Web2.0 Environments
Cloud and Web2.0 customers developing their platforms in Software Defined Network (SDN) environments leverage their servers' operating-system virtual-switching capabilities to enable maximum flexibility.
Open vSwitch (OVS) is an example of a virtual switch that allows virtual machines to communicate with each other and with the outside world. The virtual switch traditionally resides in the hypervisor, and switching is based on twelve-tuple matching on flows. Such software-based vSwitch or vRouter solutions are CPU-intensive, affecting system performance and preventing full utilisation of the available bandwidth.
Mellanox Accelerated Switching and Packet Processing (ASAP2) Direct technology offloads the vSwitch/vRouter by handling the data plane in the NIC hardware while leaving the control plane unmodified. The result is significantly higher vSwitch/vRouter performance without the associated CPU load.
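The flow handling described above can be sketched with hypothetical names: the first packet of a flow takes the CPU-intensive slow path (full twelve-tuple classification in software), and the resulting match-action entry is cached so subsequent packets hit the fast path. ASAP2 places that cached match-action table in NIC hardware; this model is purely illustrative.

```python
from typing import Callable, Dict, Tuple

# A flow key is a tuple of header fields, e.g.
# (in_port, eth_src, eth_dst, eth_type, ip_src, ip_dst, proto, sport, dport, ...)
FlowKey = Tuple


class FlowTable:
    """Illustrative vSwitch model: slow path classifies a flow once
    (control plane, unmodified); cached entries act as the offloaded
    match-action data plane."""

    def __init__(self, slow_path: Callable[[FlowKey], str]) -> None:
        self.cache: Dict[FlowKey, str] = {}  # offloaded match-action entries
        self.slow_path = slow_path           # CPU-intensive classification
        self.misses = 0

    def forward(self, key: FlowKey) -> str:
        if key not in self.cache:            # flow miss -> software slow path
            self.misses += 1
            self.cache[key] = self.slow_path(key)
        return self.cache[key]               # flow hit -> cached fast path
```

After the first packet of a flow is classified, every later packet with the same twelve-tuple is forwarded from the cache without touching the slow path, which is why moving the cache into hardware removes the CPU load.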
The vSwitch/vRouter offload functions supported by ConnectX-5 include encapsulation and de-capsulation of Overlay Network headers (for example, VXLAN, NVGRE, MPLS, GENEVE, and NSH), stateless offloads of inner packets, packet-header rewrite enabling NAT functionality, and more. Moreover, the intelligent ConnectX-5 flexible pipeline capabilities, including a flexible parser and flexible match-action tables, can be programmed, enabling hardware offloads for future protocols.
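As an illustration of the encapsulation work being offloaded, the 8-byte VXLAN header defined in RFC 7348 (a flags byte with the valid-VNI bit set, a 24-bit VNI, and reserved fields) can be built and stripped as follows. Outer UDP/IP/Ethernet headers are omitted for brevity, and the helper names are hypothetical:

```python
import struct
from typing import Tuple


def vxlan_encap(vni: int, inner_frame: bytes) -> bytes:
    """Prepend the 8-byte VXLAN header (RFC 7348): flags word with the
    valid-VNI bit (0x08) in the top byte, then VNI in bits 31..8."""
    header = struct.pack(">II", 0x08 << 24, vni << 8)
    return header + inner_frame


def vxlan_decap(packet: bytes) -> Tuple[int, bytes]:
    """Strip the VXLAN header and recover (vni, inner_frame)."""
    flags_word, vni_word = struct.unpack(">II", packet[:8])
    assert flags_word >> 24 == 0x08, "valid-VNI flag must be set"
    return vni_word >> 8, packet[8:]
```

Performing this header manipulation per packet in software costs CPU cycles on every frame; ConnectX-5 does the equivalent encapsulation and de-capsulation in hardware.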
ConnectX-5 SR-IOV technology provides dedicated adapter resources and guaranteed isolation and protection for virtual machines (VMs) within the server. Moreover, with ConnectX-5 Network Function Virtualisation (NFV), a VM can be used as a virtual appliance. With full data-path operations offloads as well as hairpin hardware capability and service chaining, data can be handled by the Virtual Appliance with minimum CPU utilisation.
With these capabilities data centre administrators benefit from better server utilisation while reducing cost, power, and cable complexity, allowing more Virtual Appliances, Virtual Machines and more tenants on the same hardware.
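On Linux, SR-IOV virtual functions are enabled through the standard `sriov_numvfs` sysfs attribute (root privileges and an SR-IOV-capable NIC required). A minimal sketch follows; the `sysfs_root` parameter is an assumption added only so the snippet can be exercised without hardware:

```python
from pathlib import Path


def enable_sriov_vfs(pf: str, num_vfs: int,
                     sysfs_root: str = "/sys/class/net") -> None:
    """Enable `num_vfs` SR-IOV virtual functions on physical function `pf`
    via the kernel's sriov_numvfs interface. The kernel rejects changing a
    nonzero VF count directly to another nonzero count, so reset to 0 first."""
    vf_file = Path(sysfs_root) / pf / "device" / "sriov_numvfs"
    if vf_file.read_text().strip() != "0":
        vf_file.write_text("0")
    vf_file.write_text(str(num_vfs))
```

Each virtual function then appears as an independent PCIe device that can be assigned directly to a VM.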
Standard Host Management
Mellanox host management and control capabilities include NC-SI over MCTP over SMBus and MCTP over PCIe as Baseboard Management Controller (BMC) interfaces, as well as PLDM for Monitoring and Control (DSP0248) and PLDM for Firmware Update (DSP0267).
• Adaptive routing on reliable transport.
• Burst buffer offloads for background checkpointing.
• NVMe over Fabrics (NVMe-oF) offloads.
• Back-end switch elimination by host chaining.
• Embedded PCIe switch.
• Enhanced vSwitch/vRouter offloads.
• Flexible pipeline.
• RoCE for overlay networks.
• PCIe Gen 4 support.
Please note your statutory rights are not affected.
For further information regarding Scan's warranty procedure please see our terms and conditions
- Warranty: 12 months, return to base
- DOA Period: 7 days
- RTB Period: 12 months
Date Issued: 26th Oct 2006