Wired Network Cards Buyers Guide

A wired network card - most often referred to as a Network Interface Card (NIC) or LAN (Local Area Network) card when dealing with Ethernet; a Host Channel Adapter (HCA) when using InfiniBand; or a Host Bus Adapter (HBA) when using Fibre Channel - is an add-in card that enables a system to connect to the wider network and the outside world. Although most systems include a basic network card built into the motherboard as standard, there are many types of connectivity, so changing or upgrading the network card is quite common - this is what we’ll look at in this guide.

What is a NIC, HCA or HBA?

NICs, HCAs and HBAs are all add-in cards that fit into one of the PCI Express (PCIe) slots on the motherboard. Although they differ in name, they all broadly perform the same function - allowing one or more cables to connect to network switches, which make similar connections to all the other systems, storage devices and printers. Each network card is assigned a unique hardware address (a MAC address in the case of Ethernet) so that all other devices know which particular machine they are sending data to or have received it from.

There may be one or more ports on the network card, and these are available in various types as we’ll see later. Only a single port is required to make a network connection, but extra ports can be used to provide failover or to increase bandwidth - again, we’ll go into this later.
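As a quick illustration of both points, the short Python sketch below lists every network interface the operating system can see, together with its unique hardware (MAC) address and current link state. It is a minimal sketch that assumes a Linux system, where this information is exposed under /sys/class/net.

    import os

    SYS_NET = "/sys/class/net"  # Linux exposes one directory per network interface here

    def read_attr(path):
        """Return the stripped contents of a sysfs file, or None if it cannot be read."""
        try:
            with open(path) as f:
                return f.read().strip()
        except OSError:
            return None

    # Print each interface with its hardware (MAC) address and current link state.
    for iface in sorted(os.listdir(SYS_NET)):
        mac = read_attr(os.path.join(SYS_NET, iface, "address"))
        state = read_attr(os.path.join(SYS_NET, iface, "operstate"))
        print(f"{iface:12} MAC={mac}  link={state}")

Running this on a machine with a dual-port NIC shows two separate interfaces, each with its own address - which is exactly what allows a spare port to take over if the first one fails.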

Why Change the NIC?

As we mentioned earlier, most systems include a basic Ethernet NIC - typically with a single Ethernet port. If you are making connections to a regular home network or small corporate network, then Ethernet is normally adequate, but you may want more ports on the NIC or more NICs - both are ways of adding redundancy and resilience to your server and network, as you are making multiple connections in case one should fail. If you are dealing with High Performance Computing (HPC), then InfiniBand is the usual choice rather than Ethernet. Similarly, a Fibre Channel connection is often used when dealing with high-end storage, so in both these cases you will need to swap out your NICs for HCAs or HBAs accordingly.

Let’s take a look at each of these technologies.

Networking Technologies and Interfaces

Each of the networking technologies we’ll discuss essentially has the same job - transmitting data packets from one device on the network to another. They differ in the ways they do this and the associated resources they use to do it.

Ethernet

Ethernet is the most common form of communication seen in a network and has been around since the early 1980s. Over this time the speeds of available Ethernet connections have vastly increased. The first commonly available NICs were capable of 10 megabits per second (10Mbps), followed by 100Mbps and Gigabit Ethernet (1GbE or 1000Mbps).

In a corporate network, 1GbE has long been the standard, with faster 10GbE, 25GbE, 40GbE and 50GbE speeds also available. The last few years have seen Ethernet speeds increase to 100GbE, 200GbE and, more recently, 400GbE. Although all these speeds are delivered over Ethernet, the distance of the connection and the speed required have an impact on the interfaces on the NIC and the types of cabling used.

On NICs with speeds of up to 10GbE, an RJ45 connector is used, designed to take a copper-based cable. Above 10GbE it is more common to see an SFP (small form-factor pluggable) port.

For RJ45 Ethernet connections the maximum distance data can be transmitted is 100m, which is a limitation when looking at networks in large buildings, campuses or even city-wide deployments. The SFP port allows fibre optic cabling to be employed, which suffers less signal loss and can achieve much higher throughput speeds. Depending on the distance and speed required, different SFPs can be plugged into the NIC - we’ll look at the different SFP modules later in this guide.
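If you are weighing up an upgrade, it is worth checking what speed your existing NIC has actually negotiated. The sketch below reads this from Linux sysfs; it is only an illustration, assumes a Linux system, and uses "eth0" purely as a placeholder interface name to be replaced with your own.

    # Report the negotiated link speed and duplex of an interface via Linux sysfs.
    # "eth0" is a placeholder - substitute the interface name on your system.
    IFACE = "eth0"

    def read_sysfs(attr):
        try:
            with open(f"/sys/class/net/{IFACE}/{attr}") as f:
                return f.read().strip()
        except OSError:
            return None

    speed_mbps = read_sysfs("speed")   # e.g. "1000" for 1GbE, "10000" for 10GbE
    duplex = read_sysfs("duplex")      # usually "full" on modern links

    if speed_mbps and speed_mbps.isdigit():
        print(f"{IFACE}: {int(speed_mbps) / 1000:g} GbE, {duplex} duplex")
    else:
        print(f"{IFACE}: link speed not reported (interface may be down or virtual)")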

It is worth mentioning that although Ethernet has traditionally lagged behind InfiniBand speeds, this is now changing, thanks to the increasingly common use of SFP interfaces by the likes of NVIDIA Mellanox - perhaps driven by the much larger installed base of Ethernet technology in the market and the opportunity for upgrades.

InfiniBand

InfiniBand is an alternative technology to Ethernet. Developed in the late 1990s, it is usually found in HPC applications where high bandwidth and low latency are key requirements. Although an InfiniBand HCA fits in a server in the same way and works in a similar way to an Ethernet NIC to transfer data, HCAs have historically achieved improved throughput by not needing to use the server CPU to control data transmission - latency is reduced by removing this step. Like Ethernet, there have been several generations of InfiniBand, starting with SDR (Single Data Rate) providing 2.5Gbps throughput per lane. This has since been superseded by Dual Data Rate (DDR - 5Gbps), Quad Data Rate (QDR - 10Gbps), Fourteen Data Rate (FDR - 14Gbps), Enhanced Data Rate (EDR - 25Gbps), High Data Rate (HDR - 50Gbps) and the latest Next Data Rate (NDR - 100Gbps) cards. Once again, differing SFP modules will be employed depending on the speed and distance required.

An InfiniBand cable comes in three possible widths that dictate the overall throughput: a 1x cable, a 4x cable or a 12x cable - the number referring to the physical ‘links’ within the cable. The 4x cable type is the most common and, when used with the latest NDR interface, offers speeds of up to 400Gbps. NVIDIA Mellanox, the market leader in InfiniBand technology, has conversely begun to feature CPU offloading technology on its Ethernet cards too, making the difference between Ethernet and InfiniBand even smaller.
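The arithmetic behind these headline figures is simply the per-lane rate of the generation multiplied by the number of links in the cable. The sketch below tabulates this using the per-lane rates quoted above; it ignores encoding and protocol overheads, so treat the results as nominal link rates rather than usable bandwidth.

    # Per-lane signalling rates (Gbps) for each InfiniBand generation listed above.
    PER_LANE_GBPS = {
        "SDR": 2.5, "DDR": 5, "QDR": 10, "FDR": 14,
        "EDR": 25, "HDR": 50, "NDR": 100,
    }

    LINK_WIDTHS = (1, 4, 12)  # number of physical links in the cable

    def link_rate(generation, width):
        """Nominal aggregate rate in Gbps for a given generation and cable width."""
        return PER_LANE_GBPS[generation] * width

    # For example, the common 4x cable with NDR: 100 Gbps x 4 = 400 Gbps.
    for gen in PER_LANE_GBPS:
        rates = ", ".join(f"{w}x = {link_rate(gen, w):g} Gbps" for w in LINK_WIDTHS)
        print(f"{gen}: {rates}")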

Data Processing Units

With each generation, network cards - whether Ethernet NICs or InfiniBand HCAs - deliver faster and faster throughput, but additional features are also being added to remove overheads from the system CPU(s). As more and more network traffic is taken up by software-defined management functions for tasks such as storage and security, it makes sense for these functions to be processed by the network card rather than the CPU, as routing them through the CPU adds latency and slows the system overall.

The NVIDIA BlueField-2 data processing unit (DPU) is the first of this type of network card and is designed as datacentre infrastructure on a chip - optimised for traditional enterprise, high-performance computing (HPC) and AI workloads. It delivers a broad set of accelerated software-defined networking, storage, security and management services. By combining either an NVIDIA Mellanox Ethernet NIC or InfiniBand HCA with an array of Arm cores, BlueField-2 offers purpose-built hardware-acceleration engines with full software programmability.


OmniPath

OmniPath Architecture (OPA) is a similar technology to InfiniBand, but developed by Intel. It is also aimed at the high-bandwidth, low-latency high performance computing end of the market and has a maximum throughput of 100Gbps. Further development to a second generation delivering 200Gbps speeds was planned, but as of 2019 the technology is no longer in development and is effectively obsolete. We mention it in this guide in case a new server is to be connected to an older OmniPath network, as in that case an OmniPath NIC would still be required.

Fibre Channel

Fibre Channel (FC) is another high-speed networking technology, primarily used for transmitting data among datacentres, computer servers, switches and storage at data rates of up to 128Gbps. Fibre Channel was long seen as the leading technology for a Storage Area Network (SAN), and as it differs entirely from Ethernet, servers would need an Ethernet NIC to communicate with the wider network and an FC card (or Host Bus Adapter - HBA) to communicate with the SAN. More recently, an alternative form of FC called Fibre Channel over Ethernet (FCoE) was developed to lower the cost of FC solutions by eliminating the need to purchase separate HBA hardware. A Fibre Channel HBA has fibre optic interfaces similar to those now seen at the highest speeds of Ethernet and InfiniBand, just without the need for an SFP module.

Although Fibre Channel networking is more mature than InfiniBand and exhibits many of the same attributes - high performance, high bandwidth and low latency - it now lacks the higher speeds that InfiniBand and Ethernet are delivering, so it may decline in use.

Form Factors

PCIe Add-in Cards

The majority of NICs are PCI Express (PCIe) add-in cards designed to fit into a slot on the motherboard. They are usually supplied with a full-height backplate but may also include a half-height backplate for use in more compact systems.

It is important to ensure you choose the correct slot depending on whether the NIC is an x4, x8 or x16 card - the number refers to the quantity of PCIe lanes the card uses to transmit data. Some higher-performance NICs will also require a PCIe 4.0 slot to deliver full performance, so check the specifications of your system before choosing a NIC.
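Once a card is installed, it is also worth confirming that it has actually negotiated the expected number of lanes and PCIe generation. The sketch below reads this from Linux sysfs; it assumes a Linux system and uses "eth0" purely as a placeholder interface name.

    import os

    # Check the PCIe link a NIC has negotiated (Linux sysfs).
    # "eth0" is a placeholder - substitute the interface name on your system.
    IFACE = "eth0"
    pci_dev = os.path.realpath(f"/sys/class/net/{IFACE}/device")

    def read_attr(attr):
        try:
            with open(os.path.join(pci_dev, attr)) as f:
                return f.read().strip()
        except OSError:
            return "unknown"

    print(f"{IFACE} ({os.path.basename(pci_dev)})")
    print("  negotiated link width:", read_attr("current_link_width"), "lanes")
    print("  negotiated link speed:", read_attr("current_link_speed"))  # e.g. 8.0 GT/s for PCIe 3.0
    print("  maximum link width:   ", read_attr("max_link_width"), "lanes")
    print("  maximum link speed:   ", read_attr("max_link_speed"))

If the negotiated width is lower than the card’s maximum (an x8 card reporting 4 lanes, for example), the card is in the wrong slot or the slot is electrically narrower than it appears.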

For larger server systems, Mellanox offers a technology called Socket Direct - this enables direct PCIe access to multiple CPU sockets, eliminating the need for network traffic to traverse the inter-processor bus and optimising overall system performance. It is worth noting that in PCIe form the card takes up an extra slot in order to deliver this enhanced CPU communication.

OCP 3.0 Cards

The Open Compute Project (OCP) is an organisation that shares designs of datacentre products and best practices among companies in an attempt to promote standardisation and improve interoperability. The OCP is supported by many large networking manufacturers, including Mellanox, Cisco and Dell, the result being a standard OCP form factor for NICs. OCP 3.0 is the latest version.

It is worth pointing out that the OCP covers all facets of the datacentre, including server design, so an OCP NIC would only be required when using an OCP server format.

USB

It is possible to connect to a network using a USB dongle too, though these are usually only available for 1GbE Ethernet rather than any of the other technologies. The adapter fits into any USB port on the system and provides an Ethernet connection.

It is fair to say that this type of network connection would only be employed in the absence of available PCIe slots.

Accessories

SFP Transceiver Modules

As we’ve mentioned, the SFP port(s) on any given Ethernet or InfiniBand network card are designed to take a variety of small form-factor pluggable (SFP) modules, known as transceivers - each intended to provide a guaranteed throughput speed over a specified distance, from metres to many kilometres. An SFP transceiver consists of a module that slots into the SFP port on the network card and a connector end for the cable to fit into, making these hot-pluggable - there is no need to switch off the system when connecting or disconnecting cables. Let’s take a look at the common types.

SFP transceivers offer both multi-mode and single-mode fibre connections (the latter being designed for longer distance transmission) ranging from 550m to 160km. These are older technology standards but still available in either 100Mbps or 1Gbps versions.

SFP+ transceivers are an enhanced version of the SFP that supports up to 16Gbps fibre throughput. As with SFP, multi-mode and single-mode options are available to cover distances of up to 160km.

SFP28 is a 25Gbps interface which, although faster, is identical in physical dimensions to SFP and SFP+. SFP28 modules exist supporting single- or multi-mode fibre connections, active optical cable (AOC) and direct attach copper (DAC).

QSFP transceivers are four-channel versions of SFPs and are available, like SFPs, in a number of versions. Rather than being limited to one connectivity medium, QSFPs can transmit Ethernet, InfiniBand and even Fibre Channel. The original QSFP transceiver specified four channels carrying 1Gbps Ethernet, 4Gbps Fibre Channel (FC) or 5Gbps (DDR) InfiniBand. QSFP+ is an evolution of QSFP to support four channels carrying 10Gbps Ethernet, 10Gbps FC or 10Gbps (QDR) InfiniBand; the four channels can also be combined into a single 40Gbps Ethernet link. The QSFP14 standard is designed to carry 14Gbps (FDR) InfiniBand or 16Gbps FC. The QSFP28 standard is designed to carry 100Gbps Ethernet, 100Gbps (EDR) InfiniBand or 32Gbps FC. Finally, QSFP56 is designed to carry 200Gbps Ethernet, 200Gbps (HDR) InfiniBand or 64Gbps FC.
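The Ethernet figures follow the same lane arithmetic as InfiniBand: four channels combined into one link. A small sketch of that calculation, using only the per-channel rates quoted above:

    # Per-channel Ethernet rates (Gbps) for the QSFP generations mentioned above,
    # with the aggregate formed by combining the four channels into a single link.
    CHANNELS = 4
    PER_CHANNEL_GBPS = {"QSFP+": 10, "QSFP28": 25, "QSFP56": 50}

    for module, per_channel in PER_CHANNEL_GBPS.items():
        aggregate = CHANNELS * per_channel
        print(f"{module}: {CHANNELS} x {per_channel} Gbps = {aggregate} Gbps Ethernet")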

Active Optical Cables

Active Optical Cables (AOC) are used to create fibre optic links between devices. AOCs are widely used to link servers to memory and storage subsystems because they consume little power, are lightweight, allow open airflow and offer low latency. They typically come in lengths of up to 300m, and splitter options are available too.

Network Considerations

Although this guide is concerned with wired networking cards there are a few related aspects to consider when looking at configuring systems and connecting them to a wider network.

Redundancy

Redundancy is the concept of increasing the resilience of an individual system or an entire network by ensuring there is no single point of failure. Within a system, this may take the form of employing a multi-port NIC - allowing more than one connection to the switch that connects to the other users and servers. Ideally, two separate NICs are used, so even an entire NIC failure is covered. Using multiple network switches can offer further redundancy and ensure continuity in case of failures.
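On Linux, multiple ports or NICs are typically combined into a single logical interface with the kernel bonding (or teaming) driver, which handles failover between the physical links. The sketch below simply reports the state of such a bond; it assumes bonding is already configured and uses "bond0" as a placeholder bond name.

    # Inspect the state of a Linux bonded interface built from two or more NIC ports.
    # Assumes the bonding driver is configured; "bond0" is a placeholder name.
    BOND = "bond0"

    try:
        with open(f"/proc/net/bonding/{BOND}") as f:
            status = f.read()
    except OSError:
        raise SystemExit(f"{BOND} not found - is a bond configured on this system?")

    # Print the overall bond state and the state of each member (slave) interface.
    for line in status.splitlines():
        if line.startswith(("Bonding Mode", "Currently Active Slave",
                            "MII Status", "Slave Interface")):
            print(line)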

Switches

While we’re mentioning switches, it is worth pointing out that any change in NIC connectivity in a workstation or server may have an impact on whether the rest of the network remains at optimum speed and performance. For example, 10GbE NICs will require 10GbE-capable switches, and an upgrade to InfiniBand will necessitate changing the entire network over to InfiniBand switches too.

Time to Choose

We hope you’ve found this guide to wired networking cards useful in informing your choice of connectivity for your systems - whether you’re looking at a new machine or upgrading an existing one. Click below to see our range of networking cards.

If you would like further advice on the best connectivity solution for your system, don’t hesitate to contact our friendly team on 01204 474747 or email [email protected]