Wired Network Cards Buyers Guide

A wired network card is the means by which a desktop PC, workstation or server connects to a wider network. In the home or home office environment, wireless connectivity is more common nowadays and you can learn more by reading our WIRELESS NETWORK CARD BUYERS GUIDE. However, in the business space, wired technology is usually the norm as wired networks offer better security, speeds and simplicity of management. This guide will look at the various types of wired network cards and where they are most commonly used.

Network cards are referred to by a wide range of acronyms - NIC, HBA, HCA, DPU and IPU being the most common. A NIC (network interface card) usually refers to Ethernet connectivity; an HCA (host channel adapter) refers to InfiniBand connectivity and an HBA (host bus adapter) to Fibre Channel connectivity. Let’s start by looking at these networking protocols first.

Networking Technologies and Interfaces

Generally speaking, all PCs, workstations and many servers will use Ethernet connectivity, and in most cases there is no need to change the NIC from the one supplied as standard. InfiniBand and Fibre Channel offer advantages and greater performance, but only in the server and datacentre space.

Ethernet

Ethernet is the most common form of communication seen in a network and has been around since the early 1980s. Over this time the speeds of available Ethernet connections have vastly increased. The initial commonly available NICs were capable of 10 megabits per second (10Mbps), followed by 100Mbps and Gigabit Ethernet (1GbE or 1000Mbps). In a corporate network, 1GbE has long been the standard, with faster 10GbE, 25GbE, 40GbE and 50GbE speeds also being available. The last few years have seen speeds of Ethernet increase to 100GbE, 200GbE and recently 400GbE.
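To put these line rates in context, here is a short illustrative Python sketch (the 10GB file size is an arbitrary example, and real-world throughput will always be somewhat lower due to protocol overhead) showing roughly how long a large file transfer takes at each speed:

```python
# Rough transfer times for an example 10 GB file at common Ethernet line rates.
# These are theoretical best-case figures; protocol overhead reduces them in practice.

FILE_SIZE_GB = 10  # example file size in gigabytes

# Line rate in megabits per second for each Ethernet generation
ethernet_speeds_mbps = {
    "10Mbps": 10,
    "100Mbps": 100,
    "1GbE": 1_000,
    "10GbE": 10_000,
    "100GbE": 100_000,
    "400GbE": 400_000,
}

# Convert gigabytes to megabits: 1 GB = 1000 MB, 1 byte = 8 bits
file_size_megabits = FILE_SIZE_GB * 1_000 * 8

for name, mbps in ethernet_speeds_mbps.items():
    seconds = file_size_megabits / mbps
    print(f"{name:>7}: {seconds:10.2f} s")
```

Running this shows why the jump from 1GbE to 100GbE matters in a datacentre: the same transfer drops from over a minute to under a second.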

InfiniBand

InfiniBand is an alternative technology to Ethernet. Developed in the late 1990s, it is usually used in HPC and AI clusters where high bandwidth and low latency are key requirements. Although an InfiniBand card fits into a server in the same way as an Ethernet NIC and performs a similar role in transferring data, it has historically achieved improved throughput by not needing to use the server CPU to control data transmission, so latency is reduced by removing this step. Like Ethernet, there have been several generations of InfiniBand, starting with SDR (Single Data Rate) providing 2.5Gbps throughput per lane. This has since been superseded by DDR - 5Gbps, QDR - 10Gbps, FDR - 14Gbps, EDR - 25Gbps, HDR - 50Gbps, NDR - 100Gbps, XDR - 200Gbps and GDR - 400Gbps.

Fibre Channel

Fibre Channel (FC) is another high-speed networking technology, primarily used for transmitting data among datacentres, computer servers, switches and storage at data rates of up to 128Gbps. Fibre Channel was long seen as the leading technology for a Storage Area Network (SAN), and as it differs entirely from Ethernet, servers needed an Ethernet NIC to communicate with the wider network and a separate FC adapter (or Host Bus Adapter - HBA) to communicate with the SAN. More recently, an alternative form of FC called Fibre Channel over Ethernet (FCoE) was developed to lower the cost of FC solutions by eliminating the need to purchase separate HBA hardware. A Fibre Channel HBA has fibre optical interfaces similar to those now seen on the highest speeds of Ethernet and InfiniBand.

Types of Network Card

Although, as mentioned, the name of a network card - NIC, HBA or HCA - relates to the networking protocol, in most cases this is superseded by a naming system based more around the functionality of the card. Standard network cards for PCs, workstations and servers are still referred to as NICs; however, cards with added intelligence are called Smart NICs, and cards that take on far more management functions are termed DPUs (data processing units) or IPUs (infrastructure processing units).

NICs

Although only a small component in an overall system build, the NIC can contribute a huge uplift in performance. Basic PC or workstation NICs start with throughput speeds of 1Gbps (gigabit per second) through a single port, scaling to 40Gbps at the top end for server use, featuring two or four ports. All processing of data is performed by the CPU(s) and GPU(s) installed in the server, which introduces latency as data is transferred around the server. To reduce this latency and increase throughput speeds, a Smart NIC is required.

Smart NICs

A Smart NIC performs all the tasks of a regular NIC, but in order to cope with higher throughput speeds a degree of off-loading reduces pressure on other components in the server. This means the network card itself performs some of the processing tasks, removing the latency usually introduced by the CPU, system memory and operating system. This off-loading is referred to as Remote Direct Memory Access (RDMA) for InfiniBand cards and RDMA over Converged Ethernet (RoCE) for Ethernet cards. Additionally, Smart NICs provide innovative hardware engines that offload and accelerate security with in-line encryption/decryption, and are available in speeds up to 400Gb/s.

DPUs / IPUs

DPUs and IPUs offer an uplift over Smart NICs by offloading, accelerating and isolating a broad range of advanced storage, networking and security services. They provide a secure and accelerated infrastructure for HPC or AI workloads in largely containerised environments. Featuring speeds up to 800Gb/s, they also include NVMe over Fabrics (NVMe-oF), GPUDirect Storage, encryption, elastic storage, data integrity, decompression and deduplication to reduce storage management overhead within the server.

You can learn more by watching our explainer video comparing and contrasting the three types of network card and their use cases.

Network Card Port Types

Most standard Ethernet NICs use RJ45 ports, whereas Ethernet Smart NICs and DPUs / IPUs use SFP (small form-factor pluggable) and QSFP (quad small form-factor pluggable) ports. The distance of connection and the speed required also have an impact on the interfaces on an Ethernet NIC - at speeds up to 10Gbps the RJ45 connector, designed to take a copper-based cable, is used. From 25Gbps to 800Gbps it is more common to see SFP or QSFP ports.
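As a quick rule of thumb, the speed-to-port mapping described above can be sketched as a small helper function (a simplification for illustration only - always check the actual product specifications):

```python
def suggested_port_type(speed_gbps: float) -> str:
    """Suggest an Ethernet port type for a given link speed.

    Follows the rule of thumb in this guide: copper RJ45 up to 10Gbps,
    SFP-family ports from 25Gbps up to 800Gbps.
    """
    if speed_gbps <= 10:
        return "RJ45 (copper)"
    elif speed_gbps <= 800:
        return "SFP/QSFP (transceiver module or DAC/AOC cable)"
    else:
        raise ValueError("Speed beyond the ranges covered in this guide")

print(suggested_port_type(1))    # RJ45 (copper)
print(suggested_port_type(100))  # SFP/QSFP (transceiver module or DAC/AOC cable)
```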

InfiniBand cards also use SFP and QSFP ports, but the actual connection has three possible types that dictate the overall throughput: a 1x cable, a 4x cable or a 12x cable - the number referring to the physical links (lanes) within the cable. The 4x cable type is the most common and, when used with the latest NDR interface, offers speeds up to 400Gbps.

Network Card Form Factors

When it comes to physical card size and type there are two main options - PCIe and OCP. The former is found in PCs, workstations and servers, whereas the latter is only found in servers.

PCIe

The majority of network cards are PCI Express (PCIe) add-in cards designed to fit into a slot on the motherboard. They are usually supplied with a regular-height backplate but may also include a half-height backplate for use in more compact systems. It is important to choose the correct slot depending on whether the card is an x4, x8 or x16 card - the number refers to the quantity of PCIe lanes the card uses to transmit data. Some higher-performance cards will require a PCIe 4.0 or PCIe 5.0 slot to deliver full performance, so check the specs of your system before choosing a card.
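To sanity-check whether a slot can feed a fast NIC, multiply an approximate per-lane bandwidth for each PCIe generation by the lane count. A rough Python sketch (the per-lane figures are approximate usable rates after encoding overhead, rounded for illustration):

```python
# Approximate usable bandwidth per PCIe lane in GB/s (rounded, after encoding overhead).
per_lane_gb_s = {"PCIe 3.0": 1.0, "PCIe 4.0": 2.0, "PCIe 5.0": 4.0}

def slot_bandwidth_gbps(generation: str, lanes: int) -> float:
    """Approximate total slot bandwidth in gigabits per second."""
    return per_lane_gb_s[generation] * lanes * 8  # GB/s -> Gbps

# Example: a 400GbE NIC needs roughly a PCIe 5.0 x16 slot to avoid a bottleneck
print(slot_bandwidth_gbps("PCIe 4.0", 16))  # 256.0 Gbps - not enough for 400GbE
print(slot_bandwidth_gbps("PCIe 5.0", 16))  # 512.0 Gbps
```

This is why the guide recommends checking your system's PCIe generation before choosing a high-speed card: the same x16 slot roughly doubles in bandwidth with each generation.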

OCP

The Open Compute Project (OCP) is an organisation that shares designs of datacentre products and best practices among companies in an attempt to promote standardisation and improve interoperability. The OCP is supported by many large networking manufacturers, including NVIDIA, Cisco and Dell, the result being a standard OCP form factor card - OCP 3.0 is the latest version. It is worth pointing out that the OCP affects all datacentre facets, including server design, so an OCP NIC would only be required when using an OCP server format.

USB

It is possible to connect to a network using a USB dongle too, though these are usually only available at 1GbE rather than any of the other speeds or protocols covered in this guide. The adapter fits into any USB port on the system and provides an Ethernet connection. It is fair to say that this type of network connection would only be employed in the absence of available PCIe slots.

Network Card Accessories

Although the network card is typically installed inside the system, the physical connection to network switches is via a cable - depending on the type of protocol and distance between connections a variety of transceiver modules or cable types will be required.

Transceivers

SFP transceivers offer both multi-mode and single-mode fibre optical connections (the latter being designed for longer-distance transmission), with ranges from 550m to 160km. These are older technology standards but are still available in either 100Mbps or 1Gbps versions. Newer SFP+ transceivers are an enhanced version of the SFP that support up to 16Gbps throughput. SFP28 is a 25Gbps interface which, although faster, is identical in physical dimensions to SFP and SFP+.

QSFP transceivers are 4-channel versions of SFPs and are available, like SFPs, in a number of versions. Rather than being limited to one connectivity medium, QSFPs can transmit Ethernet, InfiniBand and even Fibre Channel. The original QSFP transceiver specified four channels carrying 1Gbps Ethernet, 4Gbps FC, or 5Gbps (DDR) InfiniBand. QSFP+ is an evolution of QSFP to support four channels carrying 10Gbps Ethernet, 10Gbps FC or 10Gbps (QDR) InfiniBand. The four channels can also be combined into a single 40Gbps Ethernet link. The QSFP14 standard of transceiver is designed to carry 14Gbps (FDR) InfiniBand or 16Gbps FC. The QSFP28 standard is designed to carry 100Gbps Ethernet, 100Gbps (EDR) InfiniBand, or 32Gbps FC. Finally, QSFP56 is designed to carry 200Gbps Ethernet, 200Gbps (HDR) InfiniBand, or 64Gbps Fibre Channel.
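The aggregate Ethernet rates above follow directly from the four-channel design: total link speed is four times the per-channel rate. A small illustrative sketch (per-channel Ethernet figures taken from the paragraph above):

```python
# A QSFP link aggregates four channels; total Ethernet rate = 4 x per-channel rate.
# Per-channel Ethernet rates in Gbps for each QSFP generation.
qsfp_channel_gbps = {
    "QSFP": 1,
    "QSFP+": 10,
    "QSFP28": 25,
    "QSFP56": 50,
}

for standard, per_channel in qsfp_channel_gbps.items():
    print(f"{standard:>7}: 4 x {per_channel} = {4 * per_channel} Gbps")
```

So a QSFP28 module's four 25Gbps channels combine into the 100Gbps Ethernet link described above, and QSFP56's four 50Gbps channels into 200Gbps.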

Cables

DAC (Direct Attach Copper) cables are the lowest-cost type of cabling for SFPs and QSFPs, whereas Active Optical Cables (AOC) are used to create faster fibre optical links between devices. AOCs are widely used to link servers to memory and storage subsystems because they consume little power, allow open air flow, and offer low weight and low latency. They typically come in lengths up to 300m and have splitter options available too.

Network Considerations

Although this guide is concerned with wired networking cards there are a couple of related aspects to consider when looking at configuring systems and connecting them to a wider network.

Redundancy

As noted earlier, many server network cards feature two or four ports, and it is common to use a second port (or a second card) to provide a redundant link - if one connection fails, traffic can fail over to the other. The same multi-port approach can also be used to aggregate links for extra throughput. If uptime is critical, it is worth specifying a multi-port card, or a pair of cards connected to separate switches.

Switches

Whichever card you choose, the switch at the other end of the cable must support the same protocol and speed - an Ethernet NIC requires an Ethernet switch, while InfiniBand and Fibre Channel each require their own dedicated switches. It is also worth checking that the switch ports match the card's interfaces (RJ45, SFP or QSFP) so that appropriate cables or transceivers can be used.

Ready to buy?

We hope you’ve found this guide to wired networking cards useful in informing your choice of connectivity for your systems - whether you’re looking at a new machine or upgrading an existing one. Click below to see our range of wired networking cards.

If you would like further advice on the best connectivity solution for your system, don’t hesitate to contact our friendly team on 01204 474747 or email [email protected]