Network Switches - Business Buyer's Guide

A network switch is a device that connects multiple PCs, workstations, servers or storage devices within a business environment, enabling them to communicate within an organisation and share access to the Internet connection to the wider world. Switches may also be used to connect other network-capable or IP (Internet Protocol) devices such as wireless access points, surveillance cameras, phones and video conferencing systems. The functionality a switch needs depends on where in the network it is intended to sit, so it is important to get the correct feature set.

This guide will take you through which network switch will suit a particular environment, how to ensure optimal compatibility with the rest of your network and what management features you should consider.

Let’s get started.

What Does a Switch Do?

Switches sit at the heart of any LAN (Local Area Network), and where a switch is situated determines its tasks and naming convention. A switch at the edge of the network provides access to desktop and workstation users, IP phones and wireless users (via wireless access points), whereas switches at the centre of the network sit in the back end of the infrastructure, communicating with each other and with the servers and storage appliances.

Typically the speed of switches in the core is greater than at the edge, as the majority of data traffic to regular users may just be emails and small files. An edge switch may be deployed at the same speed as core switches if, for example, you have high-demand graphical users who need to access large files from the servers. Core switches will also deal with access to the servers from remote workers and external users via the router and over the Internet. The diagram below demonstrates a typical mid-sized organisation's network - larger organisations may have an additional 'aggregation' layer of switches to consolidate traffic from the edge switches.

It is worth at this stage just reiterating the difference between a switch and a router - essentially a switch allows communication between devices on a network or LAN, whereas a router allows communication between separate networks, sometimes referred to as a WAN (wide area network).

Network Switch Technology and Interfaces

Each of the networking technologies or protocols we’ll discuss essentially has the same job - transmitting data packets from one device on the network to another. They differ in the ways they do this and the associated resources they use to do it.

Ethernet

Ethernet is the most common form of communication seen in a network and has been around since the early 1980s. Over this time the speeds of available Ethernet connections have vastly increased. The initial commonly available NICs (network interface cards) were capable of 10 megabits per second (10Mbps), followed by 100Mbps and Gigabit Ethernet (1GbE or 1000Mbps).

In a corporate network, 1GbE has long been the standard, with faster 10GbE, 25GbE, 40GbE and 50GbE speeds also being available. The last few years have seen speeds of Ethernet increase to 100GbE, 200GbE and recently 400GbE. Although all these speeds are delivered over Ethernet, the distance of connection and speed required have an impact on the interfaces on the NIC and the types of cabling used.

On NICs with speeds of up to 10GbE, an RJ45 connector is used, designed to take a copper-based cable. Above 10GbE it is more common to see an SFP (small form-factor pluggable) port.

For RJ45 Ethernet connections the maximum distance data can be transmitted is 100m, which is limiting when looking at networks in large buildings, campuses or even city-wide deployments. The SFP port allows fibre optic cabling to be employed, which suffers less data loss and can achieve much higher throughput speeds. Depending on the distance and speed required, different SFPs can be plugged into the NIC - we'll look at the different SFP modules later in this guide.
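To illustrate that trade-off, here is a minimal Python sketch that suggests a cabling approach from a required speed and cable run length. The 100m copper limit and the copper-up-to-10GbE rule of thumb come from the points above; the function name and structure are purely illustrative, not a definitive sizing tool.

```python
def suggest_cabling(speed_gbps: float, distance_m: float) -> str:
    """Illustrative rule of thumb: RJ45 copper for short runs at up to 10GbE,
    an SFP-based fibre transceiver for anything longer or faster."""
    if speed_gbps <= 10 and distance_m <= 100:
        return "RJ45 copper"
    return "SFP port with a fibre transceiver chosen for the required speed and reach"

print(suggest_cabling(1, 45))     # short office run at 1GbE -> copper
print(suggest_cabling(25, 300))   # 25GbE over 300m -> fibre
```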

It is worth mentioning that although traditionally Ethernet has lagged behind InfiniBand speeds, this is now changing, thanks to the increased use of common SFP interfaces by the likes of NVIDIA Mellanox - perhaps driven by the much larger installed base of Ethernet technology in the market and the opportunity for upgrades.

InfiniBand

InfiniBand is an alternative technology to Ethernet. Developed in the late 1990s, it is usually found in HPC (High Performance Computing) applications where high bandwidth and low latency are key requirements. Although an InfiniBand NIC fits in a server and transfers data in much the same way as an Ethernet NIC, InfiniBand has historically achieved improved throughput by not needing the server CPU to control data transmission; removing this step reduces latency. Like Ethernet there have been several generations of InfiniBand, starting with SDR (Single Data Rate) providing 2.5Gbps per lane. This has since been superseded by Dual Data Rate (DDR - 5Gbps), Quad Data Rate (QDR - 10Gbps), Fourteen Data Rate (FDR - 14Gbps), Enhanced Data Rate (EDR - 25Gbps), High Data Rate (HDR - 50Gbps) and the latest Next Data Rate (NDR - 100Gbps) cards, each figure again being per lane. Once again, differing SFP modules will be employed depending on the speed wanted and distance required.

An InfiniBand cable comes in three possible types that dictate the overall throughput: a 1x cable, a 4x cable or a 12x cable, the number referring to the physical 'links' (lanes) within the cable. The 4x cable type is the most common and, when used with the latest NDR interface, offers speeds of up to 400Gbps.
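The per-lane rates and link widths above simply multiply together. The short sketch below shows the arithmetic, using only the figures quoted in this guide; the dictionary and function names are illustrative.

```python
# Per-lane rates (Gbps) for each InfiniBand generation, as listed above.
IB_LANE_RATE_GBPS = {
    "SDR": 2.5, "DDR": 5, "QDR": 10, "FDR": 14,
    "EDR": 25, "HDR": 50, "NDR": 100,
}

def ib_link_throughput_gbps(generation: str, lanes: int = 4) -> float:
    """Aggregate link throughput = per-lane rate x number of lanes (1x, 4x or 12x)."""
    return IB_LANE_RATE_GBPS[generation] * lanes

print(ib_link_throughput_gbps("NDR", 4))  # 400 Gbps - the 4x NDR figure quoted above
print(ib_link_throughput_gbps("HDR", 4))  # 200 Gbps
```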

OmniPath

OmniPath Architecture (OPA) is a similar technology to InfiniBand, but was developed by Intel. It is also aimed at the high-bandwidth, low-latency high performance computing end of the market and had a maximum throughput of 100Gbps. Further development of a second generation delivering 200Gbps speeds was planned, but as of 2019 the technology is obsolete and no longer in development. We mention it in this guide in case a new server is to be connected to an older OmniPath network, as in that case an OmniPath NIC would still be required.

Fibre Channel

Fibre Channel (FC) is another high-speed networking technology, primarily used for transmitting data among datacentres, computer servers, switches and storage at data rates of up to 128Gbps. Fibre Channel was seen as the leading technology for a Storage Area Network (SAN) and, as it differs entirely from Ethernet, servers would need an Ethernet NIC to communicate with the wider network and an FC NIC (or Host Bus Adapter - HBA) to communicate with the SAN. More recently, an alternative form of FC called Fibre Channel over Ethernet (FCoE) was developed to lower the cost of FC solutions by eliminating the need to purchase separate HBA hardware. A Fibre Channel HBA has similar fibre optical interfaces to those now seen on the highest speeds of Ethernet and InfiniBand, just without the need for an SFP module.

Although Fibre Channel networking is more mature than InfiniBand and exhibits many of the same attributes - high performance, high bandwidth and low latency - it now lacks the higher speeds that InfiniBand and Ethernet are delivering, so may decline in use.

SFP and QSFP Transceiver Modules

As we've mentioned, the SFP port(s) on any given Ethernet or InfiniBand network card are designed to take a variety of small form-factor pluggable (SFP) modules, called transceivers - each intended to provide a guaranteed throughput speed over a specified distance, from metres to many kilometres. An SFP transceiver consists of a module that slots into the SFP port on the network card and a connector end for the cable to fit into, making these hot-pluggable - there is no need to switch off the system when connecting or disconnecting cables. Let's take a look at the common types.

SFP transceivers offer both multi-mode and single-mode fibre connections (the latter being designed for longer distance transmission) ranging from 550m to 160km. These are older technology standards but still available in either 100Mbps or 1Gbps versions.

SFP+ transceivers are an enhanced version of the SFP that support up to 16Gbps fibre throughput. Like SFP, multi-mode and single-mode options are available to cover distances of up to 160km.

SFP28 is a 25Gbps interface which although faster is identical in physical dimensions to SFP and SFP+. SFP28 modules exist supporting single- or multi-mode fibre connections, active optical cable (AOC) and direct attach copper (DAC).

QSFP transceivers are 4-channel versions of SFPs and are available, like SFPs, in a number of versions. Rather than being limited to one connectivity medium, QSFPs can transmit Ethernet, InfiniBand and even Fibre Channel. The original QSFP transceiver specified four channels carrying 1Gbps Ethernet, 4Gbps FC (Fibre Channel), or 5Gbps (DDR) InfiniBand. QSFP+ is an evolution of QSFP to support four channels carrying 10Gbps Ethernet, 10Gbps FC or 10Gbps (QDR) InfiniBand. The four channels can also be combined into a single 40Gbps Ethernet link. The QSFP14 standard of transceiver is designed to carry 14Gbps (FDR) InfiniBand or 16Gbps FC. The QSFP28 standard is designed to carry 100Gbps Ethernet, 100Gbps (EDR) InfiniBand, or 32Gbps FC. Finally, QSFP56 is designed to carry 200Gbps Ethernet, 200Gbps (HDR) InfiniBand, or 64Gbps Fibre Channel.
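As a buyer's shortcut, the Ethernet rates quoted in this guide for each transceiver family can be put into a small lookup. The sketch below is illustrative only - the dictionary and function names are invented, and the rates are simply the figures mentioned above, not an exhaustive compatibility list.

```python
# Ethernet rates (Gbps) quoted in this guide for each transceiver family.
ETHERNET_TRANSCEIVER_GBPS = {
    "SFP": 1, "SFP+": 10, "SFP28": 25,
    "QSFP+": 40, "QSFP28": 100, "QSFP56": 200,
}

def smallest_transceiver_for(required_gbps: float) -> str:
    """Return the slowest listed transceiver family that still meets the required Ethernet rate."""
    suitable = [(rate, name) for name, rate in ETHERNET_TRANSCEIVER_GBPS.items()
                if rate >= required_gbps]
    if not suitable:
        raise ValueError("No listed transceiver family reaches this rate")
    return min(suitable)[1]

print(smallest_transceiver_for(25))   # SFP28
print(smallest_transceiver_for(100))  # QSFP28
```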

Power over Ethernet

Power over Ethernet (PoE) is a feature of many network switches that enables the delivery of enough power through the Ethernet cable to power the device connected to it. This has the advantage of only needing a single cable running to the device - delivering both power and data - and removes the need to have a standard plug socket near everything you wish to power and put on the wired network. Examples of PoE devices would be ceiling-mounted wireless access points (WAPs) or outdoor-mounted surveillance cameras, where in both instances running power to them may be very awkward - PoE capability provides flexibility in the placement of devices.

The level of power provided to each PoE-capable port will vary depending on the overall power supply of the switch, and there are several classifications: PoE, introduced in 2003; PoE+, an upgrade in 2009 delivering more power per port; and PoE++, added in 2018. This latest standard is split into two types - Type 3 and Type 4.

PoE (Type 1) - IEEE 802.3af: maximum power per port 15.4W, maximum power to the device 12.95W. Typical devices: static cameras, IP phones, WAPs.
PoE+ (Type 2) - IEEE 802.3at: maximum power per port 30W, maximum power to the device 25.5W. Typical devices: PTZ cameras, video IP phones, alarm systems.
PoE++ (Type 3) - IEEE 802.3bt: maximum power per port 60W, maximum power to the device 51W. Typical devices: heated cameras, laptops, info kiosks.
PoE++ (Type 4) - IEEE 802.3bt: maximum power per port 100W, maximum power to the device 71W. Typical devices: TVs, high-power WAPs.

If you are considering using PoE switches in any part of your network then you should check the power draw of the devices you want to connect and ensure the class of switch you choose will support the number and type of devices you require. You should also check that each device is capable of being powered by PoE.
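When sizing a PoE switch it can help to add up the expected per-device draw against both the per-port limits above and the switch's total PoE power budget. The sketch below is a rough feasibility check only; the 370W total budget in the example is a hypothetical figure, so always confirm the real budget on your switch's datasheet.

```python
# Per-port limits (W) from the table above.
POE_PORT_LIMIT_W = {"PoE": 15.4, "PoE+": 30, "PoE++ Type 3": 60, "PoE++ Type 4": 100}

def check_poe_budget(device_draws_w, poe_class, total_budget_w):
    """Rough feasibility check: every device must fit within the per-port limit,
    and the combined draw must fit within the switch's overall PoE budget."""
    per_port_ok = all(draw <= POE_PORT_LIMIT_W[poe_class] for draw in device_draws_w)
    total_ok = sum(device_draws_w) <= total_budget_w
    return per_port_ok and total_ok

# Hypothetical example: ten WAPs drawing 12W each plus four PTZ cameras at 22W each,
# on a PoE+ switch with an assumed 370W total PoE budget.
devices = [12] * 10 + [22] * 4
print(check_poe_budget(devices, "PoE+", total_budget_w=370))  # True: 208W total, all ports under 30W
```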

Network Topology and Definitions

To fully understand the following switch management and functionality sections it is worth mentioning the standard OSI (Open Systems Interconnection) model of a network. It breaks down the functions of a network into seven layers, in order to ensure interoperability of diverse communication systems with standard communication protocols.

L1 - Physical Layer: If you've ever had to troubleshoot anything electronic, Layer 1 is where you'd answer the question, "Is it plugged in?" Layer 1 also includes layouts of pins, voltages, radio frequency links, and other physical requirements. It's a media layer used to transmit and receive symbols, or raw bits of data, which it converts into electrical, radio, or optical signals.
L2 - Data Link Layer: This media layer acts as an avenue for node-to-node data transfers of frames - simple containers for single network packets - between two physically connected devices. It's where you'll find most of the switches used to start or end communication between connected devices.
L3 - Network Layer: Another media layer, Layer 3 is home to IP addresses and routers that look for the most efficient communication pathways for packets containing control information and user data, also known as a payload. If a packet is too large to be transmitted, it can be split into several fragments which are shipped out and then reassembled on the receiving end.
L4 - Transport Layer: Layer 4 is a host layer that generally functions as a digital post office, coordinating data transfers between systems and hosts, including how much data to send, the rate of data transmission, data destinations, and more.
L5 - Session Layer: Layer 5 is a host layer that acts like a moderator in that it controls the dialogue between computers, devices, or servers. It sets up pathways, sets limits for response wait time, and terminates sessions.
L6 - Presentation Layer: This host layer is where data is translated and formatted so applications, networks, and devices can understand what they're receiving. Characters are encoded and data compressed, encrypted, and decrypted on Layer 6.
L7 - Application Layer: This top-of-stack host layer is familiar to end users because it's home to Application Programming Interfaces (APIs) that allow resource sharing, remote file access, and more. It's where you'll find web browsers and apps like email clients and social media sites.

Switch Management

Although the job of a network switch is to transfer data from one device to another using their addresses (MAC addresses at Layer 2, IP addresses at Layer 3), governed by the speed of the switch ports, there are additional management tools that can enhance switch performance. Switches usually fall into one of three categories.

Unmanaged Switches

Unmanaged switches are designed so that you can simply plug them in and they work; no configuration is required. Unmanaged switches are typically for basic connectivity, and as such you will most often see them used in home or small office networks, or wherever a few more ports are needed, such as at a desk or in a conference room.

Smart Managed Switches

Smart Managed or smart switches are designed to remain simple to configure while offering a degree of management. They can be a cost-effective alternative to managed switches. They can be deployed at the edge of a large network (with managed switches being used in the core), as the infrastructure for smaller networks, or for low-complexity needs.

Smart switches allow you to segment the network into workgroups by creating VLANs (Virtual Local Area Networks) - logical networks independent of where the physical network connections are - so, for example, all GPU workstation users could be grouped together regardless of which offices or floors they occupy. Typically there is a limit to the number of VLANs allowed on a smart switch compared to a fully managed switch. In addition, smart switches support basic quality-of-service (QoS) that facilitates prioritisation of users and applications, ensuring optimal performance for voice and video calls or CAD software users.
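To make the VLAN idea concrete, here is a minimal sketch in which devices are assigned to VLANs by role rather than by physical location. The device names, roles and VLAN IDs are all invented for illustration; real VLAN assignment is done in the switch's management interface.

```python
# Hypothetical VLAN plan: devices are grouped by role rather than by the floor
# or office they sit in - the point of a VLAN being a logical, not physical, network.
VLAN_BY_ROLE = {"gpu-workstation": 20, "ip-phone": 30, "office-pc": 10}

devices = [
    {"name": "ws-floor1-01", "role": "gpu-workstation"},
    {"name": "ws-floor3-07", "role": "gpu-workstation"},
    {"name": "phone-reception", "role": "ip-phone"},
    {"name": "pc-floor2-11", "role": "office-pc"},
]

for dev in devices:
    # Both GPU workstations land in VLAN 20, regardless of which floor they are on.
    print(f"{dev['name']} -> VLAN {VLAN_BY_ROLE[dev['role']]}")
```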

Smart switches also offer some levels of security, such as endpoint authentication, and limited numbers of access control lists (ACLs). These features allow you to ensure the integrity of devices and users on any given part of the network, though again the levels of control and granularity would not be the same as a managed switch.



Some switch manufacturers have added functionality to create Smart Plus or Smart Pro models; however, these are still some way off fully managed switches.

Managed Switches

Managed switches offer comprehensive management feature sets, so may be referred to as fully Managed switches or by the OSI network layer the management applies to (L2 / L3 / L4). Managed switches are designed to deliver the most comprehensive set of features to provide the best application experience, the highest levels of security, the most precise control and management of the network, and offer the greatest scalability in the fixed configuration category of switches. As a result, managed switches are usually deployed as aggregation / access switches in very large networks or as core switches in relatively smaller networks. Managed switches may support both L2 switching and L3 IP routing (to share work with the router) though more cost-effective models will only offer L2 switching support.

From a security perspective, managed switches provide protection of the data plane (user traffic being forwarded), the control plane (traffic being communicated between networking devices to ensure user traffic goes to the right destination) and the management plane (traffic used to manage the network or device itself). Managed switches also offer network storm control, denial-of-service attack protection and private VLANs for securing communities of users or device isolation.

For QoS, the richness of capabilities goes far beyond what is available in a smart switch. Managed switches support queues to treat traffic differently by importance, tagging of traffic at L2 or L3, and rate limiting of traffic. Higher-layer (L3 / L4) management involves many and varied routing protocols that govern how traffic is distributed around the network for optimal performance.
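Rate limiting is one of those QoS levers. A common way to implement it is a token bucket, which this guide does not name explicitly; the Python sketch below is an illustrative model of the idea rather than anything resembling real switch firmware.

```python
import time

class TokenBucket:
    """Toy token-bucket rate limiter: traffic may burst up to `capacity` bytes,
    but the sustained rate is capped at `rate_bytes_per_s`."""

    def __init__(self, rate_bytes_per_s, capacity):
        self.rate = rate_bytes_per_s
        self.capacity = capacity
        self.tokens = capacity
        self.last = time.monotonic()

    def allow(self, packet_bytes):
        now = time.monotonic()
        # Refill tokens according to elapsed time, up to the bucket capacity.
        self.tokens = min(self.capacity, self.tokens + (now - self.last) * self.rate)
        self.last = now
        if packet_bytes <= self.tokens:
            self.tokens -= packet_bytes
            return True   # forward the packet
        return False      # drop or queue the packet

bucket = TokenBucket(rate_bytes_per_s=1_000_000, capacity=100_000)  # ~8Mbps sustained
print(bucket.allow(1500))  # True - well within the burst allowance
```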

Switch Resiliency

Aside from management tools, there are various additional ways to improve the performance of a network, increase its resiliency and remove potential single points of failure. The following options should always be considered as a network grows in complexity, in order to maintain a consistent service to all users.

Stackable Switches

As the network grows, you will need more switches to provide connectivity to the growing number of devices in the network. When using standalone switches, each switch is managed and configured as an individual entity. In contrast, stackable switches provide a way to simplify and increase the availability of the network. Instead of configuring, managing and troubleshooting eight 48-port switches individually, you can manage all eight as a single unit, so the total of 384 ports functions as a single switch - a valuable operational advantage.

Stack members are connected in a ring using stacking cables, so that if a port or cable fails the stack will automatically route around that failure, often at microsecond speeds. You can also add or remove stack members and have them automatically recognised, configured and added into the stack. This type of stacked configuration protects against port or cable issues, or even a full switch failure.

Redundancy

Redundancy in a switch can refer to a couple of things. Firstly, there is how one switch is connected to another (edge to core). Using multiple cables to connect switches not only balances the load of data being transferred, it also protects against a single cable failure. Multiple cable connections between switches can be used in a small network or in conjunction with stacking in a very large network.

Secondly, higher-end switches may have redundant power supplies. Each of the power supplies will have the capacity to run the device on its own, so if one fails the switch can still operate normally. Some switches support hot-swap power supplies: if one of the power supplies stops working, you will typically be able to replace it without taking the device offline - simply unplug the defective power supply and plug in a new one.

Some very high-end switches may also offer redundant fans, to ensure there is minimal risk of overheating should a fan fail. These would also be hot-swappable.

Modular Switches

Very high-end switches, also referred to as director switches, have a modular design - a chassis enclosure containing multiple hot-swappable power supplies and fans that all serve numerous blades. These blades are essentially modules that can be easily changed to upgrade the port type or speed without having to remove the entire switch.

Modular switches are characterised by high port counts and expandability, and are usually only seen in extensive networks, and then only in the core.

Datacentre Networks

Traditionally, organisations with very large networks - often referred to as datacentres - have also followed the three-layer approach of access / edge, aggregation and core switches. Usually the switches had greater capability the closer they were to the core (greater speeds, more management features and so on), while the core switches were often very large and modular, dealing with many different types of traffic and connections. More recently, however, this topology has been reduced to just two layers, referred to as leaf and spine. Leaf switches replace the edge and aggregation layers, whereas a greater number of spine switches replace the larger unit(s) at the centre or core of the network. These spine switches are also now mostly fixed-configuration models rather than modular, as these are less complex, easier to manage and remove a single point of failure.
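In a typical leaf-and-spine design each leaf switch uplinks to every spine switch - an assumption of the standard fabric layout rather than something spelled out above - which makes the cabling and uplink-port requirements easy to reason about. The sketch below, with invented switch counts, shows the arithmetic.

```python
def leaf_spine_links(leaves: int, spines: int) -> int:
    """In a typical leaf-spine fabric every leaf has one uplink to every spine,
    so the number of fabric links is simply leaves x spines."""
    return leaves * spines

# Hypothetical fabric: 12 leaf switches and 4 spine switches.
print(leaf_spine_links(12, 4))  # 48 fabric links; each leaf needs 4 uplink ports
```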


This change in datacentre network design is predominantly driven by the prevalence of cloud and containerised applications, where there is increasing server-to-server traffic involved in managing modern applications whose components are distributed across many servers or virtual machines (VMs). It is also worth mentioning that, increasingly, specifically optimised switches are being deployed at either the leaf or spine layer where applications and workloads are very specific, such as in deep learning and AI.

Switch Accessories

Mounting Rails

NICs

Although not part of the switch directly, it is vital to ensure that servers and storage devices have network interface cards (NICs) compatible with the switches to which they are to be connected. For example, if InfiniBand switches are being employed for HPC or AI workloads, then all the servers must have InfiniBand NICs with the correct SFP or QSFP modules to ensure proper connections.

To learn more about NICs and compatibility, read our buyer's guide.

Time to choose

We hope you've found this guide to network switches useful in informing your choice of connectivity for your infrastructure - whether you're adding new users or technologies, or installing a network in a new building. Click below to see our range of switches.

Unmanaged Ethernet - VIEW RANGE >

Smart Ethernet - VIEW RANGE >

Managed Ethernet - VIEW RANGE >

Managed InfiniBand - VIEW RANGE >