Why Do I Need a Storage Controller?

Servers and storage arrays typically contain far more storage drives - either HDDs or SSDs - than a PC or workstation. A storage controller is the device that lets you pool those drives into a single unit, so the system sees one total capacity rather than individual drives. These can be multiple internal drives within a system and/or drives within an external storage array. The storage controller may also offer the ability to configure RAID (redundant array of independent disks), which spreads data across multiple drives so that a single drive failure is no longer a single point of failure. RAID also brings the ability to hot-swap drives, meaning there is no need to power off the system to replace or upgrade a drive. There are two main types of storage controller card, as shown in the tabs below.

Host Bus Adapters

A Host Bus Adapter (HBA) is an expansion card that plugs into a PCIe slot on the system’s motherboard and provides fast, reliable non-RAID communication between the host system and the storage devices. HBAs can reliably connect hundreds or even thousands of HDDs, SSDs and even tape devices to the host system, making them ideal for cost-sensitive backup solutions or high-performance software defined storage (SDS) configurations.

RAID Controllers

A RAID controller card is similar to an HBA, but can also add redundancy (RAID) for the drives, help optimise performance, reduce latency, or even make smart decisions on whether to store data on an HDD or an SSD cache, depending on user needs. Since these additional tasks consume power and processing speed, RAID controllers are typically more expensive than HBAs and handle fewer devices. They are however recommended where data is critical and needs some degree of protection and in scenarios where different drive types are being used such as a single array containing SSDs for regularly accessed data and HDDs for archive purposes.

Storage Controller Compatibility

A storage controller needs to connect to the host system, so compatibility is key. The type of physical connector will be influenced by the connection interface and the storage bus. The tabs below explain the various common types in use today.

PCIe Cards

The most common way to connect a storage controller to a system is via a PCIe slot. PCIe 5.0 offers twice the bandwidth of the older 4.0 standard. Although the HBA or RAID card will occupy a single physical slot on the motherboard, it is also important to check how many PCIe lanes it needs (written as x1, x4, x8 or x16), as this will have an impact on which PCIe slot is used. PCIe slots are backwards compatible, so if you don’t have any free x8 slots you can install the storage controller in an x16 slot without any problems.
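To see why lane count matters, the link bandwidth can be estimated as lanes multiplied by the per-lane rate of the PCIe generation. The per-lane figures below are nominal values after encoding overhead; real cards achieve somewhat less.

```python
# Approximate usable per-lane throughput in GB/s for each PCIe generation
# (after encoding overhead). Nominal figures - real hardware achieves less.
PER_LANE_GBPS = {3: 0.985, 4: 1.969, 5: 3.938}

def pcie_bandwidth(generation: int, lanes: int) -> float:
    """Rough one-direction bandwidth in GB/s for a PCIe link."""
    return PER_LANE_GBPS[generation] * lanes

# An x8 controller in a PCIe 5.0 slot:
print(round(pcie_bandwidth(5, 8), 1))  # roughly 31.5 GB/s
```

Note how the doubling between generations shows up directly: an x8 card in a Gen 5 slot gets the same bandwidth as an x16 card in a Gen 4 slot.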

OCP cards

Alternatively, some servers feature compact OCP Mezzanine Card slots. These use the same protocol as PCIe, but are physically smaller.

The OCP 2.0 specification supports up to 8x PCIe 3.0 lanes while the more advanced OCP 3.0 specification supports up to 32x PCIe 5.0 lanes.

Buses

The storage controller communicates with drives via a storage bus, so you must choose a controller that supports the drives you intend to add to the system. There are three types of storage bus that you will encounter. SATA-III delivers throughput of up to 600MB/sec with either HDDs or SSDs. SAS-4 uses similar connectors to SATA-III but delivers up to 2.4GB/sec, again with HDDs or SSDs. NVMe, the newest type of bus, connects SSDs only, directly to the PCIe bus, for incredible performance at up to 63GB/sec transfer speeds.
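The practical difference between these buses is easiest to see as a best-case transfer time. The sketch below uses the peak figures quoted above; real drives rarely sustain these maxima, so treat the results as lower bounds.

```python
# Peak throughput per storage bus in MB/s, taken from the figures above.
# Real-world drives rarely sustain these maxima - this is a best case.
BUS_MBPS = {"SATA-III": 600, "SAS-4": 2400, "NVMe": 63000}

def seconds_to_transfer(size_gb: float, bus: str) -> float:
    """Best-case time in seconds to move size_gb over the given bus."""
    return size_gb * 1000 / BUS_MBPS[bus]

for bus in BUS_MBPS:
    print(f"{bus}: {seconds_to_transfer(1000, bus):.0f} s per TB")
```

Reading a terabyte takes just under half an hour over SATA-III at its peak rate, but only seconds over NVMe.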

Connectors

Aside from how the card fits into the system, you need to know what connectors are on the HBA or RAID controller to ensure you get the correct cabling to attach to the HDDs or SSDs. Drives in servers are usually connected via a backplane or a fan-out cable. Common connector types in use today are SATA, Mini-SAS HD, Slimline SAS, U.2 or U.3, and M.2. The latter is available in both SATA and NVMe variants and in three sizes - 42mm, 80mm or 110mm long.

Controller Caches

RAID controller cards temporarily cache data from the host system until it is successfully written to the drives. This is a cost-effective way to improve performance, but while cached, the data can be lost if system power fails, jeopardising the data’s integrity. To prevent this issue, Battery Backup Units (BBUs) - small lithium-ion batteries that attach directly to the controller card - can be used. The BBU preserves the data that hasn’t been synced to disk yet in DRAM, for up to 72 hours without power. When the machine powers back up, the controller writes the cache contents to the disks.

Alternatively, flash cache units, which use NAND flash memory instead of DRAM, can be used. Data in NAND flash can be retained for up to three years; then, as with DRAM, when the server powers back on the data is moved from NAND and written to the disks.
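The caching behaviour described above is known as write-back caching. Here is a deliberately simplified toy model of the idea - not a real controller API - showing why the cache must survive a power cut: writes are acknowledged before they reach the disks, and only a later flush makes them durable.

```python
# Toy model of write-back caching. Writes land in a volatile cache and are
# flushed to "disk" later; a BBU or flash unit is what preserves the cache
# across a power cut so the pending writes are not lost. Purely illustrative.
class WriteBackCache:
    def __init__(self):
        self.cache = {}   # pending writes (block -> data), held in DRAM
        self.disk = {}    # data safely persisted to the drives

    def write(self, block, data):
        self.cache[block] = data      # fast: acknowledged before hitting disk

    def flush(self):
        self.disk.update(self.cache)  # on power restore, the preserved cache
        self.cache.clear()            # contents are written out to the drives

c = WriteBackCache()
c.write(0, "journal entry")   # acknowledged immediately, but not yet on disk
c.flush()                     # what the BBU makes possible after an outage
print(c.disk[0])              # prints "journal entry"
```

Without the flush step, everything in `cache` would simply vanish when power is lost - which is exactly the failure mode a BBU or flash cache unit exists to prevent.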

Data Security

Systems that support RAID arrays will usually have a hot-swap capability, meaning that a failed drive can be removed from the array without powering the system down. A new drive is put in the failed drive’s place and the array rebuild begins - automatically. You can also configure a hot spare drive - an empty drive that sits in the array doing nothing until a drive fails, meaning that the rebuild can start without the failed drive being removed first. It is also worth mentioning that multiple RAID arrays can be configured in a single system - it may be that RAID 1 is employed to protect a pair of SSDs for the OS, whereas multiple HDDs are protected by RAID 5.

RAID 0

RAID 0 is the fastest RAID mode since it stripes data across all of the array’s drives, and as the capacities of the drives are added together it also delivers the highest capacity of any RAID type. However, RAID 0 lacks a very important feature - data protection. If one drive fails, all data becomes inaccessible, so while a RAID 0 configuration may be ideal for gaming, where performance matters but the data is not important, it is not recommended for storing critical data.

RAID 1

RAID 1 works across two drives and provides data security since all data is written to both drives in the array. If a single drive fails, data remains available on the other drive; however, because every write must be performed twice, performance is reduced. Additionally, RAID 1 reduces usable capacity by 50% since each bit of data is stored on both disks in the array. RAID 1 configurations are most commonly seen mirroring the drives that contain the operating system (OS) in enterprise servers, providing a back-up copy.

RAID 5

RAID 5 writes data across all drives in the array, along with a parity block for each data block. If one drive fails, the data from the failed drive can be rebuilt onto a replacement drive. A minimum of three drives is required to create a RAID 5 array, and the capacity of a single drive is lost from usable storage due to the parity blocks. For example, if four 2TB drives were employed in a RAID 5 array, the usable capacity would be 3x 2TB = 6TB. Although some capacity is lost, the performance is almost as good as RAID 0, so RAID 5 is often seen as the sweet spot for many workstation and NAS uses.
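The parity that makes this rebuild possible is, at its simplest, an XOR of the data blocks: XOR-ing the surviving blocks with the parity block reproduces whatever was on the failed drive. A minimal sketch of the principle (real controllers rotate parity across drives and work at a much lower level):

```python
from functools import reduce

# RAID 5-style parity: parity = d0 XOR d1 XOR d2 ... If any single block is
# lost, XOR-ing the survivors with the parity block rebuilds it.
def xor_blocks(blocks):
    """XOR equal-length byte blocks together, byte by byte."""
    return bytes(reduce(lambda a, b: a ^ b, byte_tuple)
                 for byte_tuple in zip(*blocks))

data = [b"AAAA", b"BBBB", b"CCCC"]     # blocks striped over three data drives
parity = xor_blocks(data)              # the parity block for this stripe

lost = data.pop(1)                     # one drive fails
rebuilt = xor_blocks(data + [parity])  # XOR the survivors with the parity
print(rebuilt == lost)                 # prints True: the block is recovered
```

Because one block per stripe holds parity rather than data, exactly one drive's worth of capacity is given up - which is where the 3x 2TB = 6TB figure above comes from.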

RAID 6

RAID 6 writes data across all drives in the array, like RAID 5, but two parity blocks are used for each data block. This means that two drives can fail in the array without loss of data, as it can be rebuilt onto replacement drives. A minimum of four drives is required to create a RAID 6 array, although due to the dual parity blocks, two drives’ capacities are lost - for example, if you had five 2TB drives in an array, the usable capacity would be 3x 2TB = 6TB. Due to this security versus capacity trade-off, RAID 6 would usually only be employed in NAS appliances and servers with critical data.

RAID 10

RAID 10 is referred to as a nested RAID configuration as it combines the protection of RAID 1 with the performance of RAID 0. Using four drives as an example, RAID 10 creates two RAID 1 arrays, and then combines them into a RAID 0 array. Such configurations offer exceptional data protection, allowing one drive in each RAID 1 pair to fail without loss of data. Additionally, due to the RAID 0 stripe, it provides high performance when managing large numbers of smaller files, so is often seen in database servers.

RAID 50

RAID 50 is referred to as a nested RAID configuration as it combines the parity protection of RAID 5 with the performance of RAID 0. Due to the speed of RAID 0 striping, RAID 50 improves upon RAID 5 performance, especially during writes, and also offers more protection than a single RAID level. RAID 50 is often employed in larger servers when you need improved fault tolerance, high capacity and fast write speeds. A minimum of six drives is required for a RAID 50 array, although the more drives in the array the longer it will take to initialise and rebuild data due to the large storage capacity.

RAID 60

RAID 60 is referred to as a nested RAID configuration as it combines the double parity protection of RAID 6 with the performance of RAID 0. Due to the speed of RAID 0 striping, RAID 60 improves upon RAID 6 performance, especially during writes, and also offers more protection than a single RAID level. RAID 60 is often employed in larger server deployments when you need exceptional fault tolerance, high capacity and fast write speeds. A minimum of eight drives is required for a RAID 60 array, although the more drives in the array the longer it will take to initialise and rebuild data due to the large storage capacity.
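The capacity trade-offs across all the levels above follow simple arithmetic, summarised in this sketch. It assumes identical drives and, for RAID 50/60, an even split into a given number of RAID 5/6 spans (two by default); hot spares are ignored.

```python
# Usable capacity for the RAID levels discussed above, given n identical
# drives of drive_tb each. Simplified: assumes RAID 50/60 splits evenly
# into `spans` RAID 5/6 groups, and ignores hot spares.
def usable_tb(level, n, drive_tb, spans=2):
    if level == 0:  return n * drive_tb                # striping: no redundancy
    if level == 1:  return drive_tb                    # mirror pair: half lost
    if level == 5:  return (n - 1) * drive_tb          # one drive of parity
    if level == 6:  return (n - 2) * drive_tb          # two drives of parity
    if level == 10: return n // 2 * drive_tb           # half lost to mirrors
    if level == 50: return (n - spans) * drive_tb      # one parity drive per span
    if level == 60: return (n - 2 * spans) * drive_tb  # two parity drives per span
    raise ValueError(f"unsupported RAID level: {level}")

print(usable_tb(5, 4, 2))  # 6 - matches the four-drive RAID 5 example
print(usable_tb(6, 5, 2))  # 6 - matches the five-drive RAID 6 example
```

The same function makes the minimum drive counts intuitive: a two-span RAID 50 needs at least six drives (three per RAID 5 span) and a two-span RAID 60 at least eight (four per RAID 6 span).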

Ready to buy?

Browse our range of controllers:

Alternatively, if you have any further questions you’d like answering about storage controllers, don’t hesitate to call one of our friendly advisors on 01204 474747 or at [email protected].