Storage Controllers Buyers Guide

What is a Storage Controller?

A storage controller is a hardware device used to manage hard disk drives (HDDs) or solid-state drives (SSDs) in a server or storage array. While all motherboards include a basic storage controller built into the chipset, you may need to add a third-party storage controller if you need to connect a large number of drives or want to configure them in a RAID array to protect data.

In this guide we’ll look at the different ways of connecting to and controlling drives within a system, why you’d want to do this and which method you should choose.

Why Do I Need a Storage Controller?

In any computer system, be it a PC, workstation or server, data is stored on drives - either HDDs or SSDs. You can learn more about which type of drive will best suit your storage needs in our Enterprise Drives Buyers Guide. In a client system such as a laptop a single storage drive may be sufficient for all your needs; however, in infrastructure systems such as servers, several drives may be required as data grows.

A storage controller is the device that allows you to pool your drives into a single unit, so the system sees one total capacity rather than individual drives. This can be multiple internal drives within a system, or drives within an external storage array. The controller may also offer the ability to configure RAID. RAID stands for Redundant Array of Independent Disks; it spreads data over multiple drives so that one failed drive is no longer a single point of failure. With RAID also comes the ability to hot-swap drives, meaning there is no need to power off the system to replace or upgrade a drive. There are two main types of storage controller card, as shown in the tabs below.

Host Bus Adapters

A Host Bus Adapter (HBA) is an expansion card that plugs into a PCIe slot on the system’s motherboard and provides fast, reliable non-RAID communication between the host system and the storage devices. HBAs can connect hundreds or even thousands of HDDs, SSDs and tape devices to the host system, making them ideal for cost-sensitive backup solutions or high-performance software-defined storage (SDS) configurations.

RAID Controllers

A RAID controller card is similar to an HBA, but can also add redundancy (RAID) for the drives, help optimise performance, reduce latency, or even make smart decisions on whether to store data on an HDD or an SSD cache, depending on user needs. Since these additional tasks consume power and processing resources, RAID controllers are typically more expensive than HBAs and support fewer devices. They are, however, recommended where data is critical and needs some degree of protection, and in scenarios where different drive types are mixed, such as a single array containing SSDs for frequently accessed data and HDDs for archival purposes.

Storage Controller Connection

A storage controller needs to connect to the host system so compatibility is key - the type of physical connector will be influenced by the connection interface and the storage bus. The tabs below explain the various common types.

Interfaces

Both HBAs and RAID controllers are available with a variety of interfaces and specifications that need to be compatible with the motherboard to ensure maximum performance. All these types connect to a PCIe slot, but will differ in throughput depending on whether they are PCIe 3.0, 4.0 or 5.0 - version 4.0 offers twice as much bandwidth as the older 3.0 standard, and version 5.0 doubles that of 4.0 again. Although the HBA or RAID card will occupy a single physical slot on the motherboard, it is also important to check how many PCIe lanes it needs (written as x1, x4, x8, x16) as this will have an impact on which PCIe slot can be used. Larger PCIe slots are backwards compatible with smaller cards, so if you don’t have any free x8 slots you can install the storage controller in a x16 slot without any problems. However, it’s important to note that in some systems not all slots will operate at full speed. For instance, many mid-range motherboards have multiple x16 slots, but some of these may only operate at x8 speed. You can find out which slots provide which speeds by cross-referencing the motherboard’s specification and the CPU’s capabilities.
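
As a rough illustration of how generation and lane count combine, here is a minimal Python sketch. The per-lane figures are approximations (after encoding overhead) and real-world throughput will be lower; the x8 card in a 4.0 slot is just an example scenario.

```python
# Rough sketch: approximate usable one-way bandwidth of a PCIe link by
# generation and lane count. Per-lane figures are approximate (GB/s);
# real-world throughput will be somewhat lower.
PER_LANE_GBPS = {"3.0": 1.0, "4.0": 2.0, "5.0": 4.0}

def pcie_bandwidth_gbps(generation, lanes):
    """Return the approximate bandwidth of a PCIe link in GB/s."""
    return PER_LANE_GBPS[generation] * lanes

# Example: an x8 storage controller in a PCIe 4.0 slot has roughly
# 16 GB/s of headroom; the same card in a slot running at x4 has half that.
print(pcie_bandwidth_gbps("4.0", 8))
print(pcie_bandwidth_gbps("4.0", 4))
```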

Buses

The storage controller communicates with drives via a storage bus, in essence the electronic language that both devices share. It’s therefore important to choose a storage controller that supports the drives you’re intending to add to the system. There are three types of storage bus that you will encounter:

• SATA is the most common type of bus. There have been several generations of SATA, the latest being SATA-III, delivering throughput speeds of up to 6Gb/s.

• SAS is another common bus but is usually only seen in high-end servers. SAS connectors have the same physical appearance as SATA, and SAS controllers can also operate SATA drives (though not the other way around). There are also several generations - SAS-3 delivers 12Gb/s and SAS-4 achieves 24Gb/s.

• NVMe is the newest type of bus and connects SSDs directly to the PCIe bus for incredible performance, with up to 63Gb/s transfer rates possible.
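
To put those headline speeds in perspective, here is a minimal sketch using the theoretical maximums quoted above; sustained real-world speeds from drives and controllers will be lower, so the figures are illustrative only.

```python
# Sketch: time to transfer 1 TB at the theoretical maximum of each bus.
# Speeds are the headline figures quoted above, in gigabits per second;
# real drives and controllers will sustain less than this.
BUS_SPEED_GBITS = {
    "SATA-III": 6,
    "SAS-3": 12,
    "SAS-4": 24,
    "NVMe (PCIe 4.0 x4)": 63,
}

TERABYTE_BITS = 1_000_000_000_000 * 8  # 1 TB expressed in bits

for bus, gbits in BUS_SPEED_GBITS.items():
    seconds = TERABYTE_BITS / (gbits * 1_000_000_000)
    print(f"{bus}: ~{seconds / 60:.0f} minutes per TB")
```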

Connectors

Aside from how the card fits into the motherboard, you need to know what connectors are on the HBA or RAID controller to ensure you get the correct cabling to attach to the HDDs or SSDs you want to use. There are a number of possible connectors, shown below. It is also worth mentioning that a drive may be connected directly using a single cable, via a fan-out cable allowing attachment of up to four drives, or via a backplane - a connector board usually found in servers and storage appliances that takes multiple connections from the controller card and shares them with a greater number of drives. You can also buy cables with different connectors at either end, depending on what the storage controller and drives/backplane require.

SATA

SATA is the oldest connector type you are likely to find on a RAID card or HBA. While still commonly found on motherboards, it has mostly been superseded in add-in storage controllers by the later types listed below, as it can only support one drive per cable at relatively slow speeds.

MiniSAS

MiniSAS is an older but very widely used connector type for SATA and SAS drives that is still commonly seen on cards. There is also an external version for connecting to JBODs and external drives.

MiniSAS HD

MiniSAS HD is the most common connector found on controller cards and HBAs today for SATA, SAS and NVMe drives. It allows four drives to connect per port using either a fan-out cable or a backplane. Different cables are required if NVMe drives are going to be used.

Slimline SAS

Slimline SAS, or SlimSAS, is a newer, smaller form factor alternative to MiniSAS HD, often seen on cards designed for higher density and greater drive counts. It also allows four drives to connect per port.

U.2 / U.3

U.2 and the more recent U.3 are dedicated NVMe connectors found on controller cards, most often employed in high-end servers and storage appliances. Depending on the performance required, each port can support one or two drives.

M.2

M.2 is found on many motherboards and some controller cards and directly connects SSDs without the need for cables. Make sure that the M.2 slot and drives are both of the correct type (SATA or NVMe) and length (42mm, 80mm or 110mm).

RAID Controller Addons

RAID controller cards temporarily cache data from the host system until it is successfully written to the drives. This is a cost-effective way to improve performance, but while cached, the data can be lost if system power fails, jeopardising the data’s integrity. To prevent this issue, there are a number of ways you can upgrade some RAID controllers.

Battery Backup Units (BBUs) are small lithium-ion batteries that attach directly to the controller card. The BBU’s job is to preserve the data that hasn’t been synced to disk yet - it does this by keeping the controller’s DRAM cache powered, usually for up to 72 hours without power. When the machine powers back up, the controller writes the cache contents to disk.

Alternatively, flash cache units (such as CacheVault from Broadcom) can be added to RAID controllers to protect the integrity of data by storing cached data in non-volatile flash storage. What makes CacheVault superior in this respect is that data is moved from DRAM to NAND flash, where it can be retained for up to three years. When the server turns back on, the data is moved from NAND back to DRAM and then written to the disks.

While a BBU and a NAND flash cache like CacheVault are both physical add-on modules, CacheCade (also from Broadcom) is RAID controller software that enables an SSD read/write cache for the array, allowing you to optimise existing HDD arrays with an SSD-based flash cache. With HDD arrays being pushed to their input/output limits, data “hot spots” are inevitable. Hot spots are the most frequently accessed areas of an HDD array, and they can severely shorten the life of the drives. To address this, CacheCade creates a front-side flash cache for the “hottest” data. Reads and writes are served from the SSD cache, which is much more efficient than reading and writing to the HDD array, and CacheCade content remains intact after a reboot.

Data Security

As we’ve already stated, one of the main functions of a RAID controller is the configuration of a RAID array. We mentioned that RAID spreads data over several drives; in most protective RAID levels this works by distributing additional blocks of data, referred to as parity blocks, across the drives so that in the event of any one drive failing the parity blocks can be used to recover the lost data and rebuild the array. RAID levels are identified by a number, and their strengths and attributes vary with each type.
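
As an illustrative sketch of the principle (not specific to any particular controller), single-parity levels such as RAID 5 use XOR parity: the parity block is the XOR of the data blocks in a stripe, so any one missing block can be reconstructed from the rest. The block contents below are made-up example data.

```python
# Illustrative sketch of XOR parity, the principle behind RAID 5 recovery.
# Block contents are hypothetical example data, not from any real array.
def xor_blocks(blocks):
    """XOR a list of equal-length byte blocks together."""
    result = bytearray(len(blocks[0]))
    for block in blocks:
        for i, byte in enumerate(block):
            result[i] ^= byte
    return bytes(result)

# Three data blocks striped across three drives, plus one parity block.
data_blocks = [b"AAAA", b"BBBB", b"CCCC"]
parity = xor_blocks(data_blocks)

# Simulate losing drive 1: rebuild its block from the survivors plus parity.
surviving = [data_blocks[0], data_blocks[2], parity]
rebuilt = xor_blocks(surviving)
assert rebuilt == data_blocks[1]
print("Rebuilt block:", rebuilt)
```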

RAID 0

RAID 0 is the fastest RAID mode since it stripes data across all of the array’s drives, and as the capacities of each drive are added together it results in the highest capacity of any RAID type. However, RAID 0 lacks a very important feature - data protection. If one drive fails, all data becomes inaccessible, so while RAID 0 may be ideal for uses such as gaming, where performance matters but the data is not critical, it is not recommended for storing important data.

RAID 1

RAID 1 works across a pair of drives and provides data security since all data is written to both drives in the array. If a single drive fails, data remains available on the other drive; however, because every piece of data is written twice, write performance is reduced. Additionally, RAID 1 halves the usable capacity since each bit of data is stored on both disks in the array. RAID 1 configurations are most commonly seen mirroring the drives that contain the operating system (OS) in enterprise servers, providing a backup copy.

RAID 5

RAID 5 writes data across all drives in the array, along with a parity block for each stripe of data. If one drive fails, the data from the failed drive can be rebuilt onto a replacement drive. A minimum of three drives is required to create a RAID 5 array, and the capacity of a single drive is lost from usable storage due to the parity blocks. For example, if four 2TB drives were employed in a RAID 5 array, the usable capacity would be 3x 2TB = 6TB. Although some capacity is lost, read performance is almost as good as RAID 0, so RAID 5 is often seen as the sweet spot for many workstation and NAS uses.

RAID 6

RAID 6 writes data across all drives in the array, like RAID 5, but two parity blocks are used for each stripe of data. This means that two drives can fail in the array without loss of data, as it can be rebuilt onto replacement drives. A minimum of four drives is required to create a RAID 6 array, and due to the dual parity blocks two drives’ worth of capacity is lost - for example, if you had five 2TB drives in an array, the usable capacity would be 3x 2TB = 6TB. Due to this security versus capacity trade-off, RAID 6 is usually only employed in NAS appliances and servers where data is critical.
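
To make the capacity arithmetic above concrete, here is a minimal sketch that reproduces the RAID 5 and RAID 6 examples. Drive counts and sizes are illustrative, and identical drives are assumed.

```python
# Minimal sketch: usable capacity for common RAID levels, assuming
# identical drives. Drive counts and sizes below are illustrative examples.
def usable_capacity_tb(level, drive_count, drive_size_tb):
    if level == 0:            # striping, no redundancy
        return drive_count * drive_size_tb
    if level == 1:            # mirroring across a pair of drives
        return drive_size_tb
    if level == 5:            # one drive's worth of parity
        return (drive_count - 1) * drive_size_tb
    if level == 6:            # two drives' worth of parity
        return (drive_count - 2) * drive_size_tb
    if level == 10:           # striped mirrors, half the raw space
        return (drive_count // 2) * drive_size_tb
    raise ValueError("unsupported RAID level")

print(usable_capacity_tb(5, 4, 2))   # 6 TB, matching the RAID 5 example
print(usable_capacity_tb(6, 5, 2))   # 6 TB, matching the RAID 6 example
```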

RAID 10

RAID 10 is referred to as a nested RAID configuration as it combines the protection of RAID 1 with the performance of RAID 0. Using four drives as an example, RAID 10 creates two RAID 1 arrays, and then combines them into a RAID 0 array. Such configurations offer exceptional data protection, allowing two drives to fail provided they are in different RAID 1 pairs. Additionally, due to the RAID 0 stripe, it provides high performance when managing large numbers of smaller files, so is often seen in database servers.

RAID 50

RAID 50 is referred to as a nested RAID configuration as it combines the parity protection of RAID 5 with the performance of RAID 0. Due to the speed of RAID 0 striping, RAID 50 improves upon RAID 5 performance, especially during writes, and also offers more protection than a single RAID level. RAID 50 is often employed in larger servers when you need improved fault tolerance, high capacity and fast write speeds. A minimum of six drives is required for a RAID 50 array, although the more drives in the array the longer it will take to initialise and rebuild data due to the large storage capacity.

RAID 60

RAID 60 is referred to as a nested RAID configuration as it combines the double parity protection of RAID 6 with the performance of RAID 0. Due to the speed of RAID 0 striping, RAID 60 improves upon RAID 6 performance, especially during writes, and also offers more protection than a single RAID level. RAID 60 is often employed in larger server deployments when you need exceptional fault tolerance, high capacity and fast write speeds. A minimum of eight drives is required for a RAID 60 array, although the more drives in the array the longer it will take to initialise and rebuild data due to the large storage capacity.

Systems that support RAID arrays will usually have a hot-swap capability, meaning that a failed drive can be removed from the array without powering the system down. A new drive is put in the failed drive’s place and the array rebuild begins automatically. You can also configure a hot spare drive - an empty drive that sits in the array doing nothing until a drive fails, meaning that the rebuild can start immediately without the failed drive having to be removed first.

It is also worth mentioning that multiple RAID arrays can be configured in a single system - for example, RAID 1 might be employed to protect a pair of SSDs holding the OS, while a larger set of data drives is protected by RAID 6, with hot spares included too. Ultimately, however, the RAID configuration(s) you choose need to be controlled, either by software on the system or by additional hardware within it. Let’s take a look at the options.

Hardware RAID

In a hardware RAID setup, the drives connect to a RAID controller card inserted in a PCIe slot or integrated into the motherboard. This works the same way for larger servers as well as workstations and desktop computers, and many external drive enclosures have a RAID controller built in. High-end hardware RAID controllers can be upgraded with a cache protector: a small capacitor that, in the event of power loss, keeps powering the cache memory on the RAID controller so that its contents can be preserved in non-volatile flash, where they can be retained for as long as three years. Without a cache protector, data stored in the RAID controller’s cache will be lost, which could cause data corruption.

Advantages

• Better performance, especially in more complex RAID configurations. Processing is handled by the dedicated RAID processor rather than the CPU, which results in less strain on the system when writing backups and less downtime when restoring data

• More RAID configuration options, including hybrid configurations which may not be available with certain OSes

• Compatible across different OSes. This is critical if you plan to access your RAID system from, say, both Mac and Windows - hardware RAID would be recognisable by any system

Disadvantages

• Since there’s more hardware, there’s more cost involved in the initial setup

• Inconsistent performance for certain hardware RAID setups when using SSDs

• Older RAID controllers disable the built-in fast caching functionality of SSDs that is needed for efficient programming and erasing on the drive

Chipset RAID

Many AMD and Intel motherboard chipsets support some of the basic types of RAID, potentially negating the need for a hardware RAID controller.

Advantages

• No additional cost - all you need to do is connect the drives and then configure them in the BIOS

• Modern CPUs are so powerful they can easily handle RAID 0 & 1 with no noticeable performance hit

Disadvantages

• You’re restricted to the RAID levels your motherboard chipset supports

• Performance hit if you’re using more complex RAID configurations

• Limited performance and resilience compared to a hardware RAID controller

• If the motherboard dies you lose access to the RAID array

Software RAID

The third and final type of RAID array is software RAID, where the operating system itself creates and manages the array. Numerous operating systems support RAID, including Windows and Linux.

Advantages

• No additional cost - all you need to do is connect the drives and then configure them in the OS

• Modern CPUs are so powerful they can easily handle RAID 0 & 1 with no noticeable performance hit

Disadvantages

• Software RAID is often specific to the OS being used, so it can’t generally be used for drive arrays that are shared between operating systems

• You’re restricted to the RAID levels your specific OS can support

• Performance hit if you’re using more complex RAID configurations

• If the OS dies you lose access to the RAID array

Ready to buy?

Click the links below to see our full range of storage controllers.

Remember we’re here to help so if you’d still like advice on which storage controller approach is best for your situation, don’t hesitate to contact our friendly advisors on 01204 474747.