mdadm RAID-0 and LVM can both do data striping on top of multiple disks. What is the difference, and which one is better for performance? Here is an article about the concepts behind both data striping methods, written by Jeffrey B. Layton and published in Linux Magazine.
Data striping is a commonly used technique for improving performance. It breaks data into pieces that are assigned to various physical devices, usually storage devices, in a round-robin fashion. One of the reasons that this concept was developed is that processors are capable of generating IO (reads and writes) much faster than the storage device can store or recall it. But if you can split the data among multiple storage devices then you can perhaps improve IO performance.
The process is very simple. In the case of a write function, the incoming data is split into pieces with the first piece being sent to the first device, the second piece being sent to the second device, and so on until all the devices have received a data piece or all the data has been written. If there are still pieces of data to be written then the next piece is sent to the first device and the process continues (round-robin). Data throughput is improved because the system can send one piece of data to one storage device and immediately move on to the next piece of data and the next storage device without having to wait for the first one to complete. If you like, the data storage is parallelized. In Linux there are two primary ways to achieve this, RAID-0 and LVM.
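The round-robin process described above can be sketched in a few lines of Python. This is purely illustrative, not how a real striping layer works (a real implementation operates on block devices, not byte strings), and the stripe size and disk count are made-up values:

```python
# Illustrative sketch of round-robin data striping (not a real block driver).
# STRIPE_SIZE and NUM_DISKS are hypothetical values chosen for the example;
# real systems use stripe sizes on the order of 64 KiB or more.
STRIPE_SIZE = 4
NUM_DISKS = 3

def stripe_write(data: bytes, num_disks: int = NUM_DISKS,
                 stripe_size: int = STRIPE_SIZE) -> list[bytes]:
    """Split `data` into stripe-size pieces and deal them out round-robin."""
    disks = [b""] * num_disks
    for i in range(0, len(data), stripe_size):
        piece = data[i:i + stripe_size]          # next piece of the data
        disks[(i // stripe_size) % num_disks] += piece  # round-robin target
    return disks

# 16 bytes -> 4 pieces: piece 0 -> disk 0, piece 1 -> disk 1,
# piece 2 -> disk 2, piece 3 wraps around to disk 0 again.
disks = stripe_write(b"ABCDEFGHIJKLMNOP")
```

Because each piece goes to a different device, the pieces can be in flight at the same time, which is where the parallelism comes from.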
RAID-0 with mdadm
One way to achieve data striping is to use RAID-0. Most people are probably familiar with the concept of RAID (Redundant Array of Inexpensive Disks), which seeks to divide data, possibly replicate it, and place it on multiple storage devices. There are various techniques for achieving these goals, and each one has a number associated with it, such as RAID-0 or RAID-1. The details of each scheme determine whether the emphasis is on data reliability, increased throughput, or both.
RAID-0 is a scheme to improve data throughput by taking the data and splitting it evenly between multiple disks (data striping). Figure 1 below, from Wikipedia, shows how data is split across two disks.
In this example, the first data piece, A1, is sent to disk 0, the second piece, A2, is sent to disk 1, and so on.
There are two terms that help define the properties of RAID-0.
- Stripe Width
This is the number of stripes that can be written to or read from at the same time. Very simply, this is the number of drives used in the RAID-0 group. In the example in Figure 1 the stripe width is 2.
- Stripe Size
This refers to the size of the stripes on each drive. The terms block size, chunk size, stripe length, and granularity are sometimes used in place of stripe size, but they are all equivalent.
RAID-0 can, in many cases, help IO performance because of the data striping (parallelism). If the data is smaller than the stripe size (chunk size), it will be written to only one disk, not taking advantage of the striping. But if the data size is greater than the stripe size, read/write performance should increase because more than one disk can service the read or write. Increasing the stripe width adds more disks, which can further improve read/write performance when the data size is large enough to span the additional drives.
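To see why the data size relative to the stripe size matters, here is a small Python sketch that computes which member disks a given read or write would touch. The stripe sizes and widths used below are hypothetical examples:

```python
# Sketch: which member disks a read/write touches, given the stripe size
# (bytes per stripe) and stripe width (number of drives). Values used in
# the examples are hypothetical; real chunk sizes are often 64-512 KiB.
def disks_touched(offset: int, length: int,
                  stripe_size: int, width: int) -> set[int]:
    first = offset // stripe_size             # first stripe of the IO
    last = (offset + length - 1) // stripe_size  # last stripe of the IO
    return {s % width for s in range(first, last + 1)}

# A 16 KiB write with a 64 KiB stripe size stays on one disk:
disks_touched(0, 16 * 1024, 64 * 1024, 4)    # only one member is used

# A 256 KiB write spans all four disks, so all four can work in parallel:
disks_touched(0, 256 * 1024, 64 * 1024, 4)
```

An IO smaller than the stripe size lands on a single drive (unless it straddles a stripe boundary), so only IOs larger than the stripe size benefit from the extra drives.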
Mdadm is an all-purpose RAID management tool for Linux with a long history.
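As a sketch of how such an array might be created with mdadm: the device names, array name, and chunk size below are placeholders, not a prescription, so substitute your own.

```shell
# Sketch: creating a two-disk RAID-0 array with mdadm.
# /dev/sdb1, /dev/sdc1, /dev/md0, and the 64 KiB chunk are hypothetical.
mdadm --create /dev/md0 --level=0 --raid-devices=2 --chunk=64 \
      /dev/sdb1 /dev/sdc1
mkfs.ext4 /dev/md0          # put a file system on the striped array
cat /proc/mdstat            # verify the array is assembled and running
```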
Data Striping with LVM
LVM has been discussed in a previous article about managing pools of data. As discussed, it is an extraordinarily useful tool for managing storage. Fundamentally, it allows you to collect physical storage devices and combine them into virtual devices (volume groups) that can be divided into logical partitions (logical volumes), which are then used as the devices for file systems. You can add or remove devices from a volume group, or even move them, as needed. Couple these techniques with file systems that can be resized and you have a very efficient way of growing or moving file systems as needed.
In addition, LVM is very flexible allowing you to control exactly how the physical devices are combined into the volume groups (VGs) and the logical volumes (LVs). It is this flexibility that allows you to do data striping. In LVM this is called striped mapping. Figure 2 below illustrates this concept.
Striped mapping maps the physical volumes (typically the drives) to the logical volume that is then used as the basis of the file system. LVM takes the first few stripes from the first physical volume (PV0) and maps them to the first stripes on the logical volume (LV0). Then it takes the first few stripes from the next physical volume (PV1) and maps them to the next stripes in LV0. The next stripes are taken from PV0 and mapped to LV0 and so on until the stripes on PV0 and PV1 are all allocated to the logical volume, LV0.
The advantage of the striped mapping is similar to RAID-0. When data is read from or written to the file system and if the data is large enough, it spans multiple stripes so that both physical devices can be used, improving performance.
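As a sketch, a striped logical volume might be built like this; the device names, volume group and logical volume names, and sizes are all hypothetical:

```shell
# Sketch: building a striped logical volume with LVM.
# /dev/sdb1, /dev/sdc1, vg0, lv_stripe, and the sizes are hypothetical.
pvcreate /dev/sdb1 /dev/sdc1          # initialize the physical volumes
vgcreate vg0 /dev/sdb1 /dev/sdc1      # combine them into a volume group
lvcreate --stripes 2 --stripesize 64 \
         --size 100G --name lv_stripe vg0   # stripe across both PVs
mkfs.ext4 /dev/vg0/lv_stripe          # file system on the striped LV
```

The `--stripes` option corresponds to the stripe width and `--stripesize` (in KiB) to the stripe size discussed earlier.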
Contrasting RAID-0 and LVM
From the previous discussions it is obvious that both RAID-0 and LVM achieve improved performance because of data striping across multiple storage devices. So in that respect they are the same. However, LVM and RAID are used for different purposes, and in many cases are used together. Let’s look at both techniques from different perspectives.
The size (capacity) of a RAID-0 group is computed from the smallest disk size among the disks in the group, multiplied by the number of drives in the group. For example, if you have two drives where one drive is 250GB in size and the second drive is 200GB, then the RAID-0 group is 400GB in size, not 450GB. So RAID-0 does not allow you to use the entire space of each drive if they are different sizes.
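This capacity rule is easy to express in a couple of lines of Python:

```python
# Sketch of the RAID-0 capacity rule: the smallest member disk's size
# times the number of members. Leftover space on larger disks is unused.
def raid0_capacity(disk_sizes_gb: list[int]) -> int:
    return min(disk_sizes_gb) * len(disk_sizes_gb)

raid0_capacity([250, 200])  # 400, not 450: 50GB of the larger disk is wasted
```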
On the other hand, LVM allows you to combine all of the space on all of the drives into a single virtual space. You can use striped mapping across the drives as you would in RAID-0, with the capacity being the same as RAID-0. However, LVM also allows you to use the remaining space for additional logical volumes (LVs).
In the case of mdadm and software RAID-0 on Linux, you cannot grow a RAID-0 group; you can only grow a RAID-1, RAID-5, or RAID-6 array. This means that you can’t add drives to an existing RAID-0 group without rebuilding the entire RAID group and restoring all the data from a backup.
However, with LVM you can easily grow a logical volume. But you cannot use striped mapping to add a drive to an existing striped logical volume, because you can’t interleave the existing stripes with the new stripes. This link explains it fairly concisely:
“In LVM 2, striped LVs can be extended by concatenating another set of devices onto the end of the first set. So you can get into a situation where your LV is a 2 stripe set concatenated with a linear set concatenated with a 4 stripe set.”
Despite not being able to maintain a striped mapping in LVM, you can easily add space to a striped logical volume.
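As a sketch, growing a striped logical volume (and the file system on it) might look like the following; all names and sizes are hypothetical, and the new space is concatenated onto the end of the existing stripe set rather than interleaved with it:

```shell
# Sketch: adding space to an existing striped logical volume.
# vg0, lv_stripe, /dev/sdd1, and the sizes are hypothetical.
vgextend vg0 /dev/sdd1                   # add a new physical volume to the VG
lvextend --size +50G /dev/vg0/lv_stripe  # grow the LV (concatenated extent)
resize2fs /dev/vg0/lv_stripe             # grow the ext4 file system to match
```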
This article, written by the original developers of LVM for Linux, presents four advantages of LVM.
- Logical volumes can be resized while they are mounted and accessible by the database or file system, removing the downtime associated with adding or deleting storage from a Linux server
- Data from one (potentially faulty or damaged) physical device may be relocated to another device that is newer, faster or more resilient, while the original volume remains online and accessible
- Logical volumes can be constructed by aggregating physical devices to increase performance (via disk striping) or redundancy (via disk mirroring and I/O multipathing)
- Logical volume snapshots can be created to represent the exact state of the volume at a certain point-in-time, allowing accurate backups to proceed simultaneously with regular system operation
These four advantages point to the fact that LVM is designed for ease of management rather than performance.
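As a final sketch, the snapshot advantage might look like this in practice; the volume names, snapshot size, and mount point are hypothetical:

```shell
# Sketch: taking an LVM snapshot for a consistent backup.
# vg0, lv_stripe, lv_snap, the 5G size, and /mnt/snap are hypothetical.
lvcreate --snapshot --size 5G --name lv_snap /dev/vg0/lv_stripe
mount -o ro /dev/vg0/lv_snap /mnt/snap   # frozen point-in-time view
# ... run the backup against /mnt/snap while the original stays online ...
umount /mnt/snap
lvremove /dev/vg0/lv_snap                # discard the snapshot when done
```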