Linux's Logical Volume Manager, or lvm, is a method for dividing a physical disk (a physical volume, PV) into blocks (physical extents, PEs). These blocks can be assigned to partitions (logical volumes, LVs). Assigning PEs to an LV creates a map, called the allocation map, that translates the physical extents to logical extents (LEs), the blocks as the LV sees them. The purpose of this abstraction of disk blocks is great flexibility when allocating disk space to partitions. With lvm, a partition no longer has to be one contiguous block on a disk. It can be located on one disk, span multiple disks, or even be striped or mirrored. PVs are grouped in volume groups (VGs). LVs can be moved between all PVs in a VG. PVs can be added to and removed from a VG, changing its total size. A PV can only belong to one VG.
lvm2 builds upon the foundation of lvm1 and improves on its features, and has been included in the Linux kernel since version 2.6. The features of lvm include:
- Resize VGs by adding or removing PVs.
- Resize LVs by adding or removing LEs.
- Create read-only snapshots (lvm1) or read-write snapshots (lvm2).
- Stripe or mirror LVs (RAID0- and RAID1-like) [1].
- Migrate LVs between PVs.
With these features available in the kernel itself, the only real reason not to use lvm is interoperability between operating systems. Windows has its own lvm-like system, called Logical Disk Manager, and cannot access partitions on lvm without special tools. I'm not sure about the compatibility between Linux lvm and the BSD variants.
For the purpose of this article, when I say lvm, I am talking about lvm2 for Linux.
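Before diving into a real setup, the three core objects map onto three commands. The sketch below is guarded behind an `LVM_DEMO_DISK` environment variable — a convention of mine for this article, not an lvm one — so it refuses to touch anything unless you explicitly name a disposable partition; the VG name `demo` and LV name `data` are likewise made up for illustration:

```shell
# Guarded sketch: point LVM_DEMO_DISK at a disposable partition before running.
if [ -z "${LVM_DEMO_DISK:-}" ]; then
    echo "set LVM_DEMO_DISK to a disposable partition first"
else
    pvcreate "$LVM_DEMO_DISK"        # initialize the partition as a PV
    vgcreate demo "$LVM_DEMO_DISK"   # group one or more PVs into a VG
    lvcreate -n data -L 1G demo      # carve a 1 GiB LV out of the VG
    ls -l /dev/demo/data             # the LV appears as a normal block device
fi
```

Each object can then be inspected with `pvdisplay`, `vgdisplay`, and `lvdisplay` respectively, as the examples below show.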
A real-life example
For storing my audio and video data, I have custom-built a NAS/HTPC with ample storage. To manage this storage, I had to find something better than the normal method of creating partitions. Because my collection is ever-growing, I have to be able to extend partitions later on. I also need a way to migrate my existing data onto a new disk when the need arises. As such, I have created the following setup:
```
root@razengan:~# pvscan
  PV /dev/sda3   VG razengan   lvm2 [148.62 GiB / 66.52 GiB free]
  PV /dev/sdb1   VG razengan   lvm2 [2.73 TiB / 240.52 GiB free]
  PV /dev/sde1   VG razengan   lvm2 [1.82 TiB / 489.01 GiB free]
  PV /dev/sdd1   VG razengan   lvm2 [1.82 TiB / 359.01 GiB free]
  PV /dev/sdc1   VG razengan   lvm2 [931.51 GiB / 331.51 GiB free]
  Total: 5 [7.42 TiB] / in use: 5 [7.42 TiB] / in no VG: 0 [0   ]
```
/dev/sda3 is a small 2.5″ disk I use for the Linux OS and home directories only. As you can see, the size of this disk is 148 GiB (≈ 160GB), of which 66 GiB is not assigned to any LV. A bit more detail about this disk:
```
root@razengan:~# pvdisplay /dev/sda3 -m
  --- Physical volume ---
  PV Name               /dev/sda3
  VG Name               razengan
  PV Size               148.63 GiB / not usable 941.00 KiB
  Allocatable           yes
  PE Size               4.00 MiB
  Total PE              38048
  Free PE               17030
  Allocated PE          21018
  PV UUID               0IoBwh-553o-V7tk-WGUc-Z3q4-anQP-NoGLxf

  --- Physical Segments ---
  Physical extent 0 to 9535:
    Logical volume      /dev/razengan/root
    Logical extents     0 to 9535
  Physical extent 9536 to 13271:
    FREE
  Physical extent 13272 to 15217:
    Logical volume      /dev/razengan/swap_1
    Logical extents     0 to 1945
  Physical extent 15218 to 24753:
    Logical volume      /dev/razengan/home
    Logical extents     0 to 9535
  Physical extent 24754 to 38047:
    FREE
```
This disk is divided into 38048 PEs of 4.00 MiB each. The -m option of pvdisplay shows the PE-to-LE mapping and allocation on this disk. There are three LVs present: root, swap_1 and home. Between root and swap_1 there are 3736 PEs of unassigned space (physical extents 9536 through 13271). With a PE size of 4 MiB, this means there is 3736 × 4 MiB = 14944 MiB, or about 14.6 GiB, of unassigned space available. Looking from the root LV's perspective, we would see:
```
root@razengan:~# lvdisplay -m razengan/root
  --- Logical volume ---
  LV Name                /dev/razengan/root
  VG Name                razengan
  LV UUID                xmmr0X-VP3w-jrmH-Dh1o-RRQV-s4zV-O6C8zx
  LV Write Access        read/write
  LV Status              available
  # open                 1
  LV Size                37.25 GiB
  Current LE             9536
  Segments               1
  Allocation             inherit
  Read ahead sectors     auto
  - currently set to     256
  Block device           252:1

  --- Segments ---
  Logical extent 0 to 9535:
    Type                linear
    Physical volume     /dev/sda3
    Physical extents    0 to 9535
```
This shows that LV root consists of a single linear segment on PV /dev/sda3, occupying physical extents 0 through 9535.
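The free-space figure quoted earlier follows directly from pvdisplay's segment listing: the FREE gap runs from physical extent 9536 to 13271, and each PE is 4 MiB. The arithmetic:

```shell
# Gap between root (last PE 9535) and swap_1 (first PE 13272).
free_pe=$((13271 - 9536 + 1))
free_mib=$((free_pe * 4))            # PE size is 4 MiB on this PV
echo "free PEs: $free_pe"            # 3736
echo "free MiB: $free_mib"           # 14944 MiB, about 14.6 GiB
```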
As mentioned earlier, the basis of lvm is blocks called extents. These extents exist physically on a PV and are mapped to logical extents, which are grouped into LVs. To create a PV, there are several options:
- Initialize a disk entirely as a PV.
- Create a single partition filling the disk, and initialize it as a PV.
- Create multiple smaller partitions on a disk, and initialize one or more of them as PVs.
The first two options I would personally only use for storage purposes. The reason for this is that although you can boot from lvm, it's a bit tricky. First off, you have to use GRUB2 and tell it to preload its lvm module while booting. Second, if something were to happen to your GRUB2 install, you would have a harder time recovering from it. In my experience, creating /boot as a normal partition is a reasonable trade-off between flexibility and ease of recovery. If you take a moment to size the /boot partition properly before installing Linux, you will never have to look at it again. Kernels aren't all that big anyway. In contrast, I would likely have to resize my root and home partitions later on, because the OS and user data do grow with time.
When deciding between option one and two, I would prefer option two in all cases. While there is no functional difference between these options, option two has an advantage. When initializing an entire disk with lvm, the lvm meta-data is stored at the beginning of the disk. lvm does not create a partition table, however, since lvm itself doesn't need one. This causes any tool that does not understand lvm to see the disk as empty, and it may overwrite the lvm meta-data without confirmation, because it thinks the disk is empty. Worse, data in the LVs themselves might be overwritten. If you first create a partition table with a single disk-filling partition, such a tool would instead see an unrecognized partition, and would most likely prompt the user about what to do. This is more robust.
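Option two then boils down to two commands. A sketch, using sgdisk (gdisk's scriptable sibling) and guarded behind the hypothetical LVM_DEMO_DISK variable naming a disposable, empty disk:

```shell
if [ -z "${LVM_DEMO_DISK:-}" ]; then
    echo "set LVM_DEMO_DISK to a disposable disk first"
else
    # One partition filling the whole disk, typed as Linux LVM (8e00)...
    sgdisk -n 1:0:0 -t 1:8e00 "$LVM_DEMO_DISK"
    # ...then initialize that partition, not the raw disk, as the PV.
    pvcreate "${LVM_DEMO_DISK}1"
fi
```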
If you want to use lvm for your boot disk, use option three. I would suggest partitioning your disk using gdisk [2]. Next, create a small /boot partition (I use 250MB), and optionally an EFI partition, if you have a motherboard equipped with a UEFI instead of a BIOS. Partition the remaining space as one partition with the appropriate partition type, which in gdisk is 8e00 (Linux LVM).
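Scripted with sgdisk, that boot-disk layout might look like the sketch below, assuming an empty disposable disk named by the hypothetical LVM_DEMO_DISK variable. The 100M EFI size is my own guess; the article only specifies 250MB for /boot:

```shell
if [ -z "${LVM_DEMO_DISK:-}" ]; then
    echo "set LVM_DEMO_DISK to a disposable disk first"
else
    sgdisk -n 1:0:+100M -t 1:ef00 "$LVM_DEMO_DISK"  # optional EFI partition
    sgdisk -n 2:0:+250M -t 2:8300 "$LVM_DEMO_DISK"  # small /boot partition
    sgdisk -n 3:0:0     -t 3:8e00 "$LVM_DEMO_DISK"  # rest of the disk: Linux LVM
    pvcreate "${LVM_DEMO_DISK}3"                    # initialize only the lvm partition
fi
```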
Before the blocks in a PV can be grouped into LVs, you must assign it to a volume group (VG). A VG is therefore a group of one or more PVs, with the purpose of mapping PEs to LEs, which in turn form LVs. Simply put, you can create partitions in a VG. You can change the size of a VG by adding more PVs, or removing them. Despite their importance in lvm, VGs are the lvm objects you will probably mutate directly the least. You basically slap a few PVs together into a VG, and can forget about the VG until you create a new LV and need to specify which VG to use. The lvm commands I personally use most deal with PVs and LVs.
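The handful of VG operations you do occasionally need are short. A sketch, again guarded by the hypothetical LVM_DEMO_DISK variable; the VG name `demo` and the second PV are placeholders:

```shell
if [ -z "${LVM_DEMO_DISK:-}" ]; then
    echo "set LVM_DEMO_DISK to a disposable partition first"
else
    vgcreate demo "$LVM_DEMO_DISK"   # create a VG from one PV
    vgs demo                         # summary: size, free space, PV/LV count
    # vgextend demo /dev/sdX1        # grow the VG by adding another PV
    # vgreduce demo /dev/sdX1        # remove a PV again (it must be empty first)
fi
```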
After creating your VG and assigning your PVs to it, you can use the space available in the VG to create LVs. Think of LVs as partitions, and the VG as a single huge disk. LVs are presented to the Linux OS as block devices, which you can manipulate using all the tools you would normally use, such as mkfs, fsck and mount.
Because a ‘partition’ in lvm no longer needs to be one contiguous block on the device, you can change its size dynamically. Using my earlier real-life example, I can create a new LV on /dev/sda3 like this:
```
root@razengan:~# lvcreate -n new -L 14G razengan /dev/sda3
  Logical volume "new" created
root@razengan:~# lvdisplay -m razengan/new
  --- Logical volume ---
  LV Name                /dev/razengan/new
  VG Name                razengan
  LV UUID                8CDVce-5Q6B-AvPc-MAs1-b2FX-vGlA-iWNshV
  LV Write Access        read/write
  LV Status              available
  # open                 0
  LV Size                14.00 GiB
  Current LE             3584
  Segments               1
  Allocation             inherit
  Read ahead sectors     auto
  - currently set to     256
  Block device           252:14

  --- Segments ---
  Logical extent 0 to 3583:
    Type                linear
    Physical volume     /dev/sda3
    Physical extents    9536 to 13119
```
You can see that lvm created a new LV using the parameters I specified. Since I specified it should be created on /dev/sda3, lvm has done so. Had I omitted a specific PV, lvm would have chosen the first available PV with free space. In the next example, let's assume that /dev/sda3 is out of free space, but /dev/sdc1 still has some. I can use that space and add it to the LV.
```
root@razengan:~# lvextend -L +10G razengan/new /dev/sdc1
  Extending logical volume new to 24.00 GiB
  Logical volume new successfully resized
root@razengan:~# lvdisplay -m razengan/new
  --- Logical volume ---
  LV Name                /dev/razengan/new
  VG Name                razengan
  ...
  LV Size                24.00 GiB
  Current LE             6144
  Segments               2
  ...
  --- Segments ---
  Logical extent 0 to 3583:
    Type                linear
    Physical volume     /dev/sda3
    Physical extents    9536 to 13119
  Logical extent 3584 to 6143:
    Type                linear
    Physical volume     /dev/sdc1
    Physical extents    153600 to 156159
```
As you can see, the LV now has 3584 of its 6144 LEs located on /dev/sda3, and the rest on /dev/sdc1. Without lvm, this would have been impossible. Note that normally you would also have to resize the file system that is present on an LV after resizing the LV itself, but I have omitted this from the example for brevity's sake.
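For completeness, the file-system step I omitted usually looks like the sketch below, assuming an ext4 file system; LVM_DEMO_LV is a hypothetical placeholder for an LV path like /dev/razengan/new. Recent lvm versions can also do both steps at once with lvextend -r:

```shell
if [ -z "${LVM_DEMO_LV:-}" ]; then
    echo "set LVM_DEMO_LV to a disposable logical volume first"
else
    lvextend -L +10G "$LVM_DEMO_LV"   # grow the LV by 10 GiB
    resize2fs "$LVM_DEMO_LV"          # then grow the ext4 file system into it
    # ...or combined in one step:
    # lvextend -r -L +10G "$LVM_DEMO_LV"
fi
```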
If I wanted to get picky, I could decide later that I don't want the second segment of LEs on /dev/sdc1, but on /dev/sdd1 instead. I could also decide to break up the second segment and create a third on a different PV altogether. This flexibility is one of the strong points of lvm.
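Moving segments between PVs is pvmove's job, and it works online, while the LV stays in use. A sketch, assuming both PVs belong to the same VG; LVM_DEMO_LV is a hypothetical placeholder for the LV name:

```shell
if [ -z "${LVM_DEMO_LV:-}" ]; then
    echo "set LVM_DEMO_LV to a disposable logical volume first"
else
    # Move this LV's extents off /dev/sdc1; naming a destination PV makes
    # them land on /dev/sdd1 instead of whichever PV has free space.
    pvmove -n "$LVM_DEMO_LV" /dev/sdc1 /dev/sdd1
fi
```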
I hope this article gives you an idea of what you can do with lvm. Without getting into the really advanced topics like moving extents, mirroring/striping and snapshots, I have already covered a lot of ground by explaining the core concepts. I will dedicate a future article to more hands-on examples of how to use lvm.
If you have any questions, leave them in the comments below.
1. As RAID-like as this feature is, there is no support for striping with parity as in RAID5 or RAID6, nor can striping and mirroring be stacked as in RAID10 or RAID0+1.
2. I recommend gdisk if you have a big disk (> 1.5TB), because of the limitations of traditional MBR-based partitioning. gdisk uses GPT partition tables, which lift many of the restrictions of MBR partition tables. gdisk also handles disks with 4K sectors, by aligning the partitions properly. GPT partition tables are not supported by older versions of Windows, but since you are using lvm, I don't see this as a problem.