Logical volume management

Article snapshot taken from Wikipedia under the Creative Commons Attribution-ShareAlike license.

In computer storage, logical volume management or LVM provides a method of allocating space on mass-storage devices that is more flexible than conventional partitioning schemes to store volumes. In particular, a volume manager can concatenate, stripe together or otherwise combine partitions (or block devices in general) into larger virtual partitions that administrators can re-size or move, potentially without interrupting system use.
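
As a rough sketch of the idea (not any particular volume manager's design or on-disk format), the following Python model treats a logical volume as an ordered list of fixed-size extents drawn from several physical devices, so the volume can span devices and grow without moving data that is already in place. All class, device and extent names here are hypothetical.

```python
# Conceptual model only: a logical volume is an ordered list of
# (physical volume, extent index) pairs, so it can span devices and grow
# without relocating extents that are already allocated.
EXTENT_SIZE = 4 * 1024 * 1024  # 4 MiB extents (the Linux LVM default size)

class PhysicalVolume:
    def __init__(self, name, size_bytes):
        self.name = name
        self.free = list(range(size_bytes // EXTENT_SIZE))  # free extent indices

class LogicalVolume:
    def __init__(self, name):
        self.name = name
        self.extents = []  # ordered (PhysicalVolume, extent index) pairs

    def grow(self, pvs, n_extents):
        """Concatenate n_extents more extents, taken from whichever PVs have room."""
        for _ in range(n_extents):
            pv = next(p for p in pvs if p.free)  # first PV with a free extent
            self.extents.append((pv, pv.free.pop(0)))

    def size_bytes(self):
        return len(self.extents) * EXTENT_SIZE

pvs = [PhysicalVolume("sda1", 100 * EXTENT_SIZE), PhysicalVolume("sdb1", 50 * EXTENT_SIZE)]
lv = LogicalVolume("data")
lv.grow(pvs, 120)  # larger than either underlying partition on its own
print(lv.size_bytes() // EXTENT_SIZE, "extents allocated")  # 120
```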

Volume management represents just one of many forms of storage virtualization; its implementation takes place in a layer in the device-driver stack of an operating system (OS) (as opposed to within storage devices or in a network). Most volume-manager implementations share the same basic design. They start with physical volumes (PVs), which can be either hard disks, hard disk partitions, or Logical Unit Numbers (LUNs) of an external storage device. Volume management treats each PV as being composed of

A common disk manager in the virtualized environment. Logical disks (vdisks) are created by the virtualization software or device and are mapped (made visible) to the required host or server, thus providing a common place or way for managing all volumes in the environment. Enhanced features are easy to provide in this environment: One of the major benefits of abstracting the host or server from

A common practice, this may render the snapshot inoperable. Snapshots can be useful for backing up self-consistent versions of volatile data such as table files from a busy database, or for rolling back large changes (such as an operating system upgrade) in a single operation. Snapshots have a similar effect to rendering storage quiescent, and are similar to the shadow copy (VSS) service in Microsoft Windows. Some Linux-based Live CDs also use snapshots to simulate read-write access to

A consistent state, usually in preparation for a backup or other maintenance. In software applications that modify information stored on disk, this generally involves flushing any outstanding writes; see buffering. With telecom applications, this generally involves allowing existing callers to finish their call but preventing new calls from initiating. Perhaps the best known support for this

A different PV; depending on the size of the LE, this can improve performance on large sequential reads by bringing to bear the combined read-throughput of multiple PVs. Administrators can grow LVs (by concatenating more LEs) or shrink them (by returning LEs to the pool). The concatenated LEs do not have to be contiguous. This allows LVs to grow without having to move already-allocated LEs. Some volume managers allow
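
A minimal illustration of striping, assuming a simple round-robin layout: logical extent i of a striped LV lands on PV i mod N, so a long sequential read draws on every device. The PV names below are made up.

```python
# Illustrative round-robin striping: logical extent i of a striped LV is
# placed on PV (i mod number_of_PVs), spreading a long sequential read
# across the read throughput of every device. PV names are hypothetical.
pvs = ["pv0", "pv1", "pv2"]

def stripe_target(le_index, pvs):
    """Return the PV that holds logical extent le_index in a striped layout."""
    return pvs[le_index % len(pvs)]

for le in range(6):
    print(f"LE {le} -> {stripe_target(le, pvs)}")
# LE 0 -> pv0, LE 1 -> pv1, LE 2 -> pv2, LE 3 -> pv0, ...
```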

A network, appear to be a single monolithic storage device and can be managed centrally. However, traditional storage controller management is still required; that is, the creation and maintenance of RAID arrays, including error and fault management. Once the abstraction layer is in place, only the virtualizer knows where the data actually resides on the physical medium. Backing out of a virtual storage environment therefore requires

A particular host or server. A new logical disk can be simply allocated from the available pool, or an existing disk can be expanded. Pooling also means that all the available storage capacity can potentially be used. In a traditional environment, an entire disk would be mapped to a host. This may be larger than is required, thus wasting space. In a virtual environment, the logical disk (LUN) is assigned

A read-only optical disc. Snapshots are handled by Time Machine; software-based RAID is provided by AppleRAID. Both are separate from Core Storage. Logical volumes can suffer from external fragmentation when the underlying storage devices do not allocate their PEs contiguously. This can reduce I/O performance on slow-seeking media such as magnetic disks and other rotational media. Volume managers that use fixed-size PEs, however, typically make PEs relatively large (for example, Linux LVM uses 4 MB by default) in order to amortize

A sequence of chunks called physical extents (PEs). Some volume managers (such as that in HP-UX and Linux) have PEs of a uniform size; others (such as that in Veritas) have variably-sized PEs that can be split and merged at will. Normally, PEs simply map one-to-one to logical extents (LEs). With mirroring, multiple PEs map to each LE. These PEs are drawn from a physical volume group (PVG),
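
The LE-to-PE relationship described above can be sketched as a small mapping, where a plain LV backs each LE with one PE and a mirrored LV backs each LE with one PE per mirror leg. This is an illustrative model only; the names and layout are not any real volume manager's metadata.

```python
# Illustrative LE -> PE mapping: normally one PE per LE; with mirroring,
# several PEs (on different PVs) back each LE so a write lands on every copy.
from dataclasses import dataclass

@dataclass(frozen=True)
class PhysicalExtent:
    pv: str      # which physical volume the extent lives on
    index: int   # extent number within that PV

# Plain (linear) LV: logical extent i maps to exactly one physical extent.
linear_map = {0: [PhysicalExtent("pv0", 10)],
              1: [PhysicalExtent("pv0", 11)]}

# Mirrored LV: each LE maps to one PE per mirror leg, drawn from different PVs.
mirrored_map = {0: [PhysicalExtent("pv0", 10), PhysicalExtent("pv1", 3)],
                1: [PhysicalExtent("pv0", 11), PhysicalExtent("pv1", 4)]}

def write(le_map, le_index, data):
    """A write to a logical extent is sent to every PE that backs it."""
    for pe in le_map[le_index]:
        print(f"write {len(data)} bytes to {pe.pv} extent {pe.index}")

write(mirrored_map, 0, b"hello")  # lands on both pv0 and pv1
```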

A set of same-sized PVs which act similarly to hard disks in a RAID1 array. PVGs are usually laid out so that they reside on different disks or data buses for maximum redundancy. The system pools LEs into a volume group (VG). The pooled LEs can then be concatenated together into virtual disk partitions called logical volumes or LVs. Systems can use LVs as raw block devices just like disk partitions: creating mountable file systems on them, or using them as swap storage. Striped LVs allocate each successive LE from

A single array (and possibly later divide the array into smaller volumes). Advanced disk arrays often feature cloning, snapshots and remote replication. Generally these devices do not provide the benefits of data migration or replication across heterogeneous storage, as each vendor tends to use their own proprietary protocols.

Quiesce

To quiesce is to pause or alter a device or application to achieve

A single block of information is addressed using a LUN identifier and an offset within that LUN – known as a logical block address (LBA). The virtualization software or device is responsible for maintaining a consistent view of all the mapping information for the virtualized storage. This mapping information is often called metadata and is stored as a mapping table. The address space may be limited by
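
A toy version of such a mapping table, assuming a fixed chunk-sized granularity: the virtualizer looks up the (virtual LUN, chunk) pair and rewrites the request to the corresponding physical LUN and offset. The chunk size, LUN names and entry layout are illustrative assumptions, not any vendor's metadata format.

```python
# Hypothetical mapping table: a (virtual LUN, chunk index) pair is translated
# to a (physical LUN, byte offset) pair. The chunk size is the mapping
# granularity; names and layout are invented for illustration.
CHUNK = 1 << 20  # 1 MiB granularity

mapping = {
    ("vdisk0", 0): ("lun7", 0),
    ("vdisk0", 1): ("lun7", CHUNK),
    ("vdisk0", 2): ("lun9", 5 * CHUNK),  # the vdisk need not be physically contiguous
}

def translate(vlun, offset_bytes):
    """Translate a virtual LUN + offset (its LBA) into the physical location."""
    chunk, within = divmod(offset_bytes, CHUNK)
    plun, poffset = mapping[(vlun, chunk)]
    return plun, poffset + within

print(translate("vdisk0", 2 * CHUNK + 4096))  # -> ('lun9', 5 * CHUNK + 4096)
```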

A single vendor's device (as with similar capabilities provided by specific storage controllers) and are in fact possible across different vendors' devices. Data replication techniques are not limited to virtualization appliances and as such are not described here in detail. However, most implementations will provide some or all of these replication services. When storage is virtualized, replication services must be implemented above

A special purpose computer designed to provide storage capacity along with advanced data protection features. Disk drives are only one element within a storage system, along with hardware and special purpose embedded software within the system. Storage systems can provide either block accessed storage or file accessed storage. Block access is typically delivered over Fibre Channel, iSCSI, SAS, FICON or other protocols. File access

Is common to utilize three layers of virtualization. Some implementations do not use a mapping table, and instead calculate locations using an algorithm. These implementations utilize dynamic methods to calculate the location on access, rather than storing the information in a mapping table. The virtualization software or device uses the metadata to re-direct I/O requests. It will receive an incoming I/O request containing information about
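
For the table-less, algorithmic variant, the physical location can be computed directly from the logical address; the sketch below assumes plain striping across a fixed set of back-end LUNs rather than any specific product's algorithm, and all names are hypothetical.

```python
# Sketch of the table-less alternative: the physical location is computed from
# the logical offset by a fixed formula (here, simple striping across known
# back-end LUNs), so no per-chunk mapping entry has to be stored or looked up.
CHUNK = 1 << 20
BACKENDS = ["lun0", "lun1", "lun2", "lun3"]  # hypothetical back-end LUNs

def locate(offset_bytes):
    """Compute, rather than look up, where a virtual offset lives."""
    chunk, within = divmod(offset_bytes, CHUNK)
    backend = BACKENDS[chunk % len(BACKENDS)]            # which back-end LUN
    offset = (chunk // len(BACKENDS)) * CHUNK + within   # offset on that LUN
    return backend, offset

print(locate(9 * CHUNK + 123))  # -> ('lun1', 2 * CHUNK + 123)
```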

Is hot-pluggable, this allows engineers to upgrade or replace storage without system downtime. A hybrid volume is any volume that intentionally and opaquely makes use of two separate physical volumes. For instance, a workload may consist of random seeks, so an SSD may be used to permanently store frequently used or recently written data, while using higher-capacity rotational magnetic media for long-term storage of rarely needed data. On Linux, bcache or dm-cache may be used for this purpose, while Fusion Drive may be used on OS X. ZFS also implements this functionality at
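
The placement decision in a hybrid volume can be pictured with a toy policy: extents that have been accessed often enough are served from the fast device, everything else from the slow one. The threshold and device names below are invented for illustration and are far simpler than what bcache, dm-cache or ZFS actually do.

```python
# Toy placement policy for a hybrid volume: extents touched often enough are
# served from the small fast device, everything else from the large slow one.
# Threshold and device names are invented for illustration.
from collections import Counter

access_count = Counter()
HOT_THRESHOLD = 3  # accesses before an extent is considered "hot"

def device_for(extent):
    access_count[extent] += 1
    return "ssd" if access_count[extent] >= HOT_THRESHOLD else "hdd"

for e in [7, 7, 7, 7, 42]:
    print(f"extent {e} -> {device_for(e)}")
# extent 7 ends up on the SSD after repeated access; extent 42 stays on the HDD
```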

Is limited to in-band and symmetric virtualization software and devices. However, these implementations also directly influence the latency of an I/O request (cache miss), due to the I/O having to flow through the software or device. Assuming the software or device is efficiently designed, this impact should be minimal when compared with the latency associated with physical disk accesses. Due to the nature of virtualization,

Is often provided using NFS or SMB protocols. Within the context of a storage system, there are two primary types of virtualization that can occur: Virtualization of storage helps achieve location independence by abstracting the physical location of the data. The virtualization system presents to the user a logical space for data storage and handles the process of mapping it to the actual physical location. It

Is possible to have multiple layers of virtualization or mapping. The output of one layer of virtualization can then be used as the input for a higher layer of virtualization. Virtualization maps space between back-end resources and front-end resources. In this instance, "back-end" refers to a logical unit number (LUN) that is not presented to a computer, or host system for direct use. A "front-end" LUN or volume

Is presented to a host or computer system for use. The actual form of the mapping will depend on the chosen implementation. Some implementations may limit the granularity of the mapping which may limit the capabilities of the device. Typical granularities range from a single physical disk down to some small subset (multiples of megabytes or gigabytes) of the physical disk. In a block-based storage environment,

Is required during the migration, and how quickly the previous location is marked as free. The smaller the granularity, the faster the update, the less extra space required, and the quicker the old storage can be freed up. There are many day-to-day tasks a storage administrator has to perform that can be simply and concurrently performed using data migration techniques. Utilization can be increased by virtue of

Is written to. This preserves an old version of the LV, the snapshot, which may be later reconstructed by overlaying the copy-on-write table atop the current LV. Unless the volume management supports both thin provisioning and discard, once an LE in the origin volume is written to, it is permanently stored in the snapshot volume. If the snapshot volume was made smaller than its origin, which is
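
A minimal model of this copy-on-write scheme, with invented names and data: the first write to an origin LE first preserves the old contents in a copy-on-write table, and reading the snapshot prefers the preserved copy over the current origin.

```python
# Minimal copy-on-write snapshot model: the first write to an origin LE saves
# the old contents in a CoW table; the snapshot view is that table overlaid
# on the current origin. Data and extent numbers are illustrative.
origin = {0: b"A", 1: b"B", 2: b"C"}  # LE index -> current data
cow = {}                              # LE index -> data as of snapshot time

def write(le, data):
    if le not in cow:          # first write to this LE since the snapshot:
        cow[le] = origin[le]   # preserve the old contents
    origin[le] = data

def read_snapshot(le):
    """The snapshot returns the preserved copy if one exists, else the origin."""
    return cow.get(le, origin[le])

write(1, b"X")
print(origin[1], read_snapshot(1))  # b'X' b'B' -- origin changed, snapshot did not
```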

The file system level, by allowing administrators to configure multi-level read/write caching. Hybrid volumes present a similar concept to hybrid drives, which also combine solid-state storage and rotational magnetic media. Some volume managers also implement snapshots by applying copy-on-write to each LE. In this scheme, the volume manager will copy the LE to a copy-on-write table just before it

The ability to provide certain fast update functions, such as point-in-time copies and caching where super-fast updates are required to ensure minimal latency to the actual I/O being performed. In some implementations the performance of the physical storage can actually be improved, mainly due to caching. Caching, however, requires the visibility of the data contained within the I/O request and so

The actual storage is the ability to migrate data while maintaining concurrent I/O access. The host only knows about the logical disk (the mapped LUN), and so any changes to the metadata mapping are transparent to the host. This means the actual data can be moved or replicated to another physical location without affecting the operation of any client. When the data has been copied or moved, the metadata can simply be updated to point to

The capacity needed to maintain the mapping table. The level of granularity and the total addressable space both directly impact the size of the metadata, and hence the mapping table. For this reason, it is common to have trade-offs between the amount of addressable capacity and the granularity or access granularity. One common method to address these limits is to use multiple levels of virtualization. In several storage systems deployed today, it
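
A back-of-the-envelope example of that trade-off, assuming a hypothetical 16-byte mapping-table entry: for a fixed addressable capacity, shrinking the granularity multiplies the number of entries and hence the metadata that must be kept consistent.

```python
# Back-of-the-envelope: finer granularity means more mapping-table entries for
# the same addressable capacity. The 16-byte entry size is an assumption.
TiB = 1 << 40
ENTRY_BYTES = 16

def metadata_size(capacity_bytes, granularity_bytes):
    entries = capacity_bytes // granularity_bytes
    return entries, entries * ENTRY_BYTES

for gran in (1 << 20, 64 << 20, 1 << 30):  # 1 MiB, 64 MiB, 1 GiB chunks
    entries, size = metadata_size(100 * TiB, gran)
    print(f"100 TiB at {gran >> 20} MiB granularity: "
          f"{entries:,} entries, {size / (1 << 30):.2f} GiB of metadata")
```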

The capacity required by the using host. Storage can be assigned where it is needed at that point in time, reducing the need to guess how much a given host will need in the future. Using thin provisioning, the administrator can create a very large thin-provisioned logical disk, so the using system thinks it has a very large disk from day one. With storage virtualization, multiple independent storage devices, even if scattered across

The chosen implementation. Host-based virtualization requires additional software running on the host, as a privileged task or process. In some cases volume management is built into the operating system, and in other instances it is offered as a separate product. Volumes (LUNs) presented to the host system are handled by a traditional physical device driver. However, a software layer (the volume manager) resides above

The cost of these seeks. With implementations that are solely volume management, such as Core Storage and Linux LVM, separating and abstracting away volume management from the file system loses the ability to easily make storage decisions for particular files or directories. For example, if a certain directory (but not the entire file system) is to be permanently moved to faster storage, both

The disk device driver intercepts the I/O requests, and provides the metadata lookup and I/O mapping. Most modern operating systems have some form of logical volume management built in (in Linux called Logical Volume Manager or LVM; in Solaris and FreeBSD, ZFS's zpool layer; in Windows called Logical Disk Manager or LDM) that performs virtualization tasks. Note: Host-based volume managers were in use long before

The file system layout and the underlying volume management layer need to be traversed. For example, on Linux, one would need to manually determine the offset of a file's contents within the file system and then manually pvmove the extents (along with data not related to that file) to the faster storage. Having volume and file management implemented within the same subsystem, instead of having them implemented as separate subsystems, makes

The highest level of interoperability requirements as they have to interoperate with all devices, storage and hosts. Complexity affects several areas: Information is one of the most valuable assets in today's business environments. Once virtualized, the metadata are the glue in the middle. If the metadata are lost, so is all the actual data as it would be virtually impossible to reconstruct

The implementation chosen. For example, virtualization implemented within a storage controller adds no extra overhead to host-based interoperability, but will require additional support of other storage controllers if they are to be virtualized by the same software. Switch-based virtualization may not require specific host interoperability if it uses packet-cracking techniques to redirect the I/O. Network-based appliances have

The information is read or written, bandwidth is less of a concern as the metadata are a tiny fraction of the actual I/O size. In-band, symmetric flow-through designs are directly limited by their processing power and connectivity bandwidths. Most implementations provide some form of scale-out model, where the inclusion of additional software or device instances provides increased scalability and potentially increased bandwidth. The performance and scalability characteristics are directly influenced by

The location of the data in terms of the logical disk (vdisk) and translates this into a new I/O request to the physical disk location. For example, the virtualization device may: Most implementations allow for heterogeneous management of multi-vendor storage devices within the scope of a given implementation's support matrix. This means that the following capabilities are not limited to

The logical drives without the mapping information. Any implementation must ensure its protection with appropriate levels of back-ups and replicas. It is important to be able to reconstruct the metadata in the event of a catastrophic failure. The metadata management also has implications on performance. Any virtualization software or device must be able to keep all the copies of the metadata atomic and quickly updateable. Some implementations restrict

The mapping of logical to physical requires some processing power and lookup tables. Therefore, every implementation will add some small amount of latency. In addition to response time concerns, throughput has to be considered. The bandwidth into and out of the metadata lookup software directly impacts the available system bandwidth. In asymmetric implementations, where the metadata lookup occurs before

The new location, therefore freeing up the physical storage at the old location. The process of moving the physical location is known as data migration. Most implementations allow for this to be done in a non-disruptive manner, that is, concurrently while the host continues to perform I/O to the logical disk (or LUN). The mapping granularity dictates how quickly the metadata can be updated, how much extra capacity
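
A sketch of that migration sequence at chunk granularity, with invented names and values: copy the chunk to its new physical location, repoint the mapping entry, then free the old space; the host keeps addressing the same virtual chunk throughout.

```python
# Sketch of non-disruptive migration at chunk granularity: copy the chunk,
# repoint the mapping entry, free the old location. The host-visible address
# (the virtual chunk) never changes. Names and values are invented.
mapping = {("vdisk0", 0): ("old_lun", 0)}  # virtual chunk -> physical location
storage = {("old_lun", 0): b"payload"}     # stand-in for physical blocks

def migrate(vchunk, new_lun, new_offset):
    old_loc = mapping[vchunk]
    storage[(new_lun, new_offset)] = storage[old_loc]  # 1. copy the data
    mapping[vchunk] = (new_lun, new_offset)            # 2. update the metadata
    del storage[old_loc]                               # 3. free the old location

migrate(("vdisk0", 0), "new_lun", 128)
print(mapping[("vdisk0", 0)])  # ('new_lun', 128) -- same virtual chunk, new home
```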

The overall process theoretically simpler.

Storage virtualization

In computer science, storage virtualization is "the process of presenting a logical view of the physical storage resources to" a host computer system, "treating all storage media (hard disk, optical disk, tape, etc.) in the enterprise as a single pool of storage." A "storage system" is also known as a storage array, disk array, or filer. Storage systems typically use special hardware and software along with disk drives in order to provide very fast and reliable storage for computing and data processing. Storage systems are complex, and may be thought of as

The pooling, migration, and thin provisioning services. This allows users to avoid over-buying and over-provisioning storage solutions. In other words, storage from a shared pool can be easily and quickly allocated as it is needed, avoiding the constraints on storage capacity that often hinder application performance. When all available storage capacity is pooled, system administrators no longer have to search for disks that have free space to allocate to

The re-sizing of LVs in either direction while online. Changing the size of the LV does not necessarily change the size of a file system on it; it merely changes the size of its containing space. A file system that can be resized online is recommended in that it allows the system to adjust its storage on-the-fly without interrupting applications. PVs and LVs cannot be shared between or span different VGs (although some volume managers may allow moving them at will between VGs on

The reconstruction of the logical disks as contiguous disks that can be used in a traditional manner. Most implementations will provide some form of back-out procedure, and with the data migration services it is at least possible, but time-consuming. Interoperability is a key enabler to any virtualization software or device. It applies to the actual physical storage controllers and the hosts, their operating systems, multi-pathing software and connectivity hardware. Interoperability requirements differ based on

The same host). This allows administrators to conveniently bring VGs online, take them offline or move them between host systems as a single administrative unit. VGs can grow their storage pool by absorbing new PVs or shrink by retracting from PVs. This may involve moving already-allocated LEs out of the PV. Most volume managers can perform this movement online; if the underlying hardware

The software or device that is performing the virtualization. This is true because it is only above the virtualization layer that a true and consistent image of the logical disk (vdisk) can be copied. This limits the services that some implementations can implement – or makes them seriously difficult to implement. If the virtualization is implemented in the network or higher, this renders any replication services provided by

The term storage virtualization had been coined. Like host-based virtualization, several categories have existed for years and have only recently been classified as virtualization. Simple data storage devices, like single hard disk drives, do not provide any virtualization. But even the simplest disk arrays provide a logical to physical abstraction, as they use RAID schemes to join multiple disks in

The underlying storage controllers useless. The physical storage resources are aggregated into storage pools, from which the logical storage is created. More storage systems, which may be heterogeneous in nature, can be added as and when needed, and the virtual storage space will scale up by the same amount. This process is fully transparent to the applications using the storage infrastructure. The software or device providing storage virtualization becomes
