import lvm2-2.03.09-5.el8_3.2

This commit is contained in:
CentOS Sources 2021-02-16 02:43:20 -05:00 committed by Andrew Lukoshko
parent da888a430e
commit 45bd5eb560
3 changed files with 576 additions and 1 deletions

@@ -0,0 +1,481 @@
man/lvmvdo.7_main | 321 +++++++++++++++++++++++++++++-------------------------
1 file changed, 173 insertions(+), 148 deletions(-)
diff --git a/man/lvmvdo.7_main b/man/lvmvdo.7_main
index 582f7a8..39dee39 100644
--- a/man/lvmvdo.7_main
+++ b/man/lvmvdo.7_main
@@ -1,32 +1,29 @@
.TH "LVMVDO" "7" "LVM TOOLS #VERSION#" "Red Hat, Inc" "\""
.SH NAME
-lvmvdo \(em EXPERIMENTAL LVM Virtual Data Optimizer support
-
+lvmvdo \(em Support for Virtual Data Optimizer in LVM
.SH DESCRIPTION
-
-VDO (which includes kvdo and vdo) is software that provides inline
+VDO is software that provides inline
block-level deduplication, compression, and thin provisioning capabilities
for primary storage.
Deduplication is a technique for reducing the consumption of storage
resources by eliminating multiple copies of duplicate blocks. Compression
-takes the individual unique blocks and shrinks them with coding
-algorithms; these reduced blocks are then efficiently packed together into
-physical blocks. Thin provisioning manages the mapping from LBAs presented
-by VDO to where the data has actually been stored, and also eliminates any
-blocks of all zeroes.
-
-With deduplication, instead of writing the same data more than once each
-duplicate block is detected and recorded as a reference to the original
+takes the individual unique blocks and shrinks them. These reduced blocks are then efficiently packed together into
+physical blocks. Thin provisioning manages the mapping from logical blocks
+presented by VDO to where the data has actually been physically stored,
+and also eliminates any blocks of all zeroes.
+
+With deduplication, instead of writing the same data more than once, VDO detects and records each
+duplicate block as a reference to the original
block. VDO maintains a mapping from logical block addresses (used by the
storage layer above VDO) to physical block addresses (used by the storage
layer under VDO). After deduplication, multiple logical block addresses
may be mapped to the same physical block address; these are called shared
blocks and are reference-counted by the software.
-With VDO's compression, multiple blocks (or shared blocks) are compressed
-with the fast LZ4 algorithm, and binned together where possible so that
+With compression, VDO compresses multiple blocks (or shared blocks)
+with the fast LZ4 algorithm, and bins them together where possible so that
multiple compressed blocks fit within a 4 KB block on the underlying
storage. The mapping is from an LBA to a physical block address and an index
within it for the desired compressed data. All compressed blocks are individually
@@ -39,65 +36,55 @@ allocated for storing the new block data to ensure that other logical
block addresses that are mapped to the shared physical block are not
modified.
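As a toy sketch of the shared-block scheme described above (illustrative only — class and field names are invented here, and this is not VDO's actual data structure or on-disk format), deduplication by content hash with reference counting can be modeled like this:

```python
import hashlib

class DedupStore:
    """Toy deduplicated block store: identical content shares one
    physical block, tracked with a reference count."""
    def __init__(self):
        self.lba_to_pba = {}  # logical block address -> physical block address
        self.pba_data = {}    # physical block address -> (data, refcount)
        self.by_hash = {}     # content hash -> physical block address

    def write(self, lba, data):
        h = hashlib.sha256(data).hexdigest()
        if h in self.by_hash:              # duplicate: share the existing block
            pba = self.by_hash[h]
            d, rc = self.pba_data[pba]
            self.pba_data[pba] = (d, rc + 1)
        else:                              # new content: allocate a new block
            pba = len(self.pba_data)
            self.pba_data[pba] = (data, 1)
            self.by_hash[h] = pba
        self.lba_to_pba[lba] = pba

s = DedupStore()
s.write(0, b"A" * 4096)
s.write(1, b"A" * 4096)  # duplicate: both LBAs map to one physical block
s.write(2, b"B" * 4096)
print(len(s.pba_data))   # 2 physical blocks back 3 logical writes
```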
-For usage of VDO with \fBlvm\fP(8) standard VDO userspace tools
-\fBvdoformat\fP(8) and currently non-standard kernel VDO module
-"\fIkvdo\fP" needs to be installed on the system.
+To use VDO with \fBlvm\fP(8), you must install the standard VDO user-space tools
+\fBvdoformat\fP(8) and the currently non-standard kernel VDO module
+"\fIkvdo\fP".
The "\fIkvdo\fP" module implements fine-grained storage virtualization,
-thin provisioning, block sharing, and compression;
-the "\fIuds\fP" module provides memory-efficient duplicate
-identification. The userspace tools include \fBvdostats\fP(8)
-for extracting statistics from those volumes.
-
-
-.SH VDO Terms
-
+thin provisioning, block sharing, and compression.
+The "\fIuds\fP" module provides memory-efficient duplicate
+identification. The user-space tools include \fBvdostats\fP(8)
+for extracting statistics from VDO volumes.
+.SH VDO TERMS
.TP
VDODataLV
.br
VDO data LV
.br
-large hidden LV with suffix _vdata created in a VG.
+A large hidden LV with the _vdata suffix. It is created in a VG and
.br
-used by VDO target to store all data and metadata blocks.
-
+used by the VDO kernel target to store all data and metadata blocks.
.TP
VDOPoolLV
.br
VDO pool LV
.br
-maintains virtual for LV(s) stored in attached VDO data LV
-and it has same size.
+A pool for virtual VDOLV(s) with the size of used VDODataLV.
.br
-contains VDOLV(s) (currently supports only a single VDOLV).
-
+Only a single VDOLV is currently supported.
.TP
VDOLV
.br
VDO LV
.br
-created from VDOPoolLV
+Created from VDOPoolLV.
.br
-appears blank after creation
-
-.SH VDO Usage
-
+Appears blank after creation.
+.SH VDO USAGE
The primary methods for using VDO with lvm2:
-
.SS 1. Create VDOPoolLV with VDOLV
-
-Create an VDOPoolLV that will holds VDO data together with
-virtual size VDOLV, that user can use. When the virtual size
-is not specified, then such LV is created with maximum size that
-always fits into data volume even if there cannot happen any
-deduplication and compression
-(i.e. it can hold uncompressible content of /dev/urandom).
-When the name of VDOPoolLV is not specified, it tales name from
-sequence of vpool0, vpool1 ...
-
-Note: As the performance of TRIM/Discard operation is slow for large
-volumes of VDO type, please try to avoid sending discard requests unless
-necessary as it may take considerable amount of time to finish discard
+Create a VDOPoolLV that will hold VDO data, and a
+virtual size VDOLV that the user can use. If you do not specify the virtual size,
+then the VDOLV is created with the maximum size that
+always fits into the data volume even if no
+deduplication or compression can happen
+(i.e. it can hold the incompressible content of /dev/urandom).
+If you do not specify the name of VDOPoolLV, it is taken from
+the sequence of vpool0, vpool1 ...
+
+Note: The performance of TRIM/Discard operations is slow for large
+volumes of VDO type. Please try to avoid sending discard requests unless
+necessary because it might take a considerable amount of time to finish the discard
operation.
.nf
@@ -106,22 +93,19 @@ operation.
.fi
.I Example
-.br
.nf
# lvcreate --type vdo -n vdo0 -L 10G -V 100G vg/vdopool0
# mkfs.ext4 -E nodiscard /dev/vg/vdo0
.fi
-
-.SS 2. Create VDOPoolLV and convert existing LV into VDODataLV
-
-Convert an already created/existing LV into a volume that can hold
-VDO data and metadata (a volume reference by VDOPoolLV).
-User will be prompted to confirm such conversion as it is \fBIRREVERSIBLY
-DESTROYING\fP content of such volume, as it's being immediately
-formatted by \fBvdoformat\fP(8) as VDO pool data volume. User can
-specify virtual size of associated VDOLV with this VDOPoolLV.
-When the virtual size is not specified, it will set to the maximum size
-that can keep 100% uncompressible data there.
+.SS 2. Create VDOPoolLV from conversion of an existing LV into VDODataLV
+Convert an already created or existing LV into a volume that can hold
+VDO data and metadata (volume referenced by VDOPoolLV).
+You will be prompted to confirm such conversion because it \fBIRREVERSIBLY
+DESTROYS\fP the content of such volume and the volume is immediately
+formatted by \fBvdoformat\fP(8) as a VDO pool data volume. You can
+specify the virtual size of the VDOLV associated with this VDOPoolLV.
+If you do not specify the virtual size, it will be set to the maximum size
+that can keep 100% incompressible data there.
.nf
.B lvconvert --type vdo-pool -n VDOLV -V VirtualSize VG/VDOPoolLV
@@ -129,23 +113,20 @@ that can keep 100% uncompressible data there.
.fi
.I Example
-.br
.nf
-# lvconvert --type vdo-pool -n vdo0 -V10G vg/existinglv
+# lvconvert --type vdo-pool -n vdo0 -V10G vg/ExistingLV
.fi
-
-.SS 3. Change default setting used for creating VDOPoolLV
-
-VDO allows to set large variety of option. Lots of these setting
-can be specified by lvm.conf or profile settings. User can prepare
-number of different profiles and just specify profile file name.
-Check output of \fBlvmconfig --type full\fP for detailed description
-of all individual vdo settings.
+.SS 3. Change the default settings used for creating a VDOPoolLV
+VDO allows you to set a large variety of options. Many of these settings
+can be specified in lvm.conf or profile settings. You can prepare
+a number of different profiles in the #DEFAULT_SYS_DIR#/profile directory
+and just specify the profile file name.
+Check the output of \fBlvmconfig --type full\fP for a detailed description
+of all individual VDO settings.
.I Example
-.br
.nf
-# cat <<EOF > vdo.profile
+# cat <<EOF > #DEFAULT_SYS_DIR#/profile/vdo_create.profile
allocation {
vdo_use_compression=1
vdo_use_deduplication=1
@@ -169,13 +150,11 @@ allocation {
}
EOF
-# lvcreate --vdo -L10G --metadataprofile vdo.profile vg/vdopool0
+# lvcreate --vdo -L10G --metadataprofile vdo_create vg/vdopool0
# lvcreate --vdo -L10G --config 'allocation/vdo_cpu_threads=4' vg/vdopool1
.fi
-
-.SS 4. Change compression and deduplication of VDOPoolLV
-
-Disable or enable compression and deduplication for VDO pool LV
+.SS 4. Change the compression and deduplication of a VDOPoolLV
+Disable or enable compression and deduplication for a VDOPoolLV
(the volume that maintains all VDO LV(s) associated with it).
.nf
@@ -183,24 +162,20 @@ Disable or enable compression and deduplication for VDO pool LV
.fi
.I Example
-.br
.nf
-# lvchange --compression n vg/vdpool0
-# lvchange --deduplication y vg/vdpool1
+# lvchange --compression n vg/vdopool0
+# lvchange --deduplication y vg/vdopool1
.fi
-
-.SS 4. Checking usage of VDOPoolLV
-
-To quickly check how much data of VDOPoolLV are already consumed
-use \fBlvs\fP(8). Field Data% will report how much data occupies
-content of virtual data for VDOLV and how much space is already
-consumed with all the data and metadata blocks in VDOPoolLV.
-For a detailed description use \fBvdostats\fP(8) command.
+.SS 5. Checking the usage of VDOPoolLV
+To quickly check how much data on a VDOPoolLV is already consumed,
+use \fBlvs\fP(8). The Data% field reports how much of the virtual data of
+the VDOLV is occupied, and how much space is already
+consumed by all the data and metadata blocks in the VDOPoolLV.
+For a detailed description, use the \fBvdostats\fP(8) command.
Note: \fBvdostats\fP(8) currently understands only /dev/mapper device names.
.I Example
-.br
.nf
# lvcreate --type vdo -L10G -V20G -n vdo0 vg/vdopool0
# mkfs.ext4 -E nodiscard /dev/vg/vdo0
@@ -211,40 +186,43 @@ Note: \fBvdostats\fP(8) currently understands only /dev/mapper device names.
vdopool0 vg dwi-ao---- 10.00g 30.16
[vdopool0_vdata] vg Dwi-ao---- 10.00g
-# vdostats --all /dev/mapper/vg-vdopool0
+# vdostats --all /dev/mapper/vg-vdopool0-vpool
/dev/mapper/vg-vdopool0 :
version : 30
release version : 133524
data blocks used : 79
...
.fi
+.SS 6. Extending the VDOPoolLV size
+You can add more space to hold VDO data and metadata by
+extending the VDODataLV using the commands
+\fBlvresize\fP(8) and \fBlvextend\fP(8).
+The extension needs to add at least one new VDO slab. You can configure
+the slab size with the \fBallocation/vdo_slab_size_mb\fP setting.
-.SS 4. Extending VDOPoolLV size
+You can also enable automatic size extension of a monitored VDOPoolLV
+with the \fBactivation/vdo_pool_autoextend_percent\fP and
+\fBactivation/vdo_pool_autoextend_threshold\fP settings.
-Adding more space to hold VDO data and metadata can be made via
-extension of VDODataLV with commands
-\fBlvresize\fP(8), \fBlvextend\fP(8).
+Note: You cannot reduce the size of a VDOPoolLV.
-Note: Size of VDOPoolLV cannot be reduced.
+Note: You cannot change the size of a cached VDOPoolLV.
.nf
.B lvextend -L+AddingSize VG/VDOPoolLV
.fi
.I Example
-.br
.nf
# lvextend -L+50G vg/vdopool0
# lvresize -L300G vg/vdopool1
.fi
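The automatic extension mentioned above is driven by the two activation settings named in this section. A metadata profile fragment might look like this (the threshold and percent values below are illustrative examples, not documented defaults):

```
activation {
	# extend the monitored VDOPoolLV by 20% once it is more than 70% full
	vdo_pool_autoextend_threshold = 70
	vdo_pool_autoextend_percent = 20
}
```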
+.SS 7. Extending or reducing the VDOLV size
+You can extend or reduce a virtual VDO LV as a standard LV with the
+\fBlvresize\fP(8), \fBlvextend\fP(8), and \fBlvreduce\fP(8) commands.
-.SS 4. Extending or reducing VDOLV size
-
-VDO LV can be extended or reduced as standard LV with commands
-\fBlvresize\fP(8), \fBlvextend\fP(8), \fBlvreduce\fP(8).
-
-Note: Reduction needs to process TRIM for reduced disk area
-to unmap used data blocks from VDOPoolLV and it may take
+Note: The reduction needs to process TRIM for the reduced disk area
+to unmap used data blocks from the VDOPoolLV, which might take
a long time.
.nf
@@ -253,74 +231,121 @@ a long time.
.fi
.I Example
-.br
.nf
# lvextend -L+50G vg/vdo0
# lvreduce -L-50G vg/vdo1
# lvresize -L200G vg/vdo2
.fi
-
-.SS 5. Component activation of VDODataLV
-
-VDODataLV can be activated separately as component LV for examination
-purposes. It activates data LV in read-only mode and cannot be modified.
-If the VDODataLV is active as component, any upper LV using this volume CANNOT
-be activated. User has to deactivate VDODataLV first to continue to use VDOPoolLV.
+.SS 8. Component activation of a VDODataLV
+You can activate a VDODataLV separately as a component LV for examination
+purposes. It activates the data LV in read-only mode, and the data LV cannot be modified.
+If the VDODataLV is active as a component, any upper LV using this volume CANNOT
+be activated. You have to deactivate the VDODataLV first to continue to use the VDOPoolLV.
.I Example
-.br
.nf
# lvchange -ay vg/vpool0_vdata
# lvchange -an vg/vpool0_vdata
.fi
-
-
-.SH VDO Topics
-
+.SH VDO TOPICS
.SS 1. Stacking VDO
-
-User can convert/stack VDO with existing volumes.
-
-.SS 2. VDO on top of raid
-
-Using Raid type LV for VDO Data LV.
+You can convert or stack a VDOPoolLV with these currently supported
+volume types: linear, stripe, raid, and cache with cachepool.
+.SS 2. VDOPoolLV on top of raid
+Using a raid type LV for a VDODataLV.
.I Example
-.br
.nf
-# lvcreate --type raid1 -L 5G -n vpool vg
-# lvconvert --type vdo-pool -V 10G vg/vpool
+# lvcreate --type raid1 -L 5G -n vdopool vg
+# lvconvert --type vdo-pool -V 10G vg/vdopool
.fi
+.SS 3. Caching a VDODataLV or a VDOPoolLV
+Caching a VDODataLV (a VDOPoolLV is also accepted) provides a mechanism
+to accelerate reads and writes of already compressed and deduplicated
+data blocks together with the VDO metadata.
-.SS 3. Caching VDODataLV, VDOPoolLV
-
-Cache VDO Data LV (accepts also VDOPoolLV.
+A cached VDO data LV cannot currently be resized. Also, the
+threshold-based automatic resize will not work.
.I Example
-.br
.nf
-# lvcreate -L 5G -V 10G -n vdo1 vg/vpool
-# lvcreate --type cache-pool -L 1G -n cpool vg
-# lvconvert --cache --cachepool vg/cpool vg/vpool
-# lvconvert --uncache vg/vpool
+# lvcreate --type vdo -L 5G -V 10G -n vdo1 vg/vdopool
+# lvcreate --type cache-pool -L 1G -n cachepool vg
+# lvconvert --cache --cachepool vg/cachepool vg/vdopool
+# lvconvert --uncache vg/vdopool
.fi
-
-.SS 3. Caching VDOLV
-
-Cache VDO LV.
+.SS 4. Caching a VDOLV
+A VDO LV cache allows you to 'cache' a device for better performance before
+it hits the processing of the VDO Pool LV layer.
.I Example
-.br
.nf
-# lvcreate -L 5G -V 10G -n vdo1 vg/vpool
-# lvcreate --type cache-pool -L 1G -n cpool vg
-# lvconvert --cache --cachepool vg/cpool vg/vdo1
+# lvcreate -L 5G -V 10G -n vdo1 vg/vdopool
+# lvcreate --type cache-pool -L 1G -n cachepool vg
+# lvconvert --cache --cachepool vg/cachepool vg/vdo1
# lvconvert --uncache vg/vdo1
.fi
-
-.br
-
-\&
+.SS 5. Usage of Discard/TRIM with a VDOLV
+You can discard data on a VDO LV and reduce used blocks on a VDOPoolLV.
+However, the current performance of discard operations is still not optimal
+and takes a considerable amount of time and CPU.
+Unless you really need it, you should avoid using discard.
+
+When a block device is going to be rewritten,
+block will be automatically reused for new data.
+Discard is useful in situations when it is known that the given portion of a VDO LV
+is not going to be used and the discarded space can be used for block
+provisioning in other regions of the VDO LV.
+For the same reason, you should avoid using mkfs with discard for
+a freshly created VDO LV to save a lot of time that this operation would
+take otherwise as device after create empty.
+.SS 6. Memory usage
+The VDO target requires 370 MiB of RAM plus an additional 268 MiB
+per each 1 TiB of physical storage managed by the volume.
+
+UDS requires a minimum of 250 MiB of RAM,
+which is also the default amount that deduplication uses.
+
+The memory required for the UDS index is determined by the index type
+and the required size of the deduplication window and
+is controlled by the \fBallocation/vdo_use_sparse_index\fP setting.
+
+When UDS sparse indexing is enabled, it relies on the temporal locality of data
+and attempts to retain only the most relevant index entries in memory. It
+can maintain a deduplication window that is ten times larger
+than with a dense index while using the same amount of memory.
+
+Although the sparse index provides the greatest coverage,
+the dense index provides more deduplication advice.
+For most workloads, given the same amount of memory,
+the difference in deduplication rates between dense
+and sparse indexes is negligible.
+
+A dense index with 1 GiB of RAM maintains a 1 TiB deduplication window,
+while a sparse index with 1 GiB of RAM maintains a 10 TiB deduplication window.
+In general, 1 GiB is sufficient for 4 TiB of physical space with
+a dense index and 40 TiB with a sparse index.
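The RAM figures above combine into a quick estimate. A small sketch, using Python purely for the arithmetic (the function name is invented; the constants are the ones quoted in this section):

```python
def vdo_ram_mib(physical_tib, uds_index_mib=250):
    """Estimate RAM for a VDO volume using the figures quoted above:
    370 MiB base + 268 MiB per 1 TiB of physical storage, plus the
    UDS index (250 MiB is the minimum and the deduplication default)."""
    return 370 + 268 * physical_tib + uds_index_mib

# e.g. a 4 TiB physical volume with the default UDS index:
print(vdo_ram_mib(4))  # 370 + 268*4 + 250 = 1692 MiB
```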
+.SS 7. Storage space requirements
+You can configure a VDOPoolLV to use up to 256 TiB of physical storage.
+Only a certain part of the physical storage is usable to store data.
+This section provides the calculations to determine the usable size
+of a VDO-managed volume.
+
+The VDO target requires storage for two types of VDO metadata and for the UDS index:
+.TP
+\(bu
+The first type of VDO metadata uses approximately 1 MiB for each 4 GiB
+of physical storage plus an additional 1 MiB per slab.
+.TP
+\(bu
+The second type of VDO metadata consumes approximately 1.25 MiB
+for each 1 GiB of logical storage, rounded up to the nearest slab.
+.TP
+\(bu
+The amount of storage required for the UDS index depends on the type of index
+and the amount of RAM allocated to the index. For each 1 GiB of RAM,
+a dense UDS index uses 17 GiB of storage and a sparse UDS index uses
+170 GiB of storage.
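The three points above can be turned into a rough overhead estimate. A sketch under stated assumptions: the 2 GiB slab size is an example value (the man page says it is tunable via allocation/vdo_slab_size_mb), the round-up of the second metadata type to a whole slab is omitted for simplicity, and the function name is invented:

```python
import math

def vdo_overhead_mib(physical_gib, logical_gib,
                     slab_size_gib=2, uds_ram_gib=0.25, sparse=False):
    """Rough VDO metadata + UDS index storage overhead, in MiB."""
    slabs = math.ceil(physical_gib / slab_size_gib)
    type1 = physical_gib / 4 + slabs   # ~1 MiB per 4 GiB physical + 1 MiB per slab
    type2 = 1.25 * logical_gib         # ~1.25 MiB per 1 GiB logical (round-up omitted)
    uds = uds_ram_gib * (170 if sparse else 17) * 1024  # index storage, GiB -> MiB
    return type1 + type2 + uds

# e.g. 1 TiB physical, 10 TiB logical, 0.25 GiB of RAM for a dense index:
print(round(vdo_overhead_mib(1024, 10240)))  # 17920
```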
.SH SEE ALSO
.BR lvm (8),

@@ -0,0 +1,83 @@
man/lvmvdo.7_main | 23 ++++++++++++-----------
1 file changed, 12 insertions(+), 11 deletions(-)
diff --git a/man/lvmvdo.7_main b/man/lvmvdo.7_main
index 39dee39..474d6dd 100644
--- a/man/lvmvdo.7_main
+++ b/man/lvmvdo.7_main
@@ -16,7 +16,7 @@ and also eliminates any blocks of all zeroes.
With deduplication, instead of writing the same data more than once, VDO detects and records each
duplicate block as a reference to the original
-block. VDO maintains a mapping from logical block addresses (used by the
+block. VDO maintains a mapping from Logical Block Addresses (LBA) (used by the
storage layer above VDO) to physical block addresses (used by the storage
layer under VDO). After deduplication, multiple logical block addresses
may be mapped to the same physical block address; these are called shared
@@ -59,7 +59,7 @@ VDOPoolLV
.br
VDO pool LV
.br
-A pool for virtual VDOLV(s) with the size of used VDODataLV.
+A pool for virtual VDOLV(s), which are the size of used VDODataLV.
.br
Only a single VDOLV is currently supported.
.TP
@@ -72,7 +72,7 @@ Created from VDOPoolLV.
Appears blank after creation.
.SH VDO USAGE
The primary methods for using VDO with lvm2:
-.SS 1. Create VDOPoolLV with VDOLV
+.SS 1. Create a VDOPoolLV and a VDOLV
Create a VDOPoolLV that will hold VDO data, and a
virtual size VDOLV that the user can use. If you do not specify the virtual size,
then the VDOLV is created with the maximum size that
@@ -97,9 +97,9 @@ operation.
# lvcreate --type vdo -n vdo0 -L 10G -V 100G vg/vdopool0
# mkfs.ext4 -E nodiscard /dev/vg/vdo0
.fi
-.SS 2. Create VDOPoolLV from conversion of an existing LV into VDODataLV
-Convert an already created or existing LV into a volume that can hold
-VDO data and metadata (volume referenced by VDOPoolLV).
+.SS 2. Convert an existing LV into VDOPoolLV
+Convert an existing LV into a VDOPoolLV, which is a volume
+that can hold data and metadata.
You will be prompted to confirm such conversion because it \fBIRREVERSIBLY
DESTROYS\fP the content of such volume and the volume is immediately
formatted by \fBvdoformat\fP(8) as a VDO pool data volume. You can
@@ -238,7 +238,8 @@ a long time.
.fi
.SS 8. Component activation of a VDODataLV
You can activate a VDODataLV separately as a component LV for examination
-purposes. It activates the data LV in read-only mode, and the data LV cannot be modified.
+purposes. This activates the data LV in read-only mode,
+and the data LV cannot be modified.
If the VDODataLV is active as a component, any upper LV using this volume CANNOT
be activated. You have to deactivate the VDODataLV first to continue to use the VDOPoolLV.
@@ -280,7 +281,7 @@ it hits the processing of the VDO Pool LV layer.
.I Example
.nf
-# lvcreate -L 5G -V 10G -n vdo1 vg/vdopool
+# lvcreate --type vdo -L 5G -V 10G -n vdo1 vg/vdopool
# lvcreate --type cache-pool -L 1G -n cachepool vg
# lvconvert --cache --cachepool vg/cachepool vg/vdo1
# lvconvert --uncache vg/vdo1
@@ -292,13 +293,13 @@ and takes a considerable amount of time and CPU.
Unless you really need it, you should avoid using discard.
When a block device is going to be rewritten,
-block will be automatically reused for new data.
-Discard is useful in situations when it is known that the given portion of a VDO LV
+its blocks will be automatically reused for new data.
+Discard is useful in situations when the user knows that the given portion of a VDO LV
is not going to be used and the discarded space can be used for block
provisioning in other regions of the VDO LV.
For the same reason, you should avoid using mkfs with discard for
a freshly created VDO LV to save a lot of time that this operation would
-take otherwise as device after create empty.
+take otherwise, as the device is already expected to be empty.
.SS 6. Memory usage
The VDO target requires 370 MiB of RAM plus an additional 268 MiB
per each 1 TiB of physical storage managed by the volume.

@@ -58,7 +58,7 @@ Name: lvm2
Epoch: %{rhel}
%endif
Version: 2.03.09
-Release: 5%{?dist}
+Release: 5%{?dist}.2
License: GPLv2
URL: http://sourceware.org/lvm2
Source0: ftp://sourceware.org/pub/lvm2/releases/LVM2.%{version}.tgz
@@ -79,6 +79,9 @@ Patch13: 0002-Merge-master-up-to-commit-be61bd6ff5c6.patch
Patch14: 0003-Merge-master-up-to-commit-c1d136fea3d1.patch
# BZ 1868169:
Patch15: 0004-Revert-wipe_lv-changes.patch
+# BZ 1895081:
+Patch16: lvm2-2_03_11-man-lvmvdo-update.patch
+Patch17: lvm2-2_03_11-man-update-lvmvdo.patch
BuildRequires: gcc
%if %{enable_testsuite}
@@ -150,6 +153,8 @@ or more physical volumes and creating one or more logical volumes
%patch13 -p1 -b .backup13
%patch14 -p1 -b .backup14
%patch15 -p1 -b .backup15
+%patch16 -p1 -b .backup16
+%patch17 -p1 -b .backup17
%build
%global _default_pid_dir /run
@@ -754,6 +759,12 @@ An extensive functional testsuite for LVM2.
%endif
%changelog
+* Wed Dec 09 2020 Marian Csontos <mcsontos@redhat.com> - 2.03.09-5.el8_3.2
+- Update lvmvdo man page.
+* Wed Dec 02 2020 Marian Csontos <mcsontos@redhat.com> - 2.03.09-5.el8_3.1
+- Update lvmvdo man page.
* Wed Aug 12 2020 Marian Csontos <mcsontos@redhat.com> - 2.03.09-5
- Revert wipe_lv changes.