- Fix problem where reshape of RAID volume is broken after trying to
stop all MD devices.
- Enhance raid-check to allow the admin to specify the max number of
concurrent arrays to be checked at any given time (see the sketch after
this entry)
- Resolves bz830177, bz820124
Signed-off-by: Jes Sorensen <Jes.Sorensen@redhat.com>
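A minimal C sketch of the concurrency-limiting idea described above; the
real raid-check is a shell script, and the MAXCHECK name, its value, and
the array list below are illustrative assumptions, not the script's
actual interface.

    #include <stdio.h>
    #include <string.h>

    #define MAXCHECK 2                  /* assumed admin-chosen limit */

    /* Return 1 if the array is currently idle, 0 if busy or unreadable. */
    static int md_idle(const char *md)
    {
        char path[256], action[64];
        FILE *f;

        snprintf(path, sizeof(path), "/sys/block/%s/md/sync_action", md);
        f = fopen(path, "r");
        if (!f || !fgets(action, sizeof(action), f)) {
            if (f)
                fclose(f);
            return 0;
        }
        fclose(f);
        return strncmp(action, "idle", 4) == 0;
    }

    int main(void)
    {
        /* The real script discovers arrays itself; this list is assumed. */
        const char *arrays[] = { "md0", "md1", "md2", "md3" };
        int running = 0;

        for (size_t i = 0; i < sizeof(arrays) / sizeof(arrays[0]); i++) {
            if (!md_idle(arrays[i])) {
                running++;              /* already checking or resyncing */
                continue;
            }
            if (running >= MAXCHECK) {
                printf("%s: deferring check, %d already running\n",
                       arrays[i], running);
                continue;
            }
            printf("%s: would write \"check\" to its sync_action\n",
                   arrays[i]);
            running++;
        }
        return 0;
    }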
This fix removes a dangling symlink to mdmonitor-takeover.service when
the mdadm package is uninstalled from the system.
Resolves bz828354
Signed-off-by: Jes Sorensen <Jes.Sorensen@redhat.com>
- Fix Monitor mode sometimes crashing when a resync completes
- Fix missing symlink for mdadm container device when incremental assembly
creates the array
- Make sure that when creating a second array in a container, the second
array uses all available space, since leaving space for a third array
is invalid
- Validate the number of imsm volumes per controller
- Fix issues with imsm arrays and disks larger than 2TB
- Add support for expanding imsm arrays/containers
- The support for expanding imsm arrays/containers was accepted upstream,
update to the official patches from there
- Fix the issue of --add not being very smart
- Fix an issue causing rebuilds to fail to restart on reboot (a
data-corruption-level problem)
- Reset the bad flag on map file updates
- Correctly fix failure when trying to add internal bitmap to 1.0 arrays
- Resolves: bz817023 (f17) bz817024 (f17) bz817026 (f17) bz817028 (f17)
- Resolves: bz817029 (f17) bz817032 (f17) bz817038 (f17) bz808774 (f17)
- Resolves: bz817039 (f17) bz817042 (f17)
Signed-off-by: Jes Sorensen <Jes.Sorensen@redhat.com>
This fixes an issue where devices failed to be added to a RAID array that
uses bitmaps, because the bitmap was written through O_DIRECT with
mis-aligned buffers (the alignment requirement is sketched after this
entry).
Resolves bz789898 (f16) bz791189 (f15)
Signed-off-by: Jes Sorensen <Jes.Sorensen@redhat.com>
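For context, the requirement behind this fix can be sketched in C:
O_DIRECT I/O must use buffers (and lengths/offsets) aligned to the
device's logical block size, so the writer has to allocate aligned memory,
e.g. with posix_memalign(). The file path and the 4096-byte alignment
below are illustrative assumptions, not mdadm's actual bitmap code.

    #define _GNU_SOURCE
    #include <fcntl.h>
    #include <stdio.h>
    #include <stdlib.h>
    #include <string.h>
    #include <unistd.h>

    int main(void)
    {
        const size_t align = 4096;      /* assume 4K covers the device */
        void *buf;
        int fd;

        /* posix_memalign() gives a buffer suitable for O_DIRECT... */
        if (posix_memalign(&buf, align, align) != 0) {
            fprintf(stderr, "posix_memalign failed\n");
            return 1;
        }
        memset(buf, 0, align);

        /* ...whereas a plain malloc()ed pointer may be misaligned, in
         * which case the write below would fail with EINVAL. */
        fd = open("/tmp/odirect-test", O_CREAT | O_WRONLY | O_DIRECT, 0600);
        if (fd < 0) {
            perror("open");
            free(buf);
            return 1;
        }
        if (write(fd, buf, align) != (ssize_t)align)
            perror("write");

        close(fd);
        free(buf);
        return 0;
    }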
which resolves issues when booting a sysvinit-based system.
Resolves: bz736387 (Fedora 15) bz744217 (Fedora 16)
Signed-off-by: Jes Sorensen <Jes.Sorensen@redhat.com>
Older imsm arrays, or arrays created by something other than mdadm,
might have one of two unused bits in the attributes field set. If
they do, we need to ignore them, not fail to assemble the array (a
sketch of the approach follows this entry).
Signed-off-by: Doug Ledford <dledford@redhat.com>
(cherry picked from commit 22ef59a98600f5900f957e2a0bdc16139aa528da)
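A minimal C sketch of the approach: mask off the known-unused attribute
bits before validating, and reject the array only if a genuinely
unsupported bit remains. The bit masks and values here are illustrative
assumptions, not the real imsm metadata layout.

    #include <stdint.h>
    #include <stdio.h>

    #define ATTR_SUPPORTED_MASK 0x000000ffu /* bits we understand (assumed) */
    #define ATTR_IGNORED_MASK   0x00000300u /* unused bits some tools set (assumed) */

    static int attrs_ok(uint32_t attrs)
    {
        /* Drop the known-unused bits first... */
        attrs &= ~ATTR_IGNORED_MASK;

        /* ...then fail only if something genuinely unsupported remains. */
        return (attrs & ~ATTR_SUPPORTED_MASK) == 0;
    }

    int main(void)
    {
        /* Supported bits plus one of the unused bits set. */
        uint32_t from_old_array = 0x00000211u;

        printf("assemble: %s\n", attrs_ok(from_old_array) ? "yes" : "no");
        return 0;
    }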
Added support for nested md devices, md on top of LVM devices such as
encrypted partitions (although I don't recommend that, I recommend
encrypting the md device instead), and support for md devices on top
of multipath dm devices.
Signed-off-by: Doug Ledford <dledford@redhat.com>
so we can control it in our own incremental-assembly-specific rule file
- Don't build the static package since we don't install it; also remove
the glibc-static BuildRequires
we are supposed to
- Add a rule to run incremental assembly on containers in case there are
multiple volumes in a container and we only started some of them in the
initramfs
- Make -If work with imsm arrays; the test in sysfs_unique_holder was too
restrictive
- Make incremental assembly of containers act like incremental assembly of
regular devices (i.e., --run is needed to start a degraded array)
(bz557053)
- Don't report any mismatch_cnt issues on raid1 devices: there are
legitimate reasons why the count may not be 0, and we are getting enough
false positives that it renders the check useless (bz554217, bz547128);
see the sketch below
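A minimal C sketch of the reporting rule: read an array's level and
mismatch_cnt from sysfs and stay silent for raid1, where a non-zero count
can be legitimate (for example, in-flight writes to swap or mmap'ed pages
reaching the mirrors at slightly different times). The device name is an
illustrative assumption; the real check lives in the raid-check script.

    #include <stdio.h>
    #include <string.h>

    /* Read one md sysfs attribute into buf, stripping the newline. */
    static int read_md_attr(const char *md, const char *attr,
                            char *buf, size_t len)
    {
        char path[256];
        FILE *f;

        snprintf(path, sizeof(path), "/sys/block/%s/md/%s", md, attr);
        f = fopen(path, "r");
        if (!f)
            return -1;
        if (!fgets(buf, (int)len, f)) {
            fclose(f);
            return -1;
        }
        fclose(f);
        buf[strcspn(buf, "\n")] = '\0';
        return 0;
    }

    int main(void)
    {
        const char *md = "md0";         /* assumed array name */
        char level[32], count[32];

        if (read_md_attr(md, "level", level, sizeof(level)) < 0 ||
            read_md_attr(md, "mismatch_cnt", count, sizeof(count)) < 0)
            return 1;

        if (strcmp(level, "raid1") == 0)
            return 0;           /* raid1: mismatches are expected noise */

        if (strcmp(count, "0") != 0)
            fprintf(stderr, "%s: mismatch_cnt = %s, consider a repair\n",
                    md, count);
        return 0;
    }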
- Update a couple internal patches
- Drop a patch that was in Neil's tree for 3.0.3 and that we had pulled in
for immediate use to resolve a bug
- Drop the endian patch: it no longer applied cleanly, and all attempts to
reproduce the original problem reported in bz510605 failed, even after
downloading the specific package reported as failing in that bug and
testing it on both ppc and ppc64 hardware, with both ppc and ppc64
versions on the 64-bit hardware. Without a reproducer it is impossible to
tell whether a rehashed patch against this code would actually solve the
problem, so the patch is removed entirely. The original problem, as
reported, was an easy-to-detect DOA issue where installing to a raid
array was bound to fail on reboot, so we should be able to tell quickly
and definitively if the problem resurfaces.
- Update the mdmonitor init script for LSB compliance (bz527957)
- Link from mdadm.static man page to mdadm man page (bz529314)
- Fix a problem in the raid-check script (bz523000)
- Fix the intel superblock handler so we can test on non-scsi block devices
- Add a patch fixing segfaults in mdadm --detail --export (bz526761, bz523862)
- Add a patch making mdmon store its state under /dev/.mdadm for initrd
mdmon, rootfs mdmon handover
- Restart mdmon from initscript (when running) for rootfs mdmon handover
running in rc.sysinit...leave array starting to it instead
- Modify mdadm to put its mapfile in /dev/md instead of /var/run/mdadm
since at startup /var/run/mdadm is read-only by default and this breaks
incremental assembly
- Change how mdadm decides whether to assemble incrementally added devices
under their preferred name or a random name, to avoid possible conflicts
when plugging a foreign array into a host
- Remove the no longer necessary udev patch
- Remove the no longer necessary warn patch
- Remove the no longer necessary alias patch
- Update the mdadm.rules file to only pay attention to device add events,
not change events, and to enable incremental assembly
- Add a cron job to run a weekly repair of the array to correct bad sectors
- Resolves: bz474436, bz490972