so we can control it in our own incremental assembly specific rule file
- Don't build the static package since we don't install it; also remove
the glibc-static buildreq
we are supposed to
- Add a rule to run incremental assembly on containers in case there are
multiple volumes in a container and we only started some of them in the
initramfs
- Make -If work with imsm arrays. We had too restrictive of a test in
sysfs_unique_holder.
- Make incremental assembly of containers act like incremental assembly of
regular devices (i.e., --run is needed to start a degraded array)
(bz557053)
- Don't report any mismatch_cnt issues on raid1 devices as there are
legitimate reasons why the count may not be 0 and we are getting enough
false positives that it renders the check useless (bz554217, bz547128)
- Update a couple internal patches
- Drop a patch that was in Neil's tree for 3.0.3 and that we had pulled in
for immediate use to resolve a bug
- Drop the endian patch: it no longer applied cleanly, and all attempts to
reproduce the original problem reported in bz510605 failed, even up to and
including downloading the specific package reported as failing in that bug
and trying to reproduce with it on both ppc and ppc64 hardware, and with
both the ppc and ppc64 builds on the 64-bit hardware. Without a reproducer
it is impossible to tell whether a reworked patch against this code would
actually solve the problem, so remove the patch entirely. The original
problem, as reported, was an easy-to-detect dead-on-arrival issue
(installing to a raid array was bound to fail on reboot), so we should be
able to tell quickly and definitively if it resurfaces.
- Update the mdmonitor init script for LSB compliance (bz527957)
- Link from mdadm.static man page to mdadm man page (bz529314)
- Fix a problem in the raid-check script (bz523000)
- Fix the intel superblock handler so we can test on non-scsi block devices
- Add a patch fixing mdadm --detail --export segfaults (bz526761, bz523862)
- Add a patch making mdmon store its state under /dev/.mdadm for the
initrd mdmon to rootfs mdmon handover
- Restart mdmon from initscript (when running) for rootfs mdmon handover
running in rc.sysinit...leave array starting to it instead
- Modify mdadm to put its mapfile in /dev/md instead of /var/run/mdadm
since at startup /var/run/mdadm is read-only by default and this breaks
incremental assembly
- Change how mdadm decides whether to assemble an incremental device under
its preferred name or a random name, to avoid possible conflicts when
plugging a foreign array into a host
- Remove the no longer necessary udev patch
- Remove the no longer necessary warn patch
- Remove the no longer necessary alias patch
- Update the mdadm.rules file to pay attention only to device adds, not
changes, and to enable incremental assembly
- Add a cron job to run a weekly repair of the array to correct bad sectors
(see the sketch after this entry)
- Resolves: bz474436, bz490972
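
  Illustrative note on the weekly repair entry above: as best as can be
  told from this entry, the job comes down to asking the md driver for a
  repair pass via sysfs. A minimal C sketch of that write, assuming an
  array named md0 (the packaged job itself is a script, not C):

      #include <stdio.h>

      /* Request an md "repair" pass: the driver re-reads every stripe and
         rewrites any sector that fails or mismatches, which is what lets
         a weekly scrub correct bad sectors. Must run as root; md0 is just
         an example device name. */
      int main(void)
      {
              const char *path = "/sys/block/md0/md/sync_action";
              FILE *f = fopen(path, "w");

              if (!f) {
                      perror(path);
                      return 1;
              }
              if (fputs("repair\n", f) == EOF || fclose(f) == EOF) {
                      perror(path);
                      return 1;
              }
              return 0;
      }

  Writing "check" instead of "repair" only scans and updates mismatch_cnt
  without rewriting anything; that is the counter the raid1 false-positive
  note earlier in this log refers to.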
- Use the udev rules file included with mdadm instead of our own
- Drop all the no longer relevant patches
- Fix a build error in mdopen.c
- Fix the udev rules path in Makefile
- Fix a compile issue with the __le32_to_cpu() macro usage (bad juju to do
operations on the target of the macro as it could get executed multiple
times, and gcc now throws an error on that; see the sketch after this
entry)
- Add some casts to some print statements to keep gcc from complaining
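
  Illustrative note: a C sketch of the two compiler complaints fixed
  above. The swab32() macro below is only a stand-in with the same
  multiple-evaluation shape, not mdadm's actual __le32_to_cpu()
  definition:

      #include <stdio.h>
      #include <stdint.h>

      /* The argument appears four times in the expansion, so passing an
         expression with side effects (e.g. swab32(*p++)) would evaluate
         it four times. Pass only plain variables, or assign the value to
         a local first. */
      #define swab32(x) ((((x) & 0xff000000U) >> 24) | \
                         (((x) & 0x00ff0000U) >>  8) | \
                         (((x) & 0x0000ff00U) <<  8) | \
                         (((x) & 0x000000ffU) << 24))

      int main(void)
      {
              uint32_t raw = 0x12345678U;
              uint32_t val = swab32(raw);  /* safe: no side effects here */

              /* Cast to the type the format string expects so gcc's
                 format checking stays quiet no matter how uint32_t is
                 defined on a given arch. */
              printf("0x%08llx\n", (unsigned long long)val);
              return 0;
      }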
- Drop incremental patch as it's now part of upstream
- Clean up all the open() calls in the code (#437145)
- Fix the build process to actually generate mdassemble (#446988)
- Update the udev rules to get additional info about arrays being assembled
from the /etc/mdadm.conf file (--scan option) (#447818)
- Update the udev rules to run degraded arrays (--run option) (#452459)
- Update mdadm init script so that status will always run and so return
codes are standards compliant
- Fix assembly of version 1 superblock devices
- Give a clearer error message when attempting to create an already
running device
- Allow the creation of a degraded raid4 array like we allow for raid5
- Make mdadm actually pay attention to raid4 devices when in monitor mode
- Make the mdmonitor script use daemon() correctly
- Fix a bug where manage mode would not add disks correctly under certain
conditions
- Resolves: bz244582, bz242688, bz230207, bz169596, bz171862, bz171938
- Resolves: bz174642, bz224272, bz186524
- Remove requirement for /usr/sbin/sendmail - it's optional and not on by
default, sendmail isn't *required* for mdadm itself to work, and it isn't
even required for the monitoring capability to work, only if you want the
monitoring capability to send the automatic email itself instead of
running your own program (and if you use the program option of the
monitor capability, your program could email you in a different manner
entirely)
- Remove the mdmpd daemon entirely. Now that the multipath tools from the
lvm/dm packages handle multipath devices well, it is no longer needed.
- Various cleanups in the spec file