glusterfs/0403-Fix-spurious-failure-in-bug-1744548-heal-timeout.t.patch
Rinku Kothiya 14e44be7b0 autobuild v6.0-38
Resolves: bz#1234220 bz#1286171 bz#1487177 bz#1524457 bz#1640573
Resolves: bz#1663557 bz#1667954 bz#1683602 bz#1686897 bz#1721355
Resolves: bz#1748865 bz#1750211 bz#1754391 bz#1759875 bz#1761531
Resolves: bz#1761932 bz#1763124 bz#1763129 bz#1764091 bz#1775637
Resolves: bz#1776901 bz#1781550 bz#1781649 bz#1781710 bz#1783232
Resolves: bz#1784211 bz#1784415 bz#1786516 bz#1786681 bz#1787294
Resolves: bz#1787310 bz#1787331 bz#1787994 bz#1790336 bz#1792873
Resolves: bz#1794663 bz#1796814 bz#1804164 bz#1810924 bz#1815434
Resolves: bz#1836099 bz#1837467 bz#1837926 bz#1838479 bz#1839137
Resolves: bz#1844359
Signed-off-by: Rinku Kothiya <rkothiya@redhat.com>
2020-06-10 00:48:36 -04:00

From b311385a3c4bd56d69d1fa7e9bd3d9a2ae5c344e Mon Sep 17 00:00:00 2001
From: Pranith Kumar K <pkarampu@redhat.com>
Date: Mon, 7 Oct 2019 12:27:01 +0530
Subject: [PATCH 403/449] Fix spurious failure in bug-1744548-heal-timeout.t

The script was assuming that the heal would already have been triggered
by the time the test was executed, which may not be the case.
That can lead to the following failures when the race happens:
...
18:29:45 not ok 14 [ 85/ 1] < 26> '[ 331 == 333 ]' -> ''
...
18:29:45 not ok 16 [ 10097/ 1] < 33> '[ 668 == 666 ]' -> ''
Heal on the 3rd brick didn't start completely the first time the command
was executed, so the extra count got added to the next profile info.
Fixed it by depending on cumulative stats and waiting until the count is
satisfied using EXPECT_WITHIN.
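
For illustration, a minimal sketch of the retry-until-match idea behind
EXPECT_WITHIN (the real helper comes from the test framework's include.rc;
the name expect_within_sketch below is made up for this note and is not the
framework's implementation):

    #!/bin/bash
    # Poll a shell function until its output matches the expected regex or
    # the timeout expires. This is what turns the racy one-shot count check
    # into a wait for the heal crawl to actually finish.
    expect_within_sketch () {
        local timeout=$1 expected=$2 func=$3
        local end=$((SECONDS + timeout))
        while [ "$SECONDS" -lt "$end" ]; do
            if [[ "$($func)" =~ $expected ]]; then
                return 0
            fi
            sleep 1
        done
        return 1
    }

    # Usage mirroring the change in the diff below:
    #   expect_within_sketch "$HEAL_TIMEOUT" "^333$" get_cumulative_opendir_count
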
> Upstream patch link: https://review.gluster.org/23523
> fixes: bz#1759002
> Change-Id: I3b410671c902d6b1458a757fa245613cb29d967d
> Signed-off-by: Pranith Kumar K <pkarampu@redhat.com>
BUG: 1764091
Change-Id: Ic4d16b6c8a1bbc35735567d60fd0383456b9f534
Signed-off-by: Ravishankar N <ravishankar@redhat.com>
Reviewed-on: https://code.engineering.redhat.com/gerrit/202369
Tested-by: RHGS Build Bot <nigelb@redhat.com>
Reviewed-by: Sunil Kumar Heggodu Gopala Acharya <sheggodu@redhat.com>
---
tests/bugs/replicate/bug-1744548-heal-timeout.t | 17 +++++++++++------
1 file changed, 11 insertions(+), 6 deletions(-)

diff --git a/tests/bugs/replicate/bug-1744548-heal-timeout.t b/tests/bugs/replicate/bug-1744548-heal-timeout.t
index 3cb73bc..0aaa3ea 100644
--- a/tests/bugs/replicate/bug-1744548-heal-timeout.t
+++ b/tests/bugs/replicate/bug-1744548-heal-timeout.t
@@ -4,6 +4,11 @@
. $(dirname $0)/../../volume.rc
. $(dirname $0)/../../afr.rc
+function get_cumulative_opendir_count {
+#sed 'n;d' prints odd-numbered lines
+ $CLI volume profile $V0 info |grep OPENDIR|sed 'n;d' | awk '{print $8}'|tr -d '\n'
+}
+
cleanup;
TEST glusterd;
@@ -20,23 +25,23 @@ TEST ! $CLI volume heal $V0
TEST $CLI volume profile $V0 start
TEST $CLI volume profile $V0 info clear
TEST $CLI volume heal $V0 enable
-TEST $CLI volume heal $V0
# Each brick does 3 opendirs, corresponding to dirty, xattrop and entry-changes
-COUNT=`$CLI volume profile $V0 info incremental |grep OPENDIR|awk '{print $8}'|tr -d '\n'`
-TEST [ "$COUNT" == "333" ]
+EXPECT_WITHIN $HEAL_TIMEOUT "^333$" get_cumulative_opendir_count
# Check that a change in heal-timeout is honoured immediately.
TEST $CLI volume set $V0 cluster.heal-timeout 5
sleep 10
-COUNT=`$CLI volume profile $V0 info incremental |grep OPENDIR|awk '{print $8}'|tr -d '\n'`
# Two crawls must have happened.
-TEST [ "$COUNT" == "666" ]
+EXPECT_WITHIN $HEAL_TIMEOUT "^999$" get_cumulative_opendir_count
# shd must not heal if it is disabled and heal-timeout is changed.
TEST $CLI volume heal $V0 disable
+#Wait for configuration update and any opendir fops to complete
+sleep 10
TEST $CLI volume profile $V0 info clear
TEST $CLI volume set $V0 cluster.heal-timeout 6
-sleep 6
+#Better to wait for more than 6 seconds to account for configuration updates
+sleep 10
COUNT=`$CLI volume profile $V0 info incremental |grep OPENDIR|awk '{print $8}'|tr -d '\n'`
TEST [ -z $COUNT ]
cleanup;
--
1.8.3.1
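
Note on the helper added above: get_cumulative_opendir_count works because
the volume profile output prints an OPENDIR line per brick in both the
cumulative and the interval stats blocks, sed 'n;d' keeps only the
odd-numbered (cumulative) OPENDIR lines, awk takes field 8 (the per-fop call
count), and tr -d '\n' joins the three per-brick counts into one string such
as "333" (one crawl: 3 opendirs on each of 3 bricks) or "999" (three crawls).
A small demo of that pipeline, where sample_profile_info and its output
lines are fabricated for this note and only the pipeline itself comes from
the patch:

    #!/bin/bash
    # Fabricated stand-in for "$CLI volume profile $V0 info" output after one
    # crawl: a cumulative line followed by an interval line for each of the
    # 3 bricks; field 8 is the call count picked up by awk '{print $8}'.
    sample_profile_info () {
        for brick in 1 2 3; do
            echo "0.00  0.00 us  0.00 us  0.00 us  3  OPENDIR"   # cumulative
            echo "0.00  0.00 us  0.00 us  0.00 us  1  OPENDIR"   # interval
        done
    }

    # Same pipeline as in the patch: keep odd lines (cumulative stats), take
    # the call count, and concatenate the per-brick counts.
    sample_profile_info | grep OPENDIR | sed 'n;d' | awk '{print $8}' | tr -d '\n'
    echo    # output: 333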