autobuild v3.12.2-35

Resolves: bz#1648893 bz#1654161
Signed-off-by: Milind Changire <mchangir@redhat.com>
commit a902a1d5ca (parent 5f5929f321)
Milind Changire, 2019-01-02 10:54:10 -05:00
4 changed files with 269 additions and 1 deletion

0496-glusterd-kill-the-process-without-releasing-the-clea.patch

@@ -0,0 +1,64 @@
From 61fd5c07791d82e830d7caac008247765437b7ca Mon Sep 17 00:00:00 2001
From: Sanju Rakonde <srakonde@redhat.com>
Date: Wed, 2 Jan 2019 12:29:53 +0530
Subject: [PATCH 496/498] glusterd: kill the process without releasing the
cleanup mutex lock

Problem:
glusterd acquires a cleanup mutex lock before it starts the
cleanup process, so that any other thread that tries to acquire
a lock on any resource blocks on the cleanup mutex lock. Once
cleanup has started, no thread should acquire any resource,
because other threads might lock resources that have already
been freed by the thread going through the cleanup phase.

Previously, the cleanup mutex lock was released just before the
process exit. In the window between that unlock and the exit,
another thread blocked on the cleanup mutex lock could acquire
it and then try to acquire resources that had already been freed
as part of cleanup, leading glusterd to crash.

Solution: Exit the process without releasing the cleanup mutex
lock.
> Change-Id: Ibae1c62260f141019017f7a547519a5d38dc2bb6
> fixes: bz#1654270
> Signed-off-by: Sanju Rakonde <srakonde@redhat.com>
upstream patch: https://review.gluster.org/#/c/glusterfs/+/21974/
Change-Id: Ibae1c62260f141019017f7a547519a5d38dc2bb6
BUG: 1654161
Signed-off-by: Sanju Rakonde <srakonde@redhat.com>
Reviewed-on: https://code.engineering.redhat.com/gerrit/159635
Tested-by: RHGS Build Bot <nigelb@redhat.com>
Reviewed-by: Atin Mukherjee <amukherj@redhat.com>
---
glusterfsd/src/glusterfsd.c | 7 +++----
1 file changed, 3 insertions(+), 4 deletions(-)
diff --git a/glusterfsd/src/glusterfsd.c b/glusterfsd/src/glusterfsd.c
index 57effbd..990036c 100644
--- a/glusterfsd/src/glusterfsd.c
+++ b/glusterfsd/src/glusterfsd.c
@@ -1446,11 +1446,10 @@ cleanup_and_exit (int signum)
#endif
trav = NULL;
+ /* NOTE: Only the least significant 8 bits i.e (signum & 255)
+ will be available to parent process on calling exit() */
+ exit(abs(signum));
}
- pthread_mutex_unlock(&ctx->cleanup_lock);
- /* NOTE: Only the least significant 8 bits i.e (signum & 255)
- will be available to parent process on calling exit() */
- exit(abs(signum));
}
--
1.8.3.1
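
For context, a minimal sketch of the locking pattern this patch
enforces (cleanup_lock stands in for the glusterfs_ctx_t cleanup
lock; the worker and cleanup bodies are illustrative, not the
actual glusterd code):

#include <pthread.h>
#include <stdlib.h>

static pthread_mutex_t cleanup_lock = PTHREAD_MUTEX_INITIALIZER;

/* Worker threads take cleanup_lock around shared-resource use, so
 * they block for good once cleanup has begun. */
void
worker_use_resources(void)
{
    pthread_mutex_lock(&cleanup_lock);
    /* ... safe to touch shared resources here ... */
    pthread_mutex_unlock(&cleanup_lock);
}

/* Cleanup path: take the lock, free everything, then exit while
 * still holding the lock. Unlocking first (the old behaviour)
 * opens a window where a blocked worker wakes up and dereferences
 * freed resources before exit() completes. */
void
cleanup_and_exit(int signum)
{
    pthread_mutex_lock(&cleanup_lock);
    /* ... free shared resources ... */

    /* Only the least significant 8 bits, i.e. (signum & 255),
     * reach the parent process. */
    exit(abs(signum));
}

Leaving the mutex locked at exit is safe here because it dies with
the process; the point is that no other thread can run between the
final free and the exit.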

0497-cluster-dht-Use-percentages-for-space-check.patch

@@ -0,0 +1,145 @@
From 2029bf72400a380a4a0f1bf7f1b72816c70f9774 Mon Sep 17 00:00:00 2001
From: N Balachandran <nbalacha@redhat.com>
Date: Mon, 31 Dec 2018 17:42:27 +0530
Subject: [PATCH 497/498] cluster/dht: Use percentages for space check

With heterogeneous bricks now supported in DHT, files may fail
to migrate even though there is sufficient space on newly added
bricks, simply because those bricks are considerably smaller
than the older ones. Using percentages instead of absolute
available space for the space checks mitigates this to some
extent.

upstream patch: https://review.gluster.org/#/c/glusterfs/+/19101/

This is not an identical backport, as some changes on upstream
master are not available in the downstream code.

Marking bug-1247563.t as a bad test, since it depended on the
earlier space-check logic to prevent a file from migrating. The
marking will be removed once we find a way to force a file
migration failure.
Change-Id: Ie89bfdd114406a986b3ff4f53b0bb0fae6574c8e
BUG: 1290124
Signed-off-by: N Balachandran <nbalacha@redhat.com>
Reviewed-on: https://code.engineering.redhat.com/gerrit/159569
Tested-by: RHGS Build Bot <nigelb@redhat.com>
Reviewed-by: Susant Palai <spalai@redhat.com>
Reviewed-by: Raghavendra Gowdappa <rgowdapp@redhat.com>
---
tests/bugs/distribute/bug-1247563.t | 3 ++
xlators/cluster/dht/src/dht-rebalance.c | 57 ++++++++++++++++++++++++---------
2 files changed, 45 insertions(+), 15 deletions(-)
diff --git a/tests/bugs/distribute/bug-1247563.t b/tests/bugs/distribute/bug-1247563.t
index f7f9258..12cd080 100644
--- a/tests/bugs/distribute/bug-1247563.t
+++ b/tests/bugs/distribute/bug-1247563.t
@@ -55,3 +55,6 @@ COUNT=`getfacl $FPATH2 |grep -c "user:root:rwx"`
EXPECT "0" echo $COUNT
cleanup;
+
+#G_TESTDEF_TEST_STATUS_CENTOS6=BAD_TEST,BUG=000000
+#G_TESTDEF_TEST_STATUS_NETBSD7=BAD_TEST,BUG=000000
diff --git a/xlators/cluster/dht/src/dht-rebalance.c b/xlators/cluster/dht/src/dht-rebalance.c
index d0f49d2..291b557 100644
--- a/xlators/cluster/dht/src/dht-rebalance.c
+++ b/xlators/cluster/dht/src/dht-rebalance.c
@@ -880,8 +880,12 @@ __dht_check_free_space (xlator_t *this, xlator_t *to, xlator_t *from, loc_t *loc
dict_t *xdata = NULL;
dht_layout_t *layout = NULL;
uint64_t src_statfs_blocks = 1;
+ uint64_t src_total_blocks = 0;
uint64_t dst_statfs_blocks = 1;
- double post_availspacepercent = 0;
+ uint64_t dst_total_blocks = 0;
+ uint64_t file_blocks = 0;
+ double dst_post_availspacepercent = 0;
+ double src_post_availspacepercent = 0;
xdata = dict_new ();
if (!xdata) {
@@ -926,8 +930,24 @@ __dht_check_free_space (xlator_t *this, xlator_t *to, xlator_t *from, loc_t *loc
}
gf_msg_debug (this->name, 0, "min_free_disk - %f , block available - %lu ,"
- " block size - %lu ", conf->min_free_disk, dst_statfs.f_bavail,
- dst_statfs.f_bsize);
+ " block size - %lu ", conf->min_free_disk,
+ dst_statfs.f_bavail, dst_statfs.f_frsize);
+
+ dst_statfs_blocks = ((dst_statfs.f_bavail *
+ dst_statfs.f_frsize) /
+ GF_DISK_SECTOR_SIZE);
+
+ src_statfs_blocks = ((src_statfs.f_bavail *
+ src_statfs.f_frsize) /
+ GF_DISK_SECTOR_SIZE);
+
+ dst_total_blocks = ((dst_statfs.f_blocks *
+ dst_statfs.f_frsize) /
+ GF_DISK_SECTOR_SIZE);
+
+ src_total_blocks = ((src_statfs.f_blocks *
+ src_statfs.f_frsize) /
+ GF_DISK_SECTOR_SIZE);
/* if force option is given, do not check for space @ dst.
* Check only if space is avail for the file */
@@ -940,17 +960,22 @@ __dht_check_free_space (xlator_t *this, xlator_t *to, xlator_t *from, loc_t *loc
subvol gains certain 'blocks' of free space. A valid check is
necessary here to avoid errorneous move to destination where
the space could be scantily available.
+ With heterogeneous brick support, an actual space comparison could
+ prevent any files being migrated to newly added bricks if they are
+ smaller than the free space available on the existing bricks.
*/
if (stbuf) {
- dst_statfs_blocks = ((dst_statfs.f_bavail *
- dst_statfs.f_bsize) /
- GF_DISK_SECTOR_SIZE);
- src_statfs_blocks = ((src_statfs.f_bavail *
- src_statfs.f_bsize) /
- GF_DISK_SECTOR_SIZE);
- if ((dst_statfs_blocks) <
- (src_statfs_blocks + stbuf->ia_blocks)) {
+ file_blocks = stbuf->ia_size + GF_DISK_SECTOR_SIZE - 1;
+ file_blocks /= GF_DISK_SECTOR_SIZE;
+ src_post_availspacepercent =
+ (((src_statfs_blocks + file_blocks) * 100) /
+ src_total_blocks);
+
+ dst_post_availspacepercent = ((dst_statfs_blocks * 100) /
+ dst_total_blocks);
+
+ if (dst_post_availspacepercent < src_post_availspacepercent) {
gf_msg (this->name, GF_LOG_WARNING, 0,
DHT_MSG_MIGRATE_FILE_FAILED,
"data movement of file "
@@ -969,16 +994,18 @@ __dht_check_free_space (xlator_t *this, xlator_t *to, xlator_t *from, loc_t *loc
}
}
-
check_avail_space:
if (conf->disk_unit == 'p' && dst_statfs.f_blocks) {
- post_availspacepercent = (dst_statfs.f_bavail * 100) / dst_statfs.f_blocks;
+ dst_post_availspacepercent =
+ ((dst_statfs_blocks * 100) / dst_total_blocks);
+
gf_msg_debug (this->name, 0, "file : %s, post_availspacepercent : %lf "
"f_bavail : %lu min-free-disk: %lf", loc->path,
- post_availspacepercent, dst_statfs.f_bavail, conf->min_free_disk);
+ dst_post_availspacepercent, dst_statfs.f_bavail,
+ conf->min_free_disk);
- if (post_availspacepercent < conf->min_free_disk) {
+ if (dst_post_availspacepercent < conf->min_free_disk) {
gf_msg (this->name, GF_LOG_WARNING, 0, 0,
"Write will cross min-free-disk for "
"file - %s on subvol - %s. Looking "
--
1.8.3.1
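
To make the new check concrete, a standalone sketch of the
arithmetic this patch introduces in __dht_check_free_space()
(DISK_SECTOR_SIZE stands in for GF_DISK_SECTOR_SIZE, and the brick
sizes below are hypothetical):

#include <stdint.h>
#include <stdio.h>

#define DISK_SECTOR_SIZE 512 /* stand-in for GF_DISK_SECTOR_SIZE */

/* Free space after the move, as a percentage of total capacity,
 * normalised to 512-byte sectors the way the patch does it. The
 * patch divides in uint64_t, so the result is truncated. */
static double
post_avail_percent(uint64_t f_bavail, uint64_t f_blocks,
                   uint64_t f_frsize, uint64_t gained_blocks)
{
    uint64_t avail = (f_bavail * f_frsize) / DISK_SECTOR_SIZE;
    uint64_t total = (f_blocks * f_frsize) / DISK_SECTOR_SIZE;
    return (double)(((avail + gained_blocks) * 100) / total);
}

int
main(void)
{
    /* Hypothetical bricks with 4 KiB fragments: a 10 TiB source
     * that is 40% free, a 1 TiB destination that is 60% free, and
     * a 1 MiB file to migrate. */
    uint64_t frsize = 4096;
    uint64_t file_blocks =
        (1048576 + DISK_SECTOR_SIZE - 1) / DISK_SECTOR_SIZE;

    /* The source gains the file's blocks once the file leaves. */
    double src = post_avail_percent(1073741824ULL, 2684354560ULL,
                                    frsize, file_blocks);
    double dst = post_avail_percent(161061274ULL, 268435456ULL,
                                    frsize, 0);

    printf("src post %.0f%%, dst post %.0f%% -> %s\n", src, dst,
           dst < src ? "skip migration" : "migrate");
    return 0;
}

With these numbers the old absolute comparison would refuse the
move (about 0.6 TiB free on the destination versus 4 TiB on the
source), while the percentage comparison (60% >= 40%) lets the
smaller brick accept the file.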

0498-mem-pool-Code-refactor-in-mem_pool.c.patch

@@ -0,0 +1,53 @@
From 9080a49b75c2802f7739cb631050c8befa9ae760 Mon Sep 17 00:00:00 2001
From: Mohit Agrawal <moagrawa@redhat.com>
Date: Mon, 31 Dec 2018 13:52:27 +0530
Subject: [PATCH 498/498] mem-pool: Code refactor in mem_pool.c

Problem: The previous commit 10868bfc5ed099a90fbfd2310bc89c299475d94e
did not fully match the upstream commit it backported.
Solution: Apply the remaining changes to match the upstream patch.
BUG: 1648893
> Signed-off-by: Mohit Agrawal <moagrawa@redhat.com>
> Reviewed-on: https://code.engineering.redhat.com/gerrit/159029
> Tested-by: RHGS Build Bot <nigelb@redhat.com>
> Reviewed-by: Sunil Kumar Heggodu Gopala Acharya <sheggodu@redhat.com>
Change-Id: I924ba4967ce28ece6329dbda3e0309b79784fbe7
Signed-off-by: Mohit Agrawal <moagrawa@redhat.com>
Reviewed-on: https://code.engineering.redhat.com/gerrit/159628
Tested-by: RHGS Build Bot <nigelb@redhat.com>
Reviewed-by: Niels de Vos <ndevos@redhat.com>
Reviewed-by: Sunil Kumar Heggodu Gopala Acharya <sheggodu@redhat.com>
---
libglusterfs/src/mem-pool.c | 6 +-----
1 file changed, 1 insertion(+), 5 deletions(-)
diff --git a/libglusterfs/src/mem-pool.c b/libglusterfs/src/mem-pool.c
index 76daca9..7b62358 100644
--- a/libglusterfs/src/mem-pool.c
+++ b/libglusterfs/src/mem-pool.c
@@ -825,6 +825,7 @@ mem_get_from_pool (struct mem_pool *mem_pool)
if (retval) {
retval->magic = GF_MEM_HEADER_MAGIC;
retval->next = NULL;
+ retval->pool = mem_pool;
retval->pool_list = pool_list;
retval->power_of_two = mem_pool->pool->power_of_two;
}
@@ -860,11 +861,6 @@ mem_get (struct mem_pool *mem_pool)
return NULL;
}
- retval->magic = GF_MEM_HEADER_MAGIC;
- retval->pool = mem_pool;
- retval->pool_list = pool_list;
- retval->power_of_two = mem_pool->pool->power_of_two;
-
GF_ATOMIC_INC (mem_pool->active);
return retval + 1;
--
1.8.3.1
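
The shape of the refactor, as a compressed sketch with simplified
stand-in types (field names mirror mem-pool.c; the real
mem_get_from_pool() signature and allocation path differ):

#include <stdlib.h>

#define HEADER_MAGIC 0xCAFEBABEu /* stand-in for GF_MEM_HEADER_MAGIC */

struct mem_pool { unsigned int power_of_two; };
struct per_thread_pool_list { int unused; };

typedef struct pooled_obj_hdr {
    unsigned int magic;
    struct pooled_obj_hdr *next;
    struct mem_pool *pool; /* the field this patch initialises here */
    struct per_thread_pool_list *pool_list;
    unsigned int power_of_two;
} pooled_obj_hdr_t;

/* After the refactor the allocation helper fills in the entire
 * header in one place, so mem_get() no longer repeats (and can no
 * longer drift out of sync with) these assignments. */
pooled_obj_hdr_t *
get_from_pool(struct mem_pool *mem_pool,
              struct per_thread_pool_list *pool_list)
{
    pooled_obj_hdr_t *retval = calloc(1, sizeof(*retval));
    if (retval) {
        retval->magic = HEADER_MAGIC;
        retval->next = NULL;
        retval->pool = mem_pool; /* added by this patch */
        retval->pool_list = pool_list;
        retval->power_of_two = mem_pool->power_of_two;
    }
    return retval;
}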

glusterfs.spec

@@ -192,7 +192,7 @@ Release: 0.1%{?prereltag:.%{prereltag}}%{?dist}
%else
Name: glusterfs
Version: 3.12.2
Release: 34%{?dist}
Release: 35%{?dist}
%endif
License: GPLv2 or LGPLv3+
Group: System Environment/Base
@@ -760,6 +760,9 @@ Patch0492: 0492-mem-pool-track-glusterfs_ctx_t-in-struct-mem_pool.patch
Patch0493: 0493-mem-pool-count-allocations-done-per-user-pool.patch
Patch0494: 0494-mem-pool-Resolve-crash-in-mem_pool_destroy.patch
Patch0495: 0495-build-add-conditional-dependency-on-server-for-devel.patch
Patch0496: 0496-glusterd-kill-the-process-without-releasing-the-clea.patch
Patch0497: 0497-cluster-dht-Use-percentages-for-space-check.patch
Patch0498: 0498-mem-pool-Code-refactor-in-mem_pool.c.patch
%description
GlusterFS is a distributed file-system capable of scaling to several
@@ -2720,6 +2723,9 @@ fi
%endif
%changelog
* Wed Jan 02 2019 Milind Changire <mchangir@redhat.com> - 3.12.2-35
- fixes bugs bz#1654161
* Wed Dec 19 2018 Milind Changire <mchangir@redhat.com> - 3.12.2-34
- fixes bugs bz#1648893 bz#1656357