autobuild v3.12.2-8

Resolves: bz#1466129 bz#1475779 bz#1523216 bz#1535281 bz#1546941
Resolves: bz#1550315 bz#1550991 bz#1553677 bz#1554291 bz#1559452
Resolves: bz#1560955 bz#1562744 bz#1563692 bz#1565962 bz#1567110
Resolves: bz#1569457
Signed-off-by: Milind Changire <mchangir@redhat.com>
Milind Changire 2018-04-20 06:34:51 -04:00
parent 155a159af9
commit c211c8d97e
25 changed files with 6950 additions and 3 deletions

From 975e18d864b0b5c9158abae8752271e4a7fe6299 Mon Sep 17 00:00:00 2001
From: Atin Mukherjee <amukherj@redhat.com>
Date: Tue, 27 Mar 2018 16:53:33 +0530
Subject: [PATCH 213/236] glusterd: mark port_registered to true for all
running bricks with brick mux
glusterd maintains a boolean flag 'port_registered' which is used to determine
whether a brick has completed its portmap sign-in process. This flag is (re)set
on pmap_signin and pmap_signout events. In case of brick multiplexing this flag
identifies whether the very first brick with which the process was spawned has
completed its sign-in process. However, on glusterd restart, when a brick is
already identified as running, glusterd does a pmap_registry_bind to ensure its
portmap table is updated, but this flag is not set. That is fine in the
non-brick-multiplex case, but it causes an issue if the very first brick which
came up as part of the process is replaced: the subsequent brick attach will
fail. One way to validate this is to create and start a volume, remove the
first brick and then add-brick a new one. The add-brick operation takes a very
long time, after which volume status shows the new brick as down while all
other bricks are up.
Solution is to set brickinfo->port_registered to true for all the
running bricks when brick multiplexing is enabled.
>upstream mainline patch : https://review.gluster.org/#/c/19800/
>Change-Id: Ib0662d99d0fa66b1538947fd96b43f1cbc04e4ff
>Fixes: bz#1560957
>Signed-off-by: Atin Mukherjee <amukherj@redhat.com>
Change-Id: Ib0662d99d0fa66b1538947fd96b43f1cbc04e4ff
BUG: 1560955
Signed-off-by: Atin Mukherjee <amukherj@redhat.com>
Reviewed-on: https://code.engineering.redhat.com/gerrit/134827
Tested-by: RHGS Build Bot <nigelb@redhat.com>
Reviewed-by: Sanju Rakonde <srakonde@redhat.com>
---
.../bug-1560955-brick-mux-port-registered-issue.t | 39 ++++++++++++++++++++++
xlators/mgmt/glusterd/src/glusterd-handler.c | 2 ++
xlators/mgmt/glusterd/src/glusterd-utils.c | 1 +
3 files changed, 42 insertions(+)
create mode 100644 tests/bugs/glusterd/bug-1560955-brick-mux-port-registered-issue.t
diff --git a/tests/bugs/glusterd/bug-1560955-brick-mux-port-registered-issue.t b/tests/bugs/glusterd/bug-1560955-brick-mux-port-registered-issue.t
new file mode 100644
index 0000000..d1b8f06
--- /dev/null
+++ b/tests/bugs/glusterd/bug-1560955-brick-mux-port-registered-issue.t
@@ -0,0 +1,39 @@
+#!/bin/bash
+
+. $(dirname $0)/../../include.rc
+. $(dirname $0)/../../traps.rc
+. $(dirname $0)/../../volume.rc
+
+function count_brick_processes {
+ pgrep glusterfsd | wc -l
+}
+
+function count_brick_pids {
+ $CLI --xml volume status all | sed -n '/.*<pid>\([^<]*\).*/s//\1/p' \
+ | grep -v "N/A" | sort | uniq | wc -l
+}
+
+cleanup;
+
+#bug-1560955 - brick status goes offline after remove-brick followed by add-brick
+TEST glusterd
+TEST $CLI volume set all cluster.brick-multiplex on
+push_trapfunc "$CLI volume set all cluster.brick-multiplex off"
+push_trapfunc "cleanup"
+
+TEST $CLI volume create $V0 $H0:$B0/${V0}{1..3}
+TEST $CLI volume start $V0
+
+EXPECT_WITHIN $PROCESS_UP_TIMEOUT 1 count_brick_processes
+EXPECT_WITHIN $PROCESS_UP_TIMEOUT 1 count_brick_pids
+EXPECT_WITHIN $PROCESS_UP_TIMEOUT 3 online_brick_count
+
+
+pkill glusterd
+TEST glusterd
+TEST $CLI volume remove-brick $V0 $H0:$B0/${V0}1 force
+TEST $CLI volume add-brick $V0 $H0:$B0/${V0}1_new force
+
+EXPECT_WITHIN $PROCESS_UP_TIMEOUT 1 count_brick_processes
+EXPECT_WITHIN $PROCESS_UP_TIMEOUT 1 count_brick_pids
+EXPECT_WITHIN $PROCESS_UP_TIMEOUT 3 online_brick_count
diff --git a/xlators/mgmt/glusterd/src/glusterd-handler.c b/xlators/mgmt/glusterd/src/glusterd-handler.c
index dbf80a1..cb19321 100644
--- a/xlators/mgmt/glusterd/src/glusterd-handler.c
+++ b/xlators/mgmt/glusterd/src/glusterd-handler.c
@@ -5721,6 +5721,8 @@ glusterd_get_state (rpcsvc_request_t *req, dict_t *dict)
count, brickinfo->port);
fprintf (fp, "Volume%d.Brick%d.rdma_port: %d\n", count_bkp,
count, brickinfo->rdma_port);
+ fprintf (fp, "Volume%d.Brick%d.port_registered: %d\n",
+ count_bkp, count, brickinfo->port_registered);
fprintf (fp, "Volume%d.Brick%d.status: %s\n", count_bkp,
count, brickinfo->status ? "Started" : "Stopped");
diff --git a/xlators/mgmt/glusterd/src/glusterd-utils.c b/xlators/mgmt/glusterd/src/glusterd-utils.c
index 49605cc..5e9213c 100644
--- a/xlators/mgmt/glusterd/src/glusterd-utils.c
+++ b/xlators/mgmt/glusterd/src/glusterd-utils.c
@@ -5976,6 +5976,7 @@ glusterd_brick_start (glusterd_volinfo_t *volinfo,
* TBD: re-use RPC connection across bricks
*/
if (is_brick_mx_enabled ()) {
+ brickinfo->port_registered = _gf_true;
ret = glusterd_get_sock_from_brick_pid (pid, socketpath,
sizeof(socketpath));
if (ret) {
--
1.8.3.1
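The restart-path fix in the patch above can be modeled with a small standalone sketch. The struct and function names below are illustrative stand-ins, not the actual glusterd types:

```c
#include <assert.h>
#include <stdbool.h>

/* Illustrative miniature of the glusterd restart path described above.
 * In real glusterd, port_registered is normally set only by the
 * pmap_signin event; a brick found already running at restart is
 * re-bound into the portmap table without ever signing in again. */
struct brickinfo {
    bool started;
    bool port_registered;
};

static void restart_rebind_brick(struct brickinfo *brick, bool brick_mux)
{
    if (!brick->started)
        return;
    /* ... pmap_registry_bind() would refresh the portmap table here ... */
    if (brick_mux)
        brick->port_registered = true;  /* the one-line fix from the patch */
}
```

Without the flag, attaching a replacement for the first multiplexed brick fails because glusterd believes the process never completed its sign-in.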

From edd4d523869cc65c389253a225b02c578ad3af85 Mon Sep 17 00:00:00 2001
From: Mohit Agrawal <moagrawa@redhat.com>
Date: Fri, 6 Oct 2017 15:13:02 +0530
Subject: [PATCH 214/236] cluster/dht: Serialize mds update code path with
lookup unwind in selfheal
Problem: The test case ./tests/bugs/bug-1371806_1.t sometimes fails on
         CentOS due to a race condition between a fresh lookup and a
         setxattr fop.
Solution: In the selfheal code path the mds subvol is saved on the inode
          ctx, but this was not serialized with the lookup unwind. Because
          of this, if the mds was not yet saved on the inode ctx after the
          lookup unwind, any subsequent setxattr fop failed with ENOENT,
          since no mds was found on the inode ctx. To resolve this, saving
          the mds on the inode ctx is now serialized with the lookup
          unwind.
> BUG: 1498966
> Change-Id: I8d4bb40a6cbf0cec35d181ec0095cc7142b02e29
> Signed-off-by: Mohit Agrawal <moagrawa@redhat.com>
> (cherry picked from commit https://review.gluster.org/#/c/18436/)
> (Upstream patch link https://review.gluster.org/#/c/18436/)
BUG: 1550315
Change-Id: I0d3c03cb6ab9a3729f8c4219fd54058d97ed526b
Signed-off-by: Mohit Agrawal <moagrawa@redhat.com>
Reviewed-on: https://code.engineering.redhat.com/gerrit/134282
Tested-by: RHGS Build Bot <nigelb@redhat.com>
Reviewed-by: Nithya Balachandran <nbalacha@redhat.com>
Reviewed-by: Sunil Kumar Heggodu Gopala Acharya <sheggodu@redhat.com>
---
tests/bugs/bug-1371806_1.t | 1 -
xlators/cluster/dht/src/dht-common.c | 314 ++++++++++++++++++++-------------
xlators/cluster/dht/src/dht-common.h | 14 +-
xlators/cluster/dht/src/dht-selfheal.c | 188 +++-----------------
4 files changed, 231 insertions(+), 286 deletions(-)
diff --git a/tests/bugs/bug-1371806_1.t b/tests/bugs/bug-1371806_1.t
index 44a57a9..df19a8c 100644
--- a/tests/bugs/bug-1371806_1.t
+++ b/tests/bugs/bug-1371806_1.t
@@ -46,4 +46,3 @@ EXPECT "abc" get_getfattr ./tmp{1..10}
cd -
cleanup
-exit
diff --git a/xlators/cluster/dht/src/dht-common.c b/xlators/cluster/dht/src/dht-common.c
index 6319a87..2fd145d 100644
--- a/xlators/cluster/dht/src/dht-common.c
+++ b/xlators/cluster/dht/src/dht-common.c
@@ -579,6 +579,7 @@ dht_discover_complete (xlator_t *this, call_frame_t *discover_frame)
uint32_t vol_commit_hash = 0;
xlator_t *source = NULL;
int heal_path = 0;
+ int error_while_marking_mds = 0;
int i = 0;
loc_t loc = {0 };
int8_t is_read_only = 0, layout_anomalies = 0;
@@ -684,7 +685,8 @@ dht_discover_complete (xlator_t *this, call_frame_t *discover_frame)
internal mds xattr is not present and all subvols are up
*/
if (!local->op_ret && !__is_root_gfid (local->stbuf.ia_gfid))
- (void) dht_mark_mds_subvolume (discover_frame, this);
+ (void) dht_common_mark_mdsxattr (discover_frame,
+ &error_while_marking_mds, 1);
if (local->need_xattr_heal && !heal_path) {
local->need_xattr_heal = 0;
@@ -699,7 +701,7 @@ dht_discover_complete (xlator_t *this, call_frame_t *discover_frame)
}
}
- if (source && (heal_path || layout_anomalies)) {
+ if (source && (heal_path || layout_anomalies || error_while_marking_mds)) {
gf_uuid_copy (loc.gfid, local->gfid);
if (gf_uuid_is_null (loc.gfid)) {
goto done;
@@ -761,62 +763,82 @@ out:
}
int
-dht_mds_internal_setxattr_cbk (call_frame_t *frame, void *cookie, xlator_t *this,
- int op_ret, int op_errno, dict_t *xdata)
+dht_common_mark_mdsxattr_cbk (call_frame_t *frame, void *cookie,
+ xlator_t *this, int op_ret, int op_errno,
+ dict_t *xdata)
{
- dht_local_t *local = NULL;
- xlator_t *hashed_subvol = NULL;
- dht_conf_t *conf = NULL;
- int ret = 0;
+ dht_local_t *local = NULL;
+ xlator_t *prev = cookie;
+ int ret = -1;
+ dht_conf_t *conf = 0;
+ dht_layout_t *layout = NULL;
GF_VALIDATE_OR_GOTO (this->name, frame, out);
GF_VALIDATE_OR_GOTO (this->name, frame->local, out);
local = frame->local;
- hashed_subvol = cookie;
conf = this->private;
+ layout = local->selfheal.layout;
if (op_ret) {
gf_msg_debug (this->name, op_ret,
- "Failed to set %s on the MDS for path %s. ",
- conf->mds_xattr_key, local->loc.path);
+ "Failed to set %s on the MDS %s for path %s. ",
+ conf->mds_xattr_key, prev->name, local->loc.path);
} else {
- /* Save mds subvol on inode ctx */
- ret = dht_inode_ctx_mdsvol_set (local->inode, this,
- hashed_subvol);
+ /* Save mds subvol on inode ctx */
+ ret = dht_inode_ctx_mdsvol_set (local->inode, this, prev);
if (ret) {
gf_msg (this->name, GF_LOG_ERROR, 0,
DHT_MSG_SET_INODE_CTX_FAILED,
"Failed to set mds subvol on inode ctx"
- " %s for %s", hashed_subvol->name,
+ " %s for %s ", prev->name,
local->loc.path);
}
}
+ if (!local->mds_heal_fresh_lookup && layout) {
+ dht_selfheal_dir_setattr (frame, &local->loc, &local->stbuf,
+ 0xffffffff, layout);
+ }
out:
- DHT_STACK_DESTROY (frame);
+ if (local && local->mds_heal_fresh_lookup)
+ DHT_STACK_DESTROY (frame);
return 0;
}
-/* Code to save hashed subvol on inode ctx only while no
- mds xattr is availble and all subvols are up for fresh
+/* Common function call by revalidate/selfheal code path to populate
+ internal xattr if it is not present, mark_during_fresh_lookup value
+ determines either function is call by revalidate_cbk(discover_complete)
+ or call by selfheal code path while fresh lookup.
+ Here we do wind a call serially in case of fresh lookup and
+ for other lookup code path we do wind a call parallel.The reason
+ to wind a call serially is at the time of fresh lookup directory is not
+ discovered and at the time of revalidate_lookup directory is
+ already discovered. So, revalidate codepath can race with setxattr
+ codepath and can get into spurious heals because of an ongoing setxattr.
+ This can slow down revalidates, if healing happens in foreground.
+ However, if healing happens in background, there is no direct performance
+ penalty.
*/
int
-dht_mark_mds_subvolume (call_frame_t *frame, xlator_t *this)
+dht_common_mark_mdsxattr (call_frame_t *frame, int *errst, int mark_during_fresh_lookup)
{
- dht_local_t *local = NULL;
- xlator_t *hashed_subvol = NULL;
- int i = 0;
- gf_boolean_t vol_down = _gf_false;
- dht_conf_t *conf = 0;
- int ret = -1;
- char gfid_local[GF_UUID_BUF_SIZE] = {0};
- dict_t *xattrs = NULL;
- dht_local_t *copy_local = NULL;
- call_frame_t *xattr_frame = NULL;
- int32_t zero[1] = {0};
+ dht_local_t *local = NULL;
+ xlator_t *this = NULL;
+ xlator_t *hashed_subvol = NULL;
+ int ret = 0;
+ int i = 0;
+ dict_t *xattrs = NULL;
+ char gfid_local[GF_UUID_BUF_SIZE] = {0,};
+ int32_t zero[1] = {0};
+ dht_conf_t *conf = 0;
+ dht_layout_t *layout = NULL;
+ dht_local_t *copy_local = NULL;
+ call_frame_t *xattr_frame = NULL;
+ gf_boolean_t vol_down = _gf_false;
+ this = frame->this;
GF_VALIDATE_OR_GOTO ("dht", frame, out);
GF_VALIDATE_OR_GOTO ("dht", this, out);
@@ -825,66 +847,78 @@ dht_mark_mds_subvolume (call_frame_t *frame, xlator_t *this)
local = frame->local;
conf = this->private;
+ layout = local->selfheal.layout;
+ local->mds_heal_fresh_lookup = mark_during_fresh_lookup;
gf_uuid_unparse(local->gfid, gfid_local);
-
/* Code to update hashed subvol consider as a mds subvol
- and save on inode ctx if all subvols are up and no internal
- xattr has been set yet
+ and wind a setxattr call on hashed subvol to update
+ internal xattr
*/
if (!dict_get (local->xattr, conf->mds_xattr_key)) {
/* It means no internal MDS xattr has been set yet
*/
- /* Check the status of all subvol are up
+ /* Check the status of all subvol are up while call
+ this function call by lookup code path
*/
- for (i = 0; i < conf->subvolume_cnt; i++) {
- if (!conf->subvolume_status[i]) {
- vol_down = _gf_true;
- break;
+ if (mark_during_fresh_lookup) {
+ for (i = 0; i < conf->subvolume_cnt; i++) {
+ if (!conf->subvolume_status[i]) {
+ vol_down = _gf_true;
+ break;
+ }
+ }
+ if (vol_down) {
+ gf_msg_debug (this->name, 0,
+ "subvol %s is down. Unable to "
+ " save mds subvol on inode for "
+ " path %s gfid is %s " ,
+ conf->subvolumes[i]->name,
+ local->loc.path, gfid_local);
+ goto out;
}
}
- if (vol_down) {
- ret = 0;
- gf_msg_debug (this->name, 0,
- "subvol %s is down. Unable to "
- " save mds subvol on inode for "
- " path %s gfid is %s " ,
- conf->subvolumes[i]->name, local->loc.path,
- gfid_local);
- goto out;
- }
- /* Calculate hashed subvol based on inode and
- parent inode
+
+ /* Calculate hashed subvol based on inode and parent node
*/
- hashed_subvol = dht_inode_get_hashed_subvol (local->inode,
- this, &local->loc);
+ hashed_subvol = dht_inode_get_hashed_subvol (local->inode, this,
+ &local->loc);
if (!hashed_subvol) {
gf_msg (this->name, GF_LOG_DEBUG, 0,
DHT_MSG_HASHED_SUBVOL_GET_FAILED,
"Failed to get hashed subvol for path %s"
- " gfid is %s ",
+ "gfid is %s ",
local->loc.path, gfid_local);
- } else {
- xattrs = dict_new ();
- if (!xattrs) {
- gf_msg (this->name, GF_LOG_ERROR, ENOMEM,
- DHT_MSG_NO_MEMORY, "dict_new failed");
- ret = -1;
- goto out;
- }
- /* Add internal MDS xattr on disk for hashed subvol
- */
- ret = dht_dict_set_array (xattrs, conf->mds_xattr_key, zero, 1);
- if (ret) {
- gf_msg (this->name, GF_LOG_WARNING, ENOMEM,
- DHT_MSG_DICT_SET_FAILED,
- "Failed to set dictionary"
- " value:key = %s for "
- "path %s", conf->mds_xattr_key,
- local->loc.path);
- ret = -1;
- goto out;
- }
+ (*errst) = 1;
+ ret = -1;
+ goto out;
+ }
+ xattrs = dict_new ();
+ if (!xattrs) {
+ gf_msg (this->name, GF_LOG_ERROR, ENOMEM,
+ DHT_MSG_NO_MEMORY, "dict_new failed");
+ ret = -1;
+ goto out;
+ }
+ /* Add internal MDS xattr on disk for hashed subvol
+ */
+ ret = dht_dict_set_array (xattrs, conf->mds_xattr_key,
+ zero, 1);
+ if (ret) {
+ gf_msg (this->name, GF_LOG_WARNING, ENOMEM,
+ DHT_MSG_DICT_SET_FAILED,
+ "Failed to set dictionary"
+ " value:key = %s for "
+ "path %s", conf->mds_xattr_key,
+ local->loc.path);
+ ret = -1;
+ goto out;
+ }
+ /* Create a new frame to wind a call only while
+ this function call by revalidate_cbk code path
+ To wind a call parallel need to create a new frame
+ */
+ if (mark_during_fresh_lookup) {
xattr_frame = create_frame (this, this->ctx->pool);
if (!xattr_frame) {
ret = -1;
@@ -898,32 +932,42 @@ dht_mark_mds_subvolume (call_frame_t *frame, xlator_t *this)
goto out;
}
copy_local->stbuf = local->stbuf;
+ copy_local->mds_heal_fresh_lookup = mark_during_fresh_lookup;
if (!copy_local->inode)
copy_local->inode = inode_ref (local->inode);
gf_uuid_copy (copy_local->loc.gfid, local->gfid);
- STACK_WIND_COOKIE (xattr_frame, dht_mds_internal_setxattr_cbk,
+ FRAME_SU_DO (xattr_frame, dht_local_t);
+ STACK_WIND_COOKIE (xattr_frame, dht_common_mark_mdsxattr_cbk,
hashed_subvol, hashed_subvol,
hashed_subvol->fops->setxattr,
&local->loc, xattrs, 0, NULL);
- ret = 0;
+ } else {
+ STACK_WIND_COOKIE (frame,
+ dht_common_mark_mdsxattr_cbk,
+ (void *)hashed_subvol,
+ hashed_subvol,
+ hashed_subvol->fops->setxattr,
+ &local->loc, xattrs, 0,
+ NULL);
}
} else {
- ret = 0;
gf_msg_debug (this->name, 0,
"internal xattr %s is present on subvol"
"on path %s gfid is %s " , conf->mds_xattr_key,
local->loc.path, gfid_local);
+ if (!mark_during_fresh_lookup)
+ dht_selfheal_dir_setattr (frame, &local->loc,
+ &local->stbuf, 0xffffffff,
+ layout);
}
-
out:
if (xattrs)
dict_unref (xattrs);
- return ret;
+ return ret;
}
-
int
dht_discover_cbk (call_frame_t *frame, void *cookie, xlator_t *this,
int op_ret, int op_errno,
@@ -1646,11 +1690,11 @@ dht_revalidate_cbk (call_frame_t *frame, void *cookie, xlator_t *this,
} else {
check_mds = dht_dict_get_array (xattr, conf->mds_xattr_key,
mds_xattr_val, 1, &errst);
- if (local->mds_subvol == prev) {
- local->mds_stbuf.ia_gid = stbuf->ia_gid;
- local->mds_stbuf.ia_uid = stbuf->ia_uid;
- local->mds_stbuf.ia_prot = stbuf->ia_prot;
- }
+ local->mds_subvol = prev;
+ local->mds_stbuf.ia_gid = stbuf->ia_gid;
+ local->mds_stbuf.ia_uid = stbuf->ia_uid;
+ local->mds_stbuf.ia_prot = stbuf->ia_prot;
+
/* save mds subvol on inode ctx */
ret = dht_inode_ctx_mdsvol_set (local->inode, this,
prev);
@@ -1672,7 +1716,6 @@ dht_revalidate_cbk (call_frame_t *frame, void *cookie, xlator_t *this,
local->loc.path,
prev->name, gfid);
local->need_xattr_heal = 1;
- local->mds_subvol = prev;
}
}
ret = dht_layout_dir_mismatch (this, layout,
@@ -1749,31 +1792,35 @@ out:
if (conf->subvolume_cnt == 1)
local->need_xattr_heal = 0;
- /* Code to update all extended attributed from hashed subvol
- to local->xattr
- */
- if (local->need_xattr_heal && (local->mds_xattr)) {
- dht_dir_set_heal_xattr (this, local, local->xattr,
- local->mds_xattr, NULL, NULL);
- dict_unref (local->mds_xattr);
- local->mds_xattr = NULL;
- }
- /* Call function to save hashed subvol on inode ctx if
- internal mds xattr is not present and all subvols are up
- */
- if (inode && !__is_root_gfid (inode->gfid) &&
- (!local->op_ret) && (IA_ISDIR (local->stbuf.ia_type)))
- (void) dht_mark_mds_subvolume (frame, this);
-
- if (local->need_xattr_heal) {
- local->need_xattr_heal = 0;
- ret = dht_dir_xattr_heal (this, local);
- if (ret)
- gf_msg (this->name, GF_LOG_ERROR,
- ret, DHT_MSG_DIR_XATTR_HEAL_FAILED,
- "xattr heal failed for directory %s "
- " gfid %s ", local->loc.path,
- gfid);
+ if (IA_ISDIR (local->stbuf.ia_type)) {
+ /* Code to update all extended attributed from hashed
+ subvol to local->xattr and call heal code to heal
+ custom xattr from hashed subvol to non-hashed subvol
+ */
+ if (local->need_xattr_heal && (local->mds_xattr)) {
+ dht_dir_set_heal_xattr (this, local,
+ local->xattr,
+ local->mds_xattr, NULL,
+ NULL);
+ dict_unref (local->mds_xattr);
+ local->mds_xattr = NULL;
+ local->need_xattr_heal = 0;
+ ret = dht_dir_xattr_heal (this, local);
+ if (ret)
+ gf_msg (this->name, GF_LOG_ERROR,
+ ret, DHT_MSG_DIR_XATTR_HEAL_FAILED,
+ "xattr heal failed for directory %s "
+ " gfid %s ", local->loc.path,
+ gfid);
+ } else {
+ /* Call function to save hashed subvol on inode
+ ctx if internal mds xattr is not present and
+ all subvols are up
+ */
+ if (inode && !__is_root_gfid (inode->gfid) &&
+ (!local->op_ret))
+ (void) dht_common_mark_mdsxattr (frame, NULL, 1);
+ }
}
if (local->need_selfheal) {
local->need_selfheal = 0;
@@ -3629,6 +3676,28 @@ int32_t dht_dict_set_array (dict_t *dict, char *key, int32_t value[],
return ret;
}
+int
+dht_common_mds_xattrop_cbk (call_frame_t *frame, void *cookie, xlator_t *this,
+ int32_t op_ret, int32_t op_errno, dict_t *dict, dict_t *xdata)
+{
+ dht_local_t *local = NULL;
+ call_frame_t *prev = cookie;
+
+ local = frame->local;
+
+ if (op_ret)
+ gf_msg_debug (this->name, op_errno,
+ "subvolume %s returned -1",
+ prev->this->name);
+
+ if (local->fop == GF_FOP_SETXATTR) {
+ DHT_STACK_UNWIND (setxattr, frame, 0, op_errno, local->xdata);
+ } else {
+ DHT_STACK_UNWIND (fsetxattr, frame, 0, op_errno, local->xdata);
+ }
+ return 0;
+}
+
/* Code to wind a xattrop call to add 1 on current mds internal xattr
value
*/
@@ -3682,13 +3751,13 @@ dht_setxattr_non_mds_cbk (call_frame_t *frame, void *cookie, xlator_t *this,
goto out;
}
if (local->fop == GF_FOP_SETXATTR) {
- STACK_WIND (frame, dht_common_xattrop_cbk,
+ STACK_WIND (frame, dht_common_mds_xattrop_cbk,
local->mds_subvol,
local->mds_subvol->fops->xattrop,
&local->loc, GF_XATTROP_ADD_ARRAY,
xattrop, NULL);
} else {
- STACK_WIND (frame, dht_common_xattrop_cbk,
+ STACK_WIND (frame, dht_common_mds_xattrop_cbk,
local->mds_subvol,
local->mds_subvol->fops->fxattrop,
local->fd, GF_XATTROP_ADD_ARRAY,
@@ -8822,15 +8891,11 @@ dht_mkdir_hashed_cbk (call_frame_t *frame, void *cookie,
if (gf_uuid_is_null (local->loc.gfid))
gf_uuid_copy (local->loc.gfid, stbuf->ia_gfid);
- if (local->call_cnt == 0) {
- /*Unlock namespace lock once mkdir is done on all subvols*/
- dht_unlock_namespace (frame, &local->lock[0]);
- FRAME_SU_DO (frame, dht_local_t);
- dht_selfheal_directory (frame, dht_mkdir_selfheal_cbk,
- &local->loc, layout);
- }
/* Set hashed subvol as a mds subvol on inode ctx */
+ /*if (!local->inode)
+ local->inode = inode_ref (inode);
+ */
ret = dht_inode_ctx_mdsvol_set (local->inode, this, hashed_subvol);
if (ret) {
gf_msg (this->name, GF_LOG_ERROR, 0, DHT_MSG_SET_INODE_CTX_FAILED,
@@ -8838,6 +8903,15 @@ dht_mkdir_hashed_cbk (call_frame_t *frame, void *cookie,
local->loc.path, hashed_subvol->name);
}
+ if (local->call_cnt == 0) {
+ /*Unlock namespace lock once mkdir is done on all subvols*/
+ dht_unlock_namespace (frame, &local->lock[0]);
+ FRAME_SU_DO (frame, dht_local_t);
+ dht_selfheal_directory (frame, dht_mkdir_selfheal_cbk,
+ &local->loc, layout);
+ return 0;
+ }
+
for (i = 0; i < conf->subvolume_cnt; i++) {
if (conf->subvolumes[i] == hashed_subvol)
continue;
diff --git a/xlators/cluster/dht/src/dht-common.h b/xlators/cluster/dht/src/dht-common.h
index 2aa7251..a785876 100644
--- a/xlators/cluster/dht/src/dht-common.h
+++ b/xlators/cluster/dht/src/dht-common.h
@@ -381,6 +381,7 @@ struct dht_local {
/* This is use only for directory operation */
int32_t valid;
gf_boolean_t heal_layout;
+ int32_t mds_heal_fresh_lookup;
};
typedef struct dht_local dht_local_t;
@@ -1463,12 +1464,13 @@ xlator_t *
dht_inode_get_hashed_subvol (inode_t *inode, xlator_t *this, loc_t *loc);
int
-dht_mark_mds_subvolume (call_frame_t *frame, xlator_t *this);
+dht_common_mark_mdsxattr (call_frame_t *frame, int *errst, int flag);
int
-dht_mds_internal_setxattr_cbk (call_frame_t *frame, void *cookie,
- xlator_t *this, int op_ret, int op_errno,
- dict_t *xdata);
+dht_common_mark_mdsxattr_cbk (call_frame_t *frame, void *cookie,
+ xlator_t *this, int op_ret, int op_errno,
+ dict_t *xdata);
+
int
dht_inode_ctx_mdsvol_set (inode_t *inode, xlator_t *this,
xlator_t *mds_subvol);
@@ -1476,4 +1478,8 @@ int
dht_inode_ctx_mdsvol_get (inode_t *inode, xlator_t *this,
xlator_t **mdsvol);
+int
+dht_selfheal_dir_setattr (call_frame_t *frame, loc_t *loc, struct iatt *stbuf,
+ int32_t valid, dht_layout_t *layout);
+
#endif/* _DHT_H */
diff --git a/xlators/cluster/dht/src/dht-selfheal.c b/xlators/cluster/dht/src/dht-selfheal.c
index 1707e08..c2c4034 100644
--- a/xlators/cluster/dht/src/dht-selfheal.c
+++ b/xlators/cluster/dht/src/dht-selfheal.c
@@ -1159,141 +1159,6 @@ dht_selfheal_dir_setattr_cbk (call_frame_t *frame, void *cookie, xlator_t *this,
return 0;
}
-int
-dht_selfheal_dir_check_set_mdsxattr_cbk (call_frame_t *frame, void *cookie, xlator_t *this,
- int op_ret, int op_errno, dict_t *xdata)
-{
- dht_local_t *local = NULL;
- xlator_t *prev = cookie;
- int ret = -1;
- dht_conf_t *conf = 0;
-
- GF_VALIDATE_OR_GOTO (this->name, frame, out);
- GF_VALIDATE_OR_GOTO (this->name, frame->local, out);
-
- local = frame->local;
- conf = this->private;
-
- if (op_ret) {
- gf_msg_debug (this->name, op_ret,
- "internal mds setxattr %s is failed on mds subvol "
- "at the time of heal on path %s " ,
- conf->mds_xattr_key, local->loc.path);
- } else {
- /* Save mds subvol on inode ctx */
- ret = dht_inode_ctx_mdsvol_set (local->inode, this, prev);
- if (ret) {
- gf_msg (this->name, GF_LOG_ERROR, 0,
- DHT_MSG_SET_INODE_CTX_FAILED,
- "Failed to set hashed subvol "
- " %s for %s ", prev->name,
- local->loc.path);
- }
- }
-
-out:
- DHT_STACK_DESTROY (frame);
- return 0;
-}
-
-/* Code to set internal mds xattr if it is not present
-*/
-int
-dht_selfheal_dir_check_set_mdsxattr (call_frame_t *frame, loc_t *loc)
-{
- dht_local_t *local = NULL;
- xlator_t *this = NULL;
- xlator_t *hashed_subvol = NULL;
- int ret = -1;
- dict_t *xattrs = NULL;
- char gfid_local[GF_UUID_BUF_SIZE] = {0,};
- int32_t zero[1] = {0};
- call_frame_t *xattr_frame = NULL;
- dht_local_t *copy_local = NULL;
- dht_conf_t *conf = 0;
-
- local = frame->local;
- this = frame->this;
- conf = this->private;
- gf_uuid_unparse(local->gfid, gfid_local);
-
- if (!dict_get (local->xattr, conf->mds_xattr_key)) {
- /* It means no internal MDS xattr has been set yet
- */
- /* Calculate hashed subvol based on inode and
- parent inode
- */
- hashed_subvol = dht_inode_get_hashed_subvol (local->inode, this,
- loc);
- if (!hashed_subvol) {
- gf_msg (this->name, GF_LOG_DEBUG, 0,
- DHT_MSG_HASHED_SUBVOL_GET_FAILED,
- "Failed to get hashed subvol for path %s"
- "gfid is %s ",
- local->loc.path, gfid_local);
- ret = -1;
- goto out;
- } else {
- /* Set internal mds xattr on disk */
- xattrs = dict_new ();
- if (!xattrs) {
- gf_msg (this->name, GF_LOG_ERROR, ENOMEM,
- DHT_MSG_NO_MEMORY, "dict_new failed");
- ret = -1;
- goto out;
- }
- /* Add internal MDS xattr on disk for hashed subvol
- */
- ret = dht_dict_set_array (xattrs, conf->mds_xattr_key, zero, 1);
- if (ret) {
- gf_msg (this->name, GF_LOG_WARNING, ENOMEM,
- DHT_MSG_DICT_SET_FAILED,
- "Failed to set dictionary"
- " value:key = %s for "
- "path %s", conf->mds_xattr_key,
- local->loc.path);
- ret = -1;
- goto out;
- }
-
- xattr_frame = create_frame (this, this->ctx->pool);
- if (!xattr_frame) {
- ret = -1;
- goto out;
- }
- copy_local = dht_local_init (xattr_frame, &(local->loc),
- NULL, 0);
- if (!copy_local) {
- ret = -1;
- DHT_STACK_DESTROY (xattr_frame);
- goto out;
- }
-
- copy_local->stbuf = local->stbuf;
- copy_local->inode = inode_ref (local->inode);
- gf_uuid_copy (copy_local->loc.gfid, local->gfid);
-
- STACK_WIND_COOKIE (xattr_frame,
- dht_selfheal_dir_check_set_mdsxattr_cbk,
- (void *)hashed_subvol, hashed_subvol,
- hashed_subvol->fops->setxattr,
- loc, xattrs, 0, NULL);
- ret = 0;
- }
- } else {
- ret = 0;
- gf_msg_debug (this->name, 0,
- "internal xattr %s is present on subvol"
- "on path %s gfid is %s " , conf->mds_xattr_key,
- local->loc.path, gfid_local);
- }
-
-out:
- if (xattrs)
- dict_unref (xattrs);
- return ret;
-}
-
int
dht_selfheal_dir_setattr (call_frame_t *frame, loc_t *loc, struct iatt *stbuf,
@@ -1313,32 +1178,6 @@ dht_selfheal_dir_setattr (call_frame_t *frame, loc_t *loc, struct iatt *stbuf,
missing_attr++;
}
- if (!__is_root_gfid (local->stbuf.ia_gfid)) {
- if (local->need_xattr_heal) {
- local->need_xattr_heal = 0;
- ret = dht_dir_xattr_heal (this, local);
- if (ret)
- gf_msg (this->name, GF_LOG_ERROR,
- ret,
- DHT_MSG_DIR_XATTR_HEAL_FAILED,
- "xattr heal failed for "
- "directory %s gfid %s ",
- local->loc.path,
- local->gfid);
- } else {
- ret = dht_selfheal_dir_check_set_mdsxattr (frame, loc);
- if (ret)
- gf_msg (this->name, GF_LOG_INFO, ret,
- DHT_MSG_DIR_XATTR_HEAL_FAILED,
- "set mds internal xattr failed for "
- "directory %s gfid %s ", local->loc.path,
- local->gfid);
- }
- }
-
- if (!gf_uuid_is_null (local->gfid))
- gf_uuid_copy (loc->gfid, local->gfid);
-
if (missing_attr == 0) {
if (!local->heal_layout) {
gf_msg_trace (this->name, 0,
@@ -1789,6 +1628,33 @@ dht_selfheal_dir_mkdir (call_frame_t *frame, loc_t *loc,
}
if (missing_dirs == 0) {
+ if (!__is_root_gfid (local->stbuf.ia_gfid)) {
+ if (local->need_xattr_heal) {
+ local->need_xattr_heal = 0;
+ ret = dht_dir_xattr_heal (this, local);
+ if (ret)
+ gf_msg (this->name, GF_LOG_ERROR,
+ ret,
+ DHT_MSG_DIR_XATTR_HEAL_FAILED,
+ "xattr heal failed for "
+ "directory %s gfid %s ",
+ local->loc.path,
+ local->gfid);
+ } else {
+ if (!gf_uuid_is_null (local->gfid))
+ gf_uuid_copy (loc->gfid, local->gfid);
+
+ ret = dht_common_mark_mdsxattr (frame, NULL, 0);
+ if (!ret)
+ return 0;
+
+ gf_msg (this->name, GF_LOG_INFO, 0,
+ DHT_MSG_DIR_XATTR_HEAL_FAILED,
+ "Failed to set mds xattr "
+ "for directory %s gfid %s ",
+ local->loc.path, local->gfid);
+ }
+ }
dht_selfheal_dir_setattr (frame, loc, &local->stbuf,
0xffffffff, layout);
return 0;
--
1.8.3.1
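The core control-flow change in this patch, one shared setxattr callback serving both the serial (selfheal) and parallel (fresh lookup) paths, can be sketched roughly as follows. All names here are illustrative stand-ins for the dht frame machinery:

```c
#include <assert.h>
#include <stdbool.h>

/* Outcome of the shared setxattr callback in this sketch. */
enum cbk_action {
    CONTINUE_SELFHEAL,  /* serial path: proceed to dir setattr/unwind */
    DESTROY_FRAME       /* parallel path: tear down the private frame */
};

/* mark_during_fresh_lookup mirrors the flag added by the patch:
 * - fresh lookup (revalidate/discover): a new frame was created and
 *   wound in parallel, so the callback only destroys that frame;
 * - selfheal: the call was wound serially on the caller's frame, so
 *   the callback continues the heal sequence, guaranteeing the mds
 *   subvol is saved on the inode ctx before the lookup unwinds. */
static enum cbk_action mdsxattr_cbk_action(bool mark_during_fresh_lookup)
{
    if (mark_during_fresh_lookup)
        return DESTROY_FRAME;
    return CONTINUE_SELFHEAL;
}
```

This serialization closes the window in which a setxattr could arrive after the lookup unwound but before the mds subvol was recorded on the inode ctx, which previously surfaced as ENOENT.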

From d6da0e0874de04c184c54147d0ee0e0370e55ff2 Mon Sep 17 00:00:00 2001
From: N Balachandran <nbalacha@redhat.com>
Date: Mon, 2 Apr 2018 09:11:06 +0530
Subject: [PATCH 215/236] cluster/dht: ENOSPC will not fail rebalance
ENOSPC returned by a file migration is no longer
considered a rebalance failure.
upstream: https://review.gluster.org/#/c/19806/
> Change-Id: I21cf3a8acdc827bc478e138d6cb5db649d53a28c
> BUG: 1555161
> Signed-off-by: N Balachandran <nbalacha@redhat.com>
Change-Id: I22f90194507331e981fbcd10b2cafced6fc05cc2
BUG: 1546941
Signed-off-by: N Balachandran <nbalacha@redhat.com>
Reviewed-on: https://code.engineering.redhat.com/gerrit/134252
Tested-by: RHGS Build Bot <nigelb@redhat.com>
Reviewed-by: Atin Mukherjee <amukherj@redhat.com>
---
xlators/cluster/dht/src/dht-rebalance.c | 12 +++---------
1 file changed, 3 insertions(+), 9 deletions(-)
diff --git a/xlators/cluster/dht/src/dht-rebalance.c b/xlators/cluster/dht/src/dht-rebalance.c
index 9e31ff8..bba44b9 100644
--- a/xlators/cluster/dht/src/dht-rebalance.c
+++ b/xlators/cluster/dht/src/dht-rebalance.c
@@ -2392,10 +2392,11 @@ dht_build_root_loc (inode_t *inode, loc_t *loc)
int32_t
gf_defrag_handle_migrate_error (int32_t op_errno, gf_defrag_info_t *defrag)
{
- /* if errno is not ENOSPC or ENOTCONN, we can still continue
+ /* if errno is not ENOTCONN, we can still continue
with rebalance process */
- if ((op_errno != ENOSPC) && (op_errno != ENOTCONN))
+ if (op_errno != ENOTCONN) {
return 1;
+ }
if (op_errno == ENOTCONN) {
/* Most probably mount point went missing (mostly due
@@ -2405,13 +2406,6 @@ gf_defrag_handle_migrate_error (int32_t op_errno, gf_defrag_info_t *defrag)
return -1;
}
- if (op_errno == ENOSPC) {
- /* rebalance process itself failed, may be
- remote brick went down, or write failed due to
- disk full etc etc.. */
- return 0;
- }
-
return 0;
}
--
1.8.3.1

From a263e2a308221de328eb5e0dc4cb9c0aed98ec37 Mon Sep 17 00:00:00 2001
From: N Balachandran <nbalacha@redhat.com>
Date: Thu, 5 Apr 2018 21:41:44 +0530
Subject: [PATCH 216/236] cluster/dht: Wind open to all subvols
dht_opendir should wind the open to all subvols
whether or not local->subvols is set. This is
because dht_readdirp winds the calls to all subvols.
upstream master: https://review.gluster.org/19827
> Change-Id: I67a96b06dad14a08967c3721301e88555aa01017
> updates: bz#1564198
> Signed-off-by: N Balachandran <nbalacha@redhat.com>
Change-Id: Ibdb099c333bc23d0cb769a7636c949ab886b87e2
BUG: 1553677
Signed-off-by: N Balachandran <nbalacha@redhat.com>
Reviewed-on: https://code.engineering.redhat.com/gerrit/135514
Tested-by: RHGS Build Bot <nigelb@redhat.com>
Reviewed-by: Atin Mukherjee <amukherj@redhat.com>
---
xlators/cluster/dht/src/dht-common.c | 15 +++++----------
1 file changed, 5 insertions(+), 10 deletions(-)
diff --git a/xlators/cluster/dht/src/dht-common.c b/xlators/cluster/dht/src/dht-common.c
index 2fd145d..3b8ba6d 100644
--- a/xlators/cluster/dht/src/dht-common.c
+++ b/xlators/cluster/dht/src/dht-common.c
@@ -6315,16 +6315,11 @@ dht_opendir (call_frame_t *frame, xlator_t *this, loc_t *loc, fd_t *fd,
"Failed to set dictionary value : key = %s",
conf->link_xattr_name);
- if ((conf->defrag && conf->defrag->cmd == GF_DEFRAG_CMD_START_TIER) ||
- (conf->defrag && conf->defrag->cmd ==
- GF_DEFRAG_CMD_START_DETACH_TIER) ||
- (!(conf->local_subvols_cnt) || !conf->defrag)) {
- call_count = local->call_cnt = conf->subvolume_cnt;
- subvolumes = conf->subvolumes;
- } else {
- call_count = local->call_cnt = conf->local_subvols_cnt;
- subvolumes = conf->local_subvols;
- }
+ /* dht_readdirp will wind to all subvols so open has to be sent to
+ * all subvols whether or not conf->local_subvols is set */
+
+ call_count = local->call_cnt = conf->subvolume_cnt;
+ subvolumes = conf->subvolumes;
/* In case of parallel-readdir, the readdir-ahead will be loaded
* below dht, in this case, if we want to enable or disable SKIP_DIRs
--
1.8.3.1

From 87a24e342c422ba6b04563d63d431430c0156b52 Mon Sep 17 00:00:00 2001
From: N Balachandran <nbalacha@redhat.com>
Date: Fri, 6 Apr 2018 16:06:51 +0530
Subject: [PATCH 217/236] cluster/dht: Handle file migrations when brick down
The decision as to which node would migrate a file
was based on the gfid of the file. Files were divided
among the nodes for the replica/disperse set. However,
if a brick was down when rebalance started, the nodeuuids
would be saved as NULL and a set of files would not be migrated.
Now, if the nodeuuid is NULL, the first non-null entry in
the set is the node responsible for migrating the file.
upstream master: https://review.gluster.org/#/c/19831/
> Change-Id: I72554c107792c7d534e0f25640654b6f8417d373
> fixes: bz#1564198
> Signed-off-by: N Balachandran <nbalacha@redhat.com>
Change-Id: Ia0e15339aefee2712e85d7e282c9b7934665376b
BUG: 1553677
Signed-off-by: N Balachandran <nbalacha@redhat.com>
Reviewed-on: https://code.engineering.redhat.com/gerrit/135515
Tested-by: RHGS Build Bot <nigelb@redhat.com>
Reviewed-by: Atin Mukherjee <amukherj@redhat.com>
---
xlators/cluster/dht/src/dht-rebalance.c | 56 ++++++++++++++++++++++++++++++---
1 file changed, 51 insertions(+), 5 deletions(-)
diff --git a/xlators/cluster/dht/src/dht-rebalance.c b/xlators/cluster/dht/src/dht-rebalance.c
index bba44b9..a4be348 100644
--- a/xlators/cluster/dht/src/dht-rebalance.c
+++ b/xlators/cluster/dht/src/dht-rebalance.c
@@ -2469,6 +2469,27 @@ gf_defrag_ctx_subvols_init (dht_dfoffset_ctx_t *offset_var, xlator_t *this) {
}
+static int
+dht_get_first_non_null_index (subvol_nodeuuids_info_t *entry)
+{
+ int i = 0;
+ int index = 0;
+
+ for (i = 0; i < entry->count; i++) {
+ if (!gf_uuid_is_null (entry->elements[i].uuid)) {
+ index = i;
+ goto out;
+ }
+ }
+
+ if (i == entry->count) {
+ index = -1;
+ }
+out:
+ return index;
+}
+
+
/* Return value
* 0 : this node does not migrate the file
* 1 : this node migrates the file
@@ -2485,28 +2506,53 @@ gf_defrag_should_i_migrate (xlator_t *this, int local_subvol_index, uuid_t gfid)
int i = local_subvol_index;
char *str = NULL;
uint32_t hashval = 0;
- int32_t index = 0;
+ int32_t index = 0;
dht_conf_t *conf = NULL;
char buf[UUID_CANONICAL_FORM_LEN + 1] = {0, };
+ subvol_nodeuuids_info_t *entry = NULL;
+
conf = this->private;
- /* Pure distribute */
+ /* Pure distribute. A subvol in this case
+ will be handled by only one node */
- if (conf->local_nodeuuids[i].count == 1) {
+ entry = &(conf->local_nodeuuids[i]);
+ if (entry->count == 1) {
return 1;
}
str = uuid_utoa_r (gfid, buf);
ret = dht_hash_compute (this, 0, str, &hashval);
if (ret == 0) {
- index = (hashval % conf->local_nodeuuids[i].count);
- if (conf->local_nodeuuids[i].elements[index].info
+ index = (hashval % entry->count);
+ if (entry->elements[index].info
== REBAL_NODEUUID_MINE) {
/* Index matches this node's nodeuuid.*/
ret = 1;
+ goto out;
+ }
+
+ /* Brick down - some other node has to migrate these files*/
+ if (gf_uuid_is_null (entry->elements[index].uuid)) {
+ /* Fall back to the first non-null index */
+ index = dht_get_first_non_null_index (entry);
+
+ if (index == -1) {
+ /* None of the bricks in the subvol are up.
+ * CHILD_DOWN will kill the process soon */
+
+ return 0;
+ }
+
+ if (entry->elements[index].info == REBAL_NODEUUID_MINE) {
+ /* Index matches this node's nodeuuid.*/
+ ret = 1;
+ goto out;
+ }
}
}
+out:
return ret;
}
--
1.8.3.1
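The brick-down fallback in patch 217 can be modeled in isolation. The sketch below is a simplified stand-in, not gluster's real code: `node_entry_t`, `is_mine`, and the function names are mocked equivalents of `subvol_nodeuuids_info_t`, `REBAL_NODEUUID_MINE`, and `dht_get_first_non_null_index`/`gf_defrag_should_i_migrate`.

```c
#include <assert.h>
#include <string.h>

#define UUID_LEN 16

/* Mocked entry: an all-zero uuid means that brick was down when
 * rebalance started and no nodeuuid was recorded for it. */
typedef struct {
    unsigned char uuid[UUID_LEN];
    int is_mine;              /* stands in for REBAL_NODEUUID_MINE */
} node_entry_t;

static int uuid_is_null(const unsigned char *u)
{
    static const unsigned char zero[UUID_LEN] = {0};
    return memcmp(u, zero, UUID_LEN) == 0;
}

/* First index with a non-null uuid, or -1 if every brick in the
 * replica set is down (mirrors dht_get_first_non_null_index). */
static int first_non_null_index(const node_entry_t *e, int count)
{
    for (int i = 0; i < count; i++)
        if (!uuid_is_null(e[i].uuid))
            return i;
    return -1;
}

/* Hash the gfid onto an entry; if that entry's uuid is null (brick
 * down), fall back to the first live entry, as the patch does.
 * Returns 1 if this node migrates the file, 0 otherwise. */
static int should_i_migrate(const node_entry_t *e, int count,
                            unsigned hashval)
{
    int index = (int)(hashval % (unsigned)count);

    if (uuid_is_null(e[index].uuid)) {
        index = first_non_null_index(e, count);
        if (index == -1)
            return 0;         /* no brick up; nothing to migrate */
    }
    return e[index].is_mine ? 1 : 0;
}
```

The key property is that every node computes the same fallback index, so a file whose hash lands on a down brick is still migrated by exactly one node.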

From 391a6263318dfa674b3cfecbd3725f4b54633bb7 Mon Sep 17 00:00:00 2001
From: moagrawa <moagrawa@redhat.com>
Date: Wed, 11 Apr 2018 16:39:47 +0530
Subject: [PATCH 218/236] posix: reserve option behavior is not correct while
using fallocate
Problem: the storage.reserve option does not work correctly when
disk space is allocated through fallocate
Solution: posix_disk_space_check_thread_proc calls posix_disk_space_check
every 5 seconds to monitor disk space and set the flag in posix priv.
Within that 5-second window a user can create a big file with fallocate
that reaches the posix reserve limit, and no error is shown on the
terminal even though the limit has been reached.
To resolve this, call posix_disk_space_check for every fallocate fop
instead of relying on the thread that runs every 5 seconds
> BUG: 1560411
> Signed-off-by: Mohit Agrawal <moagrawa@redhat.com>
> (cherry picked from commit 0002c36666c9b043a330ee08533a87fe7fd16491)
> (Upstream patch link https://review.gluster.org/#/c/19771/)
BUG: 1550991
Change-Id: I6a959dfe38d63ea37f25a431a49f9299fa3ae403
Signed-off-by: moagrawa <moagrawa@redhat.com>
Reviewed-on: https://code.engineering.redhat.com/gerrit/135208
Tested-by: RHGS Build Bot <nigelb@redhat.com>
Reviewed-by: Sunil Kumar Heggodu Gopala Acharya <sheggodu@redhat.com>
---
xlators/storage/posix/src/posix-handle.h | 3 +++
xlators/storage/posix/src/posix.c | 9 +++++++++
2 files changed, 12 insertions(+)
diff --git a/xlators/storage/posix/src/posix-handle.h b/xlators/storage/posix/src/posix-handle.h
index 9af6a7a..a40feb5 100644
--- a/xlators/storage/posix/src/posix-handle.h
+++ b/xlators/storage/posix/src/posix-handle.h
@@ -285,4 +285,7 @@ int posix_create_link_if_gfid_exists (xlator_t *this, uuid_t gfid,
int
posix_handle_trash_init (xlator_t *this);
+
+void
+posix_disk_space_check (xlator_t *this);
#endif /* !_POSIX_HANDLE_H */
diff --git a/xlators/storage/posix/src/posix.c b/xlators/storage/posix/src/posix.c
index 56a2ca9..d1ef8a2 100644
--- a/xlators/storage/posix/src/posix.c
+++ b/xlators/storage/posix/src/posix.c
@@ -792,6 +792,15 @@ posix_do_fallocate (call_frame_t *frame, xlator_t *this, fd_t *fd,
VALIDATE_OR_GOTO (fd, out);
priv = this->private;
+
+ /* fallocate case is special so call posix_disk_space_check separately
+ for every fallocate fop instead of calling posix_disk_space with
+ thread after every 5 sec sleep to working correctly storage.reserve
+ option behaviour
+ */
+ if (priv->disk_reserve)
+ posix_disk_space_check (this);
+
DISK_SPACE_CHECK_AND_GOTO (frame, priv, xdata, ret, ret, out);
ret = posix_fd_ctx_get (fd, this, &pfd, &op_errno);
--
1.8.3.1
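The reserve check that patch 218 moves onto the fallocate path can be sketched with a plain statvfs-based helper. This is illustrative only: the path and `reserve_pct` argument are assumptions, and the real posix xlator instead caches the result in its private struct, which `DISK_SPACE_CHECK_AND_GOTO` consults to fail the fop.

```c
#include <assert.h>
#include <sys/statvfs.h>

/* Return 1 if the filesystem backing 'path' still has at least
 * 'reserve_pct' percent of its blocks free for unprivileged users,
 * 0 otherwise.  'reserve_pct' plays the role of storage.reserve. */
static int disk_space_ok(const char *path, unsigned reserve_pct)
{
    struct statvfs vfs;

    if (statvfs(path, &vfs) != 0)
        return 0;                 /* treat stat failure as "full" */
    if (vfs.f_blocks == 0)
        return 1;                 /* avoid division by zero */

    /* f_bavail: blocks free to non-root; f_blocks: total blocks */
    double free_pct = 100.0 * (double)vfs.f_bavail
                            / (double)vfs.f_blocks;
    return free_pct >= (double)reserve_pct;
}
```

Running such a check synchronously on every fallocate closes the window in which the 5-second background thread has not yet noticed that a large preallocation crossed the reserve limit.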

From acfde85dc1e44f37432ee80619ed28cfe4df280b Mon Sep 17 00:00:00 2001
From: Sanoj Unnikrishnan <sunnikri@redhat.com>
Date: Wed, 4 Apr 2018 14:36:56 +0530
Subject: [PATCH 219/236] Quota: heal directory on newly added bricks when
quota limit is reached
Problem: if a lookup is done on a newly added brick for a path on which limit
has been reached, the lookup fails to heal the directory tree due to quota.
Solution: Tag the lookup as an internal fop and ignore it in quota.
Since marking an fop as internal does not usually carry enough contextual
information, new flags are introduced to pass the context.
Adding dict_check_flag and dict_set_flag to aid flag operations.
A flag is a single bit in a bit array (currently limited to 256 bits).
Upstream Reference:
> Change-Id: Ifb6a68bcaffedd425dd0f01f7db24edd5394c095
> fixes: bz#1505355
> BUG: 1505355
> Signed-off-by: Sanoj Unnikrishnan <sunnikri@redhat.com>
> Patch : https://review.gluster.org/#/c/18554/
BUG: 1475779
Change-Id: Ifb6a68bcaffedd425dd0f01f7db24edd5394c095
Signed-off-by: Sanoj Unnikrishnan <sunnikri@redhat.com>
Reviewed-on: https://code.engineering.redhat.com/gerrit/134506
Tested-by: RHGS Build Bot <nigelb@redhat.com>
Reviewed-by: Hari Gowtham Gopal <hgowtham@redhat.com>
Reviewed-by: Sunil Kumar Heggodu Gopala Acharya <sheggodu@redhat.com>
---
libglusterfs/src/dict.c | 190 ++++++++++++++++++++++++++++
libglusterfs/src/dict.h | 8 ++
libglusterfs/src/glusterfs.h | 27 ++++
xlators/cluster/dht/src/dht-selfheal.c | 17 ++-
xlators/features/quota/src/quota-messages.h | 19 ++-
xlators/features/quota/src/quota.c | 31 ++++-
xlators/storage/posix/src/posix-helpers.c | 4 +
7 files changed, 292 insertions(+), 4 deletions(-)
diff --git a/libglusterfs/src/dict.c b/libglusterfs/src/dict.c
index 243c929..ebcf694 100644
--- a/libglusterfs/src/dict.c
+++ b/libglusterfs/src/dict.c
@@ -2020,6 +2020,196 @@ err:
return ret;
}
+/*
+ * dict_check_flag can be used to check a one bit flag in an array of flags
+ * The flag argument indicates the bit position (within the array of bits).
+ * Currently limited to max of 256 flags for a key.
+ * return value,
+ * 1 : flag is set
+ * 0 : flag is not set
+ * <0: Error
+ */
+int
+dict_check_flag (dict_t *this, char *key, int flag)
+{
+ data_t *data = NULL;
+ int ret = -ENOENT;
+
+ ret = dict_get_with_ref (this, key, &data);
+ if (ret < 0) {
+ return ret;
+ }
+
+ if (BIT_VALUE((unsigned char *)(data->data), flag))
+ ret = 1;
+ else
+ ret = 0;
+
+ data_unref(data);
+ return ret;
+}
+
+/*
+ * _dict_modify_flag can be used to set/clear a bit flag in an array of flags
+ * flag: indicates the bit position. limited to max of DICT_MAX_FLAGS.
+ * op: Indicates operation DICT_FLAG_SET / DICT_FLAG_CLEAR
+ */
+static int
+_dict_modify_flag (dict_t *this, char *key, int flag, int op)
+{
+ data_t *data = NULL;
+ int ret = 0;
+ uint32_t hash = 0;
+ data_pair_t *pair = NULL;
+ char *ptr = NULL;
+ int hashval = 0;
+
+ if (!this || !key) {
+ gf_msg_callingfn ("dict", GF_LOG_WARNING, EINVAL,
+ LG_MSG_INVALID_ARG,
+ "dict OR key (%s) is NULL", key);
+ ret = -EINVAL;
+ goto err;
+ }
+
+ /*
+ * Using a size of 32 bytes to support max of 256
+ * flags in a single key. This should be suffcient.
+ */
+ GF_ASSERT(flag >= 0 && flag < DICT_MAX_FLAGS);
+
+ hash = SuperFastHash (key, strlen (key));
+ LOCK (&this->lock);
+ {
+ pair = dict_lookup_common (this, key, hash);
+
+ if (pair) {
+ data = pair->value;
+ if (op == DICT_FLAG_SET)
+ BIT_SET((unsigned char *)(data->data), flag);
+ else
+ BIT_CLEAR((unsigned char *)(data->data), flag);
+ ret = 0;
+ } else {
+ ptr = GF_CALLOC(1, DICT_MAX_FLAGS / 8,
+ gf_common_mt_char);
+ if (!ptr) {
+ gf_msg("dict", GF_LOG_ERROR, ENOMEM,
+ LG_MSG_NO_MEMORY,
+ "unable to allocate flag bit array");
+ ret = -ENOMEM;
+ goto err;
+ }
+
+ data = data_from_dynptr(ptr, DICT_MAX_FLAGS / 8);
+
+ if (!data) {
+ gf_msg("dict", GF_LOG_ERROR, ENOMEM,
+ LG_MSG_NO_MEMORY,
+ "unable to allocate data");
+ GF_FREE(ptr);
+ ret = -ENOMEM;
+ goto err;
+ }
+
+ if (op == DICT_FLAG_SET)
+ BIT_SET((unsigned char *)(data->data), flag);
+ else
+ BIT_CLEAR((unsigned char *)(data->data), flag);
+
+ if (this->free_pair_in_use) {
+ pair = mem_get0 (THIS->ctx->dict_pair_pool);
+ if (!pair) {
+ gf_msg("dict", GF_LOG_ERROR, ENOMEM,
+ LG_MSG_NO_MEMORY,
+ "unable to allocate dict pair");
+ ret = -ENOMEM;
+ goto err;
+ }
+ } else {
+ pair = &this->free_pair;
+ this->free_pair_in_use = _gf_true;
+ }
+
+ pair->key = (char *)GF_CALLOC(1, strlen (key) + 1,
+ gf_common_mt_char);
+ if (!pair->key) {
+ gf_msg("dict", GF_LOG_ERROR, ENOMEM,
+ LG_MSG_NO_MEMORY,
+ "unable to allocate dict pair");
+ ret = -ENOMEM;
+ goto err;
+ }
+ strcpy (pair->key, key);
+ pair->key_hash = hash;
+ pair->value = data_ref (data);
+
+ hashval = hash % this->hash_size;
+ pair->hash_next = this->members[hashval];
+ this->members[hashval] = pair;
+
+ pair->next = this->members_list;
+ pair->prev = NULL;
+ if (this->members_list)
+ this->members_list->prev = pair;
+ this->members_list = pair;
+ this->count++;
+
+
+ if (this->max_count < this->count)
+ this->max_count = this->count;
+ }
+ }
+
+ UNLOCK (&this->lock);
+ return 0;
+
+err:
+ UNLOCK (&this->lock);
+ if (pair) {
+ if (pair->key)
+ free(pair->key);
+
+ if (pair == &this->free_pair) {
+ this->free_pair_in_use = _gf_false;
+ } else {
+ mem_put (pair);
+ }
+ }
+
+ if (data)
+ data_destroy(data);
+
+
+ gf_msg("dict", GF_LOG_ERROR, EINVAL,
+ LG_MSG_DICT_SET_FAILED,
+ "unable to set key (%s) in dict ", key);
+
+ return ret;
+}
+
+/*
+ * Todo:
+ * Add below primitives as needed:
+ * dict_check_flags(this, key, flag...): variadic function to check
+ * multiple flags at a time.
+ * dict_set_flags(this, key, flag...): set multiple flags
+ * dict_clear_flags(this, key, flag...): reset multiple flags
+ */
+
+int
+dict_set_flag (dict_t *this, char *key, int flag)
+{
+ return _dict_modify_flag (this, key, flag, DICT_FLAG_SET);
+}
+
+int
+dict_clear_flag (dict_t *this, char *key, int flag)
+{
+ return _dict_modify_flag (this, key, flag, DICT_FLAG_CLEAR);
+}
+
+
int
dict_get_double (dict_t *this, char *key, double *val)
{
diff --git a/libglusterfs/src/dict.h b/libglusterfs/src/dict.h
index b131363..be3b0ad 100644
--- a/libglusterfs/src/dict.h
+++ b/libglusterfs/src/dict.h
@@ -60,6 +60,10 @@ typedef struct _data_pair data_pair_t;
\
} while (0)
+#define DICT_MAX_FLAGS 256
+#define DICT_FLAG_SET 1
+#define DICT_FLAG_CLEAR 0
+
struct _data {
unsigned char is_static:1;
unsigned char is_const:1;
@@ -222,6 +226,10 @@ GF_MUST_CHECK int dict_set_uint32 (dict_t *this, char *key, uint32_t val);
GF_MUST_CHECK int dict_get_uint64 (dict_t *this, char *key, uint64_t *val);
GF_MUST_CHECK int dict_set_uint64 (dict_t *this, char *key, uint64_t val);
+GF_MUST_CHECK int dict_check_flag (dict_t *this, char *key, int flag);
+GF_MUST_CHECK int dict_set_flag (dict_t *this, char *key, int flag);
+GF_MUST_CHECK int dict_clear_flag (dict_t *this, char *key, int flag);
+
GF_MUST_CHECK int dict_get_double (dict_t *this, char *key, double *val);
GF_MUST_CHECK int dict_set_double (dict_t *this, char *key, double val);
diff --git a/libglusterfs/src/glusterfs.h b/libglusterfs/src/glusterfs.h
index 5d5f5c8..b161bf0 100644
--- a/libglusterfs/src/glusterfs.h
+++ b/libglusterfs/src/glusterfs.h
@@ -156,6 +156,33 @@
#define GLUSTERFS_VERSION_XCHG_KEY "glusterfs.version.xchg"
#define GLUSTERFS_INTERNAL_FOP_KEY "glusterfs-internal-fop"
+
+/* GlusterFS Internal FOP Indicator flags
+ * (To pass information on the context in which a paritcular
+ * fop is performed between translators)
+ * The presence of a particular flag must be treated as an
+ * indicator of the context, however the flag is added only in
+ * a scenario where there is a need for such context across translators.
+ * So it cannot be an absolute information on context.
+ */
+#define GF_INTERNAL_CTX_KEY "glusterfs.internal-ctx"
+
+/*
+ * Always append entries to end of the enum, do not delete entries.
+ * Currently dict_set_flag allows to set upto 256 flag, if the enum
+ * needs to grow beyond this dict_set_flag has to be changed accordingly
+ */
+enum gf_internal_fop_indicator {
+ GF_DHT_HEAL_DIR /* Index 0 in bit array*/
+};
+
+/* Todo:
+ * Add GF_FOP_LINK_FILE 0x2ULL
+ * address GLUSTERFS_MARKER_DONT_ACCOUNT_KEY and
+ * GLUSTERFS_INTERNAL_FOP_KEY with this flag
+ */
+
+
#define DHT_CHANGELOG_RENAME_OP_KEY "changelog.rename-op"
#define ZR_FILE_CONTENT_STR "glusterfs.file."
diff --git a/xlators/cluster/dht/src/dht-selfheal.c b/xlators/cluster/dht/src/dht-selfheal.c
index c2c4034..7b192d3 100644
--- a/xlators/cluster/dht/src/dht-selfheal.c
+++ b/xlators/cluster/dht/src/dht-selfheal.c
@@ -1404,10 +1404,25 @@ dht_selfheal_dir_mkdir_lookup_done (call_frame_t *frame, xlator_t *this)
dht_dir_set_heal_xattr (this, local, dict, local->xattr, NULL,
NULL);
- if (!dict)
+ if (!dict) {
gf_msg (this->name, GF_LOG_WARNING, 0,
DHT_MSG_DICT_SET_FAILED,
"dict is NULL, need to make sure gfids are same");
+ dict = dict_new ();
+ if (!dict)
+ return -1;
+ }
+ ret = dict_set_flag (dict, GF_INTERNAL_CTX_KEY, GF_DHT_HEAL_DIR);
+ if (ret) {
+ gf_msg (this->name, GF_LOG_ERROR, 0,
+ DHT_MSG_DICT_SET_FAILED,
+ "Failed to set dictionary value for"
+ " key = %s at path: %s",
+ GF_INTERNAL_CTX_KEY, loc->path);
+ /* We can still continue. As heal can still happen
+ * unless quota limits have reached for the dir.
+ */
+ }
cnt = layout->cnt;
for (i = 0; i < cnt; i++) {
diff --git a/xlators/features/quota/src/quota-messages.h b/xlators/features/quota/src/quota-messages.h
index b01fe98..7478b20 100644
--- a/xlators/features/quota/src/quota-messages.h
+++ b/xlators/features/quota/src/quota-messages.h
@@ -46,7 +46,7 @@
*/
#define GLFS_QUOTA_BASE GLFS_MSGID_COMP_QUOTA
-#define GLFS_NUM_MESSAGES 23
+#define GLFS_NUM_MESSAGES 25
#define GLFS_MSGID_END (GLFS_QUOTA_BASE + GLFS_NUM_MESSAGES + 1)
/* Messaged with message IDs */
#define glfs_msg_start_x GLFS_QUOTA_BASE, "Invalid: Start of messages"
@@ -240,6 +240,23 @@
#define Q_MSG_RPC_SUBMIT_FAILED (GLFS_QUOTA_BASE + 23)
+/*!
+ * @messageid 120024
+ * @diagnosis
+ * @recommendedaction
+ */
+
+#define Q_MSG_ENFORCEMENT_SKIPPED (GLFS_QUOTA_BASE + 24)
+
+/*!
+ * @messageid 120025
+ * @diagnosis
+ * @recommendedaction
+ */
+
+#define Q_MSG_INTERNAL_FOP_KEY_MISSING (GLFS_QUOTA_BASE + 25)
+
+
/*------------*/
#define glfs_msg_end_x GLFS_MSGID_END, "Invalid: End of messages"
diff --git a/xlators/features/quota/src/quota.c b/xlators/features/quota/src/quota.c
index a307845..af7b65a 100644
--- a/xlators/features/quota/src/quota.c
+++ b/xlators/features/quota/src/quota.c
@@ -1591,6 +1591,28 @@ out:
return ret;
}
+/*
+ * return _gf_true if enforcement is needed and _gf_false otherwise
+ */
+gf_boolean_t
+should_quota_enforce (xlator_t *this, dict_t *dict, glusterfs_fop_t fop)
+{
+ int ret = 0;
+
+ ret = dict_check_flag(dict, GF_INTERNAL_CTX_KEY, GF_DHT_HEAL_DIR);
+
+ if (fop == GF_FOP_MKDIR && ret == DICT_FLAG_SET) {
+ return _gf_false;
+ } else if (ret == -ENOENT) {
+ gf_msg (this->name, GF_LOG_DEBUG, EINVAL,
+ Q_MSG_INTERNAL_FOP_KEY_MISSING,
+ "No internal fop context present");
+ goto out;
+ }
+out:
+ return _gf_true;
+}
+
int32_t
quota_lookup_cbk (call_frame_t *frame, void *cookie, xlator_t *this,
int32_t op_ret, int32_t op_errno, inode_t *inode,
@@ -1965,7 +1987,6 @@ unwind:
return 0;
}
-
int32_t
quota_mkdir (call_frame_t *frame, xlator_t *this, loc_t *loc, mode_t mode,
mode_t umask, dict_t *xdata)
@@ -1976,9 +1997,15 @@ quota_mkdir (call_frame_t *frame, xlator_t *this, loc_t *loc, mode_t mode,
call_stub_t *stub = NULL;
priv = this->private;
-
WIND_IF_QUOTAOFF (priv->is_quota_on, off);
+ if (!should_quota_enforce(this, xdata, GF_FOP_MKDIR)) {
+ gf_msg (this->name, GF_LOG_DEBUG, 0,
+ Q_MSG_ENFORCEMENT_SKIPPED,
+ "Enforcement has been skipped(internal fop).");
+ goto off;
+ }
+
local = quota_local_new ();
if (local == NULL) {
op_errno = ENOMEM;
diff --git a/xlators/storage/posix/src/posix-helpers.c b/xlators/storage/posix/src/posix-helpers.c
index ba1d8c3..4107265 100644
--- a/xlators/storage/posix/src/posix-helpers.c
+++ b/xlators/storage/posix/src/posix-helpers.c
@@ -1200,6 +1200,10 @@ posix_handle_pair (xlator_t *this, const char *real_path,
} else if (!strncmp(key, POSIX_ACL_ACCESS_XATTR, strlen(key))
&& stbuf && IS_DHT_LINKFILE_MODE (stbuf)) {
goto out;
+ } else if (!strncmp(key, GF_INTERNAL_CTX_KEY, strlen(key))) {
+ /* ignore this key value pair */
+ ret = 0;
+ goto out;
} else {
sys_ret = sys_lsetxattr (real_path, key, value->data,
value->len, flags);
--
1.8.3.1
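The flag primitives patch 219 adds to dict store each flag as one bit in a 32-byte array (256 flags per key). The standalone model below writes the bit operations out explicitly; the real `BIT_SET`/`BIT_CLEAR`/`BIT_VALUE` macros live in libglusterfs, and the dict locking and refcounting are omitted here.

```c
#include <assert.h>
#include <string.h>

#define MAX_FLAGS 256             /* mirrors DICT_MAX_FLAGS */

/* One bit per flag: flag N lives at bit (N % 8) of byte (N / 8). */
typedef struct {
    unsigned char bits[MAX_FLAGS / 8];
} flag_array_t;

static void flag_set(flag_array_t *f, int flag)
{
    f->bits[flag / 8] |= (unsigned char)(1u << (flag % 8));
}

static void flag_clear(flag_array_t *f, int flag)
{
    f->bits[flag / 8] &= (unsigned char)~(1u << (flag % 8));
}

/* 1 if set, 0 if not (mirrors dict_check_flag's success returns). */
static int flag_check(const flag_array_t *f, int flag)
{
    return (f->bits[flag / 8] >> (flag % 8)) & 1;
}
```

In the patch, `GF_DHT_HEAL_DIR` is bit index 0 under the `GF_INTERNAL_CTX_KEY` key; quota's `should_quota_enforce` simply checks that bit on mkdir and skips enforcement when it is set.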

From 205d5531eda83095232c888d97c9b7b98f146201 Mon Sep 17 00:00:00 2001
From: Atin Mukherjee <amukherj@redhat.com>
Date: Mon, 16 Apr 2018 13:47:12 +0530
Subject: [PATCH 220/236] glusterd: turn off selinux feature in downstream
In RHGS 3.4.0 the SELinux feature was never meant to be qualified.
Label: DOWNSTREAM ONLY
Change-Id: I0cd5eb5207a757c8b6ef789980c061f211410bd5
BUG: 1565962
Signed-off-by: Atin Mukherjee <amukherj@redhat.com>
Reviewed-on: https://code.engineering.redhat.com/gerrit/135716
Tested-by: RHGS Build Bot <nigelb@redhat.com>
Reviewed-by: Sunil Kumar Heggodu Gopala Acharya <sheggodu@redhat.com>
---
xlators/mgmt/glusterd/src/glusterd-volume-set.c | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/xlators/mgmt/glusterd/src/glusterd-volume-set.c b/xlators/mgmt/glusterd/src/glusterd-volume-set.c
index d01e282..4d41c6e 100644
--- a/xlators/mgmt/glusterd/src/glusterd-volume-set.c
+++ b/xlators/mgmt/glusterd/src/glusterd-volume-set.c
@@ -3236,7 +3236,7 @@ struct volopt_map_entry glusterd_volopt_map[] = {
{ .key = VKEY_FEATURES_SELINUX,
.voltype = "features/selinux",
.type = NO_DOC,
- .value = "on",
+ .value = "off",
.op_version = GD_OP_VERSION_3_11_0,
.description = "Convert security.selinux xattrs to "
"trusted.gluster.selinux on the bricks. Recommended "
--
1.8.3.1

From 3d845c38b0620547a58654a3b38ceed483b9779f Mon Sep 17 00:00:00 2001
From: N Balachandran <nbalacha@redhat.com>
Date: Wed, 14 Mar 2018 10:05:06 +0530
Subject: [PATCH 221/236] cluster/dht: Skipped files are not treated as errors
For skipped files, use a return value of 1 to prevent
error messages being logged.
upstream patch: https://review.gluster.org/#/c/19710/
> Change-Id: I18de31ac1a64d4460e88dea7826c3ba03c895861
> BUG: 1553598
> Signed-off-by: N Balachandran <nbalacha@redhat.com>
Change-Id: I18de31ac1a64d4460e88dea7826c3ba03c895861
BUG: 1546941
Signed-off-by: N Balachandran <nbalacha@redhat.com>
Reviewed-on: https://code.engineering.redhat.com/gerrit/132609
Tested-by: RHGS Build Bot <nigelb@redhat.com>
Reviewed-by: Atin Mukherjee <amukherj@redhat.com>
---
xlators/cluster/dht/src/dht-rebalance.c | 14 +++++++++-----
1 file changed, 9 insertions(+), 5 deletions(-)
diff --git a/xlators/cluster/dht/src/dht-rebalance.c b/xlators/cluster/dht/src/dht-rebalance.c
index a4be348..eee00b8 100644
--- a/xlators/cluster/dht/src/dht-rebalance.c
+++ b/xlators/cluster/dht/src/dht-rebalance.c
@@ -594,7 +594,7 @@ __check_file_has_hardlink (xlator_t *this, loc_t *loc,
"Migration skipped for:"
"%s: file has hardlinks", loc->path);
*fop_errno = ENOTSUP;
- ret = -1;
+ ret = 1;
}
}
@@ -649,7 +649,7 @@ __is_file_migratable (xlator_t *this, loc_t *loc,
"Migrate file failed: %s: File has locks."
" Skipping file migration", loc->path);
*fop_errno = ENOTSUP;
- ret = -1;
+ ret = 1;
goto out;
}
}
@@ -964,7 +964,7 @@ __dht_check_free_space (xlator_t *this, xlator_t *to, xlator_t *from, loc_t *loc
/* this is not a 'failure', but we don't want to
consider this as 'success' too :-/ */
*fop_errno = ENOSPC;
- ret = -1;
+ ret = 1;
goto out;
}
}
@@ -2719,7 +2719,7 @@ gf_defrag_migrate_single_file (void *opaque)
ret = dht_migrate_file (this, &entry_loc, cached_subvol,
hashed_subvol, rebal_type, &fop_errno);
- if (ret < 0) {
+ if (ret == 1) {
if (fop_errno == ENOSPC) {
gf_msg_debug (this->name, 0, "migrate-data skipped for"
" %s due to space constraints",
@@ -2768,8 +2768,12 @@ gf_defrag_migrate_single_file (void *opaque)
DHT_MSG_MIGRATE_FILE_SKIPPED,
"File migration skipped for %s.",
entry_loc.path);
+ }
+
+ ret = 0;
- } else if (fop_errno != EEXIST) {
+ } else if (ret < 0) {
+ if (fop_errno != EEXIST) {
gf_msg (this->name, GF_LOG_ERROR, fop_errno,
DHT_MSG_MIGRATE_FILE_FAILED,
"migrate-data failed for %s", entry_loc.path);
--
1.8.3.1
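Patch 221 changes the "skip" cases in dht_migrate_file and its helpers from -1 to 1, giving callers a three-way convention: negative means a real failure, 0 means success, 1 means the file was deliberately skipped. A toy model of that convention (the function names and inputs below are illustrative, not gluster's):

```c
#include <assert.h>

enum { MIG_OK = 0, MIG_SKIPPED = 1, MIG_FAILED = -1 };

/* Toy migrate: hardlinked or space-constrained files are skipped,
 * not failed, matching the ret = 1 paths in the patch. */
static int migrate_file(int has_hardlinks, int out_of_space)
{
    if (has_hardlinks)
        return MIG_SKIPPED;   /* ENOTSUP-style skip */
    if (out_of_space)
        return MIG_SKIPPED;   /* ENOSPC skip */
    return MIG_OK;
}

/* Returns 1 if the caller would log an error for 'ret', 0 otherwise:
 * skips are counted quietly, only genuine failures are logged. */
static int caller_logs_error(int ret)
{
    if (ret == MIG_SKIPPED)
        return 0;
    if (ret < 0)
        return 1;
    return 0;
}
```

Before the patch both branches returned -1, so every skipped file produced a misleading "migrate-data failed" message.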

From a25382d5aa9cddde04b1b3355e9d0d1b43e66406 Mon Sep 17 00:00:00 2001
From: Atin Mukherjee <amukherj@redhat.com>
Date: Mon, 16 Apr 2018 22:49:37 +0530
Subject: [PATCH 222/236] hooks: remove selinux hooks
Label: DOWNSTREAM ONLY
Change-Id: I810466a0ca99ab21f5a8eac8cdffbb18333d10ad
BUG: 1565962
Signed-off-by: Atin Mukherjee <amukherj@redhat.com>
Reviewed-on: https://code.engineering.redhat.com/gerrit/135800
Tested-by: RHGS Build Bot <nigelb@redhat.com>
Reviewed-by: Jiffin Thottan <jthottan@redhat.com>
Reviewed-by: Milind Changire <mchangir@redhat.com>
---
configure.ac | 20 -------
extras/hook-scripts/Makefile.am | 2 +-
extras/hook-scripts/create/Makefile.am | 1 -
extras/hook-scripts/create/post/Makefile.am | 6 ---
.../create/post/S10selinux-label-brick.sh | 62 ----------------------
extras/hook-scripts/delete/Makefile.am | 1 -
extras/hook-scripts/delete/pre/Makefile.am | 6 ---
.../delete/pre/S10selinux-del-fcontext.sh | 59 --------------------
glusterfs.spec.in | 5 +-
9 files changed, 4 insertions(+), 158 deletions(-)
delete mode 100644 extras/hook-scripts/create/Makefile.am
delete mode 100644 extras/hook-scripts/create/post/Makefile.am
delete mode 100755 extras/hook-scripts/create/post/S10selinux-label-brick.sh
delete mode 100644 extras/hook-scripts/delete/Makefile.am
delete mode 100644 extras/hook-scripts/delete/pre/Makefile.am
delete mode 100755 extras/hook-scripts/delete/pre/S10selinux-del-fcontext.sh
diff --git a/configure.ac b/configure.ac
index c9a1cde..b388a13 100644
--- a/configure.ac
+++ b/configure.ac
@@ -228,10 +228,6 @@ AC_CONFIG_FILES([Makefile
extras/hook-scripts/add-brick/Makefile
extras/hook-scripts/add-brick/pre/Makefile
extras/hook-scripts/add-brick/post/Makefile
- extras/hook-scripts/create/Makefile
- extras/hook-scripts/create/post/Makefile
- extras/hook-scripts/delete/Makefile
- extras/hook-scripts/delete/pre/Makefile
extras/hook-scripts/start/Makefile
extras/hook-scripts/start/post/Makefile
extras/hook-scripts/set/Makefile
@@ -911,21 +907,6 @@ else
fi
# end of xml-output
-dnl SELinux feature enablement
-case $host_os in
- linux*)
- AC_ARG_ENABLE([selinux],
- AC_HELP_STRING([--disable-selinux],
- [Disable SELinux features]),
- [USE_SELINUX="${enableval}"], [USE_SELINUX="yes"])
- ;;
- *)
- USE_SELINUX=no
- ;;
-esac
-AM_CONDITIONAL(USE_SELINUX, test "x${USE_SELINUX}" = "xyes")
-dnl end of SELinux feature enablement
-
AC_CHECK_HEADERS([execinfo.h], [have_backtrace=yes])
if test "x${have_backtrace}" = "xyes"; then
AC_DEFINE(HAVE_BACKTRACE, 1, [define if found backtrace])
@@ -1577,7 +1558,6 @@ echo "Unit Tests : $BUILD_UNITTEST"
echo "Track priv ports : $TRACK_PRIVPORTS"
echo "POSIX ACLs : $BUILD_POSIX_ACLS"
echo "Data Classification : $BUILD_GFDB"
-echo "SELinux features : $USE_SELINUX"
echo "firewalld-config : $BUILD_FIREWALLD"
echo "Events : $BUILD_EVENTS"
echo "EC dynamic support : $EC_DYNAMIC_SUPPORT"
diff --git a/extras/hook-scripts/Makefile.am b/extras/hook-scripts/Makefile.am
index 26059d7..771b37e 100644
--- a/extras/hook-scripts/Makefile.am
+++ b/extras/hook-scripts/Makefile.am
@@ -1,5 +1,5 @@
EXTRA_DIST = S40ufo-stop.py S56glusterd-geo-rep-create-post.sh
-SUBDIRS = add-brick create delete set start stop reset
+SUBDIRS = add-brick set start stop reset
scriptsdir = $(GLUSTERD_WORKDIR)/hooks/1/gsync-create/post/
if USE_GEOREP
diff --git a/extras/hook-scripts/create/Makefile.am b/extras/hook-scripts/create/Makefile.am
deleted file mode 100644
index b083a91..0000000
--- a/extras/hook-scripts/create/Makefile.am
+++ /dev/null
@@ -1 +0,0 @@
-SUBDIRS = post
diff --git a/extras/hook-scripts/create/post/Makefile.am b/extras/hook-scripts/create/post/Makefile.am
deleted file mode 100644
index adbce78..0000000
--- a/extras/hook-scripts/create/post/Makefile.am
+++ /dev/null
@@ -1,6 +0,0 @@
-EXTRA_DIST = S10selinux-label-brick.sh
-
-scriptsdir = $(GLUSTERD_WORKDIR)/hooks/1/create/post/
-if USE_SELINUX
-scripts_SCRIPTS = S10selinux-label-brick.sh
-endif
diff --git a/extras/hook-scripts/create/post/S10selinux-label-brick.sh b/extras/hook-scripts/create/post/S10selinux-label-brick.sh
deleted file mode 100755
index de242d2..0000000
--- a/extras/hook-scripts/create/post/S10selinux-label-brick.sh
+++ /dev/null
@@ -1,62 +0,0 @@
-#!/bin/bash
-#
-# Install to hooks/<HOOKS_VER>/create/post
-#
-# Add an SELinux file context for each brick using the glusterd_brick_t type.
-# This ensures that the brick is relabeled correctly on an SELinux restart or
-# restore. Subsequently, run a restore on the brick path to set the selinux
-# labels.
-#
-###
-
-PROGNAME="Sselinux"
-OPTSPEC="volname:"
-VOL=
-
-parse_args () {
- ARGS=$(getopt -o '' -l ${OPTSPEC} -n ${PROGNAME} -- "$@")
- eval set -- "${ARGS}"
-
- while true; do
- case ${1} in
- --volname)
- shift
- VOL=${1}
- ;;
- *)
- shift
- break
- ;;
- esac
- shift
- done
-}
-
-set_brick_labels()
-{
- volname=${1}
-
- # grab the path for each local brick
- brickpath="/var/lib/glusterd/vols/${volname}/bricks/*"
- brickdirs=$(grep '^path=' "${brickpath}" | cut -d= -f 2 | sort -u)
-
- for b in ${brickdirs}; do
- # Add a file context for each brick path and associate with the
- # glusterd_brick_t SELinux type.
- pattern="${b}\(/.*\)?"
- semanage fcontext --add -t glusterd_brick_t -r s0 "${pattern}"
-
- # Set the labels on the new brick path.
- restorecon -R "${b}"
- done
-}
-
-SELINUX_STATE=$(which getenforce && getenforce)
-[ "${SELINUX_STATE}" = 'Disabled' ] && exit 0
-
-parse_args "$@"
-[ -z "${VOL}" ] && exit 1
-
-set_brick_labels "${VOL}"
-
-exit 0
diff --git a/extras/hook-scripts/delete/Makefile.am b/extras/hook-scripts/delete/Makefile.am
deleted file mode 100644
index c98a05d..0000000
--- a/extras/hook-scripts/delete/Makefile.am
+++ /dev/null
@@ -1 +0,0 @@
-SUBDIRS = pre
diff --git a/extras/hook-scripts/delete/pre/Makefile.am b/extras/hook-scripts/delete/pre/Makefile.am
deleted file mode 100644
index bf0eabe..0000000
--- a/extras/hook-scripts/delete/pre/Makefile.am
+++ /dev/null
@@ -1,6 +0,0 @@
-EXTRA_DIST = S10selinux-del-fcontext.sh
-
-scriptsdir = $(GLUSTERD_WORKDIR)/hooks/1/delete/pre/
-if USE_SELINUX
-scripts_SCRIPTS = S10selinux-del-fcontext.sh
-endif
diff --git a/extras/hook-scripts/delete/pre/S10selinux-del-fcontext.sh b/extras/hook-scripts/delete/pre/S10selinux-del-fcontext.sh
deleted file mode 100755
index 6eba66f..0000000
--- a/extras/hook-scripts/delete/pre/S10selinux-del-fcontext.sh
+++ /dev/null
@@ -1,59 +0,0 @@
-#!/bin/bash
-#
-# Install to hooks/<HOOKS_VER>/delete/pre
-#
-# Delete the file context associated with the brick path on volume deletion. The
-# associated file context was added during volume creation.
-#
-# We do not explicitly relabel the brick, as this could be time consuming and
-# unnecessary.
-#
-###
-
-PROGNAME="Sselinux"
-OPTSPEC="volname:"
-VOL=
-
-function parse_args () {
- ARGS=$(getopt -o '' -l $OPTSPEC -n $PROGNAME -- "$@")
- eval set -- "$ARGS"
-
- while true; do
- case $1 in
- --volname)
- shift
- VOL=$1
- ;;
- *)
- shift
- break
- ;;
- esac
- shift
- done
-}
-
-function delete_brick_fcontext()
-{
- volname=$1
-
- # grab the path for each local brick
- brickdirs=$(grep '^path=' /var/lib/glusterd/vols/${volname}/bricks/* | cut -d= -f 2)
-
- for b in $brickdirs
- do
- # remove the file context associated with the brick path
- semanage fcontext --delete $b\(/.*\)?
- done
-}
-
-SELINUX_STATE=$(which getenforce && getenforce)
-[ "${SELINUX_STATE}" = 'Disabled' ] && exit 0
-
-parse_args "$@"
-[ -z "$VOL" ] && exit 1
-
-delete_brick_fcontext $VOL
-
-# failure to delete the fcontext is not fatal
-exit 0
diff --git a/glusterfs.spec.in b/glusterfs.spec.in
index 4b5238a..64e7e29 100644
--- a/glusterfs.spec.in
+++ b/glusterfs.spec.in
@@ -1523,7 +1523,6 @@ exit 0
%attr(0755,-,-) %{_sharedstatedir}/glusterd/hooks/1/add-brick/pre/S28Quota-enable-root-xattr-heal.sh
%dir %attr(0755,-,-) %{_sharedstatedir}/glusterd/hooks/1/create
%dir %attr(0755,-,-) %{_sharedstatedir}/glusterd/hooks/1/create/post
- %attr(0755,-,-) %{_sharedstatedir}/glusterd/hooks/1/create/post/S10selinux-label-brick.sh
%ghost %dir %attr(0755,-,-) %{_sharedstatedir}/glusterd/hooks/1/create/pre
%ghost %dir %attr(0755,-,-) %{_sharedstatedir}/glusterd/hooks/1/copy-file
%ghost %dir %attr(0755,-,-) %{_sharedstatedir}/glusterd/hooks/1/copy-file/post
@@ -1532,7 +1531,6 @@ exit 0
%dir %attr(0755,-,-) %{_sharedstatedir}/glusterd/hooks/1/delete/post
%{_sharedstatedir}/glusterd/hooks/1/delete/post/S57glusterfind-delete-post
%dir %attr(0755,-,-) %{_sharedstatedir}/glusterd/hooks/1/delete/pre
- %attr(0755,-,-) %{_sharedstatedir}/glusterd/hooks/1/delete/pre/S10selinux-del-fcontext.sh
%ghost %dir %attr(0755,-,-) %{_sharedstatedir}/glusterd/hooks/1/remove-brick
%ghost %dir %attr(0755,-,-) %{_sharedstatedir}/glusterd/hooks/1/remove-brick/post
%ghost %dir %attr(0755,-,-) %{_sharedstatedir}/glusterd/hooks/1/remove-brick/pre
@@ -2157,6 +2155,9 @@ fi
%endif
%changelog
+* Wed Apr 18 2018 Atin Mukherjee <amukherj@redhat.com>
+- Revert SELinux hooks (#1565962)
+
* Thu Feb 22 2018 Kotresh HR <khiremat@redhat.com>
- Added util-linux as dependency to georeplication rpm (#1544382)
--
1.8.3.1

From b37253008e93571088537d77282d985b484302c8 Mon Sep 17 00:00:00 2001
From: Atin Mukherjee <amukherj@redhat.com>
Date: Mon, 16 Apr 2018 17:44:19 +0530
Subject: [PATCH 223/236] glusterd: Make localtime-logging option invisible in
downstream
Label: DOWNSTREAM ONLY
Change-Id: Ie631edebb7e19152392bfd3c369a96e88796bd75
BUG: 1567110
Signed-off-by: Atin Mukherjee <amukherj@redhat.com>
Reviewed-on: https://code.engineering.redhat.com/gerrit/135754
Tested-by: RHGS Build Bot <nigelb@redhat.com>
---
xlators/mgmt/glusterd/src/glusterd-op-sm.c | 2 +-
xlators/mgmt/glusterd/src/glusterd-volume-set.c | 4 ++--
2 files changed, 3 insertions(+), 3 deletions(-)
diff --git a/xlators/mgmt/glusterd/src/glusterd-op-sm.c b/xlators/mgmt/glusterd/src/glusterd-op-sm.c
index 72d349b..d479ed4 100644
--- a/xlators/mgmt/glusterd/src/glusterd-op-sm.c
+++ b/xlators/mgmt/glusterd/src/glusterd-op-sm.c
@@ -85,7 +85,7 @@ glusterd_all_vol_opts valid_all_vol_opts[] = {
* TBD: Discuss the default value for this. Maybe this should be a
* dynamic value depending on the memory specifications per node */
{ GLUSTERD_BRICKMUX_LIMIT_KEY, "0"},
- { GLUSTERD_LOCALTIME_LOGGING_KEY, "disable"},
+ /*{ GLUSTERD_LOCALTIME_LOGGING_KEY, "disable"},*/
{ NULL },
};
diff --git a/xlators/mgmt/glusterd/src/glusterd-volume-set.c b/xlators/mgmt/glusterd/src/glusterd-volume-set.c
index 4d41c6e..9bc0933 100644
--- a/xlators/mgmt/glusterd/src/glusterd-volume-set.c
+++ b/xlators/mgmt/glusterd/src/glusterd-volume-set.c
@@ -3567,12 +3567,12 @@ struct volopt_map_entry glusterd_volopt_map[] = {
.op_version = GD_OP_VERSION_3_11_0,
.flags = OPT_FLAG_CLIENT_OPT
},
- { .key = GLUSTERD_LOCALTIME_LOGGING_KEY,
+ /*{ .key = GLUSTERD_LOCALTIME_LOGGING_KEY,
.voltype = "mgmt/glusterd",
.type = GLOBAL_DOC,
.op_version = GD_OP_VERSION_3_12_0,
.validate_fn = validate_boolean
- },
+ },*/
{ .key = "disperse.parallel-writes",
.voltype = "cluster/disperse",
.type = NO_DOC,
--
1.8.3.1

From 40d6af4d4c42b1880abcf576a9a9b6e734298ff0 Mon Sep 17 00:00:00 2001
From: Mohit Agrawal <moagrawa@redhat.com>
Date: Sat, 10 Feb 2018 12:25:15 +0530
Subject: [PATCH 225/236] glusterfsd: Memleak in glusterfsd process while
brick mux is on
Problem: When a volume is stopped while brick multiplexing is enabled,
memory is not cleaned up in all server-side xlators.
Solution: To clean up memory for all server-side xlators, call fini in
glusterfs_handle_terminate after sending the GF_EVENT_CLEANUP
notification to the top xlator.
> BUG: 1544090
> (cherry picked from commit 7c3cc485054e4ede1efb358552135b432fb7047a)
> (upstream patch review link https://review.gluster.org/#/c/19616/)
BUG: 1535281
Change-Id: Ia10dc7f2605aa50f2b90b3fe4eb380ba9299e2fc
Signed-off-by: Mohit Agrawal <moagrawa@redhat.com>
Reviewed-on: https://code.engineering.redhat.com/gerrit/136216
Tested-by: RHGS Build Bot <nigelb@redhat.com>
Reviewed-by: Atin Mukherjee <amukherj@redhat.com>
---
glusterfsd/src/glusterfsd-mgmt.c | 67 ++++++++++++++++++++
glusterfsd/src/glusterfsd.c | 13 ----
glusterfsd/src/glusterfsd.h | 3 +
xlators/debug/io-stats/src/io-stats.c | 1 -
xlators/features/bit-rot/src/stub/bit-rot-stub.c | 22 +++----
.../features/changelog/src/changelog-ev-handle.c | 8 ++-
.../features/changelog/src/changelog-rpc-common.c | 4 ++
xlators/features/changelog/src/changelog.c | 18 ++----
.../changetimerecorder/src/changetimerecorder.c | 15 ++---
xlators/features/index/src/index.c | 21 ++++---
xlators/features/leases/src/leases.c | 15 +++--
xlators/features/marker/src/marker.c | 69 ++++++++++++---------
xlators/features/quota/src/quota.c | 25 +++++++-
xlators/features/shard/src/shard.c | 3 +
xlators/features/trash/src/trash.c | 15 ++++-
xlators/features/upcall/src/upcall.c | 18 +++++-
.../performance/decompounder/src/decompounder.c | 7 +++
xlators/performance/io-threads/src/io-threads.c | 6 +-
xlators/protocol/server/src/server-rpc-fops.c | 7 +++
xlators/protocol/server/src/server.c | 1 -
xlators/storage/posix/src/posix-handle.h | 6 ++
xlators/storage/posix/src/posix-helpers.c | 1 +
xlators/storage/posix/src/posix.c | 71 ++++++++++++----------
xlators/system/posix-acl/src/posix-acl.c | 4 +-
24 files changed, 281 insertions(+), 139 deletions(-)
diff --git a/glusterfsd/src/glusterfsd-mgmt.c b/glusterfsd/src/glusterfsd-mgmt.c
index ef53d09..8f4450b 100644
--- a/glusterfsd/src/glusterfsd-mgmt.c
+++ b/glusterfsd/src/glusterfsd-mgmt.c
@@ -197,6 +197,72 @@ glusterfs_autoscale_threads (glusterfs_ctx_t *ctx, int incr, xlator_t *this)
rpcsvc_ownthread_reconf (conf->rpc, pool->eventthreadcount);
}
+static int
+xlator_mem_free (xlator_t *xl)
+{
+ volume_opt_list_t *vol_opt = NULL;
+ volume_opt_list_t *tmp = NULL;
+
+ if (!xl)
+ return 0;
+
+ GF_FREE (xl->name);
+ GF_FREE (xl->type);
+ xl->name = NULL;
+ xl->type = NULL;
+
+ if (xl->options) {
+ dict_ref (xl->options);
+ dict_unref (xl->options);
+ xl->options = NULL;
+ }
+
+ list_for_each_entry_safe (vol_opt, tmp, &xl->volume_options, list) {
+ list_del_init (&vol_opt->list);
+ GF_FREE (vol_opt);
+ }
+
+ return 0;
+}
+
+void
+xlator_call_fini (xlator_t *this) {
+ if (!this)
+ return;
+ xlator_call_fini (this->next);
+ this->fini (this);
+}
+
+void
+xlator_mem_cleanup (xlator_t *this) {
+ xlator_list_t *list = this->children;
+ xlator_t *trav = list->xlator;
+ inode_table_t *inode_table = NULL;
+ xlator_t *prev = trav;
+
+ inode_table = this->itable;
+
+ xlator_call_fini (trav);
+
+ while (prev) {
+ trav = prev->next;
+ xlator_mem_free (prev);
+ prev = trav;
+ }
+
+ if (inode_table) {
+ inode_table_destroy (inode_table);
+ this->itable = NULL;
+ }
+
+ if (this->fini) {
+ this->fini (this);
+ }
+
+ xlator_mem_free (this);
+}
+
+
int
glusterfs_handle_terminate (rpcsvc_request_t *req)
{
@@ -263,6 +329,7 @@ glusterfs_handle_terminate (rpcsvc_request_t *req)
gf_log (THIS->name, GF_LOG_INFO, "detaching not-only"
" child %s", xlator_req.name);
top->notify (top, GF_EVENT_CLEANUP, victim);
+ xlator_mem_cleanup (victim);
}
err:
if (!lockflag)
diff --git a/glusterfsd/src/glusterfsd.c b/glusterfsd/src/glusterfsd.c
index 3ae89a6..6b7adc4 100644
--- a/glusterfsd/src/glusterfsd.c
+++ b/glusterfsd/src/glusterfsd.c
@@ -1416,20 +1416,7 @@ cleanup_and_exit (int signum)
}
#endif
- /* call fini() of each xlator */
-
- /*call fini for glusterd xlator */
- /* TODO : Invoke fini for rest of the xlators */
trav = NULL;
- if (ctx->active)
- trav = ctx->active->top;
- while (trav) {
- if (should_call_fini(ctx,trav)) {
- THIS = trav;
- trav->fini (trav);
- }
- trav = trav->next;
- }
/* NOTE: Only the least significant 8 bits i.e (signum & 255)
will be available to parent process on calling exit() */
diff --git a/glusterfsd/src/glusterfsd.h b/glusterfsd/src/glusterfsd.h
index 43cef52..a72acc8 100644
--- a/glusterfsd/src/glusterfsd.h
+++ b/glusterfsd/src/glusterfsd.h
@@ -126,5 +126,8 @@ int glusterfs_volume_top_read_perf (uint32_t blk_size, uint32_t blk_count,
void
glusterfs_autoscale_threads (glusterfs_ctx_t *ctx, int incr, xlator_t *this);
+void
+xlator_mem_cleanup (xlator_t *this);
+
extern glusterfs_ctx_t *glusterfsd_ctx;
#endif /* __GLUSTERFSD_H__ */
diff --git a/xlators/debug/io-stats/src/io-stats.c b/xlators/debug/io-stats/src/io-stats.c
index a46d116..f46474b 100644
--- a/xlators/debug/io-stats/src/io-stats.c
+++ b/xlators/debug/io-stats/src/io-stats.c
@@ -300,7 +300,6 @@ is_fop_latency_started (call_frame_t *frame)
throughput, iosstat); \
} while (0)
-
static int
ios_fd_ctx_get (fd_t *fd, xlator_t *this, struct ios_fd **iosfd)
{
diff --git a/xlators/features/bit-rot/src/stub/bit-rot-stub.c b/xlators/features/bit-rot/src/stub/bit-rot-stub.c
index 4be7caa..05cac63 100644
--- a/xlators/features/bit-rot/src/stub/bit-rot-stub.c
+++ b/xlators/features/bit-rot/src/stub/bit-rot-stub.c
@@ -228,18 +228,6 @@ notify (xlator_t *this, int event, void *data, ...)
if (!priv)
return 0;
- switch (event) {
- case GF_EVENT_CLEANUP:
- if (priv->signth) {
- (void) gf_thread_cleanup_xint (priv->signth);
- priv->signth = 0;
- }
- if (priv->container.thread) {
- (void) gf_thread_cleanup_xint (priv->container.thread);
- priv->container.thread = 0;
- }
- break;
- }
default_notify (this, event, data);
return 0;
}
@@ -262,6 +250,7 @@ fini (xlator_t *this)
"Could not cancel sign serializer thread");
goto out;
}
+ priv->signth = 0;
while (!list_empty (&priv->squeue)) {
sigstub = list_first_entry (&priv->squeue,
@@ -283,12 +272,19 @@ fini (xlator_t *this)
goto out;
}
+ priv->container.thread = 0;
+
while (!list_empty (&priv->container.bad_queue)) {
stub = list_first_entry (&priv->container.bad_queue, call_stub_t,
list);
list_del_init (&stub->list);
call_stub_destroy (stub);
- };
+ }
+
+ if (priv->local_pool) {
+ mem_pool_destroy (priv->local_pool);
+ priv->local_pool = NULL;
+ }
pthread_mutex_destroy (&priv->container.bad_lock);
pthread_cond_destroy (&priv->container.bad_cond);
diff --git a/xlators/features/changelog/src/changelog-ev-handle.c b/xlators/features/changelog/src/changelog-ev-handle.c
index 38e127b..3e8dc9a 100644
--- a/xlators/features/changelog/src/changelog-ev-handle.c
+++ b/xlators/features/changelog/src/changelog-ev-handle.c
@@ -163,12 +163,14 @@ changelog_rpc_notify (struct rpc_clnt *rpc,
*/
rpc_clnt_unref (crpc->rpc);
- selection = &priv->ev_selection;
+ if (priv)
+ selection = &priv->ev_selection;
LOCK (&crpc->lock);
{
- changelog_deselect_event (this, selection,
- crpc->filter);
+ if (selection)
+ changelog_deselect_event (this, selection,
+ crpc->filter);
changelog_set_disconnect_flag (crpc, _gf_true);
}
UNLOCK (&crpc->lock);
diff --git a/xlators/features/changelog/src/changelog-rpc-common.c b/xlators/features/changelog/src/changelog-rpc-common.c
index 08cd41e..21bef76 100644
--- a/xlators/features/changelog/src/changelog-rpc-common.c
+++ b/xlators/features/changelog/src/changelog-rpc-common.c
@@ -275,6 +275,10 @@ changelog_rpc_server_destroy (xlator_t *this, rpcsvc_t *rpc, char *sockfile,
(void) rpcsvc_unregister_notify (rpc, fn, this);
sys_unlink (sockfile);
+ if (rpc->rxpool) {
+ mem_pool_destroy (rpc->rxpool);
+ rpc->rxpool = NULL;
+ }
GF_FREE (rpc);
}
diff --git a/xlators/features/changelog/src/changelog.c b/xlators/features/changelog/src/changelog.c
index 8b22a04..a472208 100644
--- a/xlators/features/changelog/src/changelog.c
+++ b/xlators/features/changelog/src/changelog.c
@@ -2110,20 +2110,6 @@ notify (xlator_t *this, int event, void *data, ...)
if (!priv)
goto out;
- if (event == GF_EVENT_CLEANUP) {
- if (priv->connector) {
- (void) gf_thread_cleanup_xint (priv->connector);
- priv->connector = 0;
- }
-
- for (; i < NR_DISPATCHERS; i++) {
- if (priv->ev_dispatcher[i]) {
- (void) gf_thread_cleanup_xint (priv->ev_dispatcher[i]);
- priv->ev_dispatcher[i] = 0;
- }
- }
- }
-
if (event == GF_EVENT_TRANSLATOR_OP) {
dict = data;
@@ -2901,6 +2887,9 @@ fini (xlator_t *this)
/* cleanup barrier related objects */
changelog_barrier_pthread_destroy (priv);
+ /* cleanup helper threads */
+ changelog_cleanup_helper_threads (this, priv);
+
/* cleanup allocated options */
changelog_freeup_options (this, priv);
@@ -2911,6 +2900,7 @@ fini (xlator_t *this)
}
this->private = NULL;
+ this->local_pool = NULL;
return;
}
diff --git a/xlators/features/changetimerecorder/src/changetimerecorder.c b/xlators/features/changetimerecorder/src/changetimerecorder.c
index 99519d1..5f82d33 100644
--- a/xlators/features/changetimerecorder/src/changetimerecorder.c
+++ b/xlators/features/changetimerecorder/src/changetimerecorder.c
@@ -19,7 +19,6 @@
#include "tier-ctr-interface.h"
/*******************************inode forget***********************************/
-
int
ctr_forget (xlator_t *this, inode_t *inode)
{
@@ -2310,15 +2309,6 @@ notify (xlator_t *this, int event, void *data, ...)
if (!priv)
goto out;
- if (event == GF_EVENT_CLEANUP) {
- if (fini_db (priv->_db_conn)) {
- gf_msg (this->name, GF_LOG_WARNING, 0,
- CTR_MSG_CLOSE_DB_CONN_FAILED, "Failed closing "
- "db connection");
- }
- if (priv->_db_conn)
- priv->_db_conn = NULL;
- }
ret = default_notify (this, event, data);
out:
@@ -2359,6 +2349,10 @@ fini (xlator_t *this)
CTR_MSG_CLOSE_DB_CONN_FAILED, "Failed closing "
"db connection");
}
+
+ if (priv->_db_conn)
+ priv->_db_conn = NULL;
+
GF_FREE (priv->ctr_db_path);
if (pthread_mutex_destroy (&priv->compact_lock)) {
gf_msg (this->name, GF_LOG_WARNING, 0,
@@ -2368,6 +2362,7 @@ fini (xlator_t *this)
}
GF_FREE (priv);
mem_pool_destroy (this->local_pool);
+ this->local_pool = NULL;
return;
}
diff --git a/xlators/features/index/src/index.c b/xlators/features/index/src/index.c
index 8590482..f3b0270 100644
--- a/xlators/features/index/src/index.c
+++ b/xlators/features/index/src/index.c
@@ -2444,6 +2444,13 @@ fini (xlator_t *this)
priv = this->private;
if (!priv)
goto out;
+
+ priv->down = _gf_true;
+ pthread_cond_broadcast (&priv->cond);
+ if (priv->thread) {
+ gf_thread_cleanup_xint (priv->thread);
+ priv->thread = 0;
+ }
this->private = NULL;
LOCK_DESTROY (&priv->lock);
pthread_cond_destroy (&priv->cond);
@@ -2455,8 +2462,11 @@ fini (xlator_t *this)
if (priv->complete_watchlist)
dict_unref (priv->complete_watchlist);
GF_FREE (priv);
- mem_pool_destroy (this->local_pool);
- this->local_pool = NULL;
+
+ if (this->local_pool) {
+ mem_pool_destroy (this->local_pool);
+ this->local_pool = NULL;
+ }
out:
return;
}
@@ -2526,13 +2536,6 @@ notify (xlator_t *this, int event, void *data, ...)
if (!priv)
return 0;
- switch (event) {
- case GF_EVENT_CLEANUP:
- priv->down = _gf_true;
- pthread_cond_broadcast (&priv->cond);
- break;
- }
-
ret = default_notify (this, event, data);
return ret;
}
diff --git a/xlators/features/leases/src/leases.c b/xlators/features/leases/src/leases.c
index 551dd9b..a8ffb35 100644
--- a/xlators/features/leases/src/leases.c
+++ b/xlators/features/leases/src/leases.c
@@ -1062,14 +1062,17 @@ fini (xlator_t *this)
priv->fini = _gf_true;
pthread_cond_broadcast (&priv->cond);
- pthread_join (priv->recall_thr, NULL);
-
- priv->inited_recall_thr = _gf_false;
+ if (priv->recall_thr) {
+ gf_thread_cleanup_xint (priv->recall_thr);
+ priv->recall_thr = 0;
+ priv->inited_recall_thr = _gf_false;
+ }
GF_FREE (priv);
-
- glusterfs_ctx_tw_put (this->ctx);
-
+ if (this->ctx->tw) {
+ glusterfs_ctx_tw_put (this->ctx);
+ this->ctx->tw = NULL;
+ }
return 0;
}
diff --git a/xlators/features/marker/src/marker.c b/xlators/features/marker/src/marker.c
index b51b9cc..3094c68 100644
--- a/xlators/features/marker/src/marker.c
+++ b/xlators/features/marker/src/marker.c
@@ -3193,9 +3193,9 @@ mem_acct_init (xlator_t *this)
int32_t
init_xtime_priv (xlator_t *this, dict_t *options)
{
- data_t *data = NULL;
int32_t ret = -1;
marker_conf_t *priv = NULL;
+ char *tmp_opt = NULL;
GF_VALIDATE_OR_GOTO ("marker", this, out);
GF_VALIDATE_OR_GOTO (this->name, options, out);
@@ -3203,29 +3203,11 @@ init_xtime_priv (xlator_t *this, dict_t *options)
priv = this->private;
- if((data = dict_get (options, VOLUME_UUID)) != NULL) {
- priv->volume_uuid = data->data;
+ ret = dict_get_str (options, "volume-uuid", &tmp_opt);
- ret = gf_uuid_parse (priv->volume_uuid, priv->volume_uuid_bin);
- if (ret == -1) {
- gf_log (this->name, GF_LOG_ERROR,
- "invalid volume uuid %s", priv->volume_uuid);
- goto out;
- }
-
- ret = gf_asprintf (& (priv->marker_xattr), "%s.%s.%s",
- MARKER_XATTR_PREFIX, priv->volume_uuid,
- XTIME);
-
- if (ret == -1){
- priv->marker_xattr = NULL;
- goto out;
- }
-
- gf_log (this->name, GF_LOG_DEBUG,
- "volume-uuid = %s", priv->volume_uuid);
- } else {
+ if (ret) {
priv->volume_uuid = NULL;
+ tmp_opt = "";
gf_log (this->name, GF_LOG_ERROR,
"please specify the volume-uuid"
@@ -3233,16 +3215,32 @@ init_xtime_priv (xlator_t *this, dict_t *options)
return -1;
}
+ gf_asprintf (&priv->volume_uuid, "%s", tmp_opt);
- if ((data = dict_get (options, TIMESTAMP_FILE)) != NULL) {
- priv->timestamp_file = data->data;
+ ret = gf_uuid_parse (priv->volume_uuid, priv->volume_uuid_bin);
- gf_log (this->name, GF_LOG_DEBUG,
- "the timestamp-file is = %s",
- priv->timestamp_file);
+ if (ret == -1) {
+ gf_log (this->name, GF_LOG_ERROR,
+ "invalid volume uuid %s", priv->volume_uuid);
+ goto out;
+ }
- } else {
+ ret = gf_asprintf (&(priv->marker_xattr), "%s.%s.%s",
+ MARKER_XATTR_PREFIX, priv->volume_uuid,
+ XTIME);
+
+ if (ret == -1) {
+ priv->marker_xattr = NULL;
+ goto out;
+ }
+
+ gf_log (this->name, GF_LOG_DEBUG,
+ "volume-uuid = %s", priv->volume_uuid);
+
+ ret = dict_get_str (options, "timestamp-file", &tmp_opt);
+ if (ret) {
priv->timestamp_file = NULL;
+ tmp_opt = "";
gf_log (this->name, GF_LOG_ERROR,
"please specify the timestamp-file"
@@ -3251,6 +3249,15 @@ init_xtime_priv (xlator_t *this, dict_t *options)
goto out;
}
+ ret = gf_asprintf (&priv->timestamp_file, "%s", tmp_opt);
+ if (ret == -1) {
+ priv->timestamp_file = NULL;
+ goto out;
+ }
+
+ gf_log (this->name, GF_LOG_DEBUG,
+ "the timestamp-file is = %s", priv->timestamp_file);
+
ret = 0;
out:
return ret;
@@ -3292,6 +3299,12 @@ marker_priv_cleanup (xlator_t *this)
LOCK_DESTROY (&priv->lock);
GF_FREE (priv);
+
+ if (this->local_pool) {
+ mem_pool_destroy (this->local_pool);
+ this->local_pool = NULL;
+ }
+
out:
return;
}
diff --git a/xlators/features/quota/src/quota.c b/xlators/features/quota/src/quota.c
index af7b65a..c4817bc 100644
--- a/xlators/features/quota/src/quota.c
+++ b/xlators/features/quota/src/quota.c
@@ -5221,12 +5221,14 @@ quota_priv_dump (xlator_t *this)
GF_ASSERT (this);
priv = this->private;
+ if (!priv)
+ goto out;
gf_proc_dump_add_section ("xlators.features.quota.priv", this->name);
ret = TRY_LOCK (&priv->lock);
if (ret)
- goto out;
+ goto out;
else {
gf_proc_dump_write("soft-timeout", "%d", priv->soft_timeout);
gf_proc_dump_write("hard-timeout", "%d", priv->hard_timeout);
@@ -5246,6 +5248,27 @@ out:
void
fini (xlator_t *this)
{
+ quota_priv_t *priv = NULL;
+ rpc_clnt_t *rpc = NULL;
+ int i = 0, cnt = 0;
+
+ priv = this->private;
+ if (!priv)
+ return;
+ rpc = priv->rpc_clnt;
+ priv->rpc_clnt = NULL;
+ this->private = NULL;
+ if (rpc) {
+ cnt = GF_ATOMIC_GET (rpc->refcount);
+ for (i = 0; i < cnt; i++)
+ rpc_clnt_unref (rpc);
+ }
+ LOCK_DESTROY (&priv->lock);
+ GF_FREE (priv);
+ if (this->local_pool) {
+ mem_pool_destroy (this->local_pool);
+ this->local_pool = NULL;
+ }
return;
}
diff --git a/xlators/features/shard/src/shard.c b/xlators/features/shard/src/shard.c
index 945458e..29989d3 100644
--- a/xlators/features/shard/src/shard.c
+++ b/xlators/features/shard/src/shard.c
@@ -5514,6 +5514,9 @@ shard_forget (xlator_t *this, inode_t *inode)
shard_priv_t *priv = NULL;
priv = this->private;
+ if (!priv)
+ return 0;
+
inode_ctx_del (inode, this, &ctx_uint);
if (!ctx_uint)
return 0;
diff --git a/xlators/features/trash/src/trash.c b/xlators/features/trash/src/trash.c
index 4a41a14..d114858 100644
--- a/xlators/features/trash/src/trash.c
+++ b/xlators/features/trash/src/trash.c
@@ -33,7 +33,6 @@ trash_unlink_rename_cbk (call_frame_t *frame, void *cookie, xlator_t *this,
struct iatt *preoldparent, struct iatt *postoldparent,
struct iatt *prenewparent, struct iatt *postnewparent,
dict_t *xdata);
-
/* Common routines used in this translator */
/**
@@ -2406,6 +2405,7 @@ notify (xlator_t *this, int event, void *data, ...)
ret = create_internalop_directory (this);
}
+
out:
ret = default_notify (this, event, data);
if (ret)
@@ -2587,10 +2587,11 @@ void
fini (xlator_t *this)
{
trash_private_t *priv = NULL;
+ inode_table_t *inode_table = NULL;
GF_VALIDATE_OR_GOTO ("trash", this, out);
priv = this->private;
-
+ inode_table = priv->trash_itable;
if (priv) {
if (priv->newtrash_dir)
GF_FREE (priv->newtrash_dir);
@@ -2600,9 +2601,17 @@ fini (xlator_t *this)
GF_FREE (priv->brick_path);
if (priv->eliminate)
wipe_eliminate_path (&priv->eliminate);
+ if (inode_table) {
+ inode_table_destroy (inode_table);
+ priv->trash_itable = NULL;
+ }
GF_FREE (priv);
}
- mem_pool_destroy (this->local_pool);
+
+ if (this->local_pool) {
+ mem_pool_destroy (this->local_pool);
+ this->local_pool = NULL;
+ }
this->private = NULL;
out:
return;
diff --git a/xlators/features/upcall/src/upcall.c b/xlators/features/upcall/src/upcall.c
index 3e1d307..66d22f6 100644
--- a/xlators/features/upcall/src/upcall.c
+++ b/xlators/features/upcall/src/upcall.c
@@ -2447,8 +2447,11 @@ fini (xlator_t *this)
priv->fini = 1;
- if (priv->reaper_init_done)
- pthread_join (priv->reaper_thr, NULL);
+ if (priv->reaper_thr) {
+ gf_thread_cleanup_xint (priv->reaper_thr);
+ priv->reaper_thr = 0;
+ priv->reaper_init_done = _gf_false;
+ }
dict_unref (priv->xattrs);
LOCK_DESTROY (&priv->inode_ctx_lk);
@@ -2458,13 +2461,24 @@ fini (xlator_t *this)
* before calling xlator_fini */
GF_FREE (priv);
+ if (this->local_pool) {
+ mem_pool_destroy (this->local_pool);
+ this->local_pool = NULL;
+ }
+
return 0;
}
int
upcall_forget (xlator_t *this, inode_t *inode)
{
+ upcall_private_t *priv = this->private;
+
+ if (!priv)
+ goto out;
+
upcall_cleanup_inode_ctx (this, inode);
+out:
return 0;
}
diff --git a/xlators/performance/decompounder/src/decompounder.c b/xlators/performance/decompounder/src/decompounder.c
index d3d9b9f..095a300 100644
--- a/xlators/performance/decompounder/src/decompounder.c
+++ b/xlators/performance/decompounder/src/decompounder.c
@@ -946,5 +946,12 @@ out:
int32_t
fini (xlator_t *this)
{
+ if (!this)
+ return 0;
+
+ if (this->local_pool) {
+ mem_pool_destroy (this->local_pool);
+ this->local_pool = NULL;
+ }
return 0;
}
diff --git a/xlators/performance/io-threads/src/io-threads.c b/xlators/performance/io-threads/src/io-threads.c
index 7c020e2..1e1816a 100644
--- a/xlators/performance/io-threads/src/io-threads.c
+++ b/xlators/performance/io-threads/src/io-threads.c
@@ -356,7 +356,8 @@ iot_schedule (call_frame_t *frame, xlator_t *this, call_stub_t *stub)
out:
gf_msg_debug (this->name, 0, "%s scheduled as %s fop",
gf_fop_list[stub->fop], iot_get_pri_meaning (pri));
- ret = do_iot_schedule (this->private, stub, pri);
+ if (this->private)
+ ret = do_iot_schedule (this->private, stub, pri);
return ret;
}
@@ -1073,8 +1074,7 @@ notify (xlator_t *this, int32_t event, void *data, ...)
{
iot_conf_t *conf = this->private;
- if ((GF_EVENT_PARENT_DOWN == event) ||
- (GF_EVENT_CLEANUP == event))
+ if (GF_EVENT_PARENT_DOWN == event)
iot_exit_threads (conf);
default_notify (this, event, data);
diff --git a/xlators/protocol/server/src/server-rpc-fops.c b/xlators/protocol/server/src/server-rpc-fops.c
index 21f78a3..91d5c03 100644
--- a/xlators/protocol/server/src/server-rpc-fops.c
+++ b/xlators/protocol/server/src/server-rpc-fops.c
@@ -3487,6 +3487,13 @@ rpc_receive_common (rpcsvc_request_t *req, call_frame_t **fr,
SERVER_REQ_SET_ERROR (req, ret);
goto out;
}
+
+ if (!(*fr)->root->client->bound_xl->itable) {
+ /* inode_table is not allocated successful in server_setvolume */
+ SERVER_REQ_SET_ERROR (req, ret);
+ goto out;
+ }
+
ret = 0;
out:
diff --git a/xlators/protocol/server/src/server.c b/xlators/protocol/server/src/server.c
index 89fde39..737bb96 100644
--- a/xlators/protocol/server/src/server.c
+++ b/xlators/protocol/server/src/server.c
@@ -1578,7 +1578,6 @@ notify (xlator_t *this, int32_t event, void *data, ...)
victim->name);
/* we need the protocol/server xlator here as 'this' */
glusterfs_autoscale_threads (ctx, -1, this);
- default_notify (victim, GF_EVENT_CLEANUP, data);
}
break;
diff --git a/xlators/storage/posix/src/posix-handle.h b/xlators/storage/posix/src/posix-handle.h
index a40feb5..cb1f84e 100644
--- a/xlators/storage/posix/src/posix-handle.h
+++ b/xlators/storage/posix/src/posix-handle.h
@@ -180,6 +180,12 @@
#define MAKE_INODE_HANDLE(rpath, this, loc, iatt_p) do { \
+ if (!this->private) { \
+ gf_msg ("make_inode_handle", GF_LOG_ERROR, 0, \
+ P_MSG_INODE_HANDLE_CREATE, \
+ "private is NULL, fini is already called"); \
+ break; \
+ } \
if (gf_uuid_is_null (loc->gfid)) { \
gf_msg (this->name, GF_LOG_ERROR, 0, \
P_MSG_INODE_HANDLE_CREATE, \
diff --git a/xlators/storage/posix/src/posix-helpers.c b/xlators/storage/posix/src/posix-helpers.c
index 4107265..334175d 100644
--- a/xlators/storage/posix/src/posix-helpers.c
+++ b/xlators/storage/posix/src/posix-helpers.c
@@ -1972,6 +1972,7 @@ abort:
gf_log (THIS->name, GF_LOG_INFO, "detaching not-only "
" child %s", priv->base_path);
top->notify (top, GF_EVENT_CLEANUP, victim);
+ xlator_mem_cleanup (victim);
}
}
diff --git a/xlators/storage/posix/src/posix.c b/xlators/storage/posix/src/posix.c
index d1ef8a2..74ee98f 100644
--- a/xlators/storage/posix/src/posix.c
+++ b/xlators/storage/posix/src/posix.c
@@ -172,6 +172,8 @@ posix_forget (xlator_t *this, inode_t *inode)
struct posix_private *priv_posix = NULL;
priv_posix = (struct posix_private *) this->private;
+ if (!priv_posix)
+ return 0;
ret = inode_ctx_del (inode, this, &ctx_uint);
if (!ctx_uint)
@@ -226,6 +228,7 @@ posix_lookup (call_frame_t *frame, xlator_t *this,
VALIDATE_OR_GOTO (frame, out);
VALIDATE_OR_GOTO (this, out);
VALIDATE_OR_GOTO (loc, out);
+ VALIDATE_OR_GOTO (this->private, out);
priv = this->private;
@@ -1304,6 +1307,8 @@ posix_releasedir (xlator_t *this,
}
priv = this->private;
+ if (!priv)
+ goto out;
pthread_mutex_lock (&priv->janitor_lock);
{
@@ -2103,6 +2108,7 @@ posix_unlink (call_frame_t *frame, xlator_t *this,
VALIDATE_OR_GOTO (frame, out);
VALIDATE_OR_GOTO (this, out);
VALIDATE_OR_GOTO (loc, out);
+ VALIDATE_OR_GOTO (this->private, out);
SET_FS_ID (frame->root->uid, frame->root->gid);
MAKE_ENTRY_HANDLE (real_path, par_path, this, loc, &stbuf);
@@ -3880,6 +3886,8 @@ posix_release (xlator_t *this, fd_t *fd)
"pfd->dir is %p (not NULL) for file fd=%p",
pfd->dir, fd);
}
+ if (!priv)
+ goto out;
pthread_mutex_lock (&priv->janitor_lock);
{
@@ -4067,6 +4075,7 @@ posix_setxattr (call_frame_t *frame, xlator_t *this,
VALIDATE_OR_GOTO (frame, out);
VALIDATE_OR_GOTO (this, out);
+ VALIDATE_OR_GOTO (this->private, out);
VALIDATE_OR_GOTO (loc, out);
VALIDATE_OR_GOTO (dict, out);
@@ -4650,6 +4659,7 @@ posix_getxattr (call_frame_t *frame, xlator_t *this,
VALIDATE_OR_GOTO (frame, out);
VALIDATE_OR_GOTO (this, out);
VALIDATE_OR_GOTO (loc, out);
+ VALIDATE_OR_GOTO (this->private, out);
SET_FS_ID (frame->root->uid, frame->root->gid);
MAKE_INODE_HANDLE (real_path, this, loc, NULL);
@@ -4757,11 +4767,12 @@ posix_getxattr (call_frame_t *frame, xlator_t *this,
goto done;
}
if (loc->inode && name && (XATTR_IS_PATHINFO (name))) {
- if (LOC_HAS_ABSPATH (loc))
+ VALIDATE_OR_GOTO (this->private, out);
+ if (LOC_HAS_ABSPATH (loc)) {
MAKE_REAL_PATH (rpath, this, loc->path);
- else
+ } else {
rpath = real_path;
-
+ }
(void) snprintf (host_buf, sizeof(host_buf),
"<POSIX(%s):%s:%s>", priv->base_path,
((priv->node_uuid_pathinfo
@@ -7018,9 +7029,6 @@ notify (xlator_t *this,
void *data,
...)
{
- struct posix_private *priv = NULL;
-
- priv = this->private;
switch (event)
{
case GF_EVENT_PARENT_UP:
@@ -7029,31 +7037,6 @@ notify (xlator_t *this,
default_notify (this, GF_EVENT_CHILD_UP, data);
}
break;
- case GF_EVENT_CLEANUP:
- if (priv->health_check) {
- priv->health_check_active = _gf_false;
- pthread_cancel (priv->health_check);
- priv->health_check = 0;
- }
- if (priv->disk_space_check) {
- priv->disk_space_check_active = _gf_false;
- pthread_cancel (priv->disk_space_check);
- priv->disk_space_check = 0;
- }
- if (priv->janitor) {
- (void) gf_thread_cleanup_xint (priv->janitor);
- priv->janitor = 0;
- }
- if (priv->fsyncer) {
- (void) gf_thread_cleanup_xint (priv->fsyncer);
- priv->fsyncer = 0;
- }
- if (priv->mount_lock) {
- (void) sys_closedir (priv->mount_lock);
- priv->mount_lock = NULL;
- }
-
- break;
default:
/* */
break;
@@ -7917,10 +7900,36 @@ fini (xlator_t *this)
if (!priv)
return;
this->private = NULL;
+ if (priv->health_check) {
+ priv->health_check_active = _gf_false;
+ pthread_cancel (priv->health_check);
+ priv->health_check = 0;
+ }
+ if (priv->disk_space_check) {
+ priv->disk_space_check_active = _gf_false;
+ pthread_cancel (priv->disk_space_check);
+ priv->disk_space_check = 0;
+ }
+ if (priv->janitor) {
+ (void) gf_thread_cleanup_xint (priv->janitor);
+ priv->janitor = 0;
+ }
+ if (priv->fsyncer) {
+ (void) gf_thread_cleanup_xint (priv->fsyncer);
+ priv->fsyncer = 0;
+ }
/*unlock brick dir*/
if (priv->mount_lock)
(void) sys_closedir (priv->mount_lock);
+
+ GF_FREE (priv->base_path);
+ LOCK_DESTROY (&priv->lock);
+ pthread_mutex_destroy (&priv->janitor_lock);
+ pthread_mutex_destroy (&priv->fsync_mutex);
+ GF_FREE (priv->hostname);
+ GF_FREE (priv->trash_path);
GF_FREE (priv);
+
return;
}
struct xlator_dumpops dumpops = {
diff --git a/xlators/system/posix-acl/src/posix-acl.c b/xlators/system/posix-acl/src/posix-acl.c
index 5dac688..aadd6fc 100644
--- a/xlators/system/posix-acl/src/posix-acl.c
+++ b/xlators/system/posix-acl/src/posix-acl.c
@@ -582,13 +582,15 @@ posix_acl_unref (xlator_t *this, struct posix_acl *acl)
int refcnt = 0;
conf = this->private;
+ if (!conf)
+ goto out;
LOCK(&conf->acl_lock);
{
refcnt = --acl->refcnt;
}
UNLOCK(&conf->acl_lock);
-
+out:
if (!refcnt)
posix_acl_destroy (this, acl);
}
--
1.8.3.1
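The teardown order used by the new xlator_call_fini in patch 225 — recurse to the end of the graph first, then call fini on the way back so a child is destroyed before its parent — can be sketched in isolation. This is a minimal illustration, not gluster code: the node type, the `record_fini` callback, and the chain names are hypothetical stand-ins for `xlator_t`, its fini pointer, and a brick graph.

```c
#include <assert.h>
#include <stddef.h>
#include <string.h>

/* Hypothetical stand-in for xlator_t: a singly linked graph node with a
 * fini callback, mirroring the shape of xlator_call_fini in the patch. */
struct node {
    const char *name;
    struct node *next;
    void (*fini)(struct node *n);
};

/* Records teardown order so it can be inspected afterwards. */
static char teardown_order[64];

static void record_fini(struct node *n)
{
    strcat(teardown_order, n->name);
    strcat(teardown_order, " ");
}

/* Recurse to the tail first, then call fini on the way back, so the
 * deepest node is torn down before the nodes above it -- the same
 * bottom-up order xlator_call_fini imposes on the xlator graph. */
static void call_fini(struct node *n)
{
    if (!n)
        return;
    call_fini(n->next);
    n->fini(n);
}

/* Builds a three-node chain (server -> io-stats -> posix) and tears it
 * down; returns the recorded order. */
static const char *demo_teardown(void)
{
    static struct node c = { "posix", NULL, record_fini };
    static struct node b = { "io-stats", &c, record_fini };
    static struct node a = { "server", &b, record_fini };

    teardown_order[0] = '\0';
    call_fini(&a);
    return teardown_order;
}
```

With this shape, `xlator_mem_cleanup` can then free each node's memory in a second, iterative pass, which is why the patch separates `xlator_call_fini` from `xlator_mem_free`.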

From 3d79f49f2c7752f8f43a35563f7a1253c901db60 Mon Sep 17 00:00:00 2001
From: Ravishankar N <ravishankar@redhat.com>
Date: Tue, 27 Mar 2018 20:54:25 +0530
Subject: [PATCH 227/236] afr: add quorum checks in pre-op
Upstream patch: https://review.gluster.org/#/c/19781/
Problem:
We seem to wind the FOP even when the pre-op did not succeed on a quorum
of bricks, and then fail the FOP with EROFS because it did not meet
quorum. This masks the actual error that caused the pre-op to fail. (See
BZ).
Fix:
Skip FOP phase if pre-op quorum is not met and go to post-op.
Change-Id: Ie58a41e8fa1ad79aa06093706e96db8eef61b6d9
BUG: 1554291
Signed-off-by: Ravishankar N <ravishankar@redhat.com>
Reviewed-on: https://code.engineering.redhat.com/gerrit/136227
Tested-by: RHGS Build Bot <nigelb@redhat.com>
Reviewed-by: Pranith Kumar Karampuri <pkarampu@redhat.com>
---
xlators/cluster/afr/src/afr-transaction.c | 64 +++++++++++++++----------------
1 file changed, 31 insertions(+), 33 deletions(-)
diff --git a/xlators/cluster/afr/src/afr-transaction.c b/xlators/cluster/afr/src/afr-transaction.c
index 993029d..88dc821 100644
--- a/xlators/cluster/afr/src/afr-transaction.c
+++ b/xlators/cluster/afr/src/afr-transaction.c
@@ -144,6 +144,29 @@ afr_needs_changelog_update (afr_local_t *local)
return _gf_false;
}
+gf_boolean_t
+afr_changelog_has_quorum (afr_local_t *local, xlator_t *this)
+{
+ afr_private_t *priv = NULL;
+ int i = 0;
+ unsigned char *success_children = NULL;
+
+ priv = this->private;
+ success_children = alloca0 (priv->child_count);
+
+ for (i = 0; i < priv->child_count; i++) {
+ if (!local->transaction.failed_subvols[i]) {
+ success_children[i] = 1;
+ }
+ }
+
+ if (afr_has_quorum (success_children, this)) {
+ return _gf_true;
+ }
+
+ return _gf_false;
+}
+
int
afr_transaction_fop (call_frame_t *frame, xlator_t *this)
{
@@ -157,17 +180,16 @@ afr_transaction_fop (call_frame_t *frame, xlator_t *this)
priv = this->private;
failed_subvols = local->transaction.failed_subvols;
-
call_count = priv->child_count - AFR_COUNT (failed_subvols,
priv->child_count);
-
- if (call_count == 0) {
+ /* Fail if pre-op did not succeed on quorum no. of bricks. */
+ if (!afr_changelog_has_quorum (local, this) || !call_count) {
+ local->op_ret = -1;
+ /* local->op_errno is already captured in changelog cbk. */
afr_transaction_resume (frame, this);
return 0;
}
-
local->call_count = call_count;
-
for (i = 0; i < priv->child_count; i++) {
if (local->transaction.pre_op[i] && !failed_subvols[i]) {
local->transaction.wind (frame, this, i);
@@ -531,33 +553,6 @@ afr_set_pending_dict (afr_private_t *priv, dict_t *xattr, int **pending)
/* {{{ pending */
-
-void
-afr_handle_post_op_quorum (afr_local_t *local, xlator_t *this)
-{
- afr_private_t *priv = NULL;
- int i = 0;
- unsigned char *post_op_children = NULL;
-
- priv = this->private;
- post_op_children = alloca0 (priv->child_count);
-
- for (i = 0; i < priv->child_count; i++) {
- if (!local->transaction.failed_subvols[i]) {
- post_op_children[i] = 1;
- }
- }
-
- if (afr_has_quorum (post_op_children, this)) {
- return;
- }
-
- local->op_ret = -1;
- /*local->op_errno is already captured in post-op callback.*/
-
- return;
-}
-
int
afr_changelog_post_op_done (call_frame_t *frame, xlator_t *this)
{
@@ -568,7 +563,10 @@ afr_changelog_post_op_done (call_frame_t *frame, xlator_t *this)
int_lock = &local->internal_lock;
/* Fail the FOP if post-op did not succeed on quorum no. of bricks. */
- afr_handle_post_op_quorum (local, this);
+ if (!afr_changelog_has_quorum (local, this)) {
+ local->op_ret = -1;
+ /*local->op_errno is already captured in changelog cbk*/
+ }
if (local->transaction.resume_stub) {
call_resume (local->transaction.resume_stub);
--
1.8.3.1
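The patch above folds the old post-op-only helper into a reusable boolean check that both the pre-op and post-op paths call. A minimal Python model of that check, assuming the simple more-than-half quorum rule (the real afr_has_quorum also honours the cluster.quorum-type/quorum-count options):

```python
def changelog_has_quorum(failed_subvols):
    # success_children[i] is 1 when the changelog pre-op/post-op did not
    # fail on child i (mirrors the loop in afr_changelog_has_quorum)
    success_children = [0 if failed else 1 for failed in failed_subvols]
    # stand-in for afr_has_quorum: strictly more than half of the
    # children must have succeeded
    return sum(success_children) > len(success_children) // 2

# replica 3 with one failed brick: quorum holds, the FOP is wound
assert changelog_has_quorum([False, False, True]) is True
# replica 3 with two failed bricks: quorum lost, op_ret is forced to -1
assert changelog_has_quorum([False, True, True]) is False
```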


@ -0,0 +1,79 @@
From a7e4ed507c3332f896fb4822cfc3f98731c11785 Mon Sep 17 00:00:00 2001
From: Ravishankar N <ravishankar@redhat.com>
Date: Mon, 16 Apr 2018 15:38:34 +0530
Subject: [PATCH 228/236] afr: fixes to afr-eager locking
Upstream patch: https://review.gluster.org/#/c/19879/
1. If pre-op fails on all bricks, set lock->release to true in
afr_handle_lock_acquire_failure so that the GF_ASSERT in afr_unlock() does not
crash.
2. Added a missing 'return' after handling pre-op failure in
afr_transaction_perform_fop(), fixing a use-after-free issue.
Change-Id: If0627a9124cb5d6405037cab3f17f8325eed2d83
BUG: 1554291
Signed-off-by: Ravishankar N <ravishankar@redhat.com>
Reviewed-on: https://code.engineering.redhat.com/gerrit/136228
Reviewed-by: Pranith Kumar Karampuri <pkarampu@redhat.com>
Tested-by: RHGS Build Bot <nigelb@redhat.com>
---
tests/bugs/replicate/bug-1561129-enospc.t | 24 ++++++++++++++++++++++++
xlators/cluster/afr/src/afr-transaction.c | 2 ++
2 files changed, 26 insertions(+)
create mode 100644 tests/bugs/replicate/bug-1561129-enospc.t
diff --git a/tests/bugs/replicate/bug-1561129-enospc.t b/tests/bugs/replicate/bug-1561129-enospc.t
new file mode 100644
index 0000000..1b402fc
--- /dev/null
+++ b/tests/bugs/replicate/bug-1561129-enospc.t
@@ -0,0 +1,24 @@
+#!/bin/bash
+#Tests that sequential write workload doesn't lead to FSYNCs
+
+. $(dirname $0)/../../include.rc
+. $(dirname $0)/../../volume.rc
+
+cleanup;
+
+TEST truncate -s 128M $B0/xfs_image
+TEST mkfs.xfs -f $B0/xfs_image
+TEST mkdir $B0/bricks
+TEST mount -t xfs -o loop $B0/xfs_image $B0/bricks
+
+TEST glusterd
+TEST pidof glusterd
+TEST $CLI volume create $V0 replica 3 $H0:$B0/bricks/brick{0,1,3}
+TEST $CLI volume start $V0
+TEST $GFS --volfile-id=$V0 --volfile-server=$H0 $M0;
+
+# Write 50MB of data, which will try to consume 50x3=150MB on $B0/bricks.
+# Before that, we hit ENOSPC in pre-op cbk, which should not crash the mount.
+TEST ! dd if=/dev/zero of=$M0/a bs=1M count=50
+TEST stat $M0/a
+cleanup;
diff --git a/xlators/cluster/afr/src/afr-transaction.c b/xlators/cluster/afr/src/afr-transaction.c
index 88dc821..0506a78 100644
--- a/xlators/cluster/afr/src/afr-transaction.c
+++ b/xlators/cluster/afr/src/afr-transaction.c
@@ -285,6 +285,7 @@ afr_handle_lock_acquire_failure (afr_local_t *local, gf_boolean_t locked)
INIT_LIST_HEAD (&shared);
LOCK (&local->inode->lock);
{
+ lock->release = _gf_true;
list_splice_init (&lock->waiting, &shared);
}
UNLOCK (&local->inode->lock);
@@ -510,6 +511,7 @@ afr_transaction_perform_fop (call_frame_t *frame, xlator_t *this)
priv->child_count);
if (failure_count == priv->child_count) {
afr_handle_lock_acquire_failure (local, _gf_true);
+ return 0;
} else {
lock = &local->inode_ctx->lock[local->transaction.type];
LOCK (&local->inode->lock);
--
1.8.3.1
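The two fixes in this patch can be sketched as a toy state machine (the Lock class and return values here are hypothetical stand-ins, not the real afr structures):

```python
class Lock:
    def __init__(self):
        self.release = False
        self.waiting = ["stub-a", "stub-b"]

def handle_lock_acquire_failure(lock):
    # fix 1: mark the lock for release *before* failing the fop, so a
    # later unlock path that asserts lock.release does not abort
    lock.release = True
    shared, lock.waiting = lock.waiting, []
    return shared  # the waiting stubs that are failed along with us

def transaction_perform_fop(failed_subvols, lock):
    failure_count = sum(1 for f in failed_subvols if f)
    if failure_count == len(failed_subvols):
        handle_lock_acquire_failure(lock)
        # fix 2: return here; falling through to the wind path touched
        # already-freed state in the original code
        return "failed"
    return "wound"

lock = Lock()
assert transaction_perform_fop([True, True, True], lock) == "failed"
assert lock.release is True and lock.waiting == []
assert transaction_perform_fop([False, True, True], Lock()) == "wound"
```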


@ -0,0 +1,93 @@
From 80efc701cb29684355cd133fbf29c14948772ba1 Mon Sep 17 00:00:00 2001
From: Susant Palai <spalai@redhat.com>
Date: Wed, 11 Apr 2018 23:14:02 +0530
Subject: [PATCH 229/236] fuse: do fd_resolve in fuse_getattr if fd is received
Problem: With the current code, the old fd is received for fuse_getattr
after a graph switch; since it is associated with the old inode, it does
not have the inode ctx across xlators in the new graph. Hence dht
errored out with "no layout" for the fstat call, surfacing as EINVAL.
Solution: if an fd is passed, init and resolve the fd to carry on with getattr
test case:
- Created a single brick distributed volume
- Started untar
- Added a new-brick
Without this fix, untar used to abort with ERROR.
upstream patch: https://review.gluster.org/#/c/19849/
> Change-Id: I5805c463fb9a04ba5c24829b768127097ff8b9f9
> fixes: bz#1566207
> Signed-off-by: Susant Palai <spalai@redhat.com>
> (cherry picked from commit 87bcdd9465b140e0b9d33dadf3384e28b7b6ed9f)
Change-Id: I5805c463fb9a04ba5c24829b768127097ff8b9f9
BUG: 1563692
Signed-off-by: Susant Palai <spalai@redhat.com>
Reviewed-on: https://code.engineering.redhat.com/gerrit/136232
Tested-by: RHGS Build Bot <nigelb@redhat.com>
Reviewed-by: Sunil Kumar Heggodu Gopala Acharya <sheggodu@redhat.com>
---
xlators/cluster/dht/src/dht-inode-read.c | 4 ++--
xlators/mount/fuse/src/fuse-bridge.c | 13 ++++++++-----
2 files changed, 10 insertions(+), 7 deletions(-)
diff --git a/xlators/cluster/dht/src/dht-inode-read.c b/xlators/cluster/dht/src/dht-inode-read.c
index fa63fef..d1895eb 100644
--- a/xlators/cluster/dht/src/dht-inode-read.c
+++ b/xlators/cluster/dht/src/dht-inode-read.c
@@ -400,8 +400,8 @@ dht_fstat (call_frame_t *frame, xlator_t *this, fd_t *fd, dict_t *xdata)
layout = local->layout;
if (!layout) {
- gf_msg_debug (this->name, 0,
- "no layout for fd=%p", fd);
+ gf_msg (this->name, GF_LOG_ERROR, 0, 0,
+ "no layout for fd=%p", fd);
op_errno = EINVAL;
goto err;
}
diff --git a/xlators/mount/fuse/src/fuse-bridge.c b/xlators/mount/fuse/src/fuse-bridge.c
index 3e31eca..44697d2 100644
--- a/xlators/mount/fuse/src/fuse-bridge.c
+++ b/xlators/mount/fuse/src/fuse-bridge.c
@@ -893,7 +893,7 @@ fuse_root_lookup_cbk (call_frame_t *frame, void *cookie, xlator_t *this,
void
fuse_getattr_resume (fuse_state_t *state)
{
- if (!state->loc.inode) {
+ if (!state->loc.inode && !(state->fd && state->fd->inode)) {
gf_log ("glusterfs-fuse", GF_LOG_ERROR,
"%"PRIu64": GETATTR %"PRIu64" (%s) resolution failed",
state->finh->unique, state->finh->nodeid,
@@ -904,9 +904,9 @@ fuse_getattr_resume (fuse_state_t *state)
return;
}
- if (!IA_ISDIR (state->loc.inode->ia_type)) {
- if (state->fd == NULL)
- state->fd = fd_lookup (state->loc.inode, state->finh->pid);
+ if (state->fd == NULL && !IA_ISDIR (state->loc.inode->ia_type)) {
+ state->fd = fd_lookup (state->loc.inode, state->finh->pid);
+
if (state->fd == NULL)
state->fd = fd_lookup (state->loc.inode, 0);
}
@@ -947,7 +947,10 @@ fuse_getattr (xlator_t *this, fuse_in_header_t *finh, void *msg)
state->fd = fd_ref ((fd_t *)fgi->fh);
#endif
- fuse_resolve_inode_init (state, &state->resolve, state->finh->nodeid);
+ if (state->fd)
+ fuse_resolve_fd_init (state, &state->resolve, state->fd);
+ else
+ fuse_resolve_inode_init (state, &state->resolve, state->finh->nodeid);
fuse_resolve_and_resume (state, fuse_getattr_resume);
}
--
1.8.3.1
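The dispatch change above can be modelled in a few lines (hypothetical dict-based fd objects, not the real fuse_state_t fields):

```python
def choose_resolver(fd, nodeid):
    # if the kernel handed us an fd with FUSE_GETATTR, resolve the fd
    # (which migrates it onto the new graph after a switch); otherwise
    # fall back to resolving the inode by nodeid
    return ("resolve_fd", fd) if fd is not None else ("resolve_inode", nodeid)

def resume_ok(loc_inode, fd):
    # the resume guard now also accepts a successfully resolved fd
    return loc_inode is not None or (fd is not None and fd.get("inode") is not None)

assert choose_resolver({"inode": "i1"}, 42)[0] == "resolve_fd"
assert choose_resolver(None, 42) == ("resolve_inode", 42)
assert resume_ok(None, {"inode": "i1"}) is True
assert resume_ok(None, None) is False
```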


@ -0,0 +1,392 @@
From a661f617d22ab7555a039841c1959019af3e80a3 Mon Sep 17 00:00:00 2001
From: hari gowtham <hgowtham@redhat.com>
Date: Thu, 19 Apr 2018 12:22:07 +0530
Subject: [PATCH 230/236] glusterd: volume inode/fd status broken with brick
mux
backport of:https://review.gluster.org/#/c/19846/6
Problem:
The values for inode/fd were populated from the ctx received
from the server xlator.
Without brick mux, every brick process served a single brick
from a single volume, so searching the server xlator and
populating from it worked.
With brick mux, a number of bricks can be confined to a single
process. These bricks can be from different volumes too (if
the max-bricks-per-process option is used). If they are from
different volumes, using the server xlator to populate the
status causes problems.
Fix:
Use the brick to validate and populate the inode/fd status.
>Signed-off-by: hari gowtham <hgowtham@redhat.com>
>Change-Id: I2543fa5397ea095f8338b518460037bba3dfdbfd
>fixes: bz#1566067
Signed-off-by: hari gowtham <hgowtham@redhat.com>
Change-Id: I2543fa5397ea095f8338b518460037bba3dfdbfd
BUG: 1559452
Reviewed-on: https://code.engineering.redhat.com/gerrit/136219
Tested-by: RHGS Build Bot <nigelb@redhat.com>
Reviewed-by: Atin Mukherjee <amukherj@redhat.com>
---
glusterfsd/src/glusterfsd-mgmt.c | 34 ++++++------
libglusterfs/src/client_t.c | 54 ++++++++++---------
libglusterfs/src/xlator.h | 3 +-
tests/basic/volume-status.t | 12 +++++
xlators/mgmt/glusterd/src/glusterd-handler.c | 4 ++
xlators/mgmt/glusterd/src/glusterd-op-sm.c | 3 ++
xlators/nfs/server/src/nfs.c | 2 +-
xlators/protocol/server/src/server.c | 77 ++++++++++++++++------------
8 files changed, 111 insertions(+), 78 deletions(-)
diff --git a/glusterfsd/src/glusterfsd-mgmt.c b/glusterfsd/src/glusterfsd-mgmt.c
index fdf403c..3b9671c 100644
--- a/glusterfsd/src/glusterfsd-mgmt.c
+++ b/glusterfsd/src/glusterfsd-mgmt.c
@@ -1111,14 +1111,14 @@ glusterfs_handle_brick_status (rpcsvc_request_t *req)
glusterfs_ctx_t *ctx = NULL;
glusterfs_graph_t *active = NULL;
xlator_t *this = NULL;
- xlator_t *any = NULL;
- xlator_t *xlator = NULL;
+ xlator_t *server_xl = NULL;
+ xlator_t *brick_xl = NULL;
dict_t *dict = NULL;
dict_t *output = NULL;
- char *volname = NULL;
char *xname = NULL;
uint32_t cmd = 0;
char *msg = NULL;
+ char *brickname = NULL;
GF_ASSERT (req);
this = THIS;
@@ -1146,32 +1146,26 @@ glusterfs_handle_brick_status (rpcsvc_request_t *req)
goto out;
}
- ret = dict_get_str (dict, "volname", &volname);
+ ret = dict_get_str (dict, "brick-name", &brickname);
if (ret) {
- gf_log (this->name, GF_LOG_ERROR, "Couldn't get volname");
+ gf_log (this->name, GF_LOG_ERROR, "Couldn't get brickname from"
+ " dict");
goto out;
}
ctx = glusterfsd_ctx;
GF_ASSERT (ctx);
active = ctx->active;
- any = active->first;
+ server_xl = active->first;
- ret = gf_asprintf (&xname, "%s-server", volname);
- if (-1 == ret) {
- gf_log (this->name, GF_LOG_ERROR, "Out of memory");
- goto out;
- }
-
- xlator = xlator_search_by_name (any, xname);
- if (!xlator) {
+ brick_xl = get_xlator_by_name (server_xl, brickname);
+ if (!brick_xl) {
gf_log (this->name, GF_LOG_ERROR, "xlator %s is not loaded",
xname);
ret = -1;
goto out;
}
-
output = dict_new ();
switch (cmd & GF_CLI_STATUS_MASK) {
case GF_CLI_STATUS_MEM:
@@ -1181,15 +1175,17 @@ glusterfs_handle_brick_status (rpcsvc_request_t *req)
break;
case GF_CLI_STATUS_CLIENTS:
- ret = xlator->dumpops->priv_to_dict (xlator, output);
+ ret = server_xl->dumpops->priv_to_dict (server_xl,
+ output, brickname);
break;
case GF_CLI_STATUS_INODE:
- ret = xlator->dumpops->inode_to_dict (xlator, output);
+ ret = server_xl->dumpops->inode_to_dict (brick_xl,
+ output);
break;
case GF_CLI_STATUS_FD:
- ret = xlator->dumpops->fd_to_dict (xlator, output);
+ ret = server_xl->dumpops->fd_to_dict (brick_xl, output);
break;
case GF_CLI_STATUS_CALLPOOL:
@@ -1365,7 +1361,7 @@ glusterfs_handle_node_status (rpcsvc_request_t *req)
"Error setting volname to dict");
goto out;
}
- ret = node->dumpops->priv_to_dict (node, output);
+ ret = node->dumpops->priv_to_dict (node, output, NULL);
break;
case GF_CLI_STATUS_INODE:
diff --git a/libglusterfs/src/client_t.c b/libglusterfs/src/client_t.c
index 55d891f..dc153cc 100644
--- a/libglusterfs/src/client_t.c
+++ b/libglusterfs/src/client_t.c
@@ -743,10 +743,13 @@ gf_client_dump_fdtables_to_dict (xlator_t *this, dict_t *dict)
clienttable->cliententries[count].next_free)
continue;
client = clienttable->cliententries[count].client;
- memset(key, 0, sizeof key);
- snprintf (key, sizeof key, "conn%d", count++);
- fdtable_dump_to_dict (client->server_ctx.fdtable,
- key, dict);
+ if (!strcmp (client->bound_xl->name, this->name)) {
+ memset(key, 0, sizeof (key));
+ snprintf (key, sizeof (key), "conn%d", count++);
+ fdtable_dump_to_dict (client->server_ctx.
+ fdtable,
+ key, dict);
+ }
}
}
UNLOCK(&clienttable->lock);
@@ -859,25 +862,30 @@ gf_client_dump_inodes_to_dict (xlator_t *this, dict_t *dict)
clienttable->cliententries[count].next_free)
continue;
client = clienttable->cliententries[count].client;
- memset(key, 0, sizeof key);
- if (client->bound_xl && client->bound_xl->itable) {
- /* Presently every brick contains only
- * one bound_xl for all connections.
- * This will lead to duplicating of
- * the inode lists, if listing is
- * done for every connection. This
- * simple check prevents duplication
- * in the present case. If need arises
- * the check can be improved.
- */
- if (client->bound_xl == prev_bound_xl)
- continue;
- prev_bound_xl = client->bound_xl;
-
- memset (key, 0, sizeof (key));
- snprintf (key, sizeof (key), "conn%d", count);
- inode_table_dump_to_dict (client->bound_xl->itable,
- key, dict);
+ if (!strcmp (client->bound_xl->name, this->name)) {
+ memset(key, 0, sizeof (key));
+ if (client->bound_xl && client->bound_xl->
+ itable) {
+ /* Presently every brick contains only
+ * one bound_xl for all connections.
+ * This will lead to duplicating of
+ * the inode lists, if listing is
+ * done for every connection. This
+ * simple check prevents duplication
+ * in the present case. If need arises
+ * the check can be improved.
+ */
+ if (client->bound_xl == prev_bound_xl)
+ continue;
+ prev_bound_xl = client->bound_xl;
+
+ memset (key, 0, sizeof (key));
+ snprintf (key, sizeof (key), "conn%d",
+ count);
+ inode_table_dump_to_dict (client->
+ bound_xl->itable,
+ key, dict);
+ }
}
}
}
diff --git a/libglusterfs/src/xlator.h b/libglusterfs/src/xlator.h
index 5ed8646..7434da8 100644
--- a/libglusterfs/src/xlator.h
+++ b/libglusterfs/src/xlator.h
@@ -873,7 +873,8 @@ typedef int32_t (*dumpop_inodectx_t) (xlator_t *this, inode_t *ino);
typedef int32_t (*dumpop_fdctx_t) (xlator_t *this, fd_t *fd);
-typedef int32_t (*dumpop_priv_to_dict_t) (xlator_t *this, dict_t *dict);
+typedef int32_t (*dumpop_priv_to_dict_t) (xlator_t *this, dict_t *dict,
+ char *brickname);
typedef int32_t (*dumpop_inode_to_dict_t) (xlator_t *this, dict_t *dict);
diff --git a/tests/basic/volume-status.t b/tests/basic/volume-status.t
index f87b0a9..d3a79c9 100644
--- a/tests/basic/volume-status.t
+++ b/tests/basic/volume-status.t
@@ -6,6 +6,14 @@
cleanup;
+function gluster_fd_status () {
+ gluster volume status $V0 fd | sed -n '/Brick :/ p' | wc -l
+}
+
+function gluster_inode_status () {
+ gluster volume status $V0 inode | sed -n '/Connection / p' | wc -l
+}
+
TEST glusterd
TEST pidof glusterd
TEST $CLI volume info;
@@ -21,6 +29,10 @@ EXPECT_WITHIN $PROCESS_UP_TIMEOUT "Y" nfs_up_status
## Mount FUSE
TEST $GFS -s $H0 --volfile-id $V0 $M0;
+EXPECT_WITHIN $PROCESS_UP_TIMEOUT "8" gluster_fd_status
+
+EXPECT_WITHIN $PROCESS_UP_TIMEOUT "1024" gluster_inode_status
+
##Wait for connection establishment between nfs server and brick process
EXPECT_WITHIN $NFS_EXPORT_TIMEOUT "1" is_nfs_export_available;
diff --git a/xlators/mgmt/glusterd/src/glusterd-handler.c b/xlators/mgmt/glusterd/src/glusterd-handler.c
index cb19321..30adb99 100644
--- a/xlators/mgmt/glusterd/src/glusterd-handler.c
+++ b/xlators/mgmt/glusterd/src/glusterd-handler.c
@@ -5304,6 +5304,10 @@ glusterd_print_client_details (FILE *fp, dict_t *dict,
brick_req->op = GLUSTERD_BRICK_STATUS;
brick_req->name = "";
+ ret = dict_set_str (dict, "brick-name", brickinfo->path);
+ if (ret)
+ goto out;
+
ret = dict_set_int32 (dict, "cmd", GF_CLI_STATUS_CLIENTS);
if (ret)
goto out;
diff --git a/xlators/mgmt/glusterd/src/glusterd-op-sm.c b/xlators/mgmt/glusterd/src/glusterd-op-sm.c
index d479ed4..7107a46 100644
--- a/xlators/mgmt/glusterd/src/glusterd-op-sm.c
+++ b/xlators/mgmt/glusterd/src/glusterd-op-sm.c
@@ -612,6 +612,9 @@ glusterd_brick_op_build_payload (glusterd_op_t op, glusterd_brickinfo_t *brickin
goto out;
brick_req->op = GLUSTERD_BRICK_STATUS;
brick_req->name = "";
+ ret = dict_set_str (dict, "brick-name", brickinfo->path);
+ if (ret)
+ goto out;
}
break;
case GD_OP_REBALANCE:
diff --git a/xlators/nfs/server/src/nfs.c b/xlators/nfs/server/src/nfs.c
index c2c3c86..10502c2 100644
--- a/xlators/nfs/server/src/nfs.c
+++ b/xlators/nfs/server/src/nfs.c
@@ -1604,7 +1604,7 @@ _nfs_export_is_for_vol (char *exname, char *volname)
}
int
-nfs_priv_to_dict (xlator_t *this, dict_t *dict)
+nfs_priv_to_dict (xlator_t *this, dict_t *dict, char *brickname)
{
int ret = -1;
struct nfs_state *priv = NULL;
diff --git a/xlators/protocol/server/src/server.c b/xlators/protocol/server/src/server.c
index 792dfb3..eed4295 100644
--- a/xlators/protocol/server/src/server.c
+++ b/xlators/protocol/server/src/server.c
@@ -225,7 +225,7 @@ ret:
int
-server_priv_to_dict (xlator_t *this, dict_t *dict)
+server_priv_to_dict (xlator_t *this, dict_t *dict, char *brickname)
{
server_conf_t *conf = NULL;
rpc_transport_t *xprt = NULL;
@@ -245,39 +245,48 @@ server_priv_to_dict (xlator_t *this, dict_t *dict)
pthread_mutex_lock (&conf->mutex);
{
list_for_each_entry (xprt, &conf->xprt_list, list) {
- peerinfo = &xprt->peerinfo;
- memset (key, 0, sizeof (key));
- snprintf (key, sizeof (key), "client%d.hostname",
- count);
- ret = dict_set_str (dict, key, peerinfo->identifier);
- if (ret)
- goto unlock;
-
- memset (key, 0, sizeof (key));
- snprintf (key, sizeof (key), "client%d.bytesread",
- count);
- ret = dict_set_uint64 (dict, key,
- xprt->total_bytes_read);
- if (ret)
- goto unlock;
-
- memset (key, 0, sizeof (key));
- snprintf (key, sizeof (key), "client%d.byteswrite",
- count);
- ret = dict_set_uint64 (dict, key,
- xprt->total_bytes_write);
- if (ret)
- goto unlock;
-
- memset (key, 0, sizeof (key));
- snprintf (key, sizeof (key), "client%d.opversion",
- count);
- ret = dict_set_uint32 (dict, key,
- peerinfo->max_op_version);
- if (ret)
- goto unlock;
-
- count++;
+ if (!strcmp (brickname,
+ xprt->xl_private->bound_xl->name)) {
+ peerinfo = &xprt->peerinfo;
+ memset (key, 0, sizeof (key));
+ snprintf (key, sizeof (key),
+ "client%d.hostname",
+ count);
+ ret = dict_set_str (dict, key,
+ peerinfo->identifier);
+ if (ret)
+ goto unlock;
+
+ memset (key, 0, sizeof (key));
+ snprintf (key, sizeof (key),
+ "client%d.bytesread",
+ count);
+ ret = dict_set_uint64 (dict, key,
+ xprt->total_bytes_read);
+ if (ret)
+ goto unlock;
+
+ memset (key, 0, sizeof (key));
+ snprintf (key, sizeof (key),
+ "client%d.byteswrite",
+ count);
+ ret = dict_set_uint64 (dict, key,
+ xprt->total_bytes_write);
+ if (ret)
+ goto unlock;
+
+ memset (key, 0, sizeof (key));
+ snprintf (key, sizeof (key),
+ "client%d.opversion",
+ count);
+ ret = dict_set_uint32 (dict, key,
+ peerinfo->max_op_version);
+ if (ret)
+ goto unlock;
+
+
+ count++;
+ }
}
}
unlock:
--
1.8.3.1
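The core of the fix is the per-connection filter added in server_priv_to_dict and gf_client_dump_fdtables_to_dict: with multiplexing, one server xlator carries connections for several bricks, so only connections bound to the requested brick are reported. A small model with hypothetical connection dicts:

```python
def clients_for_brick(xprt_list, brickname):
    # report only connections whose bound_xl name matches the requested
    # brick (models the added strcmp on xprt->xl_private->bound_xl->name)
    return [x for x in xprt_list if x["bound_xl"] == brickname]

conns = [
    {"bound_xl": "vol0-brick0", "host": "client1"},
    {"bound_xl": "vol1-brick0", "host": "client2"},
    {"bound_xl": "vol0-brick0", "host": "client3"},
]
# without the filter, a status query for vol0's brick would also count
# client2, which belongs to a different volume in the same process
assert [c["host"] for c in clients_for_brick(conns, "vol0-brick0")] == ["client1", "client3"]
assert [c["host"] for c in clients_for_brick(conns, "vol1-brick0")] == ["client2"]
```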


@ -0,0 +1,65 @@
From 4353f2061b81a7d3f9538d7d080890e394cbe67c Mon Sep 17 00:00:00 2001
From: Csaba Henk <csaba@redhat.com>
Date: Sat, 14 Apr 2018 08:22:48 +0200
Subject: [PATCH 231/236] fuse: retire statvfs tweak
fuse xlator used to override the filesystem
block size of the storage backend to indicate
its preferences. Now we retire this tweak and
pass on what we get from the backend.
This fixes the anomaly reported in the referred
BUG. For more background, see the following email,
which was sent out to gluster-devel and gluster-users
mailing lists to gauge if anyone sees any use of
this tweak:
http://lists.gluster.org/pipermail/gluster-devel/2018-March/054660.html
http://lists.gluster.org/pipermail/gluster-users/2018-March/033775.html
No one vetoed the removal, and it got endorsement:
http://lists.gluster.org/pipermail/gluster-devel/2018-March/054686.html
upstream: https://review.gluster.org/19873
> BUG: 1523219
> Change-Id: I3b7111d3037a1b91a288c1589f407b2c48d81bfa
> Signed-off-by: Csaba Henk <csaba@redhat.com>
BUG: 1523216
Change-Id: I3b7111d3037a1b91a288c1589f407b2c48d81bfa
Signed-off-by: Csaba Henk <csaba@redhat.com>
Reviewed-on: https://code.engineering.redhat.com/gerrit/136313
Tested-by: RHGS Build Bot <nigelb@redhat.com>
Reviewed-by: Sunil Kumar Heggodu Gopala Acharya <sheggodu@redhat.com>
---
xlators/mount/fuse/src/fuse-bridge.c | 13 -------------
1 file changed, 13 deletions(-)
diff --git a/xlators/mount/fuse/src/fuse-bridge.c b/xlators/mount/fuse/src/fuse-bridge.c
index 44697d2..b767ea4 100644
--- a/xlators/mount/fuse/src/fuse-bridge.c
+++ b/xlators/mount/fuse/src/fuse-bridge.c
@@ -3164,19 +3164,6 @@ fuse_statfs_cbk (call_frame_t *frame, void *cookie, xlator_t *this,
gf_fop_list[frame->root->op]);
if (op_ret == 0) {
-#ifndef GF_DARWIN_HOST_OS
- /* MacFUSE doesn't respect anyof these tweaks */
- buf->f_blocks *= buf->f_frsize;
- buf->f_blocks /= this->ctx->page_size;
-
- buf->f_bavail *= buf->f_frsize;
- buf->f_bavail /= this->ctx->page_size;
-
- buf->f_bfree *= buf->f_frsize;
- buf->f_bfree /= this->ctx->page_size;
-
- buf->f_frsize = buf->f_bsize =this->ctx->page_size;
-#endif /* GF_DARWIN_HOST_OS */
fso.st.bsize = buf->f_bsize;
fso.st.frsize = buf->f_frsize;
fso.st.blocks = buf->f_blocks;
--
1.8.3.1
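The deleted hunk was a unit conversion: block counts were re-expressed in units of the fuse page size, which was then reported as both f_bsize and f_frsize. The arithmetic being retired, sketched with hypothetical numbers (the real page size comes from this->ctx->page_size):

```python
def retired_statvfs_tweak(f_blocks, f_bfree, f_bavail, f_frsize, page_size):
    # re-express the backend's block counts in units of the fuse page
    # size; after the patch these values pass through unchanged instead
    scale = lambda n: n * f_frsize // page_size
    return {
        "f_blocks": scale(f_blocks),
        "f_bfree": scale(f_bfree),
        "f_bavail": scale(f_bavail),
        "f_bsize": page_size,
        "f_frsize": page_size,
    }

# assumed 4 KiB backend blocks and a 128 KiB page size
out = retired_statvfs_tweak(32768, 16384, 8192, 4096, 128 * 1024)
assert out["f_blocks"] == 1024   # 32768 * 4096 / 131072
assert out["f_bfree"] == 512
assert out["f_bavail"] == 256
```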


@ -0,0 +1,42 @@
From 10fa7c3ad785c0da0d1981b40470149e23cb4acc Mon Sep 17 00:00:00 2001
From: Aravinda VK <avishwan@redhat.com>
Date: Wed, 18 Apr 2018 15:08:55 +0530
Subject: [PATCH 232/236] eventsapi: Handle Unicode string during signing
Python 2.7 HMAC does not support Unicode strings. The secret is read
from a file, so it is possible that glustereventsd reads the content
as Unicode. This patch converts the secret to the `str` type before
generating the HMAC signature.
>Fixes: bz#1568820
>Change-Id: I7daa64499ac4ca02544405af26ac8af4b6b0bd95
>Signed-off-by: Aravinda VK <avishwan@redhat.com>
Upstream Patch: https://review.gluster.org/#/c/19900/
BUG: 1466129
Change-Id: I7daa64499ac4ca02544405af26ac8af4b6b0bd95
Signed-off-by: Sunil Kumar Acharya <sheggodu@redhat.com>
Reviewed-on: https://code.engineering.redhat.com/gerrit/136327
Tested-by: RHGS Build Bot <nigelb@redhat.com>
Reviewed-by: Atin Mukherjee <amukherj@redhat.com>
---
events/src/utils.py | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/events/src/utils.py b/events/src/utils.py
index f405e44..7d9b7b5 100644
--- a/events/src/utils.py
+++ b/events/src/utils.py
@@ -206,7 +206,7 @@ def get_jwt_token(secret, event_type, event_ts, jwt_expiry_time_seconds=60):
msg = base64_urlencode(header) + "." + base64_urlencode(payload)
return "%s.%s" % (
msg,
- base64_urlencode(hmac.HMAC(secret, msg, sha256).digest())
+ base64_urlencode(hmac.HMAC(str(secret), msg, sha256).digest())
)
--
1.8.3.1
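The same idea in Python 3 terms (where the analogue of the py2 str() coercion is encoding the key to bytes; function name is illustrative, not the events code):

```python
import hmac
from hashlib import sha256

def sign(secret, msg):
    # hmac needs a byte-string key; a secret read from a file may come
    # back as text, so coerce it before signing -- the Python 2 patch
    # used str() for the same purpose
    if isinstance(secret, str):
        secret = secret.encode("utf-8")
    return hmac.new(secret, msg.encode("utf-8"), sha256).hexdigest()

# text and byte secrets now produce the same signature
assert sign("topsecret", "header.payload") == sign(b"topsecret", "header.payload")
```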


@ -0,0 +1,54 @@
From a3cfdb4e3f32336f2fd16ac68bc4dc5a33e7f26e Mon Sep 17 00:00:00 2001
From: Xavi Hernandez <xhernandez@redhat.com>
Date: Thu, 12 Apr 2018 23:31:37 +0200
Subject: [PATCH 233/236] libglusterfs: fix comparison of a NULL dict with a
non-NULL dict
Function are_dicts_equal() had a bug when the first argument was NULL and
the second one wasn't NULL. In this case it incorrectly returned that the
dicts were different when they could be equal.
Upstream-patch: https://review.gluster.org/19861
BUG: 1569457
Change-Id: I0fc245c2e7d1395865a76405dbd05e5d34db3273
Signed-off-by: Xavi Hernandez <xhernandez@redhat.com>
Reviewed-on: https://code.engineering.redhat.com/gerrit/136332
Tested-by: RHGS Build Bot <nigelb@redhat.com>
Reviewed-by: Sunil Kumar Heggodu Gopala Acharya <sheggodu@redhat.com>
---
libglusterfs/src/dict.c | 16 ++++++++--------
1 file changed, 8 insertions(+), 8 deletions(-)
diff --git a/libglusterfs/src/dict.c b/libglusterfs/src/dict.c
index ebcf694..36d91a7 100644
--- a/libglusterfs/src/dict.c
+++ b/libglusterfs/src/dict.c
@@ -188,17 +188,17 @@ are_dicts_equal (dict_t *one, dict_t *two,
if (!match)
match = dict_match_everything;
- cmp.dict = two;
- cmp.value_ignore = value_ignore;
- if (!two) {
- num_matches1 = dict_foreach_match (one, match, NULL,
- dict_null_foreach_fn, NULL);
+ if ((one == NULL) || (two == NULL)) {
+ num_matches1 = dict_foreach_match(one ? one : two, match, NULL,
+ dict_null_foreach_fn, NULL);
goto done;
- } else {
- num_matches1 = dict_foreach_match (one, match, NULL,
- key_value_cmp, &cmp);
}
+ cmp.dict = two;
+ cmp.value_ignore = value_ignore;
+ num_matches1 = dict_foreach_match (one, match, NULL, key_value_cmp,
+ &cmp);
+
if (num_matches1 == -1)
return _gf_false;
--
1.8.3.1
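A sketch of the fixed semantics (ignoring the match callback and value_ignore details of the real are_dicts_equal): a NULL dict behaves like an empty one, so it is equal to another dict exactly when that dict contributes no matching keys.

```python
def are_dicts_equal(one, two):
    # before the fix only (d, NULL) was handled; (NULL, d) was always
    # reported unequal even when d was empty
    if one is None or two is None:
        other = two if one is None else one
        return other is None or len(other) == 0
    return one == two

assert are_dicts_equal(None, {}) is True        # the case the patch fixes
assert are_dicts_equal({}, None) is True
assert are_dicts_equal(None, {"k": "v"}) is False
assert are_dicts_equal({"k": "v"}, {"k": "v"}) is True
```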


@ -0,0 +1,76 @@
From ecbb79d192695e06345036aa4f24e0f029ce8b02 Mon Sep 17 00:00:00 2001
From: Pranith Kumar K <pkarampu@redhat.com>
Date: Fri, 17 Nov 2017 07:20:21 +0530
Subject: [PATCH 234/236] ec: Use tiebreaker_inodelk where necessary
When there are big directories or files that need to be healed,
other shds are stuck waiting for the lock on the self-heal domain
for these directories/files. With tie-breaker logic, the other
shds can heal some other files/directories while one of the shds
heals the big file/directory.
Before this patch:
96.67 4890.64 us 12.89 us 646115887.30us 340869 INODELK
After this patch:
40.76 42.35 us 15.09 us 6546.50us 438478 INODELK
>Fixes gluster/glusterfs#354
Upstream-patch: https://review.gluster.org/18820
BUG: 1562744
Change-Id: Ia995b5576b44f770c064090705c78459e543cc64
Signed-off-by: Pranith Kumar K <pkarampu@redhat.com>
Reviewed-on: https://code.engineering.redhat.com/gerrit/134280
Tested-by: RHGS Build Bot <nigelb@redhat.com>
Reviewed-by: Ashish Pandey <aspandey@redhat.com>
---
xlators/cluster/ec/src/ec-heal.c | 19 +++++++++++--------
1 file changed, 11 insertions(+), 8 deletions(-)
diff --git a/xlators/cluster/ec/src/ec-heal.c b/xlators/cluster/ec/src/ec-heal.c
index 8e02986..a1d3f3d 100644
--- a/xlators/cluster/ec/src/ec-heal.c
+++ b/xlators/cluster/ec/src/ec-heal.c
@@ -1562,9 +1562,9 @@ ec_heal_entry (call_frame_t *frame, ec_t *ec, inode_t *inode,
sprintf (selfheal_domain, "%s:self-heal", ec->xl->name);
ec_mask_to_char_array (ec->xl_up, up_subvols, ec->nodes);
/*If other processes are already doing the heal, don't block*/
- ret = cluster_inodelk (ec->xl_list, up_subvols, ec->nodes, replies,
- locked_on, frame, ec->xl, selfheal_domain, inode,
- 0, 0);
+ ret = cluster_tiebreaker_inodelk (ec->xl_list, up_subvols, ec->nodes,
+ replies, locked_on, frame, ec->xl,
+ selfheal_domain, inode, 0, 0);
{
if (ret <= ec->fragments) {
gf_msg_debug (ec->xl->name, 0, "%s: Skipping heal "
@@ -2400,9 +2400,10 @@ ec_heal_data (call_frame_t *frame, ec_t *ec, gf_boolean_t block, inode_t *inode,
locked_on, frame, ec->xl,
selfheal_domain, inode, 0, 0);
} else {
- ret = cluster_tryinodelk (ec->xl_list, output, ec->nodes,
- replies, locked_on, frame, ec->xl,
- selfheal_domain, inode, 0, 0);
+ ret = cluster_tiebreaker_inodelk (ec->xl_list, output,
+ ec->nodes, replies, locked_on,
+ frame, ec->xl,
+ selfheal_domain, inode, 0, 0);
}
{
if (ret <= ec->fragments) {
@@ -2453,8 +2454,10 @@ ec_heal_do (xlator_t *this, void *data, loc_t *loc, int32_t partial)
/* If it is heal request from getxattr, complete the heal and then
* unwind, if it is ec_heal with NULL as frame then no need to block
- * the heal as the caller doesn't care about its completion*/
- if (fop->req_frame)
+ * the heal as the caller doesn't care about its completion. In case
+ * of heald whichever gets tiebreaking inodelk will take care of the
+ * heal, so no need to block*/
+ if (fop->req_frame && !ec->shd.iamshd)
blocking = _gf_true;
frame = create_frame (this, this->ctx->pool);
--
1.8.3.1


@ -0,0 +1,148 @@
From 01dcc756aa82ad535d349b40cc1d639734b5f7ca Mon Sep 17 00:00:00 2001
From: Pranith Kumar K <pkarampu@redhat.com>
Date: Fri, 17 Nov 2017 07:12:42 +0530
Subject: [PATCH 235/236] cluster-syncop: Implement tiebreaker inodelk/entrylk
In this implementation, inodelk/entrylk is first attempted on the
given subvols as a trylock. If all locks are obtained, the inodelk
succeeds. Otherwise, only the contender that succeeded on the first
available subvolume goes on to take blocking locks, whereas the
others do not retry; this acts as the tie-breaker.
>Updates gluster/glusterfs#354
Upstream-patch: https://review.gluster.org/18819
BUG: 1562744
Change-Id: Ia2521b9ccb81a42bd6104ab21f610f761ba2b801
Signed-off-by: Pranith Kumar K <pkarampu@redhat.com>
Reviewed-on: https://code.engineering.redhat.com/gerrit/134278
Tested-by: RHGS Build Bot <nigelb@redhat.com>
Reviewed-by: Ashish Pandey <aspandey@redhat.com>
---
libglusterfs/src/cluster-syncop.c | 96 +++++++++++++++++++++++++++++++++++++++
libglusterfs/src/cluster-syncop.h | 7 +++
2 files changed, 103 insertions(+)
diff --git a/libglusterfs/src/cluster-syncop.c b/libglusterfs/src/cluster-syncop.c
index b7f4dfe..75ba640 100644
--- a/libglusterfs/src/cluster-syncop.c
+++ b/libglusterfs/src/cluster-syncop.c
@@ -1191,3 +1191,99 @@ cluster_entrylk (xlator_t **subvols, unsigned char *on, int numsubvols,
loc_wipe (&loc);
return cluster_fop_success_fill (replies, numsubvols, locked_on);
}
+
+int
+cluster_tiebreaker_inodelk (xlator_t **subvols, unsigned char *on,
+ int numsubvols, default_args_cbk_t *replies,
+ unsigned char *locked_on, call_frame_t *frame,
+ xlator_t *this, char *dom, inode_t *inode,
+ off_t off, size_t size)
+{
+ struct gf_flock flock = {0, };
+ int i = 0;
+ int num_success = 0;
+ loc_t loc = {0};
+ unsigned char *output = NULL;
+
+ flock.l_type = F_WRLCK;
+ flock.l_start = off;
+ flock.l_len = size;
+
+ output = alloca(numsubvols);
+ loc.inode = inode_ref (inode);
+ gf_uuid_copy (loc.gfid, inode->gfid);
+ FOP_ONLIST (subvols, on, numsubvols, replies, locked_on, frame,
+ inodelk, dom, &loc, F_SETLK, &flock, NULL);
+
+ for (i = 0; i < numsubvols; i++) {
+ if (replies[i].valid && replies[i].op_ret == 0) {
+ num_success++;
+ continue;
+ }
+ if (replies[i].op_ret == -1 && replies[i].op_errno == EAGAIN) {
+ cluster_fop_success_fill (replies, numsubvols,
+ locked_on);
+ cluster_uninodelk (subvols, locked_on, numsubvols,
+ replies, output, frame, this, dom,
+ inode, off, size);
+
+ if (num_success) {
+ FOP_SEQ (subvols, on, numsubvols, replies,
+ locked_on, frame, inodelk, dom, &loc,
+ F_SETLKW, &flock, NULL);
+ } else {
+ memset (locked_on, 0, numsubvols);
+ }
+ break;
+ }
+ }
+
+ loc_wipe (&loc);
+ return cluster_fop_success_fill (replies, numsubvols, locked_on);
+}
+
+int
+cluster_tiebreaker_entrylk (xlator_t **subvols, unsigned char *on,
+ int numsubvols, default_args_cbk_t *replies,
+ unsigned char *locked_on, call_frame_t *frame,
+ xlator_t *this, char *dom, inode_t *inode,
+ const char *name)
+{
+ int i = 0;
+ loc_t loc = {0};
+ unsigned char *output = NULL;
+ int num_success = 0;
+
+ output = alloca(numsubvols);
+ loc.inode = inode_ref (inode);
+ gf_uuid_copy (loc.gfid, inode->gfid);
+ FOP_ONLIST (subvols, on, numsubvols, replies, locked_on, frame,
+ entrylk, dom, &loc, name, ENTRYLK_LOCK_NB, ENTRYLK_WRLCK,
+ NULL);
+
+ for (i = 0; i < numsubvols; i++) {
+ if (replies[i].valid && replies[i].op_ret == 0) {
+ num_success++;
+ continue;
+ }
+ if (replies[i].op_ret == -1 && replies[i].op_errno == EAGAIN) {
+ cluster_fop_success_fill (replies, numsubvols,
+ locked_on);
+ cluster_unentrylk (subvols, locked_on, numsubvols,
+ replies, output, frame, this, dom,
+ inode, name);
+ if (num_success) {
+ FOP_SEQ (subvols, on, numsubvols, replies,
+ locked_on, frame, entrylk, dom, &loc,
+ name, ENTRYLK_LOCK, ENTRYLK_WRLCK,
+ NULL);
+ } else {
+ memset (locked_on, 0, numsubvols);
+ }
+ break;
+ }
+ }
+
+ loc_wipe (&loc);
+ return cluster_fop_success_fill (replies, numsubvols, locked_on);
+}
diff --git a/libglusterfs/src/cluster-syncop.h b/libglusterfs/src/cluster-syncop.h
index ff9387a..b91a09e 100644
--- a/libglusterfs/src/cluster-syncop.h
+++ b/libglusterfs/src/cluster-syncop.h
@@ -209,4 +209,11 @@ int32_t
cluster_xattrop_cbk (call_frame_t *frame, void *cookie, xlator_t *this,
int32_t op_ret, int32_t op_errno, dict_t *dict,
dict_t *xdata);
+
+int
+cluster_tiebreaker_inodelk (xlator_t **subvols, unsigned char *on,
+ int numsubvols, default_args_cbk_t *replies,
+ unsigned char *locked_on, call_frame_t *frame,
+ xlator_t *this, char *dom, inode_t *inode,
+ off_t off, size_t size);
#endif /* !_CLUSTER_SYNCOP_H */
--
1.8.3.1
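The control flow of cluster_tiebreaker_inodelk/entrylk can be condensed into a model (simplified: every failure is folded into EAGAIN, which the real code checks explicitly, and the blocking FOP_SEQ replay is reduced to a return value):

```python
def tiebreaker_lock(try_results):
    # try_results[i] is True when the non-blocking lock succeeded on
    # subvolume i, False when it came back EAGAIN
    num_success = 0
    for ok in try_results:
        if ok:
            num_success += 1
            continue
        # first contention: drop whatever was acquired, then either
        # escalate to blocking locks (we won an earlier subvolume) or
        # back off and let the winner proceed -- the tie-break
        return "blocking-retry" if num_success else "back-off"
    return "locked"  # every trylock succeeded, nothing left to do

assert tiebreaker_lock([True, True, True]) == "locked"
assert tiebreaker_lock([True, False, True]) == "blocking-retry"
assert tiebreaker_lock([False, True, True]) == "back-off"
```

Two contenders racing on the same set of subvolumes thus cannot both escalate to blocking locks: at most one of them holds the first subvolume when contention is detected.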


@ -0,0 +1,44 @@
From fbc5dae0c743d97d3f753f1c6a9635db6feba137 Mon Sep 17 00:00:00 2001
From: Pranith Kumar K <pkarampu@redhat.com>
Date: Mon, 27 Nov 2017 09:50:32 +0530
Subject: [PATCH 236/236] cluster-syncop: Address comments in
3ad68df725ac32f83b5ea7c0976e2327e7037c8c
Upstream-patch: https://review.gluster.org/18857
BUG: 1562744
Change-Id: I325f718c6c440076c9d9dcd5ad1a0c6bde5393b1
Signed-off-by: Pranith Kumar K <pkarampu@redhat.com>
Reviewed-on: https://code.engineering.redhat.com/gerrit/134279
Tested-by: RHGS Build Bot <nigelb@redhat.com>
Reviewed-by: Ashish Pandey <aspandey@redhat.com>
---
libglusterfs/src/cluster-syncop.c | 6 ++++--
1 file changed, 4 insertions(+), 2 deletions(-)
diff --git a/libglusterfs/src/cluster-syncop.c b/libglusterfs/src/cluster-syncop.c
index 75ba640..50542eb 100644
--- a/libglusterfs/src/cluster-syncop.c
+++ b/libglusterfs/src/cluster-syncop.c
@@ -1220,6 +1220,10 @@ cluster_tiebreaker_inodelk (xlator_t **subvols, unsigned char *on,
num_success++;
continue;
}
+
+ /* TODO: If earlier subvols fail with an error other
+ * than EAGAIN, we could still have 2 clients competing
+ * for the lock*/
if (replies[i].op_ret == -1 && replies[i].op_errno == EAGAIN) {
cluster_fop_success_fill (replies, numsubvols,
locked_on);
@@ -1231,8 +1235,6 @@ cluster_tiebreaker_inodelk (xlator_t **subvols, unsigned char *on,
FOP_SEQ (subvols, on, numsubvols, replies,
locked_on, frame, inodelk, dom, &loc,
F_SETLKW, &flock, NULL);
- } else {
- memset (locked_on, 0, numsubvols);
}
break;
}
--
1.8.3.1


@@ -192,7 +192,7 @@ Release: 0.1%{?prereltag:.%{?prereltag}}%{?dist}
 %else
 Name: glusterfs
 Version: 3.12.2
-Release: 7%{?dist}
+Release: 8%{?dist}
 %endif
 License: GPLv2 or LGPLv3+
 Group: System Environment/Base
@@ -477,6 +477,30 @@ Patch0209: 0209-cluster-dht-Update-layout-in-inode-only-on-success.patch
 Patch0210: 0210-cluster-ec-send-list-node-uuids-request-to-all-subvo.patch
 Patch0211: 0211-common-ha-scripts-pass-the-list-of-servers-properly-.patch
 Patch0212: 0212-readdir-ahead-Cleanup-the-xattr-request-code.patch
+Patch0213: 0213-glusterd-mark-port_registered-to-true-for-all-runnin.patch
+Patch0214: 0214-cluster-dht-Serialize-mds-update-code-path-with-look.patch
+Patch0215: 0215-cluster-dht-ENOSPC-will-not-fail-rebalance.patch
+Patch0216: 0216-cluster-dht-Wind-open-to-all-subvols.patch
+Patch0217: 0217-cluster-dht-Handle-file-migrations-when-brick-down.patch
+Patch0218: 0218-posix-reserve-option-behavior-is-not-correct-while-u.patch
+Patch0219: 0219-Quota-heal-directory-on-newly-added-bricks-when-quot.patch
+Patch0220: 0220-glusterd-turn-off-selinux-feature-in-downstream.patch
+Patch0221: 0221-cluster-dht-Skipped-files-are-not-treated-as-errors.patch
+Patch0222: 0222-hooks-remove-selinux-hooks.patch
+Patch0223: 0223-glusterd-Make-localtime-logging-option-invisible-in-.patch
+Patch0224: 0224-protocol-server-Backport-patch-to-reduce-duplicate-c.patch
+Patch0225: 0225-glusterfsd-Memleak-in-glusterfsd-process-while-brick.patch
+Patch0226: 0226-gluster-Sometimes-Brick-process-is-crashed-at-the-ti.patch
+Patch0227: 0227-afr-add-quorum-checks-in-pre-op.patch
+Patch0228: 0228-afr-fixes-to-afr-eager-locking.patch
+Patch0229: 0229-fuse-do-fd_resolve-in-fuse_getattr-if-fd-is-received.patch
+Patch0230: 0230-glusterd-volume-inode-fd-status-broken-with-brick-mu.patch
+Patch0231: 0231-fuse-retire-statvfs-tweak.patch
+Patch0232: 0232-eventsapi-Handle-Unicode-string-during-signing.patch
+Patch0233: 0233-libglusterfs-fix-comparison-of-a-NULL-dict-with-a-no.patch
+Patch0234: 0234-ec-Use-tiebreaker_inodelk-where-necessary.patch
+Patch0235: 0235-cluster-syncop-Implement-tiebreaker-inodelk-entrylk.patch
+Patch0236: 0236-cluster-syncop-Address-comments-in-3ad68df725ac32f83.patch
 %description
 GlusterFS is a distributed file-system capable of scaling to several
@@ -1786,7 +1810,6 @@ exit 0
 %attr(0755,-,-) %{_sharedstatedir}/glusterd/hooks/1/add-brick/pre/S28Quota-enable-root-xattr-heal.sh
 %dir %attr(0755,-,-) %{_sharedstatedir}/glusterd/hooks/1/create
 %dir %attr(0755,-,-) %{_sharedstatedir}/glusterd/hooks/1/create/post
-%attr(0755,-,-) %{_sharedstatedir}/glusterd/hooks/1/create/post/S10selinux-label-brick.sh
 %ghost %dir %attr(0755,-,-) %{_sharedstatedir}/glusterd/hooks/1/create/pre
 %ghost %dir %attr(0755,-,-) %{_sharedstatedir}/glusterd/hooks/1/copy-file
 %ghost %dir %attr(0755,-,-) %{_sharedstatedir}/glusterd/hooks/1/copy-file/post
@@ -1795,7 +1818,6 @@ exit 0
 %dir %attr(0755,-,-) %{_sharedstatedir}/glusterd/hooks/1/delete/post
 %{_sharedstatedir}/glusterd/hooks/1/delete/post/S57glusterfind-delete-post
 %dir %attr(0755,-,-) %{_sharedstatedir}/glusterd/hooks/1/delete/pre
-%attr(0755,-,-) %{_sharedstatedir}/glusterd/hooks/1/delete/pre/S10selinux-del-fcontext.sh
 %ghost %dir %attr(0755,-,-) %{_sharedstatedir}/glusterd/hooks/1/remove-brick
 %ghost %dir %attr(0755,-,-) %{_sharedstatedir}/glusterd/hooks/1/remove-brick/post
 %ghost %dir %attr(0755,-,-) %{_sharedstatedir}/glusterd/hooks/1/remove-brick/pre
@@ -2420,6 +2442,11 @@ fi
 %endif
 %changelog
+* Fri Apr 20 2018 Milind Changire <mchangir@redhat.com> - 3.12.2-8
+- fixes bugs bz#1466129 bz#1475779 bz#1523216 bz#1535281 bz#1546941
+  bz#1550315 bz#1550991 bz#1553677 bz#1554291 bz#1559452 bz#1560955 bz#1562744
+  bz#1563692 bz#1565962 bz#1567110 bz#1569457
 * Wed Apr 04 2018 Milind Changire <mchangir@redhat.com> - 3.12.2-7
 - fixes bugs bz#958062 bz#1186664 bz#1226874 bz#1446046 bz#1529451 bz#1550315
   bz#1557365 bz#1559884 bz#1561733