autobuild v6.0-6

Resolves: bz#1668001 bz#1704562 bz#1708043 bz#1708183 bz#1710701
Resolves: bz#1719640 bz#1720079 bz#1720248 bz#1720318 bz#1720461
Signed-off-by: Sunil Kumar Acharya <sheggodu@redhat.com>
Sunil Kumar Acharya 2019-06-14 13:53:58 -04:00
parent dddf13c543
commit 51e6329e9d
15 changed files with 1245 additions and 10 deletions

0179-tests-Fix-split-brain-favorite-child-policy.t-failur.patch
@@ -0,0 +1,72 @@
From fe1d641e4666f9a20f656b1799cf6e7b75af1279 Mon Sep 17 00:00:00 2001
From: karthik-us <ksubrahm@redhat.com>
Date: Tue, 11 Jun 2019 11:31:02 +0530
Subject: [PATCH 179/192] tests: Fix split-brain-favorite-child-policy.t
failure
Backport of: https://review.gluster.org/#/c/glusterfs/+/22850/
Problem:
The test case is failing to heal the volume within $HEAL_TIMEOUT @195.
This is happening because as part of split-brain resolution the file
gets expunged from the sink and the new entry mark for that file will
be done on the source bricks as part of impunging. Since the source
bricks' shd threads failed to get the heal-domain lock, they will wait
for the heal-timeout of 10 minutes, which is greater than $HEAL_TIMEOUT.
Fix:
Set the cluster.heal-timeout to 5 seconds to trigger the heal so that
one of the source bricks heals the file within $HEAL_TIMEOUT.
Change-Id: Iae5e819aa564ccde6639c51711f49d1152260c2d
updates: bz#1704562
Signed-off-by: karthik-us <ksubrahm@redhat.com>
Reviewed-on: https://code.engineering.redhat.com/gerrit/172965
Tested-by: RHGS Build Bot <nigelb@redhat.com>
Reviewed-by: Sunil Kumar Heggodu Gopala Acharya <sheggodu@redhat.com>
---
tests/basic/afr/split-brain-favorite-child-policy.t | 7 ++++---
1 file changed, 4 insertions(+), 3 deletions(-)
diff --git a/tests/basic/afr/split-brain-favorite-child-policy.t b/tests/basic/afr/split-brain-favorite-child-policy.t
index 0e321c6..c268c12 100644
--- a/tests/basic/afr/split-brain-favorite-child-policy.t
+++ b/tests/basic/afr/split-brain-favorite-child-policy.t
@@ -16,6 +16,7 @@ TEST $CLI volume set $V0 cluster.self-heal-daemon off
TEST $CLI volume set $V0 cluster.entry-self-heal off
TEST $CLI volume set $V0 cluster.data-self-heal off
TEST $CLI volume set $V0 cluster.metadata-self-heal off
+TEST $CLI volume set $V0 cluster.heal-timeout 5
TEST $CLI volume start $V0
TEST $GFS --volfile-id=/$V0 --volfile-server=$H0 $M0
TEST touch $M0/file
@@ -38,7 +39,7 @@ EXPECT_WITHIN $CHILD_UP_TIMEOUT "1" afr_child_up_status_in_shd $V0 0
EXPECT_WITHIN $CHILD_UP_TIMEOUT "1" afr_child_up_status_in_shd $V0 1
TEST $CLI volume heal $V0
-#file fill in split-brain
+#file still in split-brain
cat $M0/file > /dev/null
EXPECT "1" echo $?
@@ -124,7 +125,7 @@ EXPECT_WITHIN $CHILD_UP_TIMEOUT "1" afr_child_up_status_in_shd $V0 0
EXPECT_WITHIN $CHILD_UP_TIMEOUT "1" afr_child_up_status_in_shd $V0 1
TEST $CLI volume heal $V0
-#file fill in split-brain
+#file still in split-brain
cat $M0/file > /dev/null
EXPECT "1" echo $?
@@ -179,7 +180,7 @@ EXPECT_WITHIN $CHILD_UP_TIMEOUT "1" afr_child_up_status_in_shd $V0 1
EXPECT_WITHIN $CHILD_UP_TIMEOUT "1" afr_child_up_status_in_shd $V0 2
TEST $CLI volume heal $V0
-#file fill in split-brain
+#file still in split-brain
cat $M0/file > /dev/null
EXPECT "1" echo $?
--
1.8.3.1

0180-ganesha-scripts-Make-generate-epoch.py-python3-compa.patch
@@ -0,0 +1,44 @@
From 30b6d3452df0ef6621592a786f0c4347e09aa8f2 Mon Sep 17 00:00:00 2001
From: Jiffin Tony Thottan <jthottan@redhat.com>
Date: Tue, 11 Jun 2019 12:00:25 +0530
Subject: [PATCH 180/192] ganesha/scripts : Make generate-epoch.py python3
compatible
This helps in building the glusterfs server on RHEL8. We don't need
to validate this fix as such, given that RHEL8 glusterfs server support in
RHGS 3.5.0 is an internal milestone.
Label : DOWNSTREAM ONLY
Change-Id: I738219613680406de5c86a452446035c72a52bc4
Signed-off-by: Jiffin Tony Thottan <jthottan@redhat.com>
Reviewed-on: https://code.engineering.redhat.com/gerrit/172974
Reviewed-by: Atin Mukherjee <amukherj@redhat.com>
Tested-by: RHGS Build Bot <nigelb@redhat.com>
---
extras/ganesha/scripts/generate-epoch.py | 4 ++--
1 file changed, 2 insertions(+), 2 deletions(-)
diff --git a/extras/ganesha/scripts/generate-epoch.py b/extras/ganesha/scripts/generate-epoch.py
index 5db5e56..61ccda9 100755
--- a/extras/ganesha/scripts/generate-epoch.py
+++ b/extras/ganesha/scripts/generate-epoch.py
@@ -36,13 +36,13 @@ def epoch_uuid():
uuid_bin = binascii.unhexlify(glusterd_uuid.replace("-",""))
- epoch_uuid = int(uuid_bin.encode('hex'), 32) & 0xFFFF0000
+ epoch_uuid = int(binascii.hexlify(uuid_bin), 32) & 0xFFFF0000
return epoch_uuid
# Construct epoch as follows -
# first 32-bit contains the now() time
# rest 32-bit value contains the local glusterd node uuid
epoch = (epoch_now() | epoch_uuid())
-print str(epoch)
+print(str(epoch))
exit(0)
--
1.8.3.1
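For context, a minimal Python sketch of why the old line fails under
python3 (the glusterd UUID below is made up for illustration):

```python
import binascii

# Made-up glusterd node UUID, purely for illustration.
glusterd_uuid = "5a9f4e7c-1d2b-4c3a-8e6f-0123456789ab"

uuid_bin = binascii.unhexlify(glusterd_uuid.replace("-", ""))

# Python 2 only: str has .encode('hex'). Under Python 3, bytes has no
# .encode() method, so the old line raises AttributeError.
# epoch_uuid = int(uuid_bin.encode('hex'), 32) & 0xFFFF0000

# binascii.hexlify() behaves the same on Python 2 and Python 3.
epoch_uuid = int(binascii.hexlify(uuid_bin), 32) & 0xFFFF0000
print(epoch_uuid)
```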

0181-afr-log-before-attempting-data-self-heal.patch
@@ -0,0 +1,59 @@
From 99f86ae7d45667d86b1b6f9f9540ec2889c6c4ce Mon Sep 17 00:00:00 2001
From: Ravishankar N <ravishankar@redhat.com>
Date: Wed, 8 May 2019 04:51:27 -0400
Subject: [PATCH 181/192] afr: log before attempting data self-heal.
Upstream patch: https://review.gluster.org/#/c/glusterfs/+/22685/
I was working on a blog about troubleshooting AFR issues and wanted to copy
the messages logged by self-heal for my blog. I then realized that AFR-v2
does not log *before* attempting data heal, while it does log it for metadata
and entry heals.
I [MSGID: 108026] [afr-self-heal-entry.c:883:afr_selfheal_entry_do]
0-testvol-replicate-0: performing entry selfheal on
d120c0cf-6e87-454b-965b-0d83a4c752bb
I [MSGID: 108026] [afr-self-heal-common.c:1741:afr_log_selfheal]
0-testvol-replicate-0: Completed entry selfheal on
d120c0cf-6e87-454b-965b-0d83a4c752bb. sources=[0] 2 sinks=1
I [MSGID: 108026] [afr-self-heal-common.c:1741:afr_log_selfheal]
0-testvol-replicate-0: Completed data selfheal on
a9b5f183-21eb-4fb3-a342-287d3a7dddc5. sources=[0] 2 sinks=1
I [MSGID: 108026] [afr-self-heal-metadata.c:52:__afr_selfheal_metadata_do]
0-testvol-replicate-0: performing metadata selfheal on
a9b5f183-21eb-4fb3-a342-287d3a7dddc5
I [MSGID: 108026] [afr-self-heal-common.c:1741:afr_log_selfheal]
0-testvol-replicate-0: Completed metadata selfheal on
a9b5f183-21eb-4fb3-a342-287d3a7dddc5. sources=[0] 2 sinks=1
Adding it in this patch. Now there is a 'performing' and a corresponding
'Completed' message for every type of heal.
BUG: 1710701
Change-Id: I91e29dd05af1c78dbc447d1a02dda32b03d64aef
fixes: bz#1710701
Signed-off-by: Ravishankar N <ravishankar@redhat.com>
Reviewed-on: https://code.engineering.redhat.com/gerrit/173108
Tested-by: RHGS Build Bot <nigelb@redhat.com>
Reviewed-by: Sunil Kumar Heggodu Gopala Acharya <sheggodu@redhat.com>
---
xlators/cluster/afr/src/afr-self-heal-data.c | 3 +++
1 file changed, 3 insertions(+)
diff --git a/xlators/cluster/afr/src/afr-self-heal-data.c b/xlators/cluster/afr/src/afr-self-heal-data.c
index 18a0334..cdff4a5 100644
--- a/xlators/cluster/afr/src/afr-self-heal-data.c
+++ b/xlators/cluster/afr/src/afr-self-heal-data.c
@@ -324,6 +324,9 @@ afr_selfheal_data_do(call_frame_t *frame, xlator_t *this, fd_t *fd, int source,
call_frame_t *iter_frame = NULL;
unsigned char arbiter_sink_status = 0;
+ gf_msg(this->name, GF_LOG_INFO, 0, AFR_MSG_SELF_HEAL_INFO,
+ "performing data selfheal on %s", uuid_utoa(fd->inode->gfid));
+
priv = this->private;
if (priv->arbiter_count) {
arbiter_sink_status = healed_sinks[ARBITER_BRICK_INDEX];
--
1.8.3.1

0182-geo-rep-fix-mountbroker-setup.patch
@@ -0,0 +1,55 @@
From 37df54966d5b7f01ad24d329bac5da1cf17f2abe Mon Sep 17 00:00:00 2001
From: Sunny Kumar <sunkumar@redhat.com>
Date: Wed, 12 Jun 2019 16:10:52 +0530
Subject: [PATCH 182/192] geo-rep : fix mountbroker setup
Problem:
Unable to set up the mountbroker root directory while creating a
geo-replication session for a non-root user.
Cause:
With patch [1], which defines the max-port for glusterd, one extra space
got added in the 'option max-port' field.
[1]. https://review.gluster.org/#/c/glusterfs/+/21872/
In geo-rep, the splitting of key-value pairs from the vol file was done on
the basis of a single space, so this additional space caused "ValueError:
too many values to unpack".
Solution:
Use split() without arguments so that it treats consecutive whitespace as a
single separator.
Backport of:
>Upstream Patch: https://review.gluster.org/#/c/glusterfs/+/22716/
>Fixes: bz#1709248
>Change-Id: Ia22070a43f95d66d84cb35487f23f9ee58b68c73
>Signed-off-by: Sunny Kumar <sunkumar@redhat.com>
BUG: 1708043
Change-Id: Ic6d535a6faad62ce185c6aa5adc18f5fdf8f27be
Signed-off-by: Sunny Kumar <sunkumar@redhat.com>
Reviewed-on: https://code.engineering.redhat.com/gerrit/173149
Tested-by: RHGS Build Bot <nigelb@redhat.com>
Reviewed-by: Atin Mukherjee <amukherj@redhat.com>
---
geo-replication/src/peer_mountbroker.py.in | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/geo-replication/src/peer_mountbroker.py.in b/geo-replication/src/peer_mountbroker.py.in
index 54f95c4..ce33f97 100644
--- a/geo-replication/src/peer_mountbroker.py.in
+++ b/geo-replication/src/peer_mountbroker.py.in
@@ -47,7 +47,7 @@ class MountbrokerUserMgmt(object):
for line in f:
line = line.strip()
if line.startswith("option "):
- key, value = line.split(" ")[1:]
+ key, value = line.split()[1:]
self._options[key] = value
if line.startswith("#"):
self.commented_lines.append(line)
--
1.8.3.1
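A minimal Python sketch of the failure mode described above (the
volfile line is a made-up example with the extra space):

```python
# Volfile line with two spaces between key and value, as produced
# after the max-port patch; the value is made up.
line = "option max-port  60999"

# Old code: splitting on a single space leaves an empty field, so the
# two-variable unpack raises ValueError.
try:
    key, value = line.split(" ")[1:]
except ValueError as e:
    print(e)  # too many values to unpack (expected 2)

# Fixed code: split() with no argument treats consecutive whitespace
# as a single separator, leaving exactly two fields after "option".
key, value = line.split()[1:]
print(key, value)  # max-port 60999
```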

0183-glusterd-svc-Stop-stale-process-using-the-glusterd_p.patch
@@ -0,0 +1,47 @@
From fe9159ee42f0f67b01e6a495df8105ea0f66738d Mon Sep 17 00:00:00 2001
From: Mohammed Rafi KC <rkavunga@redhat.com>
Date: Thu, 30 May 2019 23:48:05 +0530
Subject: [PATCH 183/192] glusterd/svc: Stop stale process using the
glusterd_proc_stop
While restarting a glusterd process, when we have a stale pid
we were doing a simple kill. Instead, we can use glusterd_proc_stop,
because it has more logging, plus a force kill in case there is any
problem with kill-signal handling.
Upstream patch: https://review.gluster.org/#/c/glusterfs/+/22791/
>Change-Id: I4a2dadc210a7a65762dd714e809899510622b7ec
>updates: bz#1710054
>Signed-off-by: Mohammed Rafi KC <rkavunga@redhat.com>
Change-Id: I3327528d8ebf90bbb2221265a0cf059c9359f141
BUG: 1720248
Signed-off-by: Mohammed Rafi KC <rkavunga@redhat.com>
Reviewed-on: https://code.engineering.redhat.com/gerrit/172290
Reviewed-by: Atin Mukherjee <amukherj@redhat.com>
Tested-by: RHGS Build Bot <nigelb@redhat.com>
---
xlators/mgmt/glusterd/src/glusterd-svc-helper.c | 6 +++---
1 file changed, 3 insertions(+), 3 deletions(-)
diff --git a/xlators/mgmt/glusterd/src/glusterd-svc-helper.c b/xlators/mgmt/glusterd/src/glusterd-svc-helper.c
index 6a3ca52..a6e662f 100644
--- a/xlators/mgmt/glusterd/src/glusterd-svc-helper.c
+++ b/xlators/mgmt/glusterd/src/glusterd-svc-helper.c
@@ -488,9 +488,9 @@ glusterd_shd_svc_mux_init(glusterd_volinfo_t *volinfo, glusterd_svc_t *svc)
if (!mux_proc) {
if (pid != -1 && sys_access(svc->proc.pidfile, R_OK) == 0) {
- /* stale pid file, unlink it. */
- kill(pid, SIGTERM);
- sys_unlink(svc->proc.pidfile);
+ /* stale pid file, stop and unlink it */
+ glusterd_proc_stop(&svc->proc, SIGTERM, PROC_STOP_FORCE);
+ glusterd_unlink_file(svc->proc.pidfile);
}
mux_proc = __gf_find_compatible_svc(GD_NODE_SHD);
}
--
1.8.3.1
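A rough Python model of the stop-then-force behaviour that
glusterd_proc_stop adds over a bare kill(); this is a sketch of the
semantics described above, not glusterd's actual implementation:

```python
import os
import signal
import time

def proc_stop(pid, timeout=1.0):
    """Ask nicely with SIGTERM, then force-kill if the process
    survives -- the PROC_STOP_FORCE idea in miniature."""
    try:
        os.kill(pid, signal.SIGTERM)
    except ProcessLookupError:
        return                      # already gone
    deadline = time.time() + timeout
    while time.time() < deadline:
        try:
            os.kill(pid, 0)         # signal 0 only probes existence
        except ProcessLookupError:
            return                  # terminated cleanly
        time.sleep(0.05)
    os.kill(pid, signal.SIGKILL)    # escalate: force kill
```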

0184-tests-Add-gating-configuration-file-for-rhel8.patch
@@ -0,0 +1,38 @@
From c6fdb740675999883a8a7942fbcd32f9889dc739 Mon Sep 17 00:00:00 2001
From: Sunil Kumar Acharya <sheggodu@redhat.com>
Date: Thu, 13 Jun 2019 21:58:43 +0530
Subject: [PATCH 184/192] tests: Add gating configuration file for rhel8
Adding configuration files to enable automatic execution
of gating CI for glusterfs.
Label: DOWNSTREAM ONLY
BUG: 1720318
Change-Id: I8b42792d93d1eea455f86acd1576c20e12eed9f0
Signed-off-by: Sunil Kumar Acharya <sheggodu@redhat.com>
Reviewed-on: https://code.engineering.redhat.com/gerrit/173412
Reviewed-by: Vivek Das <vdas@redhat.com>
Reviewed-by: Atin Mukherjee <amukherj@redhat.com>
Tested-by: RHGS Build Bot <nigelb@redhat.com>
---
gating.yaml | 7 +++++++
1 file changed, 7 insertions(+)
create mode 100644 gating.yaml
diff --git a/gating.yaml b/gating.yaml
new file mode 100644
index 0000000..eeab6e9
--- /dev/null
+++ b/gating.yaml
@@ -0,0 +1,7 @@
+--- !Policy
+product_versions:
+ - rhel-8
+decision_context: osci_compose_gate_modules
+subject_type: redhat-module
+rules:
+ - !PassingTestCaseRule {test_case_name: manual.sst_rh_gluster_storage.glusterfs.bvt}
--
1.8.3.1

0185-gfapi-provide-an-api-for-setting-statedump-path.patch
@@ -0,0 +1,174 @@
From 462e3988936761317975fd811dd355b81328b60a Mon Sep 17 00:00:00 2001
From: Amar Tumballi <amarts@redhat.com>
Date: Thu, 14 Mar 2019 10:04:28 +0530
Subject: [PATCH 185/192] gfapi: provide an api for setting statedump path
Currently, when an application uses gfapi to consume glusterfs and a
statedump is taken, the dump goes into the /var/run/gluster directory.
This can be a concern, as that directory may be owned by some other
user, in which case taking the statedump fails. Such applications
should have an option to use a different path.
This patch provides an API to do so.
Upstream details:
> Updates: bz#1689097
> Change-Id: I8918e002bc823d83614c972b6c738baa04681b23
> URL: https://review.gluster.org/22364
BUG: 1720461
Change-Id: I6079c8d799f35eaf76e62d259b51573bf561ba5b
Signed-off-by: Amar Tumballi <amarts@redhat.com>
Reviewed-on: https://code.engineering.redhat.com/gerrit/173451
Tested-by: RHGS Build Bot <nigelb@redhat.com>
Reviewed-by: Atin Mukherjee <amukherj@redhat.com>
---
api/src/gfapi.aliases | 2 ++
api/src/gfapi.map | 5 ++++
api/src/glfs.c | 63 +++++++++++++++++++++++++++++++++++++++++++++++++++
api/src/glfs.h | 28 +++++++++++++++++++++++
4 files changed, 98 insertions(+)
diff --git a/api/src/gfapi.aliases b/api/src/gfapi.aliases
index 09c0fd8..8fdf734 100644
--- a/api/src/gfapi.aliases
+++ b/api/src/gfapi.aliases
@@ -195,3 +195,5 @@ _pub_glfs_zerofill_async _glfs_zerofill_async$GFAPI_6.0
_pub_glfs_copy_file_range _glfs_copy_file_range$GFAPI_6.0
_pub_glfs_fsetattr _glfs_fsetattr$GFAPI_6.0
_pub_glfs_setattr _glfs_setattr$GFAPI_6.0
+
+_pub_glfs_set_statedump_path _glfs_set_statedump_path@GFAPI_future
diff --git a/api/src/gfapi.map b/api/src/gfapi.map
index b97a614..cf118e8 100644
--- a/api/src/gfapi.map
+++ b/api/src/gfapi.map
@@ -271,3 +271,8 @@ GFAPI_PRIVATE_6.1 {
global:
glfs_setfspid;
} GFAPI_6.0;
+
+GFAPI_future {
+ global:
+ glfs_set_statedump_path;
+} GFAPI_PRIVATE_6.1;
diff --git a/api/src/glfs.c b/api/src/glfs.c
index f4a8e08..ba513e6 100644
--- a/api/src/glfs.c
+++ b/api/src/glfs.c
@@ -1212,6 +1212,7 @@ glusterfs_ctx_destroy(glusterfs_ctx_t *ctx)
glusterfs_graph_destroy_residual(trav_graph);
}
+ GF_FREE(ctx->statedump_path);
FREE(ctx);
return ret;
@@ -1738,3 +1739,65 @@ invalid_fs:
}
GFAPI_SYMVER_PUBLIC_DEFAULT(glfs_upcall_unregister, 3.13.0);
+
+int
+pub_glfs_set_statedump_path(struct glfs *fs, const char *path)
+{
+ struct stat st;
+ int ret;
+ DECLARE_OLD_THIS;
+ __GLFS_ENTRY_VALIDATE_FS(fs, invalid_fs);
+
+ if (!path) {
+ gf_log("glfs", GF_LOG_ERROR, "path is NULL");
+ errno = EINVAL;
+ goto err;
+ }
+
+ /* If path is not present OR, if it is directory AND has enough permission
+ * to create files, then proceed */
+ ret = sys_stat(path, &st);
+ if (ret && errno != ENOENT) {
+ gf_log("glfs", GF_LOG_ERROR, "%s: not a valid path (%s)", path,
+ strerror(errno));
+ errno = EINVAL;
+ goto err;
+ }
+
+ if (!ret) {
+ /* file is present, now check other things */
+ if (!S_ISDIR(st.st_mode)) {
+ gf_log("glfs", GF_LOG_ERROR, "%s: path is not directory", path);
+ errno = EINVAL;
+ goto err;
+ }
+ if (sys_access(path, W_OK | X_OK) < 0) {
+ gf_log("glfs", GF_LOG_ERROR,
+ "%s: path doesn't have write permission", path);
+ errno = EPERM;
+ goto err;
+ }
+ }
+
+ /* If set, it needs to be freed, so we don't have leak */
+ GF_FREE(fs->ctx->statedump_path);
+
+ fs->ctx->statedump_path = gf_strdup(path);
+ if (!fs->ctx->statedump_path) {
+ gf_log("glfs", GF_LOG_ERROR,
+ "%s: failed to set statedump path, no memory", path);
+ errno = ENOMEM;
+ goto err;
+ }
+
+ __GLFS_EXIT_FS;
+
+ return 0;
+err:
+ __GLFS_EXIT_FS;
+
+invalid_fs:
+ return -1;
+}
+
+GFAPI_SYMVER_PUBLIC_DEFAULT(glfs_set_statedump_path, future);
diff --git a/api/src/glfs.h b/api/src/glfs.h
index 6714782..a6c12e1 100644
--- a/api/src/glfs.h
+++ b/api/src/glfs.h
@@ -1453,5 +1453,33 @@ int
glfs_setattr(struct glfs *fs, const char *path, struct glfs_stat *stat,
int follow) __THROW GFAPI_PUBLIC(glfs_setattr, 6.0);
+/*
+ SYNOPSIS
+
+ glfs_set_statedump_path: Function to set statedump path.
+
+ DESCRIPTION
+
+ This function is used to set statedump directory
+
+ PARAMETERS
+
+ @fs: The 'virtual mount' object to be configured with the volume
+ specification file.
+
+ @path: statedump path. Should be a directory. But the API won't fail if the
+ directory doesn't exist yet, as one may create it later.
+
+ RETURN VALUES
+
+ 0 : Success.
+ -1 : Failure. @errno will be set with the type of failure.
+
+ */
+
+int
+glfs_set_statedump_path(struct glfs *fs, const char *path) __THROW
+ GFAPI_PUBLIC(glfs_set_statedump_path, future);
+
__END_DECLS
#endif /* !_GLFS_H */
--
1.8.3.1
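The path checks done by pub_glfs_set_statedump_path() above, restated
as a hedged Python sketch for readability:

```python
import os
import stat

def check_statedump_path(path):
    """Mirror of the C-side checks: an absent path is accepted (it may
    be created later); an existing path must be a writable directory."""
    try:
        st = os.stat(path)
    except FileNotFoundError:
        return True       # ENOENT is fine; the directory may come later
    except OSError:
        return False      # any other stat failure -> EINVAL in the C code
    if not stat.S_ISDIR(st.st_mode):
        return False      # EINVAL: exists but is not a directory
    if not os.access(path, os.W_OK | os.X_OK):
        return False      # EPERM: no write/search permission
    return True

print(check_statedump_path("/tmp"))           # True on most systems
print(check_statedump_path("/etc/hostname"))  # False: not a directory
```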

0186-cli-Remove-brick-warning-seems-unnecessary.patch
@@ -0,0 +1,57 @@
From be925e84edcecd879e953bdb68c10f98825dba53 Mon Sep 17 00:00:00 2001
From: Shwetha K Acharya <sacharya@redhat.com>
Date: Mon, 3 Jun 2019 18:05:24 +0530
Subject: [PATCH 186/192] cli: Remove-brick warning seems unnecessary
As the force-migration option is disabled by default,
the warning seems unnecessary.
Rephrased the warning to make the best sense out of it.
>fixes: bz#1712668
>Change-Id: Ia18c3c5e7b3fec808fce2194ca0504a837708822
>Signed-off-by: Shwetha K Acharya <sacharya@redhat.com>
backport of https://review.gluster.org/#/c/glusterfs/+/22805/
Bug: 1708183
Change-Id: Ia18c3c5e7b3fec808fce2194ca0504a837708822
Signed-off-by: Shwetha K Acharya <sacharya@redhat.com>
Reviewed-on: https://code.engineering.redhat.com/gerrit/173447
Tested-by: RHGS Build Bot <nigelb@redhat.com>
Reviewed-by: Sunil Kumar Heggodu Gopala Acharya <sheggodu@redhat.com>
---
cli/src/cli-cmd-volume.c | 17 +++++++++--------
1 file changed, 9 insertions(+), 8 deletions(-)
diff --git a/cli/src/cli-cmd-volume.c b/cli/src/cli-cmd-volume.c
index 564aef7..a42e663 100644
--- a/cli/src/cli-cmd-volume.c
+++ b/cli/src/cli-cmd-volume.c
@@ -2090,14 +2090,15 @@ cli_cmd_volume_remove_brick_cbk(struct cli_state *state,
" on the volume.\nDo you want to continue?";
} else if (command == GF_OP_CMD_START) {
question =
- "Running remove-brick with cluster.force-migration"
- " enabled can result in data corruption. It is safer"
- " to disable this option so that files that receive "
- "writes during migration are not migrated.\nFiles "
- "that are not migrated can then be manually copied "
- "after the remove-brick commit operation.\nDo you "
- "want to continue with your current "
- "cluster.force-migration settings?";
+ "It is recommended that remove-brick be run with"
+ " cluster.force-migration option disabled to prevent"
+ " possible data corruption. Doing so will ensure that"
+ " files that receive writes during migration will not"
+ " be migrated and will need to be manually copied"
+ " after the remove-brick commit operation. Please"
+ " check the value of the option and update accordingly."
+ " \nDo you want to continue with your current"
+ " cluster.force-migration settings?";
}
if (!brick_count) {
--
1.8.3.1

0187-gfapi-statedump_path-add-proper-version-number.patch
@@ -0,0 +1,98 @@
From a65982755b31fb548ff7a997ee754360a516da94 Mon Sep 17 00:00:00 2001
From: Amar Tumballi <amarts@redhat.com>
Date: Fri, 14 Jun 2019 13:58:25 +0530
Subject: [PATCH 187/192] gfapi: statedump_path() add proper version number
An API should have a proper version number; the 'future' version
number is just a placeholder and shouldn't be used in release versions.
With the previous backport of the patch, the version remained the same
as on the 'master' branch, i.e. 'future', but as this is a public API
it needs a fixed version number. This patch corrects that.
Label: DOWNSTREAM_ONLY
> In upstream, this is corrected by a backport to the stable version, 6.4
> URL: https://review.gluster.org/22864
BUG: 1720461
Change-Id: I939850689d47d4f240c9d43f6be1a11de29c4760
Signed-off-by: Amar Tumballi <amarts@redhat.com>
Reviewed-on: https://code.engineering.redhat.com/gerrit/173475
Reviewed-by: Soumya Koduri <skoduri@redhat.com>
Tested-by: RHGS Build Bot <nigelb@redhat.com>
Reviewed-by: Atin Mukherjee <amukherj@redhat.com>
---
api/examples/glfsxmp.c | 5 +++++
api/src/gfapi.aliases | 2 +-
api/src/gfapi.map | 2 +-
api/src/glfs.c | 2 +-
api/src/glfs.h | 2 +-
5 files changed, 9 insertions(+), 4 deletions(-)
diff --git a/api/examples/glfsxmp.c b/api/examples/glfsxmp.c
index 9d96eea..33f44df 100644
--- a/api/examples/glfsxmp.c
+++ b/api/examples/glfsxmp.c
@@ -1573,6 +1573,11 @@ main(int argc, char *argv[])
ret = glfs_set_logging(fs2, "/dev/stderr", 7);
+ ret = glfs_set_statedump_path(fs2, "/tmp");
+ if (ret) {
+ fprintf(stderr, "glfs_set_statedump_path: %s\n", strerror(errno));
+ }
+
ret = glfs_init(fs2);
fprintf(stderr, "glfs_init: returned %d\n", ret);
diff --git a/api/src/gfapi.aliases b/api/src/gfapi.aliases
index 8fdf734..692ae13 100644
--- a/api/src/gfapi.aliases
+++ b/api/src/gfapi.aliases
@@ -196,4 +196,4 @@ _pub_glfs_copy_file_range _glfs_copy_file_range$GFAPI_6.0
_pub_glfs_fsetattr _glfs_fsetattr$GFAPI_6.0
_pub_glfs_setattr _glfs_setattr$GFAPI_6.0
-_pub_glfs_set_statedump_path _glfs_set_statedump_path@GFAPI_future
+_pub_glfs_set_statedump_path _glfs_set_statedump_path@GFAPI_6.4
diff --git a/api/src/gfapi.map b/api/src/gfapi.map
index cf118e8..df65837 100644
--- a/api/src/gfapi.map
+++ b/api/src/gfapi.map
@@ -272,7 +272,7 @@ GFAPI_PRIVATE_6.1 {
glfs_setfspid;
} GFAPI_6.0;
-GFAPI_future {
+GFAPI_6.4 {
global:
glfs_set_statedump_path;
} GFAPI_PRIVATE_6.1;
diff --git a/api/src/glfs.c b/api/src/glfs.c
index ba513e6..6bbb620 100644
--- a/api/src/glfs.c
+++ b/api/src/glfs.c
@@ -1800,4 +1800,4 @@ invalid_fs:
return -1;
}
-GFAPI_SYMVER_PUBLIC_DEFAULT(glfs_set_statedump_path, future);
+GFAPI_SYMVER_PUBLIC_DEFAULT(glfs_set_statedump_path, 6.4);
diff --git a/api/src/glfs.h b/api/src/glfs.h
index a6c12e1..08b6ca0 100644
--- a/api/src/glfs.h
+++ b/api/src/glfs.h
@@ -1479,7 +1479,7 @@ glfs_setattr(struct glfs *fs, const char *path, struct glfs_stat *stat,
int
glfs_set_statedump_path(struct glfs *fs, const char *path) __THROW
- GFAPI_PUBLIC(glfs_set_statedump_path, future);
+ GFAPI_PUBLIC(glfs_set_statedump_path, 6.4);
__END_DECLS
#endif /* !_GLFS_H */
--
1.8.3.1

0188-features-shard-Fix-integer-overflow-in-block-count-a.patch
@@ -0,0 +1,38 @@
From 7221352670a750e35268573dba36c139a5041b14 Mon Sep 17 00:00:00 2001
From: Krutika Dhananjay <kdhananj@redhat.com>
Date: Fri, 3 May 2019 10:50:40 +0530
Subject: [PATCH 188/192] features/shard: Fix integer overflow in block count
accounting
... by holding delta_blocks in a 64-bit int as opposed to a 32-bit int.
> Upstream: https://review.gluster.org/22655
> BUG: 1705884
> Change-Id: I2c1ddab17457f45e27428575ad16fa678fd6c0eb
Change-Id: I2c1ddab17457f45e27428575ad16fa678fd6c0eb
updates: bz#1668001
Signed-off-by: Krutika Dhananjay <kdhananj@redhat.com>
Reviewed-on: https://code.engineering.redhat.com/gerrit/173476
Tested-by: RHGS Build Bot <nigelb@redhat.com>
Reviewed-by: Atin Mukherjee <amukherj@redhat.com>
---
xlators/features/shard/src/shard.h | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/xlators/features/shard/src/shard.h b/xlators/features/shard/src/shard.h
index 570fe46..cd6a663 100644
--- a/xlators/features/shard/src/shard.h
+++ b/xlators/features/shard/src/shard.h
@@ -275,7 +275,7 @@ typedef struct shard_local {
size_t req_size;
size_t readdir_size;
int64_t delta_size;
- int delta_blocks;
+ int64_t delta_blocks;
loc_t loc;
loc_t dot_shard_loc;
loc_t dot_shard_rm_loc;
--
1.8.3.1
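Back-of-the-envelope arithmetic for the overflow, assuming ia_blocks
counts 512-byte blocks (the usual st_blocks unit): a signed 32-bit
delta_blocks caps out near 1 TiB, and a single 2 TiB delta wraps to 0:

```python
INT32_MAX = 2**31 - 1
BLOCK_SIZE = 512

# Largest byte delta a signed 32-bit block counter can represent:
print(INT32_MAX * BLOCK_SIZE / 2**40)    # ~1.0 (TiB)

# A 2 TiB allocation is 2**32 blocks, which wraps a signed 32-bit
# counter back to 0, losing the entire delta.
delta_blocks = (2 * 2**40) // BLOCK_SIZE
wrapped = (delta_blocks + 2**31) % 2**32 - 2**31
print(delta_blocks, wrapped)             # 4294967296 0
```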

0189-features-shard-Fix-block-count-accounting-upon-trunc.patch
@@ -0,0 +1,323 @@
From 369c5772a722b6e346ec8b41f992112785366778 Mon Sep 17 00:00:00 2001
From: Krutika Dhananjay <kdhananj@redhat.com>
Date: Wed, 8 May 2019 13:00:51 +0530
Subject: [PATCH 189/192] features/shard: Fix block-count accounting upon
truncate to lower size
> Upstream: https://review.gluster.org/22681
> BUG: 1705884
> Change-Id: I9128a192e9bf8c3c3a959e96b7400879d03d7c53
The way delta_blocks is computed in shard is incorrect when a file
is truncated to a lower size. The accounting only considers the change
in size of the last of the truncated shards.
FIX:
Get the block count of each shard, just before it is unlinked at posix,
in xdata. Their summation, plus the change in size of the last shard
(from an actual truncate), is used to compute delta_blocks, which is
used in the xattrop for the size update.
Change-Id: I9128a192e9bf8c3c3a959e96b7400879d03d7c53
updates: bz#1668001
Signed-off-by: Krutika Dhananjay <kdhananj@redhat.com>
Reviewed-on: https://code.engineering.redhat.com/gerrit/173477
Tested-by: RHGS Build Bot <nigelb@redhat.com>
Reviewed-by: Atin Mukherjee <amukherj@redhat.com>
---
libglusterfs/src/glusterfs/glusterfs.h | 2 +
tests/bugs/shard/bug-1705884.t | 32 +++++++++++++++
xlators/features/shard/src/shard.c | 60 +++++++++++++++++++++++------
xlators/features/shard/src/shard.h | 2 +-
xlators/storage/posix/src/posix-entry-ops.c | 9 +++++
5 files changed, 92 insertions(+), 13 deletions(-)
create mode 100644 tests/bugs/shard/bug-1705884.t
diff --git a/libglusterfs/src/glusterfs/glusterfs.h b/libglusterfs/src/glusterfs/glusterfs.h
index 516b497..9ec2365 100644
--- a/libglusterfs/src/glusterfs/glusterfs.h
+++ b/libglusterfs/src/glusterfs/glusterfs.h
@@ -328,6 +328,8 @@ enum gf_internal_fop_indicator {
#define GF_RESPONSE_LINK_COUNT_XDATA "gf_response_link_count"
#define GF_REQUEST_LINK_COUNT_XDATA "gf_request_link_count"
+#define GF_GET_FILE_BLOCK_COUNT "gf_get_file_block_count"
+
#define CTR_ATTACH_TIER_LOOKUP "ctr_attach_tier_lookup"
#define CLIENT_CMD_CONNECT "trusted.glusterfs.client-connect"
diff --git a/tests/bugs/shard/bug-1705884.t b/tests/bugs/shard/bug-1705884.t
new file mode 100644
index 0000000..f6e5037
--- /dev/null
+++ b/tests/bugs/shard/bug-1705884.t
@@ -0,0 +1,32 @@
+#!/bin/bash
+
+. $(dirname $0)/../../include.rc
+. $(dirname $0)/../../volume.rc
+. $(dirname $0)/../../fallocate.rc
+
+cleanup
+
+require_fallocate -l 1m $M0/file
+
+TEST glusterd
+TEST pidof glusterd
+TEST $CLI volume create $V0 replica 3 $H0:$B0/${V0}{0,1,2}
+TEST $CLI volume set $V0 features.shard on
+TEST $CLI volume set $V0 performance.write-behind off
+TEST $CLI volume set $V0 performance.stat-prefetch off
+TEST $CLI volume start $V0
+
+TEST $GFS --volfile-id=$V0 --volfile-server=$H0 $M0
+
+TEST fallocate -l 200M $M0/foo
+EXPECT `echo "$(( ( 200 * 1024 * 1024 ) / 512 ))"` stat -c %b $M0/foo
+TEST truncate -s 0 $M0/foo
+EXPECT "0" stat -c %b $M0/foo
+TEST fallocate -l 100M $M0/foo
+EXPECT `echo "$(( ( 100 * 1024 * 1024 ) / 512 ))"` stat -c %b $M0/foo
+
+EXPECT_WITHIN $UMOUNT_TIMEOUT "Y" force_umount $M0
+TEST $CLI volume stop $V0
+TEST $CLI volume delete $V0
+
+cleanup
diff --git a/xlators/features/shard/src/shard.c b/xlators/features/shard/src/shard.c
index c1799ad..b248767 100644
--- a/xlators/features/shard/src/shard.c
+++ b/xlators/features/shard/src/shard.c
@@ -1135,6 +1135,7 @@ shard_update_file_size(call_frame_t *frame, xlator_t *this, fd_t *fd,
{
int ret = -1;
int64_t *size_attr = NULL;
+ int64_t delta_blocks = 0;
inode_t *inode = NULL;
shard_local_t *local = NULL;
dict_t *xattr_req = NULL;
@@ -1156,13 +1157,13 @@ shard_update_file_size(call_frame_t *frame, xlator_t *this, fd_t *fd,
/* If both size and block count have not changed, then skip the xattrop.
*/
- if ((local->delta_size + local->hole_size == 0) &&
- (local->delta_blocks == 0)) {
+ delta_blocks = GF_ATOMIC_GET(local->delta_blocks);
+ if ((local->delta_size + local->hole_size == 0) && (delta_blocks == 0)) {
goto out;
}
ret = shard_set_size_attrs(local->delta_size + local->hole_size,
- local->delta_blocks, &size_attr);
+ delta_blocks, &size_attr);
if (ret) {
gf_msg(this->name, GF_LOG_ERROR, 0, SHARD_MSG_SIZE_SET_FAILED,
"Failed to set size attrs for %s", uuid_utoa(inode->gfid));
@@ -1947,6 +1948,7 @@ shard_truncate_last_shard_cbk(call_frame_t *frame, void *cookie, xlator_t *this,
dict_t *xdata)
{
inode_t *inode = NULL;
+ int64_t delta_blocks = 0;
shard_local_t *local = NULL;
local = frame->local;
@@ -1967,14 +1969,15 @@ shard_truncate_last_shard_cbk(call_frame_t *frame, void *cookie, xlator_t *this,
}
local->postbuf.ia_size = local->offset;
- local->postbuf.ia_blocks -= (prebuf->ia_blocks - postbuf->ia_blocks);
/* Let the delta be negative. We want xattrop to do subtraction */
local->delta_size = local->postbuf.ia_size - local->prebuf.ia_size;
- local->delta_blocks = postbuf->ia_blocks - prebuf->ia_blocks;
+ delta_blocks = GF_ATOMIC_ADD(local->delta_blocks,
+ postbuf->ia_blocks - prebuf->ia_blocks);
+ GF_ASSERT(delta_blocks <= 0);
+ local->postbuf.ia_blocks += delta_blocks;
local->hole_size = 0;
- shard_inode_ctx_set(inode, this, postbuf, 0, SHARD_MASK_TIMES);
-
+ shard_inode_ctx_set(inode, this, &local->postbuf, 0, SHARD_MASK_TIMES);
shard_update_file_size(frame, this, NULL, &local->loc,
shard_post_update_size_truncate_handler);
return 0;
@@ -2034,8 +2037,10 @@ shard_truncate_htol_cbk(call_frame_t *frame, void *cookie, xlator_t *this,
struct iatt *preparent, struct iatt *postparent,
dict_t *xdata)
{
+ int ret = 0;
int call_count = 0;
int shard_block_num = (long)cookie;
+ uint64_t block_count = 0;
shard_local_t *local = NULL;
local = frame->local;
@@ -2045,6 +2050,16 @@ shard_truncate_htol_cbk(call_frame_t *frame, void *cookie, xlator_t *this,
local->op_errno = op_errno;
goto done;
}
+ ret = dict_get_uint64(xdata, GF_GET_FILE_BLOCK_COUNT, &block_count);
+ if (!ret) {
+ GF_ATOMIC_SUB(local->delta_blocks, block_count);
+ } else {
+ /* dict_get failed possibly due to a heterogeneous cluster? */
+ gf_msg(this->name, GF_LOG_WARNING, 0, SHARD_MSG_DICT_OP_FAILED,
+ "Failed to get key %s from dict during truncate of gfid %s",
+ GF_GET_FILE_BLOCK_COUNT,
+ uuid_utoa(local->resolver_base_inode->gfid));
+ }
shard_unlink_block_inode(local, shard_block_num);
done:
@@ -2074,6 +2089,7 @@ shard_truncate_htol(call_frame_t *frame, xlator_t *this, inode_t *inode)
gf_boolean_t wind_failed = _gf_false;
shard_local_t *local = NULL;
shard_priv_t *priv = NULL;
+ dict_t *xdata_req = NULL;
local = frame->local;
priv = this->private;
@@ -2101,7 +2117,7 @@ shard_truncate_htol(call_frame_t *frame, xlator_t *this, inode_t *inode)
local->postbuf.ia_size = local->offset;
local->postbuf.ia_blocks = local->prebuf.ia_blocks;
local->delta_size = local->postbuf.ia_size - local->prebuf.ia_size;
- local->delta_blocks = 0;
+ GF_ATOMIC_INIT(local->delta_blocks, 0);
local->hole_size = 0;
shard_update_file_size(frame, this, local->fd, &local->loc,
shard_post_update_size_truncate_handler);
@@ -2110,6 +2126,21 @@ shard_truncate_htol(call_frame_t *frame, xlator_t *this, inode_t *inode)
local->call_count = call_count;
i = 1;
+ xdata_req = dict_new();
+ if (!xdata_req) {
+ shard_common_failure_unwind(local->fop, frame, -1, ENOMEM);
+ return 0;
+ }
+ ret = dict_set_uint64(xdata_req, GF_GET_FILE_BLOCK_COUNT, 8 * 8);
+ if (ret) {
+ gf_msg(this->name, GF_LOG_WARNING, 0, SHARD_MSG_DICT_OP_FAILED,
+ "Failed to set key %s into dict during truncate of %s",
+ GF_GET_FILE_BLOCK_COUNT,
+ uuid_utoa(local->resolver_base_inode->gfid));
+ dict_unref(xdata_req);
+ shard_common_failure_unwind(local->fop, frame, -1, ENOMEM);
+ return 0;
+ }
SHARD_SET_ROOT_FS_ID(frame, local);
while (cur_block <= last_block) {
@@ -2148,7 +2179,7 @@ shard_truncate_htol(call_frame_t *frame, xlator_t *this, inode_t *inode)
STACK_WIND_COOKIE(frame, shard_truncate_htol_cbk,
(void *)(long)cur_block, FIRST_CHILD(this),
- FIRST_CHILD(this)->fops->unlink, &loc, 0, NULL);
+ FIRST_CHILD(this)->fops->unlink, &loc, 0, xdata_req);
loc_wipe(&loc);
next:
i++;
@@ -2156,6 +2187,7 @@ shard_truncate_htol(call_frame_t *frame, xlator_t *this, inode_t *inode)
if (!--call_count)
break;
}
+ dict_unref(xdata_req);
return 0;
}
@@ -2608,7 +2640,7 @@ shard_post_lookup_truncate_handler(call_frame_t *frame, xlator_t *this)
*/
local->hole_size = local->offset - local->prebuf.ia_size;
local->delta_size = 0;
- local->delta_blocks = 0;
+ GF_ATOMIC_INIT(local->delta_blocks, 0);
local->postbuf.ia_size = local->offset;
tmp_stbuf.ia_size = local->offset;
shard_inode_ctx_set(local->loc.inode, this, &tmp_stbuf, 0,
@@ -2624,7 +2656,7 @@ shard_post_lookup_truncate_handler(call_frame_t *frame, xlator_t *this)
*/
local->hole_size = 0;
local->delta_size = (local->offset - local->prebuf.ia_size);
- local->delta_blocks = 0;
+ GF_ATOMIC_INIT(local->delta_blocks, 0);
tmp_stbuf.ia_size = local->offset;
shard_inode_ctx_set(local->loc.inode, this, &tmp_stbuf, 0,
SHARD_INODE_WRITE_MASK);
@@ -2680,6 +2712,7 @@ shard_truncate(call_frame_t *frame, xlator_t *this, loc_t *loc, off_t offset,
if (!local->xattr_req)
goto err;
local->resolver_base_inode = loc->inode;
+ GF_ATOMIC_INIT(local->delta_blocks, 0);
shard_lookup_base_file(frame, this, &local->loc,
shard_post_lookup_truncate_handler);
@@ -2735,6 +2768,7 @@ shard_ftruncate(call_frame_t *frame, xlator_t *this, fd_t *fd, off_t offset,
local->loc.inode = inode_ref(fd->inode);
gf_uuid_copy(local->loc.gfid, fd->inode->gfid);
local->resolver_base_inode = fd->inode;
+ GF_ATOMIC_INIT(local->delta_blocks, 0);
shard_lookup_base_file(frame, this, &local->loc,
shard_post_lookup_truncate_handler);
@@ -5295,7 +5329,8 @@ shard_common_inode_write_do_cbk(call_frame_t *frame, void *cookie,
local->op_errno = op_errno;
} else {
local->written_size += op_ret;
- local->delta_blocks += (post->ia_blocks - pre->ia_blocks);
+ GF_ATOMIC_ADD(local->delta_blocks,
+ post->ia_blocks - pre->ia_blocks);
local->delta_size += (post->ia_size - pre->ia_size);
shard_inode_ctx_set(local->fd->inode, this, post, 0,
SHARD_MASK_TIMES);
@@ -6599,6 +6634,7 @@ shard_common_inode_write_begin(call_frame_t *frame, xlator_t *this,
local->fd = fd_ref(fd);
local->block_size = block_size;
local->resolver_base_inode = local->fd->inode;
+ GF_ATOMIC_INIT(local->delta_blocks, 0);
local->loc.inode = inode_ref(fd->inode);
gf_uuid_copy(local->loc.gfid, fd->inode->gfid);
diff --git a/xlators/features/shard/src/shard.h b/xlators/features/shard/src/shard.h
index cd6a663..04abd62 100644
--- a/xlators/features/shard/src/shard.h
+++ b/xlators/features/shard/src/shard.h
@@ -275,7 +275,7 @@ typedef struct shard_local {
size_t req_size;
size_t readdir_size;
int64_t delta_size;
- int64_t delta_blocks;
+ gf_atomic_t delta_blocks;
loc_t loc;
loc_t dot_shard_loc;
loc_t dot_shard_rm_loc;
diff --git a/xlators/storage/posix/src/posix-entry-ops.c b/xlators/storage/posix/src/posix-entry-ops.c
index b24a052..34ee2b8 100644
--- a/xlators/storage/posix/src/posix-entry-ops.c
+++ b/xlators/storage/posix/src/posix-entry-ops.c
@@ -1071,6 +1071,7 @@ posix_unlink(call_frame_t *frame, xlator_t *this, loc_t *loc, int xflag,
char *real_path = NULL;
char *par_path = NULL;
int32_t fd = -1;
+ int ret = -1;
struct iatt stbuf = {
0,
};
@@ -1235,6 +1236,14 @@ posix_unlink(call_frame_t *frame, xlator_t *this, loc_t *loc, int xflag,
goto out;
}
+ if (xdata && dict_get(xdata, GF_GET_FILE_BLOCK_COUNT)) {
+ ret = dict_set_uint64(unwind_dict, GF_GET_FILE_BLOCK_COUNT,
+ stbuf.ia_blocks);
+ if (ret)
+ gf_msg(this->name, GF_LOG_WARNING, 0, P_MSG_SET_XDATA_FAIL,
+ "Failed to set %s in rsp dict", GF_GET_FILE_BLOCK_COUNT);
+ }
+
if (xdata && dict_get(xdata, GET_LINK_COUNT))
get_link_count = _gf_true;
op_ret = posix_unlink_gfid_handle_and_entry(frame, this, real_path, &stbuf,
--
1.8.3.1
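A toy Python model of the before/after accounting (all shard sizes are
made up; blocks are 512-byte units):

```python
# Per-shard block counts before a truncate to a lower size.
shards = [128, 128, 128, 128]
new_last = 16                  # shard 1 survives, truncated to 16 blocks;
                               # shards 2 and 3 are unlinked entirely

# Old accounting: only the surviving last shard's change was counted,
# so the blocks of the unlinked shards were never subtracted.
old_delta = new_last - shards[1]                      # -112

# Fixed accounting: posix returns each unlinked shard's block count in
# xdata; shard subtracts those too before the size-update xattrop.
new_delta = (new_last - shards[1]) - sum(shards[2:])  # -368
print(old_delta, new_delta)
```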

0190-Build-removing-the-hardcoded-usage-of-python3.patch
@@ -0,0 +1,49 @@
From c7aae487213e464b2ee7a785d752bd8264ceb371 Mon Sep 17 00:00:00 2001
From: Hari Gowtham <hgowtham@redhat.com>
Date: Thu, 13 Jun 2019 20:12:14 +0530
Subject: [PATCH 190/192] Build: removing the hardcoded usage of python3
Label : DOWNSTREAM ONLY
Problem: RHEL8 needed python3, so python3 was hardcoded to be used
in the gluster build. python2 was still being used by RHEL7 machines, and
when the shebangs were redirected to use python3, glusterfind failed.
It was not working from the 6.0-5 downstream build onwards.
Fix: revert to the old mechanism where we check the python version
and redirect the python scripts accordingly.
Change-Id: I8dc6c9185b2740e20e4c4d734cc1a9e335e9c449
fixes: bz#1719640
Signed-off-by: Hari Gowtham <hgowtham@redhat.com>
Reviewed-on: https://code.engineering.redhat.com/gerrit/173392
Reviewed-by: Kaleb Keithley <kkeithle@redhat.com>
Tested-by: RHGS Build Bot <nigelb@redhat.com>
Reviewed-by: Aravinda Vishwanathapura Krishna Murthy <avishwan@redhat.com>
Reviewed-by: Atin Mukherjee <amukherj@redhat.com>
---
glusterfs.spec.in | 6 ++++--
1 file changed, 4 insertions(+), 2 deletions(-)
diff --git a/glusterfs.spec.in b/glusterfs.spec.in
index 9c7d7a7..0127e8e 100644
--- a/glusterfs.spec.in
+++ b/glusterfs.spec.in
@@ -722,10 +722,12 @@ GlusterFS Events
%prep
%setup -q -n %{name}-%{version}%{?prereltag}
+%if ( ! %{_usepython3} )
echo "fixing python shebangs..."
-for i in `find . -type f -exec bash -c "if file {} | grep 'Python script, ASCII text executable' >/dev/null; then echo {}; fi" ';'`; do
- sed -i -e 's|^#!/usr/bin/python.*|#!%{__python3}|' -e 's|^#!/usr/bin/env python.*|#!%{__python3}|' $i
+for f in api events extras geo-replication libglusterfs tools xlators; do
+find $f -type f -exec sed -i 's|/usr/bin/python3|/usr/bin/python2|' {} \;
done
+%endif
%build
--
1.8.3.1

0191-Build-Update-python-shebangs-based-on-version.patch
@@ -0,0 +1,49 @@
From 4f471c25dad4d7d51443005108ec53c2d390daf5 Mon Sep 17 00:00:00 2001
From: Sunil Kumar Acharya <sheggodu@redhat.com>
Date: Fri, 14 Jun 2019 20:20:26 +0530
Subject: [PATCH 191/192] Build: Update python shebangs based on version
RHEL 7 uses python2 whereas RHEL 8 uses python3.
Update the spec file to use the appropriate shebangs
to avoid script failures.
Label : DOWNSTREAM ONLY
BUG: 1719640
Change-Id: I075764b6a00ba53a305451e3fc58584facd75a78
Signed-off-by: Sunil Kumar Acharya <sheggodu@redhat.com>
Reviewed-on: https://code.engineering.redhat.com/gerrit/173518
Tested-by: RHGS Build Bot <nigelb@redhat.com>
Reviewed-by: Hari Gowtham Gopal <hgowtham@redhat.com>
Reviewed-by: Atin Mukherjee <amukherj@redhat.com>
---
glusterfs.spec.in | 12 ++++++++----
1 file changed, 8 insertions(+), 4 deletions(-)
diff --git a/glusterfs.spec.in b/glusterfs.spec.in
index 0127e8e..29e4a37 100644
--- a/glusterfs.spec.in
+++ b/glusterfs.spec.in
@@ -722,11 +722,15 @@ GlusterFS Events
%prep
%setup -q -n %{name}-%{version}%{?prereltag}
-%if ( ! %{_usepython3} )
echo "fixing python shebangs..."
-for f in api events extras geo-replication libglusterfs tools xlators; do
-find $f -type f -exec sed -i 's|/usr/bin/python3|/usr/bin/python2|' {} \;
-done
+%if ( %{_usepython3} )
+ for i in `find . -type f -exec bash -c "if file {} | grep 'Python script, ASCII text executable' >/dev/null; then echo {}; fi" ';'`; do
+ sed -i -e 's|^#!/usr/bin/python.*|#!%{__python3}|' -e 's|^#!/usr/bin/env python.*|#!%{__python3}|' $i
+ done
+%else
+ for f in api events extras geo-replication libglusterfs tools xlators; do
+ find $f -type f -exec sed -i 's|/usr/bin/python3|/usr/bin/python2|' {} \;
+ done
%endif
%build
--
1.8.3.1
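The intent of the two sed expressions in the python3 branch, restated
as a hedged Python sketch (the sample shebangs are made up):

```python
import re

PY3 = "/usr/bin/python3"  # stand-in for the %{__python3} macro

def fix_shebang(line):
    # Any direct python shebang, or an "env python" shebang, is
    # rewritten to the build's python3 interpreter.
    line = re.sub(r"^#!/usr/bin/python.*", "#!" + PY3, line)
    line = re.sub(r"^#!/usr/bin/env python.*", "#!" + PY3, line)
    return line

print(fix_shebang("#!/usr/bin/env python"))  # -> #!/usr/bin/python3
print(fix_shebang("#!/usr/bin/python2.7"))   # -> #!/usr/bin/python3
```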

0192-build-Ensure-gluster-cli-package-is-built-as-part-of.patch
@@ -0,0 +1,114 @@
From d2319a4746ba07ada5b3a20462ec2900e1c03c5a Mon Sep 17 00:00:00 2001
From: Atin Mukherjee <amukherj@redhat.com>
Date: Thu, 13 Jun 2019 19:56:32 +0530
Subject: [PATCH 192/192] build: Ensure gluster-cli package is built as part of
client build
Till RHGS 3.4.x the RHGS client was shipping the gluster-cli rpm. With RHGS
3.5, which is a rebase of glusterfs 6.0, gluster-cli is only built for the
server. The gluster CLI offers a remote CLI execution capability via the
--remote-host option, for which the cli and glusterd need not be co-located,
and hence shipping the cli as part of the client package is mandatory. Without
this change, client upgrades across RHEL minor versions are also broken.
>Fixes: bz#1720615
>Change-Id: I5071f3255ff615113b36b08cd5326be6e37d907d
>Signed-off-by: Niels de Vos <ndevos@redhat.com>
upstream patch: https://review.gluster.org/#/c/glusterfs/+/22868/
BUG: 1720079
Change-Id: I11ec3e2b4d98b3e701147c60ca797d54570d598e
Signed-off-by: Atin Mukherjee <amukherj@redhat.com>
Reviewed-on: https://code.engineering.redhat.com/gerrit/173388
Tested-by: RHGS Build Bot <nigelb@redhat.com>
---
cli/src/Makefile.am | 2 --
doc/Makefile.am | 4 ++--
glusterfs.spec.in | 9 +++------
3 files changed, 5 insertions(+), 10 deletions(-)
diff --git a/cli/src/Makefile.am b/cli/src/Makefile.am
index 6be070f..3e7511f 100644
--- a/cli/src/Makefile.am
+++ b/cli/src/Makefile.am
@@ -1,6 +1,4 @@
-if WITH_SERVER
sbin_PROGRAMS = gluster
-endif
gluster_SOURCES = cli.c registry.c input.c cli-cmd.c cli-rl.c cli-cmd-global.c \
cli-cmd-volume.c cli-cmd-peer.c cli-rpc-ops.c cli-cmd-parser.c\
diff --git a/doc/Makefile.am b/doc/Makefile.am
index 7c04d74..9904767 100644
--- a/doc/Makefile.am
+++ b/doc/Makefile.am
@@ -1,9 +1,9 @@
EXTRA_DIST = glusterfs.8 mount.glusterfs.8 gluster.8 \
glusterd.8 glusterfsd.8
-man8_MANS = glusterfs.8 mount.glusterfs.8
+man8_MANS = gluster.8 glusterfs.8 mount.glusterfs.8
if WITH_SERVER
-man8_MANS += gluster.8 glusterd.8 glusterfsd.8
+man8_MANS += glusterd.8 glusterfsd.8
endif
CLEANFILES =
diff --git a/glusterfs.spec.in b/glusterfs.spec.in
index 29e4a37..c505cd9 100644
--- a/glusterfs.spec.in
+++ b/glusterfs.spec.in
@@ -353,7 +353,6 @@ is in user space and easily manageable.
This package provides the api include files.
-%if ( 0%{!?_without_server:1} )
%package cli
Summary: GlusterFS CLI
Requires: %{name}-libs%{?_isa} = %{version}-%{release}
@@ -368,7 +367,6 @@ called Translators from GNU Hurd kernel. Much of the code in GlusterFS
is in user space and easily manageable.
This package provides the GlusterFS CLI application and its man page
-%endif
%package cloudsync-plugins
Summary: Cloudsync Plugins
@@ -891,10 +889,8 @@ touch %{buildroot}%{_sharedstatedir}/glusterd/nfs/run/nfs.pid
find ./tests ./run-tests.sh -type f | cpio -pd %{buildroot}%{_prefix}/share/glusterfs
## Install bash completion for cli
-%if ( 0%{!?_without_server:1} )
install -p -m 0744 -D extras/command-completion/gluster.bash \
%{buildroot}%{_sysconfdir}/bash_completion.d/gluster
-%endif
%if ( 0%{!?_without_server:1} )
echo "RHGS 3.5" > %{buildroot}%{_datadir}/glusterfs/release
@@ -1193,12 +1189,10 @@ exit 0
%dir %{_includedir}/glusterfs/api
%{_includedir}/glusterfs/api/*
-%if ( 0%{!?_without_server:1} )
%files cli
%{_sbindir}/gluster
%{_mandir}/man8/gluster.8*
%{_sysconfdir}/bash_completion.d/gluster
-%endif
%files cloudsync-plugins
%dir %{_libdir}/glusterfs/%{version}%{?prereltag}/cloudsync-plugins
@@ -1938,6 +1932,9 @@ fi
%endif
%changelog
+* Fri Jun 14 2019 Atin Mukherjee <amukherj@redhat.com>
+- Ensure gluster-cli package is part of client build (#1720079)
+
* Mon May 27 2019 Jiffin Tony Thottan <jthottan@redhat.com>
- Change the dependency to 2.7.3 on nfs-ganesha for glusterfs-ganesha (#1714078)
--
1.8.3.1

glusterfs.spec
@@ -231,7 +231,7 @@ Release: 0.1%{?prereltag:.%{prereltag}}%{?dist}
%else
Name: glusterfs
Version: 6.0
-Release: 5%{?dist}
+Release: 6%{?dist}
ExcludeArch: i686
%endif
License: GPLv2 or LGPLv3+
@@ -484,6 +484,20 @@ Patch0175: 0175-ec-fini-Fix-race-between-xlator-cleanup-and-on-going.patch
Patch0176: 0176-features-shard-Fix-crash-during-background-shard-del.patch
Patch0177: 0177-features-shard-Fix-extra-unref-when-inode-object-is-.patch
Patch0178: 0178-Cluster-afr-Don-t-treat-all-bricks-having-metadata-p.patch
+Patch0179: 0179-tests-Fix-split-brain-favorite-child-policy.t-failur.patch
+Patch0180: 0180-ganesha-scripts-Make-generate-epoch.py-python3-compa.patch
+Patch0181: 0181-afr-log-before-attempting-data-self-heal.patch
+Patch0182: 0182-geo-rep-fix-mountbroker-setup.patch
+Patch0183: 0183-glusterd-svc-Stop-stale-process-using-the-glusterd_p.patch
+Patch0184: 0184-tests-Add-gating-configuration-file-for-rhel8.patch
+Patch0185: 0185-gfapi-provide-an-api-for-setting-statedump-path.patch
+Patch0186: 0186-cli-Remove-brick-warning-seems-unnecessary.patch
+Patch0187: 0187-gfapi-statedump_path-add-proper-version-number.patch
+Patch0188: 0188-features-shard-Fix-integer-overflow-in-block-count-a.patch
+Patch0189: 0189-features-shard-Fix-block-count-accounting-upon-trunc.patch
+Patch0190: 0190-Build-removing-the-hardcoded-usage-of-python3.patch
+Patch0191: 0191-Build-Update-python-shebangs-based-on-version.patch
+Patch0192: 0192-build-Ensure-gluster-cli-package-is-built-as-part-of.patch
%description
GlusterFS is a distributed file-system capable of scaling to several
@@ -533,7 +547,6 @@ is in user space and easily manageable.
This package provides the api include files.
-%if ( 0%{!?_without_server:1} )
%package cli
Summary: GlusterFS CLI
Requires: %{name}-libs%{?_isa} = %{version}-%{release}
@@ -548,7 +561,6 @@ called Translators from GNU Hurd kernel. Much of the code in GlusterFS
is in user space and easily manageable.
This package provides the GlusterFS CLI application and its man page
-%endif

%package cloudsync-plugins
@@ -976,9 +988,15 @@ do
done
echo "fixing python shebangs..."
-for i in `find . -type f -exec bash -c "if file {} | grep 'Python script, ASCII text executable' >/dev/null; then echo {}; fi" ';'`; do
-sed -i -e 's|^#!/usr/bin/python.*|#!%{__python3}|' -e 's|^#!/usr/bin/env python.*|#!%{__python3}|' $i
-done
+%if ( %{_usepython3} )
+for i in `find . -type f -exec bash -c "if file {} | grep 'Python script, ASCII text executable' >/dev/null; then echo {}; fi" ';'`; do
+sed -i -e 's|^#!/usr/bin/python.*|#!%{__python3}|' -e 's|^#!/usr/bin/env python.*|#!%{__python3}|' $i
+done
+%else
+for f in api events extras geo-replication libglusterfs tools xlators; do
+find $f -type f -exec sed -i 's|/usr/bin/python3|/usr/bin/python2|' {} \;
+done
+%endif

%build
@@ -1138,10 +1156,8 @@ touch %{buildroot}%{_sharedstatedir}/glusterd/nfs/run/nfs.pid
find ./tests ./run-tests.sh -type f | cpio -pd %{buildroot}%{_prefix}/share/glusterfs
## Install bash completion for cli
-%if ( 0%{!?_without_server:1} )
install -p -m 0744 -D extras/command-completion/gluster.bash \
%{buildroot}%{_sysconfdir}/bash_completion.d/gluster
-%endif
%if ( 0%{!?_without_server:1} )
echo "RHGS 3.5" > %{buildroot}%{_datadir}/glusterfs/release
@@ -1440,12 +1456,10 @@ exit 0
%dir %{_includedir}/glusterfs/api
%{_includedir}/glusterfs/api/*
-%if ( 0%{!?_without_server:1} )
%files cli
%{_sbindir}/gluster
%{_mandir}/man8/gluster.8*
%{_sysconfdir}/bash_completion.d/gluster
-%endif

%files cloudsync-plugins
%dir %{_libdir}/glusterfs/%{version}%{?prereltag}/cloudsync-plugins
@@ -2185,6 +2199,10 @@ fi
%endif

%changelog
+* Fri Jun 14 2019 Sunil Kumar Acharya <sheggodu@redhat.com> - 6.0-6
+- fixes bugs bz#1668001 bz#1708043 bz#1708183 bz#1710701
+bz#1719640 bz#1720079 bz#1720248 bz#1720318 bz#1720461
+
* Tue Jun 11 2019 Sunil Kumar Acharya <sheggodu@redhat.com> - 6.0-5
- fixes bugs bz#1573077 bz#1694595 bz#1703434 bz#1714536 bz#1714588
bz#1715407 bz#1715438 bz#1705018