autobuild v3.12.2-14

Resolves: bz#1547903 bz#1566336 bz#1568896 bz#1578716 bz#1581047
Resolves: bz#1581231 bz#1582066 bz#1593865 bz#1597506 bz#1597511
Resolves: bz#1597654 bz#1597768 bz#1598105 bz#1598356 bz#1599037
Resolves: bz#1599823 bz#1600057 bz#1601314
Signed-off-by: Milind Changire <mchangir@redhat.com>
This commit is contained in:
Milind Changire 2018-07-18 08:38:52 -04:00
parent 803d1bd34c
commit 0820681560
21 changed files with 2245 additions and 1 deletion

@@ -0,0 +1,76 @@
From 029fbbdaa7c4ddcc2479f507345a5c3ab1035313 Mon Sep 17 00:00:00 2001
From: Ravishankar N <ravishankar@redhat.com>
Date: Mon, 2 Jul 2018 16:05:39 +0530
Subject: [PATCH 306/325] glusterfsd: Do not process GLUSTERD_BRICK_XLATOR_OP
if graph is not ready
Patch in upstream master: https://review.gluster.org/#/c/20435/
Patch in release-3.12: https://review.gluster.org/#/c/20436/
Problem:
If glustershd gets restarted by glusterd due to a node reboot, volume start
force, or anything else that changes the shd graph (add/remove brick), and
index heal is launched via the CLI, there is a chance that shd receives this
IPC before the graph is fully active. Thus, when it accesses
glusterfsd_ctx->active, it crashes.
Fix:
Since glusterd does not really wait for the daemons it spawned to be
fully initialized and can send the request as soon as rpc initialization has
succeeded, we just handle it at shd. If glusterfs_graph_activate() is
not yet done in shd but glusterd sends GD_OP_HEAL_VOLUME to shd,
we fail the request.
Change-Id: If6cc07bc5455c4ba03458a36c28b63664496b17d
BUG: 1593865
Signed-off-by: Ravishankar N <ravishankar@redhat.com>
Reviewed-on: https://code.engineering.redhat.com/gerrit/143097
Tested-by: RHGS Build Bot <nigelb@redhat.com>
Reviewed-by: Atin Mukherjee <amukherj@redhat.com>
---
glusterfsd/src/glusterfsd-messages.h | 4 +++-
glusterfsd/src/glusterfsd-mgmt.c | 6 ++++++
2 files changed, 9 insertions(+), 1 deletion(-)
diff --git a/glusterfsd/src/glusterfsd-messages.h b/glusterfsd/src/glusterfsd-messages.h
index e9c28f7..e38a88b 100644
--- a/glusterfsd/src/glusterfsd-messages.h
+++ b/glusterfsd/src/glusterfsd-messages.h
@@ -36,7 +36,7 @@
*/
#define GLFS_COMP_BASE GLFS_MSGID_COMP_GLUSTERFSD
-#define GLFS_NUM_MESSAGES 37
+#define GLFS_NUM_MESSAGES 38
#define GLFS_MSGID_END (GLFS_COMP_BASE + GLFS_NUM_MESSAGES + 1)
/* Messaged with message IDs */
#define glfs_msg_start_x GLFS_COMP_BASE, "Invalid: Start of messages"
@@ -109,6 +109,8 @@
#define glusterfsd_msg_36 (GLFS_COMP_BASE + 36), "problem in xlator " \
" loading."
#define glusterfsd_msg_37 (GLFS_COMP_BASE + 37), "failed to get dict value"
+#define glusterfsd_msg_38 (GLFS_COMP_BASE + 38), "Not processing brick-op no."\
+ " %d since volume graph is not yet active."
/*------------*/
#define glfs_msg_end_x GLFS_MSGID_END, "Invalid: End of messages"
diff --git a/glusterfsd/src/glusterfsd-mgmt.c b/glusterfsd/src/glusterfsd-mgmt.c
index 665b62c..2167241 100644
--- a/glusterfsd/src/glusterfsd-mgmt.c
+++ b/glusterfsd/src/glusterfsd-mgmt.c
@@ -790,6 +790,12 @@ glusterfs_handle_translator_op (rpcsvc_request_t *req)
ctx = glusterfsd_ctx;
active = ctx->active;
+ if (!active) {
+ ret = -1;
+ gf_msg (this->name, GF_LOG_ERROR, EAGAIN, glusterfsd_msg_38,
+ xlator_req.op);
+ goto out;
+ }
any = active->first;
input = dict_new ();
ret = dict_unserialize (xlator_req.input.input_val,
--
1.8.3.1
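
To see the crash window concretely, here is a minimal standalone C sketch of the guard this patch adds. The types are hypothetical simplified stand-ins; the real handler uses glusterfs_ctx_t, logs via gf_msg() with glusterfsd_msg_38, and jumps to its out label:

#include <stdio.h>

struct xlator;
typedef struct { struct xlator *first; } glusterfs_graph_t;
typedef struct { glusterfs_graph_t *active; } glusterfs_ctx_t;

static int
handle_translator_op (glusterfs_ctx_t *ctx, int op)
{
        if (!ctx->active) {
                /* graph not yet activated: refuse the op instead of
                 * dereferencing a NULL graph and crashing */
                fprintf (stderr, "Not processing brick-op no. %d since "
                         "volume graph is not yet active\n", op);
                return -1;
        }
        /* ... normal processing would start from ctx->active->first ... */
        return 0;
}

int
main (void)
{
        glusterfs_ctx_t ctx = { .active = NULL };   /* graph not yet up */
        return handle_translator_op (&ctx, 1) == -1 ? 0 : 1;
}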

@@ -0,0 +1,329 @@
From b6aa09f8718c5ab91ae4e99abb6567fb1601cdbb Mon Sep 17 00:00:00 2001
From: Atin Mukherjee <amukherj@redhat.com>
Date: Mon, 2 Jul 2018 20:48:22 +0530
Subject: [PATCH 307/325] glusterd: Introduce daemon-log-level cluster wide
option
This option, applicable to the node-level daemons, can be very helpful in
controlling the log level of these services. Please note that any daemon
started prior to setting this option to a value other than INFO will need
to be restarted for the change to take effect.
> upstream patch : https://review.gluster.org/#/c/20442/
Please note there's a difference in the downstream delta: the op-version
against this option is already tagged as 3_11_2 in RHGS 3.3.1 and hence
the same is retained, which is why this patch carries the label below.
Label: DOWNSTREAM ONLY
>Change-Id: I7f6d2620bab2b094c737f5cc816bc093e9c9c4c9
>fixes: bz#1597473
>Signed-off-by: Atin Mukherjee <amukherj@redhat.com>
Change-Id: I7f6d2620bab2b094c737f5cc816bc093e9c9c4c9
BUG: 1597511
Signed-off-by: Atin Mukherjee <amukherj@redhat.com>
Reviewed-on: https://code.engineering.redhat.com/gerrit/143137
Tested-by: RHGS Build Bot <nigelb@redhat.com>
Reviewed-by: Sanju Rakonde <srakonde@redhat.com>
---
libglusterfs/src/globals.h | 3 +
tests/bugs/glusterd/daemon-log-level-option.t | 93 +++++++++++++++++++++++++
xlators/mgmt/glusterd/src/glusterd-handler.c | 1 +
xlators/mgmt/glusterd/src/glusterd-messages.h | 10 ++-
xlators/mgmt/glusterd/src/glusterd-op-sm.c | 51 ++++++++++++++
xlators/mgmt/glusterd/src/glusterd-svc-mgmt.c | 8 +++
xlators/mgmt/glusterd/src/glusterd-volume-set.c | 6 ++
xlators/mgmt/glusterd/src/glusterd.h | 1 +
8 files changed, 172 insertions(+), 1 deletion(-)
create mode 100644 tests/bugs/glusterd/daemon-log-level-option.t
diff --git a/libglusterfs/src/globals.h b/libglusterfs/src/globals.h
index 8fd3318..39d9716 100644
--- a/libglusterfs/src/globals.h
+++ b/libglusterfs/src/globals.h
@@ -109,6 +109,9 @@
#define GD_OP_VERSION_3_13_2 31302 /* Op-version for GlusterFS 3.13.2 */
+/* Downstream only change */
+#define GD_OP_VERSION_3_11_2 31102 /* Op-version for RHGS 3.3.1-async */
+
#include "xlator.h"
/* THIS */
diff --git a/tests/bugs/glusterd/daemon-log-level-option.t b/tests/bugs/glusterd/daemon-log-level-option.t
new file mode 100644
index 0000000..66e55e3
--- /dev/null
+++ b/tests/bugs/glusterd/daemon-log-level-option.t
@@ -0,0 +1,93 @@
+#!/bin/bash
+
+. $(dirname $0)/../../include.rc
+
+function Info_messages_count() {
+ local shd_log=$1
+ cat $shd_log | grep " I " | wc -l
+}
+
+function Warning_messages_count() {
+ local shd_log=$1
+ cat $shd_log | grep " W " | wc -l
+}
+
+function Debug_messages_count() {
+ local shd_log=$1
+ cat $shd_log | grep " D " | wc -l
+}
+
+function Trace_messages_count() {
+ local shd_log=$1
+ cat $shd_log | grep " T " | wc -l
+}
+
+cleanup;
+
+# Basic checks
+TEST glusterd
+TEST pidof glusterd
+TEST $CLI volume info
+
+# set cluster.daemon-log-level option to DEBUG
+TEST $CLI volume set all cluster.daemon-log-level DEBUG
+
+#Create a 3X2 distributed-replicate volume
+TEST $CLI volume create $V0 replica 2 $H0:$B0/${V0}{1..6};
+TEST $CLI volume start $V0
+
+# log should not have any trace messages
+EXPECT 0 Trace_messages_count "/var/log/glusterfs/glustershd.log"
+
+# stop the volume and remove glustershd log
+TEST $CLI volume stop $V0
+rm -f /var/log/glusterfs/glustershd.log
+
+# set cluster.daemon-log-level option to INFO and start the volume
+TEST $CLI volume set all cluster.daemon-log-level INFO
+TEST $CLI volume start $V0
+
+# log should not have any debug messages
+EXPECT 0 Debug_messages_count "/var/log/glusterfs/glustershd.log"
+
+# log should not have any trace messages
+EXPECT 0 Trace_messages_count "/var/log/glusterfs/glustershd.log"
+
+# stop the volume and remove glustershd log
+TEST $CLI volume stop $V0
+rm -f /var/log/glusterfs/glustershd.log
+
+# set cluster.daemon-log-level option to WARNING and start the volume
+TEST $CLI volume set all cluster.daemon-log-level WARNING
+TEST $CLI volume start $V0
+
+# log should not have any info messages
+EXPECT 0 Info_messages_count "/var/log/glusterfs/glustershd.log"
+
+# log should not have any debug messages
+EXPECT 0 Debug_messages_count "/var/log/glusterfs/glustershd.log"
+
+# log should not have any trace messages
+EXPECT 0 Trace_messages_count "/var/log/glusterfs/glustershd.log"
+
+# stop the volume and remove glustershd log
+TEST $CLI volume stop $V0
+rm -f /var/log/glusterfs/glustershd.log
+
+# set cluster.daemon-log-level option to ERROR and start the volume
+TEST $CLI volume set all cluster.daemon-log-level ERROR
+TEST $CLI volume start $V0
+
+# log should not have any info messages
+EXPECT 0 Info_messages_count "/var/log/glusterfs/glustershd.log"
+
+# log should not have any warning messages
+EXPECT 0 Warning_messages_count "/var/log/glusterfs/glustershd.log"
+
+# log should not have any debug messages
+EXPECT 0 Debug_messages_count "/var/log/glusterfs/glustershd.log"
+
+# log should not have any trace messages
+EXPECT 0 Trace_messages_count "/var/log/glusterfs/glustershd.log"
+
+cleanup
diff --git a/xlators/mgmt/glusterd/src/glusterd-handler.c b/xlators/mgmt/glusterd/src/glusterd-handler.c
index c072b05..c0c3e25 100644
--- a/xlators/mgmt/glusterd/src/glusterd-handler.c
+++ b/xlators/mgmt/glusterd/src/glusterd-handler.c
@@ -4677,6 +4677,7 @@ gd_is_global_option (char *opt_key)
strcmp (opt_key, GLUSTERD_GLOBAL_OP_VERSION_KEY) == 0 ||
strcmp (opt_key, GLUSTERD_BRICK_MULTIPLEX_KEY) == 0 ||
strcmp (opt_key, GLUSTERD_LOCALTIME_LOGGING_KEY) == 0 ||
+ strcmp (opt_key, GLUSTERD_DAEMON_LOG_LEVEL_KEY) == 0 ||
strcmp (opt_key, GLUSTERD_MAX_OP_VERSION_KEY) == 0);
out:
diff --git a/xlators/mgmt/glusterd/src/glusterd-messages.h b/xlators/mgmt/glusterd/src/glusterd-messages.h
index 4ccf299..64f7378 100644
--- a/xlators/mgmt/glusterd/src/glusterd-messages.h
+++ b/xlators/mgmt/glusterd/src/glusterd-messages.h
@@ -41,7 +41,7 @@
#define GLUSTERD_COMP_BASE GLFS_MSGID_GLUSTERD
-#define GLFS_NUM_MESSAGES 614
+#define GLFS_NUM_MESSAGES 615
#define GLFS_MSGID_END (GLUSTERD_COMP_BASE + GLFS_NUM_MESSAGES + 1)
/* Messaged with message IDs */
@@ -4984,6 +4984,14 @@
*/
#define GD_MSG_MANAGER_FUNCTION_FAILED (GLUSTERD_COMP_BASE + 614)
+/*!
+ * @messageid
+ * @diagnosis
+ * @recommendedaction
+ *
+ */
+#define GD_MSG_DAEMON_LOG_LEVEL_VOL_OPT_VALIDATE_FAIL (GLUSTERD_COMP_BASE + 615)
+
/*------------*/
#define glfs_msg_end_x GLFS_MSGID_END, "Invalid: End of messages"
diff --git a/xlators/mgmt/glusterd/src/glusterd-op-sm.c b/xlators/mgmt/glusterd/src/glusterd-op-sm.c
index 7e959a0..d022532 100644
--- a/xlators/mgmt/glusterd/src/glusterd-op-sm.c
+++ b/xlators/mgmt/glusterd/src/glusterd-op-sm.c
@@ -86,6 +86,7 @@ glusterd_all_vol_opts valid_all_vol_opts[] = {
* dynamic value depending on the memory specifications per node */
{ GLUSTERD_BRICKMUX_LIMIT_KEY, "0"},
/*{ GLUSTERD_LOCALTIME_LOGGING_KEY, "disable"},*/
+ { GLUSTERD_DAEMON_LOG_LEVEL_KEY, "INFO"},
{ NULL },
};
@@ -928,6 +929,47 @@ out:
}
static int
+glusterd_validate_daemon_log_level (char *key, char *value, char *errstr)
+{
+ int32_t ret = -1;
+ xlator_t *this = NULL;
+ glusterd_conf_t *conf = NULL;
+
+ this = THIS;
+ GF_VALIDATE_OR_GOTO ("glusterd", this, out);
+
+ conf = this->private;
+ GF_VALIDATE_OR_GOTO (this->name, conf, out);
+
+ GF_VALIDATE_OR_GOTO (this->name, key, out);
+ GF_VALIDATE_OR_GOTO (this->name, value, out);
+ GF_VALIDATE_OR_GOTO (this->name, errstr, out);
+
+ ret = 0;
+
+ if (strcmp (key, GLUSTERD_DAEMON_LOG_LEVEL_KEY)) {
+ goto out;
+ }
+
+ if ((strcmp (value, "INFO")) &&
+ (strcmp (value, "WARNING")) &&
+ (strcmp (value, "DEBUG")) &&
+ (strcmp (value, "TRACE")) &&
+ (strcmp (value, "ERROR"))) {
+ snprintf (errstr, PATH_MAX,
+ "Invalid option(%s). Valid options "
+ "are 'INFO' or 'WARNING' or 'ERROR' or 'DEBUG' or "
+ " 'TRACE'", value);
+ gf_msg (this->name, GF_LOG_ERROR, EINVAL,
+ GD_MSG_INVALID_ENTRY, "%s", errstr);
+ ret = -1;
+ }
+
+out:
+ return ret;
+}
+
+static int
glusterd_op_stage_set_volume (dict_t *dict, char **op_errstr)
{
int ret = -1;
@@ -1326,6 +1368,15 @@ glusterd_op_stage_set_volume (dict_t *dict, char **op_errstr)
goto out;
}
+ ret = glusterd_validate_daemon_log_level (key, value, errstr);
+ if (ret) {
+ gf_msg (this->name, GF_LOG_ERROR, 0,
+ GD_MSG_DAEMON_LOG_LEVEL_VOL_OPT_VALIDATE_FAIL,
+ "Failed to validate daemon-log-level volume "
+ "options");
+ goto out;
+ }
+
if (volinfo) {
ret = glusterd_volinfo_get (volinfo,
VKEY_FEATURES_TRASH, &val_dup);
diff --git a/xlators/mgmt/glusterd/src/glusterd-svc-mgmt.c b/xlators/mgmt/glusterd/src/glusterd-svc-mgmt.c
index ba948b4..ebb288c 100644
--- a/xlators/mgmt/glusterd/src/glusterd-svc-mgmt.c
+++ b/xlators/mgmt/glusterd/src/glusterd-svc-mgmt.c
@@ -151,6 +151,8 @@ glusterd_svc_start (glusterd_svc_t *svc, int flags, dict_t *cmdline)
xlator_t *this = NULL;
char valgrind_logfile[PATH_MAX] = {0};
char *localtime_logging = NULL;
+ char *log_level = NULL;
+ char daemon_log_level[30] = {0};
this = THIS;
GF_ASSERT (this);
@@ -196,6 +198,12 @@ glusterd_svc_start (glusterd_svc_t *svc, int flags, dict_t *cmdline)
if (strcmp (localtime_logging, "enable") == 0)
runner_add_arg (&runner, "--localtime-logging");
}
+ if (dict_get_str (priv->opts, GLUSTERD_DAEMON_LOG_LEVEL_KEY,
+ &log_level) == 0) {
+ snprintf (daemon_log_level, 30, "--log-level=%s", log_level);
+ runner_add_arg (&runner, daemon_log_level);
+ }
+
if (cmdline)
dict_foreach (cmdline, svc_add_args, (void *) &runner);
diff --git a/xlators/mgmt/glusterd/src/glusterd-volume-set.c b/xlators/mgmt/glusterd/src/glusterd-volume-set.c
index b9da961..8cc756a 100644
--- a/xlators/mgmt/glusterd/src/glusterd-volume-set.c
+++ b/xlators/mgmt/glusterd/src/glusterd-volume-set.c
@@ -3573,6 +3573,12 @@ struct volopt_map_entry glusterd_volopt_map[] = {
.op_version = GD_OP_VERSION_3_12_0,
.validate_fn = validate_boolean
},*/
+ { .key = GLUSTERD_DAEMON_LOG_LEVEL_KEY,
+ .voltype = "mgmt/glusterd",
+ .type = GLOBAL_NO_DOC,
+ .value = "INFO",
+ .op_version = GD_OP_VERSION_3_11_2
+ },
{ .key = "disperse.parallel-writes",
.voltype = "cluster/disperse",
.type = NO_DOC,
diff --git a/xlators/mgmt/glusterd/src/glusterd.h b/xlators/mgmt/glusterd/src/glusterd.h
index b0656e6..4ec609f 100644
--- a/xlators/mgmt/glusterd/src/glusterd.h
+++ b/xlators/mgmt/glusterd/src/glusterd.h
@@ -56,6 +56,7 @@
#define GLUSTERD_BRICK_MULTIPLEX_KEY "cluster.brick-multiplex"
#define GLUSTERD_BRICKMUX_LIMIT_KEY "cluster.max-bricks-per-process"
#define GLUSTERD_LOCALTIME_LOGGING_KEY "cluster.localtime-logging"
+#define GLUSTERD_DAEMON_LOG_LEVEL_KEY "cluster.daemon-log-level"
#define GANESHA_HA_CONF CONFDIR "/ganesha-ha.conf"
#define GANESHA_EXPORT_DIRECTORY CONFDIR"/exports"
--
1.8.3.1
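
The validation the patch adds boils down to an allow-list check. A minimal standalone sketch, assuming plain C strings in place of the glusterd dict/key plumbing and error-string handling:

#include <stdio.h>
#include <string.h>

static int
validate_daemon_log_level (const char *value)
{
        static const char *levels[] =
                { "INFO", "WARNING", "ERROR", "DEBUG", "TRACE" };
        size_t i;

        for (i = 0; i < sizeof (levels) / sizeof (levels[0]); i++)
                if (strcmp (value, levels[i]) == 0)
                        return 0;
        return -1;   /* invalid option, mirrors the patch's error path */
}

int
main (void)
{
        printf ("%d\n", validate_daemon_log_level ("DEBUG"));    /* 0  */
        printf ("%d\n", validate_daemon_log_level ("VERBOSE"));  /* -1 */
        return 0;
}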

@@ -0,0 +1,58 @@
From 4fb594e8d54bad70ddd1e195af422bbd0b9fd4a8 Mon Sep 17 00:00:00 2001
From: Sanju Rakonde <srakonde@redhat.com>
Date: Wed, 4 Jul 2018 14:45:51 +0530
Subject: [PATCH 308/325] glusterd: Fix glusterd crash
Problem: The gluster get-state command crashes the glusterd process when
a geo-replication session is configured.
Cause: The crash happens due to a double free. In
glusterd_print_gsync_status_by_vol we call dict_unref(), which
frees all the keys and values in the dictionary. Before calling
dict_unref(), glusterd_print_gsync_status_by_vol calls
glusterd_print_gsync_status(), which already frees values in the
dictionary; when dict_unref() is then called, it tries to free
values that have already been freed.
Solution: Remove the code that frees those values in the
glusterd_print_gsync_status function.
>Fixes: bz#1598345
>Change-Id: Id3d8aae109f377b462bbbdb96a8e3c5f6b0be752
>Signed-off-by: Sanju Rakonde <srakonde@redhat.com>
upstream patch: https://review.gluster.org/#/c/20461/
Change-Id: Id3d8aae109f377b462bbbdb96a8e3c5f6b0be752
BUG: 1578716
Signed-off-by: Sanju Rakonde <srakonde@redhat.com>
Reviewed-on: https://code.engineering.redhat.com/gerrit/143323
Tested-by: RHGS Build Bot <nigelb@redhat.com>
Reviewed-by: Sunil Kumar Heggodu Gopala Acharya <sheggodu@redhat.com>
---
xlators/mgmt/glusterd/src/glusterd-handler.c | 9 ---------
1 file changed, 9 deletions(-)
diff --git a/xlators/mgmt/glusterd/src/glusterd-handler.c b/xlators/mgmt/glusterd/src/glusterd-handler.c
index c0c3e25..395b342 100644
--- a/xlators/mgmt/glusterd/src/glusterd-handler.c
+++ b/xlators/mgmt/glusterd/src/glusterd-handler.c
@@ -5155,15 +5155,6 @@ glusterd_print_gsync_status (FILE *fp, dict_t *gsync_dict)
volcount, i+1, get_struct_variable(15, status_vals[i]));
}
out:
- for (i = 0; i < gsync_count; i++) {
- if (status_vals[i]) {
- GF_FREE (status_vals[i]);
- }
- }
-
- if (status_vals)
- GF_FREE (status_vals);
-
return ret;
}
--
1.8.3.1
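
The bug class here is a plain double free through split ownership. A minimal sketch with a hypothetical toy dict that, like gluster's dict_t, owns and frees its values on the final unref:

#include <stdlib.h>

struct dict { void **vals; int n; };   /* toy stand-in for dict_t */

static void
dict_unref (struct dict *d)
{
        int i;
        for (i = 0; i < d->n; i++)
                free (d->vals[i]);   /* the owner frees every value ... */
        free (d->vals);
        free (d);
}

static void
print_status (struct dict *d)
{
        /* Before the patch, the equivalent of this function also freed
         * d->vals[i]; dict_unref() then freed the same pointers again.
         * The fix: only read the values here, never free them. */
        (void) d;
}

int
main (void)
{
        struct dict *d = calloc (1, sizeof (*d));
        d->n = 1;
        d->vals = calloc (1, sizeof (void *));
        d->vals[0] = malloc (8);
        print_status (d);   /* reads only */
        dict_unref (d);     /* the single owner frees exactly once */
        return 0;
}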

@@ -0,0 +1,114 @@
From 03bda9edb70d855cf602da06fde02c6131db3287 Mon Sep 17 00:00:00 2001
From: Atin Mukherjee <amukherj@redhat.com>
Date: Thu, 28 Jun 2018 10:42:56 +0530
Subject: [PATCH 309/325] extras/group : add database workload profile
Running DB workload patterns with all perf xlators enabled by default has
resulted in some inconsistency issues. Based on internal testing done by
Elko Kuric (ekuric@redhat.com), a certain set of perf xlators needs to be
turned off for Gluster to support these types of workloads.
The proposal is to leverage the group profile infrastructure to group all
those tunables in one place, so that users just need to apply the profile
to the volume to use it for the database workload.
Credits : Elko Kuric (ekuric@redhat.com)
> upstream patch : https://review.gluster.org/#/c/20414/
>Change-Id: I8a50e915278ad4085b9aaa3f160a33af7c0b0444
>fixes: bz#1596020
>Signed-off-by: Atin Mukherjee <amukherj@redhat.com>
>Change-Id: I8a50e915278ad4085b9aaa3f160a33af7c0b0444
>BUG: 1596076
>Signed-off-by: Atin Mukherjee <amukherj@redhat.com>
>Reviewed-on: https://code.engineering.redhat.com/gerrit/142750
>Tested-by: RHGS Build Bot <nigelb@redhat.com>
>Reviewed-by: Milind Changire <mchangir@redhat.com>
>Reviewed-by: Sunil Kumar Heggodu Gopala Acharya <sheggodu@redhat.com>
Change-Id: I8a50e915278ad4085b9aaa3f160a33af7c0b0444
BUG: 1597506
Signed-off-by: Atin Mukherjee <amukherj@redhat.com>
Reviewed-on: https://code.engineering.redhat.com/gerrit/143320
Tested-by: RHGS Build Bot <nigelb@redhat.com>
Reviewed-by: Sunil Kumar Heggodu Gopala Acharya <sheggodu@redhat.com>
---
.testignore | 1 +
extras/Makefile.am | 4 +++-
extras/group-db-workload | 8 ++++++++
glusterfs.spec.in | 4 ++++
4 files changed, 16 insertions(+), 1 deletion(-)
create mode 100644 extras/group-db-workload
diff --git a/.testignore b/.testignore
index 72c0b38..4a72bc4 100644
--- a/.testignore
+++ b/.testignore
@@ -33,6 +33,7 @@ extras/command-completion/README
extras/create_new_xlator/README.md
extras/glusterfs.vim
extras/group-gluster-block
+extras/group-db-workload
extras/group-metadata-cache
extras/group-nl-cache
extras/group-virt.example
diff --git a/extras/Makefile.am b/extras/Makefile.am
index d9572ac..7b791af 100644
--- a/extras/Makefile.am
+++ b/extras/Makefile.am
@@ -12,7 +12,7 @@ SUBDIRS = init.d systemd benchmarking hook-scripts $(OCF_SUBDIR) LinuxRPM \
confdir = $(sysconfdir)/glusterfs
conf_DATA = glusterfs-logrotate gluster-rsyslog-7.2.conf gluster-rsyslog-5.8.conf \
- logger.conf.example glusterfs-georep-logrotate group-virt.example group-metadata-cache group-gluster-block group-nl-cache
+ logger.conf.example glusterfs-georep-logrotate group-virt.example group-metadata-cache group-gluster-block group-nl-cache group-db-workload
voldir = $(sysconfdir)/glusterfs
vol_DATA = glusterd.vol
@@ -47,3 +47,5 @@ install-data-local:
$(DESTDIR)$(GLUSTERD_WORKDIR)/groups/gluster-block
$(INSTALL_DATA) $(top_srcdir)/extras/group-nl-cache \
$(DESTDIR)$(GLUSTERD_WORKDIR)/groups/nl-cache
+ $(INSTALL_DATA) $(top_srcdir)/extras/group-db-workload \
+ $(DESTDIR)$(GLUSTERD_WORKDIR)/groups/db-workload
diff --git a/extras/group-db-workload b/extras/group-db-workload
new file mode 100644
index 0000000..c9caf21
--- /dev/null
+++ b/extras/group-db-workload
@@ -0,0 +1,8 @@
+performance.open-behind=off
+performance.write-behind=off
+performance.stat-prefetch=off
+performance.quick-read=off
+performance.strict-o-direct=on
+performance.read-ahead=off
+performance.io-cache=off
+performance.readdir-ahead=off
diff --git a/glusterfs.spec.in b/glusterfs.spec.in
index 36b465a..c3f5748 100644
--- a/glusterfs.spec.in
+++ b/glusterfs.spec.in
@@ -1513,6 +1513,7 @@ exit 0
%attr(0644,-,-) %{_sharedstatedir}/glusterd/groups/virt
%attr(0644,-,-) %{_sharedstatedir}/glusterd/groups/metadata-cache
%attr(0644,-,-) %{_sharedstatedir}/glusterd/groups/gluster-block
+ %attr(0644,-,-) %{_sharedstatedir}/glusterd/groups/db-workload
%attr(0644,-,-) %{_sharedstatedir}/glusterd/groups/nl-cache
%dir %attr(0755,-,-) %{_sharedstatedir}/glusterd/glusterfind
%dir %attr(0755,-,-) %{_sharedstatedir}/glusterd/glusterfind/.keys
@@ -2160,6 +2161,9 @@ fi
%endif
%changelog
+* Fri Jul 6 2018 Atin Mukherjee <amukherj@redhat.com>
+- Added db group profile (#1597506)
+
* Mon Apr 23 2018 Milind Changire <mchangir@redhat.com>
- make RHGS release number available in /usr/share/glusterfs/release (#1570514)
--
1.8.3.1

@@ -0,0 +1,50 @@
From 887fff825a546712bccd78b728c8aba66d5b1504 Mon Sep 17 00:00:00 2001
From: Pranith Kumar K <pkarampu@redhat.com>
Date: Tue, 3 Jul 2018 20:38:23 +0530
Subject: [PATCH 310/325] cluster/afr: Make sure lk-owner is assigned at the
time of lock
Upstream patch: https://review.gluster.org/20455
Problem:
In the new eager-lock implementation, lk-owner is assigned after the
'local' is added to the eager-lock list, so the lock can be sent even
before lk-owner is assigned.
Fix:
Make sure to assign lk-owner before adding the local to the eager-lock list.
BUG: 1597654
Change-Id: I26d1b7bcf3e8b22531f1dc0b952cae2d92889ef2
Signed-off-by: Pranith Kumar K <pkarampu@redhat.com>
Reviewed-on: https://code.engineering.redhat.com/gerrit/143176
Tested-by: RHGS Build Bot <nigelb@redhat.com>
Reviewed-by: Sunil Kumar Heggodu Gopala Acharya <sheggodu@redhat.com>
---
xlators/cluster/afr/src/afr-transaction.c | 3 +--
1 file changed, 1 insertion(+), 2 deletions(-)
diff --git a/xlators/cluster/afr/src/afr-transaction.c b/xlators/cluster/afr/src/afr-transaction.c
index ff07319..5b18f63 100644
--- a/xlators/cluster/afr/src/afr-transaction.c
+++ b/xlators/cluster/afr/src/afr-transaction.c
@@ -2222,6 +2222,7 @@ __afr_eager_lock_handle (afr_local_t *local, gf_boolean_t *take_lock,
if (local->fd && !afr_are_multiple_fds_opened (local, this)) {
local->transaction.eager_lock_on = _gf_true;
+ afr_set_lk_owner (local->transaction.frame, this, local->inode);
}
lock = &local->inode_ctx->lock[local->transaction.type];
@@ -2325,8 +2326,6 @@ lock_phase:
if (!local->transaction.eager_lock_on) {
afr_set_lk_owner (local->transaction.frame, this,
local->transaction.frame->root);
- } else {
- afr_set_lk_owner (local->transaction.frame, this, local->inode);
}
--
1.8.3.1
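
The ordering rule behind the fix is initialize-before-publish: once 'local' is on the shared list, another thread may issue the lock. A minimal sketch with hypothetical simplified types (the real list is protected by the afr inode-ctx lock, not a global mutex):

#include <pthread.h>

struct local {
        unsigned long  lk_owner;   /* must be valid before publishing */
        struct local  *next;
};

static pthread_mutex_t list_lock = PTHREAD_MUTEX_INITIALIZER;
static struct local   *eager_lock_list;

static void
enqueue_for_eager_lock (struct local *l, unsigned long owner)
{
        l->lk_owner = owner;           /* 1. initialize first ...         */
        pthread_mutex_lock (&list_lock);
        l->next = eager_lock_list;     /* 2. ... then publish to the list */
        eager_lock_list = l;
        pthread_mutex_unlock (&list_lock);
}

int
main (void)
{
        struct local l = { 0, 0 };
        enqueue_for_eager_lock (&l, 42);
        return 0;
}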

@@ -0,0 +1,62 @@
From f5326dd5cfd3c2fae01bc62aa6e0725b501faa3a Mon Sep 17 00:00:00 2001
From: Atin Mukherjee <amukherj@redhat.com>
Date: Sun, 1 Apr 2018 22:10:30 +0530
Subject: [PATCH 311/325] glusterd: show brick online after port registration
Upstream-patch: https://review.gluster.org/19804
The gluster-block project needs a dependency check to see if all the bricks
are online before bringing up the relevant gluster-block services. While
the patch https://review.gluster.org/#/c/19785/ attempts to write that
script, a brick should be marked as online only when pmap_signin has
completed.
While this is perfectly fine without brick multiplexing, with brick
multiplexing this patch still doesn't eliminate the race completely, as
the attach_req call is asynchronous and glusterd immediately marks the
port as registered.
>Fixes: bz#1563273
BUG: 1598356
Change-Id: I81db54b88f7315e1b24e0234beebe00de6429f9d
Signed-off-by: Atin Mukherjee <amukherj@redhat.com>
Reviewed-on: https://code.engineering.redhat.com/gerrit/143591
Tested-by: RHGS Build Bot <nigelb@redhat.com>
Reviewed-by: Sunil Kumar Heggodu Gopala Acharya <sheggodu@redhat.com>
---
xlators/mgmt/glusterd/src/glusterd-utils.c | 5 +++--
1 file changed, 3 insertions(+), 2 deletions(-)
diff --git a/xlators/mgmt/glusterd/src/glusterd-utils.c b/xlators/mgmt/glusterd/src/glusterd-utils.c
index 5e9213c..e08c053 100644
--- a/xlators/mgmt/glusterd/src/glusterd-utils.c
+++ b/xlators/mgmt/glusterd/src/glusterd-utils.c
@@ -5967,6 +5967,7 @@ glusterd_brick_start (glusterd_volinfo_t *volinfo,
(void) pmap_registry_bind (this,
brickinfo->port, brickinfo->path,
GF_PMAP_PORT_BRICKSERVER, NULL);
+ brickinfo->port_registered = _gf_true;
/*
* This will unfortunately result in a separate RPC
* connection per brick, even though they're all in
@@ -5976,7 +5977,6 @@ glusterd_brick_start (glusterd_volinfo_t *volinfo,
* TBD: re-use RPC connection across bricks
*/
if (is_brick_mx_enabled ()) {
- brickinfo->port_registered = _gf_true;
ret = glusterd_get_sock_from_brick_pid (pid, socketpath,
sizeof(socketpath));
if (ret) {
@@ -7083,7 +7083,8 @@ glusterd_add_brick_to_dict (glusterd_volinfo_t *volinfo,
GLUSTERD_GET_BRICK_PIDFILE (pidfile, volinfo, brickinfo, priv);
if (glusterd_is_brick_started (brickinfo)) {
- if (gf_is_service_running (pidfile, &pid)) {
+ if (gf_is_service_running (pidfile, &pid) &&
+ brickinfo->port_registered) {
brick_online = _gf_true;
} else {
pid = -1;
--
1.8.3.1
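
The resulting online condition is simply a conjunction of the two checks. A minimal sketch with a hypothetical brickinfo stand-in:

#include <stdbool.h>
#include <stdio.h>

struct brickinfo { bool started; bool port_registered; };

static bool
brick_online (const struct brickinfo *b)
{
        /* before the patch only the running check was made, so a running
         * but not-yet-signed-in brick was reported online */
        return b->started && b->port_registered;
}

int
main (void)
{
        struct brickinfo b = { true, false };   /* running, not signed in */
        printf ("online: %d\n", brick_online (&b));   /* prints 0 */
        b.port_registered = true;
        printf ("online: %d\n", brick_online (&b));   /* prints 1 */
        return 0;
}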

@@ -0,0 +1,155 @@
From df907fc49b6ecddd20fa06558c36e779521e85f3 Mon Sep 17 00:00:00 2001
From: Pranith Kumar K <pkarampu@redhat.com>
Date: Tue, 3 Jul 2018 14:14:59 +0530
Subject: [PATCH 312/325] glusterd: show brick online after port registration
even in brick-mux
Upstream-patch: https://review.gluster.org/20451
Problem:
With brick-mux, glusterd marks bricks as online even before the brick
attach is complete. This can lead to a race where scripts that check
whether the bricks are online assume a brick is online before it is
completely online.
Fix:
Wait for the callback from the brick before marking the port
as registered so that volume status will show the correct status
of the brick.
>fixes bz#1597568
BUG: 1598356
Change-Id: Icd3dc62506af0cf75195e96746695db823312051
Signed-off-by: Pranith Kumar K <pkarampu@redhat.com>
Reviewed-on: https://code.engineering.redhat.com/gerrit/143592
Tested-by: RHGS Build Bot <nigelb@redhat.com>
Reviewed-by: Sunil Kumar Heggodu Gopala Acharya <sheggodu@redhat.com>
---
xlators/mgmt/glusterd/src/glusterd-snapshot.c | 2 +-
xlators/mgmt/glusterd/src/glusterd-utils.c | 36 +++++++++++++++++++++------
xlators/mgmt/glusterd/src/glusterd-utils.h | 3 ++-
3 files changed, 31 insertions(+), 10 deletions(-)
diff --git a/xlators/mgmt/glusterd/src/glusterd-snapshot.c b/xlators/mgmt/glusterd/src/glusterd-snapshot.c
index 5bdf27f..84335ef 100644
--- a/xlators/mgmt/glusterd/src/glusterd-snapshot.c
+++ b/xlators/mgmt/glusterd/src/glusterd-snapshot.c
@@ -2844,7 +2844,7 @@ glusterd_do_lvm_snapshot_remove (glusterd_volinfo_t *snap_vol,
GLUSTERD_GET_BRICK_PIDFILE (pidfile, snap_vol, brickinfo, priv);
if (gf_is_service_running (pidfile, &pid)) {
(void) send_attach_req (this, brickinfo->rpc,
- brickinfo->path,
+ brickinfo->path, NULL,
GLUSTERD_BRICK_TERMINATE);
brickinfo->status = GF_BRICK_STOPPED;
}
diff --git a/xlators/mgmt/glusterd/src/glusterd-utils.c b/xlators/mgmt/glusterd/src/glusterd-utils.c
index e08c053..f62c917 100644
--- a/xlators/mgmt/glusterd/src/glusterd-utils.c
+++ b/xlators/mgmt/glusterd/src/glusterd-utils.c
@@ -93,9 +93,6 @@
#define NLMV4_VERSION 4
#define NLMV1_VERSION 1
-int
-send_attach_req (xlator_t *this, struct rpc_clnt *rpc, char *path, int op);
-
gf_boolean_t
is_brick_mx_enabled (void)
{
@@ -2481,7 +2478,7 @@ glusterd_volume_stop_glusterfs (glusterd_volinfo_t *volinfo,
brickinfo->hostname, brickinfo->path);
(void) send_attach_req (this, brickinfo->rpc,
- brickinfo->path,
+ brickinfo->path, NULL,
GLUSTERD_BRICK_TERMINATE);
} else {
gf_msg_debug (this->name, 0, "About to stop glusterfsd"
@@ -5403,8 +5400,27 @@ my_callback (struct rpc_req *req, struct iovec *iov, int count, void *v_frame)
return 0;
}
+static int32_t
+attach_brick_callback (struct rpc_req *req, struct iovec *iov, int count,
+ void *v_frame)
+{
+ call_frame_t *frame = v_frame;
+ glusterd_conf_t *conf = frame->this->private;
+ glusterd_brickinfo_t *brickinfo = frame->local;
+
+ frame->local = NULL;
+ brickinfo->port_registered = _gf_true;
+ synclock_lock (&conf->big_lock);
+ --(conf->blockers);
+ synclock_unlock (&conf->big_lock);
+
+ STACK_DESTROY (frame->root);
+ return 0;
+}
+
int
-send_attach_req (xlator_t *this, struct rpc_clnt *rpc, char *path, int op)
+send_attach_req (xlator_t *this, struct rpc_clnt *rpc, char *path,
+ glusterd_brickinfo_t *brickinfo, int op)
{
int ret = -1;
struct iobuf *iobuf = NULL;
@@ -5418,6 +5434,7 @@ send_attach_req (xlator_t *this, struct rpc_clnt *rpc, char *path, int op)
struct rpc_clnt_connection *conn;
glusterd_conf_t *conf = this->private;
extern struct rpc_clnt_program gd_brick_prog;
+ fop_cbk_fn_t cbkfn = my_callback;
if (!rpc) {
gf_log (this->name, GF_LOG_ERROR, "called with null rpc");
@@ -5475,10 +5492,14 @@ send_attach_req (xlator_t *this, struct rpc_clnt *rpc, char *path, int op)
iov.iov_len = ret;
+ if (op == GLUSTERD_BRICK_ATTACH) {
+ frame->local = brickinfo;
+ cbkfn = attach_brick_callback;
+ }
/* Send the msg */
++(conf->blockers);
ret = rpc_clnt_submit (rpc, &gd_brick_prog, op,
- my_callback, &iov, 1, NULL, 0, iobref,
+ cbkfn, &iov, 1, NULL, 0, iobref,
frame, NULL, 0, NULL, 0, NULL);
return ret;
@@ -5538,7 +5559,7 @@ attach_brick (xlator_t *this,
for (tries = 15; tries > 0; --tries) {
rpc = rpc_clnt_ref (other_brick->rpc);
if (rpc) {
- ret = send_attach_req (this, rpc, path,
+ ret = send_attach_req (this, rpc, path, brickinfo,
GLUSTERD_BRICK_ATTACH);
rpc_clnt_unref (rpc);
if (!ret) {
@@ -5558,7 +5579,6 @@ attach_brick (xlator_t *this,
brickinfo->status = GF_BRICK_STARTED;
brickinfo->rpc =
rpc_clnt_ref (other_brick->rpc);
- brickinfo->port_registered = _gf_true;
ret = glusterd_brick_process_add_brick (brickinfo,
volinfo);
if (ret) {
diff --git a/xlators/mgmt/glusterd/src/glusterd-utils.h b/xlators/mgmt/glusterd/src/glusterd-utils.h
index e69a779..4c9561e 100644
--- a/xlators/mgmt/glusterd/src/glusterd-utils.h
+++ b/xlators/mgmt/glusterd/src/glusterd-utils.h
@@ -199,7 +199,8 @@ glusterd_volume_stop_glusterfs (glusterd_volinfo_t *volinfo,
gf_boolean_t del_brick);
int
-send_attach_req (xlator_t *this, struct rpc_clnt *rpc, char *path, int op);
+send_attach_req (xlator_t *this, struct rpc_clnt *rpc, char *path,
+ glusterd_brickinfo_t *brick, int op);
glusterd_volinfo_t *
glusterd_volinfo_ref (glusterd_volinfo_t *volinfo);
--
1.8.3.1
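
The mechanism is to carry the brick through the request context and flip the flag only in the reply handler. A minimal sketch with hypothetical stand-ins for frame->local and the RPC plumbing:

#include <stdbool.h>
#include <stddef.h>
#include <stdio.h>

struct brickinfo { bool port_registered; };
struct frame     { void *local; };   /* request context, like frame->local */

static int
attach_brick_callback (struct frame *frame)
{
        struct brickinfo *b = frame->local;
        frame->local = NULL;
        b->port_registered = true;   /* only set once the reply arrives */
        return 0;
}

static int
send_attach_req (struct frame *frame, struct brickinfo *b)
{
        frame->local = b;   /* carry the brick; do NOT mark it here */
        /* ... submit the RPC; the transport later invokes the callback ... */
        return 0;
}

int
main (void)
{
        struct brickinfo b = { false };
        struct frame     f = { NULL };
        send_attach_req (&f, &b);
        attach_brick_callback (&f);          /* simulated RPC reply */
        printf ("%d\n", b.port_registered);  /* 1 */
        return 0;
}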

@@ -0,0 +1,292 @@
From 9a95de08eb49f10c0a342099826d5e4c445749aa Mon Sep 17 00:00:00 2001
From: Mohit Agrawal <moagrawa@redhat.com>
Date: Thu, 31 May 2018 12:29:35 +0530
Subject: [PATCH 313/325] dht: Inconsistent permission for directories after
brick stop/start
Problem: Directory access permissions are inconsistent after bringing
back the down sub-volumes. For directories, dht_setattr first winds a
call on the MDS and, once that call is finished, winds a call on the
non-MDS subvolumes. At revalidate time, dht compares only the uid/gid
with the stbuf uid/gid and, if either differs, sets a flag to heal them.
Solution: Add a condition in dht_revalidate_cbk to also compare
permissions and set the flag that triggers dht_dir_attr_heal.
> BUG: 1584517
> Change-Id: I3e039607148005015b5d93364536158380d4c5aa
> fixes: bz#1584517
> (cherry picked from commit e57cbae0bcc3d8649b869eda5ec20f3c6a6d34f0)
> (Reviewed on upstream link https://review.gluster.org/#/c/20108/)
BUG: 1582066
Change-Id: I985445521aeeddce52c0a56c20287e523aa3398b
Signed-off-by: Mohit Agrawal <moagrawa@redhat.com>
Reviewed-on: https://code.engineering.redhat.com/gerrit/143721
Tested-by: RHGS Build Bot <nigelb@redhat.com>
Reviewed-by: Sunil Kumar Heggodu Gopala Acharya <sheggodu@redhat.com>
---
tests/bugs/bug-1584517.t | 70 +++++++++++++++++++++++++++++
xlators/cluster/dht/src/dht-common.c | 81 ++++++++++++++++++++++++++++++----
xlators/cluster/dht/src/dht-common.h | 3 ++
xlators/cluster/dht/src/dht-selfheal.c | 2 +-
4 files changed, 147 insertions(+), 9 deletions(-)
create mode 100644 tests/bugs/bug-1584517.t
diff --git a/tests/bugs/bug-1584517.t b/tests/bugs/bug-1584517.t
new file mode 100644
index 0000000..7f48015
--- /dev/null
+++ b/tests/bugs/bug-1584517.t
@@ -0,0 +1,70 @@
+#!/bin/bash
+. $(dirname $0)/../include.rc
+. $(dirname $0)/../volume.rc
+. $(dirname $0)/../dht.rc
+cleanup;
+#This test case verifies attributes (uid/gid/perm) for the
+#directory are healed after stop/start brick. To verify the same
+#test case change attributes of the directory after down a DHT subvolume
+#and one AFR children. After start the volume with force and run lookup
+#operation attributes should be healed on started bricks at the backend.
+
+
+TEST glusterd
+TEST pidof glusterd
+TEST $CLI volume create $V0 replica 3 $H0:$B0/${V0}{0,1,2,3,4,5}
+TEST $CLI volume start $V0
+TEST useradd dev -M
+TEST groupadd QA
+
+TEST glusterfs --volfile-id=$V0 --volfile-server=$H0 $M0;
+
+TEST mkdir $M0/dironedown
+
+TEST kill_brick $V0 $H0 $B0/${V0}2
+EXPECT_WITHIN ${PROCESS_UP_TIMEOUT} "5" online_brick_count
+
+TEST kill_brick $V0 $H0 $B0/${V0}3
+EXPECT_WITHIN ${PROCESS_UP_TIMEOUT} "4" online_brick_count
+
+TEST kill_brick $V0 $H0 $B0/${V0}4
+EXPECT_WITHIN ${PROCESS_UP_TIMEOUT} "3" online_brick_count
+
+TEST kill_brick $V0 $H0 $B0/${V0}5
+EXPECT_WITHIN ${PROCESS_UP_TIMEOUT} "2" online_brick_count
+
+TEST chown dev $M0/dironedown
+TEST chgrp QA $M0/dironedown
+TEST chmod 777 $M0/dironedown
+
+#store the permissions for comparision
+permission_onedown=`ls -l $M0 | grep dironedown | awk '{print $1}'`
+
+TEST $CLI volume start $V0 force
+EXPECT_WITHIN ${PROCESS_UP_TIMEOUT} "6" online_brick_count
+
+TEST glusterfs --volfile-id=$V0 --volfile-server=$H0 $M0;
+
+#Run lookup two times to hit revalidate code path in dht
+# to heal user attr
+
+TEST ls $M0/dironedown
+
+#check attributes those were created post brick going down
+TEST brick_perm=`ls -l $B0/${V0}3 | grep dironedown | awk '{print $1}'`
+TEST echo $brick_perm
+TEST [ ${brick_perm} = ${permission_onedown} ]
+uid=`ls -l $B0/${V0}3 | grep dironedown | awk '{print $3}'`
+TEST echo $uid
+TEST [ $uid = dev ]
+gid=`ls -l $B0/${V0}3 | grep dironedown | awk '{print $4}'`
+TEST echo $gid
+TEST [ $gid = QA ]
+
+TEST umount $M0
+userdel --force dev
+groupdel QA
+
+cleanup
+exit
+
diff --git a/xlators/cluster/dht/src/dht-common.c b/xlators/cluster/dht/src/dht-common.c
index c6adce4..23049b6 100644
--- a/xlators/cluster/dht/src/dht-common.c
+++ b/xlators/cluster/dht/src/dht-common.c
@@ -1329,6 +1329,8 @@ dht_lookup_dir_cbk (call_frame_t *frame, void *cookie, xlator_t *this,
char gfid_local[GF_UUID_BUF_SIZE] = {0};
char gfid_node[GF_UUID_BUF_SIZE] = {0};
int32_t mds_xattr_val[1] = {0};
+ call_frame_t *copy = NULL;
+ dht_local_t *copy_local = NULL;
GF_VALIDATE_OR_GOTO ("dht", frame, out);
GF_VALIDATE_OR_GOTO ("dht", this, out);
@@ -1401,6 +1403,23 @@ dht_lookup_dir_cbk (call_frame_t *frame, void *cookie, xlator_t *this,
dht_aggregate_xattr (local->xattr, xattr);
}
+ if (dict_get (xattr, conf->mds_xattr_key)) {
+ local->mds_subvol = prev;
+ local->mds_stbuf.ia_gid = stbuf->ia_gid;
+ local->mds_stbuf.ia_uid = stbuf->ia_uid;
+ local->mds_stbuf.ia_prot = stbuf->ia_prot;
+ }
+
+ if (local->stbuf.ia_type != IA_INVAL) {
+ if (!__is_root_gfid (stbuf->ia_gfid) &&
+ ((local->stbuf.ia_gid != stbuf->ia_gid) ||
+ (local->stbuf.ia_uid != stbuf->ia_uid) ||
+ (is_permission_different (&local->stbuf.ia_prot,
+ &stbuf->ia_prot)))) {
+ local->need_attrheal = 1;
+ }
+ }
+
if (local->inode == NULL)
local->inode = inode_ref (inode);
@@ -1496,6 +1515,43 @@ unlock:
&local->postparent, 1);
}
+ if (local->need_attrheal) {
+ local->need_attrheal = 0;
+ if (!__is_root_gfid (inode->gfid)) {
+ gf_uuid_copy (local->gfid, local->mds_stbuf.ia_gfid);
+ local->stbuf.ia_gid = local->mds_stbuf.ia_gid;
+ local->stbuf.ia_uid = local->mds_stbuf.ia_uid;
+ local->stbuf.ia_prot = local->mds_stbuf.ia_prot;
+ }
+ copy = create_frame (this, this->ctx->pool);
+ if (copy) {
+ copy_local = dht_local_init (copy, &local->loc,
+ NULL, 0);
+ if (!copy_local) {
+ DHT_STACK_DESTROY (copy);
+ goto skip_attr_heal;
+ }
+ copy_local->stbuf = local->stbuf;
+ copy_local->mds_stbuf = local->mds_stbuf;
+ copy_local->mds_subvol = local->mds_subvol;
+ copy->local = copy_local;
+ FRAME_SU_DO (copy, dht_local_t);
+ ret = synctask_new (this->ctx->env,
+ dht_dir_attr_heal,
+ dht_dir_attr_heal_done,
+ copy, copy);
+ if (ret) {
+ gf_msg (this->name, GF_LOG_ERROR, ENOMEM,
+ DHT_MSG_DIR_ATTR_HEAL_FAILED,
+ "Synctask creation failed to heal attr "
+ "for path %s gfid %s ",
+ local->loc.path, local->gfid);
+ DHT_STACK_DESTROY (copy);
+ }
+ }
+ }
+
+skip_attr_heal:
DHT_STRIP_PHASE1_FLAGS (&local->stbuf);
dht_set_fixed_dir_stat (&local->postparent);
/* Delete mds xattr at the time of STACK UNWIND */
@@ -1516,7 +1572,7 @@ out:
return ret;
}
-int static
+int
is_permission_different (ia_prot_t *prot1, ia_prot_t *prot2)
{
if ((prot1->owner.read != prot2->owner.read) ||
@@ -1677,12 +1733,12 @@ dht_revalidate_cbk (call_frame_t *frame, void *cookie, xlator_t *this,
{
if ((local->stbuf.ia_gid != stbuf->ia_gid) ||
(local->stbuf.ia_uid != stbuf->ia_uid) ||
- (__is_root_gfid (stbuf->ia_gfid) &&
is_permission_different (&local->stbuf.ia_prot,
- &stbuf->ia_prot))) {
+ &stbuf->ia_prot)) {
local->need_selfheal = 1;
}
}
+
if (!dict_get (xattr, conf->mds_xattr_key)) {
gf_msg_debug (this->name, 0,
"internal xattr %s is not present"
@@ -1828,10 +1884,9 @@ out:
local->need_selfheal = 0;
if (!__is_root_gfid (inode->gfid)) {
gf_uuid_copy (local->gfid, local->mds_stbuf.ia_gfid);
- if (local->mds_stbuf.ia_gid || local->mds_stbuf.ia_uid) {
- local->stbuf.ia_gid = local->mds_stbuf.ia_gid;
- local->stbuf.ia_uid = local->mds_stbuf.ia_uid;
- }
+ local->stbuf.ia_gid = local->mds_stbuf.ia_gid;
+ local->stbuf.ia_uid = local->mds_stbuf.ia_uid;
+ local->stbuf.ia_prot = local->mds_stbuf.ia_prot;
} else {
gf_uuid_copy (local->gfid, local->stbuf.ia_gfid);
local->stbuf.ia_gid = local->prebuf.ia_gid;
@@ -1843,8 +1898,10 @@ out:
if (copy) {
copy_local = dht_local_init (copy, &local->loc,
NULL, 0);
- if (!copy_local)
+ if (!copy_local) {
+ DHT_STACK_DESTROY (copy);
goto cont;
+ }
copy_local->stbuf = local->stbuf;
copy_local->mds_stbuf = local->mds_stbuf;
copy_local->mds_subvol = local->mds_subvol;
@@ -1854,6 +1911,14 @@ out:
dht_dir_attr_heal,
dht_dir_attr_heal_done,
copy, copy);
+ if (ret) {
+ gf_msg (this->name, GF_LOG_ERROR, ENOMEM,
+ DHT_MSG_DIR_ATTR_HEAL_FAILED,
+ "Synctask creation failed to heal attr "
+ "for path %s gfid %s ",
+ local->loc.path, local->gfid);
+ DHT_STACK_DESTROY (copy);
+ }
}
}
cont:
diff --git a/xlators/cluster/dht/src/dht-common.h b/xlators/cluster/dht/src/dht-common.h
index a70342f..b40815c 100644
--- a/xlators/cluster/dht/src/dht-common.h
+++ b/xlators/cluster/dht/src/dht-common.h
@@ -298,6 +298,7 @@ struct dht_local {
xlator_t *mds_subvol; /* This is use for dir only */
char need_selfheal;
char need_xattr_heal;
+ char need_attrheal;
int file_count;
int dir_count;
call_frame_t *main_frame;
@@ -1491,4 +1492,6 @@ int
dht_selfheal_dir_setattr (call_frame_t *frame, loc_t *loc, struct iatt *stbuf,
int32_t valid, dht_layout_t *layout);
+int
+is_permission_different (ia_prot_t *prot1, ia_prot_t *prot2);
#endif/* _DHT_H */
diff --git a/xlators/cluster/dht/src/dht-selfheal.c b/xlators/cluster/dht/src/dht-selfheal.c
index e9b1db9..035a709 100644
--- a/xlators/cluster/dht/src/dht-selfheal.c
+++ b/xlators/cluster/dht/src/dht-selfheal.c
@@ -2495,7 +2495,7 @@ dht_dir_attr_heal (void *data)
NULL, NULL, NULL, NULL);
} else {
ret = syncop_setattr (subvol, &local->loc, &local->mds_stbuf,
- (GF_SET_ATTR_UID | GF_SET_ATTR_GID),
+ (GF_SET_ATTR_UID | GF_SET_ATTR_GID | GF_SET_ATTR_MODE),
NULL, NULL, NULL, NULL);
}
--
1.8.3.1
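
The permission comparison used here (is_permission_different) amounts to comparing the owner/group/other rwx triplets. A minimal sketch using plain mode_t bits instead of gluster's ia_prot_t structure:

#include <stdbool.h>
#include <stdio.h>
#include <sys/stat.h>

static bool
is_permission_different (mode_t a, mode_t b)
{
        /* compare only the user/group/other permission bits */
        return (a & (S_IRWXU | S_IRWXG | S_IRWXO)) !=
               (b & (S_IRWXU | S_IRWXG | S_IRWXO));
}

int
main (void)
{
        printf ("%d\n", is_permission_different (0755, 0777));  /* 1 */
        printf ("%d\n", is_permission_different (0777, 0777));  /* 0 */
        return 0;
}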

@@ -0,0 +1,63 @@
From d08f81216085cda58a64f51872b4d2497958a7ea Mon Sep 17 00:00:00 2001
From: Pranith Kumar K <pkarampu@redhat.com>
Date: Fri, 6 Jul 2018 12:28:53 +0530
Subject: [PATCH 314/325] cluster/afr: Prevent execution of code after
call_count decrementing
Upstream-patch: https://review.gluster.org/20470
Problem:
When call_count is decremented by one thread, another thread can
go ahead with the operation, leading to undefined behavior for the
thread that executes statements after decrementing the call count.
Fix:
Do the necessary operations before decrementing the call count.
>fixes bz#1598663
BUG: 1598105
Change-Id: Icc90cd92ac16e5fbdfe534d9f0a61312943393fe
Signed-off-by: Pranith Kumar K <pkarampu@redhat.com>
Reviewed-on: https://code.engineering.redhat.com/gerrit/143624
Tested-by: RHGS Build Bot <nigelb@redhat.com>
Reviewed-by: Sunil Kumar Heggodu Gopala Acharya <sheggodu@redhat.com>
---
xlators/cluster/afr/src/afr-lk-common.c | 15 ++++++++-------
1 file changed, 8 insertions(+), 7 deletions(-)
diff --git a/xlators/cluster/afr/src/afr-lk-common.c b/xlators/cluster/afr/src/afr-lk-common.c
index be3de01..dff6644 100644
--- a/xlators/cluster/afr/src/afr-lk-common.c
+++ b/xlators/cluster/afr/src/afr-lk-common.c
@@ -970,6 +970,14 @@ afr_nonblocking_inodelk_cbk (call_frame_t *frame, void *cookie, xlator_t *this,
local = frame->local;
int_lock = &local->internal_lock;
+ if (op_ret == 0 && local->transaction.type == AFR_DATA_TRANSACTION) {
+ LOCK (&local->inode->lock);
+ {
+ local->inode_ctx->lock_count++;
+ }
+ UNLOCK (&local->inode->lock);
+ }
+
LOCK (&frame->lock);
{
if (op_ret < 0) {
@@ -994,13 +1002,6 @@ afr_nonblocking_inodelk_cbk (call_frame_t *frame, void *cookie, xlator_t *this,
}
UNLOCK (&frame->lock);
- if (op_ret == 0 && local->transaction.type == AFR_DATA_TRANSACTION) {
- LOCK (&local->inode->lock);
- {
- local->inode_ctx->lock_count++;
- }
- UNLOCK (&local->inode->lock);
- }
if (call_count == 0) {
gf_msg_trace (this->name, 0,
"Last inode locking reply received");
--
1.8.3.1
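
The rule the fix enforces: all work on shared state must precede the decrement, because the thread that sees the count reach zero may immediately continue the operation and tear the state down. A minimal sketch with a hypothetical op_state in place of the afr frame/local:

#include <pthread.h>

struct op_state {
        pthread_mutex_t lock;
        int             call_count;   /* in-flight replies               */
        int             lock_count;   /* shared bookkeeping on the inode */
};

static void
reply_cbk (struct op_state *st)
{
        int remaining;

        /* 1. touch shared state first, while we still hold a reference */
        pthread_mutex_lock (&st->lock);
        st->lock_count++;
        pthread_mutex_unlock (&st->lock);

        /* 2. only then drop the reference; after this point another
         *    thread may proceed with the operation */
        pthread_mutex_lock (&st->lock);
        remaining = --st->call_count;
        pthread_mutex_unlock (&st->lock);

        if (remaining == 0) {
                /* last reply: safe to continue/clean up here */
        }
}

int
main (void)
{
        struct op_state st = { PTHREAD_MUTEX_INITIALIZER, 1, 0 };
        reply_cbk (&st);
        return 0;
}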

@@ -0,0 +1,59 @@
From f6b4b4308b838b29f17b2a4a54385269991f0118 Mon Sep 17 00:00:00 2001
From: Mohit Agrawal <moagrawal@redhat.com>
Date: Thu, 5 Jul 2018 15:02:06 +0530
Subject: [PATCH 315/325] changelog: fix br-state-check.t crash for brick_mux
Problem: br-state-check.t is crashing for brick_mux.
Solution: Check the condition in rpcsvc_request_create
before allocating memory from the rxpool.
> BUG: 1597776
> Change-Id: I4fde1ade6073f603c32453f1840395db9a9155b7
> fixes: bz#1597776
> (cherry picked from commit e31c7a7c0c1ea3f6e931935226fb976a92779ba7)
> (Reviewed on upstream link https://review.gluster.org/#/c/20132/)
BUG: 1597768
Change-Id: I3a0dc8f9625a6ab9ce8364119413ae45e321e620
Signed-off-by: Mohit Agrawal <moagrawa@redhat.com>
Reviewed-on: https://code.engineering.redhat.com/gerrit/143719
Tested-by: RHGS Build Bot <nigelb@redhat.com>
Reviewed-by: Sunil Kumar Heggodu Gopala Acharya <sheggodu@redhat.com>
---
rpc/rpc-lib/src/rpcsvc.c | 5 +++--
1 file changed, 3 insertions(+), 2 deletions(-)
diff --git a/rpc/rpc-lib/src/rpcsvc.c b/rpc/rpc-lib/src/rpcsvc.c
index 3acaa8b..8d0c409 100644
--- a/rpc/rpc-lib/src/rpcsvc.c
+++ b/rpc/rpc-lib/src/rpcsvc.c
@@ -447,7 +447,7 @@ rpcsvc_request_create (rpcsvc_t *svc, rpc_transport_t *trans,
size_t msglen = 0;
int ret = -1;
- if (!svc || !trans)
+ if (!svc || !trans || !svc->rxpool)
return NULL;
/* We need to allocate the request before actually calling
@@ -1473,6 +1473,7 @@ rpcsvc_get_listener (rpcsvc_t *svc, uint16_t port, rpc_transport_t *trans)
{
rpcsvc_listener_t *listener = NULL;
char found = 0;
+ rpcsvc_listener_t *next = NULL;
uint32_t listener_port = 0;
if (!svc) {
@@ -1481,7 +1482,7 @@ rpcsvc_get_listener (rpcsvc_t *svc, uint16_t port, rpc_transport_t *trans)
pthread_mutex_lock (&svc->rpclock);
{
- list_for_each_entry (listener, &svc->listeners, list) {
+ list_for_each_entry_safe (listener, next, &svc->listeners, list) {
if (listener && trans) {
if (listener->trans == trans) {
found = 1;
--
1.8.3.1
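
list_for_each_entry_safe differs from list_for_each_entry only in caching the next pointer before each visit, which keeps the walk valid if the current node is unlinked during the visit. A minimal sketch of the same idiom on a singly linked list, with a hypothetical listener type:

#include <stdlib.h>

struct listener { int port; struct listener *next; };

static void
free_matching (struct listener **head, int port)
{
        struct listener **pp = head, *cur, *next;

        for (cur = *head; cur; cur = next) {
                next = cur->next;          /* cached before any free()  */
                if (cur->port == port) {
                        *pp = next;        /* unlink ...                */
                        free (cur);        /* ... and free safely       */
                } else {
                        pp = &cur->next;
                }
        }
}

int
main (void)
{
        struct listener *a = calloc (1, sizeof (*a));
        struct listener *b = calloc (1, sizeof (*b));
        a->port = 24007; a->next = b;
        b->port = 24008; b->next = NULL;
        free_matching (&a, 24008);   /* removes b without breaking the walk */
        free (a);
        return 0;
}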

@@ -0,0 +1,163 @@
From ed178ad11c75da242d68de5d24d9b0a8faaf9deb Mon Sep 17 00:00:00 2001
From: Sunny Kumar <sunkumar@redhat.com>
Date: Tue, 3 Jul 2018 16:03:35 +0530
Subject: [PATCH 316/325] snapshot : remove stale entry
During snap delete, after removing the brick path we should remove the
snap path too, i.e. /var/run/gluster/snaps/<snap-name>.
The snap path should also be removed during snap deactivate.
>fixes: bz#1597662
>Signed-off-by: Sunny Kumar <sunkumar@redhat.com>
Upstream Patch : https://review.gluster.org/#/c/20454/
Change-Id: Ib80b5d8844d6479d31beafa732e5671b0322248b
BUG: 1547903
Signed-off-by: Sunny Kumar <sunkumar@redhat.com>
Change-Id: I93827f4e15a37247bafeb077575dc60658b8851f
Reviewed-on: https://code.engineering.redhat.com/gerrit/143814
Tested-by: RHGS Build Bot <nigelb@redhat.com>
Reviewed-by: Atin Mukherjee <amukherj@redhat.com>
---
tests/bugs/snapshot/bug-1597662.t | 57 +++++++++++++++++++++++++++
xlators/mgmt/glusterd/src/glusterd-snapshot.c | 38 ++++++++++++++++++
2 files changed, 95 insertions(+)
create mode 100644 tests/bugs/snapshot/bug-1597662.t
diff --git a/tests/bugs/snapshot/bug-1597662.t b/tests/bugs/snapshot/bug-1597662.t
new file mode 100644
index 0000000..dc87d17
--- /dev/null
+++ b/tests/bugs/snapshot/bug-1597662.t
@@ -0,0 +1,57 @@
+#!/bin/bash
+
+. $(dirname $0)/../../include.rc
+. $(dirname $0)/../../volume.rc
+. $(dirname $0)/../../snapshot.rc
+
+cleanup;
+
+TEST init_n_bricks 3;
+TEST setup_lvm 3;
+TEST glusterd;
+TEST pidof glusterd;
+
+TEST $CLI volume create $V0 $H0:$L1 $H0:$L2 $H0:$L3;
+TEST $CLI volume start $V0;
+
+snap_path=/var/run/gluster/snaps
+
+TEST $CLI snapshot create snap1 $V0 no-timestamp;
+
+$CLI snapshot activate snap1;
+
+EXPECT 'Started' snapshot_status snap1;
+
+# This Function will check for entry /var/run/gluster/snaps/<snap-name>
+# against snap-name
+
+function is_snap_path
+{
+ echo `ls $snap_path | grep snap1 | wc -l`
+}
+
+# snap is active so snap_path should exist
+EXPECT "1" is_snap_path
+
+$CLI snapshot deactivate snap1;
+
+# snap is deactivated so snap_path should not exist
+EXPECT "0" is_snap_path
+
+# activate snap again
+$CLI snapshot activate snap1;
+
+# snap is active so snap_path should exist
+EXPECT "1" is_snap_path
+
+# delete snap now
+TEST $CLI snapshot delete snap1;
+
+# snap is deleted so snap_path should not exist
+EXPECT "0" is_snap_path
+
+TEST $CLI volume stop $V0;
+TEST $CLI volume delete $V0;
+
+cleanup;
+
diff --git a/xlators/mgmt/glusterd/src/glusterd-snapshot.c b/xlators/mgmt/glusterd/src/glusterd-snapshot.c
index 84335ef..304cef6 100644
--- a/xlators/mgmt/glusterd/src/glusterd-snapshot.c
+++ b/xlators/mgmt/glusterd/src/glusterd-snapshot.c
@@ -2945,6 +2945,7 @@ glusterd_lvm_snapshot_remove (dict_t *rsp_dict, glusterd_volinfo_t *snap_vol)
glusterd_brickinfo_t *brickinfo = NULL;
xlator_t *this = NULL;
char brick_dir[PATH_MAX] = "";
+ char snap_path[PATH_MAX] = "";
char *tmp = NULL;
char *brick_mount_path = NULL;
gf_boolean_t is_brick_dir_present = _gf_false;
@@ -3116,6 +3117,28 @@ remove_brick_path:
brick_dir, strerror (errno));
goto out;
}
+
+ /* After removing brick_dir, fetch and remove snap path
+ * i.e. /var/run/gluster/snaps/<snap-name>.
+ */
+ if (!snap_vol->snapshot) {
+ gf_msg (this->name, GF_LOG_WARNING, EINVAL,
+ GD_MSG_INVALID_ENTRY, "snapshot not"
+ "present in snap_vol");
+ ret = -1;
+ goto out;
+ }
+
+ snprintf (snap_path, sizeof (snap_path) - 1, "%s/%s",
+ snap_mount_dir, snap_vol->snapshot->snapname);
+ ret = recursive_rmdir (snap_path);
+ if (ret) {
+ gf_msg (this->name, GF_LOG_ERROR, errno,
+ GD_MSG_DIR_OP_FAILED, "Failed to remove "
+ "%s directory : error : %s", snap_path,
+ strerror (errno));
+ goto out;
+ }
}
ret = 0;
@@ -6260,6 +6283,7 @@ glusterd_snapshot_deactivate_commit (dict_t *dict, char **op_errstr,
glusterd_snap_t *snap = NULL;
glusterd_volinfo_t *snap_volinfo = NULL;
xlator_t *this = NULL;
+ char snap_path[PATH_MAX] = "";
this = THIS;
GF_ASSERT (this);
@@ -6318,6 +6342,20 @@ glusterd_snapshot_deactivate_commit (dict_t *dict, char **op_errstr,
"Failed to unmounts for %s", snap->snapname);
}
+ /*Remove /var/run/gluster/snaps/<snap-name> entry for deactivated snaps.
+ * This entry will be created again during snap activate.
+ */
+ snprintf (snap_path, sizeof (snap_path) - 1, "%s/%s",
+ snap_mount_dir, snapname);
+ ret = recursive_rmdir (snap_path);
+ if (ret) {
+ gf_msg (this->name, GF_LOG_ERROR, errno,
+ GD_MSG_DIR_OP_FAILED, "Failed to remove "
+ "%s directory : error : %s", snap_path,
+ strerror (errno));
+ goto out;
+ }
+
ret = dict_set_dynstr_with_alloc (rsp_dict, "snapuuid",
uuid_utoa (snap->snap_id));
if (ret) {
--
1.8.3.1

@@ -0,0 +1,114 @@
From 8c9028b560b1f0fd816e7d2a9e0bec70cc526c1a Mon Sep 17 00:00:00 2001
From: Kotresh HR <khiremat@redhat.com>
Date: Sat, 7 Jul 2018 08:58:08 -0400
Subject: [PATCH 317/325] geo-rep/scheduler: Fix EBUSY trace back
Fix the traceback during temporary mount cleanup. The temporary mount is
done to touch the root, which is required for the checkpoint to complete.
Backport of
> Patch: https://review.gluster.org/#/c/20476/
> BUG: 1598977
> Change-Id: I97fea538e92c4ef0747747e981ef98499504e336
> Signed-off-by: Kotresh HR <khiremat@redhat.com>
BUG: 1568896
Change-Id: I97fea538e92c4ef0747747e981ef98499504e336
Signed-off-by: Kotresh HR <khiremat@redhat.com>
Reviewed-on: https://code.engineering.redhat.com/gerrit/143949
Tested-by: RHGS Build Bot <nigelb@redhat.com>
Reviewed-by: Sunil Kumar Heggodu Gopala Acharya <sheggodu@redhat.com>
---
extras/geo-rep/schedule_georep.py.in | 25 ++++++++++++++++++++-----
1 file changed, 20 insertions(+), 5 deletions(-)
diff --git a/extras/geo-rep/schedule_georep.py.in b/extras/geo-rep/schedule_georep.py.in
index 4e1a071..3797fff 100644
--- a/extras/geo-rep/schedule_georep.py.in
+++ b/extras/geo-rep/schedule_georep.py.in
@@ -43,7 +43,7 @@ SESSION_MOUNT_LOG_FILE = ("/var/log/glusterfs/geo-replication"
"/schedule_georep.mount.log")
USE_CLI_COLOR = True
-
+mnt_list = []
class GlusterBadXmlFormat(Exception):
"""
@@ -90,6 +90,8 @@ def execute(cmd, success_msg="", failure_msg="", exitcode=-1):
output_ok(success_msg)
return out
else:
+ if exitcode == 0:
+ return
err_msg = err if err else out
output_notok(failure_msg, err=err_msg, exitcode=exitcode)
@@ -112,12 +114,12 @@ def cleanup(hostname, volname, mnt):
"""
Unmount the Volume and Remove the temporary directory
"""
- execute(["umount", mnt],
+ execute(["umount", "-l", mnt],
failure_msg="Unable to Unmount Gluster Volume "
"{0}:{1}(Mounted at {2})".format(hostname, volname, mnt))
execute(["rmdir", mnt],
failure_msg="Unable to Remove temp directory "
- "{0}".format(mnt))
+ "{0}".format(mnt), exitcode=0)
@contextmanager
@@ -130,6 +132,7 @@ def glustermount(hostname, volname):
Automatically unmounts it in case of Exceptions/out of context
"""
mnt = tempfile.mkdtemp(prefix="georepsetup_")
+ mnt_list.append(mnt)
execute(["@SBIN_DIR@/glusterfs",
"--volfile-server", hostname,
"--volfile-id", volname,
@@ -378,14 +381,14 @@ def main(args):
output_ok("Started Geo-replication and watching Status for "
"Checkpoint completion")
- touch_mount_root(args.mastervol)
-
start_time = int(time.time())
duration = 0
# Sleep till Geo-rep initializes
time.sleep(60)
+ touch_mount_root(args.mastervol)
+
slave_url = "{0}::{1}".format(args.slave, args.slavevol)
# Loop to Check the Geo-replication Status and Checkpoint
@@ -446,6 +449,11 @@ def main(args):
time.sleep(args.interval)
+ for mnt in mnt_list:
+ execute(["rmdir", mnt],
+ failure_msg="Unable to Remove temp directory "
+ "{0}".format(mnt), exitcode=0)
+
if __name__ == "__main__":
parser = ArgumentParser(formatter_class=RawDescriptionHelpFormatter,
description=__doc__)
@@ -474,4 +482,11 @@ if __name__ == "__main__":
execute(cmd)
main(args)
except KeyboardInterrupt:
+ for mnt in mnt_list:
+ execute(["umount", "-l", mnt],
+ failure_msg="Unable to Unmount Gluster Volume "
+ "Mounted at {0}".format(mnt), exitcode=0)
+ execute(["rmdir", mnt],
+ failure_msg="Unable to Remove temp directory "
+ "{0}".format(mnt), exitcode=0)
output_notok("Exiting...")
--
1.8.3.1
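
The "umount -l" the patch switches to corresponds to the Linux umount2(2) syscall with MNT_DETACH, which detaches the mount point immediately and defers the real cleanup until it is no longer busy, so the cleanup path stops tripping over EBUSY. A minimal Linux-specific C sketch of the same call:

#include <stdio.h>
#include <sys/mount.h>

static int
lazy_umount (const char *mnt)
{
        /* equivalent of "umount -l <mnt>": detach now, clean up later */
        if (umount2 (mnt, MNT_DETACH) != 0) {
                perror ("umount2");
                return -1;
        }
        return 0;
}

int
main (int argc, char *argv[])
{
        return argc > 1 ? lazy_umount (argv[1]) : 0;
}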

@@ -0,0 +1,60 @@
From 4ef66d687918c87393b26b8e72d5eb2f5f9e30f0 Mon Sep 17 00:00:00 2001
From: Sanoj Unnikrishnan <sunnikri@redhat.com>
Date: Mon, 16 Jul 2018 11:16:24 +0530
Subject: [PATCH 318/325] Quota: Fix crawling of files
Problem: Running "find ." does not crawl files. It goes over the
directories and lists all dentries with the getdents system call,
so the files are never looked up.
Solution:
Explicitly trigger a stat on files with find . -exec stat {} \;
Since the crawl can take slightly longer, the timeout in the test case
is updated.
Backport of,
> Change-Id: If3c1fba2ed8e300c9cc08c1b5c1ba93cb8e4d6b6
> bug: 1533000
> patch: https://review.gluster.org/#/c/20057/
> Signed-off-by: Sanoj Unnikrishnan <sunnikri@redhat.com>
BUG: 1581231
Change-Id: I39768e22a7139f77de1740432b9de4a5c870b359
Signed-off-by: Sanoj Unnikrishnan <sunnikri@redhat.com>
Reviewed-on: https://code.engineering.redhat.com/gerrit/144022
Tested-by: RHGS Build Bot <nigelb@redhat.com>
Reviewed-by: Sunil Kumar Heggodu Gopala Acharya <sheggodu@redhat.com>
---
tests/bugs/quota/bug-1293601.t | 2 +-
xlators/mgmt/glusterd/src/glusterd-quota.c | 4 +++-
2 files changed, 4 insertions(+), 2 deletions(-)
diff --git a/tests/bugs/quota/bug-1293601.t b/tests/bugs/quota/bug-1293601.t
index 52b03bc..def4ef9 100644
--- a/tests/bugs/quota/bug-1293601.t
+++ b/tests/bugs/quota/bug-1293601.t
@@ -27,6 +27,6 @@ EXPECT_WITHIN $MARKER_UPDATE_TIMEOUT "1.0MB" quotausage "/"
TEST $CLI volume quota $V0 disable
TEST $CLI volume quota $V0 enable
-EXPECT_WITHIN $MARKER_UPDATE_TIMEOUT "1.0MB" quotausage "/"
+EXPECT_WITHIN 40 "1.0MB" quotausage "/"
cleanup;
diff --git a/xlators/mgmt/glusterd/src/glusterd-quota.c b/xlators/mgmt/glusterd/src/glusterd-quota.c
index 1c3a801..6d3918b 100644
--- a/xlators/mgmt/glusterd/src/glusterd-quota.c
+++ b/xlators/mgmt/glusterd/src/glusterd-quota.c
@@ -341,7 +341,9 @@ _glusterd_quota_initiate_fs_crawl (glusterd_conf_t *priv,
if (type == GF_QUOTA_OPTION_TYPE_ENABLE ||
type == GF_QUOTA_OPTION_TYPE_ENABLE_OBJECTS)
- runner_add_args (&runner, "/usr/bin/find", ".", NULL);
+ runner_add_args (&runner, "/usr/bin/find", ".",
+ "-exec", "/usr/bin/stat",
+ "{}", "\\", ";", NULL);
else if (type == GF_QUOTA_OPTION_TYPE_DISABLE) {
--
1.8.3.1

@@ -0,0 +1,47 @@
From f62b21ca4a6e745f8589b257574aa2304e43871b Mon Sep 17 00:00:00 2001
From: Kaushal M <kaushal@redhat.com>
Date: Tue, 10 Jul 2018 20:56:08 +0530
Subject: [PATCH 319/325] glusterd: _is_prefix should handle 0-length paths
If one of the paths given to _is_prefix is 0-length, then it is not a
prefix of the other. Hence, _is_prefix should return false.
>Change-Id: I54aa577a64a58940ec91872d0d74dc19cff9106d
>fixes: bz#1599783
>Signed-off-by: Kaushal M <kaushal@redhat.com>
upstream patch:https://review.gluster.org/#/c/20490/
Change-Id: I54aa577a64a58940ec91872d0d74dc19cff9106d
BUG: 1599823
Signed-off-by: Sanju Rakonde <srakonde@redhat.com>
Reviewed-on: https://code.engineering.redhat.com/gerrit/143743
Tested-by: RHGS Build Bot <nigelb@redhat.com>
Reviewed-by: Atin Mukherjee <amukherj@redhat.com>
---
xlators/mgmt/glusterd/src/glusterd-utils.c | 9 +++++++++
1 file changed, 9 insertions(+)
diff --git a/xlators/mgmt/glusterd/src/glusterd-utils.c b/xlators/mgmt/glusterd/src/glusterd-utils.c
index f62c917..565b6c5 100644
--- a/xlators/mgmt/glusterd/src/glusterd-utils.c
+++ b/xlators/mgmt/glusterd/src/glusterd-utils.c
@@ -1318,6 +1318,15 @@ _is_prefix (char *str1, char *str2)
len1 = strlen (str1);
len2 = strlen (str2);
small_len = min (len1, len2);
+
+ /*
+ * If either one (not both) of the strings are 0-length, they are not
+ * prefixes of each other.
+ */
+ if ((small_len == 0) && (len1 != len2)) {
+ return _gf_false;
+ }
+
for (i = 0; i < small_len; i++) {
if (str1[i] != str2[i]) {
prefix = _gf_false;
--
1.8.3.1
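
A minimal sketch of the guarded check, simplified in that the real _is_prefix additionally verifies the match ends on a path-component boundary:

#include <stdbool.h>
#include <stdio.h>
#include <string.h>

static bool
is_prefix (const char *a, const char *b)
{
        size_t la = strlen (a);
        size_t lb = strlen (b);
        size_t small = la < lb ? la : lb;

        /* the patch's guard: if exactly one string is empty, it is not
         * a prefix of the other */
        if (small == 0 && la != lb)
                return false;

        return strncmp (a, b, small) == 0;
}

int
main (void)
{
        printf ("%d\n", is_prefix ("", "/bricks/b1"));          /* 0 */
        printf ("%d\n", is_prefix ("/bricks", "/bricks/b1"));   /* 1 */
        return 0;
}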

@@ -0,0 +1,70 @@
From 058d12fcab16688cd351bd9d409a83cae40c3ea6 Mon Sep 17 00:00:00 2001
From: Atin Mukherjee <amukherj@redhat.com>
Date: Tue, 10 Jul 2018 21:33:41 +0530
Subject: [PATCH 320/325] glusterd: log improvements on brick creation
validation
Added a few log entries in glusterd_is_brickpath_available().
>Change-Id: I8b758578f9db90d2974f7c79126c50ad3a001d71
>Updates: bz#1193929
>Signed-off-by: Atin Mukherjee <amukherj@redhat.com>
upstream patch: https://review.gluster.org/#/c/20493/
Change-Id: I8b758578f9db90d2974f7c79126c50ad3a001d71
BUG: 1599823
Signed-off-by: Sanju Rakonde <srakonde@redhat.com>
Reviewed-on: https://code.engineering.redhat.com/gerrit/143744
Tested-by: RHGS Build Bot <nigelb@redhat.com>
Reviewed-by: Atin Mukherjee <amukherj@redhat.com>
---
xlators/mgmt/glusterd/src/glusterd-utils.c | 17 +++++++++++++++--
1 file changed, 15 insertions(+), 2 deletions(-)
diff --git a/xlators/mgmt/glusterd/src/glusterd-utils.c b/xlators/mgmt/glusterd/src/glusterd-utils.c
index 565b6c5..95df889 100644
--- a/xlators/mgmt/glusterd/src/glusterd-utils.c
+++ b/xlators/mgmt/glusterd/src/glusterd-utils.c
@@ -1282,7 +1282,8 @@ glusterd_brickinfo_new_from_brick (char *brick,
GD_MSG_BRICKINFO_CREATE_FAIL, "realpath"
" () failed for brick %s. The "
"underlying filesystem may be in bad "
- "state", new_brickinfo->path);
+ "state. Error - %s",
+ new_brickinfo->path, strerror(errno));
ret = -1;
goto out;
}
@@ -1367,6 +1368,12 @@ glusterd_is_brickpath_available (uuid_t uuid, char *path)
/* path may not yet exist */
if (!realpath (path, tmp_path)) {
if (errno != ENOENT) {
+ gf_msg (THIS->name, GF_LOG_CRITICAL, errno,
+ GD_MSG_BRICKINFO_CREATE_FAIL, "realpath"
+ " () failed for brick %s. The "
+ "underlying filesystem may be in bad "
+ "state. Error - %s",
+ path, strerror(errno));
goto out;
}
/* When realpath(3) fails, tmp_path is undefined. */
@@ -1378,8 +1385,14 @@ glusterd_is_brickpath_available (uuid_t uuid, char *path)
brick_list) {
if (gf_uuid_compare (uuid, brickinfo->uuid))
continue;
- if (_is_prefix (brickinfo->real_path, tmp_path))
+ if (_is_prefix (brickinfo->real_path, tmp_path)) {
+ gf_msg (THIS->name, GF_LOG_CRITICAL, 0,
+ GD_MSG_BRICKINFO_CREATE_FAIL,
+ "_is_prefix call failed for brick %s "
+ "against brick %s", tmp_path,
+ brickinfo->real_path);
goto out;
+ }
}
}
available = _gf_true;
--
1.8.3.1
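
The error-reporting pattern added above can be sketched outside of
glusterd as follows (a hedged example with fprintf standing in for
gf_msg; resolve_brick_path is an illustrative name, not a glusterd
function):

    #include <errno.h>
    #include <limits.h>
    #include <stdio.h>
    #include <stdlib.h>
    #include <string.h>

    /* Resolve a brick path, logging strerror(errno) on failure the way
     * the patch does, while tolerating a not-yet-existing path. */
    static int
    resolve_brick_path (const char *path, char *out, size_t out_len)
    {
            char tmp[PATH_MAX];

            if (!realpath (path, tmp)) {
                    if (errno != ENOENT) {
                            fprintf (stderr, "realpath () failed for brick "
                                     "%s. The underlying filesystem may be "
                                     "in bad state. Error - %s\n",
                                     path, strerror (errno));
                            return -1;
                    }
                    /* path may not yet exist; keep the given path */
                    snprintf (tmp, sizeof (tmp), "%s", path);
            }
            snprintf (out, out_len, "%s", tmp);
            return 0;
    }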


@ -0,0 +1,110 @@
From 737d077a44899f6222822408c400fcd91939ca5b Mon Sep 17 00:00:00 2001
From: Kotresh HR <khiremat@redhat.com>
Date: Thu, 12 Jul 2018 04:31:01 -0400
Subject: [PATCH 321/325] geo-rep: Fix symlink rename syncing issue
Problem:
Geo-rep sometimes fails to sync the rename of a symlink
if the I/O is as follows:
1. touch file1
2. ln -s "./file1" sym_400
3. mv sym_400 renamed_sym_400
4. mkdir sym_400
The file 'renamed_sym_400' fails to sync to the slave.
Cause:
Assume there are three distribute subvolumes (brick1, brick2, brick3).
The changelogs are recorded as follows for above I/O pattern.
Note that the MKDIR is recorded on all bricks.
1. brick1:
-------
CREATE file1
SYMLINK sym_400
RENAME sym_400 renamed_sym_400
MKDIR sym_400
2. brick2:
-------
MKDIR sym_400
3. brick3:
-------
MKDIR sym_400
The operations on 'brick1' should be processed sequentially. But
since the MKDIR is recorded on all the bricks, 'brick2' and 'brick3'
processed the MKDIR before 'brick1' did, causing out-of-order
syncing, so the directory sym_400 was created first.
Now 'brick1' processes its changelog.
CREATE file1 -> succeeds
SYMLINK sym_400 -> No longer present in master. Ignored
RENAME sym_400 renamed_sym_400
While processing the RENAME, if the source ('sym_400') is not
present, the destination ('renamed_sym_400') is created. But
geo-rep stats the name 'sym_400' to confirm the source file's
presence. In this race, since the name 'sym_400' is present as a
directory, it does not create the destination.
Hence the RENAME is ignored.
Fix:
The fix is to not rely only on a stat of the source name during
RENAME. It should stat the name and, if the name is present, verify
that the gfid is the same. Only then can it conclude that the source
is present.
>upstream patch: https://review.gluster.org/#/c/20496/
Backport of:
> BUG: 1600405
> Change-Id: I9fbec4f13ca6a182798a7f81b356fe2003aff969
> Signed-off-by: Kotresh HR <khiremat@redhat.com>
BUG: 1601314
Change-Id: I9fbec4f13ca6a182798a7f81b356fe2003aff969
Signed-off-by: Kotresh HR <khiremat@redhat.com>
Reviewed-on: https://code.engineering.redhat.com/gerrit/144104
Reviewed-by: Atin Mukherjee <amukherj@redhat.com>
Tested-by: RHGS Build Bot <nigelb@redhat.com>
---
geo-replication/syncdaemon/resource.py | 11 +++++++++--
1 file changed, 9 insertions(+), 2 deletions(-)
diff --git a/geo-replication/syncdaemon/resource.py b/geo-replication/syncdaemon/resource.py
index 00e62b7..0d5462a 100644
--- a/geo-replication/syncdaemon/resource.py
+++ b/geo-replication/syncdaemon/resource.py
@@ -674,8 +674,14 @@ class Server(object):
collect_failure(e, EEXIST)
elif op == 'RENAME':
en = e['entry1']
- st = lstat(entry)
- if isinstance(st, int):
+ # The matching disk gfid check validates two things
+ # 1. Validates name is present, return false otherwise
+ # 2. Validates gfid is same, returns false otherwise
+ # So both validations are necessary to decide src doesn't
+ # exist. We can't rely on only gfid stat as hardlink could
+ # be present and we can't rely only on name as name could
+                        # exist with different gfid.
+ if not matching_disk_gfid(gfid, entry):
if e['stat'] and not stat.S_ISDIR(e['stat']['mode']):
if stat.S_ISLNK(e['stat']['mode']) and \
e['link'] is not None:
@@ -699,6 +705,7 @@ class Server(object):
[ENOENT, EEXIST], [ESTALE])
collect_failure(e, cmd_ret)
else:
+ st = lstat(entry)
st1 = lstat(en)
if isinstance(st1, int):
rename_with_disk_gfid_confirmation(gfid, entry, en)
--
1.8.3.1
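
The rule the fix encodes is that the RENAME source counts as present
only if the name exists on disk and still carries the expected gfid.
A C sketch of that decision follows (the shipped change is the Python
hunk above; disk_gfid_for_path is a hypothetical stand-in for the
on-disk gfid lookup):

    #include <stdbool.h>
    #include <string.h>

    /* Hypothetical stub: a real implementation would read the gfid
     * recorded on disk for this path. Returns non-zero when the name
     * does not exist at all. */
    static int
    disk_gfid_for_path (const char *path, char gfid_out[64])
    {
            (void) path;
            (void) gfid_out;
            return -1;
    }

    /* Name presence alone is not enough: in the race described above
     * the name exists but belongs to a newly created directory with a
     * different gfid. */
    static bool
    rename_source_present (const char *path, const char *expected_gfid)
    {
            char on_disk[64];

            if (disk_gfid_for_path (path, on_disk) != 0)
                    return false;       /* name gone: source is absent */
            return strcmp (on_disk, expected_gfid) == 0;
    }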


@ -0,0 +1,47 @@
From ee764746030bb04a702504e3e4c0f8115928aef5 Mon Sep 17 00:00:00 2001
From: Kotresh HR <khiremat@redhat.com>
Date: Thu, 7 Dec 2017 04:46:08 -0500
Subject: [PATCH 322/325] geo-rep: Cleanup stale unprocessed xsync changelogs
When geo-replication is in hybrid crawl, it crawls
the file system and generates xsync changelogs.
These changelogs are later processed to sync the
data. If the worker goes to Faulty before processing
all the generated xsync changelogs, those changelogs
remain and are not cleaned up. When the worker
comes back, it will re-crawl and re-generate xsync
changelogs. So this patch cleans up the stale
unprocessed xsync changelogs.
Backport of:
> Patch: https://review.gluster.org/18983
> Issue: #376
BUG: 1599037
Change-Id: Ib92920c716c7d27e1eeb4bc4ebaf3efb48e0694d
Signed-off-by: Kotresh HR <khiremat@redhat.com>
Reviewed-on: https://code.engineering.redhat.com/gerrit/144102
Tested-by: RHGS Build Bot <nigelb@redhat.com>
Reviewed-by: Atin Mukherjee <amukherj@redhat.com>
---
geo-replication/syncdaemon/master.py | 4 ++++
1 file changed, 4 insertions(+)
diff --git a/geo-replication/syncdaemon/master.py b/geo-replication/syncdaemon/master.py
index e484692..64e9836 100644
--- a/geo-replication/syncdaemon/master.py
+++ b/geo-replication/syncdaemon/master.py
@@ -1527,6 +1527,10 @@ class GMasterXsyncMixin(GMasterChangelogMixin):
pass
else:
raise
+ # Purge stale unprocessed xsync changelogs
+ for f in os.listdir(self.tempdir):
+ if f.startswith("XSYNC-CHANGELOG"):
+ os.remove(os.path.join(self.tempdir, f))
def crawl(self):
"""
--
1.8.3.1
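
For reference, the same cleanup expressed as a hedged C sketch (the
shipped fix is the four Python lines above; the only assumption carried
over is the XSYNC-CHANGELOG name prefix used by the startswith check):

    #include <dirent.h>
    #include <stdio.h>
    #include <string.h>
    #include <unistd.h>

    /* Remove every leftover XSYNC-CHANGELOG* file from tempdir so a
     * restarted worker does not leave stale, unprocessed changelogs
     * behind after it re-crawls. */
    static void
    purge_stale_xsync_changelogs (const char *tempdir)
    {
            DIR           *dp = opendir (tempdir);
            struct dirent *de;
            char           path[4096];

            if (!dp)
                    return;
            while ((de = readdir (dp)) != NULL) {
                    if (strncmp (de->d_name, "XSYNC-CHANGELOG",
                                 strlen ("XSYNC-CHANGELOG")) != 0)
                            continue;
                    snprintf (path, sizeof (path), "%s/%s",
                              tempdir, de->d_name);
                    unlink (path);
            }
            closedir (dp);
    }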


@ -0,0 +1,228 @@
From 77c33f6c257928576d328e6e735f7e7a086202a3 Mon Sep 17 00:00:00 2001
From: karthik-us <ksubrahm@redhat.com>
Date: Tue, 17 Jul 2018 11:56:10 +0530
Subject: [PATCH 323/325] cluster/afr: Mark dirty for entry transactions for
quorum failures
Backport of: https://review.gluster.org/#/c/20153/
Problem:
If an entry creation transaction fails on a quorum number of bricks,
it might end up setting the pending changelogs on the file itself
on the brick where it got created. But the parent does not have
any entry pending marker set. This will lead to the entry not
getting healed by the self-heal daemon automatically.
Fix:
For entry transactions, mark the parent dirty if the transaction
fails on a quorum number of bricks, so that the heal can do a
conservative merge and the entry gets healed by shd.
Change-Id: I8bbd02da7c4c9edd9c3f947e9a4ed3d37c9bec1c
BUG: 1566336
Signed-off-by: karthik-us <ksubrahm@redhat.com>
Reviewed-on: https://code.engineering.redhat.com/gerrit/144145
Tested-by: RHGS Build Bot <nigelb@redhat.com>
Reviewed-by: Sunil Kumar Heggodu Gopala Acharya <sheggodu@redhat.com>
---
...20-mark-dirty-for-entry-txn-on-quorum-failure.t | 73 ++++++++++++++++++++++
xlators/cluster/afr/src/afr-transaction.c | 62 ++++++++++++++----
2 files changed, 124 insertions(+), 11 deletions(-)
create mode 100644 tests/bugs/replicate/bug-1586020-mark-dirty-for-entry-txn-on-quorum-failure.t
diff --git a/tests/bugs/replicate/bug-1586020-mark-dirty-for-entry-txn-on-quorum-failure.t b/tests/bugs/replicate/bug-1586020-mark-dirty-for-entry-txn-on-quorum-failure.t
new file mode 100644
index 0000000..7fec3b4
--- /dev/null
+++ b/tests/bugs/replicate/bug-1586020-mark-dirty-for-entry-txn-on-quorum-failure.t
@@ -0,0 +1,73 @@
+#!/bin/bash
+
+. $(dirname $0)/../../include.rc
+. $(dirname $0)/../../volume.rc
+
+cleanup;
+
+function create_files {
+ local i=1
+ while (true)
+ do
+ dd if=/dev/zero of=$M0/file$i bs=1M count=10
+ if [ -e $B0/${V0}0/file$i ] && [ -e $B0/${V0}1/file$i ]; then
+ ((i++))
+ else
+ break
+ fi
+ done
+ echo $i
+}
+
+TEST glusterd
+
+#Create brick partitions
+TEST truncate -s 100M $B0/brick0
+TEST truncate -s 100M $B0/brick1
+#Have the 3rd brick of a higher size to test the scenario of entry transaction
+#passing on only one brick and not on other bricks.
+TEST truncate -s 110M $B0/brick2
+LO1=`SETUP_LOOP $B0/brick0`
+TEST [ $? -eq 0 ]
+TEST MKFS_LOOP $LO1
+LO2=`SETUP_LOOP $B0/brick1`
+TEST [ $? -eq 0 ]
+TEST MKFS_LOOP $LO2
+LO3=`SETUP_LOOP $B0/brick2`
+TEST [ $? -eq 0 ]
+TEST MKFS_LOOP $LO3
+TEST mkdir -p $B0/${V0}0 $B0/${V0}1 $B0/${V0}2
+TEST MOUNT_LOOP $LO1 $B0/${V0}0
+TEST MOUNT_LOOP $LO2 $B0/${V0}1
+TEST MOUNT_LOOP $LO3 $B0/${V0}2
+
+TEST $CLI volume create $V0 replica 3 $H0:$B0/${V0}{0,1,2}
+TEST $CLI volume start $V0
+TEST $CLI volume set $V0 performance.write-behind off
+TEST $CLI volume set $V0 self-heal-daemon off
+TEST $GFS --volfile-server=$H0 --volfile-id=$V0 $M0
+
+i=$(create_files)
+TEST ! ls $B0/${V0}0/file$i
+TEST ! ls $B0/${V0}1/file$i
+TEST ls $B0/${V0}2/file$i
+EXPECT "000000000000000000000001" get_hex_xattr trusted.afr.dirty $B0/${V0}2
+EXPECT "000000010000000100000000" get_hex_xattr trusted.afr.$V0-client-0 $B0/${V0}2/file$i
+EXPECT "000000010000000100000000" get_hex_xattr trusted.afr.$V0-client-1 $B0/${V0}2/file$i
+
+TEST $CLI volume set $V0 self-heal-daemon on
+EXPECT_WITHIN $PROCESS_UP_TIMEOUT "Y" glustershd_up_status
+EXPECT_WITHIN $CHILD_UP_TIMEOUT "1" afr_child_up_status_in_shd $V0 0
+EXPECT_WITHIN $CHILD_UP_TIMEOUT "1" afr_child_up_status_in_shd $V0 1
+EXPECT_WITHIN $CHILD_UP_TIMEOUT "1" afr_child_up_status_in_shd $V0 2
+TEST rm -f $M0/file1
+
+TEST $CLI volume heal $V0
+EXPECT_WITHIN $HEAL_TIMEOUT "0" get_pending_heal_count $V0
+TEST force_umount $M0
+TEST $CLI volume stop $V0
+EXPECT 'Stopped' volinfo_field $V0 'Status';
+TEST $CLI volume delete $V0;
+UMOUNT_LOOP ${B0}/${V0}{0,1,2}
+rm -f ${B0}/brick{0,1,2}
+cleanup;
diff --git a/xlators/cluster/afr/src/afr-transaction.c b/xlators/cluster/afr/src/afr-transaction.c
index 5b18f63..321b6f1 100644
--- a/xlators/cluster/afr/src/afr-transaction.c
+++ b/xlators/cluster/afr/src/afr-transaction.c
@@ -774,8 +774,38 @@ afr_has_fop_cbk_quorum (call_frame_t *frame)
return afr_has_quorum (success, this);
}
+gf_boolean_t
+afr_need_dirty_marking (call_frame_t *frame, xlator_t *this)
+{
+ afr_private_t *priv = this->private;
+ afr_local_t *local = NULL;
+ gf_boolean_t need_dirty = _gf_false;
+
+ local = frame->local;
+
+ if (!priv->quorum_count || !local->optimistic_change_log)
+ return _gf_false;
+
+ if (local->transaction.type == AFR_DATA_TRANSACTION ||
+ local->transaction.type == AFR_METADATA_TRANSACTION)
+ return _gf_false;
+
+ if (AFR_COUNT (local->transaction.failed_subvols, priv->child_count) ==
+ priv->child_count)
+ return _gf_false;
+
+ if (priv->arbiter_count) {
+ if (!afr_has_arbiter_fop_cbk_quorum (frame))
+ need_dirty = _gf_true;
+ } else if (!afr_has_fop_cbk_quorum (frame)) {
+ need_dirty = _gf_true;
+ }
+
+ return need_dirty;
+}
+
void
-afr_handle_quorum (call_frame_t *frame)
+afr_handle_quorum (call_frame_t *frame, xlator_t *this)
{
afr_local_t *local = NULL;
afr_private_t *priv = NULL;
@@ -826,11 +856,15 @@ afr_handle_quorum (call_frame_t *frame)
return;
}
+ if (afr_need_dirty_marking (frame, this))
+ goto set_response;
+
for (i = 0; i < priv->child_count; i++) {
if (local->transaction.pre_op[i])
afr_transaction_fop_failed (frame, frame->this, i);
}
+set_response:
local->op_ret = -1;
local->op_errno = afr_final_errno (local, priv);
if (local->op_errno == 0)
@@ -874,9 +908,17 @@ afr_changelog_post_op_now (call_frame_t *frame, xlator_t *this)
int nothing_failed = 1;
gf_boolean_t need_undirty = _gf_false;
- afr_handle_quorum (frame);
+ afr_handle_quorum (frame, this);
local = frame->local;
- idx = afr_index_for_transaction_type (local->transaction.type);
+ idx = afr_index_for_transaction_type (local->transaction.type);
+
+ xattr = dict_new ();
+ if (!xattr) {
+ local->op_ret = -1;
+ local->op_errno = ENOMEM;
+ afr_changelog_post_op_done (frame, this);
+ goto out;
+ }
nothing_failed = afr_txn_nothing_failed (frame, this);
@@ -886,6 +928,11 @@ afr_changelog_post_op_now (call_frame_t *frame, xlator_t *this)
need_undirty = _gf_true;
if (local->op_ret < 0 && !nothing_failed) {
+ if (afr_need_dirty_marking (frame, this)) {
+ local->dirty[idx] = hton32(1);
+ goto set_dirty;
+ }
+
afr_changelog_post_op_done (frame, this);
goto out;
}
@@ -902,14 +949,6 @@ afr_changelog_post_op_now (call_frame_t *frame, xlator_t *this)
goto out;
}
- xattr = dict_new ();
- if (!xattr) {
- local->op_ret = -1;
- local->op_errno = ENOMEM;
- afr_changelog_post_op_done (frame, this);
- goto out;
- }
-
for (i = 0; i < priv->child_count; i++) {
if (local->transaction.failed_subvols[i])
local->pending[i][idx] = hton32(1);
@@ -928,6 +967,7 @@ afr_changelog_post_op_now (call_frame_t *frame, xlator_t *this)
else
local->dirty[idx] = hton32(0);
+set_dirty:
ret = dict_set_static_bin (xattr, AFR_DIRTY, local->dirty,
sizeof(int) * AFR_NUM_CHANGE_LOGS);
if (ret) {
--
1.8.3.1
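
A simplified model of the afr_need_dirty_marking () decision introduced
above, using illustrative fields rather than the real afr structures
(the optimistic-changelog condition from the hunk is folded into
quorum_enforced here):

    #include <stdbool.h>

    struct entry_txn {
            bool quorum_enforced;  /* priv->quorum_count analogue        */
            bool is_entry_txn;     /* data/metadata txns are excluded    */
            int  child_count;
            int  failed_subvols;   /* bricks the fop failed on           */
            bool has_fop_quorum;   /* afr_has_fop_cbk_quorum () analogue */
    };

    /* Mark the parent dirty only when an entry transaction lost quorum
     * but still succeeded on at least one brick; if it failed
     * everywhere, nothing was created and there is nothing for the
     * self-heal daemon to merge. */
    static bool
    need_dirty_marking (const struct entry_txn *t)
    {
            if (!t->quorum_enforced || !t->is_entry_txn)
                    return false;
            if (t->failed_subvols == t->child_count)
                    return false;
            return !t->has_fop_quorum;
    }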


@ -0,0 +1,57 @@
From 2a26ea5a3ee33ba5e6baedca8b18e29277ff385a Mon Sep 17 00:00:00 2001
From: Sunny Kumar <sunkumar@redhat.com>
Date: Tue, 3 Jul 2018 13:58:23 +0530
Subject: [PATCH 324/325] dht: delete tier related internal xattr in
dht_getxattr_cbk
Problem: Hot and cold tier brick changelogs report rsync failures.
Cause: The geo-rep session fails to sync directories from the master
volume to the slave volume because of a lot of changelog retries.
Solution: Ignore the tier-related internal xattrs
trusted.tier.fix.layout.complete and
trusted.tier.tier-dht.commithash in dht_getxattr_cbk.
Upstream Patch: https://review.gluster.org/#/c/20450/ and
https://review.gluster.org/#/c/20520/
>fixes: bz#1597563
>Signed-off-by: Sunny Kumar <sunkumar@redhat.com>
Change-Id: I3530ffe7c4157584b439486f33ecd82ed8d66aee
BUG: 1581047
Signed-off-by: Sunny Kumar <sunkumar@redhat.com>
Reviewed-on: https://code.engineering.redhat.com/gerrit/144024
Tested-by: RHGS Build Bot <nigelb@redhat.com>
Reviewed-by: Nithya Balachandran <nbalacha@redhat.com>
---
xlators/cluster/dht/src/dht-common.c | 14 ++++++++++++++
1 file changed, 14 insertions(+)
diff --git a/xlators/cluster/dht/src/dht-common.c b/xlators/cluster/dht/src/dht-common.c
index 23049b6..2207708 100644
--- a/xlators/cluster/dht/src/dht-common.c
+++ b/xlators/cluster/dht/src/dht-common.c
@@ -4606,6 +4606,20 @@ dht_getxattr_cbk (call_frame_t *frame, void *cookie, xlator_t *this,
dict_del (xattr, conf->mds_xattr_key);
}
+ /* filter out following two xattrs that need not
+ * be visible on the mount point for geo-rep -
+ * trusted.tier.fix.layout.complete and
+ * trusted.tier.tier-dht.commithash
+ */
+
+ if (dict_get (xattr, conf->commithash_xattr_name)) {
+ dict_del (xattr, conf->commithash_xattr_name);
+ }
+
+ if (frame->root->pid >= 0 && dht_is_tier_xlator (this)) {
+ dict_del(xattr, GF_XATTR_TIER_LAYOUT_FIXED_KEY);
+ }
+
if (frame->root->pid >= 0) {
GF_REMOVE_INTERNAL_XATTR
("trusted.glusterfs.quota*", xattr);
--
1.8.3.1
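
The filtering itself is two dictionary deletions; here is a
self-contained sketch with a flat key/value array standing in for the
glusterfs dict (purely illustrative types, not the dict_t API):

    #include <stddef.h>
    #include <string.h>

    struct kv { const char *key; const char *val; };

    static void
    kv_del (struct kv *d, size_t n, const char *key)
    {
            for (size_t i = 0; i < n; i++)
                    if (d[i].key && strcmp (d[i].key, key) == 0)
                            d[i].key = NULL;    /* drop the entry */
    }

    /* Hide the two tier-internal xattrs named in the commit message
     * from regular clients (pid >= 0), so geo-rep's rsync never sees
     * them and stops retrying on them. */
    static void
    filter_tier_xattrs (struct kv *xattr, size_t n, int client_pid)
    {
            kv_del (xattr, n, "trusted.tier.tier-dht.commithash");
            if (client_pid >= 0)
                    kv_del (xattr, n, "trusted.tier.fix.layout.complete");
    }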


@ -0,0 +1,63 @@
From 6869ad72b95983975675a4b920df8fea1edcfca4 Mon Sep 17 00:00:00 2001
From: Hari Gowtham <hgowtham@redhat.com>
Date: Thu, 12 Jul 2018 14:02:03 +0530
Subject: [PATCH 325/325] core: dereference check on the variables in
glusterfs_handle_brick_status
Back-port of: https://review.gluster.org/#/c/20498/
Problem: In a race condition, active->first, which is supposed to be
filled, is NULL, and trying to dereference it crashes.
Backtrace:
Core was generated by `/usr/sbin/glusterfsd -s bxts470192.eu.rabonet.com --volfile-id prod_xvavol.bxts'.
Program terminated with signal 11, Segmentation fault.
1029 any = active->first;
(gdb) bt
>Change-Id: Ia6291865319a9456b8b01a5251be2679c4985b7c
>fixes: bz#1600451
>Signed-off-by: Hari Gowtham <hgowtham@redhat.com>
Change-Id: Ia6291865319a9456b8b01a5251be2679c4985b7c
BUG: 1600057
Signed-off-by: Hari Gowtham <hgowtham@redhat.com>
Reviewed-on: https://code.engineering.redhat.com/gerrit/144258
Tested-by: RHGS Build Bot <nigelb@redhat.com>
Reviewed-by: Sunil Kumar Heggodu Gopala Acharya <sheggodu@redhat.com>
---
glusterfsd/src/glusterfsd-mgmt.c | 17 ++++++++++++++++-
1 file changed, 16 insertions(+), 1 deletion(-)
diff --git a/glusterfsd/src/glusterfsd-mgmt.c b/glusterfsd/src/glusterfsd-mgmt.c
index 2167241..30a717f 100644
--- a/glusterfsd/src/glusterfsd-mgmt.c
+++ b/glusterfsd/src/glusterfsd-mgmt.c
@@ -1150,8 +1150,23 @@ glusterfs_handle_brick_status (rpcsvc_request_t *req)
}
ctx = glusterfsd_ctx;
- GF_ASSERT (ctx);
+ if (ctx == NULL) {
+ gf_log (this->name, GF_LOG_ERROR, "ctx returned NULL");
+ ret = -1;
+ goto out;
+ }
+ if (ctx->active == NULL) {
+ gf_log (this->name, GF_LOG_ERROR, "ctx->active returned NULL");
+ ret = -1;
+ goto out;
+ }
active = ctx->active;
+ if (ctx->active->first == NULL) {
+ gf_log (this->name, GF_LOG_ERROR, "ctx->active->first "
+ "returned NULL");
+ ret = -1;
+ goto out;
+ }
server_xl = active->first;
brick_xl = get_xlator_by_name (server_xl, brickname);
--
1.8.3.1
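
The shape of the defensive checks, as a self-contained sketch (struct
names are illustrative, not the glusterfs ctx types):

    #include <stdio.h>

    struct graph   { void *first; };
    struct context { struct graph *active; };

    /* Validate every link in ctx -> active -> first before
     * dereferencing, instead of a bare assertion, so a race that
     * leaves the graph half-built fails the request rather than
     * crashing the brick process. */
    static int
    get_first_xlator (struct context *ctx, void **out)
    {
            if (ctx == NULL) {
                    fprintf (stderr, "ctx returned NULL\n");
                    return -1;
            }
            if (ctx->active == NULL) {
                    fprintf (stderr, "ctx->active returned NULL\n");
                    return -1;
            }
            if (ctx->active->first == NULL) {
                    fprintf (stderr, "ctx->active->first returned NULL\n");
                    return -1;
            }
            *out = ctx->active->first;
            return 0;
    }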


@ -192,7 +192,7 @@ Release: 0.1%{?prereltag:.%{prereltag}}%{?dist}
%else
Name: glusterfs
Version: 3.12.2
-Release: 13%{?dist}
+Release: 14%{?dist}
%endif
License: GPLv2 or LGPLv3+
Group: System Environment/Base
@ -570,6 +570,26 @@ Patch0302: 0302-storage-posix-Handle-ENOSPC-correctly-in-zero_fill.patch
Patch0303: 0303-block-profile-enable-cluster.eager-lock-in-block-pro.patch
Patch0304: 0304-cluster-dht-Fix-rename-journal-in-changelog.patch
Patch0305: 0305-geo-rep-Fix-geo-rep-for-older-versions-of-unshare.patch
+Patch0306: 0306-glusterfsd-Do-not-process-GLUSTERD_BRICK_XLATOR_OP-i.patch
+Patch0307: 0307-glusterd-Introduce-daemon-log-level-cluster-wide-opt.patch
+Patch0308: 0308-glusterd-Fix-glusterd-crash.patch
+Patch0309: 0309-extras-group-add-database-workload-profile.patch
+Patch0310: 0310-cluster-afr-Make-sure-lk-owner-is-assigned-at-the-ti.patch
+Patch0311: 0311-glusterd-show-brick-online-after-port-registration.patch
+Patch0312: 0312-glusterd-show-brick-online-after-port-registration-e.patch
+Patch0313: 0313-dht-Inconsistent-permission-for-directories-after-br.patch
+Patch0314: 0314-cluster-afr-Prevent-execution-of-code-after-call_cou.patch
+Patch0315: 0315-changelog-fix-br-state-check.t-crash-for-brick_mux.patch
+Patch0316: 0316-snapshot-remove-stale-entry.patch
+Patch0317: 0317-geo-rep-scheduler-Fix-EBUSY-trace-back.patch
+Patch0318: 0318-Quota-Fix-crawling-of-files.patch
+Patch0319: 0319-glusterd-_is_prefix-should-handle-0-length-paths.patch
+Patch0320: 0320-glusterd-log-improvements-on-brick-creation-validati.patch
+Patch0321: 0321-geo-rep-Fix-symlink-rename-syncing-issue.patch
+Patch0322: 0322-geo-rep-Cleanup-stale-unprocessed-xsync-changelogs.patch
+Patch0323: 0323-cluster-afr-Mark-dirty-for-entry-transactions-for-qu.patch
+Patch0324: 0324-dht-delete-tier-related-internal-xattr-in-dht_getxat.patch
+Patch0325: 0325-core-dereference-check-on-the-variables-in-glusterfs.patch
%description
GlusterFS is a distributed file-system capable of scaling to several
@ -1044,6 +1064,7 @@ do
# apply the patch with 'git apply'
git apply -p1 --exclude=rfc.sh \
--exclude=.gitignore \
+--exclude=.testignore \
--exclude=MAINTAINERS \
--exclude=extras/checkpatch.pl \
--exclude=build-aux/checkpatch.pl \
@ -1869,6 +1890,7 @@ exit 0
%attr(0644,-,-) %{_sharedstatedir}/glusterd/groups/virt
%attr(0644,-,-) %{_sharedstatedir}/glusterd/groups/metadata-cache
%attr(0644,-,-) %{_sharedstatedir}/glusterd/groups/gluster-block
+%attr(0644,-,-) %{_sharedstatedir}/glusterd/groups/db-workload
%attr(0644,-,-) %{_sharedstatedir}/glusterd/groups/nl-cache
%dir %attr(0755,-,-) %{_sharedstatedir}/glusterd/glusterfind
%dir %attr(0755,-,-) %{_sharedstatedir}/glusterd/glusterfind/.keys
@ -2516,6 +2538,11 @@ fi
%endif
%changelog
+* Wed Jul 18 2018 Milind Changire <mchangir@redhat.com> - 3.12.2-14
+- fixes bugs bz#1547903 bz#1566336 bz#1568896 bz#1578716 bz#1581047
+bz#1581231 bz#1582066 bz#1593865 bz#1597506 bz#1597511 bz#1597654 bz#1597768
+bz#1598105 bz#1598356 bz#1599037 bz#1599823 bz#1600057 bz#1601314
* Thu Jun 28 2018 Milind Changire <mchangir@redhat.com> - 3.12.2-13
- fixes bugs bz#1493085 bz#1518710 bz#1554255 bz#1558948 bz#1558989
bz#1559452 bz#1567001 bz#1569312 bz#1569951 bz#1575539 bz#1575557 bz#1577051