Compare commits


No commits in common. "c8" and "c9s_new" have entirely different histories.
c8 ... c9s_new

1087 changed files with 106800 additions and 16278 deletions

.gitignore vendored (3 changes)

@@ -1 +1,2 @@
SOURCES/glusterfs-6.0.tar.gz
/glusterfs-3.12.2.tar.gz
/glusterfs-6.0.tar.gz


@@ -1 +0,0 @@
c9d75f37e00502a10f64cd4ba9aafb17552e0800 SOURCES/glusterfs-6.0.tar.gz


@@ -0,0 +1,26 @@
From d4d80332fb3231b1501720d604cf72882c4564ef Mon Sep 17 00:00:00 2001
From: Milind Changire <mchangir@redhat.com>
Date: Thu, 9 Nov 2017 01:46:40 -0500
Subject: [PATCH 01/74] Update rfc.sh to rhgs-3.4.0
Signed-off-by: Milind Changire <mchangir@redhat.com>
---
rfc.sh | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/rfc.sh b/rfc.sh
index 1354715..356242e 100755
--- a/rfc.sh
+++ b/rfc.sh
@@ -17,7 +17,7 @@ done
shift $((OPTIND-1))
-branch="release-3.12";
+branch="rhgs-3.4.0";
set_hooks_commit_msg()
{
--
1.8.3.1


@@ -0,0 +1,26 @@
From d6ae2eb7fa7431db2108173c08b9e4455dd06005 Mon Sep 17 00:00:00 2001
From: Milind Changire <mchangir@redhat.com>
Date: Thu, 21 Mar 2019 12:22:43 +0530
Subject: [PATCH 01/52] Update rfc.sh to rhgs-3.5.0
Signed-off-by: Milind Changire <mchangir@redhat.com>
---
rfc.sh | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/rfc.sh b/rfc.sh
index 764205c..94c92ef 100755
--- a/rfc.sh
+++ b/rfc.sh
@@ -18,7 +18,7 @@ done
shift $((OPTIND-1))
-branch="release-6";
+branch="rhgs-3.5.0";
set_hooks_commit_msg()
{
--
1.8.3.1


@@ -0,0 +1,56 @@
From 8fa58c563cf01934a64773e814f74727ee009b42 Mon Sep 17 00:00:00 2001
From: Joseph Fernandes <josferna@redhat.com>
Date: Wed, 30 Dec 2015 16:53:25 +0530
Subject: [PATCH 03/74] tier/ctr/sql : Default values for sql cache and wal
size
Setting default values for sql cache and wal size
cache : 12500 pages
wal : 25000 pages
1 page = 4096 bytes
Porting this downstream 3.1.2 patch to 3.1.3
Label: DOWNSTREAM ONLY
> Change-Id: Iae3927e021af2e3f7617d45f84e81de3b7d93f1c
> BUG: 1282729
> Signed-off-by: Joseph Fernandes <josferna@redhat.com>
> Reviewed-on: https://code.engineering.redhat.com/gerrit/64642
> Reviewed-by: Dan Lambright <dlambrig@redhat.com>
> Tested-by: Dan Lambright <dlambrig@redhat.com>
Change-Id: Ib3cd951709dff25157371006637b8c0d881f5d61
Signed-off-by: Joseph Fernandes <josferna@redhat.com>
Reviewed-on: https://code.engineering.redhat.com/gerrit/70346
Reviewed-by: Nithya Balachandran <nbalacha@redhat.com>
Tested-by: Atin Mukherjee <amukherj@redhat.com>
---
xlators/mgmt/glusterd/src/glusterd-volume-set.c | 4 ++--
1 file changed, 2 insertions(+), 2 deletions(-)
diff --git a/xlators/mgmt/glusterd/src/glusterd-volume-set.c b/xlators/mgmt/glusterd/src/glusterd-volume-set.c
index 982275e..93ef85c 100644
--- a/xlators/mgmt/glusterd/src/glusterd-volume-set.c
+++ b/xlators/mgmt/glusterd/src/glusterd-volume-set.c
@@ -3152,7 +3152,7 @@ struct volopt_map_entry glusterd_volopt_map[] = {
"changetimerecorder xlator."
"The input to this option is in pages."
"Each page is 4096 bytes. Default value is 12500 "
- "pages."
+ "pages i.e ~ 49 MB. "
"The max value is 262144 pages i.e 1 GB and "
"the min value is 1000 pages i.e ~ 4 MB. "
},
@@ -3166,7 +3166,7 @@ struct volopt_map_entry glusterd_volopt_map[] = {
" changetimerecorder. "
"The input to this option is in pages. "
"Each page is 4096 bytes. Default value is 25000 "
- "pages."
+ "pages i.e ~ 98 MB."
"The max value is 262144 pages i.e 1 GB and "
"the min value is 1000 pages i.e ~4 MB."
},
--
1.8.3.1
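A quick sanity check of the sizes quoted in the option descriptions above; the page size and counts come straight from the patch, and the shell arithmetic below is only an illustrative sketch:

    # 1 page = 4096 bytes
    echo $(( 12500  * 4096 / 1024 / 1024 ))   # default sql cache: prints 48, i.e. ~49 MB as documented
    echo $(( 25000  * 4096 / 1024 / 1024 ))   # default wal size:  prints 97, i.e. ~98 MB as documented
    echo $(( 262144 * 4096 / 1024 / 1024 ))   # maximum:           prints 1024, i.e. 1 GB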


@@ -0,0 +1,51 @@
From b67f788dfe5855c455c8f4b41fe8159a5b41c4bd Mon Sep 17 00:00:00 2001
From: Prasanna Kumar Kalever <prasanna.kalever@redhat.com>
Date: Mon, 21 Mar 2016 13:54:19 +0530
Subject: [PATCH 04/74] rpc: set bind-insecure to off by default
commit 243a5b429f225acb8e7132264fe0a0835ff013d5 turns 'ON'
allow-insecure and bind-insecure by default.
Problem:
Now with newer versions we have bind-insecure 'ON' by default.
So, while upgrading a subset of nodes in a trusted storage pool,
nodes that still run older versions of glusterfs expect
connections from secure ports only (since they still have
bind-insecure off) and therefore reject connections from upgraded
nodes, which now use insecure ports.
Hence we run into connection issues between peers.
Solution:
This patch turns bind-insecure 'OFF' by default to avoid the
problem explained above.
Label: DOWNSTREAM ONLY
Change-Id: Id7a19b4872399d3b019243b0857c9c7af75472f7
Signed-off-by: Prasanna Kumar Kalever <prasanna.kalever@redhat.com>
Reviewed-on: https://code.engineering.redhat.com/gerrit/70313
Reviewed-by: Atin Mukherjee <amukherj@redhat.com>
Tested-by: Atin Mukherjee <amukherj@redhat.com>
---
rpc/rpc-lib/src/rpc-transport.c | 4 ++--
1 file changed, 2 insertions(+), 2 deletions(-)
diff --git a/rpc/rpc-lib/src/rpc-transport.c b/rpc/rpc-lib/src/rpc-transport.c
index fc26f46..94880f4 100644
--- a/rpc/rpc-lib/src/rpc-transport.c
+++ b/rpc/rpc-lib/src/rpc-transport.c
@@ -258,8 +258,8 @@ rpc_transport_load (glusterfs_ctx_t *ctx, dict_t *options, char *trans_name)
else
trans->bind_insecure = 0;
} else {
- /* By default allow bind insecure */
- trans->bind_insecure = 1;
+ /* Turning off bind insecure by default*/
+ trans->bind_insecure = 0;
}
ret = dict_get_str (options, "transport-type", &type);
--
1.8.3.1
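For completeness: on a pool where bind-insecure is forced 'OFF' like this, insecure connections can still be allowed explicitly when needed. The option names below are the ones commonly documented for GlusterFS and should be checked against the installed version; VOLNAME is a placeholder:

    # in /etc/glusterfs/glusterd.vol, accept RPCs from non-privileged ports:
    #     option rpc-auth-allow-insecure on
    gluster volume set VOLNAME server.allow-insecure on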


@@ -0,0 +1,47 @@
From 174ed444ad3b2007ecf55992acc3418455c46893 Mon Sep 17 00:00:00 2001
From: Atin Mukherjee <amukherj@redhat.com>
Date: Mon, 21 Mar 2016 17:07:00 +0530
Subject: [PATCH 05/74] glusterd/spec: fixing autogen issue
Backport of https://code.engineering.redhat.com/gerrit/#/c/59463/
Because of the incorrect build section, autogen.sh wasn't re-run during the rpm
build process. The `extras/Makefile.in` was not regenerated with the changes
made to `extras/Makefile.am` in the firewalld patch. This meant that
`extras/Makefile` was generated without the firewalld changes. So the firewalld
config wasn't installed during `make install`, and rpmbuild later failed because it
could not find `/usr/lib/firewalld/glusterfs.xml`.
Label: DOWNSTREAM ONLY
>Reviewed-on: https://code.engineering.redhat.com/gerrit/59463
Change-Id: I498bcceeacbd839640282eb6467c9f1464505697
Signed-off-by: Atin Mukherjee <amukherj@redhat.com>
Reviewed-on: https://code.engineering.redhat.com/gerrit/70343
Reviewed-by: Milind Changire <mchangir@redhat.com>
---
glusterfs.spec.in | 7 +------
1 file changed, 1 insertion(+), 6 deletions(-)
diff --git a/glusterfs.spec.in b/glusterfs.spec.in
index f68e38f..50db6cb 100644
--- a/glusterfs.spec.in
+++ b/glusterfs.spec.in
@@ -651,12 +651,7 @@ CFLAGS=-DUSE_INSECURE_OPENSSL
export CFLAGS
%endif
-# RHEL6 and earlier need to manually replace config.guess and config.sub
-%if ( 0%{?rhel} && 0%{?rhel} <= 6 )
-./autogen.sh
-%endif
-
-%configure \
+./autogen.sh && %configure \
%{?_with_cmocka} \
%{?_with_debug} \
%{?_with_firewalld} \
--
1.8.3.1
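The net effect of the %build change above is that autogen.sh now runs on every build, immediately before %configure. Outside of rpmbuild, the equivalent manual sequence is roughly this sketch (configure flags omitted):

    ./autogen.sh    # regenerates configure and every Makefile.in, including extras/Makefile.in
    ./configure     # rpmbuild passes its own flags via the %configure macro
    make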


@@ -0,0 +1,36 @@
From 69a19b225dd5bc9fb0279ffd729dc5927548428e Mon Sep 17 00:00:00 2001
From: Atin Mukherjee <amukherj@redhat.com>
Date: Mon, 21 Mar 2016 22:31:02 +0530
Subject: [PATCH 06/74] libglusterfs/glusterd: Fix compilation errors
1. Removed duplicate definition of GD_OP_VER_PERSISTENT_AFR_XATTRS introduced in
d367a88 where GD_OP_VER_PERSISTENT_AFR_XATTRS was redefined
2. Fixed incorrect op-version
Label: DOWNSTREAM ONLY
Change-Id: Icfa3206e8a41a11875641f57523732b80837f8f6
Signed-off-by: Atin Mukherjee <amukherj@redhat.com>
Reviewed-on: https://code.engineering.redhat.com/gerrit/70384
Reviewed-by: Nithya Balachandran <nbalacha@redhat.com>
---
xlators/mgmt/glusterd/src/glusterd-store.c | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/xlators/mgmt/glusterd/src/glusterd-store.c b/xlators/mgmt/glusterd/src/glusterd-store.c
index 229391a..8a662ef 100644
--- a/xlators/mgmt/glusterd/src/glusterd-store.c
+++ b/xlators/mgmt/glusterd/src/glusterd-store.c
@@ -968,7 +968,7 @@ glusterd_volume_exclude_options_write (int fd, glusterd_volinfo_t *volinfo)
goto out;
}
- if (conf->op_version >= GD_OP_VERSION_RHS_3_0) {
+ if (conf->op_version >= GD_OP_VERSION_3_7_0) {
snprintf (buf, sizeof (buf), "%d", volinfo->disperse_count);
ret = gf_store_save_value (fd,
GLUSTERD_STORE_KEY_VOL_DISPERSE_CNT,
--
1.8.3.1


@@ -0,0 +1,65 @@
From 6ed11f5918cf21907df99839c9b76cf1144b2572 Mon Sep 17 00:00:00 2001
From: "Bala.FA" <barumuga@redhat.com>
Date: Mon, 7 Apr 2014 15:24:10 +0530
Subject: [PATCH 07/74] build: remove ghost directory entries
oVirt requires hook directories for gluster management, so these
directories are no longer packaged as ghost entries.
Label: DOWNSTREAM ONLY
Change-Id: Iaf1066ba0655619024f87eaaa039f0010578c567
Signed-off-by: Bala.FA <barumuga@redhat.com>
Reviewed-on: https://code.engineering.redhat.com/gerrit/60133
Tested-by: Milind Changire <mchangir@redhat.com>
---
glusterfs.spec.in | 20 ++++++++++++++++++--
1 file changed, 18 insertions(+), 2 deletions(-)
diff --git a/glusterfs.spec.in b/glusterfs.spec.in
index 50db6cb..3be99b6 100644
--- a/glusterfs.spec.in
+++ b/glusterfs.spec.in
@@ -757,14 +757,29 @@ install -D -p -m 0644 extras/glusterfs-logrotate \
%{buildroot}%{_sysconfdir}/logrotate.d/glusterfs
%if ( 0%{!?_without_georeplication:1} )
-# geo-rep ghosts
mkdir -p %{buildroot}%{_sharedstatedir}/glusterd/geo-replication
touch %{buildroot}%{_sharedstatedir}/glusterd/geo-replication/gsyncd_template.conf
install -D -p -m 0644 extras/glusterfs-georep-logrotate \
%{buildroot}%{_sysconfdir}/logrotate.d/glusterfs-georep
%endif
-# the rest of the ghosts
+%if ( 0%{!?_without_syslog:1} )
+%if ( 0%{?fedora} ) || ( 0%{?rhel} && 0%{?rhel} > 6 )
+install -D -p -m 0644 extras/gluster-rsyslog-7.2.conf \
+ %{buildroot}%{_sysconfdir}/rsyslog.d/gluster.conf.example
+%endif
+
+%if ( 0%{?rhel} && 0%{?rhel} == 6 )
+install -D -p -m 0644 extras/gluster-rsyslog-5.8.conf \
+ %{buildroot}%{_sysconfdir}/rsyslog.d/gluster.conf.example
+%endif
+
+%if ( 0%{?fedora} ) || ( 0%{?rhel} && 0%{?rhel} >= 6 )
+install -D -p -m 0644 extras/logger.conf.example \
+ %{buildroot}%{_sysconfdir}/glusterfs/logger.conf.example
+%endif
+%endif
+
touch %{buildroot}%{_sharedstatedir}/glusterd/glusterd.info
touch %{buildroot}%{_sharedstatedir}/glusterd/options
subdirs=(add-brick create copy-file delete gsync-create remove-brick reset set start stop)
@@ -1262,6 +1277,7 @@ exit 0
%{_sbindir}/gcron.py
%{_sbindir}/conf.py
+<<<<<<< 2944c7b6656a36a79551f9f9f24ab7a10467f13a
# /var/lib/glusterd, e.g. hookscripts, etc.
%ghost %attr(0644,-,-) %config(noreplace) %{_sharedstatedir}/glusterd/glusterd.info
%dir %attr(0755,-,-) %{_sharedstatedir}/glusterd
--
1.8.3.1


@@ -0,0 +1,878 @@
From cac41ae2729cffa23a348c4de14486043ef08163 Mon Sep 17 00:00:00 2001
From: "Bala.FA" <barumuga@redhat.com>
Date: Sat, 11 Nov 2017 10:32:42 +0530
Subject: [PATCH 08/74] build: add RHGS specific changes
Label: DOWNSTREAM ONLY
Bug-Url: https://bugzilla.redhat.com/show_bug.cgi?id=1074947
Bug-Url: https://bugzilla.redhat.com/show_bug.cgi?id=1097782
Bug-Url: https://bugzilla.redhat.com/show_bug.cgi?id=1115267
Bug-Url: https://bugzilla.redhat.com/show_bug.cgi?id=1221743
Change-Id: I08333334745adf2350e772c6454ffcfe9c08cb89
Reviewed-on: https://code.engineering.redhat.com/gerrit/24983
Reviewed-on: https://code.engineering.redhat.com/gerrit/25451
Reviewed-on: https://code.engineering.redhat.com/gerrit/25518
Reviewed-on: https://code.engineering.redhat.com/gerrit/25983
Signed-off-by: Bala.FA <barumuga@redhat.com>
Reviewed-on: https://code.engineering.redhat.com/gerrit/60134
Tested-by: Milind Changire <mchangir@redhat.com>
---
glusterfs.spec.in | 605 +++++++++++++++++++++++++++++++++++++++++++++++++++++-
1 file changed, 597 insertions(+), 8 deletions(-)
diff --git a/glusterfs.spec.in b/glusterfs.spec.in
index 3be99b6..8458e8a 100644
--- a/glusterfs.spec.in
+++ b/glusterfs.spec.in
@@ -80,6 +80,23 @@
%global _without_tiering --disable-tiering
%endif
+# if you wish not to build server rpms, compile like this.
+# rpmbuild -ta @PACKAGE_NAME@-@PACKAGE_VERSION@.tar.gz --without server
+
+%global _build_server 1
+%if "%{?_without_server}"
+%global _build_server 0
+%endif
+
+%if ( "%{?dist}" == ".el6rhs" ) || ( "%{?dist}" == ".el7rhs" ) || ( "%{?dist}" == ".el7rhgs" )
+%global _build_server 1
+%else
+%global _build_server 0
+%endif
+
+%global _without_extra_xlators 1
+%global _without_regression_tests 1
+
##-----------------------------------------------------------------------------
## All %%global definitions should be placed here and keep them sorted
##
@@ -178,7 +195,8 @@ Release: 0.1%{?prereltag:.%{prereltag}}%{?dist}
%else
Name: @PACKAGE_NAME@
Version: @PACKAGE_VERSION@
-Release: 0.@PACKAGE_RELEASE@%{?dist}
+Release: @PACKAGE_RELEASE@%{?dist}
+ExclusiveArch: x86_64 aarch64
%endif
License: GPLv2 or LGPLv3+
Group: System Environment/Base
@@ -320,7 +338,9 @@ Summary: Development Libraries
Group: Development/Libraries
Requires: %{name}%{?_isa} = %{version}-%{release}
# Needed for the Glupy examples to work
-Requires: %{name}-extra-xlators%{?_isa} = %{version}-%{release}
+%if ( 0%{!?_without_extra_xlators:1} )
+Requires: %{name}-extra-xlators = %{version}-%{release}
+%endif
%description devel
GlusterFS is a distributed file-system capable of scaling to several
@@ -333,6 +353,7 @@ is in user space and easily manageable.
This package provides the development libraries and include files.
+%if ( 0%{!?_without_extra_xlators:1} )
%package extra-xlators
Summary: Extra Gluster filesystem Translators
Group: Applications/File
@@ -355,6 +376,7 @@ is in user space and easily manageable.
This package provides extra filesystem Translators, such as Glupy,
for GlusterFS.
+%endif
%package fuse
Summary: Fuse client
@@ -381,6 +403,31 @@ is in user space and easily manageable.
This package provides support to FUSE based clients and inlcudes the
glusterfs(d) binary.
+%if ( 0%{?_build_server} )
+%package ganesha
+Summary: NFS-Ganesha configuration
+Group: Applications/File
+
+Requires: %{name}-server%{?_isa} = %{version}-%{release}
+Requires: nfs-ganesha-gluster, pcs, dbus
+%if ( 0%{?rhel} && 0%{?rhel} == 6 )
+Requires: cman, pacemaker, corosync
+%endif
+
+%description ganesha
+GlusterFS is a distributed file-system capable of scaling to several
+petabytes. It aggregates various storage bricks over Infiniband RDMA
+or TCP/IP interconnect into one large parallel network file
+system. GlusterFS is one of the most sophisticated file systems in
+terms of features and extensibility. It borrows a powerful concept
+called Translators from GNU Hurd kernel. Much of the code in GlusterFS
+is in user space and easily manageable.
+
+This package provides the configuration and related files for using
+NFS-Ganesha as the NFS server using GlusterFS
+%endif
+
+%if ( 0%{?_build_server} )
%if ( 0%{!?_without_georeplication:1} )
%package geo-replication
Summary: GlusterFS Geo-replication
@@ -406,6 +453,7 @@ is in userspace and easily manageable.
This package provides support to geo-replication.
%endif
+%endif
%if ( 0%{?_with_gnfs:1} )
%package gnfs
@@ -498,6 +546,8 @@ is in user space and easily manageable.
This package provides support to ib-verbs library.
%endif
+%if ( 0%{?_build_server} )
+%if ( 0%{!?_without_regression_tests:1} )
%package regression-tests
Summary: Development Tools
Group: Development/Tools
@@ -513,7 +563,10 @@ Requires: nfs-utils xfsprogs yajl psmisc bc
%description regression-tests
The Gluster Test Framework, is a suite of scripts used for
regression testing of Gluster.
+%endif
+%endif
+%if ( 0%{?_build_server} )
%if ( 0%{!?_without_ocf:1} )
%package resource-agents
Summary: OCF Resource Agents for GlusterFS
@@ -546,7 +599,9 @@ This package provides the resource agents which plug glusterd into
Open Cluster Framework (OCF) compliant cluster resource managers,
like Pacemaker.
%endif
+%endif
+%if ( 0%{?_build_server} )
%package server
Summary: Clustered file-system server
Group: System Environment/Daemons
@@ -602,6 +657,7 @@ called Translators from GNU Hurd kernel. Much of the code in GlusterFS
is in user space and easily manageable.
This package provides the glusterfs server daemon.
+%endif
%package client-xlators
Summary: GlusterFS client-side translators
@@ -618,6 +674,7 @@ is in user space and easily manageable.
This package provides the translators needed on any GlusterFS client.
+%if ( 0%{?_build_server} )
%if ( 0%{!?_without_events:1} )
%package events
Summary: GlusterFS Events
@@ -641,6 +698,7 @@ Requires: python-argparse
GlusterFS Events
%endif
+%endif
%prep
%setup -q -n %{name}-%{version}%{?prereltag}
@@ -822,10 +880,12 @@ exit 0
%post api
/sbin/ldconfig
+%if ( 0%{?_build_server} )
%if ( 0%{!?_without_events:1} )
%post events
%_init_restart glustereventsd
%endif
+%endif
%if ( 0%{?rhel} == 5 )
%post fuse
@@ -833,6 +893,7 @@ modprobe fuse
exit 0
%endif
+%if ( 0%{?_build_server} )
%if ( 0%{!?_without_georeplication:1} )
%post geo-replication
if [ $1 -ge 1 ]; then
@@ -840,10 +901,12 @@ if [ $1 -ge 1 ]; then
fi
exit 0
%endif
+%endif
%post libs
/sbin/ldconfig
+%if ( 0%{?_build_server} )
%post server
# Legacy server
%_init_enable glusterd
@@ -914,7 +977,7 @@ else
#rpm_script_t context.
rm -f %{_rundir}/glusterd.socket
fi
-exit 0
+%endif
##-----------------------------------------------------------------------------
## All %%pre should be placed here and keep them sorted
@@ -928,6 +991,7 @@ exit 0
##-----------------------------------------------------------------------------
## All %%preun should be placed here and keep them sorted
##
+%if ( 0%{?_build_server} )
%if ( 0%{!?_without_events:1} )
%preun events
if [ $1 -eq 0 ]; then
@@ -956,7 +1020,7 @@ if [ $1 -ge 1 ]; then
fi
%_init_restart glusterd
fi
-exit 0
+%endif
##-----------------------------------------------------------------------------
## All %%postun should be placed here and keep them sorted
@@ -986,6 +1050,73 @@ exit 0
## All %%files should be placed here and keep them grouped
##
%files
+# exclude extra-xlators files
+%if ( ! 0%{!?_without_extra_xlators:1} )
+%exclude %{_libdir}/glusterfs/%{version}%{?prereltag}/xlator/encryption/rot-13.so
+%exclude %{_libdir}/glusterfs/%{version}%{?prereltag}/xlator/features/glupy.so
+%exclude %{_libdir}/glusterfs/%{version}%{?prereltag}/xlator/features/quiesce.so
+%exclude %{_libdir}/glusterfs/%{version}%{?prereltag}/xlator/features/selinux.so
+%exclude %{_libdir}/glusterfs/%{version}%{?prereltag}/xlator/testing/features/template.so
+%exclude %{_libdir}/glusterfs/%{version}%{?prereltag}/xlator/testing/performance/symlink-cache.so
+%exclude %{python_sitelib}/*
+%endif
+# exclude regression-tests files
+%if ( ! 0%{!?_without_regression_tests:1} )
+%exclude %{_prefix}/share/glusterfs/run-tests.sh
+%exclude %{_prefix}/share/glusterfs/tests/*
+%endif
+%if ( ! 0%{?_build_server} )
+# exclude ganesha files
+%exclude %{_prefix}/lib/ocf/*
+# exclude geo-replication files
+%exclude %{_sysconfdir}/logrotate.d/glusterfs-georep
+%exclude %{_libexecdir}/glusterfs/*
+%exclude %{_sbindir}/gfind_missing_files
+%exclude %{_datadir}/glusterfs/scripts/get-gfid.sh
+%exclude %{_datadir}/glusterfs/scripts/slave-upgrade.sh
+%exclude %{_datadir}/glusterfs/scripts/gsync-upgrade.sh
+%exclude %{_datadir}/glusterfs/scripts/generate-gfid-file.sh
+%exclude %{_datadir}/glusterfs/scripts/gsync-sync-gfid
+%exclude %{_sharedstatedir}/glusterd/*
+# exclude server files
+%exclude %{_sysconfdir}/glusterfs
+%exclude %{_sysconfdir}/glusterfs/glusterd.vol
+%exclude %{_sysconfdir}/glusterfs/glusterfs-georep-logrotate
+%exclude %{_sysconfdir}/glusterfs/glusterfs-logrotate
+%exclude %{_sysconfdir}/glusterfs/gluster-rsyslog-5.8.conf
+%exclude %{_sysconfdir}/glusterfs/gluster-rsyslog-7.2.conf
+%exclude %{_sysconfdir}/glusterfs/group-virt.example
+%exclude %{_sysconfdir}/glusterfs/logger.conf.example
+%exclude %_init_glusterd
+%exclude %{_sysconfdir}/sysconfig/glusterd
+%exclude %{_bindir}/glusterfind
+%exclude %{_libdir}/glusterfs/%{version}%{?prereltag}/xlator/features/arbiter.so
+%exclude %{_libdir}/glusterfs/%{version}%{?prereltag}/xlator/features/bit-rot.so
+%exclude %{_libdir}/glusterfs/%{version}%{?prereltag}/xlator/features/bitrot-stub.so
+%exclude %{_libdir}/glusterfs/%{version}%{?prereltag}/xlator/features/changetimerecorder.so
+%exclude %{_libdir}/glusterfs/%{version}%{?prereltag}/xlator/features/index.so
+%exclude %{_libdir}/glusterfs/%{version}%{?prereltag}/xlator/features/leases.so
+%exclude %{_libdir}/glusterfs/%{version}%{?prereltag}/xlator/features/locks.so
+%exclude %{_libdir}/glusterfs/%{version}%{?prereltag}/xlator/features/posix*
+%exclude %{_libdir}/glusterfs/%{version}%{?prereltag}/xlator/features/snapview-server.so
+%exclude %{_libdir}/glusterfs/%{version}%{?prereltag}/xlator/features/marker.so
+%exclude %{_libdir}/glusterfs/%{version}%{?prereltag}/xlator/features/quota*
+%exclude %{_libdir}/glusterfs/%{version}%{?prereltag}/xlator/features/trash.so
+%exclude %{_libdir}/glusterfs/%{version}%{?prereltag}/xlator/features/upcall.so
+%exclude %{_libdir}/glusterfs/%{version}%{?prereltag}/xlator/mgmt*
+%exclude %{_libdir}/glusterfs/%{version}%{?prereltag}/xlator/performance/decompounder.so
+%exclude %{_libdir}/glusterfs/%{version}%{?prereltag}/xlator/protocol/server*
+%exclude %{_libdir}/glusterfs/%{version}%{?prereltag}/xlator/storage*
+%exclude %{_libdir}/libgfdb.so.*
+%exclude %{_sbindir}/gcron.py
+%exclude %{_sbindir}/glfsheal
+%exclude %{_sbindir}/glusterd
+%exclude %{_sbindir}/snap_scheduler.py
+%exclude %{_datadir}/glusterfs/scripts/stop-all-gluster-processes.sh
+%if 0%{?_tmpfilesdir:1}
+%exclude %{_tmpfilesdir}/gluster.conf
+%endif
+%endif
%doc ChangeLog COPYING-GPLV2 COPYING-LGPLV3 INSTALL README.md THANKS
%{_mandir}/man8/*gluster*.8*
%exclude %{_mandir}/man8/gluster.8*
@@ -1044,6 +1175,11 @@ exit 0
%if 0%{?_tmpfilesdir:1}
%{_tmpfilesdir}/gluster.conf
%endif
+%if ( ! 0%{?_build_server} )
+%{_libdir}/pkgconfig/libgfchangelog.pc
+%{_libdir}/pkgconfig/libgfdb.pc
+%{_sbindir}/gluster-setgfid2path
+%endif
%files api
%exclude %{_libdir}/*.so
@@ -1078,9 +1214,11 @@ exit 0
%{_libdir}/glusterfs/%{version}%{?prereltag}/xlator/features/glupy/debug-trace.*
%{_libdir}/glusterfs/%{version}%{?prereltag}/xlator/features/glupy/helloworld.*
%{_libdir}/glusterfs/%{version}%{?prereltag}/xlator/features/glupy/negative.*
-%{_libdir}/pkgconfig/libgfchangelog.pc
-%if ( 0%{!?_without_tiering:1} )
-%{_libdir}/pkgconfig/libgfdb.pc
+%if ( 0%{?_build_server} )
+%exclude %{_libdir}/pkgconfig/libgfchangelog.pc
+%exclude %{_libdir}/pkgconfig/libgfdb.pc
+%exclude %{_sbindir}/gluster-setgfid2path
+%exclude %{_mandir}/man8/gluster-setgfid2path.8*
%endif
%files client-xlators
@@ -1090,6 +1228,7 @@ exit 0
%dir %{_libdir}/glusterfs/%{version}%{?prereltag}/xlator/protocol
%{_libdir}/glusterfs/%{version}%{?prereltag}/xlator/protocol/client.so
++%if ( 0%{!?_without_extra_xlators:1} )
%files extra-xlators
%dir %{_libdir}/glusterfs/%{version}%{?prereltag}/xlator
%dir %{_libdir}/glusterfs/%{version}%{?prereltag}/xlator/encryption
@@ -1106,6 +1245,11 @@ exit 0
%dir %{python2_sitelib}/gluster
%dir %{python2_sitelib}/gluster/glupy
%{python2_sitelib}/gluster/glupy/*
+# Don't expect a .egg-info file on EL5
+%if ( ! ( 0%{?rhel} && 0%{?rhel} < 6 ) )
+%{python_sitelib}/glusterfs_glupy*.egg-info
+%endif
+%endif
%files fuse
# glusterfs is a symlink to glusterfsd, -server depends on -fuse.
@@ -1125,6 +1269,7 @@ exit 0
%endif
%endif
+%if ( 0%{?_build_server} )
%if ( 0%{?_with_gnfs:1} )
%files gnfs
%dir %{_libdir}/glusterfs/%{version}%{?prereltag}/xlator
@@ -1135,7 +1280,13 @@ exit 0
%ghost %dir %attr(0755,-,-) %{_sharedstatedir}/glusterd/nfs/run
%ghost %attr(0600,-,-) %{_sharedstatedir}/glusterd/nfs/run/nfs.pid
%endif
+%endif
+
+%if ( 0%{?_build_server} )
+%files ganesha
+%endif
+%if ( 0%{?_build_server} )
%if ( 0%{!?_without_georeplication:1} )
%files geo-replication
%config(noreplace) %{_sysconfdir}/logrotate.d/glusterfs-georep
@@ -1172,6 +1323,7 @@ exit 0
%{_datadir}/glusterfs/scripts/gsync-sync-gfid
%{_datadir}/glusterfs/scripts/schedule_georep.py*
%endif
+%endif
%files libs
%{_libdir}/*.so.*
@@ -1194,19 +1346,26 @@ exit 0
%{_libdir}/glusterfs/%{version}%{?prereltag}/rpc-transport/rdma*
%endif
+%if ( 0%{?_build_server} )
%files regression-tests
%dir %{_datadir}/glusterfs
%{_datadir}/glusterfs/run-tests.sh
%{_datadir}/glusterfs/tests
%exclude %{_datadir}/glusterfs/tests/vagrant
+%exclude %{_datadir}/share/glusterfs/tests/basic/rpm.t
+%endif
+%if ( 0%{?_build_server} )
%if ( 0%{!?_without_ocf:1} )
%files resource-agents
# /usr/lib is the standard for OCF, also on x86_64
%{_prefix}/lib/ocf/resource.d/glusterfs
%endif
+%endif
+%if ( 0%{?_build_server} )
%files server
+%exclude %{_sharedstatedir}/glusterd/hooks/1/gsync-create/post/S56glusterd-geo-rep-create-post.sh
%doc extras/clear_xattrs.sh
# sysconf
%config(noreplace) %{_sysconfdir}/glusterfs
@@ -1277,7 +1436,6 @@ exit 0
%{_sbindir}/gcron.py
%{_sbindir}/conf.py
-<<<<<<< 2944c7b6656a36a79551f9f9f24ab7a10467f13a
# /var/lib/glusterd, e.g. hookscripts, etc.
%ghost %attr(0644,-,-) %config(noreplace) %{_sharedstatedir}/glusterd/glusterd.info
%dir %attr(0755,-,-) %{_sharedstatedir}/glusterd
@@ -1354,8 +1512,438 @@ exit 0
%if ( 0%{?_with_firewalld:1} )
%{_prefix}/lib/firewalld/services/glusterfs.xml
%endif
+%endif
+
+
+##-----------------------------------------------------------------------------
+## All %pretrans should be placed here and keep them sorted
+##
+%if 0%{?_build_server}
+%pretrans -p <lua>
+if not posix.access("/bin/bash", "x") then
+ -- initial installation, no shell, no running glusterfsd
+ return 0
+end
+
+-- TODO: move this completely to a lua script
+-- For now, we write a temporary bash script and execute that.
+
+script = [[#!/bin/sh
+pidof -c -o %PPID -x glusterfsd &>/dev/null
+
+if [ $? -eq 0 ]; then
+ pushd . > /dev/null 2>&1
+ for volume in /var/lib/glusterd/vols/*; do cd $volume;
+ vol_type=`grep '^type=' info | awk -F'=' '{print $2}'`
+ volume_started=`grep '^status=' info | awk -F'=' '{print $2}'`
+ if [ $vol_type -eq 0 ] && [ $volume_started -eq 1 ] ; then
+ echo "ERROR: Distribute volumes detected. In-service rolling upgrade requires distribute volume(s) to be stopped."
+ echo "ERROR: Please stop distribute volume(s) before proceeding... exiting!"
+ exit 1;
+ fi
+ done
+
+ popd > /dev/null 2>&1
+ echo "WARNING: Updating glusterfs requires its processes to be killed. This action does NOT incur downtime."
+ echo "WARNING: Ensure to wait for the upgraded server to finish healing before proceeding."
+ echo "WARNING: Refer upgrade section of install guide for more details"
+ echo "Please run # service glusterd stop; pkill glusterfs; pkill glusterfsd; pkill gsyncd.py;"
+ exit 1;
+fi
+]]
+
+-- rpm in RHEL5 does not have os.tmpname()
+-- io.tmpfile() can not be resolved to a filename to pass to bash :-/
+tmpname = "/tmp/glusterfs_pretrans_" .. os.date("%s")
+tmpfile = io.open(tmpname, "w")
+tmpfile:write(script)
+tmpfile:close()
+ok, how, val = os.execute("/bin/bash " .. tmpname)
+os.remove(tmpname)
+if not (ok == 0) then
+ error("Detected running glusterfs processes", ok)
+end
+
+
+
+%pretrans api -p <lua>
+if not posix.access("/bin/bash", "x") then
+ -- initial installation, no shell, no running glusterfsd
+ return 0
+end
+
+-- TODO: move this completely to a lua script
+-- For now, we write a temporary bash script and execute that.
+
+script = [[#!/bin/sh
+pidof -c -o %PPID -x glusterfsd &>/dev/null
+
+if [ $? -eq 0 ]; then
+ pushd . > /dev/null 2>&1
+ for volume in /var/lib/glusterd/vols/*; do cd $volume;
+ vol_type=`grep '^type=' info | awk -F'=' '{print $2}'`
+ volume_started=`grep '^status=' info | awk -F'=' '{print $2}'`
+ if [ $vol_type -eq 0 ] && [ $volume_started -eq 1 ] ; then
+ exit 1;
+ fi
+ done
+
+ popd > /dev/null 2>&1
+ exit 1;
+fi
+]]
+
+-- rpm in RHEL5 does not have os.tmpname()
+-- io.tmpfile() can not be resolved to a filename to pass to bash :-/
+tmpname = "/tmp/glusterfs-api_pretrans_" .. os.date("%s")
+tmpfile = io.open(tmpname, "w")
+tmpfile:write(script)
+tmpfile:close()
+ok, how, val = os.execute("/bin/bash " .. tmpname)
+os.remove(tmpname)
+if not (ok == 0) then
+ error("Detected running glusterfs processes", ok)
+end
+
+
+
+%pretrans api-devel -p <lua>
+if not posix.access("/bin/bash", "x") then
+ -- initial installation, no shell, no running glusterfsd
+ return 0
+end
+
+-- TODO: move this completely to a lua script
+-- For now, we write a temporary bash script and execute that.
+
+script = [[#!/bin/sh
+pidof -c -o %PPID -x glusterfsd &>/dev/null
+
+if [ $? -eq 0 ]; then
+ pushd . > /dev/null 2>&1
+ for volume in /var/lib/glusterd/vols/*; do cd $volume;
+ vol_type=`grep '^type=' info | awk -F'=' '{print $2}'`
+ volume_started=`grep '^status=' info | awk -F'=' '{print $2}'`
+ if [ $vol_type -eq 0 ] && [ $volume_started -eq 1 ] ; then
+ exit 1;
+ fi
+ done
+
+ popd > /dev/null 2>&1
+ exit 1;
+fi
+]]
+
+-- rpm in RHEL5 does not have os.tmpname()
+-- io.tmpfile() can not be resolved to a filename to pass to bash :-/
+tmpname = "/tmp/glusterfs-api-devel_pretrans_" .. os.date("%s")
+tmpfile = io.open(tmpname, "w")
+tmpfile:write(script)
+tmpfile:close()
+ok, how, val = os.execute("/bin/bash " .. tmpname)
+os.remove(tmpname)
+if not (ok == 0) then
+ error("Detected running glusterfs processes", ok)
+end
+
+
+
+%pretrans devel -p <lua>
+if not posix.access("/bin/bash", "x") then
+ -- initial installation, no shell, no running glusterfsd
+ return 0
+end
+
+-- TODO: move this completely to a lua script
+-- For now, we write a temporary bash script and execute that.
+
+script = [[#!/bin/sh
+pidof -c -o %PPID -x glusterfsd &>/dev/null
+
+if [ $? -eq 0 ]; then
+ pushd . > /dev/null 2>&1
+ for volume in /var/lib/glusterd/vols/*; do cd $volume;
+ vol_type=`grep '^type=' info | awk -F'=' '{print $2}'`
+ volume_started=`grep '^status=' info | awk -F'=' '{print $2}'`
+ if [ $vol_type -eq 0 ] && [ $volume_started -eq 1 ] ; then
+ exit 1;
+ fi
+ done
+
+ popd > /dev/null 2>&1
+ exit 1;
+fi
+]]
+
+-- rpm in RHEL5 does not have os.tmpname()
+-- io.tmpfile() can not be resolved to a filename to pass to bash :-/
+tmpname = "/tmp/glusterfs-devel_pretrans_" .. os.date("%s")
+tmpfile = io.open(tmpname, "w")
+tmpfile:write(script)
+tmpfile:close()
+ok, how, val = os.execute("/bin/bash " .. tmpname)
+os.remove(tmpname)
+if not (ok == 0) then
+ error("Detected running glusterfs processes", ok)
+end
+
+
+
+%pretrans fuse -p <lua>
+if not posix.access("/bin/bash", "x") then
+ -- initial installation, no shell, no running glusterfsd
+ return 0
+end
+
+-- TODO: move this completely to a lua script
+-- For now, we write a temporary bash script and execute that.
+
+script = [[#!/bin/sh
+pidof -c -o %PPID -x glusterfsd &>/dev/null
+
+if [ $? -eq 0 ]; then
+ pushd . > /dev/null 2>&1
+ for volume in /var/lib/glusterd/vols/*; do cd $volume;
+ vol_type=`grep '^type=' info | awk -F'=' '{print $2}'`
+ volume_started=`grep '^status=' info | awk -F'=' '{print $2}'`
+ if [ $vol_type -eq 0 ] && [ $volume_started -eq 1 ] ; then
+ exit 1;
+ fi
+ done
+
+ popd > /dev/null 2>&1
+ exit 1;
+fi
+]]
+
+-- rpm in RHEL5 does not have os.tmpname()
+-- io.tmpfile() can not be resolved to a filename to pass to bash :-/
+tmpname = "/tmp/glusterfs-fuse_pretrans_" .. os.date("%s")
+tmpfile = io.open(tmpname, "w")
+tmpfile:write(script)
+tmpfile:close()
+ok, how, val = os.execute("/bin/bash " .. tmpname)
+os.remove(tmpname)
+if not (ok == 0) then
+ error("Detected running glusterfs processes", ok)
+end
+
+
+
+%if 0%{?_can_georeplicate}
+%if ( 0%{!?_without_georeplication:1} )
+%pretrans geo-replication -p <lua>
+if not posix.access("/bin/bash", "x") then
+ -- initial installation, no shell, no running glusterfsd
+ return 0
+end
+
+-- TODO: move this completely to a lua script
+-- For now, we write a temporary bash script and execute that.
+
+script = [[#!/bin/sh
+pidof -c -o %PPID -x glusterfsd &>/dev/null
+
+if [ $? -eq 0 ]; then
+ pushd . > /dev/null 2>&1
+ for volume in /var/lib/glusterd/vols/*; do cd $volume;
+ vol_type=`grep '^type=' info | awk -F'=' '{print $2}'`
+ volume_started=`grep '^status=' info | awk -F'=' '{print $2}'`
+ if [ $vol_type -eq 0 ] && [ $volume_started -eq 1 ] ; then
+ exit 1;
+ fi
+ done
+
+ popd > /dev/null 2>&1
+ exit 1;
+fi
+]]
+
+-- rpm in RHEL5 does not have os.tmpname()
+-- io.tmpfile() can not be resolved to a filename to pass to bash :-/
+tmpname = "/tmp/glusterfs-geo-replication_pretrans_" .. os.date("%s")
+tmpfile = io.open(tmpname, "w")
+tmpfile:write(script)
+tmpfile:close()
+ok, how, val = os.execute("/bin/bash " .. tmpname)
+os.remove(tmpname)
+if not (ok == 0) then
+ error("Detected running glusterfs processes", ok)
+end
+%endif
+%endif
+
+
+
+%pretrans libs -p <lua>
+if not posix.access("/bin/bash", "x") then
+ -- initial installation, no shell, no running glusterfsd
+ return 0
+end
+
+-- TODO: move this completely to a lua script
+-- For now, we write a temporary bash script and execute that.
+
+script = [[#!/bin/sh
+pidof -c -o %PPID -x glusterfsd &>/dev/null
+
+if [ $? -eq 0 ]; then
+ pushd . > /dev/null 2>&1
+ for volume in /var/lib/glusterd/vols/*; do cd $volume;
+ vol_type=`grep '^type=' info | awk -F'=' '{print $2}'`
+ volume_started=`grep '^status=' info | awk -F'=' '{print $2}'`
+ if [ $vol_type -eq 0 ] && [ $volume_started -eq 1 ] ; then
+ exit 1;
+ fi
+ done
+
+ popd > /dev/null 2>&1
+ exit 1;
+fi
+]]
+
+-- rpm in RHEL5 does not have os.tmpname()
+-- io.tmpfile() can not be resolved to a filename to pass to bash :-/
+tmpname = "/tmp/glusterfs-libs_pretrans_" .. os.date("%s")
+tmpfile = io.open(tmpname, "w")
+tmpfile:write(script)
+tmpfile:close()
+ok, how, val = os.execute("/bin/bash " .. tmpname)
+os.remove(tmpname)
+if not (ok == 0) then
+ error("Detected running glusterfs processes", ok)
+end
+
+
+
+%if ( 0%{!?_without_rdma:1} )
+%pretrans rdma -p <lua>
+if not posix.access("/bin/bash", "x") then
+ -- initial installation, no shell, no running glusterfsd
+ return 0
+end
+
+-- TODO: move this completely to a lua script
+-- For now, we write a temporary bash script and execute that.
+
+script = [[#!/bin/sh
+pidof -c -o %PPID -x glusterfsd &>/dev/null
+
+if [ $? -eq 0 ]; then
+ pushd . > /dev/null 2>&1
+ for volume in /var/lib/glusterd/vols/*; do cd $volume;
+ vol_type=`grep '^type=' info | awk -F'=' '{print $2}'`
+ volume_started=`grep '^status=' info | awk -F'=' '{print $2}'`
+ if [ $vol_type -eq 0 ] && [ $volume_started -eq 1 ] ; then
+ exit 1;
+ fi
+ done
+
+ popd > /dev/null 2>&1
+ exit 1;
+fi
+]]
+
+-- rpm in RHEL5 does not have os.tmpname()
+-- io.tmpfile() can not be resolved to a filename to pass to bash :-/
+tmpname = "/tmp/glusterfs-rdma_pretrans_" .. os.date("%s")
+tmpfile = io.open(tmpname, "w")
+tmpfile:write(script)
+tmpfile:close()
+ok, how, val = os.execute("/bin/bash " .. tmpname)
+os.remove(tmpname)
+if not (ok == 0) then
+ error("Detected running glusterfs processes", ok)
+end
+%endif
+
+
+
+%if ( 0%{!?_without_ocf:1} )
+%pretrans resource-agents -p <lua>
+if not posix.access("/bin/bash", "x") then
+ -- initial installation, no shell, no running glusterfsd
+ return 0
+end
+
+-- TODO: move this completely to a lua script
+-- For now, we write a temporary bash script and execute that.
+
+script = [[#!/bin/sh
+pidof -c -o %PPID -x glusterfsd &>/dev/null
+
+if [ $? -eq 0 ]; then
+ pushd . > /dev/null 2>&1
+ for volume in /var/lib/glusterd/vols/*; do cd $volume;
+ vol_type=`grep '^type=' info | awk -F'=' '{print $2}'`
+ volume_started=`grep '^status=' info | awk -F'=' '{print $2}'`
+ if [ $vol_type -eq 0 ] && [ $volume_started -eq 1 ] ; then
+ exit 1;
+ fi
+ done
+
+ popd > /dev/null 2>&1
+ exit 1;
+fi
+]]
+
+-- rpm in RHEL5 does not have os.tmpname()
+-- io.tmpfile() can not be resolved to a filename to pass to bash :-/
+tmpname = "/tmp/glusterfs-resource-agents_pretrans_" .. os.date("%s")
+tmpfile = io.open(tmpname, "w")
+tmpfile:write(script)
+tmpfile:close()
+ok, how, val = os.execute("/bin/bash " .. tmpname)
+os.remove(tmpname)
+if not (ok == 0) then
+ error("Detected running glusterfs processes", ok)
+end
+%endif
+
+
+
+%pretrans server -p <lua>
+if not posix.access("/bin/bash", "x") then
+ -- initial installation, no shell, no running glusterfsd
+ return 0
+end
+
+-- TODO: move this completely to a lua script
+-- For now, we write a temporary bash script and execute that.
+
+script = [[#!/bin/sh
+pidof -c -o %PPID -x glusterfsd &>/dev/null
+
+if [ $? -eq 0 ]; then
+ pushd . > /dev/null 2>&1
+ for volume in /var/lib/glusterd/vols/*; do cd $volume;
+ vol_type=`grep '^type=' info | awk -F'=' '{print $2}'`
+ volume_started=`grep '^status=' info | awk -F'=' '{print $2}'`
+ if [ $vol_type -eq 0 ] && [ $volume_started -eq 1 ] ; then
+ exit 1;
+ fi
+ done
+
+ popd > /dev/null 2>&1
+ exit 1;
+fi
+]]
+
+-- rpm in RHEL5 does not have os.tmpname()
+-- io.tmpfile() can not be resolved to a filename to pass to bash :-/
+tmpname = "/tmp/glusterfs-server_pretrans_" .. os.date("%s")
+tmpfile = io.open(tmpname, "w")
+tmpfile:write(script)
+tmpfile:close()
+ok, how, val = os.execute("/bin/bash " .. tmpname)
+os.remove(tmpname)
+if not (ok == 0) then
+ error("Detected running glusterfs processes", ok)
+end
+%endif
# Events
+%if ( 0%{?_build_server} )
%if ( 0%{!?_without_events:1} )
%files events
%config(noreplace) %{_sysconfdir}/glusterfs/eventsconfig.json
@@ -1373,6 +1961,7 @@ exit 0
%{_sysconfdir}/init.d/glustereventsd
%endif
%endif
+%endif
%changelog
* Tue Aug 22 2017 Kaleb S. KEITHLEY <kkeithle@redhat.com>
--
1.8.3.1
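The comment added near the top of the spec documents how a build without the server sub-packages is meant to be requested; restated as a usage sketch (the tarball name is taken from the .gitignore hunk at the top of this compare):

    # default build; on the RHGS dist tags the server sub-packages are produced
    rpmbuild -ta glusterfs-3.12.2.tar.gz
    # per the comment in the spec, a client-only build is requested with:
    rpmbuild -ta glusterfs-3.12.2.tar.gz --without server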


@@ -0,0 +1,35 @@
From bfa0315b0437602ff1e568fb16c43d9937703eb4 Mon Sep 17 00:00:00 2001
From: "Bala.FA" <barumuga@redhat.com>
Date: Thu, 22 May 2014 08:37:27 +0530
Subject: [PATCH 09/74] secalert: remove setuid bit for fusermount-glusterfs
glusterfs-fuse: File /usr/bin/fusermount-glusterfs on x86_64 is setuid
root but is not on the setxid whitelist
Label: DOWNSTREAM ONLY
Bug-Url: https://bugzilla.redhat.com/show_bug.cgi?id=989480
Change-Id: Icf6e5db72ae15ccc60b02be6713fb6c4f4c8a15f
Signed-off-by: Bala.FA <barumuga@redhat.com>
Reviewed-on: https://code.engineering.redhat.com/gerrit/25453
Signed-off-by: Bala.FA <barumuga@redhat.com>
Reviewed-on: https://code.engineering.redhat.com/gerrit/60135
Tested-by: Milind Changire <mchangir@redhat.com>
---
contrib/fuse-util/Makefile.am | 1 -
1 file changed, 1 deletion(-)
diff --git a/contrib/fuse-util/Makefile.am b/contrib/fuse-util/Makefile.am
index abbc10e..a071c81 100644
--- a/contrib/fuse-util/Makefile.am
+++ b/contrib/fuse-util/Makefile.am
@@ -9,6 +9,5 @@ AM_CFLAGS = -Wall $(GF_CFLAGS)
install-exec-hook:
-chown root $(DESTDIR)$(bindir)/fusermount-glusterfs
- chmod u+s $(DESTDIR)$(bindir)/fusermount-glusterfs
CLEANFILES =
--
1.8.3.1
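With the chmod u+s line dropped from the install hook, the helper should end up root-owned but without the setuid bit. A quick post-install check, using the path from the alert text above:

    ls -l /usr/bin/fusermount-glusterfs    # expect no 's' in the mode bits, e.g. -rwxr-xr-x root root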


@@ -0,0 +1,79 @@
From b40c05f7c099e860464faddd81722c7a3ab860a4 Mon Sep 17 00:00:00 2001
From: Niels de Vos <ndevos@redhat.com>
Date: Wed, 10 Jun 2015 16:16:47 +0200
Subject: [PATCH 10/74] build: packaging corrections for RHEL-5
Because the RHEL-5 versions of these packages do not contain the -server
bits, some additional changes for the .spec are needed. These changes
are not applicable upstream.
Label: DOWNSTREAM ONLY
Change-Id: I3c4237bd986617f42b725efd75d1128a69e5dbe3
Signed-off-by: Niels de Vos <ndevos@redhat.com>
Reviewed-on: https://code.engineering.redhat.com/gerrit/50447
Reviewed-by: Balamurugan Arumugam <barumuga@redhat.com>
Reviewed-on: https://code.engineering.redhat.com/gerrit/60136
Tested-by: Milind Changire <mchangir@redhat.com>
---
glusterfs.spec.in | 18 ++++++++++--------
1 file changed, 10 insertions(+), 8 deletions(-)
diff --git a/glusterfs.spec.in b/glusterfs.spec.in
index 8458e8a..dbdb818 100644
--- a/glusterfs.spec.in
+++ b/glusterfs.spec.in
@@ -92,6 +92,7 @@
%global _build_server 1
%else
%global _build_server 0
+%global _without_georeplication --disable-georeplication
%endif
%global _without_extra_xlators 1
@@ -1068,17 +1069,14 @@ exit 0
%if ( ! 0%{?_build_server} )
# exclude ganesha files
%exclude %{_prefix}/lib/ocf/*
-# exclude geo-replication files
-%exclude %{_sysconfdir}/logrotate.d/glusterfs-georep
+# exclude incrementalapi
%exclude %{_libexecdir}/glusterfs/*
%exclude %{_sbindir}/gfind_missing_files
-%exclude %{_datadir}/glusterfs/scripts/get-gfid.sh
-%exclude %{_datadir}/glusterfs/scripts/slave-upgrade.sh
-%exclude %{_datadir}/glusterfs/scripts/gsync-upgrade.sh
-%exclude %{_datadir}/glusterfs/scripts/generate-gfid-file.sh
-%exclude %{_datadir}/glusterfs/scripts/gsync-sync-gfid
-%exclude %{_sharedstatedir}/glusterd/*
+%exclude %{_libexecdir}/glusterfs/glusterfind
+%exclude %{_bindir}/glusterfind
+%exclude %{_libexecdir}/glusterfs/peer_add_secret_pub
# exclude server files
+%exclude %{_sharedstatedir}/glusterd/*
%exclude %{_sysconfdir}/glusterfs
%exclude %{_sysconfdir}/glusterfs/glusterd.vol
%exclude %{_sysconfdir}/glusterfs/glusterfs-georep-logrotate
@@ -1093,7 +1091,9 @@ exit 0
%exclude %{_libdir}/glusterfs/%{version}%{?prereltag}/xlator/features/arbiter.so
%exclude %{_libdir}/glusterfs/%{version}%{?prereltag}/xlator/features/bit-rot.so
%exclude %{_libdir}/glusterfs/%{version}%{?prereltag}/xlator/features/bitrot-stub.so
+%if ( 0%{!?_without_tiering:1} )
%exclude %{_libdir}/glusterfs/%{version}%{?prereltag}/xlator/features/changetimerecorder.so
+%endif
%exclude %{_libdir}/glusterfs/%{version}%{?prereltag}/xlator/features/index.so
%exclude %{_libdir}/glusterfs/%{version}%{?prereltag}/xlator/features/leases.so
%exclude %{_libdir}/glusterfs/%{version}%{?prereltag}/xlator/features/locks.so
@@ -1107,7 +1107,9 @@ exit 0
%exclude %{_libdir}/glusterfs/%{version}%{?prereltag}/xlator/performance/decompounder.so
%exclude %{_libdir}/glusterfs/%{version}%{?prereltag}/xlator/protocol/server*
%exclude %{_libdir}/glusterfs/%{version}%{?prereltag}/xlator/storage*
+%if ( 0%{!?_without_tiering:1} )
%exclude %{_libdir}/libgfdb.so.*
+%endif
%exclude %{_sbindir}/gcron.py
%exclude %{_sbindir}/glfsheal
%exclude %{_sbindir}/glusterd
--
1.8.3.1


@@ -0,0 +1,70 @@
From ada27d07526acb0ef09f37de7f364fa3dcea0b36 Mon Sep 17 00:00:00 2001
From: Atin Mukherjee <amukherj@redhat.com>
Date: Wed, 3 Jun 2015 11:09:21 +0530
Subject: [PATCH 11/74] build: introduce security hardening flags in gluster
This patch introduces two of the security hardening compiler flags, RELRO & PIE,
in the gluster codebase. Setting _hardened_build to 1 doesn't guarantee the existence
of these flags in the compilation, as different versions of RHEL ship different
redhat-rpm-config macros. So the idea is to export these flags at the spec file
level.
Label: DOWNSTREAM ONLY
Change-Id: I0a1a56d0a8f54f110d306ba5e55e39b1b073dc84
Signed-off-by: Atin Mukherjee <amukherj@redhat.com>
Reviewed-on: https://code.engineering.redhat.com/gerrit/49780
Reviewed-by: Balamurugan Arumugam <barumuga@redhat.com>
Tested-by: Balamurugan Arumugam <barumuga@redhat.com>
Reviewed-on: https://code.engineering.redhat.com/gerrit/60137
Tested-by: Milind Changire <mchangir@redhat.com>
---
glusterfs.spec.in | 25 +++++++++++++++++++++++--
1 file changed, 23 insertions(+), 2 deletions(-)
diff --git a/glusterfs.spec.in b/glusterfs.spec.in
index dbdb818..458b8bc 100644
--- a/glusterfs.spec.in
+++ b/glusterfs.spec.in
@@ -709,6 +709,24 @@ GlusterFS Events
CFLAGS=-DUSE_INSECURE_OPENSSL
export CFLAGS
%endif
+# In RHEL7 few hardening flags are available by default, however the RELRO
+# default behaviour is partial, convert to full
+%if ( 0%{?rhel} && 0%{?rhel} >= 7 )
+LDFLAGS="$RPM_LD_FLAGS -Wl,-z,relro,-z,now"
+export LDFLAGS
+%else
+%if ( 0%{?rhel} && 0%{?rhel} == 6 )
+CFLAGS="$RPM_OPT_FLAGS -fPIE -DPIE"
+LDFLAGS="$RPM_LD_FLAGS -pie -Wl,-z,relro,-z,now"
+%else
+#It appears that with gcc-4.1.2 in RHEL5 there is an issue using both -fPIC and
+ # -fPIE that makes -z relro not work; -fPIE seems to undo what -fPIC does
+CFLAGS="$CFLAGS $RPM_OPT_FLAGS"
+LDFLAGS="$RPM_LD_FLAGS -Wl,-z,relro,-z,now"
+%endif
+export CFLAGS
+export LDFLAGS
+%endif
./autogen.sh && %configure \
%{?_with_cmocka} \
@@ -2110,8 +2128,11 @@ end
* Fri Jun 12 2015 Aravinda VK <avishwan@redhat.com>
- Added rsync as dependency to georeplication rpm (#1231205)
-* Tue Jun 02 2015 Aravinda VK <avishwan@redhat.com>
-- Added post hook for volume delete as part of glusterfind (#1225465)
+* Thu Jun 11 2015 Atin Mukherjee <amukherj@redhat.com>
+- Security hardening flags inclusion (#1200815)
+
+* Thu Jun 11 2015 Aravinda VK <avishwan@redhat.com>
+- Added post hook for volume delete as part of glusterfind (#1225551)
* Wed May 27 2015 Aravinda VK <avishwan@redhat.com>
- Added stop-all-gluster-processes.sh in glusterfs-server section (#1204641)
--
1.8.3.1
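Whether RELRO and PIE actually landed in a built binary can be confirmed with readelf; a sketch using glusterd as an example (any installed gluster binary works the same way):

    # full RELRO: a GNU_RELRO program header plus BIND_NOW in the dynamic section
    readelf -lW /usr/sbin/glusterd | grep GNU_RELRO
    readelf -dW /usr/sbin/glusterd | grep BIND_NOW
    # PIE: the ELF type is DYN rather than EXEC
    readelf -hW /usr/sbin/glusterd | grep 'Type:'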


@@ -0,0 +1,100 @@
From 280eddebd49483343cc08b42c12f26d89f6d51e1 Mon Sep 17 00:00:00 2001
From: Niels de Vos <ndevos@redhat.com>
Date: Wed, 22 Apr 2015 15:39:59 +0200
Subject: [PATCH 12/74] spec: fix/add pre-transaction scripts for geo-rep and
cli packages
The cli subpackage never had a %pretrans script; this has been added
now.
The %pretrans script for geo-replication was never included in the RPM
package because it was disabled by an undefined macro (_can_georeplicate).
This macro is not used/set anywhere else and _without_georeplication
should take care of it anyway.
Note: This is a Red Hat Gluster Storage specific patch. Upstream
packaging guidelines do not allow these kind of 'features'.
Label: DOWNSTREAM ONLY
Change-Id: I16aab5bba72f1ed178f3bcac47f9d8ef767cfcef
Signed-off-by: Niels de Vos <ndevos@redhat.com>
Signed-off-by: Bala.FA <barumuga@redhat.com>
Reviewed-on: https://code.engineering.redhat.com/gerrit/50491
Reviewed-on: https://code.engineering.redhat.com/gerrit/60138
Tested-by: Milind Changire <mchangir@redhat.com>
---
glusterfs.spec.in | 43 +++++++++++++++++++++++++++++++++++++++++--
1 file changed, 41 insertions(+), 2 deletions(-)
diff --git a/glusterfs.spec.in b/glusterfs.spec.in
index 458b8bc..68eba56 100644
--- a/glusterfs.spec.in
+++ b/glusterfs.spec.in
@@ -1668,6 +1668,47 @@ end
+%pretrans cli -p <lua>
+if not posix.access("/bin/bash", "x") then
+ -- initial installation, no shell, no running glusterfsd
+ return 0
+end
+
+-- TODO: move this completely to a lua script
+-- For now, we write a temporary bash script and execute that.
+
+script = [[#!/bin/sh
+pidof -c -o %PPID -x glusterfsd &>/dev/null
+
+if [ $? -eq 0 ]; then
+ pushd . > /dev/null 2>&1
+ for volume in /var/lib/glusterd/vols/*; do cd $volume;
+ vol_type=`grep '^type=' info | awk -F'=' '{print $2}'`
+ volume_started=`grep '^status=' info | awk -F'=' '{print $2}'`
+ if [ $vol_type -eq 0 ] && [ $volume_started -eq 1 ] ; then
+ exit 1;
+ fi
+ done
+
+ popd > /dev/null 2>&1
+ exit 1;
+fi
+]]
+
+-- rpm in RHEL5 does not have os.tmpname()
+-- io.tmpfile() can not be resolved to a filename to pass to bash :-/
+tmpname = "/tmp/glusterfs-cli_pretrans_" .. os.date("%s")
+tmpfile = io.open(tmpname, "w")
+tmpfile:write(script)
+tmpfile:close()
+ok, how, val = os.execute("/bin/bash " .. tmpname)
+os.remove(tmpname)
+if not (ok == 0) then
+ error("Detected running glusterfs processes", ok)
+end
+
+
+
%pretrans devel -p <lua>
if not posix.access("/bin/bash", "x") then
-- initial installation, no shell, no running glusterfsd
@@ -1750,7 +1791,6 @@ end
-%if 0%{?_can_georeplicate}
%if ( 0%{!?_without_georeplication:1} )
%pretrans geo-replication -p <lua>
if not posix.access("/bin/bash", "x") then
@@ -1791,7 +1831,6 @@ if not (ok == 0) then
error("Detected running glusterfs processes", ok)
end
%endif
-%endif
--
1.8.3.1
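The %pretrans scriptlets added here, like the ones in the earlier RHGS spec patch, all follow one pattern: a Lua wrapper writes a short shell script to /tmp and aborts the RPM transaction if glusterfsd is running. Stripped of the Lua plumbing, the check amounts to roughly the following paraphrase (not a verbatim copy of the embedded script):

    if pidof -x glusterfsd >/dev/null; then
        for volume in /var/lib/glusterd/vols/*; do
            vol_type=$(grep '^type=' "$volume/info" | cut -d= -f2)
            vol_started=$(grep '^status=' "$volume/info" | cut -d= -f2)
            # type 0 == distribute, status 1 == started (see the server %pretrans messages)
            [ "$vol_type" -eq 0 ] && [ "$vol_started" -eq 1 ] && exit 1
        done
        exit 1    # glusterfsd is running: refuse the in-service update
    fi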


@@ -0,0 +1,84 @@
From cf8f5a4e4098a6aae9b986dc2da2006eadd4fef1 Mon Sep 17 00:00:00 2001
From: Niels de Vos <ndevos@redhat.com>
Date: Thu, 18 Jun 2015 12:16:16 +0200
Subject: [PATCH 13/74] rpm: glusterfs-devel for client-builds should not
depend on -server
glusterfs-devel for client-side packages should *not* include the
libgfdb.so symlink and libgfdb.pc file or any of the libchangelog
ones.
Label: DOWNSTREAM ONLY
Change-Id: Ifb4a9cf48841e5af5dd0a98b6de51e2ee469fc56
Signed-off-by: Niels de Vos <ndevos@redhat.com>
Reviewed-on: https://code.engineering.redhat.com/gerrit/51019
Reviewed-by: Balamurugan Arumugam <barumuga@redhat.com>
Tested-by: Balamurugan Arumugam <barumuga@redhat.com>
Reviewed-on: https://code.engineering.redhat.com/gerrit/60139
Tested-by: Milind Changire <mchangir@redhat.com>
---
glusterfs.spec.in | 24 +++++++++++++++++++-----
1 file changed, 19 insertions(+), 5 deletions(-)
diff --git a/glusterfs.spec.in b/glusterfs.spec.in
index 68eba56..b2fb4d5 100644
--- a/glusterfs.spec.in
+++ b/glusterfs.spec.in
@@ -1196,9 +1196,10 @@ exit 0
%{_tmpfilesdir}/gluster.conf
%endif
%if ( ! 0%{?_build_server} )
-%{_libdir}/pkgconfig/libgfchangelog.pc
-%{_libdir}/pkgconfig/libgfdb.pc
-%{_sbindir}/gluster-setgfid2path
+%exclude %{_libdir}/pkgconfig/libgfchangelog.pc
+%exclude %{_libdir}/pkgconfig/libgfdb.pc
+%exclude %{_sbindir}/gluster-setgfid2path
+%exclude %{_mandir}/man8/gluster-setgfid2path.8*
%endif
%files api
@@ -1226,6 +1227,12 @@ exit 0
%{_includedir}/glusterfs/*
%exclude %{_includedir}/glusterfs/api
%exclude %{_libdir}/libgfapi.so
+%if ( ! 0%{?_build_server} )
+%exclude %{_libdir}/libgfchangelog.so
+%endif
+%if ( 0%{!?_without_tiering:1} && ! 0%{?_build_server})
+%exclude %{_libdir}/libgfdb.so
+%endif
%{_libdir}/*.so
# Glupy Translator examples
%dir %{_libdir}/glusterfs/%{version}%{?prereltag}/xlator
@@ -1235,10 +1242,14 @@ exit 0
%{_libdir}/glusterfs/%{version}%{?prereltag}/xlator/features/glupy/helloworld.*
%{_libdir}/glusterfs/%{version}%{?prereltag}/xlator/features/glupy/negative.*
%if ( 0%{?_build_server} )
+%{_libdir}/pkgconfig/libgfchangelog.pc
+%else
%exclude %{_libdir}/pkgconfig/libgfchangelog.pc
+%endif
+%if ( 0%{!?_without_tiering:1} && 0%{?_build_server})
+%{_libdir}/pkgconfig/libgfdb.pc
+%else
%exclude %{_libdir}/pkgconfig/libgfdb.pc
-%exclude %{_sbindir}/gluster-setgfid2path
-%exclude %{_mandir}/man8/gluster-setgfid2path.8*
%endif
%files client-xlators
@@ -2161,6 +2172,9 @@ end
* Tue Aug 18 2015 Niels de Vos <ndevos@redhat.com>
- Include missing directories for glusterfind hooks scripts (#1225465)
+* Thu Jun 18 2015 Niels de Vos <ndevos@redhat.com>
+- glusterfs-devel for client-builds should not depend on -server (#1227029)
+
* Mon Jun 15 2015 Niels de Vos <ndevos@redhat.com>
- Replace hook script S31ganesha-set.sh by S31ganesha-start.sh (#1231738)
--
1.8.3.1


@@ -0,0 +1,181 @@
From 59602f5c55a05b9652247803d37efa85f6e8f526 Mon Sep 17 00:00:00 2001
From: "Bala.FA" <barumuga@redhat.com>
Date: Wed, 17 Jun 2015 21:34:52 +0530
Subject: [PATCH 14/74] build: add pretrans check
This patch adds a pretrans check for the client-xlators, ganesha and
python-gluster sub-packages.
Label: DOWNSTREAM ONLY
Change-Id: I454016319832c11902c0ca79a79fbbcf8ac0a121
Signed-off-by: Bala.FA <barumuga@redhat.com>
Reviewed-on: https://code.engineering.redhat.com/gerrit/50967
Reviewed-on: https://code.engineering.redhat.com/gerrit/60140
Tested-by: Milind Changire <mchangir@redhat.com>
---
glusterfs.spec.in | 127 ++++++++++++++++++++++++++++++++++++++++++++++++++++++
1 file changed, 127 insertions(+)
diff --git a/glusterfs.spec.in b/glusterfs.spec.in
index b2fb4d5..0d1161d 100644
--- a/glusterfs.spec.in
+++ b/glusterfs.spec.in
@@ -1720,6 +1720,47 @@ end
+%pretrans client-xlators -p <lua>
+if not posix.access("/bin/bash", "x") then
+ -- initial installation, no shell, no running glusterfsd
+ return 0
+end
+
+-- TODO: move this completely to a lua script
+-- For now, we write a temporary bash script and execute that.
+
+script = [[#!/bin/sh
+pidof -c -o %PPID -x glusterfsd &>/dev/null
+
+if [ $? -eq 0 ]; then
+ pushd . > /dev/null 2>&1
+ for volume in /var/lib/glusterd/vols/*; do cd $volume;
+ vol_type=`grep '^type=' info | awk -F'=' '{print $2}'`
+ volume_started=`grep '^status=' info | awk -F'=' '{print $2}'`
+ if [ $vol_type -eq 0 ] && [ $volume_started -eq 1 ] ; then
+ exit 1;
+ fi
+ done
+
+ popd > /dev/null 2>&1
+ exit 1;
+fi
+]]
+
+-- rpm in RHEL5 does not have os.tmpname()
+-- io.tmpfile() can not be resolved to a filename to pass to bash :-/
+tmpname = "/tmp/glusterfs-client-xlators_pretrans_" .. os.date("%s")
+tmpfile = io.open(tmpname, "w")
+tmpfile:write(script)
+tmpfile:close()
+ok, how, val = os.execute("/bin/bash " .. tmpname)
+os.remove(tmpname)
+if not (ok == 0) then
+ error("Detected running glusterfs processes", ok)
+end
+
+
+
%pretrans devel -p <lua>
if not posix.access("/bin/bash", "x") then
-- initial installation, no shell, no running glusterfsd
@@ -1802,6 +1843,47 @@ end
+%pretrans ganesha -p <lua>
+if not posix.access("/bin/bash", "x") then
+ -- initial installation, no shell, no running glusterfsd
+ return 0
+end
+
+-- TODO: move this completely to a lua script
+-- For now, we write a temporary bash script and execute that.
+
+script = [[#!/bin/sh
+pidof -c -o %PPID -x glusterfsd &>/dev/null
+
+if [ $? -eq 0 ]; then
+ pushd . > /dev/null 2>&1
+ for volume in /var/lib/glusterd/vols/*; do cd $volume;
+ vol_type=`grep '^type=' info | awk -F'=' '{print $2}'`
+ volume_started=`grep '^status=' info | awk -F'=' '{print $2}'`
+ if [ $vol_type -eq 0 ] && [ $volume_started -eq 1 ] ; then
+ exit 1;
+ fi
+ done
+
+ popd > /dev/null 2>&1
+ exit 1;
+fi
+]]
+
+-- rpm in RHEL5 does not have os.tmpname()
+-- io.tmpfile() can not be resolved to a filename to pass to bash :-/
+tmpname = "/tmp/glusterfs-ganesha_pretrans_" .. os.date("%s")
+tmpfile = io.open(tmpname, "w")
+tmpfile:write(script)
+tmpfile:close()
+ok, how, val = os.execute("/bin/bash " .. tmpname)
+os.remove(tmpname)
+if not (ok == 0) then
+ error("Detected running glusterfs processes", ok)
+end
+
+
+
%if ( 0%{!?_without_georeplication:1} )
%pretrans geo-replication -p <lua>
if not posix.access("/bin/bash", "x") then
@@ -1886,6 +1968,47 @@ end
+%pretrans -n python-gluster -p <lua>
+if not posix.access("/bin/bash", "x") then
+ -- initial installation, no shell, no running glusterfsd
+ return 0
+end
+
+-- TODO: move this completely to a lua script
+-- For now, we write a temporary bash script and execute that.
+
+script = [[#!/bin/sh
+pidof -c -o %PPID -x glusterfsd &>/dev/null
+
+if [ $? -eq 0 ]; then
+ pushd . > /dev/null 2>&1
+ for volume in /var/lib/glusterd/vols/*; do cd $volume;
+ vol_type=`grep '^type=' info | awk -F'=' '{print $2}'`
+ volume_started=`grep '^status=' info | awk -F'=' '{print $2}'`
+ if [ $vol_type -eq 0 ] && [ $volume_started -eq 1 ] ; then
+ exit 1;
+ fi
+ done
+
+ popd > /dev/null 2>&1
+ exit 1;
+fi
+]]
+
+-- rpm in RHEL5 does not have os.tmpname()
+-- io.tmpfile() can not be resolved to a filename to pass to bash :-/
+tmpname = "/tmp/python-gluster_pretrans_" .. os.date("%s")
+tmpfile = io.open(tmpname, "w")
+tmpfile:write(script)
+tmpfile:close()
+ok, how, val = os.execute("/bin/bash " .. tmpname)
+os.remove(tmpname)
+if not (ok == 0) then
+ error("Detected running glusterfs processes", ok)
+end
+
+
+
%if ( 0%{!?_without_rdma:1} )
%pretrans rdma -p <lua>
if not posix.access("/bin/bash", "x") then
@@ -2172,6 +2295,10 @@ end
* Tue Aug 18 2015 Niels de Vos <ndevos@redhat.com>
- Include missing directories for glusterfind hooks scripts (#1225465)
+* Thu Jun 18 2015 Bala.FA <barumuga@redhat.com>
+- add pretrans check for client-xlators, ganesha and python-gluster
+ sub-packages (#1232641)
+
* Thu Jun 18 2015 Niels de Vos <ndevos@redhat.com>
- glusterfs-devel for client-builds should not depend on -server (#1227029)
--
1.8.3.1


@@ -0,0 +1,87 @@
From 444324cfdcd8da750bc0ae04a3a416725489dd06 Mon Sep 17 00:00:00 2001
From: "Bala.FA" <barumuga@redhat.com>
Date: Fri, 19 Jun 2015 11:09:53 +0530
Subject: [PATCH 15/74] build: exclude libgfdb.pc conditionally
This patch fixes a rhel-5 build failure where libgfdb.pc is not
applicable.
Label: DOWNSTREAM ONLY
Change-Id: Ied3978aa14ff6bd72f25eff9759e501100cb6343
Signed-off-by: Bala.FA <barumuga@redhat.com>
Reviewed-on: https://code.engineering.redhat.com/gerrit/51099
Reviewed-on: https://code.engineering.redhat.com/gerrit/60141
Tested-by: Milind Changire <mchangir@redhat.com>
---
glusterfs.spec.in | 11 ++++++++++-
1 file changed, 10 insertions(+), 1 deletion(-)
diff --git a/glusterfs.spec.in b/glusterfs.spec.in
index 0d1161d..f308f37 100644
--- a/glusterfs.spec.in
+++ b/glusterfs.spec.in
@@ -1058,12 +1058,14 @@ fi
%postun libs
/sbin/ldconfig
+%if ( 0%{?_build_server} )
%postun server
/sbin/ldconfig
%if (0%{?_with_firewalld:1})
%firewalld_reload
%endif
exit 0
+%endif
##-----------------------------------------------------------------------------
## All %%files should be placed here and keep them grouped
@@ -1249,8 +1251,10 @@ exit 0
%if ( 0%{!?_without_tiering:1} && 0%{?_build_server})
%{_libdir}/pkgconfig/libgfdb.pc
%else
+%if ( 0%{?rhel} && 0%{?rhel} >= 6 )
%exclude %{_libdir}/pkgconfig/libgfdb.pc
%endif
+%endif
%files client-xlators
%dir %{_libdir}/glusterfs/%{version}%{?prereltag}/xlator
@@ -1259,7 +1263,7 @@ exit 0
%dir %{_libdir}/glusterfs/%{version}%{?prereltag}/xlator/protocol
%{_libdir}/glusterfs/%{version}%{?prereltag}/xlator/protocol/client.so
-+%if ( 0%{!?_without_extra_xlators:1} )
+%if ( 0%{!?_without_extra_xlators:1} )
%files extra-xlators
%dir %{_libdir}/glusterfs/%{version}%{?prereltag}/xlator
%dir %{_libdir}/glusterfs/%{version}%{?prereltag}/xlator/encryption
@@ -1378,6 +1382,7 @@ exit 0
%endif
%if ( 0%{?_build_server} )
+%if ( 0%{!?_without_regression_tests:1} )
%files regression-tests
%dir %{_datadir}/glusterfs
%{_datadir}/glusterfs/run-tests.sh
@@ -1385,6 +1390,7 @@ exit 0
%exclude %{_datadir}/glusterfs/tests/vagrant
%exclude %{_datadir}/share/glusterfs/tests/basic/rpm.t
%endif
+%endif
%if ( 0%{?_build_server} )
%if ( 0%{!?_without_ocf:1} )
@@ -2295,6 +2301,9 @@ end
* Tue Aug 18 2015 Niels de Vos <ndevos@redhat.com>
- Include missing directories for glusterfind hooks scripts (#1225465)
+* Fri Jun 19 2015 Bala.FA <barumuga@redhat.com>
+- exclude libgfdb.pc conditionally for rhel-5 (#1233486)
+
* Thu Jun 18 2015 Bala.FA <barumuga@redhat.com>
- add pretrans check for client-xlators, ganesha and python-gluster
sub-packages (#1232641)
--
1.8.3.1

View File

@ -0,0 +1,33 @@
From 5b117b1f8cf05d645512bb6f07cbe2803119652f Mon Sep 17 00:00:00 2001
From: Milind Changire <mchangir@redhat.com>
Date: Thu, 29 Oct 2015 15:55:26 +0530
Subject: [PATCH 16/74] build: exclude glusterfs.xml on rhel-7 client build
Label: DOWNSTREAM ONLY
Change-Id: Iae1ee01b3aa61d4dd150e17646b330871b948ef3
Signed-off-by: Milind Changire <mchangir@redhat.com>
Reviewed-on: https://code.engineering.redhat.com/gerrit/60433
Reviewed-by: Balamurugan Arumugam <barumuga@redhat.com>
Tested-by: Balamurugan Arumugam <barumuga@redhat.com>
---
glusterfs.spec.in | 3 +++
1 file changed, 3 insertions(+)
diff --git a/glusterfs.spec.in b/glusterfs.spec.in
index f308f37..85f7f21 100644
--- a/glusterfs.spec.in
+++ b/glusterfs.spec.in
@@ -1138,6 +1138,9 @@ exit 0
%if 0%{?_tmpfilesdir:1}
%exclude %{_tmpfilesdir}/gluster.conf
%endif
+%if ( 0%{?_with_firewalld:1} )
+%exclude /usr/lib/firewalld/services/glusterfs.xml
+%endif
%endif
%doc ChangeLog COPYING-GPLV2 COPYING-LGPLV3 INSTALL README.md THANKS
%{_mandir}/man8/*gluster*.8*
--
1.8.3.1

View File

@ -0,0 +1,56 @@
From 5d3441530f71047483b5973bad7efd2c73ccfff9 Mon Sep 17 00:00:00 2001
From: anand <anekkunt@redhat.com>
Date: Wed, 18 Nov 2015 16:13:46 +0530
Subject: [PATCH 17/74] glusterd: fix info file checksum mismatch during
upgrade
Peers are moving to the rejected state when upgrading from RHS2.1 to RHGS3.1.2
due to a checksum mismatch.
Label: DOWNSTREAM ONLY
Change-Id: Ifea6b7dfe8477c7f17eefc5ca87ced58aaa21c84
Signed-off-by: anand <anekkunt@redhat.com>
Reviewed-on: https://code.engineering.redhat.com/gerrit/61774
Reviewed-by: Atin Mukherjee <amukherj@redhat.com>
Tested-by: Atin Mukherjee <amukherj@redhat.com>
---
xlators/mgmt/glusterd/src/glusterd-store.c | 21 ++++++++++++---------
1 file changed, 12 insertions(+), 9 deletions(-)
diff --git a/xlators/mgmt/glusterd/src/glusterd-store.c b/xlators/mgmt/glusterd/src/glusterd-store.c
index 8a662ef..42bb8ce 100644
--- a/xlators/mgmt/glusterd/src/glusterd-store.c
+++ b/xlators/mgmt/glusterd/src/glusterd-store.c
@@ -1014,16 +1014,19 @@ glusterd_volume_exclude_options_write (int fd, glusterd_volinfo_t *volinfo)
goto out;
}
- snprintf (buf, sizeof (buf), "%d", volinfo->op_version);
- ret = gf_store_save_value (fd, GLUSTERD_STORE_KEY_VOL_OP_VERSION, buf);
- if (ret)
- goto out;
+ if (conf->op_version >= GD_OP_VERSION_RHS_3_0) {
+ snprintf (buf, sizeof (buf), "%d", volinfo->op_version);
+ ret = gf_store_save_value (fd, GLUSTERD_STORE_KEY_VOL_OP_VERSION, buf);
+ if (ret)
+ goto out;
+
+ snprintf (buf, sizeof (buf), "%d", volinfo->client_op_version);
+ ret = gf_store_save_value (fd, GLUSTERD_STORE_KEY_VOL_CLIENT_OP_VERSION,
+ buf);
+ if (ret)
+ goto out;
+ }
- snprintf (buf, sizeof (buf), "%d", volinfo->client_op_version);
- ret = gf_store_save_value (fd, GLUSTERD_STORE_KEY_VOL_CLIENT_OP_VERSION,
- buf);
- if (ret)
- goto out;
if (volinfo->caps) {
snprintf (buf, sizeof (buf), "%d", volinfo->caps);
ret = gf_store_save_value (fd, GLUSTERD_STORE_KEY_VOL_CAPS,
--
1.8.3.1
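
The fix above works because both peers must produce byte-identical volume info files for the checksum comparison to pass. A minimal sketch of the idea, not GlusterD code (the op-version constant is an assumed value, for illustration only):

# Sketch only: emit the newer op-version keys into the volume info file only
# when the cluster op-version is high enough, so the file -- and therefore the
# checksum exchanged between peers -- matches what older peers write.
GD_OP_VERSION_RHS_3_0 = 30600   # assumed numeric value, illustration only

def volinfo_lines(volinfo, cluster_op_version):
    lines = ["volume-id=%s" % volinfo["id"]]
    if cluster_op_version >= GD_OP_VERSION_RHS_3_0:
        # RHS 2.1 peers never wrote these keys; emitting them in a mixed
        # cluster would change the checksum and reject the peer.
        lines.append("op-version=%d" % volinfo["op_version"])
        lines.append("client-op-version=%d" % volinfo["client_op_version"])
    return lines

vol = {"id": "vol0", "op_version": 2, "client_op_version": 2}
assert "op-version=2" not in volinfo_lines(vol, 30000)   # old cluster: keys omitted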

View File

@ -0,0 +1,61 @@
From 75d0e5c542c4d1a2df1a49a6f526ccb099f9f53f Mon Sep 17 00:00:00 2001
From: Milind Changire <mchangir@redhat.com>
Date: Tue, 22 Mar 2016 23:33:13 +0530
Subject: [PATCH 18/74] build: spec file conflict resolution
Missed conflict resolution for removing references to
gluster.conf.example as mentioned in patch titled:
packaging: gratuitous dependencies on rsyslog-mm{count,jsonparse}
by Kaleb
References to hook scripts S31ganesha-start.sh and
S31ganesha-reset.sh got lost in the downstream only
patch conflict resolution.
Commented out the blanket reference to %{_sharedstatedir}/glusterd/*
in section %files server to avoid rpmbuild warning related to
multiple references to hook scripts and other files under
/var/lib/glusterd.
Label: DOWNSTREAM ONLY
Change-Id: I9d409f1595ab985ed9f79d9d4f4298877609ba17
Signed-off-by: Milind Changire <mchangir@redhat.com>
Reviewed-on: https://code.engineering.redhat.com/gerrit/70535
Reviewed-by: Rajesh Joseph <rjoseph@redhat.com>
Tested-by: Rajesh Joseph <rjoseph@redhat.com>
---
glusterfs.spec.in | 17 -----------------
1 file changed, 17 deletions(-)
diff --git a/glusterfs.spec.in b/glusterfs.spec.in
index 85f7f21..fe566e5 100644
--- a/glusterfs.spec.in
+++ b/glusterfs.spec.in
@@ -840,23 +840,6 @@ install -D -p -m 0644 extras/glusterfs-georep-logrotate \
%{buildroot}%{_sysconfdir}/logrotate.d/glusterfs-georep
%endif
-%if ( 0%{!?_without_syslog:1} )
-%if ( 0%{?fedora} ) || ( 0%{?rhel} && 0%{?rhel} > 6 )
-install -D -p -m 0644 extras/gluster-rsyslog-7.2.conf \
- %{buildroot}%{_sysconfdir}/rsyslog.d/gluster.conf.example
-%endif
-
-%if ( 0%{?rhel} && 0%{?rhel} == 6 )
-install -D -p -m 0644 extras/gluster-rsyslog-5.8.conf \
- %{buildroot}%{_sysconfdir}/rsyslog.d/gluster.conf.example
-%endif
-
-%if ( 0%{?fedora} ) || ( 0%{?rhel} && 0%{?rhel} >= 6 )
-install -D -p -m 0644 extras/logger.conf.example \
- %{buildroot}%{_sysconfdir}/glusterfs/logger.conf.example
-%endif
-%endif
-
touch %{buildroot}%{_sharedstatedir}/glusterd/glusterd.info
touch %{buildroot}%{_sharedstatedir}/glusterd/options
subdirs=(add-brick create copy-file delete gsync-create remove-brick reset set start stop)
--
1.8.3.1

View File

@ -0,0 +1,36 @@
From 5c5283f873e72d7305953ca357b709a3ab1919f4 Mon Sep 17 00:00:00 2001
From: Kaleb S KEITHLEY <kkeithle@redhat.com>
Date: Tue, 10 May 2016 12:37:23 -0400
Subject: [PATCH 19/74] build: dependency error during upgrade
Not sure who thought config params in the form without_foo were a
good idea. Trying to parse !without_tiering conditionals makes my
head hurt.
Label: DOWNSTREAM ONLY
Change-Id: Ie1c43fc60d6f747c27b22e3a1c40539aba3d2cad
Signed-off-by: Kaleb S KEITHLEY <kkeithle@redhat.com>
Reviewed-on: https://code.engineering.redhat.com/gerrit/74041
Reviewed-by: Niels de Vos <ndevos@redhat.com>
---
glusterfs.spec.in | 3 +++
1 file changed, 3 insertions(+)
diff --git a/glusterfs.spec.in b/glusterfs.spec.in
index fe566e5..f83ae5e 100644
--- a/glusterfs.spec.in
+++ b/glusterfs.spec.in
@@ -1234,6 +1234,9 @@ exit 0
%else
%exclude %{_libdir}/pkgconfig/libgfchangelog.pc
%endif
+%if ( 0%{!?_without_tiering:1} && ! 0%{?_build_server})
+%exclude %{_libdir}/libgfdb.so
+%endif
%if ( 0%{!?_without_tiering:1} && 0%{?_build_server})
%{_libdir}/pkgconfig/libgfdb.pc
%else
--
1.8.3.1

View File

@ -0,0 +1,88 @@
From a7570af0bc6dc53044dce2cace9a65e96c571da6 Mon Sep 17 00:00:00 2001
From: Aravinda VK <avishwan@redhat.com>
Date: Mon, 19 Sep 2016 16:59:30 +0530
Subject: [PATCH 20/74] eventsapi: Fix eventtypes.h header generation with
Python 2.4
The eventskeygen.py file generates the eventtypes.h and eventtypes.py files
during the build. If the Python version is old (version 2.4), the Gluster
client build will fail: eventskeygen.py uses the "with" statement to
open files, which was only introduced in Python 2.5.
Label: DOWNSTREAM ONLY
Change-Id: I995e102fad0c7bc66e840b1ab9d53ed564266253
Signed-off-by: Aravinda VK <avishwan@redhat.com>
Reviewed-on: https://code.engineering.redhat.com/gerrit/85060
Reviewed-by: Milind Changire <mchangir@redhat.com>
Reviewed-by: Atin Mukherjee <amukherj@redhat.com>
---
events/eventskeygen.py | 47 +++++++++++++++++++++++++----------------------
1 file changed, 25 insertions(+), 22 deletions(-)
diff --git a/events/eventskeygen.py b/events/eventskeygen.py
index 23dfb47..a9c5573 100644
--- a/events/eventskeygen.py
+++ b/events/eventskeygen.py
@@ -207,33 +207,36 @@ ERRORS = (
if gen_header_type == "C_HEADER":
# Generate eventtypes.h
- with open(eventtypes_h, "w") as f:
- f.write("#ifndef __EVENTTYPES_H__\n")
- f.write("#define __EVENTTYPES_H__\n\n")
- f.write("typedef enum {\n")
- for k in ERRORS:
- f.write(" {0},\n".format(k))
- f.write("} event_errors_t;\n")
+ f = open(eventtypes_h, "w")
+ f.write("#ifndef __EVENTTYPES_H__\n")
+ f.write("#define __EVENTTYPES_H__\n\n")
+ f.write("typedef enum {\n")
+ for k in ERRORS:
+ f.write(" %s,\n" % k)
+ f.write("} event_errors_t;\n")
- f.write("\n")
+ f.write("\n")
- f.write("typedef enum {\n")
- for k in keys:
- f.write(" {0},\n".format(k))
+ f.write("typedef enum {\n")
+ for k in keys:
+ f.write(" %s,\n" % k)
- f.write(" {0}\n".format(LAST_EVENT))
- f.write("} eventtypes_t;\n")
- f.write("\n#endif /* __EVENTTYPES_H__ */\n")
+ f.write(" %s\n" % LAST_EVENT)
+ f.write("} eventtypes_t;\n")
+ f.write("\n#endif /* __EVENTTYPES_H__ */\n")
+ f.close()
if gen_header_type == "PY_HEADER":
# Generate eventtypes.py
- with open(eventtypes_py, "w") as f:
- f.write("# -*- coding: utf-8 -*-\n")
- f.write("all_events = [\n")
- for ev in keys:
- f.write(' "{0}",\n'.format(ev))
+ f = open(eventtypes_py, "w")
+ f.write("# -*- coding: utf-8 -*-\n")
+ f.write("all_events = [\n")
+ for ev in keys:
+ f.write(' "%s",\n' % ev)
- f.write("]\n\n")
+ f.write("]\n\n")
- for idx, ev in enumerate(keys):
- f.write("{0} = {1}\n".format(ev.replace("EVENT_", ""), idx))
+ for idx, ev in enumerate(keys):
+ f.write("%s = %s\n" % (ev.replace("EVENT_", ""), idx))
+
+ f.close()
--
1.8.3.1
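
The rewrite above boils down to avoiding two post-2.4 features: the "with" statement and str.format(). A condensed sketch of the 2.4-compatible pattern (the file name and key list are placeholders, not the real generator; the try/finally is an extra safety net, the patch itself simply calls close()):

# Python 2.4 compatible file generation: explicit open()/close() instead of
# "with", and %-formatting instead of str.format().
keys = ["EVENT_VOLUME_CREATE", "EVENT_VOLUME_START"]

f = open("eventtypes.py", "w")
try:
    f.write("# -*- coding: utf-8 -*-\n")
    f.write("all_events = [\n")
    for ev in keys:
        f.write('    "%s",\n' % ev)        # %-formatting works on Python 2.4
    f.write("]\n\n")
    for idx, ev in enumerate(keys):
        f.write("%s = %s\n" % (ev.replace("EVENT_", ""), idx))
finally:
    f.close()                              # replaces the implicit close of "with"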

View File

@ -0,0 +1,86 @@
From ab44b5af9915e15dbe679ac5a16a80d7b0ae45cc Mon Sep 17 00:00:00 2001
From: Atin Mukherjee <amukherj@redhat.com>
Date: Tue, 20 Sep 2016 03:09:08 +0530
Subject: [PATCH 21/74] syscall: remove preadv and pwritev sys wrappers
Commit 76f1680 introduced sys wrappers for preadv and pwritev, but these
syscalls are not supported on RHEL5. These functions are actually of no use
w.r.t. the downstream code, as sys_pwritev is used only in the bd xlator,
which is not supported downstream.
Label: DOWNSTREAM ONLY
Change-Id: Ifdc798f1fa74affd77abb06dd14cf9b51f484fe7
Signed-off-by: Atin Mukherjee <amukherj@redhat.com>
---
libglusterfs/src/syscall.c | 14 --------------
libglusterfs/src/syscall.h | 6 ------
xlators/storage/bd/src/bd.c | 4 ++--
3 files changed, 2 insertions(+), 22 deletions(-)
diff --git a/libglusterfs/src/syscall.c b/libglusterfs/src/syscall.c
index a7d4402..90ef39a 100644
--- a/libglusterfs/src/syscall.c
+++ b/libglusterfs/src/syscall.c
@@ -318,20 +318,6 @@ sys_write (int fd, const void *buf, size_t count)
ssize_t
-sys_preadv (int fd, const struct iovec *iov, int iovcnt, off_t offset)
-{
- return preadv (fd, iov, iovcnt, offset);
-}
-
-
-ssize_t
-sys_pwritev (int fd, const struct iovec *iov, int iovcnt, off_t offset)
-{
- return pwritev (fd, iov, iovcnt, offset);
-}
-
-
-ssize_t
sys_pread (int fd, void *buf, size_t count, off_t offset)
{
return pread (fd, buf, count, offset);
diff --git a/libglusterfs/src/syscall.h b/libglusterfs/src/syscall.h
index 0cb61b6..da816cb 100644
--- a/libglusterfs/src/syscall.h
+++ b/libglusterfs/src/syscall.h
@@ -208,12 +208,6 @@ int
sys_fallocate(int fd, int mode, off_t offset, off_t len);
ssize_t
-sys_preadv (int fd, const struct iovec *iov, int iovcnt, off_t offset);
-
-ssize_t
-sys_pwritev (int fd, const struct iovec *iov, int iovcnt, off_t offset);
-
-ssize_t
sys_pread(int fd, void *buf, size_t count, off_t offset);
ssize_t
diff --git a/xlators/storage/bd/src/bd.c b/xlators/storage/bd/src/bd.c
index 07b7ecd..af3ac84 100644
--- a/xlators/storage/bd/src/bd.c
+++ b/xlators/storage/bd/src/bd.c
@@ -1782,7 +1782,7 @@ __bd_pwritev (int fd, struct iovec *vector, int count, off_t offset,
if (!vector)
return -EFAULT;
- retval = sys_pwritev (fd, vector, count, offset);
+ retval = pwritev (fd, vector, count, offset);
if (retval == -1) {
int64_t off = offset;
gf_log (THIS->name, GF_LOG_WARNING,
@@ -1805,7 +1805,7 @@ __bd_pwritev (int fd, struct iovec *vector, int count, off_t offset,
vector[index].iov_len = bd_size - internal_offset;
no_space = 1;
}
- retval = sys_pwritev (fd, vector[index].iov_base,
+ retval = pwritev (fd, vector[index].iov_base,
vector[index].iov_len, internal_offset);
if (retval == -1) {
gf_log (THIS->name, GF_LOG_WARNING,
--
1.8.3.1

View File

@ -0,0 +1,32 @@
From c5b4f68e24c718dcbc5f4ebe0094dcb900ac5314 Mon Sep 17 00:00:00 2001
From: Milind Changire <mchangir@redhat.com>
Date: Tue, 20 Sep 2016 12:43:43 +0530
Subject: [PATCH 22/74] build: ignore %{sbindir}/conf.py* for RHEL-5
Commit dca6f06 introduced this file in a very wrong location for a Python
file. Also, rpmbuild on RHEL-5 behaves very differently from RHEL-6 as
regards ignoring .pyc and .pyo files.
Label: DOWNSTREAM ONLY
Change-Id: I574a500586162917102ae8eb32b939885d2b2d4c
Signed-off-by: Milind Changire <mchangir@redhat.com>
---
glusterfs.spec.in | 1 +
1 file changed, 1 insertion(+)
diff --git a/glusterfs.spec.in b/glusterfs.spec.in
index f83ae5e..8f30020 100644
--- a/glusterfs.spec.in
+++ b/glusterfs.spec.in
@@ -1118,6 +1118,7 @@ exit 0
%exclude %{_sbindir}/glusterd
%exclude %{_sbindir}/snap_scheduler.py
%exclude %{_datadir}/glusterfs/scripts/stop-all-gluster-processes.sh
+%exclude %{_sbindir}/conf.py*
%if 0%{?_tmpfilesdir:1}
%exclude %{_tmpfilesdir}/gluster.conf
%endif
--
1.8.3.1

View File

@ -0,0 +1,248 @@
From fdf4475ea3598b4287803001932f426f2c58f3b1 Mon Sep 17 00:00:00 2001
From: Milind Changire <mchangir@redhat.com>
Date: Fri, 14 Oct 2016 12:53:27 +0530
Subject: [PATCH 23/74] build: randomize temp file names in pretrans scriptlets
Security issue CVE-2015-1795 mentions the possibility of a file name
spoofing attack against the %pretrans server scriptlet.
Since %pretrans scriptlets are executed only for server builds, we can
use os.tmpname() to randomize temporary file names for all %pretrans
scriptlets using this mechanism.
Label: DOWNSTREAM ONLY
Change-Id: Ic82433897432794b6d311d836355aa4bad886369
Signed-off-by: Milind Changire <mchangir@redhat.com>
Reviewed-on: https://code.engineering.redhat.com/gerrit/86187
Reviewed-by: Siddharth Sharma <siddharth@redhat.com>
Reviewed-by: Niels de Vos <ndevos@redhat.com>
Reviewed-by: Atin Mukherjee <amukherj@redhat.com>
---
glusterfs.spec.in | 106 ++++++++++++++++++++++++++++++++----------------------
1 file changed, 64 insertions(+), 42 deletions(-)
diff --git a/glusterfs.spec.in b/glusterfs.spec.in
index 8f30020..ab61688 100644
--- a/glusterfs.spec.in
+++ b/glusterfs.spec.in
@@ -1579,9 +1579,10 @@ if [ $? -eq 0 ]; then
fi
]]
--- rpm in RHEL5 does not have os.tmpname()
--- io.tmpfile() can not be resolved to a filename to pass to bash :-/
-tmpname = "/tmp/glusterfs_pretrans_" .. os.date("%s")
+-- Since we run pretrans scripts only for RPMs built for a server build,
+-- we can now use os.tmpname() since it is available on RHEL6 and later
+-- platforms which are server platforms.
+tmpname = os.tmpname()
tmpfile = io.open(tmpname, "w")
tmpfile:write(script)
tmpfile:close()
@@ -1620,9 +1621,10 @@ if [ $? -eq 0 ]; then
fi
]]
--- rpm in RHEL5 does not have os.tmpname()
--- io.tmpfile() can not be resolved to a filename to pass to bash :-/
-tmpname = "/tmp/glusterfs-api_pretrans_" .. os.date("%s")
+-- Since we run pretrans scripts only for RPMs built for a server build,
+-- we can now use os.tmpname() since it is available on RHEL6 and later
+-- platforms which are server platforms.
+tmpname = os.tmpname()
tmpfile = io.open(tmpname, "w")
tmpfile:write(script)
tmpfile:close()
@@ -1661,9 +1663,10 @@ if [ $? -eq 0 ]; then
fi
]]
--- rpm in RHEL5 does not have os.tmpname()
--- io.tmpfile() can not be resolved to a filename to pass to bash :-/
-tmpname = "/tmp/glusterfs-api-devel_pretrans_" .. os.date("%s")
+-- Since we run pretrans scripts only for RPMs built for a server build,
+-- we can now use os.tmpname() since it is available on RHEL6 and later
+-- platforms which are server platforms.
+tmpname = os.tmpname()
tmpfile = io.open(tmpname, "w")
tmpfile:write(script)
tmpfile:close()
@@ -1702,9 +1705,10 @@ if [ $? -eq 0 ]; then
fi
]]
--- rpm in RHEL5 does not have os.tmpname()
--- io.tmpfile() can not be resolved to a filename to pass to bash :-/
-tmpname = "/tmp/glusterfs-cli_pretrans_" .. os.date("%s")
+-- Since we run pretrans scripts only for RPMs built for a server build,
+-- we can now use os.tmpname() since it is available on RHEL6 and later
+-- platforms which are server platforms.
+tmpname = os.tmpname()
tmpfile = io.open(tmpname, "w")
tmpfile:write(script)
tmpfile:close()
@@ -1743,9 +1747,10 @@ if [ $? -eq 0 ]; then
fi
]]
--- rpm in RHEL5 does not have os.tmpname()
--- io.tmpfile() can not be resolved to a filename to pass to bash :-/
-tmpname = "/tmp/glusterfs-client-xlators_pretrans_" .. os.date("%s")
+-- Since we run pretrans scripts only for RPMs built for a server build,
+-- we can now use os.tmpname() since it is available on RHEL6 and later
+-- platforms which are server platforms.
+tmpname = os.tmpname()
tmpfile = io.open(tmpname, "w")
tmpfile:write(script)
tmpfile:close()
@@ -1784,9 +1789,10 @@ if [ $? -eq 0 ]; then
fi
]]
--- rpm in RHEL5 does not have os.tmpname()
--- io.tmpfile() can not be resolved to a filename to pass to bash :-/
-tmpname = "/tmp/glusterfs-devel_pretrans_" .. os.date("%s")
+-- Since we run pretrans scripts only for RPMs built for a server build,
+-- we can now use os.tmpname() since it is available on RHEL6 and later
+-- platforms which are server platforms.
+tmpname = os.tmpname()
tmpfile = io.open(tmpname, "w")
tmpfile:write(script)
tmpfile:close()
@@ -1825,9 +1831,10 @@ if [ $? -eq 0 ]; then
fi
]]
--- rpm in RHEL5 does not have os.tmpname()
--- io.tmpfile() can not be resolved to a filename to pass to bash :-/
-tmpname = "/tmp/glusterfs-fuse_pretrans_" .. os.date("%s")
+-- Since we run pretrans scripts only for RPMs built for a server build,
+-- we can now use os.tmpname() since it is available on RHEL6 and later
+-- platforms which are server platforms.
+tmpname = os.tmpname()
tmpfile = io.open(tmpname, "w")
tmpfile:write(script)
tmpfile:close()
@@ -1866,9 +1873,10 @@ if [ $? -eq 0 ]; then
fi
]]
--- rpm in RHEL5 does not have os.tmpname()
--- io.tmpfile() can not be resolved to a filename to pass to bash :-/
-tmpname = "/tmp/glusterfs-ganesha_pretrans_" .. os.date("%s")
+-- Since we run pretrans scripts only for RPMs built for a server build,
+-- we can now use os.tmpname() since it is available on RHEL6 and later
+-- platforms which are server platforms.
+tmpname = os.tmpname()
tmpfile = io.open(tmpname, "w")
tmpfile:write(script)
tmpfile:close()
@@ -1908,9 +1916,10 @@ if [ $? -eq 0 ]; then
fi
]]
--- rpm in RHEL5 does not have os.tmpname()
--- io.tmpfile() can not be resolved to a filename to pass to bash :-/
-tmpname = "/tmp/glusterfs-geo-replication_pretrans_" .. os.date("%s")
+-- Since we run pretrans scripts only for RPMs built for a server build,
+-- we can now use os.tmpname() since it is available on RHEL6 and later
+-- platforms which are server platforms.
+tmpname = os.tmpname()
tmpfile = io.open(tmpname, "w")
tmpfile:write(script)
tmpfile:close()
@@ -1950,9 +1959,10 @@ if [ $? -eq 0 ]; then
fi
]]
--- rpm in RHEL5 does not have os.tmpname()
--- io.tmpfile() can not be resolved to a filename to pass to bash :-/
-tmpname = "/tmp/glusterfs-libs_pretrans_" .. os.date("%s")
+-- Since we run pretrans scripts only for RPMs built for a server build,
+-- we can now use os.tmpname() since it is available on RHEL6 and later
+-- platforms which are server platforms.
+tmpname = os.tmpname()
tmpfile = io.open(tmpname, "w")
tmpfile:write(script)
tmpfile:close()
@@ -1991,9 +2001,10 @@ if [ $? -eq 0 ]; then
fi
]]
--- rpm in RHEL5 does not have os.tmpname()
--- io.tmpfile() can not be resolved to a filename to pass to bash :-/
-tmpname = "/tmp/python-gluster_pretrans_" .. os.date("%s")
+-- Since we run pretrans scripts only for RPMs built for a server build,
+-- we can now use os.tmpname() since it is available on RHEL6 and later
+-- platforms which are server platforms.
+tmpname = os.tmpname()
tmpfile = io.open(tmpname, "w")
tmpfile:write(script)
tmpfile:close()
@@ -2033,9 +2044,10 @@ if [ $? -eq 0 ]; then
fi
]]
--- rpm in RHEL5 does not have os.tmpname()
--- io.tmpfile() can not be resolved to a filename to pass to bash :-/
-tmpname = "/tmp/glusterfs-rdma_pretrans_" .. os.date("%s")
+-- Since we run pretrans scripts only for RPMs built for a server build,
+-- we can now use os.tmpname() since it is available on RHEL6 and later
+-- platforms which are server platforms.
+tmpname = os.tmpname()
tmpfile = io.open(tmpname, "w")
tmpfile:write(script)
tmpfile:close()
@@ -2076,9 +2088,10 @@ if [ $? -eq 0 ]; then
fi
]]
--- rpm in RHEL5 does not have os.tmpname()
--- io.tmpfile() can not be resolved to a filename to pass to bash :-/
-tmpname = "/tmp/glusterfs-resource-agents_pretrans_" .. os.date("%s")
+-- Since we run pretrans scripts only for RPMs built for a server build,
+-- we can now use os.tmpname() since it is available on RHEL6 and later
+-- platforms which are server platforms.
+tmpname = os.tmpname()
tmpfile = io.open(tmpname, "w")
tmpfile:write(script)
tmpfile:close()
@@ -2118,9 +2131,10 @@ if [ $? -eq 0 ]; then
fi
]]
--- rpm in RHEL5 does not have os.tmpname()
--- io.tmpfile() can not be resolved to a filename to pass to bash :-/
-tmpname = "/tmp/glusterfs-server_pretrans_" .. os.date("%s")
+-- Since we run pretrans scripts only for RPMs built for a server build,
+-- we can now use os.tmpname() since it is available on RHEL6 and later
+-- platforms which are server platforms.
+tmpname = os.tmpname()
tmpfile = io.open(tmpname, "w")
tmpfile:write(script)
tmpfile:close()
@@ -2211,6 +2225,13 @@ end
* Thu Nov 24 2016 Jiffin Tony Thottan <jhottan@redhat.com>
- remove S31ganesha-reset.sh from hooks (#1397795)
+* Fri Oct 14 2016 Milind Changire <mchangir@redhat.com>
+- Changed pretrans scripts to use os.tmpname() for enhanced security
+ for server builds only (#1362044)
+
+* Tue Sep 27 2016 Milind Changire <mchangir@redhat.com>
+- Added systemd requirement to glusterfs-server and glusterfs-events packages
+
* Thu Sep 22 2016 Kaleb S. KEITHLEY <kkeithle@redhat.com>
- python-ctypes no long exists, now in python stdlib (#1378436)
@@ -2330,6 +2351,7 @@ end
* Mon May 18 2015 Milind Changire <mchangir@redhat.com>
- Move file peer_add_secret_pub to the server RPM to support glusterfind (#1221544)
+
* Sun May 17 2015 Niels de Vos <ndevos@redhat.com>
- Fix building on RHEL-5 based distributions (#1222317)
--
1.8.3.1
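
For readers more familiar with Python than with Lua scriptlets, the essence of the fix above is to let the runtime pick an unpredictable temporary name instead of hard-coding "/tmp/<package>_pretrans_<epoch>". A hedged sketch follows; the embedded script is shortened and tempfile.mkstemp() stands in for Lua's os.tmpname():

import os
import subprocess
import tempfile

# shortened stand-in for the real pretrans shell script
script = "#!/bin/sh\npidof -x glusterfsd >/dev/null 2>&1 && exit 1\nexit 0\n"

# mkstemp() creates the file with O_EXCL and an unpredictable name, closing
# the window for the name-spoofing attack described above
fd, tmpname = tempfile.mkstemp(prefix="glusterfs_pretrans_")
try:
    os.write(fd, script.encode())
    os.close(fd)
    rc = subprocess.call(["/bin/bash", tmpname])
finally:
    os.remove(tmpname)

if rc != 0:
    raise SystemExit("Detected running glusterfs processes")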

View File

@ -0,0 +1,80 @@
From abd66a26f1a6fb998c0b6b60c3004ea8414ffee0 Mon Sep 17 00:00:00 2001
From: Jiffin Tony Thottan <jthottan@redhat.com>
Date: Thu, 17 Nov 2016 12:44:38 +0530
Subject: [PATCH 24/74] glusterd/gNFS : On post upgrade to 3.2, disable gNFS
for all volumes
Currently on 3.2, gNFS is disabled for newly created volumes and for old
volumes still using the default value. There will be volumes which have
explicitly turned the nfs.disable option off. This change disables gNFS even
for those volumes as well.
label : DOWNSTREAM ONLY
Change-Id: I4ddeb23690271034b0bbb3fc50b359350b5eae87
Signed-off-by: Jiffin Tony Thottan <jthottan@redhat.com>
Reviewed-on: https://code.engineering.redhat.com/gerrit/90425
Reviewed-by: Atin Mukherjee <amukherj@redhat.com>
Tested-by: Atin Mukherjee <amukherj@redhat.com>
---
xlators/mgmt/glusterd/src/glusterd-op-sm.c | 43 +++++++++++++++++-------------
1 file changed, 25 insertions(+), 18 deletions(-)
diff --git a/xlators/mgmt/glusterd/src/glusterd-op-sm.c b/xlators/mgmt/glusterd/src/glusterd-op-sm.c
index 6d5b8cf..09be165 100644
--- a/xlators/mgmt/glusterd/src/glusterd-op-sm.c
+++ b/xlators/mgmt/glusterd/src/glusterd-op-sm.c
@@ -2437,26 +2437,33 @@ glusterd_update_volumes_dict (glusterd_volinfo_t *volinfo)
GF_VALIDATE_OR_GOTO (this->name, conf, out);
/* 3.9.0 onwards gNFS will be disabled by default. In case of an upgrade
- * from anything below than 3.9.0 to 3.9.x the volume's dictionary will
- * not have 'nfs.disable' key set which means the same will not be set
- * to on until explicitly done. setnfs.disable to 'on' at op-version
- * bump up flow is the ideal way here. The same is also applicable for
- * transport.address-family where if the transport type is set to tcp
- * then transport.address-family is defaulted to 'inet'.
+ * from anything below than 3.9.0 to 3.9.x, the value for nfs.disable is
+ * set to 'on' for all volumes even if it is explicitly set to 'off' in
+ * previous version. This change is only applicable to downstream code.
+ * Setting nfs.disable to 'on' at op-version bump up flow is the ideal
+ * way here. The same is also applicable for transport.address-family
+ * where if the transport type is set to tcp then transport.address-family
+ * is defaulted to 'inet'.
*/
if (conf->op_version >= GD_OP_VERSION_3_9_0) {
- if (dict_get_str_boolean (volinfo->dict, NFS_DISABLE_MAP_KEY,
- 1)) {
- ret = dict_set_dynstr_with_alloc (volinfo->dict,
- NFS_DISABLE_MAP_KEY,
- "on");
- if (ret) {
- gf_msg (this->name, GF_LOG_ERROR, errno,
- GD_MSG_DICT_SET_FAILED, "Failed to set "
- "option ' NFS_DISABLE_MAP_KEY ' on "
- "volume %s", volinfo->volname);
- goto out;
- }
+ if (!(dict_get_str_boolean (volinfo->dict, NFS_DISABLE_MAP_KEY,
+ 0))) {
+ gf_msg (this->name, GF_LOG_INFO, 0, 0, "Gluster NFS is"
+ " being deprecated in favor of NFS-Ganesha, "
+ "hence setting nfs.disable to 'on' for volume "
+ "%s. Please re-enable it if requires",
+ volinfo->volname);
+ }
+
+ ret = dict_set_dynstr_with_alloc (volinfo->dict,
+ NFS_DISABLE_MAP_KEY,
+ "on");
+ if (ret) {
+ gf_msg (this->name, GF_LOG_ERROR, errno,
+ GD_MSG_DICT_SET_FAILED, "Failed to set "
+ "option ' NFS_DISABLE_MAP_KEY ' on "
+ "volume %s", volinfo->volname);
+ goto out;
}
ret = dict_get_str (volinfo->dict, "transport.address-family",
&address_family_str);
--
1.8.3.1

View File

@ -0,0 +1,58 @@
From 867536a4ced38d72a7d980cd34bcbf0ce876206a Mon Sep 17 00:00:00 2001
From: Soumya Koduri <skoduri@redhat.com>
Date: Fri, 18 Nov 2016 12:47:06 +0530
Subject: [PATCH 25/74] build: Add dependency on netstat for glusterfs-ganesha
pkg
The portblock resource-agent needs the netstat command, but this dependency
should ideally have been added to the resource-agents package. However, the
fixes (bug1395594, bug1395596) are going to be available only
in the future RHEL 6.9 and RHEL 7.4 releases. Hence, as an interim
workaround, we agreed to add this dependency to the glusterfs-ganesha package.
label : DOWNSTREAM ONLY
Change-Id: I6ac1003103755d7534dd079c821bbaacd8dd94b8
Signed-off-by: Soumya Koduri <skoduri@redhat.com>
Reviewed-on: https://code.engineering.redhat.com/gerrit/90529
Reviewed-by: Jiffin Thottan <jthottan@redhat.com>
Reviewed-by: Milind Changire <mchangir@redhat.com>
Reviewed-by: Atin Mukherjee <amukherj@redhat.com>
---
glusterfs.spec.in | 13 +++++++++++++
1 file changed, 13 insertions(+)
diff --git a/glusterfs.spec.in b/glusterfs.spec.in
index ab61688..343e88f 100644
--- a/glusterfs.spec.in
+++ b/glusterfs.spec.in
@@ -414,6 +414,11 @@ Requires: nfs-ganesha-gluster, pcs, dbus
%if ( 0%{?rhel} && 0%{?rhel} == 6 )
Requires: cman, pacemaker, corosync
%endif
+%if ( 0%{?fedora} ) || ( 0%{?rhel} && 0%{?rhel} > 5 )
+# we need portblock resource-agent in 3.9.5 and later.
+Requires: resource-agents >= 3.9.5
+Requires: net-tools
+%endif
%description ganesha
GlusterFS is a distributed file-system capable of scaling to several
@@ -2225,6 +2230,14 @@ end
* Thu Nov 24 2016 Jiffin Tony Thottan <jhottan@redhat.com>
- remove S31ganesha-reset.sh from hooks (#1397795)
+* Fri Nov 18 2016 Soumya Koduri <skoduri@redhat.com>
+- As an interim fix add dependency on netstat(/net-tools) for glusterfs-ganesha package (#1395574)
+
+* Fri Nov 11 2016 Soumya Koduri <skoduri@redhat.com>
+- Add dependency on portblock resource agent for ganesha package (#1278336)
+- Fix incorrect Requires for portblock resource agent (#1278336)
+- Update version checks for portblock resource agent on RHEL (#1278336)
+
* Fri Oct 14 2016 Milind Changire <mchangir@redhat.com>
- Changed pretrans scripts to use os.tmpname() for enhanced security
for server builds only (#1362044)
--
1.8.3.1

View File

@ -0,0 +1,105 @@
From 14bfa98824d40ff1f721a905f8e8ffd557f96eef Mon Sep 17 00:00:00 2001
From: Jiffin Tony Thottan <jthottan@redhat.com>
Date: Thu, 15 Dec 2016 17:14:01 +0530
Subject: [PATCH 26/74] glusterd/gNFS : explicitly set "nfs.disable" to "off"
after 3.2 upgrade
Gluster NFS was enabled by default for all volumes till 3.1. But from 3.2
onwards it will be disabled for new volumes by setting "nfs.disable" to "on".
This patch takes care of existing volumes in such a way that if the
option is not configured, it will set "nfs.disable" to "off" during the
op-version bump up.
Also this patch removes the warning message shown while enabling Gluster NFS
for a volume.
label : DOWNSTREAM ONLY
Change-Id: Ib199c3180204f917791b4627c58d846750d18a5a
Signed-off-by: Jiffin Tony Thottan <jthottan@redhat.com>
Reviewed-on: https://code.engineering.redhat.com/gerrit/93146
Reviewed-by: Soumya Koduri <skoduri@redhat.com>
Reviewed-by: Atin Mukherjee <amukherj@redhat.com>
---
cli/src/cli-cmd-parser.c | 14 --------------
xlators/mgmt/glusterd/src/glusterd-op-sm.c | 29 ++++++++++++-----------------
2 files changed, 12 insertions(+), 31 deletions(-)
diff --git a/cli/src/cli-cmd-parser.c b/cli/src/cli-cmd-parser.c
index c8ed367..ca4d906 100644
--- a/cli/src/cli-cmd-parser.c
+++ b/cli/src/cli-cmd-parser.c
@@ -1621,20 +1621,6 @@ cli_cmd_volume_set_parse (struct cli_state *state, const char **words,
goto out;
}
}
- if ((!strcmp (key, "nfs.disable")) &&
- (!strcmp (value, "off"))) {
- question = "Gluster NFS is being deprecated in favor "
- "of NFS-Ganesha Enter \"yes\" to continue "
- "using Gluster NFS";
- answer = cli_cmd_get_confirmation (state, question);
- if (GF_ANSWER_NO == answer) {
- gf_log ("cli", GF_LOG_ERROR, "Operation "
- "cancelled, exiting");
- *op_errstr = gf_strdup ("Aborted by user.");
- ret = -1;
- goto out;
- }
- }
}
ret = dict_set_int32 (dict, "count", wordcount-3);
diff --git a/xlators/mgmt/glusterd/src/glusterd-op-sm.c b/xlators/mgmt/glusterd/src/glusterd-op-sm.c
index 09be165..0557ad8 100644
--- a/xlators/mgmt/glusterd/src/glusterd-op-sm.c
+++ b/xlators/mgmt/glusterd/src/glusterd-op-sm.c
@@ -2438,9 +2438,9 @@ glusterd_update_volumes_dict (glusterd_volinfo_t *volinfo)
/* 3.9.0 onwards gNFS will be disabled by default. In case of an upgrade
* from anything below than 3.9.0 to 3.9.x, the value for nfs.disable is
- * set to 'on' for all volumes even if it is explicitly set to 'off' in
+ * set to 'off' for all volumes even if it is not explicitly set in the
* previous version. This change is only applicable to downstream code.
- * Setting nfs.disable to 'on' at op-version bump up flow is the ideal
+ * Setting nfs.disable to 'off' at op-version bump up flow is the ideal
* way here. The same is also applicable for transport.address-family
* where if the transport type is set to tcp then transport.address-family
* is defaulted to 'inet'.
@@ -2448,23 +2448,18 @@ glusterd_update_volumes_dict (glusterd_volinfo_t *volinfo)
if (conf->op_version >= GD_OP_VERSION_3_9_0) {
if (!(dict_get_str_boolean (volinfo->dict, NFS_DISABLE_MAP_KEY,
0))) {
- gf_msg (this->name, GF_LOG_INFO, 0, 0, "Gluster NFS is"
- " being deprecated in favor of NFS-Ganesha, "
- "hence setting nfs.disable to 'on' for volume "
- "%s. Please re-enable it if requires",
- volinfo->volname);
+ ret = dict_set_dynstr_with_alloc (volinfo->dict,
+ NFS_DISABLE_MAP_KEY,
+ "off");
+ if (ret) {
+ gf_msg (this->name, GF_LOG_ERROR, errno,
+ GD_MSG_DICT_SET_FAILED, "Failed to turn "
+ "off ' NFS_DISABLE_MAP_KEY ' option for "
+ "volume %s", volinfo->volname);
+ goto out;
+ }
}
- ret = dict_set_dynstr_with_alloc (volinfo->dict,
- NFS_DISABLE_MAP_KEY,
- "on");
- if (ret) {
- gf_msg (this->name, GF_LOG_ERROR, errno,
- GD_MSG_DICT_SET_FAILED, "Failed to set "
- "option ' NFS_DISABLE_MAP_KEY ' on "
- "volume %s", volinfo->volname);
- goto out;
- }
ret = dict_get_str (volinfo->dict, "transport.address-family",
&address_family_str);
if (ret) {
--
1.8.3.1
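
A small sketch of the upgrade rule described in the commit message above (not GlusterD code; the op-version constant is an assumed value): during the op-version bump, volumes without an explicit nfs.disable setting are pinned to "off" so their gNFS access is preserved, while volumes the admin already configured are left alone.

GD_OP_VERSION_3_9_0 = 30900   # assumed numeric value, for illustration

def apply_nfs_disable_default(vol_dict, cluster_op_version):
    if cluster_op_version >= GD_OP_VERSION_3_9_0:
        if "nfs.disable" not in vol_dict:
            vol_dict["nfs.disable"] = "off"   # keep legacy gNFS behaviour
    return vol_dict

print(apply_nfs_disable_default({}, 30900))                     # {'nfs.disable': 'off'}
print(apply_nfs_disable_default({"nfs.disable": "on"}, 30900))  # left unchanged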

View File

@ -0,0 +1,132 @@
From 52798b6934ea584b25b1ade64cb52a7439c1b113 Mon Sep 17 00:00:00 2001
From: Atin Mukherjee <amukherj@redhat.com>
Date: Tue, 3 Jan 2017 18:13:29 +0530
Subject: [PATCH 27/74] glusterd: spawn nfs daemon in op-version bump if
nfs.disable key is absent
From 3.2.0 onwards gNFS will be disabled by default. However, any cluster
upgraded to 3.2.0 with existing volumes exposed over gNFS should
continue to have gNFS access, and hence post upgrade the gNFS service should
come up after bumping up the op-version. Although the key nfs.disable
was handled and managed correctly in the upgrade path, the gNFS daemon
was never spawned in this case.
The fix is to spawn the gNFS daemon in the op-version bump up code path if
the nfs.disable option is not set.
Label : DOWNSTREAM ONLY
Change-Id: Icac6f3653160f79b271f25f5df0c89690917e702
Signed-off-by: Atin Mukherjee <amukherj@redhat.com>
Reviewed-on: https://code.engineering.redhat.com/gerrit/94006
Reviewed-by: Jiffin Thottan <jthottan@redhat.com>
Reviewed-by: Samikshan Bairagya <sbairagy@redhat.com>
---
xlators/mgmt/glusterd/src/glusterd-messages.h | 8 ++++++
xlators/mgmt/glusterd/src/glusterd-op-sm.c | 35 ++++++++++++++++++++++++---
2 files changed, 40 insertions(+), 3 deletions(-)
diff --git a/xlators/mgmt/glusterd/src/glusterd-messages.h b/xlators/mgmt/glusterd/src/glusterd-messages.h
index 65d4353..8bb4c43 100644
--- a/xlators/mgmt/glusterd/src/glusterd-messages.h
+++ b/xlators/mgmt/glusterd/src/glusterd-messages.h
@@ -4937,6 +4937,14 @@
*/
#define GD_MSG_GARBAGE_ARGS (GLUSTERD_COMP_BASE + 611)
+/*!
+ * @messageid
+ * @diagnosis
+ * @recommendedaction
+ *
+ */
+#define GD_MSG_SVC_START_FAIL (GLUSTERD_COMP_BASE + 590)
+
/*------------*/
#define glfs_msg_end_x GLFS_MSGID_END, "Invalid: End of messages"
diff --git a/xlators/mgmt/glusterd/src/glusterd-op-sm.c b/xlators/mgmt/glusterd/src/glusterd-op-sm.c
index 0557ad8..4fc719a 100644
--- a/xlators/mgmt/glusterd/src/glusterd-op-sm.c
+++ b/xlators/mgmt/glusterd/src/glusterd-op-sm.c
@@ -2423,7 +2423,8 @@ out:
}
static int
-glusterd_update_volumes_dict (glusterd_volinfo_t *volinfo)
+glusterd_update_volumes_dict (glusterd_volinfo_t *volinfo,
+ gf_boolean_t *start_nfs_svc)
{
int ret = -1;
xlator_t *this = NULL;
@@ -2436,6 +2437,8 @@ glusterd_update_volumes_dict (glusterd_volinfo_t *volinfo)
conf = this->private;
GF_VALIDATE_OR_GOTO (this->name, conf, out);
+ ret = 0;
+
/* 3.9.0 onwards gNFS will be disabled by default. In case of an upgrade
* from anything below than 3.9.0 to 3.9.x, the value for nfs.disable is
* set to 'off' for all volumes even if it is not explicitly set in the
@@ -2458,6 +2461,12 @@ glusterd_update_volumes_dict (glusterd_volinfo_t *volinfo)
"volume %s", volinfo->volname);
goto out;
}
+ /* If the volume is started then mark start_nfs_svc to
+ * true such that nfs daemon can be spawned up
+ */
+ if (GLUSTERD_STATUS_STARTED == volinfo->status)
+ *start_nfs_svc = _gf_true;
+
}
ret = dict_get_str (volinfo->dict, "transport.address-family",
@@ -2478,9 +2487,12 @@ glusterd_update_volumes_dict (glusterd_volinfo_t *volinfo)
}
}
}
+ ret = glusterd_store_volinfo (volinfo,
+ GLUSTERD_VOLINFO_VER_AC_INCREMENT);
+ if (ret)
+ goto out;
+
}
- ret = glusterd_store_volinfo (volinfo,
- GLUSTERD_VOLINFO_VER_AC_INCREMENT);
out:
return ret;
@@ -2529,6 +2541,7 @@ glusterd_op_set_all_volume_options (xlator_t *this, dict_t *dict,
uint32_t op_version = 0;
glusterd_volinfo_t *volinfo = NULL;
glusterd_svc_t *svc = NULL;
+ gf_boolean_t start_nfs_svc = _gf_false;
conf = this->private;
ret = dict_get_str (dict, "key1", &key);
@@ -2645,6 +2658,22 @@ glusterd_op_set_all_volume_options (xlator_t *this, dict_t *dict,
"Failed to store op-version.");
}
}
+ cds_list_for_each_entry (volinfo, &conf->volumes, vol_list) {
+ ret = glusterd_update_volumes_dict (volinfo,
+ &start_nfs_svc);
+ if (ret)
+ goto out;
+ }
+ if (start_nfs_svc) {
+ ret = conf->nfs_svc.manager (&(conf->nfs_svc), NULL,
+ PROC_START_NO_WAIT);
+ if (ret) {
+ gf_msg (this->name, GF_LOG_ERROR, 0,
+ GD_MSG_SVC_START_FAIL,
+ "unable to start nfs service");
+ goto out;
+ }
+ }
/* No need to save cluster.op-version in conf->opts
*/
goto out;
--
1.8.3.1
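
Building on the previous patch, the flag added here makes sure the gNFS daemon is started once, after all volumes have been processed, rather than per volume. A rough Python rendering of that flow (all names invented; the dictionaries stand in for volinfo):

def bump_op_version(volumes, start_nfs_service):
    start_nfs_svc = False
    for vol in volumes:
        if "nfs.disable" not in vol["dict"]:
            vol["dict"]["nfs.disable"] = "off"
            # only a started volume needs the daemon spawned
            if vol["status"] == "started":
                start_nfs_svc = True
    if start_nfs_svc:
        # stand-in for conf->nfs_svc.manager(..., PROC_START_NO_WAIT)
        start_nfs_service()

vols = [{"dict": {}, "status": "started"},
        {"dict": {"nfs.disable": "on"}, "status": "stopped"}]
bump_op_version(vols, lambda: print("starting gNFS daemon"))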

View File

@ -0,0 +1,42 @@
From 91489431c48f6fa9bce3ee6f377bc9702602b18d Mon Sep 17 00:00:00 2001
From: Poornima G <pgurusid@redhat.com>
Date: Wed, 26 Apr 2017 14:07:58 +0530
Subject: [PATCH 28/74] glusterd, parallel-readdir: Change the op-version of
parallel-readdir to 31100
Issue: Downstream 3.2 was released with op-version 31001, while the
parallel-readdir feature was released upstream in 3.10 and hence with
op-version 31000. With this, parallel-readdir would be allowed in 3.2
clusters/clients as well, but 3.2 didn't have the parallel-readdir feature
backported.
Fix:
Increase the op-version of the parallel-readdir feature, only in downstream,
to 31100 (the highest 3.3 op-version).
Label: DOWNSTREAM ONLY
Change-Id: I2640520985627f3a1cb4fb96e28350f8bb9b146c
Signed-off-by: Poornima G <pgurusid@redhat.com>
Reviewed-on: https://code.engineering.redhat.com/gerrit/104403
Reviewed-by: Atin Mukherjee <amukherj@redhat.com>
Tested-by: Atin Mukherjee <amukherj@redhat.com>
---
xlators/mgmt/glusterd/src/glusterd-volume-set.c | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/xlators/mgmt/glusterd/src/glusterd-volume-set.c b/xlators/mgmt/glusterd/src/glusterd-volume-set.c
index 93ef85c..9729767 100644
--- a/xlators/mgmt/glusterd/src/glusterd-volume-set.c
+++ b/xlators/mgmt/glusterd/src/glusterd-volume-set.c
@@ -3376,7 +3376,7 @@ struct volopt_map_entry glusterd_volopt_map[] = {
.option = "parallel-readdir",
.value = "off",
.type = DOC,
- .op_version = GD_OP_VERSION_3_10_0,
+ .op_version = GD_OP_VERSION_3_11_0,
.validate_fn = validate_parallel_readdir,
.description = "If this option is enabled, the readdir operation "
"is performed in parallel on all the bricks, thus "
--
1.8.3.1
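
The op-version numbers mentioned above act as a simple gate: an option can only be enabled once the whole cluster runs at or above the op-version the option is tagged with. A toy illustration (not GlusterD code) using the values from the commit message:

PARALLEL_READDIR_OP_VERSION = 31100   # downstream value set by this patch

def can_set_option(cluster_op_version,
                   option_op_version=PARALLEL_READDIR_OP_VERSION):
    return cluster_op_version >= option_op_version

print(can_set_option(31001))   # False: a downstream 3.2 cluster rejects parallel-readdir
print(can_set_option(31100))   # True: 3.3 (and later) clusters may enable it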

View File

@ -0,0 +1,68 @@
From 7562ffbce9d768d5af9d23361cf6dd6ef992bead Mon Sep 17 00:00:00 2001
From: Jiffin Tony Thottan <jthottan@redhat.com>
Date: Fri, 10 Nov 2017 23:38:14 +0530
Subject: [PATCH 29/74] build: exclude glusterfssharedstorage.service and
mount-shared-storage.sh from client builds
Label: DOWNSTREAM ONLY
Change-Id: I7d76ba0742b5c6a44505eb883eacda0c91efbe51
Signed-off-by: Jiffin Tony Thottan <jthottan@redhat.com>
Reviewed-on: https://code.engineering.redhat.com/gerrit/109684
Reviewed-by: Milind Changire <mchangir@redhat.com>
Tested-by: Milind Changire <mchangir@redhat.com>
Reviewed-by: Atin Mukherjee <amukherj@redhat.com>
---
glusterfs.spec.in | 22 +++++++++++++++++++++-
1 file changed, 21 insertions(+), 1 deletion(-)
diff --git a/glusterfs.spec.in b/glusterfs.spec.in
index 343e88f..4596e3f 100644
--- a/glusterfs.spec.in
+++ b/glusterfs.spec.in
@@ -1083,6 +1083,20 @@ exit 0
%exclude %{_libexecdir}/glusterfs/glusterfind
%exclude %{_bindir}/glusterfind
%exclude %{_libexecdir}/glusterfs/peer_add_secret_pub
+# exclude eventsapi files
+%exclude %{_sysconfdir}/glusterfs/eventsconfig.json
+%exclude %{_sharedstatedir}/glusterd/events
+%exclude %{_libexecdir}/glusterfs/events
+%exclude %{_libexecdir}/glusterfs/peer_eventsapi.py*
+%exclude %{_sbindir}/glustereventsd
+%exclude %{_sbindir}/gluster-eventsapi
+%exclude %{_datadir}/glusterfs/scripts/eventsdash.py*
+%if ( 0%{?_with_systemd:1} )
+%exclude %{_unitdir}/glustereventsd.service
+%exclude %_init_glusterfssharedstorage
+%else
+%exclude %{_sysconfdir}/init.d/glustereventsd
+%endif
# exclude server files
%exclude %{_sharedstatedir}/glusterd/*
%exclude %{_sysconfdir}/glusterfs
@@ -1123,6 +1137,9 @@ exit 0
%exclude %{_sbindir}/glusterd
%exclude %{_sbindir}/snap_scheduler.py
%exclude %{_datadir}/glusterfs/scripts/stop-all-gluster-processes.sh
+%if ( 0%{?_with_systemd:1} )
+%exclude %{_libexecdir}/glusterfs/mount-shared-storage.sh
+%endif
%exclude %{_sbindir}/conf.py*
%if 0%{?_tmpfilesdir:1}
%exclude %{_tmpfilesdir}/gluster.conf
@@ -2181,7 +2198,10 @@ end
* Thu Jul 13 2017 Kaleb S. KEITHLEY <kkeithle@redhat.com>
- various directories not owned by any package
-* Fri Jun 16 2017 Jiffin Tony Thottan <jthottan@redhat.com>
+* Wed Jun 21 2017 Jiffin Tony Thottan <jthottan@redhat.com>
+- Exclude glusterfssharedstorage.service and mount-shared-storage.sh from client builds
+
+* Tue Jun 20 2017 Jiffin Tony Thottan <jthottan@redhat.com>
- Add glusterfssharedstorage.service systemd file
* Fri Jun 9 2017 Poornima G <pgurusid@redhat.com>
--
1.8.3.1

View File

@ -0,0 +1,50 @@
From 8279b8c5f23cddd1b7db59c56ed2d8896ac49aa7 Mon Sep 17 00:00:00 2001
From: Milind Changire <mchangir@redhat.com>
Date: Tue, 4 Jul 2017 17:10:27 +0530
Subject: [PATCH 30/74] build: make gf_attach available in glusterfs-server
Problem:
gf_attach was erroneously packaged in glusterfs-fuse
Solution:
move gf_attach listing to server package
add gf_attach to the exclude listing for client builds
Label: DOWNSTREAM ONLY
Change-Id: I0de45700badcbab65febf2385f1ac074c44cfa7c
Signed-off-by: Milind Changire <mchangir@redhat.com>
Reviewed-on: https://code.engineering.redhat.com/gerrit/111001
Reviewed-by: Atin Mukherjee <amukherj@redhat.com>
---
glusterfs.spec.in | 7 +++++++
1 file changed, 7 insertions(+)
diff --git a/glusterfs.spec.in b/glusterfs.spec.in
index 4596e3f..600fa6e 100644
--- a/glusterfs.spec.in
+++ b/glusterfs.spec.in
@@ -1135,6 +1135,7 @@ exit 0
%exclude %{_sbindir}/gcron.py
%exclude %{_sbindir}/glfsheal
%exclude %{_sbindir}/glusterd
+%exclude %{_sbindir}/gf_attach
%exclude %{_sbindir}/snap_scheduler.py
%exclude %{_datadir}/glusterfs/scripts/stop-all-gluster-processes.sh
%if ( 0%{?_with_systemd:1} )
@@ -2198,6 +2199,12 @@ end
* Thu Jul 13 2017 Kaleb S. KEITHLEY <kkeithle@redhat.com>
- various directories not owned by any package
+* Tue Jul 04 2017 Milind Changire <mchangir@redhat.com>
+- moved %{_sbindir}/gf_attach from glusterfs-fuse to glusterfs-server
+
+* Fri Jun 23 2017 Kaleb S. KEITHLEY <kkeithle@redhat.com>
+- DOWNSTREAM ONLY remove Requires: selinux-policy for puddle generation
+
* Wed Jun 21 2017 Jiffin Tony Thottan <jthottan@redhat.com>
- Exclude glusterfssharedstorage.service and mount-shared-storage.sh from client builds
--
1.8.3.1

View File

@ -0,0 +1,37 @@
From e1f21c716b9a9f245e8ad2c679fb12fd86c8655e Mon Sep 17 00:00:00 2001
From: Samikshan Bairagya <sbairagy@redhat.com>
Date: Mon, 10 Jul 2017 11:54:52 +0530
Subject: [PATCH 31/74] glusterd: Revert op-version for
"cluster.max-brick-per-process"
The op-version for the "cluster.max-brick-per-process" option was
set to 3.12.0 in the upstream patch and was backported here:
https://code.engineering.redhat.com/gerrit/#/c/111799. This commit
reverts the op-version for this option to 3.11.1 instead.
Label: DOWNSTREAM ONLY
Change-Id: I23639cef43d41915eea0394d019b1e0796a99d7b
Signed-off-by: Samikshan Bairagya <sbairagy@redhat.com>
Reviewed-on: https://code.engineering.redhat.com/gerrit/111804
Reviewed-by: Atin Mukherjee <amukherj@redhat.com>
---
xlators/mgmt/glusterd/src/glusterd-volume-set.c | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/xlators/mgmt/glusterd/src/glusterd-volume-set.c b/xlators/mgmt/glusterd/src/glusterd-volume-set.c
index 9729767..2210b82 100644
--- a/xlators/mgmt/glusterd/src/glusterd-volume-set.c
+++ b/xlators/mgmt/glusterd/src/glusterd-volume-set.c
@@ -3449,7 +3449,7 @@ struct volopt_map_entry glusterd_volopt_map[] = {
{ .key = GLUSTERD_BRICKMUX_LIMIT_KEY,
.voltype = "mgmt/glusterd",
.value = "0",
- .op_version = GD_OP_VERSION_3_12_0,
+ .op_version = GD_OP_VERSION_3_11_1,
.validate_fn = validate_mux_limit,
.type = GLOBAL_DOC,
.description = "This option can be used to limit the number of brick "
--
1.8.3.1

View File

@ -0,0 +1,56 @@
From 472aebd90fb081db85b00491ce7034a9b971f4e1 Mon Sep 17 00:00:00 2001
From: Samikshan Bairagya <sbairagy@redhat.com>
Date: Wed, 9 Aug 2017 14:32:59 +0530
Subject: [PATCH 32/74] cli: Add message for user before modifying
brick-multiplex option
Users should be notified that the brick-multiplexing feature is
supported only for container workloads (CNS/CRS). It should also be
made known to users that it is advisable to either have all volumes
in the stopped state or have no bricks running before modifying the
"brick-multiplex" option. This commit makes sure these messages
are displayed to the user before brick-multiplexing is enabled or
disabled.
Label: DOWNSTREAM ONLY
Change-Id: Ic40294b26c691ea03185c4d1fce840ef23f95718
Signed-off-by: Samikshan Bairagya <sbairagy@redhat.com>
Reviewed-on: https://code.engineering.redhat.com/gerrit/114793
Reviewed-by: Atin Mukherjee <amukherj@redhat.com>
---
cli/src/cli-cmd-parser.c | 18 ++++++++++++++++++
1 file changed, 18 insertions(+)
diff --git a/cli/src/cli-cmd-parser.c b/cli/src/cli-cmd-parser.c
index ca4d906..216e050 100644
--- a/cli/src/cli-cmd-parser.c
+++ b/cli/src/cli-cmd-parser.c
@@ -1621,6 +1621,24 @@ cli_cmd_volume_set_parse (struct cli_state *state, const char **words,
goto out;
}
}
+
+ if ((strcmp (key, "cluster.brick-multiplex") == 0)) {
+ question = "Brick-multiplexing is supported only for "
+ "container workloads (CNS/CRS). Also it is "
+ "advised to make sure that either all "
+ "volumes are in stopped state or no bricks "
+ "are running before this option is modified."
+ "Do you still want to continue?";
+
+ answer = cli_cmd_get_confirmation (state, question);
+ if (GF_ANSWER_NO == answer) {
+ gf_log ("cli", GF_LOG_ERROR, "Operation "
+ "cancelled, exiting");
+ *op_errstr = gf_strdup ("Aborted by user.");
+ ret = -1;
+ goto out;
+ }
+ }
}
ret = dict_set_int32 (dict, "count", wordcount-3);
--
1.8.3.1

View File

@ -0,0 +1,114 @@
From 1ce0b65090c888b0e2b28cab03731674f4988aeb Mon Sep 17 00:00:00 2001
From: Milind Changire <mchangir@redhat.com>
Date: Tue, 10 Oct 2017 09:58:24 +0530
Subject: [PATCH 33/74] build: launch glusterd upgrade after all new bits are
installed
Problem:
glusterd upgrade mode needs new bits from glusterfs-rdma, which is
optional and causes the dependency graph to break since it is
not tied into the glusterfs-server requirements
Solution:
Run glusterd upgrade mode after all new bits are installed
i.e. in %posttrans server section
Label: DOWNSTREAM ONLY
Change-Id: I356e02d0bf0eaaef43c20ce07b388262f63093a4
Signed-off-by: Milind Changire <mchangir@redhat.com>
Reviewed-on: https://code.engineering.redhat.com/gerrit/120094
Reviewed-by: Atin Mukherjee <amukherj@redhat.com>
Tested-by: RHGS Build Bot <nigelb@redhat.com>
Reviewed-by: Raghavendra Talur <rtalur@redhat.com>
---
glusterfs.spec.in | 56 ++++++++++++++++++++++++++++++++++---------------------
1 file changed, 35 insertions(+), 21 deletions(-)
diff --git a/glusterfs.spec.in b/glusterfs.spec.in
index 600fa6e..f4386de 100644
--- a/glusterfs.spec.in
+++ b/glusterfs.spec.in
@@ -963,27 +963,6 @@ fi
%firewalld_reload
%endif
-pidof -c -o %PPID -x glusterd &> /dev/null
-if [ $? -eq 0 ]; then
- kill -9 `pgrep -f gsyncd.py` &> /dev/null
-
- killall --wait glusterd &> /dev/null
- glusterd --xlator-option *.upgrade=on -N
-
- #Cleaning leftover glusterd socket file which is created by glusterd in
- #rpm_script_t context.
- rm -f %{_rundir}/glusterd.socket
-
- # glusterd _was_ running, we killed it, it exited after *.upgrade=on,
- # so start it again
- %_init_start glusterd
-else
- glusterd --xlator-option *.upgrade=on -N
-
- #Cleaning leftover glusterd socket file which is created by glusterd in
- #rpm_script_t context.
- rm -f %{_rundir}/glusterd.socket
-fi
%endif
##-----------------------------------------------------------------------------
@@ -2166,6 +2145,35 @@ os.remove(tmpname)
if not (ok == 0) then
error("Detected running glusterfs processes", ok)
end
+
+%posttrans server
+pidof -c -o %PPID -x glusterd &> /dev/null
+if [ $? -eq 0 ]; then
+ kill -9 `pgrep -f gsyncd.py` &> /dev/null
+
+ killall --wait -SIGTERM glusterd &> /dev/null
+
+ if [ "$?" != "0" ]; then
+ echo "killall failed while killing glusterd"
+ fi
+
+ glusterd --xlator-option *.upgrade=on -N
+
+ #Cleaning leftover glusterd socket file which is created by glusterd in
+ #rpm_script_t context.
+ rm -rf /var/run/glusterd.socket
+
+ # glusterd _was_ running, we killed it, it exited after *.upgrade=on,
+ # so start it again
+ %_init_start glusterd
+else
+ glusterd --xlator-option *.upgrade=on -N
+
+ #Cleaning leftover glusterd socket file which is created by glusterd in
+ #rpm_script_t context.
+ rm -rf /var/run/glusterd.socket
+fi
+
%endif
# Events
@@ -2190,9 +2198,15 @@ end
%endif
%changelog
+* Tue Oct 10 2017 Milind Changire <mchangir@redhat.com>
+- DOWNSTREAM ONLY patch - launch glusterd in upgrade mode after all new bits have been installed
+
* Tue Aug 22 2017 Kaleb S. KEITHLEY <kkeithle@redhat.com>
- libibverbs-devel, librdmacm-devel -> rdma-core-devel #1483996
+* Fri Aug 04 2017 Kaleb S. KEITHLEY <kkeithle@rehat.com>
+- /var/lib/glusterd/options made config(noreplace) to avoid losing shared state info
+
* Thu Jul 20 2017 Aravinda VK <avishwan@redhat.com>
- Added new tool/binary to set the gfid2path xattr on files
--
1.8.3.1
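
For clarity, here is a loose Python rendering of what the relocated %posttrans server scriptlet does (subprocess calls only; "systemctl start" is a stand-in for the %_init_start macro): stop a running glusterd, run it once in upgrade mode to regenerate volfiles, remove the stale socket, and restart the service only if it was running before.

import os
import subprocess

def glusterd_running():
    return subprocess.call(["pidof", "-x", "glusterd"],
                           stdout=subprocess.DEVNULL,
                           stderr=subprocess.DEVNULL) == 0

def posttrans_upgrade():
    was_running = glusterd_running()
    if was_running:
        subprocess.call("kill -9 `pgrep -f gsyncd.py`", shell=True)
        subprocess.call(["killall", "--wait", "glusterd"])
    # one-shot run in upgrade mode: regenerates volfiles and exits
    subprocess.call(["glusterd", "--xlator-option", "*.upgrade=on", "-N"])
    # clean the leftover socket created under the rpm_script_t context
    try:
        os.remove("/var/run/glusterd.socket")
    except OSError:
        pass
    if was_running:
        subprocess.call(["systemctl", "start", "glusterd"])   # %_init_start stand-in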

View File

@ -0,0 +1,76 @@
From 58e52a8862aff553a883ee8b554f38baa2bda9a6 Mon Sep 17 00:00:00 2001
From: Milind Changire <mchangir@redhat.com>
Date: Tue, 7 Nov 2017 18:32:59 +0530
Subject: [PATCH 34/74] build: remove pretrans script for python-gluster
Remove pretrans scriptlet for python-gluster.
Label: DOWNSTREAM ONLY
Change-Id: Iee006354c596aedbd70438a3bdd583de28837190
Signed-off-by: Milind Changire <mchangir@redhat.com>
Reviewed-on: https://code.engineering.redhat.com/gerrit/122556
Reviewed-by: Prashanth Pai <ppai@redhat.com>
Reviewed-by: Aravinda Vishwanathapura Krishna Murthy <avishwan@redhat.com>
Reviewed-by: Atin Mukherjee <amukherj@redhat.com>
Tested-by: RHGS Build Bot <nigelb@redhat.com>
---
glusterfs.spec.in | 42 ------------------------------------------
1 file changed, 42 deletions(-)
diff --git a/glusterfs.spec.in b/glusterfs.spec.in
index f4386de..8c16477 100644
--- a/glusterfs.spec.in
+++ b/glusterfs.spec.in
@@ -1976,48 +1976,6 @@ end
-%pretrans -n python-gluster -p <lua>
-if not posix.access("/bin/bash", "x") then
- -- initial installation, no shell, no running glusterfsd
- return 0
-end
-
--- TODO: move this completely to a lua script
--- For now, we write a temporary bash script and execute that.
-
-script = [[#!/bin/sh
-pidof -c -o %PPID -x glusterfsd &>/dev/null
-
-if [ $? -eq 0 ]; then
- pushd . > /dev/null 2>&1
- for volume in /var/lib/glusterd/vols/*; do cd $volume;
- vol_type=`grep '^type=' info | awk -F'=' '{print $2}'`
- volume_started=`grep '^status=' info | awk -F'=' '{print $2}'`
- if [ $vol_type -eq 0 ] && [ $volume_started -eq 1 ] ; then
- exit 1;
- fi
- done
-
- popd > /dev/null 2>&1
- exit 1;
-fi
-]]
-
--- Since we run pretrans scripts only for RPMs built for a server build,
--- we can now use os.tmpname() since it is available on RHEL6 and later
--- platforms which are server platforms.
-tmpname = os.tmpname()
-tmpfile = io.open(tmpname, "w")
-tmpfile:write(script)
-tmpfile:close()
-ok, how, val = os.execute("/bin/bash " .. tmpname)
-os.remove(tmpname)
-if not (ok == 0) then
- error("Detected running glusterfs processes", ok)
-end
-
-
-
%if ( 0%{!?_without_rdma:1} )
%pretrans rdma -p <lua>
if not posix.access("/bin/bash", "x") then
--
1.8.3.1

View File

@ -0,0 +1,99 @@
From 88ed6bd3e752a028b5372aa948a191fa49377459 Mon Sep 17 00:00:00 2001
From: Atin Mukherjee <amukherj@redhat.com>
Date: Fri, 10 Nov 2017 19:17:27 +0530
Subject: [PATCH 35/74] glusterd: regenerate volfiles on op-version bump up
Please note that the LOC of the downstream patch differs because of a
downstream-only fix: https://code.engineering.redhat.com/gerrit/94006
Label: DOWNSTREAM ONLY
>Reviewed-on: https://review.gluster.org/16455
>NetBSD-regression: NetBSD Build System <jenkins@build.gluster.org>
>Smoke: Gluster Build System <jenkins@build.gluster.org>
>CentOS-regression: Gluster Build System <jenkins@build.gluster.org>
>Reviewed-by: Prashanth Pai <ppai@redhat.com>
>Reviewed-by: Kaushal M <kaushal@redhat.com>
Change-Id: I2fe7a3ebea19492d52253ad5a1fdd67ac95c71c8
Signed-off-by: Atin Mukherjee <amukherj@redhat.com>
Reviewed-on: https://code.engineering.redhat.com/gerrit/96368
Reviewed-by: Samikshan Bairagya <sbairagy@redhat.com>
Reviewed-by: Prashanth Pai <ppai@redhat.com>
---
xlators/mgmt/glusterd/src/glusterd-op-sm.c | 38 ++++++++++--------------------
1 file changed, 13 insertions(+), 25 deletions(-)
diff --git a/xlators/mgmt/glusterd/src/glusterd-op-sm.c b/xlators/mgmt/glusterd/src/glusterd-op-sm.c
index 4fc719a..96e0860 100644
--- a/xlators/mgmt/glusterd/src/glusterd-op-sm.c
+++ b/xlators/mgmt/glusterd/src/glusterd-op-sm.c
@@ -2612,7 +2612,8 @@ glusterd_op_set_all_volume_options (xlator_t *this, dict_t *dict,
NULL);
if (ret)
goto out;
- ret = glusterd_update_volumes_dict (volinfo);
+ ret = glusterd_update_volumes_dict
+ (volinfo, &start_nfs_svc);
if (ret)
goto out;
if (!volinfo->is_snap_volume) {
@@ -2622,14 +2623,6 @@ glusterd_op_set_all_volume_options (xlator_t *this, dict_t *dict,
if (ret)
goto out;
}
-
- if (volinfo->type == GF_CLUSTER_TYPE_TIER) {
- svc = &(volinfo->tierd.svc);
- ret = svc->reconfigure (volinfo);
- if (ret)
- goto out;
- }
-
ret = glusterd_create_volfiles_and_notify_services (volinfo);
if (ret) {
gf_msg (this->name, GF_LOG_ERROR, 0,
@@ -2651,6 +2644,17 @@ glusterd_op_set_all_volume_options (xlator_t *this, dict_t *dict,
}
}
}
+ if (start_nfs_svc) {
+ ret = conf->nfs_svc.manager (&(conf->nfs_svc),
+ NULL,
+ PROC_START_NO_WAIT);
+ if (ret) {
+ gf_msg (this->name, GF_LOG_ERROR, 0,
+ GD_MSG_SVC_START_FAIL,
+ "unable to start nfs service");
+ goto out;
+ }
+ }
ret = glusterd_store_global_info (this);
if (ret) {
gf_msg (this->name, GF_LOG_ERROR, 0,
@@ -2658,22 +2662,6 @@ glusterd_op_set_all_volume_options (xlator_t *this, dict_t *dict,
"Failed to store op-version.");
}
}
- cds_list_for_each_entry (volinfo, &conf->volumes, vol_list) {
- ret = glusterd_update_volumes_dict (volinfo,
- &start_nfs_svc);
- if (ret)
- goto out;
- }
- if (start_nfs_svc) {
- ret = conf->nfs_svc.manager (&(conf->nfs_svc), NULL,
- PROC_START_NO_WAIT);
- if (ret) {
- gf_msg (this->name, GF_LOG_ERROR, 0,
- GD_MSG_SVC_START_FAIL,
- "unable to start nfs service");
- goto out;
- }
- }
/* No need to save cluster.op-version in conf->opts
*/
goto out;
--
1.8.3.1

View File

@ -0,0 +1,50 @@
From b5f16e56bd1a9e64fa461f22f24790992fd2c008 Mon Sep 17 00:00:00 2001
From: Mohammed Rafi KC <rkavunga@redhat.com>
Date: Thu, 12 Oct 2017 14:31:14 +0530
Subject: [PATCH 36/74] mount/fuse : Fix parsing of vol_id for snapshot volume
For supporting sub-dir mounts, we changed the volid, which means anything
after a '/' in volume_id will be considered a sub-dir path.
But a snapshot volume has a vol_id structure of /snaps/<volname>/<snapname>,
which has to be taken into account during the parsing.
Note 1: sub-dir mount is not supported on snapshot volumes.
Note 2: With the sub-dir mount changes, a brick-based mount for quota cannot
be executed via the mount command; it has to be a direct call via glusterfs.
Backport of:
>Change-Id: I0d824de0236b803db8a918f683dabb0cb523cb04
>BUG: 1501235
>Signed-off-by: Mohammed Rafi KC <rkavunga@redhat.com>
>Upstream patch : https://review.gluster.org/18506
Change-Id: I82903bdd0bfcf8454faef958b38f13d4d95a2346
Signed-off-by: Mohammed Rafi KC <rkavunga@redhat.com>
Reviewed-on: https://code.engineering.redhat.com/gerrit/120524
Tested-by: RHGS Build Bot <nigelb@redhat.com>
Reviewed-by: Atin Mukherjee <amukherj@redhat.com>
---
xlators/mount/fuse/utils/mount.glusterfs.in | 6 ++++--
1 file changed, 4 insertions(+), 2 deletions(-)
diff --git a/xlators/mount/fuse/utils/mount.glusterfs.in b/xlators/mount/fuse/utils/mount.glusterfs.in
index bd6503a..36b60ff 100755
--- a/xlators/mount/fuse/utils/mount.glusterfs.in
+++ b/xlators/mount/fuse/utils/mount.glusterfs.in
@@ -675,8 +675,10 @@ main ()
[ ${first_char} = '/' ] && {
volume_str_temp=$(echo "$volume_str" | cut -c 2-)
}
- [ $(echo $volume_str_temp | grep -c "/") -eq 1 ] && {
- volume_id=$(echo "$volume_str_temp" | cut -f1 -d '/');
+ volume_id_temp=$(echo "$volume_str_temp" | cut -f1 -d '/');
+ [ $(echo $volume_str_temp | grep -c "/") -eq 1 ] &&
+ [ "$volume_id_temp" != "snaps" ] && {
+ volume_id=$volume_id_temp;
subdir_mount=$(echo "$volume_str_temp" | cut -f2- -d '/');
}
}
--
1.8.3.1
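
The parsing rule fixed above can be summed up in a few lines: a '/' in the volfile id normally separates the volume name from a sub-directory to mount, unless the first path component is "snaps", in which case the whole id names a snapshot volume. A sketch in Python (illustration only; the real logic lives in the mount.glusterfs shell script):

def parse_volfile_id(volume_str):
    s = volume_str.lstrip("/")
    head, sep, rest = s.partition("/")
    if sep and head != "snaps":
        return head, rest              # (volume_id, subdir_mount)
    return volume_str, None            # snapshot volumes keep the full id

print(parse_volfile_id("myvol/dir1"))          # ('myvol', 'dir1')
print(parse_volfile_id("/snaps/myvol/snap1"))  # ('/snaps/myvol/snap1', None)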

View File

@ -0,0 +1,141 @@
From 6d6e3a4100fcb9333d82618d64e96e49ddddcbf4 Mon Sep 17 00:00:00 2001
From: Amar Tumballi <amarts@redhat.com>
Date: Mon, 16 Oct 2017 11:44:59 +0530
Subject: [PATCH 37/74] protocol-auth: use the proper validation method
Currently, the server protocol's init and glusterd's option
validation methods are different, causing an issue. They
should be the same to ensure consistent behavior.
> Upstream:
> Change-Id: Ibbf9a18c7192b2d77f9b7675ae7da9b8d2fe5de4
> URL: https://review.gluster.org/#/c/18489/
Change-Id: Id595a1032b14233ca8f31d20813dca98476b2468
Signed-off-by: Amar Tumballi <amarts@redhat.com>
Reviewed-on: https://code.engineering.redhat.com/gerrit/120558
Tested-by: RHGS Build Bot <nigelb@redhat.com>
Reviewed-by: Atin Mukherjee <amukherj@redhat.com>
---
libglusterfs/src/options.c | 4 ++--
libglusterfs/src/options.h | 5 +++++
tests/features/subdir-mount.t | 4 ++++
xlators/protocol/server/src/server.c | 40 +++++++-----------------------------
4 files changed, 18 insertions(+), 35 deletions(-)
diff --git a/libglusterfs/src/options.c b/libglusterfs/src/options.c
index f0292ea..a0f04c7 100644
--- a/libglusterfs/src/options.c
+++ b/libglusterfs/src/options.c
@@ -590,7 +590,7 @@ xlator_option_validate_addr (xlator_t *xl, const char *key, const char *value,
return ret;
}
-static int
+int
xlator_option_validate_addr_list (xlator_t *xl, const char *key,
const char *value, volume_option_t *opt,
char **op_errstr)
@@ -668,7 +668,7 @@ xlator_option_validate_addr_list (xlator_t *xl, const char *key,
out:
if (ret) {
snprintf (errstr, sizeof (errstr), "option %s %s: '%s' is not "
- "a valid internet-address-list", key, value, value);
+ "a valid internet-address-list", key, value, value);
gf_msg (xl->name, GF_LOG_ERROR, 0, LG_MSG_INVALID_ENTRY, "%s",
errstr);
if (op_errstr)
diff --git a/libglusterfs/src/options.h b/libglusterfs/src/options.h
index 3154dce..d259d44 100644
--- a/libglusterfs/src/options.h
+++ b/libglusterfs/src/options.h
@@ -87,6 +87,11 @@ int xlator_options_validate_list (xlator_t *xl, dict_t *options,
int xlator_option_validate (xlator_t *xl, char *key, char *value,
volume_option_t *opt, char **op_errstr);
int xlator_options_validate (xlator_t *xl, dict_t *options, char **errstr);
+
+int xlator_option_validate_addr_list (xlator_t *xl, const char *key,
+ const char *value, volume_option_t *opt,
+ char **op_errstr);
+
volume_option_t *
xlator_volume_option_get (xlator_t *xl, const char *key);
diff --git a/tests/features/subdir-mount.t b/tests/features/subdir-mount.t
index 2fb0be4..ab7ef35 100644
--- a/tests/features/subdir-mount.t
+++ b/tests/features/subdir-mount.t
@@ -78,6 +78,10 @@ TEST ! $CLI volume set $V0 auth.allow "subdir2\(1.2.3.4\)"
# support subdir inside subdir
TEST $CLI volume set $V0 auth.allow '/subdir1/subdir1.1/subdir1.2/\(1.2.3.4\|::1\),/\(192.168.10.1\|192.168.11.1\),/subdir2\(1.2.3.4\)'
+TEST $CLI volume stop $V0
+
+TEST $CLI volume start $V0
+
# /subdir2 has not allowed IP
TEST $GFS --subdir-mount /subdir2 -s $H0 --volfile-id $V0 $M1
TEST stat $M1
diff --git a/xlators/protocol/server/src/server.c b/xlators/protocol/server/src/server.c
index e47acb2..6dc9d0f 100644
--- a/xlators/protocol/server/src/server.c
+++ b/xlators/protocol/server/src/server.c
@@ -386,9 +386,6 @@ _check_for_auth_option (dict_t *d, char *k, data_t *v,
int ret = 0;
xlator_t *xl = NULL;
char *tail = NULL;
- char *tmp_addr_list = NULL;
- char *addr = NULL;
- char *tmp_str = NULL;
xl = tmp;
@@ -417,38 +414,15 @@ _check_for_auth_option (dict_t *d, char *k, data_t *v,
* valid auth.allow.<xlator>
* Now we verify the ip address
*/
- if (!strcmp (v->data, "*")) {
- ret = 0;
- goto out;
- }
-
- /* TODO-SUBDIR-MOUNT: fix the format */
- tmp_addr_list = gf_strdup (v->data);
- addr = strtok_r (tmp_addr_list, ",", &tmp_str);
- if (!addr)
- addr = v->data;
-
- while (addr) {
- if (valid_internet_address (addr, _gf_true)) {
- ret = 0;
- } else {
- ret = -1;
- gf_msg (xl->name, GF_LOG_ERROR, 0,
- PS_MSG_INTERNET_ADDR_ERROR,
- "internet address '%s'"
- " does not conform to"
- " standards.", addr);
- goto out;
- }
- if (tmp_str)
- addr = strtok_r (NULL, ",", &tmp_str);
- else
- addr = NULL;
- }
+ ret = xlator_option_validate_addr_list (xl, "auth-*", v->data,
+ NULL, NULL);
+ if (ret)
+ gf_msg (xl->name, GF_LOG_ERROR, 0,
+ PS_MSG_INTERNET_ADDR_ERROR,
+ "internet address '%s' does not conform "
+ "to standards.", v->data);
}
out:
- GF_FREE (tmp_addr_list);
-
return ret;
}
--
1.8.3.1

View File

@ -0,0 +1,111 @@
From 4fd6388cf08d9c902f20683579d62408847c3766 Mon Sep 17 00:00:00 2001
From: Amar Tumballi <amarts@redhat.com>
Date: Mon, 23 Oct 2017 21:17:52 +0200
Subject: [PATCH 38/74] protocol/server: fix the comparison logic in case of
subdir mount
Without the fix, a stat on a file would return inode==1 for
many files in the case of a subdir mount.
This happened because of confusion over the return value of 'gf_uuid_compare()':
it behaves more like strcmp than returning a gf_boolean, and hence
resulted in the bug.
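To make the strcmp-like semantics explicit, here is an editor's minimal standalone sketch (memcmp stands in for gf_uuid_compare, which is not part of this snippet): equality is signalled by a return of 0, so the correct test is the negated result, not the raw value.

/* Standalone sketch of the pitfall: a strcmp-style comparator returns 0 on
 * equality, so treating its return value as "is equal" inverts the check. */
#include <stdio.h>
#include <string.h>
#include <stdint.h>

typedef uint8_t xuuid_t[16];

/* stand-in for gf_uuid_compare(): 0 when equal, non-zero otherwise */
static int xuuid_compare(const xuuid_t a, const xuuid_t b)
{
    return memcmp(a, b, sizeof(xuuid_t));
}

int main(void)
{
    xuuid_t root = {0}, same = {0}, other = {0};
    root[15] = 1;
    same[15] = 1;
    other[15] = 7;

    /* buggy reading: "non-zero means equal" -> fires for the WRONG uuid */
    printf("if (cmp)  fires for a different uuid: %s\n",
           xuuid_compare(root, other) ? "yes" : "no");   /* yes */

    /* correct reading: "zero means equal" -> negate the result */
    printf("if (!cmp) fires for the same uuid   : %s\n",
           !xuuid_compare(root, same) ? "yes" : "no");   /* yes */
    return 0;
}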
> Upstream:
> URL: https://review.gluster.org/#/c/18558/
>
Also fixes the bz1501714
Change-Id: I31b8cbd95eaa3af5ff916a969458e8e4020c86bb
Signed-off-by: Amar Tumballi <amarts@redhat.com>
Reviewed-on: https://code.engineering.redhat.com/gerrit/121726
Tested-by: RHGS Build Bot <nigelb@redhat.com>
Reviewed-by: Atin Mukherjee <amukherj@redhat.com>
---
xlators/protocol/server/src/server-common.c | 60 ++++++++++++++---------------
1 file changed, 30 insertions(+), 30 deletions(-)
diff --git a/xlators/protocol/server/src/server-common.c b/xlators/protocol/server/src/server-common.c
index b972918..ce33089 100644
--- a/xlators/protocol/server/src/server-common.c
+++ b/xlators/protocol/server/src/server-common.c
@@ -12,22 +12,22 @@
void
server_post_stat (server_state_t *state, gfs3_stat_rsp *rsp, struct iatt *stbuf)
{
- if (state->client->subdir_mount) {
- if (gf_uuid_compare (stbuf->ia_gfid,
- state->client->subdir_gfid)) {
- /* This is very important as when we send iatt of
- root-inode, fuse/client expect the gfid to be 1,
- along with inode number. As for subdirectory mount,
- we use inode table which is shared by everyone, but
- make sure we send fops only from subdir and below,
- we have to alter inode gfid and send it to client */
- uuid_t gfid = {0,};
-
- gfid[15] = 1;
- stbuf->ia_ino = 1;
- gf_uuid_copy (stbuf->ia_gfid, gfid);
- }
+ if (state->client->subdir_mount &&
+ !gf_uuid_compare (stbuf->ia_gfid,
+ state->client->subdir_gfid)) {
+ /* This is very important as when we send iatt of
+ root-inode, fuse/client expect the gfid to be 1,
+ along with inode number. As for subdirectory mount,
+ we use inode table which is shared by everyone, but
+ make sure we send fops only from subdir and below,
+ we have to alter inode gfid and send it to client */
+ uuid_t gfid = {0,};
+
+ gfid[15] = 1;
+ stbuf->ia_ino = 1;
+ gf_uuid_copy (stbuf->ia_gfid, gfid);
}
+
gf_stat_from_iatt (&rsp->stat, stbuf);
}
@@ -185,22 +185,22 @@ void
server_post_fstat (server_state_t *state, gfs3_fstat_rsp *rsp,
struct iatt *stbuf)
{
- if (state->client->subdir_mount) {
- if (gf_uuid_compare (stbuf->ia_gfid,
- state->client->subdir_gfid)) {
- /* This is very important as when we send iatt of
- root-inode, fuse/client expect the gfid to be 1,
- along with inode number. As for subdirectory mount,
- we use inode table which is shared by everyone, but
- make sure we send fops only from subdir and below,
- we have to alter inode gfid and send it to client */
- uuid_t gfid = {0,};
-
- gfid[15] = 1;
- stbuf->ia_ino = 1;
- gf_uuid_copy (stbuf->ia_gfid, gfid);
- }
+ if (state->client->subdir_mount &&
+ !gf_uuid_compare (stbuf->ia_gfid,
+ state->client->subdir_gfid)) {
+ /* This is very important as when we send iatt of
+ root-inode, fuse/client expect the gfid to be 1,
+ along with inode number. As for subdirectory mount,
+ we use inode table which is shared by everyone, but
+ make sure we send fops only from subdir and below,
+ we have to alter inode gfid and send it to client */
+ uuid_t gfid = {0,};
+
+ gfid[15] = 1;
+ stbuf->ia_ino = 1;
+ gf_uuid_copy (stbuf->ia_gfid, gfid);
}
+
gf_stat_from_iatt (&rsp->stat, stbuf);
}
--
1.8.3.1

View File

@ -0,0 +1,108 @@
From 0f3a3c9ed32fec80f1b88cc649a98bcdcc234b6a Mon Sep 17 00:00:00 2001
From: Amar Tumballi <amarts@redhat.com>
Date: Sun, 22 Oct 2017 12:41:38 +0530
Subject: [PATCH 39/74] protocol/client: handle the subdir handshake properly
for add-brick
The handshake has to be handled differently for the first subdir
mount and for subsequent graph changes.
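In sketch form (an editor's standalone illustration with made-up names, not the client code itself), the decision is: treat ENOENT for the subdir as fatal only on the first graph, and keep retrying on later graph changes such as add-brick.

/* Standalone sketch: fail the mount only if the subdir is missing on the very
 * first graph; on later graphs (e.g. after add-brick) keep retrying instead. */
#include <stdio.h>
#include <stdbool.h>
#include <errno.h>

static bool should_fail_mount(int op_errno, bool subdir_mount, int graph_id)
{
    return (op_errno == ENOENT) && subdir_mount && (graph_id <= 1);
}

int main(void)
{
    printf("first graph, subdir missing : %s\n",
           should_fail_mount(ENOENT, true, 1) ? "fail mount" : "retry");
    printf("after add-brick (graph 2)   : %s\n",
           should_fail_mount(ENOENT, true, 2) ? "fail mount" : "retry");
    return 0;
}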
> Upstream
> URL: https://review.gluster.org/#/c/18550/
>
Change-Id: I2a7ba836433bb0a0f4a861809e2bb0d7fbc4da54
Signed-off-by: Amar Tumballi <amarts@redhat.com>
Reviewed-on: https://code.engineering.redhat.com/gerrit/121725
Tested-by: RHGS Build Bot <nigelb@redhat.com>
Reviewed-by: Atin Mukherjee <amukherj@redhat.com>
---
tests/features/subdir-mount.t | 31 +++++++++++++++++++++-----
xlators/protocol/client/src/client-handshake.c | 10 ++++++++-
2 files changed, 35 insertions(+), 6 deletions(-)
diff --git a/tests/features/subdir-mount.t b/tests/features/subdir-mount.t
index ab7ef35..1742f86 100644
--- a/tests/features/subdir-mount.t
+++ b/tests/features/subdir-mount.t
@@ -82,17 +82,38 @@ TEST $CLI volume stop $V0
TEST $CLI volume start $V0
-# /subdir2 has not allowed IP
-TEST $GFS --subdir-mount /subdir2 -s $H0 --volfile-id $V0 $M1
-TEST stat $M1
-
TEST $GFS --subdir-mount /subdir1/subdir1.1/subdir1.2 -s $H0 --volfile-id $V0 $M2
TEST stat $M2
+# mount shouldn't fail even after add-brick
+TEST $CLI volume add-brick $V0 replica 2 $H0:$B0/${V0}{5,6};
+
+# Give time for client process to get notified and use the new
+# volfile after add-brick
+sleep 1
+
+# Existing mount should still be active
+mount_inode=$(stat --format "%i" "$M2")
+TEST test "$mount_inode" == "1"
+
+TEST umount $M2
+
+# because the subdir is not yet 'healed', below should fail.
+TEST $GFS --subdir-mount /subdir2 -s $H0 --volfile-id $V0 $M2
+mount_inode=$(stat --format "%i" "$M2")
+TEST test "$mount_inode" != "1"
+
+# Allow the heal to complete
+TEST stat $M0/subdir1/subdir1.1/subdir1.2/subdir1.2_file;
+TEST stat $M0/subdir2/
+
+# Now the mount should succeed
+TEST $GFS --subdir-mount /subdir2 -s $H0 --volfile-id $V0 $M1
+TEST stat $M1
+
# umount $M1 / $M2
TEST umount $M0
TEST umount $M1
-TEST umount $M2
TEST $CLI volume stop $V0;
diff --git a/xlators/protocol/client/src/client-handshake.c b/xlators/protocol/client/src/client-handshake.c
index b6dc079..aee6b3a 100644
--- a/xlators/protocol/client/src/client-handshake.c
+++ b/xlators/protocol/client/src/client-handshake.c
@@ -1079,10 +1079,14 @@ client_setvolume_cbk (struct rpc_req *req, struct iovec *iov, int count, void *m
int32_t op_errno = 0;
gf_boolean_t auth_fail = _gf_false;
uint32_t lk_ver = 0;
+ glusterfs_ctx_t *ctx = NULL;
frame = myframe;
this = frame->this;
conf = this->private;
+ GF_VALIDATE_OR_GOTO (this->name, conf, out);
+ ctx = this->ctx;
+ GF_VALIDATE_OR_GOTO (this->name, ctx, out);
if (-1 == req->rpc_status) {
gf_msg (frame->this->name, GF_LOG_WARNING, ENOTCONN,
@@ -1145,9 +1149,13 @@ client_setvolume_cbk (struct rpc_req *req, struct iovec *iov, int count, void *m
auth_fail = _gf_true;
op_ret = 0;
}
- if ((op_errno == ENOENT) && this->ctx->cmd_args.subdir_mount) {
+ if ((op_errno == ENOENT) && this->ctx->cmd_args.subdir_mount &&
+ (ctx->graph_id <= 1)) {
/* A case of subdir not being present at the moment,
ride on auth_fail framework to notify the error */
+ /* Make sure this case is handled only in the new
+ graph, so mount may fail in this case. In case
+ of 'add-brick' etc, we need to continue retry */
auth_fail = _gf_true;
op_ret = 0;
}
--
1.8.3.1

View File

@ -0,0 +1,69 @@
From 8fb2496f67b1170595144eecb9a3b8f3be35044e Mon Sep 17 00:00:00 2001
From: Atin Mukherjee <amukherj@redhat.com>
Date: Mon, 30 Oct 2017 15:55:32 +0530
Subject: [PATCH 40/74] glusterd: delete source brick only once in reset-brick
commit force
While stopping the brick which is to be reset and replaced, the delete_brick
flag was passed as true, which caused glusterd to free up the source
brick before the actual operation. This caused commit force to fail,
as it could not find the source brickinfo.
>upstream patch : https://review.gluster.org/#/c/18581
Change-Id: I1aa7508eff7cc9c9b5d6f5163f3bb92736d6df44
Signed-off-by: Atin Mukherjee <amukherj@redhat.com>
Reviewed-on: https://code.engineering.redhat.com/gerrit/121876
Tested-by: RHGS Build Bot <nigelb@redhat.com>
---
.../bug-1507466-reset-brick-commit-force.t | 24 ++++++++++++++++++++++
xlators/mgmt/glusterd/src/glusterd-reset-brick.c | 2 +-
2 files changed, 25 insertions(+), 1 deletion(-)
create mode 100644 tests/bugs/glusterd/bug-1507466-reset-brick-commit-force.t
diff --git a/tests/bugs/glusterd/bug-1507466-reset-brick-commit-force.t b/tests/bugs/glusterd/bug-1507466-reset-brick-commit-force.t
new file mode 100644
index 0000000..764399d
--- /dev/null
+++ b/tests/bugs/glusterd/bug-1507466-reset-brick-commit-force.t
@@ -0,0 +1,24 @@
+#!/bin/bash
+. $(dirname $0)/../../include.rc
+. $(dirname $0)/../../cluster.rc
+cleanup;
+
+function check_peers {
+ $CLI_1 peer status | grep 'Peer in Cluster (Connected)' | wc -l
+}
+
+TEST launch_cluster 3
+TEST $CLI_1 peer probe $H2;
+EXPECT_WITHIN $PROBE_TIMEOUT 1 check_peers
+
+TEST $CLI_1 volume create $V0 replica 2 $H1:$B0/${V0} $H2:$B0/${V0}
+TEST $CLI_1 volume start $V0
+
+# Negative case with brick not killed && volume-id xattrs present
+TEST ! $CLI_1 volume reset-brick $V0 $H1:$B0/${V0} $H1:$B0/${V0} commit force
+
+TEST $CLI_1 volume reset-brick $V0 $H1:$B0/${V0} start
+# Now test if reset-brick commit force works
+TEST $CLI_1 volume reset-brick $V0 $H1:$B0/${V0} $H1:$B0/${V0} commit force
+
+cleanup;
diff --git a/xlators/mgmt/glusterd/src/glusterd-reset-brick.c b/xlators/mgmt/glusterd/src/glusterd-reset-brick.c
index c127d64..abb44e0 100644
--- a/xlators/mgmt/glusterd/src/glusterd-reset-brick.c
+++ b/xlators/mgmt/glusterd/src/glusterd-reset-brick.c
@@ -343,7 +343,7 @@ glusterd_op_reset_brick (dict_t *dict, dict_t *rsp_dict)
gf_msg_debug (this->name, 0, "I AM THE DESTINATION HOST");
ret = glusterd_volume_stop_glusterfs (volinfo,
src_brickinfo,
- _gf_true);
+ _gf_false);
if (ret) {
gf_msg (this->name, GF_LOG_CRITICAL, 0,
GD_MSG_BRICK_STOP_FAIL,
--
1.8.3.1

View File

@ -0,0 +1,193 @@
From 548895f0333a0706ec9475efc3b28456d591f093 Mon Sep 17 00:00:00 2001
From: Gaurav Yadav <gyadav@redhat.com>
Date: Fri, 27 Oct 2017 16:04:46 +0530
Subject: [PATCH 41/74] glusterd: persist brickinfo's port change into
glusterd's store
Problem:
Consider a case where a node reboot is performed and, prior to the reboot,
the brick was listening on 49153. Post reboot, glusterd assigned 49152
to the brick and started the brick process, but the new port was never
persisted. Now whenever glusterd restarts, it always reads the port
from its persisted store, i.e. 49153, even though pmap signin happens with
the correct port, i.e. 49152.
Fix:
Make sure when glusterd_brick_start is called, glusterd_store_volinfo is
eventually invoked.
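An editor's standalone sketch of the ordering being enforced (stub functions standing in for glusterd_brick_start/glusterd_store_volinfo): the port may be reassigned when the brick is started, and only after that is the volinfo written back, so the persisted port matches the one actually in use.

/* Standalone sketch: persist only after the start path may have changed the
 * port, so the on-disk value always matches the running brick. */
#include <stdio.h>

struct brickinfo { int port; };

static int brick_start(struct brickinfo *b)         /* stand-in: may reassign port */
{
    b->port = 49152;                                 /* e.g. old 49153 was taken */
    return 0;
}

static int store_volinfo(const struct brickinfo *b) /* stand-in: persist to store */
{
    printf("persisting port %d\n", b->port);
    return 0;
}

int main(void)
{
    struct brickinfo b = { .port = 49153 };          /* value read from the old store */

    if (brick_start(&b) == 0)                        /* start first ... */
        store_volinfo(&b);                           /* ... then persist the new port */
    return 0;
}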
>upstream mainline patch : https://review.gluster.org/#/c/18579/
Change-Id: Ic0efbd48c51d39729ed951a42922d0e59f7115a1
Signed-off-by: Gaurav Yadav <gyadav@redhat.com>
Reviewed-on: https://code.engineering.redhat.com/gerrit/121878
Reviewed-by: Atin Mukherjee <amukherj@redhat.com>
Tested-by: Atin Mukherjee <amukherj@redhat.com>
Tested-by: RHGS Build Bot <nigelb@redhat.com>
---
xlators/mgmt/glusterd/src/glusterd-handshake.c | 18 +++++++++---------
xlators/mgmt/glusterd/src/glusterd-op-sm.c | 9 ++++++++-
xlators/mgmt/glusterd/src/glusterd-server-quorum.c | 16 ++++++++++++++++
xlators/mgmt/glusterd/src/glusterd-snapshot-utils.c | 10 ++++++++++
xlators/mgmt/glusterd/src/glusterd-utils.c | 19 +++++++++++++++++++
5 files changed, 62 insertions(+), 10 deletions(-)
diff --git a/xlators/mgmt/glusterd/src/glusterd-handshake.c b/xlators/mgmt/glusterd/src/glusterd-handshake.c
index c7e419c..8dfb528 100644
--- a/xlators/mgmt/glusterd/src/glusterd-handshake.c
+++ b/xlators/mgmt/glusterd/src/glusterd-handshake.c
@@ -658,6 +658,15 @@ glusterd_create_missed_snap (glusterd_missed_snap_info *missed_snapinfo,
}
brickinfo->snap_status = 0;
+ ret = glusterd_brick_start (snap_vol, brickinfo, _gf_false);
+ if (ret) {
+ gf_msg (this->name, GF_LOG_WARNING, 0,
+ GD_MSG_BRICK_DISCONNECTED, "starting the "
+ "brick %s:%s for the snap %s failed",
+ brickinfo->hostname, brickinfo->path,
+ snap->snapname);
+ goto out;
+ }
ret = glusterd_store_volinfo (snap_vol,
GLUSTERD_VOLINFO_VER_AC_NONE);
if (ret) {
@@ -668,15 +677,6 @@ glusterd_create_missed_snap (glusterd_missed_snap_info *missed_snapinfo,
goto out;
}
- ret = glusterd_brick_start (snap_vol, brickinfo, _gf_false);
- if (ret) {
- gf_msg (this->name, GF_LOG_WARNING, 0,
- GD_MSG_BRICK_DISCONNECTED, "starting the "
- "brick %s:%s for the snap %s failed",
- brickinfo->hostname, brickinfo->path,
- snap->snapname);
- goto out;
- }
out:
if (device)
GF_FREE (device);
diff --git a/xlators/mgmt/glusterd/src/glusterd-op-sm.c b/xlators/mgmt/glusterd/src/glusterd-op-sm.c
index 96e0860..9641b4f 100644
--- a/xlators/mgmt/glusterd/src/glusterd-op-sm.c
+++ b/xlators/mgmt/glusterd/src/glusterd-op-sm.c
@@ -2415,8 +2415,15 @@ glusterd_start_bricks (glusterd_volinfo_t *volinfo)
brickinfo->path);
goto out;
}
- }
+ }
+ ret = glusterd_store_volinfo (volinfo, GLUSTERD_VOLINFO_VER_AC_NONE);
+ if (ret) {
+ gf_msg (THIS->name, GF_LOG_ERROR, 0, GD_MSG_VOLINFO_STORE_FAIL,
+ "Failed to write volinfo for volume %s",
+ volinfo->volname);
+ goto out;
+ }
ret = 0;
out:
return ret;
diff --git a/xlators/mgmt/glusterd/src/glusterd-server-quorum.c b/xlators/mgmt/glusterd/src/glusterd-server-quorum.c
index a4637f8..659ff9d 100644
--- a/xlators/mgmt/glusterd/src/glusterd-server-quorum.c
+++ b/xlators/mgmt/glusterd/src/glusterd-server-quorum.c
@@ -12,6 +12,7 @@
#include "glusterd-utils.h"
#include "glusterd-messages.h"
#include "glusterd-server-quorum.h"
+#include "glusterd-store.h"
#include "glusterd-syncop.h"
#include "glusterd-op-sm.h"
@@ -309,6 +310,7 @@ void
glusterd_do_volume_quorum_action (xlator_t *this, glusterd_volinfo_t *volinfo,
gf_boolean_t meets_quorum)
{
+ int ret = -1;
glusterd_brickinfo_t *brickinfo = NULL;
gd_quorum_status_t quorum_status = NOT_APPLICABLE_QUORUM;
gf_boolean_t follows_quorum = _gf_false;
@@ -365,6 +367,20 @@ glusterd_do_volume_quorum_action (xlator_t *this, glusterd_volinfo_t *volinfo,
glusterd_brick_start (volinfo, brickinfo, _gf_false);
}
volinfo->quorum_status = quorum_status;
+ if (quorum_status == MEETS_QUORUM) {
+ /* bricks might have been restarted and so as the port change
+ * might have happened
+ */
+ ret = glusterd_store_volinfo (volinfo,
+ GLUSTERD_VOLINFO_VER_AC_NONE);
+ if (ret) {
+ gf_msg (this->name, GF_LOG_ERROR, 0,
+ GD_MSG_VOLINFO_STORE_FAIL,
+ "Failed to write volinfo for volume %s",
+ volinfo->volname);
+ goto out;
+ }
+ }
out:
return;
}
diff --git a/xlators/mgmt/glusterd/src/glusterd-snapshot-utils.c b/xlators/mgmt/glusterd/src/glusterd-snapshot-utils.c
index 6fb49c3..4cbade1 100644
--- a/xlators/mgmt/glusterd/src/glusterd-snapshot-utils.c
+++ b/xlators/mgmt/glusterd/src/glusterd-snapshot-utils.c
@@ -1680,6 +1680,16 @@ glusterd_import_friend_snap (dict_t *peer_data, int32_t snap_count,
}
if (glusterd_is_volume_started (snap_vol)) {
(void) glusterd_start_bricks (snap_vol);
+ ret = glusterd_store_volinfo
+ (snap_vol,
+ GLUSTERD_VOLINFO_VER_AC_NONE);
+ if (ret) {
+ gf_msg (this->name, GF_LOG_ERROR, 0,
+ GD_MSG_VOLINFO_STORE_FAIL, "Failed to "
+ "write volinfo for volume %s",
+ snap_vol->volname);
+ goto out;
+ }
} else {
(void) glusterd_stop_bricks(snap_vol);
}
diff --git a/xlators/mgmt/glusterd/src/glusterd-utils.c b/xlators/mgmt/glusterd/src/glusterd-utils.c
index f34e218..bb236df 100644
--- a/xlators/mgmt/glusterd/src/glusterd-utils.c
+++ b/xlators/mgmt/glusterd/src/glusterd-utils.c
@@ -6003,6 +6003,15 @@ glusterd_restart_bricks (glusterd_conf_t *conf)
glusterd_brick_start (volinfo, brickinfo,
_gf_false);
}
+ ret = glusterd_store_volinfo
+ (volinfo, GLUSTERD_VOLINFO_VER_AC_NONE);
+ if (ret) {
+ gf_msg (this->name, GF_LOG_ERROR, 0,
+ GD_MSG_VOLINFO_STORE_FAIL, "Failed to "
+ "write volinfo for volume %s",
+ volinfo->volname);
+ goto out;
+ }
}
}
@@ -6034,6 +6043,16 @@ glusterd_restart_bricks (glusterd_conf_t *conf)
glusterd_brick_start (volinfo, brickinfo,
_gf_false);
}
+ ret = glusterd_store_volinfo
+ (volinfo, GLUSTERD_VOLINFO_VER_AC_NONE);
+ if (ret) {
+ gf_msg (this->name, GF_LOG_ERROR, 0,
+ GD_MSG_VOLINFO_STORE_FAIL, "Failed to "
+ "write volinfo for volume %s",
+ volinfo->volname);
+ goto out;
+ }
+
}
}
ret = 0;
--
1.8.3.1

View File

@ -0,0 +1,39 @@
From 4ea251b0a23ae8fc0740abc2c5d85c09c31e0c70 Mon Sep 17 00:00:00 2001
From: Atin Mukherjee <amukherj@redhat.com>
Date: Mon, 6 Nov 2017 13:23:32 +0530
Subject: [PATCH 42/74] glusterd: restart the brick if quorum status is
NOT_APPLICABLE_QUORUM
If a volume does not have server quorum enabled, and in a trusted storage
pool all the glusterd instances from other peers are down, then on restarting
glusterd the brick start trigger doesn't happen, resulting in the
brick not coming up.
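In sketch form (an editor's standalone illustration with a made-up enum, not the glusterd code), the early return is now taken only when quorum actually applies, so the no-quorum case still falls through to the brick start logic.

/* Standalone sketch of the changed guard: with quorum not applicable, do not
 * short-circuit just because the recorded status has not changed. */
#include <stdio.h>
#include <stdbool.h>

typedef enum { NOT_APPLICABLE_QUORUM, MEETS_QUORUM, DOESNT_MEET_QUORUM } q_status_t;

static bool skip_action(q_status_t new_status, q_status_t recorded_status)
{
    return new_status != NOT_APPLICABLE_QUORUM && new_status == recorded_status;
}

int main(void)
{
    printf("quorum n/a, unchanged  : %s\n",
           skip_action(NOT_APPLICABLE_QUORUM, NOT_APPLICABLE_QUORUM)
               ? "skip (old bug)" : "run brick start (fixed)");
    printf("meets quorum, unchanged: %s\n",
           skip_action(MEETS_QUORUM, MEETS_QUORUM) ? "skip" : "run brick start");
    return 0;
}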
> mainline patch : https://review.gluster.org/18669
Change-Id: If1458e03b50a113f1653db553bb2350d11577539
BUG: 1509102
Signed-off-by: Atin Mukherjee <amukherj@redhat.com>
Reviewed-on: https://code.engineering.redhat.com/gerrit/123055
Reviewed-by: Gaurav Yadav <gyadav@redhat.com>
---
xlators/mgmt/glusterd/src/glusterd-server-quorum.c | 3 ++-
1 file changed, 2 insertions(+), 1 deletion(-)
diff --git a/xlators/mgmt/glusterd/src/glusterd-server-quorum.c b/xlators/mgmt/glusterd/src/glusterd-server-quorum.c
index 659ff9d..4706403 100644
--- a/xlators/mgmt/glusterd/src/glusterd-server-quorum.c
+++ b/xlators/mgmt/glusterd/src/glusterd-server-quorum.c
@@ -341,7 +341,8 @@ glusterd_do_volume_quorum_action (xlator_t *this, glusterd_volinfo_t *volinfo,
* the bricks that are down are brought up again. In this process it
* also brings up the brick that is purposefully taken down.
*/
- if (volinfo->quorum_status == quorum_status)
+ if (quorum_status != NOT_APPLICABLE_QUORUM &&
+ volinfo->quorum_status == quorum_status)
goto out;
if (quorum_status == MEETS_QUORUM) {
--
1.8.3.1

View File

@ -0,0 +1,173 @@
From 385b61f9a6f818c2810cc0a2223c9d71340cd345 Mon Sep 17 00:00:00 2001
From: Atin Mukherjee <amukherj@redhat.com>
Date: Tue, 17 Oct 2017 21:32:44 +0530
Subject: [PATCH 43/74] glusterd: clean up portmap on brick disconnect
GlusterD's portmap entry for a brick is cleaned up when a PMAP_SIGNOUT event is
initiated by the brick process at shutdown. But if the brick process crashes
or gets killed through SIGKILL, this event is never initiated and glusterd
ends up with a stale port. Since GlusterD's portmap traversal happens both ways,
forward for allocation and backward for registry search, there is a possibility
that glusterd might end up running with a stale port for a brick, which
eventually causes clients to fail to connect to the bricks.
The solution is to clean up the port entry as part of the brick disconnect
event if the process is down. Although this makes handling of the
PMAP_SIGNOUT event redundant in most cases, it remains as a safeguard
against glusterd running into stale port issues.
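An editor's minimal standalone sketch of the cleanup pattern (toy port table and a stubbed "is the process still running?" check; not the glusterd pmap code): on a brick disconnect, if the process is gone, the port entry is released even though no PMAP_SIGNOUT was ever received.

/* Standalone sketch: release a brick's port entry on disconnect when the
 * process is no longer running, so a crash/SIGKILL cannot leave a stale port. */
#include <stdio.h>
#include <stdbool.h>
#include <string.h>

#define NPORTS 8
static char port_owner[NPORTS][64];          /* "" means the port is free */

static bool process_running(int pid)         /* stand-in for gf_is_service_running */
{
    (void)pid;
    return false;                            /* pretend the brick was SIGKILLed */
}

static void pmap_remove(int port)            /* stand-in for pmap_registry_remove */
{
    port_owner[port][0] = '\0';
}

static void on_brick_disconnect(int port, int pid)
{
    if (!process_running(pid)) {
        printf("no PMAP_SIGNOUT seen, cleaning stale entry for port %d (%s)\n",
               port, port_owner[port]);
        pmap_remove(port);
    }
}

int main(void)
{
    snprintf(port_owner[3], sizeof(port_owner[3]), "/bricks/brick1");
    on_brick_disconnect(3, 1234);
    printf("port 3 now %s\n", port_owner[3][0] ? "in use" : "free");
    return 0;
}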
>mainline patch : https://review.gluster.org/#/c/18541
Change-Id: I04c5be6d11e772ee4de16caf56dbb37d5c944303
BUG: 1503244
Signed-off-by: Atin Mukherjee <amukherj@redhat.com>
Reviewed-on: https://code.engineering.redhat.com/gerrit/123057
Reviewed-by: Gaurav Yadav <gyadav@redhat.com>
---
xlators/mgmt/glusterd/src/glusterd-handler.c | 25 +++++++++++++++++++++++++
xlators/mgmt/glusterd/src/glusterd-pmap.c | 26 +++++++++++++++++---------
xlators/mgmt/glusterd/src/glusterd-pmap.h | 3 ++-
xlators/mgmt/glusterd/src/glusterd.c | 3 ++-
4 files changed, 46 insertions(+), 11 deletions(-)
diff --git a/xlators/mgmt/glusterd/src/glusterd-handler.c b/xlators/mgmt/glusterd/src/glusterd-handler.c
index af9a796..34e751c 100644
--- a/xlators/mgmt/glusterd/src/glusterd-handler.c
+++ b/xlators/mgmt/glusterd/src/glusterd-handler.c
@@ -5974,8 +5974,10 @@ __glusterd_brick_rpc_notify (struct rpc_clnt *rpc, void *mydata,
glusterd_volinfo_t *volinfo = NULL;
xlator_t *this = NULL;
int temp = 0;
+ int32_t pid = -1;
glusterd_brickinfo_t *brickinfo_tmp = NULL;
glusterd_brick_proc_t *brick_proc = NULL;
+ char pidfile[PATH_MAX] = {0};
brickid = mydata;
if (!brickid)
@@ -6074,6 +6076,29 @@ __glusterd_brick_rpc_notify (struct rpc_clnt *rpc, void *mydata,
"peer=%s;volume=%s;brick=%s",
brickinfo->hostname, volinfo->volname,
brickinfo->path);
+ /* In case of an abrupt shutdown of a brick PMAP_SIGNOUT
+ * event is not received by glusterd which can lead to a
+ * stale port entry in glusterd, so forcibly clean up
+ * the same if the process is not running
+ */
+ GLUSTERD_GET_BRICK_PIDFILE (pidfile, volinfo,
+ brickinfo, conf);
+ if (!gf_is_service_running (pidfile, &pid)) {
+ ret = pmap_registry_remove (
+ THIS, brickinfo->port,
+ brickinfo->path,
+ GF_PMAP_PORT_BRICKSERVER,
+ NULL, _gf_true);
+ if (ret) {
+ gf_msg (this->name, GF_LOG_WARNING,
+ GD_MSG_PMAP_REGISTRY_REMOVE_FAIL,
+ 0, "Failed to remove pmap "
+ "registry for port %d for "
+ "brick %s", brickinfo->port,
+ brickinfo->path);
+ ret = 0;
+ }
+ }
}
if (is_brick_mx_enabled()) {
diff --git a/xlators/mgmt/glusterd/src/glusterd-pmap.c b/xlators/mgmt/glusterd/src/glusterd-pmap.c
index 2a75476..1b547e7 100644
--- a/xlators/mgmt/glusterd/src/glusterd-pmap.c
+++ b/xlators/mgmt/glusterd/src/glusterd-pmap.c
@@ -239,7 +239,8 @@ pmap_assign_port (xlator_t *this, int old_port, const char *path)
if (old_port) {
ret = pmap_registry_remove (this, 0, path,
- GF_PMAP_PORT_BRICKSERVER, NULL);
+ GF_PMAP_PORT_BRICKSERVER, NULL,
+ _gf_false);
if (ret) {
gf_msg (this->name, GF_LOG_WARNING,
GD_MSG_PMAP_REGISTRY_REMOVE_FAIL, 0, "Failed to"
@@ -342,7 +343,8 @@ pmap_registry_extend (xlator_t *this, int port, const char *brickname)
int
pmap_registry_remove (xlator_t *this, int port, const char *brickname,
- gf_pmap_port_type_t type, void *xprt)
+ gf_pmap_port_type_t type, void *xprt,
+ gf_boolean_t brick_disconnect)
{
struct pmap_registry *pmap = NULL;
int p = 0;
@@ -389,11 +391,16 @@ remove:
* can delete the entire entry.
*/
if (!pmap->ports[p].xprt) {
- brick_str = pmap->ports[p].brickname;
- if (brick_str) {
- while (*brick_str != '\0') {
- if (*(brick_str++) != ' ') {
- goto out;
+ /* If the signout call is being triggered by brick disconnect
+ * then clean up all the bricks (in case of brick mux)
+ */
+ if (!brick_disconnect) {
+ brick_str = pmap->ports[p].brickname;
+ if (brick_str) {
+ while (*brick_str != '\0') {
+ if (*(brick_str++) != ' ') {
+ goto out;
+ }
}
}
}
@@ -548,14 +555,15 @@ __gluster_pmap_signout (rpcsvc_request_t *req)
goto fail;
}
rsp.op_ret = pmap_registry_remove (THIS, args.port, args.brick,
- GF_PMAP_PORT_BRICKSERVER, req->trans);
+ GF_PMAP_PORT_BRICKSERVER, req->trans,
+ _gf_false);
ret = glusterd_get_brickinfo (THIS, args.brick, args.port, &brickinfo);
if (args.rdma_port) {
snprintf(brick_path, PATH_MAX, "%s.rdma", args.brick);
rsp.op_ret = pmap_registry_remove (THIS, args.rdma_port,
brick_path, GF_PMAP_PORT_BRICKSERVER,
- req->trans);
+ req->trans, _gf_false);
}
/* Update portmap status on brickinfo */
if (brickinfo)
diff --git a/xlators/mgmt/glusterd/src/glusterd-pmap.h b/xlators/mgmt/glusterd/src/glusterd-pmap.h
index 9965a95..253b4cc 100644
--- a/xlators/mgmt/glusterd/src/glusterd-pmap.h
+++ b/xlators/mgmt/glusterd/src/glusterd-pmap.h
@@ -42,7 +42,8 @@ int pmap_registry_bind (xlator_t *this, int port, const char *brickname,
gf_pmap_port_type_t type, void *xprt);
int pmap_registry_extend (xlator_t *this, int port, const char *brickname);
int pmap_registry_remove (xlator_t *this, int port, const char *brickname,
- gf_pmap_port_type_t type, void *xprt);
+ gf_pmap_port_type_t type, void *xprt,
+ gf_boolean_t brick_disconnect);
int pmap_registry_search (xlator_t *this, const char *brickname,
gf_pmap_port_type_t type, gf_boolean_t destroy);
struct pmap_registry *pmap_registry_get (xlator_t *this);
diff --git a/xlators/mgmt/glusterd/src/glusterd.c b/xlators/mgmt/glusterd/src/glusterd.c
index 4887ff4..81a3206 100644
--- a/xlators/mgmt/glusterd/src/glusterd.c
+++ b/xlators/mgmt/glusterd/src/glusterd.c
@@ -424,7 +424,8 @@ glusterd_rpcsvc_notify (rpcsvc_t *rpc, void *xl, rpcsvc_event_t event,
pthread_mutex_lock (&priv->xprt_lock);
list_del (&xprt->list);
pthread_mutex_unlock (&priv->xprt_lock);
- pmap_registry_remove (this, 0, NULL, GF_PMAP_PORT_ANY, xprt);
+ pmap_registry_remove (this, 0, NULL, GF_PMAP_PORT_ANY, xprt,
+ _gf_false);
break;
}
--
1.8.3.1

View File

@ -0,0 +1,283 @@
From 938ee38c02cce2a743c672f9c03798ebcbb1e348 Mon Sep 17 00:00:00 2001
From: Atin Mukherjee <amukherj@redhat.com>
Date: Thu, 26 Oct 2017 14:26:30 +0530
Subject: [PATCH 44/74] glusterd: fix brick restart parallelism
glusterd's brick restart logic is not always sequential, as there are
at least three different ways in which the bricks are restarted:
1. through friend-sm and glusterd_spawn_daemons ()
2. through friend-sm and handling of the volume quorum action
3. through friend handshaking when there is a mismatch on quorum on
friend import.
In a brick multiplexing setup, glusterd ended up trying to spawn the
same brick process a couple of times, as two threads hit
glusterd_brick_start () within a fraction of a millisecond, leaving glusterd
no way to reject either of them since the brick start criteria were met
in both cases.
As a solution, this is controlled by two different
flags: one is a boolean called start_triggered, which indicates that a brick
start has been triggered and stays true until the brick dies or is
killed; the second is a mutex lock that ensures, for a particular brick,
glusterd_brick_start () is not entered more than once at the
same point in time.
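As an editor's standalone sketch of those two guards (pthread mutex plus a start_triggered flag; stub code, not glusterd itself), two threads racing to start the same brick result in only one actual spawn.

/* Standalone sketch: a per-brick mutex plus a start_triggered flag make the
 * "start this brick" operation idempotent even when two threads race into it. */
#include <stdio.h>
#include <stdbool.h>
#include <pthread.h>

struct brick {
    pthread_mutex_t restart_mutex;
    bool            start_triggered;
    int             spawn_count;             /* how many real spawns happened */
};

static void brick_start(struct brick *b)
{
    pthread_mutex_lock(&b->restart_mutex);
    if (!b->start_triggered) {               /* only the first caller spawns */
        b->start_triggered = true;
        b->spawn_count++;
    }
    pthread_mutex_unlock(&b->restart_mutex);
}

static void *worker(void *arg)
{
    brick_start(arg);
    return NULL;
}

int main(void)
{
    struct brick b = { PTHREAD_MUTEX_INITIALIZER, false, 0 };
    pthread_t t1, t2;

    pthread_create(&t1, NULL, worker, &b);
    pthread_create(&t2, NULL, worker, &b);
    pthread_join(t1, NULL);
    pthread_join(t2, NULL);

    printf("spawn_count = %d (expected 1)\n", b.spawn_count);
    return 0;
}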
>mainline patch : https://review.gluster.org/#/c/18577
Change-Id: I292f1e58d6971e111725e1baea1fe98b890b43e2
BUG: 1505363
Signed-off-by: Atin Mukherjee <amukherj@redhat.com>
Reviewed-on: https://code.engineering.redhat.com/gerrit/123056
Reviewed-by: Gaurav Yadav <gyadav@redhat.com>
---
xlators/mgmt/glusterd/src/glusterd-handler.c | 24 ++++++++-----
xlators/mgmt/glusterd/src/glusterd-op-sm.c | 31 ++++++++++-------
xlators/mgmt/glusterd/src/glusterd-server-quorum.c | 15 +++++++--
xlators/mgmt/glusterd/src/glusterd-utils.c | 39 +++++++++++++++++-----
xlators/mgmt/glusterd/src/glusterd-volume-ops.c | 8 +++++
xlators/mgmt/glusterd/src/glusterd.h | 2 ++
6 files changed, 87 insertions(+), 32 deletions(-)
diff --git a/xlators/mgmt/glusterd/src/glusterd-handler.c b/xlators/mgmt/glusterd/src/glusterd-handler.c
index 34e751c..c3b9252 100644
--- a/xlators/mgmt/glusterd/src/glusterd-handler.c
+++ b/xlators/mgmt/glusterd/src/glusterd-handler.c
@@ -5946,16 +5946,22 @@ glusterd_mark_bricks_stopped_by_proc (glusterd_brick_proc_t *brick_proc) {
int ret = -1;
cds_list_for_each_entry (brickinfo, &brick_proc->bricks, brick_list) {
- ret = glusterd_get_volinfo_from_brick (brickinfo->path, &volinfo);
+ ret = glusterd_get_volinfo_from_brick (brickinfo->path,
+ &volinfo);
if (ret) {
- gf_msg (THIS->name, GF_LOG_ERROR, 0, GD_MSG_VOLINFO_GET_FAIL,
- "Failed to get volinfo from brick(%s)",
- brickinfo->path);
+ gf_msg (THIS->name, GF_LOG_ERROR, 0,
+ GD_MSG_VOLINFO_GET_FAIL, "Failed to get volinfo"
+ " from brick(%s)", brickinfo->path);
goto out;
}
- cds_list_for_each_entry (brickinfo_tmp, &volinfo->bricks, brick_list) {
- if (strcmp (brickinfo->path, brickinfo_tmp->path) == 0)
- glusterd_set_brick_status (brickinfo_tmp, GF_BRICK_STOPPED);
+ cds_list_for_each_entry (brickinfo_tmp, &volinfo->bricks,
+ brick_list) {
+ if (strcmp (brickinfo->path,
+ brickinfo_tmp->path) == 0) {
+ glusterd_set_brick_status (brickinfo_tmp,
+ GF_BRICK_STOPPED);
+ brickinfo_tmp->start_triggered = _gf_false;
+ }
}
}
return 0;
@@ -6129,8 +6135,10 @@ __glusterd_brick_rpc_notify (struct rpc_clnt *rpc, void *mydata,
if (temp == 1)
break;
}
- } else
+ } else {
glusterd_set_brick_status (brickinfo, GF_BRICK_STOPPED);
+ brickinfo->start_triggered = _gf_false;
+ }
break;
case RPC_CLNT_DESTROY:
diff --git a/xlators/mgmt/glusterd/src/glusterd-op-sm.c b/xlators/mgmt/glusterd/src/glusterd-op-sm.c
index 9641b4f..5b8f833 100644
--- a/xlators/mgmt/glusterd/src/glusterd-op-sm.c
+++ b/xlators/mgmt/glusterd/src/glusterd-op-sm.c
@@ -2402,18 +2402,25 @@ glusterd_start_bricks (glusterd_volinfo_t *volinfo)
GF_ASSERT (volinfo);
cds_list_for_each_entry (brickinfo, &volinfo->bricks, brick_list) {
- ret = glusterd_brick_start (volinfo, brickinfo, _gf_false);
- if (ret) {
- gf_msg (THIS->name, GF_LOG_ERROR, 0,
- GD_MSG_BRICK_DISCONNECTED,
- "Failed to start %s:%s for %s",
- brickinfo->hostname, brickinfo->path,
- volinfo->volname);
- gf_event (EVENT_BRICK_START_FAILED,
- "peer=%s;volume=%s;brick=%s",
- brickinfo->hostname, volinfo->volname,
- brickinfo->path);
- goto out;
+ if (!brickinfo->start_triggered) {
+ pthread_mutex_lock (&brickinfo->restart_mutex);
+ {
+ ret = glusterd_brick_start (volinfo, brickinfo,
+ _gf_false);
+ }
+ pthread_mutex_unlock (&brickinfo->restart_mutex);
+ if (ret) {
+ gf_msg (THIS->name, GF_LOG_ERROR, 0,
+ GD_MSG_BRICK_DISCONNECTED,
+ "Failed to start %s:%s for %s",
+ brickinfo->hostname, brickinfo->path,
+ volinfo->volname);
+ gf_event (EVENT_BRICK_START_FAILED,
+ "peer=%s;volume=%s;brick=%s",
+ brickinfo->hostname, volinfo->volname,
+ brickinfo->path);
+ goto out;
+ }
}
}
diff --git a/xlators/mgmt/glusterd/src/glusterd-server-quorum.c b/xlators/mgmt/glusterd/src/glusterd-server-quorum.c
index 4706403..995a568 100644
--- a/xlators/mgmt/glusterd/src/glusterd-server-quorum.c
+++ b/xlators/mgmt/glusterd/src/glusterd-server-quorum.c
@@ -362,10 +362,19 @@ glusterd_do_volume_quorum_action (xlator_t *this, glusterd_volinfo_t *volinfo,
list_for_each_entry (brickinfo, &volinfo->bricks, brick_list) {
if (!glusterd_is_local_brick (this, volinfo, brickinfo))
continue;
- if (quorum_status == DOESNT_MEET_QUORUM)
+ if (quorum_status == DOESNT_MEET_QUORUM) {
glusterd_brick_stop (volinfo, brickinfo, _gf_false);
- else
- glusterd_brick_start (volinfo, brickinfo, _gf_false);
+ } else {
+ if (!brickinfo->start_triggered) {
+ pthread_mutex_lock (&brickinfo->restart_mutex);
+ {
+ glusterd_brick_start (volinfo,
+ brickinfo,
+ _gf_false);
+ }
+ pthread_mutex_unlock (&brickinfo->restart_mutex);
+ }
+ }
}
volinfo->quorum_status = quorum_status;
if (quorum_status == MEETS_QUORUM) {
diff --git a/xlators/mgmt/glusterd/src/glusterd-utils.c b/xlators/mgmt/glusterd/src/glusterd-utils.c
index bb236df..18de517 100644
--- a/xlators/mgmt/glusterd/src/glusterd-utils.c
+++ b/xlators/mgmt/glusterd/src/glusterd-utils.c
@@ -1084,7 +1084,7 @@ glusterd_brickinfo_new (glusterd_brickinfo_t **brickinfo)
goto out;
CDS_INIT_LIST_HEAD (&new_brickinfo->brick_list);
-
+ pthread_mutex_init (&new_brickinfo->restart_mutex, NULL);
*brickinfo = new_brickinfo;
ret = 0;
@@ -2481,7 +2481,7 @@ glusterd_volume_stop_glusterfs (glusterd_volinfo_t *volinfo,
(void) sys_unlink (pidfile);
brickinfo->status = GF_BRICK_STOPPED;
-
+ brickinfo->start_triggered = _gf_false;
if (del_brick)
glusterd_delete_brick (volinfo, brickinfo);
out:
@@ -5817,13 +5817,14 @@ glusterd_brick_start (glusterd_volinfo_t *volinfo,
* three different triggers for an attempt to start the brick process
* due to the quorum handling code in glusterd_friend_sm.
*/
- if (brickinfo->status == GF_BRICK_STARTING) {
+ if (brickinfo->status == GF_BRICK_STARTING ||
+ brickinfo->start_triggered) {
gf_msg_debug (this->name, 0, "brick %s is already in starting "
"phase", brickinfo->path);
ret = 0;
goto out;
}
-
+ brickinfo->start_triggered = _gf_true;
GLUSTERD_GET_BRICK_PIDFILE (pidfile, volinfo, brickinfo, conf);
if (gf_is_service_running (pidfile, &pid)) {
if (brickinfo->status != GF_BRICK_STARTING &&
@@ -5936,6 +5937,9 @@ run:
}
out:
+ if (ret && brickinfo) {
+ brickinfo->start_triggered = _gf_false;
+ }
gf_msg_debug (this->name, 0, "returning %d ", ret);
return ret;
}
@@ -5997,11 +6001,19 @@ glusterd_restart_bricks (glusterd_conf_t *conf)
start_svcs = _gf_true;
glusterd_svcs_manager (NULL);
}
-
cds_list_for_each_entry (brickinfo, &volinfo->bricks,
brick_list) {
- glusterd_brick_start (volinfo, brickinfo,
- _gf_false);
+ if (!brickinfo->start_triggered) {
+ pthread_mutex_lock
+ (&brickinfo->restart_mutex);
+ {
+ glusterd_brick_start
+ (volinfo, brickinfo,
+ _gf_false);
+ }
+ pthread_mutex_unlock
+ (&brickinfo->restart_mutex);
+ }
}
ret = glusterd_store_volinfo
(volinfo, GLUSTERD_VOLINFO_VER_AC_NONE);
@@ -6040,8 +6052,17 @@ glusterd_restart_bricks (glusterd_conf_t *conf)
"volume %s", volinfo->volname);
cds_list_for_each_entry (brickinfo, &volinfo->bricks,
brick_list) {
- glusterd_brick_start (volinfo, brickinfo,
- _gf_false);
+ if (!brickinfo->start_triggered) {
+ pthread_mutex_lock
+ (&brickinfo->restart_mutex);
+ {
+ glusterd_brick_start
+ (volinfo, brickinfo,
+ _gf_false);
+ }
+ pthread_mutex_unlock
+ (&brickinfo->restart_mutex);
+ }
}
ret = glusterd_store_volinfo
(volinfo, GLUSTERD_VOLINFO_VER_AC_NONE);
diff --git a/xlators/mgmt/glusterd/src/glusterd-volume-ops.c b/xlators/mgmt/glusterd/src/glusterd-volume-ops.c
index 834acab..bec5f72 100644
--- a/xlators/mgmt/glusterd/src/glusterd-volume-ops.c
+++ b/xlators/mgmt/glusterd/src/glusterd-volume-ops.c
@@ -2545,6 +2545,14 @@ glusterd_start_volume (glusterd_volinfo_t *volinfo, int flags,
GF_ASSERT (volinfo);
cds_list_for_each_entry (brickinfo, &volinfo->bricks, brick_list) {
+ /* Mark start_triggered to false so that in case if this brick
+ * was brought down through gf_attach utility, the
+ * brickinfo->start_triggered wouldn't have been updated to
+ * _gf_false
+ */
+ if (flags & GF_CLI_FLAG_OP_FORCE) {
+ brickinfo->start_triggered = _gf_false;
+ }
ret = glusterd_brick_start (volinfo, brickinfo, wait);
/* If 'force' try to start all bricks regardless of success or
* failure
diff --git a/xlators/mgmt/glusterd/src/glusterd.h b/xlators/mgmt/glusterd/src/glusterd.h
index 722d2f8..d4bb236 100644
--- a/xlators/mgmt/glusterd/src/glusterd.h
+++ b/xlators/mgmt/glusterd/src/glusterd.h
@@ -240,6 +240,8 @@ struct glusterd_brickinfo {
uint64_t statfs_fsid;
uint32_t fs_share_count;
gf_boolean_t port_registered;
+ gf_boolean_t start_triggered;
+ pthread_mutex_t restart_mutex;
};
typedef struct glusterd_brickinfo glusterd_brickinfo_t;
--
1.8.3.1

View File

@ -0,0 +1,265 @@
From b027d2fdd184d2ee2b2c4236603200be344156f8 Mon Sep 17 00:00:00 2001
From: Atin Mukherjee <amukherj@redhat.com>
Date: Thu, 10 Aug 2017 18:31:55 +0530
Subject: [PATCH 45/74] glusterd: introduce max-port range
The glusterd.vol file always had a (commented out) option to indicate the
base-port at which portmapper allocation starts. This patch brings in the
max-port configuration, with which one can limit the range of ports that
gluster is allowed to bind to.
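An editor's standalone sketch of a bounded allocator in the spirit of this change (toy free-table and made-up names, not the pmap registry code): allocation scans only [base_port, max_port] and reports exhaustion instead of walking past the configured ceiling.

/* Standalone sketch: port allocation is confined to [base_port, max_port];
 * when every port in that window is taken, allocation fails explicitly. */
#include <stdio.h>
#include <stdbool.h>

#define PORT_MAX 65535

static bool in_use[PORT_MAX + 1];

static int alloc_port(int base_port, int max_port)
{
    for (int p = base_port; p <= max_port; p++)
        if (!in_use[p]) {
            in_use[p] = true;
            return p;
        }
    return 0;                                 /* 0 == "ports exhausted" */
}

int main(void)
{
    int base = 49152, max = 49154;            /* e.g. from glusterd.vol overrides */

    for (int i = 0; i < 4; i++) {
        int p = alloc_port(base, max);
        if (p)
            printf("assigned port %d\n", p);
        else
            printf("all the ports in the range are exhausted\n");
    }
    return 0;
}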
>Fixes: #305
>Change-Id: Id7a864f818227b9530a07e13d605138edacd9aa9
>Signed-off-by: Atin Mukherjee <amukherj@redhat.com>
>Reviewed-on: https://review.gluster.org/18016
>Smoke: Gluster Build System <jenkins@build.gluster.org>
>Reviewed-by: Prashanth Pai <ppai@redhat.com>
>Reviewed-by: Niels de Vos <ndevos@redhat.com>
>CentOS-regression: Gluster Build System <jenkins@build.gluster.org>
>Reviewed-by: Gaurav Yadav <gyadav@redhat.com>
Change-Id: Id7a864f818227b9530a07e13d605138edacd9aa9
BUG: 1474745
Signed-off-by: Atin Mukherjee <amukherj@redhat.com>
Reviewed-on: https://code.engineering.redhat.com/gerrit/123060
Reviewed-by: Gaurav Yadav <gyadav@redhat.com>
---
extras/glusterd.vol.in | 1 +
xlators/mgmt/glusterd/src/glusterd-messages.h | 10 +++++++++-
xlators/mgmt/glusterd/src/glusterd-pmap.c | 20 +++++++++++---------
xlators/mgmt/glusterd/src/glusterd-pmap.h | 3 ++-
xlators/mgmt/glusterd/src/glusterd-snapd-svc.c | 8 ++++++++
xlators/mgmt/glusterd/src/glusterd-utils.c | 18 +++++++++++++++++-
xlators/mgmt/glusterd/src/glusterd.c | 17 +++++++++++++++--
xlators/mgmt/glusterd/src/glusterd.h | 1 +
8 files changed, 64 insertions(+), 14 deletions(-)
diff --git a/extras/glusterd.vol.in b/extras/glusterd.vol.in
index 957b277..0152996 100644
--- a/extras/glusterd.vol.in
+++ b/extras/glusterd.vol.in
@@ -9,4 +9,5 @@ volume management
option event-threads 1
# option transport.address-family inet6
# option base-port 49152
+# option max-port 65535
end-volume
diff --git a/xlators/mgmt/glusterd/src/glusterd-messages.h b/xlators/mgmt/glusterd/src/glusterd-messages.h
index 8bb4c43..de9ae92 100644
--- a/xlators/mgmt/glusterd/src/glusterd-messages.h
+++ b/xlators/mgmt/glusterd/src/glusterd-messages.h
@@ -41,7 +41,7 @@
#define GLUSTERD_COMP_BASE GLFS_MSGID_GLUSTERD
-#define GLFS_NUM_MESSAGES 611
+#define GLFS_NUM_MESSAGES 612
#define GLFS_MSGID_END (GLUSTERD_COMP_BASE + GLFS_NUM_MESSAGES + 1)
/* Messaged with message IDs */
@@ -4945,6 +4945,14 @@
*/
#define GD_MSG_SVC_START_FAIL (GLUSTERD_COMP_BASE + 590)
+/*!
+ * @messageid
+ * @diagnosis
+ * @recommendedaction
+ *
+ */
+#define GD_MSG_PORTS_EXHAUSTED (GLUSTERD_COMP_BASE + 612)
+
/*------------*/
#define glfs_msg_end_x GLFS_MSGID_END, "Invalid: End of messages"
diff --git a/xlators/mgmt/glusterd/src/glusterd-pmap.c b/xlators/mgmt/glusterd/src/glusterd-pmap.c
index 1b547e7..4f045ab 100644
--- a/xlators/mgmt/glusterd/src/glusterd-pmap.c
+++ b/xlators/mgmt/glusterd/src/glusterd-pmap.c
@@ -61,8 +61,8 @@ pmap_registry_new (xlator_t *this)
pmap->base_port = pmap->last_alloc =
((glusterd_conf_t *)(this->private))->base_port;
-
- for (i = pmap->base_port; i <= GF_PORT_MAX; i++) {
+ pmap->max_port = ((glusterd_conf_t *)(this->private))->max_port;
+ for (i = pmap->base_port; i <= pmap->max_port; i++) {
if (pmap_port_isfree (i))
pmap->ports[i].type = GF_PMAP_PORT_FREE;
else
@@ -184,10 +184,12 @@ pmap_registry_search_by_xprt (xlator_t *this, void *xprt,
static char *
pmap_registry_search_by_port (xlator_t *this, int port)
{
- struct pmap_registry *pmap = NULL;
- char *brickname = NULL;
+ struct pmap_registry *pmap = NULL;
+ char *brickname = NULL;
+ int max_port = 0;
- if (port > GF_PORT_MAX)
+ max_port = ((glusterd_conf_t *)(this->private))->max_port;
+ if (port > max_port)
goto out;
pmap = pmap_registry_get (this);
@@ -209,7 +211,7 @@ pmap_registry_alloc (xlator_t *this)
pmap = pmap_registry_get (this);
- for (p = pmap->base_port; p <= GF_PORT_MAX; p++) {
+ for (p = pmap->base_port; p <= pmap->max_port; p++) {
/* GF_PMAP_PORT_FOREIGN may be freed up ? */
if ((pmap->ports[p].type == GF_PMAP_PORT_FREE) ||
(pmap->ports[p].type == GF_PMAP_PORT_FOREIGN)) {
@@ -261,7 +263,7 @@ pmap_registry_bind (xlator_t *this, int port, const char *brickname,
pmap = pmap_registry_get (this);
- if (port > GF_PORT_MAX)
+ if (port > pmap->max_port)
goto out;
p = port;
@@ -297,7 +299,7 @@ pmap_registry_extend (xlator_t *this, int port, const char *brickname)
pmap = pmap_registry_get (this);
- if (port > GF_PORT_MAX) {
+ if (port > pmap->max_port) {
return -1;
}
@@ -357,7 +359,7 @@ pmap_registry_remove (xlator_t *this, int port, const char *brickname,
goto out;
if (port) {
- if (port > GF_PORT_MAX)
+ if (port > pmap->max_port)
goto out;
p = port;
diff --git a/xlators/mgmt/glusterd/src/glusterd-pmap.h b/xlators/mgmt/glusterd/src/glusterd-pmap.h
index 253b4cc..f642d66 100644
--- a/xlators/mgmt/glusterd/src/glusterd-pmap.h
+++ b/xlators/mgmt/glusterd/src/glusterd-pmap.h
@@ -31,8 +31,9 @@ struct pmap_port_status {
struct pmap_registry {
int base_port;
+ int max_port;
int last_alloc;
- struct pmap_port_status ports[65536];
+ struct pmap_port_status ports[GF_PORT_MAX + 1];
};
int pmap_assign_port (xlator_t *this, int port, const char *path);
diff --git a/xlators/mgmt/glusterd/src/glusterd-snapd-svc.c b/xlators/mgmt/glusterd/src/glusterd-snapd-svc.c
index 59d8fbd..5621852 100644
--- a/xlators/mgmt/glusterd/src/glusterd-snapd-svc.c
+++ b/xlators/mgmt/glusterd/src/glusterd-snapd-svc.c
@@ -300,6 +300,14 @@ glusterd_snapdsvc_start (glusterd_svc_t *svc, int flags)
"-S", svc->conn.sockpath, NULL);
snapd_port = pmap_assign_port (THIS, volinfo->snapd.port, snapd_id);
+ if (!snapd_port) {
+ gf_msg (this->name, GF_LOG_ERROR, 0, GD_MSG_PORTS_EXHAUSTED,
+ "All the ports in the range are exhausted, can't start "
+ "snapd for volume %s", volinfo->volname);
+ ret = -1;
+ goto out;
+ }
+
volinfo->snapd.port = snapd_port;
runner_add_arg (&runner, "--brick-port");
diff --git a/xlators/mgmt/glusterd/src/glusterd-utils.c b/xlators/mgmt/glusterd/src/glusterd-utils.c
index 18de517..55c4fa7 100644
--- a/xlators/mgmt/glusterd/src/glusterd-utils.c
+++ b/xlators/mgmt/glusterd/src/glusterd-utils.c
@@ -2002,7 +2002,14 @@ glusterd_volume_start_glusterfs (glusterd_volinfo_t *volinfo,
}
port = pmap_assign_port (THIS, brickinfo->port, brickinfo->path);
-
+ if (!port) {
+ gf_msg (this->name, GF_LOG_ERROR, 0, GD_MSG_PORTS_EXHAUSTED,
+ "All the ports in the range are exhausted, can't start "
+ "brick %s for volume %s", brickinfo->path,
+ volinfo->volname);
+ ret = -1;
+ goto out;
+ }
/* Build the exp_path, before starting the glusterfsd even in
valgrind mode. Otherwise all the glusterfsd processes start
writing the valgrind log to the same file.
@@ -2076,6 +2083,15 @@ retry:
brickinfo->path);
rdma_port = pmap_assign_port (THIS, brickinfo->rdma_port,
rdma_brick_path);
+ if (!rdma_port) {
+ gf_msg (this->name, GF_LOG_ERROR, 0,
+ GD_MSG_PORTS_EXHAUSTED, "All rdma ports in the "
+ "range are exhausted, can't start brick %s for "
+ "volume %s", rdma_brick_path,
+ volinfo->volname);
+ ret = -1;
+ goto out;
+ }
runner_argprintf (&runner, "%d,%d", port, rdma_port);
runner_add_arg (&runner, "--xlator-option");
runner_argprintf (&runner, "%s-server.transport.rdma.listen-port=%d",
diff --git a/xlators/mgmt/glusterd/src/glusterd.c b/xlators/mgmt/glusterd/src/glusterd.c
index 81a3206..68d3e90 100644
--- a/xlators/mgmt/glusterd/src/glusterd.c
+++ b/xlators/mgmt/glusterd/src/glusterd.c
@@ -1824,12 +1824,20 @@ init (xlator_t *this)
if (ret)
goto out;
- conf->base_port = GF_IANA_PRIV_PORTS_START;
- if (dict_get_uint32(this->options, "base-port", &conf->base_port) == 0) {
+ conf->base_port = GF_IANA_PRIV_PORTS_START;
+ if (dict_get_uint32 (this->options, "base-port",
+ &conf->base_port) == 0) {
gf_msg (this->name, GF_LOG_INFO, 0,
GD_MSG_DICT_SET_FAILED,
"base-port override: %d", conf->base_port);
}
+ conf->max_port = GF_PORT_MAX;
+ if (dict_get_uint32 (this->options, "max-port",
+ &conf->max_port) == 0) {
+ gf_msg (this->name, GF_LOG_INFO, 0,
+ GD_MSG_DICT_SET_FAILED,
+ "max-port override: %d", conf->max_port);
+ }
/* Set option to run bricks on valgrind if enabled in glusterd.vol */
this->ctx->cmd_args.valgrind = valgrind;
@@ -2135,6 +2143,11 @@ struct volume_options options[] = {
.type = GF_OPTION_TYPE_INT,
.description = "Sets the base port for portmap query"
},
+ { .key = {"max-port"},
+ .type = GF_OPTION_TYPE_INT,
+ .max = GF_PORT_MAX,
+ .description = "Sets the max port for portmap query"
+ },
{ .key = {"snap-brick-path"},
.type = GF_OPTION_TYPE_STR,
.description = "directory where the bricks for the snapshots will be created"
diff --git a/xlators/mgmt/glusterd/src/glusterd.h b/xlators/mgmt/glusterd/src/glusterd.h
index d4bb236..291f2f7 100644
--- a/xlators/mgmt/glusterd/src/glusterd.h
+++ b/xlators/mgmt/glusterd/src/glusterd.h
@@ -187,6 +187,7 @@ typedef struct {
gf_boolean_t restart_done;
rpcsvc_t *uds_rpc; /* RPCSVC for the unix domain socket */
uint32_t base_port;
+ uint32_t max_port;
char *snap_bricks_directory;
gf_store_handle_t *missed_snaps_list_shandle;
struct cds_list_head missed_snaps_list;
--
1.8.3.1

View File

@ -0,0 +1,424 @@
From 538b92ebe180186d84e3f5288f168c404e8957d4 Mon Sep 17 00:00:00 2001
From: Jiffin Tony Thottan <jthottan@redhat.com>
Date: Mon, 13 Nov 2017 18:41:58 +0530
Subject: [PATCH 46/74] Revert "build: conditionally build legacy gNFS server
and associated sub-packaging"
This reverts commit 83abcba6b42f94eb5a6495a634d4055362a9d79d.
Conflicts:
glusterfs.spec.in
xlators/Makefile.am
xlators/mgmt/glusterd/src/glusterd-messages.h
---
configure.ac | 12 -----
extras/LinuxRPM/Makefile.am | 4 +-
glusterfs.spec.in | 65 +++++++--------------------
xlators/Makefile.am | 6 +--
xlators/mgmt/glusterd/src/Makefile.am | 4 +-
xlators/mgmt/glusterd/src/glusterd-nfs-svc.c | 28 ++++++------
xlators/mgmt/glusterd/src/glusterd-svc-mgmt.h | 1 +
xlators/mgmt/glusterd/src/glusterd-utils.c | 7 ++-
xlators/mgmt/glusterd/src/glusterd.c | 35 ++++++++++++---
9 files changed, 68 insertions(+), 94 deletions(-)
diff --git a/configure.ac b/configure.ac
index 3841959..dfccd40 100644
--- a/configure.ac
+++ b/configure.ac
@@ -1345,17 +1345,6 @@ if test "x$enable_glupy" = "xyes"; then
fi
dnl end glupy section
-dnl gnfs section
-BUILD_GNFS="no"
-AC_ARG_ENABLE([gnfs],
- AC_HELP_STRING([--enable-gnfs],
- [Enable legacy gnfs server xlator.]))
-if test "x$enable_gnfs" = "xyes"; then
- BUILD_GNFS="yes"
-fi
-AM_CONDITIONAL([BUILD_GNFS], [test x$BUILD_GNFS = xyes])
-dnl end gnfs section
-
dnl Check for userspace-rcu
PKG_CHECK_MODULES([URCU], [liburcu-bp], [],
[AC_CHECK_HEADERS([urcu-bp.h],
@@ -1590,5 +1579,4 @@ echo "Events : $BUILD_EVENTS"
echo "EC dynamic support : $EC_DYNAMIC_SUPPORT"
echo "Use memory pools : $USE_MEMPOOL"
echo "Nanosecond m/atimes : $BUILD_NANOSECOND_TIMESTAMPS"
-echo "Legacy gNFS server : $BUILD_GNFS"
echo
diff --git a/extras/LinuxRPM/Makefile.am b/extras/LinuxRPM/Makefile.am
index f028537..61fd6da 100644
--- a/extras/LinuxRPM/Makefile.am
+++ b/extras/LinuxRPM/Makefile.am
@@ -18,7 +18,7 @@ autogen:
cd ../.. && \
rm -rf autom4te.cache && \
./autogen.sh && \
- ./configure --enable-gnfs --with-previous-options
+ ./configure --with-previous-options
prep:
$(MAKE) -C ../.. dist;
@@ -36,7 +36,7 @@ srcrpm:
mv rpmbuild/SRPMS/* .
rpms:
- rpmbuild --define '_topdir $(shell pwd)/rpmbuild' --with gnfs -bb rpmbuild/SPECS/glusterfs.spec
+ rpmbuild --define '_topdir $(shell pwd)/rpmbuild' -bb rpmbuild/SPECS/glusterfs.spec
mv rpmbuild/RPMS/*/* .
# EPEL-5 does not like new versions of rpmbuild and requires some
diff --git a/glusterfs.spec.in b/glusterfs.spec.in
index 8c16477..10339fe 100644
--- a/glusterfs.spec.in
+++ b/glusterfs.spec.in
@@ -47,10 +47,6 @@
%global _without_georeplication --disable-georeplication
%endif
-# if you wish to compile an rpm with the legacy gNFS server xlator
-# rpmbuild -ta @PACKAGE_NAME@-@PACKAGE_VERSION@.tar.gz --with gnfs
-%{?_with_gnfs:%global _with_gnfs --enable-gnfs}
-
# if you wish to compile an rpm without the OCF resource agents...
# rpmbuild -ta @PACKAGE_NAME@-@PACKAGE_VERSION@.tar.gz --without ocf
%{?_without_ocf:%global _without_ocf --without-ocf}
@@ -122,7 +118,7 @@
%endif
# From https://fedoraproject.org/wiki/Packaging:Python#Macros
-%if ( 0%{?rhel} && 0%{?rhel} <= 6 )
+%if ( 0%{?rhel} && 0%{?rhel} <= 5 )
%{!?python2_sitelib: %global python2_sitelib %(python2 -c "from distutils.sysconfig import get_python_lib; print(get_python_lib())")}
%{!?python2_sitearch: %global python2_sitearch %(python2 -c "from distutils.sysconfig import get_python_lib; print(get_python_lib(1))")}
%global _rundir %{_localstatedir}/run
@@ -461,26 +457,6 @@ This package provides support to geo-replication.
%endif
%endif
-%if ( 0%{?_with_gnfs:1} )
-%package gnfs
-Summary: GlusterFS gNFS server
-Group: System Environment/Daemons
-Requires: %{name}%{?_isa} = %{version}-%{release}
-Requires: %{name}-client-xlators%{?_isa} = %{version}-%{release}
-Requires: nfs-utils
-
-%description gnfs
-GlusterFS is a distributed file-system capable of scaling to several
-petabytes. It aggregates various storage bricks over Infiniband RDMA
-or TCP/IP interconnect into one large parallel network file
-system. GlusterFS is one of the most sophisticated file systems in
-terms of features and extensibility. It borrows a powerful concept
-called Translators from GNU Hurd kernel. Much of the code in GlusterFS
-is in user space and easily manageable.
-
-This package provides the glusterfs legacy gNFS server xlator
-%endif
-
%package libs
Summary: GlusterFS common libraries
Group: Applications/File
@@ -621,6 +597,7 @@ Requires: %{name}-api%{?_isa} = %{version}-%{release}
Requires: %{name}-client-xlators%{?_isa} = %{version}-%{release}
# lvm2 for snapshot, and nfs-utils and rpcbind/portmap for gnfs server
Requires: lvm2
+Requires: nfs-utils
%if ( 0%{?_with_systemd:1} )
%{?systemd_requires}
%else
@@ -736,19 +713,18 @@ export LDFLAGS
./autogen.sh && %configure \
%{?_with_cmocka} \
%{?_with_debug} \
- %{?_with_firewalld} \
- %{?_with_gnfs} \
- %{?_with_tmpfilesdir} \
%{?_with_valgrind} \
+ %{?_with_tmpfilesdir} \
%{?_without_bd} \
%{?_without_epoll} \
- %{?_without_events} \
%{?_without_fusermount} \
%{?_without_georeplication} \
+ %{?_with_firewalld} \
%{?_without_ocf} \
%{?_without_rdma} \
%{?_without_syslog} \
- %{?_without_tiering}
+ %{?_without_tiering} \
+ %{?_without_events}
# fix hardening and remove rpath in shlibs
%if ( 0%{?fedora} && 0%{?fedora} > 17 ) || ( 0%{?rhel} && 0%{?rhel} > 6 )
@@ -1105,6 +1081,7 @@ exit 0
%exclude %{_libdir}/glusterfs/%{version}%{?prereltag}/xlator/features/trash.so
%exclude %{_libdir}/glusterfs/%{version}%{?prereltag}/xlator/features/upcall.so
%exclude %{_libdir}/glusterfs/%{version}%{?prereltag}/xlator/mgmt*
+%exclude %{_libdir}/glusterfs/%{version}%{?prereltag}/xlator/nfs*
%exclude %{_libdir}/glusterfs/%{version}%{?prereltag}/xlator/performance/decompounder.so
%exclude %{_libdir}/glusterfs/%{version}%{?prereltag}/xlator/protocol/server*
%exclude %{_libdir}/glusterfs/%{version}%{?prereltag}/xlator/storage*
@@ -1297,19 +1274,6 @@ exit 0
%endif
%if ( 0%{?_build_server} )
-%if ( 0%{?_with_gnfs:1} )
-%files gnfs
-%dir %{_libdir}/glusterfs/%{version}%{?prereltag}/xlator
-%dir %{_libdir}/glusterfs/%{version}%{?prereltag}/xlator/nfs
- %{_libdir}/glusterfs/%{version}%{?prereltag}/xlator/nfs/server.so
-%ghost %dir %attr(0755,-,-) %{_sharedstatedir}/glusterd/nfs
-%ghost %attr(0600,-,-) %{_sharedstatedir}/glusterd/nfs/nfs-server.vol
-%ghost %dir %attr(0755,-,-) %{_sharedstatedir}/glusterd/nfs/run
-%ghost %attr(0600,-,-) %{_sharedstatedir}/glusterd/nfs/run/nfs.pid
-%endif
-%endif
-
-%if ( 0%{?_build_server} )
%files ganesha
%endif
@@ -1399,11 +1363,6 @@ exit 0
# sysconf
%config(noreplace) %{_sysconfdir}/glusterfs
%exclude %{_sysconfdir}/glusterfs/eventsconfig.json
-%exclude %{_sharedstatedir}/glusterd/nfs/nfs-server.vol
-%exclude %{_sharedstatedir}/glusterd/nfs/run/nfs.pid
-%if ( 0%{?_with_gnfs:1} )
-%exclude %{_libdir}/glusterfs/%{version}%{?prereltag}/xlator/nfs/*
-%endif
%config(noreplace) %{_sysconfdir}/sysconfig/glusterd
%if ( 0%{_for_fedora_koji_builds} )
%config(noreplace) %{_sysconfdir}/sysconfig/glusterfsd
@@ -1450,6 +1409,7 @@ exit 0
%{_libdir}/glusterfs/%{version}%{?prereltag}/xlator/features/trash.so
%{_libdir}/glusterfs/%{version}%{?prereltag}/xlator/features/upcall.so
%{_libdir}/glusterfs/%{version}%{?prereltag}/xlator/features/leases.so
+ %{_libdir}/glusterfs/%{version}%{?prereltag}/xlator/nfs*
%dir %{_libdir}/glusterfs/%{version}%{?prereltag}/xlator/mgmt
%{_libdir}/glusterfs/%{version}%{?prereltag}/xlator/mgmt/glusterd.so
%dir %{_libdir}/glusterfs/%{version}%{?prereltag}/xlator/protocol
@@ -1517,7 +1477,11 @@ exit 0
%dir %attr(0755,-,-) %{_sharedstatedir}/glusterd/hooks/1/stop/pre
%attr(0755,-,-) %{_sharedstatedir}/glusterd/hooks/1/stop/pre/S30samba-stop.sh
%attr(0755,-,-) %{_sharedstatedir}/glusterd/hooks/1/stop/pre/S29CTDB-teardown.sh
-%config(noreplace) %ghost %attr(0600,-,-) %{_sharedstatedir}/glusterd/options
+%ghost %dir %attr(0755,-,-) %{_sharedstatedir}/glusterd/nfs
+%ghost %attr(0600,-,-) %{_sharedstatedir}/glusterd/nfs/nfs-server.vol
+%ghost %dir %attr(0755,-,-) %{_sharedstatedir}/glusterd/nfs/run
+%ghost %attr(0600,-,-) %{_sharedstatedir}/glusterd/nfs/run/nfs.pid
+%ghost %attr(0600,-,-) %{_sharedstatedir}/glusterd/options
%ghost %dir %attr(0755,-,-) %{_sharedstatedir}/glusterd/peers
%ghost %dir %attr(0755,-,-) %{_sharedstatedir}/glusterd/quotad
%ghost %dir %attr(0755,-,-) %{_sharedstatedir}/glusterd/scrub
@@ -2156,6 +2120,9 @@ fi
%endif
%changelog
+* Mon Nov 13 2017 Jiffin Tony Thottan <jthottan@redhat.com>
+- DOWNSTREAM ONLY - revert of 83abcb(gnfs in an optional subpackage)
+
* Tue Oct 10 2017 Milind Changire <mchangir@redhat.com>
- DOWNSTREAM ONLY patch - launch glusterd in upgrade mode after all new bits have been installed
diff --git a/xlators/Makefile.am b/xlators/Makefile.am
index 29549db..c3c9cf2 100644
--- a/xlators/Makefile.am
+++ b/xlators/Makefile.am
@@ -1,12 +1,8 @@
-if BUILD_GNFS
- GNFS_DIR = nfs
-endif
-
DIST_SUBDIRS = cluster storage protocol performance debug features encryption \
mount nfs mgmt system playground meta
SUBDIRS = cluster storage protocol performance debug features encryption \
- mount ${GNFS_DIR} mgmt system playground meta
+ mount nfs mgmt system playground meta
EXTRA_DIST = xlator.sym
diff --git a/xlators/mgmt/glusterd/src/Makefile.am b/xlators/mgmt/glusterd/src/Makefile.am
index b0f5a9b..4858dee 100644
--- a/xlators/mgmt/glusterd/src/Makefile.am
+++ b/xlators/mgmt/glusterd/src/Makefile.am
@@ -1,8 +1,6 @@
xlator_LTLIBRARIES = glusterd.la
xlatordir = $(libdir)/glusterfs/$(PACKAGE_VERSION)/xlator/mgmt
-glusterd_la_CPPFLAGS = $(AM_CPPFLAGS) \
- -DFILTERDIR=\"$(libdir)/glusterfs/$(PACKAGE_VERSION)/filter\" \
- -DXLATORDIR=\"$(libdir)/glusterfs/$(PACKAGE_VERSION)/xlator\"
+glusterd_la_CPPFLAGS = $(AM_CPPFLAGS) "-DFILTERDIR=\"$(libdir)/glusterfs/$(PACKAGE_VERSION)/filter\""
glusterd_la_LDFLAGS = -module $(GF_XLATOR_DEFAULT_LDFLAGS)
glusterd_la_SOURCES = glusterd.c glusterd-handler.c glusterd-sm.c \
glusterd-op-sm.c glusterd-utils.c glusterd-rpc-ops.c \
diff --git a/xlators/mgmt/glusterd/src/glusterd-nfs-svc.c b/xlators/mgmt/glusterd/src/glusterd-nfs-svc.c
index 32b1064..eab9746 100644
--- a/xlators/mgmt/glusterd/src/glusterd-nfs-svc.c
+++ b/xlators/mgmt/glusterd/src/glusterd-nfs-svc.c
@@ -10,7 +10,6 @@
#include "globals.h"
#include "run.h"
-#include "syscall.h"
#include "glusterd.h"
#include "glusterd-utils.h"
#include "glusterd-volgen.h"
@@ -18,6 +17,8 @@
#include "glusterd-messages.h"
#include "glusterd-svc-helper.h"
+static char *nfs_svc_name = "nfs";
+
static gf_boolean_t
glusterd_nfssvc_need_start ()
{
@@ -40,13 +41,19 @@ glusterd_nfssvc_need_start ()
return start;
}
+int
+glusterd_nfssvc_init (glusterd_svc_t *svc)
+{
+ return glusterd_svc_init (svc, nfs_svc_name);
+}
+
static int
glusterd_nfssvc_create_volfile ()
{
char filepath[PATH_MAX] = {0,};
glusterd_conf_t *conf = THIS->private;
- glusterd_svc_build_volfile_path (conf->nfs_svc.name, conf->workdir,
+ glusterd_svc_build_volfile_path (nfs_svc_name, conf->workdir,
filepath, sizeof (filepath));
return glusterd_create_global_volfile (build_nfs_graph,
filepath, NULL);
@@ -58,16 +65,15 @@ glusterd_nfssvc_manager (glusterd_svc_t *svc, void *data, int flags)
int ret = -1;
if (!svc->inited) {
- ret = glusterd_svc_init (svc, "nfs");
+ ret = glusterd_nfssvc_init (svc);
if (ret) {
gf_msg (THIS->name, GF_LOG_ERROR, 0,
- GD_MSG_FAILED_INIT_NFSSVC,
- "Failed to init nfs service");
+ GD_MSG_FAILED_INIT_NFSSVC, "Failed to init nfs "
+ "service");
goto out;
} else {
svc->inited = _gf_true;
- gf_msg_debug (THIS->name, 0,
- "nfs service initialized");
+ gf_msg_debug (THIS->name, 0, "nfs service initialized");
}
}
@@ -75,14 +81,6 @@ glusterd_nfssvc_manager (glusterd_svc_t *svc, void *data, int flags)
if (ret)
goto out;
- /* not an error, or a (very) soft error at best */
- if (sys_access (XLATORDIR "/nfs/server.so", R_OK) != 0) {
- gf_msg (THIS->name, GF_LOG_INFO, 0,
- GD_MSG_GNFS_XLATOR_NOT_INSTALLED,
- "nfs/server.so xlator is not installed");
- goto out;
- }
-
ret = glusterd_nfssvc_create_volfile ();
if (ret)
goto out;
diff --git a/xlators/mgmt/glusterd/src/glusterd-svc-mgmt.h b/xlators/mgmt/glusterd/src/glusterd-svc-mgmt.h
index 8b70a62..c505d1e 100644
--- a/xlators/mgmt/glusterd/src/glusterd-svc-mgmt.h
+++ b/xlators/mgmt/glusterd/src/glusterd-svc-mgmt.h
@@ -29,6 +29,7 @@ struct glusterd_svc_ {
char name[PATH_MAX];
glusterd_conn_t conn;
glusterd_proc_t proc;
+ glusterd_svc_build_t build;
glusterd_svc_manager_t manager;
glusterd_svc_start_t start;
glusterd_svc_stop_t stop;
diff --git a/xlators/mgmt/glusterd/src/glusterd-utils.c b/xlators/mgmt/glusterd/src/glusterd-utils.c
index 55c4fa7..f611fbb 100644
--- a/xlators/mgmt/glusterd/src/glusterd-utils.c
+++ b/xlators/mgmt/glusterd/src/glusterd-utils.c
@@ -668,8 +668,11 @@ glusterd_volinfo_new (glusterd_volinfo_t **volinfo)
new_volinfo->xl = THIS;
- glusterd_snapdsvc_build (&new_volinfo->snapd.svc);
- glusterd_tierdsvc_build (&new_volinfo->tierd.svc);
+ new_volinfo->snapd.svc.build = glusterd_snapdsvc_build;
+ new_volinfo->snapd.svc.build (&(new_volinfo->snapd.svc));
+
+ new_volinfo->tierd.svc.build = glusterd_tierdsvc_build;
+ new_volinfo->tierd.svc.build (&(new_volinfo->tierd.svc));
pthread_mutex_init (&new_volinfo->reflock, NULL);
*volinfo = glusterd_volinfo_ref (new_volinfo);
diff --git a/xlators/mgmt/glusterd/src/glusterd.c b/xlators/mgmt/glusterd/src/glusterd.c
index 68d3e90..6ce4156 100644
--- a/xlators/mgmt/glusterd/src/glusterd.c
+++ b/xlators/mgmt/glusterd/src/glusterd.c
@@ -1330,6 +1330,34 @@ out:
return ret;
}
+static void
+glusterd_svcs_build ()
+{
+ xlator_t *this = NULL;
+ glusterd_conf_t *priv = NULL;
+
+ this = THIS;
+ GF_ASSERT (this);
+
+ priv = this->private;
+ GF_ASSERT (priv);
+
+ priv->shd_svc.build = glusterd_shdsvc_build;
+ priv->shd_svc.build (&(priv->shd_svc));
+
+ priv->nfs_svc.build = glusterd_nfssvc_build;
+ priv->nfs_svc.build (&(priv->nfs_svc));
+
+ priv->quotad_svc.build = glusterd_quotadsvc_build;
+ priv->quotad_svc.build (&(priv->quotad_svc));
+
+ priv->bitd_svc.build = glusterd_bitdsvc_build;
+ priv->bitd_svc.build (&(priv->bitd_svc));
+
+ priv->scrub_svc.build = glusterd_scrubsvc_build;
+ priv->scrub_svc.build (&(priv->scrub_svc));
+}
+
static int
is_upgrade (dict_t *options, gf_boolean_t *upgrade)
{
@@ -1864,12 +1892,7 @@ init (xlator_t *this)
this->private = conf;
glusterd_mgmt_v3_lock_init ();
glusterd_txn_opinfo_dict_init ();
-
- glusterd_shdsvc_build (&conf->shd_svc);
- glusterd_nfssvc_build (&conf->nfs_svc);
- glusterd_quotadsvc_build (&conf->quotad_svc);
- glusterd_bitdsvc_build (&conf->bitd_svc);
- glusterd_scrubsvc_build (&conf->scrub_svc);
+ glusterd_svcs_build ();
/* Make install copies few of the hook-scripts by creating hooks
* directory. Hence purposefully not doing the check for the presence of
--
1.8.3.1

View File

@ -0,0 +1,34 @@
From 7dd54e4e500a41105f375b2aa3620fcd619d5148 Mon Sep 17 00:00:00 2001
From: Jiffin Tony Thottan <jthottan@redhat.com>
Date: Mon, 13 Nov 2017 18:43:00 +0530
Subject: [PATCH 47/74] Revert "glusterd: skip nfs svc reconfigure if nfs
xlator is not installed"
This reverts commit 316e3300cfaa646b7fa45fcc7f57b81c7bb15a0e.
---
xlators/mgmt/glusterd/src/glusterd-nfs-svc.c | 9 ---------
1 file changed, 9 deletions(-)
diff --git a/xlators/mgmt/glusterd/src/glusterd-nfs-svc.c b/xlators/mgmt/glusterd/src/glusterd-nfs-svc.c
index eab9746..da34342 100644
--- a/xlators/mgmt/glusterd/src/glusterd-nfs-svc.c
+++ b/xlators/mgmt/glusterd/src/glusterd-nfs-svc.c
@@ -154,15 +154,6 @@ glusterd_nfssvc_reconfigure ()
priv = this->private;
GF_VALIDATE_OR_GOTO (this->name, priv, out);
- /* not an error, or a (very) soft error at best */
- if (sys_access (XLATORDIR "/nfs/server.so", R_OK) != 0) {
- gf_msg (THIS->name, GF_LOG_INFO, 0,
- GD_MSG_GNFS_XLATOR_NOT_INSTALLED,
- "nfs/server.so xlator is not installed");
- ret = 0;
- goto out;
- }
-
cds_list_for_each_entry (volinfo, &priv->volumes, vol_list) {
if (GLUSTERD_STATUS_STARTED == volinfo->status) {
vol_started = _gf_true;
--
1.8.3.1

View File

@ -0,0 +1,473 @@
From f37a409a8c0fa683ad95a61bf71e949f215e2f81 Mon Sep 17 00:00:00 2001
From: Gaurav Yadav <gyadav@redhat.com>
Date: Thu, 5 Oct 2017 23:44:46 +0530
Subject: [PATCH 48/74] glusterd : introduce timer in mgmt_v3_lock
Problem:
In a multinode environment, if two op-sm transactions
are initiated on one of the receiver nodes at the same time,
glusterd may end up holding a stale lock.
Solution:
During mgmt_v3_lock, a callback is registered with gf_timer_call_after,
which releases the lock after a certain period of time.
>mainline patch : https://review.gluster.org/#/c/18437
Change-Id: I16cc2e5186a2e8a5e35eca2468b031811e093843
BUG: 1442983
Signed-off-by: Gaurav Yadav <gyadav@redhat.com>
Reviewed-on: https://code.engineering.redhat.com/gerrit/123069
Reviewed-by: Atin Mukherjee <amukherj@redhat.com>
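
For orientation, the locking change below boils down to the following condensed sketch (identifiers as in this patch; simplified, with dict bookkeeping and error handling omitted, so not a drop-in excerpt):

    /* When the lock is granted, arm a one-shot timer; the delay defaults to
     * GF_LOCK_TIMER (180 seconds) unless overridden via glusterd's volfile. */
    struct timespec delay = { .tv_sec = priv->mgmt_v3_lock_timeout, .tv_nsec = 0 };
    mgmt_lock_timer->timer = gf_timer_call_after (this->ctx, delay,
                                                  gd_mgmt_v3_unlock_timer_cbk,
                                                  gf_strdup (key));

    /* If nobody released the lock in time, the callback drops the stale entry. */
    void
    gd_mgmt_v3_unlock_timer_cbk (void *data)
    {
            glusterd_conf_t *conf = THIS->private;
            char            *key  = data;

            dict_del (conf->mgmt_v3_lock, key);        /* drop the stale lock   */
            dict_del (conf->mgmt_v3_lock_timer, key);  /* and its timer record  */
    }

    /* A successful glusterd_mgmt_v3_unlock () cancels the timer with
     * gf_timer_call_cancel () before it can fire, so healthy locks are
     * never touched by the callback. */
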
---
extras/glusterd.vol.in | 1 +
libglusterfs/src/common-utils.h | 2 +-
libglusterfs/src/mem-types.h | 1 +
xlators/mgmt/glusterd/src/glusterd-locks.c | 219 +++++++++++++++++++++++++++--
xlators/mgmt/glusterd/src/glusterd-locks.h | 13 ++
xlators/mgmt/glusterd/src/glusterd.c | 28 +++-
xlators/mgmt/glusterd/src/glusterd.h | 2 +
7 files changed, 246 insertions(+), 20 deletions(-)
diff --git a/extras/glusterd.vol.in b/extras/glusterd.vol.in
index 0152996..fe413a9 100644
--- a/extras/glusterd.vol.in
+++ b/extras/glusterd.vol.in
@@ -7,6 +7,7 @@ volume management
option transport.socket.read-fail-log off
option ping-timeout 0
option event-threads 1
+# option lock-timer 180
# option transport.address-family inet6
# option base-port 49152
# option max-port 65535
diff --git a/libglusterfs/src/common-utils.h b/libglusterfs/src/common-utils.h
index e1c5f66..0131070 100644
--- a/libglusterfs/src/common-utils.h
+++ b/libglusterfs/src/common-utils.h
@@ -102,7 +102,7 @@ void trap (void);
#define GF_CLNT_INSECURE_PORT_CEILING (GF_IANA_PRIV_PORTS_START - 1)
#define GF_PORT_MAX 65535
#define GF_PORT_ARRAY_SIZE ((GF_PORT_MAX + 7) / 8)
-
+#define GF_LOCK_TIMER 180
#define GF_MINUTE_IN_SECONDS 60
#define GF_HOUR_IN_SECONDS (60*60)
#define GF_DAY_IN_SECONDS (24*60*60)
diff --git a/libglusterfs/src/mem-types.h b/libglusterfs/src/mem-types.h
index d244fb5..85cb5d2 100644
--- a/libglusterfs/src/mem-types.h
+++ b/libglusterfs/src/mem-types.h
@@ -177,6 +177,7 @@ enum gf_common_mem_types_ {
gf_common_mt_pthread_t,
gf_common_ping_local_t,
gf_common_volfile_t,
+ gf_common_mt_mgmt_v3_lock_timer_t,
gf_common_mt_end
};
#endif
diff --git a/xlators/mgmt/glusterd/src/glusterd-locks.c b/xlators/mgmt/glusterd/src/glusterd-locks.c
index 146092d..bd73b37 100644
--- a/xlators/mgmt/glusterd/src/glusterd-locks.c
+++ b/xlators/mgmt/glusterd/src/glusterd-locks.c
@@ -94,6 +94,50 @@ glusterd_mgmt_v3_lock_fini ()
dict_unref (priv->mgmt_v3_lock);
}
+/* Initialize the global mgmt_v3_timer lock list(dict) when
+ * glusterd is spawned */
+int32_t
+glusterd_mgmt_v3_lock_timer_init ()
+{
+ int32_t ret = -1;
+ xlator_t *this = NULL;
+ glusterd_conf_t *priv = NULL;
+
+ this = THIS;
+ GF_VALIDATE_OR_GOTO ("glusterd", this, out);
+
+ priv = this->private;
+ GF_VALIDATE_OR_GOTO (this->name, priv, out);
+
+ priv->mgmt_v3_lock_timer = dict_new ();
+ if (!priv->mgmt_v3_lock_timer)
+ goto out;
+
+ ret = 0;
+out:
+ return ret;
+}
+
+/* Destroy the global mgmt_v3_timer lock list(dict) when
+ * glusterd cleanup is performed */
+void
+glusterd_mgmt_v3_lock_timer_fini ()
+{
+ xlator_t *this = NULL;
+ glusterd_conf_t *priv = NULL;
+
+ this = THIS;
+ GF_VALIDATE_OR_GOTO ("glusterd", this, out);
+
+ priv = this->private;
+ GF_VALIDATE_OR_GOTO (this->name, priv, out);
+
+ if (priv->mgmt_v3_lock_timer)
+ dict_unref (priv->mgmt_v3_lock_timer);
+out:
+ return;
+}
+
int32_t
glusterd_get_mgmt_v3_lock_owner (char *key, uuid_t *uuid)
{
@@ -513,17 +557,23 @@ int32_t
glusterd_mgmt_v3_lock (const char *name, uuid_t uuid, uint32_t *op_errno,
char *type)
{
- char key[PATH_MAX] = "";
- int32_t ret = -1;
- glusterd_mgmt_v3_lock_obj *lock_obj = NULL;
- glusterd_conf_t *priv = NULL;
- gf_boolean_t is_valid = _gf_true;
- uuid_t owner = {0};
- xlator_t *this = NULL;
- char *bt = NULL;
+ char key[PATH_MAX] = "";
+ int32_t ret = -1;
+ glusterd_mgmt_v3_lock_obj *lock_obj = NULL;
+ glusterd_mgmt_v3_lock_timer *mgmt_lock_timer = NULL;
+ glusterd_conf_t *priv = NULL;
+ gf_boolean_t is_valid = _gf_true;
+ uuid_t owner = {0};
+ xlator_t *this = NULL;
+ char *bt = NULL;
+ struct timespec delay = {0};
+ char *key_dup = NULL;
+ glusterfs_ctx_t *mgmt_lock_timer_ctx = NULL;
+ xlator_t *mgmt_lock_timer_xl = NULL;
this = THIS;
GF_ASSERT (this);
+
priv = this->private;
GF_ASSERT (priv);
@@ -594,6 +644,42 @@ glusterd_mgmt_v3_lock (const char *name, uuid_t uuid, uint32_t *op_errno,
goto out;
}
+ mgmt_lock_timer = GF_CALLOC (1, sizeof(glusterd_mgmt_v3_lock_timer),
+ gf_common_mt_mgmt_v3_lock_timer_t);
+
+ if (!mgmt_lock_timer) {
+ ret = -1;
+ goto out;
+ }
+
+ mgmt_lock_timer->xl = THIS;
+ key_dup = gf_strdup (key);
+ delay.tv_sec = priv->mgmt_v3_lock_timeout;
+ delay.tv_nsec = 0;
+
+ ret = -1;
+ mgmt_lock_timer_xl = mgmt_lock_timer->xl;
+ GF_VALIDATE_OR_GOTO (this->name, mgmt_lock_timer_xl, out);
+
+ mgmt_lock_timer_ctx = mgmt_lock_timer_xl->ctx;
+ GF_VALIDATE_OR_GOTO (this->name, mgmt_lock_timer_ctx, out);
+
+ mgmt_lock_timer->timer = gf_timer_call_after
+ (mgmt_lock_timer_ctx, delay,
+ gd_mgmt_v3_unlock_timer_cbk,
+ key_dup);
+
+ ret = dict_set_bin (priv->mgmt_v3_lock_timer, key, mgmt_lock_timer,
+ sizeof (glusterd_mgmt_v3_lock_timer));
+ if (ret) {
+ gf_msg (this->name, GF_LOG_ERROR, 0,
+ GD_MSG_DICT_SET_FAILED,
+ "Unable to set timer in mgmt_v3 lock");
+ GF_FREE (mgmt_lock_timer);
+ goto out;
+ }
+
+
/* Saving the backtrace into the pre-allocated buffer, ctx->btbuf*/
if ((bt = gf_backtrace_save (NULL))) {
snprintf (key, sizeof (key), "debug.last-success-bt-%s-%s",
@@ -617,18 +703,98 @@ out:
return ret;
}
+/*
+ * This call back will ensure to unlock the lock_obj, in case we hit a situation
+ * where unlocking failed and stale lock exist*/
+void
+gd_mgmt_v3_unlock_timer_cbk (void *data)
+{
+ xlator_t *this = NULL;
+ glusterd_conf_t *conf = NULL;
+ glusterd_mgmt_v3_lock_timer *mgmt_lock_timer = NULL;
+ char *key = NULL;
+ char *type = NULL;
+ char bt_key[PATH_MAX] = "";
+ char name[PATH_MAX] = "";
+ int32_t ret = -1;
+ glusterfs_ctx_t *mgmt_lock_timer_ctx = NULL;
+ xlator_t *mgmt_lock_timer_xl = NULL;
+
+ this = THIS;
+ GF_VALIDATE_OR_GOTO ("glusterd", this, out);
+
+ conf = this->private;
+ GF_VALIDATE_OR_GOTO (this->name, conf, out);
+
+ gf_log (THIS->name, GF_LOG_INFO, "In gd_mgmt_v3_unlock_timer_cbk");
+ GF_ASSERT (NULL != data);
+ key = (char *)data;
+
+ dict_del (conf->mgmt_v3_lock, key);
+
+ type = strrchr (key, '_');
+ strncpy (name, key, strlen (key) - strlen (type) - 1);
+
+ ret = snprintf (bt_key, PATH_MAX, "debug.last-success-bt-%s-%s",
+ name, type + 1);
+ if (ret != strlen ("debug.last-success-bt-") + strlen (name) +
+ strlen (type)) {
+ gf_msg (this->name, GF_LOG_ERROR, 0,
+ GD_MSG_CREATE_KEY_FAIL, "Unable to create backtrace "
+ "key");
+ goto out;
+ }
+
+ dict_del (conf->mgmt_v3_lock, bt_key);
+
+ ret = dict_get_bin (conf->mgmt_v3_lock_timer, key,
+ (void **)&mgmt_lock_timer);
+ if (ret) {
+ gf_msg (this->name, GF_LOG_ERROR, 0,
+ GD_MSG_DICT_SET_FAILED,
+ "Unable to get lock owner in mgmt_v3 lock");
+ goto out;
+ }
+
+out:
+ if (mgmt_lock_timer->timer) {
+ mgmt_lock_timer_xl = mgmt_lock_timer->xl;
+ GF_VALIDATE_OR_GOTO (this->name, mgmt_lock_timer_xl,
+ ret_function);
+
+ mgmt_lock_timer_ctx = mgmt_lock_timer_xl->ctx;
+ GF_VALIDATE_OR_GOTO (this->name, mgmt_lock_timer_ctx,
+ ret_function);
+
+ gf_timer_call_cancel (mgmt_lock_timer_ctx,
+ mgmt_lock_timer->timer);
+ GF_FREE(key);
+ dict_del (conf->mgmt_v3_lock_timer, bt_key);
+ mgmt_lock_timer->timer = NULL;
+ }
+
+ret_function:
+
+ return;
+}
+
int32_t
glusterd_mgmt_v3_unlock (const char *name, uuid_t uuid, char *type)
{
- char key[PATH_MAX] = "";
- int32_t ret = -1;
- gf_boolean_t is_valid = _gf_true;
- glusterd_conf_t *priv = NULL;
- uuid_t owner = {0};
- xlator_t *this = NULL;
+ char key[PATH_MAX] = "";
+ char key_dup[PATH_MAX] = "";
+ int32_t ret = -1;
+ gf_boolean_t is_valid = _gf_true;
+ glusterd_conf_t *priv = NULL;
+ glusterd_mgmt_v3_lock_timer *mgmt_lock_timer = NULL;
+ uuid_t owner = {0};
+ xlator_t *this = NULL;
+ glusterfs_ctx_t *mgmt_lock_timer_ctx = NULL;
+ xlator_t *mgmt_lock_timer_xl = NULL;
this = THIS;
GF_ASSERT (this);
+
priv = this->private;
GF_ASSERT (priv);
@@ -657,6 +823,7 @@ glusterd_mgmt_v3_unlock (const char *name, uuid_t uuid, char *type)
ret = -1;
goto out;
}
+ strncpy (key_dup, key, strlen(key));
gf_msg_debug (this->name, 0,
"Trying to release lock of %s %s for %s as %s",
@@ -690,6 +857,15 @@ glusterd_mgmt_v3_unlock (const char *name, uuid_t uuid, char *type)
/* Removing the mgmt_v3 lock from the global list */
dict_del (priv->mgmt_v3_lock, key);
+ ret = dict_get_bin (priv->mgmt_v3_lock_timer, key,
+ (void **)&mgmt_lock_timer);
+ if (ret) {
+ gf_msg (this->name, GF_LOG_ERROR, 0,
+ GD_MSG_DICT_SET_FAILED,
+ "Unable to get mgmt lock key in mgmt_v3 lock");
+ goto out;
+ }
+
/* Remove the backtrace key as well */
ret = snprintf (key, sizeof(key), "debug.last-success-bt-%s-%s", name,
type);
@@ -708,7 +884,22 @@ glusterd_mgmt_v3_unlock (const char *name, uuid_t uuid, char *type)
type, name);
ret = 0;
+ /* Release owner reference which was held during lock */
+ if (mgmt_lock_timer->timer) {
+ ret = -1;
+ mgmt_lock_timer_xl = mgmt_lock_timer->xl;
+ GF_VALIDATE_OR_GOTO (this->name, mgmt_lock_timer_xl, out);
+
+ mgmt_lock_timer_ctx = mgmt_lock_timer_xl->ctx;
+ GF_VALIDATE_OR_GOTO (this->name, mgmt_lock_timer_ctx, out);
+ ret = 0;
+ gf_timer_call_cancel (mgmt_lock_timer_ctx,
+ mgmt_lock_timer->timer);
+ dict_del (priv->mgmt_v3_lock_timer, key_dup);
+ mgmt_lock_timer->timer = NULL;
+ }
out:
+
gf_msg_trace (this->name, 0, "Returning %d", ret);
return ret;
}
diff --git a/xlators/mgmt/glusterd/src/glusterd-locks.h b/xlators/mgmt/glusterd/src/glusterd-locks.h
index 437053d..226d5c6 100644
--- a/xlators/mgmt/glusterd/src/glusterd-locks.h
+++ b/xlators/mgmt/glusterd/src/glusterd-locks.h
@@ -14,6 +14,11 @@ typedef struct glusterd_mgmt_v3_lock_object_ {
uuid_t lock_owner;
} glusterd_mgmt_v3_lock_obj;
+typedef struct glusterd_mgmt_v3_lock_timer_ {
+ gf_timer_t *timer;
+ xlator_t *xl;
+} glusterd_mgmt_v3_lock_timer;
+
typedef struct glusterd_mgmt_v3_lock_valid_entities {
char *type; /* Entity type like vol, snap */
gf_boolean_t default_value; /* The default value that *
@@ -29,6 +34,12 @@ void
glusterd_mgmt_v3_lock_fini ();
int32_t
+glusterd_mgmt_v3_lock_timer_init ();
+
+void
+glusterd_mgmt_v3_lock_timer_fini ();
+
+int32_t
glusterd_get_mgmt_v3_lock_owner (char *volname, uuid_t *uuid);
int32_t
@@ -44,4 +55,6 @@ glusterd_multiple_mgmt_v3_lock (dict_t *dict, uuid_t uuid, uint32_t *op_errno);
int32_t
glusterd_multiple_mgmt_v3_unlock (dict_t *dict, uuid_t uuid);
+void
+gd_mgmt_v3_unlock_timer_cbk(void *data);
#endif
diff --git a/xlators/mgmt/glusterd/src/glusterd.c b/xlators/mgmt/glusterd/src/glusterd.c
index 6ce4156..ed01b93 100644
--- a/xlators/mgmt/glusterd/src/glusterd.c
+++ b/xlators/mgmt/glusterd/src/glusterd.c
@@ -1858,14 +1858,22 @@ init (xlator_t *this)
gf_msg (this->name, GF_LOG_INFO, 0,
GD_MSG_DICT_SET_FAILED,
"base-port override: %d", conf->base_port);
- }
- conf->max_port = GF_PORT_MAX;
- if (dict_get_uint32 (this->options, "max-port",
- &conf->max_port) == 0) {
+ }
+ conf->max_port = GF_PORT_MAX;
+ if (dict_get_uint32 (this->options, "max-port",
+ &conf->max_port) == 0) {
gf_msg (this->name, GF_LOG_INFO, 0,
GD_MSG_DICT_SET_FAILED,
"max-port override: %d", conf->max_port);
- }
+ }
+
+ conf->mgmt_v3_lock_timeout = GF_LOCK_TIMER;
+ if (dict_get_uint32 (this->options, "lock-timer",
+ &conf->mgmt_v3_lock_timeout) == 0) {
+ gf_msg (this->name, GF_LOG_INFO, 0,
+ GD_MSG_DICT_SET_FAILED,
+ "lock-timer override: %d", conf->mgmt_v3_lock_timeout);
+ }
/* Set option to run bricks on valgrind if enabled in glusterd.vol */
this->ctx->cmd_args.valgrind = valgrind;
@@ -1891,6 +1899,7 @@ init (xlator_t *this)
this->private = conf;
glusterd_mgmt_v3_lock_init ();
+ glusterd_mgmt_v3_lock_timer_init();
glusterd_txn_opinfo_dict_init ();
glusterd_svcs_build ();
@@ -2048,6 +2057,7 @@ fini (xlator_t *this)
gf_store_handle_destroy (conf->handle);
glusterd_sm_tr_log_delete (&conf->op_sm_log);
glusterd_mgmt_v3_lock_fini ();
+ glusterd_mgmt_v3_lock_timer_fini ();
glusterd_txn_opinfo_dict_fini ();
GF_FREE (conf);
@@ -2171,6 +2181,14 @@ struct volume_options options[] = {
.max = GF_PORT_MAX,
.description = "Sets the max port for portmap query"
},
+ { .key = {"mgmt-v3-lock-timeout"},
+ .type = GF_OPTION_TYPE_INT,
+ .max = 600,
+ .description = "Sets the mgmt-v3-lock-timeout for transactions. "
+ "Specifies the default timeout value after which "
+ "a lock acquired while performing a transaction "
+ "will be released."
+ },
{ .key = {"snap-brick-path"},
.type = GF_OPTION_TYPE_STR,
.description = "directory where the bricks for the snapshots will be created"
diff --git a/xlators/mgmt/glusterd/src/glusterd.h b/xlators/mgmt/glusterd/src/glusterd.h
index 291f2f7..59b1775 100644
--- a/xlators/mgmt/glusterd/src/glusterd.h
+++ b/xlators/mgmt/glusterd/src/glusterd.h
@@ -174,6 +174,7 @@ typedef struct {
* cluster with no
* transaction ids */
+ dict_t *mgmt_v3_lock_timer;
struct cds_list_head mount_specs;
pthread_t brick_thread;
void *hooks_priv;
@@ -195,6 +196,7 @@ typedef struct {
uint32_t generation;
int32_t workers;
uint32_t blockers;
+ uint32_t mgmt_v3_lock_timeout;
} glusterd_conf_t;
--
1.8.3.1
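
Usage note for the patch above: the timeout is meant to be tunable from glusterd's own volfile. Going by the extras/glusterd.vol.in hunk, overriding the built-in 180-second GF_LOCK_TIMER default would look roughly like the sketch below (the value 300 is illustrative). Note that init () reads the key "lock-timer" while the volume_options table only declares "mgmt-v3-lock-timeout", so it is worth double-checking which key name the running glusterd actually honours.

    volume management
        type mgmt/glusterd
        ...
        option event-threads 1
        # release a stale mgmt_v3 lock after 300 seconds
        option lock-timer 300
        ...
    end-volume
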

View File

@ -0,0 +1,514 @@
From 2278782dddf80611c7305ed982532647e38b5664 Mon Sep 17 00:00:00 2001
From: Jiffin Tony Thottan <jthottan@redhat.com>
Date: Mon, 16 Oct 2017 14:18:31 +0530
Subject: [PATCH 49/74] Revert "packaging: (ganesha) remove glusterfs-ganesha
subpackage and related files)"
This reverts commit 0cf2963f12a8b540a7042605d8c79f638fdf6cee.
Change-Id: Id6e7585021bd4dd78a59580cfa4838bdd4e539a0
Signed-off-by: Jiffin Tony Thottan <jthottan@redhat.com>
---
configure.ac | 3 +
extras/Makefile.am | 2 +-
extras/ganesha/Makefile.am | 2 +
extras/ganesha/config/Makefile.am | 4 +
extras/ganesha/config/ganesha-ha.conf.sample | 19 ++++
extras/ganesha/scripts/Makefile.am | 4 +
extras/ganesha/scripts/create-export-ganesha.sh | 91 +++++++++++++++
extras/ganesha/scripts/dbus-send.sh | 61 +++++++++++
extras/ganesha/scripts/generate-epoch.py | 48 ++++++++
extras/hook-scripts/start/post/Makefile.am | 2 +-
extras/hook-scripts/start/post/S31ganesha-start.sh | 122 +++++++++++++++++++++
glusterfs.spec.in | 10 +-
12 files changed, 362 insertions(+), 6 deletions(-)
create mode 100644 extras/ganesha/Makefile.am
create mode 100644 extras/ganesha/config/Makefile.am
create mode 100644 extras/ganesha/config/ganesha-ha.conf.sample
create mode 100644 extras/ganesha/scripts/Makefile.am
create mode 100755 extras/ganesha/scripts/create-export-ganesha.sh
create mode 100755 extras/ganesha/scripts/dbus-send.sh
create mode 100755 extras/ganesha/scripts/generate-epoch.py
create mode 100755 extras/hook-scripts/start/post/S31ganesha-start.sh
diff --git a/configure.ac b/configure.ac
index dfccd40..c8e6e44 100644
--- a/configure.ac
+++ b/configure.ac
@@ -207,6 +207,9 @@ AC_CONFIG_FILES([Makefile
extras/init.d/glustereventsd-Debian
extras/init.d/glustereventsd-Redhat
extras/init.d/glustereventsd-FreeBSD
+ extras/ganesha/Makefile
+ extras/ganesha/config/Makefile
+ extras/ganesha/scripts/Makefile
extras/systemd/Makefile
extras/systemd/glusterd.service
extras/systemd/glustereventsd.service
diff --git a/extras/Makefile.am b/extras/Makefile.am
index 6863772..2812a4c 100644
--- a/extras/Makefile.am
+++ b/extras/Makefile.am
@@ -8,7 +8,7 @@ EditorModedir = $(docdir)
EditorMode_DATA = glusterfs-mode.el glusterfs.vim
SUBDIRS = init.d systemd benchmarking hook-scripts $(OCF_SUBDIR) LinuxRPM \
- $(GEOREP_EXTRAS_SUBDIR) snap_scheduler firewalld cliutils
+ $(GEOREP_EXTRAS_SUBDIR) ganesha snap_scheduler firewalld cliutils
confdir = $(sysconfdir)/glusterfs
conf_DATA = glusterfs-logrotate gluster-rsyslog-7.2.conf gluster-rsyslog-5.8.conf \
diff --git a/extras/ganesha/Makefile.am b/extras/ganesha/Makefile.am
new file mode 100644
index 0000000..542de68
--- /dev/null
+++ b/extras/ganesha/Makefile.am
@@ -0,0 +1,2 @@
+SUBDIRS = scripts config
+CLEANFILES =
diff --git a/extras/ganesha/config/Makefile.am b/extras/ganesha/config/Makefile.am
new file mode 100644
index 0000000..c729273
--- /dev/null
+++ b/extras/ganesha/config/Makefile.am
@@ -0,0 +1,4 @@
+EXTRA_DIST= ganesha-ha.conf.sample
+
+confdir = $(sysconfdir)/ganesha
+conf_DATA = ganesha-ha.conf.sample
diff --git a/extras/ganesha/config/ganesha-ha.conf.sample b/extras/ganesha/config/ganesha-ha.conf.sample
new file mode 100644
index 0000000..c22892b
--- /dev/null
+++ b/extras/ganesha/config/ganesha-ha.conf.sample
@@ -0,0 +1,19 @@
+# Name of the HA cluster created.
+# must be unique within the subnet
+HA_NAME="ganesha-ha-360"
+#
+# N.B. you may use short names or long names; you may not use IP addrs.
+# Once you select one, stay with it as it will be mildly unpleasant to
+# clean up if you switch later on. Ensure that all names - short and/or
+# long - are in DNS or /etc/hosts on all machines in the cluster.
+#
+# The subset of nodes of the Gluster Trusted Pool that form the ganesha
+# HA cluster. Hostname is specified.
+HA_CLUSTER_NODES="server1,server2,..."
+#HA_CLUSTER_NODES="server1.lab.redhat.com,server2.lab.redhat.com,..."
+#
+# Virtual IPs for each of the nodes specified above.
+VIP_server1="10.0.2.1"
+VIP_server2="10.0.2.2"
+#VIP_server1_lab_redhat_com="10.0.2.1"
+#VIP_server2_lab_redhat_com="10.0.2.2"
diff --git a/extras/ganesha/scripts/Makefile.am b/extras/ganesha/scripts/Makefile.am
new file mode 100644
index 0000000..9ee8867
--- /dev/null
+++ b/extras/ganesha/scripts/Makefile.am
@@ -0,0 +1,4 @@
+EXTRA_DIST= create-export-ganesha.sh generate-epoch.py dbus-send.sh
+
+scriptsdir = $(libexecdir)/ganesha
+scripts_SCRIPTS = create-export-ganesha.sh generate-epoch.py
diff --git a/extras/ganesha/scripts/create-export-ganesha.sh b/extras/ganesha/scripts/create-export-ganesha.sh
new file mode 100755
index 0000000..1ffba42
--- /dev/null
+++ b/extras/ganesha/scripts/create-export-ganesha.sh
@@ -0,0 +1,91 @@
+#!/bin/bash
+
+#This script is called by glusterd when the user
+#tries to export a volume via NFS-Ganesha.
+#An export file specific to a volume
+#is created in GANESHA_DIR/exports.
+
+# Try loading the config from any of the distro
+# specific configuration locations
+if [ -f /etc/sysconfig/ganesha ]
+ then
+ . /etc/sysconfig/ganesha
+fi
+if [ -f /etc/conf.d/ganesha ]
+ then
+ . /etc/conf.d/ganesha
+fi
+if [ -f /etc/default/ganesha ]
+ then
+ . /etc/default/ganesha
+fi
+
+GANESHA_DIR=${1%/}
+OPTION=$2
+VOL=$3
+CONF=$GANESHA_DIR"/ganesha.conf"
+declare -i EXPORT_ID
+
+function check_cmd_status()
+{
+ if [ "$1" != "0" ]
+ then
+ rm -rf $GANESHA_DIR/exports/export.$VOL.conf
+ sed -i /$VOL.conf/d $CONF
+ exit 1
+ fi
+}
+
+
+if [ ! -d "$GANESHA_DIR/exports" ];
+ then
+ mkdir $GANESHA_DIR/exports
+ check_cmd_status `echo $?`
+fi
+
+function write_conf()
+{
+echo -e "# WARNING : Using Gluster CLI will overwrite manual
+# changes made to this file. To avoid it, edit the
+# file and run ganesha-ha.sh --refresh-config."
+
+echo "EXPORT{"
+echo " Export_Id = 2;"
+echo " Path = \"/$VOL\";"
+echo " FSAL {"
+echo " name = "GLUSTER";"
+echo " hostname=\"localhost\";"
+echo " volume=\"$VOL\";"
+echo " }"
+echo " Access_type = RW;"
+echo " Disable_ACL = true;"
+echo ' Squash="No_root_squash";'
+echo " Pseudo=\"/$VOL\";"
+echo ' Protocols = "3", "4" ;'
+echo ' Transports = "UDP","TCP";'
+echo ' SecType = "sys";'
+echo " }"
+}
+if [ "$OPTION" = "on" ];
+then
+ if ! (cat $CONF | grep $VOL.conf\"$ )
+ then
+ write_conf $@ > $GANESHA_DIR/exports/export.$VOL.conf
+ echo "%include \"$GANESHA_DIR/exports/export.$VOL.conf\"" >> $CONF
+ count=`ls -l $GANESHA_DIR/exports/*.conf | wc -l`
+ if [ "$count" = "1" ] ; then
+ EXPORT_ID=2
+ else
+ EXPORT_ID=`cat $GANESHA_DIR/.export_added`
+ check_cmd_status `echo $?`
+ EXPORT_ID=EXPORT_ID+1
+ sed -i s/Export_Id.*/"Export_Id= $EXPORT_ID ;"/ \
+ $GANESHA_DIR/exports/export.$VOL.conf
+ check_cmd_status `echo $?`
+ fi
+ echo $EXPORT_ID > $GANESHA_DIR/.export_added
+ fi
+else
+ rm -rf $GANESHA_DIR/exports/export.$VOL.conf
+ sed -i /$VOL.conf/d $CONF
+fi
diff --git a/extras/ganesha/scripts/dbus-send.sh b/extras/ganesha/scripts/dbus-send.sh
new file mode 100755
index 0000000..c071d03
--- /dev/null
+++ b/extras/ganesha/scripts/dbus-send.sh
@@ -0,0 +1,61 @@
+#!/bin/bash
+
+# Try loading the config from any of the distro
+# specific configuration locations
+if [ -f /etc/sysconfig/ganesha ]
+ then
+ . /etc/sysconfig/ganesha
+fi
+if [ -f /etc/conf.d/ganesha ]
+ then
+ . /etc/conf.d/ganesha
+fi
+if [ -f /etc/default/ganesha ]
+ then
+ . /etc/default/ganesha
+fi
+
+GANESHA_DIR=${1%/}
+OPTION=$2
+VOL=$3
+CONF=$GANESHA_DIR"/ganesha.conf"
+
+function check_cmd_status()
+{
+ if [ "$1" != "0" ]
+ then
+        logger "dynamic export failed on node: $(hostname -s)"
+ fi
+}
+
+#This function keeps track of export IDs and increments it with every new entry
+function dynamic_export_add()
+{
+ dbus-send --system \
+--dest=org.ganesha.nfsd /org/ganesha/nfsd/ExportMgr \
+org.ganesha.nfsd.exportmgr.AddExport string:$GANESHA_DIR/exports/export.$VOL.conf \
+string:"EXPORT(Path=/$VOL)"
+ check_cmd_status `echo $?`
+}
+
+#This function removes an export dynamically(uses the export_id of the export)
+function dynamic_export_remove()
+{
+ removed_id=`cat $GANESHA_DIR/exports/export.$VOL.conf |\
+grep Export_Id | awk -F"[=,;]" '{print$2}'| tr -d '[[:space:]]'`
+ dbus-send --print-reply --system \
+--dest=org.ganesha.nfsd /org/ganesha/nfsd/ExportMgr \
+org.ganesha.nfsd.exportmgr.RemoveExport uint16:$removed_id
+ check_cmd_status `echo $?`
+}
+
+if [ "$OPTION" = "on" ];
+then
+ dynamic_export_add $@
+fi
+
+if [ "$OPTION" = "off" ];
+then
+ dynamic_export_remove $@
+fi
+
diff --git a/extras/ganesha/scripts/generate-epoch.py b/extras/ganesha/scripts/generate-epoch.py
new file mode 100755
index 0000000..5db5e56
--- /dev/null
+++ b/extras/ganesha/scripts/generate-epoch.py
@@ -0,0 +1,48 @@
+#!/usr/bin/python
+#
+# Copyright (c) 2016 Red Hat, Inc. <http://www.redhat.com>
+# This file is part of GlusterFS.
+#
+# This file is licensed to you under your choice of the GNU Lesser
+# General Public License, version 3 or any later version (LGPLv3 or
+# later), or the GNU General Public License, version 2 (GPLv2), in all
+# cases as published by the Free Software Foundation.
+#
+# Generates unique epoch value on each gluster node to be used by
+# nfs-ganesha service on that node.
+#
+# Configure 'EPOCH_EXEC' option to this script path in
+# '/etc/sysconfig/ganesha' file used by nfs-ganesha service.
+#
+# Construct epoch as follows -
+# first 32-bit contains the now() time
+# rest 32-bit value contains the local glusterd node uuid
+
+import time
+import binascii
+
+# Calculate the now() time into a 64-bit integer value
+def epoch_now():
+ epoch_time = int(time.mktime(time.localtime())) << 32
+ return epoch_time
+
+# Read glusterd UUID and extract first 32-bit of it
+def epoch_uuid():
+ file_name = '/var/lib/glusterd/glusterd.info'
+
+ for line in open(file_name):
+ if "UUID" in line:
+ glusterd_uuid = line.split('=')[1].strip()
+
+ uuid_bin = binascii.unhexlify(glusterd_uuid.replace("-",""))
+
+ epoch_uuid = int(uuid_bin.encode('hex'), 32) & 0xFFFF0000
+ return epoch_uuid
+
+# Construct epoch as follows -
+# first 32-bit contains the now() time
+# rest 32-bit value contains the local glusterd node uuid
+epoch = (epoch_now() | epoch_uuid())
+print str(epoch)
+
+exit(0)
diff --git a/extras/hook-scripts/start/post/Makefile.am b/extras/hook-scripts/start/post/Makefile.am
index 384a582..03bb300 100644
--- a/extras/hook-scripts/start/post/Makefile.am
+++ b/extras/hook-scripts/start/post/Makefile.am
@@ -1,4 +1,4 @@
-EXTRA_DIST = S29CTDBsetup.sh S30samba-start.sh
+EXTRA_DIST = S29CTDBsetup.sh S30samba-start.sh S31ganesha-start.sh
hookdir = $(GLUSTERD_WORKDIR)/hooks/1/start/post/
hook_SCRIPTS = $(EXTRA_DIST)
diff --git a/extras/hook-scripts/start/post/S31ganesha-start.sh b/extras/hook-scripts/start/post/S31ganesha-start.sh
new file mode 100755
index 0000000..90ba6bc
--- /dev/null
+++ b/extras/hook-scripts/start/post/S31ganesha-start.sh
@@ -0,0 +1,122 @@
+#!/bin/bash
+PROGNAME="Sganesha-start"
+OPTSPEC="volname:,gd-workdir:"
+VOL=
+declare -i EXPORT_ID
+ganesha_key="ganesha.enable"
+GANESHA_DIR="/var/run/gluster/shared_storage/nfs-ganesha"
+CONF1="$GANESHA_DIR/ganesha.conf"
+GLUSTERD_WORKDIR=
+
+function parse_args ()
+{
+ ARGS=$(getopt -l $OPTSPEC -o "o" -name $PROGNAME $@)
+ eval set -- "$ARGS"
+
+ while true; do
+ case $1 in
+ --volname)
+ shift
+ VOL=$1
+ ;;
+ --gd-workdir)
+ shift
+ GLUSTERD_WORKDIR=$1
+ ;;
+ *)
+ shift
+ break
+ ;;
+ esac
+ shift
+ done
+}
+
+
+
+#This function generates a new export entry as export.volume_name.conf
+function write_conf()
+{
+echo -e "# WARNING : Using Gluster CLI will overwrite manual
+# changes made to this file. To avoid it, edit the
+# file, copy it over to all the NFS-Ganesha nodes
+# and run ganesha-ha.sh --refresh-config."
+
+echo "EXPORT{"
+echo " Export_Id = 2;"
+echo " Path = \"/$VOL\";"
+echo " FSAL {"
+echo " name = \"GLUSTER\";"
+echo " hostname=\"localhost\";"
+echo " volume=\"$VOL\";"
+echo " }"
+echo " Access_type = RW;"
+echo " Disable_ACL = true;"
+echo " Squash=\"No_root_squash\";"
+echo " Pseudo=\"/$VOL\";"
+echo " Protocols = \"3\", \"4\" ;"
+echo " Transports = \"UDP\",\"TCP\";"
+echo " SecType = \"sys\";"
+echo "}"
+}
+
+#It adds the export dynamically by sending dbus signals
+function export_add()
+{
+ dbus-send --print-reply --system --dest=org.ganesha.nfsd \
+/org/ganesha/nfsd/ExportMgr org.ganesha.nfsd.exportmgr.AddExport \
+string:$GANESHA_DIR/exports/export.$VOL.conf string:"EXPORT(Export_Id=$EXPORT_ID)"
+
+}
+
+# based on src/scripts/ganeshactl/Ganesha/export_mgr.py
+function is_exported()
+{
+ local volume="${1}"
+
+ dbus-send --type=method_call --print-reply --system \
+ --dest=org.ganesha.nfsd /org/ganesha/nfsd/ExportMgr \
+ org.ganesha.nfsd.exportmgr.ShowExports \
+ | grep -w -q "/${volume}"
+
+ return $?
+}
+
+# Check the info file (contains the volume options) to see if Ganesha is
+# enabled for this volume.
+function ganesha_enabled()
+{
+ local volume="${1}"
+ local info_file="${GLUSTERD_WORKDIR}/vols/${VOL}/info"
+ local enabled="off"
+
+ enabled=$(grep -w ${ganesha_key} ${info_file} | cut -d"=" -f2)
+
+ [ "${enabled}" == "on" ]
+
+ return $?
+}
+
+parse_args $@
+
+if ganesha_enabled ${VOL} && ! is_exported ${VOL}
+then
+ if [ ! -e ${GANESHA_DIR}/exports/export.${VOL}.conf ]
+ then
+ #Remove export entry from nfs-ganesha.conf
+ sed -i /$VOL.conf/d $CONF1
+ write_conf ${VOL} > ${GANESHA_DIR}/exports/export.${VOL}.conf
+ EXPORT_ID=`cat $GANESHA_DIR/.export_added`
+ EXPORT_ID=EXPORT_ID+1
+ echo $EXPORT_ID > $GANESHA_DIR/.export_added
+ sed -i s/Export_Id.*/"Export_Id=$EXPORT_ID;"/ \
+ $GANESHA_DIR/exports/export.$VOL.conf
+ echo "%include \"$GANESHA_DIR/exports/export.$VOL.conf\"" >> $CONF1
+ else
+ EXPORT_ID=$(grep ^[[:space:]]*Export_Id $GANESHA_DIR/exports/export.$VOL.conf |\
+ awk -F"[=,;]" '{print $2}' | tr -d '[[:space:]]')
+ fi
+ export_add $VOL
+fi
+
+exit 0
diff --git a/glusterfs.spec.in b/glusterfs.spec.in
index 10339fe..6e710e5 100644
--- a/glusterfs.spec.in
+++ b/glusterfs.spec.in
@@ -262,7 +262,6 @@ Obsoletes: hekafs
Obsoletes: %{name}-common < %{version}-%{release}
Obsoletes: %{name}-core < %{version}-%{release}
Obsoletes: %{name}-ufo
-Obsoletes: %{name}-ganesha
Provides: %{name}-common = %{version}-%{release}
Provides: %{name}-core = %{version}-%{release}
@@ -1275,6 +1274,9 @@ exit 0
%if ( 0%{?_build_server} )
%files ganesha
+%{_sysconfdir}/ganesha/*
+%{_libexecdir}/ganesha/*
+%{_sharedstatedir}/glusterd/hooks/1/start/post/S31ganesha-start.sh
%endif
%if ( 0%{?_build_server} )
@@ -2121,6 +2123,9 @@ fi
%changelog
* Mon Nov 13 2017 Jiffin Tony Thottan <jthottan@redhat.com>
+- Adding ganesha bits back in gluster repository #1499784
+
+* Mon Nov 13 2017 Jiffin Tony Thottan <jthottan@redhat.com>
- DOWNSTREAM ONLY - revert of 83abcb(gnfs in an optional subpackage)
* Tue Oct 10 2017 Milind Changire <mchangir@redhat.com>
@@ -2178,9 +2183,6 @@ fi
* Thu Feb 16 2017 Niels de Vos <ndevos@redhat.com>
- Obsolete and Provide python-gluster for upgrading from glusterfs < 3.10
-* Tue Feb 7 2017 Kaleb S. KEITHLEY <kkeithle@redhat.com>
-- remove ganesha (#1418417)
-
* Wed Feb 1 2017 Poornima G <pgurusid@redhat.com>
- Install /var/lib/glusterd/groups/metadata-cache by default
--
1.8.3.1
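
A short usage note on the S31ganesha-start.sh hook added above: ganesha_enabled () only consults the volume's info file for the ganesha.enable key, so, assuming the default workdir /var/lib/glusterd and a hypothetical volume named "myvol", an invocation equivalent to what glusterd performs at volume start would be roughly:

    # Hypothetical manual run; glusterd normally invokes the hook itself.
    /var/lib/glusterd/hooks/1/start/post/S31ganesha-start.sh \
        --volname myvol --gd-workdir /var/lib/glusterd

    # The export is written only if the volume's info file contains the line
    #   ganesha.enable=on
    # i.e. /var/lib/glusterd/vols/myvol/info carries that key.
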

File diff suppressed because it is too large

Some files were not shown because too many files have changed in this diff