import CS pcs-0.11.7-2.el9_4

eabdullin 2024-04-30 19:48:39 +00:00
parent 1a486023ad
commit 1a6763eec6
16 changed files with 170 additions and 4860 deletions

.gitignore

@@ -1,21 +1,19 @@
-SOURCES/backports-3.23.0.gem
+SOURCES/backports-3.24.1.gem
 SOURCES/childprocess-4.1.0.gem
-SOURCES/dacite-1.6.0.tar.gz
-SOURCES/daemons-1.4.1.gem
+SOURCES/dacite-1.8.1.tar.gz
 SOURCES/ethon-0.16.0.gem
-SOURCES/eventmachine-1.2.7.gem
-SOURCES/ffi-1.15.5.gem
+SOURCES/ffi-1.16.3.gem
 SOURCES/mustermann-3.0.0.gem
-SOURCES/pcs-0.11.4.tar.gz
-SOURCES/pcs-web-ui-0.1.16.1.tar.gz
-SOURCES/pcs-web-ui-node-modules-0.1.16.1.tar.xz
+SOURCES/nio4r-2.5.9.gem
+SOURCES/pcs-0.11.7.tar.gz
+SOURCES/pcs-web-ui-0.1.18.tar.gz
+SOURCES/pcs-web-ui-node-modules-0.1.18.tar.xz
+SOURCES/puma-6.4.0.gem
 SOURCES/pyagentx-0.4.pcs.2.tar.gz
-SOURCES/rack-2.2.6.4.gem
-SOURCES/rack-protection-3.0.5.gem
-SOURCES/rack-test-2.0.2.gem
+SOURCES/rack-2.2.8.1.gem
+SOURCES/rack-protection-3.1.0.gem
+SOURCES/rack-test-2.1.0.gem
 SOURCES/ruby2_keywords-0.0.5.gem
-SOURCES/sinatra-3.0.5.gem
-SOURCES/thin-1.8.1.gem
-SOURCES/tilt-2.0.11.gem
-SOURCES/tornado-6.2.0.tar.gz
-SOURCES/webrick-1.7.0.gem
+SOURCES/sinatra-3.1.0.gem
+SOURCES/tilt-2.3.0.gem
+SOURCES/tornado-6.3.3.tar.gz


@@ -1,21 +1,19 @@
-0e11246385a9e0a4bc122b74fb74fe536a234f81 SOURCES/backports-3.23.0.gem
+0ef72a288913e220695ad62718aeb75171924028 SOURCES/backports-3.24.1.gem
 81639c8886342e01d189c10a6beab6ad0526dc4e SOURCES/childprocess-4.1.0.gem
-31546c37fbdc6270d5097687619e9c0db6f1c05c SOURCES/dacite-1.6.0.tar.gz
-4795a8962cc1608bfec0d91fa4d438c7cfe90c62 SOURCES/daemons-1.4.1.gem
+07b26abbf7ff0dcba5c7f9e814ff7eebafefb058 SOURCES/dacite-1.8.1.tar.gz
 5b56a68268708c474bef04550639ded3add5e946 SOURCES/ethon-0.16.0.gem
-7a5b2896e210fac9759c786ee4510f265f75b481 SOURCES/eventmachine-1.2.7.gem
-97632b7975067266c0b39596de0a4c86d9330658 SOURCES/ffi-1.15.5.gem
+10e4cf0e11ef4581ec4ad5fe2cdf3c78b6077d39 SOURCES/ffi-1.16.3.gem
 e892678aaf02ccb27f3a6cd58482cda00aea6ce8 SOURCES/mustermann-3.0.0.gem
-b7aecf2f71777395b2b3bb79012de3e658383d4e SOURCES/pcs-0.11.4.tar.gz
-dba53fa53eb99770f9633fb15f19335fdb2530e2 SOURCES/pcs-web-ui-0.1.16.1.tar.gz
-9a8a94313975247239df63f2842d17f9a526dd3f SOURCES/pcs-web-ui-node-modules-0.1.16.1.tar.xz
+2f65d371f5f37460ad74afcedcb97d2b41a46806 SOURCES/nio4r-2.5.9.gem
+3aec6fd614169e4d0272a71eb3688ad3a54f91b3 SOURCES/pcs-0.11.7.tar.gz
+59d3e570bcbb7b3bcb2b9bf519425b2036e0faad SOURCES/pcs-web-ui-0.1.18.tar.gz
+252cc42bf9715209c67981da06f2791a91c2f3fb SOURCES/pcs-web-ui-node-modules-0.1.18.tar.xz
+d6049c4555f3c9d198e6eb1d7e53ce9b68e175ff SOURCES/puma-6.4.0.gem
 3176b2f2b332c2b6bf79fe882e83feecf3d3f011 SOURCES/pyagentx-0.4.pcs.2.tar.gz
-bbaa023e07bdc4143c5dd18d752c2543f254666f SOURCES/rack-2.2.6.4.gem
-b311f9d60fc3ac0e20078a5aca7c51efa404727c SOURCES/rack-protection-3.0.5.gem
-3c669527ecbcb9f915a83983ec89320c356e1fe3 SOURCES/rack-test-2.0.2.gem
+fcdee79d1b0bb7e3666bad96321fc124bc8215e9 SOURCES/rack-2.2.8.1.gem
+d34d1d308e3a1028c85bd0a7e4ba1d4f1ec0f725 SOURCES/rack-protection-3.1.0.gem
+ae09ea83748b55875edc3708fffba90db180cb8e SOURCES/rack-test-2.1.0.gem
 d017b9e4d1978e0b3ccc3e2a31493809e4693cd3 SOURCES/ruby2_keywords-0.0.5.gem
-2a2fb3c121c6e5adc6f29d7e06cef66cdda303f1 SOURCES/sinatra-3.0.5.gem
-1ac6292a98e17247b7bb847a35ff868605256f7b SOURCES/thin-1.8.1.gem
-360d77c80d2851a538fb13d43751093115c34712 SOURCES/tilt-2.0.11.gem
-9e809453db3a3347b7c0e7837a189833247e0828 SOURCES/tornado-6.2.0.tar.gz
-10ba51035928541b7713415f1f2e3a41114972fc SOURCES/webrick-1.7.0.gem
+cd57dfa17b103c514dd0b107ebda6ee4bfb6b0d4 SOURCES/sinatra-3.1.0.gem
+4a38a9a55887b2882182a2c5771e592efe514e5e SOURCES/tilt-2.3.0.gem
+4db49c4d5570e6fdc7ec845335bb341ebd5346a7 SOURCES/tornado-6.3.3.tar.gz


@@ -1,215 +0,0 @@
From 2f1b9d7f225530dfc88af57d364547d9ad425172 Mon Sep 17 00:00:00 2001
From: Ondrej Mular <omular@redhat.com>
Date: Thu, 24 Nov 2022 15:10:20 +0100
Subject: [PATCH] smoke test improvements
---
.gitignore | 1 +
.gitlab-ci.yml | 1 +
configure.ac | 2 +-
pcs/Makefile.am | 1 -
pcs/api_v2_client.in | 22 --------------------
pcs_test/Makefile.am | 1 +
pcs_test/api_v2_client.in | 20 +++++++++++++++++++
{pcs => pcs_test}/api_v2_client.py | 0
pcs_test/smoke.sh.in | 32 +++++++++++++++++++-----------
9 files changed, 44 insertions(+), 36 deletions(-)
delete mode 100644 pcs/api_v2_client.in
create mode 100644 pcs_test/api_v2_client.in
rename {pcs => pcs_test}/api_v2_client.py (100%)
diff --git a/.gitignore b/.gitignore
index b368a048..8dd3d5be 100644
--- a/.gitignore
+++ b/.gitignore
@@ -21,6 +21,7 @@ requirements.txt
setup.py
setup.cfg
pcs/api_v2_client
+pcs_test/api_v2_client
pcs/pcs
pcs/pcs_internal
pcs/settings.py
diff --git a/.gitlab-ci.yml b/.gitlab-ci.yml
index d4e0074d..3d797729 100644
--- a/.gitlab-ci.yml
+++ b/.gitlab-ci.yml
@@ -126,6 +126,7 @@ python_smoke_tests:
- ./autogen.sh
- ./configure --enable-local-build
- make
+ - rm -rf pcs
- pcs_test/smoke.sh
artifacts:
paths:
diff --git a/configure.ac b/configure.ac
index bc8abb39..b61c1b25 100644
--- a/configure.ac
+++ b/configure.ac
@@ -578,10 +578,10 @@ AC_CONFIG_FILES([Makefile
pcsd/settings.rb
pcsd/logrotate/pcsd])
-AC_CONFIG_FILES([pcs/api_v2_client], [chmod +x pcs/api_v2_client])
AC_CONFIG_FILES([pcs/pcs], [chmod +x pcs/pcs])
AC_CONFIG_FILES([pcs/pcs_internal], [chmod +x pcs/pcs_internal])
AC_CONFIG_FILES([pcs/snmp/pcs_snmp_agent], [chmod +x pcs/snmp/pcs_snmp_agent])
+AC_CONFIG_FILES([pcs_test/api_v2_client], [chmod +x pcs_test/api_v2_client])
AC_CONFIG_FILES([pcs_test/smoke.sh], [chmod +x pcs_test/smoke.sh])
AC_CONFIG_FILES([pcs_test/pcs_for_tests], [chmod +x pcs_test/pcs_for_tests])
AC_CONFIG_FILES([pcs_test/suite], [chmod +x pcs_test/suite])
diff --git a/pcs/Makefile.am b/pcs/Makefile.am
index 5c5104b4..f562b32c 100644
--- a/pcs/Makefile.am
+++ b/pcs/Makefile.am
@@ -20,7 +20,6 @@ EXTRA_DIST = \
acl.py \
alert.py \
app.py \
- api_v2_client.py \
cli/booth/command.py \
cli/booth/env.py \
cli/booth/__init__.py \
diff --git a/pcs/api_v2_client.in b/pcs/api_v2_client.in
deleted file mode 100644
index 93336c31..00000000
--- a/pcs/api_v2_client.in
+++ /dev/null
@@ -1,22 +0,0 @@
-#!@PYTHON@
-
-import os.path
-import sys
-
-CURRENT_DIR = os.path.dirname(os.path.abspath(__file__))
-
-# We prevent to import some module from this dir instead of e.g. standard module.
-# There is no reason to import anything from this module.
-sys.path.remove(CURRENT_DIR)
-
-# Add pcs package.
-PACKAGE_DIR = os.path.dirname(CURRENT_DIR)
-BUNDLED_PACKAGES_DIR = os.path.join(
- PACKAGE_DIR, "@PCS_BUNDLED_DIR_LOCAL@", "packages"
-)
-sys.path.insert(0, BUNDLED_PACKAGES_DIR)
-sys.path.insert(0, PACKAGE_DIR)
-
-from pcs import api_v2_client
-
-api_v2_client.main()
diff --git a/pcs_test/Makefile.am b/pcs_test/Makefile.am
index 89a23e05..6f497a0e 100644
--- a/pcs_test/Makefile.am
+++ b/pcs_test/Makefile.am
@@ -57,6 +57,7 @@ EXTRA_DIST = \
resources/transitions01.xml \
resources/transitions02.xml \
suite.py \
+ api_v2_client.py \
tier0/cli/booth/__init__.py \
tier0/cli/booth/test_env.py \
tier0/cli/cluster/__init__.py \
diff --git a/pcs_test/api_v2_client.in b/pcs_test/api_v2_client.in
new file mode 100644
index 00000000..73a22324
--- /dev/null
+++ b/pcs_test/api_v2_client.in
@@ -0,0 +1,20 @@
+#!@PYTHON@
+import os.path
+import sys
+
+CURRENT_DIR = os.path.dirname(os.path.abspath(__file__))
+
+TEST_INSTALLED = os.environ.get("PCS_TEST.TEST_INSTALLED", "0") == "1"
+
+if TEST_INSTALLED:
+ BUNDLED_PACKAGES_DIR = os.path.join("@PCS_BUNDLED_DIR@", "packages")
+else:
+ PACKAGE_DIR = os.path.dirname(CURRENT_DIR)
+ sys.path.insert(0, PACKAGE_DIR)
+ BUNDLED_PACKAGES_DIR = os.path.join(PACKAGE_DIR, "@PCS_BUNDLED_DIR_LOCAL@", "packages")
+
+sys.path.insert(0, BUNDLED_PACKAGES_DIR)
+
+from api_v2_client import main
+
+main()
diff --git a/pcs/api_v2_client.py b/pcs_test/api_v2_client.py
similarity index 100%
rename from pcs/api_v2_client.py
rename to pcs_test/api_v2_client.py
diff --git a/pcs_test/smoke.sh.in b/pcs_test/smoke.sh.in
index 42321777..b845b6d6 100755
--- a/pcs_test/smoke.sh.in
+++ b/pcs_test/smoke.sh.in
@@ -1,6 +1,8 @@
#!@BASH@
set -ex
+SCRIPT_DIR="$(dirname -- "$(realpath -- "$0")")"
+
cluster_user=hacluster
cluster_user_password=qa57Jk27eP
pcsd_socket_path="@LOCALSTATEDIR@/run/pcsd.socket"
@@ -15,13 +17,15 @@ if pidof systemd | grep "\b1\b"; then
pcs cluster setup cluster-name localhost --debug
fi
+output_file=$(mktemp)
+token_file=$(mktemp)
+
# Sanity check of API V0
token=$(python3 -c "import json; print(json.load(open('@LOCALSTATEDIR@/lib/pcsd/known-hosts'))['known_hosts']['localhost']['token']);")
-curl -kb "token=${token}" https://localhost:2224/remote/cluster_status_plaintext -d 'data_json={}' > output.json
-cat output.json; echo ""
-python3 -c "import json; import sys; json.load(open('output.json'))['status'] == 'exception' and (sys.exit(1))";
+curl -kb "token=${token}" https://localhost:2224/remote/cluster_status_plaintext -d 'data_json={}' > "${output_file}"
+cat "${output_file}"; echo ""
+python3 -c "import json; import sys; json.load(open('${output_file}'))['status'] == 'exception' and (sys.exit(1))";
-token_file=$(mktemp)
dd if=/dev/urandom bs=32 count=1 status=none | base64 > "${token_file}"
custom_localhost_node_name="custom-node-name"
@@ -30,24 +34,28 @@ pcs pcsd accept_token "${token_file}"
pcs pcsd status "${custom_localhost_node_name}" | grep "${custom_localhost_node_name}: Online"
# Sanity check of API V1
-curl -kb "token=${token}" https://localhost:2224/api/v1/resource-agent-get-agents-list/v1 --data '{}' > output.json
-cat output.json; echo ""
-python3 -c "import json; import sys; json.load(open('output.json'))['status'] != 'success' and (sys.exit(1))";
+curl -kb "token=${token}" https://localhost:2224/api/v1/resource-agent-get-agents-list/v1 --data '{}' > "${output_file}"
+cat "${output_file}"; echo ""
+python3 -c "import json; import sys; json.load(open('${output_file}'))['status'] != 'success' and (sys.exit(1))";
# Sanity check of API V2
# async
-pcs/api_v2_client resource_agent.get_agent_metadata '{"agent_name":{"standard":"ocf","provider":"pacemaker","type":"Dummy"}}'
+env "PCS_TEST.TEST_INSTALLED=1" ${SCRIPT_DIR}/api_v2_client resource_agent.get_agent_metadata '{"agent_name":{"standard":"ocf","provider":"pacemaker","type":"Dummy"}}'
# sync
-pcs/api_v2_client --sync resource_agent.get_agent_metadata '{"agent_name":{"standard":"ocf","provider":"pacemaker","type":"Stateful"}}'
+env "PCS_TEST.TEST_INSTALLED=1" ${SCRIPT_DIR}/api_v2_client --sync resource_agent.get_agent_metadata '{"agent_name":{"standard":"ocf","provider":"pacemaker","type":"Stateful"}}'
# unix socket test
-curl --unix-socket "${pcsd_socket_path}" http:/something/api/v1/resource-agent-get-agents-list/v1 --data '{}' > output.json
-cat output.json; echo ""
-python3 -c "import json; import sys; json.load(open('output.json'))['status'] != 'success' and (sys.exit(1))";
+curl --unix-socket "${pcsd_socket_path}" http:/something/api/v1/resource-agent-get-agents-list/v1 --data '{}' > "${output_file}"
+cat "${output_file}"; echo ""
+python3 -c "import json; import sys; json.load(open('${output_file}'))['status'] != 'success' and (sys.exit(1))";
# make sure socket is not accessible by all users
useradd testuser
su testuser
! curl --unix-socket "${pcsd_socket_path}" http:/something/api/v1/resource-agent-get-agents-list/v1 --data '{}'
+
+# cleanup
+rm "${token_file}"
+rm "${output_file}"
exit 0
--
2.38.1
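
The smoke test above drives pcsd's remote API with curl, authenticating with the token that pcsd stores in its known-hosts file. As a rough illustration only (this is not part of the imported patches), the same V0/V1 sanity checks could look like this in Python; the port 2224, the endpoints and the known-hosts layout are taken from the script above, while the use of the requests library and /var as the @LOCALSTATEDIR@ prefix are assumptions.

    import json

    import requests  # assumption: python3-requests is available; the script itself uses curl

    # Token for the local node, as written by pcsd (path from smoke.sh.in, assuming LOCALSTATEDIR=/var).
    with open("/var/lib/pcsd/known-hosts") as known_hosts:
        token = json.load(known_hosts)["known_hosts"]["localhost"]["token"]

    cookies = {"token": token}

    # API V0 sanity check: the reply must not have status == "exception".
    reply = requests.post(
        "https://localhost:2224/remote/cluster_status_plaintext",
        data={"data_json": "{}"},
        cookies=cookies,
        verify=False,  # pcsd uses a self-signed certificate by default
    )
    assert reply.json()["status"] != "exception"

    # API V1 sanity check: the reply must have status == "success".
    reply = requests.post(
        "https://localhost:2224/api/v1/resource-agent-get-agents-list/v1",
        data="{}",
        cookies=cookies,
        verify=False,
    )
    assert reply.json()["status"] == "success"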


@@ -1,30 +0,0 @@
From 4692f1032b7954d56f76f243283d519e8453ff58 Mon Sep 17 00:00:00 2001
From: Miroslav Lisik <mlisik@redhat.com>
Date: Tue, 29 Nov 2022 17:46:08 +0100
Subject: [PATCH 1/3] fix smoke test
---
pcs_test/smoke.sh.in | 5 +++--
1 file changed, 3 insertions(+), 2 deletions(-)
diff --git a/pcs_test/smoke.sh.in b/pcs_test/smoke.sh.in
index b845b6d6..e9466efd 100755
--- a/pcs_test/smoke.sh.in
+++ b/pcs_test/smoke.sh.in
@@ -52,10 +52,11 @@ python3 -c "import json; import sys; json.load(open('${output_file}'))['status']
# make sure socket is not accessible by all users
useradd testuser
-su testuser
-! curl --unix-socket "${pcsd_socket_path}" http:/something/api/v1/resource-agent-get-agents-list/v1 --data '{}'
+su testuser -c '! curl --unix-socket '"${pcsd_socket_path}"' http:/something/api/v1/resource-agent-get-agents-list/v1 --data '\''{}'\'''
# cleanup
rm "${token_file}"
rm "${output_file}"
+pcs cluster destroy --force
+userdel -r testuser
exit 0
--
2.38.1


@@ -1,40 +0,0 @@
From d486d9c9bafbfc13be7ff86c0ae781feed184d52 Mon Sep 17 00:00:00 2001
From: Ondrej Mular <omular@redhat.com>
Date: Thu, 24 Nov 2022 08:15:13 +0100
Subject: [PATCH 1/2] fix graceful termination of pcsd via systemd
---
CHANGELOG.md | 5 +++++
pcsd/pcsd.service.in | 1 +
2 files changed, 6 insertions(+)
diff --git a/CHANGELOG.md b/CHANGELOG.md
index 7d3d606b..7927eae6 100644
--- a/CHANGELOG.md
+++ b/CHANGELOG.md
@@ -1,5 +1,10 @@
# Change Log
+## [Unreleased]
+
+### Fixed
+- Graceful stopping pcsd service using `systemctl stop pcsd` command
+
## [0.11.4] - 2022-11-21
### Security
diff --git a/pcsd/pcsd.service.in b/pcsd/pcsd.service.in
index 8591e750..dca5052d 100644
--- a/pcsd/pcsd.service.in
+++ b/pcsd/pcsd.service.in
@@ -11,6 +11,7 @@ After=pcsd-ruby.service
EnvironmentFile=@CONF_DIR@/pcsd
ExecStart=@SBINDIR@/pcsd
Type=notify
+KillMode=mixed
[Install]
WantedBy=multi-user.target
--
2.38.1


@@ -1,145 +0,0 @@
From 1edf85bdabadf10708f63c0767991c7f4150e842 Mon Sep 17 00:00:00 2001
From: Ondrej Mular <omular@redhat.com>
Date: Wed, 7 Dec 2022 15:53:25 +0100
Subject: [PATCH 3/3] fix displaying bool and integer values in `pcs resource
config` command
---
CHANGELOG.md | 4 ++++
pcs/cli/resource/output.py | 18 +++++++++---------
pcs_test/resources/cib-resources.xml | 2 +-
pcs_test/tier1/legacy/test_resource.py | 3 ++-
pcs_test/tools/resources_dto.py | 4 ++--
5 files changed, 18 insertions(+), 13 deletions(-)
diff --git a/CHANGELOG.md b/CHANGELOG.md
index ed2083af..378cca50 100644
--- a/CHANGELOG.md
+++ b/CHANGELOG.md
@@ -9,7 +9,11 @@
### Fixed
- Graceful stopping pcsd service using `systemctl stop pcsd` command
+- Displaying bool and integer values in `pcs resource config` command
+ ([rhbz#2151164], [ghissue#604])
+[ghissue#604]: https://github.com/ClusterLabs/pcs/issues/604
+[rhbz#2151164]: https://bugzilla.redhat.com/show_bug.cgi?id=2151164
[rhbz#2151524]: https://bugzilla.redhat.com/show_bug.cgi?id=2151524
diff --git a/pcs/cli/resource/output.py b/pcs/cli/resource/output.py
index daa713a0..dbdf009f 100644
--- a/pcs/cli/resource/output.py
+++ b/pcs/cli/resource/output.py
@@ -69,9 +69,9 @@ def _resource_operation_to_pairs(
pairs.append(("interval-origin", operation_dto.interval_origin))
if operation_dto.timeout:
pairs.append(("timeout", operation_dto.timeout))
- if operation_dto.enabled:
+ if operation_dto.enabled is not None:
pairs.append(("enabled", _bool_to_cli_value(operation_dto.enabled)))
- if operation_dto.record_pending:
+ if operation_dto.record_pending is not None:
pairs.append(
("record-pending", _bool_to_cli_value(operation_dto.record_pending))
)
@@ -474,13 +474,13 @@ def _resource_bundle_container_options_to_pairs(
options: CibResourceBundleContainerRuntimeOptionsDto,
) -> List[Tuple[str, str]]:
option_list = [("image", options.image)]
- if options.replicas:
+ if options.replicas is not None:
option_list.append(("replicas", str(options.replicas)))
- if options.replicas_per_host:
+ if options.replicas_per_host is not None:
option_list.append(
("replicas-per-host", str(options.replicas_per_host))
)
- if options.promoted_max:
+ if options.promoted_max is not None:
option_list.append(("promoted-max", str(options.promoted_max)))
if options.run_command:
option_list.append(("run-command", options.run_command))
@@ -505,7 +505,7 @@ def _resource_bundle_network_options_to_pairs(
network_options.append(
("ip-range-start", bundle_network_dto.ip_range_start)
)
- if bundle_network_dto.control_port:
+ if bundle_network_dto.control_port is not None:
network_options.append(
("control-port", str(bundle_network_dto.control_port))
)
@@ -513,7 +513,7 @@ def _resource_bundle_network_options_to_pairs(
network_options.append(
("host-interface", bundle_network_dto.host_interface)
)
- if bundle_network_dto.host_netmask:
+ if bundle_network_dto.host_netmask is not None:
network_options.append(
("host-netmask", str(bundle_network_dto.host_netmask))
)
@@ -528,9 +528,9 @@ def _resource_bundle_port_mapping_to_pairs(
bundle_net_port_mapping_dto: CibResourceBundlePortMappingDto,
) -> List[Tuple[str, str]]:
mapping = []
- if bundle_net_port_mapping_dto.port:
+ if bundle_net_port_mapping_dto.port is not None:
mapping.append(("port", str(bundle_net_port_mapping_dto.port)))
- if bundle_net_port_mapping_dto.internal_port:
+ if bundle_net_port_mapping_dto.internal_port is not None:
mapping.append(
("internal-port", str(bundle_net_port_mapping_dto.internal_port))
)
diff --git a/pcs_test/resources/cib-resources.xml b/pcs_test/resources/cib-resources.xml
index 1e256b42..9242fd4a 100644
--- a/pcs_test/resources/cib-resources.xml
+++ b/pcs_test/resources/cib-resources.xml
@@ -53,7 +53,7 @@
</instance_attributes>
</op>
<op name="migrate_from" timeout="20s" interval="0s" id="R7-migrate_from-interval-0s"/>
- <op name="migrate_to" timeout="20s" interval="0s" id="R7-migrate_to-interval-0s"/>
+ <op name="migrate_to" timeout="20s" interval="0s" id="R7-migrate_to-interval-0s" enabled="false" record-pending="false"/>
<op name="monitor" timeout="20s" interval="10s" id="R7-monitor-interval-10s"/>
<op name="reload" timeout="20s" interval="0s" id="R7-reload-interval-0s"/>
<op name="reload-agent" timeout="20s" interval="0s" id="R7-reload-agent-interval-0s"/>
diff --git a/pcs_test/tier1/legacy/test_resource.py b/pcs_test/tier1/legacy/test_resource.py
index c097a937..3ba32ec7 100644
--- a/pcs_test/tier1/legacy/test_resource.py
+++ b/pcs_test/tier1/legacy/test_resource.py
@@ -774,7 +774,7 @@ Error: moni=tor does not appear to be a valid operation action
o, r = pcs(
self.temp_cib.name,
- "resource create --no-default-ops OPTest ocf:heartbeat:Dummy op monitor interval=30s OCF_CHECK_LEVEL=1 op monitor interval=25s OCF_CHECK_LEVEL=1".split(),
+ "resource create --no-default-ops OPTest ocf:heartbeat:Dummy op monitor interval=30s OCF_CHECK_LEVEL=1 op monitor interval=25s OCF_CHECK_LEVEL=1 enabled=0".split(),
)
ac(o, "")
assert r == 0
@@ -791,6 +791,7 @@ Error: moni=tor does not appear to be a valid operation action
OCF_CHECK_LEVEL=1
monitor: OPTest-monitor-interval-25s
interval=25s
+ enabled=0
OCF_CHECK_LEVEL=1
"""
),
diff --git a/pcs_test/tools/resources_dto.py b/pcs_test/tools/resources_dto.py
index e010037e..af0b4ac3 100644
--- a/pcs_test/tools/resources_dto.py
+++ b/pcs_test/tools/resources_dto.py
@@ -236,8 +236,8 @@ PRIMITIVE_R7 = CibResourcePrimitiveDto(
start_delay=None,
interval_origin=None,
timeout="20s",
- enabled=None,
- record_pending=None,
+ enabled=False,
+ record_pending=False,
role=None,
on_fail=None,
meta_attributes=[],
--
2.38.1
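
The crux of the fix above is replacing truthiness checks with explicit is-not-None checks: for configured values such as enabled=false or replicas=0, a plain `if value:` test is false and the option silently disappeared from `pcs resource config` output. A minimal standalone illustration of the difference (hypothetical names, not pcs code):

    from typing import List, Optional, Tuple

    def pairs_before(enabled: Optional[bool], replicas: Optional[int]) -> List[Tuple[str, str]]:
        pairs = []
        if enabled:  # False is skipped exactly like None
            pairs.append(("enabled", str(enabled).lower()))
        if replicas:  # 0 is skipped exactly like None
            pairs.append(("replicas", str(replicas)))
        return pairs

    def pairs_after(enabled: Optional[bool], replicas: Optional[int]) -> List[Tuple[str, str]]:
        pairs = []
        if enabled is not None:  # only an unset value is skipped
            pairs.append(("enabled", str(enabled).lower()))
        if replicas is not None:
            pairs.append(("replicas", str(replicas)))
        return pairs

    print(pairs_before(False, 0))  # [] -- configured values vanish from the output
    print(pairs_after(False, 0))   # [('enabled', 'false'), ('replicas', '0')]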


@@ -1,755 +0,0 @@
From e292dd4de2504da09901133fdab7ace5a97f9d73 Mon Sep 17 00:00:00 2001
From: Ondrej Mular <omular@redhat.com>
Date: Wed, 7 Dec 2022 11:33:25 +0100
Subject: [PATCH 2/3] add warning when updating a misconfigured resource
---
CHANGELOG.md | 8 ++
pcs/common/reports/codes.py | 3 +
pcs/common/reports/messages.py | 19 +++++
pcs/lib/cib/resource/primitive.py | 84 ++++++++++++++-----
pcs/lib/pacemaker/live.py | 38 ++-------
.../tier0/common/reports/test_messages.py | 16 ++++
.../cib/resource/test_primitive_validate.py | 56 +++++++------
pcs_test/tier0/lib/pacemaker/test_live.py | 78 +++++------------
pcs_test/tier1/legacy/test_stonith.py | 5 +-
9 files changed, 169 insertions(+), 138 deletions(-)
diff --git a/CHANGELOG.md b/CHANGELOG.md
index 7927eae6..ed2083af 100644
--- a/CHANGELOG.md
+++ b/CHANGELOG.md
@@ -2,9 +2,17 @@
## [Unreleased]
+### Added
+- Warning to `pcs resource|stonith update` commands about not using agent
+ self-validation feature when the resource is already misconfigured
+ ([rhbz#2151524])
+
### Fixed
- Graceful stopping pcsd service using `systemctl stop pcsd` command
+[rhbz#2151524]: https://bugzilla.redhat.com/show_bug.cgi?id=2151524
+
+
## [0.11.4] - 2022-11-21
### Security
diff --git a/pcs/common/reports/codes.py b/pcs/common/reports/codes.py
index 76963733..90609f47 100644
--- a/pcs/common/reports/codes.py
+++ b/pcs/common/reports/codes.py
@@ -44,6 +44,9 @@ AGENT_NAME_GUESS_FOUND_MORE_THAN_ONE = M("AGENT_NAME_GUESS_FOUND_MORE_THAN_ONE")
AGENT_NAME_GUESS_FOUND_NONE = M("AGENT_NAME_GUESS_FOUND_NONE")
AGENT_NAME_GUESSED = M("AGENT_NAME_GUESSED")
AGENT_SELF_VALIDATION_INVALID_DATA = M("AGENT_SELF_VALIDATION_INVALID_DATA")
+AGENT_SELF_VALIDATION_SKIPPED_UPDATED_RESOURCE_MISCONFIGURED = M(
+ "AGENT_SELF_VALIDATION_SKIPPED_UPDATED_RESOURCE_MISCONFIGURED"
+)
AGENT_SELF_VALIDATION_RESULT = M("AGENT_SELF_VALIDATION_RESULT")
BAD_CLUSTER_STATE_FORMAT = M("BAD_CLUSTER_STATE_FORMAT")
BOOTH_ADDRESS_DUPLICATION = M("BOOTH_ADDRESS_DUPLICATION")
diff --git a/pcs/common/reports/messages.py b/pcs/common/reports/messages.py
index ba748eb2..fbc4de62 100644
--- a/pcs/common/reports/messages.py
+++ b/pcs/common/reports/messages.py
@@ -7494,6 +7494,25 @@ class AgentSelfValidationInvalidData(ReportItemMessage):
return f"Invalid validation data from agent: {self.reason}"
+@dataclass(frozen=True)
+class AgentSelfValidationSkippedUpdatedResourceMisconfigured(ReportItemMessage):
+ """
+ Agent self validation is skipped when updating a resource as it is
+ misconfigured in its current state.
+ """
+
+ result: str
+ _code = codes.AGENT_SELF_VALIDATION_SKIPPED_UPDATED_RESOURCE_MISCONFIGURED
+
+ @property
+ def message(self) -> str:
+ return (
+ "The resource was misconfigured before the update, therefore agent "
+ "self-validation will not be run for the updated configuration. "
+ "Validation output of the original configuration:\n{result}"
+ ).format(result="\n".join(indent(self.result.splitlines())))
+
+
@dataclass(frozen=True)
class ResourceCloneIncompatibleMetaAttributes(ReportItemMessage):
"""
diff --git a/pcs/lib/cib/resource/primitive.py b/pcs/lib/cib/resource/primitive.py
index d9940e9d..f9d52b9b 100644
--- a/pcs/lib/cib/resource/primitive.py
+++ b/pcs/lib/cib/resource/primitive.py
@@ -357,6 +357,31 @@ def _is_ocf_or_stonith_agent(resource_agent_name: ResourceAgentName) -> bool:
return resource_agent_name.standard in ("stonith", "ocf")
+def _get_report_from_agent_self_validation(
+ is_valid: Optional[bool],
+ reason: str,
+ report_severity: reports.ReportItemSeverity,
+) -> reports.ReportItemList:
+ report_items = []
+ if is_valid is None:
+ report_items.append(
+ reports.ReportItem(
+ report_severity,
+ reports.messages.AgentSelfValidationInvalidData(reason),
+ )
+ )
+ elif not is_valid or reason:
+ if is_valid:
+ report_severity = reports.ReportItemSeverity.warning()
+ report_items.append(
+ reports.ReportItem(
+ report_severity,
+ reports.messages.AgentSelfValidationResult(reason),
+ )
+ )
+ return report_items
+
+
def validate_resource_instance_attributes_create(
cmd_runner: CommandRunner,
resource_agent: ResourceAgentFacade,
@@ -405,16 +430,16 @@ def validate_resource_instance_attributes_create(
for report_item in report_items
)
):
- (
- dummy_is_valid,
- agent_validation_reports,
- ) = validate_resource_instance_attributes_via_pcmk(
- cmd_runner,
- agent_name,
- instance_attributes,
- reports.get_severity(reports.codes.FORCE, force),
+ report_items.extend(
+ _get_report_from_agent_self_validation(
+ *validate_resource_instance_attributes_via_pcmk(
+ cmd_runner,
+ agent_name,
+ instance_attributes,
+ ),
+ reports.get_severity(reports.codes.FORCE, force),
+ )
)
- report_items.extend(agent_validation_reports)
return report_items
@@ -508,25 +533,40 @@ def validate_resource_instance_attributes_update(
)
):
(
- is_valid,
- dummy_reports,
+ original_is_valid,
+ original_reason,
) = validate_resource_instance_attributes_via_pcmk(
cmd_runner,
agent_name,
current_instance_attrs,
- reports.ReportItemSeverity.error(),
)
- if is_valid:
- (
- dummy_is_valid,
- agent_validation_reports,
- ) = validate_resource_instance_attributes_via_pcmk(
- cmd_runner,
- resource_agent.metadata.name,
- final_attrs,
- reports.get_severity(reports.codes.FORCE, force),
+ if original_is_valid:
+ report_items.extend(
+ _get_report_from_agent_self_validation(
+ *validate_resource_instance_attributes_via_pcmk(
+ cmd_runner,
+ resource_agent.metadata.name,
+ final_attrs,
+ ),
+ reports.get_severity(reports.codes.FORCE, force),
+ )
+ )
+ elif original_is_valid is None:
+ report_items.append(
+ reports.ReportItem.warning(
+ reports.messages.AgentSelfValidationInvalidData(
+ original_reason
+ )
+ )
+ )
+ else:
+ report_items.append(
+ reports.ReportItem.warning(
+ reports.messages.AgentSelfValidationSkippedUpdatedResourceMisconfigured(
+ original_reason
+ )
+ )
)
- report_items.extend(agent_validation_reports)
return report_items
diff --git a/pcs/lib/pacemaker/live.py b/pcs/lib/pacemaker/live.py
index 6dab613e..fb1e0a4a 100644
--- a/pcs/lib/pacemaker/live.py
+++ b/pcs/lib/pacemaker/live.py
@@ -884,8 +884,7 @@ def _validate_stonith_instance_attributes_via_pcmk(
cmd_runner: CommandRunner,
agent_name: ResourceAgentName,
instance_attributes: Mapping[str, str],
- not_valid_severity: reports.ReportItemSeverity,
-) -> tuple[Optional[bool], reports.ReportItemList]:
+) -> tuple[Optional[bool], str]:
cmd = [
settings.stonith_admin,
"--validate",
@@ -899,7 +898,6 @@ def _validate_stonith_instance_attributes_via_pcmk(
cmd,
"./validate/command/output",
instance_attributes,
- not_valid_severity,
)
@@ -907,8 +905,7 @@ def _validate_resource_instance_attributes_via_pcmk(
cmd_runner: CommandRunner,
agent_name: ResourceAgentName,
instance_attributes: Mapping[str, str],
- not_valid_severity: reports.ReportItemSeverity,
-) -> tuple[Optional[bool], reports.ReportItemList]:
+) -> tuple[Optional[bool], str]:
cmd = [
settings.crm_resource_binary,
"--validate",
@@ -926,7 +923,6 @@ def _validate_resource_instance_attributes_via_pcmk(
cmd,
"./resource-agent-action/command/output",
instance_attributes,
- not_valid_severity,
)
@@ -935,8 +931,7 @@ def _handle_instance_attributes_validation_via_pcmk(
cmd: StringSequence,
data_xpath: str,
instance_attributes: Mapping[str, str],
- not_valid_severity: reports.ReportItemSeverity,
-) -> tuple[Optional[bool], reports.ReportItemList]:
+) -> tuple[Optional[bool], str]:
full_cmd = list(cmd)
for key, value in sorted(instance_attributes.items()):
full_cmd.extend(["--option", f"{key}={value}"])
@@ -945,12 +940,7 @@ def _handle_instance_attributes_validation_via_pcmk(
# dom = _get_api_result_dom(stdout)
dom = xml_fromstring(stdout)
except (etree.XMLSyntaxError, etree.DocumentInvalid) as e:
- return None, [
- reports.ReportItem(
- not_valid_severity,
- reports.messages.AgentSelfValidationInvalidData(str(e)),
- )
- ]
+ return None, str(e)
result = "\n".join(
"\n".join(
line.strip() for line in item.text.split("\n") if line.strip()
@@ -958,38 +948,22 @@ def _handle_instance_attributes_validation_via_pcmk(
for item in dom.iterfind(data_xpath)
if item.get("source") == "stderr" and item.text
).strip()
- if return_value == 0:
- if result:
- return True, [
- reports.ReportItem.warning(
- reports.messages.AgentSelfValidationResult(result)
- )
- ]
- return True, []
- return False, [
- reports.ReportItem(
- not_valid_severity,
- reports.messages.AgentSelfValidationResult(result),
- )
- ]
+ return return_value == 0, result
def validate_resource_instance_attributes_via_pcmk(
cmd_runner: CommandRunner,
resource_agent_name: ResourceAgentName,
instance_attributes: Mapping[str, str],
- not_valid_severity: reports.ReportItemSeverity,
-) -> tuple[Optional[bool], reports.ReportItemList]:
+) -> tuple[Optional[bool], str]:
if resource_agent_name.is_stonith:
return _validate_stonith_instance_attributes_via_pcmk(
cmd_runner,
resource_agent_name,
instance_attributes,
- not_valid_severity,
)
return _validate_resource_instance_attributes_via_pcmk(
cmd_runner,
resource_agent_name,
instance_attributes,
- not_valid_severity,
)
diff --git a/pcs_test/tier0/common/reports/test_messages.py b/pcs_test/tier0/common/reports/test_messages.py
index 64e74daa..b1e009ce 100644
--- a/pcs_test/tier0/common/reports/test_messages.py
+++ b/pcs_test/tier0/common/reports/test_messages.py
@@ -5525,6 +5525,22 @@ class AgentSelfValidationInvalidData(NameBuildTest):
)
+class AgentSelfValidationSkippedUpdatedResourceMisconfigured(NameBuildTest):
+ def test_message(self):
+ lines = list(f"line #{i}" for i in range(3))
+ self.assert_message_from_report(
+ (
+ "The resource was misconfigured before the update, therefore "
+ "agent self-validation will not be run for the updated "
+ "configuration. Validation output of the original "
+ "configuration:\n {}"
+ ).format("\n ".join(lines)),
+ reports.AgentSelfValidationSkippedUpdatedResourceMisconfigured(
+ "\n".join(lines)
+ ),
+ )
+
+
class ResourceCloneIncompatibleMetaAttributes(NameBuildTest):
def test_with_provider(self):
attr = "attr_name"
diff --git a/pcs_test/tier0/lib/cib/resource/test_primitive_validate.py b/pcs_test/tier0/lib/cib/resource/test_primitive_validate.py
index 8b52314f..7a4e5c8f 100644
--- a/pcs_test/tier0/lib/cib/resource/test_primitive_validate.py
+++ b/pcs_test/tier0/lib/cib/resource/test_primitive_validate.py
@@ -660,7 +660,6 @@ class ValidateResourceInstanceAttributesCreateSelfValidation(TestCase):
self.cmd_runner,
facade.metadata.name,
attributes,
- reports.ReportItemSeverity.error(reports.codes.FORCE),
)
def test_force(self):
@@ -680,15 +679,14 @@ class ValidateResourceInstanceAttributesCreateSelfValidation(TestCase):
self.cmd_runner,
facade.metadata.name,
attributes,
- reports.ReportItemSeverity.warning(),
)
def test_failure(self):
attributes = {"required": "value"}
facade = _fixture_ocf_agent()
- failure_reports = ["report1", "report2"]
- self.agent_self_validation_mock.return_value = False, failure_reports
- self.assertEqual(
+ failure_reason = "failure reason"
+ self.agent_self_validation_mock.return_value = False, failure_reason
+ assert_report_item_list_equal(
primitive.validate_resource_instance_attributes_create(
self.cmd_runner,
facade,
@@ -696,13 +694,18 @@ class ValidateResourceInstanceAttributesCreateSelfValidation(TestCase):
etree.Element("resources"),
force=False,
),
- failure_reports,
+ [
+ fixture.error(
+ reports.codes.AGENT_SELF_VALIDATION_RESULT,
+ result=failure_reason,
+ force_code=reports.codes.FORCE,
+ )
+ ],
)
self.agent_self_validation_mock.assert_called_once_with(
self.cmd_runner,
facade.metadata.name,
attributes,
- reports.ReportItemSeverity.error(reports.codes.FORCE),
)
def test_stonith_check(self):
@@ -722,7 +725,6 @@ class ValidateResourceInstanceAttributesCreateSelfValidation(TestCase):
self.cmd_runner,
facade.metadata.name,
attributes,
- reports.ReportItemSeverity.error(reports.codes.FORCE),
)
def test_nonexisting_agent(self):
@@ -1346,13 +1348,11 @@ class ValidateResourceInstanceAttributesUpdateSelfValidation(TestCase):
self.cmd_runner,
facade.metadata.name,
old_attributes,
- reports.ReportItemSeverity.error(),
),
mock.call(
self.cmd_runner,
facade.metadata.name,
new_attributes,
- reports.ReportItemSeverity.error(reports.codes.FORCE),
),
],
)
@@ -1379,13 +1379,11 @@ class ValidateResourceInstanceAttributesUpdateSelfValidation(TestCase):
self.cmd_runner,
facade.metadata.name,
old_attributes,
- reports.ReportItemSeverity.error(),
),
mock.call(
self.cmd_runner,
facade.metadata.name,
new_attributes,
- reports.ReportItemSeverity.warning(),
),
],
)
@@ -1393,13 +1391,13 @@ class ValidateResourceInstanceAttributesUpdateSelfValidation(TestCase):
def test_failure(self):
old_attributes = {"required": "old_value"}
new_attributes = {"required": "new_value"}
- failure_reports = ["report1", "report2"]
+ failure_reason = "failure reason"
facade = _fixture_ocf_agent()
self.agent_self_validation_mock.side_effect = (
- (True, []),
- (False, failure_reports),
+ (True, ""),
+ (False, failure_reason),
)
- self.assertEqual(
+ assert_report_item_list_equal(
primitive.validate_resource_instance_attributes_update(
self.cmd_runner,
facade,
@@ -1408,7 +1406,13 @@ class ValidateResourceInstanceAttributesUpdateSelfValidation(TestCase):
self._fixture_resources(old_attributes),
force=False,
),
- failure_reports,
+ [
+ fixture.error(
+ reports.codes.AGENT_SELF_VALIDATION_RESULT,
+ result=failure_reason,
+ force_code=reports.codes.FORCE,
+ )
+ ],
)
self.assertEqual(
self.agent_self_validation_mock.mock_calls,
@@ -1417,13 +1421,11 @@ class ValidateResourceInstanceAttributesUpdateSelfValidation(TestCase):
self.cmd_runner,
facade.metadata.name,
old_attributes,
- reports.ReportItemSeverity.error(),
),
mock.call(
self.cmd_runner,
facade.metadata.name,
new_attributes,
- reports.ReportItemSeverity.error(reports.codes.FORCE),
),
],
)
@@ -1450,13 +1452,11 @@ class ValidateResourceInstanceAttributesUpdateSelfValidation(TestCase):
self.cmd_runner,
facade.metadata.name,
old_attributes,
- reports.ReportItemSeverity.error(),
),
mock.call(
self.cmd_runner,
facade.metadata.name,
new_attributes,
- reports.ReportItemSeverity.error(reports.codes.FORCE),
),
],
)
@@ -1522,10 +1522,10 @@ class ValidateResourceInstanceAttributesUpdateSelfValidation(TestCase):
def test_current_attributes_failure(self):
old_attributes = {"required": "old_value"}
new_attributes = {"required": "new_value"}
- failure_reports = ["report1", "report2"]
+ failure_reason = "failure reason"
facade = _fixture_ocf_agent()
- self.agent_self_validation_mock.return_value = False, failure_reports
- self.assertEqual(
+ self.agent_self_validation_mock.return_value = False, failure_reason
+ assert_report_item_list_equal(
primitive.validate_resource_instance_attributes_update(
self.cmd_runner,
facade,
@@ -1534,7 +1534,12 @@ class ValidateResourceInstanceAttributesUpdateSelfValidation(TestCase):
self._fixture_resources(old_attributes),
force=False,
),
- [],
+ [
+ fixture.warn(
+ reports.codes.AGENT_SELF_VALIDATION_SKIPPED_UPDATED_RESOURCE_MISCONFIGURED,
+ result=failure_reason,
+ )
+ ],
)
self.assertEqual(
self.agent_self_validation_mock.mock_calls,
@@ -1543,7 +1548,6 @@ class ValidateResourceInstanceAttributesUpdateSelfValidation(TestCase):
self.cmd_runner,
facade.metadata.name,
old_attributes,
- reports.ReportItemSeverity.error(),
),
],
)
diff --git a/pcs_test/tier0/lib/pacemaker/test_live.py b/pcs_test/tier0/lib/pacemaker/test_live.py
index 1f37d759..c1363a65 100644
--- a/pcs_test/tier0/lib/pacemaker/test_live.py
+++ b/pcs_test/tier0/lib/pacemaker/test_live.py
@@ -1706,16 +1706,15 @@ class HandleInstanceAttributesValidateViaPcmkTest(TestCase):
base_cmd = ["some", "command"]
(
is_valid,
- report_list,
+ reason,
) = lib._handle_instance_attributes_validation_via_pcmk(
runner,
base_cmd,
"result/output",
{"attr1": "val1", "attr2": "val2"},
- not_valid_severity=Severity.info(),
)
self.assertTrue(is_valid)
- self.assertEqual(report_list, [])
+ self.assertEqual(reason, "")
runner.run.assert_called_once_with(
base_cmd + ["--option", "attr1=val1", "--option", "attr2=val2"]
)
@@ -1725,23 +1724,17 @@ class HandleInstanceAttributesValidateViaPcmkTest(TestCase):
base_cmd = ["some", "command"]
(
is_valid,
- report_list,
+ reason,
) = lib._handle_instance_attributes_validation_via_pcmk(
runner,
base_cmd,
"result/output",
{"attr1": "val1", "attr2": "val2"},
- not_valid_severity=Severity.info(),
)
self.assertIsNone(is_valid)
- assert_report_item_list_equal(
- report_list,
- [
- fixture.info(
- report_codes.AGENT_SELF_VALIDATION_INVALID_DATA,
- reason="Start tag expected, '<' not found, line 1, column 1 (<string>, line 1)",
- )
- ],
+ self.assertEqual(
+ reason,
+ "Start tag expected, '<' not found, line 1, column 1 (<string>, line 1)",
)
runner.run.assert_called_once_with(
base_cmd + ["--option", "attr1=val1", "--option", "attr2=val2"]
@@ -1760,19 +1753,15 @@ class HandleInstanceAttributesValidateViaPcmkTest(TestCase):
base_cmd = ["some", "command"]
(
is_valid,
- report_list,
+ reason,
) = lib._handle_instance_attributes_validation_via_pcmk(
runner,
base_cmd,
"result/output",
{"attr1": "val1", "attr2": "val2"},
- not_valid_severity=Severity.info(),
)
self.assertTrue(is_valid)
- assert_report_item_list_equal(
- report_list,
- [],
- )
+ self.assertEqual(reason, "")
runner.run.assert_called_once_with(
base_cmd + ["--option", "attr1=val1", "--option", "attr2=val2"]
)
@@ -1791,23 +1780,15 @@ class HandleInstanceAttributesValidateViaPcmkTest(TestCase):
base_cmd = ["some", "command"]
(
is_valid,
- report_list,
+ reason,
) = lib._handle_instance_attributes_validation_via_pcmk(
runner,
base_cmd,
"result/output",
{"attr1": "val1", "attr2": "val2"},
- not_valid_severity=Severity.info(),
)
self.assertFalse(is_valid)
- assert_report_item_list_equal(
- report_list,
- [
- fixture.info(
- report_codes.AGENT_SELF_VALIDATION_RESULT, result=""
- )
- ],
- )
+ self.assertEqual(reason, "")
runner.run.assert_called_once_with(
base_cmd + ["--option", "attr1=val1", "--option", "attr2=val2"]
)
@@ -1835,23 +1816,17 @@ class HandleInstanceAttributesValidateViaPcmkTest(TestCase):
base_cmd = ["some", "command"]
(
is_valid,
- report_list,
+ reason,
) = lib._handle_instance_attributes_validation_via_pcmk(
runner,
base_cmd,
"result/output",
{"attr1": "val1", "attr2": "val2"},
- not_valid_severity=Severity.info(),
)
self.assertFalse(is_valid)
- assert_report_item_list_equal(
- report_list,
- [
- fixture.info(
- report_codes.AGENT_SELF_VALIDATION_RESULT,
- result="first line\nImportant output\nand another line",
- )
- ],
+ self.assertEqual(
+ reason,
+ "first line\nImportant output\nand another line",
)
runner.run.assert_called_once_with(
base_cmd + ["--option", "attr1=val1", "--option", "attr2=val2"]
@@ -1879,23 +1854,17 @@ class HandleInstanceAttributesValidateViaPcmkTest(TestCase):
base_cmd = ["some", "command"]
(
is_valid,
- report_list,
+ reason,
) = lib._handle_instance_attributes_validation_via_pcmk(
runner,
base_cmd,
"result/output",
{"attr1": "val1", "attr2": "val2"},
- not_valid_severity=Severity.info(),
)
self.assertTrue(is_valid)
- assert_report_item_list_equal(
- report_list,
- [
- fixture.warn(
- report_codes.AGENT_SELF_VALIDATION_RESULT,
- result="first line\nImportant output\nand another line",
- )
- ],
+ self.assertEqual(
+ reason,
+ "first line\nImportant output\nand another line",
)
runner.run.assert_called_once_with(
base_cmd + ["--option", "attr1=val1", "--option", "attr2=val2"]
@@ -1907,7 +1876,6 @@ class ValidateResourceInstanceAttributesViaPcmkTest(TestCase):
def setUp(self):
self.runner = mock.Mock()
self.attrs = dict(attra="val1", attrb="val2")
- self.severity = Severity.info()
patcher = mock.patch(
"pcs.lib.pacemaker.live._handle_instance_attributes_validation_via_pcmk"
)
@@ -1921,7 +1889,7 @@ class ValidateResourceInstanceAttributesViaPcmkTest(TestCase):
)
self.assertEqual(
lib._validate_resource_instance_attributes_via_pcmk(
- self.runner, agent, self.attrs, self.severity
+ self.runner, agent, self.attrs
),
self.ret_val,
)
@@ -1941,7 +1909,6 @@ class ValidateResourceInstanceAttributesViaPcmkTest(TestCase):
],
"./resource-agent-action/command/output",
self.attrs,
- self.severity,
)
def test_without_provider(self):
@@ -1950,7 +1917,7 @@ class ValidateResourceInstanceAttributesViaPcmkTest(TestCase):
)
self.assertEqual(
lib._validate_resource_instance_attributes_via_pcmk(
- self.runner, agent, self.attrs, self.severity
+ self.runner, agent, self.attrs
),
self.ret_val,
)
@@ -1968,7 +1935,6 @@ class ValidateResourceInstanceAttributesViaPcmkTest(TestCase):
],
"./resource-agent-action/command/output",
self.attrs,
- self.severity,
)
@@ -1978,7 +1944,6 @@ class ValidateStonithInstanceAttributesViaPcmkTest(TestCase):
def setUp(self):
self.runner = mock.Mock()
self.attrs = dict(attra="val1", attrb="val2")
- self.severity = Severity.info()
patcher = mock.patch(
"pcs.lib.pacemaker.live._handle_instance_attributes_validation_via_pcmk"
)
@@ -1992,7 +1957,7 @@ class ValidateStonithInstanceAttributesViaPcmkTest(TestCase):
)
self.assertEqual(
lib._validate_stonith_instance_attributes_via_pcmk(
- self.runner, agent, self.attrs, self.severity
+ self.runner, agent, self.attrs
),
self.ret_val,
)
@@ -2008,5 +1973,4 @@ class ValidateStonithInstanceAttributesViaPcmkTest(TestCase):
],
"./validate/command/output",
self.attrs,
- self.severity,
)
diff --git a/pcs_test/tier1/legacy/test_stonith.py b/pcs_test/tier1/legacy/test_stonith.py
index 8b31094b..7e7ec030 100644
--- a/pcs_test/tier1/legacy/test_stonith.py
+++ b/pcs_test/tier1/legacy/test_stonith.py
@@ -1291,7 +1291,10 @@ class StonithTest(TestCase, AssertPcsMixin):
),
)
- self.assert_pcs_success("stonith update test3 username=testA".split())
+ self.assert_pcs_success(
+ "stonith update test3 username=testA".split(),
+ stdout_start="Warning: ",
+ )
self.assert_pcs_success(
"stonith config test2".split(),
--
2.38.1
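
The behavioural change in this patch is easiest to see in validate_resource_instance_attributes_update: the low-level pcmk helpers now return a plain (is_valid, reason) pair, the current attributes are self-validated first, and only when they pass is the updated configuration validated; an already misconfigured resource no longer blocks the update and instead produces the new warning. A simplified, runnable sketch of that decision flow (illustrative names and plain strings instead of the real pcs report API):

    from typing import Callable, List, Optional, Tuple

    ValidationResult = Tuple[Optional[bool], str]  # (is_valid, agent output), as returned by the helpers above

    def reports_for_update(
        current: ValidationResult,
        validate_new: Callable[[], ValidationResult],
    ) -> List[str]:
        current_valid, current_reason = current
        if current_valid is None:
            # Validator output for the current configuration could not be parsed.
            return [f"warning: invalid validation data: {current_reason}"]
        if not current_valid:
            # The resource was already misconfigured: skip self-validation of the
            # new attributes and only warn (the report added by this patch).
            return [
                "warning: self-validation skipped, the resource was misconfigured "
                f"before the update:\n{current_reason}"
            ]
        new_valid, new_reason = validate_new()
        if new_valid and not new_reason:
            return []
        severity = "warning" if new_valid else "error"
        return [f"{severity}: agent self validation result:\n{new_reason}"]

    # Current configuration already invalid -> update proceeds, user gets a warning.
    print(reports_for_update((False, "missing required option"), lambda: (True, "")))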


@@ -1,506 +0,0 @@
From bbf53f713189eb2233efa03bf3aa9c96eb79ba82 Mon Sep 17 00:00:00 2001
From: Miroslav Lisik <mlisik@redhat.com>
Date: Thu, 5 Jan 2023 16:21:44 +0100
Subject: [PATCH 2/2] Fix stonith-watchdog-timeout validation
---
CHANGELOG.md | 2 +
pcs/lib/cluster_property.py | 25 ++++-
pcs/lib/sbd.py | 15 ++-
.../lib/commands/test_cluster_property.py | 50 ++++++++--
pcs_test/tier0/lib/test_cluster_property.py | 98 ++++++++++++++-----
pcs_test/tier1/test_cluster_property.py | 14 ++-
6 files changed, 159 insertions(+), 45 deletions(-)
diff --git a/CHANGELOG.md b/CHANGELOG.md
index 47212f00..0945d727 100644
--- a/CHANGELOG.md
+++ b/CHANGELOG.md
@@ -11,6 +11,7 @@
- Graceful stopping pcsd service using `systemctl stop pcsd` command
- Displaying bool and integer values in `pcs resource config` command
([rhbz#2151164], [ghissue#604])
+- Allow time values in stonith-watchdog-time property ([rhbz#2158790])
### Changed
- Resource/stonith agent self-validation of instance attributes is now
@@ -22,6 +23,7 @@
[rhbz#2151164]: https://bugzilla.redhat.com/show_bug.cgi?id=2151164
[rhbz#2151524]: https://bugzilla.redhat.com/show_bug.cgi?id=2151524
[rhbz#2159454]: https://bugzilla.redhat.com/show_bug.cgi?id=2159454
+[rhbz#2158790]: https://bugzilla.redhat.com/show_bug.cgi?id=2158790
## [0.11.4] - 2022-11-21
diff --git a/pcs/lib/cluster_property.py b/pcs/lib/cluster_property.py
index 3bbc093d..d3c8a896 100644
--- a/pcs/lib/cluster_property.py
+++ b/pcs/lib/cluster_property.py
@@ -7,6 +7,7 @@ from lxml.etree import _Element
from pcs.common import reports
from pcs.common.services.interfaces import ServiceManagerInterface
+from pcs.common.tools import timeout_to_seconds
from pcs.common.types import StringSequence
from pcs.lib import (
sbd,
@@ -37,8 +38,21 @@ def _validate_stonith_watchdog_timeout_property(
force: bool = False,
) -> reports.ReportItemList:
report_list: reports.ReportItemList = []
+ original_value = value
+ # if value is not empty, try to convert time interval string
+ if value:
+ seconds = timeout_to_seconds(value)
+ if seconds is None:
+ # returns empty list because this should be reported by
+ # ValueTimeInterval validator
+ return report_list
+ value = str(seconds)
if sbd.is_sbd_enabled(service_manager):
- report_list.extend(sbd.validate_stonith_watchdog_timeout(value, force))
+ report_list.extend(
+ sbd.validate_stonith_watchdog_timeout(
+ validate.ValuePair(original_value, value), force
+ )
+ )
else:
if value not in ["", "0"]:
report_list.append(
@@ -123,9 +137,6 @@ def validate_set_cluster_properties(
# unknow properties are reported by NamesIn validator
continue
property_metadata = possible_properties_dict[property_name]
- if property_metadata.name == "stonith-watchdog-timeout":
- # needs extra validation
- continue
if property_metadata.type == "boolean":
validators.append(
validate.ValuePcmkBoolean(
@@ -153,9 +164,13 @@ def validate_set_cluster_properties(
)
)
elif property_metadata.type == "time":
+ # make stonith-watchdog-timeout value not forcable
validators.append(
validate.ValueTimeInterval(
- property_metadata.name, severity=severity
+ property_metadata.name,
+ severity=severity
+ if property_metadata.name != "stonith-watchdog-timeout"
+ else reports.ReportItemSeverity.error(),
)
)
report_list.extend(
diff --git a/pcs/lib/sbd.py b/pcs/lib/sbd.py
index 1e3cfb37..38cd8767 100644
--- a/pcs/lib/sbd.py
+++ b/pcs/lib/sbd.py
@@ -1,6 +1,9 @@
import re
from os import path
-from typing import Optional
+from typing import (
+ Optional,
+ Union,
+)
from pcs import settings
from pcs.common import reports
@@ -392,7 +395,10 @@ def _get_local_sbd_watchdog_timeout() -> int:
def validate_stonith_watchdog_timeout(
- stonith_watchdog_timeout: str, force: bool = False
+ stonith_watchdog_timeout: Union[
+ validate.TypeOptionValue, validate.ValuePair
+ ],
+ force: bool = False,
) -> reports.ReportItemList:
"""
Check sbd status and config when user is setting stonith-watchdog-timeout
@@ -401,6 +407,7 @@ def validate_stonith_watchdog_timeout(
stonith_watchdog_timeout -- value to be validated
"""
+ stonith_watchdog_timeout = validate.ValuePair.get(stonith_watchdog_timeout)
severity = reports.get_severity(reports.codes.FORCE, force)
if _is_device_set_local():
return (
@@ -412,11 +419,11 @@ def validate_stonith_watchdog_timeout(
),
)
]
- if stonith_watchdog_timeout not in ["", "0"]
+ if stonith_watchdog_timeout.normalized not in ["", "0"]
else []
)
- if stonith_watchdog_timeout in ["", "0"]:
+ if stonith_watchdog_timeout.normalized in ["", "0"]:
return [
reports.ReportItem(
severity,
diff --git a/pcs_test/tier0/lib/commands/test_cluster_property.py b/pcs_test/tier0/lib/commands/test_cluster_property.py
index 94c0938a..781222ab 100644
--- a/pcs_test/tier0/lib/commands/test_cluster_property.py
+++ b/pcs_test/tier0/lib/commands/test_cluster_property.py
@@ -120,6 +120,34 @@ class StonithWatchdogTimeoutMixin(LoadMetadataMixin):
)
self.env_assist.assert_reports([])
+ def _set_invalid_value(self, forced=False):
+ self.config.remove("services.is_enabled")
+ self.env_assist.assert_raise_library_error(
+ lambda: cluster_property.set_properties(
+ self.env_assist.get_env(),
+ {"stonith-watchdog-timeout": "15x"},
+ [] if not forced else [reports.codes.FORCE],
+ )
+ )
+ self.env_assist.assert_reports(
+ [
+ fixture.error(
+ reports.codes.INVALID_OPTION_VALUE,
+ option_name="stonith-watchdog-timeout",
+ option_value="15x",
+ allowed_values="time interval (e.g. 1, 2s, 3m, 4h, ...)",
+ cannot_be_empty=False,
+ forbidden_characters=None,
+ ),
+ ]
+ )
+
+ def test_set_invalid_value(self):
+ self._set_invalid_value(forced=False)
+
+ def test_set_invalid_value_forced(self):
+ self._set_invalid_value(forced=True)
+
class TestSetStonithWatchdogTimeoutSBDIsDisabled(
StonithWatchdogTimeoutMixin, TestCase
@@ -132,6 +160,9 @@ class TestSetStonithWatchdogTimeoutSBDIsDisabled(
def test_set_zero(self):
self._set_success({"stonith-watchdog-timeout": "0"})
+ def test_set_zero_time_suffix(self):
+ self._set_success({"stonith-watchdog-timeout": "0s"})
+
def test_set_not_zero_or_empty(self):
self.env_assist.assert_raise_library_error(
lambda: cluster_property.set_properties(
@@ -231,12 +262,12 @@ class TestSetStonithWatchdogTimeoutSBDIsEnabledWatchdogOnly(
def test_set_zero_forced(self):
self.config.env.push_cib(
crm_config=fixture_crm_config_properties(
- [("cib-bootstrap-options", {"stonith-watchdog-timeout": "0"})]
+ [("cib-bootstrap-options", {"stonith-watchdog-timeout": "0s"})]
)
)
cluster_property.set_properties(
self.env_assist.get_env(),
- {"stonith-watchdog-timeout": "0"},
+ {"stonith-watchdog-timeout": "0s"},
[reports.codes.FORCE],
)
self.env_assist.assert_reports(
@@ -271,7 +302,7 @@ class TestSetStonithWatchdogTimeoutSBDIsEnabledWatchdogOnly(
self.env_assist.assert_raise_library_error(
lambda: cluster_property.set_properties(
self.env_assist.get_env(),
- {"stonith-watchdog-timeout": "9"},
+ {"stonith-watchdog-timeout": "9s"},
[],
)
)
@@ -281,7 +312,7 @@ class TestSetStonithWatchdogTimeoutSBDIsEnabledWatchdogOnly(
reports.codes.STONITH_WATCHDOG_TIMEOUT_TOO_SMALL,
force_code=reports.codes.FORCE,
cluster_sbd_watchdog_timeout=10,
- entered_watchdog_timeout="9",
+ entered_watchdog_timeout="9s",
)
]
)
@@ -289,12 +320,12 @@ class TestSetStonithWatchdogTimeoutSBDIsEnabledWatchdogOnly(
def test_too_small_forced(self):
self.config.env.push_cib(
crm_config=fixture_crm_config_properties(
- [("cib-bootstrap-options", {"stonith-watchdog-timeout": "9"})]
+ [("cib-bootstrap-options", {"stonith-watchdog-timeout": "9s"})]
)
)
cluster_property.set_properties(
self.env_assist.get_env(),
- {"stonith-watchdog-timeout": "9"},
+ {"stonith-watchdog-timeout": "9s"},
[reports.codes.FORCE],
)
self.env_assist.assert_reports(
@@ -302,13 +333,13 @@ class TestSetStonithWatchdogTimeoutSBDIsEnabledWatchdogOnly(
fixture.warn(
reports.codes.STONITH_WATCHDOG_TIMEOUT_TOO_SMALL,
cluster_sbd_watchdog_timeout=10,
- entered_watchdog_timeout="9",
+ entered_watchdog_timeout="9s",
)
]
)
def test_more_than_timeout(self):
- self._set_success({"stonith-watchdog-timeout": "11"})
+ self._set_success({"stonith-watchdog-timeout": "11s"})
@mock.patch("pcs.lib.sbd.get_local_sbd_device_list", lambda: ["dev1", "dev2"])
@@ -323,6 +354,9 @@ class TestSetStonithWatchdogTimeoutSBDIsEnabledSharedDevices(
def test_set_to_zero(self):
self._set_success({"stonith-watchdog-timeout": "0"})
+ def test_set_to_zero_time_suffix(self):
+ self._set_success({"stonith-watchdog-timeout": "0min"})
+
def test_set_not_zero_or_empty(self):
self.env_assist.assert_raise_library_error(
lambda: cluster_property.set_properties(
diff --git a/pcs_test/tier0/lib/test_cluster_property.py b/pcs_test/tier0/lib/test_cluster_property.py
index 2feb728d..8d6f90b1 100644
--- a/pcs_test/tier0/lib/test_cluster_property.py
+++ b/pcs_test/tier0/lib/test_cluster_property.py
@@ -83,6 +83,7 @@ FIXTURE_VALID_OPTIONS_DICT = {
"integer_param": "10",
"percentage_param": "20%",
"select_param": "s3",
+ "stonith-watchdog-timeout": "0",
"time_param": "5min",
}
@@ -96,6 +97,8 @@ FIXTURE_INVALID_OPTIONS_DICT = {
"have-watchdog": "100",
}
+STONITH_WATCHDOG_TIMEOUT_UNSET_VALUES = ["", "0", "0s"]
+
def _fixture_parameter(name, param_type, default, enum_values):
return ResourceAgentParameter(
@@ -239,6 +242,7 @@ class TestValidateSetClusterProperties(TestCase):
sbd_enabled=False,
sbd_devices=False,
force=False,
+ valid_value=True,
):
self.mock_is_sbd_enabled.return_value = sbd_enabled
self.mock_sbd_devices.return_value = ["devices"] if sbd_devices else []
@@ -254,9 +258,13 @@ class TestValidateSetClusterProperties(TestCase):
),
expected_report_list,
)
- if "stonith-watchdog-timeout" in new_properties and (
- new_properties["stonith-watchdog-timeout"]
- or "stonith-watchdog-timeout" in configured_properties
+ if (
+ "stonith-watchdog-timeout" in new_properties
+ and (
+ new_properties["stonith-watchdog-timeout"]
+ or "stonith-watchdog-timeout" in configured_properties
+ )
+ and valid_value
):
self.mock_is_sbd_enabled.assert_called_once_with(
self.mock_service_manager
@@ -266,7 +274,10 @@ class TestValidateSetClusterProperties(TestCase):
if sbd_devices:
self.mock_sbd_timeout.assert_not_called()
else:
- if new_properties["stonith-watchdog-timeout"] in ["", "0"]:
+ if (
+ new_properties["stonith-watchdog-timeout"]
+ in STONITH_WATCHDOG_TIMEOUT_UNSET_VALUES
+ ):
self.mock_sbd_timeout.assert_not_called()
else:
self.mock_sbd_timeout.assert_called_once_with()
@@ -280,6 +291,8 @@ class TestValidateSetClusterProperties(TestCase):
self.mock_sbd_timeout.assert_not_called()
self.mock_is_sbd_enabled.reset_mock()
+ self.mock_sbd_devices.reset_mock()
+ self.mock_sbd_timeout.reset_mock()
def test_no_properties_to_set_or_unset(self):
self.assert_validate_set(
@@ -328,7 +341,7 @@ class TestValidateSetClusterProperties(TestCase):
)
def test_unset_stonith_watchdog_timeout_sbd_disabled(self):
- for value in ["0", ""]:
+ for value in STONITH_WATCHDOG_TIMEOUT_UNSET_VALUES:
with self.subTest(value=value):
self.assert_validate_set(
["stonith-watchdog-timeout"],
@@ -349,22 +362,27 @@ class TestValidateSetClusterProperties(TestCase):
)
def test_set_ok_stonith_watchdog_timeout_sbd_enabled_without_devices(self):
- self.assert_validate_set(
- [], {"stonith-watchdog-timeout": "15"}, [], sbd_enabled=True
- )
+ for value in ["15", "15s"]:
+ with self.subTest(value=value):
+ self.assert_validate_set(
+ [],
+ {"stonith-watchdog-timeout": value},
+ [],
+ sbd_enabled=True,
+ )
def test_set_small_stonith_watchdog_timeout_sbd_enabled_without_devices(
self,
):
self.assert_validate_set(
[],
- {"stonith-watchdog-timeout": "9"},
+ {"stonith-watchdog-timeout": "9s"},
[
fixture.error(
reports.codes.STONITH_WATCHDOG_TIMEOUT_TOO_SMALL,
force_code=reports.codes.FORCE,
cluster_sbd_watchdog_timeout=10,
- entered_watchdog_timeout="9",
+ entered_watchdog_timeout="9s",
)
],
sbd_enabled=True,
@@ -387,28 +405,54 @@ class TestValidateSetClusterProperties(TestCase):
force=True,
)
- def test_set_not_a_number_stonith_watchdog_timeout_sbd_enabled_without_devices(
+ def _set_invalid_value_stonith_watchdog_timeout(
+ self, sbd_enabled=False, sbd_devices=False
+ ):
+ for value in ["invalid", "10x"]:
+ with self.subTest(value=value):
+ self.assert_validate_set(
+ [],
+ {"stonith-watchdog-timeout": value},
+ [
+ fixture.error(
+ reports.codes.INVALID_OPTION_VALUE,
+ option_name="stonith-watchdog-timeout",
+ option_value=value,
+ allowed_values="time interval (e.g. 1, 2s, 3m, 4h, ...)",
+ cannot_be_empty=False,
+ forbidden_characters=None,
+ )
+ ],
+ sbd_enabled=sbd_enabled,
+ sbd_devices=sbd_devices,
+ valid_value=False,
+ )
+
+ def test_set_invalid_value_stonith_watchdog_timeout_sbd_enabled_without_devices(
self,
):
+ self._set_invalid_value_stonith_watchdog_timeout(
+ sbd_enabled=True, sbd_devices=False
+ )
- self.assert_validate_set(
- [],
- {"stonith-watchdog-timeout": "invalid"},
- [
- fixture.error(
- reports.codes.STONITH_WATCHDOG_TIMEOUT_TOO_SMALL,
- force_code=reports.codes.FORCE,
- cluster_sbd_watchdog_timeout=10,
- entered_watchdog_timeout="invalid",
- )
- ],
- sbd_enabled=True,
+ def test_set_invalid_value_stonith_watchdog_timeout_sbd_enabled_with_devices(
+ self,
+ ):
+ self._set_invalid_value_stonith_watchdog_timeout(
+ sbd_enabled=True, sbd_devices=True
+ )
+
+ def test_set_invalid_value_stonith_watchdog_timeout_sbd_disabled(
+ self,
+ ):
+ self._set_invalid_value_stonith_watchdog_timeout(
+ sbd_enabled=False, sbd_devices=False
)
def test_unset_stonith_watchdog_timeout_sbd_enabled_without_devices(
self,
):
- for value in ["0", ""]:
+ for value in STONITH_WATCHDOG_TIMEOUT_UNSET_VALUES:
with self.subTest(value=value):
self.assert_validate_set(
["stonith-watchdog-timeout"],
@@ -426,7 +470,7 @@ class TestValidateSetClusterProperties(TestCase):
def test_unset_stonith_watchdog_timeout_sbd_enabled_without_devices_forced(
self,
):
- for value in ["0", ""]:
+ for value in STONITH_WATCHDOG_TIMEOUT_UNSET_VALUES:
with self.subTest(value=value):
self.assert_validate_set(
["stonith-watchdog-timeout"],
@@ -459,7 +503,7 @@ class TestValidateSetClusterProperties(TestCase):
def test_set_stonith_watchdog_timeout_sbd_enabled_with_devices_forced(self):
self.assert_validate_set(
[],
- {"stonith-watchdog-timeout": 15},
+ {"stonith-watchdog-timeout": "15s"},
[
fixture.warn(
reports.codes.STONITH_WATCHDOG_TIMEOUT_CANNOT_BE_SET,
@@ -472,7 +516,7 @@ class TestValidateSetClusterProperties(TestCase):
)
def test_unset_stonith_watchdog_timeout_sbd_enabled_with_devices(self):
- for value in ["0", ""]:
+ for value in STONITH_WATCHDOG_TIMEOUT_UNSET_VALUES:
with self.subTest(value=value):
self.assert_validate_set(
["stonith-watchdog-timeout"],
diff --git a/pcs_test/tier1/test_cluster_property.py b/pcs_test/tier1/test_cluster_property.py
index 39d70b9d..cb2d8f5c 100644
--- a/pcs_test/tier1/test_cluster_property.py
+++ b/pcs_test/tier1/test_cluster_property.py
@@ -169,7 +169,7 @@ class TestPropertySet(PropertyMixin, TestCase):
def test_set_stonith_watchdog_timeout(self):
self.assert_pcs_fail(
- "property set stonith-watchdog-timeout=5".split(),
+ "property set stonith-watchdog-timeout=5s".split(),
stderr_full=(
"Error: stonith-watchdog-timeout can only be unset or set to 0 "
"while SBD is disabled\n"
@@ -179,6 +179,18 @@ class TestPropertySet(PropertyMixin, TestCase):
)
self.assert_resources_xml_in_cib(UNCHANGED_CRM_CONFIG)
+ def test_set_stonith_watchdog_timeout_invalid_value(self):
+ self.assert_pcs_fail(
+ "property set stonith-watchdog-timeout=5x".split(),
+ stderr_full=(
+ "Error: '5x' is not a valid stonith-watchdog-timeout value, use"
+ " time interval (e.g. 1, 2s, 3m, 4h, ...)\n"
+ "Error: Errors have occurred, therefore pcs is unable to "
+ "continue\n"
+ ),
+ )
+ self.assert_resources_xml_in_cib(UNCHANGED_CRM_CONFIG)
+
class TestPropertyUnset(PropertyMixin, TestCase):
def test_success(self):
--
2.39.0
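
The tests above only reference the new STONITH_WATCHDOG_TIMEOUT_UNSET_VALUES constant and the "time interval (e.g. 1, 2s, 3m, 4h, ...)" wording. The following is a minimal standalone sketch (not pcs code) of the value handling they exercise, assuming a plain-number-plus-optional-suffix syntax; the suffix set is an assumption for illustration, and only "" and "0" are known members of the unset-values constant (they were the hard-coded values the tests replaced).

# Minimal sketch only -- not pcs code; suffix set is assumed for illustration.
import re

_TIME_INTERVAL_RE = re.compile(r"^\d+(ms|msec|us|usec|s|sec|m|min|h|hr)?$")

def is_valid_stonith_watchdog_timeout(value: str) -> bool:
    """True for values like '1', '2s', '3m', '4h'; False for e.g. '5x'."""
    return bool(_TIME_INTERVAL_RE.match(value))

def is_unset_value(value: str) -> bool:
    # "" and "0" are the values the replaced tests used; the real
    # STONITH_WATCHDOG_TIMEOUT_UNSET_VALUES constant may contain more forms.
    return value in ("", "0")

assert is_valid_stonith_watchdog_timeout("15s")
assert is_valid_stonith_watchdog_timeout("5")
assert not is_valid_stonith_watchdog_timeout("5x")  # rejected in the tier1 test above
assert is_unset_value("0") and is_unset_value("")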

File diff suppressed because it is too large.


@ -1,311 +0,0 @@
From 166fd04bb5505f29463088080044689f15635018 Mon Sep 17 00:00:00 2001
From: Ondrej Mular <omular@redhat.com>
Date: Tue, 31 Jan 2023 17:44:16 +0100
Subject: [PATCH] fix update of stonith-watchdog-timeout when cluster is not
running
---
pcs/lib/communication/sbd.py | 4 +-
.../lib/commands/sbd/test_disable_sbd.py | 10 ++--
.../tier0/lib/commands/sbd/test_enable_sbd.py | 49 ++++++++++---------
pcsd/pcs.rb | 17 +++++--
4 files changed, 48 insertions(+), 32 deletions(-)
diff --git a/pcs/lib/communication/sbd.py b/pcs/lib/communication/sbd.py
index f31bf16f..83d912b2 100644
--- a/pcs/lib/communication/sbd.py
+++ b/pcs/lib/communication/sbd.py
@@ -98,8 +98,8 @@ class StonithWatchdogTimeoutAction(
)
if report_item is None:
self._on_success()
- return []
- self._report(report_item)
+ else:
+ self._report(report_item)
return self._get_next_list()
diff --git a/pcs_test/tier0/lib/commands/sbd/test_disable_sbd.py b/pcs_test/tier0/lib/commands/sbd/test_disable_sbd.py
index 13135fb2..f8f165bf 100644
--- a/pcs_test/tier0/lib/commands/sbd/test_disable_sbd.py
+++ b/pcs_test/tier0/lib/commands/sbd/test_disable_sbd.py
@@ -19,7 +19,7 @@ class DisableSbd(TestCase):
self.config.corosync_conf.load(filename=self.corosync_conf_name)
self.config.http.host.check_auth(node_labels=self.node_list)
self.config.http.pcmk.set_stonith_watchdog_timeout_to_zero(
- node_labels=self.node_list[:1]
+ communication_list=[[dict(label=node)] for node in self.node_list],
)
self.config.http.sbd.disable_sbd(node_labels=self.node_list)
disable_sbd(self.env_assist.get_env())
@@ -56,7 +56,7 @@ class DisableSbd(TestCase):
self.config.corosync_conf.load(filename=self.corosync_conf_name)
self.config.http.host.check_auth(node_labels=self.node_list)
self.config.http.pcmk.set_stonith_watchdog_timeout_to_zero(
- node_labels=self.node_list[:1]
+ communication_list=[[dict(label=node)] for node in self.node_list],
)
self.config.http.sbd.disable_sbd(node_labels=self.node_list)
@@ -158,7 +158,9 @@ class DisableSbd(TestCase):
]
)
self.config.http.pcmk.set_stonith_watchdog_timeout_to_zero(
- node_labels=online_nodes_list[:1]
+ communication_list=[
+ [dict(label=node)] for node in self.node_list[1:]
+ ],
)
self.config.http.sbd.disable_sbd(node_labels=online_nodes_list)
disable_sbd(self.env_assist.get_env(), ignore_offline_nodes=True)
@@ -291,7 +293,7 @@ class DisableSbd(TestCase):
self.config.corosync_conf.load(filename=self.corosync_conf_name)
self.config.http.host.check_auth(node_labels=self.node_list)
self.config.http.pcmk.set_stonith_watchdog_timeout_to_zero(
- node_labels=self.node_list[:1]
+ communication_list=[[dict(label=node)] for node in self.node_list],
)
self.config.http.sbd.disable_sbd(
communication_list=[
diff --git a/pcs_test/tier0/lib/commands/sbd/test_enable_sbd.py b/pcs_test/tier0/lib/commands/sbd/test_enable_sbd.py
index be479a34..77863d5e 100644
--- a/pcs_test/tier0/lib/commands/sbd/test_enable_sbd.py
+++ b/pcs_test/tier0/lib/commands/sbd/test_enable_sbd.py
@@ -130,7 +130,7 @@ class OddNumOfNodesSuccess(TestCase):
node_labels=self.node_list,
)
self.config.http.pcmk.remove_stonith_watchdog_timeout(
- node_labels=[self.node_list[0]]
+ communication_list=[[dict(label=node)] for node in self.node_list],
)
self.config.http.sbd.enable_sbd(node_labels=self.node_list)
enable_sbd(
@@ -164,7 +164,7 @@ class OddNumOfNodesSuccess(TestCase):
node_labels=self.node_list,
)
self.config.http.pcmk.remove_stonith_watchdog_timeout(
- node_labels=[self.node_list[0]]
+ communication_list=[[dict(label=node)] for node in self.node_list],
)
self.config.http.sbd.enable_sbd(node_labels=self.node_list)
enable_sbd(
@@ -218,7 +218,7 @@ class OddNumOfNodesDefaultsSuccess(TestCase):
node_labels=self.node_list,
)
self.config.http.pcmk.remove_stonith_watchdog_timeout(
- node_labels=[self.node_list[0]]
+ communication_list=[[dict(label=node)] for node in self.node_list],
)
self.config.http.sbd.enable_sbd(node_labels=self.node_list)
enable_sbd(
@@ -248,7 +248,7 @@ class OddNumOfNodesDefaultsSuccess(TestCase):
node_labels=self.node_list,
)
self.config.http.pcmk.remove_stonith_watchdog_timeout(
- node_labels=[self.node_list[0]]
+ communication_list=[[dict(label=node)] for node in self.node_list],
)
self.config.http.sbd.enable_sbd(node_labels=self.node_list)
enable_sbd(
@@ -351,7 +351,7 @@ class WatchdogValidations(TestCase):
node_labels=self.node_list,
)
self.config.http.pcmk.remove_stonith_watchdog_timeout(
- node_labels=[self.node_list[0]]
+ communication_list=[[dict(label=node)] for node in self.node_list],
)
self.config.http.sbd.enable_sbd(node_labels=self.node_list)
enable_sbd(
@@ -407,7 +407,7 @@ class EvenNumOfNodes(TestCase):
node_labels=self.node_list,
)
self.config.http.pcmk.remove_stonith_watchdog_timeout(
- node_labels=[self.node_list[0]]
+ communication_list=[[dict(label=node)] for node in self.node_list],
)
self.config.http.sbd.enable_sbd(node_labels=self.node_list)
enable_sbd(
@@ -443,7 +443,7 @@ class EvenNumOfNodes(TestCase):
node_labels=self.node_list,
)
self.config.http.pcmk.remove_stonith_watchdog_timeout(
- node_labels=[self.node_list[0]]
+ communication_list=[[dict(label=node)] for node in self.node_list],
)
self.config.http.sbd.enable_sbd(node_labels=self.node_list)
enable_sbd(
@@ -480,7 +480,7 @@ class EvenNumOfNodes(TestCase):
node_labels=self.node_list,
)
self.config.http.pcmk.remove_stonith_watchdog_timeout(
- node_labels=[self.node_list[0]]
+ communication_list=[[dict(label=node)] for node in self.node_list],
)
self.config.http.sbd.enable_sbd(node_labels=self.node_list)
enable_sbd(
@@ -513,7 +513,7 @@ class EvenNumOfNodes(TestCase):
node_labels=self.node_list,
)
self.config.http.pcmk.remove_stonith_watchdog_timeout(
- node_labels=[self.node_list[0]]
+ communication_list=[[dict(label=node)] for node in self.node_list],
)
self.config.http.sbd.enable_sbd(node_labels=self.node_list)
enable_sbd(
@@ -604,7 +604,9 @@ class OfflineNodes(TestCase):
node_labels=self.online_node_list,
)
self.config.http.pcmk.remove_stonith_watchdog_timeout(
- node_labels=[self.online_node_list[0]]
+ communication_list=[
+ [dict(label=node)] for node in self.online_node_list
+ ],
)
self.config.http.sbd.enable_sbd(node_labels=self.online_node_list)
enable_sbd(
@@ -644,7 +646,9 @@ class OfflineNodes(TestCase):
node_labels=self.online_node_list,
)
self.config.http.pcmk.remove_stonith_watchdog_timeout(
- node_labels=[self.online_node_list[0]]
+ communication_list=[
+ [dict(label=node)] for node in self.online_node_list
+ ],
)
self.config.http.sbd.enable_sbd(node_labels=self.online_node_list)
enable_sbd(
@@ -1228,7 +1232,7 @@ class FailureHandling(TestCase):
node_labels=self.node_list,
)
self.config.http.pcmk.remove_stonith_watchdog_timeout(
- node_labels=[self.node_list[0]]
+ communication_list=[[dict(label=node)] for node in self.node_list],
)
def _remove_calls(self, count):
@@ -1304,7 +1308,8 @@ class FailureHandling(TestCase):
)
def test_removing_stonith_wd_timeout_failure(self):
- self._remove_calls(2)
+ self._remove_calls(len(self.node_list) + 1)
+
self.config.http.pcmk.remove_stonith_watchdog_timeout(
communication_list=[
self.communication_list_failure[:1],
@@ -1333,7 +1338,7 @@ class FailureHandling(TestCase):
)
def test_removing_stonith_wd_timeout_not_connected(self):
- self._remove_calls(2)
+ self._remove_calls(len(self.node_list) + 1)
self.config.http.pcmk.remove_stonith_watchdog_timeout(
communication_list=[
self.communication_list_not_connected[:1],
@@ -1362,7 +1367,7 @@ class FailureHandling(TestCase):
)
def test_removing_stonith_wd_timeout_complete_failure(self):
- self._remove_calls(2)
+ self._remove_calls(len(self.node_list) + 1)
self.config.http.pcmk.remove_stonith_watchdog_timeout(
communication_list=[
self.communication_list_not_connected[:1],
@@ -1408,7 +1413,7 @@ class FailureHandling(TestCase):
)
def test_set_sbd_config_failure(self):
- self._remove_calls(4)
+ self._remove_calls(len(self.node_list) + 1 + 2)
self.config.http.sbd.set_sbd_config(
communication_list=[
dict(
@@ -1455,7 +1460,7 @@ class FailureHandling(TestCase):
)
def test_set_corosync_conf_failed(self):
- self._remove_calls(5)
+ self._remove_calls(len(self.node_list) + 1 + 3)
self.config.env.push_corosync_conf(
corosync_conf_text=_get_corosync_conf_text_with_atb(
self.corosync_conf_name
@@ -1479,7 +1484,7 @@ class FailureHandling(TestCase):
)
def test_check_sbd_invalid_data_format(self):
- self._remove_calls(7)
+ self._remove_calls(len(self.node_list) + 1 + 5)
self.config.http.sbd.check_sbd(
communication_list=[
dict(
@@ -1518,7 +1523,7 @@ class FailureHandling(TestCase):
)
def test_check_sbd_failure(self):
- self._remove_calls(7)
+ self._remove_calls(len(self.node_list) + 1 + 5)
self.config.http.sbd.check_sbd(
communication_list=[
dict(
@@ -1560,7 +1565,7 @@ class FailureHandling(TestCase):
)
def test_check_sbd_not_connected(self):
- self._remove_calls(7)
+ self._remove_calls(len(self.node_list) + 1 + 5)
self.config.http.sbd.check_sbd(
communication_list=[
dict(
@@ -1603,7 +1608,7 @@ class FailureHandling(TestCase):
)
def test_get_online_targets_failed(self):
- self._remove_calls(9)
+ self._remove_calls(len(self.node_list) + 1 + 7)
self.config.http.host.check_auth(
communication_list=self.communication_list_failure
)
@@ -1628,7 +1633,7 @@ class FailureHandling(TestCase):
)
def test_get_online_targets_not_connected(self):
- self._remove_calls(9)
+ self._remove_calls(len(self.node_list) + 1 + 7)
self.config.http.host.check_auth(
communication_list=self.communication_list_not_connected
)
diff --git a/pcsd/pcs.rb b/pcsd/pcs.rb
index 6d8669d1..d79c863b 100644
--- a/pcsd/pcs.rb
+++ b/pcsd/pcs.rb
@@ -1642,13 +1642,22 @@ end
def set_cluster_prop_force(auth_user, prop, val)
cmd = ['property', 'set', "#{prop}=#{val}"]
flags = ['--force']
+ sig_file = "#{CIB_PATH}.sig"
+ retcode = 0
+
if pacemaker_running?
- user = auth_user
+ _, _, retcode = run_cmd(auth_user, PCS, *flags, "--", *cmd)
else
- user = PCSAuth.getSuperuserAuth()
- flags += ['-f', CIB_PATH]
+ if File.exist?(CIB_PATH)
+ flags += ['-f', CIB_PATH]
+ _, _, retcode = run_cmd(PCSAuth.getSuperuserAuth(), PCS, *flags, "--", *cmd)
+ begin
+ File.delete(sig_file)
+ rescue => e
+ $logger.debug("Cannot delete file '#{sig_file}': #{e.message}")
+ end
+ end
end
- _, _, retcode = run_cmd(user, PCS, *flags, "--", *cmd)
return (retcode == 0)
end
--
2.39.0


@ -1,67 +0,0 @@
From c51faf31a1abc08e26e5ccb4492c1a46f101a22a Mon Sep 17 00:00:00 2001
From: Ivan Devat <idevat@redhat.com>
Date: Tue, 13 Dec 2022 12:58:00 +0100
Subject: [PATCH] fix agents filter in resource/fence device create
---
.../cluster/fenceDevices/task/create/NameTypeTypeSelect.tsx | 4 ++--
.../view/cluster/resources/task/create/NameTypeTypeSelect.tsx | 4 ++--
src/app/view/share/form/Select.tsx | 2 +-
3 files changed, 5 insertions(+), 5 deletions(-)
diff --git a/src/app/view/cluster/fenceDevices/task/create/NameTypeTypeSelect.tsx b/src/app/view/cluster/fenceDevices/task/create/NameTypeTypeSelect.tsx
index 80327801..8d623e2b 100644
--- a/src/app/view/cluster/fenceDevices/task/create/NameTypeTypeSelect.tsx
+++ b/src/app/view/cluster/fenceDevices/task/create/NameTypeTypeSelect.tsx
@@ -38,13 +38,13 @@ export const NameTypeTypeSelect = ({
return (
<Select
variant="typeahead"
- typeAheadAriaLabel="Select a fence device"
+ typeAheadAriaLabel="Select a fence device agent"
+ placeholderText="Select a fence device agent"
onSelect={onSelect}
onClear={onClear}
onFilter={onFilter}
selections={agentName}
isGrouped
- hasInlineFilter
customBadgeText={agentName.length > 0 ? agentName : undefined}
optionsValues={filteredFenceAgentList}
data-test="fence-device-agent"
diff --git a/src/app/view/cluster/resources/task/create/NameTypeTypeSelect.tsx b/src/app/view/cluster/resources/task/create/NameTypeTypeSelect.tsx
index bd7807d8..b531e825 100644
--- a/src/app/view/cluster/resources/task/create/NameTypeTypeSelect.tsx
+++ b/src/app/view/cluster/resources/task/create/NameTypeTypeSelect.tsx
@@ -52,13 +52,13 @@ export const NameTypeTypeSelect = ({
return (
<Select
variant="typeahead"
- typeAheadAriaLabel="Select a state"
+ typeAheadAriaLabel="Select a resource agent"
+ placeholderText="Select a resource agent"
onSelect={onSelect}
onClear={onClear}
onFilter={onFilter}
selections={agentName}
isGrouped
- hasInlineFilter
customBadgeText={agentName.length > 0 ? agentName : undefined}
data-test="resource-agent"
>
diff --git a/src/app/view/share/form/Select.tsx b/src/app/view/share/form/Select.tsx
index d73f126c..e2b81ce2 100644
--- a/src/app/view/share/form/Select.tsx
+++ b/src/app/view/share/form/Select.tsx
@@ -31,7 +31,7 @@ export const Select = (
const filter = onFilter
? (_event: React.ChangeEvent<HTMLInputElement> | null, value: string) => {
onFilter(value);
- return null as unknown as React.ReactElement[];
+ return undefined;
}
: null;
--
2.39.0


@ -1,121 +0,0 @@
From d1a658601487175ec70054e56ade116f3dbcecf6 Mon Sep 17 00:00:00 2001
From: Miroslav Lisik <mlisik@redhat.com>
Date: Mon, 6 Mar 2023 15:42:35 +0100
Subject: [PATCH 1/2] fix `pcs config checkpoint diff` command
---
CHANGELOG.md | 26 --------------------------
pcs/cli/common/lib_wrapper.py | 15 +--------------
pcs/config.py | 3 +++
3 files changed, 4 insertions(+), 40 deletions(-)
diff --git a/CHANGELOG.md b/CHANGELOG.md
index 0945d727..7d3d606b 100644
--- a/CHANGELOG.md
+++ b/CHANGELOG.md
@@ -1,31 +1,5 @@
# Change Log
-## [Unreleased]
-
-### Added
-- Warning to `pcs resource|stonith update` commands about not using agent
- self-validation feature when the resource is already misconfigured
- ([rhbz#2151524])
-
-### Fixed
-- Graceful stopping pcsd service using `systemctl stop pcsd` command
-- Displaying bool and integer values in `pcs resource config` command
- ([rhbz#2151164], [ghissue#604])
-- Allow time values in stonith-watchdog-time property ([rhbz#2158790])
-
-### Changed
-- Resource/stonith agent self-validation of instance attributes is now
- disabled by default, as many agents do not work with it properly.
- Use flag '--agent-validation' to enable it in supported commands.
- ([rhbz#2159454])
-
-[ghissue#604]: https://github.com/ClusterLabs/pcs/issues/604
-[rhbz#2151164]: https://bugzilla.redhat.com/show_bug.cgi?id=2151164
-[rhbz#2151524]: https://bugzilla.redhat.com/show_bug.cgi?id=2151524
-[rhbz#2159454]: https://bugzilla.redhat.com/show_bug.cgi?id=2159454
-[rhbz#2158790]: https://bugzilla.redhat.com/show_bug.cgi?id=2158790
-
-
## [0.11.4] - 2022-11-21
### Security
diff --git a/pcs/cli/common/lib_wrapper.py b/pcs/cli/common/lib_wrapper.py
index 217bfe3e..e6411e3c 100644
--- a/pcs/cli/common/lib_wrapper.py
+++ b/pcs/cli/common/lib_wrapper.py
@@ -1,9 +1,5 @@
import logging
from collections import namedtuple
-from typing import (
- Any,
- Dict,
-)
from pcs import settings
from pcs.cli.common import middleware
@@ -36,9 +32,6 @@ from pcs.lib.commands.constraint import order as constraint_order
from pcs.lib.commands.constraint import ticket as constraint_ticket
from pcs.lib.env import LibraryEnvironment
-# Note: not properly typed
-_CACHE: Dict[Any, Any] = {}
-
def wrapper(dictionary):
return namedtuple("wrapper", dictionary.keys())(**dictionary)
@@ -106,12 +99,6 @@ def bind_all(env, run_with_middleware, dictionary):
)
-def get_module(env, middleware_factory, name):
- if name not in _CACHE:
- _CACHE[name] = load_module(env, middleware_factory, name)
- return _CACHE[name]
-
-
def load_module(env, middleware_factory, name):
# pylint: disable=too-many-return-statements, too-many-branches
if name == "acl":
@@ -544,4 +531,4 @@ class Library:
self.middleware_factory = middleware_factory
def __getattr__(self, name):
- return get_module(self.env, self.middleware_factory, name)
+ return load_module(self.env, self.middleware_factory, name)
diff --git a/pcs/config.py b/pcs/config.py
index e0d179f0..6da1151b 100644
--- a/pcs/config.py
+++ b/pcs/config.py
@@ -691,6 +691,7 @@ def _checkpoint_to_lines(lib, checkpoint_number):
orig_usefile = utils.usefile
orig_filename = utils.filename
orig_middleware = lib.middleware_factory
+ orig_env = lib.env
# configure old code to read the CIB from a file
utils.usefile = True
utils.filename = os.path.join(
@@ -700,6 +701,7 @@ def _checkpoint_to_lines(lib, checkpoint_number):
lib.middleware_factory = orig_middleware._replace(
cib=middleware.cib(utils.filename, utils.touch_cib_file)
)
+ lib.env = utils.get_cli_env()
# export the CIB to text
result = False, []
if os.path.isfile(utils.filename):
@@ -708,6 +710,7 @@ def _checkpoint_to_lines(lib, checkpoint_number):
utils.usefile = orig_usefile
utils.filename = orig_filename
lib.middleware_factory = orig_middleware
+ lib.env = orig_env
return result
--
2.39.2


@ -1,975 +0,0 @@
From 6841064bf1d06e16c9c5bf9a6ab42b3185d55afb Mon Sep 17 00:00:00 2001
From: Miroslav Lisik <mlisik@redhat.com>
Date: Mon, 20 Mar 2023 10:35:34 +0100
Subject: [PATCH 2/2] fix `pcs stonith update-scsi-devices` command
---
pcs/lib/cib/resource/stonith.py | 168 +++++-
.../test_stonith_update_scsi_devices.py | 571 ++++++++++++++----
2 files changed, 601 insertions(+), 138 deletions(-)
diff --git a/pcs/lib/cib/resource/stonith.py b/pcs/lib/cib/resource/stonith.py
index 1f5bddff..07cffba6 100644
--- a/pcs/lib/cib/resource/stonith.py
+++ b/pcs/lib/cib/resource/stonith.py
@@ -169,12 +169,64 @@ def get_node_key_map_for_mpath(
return node_key_map
-DIGEST_ATTRS = ["op-digest", "op-secure-digest", "op-restart-digest"]
-DIGEST_ATTR_TO_TYPE_MAP = {
+DIGEST_ATTR_TO_DIGEST_TYPE_MAP = {
"op-digest": "all",
"op-secure-digest": "nonprivate",
"op-restart-digest": "nonreloadable",
}
+TRANSIENT_DIGEST_ATTR_TO_DIGEST_TYPE_MAP = {
+ "#digests-all": "all",
+ "#digests-secure": "nonprivate",
+}
+DIGEST_ATTRS = frozenset(DIGEST_ATTR_TO_DIGEST_TYPE_MAP.keys())
+TRANSIENT_DIGEST_ATTRS = frozenset(
+ TRANSIENT_DIGEST_ATTR_TO_DIGEST_TYPE_MAP.keys()
+)
+
+
+def _get_digest(
+ attr: str,
+ attr_to_type_map: Dict[str, str],
+ calculated_digests: Dict[str, Optional[str]],
+) -> str:
+ """
+ Return digest of right type for the specified attribute. If missing, raise
+ an error.
+
+ attr -- name of digest attribute
+ atttr_to_type_map -- map for attribute name to digest type conversion
+ calculated_digests -- digests calculated by pacemaker
+ """
+ if attr not in attr_to_type_map:
+ raise AssertionError(
+ f"Key '{attr}' is missing in the attribute name to digest type map"
+ )
+ digest = calculated_digests.get(attr_to_type_map[attr])
+ if digest is None:
+ # this should not happen and when it does it is pacemaker fault
+ raise LibraryError(
+ ReportItem.error(
+ reports.messages.StonithRestartlessUpdateUnableToPerform(
+ f"necessary digest for '{attr}' attribute is missing"
+ )
+ )
+ )
+ return digest
+
+
+def _get_transient_instance_attributes(cib: _Element) -> List[_Element]:
+ """
+ Return list of instance_attributes elements which could contain digest
+ attributes.
+
+ cib -- CIB root element
+ """
+ return cast(
+ List[_Element],
+ cib.xpath(
+ "./status/node_state/transient_attributes/instance_attributes"
+ ),
+ )
def _get_lrm_rsc_op_elements(
@@ -278,21 +330,89 @@ def _update_digest_attrs_in_lrm_rsc_op(
)
)
for attr in common_digests_attrs:
- new_digest = calculated_digests[DIGEST_ATTR_TO_TYPE_MAP[attr]]
- if new_digest is None:
- # this should not happen and when it does it is pacemaker fault
+ # update digest in cib
+ lrm_rsc_op.attrib[attr] = _get_digest(
+ attr, DIGEST_ATTR_TO_DIGEST_TYPE_MAP, calculated_digests
+ )
+
+
+def _get_transient_digest_value(
+ old_value: str, stonith_id: str, stonith_type: str, digest: str
+) -> str:
+ """
+ Return transient digest value with replaced digest.
+
+ Value has comma separated format:
+ <stonith_id>:<stonith_type>:<digest>,...
+
+ and we need to replace only digest for our currently updated stonith device.
+
+ old_value -- value to be replaced
+ stonith_id -- id of stonith resource
+ stonith_type -- stonith resource type
+ digest -- digest for new value
+ """
+ new_comma_values_list = []
+ for comma_value in old_value.split(","):
+ if comma_value:
+ try:
+ _id, _type, _ = comma_value.split(":")
+ except ValueError as e:
+ raise LibraryError(
+ ReportItem.error(
+ reports.messages.StonithRestartlessUpdateUnableToPerform(
+ f"invalid digest attribute value: '{old_value}'"
+ )
+ )
+ ) from e
+ if _id == stonith_id and _type == stonith_type:
+ comma_value = ":".join([stonith_id, stonith_type, digest])
+ new_comma_values_list.append(comma_value)
+ return ",".join(new_comma_values_list)
+
+
+def _update_digest_attrs_in_transient_instance_attributes(
+ nvset_el: _Element,
+ stonith_id: str,
+ stonith_type: str,
+ calculated_digests: Dict[str, Optional[str]],
+) -> None:
+ """
+ Update digests attributes in transient instance attributes element.
+
+ nvset_el -- instance_attributes element containing nvpairs with digests
+ attributes
+ stonith_id -- id of stonith resource being updated
+ stonith_type -- type of stonith resource being updated
+ calculated_digests -- digests calculated by pacemaker
+ """
+ for attr in TRANSIENT_DIGEST_ATTRS:
+ nvpair_list = cast(
+ List[_Element],
+ nvset_el.xpath("./nvpair[@name=$name]", name=attr),
+ )
+ if not nvpair_list:
+ continue
+ if len(nvpair_list) > 1:
raise LibraryError(
ReportItem.error(
reports.messages.StonithRestartlessUpdateUnableToPerform(
- (
- f"necessary digest for '{attr}' attribute is "
- "missing"
- )
+ f"multiple digests attributes: '{attr}'"
)
)
)
- # update digest in cib
- lrm_rsc_op.attrib[attr] = new_digest
+ old_value = nvpair_list[0].attrib["value"]
+ if old_value:
+ nvpair_list[0].attrib["value"] = _get_transient_digest_value(
+ str(old_value),
+ stonith_id,
+ stonith_type,
+ _get_digest(
+ attr,
+ TRANSIENT_DIGEST_ATTR_TO_DIGEST_TYPE_MAP,
+ calculated_digests,
+ ),
+ )
def update_scsi_devices_without_restart(
@@ -311,6 +431,8 @@ def update_scsi_devices_without_restart(
id_provider -- elements' ids generator
device_list -- list of updated scsi devices
"""
+ # pylint: disable=too-many-locals
+ cib = get_root(resource_el)
resource_id = resource_el.get("id", "")
roles_with_nodes = get_resource_state(cluster_state, resource_id)
if "Started" not in roles_with_nodes:
@@ -341,17 +463,14 @@ def update_scsi_devices_without_restart(
)
lrm_rsc_op_start_list = _get_lrm_rsc_op_elements(
- get_root(resource_el), resource_id, node_name, "start"
+ cib, resource_id, node_name, "start"
+ )
+ new_instance_attrs_digests = get_resource_digests(
+ runner, resource_id, node_name, new_instance_attrs
)
if len(lrm_rsc_op_start_list) == 1:
_update_digest_attrs_in_lrm_rsc_op(
- lrm_rsc_op_start_list[0],
- get_resource_digests(
- runner,
- resource_id,
- node_name,
- new_instance_attrs,
- ),
+ lrm_rsc_op_start_list[0], new_instance_attrs_digests
)
else:
raise LibraryError(
@@ -364,7 +483,7 @@ def update_scsi_devices_without_restart(
monitor_attrs_list = _get_monitor_attrs(resource_el)
lrm_rsc_op_monitor_list = _get_lrm_rsc_op_elements(
- get_root(resource_el), resource_id, node_name, "monitor"
+ cib, resource_id, node_name, "monitor"
)
if len(lrm_rsc_op_monitor_list) != len(monitor_attrs_list):
raise LibraryError(
@@ -380,7 +499,7 @@ def update_scsi_devices_without_restart(
for monitor_attrs in monitor_attrs_list:
lrm_rsc_op_list = _get_lrm_rsc_op_elements(
- get_root(resource_el),
+ cib,
resource_id,
node_name,
"monitor",
@@ -409,3 +528,10 @@ def update_scsi_devices_without_restart(
)
)
)
+ for nvset_el in _get_transient_instance_attributes(cib):
+ _update_digest_attrs_in_transient_instance_attributes(
+ nvset_el,
+ resource_id,
+ resource_el.get("type", ""),
+ new_instance_attrs_digests,
+ )
diff --git a/pcs_test/tier0/lib/commands/test_stonith_update_scsi_devices.py b/pcs_test/tier0/lib/commands/test_stonith_update_scsi_devices.py
index 69ea097c..72c7dbcf 100644
--- a/pcs_test/tier0/lib/commands/test_stonith_update_scsi_devices.py
+++ b/pcs_test/tier0/lib/commands/test_stonith_update_scsi_devices.py
@@ -38,6 +38,7 @@ DEFAULT_DIGEST = _DIGEST + "0"
ALL_DIGEST = _DIGEST + "1"
NONPRIVATE_DIGEST = _DIGEST + "2"
NONRELOADABLE_DIGEST = _DIGEST + "3"
+DIGEST_ATTR_VALUE_GOOD_FORMAT = f"stonith_id:stonith_type:{DEFAULT_DIGEST},"
DEV_1 = "/dev/sda"
DEV_2 = "/dev/sdb"
DEV_3 = "/dev/sdc"
@@ -151,33 +152,58 @@ def _fixture_lrm_rsc_start_ops(resource_id, lrm_start_ops):
return _fixture_lrm_rsc_ops("start", resource_id, lrm_start_ops)
-def _fixture_status_lrm_ops_base(
- resource_id,
- resource_type,
- lrm_ops,
-):
+def _fixture_status_lrm_ops(resource_id, resource_type, lrm_ops):
return f"""
- <status>
- <node_state id="1" uname="node1">
- <lrm id="1">
- <lrm_resources>
- <lrm_resource id="{resource_id}" type="{resource_type}" class="stonith">
- {lrm_ops}
- </lrm_resource>
- </lrm_resources>
- </lrm>
- </node_state>
- </status>
+ <lrm id="1">
+ <lrm_resources>
+ <lrm_resource id="{resource_id}" type="{resource_type}" class="stonith">
+ {lrm_ops}
+ </lrm_resource>
+ </lrm_resources>
+ </lrm>
+ """
+
+
+def _fixture_digest_nvpair(node_id, digest_name, digest_value):
+ return (
+ f'<nvpair id="status-{node_id}-.{digest_name}" name="#{digest_name}" '
+ f'value="{digest_value}"/>'
+ )
+
+
+def _fixture_transient_attributes(node_id, digests_nvpairs):
+ return f"""
+ <transient_attributes id="{node_id}">
+ <instance_attributes id="status-{node_id}">
+ <nvpair id="status-{node_id}-.feature-set" name="#feature-set" value="3.16.2"/>
+ <nvpair id="status-{node_id}-.node-unfenced" name="#node-unfenced" value="1679319764"/>
+ {digests_nvpairs}
+ </instance_attributes>
+ </transient_attributes>
+ """
+
+
+def _fixture_node_state(node_id, lrm_ops=None, transient_attrs=None):
+ if transient_attrs is None:
+ transient_attrs = ""
+ if lrm_ops is None:
+ lrm_ops = ""
+ return f"""
+ <node_state id="{node_id}" uname="node{node_id}">
+ {lrm_ops}
+ {transient_attrs}
+ </node_state>
"""
-def _fixture_status_lrm_ops(
+def _fixture_status(
resource_id,
resource_type,
lrm_start_ops=DEFAULT_LRM_START_OPS,
lrm_monitor_ops=DEFAULT_LRM_MONITOR_OPS,
+ digests_attrs_list=None,
):
- return _fixture_status_lrm_ops_base(
+ lrm_ops = _fixture_status_lrm_ops(
resource_id,
resource_type,
"\n".join(
@@ -185,18 +211,52 @@ def _fixture_status_lrm_ops(
+ _fixture_lrm_rsc_monitor_ops(resource_id, lrm_monitor_ops)
),
)
+ node_states_list = []
+ if not digests_attrs_list:
+ node_states_list.append(
+ _fixture_node_state("1", lrm_ops, transient_attrs=None)
+ )
+ else:
+ for node_id, digests_attrs in enumerate(digests_attrs_list, start=1):
+ transient_attrs = _fixture_transient_attributes(
+ node_id,
+ "\n".join(
+ _fixture_digest_nvpair(node_id, name, value)
+ for name, value in digests_attrs
+ ),
+ )
+ node_state = _fixture_node_state(
+ node_id,
+ lrm_ops=lrm_ops if node_id == 1 else None,
+ transient_attrs=transient_attrs,
+ )
+ node_states_list.append(node_state)
+ node_states = "\n".join(node_states_list)
+ return f"""
+ <status>
+ {node_states}
+ </status>
+ """
+
+def fixture_digests_xml(resource_id, node_name, devices="", nonprivate=True):
+ nonprivate_xml = (
+ f"""
+ <digest type="nonprivate" hash="{NONPRIVATE_DIGEST}">
+ <parameters devices="{devices}"/>
+ </digest>
+ """
+ if nonprivate
+ else ""
+ )
-def fixture_digests_xml(resource_id, node_name, devices=""):
return f"""
<pacemaker-result api-version="2.9" request="crm_resource --digests --resource {resource_id} --node {node_name} --output-as xml devices={devices}">
<digests resource="{resource_id}" node="{node_name}" task="stop" interval="0ms">
<digest type="all" hash="{ALL_DIGEST}">
<parameters devices="{devices}" pcmk_host_check="static-list" pcmk_host_list="node1 node2 node3" pcmk_reboot_action="off"/>
</digest>
- <digest type="nonprivate" hash="{NONPRIVATE_DIGEST}">
- <parameters devices="{devices}"/>
- </digest>
+ {nonprivate_xml}
</digests>
<status code="0" message="OK"/>
</pacemaker-result>
@@ -334,6 +394,8 @@ class UpdateScsiDevicesMixin:
nodes_running_on=1,
start_digests=True,
monitor_digests=True,
+ digests_attrs_list=None,
+ crm_digests_xml=None,
):
# pylint: disable=too-many-arguments
# pylint: disable=too-many-locals
@@ -346,11 +408,12 @@ class UpdateScsiDevicesMixin:
resource_ops=resource_ops,
host_map=host_map,
),
- status=_fixture_status_lrm_ops(
+ status=_fixture_status(
self.stonith_id,
self.stonith_type,
lrm_start_ops=lrm_start_ops,
lrm_monitor_ops=lrm_monitor_ops,
+ digests_attrs_list=digests_attrs_list,
),
)
self.config.runner.pcmk.is_resource_digests_supported()
@@ -363,14 +426,17 @@ class UpdateScsiDevicesMixin:
nodes=FIXTURE_CRM_MON_NODES,
)
devices_opt = "devices={}".format(devices_value)
+
+ if crm_digests_xml is None:
+ crm_digests_xml = fixture_digests_xml(
+ self.stonith_id, SCSI_NODE, devices=devices_value
+ )
if start_digests:
self.config.runner.pcmk.resource_digests(
self.stonith_id,
SCSI_NODE,
name="start.op.digests",
- stdout=fixture_digests_xml(
- self.stonith_id, SCSI_NODE, devices=devices_value
- ),
+ stdout=crm_digests_xml,
args=[devices_opt],
)
if monitor_digests:
@@ -394,11 +460,7 @@ class UpdateScsiDevicesMixin:
self.stonith_id,
SCSI_NODE,
name=f"{name}-{num}.op.digests",
- stdout=fixture_digests_xml(
- self.stonith_id,
- SCSI_NODE,
- devices=devices_value,
- ),
+ stdout=crm_digests_xml,
args=args,
)
@@ -406,14 +468,16 @@ class UpdateScsiDevicesMixin:
self,
devices_before=DEVICES_1,
devices_updated=DEVICES_2,
- devices_add=(),
- devices_remove=(),
+ devices_add=None,
+ devices_remove=None,
unfence=None,
resource_ops=DEFAULT_OPS,
lrm_monitor_ops=DEFAULT_LRM_MONITOR_OPS,
lrm_start_ops=DEFAULT_LRM_START_OPS,
lrm_monitor_ops_updated=DEFAULT_LRM_MONITOR_OPS_UPDATED,
lrm_start_ops_updated=DEFAULT_LRM_START_OPS_UPDATED,
+ digests_attrs_list=None,
+ digests_attrs_list_updated=None,
):
# pylint: disable=too-many-arguments
self.config_cib(
@@ -422,6 +486,7 @@ class UpdateScsiDevicesMixin:
resource_ops=resource_ops,
lrm_monitor_ops=lrm_monitor_ops,
lrm_start_ops=lrm_start_ops,
+ digests_attrs_list=digests_attrs_list,
)
if unfence:
self.config.corosync_conf.load_content(
@@ -445,20 +510,34 @@ class UpdateScsiDevicesMixin:
devices=devices_updated,
resource_ops=resource_ops,
),
- status=_fixture_status_lrm_ops(
+ status=_fixture_status(
self.stonith_id,
self.stonith_type,
lrm_start_ops=lrm_start_ops_updated,
lrm_monitor_ops=lrm_monitor_ops_updated,
+ digests_attrs_list=digests_attrs_list_updated,
),
)
- self.command(
- devices_updated=devices_updated,
- devices_add=devices_add,
- devices_remove=devices_remove,
- )()
+ kwargs = dict(devices_updated=devices_updated)
+ if devices_add is not None:
+ kwargs["devices_add"] = devices_add
+ if devices_remove is not None:
+ kwargs["devices_remove"] = devices_remove
+ self.command(**kwargs)()
self.env_assist.assert_reports([])
+ def digest_attr_value_single(self, digest, last_comma=True):
+ comma = "," if last_comma else ""
+ return f"{self.stonith_id}:{self.stonith_type}:{digest}{comma}"
+
+ def digest_attr_value_multiple(self, digest, last_comma=True):
+ if self.stonith_type == STONITH_TYPE_SCSI:
+ value = f"{STONITH_ID_MPATH}:{STONITH_TYPE_MPATH}:{DEFAULT_DIGEST},"
+ else:
+ value = f"{STONITH_ID_SCSI}:{STONITH_TYPE_SCSI}:{DEFAULT_DIGEST},"
+
+ return f"{value}{self.digest_attr_value_single(digest, last_comma=last_comma)}"
+
class UpdateScsiDevicesFailuresMixin(UpdateScsiDevicesMixin):
def test_pcmk_doesnt_support_digests(self):
@@ -567,9 +646,7 @@ class UpdateScsiDevicesFailuresMixin(UpdateScsiDevicesMixin):
)
def test_no_lrm_start_op(self):
- self.config_cib(
- lrm_start_ops=(), start_digests=False, monitor_digests=False
- )
+ self.config_cib(lrm_start_ops=(), monitor_digests=False)
self.env_assist.assert_raise_library_error(
self.command(),
[
@@ -622,6 +699,59 @@ class UpdateScsiDevicesFailuresMixin(UpdateScsiDevicesMixin):
expected_in_processor=False,
)
+ def test_crm_resource_digests_missing_for_transient_digests_attrs(self):
+ self.config_cib(
+ digests_attrs_list=[
+ [
+ (
+ "digests-secure",
+ self.digest_attr_value_single(ALL_DIGEST),
+ ),
+ ],
+ ],
+ crm_digests_xml=fixture_digests_xml(
+ self.stonith_id, SCSI_NODE, devices="", nonprivate=False
+ ),
+ )
+ self.env_assist.assert_raise_library_error(
+ self.command(),
+ [
+ fixture.error(
+ reports.codes.STONITH_RESTARTLESS_UPDATE_UNABLE_TO_PERFORM,
+ reason=(
+ "necessary digest for '#digests-secure' attribute is "
+ "missing"
+ ),
+ reason_type=reports.const.STONITH_RESTARTLESS_UPDATE_UNABLE_TO_PERFORM_REASON_OTHER,
+ )
+ ],
+ expected_in_processor=False,
+ )
+
+ def test_multiple_digests_attributes(self):
+ self.config_cib(
+ digests_attrs_list=[
+ 2
+ * [
+ (
+ "digests-all",
+ self.digest_attr_value_single(DEFAULT_DIGEST),
+ ),
+ ],
+ ],
+ )
+ self.env_assist.assert_raise_library_error(
+ self.command(),
+ [
+ fixture.error(
+ reports.codes.STONITH_RESTARTLESS_UPDATE_UNABLE_TO_PERFORM,
+ reason=("multiple digests attributes: '#digests-all'"),
+ reason_type=reports.const.STONITH_RESTARTLESS_UPDATE_UNABLE_TO_PERFORM_REASON_OTHER,
+ )
+ ],
+ expected_in_processor=False,
+ )
+
def test_monitor_ops_and_lrm_monitor_ops_do_not_match(self):
self.config_cib(
resource_ops=(
@@ -812,7 +942,7 @@ class UpdateScsiDevicesFailuresMixin(UpdateScsiDevicesMixin):
stonith_type=self.stonith_type,
devices=DEVICES_2,
),
- status=_fixture_status_lrm_ops(
+ status=_fixture_status(
self.stonith_id,
self.stonith_type,
lrm_start_ops=DEFAULT_LRM_START_OPS_UPDATED,
@@ -959,6 +1089,28 @@ class UpdateScsiDevicesFailuresMixin(UpdateScsiDevicesMixin):
]
)
+ def test_transient_digests_attrs_bad_value_format(self):
+ bad_format = f"{DIGEST_ATTR_VALUE_GOOD_FORMAT}id:type,"
+ self.config_cib(
+ digests_attrs_list=[
+ [
+ ("digests-all", DIGEST_ATTR_VALUE_GOOD_FORMAT),
+ ("digests-secure", bad_format),
+ ]
+ ]
+ )
+ self.env_assist.assert_raise_library_error(
+ self.command(),
+ [
+ fixture.error(
+ reports.codes.STONITH_RESTARTLESS_UPDATE_UNABLE_TO_PERFORM,
+ reason=f"invalid digest attribute value: '{bad_format}'",
+ reason_type=reports.const.STONITH_RESTARTLESS_UPDATE_UNABLE_TO_PERFORM_REASON_OTHER,
+ )
+ ],
+ expected_in_processor=False,
+ )
+
class UpdateScsiDevicesSetBase(UpdateScsiDevicesMixin, CommandSetMixin):
def test_update_1_to_1_devices(self):
@@ -1002,80 +1154,6 @@ class UpdateScsiDevicesSetBase(UpdateScsiDevicesMixin, CommandSetMixin):
unfence=[DEV_3, DEV_4],
)
- def test_default_monitor(self):
- self.assert_command_success(unfence=[DEV_2])
-
- def test_no_monitor_ops(self):
- self.assert_command_success(
- unfence=[DEV_2],
- resource_ops=(),
- lrm_monitor_ops=(),
- lrm_monitor_ops_updated=(),
- )
-
- def test_1_monitor_with_timeout(self):
- self.assert_command_success(
- unfence=[DEV_2],
- resource_ops=(("monitor", "30s", "10s", None),),
- lrm_monitor_ops=(("30000", DEFAULT_DIGEST, None, None),),
- lrm_monitor_ops_updated=(("30000", ALL_DIGEST, None, None),),
- )
-
- def test_2_monitor_ops_with_timeouts(self):
- self.assert_command_success(
- unfence=[DEV_2],
- resource_ops=(
- ("monitor", "30s", "10s", None),
- ("monitor", "40s", "20s", None),
- ),
- lrm_monitor_ops=(
- ("30000", DEFAULT_DIGEST, None, None),
- ("40000", DEFAULT_DIGEST, None, None),
- ),
- lrm_monitor_ops_updated=(
- ("30000", ALL_DIGEST, None, None),
- ("40000", ALL_DIGEST, None, None),
- ),
- )
-
- def test_2_monitor_ops_with_one_timeout(self):
- self.assert_command_success(
- unfence=[DEV_2],
- resource_ops=(
- ("monitor", "30s", "10s", None),
- ("monitor", "60s", None, None),
- ),
- lrm_monitor_ops=(
- ("30000", DEFAULT_DIGEST, None, None),
- ("60000", DEFAULT_DIGEST, None, None),
- ),
- lrm_monitor_ops_updated=(
- ("30000", ALL_DIGEST, None, None),
- ("60000", ALL_DIGEST, None, None),
- ),
- )
-
- def test_various_start_ops_one_lrm_start_op(self):
- self.assert_command_success(
- unfence=[DEV_2],
- resource_ops=(
- ("monitor", "60s", None, None),
- ("start", "0s", "40s", None),
- ("start", "0s", "30s", "1"),
- ("start", "10s", "5s", None),
- ("start", "20s", None, None),
- ),
- )
-
- def test_1_nonrecurring_start_op_with_timeout(self):
- self.assert_command_success(
- unfence=[DEV_2],
- resource_ops=(
- ("monitor", "60s", None, None),
- ("start", "0s", "40s", None),
- ),
- )
-
class UpdateScsiDevicesAddRemoveBase(
UpdateScsiDevicesMixin, CommandAddRemoveMixin
@@ -1245,6 +1323,221 @@ class MpathFailuresMixin:
self.assert_failure("node1:1;node2=", ["node2", "node3"])
+class UpdateScsiDevicesDigestsBase(UpdateScsiDevicesMixin):
+ def test_default_monitor(self):
+ self.assert_command_success(unfence=[DEV_2])
+
+ def test_no_monitor_ops(self):
+ self.assert_command_success(
+ unfence=[DEV_2],
+ resource_ops=(),
+ lrm_monitor_ops=(),
+ lrm_monitor_ops_updated=(),
+ )
+
+ def test_1_monitor_with_timeout(self):
+ self.assert_command_success(
+ unfence=[DEV_2],
+ resource_ops=(("monitor", "30s", "10s", None),),
+ lrm_monitor_ops=(("30000", DEFAULT_DIGEST, None, None),),
+ lrm_monitor_ops_updated=(("30000", ALL_DIGEST, None, None),),
+ )
+
+ def test_2_monitor_ops_with_timeouts(self):
+ self.assert_command_success(
+ unfence=[DEV_2],
+ resource_ops=(
+ ("monitor", "30s", "10s", None),
+ ("monitor", "40s", "20s", None),
+ ),
+ lrm_monitor_ops=(
+ ("30000", DEFAULT_DIGEST, None, None),
+ ("40000", DEFAULT_DIGEST, None, None),
+ ),
+ lrm_monitor_ops_updated=(
+ ("30000", ALL_DIGEST, None, None),
+ ("40000", ALL_DIGEST, None, None),
+ ),
+ )
+
+ def test_2_monitor_ops_with_one_timeout(self):
+ self.assert_command_success(
+ unfence=[DEV_2],
+ resource_ops=(
+ ("monitor", "30s", "10s", None),
+ ("monitor", "60s", None, None),
+ ),
+ lrm_monitor_ops=(
+ ("30000", DEFAULT_DIGEST, None, None),
+ ("60000", DEFAULT_DIGEST, None, None),
+ ),
+ lrm_monitor_ops_updated=(
+ ("30000", ALL_DIGEST, None, None),
+ ("60000", ALL_DIGEST, None, None),
+ ),
+ )
+
+ def test_various_start_ops_one_lrm_start_op(self):
+ self.assert_command_success(
+ unfence=[DEV_2],
+ resource_ops=(
+ ("monitor", "60s", None, None),
+ ("start", "0s", "40s", None),
+ ("start", "0s", "30s", "1"),
+ ("start", "10s", "5s", None),
+ ("start", "20s", None, None),
+ ),
+ )
+
+ def test_1_nonrecurring_start_op_with_timeout(self):
+ self.assert_command_success(
+ unfence=[DEV_2],
+ resource_ops=(
+ ("monitor", "60s", None, None),
+ ("start", "0s", "40s", None),
+ ),
+ )
+
+ def _digests_attrs_before(self, last_comma=True):
+ return [
+ (
+ "digests-all",
+ self.digest_attr_value_single(DEFAULT_DIGEST, last_comma),
+ ),
+ (
+ "digests-secure",
+ self.digest_attr_value_single(DEFAULT_DIGEST, last_comma),
+ ),
+ ]
+
+ def _digests_attrs_after(self, last_comma=True):
+ return [
+ (
+ "digests-all",
+ self.digest_attr_value_single(ALL_DIGEST, last_comma),
+ ),
+ (
+ "digests-secure",
+ self.digest_attr_value_single(NONPRIVATE_DIGEST, last_comma),
+ ),
+ ]
+
+ def _digests_attrs_before_multi(self, last_comma=True):
+ return [
+ (
+ "digests-all",
+ self.digest_attr_value_multiple(DEFAULT_DIGEST, last_comma),
+ ),
+ (
+ "digests-secure",
+ self.digest_attr_value_multiple(DEFAULT_DIGEST, last_comma),
+ ),
+ ]
+
+ def _digests_attrs_after_multi(self, last_comma=True):
+ return [
+ (
+ "digests-all",
+ self.digest_attr_value_multiple(ALL_DIGEST, last_comma),
+ ),
+ (
+ "digests-secure",
+ self.digest_attr_value_multiple(NONPRIVATE_DIGEST, last_comma),
+ ),
+ ]
+
+ def test_transient_digests_attrs_all_nodes(self):
+ self.assert_command_success(
+ unfence=[DEV_2],
+ digests_attrs_list=len(self.existing_nodes)
+ * [self._digests_attrs_before()],
+ digests_attrs_list_updated=len(self.existing_nodes)
+ * [self._digests_attrs_after()],
+ )
+
+ def test_transient_digests_attrs_not_on_all_nodes(self):
+ self.assert_command_success(
+ unfence=[DEV_2],
+ digests_attrs_list=[self._digests_attrs_before()],
+ digests_attrs_list_updated=[self._digests_attrs_after()],
+ )
+
+ def test_transient_digests_attrs_all_nodes_multi_value(self):
+ self.assert_command_success(
+ unfence=[DEV_2],
+ digests_attrs_list=len(self.existing_nodes)
+ * [self._digests_attrs_before_multi()],
+ digests_attrs_list_updated=len(self.existing_nodes)
+ * [self._digests_attrs_after_multi()],
+ )
+
+ def test_transient_digests_attrs_not_on_all_nodes_multi_value(self):
+ self.assert_command_success(
+ unfence=[DEV_2],
+ digests_attrs_list=[self._digests_attrs_before()],
+ digests_attrs_list_updated=[self._digests_attrs_after()],
+ )
+
+ def test_transient_digests_attrs_not_all_digest_types(self):
+ self.assert_command_success(
+ unfence=[DEV_2],
+ digests_attrs_list=len(self.existing_nodes)
+ * [self._digests_attrs_before()[0:1]],
+ digests_attrs_list_updated=len(self.existing_nodes)
+ * [self._digests_attrs_after()[0:1]],
+ )
+
+ def test_transient_digests_attrs_without_digests_attrs(self):
+ self.assert_command_success(
+ unfence=[DEV_2],
+ digests_attrs_list=len(self.existing_nodes) * [[]],
+ digests_attrs_list_updated=len(self.existing_nodes) * [[]],
+ )
+
+ def test_transient_digests_attrs_without_last_comma(self):
+ self.assert_command_success(
+ unfence=[DEV_2],
+ digests_attrs_list=[self._digests_attrs_before(last_comma=False)],
+ digests_attrs_list_updated=[
+ self._digests_attrs_after(last_comma=False)
+ ],
+ )
+
+ def test_transient_digests_attrs_without_last_comma_multi_value(self):
+ self.assert_command_success(
+ unfence=[DEV_2],
+ digests_attrs_list=[
+ self._digests_attrs_before_multi(last_comma=False)
+ ],
+ digests_attrs_list_updated=[
+ self._digests_attrs_after_multi(last_comma=False)
+ ],
+ )
+
+ def test_transient_digests_attrs_no_digest_for_our_stonith_id(self):
+ digests_attrs_list = len(self.existing_nodes) * [
+ [
+ ("digests-all", DIGEST_ATTR_VALUE_GOOD_FORMAT),
+ ("digests-secure", DIGEST_ATTR_VALUE_GOOD_FORMAT),
+ ]
+ ]
+ self.assert_command_success(
+ unfence=[DEV_2],
+ digests_attrs_list=digests_attrs_list,
+ digests_attrs_list_updated=digests_attrs_list,
+ )
+
+ def test_transient_digests_attrs_digests_with_empty_value(self):
+ digests_attrs_list = len(self.existing_nodes) * [
+ [("digests-all", ""), ("digests-secure", "")]
+ ]
+ self.assert_command_success(
+ unfence=[DEV_2],
+ digests_attrs_list=digests_attrs_list,
+ digests_attrs_list_updated=digests_attrs_list,
+ )
+
+
@mock.patch.object(
settings,
"pacemaker_api_result_schema",
@@ -1337,3 +1630,47 @@ class TestUpdateScsiDevicesAddRemoveFailuresScsi(
UpdateScsiDevicesAddRemoveFailuresBaseMixin, ScsiMixin, TestCase
):
pass
+
+
+@mock.patch.object(
+ settings,
+ "pacemaker_api_result_schema",
+ rc("pcmk_api_rng/api-result.rng"),
+)
+class TestUpdateScsiDevicesDigestsSetScsi(
+ UpdateScsiDevicesDigestsBase, ScsiMixin, CommandSetMixin, TestCase
+):
+ pass
+
+
+@mock.patch.object(
+ settings,
+ "pacemaker_api_result_schema",
+ rc("pcmk_api_rng/api-result.rng"),
+)
+class TestUpdateScsiDevicesDigestsAddRemoveScsi(
+ UpdateScsiDevicesDigestsBase, ScsiMixin, CommandAddRemoveMixin, TestCase
+):
+ pass
+
+
+@mock.patch.object(
+ settings,
+ "pacemaker_api_result_schema",
+ rc("pcmk_api_rng/api-result.rng"),
+)
+class TestUpdateScsiDevicesDigestsSetMpath(
+ UpdateScsiDevicesDigestsBase, MpathMixin, CommandSetMixin, TestCase
+):
+ pass
+
+
+@mock.patch.object(
+ settings,
+ "pacemaker_api_result_schema",
+ rc("pcmk_api_rng/api-result.rng"),
+)
+class TestUpdateScsiDevicesDigestsAddRemoveMpath(
+ UpdateScsiDevicesDigestsBase, MpathMixin, CommandAddRemoveMixin, TestCase
+):
+ pass
--
2.39.2
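
For reference, a standalone sketch (not pcs code) of the transient digest rewrite that _get_transient_digest_value in the patch above performs on the comma-separated "<stonith_id>:<stonith_type>:<digest>" entries; the device name and digest values below are made up for illustration.

# Sketch only -- mirrors the behaviour described in the docstring above:
# replace the digest just for the updated device, leave other entries and a
# trailing comma (empty last entry) untouched. "st1"/"fence_scsi" are made up.
def replace_transient_digest(
    old_value: str, stonith_id: str, stonith_type: str, digest: str
) -> str:
    entries = []
    for entry in old_value.split(","):
        if entry:  # empty entries (e.g. after a trailing comma) pass through
            dev_id, dev_type, _old_digest = entry.split(":")
            if dev_id == stonith_id and dev_type == stonith_type:
                entry = ":".join([stonith_id, stonith_type, digest])
        entries.append(entry)
    return ",".join(entries)

print(replace_transient_digest("st1:fence_scsi:0digest,", "st1", "fence_scsi", "1digest"))
# -> st1:fence_scsi:1digest,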


@ -1,89 +0,0 @@
From 2403a2414f234a4025055e56f8202094caf1b655 Mon Sep 17 00:00:00 2001
From: Ivan Devat <idevat@redhat.com>
Date: Thu, 30 Mar 2023 17:03:06 +0200
Subject: [PATCH] fix cluster-status/fence_levels shape expectation
---
jest.config.js | 1 +
.../endpoints/clusterStatus/shape/cluster.ts | 10 +++--
.../cluster/displayAdvancedStatus.test.ts | 37 +++++++++++++++++++
3 files changed, 44 insertions(+), 4 deletions(-)
create mode 100644 src/test/scenes/cluster/displayAdvancedStatus.test.ts
diff --git a/jest.config.js b/jest.config.js
index 08660443..c5c39dc5 100644
--- a/jest.config.js
+++ b/jest.config.js
@@ -1,4 +1,5 @@
module.exports = {
globalSetup: "./src/test/jest-preset.ts",
moduleDirectories: ["node_modules", "src"],
+ testTimeout: 10000,
};
diff --git a/src/app/backend/endpoints/clusterStatus/shape/cluster.ts b/src/app/backend/endpoints/clusterStatus/shape/cluster.ts
index 97ec4f17..ea29470e 100644
--- a/src/app/backend/endpoints/clusterStatus/shape/cluster.ts
+++ b/src/app/backend/endpoints/clusterStatus/shape/cluster.ts
@@ -13,10 +13,12 @@ The key of record is a target.
*/
const ApiFencingLevels = t.record(
t.string,
- t.type({
- level: t.string,
- devices: t.array(t.string),
- }),
+ t.array(
+ t.type({
+ level: t.string,
+ devices: t.string,
+ }),
+ ),
);
export const ApiClusterStatusFlag = t.keyof({
diff --git a/src/test/scenes/cluster/displayAdvancedStatus.test.ts b/src/test/scenes/cluster/displayAdvancedStatus.test.ts
new file mode 100644
index 00000000..78eb7dbe
--- /dev/null
+++ b/src/test/scenes/cluster/displayAdvancedStatus.test.ts
@@ -0,0 +1,37 @@
+// Cluster status is pretty complex. Sometimes a discrepancy between frontend
+// and backend appears. This modules collect tests for discovered cases.
+
+import * as t from "dev/responses/clusterStatus/tools";
+
+import {dt} from "test/tools/selectors";
+import {location, shortcuts} from "test/tools";
+
+const clusterName = "test-cluster";
+
+// We want to see browser behavior with (for now) invalid status before fix. But
+// the typecheck tell us that it is wrong and dev build fails. So, we decive it.
+const deceiveTypeCheck = (maybeInvalidPart: ReturnType<typeof JSON.parse>) =>
+ JSON.parse(JSON.stringify(maybeInvalidPart));
+
+describe("Cluster with advanced status", () => {
+ it("accept fence levels", async () => {
+ shortcuts.interceptWithCluster({
+ clusterStatus: t.cluster(clusterName, "ok", {
+ fence_levels: deceiveTypeCheck({
+ "node-1": [
+ {
+ level: "1",
+ devices: "fence-1",
+ },
+ {
+ level: "2",
+ devices: "fence-2",
+ },
+ ],
+ }),
+ }),
+ });
+ await page.goto(location.cluster({clusterName}));
+ await page.waitForSelector(dt("cluster-overview"));
+ });
+});
--
2.39.2


@ -1,4 +1,4 @@
From c7b8c999f796cee4899df578944b239e1db29cb5 Mon Sep 17 00:00:00 2001 From cf1e0cc06a94804a4a98a12ee06d09e5786bad1b Mon Sep 17 00:00:00 2001
From: Ivan Devat <idevat@redhat.com> From: Ivan Devat <idevat@redhat.com>
Date: Tue, 20 Nov 2018 15:03:56 +0100 Date: Tue, 20 Nov 2018 15:03:56 +0100
Subject: [PATCH] do not support cluster setup with udp(u) transport in RHEL9 Subject: [PATCH] do not support cluster setup with udp(u) transport in RHEL9
@ -9,10 +9,10 @@ Subject: [PATCH] do not support cluster setup with udp(u) transport in RHEL9
2 files changed, 3 insertions(+) 2 files changed, 3 insertions(+)
diff --git a/pcs/pcs.8.in b/pcs/pcs.8.in diff --git a/pcs/pcs.8.in b/pcs/pcs.8.in
index 7bbd1ae2..53bfebb4 100644 index 55f4b4a9..8cc9360d 100644
--- a/pcs/pcs.8.in --- a/pcs/pcs.8.in
+++ b/pcs/pcs.8.in +++ b/pcs/pcs.8.in
@@ -457,6 +457,8 @@ By default, encryption is enabled with cipher=aes256 and hash=sha256. To disable @@ -479,6 +479,8 @@ By default, encryption is enabled with cipher=aes256 and hash=sha256. To disable
Transports udp and udpu: Transports udp and udpu:
.br .br
@ -22,10 +22,10 @@ index 7bbd1ae2..53bfebb4 100644
.br .br
Transport options are: ip_version, netmtu Transport options are: ip_version, netmtu
diff --git a/pcs/usage.py b/pcs/usage.py diff --git a/pcs/usage.py b/pcs/usage.py
index 073e2b1e..91ceb787 100644 index cc6c5803..a7d4b24b 100644
--- a/pcs/usage.py --- a/pcs/usage.py
+++ b/pcs/usage.py +++ b/pcs/usage.py
@@ -1431,6 +1431,7 @@ Commands: @@ -1482,6 +1482,7 @@ Commands:
hash=sha256. To disable encryption, set cipher=none and hash=none. hash=sha256. To disable encryption, set cipher=none and hash=none.
Transports udp and udpu: Transports udp and udpu:
@ -34,5 +34,5 @@ index 073e2b1e..91ceb787 100644
support traffic encryption nor compression. support traffic encryption nor compression.
Transport options are: Transport options are:
-- --
2.38.1 2.43.0


@ -1,53 +1,51 @@
Name: pcs Name: pcs
Version: 0.11.4 Version: 0.11.7
Release: 7%{?dist} Release: 2%{?dist}
# https://docs.fedoraproject.org/en-US/packaging-guidelines/LicensingGuidelines/ # https://docs.fedoraproject.org/en-US/packaging-guidelines/LicensingGuidelines/
# https://fedoraproject.org/wiki/Licensing:Main?rd=Licensing#Good_Licenses # https://fedoraproject.org/wiki/Licensing:Main?rd=Licensing#Good_Licenses
# GPL-2.0-only: pcs # GPL-2.0-only: pcs
# Apache-2.0: tornado # Apache-2.0: tornado
# MIT: backports, childprocess, dacite, daemons, ethon, mustermann, rack, # MIT: backports, childprocess, dacite, ethon, mustermann, rack,
# rack-protection, rack-test, sinatra, tilt # rack-protection, rack-test, sinatra, tilt
# GPL-2.0-only or Ruby: eventmachine # MIT and (BSD-2-Clause or GPL-2.0-or-later): nio4r
# (GPL-2.0-only or Ruby) and BSD-2-Clause: thin # BSD-2-Clause or Ruby: ruby2_keywords
# BSD-2-Clause or Ruby: ruby2_keywords, webrick # BSD-3-Clause: puma
# BSD-3-Clause and MIT: ffi # BSD-3-Clause and MIT: ffi
License: GPL-2.0-only AND Apache-2.0 AND MIT AND BSD-3-Clause AND (GPL-2.0-only OR Ruby) AND BSD-2-Clause AND (BSD-2-Clause OR Ruby) License: GPL-2.0-only AND Apache-2.0 AND MIT AND BSD-3-Clause AND (BSD-2-Clause OR Ruby) AND (BSD-2-Clause OR GPL-2.0-or-later)
URL: https://github.com/ClusterLabs/pcs URL: https://github.com/ClusterLabs/pcs
Group: System Environment/Base Group: System Environment/Base
Summary: Pacemaker Configuration System Summary: Pacemaker/Corosync Configuration System
#building only for architectures with pacemaker and corosync available #building only for architectures with pacemaker and corosync available
ExclusiveArch: i686 x86_64 s390x ppc64le aarch64 ExclusiveArch: i686 x86_64 s390x ppc64le aarch64
# When specifying a commit, use its long hash
%global version_or_commit %{version} %global version_or_commit %{version}
# %%global version_or_commit %%{version}.206-f51f # %%global version_or_commit aaa16e0de986890e6ca3038f907bbad331e41a87
%global pcs_source_name %{name}-%{version_or_commit} %global pcs_source_name %{name}-%{version_or_commit}
# ui_commit can be determined by hash, tag or branch # ui_commit can be determined by hash, tag or branch
%global ui_commit 0.1.16.1 %global ui_commit 0.1.18
%global ui_modules_version 0.1.16.1 %global ui_modules_version 0.1.18
%global ui_src_name pcs-web-ui-%{ui_commit} %global ui_src_name pcs-web-ui-%{ui_commit}
%global pcs_snmp_pkg_name pcs-snmp %global pcs_snmp_pkg_name pcs-snmp
%global pyagentx_version 0.4.pcs.2 %global pyagentx_version 0.4.pcs.2
%global tornado_version 6.2.0 %global tornado_version 6.3.3
%global dacite_version 1.6.0 %global dacite_version 1.8.1
%global version_rubygem_backports 3.23.0 %global version_rubygem_backports 3.24.1
%global version_rubygem_childprocess 4.1.0 %global version_rubygem_childprocess 4.1.0
%global version_rubygem_daemons 1.4.1
%global version_rubygem_ethon 0.16.0 %global version_rubygem_ethon 0.16.0
%global version_rubygem_eventmachine 1.2.7 %global version_rubygem_ffi 1.16.3
%global version_rubygem_ffi 1.15.5
%global version_rubygem_mustermann 3.0.0 %global version_rubygem_mustermann 3.0.0
%global version_rubygem_rack 2.2.6.4 %global version_rubygem_nio4r 2.5.9
%global version_rubygem_rack_protection 3.0.5 %global version_rubygem_puma 6.4.0
%global version_rubygem_rack_test 2.0.2 %global version_rubygem_rack 2.2.8.1
%global version_rubygem_rack_protection 3.1.0
%global version_rubygem_rack_test 2.1.0
%global version_rubygem_ruby2_keywords 0.0.5 %global version_rubygem_ruby2_keywords 0.0.5
%global version_rubygem_sinatra 3.0.5 %global version_rubygem_sinatra 3.1.0
%global version_rubygem_thin 1.8.1 %global version_rubygem_tilt 2.3.0
%global version_rubygem_tilt 2.0.11
%global version_rubygem_webrick 1.7.0
%global required_pacemaker_version 2.1.0 %global required_pacemaker_version 2.1.0
@ -71,7 +69,13 @@ ExclusiveArch: i686 x86_64 s390x ppc64le aarch64
# /usr/bin/python will be removed or switched to Python 3 in the future. # /usr/bin/python will be removed or switched to Python 3 in the future.
%global __python %{__python3} %global __python %{__python3}
Source0: %{url}/archive/%{version_or_commit}/%{pcs_source_name}.tar.gz # prepend v for folder in GitHub link when using tagged tarball
%if "%{version}" == "%{version_or_commit}"
%global v_prefix v
%endif
# part after the last slash is recognized as filename in look-aside cache
Source0: %{url}/archive/%{?v_prefix}%{version_or_commit}/%{pcs_source_name}.tar.gz
Source41: https://github.com/ondrejmular/pyagentx/archive/v%{pyagentx_version}/pyagentx-%{pyagentx_version}.tar.gz Source41: https://github.com/ondrejmular/pyagentx/archive/v%{pyagentx_version}/pyagentx-%{pyagentx_version}.tar.gz
Source42: https://github.com/tornadoweb/tornado/archive/v%{tornado_version}/tornado-%{tornado_version}.tar.gz Source42: https://github.com/tornadoweb/tornado/archive/v%{tornado_version}/tornado-%{tornado_version}.tar.gz
@ -80,6 +84,8 @@ Source44: https://github.com/konradhalas/dacite/archive/v%{dacite_version}/dacit
Source81: https://rubygems.org/downloads/backports-%{version_rubygem_backports}.gem Source81: https://rubygems.org/downloads/backports-%{version_rubygem_backports}.gem
Source82: https://rubygems.org/downloads/ethon-%{version_rubygem_ethon}.gem Source82: https://rubygems.org/downloads/ethon-%{version_rubygem_ethon}.gem
Source83: https://rubygems.org/downloads/ffi-%{version_rubygem_ffi}.gem Source83: https://rubygems.org/downloads/ffi-%{version_rubygem_ffi}.gem
Source84: https://rubygems.org/downloads/nio4r-%{version_rubygem_nio4r}.gem
Source85: https://rubygems.org/downloads/puma-%{version_rubygem_puma}.gem
Source86: https://rubygems.org/downloads/mustermann-%{version_rubygem_mustermann}.gem Source86: https://rubygems.org/downloads/mustermann-%{version_rubygem_mustermann}.gem
Source87: https://rubygems.org/downloads/childprocess-%{version_rubygem_childprocess}.gem Source87: https://rubygems.org/downloads/childprocess-%{version_rubygem_childprocess}.gem
Source88: https://rubygems.org/downloads/rack-%{version_rubygem_rack}.gem Source88: https://rubygems.org/downloads/rack-%{version_rubygem_rack}.gem
@ -87,44 +93,21 @@ Source89: https://rubygems.org/downloads/rack-protection-%{version_rubygem_rack_
Source90: https://rubygems.org/downloads/rack-test-%{version_rubygem_rack_test}.gem Source90: https://rubygems.org/downloads/rack-test-%{version_rubygem_rack_test}.gem
Source91: https://rubygems.org/downloads/sinatra-%{version_rubygem_sinatra}.gem Source91: https://rubygems.org/downloads/sinatra-%{version_rubygem_sinatra}.gem
Source92: https://rubygems.org/downloads/tilt-%{version_rubygem_tilt}.gem Source92: https://rubygems.org/downloads/tilt-%{version_rubygem_tilt}.gem
Source93: https://rubygems.org/downloads/eventmachine-%{version_rubygem_eventmachine}.gem Source93: https://rubygems.org/downloads/ruby2_keywords-%{version_rubygem_ruby2_keywords}.gem
Source94: https://rubygems.org/downloads/daemons-%{version_rubygem_daemons}.gem
Source95: https://rubygems.org/downloads/thin-%{version_rubygem_thin}.gem
Source96: https://rubygems.org/downloads/ruby2_keywords-%{version_rubygem_ruby2_keywords}.gem
Source97: https://rubygems.org/downloads/webrick-%{version_rubygem_webrick}.gem
Source100: https://github.com/ClusterLabs/pcs-web-ui/archive/%{ui_commit}/%{ui_src_name}.tar.gz Source100: https://github.com/ClusterLabs/pcs-web-ui/archive/%{ui_commit}/%{ui_src_name}.tar.gz
Source101: https://github.com/ClusterLabs/pcs-web-ui/releases/download/%{ui_commit}/pcs-web-ui-node-modules-%{ui_modules_version}.tar.xz Source101: https://github.com/ClusterLabs/pcs-web-ui/releases/download/%{ui_commit}/pcs-web-ui-node-modules-%{ui_modules_version}.tar.xz
# Patches from upstream.
# They should come before downstream patches to avoid unnecessary conflicts.
# Z-streams are exception here: they can come from upstream but should be
# applied at the end to keep z-stream changes as straightforward as possible.
# Patch1: bzNUMBER-01-name.patch
# Downstream patches do not come from upstream. They adapt pcs for specific
# RHEL needs.
# pcs patches: <= 200 # pcs patches: <= 200
Patch1: do-not-support-cluster-setup-with-udp-u-transport.patch # Patch0: bzNUMBER-01-name.patch
Patch2: bz2148124-01-pcsd-systemd-killmode.patch Patch0: do-not-support-cluster-setup-with-udp-u-transport.patch
Patch3: 01-smoke-test-fix.patch
Patch4: 02-smoke-test-fix.patch
Patch5: bz2151524-01-add-warning-when-updating-a-misconfigured-resource.patch
Patch6: bz2151164-01-fix-displaying-bool-and-integer-values.patch
Patch7: bz2159454-01-add-agent-validation-option.patch
Patch8: bz2158790-01-fix-stonith-watchdog-timeout-validation.patch
Patch9: bz2166249-01-fix-stonith-watchdog-timeout-offline-update.patch
Patch10: bz2180697-01-fix-pcs-config-checkpoint-diff.patch
Patch11: bz2180704-01-fix-pcs-stonith-update-scsi.patch
# ui patches: >200 # ui patches: >200
Patch201: bz2167471-01-fix-broken-typeahead-component.patch # Patch201: bzNUMBER-01-name.patch
Patch202: bz2183180-01-fix-loading-with-fence-levels.patch
# git for patches # git for patches
BuildRequires: git-core BuildRequires: git-core
#printf from coreutils is used in makefile # printf from coreutils is used in makefile, head is used in spec
BuildRequires: coreutils BuildRequires: coreutils
# python for pcs # python for pcs
BuildRequires: python3 >= 3.9 BuildRequires: python3 >= 3.9
@ -173,6 +156,10 @@ BuildRequires: fence-agents-common
BuildRequires: pacemaker-libs-devel >= %{required_pacemaker_version} BuildRequires: pacemaker-libs-devel >= %{required_pacemaker_version}
BuildRequires: resource-agents BuildRequires: resource-agents
BuildRequires: sbd BuildRequires: sbd
# for working with qdevice certificates (certutil) - used in configure.ac
BuildRequires: nss-tools
# for generating MiniDebugInfo with find-debuginfo
BuildRequires: debugedit
# python and libraries for pcs, setuptools for pcs entrypoint # python and libraries for pcs, setuptools for pcs entrypoint
Requires: python3 >= 3.9 Requires: python3 >= 3.9
@ -208,24 +195,24 @@ Requires: pam
Requires: redhat-logos Requires: redhat-logos
# needs logrotate for /etc/logrotate.d/pcsd # needs logrotate for /etc/logrotate.d/pcsd
Requires: logrotate Requires: logrotate
# for working with qdevice certificates (certutil)
Requires: nss-tools
Provides: bundled(tornado) = %{tornado_version} Provides: bundled(tornado) = %{tornado_version}
Provides: bundled(dacite) = %{dacite_version} Provides: bundled(dacite) = %{dacite_version}
Provides: bundled(backports) = %{version_rubygem_backports} Provides: bundled(backports) = %{version_rubygem_backports}
Provides: bundled(daemons) = %{version_rubygem_daemons} Provides: bundled(childprocess) = %{version_rubygem_childprocess}
Provides: bundled(ethon) = %{version_rubygem_ethon} Provides: bundled(ethon) = %{version_rubygem_ethon}
Provides: bundled(eventmachine) = %{version_rubygem_eventmachine}
Provides: bundled(ffi) = %{version_rubygem_ffi} Provides: bundled(ffi) = %{version_rubygem_ffi}
Provides: bundled(mustermann) = %{version_rubygem_mustermann} Provides: bundled(mustermann) = %{version_rubygem_mustermann}
Provides: bundled(childprocess) = %{version_rubygem_childprocess} Provides: bundled(nio4r) = %{version_rubygem_nio4r}
Provides: bundled(puma) = %{version_rubygem_puma}
Provides: bundled(rack) = %{version_rubygem_rack} Provides: bundled(rack) = %{version_rubygem_rack}
Provides: bundled(rack_protection) = %{version_rubygem_rack_protection} Provides: bundled(rack_protection) = %{version_rubygem_rack_protection}
Provides: bundled(rack_test) = %{version_rubygem_rack_test} Provides: bundled(rack_test) = %{version_rubygem_rack_test}
Provides: bundled(ruby2_keywords) = %{version_rubygem_ruby2_keywords} Provides: bundled(ruby2_keywords) = %{version_rubygem_ruby2_keywords}
Provides: bundled(sinatra) = %{version_rubygem_sinatra} Provides: bundled(sinatra) = %{version_rubygem_sinatra}
Provides: bundled(thin) = %{version_rubygem_thin}
Provides: bundled(tilt) = %{version_rubygem_tilt} Provides: bundled(tilt) = %{version_rubygem_tilt}
Provides: bundled(webrick) = %{version_rubygem_webrick}
%description %description
pcs is a corosync and pacemaker configuration tool. It permits users to pcs is a corosync and pacemaker configuration tool. It permits users to
@ -241,7 +228,7 @@ Summary: Pacemaker cluster SNMP agent
License: GPL-2.0-only and BSD-2-Clause License: GPL-2.0-only and BSD-2-Clause
URL: https://github.com/ClusterLabs/pcs URL: https://github.com/ClusterLabs/pcs
# tar for unpacking pyagetx source tar ball # tar for unpacking pyagentx source tarball
BuildRequires: tar BuildRequires: tar
Requires: pcs = %{version}-%{release} Requires: pcs = %{version}-%{release}
@ -299,23 +286,19 @@ update_times_patch(){
# patch web-ui sources # patch web-ui sources
%autosetup -D -T -b 100 -a 101 -S git -n %{ui_src_name} -N %autosetup -D -T -b 100 -a 101 -S git -n %{ui_src_name} -N
%autopatch -p1 -m 201 %autopatch -p1 -m 201
update_times_patch %{PATCH201} # update_times_patch %%{PATCH201}
update_times_patch %{PATCH202}
# patch pcs sources # patch pcs sources
%autosetup -S git -n %{pcs_source_name} -N %autosetup -S git -n %{pcs_source_name} -N
%autopatch -p1 -M 200 %autopatch -p1 -M 200
update_times_patch %{PATCH1} # update_times_patch %%{PATCH0}
update_times_patch %{PATCH2} update_times_patch %{PATCH0}
update_times_patch %{PATCH3}
update_times_patch %{PATCH4} # generate .tarball-version if building from an untagged commit, not a released version
update_times_patch %{PATCH5} # autogen uses git-version-gen which uses .tarball-version for generating version number
update_times_patch %{PATCH6} %if "%{version}" != "%{version_or_commit}"
update_times_patch %{PATCH7} echo "%version+$(echo "%{version_or_commit}" | head -c 8)" > %{_builddir}/%{pcs_source_name}/.tarball-version
update_times_patch %{PATCH8} %endif
update_times_patch %{PATCH9}
update_times_patch %{PATCH10}
update_times_patch %{PATCH11}
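To make the .tarball-version step above concrete: when %{version_or_commit} holds a commit hash rather than the release version, the echo writes the package version plus the first eight characters of that hash, and git-version-gen (run from ./autogen.sh) reads that file to derive the version string. A rough sketch with a made-up hash, not one taken from this build:

# hypothetical values: version=0.11.7, version_or_commit=0123456789abcdef0123456789abcdef01234567
echo "0.11.7+$(echo "0123456789abcdef0123456789abcdef01234567" | head -c 8)" > .tarball-version
cat .tarball-version    # -> 0.11.7+01234567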
# prepare dirs/files necessary for building all bundles # prepare dirs/files necessary for building all bundles
# ----------------------------------------------------- # -----------------------------------------------------
@ -325,6 +308,8 @@ mkdir -p %{rubygem_cache_dir}
cp -f %SOURCE81 %{rubygem_cache_dir} cp -f %SOURCE81 %{rubygem_cache_dir}
cp -f %SOURCE82 %{rubygem_cache_dir} cp -f %SOURCE82 %{rubygem_cache_dir}
cp -f %SOURCE83 %{rubygem_cache_dir} cp -f %SOURCE83 %{rubygem_cache_dir}
cp -f %SOURCE84 %{rubygem_cache_dir}
cp -f %SOURCE85 %{rubygem_cache_dir}
cp -f %SOURCE86 %{rubygem_cache_dir} cp -f %SOURCE86 %{rubygem_cache_dir}
cp -f %SOURCE87 %{rubygem_cache_dir} cp -f %SOURCE87 %{rubygem_cache_dir}
cp -f %SOURCE88 %{rubygem_cache_dir} cp -f %SOURCE88 %{rubygem_cache_dir}
@ -333,10 +318,6 @@ cp -f %SOURCE90 %{rubygem_cache_dir}
cp -f %SOURCE91 %{rubygem_cache_dir} cp -f %SOURCE91 %{rubygem_cache_dir}
cp -f %SOURCE92 %{rubygem_cache_dir} cp -f %SOURCE92 %{rubygem_cache_dir}
cp -f %SOURCE93 %{rubygem_cache_dir} cp -f %SOURCE93 %{rubygem_cache_dir}
cp -f %SOURCE94 %{rubygem_cache_dir}
cp -f %SOURCE95 %{rubygem_cache_dir}
cp -f %SOURCE96 %{rubygem_cache_dir}
cp -f %SOURCE97 %{rubygem_cache_dir}
# 2) prepare python bundles # 2) prepare python bundles
@ -349,11 +330,14 @@ cp -f %SOURCE44 rpm/
%define debug_package %{nil} %define debug_package %{nil}
./autogen.sh ./autogen.sh
%{configure} --enable-local-build --enable-use-local-cache-only --enable-individual-bundling --enable-booth-enable-authfile-set --enable-booth-enable-authfile-unset PYTHON=%{__python3} ruby_CFLAGS="%{optflags}" ruby_LIBS="%{build_ldflags}" %{configure} --enable-local-build --enable-use-local-cache-only \
--enable-individual-bundling --with-pcsd-default-cipherlist='PROFILE=SYSTEM' \
--enable-booth-enable-authfile-unset --enable-booth-enable-authfile-set \
PYTHON=%{__python3} ruby_CFLAGS="%{optflags}" ruby_LIBS="%{build_ldflags}"
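The new --with-pcsd-default-cipherlist='PROFILE=SYSTEM' switch is what makes pcsd follow the system-wide crypto policy by default (see the 0.11.6-6 changelog entry below). A hedged way to inspect the effective setting on an installed system, assuming pcsd still honours a PCSD_SSL_CIPHERS override in /etc/sysconfig/pcsd:

# show any local cipher override and the policy that PROFILE=SYSTEM resolves to
grep -i ssl_ciphers /etc/sysconfig/pcsd || true
update-crypto-policies --show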
make all make all
# build pcs-web-ui # build pcs-web-ui
make -C %{_builddir}/%{ui_src_name} build BUILD_USE_EXISTING_NODE_MODULES=true BUILD_USE_CURRENT_NODE_MODULES=true make -C %{_builddir}/%{ui_src_name} build
%install %install
rm -rf $RPM_BUILD_ROOT rm -rf $RPM_BUILD_ROOT
@ -361,32 +345,35 @@ pwd
%make_install %make_install
# RHEL-7716 - fix rubygem permissions - remove write access for owner's group
# and other users
chmod --recursive g-w,o-w ${RPM_BUILD_ROOT}%{_libdir}/%{rubygem_bundle_dir}
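A quick sanity check for the RHEL-7716 permission fix above, run against the build root: nothing under the bundled rubygem tree should remain group- or world-writable, so the find below is expected to print nothing.

# expect no output; -perm /022 matches any group/other write bit
find ${RPM_BUILD_ROOT}%{_libdir}/%{rubygem_bundle_dir} -perm /022 -ls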
# install pcs-web-ui # install pcs-web-ui
cp -r %{_builddir}/%{ui_src_name}/build ${RPM_BUILD_ROOT}%{_libdir}/%{pcsd_public_dir}/ui cp -r %{_builddir}/%{ui_src_name}/build ${RPM_BUILD_ROOT}%{_libdir}/%{pcsd_public_dir}/ui
# symlink favicon into pcsd directories # symlink favicon into pcsd directories
mkdir -p ${RPM_BUILD_ROOT}%{_libdir}/%{pcsd_public_dir}/images/ mkdir -p ${RPM_BUILD_ROOT}%{_libdir}/%{pcsd_public_dir}/ui/static/media
ln -fs /etc/favicon.png ${RPM_BUILD_ROOT}%{_libdir}/%{pcsd_public_dir}/images/favicon.png ln -fs /etc/favicon.png ${RPM_BUILD_ROOT}%{_libdir}/%{pcsd_public_dir}/ui/static/media/favicon.png
# prepare license files # prepare license files
# some rubygems do not have a license file (thin) # some rubygems do not have a license file (thin)
mv %{rubygem_bundle_dir}/gems/backports-%{version_rubygem_backports}/LICENSE.txt backports_LICENSE.txt mv %{rubygem_bundle_dir}/gems/backports-%{version_rubygem_backports}/LICENSE.txt backports_LICENSE.txt
mv %{rubygem_bundle_dir}/gems/daemons-%{version_rubygem_daemons}/LICENSE daemons_LICENSE mv %{rubygem_bundle_dir}/gems/childprocess-%{version_rubygem_childprocess}/LICENSE childprocess_LICENSE
mv %{rubygem_bundle_dir}/gems/ethon-%{version_rubygem_ethon}/LICENSE ethon_LICENSE mv %{rubygem_bundle_dir}/gems/ethon-%{version_rubygem_ethon}/LICENSE ethon_LICENSE
mv %{rubygem_bundle_dir}/gems/eventmachine-%{version_rubygem_eventmachine}/LICENSE eventmachine_LICENSE
mv %{rubygem_bundle_dir}/gems/eventmachine-%{version_rubygem_eventmachine}/GNU eventmachine_GNU
mv %{rubygem_bundle_dir}/gems/ffi-%{version_rubygem_ffi}/COPYING ffi_COPYING mv %{rubygem_bundle_dir}/gems/ffi-%{version_rubygem_ffi}/COPYING ffi_COPYING
mv %{rubygem_bundle_dir}/gems/ffi-%{version_rubygem_ffi}/LICENSE ffi_LICENSE mv %{rubygem_bundle_dir}/gems/ffi-%{version_rubygem_ffi}/LICENSE ffi_LICENSE
mv %{rubygem_bundle_dir}/gems/ffi-%{version_rubygem_ffi}/LICENSE.SPECS ffi_LICENSE.SPECS mv %{rubygem_bundle_dir}/gems/ffi-%{version_rubygem_ffi}/LICENSE.SPECS ffi_LICENSE.SPECS
mv %{rubygem_bundle_dir}/gems/mustermann-%{version_rubygem_mustermann}/LICENSE mustermann_LICENSE mv %{rubygem_bundle_dir}/gems/mustermann-%{version_rubygem_mustermann}/LICENSE mustermann_LICENSE
mv %{rubygem_bundle_dir}/gems/childprocess-%{version_rubygem_childprocess}/LICENSE childprocess_LICENSE mv %{rubygem_bundle_dir}/gems/nio4r-%{version_rubygem_nio4r}/license.md nio4r_license.md
mv %{rubygem_bundle_dir}/gems/nio4r-%{version_rubygem_nio4r}/ext/libev/LICENSE nio4r_libev_LICENSE
mv %{rubygem_bundle_dir}/gems/puma-%{version_rubygem_puma}/LICENSE puma_LICENSE
mv %{rubygem_bundle_dir}/gems/rack-%{version_rubygem_rack}/MIT-LICENSE rack_MIT-LICENSE mv %{rubygem_bundle_dir}/gems/rack-%{version_rubygem_rack}/MIT-LICENSE rack_MIT-LICENSE
mv %{rubygem_bundle_dir}/gems/rack-protection-%{version_rubygem_rack_protection}/License rack-protection_License mv %{rubygem_bundle_dir}/gems/rack-protection-%{version_rubygem_rack_protection}/License rack-protection_License
mv %{rubygem_bundle_dir}/gems/rack-test-%{version_rubygem_rack_test}/MIT-LICENSE.txt rack-test_MIT-LICENSE.txt mv %{rubygem_bundle_dir}/gems/rack-test-%{version_rubygem_rack_test}/MIT-LICENSE.txt rack-test_MIT-LICENSE.txt
mv %{rubygem_bundle_dir}/gems/ruby2_keywords-%{version_rubygem_ruby2_keywords}/LICENSE ruby2_keywords_LICENSE mv %{rubygem_bundle_dir}/gems/ruby2_keywords-%{version_rubygem_ruby2_keywords}/LICENSE ruby2_keywords_LICENSE
mv %{rubygem_bundle_dir}/gems/sinatra-%{version_rubygem_sinatra}/LICENSE sinatra_LICENSE mv %{rubygem_bundle_dir}/gems/sinatra-%{version_rubygem_sinatra}/LICENSE sinatra_LICENSE
mv %{rubygem_bundle_dir}/gems/tilt-%{version_rubygem_tilt}/COPYING tilt_COPYING mv %{rubygem_bundle_dir}/gems/tilt-%{version_rubygem_tilt}/COPYING tilt_COPYING
mv %{rubygem_bundle_dir}/gems/webrick-%{version_rubygem_webrick}/LICENSE.txt webrick_LICENSE.txt
cp %{pcs_bundled_dir}/src/pyagentx-*/LICENSE.txt pyagentx_LICENSE.txt cp %{pcs_bundled_dir}/src/pyagentx-*/LICENSE.txt pyagentx_LICENSE.txt
cp %{pcs_bundled_dir}/src/pyagentx-*/CONTRIBUTORS.txt pyagentx_CONTRIBUTORS.txt cp %{pcs_bundled_dir}/src/pyagentx-*/CONTRIBUTORS.txt pyagentx_CONTRIBUTORS.txt
@ -401,19 +388,19 @@ cp %{pcs_bundled_dir}/src/dacite-*/README.md dacite_README.md
# We are not building debug package for pcs but we need to add MiniDebuginfo # We are not building debug package for pcs but we need to add MiniDebuginfo
# to the bundled shared libraries from rubygem extensions in order to satisfy # to the bundled shared libraries from rubygem extensions in order to satisfy
# rpmdiff's binary stripping checker. # rpmdiff's binary stripping checker.
# Therefore we call find-debuginfo.sh script manually in order to strip # Therefore we call find-debuginfo from debugedit manually in order to strip
# binaries and add MiniDebugInfo with .gnu_debugdata section # binaries and add MiniDebugInfo with .gnu_debugdata section
/usr/lib/rpm/find-debuginfo.sh -j2 -m -i -S debugsourcefiles.list find-debuginfo -j2 -m -i -S debugsourcefiles.list
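Since the whole point of running find-debuginfo by hand is to leave a .gnu_debugdata (MiniDebugInfo) section in the bundled extension libraries, a hedged spot-check on the resulting shared objects looks like this (path pattern assumed; expect a match for each .so):

# each bundled extension .so should now carry a .gnu_debugdata section
find ${RPM_BUILD_ROOT}%{_libdir}/%{rubygem_bundle_dir} -name '*.so' \
    -exec readelf -S {} + | grep -F .gnu_debugdata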
# find-debuginfo.sh generated some files into /usr/lib/debug and # find-debuginfo generated some files into /usr/lib/debug and
# /usr/src/debug/ that we don't want in the package # /usr/src/debug/ that we don't want in the package
rm -rf $RPM_BUILD_ROOT%{_libdir}/debug rm -rf $RPM_BUILD_ROOT%{_libdir}/debug
rm -rf $RPM_BUILD_ROOT/usr/lib/debug rm -rf $RPM_BUILD_ROOT/usr/lib/debug
rm -rf $RPM_BUILD_ROOT%{_prefix}/src/debug rm -rf $RPM_BUILD_ROOT%{_prefix}/src/debug
# We can remove files required for gem compilation # We can remove files required for gem compilation
rm -rf $RPM_BUILD_ROOT%{_libdir}/%{rubygem_bundle_dir}/gems/eventmachine-%{version_rubygem_eventmachine}/ext
rm -rf $RPM_BUILD_ROOT%{_libdir}/%{rubygem_bundle_dir}/gems/ffi-%{version_rubygem_ffi}/ext rm -rf $RPM_BUILD_ROOT%{_libdir}/%{rubygem_bundle_dir}/gems/ffi-%{version_rubygem_ffi}/ext
rm -rf $RPM_BUILD_ROOT%{_libdir}/%{rubygem_bundle_dir}/gems/thin-%{version_rubygem_thin}/ext rm -rf $RPM_BUILD_ROOT%{_libdir}/%{rubygem_bundle_dir}/gems/nio4r-%{version_rubygem_nio4r}/ext
rm -rf $RPM_BUILD_ROOT%{_libdir}/%{rubygem_bundle_dir}/gems/puma-%{version_rubygem_puma}/ext
%check %check
# In the building environment LC_CTYPE is set to C which causes tests to fail # In the building environment LC_CTYPE is set to C which causes tests to fail
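A sketch of the usual workaround for this, assuming the test runner honours the environment; the exact locale name used by the spec is not shown in this hunk:

# export a UTF-8-capable locale before invoking the test suite
export LC_CTYPE=C.UTF-8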
@ -495,21 +482,20 @@ run_all_tests
# rubygem licenses # rubygem licenses
%license backports_LICENSE.txt %license backports_LICENSE.txt
%license childprocess_LICENSE %license childprocess_LICENSE
%license daemons_LICENSE
%license ethon_LICENSE %license ethon_LICENSE
%license eventmachine_LICENSE
%license eventmachine_GNU
%license ffi_COPYING %license ffi_COPYING
%license ffi_LICENSE %license ffi_LICENSE
%license ffi_LICENSE.SPECS %license ffi_LICENSE.SPECS
%license mustermann_LICENSE %license mustermann_LICENSE
%license nio4r_license.md
%license nio4r_libev_LICENSE
%license puma_LICENSE
%license rack_MIT-LICENSE %license rack_MIT-LICENSE
%license rack-protection_License %license rack-protection_License
%license rack-test_MIT-LICENSE.txt %license rack-test_MIT-LICENSE.txt
%license ruby2_keywords_LICENSE %license ruby2_keywords_LICENSE
%license sinatra_LICENSE %license sinatra_LICENSE
%license tilt_COPYING %license tilt_COPYING
%license webrick_LICENSE.txt
%{python3_sitelib}/* %{python3_sitelib}/*
%{_sbindir}/pcs %{_sbindir}/pcs
%{_sbindir}/pcsd %{_sbindir}/pcsd
@ -550,6 +536,61 @@ run_all_tests
%license pyagentx_LICENSE.txt %license pyagentx_LICENSE.txt
%changelog %changelog
* Tue Mar 19 2024 Michal Pospisil <mpospisi@redhat.com> - 0.11.7-2
- Fixed CVE-2024-25126, CVE-2024-26141, CVE-2024-26146 in bundled dependency rack
Resolves: RHEL-26446, RHEL-26448, RHEL-26450
* Fri Jan 05 2024 Michal Pospisil <mpospisi@redhat.com> - 0.11.7-1
- Rebased to the latest sources (see CHANGELOG.md)
Resolves: RHEL-7740
* Mon Nov 13 2023 Michal Pospisil <mpospisi@redhat.com> - 0.11.6-6
- Rebased to the latest upstream sources (see CHANGELOG.md)
Resolves: RHEL-7582, RHEL-7583, RHEL-7669, RHEL-7672, RHEL-7697, RHEL-7698, RHEL-7700, RHEL-7703, RHEL-7719, RHEL-7725, RHEL-7730, RHEL-7738, RHEL-7739, RHEL-7740, RHEL-7744, RHEL-7746
- TLS cipher setting in pcsd now follows system-wide crypto policies by default
Resolves: RHEL-7724
- Tightened permissions of bundled rubygems to be 755 or stricter
Resolves: RHEL-7716
* Thu Nov 2 2023 Michal Pospisil <mpospisi@redhat.com> - 0.11.6-5
- No changes, fixing an error in a new quality control process
- Resolves: RHEL-15217
* Fri Oct 27 2023 Michal Pospisil <mpospisi@redhat.com> - 0.11.6-4
- No changes, testing a new quality control process
- Resolves: RHEL-15217
* Fri Jul 14 2023 Michal Pospisil <mpospisi@redhat.com> - 0.11.6-3
- Refreshing any page in pcs-web-ui no longer causes it to display a blank page
- Resolves: rhbz#2222788
* Mon Jul 10 2023 Michal Pospisil <mpospisi@redhat.com> - 0.11.6-2
- Added BuildRequires: debugedit - for generating MiniDebugInfo - triggered by removing find-debuginfo.sh from rpm
- Make use of filters when extracting tarballs to enhance security if provided by Python (pcs config restore command)
- Exporting constraints with rules in the form of pcs commands now escapes # and fixes spaces in dates to make the commands valid
- Constraints containing options unsupported by pcs are not exported and a warning is printed instead
- Using spaces in dates in location constraint rules is deprecated
- Resolves: rhbz#2163953 rhbz#2216434 rhbz#2217850 rhbz#2219407
* Tue Jun 20 2023 Michal Pospisil <mpospisi@redhat.com> - 0.11.6-1
- Rebased to the latest upstream sources (see CHANGELOG.md)
- Updated bundled rubygems: puma, tilt
- Resolves: rhbz#1465829 rhbz#2163440 rhbz#2168155
* Wed May 31 2023 Michal Pospisil <mpospisi@redhat.com> - 0.11.5-2
- Fixed a regression causing a crash in the `pcs resource move` command (broken since pcs-0.11.5)
- Resolves: rhbz#2210855
* Mon May 22 2023 Michal Pospisil <mpospisi@redhat.com> - 0.11.5-1
- Rebased to the latest upstream sources (see CHANGELOG.md)
- Updated pcs-web-ui
- Updated bundled dependencies: tornado, dacite
- Added bundled rubygems: nio4r, puma
- Removed bundled rubygems: daemons, eventmachine, thin, webrick
- Updated bundled rubygems: backports, rack, rack-protection, rack-test, sinatra, tilt
- Added dependency nss-tools - for working with qdevice certificates
- Resolves: rhbz#1423473 rhbz#1860626 rhbz#2160664 rhbz#2163440 rhbz#2163914 rhbz#2163953 rhbz#2168155 rhbz#2168617 rhbz#2174735 rhbz#2174829 rhbz#2175881 rhbz#2177996 rhbz#2178701 rhbz#2178714 rhbz#2179902 rhbz#2180379 rhbz#2182810
* Tue Mar 28 2023 Michal Pospisil <mpospisi@redhat.com> - 0.11.4-7 * Tue Mar 28 2023 Michal Pospisil <mpospisi@redhat.com> - 0.11.4-7
- Fix displaying differences between configuration checkpoints in “pcs config checkpoint diff” command - Fix displaying differences between configuration checkpoints in “pcs config checkpoint diff” command
- Fix “pcs stonith update-scsi-devices” command which was broken since Pacemaker-2.1.5-rc1 - Fix “pcs stonith update-scsi-devices” command which was broken since Pacemaker-2.1.5-rc1