import CS pcs-0.10.18-2.el8

eabdullin 2024-05-22 10:47:08 +00:00
parent 1c820eb891
commit 2db2bb996f
13 changed files with 181 additions and 4319 deletions

.gitignore

@@ -1,25 +1,22 @@
 SOURCES/HAM-logo.png
-SOURCES/backports-3.23.0.gem
+SOURCES/backports-3.24.1.gem
-SOURCES/dacite-1.6.0.tar.gz
+SOURCES/dacite-1.8.1.tar.gz
-SOURCES/daemons-1.4.1.gem
 SOURCES/dataclasses-0.8.tar.gz
 SOURCES/ethon-0.16.0.gem
-SOURCES/eventmachine-1.2.7.gem
-SOURCES/ffi-1.15.5.gem
+SOURCES/ffi-1.16.3.gem
 SOURCES/json-2.6.3.gem
 SOURCES/mustermann-2.0.2.gem
+SOURCES/nio4r-2.5.9.gem
 SOURCES/open4-1.3.4-1.gem
-SOURCES/pcs-0.10.15.tar.gz
+SOURCES/pcs-0.10.18.tar.gz
-SOURCES/pcs-web-ui-0.1.13.tar.gz
-SOURCES/pcs-web-ui-node-modules-0.1.13.tar.xz
+SOURCES/puma-6.4.0.gem
 SOURCES/pyagentx-0.4.pcs.2.tar.gz
 SOURCES/python-dateutil-2.8.2.tar.gz
-SOURCES/rack-2.2.6.4.gem
+SOURCES/rack-2.2.8.1.gem
 SOURCES/rack-protection-2.2.4.gem
-SOURCES/rack-test-2.0.2.gem
+SOURCES/rack-test-2.1.0.gem
-SOURCES/rexml-3.2.5.gem
+SOURCES/rexml-3.2.6.gem
 SOURCES/ruby2_keywords-0.0.5.gem
 SOURCES/sinatra-2.2.4.gem
-SOURCES/thin-1.8.1.gem
-SOURCES/tilt-2.0.11.gem
+SOURCES/tilt-2.3.0.gem
 SOURCES/tornado-6.1.0.tar.gz

.pcs.metadata

@@ -1,25 +1,22 @@
 679a4ce22a33ffd4d704261a17c00cff98d9499a SOURCES/HAM-logo.png
-0e11246385a9e0a4bc122b74fb74fe536a234f81 SOURCES/backports-3.23.0.gem
+0ef72a288913e220695ad62718aeb75171924028 SOURCES/backports-3.24.1.gem
-31546c37fbdc6270d5097687619e9c0db6f1c05c SOURCES/dacite-1.6.0.tar.gz
+07b26abbf7ff0dcba5c7f9e814ff7eebafefb058 SOURCES/dacite-1.8.1.tar.gz
-4795a8962cc1608bfec0d91fa4d438c7cfe90c62 SOURCES/daemons-1.4.1.gem
 8b7598273d2ae6dad2b88466aefac55071a41926 SOURCES/dataclasses-0.8.tar.gz
 5b56a68268708c474bef04550639ded3add5e946 SOURCES/ethon-0.16.0.gem
-7a5b2896e210fac9759c786ee4510f265f75b481 SOURCES/eventmachine-1.2.7.gem
-97632b7975067266c0b39596de0a4c86d9330658 SOURCES/ffi-1.15.5.gem
+10e4cf0e11ef4581ec4ad5fe2cdf3c78b6077d39 SOURCES/ffi-1.16.3.gem
 6d78f730b7f3b25fb3f93684fe1364acf58bce6b SOURCES/json-2.6.3.gem
 f5f804366823c1126791dfefd98dd0539563785c SOURCES/mustermann-2.0.2.gem
+2f65d371f5f37460ad74afcedcb97d2b41a46806 SOURCES/nio4r-2.5.9.gem
 41a7fe9f8e3e02da5ae76c821b89c5b376a97746 SOURCES/open4-1.3.4-1.gem
-00e234824e85afca99df9043dd6eb47490b220c4 SOURCES/pcs-0.10.15.tar.gz
+b3cd873042b17021355b68f1f7aa313f0c1f3fee SOURCES/pcs-0.10.18.tar.gz
-f7455776936492ce7b241f9801d6bbc946b0461a SOURCES/pcs-web-ui-0.1.13.tar.gz
-bd18d97d611233914828719c97b4d98d079913d2 SOURCES/pcs-web-ui-node-modules-0.1.13.tar.xz
+d6049c4555f3c9d198e6eb1d7e53ce9b68e175ff SOURCES/puma-6.4.0.gem
 3176b2f2b332c2b6bf79fe882e83feecf3d3f011 SOURCES/pyagentx-0.4.pcs.2.tar.gz
 c2ba10c775b7a52a4b57cac4d4110a0c0f812a82 SOURCES/python-dateutil-2.8.2.tar.gz
-bbaa023e07bdc4143c5dd18d752c2543f254666f SOURCES/rack-2.2.6.4.gem
+fcdee79d1b0bb7e3666bad96321fc124bc8215e9 SOURCES/rack-2.2.8.1.gem
 5347315a7283f0b04443e924ed4eaa17807432c8 SOURCES/rack-protection-2.2.4.gem
-3c669527ecbcb9f915a83983ec89320c356e1fe3 SOURCES/rack-test-2.0.2.gem
+ae09ea83748b55875edc3708fffba90db180cb8e SOURCES/rack-test-2.1.0.gem
-e7f48fa5fb2d92e6cb21d6b1638fe41a5a7c4287 SOURCES/rexml-3.2.5.gem
+c88fc3ffdbde9dd49b24b4d9876673533b4aba76 SOURCES/rexml-3.2.6.gem
 d017b9e4d1978e0b3ccc3e2a31493809e4693cd3 SOURCES/ruby2_keywords-0.0.5.gem
 fa6a6c98f885e93f54c23dd0454cae906e82c31b SOURCES/sinatra-2.2.4.gem
-1ac6292a98e17247b7bb847a35ff868605256f7b SOURCES/thin-1.8.1.gem
-360d77c80d2851a538fb13d43751093115c34712 SOURCES/tilt-2.0.11.gem
+4a38a9a55887b2882182a2c5771e592efe514e5e SOURCES/tilt-2.3.0.gem
 c23c617c7a0205e465bebad5b8cdf289ae8402a2 SOURCES/tornado-6.1.0.tar.gz


@@ -0,0 +1,55 @@
From 957856a556f5ed92129ce602538c3df3aebce7a3 Mon Sep 17 00:00:00 2001
From: Ivan Devat <idevat@redhat.com>
Date: Tue, 5 Dec 2023 15:18:35 +0100
Subject: [PATCH 2/2] disable alternative webui routes
This commit is intended to be downstream only.
The new web UI was part of RHEL 8 as a technical preview, but it is now
the main UI in RHEL 9, so there is no need to keep it in RHEL 8.
To prevent an unnecessary maintenance burden, it is disabled now.
No handler code is removed; only the routing is disabled.
---
pcs/daemon/run.py | 26 ++++++++++++++++----------
1 file changed, 16 insertions(+), 10 deletions(-)
diff --git a/pcs/daemon/run.py b/pcs/daemon/run.py
index 7fdeda2a..0a6b1b21 100644
--- a/pcs/daemon/run.py
+++ b/pcs/daemon/run.py
@@ -81,16 +81,22 @@ def configure_app(
routes.extend(
# old web ui by default
[(r"/", RedirectHandler, dict(url="/manage"))]
- + [(r"/ui", RedirectHandler, dict(url="/ui/"))]
- + ui.get_routes(
- url_prefix="/ui/",
- app_dir=os.path.join(public_dir, "ui"),
- fallback_page_path=os.path.join(
- public_dir,
- "ui_instructions.html",
- ),
- session_storage=session_storage,
- )
+ # The following disabled routes was for the new web ui. The new
+ # web ui was here as a technical preview. But new web ui is now
+ # the main in rhel9 and there is no need to keep it in rhel8.
+ # To prevent unnecessary maintenance burden it is disabled now.
+ # No handler code is removed, just routing disabled.
+ #
+ # + [(r"/ui", RedirectHandler, dict(url="/ui/"))]
+ # + ui.get_routes(
+ # url_prefix="/ui/",
+ # app_dir=os.path.join(public_dir, "ui"),
+ # fallback_page_path=os.path.join(
+ # public_dir,
+ # "ui_instructions.html",
+ # ),
+ # session_storage=session_storage,
+ # )
+ sinatra_ui.get_routes(
session_storage, ruby_pcsd_wrapper, public_dir
)
--
2.43.0
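For context, the route table change above follows standard Tornado routing: handler classes stay defined and importable, and an endpoint is disabled simply by no longer listing its route when the application is built. A minimal standalone sketch of that pattern (hypothetical handlers, not pcs code):

# Disabling an endpoint by dropping its route while keeping the handler class.
import tornado.ioloop
import tornado.web

class ManageHandler(tornado.web.RequestHandler):
    def get(self):
        self.write("old web ui placeholder")

def make_app():
    routes = [
        (r"/", tornado.web.RedirectHandler, dict(url="/manage")),
        (r"/manage", ManageHandler),
        # (r"/ui/", NewUiHandler),  # route disabled; handler code would be kept
    ]
    return tornado.web.Application(routes)

if __name__ == "__main__":
    make_app().listen(8888)
    tornado.ioloop.IOLoop.current().start()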


@@ -1,128 +0,0 @@
From 0da95a7f05ae7600eebe30df78a3d4622cd6b4f8 Mon Sep 17 00:00:00 2001
From: Ondrej Mular <omular@redhat.com>
Date: Wed, 7 Dec 2022 15:53:25 +0100
Subject: [PATCH 2/5] fix displaying bool and integer values in `pcs resource
config` command
---
pcs/cli/resource/output.py | 18 +++++++++---------
pcs_test/resources/cib-resources.xml | 2 +-
pcs_test/tier1/legacy/test_resource.py | 3 ++-
pcs_test/tools/resources_dto.py | 4 ++--
4 files changed, 14 insertions(+), 13 deletions(-)
diff --git a/pcs/cli/resource/output.py b/pcs/cli/resource/output.py
index 6d1fad16..0705d27b 100644
--- a/pcs/cli/resource/output.py
+++ b/pcs/cli/resource/output.py
@@ -69,9 +69,9 @@ def _resource_operation_to_pairs(
pairs.append(("interval-origin", operation_dto.interval_origin))
if operation_dto.timeout:
pairs.append(("timeout", operation_dto.timeout))
- if operation_dto.enabled:
+ if operation_dto.enabled is not None:
pairs.append(("enabled", _bool_to_cli_value(operation_dto.enabled)))
- if operation_dto.record_pending:
+ if operation_dto.record_pending is not None:
pairs.append(
("record-pending", _bool_to_cli_value(operation_dto.record_pending))
)
@@ -477,13 +477,13 @@ def _resource_bundle_container_options_to_pairs(
options: CibResourceBundleContainerRuntimeOptionsDto,
) -> List[Tuple[str, str]]:
option_list = [("image", options.image)]
- if options.replicas:
+ if options.replicas is not None:
option_list.append(("replicas", str(options.replicas)))
- if options.replicas_per_host:
+ if options.replicas_per_host is not None:
option_list.append(
("replicas-per-host", str(options.replicas_per_host))
)
- if options.promoted_max:
+ if options.promoted_max is not None:
option_list.append(("promoted-max", str(options.promoted_max)))
if options.run_command:
option_list.append(("run-command", options.run_command))
@@ -508,7 +508,7 @@ def _resource_bundle_network_options_to_pairs(
network_options.append(
("ip-range-start", bundle_network_dto.ip_range_start)
)
- if bundle_network_dto.control_port:
+ if bundle_network_dto.control_port is not None:
network_options.append(
("control-port", str(bundle_network_dto.control_port))
)
@@ -516,7 +516,7 @@ def _resource_bundle_network_options_to_pairs(
network_options.append(
("host-interface", bundle_network_dto.host_interface)
)
- if bundle_network_dto.host_netmask:
+ if bundle_network_dto.host_netmask is not None:
network_options.append(
("host-netmask", str(bundle_network_dto.host_netmask))
)
@@ -531,9 +531,9 @@ def _resource_bundle_port_mapping_to_pairs(
bundle_net_port_mapping_dto: CibResourceBundlePortMappingDto,
) -> List[Tuple[str, str]]:
mapping = []
- if bundle_net_port_mapping_dto.port:
+ if bundle_net_port_mapping_dto.port is not None:
mapping.append(("port", str(bundle_net_port_mapping_dto.port)))
- if bundle_net_port_mapping_dto.internal_port:
+ if bundle_net_port_mapping_dto.internal_port is not None:
mapping.append(
("internal-port", str(bundle_net_port_mapping_dto.internal_port))
)
diff --git a/pcs_test/resources/cib-resources.xml b/pcs_test/resources/cib-resources.xml
index 67cf5178..524b8fbb 100644
--- a/pcs_test/resources/cib-resources.xml
+++ b/pcs_test/resources/cib-resources.xml
@@ -53,7 +53,7 @@
</instance_attributes>
</op>
<op name="migrate_from" timeout="20s" interval="0s" id="R7-migrate_from-interval-0s"/>
- <op name="migrate_to" timeout="20s" interval="0s" id="R7-migrate_to-interval-0s"/>
+ <op name="migrate_to" timeout="20s" interval="0s" id="R7-migrate_to-interval-0s" enabled="false" record-pending="false"/>
<op name="monitor" timeout="20s" interval="10s" id="R7-monitor-interval-10s"/>
<op name="reload" timeout="20s" interval="0s" id="R7-reload-interval-0s"/>
<op name="reload-agent" timeout="20s" interval="0s" id="R7-reload-agent-interval-0s"/>
diff --git a/pcs_test/tier1/legacy/test_resource.py b/pcs_test/tier1/legacy/test_resource.py
index 2ea5c423..65ad1090 100644
--- a/pcs_test/tier1/legacy/test_resource.py
+++ b/pcs_test/tier1/legacy/test_resource.py
@@ -753,7 +753,7 @@ Error: moni=tor does not appear to be a valid operation action
o, r = pcs(
self.temp_cib.name,
- "resource create --no-default-ops OPTest ocf:heartbeat:Dummy op monitor interval=30s OCF_CHECK_LEVEL=1 op monitor interval=25s OCF_CHECK_LEVEL=1".split(),
+ "resource create --no-default-ops OPTest ocf:heartbeat:Dummy op monitor interval=30s OCF_CHECK_LEVEL=1 op monitor interval=25s OCF_CHECK_LEVEL=1 enabled=0".split(),
)
ac(o, "")
assert r == 0
@@ -770,6 +770,7 @@ Error: moni=tor does not appear to be a valid operation action
OCF_CHECK_LEVEL=1
monitor: OPTest-monitor-interval-25s
interval=25s
+ enabled=0
OCF_CHECK_LEVEL=1
"""
),
diff --git a/pcs_test/tools/resources_dto.py b/pcs_test/tools/resources_dto.py
index 8f46f6dd..a980ec80 100644
--- a/pcs_test/tools/resources_dto.py
+++ b/pcs_test/tools/resources_dto.py
@@ -233,8 +233,8 @@ PRIMITIVE_R7 = CibResourcePrimitiveDto(
start_delay=None,
interval_origin=None,
timeout="20s",
- enabled=None,
- record_pending=None,
+ enabled=False,
+ record_pending=False,
role=None,
on_fail=None,
meta_attributes=[],
--
2.39.0
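The removed patch above centers on a common Python pitfall: a plain truthiness test treats False and 0 the same as None, so explicitly configured values such as enabled=false or replicas=0 were silently dropped from `pcs resource config` output. A small self-contained illustration of the bug class and the fix (simplified names, not pcs code):

from typing import Optional

def pairs_buggy(enabled: Optional[bool], replicas: Optional[int]):
    pairs = []
    if enabled:              # False is skipped even though it was set
        pairs.append(("enabled", str(enabled).lower()))
    if replicas:             # 0 is skipped even though it was set
        pairs.append(("replicas", str(replicas)))
    return pairs

def pairs_fixed(enabled: Optional[bool], replicas: Optional[int]):
    pairs = []
    if enabled is not None:      # only unset (None) values are omitted
        pairs.append(("enabled", str(enabled).lower()))
    if replicas is not None:
        pairs.append(("replicas", str(replicas)))
    return pairs

print(pairs_buggy(False, 0))   # []
print(pairs_fixed(False, 0))   # [('enabled', 'false'), ('replicas', '0')]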


@@ -1,732 +0,0 @@
From 58589e47f2913276ea1c2164a3ce8ee694fb2b78 Mon Sep 17 00:00:00 2001
From: Ondrej Mular <omular@redhat.com>
Date: Wed, 7 Dec 2022 11:33:25 +0100
Subject: [PATCH 1/5] add warning when updating a misconfigured resource
---
pcs/common/reports/codes.py | 3 +
pcs/common/reports/messages.py | 19 +++++
pcs/lib/cib/resource/primitive.py | 84 ++++++++++++++-----
pcs/lib/pacemaker/live.py | 38 ++-------
.../tier0/common/reports/test_messages.py | 16 ++++
.../cib/resource/test_primitive_validate.py | 56 +++++++------
pcs_test/tier0/lib/pacemaker/test_live.py | 78 +++++------------
pcs_test/tier1/legacy/test_stonith.py | 5 +-
8 files changed, 161 insertions(+), 138 deletions(-)
diff --git a/pcs/common/reports/codes.py b/pcs/common/reports/codes.py
index deecc626..48048af7 100644
--- a/pcs/common/reports/codes.py
+++ b/pcs/common/reports/codes.py
@@ -40,6 +40,9 @@ AGENT_NAME_GUESS_FOUND_MORE_THAN_ONE = M("AGENT_NAME_GUESS_FOUND_MORE_THAN_ONE")
AGENT_NAME_GUESS_FOUND_NONE = M("AGENT_NAME_GUESS_FOUND_NONE")
AGENT_NAME_GUESSED = M("AGENT_NAME_GUESSED")
AGENT_SELF_VALIDATION_INVALID_DATA = M("AGENT_SELF_VALIDATION_INVALID_DATA")
+AGENT_SELF_VALIDATION_SKIPPED_UPDATED_RESOURCE_MISCONFIGURED = M(
+ "AGENT_SELF_VALIDATION_SKIPPED_UPDATED_RESOURCE_MISCONFIGURED"
+)
AGENT_SELF_VALIDATION_RESULT = M("AGENT_SELF_VALIDATION_RESULT")
BAD_CLUSTER_STATE_FORMAT = M("BAD_CLUSTER_STATE_FORMAT")
BOOTH_ADDRESS_DUPLICATION = M("BOOTH_ADDRESS_DUPLICATION")
diff --git a/pcs/common/reports/messages.py b/pcs/common/reports/messages.py
index d27c1dee..24bb222f 100644
--- a/pcs/common/reports/messages.py
+++ b/pcs/common/reports/messages.py
@@ -7584,6 +7584,25 @@ class AgentSelfValidationInvalidData(ReportItemMessage):
return f"Invalid validation data from agent: {self.reason}"
+@dataclass(frozen=True)
+class AgentSelfValidationSkippedUpdatedResourceMisconfigured(ReportItemMessage):
+ """
+ Agent self validation is skipped when updating a resource as it is
+ misconfigured in its current state.
+ """
+
+ result: str
+ _code = codes.AGENT_SELF_VALIDATION_SKIPPED_UPDATED_RESOURCE_MISCONFIGURED
+
+ @property
+ def message(self) -> str:
+ return (
+ "The resource was misconfigured before the update, therefore agent "
+ "self-validation will not be run for the updated configuration. "
+ "Validation output of the original configuration:\n{result}"
+ ).format(result="\n".join(indent(self.result.splitlines())))
+
+
@dataclass(frozen=True)
class BoothAuthfileNotUsed(ReportItemMessage):
"""
diff --git a/pcs/lib/cib/resource/primitive.py b/pcs/lib/cib/resource/primitive.py
index 3ebd01c6..c5df8e58 100644
--- a/pcs/lib/cib/resource/primitive.py
+++ b/pcs/lib/cib/resource/primitive.py
@@ -357,6 +357,31 @@ def _is_ocf_or_stonith_agent(resource_agent_name: ResourceAgentName) -> bool:
return resource_agent_name.standard in ("stonith", "ocf")
+def _get_report_from_agent_self_validation(
+ is_valid: Optional[bool],
+ reason: str,
+ report_severity: reports.ReportItemSeverity,
+) -> reports.ReportItemList:
+ report_items = []
+ if is_valid is None:
+ report_items.append(
+ reports.ReportItem(
+ report_severity,
+ reports.messages.AgentSelfValidationInvalidData(reason),
+ )
+ )
+ elif not is_valid or reason:
+ if is_valid:
+ report_severity = reports.ReportItemSeverity.warning()
+ report_items.append(
+ reports.ReportItem(
+ report_severity,
+ reports.messages.AgentSelfValidationResult(reason),
+ )
+ )
+ return report_items
+
+
def validate_resource_instance_attributes_create(
cmd_runner: CommandRunner,
resource_agent: ResourceAgentFacade,
@@ -402,16 +427,16 @@ def validate_resource_instance_attributes_create(
for report_item in report_items
)
):
- (
- dummy_is_valid,
- agent_validation_reports,
- ) = validate_resource_instance_attributes_via_pcmk(
- cmd_runner,
- agent_name,
- instance_attributes,
- reports.get_severity(reports.codes.FORCE, force),
+ report_items.extend(
+ _get_report_from_agent_self_validation(
+ *validate_resource_instance_attributes_via_pcmk(
+ cmd_runner,
+ agent_name,
+ instance_attributes,
+ ),
+ reports.get_severity(reports.codes.FORCE, force),
+ )
)
- report_items.extend(agent_validation_reports)
return report_items
@@ -505,25 +530,40 @@ def validate_resource_instance_attributes_update(
)
):
(
- is_valid,
- dummy_reports,
+ original_is_valid,
+ original_reason,
) = validate_resource_instance_attributes_via_pcmk(
cmd_runner,
agent_name,
current_instance_attrs,
- reports.ReportItemSeverity.error(),
)
- if is_valid:
- (
- dummy_is_valid,
- agent_validation_reports,
- ) = validate_resource_instance_attributes_via_pcmk(
- cmd_runner,
- resource_agent.metadata.name,
- final_attrs,
- reports.get_severity(reports.codes.FORCE, force),
+ if original_is_valid:
+ report_items.extend(
+ _get_report_from_agent_self_validation(
+ *validate_resource_instance_attributes_via_pcmk(
+ cmd_runner,
+ resource_agent.metadata.name,
+ final_attrs,
+ ),
+ reports.get_severity(reports.codes.FORCE, force),
+ )
+ )
+ elif original_is_valid is None:
+ report_items.append(
+ reports.ReportItem.warning(
+ reports.messages.AgentSelfValidationInvalidData(
+ original_reason
+ )
+ )
+ )
+ else:
+ report_items.append(
+ reports.ReportItem.warning(
+ reports.messages.AgentSelfValidationSkippedUpdatedResourceMisconfigured(
+ original_reason
+ )
+ )
)
- report_items.extend(agent_validation_reports)
return report_items
diff --git a/pcs/lib/pacemaker/live.py b/pcs/lib/pacemaker/live.py
index fd26dabb..726f6b67 100644
--- a/pcs/lib/pacemaker/live.py
+++ b/pcs/lib/pacemaker/live.py
@@ -902,8 +902,7 @@ def _validate_stonith_instance_attributes_via_pcmk(
cmd_runner: CommandRunner,
agent_name: ResourceAgentName,
instance_attributes: Mapping[str, str],
- not_valid_severity: reports.ReportItemSeverity,
-) -> Tuple[Optional[bool], reports.ReportItemList]:
+) -> Tuple[Optional[bool], str]:
cmd = [
settings.stonith_admin,
"--validate",
@@ -917,7 +916,6 @@ def _validate_stonith_instance_attributes_via_pcmk(
cmd,
"./validate/command/output",
instance_attributes,
- not_valid_severity,
)
@@ -925,8 +923,7 @@ def _validate_resource_instance_attributes_via_pcmk(
cmd_runner: CommandRunner,
agent_name: ResourceAgentName,
instance_attributes: Mapping[str, str],
- not_valid_severity: reports.ReportItemSeverity,
-) -> Tuple[Optional[bool], reports.ReportItemList]:
+) -> Tuple[Optional[bool], str]:
cmd = [
settings.crm_resource_binary,
"--validate",
@@ -944,7 +941,6 @@ def _validate_resource_instance_attributes_via_pcmk(
cmd,
"./resource-agent-action/command/output",
instance_attributes,
- not_valid_severity,
)
@@ -953,8 +949,7 @@ def _handle_instance_attributes_validation_via_pcmk(
cmd: StringSequence,
data_xpath: str,
instance_attributes: Mapping[str, str],
- not_valid_severity: reports.ReportItemSeverity,
-) -> Tuple[Optional[bool], reports.ReportItemList]:
+) -> Tuple[Optional[bool], str]:
full_cmd = list(cmd)
for key, value in sorted(instance_attributes.items()):
full_cmd.extend(["--option", f"{key}={value}"])
@@ -963,12 +958,7 @@ def _handle_instance_attributes_validation_via_pcmk(
# dom = _get_api_result_dom(stdout)
dom = xml_fromstring(stdout)
except (etree.XMLSyntaxError, etree.DocumentInvalid) as e:
- return None, [
- reports.ReportItem(
- not_valid_severity,
- reports.messages.AgentSelfValidationInvalidData(str(e)),
- )
- ]
+ return None, str(e)
result = "\n".join(
"\n".join(
line.strip() for line in item.text.split("\n") if line.strip()
@@ -976,38 +966,22 @@ def _handle_instance_attributes_validation_via_pcmk(
for item in dom.iterfind(data_xpath)
if item.get("source") == "stderr" and item.text
).strip()
- if return_value == 0:
- if result:
- return True, [
- reports.ReportItem.warning(
- reports.messages.AgentSelfValidationResult(result)
- )
- ]
- return True, []
- return False, [
- reports.ReportItem(
- not_valid_severity,
- reports.messages.AgentSelfValidationResult(result),
- )
- ]
+ return return_value == 0, result
def validate_resource_instance_attributes_via_pcmk(
cmd_runner: CommandRunner,
resource_agent_name: ResourceAgentName,
instance_attributes: Mapping[str, str],
- not_valid_severity: reports.ReportItemSeverity,
-) -> Tuple[Optional[bool], reports.ReportItemList]:
+) -> Tuple[Optional[bool], str]:
if resource_agent_name.is_stonith:
return _validate_stonith_instance_attributes_via_pcmk(
cmd_runner,
resource_agent_name,
instance_attributes,
- not_valid_severity,
)
return _validate_resource_instance_attributes_via_pcmk(
cmd_runner,
resource_agent_name,
instance_attributes,
- not_valid_severity,
)
diff --git a/pcs_test/tier0/common/reports/test_messages.py b/pcs_test/tier0/common/reports/test_messages.py
index 17627b80..5fcc62fc 100644
--- a/pcs_test/tier0/common/reports/test_messages.py
+++ b/pcs_test/tier0/common/reports/test_messages.py
@@ -5562,6 +5562,22 @@ class AgentSelfValidationInvalidData(NameBuildTest):
)
+class AgentSelfValidationSkippedUpdatedResourceMisconfigured(NameBuildTest):
+ def test_message(self):
+ lines = list(f"line #{i}" for i in range(3))
+ self.assert_message_from_report(
+ (
+ "The resource was misconfigured before the update, therefore "
+ "agent self-validation will not be run for the updated "
+ "configuration. Validation output of the original "
+ "configuration:\n {}"
+ ).format("\n ".join(lines)),
+ reports.AgentSelfValidationSkippedUpdatedResourceMisconfigured(
+ "\n".join(lines)
+ ),
+ )
+
+
class BoothAuthfileNotUsed(NameBuildTest):
def test_message(self):
self.assert_message_from_report(
diff --git a/pcs_test/tier0/lib/cib/resource/test_primitive_validate.py b/pcs_test/tier0/lib/cib/resource/test_primitive_validate.py
index 2cba7086..1bc3a5a6 100644
--- a/pcs_test/tier0/lib/cib/resource/test_primitive_validate.py
+++ b/pcs_test/tier0/lib/cib/resource/test_primitive_validate.py
@@ -609,7 +609,6 @@ class ValidateResourceInstanceAttributesCreateSelfValidation(TestCase):
self.cmd_runner,
facade.metadata.name,
attributes,
- reports.ReportItemSeverity.error(reports.codes.FORCE),
)
def test_force(self):
@@ -629,15 +628,14 @@ class ValidateResourceInstanceAttributesCreateSelfValidation(TestCase):
self.cmd_runner,
facade.metadata.name,
attributes,
- reports.ReportItemSeverity.warning(),
)
def test_failure(self):
attributes = {"required": "value"}
facade = _fixture_ocf_agent()
- failure_reports = ["report1", "report2"]
- self.agent_self_validation_mock.return_value = False, failure_reports
- self.assertEqual(
+ failure_reason = "failure reason"
+ self.agent_self_validation_mock.return_value = False, failure_reason
+ assert_report_item_list_equal(
primitive.validate_resource_instance_attributes_create(
self.cmd_runner,
facade,
@@ -645,13 +643,18 @@ class ValidateResourceInstanceAttributesCreateSelfValidation(TestCase):
etree.Element("resources"),
force=False,
),
- failure_reports,
+ [
+ fixture.error(
+ reports.codes.AGENT_SELF_VALIDATION_RESULT,
+ result=failure_reason,
+ force_code=reports.codes.FORCE,
+ )
+ ],
)
self.agent_self_validation_mock.assert_called_once_with(
self.cmd_runner,
facade.metadata.name,
attributes,
- reports.ReportItemSeverity.error(reports.codes.FORCE),
)
def test_stonith_check(self):
@@ -671,7 +674,6 @@ class ValidateResourceInstanceAttributesCreateSelfValidation(TestCase):
self.cmd_runner,
facade.metadata.name,
attributes,
- reports.ReportItemSeverity.error(reports.codes.FORCE),
)
def test_nonexisting_agent(self):
@@ -1295,13 +1297,11 @@ class ValidateResourceInstanceAttributesUpdateSelfValidation(TestCase):
self.cmd_runner,
facade.metadata.name,
old_attributes,
- reports.ReportItemSeverity.error(),
),
mock.call(
self.cmd_runner,
facade.metadata.name,
new_attributes,
- reports.ReportItemSeverity.error(reports.codes.FORCE),
),
],
)
@@ -1328,13 +1328,11 @@ class ValidateResourceInstanceAttributesUpdateSelfValidation(TestCase):
self.cmd_runner,
facade.metadata.name,
old_attributes,
- reports.ReportItemSeverity.error(),
),
mock.call(
self.cmd_runner,
facade.metadata.name,
new_attributes,
- reports.ReportItemSeverity.warning(),
),
],
)
@@ -1342,13 +1340,13 @@ class ValidateResourceInstanceAttributesUpdateSelfValidation(TestCase):
def test_failure(self):
old_attributes = {"required": "old_value"}
new_attributes = {"required": "new_value"}
- failure_reports = ["report1", "report2"]
+ failure_reason = "failure reason"
facade = _fixture_ocf_agent()
self.agent_self_validation_mock.side_effect = (
- (True, []),
- (False, failure_reports),
+ (True, ""),
+ (False, failure_reason),
)
- self.assertEqual(
+ assert_report_item_list_equal(
primitive.validate_resource_instance_attributes_update(
self.cmd_runner,
facade,
@@ -1357,7 +1355,13 @@ class ValidateResourceInstanceAttributesUpdateSelfValidation(TestCase):
self._fixture_resources(old_attributes),
force=False,
),
- failure_reports,
+ [
+ fixture.error(
+ reports.codes.AGENT_SELF_VALIDATION_RESULT,
+ result=failure_reason,
+ force_code=reports.codes.FORCE,
+ )
+ ],
)
self.assertEqual(
self.agent_self_validation_mock.mock_calls,
@@ -1366,13 +1370,11 @@ class ValidateResourceInstanceAttributesUpdateSelfValidation(TestCase):
self.cmd_runner,
facade.metadata.name,
old_attributes,
- reports.ReportItemSeverity.error(),
),
mock.call(
self.cmd_runner,
facade.metadata.name,
new_attributes,
- reports.ReportItemSeverity.error(reports.codes.FORCE),
),
],
)
@@ -1399,13 +1401,11 @@ class ValidateResourceInstanceAttributesUpdateSelfValidation(TestCase):
self.cmd_runner,
facade.metadata.name,
old_attributes,
- reports.ReportItemSeverity.error(),
),
mock.call(
self.cmd_runner,
facade.metadata.name,
new_attributes,
- reports.ReportItemSeverity.error(reports.codes.FORCE),
),
],
)
@@ -1471,10 +1471,10 @@ class ValidateResourceInstanceAttributesUpdateSelfValidation(TestCase):
def test_current_attributes_failure(self):
old_attributes = {"required": "old_value"}
new_attributes = {"required": "new_value"}
- failure_reports = ["report1", "report2"]
+ failure_reason = "failure reason"
facade = _fixture_ocf_agent()
- self.agent_self_validation_mock.return_value = False, failure_reports
- self.assertEqual(
+ self.agent_self_validation_mock.return_value = False, failure_reason
+ assert_report_item_list_equal(
primitive.validate_resource_instance_attributes_update(
self.cmd_runner,
facade,
@@ -1483,7 +1483,12 @@ class ValidateResourceInstanceAttributesUpdateSelfValidation(TestCase):
self._fixture_resources(old_attributes),
force=False,
),
- [],
+ [
+ fixture.warn(
+ reports.codes.AGENT_SELF_VALIDATION_SKIPPED_UPDATED_RESOURCE_MISCONFIGURED,
+ result=failure_reason,
+ )
+ ],
)
self.assertEqual(
self.agent_self_validation_mock.mock_calls,
@@ -1492,7 +1497,6 @@ class ValidateResourceInstanceAttributesUpdateSelfValidation(TestCase):
self.cmd_runner,
facade.metadata.name,
old_attributes,
- reports.ReportItemSeverity.error(),
),
],
)
diff --git a/pcs_test/tier0/lib/pacemaker/test_live.py b/pcs_test/tier0/lib/pacemaker/test_live.py
index 5c8000cd..239a72b1 100644
--- a/pcs_test/tier0/lib/pacemaker/test_live.py
+++ b/pcs_test/tier0/lib/pacemaker/test_live.py
@@ -1752,16 +1752,15 @@ class HandleInstanceAttributesValidateViaPcmkTest(TestCase):
base_cmd = ["some", "command"]
(
is_valid,
- report_list,
+ reason,
) = lib._handle_instance_attributes_validation_via_pcmk(
runner,
base_cmd,
"result/output",
{"attr1": "val1", "attr2": "val2"},
- not_valid_severity=Severity.info(),
)
self.assertTrue(is_valid)
- self.assertEqual(report_list, [])
+ self.assertEqual(reason, "")
runner.run.assert_called_once_with(
base_cmd + ["--option", "attr1=val1", "--option", "attr2=val2"]
)
@@ -1771,23 +1770,17 @@ class HandleInstanceAttributesValidateViaPcmkTest(TestCase):
base_cmd = ["some", "command"]
(
is_valid,
- report_list,
+ reason,
) = lib._handle_instance_attributes_validation_via_pcmk(
runner,
base_cmd,
"result/output",
{"attr1": "val1", "attr2": "val2"},
- not_valid_severity=Severity.info(),
)
self.assertIsNone(is_valid)
- assert_report_item_list_equal(
- report_list,
- [
- fixture.info(
- report_codes.AGENT_SELF_VALIDATION_INVALID_DATA,
- reason="Start tag expected, '<' not found, line 1, column 1 (<string>, line 1)",
- )
- ],
+ self.assertEqual(
+ reason,
+ "Start tag expected, '<' not found, line 1, column 1 (<string>, line 1)",
)
runner.run.assert_called_once_with(
base_cmd + ["--option", "attr1=val1", "--option", "attr2=val2"]
@@ -1806,19 +1799,15 @@ class HandleInstanceAttributesValidateViaPcmkTest(TestCase):
base_cmd = ["some", "command"]
(
is_valid,
- report_list,
+ reason,
) = lib._handle_instance_attributes_validation_via_pcmk(
runner,
base_cmd,
"result/output",
{"attr1": "val1", "attr2": "val2"},
- not_valid_severity=Severity.info(),
)
self.assertTrue(is_valid)
- assert_report_item_list_equal(
- report_list,
- [],
- )
+ self.assertEqual(reason, "")
runner.run.assert_called_once_with(
base_cmd + ["--option", "attr1=val1", "--option", "attr2=val2"]
)
@@ -1837,23 +1826,15 @@ class HandleInstanceAttributesValidateViaPcmkTest(TestCase):
base_cmd = ["some", "command"]
(
is_valid,
- report_list,
+ reason,
) = lib._handle_instance_attributes_validation_via_pcmk(
runner,
base_cmd,
"result/output",
{"attr1": "val1", "attr2": "val2"},
- not_valid_severity=Severity.info(),
)
self.assertFalse(is_valid)
- assert_report_item_list_equal(
- report_list,
- [
- fixture.info(
- report_codes.AGENT_SELF_VALIDATION_RESULT, result=""
- )
- ],
- )
+ self.assertEqual(reason, "")
runner.run.assert_called_once_with(
base_cmd + ["--option", "attr1=val1", "--option", "attr2=val2"]
)
@@ -1881,23 +1862,17 @@ class HandleInstanceAttributesValidateViaPcmkTest(TestCase):
base_cmd = ["some", "command"]
(
is_valid,
- report_list,
+ reason,
) = lib._handle_instance_attributes_validation_via_pcmk(
runner,
base_cmd,
"result/output",
{"attr1": "val1", "attr2": "val2"},
- not_valid_severity=Severity.info(),
)
self.assertFalse(is_valid)
- assert_report_item_list_equal(
- report_list,
- [
- fixture.info(
- report_codes.AGENT_SELF_VALIDATION_RESULT,
- result="first line\nImportant output\nand another line",
- )
- ],
+ self.assertEqual(
+ reason,
+ "first line\nImportant output\nand another line",
)
runner.run.assert_called_once_with(
base_cmd + ["--option", "attr1=val1", "--option", "attr2=val2"]
@@ -1925,23 +1900,17 @@ class HandleInstanceAttributesValidateViaPcmkTest(TestCase):
base_cmd = ["some", "command"]
(
is_valid,
- report_list,
+ reason,
) = lib._handle_instance_attributes_validation_via_pcmk(
runner,
base_cmd,
"result/output",
{"attr1": "val1", "attr2": "val2"},
- not_valid_severity=Severity.info(),
)
self.assertTrue(is_valid)
- assert_report_item_list_equal(
- report_list,
- [
- fixture.warn(
- report_codes.AGENT_SELF_VALIDATION_RESULT,
- result="first line\nImportant output\nand another line",
- )
- ],
+ self.assertEqual(
+ reason,
+ "first line\nImportant output\nand another line",
)
runner.run.assert_called_once_with(
base_cmd + ["--option", "attr1=val1", "--option", "attr2=val2"]
@@ -1953,7 +1922,6 @@ class ValidateResourceInstanceAttributesViaPcmkTest(TestCase):
def setUp(self):
self.runner = mock.Mock()
self.attrs = dict(attra="val1", attrb="val2")
- self.severity = Severity.info()
patcher = mock.patch(
"pcs.lib.pacemaker.live._handle_instance_attributes_validation_via_pcmk"
)
@@ -1967,7 +1935,7 @@ class ValidateResourceInstanceAttributesViaPcmkTest(TestCase):
)
self.assertEqual(
lib._validate_resource_instance_attributes_via_pcmk(
- self.runner, agent, self.attrs, self.severity
+ self.runner, agent, self.attrs
),
self.ret_val,
)
@@ -1987,7 +1955,6 @@ class ValidateResourceInstanceAttributesViaPcmkTest(TestCase):
],
"./resource-agent-action/command/output",
self.attrs,
- self.severity,
)
def test_without_provider(self):
@@ -1996,7 +1963,7 @@ class ValidateResourceInstanceAttributesViaPcmkTest(TestCase):
)
self.assertEqual(
lib._validate_resource_instance_attributes_via_pcmk(
- self.runner, agent, self.attrs, self.severity
+ self.runner, agent, self.attrs
),
self.ret_val,
)
@@ -2014,7 +1981,6 @@ class ValidateResourceInstanceAttributesViaPcmkTest(TestCase):
],
"./resource-agent-action/command/output",
self.attrs,
- self.severity,
)
@@ -2024,7 +1990,6 @@ class ValidateStonithInstanceAttributesViaPcmkTest(TestCase):
def setUp(self):
self.runner = mock.Mock()
self.attrs = dict(attra="val1", attrb="val2")
- self.severity = Severity.info()
patcher = mock.patch(
"pcs.lib.pacemaker.live._handle_instance_attributes_validation_via_pcmk"
)
@@ -2038,7 +2003,7 @@ class ValidateStonithInstanceAttributesViaPcmkTest(TestCase):
)
self.assertEqual(
lib._validate_stonith_instance_attributes_via_pcmk(
- self.runner, agent, self.attrs, self.severity
+ self.runner, agent, self.attrs
),
self.ret_val,
)
@@ -2054,5 +2019,4 @@ class ValidateStonithInstanceAttributesViaPcmkTest(TestCase):
],
"./validate/command/output",
self.attrs,
- self.severity,
)
diff --git a/pcs_test/tier1/legacy/test_stonith.py b/pcs_test/tier1/legacy/test_stonith.py
index 9911d604..cf430d75 100644
--- a/pcs_test/tier1/legacy/test_stonith.py
+++ b/pcs_test/tier1/legacy/test_stonith.py
@@ -1294,7 +1294,10 @@ class StonithTest(TestCase, AssertPcsMixin):
),
)
- self.assert_pcs_success("stonith update test3 username=testA".split())
+ self.assert_pcs_success(
+ "stonith update test3 username=testA".split(),
+ stdout_start="Warning: ",
+ )
self.assert_pcs_success(
"stonith config test2".split(),
--
2.39.0
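The patch above reshapes the pacemaker self-validation helpers to return a plain (is_valid, reason) tuple and moves report building into the caller, which is what lets the update path downgrade a pre-existing misconfiguration to a warning instead of blocking the update. A rough standalone sketch of that decision flow (simplified, not the pcs API):

from typing import List, Optional, Tuple

def reports_from_validation(is_valid: Optional[bool], reason: str,
                            severity: str) -> List[Tuple[str, str]]:
    if is_valid is None:
        return [(severity, f"invalid validation data from agent: {reason}")]
    if not is_valid or reason:
        # valid but with output -> warning; invalid -> requested severity
        return [("warning" if is_valid else severity, reason)]
    return []

def reports_for_update(original: Tuple[Optional[bool], str],
                       updated: Tuple[Optional[bool], str]) -> List[Tuple[str, str]]:
    original_is_valid, original_reason = original
    if original_is_valid:
        return reports_from_validation(*updated, severity="error")
    # resource was already misconfigured: warn and skip validating the update
    # (the real code reports None, i.e. unparsable agent data, separately)
    return [("warning",
             "resource was misconfigured before the update:\n" + original_reason)]

print(reports_for_update((False, "missing required option"), (True, "")))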


@@ -1,485 +0,0 @@
From 5bed788246ac19c866a60ab3773d94fa4ca28c37 Mon Sep 17 00:00:00 2001
From: Miroslav Lisik <mlisik@redhat.com>
Date: Thu, 5 Jan 2023 16:21:44 +0100
Subject: [PATCH 5/5] Fix stonith-watchdog-timeout validation
---
pcs/lib/cluster_property.py | 25 ++++-
pcs/lib/sbd.py | 15 ++-
.../lib/commands/test_cluster_property.py | 50 ++++++++--
pcs_test/tier0/lib/test_cluster_property.py | 98 ++++++++++++++-----
pcs_test/tier1/test_cluster_property.py | 14 ++-
5 files changed, 157 insertions(+), 45 deletions(-)
diff --git a/pcs/lib/cluster_property.py b/pcs/lib/cluster_property.py
index 9ccacd74..b622bdaf 100644
--- a/pcs/lib/cluster_property.py
+++ b/pcs/lib/cluster_property.py
@@ -8,6 +8,7 @@ from lxml.etree import _Element
from pcs.common import reports
from pcs.common.services.interfaces import ServiceManagerInterface
+from pcs.common.tools import timeout_to_seconds
from pcs.common.types import StringSequence
from pcs.lib import (
sbd,
@@ -38,8 +39,21 @@ def _validate_stonith_watchdog_timeout_property(
force: bool = False,
) -> reports.ReportItemList:
report_list: reports.ReportItemList = []
+ original_value = value
+ # if value is not empty, try to convert time interval string
+ if value:
+ seconds = timeout_to_seconds(value)
+ if seconds is None:
+ # returns empty list because this should be reported by
+ # ValueTimeInterval validator
+ return report_list
+ value = str(seconds)
if sbd.is_sbd_enabled(service_manager):
- report_list.extend(sbd.validate_stonith_watchdog_timeout(value, force))
+ report_list.extend(
+ sbd.validate_stonith_watchdog_timeout(
+ validate.ValuePair(original_value, value), force
+ )
+ )
else:
if value not in ["", "0"]:
report_list.append(
@@ -124,9 +138,6 @@ def validate_set_cluster_properties(
# unknow properties are reported by NamesIn validator
continue
property_metadata = possible_properties_dict[property_name]
- if property_metadata.name == "stonith-watchdog-timeout":
- # needs extra validation
- continue
if property_metadata.type == "boolean":
validators.append(
validate.ValuePcmkBoolean(
@@ -154,9 +165,13 @@ def validate_set_cluster_properties(
)
)
elif property_metadata.type == "time":
+ # make stonith-watchdog-timeout value not forcable
validators.append(
validate.ValueTimeInterval(
- property_metadata.name, severity=severity
+ property_metadata.name,
+ severity=severity
+ if property_metadata.name != "stonith-watchdog-timeout"
+ else reports.ReportItemSeverity.error(),
)
)
report_list.extend(
diff --git a/pcs/lib/sbd.py b/pcs/lib/sbd.py
index 1e3cfb37..38cd8767 100644
--- a/pcs/lib/sbd.py
+++ b/pcs/lib/sbd.py
@@ -1,6 +1,9 @@
import re
from os import path
-from typing import Optional
+from typing import (
+ Optional,
+ Union,
+)
from pcs import settings
from pcs.common import reports
@@ -392,7 +395,10 @@ def _get_local_sbd_watchdog_timeout() -> int:
def validate_stonith_watchdog_timeout(
- stonith_watchdog_timeout: str, force: bool = False
+ stonith_watchdog_timeout: Union[
+ validate.TypeOptionValue, validate.ValuePair
+ ],
+ force: bool = False,
) -> reports.ReportItemList:
"""
Check sbd status and config when user is setting stonith-watchdog-timeout
@@ -401,6 +407,7 @@ def validate_stonith_watchdog_timeout(
stonith_watchdog_timeout -- value to be validated
"""
+ stonith_watchdog_timeout = validate.ValuePair.get(stonith_watchdog_timeout)
severity = reports.get_severity(reports.codes.FORCE, force)
if _is_device_set_local():
return (
@@ -412,11 +419,11 @@ def validate_stonith_watchdog_timeout(
),
)
]
- if stonith_watchdog_timeout not in ["", "0"]
+ if stonith_watchdog_timeout.normalized not in ["", "0"]
else []
)
- if stonith_watchdog_timeout in ["", "0"]:
+ if stonith_watchdog_timeout.normalized in ["", "0"]:
return [
reports.ReportItem(
severity,
diff --git a/pcs_test/tier0/lib/commands/test_cluster_property.py b/pcs_test/tier0/lib/commands/test_cluster_property.py
index 319d1df6..fd124843 100644
--- a/pcs_test/tier0/lib/commands/test_cluster_property.py
+++ b/pcs_test/tier0/lib/commands/test_cluster_property.py
@@ -120,6 +120,34 @@ class StonithWatchdogTimeoutMixin(LoadMetadataMixin):
)
self.env_assist.assert_reports([])
+ def _set_invalid_value(self, forced=False):
+ self.config.remove("services.is_enabled")
+ self.env_assist.assert_raise_library_error(
+ lambda: cluster_property.set_properties(
+ self.env_assist.get_env(),
+ {"stonith-watchdog-timeout": "15x"},
+ [] if not forced else [reports.codes.FORCE],
+ )
+ )
+ self.env_assist.assert_reports(
+ [
+ fixture.error(
+ reports.codes.INVALID_OPTION_VALUE,
+ option_name="stonith-watchdog-timeout",
+ option_value="15x",
+ allowed_values="time interval (e.g. 1, 2s, 3m, 4h, ...)",
+ cannot_be_empty=False,
+ forbidden_characters=None,
+ ),
+ ]
+ )
+
+ def test_set_invalid_value(self):
+ self._set_invalid_value(forced=False)
+
+ def test_set_invalid_value_forced(self):
+ self._set_invalid_value(forced=True)
+
class TestSetStonithWatchdogTimeoutSBDIsDisabled(
StonithWatchdogTimeoutMixin, TestCase
@@ -132,6 +160,9 @@ class TestSetStonithWatchdogTimeoutSBDIsDisabled(
def test_set_zero(self):
self._set_success({"stonith-watchdog-timeout": "0"})
+ def test_set_zero_time_suffix(self):
+ self._set_success({"stonith-watchdog-timeout": "0s"})
+
def test_set_not_zero_or_empty(self):
self.env_assist.assert_raise_library_error(
lambda: cluster_property.set_properties(
@@ -231,12 +262,12 @@ class TestSetStonithWatchdogTimeoutSBDIsEnabledWatchdogOnly(
def test_set_zero_forced(self):
self.config.env.push_cib(
crm_config=fixture_crm_config_properties(
- [("cib-bootstrap-options", {"stonith-watchdog-timeout": "0"})]
+ [("cib-bootstrap-options", {"stonith-watchdog-timeout": "0s"})]
)
)
cluster_property.set_properties(
self.env_assist.get_env(),
- {"stonith-watchdog-timeout": "0"},
+ {"stonith-watchdog-timeout": "0s"},
[reports.codes.FORCE],
)
self.env_assist.assert_reports(
@@ -271,7 +302,7 @@ class TestSetStonithWatchdogTimeoutSBDIsEnabledWatchdogOnly(
self.env_assist.assert_raise_library_error(
lambda: cluster_property.set_properties(
self.env_assist.get_env(),
- {"stonith-watchdog-timeout": "9"},
+ {"stonith-watchdog-timeout": "9s"},
[],
)
)
@@ -281,7 +312,7 @@ class TestSetStonithWatchdogTimeoutSBDIsEnabledWatchdogOnly(
reports.codes.STONITH_WATCHDOG_TIMEOUT_TOO_SMALL,
force_code=reports.codes.FORCE,
cluster_sbd_watchdog_timeout=10,
- entered_watchdog_timeout="9",
+ entered_watchdog_timeout="9s",
)
]
)
@@ -289,12 +320,12 @@ class TestSetStonithWatchdogTimeoutSBDIsEnabledWatchdogOnly(
def test_too_small_forced(self):
self.config.env.push_cib(
crm_config=fixture_crm_config_properties(
- [("cib-bootstrap-options", {"stonith-watchdog-timeout": "9"})]
+ [("cib-bootstrap-options", {"stonith-watchdog-timeout": "9s"})]
)
)
cluster_property.set_properties(
self.env_assist.get_env(),
- {"stonith-watchdog-timeout": "9"},
+ {"stonith-watchdog-timeout": "9s"},
[reports.codes.FORCE],
)
self.env_assist.assert_reports(
@@ -302,13 +333,13 @@ class TestSetStonithWatchdogTimeoutSBDIsEnabledWatchdogOnly(
fixture.warn(
reports.codes.STONITH_WATCHDOG_TIMEOUT_TOO_SMALL,
cluster_sbd_watchdog_timeout=10,
- entered_watchdog_timeout="9",
+ entered_watchdog_timeout="9s",
)
]
)
def test_more_than_timeout(self):
- self._set_success({"stonith-watchdog-timeout": "11"})
+ self._set_success({"stonith-watchdog-timeout": "11s"})
@mock.patch("pcs.lib.sbd.get_local_sbd_device_list", lambda: ["dev1", "dev2"])
@@ -323,6 +354,9 @@ class TestSetStonithWatchdogTimeoutSBDIsEnabledSharedDevices(
def test_set_to_zero(self):
self._set_success({"stonith-watchdog-timeout": "0"})
+ def test_set_to_zero_time_suffix(self):
+ self._set_success({"stonith-watchdog-timeout": "0min"})
+
def test_set_not_zero_or_empty(self):
self.env_assist.assert_raise_library_error(
lambda: cluster_property.set_properties(
diff --git a/pcs_test/tier0/lib/test_cluster_property.py b/pcs_test/tier0/lib/test_cluster_property.py
index 2feb728d..8d6f90b1 100644
--- a/pcs_test/tier0/lib/test_cluster_property.py
+++ b/pcs_test/tier0/lib/test_cluster_property.py
@@ -83,6 +83,7 @@ FIXTURE_VALID_OPTIONS_DICT = {
"integer_param": "10",
"percentage_param": "20%",
"select_param": "s3",
+ "stonith-watchdog-timeout": "0",
"time_param": "5min",
}
@@ -96,6 +97,8 @@ FIXTURE_INVALID_OPTIONS_DICT = {
"have-watchdog": "100",
}
+STONITH_WATCHDOG_TIMEOUT_UNSET_VALUES = ["", "0", "0s"]
+
def _fixture_parameter(name, param_type, default, enum_values):
return ResourceAgentParameter(
@@ -239,6 +242,7 @@ class TestValidateSetClusterProperties(TestCase):
sbd_enabled=False,
sbd_devices=False,
force=False,
+ valid_value=True,
):
self.mock_is_sbd_enabled.return_value = sbd_enabled
self.mock_sbd_devices.return_value = ["devices"] if sbd_devices else []
@@ -254,9 +258,13 @@ class TestValidateSetClusterProperties(TestCase):
),
expected_report_list,
)
- if "stonith-watchdog-timeout" in new_properties and (
- new_properties["stonith-watchdog-timeout"]
- or "stonith-watchdog-timeout" in configured_properties
+ if (
+ "stonith-watchdog-timeout" in new_properties
+ and (
+ new_properties["stonith-watchdog-timeout"]
+ or "stonith-watchdog-timeout" in configured_properties
+ )
+ and valid_value
):
self.mock_is_sbd_enabled.assert_called_once_with(
self.mock_service_manager
@@ -266,7 +274,10 @@ class TestValidateSetClusterProperties(TestCase):
if sbd_devices:
self.mock_sbd_timeout.assert_not_called()
else:
- if new_properties["stonith-watchdog-timeout"] in ["", "0"]:
+ if (
+ new_properties["stonith-watchdog-timeout"]
+ in STONITH_WATCHDOG_TIMEOUT_UNSET_VALUES
+ ):
self.mock_sbd_timeout.assert_not_called()
else:
self.mock_sbd_timeout.assert_called_once_with()
@@ -280,6 +291,8 @@ class TestValidateSetClusterProperties(TestCase):
self.mock_sbd_timeout.assert_not_called()
self.mock_is_sbd_enabled.reset_mock()
+ self.mock_sbd_devices.reset_mock()
+ self.mock_sbd_timeout.reset_mock()
def test_no_properties_to_set_or_unset(self):
self.assert_validate_set(
@@ -328,7 +341,7 @@ class TestValidateSetClusterProperties(TestCase):
)
def test_unset_stonith_watchdog_timeout_sbd_disabled(self):
- for value in ["0", ""]:
+ for value in STONITH_WATCHDOG_TIMEOUT_UNSET_VALUES:
with self.subTest(value=value):
self.assert_validate_set(
["stonith-watchdog-timeout"],
@@ -349,22 +362,27 @@ class TestValidateSetClusterProperties(TestCase):
)
def test_set_ok_stonith_watchdog_timeout_sbd_enabled_without_devices(self):
- self.assert_validate_set(
- [], {"stonith-watchdog-timeout": "15"}, [], sbd_enabled=True
- )
+ for value in ["15", "15s"]:
+ with self.subTest(value=value):
+ self.assert_validate_set(
+ [],
+ {"stonith-watchdog-timeout": value},
+ [],
+ sbd_enabled=True,
+ )
def test_set_small_stonith_watchdog_timeout_sbd_enabled_without_devices(
self,
):
self.assert_validate_set(
[],
- {"stonith-watchdog-timeout": "9"},
+ {"stonith-watchdog-timeout": "9s"},
[
fixture.error(
reports.codes.STONITH_WATCHDOG_TIMEOUT_TOO_SMALL,
force_code=reports.codes.FORCE,
cluster_sbd_watchdog_timeout=10,
- entered_watchdog_timeout="9",
+ entered_watchdog_timeout="9s",
)
],
sbd_enabled=True,
@@ -387,28 +405,54 @@ class TestValidateSetClusterProperties(TestCase):
force=True,
)
- def test_set_not_a_number_stonith_watchdog_timeout_sbd_enabled_without_devices(
+ def _set_invalid_value_stonith_watchdog_timeout(
+ self, sbd_enabled=False, sbd_devices=False
+ ):
+ for value in ["invalid", "10x"]:
+ with self.subTest(value=value):
+ self.assert_validate_set(
+ [],
+ {"stonith-watchdog-timeout": value},
+ [
+ fixture.error(
+ reports.codes.INVALID_OPTION_VALUE,
+ option_name="stonith-watchdog-timeout",
+ option_value=value,
+ allowed_values="time interval (e.g. 1, 2s, 3m, 4h, ...)",
+ cannot_be_empty=False,
+ forbidden_characters=None,
+ )
+ ],
+ sbd_enabled=sbd_enabled,
+ sbd_devices=sbd_devices,
+ valid_value=False,
+ )
+
+ def test_set_invalid_value_stonith_watchdog_timeout_sbd_enabled_without_devices(
self,
):
+ self._set_invalid_value_stonith_watchdog_timeout(
+ sbd_enabled=True, sbd_devices=False
+ )
- self.assert_validate_set(
- [],
- {"stonith-watchdog-timeout": "invalid"},
- [
- fixture.error(
- reports.codes.STONITH_WATCHDOG_TIMEOUT_TOO_SMALL,
- force_code=reports.codes.FORCE,
- cluster_sbd_watchdog_timeout=10,
- entered_watchdog_timeout="invalid",
- )
- ],
- sbd_enabled=True,
+ def test_set_invalid_value_stonith_watchdog_timeout_sbd_enabled_with_devices(
+ self,
+ ):
+ self._set_invalid_value_stonith_watchdog_timeout(
+ sbd_enabled=True, sbd_devices=True
+ )
+
+ def test_set_invalid_value_stonith_watchdog_timeout_sbd_disabled(
+ self,
+ ):
+ self._set_invalid_value_stonith_watchdog_timeout(
+ sbd_enabled=False, sbd_devices=False
)
def test_unset_stonith_watchdog_timeout_sbd_enabled_without_devices(
self,
):
- for value in ["0", ""]:
+ for value in STONITH_WATCHDOG_TIMEOUT_UNSET_VALUES:
with self.subTest(value=value):
self.assert_validate_set(
["stonith-watchdog-timeout"],
@@ -426,7 +470,7 @@ class TestValidateSetClusterProperties(TestCase):
def test_unset_stonith_watchdog_timeout_sbd_enabled_without_devices_forced(
self,
):
- for value in ["0", ""]:
+ for value in STONITH_WATCHDOG_TIMEOUT_UNSET_VALUES:
with self.subTest(value=value):
self.assert_validate_set(
["stonith-watchdog-timeout"],
@@ -459,7 +503,7 @@ class TestValidateSetClusterProperties(TestCase):
def test_set_stonith_watchdog_timeout_sbd_enabled_with_devices_forced(self):
self.assert_validate_set(
[],
- {"stonith-watchdog-timeout": 15},
+ {"stonith-watchdog-timeout": "15s"},
[
fixture.warn(
reports.codes.STONITH_WATCHDOG_TIMEOUT_CANNOT_BE_SET,
@@ -472,7 +516,7 @@ class TestValidateSetClusterProperties(TestCase):
)
def test_unset_stonith_watchdog_timeout_sbd_enabled_with_devices(self):
- for value in ["0", ""]:
+ for value in STONITH_WATCHDOG_TIMEOUT_UNSET_VALUES:
with self.subTest(value=value):
self.assert_validate_set(
["stonith-watchdog-timeout"],
diff --git a/pcs_test/tier1/test_cluster_property.py b/pcs_test/tier1/test_cluster_property.py
index ff1f9cfb..51e25efc 100644
--- a/pcs_test/tier1/test_cluster_property.py
+++ b/pcs_test/tier1/test_cluster_property.py
@@ -169,7 +169,7 @@ class TestPropertySet(PropertyMixin, TestCase):
def test_set_stonith_watchdog_timeout(self):
self.assert_pcs_fail(
- "property set stonith-watchdog-timeout=5".split(),
+ "property set stonith-watchdog-timeout=5s".split(),
stdout_full=(
"Error: stonith-watchdog-timeout can only be unset or set to 0 "
"while SBD is disabled\n"
@@ -179,6 +179,18 @@ class TestPropertySet(PropertyMixin, TestCase):
)
self.assert_resources_xml_in_cib(UNCHANGED_CRM_CONFIG)
+ def test_set_stonith_watchdog_timeout_invalid_value(self):
+ self.assert_pcs_fail(
+ "property set stonith-watchdog-timeout=5x".split(),
+ stdout_full=(
+ "Error: '5x' is not a valid stonith-watchdog-timeout value, use"
+ " time interval (e.g. 1, 2s, 3m, 4h, ...)\n"
+ "Error: Errors have occurred, therefore pcs is unable to "
+ "continue\n"
+ ),
+ )
+ self.assert_resources_xml_in_cib(UNCHANGED_CRM_CONFIG)
+
class TestPropertyUnset(PropertyMixin, TestCase):
def test_success(self):
--
2.39.0
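The key change in the removed patch above is normalizing the stonith-watchdog-timeout value through timeout_to_seconds() before comparing it with "" or "0", so suffixed values such as "0s" or "9s" are handled and clearly invalid strings are rejected up front. A simplified stand-in showing the idea (not the real pcs.common.tools implementation, which accepts more unit spellings and formats):

import re
from typing import Optional

_UNITS = {"": 1, "s": 1, "sec": 1, "m": 60, "min": 60, "h": 3600, "hr": 3600}

def timeout_to_seconds(value: str) -> Optional[int]:
    # returns None for strings that are not a valid time interval
    match = re.fullmatch(r"(\d+)\s*([a-z]*)", value.strip().lower())
    if not match or match.group(2) not in _UNITS:
        return None
    return int(match.group(1)) * _UNITS[match.group(2)]

def is_effectively_unset(value: str) -> bool:
    # "", "0", "0s", "0min", ... all mean "do not use a watchdog timeout"
    return value == "" or timeout_to_seconds(value) == 0

print(timeout_to_seconds("5s"), timeout_to_seconds("3m"), timeout_to_seconds("15x"))
print(is_effectively_unset("0s"), is_effectively_unset("9s"))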

File diff suppressed because it is too large.


@@ -1,311 +0,0 @@
From 3cd35ed8e5b190c2e8203acd68a0100b84ed3bb4 Mon Sep 17 00:00:00 2001
From: Ondrej Mular <omular@redhat.com>
Date: Tue, 31 Jan 2023 17:44:16 +0100
Subject: [PATCH] fix update of stonith-watchdog-timeout when cluster is not
running
---
pcs/lib/communication/sbd.py | 4 +-
.../lib/commands/sbd/test_disable_sbd.py | 10 ++--
.../tier0/lib/commands/sbd/test_enable_sbd.py | 49 ++++++++++---------
pcsd/pcs.rb | 17 +++++--
4 files changed, 48 insertions(+), 32 deletions(-)
diff --git a/pcs/lib/communication/sbd.py b/pcs/lib/communication/sbd.py
index 4762245c..633312a4 100644
--- a/pcs/lib/communication/sbd.py
+++ b/pcs/lib/communication/sbd.py
@@ -98,8 +98,8 @@ class StonithWatchdogTimeoutAction(
)
if report_item is None:
self._on_success()
- return []
- self._report(report_item)
+ else:
+ self._report(report_item)
return self._get_next_list()
diff --git a/pcs_test/tier0/lib/commands/sbd/test_disable_sbd.py b/pcs_test/tier0/lib/commands/sbd/test_disable_sbd.py
index 13135fb2..f8f165bf 100644
--- a/pcs_test/tier0/lib/commands/sbd/test_disable_sbd.py
+++ b/pcs_test/tier0/lib/commands/sbd/test_disable_sbd.py
@@ -19,7 +19,7 @@ class DisableSbd(TestCase):
self.config.corosync_conf.load(filename=self.corosync_conf_name)
self.config.http.host.check_auth(node_labels=self.node_list)
self.config.http.pcmk.set_stonith_watchdog_timeout_to_zero(
- node_labels=self.node_list[:1]
+ communication_list=[[dict(label=node)] for node in self.node_list],
)
self.config.http.sbd.disable_sbd(node_labels=self.node_list)
disable_sbd(self.env_assist.get_env())
@@ -56,7 +56,7 @@ class DisableSbd(TestCase):
self.config.corosync_conf.load(filename=self.corosync_conf_name)
self.config.http.host.check_auth(node_labels=self.node_list)
self.config.http.pcmk.set_stonith_watchdog_timeout_to_zero(
- node_labels=self.node_list[:1]
+ communication_list=[[dict(label=node)] for node in self.node_list],
)
self.config.http.sbd.disable_sbd(node_labels=self.node_list)
@@ -158,7 +158,9 @@ class DisableSbd(TestCase):
]
)
self.config.http.pcmk.set_stonith_watchdog_timeout_to_zero(
- node_labels=online_nodes_list[:1]
+ communication_list=[
+ [dict(label=node)] for node in self.node_list[1:]
+ ],
)
self.config.http.sbd.disable_sbd(node_labels=online_nodes_list)
disable_sbd(self.env_assist.get_env(), ignore_offline_nodes=True)
@@ -291,7 +293,7 @@ class DisableSbd(TestCase):
self.config.corosync_conf.load(filename=self.corosync_conf_name)
self.config.http.host.check_auth(node_labels=self.node_list)
self.config.http.pcmk.set_stonith_watchdog_timeout_to_zero(
- node_labels=self.node_list[:1]
+ communication_list=[[dict(label=node)] for node in self.node_list],
)
self.config.http.sbd.disable_sbd(
communication_list=[
diff --git a/pcs_test/tier0/lib/commands/sbd/test_enable_sbd.py b/pcs_test/tier0/lib/commands/sbd/test_enable_sbd.py
index 57e680e0..f192f429 100644
--- a/pcs_test/tier0/lib/commands/sbd/test_enable_sbd.py
+++ b/pcs_test/tier0/lib/commands/sbd/test_enable_sbd.py
@@ -130,7 +130,7 @@ class OddNumOfNodesSuccess(TestCase):
node_labels=self.node_list,
)
self.config.http.pcmk.remove_stonith_watchdog_timeout(
- node_labels=[self.node_list[0]]
+ communication_list=[[dict(label=node)] for node in self.node_list],
)
self.config.http.sbd.enable_sbd(node_labels=self.node_list)
enable_sbd(
@@ -164,7 +164,7 @@ class OddNumOfNodesSuccess(TestCase):
node_labels=self.node_list,
)
self.config.http.pcmk.remove_stonith_watchdog_timeout(
- node_labels=[self.node_list[0]]
+ communication_list=[[dict(label=node)] for node in self.node_list],
)
self.config.http.sbd.enable_sbd(node_labels=self.node_list)
enable_sbd(
@@ -218,7 +218,7 @@ class OddNumOfNodesDefaultsSuccess(TestCase):
node_labels=self.node_list,
)
self.config.http.pcmk.remove_stonith_watchdog_timeout(
- node_labels=[self.node_list[0]]
+ communication_list=[[dict(label=node)] for node in self.node_list],
)
self.config.http.sbd.enable_sbd(node_labels=self.node_list)
enable_sbd(
@@ -248,7 +248,7 @@ class OddNumOfNodesDefaultsSuccess(TestCase):
node_labels=self.node_list,
)
self.config.http.pcmk.remove_stonith_watchdog_timeout(
- node_labels=[self.node_list[0]]
+ communication_list=[[dict(label=node)] for node in self.node_list],
)
self.config.http.sbd.enable_sbd(node_labels=self.node_list)
enable_sbd(
@@ -351,7 +351,7 @@ class WatchdogValidations(TestCase):
node_labels=self.node_list,
)
self.config.http.pcmk.remove_stonith_watchdog_timeout(
- node_labels=[self.node_list[0]]
+ communication_list=[[dict(label=node)] for node in self.node_list],
)
self.config.http.sbd.enable_sbd(node_labels=self.node_list)
enable_sbd(
@@ -407,7 +407,7 @@ class EvenNumOfNodes(TestCase):
node_labels=self.node_list,
)
self.config.http.pcmk.remove_stonith_watchdog_timeout(
- node_labels=[self.node_list[0]]
+ communication_list=[[dict(label=node)] for node in self.node_list],
)
self.config.http.sbd.enable_sbd(node_labels=self.node_list)
enable_sbd(
@@ -443,7 +443,7 @@ class EvenNumOfNodes(TestCase):
node_labels=self.node_list,
)
self.config.http.pcmk.remove_stonith_watchdog_timeout(
- node_labels=[self.node_list[0]]
+ communication_list=[[dict(label=node)] for node in self.node_list],
)
self.config.http.sbd.enable_sbd(node_labels=self.node_list)
enable_sbd(
@@ -480,7 +480,7 @@ class EvenNumOfNodes(TestCase):
node_labels=self.node_list,
)
self.config.http.pcmk.remove_stonith_watchdog_timeout(
- node_labels=[self.node_list[0]]
+ communication_list=[[dict(label=node)] for node in self.node_list],
)
self.config.http.sbd.enable_sbd(node_labels=self.node_list)
enable_sbd(
@@ -513,7 +513,7 @@ class EvenNumOfNodes(TestCase):
node_labels=self.node_list,
)
self.config.http.pcmk.remove_stonith_watchdog_timeout(
- node_labels=[self.node_list[0]]
+ communication_list=[[dict(label=node)] for node in self.node_list],
)
self.config.http.sbd.enable_sbd(node_labels=self.node_list)
enable_sbd(
@@ -604,7 +604,9 @@ class OfflineNodes(TestCase):
node_labels=self.online_node_list,
)
self.config.http.pcmk.remove_stonith_watchdog_timeout(
- node_labels=[self.online_node_list[0]]
+ communication_list=[
+ [dict(label=node)] for node in self.online_node_list
+ ],
)
self.config.http.sbd.enable_sbd(node_labels=self.online_node_list)
enable_sbd(
@@ -644,7 +646,9 @@ class OfflineNodes(TestCase):
node_labels=self.online_node_list,
)
self.config.http.pcmk.remove_stonith_watchdog_timeout(
- node_labels=[self.online_node_list[0]]
+ communication_list=[
+ [dict(label=node)] for node in self.online_node_list
+ ],
)
self.config.http.sbd.enable_sbd(node_labels=self.online_node_list)
enable_sbd(
@@ -1226,7 +1230,7 @@ class FailureHandling(TestCase):
node_labels=self.node_list,
)
self.config.http.pcmk.remove_stonith_watchdog_timeout(
- node_labels=[self.node_list[0]]
+ communication_list=[[dict(label=node)] for node in self.node_list],
)
def _remove_calls(self, count):
@@ -1302,7 +1306,8 @@ class FailureHandling(TestCase):
)
def test_removing_stonith_wd_timeout_failure(self):
- self._remove_calls(2)
+ self._remove_calls(len(self.node_list) + 1)
+
self.config.http.pcmk.remove_stonith_watchdog_timeout(
communication_list=[
self.communication_list_failure[:1],
@@ -1331,7 +1336,7 @@ class FailureHandling(TestCase):
)
def test_removing_stonith_wd_timeout_not_connected(self):
- self._remove_calls(2)
+ self._remove_calls(len(self.node_list) + 1)
self.config.http.pcmk.remove_stonith_watchdog_timeout(
communication_list=[
self.communication_list_not_connected[:1],
@@ -1360,7 +1365,7 @@ class FailureHandling(TestCase):
)
def test_removing_stonith_wd_timeout_complete_failure(self):
- self._remove_calls(2)
+ self._remove_calls(len(self.node_list) + 1)
self.config.http.pcmk.remove_stonith_watchdog_timeout(
communication_list=[
self.communication_list_not_connected[:1],
@@ -1406,7 +1411,7 @@ class FailureHandling(TestCase):
)
def test_set_sbd_config_failure(self):
- self._remove_calls(4)
+ self._remove_calls(len(self.node_list) + 1 + 2)
self.config.http.sbd.set_sbd_config(
communication_list=[
dict(
@@ -1453,7 +1458,7 @@ class FailureHandling(TestCase):
)
def test_set_corosync_conf_failed(self):
- self._remove_calls(5)
+ self._remove_calls(len(self.node_list) + 1 + 3)
self.config.env.push_corosync_conf(
corosync_conf_text=_get_corosync_conf_text_with_atb(
self.corosync_conf_name
@@ -1477,7 +1482,7 @@ class FailureHandling(TestCase):
)
def test_check_sbd_invalid_data_format(self):
- self._remove_calls(7)
+ self._remove_calls(len(self.node_list) + 1 + 5)
self.config.http.sbd.check_sbd(
communication_list=[
dict(
@@ -1516,7 +1521,7 @@ class FailureHandling(TestCase):
)
def test_check_sbd_failure(self):
- self._remove_calls(7)
+ self._remove_calls(len(self.node_list) + 1 + 5)
self.config.http.sbd.check_sbd(
communication_list=[
dict(
@@ -1558,7 +1563,7 @@ class FailureHandling(TestCase):
)
def test_check_sbd_not_connected(self):
- self._remove_calls(7)
+ self._remove_calls(len(self.node_list) + 1 + 5)
self.config.http.sbd.check_sbd(
communication_list=[
dict(
@@ -1601,7 +1606,7 @@ class FailureHandling(TestCase):
)
def test_get_online_targets_failed(self):
- self._remove_calls(9)
+ self._remove_calls(len(self.node_list) + 1 + 7)
self.config.http.host.check_auth(
communication_list=self.communication_list_failure
)
@@ -1626,7 +1631,7 @@ class FailureHandling(TestCase):
)
def test_get_online_targets_not_connected(self):
- self._remove_calls(9)
+ self._remove_calls(len(self.node_list) + 1 + 7)
self.config.http.host.check_auth(
communication_list=self.communication_list_not_connected
)
diff --git a/pcsd/pcs.rb b/pcsd/pcs.rb
index 452de97f..e3397c25 100644
--- a/pcsd/pcs.rb
+++ b/pcsd/pcs.rb
@@ -1838,13 +1838,22 @@ end
def set_cluster_prop_force(auth_user, prop, val)
cmd = ['property', 'set', "#{prop}=#{val}"]
flags = ['--force']
+ sig_file = "#{CIB_PATH}.sig"
+ retcode = 0
+
if pacemaker_running?
- user = auth_user
+ _, _, retcode = run_cmd(auth_user, PCS, *flags, "--", *cmd)
else
- user = PCSAuth.getSuperuserAuth()
- flags += ['-f', CIB_PATH]
+ if File.exist?(CIB_PATH)
+ flags += ['-f', CIB_PATH]
+ _, _, retcode = run_cmd(PCSAuth.getSuperuserAuth(), PCS, *flags, "--", *cmd)
+ begin
+ File.delete(sig_file)
+ rescue => e
+ $logger.debug("Cannot delete file '#{sig_file}': #{e.message}")
+ end
+ end
end
- _, _, retcode = run_cmd(user, PCS, *flags, "--", *cmd)
return (retcode == 0)
end
--
2.39.0
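
Editor's note (illustration, not part of the patch above): the tier0 sbd test changes earlier in this patch replace the single remove_stonith_watchdog_timeout request sent to node_list[0] with one request per cluster node, and the _remove_calls() offsets in the failure tests are rewritten to scale with the cluster size. A minimal Python sketch of what the new fixture arguments expand to, using invented node names:

# Illustration only -- the node names are hypothetical, not taken from the test suite.
node_list = ["node1", "node2", "node3"]

# New fixture argument: one single-entry communication list per node, i.e. the
# stonith-watchdog-timeout removal is now requested from every node instead of
# only node_list[0].
communication_list = [[dict(label=node)] for node in node_list]
print(communication_list)
# [[{'label': 'node1'}], [{'label': 'node2'}], [{'label': 'node3'}]]

# Because one call per node is configured, the number of trailing calls the
# failure tests need to drop grows with the cluster size, hence
# _remove_calls(len(self.node_list) + 1 + k) instead of the old fixed numbers.
def removed_calls(node_count, extra=0):
    return node_count + 1 + extra

print(removed_calls(len(node_list)))     # 4
print(removed_calls(len(node_list), 5))  # 9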

View File

@ -1,84 +0,0 @@
From ce48dbe8b410b2dc4f3159e22c243c1d8824cba0 Mon Sep 17 00:00:00 2001
From: Miroslav Lisik <mlisik@redhat.com>
Date: Thu, 16 Mar 2023 11:32:40 +0100
Subject: [PATCH 1/2] fix `pcs config checkpoint diff` command
---
pcs/cli/common/lib_wrapper.py | 15 +--------------
pcs/config.py | 3 +++
2 files changed, 4 insertions(+), 14 deletions(-)
diff --git a/pcs/cli/common/lib_wrapper.py b/pcs/cli/common/lib_wrapper.py
index 0643a808..b17f43b1 100644
--- a/pcs/cli/common/lib_wrapper.py
+++ b/pcs/cli/common/lib_wrapper.py
@@ -1,9 +1,5 @@
import logging
from collections import namedtuple
-from typing import (
- Any,
- Dict,
-)
from pcs import settings
from pcs.cli.common import middleware
@@ -36,9 +32,6 @@ from pcs.lib.commands.constraint import order as constraint_order
from pcs.lib.commands.constraint import ticket as constraint_ticket
from pcs.lib.env import LibraryEnvironment
-# Note: not properly typed
-_CACHE: Dict[Any, Any] = {}
-
def wrapper(dictionary):
return namedtuple("wrapper", dictionary.keys())(**dictionary)
@@ -106,12 +99,6 @@ def bind_all(env, run_with_middleware, dictionary):
)
-def get_module(env, middleware_factory, name):
- if name not in _CACHE:
- _CACHE[name] = load_module(env, middleware_factory, name)
- return _CACHE[name]
-
-
def load_module(env, middleware_factory, name):
# pylint: disable=too-many-return-statements, too-many-branches
if name == "acl":
@@ -541,4 +528,4 @@ class Library:
self.middleware_factory = middleware_factory
def __getattr__(self, name):
- return get_module(self.env, self.middleware_factory, name)
+ return load_module(self.env, self.middleware_factory, name)
diff --git a/pcs/config.py b/pcs/config.py
index 6c90c13f..25007d26 100644
--- a/pcs/config.py
+++ b/pcs/config.py
@@ -711,6 +711,7 @@ def _checkpoint_to_lines(lib, checkpoint_number):
orig_usefile = utils.usefile
orig_filename = utils.filename
orig_middleware = lib.middleware_factory
+ orig_env = lib.env
# configure old code to read the CIB from a file
utils.usefile = True
utils.filename = os.path.join(
@@ -720,6 +721,7 @@ def _checkpoint_to_lines(lib, checkpoint_number):
lib.middleware_factory = orig_middleware._replace(
cib=middleware.cib(utils.filename, utils.touch_cib_file)
)
+ lib.env = utils.get_cli_env()
# export the CIB to text
result = False, []
if os.path.isfile(utils.filename):
@@ -728,6 +730,7 @@ def _checkpoint_to_lines(lib, checkpoint_number):
utils.usefile = orig_usefile
utils.filename = orig_filename
lib.middleware_factory = orig_middleware
+ lib.env = orig_env
return result
--
2.39.2
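
Editor's note (illustration, not part of the patch above): the fix drops the name-keyed _CACHE so Library.__getattr__ always calls load_module(), and _checkpoint_to_lines() additionally swaps lib.env for a fresh CLI environment while the checkpoint CIB file is being read. A simplified sketch of why caching the loaded modules broke `pcs config checkpoint diff`, assuming (for illustration only) that a loaded module keeps a reference to the environment it was built with:

# Simplified stand-in for the wrapper; the objects and paths below are
# hypothetical, only the caching behaviour mirrors the removed code.
_CACHE = {}

def load_module(env, name):
    # pretend the returned command module reads its CIB from env["cib_path"]
    return {"name": name, "env": env}

def get_module_cached(env, name):
    if name not in _CACHE:
        _CACHE[name] = load_module(env, name)
    return _CACHE[name]  # a later caller with a different env gets the first one back

live_env = {"cib_path": "live CIB"}
checkpoint_env = {"cib_path": "checkpoint CIB"}  # what utils.get_cli_env() would set up

print(get_module_cached(live_env, "cib")["env"]["cib_path"])        # live CIB
print(get_module_cached(checkpoint_env, "cib")["env"]["cib_path"])  # still "live CIB" -> wrong file

# The patched __getattr__ simply calls load_module() on every access, so the
# environment prepared for the checkpoint is actually used.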

View File

@ -1,975 +0,0 @@
From 0b9175e04ee0527bbf603ad8dd8240c50c623bd6 Mon Sep 17 00:00:00 2001
From: Miroslav Lisik <mlisik@redhat.com>
Date: Mon, 20 Mar 2023 10:35:34 +0100
Subject: [PATCH 2/2] fix `pcs stonith update-scsi-devices` command
---
pcs/lib/cib/stonith.py | 168 +++++-
.../test_stonith_update_scsi_devices.py | 571 ++++++++++++++----
2 files changed, 601 insertions(+), 138 deletions(-)
diff --git a/pcs/lib/cib/stonith.py b/pcs/lib/cib/stonith.py
index 85b46fd7..f49bbc37 100644
--- a/pcs/lib/cib/stonith.py
+++ b/pcs/lib/cib/stonith.py
@@ -158,12 +158,64 @@ def get_node_key_map_for_mpath(
return node_key_map
-DIGEST_ATTRS = ["op-digest", "op-secure-digest", "op-restart-digest"]
-DIGEST_ATTR_TO_TYPE_MAP = {
+DIGEST_ATTR_TO_DIGEST_TYPE_MAP = {
"op-digest": "all",
"op-secure-digest": "nonprivate",
"op-restart-digest": "nonreloadable",
}
+TRANSIENT_DIGEST_ATTR_TO_DIGEST_TYPE_MAP = {
+ "#digests-all": "all",
+ "#digests-secure": "nonprivate",
+}
+DIGEST_ATTRS = frozenset(DIGEST_ATTR_TO_DIGEST_TYPE_MAP.keys())
+TRANSIENT_DIGEST_ATTRS = frozenset(
+ TRANSIENT_DIGEST_ATTR_TO_DIGEST_TYPE_MAP.keys()
+)
+
+
+def _get_digest(
+ attr: str,
+ attr_to_type_map: Dict[str, str],
+ calculated_digests: Dict[str, Optional[str]],
+) -> str:
+ """
+ Return digest of right type for the specified attribute. If missing, raise
+ an error.
+
+ attr -- name of digest attribute
+ attr_to_type_map -- map for attribute name to digest type conversion
+ calculated_digests -- digests calculated by pacemaker
+ """
+ if attr not in attr_to_type_map:
+ raise AssertionError(
+ f"Key '{attr}' is missing in the attribute name to digest type map"
+ )
+ digest = calculated_digests.get(attr_to_type_map[attr])
+ if digest is None:
+ # this should not happen and when it does it is pacemaker fault
+ raise LibraryError(
+ ReportItem.error(
+ reports.messages.StonithRestartlessUpdateUnableToPerform(
+ f"necessary digest for '{attr}' attribute is missing"
+ )
+ )
+ )
+ return digest
+
+
+def _get_transient_instance_attributes(cib: _Element) -> List[_Element]:
+ """
+ Return list of instance_attributes elements which could contain digest
+ attributes.
+
+ cib -- CIB root element
+ """
+ return cast(
+ List[_Element],
+ cib.xpath(
+ "./status/node_state/transient_attributes/instance_attributes"
+ ),
+ )
def _get_lrm_rsc_op_elements(
@@ -267,21 +319,89 @@ def _update_digest_attrs_in_lrm_rsc_op(
)
)
for attr in common_digests_attrs:
- new_digest = calculated_digests[DIGEST_ATTR_TO_TYPE_MAP[attr]]
- if new_digest is None:
- # this should not happen and when it does it is pacemaker fault
+ # update digest in cib
+ lrm_rsc_op.attrib[attr] = _get_digest(
+ attr, DIGEST_ATTR_TO_DIGEST_TYPE_MAP, calculated_digests
+ )
+
+
+def _get_transient_digest_value(
+ old_value: str, stonith_id: str, stonith_type: str, digest: str
+) -> str:
+ """
+ Return transient digest value with replaced digest.
+
+ Value has a comma-separated format:
+ <stonith_id>:<stonith_type>:<digest>,...
+
+ and we need to replace only the digest for the currently updated stonith device.
+
+ old_value -- value to be replaced
+ stonith_id -- id of stonith resource
+ stonith_type -- stonith resource type
+ digest -- digest for new value
+ """
+ new_comma_values_list = []
+ for comma_value in old_value.split(","):
+ if comma_value:
+ try:
+ _id, _type, _ = comma_value.split(":")
+ except ValueError as e:
+ raise LibraryError(
+ ReportItem.error(
+ reports.messages.StonithRestartlessUpdateUnableToPerform(
+ f"invalid digest attribute value: '{old_value}'"
+ )
+ )
+ ) from e
+ if _id == stonith_id and _type == stonith_type:
+ comma_value = ":".join([stonith_id, stonith_type, digest])
+ new_comma_values_list.append(comma_value)
+ return ",".join(new_comma_values_list)
+
+
+def _update_digest_attrs_in_transient_instance_attributes(
+ nvset_el: _Element,
+ stonith_id: str,
+ stonith_type: str,
+ calculated_digests: Dict[str, Optional[str]],
+) -> None:
+ """
+ Update digests attributes in transient instance attributes element.
+
+ nvset_el -- instance_attributes element containing nvpairs with digests
+ attributes
+ stonith_id -- id of stonith resource being updated
+ stonith_type -- type of stonith resource being updated
+ calculated_digests -- digests calculated by pacemaker
+ """
+ for attr in TRANSIENT_DIGEST_ATTRS:
+ nvpair_list = cast(
+ List[_Element],
+ nvset_el.xpath("./nvpair[@name=$name]", name=attr),
+ )
+ if not nvpair_list:
+ continue
+ if len(nvpair_list) > 1:
raise LibraryError(
ReportItem.error(
reports.messages.StonithRestartlessUpdateUnableToPerform(
- (
- f"necessary digest for '{attr}' attribute is "
- "missing"
- )
+ f"multiple digests attributes: '{attr}'"
)
)
)
- # update digest in cib
- lrm_rsc_op.attrib[attr] = new_digest
+ old_value = nvpair_list[0].attrib["value"]
+ if old_value:
+ nvpair_list[0].attrib["value"] = _get_transient_digest_value(
+ str(old_value),
+ stonith_id,
+ stonith_type,
+ _get_digest(
+ attr,
+ TRANSIENT_DIGEST_ATTR_TO_DIGEST_TYPE_MAP,
+ calculated_digests,
+ ),
+ )
def update_scsi_devices_without_restart(
@@ -300,6 +420,8 @@ def update_scsi_devices_without_restart(
id_provider -- elements' ids generator
device_list -- list of updated scsi devices
"""
+ # pylint: disable=too-many-locals
+ cib = get_root(resource_el)
resource_id = resource_el.get("id", "")
roles_with_nodes = get_resource_state(cluster_state, resource_id)
if "Started" not in roles_with_nodes:
@@ -330,17 +452,14 @@ def update_scsi_devices_without_restart(
)
lrm_rsc_op_start_list = _get_lrm_rsc_op_elements(
- get_root(resource_el), resource_id, node_name, "start"
+ cib, resource_id, node_name, "start"
+ )
+ new_instance_attrs_digests = get_resource_digests(
+ runner, resource_id, node_name, new_instance_attrs
)
if len(lrm_rsc_op_start_list) == 1:
_update_digest_attrs_in_lrm_rsc_op(
- lrm_rsc_op_start_list[0],
- get_resource_digests(
- runner,
- resource_id,
- node_name,
- new_instance_attrs,
- ),
+ lrm_rsc_op_start_list[0], new_instance_attrs_digests
)
else:
raise LibraryError(
@@ -353,7 +472,7 @@ def update_scsi_devices_without_restart(
monitor_attrs_list = _get_monitor_attrs(resource_el)
lrm_rsc_op_monitor_list = _get_lrm_rsc_op_elements(
- get_root(resource_el), resource_id, node_name, "monitor"
+ cib, resource_id, node_name, "monitor"
)
if len(lrm_rsc_op_monitor_list) != len(monitor_attrs_list):
raise LibraryError(
@@ -369,7 +488,7 @@ def update_scsi_devices_without_restart(
for monitor_attrs in monitor_attrs_list:
lrm_rsc_op_list = _get_lrm_rsc_op_elements(
- get_root(resource_el),
+ cib,
resource_id,
node_name,
"monitor",
@@ -398,3 +517,10 @@ def update_scsi_devices_without_restart(
)
)
)
+ for nvset_el in _get_transient_instance_attributes(cib):
+ _update_digest_attrs_in_transient_instance_attributes(
+ nvset_el,
+ resource_id,
+ resource_el.get("type", ""),
+ new_instance_attrs_digests,
+ )
diff --git a/pcs_test/tier0/lib/commands/test_stonith_update_scsi_devices.py b/pcs_test/tier0/lib/commands/test_stonith_update_scsi_devices.py
index 6cb1f80c..db8953c8 100644
--- a/pcs_test/tier0/lib/commands/test_stonith_update_scsi_devices.py
+++ b/pcs_test/tier0/lib/commands/test_stonith_update_scsi_devices.py
@@ -35,6 +35,7 @@ DEFAULT_DIGEST = _DIGEST + "0"
ALL_DIGEST = _DIGEST + "1"
NONPRIVATE_DIGEST = _DIGEST + "2"
NONRELOADABLE_DIGEST = _DIGEST + "3"
+DIGEST_ATTR_VALUE_GOOD_FORMAT = f"stonith_id:stonith_type:{DEFAULT_DIGEST},"
DEV_1 = "/dev/sda"
DEV_2 = "/dev/sdb"
DEV_3 = "/dev/sdc"
@@ -148,33 +149,58 @@ def _fixture_lrm_rsc_start_ops(resource_id, lrm_start_ops):
return _fixture_lrm_rsc_ops("start", resource_id, lrm_start_ops)
-def _fixture_status_lrm_ops_base(
- resource_id,
- resource_type,
- lrm_ops,
-):
+def _fixture_status_lrm_ops(resource_id, resource_type, lrm_ops):
return f"""
- <status>
- <node_state id="1" uname="node1">
- <lrm id="1">
- <lrm_resources>
- <lrm_resource id="{resource_id}" type="{resource_type}" class="stonith">
- {lrm_ops}
- </lrm_resource>
- </lrm_resources>
- </lrm>
- </node_state>
- </status>
+ <lrm id="1">
+ <lrm_resources>
+ <lrm_resource id="{resource_id}" type="{resource_type}" class="stonith">
+ {lrm_ops}
+ </lrm_resource>
+ </lrm_resources>
+ </lrm>
+ """
+
+
+def _fixture_digest_nvpair(node_id, digest_name, digest_value):
+ return (
+ f'<nvpair id="status-{node_id}-.{digest_name}" name="#{digest_name}" '
+ f'value="{digest_value}"/>'
+ )
+
+
+def _fixture_transient_attributes(node_id, digests_nvpairs):
+ return f"""
+ <transient_attributes id="{node_id}">
+ <instance_attributes id="status-{node_id}">
+ <nvpair id="status-{node_id}-.feature-set" name="#feature-set" value="3.16.2"/>
+ <nvpair id="status-{node_id}-.node-unfenced" name="#node-unfenced" value="1679319764"/>
+ {digests_nvpairs}
+ </instance_attributes>
+ </transient_attributes>
+ """
+
+
+def _fixture_node_state(node_id, lrm_ops=None, transient_attrs=None):
+ if transient_attrs is None:
+ transient_attrs = ""
+ if lrm_ops is None:
+ lrm_ops = ""
+ return f"""
+ <node_state id="{node_id}" uname="node{node_id}">
+ {lrm_ops}
+ {transient_attrs}
+ </node_state>
"""
-def _fixture_status_lrm_ops(
+def _fixture_status(
resource_id,
resource_type,
lrm_start_ops=DEFAULT_LRM_START_OPS,
lrm_monitor_ops=DEFAULT_LRM_MONITOR_OPS,
+ digests_attrs_list=None,
):
- return _fixture_status_lrm_ops_base(
+ lrm_ops = _fixture_status_lrm_ops(
resource_id,
resource_type,
"\n".join(
@@ -182,18 +208,52 @@ def _fixture_status_lrm_ops(
+ _fixture_lrm_rsc_monitor_ops(resource_id, lrm_monitor_ops)
),
)
+ node_states_list = []
+ if not digests_attrs_list:
+ node_states_list.append(
+ _fixture_node_state("1", lrm_ops, transient_attrs=None)
+ )
+ else:
+ for node_id, digests_attrs in enumerate(digests_attrs_list, start=1):
+ transient_attrs = _fixture_transient_attributes(
+ node_id,
+ "\n".join(
+ _fixture_digest_nvpair(node_id, name, value)
+ for name, value in digests_attrs
+ ),
+ )
+ node_state = _fixture_node_state(
+ node_id,
+ lrm_ops=lrm_ops if node_id == 1 else None,
+ transient_attrs=transient_attrs,
+ )
+ node_states_list.append(node_state)
+ node_states = "\n".join(node_states_list)
+ return f"""
+ <status>
+ {node_states}
+ </status>
+ """
+
+def fixture_digests_xml(resource_id, node_name, devices="", nonprivate=True):
+ nonprivate_xml = (
+ f"""
+ <digest type="nonprivate" hash="{NONPRIVATE_DIGEST}">
+ <parameters devices="{devices}"/>
+ </digest>
+ """
+ if nonprivate
+ else ""
+ )
-def fixture_digests_xml(resource_id, node_name, devices=""):
return f"""
<pacemaker-result api-version="2.9" request="crm_resource --digests --resource {resource_id} --node {node_name} --output-as xml devices={devices}">
<digests resource="{resource_id}" node="{node_name}" task="stop" interval="0ms">
<digest type="all" hash="{ALL_DIGEST}">
<parameters devices="{devices}" pcmk_host_check="static-list" pcmk_host_list="node1 node2 node3" pcmk_reboot_action="off"/>
</digest>
- <digest type="nonprivate" hash="{NONPRIVATE_DIGEST}">
- <parameters devices="{devices}"/>
- </digest>
+ {nonprivate_xml}
</digests>
<status code="0" message="OK"/>
</pacemaker-result>
@@ -331,6 +391,8 @@ class UpdateScsiDevicesMixin:
nodes_running_on=1,
start_digests=True,
monitor_digests=True,
+ digests_attrs_list=None,
+ crm_digests_xml=None,
):
# pylint: disable=too-many-arguments
# pylint: disable=too-many-locals
@@ -343,11 +405,12 @@ class UpdateScsiDevicesMixin:
resource_ops=resource_ops,
host_map=host_map,
),
- status=_fixture_status_lrm_ops(
+ status=_fixture_status(
self.stonith_id,
self.stonith_type,
lrm_start_ops=lrm_start_ops,
lrm_monitor_ops=lrm_monitor_ops,
+ digests_attrs_list=digests_attrs_list,
),
)
self.config.runner.pcmk.is_resource_digests_supported()
@@ -360,14 +423,17 @@ class UpdateScsiDevicesMixin:
nodes=FIXTURE_CRM_MON_NODES,
)
devices_opt = "devices={}".format(devices_value)
+
+ if crm_digests_xml is None:
+ crm_digests_xml = fixture_digests_xml(
+ self.stonith_id, SCSI_NODE, devices=devices_value
+ )
if start_digests:
self.config.runner.pcmk.resource_digests(
self.stonith_id,
SCSI_NODE,
name="start.op.digests",
- stdout=fixture_digests_xml(
- self.stonith_id, SCSI_NODE, devices=devices_value
- ),
+ stdout=crm_digests_xml,
args=[devices_opt],
)
if monitor_digests:
@@ -391,11 +457,7 @@ class UpdateScsiDevicesMixin:
self.stonith_id,
SCSI_NODE,
name=f"{name}-{num}.op.digests",
- stdout=fixture_digests_xml(
- self.stonith_id,
- SCSI_NODE,
- devices=devices_value,
- ),
+ stdout=crm_digests_xml,
args=args,
)
@@ -403,14 +465,16 @@ class UpdateScsiDevicesMixin:
self,
devices_before=DEVICES_1,
devices_updated=DEVICES_2,
- devices_add=(),
- devices_remove=(),
+ devices_add=None,
+ devices_remove=None,
unfence=None,
resource_ops=DEFAULT_OPS,
lrm_monitor_ops=DEFAULT_LRM_MONITOR_OPS,
lrm_start_ops=DEFAULT_LRM_START_OPS,
lrm_monitor_ops_updated=DEFAULT_LRM_MONITOR_OPS_UPDATED,
lrm_start_ops_updated=DEFAULT_LRM_START_OPS_UPDATED,
+ digests_attrs_list=None,
+ digests_attrs_list_updated=None,
):
# pylint: disable=too-many-arguments
self.config_cib(
@@ -419,6 +483,7 @@ class UpdateScsiDevicesMixin:
resource_ops=resource_ops,
lrm_monitor_ops=lrm_monitor_ops,
lrm_start_ops=lrm_start_ops,
+ digests_attrs_list=digests_attrs_list,
)
if unfence:
self.config.corosync_conf.load_content(
@@ -442,20 +507,34 @@ class UpdateScsiDevicesMixin:
devices=devices_updated,
resource_ops=resource_ops,
),
- status=_fixture_status_lrm_ops(
+ status=_fixture_status(
self.stonith_id,
self.stonith_type,
lrm_start_ops=lrm_start_ops_updated,
lrm_monitor_ops=lrm_monitor_ops_updated,
+ digests_attrs_list=digests_attrs_list_updated,
),
)
- self.command(
- devices_updated=devices_updated,
- devices_add=devices_add,
- devices_remove=devices_remove,
- )()
+ kwargs = dict(devices_updated=devices_updated)
+ if devices_add is not None:
+ kwargs["devices_add"] = devices_add
+ if devices_remove is not None:
+ kwargs["devices_remove"] = devices_remove
+ self.command(**kwargs)()
self.env_assist.assert_reports([])
+ def digest_attr_value_single(self, digest, last_comma=True):
+ comma = "," if last_comma else ""
+ return f"{self.stonith_id}:{self.stonith_type}:{digest}{comma}"
+
+ def digest_attr_value_multiple(self, digest, last_comma=True):
+ if self.stonith_type == STONITH_TYPE_SCSI:
+ value = f"{STONITH_ID_MPATH}:{STONITH_TYPE_MPATH}:{DEFAULT_DIGEST},"
+ else:
+ value = f"{STONITH_ID_SCSI}:{STONITH_TYPE_SCSI}:{DEFAULT_DIGEST},"
+
+ return f"{value}{self.digest_attr_value_single(digest, last_comma=last_comma)}"
+
class UpdateScsiDevicesFailuresMixin(UpdateScsiDevicesMixin):
def test_pcmk_doesnt_support_digests(self):
@@ -564,9 +643,7 @@ class UpdateScsiDevicesFailuresMixin(UpdateScsiDevicesMixin):
)
def test_no_lrm_start_op(self):
- self.config_cib(
- lrm_start_ops=(), start_digests=False, monitor_digests=False
- )
+ self.config_cib(lrm_start_ops=(), monitor_digests=False)
self.env_assist.assert_raise_library_error(
self.command(),
[
@@ -619,6 +696,59 @@ class UpdateScsiDevicesFailuresMixin(UpdateScsiDevicesMixin):
expected_in_processor=False,
)
+ def test_crm_resource_digests_missing_for_transient_digests_attrs(self):
+ self.config_cib(
+ digests_attrs_list=[
+ [
+ (
+ "digests-secure",
+ self.digest_attr_value_single(ALL_DIGEST),
+ ),
+ ],
+ ],
+ crm_digests_xml=fixture_digests_xml(
+ self.stonith_id, SCSI_NODE, devices="", nonprivate=False
+ ),
+ )
+ self.env_assist.assert_raise_library_error(
+ self.command(),
+ [
+ fixture.error(
+ reports.codes.STONITH_RESTARTLESS_UPDATE_UNABLE_TO_PERFORM,
+ reason=(
+ "necessary digest for '#digests-secure' attribute is "
+ "missing"
+ ),
+ reason_type=reports.const.STONITH_RESTARTLESS_UPDATE_UNABLE_TO_PERFORM_REASON_OTHER,
+ )
+ ],
+ expected_in_processor=False,
+ )
+
+ def test_multiple_digests_attributes(self):
+ self.config_cib(
+ digests_attrs_list=[
+ 2
+ * [
+ (
+ "digests-all",
+ self.digest_attr_value_single(DEFAULT_DIGEST),
+ ),
+ ],
+ ],
+ )
+ self.env_assist.assert_raise_library_error(
+ self.command(),
+ [
+ fixture.error(
+ reports.codes.STONITH_RESTARTLESS_UPDATE_UNABLE_TO_PERFORM,
+ reason=("multiple digests attributes: '#digests-all'"),
+ reason_type=reports.const.STONITH_RESTARTLESS_UPDATE_UNABLE_TO_PERFORM_REASON_OTHER,
+ )
+ ],
+ expected_in_processor=False,
+ )
+
def test_monitor_ops_and_lrm_monitor_ops_do_not_match(self):
self.config_cib(
resource_ops=(
@@ -809,7 +939,7 @@ class UpdateScsiDevicesFailuresMixin(UpdateScsiDevicesMixin):
stonith_type=self.stonith_type,
devices=DEVICES_2,
),
- status=_fixture_status_lrm_ops(
+ status=_fixture_status(
self.stonith_id,
self.stonith_type,
lrm_start_ops=DEFAULT_LRM_START_OPS_UPDATED,
@@ -956,6 +1086,28 @@ class UpdateScsiDevicesFailuresMixin(UpdateScsiDevicesMixin):
]
)
+ def test_transient_digests_attrs_bad_value_format(self):
+ bad_format = f"{DIGEST_ATTR_VALUE_GOOD_FORMAT}id:type,"
+ self.config_cib(
+ digests_attrs_list=[
+ [
+ ("digests-all", DIGEST_ATTR_VALUE_GOOD_FORMAT),
+ ("digests-secure", bad_format),
+ ]
+ ]
+ )
+ self.env_assist.assert_raise_library_error(
+ self.command(),
+ [
+ fixture.error(
+ reports.codes.STONITH_RESTARTLESS_UPDATE_UNABLE_TO_PERFORM,
+ reason=f"invalid digest attribute value: '{bad_format}'",
+ reason_type=reports.const.STONITH_RESTARTLESS_UPDATE_UNABLE_TO_PERFORM_REASON_OTHER,
+ )
+ ],
+ expected_in_processor=False,
+ )
+
class UpdateScsiDevicesSetBase(UpdateScsiDevicesMixin, CommandSetMixin):
def test_update_1_to_1_devices(self):
@@ -999,80 +1151,6 @@ class UpdateScsiDevicesSetBase(UpdateScsiDevicesMixin, CommandSetMixin):
unfence=[DEV_3, DEV_4],
)
- def test_default_monitor(self):
- self.assert_command_success(unfence=[DEV_2])
-
- def test_no_monitor_ops(self):
- self.assert_command_success(
- unfence=[DEV_2],
- resource_ops=(),
- lrm_monitor_ops=(),
- lrm_monitor_ops_updated=(),
- )
-
- def test_1_monitor_with_timeout(self):
- self.assert_command_success(
- unfence=[DEV_2],
- resource_ops=(("monitor", "30s", "10s", None),),
- lrm_monitor_ops=(("30000", DEFAULT_DIGEST, None, None),),
- lrm_monitor_ops_updated=(("30000", ALL_DIGEST, None, None),),
- )
-
- def test_2_monitor_ops_with_timeouts(self):
- self.assert_command_success(
- unfence=[DEV_2],
- resource_ops=(
- ("monitor", "30s", "10s", None),
- ("monitor", "40s", "20s", None),
- ),
- lrm_monitor_ops=(
- ("30000", DEFAULT_DIGEST, None, None),
- ("40000", DEFAULT_DIGEST, None, None),
- ),
- lrm_monitor_ops_updated=(
- ("30000", ALL_DIGEST, None, None),
- ("40000", ALL_DIGEST, None, None),
- ),
- )
-
- def test_2_monitor_ops_with_one_timeout(self):
- self.assert_command_success(
- unfence=[DEV_2],
- resource_ops=(
- ("monitor", "30s", "10s", None),
- ("monitor", "60s", None, None),
- ),
- lrm_monitor_ops=(
- ("30000", DEFAULT_DIGEST, None, None),
- ("60000", DEFAULT_DIGEST, None, None),
- ),
- lrm_monitor_ops_updated=(
- ("30000", ALL_DIGEST, None, None),
- ("60000", ALL_DIGEST, None, None),
- ),
- )
-
- def test_various_start_ops_one_lrm_start_op(self):
- self.assert_command_success(
- unfence=[DEV_2],
- resource_ops=(
- ("monitor", "60s", None, None),
- ("start", "0s", "40s", None),
- ("start", "0s", "30s", "1"),
- ("start", "10s", "5s", None),
- ("start", "20s", None, None),
- ),
- )
-
- def test_1_nonrecurring_start_op_with_timeout(self):
- self.assert_command_success(
- unfence=[DEV_2],
- resource_ops=(
- ("monitor", "60s", None, None),
- ("start", "0s", "40s", None),
- ),
- )
-
class UpdateScsiDevicesAddRemoveBase(
UpdateScsiDevicesMixin, CommandAddRemoveMixin
@@ -1242,6 +1320,221 @@ class MpathFailuresMixin:
self.assert_failure("node1:1;node2=", ["node2", "node3"])
+class UpdateScsiDevicesDigestsBase(UpdateScsiDevicesMixin):
+ def test_default_monitor(self):
+ self.assert_command_success(unfence=[DEV_2])
+
+ def test_no_monitor_ops(self):
+ self.assert_command_success(
+ unfence=[DEV_2],
+ resource_ops=(),
+ lrm_monitor_ops=(),
+ lrm_monitor_ops_updated=(),
+ )
+
+ def test_1_monitor_with_timeout(self):
+ self.assert_command_success(
+ unfence=[DEV_2],
+ resource_ops=(("monitor", "30s", "10s", None),),
+ lrm_monitor_ops=(("30000", DEFAULT_DIGEST, None, None),),
+ lrm_monitor_ops_updated=(("30000", ALL_DIGEST, None, None),),
+ )
+
+ def test_2_monitor_ops_with_timeouts(self):
+ self.assert_command_success(
+ unfence=[DEV_2],
+ resource_ops=(
+ ("monitor", "30s", "10s", None),
+ ("monitor", "40s", "20s", None),
+ ),
+ lrm_monitor_ops=(
+ ("30000", DEFAULT_DIGEST, None, None),
+ ("40000", DEFAULT_DIGEST, None, None),
+ ),
+ lrm_monitor_ops_updated=(
+ ("30000", ALL_DIGEST, None, None),
+ ("40000", ALL_DIGEST, None, None),
+ ),
+ )
+
+ def test_2_monitor_ops_with_one_timeout(self):
+ self.assert_command_success(
+ unfence=[DEV_2],
+ resource_ops=(
+ ("monitor", "30s", "10s", None),
+ ("monitor", "60s", None, None),
+ ),
+ lrm_monitor_ops=(
+ ("30000", DEFAULT_DIGEST, None, None),
+ ("60000", DEFAULT_DIGEST, None, None),
+ ),
+ lrm_monitor_ops_updated=(
+ ("30000", ALL_DIGEST, None, None),
+ ("60000", ALL_DIGEST, None, None),
+ ),
+ )
+
+ def test_various_start_ops_one_lrm_start_op(self):
+ self.assert_command_success(
+ unfence=[DEV_2],
+ resource_ops=(
+ ("monitor", "60s", None, None),
+ ("start", "0s", "40s", None),
+ ("start", "0s", "30s", "1"),
+ ("start", "10s", "5s", None),
+ ("start", "20s", None, None),
+ ),
+ )
+
+ def test_1_nonrecurring_start_op_with_timeout(self):
+ self.assert_command_success(
+ unfence=[DEV_2],
+ resource_ops=(
+ ("monitor", "60s", None, None),
+ ("start", "0s", "40s", None),
+ ),
+ )
+
+ def _digests_attrs_before(self, last_comma=True):
+ return [
+ (
+ "digests-all",
+ self.digest_attr_value_single(DEFAULT_DIGEST, last_comma),
+ ),
+ (
+ "digests-secure",
+ self.digest_attr_value_single(DEFAULT_DIGEST, last_comma),
+ ),
+ ]
+
+ def _digests_attrs_after(self, last_comma=True):
+ return [
+ (
+ "digests-all",
+ self.digest_attr_value_single(ALL_DIGEST, last_comma),
+ ),
+ (
+ "digests-secure",
+ self.digest_attr_value_single(NONPRIVATE_DIGEST, last_comma),
+ ),
+ ]
+
+ def _digests_attrs_before_multi(self, last_comma=True):
+ return [
+ (
+ "digests-all",
+ self.digest_attr_value_multiple(DEFAULT_DIGEST, last_comma),
+ ),
+ (
+ "digests-secure",
+ self.digest_attr_value_multiple(DEFAULT_DIGEST, last_comma),
+ ),
+ ]
+
+ def _digests_attrs_after_multi(self, last_comma=True):
+ return [
+ (
+ "digests-all",
+ self.digest_attr_value_multiple(ALL_DIGEST, last_comma),
+ ),
+ (
+ "digests-secure",
+ self.digest_attr_value_multiple(NONPRIVATE_DIGEST, last_comma),
+ ),
+ ]
+
+ def test_transient_digests_attrs_all_nodes(self):
+ self.assert_command_success(
+ unfence=[DEV_2],
+ digests_attrs_list=len(self.existing_nodes)
+ * [self._digests_attrs_before()],
+ digests_attrs_list_updated=len(self.existing_nodes)
+ * [self._digests_attrs_after()],
+ )
+
+ def test_transient_digests_attrs_not_on_all_nodes(self):
+ self.assert_command_success(
+ unfence=[DEV_2],
+ digests_attrs_list=[self._digests_attrs_before()],
+ digests_attrs_list_updated=[self._digests_attrs_after()],
+ )
+
+ def test_transient_digests_attrs_all_nodes_multi_value(self):
+ self.assert_command_success(
+ unfence=[DEV_2],
+ digests_attrs_list=len(self.existing_nodes)
+ * [self._digests_attrs_before_multi()],
+ digests_attrs_list_updated=len(self.existing_nodes)
+ * [self._digests_attrs_after_multi()],
+ )
+
+ def test_transient_digests_attrs_not_on_all_nodes_multi_value(self):
+ self.assert_command_success(
+ unfence=[DEV_2],
+ digests_attrs_list=[self._digests_attrs_before()],
+ digests_attrs_list_updated=[self._digests_attrs_after()],
+ )
+
+ def test_transient_digests_attrs_not_all_digest_types(self):
+ self.assert_command_success(
+ unfence=[DEV_2],
+ digests_attrs_list=len(self.existing_nodes)
+ * [self._digests_attrs_before()[0:1]],
+ digests_attrs_list_updated=len(self.existing_nodes)
+ * [self._digests_attrs_after()[0:1]],
+ )
+
+ def test_transient_digests_attrs_without_digests_attrs(self):
+ self.assert_command_success(
+ unfence=[DEV_2],
+ digests_attrs_list=len(self.existing_nodes) * [[]],
+ digests_attrs_list_updated=len(self.existing_nodes) * [[]],
+ )
+
+ def test_transient_digests_attrs_without_last_comma(self):
+ self.assert_command_success(
+ unfence=[DEV_2],
+ digests_attrs_list=[self._digests_attrs_before(last_comma=False)],
+ digests_attrs_list_updated=[
+ self._digests_attrs_after(last_comma=False)
+ ],
+ )
+
+ def test_transient_digests_attrs_without_last_comma_multi_value(self):
+ self.assert_command_success(
+ unfence=[DEV_2],
+ digests_attrs_list=[
+ self._digests_attrs_before_multi(last_comma=False)
+ ],
+ digests_attrs_list_updated=[
+ self._digests_attrs_after_multi(last_comma=False)
+ ],
+ )
+
+ def test_transient_digests_attrs_no_digest_for_our_stonith_id(self):
+ digests_attrs_list = len(self.existing_nodes) * [
+ [
+ ("digests-all", DIGEST_ATTR_VALUE_GOOD_FORMAT),
+ ("digests-secure", DIGEST_ATTR_VALUE_GOOD_FORMAT),
+ ]
+ ]
+ self.assert_command_success(
+ unfence=[DEV_2],
+ digests_attrs_list=digests_attrs_list,
+ digests_attrs_list_updated=digests_attrs_list,
+ )
+
+ def test_transient_digests_attrs_digests_with_empty_value(self):
+ digests_attrs_list = len(self.existing_nodes) * [
+ [("digests-all", ""), ("digests-secure", "")]
+ ]
+ self.assert_command_success(
+ unfence=[DEV_2],
+ digests_attrs_list=digests_attrs_list,
+ digests_attrs_list_updated=digests_attrs_list,
+ )
+
+
@mock.patch.object(
settings,
"pacemaker_api_result_schema",
@@ -1334,3 +1627,47 @@ class TestUpdateScsiDevicesAddRemoveFailuresScsi(
UpdateScsiDevicesAddRemoveFailuresBaseMixin, ScsiMixin, TestCase
):
pass
+
+
+@mock.patch.object(
+ settings,
+ "pacemaker_api_result_schema",
+ rc("pcmk_api_rng/api-result.rng"),
+)
+class TestUpdateScsiDevicesDigestsSetScsi(
+ UpdateScsiDevicesDigestsBase, ScsiMixin, CommandSetMixin, TestCase
+):
+ pass
+
+
+@mock.patch.object(
+ settings,
+ "pacemaker_api_result_schema",
+ rc("pcmk_api_rng/api-result.rng"),
+)
+class TestUpdateScsiDevicesDigestsAddRemoveScsi(
+ UpdateScsiDevicesDigestsBase, ScsiMixin, CommandAddRemoveMixin, TestCase
+):
+ pass
+
+
+@mock.patch.object(
+ settings,
+ "pacemaker_api_result_schema",
+ rc("pcmk_api_rng/api-result.rng"),
+)
+class TestUpdateScsiDevicesDigestsSetMpath(
+ UpdateScsiDevicesDigestsBase, MpathMixin, CommandSetMixin, TestCase
+):
+ pass
+
+
+@mock.patch.object(
+ settings,
+ "pacemaker_api_result_schema",
+ rc("pcmk_api_rng/api-result.rng"),
+)
+class TestUpdateScsiDevicesDigestsAddRemoveMpath(
+ UpdateScsiDevicesDigestsBase, MpathMixin, CommandAddRemoveMixin, TestCase
+):
+ pass
--
2.39.2
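
Editor's note (illustration, not part of the patch above): _get_transient_digest_value() rewrites a comma-separated attribute value of the form <stonith_id>:<stonith_type>:<digest>,... and replaces only the entry belonging to the stonith resource being updated; empty items are kept, so a trailing comma survives. A standalone sketch of the same replacement with invented ids and digest values:

# Standalone re-implementation for illustration; the resource ids and digests
# below are made up.
def replace_digest(old_value, stonith_id, stonith_type, digest):
    parts = []
    for item in old_value.split(","):
        if item:
            res_id, res_type, _old_digest = item.split(":")  # ValueError on a malformed entry
            if res_id == stonith_id and res_type == stonith_type:
                item = ":".join([stonith_id, stonith_type, digest])
        parts.append(item)  # empty items are preserved, so a trailing comma survives
    return ",".join(parts)

old = "fence-mpath:fence_mpath:digest-0,fence-scsi:fence_scsi:digest-0,"
print(replace_digest(old, "fence-scsi", "fence_scsi", "digest-1"))
# fence-mpath:fence_mpath:digest-0,fence-scsi:fence_scsi:digest-1,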

View File

@ -1,7 +1,7 @@
From 4470259655fa10cb5908fee00653483e7056f1a7 Mon Sep 17 00:00:00 2001 From 854efcf148c82e5a5e4f0afd71cc3333ea4a8ce4 Mon Sep 17 00:00:00 2001
From: Ivan Devat <idevat@redhat.com> From: Ivan Devat <idevat@redhat.com>
Date: Tue, 20 Nov 2018 15:03:56 +0100 Date: Tue, 20 Nov 2018 15:03:56 +0100
Subject: [PATCH] do not support cluster setup with udp(u) transport Subject: [PATCH 1/2] do not support cluster setup with udp(u) transport
--- ---
pcs/pcs.8.in | 2 ++ pcs/pcs.8.in | 2 ++
@ -10,10 +10,10 @@ Subject: [PATCH] do not support cluster setup with udp(u) transport
3 files changed, 6 insertions(+) 3 files changed, 6 insertions(+)
diff --git a/pcs/pcs.8.in b/pcs/pcs.8.in diff --git a/pcs/pcs.8.in b/pcs/pcs.8.in
index d1a6dcf2..cd00f8ac 100644 index d504e8b4..93202d05 100644
--- a/pcs/pcs.8.in --- a/pcs/pcs.8.in
+++ b/pcs/pcs.8.in +++ b/pcs/pcs.8.in
@@ -436,6 +436,8 @@ By default, encryption is enabled with cipher=aes256 and hash=sha256. To disable @@ -438,6 +438,8 @@ By default, encryption is enabled with cipher=aes256 and hash=sha256. To disable
Transports udp and udpu: Transports udp and udpu:
.br .br
@ -23,10 +23,10 @@ index d1a6dcf2..cd00f8ac 100644
.br .br
Transport options are: ip_version, netmtu Transport options are: ip_version, netmtu
diff --git a/pcs/usage.py b/pcs/usage.py diff --git a/pcs/usage.py b/pcs/usage.py
index c3174d82..0a6ffcb6 100644 index f4b84202..ee10370a 100644
--- a/pcs/usage.py --- a/pcs/usage.py
+++ b/pcs/usage.py +++ b/pcs/usage.py
@@ -1004,6 +1004,7 @@ Commands: @@ -1038,6 +1038,7 @@ Commands:
hash=sha256. To disable encryption, set cipher=none and hash=none. hash=sha256. To disable encryption, set cipher=none and hash=none.
Transports udp and udpu: Transports udp and udpu:
@ -49,5 +49,5 @@ index 2f26e831..a7702ac4 100644
#csetup-transport-options.knet .without-knet #csetup-transport-options.knet .without-knet
{ {
-- --
2.38.1 2.43.0

View File

@ -1,40 +0,0 @@
From 91d13a82a0803f2a4653a2ec9379a27f4555dcb5 Mon Sep 17 00:00:00 2001
From: Mamoru TASAKA <mtasaka@fedoraproject.org>
Date: Thu, 8 Dec 2022 22:47:59 +0900
Subject: [PATCH 3/5] pcsd ruby: adjust to json 2.6.3 error message change
json 2.6.3 now removes line number information from parser
error message.
Adjust regex pattern on pcs test code for ruby to support
this error format.
Fixes #606 .
---
pcsd/test/test_config.rb | 4 ++--
1 file changed, 2 insertions(+), 2 deletions(-)
diff --git a/pcsd/test/test_config.rb b/pcsd/test/test_config.rb
index 7aaf4349..a580b24f 100644
--- a/pcsd/test/test_config.rb
+++ b/pcsd/test/test_config.rb
@@ -126,7 +126,7 @@ class TestConfig < Test::Unit::TestCase
assert_equal('error', $logger.log[0][0])
assert_match(
# the number is based on JSON gem version
- /Unable to parse pcs_settings file: \d+: unexpected token/,
+ /Unable to parse pcs_settings file: (\d+: )?unexpected token/,
$logger.log[0][1]
)
assert_equal(fixture_empty_config, cfg.text)
@@ -723,7 +723,7 @@ class TestCfgKnownHosts < Test::Unit::TestCase
assert_equal('error', $logger.log[0][0])
assert_match(
# the number is based on JSON gem version
- /Unable to parse known-hosts file: \d+: unexpected token/,
+ /Unable to parse known-hosts file: (\d+: )?unexpected token/,
$logger.log[0][1]
)
assert_empty_data(cfg)
--
2.39.0
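
Editor's note (illustration, not part of the patch above): the relaxed assertion accepts the JSON parser error both with and without the leading line number that json gems older than 2.6.3 emitted. A quick check of the same pattern in Python, with invented log messages:

import re

# Same pattern as in the Ruby test, written as a Python regex; the messages are examples.
pattern = re.compile(r"Unable to parse pcs_settings file: (\d+: )?unexpected token")

old_style = "Unable to parse pcs_settings file: 859: unexpected token at ''"  # json < 2.6.3
new_style = "Unable to parse pcs_settings file: unexpected token at ''"       # json 2.6.3

print(bool(pattern.search(old_style)))  # True
print(bool(pattern.search(new_style)))  # True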

View File

@ -1,56 +1,51 @@
Name: pcs Name: pcs
Version: 0.10.15 Version: 0.10.18
Release: 4%{?dist}.1 Release: 2%{?dist}
# https://docs.fedoraproject.org/en-US/packaging-guidelines/LicensingGuidelines/ # https://docs.fedoraproject.org/en-US/packaging-guidelines/LicensingGuidelines/
# https://fedoraproject.org/wiki/Licensing:Main?rd=Licensing#Good_Licenses # https://fedoraproject.org/wiki/Licensing:Main?rd=Licensing#Good_Licenses
# GPL-2.0-only: pcs # GPL-2.0-only: pcs
# Apache-2.0: dataclasses, tornado # Apache-2.0: dataclasses, tornado
# Apache-2.0 or BSD-3-Clause: dateutil # Apache-2.0 or BSD-3-Clause: dateutil
# MIT: backports, dacite, daemons, ember, ethon, handlebars, jquery, jquery-ui, # MIT: backports, dacite, ember, ethon, handlebars, jquery, jquery-ui,
# mustermann, rack, rack-protection, rack-test, sinatra, tilt # mustermann, rack, rack-protection, rack-test, sinatra, tilt
# GPL-2.0-only or Ruby: eventmachine, json # MIT and (BSD-2-Clause or GPL-2.0-or-later): nio4r
# (GPL-2.0-only or Ruby) and BSD-2-Clause: thin # GPL-2.0-only or Ruby: json
# BSD-2-Clause or Ruby: open4, ruby2_keywords # BSD-2-Clause or Ruby: open4, ruby2_keywords
# BSD-3-Clause: puma
# BSD-3-Clause and MIT: ffi # BSD-3-Clause and MIT: ffi
License: GPL-2.0-only AND Apache-2.0 AND MIT AND BSD-3-Clause AND (GPL-2.0-only OR Ruby) AND (BSD-2-Clause OR Ruby) AND BSD-2-Clause AND (Apache-2.0 OR BSD-3-Clause) License: GPL-2.0-only AND Apache-2.0 AND MIT AND BSD-3-Clause AND (Apache-2.0 OR BSD-3-Clause) AND (BSD-2-Clause OR Ruby) AND (BSD-2-Clause OR GPL-2.0-or-later) AND (GPL-2.0-only or Ruby)
URL: https://github.com/ClusterLabs/pcs URL: https://github.com/ClusterLabs/pcs
Group: System Environment/Base Group: System Environment/Base
Summary: Pacemaker Configuration System Summary: Pacemaker/Corosync Configuration System
#building only for architectures with pacemaker and corosync available #building only for architectures with pacemaker and corosync available
ExclusiveArch: i686 x86_64 s390x ppc64le aarch64 ExclusiveArch: i686 x86_64 s390x ppc64le aarch64
# When specifying a commit, use its long hash
%global version_or_commit %{version} %global version_or_commit %{version}
# %%global version_or_commit %%{version}.27-cb2fb # %%global version_or_commit 1fa11fa39029896939a5545968ed60ede714b992
%global pcs_source_name %{name}-%{version_or_commit} %global pcs_source_name %{name}-%{version_or_commit}
# ui_commit can be determined by hash, tag or branch
%global ui_commit 0.1.13
%global ui_modules_version 0.1.13
%global ui_src_name pcs-web-ui-%{ui_commit}
%global pcs_snmp_pkg_name pcs-snmp %global pcs_snmp_pkg_name pcs-snmp
%global pyagentx_version 0.4.pcs.2 %global pyagentx_version 0.4.pcs.2
%global dataclasses_version 0.8 %global dataclasses_version 0.8
%global dacite_version 1.6.0 %global dacite_version 1.8.1
%global dateutil_version 2.8.2 %global dateutil_version 2.8.2
%global version_rubygem_backports 3.23.0 %global version_rubygem_backports 3.24.1
%global version_rubygem_daemons 1.4.1
%global version_rubygem_ethon 0.16.0 %global version_rubygem_ethon 0.16.0
%global version_rubygem_eventmachine 1.2.7 %global version_rubygem_ffi 1.16.3
%global version_rubygem_ffi 1.15.5
%global version_rubygem_json 2.6.3 %global version_rubygem_json 2.6.3
%global version_rubygem_mustermann 2.0.2 %global version_rubygem_mustermann 2.0.2
%global version_rubygem_nio4r 2.5.9
%global version_rubygem_open4 1.3.4 %global version_rubygem_open4 1.3.4
%global version_rubygem_rack 2.2.6.4 %global version_rubygem_puma 6.4.0
%global version_rubygem_rack 2.2.8.1
%global version_rubygem_rack_protection 2.2.4 %global version_rubygem_rack_protection 2.2.4
%global version_rubygem_rack_test 2.0.2 %global version_rubygem_rack_test 2.1.0
%global version_rubygem_rexml 3.2.5 %global version_rubygem_rexml 3.2.6
%global version_rubygem_ruby2_keywords 0.0.5 %global version_rubygem_ruby2_keywords 0.0.5
%global version_rubygem_sinatra 2.2.4 %global version_rubygem_sinatra 2.2.4
%global version_rubygem_thin 1.8.1 %global version_rubygem_tilt 2.3.0
%global version_rubygem_tilt 2.0.11
# javascript bundled libraries for old web-ui # javascript bundled libraries for old web-ui
%global ember_version 1.4.0 %global ember_version 1.4.0
@ -82,7 +77,13 @@ ExclusiveArch: i686 x86_64 s390x ppc64le aarch64
# /usr/bin/python will be removed or switched to Python 3 in the future. # /usr/bin/python will be removed or switched to Python 3 in the future.
%global __python %{__python3} %global __python %{__python3}
Source0: %{url}/archive/%{version_or_commit}/%{pcs_source_name}.tar.gz # prepend v for folder in GitHub link when using tagged tarball
%if "%{version}" == "%{version_or_commit}"
%global v_prefix v
%endif
# part after the last slash is recognized as filename in look-aside cache
Source0: %{url}/archive/%{?v_prefix}%{version_or_commit}/%{pcs_source_name}.tar.gz
Source1: HAM-logo.png Source1: HAM-logo.png
Source41: https://github.com/ondrejmular/pyagentx/archive/v%{pyagentx_version}/pyagentx-%{pyagentx_version}.tar.gz Source41: https://github.com/ondrejmular/pyagentx/archive/v%{pyagentx_version}/pyagentx-%{pyagentx_version}.tar.gz
@ -106,40 +107,18 @@ Source89: https://rubygems.org/downloads/rack-protection-%{version_rubygem_rack_
Source90: https://rubygems.org/downloads/rack-test-%{version_rubygem_rack_test}.gem Source90: https://rubygems.org/downloads/rack-test-%{version_rubygem_rack_test}.gem
Source91: https://rubygems.org/downloads/sinatra-%{version_rubygem_sinatra}.gem Source91: https://rubygems.org/downloads/sinatra-%{version_rubygem_sinatra}.gem
Source92: https://rubygems.org/downloads/tilt-%{version_rubygem_tilt}.gem Source92: https://rubygems.org/downloads/tilt-%{version_rubygem_tilt}.gem
Source93: https://rubygems.org/downloads/eventmachine-%{version_rubygem_eventmachine}.gem Source93: https://rubygems.org/downloads/nio4r-%{version_rubygem_nio4r}.gem
Source94: https://rubygems.org/downloads/daemons-%{version_rubygem_daemons}.gem Source94: https://rubygems.org/downloads/puma-%{version_rubygem_puma}.gem
Source95: https://rubygems.org/downloads/thin-%{version_rubygem_thin}.gem Source95: https://rubygems.org/downloads/ruby2_keywords-%{version_rubygem_ruby2_keywords}.gem
Source96: https://rubygems.org/downloads/ruby2_keywords-%{version_rubygem_ruby2_keywords}.gem
Source100: https://github.com/ClusterLabs/pcs-web-ui/archive/%{ui_commit}/%{ui_src_name}.tar.gz
Source101: https://github.com/ClusterLabs/pcs-web-ui/releases/download/%{ui_modules_version}/pcs-web-ui-node-modules-%{ui_modules_version}.tar.xz
# Patches from upstream.
# They should come before downstream patches to avoid unnecessary conflicts.
# Z-streams are exception here: they can come from upstream but should be
# applied at the end to keep z-stream changes as straightforward as possible.
# pcs patches: <= 200 # pcs patches: <= 200
# Patch1: bzNUMBER-01-name.patch # Patch1: bzNUMBER-01-name.patch
Patch1: do-not-support-cluster-setup-with-udp-u-transport.patch Patch1: do-not-support-cluster-setup-with-udp-u-transport.patch
Patch2: bz2151511-01-add-warning-when-updating-a-misconfigured-resource.patch Patch2: RHEL-17280-01-disable-new-webui-routes.patch
Patch3: bz2151166-01-fix-displaying-bool-and-integer-values.patch
Patch4: pcsd-rubygem-json-error-message-change.patch
Patch5: bz2159455-01-add-agent-validation-option.patch
Patch6: bz2158804-01-fix-stonith-watchdog-timeout-validation.patch
Patch7: bz2166243-01-fix-stonith-watchdog-timeout-offline-update.patch
Patch8: bz2180700-01-fix-pcs-config-checkpoint-diff.patch
Patch9: bz2180706-01-fix-pcs-stonith-update-scsi-devices.patch
# Downstream patches do not come from upstream. They adapt pcs for specific
# RHEL needs.
# Patch101: do-not-support-cluster-setup-with-udp-u-transport.patch
# ui patches: >200
# git for patches # git for patches
BuildRequires: git-core BuildRequires: git-core
#printf from coreutils is used in makefile # printf from coreutils is used in makefile, head is used in spec
BuildRequires: coreutils BuildRequires: coreutils
# python for pcs # python for pcs
BuildRequires: platform-python BuildRequires: platform-python
@ -180,9 +159,6 @@ BuildRequires: overpass-fonts
# Red Hat logo for creating symlink of favicon # Red Hat logo for creating symlink of favicon
BuildRequires: redhat-logos BuildRequires: redhat-logos
# for building web ui
BuildRequires: npm
# cluster stack packages for pkg-config # cluster stack packages for pkg-config
BuildRequires: booth BuildRequires: booth
BuildRequires: corosync-qdevice-devel BuildRequires: corosync-qdevice-devel
@ -232,20 +208,19 @@ Provides: bundled(dataclasses) = %{dataclasses_version}
Provides: bundled(dacite) = %{dacite_version} Provides: bundled(dacite) = %{dacite_version}
Provides: bundled(dateutil) = %{dateutil_version} Provides: bundled(dateutil) = %{dateutil_version}
Provides: bundled(backports) = %{version_rubygem_backports} Provides: bundled(backports) = %{version_rubygem_backports}
Provides: bundled(daemons) = %{version_rubygem_daemons}
Provides: bundled(ethon) = %{version_rubygem_ethon} Provides: bundled(ethon) = %{version_rubygem_ethon}
Provides: bundled(eventmachine) = %{version_rubygem_eventmachine}
Provides: bundled(ffi) = %{version_rubygem_ffi} Provides: bundled(ffi) = %{version_rubygem_ffi}
Provides: bundled(json) = %{version_rubygem_json} Provides: bundled(json) = %{version_rubygem_json}
Provides: bundled(mustermann) = %{version_rubygem_mustermann} Provides: bundled(mustermann) = %{version_rubygem_mustermann}
Provides: bundled(nio4r) = %{version_rubygem_nio4r}
Provides: bundled(open4) = %{version_rubygem_open4} Provides: bundled(open4) = %{version_rubygem_open4}
Provides: bundled(puma) = %{version_rubygem_puma}
Provides: bundled(rack) = %{version_rubygem_rack} Provides: bundled(rack) = %{version_rubygem_rack}
Provides: bundled(rack_protection) = %{version_rubygem_rack_protection} Provides: bundled(rack_protection) = %{version_rubygem_rack_protection}
Provides: bundled(rack_test) = %{version_rubygem_rack_test} Provides: bundled(rack_test) = %{version_rubygem_rack_test}
Provides: bundled(rexml) = %{version_rubygem_rexml} Provides: bundled(rexml) = %{version_rubygem_rexml}
Provides: bundled(ruby2_keywords) = %{version_rubygem_ruby2_keywords} Provides: bundled(ruby2_keywords) = %{version_rubygem_ruby2_keywords}
Provides: bundled(sinatra) = %{version_rubygem_sinatra} Provides: bundled(sinatra) = %{version_rubygem_sinatra}
Provides: bundled(thin) = %{version_rubygem_thin}
Provides: bundled(tilt) = %{version_rubygem_tilt} Provides: bundled(tilt) = %{version_rubygem_tilt}
# javascript bundled libraries for old web-ui # javascript bundled libraries for old web-ui
@ -268,7 +243,7 @@ Summary: Pacemaker cluster SNMP agent
License: GPL-2.0-only and BSD-2-Clause License: GPL-2.0-only and BSD-2-Clause
URL: https://github.com/ClusterLabs/pcs URL: https://github.com/ClusterLabs/pcs
# tar for unpacking pyagetx source tar ball # tar for unpacking pyagentx source tarball
BuildRequires: tar BuildRequires: tar
Requires: pcs = %{version}-%{release} Requires: pcs = %{version}-%{release}
@ -323,26 +298,18 @@ update_times_patch(){
# documentation for setup/autosetup/autopatch: # documentation for setup/autosetup/autopatch:
# * http://ftp.rpm.org/max-rpm/s1-rpm-inside-macros.html # * http://ftp.rpm.org/max-rpm/s1-rpm-inside-macros.html
# * https://rpm-software-management.github.io/rpm/manual/autosetup.html # * https://rpm-software-management.github.io/rpm/manual/autosetup.html
# patch web-ui sources
%autosetup -D -T -b 100 -a 101 -S git -n %{ui_src_name} -N
%autopatch -p1 -m 201
# update_times_patch %%{PATCH201}
# patch pcs sources # patch pcs sources
%autosetup -S git -n %{pcs_source_name} -N %autosetup -S git -n %{pcs_source_name} -N
%autopatch -p1 -M 200 %autopatch -p1 -M 200
# update_times_patch %%{PATCH1}
update_times_patch %{PATCH1} update_times_patch %{PATCH1}
update_times_patch %{PATCH2} update_times_patch %{PATCH2}
update_times_patch %{PATCH3}
update_times_patch %{PATCH4}
update_times_patch %{PATCH5}
update_times_patch %{PATCH6}
update_times_patch %{PATCH7}
update_times_patch %{PATCH8}
update_times_patch %{PATCH9}
# update_times_patch %{PATCH101} # generate .tarball-version if building from an untagged commit, not a released version
# autogen uses git-version-gen which uses .tarball-version for generating version number
%if "%{version}" != "%{version_or_commit}"
echo "%version+$(echo "%{version_or_commit}" | head -c 8)" > %{_builddir}/%{pcs_source_name}/.tarball-version
%endif
cp -f %SOURCE1 %{pcsd_public_dir}/images cp -f %SOURCE1 %{pcsd_public_dir}/images
@ -368,7 +335,6 @@ cp -f %SOURCE92 %{rubygem_cache_dir}
cp -f %SOURCE93 %{rubygem_cache_dir} cp -f %SOURCE93 %{rubygem_cache_dir}
cp -f %SOURCE94 %{rubygem_cache_dir} cp -f %SOURCE94 %{rubygem_cache_dir}
cp -f %SOURCE95 %{rubygem_cache_dir} cp -f %SOURCE95 %{rubygem_cache_dir}
cp -f %SOURCE96 %{rubygem_cache_dir}
# 2) prepare python bundles # 2) prepare python bundles
@ -383,34 +349,35 @@ cp -f %SOURCE45 rpm/
%define debug_package %{nil} %define debug_package %{nil}
./autogen.sh ./autogen.sh
%{configure} --enable-local-build --enable-use-local-cache-only --enable-individual-bundling --enable-booth-enable-authfile-set --enable-booth-enable-authfile-unset PYTHON=%{__python3} ruby_CFLAGS="%{optflags}" ruby_LIBS="%{build_ldflags}" %{configure} --enable-local-build --enable-use-local-cache-only \
--enable-individual-bundling \
--enable-booth-enable-authfile-set --enable-booth-enable-authfile-unset \
PYTHON=%{__python3} ruby_CFLAGS="%{optflags}" ruby_LIBS="%{build_ldflags}"
make all make all
# build pcs-web-ui
make -C %{_builddir}/%{ui_src_name} build BUILD_USE_EXISTING_NODE_MODULES=true
%install %install
rm -rf $RPM_BUILD_ROOT rm -rf $RPM_BUILD_ROOT
pwd pwd
%make_install %make_install
# something like make install for pcs-web-ui # RHEL-7715 - fix rubygem permissions - remove write access for owner's group
cp -r %{_builddir}/%{ui_src_name}/build ${RPM_BUILD_ROOT}%{_libdir}/%{pcsd_public_dir}/ui # and other users
chmod --recursive g-w,o-w ${RPM_BUILD_ROOT}%{_libdir}/%{rubygem_bundle_dir}
# prepare license files # prepare license files
# some rubygems do not have a license file (thin) # some rubygems do not have a license file (thin)
mv %{rubygem_bundle_dir}/gems/backports-%{version_rubygem_backports}/LICENSE.txt backports_LICENSE.txt mv %{rubygem_bundle_dir}/gems/backports-%{version_rubygem_backports}/LICENSE.txt backports_LICENSE.txt
mv %{rubygem_bundle_dir}/gems/daemons-%{version_rubygem_daemons}/LICENSE daemons_LICENSE
mv %{rubygem_bundle_dir}/gems/ethon-%{version_rubygem_ethon}/LICENSE ethon_LICENSE mv %{rubygem_bundle_dir}/gems/ethon-%{version_rubygem_ethon}/LICENSE ethon_LICENSE
mv %{rubygem_bundle_dir}/gems/eventmachine-%{version_rubygem_eventmachine}/LICENSE eventmachine_LICENSE
mv %{rubygem_bundle_dir}/gems/eventmachine-%{version_rubygem_eventmachine}/GNU eventmachine_GNU
mv %{rubygem_bundle_dir}/gems/ffi-%{version_rubygem_ffi}/COPYING ffi_COPYING mv %{rubygem_bundle_dir}/gems/ffi-%{version_rubygem_ffi}/COPYING ffi_COPYING
mv %{rubygem_bundle_dir}/gems/ffi-%{version_rubygem_ffi}/LICENSE ffi_LICENSE mv %{rubygem_bundle_dir}/gems/ffi-%{version_rubygem_ffi}/LICENSE ffi_LICENSE
mv %{rubygem_bundle_dir}/gems/ffi-%{version_rubygem_ffi}/LICENSE.SPECS ffi_LICENSE.SPECS mv %{rubygem_bundle_dir}/gems/ffi-%{version_rubygem_ffi}/LICENSE.SPECS ffi_LICENSE.SPECS
mv %{rubygem_bundle_dir}/gems/json-%{version_rubygem_json}/LICENSE json_LICENSE mv %{rubygem_bundle_dir}/gems/json-%{version_rubygem_json}/LICENSE json_LICENSE
mv %{rubygem_bundle_dir}/gems/mustermann-%{version_rubygem_mustermann}/LICENSE mustermann_LICENSE mv %{rubygem_bundle_dir}/gems/mustermann-%{version_rubygem_mustermann}/LICENSE mustermann_LICENSE
mv %{rubygem_bundle_dir}/gems/nio4r-%{version_rubygem_nio4r}/license.md nio4r_license.md
mv %{rubygem_bundle_dir}/gems/nio4r-%{version_rubygem_nio4r}/ext/libev/LICENSE nio4r_libev_LICENSE
mv %{rubygem_bundle_dir}/gems/open4-%{version_rubygem_open4}/LICENSE open4_LICENSE mv %{rubygem_bundle_dir}/gems/open4-%{version_rubygem_open4}/LICENSE open4_LICENSE
mv %{rubygem_bundle_dir}/gems/puma-%{version_rubygem_puma}/LICENSE puma_LICENSE
mv %{rubygem_bundle_dir}/gems/rack-%{version_rubygem_rack}/MIT-LICENSE rack_MIT-LICENSE mv %{rubygem_bundle_dir}/gems/rack-%{version_rubygem_rack}/MIT-LICENSE rack_MIT-LICENSE
mv %{rubygem_bundle_dir}/gems/rack-protection-%{version_rubygem_rack_protection}/License rack-protection_License mv %{rubygem_bundle_dir}/gems/rack-protection-%{version_rubygem_rack_protection}/License rack-protection_License
mv %{rubygem_bundle_dir}/gems/rack-test-%{version_rubygem_rack_test}/MIT-LICENSE.txt rack-test_MIT-LICENSE.txt mv %{rubygem_bundle_dir}/gems/rack-test-%{version_rubygem_rack_test}/MIT-LICENSE.txt rack-test_MIT-LICENSE.txt
@ -451,10 +418,10 @@ rm -rf $RPM_BUILD_ROOT/usr/lib/debug
rm -rf $RPM_BUILD_ROOT%{_prefix}/src/debug rm -rf $RPM_BUILD_ROOT%{_prefix}/src/debug
# We can remove files required for gem compilation # We can remove files required for gem compilation
rm -rf $RPM_BUILD_ROOT%{_libdir}/%{rubygem_bundle_dir}/gems/eventmachine-%{version_rubygem_eventmachine}/ext
rm -rf $RPM_BUILD_ROOT%{_libdir}/%{rubygem_bundle_dir}/gems/ffi-%{version_rubygem_ffi}/ext rm -rf $RPM_BUILD_ROOT%{_libdir}/%{rubygem_bundle_dir}/gems/ffi-%{version_rubygem_ffi}/ext
rm -rf $RPM_BUILD_ROOT%{_libdir}/%{rubygem_bundle_dir}/gems/json-%{version_rubygem_json}/ext rm -rf $RPM_BUILD_ROOT%{_libdir}/%{rubygem_bundle_dir}/gems/json-%{version_rubygem_json}/ext
rm -rf $RPM_BUILD_ROOT%{_libdir}/%{rubygem_bundle_dir}/gems/thin-%{version_rubygem_thin}/ext rm -rf $RPM_BUILD_ROOT%{_libdir}/%{rubygem_bundle_dir}/gems/nio4r-%{version_rubygem_nio4r}/ext
rm -rf $RPM_BUILD_ROOT%{_libdir}/%{rubygem_bundle_dir}/gems/puma-%{version_rubygem_puma}/ext
%check %check
run_all_tests(){ run_all_tests(){
@ -537,16 +504,16 @@ remove_all_tests
%license COPYING %license COPYING
# rugygem licenses # rugygem licenses
%license backports_LICENSE.txt %license backports_LICENSE.txt
%license daemons_LICENSE
%license ethon_LICENSE %license ethon_LICENSE
%license eventmachine_LICENSE
%license eventmachine_GNU
%license ffi_COPYING %license ffi_COPYING
%license ffi_LICENSE %license ffi_LICENSE
%license ffi_LICENSE.SPECS %license ffi_LICENSE.SPECS
%license json_LICENSE %license json_LICENSE
%license mustermann_LICENSE %license mustermann_LICENSE
%license nio4r_license.md
%license nio4r_libev_LICENSE
%license open4_LICENSE %license open4_LICENSE
%license puma_LICENSE
%license rack_MIT-LICENSE %license rack_MIT-LICENSE
%license rack-protection_License %license rack-protection_License
%license rack-test_MIT-LICENSE.txt %license rack-test_MIT-LICENSE.txt
@ -593,11 +560,50 @@ remove_all_tests
%license pyagentx_LICENSE.txt %license pyagentx_LICENSE.txt
%changelog %changelog
* Thu Mar 30 2023 Michal Pospisil <mpospisi@redhat.com> - 0.10.15-4.el8_8.1 * Wed Mar 20 2024 Michal Pospisil <mpospisi@redhat.com> - 0.10.18-2
- Fix displaying differences between configuration checkpoints in “pcs config checkpoint diff” command - Fixed CVE-2024-25126, CVE-2024-26141, CVE-2024-26146 in bundled dependency rack
- Fix “pcs stonith update-scsi-devices” command which was broken since Pacemaker-2.1.5-rc1 Resolves: RHEL-26445, RHEL-26447, RHEL-26449
- Updated bundled rubygem rack
- Resolves: rhbz#2180700 rhbz#2180706 rhbz#2180713 rhbz#2180974 * Mon Jan 8 2024 Michal Pospisil <mpospisi@redhat.com> - 0.10.18-1
- Rebased to the latest sources (see CHANGELOG.md)
Resolves: RHEL-7741
* Fri Dec 8 2023 Michal Pospisil <mpospisi@redhat.com> - 0.10.17-6
- Rebased to the latest upstream sources (see CHANGELOG.md)
- Remove the preview of the new pcs web interface
Resolves: RHEL-17280
* Tue Nov 14 2023 Michal Pospisil <mpospisi@redhat.com> - 0.10.17-5
- Rebased to the latest upstream sources (see CHANGELOG.md)
Resolves: RHEL-7584, RHEL-7668, RHEL-7729, RHEL-7731, RHEL-7732, RHEL-7741, RHEL-7742, RHEL-7743, RHEL-7745, RHEL-8467
- Tightened permissions of bundled rubygems to be 755 or stricter
Resolves: RHEL-7715
* Mon Nov 6 2023 Michal Pospisil <mpospisi@redhat.com> - 0.10.17-4
- No changes, fixed an error in the new quality control process
- Resolves: RHEL-15218
* Wed Nov 1 2023 Michal Pospisil <mpospisi@redhat.com> - 0.10.17-3
- No changes, testing a new quality control process
- Resolves: RHEL-15218
* Thu Jul 13 2023 Michal Pospisil <mpospisi@redhat.com> - 0.10.17-2
- Make use of filters when extracting tarballs to enhance security if provided by Python (`pcs config restore` command)
- Do not display duplicate records in commands `pcs property [config] --all` and `pcs property describe`
- Resolves: rhbz#2218841 rhbz#2219388
* Mon Jun 19 2023 Michal Pospisil <mpospisi@redhat.com> - 0.10.17-1
- Rebased to the latest upstream sources (see CHANGELOG.md)
- Updated bundled rubygems: tilt, puma
- Resolves: rhbz#2112259 rhbz#2163439 rhbz#2166289
* Thu May 25 2023 Michal Pospisil <mpospisi@redhat.com> - 0.10.16-1
- Rebased to the latest upstream sources (see CHANGELOG.md)
- Updated bundled dependencies: dacite
- Added bundled rubygems: nio4r, puma
- Removed bundled rubygems: daemons, eventmachine, thin
- Updated bundled rubygems: backports, rack, rack-test, tilt
- Resolves: rhbz#1957591 rhbz#2022748 rhbz#2160555 rhbz#2163439 rhbz#2166289 rhbz#2166294 rhbz#2176490 rhbz#2178700 rhbz#2178707 rhbz#2179010 rhbz#2180378 rhbz#2189958
* Thu Feb 9 2023 Michal Pospisil <mpospisi@redhat.com> - 0.10.15-4 * Thu Feb 9 2023 Michal Pospisil <mpospisi@redhat.com> - 0.10.15-4
- Fixed enabling/disabling sbd when cluster is not running - Fixed enabling/disabling sbd when cluster is not running