From 36258cf45869441661ecc17a2e18d2018aa4eb30 Mon Sep 17 00:00:00 2001
From: Viktor Ashirov
Date: Thu, 8 Jan 2026 13:16:46 +0100
Subject: [PATCH] Update to 3.2.0

- Resolves: RHEL-18041 - [RFE] When there are multiple backends cache auto-tune should adapt its tuning
- Resolves: RHEL-58682 - [RFE] modify entry cache eviction strategy to allow large groups to stay in the cache
- Resolves: RHEL-64019 - Units for changing MDB max size are not consistent across different tools
- Resolves: RHEL-83274 - Replication online reinitialization of a large database gets stalled.
- Resolves: RHEL-83852 - LDAP high CPU usage while handling indexes with IDL scan limit at INT_MAX
- Resolves: RHEL-86534 - [RFE] Support Dynamic Groups similar to OpenLDAP
- Resolves: RHEL-89601 - (&(cn:dn:=groups)) no longer returns results
- Resolves: RHEL-94025 - PAM Pass Through Authentication Plugin processes requests for accounts that do not meet the criteria specified in the `pamFilter` option.
- Resolves: RHEL-95395 - Getting "build_candidate_list - Database error 11" messages after migrating to LMDB.
- Resolves: RHEL-96196 - Duplicate/double local password policy entry display from redhat directory server webconsole
- Resolves: RHEL-99331 - [RFE] Need an option with dsconf to delete all the conflicts at once instead of deleting each conflict one after the other
- Resolves: RHEL-105578 - IPA health check up script shows time skew is over 24 hours
- Resolves: RHEL-106502 - Ignore the memberOfDeferredUpdate setting when LMDB is used.
- Resolves: RHEL-106559 - When deferred memberof update is enabled after the server crashed it should not launch memberof fixup task by default
- Resolves: RHEL-106849 - Abort the offline import if the root entry cannot be added.
- Resolves: RHEL-107003 - Improve output dsctl dbverify when backend does not exist
- Resolves: RHEL-109113 - [RFE] memberOf plugin - Add scope for specific groups
- Resolves: RHEL-111219 - Attribute uniqueness is not enforced upon modrdn operation
- Resolves: RHEL-113965 - RetroCL plugin generates invalid LDIF
- Resolves: RHEL-115179 - Several password related attributes are not replicating from the Replicas to the Masters
- Resolves: RHEL-115484 - ns-slapd crash in libdb, possible memory corruption
- Resolves: RHEL-116060 - Fix paged result search locking
- Resolves: RHEL-117124 - Missing access JSON logging for TLS/Client auth
- Resolves: RHEL-117140 - Changelog trimming - add number of scanned entries to the log
- Resolves: RHEL-117520 - Typo in errors log after a Memberof fixup task.
- Resolves: RHEL-121208 - [RFE] Make nsslapd-haproxy-trusted-ip accept whole subnets
- Resolves: RHEL-122625 - ipa-healthcheck is complaining about missing or incorrectly configured system indexes.
- Resolves: RHEL-122674 - [WebUI] Replication tab crashes after enabling replication as a consumer
- Resolves: RHEL-123220 - Improve the way to detect asynchronous operations in the access logs
- Resolves: RHEL-123275 - The new ipahealthcheck test ipahealthcheck.ds.backends.BackendsCheck raises CRITICAL issue
- Resolves: RHEL-123663 - Online initialization of consumers fails with error -23
- Resolves: RHEL-123664 - RHDS 12.6 doesn't handle 'ldapsearch' filter with space char in DN name correctly
- Resolves: RHEL-123762 - 389-ds-base OpenScanHub Leaks Detected
- Resolves: RHEL-124694 - Access logs are not getting deleted as configured.
- Resolves: RHEL-126535 - memory corruption in alias entry plugin
- Resolves: RHEL-128906 - Scalability issue of replication online initialization with large database
- Resolves: RHEL-129675 - Can't locate CSN error seen in errors log when replicating after importing data from ldif files
- Resolves: RHEL-131129 - ns-slapd[2233]: segfault at 0 ip 00007f1f1d7cd7fc sp 00007f1e775fc070 error 4 in libjemalloc.so.2[7f1f1d738000+ac000] on 389-ds-base-2.6.1-11
- Resolves: RHEL-133795 - Memory leak observed in ns-slapd with 389-ds-base-2.6.1-12
- Resolves: RHEL-139826 - Rebase 389-ds-base to 3.2.x
---
 .gitignore | 4 +-
 ...nd-creation-cleanup-and-Database-UI-.patch | 488 ----
 ...g-replication-online-total-init-the-.patch | 318 +++
 ...-6852-Move-ds-CLI-tools-back-to-sbin.patch | 56 -
 ...e-Revise-paged-result-search-locking.patch | 765 ++++++
 ...ULL-subsystem-crash-in-JSON-error-lo.patch | 380 ---
 ...ate-parametrized-docstring-for-tests.patch | 205 --
 ...ue-6782-Improve-paged-result-locking.patch | 127 -
 ...9-replica.py-is-using-nonexistent-da.patch | 37 -
 ...dd_exclude_subtree-and-remove_exclud.patch | 515 ----
 ...iq-allow-specifying-match-rules-in-t.patch | 45 -
 ...I-Properly-handle-disabled-NDN-cache.patch | 1201 ---------
 ...tor-for-improved-data-management-685.patch | 2237 -----------------
 ...essSanitizer-memory-leak-in-mdb_init.patch | 65 -
 ...8-AddressSanitizer-leak-in-do_search.patch | 58 -
 ...ssSanitizer-leak-in-agmt_update_init.patch | 58 -
 ...ilter-is-not-fully-applying-matching.patch | 169 --
 ...essed-log-rotation-creates-files-wit.patch | 163 --
 ...nt-repeated-disconnect-logs-during-s.patch | 116 -
 ...ng-access-JSON-logging-for-TLS-Clien.patch | 590 -----
 ...ate-parametrized-docstring-for-tests.patch | 43 -
 ...f-Replicas-with-the-consumer-role-al.patch | 67 -
 ...ser-that-is-updated-during-password-.patch | 360 ---
 0021-Issue-6352-Fix-DeprecationWarning.patch | 37 -
 ...-6880-Fix-ds_logs-test-suite-failure.patch | 38 -
 ...01-Update-changelog-trimming-logging.patch | 53 -
 ...-if-repl-keep-alive-entry-can-not-be.patch | 98 -
 ...est-for-entryUSN-overflow-on-failed-.patch | 352 ---
 ...est-for-numSubordinates-replication-.patch | 172 --
 ...k-password-hashes-in-audit-logs-6885.patch | 814 ------
 ...isk-monitoring-test-failures-and-imp.patch | 1719 -------------
 ...y-leak-in-roles_cache_create_object_.patch | 262 --
 ...e-changelog-trimming-logging-fix-tes.patch | 64 -
 ...llow-system-to-manage-uid-gid-at-sta.patch | 32 -
 ...6468-CLI-Fix-default-error-log-level.patch | 31 -
 ...apd-crashes-when-a-referral-is-added.patch | 97 -
 ...13-6886-6250-Adjust-xfail-marks-6914.patch | 222 --
 0035-Issue-6875-Fix-dsidm-tests.patch | 378 ---
 ...e-6519-Add-basic-dsidm-account-tests.patch | 503 ----
 ...f-monitor-server-fails-with-ldapi-du.patch | 268 --
 ...user-subtree-policy-creation-idempot.patch | 569 -----
 ...bordinates-tombstoneNumSubordinates-.patch | 1460 -----------
 ...ssue-6910-Fix-latest-coverity-issues.patch | 574 -----
 ...lation-failure-with-rust-1.89-on-Fed.patch | 35 -
 389-ds-base.spec | 143 +-
 main.fmf | 2 +-
 sources | 6 +-
 47 files changed, 1135 insertions(+), 14861 deletions(-)
 delete mode 100644 0001-Issue-6822-Backend-creation-cleanup-and-Database-UI-.patch
 create mode 100644 0001-Issue-7096-During-replication-online-total-init-the-.patch
 delete mode 100644 0002-Issue-6852-Move-ds-CLI-tools-back-to-sbin.patch
 create mode 100644 0002-Issue-Revise-paged-result-search-locking.patch
 delete mode 100644 0003-Issue-6663-Fix-NULL-subsystem-crash-in-JSON-error-lo.patch
 delete mode 100644 0004-Issue-6829-Update-parametrized-docstring-for-tests.patch
 delete mode 100644 0005-Issue-6782-Improve-paged-result-locking.patch
 delete mode 100644 0006-Issue-6838-lib389-replica.py-is-using-nonexistent-da.patch
 delete mode 100644 0007-Issue-6753-Add-add_exclude_subtree-and-remove_exclud.patch
 delete mode 100644 0008-Issue-6857-uiduniq-allow-specifying-match-rules-in-t.patch
 delete mode 100644 0009-Issue-6756-CLI-UI-Properly-handle-disabled-NDN-cache.patch
 delete mode 100644 0010-Issue-6854-Refactor-for-improved-data-management-685.patch
 delete mode 100644 0011-Issue-6850-AddressSanitizer-memory-leak-in-mdb_init.patch
 delete mode 100644 0012-Issue-6848-AddressSanitizer-leak-in-do_search.patch
 delete mode 100644 0013-Issue-6865-AddressSanitizer-leak-in-agmt_update_init.patch
 delete mode 100644 0014-Issue-6859-str2filter-is-not-fully-applying-matching.patch
 delete mode 100644 0015-Issue-6872-compressed-log-rotation-creates-files-wit.patch
 delete mode 100644 0016-Issue-6878-Prevent-repeated-disconnect-logs-during-s.patch
 delete mode 100644 0017-Issue-6888-Missing-access-JSON-logging-for-TLS-Clien.patch
 delete mode 100644 0018-Issue-6829-Update-parametrized-docstring-for-tests.patch
 delete mode 100644 0019-Issue-6772-dsconf-Replicas-with-the-consumer-role-al.patch
 delete mode 100644 0020-Issue-6893-Log-user-that-is-updated-during-password-.patch
 delete mode 100644 0021-Issue-6352-Fix-DeprecationWarning.patch
 delete mode 100644 0022-Issue-6880-Fix-ds_logs-test-suite-failure.patch
 delete mode 100644 0023-Issue-6901-Update-changelog-trimming-logging.patch
 delete mode 100644 0024-Issue-6895-Crash-if-repl-keep-alive-entry-can-not-be.patch
 delete mode 100644 0025-Issue-6250-Add-test-for-entryUSN-overflow-on-failed-.patch
 delete mode 100644 0026-Issue-6594-Add-test-for-numSubordinates-replication-.patch
 delete mode 100644 0027-Issue-6884-Mask-password-hashes-in-audit-logs-6885.patch
 delete mode 100644 0028-Issue-6897-Fix-disk-monitoring-test-failures-and-imp.patch
 delete mode 100644 0029-Issue-6778-Memory-leak-in-roles_cache_create_object_.patch
 delete mode 100644 0030-Issue-6901-Update-changelog-trimming-logging-fix-tes.patch
 delete mode 100644 0031-Issue-6181-RFE-Allow-system-to-manage-uid-gid-at-sta.patch
 delete mode 100644 0032-Issue-6468-CLI-Fix-default-error-log-level.patch
 delete mode 100644 0033-Issue-6768-ns-slapd-crashes-when-a-referral-is-added.patch
 delete mode 100644 0034-Issues-6913-6886-6250-Adjust-xfail-marks-6914.patch
 delete mode 100644 0035-Issue-6875-Fix-dsidm-tests.patch
 delete mode 100644 0036-Issue-6519-Add-basic-dsidm-account-tests.patch
 delete mode 100644 0037-Issue-6940-dsconf-monitor-server-fails-with-ldapi-du.patch
 delete mode 100644 0038-Issue-6936-Make-user-subtree-policy-creation-idempot.patch
 delete mode 100644 0039-Issue-6919-numSubordinates-tombstoneNumSubordinates-.patch
 delete mode 100644 0040-Issue-6910-Fix-latest-coverity-issues.patch
 delete mode 100644 0041-Issue-6929-Compilation-failure-with-rust-1.89-on-Fed.patch

diff --git a/.gitignore b/.gitignore
index d088947..71ff1b8 100644
--- a/.gitignore
+++ b/.gitignore
@@ -2,5 +2,5 @@
 /389-ds-base-*.tar.bz2
 /jemalloc-*.tar.bz2
 /libdb-5.3.28-59.tar.bz2
-/Cargo-3.1.3-1.lock
-/vendor-3.1.3-1.tar.gz
+/Cargo-*.lock
+/vendor-*.tar.gz
diff --git a/0001-Issue-6822-Backend-creation-cleanup-and-Database-UI-.patch b/0001-Issue-6822-Backend-creation-cleanup-and-Database-UI-.patch
deleted file mode 100644
index ce7a603..0000000
--- a/0001-Issue-6822-Backend-creation-cleanup-and-Database-UI-.patch
+++ /dev/null
@@ -1,488 +0,0
@@ -From 8f68c90b69bb09563ad8aa8c365bff534e133419 Mon Sep 17 00:00:00 2001 -From: Simon Pichugin -Date: Fri, 27 Jun 2025 18:43:39 -0700 -Subject: [PATCH] Issue 6822 - Backend creation cleanup and Database UI tab - error handling (#6823) - -Description: Add rollback functionality when mapping tree creation fails -during backend creation to prevent orphaned backends. -Improve error handling in Database, Replication and Monitoring UI tabs -to gracefully handle backend get-tree command failures. - -Fixes: https://github.com/389ds/389-ds-base/issues/6822 - -Reviewed by: @mreynolds389 (Thanks!) ---- - src/cockpit/389-console/src/database.jsx | 119 ++++++++------ - src/cockpit/389-console/src/monitor.jsx | 172 +++++++++++--------- - src/cockpit/389-console/src/replication.jsx | 55 ++++--- - src/lib389/lib389/backend.py | 18 +- - 4 files changed, 210 insertions(+), 154 deletions(-) - -diff --git a/src/cockpit/389-console/src/database.jsx b/src/cockpit/389-console/src/database.jsx -index c0c4be414..276125dfc 100644 ---- a/src/cockpit/389-console/src/database.jsx -+++ b/src/cockpit/389-console/src/database.jsx -@@ -478,6 +478,59 @@ export class Database extends React.Component { - } - - loadSuffixTree(fullReset) { -+ const treeData = [ -+ { -+ name: _("Global Database Configuration"), -+ icon: , -+ id: "dbconfig", -+ }, -+ { -+ name: _("Chaining Configuration"), -+ icon: , -+ id: "chaining-config", -+ }, -+ { -+ name: _("Backups & LDIFs"), -+ icon: , -+ id: "backups", -+ }, -+ { -+ name: _("Password Policies"), -+ id: "pwp", -+ icon: , -+ children: [ -+ { -+ name: _("Global Policy"), -+ icon: , -+ id: "pwpolicy", -+ }, -+ { -+ name: _("Local Policies"), -+ icon: , -+ id: "localpwpolicy", -+ }, -+ ], -+ defaultExpanded: true -+ }, -+ { -+ name: _("Suffixes"), -+ icon: , -+ id: "suffixes-tree", -+ children: [], -+ defaultExpanded: true, -+ action: ( -+ -+ ), -+ } -+ ]; -+ - const cmd = [ - "dsconf", "-j", "ldapi://%2fvar%2frun%2fslapd-" + this.props.serverId + ".socket", - "backend", "get-tree", -@@ -491,58 +544,20 @@ export class Database extends React.Component { - suffixData = JSON.parse(content); - this.processTree(suffixData); - } -- const treeData = [ -- { -- name: _("Global Database Configuration"), -- icon: , -- id: "dbconfig", -- }, -- { -- name: _("Chaining Configuration"), -- icon: , -- id: "chaining-config", -- }, -- { -- name: _("Backups & LDIFs"), -- icon: , -- id: "backups", -- }, -- { -- name: _("Password Policies"), -- id: "pwp", -- icon: , -- children: [ -- { -- name: _("Global Policy"), -- icon: , -- id: "pwpolicy", -- }, -- { -- name: _("Local Policies"), -- icon: , -- id: "localpwpolicy", -- }, -- ], -- defaultExpanded: true -- }, -- { -- name: _("Suffixes"), -- icon: , -- id: "suffixes-tree", -- children: suffixData, -- defaultExpanded: true, -- action: ( -- -- ), -- } -- ]; -+ -+ let current_node = this.state.node_name; -+ if (fullReset) { -+ current_node = DB_CONFIG; -+ } -+ -+ treeData[4].children = suffixData; // suffixes node -+ this.setState(() => ({ -+ nodes: treeData, -+ node_name: current_node, -+ }), this.loadAttrs); -+ }) -+ .fail(err => { -+ // Handle backend get-tree failure gracefully - let current_node = this.state.node_name; - if (fullReset) { - current_node = DB_CONFIG; -diff --git a/src/cockpit/389-console/src/monitor.jsx b/src/cockpit/389-console/src/monitor.jsx -index ad48d1f87..91a8e3e37 100644 ---- a/src/cockpit/389-console/src/monitor.jsx -+++ b/src/cockpit/389-console/src/monitor.jsx -@@ -200,6 +200,84 @@ export class Monitor extends React.Component { - } 
- - loadSuffixTree(fullReset) { -+ const basicData = [ -+ { -+ name: _("Server Statistics"), -+ icon: , -+ id: "server-monitor", -+ type: "server", -+ }, -+ { -+ name: _("Replication"), -+ icon: , -+ id: "replication-monitor", -+ type: "replication", -+ defaultExpanded: true, -+ children: [ -+ { -+ name: _("Synchronization Report"), -+ icon: , -+ id: "sync-report", -+ item: "sync-report", -+ type: "repl-mon", -+ }, -+ { -+ name: _("Log Analysis"), -+ icon: , -+ id: "log-analysis", -+ item: "log-analysis", -+ type: "repl-mon", -+ } -+ ], -+ }, -+ { -+ name: _("Database"), -+ icon: , -+ id: "database-monitor", -+ type: "database", -+ children: [], // Will be populated with treeData on success -+ defaultExpanded: true, -+ }, -+ { -+ name: _("Logging"), -+ icon: , -+ id: "log-monitor", -+ defaultExpanded: true, -+ children: [ -+ { -+ name: _("Access Log"), -+ icon: , -+ id: "access-log-monitor", -+ type: "log", -+ }, -+ { -+ name: _("Audit Log"), -+ icon: , -+ id: "audit-log-monitor", -+ type: "log", -+ }, -+ { -+ name: _("Audit Failure Log"), -+ icon: , -+ id: "auditfail-log-monitor", -+ type: "log", -+ }, -+ { -+ name: _("Errors Log"), -+ icon: , -+ id: "error-log-monitor", -+ type: "log", -+ }, -+ { -+ name: _("Security Log"), -+ icon: , -+ id: "security-log-monitor", -+ type: "log", -+ }, -+ ] -+ }, -+ ]; -+ - const cmd = [ - "dsconf", "-j", "ldapi://%2fvar%2frun%2fslapd-" + this.props.serverId + ".socket", - "backend", "get-tree", -@@ -210,83 +288,7 @@ export class Monitor extends React.Component { - .done(content => { - const treeData = JSON.parse(content); - this.processTree(treeData); -- const basicData = [ -- { -- name: _("Server Statistics"), -- icon: , -- id: "server-monitor", -- type: "server", -- }, -- { -- name: _("Replication"), -- icon: , -- id: "replication-monitor", -- type: "replication", -- defaultExpanded: true, -- children: [ -- { -- name: _("Synchronization Report"), -- icon: , -- id: "sync-report", -- item: "sync-report", -- type: "repl-mon", -- }, -- { -- name: _("Log Analysis"), -- icon: , -- id: "log-analysis", -- item: "log-analysis", -- type: "repl-mon", -- } -- ], -- }, -- { -- name: _("Database"), -- icon: , -- id: "database-monitor", -- type: "database", -- children: [], -- defaultExpanded: true, -- }, -- { -- name: _("Logging"), -- icon: , -- id: "log-monitor", -- defaultExpanded: true, -- children: [ -- { -- name: _("Access Log"), -- icon: , -- id: "access-log-monitor", -- type: "log", -- }, -- { -- name: _("Audit Log"), -- icon: , -- id: "audit-log-monitor", -- type: "log", -- }, -- { -- name: _("Audit Failure Log"), -- icon: , -- id: "auditfail-log-monitor", -- type: "log", -- }, -- { -- name: _("Errors Log"), -- icon: , -- id: "error-log-monitor", -- type: "log", -- }, -- { -- name: _("Security Log"), -- icon: , -- id: "security-log-monitor", -- type: "log", -- }, -- ] -- }, -- ]; -+ - let current_node = this.state.node_name; - let type = this.state.node_type; - if (fullReset) { -@@ -296,6 +298,22 @@ export class Monitor extends React.Component { - basicData[2].children = treeData; // database node - this.processReplSuffixes(basicData[1].children); - -+ this.setState(() => ({ -+ nodes: basicData, -+ node_name: current_node, -+ node_type: type, -+ }), this.update_tree_nodes); -+ }) -+ .fail(err => { -+ // Handle backend get-tree failure gracefully -+ let current_node = this.state.node_name; -+ let type = this.state.node_type; -+ if (fullReset) { -+ current_node = "server-monitor"; -+ type = "server"; -+ } -+ this.processReplSuffixes(basicData[1].children); 
-+ - this.setState(() => ({ - nodes: basicData, - node_name: current_node, -diff --git a/src/cockpit/389-console/src/replication.jsx b/src/cockpit/389-console/src/replication.jsx -index fa492fd2a..aa535bfc7 100644 ---- a/src/cockpit/389-console/src/replication.jsx -+++ b/src/cockpit/389-console/src/replication.jsx -@@ -177,6 +177,16 @@ export class Replication extends React.Component { - loaded: false - }); - -+ const basicData = [ -+ { -+ name: _("Suffixes"), -+ icon: , -+ id: "repl-suffixes", -+ children: [], -+ defaultExpanded: true -+ } -+ ]; -+ - const cmd = [ - "dsconf", "-j", "ldapi://%2fvar%2frun%2fslapd-" + this.props.serverId + ".socket", - "backend", "get-tree", -@@ -199,15 +209,7 @@ export class Replication extends React.Component { - } - } - } -- const basicData = [ -- { -- name: _("Suffixes"), -- icon: , -- id: "repl-suffixes", -- children: [], -- defaultExpanded: true -- } -- ]; -+ - let current_node = this.state.node_name; - let current_type = this.state.node_type; - let replicated = this.state.node_replicated; -@@ -258,6 +260,19 @@ export class Replication extends React.Component { - } - - basicData[0].children = treeData; -+ this.setState({ -+ nodes: basicData, -+ node_name: current_node, -+ node_type: current_type, -+ node_replicated: replicated, -+ }, () => { this.update_tree_nodes() }); -+ }) -+ .fail(err => { -+ // Handle backend get-tree failure gracefully -+ let current_node = this.state.node_name; -+ let current_type = this.state.node_type; -+ let replicated = this.state.node_replicated; -+ - this.setState({ - nodes: basicData, - node_name: current_node, -@@ -905,18 +920,18 @@ export class Replication extends React.Component { - disableTree: false - }); - }); -- }) -- .fail(err => { -- const errMsg = JSON.parse(err); -- this.props.addNotification( -- "error", -- cockpit.format(_("Error loading replication agreements configuration - $0"), errMsg.desc) -- ); -- this.setState({ -- suffixLoading: false, -- disableTree: false -+ }) -+ .fail(err => { -+ const errMsg = JSON.parse(err); -+ this.props.addNotification( -+ "error", -+ cockpit.format(_("Error loading replication agreements configuration - $0"), errMsg.desc) -+ ); -+ this.setState({ -+ suffixLoading: false, -+ disableTree: false -+ }); - }); -- }); - }) - .fail(err => { - // changelog failure -diff --git a/src/lib389/lib389/backend.py b/src/lib389/lib389/backend.py -index 1d000ed66..53f15b6b0 100644 ---- a/src/lib389/lib389/backend.py -+++ b/src/lib389/lib389/backend.py -@@ -694,24 +694,32 @@ class Backend(DSLdapObject): - parent_suffix = properties.pop('parent', False) - - # Okay, now try to make the backend. 
-- super(Backend, self).create(dn, properties, basedn) -+ backend_obj = super(Backend, self).create(dn, properties, basedn) - - # We check if the mapping tree exists in create, so do this *after* - if create_mapping_tree is True: -- properties = { -+ mapping_tree_properties = { - 'cn': self._nprops_stash['nsslapd-suffix'], - 'nsslapd-state': 'backend', - 'nsslapd-backend': self._nprops_stash['cn'], - } - if parent_suffix: - # This is a subsuffix, set the parent suffix -- properties['nsslapd-parent-suffix'] = parent_suffix -- self._mts.create(properties=properties) -+ mapping_tree_properties['nsslapd-parent-suffix'] = parent_suffix -+ -+ try: -+ self._mts.create(properties=mapping_tree_properties) -+ except Exception as e: -+ try: -+ backend_obj.delete() -+ except Exception as cleanup_error: -+ self._instance.log.error(f"Failed to cleanup backend after mapping tree creation failure: {cleanup_error}") -+ raise e - - # We can't create the sample entries unless a mapping tree was installed. - if sample_entries is not False and create_mapping_tree is True: - self.create_sample_entries(sample_entries) -- return self -+ return backend_obj - - def delete(self): - """Deletes the backend, it's mapping tree and all related indices. --- -2.49.0 - diff --git a/0001-Issue-7096-During-replication-online-total-init-the-.patch b/0001-Issue-7096-During-replication-online-total-init-the-.patch new file mode 100644 index 0000000..a5792b6 --- /dev/null +++ b/0001-Issue-7096-During-replication-online-total-init-the-.patch @@ -0,0 +1,318 @@ +From 1c9c535888b9a850095794787d67900b04924a76 Mon Sep 17 00:00:00 2001 +From: tbordaz +Date: Wed, 7 Jan 2026 11:21:12 +0100 +Subject: [PATCH] Issue 7096 - During replication online total init the + function idl_id_is_in_idlist is not scaling with large database (#7145) + +Bug description: + During an online total initialization, the supplier sorts + the candidate list of entries so that the parents are sent before + the child entries. + With a large DB the ID array used for the sorting does not + scale. It takes so long to build the candidate list that + the connection gets closed. + +Fix description: + Instead of using an ID array, use a list of ID ranges. + +fixes: #7096 + +Reviewed by: Mark Reynolds, Pierre Rogier (Thanks !!) +--- + ldap/servers/slapd/back-ldbm/back-ldbm.h | 12 ++ + ldap/servers/slapd/back-ldbm/idl_common.c | 163 ++++++++++++++++++ + ldap/servers/slapd/back-ldbm/idl_new.c | 30 ++-- + .../servers/slapd/back-ldbm/proto-back-ldbm.h | 3 + + 4 files changed, 189 insertions(+), 19 deletions(-) + +diff --git a/ldap/servers/slapd/back-ldbm/back-ldbm.h b/ldap/servers/slapd/back-ldbm/back-ldbm.h +index 1bc36720d..b187c26bc 100644 +--- a/ldap/servers/slapd/back-ldbm/back-ldbm.h ++++ b/ldap/servers/slapd/back-ldbm/back-ldbm.h +@@ -282,6 +282,18 @@ typedef struct _idlist_set + #define INDIRECT_BLOCK(idl) ((idl)->b_nids == INDBLOCK) + #define IDL_NIDS(idl) (idl ?
(idl)->b_nids : (NIDS)0) + ++/* ++ * used by the supplier during online total init ++ * it stores the ranges of ID that are already present ++ * in the candidate list ('parentid>=1') ++ */ ++typedef struct IdRange { ++ ID first; ++ ID last; ++ struct IdRange *next; ++} IdRange_t; ++ ++ + typedef size_t idl_iterator; + + /* small hashtable implementation used in the entry cache -- the table +diff --git a/ldap/servers/slapd/back-ldbm/idl_common.c b/ldap/servers/slapd/back-ldbm/idl_common.c +index fcb0ece4b..fdc9b4e67 100644 +--- a/ldap/servers/slapd/back-ldbm/idl_common.c ++++ b/ldap/servers/slapd/back-ldbm/idl_common.c +@@ -172,6 +172,169 @@ idl_min(IDList *a, IDList *b) + return (a->b_nids > b->b_nids ? b : a); + } + ++/* ++ * This is a faster version of idl_id_is_in_idlist. ++ * idl_id_is_in_idlist uses an array of ID so lookup is expensive ++ * idl_id_is_in_idlist_ranges uses a list of ranges of ID lookup is faster ++ * returns ++ * 1: 'id' is present in idrange_list ++ * 0: 'id' is not present in idrange_list ++ */ ++int ++idl_id_is_in_idlist_ranges(IDList *idl, IdRange_t *idrange_list, ID id) ++{ ++ IdRange_t *range = idrange_list; ++ int found = 0; ++ ++ if (NULL == idl || NOID == id) { ++ return 0; /* not in the list */ ++ } ++ if (ALLIDS(idl)) { ++ return 1; /* in the list */ ++ } ++ ++ for(;range; range = range->next) { ++ if (id > range->last) { ++ /* check if it belongs to the next range */ ++ continue; ++ } ++ if (id >= range->first) { ++ /* It belongs to that range [first..last ] */ ++ found = 1; ++ break; ++ } else { ++ /* this range is after id */ ++ break; ++ } ++ } ++ return found; ++} ++ ++/* This function is used during the online total initialisation ++ * (see next function) ++ * It frees all ranges of ID in the list ++ */ ++void idrange_free(IdRange_t **head) ++{ ++ IdRange_t *curr, *sav; ++ ++ if ((head == NULL) || (*head == NULL)) { ++ return; ++ } ++ curr = *head; ++ sav = NULL; ++ for (; curr;) { ++ sav = curr; ++ curr = curr->next; ++ slapi_ch_free((void *) &sav); ++ } ++ if (sav) { ++ slapi_ch_free((void *) &sav); ++ } ++ *head = NULL; ++} ++ ++/* This function is used during the online total initialisation ++ * Because a MODRDN can move entries under a parent that ++ * has a higher ID we need to sort the IDList so that parents ++ * are sent, to the consumer, before the children are sent. ++ * The sorting with a simple IDlist does not scale instead ++ * a list of IDs ranges is much faster. ++ * In that list we only ADD/lookup ID. 
++ */ ++IdRange_t *idrange_add_id(IdRange_t **head, ID id) ++{ ++ if (head == NULL) { ++ slapi_log_err(SLAPI_LOG_ERR, "idrange_add_id", ++ "Can not add ID %d in non defined list\n", id); ++ return NULL; ++ } ++ ++ if (*head == NULL) { ++ /* This is the first range */ ++ IdRange_t *new_range = (IdRange_t *)slapi_ch_malloc(sizeof(IdRange_t)); ++ new_range->first = id; ++ new_range->last = id; ++ new_range->next = NULL; ++ *head = new_range; ++ return *head; ++ } ++ ++ IdRange_t *curr = *head, *prev = NULL; ++ ++ /* First, find if id already falls within any existing range, or it is adjacent to any */ ++ while (curr) { ++ if (id >= curr->first && id <= curr->last) { ++ /* inside a range, nothing to do */ ++ return curr; ++ } ++ ++ if (id == curr->last + 1) { ++ /* Extend this range upwards */ ++ curr->last = id; ++ ++ /* Check for possible merge with next range */ ++ IdRange_t *next = curr->next; ++ if (next && curr->last + 1 >= next->first) { ++ slapi_log_err(SLAPI_LOG_REPL, "idrange_add_id", ++ "(id=%d) merge current with next range [%d..%d]\n", id, curr->first, curr->last); ++ curr->last = (next->last > curr->last) ? next->last : curr->last; ++ curr->next = next->next; ++ slapi_ch_free((void*) &next); ++ } else { ++ slapi_log_err(SLAPI_LOG_REPL, "idrange_add_id", ++ "(id=%d) extend forward current range [%d..%d]\n", id, curr->first, curr->last); ++ } ++ return curr; ++ } ++ ++ if (id + 1 == curr->first) { ++ /* Extend this range downwards */ ++ curr->first = id; ++ ++ /* Check for possible merge with previous range */ ++ if (prev && prev->last + 1 >= curr->first) { ++ prev->last = curr->last; ++ prev->next = curr->next; ++ slapi_ch_free((void *) &curr); ++ slapi_log_err(SLAPI_LOG_REPL, "idrange_add_id", ++ "(id=%d) merge current with previous range [%d..%d]\n", id, prev->first, prev->last); ++ return prev; ++ } else { ++ slapi_log_err(SLAPI_LOG_REPL, "idrange_add_id", ++ "(id=%d) extend backward current range [%d..%d]\n", id, curr->first, curr->last); ++ return curr; ++ } ++ } ++ ++ /* If id is before the current range, break so we can insert before */ ++ if (id < curr->first) { ++ break; ++ } ++ ++ prev = curr; ++ curr = curr->next; ++ } ++ /* Need to insert a new standalone IdRange */ ++ IdRange_t *new_range = (IdRange_t *)slapi_ch_malloc(sizeof(IdRange_t)); ++ new_range->first = id; ++ new_range->last = id; ++ new_range->next = curr; ++ ++ if (prev) { ++ slapi_log_err(SLAPI_LOG_REPL, "idrange_add_id", ++ "(id=%d) add new range [%d..%d]\n", id, new_range->first, new_range->last); ++ prev->next = new_range; ++ } else { ++ /* Insert at head */ ++ slapi_log_err(SLAPI_LOG_REPL, "idrange_add_id", ++ "(id=%d) head range [%d..%d]\n", id, new_range->first, new_range->last); ++ *head = new_range; ++ } ++ return *head; ++} ++ ++ + int + idl_id_is_in_idlist(IDList *idl, ID id) + { +diff --git a/ldap/servers/slapd/back-ldbm/idl_new.c b/ldap/servers/slapd/back-ldbm/idl_new.c +index 5fbcaff2e..2d978353f 100644 +--- a/ldap/servers/slapd/back-ldbm/idl_new.c ++++ b/ldap/servers/slapd/back-ldbm/idl_new.c +@@ -417,7 +417,6 @@ idl_new_range_fetch( + { + int ret = 0; + int ret2 = 0; +- int idl_rc = 0; + dbi_cursor_t cursor = {0}; + IDList *idl = NULL; + dbi_val_t cur_key = {0}; +@@ -436,6 +435,7 @@ idl_new_range_fetch( + size_t leftoverlen = 32; + size_t leftovercnt = 0; + char *index_id = get_index_name(be, db, ai); ++ IdRange_t *idrange_list = NULL; + + + if (NULL == flag_err) { +@@ -578,10 +578,12 @@ idl_new_range_fetch( + * found entry is the one from the suffix + */ + suffix = key; +- idl_rc = 
idl_append_extend(&idl, id); +- } else if ((key == suffix) || idl_id_is_in_idlist(idl, key)) { ++ idl_append_extend(&idl, id); ++ idrange_add_id(&idrange_list, id); ++ } else if ((key == suffix) || idl_id_is_in_idlist_ranges(idl, idrange_list, key)) { + /* the parent is the suffix or already in idl. */ +- idl_rc = idl_append_extend(&idl, id); ++ idl_append_extend(&idl, id); ++ idrange_add_id(&idrange_list, id); + } else { + /* Otherwise, keep the {key,id} in leftover array */ + if (!leftover) { +@@ -596,13 +598,7 @@ idl_new_range_fetch( + leftovercnt++; + } + } else { +- idl_rc = idl_append_extend(&idl, id); +- } +- if (idl_rc) { +- slapi_log_err(SLAPI_LOG_ERR, "idl_new_range_fetch", +- "Unable to extend id list (err=%d)\n", idl_rc); +- idl_free(&idl); +- goto error; ++ idl_append_extend(&idl, id); + } + + count++; +@@ -695,21 +691,17 @@ error: + + while(remaining > 0) { + for (size_t i = 0; i < leftovercnt; i++) { +- if (leftover[i].key > 0 && idl_id_is_in_idlist(idl, leftover[i].key) != 0) { ++ if (leftover[i].key > 0 && idl_id_is_in_idlist_ranges(idl, idrange_list, leftover[i].key) != 0) { + /* if the leftover key has its parent in the idl */ +- idl_rc = idl_append_extend(&idl, leftover[i].id); +- if (idl_rc) { +- slapi_log_err(SLAPI_LOG_ERR, "idl_new_range_fetch", +- "Unable to extend id list (err=%d)\n", idl_rc); +- idl_free(&idl); +- return NULL; +- } ++ idl_append_extend(&idl, leftover[i].id); ++ idrange_add_id(&idrange_list, leftover[i].id); + leftover[i].key = 0; + remaining--; + } + } + } + slapi_ch_free((void **)&leftover); ++ idrange_free(&idrange_list); + } + slapi_log_err(SLAPI_LOG_FILTER, "idl_new_range_fetch", + "Found %d candidates; error code is: %d\n", +diff --git a/ldap/servers/slapd/back-ldbm/proto-back-ldbm.h b/ldap/servers/slapd/back-ldbm/proto-back-ldbm.h +index 91d61098a..30a7aa11f 100644 +--- a/ldap/servers/slapd/back-ldbm/proto-back-ldbm.h ++++ b/ldap/servers/slapd/back-ldbm/proto-back-ldbm.h +@@ -217,6 +217,9 @@ ID idl_firstid(IDList *idl); + ID idl_nextid(IDList *idl, ID id); + int idl_init_private(backend *be, struct attrinfo *a); + int idl_release_private(struct attrinfo *a); ++IdRange_t *idrange_add_id(IdRange_t **head, ID id); ++void idrange_free(IdRange_t **head); ++int idl_id_is_in_idlist_ranges(IDList *idl, IdRange_t *idrange_list, ID id); + int idl_id_is_in_idlist(IDList *idl, ID id); + + idl_iterator idl_iterator_init(const IDList *idl); +-- +2.52.0 + diff --git a/0002-Issue-6852-Move-ds-CLI-tools-back-to-sbin.patch b/0002-Issue-6852-Move-ds-CLI-tools-back-to-sbin.patch deleted file mode 100644 index 1ee668a..0000000 --- a/0002-Issue-6852-Move-ds-CLI-tools-back-to-sbin.patch +++ /dev/null @@ -1,56 +0,0 @@ -From 6ed6a67f142fec393cd254df38b9750a14848528 Mon Sep 17 00:00:00 2001 -From: Viktor Ashirov -Date: Tue, 8 Jul 2025 19:09:34 +0200 -Subject: [PATCH] Issue 6852 - Move ds* CLI tools back to /sbin - -Bug Description: -After #6767 ds* CLI tools are packaged in /bin instead of /sbin. Even -though Fedora 42 has unified /bin and /sbin, some tools (ipa-backup) and -our tests still rely on hardcoded paths. - -Fix Description: -Move ds* tools back to /sbin - -Fixes: https://github.com/389ds/389-ds-base/issues/6852 - -Reviewed by: @droideck (Thanks!) 
---- - rpm/389-ds-base.spec.in | 15 ++++++++++----- - 1 file changed, 10 insertions(+), 5 deletions(-) - -diff --git a/rpm/389-ds-base.spec.in b/rpm/389-ds-base.spec.in -index 2f1df63c9..101771574 100644 ---- a/rpm/389-ds-base.spec.in -+++ b/rpm/389-ds-base.spec.in -@@ -486,6 +486,11 @@ cp -r %{_builddir}/%{name}-%{version}/man/man3 $RPM_BUILD_ROOT/%{_mandir}/man3 - # lib389 - pushd src/lib389 - %pyproject_install -+%if 0%{?fedora} <= 41 || (0%{?rhel} && 0%{?rhel} <= 10) -+for clitool in dsconf dscreate dsctl dsidm openldap_to_ds; do -+ mv %{buildroot}%{_bindir}/$clitool %{buildroot}%{_sbindir}/ -+done -+%endif - %pyproject_save_files -l lib389 - popd - -@@ -743,11 +748,11 @@ fi - %doc src/lib389/README.md - %license LICENSE LICENSE.GPLv3+ - # Binaries --%{_bindir}/dsconf --%{_bindir}/dscreate --%{_bindir}/dsctl --%{_bindir}/dsidm --%{_bindir}/openldap_to_ds -+%{_sbindir}/dsconf -+%{_sbindir}/dscreate -+%{_sbindir}/dsctl -+%{_sbindir}/dsidm -+%{_sbindir}/openldap_to_ds - %{_libexecdir}/%{pkgname}/dscontainer - # Man pages - %{_mandir}/man8/dsconf.8.gz --- -2.49.0 - diff --git a/0002-Issue-Revise-paged-result-search-locking.patch b/0002-Issue-Revise-paged-result-search-locking.patch new file mode 100644 index 0000000..e27ced3 --- /dev/null +++ b/0002-Issue-Revise-paged-result-search-locking.patch @@ -0,0 +1,765 @@ +From 446bc42e7b64a8496c2c3fe486f86bba318bed5e Mon Sep 17 00:00:00 2001 +From: Mark Reynolds +Date: Wed, 7 Jan 2026 16:55:27 -0500 +Subject: [PATCH] Issue - Revise paged result search locking + +Description: + +Move to a single lock approach versus having two locks. This will impact +concurrency when multiple async paged result searches are done on the same +connection, but it simplifies the code and avoids race conditions and +deadlocks. + +Relates: https://github.com/389ds/389-ds-base/issues/7118 + +Reviewed by: progier & tbordaz (Thanks!!) +--- + ldap/servers/slapd/abandon.c | 2 +- + ldap/servers/slapd/opshared.c | 60 ++++---- + ldap/servers/slapd/pagedresults.c | 228 +++++++++++++++++++----------- + ldap/servers/slapd/proto-slap.h | 26 ++-- + ldap/servers/slapd/slap.h | 5 +- + 5 files changed, 187 insertions(+), 134 deletions(-) + +diff --git a/ldap/servers/slapd/abandon.c b/ldap/servers/slapd/abandon.c +index 6024fcd31..1f47c531c 100644 +--- a/ldap/servers/slapd/abandon.c ++++ b/ldap/servers/slapd/abandon.c +@@ -179,7 +179,7 @@ do_abandon(Slapi_PBlock *pb) + logpb.tv_sec = -1; + logpb.tv_nsec = -1; + +- if (0 == pagedresults_free_one_msgid(pb_conn, id, pageresult_lock_get_addr(pb_conn))) { ++ if (0 == pagedresults_free_one_msgid(pb_conn, id, PR_NOT_LOCKED)) { + if (log_format != LOG_FORMAT_DEFAULT) { + /* JSON logging */ + logpb.target_op = "Simple Paged Results"; +diff --git a/ldap/servers/slapd/opshared.c b/ldap/servers/slapd/opshared.c +index a5cddfd23..bf800f7dc 100644 +--- a/ldap/servers/slapd/opshared.c ++++ b/ldap/servers/slapd/opshared.c +@@ -572,8 +572,8 @@ op_shared_search(Slapi_PBlock *pb, int send_result) + be = be_list[index]; + } + } +- pr_search_result = pagedresults_get_search_result(pb_conn, operation, 0 /*not locked*/, pr_idx); +- estimate = pagedresults_get_search_result_set_size_estimate(pb_conn, operation, pr_idx); ++ pr_search_result = pagedresults_get_search_result(pb_conn, operation, PR_NOT_LOCKED, pr_idx); ++ estimate = pagedresults_get_search_result_set_size_estimate(pb_conn, operation, PR_NOT_LOCKED, pr_idx); + /* Set operation note flags as required.
*/ + if (pagedresults_get_unindexed(pb_conn, operation, pr_idx)) { + slapi_pblock_set_flag_operation_notes(pb, SLAPI_OP_NOTE_UNINDEXED); +@@ -619,14 +619,7 @@ op_shared_search(Slapi_PBlock *pb, int send_result) + int32_t tlimit; + slapi_pblock_get(pb, SLAPI_SEARCH_TIMELIMIT, &tlimit); + pagedresults_set_timelimit(pb_conn, operation, (time_t)tlimit, pr_idx); +- /* When using this mutex in conjunction with the main paged +- * result lock, you must do so in this order: +- * +- * --> pagedresults_lock() +- * --> pagedresults_mutex +- * <-- pagedresults_mutex +- * <-- pagedresults_unlock() +- */ ++ /* IMPORTANT: Never acquire pagedresults_mutex when holding c_mutex. */ + pagedresults_mutex = pageresult_lock_get_addr(pb_conn); + } + +@@ -743,17 +736,15 @@ op_shared_search(Slapi_PBlock *pb, int send_result) + if (op_is_pagedresults(operation) && pr_search_result) { + void *sr = NULL; + /* PAGED RESULTS and already have the search results from the prev op */ +- pagedresults_lock(pb_conn, pr_idx); + /* + * In async paged result case, the search result might be released + * by other theads. We need to double check it in the locked region. + */ + pthread_mutex_lock(pagedresults_mutex); +- pr_search_result = pagedresults_get_search_result(pb_conn, operation, 1 /*locked*/, pr_idx); ++ pr_search_result = pagedresults_get_search_result(pb_conn, operation, PR_LOCKED, pr_idx); + if (pr_search_result) { +- if (pagedresults_is_abandoned_or_notavailable(pb_conn, 1 /*locked*/, pr_idx)) { ++ if (pagedresults_is_abandoned_or_notavailable(pb_conn, PR_LOCKED, pr_idx)) { + pthread_mutex_unlock(pagedresults_mutex); +- pagedresults_unlock(pb_conn, pr_idx); + /* Previous operation was abandoned and the simplepaged object is not in use. */ + send_ldap_result(pb, 0, NULL, "Simple Paged Results Search abandoned", 0, NULL); + rc = LDAP_SUCCESS; +@@ -764,14 +755,13 @@ op_shared_search(Slapi_PBlock *pb, int send_result) + + /* search result could be reset in the backend/dse */ + slapi_pblock_get(pb, SLAPI_SEARCH_RESULT_SET, &sr); +- pagedresults_set_search_result(pb_conn, operation, sr, 1 /*locked*/, pr_idx); ++ pagedresults_set_search_result(pb_conn, operation, sr, PR_LOCKED, pr_idx); + } + } else { + pr_stat = PAGEDRESULTS_SEARCH_END; + rc = LDAP_SUCCESS; + } + pthread_mutex_unlock(pagedresults_mutex); +- pagedresults_unlock(pb_conn, pr_idx); + + if ((PAGEDRESULTS_SEARCH_END == pr_stat) || (0 == pnentries)) { + /* no more entries to send in the backend */ +@@ -789,22 +779,22 @@ op_shared_search(Slapi_PBlock *pb, int send_result) + } + pagedresults_set_response_control(pb, 0, estimate, + curr_search_count, pr_idx); +- if (pagedresults_get_with_sort(pb_conn, operation, pr_idx)) { ++ if (pagedresults_get_with_sort(pb_conn, operation, PR_NOT_LOCKED, pr_idx)) { + sort_make_sort_response_control(pb, CONN_GET_SORT_RESULT_CODE, NULL); + } + pagedresults_set_search_result_set_size_estimate(pb_conn, + operation, +- estimate, pr_idx); ++ estimate, PR_NOT_LOCKED, pr_idx); + if (PAGEDRESULTS_SEARCH_END == pr_stat) { +- pagedresults_lock(pb_conn, pr_idx); ++ pthread_mutex_lock(pagedresults_mutex); + slapi_pblock_set(pb, SLAPI_SEARCH_RESULT_SET, NULL); +- if (!pagedresults_is_abandoned_or_notavailable(pb_conn, 0 /*not locked*/, pr_idx)) { +- pagedresults_free_one(pb_conn, operation, pr_idx); ++ if (!pagedresults_is_abandoned_or_notavailable(pb_conn, PR_LOCKED, pr_idx)) { ++ pagedresults_free_one(pb_conn, operation, PR_LOCKED, pr_idx); + } +- pagedresults_unlock(pb_conn, pr_idx); ++ pthread_mutex_unlock(pagedresults_mutex); + if 
(next_be) { + /* no more entries, but at least another backend */ +- if (pagedresults_set_current_be(pb_conn, next_be, pr_idx, 0) < 0) { ++ if (pagedresults_set_current_be(pb_conn, next_be, pr_idx, PR_NOT_LOCKED) < 0) { + goto free_and_return; + } + } +@@ -915,7 +905,7 @@ op_shared_search(Slapi_PBlock *pb, int send_result) + } + } + pagedresults_set_search_result(pb_conn, operation, NULL, 1, pr_idx); +- rc = pagedresults_set_current_be(pb_conn, NULL, pr_idx, 1); ++ rc = pagedresults_set_current_be(pb_conn, NULL, pr_idx, PR_LOCKED); + pthread_mutex_unlock(pagedresults_mutex); + #pragma GCC diagnostic pop + } +@@ -954,7 +944,7 @@ op_shared_search(Slapi_PBlock *pb, int send_result) + pthread_mutex_lock(pagedresults_mutex); + pagedresults_set_search_result(pb_conn, operation, NULL, 1, pr_idx); + be->be_search_results_release(&sr); +- rc = pagedresults_set_current_be(pb_conn, next_be, pr_idx, 1); ++ rc = pagedresults_set_current_be(pb_conn, next_be, pr_idx, PR_LOCKED); + pthread_mutex_unlock(pagedresults_mutex); + pr_stat = PAGEDRESULTS_SEARCH_END; /* make sure stat is SEARCH_END */ + if (NULL == next_be) { +@@ -967,23 +957,23 @@ op_shared_search(Slapi_PBlock *pb, int send_result) + } else { + curr_search_count = pnentries; + slapi_pblock_get(pb, SLAPI_SEARCH_RESULT_SET_SIZE_ESTIMATE, &estimate); +- pagedresults_lock(pb_conn, pr_idx); +- if ((pagedresults_set_current_be(pb_conn, be, pr_idx, 0) < 0) || +- (pagedresults_set_search_result(pb_conn, operation, sr, 0, pr_idx) < 0) || +- (pagedresults_set_search_result_count(pb_conn, operation, curr_search_count, pr_idx) < 0) || +- (pagedresults_set_search_result_set_size_estimate(pb_conn, operation, estimate, pr_idx) < 0) || +- (pagedresults_set_with_sort(pb_conn, operation, with_sort, pr_idx) < 0)) { +- pagedresults_unlock(pb_conn, pr_idx); ++ pthread_mutex_lock(pagedresults_mutex); ++ if ((pagedresults_set_current_be(pb_conn, be, pr_idx, PR_LOCKED) < 0) || ++ (pagedresults_set_search_result(pb_conn, operation, sr, PR_LOCKED, pr_idx) < 0) || ++ (pagedresults_set_search_result_count(pb_conn, operation, curr_search_count, PR_LOCKED, pr_idx) < 0) || ++ (pagedresults_set_search_result_set_size_estimate(pb_conn, operation, estimate, PR_LOCKED, pr_idx) < 0) || ++ (pagedresults_set_with_sort(pb_conn, operation, with_sort, PR_LOCKED, pr_idx) < 0)) { ++ pthread_mutex_unlock(pagedresults_mutex); + cache_return_target_entry(pb, be, operation); + goto free_and_return; + } +- pagedresults_unlock(pb_conn, pr_idx); ++ pthread_mutex_unlock(pagedresults_mutex); + } + slapi_pblock_set(pb, SLAPI_SEARCH_RESULT_SET, NULL); + next_be = NULL; /* to break the loop */ + if (operation->o_status & SLAPI_OP_STATUS_ABANDONED) { + /* It turned out this search was abandoned. */ +- pagedresults_free_one_msgid(pb_conn, operation->o_msgid, pagedresults_mutex); ++ pagedresults_free_one_msgid(pb_conn, operation->o_msgid, PR_NOT_LOCKED); + /* paged-results-request was abandoned; making an empty cookie. 
*/ + pagedresults_set_response_control(pb, 0, estimate, -1, pr_idx); + send_ldap_result(pb, 0, NULL, "Simple Paged Results Search abandoned", 0, NULL); +@@ -993,7 +983,7 @@ op_shared_search(Slapi_PBlock *pb, int send_result) + } + pagedresults_set_response_control(pb, 0, estimate, curr_search_count, pr_idx); + if (curr_search_count == -1) { +- pagedresults_free_one(pb_conn, operation, pr_idx); ++ pagedresults_free_one(pb_conn, operation, PR_NOT_LOCKED, pr_idx); + } + } + +diff --git a/ldap/servers/slapd/pagedresults.c b/ldap/servers/slapd/pagedresults.c +index 941ab97e3..0d6c4a1aa 100644 +--- a/ldap/servers/slapd/pagedresults.c ++++ b/ldap/servers/slapd/pagedresults.c +@@ -34,9 +34,9 @@ pageresult_lock_cleanup() + slapi_ch_free((void**)&lock_hash); + } + +-/* Beware to the lock order with c_mutex: +- * c_mutex is sometime locked while holding pageresult_lock +- * ==> Do not lock pageresult_lock when holing c_mutex ++/* Lock ordering constraint with c_mutex: ++ * c_mutex is sometimes locked while holding pageresult_lock. ++ * Therefore: DO NOT acquire pageresult_lock when holding c_mutex. + */ + pthread_mutex_t * + pageresult_lock_get_addr(Connection *conn) +@@ -44,7 +44,11 @@ pageresult_lock_get_addr(Connection *conn) + return &lock_hash[(((size_t)conn)/sizeof (Connection))%LOCK_HASH_SIZE]; + } + +-/* helper function to clean up one prp slot */ ++/* helper function to clean up one prp slot ++ * ++ * NOTE: This function must be called while holding the pageresult_lock ++ * (via pageresult_lock_get_addr(conn)) to ensure thread-safe cleanup. ++ */ + static void + _pr_cleanup_one_slot(PagedResults *prp) + { +@@ -56,7 +60,7 @@ _pr_cleanup_one_slot(PagedResults *prp) + prp->pr_current_be->be_search_results_release(&(prp->pr_search_result_set)); + } + +- /* clean up the slot except the mutex */ ++ /* clean up the slot */ + prp->pr_current_be = NULL; + prp->pr_search_result_set = NULL; + prp->pr_search_result_count = 0; +@@ -136,6 +140,8 @@ pagedresults_parse_control_value(Slapi_PBlock *pb, + return LDAP_UNWILLING_TO_PERFORM; + } + ++ /* Acquire hash-based lock for paged results list access ++ * IMPORTANT: Never acquire this lock when holding c_mutex */ + pthread_mutex_lock(pageresult_lock_get_addr(conn)); + /* the ber encoding is no longer needed */ + ber_free(ber, 1); +@@ -184,10 +190,6 @@ pagedresults_parse_control_value(Slapi_PBlock *pb, + goto bail; + } + +- if ((*index > -1) && (*index < conn->c_pagedresults.prl_maxlen) && +- !conn->c_pagedresults.prl_list[*index].pr_mutex) { +- conn->c_pagedresults.prl_list[*index].pr_mutex = PR_NewLock(); +- } + conn->c_pagedresults.prl_count++; + } else { + /* Repeated paged results request. +@@ -327,8 +329,14 @@ bailout: + "<= idx=%d\n", index); + } + ++/* ++ * Free one paged result entry by index. ++ * ++ * Locking: If locked=0, acquires pageresult_lock. If locked=1, assumes ++ * caller already holds pageresult_lock. Never call when holding c_mutex. 
++ */ + int +-pagedresults_free_one(Connection *conn, Operation *op, int index) ++pagedresults_free_one(Connection *conn, Operation *op, bool locked, int index) + { + int rc = -1; + +@@ -338,7 +346,9 @@ pagedresults_free_one(Connection *conn, Operation *op, int index) + slapi_log_err(SLAPI_LOG_TRACE, "pagedresults_free_one", + "=> idx=%d\n", index); + if (conn && (index > -1)) { +- pthread_mutex_lock(pageresult_lock_get_addr(conn)); ++ if (!locked) { ++ pthread_mutex_lock(pageresult_lock_get_addr(conn)); ++ } + if (conn->c_pagedresults.prl_count <= 0) { + slapi_log_err(SLAPI_LOG_TRACE, "pagedresults_free_one", + "conn=%" PRIu64 " paged requests list count is %d\n", +@@ -349,7 +359,9 @@ pagedresults_free_one(Connection *conn, Operation *op, int index) + conn->c_pagedresults.prl_count--; + rc = 0; + } +- pthread_mutex_unlock(pageresult_lock_get_addr(conn)); ++ if (!locked) { ++ pthread_mutex_unlock(pageresult_lock_get_addr(conn)); ++ } + } + + slapi_log_err(SLAPI_LOG_TRACE, "pagedresults_free_one", "<= %d\n", rc); +@@ -357,21 +369,28 @@ pagedresults_free_one(Connection *conn, Operation *op, int index) + } + + /* +- * Used for abandoning - pageresult_lock_get_addr(conn) is already locked in do_abandone. ++ * Free one paged result entry by message ID. ++ * ++ * Locking: If locked=0, acquires pageresult_lock. If locked=1, assumes ++ * caller already holds pageresult_lock. Never call when holding c_mutex. + */ + int +-pagedresults_free_one_msgid(Connection *conn, ber_int_t msgid, pthread_mutex_t *mutex) ++pagedresults_free_one_msgid(Connection *conn, ber_int_t msgid, bool locked) + { + int rc = -1; + int i; ++ pthread_mutex_t *lock = NULL; + + if (conn && (msgid > -1)) { + if (conn->c_pagedresults.prl_maxlen <= 0) { + ; /* Not a paged result. */ + } else { + slapi_log_err(SLAPI_LOG_TRACE, +- "pagedresults_free_one_msgid_nolock", "=> msgid=%d\n", msgid); +- pthread_mutex_lock(mutex); ++ "pagedresults_free_one_msgid", "=> msgid=%d\n", msgid); ++ lock = pageresult_lock_get_addr(conn); ++ if (!locked) { ++ pthread_mutex_lock(lock); ++ } + for (i = 0; i < conn->c_pagedresults.prl_maxlen; i++) { + if (conn->c_pagedresults.prl_list[i].pr_msgid == msgid) { + PagedResults *prp = conn->c_pagedresults.prl_list + i; +@@ -390,9 +409,11 @@ pagedresults_free_one_msgid(Connection *conn, ber_int_t msgid, pthread_mutex_t * + break; + } + } +- pthread_mutex_unlock(mutex); ++ if (!locked) { ++ pthread_mutex_unlock(lock); ++ } + slapi_log_err(SLAPI_LOG_TRACE, +- "pagedresults_free_one_msgid_nolock", "<= %d\n", rc); ++ "pagedresults_free_one_msgid", "<= %d\n", rc); + } + } + +@@ -418,29 +439,43 @@ pagedresults_get_current_be(Connection *conn, int index) + return be; + } + ++/* ++ * Set current backend for a paged result entry. ++ * ++ * Locking: If locked=false, acquires pageresult_lock. If locked=true, assumes ++ * caller already holds pageresult_lock. Never call when holding c_mutex. 
++ */ + int +-pagedresults_set_current_be(Connection *conn, Slapi_Backend *be, int index, int nolock) ++pagedresults_set_current_be(Connection *conn, Slapi_Backend *be, int index, bool locked) + { + int rc = -1; + slapi_log_err(SLAPI_LOG_TRACE, + "pagedresults_set_current_be", "=> idx=%d\n", index); + if (conn && (index > -1)) { +- if (!nolock) ++ if (!locked) { + pthread_mutex_lock(pageresult_lock_get_addr(conn)); ++ } + if (index < conn->c_pagedresults.prl_maxlen) { + conn->c_pagedresults.prl_list[index].pr_current_be = be; + } + rc = 0; +- if (!nolock) ++ if (!locked) { + pthread_mutex_unlock(pageresult_lock_get_addr(conn)); ++ } + } + slapi_log_err(SLAPI_LOG_TRACE, + "pagedresults_set_current_be", "<= %d\n", rc); + return rc; + } + ++/* ++ * Get search result set for a paged result entry. ++ * ++ * Locking: If locked=0, acquires pageresult_lock. If locked=1, assumes ++ * caller already holds pageresult_lock. Never call when holding c_mutex. ++ */ + void * +-pagedresults_get_search_result(Connection *conn, Operation *op, int locked, int index) ++pagedresults_get_search_result(Connection *conn, Operation *op, bool locked, int index) + { + void *sr = NULL; + if (!op_is_pagedresults(op)) { +@@ -465,8 +500,14 @@ pagedresults_get_search_result(Connection *conn, Operation *op, int locked, int + return sr; + } + ++/* ++ * Set search result set for a paged result entry. ++ * ++ * Locking: If locked=0, acquires pageresult_lock. If locked=1, assumes ++ * caller already holds pageresult_lock. Never call when holding c_mutex. ++ */ + int +-pagedresults_set_search_result(Connection *conn, Operation *op, void *sr, int locked, int index) ++pagedresults_set_search_result(Connection *conn, Operation *op, void *sr, bool locked, int index) + { + int rc = -1; + if (!op_is_pagedresults(op)) { +@@ -494,8 +535,14 @@ pagedresults_set_search_result(Connection *conn, Operation *op, void *sr, int lo + return rc; + } + ++/* ++ * Get search result count for a paged result entry. ++ * ++ * Locking: If locked=0, acquires pageresult_lock. If locked=1, assumes ++ * caller already holds pageresult_lock. Never call when holding c_mutex. ++ */ + int +-pagedresults_get_search_result_count(Connection *conn, Operation *op, int index) ++pagedresults_get_search_result_count(Connection *conn, Operation *op, bool locked, int index) + { + int count = 0; + if (!op_is_pagedresults(op)) { +@@ -504,19 +551,29 @@ pagedresults_get_search_result_count(Connection *conn, Operation *op, int index) + slapi_log_err(SLAPI_LOG_TRACE, + "pagedresults_get_search_result_count", "=> idx=%d\n", index); + if (conn && (index > -1)) { +- pthread_mutex_lock(pageresult_lock_get_addr(conn)); ++ if (!locked) { ++ pthread_mutex_lock(pageresult_lock_get_addr(conn)); ++ } + if (index < conn->c_pagedresults.prl_maxlen) { + count = conn->c_pagedresults.prl_list[index].pr_search_result_count; + } +- pthread_mutex_unlock(pageresult_lock_get_addr(conn)); ++ if (!locked) { ++ pthread_mutex_unlock(pageresult_lock_get_addr(conn)); ++ } + } + slapi_log_err(SLAPI_LOG_TRACE, + "pagedresults_get_search_result_count", "<= %d\n", count); + return count; + } + ++/* ++ * Set search result count for a paged result entry. ++ * ++ * Locking: If locked=0, acquires pageresult_lock. If locked=1, assumes ++ * caller already holds pageresult_lock. Never call when holding c_mutex. 
++ */ + int +-pagedresults_set_search_result_count(Connection *conn, Operation *op, int count, int index) ++pagedresults_set_search_result_count(Connection *conn, Operation *op, int count, bool locked, int index) + { + int rc = -1; + if (!op_is_pagedresults(op)) { +@@ -525,11 +582,15 @@ pagedresults_set_search_result_count(Connection *conn, Operation *op, int count, + slapi_log_err(SLAPI_LOG_TRACE, + "pagedresults_set_search_result_count", "=> idx=%d\n", index); + if (conn && (index > -1)) { +- pthread_mutex_lock(pageresult_lock_get_addr(conn)); ++ if (!locked) { ++ pthread_mutex_lock(pageresult_lock_get_addr(conn)); ++ } + if (index < conn->c_pagedresults.prl_maxlen) { + conn->c_pagedresults.prl_list[index].pr_search_result_count = count; + } +- pthread_mutex_unlock(pageresult_lock_get_addr(conn)); ++ if (!locked) { ++ pthread_mutex_unlock(pageresult_lock_get_addr(conn)); ++ } + rc = 0; + } + slapi_log_err(SLAPI_LOG_TRACE, +@@ -537,9 +598,16 @@ pagedresults_set_search_result_count(Connection *conn, Operation *op, int count, + return rc; + } + ++/* ++ * Get search result set size estimate for a paged result entry. ++ * ++ * Locking: If locked=0, acquires pageresult_lock. If locked=1, assumes ++ * caller already holds pageresult_lock. Never call when holding c_mutex. ++ */ + int + pagedresults_get_search_result_set_size_estimate(Connection *conn, + Operation *op, ++ bool locked, + int index) + { + int count = 0; +@@ -550,11 +618,15 @@ pagedresults_get_search_result_set_size_estimate(Connection *conn, + "pagedresults_get_search_result_set_size_estimate", + "=> idx=%d\n", index); + if (conn && (index > -1)) { +- pthread_mutex_lock(pageresult_lock_get_addr(conn)); ++ if (!locked) { ++ pthread_mutex_lock(pageresult_lock_get_addr(conn)); ++ } + if (index < conn->c_pagedresults.prl_maxlen) { + count = conn->c_pagedresults.prl_list[index].pr_search_result_set_size_estimate; + } +- pthread_mutex_unlock(pageresult_lock_get_addr(conn)); ++ if (!locked) { ++ pthread_mutex_unlock(pageresult_lock_get_addr(conn)); ++ } + } + slapi_log_err(SLAPI_LOG_TRACE, + "pagedresults_get_search_result_set_size_estimate", "<= %d\n", +@@ -562,10 +634,17 @@ pagedresults_get_search_result_set_size_estimate(Connection *conn, + return count; + } + ++/* ++ * Set search result set size estimate for a paged result entry. ++ * ++ * Locking: If locked=0, acquires pageresult_lock. If locked=1, assumes ++ * caller already holds pageresult_lock. Never call when holding c_mutex. ++ */ + int + pagedresults_set_search_result_set_size_estimate(Connection *conn, + Operation *op, + int count, ++ bool locked, + int index) + { + int rc = -1; +@@ -576,11 +655,15 @@ pagedresults_set_search_result_set_size_estimate(Connection *conn, + "pagedresults_set_search_result_set_size_estimate", + "=> idx=%d\n", index); + if (conn && (index > -1)) { +- pthread_mutex_lock(pageresult_lock_get_addr(conn)); ++ if (!locked) { ++ pthread_mutex_lock(pageresult_lock_get_addr(conn)); ++ } + if (index < conn->c_pagedresults.prl_maxlen) { + conn->c_pagedresults.prl_list[index].pr_search_result_set_size_estimate = count; + } +- pthread_mutex_unlock(pageresult_lock_get_addr(conn)); ++ if (!locked) { ++ pthread_mutex_unlock(pageresult_lock_get_addr(conn)); ++ } + rc = 0; + } + slapi_log_err(SLAPI_LOG_TRACE, +@@ -589,8 +672,14 @@ pagedresults_set_search_result_set_size_estimate(Connection *conn, + return rc; + } + ++/* ++ * Get with_sort flag for a paged result entry. ++ * ++ * Locking: If locked=0, acquires pageresult_lock. 
If locked=1, assumes ++ * caller already holds pageresult_lock. Never call when holding c_mutex. ++ */ + int +-pagedresults_get_with_sort(Connection *conn, Operation *op, int index) ++pagedresults_get_with_sort(Connection *conn, Operation *op, bool locked, int index) + { + int flags = 0; + if (!op_is_pagedresults(op)) { +@@ -599,19 +688,29 @@ pagedresults_get_with_sort(Connection *conn, Operation *op, int index) + slapi_log_err(SLAPI_LOG_TRACE, + "pagedresults_get_with_sort", "=> idx=%d\n", index); + if (conn && (index > -1)) { +- pthread_mutex_lock(pageresult_lock_get_addr(conn)); ++ if (!locked) { ++ pthread_mutex_lock(pageresult_lock_get_addr(conn)); ++ } + if (index < conn->c_pagedresults.prl_maxlen) { + flags = conn->c_pagedresults.prl_list[index].pr_flags & CONN_FLAG_PAGEDRESULTS_WITH_SORT; + } +- pthread_mutex_unlock(pageresult_lock_get_addr(conn)); ++ if (!locked) { ++ pthread_mutex_unlock(pageresult_lock_get_addr(conn)); ++ } + } + slapi_log_err(SLAPI_LOG_TRACE, + "pagedresults_get_with_sort", "<= %d\n", flags); + return flags; + } + ++/* ++ * Set with_sort flag for a paged result entry. ++ * ++ * Locking: If locked=0, acquires pageresult_lock. If locked=1, assumes ++ * caller already holds pageresult_lock. Never call when holding c_mutex. ++ */ + int +-pagedresults_set_with_sort(Connection *conn, Operation *op, int flags, int index) ++pagedresults_set_with_sort(Connection *conn, Operation *op, int flags, bool locked, int index) + { + int rc = -1; + if (!op_is_pagedresults(op)) { +@@ -620,14 +719,18 @@ pagedresults_set_with_sort(Connection *conn, Operation *op, int flags, int index + slapi_log_err(SLAPI_LOG_TRACE, + "pagedresults_set_with_sort", "=> idx=%d\n", index); + if (conn && (index > -1)) { +- pthread_mutex_lock(pageresult_lock_get_addr(conn)); ++ if (!locked) { ++ pthread_mutex_lock(pageresult_lock_get_addr(conn)); ++ } + if (index < conn->c_pagedresults.prl_maxlen) { + if (flags & OP_FLAG_SERVER_SIDE_SORTING) { + conn->c_pagedresults.prl_list[index].pr_flags |= + CONN_FLAG_PAGEDRESULTS_WITH_SORT; + } + } +- pthread_mutex_unlock(pageresult_lock_get_addr(conn)); ++ if (!locked) { ++ pthread_mutex_unlock(pageresult_lock_get_addr(conn)); ++ } + rc = 0; + } + slapi_log_err(SLAPI_LOG_TRACE, "pagedresults_set_with_sort", "<= %d\n", rc); +@@ -802,10 +905,6 @@ pagedresults_cleanup(Connection *conn, int needlock) + rc = 1; + } + prp->pr_current_be = NULL; +- if (prp->pr_mutex) { +- PR_DestroyLock(prp->pr_mutex); +- prp->pr_mutex = NULL; +- } + memset(prp, '\0', sizeof(PagedResults)); + } + conn->c_pagedresults.prl_count = 0; +@@ -840,10 +939,6 @@ pagedresults_cleanup_all(Connection *conn, int needlock) + i < conn->c_pagedresults.prl_maxlen; + i++) { + prp = conn->c_pagedresults.prl_list + i; +- if (prp->pr_mutex) { +- PR_DestroyLock(prp->pr_mutex); +- prp->pr_mutex = NULL; +- } + if (prp->pr_current_be && prp->pr_search_result_set && + prp->pr_current_be->be_search_results_release) { + prp->pr_current_be->be_search_results_release(&(prp->pr_search_result_set)); +@@ -1010,43 +1105,8 @@ op_set_pagedresults(Operation *op) + op->o_flags |= OP_FLAG_PAGED_RESULTS; + } + +-/* +- * pagedresults_lock/unlock -- introduced to protect search results for the +- * asynchronous searches. Do not call these functions while the PR conn lock +- * is held (e.g. 
pageresult_lock_get_addr(conn)) +- */ +-void +-pagedresults_lock(Connection *conn, int index) +-{ +- PagedResults *prp; +- if (!conn || (index < 0) || (index >= conn->c_pagedresults.prl_maxlen)) { +- return; +- } +- pthread_mutex_lock(pageresult_lock_get_addr(conn)); +- prp = conn->c_pagedresults.prl_list + index; +- if (prp->pr_mutex) { +- PR_Lock(prp->pr_mutex); +- } +- pthread_mutex_unlock(pageresult_lock_get_addr(conn)); +-} +- +-void +-pagedresults_unlock(Connection *conn, int index) +-{ +- PagedResults *prp; +- if (!conn || (index < 0) || (index >= conn->c_pagedresults.prl_maxlen)) { +- return; +- } +- pthread_mutex_lock(pageresult_lock_get_addr(conn)); +- prp = conn->c_pagedresults.prl_list + index; +- if (prp->pr_mutex) { +- PR_Unlock(prp->pr_mutex); +- } +- pthread_mutex_unlock(pageresult_lock_get_addr(conn)); +-} +- + int +-pagedresults_is_abandoned_or_notavailable(Connection *conn, int locked, int index) ++pagedresults_is_abandoned_or_notavailable(Connection *conn, bool locked, int index) + { + PagedResults *prp; + int32_t result; +@@ -1066,7 +1126,7 @@ pagedresults_is_abandoned_or_notavailable(Connection *conn, int locked, int inde + } + + int +-pagedresults_set_search_result_pb(Slapi_PBlock *pb, void *sr, int locked) ++pagedresults_set_search_result_pb(Slapi_PBlock *pb, void *sr, bool locked) + { + int rc = -1; + Connection *conn = NULL; +diff --git a/ldap/servers/slapd/proto-slap.h b/ldap/servers/slapd/proto-slap.h +index 765c12bf5..455d6d718 100644 +--- a/ldap/servers/slapd/proto-slap.h ++++ b/ldap/servers/slapd/proto-slap.h +@@ -1614,20 +1614,22 @@ pthread_mutex_t *pageresult_lock_get_addr(Connection *conn); + int pagedresults_parse_control_value(Slapi_PBlock *pb, struct berval *psbvp, ber_int_t *pagesize, int *index, Slapi_Backend *be); + void pagedresults_set_response_control(Slapi_PBlock *pb, int iscritical, ber_int_t estimate, int curr_search_count, int index); + Slapi_Backend *pagedresults_get_current_be(Connection *conn, int index); +-int pagedresults_set_current_be(Connection *conn, Slapi_Backend *be, int index, int nolock); +-void *pagedresults_get_search_result(Connection *conn, Operation *op, int locked, int index); +-int pagedresults_set_search_result(Connection *conn, Operation *op, void *sr, int locked, int index); +-int pagedresults_get_search_result_count(Connection *conn, Operation *op, int index); +-int pagedresults_set_search_result_count(Connection *conn, Operation *op, int cnt, int index); ++int pagedresults_set_current_be(Connection *conn, Slapi_Backend *be, int index, bool locked); ++void *pagedresults_get_search_result(Connection *conn, Operation *op, bool locked, int index); ++int pagedresults_set_search_result(Connection *conn, Operation *op, void *sr, bool locked, int index); ++int pagedresults_get_search_result_count(Connection *conn, Operation *op, bool locked, int index); ++int pagedresults_set_search_result_count(Connection *conn, Operation *op, int cnt, bool locked, int index); + int pagedresults_get_search_result_set_size_estimate(Connection *conn, + Operation *op, ++ bool locked, + int index); + int pagedresults_set_search_result_set_size_estimate(Connection *conn, + Operation *op, + int cnt, ++ bool locked, + int index); +-int pagedresults_get_with_sort(Connection *conn, Operation *op, int index); +-int pagedresults_set_with_sort(Connection *conn, Operation *op, int flags, int index); ++int pagedresults_get_with_sort(Connection *conn, Operation *op, bool locked, int index); ++int pagedresults_set_with_sort(Connection *conn, Operation *op, 
int flags, bool locked, int index); + int pagedresults_get_unindexed(Connection *conn, Operation *op, int index); + int pagedresults_set_unindexed(Connection *conn, Operation *op, int index); + int pagedresults_get_sort_result_code(Connection *conn, Operation *op, int index); +@@ -1639,15 +1641,13 @@ int pagedresults_cleanup(Connection *conn, int needlock); + int pagedresults_is_timedout_nolock(Connection *conn); + int pagedresults_reset_timedout_nolock(Connection *conn); + int pagedresults_in_use_nolock(Connection *conn); +-int pagedresults_free_one(Connection *conn, Operation *op, int index); +-int pagedresults_free_one_msgid(Connection *conn, ber_int_t msgid, pthread_mutex_t *mutex); ++int pagedresults_free_one(Connection *conn, Operation *op, bool locked, int index); ++int pagedresults_free_one_msgid(Connection *conn, ber_int_t msgid, bool locked); + int op_is_pagedresults(Operation *op); + int pagedresults_cleanup_all(Connection *conn, int needlock); + void op_set_pagedresults(Operation *op); +-void pagedresults_lock(Connection *conn, int index); +-void pagedresults_unlock(Connection *conn, int index); +-int pagedresults_is_abandoned_or_notavailable(Connection *conn, int locked, int index); +-int pagedresults_set_search_result_pb(Slapi_PBlock *pb, void *sr, int locked); ++int pagedresults_is_abandoned_or_notavailable(Connection *conn, bool locked, int index); ++int pagedresults_set_search_result_pb(Slapi_PBlock *pb, void *sr, bool locked); + + /* + * sort.c +diff --git a/ldap/servers/slapd/slap.h b/ldap/servers/slapd/slap.h +index 11c5602e3..d494931c2 100644 +--- a/ldap/servers/slapd/slap.h ++++ b/ldap/servers/slapd/slap.h +@@ -89,6 +89,10 @@ static char ptokPBE[34] = "Internal (Software) Token "; + #include + #include /* For timespec definitions */ + ++/* Macros for paged results lock parameter */ ++#define PR_LOCKED true ++#define PR_NOT_LOCKED false ++ + /* Provides our int types and platform specific requirements. */ + #include + +@@ -1669,7 +1673,6 @@ typedef struct _paged_results + struct timespec pr_timelimit_hr; /* expiry time of this request rel to clock monotonic */ + int pr_flags; + ber_int_t pr_msgid; /* msgid of the request; to abandon */ +- PRLock *pr_mutex; /* protect each conn structure */ + } PagedResults; + + /* array of simple paged structure stashed in connection */ +-- +2.52.0 + diff --git a/0003-Issue-6663-Fix-NULL-subsystem-crash-in-JSON-error-lo.patch b/0003-Issue-6663-Fix-NULL-subsystem-crash-in-JSON-error-lo.patch deleted file mode 100644 index 6281b86..0000000 --- a/0003-Issue-6663-Fix-NULL-subsystem-crash-in-JSON-error-lo.patch +++ /dev/null @@ -1,380 +0,0 @@ -From bd9ab54f64148d467e022c59ee8e5aed16f0c385 Mon Sep 17 00:00:00 2001 -From: Akshay Adhikari -Date: Mon, 28 Jul 2025 18:14:15 +0530 -Subject: [PATCH] Issue 6663 - Fix NULL subsystem crash in JSON error logging - (#6883) - -Description: Fixes crash in JSON error logging when subsystem is NULL. -Parametrized test case for better debugging. 
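For context, the parametrization pattern the rewritten test relies on looks
like this in isolation (a minimal, self-contained sketch; the trivial
assertion stands in for the real dsconf checks):

    import pytest

    LOG_TYPES = ['access', 'audit', 'auditfail', 'error', 'security']

    @pytest.mark.parametrize("log_type", LOG_TYPES)
    def test_log_settings(log_type):
        # Each value runs as its own test case, so a failure is reported
        # as e.g. test_log_settings[auditfail] instead of aborting one
        # monolithic loop at the first failing log type.
        assert log_type in LOG_TYPES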
- -Relates: https://github.com/389ds/389-ds-base/issues/6663 - -Reviewed by: @mreynolds389 ---- - .../tests/suites/clu/dsconf_logging.py | 168 ------------------ - .../tests/suites/clu/dsconf_logging_test.py | 164 +++++++++++++++++ - ldap/servers/slapd/log.c | 2 +- - 3 files changed, 165 insertions(+), 169 deletions(-) - delete mode 100644 dirsrvtests/tests/suites/clu/dsconf_logging.py - create mode 100644 dirsrvtests/tests/suites/clu/dsconf_logging_test.py - -diff --git a/dirsrvtests/tests/suites/clu/dsconf_logging.py b/dirsrvtests/tests/suites/clu/dsconf_logging.py -deleted file mode 100644 -index 1c2f7fc2e..000000000 ---- a/dirsrvtests/tests/suites/clu/dsconf_logging.py -+++ /dev/null -@@ -1,168 +0,0 @@ --# --- BEGIN COPYRIGHT BLOCK --- --# Copyright (C) 2025 Red Hat, Inc. --# All rights reserved. --# --# License: GPL (version 3 or any later version). --# See LICENSE for details. --# --- END COPYRIGHT BLOCK --- --# --import json --import subprocess --import logging --import pytest --from lib389._constants import DN_DM --from lib389.topologies import topology_st as topo -- --pytestmark = pytest.mark.tier1 -- --log = logging.getLogger(__name__) -- --SETTINGS = [ -- ('logging-enabled', None), -- ('logging-disabled', None), -- ('mode', '700'), -- ('compress-enabled', None), -- ('compress-disabled', None), -- ('buffering-enabled', None), -- ('buffering-disabled', None), -- ('max-logs', '4'), -- ('max-logsize', '7'), -- ('rotation-interval', '2'), -- ('rotation-interval-unit', 'week'), -- ('rotation-tod-enabled', None), -- ('rotation-tod-disabled', None), -- ('rotation-tod-hour', '12'), -- ('rotation-tod-minute', '20'), -- ('deletion-interval', '3'), -- ('deletion-interval-unit', 'day'), -- ('max-disk-space', '20'), -- ('free-disk-space', '2'), --] -- --DEFAULT_TIME_FORMAT = "%FT%TZ" -- -- --def execute_dsconf_command(dsconf_cmd, subcommands): -- """Execute dsconf command and return output and return code""" -- -- cmdline = dsconf_cmd + subcommands -- proc = subprocess.Popen(cmdline, stdout=subprocess.PIPE) -- out, _ = proc.communicate() -- return out.decode('utf-8'), proc.returncode -- -- --def get_dsconf_base_cmd(topo): -- """Return base dsconf command list""" -- return ['/usr/sbin/dsconf', topo.standalone.serverid, -- '-j', '-D', DN_DM, '-w', 'password', 'logging'] -- -- --def test_log_settings(topo): -- """Test each log setting can be set successfully -- -- :id: b800fd03-37f5-4e74-9af8-eeb07030eb52 -- :setup: Standalone DS instance -- :steps: -- 1. Test each log's settings -- :expectedresults: -- 1. 
Success -- """ -- -- dsconf_cmd = get_dsconf_base_cmd(topo) -- for log_type in ['access', 'audit', 'auditfail', 'error', 'security']: -- # Test "get" command -- output, rc = execute_dsconf_command(dsconf_cmd, [log_type, 'get']) -- assert rc == 0 -- json_result = json.loads(output) -- default_location = json_result['Log name and location'] -- -- # Log location -- output, rc = execute_dsconf_command(dsconf_cmd, [log_type, 'set', -- 'location', -- f'/tmp/{log_type}']) -- assert rc == 0 -- output, rc = execute_dsconf_command(dsconf_cmd, [log_type, 'set', -- 'location', -- default_location]) -- assert rc == 0 -- -- # Log levels -- if log_type == "access": -- # List levels -- output, rc = execute_dsconf_command(dsconf_cmd, -- [log_type, 'list-levels']) -- assert rc == 0 -- -- # Set levels -- output, rc = execute_dsconf_command(dsconf_cmd, -- [log_type, 'set', 'level', -- 'internal']) -- assert rc == 0 -- output, rc = execute_dsconf_command(dsconf_cmd, -- [log_type, 'set', 'level', -- 'internal', 'entry']) -- assert rc == 0 -- output, rc = execute_dsconf_command(dsconf_cmd, -- [log_type, 'set', 'level', -- 'internal', 'default']) -- assert rc == 0 -- -- if log_type == "error": -- # List levels -- output, rc = execute_dsconf_command(dsconf_cmd, -- [log_type, 'list-levels']) -- assert rc == 0 -- -- # Set levels -- output, rc = execute_dsconf_command(dsconf_cmd, -- [log_type, 'set', 'level', -- 'plugin', 'replication']) -- assert rc == 0 -- output, rc = execute_dsconf_command(dsconf_cmd, -- [log_type, 'set', 'level', -- 'default']) -- assert rc == 0 -- -- # Log formats -- if log_type in ["access", "audit", "error"]: -- output, rc = execute_dsconf_command(dsconf_cmd, -- [log_type, 'set', -- 'time-format', '%D']) -- assert rc == 0 -- output, rc = execute_dsconf_command(dsconf_cmd, -- [log_type, 'set', -- 'time-format', -- DEFAULT_TIME_FORMAT]) -- assert rc == 0 -- -- output, rc = execute_dsconf_command(dsconf_cmd, -- [log_type, 'set', -- 'log-format', -- 'json']) -- assert rc == 0 -- output, rc = execute_dsconf_command(dsconf_cmd, -- [log_type, 'set', -- 'log-format', -- 'default']) -- assert rc == 0 -- -- # Audit log display attrs -- if log_type == "audit": -- output, rc = execute_dsconf_command(dsconf_cmd, -- [log_type, 'set', -- 'display-attrs', 'cn']) -- assert rc == 0 -- -- # Common settings -- for attr, value in SETTINGS: -- if log_type == "auditfail" and attr.startswith("buffer"): -- # auditfail doesn't have a buffering settings -- continue -- -- if value is None: -- output, rc = execute_dsconf_command(dsconf_cmd, [log_type, -- 'set', attr]) -- else: -- output, rc = execute_dsconf_command(dsconf_cmd, [log_type, -- 'set', attr, value]) -- assert rc == 0 -diff --git a/dirsrvtests/tests/suites/clu/dsconf_logging_test.py b/dirsrvtests/tests/suites/clu/dsconf_logging_test.py -new file mode 100644 -index 000000000..ca3f71997 ---- /dev/null -+++ b/dirsrvtests/tests/suites/clu/dsconf_logging_test.py -@@ -0,0 +1,164 @@ -+# --- BEGIN COPYRIGHT BLOCK --- -+# Copyright (C) 2025 Red Hat, Inc. -+# All rights reserved. -+# -+# License: GPL (version 3 or any later version). -+# See LICENSE for details. 
-+# --- END COPYRIGHT BLOCK --- -+# -+import json -+import subprocess -+import logging -+import pytest -+from lib389._constants import DN_DM -+from lib389.topologies import topology_st as topo -+ -+pytestmark = pytest.mark.tier1 -+ -+log = logging.getLogger(__name__) -+ -+SETTINGS = [ -+ ('logging-enabled', None), -+ ('logging-disabled', None), -+ ('mode', '700'), -+ ('compress-enabled', None), -+ ('compress-disabled', None), -+ ('buffering-enabled', None), -+ ('buffering-disabled', None), -+ ('max-logs', '4'), -+ ('max-logsize', '7'), -+ ('rotation-interval', '2'), -+ ('rotation-interval-unit', 'week'), -+ ('rotation-tod-enabled', None), -+ ('rotation-tod-disabled', None), -+ ('rotation-tod-hour', '12'), -+ ('rotation-tod-minute', '20'), -+ ('deletion-interval', '3'), -+ ('deletion-interval-unit', 'day'), -+ ('max-disk-space', '20'), -+ ('free-disk-space', '2'), -+] -+ -+DEFAULT_TIME_FORMAT = "%FT%TZ" -+ -+ -+def execute_dsconf_command(dsconf_cmd, subcommands): -+ """Execute dsconf command and return output and return code""" -+ -+ cmdline = dsconf_cmd + subcommands -+ proc = subprocess.Popen(cmdline, stdout=subprocess.PIPE, stderr=subprocess.PIPE) -+ out, err = proc.communicate() -+ -+ if proc.returncode != 0 and err: -+ log.error(f"Command failed: {' '.join(cmdline)}") -+ log.error(f"Stderr: {err.decode('utf-8')}") -+ -+ return out.decode('utf-8'), proc.returncode -+ -+ -+def get_dsconf_base_cmd(topo): -+ """Return base dsconf command list""" -+ return ['/usr/sbin/dsconf', topo.standalone.serverid, -+ '-j', '-D', DN_DM, '-w', 'password', 'logging'] -+ -+ -+@pytest.mark.parametrize("log_type", ['access', 'audit', 'auditfail', 'error', 'security']) -+def test_log_settings(topo, log_type): -+ """Test each log setting can be set successfully -+ -+ :id: b800fd03-37f5-4e74-9af8-eeb07030eb52 -+ :setup: Standalone DS instance -+ :steps: -+ 1. Test each log's settings -+ :expectedresults: -+ 1. 
Success -+ """ -+ -+ dsconf_cmd = get_dsconf_base_cmd(topo) -+ -+ output, rc = execute_dsconf_command(dsconf_cmd, [log_type, 'get']) -+ assert rc == 0 -+ json_result = json.loads(output) -+ default_location = json_result['Log name and location'] -+ -+ output, rc = execute_dsconf_command(dsconf_cmd, [log_type, 'set', -+ 'location', -+ f'/tmp/{log_type}']) -+ assert rc == 0 -+ output, rc = execute_dsconf_command(dsconf_cmd, [log_type, 'set', -+ 'location', -+ default_location]) -+ assert rc == 0 -+ -+ if log_type == "access": -+ output, rc = execute_dsconf_command(dsconf_cmd, -+ [log_type, 'list-levels']) -+ assert rc == 0 -+ -+ output, rc = execute_dsconf_command(dsconf_cmd, -+ [log_type, 'set', 'level', -+ 'internal']) -+ assert rc == 0 -+ output, rc = execute_dsconf_command(dsconf_cmd, -+ [log_type, 'set', 'level', -+ 'internal', 'entry']) -+ assert rc == 0 -+ output, rc = execute_dsconf_command(dsconf_cmd, -+ [log_type, 'set', 'level', -+ 'internal', 'default']) -+ assert rc == 0 -+ -+ if log_type == "error": -+ output, rc = execute_dsconf_command(dsconf_cmd, -+ [log_type, 'list-levels']) -+ assert rc == 0 -+ -+ output, rc = execute_dsconf_command(dsconf_cmd, -+ [log_type, 'set', 'level', -+ 'plugin', 'replication']) -+ assert rc == 0 -+ output, rc = execute_dsconf_command(dsconf_cmd, -+ [log_type, 'set', 'level', -+ 'default']) -+ assert rc == 0 -+ -+ if log_type in ["access", "audit", "error"]: -+ output, rc = execute_dsconf_command(dsconf_cmd, -+ [log_type, 'set', -+ 'time-format', '%D']) -+ assert rc == 0 -+ output, rc = execute_dsconf_command(dsconf_cmd, -+ [log_type, 'set', -+ 'time-format', -+ DEFAULT_TIME_FORMAT]) -+ assert rc == 0 -+ -+ output, rc = execute_dsconf_command(dsconf_cmd, -+ [log_type, 'set', -+ 'log-format', -+ 'json']) -+ assert rc == 0, f"Failed to set {log_type} log-format to json: {output}" -+ -+ output, rc = execute_dsconf_command(dsconf_cmd, -+ [log_type, 'set', -+ 'log-format', -+ 'default']) -+ assert rc == 0, f"Failed to set {log_type} log-format to default: {output}" -+ -+ if log_type == "audit": -+ output, rc = execute_dsconf_command(dsconf_cmd, -+ [log_type, 'set', -+ 'display-attrs', 'cn']) -+ assert rc == 0 -+ -+ for attr, value in SETTINGS: -+ if log_type == "auditfail" and attr.startswith("buffer"): -+ continue -+ -+ if value is None: -+ output, rc = execute_dsconf_command(dsconf_cmd, [log_type, -+ 'set', attr]) -+ else: -+ output, rc = execute_dsconf_command(dsconf_cmd, [log_type, -+ 'set', attr, value]) -+ assert rc == 0 -diff --git a/ldap/servers/slapd/log.c b/ldap/servers/slapd/log.c -index 06dae4d0b..e859682fe 100644 ---- a/ldap/servers/slapd/log.c -+++ b/ldap/servers/slapd/log.c -@@ -2937,7 +2937,7 @@ vslapd_log_error( - json_obj = json_object_new_object(); - json_object_object_add(json_obj, "local_time", json_object_new_string(local_time)); - json_object_object_add(json_obj, "severity", json_object_new_string(get_log_sev_name(sev_level, sev_name))); -- json_object_object_add(json_obj, "subsystem", json_object_new_string(subsystem)); -+ json_object_object_add(json_obj, "subsystem", json_object_new_string(subsystem ? 
subsystem : "")); - json_object_object_add(json_obj, "msg", json_object_new_string(vbuf)); - - PR_snprintf(buffer, sizeof(buffer), "%s\n", --- -2.49.0 - diff --git a/0004-Issue-6829-Update-parametrized-docstring-for-tests.patch b/0004-Issue-6829-Update-parametrized-docstring-for-tests.patch deleted file mode 100644 index 65130e1..0000000 --- a/0004-Issue-6829-Update-parametrized-docstring-for-tests.patch +++ /dev/null @@ -1,205 +0,0 @@ -From 23e0b93c3bbe96a365357a3af11bc86172102c05 Mon Sep 17 00:00:00 2001 -From: Barbora Simonova -Date: Wed, 25 Jun 2025 10:43:37 +0200 -Subject: [PATCH] Issue 6829 - Update parametrized docstring for tests - -Description: -Update missing parametrized value in docstring for some tests - -Fixes: https://github.com/389ds/389-ds-base/issues/6829 - -Reviewed by: @vashirov (Thanks!) ---- - dirsrvtests/tests/suites/basic/basic_test.py | 2 +- - dirsrvtests/tests/suites/clu/dsconf_config_test.py | 8 ++++++++ - dirsrvtests/tests/suites/clu/schema_test.py | 5 +++++ - dirsrvtests/tests/suites/mapping_tree/regression_test.py | 1 + - dirsrvtests/tests/suites/password/password_test.py | 1 + - .../tests/suites/replication/regression_m2_test.py | 1 + - dirsrvtests/tests/suites/vlv/regression_test.py | 2 ++ - 7 files changed, 19 insertions(+), 1 deletion(-) - -diff --git a/dirsrvtests/tests/suites/basic/basic_test.py b/dirsrvtests/tests/suites/basic/basic_test.py -index 8f5de91aa..db6bfab56 100644 ---- a/dirsrvtests/tests/suites/basic/basic_test.py -+++ b/dirsrvtests/tests/suites/basic/basic_test.py -@@ -961,7 +961,7 @@ def test_basic_search_lookthroughlimit(topology_st, limit, resp, import_example_ - Tests normal search with lookthroughlimit set high and low. - - :id: b5119970-6c9f-41b7-9649-de9233226fec -- -+ :parametrized: yes - :setup: Standalone instance, add example.ldif to the database, search filter (uid=*). - - :steps: -diff --git a/dirsrvtests/tests/suites/clu/dsconf_config_test.py b/dirsrvtests/tests/suites/clu/dsconf_config_test.py -index d67679adf..232018097 100644 ---- a/dirsrvtests/tests/suites/clu/dsconf_config_test.py -+++ b/dirsrvtests/tests/suites/clu/dsconf_config_test.py -@@ -58,6 +58,7 @@ def test_single_value_add(topology_st, attr_name, values_dict): - """Test adding a single value to an attribute - - :id: ffc912a6-c188-413d-9c35-7f4b3774d946 -+ :parametrized: yes - :setup: Standalone DS instance - :steps: - 1. Add a single value to the specified attribute -@@ -89,6 +90,7 @@ def test_single_value_replace(topology_st, attr_name, values_dict): - """Test replacing a single value in configuration attributes - - :id: 112e3e5e-8db8-4974-9ea4-ed789c2d02f2 -+ :parametrized: yes - :setup: Standalone DS instance - :steps: - 1. Add initial value to the specified attribute -@@ -127,6 +129,7 @@ def test_multi_value_batch_add(topology_st, attr_name, values_dict): - """Test adding multiple values in a single batch command - - :id: 4c34c7f8-16cc-4ab6-938a-967537be5470 -+ :parametrized: yes - :setup: Standalone DS instance - :steps: - 1. Add multiple values to the attribute in a single command -@@ -157,6 +160,7 @@ def test_multi_value_batch_replace(topology_st, attr_name, values_dict): - """Test replacing with multiple values in a single batch command - - :id: 05cf28b8-000e-4856-a10b-7e1df012737d -+ :parametrized: yes - :setup: Standalone DS instance - :steps: - 1. 
Add initial single value -@@ -194,6 +198,7 @@ def test_multi_value_specific_delete(topology_st, attr_name, values_dict): - """Test deleting specific values from multi-valued attribute - - :id: bb325c9a-eae8-438a-b577-bd63540b91cb -+ :parametrized: yes - :setup: Standalone DS instance - :steps: - 1. Add multiple values to the attribute -@@ -232,6 +237,7 @@ def test_multi_value_batch_delete(topology_st, attr_name, values_dict): - """Test deleting multiple values in a single batch command - - :id: 4b105824-b060-4f83-97d7-001a01dba1a5 -+ :parametrized: yes - :setup: Standalone DS instance - :steps: - 1. Add multiple values to the attribute -@@ -269,6 +275,7 @@ def test_single_value_persists_after_restart(topology_st, attr_name, values_dict - """Test single value persists after server restart - - :id: be1a7e3d-a9ca-48a1-a3bc-062990d4f3e9 -+ :parametrized: yes - :setup: Standalone DS instance - :steps: - 1. Add single value to the attribute -@@ -310,6 +317,7 @@ def test_multi_value_batch_persists_after_restart(topology_st, attr_name, values - """Test multiple values added in batch persist after server restart - - :id: fd0435e2-90b1-465a-8968-d3a375c8fb22 -+ :parametrized: yes - :setup: Standalone DS instance - :steps: - 1. Add multiple values in a single batch command -diff --git a/dirsrvtests/tests/suites/clu/schema_test.py b/dirsrvtests/tests/suites/clu/schema_test.py -index 19ec032bc..5709471cf 100644 ---- a/dirsrvtests/tests/suites/clu/schema_test.py -+++ b/dirsrvtests/tests/suites/clu/schema_test.py -@@ -100,6 +100,7 @@ def test_origins(create_attribute): - """Test the various possibilities of x-origin - - :id: 3229f6f8-67c1-4558-9be5-71434283086a -+ :parametrized: yes - :setup: Standalone Instance - :steps: - 1. Add an attribute with different x-origin values/types -@@ -116,6 +117,7 @@ def test_mrs(create_attribute): - """Test an attribute can be added with a matching rule - - :id: e4eb06e0-7f80-41fe-8868-08c2bafc7590 -+ :parametrized: yes - :setup: Standalone Instance - :steps: - 1. Add an attribute with a matching rule -@@ -132,6 +134,7 @@ def test_edit_attributetype(create_attribute): - """Test editing an attribute type in the schema - - :id: 07c98f6a-89f8-44e5-9cc1-353d1f7bccf4 -+ :parametrized: yes - :setup: Standalone Instance - :steps: - 1. Add a new attribute type -@@ -209,6 +212,7 @@ def test_edit_attributetype_remove_superior(create_attribute): - """Test editing an attribute type to remove a parameter from it - - :id: bd6ae89f-9617-4620-adc2-465884ca568b -+ :parametrized: yes - :setup: Standalone Instance - :steps: - 1. Add a new attribute type with a superior -@@ -244,6 +248,7 @@ def test_edit_attribute_keep_custom_values(create_attribute): - """Test editing a custom schema attribute keeps all custom values - - :id: 5b1e2e8b-28c2-4f77-9c03-07eff20f763d -+ :parametrized: yes - :setup: Standalone Instance - :steps: - 1. 
Create a custom attribute
-diff --git a/dirsrvtests/tests/suites/mapping_tree/regression_test.py b/dirsrvtests/tests/suites/mapping_tree/regression_test.py
-index 51c687059..2c57c2973 100644
---- a/dirsrvtests/tests/suites/mapping_tree/regression_test.py
-+++ b/dirsrvtests/tests/suites/mapping_tree/regression_test.py
-@@ -111,6 +111,7 @@ def test_sub_suffixes(topo, orphan_param):
-     """ check the entries found on suffix/sub-suffix
- 
-     :id: 5b4421c2-d851-11ec-a760-482ae39447e5
-+    :parametrized: yes
-     :feature: mapping-tree
-     :setup: Standalone instance with 3 additional backends:
-            dc=parent, dc=child1,dc=parent, dc=childr21,dc=parent
-diff --git a/dirsrvtests/tests/suites/password/password_test.py b/dirsrvtests/tests/suites/password/password_test.py
-index 2d4aef028..94a23e669 100644
---- a/dirsrvtests/tests/suites/password/password_test.py
-+++ b/dirsrvtests/tests/suites/password/password_test.py
-@@ -156,6 +156,7 @@ def test_pwd_scheme_no_upgrade_on_bind(topology_st, crypt_scheme, request, no_up
-         the current hash is in nsslapd-scheme-list-no-upgrade-hash
- 
-     :id: b4d2c525-a239-4ca6-a168-5126da7abedd
-+    :parametrized: yes
-     :setup: Standalone instance
-     :steps:
-         1. Create a user with userpassword stored as CRYPT
-diff --git a/dirsrvtests/tests/suites/replication/regression_m2_test.py b/dirsrvtests/tests/suites/replication/regression_m2_test.py
-index 10a5fa419..68966ac49 100644
---- a/dirsrvtests/tests/suites/replication/regression_m2_test.py
-+++ b/dirsrvtests/tests/suites/replication/regression_m2_test.py
-@@ -991,6 +991,7 @@ def test_change_repl_passwd(topo_m2, request, bind_cn):
-        Testing when agmt bind group are used.
- 
-     :id: a305913a-cc76-11ec-b324-482ae39447e5
-+    :parametrized: yes
-     :setup: 2 Supplier Instances
-     :steps:
-         1. Insure agmt from supplier1 to supplier2 is properly set to use bind group
-diff --git a/dirsrvtests/tests/suites/vlv/regression_test.py b/dirsrvtests/tests/suites/vlv/regression_test.py
-index d9d1cb444..f7847ac74 100644
---- a/dirsrvtests/tests/suites/vlv/regression_test.py
-+++ b/dirsrvtests/tests/suites/vlv/regression_test.py
-@@ -775,6 +775,7 @@ def test_vlv_reindex(topology_st, prefix, basedn):
-     """Test VLV reindexing.
- 
-     :id: d5dc0d8e-cbe6-11ee-95b1-482ae39447e5
-+    :parametrized: yes
-     :setup: Standalone instance.
-     :steps:
-         1. Cleanup leftover from previous tests
-@@ -830,6 +831,7 @@ def test_vlv_offline_import(topology_st, prefix, basedn):
-     """Test VLV after off line import.
- 
-     :id: 8732d7a8-e851-11ee-9d63-482ae39447e5
-+    :parametrized: yes
-     :setup: Standalone instance.
-     :steps:
-         1. Cleanup leftover from previous tests
---
-2.49.0
-
diff --git a/0005-Issue-6782-Improve-paged-result-locking.patch b/0005-Issue-6782-Improve-paged-result-locking.patch
deleted file mode 100644
index c058b68..0000000
--- a/0005-Issue-6782-Improve-paged-result-locking.patch
+++ /dev/null
@@ -1,127 +0,0 @@
-From 4f81f696e85dc7c50555df2d410222214c8ac883 Mon Sep 17 00:00:00 2001
-From: Mark Reynolds
-Date: Thu, 15 May 2025 10:35:27 -0400
-Subject: [PATCH] Issue 6782 - Improve paged result locking
-
-Description:
-
-When cleaning a slot, instead of memsetting everything to zero and restoring
-the mutex, manually reset all the values, leaving the mutex pointer
-intact.
-
-There is also a deadlock possibility when checking for an abandoned PR search
-in opshared.c, where a flag value was checked outside of the per-connection
-lock.
-
-Relates: https://github.com/389ds/389-ds-base/issues/6782
-
-Reviewed by: progier & spichugi (Thanks!!)
---- - ldap/servers/slapd/opshared.c | 10 +++++++++- - ldap/servers/slapd/pagedresults.c | 27 +++++++++++++++++---------- - 2 files changed, 26 insertions(+), 11 deletions(-) - -diff --git a/ldap/servers/slapd/opshared.c b/ldap/servers/slapd/opshared.c -index 5ea919e2d..545518748 100644 ---- a/ldap/servers/slapd/opshared.c -+++ b/ldap/servers/slapd/opshared.c -@@ -619,6 +619,14 @@ op_shared_search(Slapi_PBlock *pb, int send_result) - int32_t tlimit; - slapi_pblock_get(pb, SLAPI_SEARCH_TIMELIMIT, &tlimit); - pagedresults_set_timelimit(pb_conn, operation, (time_t)tlimit, pr_idx); -+ /* When using this mutex in conjunction with the main paged -+ * result lock, you must do so in this order: -+ * -+ * --> pagedresults_lock() -+ * --> pagedresults_mutex -+ * <-- pagedresults_mutex -+ * <-- pagedresults_unlock() -+ */ - pagedresults_mutex = pageresult_lock_get_addr(pb_conn); - } - -@@ -744,11 +752,11 @@ op_shared_search(Slapi_PBlock *pb, int send_result) - pr_search_result = pagedresults_get_search_result(pb_conn, operation, 1 /*locked*/, pr_idx); - if (pr_search_result) { - if (pagedresults_is_abandoned_or_notavailable(pb_conn, 1 /*locked*/, pr_idx)) { -+ pthread_mutex_unlock(pagedresults_mutex); - pagedresults_unlock(pb_conn, pr_idx); - /* Previous operation was abandoned and the simplepaged object is not in use. */ - send_ldap_result(pb, 0, NULL, "Simple Paged Results Search abandoned", 0, NULL); - rc = LDAP_SUCCESS; -- pthread_mutex_unlock(pagedresults_mutex); - goto free_and_return; - } else { - slapi_pblock_set(pb, SLAPI_SEARCH_RESULT_SET, pr_search_result); -diff --git a/ldap/servers/slapd/pagedresults.c b/ldap/servers/slapd/pagedresults.c -index 642aefb3d..c3f3aae01 100644 ---- a/ldap/servers/slapd/pagedresults.c -+++ b/ldap/servers/slapd/pagedresults.c -@@ -48,7 +48,6 @@ pageresult_lock_get_addr(Connection *conn) - static void - _pr_cleanup_one_slot(PagedResults *prp) - { -- PRLock *prmutex = NULL; - if (!prp) { - return; - } -@@ -56,13 +55,17 @@ _pr_cleanup_one_slot(PagedResults *prp) - /* sr is left; release it. */ - prp->pr_current_be->be_search_results_release(&(prp->pr_search_result_set)); - } -- /* clean up the slot */ -- if (prp->pr_mutex) { -- /* pr_mutex is reused; back it up and reset it. */ -- prmutex = prp->pr_mutex; -- } -- memset(prp, '\0', sizeof(PagedResults)); -- prp->pr_mutex = prmutex; -+ -+ /* clean up the slot except the mutex */ -+ prp->pr_current_be = NULL; -+ prp->pr_search_result_set = NULL; -+ prp->pr_search_result_count = 0; -+ prp->pr_search_result_set_size_estimate = 0; -+ prp->pr_sort_result_code = 0; -+ prp->pr_timelimit_hr.tv_sec = 0; -+ prp->pr_timelimit_hr.tv_nsec = 0; -+ prp->pr_flags = 0; -+ prp->pr_msgid = 0; - } - - /* -@@ -1007,7 +1010,8 @@ op_set_pagedresults(Operation *op) - - /* - * pagedresults_lock/unlock -- introduced to protect search results for the -- * asynchronous searches. -+ * asynchronous searches. Do not call these functions while the PR conn lock -+ * is held (e.g. pageresult_lock_get_addr(conn)) - */ - void - pagedresults_lock(Connection *conn, int index) -@@ -1045,6 +1049,8 @@ int - pagedresults_is_abandoned_or_notavailable(Connection *conn, int locked, int index) - { - PagedResults *prp; -+ int32_t result; -+ - if (!conn || (index < 0) || (index >= conn->c_pagedresults.prl_maxlen)) { - return 1; /* not abandoned, but do not want to proceed paged results op. 
*/
-     }
-@@ -1052,10 +1058,11 @@ pagedresults_is_abandoned_or_notavailable(Connection *conn, int locked, int inde
-         pthread_mutex_lock(pageresult_lock_get_addr(conn));
-     }
-     prp = conn->c_pagedresults.prl_list + index;
-+    result = prp->pr_flags & CONN_FLAG_PAGEDRESULTS_ABANDONED;
-     if (!locked) {
-         pthread_mutex_unlock(pageresult_lock_get_addr(conn));
-     }
--    return prp->pr_flags & CONN_FLAG_PAGEDRESULTS_ABANDONED;
-+    return result;
- }
- 
- int
---
-2.49.0
-
diff --git a/0006-Issue-6838-lib389-replica.py-is-using-nonexistent-da.patch b/0006-Issue-6838-lib389-replica.py-is-using-nonexistent-da.patch
deleted file mode 100644
index cb5c3ba..0000000
--- a/0006-Issue-6838-lib389-replica.py-is-using-nonexistent-da.patch
+++ /dev/null
@@ -1,37 +0,0 @@
-From 0ab37e0848e6f1c4e46068bee46bd91c3bb3d22d Mon Sep 17 00:00:00 2001
-From: Viktor Ashirov
-Date: Tue, 1 Jul 2025 12:44:04 +0200
-Subject: [PATCH] Issue 6838 - lib389/replica.py is using nonexistent
- datetime.UTC in Python 3.9
-
-Bug Description:
-389-ds-base-2.x is supposed to be used with Python 3.9.
-But lib389/replica.py is using `datetime.UTC`, which is an alias
-of `datetime.timezone.utc` that was added only in Python 3.11.
-
-Fix Description:
-Use `datetime.timezone.utc` instead.
-
-Fixes: https://github.com/389ds/389-ds-base/issues/6838
-
-Reviewed by: @mreynolds389 (Thanks!)
----
- src/lib389/lib389/replica.py | 2 +-
- 1 file changed, 1 insertion(+), 1 deletion(-)
-
-diff --git a/src/lib389/lib389/replica.py b/src/lib389/lib389/replica.py
-index 18ce1b1d5..59be00a33 100644
---- a/src/lib389/lib389/replica.py
-+++ b/src/lib389/lib389/replica.py
-@@ -917,7 +917,7 @@ class RUV(object):
-             ValueError("Wrong CSN value was supplied")
- 
-         timestamp = int(csn[:8], 16)
--        time_str = datetime.datetime.fromtimestamp(timestamp, datetime.UTC).strftime('%Y-%m-%d %H:%M:%S')
-+        time_str = datetime.datetime.fromtimestamp(timestamp, datetime.timezone.utc).strftime('%Y-%m-%d %H:%M:%S')
-         # We are parsing shorter CSN which contains only timestamp
-         if len(csn) == 8:
-             return time_str
---
-2.49.0
-
diff --git a/0007-Issue-6753-Add-add_exclude_subtree-and-remove_exclud.patch b/0007-Issue-6753-Add-add_exclude_subtree-and-remove_exclud.patch
deleted file mode 100644
index bea64d1..0000000
--- a/0007-Issue-6753-Add-add_exclude_subtree-and-remove_exclud.patch
+++ /dev/null
@@ -1,515 +0,0 @@
-From 8984550568737142dd22020e1b9efd87cc0e42f8 Mon Sep 17 00:00:00 2001
-From: Lenka Doudova
-Date: Mon, 9 Jun 2025 15:15:04 +0200
-Subject: [PATCH] Issue 6753 - Add 'add_exclude_subtree' and
- 'remove_exclude_subtree' methods to Attribute uniqueness plugin
-
-Description:
-Adding 'add_exclude_subtree' and 'remove_exclude_subtree' methods to AttributeUniquenessPlugin
-so that an exclude subtree can easily be added or removed.
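A short usage sketch of the two new methods, assuming an already-connected
DirSrv instance named 'inst' (the subtree DN is illustrative; the plugin DN
matches the one used in the ported test):

    from lib389.plugins import AttributeUniquenessPlugin

    attruniq = AttributeUniquenessPlugin(inst, dn="cn=attruniq,cn=plugins,cn=config")

    # Both calls are thin wrappers over the multi-valued
    # 'uniqueness-exclude-subtrees' attribute, so several subtrees
    # can be excluded at the same time.
    attruniq.add_exclude_subtree("cn=excluded_container,dc=example,dc=com")
    attruniq.remove_exclude_subtree("cn=excluded_container,dc=example,dc=com")

    # The plugin picks up the changed scope after a restart.
    inst.restart()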
-Porting ticket 47927 test to -dirsrvtests/tests/suites/plugins/attruniq_test.py - -Relates: #6753 - -Author: Lenka Doudova - -Reviewers: Simon Pichugin, Mark Reynolds ---- - .../tests/suites/plugins/attruniq_test.py | 171 +++++++++++ - dirsrvtests/tests/tickets/ticket47927_test.py | 267 ------------------ - src/lib389/lib389/plugins.py | 10 + - 3 files changed, 181 insertions(+), 267 deletions(-) - delete mode 100644 dirsrvtests/tests/tickets/ticket47927_test.py - -diff --git a/dirsrvtests/tests/suites/plugins/attruniq_test.py b/dirsrvtests/tests/suites/plugins/attruniq_test.py -index c1ccad9ae..aac659c29 100644 ---- a/dirsrvtests/tests/suites/plugins/attruniq_test.py -+++ b/dirsrvtests/tests/suites/plugins/attruniq_test.py -@@ -10,6 +10,7 @@ import pytest - import ldap - import logging - from lib389.plugins import AttributeUniquenessPlugin -+from lib389.idm.nscontainer import nsContainers - from lib389.idm.user import UserAccounts - from lib389.idm.group import Groups - from lib389._constants import DEFAULT_SUFFIX -@@ -22,6 +23,19 @@ log = logging.getLogger(__name__) - MAIL_ATTR_VALUE = 'non-uniq@value.net' - MAIL_ATTR_VALUE_ALT = 'alt-mail@value.net' - -+EXCLUDED_CONTAINER_CN = "excluded_container" -+EXCLUDED_CONTAINER_DN = "cn={},{}".format(EXCLUDED_CONTAINER_CN, DEFAULT_SUFFIX) -+ -+EXCLUDED_BIS_CONTAINER_CN = "excluded_bis_container" -+EXCLUDED_BIS_CONTAINER_DN = "cn={},{}".format(EXCLUDED_BIS_CONTAINER_CN, DEFAULT_SUFFIX) -+ -+ENFORCED_CONTAINER_CN = "enforced_container" -+ -+USER_1_CN = "test_1" -+USER_2_CN = "test_2" -+USER_3_CN = "test_3" -+USER_4_CN = "test_4" -+ - - def test_modrdn_attr_uniqueness(topology_st): - """Test that we can not add two entries that have the same attr value that is -@@ -154,3 +168,160 @@ def test_multiple_attr_uniqueness(topology_st): - testuser2.delete() - attruniq.disable() - attruniq.delete() -+ -+ -+def test_exclude_subtrees(topology_st): -+ """ Test attribute uniqueness with exclude scope -+ -+ :id: 43d29a60-40e1-4ebd-b897-6ef9f20e9f27 -+ :setup: Standalone instance -+ :steps: -+ 1. Setup and enable attribute uniqueness plugin for telephonenumber unique attribute -+ 2. Create subtrees and test users -+ 3. Add a unique attribute to a user within uniqueness scope -+ 4. Add exclude subtree -+ 5. Try to add existing value attribute to an entry within uniqueness scope -+ 6. Try to add existing value attribute to an entry within exclude scope -+ 7. Remove the attribute from affected entries -+ 8. Add a unique attribute to a user within exclude scope -+ 9. Try to add existing value attribute to an entry within uniqueness scope -+ 10. Try to add existing value attribute to another entry within uniqueness scope -+ 11. Remove the attribute from affected entries -+ 12. Add another exclude subtree -+ 13. Add a unique attribute to a user within uniqueness scope -+ 14. Try to add existing value attribute to an entry within uniqueness scope -+ 15. Try to add existing value attribute to an entry within exclude scope -+ 16. Try to add existing value attribute to an entry within another exclude scope -+ 17. Clean up entries -+ :expectedresults: -+ 1. Success -+ 2. Success -+ 3. Success -+ 4. Success -+ 5. Should raise CONSTRAINT_VIOLATION -+ 6. Success -+ 7. Success -+ 8. Success -+ 9. Success -+ 10. Should raise CONSTRAINT_VIOLATION -+ 11. Success -+ 12. Success -+ 13. Success -+ 14. Should raise CONSTRAINT_VIOLATION -+ 15. Success -+ 16. Success -+ 17. 
Success -+ """ -+ log.info('Setup attribute uniqueness plugin') -+ attruniq = AttributeUniquenessPlugin(topology_st.standalone, dn="cn=attruniq,cn=plugins,cn=config") -+ attruniq.create(properties={'cn': 'attruniq'}) -+ attruniq.add_unique_attribute('telephonenumber') -+ attruniq.add_unique_subtree(DEFAULT_SUFFIX) -+ attruniq.enable_all_subtrees() -+ attruniq.enable() -+ topology_st.standalone.restart() -+ -+ log.info('Create subtrees container') -+ containers = nsContainers(topology_st.standalone, DEFAULT_SUFFIX) -+ cont1 = containers.create(properties={'cn': EXCLUDED_CONTAINER_CN}) -+ cont2 = containers.create(properties={'cn': EXCLUDED_BIS_CONTAINER_CN}) -+ cont3 = containers.create(properties={'cn': ENFORCED_CONTAINER_CN}) -+ -+ log.info('Create test users') -+ users = UserAccounts(topology_st.standalone, DEFAULT_SUFFIX, -+ rdn='cn={}'.format(ENFORCED_CONTAINER_CN)) -+ users_excluded = UserAccounts(topology_st.standalone, DEFAULT_SUFFIX, -+ rdn='cn={}'.format(EXCLUDED_CONTAINER_CN)) -+ users_excluded2 = UserAccounts(topology_st.standalone, DEFAULT_SUFFIX, -+ rdn='cn={}'.format(EXCLUDED_BIS_CONTAINER_CN)) -+ -+ user1 = users.create(properties={'cn': USER_1_CN, -+ 'uid': USER_1_CN, -+ 'sn': USER_1_CN, -+ 'uidNumber': '1', -+ 'gidNumber': '11', -+ 'homeDirectory': '/home/{}'.format(USER_1_CN)}) -+ user2 = users.create(properties={'cn': USER_2_CN, -+ 'uid': USER_2_CN, -+ 'sn': USER_2_CN, -+ 'uidNumber': '2', -+ 'gidNumber': '22', -+ 'homeDirectory': '/home/{}'.format(USER_2_CN)}) -+ user3 = users_excluded.create(properties={'cn': USER_3_CN, -+ 'uid': USER_3_CN, -+ 'sn': USER_3_CN, -+ 'uidNumber': '3', -+ 'gidNumber': '33', -+ 'homeDirectory': '/home/{}'.format(USER_3_CN)}) -+ user4 = users_excluded2.create(properties={'cn': USER_4_CN, -+ 'uid': USER_4_CN, -+ 'sn': USER_4_CN, -+ 'uidNumber': '4', -+ 'gidNumber': '44', -+ 'homeDirectory': '/home/{}'.format(USER_4_CN)}) -+ -+ UNIQUE_VALUE = '1234' -+ -+ try: -+ log.info('Create user with unique attribute') -+ user1.add('telephonenumber', UNIQUE_VALUE) -+ assert user1.present('telephonenumber', UNIQUE_VALUE) -+ -+ log.info('Add exclude subtree') -+ attruniq.add_exclude_subtree(EXCLUDED_CONTAINER_DN) -+ topology_st.standalone.restart() -+ -+ log.info('Verify an already used attribute value cannot be added within the same subtree') -+ with pytest.raises(ldap.CONSTRAINT_VIOLATION): -+ user2.add('telephonenumber', UNIQUE_VALUE) -+ -+ log.info('Verify an entry with same attribute value can be added within exclude subtree') -+ user3.add('telephonenumber', UNIQUE_VALUE) -+ assert user3.present('telephonenumber', UNIQUE_VALUE) -+ -+ log.info('Cleanup unique attribute values') -+ user1.remove_all('telephonenumber') -+ user3.remove_all('telephonenumber') -+ -+ log.info('Add a unique value to an entry in excluded scope') -+ user3.add('telephonenumber', UNIQUE_VALUE) -+ assert user3.present('telephonenumber', UNIQUE_VALUE) -+ -+ log.info('Verify the same value can be added to an entry within uniqueness scope') -+ user1.add('telephonenumber', UNIQUE_VALUE) -+ assert user1.present('telephonenumber', UNIQUE_VALUE) -+ -+ log.info('Verify that yet another same value cannot be added to another entry within uniqueness scope') -+ with pytest.raises(ldap.CONSTRAINT_VIOLATION): -+ user2.add('telephonenumber', UNIQUE_VALUE) -+ -+ log.info('Cleanup unique attribute values') -+ user1.remove_all('telephonenumber') -+ user3.remove_all('telephonenumber') -+ -+ log.info('Add another exclude subtree') -+ attruniq.add_exclude_subtree(EXCLUDED_BIS_CONTAINER_DN) -+ 
topology_st.standalone.restart() -+ -+ user1.add('telephonenumber', UNIQUE_VALUE) -+ log.info('Verify an already used attribute value cannot be added within the same subtree') -+ with pytest.raises(ldap.CONSTRAINT_VIOLATION): -+ user2.add('telephonenumber', UNIQUE_VALUE) -+ -+ log.info('Verify an already used attribute can be added to an entry in exclude scope') -+ user3.add('telephonenumber', UNIQUE_VALUE) -+ assert user3.present('telephonenumber', UNIQUE_VALUE) -+ user4.add('telephonenumber', UNIQUE_VALUE) -+ assert user4.present('telephonenumber', UNIQUE_VALUE) -+ -+ finally: -+ log.info('Clean up users, containers and attribute uniqueness plugin') -+ user1.delete() -+ user2.delete() -+ user3.delete() -+ user4.delete() -+ cont1.delete() -+ cont2.delete() -+ cont3.delete() -+ attruniq.disable() -+ attruniq.delete() -\ No newline at end of file -diff --git a/dirsrvtests/tests/tickets/ticket47927_test.py b/dirsrvtests/tests/tickets/ticket47927_test.py -deleted file mode 100644 -index 887fe1af4..000000000 ---- a/dirsrvtests/tests/tickets/ticket47927_test.py -+++ /dev/null -@@ -1,267 +0,0 @@ --# --- BEGIN COPYRIGHT BLOCK --- --# Copyright (C) 2016 Red Hat, Inc. --# All rights reserved. --# --# License: GPL (version 3 or any later version). --# See LICENSE for details. --# --- END COPYRIGHT BLOCK --- --# --import pytest --from lib389.tasks import * --from lib389.utils import * --from lib389.topologies import topology_st -- --from lib389._constants import SUFFIX, DEFAULT_SUFFIX, PLUGIN_ATTR_UNIQUENESS -- --# Skip on older versions --pytestmark = [pytest.mark.tier2, -- pytest.mark.skipif(ds_is_older('1.3.4'), reason="Not implemented")] -- --logging.getLogger(__name__).setLevel(logging.DEBUG) --log = logging.getLogger(__name__) -- --EXCLUDED_CONTAINER_CN = "excluded_container" --EXCLUDED_CONTAINER_DN = "cn=%s,%s" % (EXCLUDED_CONTAINER_CN, SUFFIX) -- --EXCLUDED_BIS_CONTAINER_CN = "excluded_bis_container" --EXCLUDED_BIS_CONTAINER_DN = "cn=%s,%s" % (EXCLUDED_BIS_CONTAINER_CN, SUFFIX) -- --ENFORCED_CONTAINER_CN = "enforced_container" --ENFORCED_CONTAINER_DN = "cn=%s,%s" % (ENFORCED_CONTAINER_CN, SUFFIX) -- --USER_1_CN = "test_1" --USER_1_DN = "cn=%s,%s" % (USER_1_CN, ENFORCED_CONTAINER_DN) --USER_2_CN = "test_2" --USER_2_DN = "cn=%s,%s" % (USER_2_CN, ENFORCED_CONTAINER_DN) --USER_3_CN = "test_3" --USER_3_DN = "cn=%s,%s" % (USER_3_CN, EXCLUDED_CONTAINER_DN) --USER_4_CN = "test_4" --USER_4_DN = "cn=%s,%s" % (USER_4_CN, EXCLUDED_BIS_CONTAINER_DN) -- -- --def test_ticket47927_init(topology_st): -- topology_st.standalone.plugins.enable(name=PLUGIN_ATTR_UNIQUENESS) -- try: -- topology_st.standalone.modify_s('cn=' + PLUGIN_ATTR_UNIQUENESS + ',cn=plugins,cn=config', -- [(ldap.MOD_REPLACE, 'uniqueness-attribute-name', b'telephonenumber'), -- (ldap.MOD_REPLACE, 'uniqueness-subtrees', ensure_bytes(DEFAULT_SUFFIX)), -- ]) -- except ldap.LDAPError as e: -- log.fatal('test_ticket47927: Failed to configure plugin for "telephonenumber": error ' + e.args[0]['desc']) -- assert False -- topology_st.standalone.restart(timeout=120) -- -- topology_st.standalone.add_s(Entry((EXCLUDED_CONTAINER_DN, {'objectclass': "top nscontainer".split(), -- 'cn': EXCLUDED_CONTAINER_CN}))) -- topology_st.standalone.add_s(Entry((EXCLUDED_BIS_CONTAINER_DN, {'objectclass': "top nscontainer".split(), -- 'cn': EXCLUDED_BIS_CONTAINER_CN}))) -- topology_st.standalone.add_s(Entry((ENFORCED_CONTAINER_DN, {'objectclass': "top nscontainer".split(), -- 'cn': ENFORCED_CONTAINER_CN}))) -- -- # adding an entry on a stage with a different 'cn' -- 
topology_st.standalone.add_s(Entry((USER_1_DN, { -- 'objectclass': "top person".split(), -- 'sn': USER_1_CN, -- 'cn': USER_1_CN}))) -- # adding an entry on a stage with a different 'cn' -- topology_st.standalone.add_s(Entry((USER_2_DN, { -- 'objectclass': "top person".split(), -- 'sn': USER_2_CN, -- 'cn': USER_2_CN}))) -- topology_st.standalone.add_s(Entry((USER_3_DN, { -- 'objectclass': "top person".split(), -- 'sn': USER_3_CN, -- 'cn': USER_3_CN}))) -- topology_st.standalone.add_s(Entry((USER_4_DN, { -- 'objectclass': "top person".split(), -- 'sn': USER_4_CN, -- 'cn': USER_4_CN}))) -- -- --def test_ticket47927_one(topology_st): -- ''' -- Check that uniqueness is enforce on all SUFFIX -- ''' -- UNIQUE_VALUE = b'1234' -- try: -- topology_st.standalone.modify_s(USER_1_DN, -- [(ldap.MOD_REPLACE, 'telephonenumber', UNIQUE_VALUE)]) -- except ldap.LDAPError as e: -- log.fatal('test_ticket47927_one: Failed to set the telephonenumber for %s: %s' % (USER_1_DN, e.args[0]['desc'])) -- assert False -- -- # we expect to fail because user1 is in the scope of the plugin -- try: -- topology_st.standalone.modify_s(USER_2_DN, -- [(ldap.MOD_REPLACE, 'telephonenumber', UNIQUE_VALUE)]) -- log.fatal('test_ticket47927_one: unexpected success to set the telephonenumber for %s' % (USER_2_DN)) -- assert False -- except ldap.LDAPError as e: -- log.fatal('test_ticket47927_one: Failed (expected) to set the telephonenumber for %s: %s' % ( -- USER_2_DN, e.args[0]['desc'])) -- pass -- -- # we expect to fail because user1 is in the scope of the plugin -- try: -- topology_st.standalone.modify_s(USER_3_DN, -- [(ldap.MOD_REPLACE, 'telephonenumber', UNIQUE_VALUE)]) -- log.fatal('test_ticket47927_one: unexpected success to set the telephonenumber for %s' % (USER_3_DN)) -- assert False -- except ldap.LDAPError as e: -- log.fatal('test_ticket47927_one: Failed (expected) to set the telephonenumber for %s: %s' % ( -- USER_3_DN, e.args[0]['desc'])) -- pass -- -- --def test_ticket47927_two(topology_st): -- ''' -- Exclude the EXCLUDED_CONTAINER_DN from the uniqueness plugin -- ''' -- try: -- topology_st.standalone.modify_s('cn=' + PLUGIN_ATTR_UNIQUENESS + ',cn=plugins,cn=config', -- [(ldap.MOD_REPLACE, 'uniqueness-exclude-subtrees', ensure_bytes(EXCLUDED_CONTAINER_DN))]) -- except ldap.LDAPError as e: -- log.fatal('test_ticket47927_two: Failed to configure plugin for to exclude %s: error %s' % ( -- EXCLUDED_CONTAINER_DN, e.args[0]['desc'])) -- assert False -- topology_st.standalone.restart(timeout=120) -- -- --def test_ticket47927_three(topology_st): -- ''' -- Check that uniqueness is enforced on full SUFFIX except EXCLUDED_CONTAINER_DN -- First case: it exists an entry (with the same attribute value) in the scope -- of the plugin and we set the value in an entry that is in an excluded scope -- ''' -- UNIQUE_VALUE = b'9876' -- try: -- topology_st.standalone.modify_s(USER_1_DN, -- [(ldap.MOD_REPLACE, 'telephonenumber', UNIQUE_VALUE)]) -- except ldap.LDAPError as e: -- log.fatal('test_ticket47927_three: Failed to set the telephonenumber ' + e.args[0]['desc']) -- assert False -- -- # we should not be allowed to set this value (because user1 is in the scope) -- try: -- topology_st.standalone.modify_s(USER_2_DN, -- [(ldap.MOD_REPLACE, 'telephonenumber', UNIQUE_VALUE)]) -- log.fatal('test_ticket47927_three: unexpected success to set the telephonenumber for %s' % (USER_2_DN)) -- assert False -- except ldap.LDAPError as e: -- log.fatal('test_ticket47927_three: Failed (expected) to set the telephonenumber for %s: %s' % ( -- USER_2_DN, 
e.args[0]['desc'])) -- -- # USER_3_DN is in EXCLUDED_CONTAINER_DN so update should be successful -- try: -- topology_st.standalone.modify_s(USER_3_DN, -- [(ldap.MOD_REPLACE, 'telephonenumber', UNIQUE_VALUE)]) -- log.fatal('test_ticket47927_three: success to set the telephonenumber for %s' % (USER_3_DN)) -- except ldap.LDAPError as e: -- log.fatal('test_ticket47927_three: Failed (unexpected) to set the telephonenumber for %s: %s' % ( -- USER_3_DN, e.args[0]['desc'])) -- assert False -- -- --def test_ticket47927_four(topology_st): -- ''' -- Check that uniqueness is enforced on full SUFFIX except EXCLUDED_CONTAINER_DN -- Second case: it exists an entry (with the same attribute value) in an excluded scope -- of the plugin and we set the value in an entry is in the scope -- ''' -- UNIQUE_VALUE = b'1111' -- # USER_3_DN is in EXCLUDED_CONTAINER_DN so update should be successful -- try: -- topology_st.standalone.modify_s(USER_3_DN, -- [(ldap.MOD_REPLACE, 'telephonenumber', UNIQUE_VALUE)]) -- log.fatal('test_ticket47927_four: success to set the telephonenumber for %s' % USER_3_DN) -- except ldap.LDAPError as e: -- log.fatal('test_ticket47927_four: Failed (unexpected) to set the telephonenumber for %s: %s' % ( -- USER_3_DN, e.args[0]['desc'])) -- assert False -- -- # we should be allowed to set this value (because user3 is excluded from scope) -- try: -- topology_st.standalone.modify_s(USER_1_DN, -- [(ldap.MOD_REPLACE, 'telephonenumber', UNIQUE_VALUE)]) -- except ldap.LDAPError as e: -- log.fatal( -- 'test_ticket47927_four: Failed to set the telephonenumber for %s: %s' % (USER_1_DN, e.args[0]['desc'])) -- assert False -- -- # we should not be allowed to set this value (because user1 is in the scope) -- try: -- topology_st.standalone.modify_s(USER_2_DN, -- [(ldap.MOD_REPLACE, 'telephonenumber', UNIQUE_VALUE)]) -- log.fatal('test_ticket47927_four: unexpected success to set the telephonenumber %s' % USER_2_DN) -- assert False -- except ldap.LDAPError as e: -- log.fatal('test_ticket47927_four: Failed (expected) to set the telephonenumber for %s: %s' % ( -- USER_2_DN, e.args[0]['desc'])) -- pass -- -- --def test_ticket47927_five(topology_st): -- ''' -- Exclude the EXCLUDED_BIS_CONTAINER_DN from the uniqueness plugin -- ''' -- try: -- topology_st.standalone.modify_s('cn=' + PLUGIN_ATTR_UNIQUENESS + ',cn=plugins,cn=config', -- [(ldap.MOD_ADD, 'uniqueness-exclude-subtrees', ensure_bytes(EXCLUDED_BIS_CONTAINER_DN))]) -- except ldap.LDAPError as e: -- log.fatal('test_ticket47927_five: Failed to configure plugin for to exclude %s: error %s' % ( -- EXCLUDED_BIS_CONTAINER_DN, e.args[0]['desc'])) -- assert False -- topology_st.standalone.restart(timeout=120) -- topology_st.standalone.getEntry('cn=' + PLUGIN_ATTR_UNIQUENESS + ',cn=plugins,cn=config', ldap.SCOPE_BASE) -- -- --def test_ticket47927_six(topology_st): -- ''' -- Check that uniqueness is enforced on full SUFFIX except EXCLUDED_CONTAINER_DN -- and EXCLUDED_BIS_CONTAINER_DN -- First case: it exists an entry (with the same attribute value) in the scope -- of the plugin and we set the value in an entry that is in an excluded scope -- ''' -- UNIQUE_VALUE = b'222' -- try: -- topology_st.standalone.modify_s(USER_1_DN, -- [(ldap.MOD_REPLACE, 'telephonenumber', UNIQUE_VALUE)]) -- except ldap.LDAPError as e: -- log.fatal('test_ticket47927_six: Failed to set the telephonenumber ' + e.args[0]['desc']) -- assert False -- -- # we should not be allowed to set this value (because user1 is in the scope) -- try: -- topology_st.standalone.modify_s(USER_2_DN, -- 
[(ldap.MOD_REPLACE, 'telephonenumber', UNIQUE_VALUE)])
--        log.fatal('test_ticket47927_six: unexpected success to set the telephonenumber for %s' % (USER_2_DN))
--        assert False
--    except ldap.LDAPError as e:
--        log.fatal('test_ticket47927_six: Failed (expected) to set the telephonenumber for %s: %s' % (
--            USER_2_DN, e.args[0]['desc']))
--
--    # USER_3_DN is in EXCLUDED_CONTAINER_DN so update should be successful
--    try:
--        topology_st.standalone.modify_s(USER_3_DN,
--                                        [(ldap.MOD_REPLACE, 'telephonenumber', UNIQUE_VALUE)])
--        log.fatal('test_ticket47927_six: success to set the telephonenumber for %s' % (USER_3_DN))
--    except ldap.LDAPError as e:
--        log.fatal('test_ticket47927_six: Failed (unexpected) to set the telephonenumber for %s: %s' % (
--            USER_3_DN, e.args[0]['desc']))
--        assert False
--    # USER_4_DN is in EXCLUDED_CONTAINER_DN so update should be successful
--    try:
--        topology_st.standalone.modify_s(USER_4_DN,
--                                        [(ldap.MOD_REPLACE, 'telephonenumber', UNIQUE_VALUE)])
--        log.fatal('test_ticket47927_six: success to set the telephonenumber for %s' % (USER_4_DN))
--    except ldap.LDAPError as e:
--        log.fatal('test_ticket47927_six: Failed (unexpected) to set the telephonenumber for %s: %s' % (
--            USER_4_DN, e.args[0]['desc']))
--        assert False
--
--
--if __name__ == '__main__':
--    # Run isolated
--    # -s for DEBUG mode
--    CURRENT_FILE = os.path.realpath(__file__)
--    pytest.main("-s %s" % CURRENT_FILE)
-diff --git a/src/lib389/lib389/plugins.py b/src/lib389/lib389/plugins.py
-index 31bbfa502..977091726 100644
---- a/src/lib389/lib389/plugins.py
-+++ b/src/lib389/lib389/plugins.py
-@@ -175,6 +175,16 @@ class AttributeUniquenessPlugin(Plugin):
- 
-         self.set('uniqueness-across-all-subtrees', 'off')
- 
-+    def add_exclude_subtree(self, basedn):
-+        """Add a uniqueness-exclude-subtrees attribute"""
-+
-+        self.add('uniqueness-exclude-subtrees', basedn)
-+
-+    def remove_exclude_subtree(self, basedn):
-+        """Remove a uniqueness-exclude-subtrees attribute"""
-+
-+        self.remove('uniqueness-exclude-subtrees', basedn)
-+
- 
- class AttributeUniquenessPlugins(DSLdapObjects):
-     """A DSLdapObjects entity which represents Attribute Uniqueness plugin instances
---
-2.49.0
-
diff --git a/0008-Issue-6857-uiduniq-allow-specifying-match-rules-in-t.patch b/0008-Issue-6857-uiduniq-allow-specifying-match-rules-in-t.patch
deleted file mode 100644
index 0687001..0000000
--- a/0008-Issue-6857-uiduniq-allow-specifying-match-rules-in-t.patch
+++ /dev/null
@@ -1,45 +0,0 @@
-From 4be22be50dfdf0a5ddd27dc8f9d9618b941c8be8 Mon Sep 17 00:00:00 2001
-From: Alexander Bokovoy
-Date: Wed, 9 Jul 2025 12:08:09 +0300
-Subject: [PATCH] Issue 6857 - uiduniq: allow specifying match rules in the
- filter
-
-Allow the uniqueness plugin to work with attributes where uniqueness should
-be enforced using a different matching rule than the one defined for the
-attribute itself.
-
-Since the uniqueness plugin configuration can contain multiple attributes,
-the matching rule is added directly to the attribute, as it is used in the
-LDAP filter (e.g. 'attribute:caseIgnoreMatch:' forces 'attribute' to be
-searched with a case-insensitive matching rule instead of its original
-matching rule).
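The filter shape this enables can be exercised directly with python-ldap; a
sketch in which the server URL, credentials, and attribute are purely
illustrative:

    import ldap

    conn = ldap.initialize("ldap://localhost:389")
    conn.simple_bind_s("cn=Directory Manager", "password")

    # RFC 4515 extensible match: force a case-insensitive comparison
    # regardless of the equality rule defined for the attribute itself,
    # using the same 'attribute:matchingrule:' shape the plugin accepts.
    results = conn.search_s("dc=example,dc=com",
                            ldap.SCOPE_SUBTREE,
                            "(uid:caseIgnoreMatch:=JDoe)",
                            ['uid'])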
- -Fixes: https://github.com/389ds/389-ds-base/issues/6857 - -Signed-off-by: Alexander Bokovoy ---- - ldap/servers/plugins/uiduniq/uid.c | 7 +++++++ - 1 file changed, 7 insertions(+) - -diff --git a/ldap/servers/plugins/uiduniq/uid.c b/ldap/servers/plugins/uiduniq/uid.c -index 053af4f9d..887e79d78 100644 ---- a/ldap/servers/plugins/uiduniq/uid.c -+++ b/ldap/servers/plugins/uiduniq/uid.c -@@ -1030,7 +1030,14 @@ preop_add(Slapi_PBlock *pb) - } - - for (i = 0; attrNames && attrNames[i]; i++) { -+ char *attr_match = strchr(attrNames[i], ':'); -+ if (attr_match != NULL) { -+ attr_match[0] = '\0'; -+ } - err = slapi_entry_attr_find(e, attrNames[i], &attr); -+ if (attr_match != NULL) { -+ attr_match[0] = ':'; -+ } - if (!err) { - /* - * Passed all the requirements - this is an operation we --- -2.49.0 - diff --git a/0009-Issue-6756-CLI-UI-Properly-handle-disabled-NDN-cache.patch b/0009-Issue-6756-CLI-UI-Properly-handle-disabled-NDN-cache.patch deleted file mode 100644 index f81127f..0000000 --- a/0009-Issue-6756-CLI-UI-Properly-handle-disabled-NDN-cache.patch +++ /dev/null @@ -1,1201 +0,0 @@ -From efa21c3d7bdc7369cae184b9f66e667ee981a4d5 Mon Sep 17 00:00:00 2001 -From: Simon Pichugin -Date: Thu, 10 Jul 2025 11:53:12 -0700 -Subject: [PATCH] Issue 6756 - CLI, UI - Properly handle disabled NDN cache - (#6757) - -Description: Fix the db_monitor function in monitor.py to check if -nsslapd-ndn-cache-enabled is off and conditionally include NDN cache -statistics only when enabled. - -Update dbMonitor.jsx components to detect when NDN cache is disabled and -conditionally render NDN cache tabs, charts, and related content with proper -fallback display when disabled. - -Add test_ndn_cache_disabled to verify both JSON and non-JSON output formats -correctly handle when NDN cache is turned off and on. - -Fixes: https://github.com/389ds/389-ds-base/issues/6756 - -Reviewed by: @mreynolds389 (Thanks!) ---- - dirsrvtests/tests/suites/clu/dbmon_test.py | 90 +++ - src/cockpit/389-console/src/database.jsx | 4 +- - .../src/lib/database/databaseConfig.jsx | 48 +- - .../389-console/src/lib/monitor/dbMonitor.jsx | 735 ++++++++++-------- - src/lib389/lib389/cli_conf/monitor.py | 77 +- - 5 files changed, 580 insertions(+), 374 deletions(-) - -diff --git a/dirsrvtests/tests/suites/clu/dbmon_test.py b/dirsrvtests/tests/suites/clu/dbmon_test.py -index 5eeaca162..bf57690c4 100644 ---- a/dirsrvtests/tests/suites/clu/dbmon_test.py -+++ b/dirsrvtests/tests/suites/clu/dbmon_test.py -@@ -11,6 +11,7 @@ import subprocess - import pytest - import json - import glob -+import re - - from lib389.tasks import * - from lib389.utils import * -@@ -272,6 +273,95 @@ def test_dbmon_mp_pagesize(topology_st): - assert real_free_percentage == dbmon_free_percentage - - -+def test_ndn_cache_disabled(topology_st): -+ """Test dbmon output when ndn-cache-enabled is turned off -+ -+ :id: 760e217c-70e8-4767-b504-dda7ba2e1f64 -+ :setup: Standalone instance -+ :steps: -+ 1. Run dbmon with nsslapd-ndn-cache-enabled=on (default) -+ 2. Verify NDN cache stats are present in the output -+ 3. Set nsslapd-ndn-cache-enabled=off and restart -+ 4. Run dbmon again and verify NDN cache stats are not present -+ 5. Set nsslapd-ndn-cache-enabled=on and restart -+ 6. Run dbmon again and verify NDN cache stats are back -+ :expectedresults: -+ 1. Success -+ 2. Should display NDN cache data -+ 3. Success -+ 4. Should not display NDN cache data -+ 5. Success -+ 6. 
Should display NDN cache data -+ """ -+ inst = topology_st.standalone -+ args = FakeArgs() -+ args.backends = None -+ args.indexes = False -+ args.json = True -+ lc = LogCapture() -+ -+ log.info("Testing with NDN cache enabled (default)") -+ db_monitor(inst, DEFAULT_SUFFIX, lc.log, args) -+ db_mon_as_str = "".join((str(rec) for rec in lc.outputs)) -+ db_mon_as_str = re.sub("^[^{]*{", "{", db_mon_as_str)[:-2] -+ db_mon = json.loads(db_mon_as_str) -+ -+ assert 'ndncache' in db_mon -+ assert 'hit_ratio' in db_mon['ndncache'] -+ lc.flush() -+ -+ log.info("Setting nsslapd-ndn-cache-enabled to OFF") -+ inst.config.set('nsslapd-ndn-cache-enabled', 'off') -+ inst.restart() -+ -+ log.info("Testing with NDN cache disabled") -+ db_monitor(inst, DEFAULT_SUFFIX, lc.log, args) -+ db_mon_as_str = "".join((str(rec) for rec in lc.outputs)) -+ db_mon_as_str = re.sub("^[^{]*{", "{", db_mon_as_str)[:-2] -+ db_mon = json.loads(db_mon_as_str) -+ -+ assert 'ndncache' not in db_mon -+ lc.flush() -+ -+ log.info("Setting nsslapd-ndn-cache-enabled to ON") -+ inst.config.set('nsslapd-ndn-cache-enabled', 'on') -+ inst.restart() -+ -+ log.info("Testing with NDN cache re-enabled") -+ db_monitor(inst, DEFAULT_SUFFIX, lc.log, args) -+ db_mon_as_str = "".join((str(rec) for rec in lc.outputs)) -+ db_mon_as_str = re.sub("^[^{]*{", "{", db_mon_as_str)[:-2] -+ db_mon = json.loads(db_mon_as_str) -+ -+ assert 'ndncache' in db_mon -+ assert 'hit_ratio' in db_mon['ndncache'] -+ lc.flush() -+ -+ args.json = False -+ -+ log.info("Testing with NDN cache enabled - non-JSON output") -+ db_monitor(inst, DEFAULT_SUFFIX, lc.log, args) -+ output = "".join((str(rec) for rec in lc.outputs)) -+ -+ assert "Normalized DN Cache:" in output -+ assert "Cache Hit Ratio:" in output -+ lc.flush() -+ -+ log.info("Setting nsslapd-ndn-cache-enabled to OFF") -+ inst.config.set('nsslapd-ndn-cache-enabled', 'off') -+ inst.restart() -+ -+ log.info("Testing with NDN cache disabled - non-JSON output") -+ db_monitor(inst, DEFAULT_SUFFIX, lc.log, args) -+ output = "".join((str(rec) for rec in lc.outputs)) -+ -+ assert "Normalized DN Cache:" not in output -+ lc.flush() -+ -+ inst.config.set('nsslapd-ndn-cache-enabled', 'on') -+ inst.restart() -+ -+ - if __name__ == '__main__': - # Run isolated - # -s for DEBUG mode -diff --git a/src/cockpit/389-console/src/database.jsx b/src/cockpit/389-console/src/database.jsx -index 276125dfc..86b642b92 100644 ---- a/src/cockpit/389-console/src/database.jsx -+++ b/src/cockpit/389-console/src/database.jsx -@@ -198,7 +198,7 @@ export class Database extends React.Component { - }); - const cmd = [ - "dsconf", "-j", "ldapi://%2fvar%2frun%2fslapd-" + this.props.serverId + ".socket", -- "config", "get", "nsslapd-ndn-cache-max-size" -+ "config", "get", "nsslapd-ndn-cache-max-size", "nsslapd-ndn-cache-enabled" - ]; - log_cmd("loadNDN", "Load NDN cache size", cmd); - cockpit -@@ -206,10 +206,12 @@ export class Database extends React.Component { - .done(content => { - const config = JSON.parse(content); - const attrs = config.attrs; -+ const ndn_cache_enabled = attrs['nsslapd-ndn-cache-enabled'][0] === "on"; - this.setState(prevState => ({ - globalDBConfig: { - ...prevState.globalDBConfig, - ndncachemaxsize: attrs['nsslapd-ndn-cache-max-size'][0], -+ ndn_cache_enabled: ndn_cache_enabled, - }, - configUpdated: 0, - loaded: true, -diff --git a/src/cockpit/389-console/src/lib/database/databaseConfig.jsx b/src/cockpit/389-console/src/lib/database/databaseConfig.jsx -index 4c7fce706..adb8227d7 100644 ---- 
a/src/cockpit/389-console/src/lib/database/databaseConfig.jsx -+++ b/src/cockpit/389-console/src/lib/database/databaseConfig.jsx -@@ -2,12 +2,16 @@ import cockpit from "cockpit"; - import React from "react"; - import { log_cmd } from "../tools.jsx"; - import { -+ Alert, - Button, - Checkbox, -+ Form, - Grid, - GridItem, -+ Hr, - NumberInput, - Spinner, -+ Switch, - Tab, - Tabs, - TabTitleText, -@@ -852,12 +856,29 @@ export class GlobalDatabaseConfig extends React.Component { - - {_("NDN Cache")}}> -
-+ -+ {this.props.data.ndn_cache_enabled === false && ( -+ -+ -+ {_("The Normalized DN Cache is currently disabled. To enable it, go to Server Settings → Tuning & Limits and enable 'Normalized DN Cache', then restart the server for the changes to take effect.")} -+ -+ -+ )} -+ - - -- {_("Normalized DN Cache Max Size")} -+ {_("Normalized DN Cache Max Size") } - - - - - -@@ -1470,7 +1491,7 @@ export class GlobalDatabaseConfigMDB extends React.Component { - {_("Database Size")}}> -
- - -@@ -1641,6 +1662,23 @@ export class GlobalDatabaseConfigMDB extends React.Component { - - {_("NDN Cache")}}> -
-+ -+ {this.props.data.ndn_cache_enabled === false && ( -+ -+ -+ {_("The Normalized DN Cache is currently disabled. To enable it, go to Server Settings → Tuning & Limits and enable 'Normalized DN Cache', then restart the server for the changes to take effect.")} -+ -+ -+ )} -+ - 0; -+ let ndn_chart_data = this.state.ndnCacheList; -+ let ndn_util_chart_data = this.state.ndnCacheUtilList; -+ -+ // Only build NDN cache chart data if NDN cache is enabled -+ if (ndn_cache_enabled) { -+ const ndnratio = config.attrs.normalizeddncachehitratio[0]; -+ ndn_chart_data = this.state.ndnCacheList; -+ ndn_chart_data.shift(); -+ ndn_chart_data.push({ name: _("Cache Hit Ratio"), x: count.toString(), y: parseInt(ndnratio) }); -+ -+ // Build up the NDN Cache Util chart data -+ ndn_util_chart_data = this.state.ndnCacheUtilList; -+ const currNDNSize = parseInt(config.attrs.currentnormalizeddncachesize[0]); -+ const maxNDNSize = parseInt(config.attrs.maxnormalizeddncachesize[0]); -+ const ndn_utilization = (currNDNSize / maxNDNSize) * 100; -+ ndn_util_chart_data.shift(); -+ ndn_util_chart_data.push({ name: _("Cache Utilization"), x: ndnCount.toString(), y: parseInt(ndn_utilization) }); -+ } - - this.setState({ - data: config.attrs, -@@ -157,7 +167,8 @@ export class DatabaseMonitor extends React.Component { - ndnCacheList: ndn_chart_data, - ndnCacheUtilList: ndn_util_chart_data, - count, -- ndnCount -+ ndnCount, -+ ndn_cache_enabled - }); - }) - .fail(() => { -@@ -197,13 +208,20 @@ export class DatabaseMonitor extends React.Component { - - if (!this.state.loading) { - dbcachehit = parseInt(this.state.data.dbcachehitratio[0]); -- ndncachehit = parseInt(this.state.data.normalizeddncachehitratio[0]); -- ndncachemax = parseInt(this.state.data.maxnormalizeddncachesize[0]); -- ndncachecurr = parseInt(this.state.data.currentnormalizeddncachesize[0]); -- utilratio = Math.round((ndncachecurr / ndncachemax) * 100); -- if (utilratio === 0) { -- // Just round up to 1 -- utilratio = 1; -+ -+ // Check if NDN cache is enabled -+ const ndn_cache_enabled = this.state.data.normalizeddncachehitratio && -+ this.state.data.normalizeddncachehitratio.length > 0; -+ -+ if (ndn_cache_enabled) { -+ ndncachehit = parseInt(this.state.data.normalizeddncachehitratio[0]); -+ ndncachemax = parseInt(this.state.data.maxnormalizeddncachesize[0]); -+ ndncachecurr = parseInt(this.state.data.currentnormalizeddncachesize[0]); -+ utilratio = Math.round((ndncachecurr / ndncachemax) * 100); -+ if (utilratio === 0) { -+ // Just round up to 1 -+ utilratio = 1; -+ } - } - - // Database cache -@@ -214,119 +232,131 @@ export class DatabaseMonitor extends React.Component { - } else { - chartColor = ChartThemeColor.purple; - } -- // NDN cache ratio -- if (ndncachehit > 89) { -- ndnChartColor = ChartThemeColor.green; -- } else if (ndncachehit > 74) { -- ndnChartColor = ChartThemeColor.orange; -- } else { -- ndnChartColor = ChartThemeColor.purple; -- } -- // NDN cache utilization -- if (utilratio > 95) { -- ndnUtilColor = ChartThemeColor.purple; -- } else if (utilratio > 90) { -- ndnUtilColor = ChartThemeColor.orange; -- } else { -- ndnUtilColor = ChartThemeColor.green; -+ -+ // NDN cache colors only if enabled -+ if (ndn_cache_enabled) { -+ // NDN cache ratio -+ if (ndncachehit > 89) { -+ ndnChartColor = ChartThemeColor.green; -+ } else if (ndncachehit > 74) { -+ ndnChartColor = ChartThemeColor.orange; -+ } else { -+ ndnChartColor = ChartThemeColor.purple; -+ } -+ // NDN cache utilization -+ if (utilratio > 95) { -+ ndnUtilColor = ChartThemeColor.purple; -+ } 
else if (utilratio > 90) { -+ ndnUtilColor = ChartThemeColor.orange; -+ } else { -+ ndnUtilColor = ChartThemeColor.green; -+ } - } - -- content = ( -- -- {_("Database Cache")}}> --
-- -- --
--
-- -- -- {_("Cache Hit Ratio")} -- -- -- -- -- {dbcachehit}% -- -- --
--
-- `${datum.name}: ${datum.y}`} constrainToVisibleArea />} -- height={200} -- maxDomain={{ y: 100 }} -- minDomain={{ y: 0 }} -- padding={{ -- bottom: 30, -- left: 40, -- top: 10, -- right: 10, -- }} -- width={500} -- themeColor={chartColor} -- > -- -- -- -- -- -- --
-+ // Create tabs based on what caches are available -+ const tabs = []; -+ -+ // Database Cache tab is always available -+ tabs.push( -+ {_("Database Cache")}}> -+
-+ -+ -+
-+
-+ -+ -+ {_("Cache Hit Ratio")} -+ -+ -+ -+ -+ {dbcachehit}% -+ -+ -
-- -- --
-+
-+ `${datum.name}: ${datum.y}`} constrainToVisibleArea />} -+ height={200} -+ maxDomain={{ y: 100 }} -+ minDomain={{ y: 0 }} -+ padding={{ -+ bottom: 30, -+ left: 40, -+ top: 10, -+ right: 10, -+ }} -+ width={500} -+ themeColor={chartColor} -+ > -+ -+ -+ -+ -+ -+ -+
-+
-+ -+ -+
-+ -+ -+ -+ {_("Database Cache Hit Ratio:")} -+ -+ -+ {this.state.data.dbcachehitratio}% -+ -+ -+ {_("Database Cache Tries:")} -+ -+ -+ {numToCommas(this.state.data.dbcachetries)} -+ -+ -+ {_("Database Cache Hits:")} -+ -+ -+ {numToCommas(this.state.data.dbcachehits)} -+ -+ -+ {_("Cache Pages Read:")} -+ -+ -+ {numToCommas(this.state.data.dbcachepagein)} -+ -+ -+ {_("Cache Pages Written:")} -+ -+ -+ {numToCommas(this.state.data.dbcachepageout)} -+ -+ -+ {_("Read-Only Page Evictions:")} -+ -+ -+ {numToCommas(this.state.data.dbcacheroevict)} -+ -+ -+ {_("Read-Write Page Evictions:")} -+ -+ -+ {numToCommas(this.state.data.dbcacherwevict)} -+ -+ -+ -+ ); - -- -- -- {_("Database Cache Hit Ratio:")} -- -- -- {this.state.data.dbcachehitratio}% -- -- -- {_("Database Cache Tries:")} -- -- -- {numToCommas(this.state.data.dbcachetries)} -- -- -- {_("Database Cache Hits:")} -- -- -- {numToCommas(this.state.data.dbcachehits)} -- -- -- {_("Cache Pages Read:")} -- -- -- {numToCommas(this.state.data.dbcachepagein)} -- -- -- {_("Cache Pages Written:")} -- -- -- {numToCommas(this.state.data.dbcachepageout)} -- -- -- {_("Read-Only Page Evictions:")} -- -- -- {numToCommas(this.state.data.dbcacheroevict)} -- -- -- {_("Read-Write Page Evictions:")} -- -- -- {numToCommas(this.state.data.dbcacherwevict)} -- -- -- -- {_("Normalized DN Cache")}}> -+ // Only add NDN Cache tab if NDN cache is enabled -+ if (ndn_cache_enabled) { -+ tabs.push( -+ {_("Normalized DN Cache")}}> -
- - -@@ -487,6 +517,12 @@ export class DatabaseMonitor extends React.Component { - -
-
-+ ); -+ } -+ -+ content = ( -+ -+ {tabs} - - ); - } -@@ -533,7 +569,8 @@ export class DatabaseMonitorMDB extends React.Component { - ndnCount: 5, - dbCacheList: [], - ndnCacheList: [], -- ndnCacheUtilList: [] -+ ndnCacheUtilList: [], -+ ndn_cache_enabled: false - }; - - // Toggle currently active tab -@@ -585,6 +622,7 @@ export class DatabaseMonitorMDB extends React.Component { - { name: "", x: "4", y: 0 }, - { name: "", x: "5", y: 0 }, - ], -+ ndn_cache_enabled: false - }); - } - -@@ -605,19 +643,28 @@ export class DatabaseMonitorMDB extends React.Component { - count = 1; - } - -- // Build up the NDN Cache chart data -- const ndnratio = config.attrs.normalizeddncachehitratio[0]; -- const ndn_chart_data = this.state.ndnCacheList; -- ndn_chart_data.shift(); -- ndn_chart_data.push({ name: _("Cache Hit Ratio"), x: count.toString(), y: parseInt(ndnratio) }); -- -- // Build up the DB Cache Util chart data -- const ndn_util_chart_data = this.state.ndnCacheUtilList; -- const currNDNSize = parseInt(config.attrs.currentnormalizeddncachesize[0]); -- const maxNDNSize = parseInt(config.attrs.maxnormalizeddncachesize[0]); -- const ndn_utilization = (currNDNSize / maxNDNSize) * 100; -- ndn_util_chart_data.shift(); -- ndn_util_chart_data.push({ name: _("Cache Utilization"), x: ndnCount.toString(), y: parseInt(ndn_utilization) }); -+ // Check if NDN cache is enabled -+ const ndn_cache_enabled = config.attrs.normalizeddncachehitratio && -+ config.attrs.normalizeddncachehitratio.length > 0; -+ let ndn_chart_data = this.state.ndnCacheList; -+ let ndn_util_chart_data = this.state.ndnCacheUtilList; -+ -+ // Only build NDN cache chart data if NDN cache is enabled -+ if (ndn_cache_enabled) { -+ // Build up the NDN Cache chart data -+ const ndnratio = config.attrs.normalizeddncachehitratio[0]; -+ ndn_chart_data = this.state.ndnCacheList; -+ ndn_chart_data.shift(); -+ ndn_chart_data.push({ name: _("Cache Hit Ratio"), x: count.toString(), y: parseInt(ndnratio) }); -+ -+ // Build up the DB Cache Util chart data -+ ndn_util_chart_data = this.state.ndnCacheUtilList; -+ const currNDNSize = parseInt(config.attrs.currentnormalizeddncachesize[0]); -+ const maxNDNSize = parseInt(config.attrs.maxnormalizeddncachesize[0]); -+ const ndn_utilization = (currNDNSize / maxNDNSize) * 100; -+ ndn_util_chart_data.shift(); -+ ndn_util_chart_data.push({ name: _("Cache Utilization"), x: ndnCount.toString(), y: parseInt(ndn_utilization) }); -+ } - - this.setState({ - data: config.attrs, -@@ -625,7 +672,8 @@ export class DatabaseMonitorMDB extends React.Component { - ndnCacheList: ndn_chart_data, - ndnCacheUtilList: ndn_util_chart_data, - count, -- ndnCount -+ ndnCount, -+ ndn_cache_enabled - }); - }) - .fail(() => { -@@ -662,197 +710,214 @@ export class DatabaseMonitorMDB extends React.Component { - ); - - if (!this.state.loading) { -- ndncachehit = parseInt(this.state.data.normalizeddncachehitratio[0]); -- ndncachemax = parseInt(this.state.data.maxnormalizeddncachesize[0]); -- ndncachecurr = parseInt(this.state.data.currentnormalizeddncachesize[0]); -- utilratio = Math.round((ndncachecurr / ndncachemax) * 100); -- if (utilratio === 0) { -- // Just round up to 1 -- utilratio = 1; -- } -- -- // NDN cache ratio -- if (ndncachehit > 89) { -- ndnChartColor = ChartThemeColor.green; -- } else if (ndncachehit > 74) { -- ndnChartColor = ChartThemeColor.orange; -- } else { -- ndnChartColor = ChartThemeColor.purple; -- } -- // NDN cache utilization -- if (utilratio > 95) { -- ndnUtilColor = ChartThemeColor.purple; -- } else if (utilratio > 90) { 
-- ndnUtilColor = ChartThemeColor.orange; -- } else { -- ndnUtilColor = ChartThemeColor.green; -- } -- -- content = ( -- -- {_("Normalized DN Cache")}}> --
-- -- -- -- --
--
-- -- -- {_("Cache Hit Ratio")} -- -- -- -- -- {ndncachehit}% -- -- --
--
-- `${datum.name}: ${datum.y}`} constrainToVisibleArea />} -- height={200} -- maxDomain={{ y: 100 }} -- minDomain={{ y: 0 }} -- padding={{ -- bottom: 40, -- left: 60, -- top: 10, -- right: 15, -- }} -- width={350} -- themeColor={ndnChartColor} -- > -- -- -- -- -- -- --
--
--
--
--
-- -- -- --
--
-- -- -- {_("Cache Utilization")} -- -- -- -- -- {utilratio}% -- -- -- -- -- {_("Cached DN's")} -- -- -- {numToCommas(this.state.data.currentnormalizeddncachecount[0])} -+ // Check if NDN cache is enabled -+ const ndn_cache_enabled = this.state.data.normalizeddncachehitratio && -+ this.state.data.normalizeddncachehitratio.length > 0; -+ -+ if (ndn_cache_enabled) { -+ ndncachehit = parseInt(this.state.data.normalizeddncachehitratio[0]); -+ ndncachemax = parseInt(this.state.data.maxnormalizeddncachesize[0]); -+ ndncachecurr = parseInt(this.state.data.currentnormalizeddncachesize[0]); -+ utilratio = Math.round((ndncachecurr / ndncachemax) * 100); -+ if (utilratio === 0) { -+ // Just round up to 1 -+ utilratio = 1; -+ } -+ -+ // NDN cache ratio -+ if (ndncachehit > 89) { -+ ndnChartColor = ChartThemeColor.green; -+ } else if (ndncachehit > 74) { -+ ndnChartColor = ChartThemeColor.orange; -+ } else { -+ ndnChartColor = ChartThemeColor.purple; -+ } -+ // NDN cache utilization -+ if (utilratio > 95) { -+ ndnUtilColor = ChartThemeColor.purple; -+ } else if (utilratio > 90) { -+ ndnUtilColor = ChartThemeColor.orange; -+ } else { -+ ndnUtilColor = ChartThemeColor.green; -+ } -+ -+ content = ( -+ -+ {_("Normalized DN Cache")}}> -+
-+ -+ -+ -+ -+
-+
-+ -+ -+ {_("Cache Hit Ratio")} -+ -+ -+ -+ -+ {ndncachehit}% -+ -+ -+
-+
-+ `${datum.name}: ${datum.y}`} constrainToVisibleArea />} -+ height={200} -+ maxDomain={{ y: 100 }} -+ minDomain={{ y: 0 }} -+ padding={{ -+ bottom: 40, -+ left: 60, -+ top: 10, -+ right: 15, -+ }} -+ width={350} -+ themeColor={ndnChartColor} -+ > -+ -+ -+ -+ -+ -+ -+
-
--
-- `${datum.name}: ${datum.y}`} constrainToVisibleArea />} -- height={200} -- maxDomain={{ y: 100 }} -- minDomain={{ y: 0 }} -- padding={{ -- bottom: 40, -- left: 60, -- top: 10, -- right: 15, -- }} -- width={350} -- themeColor={ndnUtilColor} -- > -- -- -- -- -- -- -+ -+ -+ -+ -+ -+ -+
-+
-+ -+ -+ {_("Cache Utilization")} -+ -+ -+ -+ -+ {utilratio}% -+ -+ -+ -+ -+ {_("Cached DN's")} -+ -+ -+ {numToCommas(this.state.data.currentnormalizeddncachecount[0])} -+
-+
-+ `${datum.name}: ${datum.y}`} constrainToVisibleArea />} -+ height={200} -+ maxDomain={{ y: 100 }} -+ minDomain={{ y: 0 }} -+ padding={{ -+ bottom: 40, -+ left: 60, -+ top: 10, -+ right: 15, -+ }} -+ width={350} -+ themeColor={ndnUtilColor} -+ > -+ -+ -+ -+ -+ -+ -+
-
--
--
--
--
--
-- -- -- -- {_("NDN Cache Hit Ratio:")} -- -- -- {this.state.data.normalizeddncachehitratio}% -- -- -- {_("NDN Cache Max Size:")} -- -- -- {displayBytes(this.state.data.maxnormalizeddncachesize)} -- -- -- {_("NDN Cache Tries:")} -- -- -- {numToCommas(this.state.data.normalizeddncachetries)} -- -- -- {_("NDN Current Cache Size:")} -- -- -- {displayBytes(this.state.data.currentnormalizeddncachesize)} -- -- -- {_("NDN Cache Hits:")} -- -- -- {numToCommas(this.state.data.normalizeddncachehits)} -- -- -- {_("NDN Cache DN Count:")} -- -- -- {numToCommas(this.state.data.currentnormalizeddncachecount)} -- -- -- {_("NDN Cache Evictions:")} -- -- -- {numToCommas(this.state.data.normalizeddncacheevictions)} -- -- -- {_("NDN Cache Thread Size:")} -- -- -- {numToCommas(this.state.data.normalizeddncachethreadsize)} -- -- -- {_("NDN Cache Thread Slots:")} -- -- -- {numToCommas(this.state.data.normalizeddncachethreadslots)} -- -- --
--
--
-- ); -+ -+ -+ -+ -+ -+ -+ -+ {_("NDN Cache Hit Ratio:")} -+ -+ -+ {this.state.data.normalizeddncachehitratio}% -+ -+ -+ {_("NDN Cache Max Size:")} -+ -+ -+ {displayBytes(this.state.data.maxnormalizeddncachesize)} -+ -+ -+ {_("NDN Cache Tries:")} -+ -+ -+ {numToCommas(this.state.data.normalizeddncachetries)} -+ -+ -+ {_("NDN Current Cache Size:")} -+ -+ -+ {displayBytes(this.state.data.currentnormalizeddncachesize)} -+ -+ -+ {_("NDN Cache Hits:")} -+ -+ -+ {numToCommas(this.state.data.normalizeddncachehits)} -+ -+ -+ {_("NDN Cache DN Count:")} -+ -+ -+ {numToCommas(this.state.data.currentnormalizeddncachecount)} -+ -+ -+ {_("NDN Cache Evictions:")} -+ -+ -+ {numToCommas(this.state.data.normalizeddncacheevictions)} -+ -+ -+ {_("NDN Cache Thread Size:")} -+ -+ -+ {numToCommas(this.state.data.normalizeddncachethreadsize)} -+ -+ -+ {_("NDN Cache Thread Slots:")} -+ -+ -+ {numToCommas(this.state.data.normalizeddncachethreadslots)} -+ -+ -+
-+ -+ -+ ); -+ } else { -+ // No NDN cache available -+ content = ( -+
-+ -+ -+ {_("Normalized DN Cache is disabled")} -+ -+ -+
-+ ); -+ } - } - - return ( -diff --git a/src/lib389/lib389/cli_conf/monitor.py b/src/lib389/lib389/cli_conf/monitor.py -index b01796549..c7f9322d1 100644 ---- a/src/lib389/lib389/cli_conf/monitor.py -+++ b/src/lib389/lib389/cli_conf/monitor.py -@@ -129,6 +129,14 @@ def db_monitor(inst, basedn, log, args): - # Gather the global DB stats - report_time = datetime.datetime.now().strftime("%Y-%m-%d %H:%M:%S") - ldbm_mon = ldbm_monitor.get_status() -+ ndn_cache_enabled = inst.config.get_attr_val_utf8('nsslapd-ndn-cache-enabled') == 'on' -+ -+ # Build global cache stats -+ result = { -+ 'date': report_time, -+ 'backends': {}, -+ } -+ - if ldbm_monitor.inst_db_impl == DB_IMPL_BDB: - dbcachesize = int(ldbm_mon['nsslapd-db-cache-size-bytes'][0]) - # Warning: there are two different page sizes associated with bdb: -@@ -153,32 +161,6 @@ def db_monitor(inst, basedn, log, args): - dbcachefree = max(int(dbcachesize - (pagesize * dbpages)), 0) - dbcachefreeratio = dbcachefree/dbcachesize - -- ndnratio = ldbm_mon['normalizeddncachehitratio'][0] -- ndncursize = int(ldbm_mon['currentnormalizeddncachesize'][0]) -- ndnmaxsize = int(ldbm_mon['maxnormalizeddncachesize'][0]) -- ndncount = ldbm_mon['currentnormalizeddncachecount'][0] -- ndnevictions = ldbm_mon['normalizeddncacheevictions'][0] -- if ndncursize > ndnmaxsize: -- ndnfree = 0 -- ndnfreeratio = 0 -- else: -- ndnfree = ndnmaxsize - ndncursize -- ndnfreeratio = "{:.1f}".format(ndnfree / ndnmaxsize * 100) -- -- # Build global cache stats -- result = { -- 'date': report_time, -- 'ndncache': { -- 'hit_ratio': ndnratio, -- 'free': convert_bytes(str(ndnfree)), -- 'free_percentage': ndnfreeratio, -- 'count': ndncount, -- 'evictions': ndnevictions -- }, -- 'backends': {}, -- } -- -- if ldbm_monitor.inst_db_impl == DB_IMPL_BDB: - result['dbcache'] = { - 'hit_ratio': dbhitratio, - 'free': convert_bytes(str(dbcachefree)), -@@ -188,6 +170,32 @@ def db_monitor(inst, basedn, log, args): - 'pageout': dbcachepageout - } - -+ # Add NDN cache stats only if enabled -+ if ndn_cache_enabled: -+ try: -+ ndnratio = ldbm_mon['normalizeddncachehitratio'][0] -+ ndncursize = int(ldbm_mon['currentnormalizeddncachesize'][0]) -+ ndnmaxsize = int(ldbm_mon['maxnormalizeddncachesize'][0]) -+ ndncount = ldbm_mon['currentnormalizeddncachecount'][0] -+ ndnevictions = ldbm_mon['normalizeddncacheevictions'][0] -+ if ndncursize > ndnmaxsize: -+ ndnfree = 0 -+ ndnfreeratio = 0 -+ else: -+ ndnfree = ndnmaxsize - ndncursize -+ ndnfreeratio = "{:.1f}".format(ndnfree / ndnmaxsize * 100) -+ -+ result['ndncache'] = { -+ 'hit_ratio': ndnratio, -+ 'free': convert_bytes(str(ndnfree)), -+ 'free_percentage': ndnfreeratio, -+ 'count': ndncount, -+ 'evictions': ndnevictions -+ } -+ # In case, the user enabled NDN cache but still have not restarted the instance -+ except IndexError: -+ ndn_cache_enabled = False -+ - # Build the backend results - for be in backend_objs: - be_name = be.rdn -@@ -277,13 +285,16 @@ def db_monitor(inst, basedn, log, args): - log.info(" - Pages In: {}".format(result['dbcache']['pagein'])) - log.info(" - Pages Out: {}".format(result['dbcache']['pageout'])) - log.info("") -- log.info("Normalized DN Cache:") -- log.info(" - Cache Hit Ratio: {}%".format(result['ndncache']['hit_ratio'])) -- log.info(" - Free Space: {}".format(result['ndncache']['free'])) -- log.info(" - Free Percentage: {}%".format(result['ndncache']['free_percentage'])) -- log.info(" - DN Count: {}".format(result['ndncache']['count'])) -- log.info(" - Evictions: {}".format(result['ndncache']['evictions'])) -- 
log.info("") -+ -+ if ndn_cache_enabled: -+ log.info("Normalized DN Cache:") -+ log.info(" - Cache Hit Ratio: {}%".format(result['ndncache']['hit_ratio'])) -+ log.info(" - Free Space: {}".format(result['ndncache']['free'])) -+ log.info(" - Free Percentage: {}%".format(result['ndncache']['free_percentage'])) -+ log.info(" - DN Count: {}".format(result['ndncache']['count'])) -+ log.info(" - Evictions: {}".format(result['ndncache']['evictions'])) -+ log.info("") -+ - log.info("Backends:") - for be_name, attr_dict in result['backends'].items(): - log.info(f" - {attr_dict['suffix']} ({be_name}):") --- -2.49.0 - diff --git a/0010-Issue-6854-Refactor-for-improved-data-management-685.patch b/0010-Issue-6854-Refactor-for-improved-data-management-685.patch deleted file mode 100644 index a6e3049..0000000 --- a/0010-Issue-6854-Refactor-for-improved-data-management-685.patch +++ /dev/null @@ -1,2237 +0,0 @@ -From c70c4ec35df7434135baa85de04356e1e15e7d92 Mon Sep 17 00:00:00 2001 -From: James Chapman -Date: Fri, 11 Jul 2025 10:16:13 +0000 -Subject: [PATCH] Issue 6854 - Refactor for improved data management (#6855) - -Description: Replaced standard dictionaries with defaultdict and -introduced @dataclass structures to streamline data handling. -Improved reliability and readability by reducing explicit key -checks and default initialisation logic. Simplified nested -structure management and ensured consistency across components -such as ResultData, BindData, and ConnectionData. - -Fixes: https://github.com/389ds/389-ds-base/issues/6854 - -Reviewed by: @mreynolds389 (Thank you) ---- - ldap/admin/src/logconv.py | 1307 ++++++++++++++++++++----------------- - 1 file changed, 698 insertions(+), 609 deletions(-) - -diff --git a/ldap/admin/src/logconv.py b/ldap/admin/src/logconv.py -index f4495ca35..162447b4d 100755 ---- a/ldap/admin/src/logconv.py -+++ b/ldap/admin/src/logconv.py -@@ -16,11 +16,11 @@ import argparse - import logging - import sys - import csv -+from collections import defaultdict, Counter - from datetime import datetime, timedelta, timezone -+from dataclasses import dataclass, field - import heapq --from collections import Counter --from collections import defaultdict --from typing import Optional -+from typing import Optional, Dict, List, Set, Tuple, DefaultDict - import magic - - # Globals -@@ -159,9 +159,191 @@ SCOPE_LABEL = { - - STLS_OID = '1.3.6.1.4.1.1466.20037' - --# Version --logAnalyzerVersion = "8.3" -+logAnalyzerVersion = "8.4" - -+@dataclass -+class VLVData: -+ counters: Dict[str, int] = field(default_factory=lambda: defaultdict( -+ int, -+ { -+ 'vlv': 0 -+ } -+ )) -+ -+ rst_con_op_map: Dict = field(default_factory=lambda: defaultdict(dict)) -+ -+@dataclass -+class ServerData: -+ counters: Dict[str, int] = field(default_factory=lambda: defaultdict( -+ int, -+ { -+ 'restart': 0, -+ 'lines_parsed': 0 -+ } -+ )) -+ -+ first_time: Optional[str] = None -+ last_time: Optional[str] = None -+ parse_start_time: Optional[str] = None -+ parse_stop_time: Optional[str] = None -+ -+@dataclass -+class OperationData: -+ counters: DefaultDict[str, int] = field(default_factory=lambda: defaultdict( -+ int, -+ { -+ 'add': 0, -+ 'mod': 0, -+ 'del': 0, -+ 'modrdn': 0, -+ 'cmp': 0, -+ 'abandon': 0, -+ 'sort': 0, -+ 'internal': 0, -+ 'extnd': 0, -+ 'authzid': 0, -+ 'total': 0 -+ } -+ )) -+ -+ rst_con_op_map: DefaultDict[str, DefaultDict[str, int]] = field( -+ default_factory=lambda: defaultdict(lambda: defaultdict(int)) -+ ) -+ -+ extended: DefaultDict[str, int] = field(default_factory=lambda: 
defaultdict(int)) -+ -+@dataclass -+class ConnectionData: -+ counters: DefaultDict[str, int] = field(default_factory=lambda: defaultdict( -+ int, -+ { -+ 'conn': 0, -+ 'fd_taken': 0, -+ 'fd_returned': 0, -+ 'fd_max': 0, -+ 'sim_conn': 0, -+ 'max_sim_conn': 0, -+ 'ldap': 0, -+ 'ldapi': 0, -+ 'ldaps': 0 -+ } -+ )) -+ -+ start_time: DefaultDict[str, str] = field(default_factory=lambda: defaultdict(str)) -+ open_conns: DefaultDict[str, int] = field(default_factory=lambda: defaultdict(int)) -+ exclude_ip: DefaultDict[Tuple[str, str], str] = field(default_factory=lambda: defaultdict(str)) -+ -+ broken_pipe: DefaultDict[str, int] = field(default_factory=lambda: defaultdict(int)) -+ resource_unavail: DefaultDict[str, int] = field(default_factory=lambda: defaultdict(int)) -+ connection_reset: DefaultDict[str, int] = field(default_factory=lambda: defaultdict(int)) -+ disconnect_code: DefaultDict[str, int] = field(default_factory=lambda: defaultdict(int)) -+ -+ restart_conn_disconnect_map: DefaultDict[Tuple[int, str], int] = field(default_factory=lambda: defaultdict(int)) -+ restart_conn_ip_map: Dict[Tuple[int, str], str] = field(default_factory=dict) -+ -+ src_ip_map: DefaultDict[str, DefaultDict[str, object]] = field( -+ default_factory=lambda: defaultdict(lambda: defaultdict(object)) -+ ) -+ -+@dataclass -+class BindData: -+ counters: DefaultDict[str, int] = field(default_factory=lambda: defaultdict( -+ int, -+ { -+ 'bind': 0, -+ 'unbind': 0, -+ 'sasl': 0, -+ 'anon': 0, -+ 'autobind': 0, -+ 'rootdn': 0 -+ } -+ )) -+ -+ restart_conn_dn_map: Dict[Tuple[int, str], str] = field(default_factory=dict) -+ -+ version: DefaultDict[str, int] = field(default_factory=lambda: defaultdict(int)) -+ dns: DefaultDict[str, int] = field(default_factory=lambda: defaultdict(int)) -+ sasl_mech: DefaultDict[str, int] = field(default_factory=lambda: defaultdict(int)) -+ conn_op_sasl_mech_map: DefaultDict[str, str] = field(default_factory=lambda: defaultdict(str)) -+ root_dn: DefaultDict[str, int] = field(default_factory=lambda: defaultdict(int)) -+ -+ report_dn: DefaultDict[str, Dict[str, Set[str]]] = field( -+ default_factory=lambda: defaultdict( -+ lambda: { -+ 'conn': set(), -+ 'ips': set() -+ } -+ ) -+ ) -+ -+@dataclass -+class ResultData: -+ counters: DefaultDict[str, int] = field(default_factory=lambda: defaultdict( -+ int, { -+ 'result': 0, -+ 'notesA': 0, -+ 'notesF': 0, -+ 'notesM': 0, -+ 'notesP': 0, -+ 'notesU': 0, -+ 'timestamp': 0, -+ 'entry': 0, -+ 'referral': 0 -+ } -+ )) -+ -+ notes: DefaultDict[str, Dict] = field(default_factory=lambda: defaultdict(dict)) -+ -+ timestamp_ctr: int = 0 -+ entry_count: int = 0 -+ referral_count: int = 0 -+ -+ total_etime: float = 0.0 -+ total_wtime: float = 0.0 -+ total_optime: float = 0.0 -+ etime_stat: float = 0.0 -+ -+ etime_duration: List[float] = field(default_factory=list) -+ wtime_duration: List[float] = field(default_factory=list) -+ optime_duration: List[float] = field(default_factory=list) -+ -+ nentries_num: List[int] = field(default_factory=list) -+ nentries_set: Set[int] = field(default_factory=set) -+ -+ error_freq: DefaultDict[str, int] = field(default_factory=lambda: defaultdict(int)) -+ bad_pwd_map: Dict[str, int] = field(default_factory=dict) -+ -+@dataclass -+class SearchData: -+ counters: Dict[str, int] = field(default_factory=lambda: defaultdict( -+ int, -+ { -+ 'search': 0, -+ 'base_search': 0, -+ 'persistent': 0 -+ } -+ )) -+ attrs: DefaultDict[str, int] = field(default_factory=lambda: defaultdict(int)) -+ bases: DefaultDict[str, int] = 
field(default_factory=lambda: defaultdict(int)) -+ -+ base_rst_con_op_map: Dict[Tuple[int, str, str], str] = field(default_factory=dict) -+ scope_rst_con_op_map: Dict[Tuple[int, str, str], str] = field(default_factory=dict) -+ -+ filter_dict: Dict[str, int] = field(default_factory=dict) -+ filter_list: List[str] = field(default_factory=list) -+ filter_rst_con_op_map: Dict[Tuple[int, str, str], str] = field(default_factory=dict) -+ -+@dataclass -+class AuthData: -+ counters: Dict[str, int] = field(default_factory=lambda: defaultdict( -+ int, -+ { -+ 'ssl_client_bind_ctr': 0, -+ 'ssl_client_bind_failed_ctr': 0, -+ 'cipher_ctr': 0 -+ } -+ )) -+ auth_info: DefaultDict[str, str] = field(default_factory=lambda: defaultdict(str)) - - class logAnalyser: - """ -@@ -272,136 +454,14 @@ class logAnalyser: - self.notesP = {} - self.notesU = {} - -- self.vlv = { -- 'vlv_ctr': 0, -- 'vlv_map_rco': {} -- } -- -- self.server = { -- 'restart_ctr': 0, -- 'first_time': None, -- 'last_time': None, -- 'parse_start_time': None, -- 'parse_stop_time': None, -- 'lines_parsed': 0 -- } -- -- self.operation = { -- 'all_op_ctr': 0, -- 'add_op_ctr': 0, -- 'mod_op_ctr': 0, -- 'del_op_ctr': 0, -- 'modrdn_op_ctr': 0, -- 'cmp_op_ctr': 0, -- 'abandon_op_ctr': 0, -- 'sort_op_ctr': 0, -- 'extnd_op_ctr': 0, -- 'add_map_rco': {}, -- 'mod_map_rco': {}, -- 'del_map_rco': {}, -- 'cmp_map_rco': {}, -- 'modrdn_map_rco': {}, -- 'extop_dict': {}, -- 'extop_map_rco': {}, -- 'abandoned_map_rco': {} -- } -- -- self.connection = { -- 'conn_ctr': 0, -- 'fd_taken_ctr': 0, -- 'fd_returned_ctr': 0, -- 'fd_max_ctr': 0, -- 'sim_conn_ctr': 0, -- 'max_sim_conn_ctr': 0, -- 'ldap_ctr': 0, -- 'ldapi_ctr': 0, -- 'ldaps_ctr': 0, -- 'start_time': {}, -- 'open_conns': {}, -- 'exclude_ip_map': {}, -- 'broken_pipe': {}, -- 'resource_unavail': {}, -- 'connection_reset': {}, -- 'disconnect_code': {}, -- 'disconnect_code_map': {}, -- 'ip_map': {}, -- 'restart_conn_ip_map': {} -- } -- -- self.bind = { -- 'bind_ctr': 0, -- 'unbind_ctr': 0, -- 'sasl_bind_ctr': 0, -- 'anon_bind_ctr': 0, -- 'autobind_ctr': 0, -- 'rootdn_bind_ctr': 0, -- 'version': {}, -- 'dn_freq': {}, -- 'dn_map_rc': {}, -- 'sasl_mech_freq': {}, -- 'sasl_map_co': {}, -- 'root_dn': {}, -- 'report_dn': defaultdict(lambda: defaultdict(int, conn=set(), ips=set())) -- } -- -- self.result = { -- 'result_ctr': 0, -- 'notesA_ctr': 0, # dynamically referenced -- 'notesF_ctr': 0, # dynamically referenced -- 'notesM_ctr': 0, # dynamically referenced -- 'notesP_ctr': 0, # dynamically referenced -- 'notesU_ctr': 0, # dynamically referenced -- 'timestamp_ctr': 0, -- 'entry_count': 0, -- 'referral_count': 0, -- 'total_etime': 0.0, -- 'total_wtime': 0.0, -- 'total_optime': 0.0, -- 'notesA_map': {}, -- 'notesF_map': {}, -- 'notesM_map': {}, -- 'notesP_map': {}, -- 'notesU_map': {}, -- 'etime_stat': 0.0, -- 'etime_counts': defaultdict(int), -- 'etime_freq': [], -- 'etime_duration': [], -- 'wtime_counts': defaultdict(int), -- 'wtime_freq': [], -- 'wtime_duration': [], -- 'optime_counts': defaultdict(int), -- 'optime_freq': [], -- 'optime_duration': [], -- 'nentries_dict': defaultdict(int), -- 'nentries_num': [], -- 'nentries_set': set(), -- 'nentries_returned': [], -- 'error_freq': defaultdict(str), -- 'bad_pwd_map': {} -- } -- -- self.search = { -- 'search_ctr': 0, -- 'search_map_rco': {}, -- 'attr_dict': defaultdict(int), -- 'base_search_ctr': 0, -- 'base_map': {}, -- 'base_map_rco': {}, -- 'scope_map_rco': {}, -- 'filter_dict': {}, -- 'filter_list': [], -- 'filter_seen': set(), -- 'filter_counter': Counter(), -- 
'filter_map_rco': {}, -- 'persistent_ctr': 0 -- } -- -- self.auth = { -- 'ssl_client_bind_ctr': 0, -- 'ssl_client_bind_failed_ctr': 0, -- 'cipher_ctr': 0, -- 'auth_info': {} -- } -+ self.vlv = VLVData() -+ self.server = ServerData() -+ self.operation = OperationData() -+ self.connection = ConnectionData() -+ self.bind = BindData() -+ self.result = ResultData() -+ self.search = SearchData() -+ self.auth = AuthData() - - def _init_regexes(self): - """ -@@ -564,22 +624,22 @@ class logAnalyser: - """ - print("\nBind Report") - print("====================================================================\n") -- for k, v in self.bind['report_dn'].items(): -+ for k, v in self.bind.report_dn.items(): - print(f"\nBind DN: {k}") - print("--------------------------------------------------------------------\n") - print(" Client Addresses:\n") -- ips = self.bind['report_dn'][k].get('ips', set()) -+ ips = self.bind.report_dn[k].get('ips', set()) - for i, ip in enumerate(ips, start=1): - print(f" {i}: {ip}") - print("\n Operations Performed:\n") -- print(f" Binds: {self.bind['report_dn'][k].get('bind', 0)}") -- print(f" Searches: {self.bind['report_dn'][k].get('srch', 0)}") -- print(f" Modifies: {self.bind['report_dn'][k].get('mod', 0)}") -- print(f" Adds: {self.bind['report_dn'][k].get('add', 0)}") -- print(f" Deletes: {self.bind['report_dn'][k].get('del', 0)}") -- print(f" Compares: {self.bind['report_dn'][k].get('cmp', 0)}") -- print(f" ModRDNs: {self.bind['report_dn'][k].get('modrdn', 0)}") -- print(f" Ext Ops: {self.bind['report_dn'][k].get('ext', 0)}") -+ print(f" Binds: {self.bind.report_dn[k].get('bind', 0)}") -+ print(f" Searches: {self.bind.report_dn[k].get('srch', 0)}") -+ print(f" Modifies: {self.bind.report_dn[k].get('mod', 0)}") -+ print(f" Adds: {self.bind.report_dn[k].get('add', 0)}") -+ print(f" Deletes: {self.bind.report_dn[k].get('del', 0)}") -+ print(f" Compares: {self.bind.report_dn[k].get('cmp', 0)}") -+ print(f" ModRDNs: {self.bind.report_dn[k].get('modrdn', 0)}") -+ print(f" Ext Ops: {self.bind.report_dn[k].get('ext', 0)}") - - print("Done.") - -@@ -618,12 +678,9 @@ class logAnalyser: - self.logger.error(f"Converting timestamp: {timestamp} to datetime failed with: {e}") - return False - -- # Add server restart count to groups for connection tracking -- groups['restart_ctr'] = self.server.get('restart_ctr', 0) -- - # Are there time range restrictions -- parse_start = self.server.get('parse_start_time', None) -- parse_stop = self.server.get('parse_stop_time', None) -+ parse_start = self.server.parse_start_time -+ parse_stop = self.server.parse_stop_time - - if parse_start and parse_stop: - if parse_start.microsecond == 0 and parse_stop.microsecond == 0: -@@ -634,12 +691,12 @@ class logAnalyser: - return False - - # Get the first and last timestamps -- if self.server.get('first_time') is None: -- self.server['first_time'] = timestamp -- self.server['last_time'] = timestamp -+ if self.server.first_time is None: -+ self.server.first_time = timestamp -+ self.server.last_time = timestamp - - # Bump lines parsed -- self.server['lines_parsed'] = self.server.get('lines_parsed', 0) + 1 -+ self.server.counters['lines_parsed'] += 1 - - # Call the associated method for this match - action(groups) -@@ -658,16 +715,11 @@ class logAnalyser: - """ - Process and update statistics based on the parsed result group. - -- Args: -- groups (dict): Parsed groups from the log line. -- -- - Args: - groups (dict): A dictionary containing operation information. 
Expected keys: - - 'timestamp': The timestamp of the connection event. - - 'conn_id': Connection identifier. - - 'op_id': Operation identifier. -- - 'restart_ctr': Server restart count. - - 'etime': Result elapsed time. - - 'wtime': Result wait time. - - 'optime': Result operation time. -@@ -679,182 +731,173 @@ class logAnalyser: - Raises: - KeyError: If required keys are missing in the `groups` dictionary. - """ -+ self.logger.debug(f"_process_result_stats - Start - {groups}") -+ - try: - timestamp = groups.get('timestamp') - conn_id = groups.get('conn_id') - op_id = groups.get('op_id') -- restart_ctr = groups.get('restart_ctr') - etime = float(groups.get('etime')) - wtime = float(groups.get('wtime')) - optime = float(groups.get('optime')) -+ nentries = int(groups.get('nentries')) - tag = groups.get('tag') - err = groups.get('err') -- nentries = int(groups.get('nentries')) - internal = groups.get('internal') -+ notes = groups.get('notes') - except KeyError as e: - self.logger.error(f"Missing key in groups: {e}") - return - - # Mapping keys for this entry -+ restart_ctr = self.server.counters['restart'] - restart_conn_op_key = (restart_ctr, conn_id, op_id) - restart_conn_key = (restart_ctr, conn_id) - conn_op_key = (conn_id, op_id) - - # Should we ignore this operation -- if restart_conn_key in self.connection['exclude_ip_map']: -+ if restart_conn_key in self.connection.exclude_ip: - return None - -- # Bump global result count -- self.result['result_ctr'] = self.result.get('result_ctr', 0) + 1 -- -- # Bump global result count -- self.result['timestamp_ctr'] = self.result.get('timestamp_ctr', 0) + 1 -+ self.result.counters['result'] += 1 -+ self.result.counters['timestamp'] += 1 - - # Longest etime, push current etime onto the heap -- heapq.heappush(self.result['etime_duration'], float(etime)) -+ heapq.heappush(self.result.etime_duration, etime) - - # If the heap exceeds size_limit, pop the smallest element from root -- if len(self.result['etime_duration']) > self.size_limit: -- heapq.heappop(self.result['etime_duration']) -+ if len(self.result.etime_duration) > self.size_limit: -+ heapq.heappop(self.result.etime_duration) - - # Longest wtime, push current wtime onto the heap -- heapq.heappush(self.result['wtime_duration'], float(wtime)) -+ heapq.heappush(self.result.wtime_duration, wtime) - - # If the heap exceeds size_limit, pop the smallest element from root -- if len(self.result['wtime_duration']) > self.size_limit: -- heapq.heappop(self.result['wtime_duration']) -+ if len(self.result.wtime_duration) > self.size_limit: -+ heapq.heappop(self.result.wtime_duration) - - # Longest optime, push current optime onto the heap -- heapq.heappush(self.result['optime_duration'], float(optime)) -+ heapq.heappush(self.result.optime_duration, optime) - - # If the heap exceeds size_limit, pop the smallest element from root -- if len(self.result['optime_duration']) > self.size_limit: -- heapq.heappop(self.result['optime_duration']) -+ if len(self.result.optime_duration) > self.size_limit: -+ heapq.heappop(self.result.optime_duration) - - # Total result times -- self.result['total_etime'] = self.result.get('total_etime', 0) + float(etime) -- self.result['total_wtime'] = self.result.get('total_wtime', 0) + float(wtime) -- self.result['total_optime'] = self.result.get('total_optime', 0) + float(optime) -+ self.result.total_etime = self.result.total_etime + etime -+ self.result.total_wtime = self.result.total_wtime + wtime -+ self.result.total_optime = self.result.total_optime + optime - - # Statistic 
reporting -- self.result['etime_stat'] = round(self.result['etime_stat'] + float(etime), 8) -+ self.result.etime_stat = round(self.result.etime_stat + float(etime), 8) - -- if err: -- # Capture error code -- self.result['error_freq'][err] = self.result['error_freq'].get(err, 0) + 1 -+ if err is not None: -+ self.result.error_freq[err] += 1 - - # Check for internal operations based on either conn_id or internal flag - if 'Internal' in conn_id or internal: -- self.server['internal_op_ctr'] = self.server.get('internal_op_ctr', 0) + 1 -+ self.operation.counters['internal'] +=1 - - # Process result notes if present -- notes = groups['notes'] -- if notes is not None: -- # match.group('notes') can be A|U|F -- self.result[f'notes{notes}_ctr'] = self.result.get(f'notes{notes}_ctr', 0) + 1 -- # Track result times using server restart count, conn id and op_id as key -- self.result[f'notes{notes}_map'][restart_conn_op_key] = restart_conn_op_key -- -- # Construct the notes dict -- note_dict = getattr(self, f'notes{notes}') -+ NOTE_TYPES = {'A', 'U', 'F', 'M', 'P'} -+ if notes in NOTE_TYPES: -+ note_dict = self.result.notes[notes] -+ self.result.counters[f'notes{notes}'] += 1 - - # Exclude VLV -- if restart_conn_op_key not in self.vlv['vlv_map_rco']: -- if restart_conn_op_key in note_dict: -- note_dict[restart_conn_op_key]['time'] = timestamp -- else: -- # First time round -- note_dict[restart_conn_op_key] = {'time': timestamp} -- -- note_dict[restart_conn_op_key]['etime'] = etime -- note_dict[restart_conn_op_key]['nentries'] = nentries -- note_dict[restart_conn_op_key]['ip'] = ( -- self.connection['restart_conn_ip_map'].get(restart_conn_key, '') -- ) -- -- if restart_conn_op_key in self.search['base_map_rco']: -- note_dict[restart_conn_op_key]['base'] = self.search['base_map_rco'][restart_conn_op_key] -- del self.search['base_map_rco'][restart_conn_op_key] -- -- if restart_conn_op_key in self.search['scope_map_rco']: -- note_dict[restart_conn_op_key]['scope'] = self.search['scope_map_rco'][restart_conn_op_key] -- del self.search['scope_map_rco'][restart_conn_op_key] -- -- if restart_conn_op_key in self.search['filter_map_rco']: -- note_dict[restart_conn_op_key]['filter'] = self.search['filter_map_rco'][restart_conn_op_key] -- del self.search['filter_map_rco'][restart_conn_op_key] -- -- note_dict[restart_conn_op_key]['bind_dn'] = self.bind['dn_map_rc'].get(restart_conn_key, '') -- -- elif restart_conn_op_key in self.vlv['vlv_map_rco']: -- # This "note" result is VLV, assign the note type for later filtering -- self.vlv['vlv_map_rco'][restart_conn_op_key] = notes -+ if restart_conn_op_key not in self.vlv.rst_con_op_map: -+ # Construct the notes dict -+ note_dict = self.result.notes[notes] -+ note_entry = note_dict.setdefault(restart_conn_op_key, {}) -+ note_entry.update({ -+ 'time': timestamp, -+ 'etime': etime, -+ 'nentries': nentries, -+ 'ip': self.connection.restart_conn_ip_map.get(restart_conn_key, 'Unknown IP'), -+ 'bind_dn': self.bind.restart_conn_dn_map.get(restart_conn_key, 'Unknown DN') -+ }) -+ -+ if restart_conn_op_key in self.search.base_rst_con_op_map: -+ note_dict[restart_conn_op_key]['base'] = self.search.base_rst_con_op_map[restart_conn_op_key] -+ del self.search.base_rst_con_op_map[restart_conn_op_key] -+ -+ if restart_conn_op_key in self.search.scope_rst_con_op_map: -+ note_dict[restart_conn_op_key]['scope'] = self.search.scope_rst_con_op_map[restart_conn_op_key] -+ del self.search.scope_rst_con_op_map[restart_conn_op_key] -+ -+ if restart_conn_op_key in 
self.search.filter_rst_con_op_map: -+ note_dict[restart_conn_op_key]['filter'] = self.search.filter_rst_con_op_map[restart_conn_op_key] -+ del self.search.filter_rst_con_op_map[restart_conn_op_key] -+ else: -+ self.vlv.rst_con_op_map[restart_conn_op_key] = notes - - # Trim the search data we dont need (not associated with a notes=X) -- if restart_conn_op_key in self.search['base_map_rco']: -- del self.search['base_map_rco'][restart_conn_op_key] -+ if restart_conn_op_key in self.search.base_rst_con_op_map: -+ del self.search.base_rst_con_op_map[restart_conn_op_key] - -- if restart_conn_op_key in self.search['scope_map_rco']: -- del self.search['scope_map_rco'][restart_conn_op_key] -+ if restart_conn_op_key in self.search.scope_rst_con_op_map: -+ del self.search.scope_rst_con_op_map[restart_conn_op_key] - -- if restart_conn_op_key in self.search['filter_map_rco']: -- del self.search['filter_map_rco'][restart_conn_op_key] -+ if restart_conn_op_key in self.search.filter_rst_con_op_map: -+ del self.search.filter_rst_con_op_map[restart_conn_op_key] - - # Process bind response based on the tag and error code. - if tag == '97': - # Invalid credentials|Entry does not exist - if err == '49': - # if self.verbose: -- bad_pwd_dn = self.bind['dn_map_rc'].get(restart_conn_key, None) -- bad_pwd_ip = self.connection['restart_conn_ip_map'].get(restart_conn_key, None) -- self.result['bad_pwd_map'][(bad_pwd_dn, bad_pwd_ip)] = ( -- self.result['bad_pwd_map'].get((bad_pwd_dn, bad_pwd_ip), 0) + 1 -+ bad_pwd_dn = self.bind.restart_conn_dn_map[restart_conn_key] -+ bad_pwd_ip = self.connection.restart_conn_ip_map.get(restart_conn_key, None) -+ self.result.bad_pwd_map[(bad_pwd_dn, bad_pwd_ip)] = ( -+ self.result.bad_pwd_map.get((bad_pwd_dn, bad_pwd_ip), 0) + 1 - ) - # Trim items to size_limit -- if len(self.result['bad_pwd_map']) > self.size_limit: -+ if len(self.result.bad_pwd_map) > self.size_limit: - within_size_limit = dict( - sorted( -- self.result['bad_pwd_map'].items(), -+ self.result.bad_pwd_map.items(), - key=lambda item: item[1], - reverse=True - )[:self.size_limit]) -- self.result['bad_pwd_map'] = within_size_limit -+ self.result.bad_pwd_map = within_size_limit - - # Ths result is involved in the SASL bind process, decrement bind count, etc - elif err == '14': -- self.bind['bind_ctr'] = self.bind.get('bind_ctr', 0) - 1 -- self.operation['all_op_ctr'] = self.operation.get('all_op_ctr', 0) - 1 -- self.bind['sasl_bind_ctr'] = self.bind.get('sasl_bind_ctr', 0) - 1 -- self.bind['version']['3'] = self.bind['version'].get('3', 0) - 1 -+ self.bind.counters['bind'] -= 1 -+ self.operation.counters['total'] -= 1 -+ self.bind.counters['sasl'] -= 1 -+ self.bind.version['3'] = self.bind.version.get('3', 0) - 1 - - # Drop the sasl mech count also -- mech = self.bind['sasl_map_co'].get(conn_op_key, 0) -+ mech = self.bind.conn_op_sasl_mech_map[conn_op_key] - if mech: -- self.bind['sasl_mech_freq'][mech] = self.bind['sasl_mech_freq'].get(mech, 0) - 1 -+ self.bind.sasl_mech[mech] -= 1 - # Is this is a result to a sasl bind - else: - result_dn = groups['dn'] - if result_dn: - if result_dn != "": - # If this is a result of a sasl bind, grab the dn -- if conn_op_key in self.bind['sasl_map_co']: -+ if conn_op_key in self.bind.conn_op_sasl_mech_map: - if result_dn is not None: -- self.bind['dn_map_rc'][restart_conn_key] = result_dn.lower() -- self.bind['dn_freq'][result_dn] = ( -- self.bind['dn_freq'].get(result_dn, 0) + 1 -+ self.bind.restart_conn_dn_map[restart_conn_key] = result_dn.lower() -+ self.bind.dns[result_dn] = ( -+ 
self.bind.dns.get(result_dn, 0) + 1 - ) - # Handle other tag values - elif tag in ['100', '101', '111', '115']: - - # Largest nentry, push current nentry onto the heap, no duplicates -- if int(nentries) not in self.result['nentries_set']: -- heapq.heappush(self.result['nentries_num'], int(nentries)) -- self.result['nentries_set'].add(int(nentries)) -+ if int(nentries) not in self.result.nentries_set: -+ heapq.heappush(self.result.nentries_num, int(nentries)) -+ self.result.nentries_set.add(int(nentries)) - - # If the heap exceeds size_limit, pop the smallest element from root -- if len(self.result['nentries_num']) > self.size_limit: -- removed = heapq.heappop(self.result['nentries_num']) -- self.result['nentries_set'].remove(removed) -+ if len(self.result.nentries_num) > self.size_limit: -+ removed = heapq.heappop(self.result.nentries_num) -+ self.result.nentries_set.remove(removed) -+ -+ self.logger.debug(f"_process_result_stats - End") - - def _process_search_stats(self, groups: dict): - """ -@@ -873,10 +916,11 @@ class logAnalyser: - Raises: - KeyError: If required keys are missing in the `groups` dictionary. - """ -+ self.logger.debug(f"_process_search_stats - Start - {groups}") -+ - try: - conn_id = groups.get('conn_id') - op_id = groups.get('op_id') -- restart_ctr = groups.get('restart_ctr') - search_base = groups['search_base'] - search_scope = groups['search_scope'] - search_attrs = groups['search_attrs'] -@@ -886,33 +930,34 @@ class logAnalyser: - return - - # Create a tracking keys for this entry -+ restart_ctr = self.server.counters['restart'] - restart_conn_op_key = (restart_ctr, conn_id, op_id) - restart_conn_key = (restart_ctr, conn_id) - - # Should we ignore this operation -- if restart_conn_key in self.connection['exclude_ip_map']: -+ if restart_conn_key in self.connection.exclude_ip: - return None - - # Bump search and global op count -- self.search['search_ctr'] = self.search.get('search_ctr', 0) + 1 -- self.operation['all_op_ctr'] = self.operation.get('all_op_ctr', 0) + 1 -+ self.search.counters['search'] += 1 -+ self.operation.counters['total'] += 1 - - # Search attributes - if search_attrs is not None: - if search_attrs == 'ALL': -- self.search['attr_dict']['All Attributes'] += 1 -+ self.search.attrs['All Attributes'] += 1 - else: - for attr in search_attrs.split(): - attr = attr.strip('"') -- self.search['attr_dict'][attr] += 1 -+ self.search.attrs[attr] += 1 - - # If the associated conn id for the bind DN matches update op counter -- for dn in self.bind['report_dn']: -- conns = self.bind['report_dn'][dn]['conn'] -+ for dn in self.bind.report_dn: -+ conns = self.bind.report_dn[dn]['conn'] - if conn_id in conns: - bind_dn_key = self._report_dn_key(dn, self.report_dn) - if bind_dn_key: -- self.bind['report_dn'][bind_dn_key]['srch'] = self.bind['report_dn'][bind_dn_key].get('srch', 0) + 1 -+ self.bind.report_dn[bind_dn_key]['srch'] = self.bind.report_dn[bind_dn_key].get('srch', 0) + 1 - - # Search base - if search_base is not None: -@@ -924,49 +969,51 @@ class logAnalyser: - search_base = base.lower() - if search_base: - if self.verbose: -- self.search['base_map'][search_base] = self.search['base_map'].get(search_base, 0) + 1 -- self.search['base_map_rco'][restart_conn_op_key] = search_base -+ self.search.bases[search_base] += 1#self.search.bases.get(search_base, 0) + 1 -+ self.search.base_rst_con_op_map[restart_conn_op_key] = search_base - - # Search scope - if search_scope is not None: - if self.verbose: -- self.search['scope_map_rco'][restart_conn_op_key] = 
SCOPE_LABEL[int(search_scope)] -+ self.search.scope_rst_con_op_map[restart_conn_op_key] = SCOPE_LABEL[int(search_scope)] - - # Search filter - if search_filter is not None: - if self.verbose: -- self.search['filter_map_rco'][restart_conn_op_key] = search_filter -- self.search['filter_dict'][search_filter] = self.search['filter_dict'].get(search_filter, 0) + 1 -+ self.search.filter_rst_con_op_map[restart_conn_op_key] = search_filter -+ self.search.filter_dict[search_filter] = self.search.filter_dict.get(search_filter, 0) + 1 - - found = False -- for idx, (count, filter) in enumerate(self.search['filter_list']): -+ for idx, (count, filter) in enumerate(self.search.filter_list): - if filter == search_filter: - found = True -- self.search['filter_list'][idx] = (self.search['filter_dict'][search_filter] + 1, search_filter) -- heapq.heapify(self.search['filter_list']) -+ self.search.filter_list[idx] = (self.search.filter_dict[search_filter] + 1, search_filter) -+ heapq.heapify(self.search.filter_list) - break - - if not found: -- if len(self.search['filter_list']) < self.size_limit: -- heapq.heappush(self.search['filter_list'], (1, search_filter)) -+ if len(self.search.filter_list) < self.size_limit: -+ heapq.heappush(self.search.filter_list, (1, search_filter)) - else: -- heapq.heappushpop(self.search['filter_list'], (self.search['filter_dict'][search_filter], search_filter)) -+ heapq.heappushpop(self.search.filter_list, (self.search.filter_dict[search_filter], search_filter)) - - # Check for an entire base search - if "objectclass=*" in search_filter.lower() or "objectclass=top" in search_filter.lower(): - if search_scope == '2': -- self.search['base_search_ctr'] = self.search.get('base_search_ctr', 0) + 1 -+ self.search.counters['base_search'] += 1 - - # Persistent search - if groups['options'] is not None: - options = groups['options'] - if options == 'persistent': -- self.search['persistent_ctr'] = self.search.get('persistent_ctr', 0) + 1 -+ self.search.counters['persistent'] += 1 - - # Authorization identity - if groups['authzid_dn'] is not None: - self.search['authzid'] = self.search.get('authzid', 0) + 1 - -+ self.logger.debug(f"_process_search_stats - End") -+ - def _process_bind_stats(self, groups: dict): - """ - Process and update statistics based on the parsed result group. -@@ -975,7 +1022,6 @@ class logAnalyser: - groups (dict): A dictionary containing operation information. Expected keys: - - 'conn_id': Connection identifier. - - 'op_id': Operation identifier. -- - 'restart_ctr': Server restart count. - - 'bind_dn': Bind DN. - - 'bind_method': Bind method (sasl, simple). - - 'bind_version': Bind version. -@@ -983,80 +1029,74 @@ class logAnalyser: - Raises: - KeyError: If required keys are missing in the `groups` dictionary. 
- """ -+ self.logger.debug(f"_process_bind_stats - Start - {groups}") -+ - try: - conn_id = groups.get('conn_id') - op_id = groups.get('op_id') -- restart_ctr = groups.get('restart_ctr') - bind_dn = groups.get('bind_dn') -- bind_method = groups['bind_method'] -- bind_version = groups['bind_version'] -+ bind_method = groups.get('bind_method') -+ bind_version = groups.get('bind_version') - except KeyError as e: - self.logger.error(f"Missing key in groups: {e}") - return - -- # If this is the first connection (indicating a server restart), increment restart counter -- if conn_id == '1': -- self.server['restart_ctr'] = self.server.get('restart_ctr', 0) + 1 -- - # Create a tracking keys for this entry -+ restart_ctr = self.server.counters['restart'] - restart_conn_key = (restart_ctr, conn_id) - conn_op_key = (conn_id, op_id) - -+ if bind_dn.strip() == '': -+ bind_dn = 'Anonymous' -+ bind_dn_normalised = bind_dn.lower() if bind_dn != 'Anonymous' else 'anonymous' -+ - # Should we ignore this operation -- if restart_conn_key in self.connection['exclude_ip_map']: -+ if restart_conn_key in self.connection.exclude_ip: - return None - -- # Bump bind and global op count -- self.bind['bind_ctr'] = self.bind.get('bind_ctr', 0) + 1 -- self.operation['all_op_ctr'] = self.operation.get('all_op_ctr', 0) + 1 -+ # Update counters -+ self.bind.counters['bind'] += 1 -+ self.operation.counters['total'] = self.operation.counters['total'] + 1 -+ self.bind.version[bind_version] += 1 - -- # Update bind version count -- self.bind['version'][bind_version] = self.bind['version'].get(bind_version, 0) + 1 -- if bind_dn == "": -- bind_dn = 'Anonymous' - - # If we need to report on this DN, capture some info for tracking - bind_dn_key = self._report_dn_key(bind_dn, self.report_dn) - if bind_dn_key: -- # Update bind count -- self.bind['report_dn'][bind_dn_key]['bind'] = self.bind['report_dn'][bind_dn_key].get('bind', 0) + 1 -- # Connection ID -- self.bind['report_dn'][bind_dn_key]['conn'].add(conn_id) -+ self.bind.report_dn[bind_dn_key]['bind'] = self.bind.report_dn[bind_dn_key].get('bind', 0) + 1 -+ self.bind.report_dn[bind_dn_key]['conn'].add(conn_id) -+ - # Loop over IPs captured at connection time to find the associated IP -- for (ip, ip_info) in self.connection['ip_map'].items(): -+ for (ip, ip_info) in self.connection.src_ip_map.items(): - if restart_conn_key in ip_info['keys']: -- self.bind['report_dn'][bind_dn_key]['ips'].add(ip) -+ self.bind.report_dn[bind_dn_key]['ips'].add(ip) - -- # sasl or simple bind -+ # Handle SASL or simple bind - if bind_method == 'sasl': -- self.bind['sasl_bind_ctr'] = self.bind.get('sasl_bind_ctr', 0) + 1 -+ self.bind.counters['sasl'] += 1 - sasl_mech = groups['sasl_mech'] -- if sasl_mech is not None: -- # Bump sasl mechanism count -- self.bind['sasl_mech_freq'][sasl_mech] = self.bind['sasl_mech_freq'].get(sasl_mech, 0) + 1 -+ if sasl_mech: -+ self.bind.sasl_mech[sasl_mech] += 1 -+ self.bind.conn_op_sasl_mech_map[conn_op_key] = sasl_mech - -- # Keep track of bind key to handle sasl result later -- self.bind['sasl_map_co'][conn_op_key] = sasl_mech -+ if bind_dn_normalised == self.root_dn.casefold(): -+ self.bind.counters['rootdn'] += 1 - - if bind_dn != "Anonymous": -- if bind_dn.casefold() == self.root_dn.casefold(): -- self.bind['rootdn_bind_ctr'] = self.bind.get('rootdn_bind_ctr', 0) + 1 -+ self.bind.dns[bind_dn] += 1 - -- # if self.verbose: -- self.bind['dn_freq'][bind_dn] = self.bind['dn_freq'].get(bind_dn, 0) + 1 -- self.bind['dn_map_rc'][restart_conn_key] = bind_dn.lower() - else: - 
if bind_dn == "Anonymous": -- self.bind['anon_bind_ctr'] = self.bind.get('anon_bind_ctr', 0) + 1 -- self.bind['dn_freq']['Anonymous'] = self.bind['dn_freq'].get('Anonymous', 0) + 1 -- self.bind['dn_map_rc'][restart_conn_key] = "anonymous" -+ self.bind.counters['anon'] += 1 -+ self.bind.dns['Anonymous'] += 1 - else: - if bind_dn.casefold() == self.root_dn.casefold(): -- self.bind['rootdn_bind_ctr'] = self.bind.get('rootdn_bind_ctr', 0) + 1 -+ self.bind.counters['rootdn'] += 1 -+ self.bind.dns[bind_dn] += 1 - -- # if self.verbose: -- self.bind['dn_freq'][bind_dn] = self.bind['dn_freq'].get(bind_dn, 0) + 1 -- self.bind['dn_map_rc'][restart_conn_key] = bind_dn.lower() -+ self.bind.restart_conn_dn_map[restart_conn_key] = bind_dn_normalised -+ -+ self.logger.debug(f"_process_bind_stats - End") - - def _process_unbind_stats(self, groups: dict): - """ -@@ -1065,27 +1105,31 @@ class logAnalyser: - Args: - groups (dict): A dictionary containing operation information. Expected keys: - - 'conn_id': Connection identifier. -- - 'restart_ctr': Server restart count. - - Raises: - KeyError: If required keys are missing in the `groups` dictionary. - """ -+ -+ self.logger.debug(f"_process_unbind_stats - Start - {groups}") -+ - try: - conn_id = groups.get('conn_id') -- restart_ctr = groups.get('restart_ctr') - except KeyError as e: - self.logger.error(f"Missing key in groups: {e}") - return - - # Create a tracking key for this entry -+ restart_ctr = self.server.counters['restart'] - restart_conn_key = (restart_ctr, conn_id) - - # Should we ignore this operation -- if restart_conn_key in self.connection['exclude_ip_map']: -+ if restart_conn_key in self.connection.exclude_ip: - return None - - # Bump unbind count -- self.bind['unbind_ctr'] = self.bind.get('unbind_ctr', 0) + 1 -+ self.bind.counters['unbind'] += 1 -+ -+ self.logger.debug(f"_process_unbind_stats - End") - - def _process_connect_stats(self, groups: dict): - """ -@@ -1094,15 +1138,18 @@ class logAnalyser: - Args: - groups (dict): A dictionary containing operation information. Expected keys: - - 'conn_id': Connection identifier. -- - 'restart_ctr': Server restart count. -+ - 'src_ip': Source IP address. -+ - 'fd': File descriptor. -+ - 'ssl': LDAPS. - - Raises: - KeyError: If required keys are missing in the `groups` dictionary. 
- """ -+ -+ self.logger.debug(f"_process_connect_stats - Start - {groups}") -+ - try: -- timestamp = groups.get('timestamp') - conn_id = groups.get('conn_id') -- restart_ctr = groups.get('restart_ctr') - src_ip = groups.get('src_ip') - fd = groups['fd'] - ssl = groups['ssl'] -@@ -1110,61 +1157,63 @@ class logAnalyser: - self.logger.error(f"Missing key in groups: {e}") - return - -+ # If conn=1, server has started a new lifecycle -+ if conn_id == '1': -+ self.server.counters['restart'] += 1 -+ - # Create a tracking key for this entry -+ restart_ctr = self.server.counters['restart'] - restart_conn_key = (restart_ctr, conn_id) - - # Should we exclude this IP - if self.exclude_ip and src_ip in self.exclude_ip: -- self.connection['exclude_ip_map'][restart_conn_key] = src_ip -+ self.connection.exclude_ip[restart_conn_key] = src_ip - return None - - if self.verbose: - # Update open connection count -- self.connection['open_conns'][src_ip] = self.connection['open_conns'].get(src_ip, 0) + 1 -+ self.connection.open_conns[src_ip] += 1 - - # Track the connection start normalised datetime object for latency report -- self.connection['start_time'][conn_id] = groups.get('timestamp') -+ self.connection.start_time[conn_id] = groups.get('timestamp') - - # Update general connection counters -- for key in ['conn_ctr', 'sim_conn_ctr']: -- self.connection[key] = self.connection.get(key, 0) + 1 -+ self.connection.counters['conn'] += 1 -+ self.connection.counters['sim_conn'] += 1 - - # Update the maximum number of simultaneous connections seen -- self.connection['max_sim_conn_ctr'] = max( -- self.connection.get('max_sim_conn_ctr', 0), -- self.connection['sim_conn_ctr'] -+ self.connection.counters['max_sim_conn'] = max( -+ self.connection.counters['max_sim_conn'], -+ self.connection.counters['sim_conn'] - ) - - # Update protocol counters -- src_ip_tmp = 'local' if src_ip == 'local' else 'ldap' - if ssl: -- stat_count_key = 'ldaps_ctr' -+ self.connection.counters['ldaps'] += 1 -+ elif src_ip == 'local': -+ self.connection.counters['ldapi'] += 1 - else: -- stat_count_key = 'ldapi_ctr' if src_ip_tmp == 'local' else 'ldap_ctr' -- self.connection[stat_count_key] = self.connection.get(stat_count_key, 0) + 1 -+ self.connection.counters['ldap'] += 1 - - # Track file descriptor counters -- self.connection['fd_max_ctr'] = ( -- max(self.connection.get('fd_max_ctr', 0), int(fd)) -- ) -- self.connection['fd_taken_ctr'] = ( -- self.connection.get('fd_taken_ctr', 0) + 1 -- ) -+ self.connection.counters['fd_max'] = max(self.connection.counters['fd_taken'], int(fd)) -+ self.connection.counters['fd_taken'] += 1 - - # Track source IP -- self.connection['restart_conn_ip_map'][restart_conn_key] = src_ip -+ self.connection.restart_conn_ip_map[restart_conn_key] = src_ip - - # Update the count of connections seen from this IP -- if src_ip not in self.connection['ip_map']: -- self.connection['ip_map'][src_ip] = {} -+ if src_ip not in self.connection.src_ip_map: -+ self.connection.src_ip_map[src_ip] = {} -+ -+ self.connection.src_ip_map[src_ip]['count'] = self.connection.src_ip_map[src_ip].get('count', 0) + 1 - -- self.connection['ip_map'][src_ip]['count'] = self.connection['ip_map'][src_ip].get('count', 0) + 1 -+ if 'keys' not in self.connection.src_ip_map[src_ip]: -+ self.connection.src_ip_map[src_ip]['keys'] = set() - -- if 'keys' not in self.connection['ip_map'][src_ip]: -- self.connection['ip_map'][src_ip]['keys'] = set() -+ self.connection.src_ip_map[src_ip]['keys'].add(restart_conn_key) - -- 
self.connection['ip_map'][src_ip]['keys'].add(restart_conn_key) -- # self.connection['ip_map']['ip_key'] = restart_conn_key -+ self.logger.debug(f"_process_connect_stats - End") - - def _process_auth_stats(self, groups: dict): - """ -@@ -1173,7 +1222,6 @@ class logAnalyser: - Args: - groups (dict): A dictionary containing operation information. Expected keys: - - 'conn_id': Connection identifier. -- - 'restart_ctr': Server restart count. - - 'auth_protocol': Auth protocol (SSL, TLS). - - 'auth_version': Auth version. - - 'auth_message': Optional auth message. -@@ -1181,9 +1229,11 @@ class logAnalyser: - Raises: - KeyError: If required keys are missing in the `groups` dictionary. - """ -+ -+ self.logger.debug(f"_process_auth_stats - Start - {groups}") -+ - try: - conn_id = groups.get('conn_id') -- restart_ctr = groups.get('restart_ctr') - auth_protocol = groups.get('auth_protocol') - auth_version = groups.get('auth_version') - auth_message = groups.get('auth_message') -@@ -1192,15 +1242,16 @@ class logAnalyser: - return - - # Create a tracking key for this entry -+ restart_ctr = self.server.counters['restart'] - restart_conn_key = (restart_ctr, conn_id) - - # Should we ignore this operation -- if restart_conn_key in self.connection['exclude_ip_map']: -+ if restart_conn_key in self.connection.exclude_ip: - return None - - if auth_protocol: -- if restart_conn_key not in self.auth['auth_info']: -- self.auth['auth_info'][restart_conn_key] = { -+ if restart_conn_key not in self.auth.auth_info: -+ self.auth.auth_info[restart_conn_key] = { - 'proto': auth_protocol, - 'version': auth_version, - 'count': 0, -@@ -1209,19 +1260,19 @@ class logAnalyser: - - if auth_message: - # Increment counters and add auth message -- self.auth['auth_info'][restart_conn_key]['message'].append(auth_message) -+ self.auth.auth_info[restart_conn_key]['message'].append(auth_message) - - # Bump auth related counters -- self.auth['cipher_ctr'] = self.auth.get('cipher_ctr', 0) + 1 -- self.auth['auth_info'][restart_conn_key]['count'] = ( -- self.auth['auth_info'][restart_conn_key].get('count', 0) + 1 -- ) -+ self.auth.counters['cipher_ctr'] += 1 -+ self.auth.auth_info[restart_conn_key]['count'] += 1 - - if auth_message: - if auth_message == 'client bound as': -- self.auth['ssl_client_bind_ctr'] = self.auth.get('ssl_client_bind_ctr', 0) + 1 -+ self.auth.counters['ssl_client_bind_ctr'] += 1 - elif auth_message == 'failed to map client certificate to LDAP DN': -- self.auth['ssl_client_bind_failed_ctr'] = self.auth.get('ssl_client_bind_failed_ctr', 0) + 1 -+ self.auth.counters['ssl_client_bind_failed_ctr'] += 1 -+ -+ self.logger.debug(f"_process_auth_stats - End") - - def _process_vlv_stats(self, groups: dict): - """ -@@ -1231,33 +1282,37 @@ class logAnalyser: - groups (dict): A dictionary containing operation information. Expected keys: - - 'conn_id': Connection identifier. - - 'op_id': Operation identifier. -- - 'restart_ctr': Server restart count. - - Raises: - KeyError: If required keys are missing in the `groups` dictionary. 
- """ -+ -+ self.logger.debug(f"_process_vlv_stats - Start - {groups}") -+ - try: - conn_id = groups.get('conn_id') - op_id = groups.get('op_id') -- restart_ctr = groups.get('restart_ctr') - except KeyError as e: - self.logger.error(f"Missing key in groups: {e}") - return - -- # Create tracking keys -+ # Create a tracking key for this entry -+ restart_ctr = self.server.counters['restart'] - restart_conn_op_key = (restart_ctr, conn_id, op_id) - restart_conn_key = (restart_ctr, conn_id) - - # Should we ignore this operation -- if restart_conn_key in self.connection['exclude_ip_map']: -+ if restart_conn_key in self.connection.exclude_ip: - return None - - # Bump vlv and global op stats -- self.vlv['vlv_ctr'] = self.vlv.get('vlv_ctr', 0) + 1 -- self.operation['all_op_ctr'] = self.operation.get('all_op_ctr', 0) + 1 -+ self.vlv.counters['vlv'] += 1 -+ self.operation.counters['total'] = self.operation.counters['total'] + 1 - - # Key and value are the same, makes set operations easier later on -- self.vlv['vlv_map_rco'][restart_conn_op_key] = restart_conn_op_key -+ self.vlv.rst_con_op_map[restart_conn_op_key] = restart_conn_op_key -+ -+ self.logger.debug(f"_process_vlv_stats - End") - - def _process_abandon_stats(self, groups: dict): - """ -@@ -1267,38 +1322,41 @@ class logAnalyser: - groups (dict): A dictionary containing operation information. Expected keys: - - 'conn_id': Connection identifier. - - 'op_id': Operation identifier. -- - 'restart_ctr': Server restart count. - - 'targetop': The target operation. - - 'msgid': Message ID. - - Raises: - KeyError: If required keys are missing in the `groups` dictionary. - """ -+ -+ self.logger.debug(f"_process_abandon_stats - Start - {groups}") -+ - try: - conn_id = groups.get('conn_id') - op_id = groups.get('op_id') -- restart_ctr = groups.get('restart_ctr') - targetop = groups.get('targetop') - msgid = groups.get('msgid') - except KeyError as e: - self.logger.error(f"Missing key in groups: {e}") - return - -- # Create a tracking keys -- restart_conn_op_key = (restart_ctr, conn_id, op_id) -+ # Create a tracking key for this entry -+ restart_ctr = self.server.counters['restart'] - restart_conn_key = (restart_ctr, conn_id) - - # Should we ignore this operation -- if restart_conn_key in self.connection['exclude_ip_map']: -+ if restart_conn_key in self.connection.exclude_ip: - return None - - # Bump some stats -- self.result['result_ctr'] = self.result.get('result_ctr', 0) + 1 -- self.operation['all_op_ctr'] = self.operation.get('all_op_ctr', 0) + 1 -- self.operation['abandon_op_ctr'] = self.operation.get('abandon_op_ctr', 0) + 1 -+ self.result.counters['result'] += 1 -+ self.operation.counters['total'] = self.operation.counters['total'] + 1 -+ self.operation.counters['abandon'] += 1 - - # Track abandoned operation for later processing -- self.operation['abandoned_map_rco'][restart_conn_op_key] = (conn_id, op_id, targetop, msgid) -+ self.operation.rst_con_op_map['abandon'] = (conn_id, op_id, targetop, msgid) -+ -+ self.logger.debug(f"_process_abandon_stats - End") - - def _process_sort_stats(self, groups: dict): - """ -@@ -1307,26 +1365,30 @@ class logAnalyser: - Args: - groups (dict): A dictionary containing operation information. Expected keys: - - 'conn_id': Connection identifier. -- - 'restart_ctr': Server restart count. - - Raises: - KeyError: If required keys are missing in the `groups` dictionary. 
- """ -+ -+ self.logger.debug(f"_process_sort_stats - Start - {groups}") -+ - try: - conn_id = groups.get('conn_id') -- restart_ctr = groups.get('restart_ctr') - except KeyError as e: - self.logger.error(f"Missing key in groups: {e}") - return - - # Create a tracking key for this entry -+ restart_ctr = self.server.counters['restart'] - restart_conn_key = (restart_ctr, conn_id) - - # Should we ignore this operation -- if restart_conn_key in self.connection['exclude_ip_map']: -+ if restart_conn_key in self.connection.exclude_ip: - return None - -- self.operation['sort_op_ctr'] = self.operation.get('sort_op_ctr', 0) + 1 -+ self.operation.counters['sort'] += 1 -+ -+ self.logger.debug(f"_process_sort_stats - End") - - def _process_extend_op_stats(self, groups: dict): - """ -@@ -1342,6 +1404,9 @@ class logAnalyser: - Raises: - KeyError: If required keys are missing in the `groups` dictionary. - """ -+ -+ self.logger.debug(f"_process_extend_op_stats - Start - {groups}") -+ - try: - conn_id = groups.get('conn_id') - op_id = groups.get('op_id') -@@ -1352,31 +1417,34 @@ class logAnalyser: - return - - # Create a tracking key for this entry -+ restart_ctr = self.server.counters['restart'] - restart_conn_op_key = (restart_ctr, conn_id, op_id) - restart_conn_key = (restart_ctr, conn_id) - - # Should we ignore this operation -- if restart_conn_key in self.connection['exclude_ip_map']: -+ if restart_conn_key in self.connection.exclude_ip: - return None - - # Increment global operation counters -- self.operation['all_op_ctr'] = self.operation.get('all_op_ctr', 0) + 1 -- self.operation['extnd_op_ctr'] = self.operation.get('extnd_op_ctr', 0) + 1 -+ self.operation.counters['total'] = self.operation.counters['total'] + 1 -+ self.operation.counters['extnd'] += 1 - - # Track extended operation data if an OID is present - if oid is not None: -- self.operation['extop_dict'][oid] = self.operation['extop_dict'].get(oid, 0) + 1 -- self.operation['extop_map_rco'][restart_conn_op_key] = ( -- self.operation['extop_map_rco'].get(restart_conn_op_key, 0) + 1 -+ self.operation.extended[oid] += 1 -+ self.operation.rst_con_op_map['extnd'][restart_conn_op_key] = ( -+ self.operation.rst_con_op_map['extnd'].get(restart_conn_op_key, 0) + 1 - ) - - # If the conn_id is associated with this DN, update op counter -- for dn in self.bind['report_dn']: -- conns = self.bind['report_dn'][dn]['conn'] -+ for dn in self.bind.report_dn: -+ conns = self.bind.report_dn[dn]['conn'] - if conn_id in conns: - bind_dn_key = self._report_dn_key(dn, self.report_dn) - if bind_dn_key: -- self.bind['report_dn'][bind_dn_key]['ext'] = self.bind['report_dn'][bind_dn_key].get('ext', 0) + 1 -+ self.bind.report_dn[bind_dn_key]['ext'] = self.bind.report_dn[bind_dn_key].get('ext', 0) + 1 -+ -+ self.logger.debug(f"_process_extend_op_stats - End") - - def _process_autobind_stats(self, groups: dict): - """ -@@ -1385,43 +1453,47 @@ class logAnalyser: - Args: - groups (dict): A dictionary containing operation information. Expected keys: - - 'conn_id': Connection identifier. -- - 'restart_ctr': Server restart count. - - 'bind_dn': Bind DN ("cn=Directory Manager") - - Raises: - KeyError: If required keys are missing in the `groups` dictionary. 
- """ -+ -+ self.logger.debug(f"_process_autobind_stats - Start - {groups}") -+ - try: - conn_id = groups.get('conn_id') -- restart_ctr = groups.get('restart_ctr') - bind_dn = groups.get('bind_dn') - except KeyError as e: - self.logger.error(f"Missing key in groups: {e}") - return - - # Create a tracking key for this entry -+ restart_ctr = self.server.counters['restart'] - restart_conn_key = (restart_ctr, conn_id) - - # Should we ignore this operation -- if restart_conn_key in self.connection['exclude_ip_map']: -+ if restart_conn_key in self.connection.exclude_ip: - return None - - # Bump relevant counters -- self.bind['bind_ctr'] = self.bind.get('bind_ctr', 0) + 1 -- self.bind['autobind_ctr'] = self.bind.get('autobind_ctr', 0) + 1 -- self.operation['all_op_ctr'] = self.operation.get('all_op_ctr', 0) + 1 -+ self.bind.counters['bind'] += 1 -+ self.bind.counters['autobind'] += 1 -+ self.operation.counters['total'] += 1 - - # Handle an anonymous autobind (empty bind_dn) - if bind_dn == "": -- self.bind['anon_bind_ctr'] = self.bind.get('anon_bind_ctr', 0) + 1 -+ self.bind.counters['anon'] += 1 - else: -- # Process non-anonymous binds, does the bind_dn if exist in dn_map_rc -- bind_dn = self.bind['dn_map_rc'].get(restart_conn_key, bind_dn) -+ # Process non-anonymous binds, does the bind_dn if exist in restart_conn_dn_map -+ bind_dn = self.bind.restart_conn_dn_map.get(restart_conn_key, bind_dn) - if bind_dn: - if bind_dn.casefold() == self.root_dn.casefold(): -- self.bind['rootdn_bind_ctr'] = self.bind.get('rootdn_bind_ctr', 0) + 1 -+ self.bind.counters['rootdn'] += 1 - bind_dn = bind_dn.lower() -- self.bind['dn_freq'][bind_dn] = self.bind['dn_freq'].get(bind_dn, 0) + 1 -+ self.bind.dns[bind_dn] = self.bind.dns.get(bind_dn, 0) + 1 -+ -+ self.logger.debug(f"_process_autobind_stats - End") - - def _process_disconnect_stats(self, groups: dict): - """ -@@ -1430,7 +1502,6 @@ class logAnalyser: - Args: - groups (dict): A dictionary containing operation information. Expected keys: - - 'conn_id': Connection identifier. -- - 'restart_ctr': Server restart count. - - 'timestamp': The timestamp of the disconnect event. - - 'error_code': Error code associated with the disconnect, if any. - - 'disconnect_code': Disconnect code, if any. -@@ -1438,9 +1509,11 @@ class logAnalyser: - Raises: - KeyError: If required keys are missing in the `groups` dictionary. 
- """ -+ -+ self.logger.debug(f"_process_disconnect_stats - Start - {groups}") -+ - try: - conn_id = groups.get('conn_id') -- restart_ctr = groups.get('restart_ctr') - timestamp = groups.get('timestamp') - error_code = groups.get('error_code') - disconnect_code = groups.get('disconnect_code') -@@ -1449,17 +1522,18 @@ class logAnalyser: - return - - # Create a tracking key for this entry -+ restart_ctr = self.server.counters['restart'] - restart_conn_key = (restart_ctr, conn_id) - - # Should we ignore this operation -- if restart_conn_key in self.connection['exclude_ip_map']: -+ if restart_conn_key in self.connection.exclude_ip: - return None - - if self.verbose: - # Handle verbose logging for open connections and IP addresses -- src_ip = self.connection['restart_conn_ip_map'].get(restart_conn_key) -- if src_ip and src_ip in self.connection.get('open_conns', {}): -- open_conns = self.connection['open_conns'] -+ src_ip = self.connection.restart_conn_ip_map.get(restart_conn_key) -+ if src_ip and src_ip in self.connection.open_conns: -+ open_conns = self.connection.open_conns - if open_conns[src_ip] > 1: - open_conns[src_ip] -= 1 - else: -@@ -1467,7 +1541,7 @@ class logAnalyser: - - # Handle latency and disconnect times - if self.verbose: -- start_time = self.connection['start_time'].get(conn_id, None) -+ start_time = self.connection.start_time[conn_id] - finish_time = groups.get('timestamp') - if start_time and timestamp: - latency = self.get_elapsed_time(start_time, finish_time, "seconds") -@@ -1475,28 +1549,29 @@ class logAnalyser: - LATENCY_GROUPS[bucket] += 1 - - # Reset start time for the connection -- self.connection['start_time'][conn_id] = None -+ self.connection.start_time[conn_id] = None - - # Update connection stats -- self.connection['sim_conn_ctr'] = self.connection.get('sim_conn_ctr', 0) - 1 -- self.connection['fd_returned_ctr'] = ( -- self.connection.get('fd_returned_ctr', 0) + 1 -- ) -+ self.connection.counters['sim_conn'] -= 1 -+ self.connection.counters['fd_returned'] += 1 - - # Track error and disconnect codes if provided - if error_code is not None: - error_type = DISCONNECT_ERRORS.get(error_code, 'unknown') - if disconnect_code is not None: - # Increment the count for the specific error and disconnect code -- error_map = self.connection.setdefault(error_type, {}) -- error_map[disconnect_code] = error_map.get(disconnect_code, 0) + 1 -+ # error_map = self.connection.setdefault(error_type, {}) -+ # error_map[disconnect_code] = error_map.get(disconnect_code, 0) + 1 -+ self.connection[error_type][disconnect_code] += 1 - - # Handle disconnect code and update stats - if disconnect_code is not None: -- self.connection['disconnect_code'][disconnect_code] = ( -- self.connection['disconnect_code'].get(disconnect_code, 0) + 1 -+ self.connection.disconnect_code[disconnect_code] = ( -+ self.connection.disconnect_code.get(disconnect_code, 0) + 1 - ) -- self.connection['disconnect_code_map'][restart_conn_key] = disconnect_code -+ self.connection.restart_conn_disconnect_map[restart_conn_key] = disconnect_code -+ -+ self.logger.debug(f"_process_disconnect_stats - End") - - def _group_latencies(self, latency_seconds: int): - """ -@@ -1530,49 +1605,57 @@ class logAnalyser: - Args: - groups (dict): A dictionary containing operation information. Expected keys: - - 'conn_id': Connection identifier. -- - 'restart_ctr': Server restart count. - - 'op_id': Operation identifier. - - Raises: - KeyError: If required keys are missing in the `groups` dictionary. 
- """ -+ self.logger.debug(f"_process_crud_stats - Start - {groups}") - try: - conn_id = groups.get('conn_id') -- restart_ctr = groups.get('restart_ctr') - op_type = groups.get('op_type') -- internal = groups.get('internal') - except KeyError as e: - self.logger.error(f"Missing key in groups: {e}") - return - - # Create a tracking key for this entry -+ restart_ctr = self.server.counters['restart'] - restart_conn_key = (restart_ctr, conn_id) - - # Should we ignore this operation -- if restart_conn_key in self.connection['exclude_ip_map']: -+ if restart_conn_key in self.connection.exclude_ip: - return None - -- self.operation['all_op_ctr'] = self.operation.get('all_op_ctr', 0) + 1 -+ self.operation.counters['total'] = self.operation.counters['total'] + 1 - - # Use operation type as key for stats - if op_type is not None: - op_key = op_type.lower() -- self.operation[f"{op_key}_op_ctr"] = self.operation.get(f"{op_key}_op_ctr", 0) + 1 -- self.operation[f"{op_key}_map_rco"][restart_conn_key] = ( -- self.operation[f"{op_key}_map_rco"].get(restart_conn_key, 0) + 1 -- ) -+ if op_key not in self.operation.counters: -+ self.operation.counters[op_key] = 0 -+ if op_key not in self.operation.rst_con_op_map: -+ self.operation.rst_con_op_map[op_key] = {} -+ -+ # Increment op type counter -+ self.operation.counters[op_key] += 1 -+ -+ # Increment the op type map counter -+ current_count = self.operation.rst_con_op_map[op_key].get(restart_conn_key, 0) -+ self.operation.rst_con_op_map[op_key][restart_conn_key] = current_count + 1 - - # If the conn_id is associated with this DN, update op counter -- for dn in self.bind['report_dn']: -- conns = self.bind['report_dn'][dn]['conn'] -+ for dn in self.bind.report_dn: -+ conns = self.bind.report_dn[dn]['conn'] - if conn_id in conns: - bind_dn_key = self._report_dn_key(dn, self.report_dn) - if bind_dn_key: -- self.bind['report_dn'][bind_dn_key][op_key] = self.bind['report_dn'][bind_dn_key].get(op_key, 0) + 1 -+ self.bind.report_dn[bind_dn_key][op_key] = self.bind.report_dn[bind_dn_key].get(op_key, 0) + 1 - - # Authorization identity - if groups['authzid_dn'] is not None: -- self.operation['authzid'] = self.operation.get('authzid', 0) + 1 -+ self.operation.counters['authzid'] += 1 -+ -+ self.logger.debug(f"_process_crud_stats - End") - - def _process_entry_referral_stats(self, groups: dict): - """ -@@ -1581,33 +1664,36 @@ class logAnalyser: - Args: - groups (dict): A dictionary containing operation information. Expected keys: - - 'conn_id': Connection identifier. -- - 'restart_ctr': Server restart count. - - 'op_id': Operation identifier. - - Raises: - KeyError: If required keys are missing in the `groups` dictionary. 
- """ -+ self.logger.debug(f"_process_entry_referral_stats - Start - {groups}") -+ - try: - conn_id = groups.get('conn_id') -- restart_ctr = groups.get('restart_ctr') - op_type = groups.get('op_type') - except KeyError as e: - self.logger.error(f"Missing key in groups: {e}") - return - - # Create a tracking key for this entry -+ restart_ctr = self.server.counters['restart'] - restart_conn_key = (restart_ctr, conn_id) - - # Should we ignore this operation -- if restart_conn_key in self.connection['exclude_ip_map']: -+ if restart_conn_key in self.connection.exclude_ip: - return None - - # Process operation type - if op_type is not None: - if op_type == 'ENTRY': -- self.result['entry_count'] = self.result.get('entry_count', 0) + 1 -+ self.result.counters['entry'] += 1 - elif op_type == 'REFERRAL': -- self.result['referral_count'] = self.result.get('referral_count', 0) + 1 -+ self.result.counters['referral'] += 1 -+ -+ self.logger.debug(f"_process_entry_referral_stats - End") - - def _process_and_write_stats(self, norm_timestamp: str, bytes_read: int): - """ -@@ -1620,6 +1706,7 @@ class logAnalyser: - Returns: - None - """ -+ self.logger.debug(f"_process_and_write_stats - Start") - - if self.csv_writer is None: - self.logger.error("CSV writer not enabled.") -@@ -1627,23 +1714,23 @@ class logAnalyser: - - # Define the stat mapping - stats = { -- 'result_ctr': self.result, -- 'search_ctr': self.search, -- 'add_op_ctr': self.operation, -- 'mod_op_ctr': self.operation, -- 'modrdn_op_ctr': self.operation, -- 'cmp_op_ctr': self.operation, -- 'del_op_ctr': self.operation, -- 'abandon_op_ctr': self.operation, -- 'conn_ctr': self.connection, -- 'ldaps_ctr': self.connection, -- 'bind_ctr': self.bind, -- 'anon_bind_ctr': self.bind, -- 'unbind_ctr': self.bind, -- 'notesA_ctr': self.result, -- 'notesU_ctr': self.result, -- 'notesF_ctr': self.result, -- 'etime_stat': self.result -+ 'result': self.result.counters, -+ 'search': self.search.counters, -+ 'add': self.operation.counters, -+ 'mod': self.operation.counters, -+ 'modrdn': self.operation.counters, -+ 'cmp': self.operation.counters, -+ 'del': self.operation.counters, -+ 'abandon': self.operation.counters, -+ 'conn': self.connection.counters, -+ 'ldaps': self.connection.counters, -+ 'bind': self.bind.counters, -+ 'anon': self.bind.counters, -+ 'unbind': self.bind.counters, -+ 'notesA': self.result.counters, -+ 'notesU': self.result.counters, -+ 'notesF': self.result.counters, -+ 'etime_stat': self.result.counters - } - - # Build the current stat block -@@ -1675,8 +1762,7 @@ class logAnalyser: - - # out_stat_block[0] = self._convert_datetime_to_timestamp(out_stat_block[0]) - self.csv_writer.writerow(out_stat_block) -- -- self.result['etime_stat'] = 0.0 -+ self.result.etime_stat = 0.0 - - # Update previous stats for the next interval - self.prev_stats = curr_stat_block -@@ -1705,7 +1791,9 @@ class logAnalyser: - # Write the stat block to csv and reset elapsed time for the next interval - # out_stat_block[0] = self._convert_datetime_to_timestamp(out_stat_block[0]) - self.csv_writer.writerow(out_stat_block) -- self.result['etime_stat'] = 0.0 -+ self.result.etime_stat = 0.0 -+ -+ self.logger.debug(f"_process_and_write_stats - End") - - def process_file(self, log_num: str, filepath: str): - """ -@@ -2254,45 +2342,45 @@ def main(): - sys.exit(1) - - # Prep for display -- elapsed_time = db.get_elapsed_time(db.server['first_time'], db.server['last_time'], "hms") -- elapsed_secs = db.get_elapsed_time(db.server['first_time'], db.server['last_time'], "seconds") -- 
num_ops = db.operation.get('all_op_ctr', 0) -- num_results = db.result.get('result_ctr', 0) -- num_conns = db.connection.get('conn_ctr', 0) -- num_ldap = db.connection.get('ldap_ctr', 0) -- num_ldapi = db.connection.get('ldapi_ctr', 0) -- num_ldaps = db.connection.get('ldaps_ctr', 0) -- num_startls = db.operation['extop_dict'].get(STLS_OID, 0) -- num_search = db.search.get('search_ctr', 0) -- num_mod = db.operation.get('mod_op_ctr', 0) -- num_add = db.operation.get('add_op_ctr', 0) -- num_del = db.operation.get('del_op_ctr', 0) -- num_modrdn = db.operation.get('modrdn_op_ctr', 0) -- num_cmp = db.operation.get('cmp_op_ctr', 0) -- num_bind = db.bind.get('bind_ctr', 0) -- num_unbind = db.bind.get('unbind_ctr', 0) -- num_proxyd_auths = db.operation.get('authzid', 0) + db.search.get('authzid', 0) -- num_time_count = db.result.get('timestamp_ctr') -+ elapsed_time = db.get_elapsed_time(db.server.first_time, db.server.last_time, "hms") -+ elapsed_secs = db.get_elapsed_time(db.server.first_time, db.server.last_time, "seconds") -+ num_ops = db.operation.counters['total'] -+ num_results = db.result.counters['result'] -+ num_conns = db.connection.counters['conn'] -+ num_ldap = db.connection.counters['ldap'] -+ num_ldapi = db.connection.counters['ldapi'] -+ num_ldaps = db.connection.counters['ldaps'] -+ num_startls = db.operation.extended.get(STLS_OID, 0) -+ num_search = db.search.counters['search'] -+ num_mod = db.operation.counters['mod'] -+ num_add = db.operation.counters['add'] -+ num_del = db.operation.counters['del'] -+ num_modrdn = db.operation.counters['modrdn'] -+ num_cmp = db.operation.counters['cmp'] -+ num_bind = db.bind.counters['bind'] -+ num_unbind = db.bind.counters['unbind'] -+ num_proxyd_auths = db.operation.counters['authzid'] + db.search.counters['authzid'] -+ num_time_count = db.result.counters['timestamp'] - if num_time_count: -- avg_wtime = round(db.result.get('total_wtime', 0)/num_time_count, 9) -- avg_optime = round(db.result.get('total_optime', 0)/num_time_count, 9) -- avg_etime = round(db.result.get('total_etime', 0)/num_time_count, 9) -- num_fd_taken = db.connection.get('fd_taken_ctr', 0) -- num_fd_rtn = db.connection.get('fd_returned_ctr', 0) -- -- num_DM_binds = db.bind.get('rootdn_bind_ctr', 0) -- num_base_search = db.search.get('base_search_ctr', 0) -+ avg_wtime = round(db.result.total_wtime/num_time_count, 9) -+ avg_optime = round(db.result.total_optime/num_time_count, 9) -+ avg_etime = round(db.result.total_etime/num_time_count, 9) -+ num_fd_taken = db.connection.counters['fd_taken'] -+ num_fd_rtn = db.connection.counters['fd_returned'] -+ -+ num_DM_binds = db.bind.counters['rootdn'] -+ num_base_search = db.search.counters['base_search'] - try: -- log_start_time = db.convert_timestamp_to_string(db.server.get('first_time', "")) -+ log_start_time = db.convert_timestamp_to_string(db.server.first_time) - except ValueError: - log_start_time = "Unknown" - - try: -- log_end_time = db.convert_timestamp_to_string(db.server.get('last_time', "")) -+ log_end_time = db.convert_timestamp_to_string(db.server.last_time) - except ValueError: - log_end_time = "Unknown" - -- print(f"\n\nTotal Log Lines Analysed:{db.server['lines_parsed']}\n") -+ print(f"\n\nTotal Log Lines Analysed:{db.server.counters['lines_parsed']}\n") - print("\n----------- Access Log Output ------------\n") - print(f"Start of Logs: {log_start_time}") - print(f"End of Logs: {log_end_time}") -@@ -2302,12 +2390,12 @@ def main(): - db.display_bind_report() - sys.exit(1) - -- print(f"\nRestarts: 
{db.server.get('restart_ctr', 0)}") -- if db.auth.get('cipher_ctr', 0) > 0: -+ print(f"\nRestarts: {db.server.counters['restart']}") -+ if db.auth.counters['cipher_ctr'] > 0: - print(f"Secure Protocol Versions:") - # Group data by protocol + version + unique message - grouped_data = defaultdict(lambda: {'count': 0, 'messages': set()}) -- for _, details in db.auth['auth_info'].items(): -+ for _, details in db.auth.auth_info.items(): - # If there is no protocol version - if details['version']: - proto_version = f"{details['proto']}{details['version']}" -@@ -2323,7 +2411,7 @@ def main(): - for ((proto_version, message), data) in grouped_data.items(): - print(f" - {proto_version} {message} ({data['count']} connection{'s' if data['count'] > 1 else ''})") - -- print(f"Peak Concurrent connections: {db.connection.get('max_sim_conn_ctr', 0)}") -+ print(f"Peak Concurrent connections: {db.connection.counters['max_sim_conn']}") - print(f"Total Operations: {num_ops}") - print(f"Total Results: {num_results}") - print(f"Overall Performance: {db.get_overall_perf(num_results, num_ops)}%") -@@ -2344,111 +2432,112 @@ def main(): - print(f"\nAverage wtime (wait time): {avg_wtime:.9f}") - print(f"Average optime (op time): {avg_optime:.9f}") - print(f"Average etime (elapsed time): {avg_etime:.9f}") -- print(f"\nMulti-factor Authentications: {db.result.get('notesM_ctr', 0)}") -+ print(f"\nMulti-factor Authentications: {db.result.counters['notesM']}") - print(f"Proxied Auth Operations: {num_proxyd_auths}") -- print(f"Persistent Searches: {db.search.get('persistent_ctr', 0)}") -- print(f"Internal Operations: {db.server.get('internal_op_ctr', 0)}") -- print(f"Entry Operations: {db.result.get('entry_count', 0)}") -- print(f"Extended Operations: {db.operation.get('extnd_op_ctr', 0)}") -- print(f"Abandoned Requests: {db.operation.get('abandon_op_ctr', 0)}") -- print(f"Smart Referrals Received: {db.result.get('referral_count', 0)}") -- print(f"\nVLV Operations: {db.vlv.get('vlv_ctr', 0)}") -- print(f"VLV Unindexed Searches: {len([key for key, value in db.vlv['vlv_map_rco'].items() if value == 'A'])}") -- print(f"VLV Unindexed Components: {len([key for key, value in db.vlv['vlv_map_rco'].items() if value == 'U'])}") -- print(f"SORT Operations: {db.operation.get('sort_op_ctr', 0)}") -- print(f"\nEntire Search Base Queries: {db.search.get('base_search_ctr', 0)}") -- print(f"Paged Searches: {db.result.get('notesP_ctr', 0)}") -- num_unindexed_search = len(db.notesA.keys()) -+ print(f"Persistent Searches: {db.search.counters['persistent']}") -+ print(f"Internal Operations: {db.operation.counters['internal']}") -+ print(f"Entry Operations: {db.result.counters['entry']}") -+ print(f"Extended Operations: {db.operation.counters['extnd']}") -+ print(f"Abandoned Requests: {db.operation.counters['abandon']}") -+ print(f"Smart Referrals Received: {db.result.counters['referral']}") -+ print(f"\nVLV Operations: {db.vlv.counters['vlv']}") -+ print(f"VLV Unindexed Searches: {len([key for key, value in db.vlv.rst_con_op_map.items() if value == 'A'])}") -+ print(f"VLV Unindexed Components: {len([key for key, value in db.vlv.rst_con_op_map.items() if value == 'U'])}") -+ print(f"SORT Operations: {db.operation.counters['sort']}") -+ print(f"\nEntire Search Base Queries: {num_base_search}") -+ print(f"Paged Searches: {db.result.counters['notesP']}") -+ num_unindexed_search = len(db.result.notes['A']) - print(f"Unindexed Searches: {num_unindexed_search}") -- if db.verbose: -- if num_unindexed_search > 0: -- for num, key in 
enumerate(db.notesA, start=1): -- src, conn, op = key -- restart_conn_op_key = (src, conn, op) -- print(f"\nUnindexed Search #{num} (notes=A)") -- print(f" - Date/Time: {db.notesA[restart_conn_op_key]['time']}") -- print(f" - Connection Number: {conn}") -- print(f" - Operation Number: {op}") -- print(f" - Etime: {db.notesA[restart_conn_op_key]['etime']}") -- print(f" - Nentries: {db.notesA[restart_conn_op_key]['nentries']}") -- print(f" - IP Address: {db.notesA[restart_conn_op_key]['ip']}") -- print(f" - Search Base: {db.notesA[restart_conn_op_key]['base']}") -- print(f" - Search Scope: {db.notesA[restart_conn_op_key]['scope']}") -- print(f" - Search Filter: {db.notesA[restart_conn_op_key]['filter']}") -- print(f" - Bind DN: {db.notesA[restart_conn_op_key]['bind_dn']}\n") -- -- num_unindexed_component = len(db.notesU.keys()) -+ if db.verbose and num_unindexed_search > 0: -+ for num, key in enumerate(db.result.notes['A'], start=1): -+ src, conn, op = key -+ data = db.result.notes['A'][key] -+ -+ print(f"\n Unindexed Search #{num} (notes=A)") -+ print(f" - Date/Time: {data.get('time', '-')}") -+ print(f" - Connection Number: {conn}") -+ print(f" - Operation Number: {op}") -+ print(f" - Etime: {data.get('etime', '-')}") -+ print(f" - Nentries: {data.get('nentries', 0)}") -+ print(f" - IP Address: {data.get('ip', '-')}") -+ print(f" - Search Base: {data.get('base', '-')}") -+ print(f" - Search Scope: {data.get('scope', '-')}") -+ print(f" - Search Filter: {data.get('filter', '-')}") -+ print(f" - Bind DN: {data.get('bind_dn', '-')}\n") -+ -+ num_unindexed_component = len(db.result.notes['U']) - print(f"Unindexed Components: {num_unindexed_component}") -- if db.verbose: -- if num_unindexed_component > 0: -- for num, key in enumerate(db.notesU, start=1): -- src, conn, op = key -- restart_conn_op_key = (src, conn, op) -- print(f"\nUnindexed Component #{num} (notes=U)") -- print(f" - Date/Time: {db.notesU[restart_conn_op_key]['time']}") -- print(f" - Connection Number: {conn}") -- print(f" - Operation Number: {op}") -- print(f" - Etime: {db.notesU[restart_conn_op_key]['etime']}") -- print(f" - Nentries: {db.notesU[restart_conn_op_key]['nentries']}") -- print(f" - IP Address: {db.notesU[restart_conn_op_key]['ip']}") -- print(f" - Search Base: {db.notesU[restart_conn_op_key]['base']}") -- print(f" - Search Scope: {db.notesU[restart_conn_op_key]['scope']}") -- print(f" - Search Filter: {db.notesU[restart_conn_op_key]['filter']}") -- print(f" - Bind DN: {db.notesU[restart_conn_op_key]['bind_dn']}\n") -- -- num_invalid_filter = len(db.notesF.keys()) -+ if db.verbose and num_unindexed_component > 0: -+ for num, key in enumerate(db.result.notes['U'], start=1): -+ src, conn, op = key -+ data = db.result.notes['U'][key] -+ -+ print(f"\n Unindexed Component #{num} (notes=U)") -+ print(f" - Date/Time: {data.get('time', '-')}") -+ print(f" - Connection Number: {conn}") -+ print(f" - Operation Number: {op}") -+ print(f" - Etime: {data.get('etime', '-')}") -+ print(f" - Nentries: {data.get('nentries', 0)}") -+ print(f" - IP Address: {data.get('ip', '-')}") -+ print(f" - Search Base: {data.get('base', '-')}") -+ print(f" - Search Scope: {data.get('scope', '-')}") -+ print(f" - Search Filter: {data.get('filter', '-')}") -+ print(f" - Bind DN: {data.get('bind_dn', '-')}\n") -+ -+ num_invalid_filter = len(db.result.notes['F']) - print(f"Invalid Attribute Filters: {num_invalid_filter}") -- if db.verbose: -- if num_invalid_filter > 0: -- for num, key in enumerate(db.notesF, start=1): -- src, conn, op = key -- 
restart_conn_op_key = (src, conn, op) -- print(f"\nInvalid Attribute Filter #{num} (notes=F)") -- print(f" - Date/Time: {db.notesF[restart_conn_op_key]['time']}") -- print(f" - Connection Number: {conn}") -- print(f" - Operation Number: {op}") -- print(f" - Etime: {db.notesF[restart_conn_op_key]['etime']}") -- print(f" - Nentries: {db.notesF[restart_conn_op_key]['nentries']}") -- print(f" - IP Address: {db.notesF[restart_conn_op_key]['ip']}") -- print(f" - Search Filter: {db.notesF[restart_conn_op_key]['filter']}") -- print(f" - Bind DN: {db.notesF[restart_conn_op_key]['bind_dn']}\n") -+ if db.verbose and num_invalid_filter > 0: -+ for num, key in enumerate(db.result.notes['F'], start=1): -+ src, conn, op = key -+ data = db.result.notes['F'][key] -+ -+ print(f"\n Invalid Attribute Filter #{num} (notes=F)") -+ print(f" - Date/Time: {data.get('time', '-')}") -+ print(f" - Connection Number: {conn}") -+ print(f" - Operation Number: {op}") -+ print(f" - Etime: {data.get('etime', '-')}") -+ print(f" - Nentries: {data.get('nentries', 0)}") -+ print(f" - IP Address: {data.get('ip', '-')}") -+ print(f" - Search Filter: {data.get('filter', '-')}") -+ print(f" - Bind DN: {data.get('bind_dn', '-')}\n") -+ - print(f"FDs Taken: {num_fd_taken}") - print(f"FDs Returned: {num_fd_rtn}") -- print(f"Highest FD Taken: {db.connection.get('fd_max_ctr', 0)}\n") -- num_broken_pipe = len(db.connection['broken_pipe']) -+ print(f"Highest FD Taken: {db.connection.counters['fd_max']}\n") -+ num_broken_pipe = len(db.connection.broken_pipe) - print(f"Broken Pipes: {num_broken_pipe}") - if num_broken_pipe > 0: -- for code, count in db.connection['broken_pipe'].items(): -+ for code, count in db.connection.broken_pipe.items(): - print(f" - {count} ({code}) {DISCONNECT_MSG.get(code, 'unknown')}") - print() -- num_reset_peer = len(db.connection['connection_reset']) -+ num_reset_peer = len(db.connection.connection_reset) - print(f"Connection Reset By Peer: {num_reset_peer}") - if num_reset_peer > 0: -- for code, count in db.connection['connection_reset'].items(): -+ for code, count in db.connection.connection_reset.items(): - print(f" - {count} ({code}) {DISCONNECT_MSG.get(code, 'unknown')}") - print() -- num_resource_unavail = len(db.connection['resource_unavail']) -+ num_resource_unavail = len(db.connection.resource_unavail) - print(f"Resource Unavailable: {num_resource_unavail}") - if num_resource_unavail > 0: -- for code, count in db.connection['resource_unavail'].items(): -+ for code, count in db.connection.resource_unavail.items(): - print(f" - {count} ({code}) {DISCONNECT_MSG.get(code, 'unknown')}") - print() -- print(f"Max BER Size Exceeded: {db.connection['disconnect_code'].get('B2', 0)}\n") -- print(f"Binds: {db.bind.get('bind_ctr', 0)}") -- print(f"Unbinds: {db.bind.get('unbind_ctr', 0)}") -+ print(f"Max BER Size Exceeded: {db.connection.disconnect_code.get('B2', 0)}\n") -+ print(f"Binds: {db.bind.counters['bind']}") -+ print(f"Unbinds: {db.bind.counters['unbind']}") - print(f"----------------------------------") -- print(f"- LDAP v2 Binds: {db.bind.get('version', {}).get('2', 0)}") -- print(f"- LDAP v3 Binds: {db.bind.get('version', {}).get('3', 0)}") -- print(f"- AUTOBINDs(LDAPI): {db.bind.get('autobind_ctr', 0)}") -- print(f"- SSL Client Binds {db.auth.get('ssl_client_bind_ctr', 0)}") -- print(f"- Failed SSL Client Binds: {db.auth.get('ssl_client_bind_failed_ctr', 0)}") -- print(f"- SASL Binds: {db.bind.get('sasl_bind_ctr', 0)}") -- if db.bind.get('sasl_bind_ctr', 0) > 0: -- saslmech = db.bind['sasl_mech_freq'] 
-+ print(f"- LDAP v2 Binds: {db.bind.version.get('2', 0)}") -+ print(f"- LDAP v3 Binds: {db.bind.version.get('3', 0)}") -+ print(f"- AUTOBINDs(LDAPI): {db.bind.counters['autobind']}") -+ print(f"- SSL Client Binds {db.auth.counters['ssl_client_bind_ctr']}") -+ print(f"- Failed SSL Client Binds: {db.auth.counters['ssl_client_bind_failed_ctr']}") -+ print(f"- SASL Binds: {db.bind.counters['sasl']}") -+ if db.bind.counters['sasl'] > 0: -+ saslmech = db.bind.sasl_mech - for saslb in sorted(saslmech.keys(), key=lambda k: saslmech[k], reverse=True): - print(f" - {saslb:<4}: {saslmech[saslb]}") - print(f"- Directory Manager Binds: {num_DM_binds}") -- print(f"- Anonymous Binds: {db.bind.get('anon_bind_ctr', 0)}\n") -+ print(f"- Anonymous Binds: {db.bind.counters['anon']}\n") - if db.verbose: - # Connection Latency - print(f"\n ----- Connection Latency Details -----\n") -@@ -2465,7 +2554,7 @@ def main(): - f"{LATENCY_GROUPS['> 15']:^7}") - - # Open Connections -- open_conns = db.connection['open_conns'] -+ open_conns = db.connection.open_conns - if len(open_conns) > 0: - print(f"\n ----- Current Open Connection IDs -----\n") - for conn in sorted(open_conns.keys(), key=lambda k: open_conns[k], reverse=True): -@@ -2473,12 +2562,12 @@ def main(): - - # Error Codes - print(f"\n----- Errors -----\n") -- error_freq = db.result['error_freq'] -+ error_freq = db.result.error_freq - for err in sorted(error_freq.keys(), key=lambda k: error_freq[k], reverse=True): - print(f"err={err:<2} {error_freq[err]:>10} {LDAP_ERR_CODES[err]:<30}") - - # Failed Logins -- bad_pwd_map = db.result['bad_pwd_map'] -+ bad_pwd_map = db.result.bad_pwd_map - bad_pwd_map_len = len(bad_pwd_map) - if bad_pwd_map_len > 0: - print(f"\n----- Top {db.size_limit} Failed Logins ------\n") -@@ -2495,27 +2584,27 @@ def main(): - print(f"{count:<10} {ip}") - - # Connection Codes -- disconnect_codes = db.connection['disconnect_code'] -+ disconnect_codes = db.connection.disconnect_code - if len(disconnect_codes) > 0: - print(f"\n----- Total Connection Codes ----\n") - for code in disconnect_codes: - print(f"{code:<2} {disconnect_codes[code]:>10} {DISCONNECT_MSG.get(code, 'unknown'):<30}") - - # Unique IPs -- restart_conn_ip_map = db.connection['restart_conn_ip_map'] -- ip_map = db.connection['ip_map'] -- ips_len = len(ip_map) -+ restart_conn_ip_map = db.connection.restart_conn_ip_map -+ src_ip_map = db.connection.src_ip_map -+ ips_len = len(src_ip_map) - if ips_len > 0: - print(f"\n----- Top {db.size_limit} Clients -----\n") - print(f"Number of Clients: {ips_len}") -- for num, (outer_ip, ip_info) in enumerate(ip_map.items(), start=1): -+ for num, (outer_ip, ip_info) in enumerate(src_ip_map.items(), start=1): - temp = {} - print(f"\n[{num}] Client: {outer_ip}") - print(f" {ip_info['count']} - Connection{'s' if ip_info['count'] > 1 else ''}") - for id, inner_ip in restart_conn_ip_map.items(): - (src, conn) = id - if outer_ip == inner_ip: -- code = db.connection['disconnect_code_map'].get((src, conn), 0) -+ code = db.connection.restart_conn_disconnect_map[(src, conn)] - if code: - temp[code] = temp.get(code, 0) + 1 - for code, count in temp.items(): -@@ -2524,7 +2613,7 @@ def main(): - break - - # Unique Bind DN's -- binds = db.bind.get('dn_freq', 0) -+ binds = db.bind.dns - binds_len = len(binds) - if binds_len > 0: - print(f"\n----- Top {db.size_limit} Bind DN's ----\n") -@@ -2532,10 +2621,10 @@ def main(): - for num, bind in enumerate(sorted(binds.keys(), key=lambda k: binds[k], reverse=True)): - if num >= db.size_limit: - break -- 
print(f"{db.bind['dn_freq'][bind]:<10} {bind:<30}") -+ print(f"{db.bind.dns[bind]:<10} {bind:<30}") - - # Unique search bases -- bases = db.search['base_map'] -+ bases = db.search.bases - num_bases = len(bases) - if num_bases > 0: - print(f"\n----- Top {db.size_limit} Search Bases -----\n") -@@ -2543,10 +2632,10 @@ def main(): - for num, base in enumerate(sorted(bases.keys(), key=lambda k: bases[k], reverse=True)): - if num >= db.size_limit: - break -- print(f"{db.search['base_map'][base]:<10} {base}") -+ print(f"{db.search.bases[base]:<10} {base}") - - # Unique search filters -- filters = sorted(db.search['filter_list'], reverse=True) -+ filters = sorted(db.search.filter_list, reverse=True) - num_filters = len(filters) - if num_filters > 0: - print(f"\n----- Top {db.size_limit} Search Filters -----\n") -@@ -2556,7 +2645,7 @@ def main(): - print(f"{count:<10} {filter}") - - # Longest elapsed times -- etimes = sorted(db.result['etime_duration'], reverse=True) -+ etimes = sorted(db.result.etime_duration, reverse=True) - num_etimes = len(etimes) - if num_etimes > 0: - print(f"\n----- Top {db.size_limit} Longest etimes (elapsed times) -----\n") -@@ -2566,7 +2655,7 @@ def main(): - print(f"etime={etime:<12}") - - # Longest wait times -- wtimes = sorted(db.result['wtime_duration'], reverse=True) -+ wtimes = sorted(db.result.wtime_duration, reverse=True) - num_wtimes = len(wtimes) - if num_wtimes > 0: - print(f"\n----- Top {db.size_limit} Longest wtimes (wait times) -----\n") -@@ -2576,7 +2665,7 @@ def main(): - print(f"wtime={wtime:<12}") - - # Longest operation times -- optimes = sorted(db.result['optime_duration'], reverse=True) -+ optimes = sorted(db.result.optime_duration, reverse=True) - num_optimes = len(optimes) - if num_optimes > 0: - print(f"\n----- Top {db.size_limit} Longest optimes (actual operation times) -----\n") -@@ -2586,7 +2675,7 @@ def main(): - print(f"optime={optime:<12}") - - # Largest nentries returned -- nentries = sorted(db.result['nentries_num'], reverse=True) -+ nentries = sorted(db.result.nentries_num, reverse=True) - num_nentries = len(nentries) - if num_nentries > 0: - print(f"\n----- Top {db.size_limit} Largest nentries -----\n") -@@ -2597,7 +2686,7 @@ def main(): - print() - - # Extended operations -- oids = db.operation['extop_dict'] -+ oids = db.operation.extended - num_oids = len(oids) - if num_oids > 0: - print(f"\n----- Top {db.size_limit} Extended Operations -----\n") -@@ -2607,7 +2696,7 @@ def main(): - print(f"{oids[oid]:<12} {oid:<30} {OID_MSG.get(oid, 'Other'):<60}") - - # Commonly requested attributes -- attrs = db.search['attr_dict'] -+ attrs = db.search.attrs - num_nattrs = len(attrs) - if num_nattrs > 0: - print(f"\n----- Top {db.size_limit} Most Requested Attributes -----\n") -@@ -2617,14 +2706,14 @@ def main(): - print(f"{attrs[attr]:<11} {attr:<10}") - print() - -- abandoned = db.operation['abandoned_map_rco'] -+ abandoned = db.operation.rst_con_op_map['abandon'] - num_abandoned = len(abandoned) - if num_abandoned > 0: - print(f"\n----- Abandon Request Stats -----\n") - for num, abandon in enumerate(abandoned, start=1): - (restart, conn, op) = abandon -- conn, op, target_op, msgid = db.operation['abandoned_map_rco'][(restart, conn, op)] -- print(f"{num:<6} conn={conn} op={op} msgid={msgid} target_op:{target_op} client={db.connection['restart_conn_ip_map'].get((restart, conn), 'Unknown')}") -+ conn, op, target_op, msgid = db.operation.rst_con_op_map['abandoned'][(restart, conn, op)] -+ print(f"{num:<6} conn={conn} op={op} msgid={msgid} 
target_op:{target_op} client={db.connection.restart_conn_ip_map.get((restart, conn), 'Unknown')}") - print() - - if db.recommends or db.verbose: -@@ -2639,15 +2728,15 @@ def main(): - print(f"\n {rec_count}. You have unindexed components. This can be caused by a search on an unindexed attribute or by returned results exceeding the nsslapd-idlistscanlimit. Unindexed components are not recommended. To refuse unindexed searches, set 'nsslapd-require-index' to 'on' under your database entry (e.g. cn=UserRoot,cn=ldbm database,cn=plugins,cn=config).\n") - rec_count += 1 - -- if db.connection['disconnect_code'].get('T1', 0) > 0: -+ if db.connection.disconnect_code.get('T1', 0) > 0: - print(f"\n {rec_count}. You have some connections being closed by the idletimeout setting. You may want to increase the idletimeout if it is set low.\n") - rec_count += 1 - -- if db.connection['disconnect_code'].get('T2', 0) > 0: -+ if db.connection.disconnect_code.get('T2', 0) > 0: - print(f"\n {rec_count}. You have some connections being closed by the ioblocktimeout setting. You may want to increase the ioblocktimeout.\n") - rec_count += 1 - -- if db.connection['disconnect_code'].get('T3', 0) > 0: -+ if db.connection.disconnect_code.get('T3', 0) > 0: - print(f"\n {rec_count}. You have some connections being closed because a paged result search limit has been exceeded. You may want to increase the search time limit.\n") - rec_count += 1 - -@@ -2663,29 +2752,29 @@ def main(): - print(f"\n {rec_count}. You have a high number of Directory Manager binds. The Directory Manager account should only be used under certain circumstances. Avoid using this account for client applications.\n") - rec_count += 1 - -- num_success = db.result['error_freq'].get('0', 0) -- num_err = sum(v for k, v in db.result['error_freq'].items() if k != '0') -+ num_success = db.result.error_freq.get('0', 0) -+ num_err = sum(v for k, v in db.result.error_freq.items() if k != '0') - if num_err > num_success: - print(f"\n {rec_count}. You have more unsuccessful operations than successful operations. You should investigate this difference.\n") - rec_count += 1 - -- num_close_clean = db.connection['disconnect_code'].get('U1', 0) -- num_close_total = num_err = sum(v for k, v in db.connection['disconnect_code'].items()) -+ num_close_clean = db.connection.disconnect_code.get('U1', 0) -+ num_close_total = num_err = sum(v for k, v in db.connection.disconnect_code.items()) - if num_close_clean < (num_close_total - num_close_clean): - print(f"\n {rec_count}. You have more abnormal connection codes than cleanly closed connections. You may want to investigate this difference.\n") - rec_count += 1 - - if num_time_count: -- if round(avg_etime, 1) > 0: -- print(f"\n {rec_count}. Your average etime is {avg_etime:.1f}. You may want to investigate this performance problem.\n") -+ if round(avg_etime, 9) > 0: -+ print(f"\n {rec_count}. Your average etime is {avg_etime:.9f}. You may want to investigate this performance problem.\n") - rec_count += 1 - -- if round(avg_wtime, 1) > 0.5: -- print(f"\n {rec_count}. Your average wtime is {avg_wtime:.1f}. You may need to increase the number of worker threads (nsslapd-threadnumber).\n") -+ if round(avg_wtime, 9) > 0.5: -+ print(f"\n {rec_count}. Your average wtime is {avg_wtime:.9f}. You may need to increase the number of worker threads (nsslapd-threadnumber).\n") - rec_count += 1 - -- if round(avg_optime, 1) > 0: -- print(f"\n {rec_count}. Your average optime is {avg_optime:.1f}. 
You may want to investigate this performance problem.\n") -+ print(f"\n {rec_count}. Your average optime is {avg_optime:.9f}. You may want to investigate this performance problem.\n") - rec_count += 1 - - if num_base_search > (num_search * 0.25): -@@ -2699,4 +2788,4 @@ def main(): - - - if __name__ == "__main__": -- main() -+ main() -\ No newline at end of file --- -2.49.0 - diff --git a/0011-Issue-6850-AddressSanitizer-memory-leak-in-mdb_init.patch b/0011-Issue-6850-AddressSanitizer-memory-leak-in-mdb_init.patch deleted file mode 100644 index 71cfad2..0000000 --- a/0011-Issue-6850-AddressSanitizer-memory-leak-in-mdb_init.patch +++ /dev/null @@ -1,65 +0,0 @@ -From ad2a06cb64156c55d81b7a1647f9bec7071df9f4 Mon Sep 17 00:00:00 2001 -From: Viktor Ashirov -Date: Mon, 7 Jul 2025 23:11:17 +0200 -Subject: [PATCH] Issue 6850 - AddressSanitizer: memory leak in mdb_init - -Bug Description: -`dbmdb_componentid` can be allocated multiple times. To avoid a memory -leak, allocate it only once, and free it at cleanup. - -Fixes: https://github.com/389ds/389-ds-base/issues/6850 - -Reviewed by: @mreynolds389, @tbordaz (Thanks!) ---- - ldap/servers/slapd/back-ldbm/db-mdb/mdb_config.c | 4 +++- - ldap/servers/slapd/back-ldbm/db-mdb/mdb_layer.c | 2 +- - ldap/servers/slapd/back-ldbm/db-mdb/mdb_misc.c | 5 +++++ - 3 files changed, 9 insertions(+), 2 deletions(-) - -diff --git a/ldap/servers/slapd/back-ldbm/db-mdb/mdb_config.c b/ldap/servers/slapd/back-ldbm/db-mdb/mdb_config.c -index 447f3c70a..54ca03b0b 100644 ---- a/ldap/servers/slapd/back-ldbm/db-mdb/mdb_config.c -+++ b/ldap/servers/slapd/back-ldbm/db-mdb/mdb_config.c -@@ -146,7 +146,9 @@ dbmdb_compute_limits(struct ldbminfo *li) - int mdb_init(struct ldbminfo *li, config_info *config_array) - { - dbmdb_ctx_t *conf = (dbmdb_ctx_t *)slapi_ch_calloc(1, sizeof(dbmdb_ctx_t)); -- dbmdb_componentid = generate_componentid(NULL, "db-mdb"); -+ if (dbmdb_componentid == NULL) { -+ dbmdb_componentid = generate_componentid(NULL, "db-mdb"); -+ } - - li->li_dblayer_config = conf; - strncpy(conf->home, li->li_directory, MAXPATHLEN-1); -diff --git a/ldap/servers/slapd/back-ldbm/db-mdb/mdb_layer.c b/ldap/servers/slapd/back-ldbm/db-mdb/mdb_layer.c -index c4e87987f..ed17f979f 100644 ---- a/ldap/servers/slapd/back-ldbm/db-mdb/mdb_layer.c -+++ b/ldap/servers/slapd/back-ldbm/db-mdb/mdb_layer.c -@@ -19,7 +19,7 @@ - #include - #include - --Slapi_ComponentId *dbmdb_componentid; -+Slapi_ComponentId *dbmdb_componentid = NULL; - - #define BULKOP_MAX_RECORDS 100 /* Max records handled by a single bulk operations */ - -diff --git a/ldap/servers/slapd/back-ldbm/db-mdb/mdb_misc.c b/ldap/servers/slapd/back-ldbm/db-mdb/mdb_misc.c -index 2d07db9b5..ae10ac7cf 100644 ---- a/ldap/servers/slapd/back-ldbm/db-mdb/mdb_misc.c -+++ b/ldap/servers/slapd/back-ldbm/db-mdb/mdb_misc.c -@@ -49,6 +49,11 @@ dbmdb_cleanup(struct ldbminfo *li) - } - slapi_ch_free((void **)&(li->li_dblayer_config)); - -+ if (dbmdb_componentid != NULL) { -+ release_componentid(dbmdb_componentid); -+ dbmdb_componentid = NULL; -+ } -+ - return 0; - } - --- -2.49.0 - diff --git a/0012-Issue-6848-AddressSanitizer-leak-in-do_search.patch b/0012-Issue-6848-AddressSanitizer-leak-in-do_search.patch deleted file mode 100644 index ab91a04..0000000 --- a/0012-Issue-6848-AddressSanitizer-leak-in-do_search.patch +++ /dev/null @@ -1,58 +0,0 @@ -From 98a83bb00255f77467a370d3347a8428b6659463 Mon Sep 17 00:00:00 2001 -From: Viktor Ashirov -Date: Mon, 7 Jul 2025 22:01:09 +0200 -Subject: [PATCH] Issue 6848 - 
AddressSanitizer: leak in do_search - -Bug Description: -When there's a BER decoding error and the function goes to -`free_and_return`, the `attrs` variable is not being freed because it's -only freed if `!psearch || rc != 0 || err != 0`, but `err` is still 0 at -that point. - -If we reach `free_and_return` from the `ber_scanf` error path, `attrs` -was never set in the pblock with `slapi_pblock_set()`, so the -`slapi_pblock_get()` call will not retrieve the potentially partially -allocated `attrs` from the BER decoding. - -Fixes: https://github.com/389ds/389-ds-base/issues/6848 - -Reviewed by: @tbordaz, @droideck (Thanks!) ---- - ldap/servers/slapd/search.c | 14 ++++++++++++-- - 1 file changed, 12 insertions(+), 2 deletions(-) - -diff --git a/ldap/servers/slapd/search.c b/ldap/servers/slapd/search.c -index e9b2c3670..f9d03c090 100644 ---- a/ldap/servers/slapd/search.c -+++ b/ldap/servers/slapd/search.c -@@ -235,6 +235,7 @@ do_search(Slapi_PBlock *pb) - log_search_access(pb, base, scope, fstr, "decoding error"); - send_ldap_result(pb, LDAP_PROTOCOL_ERROR, NULL, NULL, 0, - NULL); -+ err = 1; /* Make sure we free everything */ - goto free_and_return; - } - -@@ -420,8 +421,17 @@ free_and_return: - if (!psearch || rc != 0 || err != 0) { - slapi_ch_free_string(&fstr); - slapi_filter_free(filter, 1); -- slapi_pblock_get(pb, SLAPI_SEARCH_ATTRS, &attrs); -- charray_free(attrs); /* passing NULL is fine */ -+ -+ /* Get attrs from pblock if it was set there, otherwise use local attrs */ -+ char **pblock_attrs = NULL; -+ slapi_pblock_get(pb, SLAPI_SEARCH_ATTRS, &pblock_attrs); -+ if (pblock_attrs != NULL) { -+ charray_free(pblock_attrs); /* Free attrs from pblock */ -+ slapi_pblock_set(pb, SLAPI_SEARCH_ATTRS, NULL); -+ } else if (attrs != NULL) { -+ /* Free attrs that were allocated but never put in pblock */ -+ charray_free(attrs); -+ } - charray_free(gerattrs); /* passing NULL is fine */ - /* - * Fix for defect 526719 / 553356 : Persistent search op failed. 
--- -2.49.0 - diff --git a/0013-Issue-6865-AddressSanitizer-leak-in-agmt_update_init.patch b/0013-Issue-6865-AddressSanitizer-leak-in-agmt_update_init.patch deleted file mode 100644 index 1911baf..0000000 --- a/0013-Issue-6865-AddressSanitizer-leak-in-agmt_update_init.patch +++ /dev/null @@ -1,58 +0,0 @@ -From e89a5acbc1bcc1b460683aa498005d6f0ce7054e Mon Sep 17 00:00:00 2001 -From: Viktor Ashirov -Date: Fri, 11 Jul 2025 12:32:38 +0200 -Subject: [PATCH] Issue 6865 - AddressSanitizer: leak in - agmt_update_init_status - -Bug Description: -We allocate an array of `LDAPMod *` pointers, but never free it: - -``` -================================================================= -==2748356==ERROR: LeakSanitizer: detected memory leaks - -Direct leak of 24 byte(s) in 1 object(s) allocated from: - #0 0x7f05e8cb4a07 in __interceptor_malloc (/lib64/libasan.so.6+0xb4a07) - #1 0x7f05e85c0138 in slapi_ch_malloc (/usr/lib64/dirsrv/libslapd.so.0+0x1c0138) - #2 0x7f05e109e481 in agmt_update_init_status ldap/servers/plugins/replication/repl5_agmt.c:2583 - #3 0x7f05e10a0aa5 in agmtlist_shutdown ldap/servers/plugins/replication/repl5_agmtlist.c:789 - #4 0x7f05e10ab6bc in multisupplier_stop ldap/servers/plugins/replication/repl5_init.c:844 - #5 0x7f05e10ab6bc in multisupplier_stop ldap/servers/plugins/replication/repl5_init.c:837 - #6 0x7f05e862507d in plugin_call_func ldap/servers/slapd/plugin.c:2001 - #7 0x7f05e8625be1 in plugin_call_one ldap/servers/slapd/plugin.c:1950 - #8 0x7f05e8625be1 in plugin_dependency_closeall ldap/servers/slapd/plugin.c:1844 - #9 0x55e1a7ff9815 in slapd_daemon ldap/servers/slapd/daemon.c:1275 - #10 0x55e1a7fd36ef in main (/usr/sbin/ns-slapd+0x3e6ef) - #11 0x7f05e80295cf in __libc_start_call_main (/lib64/libc.so.6+0x295cf) - #12 0x7f05e802967f in __libc_start_main_alias_2 (/lib64/libc.so.6+0x2967f) - #13 0x55e1a7fd74a4 in _start (/usr/sbin/ns-slapd+0x424a4) - -SUMMARY: AddressSanitizer: 24 byte(s) leaked in 1 allocation(s). -``` - -Fix Description: -Ensure `mods` is freed in the cleanup code. - -Fixes: https://github.com/389ds/389-ds-base/issues/6865 -Relates: https://github.com/389ds/389-ds-base/issues/6470 - -Reviewed by: @mreynolds389 (Thanks!) 
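-
-For context, a minimal sketch of the pattern the fix enforces (names
-abbreviated, `nmods` hypothetical; not the full server code): the array of
-`LDAPMod *` pointers is allocated with slapi_ch_malloc() and must itself be
-freed in the cleanup path, independently of the Slapi_Mod structures that
-populate it:
-
-```
-LDAPMod **mods = (LDAPMod **)slapi_ch_malloc((nmods + 1) * sizeof(LDAPMod *));
-/* ... fill mods[] from the smod_* values and apply them ... */
-slapi_ch_free((void **)&mods); /* the fix: free the pointer array itself */
-```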
---- - ldap/servers/plugins/replication/repl5_agmt.c | 1 + - 1 file changed, 1 insertion(+) - -diff --git a/ldap/servers/plugins/replication/repl5_agmt.c b/ldap/servers/plugins/replication/repl5_agmt.c -index c818c5857..0a81167b7 100644 ---- a/ldap/servers/plugins/replication/repl5_agmt.c -+++ b/ldap/servers/plugins/replication/repl5_agmt.c -@@ -2743,6 +2743,7 @@ agmt_update_init_status(Repl_Agmt *ra) - } else { - PR_Unlock(ra->lock); - } -+ slapi_ch_free((void **)&mods); - slapi_mod_done(&smod_start_time); - slapi_mod_done(&smod_end_time); - slapi_mod_done(&smod_status); --- -2.49.0 - diff --git a/0014-Issue-6859-str2filter-is-not-fully-applying-matching.patch b/0014-Issue-6859-str2filter-is-not-fully-applying-matching.patch deleted file mode 100644 index 2c603d8..0000000 --- a/0014-Issue-6859-str2filter-is-not-fully-applying-matching.patch +++ /dev/null @@ -1,169 +0,0 @@ -From bf58cba210cd785d3abe6ffbbf174481258dcf5e Mon Sep 17 00:00:00 2001 -From: Mark Reynolds -Date: Wed, 9 Jul 2025 14:18:50 -0400 -Subject: [PATCH] Issue 6859 - str2filter is not fully applying matching rules - -Description: - -When we have an extended filter, one with a MR applied, it is ignored during -internal searches: - - "(cn:CaseExactMatch:=Value)" - -For internal searches we use str2filter() and it doesn't fully apply extended -search filter matching rules - -Also needed to update attr uniqueness plugin to apply this change for mod -operations (previously only Adds were correctly handling these attribute -filters) - -Relates: https://github.com/389ds/389-ds-base/issues/6857 -Relates: https://github.com/389ds/389-ds-base/issues/6859 - -Reviewed by: spichugi & tbordaz(Thanks!!) ---- - .../tests/suites/plugins/attruniq_test.py | 65 ++++++++++++++++++- - ldap/servers/plugins/uiduniq/uid.c | 7 ++ - ldap/servers/slapd/plugin_mr.c | 2 +- - ldap/servers/slapd/str2filter.c | 8 +++ - 4 files changed, 79 insertions(+), 3 deletions(-) - -diff --git a/dirsrvtests/tests/suites/plugins/attruniq_test.py b/dirsrvtests/tests/suites/plugins/attruniq_test.py -index aac659c29..046952df3 100644 ---- a/dirsrvtests/tests/suites/plugins/attruniq_test.py -+++ b/dirsrvtests/tests/suites/plugins/attruniq_test.py -@@ -1,5 +1,5 @@ - # --- BEGIN COPYRIGHT BLOCK --- --# Copyright (C) 2021 Red Hat, Inc. -+# Copyright (C) 2025 Red Hat, Inc. - # All rights reserved. - # - # License: GPL (version 3 or any later version). -@@ -324,4 +324,65 @@ def test_exclude_subtrees(topology_st): - cont2.delete() - cont3.delete() - attruniq.disable() -- attruniq.delete() -\ No newline at end of file -+ attruniq.delete() -+ -+ -+def test_matchingrule_attr(topology_st): -+ """ Test list extension MR attribute. Check for "cn" using CES (versus it -+ being defined as CIS) -+ -+ :id: 5cde4342-6fa3-4225-b23d-0af918981075 -+ :setup: Standalone instance -+ :steps: -+ 1. Setup and enable attribute uniqueness plugin to use CN attribute -+ with a matching rule of CaseExactMatch. -+ 2. Add user with CN value is lowercase -+ 3. Add second user with same lowercase CN which should be rejected -+ 4. Add second user with same CN value but with mixed case -+ 5. Modify second user replacing CN value to lc which should be rejected -+ -+ :expectedresults: -+ 1. Success -+ 2. Success -+ 3. Success -+ 4. Success -+ 5. 
Success -+ """ -+ -+ inst = topology_st.standalone -+ -+ attruniq = AttributeUniquenessPlugin(inst, -+ dn="cn=attribute uniqueness,cn=plugins,cn=config") -+ attruniq.add_unique_attribute('cn:CaseExactMatch:') -+ attruniq.enable_all_subtrees() -+ attruniq.enable() -+ inst.restart() -+ -+ users = UserAccounts(inst, DEFAULT_SUFFIX) -+ users.create(properties={'cn': "common_name", -+ 'uid': "uid_name", -+ 'sn': "uid_name", -+ 'uidNumber': '1', -+ 'gidNumber': '11', -+ 'homeDirectory': '/home/uid_name'}) -+ -+ log.info('Add entry with the exact CN value which should be rejected') -+ with pytest.raises(ldap.CONSTRAINT_VIOLATION): -+ users.create(properties={'cn': "common_name", -+ 'uid': "uid_name2", -+ 'sn': "uid_name2", -+ 'uidNumber': '11', -+ 'gidNumber': '111', -+ 'homeDirectory': '/home/uid_name2'}) -+ -+ log.info('Add entry with the mixed case CN value which should be allowed') -+ user = users.create(properties={'cn': "Common_Name", -+ 'uid': "uid_name2", -+ 'sn': "uid_name2", -+ 'uidNumber': '11', -+ 'gidNumber': '111', -+ 'homeDirectory': '/home/uid_name2'}) -+ -+ log.info('Mod entry with exact case CN value which should be rejected') -+ with pytest.raises(ldap.CONSTRAINT_VIOLATION): -+ user.replace('cn', 'common_name') -diff --git a/ldap/servers/plugins/uiduniq/uid.c b/ldap/servers/plugins/uiduniq/uid.c -index 887e79d78..fdb1404a0 100644 ---- a/ldap/servers/plugins/uiduniq/uid.c -+++ b/ldap/servers/plugins/uiduniq/uid.c -@@ -1178,6 +1178,10 @@ preop_modify(Slapi_PBlock *pb) - for (; mods && *mods; mods++) { - mod = *mods; - for (i = 0; attrNames && attrNames[i]; i++) { -+ char *attr_match = strchr(attrNames[i], ':'); -+ if (attr_match != NULL) { -+ attr_match[0] = '\0'; -+ } - if ((slapi_attr_type_cmp(mod->mod_type, attrNames[i], 1) == 0) && /* mod contains target attr */ - (mod->mod_op & LDAP_MOD_BVALUES) && /* mod is bval encoded (not string val) */ - (mod->mod_bvalues && mod->mod_bvalues[0]) && /* mod actually contains some values */ -@@ -1186,6 +1190,9 @@ preop_modify(Slapi_PBlock *pb) - { - addMod(&checkmods, &checkmodsCapacity, &modcount, mod); - } -+ if (attr_match != NULL) { -+ attr_match[0] = ':'; -+ } - } - } - if (modcount == 0) { -diff --git a/ldap/servers/slapd/plugin_mr.c b/ldap/servers/slapd/plugin_mr.c -index 9809a4374..757355dbc 100644 ---- a/ldap/servers/slapd/plugin_mr.c -+++ b/ldap/servers/slapd/plugin_mr.c -@@ -625,7 +625,7 @@ attempt_mr_filter_create(mr_filter_t *f, struct slapdplugin *mrp, Slapi_PBlock * - int rc; - int32_t (*mrf_create)(Slapi_PBlock *) = NULL; - f->mrf_match = NULL; -- pblock_init(pb); -+ slapi_pblock_init(pb); - if (!(rc = slapi_pblock_set(pb, SLAPI_PLUGIN, mrp)) && - !(rc = slapi_pblock_get(pb, SLAPI_PLUGIN_MR_FILTER_CREATE_FN, &mrf_create)) && - mrf_create != NULL && -diff --git a/ldap/servers/slapd/str2filter.c b/ldap/servers/slapd/str2filter.c -index 9fdc500f7..5620b7439 100644 ---- a/ldap/servers/slapd/str2filter.c -+++ b/ldap/servers/slapd/str2filter.c -@@ -344,6 +344,14 @@ str2simple(char *str, int unescape_filter) - return NULL; /* error */ - } else { - f->f_choice = LDAP_FILTER_EXTENDED; -+ if (f->f_mr_oid) { -+ /* apply the MR indexers */ -+ rc = plugin_mr_filter_create(&f->f_mr); -+ if (rc) { -+ slapi_filter_free(f, 1); -+ return NULL; /* error */ -+ } -+ } - } - } else if (str_find_star(value) == NULL) { - f->f_choice = LDAP_FILTER_EQUALITY; --- -2.49.0 - diff --git a/0015-Issue-6872-compressed-log-rotation-creates-files-wit.patch b/0015-Issue-6872-compressed-log-rotation-creates-files-wit.patch deleted file mode 100644 index 
156fe4b..0000000 --- a/0015-Issue-6872-compressed-log-rotation-creates-files-wit.patch +++ /dev/null @@ -1,163 +0,0 @@ -From 4719e7df7ba0eb5e26830812ab9ead51f1e0c5f5 Mon Sep 17 00:00:00 2001 -From: Mark Reynolds -Date: Tue, 15 Jul 2025 17:56:18 -0400 -Subject: [PATCH] Issue 6872 - compressed log rotation creates files with world - readable permission - -Description: - -When compressing a log file, first create the empty file using open() -so we can set the correct permissions right from the start. gzopen() -always uses permission 644 and that is not safe. So after creating it -with open(), with the correct permissions, then pass the FD to gzdopen() -and write the compressed content. - -relates: https://github.com/389ds/389-ds-base/issues/6872 - -Reviewed by: progier(Thanks!) ---- - .../logging/logging_compression_test.py | 15 ++++++++-- - ldap/servers/slapd/log.c | 28 +++++++++++++------ - ldap/servers/slapd/schema.c | 2 +- - 3 files changed, 33 insertions(+), 12 deletions(-) - -diff --git a/dirsrvtests/tests/suites/logging/logging_compression_test.py b/dirsrvtests/tests/suites/logging/logging_compression_test.py -index e30874cc0..3a987d62c 100644 ---- a/dirsrvtests/tests/suites/logging/logging_compression_test.py -+++ b/dirsrvtests/tests/suites/logging/logging_compression_test.py -@@ -1,5 +1,5 @@ - # --- BEGIN COPYRIGHT BLOCK --- --# Copyright (C) 2022 Red Hat, Inc. -+# Copyright (C) 2025 Red Hat, Inc. - # All rights reserved. - # - # License: GPL (version 3 or any later version). -@@ -22,12 +22,21 @@ log = logging.getLogger(__name__) - - pytestmark = pytest.mark.tier1 - -+ - def log_rotated_count(log_type, log_dir, check_compressed=False): -- # Check if the log was rotated -+ """ -+ Check if the log was rotated and has the correct permissions -+ """ - log_file = f'{log_dir}/{log_type}.2*' - if check_compressed: - log_file += ".gz" -- return len(glob.glob(log_file)) -+ log_files = glob.glob(log_file) -+ for logf in log_files: -+ # Check permissions -+ st = os.stat(logf) -+ assert oct(st.st_mode) == '0o100600' # 0600 -+ -+ return len(log_files) - - - def update_and_sleep(inst, suffix, sleep=True): -diff --git a/ldap/servers/slapd/log.c b/ldap/servers/slapd/log.c -index e859682fe..f535011ab 100644 ---- a/ldap/servers/slapd/log.c -+++ b/ldap/servers/slapd/log.c -@@ -174,17 +174,28 @@ get_syslog_loglevel(int loglevel) - } - - static int --compress_log_file(char *log_name) -+compress_log_file(char *log_name, int32_t mode) - { - char gzip_log[BUFSIZ] = {0}; - char buf[LOG_CHUNK] = {0}; - size_t bytes_read = 0; - gzFile outfile = NULL; - FILE *source = NULL; -+ int fd = 0; - - PR_snprintf(gzip_log, sizeof(gzip_log), "%s.gz", log_name); -- if ((outfile = gzopen(gzip_log,"wb")) == NULL) { -- /* Failed to open new gzip file */ -+ -+ /* -+ * Try to open the file as we may have an incorrect path. We also need to -+ * set the permissions using open() as gzopen() creates the file with -+ * 644 permissions (world readable - bad). So we create an empty file with -+ * the correct permissions, then we pass the FD to gzdopen() to write the -+ * compressed content. 
-+ */ -+ if ((fd = open(gzip_log, O_WRONLY|O_CREAT|O_TRUNC, mode)) >= 0) { -+ /* File successfully created, now pass the FD to gzdopen() */ -+ outfile = gzdopen(fd, "ab"); -+ } else { - return -1; - } - -@@ -193,6 +204,7 @@ compress_log_file(char *log_name) - gzclose(outfile); - return -1; - } -+ - bytes_read = fread(buf, 1, LOG_CHUNK, source); - while (bytes_read > 0) { - int bytes_written = gzwrite(outfile, buf, bytes_read); -@@ -3402,7 +3414,7 @@ log__open_accesslogfile(int logfile_state, int locked) - return LOG_UNABLE_TO_OPENFILE; - } - } else if (loginfo.log_access_compress) { -- if (compress_log_file(newfile) != 0) { -+ if (compress_log_file(newfile, loginfo.log_access_mode) != 0) { - slapi_log_err(SLAPI_LOG_ERR, "log__open_auditfaillogfile", - "failed to compress rotated access log (%s)\n", - newfile); -@@ -3570,7 +3582,7 @@ log__open_securitylogfile(int logfile_state, int locked) - return LOG_UNABLE_TO_OPENFILE; - } - } else if (loginfo.log_security_compress) { -- if (compress_log_file(newfile) != 0) { -+ if (compress_log_file(newfile, loginfo.log_security_mode) != 0) { - slapi_log_err(SLAPI_LOG_ERR, "log__open_securitylogfile", - "failed to compress rotated security audit log (%s)\n", - newfile); -@@ -6288,7 +6300,7 @@ log__open_errorlogfile(int logfile_state, int locked) - return LOG_UNABLE_TO_OPENFILE; - } - } else if (loginfo.log_error_compress) { -- if (compress_log_file(newfile) != 0) { -+ if (compress_log_file(newfile, loginfo.log_error_mode) != 0) { - PR_snprintf(buffer, sizeof(buffer), "Failed to compress errors log file (%s)\n", newfile); - log__error_emergency(buffer, 1, 1); - } else { -@@ -6476,7 +6488,7 @@ log__open_auditlogfile(int logfile_state, int locked) - return LOG_UNABLE_TO_OPENFILE; - } - } else if (loginfo.log_audit_compress) { -- if (compress_log_file(newfile) != 0) { -+ if (compress_log_file(newfile, loginfo.log_audit_mode) != 0) { - slapi_log_err(SLAPI_LOG_ERR, "log__open_auditfaillogfile", - "failed to compress rotated audit log (%s)\n", - newfile); -@@ -6641,7 +6653,7 @@ log__open_auditfaillogfile(int logfile_state, int locked) - return LOG_UNABLE_TO_OPENFILE; - } - } else if (loginfo.log_auditfail_compress) { -- if (compress_log_file(newfile) != 0) { -+ if (compress_log_file(newfile, loginfo.log_auditfail_mode) != 0) { - slapi_log_err(SLAPI_LOG_ERR, "log__open_auditfaillogfile", - "failed to compress rotated auditfail log (%s)\n", - newfile); -diff --git a/ldap/servers/slapd/schema.c b/ldap/servers/slapd/schema.c -index a8e6b1210..9ef4ee4bf 100644 ---- a/ldap/servers/slapd/schema.c -+++ b/ldap/servers/slapd/schema.c -@@ -903,7 +903,7 @@ oc_check_allowed_sv(Slapi_PBlock *pb, Slapi_Entry *e, const char *type, struct o - - if (pb) { - PR_snprintf(errtext, sizeof(errtext), -- "attribute \"%s\" not allowed\n", -+ "attribute \"%s\" not allowed", - escape_string(type, ebuf)); - slapi_pblock_set(pb, SLAPI_PB_RESULT_TEXT, errtext); - } --- -2.49.0 - diff --git a/0016-Issue-6878-Prevent-repeated-disconnect-logs-during-s.patch b/0016-Issue-6878-Prevent-repeated-disconnect-logs-during-s.patch deleted file mode 100644 index 7a328a8..0000000 --- a/0016-Issue-6878-Prevent-repeated-disconnect-logs-during-s.patch +++ /dev/null @@ -1,116 +0,0 @@ -From 1cc3715130650d9e778430792c5e2c2e9690cc72 Mon Sep 17 00:00:00 2001 -From: Simon Pichugin -Date: Fri, 18 Jul 2025 18:50:33 -0700 -Subject: [PATCH] Issue 6878 - Prevent repeated disconnect logs during shutdown - (#6879) - -Description: Avoid logging non-active initialized connections via CONN in disconnect_server_nomutex_ext
by adding a check to skip invalid conn=0 with invalid sockets, preventing excessive repeated messages. - -Update ds_logs_test.py by adding test_no_repeated_disconnect_messages to verify the fix. - -Fixes: https://github.com/389ds/389-ds-base/issues/6878 - -Reviewed by: @mreynolds389 (Thanks!) ---- - .../tests/suites/ds_logs/ds_logs_test.py | 51 ++++++++++++++++++- - ldap/servers/slapd/connection.c | 15 +++--- - 2 files changed, 59 insertions(+), 7 deletions(-) - -diff --git a/dirsrvtests/tests/suites/ds_logs/ds_logs_test.py b/dirsrvtests/tests/suites/ds_logs/ds_logs_test.py -index 209d63b5d..6fd790c18 100644 ---- a/dirsrvtests/tests/suites/ds_logs/ds_logs_test.py -+++ b/dirsrvtests/tests/suites/ds_logs/ds_logs_test.py -@@ -24,7 +24,7 @@ from lib389.plugins import AutoMembershipPlugin, ReferentialIntegrityPlugin, Aut - from lib389.idm.user import UserAccounts, UserAccount - from lib389.idm.group import Groups - from lib389.idm.organizationalunit import OrganizationalUnits --from lib389._constants import DEFAULT_SUFFIX, LOG_ACCESS_LEVEL, PASSWORD -+from lib389._constants import DEFAULT_SUFFIX, LOG_ACCESS_LEVEL, PASSWORD, ErrorLog - from lib389.utils import ds_is_older, ds_is_newer - from lib389.config import RSA - from lib389.dseldif import DSEldif -@@ -1410,6 +1410,55 @@ def test_errorlog_buffering(topology_st, request): - assert inst.ds_error_log.match(".*slapd_daemon - slapd started.*") - - -+def test_no_repeated_disconnect_messages(topology_st): -+ """Test that there are no repeated "Not setting conn 0 to be disconnected: socket is invalid" messages on restart -+ -+ :id: 72b5e1ce-2db8-458f-b2cd-0a0b6525f51f -+ :setup: Standalone Instance -+ :steps: -+ 1. Set error log level to CONNECTION -+ 2. Clear existing error logs -+ 3. Restart the server with 30 second timeout -+ 4. Check error log for repeated disconnect messages -+ 5. Verify there are no more than 10 occurrences of the disconnect message -+ :expectedresults: -+ 1. Error log level should be set successfully -+ 2. Error logs should be cleared -+ 3. Server should restart successfully within 30 seconds -+ 4. Error log should be accessible -+ 5. 
There should be no more than 10 repeated disconnect messages -+ """ -+ -+ inst = topology_st.standalone -+ -+ log.info('Set error log level to CONNECTION') -+ inst.config.loglevel([ErrorLog.CONNECT]) -+ current_level = inst.config.get_attr_val_int('nsslapd-errorlog-level') -+ log.info(f'Error log level set to: {current_level}') -+ -+ log.info('Clear existing error logs') -+ inst.deleteErrorLogs() -+ -+ log.info('Restart the server with 30 second timeout') -+ inst.restart(timeout=30) -+ -+ log.info('Check error log for repeated disconnect messages') -+ disconnect_message = "Not setting conn 0 to be disconnected: socket is invalid" -+ -+ # Count occurrences of the disconnect message -+ error_log_lines = inst.ds_error_log.readlines() -+ disconnect_count = 0 -+ -+ for line in error_log_lines: -+ if disconnect_message in line: -+ disconnect_count += 1 -+ -+ log.info(f'Found {disconnect_count} occurrences of disconnect message') -+ -+ log.info('Verify there are no more than 10 occurrences') -+ assert disconnect_count <= 10, f"Found {disconnect_count} repeated disconnect messages, expected <= 10" -+ -+ - if __name__ == '__main__': - # Run isolated - # -s for DEBUG mode -diff --git a/ldap/servers/slapd/connection.c b/ldap/servers/slapd/connection.c -index 9f3c374cf..b3ca2e773 100644 ---- a/ldap/servers/slapd/connection.c -+++ b/ldap/servers/slapd/connection.c -@@ -2556,12 +2556,15 @@ disconnect_server_nomutex_ext(Connection *conn, PRUint64 opconnid, int opid, PRE - } - } - } else { -- slapi_log_err(SLAPI_LOG_CONNS, "disconnect_server_nomutex_ext", -- "Not setting conn %d to be disconnected: %s\n", -- conn->c_sd, -- (conn->c_sd == SLAPD_INVALID_SOCKET) ? "socket is invalid" : -- ((conn->c_connid != opconnid) ? "conn id does not match op conn id" : -- ((conn->c_flags & CONN_FLAG_CLOSING) ? "conn is closing" : "unknown"))); -+ /* We avoid logging an invalid conn=0 connection as it is not a real connection. */ -+ if (!(conn->c_sd == SLAPD_INVALID_SOCKET && conn->c_connid == 0)) { -+ slapi_log_err(SLAPI_LOG_CONNS, "disconnect_server_nomutex_ext", -+ "Not setting conn %d to be disconnected: %s\n", -+ conn->c_sd, -+ (conn->c_sd == SLAPD_INVALID_SOCKET) ? "socket is invalid" : -+ ((conn->c_connid != opconnid) ? "conn id does not match op conn id" : -+ ((conn->c_flags & CONN_FLAG_CLOSING) ? "conn is closing" : "unknown"))); -+ } - } - } - --- -2.49.0 - diff --git a/0017-Issue-6888-Missing-access-JSON-logging-for-TLS-Clien.patch b/0017-Issue-6888-Missing-access-JSON-logging-for-TLS-Clien.patch deleted file mode 100644 index 4ed2247..0000000 --- a/0017-Issue-6888-Missing-access-JSON-logging-for-TLS-Clien.patch +++ /dev/null @@ -1,590 +0,0 @@ -From 5ff051f936a5bc4f5e1edb17ee1c3149b66644a2 Mon Sep 17 00:00:00 2001 -From: Mark Reynolds -Date: Wed, 16 Jul 2025 20:54:48 -0400 -Subject: [PATCH] Issue 6888 - Missing access JSON logging for TLS/Client auth - -Description: - -TLS/Client auth logging was not converted to JSON (auth.c got missed) - -Relates: https://github.com/389ds/389-ds-base/issues/6888 - -Reviewed by: spichugi(Thanks!) 
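-
-For illustration, the new events look roughly like this (the inner key names
-are the ones added by the patch below; the envelope emitted by
-build_base_obj() and all values are illustrative only):
-
-```
-{ ..., "operation": "TLS_INFO", "tls_version": "TLS1.3", "keysize": 256, "cipher": "TLS_AES_256_GCM_SHA384" }
-{ ..., "operation": "TLS_CLIENT_INFO", "tls_version": "TLS1.3", "client_dn": "uid=testuser,ou=people,dc=example,dc=com", "msg": "client bound" }
-```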
---- - .../logging/access_json_logging_test.py | 96 ++++++++- - ldap/servers/slapd/accesslog.c | 114 +++++++++++ - ldap/servers/slapd/auth.c | 182 +++++++++++++----- - ldap/servers/slapd/log.c | 2 + - ldap/servers/slapd/slapi-private.h | 10 + - 5 files changed, 353 insertions(+), 51 deletions(-) - -diff --git a/dirsrvtests/tests/suites/logging/access_json_logging_test.py b/dirsrvtests/tests/suites/logging/access_json_logging_test.py -index ae91dc487..f0dc861a7 100644 ---- a/dirsrvtests/tests/suites/logging/access_json_logging_test.py -+++ b/dirsrvtests/tests/suites/logging/access_json_logging_test.py -@@ -19,6 +19,8 @@ from lib389.idm.user import UserAccounts - from lib389.dirsrv_log import DirsrvAccessJSONLog - from lib389.index import VLVSearch, VLVIndex - from lib389.tasks import Tasks -+from lib389.config import CertmapLegacy -+from lib389.nss_ssl import NssSsl - from ldap.controls.vlv import VLVRequestControl - from ldap.controls.sss import SSSRequestControl - from ldap.controls import SimplePagedResultsControl -@@ -67,11 +69,11 @@ def get_log_event(inst, op, key=None, val=None, key2=None, val2=None): - if val == str(event[key]).lower() and \ - val2 == str(event[key2]).lower(): - return event -- -- elif key is not None and key in event: -- val = str(val).lower() -- if val == str(event[key]).lower(): -- return event -+ elif key is not None: -+ if key in event: -+ val = str(val).lower() -+ if val == str(event[key]).lower(): -+ return event - else: - return event - -@@ -163,6 +165,7 @@ def test_access_json_format(topo_m2, setup_test): - 14. Test PAGED SEARCH is logged correctly - 15. Test PERSISTENT SEARCH is logged correctly - 16. Test EXTENDED OP -+ 17. Test TLS_INFO is logged correctly - :expectedresults: - 1. Success - 2. Success -@@ -180,6 +183,7 @@ def test_access_json_format(topo_m2, setup_test): - 14. Success - 15. Success - 16. Success -+ 17. Success - """ - - inst = topo_m2.ms["supplier1"] -@@ -560,6 +564,88 @@ def test_access_json_format(topo_m2, setup_test): - assert event['oid_name'] == "REPL_END_NSDS50_REPLICATION_REQUEST_OID" - assert event['name'] == "replication-multisupplier-extop" - -+ # -+ # TLS INFO/TLS CLIENT INFO -+ # -+ RDN_TEST_USER = 'testuser' -+ RDN_TEST_USER_WRONG = 'testuser_wrong' -+ inst.enable_tls() -+ inst.restart() -+ -+ users = UserAccounts(inst, DEFAULT_SUFFIX) -+ user = users.create(properties={ -+ 'uid': RDN_TEST_USER, -+ 'cn': RDN_TEST_USER, -+ 'sn': RDN_TEST_USER, -+ 'uidNumber': '1000', -+ 'gidNumber': '2000', -+ 'homeDirectory': f'/home/{RDN_TEST_USER}' -+ }) -+ -+ ssca_dir = inst.get_ssca_dir() -+ ssca = NssSsl(dbpath=ssca_dir) -+ ssca.create_rsa_user(RDN_TEST_USER) -+ ssca.create_rsa_user(RDN_TEST_USER_WRONG) -+ -+ # Get the details of where the key and crt are. -+ tls_locs = ssca.get_rsa_user(RDN_TEST_USER) -+ tls_locs_wrong = ssca.get_rsa_user(RDN_TEST_USER_WRONG) -+ -+ user.enroll_certificate(tls_locs['crt_der_path']) -+ -+ # Turn on the certmap. -+ cm = CertmapLegacy(inst) -+ certmaps = cm.list() -+ certmaps['default']['DNComps'] = '' -+ certmaps['default']['FilterComps'] = ['cn'] -+ certmaps['default']['VerifyCert'] = 'off' -+ cm.set(certmaps) -+ -+ # Check that EXTERNAL is listed in supported mechns. -+ assert (inst.rootdse.supports_sasl_external()) -+ -+ # Restart to allow certmaps to be re-read: Note, we CAN NOT use post_open -+ # here, it breaks on auth. 
see lib389/__init__.py -+ inst.restart(post_open=False) -+ -+ # Attempt a bind with TLS external -+ inst.open(saslmethod='EXTERNAL', connOnly=True, certdir=ssca_dir, -+ userkey=tls_locs['key'], usercert=tls_locs['crt']) -+ inst.restart() -+ -+ event = get_log_event(inst, "TLS_INFO") -+ assert event is not None -+ assert 'tls_version' in event -+ assert 'keysize' in event -+ assert 'cipher' in event -+ -+ event = get_log_event(inst, "TLS_CLIENT_INFO", -+ "subject", -+ "CN=testuser,O=testing,L=389ds,ST=Queensland,C=AU") -+ assert event is not None -+ assert 'tls_version' in event -+ assert 'keysize' in event -+ assert 'issuer' in event -+ -+ event = get_log_event(inst, "TLS_CLIENT_INFO", -+ "client_dn", -+ "uid=testuser,ou=People,dc=example,dc=com") -+ assert event is not None -+ assert 'tls_version' in event -+ assert event['msg'] == "client bound" -+ -+ # Check for failed certmap error -+ with pytest.raises(ldap.INVALID_CREDENTIALS): -+ inst.open(saslmethod='EXTERNAL', connOnly=True, certdir=ssca_dir, -+ userkey=tls_locs_wrong['key'], -+ usercert=tls_locs_wrong['crt']) -+ -+ event = get_log_event(inst, "TLS_CLIENT_INFO", "err", -185) -+ assert event is not None -+ assert 'tls_version' in event -+ assert event['msg'] == "failed to map client certificate to LDAP DN" -+ assert event['err_msg'] == "Certificate couldn't be mapped to an ldap entry" -+ - - if __name__ == '__main__': - # Run isolated -diff --git a/ldap/servers/slapd/accesslog.c b/ldap/servers/slapd/accesslog.c -index 68022fe38..072ace203 100644 ---- a/ldap/servers/slapd/accesslog.c -+++ b/ldap/servers/slapd/accesslog.c -@@ -1147,3 +1147,117 @@ slapd_log_access_sort(slapd_log_pblock *logpb) - - return rc; - } -+ -+/* -+ * TLS connection -+ * -+ * int32_t log_format -+ * time_t conn_time -+ * uint64_t conn_id -+ * const char *msg -+ * const char *tls_version -+ * int32_t keysize -+ * const char *cipher -+ * int32_t err -+ * const char *err_str -+ */ -+int32_t -+slapd_log_access_tls(slapd_log_pblock *logpb) -+{ -+ int32_t rc = 0; -+ char *msg = NULL; -+ json_object *json_obj = NULL; -+ -+ if ((json_obj = build_base_obj(logpb, "TLS_INFO")) == NULL) { -+ return rc; -+ } -+ -+ if (logpb->msg) { -+ json_object_object_add(json_obj, "msg", json_obj_add_str(logpb->msg)); -+ } -+ if (logpb->tls_version) { -+ json_object_object_add(json_obj, "tls_version", json_obj_add_str(logpb->tls_version)); -+ } -+ if (logpb->cipher) { -+ json_object_object_add(json_obj, "cipher", json_obj_add_str(logpb->cipher)); -+ } -+ if (logpb->keysize) { -+ json_object_object_add(json_obj, "keysize", json_object_new_int(logpb->keysize)); -+ } -+ if (logpb->err_str) { -+ json_object_object_add(json_obj, "err", json_object_new_int(logpb->err)); -+ json_object_object_add(json_obj, "err_msg", json_obj_add_str(logpb->err_str)); -+ } -+ -+ /* Convert json object to string and log it */ -+ msg = (char *)json_object_to_json_string_ext(json_obj, logpb->log_format); -+ rc = slapd_log_access_json(msg); -+ -+ /* Done with JSON object - free it */ -+ json_object_put(json_obj); -+ -+ return rc; -+} -+ -+/* -+ * TLS client auth -+ * -+ * int32_t log_format -+ * time_t conn_time -+ * uint64_t conn_id -+ * const char* tls_version -+ * const char* keysize -+ * const char* cipher -+ * const char* msg -+ * const char* subject -+ * const char* issuer -+ * int32_t err -+ * const char* err_str -+ * const char *client_dn -+ */ -+int32_t -+slapd_log_access_tls_client_auth(slapd_log_pblock *logpb) -+{ -+ int32_t rc = 0; -+ char *msg = NULL; -+ json_object *json_obj = NULL; -+ -+ if 
((json_obj = build_base_obj(logpb, "TLS_CLIENT_INFO")) == NULL) { -+ return rc; -+ } -+ -+ if (logpb->tls_version) { -+ json_object_object_add(json_obj, "tls_version", json_obj_add_str(logpb->tls_version)); -+ } -+ if (logpb->cipher) { -+ json_object_object_add(json_obj, "cipher", json_obj_add_str(logpb->cipher)); -+ } -+ if (logpb->keysize) { -+ json_object_object_add(json_obj, "keysize", json_object_new_int(logpb->keysize)); -+ } -+ if (logpb->subject) { -+ json_object_object_add(json_obj, "subject", json_obj_add_str(logpb->subject)); -+ } -+ if (logpb->issuer) { -+ json_object_object_add(json_obj, "issuer", json_obj_add_str(logpb->issuer)); -+ } -+ if (logpb->client_dn) { -+ json_object_object_add(json_obj, "client_dn", json_obj_add_str(logpb->client_dn)); -+ } -+ if (logpb->msg) { -+ json_object_object_add(json_obj, "msg", json_obj_add_str(logpb->msg)); -+ } -+ if (logpb->err_str) { -+ json_object_object_add(json_obj, "err", json_object_new_int(logpb->err)); -+ json_object_object_add(json_obj, "err_msg", json_obj_add_str(logpb->err_str)); -+ } -+ -+ /* Convert json object to string and log it */ -+ msg = (char *)json_object_to_json_string_ext(json_obj, logpb->log_format); -+ rc = slapd_log_access_json(msg); -+ -+ /* Done with JSON object - free it */ -+ json_object_put(json_obj); -+ -+ return rc; -+} -diff --git a/ldap/servers/slapd/auth.c b/ldap/servers/slapd/auth.c -index e4231bf45..48e4b7129 100644 ---- a/ldap/servers/slapd/auth.c -+++ b/ldap/servers/slapd/auth.c -@@ -1,6 +1,6 @@ - /** BEGIN COPYRIGHT BLOCK - * Copyright (C) 2001 Sun Microsystems, Inc. Used by permission. -- * Copyright (C) 2005 Red Hat, Inc. -+ * Copyright (C) 2025 Red Hat, Inc. - * All rights reserved. - * - * License: GPL (version 3 or any later version). -@@ -363,19 +363,32 @@ handle_bad_certificate(void *clientData, PRFileDesc *prfd) - char sbuf[BUFSIZ], ibuf[BUFSIZ]; - Connection *conn = (Connection *)clientData; - CERTCertificate *clientCert = slapd_ssl_peerCertificate(prfd); -- - PRErrorCode errorCode = PR_GetError(); - char *subject = subject_of(clientCert); - char *issuer = issuer_of(clientCert); -- slapi_log_access(LDAP_DEBUG_STATS, -- "conn=%" PRIu64 " " SLAPI_COMPONENT_NAME_NSPR " error %i (%s); unauthenticated client %s; issuer %s\n", -- conn->c_connid, errorCode, slapd_pr_strerror(errorCode), -- subject ? escape_string(subject, sbuf) : "NULL", -- issuer ? escape_string(issuer, ibuf) : "NULL"); -+ int32_t log_format = config_get_accesslog_log_format(); -+ slapd_log_pblock logpb = {0}; -+ -+ if (log_format != LOG_FORMAT_DEFAULT) { -+ slapd_log_pblock_init(&logpb, log_format, NULL); -+ logpb.conn_id = conn->c_connid; -+ logpb.msg = "unauthenticated client"; -+ logpb.subject = subject ? escape_string(subject, sbuf) : "NULL"; -+ logpb.issuer = issuer ? escape_string(issuer, ibuf) : "NULL"; -+ logpb.err = errorCode; -+ logpb.err_str = slapd_pr_strerror(errorCode); -+ slapd_log_access_tls_client_auth(&logpb); -+ } else { -+ slapi_log_access(LDAP_DEBUG_STATS, -+ "conn=%" PRIu64 " " SLAPI_COMPONENT_NAME_NSPR " error %i (%s); unauthenticated client %s; issuer %s\n", -+ conn->c_connid, errorCode, slapd_pr_strerror(errorCode), -+ subject ? escape_string(subject, sbuf) : "NULL", -+ issuer ? 
escape_string(issuer, ibuf) : "NULL"); -+ } - if (issuer) -- free(issuer); -+ slapi_ch_free_string(&issuer); - if (subject) -- free(subject); -+ slapi_ch_free_string(&subject); - if (clientCert) - CERT_DestroyCertificate(clientCert); - return -1; /* non-zero means reject this certificate */ -@@ -394,7 +407,8 @@ handle_handshake_done(PRFileDesc *prfd, void *clientData) - { - Connection *conn = (Connection *)clientData; - CERTCertificate *clientCert = slapd_ssl_peerCertificate(prfd); -- -+ int32_t log_format = config_get_accesslog_log_format(); -+ slapd_log_pblock logpb = {0}; - char *clientDN = NULL; - int keySize = 0; - char *cipher = NULL; -@@ -403,19 +417,39 @@ handle_handshake_done(PRFileDesc *prfd, void *clientData) - SSLCipherSuiteInfo cipherInfo; - char *subject = NULL; - char sslversion[64]; -+ int err = 0; - - if ((slapd_ssl_getChannelInfo(prfd, &channelInfo, sizeof(channelInfo))) != SECSuccess) { - PRErrorCode errorCode = PR_GetError(); -- slapi_log_access(LDAP_DEBUG_STATS, -- "conn=%" PRIu64 " SSL failed to obtain channel info; " SLAPI_COMPONENT_NAME_NSPR " error %i (%s)\n", -- conn->c_connid, errorCode, slapd_pr_strerror(errorCode)); -+ if (log_format != LOG_FORMAT_DEFAULT) { -+ slapd_log_pblock_init(&logpb, log_format, NULL); -+ logpb.conn_id = conn->c_connid; -+ logpb.err = errorCode; -+ logpb.err_str = slapd_pr_strerror(errorCode); -+ logpb.msg = "SSL failed to obtain channel info; " SLAPI_COMPONENT_NAME_NSPR; -+ slapd_log_access_tls(&logpb); -+ } else { -+ slapi_log_access(LDAP_DEBUG_STATS, -+ "conn=%" PRIu64 " SSL failed to obtain channel info; " SLAPI_COMPONENT_NAME_NSPR " error %i (%s)\n", -+ conn->c_connid, errorCode, slapd_pr_strerror(errorCode)); -+ } - goto done; - } -+ - if ((slapd_ssl_getCipherSuiteInfo(channelInfo.cipherSuite, &cipherInfo, sizeof(cipherInfo))) != SECSuccess) { - PRErrorCode errorCode = PR_GetError(); -- slapi_log_access(LDAP_DEBUG_STATS, -- "conn=%" PRIu64 " SSL failed to obtain cipher info; " SLAPI_COMPONENT_NAME_NSPR " error %i (%s)\n", -- conn->c_connid, errorCode, slapd_pr_strerror(errorCode)); -+ if (log_format != LOG_FORMAT_DEFAULT) { -+ slapd_log_pblock_init(&logpb, log_format, NULL); -+ logpb.conn_id = conn->c_connid; -+ logpb.err = errorCode; -+ logpb.err_str = slapd_pr_strerror(errorCode); -+ logpb.msg = "SSL failed to obtain cipher info; " SLAPI_COMPONENT_NAME_NSPR; -+ slapd_log_access_tls(&logpb); -+ } else { -+ slapi_log_access(LDAP_DEBUG_STATS, -+ "conn=%" PRIu64 " SSL failed to obtain cipher info; " SLAPI_COMPONENT_NAME_NSPR " error %i (%s)\n", -+ conn->c_connid, errorCode, slapd_pr_strerror(errorCode)); -+ } - goto done; - } - -@@ -434,47 +468,84 @@ handle_handshake_done(PRFileDesc *prfd, void *clientData) - - if (config_get_SSLclientAuth() == SLAPD_SSLCLIENTAUTH_OFF) { - (void)slapi_getSSLVersion_str(channelInfo.protocolVersion, sslversion, sizeof(sslversion)); -- slapi_log_access(LDAP_DEBUG_STATS, "conn=%" PRIu64 " %s %i-bit %s\n", -- conn->c_connid, -- sslversion, keySize, cipher ? cipher : "NULL"); -+ if (log_format != LOG_FORMAT_DEFAULT) { -+ slapd_log_pblock_init(&logpb, log_format, NULL); -+ logpb.conn_id = conn->c_connid; -+ logpb.tls_version = sslversion; -+ logpb.keysize = keySize; -+ logpb.cipher = cipher ? cipher : "NULL"; -+ slapd_log_access_tls(&logpb); -+ } else { -+ slapi_log_access(LDAP_DEBUG_STATS, "conn=%" PRIu64 " %s %i-bit %s\n", -+ conn->c_connid, -+ sslversion, keySize, cipher ? 
cipher : "NULL"); -+ } - goto done; - } - if (clientCert == NULL) { - (void)slapi_getSSLVersion_str(channelInfo.protocolVersion, sslversion, sizeof(sslversion)); -- slapi_log_access(LDAP_DEBUG_STATS, "conn=%" PRIu64 " %s %i-bit %s\n", -- conn->c_connid, -- sslversion, keySize, cipher ? cipher : "NULL"); -+ if (log_format != LOG_FORMAT_DEFAULT) { -+ slapd_log_pblock_init(&logpb, log_format, NULL); -+ logpb.conn_id = conn->c_connid; -+ logpb.tls_version = sslversion; -+ logpb.keysize = keySize; -+ logpb.cipher = cipher ? cipher : "NULL"; -+ slapd_log_access_tls(&logpb); -+ } else { -+ slapi_log_access(LDAP_DEBUG_STATS, "conn=%" PRIu64 " %s %i-bit %s\n", -+ conn->c_connid, -+ sslversion, keySize, cipher ? cipher : "NULL"); -+ } - } else { - subject = subject_of(clientCert); - if (!subject) { - (void)slapi_getSSLVersion_str(channelInfo.protocolVersion, - sslversion, sizeof(sslversion)); -- slapi_log_access(LDAP_DEBUG_STATS, -- "conn=%" PRIu64 " %s %i-bit %s; missing subject\n", -- conn->c_connid, -- sslversion, keySize, cipher ? cipher : "NULL"); -+ if (log_format != LOG_FORMAT_DEFAULT) { -+ slapd_log_pblock_init(&logpb, log_format, NULL); -+ logpb.conn_id = conn->c_connid; -+ logpb.msg = "missing subject"; -+ logpb.tls_version = sslversion; -+ logpb.keysize = keySize; -+ logpb.cipher = cipher ? cipher : "NULL"; -+ slapd_log_access_tls_client_auth(&logpb); -+ } else { -+ slapi_log_access(LDAP_DEBUG_STATS, -+ "conn=%" PRIu64 " %s %i-bit %s; missing subject\n", -+ conn->c_connid, -+ sslversion, keySize, cipher ? cipher : "NULL"); -+ } - goto done; -- } -- { -+ } else { - char *issuer = issuer_of(clientCert); - char sbuf[BUFSIZ], ibuf[BUFSIZ]; - (void)slapi_getSSLVersion_str(channelInfo.protocolVersion, - sslversion, sizeof(sslversion)); -- slapi_log_access(LDAP_DEBUG_STATS, -- "conn=%" PRIu64 " %s %i-bit %s; client %s; issuer %s\n", -- conn->c_connid, -- sslversion, keySize, -- cipher ? cipher : "NULL", -- escape_string(subject, sbuf), -- issuer ? escape_string(issuer, ibuf) : "NULL"); -+ if (log_format != LOG_FORMAT_DEFAULT) { -+ slapd_log_pblock_init(&logpb, log_format, NULL); -+ logpb.conn_id = conn->c_connid; -+ logpb.tls_version = sslversion; -+ logpb.keysize = keySize; -+ logpb.cipher = cipher ? cipher : "NULL"; -+ logpb.subject = escape_string(subject, sbuf); -+ logpb.issuer = issuer ? escape_string(issuer, ibuf) : "NULL"; -+ slapd_log_access_tls_client_auth(&logpb); -+ } else { -+ slapi_log_access(LDAP_DEBUG_STATS, -+ "conn=%" PRIu64 " %s %i-bit %s; client %s; issuer %s\n", -+ conn->c_connid, -+ sslversion, keySize, -+ cipher ? cipher : "NULL", -+ escape_string(subject, sbuf), -+ issuer ? escape_string(issuer, ibuf) : "NULL"); -+ } - if (issuer) -- free(issuer); -+ slapi_ch_free_string(&issuer); - } - slapi_dn_normalize(subject); - { - LDAPMessage *chain = NULL; - char *basedn = config_get_basedn(); -- int err; - - err = ldapu_cert_to_ldap_entry(clientCert, internal_ld, basedn ? 
basedn : "" /*baseDN*/, &chain); - if (err == LDAPU_SUCCESS && chain) { -@@ -505,18 +576,37 @@ handle_handshake_done(PRFileDesc *prfd, void *clientData) - slapi_sdn_free(&sdn); - (void)slapi_getSSLVersion_str(channelInfo.protocolVersion, - sslversion, sizeof(sslversion)); -- slapi_log_access(LDAP_DEBUG_STATS, -- "conn=%" PRIu64 " %s client bound as %s\n", -- conn->c_connid, -- sslversion, clientDN); -+ if (log_format != LOG_FORMAT_DEFAULT) { -+ slapd_log_pblock_init(&logpb, log_format, NULL); -+ logpb.conn_id = conn->c_connid; -+ logpb.msg = "client bound"; -+ logpb.tls_version = sslversion; -+ logpb.client_dn = clientDN; -+ slapd_log_access_tls_client_auth(&logpb); -+ } else { -+ slapi_log_access(LDAP_DEBUG_STATS, -+ "conn=%" PRIu64 " %s client bound as %s\n", -+ conn->c_connid, -+ sslversion, clientDN); -+ } - } else if (clientCert != NULL) { - (void)slapi_getSSLVersion_str(channelInfo.protocolVersion, - sslversion, sizeof(sslversion)); -- slapi_log_access(LDAP_DEBUG_STATS, -- "conn=%" PRIu64 " %s failed to map client " -- "certificate to LDAP DN (%s)\n", -- conn->c_connid, -- sslversion, extraErrorMsg); -+ if (log_format != LOG_FORMAT_DEFAULT) { -+ slapd_log_pblock_init(&logpb, log_format, NULL); -+ logpb.conn_id = conn->c_connid; -+ logpb.msg = "failed to map client certificate to LDAP DN"; -+ logpb.tls_version = sslversion; -+ logpb.err = err; -+ logpb.err_str = extraErrorMsg; -+ slapd_log_access_tls_client_auth(&logpb); -+ } else { -+ slapi_log_access(LDAP_DEBUG_STATS, -+ "conn=%" PRIu64 " %s failed to map client " -+ "certificate to LDAP DN (%s)\n", -+ conn->c_connid, -+ sslversion, extraErrorMsg); -+ } - } - - /* -diff --git a/ldap/servers/slapd/log.c b/ldap/servers/slapd/log.c -index f535011ab..91ba23047 100644 ---- a/ldap/servers/slapd/log.c -+++ b/ldap/servers/slapd/log.c -@@ -7270,6 +7270,8 @@ slapd_log_pblock_init(slapd_log_pblock *logpb, int32_t log_format, Slapi_PBlock - slapi_pblock_get(pb, SLAPI_CONNECTION, &conn); - } - -+ memset(logpb, 0, sizeof(slapd_log_pblock)); -+ - logpb->loginfo = &loginfo; - logpb->level = 256; /* default log level */ - logpb->log_format = log_format; -diff --git a/ldap/servers/slapd/slapi-private.h b/ldap/servers/slapd/slapi-private.h -index 6438a81fe..da232ae2f 100644 ---- a/ldap/servers/slapd/slapi-private.h -+++ b/ldap/servers/slapd/slapi-private.h -@@ -1549,6 +1549,13 @@ typedef struct slapd_log_pblock { - PRBool using_tls; - PRBool haproxied; - const char *bind_dn; -+ /* TLS */ -+ const char *tls_version; -+ int32_t keysize; -+ const char *cipher; -+ const char *subject; -+ const char *issuer; -+ const char *client_dn; - /* Close connection */ - const char *close_error; - const char *close_reason; -@@ -1619,6 +1626,7 @@ typedef struct slapd_log_pblock { - const char *oid; - const char *msg; - const char *name; -+ const char *err_str; - LDAPControl **request_controls; - LDAPControl **response_controls; - } slapd_log_pblock; -@@ -1645,6 +1653,8 @@ int32_t slapd_log_access_entry(slapd_log_pblock *logpb); - int32_t slapd_log_access_referral(slapd_log_pblock *logpb); - int32_t slapd_log_access_extop(slapd_log_pblock *logpb); - int32_t slapd_log_access_sort(slapd_log_pblock *logpb); -+int32_t slapd_log_access_tls(slapd_log_pblock *logpb); -+int32_t slapd_log_access_tls_client_auth(slapd_log_pblock *logpb); - - #ifdef __cplusplus - } --- -2.49.0 - diff --git a/0018-Issue-6829-Update-parametrized-docstring-for-tests.patch b/0018-Issue-6829-Update-parametrized-docstring-for-tests.patch deleted file mode 100644 index 2ed2bb0..0000000 --- 
a/0018-Issue-6829-Update-parametrized-docstring-for-tests.patch +++ /dev/null @@ -1,43 +0,0 @@ -From 091016df4680e1f9ffc3f78292583800626153c2 Mon Sep 17 00:00:00 2001 -From: Barbora Simonova -Date: Thu, 17 Jul 2025 16:46:57 +0200 -Subject: [PATCH] Issue 6829 - Update parametrized docstring for tests - -Description: -Update the rest of missing parametrized values - -Relates: https://github.com/389ds/389-ds-base/issues/6829 - -Reviewed by: @droideck (Thanks!) ---- - dirsrvtests/tests/suites/logging/error_json_logging_test.py | 1 + - .../tests/suites/schema/schema_replication_origin_test.py | 1 + - 2 files changed, 2 insertions(+) - -diff --git a/dirsrvtests/tests/suites/logging/error_json_logging_test.py b/dirsrvtests/tests/suites/logging/error_json_logging_test.py -index 87e1840a6..e0b3d7317 100644 ---- a/dirsrvtests/tests/suites/logging/error_json_logging_test.py -+++ b/dirsrvtests/tests/suites/logging/error_json_logging_test.py -@@ -29,6 +29,7 @@ def test_error_json_format(topo, log_format): - """Test error log is in JSON - - :id: c9afb295-43de-4581-af8b-ec8f25a06d75 -+ :parametrized: yes - :setup: Standalone - :steps: - 1. Check error log has json and the expected data is present -diff --git a/dirsrvtests/tests/suites/schema/schema_replication_origin_test.py b/dirsrvtests/tests/suites/schema/schema_replication_origin_test.py -index 9e4ce498c..e93dddad0 100644 ---- a/dirsrvtests/tests/suites/schema/schema_replication_origin_test.py -+++ b/dirsrvtests/tests/suites/schema/schema_replication_origin_test.py -@@ -157,6 +157,7 @@ def test_schema_xorigin_repl(topology, schema_replication_init, xorigin): - schema is pushed and there is a message in the error log - - :id: 2b29823b-3e83-4b25-954a-8a081dbc15ee -+ :parametrized: yes - :setup: Supplier and consumer topology, with one user entry; - Supplier, hub and consumer topology, with one user entry - :steps: --- -2.49.0 - diff --git a/0019-Issue-6772-dsconf-Replicas-with-the-consumer-role-al.patch b/0019-Issue-6772-dsconf-Replicas-with-the-consumer-role-al.patch deleted file mode 100644 index 8556f9a..0000000 --- a/0019-Issue-6772-dsconf-Replicas-with-the-consumer-role-al.patch +++ /dev/null @@ -1,67 +0,0 @@ -From 863c244cc137376ee8da0f007fc9b9da88d8dbc0 Mon Sep 17 00:00:00 2001 -From: Anuar Beisembayev <111912342+abeisemb@users.noreply.github.com> -Date: Wed, 23 Jul 2025 23:48:11 -0400 -Subject: [PATCH] Issue 6772 - dsconf - Replicas with the "consumer" role allow - for viewing and modification of their changelog. (#6773) - -dsconf currently allows users to set and retrieve changelogs in consumer replicas, which do not have officially supported changelogs. This can lead to undefined behavior and confusion. -This commit prints a warning message if the user tries to interact with a changelog on a consumer replica. - -Resolves: https://github.com/389ds/389-ds-base/issues/6772 - -Reviewed by: @droideck ---- - src/lib389/lib389/cli_conf/replication.py | 23 +++++++++++++++++++++++ - 1 file changed, 23 insertions(+) - -diff --git a/src/lib389/lib389/cli_conf/replication.py b/src/lib389/lib389/cli_conf/replication.py -index 6f77f34ca..a18bf83ca 100644 ---- a/src/lib389/lib389/cli_conf/replication.py -+++ b/src/lib389/lib389/cli_conf/replication.py -@@ -686,6 +686,9 @@ def set_per_backend_cl(inst, basedn, log, args): - replace_list = [] - did_something = False - -+ if (is_replica_role_consumer(inst, suffix)): -+ log.info("Warning: Changelogs are not supported for consumer replicas. 
You may run into undefined behavior.") -+ - if args.encrypt: - cl.replace('nsslapd-encryptionalgorithm', 'AES') - del args.encrypt -@@ -715,6 +718,10 @@ def set_per_backend_cl(inst, basedn, log, args): - # that means there is a changelog config entry per backend (aka suffix) - def get_per_backend_cl(inst, basedn, log, args): - suffix = args.suffix -+ -+ if (is_replica_role_consumer(inst, suffix)): -+ log.info("Warning: Changelogs are not supported for consumer replicas. You may run into undefined behavior.") -+ - cl = Changelog(inst, suffix) - if args and args.json: - log.info(cl.get_all_attrs_json()) -@@ -822,6 +829,22 @@ def del_repl_manager(inst, basedn, log, args): - - log.info("Successfully deleted replication manager: " + manager_dn) - -+def is_replica_role_consumer(inst, suffix): -+ """Helper function for get_per_backend_cl and set_per_backend_cl. -+ Makes sure the instance in question is not a consumer, which is a role that -+ does not support changelogs. -+ """ -+ replicas = Replicas(inst) -+ try: -+ replica = replicas.get(suffix) -+ role = replica.get_role() -+ except ldap.NO_SUCH_OBJECT: -+ raise ValueError(f"Backend \"{suffix}\" is not enabled for replication") -+ -+ if role == ReplicaRole.CONSUMER: -+ return True -+ else: -+ return False - - # - # Agreements --- -2.49.0 - diff --git a/0020-Issue-6893-Log-user-that-is-updated-during-password-.patch b/0020-Issue-6893-Log-user-that-is-updated-during-password-.patch deleted file mode 100644 index 80842be..0000000 --- a/0020-Issue-6893-Log-user-that-is-updated-during-password-.patch +++ /dev/null @@ -1,360 +0,0 @@ -From 79b68019ff4b17c4b80fd2c6c725071a050559ca Mon Sep 17 00:00:00 2001 -From: Mark Reynolds -Date: Mon, 21 Jul 2025 18:07:21 -0400 -Subject: [PATCH] Issue 6893 - Log user that is updated during password modify - extended operation - -Description: - -When a user's password is updated via an extended operation (password modify -plugin) we only log the bind DN and not what user was updated. While "internal -operation" logging will display the user, it should be logged by the default -logging level. - -Add access logging using "EXT_INFO" for the old logging format, and -"EXTENDED_OP_INFO" for json logging where we display the bind dn, target -dn, and message. - -Relates: https://github.com/389ds/389-ds-base/issues/6893 - -Reviewed by: spichugi & tbordaz(Thanks!!)
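-
-For illustration, in the legacy format the new line looks like this (the
-format string is the one added in the patch below; conn/op numbers and DNs
-are illustrative):
-
-```
-conn=15 op=2 EXT_INFO name="passwd_modify_plugin" bind_dn="cn=directory manager" target_dn="uid=testuser,ou=people,dc=example,dc=com" msg="success" rc=0
-```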
---- - .../logging/access_json_logging_test.py | 98 +++++++++++++++---- - ldap/servers/slapd/accesslog.c | 47 +++++++++ - ldap/servers/slapd/passwd_extop.c | 69 +++++++------ - ldap/servers/slapd/slapi-private.h | 1 + - 4 files changed, 169 insertions(+), 46 deletions(-) - -diff --git a/dirsrvtests/tests/suites/logging/access_json_logging_test.py b/dirsrvtests/tests/suites/logging/access_json_logging_test.py -index f0dc861a7..699bd8c4d 100644 ---- a/dirsrvtests/tests/suites/logging/access_json_logging_test.py -+++ b/dirsrvtests/tests/suites/logging/access_json_logging_test.py -@@ -11,7 +11,7 @@ import os - import time - import ldap - import pytest --from lib389._constants import DEFAULT_SUFFIX, PASSWORD, LOG_ACCESS_LEVEL -+from lib389._constants import DEFAULT_SUFFIX, PASSWORD, LOG_ACCESS_LEVEL, DN_DM - from lib389.properties import TASK_WAIT - from lib389.topologies import topology_m2 as topo_m2 - from lib389.idm.group import Groups -@@ -548,22 +548,6 @@ def test_access_json_format(topo_m2, setup_test): - "2.16.840.1.113730.3.4.3", - "LDAP_CONTROL_PERSISTENTSEARCH") - -- # -- # Extended op -- # -- log.info("Test EXTENDED_OP") -- event = get_log_event(inst, "EXTENDED_OP", "oid", -- "2.16.840.1.113730.3.5.12") -- assert event is not None -- assert event['oid_name'] == "REPL_START_NSDS90_REPLICATION_REQUEST_OID" -- assert event['name'] == "replication-multisupplier-extop" -- -- event = get_log_event(inst, "EXTENDED_OP", "oid", -- "2.16.840.1.113730.3.5.5") -- assert event is not None -- assert event['oid_name'] == "REPL_END_NSDS50_REPLICATION_REQUEST_OID" -- assert event['name'] == "replication-multisupplier-extop" -- - # - # TLS INFO/TLS CLIENT INFO - # -@@ -579,7 +563,8 @@ def test_access_json_format(topo_m2, setup_test): - 'sn': RDN_TEST_USER, - 'uidNumber': '1000', - 'gidNumber': '2000', -- 'homeDirectory': f'/home/{RDN_TEST_USER}' -+ 'homeDirectory': f'/home/{RDN_TEST_USER}', -+ 'userpassword': 'password' - }) - - ssca_dir = inst.get_ssca_dir() -@@ -646,6 +631,83 @@ def test_access_json_format(topo_m2, setup_test): - assert event['msg'] == "failed to map client certificate to LDAP DN" - assert event['err_msg'] == "Certificate couldn't be mapped to an ldap entry" - -+ # -+ # Extended op -+ # -+ log.info("Test EXTENDED_OP") -+ event = get_log_event(inst, "EXTENDED_OP", "oid", -+ "2.16.840.1.113730.3.5.12") -+ assert event is not None -+ assert event['oid_name'] == "REPL_START_NSDS90_REPLICATION_REQUEST_OID" -+ assert event['name'] == "replication-multisupplier-extop" -+ -+ event = get_log_event(inst, "EXTENDED_OP", "oid", -+ "2.16.840.1.113730.3.5.5") -+ assert event is not None -+ assert event['oid_name'] == "REPL_END_NSDS50_REPLICATION_REQUEST_OID" -+ assert event['name'] == "replication-multisupplier-extop" -+ -+ # -+ # Extended op info -+ # -+ log.info("Test EXTENDED_OP_INFO") -+ OLD_PASSWD = 'password' -+ NEW_PASSWD = 'newpassword' -+ -+ assert inst.simple_bind_s(DN_DM, PASSWORD) -+ -+ assert inst.passwd_s(user.dn, OLD_PASSWD, NEW_PASSWD) -+ event = get_log_event(inst, "EXTENDED_OP_INFO", "name", -+ "passwd_modify_plugin") -+ assert event is not None -+ assert event['bind_dn'] == "cn=directory manager" -+ assert event['target_dn'] == user.dn.lower() -+ assert event['msg'] == "success" -+ -+ # Test no such object -+ BAD_DN = user.dn + ",dc=not" -+ with pytest.raises(ldap.NO_SUCH_OBJECT): -+ inst.passwd_s(BAD_DN, OLD_PASSWD, NEW_PASSWD) -+ -+ event = get_log_event(inst, "EXTENDED_OP_INFO", "target_dn", BAD_DN) -+ assert event is not None -+ assert event['bind_dn'] == "cn=directory 
manager" -+ assert event['target_dn'] == BAD_DN.lower() -+ assert event['msg'] == "No such entry exists." -+ -+ # Test invalid old password -+ with pytest.raises(ldap.INVALID_CREDENTIALS): -+ inst.passwd_s(user.dn, "not_the_old_pw", NEW_PASSWD) -+ event = get_log_event(inst, "EXTENDED_OP_INFO", "err", 49) -+ assert event is not None -+ assert event['bind_dn'] == "cn=directory manager" -+ assert event['target_dn'] == user.dn.lower() -+ assert event['msg'] == "Invalid oldPasswd value." -+ -+ # Test user without permissions -+ user2 = users.create(properties={ -+ 'uid': RDN_TEST_USER + "2", -+ 'cn': RDN_TEST_USER + "2", -+ 'sn': RDN_TEST_USER + "2", -+ 'uidNumber': '1001', -+ 'gidNumber': '2001', -+ 'homeDirectory': f'/home/{RDN_TEST_USER + "2"}', -+ 'userpassword': 'password' -+ }) -+ inst.simple_bind_s(user2.dn, 'password') -+ with pytest.raises(ldap.INSUFFICIENT_ACCESS): -+ inst.passwd_s(user.dn, NEW_PASSWD, OLD_PASSWD) -+ event = get_log_event(inst, "EXTENDED_OP_INFO", "err", 50) -+ assert event is not None -+ assert event['bind_dn'] == user2.dn.lower() -+ assert event['target_dn'] == user.dn.lower() -+ assert event['msg'] == "Insufficient access rights" -+ -+ -+ # Reset bind -+ inst.simple_bind_s(DN_DM, PASSWORD) -+ -+ - - if __name__ == '__main__': - # Run isolated -diff --git a/ldap/servers/slapd/accesslog.c b/ldap/servers/slapd/accesslog.c -index 072ace203..46228d4a1 100644 ---- a/ldap/servers/slapd/accesslog.c -+++ b/ldap/servers/slapd/accesslog.c -@@ -1113,6 +1113,53 @@ slapd_log_access_extop(slapd_log_pblock *logpb) - return rc; - } - -+/* -+ * Extended operation information -+ * -+ * int32_t log_format -+ * time_t conn_time -+ * uint64_t conn_id -+ * int32_t op_id -+ * const char *name -+ * const char *bind_dn -+ * const char *tartet_dn -+ * const char *msg -+ */ -+int32_t -+slapd_log_access_extop_info(slapd_log_pblock *logpb) -+{ -+ int32_t rc = 0; -+ char *msg = NULL; -+ json_object *json_obj = NULL; -+ -+ if ((json_obj = build_base_obj(logpb, "EXTENDED_OP_INFO")) == NULL) { -+ return rc; -+ } -+ -+ if (logpb->name) { -+ json_object_object_add(json_obj, "name", json_obj_add_str(logpb->name)); -+ } -+ if (logpb->target_dn) { -+ json_object_object_add(json_obj, "target_dn", json_obj_add_str(logpb->target_dn)); -+ } -+ if (logpb->bind_dn) { -+ json_object_object_add(json_obj, "bind_dn", json_obj_add_str(logpb->bind_dn)); -+ } -+ if (logpb->msg) { -+ json_object_object_add(json_obj, "msg", json_obj_add_str(logpb->msg)); -+ } -+ json_object_object_add(json_obj, "err", json_object_new_int(logpb->err)); -+ -+ /* Convert json object to string and log it */ -+ msg = (char *)json_object_to_json_string_ext(json_obj, logpb->log_format); -+ rc = slapd_log_access_json(msg); -+ -+ /* Done with JSON object - free it */ -+ json_object_put(json_obj); -+ -+ return rc; -+} -+ - /* - * Sort - * -diff --git a/ldap/servers/slapd/passwd_extop.c b/ldap/servers/slapd/passwd_extop.c -index 4bb60afd6..69bb3494c 100644 ---- a/ldap/servers/slapd/passwd_extop.c -+++ b/ldap/servers/slapd/passwd_extop.c -@@ -465,12 +465,14 @@ passwd_modify_extop(Slapi_PBlock *pb) - BerElement *response_ber = NULL; - Slapi_Entry *targetEntry = NULL; - Connection *conn = NULL; -+ Operation *pb_op = NULL; - LDAPControl **req_controls = NULL; - LDAPControl **resp_controls = NULL; - passwdPolicy *pwpolicy = NULL; - Slapi_DN *target_sdn = NULL; - Slapi_Entry *referrals = NULL; -- /* Slapi_DN sdn; */ -+ Slapi_Backend *be = NULL; -+ int32_t log_format = config_get_accesslog_log_format(); - - slapi_log_err(SLAPI_LOG_TRACE, 
"passwd_modify_extop", "=>\n"); - -@@ -647,7 +649,7 @@ parse_req_done: - } - dn = slapi_sdn_get_ndn(target_sdn); - if (dn == NULL || *dn == '\0') { -- /* Refuse the operation because they're bound anonymously */ -+ /* Invalid DN - refuse the operation */ - errMesg = "Invalid dn."; - rc = LDAP_INVALID_DN_SYNTAX; - goto free_and_return; -@@ -724,14 +726,19 @@ parse_req_done: - ber_free(response_ber, 1); - } - -- slapi_pblock_set(pb, SLAPI_ORIGINAL_TARGET, (void *)dn); -+ slapi_pblock_get(pb, SLAPI_OPERATION, &pb_op); -+ if (pb_op == NULL) { -+ slapi_log_err(SLAPI_LOG_ERR, "passwd_modify_extop", "pb_op is NULL\n"); -+ goto free_and_return; -+ } - -+ slapi_pblock_set(pb, SLAPI_ORIGINAL_TARGET, (void *)dn); - /* Now we have the DN, look for the entry */ - ret = passwd_modify_getEntry(dn, &targetEntry); - /* If we can't find the entry, then that's an error */ - if (ret) { - /* Couldn't find the entry, fail */ -- errMesg = "No such Entry exists."; -+ errMesg = "No such entry exists."; - rc = LDAP_NO_SUCH_OBJECT; - goto free_and_return; - } -@@ -742,30 +749,18 @@ parse_req_done: - leak any useful information to the client such as current password - wrong, etc. - */ -- Operation *pb_op = NULL; -- slapi_pblock_get(pb, SLAPI_OPERATION, &pb_op); -- if (pb_op == NULL) { -- slapi_log_err(SLAPI_LOG_ERR, "passwd_modify_extop", "pb_op is NULL\n"); -- goto free_and_return; -- } -- - operation_set_target_spec(pb_op, slapi_entry_get_sdn(targetEntry)); - slapi_pblock_set(pb, SLAPI_REQUESTOR_ISROOT, &pb_op->o_isroot); - -- /* In order to perform the access control check , we need to select a backend (even though -- * we don't actually need it otherwise). -- */ -- { -- Slapi_Backend *be = NULL; -- -- be = slapi_mapping_tree_find_backend_for_sdn(slapi_entry_get_sdn(targetEntry)); -- if (NULL == be) { -- errMesg = "Failed to find backend for target entry"; -- rc = LDAP_OPERATIONS_ERROR; -- goto free_and_return; -- } -- slapi_pblock_set(pb, SLAPI_BACKEND, be); -+ /* In order to perform the access control check, we need to select a backend (even though -+ * we don't actually need it otherwise). */ -+ be = slapi_mapping_tree_find_backend_for_sdn(slapi_entry_get_sdn(targetEntry)); -+ if (NULL == be) { -+ errMesg = "Failed to find backend for target entry"; -+ rc = LDAP_NO_SUCH_OBJECT; -+ goto free_and_return; - } -+ slapi_pblock_set(pb, SLAPI_BACKEND, be); - - /* Check if the pwpolicy control is present */ - slapi_pblock_get(pb, SLAPI_PWPOLICY, &need_pwpolicy_ctrl); -@@ -797,10 +792,7 @@ parse_req_done: - /* Check if password policy allows users to change their passwords. We need to do - * this here since the normal modify code doesn't perform this check for - * internal operations. */ -- -- Connection *pb_conn; -- slapi_pblock_get(pb, SLAPI_CONNECTION, &pb_conn); -- if (!pb_op->o_isroot && !pb_conn->c_needpw && !pwpolicy->pw_change) { -+ if (!pb_op->o_isroot && !conn->c_needpw && !pwpolicy->pw_change) { - if (NULL == bindSDN) { - bindSDN = slapi_sdn_new_normdn_byref(bindDN); - } -@@ -848,6 +840,27 @@ free_and_return: - slapi_log_err(SLAPI_LOG_PLUGIN, "passwd_modify_extop", - "%s\n", errMesg ? errMesg : "success"); - -+ if (dn) { -+ /* Log the target ndn (if we have a target ndn) */ -+ if (log_format != LOG_FORMAT_DEFAULT) { -+ /* JSON logging */ -+ slapd_log_pblock logpb = {0}; -+ slapd_log_pblock_init(&logpb, log_format, pb); -+ logpb.name = "passwd_modify_plugin"; -+ logpb.target_dn = dn; -+ logpb.bind_dn = bindDN; -+ logpb.msg = errMesg ? 
errMesg : "success"; -+ logpb.err = rc; -+ slapd_log_access_extop_info(&logpb); -+ } else { -+ slapi_log_access(LDAP_DEBUG_STATS, -+ "conn=%" PRIu64 " op=%d EXT_INFO name=\"passwd_modify_plugin\" bind_dn=\"%s\" target_dn=\"%s\" msg=\"%s\" rc=%d\n", -+ conn ? conn->c_connid : -1, pb_op ? pb_op->o_opid : -1, -+ bindDN ? bindDN : "", dn, -+ errMesg ? errMesg : "success", rc); -+ } -+ } -+ - if ((rc == LDAP_REFERRAL) && (referrals)) { - send_referrals_from_entry(pb, referrals); - } else { -diff --git a/ldap/servers/slapd/slapi-private.h b/ldap/servers/slapd/slapi-private.h -index da232ae2f..e9abf8b75 100644 ---- a/ldap/servers/slapd/slapi-private.h -+++ b/ldap/servers/slapd/slapi-private.h -@@ -1652,6 +1652,7 @@ int32_t slapd_log_access_vlv(slapd_log_pblock *logpb); - int32_t slapd_log_access_entry(slapd_log_pblock *logpb); - int32_t slapd_log_access_referral(slapd_log_pblock *logpb); - int32_t slapd_log_access_extop(slapd_log_pblock *logpb); -+int32_t slapd_log_access_extop_info(slapd_log_pblock *logpb); - int32_t slapd_log_access_sort(slapd_log_pblock *logpb); - int32_t slapd_log_access_tls(slapd_log_pblock *logpb); - int32_t slapd_log_access_tls_client_auth(slapd_log_pblock *logpb); --- -2.49.0 - diff --git a/0021-Issue-6352-Fix-DeprecationWarning.patch b/0021-Issue-6352-Fix-DeprecationWarning.patch deleted file mode 100644 index 49a484a..0000000 --- a/0021-Issue-6352-Fix-DeprecationWarning.patch +++ /dev/null @@ -1,37 +0,0 @@ -From ab188bdfed7a144734c715f57ba4772c8d453b6f Mon Sep 17 00:00:00 2001 -From: Viktor Ashirov -Date: Fri, 11 Jul 2025 13:12:44 +0200 -Subject: [PATCH] Issue 6352 - Fix DeprecationWarning - -Bug Description: -When pytest is used on ASAN build, pytest-html plugin collects `*asan*` -files, which results in the following deprecation warning: - -``` -The 'report.extra' attribute is deprecated and will be removed in a -future release, use 'report.extras' instead. -``` - -Fixes: https://github.com/389ds/389-ds-base/issues/6352 - -Reviwed by: @droideck (Thanks!) ---- - dirsrvtests/conftest.py | 2 +- - 1 file changed, 1 insertion(+), 1 deletion(-) - -diff --git a/dirsrvtests/conftest.py b/dirsrvtests/conftest.py -index c989729c1..0db6045f4 100644 ---- a/dirsrvtests/conftest.py -+++ b/dirsrvtests/conftest.py -@@ -138,7 +138,7 @@ def pytest_runtest_makereport(item, call): - log_name = os.path.basename(f) - instance_name = os.path.basename(os.path.dirname(f)).split("slapd-",1)[1] - extra.append(pytest_html.extras.text(text, name=f"{instance_name}-{log_name}")) -- report.extra = extra -+ report.extras = extra - - # Make a screenshot if WebUI test fails - if call.when == "call" and "WEBUI" in os.environ: --- -2.49.0 - diff --git a/0022-Issue-6880-Fix-ds_logs-test-suite-failure.patch b/0022-Issue-6880-Fix-ds_logs-test-suite-failure.patch deleted file mode 100644 index 948fbb4..0000000 --- a/0022-Issue-6880-Fix-ds_logs-test-suite-failure.patch +++ /dev/null @@ -1,38 +0,0 @@ -From 5b0188baf00c395dac807657736ed51968d5c3a0 Mon Sep 17 00:00:00 2001 -From: Viktor Ashirov -Date: Thu, 17 Jul 2025 13:41:04 +0200 -Subject: [PATCH] Issue 6880 - Fix ds_logs test suite failure - -Bug Description: -After 947ee67df6 ds_logs test suite started to fail in -test_internal_log_server_level_4. It slightly changed the order and -timing of log messages. - -Fix Description: -Do another MOD after restart to trigger the internal search. - -Fixes: https://github.com/389ds/389-ds-base/issues/6880 - -Reviewed by: @bsimonova, @droideck (Thanks!) 
---- - dirsrvtests/tests/suites/ds_logs/ds_logs_test.py | 4 ++++ - 1 file changed, 4 insertions(+) - -diff --git a/dirsrvtests/tests/suites/ds_logs/ds_logs_test.py b/dirsrvtests/tests/suites/ds_logs/ds_logs_test.py -index 6fd790c18..eff6780cd 100644 ---- a/dirsrvtests/tests/suites/ds_logs/ds_logs_test.py -+++ b/dirsrvtests/tests/suites/ds_logs/ds_logs_test.py -@@ -356,6 +356,10 @@ def test_internal_log_server_level_4(topology_st, clean_access_logs, disable_acc - log.info('Restart the server to flush the logs') - topo.restart() - -+ # After 947ee67 log dynamic has changed slightly -+ # Do another MOD to trigger the internal search -+ topo.config.set(LOG_ACCESS_LEVEL, access_log_level) -+ - try: - # These comments contain lines we are trying to find without regex (the op numbers are just examples) - log.info("Check if access log contains internal MOD operation in correct format") --- -2.49.0 - diff --git a/0023-Issue-6901-Update-changelog-trimming-logging.patch b/0023-Issue-6901-Update-changelog-trimming-logging.patch deleted file mode 100644 index 662c640..0000000 --- a/0023-Issue-6901-Update-changelog-trimming-logging.patch +++ /dev/null @@ -1,53 +0,0 @@ -From 13b0a1637b2fb8eb8b6f5fa391721f61bfe41874 Mon Sep 17 00:00:00 2001 -From: Viktor Ashirov -Date: Thu, 24 Jul 2025 19:09:40 +0200 -Subject: [PATCH] Issue 6901 - Update changelog trimming logging - -Description: -* Set SLAPI_LOG_ERR for message in `_cl5DispatchTrimThread` -* Set correct function name for logs in `_cl5TrimEntry`. -* Add number of scanned entries to the log. - -Fixes: https://github.com/389ds/389-ds-base/issues/6901 - -Reviewed by: @mreynolds389, @progier389 (Thanks!) ---- - ldap/servers/plugins/replication/cl5_api.c | 8 ++++---- - 1 file changed, 4 insertions(+), 4 deletions(-) - -diff --git a/ldap/servers/plugins/replication/cl5_api.c b/ldap/servers/plugins/replication/cl5_api.c -index 3c356abc0..1d62aa020 100644 ---- a/ldap/servers/plugins/replication/cl5_api.c -+++ b/ldap/servers/plugins/replication/cl5_api.c -@@ -2007,7 +2007,7 @@ _cl5DispatchTrimThread(Replica *replica) - (void *)replica, PR_PRIORITY_NORMAL, PR_GLOBAL_THREAD, - PR_UNJOINABLE_THREAD, DEFAULT_THREAD_STACKSIZE); - if (NULL == pth) { -- slapi_log_err(SLAPI_LOG_REPL, repl_plugin_name_cl, -+ slapi_log_err(SLAPI_LOG_ERR, repl_plugin_name_cl, - "_cl5DispatchTrimThread - Failed to create trimming thread for %s" - "; NSPR error - %d\n", replica_get_name(replica), - PR_GetError()); -@@ -2788,7 +2788,7 @@ _cl5TrimEntry(dbi_val_t *key, dbi_val_t *data, void *ctx) - return DBI_RC_NOTFOUND; - } else { - slapi_log_err(SLAPI_LOG_REPL, repl_plugin_name_cl, -- "_cl5TrimReplica - Changelog purge skipped anchor csn %s\n", -+ "_cl5TrimEntry - Changelog purge skipped anchor csn %s\n", - (char*)key->data); - return DBI_RC_SUCCESS; - } -@@ -2867,8 +2867,8 @@ _cl5TrimReplica(Replica *r) - slapi_ch_free((void**)&dblcictx.rids); - - if (dblcictx.changed.tot) { -- slapi_log_err(SLAPI_LOG_REPL, repl_plugin_name_cl, "_cl5TrimReplica - Trimmed %ld changes from the changelog\n", -- dblcictx.changed.tot); -+ slapi_log_err(SLAPI_LOG_REPL, repl_plugin_name_cl, "_cl5TrimReplica - Scanned %ld records, and trimmed %ld changes from the changelog\n", -+ dblcictx.seen.tot, dblcictx.changed.tot); - } - } - --- -2.49.0 - diff --git a/0024-Issue-6895-Crash-if-repl-keep-alive-entry-can-not-be.patch b/0024-Issue-6895-Crash-if-repl-keep-alive-entry-can-not-be.patch deleted file mode 100644 index 78a41d6..0000000 --- a/0024-Issue-6895-Crash-if-repl-keep-alive-entry-can-not-be.patch +++ /dev/null @@ 
-1,98 +0,0 @@
-From 9cb193418d4dd07182d2e1a38a5cc5a2a41e2877 Mon Sep 17 00:00:00 2001
-From: Mark Reynolds
-Date: Wed, 23 Jul 2025 19:35:32 -0400
-Subject: [PATCH] Issue 6895 - Crash if repl keep alive entry can not be
- created
-
-Description:
-
-Heap use after free when logging that the replication keep-alive entry can not
-be created. slapi_add_internal_pb() frees the slapi entry, then
-we try and get the dn from the entry and we get a use-after-free crash.
-
-Relates: https://github.com/389ds/389-ds-base/issues/6895
-
-Reviewed by: spichugi(Thanks!)
----
- ldap/servers/plugins/chainingdb/cb_config.c        | 3 +--
- ldap/servers/plugins/posix-winsync/posix-winsync.c | 1 -
- ldap/servers/plugins/replication/repl5_init.c      | 3 ---
- ldap/servers/plugins/replication/repl5_replica.c   | 8 ++++----
- 4 files changed, 5 insertions(+), 10 deletions(-)
-
-diff --git a/ldap/servers/plugins/chainingdb/cb_config.c b/ldap/servers/plugins/chainingdb/cb_config.c
-index 40a7088d7..24fa1bcb3 100644
---- a/ldap/servers/plugins/chainingdb/cb_config.c
-+++ b/ldap/servers/plugins/chainingdb/cb_config.c
-@@ -44,8 +44,7 @@ cb_config_add_dse_entries(cb_backend *cb, char **entries, char *string1, char *s
-         slapi_pblock_get(util_pb, SLAPI_PLUGIN_INTOP_RESULT, &res);
-         if (LDAP_SUCCESS != res && LDAP_ALREADY_EXISTS != res) {
-             slapi_log_err(SLAPI_LOG_ERR, CB_PLUGIN_SUBSYSTEM,
--                          "cb_config_add_dse_entries - Unable to add config entry (%s) to the DSE: %s\n",
--                          slapi_entry_get_dn(e),
-+                          "cb_config_add_dse_entries - Unable to add config entry to the DSE: %s\n",
-                           ldap_err2string(res));
-             rc = res;
-             slapi_pblock_destroy(util_pb);
-diff --git a/ldap/servers/plugins/posix-winsync/posix-winsync.c b/ldap/servers/plugins/posix-winsync/posix-winsync.c
-index 51a55b643..3a002bb70 100644
---- a/ldap/servers/plugins/posix-winsync/posix-winsync.c
-+++ b/ldap/servers/plugins/posix-winsync/posix-winsync.c
-@@ -1626,7 +1626,6 @@ posix_winsync_end_update_cb(void *cbdata __attribute__((unused)),
-                       "posix_winsync_end_update_cb: "
-                       "add task entry\n");
-     }
--    /* slapi_entry_free(e_task); */
-     slapi_pblock_destroy(pb);
-     pb = NULL;
-     posix_winsync_config_reset_MOFTaskCreated();
-diff --git a/ldap/servers/plugins/replication/repl5_init.c b/ldap/servers/plugins/replication/repl5_init.c
-index 8bc0b5372..5047fb8dc 100644
---- a/ldap/servers/plugins/replication/repl5_init.c
-+++ b/ldap/servers/plugins/replication/repl5_init.c
-@@ -682,7 +682,6 @@ create_repl_schema_policy(void)
-                       repl_schema_top,
-                       ldap_err2string(return_value));
-         rc = -1;
--        slapi_entry_free(e); /* The entry was not consumed */
-         goto done;
-     }
-     slapi_pblock_destroy(pb);
-@@ -703,7 +702,6 @@
-                       repl_schema_supplier,
-                       ldap_err2string(return_value));
-         rc = -1;
--        slapi_entry_free(e); /* The entry was not consumed */
-         goto done;
-     }
-     slapi_pblock_destroy(pb);
-@@ -724,7 +722,6 @@
-                       repl_schema_consumer,
-                       ldap_err2string(return_value));
-         rc = -1;
--        slapi_entry_free(e); /* The entry was not consumed */
-         goto done;
-     }
-     slapi_pblock_destroy(pb);
-diff --git a/ldap/servers/plugins/replication/repl5_replica.c b/ldap/servers/plugins/replication/repl5_replica.c
-index 59062b46b..a97c807e9 100644
---- a/ldap/servers/plugins/replication/repl5_replica.c
-+++ b/ldap/servers/plugins/replication/repl5_replica.c
-@@ -465,10 +465,10 @@ replica_subentry_create(const char *repl_root, ReplicaId rid)
-     if (return_value != LDAP_SUCCESS &&
-         return_value != LDAP_ALREADY_EXISTS &&
-         return_value != LDAP_REFERRAL /* CONSUMER */) {
-- slapi_log_err(SLAPI_LOG_ERR, repl_plugin_name, "replica_subentry_create - Unable to " -- "create replication keep alive entry %s: error %d - %s\n", -- slapi_entry_get_dn_const(e), -- return_value, ldap_err2string(return_value)); -+ slapi_log_err(SLAPI_LOG_ERR, repl_plugin_name, "replica_subentry_create - " -+ "Unable to create replication keep alive entry 'cn=%s %d,%s': error %d - %s\n", -+ KEEP_ALIVE_ENTRY, rid, repl_root, -+ return_value, ldap_err2string(return_value)); - rc = -1; - goto done; - } --- -2.49.0 - diff --git a/0025-Issue-6250-Add-test-for-entryUSN-overflow-on-failed-.patch b/0025-Issue-6250-Add-test-for-entryUSN-overflow-on-failed-.patch deleted file mode 100644 index 706a48c..0000000 --- a/0025-Issue-6250-Add-test-for-entryUSN-overflow-on-failed-.patch +++ /dev/null @@ -1,352 +0,0 @@ -From a0cdf2970edb46acec06a5ac204ec04135806b35 Mon Sep 17 00:00:00 2001 -From: Simon Pichugin -Date: Mon, 28 Jul 2025 10:50:26 -0700 -Subject: [PATCH] Issue 6250 - Add test for entryUSN overflow on failed add - operations (#6821) - -Description: Add comprehensive test to reproduce the entryUSN -overflow issue where failed attempts to add existing entries followed by -modify operations cause entryUSN values to underflow/overflow instead of -incrementing properly. - -Related: https://github.com/389ds/389-ds-base/issues/6250 - -Reviewed by: @tbordaz (Thanks!) ---- - .../suites/plugins/entryusn_overflow_test.py | 323 ++++++++++++++++++ - 1 file changed, 323 insertions(+) - create mode 100644 dirsrvtests/tests/suites/plugins/entryusn_overflow_test.py - -diff --git a/dirsrvtests/tests/suites/plugins/entryusn_overflow_test.py b/dirsrvtests/tests/suites/plugins/entryusn_overflow_test.py -new file mode 100644 -index 000000000..a23d734ca ---- /dev/null -+++ b/dirsrvtests/tests/suites/plugins/entryusn_overflow_test.py -@@ -0,0 +1,323 @@ -+# --- BEGIN COPYRIGHT BLOCK --- -+# Copyright (C) 2025 Red Hat, Inc. -+# All rights reserved. -+# -+# License: GPL (version 3 or any later version). -+# See LICENSE for details. 
-+# --- END COPYRIGHT BLOCK --- -+# -+import os -+import ldap -+import logging -+import pytest -+import time -+import random -+from lib389._constants import DEFAULT_SUFFIX -+from lib389.config import Config -+from lib389.plugins import USNPlugin -+from lib389.idm.user import UserAccounts -+from lib389.topologies import topology_st -+from lib389.rootdse import RootDSE -+ -+pytestmark = pytest.mark.tier2 -+ -+log = logging.getLogger(__name__) -+ -+# Test constants -+DEMO_USER_BASE_DN = "uid=demo_user,ou=people," + DEFAULT_SUFFIX -+TEST_USER_PREFIX = "Demo User" -+MAX_USN_64BIT = 18446744073709551615 # 2^64 - 1 -+ITERATIONS = 10 -+ADD_EXISTING_ENTRY_MAX_ATTEMPTS = 5 -+ -+ -+@pytest.fixture(scope="module") -+def setup_usn_test(topology_st, request): -+ """Setup USN plugin and test data for entryUSN overflow testing""" -+ -+ inst = topology_st.standalone -+ -+ log.info("Enable the USN plugin...") -+ plugin = USNPlugin(inst) -+ plugin.enable() -+ plugin.enable_global_mode() -+ -+ inst.restart() -+ -+ # Create initial test users -+ users = UserAccounts(inst, DEFAULT_SUFFIX) -+ created_users = [] -+ -+ log.info("Creating initial test users...") -+ for i in range(3): -+ user_props = { -+ 'uid': f'{TEST_USER_PREFIX}-{i}', -+ 'cn': f'{TEST_USER_PREFIX}-{i}', -+ 'sn': f'User{i}', -+ 'uidNumber': str(1000 + i), -+ 'gidNumber': str(1000 + i), -+ 'homeDirectory': f'/home/{TEST_USER_PREFIX}-{i}', -+ 'userPassword': 'password123' -+ } -+ try: -+ user = users.create(properties=user_props) -+ created_users.append(user) -+ log.info(f"Created user: {user.dn}") -+ except ldap.ALREADY_EXISTS: -+ log.info(f"User {user_props['uid']} already exists, skipping creation") -+ user = users.get(user_props['uid']) -+ created_users.append(user) -+ -+ def fin(): -+ log.info("Cleaning up test users...") -+ for user in created_users: -+ try: -+ user.delete() -+ except ldap.NO_SUCH_OBJECT: -+ pass -+ -+ request.addfinalizer(fin) -+ -+ return created_users -+ -+ -+def test_entryusn_overflow_on_add_existing_entries(topology_st, setup_usn_test): -+ """Test that reproduces entryUSN overflow when adding existing entries -+ -+ :id: a5a8c33d-82f3-4113-be2b-027de51791c8 -+ :setup: Standalone instance with USN plugin enabled and test users -+ :steps: -+ 1. Record initial entryUSN values for existing users -+ 2. Attempt to add existing entries multiple times (should fail) -+ 3. Perform modify operations on the entries -+ 4. Check that entryUSN values increment correctly without overflow -+ 5. Verify lastusn values are consistent -+ :expectedresults: -+ 1. Initial entryUSN values are recorded successfully -+ 2. Add operations fail with ALREADY_EXISTS error -+ 3. Modify operations succeed -+ 4. EntryUSN values increment properly without underflow/overflow -+ 5. 
LastUSN values are consistent and increasing -+ """ -+ -+ inst = topology_st.standalone -+ users = setup_usn_test -+ -+ # Enable detailed logging for debugging -+ config = Config(inst) -+ config.replace('nsslapd-accesslog-level', '260') # Internal op logging -+ config.replace('nsslapd-errorlog-level', '65536') -+ config.replace('nsslapd-plugin-logging', 'on') -+ -+ root_dse = RootDSE(inst) -+ -+ log.info("Starting entryUSN overflow reproduction test") -+ -+ # Record initial state -+ initial_usn_values = {} -+ for user in users: -+ initial_usn = user.get_attr_val_int('entryusn') -+ initial_usn_values[user.dn] = initial_usn -+ log.info(f"Initial entryUSN for {user.get_attr_val_utf8('cn')}: {initial_usn}") -+ -+ initial_lastusn = root_dse.get_attr_val_int("lastusn") -+ log.info(f"Initial lastUSN: {initial_lastusn}") -+ -+ # Perform test iterations -+ for iteration in range(1, ITERATIONS + 1): -+ log.info(f"\n--- Iteration {iteration} ---") -+ -+ # Step 1: Try to add existing entries multiple times -+ selected_user = random.choice(users) -+ cn_value = selected_user.get_attr_val_utf8('cn') -+ attempts = random.randint(1, ADD_EXISTING_ENTRY_MAX_ATTEMPTS) -+ -+ log.info(f"Attempting to add existing entry '{cn_value}' {attempts} times") -+ -+ # Get user attributes for recreation attempt -+ user_attrs = { -+ 'uid': selected_user.get_attr_val_utf8('uid'), -+ 'cn': selected_user.get_attr_val_utf8('cn'), -+ 'sn': selected_user.get_attr_val_utf8('sn'), -+ 'uidNumber': selected_user.get_attr_val_utf8('uidNumber'), -+ 'gidNumber': selected_user.get_attr_val_utf8('gidNumber'), -+ 'homeDirectory': selected_user.get_attr_val_utf8('homeDirectory'), -+ 'userPassword': 'password123' -+ } -+ -+ users_collection = UserAccounts(inst, DEFAULT_SUFFIX) -+ -+ # Try to add the existing user multiple times -+ for attempt in range(attempts): -+ try: -+ users_collection.create(properties=user_attrs) -+ log.error(f"ERROR: Add operation should have failed but succeeded on attempt {attempt + 1}") -+ assert False, "Add operation should have failed with ALREADY_EXISTS" -+ except ldap.ALREADY_EXISTS: -+ log.info(f"Attempt {attempt + 1}: Got expected ALREADY_EXISTS error") -+ except Exception as e: -+ log.error(f"Unexpected error on attempt {attempt + 1}: {e}") -+ raise -+ -+ # Step 2: Perform modify operation -+ target_user = random.choice(users) -+ cn_value = target_user.get_attr_val_utf8('cn') -+ old_usn = target_user.get_attr_val_int('entryusn') -+ -+ # Modify the user entry -+ new_description = f"Modified in iteration {iteration} - {time.time()}" -+ target_user.replace('description', new_description) -+ -+ # Get new USN value -+ new_usn = target_user.get_attr_val_int('entryusn') -+ -+ log.info(f"Modified entry '{cn_value}': old USN = {old_usn}, new USN = {new_usn}") -+ -+ # Step 3: Validate USN values -+ # Check for overflow/underflow conditions -+ assert new_usn > 0, f"EntryUSN should be positive, got {new_usn}" -+ assert new_usn < MAX_USN_64BIT, f"EntryUSN overflow detected: {new_usn} >= {MAX_USN_64BIT}" -+ -+ # Check that USN didn't wrap around (underflow detection) -+ usn_diff = new_usn - old_usn -+ assert usn_diff < 1000, f"USN increment too large, possible overflow: {usn_diff}" -+ -+ # Verify lastUSN is also reasonable -+ current_lastusn = root_dse.get_attr_val_int("lastusn") -+ assert current_lastusn >= new_usn, f"LastUSN ({current_lastusn}) should be >= entryUSN ({new_usn})" -+ assert current_lastusn < MAX_USN_64BIT, f"LastUSN overflow detected: {current_lastusn}" -+ -+ log.info(f"USN validation passed for 
iteration {iteration}") -+ -+ # Add a new entry occasionally to increase USN diversity -+ if iteration % 3 == 0: -+ new_user_props = { -+ 'uid': f'{TEST_USER_PREFIX}-new-{iteration}', -+ 'cn': f'{TEST_USER_PREFIX}-new-{iteration}', -+ 'sn': f'NewUser{iteration}', -+ 'uidNumber': str(2000 + iteration), -+ 'gidNumber': str(2000 + iteration), -+ 'homeDirectory': f'/home/{TEST_USER_PREFIX}-new-{iteration}', -+ 'userPassword': 'newpassword123' -+ } -+ try: -+ new_user = users_collection.create(properties=new_user_props) -+ new_user_usn = new_user.get_attr_val_int('entryusn') -+ log.info(f"Created new entry '{new_user.get_attr_val_utf8('cn')}' with USN: {new_user_usn}") -+ users.append(new_user) # Add to cleanup list -+ except Exception as e: -+ log.warning(f"Failed to create new user in iteration {iteration}: {e}") -+ -+ # Final validation: Check all USN values are reasonable -+ log.info("\nFinal USN validation") -+ final_lastusn = root_dse.get_attr_val_int("lastusn") -+ -+ for user in users: -+ try: -+ final_usn = user.get_attr_val_int('entryusn') -+ cn_value = user.get_attr_val_utf8('cn') -+ log.info(f"Final entryUSN for '{cn_value}': {final_usn}") -+ -+ # Ensure no overflow occurred -+ assert final_usn > 0, f"Final entryUSN should be positive for {cn_value}: {final_usn}" -+ assert final_usn < MAX_USN_64BIT, f"EntryUSN overflow for {cn_value}: {final_usn}" -+ -+ except ldap.NO_SUCH_OBJECT: -+ log.info(f"User {user.dn} was deleted during test") -+ -+ log.info(f"Final lastUSN: {final_lastusn}") -+ assert final_lastusn > initial_lastusn, "LastUSN should have increased during test" -+ assert final_lastusn < MAX_USN_64BIT, f"LastUSN overflow detected: {final_lastusn}" -+ -+ log.info("EntryUSN overflow test completed successfully") -+ -+ -+def test_entryusn_consistency_after_failed_adds(topology_st, setup_usn_test): -+ """Test that entryUSN remains consistent after failed add operations -+ -+ :id: e380ccad-527b-427e-a331-df5c41badbed -+ :setup: Standalone instance with USN plugin enabled and test users -+ :steps: -+ 1. Record entryUSN values before failed add attempts -+ 2. Attempt to add existing entries (should fail) -+ 3. Verify entryUSN values haven't changed due to failed operations -+ 4. Perform successful modify operations -+ 5. Verify entryUSN increments correctly -+ :expectedresults: -+ 1. Initial entryUSN values recorded -+ 2. Add operations fail as expected -+ 3. EntryUSN values unchanged after failed adds -+ 4. Modify operations succeed -+ 5. 
EntryUSN values increment correctly without overflow -+ """ -+ -+ inst = topology_st.standalone -+ users = setup_usn_test -+ -+ log.info("Testing entryUSN consistency after failed adds") -+ -+ # Record USN values before any operations -+ pre_operation_usns = {} -+ for user in users: -+ usn = user.get_attr_val_int('entryusn') -+ pre_operation_usns[user.dn] = usn -+ log.info(f"Pre-operation entryUSN for {user.get_attr_val_utf8('cn')}: {usn}") -+ -+ # Attempt to add existing entries - these should fail -+ users_collection = UserAccounts(inst, DEFAULT_SUFFIX) -+ -+ for user in users: -+ cn_value = user.get_attr_val_utf8('cn') -+ log.info(f"Attempting to add existing user: {cn_value}") -+ -+ user_attrs = { -+ 'uid': user.get_attr_val_utf8('uid'), -+ 'cn': cn_value, -+ 'sn': user.get_attr_val_utf8('sn'), -+ 'uidNumber': user.get_attr_val_utf8('uidNumber'), -+ 'gidNumber': user.get_attr_val_utf8('gidNumber'), -+ 'homeDirectory': user.get_attr_val_utf8('homeDirectory'), -+ 'userPassword': 'password123' -+ } -+ -+ try: -+ users_collection.create(properties=user_attrs) -+ assert False, f"Add operation should have failed for existing user {cn_value}" -+ except ldap.ALREADY_EXISTS: -+ log.info(f"Got expected ALREADY_EXISTS for {cn_value}") -+ -+ # Verify USN values haven't changed after failed adds -+ log.info("Verifying entryUSN values after failed add operations...") -+ for user in users: -+ current_usn = user.get_attr_val_int('entryusn') -+ expected_usn = pre_operation_usns[user.dn] -+ cn_value = user.get_attr_val_utf8('cn') -+ -+ assert current_usn == expected_usn, \ -+ f"EntryUSN changed after failed add for {cn_value}: was {expected_usn}, now {current_usn}" -+ log.info(f"EntryUSN unchanged for {cn_value}: {current_usn}") -+ -+ # Now perform successful modify operations -+ log.info("Performing successful modify operations...") -+ for i, user in enumerate(users): -+ cn_value = user.get_attr_val_utf8('cn') -+ old_usn = user.get_attr_val_int('entryusn') -+ -+ # Modify the user -+ user.replace('description', f'Consistency test modification {i + 1}') -+ -+ new_usn = user.get_attr_val_int('entryusn') -+ log.info(f"Modified {cn_value}: USN {old_usn} -> {new_usn}") -+ -+ # Verify proper increment -+ assert (new_usn - old_usn) == 1, f"EntryUSN should increment by 1 for {cn_value}: {old_usn} -> {new_usn}" -+ assert new_usn < MAX_USN_64BIT, f"EntryUSN overflow for {cn_value}: {new_usn}" -+ -+ log.info("EntryUSN consistency test completed successfully") -+ -+ -+if __name__ == '__main__': -+ # Run isolated -+ # -s for DEBUG mode -+ CURRENT_FILE = os.path.realpath(__file__) -+ pytest.main("-s %s" % CURRENT_FILE) -\ No newline at end of file --- -2.49.0 - diff --git a/0026-Issue-6594-Add-test-for-numSubordinates-replication-.patch b/0026-Issue-6594-Add-test-for-numSubordinates-replication-.patch deleted file mode 100644 index b099406..0000000 --- a/0026-Issue-6594-Add-test-for-numSubordinates-replication-.patch +++ /dev/null @@ -1,172 +0,0 @@ -From 36ca93e8ad2915bcfcae0367051e0f606386f861 Mon Sep 17 00:00:00 2001 -From: Simon Pichugin -Date: Mon, 28 Jul 2025 15:35:50 -0700 -Subject: [PATCH] Issue 6594 - Add test for numSubordinates replication - consistency with tombstones (#6862) - -Description: Add a comprehensive test to verify that numSubordinates and -tombstoneNumSubordinates attributes are correctly replicated between -instances when tombstone entries are present. - -Fixes: https://github.com/389ds/389-ds-base/issues/6594 - -Reviewed by: @progier389 (Thanks!) 
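[Editor's note] The entryUSN test in the previous patch reduces overflow detection to a simple wrap heuristic: a 64-bit counter that underflows on a failed add reappears as a value close to 2**64, so any jump larger than a small bound is treated as a wrap. A minimal sketch of that check, assuming only the constants defined in the test; the helper name and the max_step default (mirroring the test's usn_diff < 1000 assertion) are mine:

```python
MAX_USN_64BIT = 2**64 - 1  # same constant as in the test above

def usn_step_is_sane(old_usn, new_usn, max_step=1000):
    """Reject non-positive values, values at or beyond the 64-bit
    ceiling, and implausibly large increments (a wrapped counter)."""
    return 0 < new_usn < MAX_USN_64BIT and (new_usn - old_usn) < max_step

# e.g. after a MOD operation:
# assert usn_step_is_sane(old_usn, new_usn)
```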
---- - .../numsubordinates_replication_test.py | 144 ++++++++++++++++++ - 1 file changed, 144 insertions(+) - create mode 100644 dirsrvtests/tests/suites/replication/numsubordinates_replication_test.py - -diff --git a/dirsrvtests/tests/suites/replication/numsubordinates_replication_test.py b/dirsrvtests/tests/suites/replication/numsubordinates_replication_test.py -new file mode 100644 -index 000000000..9ba10657d ---- /dev/null -+++ b/dirsrvtests/tests/suites/replication/numsubordinates_replication_test.py -@@ -0,0 +1,144 @@ -+# --- BEGIN COPYRIGHT BLOCK --- -+# Copyright (C) 2025 Red Hat, Inc. -+# All rights reserved. -+# -+# License: GPL (version 3 or any later version). -+# See LICENSE for details. -+# --- END COPYRIGHT BLOCK --- -+ -+import os -+import logging -+import pytest -+from lib389._constants import DEFAULT_SUFFIX -+from lib389.replica import ReplicationManager -+from lib389.idm.organizationalunit import OrganizationalUnits -+from lib389.idm.user import UserAccounts -+from lib389.topologies import topology_i2 as topo_i2 -+ -+ -+pytestmark = pytest.mark.tier1 -+ -+DEBUGGING = os.getenv("DEBUGGING", default=False) -+if DEBUGGING: -+ logging.getLogger(__name__).setLevel(logging.DEBUG) -+else: -+ logging.getLogger(__name__).setLevel(logging.INFO) -+log = logging.getLogger(__name__) -+ -+ -+def test_numsubordinates_tombstone_replication_mismatch(topo_i2): -+ """Test that numSubordinates values match between replicas after tombstone creation -+ -+ :id: c43ecc7a-d706-42e8-9179-1ff7d0e7163a -+ :setup: Two standalone instances -+ :steps: -+ 1. Create a container (organizational unit) on the first instance -+ 2. Create a user object in that container -+ 3. Delete the user object (this creates a tombstone) -+ 4. Set up replication between the two instances -+ 5. Wait for replication to complete -+ 6. Check numSubordinates on both instances -+ 7. Check tombstoneNumSubordinates on both instances -+ 8. Verify that numSubordinates values match on both instances -+ :expectedresults: -+ 1. Container should be created successfully -+ 2. User object should be created successfully -+ 3. User object should be deleted successfully -+ 4. Replication should be set up successfully -+ 5. Replication should complete successfully -+ 6. numSubordinates should be accessible on both instances -+ 7. tombstoneNumSubordinates should be accessible on both instances -+ 8. 
numSubordinates values should match on both instances -+ """ -+ -+ instance1 = topo_i2.ins["standalone1"] -+ instance2 = topo_i2.ins["standalone2"] -+ -+ log.info("Create a container (organizational unit) on the first instance") -+ ous1 = OrganizationalUnits(instance1, DEFAULT_SUFFIX) -+ container = ous1.create(properties={ -+ 'ou': 'test_container', -+ 'description': 'Test container for numSubordinates replication test' -+ }) -+ container_rdn = container.rdn -+ log.info(f"Created container: {container_rdn}") -+ -+ log.info("Create a user object in that container") -+ users1 = UserAccounts(instance1, DEFAULT_SUFFIX, rdn=f"ou={container_rdn}") -+ test_user = users1.create_test_user(uid=1001) -+ log.info(f"Created user: {test_user.dn}") -+ -+ log.info("Checking initial numSubordinates on container") -+ container_obj1 = OrganizationalUnits(instance1, DEFAULT_SUFFIX).get(container_rdn) -+ initial_numsubordinates = container_obj1.get_attr_val_int('numSubordinates') -+ log.info(f"Initial numSubordinates: {initial_numsubordinates}") -+ assert initial_numsubordinates == 1 -+ -+ log.info("Delete the user object (this creates a tombstone)") -+ test_user.delete() -+ -+ log.info("Checking numSubordinates after deletion") -+ after_delete_numsubordinates = container_obj1.get_attr_val_int('numSubordinates') -+ log.info(f"numSubordinates after deletion: {after_delete_numsubordinates}") -+ -+ log.info("Checking tombstoneNumSubordinates after deletion") -+ try: -+ tombstone_numsubordinates = container_obj1.get_attr_val_int('tombstoneNumSubordinates') -+ log.info(f"tombstoneNumSubordinates: {tombstone_numsubordinates}") -+ except Exception as e: -+ log.info(f"tombstoneNumSubordinates not found or error: {e}") -+ tombstone_numsubordinates = 0 -+ -+ log.info("Set up replication between the two instances") -+ repl = ReplicationManager(DEFAULT_SUFFIX) -+ repl.create_first_supplier(instance1) -+ repl.join_supplier(instance1, instance2) -+ -+ log.info("Wait for replication to complete") -+ repl.wait_for_replication(instance1, instance2) -+ -+ log.info("Check numSubordinates on both instances") -+ container_obj1 = OrganizationalUnits(instance1, DEFAULT_SUFFIX).get(container_rdn) -+ numsubordinates_instance1 = container_obj1.get_attr_val_int('numSubordinates') -+ log.info(f"numSubordinates on instance1: {numsubordinates_instance1}") -+ -+ container_obj2 = OrganizationalUnits(instance2, DEFAULT_SUFFIX).get(container_rdn) -+ numsubordinates_instance2 = container_obj2.get_attr_val_int('numSubordinates') -+ log.info(f"numSubordinates on instance2: {numsubordinates_instance2}") -+ -+ log.info("Check tombstoneNumSubordinates on both instances") -+ try: -+ tombstone_numsubordinates_instance1 = container_obj1.get_attr_val_int('tombstoneNumSubordinates') -+ log.info(f"tombstoneNumSubordinates on instance1: {tombstone_numsubordinates_instance1}") -+ except Exception as e: -+ log.info(f"tombstoneNumSubordinates not found on instance1: {e}") -+ tombstone_numsubordinates_instance1 = 0 -+ -+ try: -+ tombstone_numsubordinates_instance2 = container_obj2.get_attr_val_int('tombstoneNumSubordinates') -+ log.info(f"tombstoneNumSubordinates on instance2: {tombstone_numsubordinates_instance2}") -+ except Exception as e: -+ log.info(f"tombstoneNumSubordinates not found on instance2: {e}") -+ tombstone_numsubordinates_instance2 = 0 -+ -+ log.info("Verify that numSubordinates values match on both instances") -+ log.info(f"Comparison: instance1 numSubordinates={numsubordinates_instance1}, " -+ f"instance2 
numSubordinates={numsubordinates_instance2}") -+ log.info(f"Comparison: instance1 tombstoneNumSubordinates={tombstone_numsubordinates_instance1}, " -+ f"instance2 tombstoneNumSubordinates={tombstone_numsubordinates_instance2}") -+ -+ assert numsubordinates_instance1 == numsubordinates_instance2, ( -+ f"numSubordinates mismatch: instance1 has {numsubordinates_instance1}, " -+ f"instance2 has {numsubordinates_instance2}. " -+ ) -+ assert tombstone_numsubordinates_instance1 == tombstone_numsubordinates_instance2, ( -+ f"tombstoneNumSubordinates mismatch: instance1 has {tombstone_numsubordinates_instance1}, " -+ f"instance2 has {tombstone_numsubordinates_instance2}. " -+ ) -+ -+ -+if __name__ == '__main__': -+ # Run isolated -+ # -s for DEBUG mode -+ CURRENT_FILE = os.path.realpath(__file__) -+ pytest.main("-s %s" % CURRENT_FILE) -\ No newline at end of file --- -2.49.0 - diff --git a/0027-Issue-6884-Mask-password-hashes-in-audit-logs-6885.patch b/0027-Issue-6884-Mask-password-hashes-in-audit-logs-6885.patch deleted file mode 100644 index 2c6d48c..0000000 --- a/0027-Issue-6884-Mask-password-hashes-in-audit-logs-6885.patch +++ /dev/null @@ -1,814 +0,0 @@ -From 14b1407abc196df947fa50d48946ed072e4ea772 Mon Sep 17 00:00:00 2001 -From: Simon Pichugin -Date: Mon, 28 Jul 2025 15:41:29 -0700 -Subject: [PATCH] Issue 6884 - Mask password hashes in audit logs (#6885) - -Description: Fix the audit log functionality to mask password hash values for -userPassword, nsslapd-rootpw, nsmultiplexorcredentials, nsds5ReplicaCredentials, -and nsds5ReplicaBootstrapCredentials attributes in ADD and MODIFY operations. -Update auditlog.c to detect password attributes and replace their values with -asterisks (**********************) in both LDIF and JSON audit log formats. -Add a comprehensive test suite audit_password_masking_test.py to verify -password masking works correctly across all log formats and operation types. - -Fixes: https://github.com/389ds/389-ds-base/issues/6884 - -Reviewed by: @mreynolds389, @vashirov (Thanks!!) ---- - .../logging/audit_password_masking_test.py | 501 ++++++++++++++++++ - ldap/servers/slapd/auditlog.c | 170 +++++- - ldap/servers/slapd/slapi-private.h | 1 + - src/lib389/lib389/chaining.py | 3 +- - 4 files changed, 652 insertions(+), 23 deletions(-) - create mode 100644 dirsrvtests/tests/suites/logging/audit_password_masking_test.py - -diff --git a/dirsrvtests/tests/suites/logging/audit_password_masking_test.py b/dirsrvtests/tests/suites/logging/audit_password_masking_test.py -new file mode 100644 -index 000000000..3b6a54849 ---- /dev/null -+++ b/dirsrvtests/tests/suites/logging/audit_password_masking_test.py -@@ -0,0 +1,501 @@ -+# --- BEGIN COPYRIGHT BLOCK --- -+# Copyright (C) 2025 Red Hat, Inc. -+# All rights reserved. -+# -+# License: GPL (version 3 or any later version). -+# See LICENSE for details. 
-+# --- END COPYRIGHT BLOCK --- -+# -+import logging -+import pytest -+import os -+import re -+import time -+import ldap -+from lib389._constants import DEFAULT_SUFFIX, DN_DM, PW_DM -+from lib389.topologies import topology_m2 as topo -+from lib389.idm.user import UserAccounts -+from lib389.dirsrv_log import DirsrvAuditJSONLog -+from lib389.plugins import ChainingBackendPlugin -+from lib389.chaining import ChainingLinks -+from lib389.agreement import Agreements -+from lib389.replica import ReplicationManager, Replicas -+from lib389.idm.directorymanager import DirectoryManager -+ -+log = logging.getLogger(__name__) -+ -+MASKED_PASSWORD = "**********************" -+TEST_PASSWORD = "MySecret123" -+TEST_PASSWORD_2 = "NewPassword789" -+TEST_PASSWORD_3 = "NewPassword101" -+ -+ -+def setup_audit_logging(inst, log_format='default', display_attrs=None): -+ """Configure audit logging settings""" -+ inst.config.replace('nsslapd-auditlog-logbuffering', 'off') -+ inst.config.replace('nsslapd-auditlog-logging-enabled', 'on') -+ inst.config.replace('nsslapd-auditlog-log-format', log_format) -+ -+ if display_attrs is not None: -+ inst.config.replace('nsslapd-auditlog-display-attrs', display_attrs) -+ -+ inst.deleteAuditLogs() -+ -+ -+def check_password_masked(inst, log_format, expected_password, actual_password): -+ """Helper function to check password masking in audit logs""" -+ -+ time.sleep(1) # Allow log to flush -+ -+ # List of all password/credential attributes that should be masked -+ password_attributes = [ -+ 'userPassword', -+ 'nsslapd-rootpw', -+ 'nsmultiplexorcredentials', -+ 'nsDS5ReplicaCredentials', -+ 'nsDS5ReplicaBootstrapCredentials' -+ ] -+ -+ # Get password schemes to check for hash leakage -+ user_password_scheme = inst.config.get_attr_val_utf8('passwordStorageScheme') -+ root_password_scheme = inst.config.get_attr_val_utf8('nsslapd-rootpwstoragescheme') -+ -+ if log_format == 'json': -+ # Check JSON format logs -+ audit_log = DirsrvAuditJSONLog(inst) -+ log_lines = audit_log.readlines() -+ -+ found_masked = False -+ found_actual = False -+ found_hashed = False -+ -+ for line in log_lines: -+ # Check if any password attribute is present in the line -+ for attr in password_attributes: -+ if attr in line: -+ if expected_password in line: -+ found_masked = True -+ if actual_password in line: -+ found_actual = True -+ # Check for password scheme indicators (hashed passwords) -+ if user_password_scheme and f'{{{user_password_scheme}}}' in line: -+ found_hashed = True -+ if root_password_scheme and f'{{{root_password_scheme}}}' in line: -+ found_hashed = True -+ break # Found a password attribute, no need to check others for this line -+ -+ else: -+ # Check LDIF format logs -+ found_masked = False -+ found_actual = False -+ found_hashed = False -+ -+ # Check each password attribute for masked password -+ for attr in password_attributes: -+ if inst.ds_audit_log.match(f"{attr}: {re.escape(expected_password)}"): -+ found_masked = True -+ if inst.ds_audit_log.match(f"{attr}: {actual_password}"): -+ found_actual = True -+ -+ # Check for hashed passwords in LDIF format -+ if user_password_scheme: -+ if inst.ds_audit_log.match(f"userPassword: {{{user_password_scheme}}}"): -+ found_hashed = True -+ if root_password_scheme: -+ if inst.ds_audit_log.match(f"nsslapd-rootpw: {{{root_password_scheme}}}"): -+ found_hashed = True -+ -+ # Delete audit logs to avoid interference with other tests -+ # We need to reset the root password to default as deleteAuditLogs() -+ # opens a new connection with the 
default password -+ dm = DirectoryManager(inst) -+ dm.change_password(PW_DM) -+ inst.deleteAuditLogs() -+ -+ return found_masked, found_actual, found_hashed -+ -+ -+@pytest.mark.parametrize("log_format,display_attrs", [ -+ ("default", None), -+ ("default", "*"), -+ ("default", "userPassword"), -+ ("json", None), -+ ("json", "*"), -+ ("json", "userPassword") -+]) -+def test_password_masking_add_operation(topo, log_format, display_attrs): -+ """Test password masking in ADD operations -+ -+ :id: 4358bd75-bcc7-401c-b492-d3209b10412d -+ :parametrized: yes -+ :setup: Standalone Instance -+ :steps: -+ 1. Configure audit logging format -+ 2. Add user with password -+ 3. Check that password is masked in audit log -+ 4. Verify actual password does not appear in log -+ :expectedresults: -+ 1. Success -+ 2. Success -+ 3. Password should be masked with asterisks -+ 4. Actual password should not be found in log -+ """ -+ inst = topo.ms['supplier1'] -+ setup_audit_logging(inst, log_format, display_attrs) -+ -+ users = UserAccounts(inst, DEFAULT_SUFFIX) -+ user = None -+ -+ try: -+ user = users.create(properties={ -+ 'uid': 'test_add_pwd_mask', -+ 'cn': 'Test Add User', -+ 'sn': 'User', -+ 'uidNumber': '1000', -+ 'gidNumber': '1000', -+ 'homeDirectory': '/home/test_add', -+ 'userPassword': TEST_PASSWORD -+ }) -+ -+ found_masked, found_actual, found_hashed = check_password_masked(inst, log_format, MASKED_PASSWORD, TEST_PASSWORD) -+ -+ assert found_masked, f"Masked password not found in {log_format} ADD operation" -+ assert not found_actual, f"Actual password found in {log_format} ADD log (should be masked)" -+ assert not found_hashed, f"Hashed password found in {log_format} ADD log (should be masked)" -+ -+ finally: -+ if user is not None: -+ try: -+ user.delete() -+ except: -+ pass -+ -+ -+@pytest.mark.parametrize("log_format,display_attrs", [ -+ ("default", None), -+ ("default", "*"), -+ ("default", "userPassword"), -+ ("json", None), -+ ("json", "*"), -+ ("json", "userPassword") -+]) -+def test_password_masking_modify_operation(topo, log_format, display_attrs): -+ """Test password masking in MODIFY operations -+ -+ :id: e6963aa9-7609-419c-aae2-1d517aa434bd -+ :parametrized: yes -+ :setup: Standalone Instance -+ :steps: -+ 1. Configure audit logging format -+ 2. Add user without password -+ 3. Add password via MODIFY operation -+ 4. Check that password is masked in audit log -+ 5. Modify password to new value -+ 6. Check that new password is also masked -+ 7. Verify actual passwords do not appear in log -+ :expectedresults: -+ 1. Success -+ 2. Success -+ 3. Success -+ 4. Password should be masked with asterisks -+ 5. Success -+ 6. New password should be masked with asterisks -+ 7. 
No actual password values should be found in log -+ """ -+ inst = topo.ms['supplier1'] -+ setup_audit_logging(inst, log_format, display_attrs) -+ -+ users = UserAccounts(inst, DEFAULT_SUFFIX) -+ user = None -+ -+ try: -+ user = users.create(properties={ -+ 'uid': 'test_modify_pwd_mask', -+ 'cn': 'Test Modify User', -+ 'sn': 'User', -+ 'uidNumber': '2000', -+ 'gidNumber': '2000', -+ 'homeDirectory': '/home/test_modify' -+ }) -+ -+ user.replace('userPassword', TEST_PASSWORD) -+ -+ found_masked, found_actual, found_hashed = check_password_masked(inst, log_format, MASKED_PASSWORD, TEST_PASSWORD) -+ assert found_masked, f"Masked password not found in {log_format} MODIFY operation (first password)" -+ assert not found_actual, f"Actual password found in {log_format} MODIFY log (should be masked)" -+ assert not found_hashed, f"Hashed password found in {log_format} MODIFY log (should be masked)" -+ -+ user.replace('userPassword', TEST_PASSWORD_2) -+ -+ found_masked_2, found_actual_2, found_hashed_2 = check_password_masked(inst, log_format, MASKED_PASSWORD, TEST_PASSWORD_2) -+ assert found_masked_2, f"Masked password not found in {log_format} MODIFY operation (second password)" -+ assert not found_actual_2, f"Second actual password found in {log_format} MODIFY log (should be masked)" -+ assert not found_hashed_2, f"Second hashed password found in {log_format} MODIFY log (should be masked)" -+ -+ finally: -+ if user is not None: -+ try: -+ user.delete() -+ except: -+ pass -+ -+ -+@pytest.mark.parametrize("log_format,display_attrs", [ -+ ("default", None), -+ ("default", "*"), -+ ("default", "nsslapd-rootpw"), -+ ("json", None), -+ ("json", "*"), -+ ("json", "nsslapd-rootpw") -+]) -+def test_password_masking_rootpw_modify_operation(topo, log_format, display_attrs): -+ """Test password masking for nsslapd-rootpw MODIFY operations -+ -+ :id: ec8c9fd4-56ba-4663-ab65-58efb3b445e4 -+ :parametrized: yes -+ :setup: Standalone Instance -+ :steps: -+ 1. Configure audit logging format -+ 2. Modify nsslapd-rootpw in configuration -+ 3. Check that root password is masked in audit log -+ 4. Modify root password to new value -+ 5. Check that new root password is also masked -+ 6. Verify actual root passwords do not appear in log -+ :expectedresults: -+ 1. Success -+ 2. Success -+ 3. Root password should be masked with asterisks -+ 4. Success -+ 5. New root password should be masked with asterisks -+ 6. 
No actual root password values should be found in log -+ """ -+ inst = topo.ms['supplier1'] -+ setup_audit_logging(inst, log_format, display_attrs) -+ dm = DirectoryManager(inst) -+ -+ try: -+ dm.change_password(TEST_PASSWORD) -+ dm.rebind(TEST_PASSWORD) -+ -+ found_masked, found_actual, found_hashed = check_password_masked(inst, log_format, MASKED_PASSWORD, TEST_PASSWORD) -+ assert found_masked, f"Masked root password not found in {log_format} MODIFY operation (first root password)" -+ assert not found_actual, f"Actual root password found in {log_format} MODIFY log (should be masked)" -+ assert not found_hashed, f"Hashed root password found in {log_format} MODIFY log (should be masked)" -+ -+ dm.change_password(TEST_PASSWORD_2) -+ dm.rebind(TEST_PASSWORD_2) -+ -+ found_masked_2, found_actual_2, found_hashed_2 = check_password_masked(inst, log_format, MASKED_PASSWORD, TEST_PASSWORD_2) -+ assert found_masked_2, f"Masked root password not found in {log_format} MODIFY operation (second root password)" -+ assert not found_actual_2, f"Second actual root password found in {log_format} MODIFY log (should be masked)" -+ assert not found_hashed_2, f"Second hashed root password found in {log_format} MODIFY log (should be masked)" -+ -+ finally: -+ dm.change_password(PW_DM) -+ dm.rebind(PW_DM) -+ -+ -+@pytest.mark.parametrize("log_format,display_attrs", [ -+ ("default", None), -+ ("default", "*"), -+ ("default", "nsmultiplexorcredentials"), -+ ("json", None), -+ ("json", "*"), -+ ("json", "nsmultiplexorcredentials") -+]) -+def test_password_masking_multiplexor_credentials(topo, log_format, display_attrs): -+ """Test password masking for nsmultiplexorcredentials in chaining/multiplexor configurations -+ -+ :id: 161a9498-b248-4926-90be-a696a36ed36e -+ :parametrized: yes -+ :setup: Standalone Instance -+ :steps: -+ 1. Configure audit logging format -+ 2. Create a chaining backend configuration entry with nsmultiplexorcredentials -+ 3. Check that multiplexor credentials are masked in audit log -+ 4. Modify the credentials -+ 5. Check that updated credentials are also masked -+ 6. Verify actual credentials do not appear in log -+ :expectedresults: -+ 1. Success -+ 2. Success -+ 3. Multiplexor credentials should be masked with asterisks -+ 4. Success -+ 5. Updated credentials should be masked with asterisks -+ 6. 
No actual credential values should be found in log -+ """ -+ inst = topo.ms['supplier1'] -+ setup_audit_logging(inst, log_format, display_attrs) -+ -+ # Enable chaining plugin and create chaining link -+ chain_plugin = ChainingBackendPlugin(inst) -+ chain_plugin.enable() -+ -+ chains = ChainingLinks(inst) -+ chain = None -+ -+ try: -+ # Create chaining link with multiplexor credentials -+ chain = chains.create(properties={ -+ 'cn': 'testchain', -+ 'nsfarmserverurl': 'ldap://localhost:389/', -+ 'nsslapd-suffix': 'dc=example,dc=com', -+ 'nsmultiplexorbinddn': 'cn=manager', -+ 'nsmultiplexorcredentials': TEST_PASSWORD, -+ 'nsCheckLocalACI': 'on', -+ 'nsConnectionLife': '30', -+ }) -+ -+ found_masked, found_actual, found_hashed = check_password_masked(inst, log_format, MASKED_PASSWORD, TEST_PASSWORD) -+ assert found_masked, f"Masked multiplexor credentials not found in {log_format} ADD operation" -+ assert not found_actual, f"Actual multiplexor credentials found in {log_format} ADD log (should be masked)" -+ assert not found_hashed, f"Hashed multiplexor credentials found in {log_format} ADD log (should be masked)" -+ -+ # Modify the credentials -+ chain.replace('nsmultiplexorcredentials', TEST_PASSWORD_2) -+ -+ found_masked_2, found_actual_2, found_hashed_2 = check_password_masked(inst, log_format, MASKED_PASSWORD, TEST_PASSWORD_2) -+ assert found_masked_2, f"Masked multiplexor credentials not found in {log_format} MODIFY operation" -+ assert not found_actual_2, f"Actual multiplexor credentials found in {log_format} MODIFY log (should be masked)" -+ assert not found_hashed_2, f"Hashed multiplexor credentials found in {log_format} MODIFY log (should be masked)" -+ -+ finally: -+ chain_plugin.disable() -+ if chain is not None: -+ inst.delete_branch_s(chain.dn, ldap.SCOPE_ONELEVEL) -+ chain.delete() -+ -+ -+@pytest.mark.parametrize("log_format,display_attrs", [ -+ ("default", None), -+ ("default", "*"), -+ ("default", "nsDS5ReplicaCredentials"), -+ ("json", None), -+ ("json", "*"), -+ ("json", "nsDS5ReplicaCredentials") -+]) -+def test_password_masking_replica_credentials(topo, log_format, display_attrs): -+ """Test password masking for nsDS5ReplicaCredentials in replication agreements -+ -+ :id: 7bf9e612-1b7c-49af-9fc0-de4c7df84b2a -+ :parametrized: yes -+ :setup: Standalone Instance -+ :steps: -+ 1. Configure audit logging format -+ 2. Create a replication agreement entry with nsDS5ReplicaCredentials -+ 3. Check that replica credentials are masked in audit log -+ 4. Modify the credentials -+ 5. Check that updated credentials are also masked -+ 6. Verify actual credentials do not appear in log -+ :expectedresults: -+ 1. Success -+ 2. Success -+ 3. Replica credentials should be masked with asterisks -+ 4. Success -+ 5. Updated credentials should be masked with asterisks -+ 6. 
No actual credential values should be found in log -+ """ -+ inst = topo.ms['supplier2'] -+ setup_audit_logging(inst, log_format, display_attrs) -+ agmt = None -+ -+ try: -+ replicas = Replicas(inst) -+ replica = replicas.get(DEFAULT_SUFFIX) -+ agmts = replica.get_agreements() -+ agmt = agmts.create(properties={ -+ 'cn': 'testagmt', -+ 'nsDS5ReplicaHost': 'localhost', -+ 'nsDS5ReplicaPort': '389', -+ 'nsDS5ReplicaBindDN': 'cn=replication manager,cn=config', -+ 'nsDS5ReplicaCredentials': TEST_PASSWORD, -+ 'nsDS5ReplicaRoot': DEFAULT_SUFFIX -+ }) -+ -+ found_masked, found_actual, found_hashed = check_password_masked(inst, log_format, MASKED_PASSWORD, TEST_PASSWORD) -+ assert found_masked, f"Masked replica credentials not found in {log_format} ADD operation" -+ assert not found_actual, f"Actual replica credentials found in {log_format} ADD log (should be masked)" -+ assert not found_hashed, f"Hashed replica credentials found in {log_format} ADD log (should be masked)" -+ -+ # Modify the credentials -+ agmt.replace('nsDS5ReplicaCredentials', TEST_PASSWORD_2) -+ -+ found_masked_2, found_actual_2, found_hashed_2 = check_password_masked(inst, log_format, MASKED_PASSWORD, TEST_PASSWORD_2) -+ assert found_masked_2, f"Masked replica credentials not found in {log_format} MODIFY operation" -+ assert not found_actual_2, f"Actual replica credentials found in {log_format} MODIFY log (should be masked)" -+ assert not found_hashed_2, f"Hashed replica credentials found in {log_format} MODIFY log (should be masked)" -+ -+ finally: -+ if agmt is not None: -+ agmt.delete() -+ -+ -+@pytest.mark.parametrize("log_format,display_attrs", [ -+ ("default", None), -+ ("default", "*"), -+ ("default", "nsDS5ReplicaBootstrapCredentials"), -+ ("json", None), -+ ("json", "*"), -+ ("json", "nsDS5ReplicaBootstrapCredentials") -+]) -+def test_password_masking_bootstrap_credentials(topo, log_format, display_attrs): -+ """Test password masking for nsDS5ReplicaCredentials and nsDS5ReplicaBootstrapCredentials in replication agreements -+ -+ :id: 248bd418-ffa4-4733-963d-2314c60b7c5b -+ :parametrized: yes -+ :setup: Standalone Instance -+ :steps: -+ 1. Configure audit logging format -+ 2. Create a replication agreement entry with both nsDS5ReplicaCredentials and nsDS5ReplicaBootstrapCredentials -+ 3. Check that both credentials are masked in audit log -+ 4. Modify both credentials -+ 5. Check that both updated credentials are also masked -+ 6. Verify actual credentials do not appear in log -+ :expectedresults: -+ 1. Success -+ 2. Success -+ 3. Both credentials should be masked with asterisks -+ 4. Success -+ 5. Both updated credentials should be masked with asterisks -+ 6. 
No actual credential values should be found in log -+ """ -+ inst = topo.ms['supplier2'] -+ setup_audit_logging(inst, log_format, display_attrs) -+ agmt = None -+ -+ try: -+ replicas = Replicas(inst) -+ replica = replicas.get(DEFAULT_SUFFIX) -+ agmts = replica.get_agreements() -+ agmt = agmts.create(properties={ -+ 'cn': 'testbootstrapagmt', -+ 'nsDS5ReplicaHost': 'localhost', -+ 'nsDS5ReplicaPort': '389', -+ 'nsDS5ReplicaBindDN': 'cn=replication manager,cn=config', -+ 'nsDS5ReplicaCredentials': TEST_PASSWORD, -+ 'nsDS5replicabootstrapbinddn': 'cn=bootstrap manager,cn=config', -+ 'nsDS5ReplicaBootstrapCredentials': TEST_PASSWORD_2, -+ 'nsDS5ReplicaRoot': DEFAULT_SUFFIX -+ }) -+ -+ found_masked_bootstrap, found_actual_bootstrap, found_hashed_bootstrap = check_password_masked(inst, log_format, MASKED_PASSWORD, TEST_PASSWORD_2) -+ assert found_masked_bootstrap, f"Masked bootstrap credentials not found in {log_format} ADD operation" -+ assert not found_actual_bootstrap, f"Actual bootstrap credentials found in {log_format} ADD log (should be masked)" -+ assert not found_hashed_bootstrap, f"Hashed bootstrap credentials found in {log_format} ADD log (should be masked)" -+ -+ agmt.replace('nsDS5ReplicaBootstrapCredentials', TEST_PASSWORD_3) -+ -+ found_masked_bootstrap_2, found_actual_bootstrap_2, found_hashed_bootstrap_2 = check_password_masked(inst, log_format, MASKED_PASSWORD, TEST_PASSWORD_3) -+ assert found_masked_bootstrap_2, f"Masked bootstrap credentials not found in {log_format} MODIFY operation" -+ assert not found_actual_bootstrap_2, f"Actual bootstrap credentials found in {log_format} MODIFY log (should be masked)" -+ assert not found_hashed_bootstrap_2, f"Hashed bootstrap credentials found in {log_format} MODIFY log (should be masked)" -+ -+ finally: -+ if agmt is not None: -+ agmt.delete() -+ -+ -+ -+if __name__ == '__main__': -+ CURRENT_FILE = os.path.realpath(__file__) -+ pytest.main(["-s", CURRENT_FILE]) -\ No newline at end of file -diff --git a/ldap/servers/slapd/auditlog.c b/ldap/servers/slapd/auditlog.c -index 1121aef35..7b591e072 100644 ---- a/ldap/servers/slapd/auditlog.c -+++ b/ldap/servers/slapd/auditlog.c -@@ -39,6 +39,89 @@ static void write_audit_file(Slapi_PBlock *pb, Slapi_Entry *entry, int logtype, - - static const char *modrdn_changes[4]; - -+/* Helper function to check if an attribute is a password that needs masking */ -+static int -+is_password_attribute(const char *attr_name) -+{ -+ return (strcasecmp(attr_name, SLAPI_USERPWD_ATTR) == 0 || -+ strcasecmp(attr_name, CONFIG_ROOTPW_ATTRIBUTE) == 0 || -+ strcasecmp(attr_name, SLAPI_MB_CREDENTIALS) == 0 || -+ strcasecmp(attr_name, SLAPI_REP_CREDENTIALS) == 0 || -+ strcasecmp(attr_name, SLAPI_REP_BOOTSTRAP_CREDENTIALS) == 0); -+} -+ -+/* Helper function to create a masked string representation of an entry */ -+static char * -+create_masked_entry_string(Slapi_Entry *original_entry, int *len) -+{ -+ Slapi_Attr *attr = NULL; -+ char *entry_str = NULL; -+ char *current_pos = NULL; -+ char *line_start = NULL; -+ char *next_line = NULL; -+ char *colon_pos = NULL; -+ int has_password_attrs = 0; -+ -+ if (original_entry == NULL) { -+ return NULL; -+ } -+ -+ /* Single pass through attributes to check for password attributes */ -+ for (slapi_entry_first_attr(original_entry, &attr); attr != NULL; -+ slapi_entry_next_attr(original_entry, attr, &attr)) { -+ -+ char *attr_name = NULL; -+ slapi_attr_get_type(attr, &attr_name); -+ -+ if (is_password_attribute(attr_name)) { -+ has_password_attrs = 1; -+ break; -+ } -+ } -+ -+ /* If no 
password attributes, return original string - no masking needed */ -+ entry_str = slapi_entry2str(original_entry, len); -+ if (!has_password_attrs) { -+ return entry_str; -+ } -+ -+ /* Process the string in-place, replacing password values */ -+ current_pos = entry_str; -+ while ((line_start = current_pos) != NULL && *line_start != '\0') { -+ /* Find the end of current line */ -+ next_line = strchr(line_start, '\n'); -+ if (next_line != NULL) { -+ *next_line = '\0'; /* Temporarily terminate line */ -+ current_pos = next_line + 1; -+ } else { -+ current_pos = NULL; /* Last line */ -+ } -+ -+ /* Find the colon that separates attribute name from value */ -+ colon_pos = strchr(line_start, ':'); -+ if (colon_pos != NULL) { -+ char saved_colon = *colon_pos; -+ *colon_pos = '\0'; /* Temporarily null-terminate attribute name */ -+ -+ /* Check if this is a password attribute that needs masking */ -+ if (is_password_attribute(line_start)) { -+ strcpy(colon_pos + 1, " **********************"); -+ } -+ -+ *colon_pos = saved_colon; /* Restore colon */ -+ } -+ -+ /* Restore newline if it was there */ -+ if (next_line != NULL) { -+ *next_line = '\n'; -+ } -+ } -+ -+ /* Update length since we may have shortened the string */ -+ *len = strlen(entry_str); -+ return entry_str; /* Return the modified original string */ -+} -+ - void - write_audit_log_entry(Slapi_PBlock *pb) - { -@@ -282,10 +365,31 @@ add_entry_attrs_ext(Slapi_Entry *entry, lenstr *l, PRBool use_json, json_object - { - slapi_entry_attr_find(entry, req_attr, &entry_attr); - if (entry_attr) { -- if (use_json) { -- log_entry_attr_json(entry_attr, req_attr, id_list); -+ if (strcmp(req_attr, PSEUDO_ATTR_UNHASHEDUSERPASSWORD) == 0) { -+ /* Do not write the unhashed clear-text password */ -+ continue; -+ } -+ -+ /* Check if this is a password attribute that needs masking */ -+ if (is_password_attribute(req_attr)) { -+ /* userpassword/rootdn password - mask the value */ -+ if (use_json) { -+ json_object *secret_obj = json_object_new_object(); -+ json_object_object_add(secret_obj, req_attr, -+ json_object_new_string("**********************")); -+ json_object_array_add(id_list, secret_obj); -+ } else { -+ addlenstr(l, "#"); -+ addlenstr(l, req_attr); -+ addlenstr(l, ": **********************\n"); -+ } - } else { -- log_entry_attr(entry_attr, req_attr, l); -+ /* Regular attribute - log normally */ -+ if (use_json) { -+ log_entry_attr_json(entry_attr, req_attr, id_list); -+ } else { -+ log_entry_attr(entry_attr, req_attr, l); -+ } - } - } - } -@@ -300,9 +404,7 @@ add_entry_attrs_ext(Slapi_Entry *entry, lenstr *l, PRBool use_json, json_object - continue; - } - -- if (strcasecmp(attr, SLAPI_USERPWD_ATTR) == 0 || -- strcasecmp(attr, CONFIG_ROOTPW_ATTRIBUTE) == 0) -- { -+ if (is_password_attribute(attr)) { - /* userpassword/rootdn password - mask the value */ - if (use_json) { - json_object *secret_obj = json_object_new_object(); -@@ -312,7 +414,7 @@ add_entry_attrs_ext(Slapi_Entry *entry, lenstr *l, PRBool use_json, json_object - } else { - addlenstr(l, "#"); - addlenstr(l, attr); -- addlenstr(l, ": ****************************\n"); -+ addlenstr(l, ": **********************\n"); - } - continue; - } -@@ -481,6 +583,9 @@ write_audit_file_json(Slapi_PBlock *pb, Slapi_Entry *entry, int logtype, - } - } - -+ /* Check if this is a password attribute that needs masking */ -+ int is_password_attr = is_password_attribute(mods[j]->mod_type); -+ - mod = json_object_new_object(); - switch (operationtype) { - case LDAP_MOD_ADD: -@@ -505,7 +610,12 @@ 
-             json_object *val_list = NULL;
-             val_list = json_object_new_array();
-             for (size_t i = 0; mods[j]->mod_bvalues != NULL && mods[j]->mod_bvalues[i] != NULL; i++) {
--                json_object_array_add(val_list, json_object_new_string(mods[j]->mod_bvalues[i]->bv_val));
-+                if (is_password_attr) {
-+                    /* Mask password values */
-+                    json_object_array_add(val_list, json_object_new_string("**********************"));
-+                } else {
-+                    json_object_array_add(val_list, json_object_new_string(mods[j]->mod_bvalues[i]->bv_val));
-+                }
-             }
-             json_object_object_add(mod, "values", val_list);
-         }
-@@ -517,8 +627,11 @@ write_audit_file_json(Slapi_PBlock *pb, Slapi_Entry *entry, int logtype,
-         }
-         case SLAPI_OPERATION_ADD: {
-             int len;
-+
-             e = change;
--            tmp = slapi_entry2str(e, &len);
-+
-+            /* Create a masked string representation for password attributes */
-+            tmp = create_masked_entry_string(e, &len);
-             tmpsave = tmp;
-             while ((tmp = strchr(tmp, '\n')) != NULL) {
-                 tmp++;
-@@ -665,6 +778,10 @@ write_audit_file(
-                 break;
-             }
-         }
-+
-+        /* Check if this is a password attribute that needs masking */
-+        int is_password_attr = is_password_attribute(mods[j]->mod_type);
-+
-         switch (operationtype) {
-         case LDAP_MOD_ADD:
-             addlenstr(l, "add: ");
-@@ -689,18 +806,27 @@ write_audit_file(
-             break;
-         }
-         if (operationtype != LDAP_MOD_IGNORE) {
--            for (i = 0; mods[j]->mod_bvalues != NULL && mods[j]->mod_bvalues[i] != NULL; i++) {
--                char *buf, *bufp;
--                len = strlen(mods[j]->mod_type);
--                len = LDIF_SIZE_NEEDED(len, mods[j]->mod_bvalues[i]->bv_len) + 1;
--                buf = slapi_ch_malloc(len);
--                bufp = buf;
--                slapi_ldif_put_type_and_value_with_options(&bufp, mods[j]->mod_type,
--                                                           mods[j]->mod_bvalues[i]->bv_val,
--                                                           mods[j]->mod_bvalues[i]->bv_len, 0);
--                *bufp = '\0';
--                addlenstr(l, buf);
--                slapi_ch_free((void **)&buf);
-+            if (is_password_attr) {
-+                /* Add masked password */
-+                for (i = 0; mods[j]->mod_bvalues != NULL && mods[j]->mod_bvalues[i] != NULL; i++) {
-+                    addlenstr(l, mods[j]->mod_type);
-+                    addlenstr(l, ": **********************\n");
-+                }
-+            } else {
-+                /* Add actual values for non-password attributes */
-+                for (i = 0; mods[j]->mod_bvalues != NULL && mods[j]->mod_bvalues[i] != NULL; i++) {
-+                    char *buf, *bufp;
-+                    len = strlen(mods[j]->mod_type);
-+                    len = LDIF_SIZE_NEEDED(len, mods[j]->mod_bvalues[i]->bv_len) + 1;
-+                    buf = slapi_ch_malloc(len);
-+                    bufp = buf;
-+                    slapi_ldif_put_type_and_value_with_options(&bufp, mods[j]->mod_type,
-+                                                               mods[j]->mod_bvalues[i]->bv_val,
-+                                                               mods[j]->mod_bvalues[i]->bv_len, 0);
-+                    *bufp = '\0';
-+                    addlenstr(l, buf);
-+                    slapi_ch_free((void **)&buf);
-+                }
-             }
-         }
-         addlenstr(l, "-\n");
-@@ -711,7 +837,7 @@ write_audit_file(
-         e = change;
-         addlenstr(l, attr_changetype);
-         addlenstr(l, ": add\n");
--        tmp = slapi_entry2str(e, &len);
-+        tmp = create_masked_entry_string(e, &len);
-         tmpsave = tmp;
-         while ((tmp = strchr(tmp, '\n')) != NULL) {
-             tmp++;
-diff --git a/ldap/servers/slapd/slapi-private.h b/ldap/servers/slapd/slapi-private.h
-index e9abf8b75..02f22fd2d 100644
---- a/ldap/servers/slapd/slapi-private.h
-+++ b/ldap/servers/slapd/slapi-private.h
-@@ -848,6 +848,7 @@ void task_cleanup(void);
- /* for reversible encyrption */
- #define SLAPI_MB_CREDENTIALS "nsmultiplexorcredentials"
- #define SLAPI_REP_CREDENTIALS "nsds5ReplicaCredentials"
-+#define SLAPI_REP_BOOTSTRAP_CREDENTIALS "nsds5ReplicaBootstrapCredentials"
- int pw_rever_encode(Slapi_Value **vals, char *attr_name);
- int pw_rever_decode(char *cipher, char **plain, const char *attr_name);
-
-diff --git a/src/lib389/lib389/chaining.py b/src/lib389/lib389/chaining.py
-index 533b83ebf..33ae78c8b 100644
---- a/src/lib389/lib389/chaining.py
-+++ b/src/lib389/lib389/chaining.py
-@@ -134,7 +134,7 @@ class ChainingLink(DSLdapObject):
-         """
-
-         # Create chaining entry
--        super(ChainingLink, self).create(rdn, properties, basedn)
-+        link = super(ChainingLink, self).create(rdn, properties, basedn)
-
-         # Create mapping tree entry
-         dn_comps = ldap.explode_dn(properties['nsslapd-suffix'][0])
-@@ -149,6 +149,7 @@ class ChainingLink(DSLdapObject):
-             self._mts.ensure_state(properties=mt_properties)
-         except ldap.ALREADY_EXISTS:
-             pass
-+        return link
-
-
- class ChainingLinks(DSLdapObjects):
---
-2.49.0
-
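The create_masked_entry_string() helper in the patch above rewrites an LDIF-style entry string line by line, replacing the value of any credential attribute with a fixed placeholder before the entry reaches the audit log. A minimal Python sketch of the same idea, for illustration only (PASSWORD_ATTRS and MASK are hypothetical names; the attribute set approximates what is_password_attribute() checks):

    # Sketch only - not part of the patch. Mirrors the line-by-line masking
    # done by create_masked_entry_string() in auditlog.c.
    PASSWORD_ATTRS = {
        "userpassword",
        "nsslapd-rootpw",
        "nsmultiplexorcredentials",
        "nsds5replicacredentials",
        "nsds5replicabootstrapcredentials",
    }
    MASK = "**********************"

    def mask_entry_string(entry_str):
        """Replace password attribute values in an LDIF-style string."""
        masked = []
        for line in entry_str.splitlines():
            attr, sep, _value = line.partition(":")
            if sep and attr.strip().lower() in PASSWORD_ATTRS:
                masked.append(f"{attr}: {MASK}")
            else:
                masked.append(line)
        return "\n".join(masked)

Unlike the in-place C version, the sketch builds a new string rather than overwriting the existing buffer.
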
diff --git a/0028-Issue-6897-Fix-disk-monitoring-test-failures-and-imp.patch b/0028-Issue-6897-Fix-disk-monitoring-test-failures-and-imp.patch
deleted file mode 100644
index 4d8cf0a..0000000
--- a/0028-Issue-6897-Fix-disk-monitoring-test-failures-and-imp.patch
+++ /dev/null
@@ -1,1719 +0,0 @@
-From 189524ddb43adb977c45f34e1982a4a62deac017 Mon Sep 17 00:00:00 2001
-From: Simon Pichugin
-Date: Mon, 28 Jul 2025 20:02:09 -0700
-Subject: [PATCH] Issue 6897 - Fix disk monitoring test failures and improve
- test maintainability (#6898)
-
-Description: Refactor disk_monitoring_test.py to address failures and
-improve maintainability. Replace manual sleep loops with proper wait
-conditions using wait_for_condition() and wait_for_log_entry() helpers.
-Add comprehensive logging throughout all tests for better debugging.
-Implement configuration capture/restore to prevent test pollution
-between runs.
-Change fixture scope from module to function level for better test
-isolation and ensure proper cleanup in all test cases.
-
-Fixes: https://github.com/389ds/389-ds-base/issues/6897
-
-Reviewed by: @mreynolds389 (Thanks!)
----
- .../disk_monitoring/disk_monitoring_test.py | 1429 +++++++++++------
- 1 file changed, 940 insertions(+), 489 deletions(-)
-
-diff --git a/dirsrvtests/tests/suites/disk_monitoring/disk_monitoring_test.py b/dirsrvtests/tests/suites/disk_monitoring/disk_monitoring_test.py
-index 69d853332..fb19085d7 100644
---- a/dirsrvtests/tests/suites/disk_monitoring/disk_monitoring_test.py
-+++ b/dirsrvtests/tests/suites/disk_monitoring/disk_monitoring_test.py
-@@ -1,17 +1,18 @@
- # --- BEGIN COPYRIGHT BLOCK ---
--# Copyright (C) 2018 Red Hat, Inc.
-+# Copyright (C) 2025 Red Hat, Inc.
- # All rights reserved.
- #
- # License: GPL (version 3 or any later version).
- # See LICENSE for details.
- # --- END COPYRIGHT BLOCK ---
-
--
- import os
- import subprocess
- import re
- import time
-+import ldap
- import pytest
-+import logging
- from lib389.tasks import *
- from lib389._constants import *
- from lib389.utils import ensure_bytes
-@@ -20,95 +21,221 @@ from lib389.topologies import topology_st as topo
- from lib389.paths import *
- from lib389.idm.user import UserAccounts
-
-+DEBUGGING = os.getenv("DEBUGGING", default=False)
-+if DEBUGGING:
-+    logging.getLogger(__name__).setLevel(logging.DEBUG)
-+else:
-+    logging.getLogger(__name__).setLevel(logging.INFO)
-+log = logging.getLogger(__name__)
-+
- pytestmark = pytest.mark.tier2
- disk_monitoring_ack = pytest.mark.skipif(not os.environ.get('DISK_MONITORING_ACK', False), reason="Disk monitoring tests may damage system configuration.")
-
--THRESHOLD = '30'
--THRESHOLD_BYTES = '30000000'
-+THRESHOLD_BYTES = 30000000
-
-
--def _withouterrorlog(topo, condition, maxtimesleep):
--    timecount = 0
--    while eval(condition):
--        time.sleep(1)
--        timecount += 1
--        if timecount >= maxtimesleep: break
--    assert not eval(condition)
-+def presetup(inst):
-+    """Presetup function to mount a tmpfs for log directory to simulate disk space limits."""
-
-+    log.info("Setting up tmpfs for disk monitoring tests")
-+    inst.stop()
-+    log_dir = inst.ds_paths.log_dir
-
--def _witherrorlog(topo, condition, maxtimesleep):
--    timecount = 0
--    with open(topo.standalone.errlog, 'r') as study: study = study.read()
--    while condition not in study:
--        time.sleep(1)
--        timecount += 1
--        with open(topo.standalone.errlog, 'r') as study: study = study.read()
--        if timecount >= maxtimesleep: break
--    assert condition in study
-+    if os.path.exists(log_dir):
-+        log.debug(f"Mounting tmpfs on existing directory: {log_dir}")
-+        subprocess.call(['mount', '-t', 'tmpfs', '-o', 'size=35M', 'tmpfs', log_dir])
-+    else:
-+        log.debug(f"Creating and mounting tmpfs on new directory: {log_dir}")
-+        os.mkdir(log_dir)
-+        subprocess.call(['mount', '-t', 'tmpfs', '-o', 'size=35M', 'tmpfs', log_dir])
-+
-+    subprocess.call(f'chown {DEFAULT_USER}: -R {log_dir}', shell=True)
-+    subprocess.call(f'chown {DEFAULT_USER}: -R {log_dir}/*', shell=True)
-+    subprocess.call(f'restorecon -FvvR {log_dir}', shell=True)
-+    inst.start()
-+    log.info("tmpfs setup completed")
-+
-+
-+def setupthesystem(inst):
-+    """Setup system configuration for disk monitoring tests."""
-+
-+    log.info("Configuring system for disk monitoring tests")
-+    inst.start()
-+    inst.config.set('nsslapd-disk-monitoring-grace-period', '1')
-+    inst.config.set('nsslapd-accesslog-logbuffering', 'off')
-+    inst.config.set('nsslapd-disk-monitoring-threshold', ensure_bytes(str(THRESHOLD_BYTES)))
-+    inst.restart()
-+    log.info("System configuration completed")
-+
-+
-+def capture_config(inst):
-+    """Capture current configuration values for later restoration."""
-+
-+    log.info("Capturing current configuration values")
-+
-+    config_attrs = [
-+        'nsslapd-disk-monitoring',
-+        'nsslapd-disk-monitoring-threshold',
-+        'nsslapd-disk-monitoring-grace-period',
-+        'nsslapd-disk-monitoring-logging-critical',
-+        'nsslapd-disk-monitoring-readonly-on-threshold',
-+        'nsslapd-accesslog-logbuffering',
-+        'nsslapd-errorlog-level',
-+        'nsslapd-accesslog-logging-enabled',
-+        'nsslapd-accesslog-maxlogsize',
-+        'nsslapd-accesslog-logrotationtimeunit',
-+        'nsslapd-accesslog-level',
-+        'nsslapd-external-libs-debug-enabled',
-+        'nsslapd-errorlog-logging-enabled'
-+    ]
-+
-+    captured_config = {}
-+    for config_attr in config_attrs:
-+        try:
-+            current_value = inst.config.get_attr_val_utf8(config_attr)
-+            captured_config[config_attr] = current_value
-+            log.debug(f"Captured {config_attr}: {current_value}")
-+        except Exception as e:
-+            log.debug(f"Could not capture {config_attr}: {e}")
-+            captured_config[config_attr] = None
-
-+    log.info("Configuration capture completed")
-+    return captured_config
-
--def presetup(topo):
--    """
--    This is function is part of fixture function setup , will setup the environment for this test.
--    """
--    topo.standalone.stop()
--    if os.path.exists(topo.standalone.ds_paths.log_dir):
--        subprocess.call(['mount', '-t', 'tmpfs', '-o', 'size=35M', 'tmpfs', topo.standalone.ds_paths.log_dir])
--    else:
--        os.mkdir(topo.standalone.ds_paths.log_dir)
--        subprocess.call(['mount', '-t', 'tmpfs', '-o', 'size=35M', 'tmpfs', topo.standalone.ds_paths.log_dir])
--    subprocess.call('chown {}: -R {}'.format(DEFAULT_USER, topo.standalone.ds_paths.log_dir), shell=True)
--    subprocess.call('chown {}: -R {}/*'.format(DEFAULT_USER, topo.standalone.ds_paths.log_dir), shell=True)
--    subprocess.call('restorecon -FvvR {}'.format(topo.standalone.ds_paths.log_dir), shell=True)
--    topo.standalone.start()
-
-+def restore_config(inst, captured_config):
-+    """Restore configuration values to previously captured state."""
-
--def setupthesystem(topo):
--    """
--    This function is part of fixture function setup , will setup the environment for this test.
--    """
--    global TOTAL_SIZE, USED_SIZE, AVAIL_SIZE, HALF_THR_FILL_SIZE, FULL_THR_FILL_SIZE
--    topo.standalone.start()
--    topo.standalone.config.set('nsslapd-disk-monitoring-grace-period', '1')
--    topo.standalone.config.set('nsslapd-accesslog-logbuffering', 'off')
--    topo.standalone.config.set('nsslapd-disk-monitoring-threshold', ensure_bytes(THRESHOLD_BYTES))
--    TOTAL_SIZE = int(re.findall(r'\d+', str(os.statvfs(topo.standalone.ds_paths.log_dir)))[2])*4096/1024/1024
--    AVAIL_SIZE = round(int(re.findall(r'\d+', str(os.statvfs(topo.standalone.ds_paths.log_dir)))[3]) * 4096 / 1024 / 1024)
--    USED_SIZE = TOTAL_SIZE - AVAIL_SIZE
--    HALF_THR_FILL_SIZE = TOTAL_SIZE - float(THRESHOLD) + 5 - USED_SIZE
--    FULL_THR_FILL_SIZE = TOTAL_SIZE - 0.5 * float(THRESHOLD) + 5 - USED_SIZE
--    HALF_THR_FILL_SIZE = round(HALF_THR_FILL_SIZE)
--    FULL_THR_FILL_SIZE = round(FULL_THR_FILL_SIZE)
--    topo.standalone.restart()
--
--
--@pytest.fixture(scope="module")
-+    log.info("Restoring configuration to captured values")
-+
-+    for config_attr, original_value in captured_config.items():
-+        if original_value is not None:
-+            try:
-+                current_value = inst.config.get_attr_val_utf8(config_attr)
-+                if current_value != original_value:
-+                    log.debug(f"Restoring {config_attr} from '{current_value}' to '{original_value}'")
-+                    inst.config.set(config_attr, ensure_bytes(original_value))
-+            except Exception as e:
-+                log.debug(f"Could not restore {config_attr}: {e}")
-+
-+    log.info("Configuration restoration completed")
-+
-+
-+@pytest.fixture(scope="function")
- def setup(request, topo):
--    """
--    This is the fixture function , will run before running every test case.
-- """ -- presetup(topo) -- setupthesystem(topo) -+ """Module-level fixture to setup the test environment.""" -+ -+ log.info("Starting module setup for disk monitoring tests") -+ inst = topo.standalone -+ -+ # Capture current configuration before making any changes -+ original_config = capture_config(inst) -+ -+ presetup(inst) -+ setupthesystem(inst) - - def fin(): -- topo.standalone.stop() -- subprocess.call(['umount', '-fl', topo.standalone.ds_paths.log_dir]) -- topo.standalone.start() -+ log.info("Running module cleanup for disk monitoring tests") -+ inst.stop() -+ subprocess.call(['umount', '-fl', inst.ds_paths.log_dir]) -+ # Restore configuration to original values -+ inst.start() -+ restore_config(inst, original_config) -+ log.info("Module cleanup completed") - - request.addfinalizer(fin) - - -+def wait_for_condition(inst, condition_str, timeout=30): -+ """Wait until the given condition evaluates to False.""" -+ -+ log.debug(f"Waiting for condition to be False: {condition_str} (timeout: {timeout}s)") -+ start_time = time.time() -+ while time.time() - start_time < timeout: -+ if not eval(condition_str): -+ log.debug(f"Condition satisfied after {time.time() - start_time:.2f}s") -+ return -+ time.sleep(1) -+ raise AssertionError(f"Condition '{condition_str}' still True after {timeout} seconds") -+ -+ -+def wait_for_log_entry(inst, message, timeout=30): -+ """Wait for a specific message to appear in the error log.""" -+ -+ log.debug(f"Waiting for log entry: '{message}' (timeout: {timeout}s)") -+ start_time = time.time() -+ while time.time() - start_time < timeout: -+ with open(inst.errlog, 'r') as log_file: -+ if message in log_file.read(): -+ log.debug(f"Found log entry after {time.time() - start_time:.2f}s") -+ return -+ time.sleep(1) -+ raise AssertionError(f"Message '{message}' not found in error log after {timeout} seconds") -+ -+ -+def get_avail_bytes(path): -+ """Get available bytes on the filesystem at the given path.""" -+ -+ stat = os.statvfs(path) -+ return stat.f_bavail * stat.f_bsize -+ -+ -+def fill_to_target_avail(path, target_avail_bytes): -+ """Fill the disk to reach the target available bytes by creating a large file.""" -+ -+ avail = get_avail_bytes(path) -+ fill_bytes = avail - target_avail_bytes -+ log.debug(f"Current available: {avail}, target: {target_avail_bytes}, will create {fill_bytes} byte file") -+ if fill_bytes <= 0: -+ raise ValueError("Already below target avail") -+ -+ fill_file = os.path.join(path, 'fill.dd') -+ bs = 4096 -+ count = (fill_bytes + bs - 1) // bs # ceil division to ensure enough -+ log.info(f"Creating fill file {fill_file} with {count} blocks of {bs} bytes") -+ subprocess.check_call(['dd', 'if=/dev/zero', f'of={fill_file}', f'bs={bs}', f'count={count}']) -+ return fill_file -+ -+ - @pytest.fixture(scope="function") - def reset_logs(topo): -- """ -- Reset the errors log file before the test -- """ -- open('{}/errors'.format(topo.standalone.ds_paths.log_dir), 'w').close() -+ """Function-level fixture to reset the error log before each test.""" -+ -+ log.debug("Resetting error logs before test") -+ topo.standalone.deleteErrorLogs() -+ -+ -+def generate_access_log_activity(inst, num_users=10, num_binds=100): -+ """Generate access log activity by creating users and performing binds.""" -+ -+ log.info(f"Generating access log activity with {num_users} users and {num_binds} binds each") -+ users = UserAccounts(inst, DEFAULT_SUFFIX) -+ -+ # Create test users -+ for i in range(num_users): -+ user_properties = { -+ 'uid': f'cn=user{i}', -+ 'cn': 
-+            'sn': f'cn=user{i}',
-+            'userPassword': "Itsme123",
-+            'uidNumber': f'1{i}',
-+            'gidNumber': f'2{i}',
-+            'homeDirectory': f'/home/{i}'
-+        }
-+        users.create(properties=user_properties)
-+
-+    # Perform bind operations
-+    for j in range(num_binds):
-+        for user in users.list():
-+            user.bind('Itsme123')
-+
-+    log.info("Access log activity generation completed")
-+    return users
-
-
- @disk_monitoring_ack
- def test_verify_operation_when_disk_monitoring_is_off(topo, setup, reset_logs):
--    """Verify operation when Disk monitoring is off
-+    """Verify operation when Disk monitoring is off.
-
-     :id: 73a97536-fe9e-11e8-ba9f-8c16451d917b
-     :setup: Standalone
-@@ -117,94 +244,127 @@ def test_verify_operation_when_disk_monitoring_is_off(topo, setup, reset_logs):
-         2. Go below the threshold
-         3. Check DS is up and not entering shutdown mode
-     :expectedresults:
--        1. Should Success
--        2. Should Success
--        3. Should Success
-+        1. Success
-+        2. Success
-+        3. Success
-     """
-+    log.info("Starting test_verify_operation_when_disk_monitoring_is_off")
-+    inst = topo.standalone
-+    fill_file = None
-+
-     try:
--        # Turn off disk monitoring
--        topo.standalone.config.set('nsslapd-disk-monitoring', 'off')
--        topo.standalone.restart()
--        # go below the threshold
--        subprocess.call(['dd', 'if=/dev/zero', 'of={}/foo'.format(topo.standalone.ds_paths.log_dir), 'bs=1M', 'count={}'.format(FULL_THR_FILL_SIZE)])
--        subprocess.call(['dd', 'if=/dev/zero', 'of={}/foo1'.format(topo.standalone.ds_paths.log_dir), 'bs=1M', 'count={}'.format(FULL_THR_FILL_SIZE)])
--        # Wait for disk monitoring plugin thread to wake up
--        _withouterrorlog(topo, 'topo.standalone.status() != True', 10)
-+        log.info("Disabling disk monitoring")
-+        inst.config.set('nsslapd-disk-monitoring', 'off')
-+        inst.restart()
-+
-+        log.info(f"Filling disk to go below threshold ({THRESHOLD_BYTES} bytes)")
-+        fill_file = fill_to_target_avail(inst.ds_paths.log_dir, THRESHOLD_BYTES - 1)
-+
-+        log.info("Verifying server stays up despite being below threshold")
-+        wait_for_condition(inst, 'inst.status() != True', 11)
-+
-         # Check DS is up and not entering shutdown mode
--        assert topo.standalone.status() == True
-+        assert inst.status() == True
-+        log.info("Verified: server remains operational when disk monitoring is disabled")
-+
-     finally:
--        os.remove('{}/foo'.format(topo.standalone.ds_paths.log_dir))
--        os.remove('{}/foo1'.format(topo.standalone.ds_paths.log_dir))
-+        if fill_file and os.path.exists(fill_file):
-+            log.debug(f"Cleaning up fill file: {fill_file}")
-+            os.remove(fill_file)
-+
-+    log.info("Test completed successfully")
-
-
- @disk_monitoring_ack
- def test_enable_external_libs_debug_log(topo, setup, reset_logs):
--    """Check that OpenLDAP logs are successfully enabled and disabled when
--    disk threshold is reached
-+    """Check that OpenLDAP logs are successfully enabled and disabled when disk threshold is reached.
-
-     :id: 121b2b24-ecba-48e2-9ee2-312d929dc8c6
-     :setup: Standalone instance
--    :steps: 1. Set nsslapd-external-libs-debug-enabled to "on"
--            2. Go straight below 1/2 of the threshold
--            3. Verify that the external libs debug setting is disabled
--            4. Go back above 1/2 of the threshold
--            5. Verify that the external libs debug setting is enabled back
--    :expectedresults: 1. Success
--                      2. Success
--                      3. Success
--                      4. Success
--                      5. Success
-+    :steps:
-+        1. Set nsslapd-external-libs-debug-enabled to "on"
-+        2. Go straight below 1/2 of the threshold
-+        3. Verify that the external libs debug setting is disabled
-+        4. Go back above 1/2 of the threshold
-+        5. Verify that the external libs debug setting is enabled back
-+    :expectedresults:
-+        1. Success
-+        2. Success
-+        3. Success
-+        4. Success
-+        5. Success
-     """
-+    log.info("Starting test_enable_external_libs_debug_log")
-+    inst = topo.standalone
-+    fill_file = None
-+
-     try:
--        # Verify that verbose logging was set to default level
--        assert topo.standalone.config.set('nsslapd-disk-monitoring', 'on')
--        assert topo.standalone.config.set('nsslapd-disk-monitoring-logging-critical', 'off')
--        assert topo.standalone.config.set('nsslapd-external-libs-debug-enabled', 'on')
--        assert topo.standalone.config.set('nsslapd-errorlog-level', '8')
--        topo.standalone.restart()
--        subprocess.call(['dd', 'if=/dev/zero', 'of={}/foo'.format(topo.standalone.ds_paths.log_dir), 'bs=1M', 'count={}'.format(HALF_THR_FILL_SIZE)])
--        # Verify that logging is disabled
--        _withouterrorlog(topo, "topo.standalone.config.get_attr_val_utf8('nsslapd-external-libs-debug-enabled') != 'off'", 31)
-+        log.info("Configuring disk monitoring and external libs debug")
-+        inst.config.set('nsslapd-disk-monitoring', 'on')
-+        inst.config.set('nsslapd-disk-monitoring-logging-critical', 'off')
-+        inst.config.set('nsslapd-external-libs-debug-enabled', 'on')
-+        inst.config.set('nsslapd-errorlog-level', '8')
-+        inst.restart()
-+
-+        log.info(f"Filling disk to go below half threshold ({THRESHOLD_BYTES // 2} bytes)")
-+        fill_file = fill_to_target_avail(inst.ds_paths.log_dir, THRESHOLD_BYTES // 2 - 1)
-+
-+        log.info("Verifying external libs debug is automatically disabled")
-+        wait_for_condition(inst, "inst.config.get_attr_val_utf8('nsslapd-external-libs-debug-enabled') != 'off'", 31)
-+
-     finally:
--        os.remove('{}/foo'.format(topo.standalone.ds_paths.log_dir))
--        _withouterrorlog(topo, "topo.standalone.config.get_attr_val_utf8('nsslapd-external-libs-debug-enabled') != 'on'", 31)
--        assert topo.standalone.config.set('nsslapd-external-libs-debug-enabled', 'off')
-+        if fill_file and os.path.exists(fill_file):
-+            log.debug(f"Cleaning up fill file: {fill_file}")
-+            os.remove(fill_file)
-+
-+        log.info("Verifying external libs debug is re-enabled after freeing space")
-+        wait_for_condition(inst, "inst.config.get_attr_val_utf8('nsslapd-external-libs-debug-enabled') != 'on'", 31)
-+        inst.config.set('nsslapd-external-libs-debug-enabled', 'off')
-+
-+    log.info("Test completed successfully")
-
-
- @disk_monitoring_ack
- def test_free_up_the_disk_space_and_change_ds_config(topo, setup, reset_logs):
--    """Free up the disk space and change DS config
-+    """Free up the disk space and change DS config.
-
-     :id: 7be4d560-fe9e-11e8-a307-8c16451d917b
-     :setup: Standalone
-     :steps:
--        1. Enabling Disk Monitoring plugin and setting disk monitoring logging to critical
-+        1. Enable Disk Monitoring plugin and set disk monitoring logging to critical
-         2. Verify no message about loglevel is present in the error log
-         3. Verify no message about disabling logging is present in the error log
-         4. Verify no message about removing rotated logs is present in the error log
-     :expectedresults:
--        1. Should Success
--        2. Should Success
--        3. Should Success
--        4. Should Success
-+        1. Success
-+        2. Success
-+        3. Success
-+        4. Success
-     """
--    # Enabling Disk Monitoring plugin and setting disk monitoring logging to critical
--    assert topo.standalone.config.set('nsslapd-disk-monitoring', 'on')
--    assert topo.standalone.config.set('nsslapd-disk-monitoring-logging-critical', 'on')
--    assert topo.standalone.config.set('nsslapd-errorlog-level', '8')
--    topo.standalone.restart()
--    # Verify no message about loglevel is present in the error log
--    # Verify no message about disabling logging is present in the error log
--    # Verify no message about removing rotated logs is present in the error log
--    with open(topo.standalone.errlog, 'r') as study: study = study.read()
--    assert 'temporarily setting error loglevel to zero' not in study
--    assert 'disabling access and audit logging' not in study
--    assert 'deleting rotated logs' not in study
-+    log.info("Starting test_free_up_the_disk_space_and_change_ds_config")
-+    inst = topo.standalone
-+
-+    log.info("Enabling disk monitoring with critical logging")
-+    inst.config.set('nsslapd-disk-monitoring', 'on')
-+    inst.config.set('nsslapd-disk-monitoring-logging-critical', 'on')
-+    inst.config.set('nsslapd-errorlog-level', '8')
-+    inst.restart()
-+
-+    log.info("Verifying no premature disk monitoring messages in error log")
-+    with open(inst.errlog, 'r') as err_log:
-+        content = err_log.read()
-+
-+    assert 'temporarily setting error loglevel to zero' not in content
-+    assert 'disabling access and audit logging' not in content
-+    assert 'deleting rotated logs' not in content
-+
-+    log.info("Verified: no unexpected disk monitoring messages found")
-+    log.info("Test completed successfully")
-
-
- @disk_monitoring_ack
- def test_verify_operation_with_nsslapd_disk_monitoring_logging_critical_off(topo, setup, reset_logs):
--    """Verify operation with "nsslapd-disk-monitoring-logging-critical: off
-+    """Verify operation with "nsslapd-disk-monitoring-logging-critical: off".
-
-     :id: 82363bca-fe9e-11e8-9ae7-8c16451d917b
-     :setup: Standalone
-@@ -213,39 +373,59 @@ def test_verify_operation_with_nsslapd_disk_monitoring_logging_critical_off(topo
-         2. Verify that logging is disabled
-         3. Verify that rotated logs were not removed
-     :expectedresults:
--        1. Should Success
--        2. Should Success
--        3. Should Success
-+        1. Success
-+        2. Success
-+        3. Success
-     """
-+    log.info("Starting test_verify_operation_with_nsslapd_disk_monitoring_logging_critical_off")
-+    inst = topo.standalone
-+    fill_file = None
-+
-     try:
--        # Verify that verbose logging was set to default level
--        assert topo.standalone.config.set('nsslapd-disk-monitoring', 'on')
--        assert topo.standalone.config.set('nsslapd-disk-monitoring-logging-critical', 'off')
--        assert topo.standalone.config.set('nsslapd-errorlog-level', '8')
--        topo.standalone.restart()
--        subprocess.call(['dd', 'if=/dev/zero', 'of={}/foo'.format(topo.standalone.ds_paths.log_dir), 'bs=1M', 'count={}'.format(HALF_THR_FILL_SIZE)])
--        _witherrorlog(topo, 'temporarily setting error loglevel to the default level', 11)
--        assert LOG_DEFAULT == int(re.findall(r'nsslapd-errorlog-level: \d+', str(
--            topo.standalone.search_s('cn=config', ldap.SCOPE_SUBTREE, '(objectclass=*)', ['nsslapd-errorlog-level'])))[
--            0].split(' ')[1])
--        # Verify that logging is disabled
--        _withouterrorlog(topo, "topo.standalone.config.get_attr_val_utf8('nsslapd-accesslog-logging-enabled') != 'off'", 10)
--        assert topo.standalone.config.get_attr_val_utf8('nsslapd-accesslog-logging-enabled') == 'off'
--        # Verify that rotated logs were not removed
--        with open(topo.standalone.errlog, 'r') as study: study = study.read()
--        assert 'disabling access and audit logging' in study
--        _witherrorlog(topo, 'deleting rotated logs', 11)
--        study = open(topo.standalone.errlog).read()
--        assert "Unable to remove file: {}".format(topo.standalone.ds_paths.log_dir) not in study
--        assert 'is too far below the threshold' not in study
-+        log.info("Configuring disk monitoring with critical logging disabled")
-+        inst.config.set('nsslapd-disk-monitoring', 'on')
-+        inst.config.set('nsslapd-disk-monitoring-logging-critical', 'off')
-+        inst.config.set('nsslapd-errorlog-level', '8')
-+        inst.restart()
-+
-+        log.info(f"Filling disk to go below threshold ({THRESHOLD_BYTES} bytes)")
-+        fill_file = fill_to_target_avail(inst.ds_paths.log_dir, THRESHOLD_BYTES - 1)
-+
-+        log.info("Waiting for loglevel to be set to default")
-+        wait_for_log_entry(inst, 'temporarily setting error loglevel to the default level', 11)
-+
-+        log.info("Verifying error log level was set to default")
-+        config_entry = inst.search_s('cn=config', ldap.SCOPE_SUBTREE, '(objectclass=*)', ['nsslapd-errorlog-level'])
-+        current_level = int(re.findall(r'nsslapd-errorlog-level: \d+', str(config_entry))[0].split(' ')[1])
-+        assert LOG_DEFAULT == current_level
-+
-+        log.info("Verifying access logging is disabled")
-+        wait_for_condition(inst, "inst.config.get_attr_val_utf8('nsslapd-accesslog-logging-enabled') != 'off'", 11)
-+        assert inst.config.get_attr_val_utf8('nsslapd-accesslog-logging-enabled') == 'off'
-+
-+        log.info("Verifying expected disk monitoring messages")
-+        with open(inst.errlog, 'r') as err_log:
-+            content = err_log.read()
-+
-+        assert 'disabling access and audit logging' in content
-+        wait_for_log_entry(inst, 'deleting rotated logs', 11)
-+        assert f"Unable to remove file: {inst.ds_paths.log_dir}" not in content
-+        assert 'is too far below the threshold' not in content
-+
-+        log.info("All verifications passed")
-+
-     finally:
--        os.remove('{}/foo'.format(topo.standalone.ds_paths.log_dir))
-+        if fill_file and os.path.exists(fill_file):
-+            log.debug(f"Cleaning up fill file: {fill_file}")
-+            os.remove(fill_file)
-+
-+    log.info("Test completed successfully")
-
-
- @disk_monitoring_ack
- def test_operation_with_nsslapd_disk_monitoring_logging_critical_on_below_half_of_the_threshold(topo, setup, reset_logs):
--    """Verify operation with \"nsslapd-disk-monitoring-logging-critical: on\" below 1/2 of the threshold
--    Verify recovery
-+    """Verify operation with "nsslapd-disk-monitoring-logging-critical: on" below 1/2 of the threshold.
-+    Verify recovery.
-
-     :id: 8940c502-fe9e-11e8-bcc0-8c16451d917b
-     :setup: Standalone
-@@ -253,190 +433,277 @@ def test_operation_with_nsslapd_disk_monitoring_logging_critical_on_below_half_o
-         1. Verify that DS goes into shutdown mode
-         2. Verify that DS exited shutdown mode
-     :expectedresults:
--        1. Should Success
--        2. Should Success
-+        1. Success
-+        2. Success
-     """
--    assert topo.standalone.config.set('nsslapd-disk-monitoring', 'on')
--    assert topo.standalone.config.set('nsslapd-disk-monitoring-logging-critical', 'on')
--    topo.standalone.restart()
--    # Verify that DS goes into shutdown mode
--    if float(THRESHOLD) > FULL_THR_FILL_SIZE:
--        FULL_THR_FILL_SIZE_new = FULL_THR_FILL_SIZE + round(float(THRESHOLD) - FULL_THR_FILL_SIZE) + 1
--        subprocess.call(['dd', 'if=/dev/zero', 'of={}/foo'.format(topo.standalone.ds_paths.log_dir), 'bs=1M', 'count={}'.format(FULL_THR_FILL_SIZE_new)])
--    else:
--        subprocess.call(['dd', 'if=/dev/zero', 'of={}/foo'.format(topo.standalone.ds_paths.log_dir), 'bs=1M', 'count={}'.format(FULL_THR_FILL_SIZE)])
--    _witherrorlog(topo, 'is too far below the threshold', 20)
--    os.remove('{}/foo'.format(topo.standalone.ds_paths.log_dir))
--    # Verify that DS exited shutdown mode
--    _witherrorlog(topo, 'Available disk space is now acceptable', 25)
-+    log.info("Starting test_operation_with_nsslapd_disk_monitoring_logging_critical_on_below_half_of_the_threshold")
-+    inst = topo.standalone
-+    fill_file = None
-+
-+    try:
-+        log.info("Configuring disk monitoring with critical logging enabled")
-+        inst.config.set('nsslapd-disk-monitoring', 'on')
-+        inst.config.set('nsslapd-disk-monitoring-logging-critical', 'on')
-+        inst.restart()
-+
-+        log.info(f"Filling disk to go below half threshold ({THRESHOLD_BYTES // 2} bytes)")
-+        fill_file = fill_to_target_avail(inst.ds_paths.log_dir, THRESHOLD_BYTES // 2 - 1)
-+
-+        log.info("Waiting for shutdown mode message")
-+        wait_for_log_entry(inst, 'is too far below the threshold', 100)
-+
-+        log.info("Freeing up disk space")
-+        os.remove(fill_file)
-+        fill_file = None
-+
-+        log.info("Waiting for recovery message")
-+        wait_for_log_entry(inst, 'Available disk space is now acceptable', 25)
-+
-+        log.info("Verified: server entered and exited shutdown mode correctly")
-+
-+    finally:
-+        if fill_file and os.path.exists(fill_file):
-+            log.debug(f"Cleaning up fill file: {fill_file}")
-+            os.remove(fill_file)
-+
-+    log.info("Test completed successfully")
-
-
- @disk_monitoring_ack
- def test_setting_nsslapd_disk_monitoring_logging_critical_to_off(topo, setup, reset_logs):
--    """Setting nsslapd-disk-monitoring-logging-critical to "off"
-+    """Setting nsslapd-disk-monitoring-logging-critical to "off".
-
-     :id: 93265ec4-fe9e-11e8-af93-8c16451d917b
-     :setup: Standalone
-     :steps:
--        1. Setting nsslapd-disk-monitoring-logging-critical to "off"
-+        1. Set nsslapd-disk-monitoring-logging-critical to "off"
-     :expectedresults:
--        1. Should Success
-+        1. Success
-     """
--    assert topo.standalone.config.set('nsslapd-disk-monitoring', 'on')
--    assert topo.standalone.config.set('nsslapd-disk-monitoring-logging-critical', 'off')
--    assert topo.standalone.config.set('nsslapd-errorlog-level', '8')
--    topo.standalone.restart()
--    assert topo.standalone.status() == True
-+    log.info("Starting test_setting_nsslapd_disk_monitoring_logging_critical_to_off")
-+    inst = topo.standalone
-+
-+    log.info("Setting disk monitoring configuration")
-+    inst.config.set('nsslapd-disk-monitoring', 'on')
-+    inst.config.set('nsslapd-disk-monitoring-logging-critical', 'off')
-+    inst.config.set('nsslapd-errorlog-level', '8')
-+    inst.restart()
-+
-+    log.info("Verifying server is running normally")
-+    assert inst.status() == True
-+
-+    log.info("Test completed successfully")
-
-
- @disk_monitoring_ack
- def test_operation_with_nsslapd_disk_monitoring_logging_critical_off(topo, setup, reset_logs):
--    """Verify operation with nsslapd-disk-monitoring-logging-critical: off
-+    """Verify operation with nsslapd-disk-monitoring-logging-critical: off.
-
-     :id: 97985a52-fe9e-11e8-9914-8c16451d917b
-     :setup: Standalone
-     :steps:
--        1. Verify that logging is disabled
--        2. Verify that rotated logs were removed
-+        1. Generate access log activity to create rotated logs
-+        2. Go below threshold to trigger disk monitoring
-         3. Verify that verbose logging was set to default level
-         4. Verify that logging is disabled
-         5. Verify that rotated logs were removed
-     :expectedresults:
--        1. Should Success
--        2. Should Success
--        3. Should Success
--        4. Should Success
--        5. Should Success
-+        1. Success
-+        2. Success
-+        3. Success
-+        4. Success
-+        5. Success
-     """
--    # Verify that logging is disabled
-+    log.info("Starting test_operation_with_nsslapd_disk_monitoring_logging_critical_off")
-+    inst = topo.standalone
-+    fill_file = None
-+    users = None
-+
-     try:
--        assert topo.standalone.config.set('nsslapd-disk-monitoring', 'on')
--        assert topo.standalone.config.set('nsslapd-disk-monitoring-logging-critical', 'off')
--        assert topo.standalone.config.set('nsslapd-errorlog-level', '8')
--        assert topo.standalone.config.set('nsslapd-accesslog-maxlogsize', '1')
--        assert topo.standalone.config.set('nsslapd-accesslog-logrotationtimeunit', 'minute')
--        assert topo.standalone.config.set('nsslapd-accesslog-level', '772')
--        topo.standalone.restart()
--        # Verify that rotated logs were removed
--        users = UserAccounts(topo.standalone, DEFAULT_SUFFIX)
--        for i in range(10):
--            user_properties = {
--                'uid': 'cn=anuj{}'.format(i),
--                'cn': 'cn=anuj{}'.format(i),
--                'sn': 'cn=anuj{}'.format(i),
--                'userPassword': "Itsme123",
--                'uidNumber': '1{}'.format(i),
--                'gidNumber': '2{}'.format(i),
--                'homeDirectory': '/home/{}'.format(i)
--            }
--            users.create(properties=user_properties)
--        for j in range(100):
--            for i in [i for i in users.list()]: i.bind('Itsme123')
--        assert re.findall(r'access.\d+-\d+',str(os.listdir(topo.standalone.ds_paths.log_dir)))
--        topo.standalone.bind_s(DN_DM, PW_DM)
--        assert topo.standalone.config.set('nsslapd-accesslog-maxlogsize', '100')
--        assert topo.standalone.config.set('nsslapd-accesslog-logrotationtimeunit', 'day')
--        assert topo.standalone.config.set('nsslapd-accesslog-level', '256')
--        topo.standalone.restart()
--        subprocess.call(['dd', 'if=/dev/zero', 'of={}/foo2'.format(topo.standalone.ds_paths.log_dir), 'bs=1M', 'count={}'.format(HALF_THR_FILL_SIZE)])
--        # Verify that verbose logging was set to default level
--        _witherrorlog(topo, 'temporarily setting error loglevel to the default level', 10)
--        assert LOG_DEFAULT == int(re.findall(r'nsslapd-errorlog-level: \d+', str(
--            topo.standalone.search_s('cn=config', ldap.SCOPE_SUBTREE, '(objectclass=*)', ['nsslapd-errorlog-level'])))[0].split(' ')[1])
--        # Verify that logging is disabled
--        _withouterrorlog(topo, "topo.standalone.config.get_attr_val_utf8('nsslapd-accesslog-logging-enabled') != 'off'", 20)
--        with open(topo.standalone.errlog, 'r') as study: study = study.read()
--        assert 'disabling access and audit logging' in study
--        # Verify that rotated logs were removed
--        _witherrorlog(topo, 'deleting rotated logs', 10)
--        with open(topo.standalone.errlog, 'r') as study:study = study.read()
--        assert 'Unable to remove file:' not in study
--        assert 'is too far below the threshold' not in study
--        for i in [i for i in users.list()]: i.delete()
-+        log.info("Configuring disk monitoring and access log settings")
-+        inst.config.set('nsslapd-disk-monitoring', 'on')
-+        inst.config.set('nsslapd-disk-monitoring-logging-critical', 'off')
-+        inst.config.set('nsslapd-errorlog-level', '8')
-+        inst.config.set('nsslapd-accesslog-maxlogsize', '1')
-+        inst.config.set('nsslapd-accesslog-logrotationtimeunit', 'minute')
-+        inst.config.set('nsslapd-accesslog-level', '772')
-+        inst.restart()
-+
-+        log.info("Generating access log activity to create rotated logs")
-+        users = generate_access_log_activity(inst, num_users=10, num_binds=100)
-+
-+        inst.bind_s(DN_DM, PW_DM)
-+
-+        log.info("Resetting access log settings")
-+        inst.config.set('nsslapd-accesslog-maxlogsize', '100')
-+        inst.config.set('nsslapd-accesslog-logrotationtimeunit', 'day')
-+        inst.config.set('nsslapd-accesslog-level', '256')
-+        inst.restart()
-+
-+        log.info(f"Filling disk to go below threshold ({THRESHOLD_BYTES} bytes)")
-+        fill_file = fill_to_target_avail(inst.ds_paths.log_dir, THRESHOLD_BYTES - 1)
-+
-+        log.info("Waiting for loglevel to be set to default")
-+        wait_for_log_entry(inst, 'temporarily setting error loglevel to the default level', 11)
-+
-+        log.info("Verifying error log level was set to default")
-+        config_level = None
-+        for _ in range(10):
-+            time.sleep(1)
-+            config_entry = inst.search_s('cn=config', ldap.SCOPE_SUBTREE, '(objectclass=*)', ['nsslapd-errorlog-level'])
-+            config_level = int(re.findall(r'nsslapd-errorlog-level: \d+', str(config_entry))[0].split(' ')[1])
-+            if LOG_DEFAULT == config_level:
-+                break
-+        assert LOG_DEFAULT == config_level
-+
-+        log.info("Verifying access logging is disabled")
-+        wait_for_condition(inst, "inst.config.get_attr_val_utf8('nsslapd-accesslog-logging-enabled') == 'off'", 20)
-+
-+        with open(inst.errlog, 'r') as err_log:
-+            content = err_log.read()
-+            assert 'disabling access and audit logging' in content
-+
-+        log.info("Verifying rotated logs are removed")
-+        wait_for_log_entry(inst, 'deleting rotated logs', 20)
-+
-+        rotated_logs = re.findall(r'access.\d+-\d+', str(os.listdir(inst.ds_paths.log_dir)))
-+        assert not rotated_logs, f"Found unexpected rotated logs: {rotated_logs}"
-+
-+        with open(inst.errlog, 'r') as err_log:
-+            content = err_log.read()
-+            assert 'Unable to remove file:' not in content
-+            assert 'is too far below the threshold' not in content
-+
-+        log.info("All verifications passed")
-+
-     finally:
--        os.remove('{}/foo2'.format(topo.standalone.ds_paths.log_dir))
-+        # Clean up users
-+        if users:
-+            log.debug("Cleaning up test users")
-+            for user in users.list():
-+                try:
-+                    user.delete()
-+                except ldap.ALREADY_EXISTS:
-+                    pass
-+
-+        if fill_file and os.path.exists(fill_file):
-+            log.debug(f"Cleaning up fill file: {fill_file}")
-+            os.remove(fill_file)
-+
-+    log.info("Test completed successfully")
-
-
- @disk_monitoring_ack
- def test_operation_with_nsslapd_disk_monitoring_logging_critical_off_below_half_of_the_threshold(topo, setup, reset_logs):
--    """Verify operation with nsslapd-disk-monitoring-logging-critical: off below 1/2 of the threshold
--    Verify shutdown
--    Recovery and setup
-+    """Verify operation with nsslapd-disk-monitoring-logging-critical: off below 1/2 of the threshold.
-+    Verify shutdown and recovery.
-
-     :id: 9d4c7d48-fe9e-11e8-b5d6-8c16451d917b
-     :setup: Standalone
-     :steps:
--        1. Verify that DS goes into shutdown mode
--        2. Verifying that DS has been shut down after the grace period
--        3. Verify logging enabled
--        4. Create rotated logfile
--        5. Enable verbose logging
-+        1. Go below half threshold to trigger shutdown
-+        2. Verify DS shutdown after grace period
-+        3. Free space and restart
-+        4. Verify logging is re-enabled
-+        5. Create rotated logs and enable verbose logging
-     :expectedresults:
--        1. Should Success
--        2. Should Success
--        3. Should Success
--        4. Should Success
--        5. Should Success
-+        1. Success
-+        2. Success
-+        3. Success
-+        4. Success
-+        5. Success
-     """
--    assert topo.standalone.config.set('nsslapd-disk-monitoring', 'on')
--    assert topo.standalone.config.set('nsslapd-disk-monitoring-logging-critical', 'off')
--    topo.standalone.restart()
--    # Verify that DS goes into shutdown mode
--    if float(THRESHOLD) > FULL_THR_FILL_SIZE:
--        FULL_THR_FILL_SIZE_new = FULL_THR_FILL_SIZE + round(float(THRESHOLD) - FULL_THR_FILL_SIZE)
--        subprocess.call(['dd', 'if=/dev/zero', 'of={}/foo'.format(topo.standalone.ds_paths.log_dir), 'bs=1M', 'count={}'.format(FULL_THR_FILL_SIZE_new)])
--    else:
--        subprocess.call(['dd', 'if=/dev/zero', 'of={}/foo'.format(topo.standalone.ds_paths.log_dir), 'bs=1M', 'count={}'.format(FULL_THR_FILL_SIZE)])
--    # Increased sleep to avoid failure
--    _witherrorlog(topo, 'is too far below the threshold', 100)
--    _witherrorlog(topo, 'Signaling slapd for shutdown', 90)
--    # Verifying that DS has been shut down after the grace period
--    time.sleep(2)
--    assert topo.standalone.status() == False
--    # free_space
--    os.remove('{}/foo'.format(topo.standalone.ds_paths.log_dir))
--    open('{}/errors'.format(topo.standalone.ds_paths.log_dir), 'w').close()
--    # StartSlapd
--    topo.standalone.start()
--    # verify logging enabled
--    assert topo.standalone.config.get_attr_val_utf8('nsslapd-accesslog-logging-enabled') == 'on'
--    assert topo.standalone.config.get_attr_val_utf8('nsslapd-errorlog-logging-enabled') == 'on'
--    with open(topo.standalone.errlog, 'r') as study: study = study.read()
--    assert 'disabling access and audit logging' not in study
--    assert topo.standalone.config.set('nsslapd-accesslog-maxlogsize', '1')
--    assert topo.standalone.config.set('nsslapd-accesslog-logrotationtimeunit', 'minute')
--    assert topo.standalone.config.set('nsslapd-accesslog-level', '772')
--    topo.standalone.restart()
--    # create rotated logfile
--    users = UserAccounts(topo.standalone, DEFAULT_SUFFIX)
--    for i in range(10):
--        user_properties = {
--            'uid': 'cn=anuj{}'.format(i),
--            'cn': 'cn=anuj{}'.format(i),
--            'sn': 'cn=anuj{}'.format(i),
--            'userPassword': "Itsme123",
--            'uidNumber': '1{}'.format(i),
--            'gidNumber': '2{}'.format(i),
--            'homeDirectory': '/home/{}'.format(i)
--        }
--        users.create(properties=user_properties)
--    for j in range(100):
--        for i in [i for i in users.list()]: i.bind('Itsme123')
--    assert re.findall(r'access.\d+-\d+',str(os.listdir(topo.standalone.ds_paths.log_dir)))
--    topo.standalone.bind_s(DN_DM, PW_DM)
--    # enable verbose logging
--    assert topo.standalone.config.set('nsslapd-accesslog-maxlogsize', '100')
--    assert topo.standalone.config.set('nsslapd-accesslog-logrotationtimeunit', 'day')
--    assert topo.standalone.config.set('nsslapd-accesslog-level', '256')
--    assert topo.standalone.config.set('nsslapd-errorlog-level', '8')
--    topo.standalone.restart()
--    for i in [i for i in users.list()]: i.delete()
-+    log.info("Starting test_operation_with_nsslapd_disk_monitoring_logging_critical_off_below_half_of_the_threshold")
-+    inst = topo.standalone
-+    fill_file = None
-+    users = None
-+
-+    try:
-+        log.info("Configuring disk monitoring with critical logging disabled")
-+        inst.config.set('nsslapd-disk-monitoring', 'on')
-+        inst.config.set('nsslapd-disk-monitoring-logging-critical', 'off')
-+        inst.restart()
-+
-+        log.info(f"Filling disk to go below half threshold ({THRESHOLD_BYTES // 2} bytes)")
-+        fill_file = fill_to_target_avail(inst.ds_paths.log_dir, THRESHOLD_BYTES // 2 - 1)
-+
-+        log.info("Waiting for shutdown messages")
-+        wait_for_log_entry(inst, 'is too far below the threshold', 100)
-+        wait_for_log_entry(inst, 'Signaling slapd for shutdown', 90)
-+
-+        log.info("Verifying server shutdown within grace period")
-+        for i in range(60):
-+            time.sleep(1)
-+            if not inst.status():
-+                log.info(f"Server shut down after {i+1} seconds")
-+                break
-+        assert inst.status() == False
-+
-+        log.info("Freeing disk space and cleaning logs")
-+        os.remove(fill_file)
-+        fill_file = None
-+        open(f'{inst.ds_paths.log_dir}/errors', 'w').close()
-+
-+        log.info("Starting server after freeing space")
-+        inst.start()
-+
-+        log.info("Verifying logging is re-enabled")
-+        assert inst.config.get_attr_val_utf8('nsslapd-accesslog-logging-enabled') == 'on'
-+        assert inst.config.get_attr_val_utf8('nsslapd-errorlog-logging-enabled') == 'on'
-+
-+        with open(inst.errlog, 'r') as err_log:
-+            content = err_log.read()
-+            assert 'disabling access and audit logging' not in content
-+
-+        log.info("Setting up access log rotation for testing")
-+        inst.config.set('nsslapd-accesslog-maxlogsize', '1')
-+        inst.config.set('nsslapd-accesslog-logrotationtimeunit', 'minute')
-+        inst.config.set('nsslapd-accesslog-level', '772')
-+        inst.restart()
-+
-+        log.info("Creating rotated log files through user activity")
-+        users = generate_access_log_activity(inst, num_users=10, num_binds=100)
-+
-+        log.info("Waiting for log rotation to occur")
-+        for i in range(61):
-+            time.sleep(1)
-+            rotated_logs = re.findall(r'access.\d+-\d+', str(os.listdir(inst.ds_paths.log_dir)))
-+            if rotated_logs:
-+                log.info(f"Log rotation detected after {i+1} seconds")
-+                break
-+        assert rotated_logs, "No rotated logs found after waiting"
-+
-+        inst.bind_s(DN_DM, PW_DM)
-+
-+        log.info("Enabling verbose logging")
-+        inst.config.set('nsslapd-accesslog-maxlogsize', '100')
-+        inst.config.set('nsslapd-accesslog-logrotationtimeunit', 'day')
-+        inst.config.set('nsslapd-accesslog-level', '256')
-+        inst.config.set('nsslapd-errorlog-level', '8')
-+        inst.restart()
-+
-+        log.info("Recovery and setup verification completed")
-+
-+    finally:
-+        # Clean up users
-+        if users:
-+            log.debug("Cleaning up test users")
-+            for user in users.list():
-+                try:
-+                    user.delete()
-+                except ldap.ALREADY_EXISTS:
-+                    pass
-+
-+        if fill_file and os.path.exists(fill_file):
-+            log.debug(f"Cleaning up fill file: {fill_file}")
-+            os.remove(fill_file)
-+
-+    log.info("Test completed successfully")
-
-
- @disk_monitoring_ack
- def test_go_straight_below_half_of_the_threshold(topo, setup, reset_logs):
--    """Go straight below 1/2 of the threshold
--    Recovery and setup
-+    """Go straight below 1/2 of the threshold and verify recovery.
-
-     :id: a2a0664c-fe9e-11e8-b220-8c16451d917b
-     :setup: Standalone
-@@ -447,250 +714,415 @@ def test_go_straight_below_half_of_the_threshold(topo, setup, reset_logs):
-         4. Verify DS is in shutdown mode
-         5. Verify DS has recovered from shutdown
-     :expectedresults:
--        1. Should Success
--        2. Should Success
--        3. Should Success
--        4. Should Success
--        5. Should Success
-+        1. Success
-+        2. Success
-+        3. Success
-+        4. Success
-+        5. Success
-     """
--    assert topo.standalone.config.set('nsslapd-disk-monitoring', 'on')
--    assert topo.standalone.config.set('nsslapd-disk-monitoring-logging-critical', 'off')
--    assert topo.standalone.config.set('nsslapd-errorlog-level', '8')
--    topo.standalone.restart()
--    if float(THRESHOLD) > FULL_THR_FILL_SIZE:
--        FULL_THR_FILL_SIZE_new = FULL_THR_FILL_SIZE + round(float(THRESHOLD) - FULL_THR_FILL_SIZE) + 1
--        subprocess.call(['dd', 'if=/dev/zero', 'of={}/foo'.format(topo.standalone.ds_paths.log_dir), 'bs=1M', 'count={}'.format(FULL_THR_FILL_SIZE_new)])
--    else:
--        subprocess.call(['dd', 'if=/dev/zero', 'of={}/foo'.format(topo.standalone.ds_paths.log_dir), 'bs=1M', 'count={}'.format(FULL_THR_FILL_SIZE)])
--    _witherrorlog(topo, 'temporarily setting error loglevel to the default level', 11)
--    # Verify that verbose logging was set to default level
--    assert LOG_DEFAULT == int(re.findall(r'nsslapd-errorlog-level: \d+',
--                                         str(topo.standalone.search_s('cn=config', ldap.SCOPE_SUBTREE,
--                                                                      '(objectclass=*)',
--                                                                      ['nsslapd-errorlog-level']))
--                                         )[0].split(' ')[1])
--    # Verify that logging is disabled
--    _withouterrorlog(topo, "topo.standalone.config.get_attr_val_utf8('nsslapd-accesslog-logging-enabled') != 'off'", 11)
--    # Verify that rotated logs were removed
--    _witherrorlog(topo, 'disabling access and audit logging', 2)
--    _witherrorlog(topo, 'deleting rotated logs', 11)
--    with open(topo.standalone.errlog, 'r') as study:study = study.read()
--    assert 'Unable to remove file:' not in study
--    # Verify DS is in shutdown mode
--    _withouterrorlog(topo, 'topo.standalone.status() != False', 90)
--    _witherrorlog(topo, 'is too far below the threshold', 2)
--    # Verify DS has recovered from shutdown
--    os.remove('{}/foo'.format(topo.standalone.ds_paths.log_dir))
--    open('{}/errors'.format(topo.standalone.ds_paths.log_dir), 'w').close()
--    topo.standalone.start()
--    _withouterrorlog(topo, "topo.standalone.config.get_attr_val_utf8('nsslapd-accesslog-logging-enabled') != 'on'", 20)
--    with open(topo.standalone.errlog, 'r') as study: study = study.read()
--    assert 'disabling access and audit logging' not in study
-+    log.info("Starting test_go_straight_below_half_of_the_threshold")
-+    inst = topo.standalone
-+    fill_file = None
-+
-+    try:
-+        log.info("Configuring disk monitoring with critical logging disabled")
-+        inst.config.set('nsslapd-disk-monitoring', 'on')
-+        inst.config.set('nsslapd-disk-monitoring-logging-critical', 'off')
-+        inst.config.set('nsslapd-errorlog-level', '8')
-+        inst.restart()
-+
-+        # Go straight below half threshold
-+        log.info(f"Filling disk to go below half threshold ({THRESHOLD_BYTES // 2} bytes)")
-+        fill_file = fill_to_target_avail(inst.ds_paths.log_dir, THRESHOLD_BYTES // 2 - 1)
-+
-+        # Verify that verbose logging was set to default level
-+        log.info("Waiting for loglevel to be set to default")
-+        wait_for_log_entry(inst, 'temporarily setting error loglevel to the default level', 11)
-+
-+        log.info("Verifying error log level was set to default")
-+        config_entry = inst.search_s('cn=config', ldap.SCOPE_SUBTREE, '(objectclass=*)', ['nsslapd-errorlog-level'])
-+        current_level = int(re.findall(r'nsslapd-errorlog-level: \d+', str(config_entry))[0].split(' ')[1])
-+        assert LOG_DEFAULT == current_level
-+
-+        log.info("Verifying access logging is disabled")
-+        wait_for_condition(inst, "inst.config.get_attr_val_utf8('nsslapd-accesslog-logging-enabled') != 'off'", 11)
-+
-+        log.info("Verifying expected disk monitoring messages")
-+        wait_for_log_entry(inst, 'disabling access and audit logging', 2)
-+        wait_for_log_entry(inst, 'deleting rotated logs', 11)
-+
-+        with open(inst.errlog, 'r') as err_log:
-+            content = err_log.read()
-+            assert 'Unable to remove file:' not in content
-+
-+        log.info("Verifying server enters shutdown mode")
-+        wait_for_condition(inst, 'inst.status() != False', 90)
-+        wait_for_log_entry(inst, 'is too far below the threshold', 2)
-+
-+        log.info("Freeing disk space and restarting server")
-+        os.remove(fill_file)
-+        fill_file = None
-+        open(f'{inst.ds_paths.log_dir}/errors', 'w').close()
-+        inst.start()
-+
-+        log.info("Verifying server recovery")
-+        wait_for_condition(inst, "inst.config.get_attr_val_utf8('nsslapd-accesslog-logging-enabled') != 'on'", 20)
-+
-+        with open(inst.errlog, 'r') as err_log:
-+            content = err_log.read()
-+            assert 'disabling access and audit logging' not in content
-+
-+        log.info("Recovery verification completed")
-+
-+    finally:
-+        if fill_file and os.path.exists(fill_file):
-+            log.debug(f"Cleaning up fill file: {fill_file}")
-+            os.remove(fill_file)
-+
-+    log.info("Test completed successfully")
-
-
- @disk_monitoring_ack
- def test_readonly_on_threshold(topo, setup, reset_logs):
--    """Verify that nsslapd-disk-monitoring-readonly-on-threshold switches the server to read-only mode
-+    """Verify that nsslapd-disk-monitoring-readonly-on-threshold switches the server to read-only mode.
-
-     :id: 06814c19-ef3c-4800-93c9-c7c6e76fcbb9
-     :customerscenario: True
-     :setup: Standalone
-     :steps:
--        1. Verify that the backend is in read-only mode
--        2. Go back above the threshold
--        3. Verify that the backend is in read-write mode
-+        1. Configure readonly on threshold
-+        2. Go below threshold and verify backend is read-only
-+        3. Go back above threshold and verify backend is read-write
-     :expectedresults:
--        1. Should Success
--        2. Should Success
--        3. Should Success
-+        1. Success
-+        2. Success
-+        3. Success
-     """
--    file_path = '{}/foo'.format(topo.standalone.ds_paths.log_dir)
--    backends = Backends(topo.standalone)
--    backend_name = backends.list()[0].rdn
--    # Verify that verbose logging was set to default level
--    topo.standalone.deleteErrorLogs()
--    assert topo.standalone.config.set('nsslapd-disk-monitoring', 'on')
--    assert topo.standalone.config.set('nsslapd-disk-monitoring-readonly-on-threshold', 'on')
--    topo.standalone.restart()
-+    log.info("Starting test_readonly_on_threshold")
-+    inst = topo.standalone
-+    fill_file = None
-+    test_user = None
-+
-     try:
--        subprocess.call(['dd', 'if=/dev/zero', f'of={file_path}', 'bs=1M', f'count={HALF_THR_FILL_SIZE}'])
--        _witherrorlog(topo, f"Putting the backend '{backend_name}' to read-only mode", 11)
-+        backends = Backends(inst)
-+        backend_name = backends.list()[0].rdn
-+        log.info(f"Testing with backend: {backend_name}")
-+
-+        log.info("Configuring disk monitoring with readonly on threshold")
-+        inst.deleteErrorLogs()
-+        inst.config.set('nsslapd-disk-monitoring', 'on')
-+        inst.config.set('nsslapd-disk-monitoring-readonly-on-threshold', 'on')
-+        inst.restart()
-+
-+        log.info(f"Filling disk to go below threshold ({THRESHOLD_BYTES} bytes)")
-+        fill_file = fill_to_target_avail(inst.ds_paths.log_dir, THRESHOLD_BYTES - 1)
-+
-+        log.info("Waiting for backend to enter read-only mode")
-+        wait_for_log_entry(inst, f"Putting the backend '{backend_name}' to read-only mode", 11)
-+
-+        log.info("Verifying backend is in read-only mode")
-         users = UserAccounts(topo.standalone, DEFAULT_SUFFIX)
-         try:
--            user = users.create_test_user()
--            user.delete()
-+            test_user = users.create_test_user()
-+            test_user.delete()
-+            assert False, "Expected UNWILLING_TO_PERFORM error for read-only mode"
-         except ldap.UNWILLING_TO_PERFORM as e:
-             if 'database is read-only' not in str(e):
-                 raise
--        os.remove(file_path)
--        _witherrorlog(topo, f"Putting the backend '{backend_name}' back to read-write mode", 11)
--        user = users.create_test_user()
--        assert user.exists()
--        user.delete()
-+        log.info("Confirmed: backend correctly rejects writes in read-only mode")
-+
-+        log.info("Freeing disk space")
-+        os.remove(fill_file)
-+        fill_file = None
-+
-+        log.info("Waiting for backend to return to read-write mode")
-+        wait_for_log_entry(inst, f"Putting the backend '{backend_name}' back to read-write mode", 11)
-+
-+        log.info("Verifying backend is in read-write mode")
-+        test_user = users.create_test_user()
-+        assert test_user.exists()
-+        test_user.delete()
-+        test_user = None
-+
-+        log.info("Confirmed: backend correctly accepts writes in read-write mode")
-+
-     finally:
--        if os.path.exists(file_path):
--            os.remove(file_path)
-+        if test_user:
-+            try:
-+                test_user.delete()
-+            except:
-+                pass
-+
-+        if fill_file and os.path.exists(fill_file):
-+            log.debug(f"Cleaning up fill file: {fill_file}")
-+            os.remove(fill_file)
-+
-+    log.info("Test completed successfully")
-
-
- @disk_monitoring_ack
- def test_readonly_on_threshold_below_half_of_the_threshold(topo, setup, reset_logs):
--    """Go below 1/2 of the threshold when readonly on threshold is enabled
-+    """Go below 1/2 of the threshold when readonly on threshold is enabled.
-
-     :id: 10262663-b41f-420e-a2d0-9532dd54fa7c
-     :customerscenario: True
-     :setup: Standalone
-     :steps:
--        1. Go straight below 1/2 of the threshold
--        2. Verify that the backend is in read-only mode
--        3. Go back above the threshold
--        4. Verify that the backend is in read-write mode
-+        1. Configure readonly on threshold
-+        2. Go below half threshold
-+        3. Verify backend is read-only and shutdown messages appear
-+        4. Free space and verify backend returns to read-write
-     :expectedresults:
--        1. Should Success
--        2. Should Success
--        3. Should Success
--        4. Should Success
-+        1. Success
-+        2. Success
-+        3. Success
-+        4. Success
-     """
--    file_path = '{}/foo'.format(topo.standalone.ds_paths.log_dir)
--    backends = Backends(topo.standalone)
--    backend_name = backends.list()[0].rdn
--    topo.standalone.deleteErrorLogs()
--    assert topo.standalone.config.set('nsslapd-disk-monitoring', 'on')
--    assert topo.standalone.config.set('nsslapd-disk-monitoring-readonly-on-threshold', 'on')
--    topo.standalone.restart()
-+    log.info("Starting test_readonly_on_threshold_below_half_of_the_threshold")
-+    inst = topo.standalone
-+    fill_file = None
-+    test_user = None
-+
-     try:
--        if float(THRESHOLD) > FULL_THR_FILL_SIZE:
--            FULL_THR_FILL_SIZE_new = FULL_THR_FILL_SIZE + round(float(THRESHOLD) - FULL_THR_FILL_SIZE) + 1
--            subprocess.call(['dd', 'if=/dev/zero', f'of={file_path}', 'bs=1M', f'count={FULL_THR_FILL_SIZE_new}'])
--        else:
--            subprocess.call(['dd', 'if=/dev/zero', f'of={file_path}', 'bs=1M', f'count={FULL_THR_FILL_SIZE}'])
--        _witherrorlog(topo, f"Putting the backend '{backend_name}' to read-only mode", 11)
--        users = UserAccounts(topo.standalone, DEFAULT_SUFFIX)
-+        backends = Backends(inst)
-+        backend_name = backends.list()[0].rdn
-+        log.info(f"Testing with backend: {backend_name}")
-+
-+        log.info("Configuring disk monitoring with readonly on threshold")
-+        inst.deleteErrorLogs()
-+        inst.config.set('nsslapd-disk-monitoring', 'on')
-+        inst.config.set('nsslapd-disk-monitoring-readonly-on-threshold', 'on')
-+        inst.restart()
-+
-+        log.info(f"Filling disk to go below half threshold ({THRESHOLD_BYTES // 2} bytes)")
-+        fill_file = fill_to_target_avail(inst.ds_paths.log_dir, THRESHOLD_BYTES // 2 - 1)
-+
-+        log.info("Waiting for backend to enter read-only mode")
-+        wait_for_log_entry(inst, f"Putting the backend '{backend_name}' to read-only mode", 11)
-+
-+        log.info("Verifying backend is in read-only mode")
-+        users = UserAccounts(inst, DEFAULT_SUFFIX)
-         try:
--            user = users.create_test_user()
--            user.delete()
-+            test_user = users.create_test_user()
-+            test_user.delete()
-+            assert False, "Expected UNWILLING_TO_PERFORM error for read-only mode"
-         except ldap.UNWILLING_TO_PERFORM as e:
-             if 'database is read-only' not in str(e):
-                 raise
--        _witherrorlog(topo, 'is too far below the threshold', 51)
--        # Verify DS has recovered from shutdown
--        os.remove(file_path)
--        _witherrorlog(topo, f"Putting the backend '{backend_name}' back to read-write mode", 51)
--        user = users.create_test_user()
--        assert user.exists()
--        user.delete()
-+        log.info("Confirmed: backend correctly rejects writes in read-only mode")
-+
-+        log.info("Waiting for shutdown threshold message")
-+        wait_for_log_entry(inst, 'is too far below the threshold', 51)
-+
-+        log.info("Freeing disk space")
-+        os.remove(fill_file)
-+        fill_file = None
-+
-+        log.info("Waiting for backend to return to read-write mode")
-+        wait_for_log_entry(inst, f"Putting the backend '{backend_name}' back to read-write mode", 51)
-+
-+        log.info("Verifying backend is in read-write mode")
-+        test_user = users.create_test_user()
-+        assert test_user.exists()
-+        test_user.delete()
-+        test_user = None
-+
-+        log.info("Confirmed: backend correctly accepts writes in read-write mode")
-+
-     finally:
--        if os.path.exists(file_path):
--            os.remove(file_path)
-+        if test_user:
-+            try:
-+                test_user.delete()
-+            except:
-+                pass
-+
-+        if fill_file and os.path.exists(fill_file):
-+            log.debug(f"Cleaning up fill file: {fill_file}")
-+            os.remove(fill_file)
-+
-+    log.info("Test completed successfully")
-
-
- @disk_monitoring_ack
- def test_below_half_of_the_threshold_not_starting_after_shutdown(topo, setup, reset_logs):
--    """Test that the instance won't start if we are below 1/2 of the threshold
-+    """Test that the instance won't start if we are below 1/2 of the threshold.
-
-     :id: cceeaefd-9fa4-45c5-9ac6-9887a0671ef8
-     :customerscenario: True
-     :setup: Standalone
-     :steps:
--        1. Go straight below 1/2 of the threshold
--        2. Try to start the instance
--        3. Go back above the threshold
--        4. Try to start the instance
-+        1. Go below half threshold and wait for shutdown
-+        2. Try to start the instance and verify it fails
-+        3. Free space and verify instance starts successfully
-     :expectedresults:
--        1. Should Success
--        2. Should Fail
--        3. Should Success
--        4. Should Success
-+        1. Success
-+        2. Startup fails as expected
-+        3. Success
-     """
--    file_path = '{}/foo'.format(topo.standalone.ds_paths.log_dir)
--    topo.standalone.deleteErrorLogs()
--    assert topo.standalone.config.set('nsslapd-disk-monitoring', 'on')
--    topo.standalone.restart()
-+    log.info("Starting test_below_half_of_the_threshold_not_starting_after_shutdown")
-+    inst = topo.standalone
-+    fill_file = None
-+
-     try:
--        if float(THRESHOLD) > FULL_THR_FILL_SIZE:
--            FULL_THR_FILL_SIZE_new = FULL_THR_FILL_SIZE + round(float(THRESHOLD) - FULL_THR_FILL_SIZE) + 1
--            subprocess.call(['dd', 'if=/dev/zero', f'of={file_path}', 'bs=1M', f'count={FULL_THR_FILL_SIZE_new}'])
--        else:
--            subprocess.call(['dd', 'if=/dev/zero', f'of={file_path}', 'bs=1M', f'count={FULL_THR_FILL_SIZE}'])
--        _withouterrorlog(topo, 'topo.standalone.status() == True', 120)
-+        log.info("Configuring disk monitoring")
-+        inst.deleteErrorLogs()
-+        inst.config.set('nsslapd-disk-monitoring', 'on')
-+        inst.restart()
-+
-+        log.info(f"Filling disk to go below half threshold ({THRESHOLD_BYTES // 2} bytes)")
-+        fill_file = fill_to_target_avail(inst.ds_paths.log_dir, THRESHOLD_BYTES // 2 - 1)
-+
-+        log.info("Waiting for server to shut down due to disk space")
-+        wait_for_condition(inst, 'inst.status() == True', 120)
-+
-+        log.info("Attempting to start instance (should fail)")
-         try:
--            topo.standalone.start()
-+            inst.start()
-+            assert False, "Instance startup should have failed due to low disk space"
-         except (ValueError, subprocess.CalledProcessError):
--            topo.standalone.log.info("Instance start up has failed as expected")
--        _witherrorlog(topo, f'is too far below the threshold({THRESHOLD_BYTES} bytes). Exiting now', 2)
--        # Verify DS has recovered from shutdown
--        os.remove(file_path)
--        topo.standalone.start()
-+            log.info("Instance startup failed as expected due to low disk space")
-+
-+        wait_for_log_entry(inst, f'is too far below the threshold({THRESHOLD_BYTES} bytes). Exiting now', 2)
-+
-+        log.info("Freeing disk space")
-+        os.remove(fill_file)
-+        fill_file = None
-+
-+        log.info("Starting instance after freeing space")
-+        inst.start()
-+        assert inst.status() == True
-+        log.info("Instance started successfully after freeing space")
-+
-     finally:
--        if os.path.exists(file_path):
--            os.remove(file_path)
-+        if fill_file and os.path.exists(fill_file):
-+            log.debug(f"Cleaning up fill file: {fill_file}")
-+            os.remove(fill_file)
-+
-+    log.info("Test completed successfully")
-
-
- @disk_monitoring_ack
- def test_go_straight_below_4kb(topo, setup, reset_logs):
--    """Go straight below 4KB
-+    """Go straight below 4KB and verify behavior.
- - :id: a855115a-fe9e-11e8-8e91-8c16451d917b - :setup: Standalone - :steps: - 1. Go straight below 4KB -- 2. Clean space -+ 2. Verify server behavior -+ 3. Clean space and restart - :expectedresults: -- 1. Should Success -- 2. Should Success -+ 1. Success -+ 2. Success -+ 3. Success - """ -- assert topo.standalone.config.set('nsslapd-disk-monitoring', 'on') -- topo.standalone.restart() -- subprocess.call(['dd', 'if=/dev/zero', 'of={}/foo'.format(topo.standalone.ds_paths.log_dir), 'bs=1M', 'count={}'.format(FULL_THR_FILL_SIZE)]) -- subprocess.call(['dd', 'if=/dev/zero', 'of={}/foo1'.format(topo.standalone.ds_paths.log_dir), 'bs=1M', 'count={}'.format(FULL_THR_FILL_SIZE)]) -- _withouterrorlog(topo, 'topo.standalone.status() != False', 11) -- os.remove('{}/foo'.format(topo.standalone.ds_paths.log_dir)) -- os.remove('{}/foo1'.format(topo.standalone.ds_paths.log_dir)) -- topo.standalone.start() -- assert topo.standalone.status() == True -+ log.info("Starting test_go_straight_below_4kb") -+ inst = topo.standalone -+ fill_file = None -+ -+ try: -+ log.info("Configuring disk monitoring") -+ inst.config.set('nsslapd-disk-monitoring', 'on') -+ inst.restart() -+ -+ log.info("Filling disk to go below 4KB") -+ fill_file = fill_to_target_avail(inst.ds_paths.log_dir, 4000) -+ -+ log.info("Waiting for server shutdown due to extreme low disk space") -+ wait_for_condition(inst, 'inst.status() != False', 11) -+ -+ log.info("Freeing disk space and restarting") -+ os.remove(fill_file) -+ fill_file = None -+ inst.start() -+ -+ assert inst.status() == True -+ log.info("Server restarted successfully after freeing space") -+ -+ finally: -+ if fill_file and os.path.exists(fill_file): -+ log.debug(f"Cleaning up fill file: {fill_file}") -+ os.remove(fill_file) -+ -+ log.info("Test completed successfully") - - - @disk_monitoring_ack - def test_threshold_to_overflow_value(topo, setup, reset_logs): -- """Overflow in nsslapd-disk-monitoring-threshold -+ """Test overflow in nsslapd-disk-monitoring-threshold. - - :id: ad60ab3c-fe9e-11e8-88dc-8c16451d917b - :setup: Standalone - :steps: -- 1. Setting nsslapd-disk-monitoring-threshold to overflow_value -+ 1. Set nsslapd-disk-monitoring-threshold to overflow value -+ 2. Verify the value is set correctly - :expectedresults: -- 1. Should Success -+ 1. Success -+ 2. 
Success - """ -+ log.info("Starting test_threshold_to_overflow_value") -+ inst = topo.standalone -+ - overflow_value = '3000000000' -- # Setting nsslapd-disk-monitoring-threshold to overflow_value -- assert topo.standalone.config.set('nsslapd-disk-monitoring-threshold', ensure_bytes(overflow_value)) -- assert overflow_value == re.findall(r'nsslapd-disk-monitoring-threshold: \d+', str( -- topo.standalone.search_s('cn=config', ldap.SCOPE_SUBTREE, '(objectclass=*)', -- ['nsslapd-disk-monitoring-threshold'])))[0].split(' ')[1] -+ log.info(f"Setting threshold to overflow value: {overflow_value}") -+ -+ inst.config.set('nsslapd-disk-monitoring-threshold', ensure_bytes(overflow_value)) -+ -+ config_entry = inst.search_s('cn=config', ldap.SCOPE_SUBTREE, '(objectclass=*)', ['nsslapd-disk-monitoring-threshold']) -+ current_value = re.findall(r'nsslapd-disk-monitoring-threshold: \d+', str(config_entry))[0].split(' ')[1] -+ assert overflow_value == current_value -+ -+ log.info(f"Verified: threshold value set to {current_value}") -+ log.info("Test completed successfully") - - - @disk_monitoring_ack - def test_threshold_is_reached_to_half(topo, setup, reset_logs): -- """RHDS not shutting down when disk monitoring threshold is reached to half. -+ """Verify RHDS not shutting down when disk monitoring threshold is reached to half. - - :id: b2d3665e-fe9e-11e8-b9c0-8c16451d917b - :setup: Standalone -- :steps: Standalone -- 1. Verify that there is not endless loop of error messages -+ :steps: -+ 1. Configure disk monitoring with critical logging -+ 2. Go below threshold -+ 3. Verify there is no endless loop of error messages - :expectedresults: -- 1. Should Success -+ 1. Success -+ 2. Success -+ 3. Success - """ -+ log.info("Starting test_threshold_is_reached_to_half") -+ inst = topo.standalone -+ fill_file = None -+ -+ try: -+ log.info("Configuring disk monitoring with critical logging enabled") -+ inst.config.set('nsslapd-disk-monitoring', 'on') -+ inst.config.set('nsslapd-disk-monitoring-logging-critical', 'on') -+ inst.config.set('nsslapd-errorlog-level', '8') -+ inst.config.set('nsslapd-disk-monitoring-threshold', ensure_bytes(str(THRESHOLD_BYTES))) -+ inst.restart() -+ -+ log.info(f"Filling disk to go below threshold ({THRESHOLD_BYTES} bytes)") -+ fill_file = fill_to_target_avail(inst.ds_paths.log_dir, THRESHOLD_BYTES // 2 - 1) -+ -+ log.info("Waiting for loglevel message and verifying it's not repeated") -+ wait_for_log_entry(inst, "temporarily setting error loglevel to the default level", 11) -+ -+ with open(inst.errlog, 'r') as err_log: -+ content = err_log.read() -+ -+ message_count = len(re.findall("temporarily setting error loglevel to the default level", content)) -+ assert message_count == 1, f"Expected 1 occurrence of message, found {message_count}" -+ -+ log.info("Verified: no endless loop of error messages") -+ -+ finally: -+ if fill_file and os.path.exists(fill_file): -+ log.debug(f"Cleaning up fill file: {fill_file}") -+ os.remove(fill_file) - -- assert topo.standalone.config.set('nsslapd-disk-monitoring', 'on') -- assert topo.standalone.config.set('nsslapd-disk-monitoring-logging-critical', 'on') -- assert topo.standalone.config.set('nsslapd-errorlog-level', '8') -- assert topo.standalone.config.set('nsslapd-disk-monitoring-threshold', ensure_bytes(THRESHOLD_BYTES)) -- topo.standalone.restart() -- subprocess.call(['dd', 'if=/dev/zero', 'of={}/foo'.format(topo.standalone.ds_paths.log_dir), 'bs=1M', 'count={}'.format(HALF_THR_FILL_SIZE)]) -- # Verify that there is not endless loop of error 
messages -- _witherrorlog(topo, "temporarily setting error loglevel to the default level", 10) -- with open(topo.standalone.errlog, 'r') as study:study = study.read() -- assert len(re.findall("temporarily setting error loglevel to the default level", study)) == 1 -- os.remove('{}/foo'.format(topo.standalone.ds_paths.log_dir)) -+ log.info("Test completed successfully") - - - @disk_monitoring_ack -@@ -711,58 +1143,77 @@ def test_threshold_is_reached_to_half(topo, setup, reset_logs): - ("nsslapd-disk-monitoring-grace-period", '0'), - ]) - def test_negagtive_parameterize(topo, setup, reset_logs, test_input, expected): -- """Verify that invalid operations are not permitted -+ """Verify that invalid operations are not permitted. - - :id: b88efbf8-fe9e-11e8-8499-8c16451d917b - :parametrized: yes - :setup: Standalone - :steps: -- 1. Verify that invalid operations are not permitted. -+ 1. Try to set invalid configuration values - :expectedresults: -- 1. Should not success. -+ 1. Configuration change should fail - """ -+ log.info(f"Starting test_negagtive_parameterize for {test_input}={expected}") -+ inst = topo.standalone -+ -+ log.info(f"Attempting to set invalid value: {test_input}={expected}") - with pytest.raises(Exception): -- topo.standalone.config.set(test_input, ensure_bytes(expected)) -+ inst.config.set(test_input, ensure_bytes(expected)) -+ -+ log.info("Verified: invalid configuration value was rejected") -+ log.info("Test completed successfully") - - - @disk_monitoring_ack - def test_valid_operations_are_permitted(topo, setup, reset_logs): -- """Verify that valid operations are permitted -+ """Verify that valid operations are permitted. - - :id: bd4f83f6-fe9e-11e8-88f4-8c16451d917b - :setup: Standalone - :steps: -- 1. Verify that valid operations are permitted -+ 1. Perform various valid configuration operations - :expectedresults: -- 1. Should Success. -+ 1. 
All operations should succeed - """ -- assert topo.standalone.config.set('nsslapd-disk-monitoring', 'on') -- assert topo.standalone.config.set('nsslapd-disk-monitoring-logging-critical', 'on') -- assert topo.standalone.config.set('nsslapd-errorlog-level', '8') -- topo.standalone.restart() -- # Trying to delete nsslapd-disk-monitoring-threshold -- assert topo.standalone.modify_s('cn=config', [(ldap.MOD_DELETE, 'nsslapd-disk-monitoring-threshold', '')]) -- # Trying to add another value to nsslapd-disk-monitoring-threshold (check that it is not multivalued) -- topo.standalone.config.add('nsslapd-disk-monitoring-threshold', '2000001') -- # Trying to delete nsslapd-disk-monitoring -- assert topo.standalone.modify_s('cn=config', [(ldap.MOD_DELETE, 'nsslapd-disk-monitoring', ensure_bytes(str( -- topo.standalone.search_s('cn=config', ldap.SCOPE_SUBTREE, '(objectclass=*)', ['nsslapd-disk-monitoring'])[ -- 0]).split(' ')[2].split('\n\n')[0]))]) -- # Trying to add another value to nsslapd-disk-monitoring -- topo.standalone.config.add('nsslapd-disk-monitoring', 'off') -- # Trying to delete nsslapd-disk-monitoring-grace-period -- assert topo.standalone.modify_s('cn=config', [(ldap.MOD_DELETE, 'nsslapd-disk-monitoring-grace-period', '')]) -- # Trying to add another value to nsslapd-disk-monitoring-grace-period -- topo.standalone.config.add('nsslapd-disk-monitoring-grace-period', '61') -- # Trying to delete nsslapd-disk-monitoring-logging-critical -- assert topo.standalone.modify_s('cn=config', [(ldap.MOD_DELETE, 'nsslapd-disk-monitoring-logging-critical', -- ensure_bytes(str( -- topo.standalone.search_s('cn=config', ldap.SCOPE_SUBTREE, -- '(objectclass=*)', [ -- 'nsslapd-disk-monitoring-logging-critical'])[ -- 0]).split(' ')[2].split('\n\n')[0]))]) -- # Trying to add another value to nsslapd-disk-monitoring-logging-critical -- assert topo.standalone.config.set('nsslapd-disk-monitoring-logging-critical', 'on') -+ log.info("Starting test_valid_operations_are_permitted") -+ inst = topo.standalone -+ -+ log.info("Setting initial disk monitoring configuration") -+ inst.config.set('nsslapd-disk-monitoring', 'on') -+ inst.config.set('nsslapd-disk-monitoring-logging-critical', 'on') -+ inst.config.set('nsslapd-errorlog-level', '8') -+ inst.restart() -+ -+ log.info("Testing deletion of nsslapd-disk-monitoring-threshold") -+ inst.modify_s('cn=config', [(ldap.MOD_DELETE, 'nsslapd-disk-monitoring-threshold', '')]) -+ -+ log.info("Testing addition of nsslapd-disk-monitoring-threshold value") -+ inst.config.add('nsslapd-disk-monitoring-threshold', '2000001') -+ -+ log.info("Testing deletion of nsslapd-disk-monitoring") -+ config_entry = inst.search_s('cn=config', ldap.SCOPE_SUBTREE, '(objectclass=*)', ['nsslapd-disk-monitoring']) -+ current_value = str(config_entry[0]).split(' ')[2].split('\n\n')[0] -+ inst.modify_s('cn=config', [(ldap.MOD_DELETE, 'nsslapd-disk-monitoring', ensure_bytes(current_value))]) -+ -+ log.info("Testing addition of nsslapd-disk-monitoring value") -+ inst.config.add('nsslapd-disk-monitoring', 'off') -+ -+ log.info("Testing deletion of nsslapd-disk-monitoring-grace-period") -+ inst.modify_s('cn=config', [(ldap.MOD_DELETE, 'nsslapd-disk-monitoring-grace-period', '')]) -+ -+ log.info("Testing addition of nsslapd-disk-monitoring-grace-period value") -+ inst.config.add('nsslapd-disk-monitoring-grace-period', '61') -+ -+ log.info("Testing deletion of nsslapd-disk-monitoring-logging-critical") -+ config_entry = inst.search_s('cn=config', ldap.SCOPE_SUBTREE, '(objectclass=*)', 
['nsslapd-disk-monitoring-logging-critical']) -+ current_value = str(config_entry[0]).split(' ')[2].split('\n\n')[0] -+ inst.modify_s('cn=config', [(ldap.MOD_DELETE, 'nsslapd-disk-monitoring-logging-critical', ensure_bytes(current_value))]) -+ -+ log.info("Testing addition of nsslapd-disk-monitoring-logging-critical value") -+ inst.config.set('nsslapd-disk-monitoring-logging-critical', 'on') -+ -+ log.info("All valid operations completed successfully") -+ log.info("Test completed successfully") - - - if __name__ == '__main__': --- -2.49.0 - diff --git a/0029-Issue-6778-Memory-leak-in-roles_cache_create_object_.patch b/0029-Issue-6778-Memory-leak-in-roles_cache_create_object_.patch deleted file mode 100644 index 6798497..0000000 --- a/0029-Issue-6778-Memory-leak-in-roles_cache_create_object_.patch +++ /dev/null @@ -1,262 +0,0 @@ -From c80554be0cea0eb5f2ab6d1e6e1fcef098304f69 Mon Sep 17 00:00:00 2001 -From: Viktor Ashirov -Date: Wed, 16 Jul 2025 11:22:30 +0200 -Subject: [PATCH] Issue 6778 - Memory leak in - roles_cache_create_object_from_entry part 2 - -Bug Description: -Every time a role with scope DN is processed, we leak rolescopeDN. - -Fix Description: -* Initialize all pointer variables to NULL -* Add additional NULL checks -* Free rolescopeDN -* Move test_rewriter_with_invalid_filter before the DB contains 90k entries -* Use task.wait() for import task completion instead of parsing logs, -increase the timeout - -Fixes: https://github.com/389ds/389-ds-base/issues/6778 - -Reviewed by: @progier389 (Thanks!) ---- - dirsrvtests/tests/suites/roles/basic_test.py | 164 +++++++++---------- - ldap/servers/plugins/roles/roles_cache.c | 10 +- - 2 files changed, 82 insertions(+), 92 deletions(-) - -diff --git a/dirsrvtests/tests/suites/roles/basic_test.py b/dirsrvtests/tests/suites/roles/basic_test.py -index d92d6f0c3..ec208bae9 100644 ---- a/dirsrvtests/tests/suites/roles/basic_test.py -+++ b/dirsrvtests/tests/suites/roles/basic_test.py -@@ -510,6 +510,76 @@ def test_vattr_on_managed_role(topo, request): - - request.addfinalizer(fin) - -+def test_rewriter_with_invalid_filter(topo, request): -+ """Test that the server does not crash when having -+ an invalid filter in a filtered role -+ -+ :id: 5013b0b2-0af6-11f0-8684-482ae39447e5 -+ :setup: standalone server -+ :steps: -+ 1. Setup filtered role with good filter -+ 2. Setup nsrole rewriter -+ 3. Restart the server -+ 4. Search for entries -+ 5. Setup filtered role with bad filter -+ 6. Search for entries -+ :expectedresults: -+ 1. Operation should succeed -+ 2. Operation should succeed -+ 3. Operation should succeed -+ 4. Operation should succeed -+ 5. Operation should succeed -+ 6. 
Operation should succeed -+ """ -+ inst = topo.standalone -+ entries = [] -+ -+ def fin(): -+ inst.start() -+ for entry in entries: -+ entry.delete() -+ request.addfinalizer(fin) -+ -+ # Setup filtered role -+ roles = FilteredRoles(inst, f'ou=people,{DEFAULT_SUFFIX}') -+ filter_ko = '(&((objectClass=top)(objectClass=nsPerson))' -+ filter_ok = '(&(objectClass=top)(objectClass=nsPerson))' -+ role_properties = { -+ 'cn': 'TestFilteredRole', -+ 'nsRoleFilter': filter_ok, -+ 'description': 'Test good filter', -+ } -+ role = roles.create(properties=role_properties) -+ entries.append(role) -+ -+ # Setup nsrole rewriter -+ rewriters = Rewriters(inst) -+ rewriter_properties = { -+ "cn": "nsrole", -+ "nsslapd-libpath": 'libroles-plugin', -+ "nsslapd-filterrewriter": 'role_nsRole_filter_rewriter', -+ } -+ rewriter = rewriters.ensure_state(properties=rewriter_properties) -+ entries.append(rewriter) -+ -+ # Restart the instance -+ inst.restart() -+ -+ # Search for entries -+ entries = inst.search_s(DEFAULT_SUFFIX, ldap.SCOPE_SUBTREE, "(nsrole=%s)" % role.dn) -+ -+ # Set bad filter -+ role_properties = { -+ 'cn': 'TestFilteredRole', -+ 'nsRoleFilter': filter_ko, -+ 'description': 'Test bad filter', -+ } -+ role.ensure_state(properties=role_properties) -+ -+ # Search for entries -+ entries = inst.search_s(DEFAULT_SUFFIX, ldap.SCOPE_SUBTREE, "(nsrole=%s)" % role.dn) -+ -+ - def test_managed_and_filtered_role_rewrite(topo, request): - """Test that filter components containing 'nsrole=xxx' - are reworked if xxx is either a filtered role or a managed -@@ -581,17 +651,11 @@ def test_managed_and_filtered_role_rewrite(topo, request): - PARENT="ou=people,%s" % DEFAULT_SUFFIX - dbgen_users(topo.standalone, 90000, import_ldif, DEFAULT_SUFFIX, entry_name=RDN, generic=True, parent=PARENT) - -- # online import -+ # Online import - import_task = ImportTask(topo.standalone) - import_task.import_suffix_from_ldif(ldiffile=import_ldif, suffix=DEFAULT_SUFFIX) -- # Check for up to 200sec that the completion -- for i in range(1, 20): -- if len(topo.standalone.ds_error_log.match('.*import userRoot: Import complete. Processed 9000.*')) > 0: -- break -- time.sleep(10) -- import_complete = topo.standalone.ds_error_log.match('.*import userRoot: Import complete. Processed 9000.*') -- assert (len(import_complete) == 1) -- -+ import_task.wait(timeout=400) -+ assert import_task.get_exit_code() == 0 - # Restart server - topo.standalone.restart() - -@@ -715,17 +779,11 @@ def test_not_such_entry_role_rewrite(topo, request): - PARENT="ou=people,%s" % DEFAULT_SUFFIX - dbgen_users(topo.standalone, 91000, import_ldif, DEFAULT_SUFFIX, entry_name=RDN, generic=True, parent=PARENT) - -- # online import -+ # Online import - import_task = ImportTask(topo.standalone) - import_task.import_suffix_from_ldif(ldiffile=import_ldif, suffix=DEFAULT_SUFFIX) -- # Check for up to 200sec that the completion -- for i in range(1, 20): -- if len(topo.standalone.ds_error_log.match('.*import userRoot: Import complete. Processed 9100.*')) > 0: -- break -- time.sleep(10) -- import_complete = topo.standalone.ds_error_log.match('.*import userRoot: Import complete. 
Processed 9100.*') -- assert (len(import_complete) == 1) -- -+ import_task.wait(timeout=400) -+ assert import_task.get_exit_code() == 0 - # Restart server - topo.standalone.restart() - -@@ -769,76 +827,6 @@ def test_not_such_entry_role_rewrite(topo, request): - request.addfinalizer(fin) - - --def test_rewriter_with_invalid_filter(topo, request): -- """Test that server does not crash when having -- invalid filter in filtered role -- -- :id: 5013b0b2-0af6-11f0-8684-482ae39447e5 -- :setup: standalone server -- :steps: -- 1. Setup filtered role with good filter -- 2. Setup nsrole rewriter -- 3. Restart the server -- 4. Search for entries -- 5. Setup filtered role with bad filter -- 6. Search for entries -- :expectedresults: -- 1. Operation should succeed -- 2. Operation should succeed -- 3. Operation should succeed -- 4. Operation should succeed -- 5. Operation should succeed -- 6. Operation should succeed -- """ -- inst = topo.standalone -- entries = [] -- -- def fin(): -- inst.start() -- for entry in entries: -- entry.delete() -- request.addfinalizer(fin) -- -- # Setup filtered role -- roles = FilteredRoles(inst, f'ou=people,{DEFAULT_SUFFIX}') -- filter_ko = '(&((objectClass=top)(objectClass=nsPerson))' -- filter_ok = '(&(objectClass=top)(objectClass=nsPerson))' -- role_properties = { -- 'cn': 'TestFilteredRole', -- 'nsRoleFilter': filter_ok, -- 'description': 'Test good filter', -- } -- role = roles.create(properties=role_properties) -- entries.append(role) -- -- # Setup nsrole rewriter -- rewriters = Rewriters(inst) -- rewriter_properties = { -- "cn": "nsrole", -- "nsslapd-libpath": 'libroles-plugin', -- "nsslapd-filterrewriter": 'role_nsRole_filter_rewriter', -- } -- rewriter = rewriters.ensure_state(properties=rewriter_properties) -- entries.append(rewriter) -- -- # Restart thge instance -- inst.restart() -- -- # Search for entries -- entries = inst.search_s(DEFAULT_SUFFIX, ldap.SCOPE_SUBTREE, "(nsrole=%s)" % role.dn) -- -- # Set bad filter -- role_properties = { -- 'cn': 'TestFilteredRole', -- 'nsRoleFilter': filter_ko, -- 'description': 'Test bad filter', -- } -- role.ensure_state(properties=role_properties) -- -- # Search for entries -- entries = inst.search_s(DEFAULT_SUFFIX, ldap.SCOPE_SUBTREE, "(nsrole=%s)" % role.dn) -- -- - if __name__ == "__main__": - CURRENT_FILE = os.path.realpath(__file__) - pytest.main("-s -v %s" % CURRENT_FILE) -diff --git a/ldap/servers/plugins/roles/roles_cache.c b/ldap/servers/plugins/roles/roles_cache.c -index 3e1c5b429..05cabc3a3 100644 ---- a/ldap/servers/plugins/roles/roles_cache.c -+++ b/ldap/servers/plugins/roles/roles_cache.c -@@ -1117,16 +1117,17 @@ roles_cache_create_object_from_entry(Slapi_Entry *role_entry, role_object **resu - - rolescopeDN = slapi_entry_attr_get_charptr(role_entry, ROLE_SCOPE_DN); - if (rolescopeDN) { -- Slapi_DN *rolescopeSDN; -- Slapi_DN *top_rolescopeSDN, *top_this_roleSDN; -+ Slapi_DN *rolescopeSDN = NULL; -+ Slapi_DN *top_rolescopeSDN = NULL; -+ Slapi_DN *top_this_roleSDN = NULL; - - /* Before accepting to use this scope, first check if it belongs to the same suffix */ - rolescopeSDN = slapi_sdn_new_dn_byref(rolescopeDN); -- if ((strlen((char *)slapi_sdn_get_ndn(rolescopeSDN)) > 0) && -+ if (rolescopeSDN && (strlen((char *)slapi_sdn_get_ndn(rolescopeSDN)) > 0) && - (slapi_dn_syntax_check(NULL, (char *)slapi_sdn_get_ndn(rolescopeSDN), 1) == 0)) { - top_rolescopeSDN = roles_cache_get_top_suffix(rolescopeSDN); - top_this_roleSDN = roles_cache_get_top_suffix(this_role->dn); -- if (slapi_sdn_compare(top_rolescopeSDN, 
top_this_roleSDN) == 0) { -+ if (top_rolescopeSDN && top_this_roleSDN && slapi_sdn_compare(top_rolescopeSDN, top_this_roleSDN) == 0) { - /* rolescopeDN belongs to the same suffix as the role, we can use this scope */ - this_role->rolescopedn = rolescopeSDN; - } else { -@@ -1148,6 +1149,7 @@ roles_cache_create_object_from_entry(Slapi_Entry *role_entry, role_object **resu - rolescopeDN); - slapi_sdn_free(&rolescopeSDN); - } -+ slapi_ch_free_string(&rolescopeDN); - } - - /* Depending upon role type, pull out the remaining information we need */ --- -2.49.0 - diff --git a/0030-Issue-6901-Update-changelog-trimming-logging-fix-tes.patch b/0030-Issue-6901-Update-changelog-trimming-logging-fix-tes.patch deleted file mode 100644 index ea74fde..0000000 --- a/0030-Issue-6901-Update-changelog-trimming-logging-fix-tes.patch +++ /dev/null @@ -1,64 +0,0 @@ -From 2988a4ad320b7a4870cfa055bf7afd009424a15f Mon Sep 17 00:00:00 2001 -From: Viktor Ashirov -Date: Mon, 28 Jul 2025 13:16:10 +0200 -Subject: [PATCH] Issue 6901 - Update changelog trimming logging - fix tests - -Description: -Update changelog_trimming_test for the new error message. - -Fixes: https://github.com/389ds/389-ds-base/issues/6901 - -Reviewed by: @progier389, @aadhikar (Thanks!) ---- - .../suites/replication/changelog_trimming_test.py | 10 +++++----- - 1 file changed, 5 insertions(+), 5 deletions(-) - -diff --git a/dirsrvtests/tests/suites/replication/changelog_trimming_test.py b/dirsrvtests/tests/suites/replication/changelog_trimming_test.py -index 2d70d328e..27d19e8fd 100644 ---- a/dirsrvtests/tests/suites/replication/changelog_trimming_test.py -+++ b/dirsrvtests/tests/suites/replication/changelog_trimming_test.py -@@ -110,7 +110,7 @@ def test_max_age(topo, setup_max_age): - do_mods(supplier, 10) - - time.sleep(1) # Trimming should not have occurred -- if supplier.searchErrorsLog("Trimmed") is True: -+ if supplier.searchErrorsLog("trimmed") is True: - log.fatal('Trimming event unexpectedly occurred') - assert False - -@@ -120,12 +120,12 @@ def test_max_age(topo, setup_max_age): - cl.set_trim_interval('5') - - time.sleep(3) # Trimming should not have occurred -- if supplier.searchErrorsLog("Trimmed") is True: -+ if supplier.searchErrorsLog("trimmed") is True: - log.fatal('Trimming event unexpectedly occurred') - assert False - - time.sleep(3) # Trimming should have occurred -- if supplier.searchErrorsLog("Trimmed") is False: -+ if supplier.searchErrorsLog("trimmed") is False: - log.fatal('Trimming event did not occur') - assert False - -@@ -159,7 +159,7 @@ def test_max_entries(topo, setup_max_entries): - do_mods(supplier, 10) - - time.sleep(1) # Trimming should have occurred -- if supplier.searchErrorsLog("Trimmed") is True: -+ if supplier.searchErrorsLog("trimmed") is True: - log.fatal('Trimming event unexpectedly occurred') - assert False - -@@ -169,7 +169,7 @@ def test_max_entries(topo, setup_max_entries): - cl.set_trim_interval('5') - - time.sleep(6) # Trimming should have occurred -- if supplier.searchErrorsLog("Trimmed") is False: -+ if supplier.searchErrorsLog("trimmed") is False: - log.fatal('Trimming event did not occur') - assert False - --- -2.49.0 - diff --git a/0031-Issue-6181-RFE-Allow-system-to-manage-uid-gid-at-sta.patch b/0031-Issue-6181-RFE-Allow-system-to-manage-uid-gid-at-sta.patch deleted file mode 100644 index 57ebce3..0000000 --- a/0031-Issue-6181-RFE-Allow-system-to-manage-uid-gid-at-sta.patch +++ /dev/null @@ -1,32 +0,0 @@ -From 36c97c19dadda7f09a1e2b3d838e12fbdc39af23 Mon Sep 17 00:00:00 2001 -From: Viktor Ashirov 
-Date: Mon, 28 Jul 2025 13:18:26 +0200 -Subject: [PATCH] Issue 6181 - RFE - Allow system to manage uid/gid at startup - -Description: -Expand CapabilityBoundingSet to include CAP_FOWNER - -Relates: https://github.com/389ds/389-ds-base/issues/6181 -Relates: https://github.com/389ds/389-ds-base/issues/6906 - -Reviewed by: @progier389 (Thanks!) ---- - wrappers/systemd.template.service.in | 2 +- - 1 file changed, 1 insertion(+), 1 deletion(-) - -diff --git a/wrappers/systemd.template.service.in b/wrappers/systemd.template.service.in -index ada608c86..8d2b96c7e 100644 ---- a/wrappers/systemd.template.service.in -+++ b/wrappers/systemd.template.service.in -@@ -29,7 +29,7 @@ MemoryAccounting=yes - - # Allow non-root instances to bind to low ports. - AmbientCapabilities=CAP_NET_BIND_SERVICE --CapabilityBoundingSet=CAP_NET_BIND_SERVICE CAP_SETUID CAP_SETGID CAP_DAC_OVERRIDE CAP_CHOWN -+CapabilityBoundingSet=CAP_NET_BIND_SERVICE CAP_SETUID CAP_SETGID CAP_DAC_OVERRIDE CAP_CHOWN CAP_FOWNER - - PrivateTmp=on - # https://en.opensuse.org/openSUSE:Security_Features#Systemd_hardening_effort --- -2.49.0 - diff --git a/0032-Issue-6468-CLI-Fix-default-error-log-level.patch b/0032-Issue-6468-CLI-Fix-default-error-log-level.patch deleted file mode 100644 index 8d27497..0000000 --- a/0032-Issue-6468-CLI-Fix-default-error-log-level.patch +++ /dev/null @@ -1,31 +0,0 @@ -From 70256cd0e90b91733516d4434428d04aa55b39bd Mon Sep 17 00:00:00 2001 -From: Viktor Ashirov -Date: Tue, 29 Jul 2025 08:00:00 +0200 -Subject: [PATCH] Issue 6468 - CLI - Fix default error log level - -Description: -Default error log level is 16384 - -Relates: https://github.com/389ds/389-ds-base/issues/6468 - -Reviewed by: @droideck (Thanks!) ---- - src/lib389/lib389/cli_conf/logging.py | 2 +- - 1 file changed, 1 insertion(+), 1 deletion(-) - -diff --git a/src/lib389/lib389/cli_conf/logging.py b/src/lib389/lib389/cli_conf/logging.py -index 124556f1f..d9ae1ab16 100644 ---- a/src/lib389/lib389/cli_conf/logging.py -+++ b/src/lib389/lib389/cli_conf/logging.py -@@ -44,7 +44,7 @@ ERROR_LEVELS = { - + "methods used for a SASL bind" - }, - "default": { -- "level": 6384, -+ "level": 16384, - "desc": "Default logging level" - }, - "filter": { --- -2.49.0 - diff --git a/0033-Issue-6768-ns-slapd-crashes-when-a-referral-is-added.patch b/0033-Issue-6768-ns-slapd-crashes-when-a-referral-is-added.patch deleted file mode 100644 index 49b4e29..0000000 --- a/0033-Issue-6768-ns-slapd-crashes-when-a-referral-is-added.patch +++ /dev/null @@ -1,97 +0,0 @@ -From d668c477158e962ebb6fb25ccabe6d9d09f30259 Mon Sep 17 00:00:00 2001 -From: James Chapman -Date: Fri, 1 Aug 2025 13:27:02 +0100 -Subject: [PATCH] Issue 6768 - ns-slapd crashes when a referral is added - (#6780) - -Bug description: When a paged result search is successfully run on a referred -suffix, we retrieve the search result set from the pblock and try to release -it. In this case the search result set is NULL, which triggers a SEGV during -the release. - -Fix description: If the search result code is LDAP_REFERRAL, skip deletion of -the search result set. Added test case. 
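For illustration only, a minimal stand-alone sketch of the scenario this fix covers, using python-ldap. It is not part of the patch or the test suite: the host, port, and bind credentials are placeholders, and it assumes an instance with a default referral configured and no backend for the searched suffix (matching the test below, which uses dc=referme,dc=com).

    import ldap
    from ldap.controls import SimplePagedResultsControl

    # Connect without chasing referrals, so the referral result surfaces
    # as an ldap.REFERRAL exception instead of being followed.
    conn = ldap.initialize("ldap://localhost:389")
    conn.set_option(ldap.OPT_REFERRALS, 0)
    conn.simple_bind_s("cn=Directory Manager", "password")

    req_ctrl = SimplePagedResultsControl(True, size=5, cookie='')
    try:
        # Paged search on a suffix that only exists behind the referral;
        # before the fix this paged-search cleanup path could crash
        # ns-slapd, afterwards the referral is returned normally.
        conn.search_ext_s("dc=referme,dc=com", ldap.SCOPE_SUBTREE,
                          serverctrls=[req_ctrl])
    except ldap.REFERRAL:
        print("referral returned; server survived the paged search")

As the opshared.c hunk below shows, the guard is a NULL check on the backend's be_search_results_release hook: on the referral path no search result set is allocated, so the release callback must not be invoked unconditionally.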
- -Fixes: https://github.com/389ds/389-ds-base/issues/6768 - -Reviewed by: @tbordaz, @progier389 (Thank you) ---- - .../paged_results/paged_results_test.py | 46 +++++++++++++++++++ - ldap/servers/slapd/opshared.c | 4 +- - 2 files changed, 49 insertions(+), 1 deletion(-) - -diff --git a/dirsrvtests/tests/suites/paged_results/paged_results_test.py b/dirsrvtests/tests/suites/paged_results/paged_results_test.py -index fca48db0f..1bb94b53a 100644 ---- a/dirsrvtests/tests/suites/paged_results/paged_results_test.py -+++ b/dirsrvtests/tests/suites/paged_results/paged_results_test.py -@@ -1271,6 +1271,52 @@ def test_search_stress_abandon(create_40k_users, create_user): - paged_search(conn, create_40k_users.suffix, [req_ctrl], search_flt, searchreq_attrlist, abandon_rate=abandon_rate) - - -+def test_search_referral(topology_st): -+ """Test a paged search on a referred suffix doesn't crash the server. -+ -+ :id: c788bdbf-965b-4f12-ac24-d4d695e2cce2 -+ -+ :setup: Standalone instance -+ -+ :steps: -+ 1. Configure a default referral. -+ 2. Create a paged result search control. -+ 3. Paged result search on referral suffix (doesn't exist on the instance, triggering a referral). -+ 4. Check the server is still running. -+ 5. Remove referral. -+ -+ :expectedresults: -+ 1. Referral successfully set. -+ 2. Control created. -+ 3. Search returns ldap.REFERRAL (10). -+ 4. Server still running. -+ 5. Referral removed. -+ """ -+ -+ page_size = 5 -+ SEARCH_SUFFIX = "dc=referme,dc=com" -+ REFERRAL = "ldap://localhost.localdomain:389/o%3dnetscaperoot" -+ -+ log.info('Configuring referral') -+ topology_st.standalone.config.set('nsslapd-referral', REFERRAL) -+ referral = topology_st.standalone.config.get_attr_val_utf8('nsslapd-referral') -+ assert (referral == REFERRAL) -+ -+ log.info('Create paged result search control') -+ req_ctrl = SimplePagedResultsControl(True, size=page_size, cookie='') -+ -+ log.info('Perform a paged result search on referred suffix, no chase') -+ with pytest.raises(ldap.REFERRAL): -+ topology_st.standalone.search_ext_s(SEARCH_SUFFIX, ldap.SCOPE_SUBTREE, serverctrls=[req_ctrl]) -+ -+ log.info('Confirm instance is still running') -+ assert (topology_st.standalone.status()) -+ -+ log.info('Remove referral') -+ topology_st.standalone.config.remove_all('nsslapd-referral') -+ referral = topology_st.standalone.config.get_attr_val_utf8('nsslapd-referral') -+ assert (referral == None) -+ - if __name__ == '__main__': - # Run isolated - # -s for DEBUG mode -diff --git a/ldap/servers/slapd/opshared.c b/ldap/servers/slapd/opshared.c -index 545518748..a5cddfd23 100644 ---- a/ldap/servers/slapd/opshared.c -+++ b/ldap/servers/slapd/opshared.c -@@ -910,7 +910,9 @@ op_shared_search(Slapi_PBlock *pb, int send_result) - /* Free the results if not "no_such_object" */ - void *sr = NULL; - slapi_pblock_get(pb, SLAPI_SEARCH_RESULT_SET, &sr); -- be->be_search_results_release(&sr); -+ if (be->be_search_results_release != NULL) { -+ be->be_search_results_release(&sr); -+ } - } - pagedresults_set_search_result(pb_conn, operation, NULL, 1, pr_idx); - rc = pagedresults_set_current_be(pb_conn, NULL, pr_idx, 1); --- -2.49.0 - diff --git a/0034-Issues-6913-6886-6250-Adjust-xfail-marks-6914.patch b/0034-Issues-6913-6886-6250-Adjust-xfail-marks-6914.patch deleted file mode 100644 index a905e8a..0000000 --- a/0034-Issues-6913-6886-6250-Adjust-xfail-marks-6914.patch +++ /dev/null @@ -1,222 +0,0 @@ -From e0d9deaab662468e11b08105e1b155660076b5eb Mon Sep 17 00:00:00 2001 -From: Simon Pichugin -Date: Fri, 1 Aug 2025 09:28:39 -0700 
-Subject: [PATCH] Issues 6913, 6886, 6250 - Adjust xfail marks (#6914) - -Description: Some of the ACI invalid syntax issues were fixed, -so we need to remove xfail marks. -Disk space issue should have a 'skipif' mark. -Display all attrs (nsslapd-auditlog-display-attrs: *) fails because of a bug. -EntryUSN inconsistency and overflow bugs were exposed with the tests. - -Related: https://github.com/389ds/389-ds-base/issues/6913 -Related: https://github.com/389ds/389-ds-base/issues/6886 -Related: https://github.com/389ds/389-ds-base/issues/6250 - -Reviewed by: @vashirov (Thanks!) ---- - dirsrvtests/tests/suites/acl/syntax_test.py | 13 ++++++++-- - .../tests/suites/import/regression_test.py | 18 +++++++------- - .../logging/audit_password_masking_test.py | 24 +++++++++---------- - .../suites/plugins/entryusn_overflow_test.py | 2 ++ - 4 files changed, 34 insertions(+), 23 deletions(-) - -diff --git a/dirsrvtests/tests/suites/acl/syntax_test.py b/dirsrvtests/tests/suites/acl/syntax_test.py -index 4edc7fa4b..ed9919ba3 100644 ---- a/dirsrvtests/tests/suites/acl/syntax_test.py -+++ b/dirsrvtests/tests/suites/acl/syntax_test.py -@@ -190,10 +190,9 @@ FAILED = [('test_targattrfilters_18', - f'(all)userdn="ldap:///anyone";)'), ] - - --@pytest.mark.xfail(reason='https://bugzilla.redhat.com/show_bug.cgi?id=1691473') - @pytest.mark.parametrize("real_value", [a[1] for a in FAILED], - ids=[a[0] for a in FAILED]) --def test_aci_invalid_syntax_fail(topo, real_value): -+def test_aci_invalid_syntax_fail(topo, real_value, request): - """Try to set wrong ACI syntax. - - :id: 83c40784-fff5-49c8-9535-7064c9c19e7e -@@ -206,6 +205,16 @@ def test_aci_invalid_syntax_fail(topo, real_value): - 1. It should pass - 2. It should not pass - """ -+ # Mark specific test cases as xfail -+ xfail_cases = [ -+ 'test_targattrfilters_18', -+ 'test_targattrfilters_20', -+ 'test_bind_rule_set_with_more_than_three' -+ ] -+ -+ if request.node.callspec.id in xfail_cases: -+ pytest.xfail("DS6913 - This test case is expected to fail") -+ - domain = Domain(topo.standalone, DEFAULT_SUFFIX) - with pytest.raises(ldap.INVALID_SYNTAX): - domain.add("aci", real_value) -diff --git a/dirsrvtests/tests/suites/import/regression_test.py b/dirsrvtests/tests/suites/import/regression_test.py -index e6fef89cc..61fdf8559 100644 ---- a/dirsrvtests/tests/suites/import/regression_test.py -+++ b/dirsrvtests/tests/suites/import/regression_test.py -@@ -320,7 +320,7 @@ ou: myDups00001 - assert standalone.ds_error_log.match('.*Duplicated DN detected.*') - - @pytest.mark.tier2 --@pytest.mark.xfail(not _check_disk_space(), reason="not enough disk space for lmdb map") -+@pytest.mark.skipif(not _check_disk_space(), reason="not enough disk space for lmdb map") - @pytest.mark.xfail(ds_is_older("1.3.10.1"), reason="bz1749595 not fixed on versions older than 1.3.10.1") - def test_large_ldif2db_ancestorid_index_creation(topo, _set_mdb_map_size): - """Import with ldif2db a large file - check that the ancestorid index creation phase has a correct performance -@@ -396,39 +396,39 @@ def test_large_ldif2db_ancestorid_index_creation(topo, _set_mdb_map_size): - log.info('Starting the server') - topo.standalone.start() - -- # With lmdb there is no more any special phase for ancestorid -+ # With lmdb there is no more any special phase for ancestorid - # because ancestorsid get updated on the fly while processing the - # entryrdn (by up the parents chain to compute the parentid -- # -+ # - # But there is still a numSubordinates generation phase - if get_default_db_lib() == "mdb": - 
log.info('parse the errors logs to check lines with "Generating numSubordinates complete." are present') - end_numsubordinates = str(topo.standalone.ds_error_log.match(r'.*Generating numSubordinates complete.*'))[1:-1] - assert len(end_numsubordinates) > 0 -- -+ - else: - log.info('parse the errors logs to check lines with "Starting sort of ancestorid" are present') - start_sort_str = str(topo.standalone.ds_error_log.match(r'.*Starting sort of ancestorid non-leaf IDs*'))[1:-1] - assert len(start_sort_str) > 0 -- -+ - log.info('parse the errors logs to check lines with "Finished sort of ancestorid" are present') - end_sort_str = str(topo.standalone.ds_error_log.match(r'.*Finished sort of ancestorid non-leaf IDs*'))[1:-1] - assert len(end_sort_str) > 0 -- -+ - log.info('parse the error logs for the line with "Gathering ancestorid non-leaf IDs"') - start_ancestorid_indexing_op_str = str(topo.standalone.ds_error_log.match(r'.*Gathering ancestorid non-leaf IDs*'))[1:-1] - assert len(start_ancestorid_indexing_op_str) > 0 -- -+ - log.info('parse the error logs for the line with "Created ancestorid index"') - end_ancestorid_indexing_op_str = str(topo.standalone.ds_error_log.match(r'.*Created ancestorid index*'))[1:-1] - assert len(end_ancestorid_indexing_op_str) > 0 -- -+ - log.info('get the ancestorid non-leaf IDs indexing start and end time from the collected strings') - # Collected lines look like : '[15/May/2020:05:30:27.245967313 -0400] - INFO - bdb_get_nonleaf_ids - import userRoot: Gathering ancestorid non-leaf IDs...' - # We are getting the sec.nanosec part of the date, '27.245967313' in the above example - start_time = (start_ancestorid_indexing_op_str.split()[0]).split(':')[3] - end_time = (end_ancestorid_indexing_op_str.split()[0]).split(':')[3] -- -+ - log.info('Calculate the elapsed time for the ancestorid non-leaf IDs index creation') - etime = (Decimal(end_time) - Decimal(start_time)) - # The time for the ancestorid index creation should be less than 10s for an offline import of an ldif file with 100000 entries / 5 entries per node -diff --git a/dirsrvtests/tests/suites/logging/audit_password_masking_test.py b/dirsrvtests/tests/suites/logging/audit_password_masking_test.py -index 3b6a54849..69a36cb5d 100644 ---- a/dirsrvtests/tests/suites/logging/audit_password_masking_test.py -+++ b/dirsrvtests/tests/suites/logging/audit_password_masking_test.py -@@ -117,10 +117,10 @@ def check_password_masked(inst, log_format, expected_password, actual_password): - - @pytest.mark.parametrize("log_format,display_attrs", [ - ("default", None), -- ("default", "*"), -+ pytest.param("default", "*", marks=pytest.mark.xfail(reason="DS6886")), - ("default", "userPassword"), - ("json", None), -- ("json", "*"), -+ pytest.param("json", "*", marks=pytest.mark.xfail(reason="DS6886")), - ("json", "userPassword") - ]) - def test_password_masking_add_operation(topo, log_format, display_attrs): -@@ -173,10 +173,10 @@ def test_password_masking_add_operation(topo, log_format, display_attrs): - - @pytest.mark.parametrize("log_format,display_attrs", [ - ("default", None), -- ("default", "*"), -+ pytest.param("default", "*", marks=pytest.mark.xfail(reason="DS6886")), - ("default", "userPassword"), - ("json", None), -- ("json", "*"), -+ pytest.param("json", "*", marks=pytest.mark.xfail(reason="DS6886")), - ("json", "userPassword") - ]) - def test_password_masking_modify_operation(topo, log_format, display_attrs): -@@ -242,10 +242,10 @@ def test_password_masking_modify_operation(topo, log_format, display_attrs): - - 
@pytest.mark.parametrize("log_format,display_attrs", [ - ("default", None), -- ("default", "*"), -+ pytest.param("default", "*", marks=pytest.mark.xfail(reason="DS6886")), - ("default", "nsslapd-rootpw"), - ("json", None), -- ("json", "*"), -+ pytest.param("json", "*", marks=pytest.mark.xfail(reason="DS6886")), - ("json", "nsslapd-rootpw") - ]) - def test_password_masking_rootpw_modify_operation(topo, log_format, display_attrs): -@@ -297,10 +297,10 @@ def test_password_masking_rootpw_modify_operation(topo, log_format, display_attr - - @pytest.mark.parametrize("log_format,display_attrs", [ - ("default", None), -- ("default", "*"), -+ pytest.param("default", "*", marks=pytest.mark.xfail(reason="DS6886")), - ("default", "nsmultiplexorcredentials"), - ("json", None), -- ("json", "*"), -+ pytest.param("json", "*", marks=pytest.mark.xfail(reason="DS6886")), - ("json", "nsmultiplexorcredentials") - ]) - def test_password_masking_multiplexor_credentials(topo, log_format, display_attrs): -@@ -368,10 +368,10 @@ def test_password_masking_multiplexor_credentials(topo, log_format, display_attr - - @pytest.mark.parametrize("log_format,display_attrs", [ - ("default", None), -- ("default", "*"), -+ pytest.param("default", "*", marks=pytest.mark.xfail(reason="DS6886")), - ("default", "nsDS5ReplicaCredentials"), - ("json", None), -- ("json", "*"), -+ pytest.param("json", "*", marks=pytest.mark.xfail(reason="DS6886")), - ("json", "nsDS5ReplicaCredentials") - ]) - def test_password_masking_replica_credentials(topo, log_format, display_attrs): -@@ -432,10 +432,10 @@ def test_password_masking_replica_credentials(topo, log_format, display_attrs): - - @pytest.mark.parametrize("log_format,display_attrs", [ - ("default", None), -- ("default", "*"), -+ pytest.param("default", "*", marks=pytest.mark.xfail(reason="DS6886")), - ("default", "nsDS5ReplicaBootstrapCredentials"), - ("json", None), -- ("json", "*"), -+ pytest.param("json", "*", marks=pytest.mark.xfail(reason="DS6886")), - ("json", "nsDS5ReplicaBootstrapCredentials") - ]) - def test_password_masking_bootstrap_credentials(topo, log_format, display_attrs): -diff --git a/dirsrvtests/tests/suites/plugins/entryusn_overflow_test.py b/dirsrvtests/tests/suites/plugins/entryusn_overflow_test.py -index a23d734ca..8c3a537ab 100644 ---- a/dirsrvtests/tests/suites/plugins/entryusn_overflow_test.py -+++ b/dirsrvtests/tests/suites/plugins/entryusn_overflow_test.py -@@ -81,6 +81,7 @@ def setup_usn_test(topology_st, request): - return created_users - - -+@pytest.mark.xfail(reason="DS6250") - def test_entryusn_overflow_on_add_existing_entries(topology_st, setup_usn_test): - """Test that reproduces entryUSN overflow when adding existing entries - -@@ -232,6 +233,7 @@ def test_entryusn_overflow_on_add_existing_entries(topology_st, setup_usn_test): - log.info("EntryUSN overflow test completed successfully") - - -+@pytest.mark.xfail(reason="DS6250") - def test_entryusn_consistency_after_failed_adds(topology_st, setup_usn_test): - """Test that entryUSN remains consistent after failed add operations - --- -2.49.0 - diff --git a/0035-Issue-6875-Fix-dsidm-tests.patch b/0035-Issue-6875-Fix-dsidm-tests.patch deleted file mode 100644 index 791e659..0000000 --- a/0035-Issue-6875-Fix-dsidm-tests.patch +++ /dev/null @@ -1,378 +0,0 @@ -From e48f31b756509938d69a626744b8862fc26aa3ef Mon Sep 17 00:00:00 2001 -From: Lenka Doudova -Date: Tue, 15 Jul 2025 17:17:04 +0200 -Subject: [PATCH] Issue 6875 - Fix dsidm tests - -Description: -Adding testing of the "full_dn" option with 'dsidm list' command 
for all -relevant types of entries -Removing xfail markers in dsidm role tests since the issues were -resolved - -Relates: #6875 - -Author: Lenka Doudova - -Reviewed by: ??? ---- - .../tests/suites/clu/dsidm_group_test.py | 12 +++++++++- - .../clu/dsidm_organizational_unit_test.py | 13 ++++++++++- - .../tests/suites/clu/dsidm_posixgroup_test.py | 13 ++++++++++- - .../tests/suites/clu/dsidm_role_test.py | 23 +++++-------------- - .../tests/suites/clu/dsidm_services_test.py | 13 ++++++++++- - .../suites/clu/dsidm_uniquegroup_test.py | 12 +++++++++- - .../tests/suites/clu/dsidm_user_test.py | 22 +++++++++++++++++- - 7 files changed, 85 insertions(+), 23 deletions(-) - -diff --git a/dirsrvtests/tests/suites/clu/dsidm_group_test.py b/dirsrvtests/tests/suites/clu/dsidm_group_test.py -index 36723a2d0..eba823d2d 100644 ---- a/dirsrvtests/tests/suites/clu/dsidm_group_test.py -+++ b/dirsrvtests/tests/suites/clu/dsidm_group_test.py -@@ -17,7 +17,7 @@ from lib389.cli_idm.group import (list, get, get_dn, create, delete, modify, ren - members, add_member, remove_member) - from lib389.topologies import topology_st - from lib389.cli_base import FakeArgs --from lib389.utils import ds_is_older, ensure_str -+from lib389.utils import ds_is_older, ensure_str, is_a_dn - from lib389.idm.group import Groups - from . import check_value_in_log_and_reset - -@@ -198,6 +198,7 @@ def test_dsidm_group_list(topology_st, create_test_group): - standalone = topology_st.standalone - args = FakeArgs() - args.json = False -+ args.full_dn = False - json_list = ['type', - 'list', - 'items'] -@@ -214,12 +215,21 @@ def test_dsidm_group_list(topology_st, create_test_group): - list(standalone, DEFAULT_SUFFIX, topology_st.logcap.log, args) - check_value_in_log_and_reset(topology_st, content_list=json_list, check_value=group_name) - -+ log.info('Test full_dn option with list') -+ args.full_dn = True -+ list(standalone, DEFAULT_SUFFIX, topology_st.logcap.log, args) -+ result = topology_st.logcap.get_raw_outputs() -+ json_result = json.loads(result[0]) -+ assert is_a_dn(json_result['items'][0]) -+ args.full_dn = False -+ - log.info('Delete the group') - groups = Groups(standalone, DEFAULT_SUFFIX) - testgroup = groups.get(group_name) - testgroup.delete() - - log.info('Test empty dsidm group list with json') -+ topology_st.logcap.flush() - list(standalone, DEFAULT_SUFFIX, topology_st.logcap.log, args) - check_value_in_log_and_reset(topology_st, content_list=json_list, check_value_not=group_name) - -diff --git a/dirsrvtests/tests/suites/clu/dsidm_organizational_unit_test.py b/dirsrvtests/tests/suites/clu/dsidm_organizational_unit_test.py -index ee908fb22..06556b227 100644 ---- a/dirsrvtests/tests/suites/clu/dsidm_organizational_unit_test.py -+++ b/dirsrvtests/tests/suites/clu/dsidm_organizational_unit_test.py -@@ -11,12 +11,13 @@ import subprocess - import pytest - import logging - import os -+import json - - from lib389 import DEFAULT_SUFFIX - from lib389.cli_idm.organizationalunit import get, get_dn, create, modify, delete, list, rename - from lib389.topologies import topology_st - from lib389.cli_base import FakeArgs --from lib389.utils import ds_is_older -+from lib389.utils import ds_is_older, is_a_dn - from lib389.idm.organizationalunit import OrganizationalUnits - from . 
import check_value_in_log_and_reset - -@@ -110,6 +111,7 @@ def test_dsidm_organizational_unit_list(topology_st, create_test_ou): - standalone = topology_st.standalone - args = FakeArgs() - args.json = False -+ args.full_dn = False - json_list = ['type', - 'list', - 'items'] -@@ -126,7 +128,16 @@ def test_dsidm_organizational_unit_list(topology_st, create_test_ou): - list(standalone, DEFAULT_SUFFIX, topology_st.logcap.log, args) - check_value_in_log_and_reset(topology_st, check_value=ou_name) - -+ log.info('Test full_dn option with list') -+ args.full_dn = True -+ list(standalone, DEFAULT_SUFFIX, topology_st.logcap.log, args) -+ result = topology_st.logcap.get_raw_outputs() -+ json_result = json.loads(result[0]) -+ assert is_a_dn(json_result['items'][0]) -+ args.full_dn = False -+ - log.info('Delete the organizational unit') -+ topology_st.logcap.flush() - ous = OrganizationalUnits(standalone, DEFAULT_SUFFIX) - test_ou = ous.get(ou_name) - test_ou.delete() -diff --git a/dirsrvtests/tests/suites/clu/dsidm_posixgroup_test.py b/dirsrvtests/tests/suites/clu/dsidm_posixgroup_test.py -index ccafd3905..10799ee28 100644 ---- a/dirsrvtests/tests/suites/clu/dsidm_posixgroup_test.py -+++ b/dirsrvtests/tests/suites/clu/dsidm_posixgroup_test.py -@@ -9,12 +9,13 @@ - import pytest - import logging - import os -+import json - - from lib389 import DEFAULT_SUFFIX - from lib389.cli_idm.posixgroup import list, get, get_dn, create, delete, modify, rename - from lib389.topologies import topology_st - from lib389.cli_base import FakeArgs --from lib389.utils import ds_is_older, ensure_str -+from lib389.utils import ds_is_older, ensure_str, is_a_dn - from lib389.idm.posixgroup import PosixGroups - from . import check_value_in_log_and_reset - -@@ -195,6 +196,7 @@ def test_dsidm_posixgroup_list(topology_st, create_test_posixgroup): - standalone = topology_st.standalone - args = FakeArgs() - args.json = False -+ args.full_dn = False - json_list = ['type', - 'list', - 'items'] -@@ -211,12 +213,21 @@ def test_dsidm_posixgroup_list(topology_st, create_test_posixgroup): - list(standalone, DEFAULT_SUFFIX, topology_st.logcap.log, args) - check_value_in_log_and_reset(topology_st, content_list=json_list, check_value=posixgroup_name) - -+ log.info('Test full_dn option with list') -+ args.full_dn = True -+ list(standalone, DEFAULT_SUFFIX, topology_st.logcap.log, args) -+ result = topology_st.logcap.get_raw_outputs() -+ json_result = json.loads(result[0]) -+ assert is_a_dn(json_result['items'][0]) -+ args.full_dn = False -+ - log.info('Delete the posixgroup') - posixgroups = PosixGroups(standalone, DEFAULT_SUFFIX) - test_posixgroup = posixgroups.get(posixgroup_name) - test_posixgroup.delete() - - log.info('Test empty dsidm posixgroup list with json') -+ topology_st.logcap.flush() - list(standalone, DEFAULT_SUFFIX, topology_st.logcap.log, args) - check_value_in_log_and_reset(topology_st, content_list=json_list, check_value_not=posixgroup_name) - -diff --git a/dirsrvtests/tests/suites/clu/dsidm_role_test.py b/dirsrvtests/tests/suites/clu/dsidm_role_test.py -index eb5f692d7..094db2f78 100644 ---- a/dirsrvtests/tests/suites/clu/dsidm_role_test.py -+++ b/dirsrvtests/tests/suites/clu/dsidm_role_test.py -@@ -67,6 +67,7 @@ def create_test_filtered_role(topology_st, request): - - properties = FakeArgs() - properties.cn = filtered_role_name -+ properties.nsrolefilter = "(cn=*)" - create_filtered(topology_st.standalone, DEFAULT_SUFFIX, topology_st.logcap.log, properties) - test_filtered_role = filtered_roles.get(filtered_role_name) - -@@ 
-92,7 +93,7 @@ def create_test_nested_role(topology_st, create_test_managed_role, request): - - properties = FakeArgs() - properties.cn = nested_role_name -- properties.nsRoleDN = managed_role.dn -+ properties.nsroledn = managed_role.dn - create_nested(topology_st.standalone, DEFAULT_SUFFIX, topology_st.logcap.log, properties) - test_nested_role = nested_roles.get(nested_role_name) - -@@ -341,14 +342,8 @@ def test_dsidm_role_list(topology_st, create_test_managed_role): - @pytest.mark.parametrize( - "role_name, fixture, objectclasses", - [(managed_role_name, 'create_test_managed_role', ['nsSimpleRoleDefinition', 'nsManagedRoleDefinition']), -- (pytest.param(filtered_role_name, -- create_test_filtered_role, -- ['nsComplexRoleDefinition', 'nsFilteredRoleDefinition'], -- marks=pytest.mark.xfail(reason="DS6492"))), -- (pytest.param(nested_role_name, -- create_test_nested_role, -- ['nsComplexRoleDefinition', 'nsNestedRoleDefinition'], -- marks=pytest.mark.xfail(reason="DS6493")))]) -+ (filtered_role_name, 'create_test_filtered_role', ['nsComplexRoleDefinition', 'nsFilteredRoleDefinition']), -+ (nested_role_name, 'create_test_nested_role', ['nsComplexRoleDefinition', 'nsNestedRoleDefinition'])]) - def test_dsidm_role_get(topology_st, role_name, fixture, objectclasses, request): - """ Test dsidm role get option for managed, filtered and nested role - -@@ -422,14 +417,8 @@ def test_dsidm_role_get(topology_st, role_name, fixture, objectclasses, request) - @pytest.mark.parametrize( - "role_name, fixture, objectclasses", - [(managed_role_name, 'create_test_managed_role', ['nsSimpleRoleDefinition', 'nsManagedRoleDefinition']), -- (pytest.param(filtered_role_name, -- create_test_filtered_role, -- ['nsComplexRoleDefinition', 'nsFilteredRoleDefinition'], -- marks=pytest.mark.xfail(reason="DS6492"))), -- (pytest.param(nested_role_name, -- create_test_nested_role, -- ['nsComplexRoleDefinition', 'nsNestedRoleDefinition'], -- marks=pytest.mark.xfail(reason="DS6493")))]) -+ (filtered_role_name, 'create_test_filtered_role', ['nsComplexRoleDefinition', 'nsFilteredRoleDefinition']), -+ (nested_role_name, 'create_test_nested_role', ['nsComplexRoleDefinition', 'nsNestedRoleDefinition'])]) - def test_dsidm_role_get_by_dn(topology_st, role_name, fixture, objectclasses, request): - """ Test dsidm role get-by-dn option for managed, filtered and nested role - -diff --git a/dirsrvtests/tests/suites/clu/dsidm_services_test.py b/dirsrvtests/tests/suites/clu/dsidm_services_test.py -index 61dd0ac11..f167b1c6f 100644 ---- a/dirsrvtests/tests/suites/clu/dsidm_services_test.py -+++ b/dirsrvtests/tests/suites/clu/dsidm_services_test.py -@@ -11,12 +11,13 @@ import subprocess - import pytest - import logging - import os -+import json - - from lib389 import DEFAULT_SUFFIX - from lib389.cli_idm.service import list, get, get_dn, create, delete, modify, rename - from lib389.topologies import topology_st - from lib389.cli_base import FakeArgs --from lib389.utils import ds_is_older, ensure_str -+from lib389.utils import ds_is_older, ensure_str, is_a_dn - from lib389.idm.services import ServiceAccounts - from . 
import check_value_in_log_and_reset - -@@ -73,6 +74,7 @@ def test_dsidm_service_list(topology_st, create_test_service): - standalone = topology_st.standalone - args = FakeArgs() - args.json = False -+ args.full_dn = False - service_value = 'test_service' - json_list = ['type', - 'list', -@@ -90,12 +92,21 @@ def test_dsidm_service_list(topology_st, create_test_service): - list(standalone, DEFAULT_SUFFIX, topology_st.logcap.log, args) - check_value_in_log_and_reset(topology_st, content_list=json_list, check_value=service_value) - -+ log.info('Test full_dn option with list') -+ args.full_dn = True -+ list(standalone, DEFAULT_SUFFIX, topology_st.logcap.log, args) -+ result = topology_st.logcap.get_raw_outputs() -+ json_result = json.loads(result[0]) -+ assert is_a_dn(json_result['items'][0]) -+ args.full_dn = False -+ - log.info('Delete the service') - services = ServiceAccounts(topology_st.standalone, DEFAULT_SUFFIX) - testservice = services.get(service_value) - testservice.delete() - - log.info('Test empty dsidm service list with json') -+ topology_st.logcap.flush() - list(standalone, DEFAULT_SUFFIX, topology_st.logcap.log, args) - check_value_in_log_and_reset(topology_st, content_list=json_list, check_value_not=service_value) - -diff --git a/dirsrvtests/tests/suites/clu/dsidm_uniquegroup_test.py b/dirsrvtests/tests/suites/clu/dsidm_uniquegroup_test.py -index 0532791c1..4689ae34b 100644 ---- a/dirsrvtests/tests/suites/clu/dsidm_uniquegroup_test.py -+++ b/dirsrvtests/tests/suites/clu/dsidm_uniquegroup_test.py -@@ -17,7 +17,7 @@ from lib389.cli_idm.uniquegroup import (list, get, get_dn, create, delete, modif - members, add_member, remove_member) - from lib389.topologies import topology_st - from lib389.cli_base import FakeArgs --from lib389.utils import ds_is_older, ensure_str -+from lib389.utils import ds_is_older, ensure_str, is_a_dn - from lib389.idm.group import UniqueGroups - from . 
import check_value_in_log_and_reset - -@@ -153,6 +153,7 @@ def test_dsidm_uniquegroup_list(topology_st, create_test_uniquegroup): - standalone = topology_st.standalone - args = FakeArgs() - args.json = False -+ args.full_dn = False - json_list = ['type', - 'list', - 'items'] -@@ -169,12 +170,21 @@ def test_dsidm_uniquegroup_list(topology_st, create_test_uniquegroup): - list(standalone, DEFAULT_SUFFIX, topology_st.logcap.log, args) - check_value_in_log_and_reset(topology_st, content_list=json_list, check_value=uniquegroup_name) - -+ log.info('Test full_dn option with list') -+ args.full_dn = True -+ list(standalone, DEFAULT_SUFFIX, topology_st.logcap.log, args) -+ result = topology_st.logcap.get_raw_outputs() -+ json_result = json.loads(result[0]) -+ assert is_a_dn(json_result['items'][0]) -+ args.full_dn = False -+ - log.info('Delete the uniquegroup') - uniquegroups = UniqueGroups(standalone, DEFAULT_SUFFIX) - test_uniquegroup = uniquegroups.get(uniquegroup_name) - test_uniquegroup.delete() - - log.info('Test empty dsidm uniquegroup list with json') -+ topology_st.logcap.flush() - list(standalone, DEFAULT_SUFFIX, topology_st.logcap.log, args) - check_value_in_log_and_reset(topology_st, content_list=json_list, check_value_not=uniquegroup_name) - -diff --git a/dirsrvtests/tests/suites/clu/dsidm_user_test.py b/dirsrvtests/tests/suites/clu/dsidm_user_test.py -index 4b5491735..620e183ac 100644 ---- a/dirsrvtests/tests/suites/clu/dsidm_user_test.py -+++ b/dirsrvtests/tests/suites/clu/dsidm_user_test.py -@@ -12,12 +12,13 @@ import pytest - import logging - import os - import ldap -+import json - - from lib389 import DEFAULT_SUFFIX - from lib389.cli_idm.user import list, get, get_dn, create, delete, modify, rename - from lib389.topologies import topology_st - from lib389.cli_base import FakeArgs --from lib389.utils import ds_is_older, ensure_str -+from lib389.utils import ds_is_older, ensure_str, is_a_dn - from lib389.idm.user import nsUserAccounts - from . 
import check_value_in_log_and_reset - -@@ -74,6 +75,7 @@ def test_dsidm_user_list(topology_st, create_test_user): - standalone = topology_st.standalone - args = FakeArgs() - args.json = False -+ args.full_dn = False - user_value = 'test_user_1000' - json_list = ['type', - 'list', -@@ -92,6 +94,15 @@ def test_dsidm_user_list(topology_st, create_test_user): - list(standalone, DEFAULT_SUFFIX, topology_st.logcap.log, args) - check_value_in_log_and_reset(topology_st, content_list=json_list, check_value=user_value) - -+ log.info('Test full_dn option with list') -+ args.full_dn = True -+ list(standalone, DEFAULT_SUFFIX, topology_st.logcap.log, args) -+ result = topology_st.logcap.get_raw_outputs() -+ json_result = json.loads(result[0]) -+ assert is_a_dn(json_result['items'][0]) -+ args.full_dn = False -+ topology_st.logcap.flush() -+ - log.info('Delete the user') - users = nsUserAccounts(topology_st.standalone, DEFAULT_SUFFIX) - testuser = users.get(user_value) -@@ -777,6 +788,7 @@ def test_dsidm_user_list_rdn_after_rename(topology_st): - log.info('Test dsidm user list without json') - args = FakeArgs() - args.json = False -+ args.full_dn = False - list(standalone, DEFAULT_SUFFIX, topology_st.logcap.log, args) - # Should show the new name, not the original name - check_value_in_log_and_reset(topology_st, check_value=new_name, check_value_not=original_name) -@@ -787,6 +799,14 @@ def test_dsidm_user_list_rdn_after_rename(topology_st): - # Should show the new name in JSON output as well - check_value_in_log_and_reset(topology_st, check_value=new_name, check_value_not=original_name) - -+ log.info('Test full_dn option with list') -+ args.full_dn = True -+ list(standalone, DEFAULT_SUFFIX, topology_st.logcap.log, args) -+ result = topology_st.logcap.get_raw_outputs() -+ json_result = json.loads(result[0]) -+ assert is_a_dn(json_result['items'][0]) -+ args.full_dn = False -+ - log.info('Directly verify RDN extraction works correctly') - renamed_user = users.get(new_name) - rdn_value = renamed_user.get_rdn_from_dn(renamed_user.dn) --- -2.49.0 - diff --git a/0036-Issue-6519-Add-basic-dsidm-account-tests.patch b/0036-Issue-6519-Add-basic-dsidm-account-tests.patch deleted file mode 100644 index 2d7cccf..0000000 --- a/0036-Issue-6519-Add-basic-dsidm-account-tests.patch +++ /dev/null @@ -1,503 +0,0 @@ -From 10937417415577569bd777aacf7941803e96da21 Mon Sep 17 00:00:00 2001 -From: Lenka Doudova -Date: Mon, 20 Jan 2025 14:19:51 +0100 -Subject: [PATCH] Issue 6519 - Add basic dsidm account tests - -Automating basic dsidm account tests - -Relates to: https://github.com/389ds/389-ds-base/issues/6519 - -Author: Lenka Doudova - -Reviewed by: Simon Pichugin ---- - .../tests/suites/clu/dsidm_account_test.py | 417 +++++++++++++++++- - src/lib389/lib389/cli_idm/account.py | 6 +- - 2 files changed, 409 insertions(+), 14 deletions(-) - -diff --git a/dirsrvtests/tests/suites/clu/dsidm_account_test.py b/dirsrvtests/tests/suites/clu/dsidm_account_test.py -index 4b48a11a5..c600e31fd 100644 ---- a/dirsrvtests/tests/suites/clu/dsidm_account_test.py -+++ b/dirsrvtests/tests/suites/clu/dsidm_account_test.py -@@ -6,22 +6,19 @@ - # See LICENSE for details. 
- # --- END COPYRIGHT BLOCK --- - # -+ - import logging - import os - import json - import pytest - import ldap - from lib389 import DEFAULT_SUFFIX --from lib389.cli_idm.account import ( -- get_dn, -- lock, -- unlock, -- entry_status, -- subtree_status, --) -+from lib389.cli_idm.account import list, get_dn, lock, unlock, delete, modify, rename, entry_status, \ -+ subtree_status, reset_password, change_password -+from lib389.cli_idm.user import create - from lib389.topologies import topology_st - from lib389.cli_base import FakeArgs --from lib389.utils import ds_is_older -+from lib389.utils import ds_is_older, is_a_dn - from lib389.idm.user import nsUserAccounts - from . import check_value_in_log_and_reset - -@@ -30,13 +27,28 @@ pytestmark = pytest.mark.tier0 - logging.getLogger(__name__).setLevel(logging.DEBUG) - log = logging.getLogger(__name__) - -+test_user_name = 'test_user_1000' - - @pytest.fixture(scope="function") - def create_test_user(topology_st, request): - log.info('Create test user') - users = nsUserAccounts(topology_st.standalone, DEFAULT_SUFFIX) -- test_user = users.create_test_user() -- log.info('Created test user: %s', test_user.dn) -+ -+ if users.exists(test_user_name): -+ test_user = users.get(test_user_name) -+ test_user.delete() -+ -+ properties = FakeArgs() -+ properties.uid = test_user_name -+ properties.cn = test_user_name -+ properties.sn = test_user_name -+ properties.uidNumber = '1000' -+ properties.gidNumber = '2000' -+ properties.homeDirectory = '/home/test_user_1000' -+ properties.displayName = test_user_name -+ -+ create(topology_st.standalone, DEFAULT_SUFFIX, topology_st.logcap.log, properties) -+ test_user = users.get(test_user_name) - - def fin(): - log.info('Delete test user') -@@ -74,7 +86,7 @@ def test_dsidm_account_entry_status_with_lock(topology_st, create_test_user): - - standalone = topology_st.standalone - users = nsUserAccounts(standalone, DEFAULT_SUFFIX) -- test_user = users.get('test_user_1000') -+ test_user = users.get(test_user_name) - - entry_list = ['Entry DN: {}'.format(test_user.dn), - 'Entry Creation Date', -@@ -169,8 +181,389 @@ def test_dsidm_account_entry_get_by_dn(topology_st, create_test_user): - assert json_result['dn'] == user_dn - - -+@pytest.mark.skipif(ds_is_older("1.4.2"), reason="Not implemented") -+def test_dsidm_account_delete(topology_st, create_test_user): -+ """ Test dsidm account delete option -+ -+ :id: a7960bc2-0282-4a82-8dfb-3af2088ec661 -+ :setup: Standalone -+ :steps: -+ 1. Run dsidm account delete on a created account -+ 2. Check that a message is provided on deletion -+ 3. Check that the account no longer exists -+ :expectedresults: -+ 1. Success -+ 2. Success -+ 3. Success -+ """ -+ -+ standalone = topology_st.standalone -+ accounts = nsUserAccounts(standalone, DEFAULT_SUFFIX) -+ test_account = accounts.get(test_user_name) -+ output = 'Successfully deleted {}'.format(test_account.dn) -+ -+ args = FakeArgs() -+ args.dn = test_account.dn -+ -+ log.info('Test dsidm account delete') -+ delete(standalone, DEFAULT_SUFFIX, topology_st.logcap.log, args, warn=False) -+ check_value_in_log_and_reset(topology_st, check_value=output) -+ -+ log.info('Check that the account no longer exists') -+ assert not test_account.exists() -+ -+ -+@pytest.mark.skipif(ds_is_older("1.4.2"), reason="Not implemented") -+def test_dsidm_account_list(topology_st, create_test_user): -+ """ Test dsidm account list option -+ -+ :id: 4d173a3e-ee36-4a8b-8d0d-4955c792faca -+ :setup: Standalone instance -+ :steps: -+ 1. 
Run dsidm account list without json -+ 2. Check the output content is correct -+ 3. Run dsidm account list with json -+ 4. Check the output content is correct -+ 5. Test full_dn option with list -+ 6. Delete the account -+ 7. Check the account is not in the list with json -+ 8. Check the account is not in the list without json -+ :expectedresults: -+ 1. Success -+ 2. Success -+ 3. Success -+ 4. Success -+ 5. Success -+ 6. Success -+ 7. Success -+ 8. Success -+ """ -+ -+ standalone = topology_st.standalone -+ args = FakeArgs() -+ args.json = False -+ args.full_dn = False -+ json_list = ['type', -+ 'list', -+ 'items'] -+ -+ log.info('Empty the log file to prevent false data to check about group') -+ topology_st.logcap.flush() -+ -+ log.info('Test dsidm account list without json') -+ list(standalone, DEFAULT_SUFFIX, topology_st.logcap.log, args) -+ check_value_in_log_and_reset(topology_st, check_value=test_user_name) -+ -+ log.info('Test dsidm account list with json') -+ args.json = True -+ list(standalone, DEFAULT_SUFFIX, topology_st.logcap.log, args) -+ check_value_in_log_and_reset(topology_st, content_list=json_list, check_value=test_user_name) -+ -+ log.info('Test full_dn option with list') -+ args.full_dn = True -+ list(standalone, DEFAULT_SUFFIX, topology_st.logcap.log, args) -+ result = topology_st.logcap.get_raw_outputs() -+ json_result = json.loads(result[0]) -+ assert is_a_dn(json_result['items'][0]) -+ args.full_dn = False -+ topology_st.logcap.flush() -+ -+ log.info('Delete the account') -+ accounts = nsUserAccounts(standalone, DEFAULT_SUFFIX) -+ test_account = accounts.get(test_user_name) -+ test_account.delete() -+ -+ log.info('Test empty dsidm account list with json') -+ list(standalone,DEFAULT_SUFFIX, topology_st.logcap.log, args) -+ check_value_in_log_and_reset(topology_st, content_list=json_list, check_value_not=test_user_name) -+ -+ log.info('Test empty dsidm account list without json') -+ args.json = False -+ list(standalone,DEFAULT_SUFFIX, topology_st.logcap.log, args) -+ check_value_in_log_and_reset(topology_st, check_value_not=test_user_name) -+ -+ -+@pytest.mark.xfail(reason='DS6515') -+@pytest.mark.skipif(ds_is_older("1.4.2"), reason="Not implemented") -+def test_dsidm_account_get_by_dn(topology_st, create_test_user): -+ """ Test dsidm account get-by-dn option -+ -+ :id: 07945577-2da0-4fd9-9237-43dd2823f7b8 -+ :setup: Standalone instance -+ :steps: -+ 1. Run dsidm account get-by-dn for an account without json -+ 2. Check the output content is correct -+ 3. Run dsidm account get-by-dn for an account with json -+ 4. Check the output content is correct -+ :expectedresults: -+ 1. Success -+ 2. Success -+ 3. Success -+ 4. 
Success -+ """ -+ -+ standalone = topology_st.standalone -+ accounts = nsUserAccounts(standalone, DEFAULT_SUFFIX) -+ test_account = accounts.get(test_user_name) -+ -+ args = FakeArgs() -+ args.dn = test_account.dn -+ args.json = False -+ -+ account_content = ['dn: {}'.format(test_account.dn), -+ 'cn: {}'.format(test_account.rdn), -+ 'displayName: {}'.format(test_user_name), -+ 'gidNumber: 2000', -+ 'homeDirectory: /home/{}'.format(test_user_name), -+ 'objectClass: top', -+ 'objectClass: nsPerson', -+ 'objectClass: nsAccount', -+ 'objectClass: nsOrgPerson', -+ 'objectClass: posixAccount', -+ 'uid: {}'.format(test_user_name), -+ 'uidNumber: 1000'] -+ -+ json_content = ['attrs', -+ 'objectclass', -+ 'top', -+ 'nsPerson', -+ 'nsAccount', -+ 'nsOrgPerson', -+ 'posixAccount', -+ 'cn', -+ test_account.rdn, -+ 'gidnumber', -+ '2000', -+ 'homedirectory', -+ '/home/{}'.format(test_user_name), -+ 'displayname', -+ test_user_name, -+ 'uidnumber', -+ '1000', -+ 'creatorsname', -+ 'cn=directory manager', -+ 'modifiersname', -+ 'createtimestamp', -+ 'modifytimestamp', -+ 'nsuniqueid', -+ 'parentid', -+ 'entryid', -+ 'entryuuid', -+ 'dsentrydn', -+ 'entrydn', -+ test_account.dn] -+ -+ log.info('Empty the log file to prevent false data to check about the account') -+ topology_st.logcap.flush() -+ -+ log.info('Test dsidm account get-by-dn without json') -+ get_dn(standalone, DEFAULT_SUFFIX, topology_st.logcap.log, args) -+ check_value_in_log_and_reset(topology_st, content_list=account_content) -+ -+ log.info('Test dsidm account get-by-dn with json') -+ args.json = True -+ get_dn(standalone, DEFAULT_SUFFIX, topology_st.logcap.log, args) -+ check_value_in_log_and_reset(topology_st, content_list=json_content) -+ -+ -+@pytest.mark.skipif(ds_is_older("1.4.2"), reason="Not implemented") -+def test_dsidm_account_modify_by_dn(topology_st, create_test_user): -+ """ Test dsidm account modify-by-dn -+ -+ :id: e7288f8c-f0a8-4d8d-a00f-1b243eb117bc -+ :setup: Standalone instance -+ :steps: -+ 1. Run dsidm account modify-by-dn add description value -+ 2. Run dsidm account modify-by-dn replace description value -+ 3. Run dsidm account modify-by-dn delete description value -+ :expectedresults: -+ 1. A description value is added -+ 2. The original description value is replaced and the previous is not present -+ 3. 
The replaced description value is deleted -+ """ -+ -+ standalone = topology_st.standalone -+ accounts = nsUserAccounts(standalone, DEFAULT_SUFFIX) -+ test_account = accounts.get(test_user_name) -+ output = 'Successfully modified {}'.format(test_account.dn) -+ -+ args = FakeArgs() -+ args.dn = test_account.dn -+ args.changes = ['add:description:new_description'] -+ -+ log.info('Test dsidm account modify add') -+ modify(standalone, DEFAULT_SUFFIX, topology_st.logcap.log, args, warn=False) -+ check_value_in_log_and_reset(topology_st, check_value=output) -+ assert test_account.present('description', 'new_description') -+ -+ log.info('Test dsidm account modify replace') -+ args.changes = ['replace:description:replaced_description'] -+ modify(standalone, DEFAULT_SUFFIX, topology_st.logcap.log, args, warn=False) -+ check_value_in_log_and_reset(topology_st, check_value=output) -+ assert test_account.present('description', 'replaced_description') -+ assert not test_account.present('description', 'new_description') -+ -+ log.info('Test dsidm account modify delete') -+ args.changes = ['delete:description:replaced_description'] -+ modify(standalone, DEFAULT_SUFFIX, topology_st.logcap.log, args, warn=False) -+ check_value_in_log_and_reset(topology_st, check_value=output) -+ assert not test_account.present('description', 'replaced_description') -+ -+ -+@pytest.mark.skipif(ds_is_older("1.4.2"), reason="Not implemented") -+def test_dsidm_account_rename_by_dn(topology_st, create_test_user): -+ """ Test dsidm account rename-by-dn option -+ -+ :id: f4b8e491-35b1-4113-b9c4-e0a80f8985f3 -+ :setup: Standalone instance -+ :steps: -+ 1. Run dsidm account rename option on existing account -+ 2. Check the account does not have another uid attribute with the old rdn -+ 3. Check the old account is deleted -+ :expectedresults: -+ 1. Success -+ 2. Success -+ 3. Success -+ """ -+ -+ standalone = topology_st.standalone -+ accounts = nsUserAccounts(standalone, DEFAULT_SUFFIX) -+ test_account = accounts.get(test_user_name) -+ -+ args = FakeArgs() -+ args.dn = test_account.dn -+ args.new_name = 'renamed_account' -+ args.new_dn = 'uid=renamed_account,ou=people,{}'.format(DEFAULT_SUFFIX) -+ args.keep_old_rdn = False -+ -+ log.info('Test dsidm account rename-by-dn') -+ rename(standalone, DEFAULT_SUFFIX, topology_st.logcap.log, args) -+ new_account = accounts.get(args.new_name) -+ -+ try: -+ output = 'Successfully renamed to {}'.format(new_account.dn) -+ check_value_in_log_and_reset(topology_st, check_value=output) -+ -+ log.info('Verify the new account does not have a uid attribute with the old rdn') -+ assert not new_account.present('uid', test_user_name) -+ assert new_account.present('displayName', test_user_name) -+ -+ log.info('Verify the old account does not exist') -+ assert not test_account.exists() -+ finally: -+ log.info('Clean up') -+ new_account.delete() -+ -+ -+@pytest.mark.skipif(ds_is_older("1.4.2"), reason="Not implemented") -+def test_dsidm_account_rename_by_dn_keep_old_rdn(topology_st, create_test_user): -+ """ Test dsidm account rename-by-dn option with keep-old-rdn -+ -+ :id: a128bdbb-c0a4-4d9d-9a95-9be2d3780094 -+ :setup: Standalone instance -+ :steps: -+ 1. Run dsidm account rename option on existing account -+ 2. Check the account has another uid attribute with the old rdn -+ 3. Check the old account is deleted -+ :expectedresults: -+ 1. Success -+ 2. Success -+ 3. 
Success -+ """ -+ -+ standalone = topology_st.standalone -+ accounts = nsUserAccounts(standalone, DEFAULT_SUFFIX) -+ test_account = accounts.get(test_user_name) -+ -+ args = FakeArgs() -+ args.dn = test_account.dn -+ args.new_name = 'renamed_account' -+ args.new_dn = 'uid=renamed_account,ou=people,{}'.format(DEFAULT_SUFFIX) -+ args.keep_old_rdn = True -+ -+ log.info('Test dsidm account rename-by-dn') -+ rename(standalone, DEFAULT_SUFFIX, topology_st.logcap.log, args) -+ new_account = accounts.get(args.new_name) -+ -+ try: -+ output = 'Successfully renamed to {}'.format(new_account.dn) -+ check_value_in_log_and_reset(topology_st, check_value=output) -+ -+ log.info('Verify the new account does not have a uid attribute with the old rdn') -+ assert new_account.present('uid', test_user_name) -+ assert new_account.present('displayName', test_user_name) -+ -+ log.info('Verify the old account does not exist') -+ assert not test_account.exists() -+ finally: -+ log.info('Clean up') -+ new_account.delete() -+ -+ -+@pytest.mark.skipif(ds_is_older("1.4.2"), reason="Not implemented") -+def test_dsidm_account_reset_password(topology_st, create_test_user): -+ """ Test dsidm account reset_password option -+ -+ :id: 02ffa044-08ae-40c5-9108-b02d0c3b0521 -+ :setup: Standalone instance -+ :steps: -+ 1. Run dsidm account reset_password on an existing user -+ 2. Verify that the user has now userPassword attribute set -+ :expectedresults: -+ 1. Success -+ 2. Success -+ """ -+ -+ standalone = topology_st.standalone -+ accounts = nsUserAccounts(standalone, DEFAULT_SUFFIX) -+ test_account = accounts.get(test_user_name) -+ -+ args = FakeArgs() -+ args.dn = test_account.dn -+ args.new_password = 'newpasswd' -+ output = 'reset password for {}'.format(test_account.dn) -+ -+ log.info('Test dsidm account reset_password') -+ reset_password(standalone, DEFAULT_SUFFIX, topology_st.logcap.log, args) -+ check_value_in_log_and_reset(topology_st, check_value=output) -+ -+ log.info('Verify the userPassword attribute is set') -+ assert test_account.present('userPassword') -+ -+ -+@pytest.mark.skipif(ds_is_older("1.4.2"), reason="Not implemented") -+def test_dsidm_account_change_password(topology_st, create_test_user): -+ """ Test dsidm account change_password option -+ -+ :id: 24c25b8f-df2b-4d43-a88e-47e24bc4ff36 -+ :setup: Standalone instance -+ :steps: -+ 1. Run dsidm account change_password on an existing user -+ 2. Verify that the user has userPassword attribute set -+ :expectedresults: -+ 1. Success -+ 2. 
Success -+ """ -+ -+ standalone = topology_st.standalone -+ accounts = nsUserAccounts(standalone, DEFAULT_SUFFIX) -+ test_account = accounts.get(test_user_name) -+ -+ args = FakeArgs() -+ args.dn = test_account.dn -+ args.new_password = 'newpasswd' -+ output = 'changed password for {}'.format(test_account.dn) -+ -+ log.info('Test dsidm account change_password') -+ change_password(standalone, DEFAULT_SUFFIX, topology_st.logcap.log, args) -+ check_value_in_log_and_reset(topology_st, check_value=output) -+ -+ log.info('Verify the userPassword attribute is set') -+ assert test_account.present('userPassword') -+ -+ - if __name__ == '__main__': - # Run isolated - # -s for DEBUG mode - CURRENT_FILE = os.path.realpath(__file__) -- pytest.main("-s %s" % CURRENT_FILE) -+ pytest.main("-s {}".format(CURRENT_FILE)) -\ No newline at end of file -diff --git a/src/lib389/lib389/cli_idm/account.py b/src/lib389/lib389/cli_idm/account.py -index 8b6f99549..9877c533a 100644 ---- a/src/lib389/lib389/cli_idm/account.py -+++ b/src/lib389/lib389/cli_idm/account.py -@@ -12,10 +12,12 @@ import ldap - import math - from datetime import datetime - from lib389.idm.account import Account, Accounts, AccountState --from lib389.cli_base import ( -- _generic_get_dn, -+from lib389.cli_idm import ( - _generic_list, - _generic_delete, -+ _generic_get_dn -+) -+from lib389.cli_base import ( - _generic_modify_dn, - _get_arg, - _get_dn_arg, --- -2.49.0 - diff --git a/0037-Issue-6940-dsconf-monitor-server-fails-with-ldapi-du.patch b/0037-Issue-6940-dsconf-monitor-server-fails-with-ldapi-du.patch deleted file mode 100644 index 32d54ff..0000000 --- a/0037-Issue-6940-dsconf-monitor-server-fails-with-ldapi-du.patch +++ /dev/null @@ -1,268 +0,0 @@ -From 7423f0a0b90bac39a23b5ce54a1c61439d0ebcb6 Mon Sep 17 00:00:00 2001 -From: Simon Pichugin -Date: Tue, 19 Aug 2025 16:10:09 -0700 -Subject: [PATCH] Issue 6940 - dsconf monitor server fails with ldapi:// due to - absent server ID (#6941) - -Description: The dsconf monitor server command fails when using ldapi:// -protocol because the server ID is not set, preventing PID retrieval from -defaults.inf. This causes the Web console to fail displaying the "Server -Version" field and potentially other CLI/WebUI issues. - -The fix attempts to derive the server ID from the LDAPI socket path when -not explicitly provided. This covers the common case where the socket name -contains the instance name (e.g., slapd-instance.socket). -If that's not possible, it also attempts to derive the server ID from the -nsslapd-instancedir configuration attribute. The derived server ID -is validated against actual system instances to ensure it exists. -Note that socket names can vary and nsslapd-instancedir can be changed. -This is a best-effort approach for the common naming pattern. - -Also fixes the LDAPI socket path extraction which was incorrectly using -offset 9 instead of 8 for ldapi:// URIs. - -The monitor command now handles missing PIDs gracefully, returning zero -values for process-specific stats instead of failing completely. - -Fixes: https://github.com/389ds/389-ds-base/issues/6940 - -Reviewed by: @vashirov, @mreynolds389 (Thanks!!) 
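As a minimal, self-contained sketch of the derivation described above (the helper name and the sample socket path are invented for illustration; the regex and the len('ldapi://') == 8 offset mirror the hunks below, and the shipped code additionally validates the candidate against the instances actually present on the system):

    import re
    from urllib.parse import unquote

    def derive_serverid(ldapuri):
        # An ldapi:// URI carries a percent-encoded socket path; the path
        # starts at offset 8, i.e. len('ldapi://'), not 9.
        if not ldapuri.startswith('ldapi://'):
            return None
        socket_path = unquote(ldapuri[len('ldapi://'):])
        # Accept 'slapd-<name>' with an optional '.socket' suffix, as in
        # the patch below.
        match = re.search(r'slapd-([A-Za-z0-9._-]+?)(?:\.socket)?(?:$|/)', socket_path)
        return match.group(1) if match else None

    assert derive_serverid('ldapi://%2Frun%2Fslapd-localhost.socket') == 'localhost'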
---- - src/lib389/lib389/__init__.py | 93 +++++++++++++++++++++++++++--- - src/lib389/lib389/cli_base/dsrc.py | 4 +- - src/lib389/lib389/monitor.py | 50 ++++++++++++---- - 3 files changed, 124 insertions(+), 23 deletions(-) - -diff --git a/src/lib389/lib389/__init__.py b/src/lib389/lib389/__init__.py -index 0ddfca8ae..23a20739f 100644 ---- a/src/lib389/lib389/__init__.py -+++ b/src/lib389/lib389/__init__.py -@@ -17,7 +17,7 @@ - - import sys - import os --from urllib.parse import urlparse -+from urllib.parse import urlparse, unquote - import stat - import pwd - import grp -@@ -67,7 +67,8 @@ from lib389.utils import ( - get_default_db_lib, - selinux_present, - selinux_label_port, -- get_user_is_root) -+ get_user_is_root, -+ get_instance_list) - from lib389.paths import Paths - from lib389.nss_ssl import NssSsl - from lib389.tasks import BackupTask, RestoreTask, Task -@@ -249,6 +250,57 @@ class DirSrv(SimpleLDAPObject, object): - self.dbdir = self.ds_paths.db_dir - self.changelogdir = os.path.join(os.path.dirname(self.dbdir), DEFAULT_CHANGELOG_DB) - -+ def _extract_serverid_from_string(self, text): -+ """Extract serverid from a string containing 'slapd-' pattern. -+ Returns the serverid or None if not found or validation fails. -+ Only attempts derivation if serverid is currently None. -+ """ -+ if getattr(self, 'serverid', None) is not None: -+ return None -+ if not text: -+ return None -+ -+ # Use regex to extract serverid from "slapd-" or "slapd-.socket" -+ match = re.search(r'slapd-([A-Za-z0-9._-]+?)(?:\.socket)?(?:$|/)', text) -+ if not match: -+ return None -+ candidate = match.group(1) -+ -+ self.serverid = candidate -+ try: -+ insts = get_instance_list() -+ except Exception: -+ self.serverid = None -+ return None -+ if f'slapd-{candidate}' in insts or candidate in insts: -+ return candidate -+ # restore original and report failure -+ self.serverid = None -+ return None -+ -+ def _derive_serverid_from_ldapi(self): -+ """Attempt to derive serverid from an LDAPI socket path or URI and -+ verify it exists on the system. Returns the serverid or None. 
-+ """ -+ socket_path = None -+ if hasattr(self, 'ldapi_socket') and self.ldapi_socket: -+ socket_path = unquote(self.ldapi_socket) -+ elif hasattr(self, 'ldapuri') and isinstance(self.ldapuri, str) and self.ldapuri.startswith('ldapi://'): -+ socket_path = unquote(self.ldapuri[len('ldapi://'):]) -+ -+ return self._extract_serverid_from_string(socket_path) -+ -+ def _derive_serverid_from_instancedir(self): -+ """Extract serverid from nsslapd-instancedir path like '/usr/lib64/dirsrv/slapd-'""" -+ try: -+ from lib389.config import Config -+ config = Config(self) -+ instancedir = config.get_attr_val_utf8_l("nsslapd-instancedir") -+ except Exception: -+ return None -+ -+ return self._extract_serverid_from_string(instancedir) -+ - def rebind(self): - """Reconnect to the DS - -@@ -528,6 +580,15 @@ class DirSrv(SimpleLDAPObject, object): - self.ldapi_autobind = args.get(SER_LDAPI_AUTOBIND, 'off') - self.isLocal = True - self.log.debug("Allocate %s with %s", self.__class__, self.ldapi_socket) -+ elif self.ldapuri is not None and isinstance(self.ldapuri, str) and self.ldapuri.startswith('ldapi://'): -+ # Try to learn serverid from ldapi uri -+ try: -+ self.ldapi_enabled = 'on' -+ self.ldapi_socket = unquote(self.ldapuri[len('ldapi://'):]) -+ self.ldapi_autobind = args.get(SER_LDAPI_AUTOBIND, 'off') -+ self.isLocal = True -+ except Exception: -+ pass - # Settings from args of server attributes - self.strict_hostname = args.get(SER_STRICT_HOSTNAME_CHECKING, False) - if self.strict_hostname is True: -@@ -548,9 +609,16 @@ class DirSrv(SimpleLDAPObject, object): - - self.log.debug("Allocate %s with %s:%s", self.__class__, self.host, (self.sslport or self.port)) - -- if SER_SERVERID_PROP in args: -- self.ds_paths = Paths(serverid=args[SER_SERVERID_PROP], instance=self, local=self.isLocal) -+ # Try to determine serverid if not provided -+ if SER_SERVERID_PROP in args and args.get(SER_SERVERID_PROP) is not None: - self.serverid = args.get(SER_SERVERID_PROP, None) -+ elif getattr(self, 'serverid', None) is None and self.isLocal: -+ sid = self._derive_serverid_from_ldapi() -+ if sid: -+ self.serverid = sid -+ -+ if getattr(self, 'serverid', None): -+ self.ds_paths = Paths(serverid=self.serverid, instance=self, local=self.isLocal) - else: - self.ds_paths = Paths(instance=self, local=self.isLocal) - -@@ -989,6 +1057,17 @@ class DirSrv(SimpleLDAPObject, object): - self.__initPart2() - self.state = DIRSRV_STATE_ONLINE - # Now that we're online, some of our methods may try to query the version online. -+ -+ # After transitioning online, attempt to derive serverid if still unknown. 
-+ # If we find it, refresh ds_paths and rerun __initPart2 -+ if getattr(self, 'serverid', None) is None and self.isLocal: -+ sid = self._derive_serverid_from_instancedir() -+ if sid: -+ self.serverid = sid -+ # Reinitialize paths with the new serverid -+ self.ds_paths = Paths(serverid=self.serverid, instance=self, local=self.isLocal) -+ if not connOnly: -+ self.__initPart2() - self.__add_brookers__() - - def close(self): -@@ -3537,8 +3616,4 @@ class DirSrv(SimpleLDAPObject, object): - """ - Get the pid of the running server - """ -- pid = pid_from_file(self.pid_file()) -- if pid == 0 or pid is None: -- return 0 -- else: -- return pid -+ return pid_from_file(self.pid_file()) -diff --git a/src/lib389/lib389/cli_base/dsrc.py b/src/lib389/lib389/cli_base/dsrc.py -index 84567b990..498228ce0 100644 ---- a/src/lib389/lib389/cli_base/dsrc.py -+++ b/src/lib389/lib389/cli_base/dsrc.py -@@ -56,7 +56,7 @@ def dsrc_arg_concat(args, dsrc_inst): - new_dsrc_inst['args'][SER_ROOT_DN] = new_dsrc_inst['binddn'] - if new_dsrc_inst['uri'][0:8] == 'ldapi://': - new_dsrc_inst['args'][SER_LDAPI_ENABLED] = "on" -- new_dsrc_inst['args'][SER_LDAPI_SOCKET] = new_dsrc_inst['uri'][9:] -+ new_dsrc_inst['args'][SER_LDAPI_SOCKET] = new_dsrc_inst['uri'][8:] - new_dsrc_inst['args'][SER_LDAPI_AUTOBIND] = "on" - - # Make new -@@ -170,7 +170,7 @@ def dsrc_to_ldap(path, instance_name, log): - dsrc_inst['args'][SER_ROOT_DN] = dsrc_inst['binddn'] - if dsrc_inst['uri'][0:8] == 'ldapi://': - dsrc_inst['args'][SER_LDAPI_ENABLED] = "on" -- dsrc_inst['args'][SER_LDAPI_SOCKET] = dsrc_inst['uri'][9:] -+ dsrc_inst['args'][SER_LDAPI_SOCKET] = dsrc_inst['uri'][8:] - dsrc_inst['args'][SER_LDAPI_AUTOBIND] = "on" - - # Return the dict. -diff --git a/src/lib389/lib389/monitor.py b/src/lib389/lib389/monitor.py -index 27b99a7e3..bf3e1df76 100644 ---- a/src/lib389/lib389/monitor.py -+++ b/src/lib389/lib389/monitor.py -@@ -92,21 +92,47 @@ class Monitor(DSLdapObject): - Get CPU and memory stats - """ - stats = {} -- pid = self._instance.get_pid() -+ try: -+ pid = self._instance.get_pid() -+ except Exception: -+ pid = None - total_mem = psutil.virtual_memory()[0] -- p = psutil.Process(pid) -- memory_stats = p.memory_full_info() - -- # Get memory & CPU stats -+ # Always include total system memory - stats['total_mem'] = [str(total_mem)] -- stats['rss'] = [str(memory_stats[0])] -- stats['vms'] = [str(memory_stats[1])] -- stats['swap'] = [str(memory_stats[9])] -- stats['mem_rss_percent'] = [str(round(p.memory_percent("rss")))] -- stats['mem_vms_percent'] = [str(round(p.memory_percent("vms")))] -- stats['mem_swap_percent'] = [str(round(p.memory_percent("swap")))] -- stats['total_threads'] = [str(p.num_threads())] -- stats['cpu_usage'] = [str(round(p.cpu_percent(interval=0.1)))] -+ -+ # Process-specific stats - only if process is running (pid is not None) -+ if pid is not None: -+ try: -+ p = psutil.Process(pid) -+ memory_stats = p.memory_full_info() -+ -+ # Get memory & CPU stats -+ stats['rss'] = [str(memory_stats[0])] -+ stats['vms'] = [str(memory_stats[1])] -+ stats['swap'] = [str(memory_stats[9])] -+ stats['mem_rss_percent'] = [str(round(p.memory_percent("rss")))] -+ stats['mem_vms_percent'] = [str(round(p.memory_percent("vms")))] -+ stats['mem_swap_percent'] = [str(round(p.memory_percent("swap")))] -+ stats['total_threads'] = [str(p.num_threads())] -+ stats['cpu_usage'] = [str(round(p.cpu_percent(interval=0.1)))] -+ except (psutil.NoSuchProcess, psutil.AccessDenied): -+ # Process exists in PID file but is not accessible or doesn't exist -+ pid = 
None -+ -+ # If no valid PID, provide zero values for process stats -+ if pid is None: -+ stats['rss'] = ['0'] -+ stats['vms'] = ['0'] -+ stats['swap'] = ['0'] -+ stats['mem_rss_percent'] = ['0'] -+ stats['mem_vms_percent'] = ['0'] -+ stats['mem_swap_percent'] = ['0'] -+ stats['total_threads'] = ['0'] -+ stats['cpu_usage'] = ['0'] -+ stats['server_status'] = ['PID unavailable'] -+ else: -+ stats['server_status'] = ['Server running'] - - # Connections to DS - if self._instance.port == "0": --- -2.49.0 - diff --git a/0038-Issue-6936-Make-user-subtree-policy-creation-idempot.patch b/0038-Issue-6936-Make-user-subtree-policy-creation-idempot.patch deleted file mode 100644 index c946749..0000000 --- a/0038-Issue-6936-Make-user-subtree-policy-creation-idempot.patch +++ /dev/null @@ -1,569 +0,0 @@ -From 594333d1a6a8bba4d485b8227c4474e4ca2aa6a4 Mon Sep 17 00:00:00 2001 -From: Simon Pichugin -Date: Tue, 19 Aug 2025 14:30:15 -0700 -Subject: [PATCH] Issue 6936 - Make user/subtree policy creation idempotent - (#6937) - -Description: Correct the CLI mapping typo to use 'nsslapd-pwpolicy-local', -rework subtree policy detection to validate CoS templates and add user-policy detection. -Make user/subtree policy creation idempotent via ensure_state, and improve deletion -logic to distinguish subtree vs user policies and fail if none exist. - -Add a test suite (pwp_history_local_override_test.py) exercising global-only and local-only -history enforcement, local overriding global counts, immediate effect of dsconf updates, -and fallback to global after removing a user policy, ensuring reliable behavior -and preventing regressions. - -Fixes: https://github.com/389ds/389-ds-base/issues/6936 - -Reviewed by: @mreynolds389 (Thanks!) ---- - .../pwp_history_local_override_test.py | 351 ++++++++++++++++++ - src/lib389/lib389/cli_conf/pwpolicy.py | 4 +- - src/lib389/lib389/pwpolicy.py | 107 ++++-- - 3 files changed, 424 insertions(+), 38 deletions(-) - create mode 100644 dirsrvtests/tests/suites/password/pwp_history_local_override_test.py - -diff --git a/dirsrvtests/tests/suites/password/pwp_history_local_override_test.py b/dirsrvtests/tests/suites/password/pwp_history_local_override_test.py -new file mode 100644 -index 000000000..6d72725fa ---- /dev/null -+++ b/dirsrvtests/tests/suites/password/pwp_history_local_override_test.py -@@ -0,0 +1,351 @@ -+# --- BEGIN COPYRIGHT BLOCK --- -+# Copyright (C) 2025 Red Hat, Inc. -+# All rights reserved. -+# -+# License: GPL (version 3 or any later version). -+# See LICENSE for details. 
-+# --- END COPYRIGHT BLOCK --- -+# -+import os -+import time -+import ldap -+import pytest -+import subprocess -+import logging -+ -+from lib389._constants import DEFAULT_SUFFIX, DN_DM, PASSWORD, DN_CONFIG -+from lib389.topologies import topology_st -+from lib389.idm.user import UserAccounts -+from lib389.idm.domain import Domain -+from lib389.pwpolicy import PwPolicyManager -+ -+pytestmark = pytest.mark.tier1 -+ -+DEBUGGING = os.getenv("DEBUGGING", default=False) -+if DEBUGGING: -+ logging.getLogger(__name__).setLevel(logging.DEBUG) -+else: -+ logging.getLogger(__name__).setLevel(logging.INFO) -+log = logging.getLogger(__name__) -+ -+OU_DN = f"ou=People,{DEFAULT_SUFFIX}" -+USER_ACI = '(targetattr="userpassword || passwordHistory")(version 3.0; acl "pwp test"; allow (all) userdn="ldap:///self";)' -+ -+ -+@pytest.fixture(autouse=True, scope="function") -+def restore_global_policy(topology_st, request): -+ """Snapshot and restore global password policy around each test in this file.""" -+ inst = topology_st.standalone -+ inst.simple_bind_s(DN_DM, PASSWORD) -+ -+ attrs = [ -+ 'nsslapd-pwpolicy-local', -+ 'nsslapd-pwpolicy-inherit-global', -+ 'passwordHistory', -+ 'passwordInHistory', -+ 'passwordChange', -+ ] -+ -+ entry = inst.getEntry(DN_CONFIG, ldap.SCOPE_BASE, '(objectClass=*)', attrs) -+ saved = {attr: entry.getValue(attr) for attr in attrs} -+ -+ def fin(): -+ inst.simple_bind_s(DN_DM, PASSWORD) -+ for attr, value in saved.items(): -+ inst.config.replace(attr, value) -+ -+ request.addfinalizer(fin) -+ -+ -+@pytest.fixture(scope="function") -+def setup_entries(topology_st, request): -+ """Create test OU and user, and install an ACI for self password changes.""" -+ -+ inst = topology_st.standalone -+ -+ suffix = Domain(inst, DEFAULT_SUFFIX) -+ suffix.add('aci', USER_ACI) -+ -+ users = UserAccounts(inst, DEFAULT_SUFFIX) -+ try: -+ user = users.create_test_user(uid=1) -+ except ldap.ALREADY_EXISTS: -+ user = users.get("test_user_1") -+ -+ def fin(): -+ pwp = PwPolicyManager(inst) -+ try: -+ pwp.delete_local_policy(OU_DN) -+ except Exception as e: -+ if "No password policy" in str(e): -+ pass -+ else: -+ raise e -+ try: -+ pwp.delete_local_policy(user.dn) -+ except Exception as e: -+ if "No password policy" in str(e): -+ pass -+ else: -+ raise e -+ suffix.remove('aci', USER_ACI) -+ request.addfinalizer(fin) -+ -+ return user -+ -+ -+def set_user_password(inst, user, new_password, bind_as_user_password=None, expect_violation=False): -+ if bind_as_user_password is not None: -+ user.rebind(bind_as_user_password) -+ try: -+ user.reset_password(new_password) -+ if expect_violation: -+ pytest.fail("Password change unexpectedly succeeded") -+ except ldap.CONSTRAINT_VIOLATION: -+ if not expect_violation: -+ pytest.fail("Password change unexpectedly rejected with CONSTRAINT_VIOLATION") -+ finally: -+ inst.simple_bind_s(DN_DM, PASSWORD) -+ time.sleep(1) -+ -+ -+def set_global_history(inst, enabled: bool, count: int, inherit_global: str = 'on'): -+ inst.simple_bind_s(DN_DM, PASSWORD) -+ inst.config.replace('nsslapd-pwpolicy-local', 'on') -+ inst.config.replace('nsslapd-pwpolicy-inherit-global', inherit_global) -+ inst.config.replace('passwordHistory', 'on' if enabled else 'off') -+ inst.config.replace('passwordInHistory', str(count)) -+ inst.config.replace('passwordChange', 'on') -+ time.sleep(1) -+ -+ -+def ensure_local_subtree_policy(inst, count: int, track_update_time: str = 'on'): -+ pwp = PwPolicyManager(inst) -+ pwp.create_subtree_policy(OU_DN, { -+ 'passwordChange': 'on', -+ 'passwordHistory': 
'on', -+ 'passwordInHistory': str(count), -+ 'passwordTrackUpdateTime': track_update_time, -+ }) -+ time.sleep(1) -+ -+ -+def set_local_history_via_cli(inst, count: int): -+ sbin_dir = inst.get_sbin_dir() -+ inst_name = inst.serverid -+ cmd = [f"{sbin_dir}/dsconf", inst_name, "localpwp", "set", f"--pwdhistorycount={count}", OU_DN] -+ rc = subprocess.call(cmd) -+ assert rc == 0, f"dsconf command failed rc={rc}: {' '.join(cmd)}" -+ time.sleep(1) -+ -+ -+def test_global_history_only_enforced(topology_st, setup_entries): -+ """Global-only history enforcement with count 2 -+ -+ :id: 3d8cf35b-4a33-4587-9814-ebe18b7a1f92 -+ :setup: Standalone instance, test OU and user, ACI for self password changes -+ :steps: -+ 1. Remove local policies -+ 2. Set global policy: passwordHistory=on, passwordInHistory=2 -+ 3. Set password to Alpha1, then change to Alpha2 and Alpha3 as the user -+ 4. Attempt to change to Alpha1 and Alpha2 -+ 5. Attempt to change to Alpha4 -+ :expectedresults: -+ 1. Success -+ 2. Success -+ 3. Success -+ 4. Changes to Alpha1 and Alpha2 are rejected with CONSTRAINT_VIOLATION -+ 5. Change to Alpha4 is accepted -+ """ -+ inst = topology_st.standalone -+ inst.simple_bind_s(DN_DM, PASSWORD) -+ -+ set_global_history(inst, enabled=True, count=2) -+ -+ user = setup_entries -+ user.reset_password('Alpha1') -+ set_user_password(inst, user, 'Alpha2', bind_as_user_password='Alpha1') -+ set_user_password(inst, user, 'Alpha3', bind_as_user_password='Alpha2') -+ -+ # Within last 2 -+ set_user_password(inst, user, 'Alpha2', bind_as_user_password='Alpha3', expect_violation=True) -+ set_user_password(inst, user, 'Alpha1', bind_as_user_password='Alpha3', expect_violation=True) -+ -+ # New password should be allowed -+ set_user_password(inst, user, 'Alpha4', bind_as_user_password='Alpha3', expect_violation=False) -+ -+ -+def test_local_overrides_global_history(topology_st, setup_entries): -+ """Local subtree policy (history=3) overrides global (history=1) -+ -+ :id: 97c22f56-5ea6-40c1-8d8c-1cece3bf46fd -+ :setup: Standalone instance, test OU and user -+ :steps: -+ 1. Set global policy passwordInHistory=1 -+ 2. Create local subtree policy on the OU with passwordInHistory=3 -+ 3. Set password to Bravo1, then change to Bravo2 and Bravo3 as the user -+ 4. Attempt to change to Bravo1 -+ 5. Attempt to change to Bravo5 -+ :expectedresults: -+ 1. Success -+ 2. Success -+ 3. Success -+ 4. Change to Bravo1 is rejected (local policy wins) -+ 5. Change to Bravo5 is accepted -+ """ -+ inst = topology_st.standalone -+ inst.simple_bind_s(DN_DM, PASSWORD) -+ -+ set_global_history(inst, enabled=True, count=1, inherit_global='on') -+ -+ ensure_local_subtree_policy(inst, count=3) -+ -+ user = setup_entries -+ user.reset_password('Bravo1') -+ set_user_password(inst, user, 'Bravo2', bind_as_user_password='Bravo1') -+ set_user_password(inst, user, 'Bravo3', bind_as_user_password='Bravo2') -+ -+ # Third prior should be rejected under local policy count=3 -+ set_user_password(inst, user, 'Bravo1', bind_as_user_password='Bravo3', expect_violation=True) -+ -+ # New password allowed -+ set_user_password(inst, user, 'Bravo5', bind_as_user_password='Bravo3', expect_violation=False) -+ -+ -+def test_change_local_history_via_cli_affects_enforcement(topology_st, setup_entries): -+ """Changing local policy via CLI is enforced immediately -+ -+ :id: 5a6d0d14-4009-4bad-86e1-cde5000c43dc -+ :setup: Standalone instance, test OU and user, dsconf available -+ :steps: -+ 1. Ensure local subtree policy passwordInHistory=3 -+ 2. 
Set password to Charlie1, then change to Charlie2 and Charlie3 as the user -+ 3. Attempt to change to Charlie1 (within last 3) -+ 4. Run: dsconf localpwp set --pwdhistorycount=1 "ou=product testing," -+ 5. Attempt to change to Charlie1 again -+ :expectedresults: -+ 1. Success -+ 2. Success -+ 3. Change to Charlie1 is rejected -+ 4. CLI command succeeds -+ 5. Change to Charlie1 now succeeds (only last 1 is disallowed) -+ """ -+ inst = topology_st.standalone -+ inst.simple_bind_s(DN_DM, PASSWORD) -+ -+ ensure_local_subtree_policy(inst, count=3) -+ -+ user = setup_entries -+ user.reset_password('Charlie1') -+ set_user_password(inst, user, 'Charlie2', bind_as_user_password='Charlie1', expect_violation=False) -+ set_user_password(inst, user, 'Charlie3', bind_as_user_password='Charlie2', expect_violation=False) -+ -+ # With count=3, Charlie1 is within history -+ set_user_password(inst, user, 'Charlie1', bind_as_user_password='Charlie3', expect_violation=True) -+ -+ # Reduce local count to 1 via CLI to exercise CLI mapping and updated code -+ set_local_history_via_cli(inst, count=1) -+ -+ # Now Charlie1 should be allowed -+ set_user_password(inst, user, 'Charlie1', bind_as_user_password='Charlie3', expect_violation=False) -+ -+ -+def test_history_local_only_enforced(topology_st, setup_entries): -+ """Local-only history enforcement with count 3 -+ -+ :id: af6ff34d-ac94-4108-a7b6-2b589c960154 -+ :setup: Standalone instance, test OU and user -+ :steps: -+ 1. Disable global password history (passwordHistory=off, passwordInHistory=0, inherit off) -+ 2. Ensure local subtree policy with passwordInHistory=3 -+ 3. Set password to Delta1, then change to Delta2 and Delta3 as the user -+ 4. Attempt to change to Delta1 -+ 5. Attempt to change to Delta5 -+ 6. Change once more to Delta6, then change to Delta1 -+ :expectedresults: -+ 1. Success -+ 2. Success -+ 3. Success -+ 4. Change to Delta1 is rejected (within last 3) -+ 5. Change to Delta5 is accepted -+ 6. Delta1 is now older than the last 3 and is accepted -+ """ -+ inst = topology_st.standalone -+ inst.simple_bind_s(DN_DM, PASSWORD) -+ -+ set_global_history(inst, enabled=False, count=0, inherit_global='off') -+ -+ ensure_local_subtree_policy(inst, count=3) -+ -+ user = setup_entries -+ user.reset_password('Delta1') -+ set_user_password(inst, user, 'Delta2', bind_as_user_password='Delta1') -+ set_user_password(inst, user, 'Delta3', bind_as_user_password='Delta2') -+ -+ # Within last 3 -+ set_user_password(inst, user, 'Delta1', bind_as_user_password='Delta3', expect_violation=True) -+ -+ # New password allowed -+ set_user_password(inst, user, 'Delta5', bind_as_user_password='Delta3', expect_violation=False) -+ -+ # Now Delta1 is older than the last 3 after one more change -+ set_user_password(inst, user, 'Delta6', bind_as_user_password='Delta5', expect_violation=False) -+ set_user_password(inst, user, 'Delta1', bind_as_user_password='Delta6', expect_violation=False) -+ -+ -+def test_user_policy_detection_and_enforcement(topology_st, setup_entries): -+ """User local policy is detected and enforced; removal falls back to global policy -+ -+ :id: 2213126a-1f47-468c-8337-0d2ee5d2d585 -+ :setup: Standalone instance, test OU and user -+ :steps: -+ 1. Set global policy passwordInHistory=1 -+ 2. Create a user local password policy on the user with passwordInHistory=3 -+ 3. Verify is_user_policy(USER_DN) is True -+ 4. Set password to Echo1, then change to Echo2 and Echo3 as the user -+ 5. Attempt to change to Echo1 (within last 3) -+ 6. 
Delete the user local policy -+ 7. Verify is_user_policy(USER_DN) is False -+ 8. Attempt to change to Echo1 again (now only last 1 disallowed by global) -+ :expectedresults: -+ 1. Success -+ 2. Success -+ 3. is_user_policy returns True -+ 4. Success -+ 5. Change to Echo1 is rejected -+ 6. Success -+ 7. is_user_policy returns False -+ 8. Change to Echo1 succeeds (two back is allowed by global=1) -+ """ -+ inst = topology_st.standalone -+ inst.simple_bind_s(DN_DM, PASSWORD) -+ -+ set_global_history(inst, enabled=True, count=1, inherit_global='on') -+ -+ pwp = PwPolicyManager(inst) -+ user = setup_entries -+ pwp.create_user_policy(user.dn, { -+ 'passwordChange': 'on', -+ 'passwordHistory': 'on', -+ 'passwordInHistory': '3', -+ }) -+ -+ assert pwp.is_user_policy(user.dn) is True -+ -+ user.reset_password('Echo1') -+ set_user_password(inst, user, 'Echo2', bind_as_user_password='Echo1', expect_violation=False) -+ set_user_password(inst, user, 'Echo3', bind_as_user_password='Echo2', expect_violation=False) -+ set_user_password(inst, user, 'Echo1', bind_as_user_password='Echo3', expect_violation=True) -+ -+ pwp.delete_local_policy(user.dn) -+ assert pwp.is_user_policy(user.dn) is False -+ -+ # With only global=1, Echo1 (two back) is allowed -+ set_user_password(inst, user, 'Echo1', bind_as_user_password='Echo3', expect_violation=False) -+ -+ -+if __name__ == '__main__': -+ # Run isolated -+ # -s for DEBUG mode -+ CURRENT_FILE = os.path.realpath(__file__) -+ pytest.main("-s %s" % CURRENT_FILE) -diff --git a/src/lib389/lib389/cli_conf/pwpolicy.py b/src/lib389/lib389/cli_conf/pwpolicy.py -index 2d4ba9b21..a3e59a90c 100644 ---- a/src/lib389/lib389/cli_conf/pwpolicy.py -+++ b/src/lib389/lib389/cli_conf/pwpolicy.py -@@ -1,5 +1,5 @@ - # --- BEGIN COPYRIGHT BLOCK --- --# Copyright (C) 2023 Red Hat, Inc. -+# Copyright (C) 2025 Red Hat, Inc. - # All rights reserved. - # - # License: GPL (version 3 or any later version). -@@ -43,7 +43,7 @@ def _get_pw_policy(inst, targetdn, log, use_json=None): - targetdn = 'cn=config' - policydn = targetdn - basedn = targetdn -- attr_list.extend(['passwordisglobalpolicy', 'nsslapd-pwpolicy_local']) -+ attr_list.extend(['passwordisglobalpolicy', 'nsslapd-pwpolicy-local']) - all_attrs = inst.config.get_attrs_vals_utf8(attr_list) - attrs = {k: v for k, v in all_attrs.items() if len(v) > 0} - else: -diff --git a/src/lib389/lib389/pwpolicy.py b/src/lib389/lib389/pwpolicy.py -index 6a47a44fe..539c230a9 100644 ---- a/src/lib389/lib389/pwpolicy.py -+++ b/src/lib389/lib389/pwpolicy.py -@@ -1,5 +1,5 @@ - # --- BEGIN COPYRIGHT BLOCK --- --# Copyright (C) 2018 Red Hat, Inc. -+# Copyright (C) 2025 Red Hat, Inc. - # All rights reserved. - # - # License: GPL (version 3 or any later version). -@@ -7,6 +7,7 @@ - # --- END COPYRIGHT BLOCK --- - - import ldap -+from ldap import filter as ldap_filter - from lib389._mapped_object import DSLdapObject, DSLdapObjects - from lib389.backend import Backends - from lib389.config import Config -@@ -74,19 +75,56 @@ class PwPolicyManager(object): - } - - def is_subtree_policy(self, dn): -- """Check if the entry has a subtree password policy. If we can find a -- template entry it is subtree policy -+ """Check if a subtree password policy exists for a given entry DN. - -- :param dn: Entry DN with PwPolicy set up -+ A subtree policy is indicated by the presence of any CoS template -+ (under `cn=nsPwPolicyContainer,`) that has a `pwdpolicysubentry` -+ attribute pointing to an existing entry with objectClass `passwordpolicy`. 
-+ -+ :param dn: Entry DN to check for subtree policy - :type dn: str - -- :returns: True if the entry has a subtree policy, False otherwise -+ :returns: True if a subtree policy exists, False otherwise -+ :rtype: bool - """ -- cos_templates = CosTemplates(self._instance, 'cn=nsPwPolicyContainer,{}'.format(dn)) - try: -- cos_templates.get('cn=nsPwTemplateEntry,%s' % dn) -- return True -- except: -+ container_basedn = 'cn=nsPwPolicyContainer,{}'.format(dn) -+ templates = CosTemplates(self._instance, container_basedn).list() -+ for tmpl in templates: -+ pwp_dn = tmpl.get_attr_val_utf8('pwdpolicysubentry') -+ if not pwp_dn: -+ continue -+ # Validate that the referenced entry exists and is a passwordpolicy -+ pwp_entry = PwPolicyEntry(self._instance, pwp_dn) -+ if pwp_entry.exists() and pwp_entry.present('objectClass', 'passwordpolicy'): -+ return True -+ except ldap.LDAPError: -+ pass -+ return False -+ -+ def is_user_policy(self, dn): -+ """Check if the entry has a user password policy. -+ -+ A user policy is indicated by the target entry having a -+ `pwdpolicysubentry` attribute that points to an existing -+ entry with objectClass `passwordpolicy`. -+ -+ :param dn: Entry DN to check -+ :type dn: str -+ -+ :returns: True if the entry has a user policy, False otherwise -+ :rtype: bool -+ """ -+ try: -+ entry = Account(self._instance, dn) -+ if not entry.exists(): -+ return False -+ pwp_dn = entry.get_attr_val_utf8('pwdpolicysubentry') -+ if not pwp_dn: -+ return False -+ pwp_entry = PwPolicyEntry(self._instance, pwp_dn) -+ return pwp_entry.exists() and pwp_entry.present('objectClass', 'passwordpolicy') -+ except ldap.LDAPError: - return False - - def create_user_policy(self, dn, properties): -@@ -114,10 +152,10 @@ class PwPolicyManager(object): - pwp_containers = nsContainers(self._instance, basedn=parentdn) - pwp_container = pwp_containers.ensure_state(properties={'cn': 'nsPwPolicyContainer'}) - -- # Create policy entry -+ # Create or update the policy entry - properties['cn'] = 'cn=nsPwPolicyEntry_user,%s' % dn - pwp_entries = PwPolicyEntries(self._instance, pwp_container.dn) -- pwp_entry = pwp_entries.create(properties=properties) -+ pwp_entry = pwp_entries.ensure_state(properties=properties) - try: - # Add policy to the entry - user_entry.replace('pwdpolicysubentry', pwp_entry.dn) -@@ -152,32 +190,27 @@ class PwPolicyManager(object): - pwp_containers = nsContainers(self._instance, basedn=dn) - pwp_container = pwp_containers.ensure_state(properties={'cn': 'nsPwPolicyContainer'}) - -- # Create policy entry -- pwp_entry = None -+ # Create or update the policy entry - properties['cn'] = 'cn=nsPwPolicyEntry_subtree,%s' % dn - pwp_entries = PwPolicyEntries(self._instance, pwp_container.dn) -- pwp_entry = pwp_entries.create(properties=properties) -- try: -- # The CoS template entry (nsPwTemplateEntry) that has the pwdpolicysubentry -- # value pointing to the above (nsPwPolicyEntry) entry -- cos_template = None -- cos_templates = CosTemplates(self._instance, pwp_container.dn) -- cos_template = cos_templates.create(properties={'cosPriority': '1', -- 'pwdpolicysubentry': pwp_entry.dn, -- 'cn': 'cn=nsPwTemplateEntry,%s' % dn}) -- -- # The CoS specification entry at the subtree level -- cos_pointer_defs = CosPointerDefinitions(self._instance, dn) -- cos_pointer_defs.create(properties={'cosAttribute': 'pwdpolicysubentry default operational-default', -- 'cosTemplateDn': cos_template.dn, -- 'cn': 'nsPwPolicy_CoS'}) -- except ldap.LDAPError as e: -- # Something went wrong, remove what we have done -- if 
pwp_entry is not None: -- pwp_entry.delete() -- if cos_template is not None: -- cos_template.delete() -- raise e -+ pwp_entry = pwp_entries.ensure_state(properties=properties) -+ -+ # Ensure the CoS template entry (nsPwTemplateEntry) that points to the -+ # password policy entry -+ cos_templates = CosTemplates(self._instance, pwp_container.dn) -+ cos_template = cos_templates.ensure_state(properties={ -+ 'cosPriority': '1', -+ 'pwdpolicysubentry': pwp_entry.dn, -+ 'cn': 'cn=nsPwTemplateEntry,%s' % dn -+ }) -+ -+ # Ensure the CoS specification entry at the subtree level -+ cos_pointer_defs = CosPointerDefinitions(self._instance, dn) -+ cos_pointer_defs.ensure_state(properties={ -+ 'cosAttribute': 'pwdpolicysubentry default operational-default', -+ 'cosTemplateDn': cos_template.dn, -+ 'cn': 'nsPwPolicy_CoS' -+ }) - - # make sure that local policies are enabled - self.set_global_policy({'nsslapd-pwpolicy-local': 'on'}) -@@ -244,10 +277,12 @@ class PwPolicyManager(object): - if self.is_subtree_policy(entry.dn): - parentdn = dn - subtree = True -- else: -+ elif self.is_user_policy(entry.dn): - dn_comps = ldap.dn.explode_dn(dn) - dn_comps.pop(0) - parentdn = ",".join(dn_comps) -+ else: -+ raise ValueError('The target entry dn does not have a password policy') - - # Starting deleting the policy, ignore the parts that might already have been removed - pwp_container = nsContainer(self._instance, 'cn=nsPwPolicyContainer,%s' % parentdn) --- -2.49.0 - diff --git a/0039-Issue-6919-numSubordinates-tombstoneNumSubordinates-.patch b/0039-Issue-6919-numSubordinates-tombstoneNumSubordinates-.patch deleted file mode 100644 index 10ac381..0000000 --- a/0039-Issue-6919-numSubordinates-tombstoneNumSubordinates-.patch +++ /dev/null @@ -1,1460 +0,0 @@ -From 9aa30236c7db41280af48d9cf74168e4d05c2c5c Mon Sep 17 00:00:00 2001 -From: progier389 -Date: Thu, 21 Aug 2025 17:30:00 +0200 -Subject: [PATCH] =?UTF-8?q?Issue=206919=20-=20numSubordinates/tombstoneNum?= - =?UTF-8?q?Subordinates=20are=20inconsisten=E2=80=A6=20(#6920)?= -MIME-Version: 1.0 -Content-Type: text/plain; charset=UTF-8 -Content-Transfer-Encoding: 8bit - -* Issue 6919 - numSubordinates/tombstoneNumSubordinates are inconsistent after import - -Problem was that the number of tombstones was not properly computed. -With bdb: tombstoneNumSubordinates was not computed. -With lmdb: numSubordinates was also including the tombstones. -Fixed the numSubordinates/tombstoneNumSubordinates computation during import by: -walking the entryrdn C keys (because parentid does not contain tombstones on bdb), -checking if a child entry is a tombstone by looking in the objectclass index, -and increasing the numSubordinates/tombstoneNumSubordinates subcount accordingly. -Also performed some code cleanup: -- removed the job->mother containing the hashtable of non-leaf entry ids -- moved the function that replaces the numSubordinates/tombstoneNumSubordinates -attribute in an entry back in export.c (rather than duplicating it in -bdb_import.c and mdb_import.c) -- changed a PR_ASSERT that is no longer true. - -Notes: -Not using the parentid index because it does not contain the tombstones on bdb -(although it does on lmdb). -The new subcount computation algorithm was not possible when the code was originally written -because it requires the entryrdn index and having all the keys (i.e. no ALLIDs in the indexes). -That was why a hash table of ids and idlists was used. -(I removed that code because it generates a serious -overhead if there is a large number of non-leaf entries, typically if user entries have children.) - -Issue: #6919 - -Reviewed by: @mreynolds389 (Thanks!)
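To make the counting rule above concrete, here is a toy Python model of the idea only (the actual fix lives in the C import code; the list-of-objectclasses representation of a child entry is invented for illustration):

    def count_subordinates(children):
        # Tombstones must be counted into tombstoneNumSubordinates rather
        # than numSubordinates; 389-ds marks them with the nsTombstone
        # objectclass (the server matches it case-insensitively, exact
        # strings suffice for this toy model).
        num_sub = 0
        tombstone_sub = 0
        for objectclasses in children:
            if 'nsTombstone' in objectclasses:
                tombstone_sub += 1
            else:
                num_sub += 1
        return num_sub, tombstone_sub

    assert count_subordinates([['top', 'person'],
                               ['top', 'person', 'nsTombstone']]) == (1, 1)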
--- - .../numsubordinates_replication_test.py | 124 ++++- - ldap/servers/plugins/replication/cl5_api.c | 2 +- - .../slapd/back-ldbm/db-bdb/bdb_import.c | 492 ++++++++---------- - .../back-ldbm/db-bdb/bdb_import_threads.c | 13 - - .../slapd/back-ldbm/db-bdb/bdb_layer.h | 3 +- - .../slapd/back-ldbm/db-mdb/mdb_import.c | 264 ++++++---- - .../back-ldbm/db-mdb/mdb_import_threads.c | 1 + - .../slapd/back-ldbm/db-mdb/mdb_instance.c | 2 +- - .../slapd/back-ldbm/db-mdb/mdb_layer.h | 2 - - ldap/servers/slapd/back-ldbm/import.c | 59 +++ - ldap/servers/slapd/back-ldbm/import.h | 2 +- - ldap/servers/slapd/back-ldbm/ldif2ldbm.c | 43 -- - ldap/servers/slapd/control.c | 2 +- - src/lib389/lib389/__init__.py | 2 +- - src/lib389/lib389/_mapped_object.py | 2 +- - 15 files changed, 572 insertions(+), 441 deletions(-) - -diff --git a/dirsrvtests/tests/suites/replication/numsubordinates_replication_test.py b/dirsrvtests/tests/suites/replication/numsubordinates_replication_test.py -index 9ba10657d..2624b2144 100644 ---- a/dirsrvtests/tests/suites/replication/numsubordinates_replication_test.py -+++ b/dirsrvtests/tests/suites/replication/numsubordinates_replication_test.py -@@ -9,12 +9,15 @@ - import os - import logging - import pytest -+import re - from lib389._constants import DEFAULT_SUFFIX --from lib389.replica import ReplicationManager - from lib389.idm.organizationalunit import OrganizationalUnits - from lib389.idm.user import UserAccounts --from lib389.topologies import topology_i2 as topo_i2 -- -+from lib389.replica import ReplicationManager -+from lib389.tasks import * -+from lib389.tombstone import Tombstones -+from lib389.topologies import topology_i2 as topo_i2, topology_m2 as topo_m2 -+from lib389.utils import get_default_db_lib - - pytestmark = pytest.mark.tier1 - -@@ -26,6 +29,61 @@ else: - log = logging.getLogger(__name__) - - -+def get_test_name(): -+ full_testname = os.getenv('PYTEST_CURRENT_TEST') -+ res = re.match('.*::([^ ]+) .*', full_testname) -+ assert res -+ return res.group(1) -+ -+ -+@pytest.fixture(scope="function") -+def with_container(topo_m2, request): -+ # Create an organizational unit container with proper cleanup -+ testname = get_test_name() -+ ou = f'test_container_{testname}' -+ S1 = topo_m2.ms["supplier1"] -+ S2 = topo_m2.ms["supplier2"] -+ repl = ReplicationManager(DEFAULT_SUFFIX) -+ -+ log.info(f"Create container ou={ou},{DEFAULT_SUFFIX} on {S1.serverid}") -+ ous1 = OrganizationalUnits(S1, DEFAULT_SUFFIX) -+ container = ous1.create(properties={ -+ 'ou': ou, -+ 'description': f'Test container for {testname} test' -+ }) -+ -+ def fin(): -+ container.delete(recursive=True) -+ repl.wait_for_replication(S1, S2) -+ -+ if not DEBUGGING: -+ request.addfinalizer(fin) -+ repl.wait_for_replication(S1, S2) -+ -+ return container -+ -+ -+def verify_value_against_entries(container, attr, entries, msg): -+ # Check that the container attr value matches the number of entries -+ num = container.get_attr_val_int(attr) -+ num_entries = len(entries) -+ dns = [ e.dn for e in entries ] -+ log.debug(f"[{msg}] {attr}: entries: {entries}") -+ log.info(f"[{msg}] container is {container}") -+ log.info(f"[{msg}] {attr}: {num} (Expecting: {num_entries})") -+ assert num == num_entries, ( -+ f"{attr} attribute has wrong value: {num} {msg}, was expecting: {num_entries}", -+ f"entries are {dns}" ) -+ -+ -+def 
verify_subordinates(inst, container, msg): -+ log.info(f"Verify numSubordinates and tombstoneNumSubordinates {msg}") -+ tombstones = Tombstones(inst, container.dn).list() -+ entries = container.search(scope='one') -+ verify_value_against_entries(container, 'numSubordinates', entries, msg) -+ verify_value_against_entries(container, 'tombstoneNumSubordinates', tombstones, msg) -+ -+ - def test_numsubordinates_tombstone_replication_mismatch(topo_i2): - """Test that numSubordinates values match between replicas after tombstone creation - -@@ -136,9 +194,67 @@ def test_numsubordinates_tombstone_replication_mismatch(topo_i2): - f"instance2 has {tombstone_numsubordinates_instance2}. " - ) - -+def test_numsubordinates_tombstone_after_import(topo_m2, with_container): -+ """Test that numSubordinates values are the expected one after an import -+ -+ :id: 67bec454-6bb3-11f0-b9ae-c85309d5c3e3 -+ :setup: Two suppliers instances with an ou container -+ :steps: -+ 1. Create a container (organizational unit) on the first instance -+ 2. Create a user object in that container -+ 3. Delete the user object (this creates a tombstone) -+ 4. Set up replication between the two instances -+ 5. Wait for replication to complete -+ 6. Check numSubordinates on both instances -+ 7. Check tombstoneNumSubordinates on both instances -+ 8. Verify that numSubordinates values match on both instances -+ :expectedresults: -+ 1. Container should be created successfully -+ 2. User object should be created successfully -+ 3. User object should be deleted successfully -+ 4. Replication should be set up successfully -+ 5. Replication should complete successfully -+ 6. numSubordinates should be accessible on both instances -+ 7. tombstoneNumSubordinates should be accessible on both instances -+ 8. 
numSubordinates values should match on both instances -+ """ -+ -+ S1 = topo_m2.ms["supplier1"] -+ S2 = topo_m2.ms["supplier2"] -+ container = with_container -+ repl = ReplicationManager(DEFAULT_SUFFIX) -+ tasks = Tasks(S1) -+ -+ log.info("Create some user objects in that container") -+ users1 = UserAccounts(S1, DEFAULT_SUFFIX, rdn=f"ou={container.rdn}") -+ users = {} -+ for uid in range(1001,1010): -+ users[uid] = users1.create_test_user(uid=uid) -+ log.info(f"Created user: {users[uid].dn}") -+ -+ for uid in range(1002,1007,2): -+ users[uid].delete() -+ log.info(f"Removing user: {users[uid].dn}") -+ repl.wait_for_replication(S1, S2) -+ -+ ldif_file = f"{S1.get_ldif_dir()}/export.ldif" -+ log.info(f"Export into {ldif_file}") -+ args = {EXPORT_REPL_INFO: True, -+ TASK_WAIT: True} -+ tasks.exportLDIF(DEFAULT_SUFFIX, None, ldif_file, args) -+ -+ verify_subordinates(S1, container, "before importing") -+ -+ # import the ldif file -+ log.info(f"Import from {ldif_file}") -+ args = {TASK_WAIT: True} -+ tasks.importLDIF(DEFAULT_SUFFIX, None, ldif_file, args) -+ -+ verify_subordinates(S1, container, "after importing") -+ - - if __name__ == '__main__': - # Run isolated - # -s for DEBUG mode - CURRENT_FILE = os.path.realpath(__file__) -- pytest.main("-s %s" % CURRENT_FILE) -\ No newline at end of file -+ pytest.main("-s %s" % CURRENT_FILE) -diff --git a/ldap/servers/plugins/replication/cl5_api.c b/ldap/servers/plugins/replication/cl5_api.c -index 1d62aa020..a5e43c87d 100644 ---- a/ldap/servers/plugins/replication/cl5_api.c -+++ b/ldap/servers/plugins/replication/cl5_api.c -@@ -3211,7 +3211,7 @@ _cl5EnumConsumerRUV(const ruv_enum_data *element, void *arg) - RUV *ruv; - CSN *csn = NULL; - -- PR_ASSERT(element && element->csn && arg); -+ PR_ASSERT(element && arg); - - ruv = (RUV *)arg; - -diff --git a/ldap/servers/slapd/back-ldbm/db-bdb/bdb_import.c b/ldap/servers/slapd/back-ldbm/db-bdb/bdb_import.c -index 3f7392059..46c80ec3d 100644 ---- a/ldap/servers/slapd/back-ldbm/db-bdb/bdb_import.c -+++ b/ldap/servers/slapd/back-ldbm/db-bdb/bdb_import.c -@@ -30,6 +30,18 @@ static int bdb_ancestorid_create_index(backend *be, ImportJob *job); - static int bdb_ancestorid_default_create_index(backend *be, ImportJob *job); - static int bdb_ancestorid_new_idl_create_index(backend *be, ImportJob *job); - -+/* Helper struct used to compute numsubordinates */ -+ -+typedef struct { -+ backend *be; -+ DB_TXN *txn; -+ const char *attrname; -+ struct attrinfo *ai; -+ dbi_db_t *db; -+ DBC *dbc; -+} subcount_cursor_info_t; -+ -+ - /* Start of definitions for a simple cache using a hash table */ - - typedef struct id2idl -@@ -56,6 +68,69 @@ static int bdb_parentid(backend *be, DB_TXN *txn, ID id, ID *ppid); - static int bdb_check_cache(id2idl_hash *ht); - static IDList *bdb_idl_union_allids(backend *be, struct attrinfo *ai, IDList *a, IDList *b); - -+ -+/********** Code to debug numsubordinates/tombstonenumsubordinates computation **********/ -+ -+#ifdef DEBUG_SUBCOUNT -+#define DEBUG_SUBCOUNT_MSG(msg, ...) { debug_subcount(__FUNCTION__, __LINE__, (msg), __VA_ARGS__); } -+#define DUMP_SUBCOUNT_KEY(msg, key, ret) { debug_subcount(__FUNCTION__, __LINE__, "ret=%d size=%u ulen=%u doff=%u dlen=%u", \ -+ ret, (key).size, (key).ulen, (key).doff, (key).dlen); \ -+ if (ret == 0) hexadump(msg, (key).data, 0, (key).size); \ -+ else if (ret == DB_BUFFER_SMALL) \ -+ hexadump(msg, (key).data, 0, (key).ulen); } -+ -+static void -+debug_subcount(const char *funcname, int line, char *msg, ...) 
-+{ -+ va_list ap; -+ char buff[1024]; -+ va_start(ap, msg); -+ PR_vsnprintf(buff, (sizeof buff), msg, ap); -+ va_end(ap); -+ slapi_log_err(SLAPI_LOG_INFO, (char*)funcname, "DEBUG SUBCOUNT [%d] %s\n", line, buff); -+} -+ -+/* -+ * Dump a memory buffer in hexa and ascii in error log -+ * -+ * addr - The memory buffer address. -+ * len - The memory buffer lenght. -+ */ -+static void -+hexadump(char *msg, const void *addr, size_t offset, size_t len) -+{ -+#define HEXADUMP_TAB 4 -+/* 4 characters per bytes: 2 hexa digits, 1 space and the ascii */ -+#define HEXADUMP_BUF_SIZE (4*16+HEXADUMP_TAB) -+ char hexdigit[] = "0123456789ABCDEF"; -+ -+ const unsigned char *pt = addr; -+ char buff[HEXADUMP_BUF_SIZE+1]; -+ memset (buff, ' ', HEXADUMP_BUF_SIZE); -+ buff[HEXADUMP_BUF_SIZE] = '\0'; -+ while (len > 0) { -+ int dpl; -+ for (dpl = 0; dpl < 16 && len>0; dpl++, len--) { -+ buff[3*dpl] = hexdigit[((*pt) >> 4) & 0xf]; -+ buff[3*dpl+1] = hexdigit[(*pt) & 0xf]; -+ buff[3*16+HEXADUMP_TAB+dpl] = (*pt>=0x20 && *pt<0x7f) ? *pt : '.'; -+ pt++; -+ } -+ for (;dpl < 16; dpl++) { -+ buff[3*dpl] = ' '; -+ buff[3*dpl+1] = ' '; -+ buff[3*16+HEXADUMP_TAB+dpl] = ' '; -+ } -+ slapi_log_err(SLAPI_LOG_INFO, msg, "[0x%08lx] %s\n", offset, buff); -+ offset += 16; -+ } -+} -+#else -+#define DEBUG_SUBCOUNT_MSG(msg, ...) -+#define DUMP_SUBCOUNT_KEY(msg, key, ret) -+#endif -+ -+ - /********** routines to manipulate the entry fifo **********/ - - /* this is pretty bogus -- could be a HUGE amount of memory */ -@@ -994,173 +1069,85 @@ out: - - return ret; - } --/* Update subordinate count in a hint list, given the parent's ID */ --int --bdb_import_subcount_mother_init(import_subcount_stuff *mothers, ID parent_id, size_t count) --{ -- PR_ASSERT(NULL == PL_HashTableLookup(mothers->hashtable, (void *)((uintptr_t)parent_id))); -- PL_HashTableAdd(mothers->hashtable, (void *)((uintptr_t)parent_id), (void *)count); -- return 0; --} - --/* Look for a subordinate count in a hint list, given the parent's ID */ --static int --bdb_import_subcount_mothers_lookup(import_subcount_stuff *mothers, -- ID parent_id, -- size_t *count) -+static void -+bdb_close_subcount_cursor(subcount_cursor_info_t *info) - { -- size_t stored_count = 0; -- -- *count = 0; -- /* Lookup hash table for ID */ -- stored_count = (size_t)PL_HashTableLookup(mothers->hashtable, -- (void *)((uintptr_t)parent_id)); -- /* If present, return the count found */ -- if (0 != stored_count) { -- *count = stored_count; -- return 0; -+ if (info->dbc) { -+ int ret = info->dbc->c_close(info->dbc); -+ if (ret) { -+ char errfunc[60]; -+ snprintf(errfunc, (sizeof errfunc), "%s[%s]", __FUNCTION__, info->attrname); -+ ldbm_nasty(errfunc, sourcefile, 73, ret); -+ } -+ info->dbc = NULL; -+ } -+ if (info->db) { -+ dblayer_release_index_file(info->be, info->ai, info->db); -+ info->db = NULL; -+ info->ai = NULL; - } -- return -1; --} -- --/* Update subordinate count in a hint list, given the parent's ID */ --int --bdb_import_subcount_mother_count(import_subcount_stuff *mothers, ID parent_id) --{ -- size_t stored_count = 0; -- -- /* Lookup the hash table for the target ID */ -- stored_count = (size_t)PL_HashTableLookup(mothers->hashtable, -- (void *)((uintptr_t)parent_id)); -- PR_ASSERT(0 != stored_count); -- /* Increment the count */ -- stored_count++; -- PL_HashTableAdd(mothers->hashtable, (void *)((uintptr_t)parent_id), (void *)stored_count); -- return 0; - } - - static int --bdb_import_update_entry_subcount(backend *be, ID parentid, size_t sub_count, int isencrypted) 
-+bdb_open_subcount_cursor(backend *be, const char *attrname, DB_TXN *txn, subcount_cursor_info_t *info) - { -- ldbm_instance *inst = (ldbm_instance *)be->be_instance_info; -+ char errfunc[60]; -+ DB *db = NULL; - int ret = 0; -- modify_context mc = {0}; -- char value_buffer[22] = {0}; /* enough digits for 2^64 children */ -- struct backentry *e = NULL; -- int isreplace = 0; -- char *numsub_str = numsubordinates; -- -- /* Get hold of the parent */ -- e = id2entry(be, parentid, NULL, &ret); -- if ((NULL == e) || (0 != ret)) { -- ldbm_nasty("bdb_import_update_entry_subcount", sourcefile, 5, ret); -- return (0 == ret) ? -1 : ret; -- } -- /* Lock it (not really required since we're single-threaded here, but -- * let's do it so we can reuse the modify routines) */ -- cache_lock_entry(&inst->inst_cache, e); -- modify_init(&mc, e); -- mc.attr_encrypt = isencrypted; -- sprintf(value_buffer, "%lu", (long unsigned int)sub_count); -- /* If it is a tombstone entry, add tombstonesubordinates instead of -- * numsubordinates. */ -- if (slapi_entry_flag_is_set(e->ep_entry, SLAPI_ENTRY_FLAG_TOMBSTONE)) { -- numsub_str = LDBM_TOMBSTONE_NUMSUBORDINATES_STR; -- } -- /* attr numsubordinates/tombstonenumsubordinates could already exist in -- * the entry, let's check whether it's already there or not */ -- isreplace = (attrlist_find(e->ep_entry->e_attrs, numsub_str) != NULL); -- { -- int op = isreplace ? LDAP_MOD_REPLACE : LDAP_MOD_ADD; -- Slapi_Mods *smods = slapi_mods_new(); - -- slapi_mods_add(smods, op | LDAP_MOD_BVALUES, numsub_str, -- strlen(value_buffer), value_buffer); -- ret = modify_apply_mods(&mc, smods); /* smods passed in */ -- } -- if (0 == ret || LDAP_TYPE_OR_VALUE_EXISTS == ret) { -- /* This will correctly index subordinatecount: */ -- ret = modify_update_all(be, NULL, &mc, NULL); -- if (0 == ret) { -- modify_switch_entries(&mc, be); -- } -+ snprintf(errfunc, (sizeof errfunc), "%s[%s]", __FUNCTION__, attrname); -+ info->attrname = attrname; -+ info->txn = txn; -+ info->be = be; -+ -+ /* Lets get the attrinfo */ -+ ainfo_get(be, (char*)attrname, &info->ai); -+ PR_ASSERT(info->ai); -+ /* Lets get the db instance */ -+ if ((ret = dblayer_get_index_file(be, info->ai, &info->db, 0)) != 0) { -+ if (ret == DBI_RC_NOTFOUND) { -+ bdb_close_subcount_cursor(info); -+ return 0; -+ } -+ ldbm_nasty(errfunc, sourcefile, 70, ret); -+ bdb_close_subcount_cursor(info); -+ return ret; - } -- /* entry is unlocked and returned to the cache in modify_term */ -- modify_term(&mc, be); -- return ret; --} --struct _import_subcount_trawl_info --{ -- struct _import_subcount_trawl_info *next; -- ID id; -- size_t sub_count; --}; --typedef struct _import_subcount_trawl_info import_subcount_trawl_info; -- --static void --bdb_import_subcount_trawl_add(import_subcount_trawl_info **list, ID id) --{ -- import_subcount_trawl_info *new_info = CALLOC(import_subcount_trawl_info); - -- new_info->next = *list; -- new_info->id = id; -- *list = new_info; -+ /* Lets get the cursor */ -+ db = (DB*)(info->db); -+ if ((ret = db->cursor(db, info->txn, &info->dbc, 0)) != 0) { -+ ldbm_nasty(errfunc, sourcefile, 71, ret); -+ bdb_close_subcount_cursor(info); -+ ret = bdb_map_error(__FUNCTION__, ret); -+ } -+ return 0; - } - --static int --bdb_import_subcount_trawl(backend *be, -- import_subcount_trawl_info *trawl_list, -- int isencrypted) -+static bool -+bdb_subcount_is_tombstone(subcount_cursor_info_t *info, DBT *id) - { -- ldbm_instance *inst = (ldbm_instance *)be->be_instance_info; -- ID id = 1; -- int ret = 0; -- import_subcount_trawl_info 
*current = NULL; -- char value_buffer[20]; /* enough digits for 2^64 children */ -- -- /* OK, we do */ -- /* We open id2entry and iterate through it */ -- /* Foreach entry, we check to see if its parentID matches any of the -- * values in the trawl list . If so, we bump the sub count for that -- * parent in the list. -+ /* -+ * Check if record =nstombstone ==> id exists in objectclass index - */ -- while (1) { -- struct backentry *e = NULL; -- -- /* Get the next entry */ -- e = id2entry(be, id, NULL, &ret); -- if ((NULL == e) || (0 != ret)) { -- if (DB_NOTFOUND == ret) { -- break; -- } else { -- ldbm_nasty("bdb_import_subcount_trawl", sourcefile, 8, ret); -- return ret; -- } -- } -- for (current = trawl_list; current != NULL; current = current->next) { -- sprintf(value_buffer, "%lu", (u_long)current->id); -- if (slapi_entry_attr_hasvalue(e->ep_entry, LDBM_PARENTID_STR, value_buffer)) { -- /* If this entry's parent ID matches one we're trawling for, -- * bump its count */ -- current->sub_count++; -- } -- } -- /* Free the entry */ -- CACHE_REMOVE(&inst->inst_cache, e); -- CACHE_RETURN(&inst->inst_cache, &e); -- id++; -- } -- /* Now update the parent entries from the list */ -- for (current = trawl_list; current != NULL; current = current->next) { -- /* Update the parent entry with the correctly counted subcount */ -- ret = bdb_import_update_entry_subcount(be, current->id, -- current->sub_count, isencrypted); -- if (0 != ret) { -- ldbm_nasty("bdb_import_subcount_trawl", sourcefile, 10, ret); -- break; -- } -+ DBT key = {0}; -+ DBC *dbc = info->dbc; -+ int ret; -+ key.flags = DB_DBT_USERMEM; -+ key.data = "=nstombstone" ; -+ key.size = key.ulen = 13; -+ ret = dbc->c_get(dbc, &key, id, DB_GET_BOTH); -+ -+ switch (ret) { -+ case 0: -+ return true; -+ case DB_NOTFOUND: -+ return false; -+ default: -+ ldbm_nasty((char*)__FUNCTION__, sourcefile, 72, ret); -+ return false; - } -- return ret; - } - - /* -@@ -1172,65 +1159,69 @@ bdb_import_subcount_trawl(backend *be, - static int - bdb_update_subordinatecounts(backend *be, ImportJob *job, DB_TXN *txn) - { -- import_subcount_stuff *mothers = job->mothers; -- int isencrypted = job->encrypt; -+ subcount_cursor_info_t c_objectclass = {0}; -+ subcount_cursor_info_t c_entryrdn = {0}; - int started_progress_logging = 0; -+ int isencrypted = job->encrypt; -+ DBT data = {0}; -+ DBT key = {0}; - int key_count = 0; -+ char tmp[11]; -+ char oldkey[11]; -+ ID data_data; -+ int ret2 = 0; - int ret = 0; -- DB *db = NULL; -- DBC *dbc = NULL; -- struct attrinfo *ai = NULL; -- DBT key = {0}; -- dbi_val_t dbikey = {0}; -- DBT data = {0}; -- import_subcount_trawl_info *trawl_list = NULL; -- -- /* Open the parentid index */ -- ainfo_get(be, LDBM_PARENTID_STR, &ai); - -- /* Open the parentid index file */ -- if ((ret = dblayer_get_index_file(be, ai, (dbi_db_t**)&db, DBOPEN_CREATE)) != 0) { -- ldbm_nasty("bdb_update_subordinatecounts", sourcefile, 67, ret); -- return (ret); -+ /* Open cursor on the objectclass index */ -+ ret = bdb_open_subcount_cursor(be, SLAPI_ATTR_OBJECTCLASS, txn, &c_objectclass); -+ if (ret) { -+ if (ret != DBI_RC_NOTFOUND) { -+ /* No database ==> There is nothing to do. 
*/ -+ ldbm_nasty((char*)__FUNCTION__, sourcefile, 61, ret); -+ } -+ return ret; - } -- /* Get a cursor so we can walk through the parentid */ -- ret = db->cursor(db, txn, &dbc, 0); -- if (ret != 0) { -- ldbm_nasty("bdb_update_subordinatecounts", sourcefile, 68, ret); -- dblayer_release_index_file(be, ai, db); -+ /* Open entryrdn index */ -+ /* Open cursor on the entryrdn index */ -+ ret = bdb_open_subcount_cursor(be, LDBM_ENTRYRDN_STR, txn, &c_entryrdn); -+ if (ret) { -+ ldbm_nasty((char*)__FUNCTION__, sourcefile, 62, ret); -+ bdb_close_subcount_cursor(&c_objectclass); - return ret; - } - -- /* Walk along the index */ -- while (1) { -+ key.flags = DB_DBT_USERMEM; -+ key.ulen = sizeof tmp; -+ key.data = tmp; -+ /* Only the first 4 bytes of the data record interrest us */ -+ data.flags = DB_DBT_USERMEM | DB_DBT_PARTIAL; -+ data.ulen = sizeof data_data; -+ data.data = &data_data; -+ data.dlen = sizeof (ID); -+ data.doff = 0; -+ -+ /* Walk along C* keys (usually starting at C1) */ -+ strcpy(tmp, "C"); -+ key.size = 1; -+ ret = c_entryrdn.dbc->c_get(c_entryrdn.dbc, &key, &data, DB_SET_RANGE); -+ -+ while (ret == 0) { - size_t sub_count = 0; -- int found_count = 1; -+ size_t t_sub_count = 0; - ID parentid = 0; - -- /* Foreach key which is an equality key : */ -- data.flags = DB_DBT_MALLOC; -- key.flags = DB_DBT_MALLOC; -- ret = dbc->c_get(dbc, &key, &data, DB_NEXT_NODUP); -- if (NULL != data.data) { -- slapi_ch_free(&(data.data)); -- data.data = NULL; -- } -- if (0 != ret) { -- if (ret != DB_NOTFOUND) { -- ldbm_nasty("bdb_update_subordinatecounts", sourcefile, 62, ret); -- } -- if (NULL != key.data) { -- slapi_ch_free(&(key.data)); -- key.data = NULL; -- } -- break; -- } -+ DUMP_SUBCOUNT_KEY("key:", key, ret); -+ DUMP_SUBCOUNT_KEY("data:", data, ret); - /* check if we need to abort */ - if (job->flags & FLAG_ABORT) { - import_log_notice(job, SLAPI_LOG_ERR, "bdb_update_subordinatecounts", - "numsubordinate generation aborted."); - break; - } -+ if (0 != ret) { -+ ldbm_nasty("bdb_update_subordinatecounts", sourcefile, 63, ret); -+ break; -+ } - /* - * Do an update count - */ -@@ -1241,57 +1232,47 @@ bdb_update_subordinatecounts(backend *be, ImportJob *job, DB_TXN *txn) - key_count); - started_progress_logging = 1; - } -+ if (key.size == 0 || *(char *)key.data != 'C') { -+ /* No more children */ -+ break; -+ } - -- if (*(char *)key.data == EQ_PREFIX) { -- char *idptr = NULL; -- -- /* construct the parent's ID from the key */ -- /* Look for the ID in the hint list supplied by the caller */ -- /* If its there, we know the answer already */ -- idptr = (((char *)key.data) + 1); -- parentid = (ID)atol(idptr); -- PR_ASSERT(0 != parentid); -- ret = bdb_import_subcount_mothers_lookup(mothers, parentid, &sub_count); -- if (0 != ret) { -- IDList *idl = NULL; -- -- /* If it's not, we need to compute it ourselves: */ -- /* Load the IDL matching the key */ -- key.flags = DB_DBT_REALLOC; -- ret = NEW_IDL_NO_ALLID; -- bdb_dbt2dbival(&key, &dbikey, PR_FALSE); -- idl = idl_fetch(be, db, &dbikey, NULL, NULL, &ret); -- bdb_dbival2dbt(&dbikey, &key, PR_TRUE); -- dblayer_value_protect_data(be, &dbikey); -- if ((NULL == idl) || (0 != ret)) { -- ldbm_nasty("bdb_update_subordinatecounts", sourcefile, 4, ret); -- dblayer_release_index_file(be, ai, db); -- return (0 == ret) ? 
-1 : ret; -- } -- /* The number of IDs in the IDL tells us the number of -- * subordinates for the entry */ -- /* Except, the number might be above the allidsthreshold, -- * in which case */ -- if (ALLIDS(idl)) { -- /* We add this ID to the list for which to trawl */ -- bdb_import_subcount_trawl_add(&trawl_list, parentid); -- found_count = 0; -- } else { -- /* We get the count from the IDL */ -- sub_count = idl->b_nids; -- } -- idl_free(&idl); -+ /* construct the parent's ID from the key */ -+ if (key.size >= sizeof tmp) { -+ ldbm_nasty("bdb_update_subordinatecounts", sourcefile, 64, ret); -+ break; -+ } -+ /* Generate expected value for parentid */ -+ tmp[key.size] = 0; -+ parentid = (ID)atol(tmp+1); -+ PR_ASSERT(0 != parentid); -+ strcpy(oldkey,tmp); -+ /* Walk the entries having same key and check if they are tombstone */ -+ do { -+ /* Reorder data_data */ -+ ID old_data_data = data_data; -+ id_internal_to_stored(old_data_data, (char*)&data_data); -+ if (!bdb_subcount_is_tombstone(&c_objectclass, &data)) { -+ sub_count++; -+ } else { -+ t_sub_count++; - } -- /* Did we get the count ? */ -- if (found_count) { -- PR_ASSERT(0 != sub_count); -- /* If so, update the parent now */ -- bdb_import_update_entry_subcount(be, parentid, sub_count, isencrypted); -+ DUMP_SUBCOUNT_KEY("key:", key, ret); -+ DUMP_SUBCOUNT_KEY("data:", data, ret); -+ ret = c_entryrdn.dbc->c_get(c_entryrdn.dbc, &key, &data, DB_NEXT); -+ DUMP_SUBCOUNT_KEY("key:", key, ret); -+ DUMP_SUBCOUNT_KEY("data:", data, ret); -+ if (ret == 0 && key.size < sizeof tmp) { -+ tmp[key.size] = 0; -+ } else { -+ break; - } -- } -- if (NULL != key.data) { -- slapi_ch_free(&(key.data)); -- key.data = NULL; -+ } while (strcmp(key.data, oldkey) == 0); -+ ret2 = import_update_entry_subcount(be, parentid, sub_count, t_sub_count, isencrypted, (dbi_txn_t*)txn); -+ if (ret2) { -+ ret = ret2; -+ ldbm_nasty("bdb_update_subordinatecounts", sourcefile, 65, ret); -+ break; - } - } - if (started_progress_logging) { -@@ -1301,22 +1282,15 @@ bdb_update_subordinatecounts(backend *be, ImportJob *job, DB_TXN *txn) - key_count); - job->numsubordinates = key_count; - } -- -- ret = dbc->c_close(dbc); -- if (0 != ret) { -- ldbm_nasty("bdb_update_subordinatecounts", sourcefile, 6, ret); -- } -- dblayer_release_index_file(be, ai, db); -- -- /* Now see if we need to go trawling through id2entry for the info -- * we need */ -- if (NULL != trawl_list) { -- ret = bdb_import_subcount_trawl(be, trawl_list, isencrypted); -- if (0 != ret) { -- ldbm_nasty("bdb_update_subordinatecounts", sourcefile, 7, ret); -- } -+ if (ret == DB_NOTFOUND || ret == DB_BUFFER_SMALL) { -+ /* No more records or record is the suffix dn -+ * ==> there is no more children to look at -+ */ -+ ret = 0; - } -- return (ret); -+ bdb_close_subcount_cursor(&c_entryrdn); -+ bdb_close_subcount_cursor(&c_objectclass); -+ return ret; - } - - /* Function used to gather a list of indexed attrs */ -@@ -1453,10 +1427,6 @@ bdb_import_free_job(ImportJob *job) - slapi_ch_free((void **)&asabird); - } - job->index_list = NULL; -- if (NULL != job->mothers) { -- import_subcount_stuff_term(job->mothers); -- slapi_ch_free((void **)&job->mothers); -- } - - bdb_back_free_incl_excl(job->include_subtrees, job->exclude_subtrees); - charray_free(job->input_filenames); -@@ -2708,7 +2678,6 @@ bdb_back_ldif2db(Slapi_PBlock *pb) - } - job->starting_ID = 1; - job->first_ID = 1; -- job->mothers = CALLOC(import_subcount_stuff); - - /* how much space should we allocate to index buffering? 
*/ - job->job_index_buffer_size = bdb_import_get_index_buffer_size(); -@@ -2719,7 +2688,6 @@ bdb_back_ldif2db(Slapi_PBlock *pb) - (job->inst->inst_li->li_import_cachesize / 10) + (1024 * 1024); - PR_Unlock(job->inst->inst_li->li_config_mutex); - } -- import_subcount_stuff_init(job->mothers); - - if (job->task != NULL) { - /* count files, use that to track "progress" in cn=tasks */ -diff --git a/ldap/servers/slapd/back-ldbm/db-bdb/bdb_import_threads.c b/ldap/servers/slapd/back-ldbm/db-bdb/bdb_import_threads.c -index 08543b888..11054d438 100644 ---- a/ldap/servers/slapd/back-ldbm/db-bdb/bdb_import_threads.c -+++ b/ldap/servers/slapd/back-ldbm/db-bdb/bdb_import_threads.c -@@ -2244,17 +2244,6 @@ bdb_foreman_do_parentid(ImportJob *job, FifoItem *fi, struct attrinfo *parentid_ - ret = index_addordel_values_ext_sv(be, LDBM_PARENTID_STR, svals, NULL, - entry->ep_id, BE_INDEX_ADD, - NULL, &idl_disposition, NULL); -- if (idl_disposition != IDL_INSERT_NORMAL) { -- char *attr_value = slapi_value_get_berval(svals[0])->bv_val; -- ID parent_id = atol(attr_value); -- -- if (idl_disposition == IDL_INSERT_NOW_ALLIDS) { -- bdb_import_subcount_mother_init(job->mothers, parent_id, -- idl_get_allidslimit(parentid_ai, 0) + 1); -- } else if (idl_disposition == IDL_INSERT_ALLIDS) { -- bdb_import_subcount_mother_count(job->mothers, parent_id); -- } -- } - if (ret != 0) { - import_log_notice(job, SLAPI_LOG_ERR, "bdb_foreman_do_parentid", - "Can't update parentid index (error %d)", ret); -@@ -2989,7 +2978,6 @@ bdb_bulk_import_start(Slapi_PBlock *pb) - job->starting_ID = 1; - job->first_ID = 1; - -- job->mothers = CALLOC(import_subcount_stuff); - /* how much space should we allocate to index buffering? */ - job->job_index_buffer_size = bdb_import_get_index_buffer_size(); - if (job->job_index_buffer_size == 0) { -@@ -2997,7 +2985,6 @@ bdb_bulk_import_start(Slapi_PBlock *pb) - job->job_index_buffer_size = (job->inst->inst_li->li_dbcachesize / 10) + - (1024 * 1024); - } -- import_subcount_stuff_init(job->mothers); - - pthread_mutex_init(&job->wire_lock, NULL); - pthread_cond_init(&job->wire_cv, NULL); -diff --git a/ldap/servers/slapd/back-ldbm/db-bdb/bdb_layer.h b/ldap/servers/slapd/back-ldbm/db-bdb/bdb_layer.h -index 0be6cab49..f66640d2e 100644 ---- a/ldap/servers/slapd/back-ldbm/db-bdb/bdb_layer.h -+++ b/ldap/servers/slapd/back-ldbm/db-bdb/bdb_layer.h -@@ -210,8 +210,6 @@ void bdb_restore_file_update(struct ldbminfo *li, const char *directory); - int bdb_import_file_init(ldbm_instance *inst); - void bdb_import_file_update(ldbm_instance *inst); - int bdb_import_file_check(ldbm_instance *inst); --int bdb_import_subcount_mother_init(import_subcount_stuff *mothers, ID parent_id, size_t count); --int bdb_import_subcount_mother_count(import_subcount_stuff *mothers, ID parent_id); - void bdb_import_configure_index_buffer_size(size_t size); - size_t bdb_import_get_index_buffer_size(void); - int bdb_ldbm_back_wire_import(Slapi_PBlock *pb); -@@ -230,6 +228,7 @@ int bdb_public_in_import(ldbm_instance *inst); - int bdb_dblayer_cursor_iterate(dbi_cursor_t *cursor, - int (*action_cb)(dbi_val_t *key, dbi_val_t *data, void *ctx), - const dbi_val_t *startingkey, void *ctx); -+dbi_error_t bdb_map_error(const char *funcname, int err); - - - /* dbimpl helpers */ -diff --git a/ldap/servers/slapd/back-ldbm/db-mdb/mdb_import.c b/ldap/servers/slapd/back-ldbm/db-mdb/mdb_import.c -index e9c9e73f5..801c7eb20 100644 ---- a/ldap/servers/slapd/back-ldbm/db-mdb/mdb_import.c -+++ b/ldap/servers/slapd/back-ldbm/db-mdb/mdb_import.c -@@ -26,7 +26,16 @@ 
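A note on the technique the rewritten bdb_update_subordinatecounts() above relies on: instead of walking the parentid index and trawling id2entry for ALLIDS parents, it does one ordered scan of the entryrdn index, positioning the cursor on the first key >= "C" with DB_SET_RANGE and stepping with DB_NEXT while keys still carry the child-record prefix, probing the objectclass index with DB_GET_BOTH ("=nstombstone" plus the child ID) to split the totals between numSubordinates and tombstoneNumSubordinates. The sketch below shows just the SET_RANGE prefix walk, reduced to plain Berkeley DB; the database name and keys are made-up stand-ins, not code from this patch.

    #include <stdio.h>
    #include <string.h>
    #include <db.h>

    int
    main(void)
    {
        DB *db = NULL;
        DBC *dbc = NULL;
        int ret;

        if (db_create(&db, NULL, 0) != 0 ||
            db->open(db, NULL, "demo.db", NULL, DB_BTREE, DB_CREATE, 0600) != 0) {
            return 1;
        }

        /* Seed a few records: "C<id>" keys mimic entryrdn child records,
         * "P1" shows the scan stopping at the first non-matching key. */
        const char *pairs[][2] = { {"C1", "child-a"}, {"C2", "child-b"},
                                   {"C3", "child-c"}, {"P1", "parent"} };
        for (size_t i = 0; i < sizeof(pairs) / sizeof(pairs[0]); i++) {
            DBT k = {0}, d = {0};
            k.data = (void *)pairs[i][0];
            k.size = (u_int32_t)strlen(pairs[i][0]);
            d.data = (void *)pairs[i][1];
            d.size = (u_int32_t)strlen(pairs[i][1]);
            db->put(db, NULL, &k, &d, 0);
        }

        if (db->cursor(db, NULL, &dbc, 0) != 0) {
            db->close(db, 0);
            return 1;
        }

        /* Position on the first key >= "C", then walk while the prefix holds. */
        DBT key = {0}, data = {0};
        key.data = "C";
        key.size = 1;
        ret = dbc->c_get(dbc, &key, &data, DB_SET_RANGE);
        while (ret == 0 && key.size > 0 && *(char *)key.data == 'C') {
            printf("%.*s -> %.*s\n", (int)key.size, (char *)key.data,
                   (int)data.size, (char *)data.data);
            ret = dbc->c_get(dbc, &key, &data, DB_NEXT);
        }
        /* ret is DB_NOTFOUND once the cursor runs past the last "C" key;
         * either way the scan ends, mirroring the patch's loop exit. */

        dbc->c_close(dbc);
        db->close(db, 0);
        return 0;
    }

The LMDB variant that follows applies the same pattern with MDB_SET_RANGE / MDB_NEXT / MDB_GET_BOTH.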
- - static char *sourcefile = "dbmdb_import.c"; - --static int dbmdb_import_update_entry_subcount(backend *be, ID parentid, size_t sub_count, int isencrypted, back_txn *txn); -+/* Helper struct used to compute numsubordinates */ -+ -+typedef struct { -+ backend *be; -+ dbi_txn_t *txn; -+ const char *attrname; -+ struct attrinfo *ai; -+ dbi_db_t *db; -+ MDB_cursor *dbc; -+} subcount_cursor_info_t; - - /********** routines to manipulate the entry fifo **********/ - -@@ -126,57 +135,74 @@ dbmdb_import_task_abort(Slapi_Task *task) - - /********** helper functions for importing **********/ - -+static void -+dbmdb_close_subcount_cursor(subcount_cursor_info_t *info) -+{ -+ if (info->dbc) { -+ MDB_CURSOR_CLOSE(info->dbc); -+ info->dbc = NULL; -+ } -+ if (info->db) { -+ dblayer_release_index_file(info->be, info->ai, info->db); -+ info->db = NULL; -+ info->ai = NULL; -+ } -+} -+ - static int --dbmdb_import_update_entry_subcount(backend *be, ID parentid, size_t sub_count, int isencrypted, back_txn *txn) -+dbmdb_open_subcount_cursor(backend *be, const char *attrname, dbi_txn_t *txn, subcount_cursor_info_t *info) - { -- ldbm_instance *inst = (ldbm_instance *)be->be_instance_info; -+ char errfunc[60]; - int ret = 0; -- modify_context mc = {0}; -- char value_buffer[22] = {0}; /* enough digits for 2^64 children */ -- struct backentry *e = NULL; -- int isreplace = 0; -- char *numsub_str = numsubordinates; -- -- /* Get hold of the parent */ -- e = id2entry(be, parentid, txn, &ret); -- if ((NULL == e) || (0 != ret)) { -- slapi_log_err(SLAPI_LOG_ERR, "dbmdb_import_update_entry_subcount", "failed to read entry with ID %d ret=%d\n", -- parentid, ret); -- ldbm_nasty("dbmdb_import_update_entry_subcount", sourcefile, 5, ret); -- return (0 == ret) ? -1 : ret; -- } -- /* Lock it (not really required since we're single-threaded here, but -- * let's do it so we can reuse the modify routines) */ -- cache_lock_entry(&inst->inst_cache, e); -- modify_init(&mc, e); -- mc.attr_encrypt = isencrypted; -- sprintf(value_buffer, "%lu", (long unsigned int)sub_count); -- /* If it is a tombstone entry, add tombstonesubordinates instead of -- * numsubordinates. */ -- if (slapi_entry_flag_is_set(e->ep_entry, SLAPI_ENTRY_FLAG_TOMBSTONE)) { -- numsub_str = LDBM_TOMBSTONE_NUMSUBORDINATES_STR; -- } -- /* attr numsubordinates/tombstonenumsubordinates could already exist in -- * the entry, let's check whether it's already there or not */ -- isreplace = (attrlist_find(e->ep_entry->e_attrs, numsub_str) != NULL); -- { -- int op = isreplace ? 
LDAP_MOD_REPLACE : LDAP_MOD_ADD; -- Slapi_Mods *smods = slapi_mods_new(); - -- slapi_mods_add(smods, op | LDAP_MOD_BVALUES, numsub_str, -- strlen(value_buffer), value_buffer); -- ret = modify_apply_mods(&mc, smods); /* smods passed in */ -- } -- if (0 == ret || LDAP_TYPE_OR_VALUE_EXISTS == ret) { -- /* This will correctly index subordinatecount: */ -- ret = modify_update_all(be, NULL, &mc, txn); -- if (0 == ret) { -- modify_switch_entries(&mc, be); -+ snprintf(errfunc, (sizeof errfunc), "%s[%s]", __FUNCTION__, attrname); -+ info->attrname = attrname; -+ info->txn = txn; -+ info->be = be; -+ -+ /* Lets get the attrinfo */ -+ ainfo_get(be, (char*)attrname, &info->ai); -+ PR_ASSERT(info->ai); -+ /* Lets get the db instance */ -+ if ((ret = dblayer_get_index_file(be, info->ai, &info->db, 0)) != 0) { -+ if (ret == DBI_RC_NOTFOUND) { -+ dbmdb_close_subcount_cursor(info); -+ return 0; - } -+ ldbm_nasty(errfunc, sourcefile, 70, ret); -+ dbmdb_close_subcount_cursor(info); -+ return ret; -+ } -+ -+ /* Lets get the cursor */ -+ if ((ret = MDB_CURSOR_OPEN(TXN(info->txn), DB(info->db), &info->dbc)) != 0) { -+ ldbm_nasty(errfunc, sourcefile, 71, ret); -+ dbmdb_close_subcount_cursor(info); -+ ret = dbmdb_map_error(__FUNCTION__, ret); -+ } -+ return 0; -+} -+ -+static bool -+dbmdb_subcount_is_tombstone(subcount_cursor_info_t *info, MDB_val *id) -+{ -+ /* -+ * Check if record =nstombstone ==> id exists in objectclass index -+ */ -+ MDB_val key = {0}; -+ int ret; -+ key.mv_data = "=nstombstone" ; -+ key.mv_size = 13; -+ ret = MDB_CURSOR_GET(info->dbc, &key, id, MDB_GET_BOTH); -+ switch (ret) { -+ case 0: -+ return true; -+ case MDB_NOTFOUND: -+ return false; -+ default: -+ ldbm_nasty((char*)__FUNCTION__, sourcefile, 72, ret); -+ return false; - } -- /* entry is unlocked and returned to the cache in modify_term */ -- modify_term(&mc, be); -- return ret; - } - - /* -@@ -188,47 +214,56 @@ dbmdb_import_update_entry_subcount(backend *be, ID parentid, size_t sub_count, i - static int - dbmdb_update_subordinatecounts(backend *be, ImportJob *job, dbi_txn_t *txn) - { -- int isencrypted = job->encrypt; -+ subcount_cursor_info_t c_objectclass = {0}; -+ subcount_cursor_info_t c_entryrdn = {0}; - int started_progress_logging = 0; -+ int isencrypted = job->encrypt; -+ MDB_val data = {0}; -+ MDB_val key = {0}; -+ back_txn btxn = {0}; - int key_count = 0; -+ char tmp[11]; -+ int ret2 = 0; - int ret = 0; -- dbmdb_dbi_t*db = NULL; -- MDB_cursor *dbc = NULL; -- struct attrinfo *ai = NULL; -- MDB_val key = {0}; -- MDB_val data = {0}; -- dbmdb_cursor_t cursor = {0}; -- struct ldbminfo *li = (struct ldbminfo*)be->be_database->plg_private; -- back_txn btxn = {0}; -- -- /* Open the parentid index */ -- ainfo_get(be, LDBM_PARENTID_STR, &ai); - -- /* Open the parentid index file */ -- if ((ret = dblayer_get_index_file(be, ai, (dbi_db_t**)&db, DBOPEN_CREATE)) != 0) { -- ldbm_nasty("dbmdb_update_subordinatecounts", sourcefile, 67, ret); -- return (ret); -- } -- /* Get a cursor with r/w txn so we can walk through the parentid */ -- ret = dbmdb_open_cursor(&cursor, MDB_CONFIG(li), db, 0); -- if (ret != 0) { -- ldbm_nasty("dbmdb_update_subordinatecounts", sourcefile, 68, ret); -- dblayer_release_index_file(be, ai, db); -- return ret; -+ PR_ASSERT(txn == NULL); /* Apparently always called with null txn */ -+ /* Need txn / should be rw to update id2entry */ -+ ret = START_TXN(&txn, NULL, 0); -+ if (ret) { -+ ldbm_nasty((char*)__FUNCTION__, sourcefile, 60, ret); -+ return dbmdb_map_error(__FUNCTION__, ret); - } -- dbc = cursor.cur; -- txn = 
cursor.txn; - btxn.back_txn_txn = txn; -- ret = MDB_CURSOR_GET(dbc, &key, &data, MDB_FIRST); -+ /* Open cursor on the objectclass index */ -+ ret = dbmdb_open_subcount_cursor(be, SLAPI_ATTR_OBJECTCLASS, txn, &c_objectclass); -+ if (ret) { -+ if (ret != DBI_RC_NOTFOUND) { -+ /* No database ==> There is nothing to do. */ -+ ldbm_nasty((char*)__FUNCTION__, sourcefile, 61, ret); -+ } -+ return END_TXN(&txn, ret); -+ } -+ /* Open cursor on the entryrdn index */ -+ ret = dbmdb_open_subcount_cursor(be, LDBM_ENTRYRDN_STR, txn, &c_entryrdn); -+ if (ret) { -+ ldbm_nasty((char*)__FUNCTION__, sourcefile, 62, ret); -+ dbmdb_close_subcount_cursor(&c_objectclass); -+ return END_TXN(&txn, ret); -+ } - -- /* Walk along the index */ -- while (ret != MDB_NOTFOUND) { -+ /* Walk along C* keys (usually starting at C1) */ -+ key.mv_data = "C"; -+ key.mv_size = 1; -+ ret = MDB_CURSOR_GET(c_entryrdn.dbc, &key, &data, MDB_SET_RANGE); -+ while (ret == 0) { - size_t sub_count = 0; -+ size_t t_sub_count = 0; -+ MDB_val oldkey = key; - ID parentid = 0; - - if (0 != ret) { - key.mv_data=NULL; -- ldbm_nasty("dbmdb_update_subordinatecounts", sourcefile, 62, ret); -+ ldbm_nasty("dbmdb_update_subordinatecounts", sourcefile, 63, ret); - break; - } - /* check if we need to abort */ -@@ -247,33 +282,50 @@ dbmdb_update_subordinatecounts(backend *be, ImportJob *job, dbi_txn_t *txn) - key_count); - started_progress_logging = 1; - } -+ if (!key.mv_data || *(char *)key.mv_data != 'C') { -+ /* No more children */ -+ break; -+ } - -- if (*(char *)key.mv_data == EQ_PREFIX) { -- char tmp[11]; -- -- /* construct the parent's ID from the key */ -- if (key.mv_size >= sizeof tmp) { -+ /* construct the parent's ID from the key */ -+ if (key.mv_size >= sizeof tmp) { -+ ldbm_nasty("dbmdb_update_subordinatecounts", sourcefile, 64, ret); -+ ret = DBI_RC_INVALID; -+ break; -+ } -+ /* Generate expected value for parentid */ -+ memcpy(tmp, key.mv_data, key.mv_size); -+ tmp[key.mv_size] = 0; -+ parentid = (ID)atol(tmp+1); -+ PR_ASSERT(0 != parentid); -+ oldkey = key; -+ /* Walk the entries having same key and check if they are tombstone */ -+ do { -+ /* Reorder data */ -+ ID old_data, new_data; -+ if (data.mv_size < sizeof old_data) { - ldbm_nasty("dbmdb_update_subordinatecounts", sourcefile, 66, ret); -+ ret = DBI_RC_INVALID; - break; - } -- memcpy(tmp, key.mv_data, key.mv_size); -- tmp[key.mv_size] = 0; -- parentid = (ID)atol(tmp+1); -- PR_ASSERT(0 != parentid); -- /* Get number of records having the same key */ -- ret = mdb_cursor_count(dbc, &sub_count); -- if (ret) { -- ldbm_nasty("dbmdb_update_subordinatecounts", sourcefile, 63, ret); -- break; -- } -- PR_ASSERT(0 != sub_count); -- ret = dbmdb_import_update_entry_subcount(be, parentid, sub_count, isencrypted, &btxn); -- if (ret) { -- ldbm_nasty("dbmdb_update_subordinatecounts", sourcefile, 64, ret); -- break; -+ memcpy(&old_data, data.mv_data, sizeof old_data); -+ id_internal_to_stored(old_data, (char*)&new_data); -+ data.mv_data = &new_data; -+ data.mv_size = sizeof new_data; -+ if (!dbmdb_subcount_is_tombstone(&c_objectclass, &data)) { -+ sub_count++; -+ } else { -+ t_sub_count++; - } -+ ret = MDB_CURSOR_GET(c_entryrdn.dbc, &key, &data, MDB_NEXT); -+ } while (ret == 0 && key.mv_size == oldkey.mv_size && -+ memcmp(key.mv_data, oldkey.mv_data, key.mv_size) == 0); -+ ret2 = import_update_entry_subcount(be, parentid, sub_count, t_sub_count, isencrypted, &btxn); -+ if (ret2) { -+ ret = ret2; -+ ldbm_nasty("dbmdb_update_subordinatecounts", sourcefile, 65, ret); -+ break; - } -- ret = 
MDB_CURSOR_GET(dbc, &key, &data, MDB_NEXT_NODUP); - } - if (started_progress_logging) { - /* Finish what we started... */ -@@ -285,11 +337,13 @@ dbmdb_update_subordinatecounts(backend *be, ImportJob *job, dbi_txn_t *txn) - if (ret == MDB_NOTFOUND) { - ret = 0; - } -- -- dbmdb_close_cursor(&cursor, ret); -- dblayer_release_index_file(be, ai, db); -- -- return (ret); -+ dbmdb_close_subcount_cursor(&c_entryrdn); -+ dbmdb_close_subcount_cursor(&c_objectclass); -+ if (txn) { -+ return END_TXN(&txn, ret); -+ } else { -+ return ret; -+ } - } - - /* Function used to gather a list of indexed attrs */ -@@ -364,10 +418,6 @@ dbmdb_import_free_job(ImportJob *job) - slapi_ch_free((void **)&asabird); - } - job->index_list = NULL; -- if (NULL != job->mothers) { -- import_subcount_stuff_term(job->mothers); -- slapi_ch_free((void **)&job->mothers); -- } - - dbmdb_back_free_incl_excl(job->include_subtrees, job->exclude_subtrees); - -@@ -1245,7 +1295,6 @@ dbmdb_run_ldif2db(Slapi_PBlock *pb) - } - job->starting_ID = 1; - job->first_ID = 1; -- job->mothers = CALLOC(import_subcount_stuff); - - /* how much space should we allocate to index buffering? */ - job->job_index_buffer_size = dbmdb_import_get_index_buffer_size(); -@@ -1256,7 +1305,6 @@ dbmdb_run_ldif2db(Slapi_PBlock *pb) - (job->inst->inst_li->li_import_cachesize / 10) + (1024 * 1024); - PR_Unlock(job->inst->inst_li->li_config_mutex); - } -- import_subcount_stuff_init(job->mothers); - - if (job->task != NULL) { - /* count files, use that to track "progress" in cn=tasks */ -@@ -1383,7 +1431,6 @@ dbmdb_bulk_import_start(Slapi_PBlock *pb) - job->starting_ID = 1; - job->first_ID = 1; - -- job->mothers = CALLOC(import_subcount_stuff); - /* how much space should we allocate to index buffering? */ - job->job_index_buffer_size = dbmdb_import_get_index_buffer_size(); - if (job->job_index_buffer_size == 0) { -@@ -1391,7 +1438,6 @@ dbmdb_bulk_import_start(Slapi_PBlock *pb) - job->job_index_buffer_size = (job->inst->inst_li->li_dbcachesize / 10) + - (1024 * 1024); - } -- import_subcount_stuff_init(job->mothers); - dbmdb_import_init_writer(job, IM_BULKIMPORT); - - pthread_mutex_init(&job->wire_lock, NULL); -diff --git a/ldap/servers/slapd/back-ldbm/db-mdb/mdb_import_threads.c b/ldap/servers/slapd/back-ldbm/db-mdb/mdb_import_threads.c -index aa10d3704..f44b831aa 100644 ---- a/ldap/servers/slapd/back-ldbm/db-mdb/mdb_import_threads.c -+++ b/ldap/servers/slapd/back-ldbm/db-mdb/mdb_import_threads.c -@@ -2900,6 +2900,7 @@ dbmdb_add_op_attrs(ImportJob *job, struct backentry *ep, ID pid) - /* Get rid of attributes you're not allowed to specify yourself */ - slapi_entry_delete_values(ep->ep_entry, hassubordinates, NULL); - slapi_entry_delete_values(ep->ep_entry, numsubordinates, NULL); -+ slapi_entry_delete_values(ep->ep_entry, tombstone_numsubordinates, NULL); - - /* Upgrade DN format only */ - /* Set current parentid to e_aux_attrs to remove it from the index file. */ -diff --git a/ldap/servers/slapd/back-ldbm/db-mdb/mdb_instance.c b/ldap/servers/slapd/back-ldbm/db-mdb/mdb_instance.c -index a794430e2..fbf8a9a47 100644 ---- a/ldap/servers/slapd/back-ldbm/db-mdb/mdb_instance.c -+++ b/ldap/servers/slapd/back-ldbm/db-mdb/mdb_instance.c -@@ -1357,7 +1357,7 @@ int dbmdb_open_cursor(dbmdb_cursor_t *dbicur, dbmdb_ctx_t *ctx, dbmdb_dbi_t *dbi - dbicur->dbi = dbi; - if (ctx->readonly) - flags |= MDB_RDONLY; -- rc = START_TXN(&dbicur->txn, NULL, 0); -+ rc = START_TXN(&dbicur->txn, NULL, ((flags&MDB_RDONLY) ? 
TXNFL_RDONLY : 0)); - if (rc) { - return rc; - } -diff --git a/ldap/servers/slapd/back-ldbm/db-mdb/mdb_layer.h b/ldap/servers/slapd/back-ldbm/db-mdb/mdb_layer.h -index fdc4a9288..c5a72e21c 100644 ---- a/ldap/servers/slapd/back-ldbm/db-mdb/mdb_layer.h -+++ b/ldap/servers/slapd/back-ldbm/db-mdb/mdb_layer.h -@@ -394,8 +394,6 @@ void dbmdb_restore_file_update(struct ldbminfo *li, const char *directory); - int dbmdb_import_file_init(ldbm_instance *inst); - void dbmdb_import_file_update(ldbm_instance *inst); - int dbmdb_import_file_check(ldbm_instance *inst); --int dbmdb_import_subcount_mother_init(import_subcount_stuff *mothers, ID parent_id, size_t count); --int dbmdb_import_subcount_mother_count(import_subcount_stuff *mothers, ID parent_id); - void dbmdb_import_configure_index_buffer_size(size_t size); - size_t dbmdb_import_get_index_buffer_size(void); - int dbmdb_ldbm_back_wire_import(Slapi_PBlock *pb); -diff --git a/ldap/servers/slapd/back-ldbm/import.c b/ldap/servers/slapd/back-ldbm/import.c -index 5a03bb533..f9a20051a 100644 ---- a/ldap/servers/slapd/back-ldbm/import.c -+++ b/ldap/servers/slapd/back-ldbm/import.c -@@ -241,3 +241,62 @@ wait_for_ref_count(Slapi_Counter *inst_ref_count) - /* Done waiting, return the current ref count */ - return slapi_counter_get_value(inst_ref_count); - } -+ -+/********** helper functions for importing **********/ -+ -+int -+import_update_entry_subcount(backend *be, ID parentid, size_t sub_count, size_t t_sub_count, int isencrypted, back_txn *txn) -+{ -+ ldbm_instance *inst = (ldbm_instance *)be->be_instance_info; -+ int ret = 0; -+ modify_context mc = {0}; -+ char value_buffer[22] = {0}; /* enough digits for 2^64 children */ -+ char t_value_buffer[22] = {0}; /* enough digits for 2^64 children */ -+ struct backentry *e = NULL; -+ char *numsub_str = numsubordinates; -+ Slapi_Mods *smods = NULL; -+ static char *sourcefile = "import.c"; -+ -+ /* Get hold of the parent */ -+ e = id2entry(be, parentid, txn, &ret); -+ if ((NULL == e) || (0 != ret)) { -+ slapi_log_err(SLAPI_LOG_ERR, "import_update_entry_subcount", "failed to read entry with ID %d ret=%d\n", -+ parentid, ret); -+ ldbm_nasty("import_update_entry_subcount", sourcefile, 5, ret); -+ return (0 == ret) ? 
-1 : ret; -+ } -+ /* Lock it (not really required since we're single-threaded here, but -+ * let's do it so we can reuse the modify routines) */ -+ cache_lock_entry(&inst->inst_cache, e); -+ modify_init(&mc, e); -+ mc.attr_encrypt = isencrypted; -+ sprintf(value_buffer, "%lu", (long unsigned int)sub_count); -+ sprintf(t_value_buffer, "%lu", (long unsigned int)t_sub_count); -+ smods = slapi_mods_new(); -+ if (sub_count) { -+ slapi_mods_add(smods, LDAP_MOD_REPLACE | LDAP_MOD_BVALUES, numsub_str, -+ strlen(value_buffer), value_buffer); -+ } else { -+ /* Make sure that the attribute is deleted */ -+ slapi_mods_add_mod_values(smods, LDAP_MOD_REPLACE | LDAP_MOD_BVALUES, numsub_str, NULL); -+ } -+ if (t_sub_count) { -+ slapi_mods_add(smods, LDAP_MOD_REPLACE | LDAP_MOD_BVALUES, LDBM_TOMBSTONE_NUMSUBORDINATES_STR, -+ strlen(t_value_buffer), t_value_buffer); -+ } else { -+ /* Make sure that the attribute is deleted */ -+ slapi_mods_add_mod_values(smods, LDAP_MOD_REPLACE | LDAP_MOD_BVALUES, LDBM_TOMBSTONE_NUMSUBORDINATES_STR, NULL); -+ } -+ ret = modify_apply_mods(&mc, smods); /* smods passed in */ -+ if (0 == ret) { -+ /* This will correctly index subordinatecount: */ -+ ret = modify_update_all(be, NULL, &mc, txn); -+ if (0 == ret) { -+ modify_switch_entries(&mc, be); -+ } -+ } -+ /* entry is unlocked and returned to the cache in modify_term */ -+ modify_term(&mc, be); -+ return ret; -+} -+ -diff --git a/ldap/servers/slapd/back-ldbm/import.h b/ldap/servers/slapd/back-ldbm/import.h -index b3f6b7493..e066f4195 100644 ---- a/ldap/servers/slapd/back-ldbm/import.h -+++ b/ldap/servers/slapd/back-ldbm/import.h -@@ -117,7 +117,6 @@ typedef struct _ImportJob - * another pass */ - int uuid_gen_type; /* kind of uuid to generate */ - char *uuid_namespace; /* namespace for name-generated uuid */ -- import_subcount_stuff *mothers; - double average_progress_rate; - double recent_progress_rate; - double cache_hit_ratio; -@@ -209,6 +208,7 @@ struct _import_worker_info - /* import.c */ - void import_log_notice(ImportJob *job, int log_level, char *subsystem, char *format, ...); - int import_main_offline(void *arg); -+int import_update_entry_subcount(backend *be, ID parentid, size_t sub_count, size_t t_sub_count, int isencrypted, back_txn *txn); - - /* ldif2ldbm.c */ - void reset_progress(void); -diff --git a/ldap/servers/slapd/back-ldbm/ldif2ldbm.c b/ldap/servers/slapd/back-ldbm/ldif2ldbm.c -index 403ce6ae8..8b0386489 100644 ---- a/ldap/servers/slapd/back-ldbm/ldif2ldbm.c -+++ b/ldap/servers/slapd/back-ldbm/ldif2ldbm.c -@@ -54,49 +54,6 @@ typedef struct _export_args - /* static functions */ - - --/********** common routines for classic/deluxe import code **********/ -- --static PRIntn --import_subcount_hash_compare_keys(const void *v1, const void *v2) --{ -- return (((ID)((uintptr_t)v1) == (ID)((uintptr_t)v2)) ? 1 : 0); --} -- --static PRIntn --import_subcount_hash_compare_values(const void *v1, const void *v2) --{ -- return (((size_t)v1 == (size_t)v2) ? 
1 : 0); --} -- --static PLHashNumber --import_subcount_hash_fn(const void *id) --{ -- return (PLHashNumber)((uintptr_t)id); --} -- --void --import_subcount_stuff_init(import_subcount_stuff *stuff) --{ -- stuff->hashtable = PL_NewHashTable(IMPORT_SUBCOUNT_HASHTABLE_SIZE, -- import_subcount_hash_fn, import_subcount_hash_compare_keys, -- import_subcount_hash_compare_values, NULL, NULL); --} -- --void --import_subcount_stuff_term(import_subcount_stuff *stuff) --{ -- if (stuff != NULL && stuff->hashtable != NULL) { -- PL_HashTableDestroy(stuff->hashtable); -- } --} -- -- -- --/********** functions for maintaining the subordinate count **********/ -- -- -- -- - /********** ldif2db entry point **********/ - - /* -diff --git a/ldap/servers/slapd/control.c b/ldap/servers/slapd/control.c -index f8744901a..7aeeba885 100644 ---- a/ldap/servers/slapd/control.c -+++ b/ldap/servers/slapd/control.c -@@ -172,7 +172,7 @@ create_sessiontracking_ctrl(const char *session_tracking_id, LDAPControl **sessi - { - BerElement *ctrlber = NULL; - char *undefined_sid = "undefined sid"; -- char *sid; -+ const char *sid; - int rc = 0; - int tag; - LDAPControl *ctrl = NULL; -diff --git a/src/lib389/lib389/__init__.py b/src/lib389/lib389/__init__.py -index 23a20739f..d57a91929 100644 ---- a/src/lib389/lib389/__init__.py -+++ b/src/lib389/lib389/__init__.py -@@ -1803,7 +1803,7 @@ class DirSrv(SimpleLDAPObject, object): - one entry. - @param - entry dn - @param - search scope, in ldap.SCOPE_BASE (default), -- ldap.SCOPE_SUB, ldap.SCOPE_ONE -+ ldap.SCOPE_SUB, ldap.SCOPE_ONELEVEL - @param filterstr - filterstr, default '(objectClass=*)' from - SimpleLDAPObject - @param attrlist - list of attributes to retrieve. eg ['cn', 'uid'] -diff --git a/src/lib389/lib389/_mapped_object.py b/src/lib389/lib389/_mapped_object.py -index 1f9f1556f..37277296d 100644 ---- a/src/lib389/lib389/_mapped_object.py -+++ b/src/lib389/lib389/_mapped_object.py -@@ -200,7 +200,7 @@ class DSLdapObject(DSLogging, DSLint): - if scope == 'base': - search_scope = ldap.SCOPE_BASE - elif scope == 'one': -- search_scope = ldap.SCOPE_ONE -+ search_scope = ldap.SCOPE_ONELEVEL - elif scope == 'subtree': - search_scope = ldap.SCOPE_SUBTREE - return _search_ext_s(self._instance,self._dn, search_scope, filter, --- -2.49.0 - diff --git a/0040-Issue-6910-Fix-latest-coverity-issues.patch b/0040-Issue-6910-Fix-latest-coverity-issues.patch deleted file mode 100644 index c1b2abc..0000000 --- a/0040-Issue-6910-Fix-latest-coverity-issues.patch +++ /dev/null @@ -1,574 +0,0 @@ -From 5d2dc7f78f0a834e46d5665f0c12024da5ddda9e Mon Sep 17 00:00:00 2001 -From: Mark Reynolds -Date: Mon, 28 Jul 2025 17:12:33 -0400 -Subject: [PATCH] Issue 6910 - Fix latest coverity issues - -Description: - -Fix various coverity/ASAN warnings: - -- CID 1618837: Out-of-bounds read (OVERRUN) - bdb_bdbreader_glue.c -- CID 1618831: Resource leak (RESOURCE_LEAK) - bdb_layer.c -- CID 1612606: Resource leak (RESOURCE_LEAK) - log.c -- CID 1611461: Uninitialized pointer read (UNINIT) - repl5_agmt.c -- CID 1568589: Dereference before null check (REVERSE_INULL) - repl5_agmt.c -- CID 1590353: Logically dead code (DEADCODE) - repl5_agmt.c -- CID 1611460: Logically dead code (DEADCODE) - control.c -- CID 1610568: Dereference after null check (FORWARD_NULL) - modify.c -- CID 1591259: Out-of-bounds read (OVERRUN) - memberof.c -- CID 1550231: Unsigned compared against 0 (NO_EFFECT) - memberof_config.c -- CID 1548904: Overflowed constant (INTEGER_OVERFLOW) - ch_malloc.c -- CID 1548902: Overflowed constant 
(INTEGER_OVERFLOW) - dse.lc -- CID 1548900: Overflowed return value (INTEGER_OVERFLOW) - acct_util.c -- CID 1548898: Overflowed constant (INTEGER_OVERFLOW) - parents.c -- CID 1546849: Resource leak (RESOURCE_LEAK) - referint.c -- ASAN - Use after free - automember.c - -Relates: http://github.com/389ds/389-ds-base/issues/6910 - -Reviewed by: progier & spichugi(Thanks!) ---- - ldap/servers/plugins/acctpolicy/acct_util.c | 6 ++- - ldap/servers/plugins/automember/automember.c | 9 ++-- - ldap/servers/plugins/memberof/memberof.c | 15 +++++-- - .../plugins/memberof/memberof_config.c | 11 +++-- - ldap/servers/plugins/referint/referint.c | 4 +- - ldap/servers/plugins/replication/repl5_agmt.c | 41 ++++++++----------- - .../slapd/back-ldbm/db-bdb/bdb_import.c | 5 ++- - .../back-ldbm/db-bdb/bdb_instance_config.c | 3 +- - .../slapd/back-ldbm/db-bdb/bdb_layer.c | 13 ++++-- - ldap/servers/slapd/back-ldbm/parents.c | 4 +- - ldap/servers/slapd/ch_malloc.c | 4 +- - ldap/servers/slapd/control.c | 5 +-- - ldap/servers/slapd/dse.c | 4 +- - ldap/servers/slapd/log.c | 5 ++- - ldap/servers/slapd/modify.c | 6 +-- - ldap/servers/slapd/passwd_extop.c | 2 +- - ldap/servers/slapd/unbind.c | 12 ++++-- - 17 files changed, 88 insertions(+), 61 deletions(-) - -diff --git a/ldap/servers/plugins/acctpolicy/acct_util.c b/ldap/servers/plugins/acctpolicy/acct_util.c -index b27eeaff1..7735d10e6 100644 ---- a/ldap/servers/plugins/acctpolicy/acct_util.c -+++ b/ldap/servers/plugins/acctpolicy/acct_util.c -@@ -17,7 +17,7 @@ Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301, USA. - Contributors: - Hewlett-Packard Development Company, L.P. - --Copyright (C) 2021 Red Hat, Inc. -+Copyright (C) 2025 Red Hat, Inc. - ******************************************************************************/ - - #include -@@ -248,6 +248,10 @@ gentimeToEpochtime(char *gentimestr) - - /* Turn tm object into local epoch time */ - epochtime = mktime(&t); -+ if (epochtime == (time_t) -1) { -+ /* mktime failed */ -+ return 0; -+ } - - /* Turn local epoch time into GMT epoch time */ - epochtime -= zone_offset; -diff --git a/ldap/servers/plugins/automember/automember.c b/ldap/servers/plugins/automember/automember.c -index f900db7f2..9eade495e 100644 ---- a/ldap/servers/plugins/automember/automember.c -+++ b/ldap/servers/plugins/automember/automember.c -@@ -1,5 +1,5 @@ - /** BEGIN COPYRIGHT BLOCK -- * Copyright (C) 2022 Red Hat, Inc. -+ * Copyright (C) 2025 Red Hat, Inc. - * All rights reserved. - * - * License: GPL (version 3 or any later version). -@@ -1756,9 +1756,10 @@ automember_update_member_value(Slapi_Entry *member_e, const char *group_dn, char - - mod_pb = slapi_pblock_new(); - /* Do a single mod with error overrides for DEL/ADD */ -- result = slapi_single_modify_internal_override(mod_pb, slapi_sdn_new_dn_byval(group_dn), mods, -- automember_get_plugin_id(), 0); -- -+ Slapi_DN *sdn = slapi_sdn_new_normdn_byref(group_dn); -+ result = slapi_single_modify_internal_override(mod_pb, sdn, mods, -+ automember_get_plugin_id(), 0); -+ slapi_sdn_free(&sdn); - if(add){ - if (result != LDAP_SUCCESS) { - slapi_log_err(SLAPI_LOG_ERR, AUTOMEMBER_PLUGIN_SUBSYSTEM, -diff --git a/ldap/servers/plugins/memberof/memberof.c b/ldap/servers/plugins/memberof/memberof.c -index 073d8d938..cfda977f0 100644 ---- a/ldap/servers/plugins/memberof/memberof.c -+++ b/ldap/servers/plugins/memberof/memberof.c -@@ -1,5 +1,5 @@ - /** BEGIN COPYRIGHT BLOCK -- * Copyright (C) 2021 Red Hat, Inc. -+ * Copyright (C) 2025 Red Hat, Inc. - * All rights reserved. 
- * - * License: GPL (version 3 or any later version). -@@ -1655,6 +1655,7 @@ memberof_call_foreach_dn(Slapi_PBlock *pb __attribute__((unused)), Slapi_DN *sdn - /* We already did the search for this backend, don't - * do it again when we fall through */ - do_suffix_search = PR_FALSE; -+ slapi_pblock_init(search_pb); - } - } - } else if (!all_backends) { -@@ -3763,6 +3764,10 @@ memberof_replace_list(Slapi_PBlock *pb, MemberOfConfig *config, Slapi_DN *group_ - - pre_index++; - } else { -+ if (pre_index >= pre_total || post_index >= post_total) { -+ /* Don't overrun pre_array/post_array */ -+ break; -+ } - /* decide what to do */ - int cmp = memberof_compare( - config, -@@ -4453,10 +4458,12 @@ memberof_add_memberof_attr(LDAPMod **mods, const char *dn, char *add_oc) - - while (1) { - slapi_pblock_init(mod_pb); -- -+ Slapi_DN *sdn = slapi_sdn_new_normdn_byref(dn); - /* Internal mod with error overrides for DEL/ADD */ -- rc = slapi_single_modify_internal_override(mod_pb, slapi_sdn_new_normdn_byref(dn), single_mod, -- memberof_get_plugin_id(), SLAPI_OP_FLAG_BYPASS_REFERRALS); -+ rc = slapi_single_modify_internal_override(mod_pb, sdn, single_mod, -+ memberof_get_plugin_id(), -+ SLAPI_OP_FLAG_BYPASS_REFERRALS); -+ slapi_sdn_free(&sdn); - if (rc == LDAP_OBJECT_CLASS_VIOLATION) { - if (!add_oc || added_oc) { - /* -diff --git a/ldap/servers/plugins/memberof/memberof_config.c b/ldap/servers/plugins/memberof/memberof_config.c -index 1e83ba6e0..e4da351d9 100644 ---- a/ldap/servers/plugins/memberof/memberof_config.c -+++ b/ldap/servers/plugins/memberof/memberof_config.c -@@ -1,5 +1,5 @@ - /** BEGIN COPYRIGHT BLOCK -- * Copyright (C) 2021 Red Hat, Inc. -+ * Copyright (C) 2025 Red Hat, Inc. - * All rights reserved. - * - * License: GPL (version 3 or any later version). -@@ -570,21 +570,24 @@ memberof_apply_config(Slapi_PBlock *pb __attribute__((unused)), - if (num_groupattrs > 1) { - size_t bytes_out = 0; - size_t filter_str_len = groupattr_name_len + (num_groupattrs * 4) + 4; -+ int32_t rc = 0; - - /* Allocate enough space for the filter */ - filter_str = slapi_ch_malloc(filter_str_len); - - /* Add beginning of filter. */ -- bytes_out = snprintf(filter_str, filter_str_len - bytes_out, "(|"); -- if (bytes_out<0) { -+ rc = snprintf(filter_str, filter_str_len - bytes_out, "(|"); -+ if (rc < 0) { - slapi_log_err(SLAPI_LOG_ERR, MEMBEROF_PLUGIN_SUBSYSTEM, "snprintf unexpectly failed in memberof_apply_config.\n"); - *returncode = LDAP_UNWILLING_TO_PERFORM; - goto done; -+ } else { -+ bytes_out = rc; - } - - /* Add filter section for each groupattr. */ - for (size_t i=0; theConfig.groupattrs && theConfig.groupattrs[i]; i++) { -- size_t bytes_read = snprintf(filter_str + bytes_out, filter_str_len - bytes_out, "(%s=*)", theConfig.groupattrs[i]); -+ int32_t bytes_read = snprintf(filter_str + bytes_out, filter_str_len - bytes_out, "(%s=*)", theConfig.groupattrs[i]); - if (bytes_read<0) { - slapi_log_err(SLAPI_LOG_ERR, MEMBEROF_PLUGIN_SUBSYSTEM, "snprintf unexpectly failed in memberof_apply_config.\n"); - *returncode = LDAP_UNWILLING_TO_PERFORM; -diff --git a/ldap/servers/plugins/referint/referint.c b/ldap/servers/plugins/referint/referint.c -index 5d7f9e5dd..5746b913f 100644 ---- a/ldap/servers/plugins/referint/referint.c -+++ b/ldap/servers/plugins/referint/referint.c -@@ -1,6 +1,6 @@ - /** BEGIN COPYRIGHT BLOCK - * Copyright (C) 2001 Sun Microsystems, Inc. Used by permission. -- * Copyright (C) 2021 Red Hat, Inc. -+ * Copyright (C) 2025 Red Hat, Inc. - * All rights reserved. 
- * - * License: GPL (version 3 or any later version). -@@ -1499,6 +1499,8 @@ referint_thread_func(void *arg __attribute__((unused))) - slapi_sdn_free(&sdn); - continue; - } -+ -+ slapi_sdn_free(&tmpsuperior); - if (!strcasecmp(ptoken, "NULL")) { - tmpsuperior = NULL; - } else { -diff --git a/ldap/servers/plugins/replication/repl5_agmt.c b/ldap/servers/plugins/replication/repl5_agmt.c -index 0a81167b7..eed97578e 100644 ---- a/ldap/servers/plugins/replication/repl5_agmt.c -+++ b/ldap/servers/plugins/replication/repl5_agmt.c -@@ -1,6 +1,6 @@ - /** BEGIN COPYRIGHT BLOCK - * Copyright (C) 2001 Sun Microsystems, Inc. Used by permission. -- * Copyright (C) 2021 Red Hat, Inc. -+ * Copyright (C) 2025 Red Hat, Inc. - * All rights reserved. - * - * License: GPL (version 3 or any later version). -@@ -202,7 +202,7 @@ agmt_init_session_id(Repl_Agmt *ra) - char *host = NULL; /* e.g. localhost.domain */ - char port[10]; /* e.g. 389 */ - char sport[10]; /* e.g. 636 */ -- char *hash_in; -+ char *hash_in = NULL; - int32_t max_str_sid = SESSION_ID_STR_SZ - 4; - - if (ra == NULL) { -@@ -2718,31 +2718,26 @@ agmt_update_init_status(Repl_Agmt *ra) - mod_idx++; - } - -- if (nb_mods) { -- /* it is ok to release the lock here because we are done with the agreement data. -- we have to do it before issuing the modify operation because it causes -- agmtlist_notify_all to be called which uses the same lock - hence the deadlock */ -- PR_Unlock(ra->lock); -- -- pb = slapi_pblock_new(); -- mods[nb_mods] = NULL; -+ /* it is ok to release the lock here because we are done with the agreement data. -+ we have to do it before issuing the modify operation because it causes -+ agmtlist_notify_all to be called which uses the same lock - hence the deadlock */ -+ PR_Unlock(ra->lock); - -- slapi_modify_internal_set_pb_ext(pb, ra->dn, mods, NULL, NULL, -- repl_get_plugin_identity(PLUGIN_MULTISUPPLIER_REPLICATION), 0); -- slapi_modify_internal_pb(pb); -+ pb = slapi_pblock_new(); -+ mods[nb_mods] = NULL; - -- slapi_pblock_get(pb, SLAPI_PLUGIN_INTOP_RESULT, &rc); -- if (rc != LDAP_SUCCESS && rc != LDAP_NO_SUCH_ATTRIBUTE) { -- slapi_log_err(SLAPI_LOG_ERR, repl_plugin_name, "agmt_update_consumer_ruv - " -- "%s: agmt_update_consumer_ruv: " -- "failed to update consumer's RUV; LDAP error - %d\n", -- ra->long_name, rc); -- } -+ slapi_modify_internal_set_pb_ext(pb, ra->dn, mods, NULL, NULL, -+ repl_get_plugin_identity(PLUGIN_MULTISUPPLIER_REPLICATION), 0); -+ slapi_modify_internal_pb(pb); - -- slapi_pblock_destroy(pb); -- } else { -- PR_Unlock(ra->lock); -+ slapi_pblock_get(pb, SLAPI_PLUGIN_INTOP_RESULT, &rc); -+ if (rc != LDAP_SUCCESS && rc != LDAP_NO_SUCH_ATTRIBUTE) { -+ slapi_log_err(SLAPI_LOG_ERR, repl_plugin_name, "agmt_update_consumer_ruv - " -+ "%s: agmt_update_consumer_ruv: failed to update consumer's RUV; LDAP error - %d\n", -+ ra->long_name, rc); - } -+ -+ slapi_pblock_destroy(pb); - slapi_ch_free((void **)&mods); - slapi_mod_done(&smod_start_time); - slapi_mod_done(&smod_end_time); -diff --git a/ldap/servers/slapd/back-ldbm/db-bdb/bdb_import.c b/ldap/servers/slapd/back-ldbm/db-bdb/bdb_import.c -index 46c80ec3d..0127bf2f9 100644 ---- a/ldap/servers/slapd/back-ldbm/db-bdb/bdb_import.c -+++ b/ldap/servers/slapd/back-ldbm/db-bdb/bdb_import.c -@@ -1,5 +1,5 @@ - /** BEGIN COPYRIGHT BLOCK -- * Copyright (C) 2020 Red Hat, Inc. -+ * Copyright (C) 2025 Red Hat, Inc. - * All rights reserved. - * - * License: GPL (version 3 or any later version). 
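One of the fixed warning classes deserves a note: in the memberof_config.c hunk earlier in this patch, snprintf()'s return value was stored straight into a size_t, so the subsequent "bytes_out < 0" test could never fire (CID 1550231). The general rule is to capture the return as an int, reject negative and truncated results, and only then widen into the unsigned running offset. A minimal standalone sketch of that pattern follows; the attribute list is made up, and this is not the plugin code.

    #include <stdio.h>
    #include <stdlib.h>
    #include <string.h>

    int
    main(void)
    {
        const char *attrs[] = { "member", "uniqueMember", NULL };
        size_t len = 0, cap = 64;
        char *filter = malloc(cap);
        int rc;

        if (filter == NULL)
            return 1;
        rc = snprintf(filter, cap, "(|");
        if (rc < 0 || (size_t)rc >= cap)    /* test as int before widening */
            goto fail;
        len = (size_t)rc;
        for (size_t i = 0; attrs[i] != NULL; i++) {
            rc = snprintf(filter + len, cap - len, "(%s=*)", attrs[i]);
            if (rc < 0 || (size_t)rc >= cap - len)
                goto fail;
            len += (size_t)rc;
        }
        rc = snprintf(filter + len, cap - len, ")");
        if (rc < 0 || (size_t)rc >= cap - len)
            goto fail;
        printf("%s\n", filter);    /* (|(member=*)(uniqueMember=*)) */
        free(filter);
        return 0;
    fail:
        free(filter);
        return 1;
    }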
-@@ -947,6 +947,7 @@ bdb_ancestorid_new_idl_create_index(backend *be, ImportJob *job) - EQ_PREFIX, (u_long)id); - key.size++; /* include the null terminator */ - ret = NEW_IDL_NO_ALLID; -+ idl_free(&children); - children = idl_fetch(be, db_pid, &key, txn, ai_pid, &ret); - if (ret != 0) { - ldbm_nasty("bdb_ancestorid_new_idl_create_index", sourcefile, 13070, ret); -@@ -957,6 +958,7 @@ bdb_ancestorid_new_idl_create_index(backend *be, ImportJob *job) - if (job->flags & FLAG_ABORT) { - import_log_notice(job, SLAPI_LOG_ERR, "bdb_ancestorid_new_idl_create_index", - "ancestorid creation aborted."); -+ idl_free(&children); - ret = -1; - break; - } -@@ -1290,6 +1292,7 @@ bdb_update_subordinatecounts(backend *be, ImportJob *job, DB_TXN *txn) - } - bdb_close_subcount_cursor(&c_entryrdn); - bdb_close_subcount_cursor(&c_objectclass); -+ - return ret; - } - -diff --git a/ldap/servers/slapd/back-ldbm/db-bdb/bdb_instance_config.c b/ldap/servers/slapd/back-ldbm/db-bdb/bdb_instance_config.c -index bb515a23f..44a624fde 100644 ---- a/ldap/servers/slapd/back-ldbm/db-bdb/bdb_instance_config.c -+++ b/ldap/servers/slapd/back-ldbm/db-bdb/bdb_instance_config.c -@@ -1,5 +1,5 @@ - /** BEGIN COPYRIGHT BLOCK -- * Copyright (C) 2020 Red Hat, Inc. -+ * Copyright (C) 2025 Red Hat, Inc. - * All rights reserved. - * - * License: GPL (version 3 or any later version). -@@ -261,6 +261,7 @@ bdb_instance_cleanup(struct ldbm_instance *inst) - if (inst_dirp && *inst_dir) { - return_value = env->remove(env, inst_dirp, 0); - } else { -+ slapi_ch_free((void **)&env); - return_value = -1; - } - if (return_value == EBUSY) { -diff --git a/ldap/servers/slapd/back-ldbm/db-bdb/bdb_layer.c b/ldap/servers/slapd/back-ldbm/db-bdb/bdb_layer.c -index 53f1cde69..b1e44a919 100644 ---- a/ldap/servers/slapd/back-ldbm/db-bdb/bdb_layer.c -+++ b/ldap/servers/slapd/back-ldbm/db-bdb/bdb_layer.c -@@ -1,5 +1,5 @@ - /** BEGIN COPYRIGHT BLOCK -- * Copyright (C) 2023 Red Hat, Inc. -+ * Copyright (C) 2025 Red Hat, Inc. - * All rights reserved. - * - * License: GPL (version 3 or any later version). -@@ -2027,9 +2027,13 @@ bdb_pre_close(struct ldbminfo *li) - conf = (bdb_config *)li->li_dblayer_config; - bdb_db_env *pEnv = (bdb_db_env *)priv->dblayer_env; - -+ if (pEnv == NULL) { -+ return; -+ } -+ - pthread_mutex_lock(&pEnv->bdb_thread_count_lock); - -- if (conf->bdb_stop_threads || !pEnv) { -+ if (conf->bdb_stop_threads) { - /* already stopped. do nothing... */ - goto timeout_escape; - } -@@ -2203,6 +2207,7 @@ bdb_remove_env(struct ldbminfo *li) - } - if (NULL == li) { - slapi_log_err(SLAPI_LOG_ERR, "bdb_remove_env", "No ldbm info is given\n"); -+ slapi_ch_free((void **)&env); - return -1; - } - -@@ -2212,10 +2217,11 @@ bdb_remove_env(struct ldbminfo *li) - if (rc) { - slapi_log_err(SLAPI_LOG_ERR, - "bdb_remove_env", "Failed to remove DB environment files. 
" -- "Please remove %s/__db.00# (# is 1 through 6)\n", -+ "Please remove %s/__db.00# (# is 1 through 6)\n", - home_dir); - } - } -+ slapi_ch_free((void **)&env); - return rc; - } - -@@ -6341,6 +6347,7 @@ bdb_back_ctrl(Slapi_Backend *be, int cmd, void *info) - db->close(db, 0); - rc = bdb_db_remove_ex((bdb_db_env *)priv->dblayer_env, path, NULL, PR_TRUE); - inst->inst_changelog = NULL; -+ slapi_ch_free_string(&path); - slapi_ch_free_string(&instancedir); - } - } -diff --git a/ldap/servers/slapd/back-ldbm/parents.c b/ldap/servers/slapd/back-ldbm/parents.c -index 31107591e..52c665ca4 100644 ---- a/ldap/servers/slapd/back-ldbm/parents.c -+++ b/ldap/servers/slapd/back-ldbm/parents.c -@@ -1,6 +1,6 @@ - /** BEGIN COPYRIGHT BLOCK - * Copyright (C) 2001 Sun Microsystems, Inc. Used by permission. -- * Copyright (C) 2005 Red Hat, Inc. -+ * Copyright (C) 2025 Red Hat, Inc. - * All rights reserved. - * - * License: GPL (version 3 or any later version). -@@ -123,7 +123,7 @@ parent_update_on_childchange(modify_context *mc, int op, size_t *new_sub_count) - /* Now compute the new value */ - if ((PARENTUPDATE_ADD == op) || (PARENTUPDATE_RESURECT == op)) { - current_sub_count++; -- } else { -+ } else if (current_sub_count > 0) { - current_sub_count--; - } - -diff --git a/ldap/servers/slapd/ch_malloc.c b/ldap/servers/slapd/ch_malloc.c -index cbab1d170..27ed546a5 100644 ---- a/ldap/servers/slapd/ch_malloc.c -+++ b/ldap/servers/slapd/ch_malloc.c -@@ -1,6 +1,6 @@ - /** BEGIN COPYRIGHT BLOCK - * Copyright (C) 2001 Sun Microsystems, Inc. Used by permission. -- * Copyright (C) 2005 Red Hat, Inc. -+ * Copyright (C) 2025 Red Hat, Inc. - * All rights reserved. - * - * License: GPL (version 3 or any later version). -@@ -254,7 +254,7 @@ slapi_ch_bvecdup(struct berval **v) - ++i; - newberval = (struct berval **)slapi_ch_malloc((i + 1) * sizeof(struct berval *)); - newberval[i] = NULL; -- while (i-- > 0) { -+ while (i > 0 && i-- > 0) { - newberval[i] = slapi_ch_bvdup(v[i]); - } - } -diff --git a/ldap/servers/slapd/control.c b/ldap/servers/slapd/control.c -index 7aeeba885..d661dc6e1 100644 ---- a/ldap/servers/slapd/control.c -+++ b/ldap/servers/slapd/control.c -@@ -174,7 +174,6 @@ create_sessiontracking_ctrl(const char *session_tracking_id, LDAPControl **sessi - char *undefined_sid = "undefined sid"; - const char *sid; - int rc = 0; -- int tag; - LDAPControl *ctrl = NULL; - - if (session_tracking_id) { -@@ -183,9 +182,7 @@ create_sessiontracking_ctrl(const char *session_tracking_id, LDAPControl **sessi - sid = undefined_sid; - } - ctrlber = ber_alloc(); -- tag = ber_printf( ctrlber, "{nnno}", sid, strlen(sid)); -- if (rc == LBER_ERROR) { -- tag = -1; -+ if ((rc = ber_printf( ctrlber, "{nnno}", sid, strlen(sid)) == LBER_ERROR)) { - goto done; - } - slapi_build_control(LDAP_CONTROL_X_SESSION_TRACKING, ctrlber, 0, &ctrl); -diff --git a/ldap/servers/slapd/dse.c b/ldap/servers/slapd/dse.c -index b788054db..bec3e32f4 100644 ---- a/ldap/servers/slapd/dse.c -+++ b/ldap/servers/slapd/dse.c -@@ -1,6 +1,6 @@ - /** BEGIN COPYRIGHT BLOCK - * Copyright (C) 2001 Sun Microsystems, Inc. Used by permission. -- * Copyright (C) 2005 Red Hat, Inc. -+ * Copyright (C) 2025 Red Hat, Inc. - * All rights reserved. - * - * License: GPL (version 3 or any later version). 
-@@ -637,7 +637,7 @@ dse_updateNumSubordinates(Slapi_Entry *entry, int op) - /* Now compute the new value */ - if (SLAPI_OPERATION_ADD == op) { - current_sub_count++; -- } else { -+ } else if (current_sub_count > 0) { - current_sub_count--; - } - { -diff --git a/ldap/servers/slapd/log.c b/ldap/servers/slapd/log.c -index 91ba23047..a9a5f3b3f 100644 ---- a/ldap/servers/slapd/log.c -+++ b/ldap/servers/slapd/log.c -@@ -1,6 +1,6 @@ - /** BEGIN COPYRIGHT BLOCK - * Copyright (C) 2001 Sun Microsystems, Inc. Used by permission. -- * Copyright (C) 2005-2024 Red Hat, Inc. -+ * Copyright (C) 2025 Red Hat, Inc. - * Copyright (C) 2010 Hewlett-Packard Development Company, L.P. - * All rights reserved. - * -@@ -201,6 +201,7 @@ compress_log_file(char *log_name, int32_t mode) - - if ((source = fopen(log_name, "r")) == NULL) { - /* Failed to open log file */ -+ /* coverity[leaked_storage] gzclose does close FD */ - gzclose(outfile); - return -1; - } -@@ -211,11 +212,13 @@ compress_log_file(char *log_name, int32_t mode) - if (bytes_written == 0) - { - fclose(source); -+ /* coverity[leaked_storage] gzclose does close FD */ - gzclose(outfile); - return -1; - } - bytes_read = fread(buf, 1, LOG_CHUNK, source); - } -+ /* coverity[leaked_storage] gzclose does close FD */ - gzclose(outfile); - fclose(source); - PR_Delete(log_name); /* remove the old uncompressed log */ -diff --git a/ldap/servers/slapd/modify.c b/ldap/servers/slapd/modify.c -index 0a351d46a..9e5bce80b 100644 ---- a/ldap/servers/slapd/modify.c -+++ b/ldap/servers/slapd/modify.c -@@ -1,6 +1,6 @@ - /** BEGIN COPYRIGHT BLOCK - * Copyright (C) 2001 Sun Microsystems, Inc. Used by permission. -- * Copyright (C) 2009 Red Hat, Inc. -+ * Copyright (C) 2025 Red Hat, Inc. - * Copyright (C) 2009, 2010 Hewlett-Packard Development Company, L.P. - * All rights reserved. - * -@@ -498,7 +498,7 @@ slapi_modify_internal_set_pb_ext(Slapi_PBlock *pb, const Slapi_DN *sdn, LDAPMod - * - * Any other errors encountered during the operation will be returned as-is. - */ --int -+int - slapi_single_modify_internal_override(Slapi_PBlock *pb, const Slapi_DN *sdn, LDAPMod **mod, Slapi_ComponentId *plugin_id, int op_flags) - { - int rc = 0; -@@ -512,7 +512,7 @@ slapi_single_modify_internal_override(Slapi_PBlock *pb, const Slapi_DN *sdn, LDA - !pb ? "pb " : "", - !sdn ? "sdn " : "", - !mod ? "mod " : "", -- !mod[0] ? "mod[0] " : ""); -+ !mod || !mod[0] ? "mod[0] " : ""); - - return LDAP_PARAM_ERROR; - } -diff --git a/ldap/servers/slapd/passwd_extop.c b/ldap/servers/slapd/passwd_extop.c -index 69bb3494c..5f05cf74e 100644 ---- a/ldap/servers/slapd/passwd_extop.c -+++ b/ldap/servers/slapd/passwd_extop.c -@@ -1,5 +1,5 @@ - /** BEGIN COPYRIGHT BLOCK -- * Copyright (C) 2005 Red Hat, Inc. -+ * Copyright (C) 2025 Red Hat, Inc. - * All rights reserved. - * - * License: GPL (version 3 or any later version). -diff --git a/ldap/servers/slapd/unbind.c b/ldap/servers/slapd/unbind.c -index fa8cd649f..c4e7a5efd 100644 ---- a/ldap/servers/slapd/unbind.c -+++ b/ldap/servers/slapd/unbind.c -@@ -1,6 +1,6 @@ - /** BEGIN COPYRIGHT BLOCK - * Copyright (C) 2001 Sun Microsystems, Inc. Used by permission. -- * Copyright (C) 2005 Red Hat, Inc. -+ * Copyright (C) 2025 Red Hat, Inc. - * All rights reserved. - * - * License: GPL (version 3 or any later version). 
-@@ -112,8 +112,12 @@ do_unbind(Slapi_PBlock *pb) - /* pass the unbind to all backends */ - be_unbindall(pb_conn, operation); - --free_and_return:; -+free_and_return: - -- /* close the connection to the client */ -- disconnect_server(pb_conn, operation->o_connid, operation->o_opid, SLAPD_DISCONNECT_UNBIND, 0); -+ /* close the connection to the client after refreshing the operation */ -+ slapi_pblock_get(pb, SLAPI_OPERATION, &operation); -+ disconnect_server(pb_conn, -+ operation ? operation->o_connid : -1, -+ operation ? operation->o_opid : -1, -+ SLAPD_DISCONNECT_UNBIND, 0); - } --- -2.49.0 - diff --git a/0041-Issue-6929-Compilation-failure-with-rust-1.89-on-Fed.patch b/0041-Issue-6929-Compilation-failure-with-rust-1.89-on-Fed.patch deleted file mode 100644 index 76e45a7..0000000 --- a/0041-Issue-6929-Compilation-failure-with-rust-1.89-on-Fed.patch +++ /dev/null @@ -1,35 +0,0 @@ -From ea62e862c8ca7e036f7d1e23ec3a27bffbc39bdf Mon Sep 17 00:00:00 2001 -From: Viktor Ashirov -Date: Mon, 11 Aug 2025 13:19:13 +0200 -Subject: [PATCH] Issue 6929 - Compilation failure with rust-1.89 on Fedora ELN - -Bug Description: -The `ValueArrayRefIter` struct has a lifetime parameter `'a`. -But in the `iter` method the return type doesn't specify the lifetime parameter. - -Fix Description: -Make the lifetime explicit. - -Fixes: https://github.com/389ds/389-ds-base/issues/6929 - -Reviewed by: @droideck (Thanks!) ---- - src/slapi_r_plugin/src/value.rs | 2 +- - 1 file changed, 1 insertion(+), 1 deletion(-) - -diff --git a/src/slapi_r_plugin/src/value.rs b/src/slapi_r_plugin/src/value.rs -index 2fd35c808..fec74ac25 100644 ---- a/src/slapi_r_plugin/src/value.rs -+++ b/src/slapi_r_plugin/src/value.rs -@@ -61,7 +61,7 @@ impl ValueArrayRef { - ValueArrayRef { raw_slapi_val } - } - -- pub fn iter(&self) -> ValueArrayRefIter { -+ pub fn iter(&self) -> ValueArrayRefIter<'_> { - ValueArrayRefIter { - idx: 0, - va_ref: &self, --- -2.49.0 - diff --git a/389-ds-base.spec b/389-ds-base.spec index ac0ef81..29fb69b 100644 --- a/389-ds-base.spec +++ b/389-ds-base.spec @@ -60,9 +60,9 @@ ExcludeArch: i686 Summary: 389 Directory Server (%{variant}) Name: 389-ds-base -Version: 3.1.3 +Version: 3.2.0 Release: %{autorelease -n %{?with_asan:-e asan}}%{?dist} -License: GPL-3.0-or-later WITH GPL-3.0-389-ds-base-exception AND (0BSD OR Apache-2.0 OR MIT) AND (Apache-2.0 OR Apache-2.0 WITH LLVM-exception OR MIT) AND (Apache-2.0 OR BSL-1.0) AND (Apache-2.0 OR LGPL-2.1-or-later OR MIT) AND (Apache-2.0 OR MIT OR Zlib) AND (Apache-2.0 OR MIT) AND (MIT OR Apache-2.0) AND Unicode-3.0 AND (MIT OR Unlicense) AND Apache-2.0 AND MIT AND MPL-2.0 AND Zlib +License: GPL-3.0-or-later WITH GPL-3.0-389-ds-base-exception AND (Apache-2.0 OR Apache-2.0 WITH LLVM-exception OR MIT) AND (Apache-2.0 OR LGPL-2.1-or-later OR MIT) AND (Apache-2.0 OR MIT) AND (MIT OR Apache-2.0) AND Unicode-3.0 AND (MIT OR Unlicense) AND Apache-2.0 AND MIT AND MPL-2.0 AND Zlib Conflicts: selinux-policy-base < 3.9.8 Conflicts: freeipa-server < 4.0.3 Obsoletes: %{name} <= 1.4.4 @@ -72,18 +72,15 @@ Obsoletes: %{name}-legacy-tools-debuginfo < 1.4.4.6 Provides: ldif2ldbm >= 0 ##### Bundled cargo crates list - START ##### -Provides: bundled(crate(addr2line)) = 0.24.2 -Provides: bundled(crate(adler2)) = 2.0.1 Provides: bundled(crate(allocator-api2)) = 0.2.21 Provides: bundled(crate(atty)) = 0.2.14 Provides: bundled(crate(autocfg)) = 1.5.0 -Provides: bundled(crate(backtrace)) = 0.3.75 Provides: bundled(crate(base64)) = 0.13.1 -Provides: bundled(crate(bitflags)) = 2.9.1 +Provides: 
bundled(crate(bitflags)) = 2.10.0 Provides: bundled(crate(byteorder)) = 1.5.0 Provides: bundled(crate(cbindgen)) = 0.26.0 -Provides: bundled(crate(cc)) = 1.2.31 -Provides: bundled(crate(cfg-if)) = 1.0.1 +Provides: bundled(crate(cc)) = 1.2.51 +Provides: bundled(crate(cfg-if)) = 1.0.4 Provides: bundled(crate(clap)) = 3.2.25 Provides: bundled(crate(clap_lex)) = 0.2.4 Provides: bundled(crate(concread)) = 0.5.7 @@ -91,84 +88,71 @@ Provides: bundled(crate(crossbeam-epoch)) = 0.9.18 Provides: bundled(crate(crossbeam-queue)) = 0.3.12 Provides: bundled(crate(crossbeam-utils)) = 0.8.21 Provides: bundled(crate(equivalent)) = 1.0.2 -Provides: bundled(crate(errno)) = 0.3.13 +Provides: bundled(crate(errno)) = 0.3.14 Provides: bundled(crate(fastrand)) = 2.3.0 Provides: bundled(crate(fernet)) = 0.1.4 +Provides: bundled(crate(find-msvc-tools)) = 0.1.6 Provides: bundled(crate(foldhash)) = 0.1.5 Provides: bundled(crate(foreign-types)) = 0.3.2 Provides: bundled(crate(foreign-types-shared)) = 0.1.1 -Provides: bundled(crate(getrandom)) = 0.3.3 -Provides: bundled(crate(gimli)) = 0.31.1 -Provides: bundled(crate(hashbrown)) = 0.15.4 +Provides: bundled(crate(getrandom)) = 0.3.4 +Provides: bundled(crate(hashbrown)) = 0.15.5 Provides: bundled(crate(heck)) = 0.4.1 Provides: bundled(crate(hermit-abi)) = 0.1.19 Provides: bundled(crate(indexmap)) = 1.9.3 -Provides: bundled(crate(io-uring)) = 0.7.9 -Provides: bundled(crate(itoa)) = 1.0.15 -Provides: bundled(crate(jobserver)) = 0.1.33 -Provides: bundled(crate(libc)) = 0.2.174 -Provides: bundled(crate(linux-raw-sys)) = 0.9.4 -Provides: bundled(crate(log)) = 0.4.27 +Provides: bundled(crate(itoa)) = 1.0.17 +Provides: bundled(crate(jobserver)) = 0.1.34 +Provides: bundled(crate(libc)) = 0.2.179 +Provides: bundled(crate(linux-raw-sys)) = 0.11.0 +Provides: bundled(crate(log)) = 0.4.29 Provides: bundled(crate(lru)) = 0.13.0 -Provides: bundled(crate(memchr)) = 2.7.5 -Provides: bundled(crate(miniz_oxide)) = 0.8.9 -Provides: bundled(crate(mio)) = 1.0.4 -Provides: bundled(crate(object)) = 0.36.7 +Provides: bundled(crate(memchr)) = 2.7.6 Provides: bundled(crate(once_cell)) = 1.21.3 -Provides: bundled(crate(openssl)) = 0.10.73 +Provides: bundled(crate(openssl)) = 0.10.75 Provides: bundled(crate(openssl-macros)) = 0.1.1 -Provides: bundled(crate(openssl-sys)) = 0.9.109 +Provides: bundled(crate(openssl-sys)) = 0.9.111 Provides: bundled(crate(os_str_bytes)) = 6.6.1 Provides: bundled(crate(paste)) = 0.1.18 Provides: bundled(crate(paste-impl)) = 0.1.18 Provides: bundled(crate(pin-project-lite)) = 0.2.16 Provides: bundled(crate(pkg-config)) = 0.3.32 Provides: bundled(crate(proc-macro-hack)) = 0.5.20+deprecated -Provides: bundled(crate(proc-macro2)) = 1.0.95 -Provides: bundled(crate(quote)) = 1.0.40 +Provides: bundled(crate(proc-macro2)) = 1.0.105 +Provides: bundled(crate(quote)) = 1.0.43 Provides: bundled(crate(r-efi)) = 5.3.0 -Provides: bundled(crate(rustc-demangle)) = 0.1.26 -Provides: bundled(crate(rustix)) = 1.0.8 -Provides: bundled(crate(ryu)) = 1.0.20 -Provides: bundled(crate(serde)) = 1.0.219 -Provides: bundled(crate(serde_derive)) = 1.0.219 -Provides: bundled(crate(serde_json)) = 1.0.142 +Provides: bundled(crate(rustix)) = 1.1.3 +Provides: bundled(crate(serde)) = 1.0.228 +Provides: bundled(crate(serde_core)) = 1.0.228 +Provides: bundled(crate(serde_derive)) = 1.0.228 +Provides: bundled(crate(serde_json)) = 1.0.149 Provides: bundled(crate(shlex)) = 1.3.0 -Provides: bundled(crate(slab)) = 0.4.10 Provides: bundled(crate(smallvec)) = 1.15.1 Provides: bundled(crate(sptr)) = 0.3.2 Provides: 
bundled(crate(strsim)) = 0.10.0 -Provides: bundled(crate(syn)) = 2.0.104 -Provides: bundled(crate(tempfile)) = 3.20.0 +Provides: bundled(crate(syn)) = 2.0.114 +Provides: bundled(crate(tempfile)) = 3.24.0 Provides: bundled(crate(termcolor)) = 1.4.1 Provides: bundled(crate(textwrap)) = 0.16.2 -Provides: bundled(crate(tokio)) = 1.47.1 +Provides: bundled(crate(tokio)) = 1.49.0 Provides: bundled(crate(toml)) = 0.5.11 -Provides: bundled(crate(tracing)) = 0.1.41 -Provides: bundled(crate(tracing-attributes)) = 0.1.30 -Provides: bundled(crate(tracing-core)) = 0.1.34 -Provides: bundled(crate(unicode-ident)) = 1.0.18 +Provides: bundled(crate(tracing)) = 0.1.44 +Provides: bundled(crate(tracing-attributes)) = 0.1.31 +Provides: bundled(crate(tracing-core)) = 0.1.36 +Provides: bundled(crate(unicode-ident)) = 1.0.22 Provides: bundled(crate(uuid)) = 0.8.2 Provides: bundled(crate(vcpkg)) = 0.2.15 -Provides: bundled(crate(wasi)) = 0.14.2+wasi_0.2.4 +Provides: bundled(crate(wasi)) = 0.11.1+wasi_snapshot_preview1 +Provides: bundled(crate(wasip2)) = 1.0.1+wasi_0.2.4 Provides: bundled(crate(winapi)) = 0.3.9 Provides: bundled(crate(winapi-i686-pc-windows-gnu)) = 0.4.0 -Provides: bundled(crate(winapi-util)) = 0.1.9 +Provides: bundled(crate(winapi-util)) = 0.1.11 Provides: bundled(crate(winapi-x86_64-pc-windows-gnu)) = 0.4.0 -Provides: bundled(crate(windows-link)) = 0.1.3 -Provides: bundled(crate(windows-sys)) = 0.60.2 -Provides: bundled(crate(windows-targets)) = 0.53.3 -Provides: bundled(crate(windows_aarch64_gnullvm)) = 0.53.0 -Provides: bundled(crate(windows_aarch64_msvc)) = 0.53.0 -Provides: bundled(crate(windows_i686_gnu)) = 0.53.0 -Provides: bundled(crate(windows_i686_gnullvm)) = 0.53.0 -Provides: bundled(crate(windows_i686_msvc)) = 0.53.0 -Provides: bundled(crate(windows_x86_64_gnu)) = 0.53.0 -Provides: bundled(crate(windows_x86_64_gnullvm)) = 0.53.0 -Provides: bundled(crate(windows_x86_64_msvc)) = 0.53.0 -Provides: bundled(crate(wit-bindgen-rt)) = 0.39.0 -Provides: bundled(crate(zeroize)) = 1.8.1 -Provides: bundled(crate(zeroize_derive)) = 1.4.2 +Provides: bundled(crate(windows-link)) = 0.2.1 +Provides: bundled(crate(windows-sys)) = 0.61.2 +Provides: bundled(crate(wit-bindgen)) = 0.46.0 +Provides: bundled(crate(zeroize)) = 1.8.2 +Provides: bundled(crate(zeroize_derive)) = 1.4.3 +Provides: bundled(crate(zmij)) = 1.0.12 ##### Bundled cargo crates list - END ##### # Attach the buildrequires to the top level package: @@ -297,47 +281,8 @@ Source5: https://fedorapeople.org/groups/389ds/libdb-5.3.28-59.tar.bz2 Source6: vendor-%{version}-1.tar.gz Source7: Cargo-%{version}-1.lock -Patch: 0001-Issue-6822-Backend-creation-cleanup-and-Database-UI-.patch -Patch: 0002-Issue-6852-Move-ds-CLI-tools-back-to-sbin.patch -Patch: 0003-Issue-6663-Fix-NULL-subsystem-crash-in-JSON-error-lo.patch -Patch: 0004-Issue-6829-Update-parametrized-docstring-for-tests.patch -Patch: 0005-Issue-6782-Improve-paged-result-locking.patch -Patch: 0006-Issue-6838-lib389-replica.py-is-using-nonexistent-da.patch -Patch: 0007-Issue-6753-Add-add_exclude_subtree-and-remove_exclud.patch -Patch: 0008-Issue-6857-uiduniq-allow-specifying-match-rules-in-t.patch -Patch: 0009-Issue-6756-CLI-UI-Properly-handle-disabled-NDN-cache.patch -Patch: 0010-Issue-6854-Refactor-for-improved-data-management-685.patch -Patch: 0011-Issue-6850-AddressSanitizer-memory-leak-in-mdb_init.patch -Patch: 0012-Issue-6848-AddressSanitizer-leak-in-do_search.patch -Patch: 0013-Issue-6865-AddressSanitizer-leak-in-agmt_update_init.patch -Patch: 
0014-Issue-6859-str2filter-is-not-fully-applying-matching.patch -Patch: 0015-Issue-6872-compressed-log-rotation-creates-files-wit.patch -Patch: 0016-Issue-6878-Prevent-repeated-disconnect-logs-during-s.patch -Patch: 0017-Issue-6888-Missing-access-JSON-logging-for-TLS-Clien.patch -Patch: 0018-Issue-6829-Update-parametrized-docstring-for-tests.patch -Patch: 0019-Issue-6772-dsconf-Replicas-with-the-consumer-role-al.patch -Patch: 0020-Issue-6893-Log-user-that-is-updated-during-password-.patch -Patch: 0021-Issue-6352-Fix-DeprecationWarning.patch -Patch: 0022-Issue-6880-Fix-ds_logs-test-suite-failure.patch -Patch: 0023-Issue-6901-Update-changelog-trimming-logging.patch -Patch: 0024-Issue-6895-Crash-if-repl-keep-alive-entry-can-not-be.patch -Patch: 0025-Issue-6250-Add-test-for-entryUSN-overflow-on-failed-.patch -Patch: 0026-Issue-6594-Add-test-for-numSubordinates-replication-.patch -Patch: 0027-Issue-6884-Mask-password-hashes-in-audit-logs-6885.patch -Patch: 0028-Issue-6897-Fix-disk-monitoring-test-failures-and-imp.patch -Patch: 0029-Issue-6778-Memory-leak-in-roles_cache_create_object_.patch -Patch: 0030-Issue-6901-Update-changelog-trimming-logging-fix-tes.patch -Patch: 0031-Issue-6181-RFE-Allow-system-to-manage-uid-gid-at-sta.patch -Patch: 0032-Issue-6468-CLI-Fix-default-error-log-level.patch -Patch: 0033-Issue-6768-ns-slapd-crashes-when-a-referral-is-added.patch -Patch: 0034-Issues-6913-6886-6250-Adjust-xfail-marks-6914.patch -Patch: 0035-Issue-6875-Fix-dsidm-tests.patch -Patch: 0036-Issue-6519-Add-basic-dsidm-account-tests.patch -Patch: 0037-Issue-6940-dsconf-monitor-server-fails-with-ldapi-du.patch -Patch: 0038-Issue-6936-Make-user-subtree-policy-creation-idempot.patch -Patch: 0039-Issue-6919-numSubordinates-tombstoneNumSubordinates-.patch -Patch: 0040-Issue-6910-Fix-latest-coverity-issues.patch -Patch: 0041-Issue-6929-Compilation-failure-with-rust-1.89-on-Fed.patch +Patch: 0001-Issue-7096-During-replication-online-total-init-the-.patch +Patch: 0002-Issue-Revise-paged-result-search-locking.patch %description 389 Directory Server is an LDAPv3 compliant server. 
The base package includes @@ -470,8 +415,12 @@ cd src/lib389 %prep %autosetup -p1 -n %{name}-%{version} rm -rf vendor +%if %{defined SOURCE6} tar xzf %{SOURCE6} +%endif +%if %{defined SOURCE7} cp %{SOURCE7} src/Cargo.lock +%endif %if %{with bundle_jemalloc} %setup -q -n %{name}-%{version} -T -D -b 3 diff --git a/main.fmf b/main.fmf index 338f547..6277251 100644 --- a/main.fmf +++ b/main.fmf @@ -10,7 +10,7 @@ package: [389-ds-base, git, pytest] - name: clone repo how: shell - script: git clone -b 389-ds-base-3.0 https://github.com/389ds/389-ds-base /root/ds + script: git clone -b 389-ds-base-3.2.0 https://github.com/389ds/389-ds-base /root/ds /test: /upstream_basic: test: pytest -v /root/ds/dirsrvtests/tests/suites/basic/basic_test.py diff --git a/sources b/sources index 519bf20..afce808 100644 --- a/sources +++ b/sources @@ -1,5 +1,5 @@ SHA512 (jemalloc-5.3.0.tar.bz2) = 22907bb052096e2caffb6e4e23548aecc5cc9283dce476896a2b1127eee64170e3562fa2e7db9571298814a7a2c7df6e8d1fbe152bd3f3b0c1abec22a2de34b1 SHA512 (libdb-5.3.28-59.tar.bz2) = 731a434fa2e6487ebb05c458b0437456eb9f7991284beb08cb3e21931e23bdeddddbc95bfabe3a2f9f029fe69cd33a2d4f0f5ce6a9811e9c3b940cb6fde4bf79 -SHA512 (389-ds-base-3.1.3.tar.bz2) = bd15c29dba5209ed828a2534e51fd000fdd5d32862fd07ea73339e73489b3c79f1991c91592c75dbb67384c696a03c82378f156bbea594e2e17421c95ca4c6be -SHA512 (Cargo-3.1.3-1.lock) = ea6db252e49de8aa2fe165f5cc773dc2eb227100d56953a36ca062680a3fc54870a961b05aaac1f7a761c69f3685cc8a7be474ac92377a1219c293fd1117f491 -SHA512 (vendor-3.1.3-1.tar.gz) = bf7f775da482a0164b5192e60cc335f32c65edf120ab94336835d98b2ea769eb116c808d06376e8ececb96e617194ec3febebf375821657e3d4751d9d8a0cf3c +SHA512 (389-ds-base-3.2.0.tar.bz2) = 9ff6aa56b30863c619f4f324344dca72cc883236bfe8d94520e8469d9e306f54b373ee2504eda18dcb0ecda33f915a3e64a6f3cdaa93a69b74d901caa48545e1 +SHA512 (Cargo-3.2.0-1.lock) = 96e724a6532e23920120116de1b67e6698b2fa435a59dc296e51a936ecdf91131c0499e359ece28b9c6d564db12fd86ff42f05b9ce856ba219b39be2847ac235 +SHA512 (vendor-3.2.0-1.tar.gz) = 04fe9ff8a08142641af07f5dc0729ef3e766dec622ec557dddaddacab9e7d39397d0c13392f757a9d50fd394d77305b2e2860559f34057ad8fdf3a84fa5e6579
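
The coverity patch removed above (its fixes are carried by the 3.2.0 upstream tarball) shows a few recurring C hardening patterns worth keeping in mind when reviewing future backports. The first is the snprintf handling in the memberof_apply_config hunk: capture the int return value and check it before folding it into an unsigned offset, because assigning a negative error return straight to a size_t produces a huge bogus offset. Below is a minimal, standalone sketch of that pattern; build_or_filter is a hypothetical helper for this example, not a slapi API.

#include <stdio.h>

/* build_or_filter is a hypothetical helper, not a slapi API */
static int
build_or_filter(char *buf, size_t buflen, const char **attrs)
{
    size_t bytes_out = 0;
    int rc = snprintf(buf, buflen, "(|");

    /* snprintf returns int: negative on encoding error, and a value
     * >= the available space means the output was truncated. Check
     * both before converting to size_t. */
    if (rc < 0 || (size_t)rc >= buflen) {
        return -1;
    }
    bytes_out = (size_t)rc;

    for (size_t i = 0; attrs && attrs[i]; i++) {
        rc = snprintf(buf + bytes_out, buflen - bytes_out, "(%s=*)", attrs[i]);
        if (rc < 0 || (size_t)rc >= buflen - bytes_out) {
            return -1;
        }
        bytes_out += (size_t)rc;
    }

    rc = snprintf(buf + bytes_out, buflen - bytes_out, ")");
    if (rc < 0 || (size_t)rc >= buflen - bytes_out) {
        return -1;
    }
    return 0;
}

int
main(void)
{
    const char *attrs[] = {"member", "uniqueMember", NULL};
    char filter[128];

    if (build_or_filter(filter, sizeof(filter), attrs) == 0) {
        printf("%s\n", filter); /* prints (|(member=*)(uniqueMember=*)) */
    }
    return 0;
}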
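
The agmt_update_init_status hunk restructures the function so that ra->lock is always released before the internal modify is issued; as the in-code comment notes, the modify triggers agmtlist_notify_all, which takes the same lock, so holding it across the call self-deadlocks on a non-recursive mutex. A sketch of the rule with plain pthreads follows; all names are illustrative, not 389-ds code.

#include <pthread.h>
#include <stdio.h>

static pthread_mutex_t agmt_lock = PTHREAD_MUTEX_INITIALIZER;
static int status; /* protected by agmt_lock */

/* Stand-in for the internal modify path, which ends up calling back
 * into code that takes the same lock (agmtlist_notify_all's role). */
static void
notify_all(void)
{
    pthread_mutex_lock(&agmt_lock); /* deadlocks if the caller still holds it */
    printf("notified, status=%d\n", status);
    pthread_mutex_unlock(&agmt_lock);
}

static void
update_status(int new_status)
{
    pthread_mutex_lock(&agmt_lock);
    status = new_status;              /* finish touching shared state... */
    pthread_mutex_unlock(&agmt_lock); /* ...then drop the lock */

    notify_all(); /* safe: agmt_lock is no longer held */
}

int
main(void)
{
    update_status(1);
    return 0;
}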
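
Several hunks fix ownership leaks the same way: bind an owned object to a local and free it after the call instead of constructing it inline in the argument list (the Slapi_DN in memberof_add_memberof_attr), and free the previous value before reassigning a pointer inside a loop (the idl_free calls in bdb_ancestorid_new_idl_create_index). A tiny sketch of the first shape, with strdup/free standing in for slapi_sdn_new_normdn_byref/slapi_sdn_free and a made-up do_modify callee:

#include <stdio.h>
#include <stdlib.h>
#include <string.h>

/* Stand-in for an internal operation that borrows, but does not own, dn. */
static int
do_modify(const char *dn)
{
    return dn ? 0 : -1;
}

int
main(void)
{
    const char *dn = "uid=test,dc=example,dc=com";
    int rc;

    /* Leaky shape: do_modify(strdup(dn)); - nothing ever frees the copy.
     * Fixed shape: bind the owned copy to a local, free it after use. */
    char *copy = strdup(dn);
    rc = do_modify(copy);
    free(copy);

    printf("rc=%d\n", rc);
    return 0;
}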
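
The parents.c and dse.c hunks guard the subordinate-count decrement with "else if (current_sub_count > 0)" because the counter is unsigned: decrementing zero does not go negative, it wraps to the type's maximum. A short illustration, assuming a size_t counter as in those hunks:

#include <stdbool.h>
#include <stdio.h>

static size_t
update_sub_count(size_t current, bool is_add)
{
    if (is_add) {
        current++;
    } else if (current > 0) { /* guard: 0 - 1 on size_t wraps to SIZE_MAX */
        current--;
    }
    return current;
}

int
main(void)
{
    size_t n = 0;
    n = update_sub_count(n, false); /* stays 0 instead of wrapping */
    printf("%zu\n", n);
    return 0;
}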
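
Finally, the bdb_pre_close hunk moves the NULL test on pEnv ahead of the pthread_mutex_lock call; the old code dereferenced pEnv to take the lock and only afterwards tested !pEnv, so the check could never fire. A standalone check-before-dereference sketch, where env_t and its fields are made up for the example:

#include <pthread.h>
#include <stddef.h>
#include <stdio.h>

/* env_t and its fields are made up for the example. */
typedef struct {
    pthread_mutex_t thread_count_lock;
    int stop_threads;
} env_t;

static void
pre_close(env_t *env)
{
    if (env == NULL) { /* bail out before any dereference */
        return;
    }
    pthread_mutex_lock(&env->thread_count_lock);
    if (!env->stop_threads) {
        env->stop_threads = 1; /* signal worker threads to stop */
    }
    pthread_mutex_unlock(&env->thread_count_lock);
}

int
main(void)
{
    env_t e = {PTHREAD_MUTEX_INITIALIZER, 0};

    pre_close(NULL); /* safe no-op instead of a crash */
    pre_close(&e);
    puts("ok");
    return 0;
}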