Compare commits: c10 ... c8-stream- (1 commit)

| Author | SHA1 | Date |
|---|---|---|
| | 4d07f6f9dd | |
From 77cc17e5dfb7ed71a320844d14a90c99c1474cc3 Mon Sep 17 00:00:00 2001
From: Mark Reynolds <mreynolds@redhat.com>
Date: Tue, 20 May 2025 08:13:24 -0400
Subject: [PATCH] Issue 6787 - Improve error message when bulk import
 connection is closed

Description:

If an online replication initialization connection is closed a vague error
message is reported when the init is aborted:

factory_destructor - ERROR bulk import abandoned

It should be clear that the import is being abandoned because the connection
was closed and identify the conn id.

relates: https://github.com/389ds/389-ds-base/issues/6787

Reviewed by: progier(Thanks!)

(cherry picked from commit d472dd83d49f8dce6d71e202cbb4d897218ceffb)
Signed-off-by: Masahiro Matsuya <mmatsuya@redhat.com>
---
 ldap/servers/slapd/back-ldbm/db-bdb/bdb_import_threads.c | 6 ++++--
 1 file changed, 4 insertions(+), 2 deletions(-)

diff --git a/ldap/servers/slapd/back-ldbm/db-bdb/bdb_import_threads.c b/ldap/servers/slapd/back-ldbm/db-bdb/bdb_import_threads.c
index 67d6e3abc..e433f3db2 100644
--- a/ldap/servers/slapd/back-ldbm/db-bdb/bdb_import_threads.c
+++ b/ldap/servers/slapd/back-ldbm/db-bdb/bdb_import_threads.c
@@ -3432,9 +3432,10 @@ factory_constructor(void *object __attribute__((unused)), void *parent __attribu
 }
 
 void
-factory_destructor(void *extension, void *object __attribute__((unused)), void *parent __attribute__((unused)))
+factory_destructor(void *extension, void *object, void *parent __attribute__((unused)))
 {
     ImportJob *job = (ImportJob *)extension;
+    Connection *conn = (Connection *)object;
     PRThread *thread;
 
     if (extension == NULL)
@@ -3446,7 +3447,8 @@ factory_destructor(void *extension, void *object __attribute__((unused)), void *
      */
     thread = job->main_thread;
     slapi_log_err(SLAPI_LOG_ERR, "factory_destructor",
-                  "ERROR bulk import abandoned\n");
+                  "ERROR bulk import abandoned: conn=%ld was closed\n",
+                  conn->c_connid);
     import_abort_all(job, 1);
     /* wait for import_main to finish... */
     PR_JoinThread(thread);
--
2.51.1

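The gist of the patch above is easiest to see outside of C: the abort message now names the closed connection instead of only saying the import was abandoned. A minimal sketch of the message change (hypothetical helper, not part of the patch):

```python
def abandon_message(conn_id=None):
    """Build the bulk-import abort message; include the conn id when known."""
    if conn_id is None:
        # The old, vague message: no hint about why the import stopped.
        return "ERROR bulk import abandoned"
    # The patched message names the closed connection.
    return f"ERROR bulk import abandoned: conn={conn_id} was closed"
```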
From 8cba0dd699541d562d74502f35176df33f188512 Mon Sep 17 00:00:00 2001
From: James Chapman <jachapma@redhat.com>
Date: Fri, 30 May 2025 11:12:43 +0000
Subject: [PATCH] Issue 6641 - modrdn fails when a user is member of multiple
 groups (#6643)

Bug description:
Rename of a user that is member of multiple AM groups fail when MO and
RI plugins are enabled.

Fix description:
MO plugin - After updating the entry member attribute, check the return
value. Retry the delete if the attr value exists and retry the add if the
attr value is missing.

RI plugin - A previous commit checked if the attr value was not present
before adding a mod. This commit was reverted in favour of overriding
the internal op return value, consistent with other plugins.

CI test from Viktor Ashirov <vashirov@redhat.com>

Fixes: https://github.com/389ds/389-ds-base/issues/6641
Relates: https://github.com/389ds/389-ds-base/issues/6566

Reviewed by: @progier389, @tbordaz, @vashirov (Thank you)

(cherry picked from commit 132ce4ab158679475cb83dbe28cc4fd7ced5cd19)
Signed-off-by: Masahiro Matsuya <mmatsuya@redhat.com>
---
 .../tests/suites/plugins/modrdn_test.py      | 174 ++++++++++++++++++
 ldap/servers/plugins/automember/automember.c |  11 +-
 ldap/servers/plugins/memberof/memberof.c     | 123 +++++--------
 ldap/servers/plugins/referint/referint.c     |  30 +--
 ldap/servers/slapd/modify.c                  |  51 +++++
 ldap/servers/slapd/slapi-plugin.h            |   1 +
 6 files changed, 301 insertions(+), 89 deletions(-)
 create mode 100644 dirsrvtests/tests/suites/plugins/modrdn_test.py

diff --git a/dirsrvtests/tests/suites/plugins/modrdn_test.py b/dirsrvtests/tests/suites/plugins/modrdn_test.py
new file mode 100644
index 000000000..be79b0c3c
--- /dev/null
+++ b/dirsrvtests/tests/suites/plugins/modrdn_test.py
@@ -0,0 +1,174 @@
+# --- BEGIN COPYRIGHT BLOCK ---
+# Copyright (C) 2025 Red Hat, Inc.
+# All rights reserved.
+#
+# License: GPL (version 3 or any later version).
+# See LICENSE for details.
+# --- END COPYRIGHT BLOCK ---
+#
+import pytest
+from lib389.topologies import topology_st
+from lib389._constants import DEFAULT_SUFFIX
+from lib389.idm.group import Groups
+from lib389.idm.user import nsUserAccounts
+from lib389.plugins import (
+    AutoMembershipDefinitions,
+    AutoMembershipPlugin,
+    AutoMembershipRegexRules,
+    MemberOfPlugin,
+    ReferentialIntegrityPlugin,
+)
+
+pytestmark = pytest.mark.tier1
+
+USER_PROPERTIES = {
+    "uid": "userwith",
+    "cn": "userwith",
+    "uidNumber": "1000",
+    "gidNumber": "2000",
+    "homeDirectory": "/home/testuser",
+    "displayName": "test user",
+}
+
+
+def test_modrdn_of_a_member_of_2_automember_groups(topology_st):
+    """Test that a member of 2 automember groups can be renamed
+
+    :id: 0e40bdc4-a2d2-4bb8-8368-e02c8920bad2
+
+    :setup: Standalone instance
+
+    :steps:
+        1. Enable automember plugin
+        2. Create definiton for users with A in the name
+        3. Create regex rule for users with A in the name
+        4. Create definiton for users with Z in the name
+        5. Create regex rule for users with Z in the name
+        6. Enable memberof plugin
+        7. Enable referential integrity plugin
+        8. Restart the instance
+        9. Create groups
+        10. Create users userwitha, userwithz, userwithaz
+        11. Rename userwithaz
+
+    :expectedresults:
+        1. Success
+        2. Success
+        3. Success
+        4. Success
+        5. Success
+        6. Success
+        7. Success
+        8. Success
+        9. Success
+        10. Success
+        11. Success
+    """
+    inst = topology_st.standalone
+
+    # Enable automember plugin
+    automember_plugin = AutoMembershipPlugin(inst)
+    automember_plugin.enable()
+
+    # Create definiton for users with A in the name
+    automembers = AutoMembershipDefinitions(inst)
+    automember = automembers.create(
+        properties={
+            "cn": "userswithA",
+            "autoMemberScope": DEFAULT_SUFFIX,
+            "autoMemberFilter": "objectclass=posixAccount",
+            "autoMemberGroupingAttr": "member:dn",
+        }
+    )
+
+    # Create regex rule for users with A in the name
+    automembers_regex_rule = AutoMembershipRegexRules(inst, f"{automember.dn}")
+    automembers_regex_rule.create(
+        properties={
+            "cn": "userswithA",
+            "autoMemberInclusiveRegex": ["cn=.*a.*"],
+            "autoMemberTargetGroup": [f"cn=userswithA,ou=Groups,{DEFAULT_SUFFIX}"],
+        }
+    )
+
+    # Create definiton for users with Z in the name
+    automember = automembers.create(
+        properties={
+            "cn": "userswithZ",
+            "autoMemberScope": DEFAULT_SUFFIX,
+            "autoMemberFilter": "objectclass=posixAccount",
+            "autoMemberGroupingAttr": "member:dn",
+        }
+    )
+
+    # Create regex rule for users with Z in the name
+    automembers_regex_rule = AutoMembershipRegexRules(inst, f"{automember.dn}")
+    automembers_regex_rule.create(
+        properties={
+            "cn": "userswithZ",
+            "autoMemberInclusiveRegex": ["cn=.*z.*"],
+            "autoMemberTargetGroup": [f"cn=userswithZ,ou=Groups,{DEFAULT_SUFFIX}"],
+        }
+    )
+
+    # Enable memberof plugin
+    memberof_plugin = MemberOfPlugin(inst)
+    memberof_plugin.enable()
+
+    # Enable referential integrity plugin
+    referint_plugin = ReferentialIntegrityPlugin(inst)
+    referint_plugin.enable()
+
+    # Restart the instance
+    inst.restart()
+
+    # Create groups
+    groups = Groups(inst, DEFAULT_SUFFIX)
+    groupA = groups.create(properties={"cn": "userswithA"})
+    groupZ = groups.create(properties={"cn": "userswithZ"})
+
+    # Create users
+    users = nsUserAccounts(inst, DEFAULT_SUFFIX)
+
+    # userwitha
+    user_props = USER_PROPERTIES.copy()
+    user_props.update(
+        {
+            "uid": USER_PROPERTIES["uid"] + "a",
+            "cn": USER_PROPERTIES["cn"] + "a",
+        }
+    )
+    user = users.create(properties=user_props)
+
+    # userwithz
+    user_props.update(
+        {
+            "uid": USER_PROPERTIES["uid"] + "z",
+            "cn": USER_PROPERTIES["cn"] + "z",
+        }
+    )
+    user = users.create(properties=user_props)
+
+    # userwithaz
+    user_props.update(
+        {
+            "uid": USER_PROPERTIES["uid"] + "az",
+            "cn": USER_PROPERTIES["cn"] + "az",
+        }
+    )
+    user = users.create(properties=user_props)
+    user_orig_dn = user.dn
+
+    # Rename userwithaz
+    user.rename(new_rdn="uid=userwith")
+    user_new_dn = user.dn
+
+    assert user.get_attr_val_utf8("uid") != "userwithaz"
+
+    # Check groups contain renamed username
+    assert groupA.is_member(user_new_dn)
+    assert groupZ.is_member(user_new_dn)
+
+    # Check groups dont contain original username
+    assert not groupA.is_member(user_orig_dn)
+    assert not groupZ.is_member(user_orig_dn)
diff --git a/ldap/servers/plugins/automember/automember.c b/ldap/servers/plugins/automember/automember.c
index 419adb052..fde92ee12 100644
--- a/ldap/servers/plugins/automember/automember.c
+++ b/ldap/servers/plugins/automember/automember.c
@@ -1754,13 +1754,12 @@ automember_update_member_value(Slapi_Entry *member_e, const char *group_dn, char
     }
 
     mod_pb = slapi_pblock_new();
-    slapi_modify_internal_set_pb(mod_pb, group_dn,
-                                 mods, 0, 0, automember_get_plugin_id(), 0);
-    slapi_modify_internal_pb(mod_pb);
-    slapi_pblock_get(mod_pb, SLAPI_PLUGIN_INTOP_RESULT, &result);
+    /* Do a single mod with error overrides for DEL/ADD */
+    result = slapi_single_modify_internal_override(mod_pb, slapi_sdn_new_dn_byval(group_dn), mods,
+                                                   automember_get_plugin_id(), 0);
 
     if (add) {
-        if ((result != LDAP_SUCCESS) && (result != LDAP_TYPE_OR_VALUE_EXISTS)) {
+        if (result != LDAP_SUCCESS) {
             slapi_log_err(SLAPI_LOG_ERR, AUTOMEMBER_PLUGIN_SUBSYSTEM,
                           "automember_update_member_value - Unable to add \"%s\" as "
                           "a \"%s\" value to group \"%s\" (%s).\n",
@@ -1770,7 +1769,7 @@ automember_update_member_value(Slapi_Entry *member_e, const char *group_dn, char
         }
     } else {
         /* delete value */
-        if ((result != LDAP_SUCCESS) && (result != LDAP_NO_SUCH_ATTRIBUTE)) {
+        if (result != LDAP_SUCCESS) {
             slapi_log_err(SLAPI_LOG_ERR, AUTOMEMBER_PLUGIN_SUBSYSTEM,
                           "automember_update_member_value - Unable to delete \"%s\" as "
                           "a \"%s\" value from group \"%s\" (%s).\n",
diff --git a/ldap/servers/plugins/memberof/memberof.c b/ldap/servers/plugins/memberof/memberof.c
index f79b083a9..f3dc7cf00 100644
--- a/ldap/servers/plugins/memberof/memberof.c
+++ b/ldap/servers/plugins/memberof/memberof.c
@@ -1482,18 +1482,9 @@ memberof_del_dn_type_callback(Slapi_Entry *e, void *callback_data)
     mod.mod_op = LDAP_MOD_DELETE;
     mod.mod_type = ((memberof_del_dn_data *)callback_data)->type;
     mod.mod_values = val;
-
-    slapi_modify_internal_set_pb_ext(
-        mod_pb, slapi_entry_get_sdn(e),
-        mods, 0, 0,
-        memberof_get_plugin_id(), SLAPI_OP_FLAG_BYPASS_REFERRALS);
-
-    slapi_modify_internal_pb(mod_pb);
-
-    slapi_pblock_get(mod_pb,
-                     SLAPI_PLUGIN_INTOP_RESULT,
-                     &rc);
-
+    /* Internal mod with error overrides for DEL/ADD */
+    rc = slapi_single_modify_internal_override(mod_pb, slapi_entry_get_sdn(e), mods,
+                                               memberof_get_plugin_id(), SLAPI_OP_FLAG_BYPASS_REFERRALS);
     slapi_pblock_destroy(mod_pb);
 
     if (rc == LDAP_NO_SUCH_ATTRIBUTE && val[0] == NULL) {
@@ -1966,6 +1957,7 @@ memberof_replace_dn_type_callback(Slapi_Entry *e, void *callback_data)
 
     return rc;
 }
+
 LDAPMod **
 my_copy_mods(LDAPMod **orig_mods)
 {
@@ -2774,33 +2766,6 @@ memberof_modop_one_replace_r(Slapi_PBlock *pb, MemberOfConfig *config, int mod_o
                 replace_mod.mod_values = replace_val;
             }
             rc = memberof_add_memberof_attr(mods, op_to, config->auto_add_oc);
-            if (rc == LDAP_NO_SUCH_ATTRIBUTE || rc == LDAP_TYPE_OR_VALUE_EXISTS) {
-                if (rc == LDAP_TYPE_OR_VALUE_EXISTS) {
-                    /*
-                     * For some reason the new modrdn value is present, so retry
-                     * the delete by itself and ignore the add op by tweaking
-                     * the mod array.
-                     */
-                    mods[1] = NULL;
-                    rc = memberof_add_memberof_attr(mods, op_to, config->auto_add_oc);
-                } else {
-                    /*
-                     * The memberof value to be replaced does not exist so just
-                     * add the new value. Shuffle the mod array to apply only
-                     * the add operation.
-                     */
-                    mods[0] = mods[1];
-                    mods[1] = NULL;
-                    rc = memberof_add_memberof_attr(mods, op_to, config->auto_add_oc);
-                    if (rc == LDAP_TYPE_OR_VALUE_EXISTS) {
-                        /*
-                         * The entry already has the expected memberOf value, no
-                         * problem just return success.
-                         */
-                        rc = LDAP_SUCCESS;
-                    }
-                }
-            }
         }
     }
 
@@ -4454,43 +4419,57 @@ memberof_add_memberof_attr(LDAPMod **mods, const char *dn, char *add_oc)
     Slapi_PBlock *mod_pb = NULL;
     int added_oc = 0;
     int rc = 0;
+    LDAPMod *single_mod[2];
 
-    while (1) {
-        mod_pb = slapi_pblock_new();
-        slapi_modify_internal_set_pb(
-            mod_pb, dn, mods, 0, 0,
-            memberof_get_plugin_id(), SLAPI_OP_FLAG_BYPASS_REFERRALS);
-        slapi_modify_internal_pb(mod_pb);
-
-        slapi_pblock_get(mod_pb, SLAPI_PLUGIN_INTOP_RESULT, &rc);
-        if (rc == LDAP_OBJECT_CLASS_VIOLATION) {
-            if (!add_oc || added_oc) {
-                /*
-                 * We aren't auto adding an objectclass, or we already
-                 * added the objectclass, and we are still failing.
-                 */
+    if (!dn || !mods) {
+        slapi_log_err(SLAPI_LOG_ERR, MEMBEROF_PLUGIN_SUBSYSTEM,
+                      "Invalid argument: %s%s is NULL\n",
+                      !dn ? "dn " : "",
+                      !mods ? "mods " : "");
+        return LDAP_PARAM_ERROR;
+    }
+
+
+    mod_pb = slapi_pblock_new();
+    /* Split multiple mods into individual mod operations */
+    for (size_t i = 0; (mods != NULL) && (mods[i] != NULL); i++) {
+        single_mod[0] = mods[i];
+        single_mod[1] = NULL;
+
+        while (1) {
+            slapi_pblock_init(mod_pb);
+
+            /* Internal mod with error overrides for DEL/ADD */
+            rc = slapi_single_modify_internal_override(mod_pb, slapi_sdn_new_normdn_byref(dn), single_mod,
+                                                       memberof_get_plugin_id(), SLAPI_OP_FLAG_BYPASS_REFERRALS);
+            if (rc == LDAP_OBJECT_CLASS_VIOLATION) {
+                if (!add_oc || added_oc) {
+                    /*
+                     * We aren't auto adding an objectclass, or we already
+                     * added the objectclass, and we are still failing.
+                     */
+                    break;
+                }
+                rc = memberof_add_objectclass(add_oc, dn);
+                slapi_log_err(SLAPI_LOG_WARNING, MEMBEROF_PLUGIN_SUBSYSTEM,
+                              "Entry %s - schema violation caught - repair operation %s\n",
+                              dn ? dn : "unknown",
+                              rc ? "failed" : "succeeded");
+                if (rc) {
+                    /* Failed to add objectclass */
+                    rc = LDAP_OBJECT_CLASS_VIOLATION;
+                    break;
+                }
+                added_oc = 1;
+            } else if (rc) {
+                /* Some other fatal error */
+                slapi_log_err(SLAPI_LOG_PLUGIN, MEMBEROF_PLUGIN_SUBSYSTEM,
+                              "memberof_add_memberof_attr - Internal modify failed. rc=%d\n", rc);
                 break;
-            }
-            rc = memberof_add_objectclass(add_oc, dn);
-            slapi_log_err(SLAPI_LOG_WARNING, MEMBEROF_PLUGIN_SUBSYSTEM,
-                          "Entry %s - schema violation caught - repair operation %s\n",
-                          dn ? dn : "unknown",
-                          rc ? "failed" : "succeeded");
-            if (rc) {
-                /* Failed to add objectclass */
-                rc = LDAP_OBJECT_CLASS_VIOLATION;
+            } else {
+                /* success */
                 break;
             }
-            added_oc = 1;
-            slapi_pblock_destroy(mod_pb);
-        } else if (rc) {
-            /* Some other fatal error */
-            slapi_log_err(SLAPI_LOG_PLUGIN, MEMBEROF_PLUGIN_SUBSYSTEM,
-                          "memberof_add_memberof_attr - Internal modify failed. rc=%d\n", rc);
-            break;
-        } else {
-            /* success */
-            break;
         }
     }
     slapi_pblock_destroy(mod_pb);
diff --git a/ldap/servers/plugins/referint/referint.c b/ldap/servers/plugins/referint/referint.c
index 28240c1f6..c5e259d8d 100644
--- a/ldap/servers/plugins/referint/referint.c
+++ b/ldap/servers/plugins/referint/referint.c
@@ -711,19 +711,28 @@ static int
 _do_modify(Slapi_PBlock *mod_pb, Slapi_DN *entrySDN, LDAPMod **mods)
 {
     int rc = 0;
+    LDAPMod *mod[2];
 
-    slapi_pblock_init(mod_pb);
+    /* Split multiple modifications into individual modify operations */
+    for (size_t i = 0; (mods != NULL) && (mods[i] != NULL); i++) {
+        mod[0] = mods[i];
+        mod[1] = NULL;
 
-    if (allow_repl) {
-        /* Must set as a replicated operation */
-        slapi_modify_internal_set_pb_ext(mod_pb, entrySDN, mods, NULL, NULL,
-                                         referint_plugin_identity, OP_FLAG_REPLICATED);
-    } else {
-        slapi_modify_internal_set_pb_ext(mod_pb, entrySDN, mods, NULL, NULL,
-                                         referint_plugin_identity, 0);
+        slapi_pblock_init(mod_pb);
+
+        /* Do a single mod with error overrides for DEL/ADD */
+        if (allow_repl) {
+            rc = slapi_single_modify_internal_override(mod_pb, entrySDN, mod,
+                                                       referint_plugin_identity, OP_FLAG_REPLICATED);
+        } else {
+            rc = slapi_single_modify_internal_override(mod_pb, entrySDN, mod,
+                                                       referint_plugin_identity, 0);
+        }
+
+        if (rc != LDAP_SUCCESS) {
+            return rc;
+        }
     }
-    slapi_modify_internal_pb(mod_pb);
-    slapi_pblock_get(mod_pb, SLAPI_PLUGIN_INTOP_RESULT, &rc);
 
     return rc;
 }
@@ -1033,7 +1042,6 @@ _update_all_per_mod(Slapi_DN *entrySDN, /* DN of the searched entry */
         /* (case 1) */
         slapi_mods_add_string(smods, LDAP_MOD_DELETE, attrName, sval);
         slapi_mods_add_string(smods, LDAP_MOD_ADD, attrName, newDN);
-
     } else if (p) {
         /* (case 2) */
         slapi_mods_add_string(smods, LDAP_MOD_DELETE, attrName, sval);
diff --git a/ldap/servers/slapd/modify.c b/ldap/servers/slapd/modify.c
index 669bb104c..455eb63ec 100644
--- a/ldap/servers/slapd/modify.c
+++ b/ldap/servers/slapd/modify.c
@@ -492,6 +492,57 @@ slapi_modify_internal_set_pb_ext(Slapi_PBlock *pb, const Slapi_DN *sdn, LDAPMod
     slapi_pblock_set(pb, SLAPI_PLUGIN_IDENTITY, plugin_identity);
 }
 
+/* Performs a single LDAP modify operation with error overrides.
+ *
+ * If specific errors occur, such as attempting to add an existing attribute or
+ * delete a non-existent one, the function overrides the error and returns success:
+ * - LDAP_MOD_ADD -> LDAP_TYPE_OR_VALUE_EXISTS (ignored)
+ * - LDAP_MOD_DELETE -> LDAP_NO_SUCH_ATTRIBUTE (ignored)
+ *
+ * Any other errors encountered during the operation will be returned as-is.
+ */
+int
+slapi_single_modify_internal_override(Slapi_PBlock *pb, const Slapi_DN *sdn, LDAPMod **mod, Slapi_ComponentId *plugin_id, int op_flags)
+{
+    int rc = 0;
+    int result = 0;
+    int result_reset = 0;
+    int mod_op = 0;
+
+    if (!pb || !sdn || !mod || !mod[0]) {
+        slapi_log_err(SLAPI_LOG_ERR, "slapi_single_modify_internal_override",
+                      "Invalid argument: %s%s%s%s is NULL\n",
+                      !pb ? "pb " : "",
+                      !sdn ? "sdn " : "",
+                      !mod ? "mod " : "",
+                      !mod[0] ? "mod[0] " : "");
+
+        return LDAP_PARAM_ERROR;
+    }
+
+    slapi_modify_internal_set_pb_ext(pb, sdn, mod, NULL, NULL, plugin_id, op_flags);
+    slapi_modify_internal_pb(pb);
+    slapi_pblock_get(pb, SLAPI_PLUGIN_INTOP_RESULT, &result);
+
+    if (result != LDAP_SUCCESS) {
+        mod_op = mod[0]->mod_op & LDAP_MOD_OP;
+        if ((mod_op == LDAP_MOD_ADD && result == LDAP_TYPE_OR_VALUE_EXISTS) ||
+            (mod_op == LDAP_MOD_DELETE && result == LDAP_NO_SUCH_ATTRIBUTE)) {
+            slapi_log_err(SLAPI_LOG_PLUGIN, "slapi_single_modify_internal_override",
+                          "Overriding return code - plugin:%s dn:%s mod_op:%d result:%d\n",
+                          plugin_id ? plugin_id->sci_component_name : "unknown",
+                          sdn ? sdn->udn : "unknown", mod_op, result);
+
+            slapi_pblock_set(pb, SLAPI_PLUGIN_INTOP_RESULT, &result_reset);
+            rc = LDAP_SUCCESS;
+        } else {
+            rc = result;
+        }
+    }
+
+    return rc;
+}
+
 /* Helper functions */
 
 static int
diff --git a/ldap/servers/slapd/slapi-plugin.h b/ldap/servers/slapd/slapi-plugin.h
index 9fdcaccc8..a84a60c92 100644
--- a/ldap/servers/slapd/slapi-plugin.h
+++ b/ldap/servers/slapd/slapi-plugin.h
@@ -5965,6 +5965,7 @@ void slapi_add_entry_internal_set_pb(Slapi_PBlock *pb, Slapi_Entry *e, LDAPContr
 int slapi_add_internal_set_pb(Slapi_PBlock *pb, const char *dn, LDAPMod **attrs, LDAPControl **controls, Slapi_ComponentId *plugin_identity, int operation_flags);
 void slapi_modify_internal_set_pb(Slapi_PBlock *pb, const char *dn, LDAPMod **mods, LDAPControl **controls, const char *uniqueid, Slapi_ComponentId *plugin_identity, int operation_flags);
 void slapi_modify_internal_set_pb_ext(Slapi_PBlock *pb, const Slapi_DN *sdn, LDAPMod **mods, LDAPControl **controls, const char *uniqueid, Slapi_ComponentId *plugin_identity, int operation_flags);
+int slapi_single_modify_internal_override(Slapi_PBlock *pb, const Slapi_DN *sdn, LDAPMod **mod, Slapi_ComponentId *plugin_identity, int operation_flags);
 /**
  * Set \c Slapi_PBlock to perform modrdn/rename internally
  *
--
2.51.1

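The new `slapi_single_modify_internal_override` in the patch above boils down to one rule: a redundant ADD (value already present) or a redundant DELETE (value already gone) is reported as success, and every other result is passed through unchanged. A minimal sketch of that rule, using the standard LDAP result-code values from RFC 4511 (hypothetical helper, not the server code):

```python
# Standard LDAP result codes (RFC 4511)
LDAP_SUCCESS = 0
LDAP_NO_SUCH_ATTRIBUTE = 16
LDAP_TYPE_OR_VALUE_EXISTS = 20

def override_result(mod_op, result):
    """Treat a redundant ADD or DELETE as success; pass other results through."""
    if mod_op == "add" and result == LDAP_TYPE_OR_VALUE_EXISTS:
        # The value is already there: the ADD achieved its goal.
        return LDAP_SUCCESS
    if mod_op == "delete" and result == LDAP_NO_SUCH_ATTRIBUTE:
        # The value is already gone: the DELETE achieved its goal.
        return LDAP_SUCCESS
    return result
```

This is why the MO/RI/automember call sites can drop their per-plugin special-casing of these two codes and simply test for `LDAP_SUCCESS`.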
From ccaaaa31a86eb059315580249838d72e4a51bf8b Mon Sep 17 00:00:00 2001
|
||||
From: tbordaz <tbordaz@redhat.com>
|
||||
Date: Tue, 14 Jan 2025 18:12:56 +0100
|
||||
Subject: [PATCH] Issue 6470 - Some replication status data are reset upon a
|
||||
restart (#6471)
|
||||
|
||||
Bug description:
|
||||
The replication agreement contains operational attributes
|
||||
related to the total init: nsds5replicaLastInitStart,
|
||||
nsds5replicaLastInitEnd, nsds5replicaLastInitStatus.
|
||||
Those attributes are reset at restart
|
||||
|
||||
Fix description:
|
||||
When reading the replication agreement from config
|
||||
(agmt_new_from_entry) restore the attributes into
|
||||
the in-memory RA.
|
||||
Updates the RA config entry from the in-memory RA
|
||||
during shutdown/cleanallruv/enable_ra
|
||||
|
||||
fixes: #6470
|
||||
|
||||
Reviewed by: Simon Pichugin (Thanks !!)
|
||||
|
||||
(cherry picked from commit 90071a334517be523e498bded5b663c50c40ee3f)
|
||||
Signed-off-by: Masahiro Matsuya <mmatsuya@redhat.com>
|
||||
---
|
||||
.../suites/replication/single_master_test.py | 128 ++++++++++++++++
|
||||
ldap/servers/plugins/replication/repl5.h | 4 +
|
||||
ldap/servers/plugins/replication/repl5_agmt.c | 140 +++++++++++++++++-
|
||||
.../plugins/replication/repl5_agmtlist.c | 1 +
|
||||
.../replication/repl5_replica_config.c | 1 +
|
||||
.../plugins/replication/repl_globals.c | 3 +
|
||||
6 files changed, 273 insertions(+), 4 deletions(-)
|
||||
|
||||
diff --git a/dirsrvtests/tests/suites/replication/single_master_test.py b/dirsrvtests/tests/suites/replication/single_master_test.py
|
||||
index e927e6cfd..f448d2342 100644
|
||||
--- a/dirsrvtests/tests/suites/replication/single_master_test.py
|
||||
+++ b/dirsrvtests/tests/suites/replication/single_master_test.py
|
||||
@@ -13,6 +13,7 @@ from lib389.utils import *
|
||||
from lib389.idm.user import UserAccounts, TEST_USER_PROPERTIES
|
||||
|
||||
from lib389.replica import ReplicationManager, Replicas
|
||||
+from lib389.agreement import Agreements
|
||||
from lib389.backend import Backends
|
||||
|
||||
from lib389.topologies import topology_m1c1 as topo_r # Replication
|
||||
@@ -154,6 +155,133 @@ def test_lastupdate_attr_before_init(topo_nr):
|
||||
json_obj = json.loads(json_status)
|
||||
log.debug("JSON status message: {}".format(json_obj))
|
||||
|
||||
+def test_total_init_operational_attr(topo_r):
|
||||
+ """Check that operation attributes nsds5replicaLastInitStatus
|
||||
+ nsds5replicaLastInitStart and nsds5replicaLastInitEnd
|
||||
+ are preserved between restart
|
||||
+
|
||||
+ :id: 6ba00bb1-87c0-47dd-86e0-ccf892b3985b
|
||||
+ :customerscenario: True
|
||||
+ :setup: Replication setup with supplier and consumer instances,
|
||||
+ test user on supplier
|
||||
+ :steps:
|
||||
+ 1. Check that user was replicated to consumer
|
||||
+ 2. Trigger a first total init
|
||||
+ 3. Check status/start/end values are set on the supplier
|
||||
+ 4. Restart supplier
|
||||
+ 5. Check previous status/start/end values are preserved
|
||||
+ 6. Trigger a second total init
|
||||
+ 7. Check status/start/end values are set on the supplier
|
||||
+ 8. Restart supplier
|
||||
+ 9. Check previous status/start/end values are preserved
|
||||
+ 10. Check status/start/end values are different between
|
||||
+ first and second total init
|
||||
+ :expectedresults:
|
||||
+ 1. The user should be replicated to consumer
|
||||
+ 2. Total init should be successful
|
||||
+ 3. It must exist a values
|
||||
+ 4. Operation should be successful
|
||||
+ 5. Check values are identical before/after restart
|
||||
+ 6. Total init should be successful
|
||||
+ 7. It must exist a values
|
||||
+ 8. Operation should be successful
|
||||
+ 9. Check values are identical before/after restart
|
||||
+ 10. values must differ between first/second total init
|
||||
+ """
|
||||
+
|
||||
+ supplier = topo_r.ms["supplier1"]
|
||||
+ consumer = topo_r.cs["consumer1"]
|
||||
+ repl = ReplicationManager(DEFAULT_SUFFIX)
|
||||
+
|
||||
+ # Create a test user
|
||||
+ m_users = UserAccounts(topo_r.ms["supplier1"], DEFAULT_SUFFIX)
|
||||
+ m_user = m_users.ensure_state(properties=TEST_USER_PROPERTIES)
|
||||
+ m_user.ensure_present('mail', 'testuser@redhat.com')
|
||||
+
|
||||
+ # Then check it is replicated
|
||||
+ log.info("Check that replication is working")
|
||||
+ repl.wait_for_replication(supplier, consumer)
|
||||
+ c_users = UserAccounts(topo_r.cs["consumer1"], DEFAULT_SUFFIX)
|
||||
+ c_user = c_users.get('testuser')
|
||||
+ assert c_user
|
||||
+
|
||||
+ # Retrieve the replication agreement S1->C1
|
||||
+ replica_supplier = Replicas(supplier).get(DEFAULT_SUFFIX)
|
||||
+ agmts_supplier = Agreements(supplier, replica_supplier.dn)
|
||||
+ supplier_consumer = None
|
||||
+ for agmt in agmts_supplier.list():
|
||||
+ if (agmt.get_attr_val_utf8('nsDS5ReplicaPort') == str(consumer.port) and
|
||||
+ agmt.get_attr_val_utf8('nsDS5ReplicaHost') == consumer.host):
|
||||
+ supplier_consumer = agmt
|
||||
+ break
|
||||
+ assert supplier_consumer
|
||||
+
|
||||
+ # Trigger a first total init and check that
|
||||
+ # start/end/status is updated AND preserved during a restart
|
||||
+ log.info("First total init")
|
||||
+ supplier_consumer.begin_reinit()
|
||||
+ (done, error) = supplier_consumer.wait_reinit()
|
||||
+ assert done is True
|
||||
+
|
||||
+ status_1 = supplier_consumer.get_attr_val_utf8("nsds5replicaLastInitStatus")
|
||||
+ assert status_1
|
||||
+
|
||||
+ initStart_1 = supplier_consumer.get_attr_val_utf8("nsds5replicaLastInitStart")
|
||||
+ assert initStart_1
|
||||
+
|
||||
+ initEnd_1 = supplier_consumer.get_attr_val_utf8("nsds5replicaLastInitEnd")
|
||||
+ assert initEnd_1
|
||||
+
|
||||
+ log.info("Check values from first total init are preserved")
|
||||
+ supplier.restart()
|
||||
+ post_restart_status_1 = supplier_consumer.get_attr_val_utf8("nsds5replicaLastInitStatus")
|
||||
+ assert post_restart_status_1
|
||||
+ assert post_restart_status_1 == status_1
|
||||
+
|
||||
+ post_restart_initStart_1 = supplier_consumer.get_attr_val_utf8("nsds5replicaLastInitStart")
|
||||
+ assert post_restart_initStart_1
|
||||
+ assert post_restart_initStart_1 == initStart_1
|
||||
+
|
||||
+ post_restart_initEnd_1 = supplier_consumer.get_attr_val_utf8("nsds5replicaLastInitEnd")
|
||||
+ assert post_restart_initEnd_1 == initEnd_1
|
||||
+
|
||||
+ # Trigger a second total init and check that
|
||||
+ # start/end/status is updated (differ from previous values)
|
||||
+ # AND new values are preserved during a restart
|
||||
+ time.sleep(1)
|
||||
+ log.info("Second total init")
|
||||
+ supplier_consumer.begin_reinit()
|
||||
+ (done, error) = supplier_consumer.wait_reinit()
|
||||
+ assert done is True
|
||||
+
|
||||
+ status_2 = supplier_consumer.get_attr_val_utf8("nsds5replicaLastInitStatus")
|
||||
+ assert status_2
|
||||
+
|
||||
+ initStart_2 = supplier_consumer.get_attr_val_utf8("nsds5replicaLastInitStart")
|
||||
+ assert initStart_2
|
||||
+
|
||||
+ initEnd_2 = supplier_consumer.get_attr_val_utf8("nsds5replicaLastInitEnd")
|
||||
+ assert initEnd_2
|
||||
+
|
||||
+ log.info("Check values from second total init are preserved")
|
||||
+ supplier.restart()
|
||||
+ post_restart_status_2 = supplier_consumer.get_attr_val_utf8("nsds5replicaLastInitStatus")
|
||||
+ assert post_restart_status_2
|
||||
+ assert post_restart_status_2 == status_2
|
||||
+
|
||||
+ post_restart_initStart_2 = supplier_consumer.get_attr_val_utf8("nsds5replicaLastInitStart")
|
||||
+ assert post_restart_initStart_2
|
||||
+ assert post_restart_initStart_2 == initStart_2
|
||||
+
|
||||
+ post_restart_initEnd_2 = supplier_consumer.get_attr_val_utf8("nsds5replicaLastInitEnd")
|
||||
+ assert post_restart_initEnd_2 == initEnd_2
|
||||
+
|
||||
+ # Check that values are updated by total init
|
||||
+ log.info("Check values from first/second total init are different")
|
||||
+ assert status_2 == status_1
|
||||
+ assert initStart_2 != initStart_1
|
||||
+ assert initEnd_2 != initEnd_1
|
||||
+
|
||||
if __name__ == '__main__':
|
||||
# Run isolated
|
||||
# -s for DEBUG mode
|
||||
diff --git a/ldap/servers/plugins/replication/repl5.h b/ldap/servers/plugins/replication/repl5.h
index c2fbff8c0..65e2059e7 100644
--- a/ldap/servers/plugins/replication/repl5.h
+++ b/ldap/servers/plugins/replication/repl5.h
@@ -165,6 +165,9 @@ extern const char *type_nsds5ReplicaBootstrapCredentials;
 extern const char *type_nsds5ReplicaBootstrapBindMethod;
 extern const char *type_nsds5ReplicaBootstrapTransportInfo;
 extern const char *type_replicaKeepAliveUpdateInterval;
+extern const char *type_nsds5ReplicaLastInitStart;
+extern const char *type_nsds5ReplicaLastInitEnd;
+extern const char *type_nsds5ReplicaLastInitStatus;
 
 /* Attribute names for windows replication agreements */
 extern const char *type_nsds7WindowsReplicaArea;
@@ -430,6 +433,7 @@ void agmt_notify_change(Repl_Agmt *ra, Slapi_PBlock *pb);
 Object *agmt_get_consumer_ruv(Repl_Agmt *ra);
 ReplicaId agmt_get_consumer_rid(Repl_Agmt *ra, void *conn);
 int agmt_set_consumer_ruv(Repl_Agmt *ra, RUV *ruv);
+void agmt_update_init_status(Repl_Agmt *ra);
 void agmt_update_consumer_ruv(Repl_Agmt *ra);
 CSN *agmt_get_consumer_schema_csn(Repl_Agmt *ra);
 void agmt_set_consumer_schema_csn(Repl_Agmt *ra, CSN *csn);
diff --git a/ldap/servers/plugins/replication/repl5_agmt.c b/ldap/servers/plugins/replication/repl5_agmt.c
index a71343dec..c3b8d298c 100644
--- a/ldap/servers/plugins/replication/repl5_agmt.c
+++ b/ldap/servers/plugins/replication/repl5_agmt.c
@@ -56,6 +56,7 @@
 #include "repl5_prot_private.h"
 #include "cl5_api.h"
 #include "slapi-plugin.h"
+#include "slap.h"
 
 #define DEFAULT_TIMEOUT 120             /* (seconds) default outbound LDAP connection */
 #define DEFAULT_FLOWCONTROL_WINDOW 1000 /* #entries sent without acknowledgment */
@@ -510,10 +511,33 @@ agmt_new_from_entry(Slapi_Entry *e)
     ra->last_update_status[0] = '\0';
     ra->update_in_progress = PR_FALSE;
     ra->stop_in_progress = PR_FALSE;
-    ra->last_init_end_time = 0UL;
-    ra->last_init_start_time = 0UL;
-    ra->last_init_status[0] = '\0';
-    ra->changecounters = (struct changecounter **)slapi_ch_calloc(MAX_NUM_OF_MASTERS + 1,
+    val = (char *)slapi_entry_attr_get_ref(e, type_nsds5ReplicaLastInitEnd);
+    if (val) {
+        time_t init_end_time;
+
+        init_end_time = parse_genTime((char *) val);
+        if (init_end_time == NO_TIME || init_end_time == SLAPD_END_TIME) {
+            ra->last_init_end_time = 0UL;
+        } else {
+            ra->last_init_end_time = init_end_time;
+        }
+    }
+    val = (char *)slapi_entry_attr_get_ref(e, type_nsds5ReplicaLastInitStart);
+    if (val) {
+        time_t init_start_time;
+
+        init_start_time = parse_genTime((char *) val);
+        if (init_start_time == NO_TIME || init_start_time == SLAPD_END_TIME) {
+            ra->last_init_start_time = 0UL;
+        } else {
+            ra->last_init_start_time = init_start_time;
+        }
+    }
+    val = (char *)slapi_entry_attr_get_ref(e, type_nsds5ReplicaLastInitStatus);
+    if (val) {
+        strcpy(ra->last_init_status, val);
+    }
+    ra->changecounters = (struct changecounter **)slapi_ch_calloc(MAX_NUM_OF_SUPPLIERS + 1,
                                                                   sizeof(struct changecounter *));
     ra->num_changecounters = 0;
     ra->max_changecounters = MAX_NUM_OF_MASTERS;
@@ -2504,6 +2528,113 @@ agmt_set_consumer_ruv(Repl_Agmt *ra, RUV *ruv)
     return 0;
 }
 
+void
+agmt_update_init_status(Repl_Agmt *ra)
+{
+    int rc;
+    Slapi_PBlock *pb;
+    LDAPMod **mods;
+    int nb_mods = 0;
+    int mod_idx;
+    Slapi_Mod smod_start_time = {0};
+    Slapi_Mod smod_end_time = {0};
+    Slapi_Mod smod_status = {0};
+
+    PR_ASSERT(ra);
+    PR_Lock(ra->lock);
+
+    if (ra->last_init_start_time) {
+        nb_mods++;
+    }
+    if (ra->last_init_end_time) {
+        nb_mods++;
+    }
+    if (ra->last_init_status[0] != '\0') {
+        nb_mods++;
+    }
+    if (nb_mods == 0) {
+        /* shortcut. no need to go further */
+        PR_Unlock(ra->lock);
+        return;
+    }
+    mods = (LDAPMod **) slapi_ch_malloc((nb_mods + 1) * sizeof(LDAPMod *));
+    mod_idx = 0;
+    if (ra->last_init_start_time) {
+        struct berval val;
+        char *time_tmp = NULL;
+        slapi_mod_init(&smod_start_time, 1);
+        slapi_mod_set_type(&smod_start_time, type_nsds5ReplicaLastInitStart);
+        slapi_mod_set_operation(&smod_start_time, LDAP_MOD_REPLACE | LDAP_MOD_BVALUES);
+
+        time_tmp = format_genTime(ra->last_init_start_time);
+        val.bv_val = time_tmp;
+        val.bv_len = strlen(time_tmp);
+        slapi_mod_add_value(&smod_start_time, &val);
+        slapi_ch_free((void **)&time_tmp);
+        mods[mod_idx] = (LDAPMod *)slapi_mod_get_ldapmod_byref(&smod_start_time);
+        mod_idx++;
+    }
+    if (ra->last_init_end_time) {
+        struct berval val;
+        char *time_tmp = NULL;
+        slapi_mod_init(&smod_end_time, 1);
+        slapi_mod_set_type(&smod_end_time, type_nsds5ReplicaLastInitEnd);
+        slapi_mod_set_operation(&smod_end_time, LDAP_MOD_REPLACE | LDAP_MOD_BVALUES);
+
+        time_tmp = format_genTime(ra->last_init_end_time);
+        val.bv_val = time_tmp;
+        val.bv_len = strlen(time_tmp);
+        slapi_mod_add_value(&smod_end_time, &val);
+        slapi_ch_free((void **)&time_tmp);
+        mods[mod_idx] = (LDAPMod *)slapi_mod_get_ldapmod_byref(&smod_end_time);
+        mod_idx++;
+    }
+    if (ra->last_init_status[0] != '\0') {
+        struct berval val;
+        char *init_status = NULL;
+        slapi_mod_init(&smod_status, 1);
+        slapi_mod_set_type(&smod_status, type_nsds5ReplicaLastInitStatus);
+        slapi_mod_set_operation(&smod_status, LDAP_MOD_REPLACE | LDAP_MOD_BVALUES);
+
+        init_status = slapi_ch_strdup(ra->last_init_status);
+        val.bv_val = init_status;
+        val.bv_len = strlen(init_status);
+        slapi_mod_add_value(&smod_status, &val);
+        slapi_ch_free((void **)&init_status);
+        mods[mod_idx] = (LDAPMod *)slapi_mod_get_ldapmod_byref(&smod_status);
+        mod_idx++;
+    }
+
+    if (nb_mods) {
+        /* it is ok to release the lock here because we are done with the agreement data.
+           we have to do it before issuing the modify operation because it causes
+           agmtlist_notify_all to be called which uses the same lock - hence the deadlock */
+        PR_Unlock(ra->lock);
+
+        pb = slapi_pblock_new();
+        mods[nb_mods] = NULL;
+
+        slapi_modify_internal_set_pb_ext(pb, ra->dn, mods, NULL, NULL,
+                                         repl_get_plugin_identity(PLUGIN_MULTISUPPLIER_REPLICATION), 0);
+        slapi_modify_internal_pb(pb);
+
+        slapi_pblock_get(pb, SLAPI_PLUGIN_INTOP_RESULT, &rc);
+        if (rc != LDAP_SUCCESS && rc != LDAP_NO_SUCH_ATTRIBUTE) {
+            slapi_log_err(SLAPI_LOG_ERR, repl_plugin_name, "agmt_update_consumer_ruv - "
+                                                           "%s: agmt_update_consumer_ruv: "
+                                                           "failed to update consumer's RUV; LDAP error - %d\n",
+                          ra->long_name, rc);
+        }
+
+        slapi_pblock_destroy(pb);
+    } else {
+        PR_Unlock(ra->lock);
+    }
+    slapi_mod_done(&smod_start_time);
+    slapi_mod_done(&smod_end_time);
+    slapi_mod_done(&smod_status);
+}
+
 void
 agmt_update_consumer_ruv(Repl_Agmt *ra)
 {
@@ -3123,6 +3254,7 @@ agmt_set_enabled_from_entry(Repl_Agmt *ra, Slapi_Entry *e, char *returntext)
         PR_Unlock(ra->lock);
         agmt_stop(ra);
         agmt_update_consumer_ruv(ra);
+        agmt_update_init_status(ra);
         agmt_set_last_update_status(ra, 0, 0, "agreement disabled");
         return rc;
     }
diff --git a/ldap/servers/plugins/replication/repl5_agmtlist.c b/ldap/servers/plugins/replication/repl5_agmtlist.c
index 18b641f8c..e3b1e814c 100644
--- a/ldap/servers/plugins/replication/repl5_agmtlist.c
+++ b/ldap/servers/plugins/replication/repl5_agmtlist.c
@@ -782,6 +782,7 @@ agmtlist_shutdown()
         ra = (Repl_Agmt *)object_get_data(ro);
         agmt_stop(ra);
         agmt_update_consumer_ruv(ra);
+        agmt_update_init_status(ra);
         next_ro = objset_next_obj(agmt_set, ro);
         /* Object ro was released in objset_next_obj,
          * but the address ro can be still used to remove ro from objset. */
diff --git a/ldap/servers/plugins/replication/repl5_replica_config.c b/ldap/servers/plugins/replication/repl5_replica_config.c
index aea2cf506..8cc7423bf 100644
--- a/ldap/servers/plugins/replication/repl5_replica_config.c
+++ b/ldap/servers/plugins/replication/repl5_replica_config.c
@@ -2006,6 +2006,7 @@ clean_agmts(cleanruv_data *data)
         cleanruv_log(data->task, data->rid, CLEANALLRUV_ID, SLAPI_LOG_INFO, "Cleaning agmt...");
         agmt_stop(agmt);
         agmt_update_consumer_ruv(agmt);
+        agmt_update_init_status(agmt);
         agmt_start(agmt);
         agmt_obj = agmtlist_get_next_agreement_for_replica(data->replica, agmt_obj);
     }
diff --git a/ldap/servers/plugins/replication/repl_globals.c b/ldap/servers/plugins/replication/repl_globals.c
index 797ca957f..e6b89c33b 100644
--- a/ldap/servers/plugins/replication/repl_globals.c
+++ b/ldap/servers/plugins/replication/repl_globals.c
@@ -118,6 +118,9 @@ const char *type_nsds5ReplicaBootstrapBindDN = "nsds5ReplicaBootstrapBindDN";
 const char *type_nsds5ReplicaBootstrapCredentials = "nsds5ReplicaBootstrapCredentials";
 const char *type_nsds5ReplicaBootstrapBindMethod = "nsds5ReplicaBootstrapBindMethod";
 const char *type_nsds5ReplicaBootstrapTransportInfo = "nsds5ReplicaBootstrapTransportInfo";
+const char *type_nsds5ReplicaLastInitStart = "nsds5replicaLastInitStart";
+const char *type_nsds5ReplicaLastInitEnd = "nsds5replicaLastInitEnd";
+const char *type_nsds5ReplicaLastInitStatus = "nsds5replicaLastInitStatus";
 
 /* windows sync specific attributes */
 const char *type_nsds7WindowsReplicaArea = "nsds7WindowsReplicaSubtree";
-- 
2.51.1
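The patch above persists nsds5replicaLastInitStart/End as LDAP GeneralizedTime strings via format_genTime/parse_genTime, mapping unparsable values to 0. A minimal Python sketch of that round trip, assuming the common `YYYYmmddHHMMSSZ` (UTC) form; the helper names here are illustrative, not server or lib389 API:

```python
from datetime import datetime, timezone

GENTIME_FMT = "%Y%m%d%H%M%S"  # LDAP GeneralizedTime body; trailing 'Z' = UTC

def format_gentime(ts: int) -> str:
    """Render an epoch timestamp the way nsds5replicaLastInitStart/End are stored."""
    return datetime.fromtimestamp(ts, tz=timezone.utc).strftime(GENTIME_FMT) + "Z"

def parse_gentime(value: str) -> int:
    """Parse a GeneralizedTime value back to an epoch timestamp; return 0 on bad
    input, mirroring how the patch maps NO_TIME to 0."""
    try:
        dt = datetime.strptime(value, GENTIME_FMT + "Z").replace(tzinfo=timezone.utc)
    except ValueError:
        return 0
    return int(dt.timestamp())

# Storing and re-reading a timestamp is lossless at one-second granularity
assert parse_gentime(format_gentime(1747730004)) == 1747730004
```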
@ -0,0 +1,671 @@
From dfb7e19fdcefe4af683a235ea7113956248571e3 Mon Sep 17 00:00:00 2001
From: tbordaz <tbordaz@redhat.com>
Date: Thu, 17 Nov 2022 14:21:17 +0100
Subject: [PATCH] Issue 3729 - RFE Extend log of operations statistics in
 access log (#5508)

Bug description:
Create a per operation framework to collect/display
statistics about internal resource consumption

Fix description:

The fix contains 2 parts:
The framework, which registers a per operation object extension
(op_stat_init). The extension is used to store/retrieve
collected statistics.
To reduce the impact of collecting/logging, it uses a toggle
with the config attribute 'nsslapd-statlog-level' that is a bit mask,
so that data are collected and logged only if the appropriate
statistic level is set.

An example of a statistic level is index fetching
during the evaluation of a search filter.
It is implemented in filterindex.c (store) and result.c (retrieve/log).
This patch uses LDAP_STAT_READ_INDEX=0x1.
For LDAP_STAT_READ_INDEX, the collected data are:
  - for each key (attribute, type, value) the number of IDs
  - the duration to fetch all the values

design https://www.port389.org/docs/389ds/design/log-operation-stats.html
relates: #3729

Reviewed by: Pierre Rogier, Mark Reynolds (thanks!)

(cherry picked from commit a480d2cbfa2b1325f44ab3e1c393c5ee348b388e)
Signed-off-by: Masahiro Matsuya <mmatsuya@redhat.com>
---
 .../tests/suites/ds_logs/ds_logs_test.py   | 73 ++++++++++++++++
 ldap/servers/slapd/back-ldbm/filterindex.c | 49 +++++++++++
 ldap/servers/slapd/libglobs.c              | 48 +++++++++++
 ldap/servers/slapd/log.c                   | 26 ++++++
 ldap/servers/slapd/log.h                   |  1 +
 ldap/servers/slapd/main.c                  |  1 +
 ldap/servers/slapd/operation.c             | 86 +++++++++++++++++++
 ldap/servers/slapd/proto-slap.h            |  8 ++
 ldap/servers/slapd/result.c                | 64 ++++++++++++++
 ldap/servers/slapd/slap.h                  |  4 +
 ldap/servers/slapd/slapi-private.h         | 27 ++++++
 11 files changed, 387 insertions(+)
diff --git a/dirsrvtests/tests/suites/ds_logs/ds_logs_test.py b/dirsrvtests/tests/suites/ds_logs/ds_logs_test.py
index 84d721756..43288f67f 100644
--- a/dirsrvtests/tests/suites/ds_logs/ds_logs_test.py
+++ b/dirsrvtests/tests/suites/ds_logs/ds_logs_test.py
@@ -27,6 +27,7 @@ from lib389.idm.group import Groups
 from lib389.idm.organizationalunit import OrganizationalUnits
 from lib389._constants import DEFAULT_SUFFIX, LOG_ACCESS_LEVEL, PASSWORD
 from lib389.utils import ds_is_older, ds_is_newer
+from lib389.dseldif import DSEldif
 import ldap
 import glob
 import re
@@ -1250,6 +1251,78 @@ def test_missing_backend_suffix(topology_st, request):
 
     request.addfinalizer(fin)
 
+def test_stat_index(topology_st, request):
+    """Testing nsslapd-statlog-level with indexing statistics
+
+    :id: fcabab05-f000-468c-8eb4-02ce3c39c902
+    :setup: Standalone instance
+    :steps:
+        1. Check that nsslapd-statlog-level is 0 (default)
+        2. Create 20 users with 'cn' starting with 'user\_'
+        3. Check there is no statistic record in the access log with ADD
+        4. Check there is no statistic record in the access log with SRCH
+        5. Set nsslapd-statlog-level=LDAP_STAT_READ_INDEX (0x1) to get
+           statistics when reading indexes
+        6. Check there is statistic records in access log with SRCH
+    :expectedresults:
+        1. This should pass
+        2. This should pass
+        3. This should pass
+        4. This should pass
+        5. This should pass
+        6. This should pass
+    """
+    topology_st.standalone.start()
+
+    # Step 1
+    log.info("Assert nsslapd-statlog-level is by default 0")
+    assert topology_st.standalone.config.get_attr_val_int("nsslapd-statlog-level") == 0
+
+    # Step 2
+    users = UserAccounts(topology_st.standalone, DEFAULT_SUFFIX)
+    users_set = []
+    log.info('Adding 20 users')
+    for i in range(20):
+        name = 'user_%d' % i
+        last_user = users.create(properties={
+            'uid': name,
+            'sn': name,
+            'cn': name,
+            'uidNumber': '1000',
+            'gidNumber': '1000',
+            'homeDirectory': '/home/%s' % name,
+            'mail': '%s@example.com' % name,
+            'userpassword': 'pass%s' % name,
+        })
+        users_set.append(last_user)
+
+    # Step 3
+    assert not topology_st.standalone.ds_access_log.match('.*STAT read index.*')
+
+    # Step 4
+    entries = topology_st.standalone.search_s(DEFAULT_SUFFIX, ldap.SCOPE_SUBTREE, "cn=user_*")
+    assert not topology_st.standalone.ds_access_log.match('.*STAT read index.*')
+
+    # Step 5
+    log.info("Set nsslapd-statlog-level: 1 to enable indexing statistics")
+    topology_st.standalone.config.set("nsslapd-statlog-level", "1")
+
+    # Step 6
+    entries = topology_st.standalone.search_s(DEFAULT_SUFFIX, ldap.SCOPE_SUBTREE, "cn=user_*")
+    topology_st.standalone.stop()
+    assert topology_st.standalone.ds_access_log.match('.*STAT read index.*')
+    assert topology_st.standalone.ds_access_log.match('.*STAT read index: attribute.*')
+    assert topology_st.standalone.ds_access_log.match('.*STAT read index: duration.*')
+    topology_st.standalone.start()
+
+    def fin():
+        log.info('Deleting users')
+        for user in users_set:
+            user.delete()
+        topology_st.standalone.config.set("nsslapd-statlog-level", "0")
+
+    request.addfinalizer(fin)
+
 if __name__ == '__main__':
     # Run isolated
     # -s for DEBUG mode
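The test above toggles nsslapd-statlog-level between 0 and 1; the attribute is a bit mask, and each statistic class is collected and logged only when its bit is set. A small sketch of that gating, using the mask names from the patch (the `should_collect` helper is illustrative, not server code):

```python
# Bit-mask levels as defined by the patch in proto-slap.h
LDAP_STAT_READ_INDEX = 0x1  # index-read statistics (the only class used so far)
LDAP_STAT_FREE_1 = 0x2      # reserved for a future statistic class

def should_collect(statlog_level: int, wanted: int) -> bool:
    """Collect/log a statistic class only if its bit is set in the config mask."""
    return bool(statlog_level & wanted)

assert not should_collect(0, LDAP_STAT_READ_INDEX)  # default level: no stats
assert should_collect(1, LDAP_STAT_READ_INDEX)      # the test sets the attribute to 1
assert not should_collect(1, LDAP_STAT_FREE_1)      # other classes stay off
```

Because each class is an independent bit, future levels can be enabled together by OR-ing their values into the attribute.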
diff --git a/ldap/servers/slapd/back-ldbm/filterindex.c b/ldap/servers/slapd/back-ldbm/filterindex.c
index 8a79848c3..30550dde7 100644
--- a/ldap/servers/slapd/back-ldbm/filterindex.c
+++ b/ldap/servers/slapd/back-ldbm/filterindex.c
@@ -1040,13 +1040,57 @@ keys2idl(
     int allidslimit)
 {
     IDList *idl = NULL;
+    Op_stat *op_stat;
+    PRBool collect_stat = PR_FALSE;
 
     slapi_log_err(SLAPI_LOG_TRACE, "keys2idl", "=> type %s indextype %s\n", type, indextype);
+
+    /* Before reading the index take the start time */
+    if (LDAP_STAT_READ_INDEX & config_get_statlog_level()) {
+        op_stat = op_stat_get_operation_extension(pb);
+        if (op_stat->search_stat) {
+            collect_stat = PR_TRUE;
+            clock_gettime(CLOCK_MONOTONIC, &(op_stat->search_stat->keys_lookup_start));
+        }
+    }
+
     for (uint32_t i = 0; ivals[i] != NULL; i++) {
         IDList *idl2 = NULL;
+        struct component_keys_lookup *key_stat;
+        int key_len;
 
         idl2 = index_read_ext_allids(pb, be, type, indextype, slapi_value_get_berval(ivals[i]), txn, err, unindexed, allidslimit);
+        if (collect_stat) {
+            /* gather the index lookup statistics */
+            key_stat = (struct component_keys_lookup *) slapi_ch_calloc(1, sizeof (struct component_keys_lookup));
+
+            /* indextype e.g. "eq" or "sub" (see index.c) */
+            if (indextype) {
+                key_stat->index_type = slapi_ch_strdup(indextype);
+            }
+            /* key value e.g. '^st' or 'smith'*/
+            key_len = slapi_value_get_length(ivals[i]);
+            if (key_len) {
+                key_stat->key = (char *) slapi_ch_calloc(1, key_len + 1);
+                memcpy(key_stat->key, slapi_value_get_string(ivals[i]), key_len);
+            }
 
+            /* attribute name e.g. 'uid' */
+            if (type) {
+                key_stat->attribute_type = slapi_ch_strdup(type);
+            }
+
+            /* Number of lookup IDs with the key */
+            key_stat->id_lookup_cnt = idl2 ? idl2->b_nids : 0;
+            if (op_stat->search_stat->keys_lookup) {
+                /* it already exist key stat. add key_stat at the head */
+                key_stat->next = op_stat->search_stat->keys_lookup;
+            } else {
+                /* this is the first key stat record */
+                key_stat->next = NULL;
+            }
+            op_stat->search_stat->keys_lookup = key_stat;
+        }
 #ifdef LDAP_ERROR_LOGGING
         /* XXX if ( slapd_ldap_debug & LDAP_DEBUG_TRACE ) { XXX */
         {
@@ -1080,5 +1124,10 @@ keys2idl(
         }
     }
 
+    /* All the keys have been fetch, time to take the completion time */
+    if (collect_stat) {
+        clock_gettime(CLOCK_MONOTONIC, &(op_stat->search_stat->keys_lookup_end));
+    }
+
     return (idl);
 }
diff --git a/ldap/servers/slapd/libglobs.c b/ldap/servers/slapd/libglobs.c
index 2097ab93c..99b2c5d8e 100644
--- a/ldap/servers/slapd/libglobs.c
+++ b/ldap/servers/slapd/libglobs.c
@@ -712,6 +712,10 @@ static struct config_get_and_set
      NULL, 0,
      (void **)&global_slapdFrontendConfig.accessloglevel,
      CONFIG_INT, NULL, SLAPD_DEFAULT_ACCESSLOG_LEVEL_STR, NULL},
+    {CONFIG_STATLOGLEVEL_ATTRIBUTE, config_set_statlog_level,
+     NULL, 0,
+     (void **)&global_slapdFrontendConfig.statloglevel,
+     CONFIG_INT, NULL, SLAPD_DEFAULT_STATLOG_LEVEL, NULL},
     {CONFIG_ERRORLOG_LOGROTATIONTIMEUNIT_ATTRIBUTE, NULL,
      log_set_rotationtimeunit, SLAPD_ERROR_LOG,
      (void **)&global_slapdFrontendConfig.errorlog_rotationunit,
@@ -1748,6 +1752,7 @@ FrontendConfig_init(void)
     cfg->accessloglevel = SLAPD_DEFAULT_ACCESSLOG_LEVEL;
     init_accesslogbuffering = cfg->accesslogbuffering = LDAP_ON;
     init_csnlogging = cfg->csnlogging = LDAP_ON;
+    cfg->statloglevel = SLAPD_DEFAULT_STATLOG_LEVEL;
 
     init_errorlog_logging_enabled = cfg->errorlog_logging_enabled = LDAP_ON;
     init_external_libs_debug_enabled = cfg->external_libs_debug_enabled = LDAP_OFF;
@@ -5382,6 +5387,38 @@ config_set_accesslog_level(const char *attrname, char *value, char *errorbuf, in
     return retVal;
 }
 
+int
+config_set_statlog_level(const char *attrname, char *value, char *errorbuf, int apply)
+{
+    int retVal = LDAP_SUCCESS;
+    long level = 0;
+    char *endp = NULL;
+
+    slapdFrontendConfig_t *slapdFrontendConfig = getFrontendConfig();
+
+    if (config_value_is_null(attrname, value, errorbuf, 1)) {
+        return LDAP_OPERATIONS_ERROR;
+    }
+
+    errno = 0;
+    level = strtol(value, &endp, 10);
+
+    if (*endp != '\0' || errno == ERANGE || level < 0) {
+        slapi_create_errormsg(errorbuf, SLAPI_DSE_RETURNTEXT_SIZE, "%s: stat log level \"%s\" is invalid,"
+                                                                   " access log level must range from 0 to %lld",
+                              attrname, value, (long long int)LONG_MAX);
+        retVal = LDAP_OPERATIONS_ERROR;
+        return retVal;
+    }
+
+    if (apply) {
+        CFG_LOCK_WRITE(slapdFrontendConfig);
+        g_set_statlog_level(level);
+        slapdFrontendConfig->statloglevel = level;
+        CFG_UNLOCK_WRITE(slapdFrontendConfig);
+    }
+    return retVal;
+}
 /* set the referral-mode url (which puts us into referral mode) */
 int
 config_set_referral_mode(const char *attrname __attribute__((unused)), char *url, char *errorbuf, int apply)
@@ -6612,6 +6649,17 @@ config_get_accesslog_level()
     return retVal;
 }
 
+int
+config_get_statlog_level()
+{
+    slapdFrontendConfig_t *slapdFrontendConfig = getFrontendConfig();
+    int retVal;
+
+    retVal = slapdFrontendConfig->statloglevel;
+
+    return retVal;
+}
+
 /* return integer -- don't worry about locking similar to config_check_referral_mode
    below */
 
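config_set_statlog_level() above accepts the value only when strtol consumes the whole string (`*endp == '\0'`), there is no range error, and the level is non-negative. A rough Python analogue of that validation; `set_statlog_level` is a hypothetical helper for illustration, not lib389 API:

```python
def set_statlog_level(value: str):
    """Return the parsed level, or None when the value would be rejected."""
    try:
        # int(value, 10) fails on trailing garbage, which is what the
        # strtol + *endp check catches in the C code
        level = int(value, 10)
    except ValueError:
        return None
    if level < 0:  # the C code also rejects negative levels
        return None
    return level

assert set_statlog_level("1") == 1     # valid: enables LDAP_STAT_READ_INDEX
assert set_statlog_level("1x") is None  # trailing characters rejected
```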
diff --git a/ldap/servers/slapd/log.c b/ldap/servers/slapd/log.c
index 8074735e2..837a9c6fd 100644
--- a/ldap/servers/slapd/log.c
+++ b/ldap/servers/slapd/log.c
@@ -233,6 +233,17 @@ g_set_accesslog_level(int val)
     LOG_ACCESS_UNLOCK_WRITE();
 }
 
+/******************************************************************************
+* Set the stat level
+******************************************************************************/
+void
+g_set_statlog_level(int val)
+{
+    LOG_ACCESS_LOCK_WRITE();
+    loginfo.log_access_stat_level = val;
+    LOG_ACCESS_UNLOCK_WRITE();
+}
+
 /******************************************************************************
 * Set whether the process is alive or dead
 * If it is detached, then we write the error in 'stderr'
@@ -283,6 +294,7 @@ g_log_init()
     if ((loginfo.log_access_buffer->lock = PR_NewLock()) == NULL) {
         exit(-1);
     }
+    loginfo.log_access_stat_level = cfg->statloglevel;
 
     /* ERROR LOG */
     loginfo.log_error_state = cfg->errorlog_logging_enabled;
@@ -2640,7 +2652,21 @@ vslapd_log_access(char *fmt, va_list ap)
 
     return (rc);
 }
+int
+slapi_log_stat(int loglevel, const char *fmt, ...)
+{
+    char buf[2048];
+    va_list args;
+    int rc = LDAP_SUCCESS;
 
+    if (loglevel & loginfo.log_access_stat_level) {
+        va_start(args, fmt);
+        PR_vsnprintf(buf, sizeof(buf), fmt, args);
+        rc = slapi_log_access(LDAP_DEBUG_STATS, "%s", buf);
+        va_end(args);
+    }
+    return rc;
+}
 int
 slapi_log_access(int level,
                  char *fmt,
diff --git a/ldap/servers/slapd/log.h b/ldap/servers/slapd/log.h
index 9fb4e7425..6ac37bd29 100644
--- a/ldap/servers/slapd/log.h
+++ b/ldap/servers/slapd/log.h
@@ -120,6 +120,7 @@ struct logging_opts
     int log_access_exptime;      /* time */
     int log_access_exptimeunit;  /* unit time */
     int log_access_exptime_secs; /* time in secs */
+    int log_access_stat_level;   /* statistics level in access log file */
 
     int log_access_level; /* access log level */
     char *log_access_file; /* access log file path */
diff --git a/ldap/servers/slapd/main.c b/ldap/servers/slapd/main.c
index ac45c85d1..9b5b845cb 100644
--- a/ldap/servers/slapd/main.c
+++ b/ldap/servers/slapd/main.c
@@ -1040,6 +1040,7 @@ main(int argc, char **argv)
      * changes are replicated as soon as the replication plugin is started.
      */
     pw_exp_init();
+    op_stat_init();
 
     plugin_print_lists();
     plugin_startall(argc, argv, NULL /* specific plugin list */);
diff --git a/ldap/servers/slapd/operation.c b/ldap/servers/slapd/operation.c
index 4dd3481c7..dacd1838f 100644
--- a/ldap/servers/slapd/operation.c
+++ b/ldap/servers/slapd/operation.c
@@ -652,6 +652,92 @@ slapi_operation_time_expiry(Slapi_Operation *o, time_t timeout, struct timespec
     slapi_timespec_expire_rel(timeout, &(o->o_hr_time_rel), expiry);
 }
 
+
+/*
+ * Operation extension for operation statistics
+ */
+static int op_stat_objtype = -1;
+static int op_stat_handle = -1;
+
+Op_stat *
+op_stat_get_operation_extension(Slapi_PBlock *pb)
+{
+    Slapi_Operation *op;
+
+    slapi_pblock_get(pb, SLAPI_OPERATION, &op);
+    return (Op_stat *)slapi_get_object_extension(op_stat_objtype,
+                                                 op, op_stat_handle);
+}
+
+void
+op_stat_set_operation_extension(Slapi_PBlock *pb, Op_stat *op_stat)
+{
+    Slapi_Operation *op;
+
+    slapi_pblock_get(pb, SLAPI_OPERATION, &op);
+    slapi_set_object_extension(op_stat_objtype, op,
+                               op_stat_handle, (void *)op_stat);
+}
+
+/*
+ * constructor for the operation object extension.
+ */
+static void *
+op_stat_constructor(void *object __attribute__((unused)), void *parent __attribute__((unused)))
+{
+    Op_stat *op_statp = NULL;
+    op_statp = (Op_stat *)slapi_ch_calloc(1, sizeof(Op_stat));
+    op_statp->search_stat = (Op_search_stat *)slapi_ch_calloc(1, sizeof(Op_search_stat));
+
+    return op_statp;
+}
+/*
+ * destructor for the operation object extension.
+ */
+static void
+op_stat_destructor(void *extension, void *object __attribute__((unused)), void *parent __attribute__((unused)))
+{
+    Op_stat *op_statp = (Op_stat *)extension;
+
+    if (NULL == op_statp) {
+        return;
+    }
+
+    if (op_statp->search_stat) {
+        struct component_keys_lookup *keys, *next;
+
+        /* free all the individual key counter */
+        keys = op_statp->search_stat->keys_lookup;
+        while (keys) {
+            next = keys->next;
+            slapi_ch_free_string(&keys->attribute_type);
+            slapi_ch_free_string(&keys->key);
+            slapi_ch_free_string(&keys->index_type);
+            slapi_ch_free((void **) &keys);
+            keys = next;
+        }
+        slapi_ch_free((void **) &op_statp->search_stat);
+    }
+    slapi_ch_free((void **) &op_statp);
+}
+
+#define SLAPI_OP_STAT_MODULE "Module to collect operation stat"
+/* Called once from main */
+void
+op_stat_init(void)
+{
+    if (slapi_register_object_extension(SLAPI_OP_STAT_MODULE,
+                                        SLAPI_EXT_OPERATION,
+                                        op_stat_constructor,
+                                        op_stat_destructor,
+                                        &op_stat_objtype,
+                                        &op_stat_handle) != 0) {
+        slapi_log_err(SLAPI_LOG_ERR, "op_stat_init",
+                      "slapi_register_object_extension failed; "
+                      "operation statistics is not enabled\n");
+    }
+}
+
 /* Set the time the operation actually started */
 void
 slapi_operation_set_time_started(Slapi_Operation *o)
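The operation.c hunk above wires a constructor/destructor pair into the per-operation object extension: keys2idl pushes each per-key record at the head of a singly linked list, and the destructor walks and frees it when the operation dies. A toy Python model of that lifecycle (class names mirror the patch's structs but this is illustrative, not the server code):

```python
class KeyStat:
    """One per-key record, like struct component_keys_lookup."""
    def __init__(self, attribute_type, index_type, key, count):
        self.attribute_type = attribute_type
        self.index_type = index_type
        self.key = key
        self.id_lookup_cnt = count
        self.next = None

class OpStat:
    """Per-operation statistics container, like Op_stat/Op_search_stat."""
    def __init__(self):
        self.keys_lookup = None  # head of a singly linked list, newest first

    def push(self, stat):
        stat.next = self.keys_lookup  # insert at the head, as keys2idl does
        self.keys_lookup = stat

    def drain(self):
        """Detach every record in order (the destructor's free loop)."""
        out = []
        while self.keys_lookup:
            nxt = self.keys_lookup.next
            out.append(self.keys_lookup)
            self.keys_lookup = nxt
        return out

op_stat = OpStat()
op_stat.push(KeyStat("cn", "eq", "user_1", 1))
op_stat.push(KeyStat("cn", "sub", "use", 20))
records = op_stat.drain()  # newest-first, because inserts happen at the head
```

Head insertion keeps each push O(1); the price is that result.c logs the keys in reverse lookup order.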
diff --git a/ldap/servers/slapd/proto-slap.h b/ldap/servers/slapd/proto-slap.h
index 410a3c5fe..3a049ee76 100644
--- a/ldap/servers/slapd/proto-slap.h
+++ b/ldap/servers/slapd/proto-slap.h
@@ -291,6 +291,7 @@ int config_set_defaultreferral(const char *attrname, struct berval **value, char
 int config_set_timelimit(const char *attrname, char *value, char *errorbuf, int apply);
 int config_set_errorlog_level(const char *attrname, char *value, char *errorbuf, int apply);
 int config_set_accesslog_level(const char *attrname, char *value, char *errorbuf, int apply);
+int config_set_statlog_level(const char *attrname, char *value, char *errorbuf, int apply);
 int config_set_auditlog(const char *attrname, char *value, char *errorbuf, int apply);
 int config_set_auditfaillog(const char *attrname, char *value, char *errorbuf, int apply);
 int config_set_userat(const char *attrname, char *value, char *errorbuf, int apply);
@@ -510,6 +511,7 @@ long long config_get_pw_minage(void);
 long long config_get_pw_warning(void);
 int config_get_errorlog_level(void);
 int config_get_accesslog_level(void);
+int config_get_statlog_level();
 int config_get_auditlog_logging_enabled(void);
 int config_get_auditfaillog_logging_enabled(void);
 char *config_get_auditlog_display_attrs(void);
@@ -815,10 +817,15 @@ int lock_fclose(FILE *fp, FILE *lfp);
 #define LDAP_DEBUG_INFO 0x08000000  /* 134217728 */
 #define LDAP_DEBUG_DEBUG 0x10000000 /* 268435456 */
 #define LDAP_DEBUG_ALL_LEVELS 0xFFFFFF
+
+#define LDAP_STAT_READ_INDEX 0x00000001 /* 1 */
+#define LDAP_STAT_FREE_1 0x00000002     /* 2 */
+
 extern int slapd_ldap_debug;
 
 int loglevel_is_set(int level);
 int slapd_log_error_proc(int sev_level, char *subsystem, char *fmt, ...);
+int slapi_log_stat(int loglevel, const char *fmt, ...);
 
 int slapi_log_access(int level, char *fmt, ...)
 #ifdef __GNUC__
@@ -874,6 +881,7 @@ int check_log_max_size(
 
 
 void g_set_accesslog_level(int val);
+void g_set_statlog_level(int val);
 void log__delete_rotated_logs(void);
 
 /*
diff --git a/ldap/servers/slapd/result.c b/ldap/servers/slapd/result.c
index adcef9539..e94533d72 100644
--- a/ldap/servers/slapd/result.c
+++ b/ldap/servers/slapd/result.c
@@ -38,6 +38,7 @@ static PRLock *current_conn_count_mutex;
 
 static int flush_ber(Slapi_PBlock *pb, Connection *conn, Operation *op, BerElement *ber, int type);
 static char *notes2str(unsigned int notes, char *buf, size_t buflen);
+static void log_op_stat(Slapi_PBlock *pb);
 static void log_result(Slapi_PBlock *pb, Operation *op, int err, ber_tag_t tag, int nentries);
 static void log_entry(Operation *op, Slapi_Entry *e);
 static void log_referral(Operation *op);
@@ -2050,6 +2051,68 @@ notes2str(unsigned int notes, char *buf, size_t buflen)
     return (buf);
 }
 
+static void
+log_op_stat(Slapi_PBlock *pb)
+{
+
+    Connection *conn = NULL;
+    Operation *op = NULL;
+    Op_stat *op_stat;
+    struct timespec duration;
+    char stat_etime[ETIME_BUFSIZ] = {0};
+
+    if (config_get_statlog_level() == 0) {
+        return;
+    }
+
+    slapi_pblock_get(pb, SLAPI_CONNECTION, &conn);
+    slapi_pblock_get(pb, SLAPI_OPERATION, &op);
+    op_stat = op_stat_get_operation_extension(pb);
+
+    if (conn == NULL || op == NULL || op_stat == NULL) {
+        return;
+    }
+    /* process the operation */
+    switch (op->o_tag) {
+    case LDAP_REQ_BIND:
+    case LDAP_REQ_UNBIND:
+    case LDAP_REQ_ADD:
+    case LDAP_REQ_DELETE:
+    case LDAP_REQ_MODRDN:
+    case LDAP_REQ_MODIFY:
+    case LDAP_REQ_COMPARE:
+        break;
+    case LDAP_REQ_SEARCH:
+        if ((LDAP_STAT_READ_INDEX & config_get_statlog_level()) &&
+            op_stat->search_stat) {
+            struct component_keys_lookup *key_info;
+            for (key_info = op_stat->search_stat->keys_lookup; key_info; key_info = key_info->next) {
+                slapi_log_stat(LDAP_STAT_READ_INDEX,
+                               "conn=%" PRIu64 " op=%d STAT read index: attribute=%s key(%s)=%s --> count %d\n",
+                               op->o_connid, op->o_opid,
+                               key_info->attribute_type, key_info->index_type, key_info->key,
+                               key_info->id_lookup_cnt);
+            }
+
+            /* total elapsed time */
+            slapi_timespec_diff(&op_stat->search_stat->keys_lookup_end, &op_stat->search_stat->keys_lookup_start, &duration);
+            snprintf(stat_etime, ETIME_BUFSIZ, "%" PRId64 ".%.09" PRId64 "", (int64_t)duration.tv_sec, (int64_t)duration.tv_nsec);
+            slapi_log_stat(LDAP_STAT_READ_INDEX,
+                           "conn=%" PRIu64 " op=%d STAT read index: duration %s\n",
+                           op->o_connid, op->o_opid, stat_etime);
+        }
+        break;
+    case LDAP_REQ_ABANDON_30:
+    case LDAP_REQ_ABANDON:
+        break;
+
+    default:
+        slapi_log_err(SLAPI_LOG_ERR,
+                      "log_op_stat", "Ignoring unknown LDAP request (conn=%" PRIu64 ", tag=0x%lx)\n",
+                      conn->c_connid, op->o_tag);
+        break;
+    }
+}
 
 static void
 log_result(Slapi_PBlock *pb, Operation *op, int err, ber_tag_t tag, int nentries)
|
||||
@@ -2206,6 +2269,7 @@ log_result(Slapi_PBlock *pb, Operation *op, int err, ber_tag_t tag, int nentries
|
||||
} else {
|
||||
ext_str = "";
|
||||
}
|
||||
+ log_op_stat(pb);
|
||||
slapi_log_access(LDAP_DEBUG_STATS,
|
||||
"conn=%" PRIu64 " op=%d RESULT err=%d"
|
||||
" tag=%" BERTAG_T " nentries=%d wtime=%s optime=%s etime=%s%s%s%s\n",
|
||||
diff --git a/ldap/servers/slapd/slap.h b/ldap/servers/slapd/slap.h
|
||||
index 927576b70..82550527c 100644
|
||||
--- a/ldap/servers/slapd/slap.h
|
||||
+++ b/ldap/servers/slapd/slap.h
|
||||
@@ -348,6 +348,8 @@ typedef void (*VFPV)(); /* takes undefined arguments */
|
||||
#define SLAPD_DEFAULT_FE_ERRORLOG_LEVEL_STR "16384"
|
||||
#define SLAPD_DEFAULT_ACCESSLOG_LEVEL 256
|
||||
#define SLAPD_DEFAULT_ACCESSLOG_LEVEL_STR "256"
|
||||
+#define SLAPD_DEFAULT_STATLOG_LEVEL 0
|
||||
+#define SLAPD_DEFAULT_STATLOG_LEVEL_STR "0"
|
||||
|
||||
#define SLAPD_DEFAULT_DISK_THRESHOLD 2097152
|
||||
#define SLAPD_DEFAULT_DISK_THRESHOLD_STR "2097152"
|
||||
@@ -2082,6 +2084,7 @@ typedef struct _slapdEntryPoints
|
||||
#define CONFIG_SCHEMAREPLACE_ATTRIBUTE "nsslapd-schemareplace"
|
||||
#define CONFIG_LOGLEVEL_ATTRIBUTE "nsslapd-errorlog-level"
|
||||
#define CONFIG_ACCESSLOGLEVEL_ATTRIBUTE "nsslapd-accesslog-level"
|
||||
+#define CONFIG_STATLOGLEVEL_ATTRIBUTE "nsslapd-statlog-level"
|
||||
#define CONFIG_ACCESSLOG_MODE_ATTRIBUTE "nsslapd-accesslog-mode"
|
||||
#define CONFIG_ERRORLOG_MODE_ATTRIBUTE "nsslapd-errorlog-mode"
|
||||
#define CONFIG_AUDITLOG_MODE_ATTRIBUTE "nsslapd-auditlog-mode"
|
||||
@@ -2457,6 +2460,7 @@ typedef struct _slapdFrontendConfig
|
||||
int accessloglevel;
|
||||
slapi_onoff_t accesslogbuffering;
|
||||
slapi_onoff_t csnlogging;
|
||||
+ int statloglevel;
|
||||
|
||||
/* ERROR LOG */
|
||||
slapi_onoff_t errorlog_logging_enabled;
|
||||
diff --git a/ldap/servers/slapd/slapi-private.h b/ldap/servers/slapd/slapi-private.h
|
||||
index 4b6cf29eb..bd7a4b39d 100644
|
||||
--- a/ldap/servers/slapd/slapi-private.h
|
||||
+++ b/ldap/servers/slapd/slapi-private.h
|
||||
@@ -449,6 +449,33 @@ int operation_is_flag_set(Slapi_Operation *op, int flag);
|
||||
unsigned long operation_get_type(Slapi_Operation *op);
|
||||
LDAPMod **copy_mods(LDAPMod **orig_mods);
|
||||
|
||||
+/* Structures use to collect statistics per operation */
|
||||
+/* used for LDAP_STAT_READ_INDEX */
|
||||
+struct component_keys_lookup
|
||||
+{
|
||||
+ char *index_type;
|
||||
+ char *attribute_type;
|
||||
+ char *key;
|
||||
+ int id_lookup_cnt;
|
||||
+ struct component_keys_lookup *next;
|
||||
+};
|
||||
+typedef struct op_search_stat
|
||||
+{
|
||||
+ struct component_keys_lookup *keys_lookup;
|
||||
+ struct timespec keys_lookup_start;
|
||||
+ struct timespec keys_lookup_end;
|
||||
+} Op_search_stat;
|
||||
+
|
||||
+/* structure store in the operation extension */
|
||||
+typedef struct op_stat
|
||||
+{
|
||||
+ Op_search_stat *search_stat;
|
||||
+} Op_stat;
|
||||
+
|
||||
+void op_stat_init(void);
|
||||
+Op_stat *op_stat_get_operation_extension(Slapi_PBlock *pb);
|
||||
+void op_stat_set_operation_extension(Slapi_PBlock *pb, Op_stat *op_stat);
|
||||
+
|
||||
/*
|
||||
* From ldap.h
|
||||
* #define LDAP_MOD_ADD 0x00
|
||||
--
|
||||
2.51.1
|
||||
|
||||
@@ -0,0 +1,297 @@
From bc2629db166667cdb01fde2b9e249253d5d868b5 Mon Sep 17 00:00:00 2001
From: tbordaz <tbordaz@redhat.com>
Date: Mon, 21 Nov 2022 11:41:15 +0100
Subject: [PATCH] Issue 3729 - (cont) RFE Extend log of operations statistics
 in access log (#5538)

Bug description:
This is a continuation of #3729.
The previous fix did not manage internal SRCH, so
statistics of internal SRCH were not logged.

Fix description:
For internal operations, log_op_stat uses the
connid/op_id/op_internal_id/op_nested_count values that have been
computed by log_result.

For direct operations, log_op_stat uses info from the
operation itself (o_connid and o_opid).

log_op_stat relies on operation_type rather than
o_tag, which is not available for internal operations.

relates: #3729

Reviewed by: Pierre Rogier

(cherry picked from commit 7915e85a55476647ac54330de4f6e89faf6f2934)
Signed-off-by: Masahiro Matsuya <mmatsuya@redhat.com>
---
 .../tests/suites/ds_logs/ds_logs_test.py | 90 ++++++++++++++++++-
 ldap/servers/slapd/proto-slap.h | 2 +-
 ldap/servers/slapd/result.c | 74 +++++++++------
 3 files changed, 136 insertions(+), 30 deletions(-)

diff --git a/dirsrvtests/tests/suites/ds_logs/ds_logs_test.py b/dirsrvtests/tests/suites/ds_logs/ds_logs_test.py
index 43288f67f..fbb8d7bf1 100644
--- a/dirsrvtests/tests/suites/ds_logs/ds_logs_test.py
+++ b/dirsrvtests/tests/suites/ds_logs/ds_logs_test.py
@@ -21,7 +21,7 @@ from lib389.idm.domain import Domain
from lib389.configurations.sample import create_base_domain
from lib389._mapped_object import DSLdapObject
from lib389.topologies import topology_st
-from lib389.plugins import AutoMembershipPlugin, ReferentialIntegrityPlugin, AutoMembershipDefinitions
+from lib389.plugins import AutoMembershipPlugin, ReferentialIntegrityPlugin, AutoMembershipDefinitions, MemberOfPlugin
from lib389.idm.user import UserAccounts, UserAccount
from lib389.idm.group import Groups
from lib389.idm.organizationalunit import OrganizationalUnits
@@ -1323,6 +1323,94 @@ def test_stat_index(topology_st, request):

    request.addfinalizer(fin)

+def test_stat_internal_op(topology_st, request):
+    """Check that statistics can also be collected for internal operations
+
+    :id: 19f393bd-5866-425a-af7a-4dade06d5c77
+    :setup: Standalone Instance
+    :steps:
+        1. Check that nsslapd-statlog-level is 0 (default)
+        2. Enable memberof plugins
+        3. Create a user
+        4. Remove access log (to only detect new records)
+        5. Enable statistic logging nsslapd-statlog-level=1
+        6. Check that on direct SRCH there is no 'Internal' Stat records
+        7. Remove access log (to only detect new records)
+        8. Add group with the user, so memberof triggers internal search
+           and check it exists 'Internal' Stat records
+    :expectedresults:
+        1. Success
+        2. Success
+        3. Success
+        4. Success
+        5. Success
+        6. Success
+        7. Success
+        8. Success
+    """
+
+    inst = topology_st.standalone
+
+    # Step 1
+    log.info("Assert nsslapd-statlog-level is by default 0")
+    assert topology_st.standalone.config.get_attr_val_int("nsslapd-statlog-level") == 0
+
+    # Step 2
+    memberof = MemberOfPlugin(inst)
+    memberof.enable()
+    inst.restart()
+
+    # Step 3 Add setup entries
+    users = UserAccounts(inst, DEFAULT_SUFFIX, rdn=None)
+    user = users.create(properties={'uid': 'test_1',
+                                    'cn': 'test_1',
+                                    'sn': 'test_1',
+                                    'description': 'member',
+                                    'uidNumber': '1000',
+                                    'gidNumber': '2000',
+                                    'homeDirectory': '/home/testuser'})
+    # Step 4 reset accesslog
+    topology_st.standalone.stop()
+    lpath = topology_st.standalone.ds_access_log._get_log_path()
+    os.unlink(lpath)
+    topology_st.standalone.start()
+
+    # Step 5 enable statistics
+    log.info("Set nsslapd-statlog-level: 1 to enable indexing statistics")
+    topology_st.standalone.config.set("nsslapd-statlog-level", "1")
+
+    # Step 6 for direct SRCH only non internal STAT records
+    entries = topology_st.standalone.search_s(DEFAULT_SUFFIX, ldap.SCOPE_SUBTREE, "uid=test_1")
+    topology_st.standalone.stop()
+    assert topology_st.standalone.ds_access_log.match('.*STAT read index.*')
+    assert topology_st.standalone.ds_access_log.match('.*STAT read index: attribute.*')
+    assert topology_st.standalone.ds_access_log.match('.*STAT read index: duration.*')
+    assert not topology_st.standalone.ds_access_log.match('.*Internal.*STAT.*')
+    topology_st.standalone.start()
+
+    # Step 7 reset accesslog
+    topology_st.standalone.stop()
+    lpath = topology_st.standalone.ds_access_log._get_log_path()
+    os.unlink(lpath)
+    topology_st.standalone.start()
+
+    # Step 8 trigger internal searches and check internal stat records
+    groups = Groups(inst, DEFAULT_SUFFIX, rdn=None)
+    group = groups.create(properties={'cn': 'mygroup',
+                                      'member': 'uid=test_1,%s' % DEFAULT_SUFFIX,
+                                      'description': 'group'})
+    topology_st.standalone.restart()
+    assert topology_st.standalone.ds_access_log.match('.*Internal.*STAT read index.*')
+    assert topology_st.standalone.ds_access_log.match('.*Internal.*STAT read index: attribute.*')
+    assert topology_st.standalone.ds_access_log.match('.*Internal.*STAT read index: duration.*')
+
+    def fin():
+        log.info('Deleting user/group')
+        user.delete()
+        group.delete()
+
+    request.addfinalizer(fin)
+
if __name__ == '__main__':
    # Run isolated
    # -s for DEBUG mode
diff --git a/ldap/servers/slapd/proto-slap.h b/ldap/servers/slapd/proto-slap.h
index 3a049ee76..6e473a08e 100644
--- a/ldap/servers/slapd/proto-slap.h
+++ b/ldap/servers/slapd/proto-slap.h
@@ -511,7 +511,7 @@ long long config_get_pw_minage(void);
long long config_get_pw_warning(void);
int config_get_errorlog_level(void);
int config_get_accesslog_level(void);
-int config_get_statlog_level();
+int config_get_statlog_level(void);
int config_get_auditlog_logging_enabled(void);
int config_get_auditfaillog_logging_enabled(void);
char *config_get_auditlog_display_attrs(void);
diff --git a/ldap/servers/slapd/result.c b/ldap/servers/slapd/result.c
index e94533d72..87641e92f 100644
--- a/ldap/servers/slapd/result.c
+++ b/ldap/servers/slapd/result.c
@@ -38,7 +38,7 @@ static PRLock *current_conn_count_mutex;

static int flush_ber(Slapi_PBlock *pb, Connection *conn, Operation *op, BerElement *ber, int type);
static char *notes2str(unsigned int notes, char *buf, size_t buflen);
-static void log_op_stat(Slapi_PBlock *pb);
+static void log_op_stat(Slapi_PBlock *pb, uint64_t connid, int32_t op_id, int32_t op_internal_id, int32_t op_nested_count);
static void log_result(Slapi_PBlock *pb, Operation *op, int err, ber_tag_t tag, int nentries);
static void log_entry(Operation *op, Slapi_Entry *e);
static void log_referral(Operation *op);
@@ -2051,65 +2051,82 @@ notes2str(unsigned int notes, char *buf, size_t buflen)
    return (buf);
}

+#define STAT_LOG_CONN_OP_FMT_INT_INT "conn=Internal(%" PRIu64 ") op=%d(%d)(%d)"
+#define STAT_LOG_CONN_OP_FMT_EXT_INT "conn=%" PRIu64 " (Internal) op=%d(%d)(%d)"
static void
-log_op_stat(Slapi_PBlock *pb)
+log_op_stat(Slapi_PBlock *pb, uint64_t connid, int32_t op_id, int32_t op_internal_id, int32_t op_nested_count)
{
-
-    Connection *conn = NULL;
    Operation *op = NULL;
    Op_stat *op_stat;
    struct timespec duration;
    char stat_etime[ETIME_BUFSIZ] = {0};
+    int internal_op;

    if (config_get_statlog_level() == 0) {
        return;
    }

-    slapi_pblock_get(pb, SLAPI_CONNECTION, &conn);
    slapi_pblock_get(pb, SLAPI_OPERATION, &op);
+    internal_op = operation_is_flag_set(op, OP_FLAG_INTERNAL);
    op_stat = op_stat_get_operation_extension(pb);

-    if (conn == NULL || op == NULL || op_stat == NULL) {
+    if (op == NULL || op_stat == NULL) {
        return;
    }
    /* process the operation */
-    switch (op->o_tag) {
-    case LDAP_REQ_BIND:
-    case LDAP_REQ_UNBIND:
-    case LDAP_REQ_ADD:
-    case LDAP_REQ_DELETE:
-    case LDAP_REQ_MODRDN:
-    case LDAP_REQ_MODIFY:
-    case LDAP_REQ_COMPARE:
+    switch (operation_get_type(op)) {
+    case SLAPI_OPERATION_BIND:
+    case SLAPI_OPERATION_UNBIND:
+    case SLAPI_OPERATION_ADD:
+    case SLAPI_OPERATION_DELETE:
+    case SLAPI_OPERATION_MODRDN:
+    case SLAPI_OPERATION_MODIFY:
+    case SLAPI_OPERATION_COMPARE:
+    case SLAPI_OPERATION_EXTENDED:
        break;
-    case LDAP_REQ_SEARCH:
+    case SLAPI_OPERATION_SEARCH:
        if ((LDAP_STAT_READ_INDEX & config_get_statlog_level()) &&
            op_stat->search_stat) {
            struct component_keys_lookup *key_info;
            for (key_info = op_stat->search_stat->keys_lookup; key_info; key_info = key_info->next) {
-                slapi_log_stat(LDAP_STAT_READ_INDEX,
-                               "conn=%" PRIu64 " op=%d STAT read index: attribute=%s key(%s)=%s --> count %d\n",
-                               op->o_connid, op->o_opid,
-                               key_info->attribute_type, key_info->index_type, key_info->key,
-                               key_info->id_lookup_cnt);
+                if (internal_op) {
+                    slapi_log_stat(LDAP_STAT_READ_INDEX,
+                                   connid == 0 ? STAT_LOG_CONN_OP_FMT_INT_INT "STAT read index: attribute=%s key(%s)=%s --> count %d\n":
+                                                 STAT_LOG_CONN_OP_FMT_EXT_INT "STAT read index: attribute=%s key(%s)=%s --> count %d\n",
+                                   connid, op_id, op_internal_id, op_nested_count,
+                                   key_info->attribute_type, key_info->index_type, key_info->key,
+                                   key_info->id_lookup_cnt);
+                } else {
+                    slapi_log_stat(LDAP_STAT_READ_INDEX,
+                                   "conn=%" PRIu64 " op=%d STAT read index: attribute=%s key(%s)=%s --> count %d\n",
+                                   connid, op_id,
+                                   key_info->attribute_type, key_info->index_type, key_info->key,
+                                   key_info->id_lookup_cnt);
+                }
            }

            /* total elapsed time */
            slapi_timespec_diff(&op_stat->search_stat->keys_lookup_end, &op_stat->search_stat->keys_lookup_start, &duration);
            snprintf(stat_etime, ETIME_BUFSIZ, "%" PRId64 ".%.09" PRId64 "", (int64_t)duration.tv_sec, (int64_t)duration.tv_nsec);
-            slapi_log_stat(LDAP_STAT_READ_INDEX,
-                           "conn=%" PRIu64 " op=%d STAT read index: duration %s\n",
-                           op->o_connid, op->o_opid, stat_etime);
+            if (internal_op) {
+                slapi_log_stat(LDAP_STAT_READ_INDEX,
+                               connid == 0 ? STAT_LOG_CONN_OP_FMT_INT_INT "STAT read index: duration %s\n":
+                                             STAT_LOG_CONN_OP_FMT_EXT_INT "STAT read index: duration %s\n",
+                               connid, op_id, op_internal_id, op_nested_count, stat_etime);
+            } else {
+                slapi_log_stat(LDAP_STAT_READ_INDEX,
+                               "conn=%" PRIu64 " op=%d STAT read index: duration %s\n",
+                               op->o_connid, op->o_opid, stat_etime);
+            }
        }
        break;
-    case LDAP_REQ_ABANDON_30:
-    case LDAP_REQ_ABANDON:
+    case SLAPI_OPERATION_ABANDON:
        break;

    default:
        slapi_log_err(SLAPI_LOG_ERR,
-                      "log_op_stat", "Ignoring unknown LDAP request (conn=%" PRIu64 ", tag=0x%lx)\n",
-                      conn->c_connid, op->o_tag);
+                      "log_op_stat", "Ignoring unknown LDAP request (conn=%" PRIu64 ", op_type=0x%lx)\n",
+                      connid, operation_get_type(op));
        break;
    }
}
@@ -2269,7 +2286,7 @@ log_result(Slapi_PBlock *pb, Operation *op, int err, ber_tag_t tag, int nentries
    } else {
        ext_str = "";
    }
-    log_op_stat(pb);
+    log_op_stat(pb, op->o_connid, op->o_opid, 0, 0);
    slapi_log_access(LDAP_DEBUG_STATS,
                     "conn=%" PRIu64 " op=%d RESULT err=%d"
                     " tag=%" BERTAG_T " nentries=%d wtime=%s optime=%s etime=%s%s%s%s\n",
@@ -2284,6 +2301,7 @@ log_result(Slapi_PBlock *pb, Operation *op, int err, ber_tag_t tag, int nentries
    }
    } else {
        int optype;
+        log_op_stat(pb, connid, op_id, op_internal_id, op_nested_count);
#define LOG_MSG_FMT " tag=%" BERTAG_T " nentries=%d wtime=%s optime=%s etime=%s%s%s\n"
        slapi_log_access(LDAP_DEBUG_ARGS,
                         connid == 0 ? LOG_CONN_OP_FMT_INT_INT LOG_MSG_FMT :
--
2.51.1

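The fix above picks a different conn/op prefix depending on whether the operation is internal and whether `connid` is 0 (a purely internal connection). As an illustration only, here is a minimal sketch of that selection logic; it simplifies the macros to `%lu` instead of `PRIu64`, and `fmt_prefix` is a hypothetical helper, not a function from the patch:

```c
#include <assert.h>
#include <stdint.h>
#include <stdio.h>
#include <string.h>

/* Simplified analogues of the STAT_LOG_CONN_OP_FMT_* macros */
#define FMT_INT_INT "conn=Internal(%lu) op=%d(%d)(%d)"
#define FMT_EXT_INT "conn=%lu (Internal) op=%d(%d)(%d)"

/* Render the conn/op prefix the way log_op_stat selects it:
 * external ops use conn/op only; internal ops add the
 * (op_internal_id)(op_nested_count) suffix and mark connid==0
 * as a purely internal connection. */
static int fmt_prefix(char *buf, size_t len, int internal_op,
                      uint64_t connid, int op_id, int iid, int nested)
{
    if (!internal_op) {
        return snprintf(buf, len, "conn=%lu op=%d",
                        (unsigned long)connid, op_id);
    }
    return snprintf(buf, len,
                    connid == 0 ? FMT_INT_INT : FMT_EXT_INT,
                    (unsigned long)connid, op_id, iid, nested);
}
```

This mirrors the access-log convention the patch reuses: `conn=Internal(0) op=3(2)(0)` for a fully internal operation, versus `conn=7 (Internal) op=5(1)(0)` for an internal operation nested under a client connection.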
@@ -0,0 +1,124 @@
From f6eca13762139538d974c1cb285ddf1354fe7837 Mon Sep 17 00:00:00 2001
From: tbordaz <tbordaz@redhat.com>
Date: Tue, 28 Mar 2023 10:27:01 +0200
Subject: [PATCH] Issue 5710 - subtree search statistics for index lookup does
 not report ancestorid/entryrdn lookups (#5711)

Bug description:
The RFE #3729 allows collecting index lookups per search
operation. For subtree searches the server looks up ancestorid,
and those lookups are not recorded.

Fix description:
If statistics are enabled, record the ancestorid lookup.

relates: #5710

Reviewed by: Mark Reynolds (thanks)

(cherry picked from commit fca27c3d0487c9aea9dc7da151a79e3ce0fc7d35)
Signed-off-by: Masahiro Matsuya <mmatsuya@redhat.com>
---
 ldap/servers/slapd/back-ldbm/ldbm_search.c | 59 ++++++++++++++++++++++
 1 file changed, 59 insertions(+)

diff --git a/ldap/servers/slapd/back-ldbm/ldbm_search.c b/ldap/servers/slapd/back-ldbm/ldbm_search.c
index 8c07d1395..5d98e288e 100644
--- a/ldap/servers/slapd/back-ldbm/ldbm_search.c
+++ b/ldap/servers/slapd/back-ldbm/ldbm_search.c
@@ -35,6 +35,7 @@ static IDList *onelevel_candidates(Slapi_PBlock *pb, backend *be, const char *ba
static back_search_result_set *new_search_result_set(IDList *idl, int vlv, int lookthroughlimit);
static void delete_search_result_set(Slapi_PBlock *pb, back_search_result_set **sr);
static int can_skip_filter_test(Slapi_PBlock *pb, struct slapi_filter *f, int scope, IDList *idl);
+static void stat_add_srch_lookup(Op_stat *op_stat, char * attribute_type, const char* index_type, char *key_value, int lookup_cnt);

/* This is for performance testing, allows us to disable ACL checking altogether */
#if defined(DISABLE_ACL_CHECK)
@@ -1167,6 +1168,45 @@ create_subtree_filter(Slapi_Filter *filter, int managedsait, Slapi_Filter **focr
    return ftop;
}

+static void
+stat_add_srch_lookup(Op_stat *op_stat, char * attribute_type, const char* index_type, char *key_value, int lookup_cnt)
+{
+    struct component_keys_lookup *key_stat;
+
+    if ((op_stat == NULL) || (op_stat->search_stat == NULL)) {
+        return;
+    }
+
+    /* gather the index lookup statistics */
+    key_stat = (struct component_keys_lookup *) slapi_ch_calloc(1, sizeof (struct component_keys_lookup));
+
+    /* indextype is "eq" */
+    if (index_type) {
+        key_stat->index_type = slapi_ch_strdup(index_type);
+    }
+
+    /* key value e.g. '1234' */
+    if (key_value) {
+        key_stat->key = (char *) slapi_ch_calloc(1, strlen(key_value) + 1);
+        memcpy(key_stat->key, key_value, strlen(key_value));
+    }
+
+    /* attribute name is e.g. 'uid' */
+    if (attribute_type) {
+        key_stat->attribute_type = slapi_ch_strdup(attribute_type);
+    }
+
+    /* Number of lookup IDs with the key */
+    key_stat->id_lookup_cnt = lookup_cnt;
+    if (op_stat->search_stat->keys_lookup) {
+        /* a key stat record already exists, add key_stat at the head */
+        key_stat->next = op_stat->search_stat->keys_lookup;
+    } else {
+        /* this is the first key stat record */
+        key_stat->next = NULL;
+    }
+    op_stat->search_stat->keys_lookup = key_stat;
+}

/*
 * Build a candidate list for a SUBTREE scope search.
@@ -1232,6 +1272,17 @@ subtree_candidates(
    if (candidates != NULL && (idl_length(candidates) > FILTER_TEST_THRESHOLD) && e) {
        IDList *tmp = candidates, *descendants = NULL;
        back_txn txn = {NULL};
+        Op_stat *op_stat = NULL;
+        char key_value[32] = {0};
+
+        /* statistics for index lookup is enabled */
+        if (LDAP_STAT_READ_INDEX & config_get_statlog_level()) {
+            op_stat = op_stat_get_operation_extension(pb);
+            if (op_stat) {
+                /* easier to just record the entry ID */
+                PR_snprintf(key_value, sizeof(key_value), "%lu", (u_long) e->ep_id);
+            }
+        }

        slapi_pblock_get(pb, SLAPI_TXN, &txn.back_txn_txn);
        if (entryrdn_get_noancestorid()) {
@@ -1239,12 +1290,20 @@ subtree_candidates(
            *err = entryrdn_get_subordinates(be,
                                             slapi_entry_get_sdn_const(e->ep_entry),
                                             e->ep_id, &descendants, &txn, 0);
+            if (op_stat) {
+                /* record entryrdn lookups */
+                stat_add_srch_lookup(op_stat, LDBM_ENTRYRDN_STR, indextype_EQUALITY, key_value, descendants ? descendants->b_nids : 0);
+            }
            idl_insert(&descendants, e->ep_id);
            candidates = idl_intersection(be, candidates, descendants);
            idl_free(&tmp);
            idl_free(&descendants);
        } else if (!has_tombstone_filter && !is_bulk_import) {
            *err = ldbm_ancestorid_read_ext(be, &txn, e->ep_id, &descendants, allidslimit);
+            if (op_stat) {
+                /* records ancestorid lookups */
+                stat_add_srch_lookup(op_stat, LDBM_ANCESTORID_STR, indextype_EQUALITY, key_value, descendants ? descendants->b_nids : 0);
+            }
            idl_insert(&descendants, e->ep_id);
            candidates = idl_intersection(be, candidates, descendants);
            idl_free(&tmp);
--
2.51.1

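The duration strings that these STAT records carry are produced with `slapi_timespec_diff` followed by a `"%" PRId64 ".%.09" PRId64` format of the resulting timespec. As an editorial sketch only, here is a self-contained analogue of that arithmetic and formatting; `ts_diff` and `ts_fmt` are hypothetical stand-ins, not the server's functions:

```c
#include <assert.h>
#include <inttypes.h>
#include <stdio.h>
#include <string.h>
#include <time.h>

/* Analogue of slapi_timespec_diff: diff = end - start, normalized so
 * that tv_nsec stays in [0, 1e9). */
static void ts_diff(const struct timespec *end, const struct timespec *start,
                    struct timespec *diff)
{
    diff->tv_sec = end->tv_sec - start->tv_sec;
    diff->tv_nsec = end->tv_nsec - start->tv_nsec;
    if (diff->tv_nsec < 0) {
        diff->tv_sec -= 1;
        diff->tv_nsec += 1000000000L;
    }
}

/* Format "seconds.nanoseconds" with zero-padded 9-digit nanoseconds,
 * matching the etime/duration style of the STAT log lines. */
static void ts_fmt(const struct timespec *d, char *buf, size_t len)
{
    snprintf(buf, len, "%" PRId64 ".%.09" PRId64,
             (int64_t)d->tv_sec, (int64_t)d->tv_nsec);
}
```

The normalization branch matters: when the end nanoseconds are smaller than the start nanoseconds, a second is borrowed so the printed value never shows a negative nanosecond field.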
@@ -0,0 +1,249 @@
From aced6f575f3be70f16756860f8b852d3447df867 Mon Sep 17 00:00:00 2001
From: tbordaz <tbordaz@redhat.com>
Date: Tue, 6 May 2025 16:09:36 +0200
Subject: [PATCH] Issue 6764 - statistics about index lookup report a wrong
 duration (#6765)

Bug description:
During a SRCH, statistics about index lookups
(when nsslapd-statlog-level=1) report a duration.
It is wrong because a duration should be reported per filter
component.

Fix description:
Record an index lookup duration per key
using key_lookup_start/key_lookup_end.

fixes: #6764

Reviewed by: Pierre Rogier (Thanks !)

(cherry picked from commit cd8069a76bcbb2d7bb4ac3bb9466019b01cc6db3)
Signed-off-by: Masahiro Matsuya <mmatsuya@redhat.com>
---
 ldap/servers/slapd/back-ldbm/filterindex.c | 17 +++++++-----
 ldap/servers/slapd/back-ldbm/ldbm_search.c | 31 +++++++++++++++-------
 ldap/servers/slapd/result.c | 12 +++++----
 ldap/servers/slapd/slapi-plugin.h | 9 +++++++
 ldap/servers/slapd/slapi-private.h | 2 ++
 ldap/servers/slapd/time.c | 13 +++++++++
 6 files changed, 62 insertions(+), 22 deletions(-)

diff --git a/ldap/servers/slapd/back-ldbm/filterindex.c b/ldap/servers/slapd/back-ldbm/filterindex.c
index 30550dde7..abc502b96 100644
--- a/ldap/servers/slapd/back-ldbm/filterindex.c
+++ b/ldap/servers/slapd/back-ldbm/filterindex.c
@@ -1040,8 +1040,7 @@ keys2idl(
    int allidslimit)
{
    IDList *idl = NULL;
-    Op_stat *op_stat;
-    PRBool collect_stat = PR_FALSE;
+    Op_stat *op_stat = NULL;

    slapi_log_err(SLAPI_LOG_TRACE, "keys2idl", "=> type %s indextype %s\n", type, indextype);

@@ -1049,8 +1048,9 @@ keys2idl(
    if (LDAP_STAT_READ_INDEX & config_get_statlog_level()) {
        op_stat = op_stat_get_operation_extension(pb);
        if (op_stat->search_stat) {
-            collect_stat = PR_TRUE;
            clock_gettime(CLOCK_MONOTONIC, &(op_stat->search_stat->keys_lookup_start));
+        } else {
+            op_stat = NULL;
        }
    }

@@ -1059,11 +1059,14 @@ keys2idl(
        struct component_keys_lookup *key_stat;
        int key_len;

-        idl2 = index_read_ext_allids(pb, be, type, indextype, slapi_value_get_berval(ivals[i]), txn, err, unindexed, allidslimit);
-        if (collect_stat) {
+        if (op_stat) {
            /* gather the index lookup statistics */
            key_stat = (struct component_keys_lookup *) slapi_ch_calloc(1, sizeof (struct component_keys_lookup));
-
+            clock_gettime(CLOCK_MONOTONIC, &(key_stat->key_lookup_start));
+        }
+        idl2 = index_read_ext_allids(pb, be, type, indextype, slapi_value_get_berval(ivals[i]), txn, err, unindexed, allidslimit);
+        if (op_stat) {
+            clock_gettime(CLOCK_MONOTONIC, &(key_stat->key_lookup_end));
            /* indextype e.g. "eq" or "sub" (see index.c) */
            if (indextype) {
                key_stat->index_type = slapi_ch_strdup(indextype);
@@ -1125,7 +1128,7 @@ keys2idl(
    }

    /* All the keys have been fetched, time to take the completion time */
-    if (collect_stat) {
+    if (op_stat) {
        clock_gettime(CLOCK_MONOTONIC, &(op_stat->search_stat->keys_lookup_end));
    }

diff --git a/ldap/servers/slapd/back-ldbm/ldbm_search.c b/ldap/servers/slapd/back-ldbm/ldbm_search.c
index 5d98e288e..27301f453 100644
--- a/ldap/servers/slapd/back-ldbm/ldbm_search.c
+++ b/ldap/servers/slapd/back-ldbm/ldbm_search.c
@@ -35,7 +35,7 @@ static IDList *onelevel_candidates(Slapi_PBlock *pb, backend *be, const char *ba
static back_search_result_set *new_search_result_set(IDList *idl, int vlv, int lookthroughlimit);
static void delete_search_result_set(Slapi_PBlock *pb, back_search_result_set **sr);
static int can_skip_filter_test(Slapi_PBlock *pb, struct slapi_filter *f, int scope, IDList *idl);
-static void stat_add_srch_lookup(Op_stat *op_stat, char * attribute_type, const char* index_type, char *key_value, int lookup_cnt);
+static void stat_add_srch_lookup(Op_stat *op_stat, struct component_keys_lookup *key_stat, char * attribute_type, const char* index_type, char *key_value, int lookup_cnt);

/* This is for performance testing, allows us to disable ACL checking altogether */
#if defined(DISABLE_ACL_CHECK)
@@ -1169,17 +1169,12 @@ create_subtree_filter(Slapi_Filter *filter, int managedsait, Slapi_Filter **focr
}

static void
-stat_add_srch_lookup(Op_stat *op_stat, char * attribute_type, const char* index_type, char *key_value, int lookup_cnt)
+stat_add_srch_lookup(Op_stat *op_stat, struct component_keys_lookup *key_stat, char * attribute_type, const char* index_type, char *key_value, int lookup_cnt)
{
-    struct component_keys_lookup *key_stat;
-
-    if ((op_stat == NULL) || (op_stat->search_stat == NULL)) {
+    if ((op_stat == NULL) || (op_stat->search_stat == NULL) || (key_stat == NULL)) {
        return;
    }

-    /* gather the index lookup statistics */
-    key_stat = (struct component_keys_lookup *) slapi_ch_calloc(1, sizeof (struct component_keys_lookup));
-
    /* indextype is "eq" */
    if (index_type) {
        key_stat->index_type = slapi_ch_strdup(index_type);
@@ -1286,23 +1281,39 @@ subtree_candidates(

        slapi_pblock_get(pb, SLAPI_TXN, &txn.back_txn_txn);
        if (entryrdn_get_noancestorid()) {
+            struct component_keys_lookup *key_stat;
+
+            if (op_stat) {
+                /* gather the index lookup statistics */
+                key_stat = (struct component_keys_lookup *) slapi_ch_calloc(1, sizeof (struct component_keys_lookup));
+                clock_gettime(CLOCK_MONOTONIC, &key_stat->key_lookup_start);
+            }
            /* subtree-rename: on && no ancestorid */
            *err = entryrdn_get_subordinates(be,
                                             slapi_entry_get_sdn_const(e->ep_entry),
                                             e->ep_id, &descendants, &txn, 0);
            if (op_stat) {
+                clock_gettime(CLOCK_MONOTONIC, &key_stat->key_lookup_end);
                /* record entryrdn lookups */
-                stat_add_srch_lookup(op_stat, LDBM_ENTRYRDN_STR, indextype_EQUALITY, key_value, descendants ? descendants->b_nids : 0);
+                stat_add_srch_lookup(op_stat, key_stat, LDBM_ENTRYRDN_STR, indextype_EQUALITY, key_value, descendants ? descendants->b_nids : 0);
            }
            idl_insert(&descendants, e->ep_id);
            candidates = idl_intersection(be, candidates, descendants);
            idl_free(&tmp);
            idl_free(&descendants);
        } else if (!has_tombstone_filter && !is_bulk_import) {
+            struct component_keys_lookup *key_stat;
+
+            if (op_stat) {
+                /* gather the index lookup statistics */
+                key_stat = (struct component_keys_lookup *) slapi_ch_calloc(1, sizeof (struct component_keys_lookup));
+                clock_gettime(CLOCK_MONOTONIC, &key_stat->key_lookup_start);
+            }
            *err = ldbm_ancestorid_read_ext(be, &txn, e->ep_id, &descendants, allidslimit);
            if (op_stat) {
+                clock_gettime(CLOCK_MONOTONIC, &key_stat->key_lookup_end);
                /* records ancestorid lookups */
-                stat_add_srch_lookup(op_stat, LDBM_ANCESTORID_STR, indextype_EQUALITY, key_value, descendants ? descendants->b_nids : 0);
+                stat_add_srch_lookup(op_stat, key_stat, LDBM_ANCESTORID_STR, indextype_EQUALITY, key_value, descendants ? descendants->b_nids : 0);
            }
            idl_insert(&descendants, e->ep_id);
            candidates = idl_intersection(be, candidates, descendants);
diff --git a/ldap/servers/slapd/result.c b/ldap/servers/slapd/result.c
index 87641e92f..f40556de8 100644
--- a/ldap/servers/slapd/result.c
+++ b/ldap/servers/slapd/result.c
@@ -2089,19 +2089,21 @@ log_op_stat(Slapi_PBlock *pb, uint64_t connid, int32_t op_id, int32_t op_interna
            op_stat->search_stat) {
            struct component_keys_lookup *key_info;
            for (key_info = op_stat->search_stat->keys_lookup; key_info; key_info = key_info->next) {
+                slapi_timespec_diff(&key_info->key_lookup_end, &key_info->key_lookup_start, &duration);
+                snprintf(stat_etime, ETIME_BUFSIZ, "%" PRId64 ".%.09" PRId64 "", (int64_t)duration.tv_sec, (int64_t)duration.tv_nsec);
                if (internal_op) {
                    slapi_log_stat(LDAP_STAT_READ_INDEX,
-                                   connid == 0 ? STAT_LOG_CONN_OP_FMT_INT_INT "STAT read index: attribute=%s key(%s)=%s --> count %d\n":
-                                                 STAT_LOG_CONN_OP_FMT_EXT_INT "STAT read index: attribute=%s key(%s)=%s --> count %d\n",
+                                   connid == 0 ? STAT_LOG_CONN_OP_FMT_INT_INT "STAT read index: attribute=%s key(%s)=%s --> count %d (duration %s)\n":
+                                                 STAT_LOG_CONN_OP_FMT_EXT_INT "STAT read index: attribute=%s key(%s)=%s --> count %d (duration %s)\n",
                                   connid, op_id, op_internal_id, op_nested_count,
                                   key_info->attribute_type, key_info->index_type, key_info->key,
-                                   key_info->id_lookup_cnt);
+                                   key_info->id_lookup_cnt, stat_etime);
                } else {
                    slapi_log_stat(LDAP_STAT_READ_INDEX,
-                                   "conn=%" PRIu64 " op=%d STAT read index: attribute=%s key(%s)=%s --> count %d\n",
+                                   "conn=%" PRIu64 " op=%d STAT read index: attribute=%s key(%s)=%s --> count %d (duration %s)\n",
                                   connid, op_id,
                                   key_info->attribute_type, key_info->index_type, key_info->key,
-                                   key_info->id_lookup_cnt);
+                                   key_info->id_lookup_cnt, stat_etime);
                }
            }

diff --git a/ldap/servers/slapd/slapi-plugin.h b/ldap/servers/slapd/slapi-plugin.h
index a84a60c92..00e9722d2 100644
--- a/ldap/servers/slapd/slapi-plugin.h
+++ b/ldap/servers/slapd/slapi-plugin.h
@@ -8314,6 +8314,15 @@ void DS_Sleep(PRIntervalTime ticks);
 * \param struct timespec c the difference.
 */
void slapi_timespec_diff(struct timespec *a, struct timespec *b, struct timespec *diff);
+
+/**
+ * add 'new' timespec into 'cumul'
|
||||
+ * clock_monotonic to find time taken to perform operations.
|
||||
+ *
|
||||
+ * \param struct timespec cumul to compute total duration.
|
||||
+ * \param struct timespec new is a additional duration
|
||||
+ */
|
||||
+void slapi_timespec_add(struct timespec *cumul, struct timespec *new);
|
||||
/**
|
||||
* Given an operation, determine the time elapsed since the op
|
||||
* began.
|
||||
diff --git a/ldap/servers/slapd/slapi-private.h b/ldap/servers/slapd/slapi-private.h
|
||||
index bd7a4b39d..dfb0e272a 100644
|
||||
--- a/ldap/servers/slapd/slapi-private.h
|
||||
+++ b/ldap/servers/slapd/slapi-private.h
|
||||
@@ -457,6 +457,8 @@ struct component_keys_lookup
|
||||
char *attribute_type;
|
||||
char *key;
|
||||
int id_lookup_cnt;
|
||||
+ struct timespec key_lookup_start;
|
||||
+ struct timespec key_lookup_end;
|
||||
struct component_keys_lookup *next;
|
||||
};
|
||||
typedef struct op_search_stat
|
||||
diff --git a/ldap/servers/slapd/time.c b/ldap/servers/slapd/time.c
|
||||
index 0406c3689..0dd457fbe 100644
|
||||
--- a/ldap/servers/slapd/time.c
|
||||
+++ b/ldap/servers/slapd/time.c
|
||||
@@ -272,6 +272,19 @@ slapi_timespec_diff(struct timespec *a, struct timespec *b, struct timespec *dif
|
||||
diff->tv_nsec = nsec;
|
||||
}
|
||||
|
||||
+void
|
||||
+slapi_timespec_add(struct timespec *cumul, struct timespec *new)
|
||||
+{
|
||||
+ /* Now add the two */
|
||||
+ time_t sec = cumul->tv_sec + new->tv_sec;
|
||||
+ long nsec = cumul->tv_nsec + new->tv_nsec;
|
||||
+
|
||||
+ sec += nsec / 1000000000;
|
||||
+ nsec = nsec % 1000000000;
|
||||
+ cumul->tv_sec = sec;
|
||||
+ cumul->tv_nsec = nsec;
|
||||
+}
|
||||
+
|
||||
void
|
||||
slapi_timespec_expire_at(time_t timeout, struct timespec *expire)
|
||||
{
|
||||
--
|
||||
2.51.1
|
||||
|
||||
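The duration string this patch appends to the `STAT read index` line comes from a monotonic start/end pair reduced with `slapi_timespec_diff` (and optionally accumulated with `slapi_timespec_add`), then printed as seconds plus nine-digit nanoseconds via the `%" PRId64 ".%.09" PRId64 "` format. A minimal Python sketch of that arithmetic, using `(sec, nsec)` tuples instead of `struct timespec` (helper names are illustrative, not the server's API):

```python
def timespec_diff(end, start):
    """Return (sec, nsec) of end - start, borrowing from sec when nsec underflows."""
    sec = end[0] - start[0]
    nsec = end[1] - start[1]
    if nsec < 0:  # borrow one second, as slapi_timespec_diff does
        sec -= 1
        nsec += 1_000_000_000
    return sec, nsec

def timespec_add(cumul, new):
    """Accumulate new into cumul, carrying nsec overflow into sec (mirrors slapi_timespec_add)."""
    sec = cumul[0] + new[0]
    nsec = cumul[1] + new[1]
    sec += nsec // 1_000_000_000
    nsec %= 1_000_000_000
    return sec, nsec

def format_etime(sec, nsec):
    """Render the duration exactly like the snprintf format: seconds, dot, 9-digit nsec."""
    return f"{sec}.{nsec:09d}"
```

For example, `format_etime(*timespec_diff((5, 100), (3, 900_000_000)))` yields `"1.100000100"`, matching the zero-padded `(duration %s)` field in the log line.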
@ -0,0 +1,82 @@
From c8a5594efdb2722b6dceaed16219039d8e59c888 Mon Sep 17 00:00:00 2001
From: Thierry Bordaz <tbordaz@redhat.com>
Date: Thu, 5 Jun 2025 10:33:29 +0200
Subject: [PATCH] Issue 6470 (Cont) - Some replication status data are reset
 upon a restart

(cherry picked from commit a8b419dab31f4fa9fca8c33fe04a79e7a34965e5)
Signed-off-by: Masahiro Matsuya <mmatsuya@redhat.com>
---
 ldap/servers/plugins/replication/repl5_agmt.c | 4 ++--
 ldap/servers/slapd/slapi-plugin.h | 8 --------
 ldap/servers/slapd/time.c | 13 -------------
 3 files changed, 2 insertions(+), 23 deletions(-)

diff --git a/ldap/servers/plugins/replication/repl5_agmt.c b/ldap/servers/plugins/replication/repl5_agmt.c
index c3b8d298c..229783763 100644
--- a/ldap/servers/plugins/replication/repl5_agmt.c
+++ b/ldap/servers/plugins/replication/repl5_agmt.c
@@ -537,7 +537,7 @@ agmt_new_from_entry(Slapi_Entry *e)
if (val) {
strcpy(ra->last_init_status, val);
}
- ra->changecounters = (struct changecounter **)slapi_ch_calloc(MAX_NUM_OF_SUPPLIERS + 1,
+ ra->changecounters = (struct changecounter **)slapi_ch_calloc(MAX_NUM_OF_MASTERS + 1,
sizeof(struct changecounter *));
ra->num_changecounters = 0;
ra->max_changecounters = MAX_NUM_OF_MASTERS;
@@ -2615,7 +2615,7 @@ agmt_update_init_status(Repl_Agmt *ra)
mods[nb_mods] = NULL;

slapi_modify_internal_set_pb_ext(pb, ra->dn, mods, NULL, NULL,
- repl_get_plugin_identity(PLUGIN_MULTISUPPLIER_REPLICATION), 0);
+ repl_get_plugin_identity(PLUGIN_MULTIMASTER_REPLICATION), 0);
slapi_modify_internal_pb(pb);

slapi_pblock_get(pb, SLAPI_PLUGIN_INTOP_RESULT, &rc);
diff --git a/ldap/servers/slapd/slapi-plugin.h b/ldap/servers/slapd/slapi-plugin.h
index 00e9722d2..677be1db0 100644
--- a/ldap/servers/slapd/slapi-plugin.h
+++ b/ldap/servers/slapd/slapi-plugin.h
@@ -8315,14 +8315,6 @@ void DS_Sleep(PRIntervalTime ticks);
 */
void slapi_timespec_diff(struct timespec *a, struct timespec *b, struct timespec *diff);

-/**
- * add 'new' timespect into 'cumul'
- * clock_monotonic to find time taken to perform operations.
- *
- * \param struct timespec cumul to compute total duration.
- * \param struct timespec new is a additional duration
- */
-void slapi_timespec_add(struct timespec *cumul, struct timespec *new);
/**
 * Given an operation, determine the time elapsed since the op
 * began.
diff --git a/ldap/servers/slapd/time.c b/ldap/servers/slapd/time.c
index 0dd457fbe..0406c3689 100644
--- a/ldap/servers/slapd/time.c
+++ b/ldap/servers/slapd/time.c
@@ -272,19 +272,6 @@ slapi_timespec_diff(struct timespec *a, struct timespec *b, struct timespec *dif
diff->tv_nsec = nsec;
}

-void
-slapi_timespec_add(struct timespec *cumul, struct timespec *new)
-{
- /* Now add the two */
- time_t sec = cumul->tv_sec + new->tv_sec;
- long nsec = cumul->tv_nsec + new->tv_nsec;
-
- sec += nsec / 1000000000;
- nsec = nsec % 1000000000;
- cumul->tv_sec = sec;
- cumul->tv_nsec = nsec;
-}
-
void
slapi_timespec_expire_at(time_t timeout, struct timespec *expire)
{
--
2.51.1

@ -0,0 +1,101 @@
From a7231528b5ad7e887eeed4317de48d054cd046cd Mon Sep 17 00:00:00 2001
From: Mark Reynolds <mreynolds@redhat.com>
Date: Wed, 23 Jul 2025 19:35:32 -0400
Subject: [PATCH] Issue 6895 - Crash if repl keep alive entry can not be
 created

Description:

Heap use-after-free when logging that the replication keep-alive entry cannot
be created. slapi_add_internal_pb() frees the slapi entry, then
we try to get the dn from the entry and we get a use-after-free crash.

Relates: https://github.com/389ds/389-ds-base/issues/6895

Reviewed by: spichugi(Thanks!)

(cherry picked from commit 43ab6b1d1de138d6be03b657f27cbb6ba19ddd14)
Signed-off-by: Masahiro Matsuya <mmatsuya@redhat.com>
---
 ldap/servers/plugins/chainingdb/cb_config.c | 3 +--
 ldap/servers/plugins/posix-winsync/posix-winsync.c | 1 -
 ldap/servers/plugins/replication/repl5_init.c | 3 ---
 ldap/servers/plugins/replication/repl5_replica.c | 8 ++++----
 4 files changed, 5 insertions(+), 10 deletions(-)

diff --git a/ldap/servers/plugins/chainingdb/cb_config.c b/ldap/servers/plugins/chainingdb/cb_config.c
index 40a7088d7..24fa1bcb3 100644
--- a/ldap/servers/plugins/chainingdb/cb_config.c
+++ b/ldap/servers/plugins/chainingdb/cb_config.c
@@ -44,8 +44,7 @@ cb_config_add_dse_entries(cb_backend *cb, char **entries, char *string1, char *s
slapi_pblock_get(util_pb, SLAPI_PLUGIN_INTOP_RESULT, &res);
if (LDAP_SUCCESS != res && LDAP_ALREADY_EXISTS != res) {
slapi_log_err(SLAPI_LOG_ERR, CB_PLUGIN_SUBSYSTEM,
- "cb_config_add_dse_entries - Unable to add config entry (%s) to the DSE: %s\n",
- slapi_entry_get_dn(e),
+ "cb_config_add_dse_entries - Unable to add config entry to the DSE: %s\n",
ldap_err2string(res));
rc = res;
slapi_pblock_destroy(util_pb);
diff --git a/ldap/servers/plugins/posix-winsync/posix-winsync.c b/ldap/servers/plugins/posix-winsync/posix-winsync.c
index 56efb2330..ab37497cd 100644
--- a/ldap/servers/plugins/posix-winsync/posix-winsync.c
+++ b/ldap/servers/plugins/posix-winsync/posix-winsync.c
@@ -1625,7 +1625,6 @@ posix_winsync_end_update_cb(void *cbdata __attribute__((unused)),
"posix_winsync_end_update_cb: "
"add task entry\n");
}
- /* slapi_entry_free(e_task); */
slapi_pblock_destroy(pb);
pb = NULL;
posix_winsync_config_reset_MOFTaskCreated();
diff --git a/ldap/servers/plugins/replication/repl5_init.c b/ldap/servers/plugins/replication/repl5_init.c
index 5a748e35a..9b6523a2e 100644
--- a/ldap/servers/plugins/replication/repl5_init.c
+++ b/ldap/servers/plugins/replication/repl5_init.c
@@ -682,7 +682,6 @@ create_repl_schema_policy(void)
repl_schema_top,
ldap_err2string(return_value));
rc = -1;
- slapi_entry_free(e); /* The entry was not consumed */
goto done;
}
slapi_pblock_destroy(pb);
@@ -703,7 +702,6 @@ create_repl_schema_policy(void)
repl_schema_supplier,
ldap_err2string(return_value));
rc = -1;
- slapi_entry_free(e); /* The entry was not consumed */
goto done;
}
slapi_pblock_destroy(pb);
@@ -724,7 +722,6 @@ create_repl_schema_policy(void)
repl_schema_consumer,
ldap_err2string(return_value));
rc = -1;
- slapi_entry_free(e); /* The entry was not consumed */
goto done;
}
slapi_pblock_destroy(pb);
diff --git a/ldap/servers/plugins/replication/repl5_replica.c b/ldap/servers/plugins/replication/repl5_replica.c
index d67f1bc71..cec140140 100644
--- a/ldap/servers/plugins/replication/repl5_replica.c
+++ b/ldap/servers/plugins/replication/repl5_replica.c
@@ -440,10 +440,10 @@ replica_subentry_create(const char *repl_root, ReplicaId rid)
if (return_value != LDAP_SUCCESS &&
return_value != LDAP_ALREADY_EXISTS &&
return_value != LDAP_REFERRAL /* CONSUMER */) {
- slapi_log_err(SLAPI_LOG_ERR, repl_plugin_name, "replica_subentry_create - Unable to "
- "create replication keep alive entry %s: error %d - %s\n",
- slapi_entry_get_dn_const(e),
- return_value, ldap_err2string(return_value));
+ slapi_log_err(SLAPI_LOG_ERR, repl_plugin_name, "replica_subentry_create - "
+ "Unable to create replication keep alive entry 'cn=%s %d,%s': error %d - %s\n",
+ KEEP_ALIVE_ENTRY, rid, repl_root,
+ return_value, ldap_err2string(return_value));
rc = -1;
goto done;
}
--
2.51.1

@ -0,0 +1,720 @@
From 18a807e0e23b1160ea61e05e721da9fbd0c560b1 Mon Sep 17 00:00:00 2001
From: Simon Pichugin <spichugi@redhat.com>
Date: Mon, 28 Jul 2025 15:41:29 -0700
Subject: [PATCH] Issue 6884 - Mask password hashes in audit logs (#6885)

Description: Fix the audit log functionality to mask password hash values for
userPassword, nsslapd-rootpw, nsmultiplexorcredentials, nsds5ReplicaCredentials,
and nsds5ReplicaBootstrapCredentials attributes in ADD and MODIFY operations.
Update auditlog.c to detect password attributes and replace their values with
asterisks (**********************) in both LDIF and JSON audit log formats.
Add a comprehensive test suite audit_password_masking_test.py to verify
password masking works correctly across all log formats and operation types.

Fixes: https://github.com/389ds/389-ds-base/issues/6884

Reviewed by: @mreynolds389, @vashirov (Thanks!!)

(cherry picked from commit 24f9aea1ae7e29bd885212825dc52d2a5db08a03)
Signed-off-by: Masahiro Matsuya <mmatsuya@redhat.com>
---
 .../logging/audit_password_masking_test.py | 457 ++++++++++++++++++
 ldap/servers/slapd/auditlog.c | 144 +++++-
 ldap/servers/slapd/slapi-private.h | 1 +
 src/lib389/lib389/chaining.py | 3 +-
 4 files changed, 586 insertions(+), 19 deletions(-)
 create mode 100644 dirsrvtests/tests/suites/logging/audit_password_masking_test.py

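Per the commit message, the auditlog.c change detects a fixed set of credential attributes and replaces their values with a run of asterisks before the mod is written out. A rough Python sketch of that idea for a single LDIF-style `attr: value` line (attribute list taken from the commit message; the helper name and matching strategy are illustrative, not the server's implementation):

```python
MASKED_VALUE = "**********************"

# Attributes whose values must never reach the audit log in clear or hashed
# form (from the commit message; LDAP attribute names compare case-insensitively).
PASSWORD_ATTRS = {
    "userpassword",
    "nsslapd-rootpw",
    "nsmultiplexorcredentials",
    "nsds5replicacredentials",
    "nsds5replicabootstrapcredentials",
}

def mask_ldif_line(line: str) -> str:
    """Replace the value of a password attribute in one 'attr: value' LDIF line."""
    attr, sep, _value = line.partition(": ")
    if sep and attr.lower() in PASSWORD_ATTRS:
        return f"{attr}: {MASKED_VALUE}"
    return line
```

Non-password attributes pass through untouched, so `cn: Test Add User` is logged as-is while `userPassword: MySecret123` becomes `userPassword: **********************`.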
diff --git a/dirsrvtests/tests/suites/logging/audit_password_masking_test.py b/dirsrvtests/tests/suites/logging/audit_password_masking_test.py
new file mode 100644
index 000000000..ae379cbba
--- /dev/null
+++ b/dirsrvtests/tests/suites/logging/audit_password_masking_test.py
@@ -0,0 +1,457 @@
+# --- BEGIN COPYRIGHT BLOCK ---
+# Copyright (C) 2025 Red Hat, Inc.
+# All rights reserved.
+#
+# License: GPL (version 3 or any later version).
+# See LICENSE for details.
+# --- END COPYRIGHT BLOCK ---
+#
+import logging
+import pytest
+import os
+import re
+import time
+import ldap
+from lib389._constants import DEFAULT_SUFFIX, DN_DM, PW_DM
+from lib389.topologies import topology_m2 as topo
+from lib389.idm.user import UserAccounts
+from lib389.plugins import ChainingBackendPlugin
+from lib389.chaining import ChainingLinks
+from lib389.agreement import Agreements
+from lib389.replica import ReplicationManager, Replicas
+from lib389.idm.directorymanager import DirectoryManager
+
+log = logging.getLogger(__name__)
+
+MASKED_PASSWORD = "**********************"
+TEST_PASSWORD = "MySecret123"
+TEST_PASSWORD_2 = "NewPassword789"
+TEST_PASSWORD_3 = "NewPassword101"
+
+
+def setup_audit_logging(inst, log_format='default', display_attrs=None):
+    """Configure audit logging settings"""
+    inst.config.replace('nsslapd-auditlog-logging-enabled', 'on')
+
+    if display_attrs is not None:
+        inst.config.replace('nsslapd-auditlog-display-attrs', display_attrs)
+
+    inst.deleteAuditLogs()
+
+
+def check_password_masked(inst, log_format, expected_password, actual_password):
+    """Helper function to check password masking in audit logs"""
+
+    inst.restart()  # Flush the logs
+
+    # List of all password/credential attributes that should be masked
+    password_attributes = [
+        'userPassword',
+        'nsslapd-rootpw',
+        'nsmultiplexorcredentials',
+        'nsDS5ReplicaCredentials',
+        'nsDS5ReplicaBootstrapCredentials'
+    ]
+
+    # Get password schemes to check for hash leakage
+    user_password_scheme = inst.config.get_attr_val_utf8('passwordStorageScheme')
+    root_password_scheme = inst.config.get_attr_val_utf8('nsslapd-rootpwstoragescheme')
+
+    # Check LDIF format logs
+    found_masked = False
+    found_actual = False
+    found_hashed = False
+
+    # Check each password attribute for masked password
+    for attr in password_attributes:
+        if inst.ds_audit_log.match(f"{attr}: {re.escape(expected_password)}"):
+            found_masked = True
+        if inst.ds_audit_log.match(f"{attr}: {actual_password}"):
+            found_actual = True
+
+    # Check for hashed passwords in LDIF format
+    if user_password_scheme:
+        if inst.ds_audit_log.match(f"userPassword: {{{user_password_scheme}}}"):
+            found_hashed = True
+    if root_password_scheme:
+        if inst.ds_audit_log.match(f"nsslapd-rootpw: {{{root_password_scheme}}}"):
+            found_hashed = True
+
+    # Delete audit logs to avoid interference with other tests
+    # We need to reset the root password to default as deleteAuditLogs()
+    # opens a new connection with the default password
+    dm = DirectoryManager(inst)
+    dm.change_password(PW_DM)
+    inst.deleteAuditLogs()
+
+    return found_masked, found_actual, found_hashed
+
+
+@pytest.mark.parametrize("log_format,display_attrs", [
+    ("default", None),
+    ("default", "*"),
+    ("default", "userPassword"),
+])
+def test_password_masking_add_operation(topo, log_format, display_attrs):
+    """Test password masking in ADD operations
+
+    :id: 4358bd75-bcc7-401c-b492-d3209b10412d
+    :parametrized: yes
+    :setup: Standalone Instance
+    :steps:
+        1. Configure audit logging format
+        2. Add user with password
+        3. Check that password is masked in audit log
+        4. Verify actual password does not appear in log
+    :expectedresults:
+        1. Success
+        2. Success
+        3. Password should be masked with asterisks
+        4. Actual password should not be found in log
+    """
+    inst = topo.ms['supplier1']
+    setup_audit_logging(inst, log_format, display_attrs)
+
+    users = UserAccounts(inst, DEFAULT_SUFFIX)
+    user = None
+
+    try:
+        user = users.create(properties={
+            'uid': 'test_add_pwd_mask',
+            'cn': 'Test Add User',
+            'sn': 'User',
+            'uidNumber': '1000',
+            'gidNumber': '1000',
+            'homeDirectory': '/home/test_add',
+            'userPassword': TEST_PASSWORD
+        })
+
+        found_masked, found_actual, found_hashed = check_password_masked(inst, log_format, MASKED_PASSWORD, TEST_PASSWORD)
+
+        assert found_masked, f"Masked password not found in {log_format} ADD operation"
+        assert not found_actual, f"Actual password found in {log_format} ADD log (should be masked)"
+        assert not found_hashed, f"Hashed password found in {log_format} ADD log (should be masked)"
+
+    finally:
+        if user is not None:
+            try:
+                user.delete()
+            except:
+                pass
+
+
+@pytest.mark.parametrize("log_format,display_attrs", [
+    ("default", None),
+    ("default", "*"),
+    ("default", "userPassword"),
+])
+def test_password_masking_modify_operation(topo, log_format, display_attrs):
+    """Test password masking in MODIFY operations
+
+    :id: e6963aa9-7609-419c-aae2-1d517aa434bd
+    :parametrized: yes
+    :setup: Standalone Instance
+    :steps:
+        1. Configure audit logging format
+        2. Add user without password
+        3. Add password via MODIFY operation
+        4. Check that password is masked in audit log
+        5. Modify password to new value
+        6. Check that new password is also masked
+        7. Verify actual passwords do not appear in log
+    :expectedresults:
+        1. Success
+        2. Success
+        3. Success
+        4. Password should be masked with asterisks
+        5. Success
+        6. New password should be masked with asterisks
+        7. No actual password values should be found in log
+    """
+    inst = topo.ms['supplier1']
+    setup_audit_logging(inst, log_format, display_attrs)
+
+    users = UserAccounts(inst, DEFAULT_SUFFIX)
+    user = None
+
+    try:
+        user = users.create(properties={
+            'uid': 'test_modify_pwd_mask',
+            'cn': 'Test Modify User',
+            'sn': 'User',
+            'uidNumber': '2000',
+            'gidNumber': '2000',
+            'homeDirectory': '/home/test_modify'
+        })
+
+        user.replace('userPassword', TEST_PASSWORD)
+
+        found_masked, found_actual, found_hashed = check_password_masked(inst, log_format, MASKED_PASSWORD, TEST_PASSWORD)
+        assert found_masked, f"Masked password not found in {log_format} MODIFY operation (first password)"
+        assert not found_actual, f"Actual password found in {log_format} MODIFY log (should be masked)"
+        assert not found_hashed, f"Hashed password found in {log_format} MODIFY log (should be masked)"
+
+        user.replace('userPassword', TEST_PASSWORD_2)
+
+        found_masked_2, found_actual_2, found_hashed_2 = check_password_masked(inst, log_format, MASKED_PASSWORD, TEST_PASSWORD_2)
+        assert found_masked_2, f"Masked password not found in {log_format} MODIFY operation (second password)"
+        assert not found_actual_2, f"Second actual password found in {log_format} MODIFY log (should be masked)"
+        assert not found_hashed_2, f"Second hashed password found in {log_format} MODIFY log (should be masked)"
+
+    finally:
+        if user is not None:
+            try:
+                user.delete()
+            except:
+                pass
+
+
+@pytest.mark.parametrize("log_format,display_attrs", [
+    ("default", None),
+    ("default", "*"),
+    ("default", "nsslapd-rootpw"),
+])
+def test_password_masking_rootpw_modify_operation(topo, log_format, display_attrs):
+    """Test password masking for nsslapd-rootpw MODIFY operations
+
+    :id: ec8c9fd4-56ba-4663-ab65-58efb3b445e4
+    :parametrized: yes
+    :setup: Standalone Instance
+    :steps:
+        1. Configure audit logging format
+        2. Modify nsslapd-rootpw in configuration
+        3. Check that root password is masked in audit log
+        4. Modify root password to new value
+        5. Check that new root password is also masked
+        6. Verify actual root passwords do not appear in log
+    :expectedresults:
+        1. Success
+        2. Success
+        3. Root password should be masked with asterisks
+        4. Success
+        5. New root password should be masked with asterisks
+        6. No actual root password values should be found in log
+    """
+    inst = topo.ms['supplier1']
+    setup_audit_logging(inst, log_format, display_attrs)
+    dm = DirectoryManager(inst)
+
+    try:
+        dm.change_password(TEST_PASSWORD)
+        dm.rebind(TEST_PASSWORD)
+        dm.change_password(PW_DM)
+
+        found_masked, found_actual, found_hashed = check_password_masked(inst, log_format, MASKED_PASSWORD, TEST_PASSWORD)
+        assert found_masked, f"Masked root password not found in {log_format} MODIFY operation (first root password)"
+        assert not found_actual, f"Actual root password found in {log_format} MODIFY log (should be masked)"
+        assert not found_hashed, f"Hashed root password found in {log_format} MODIFY log (should be masked)"
+
+        dm.change_password(TEST_PASSWORD_2)
+        dm.rebind(TEST_PASSWORD_2)
+        dm.change_password(PW_DM)
+
+        found_masked_2, found_actual_2, found_hashed_2 = check_password_masked(inst, log_format, MASKED_PASSWORD, TEST_PASSWORD_2)
+        assert found_masked_2, f"Masked root password not found in {log_format} MODIFY operation (second root password)"
+        assert not found_actual_2, f"Second actual root password found in {log_format} MODIFY log (should be masked)"
+        assert not found_hashed_2, f"Second hashed root password found in {log_format} MODIFY log (should be masked)"
+
+    finally:
+        dm.change_password(PW_DM)
+        dm.rebind(PW_DM)
+
+
+@pytest.mark.parametrize("log_format,display_attrs", [
+    ("default", None),
+    ("default", "*"),
+    ("default", "nsmultiplexorcredentials"),
+])
+def test_password_masking_multiplexor_credentials(topo, log_format, display_attrs):
+    """Test password masking for nsmultiplexorcredentials in chaining/multiplexor configurations
+
+    :id: 161a9498-b248-4926-90be-a696a36ed36e
+    :parametrized: yes
+    :setup: Standalone Instance
+    :steps:
+        1. Configure audit logging format
+        2. Create a chaining backend configuration entry with nsmultiplexorcredentials
+        3. Check that multiplexor credentials are masked in audit log
+        4. Modify the credentials
+        5. Check that updated credentials are also masked
+        6. Verify actual credentials do not appear in log
+    :expectedresults:
+        1. Success
+        2. Success
+        3. Multiplexor credentials should be masked with asterisks
+        4. Success
+        5. Updated credentials should be masked with asterisks
+        6. No actual credential values should be found in log
+    """
+    inst = topo.ms['supplier1']
+    setup_audit_logging(inst, log_format, display_attrs)
+
+    # Enable chaining plugin and create chaining link
+    chain_plugin = ChainingBackendPlugin(inst)
+    chain_plugin.enable()
+
+    chains = ChainingLinks(inst)
+    chain = None
+
+    try:
+        # Create chaining link with multiplexor credentials
+        chain = chains.create(properties={
+            'cn': 'testchain',
+            'nsfarmserverurl': 'ldap://localhost:389/',
+            'nsslapd-suffix': 'dc=example,dc=com',
+            'nsmultiplexorbinddn': 'cn=manager',
+            'nsmultiplexorcredentials': TEST_PASSWORD,
+            'nsCheckLocalACI': 'on',
+            'nsConnectionLife': '30',
+        })
+
+        found_masked, found_actual, found_hashed = check_password_masked(inst, log_format, MASKED_PASSWORD, TEST_PASSWORD)
+        assert found_masked, f"Masked multiplexor credentials not found in {log_format} ADD operation"
+        assert not found_actual, f"Actual multiplexor credentials found in {log_format} ADD log (should be masked)"
+        assert not found_hashed, f"Hashed multiplexor credentials found in {log_format} ADD log (should be masked)"
+
+        # Modify the credentials
+        chain.replace('nsmultiplexorcredentials', TEST_PASSWORD_2)
+
+        found_masked_2, found_actual_2, found_hashed_2 = check_password_masked(inst, log_format, MASKED_PASSWORD, TEST_PASSWORD_2)
+        assert found_masked_2, f"Masked multiplexor credentials not found in {log_format} MODIFY operation"
+        assert not found_actual_2, f"Actual multiplexor credentials found in {log_format} MODIFY log (should be masked)"
+        assert not found_hashed_2, f"Hashed multiplexor credentials found in {log_format} MODIFY log (should be masked)"
+
+    finally:
+        chain_plugin.disable()
+        if chain is not None:
+            inst.delete_branch_s(chain.dn, ldap.SCOPE_ONELEVEL)
+            chain.delete()
+
+
+@pytest.mark.parametrize("log_format,display_attrs", [
+    ("default", None),
+    ("default", "*"),
+    ("default", "nsDS5ReplicaCredentials"),
+])
+def test_password_masking_replica_credentials(topo, log_format, display_attrs):
+    """Test password masking for nsDS5ReplicaCredentials in replication agreements
+
+    :id: 7bf9e612-1b7c-49af-9fc0-de4c7df84b2a
+    :parametrized: yes
+    :setup: Standalone Instance
+    :steps:
+        1. Configure audit logging format
+        2. Create a replication agreement entry with nsDS5ReplicaCredentials
+        3. Check that replica credentials are masked in audit log
+        4. Modify the credentials
+        5. Check that updated credentials are also masked
+        6. Verify actual credentials do not appear in log
+    :expectedresults:
+        1. Success
+        2. Success
+        3. Replica credentials should be masked with asterisks
+        4. Success
+        5. Updated credentials should be masked with asterisks
+        6. No actual credential values should be found in log
+    """
+    inst = topo.ms['supplier2']
+    setup_audit_logging(inst, log_format, display_attrs)
+    agmt = None
+
+    try:
+        replicas = Replicas(inst)
+        replica = replicas.get(DEFAULT_SUFFIX)
+        agmts = replica.get_agreements()
+        agmt = agmts.create(properties={
+            'cn': 'testagmt',
+            'nsDS5ReplicaHost': 'localhost',
+            'nsDS5ReplicaPort': '389',
+            'nsDS5ReplicaBindDN': 'cn=replication manager,cn=config',
+            'nsDS5ReplicaCredentials': TEST_PASSWORD,
+            'nsDS5ReplicaRoot': DEFAULT_SUFFIX
+        })
+
+        found_masked, found_actual, found_hashed = check_password_masked(inst, log_format, MASKED_PASSWORD, TEST_PASSWORD)
+        assert found_masked, f"Masked replica credentials not found in {log_format} ADD operation"
+        assert not found_actual, f"Actual replica credentials found in {log_format} ADD log (should be masked)"
+        assert not found_hashed, f"Hashed replica credentials found in {log_format} ADD log (should be masked)"
+
+        # Modify the credentials
+        agmt.replace('nsDS5ReplicaCredentials', TEST_PASSWORD_2)
+
+        found_masked_2, found_actual_2, found_hashed_2 = check_password_masked(inst, log_format, MASKED_PASSWORD, TEST_PASSWORD_2)
+        assert found_masked_2, f"Masked replica credentials not found in {log_format} MODIFY operation"
+        assert not found_actual_2, f"Actual replica credentials found in {log_format} MODIFY log (should be masked)"
+        assert not found_hashed_2, f"Hashed replica credentials found in {log_format} MODIFY log (should be masked)"
+
+    finally:
+        if agmt is not None:
+            agmt.delete()
+
+
+@pytest.mark.parametrize("log_format,display_attrs", [
+    ("default", None),
+    ("default", "*"),
+    ("default", "nsDS5ReplicaBootstrapCredentials"),
+])
+def test_password_masking_bootstrap_credentials(topo, log_format, display_attrs):
+    """Test password masking for nsDS5ReplicaCredentials and nsDS5ReplicaBootstrapCredentials in replication agreements
+
+    :id: 248bd418-ffa4-4733-963d-2314c60b7c5b
+    :parametrized: yes
+    :setup: Standalone Instance
+    :steps:
+        1. Configure audit logging format
+        2. Create a replication agreement entry with both nsDS5ReplicaCredentials and nsDS5ReplicaBootstrapCredentials
+        3. Check that both credentials are masked in audit log
+        4. Modify both credentials
+        5. Check that both updated credentials are also masked
+        6. Verify actual credentials do not appear in log
+    :expectedresults:
+        1. Success
+        2. Success
+        3. Both credentials should be masked with asterisks
+        4. Success
+        5. Both updated credentials should be masked with asterisks
+        6. No actual credential values should be found in log
+    """
+    inst = topo.ms['supplier2']
+    setup_audit_logging(inst, log_format, display_attrs)
+    agmt = None
+
+    try:
+        replicas = Replicas(inst)
+        replica = replicas.get(DEFAULT_SUFFIX)
+        agmts = replica.get_agreements()
+        agmt = agmts.create(properties={
+            'cn': 'testbootstrapagmt',
+            'nsDS5ReplicaHost': 'localhost',
+            'nsDS5ReplicaPort': '389',
+            'nsDS5ReplicaBindDN': 'cn=replication manager,cn=config',
+            'nsDS5ReplicaCredentials': TEST_PASSWORD,
+            'nsDS5replicabootstrapbinddn': 'cn=bootstrap manager,cn=config',
+            'nsDS5ReplicaBootstrapCredentials': TEST_PASSWORD_2,
+            'nsDS5ReplicaRoot': DEFAULT_SUFFIX
+        })
+
+        found_masked_bootstrap, found_actual_bootstrap, found_hashed_bootstrap = check_password_masked(inst, log_format, MASKED_PASSWORD, TEST_PASSWORD_2)
+        assert found_masked_bootstrap, f"Masked bootstrap credentials not found in {log_format} ADD operation"
+        assert not found_actual_bootstrap, f"Actual bootstrap credentials found in {log_format} ADD log (should be masked)"
+        assert not found_hashed_bootstrap, f"Hashed bootstrap credentials found in {log_format} ADD log (should be masked)"
+
+        agmt.replace('nsDS5ReplicaBootstrapCredentials', TEST_PASSWORD_3)
+
+        found_masked_bootstrap_2, found_actual_bootstrap_2, found_hashed_bootstrap_2 = check_password_masked(inst, log_format, MASKED_PASSWORD, TEST_PASSWORD_3)
+        assert found_masked_bootstrap_2, f"Masked bootstrap credentials not found in {log_format} MODIFY operation"
+        assert not found_actual_bootstrap_2, f"Actual bootstrap credentials found in {log_format} MODIFY log (should be masked)"
+        assert not found_hashed_bootstrap_2, f"Hashed bootstrap credentials found in {log_format} MODIFY log (should be masked)"
+
+    finally:
+        if agmt is not None:
+            agmt.delete()
+
+
+if __name__ == '__main__':
+    CURRENT_FILE = os.path.realpath(__file__)
+    pytest.main(["-s", CURRENT_FILE])
|
||||
\ No newline at end of file
|
||||
diff --git a/ldap/servers/slapd/auditlog.c b/ldap/servers/slapd/auditlog.c
|
||||
index 0597ecc6f..c41415725 100644
|
||||
--- a/ldap/servers/slapd/auditlog.c
|
||||
+++ b/ldap/servers/slapd/auditlog.c
|
||||
@@ -37,6 +37,89 @@ static void write_audit_file(Slapi_Entry *entry, int logtype, int optype, const
|
||||
|
||||
static const char *modrdn_changes[4];
|
||||
|
||||
+/* Helper function to check if an attribute is a password that needs masking */
|
||||
+static int
|
||||
+is_password_attribute(const char *attr_name)
|
||||
+{
|
||||
+ return (strcasecmp(attr_name, SLAPI_USERPWD_ATTR) == 0 ||
|
||||
+ strcasecmp(attr_name, CONFIG_ROOTPW_ATTRIBUTE) == 0 ||
|
||||
+ strcasecmp(attr_name, SLAPI_MB_CREDENTIALS) == 0 ||
|
||||
+ strcasecmp(attr_name, SLAPI_REP_CREDENTIALS) == 0 ||
|
||||
+ strcasecmp(attr_name, SLAPI_REP_BOOTSTRAP_CREDENTIALS) == 0);
|
||||
+}
|
||||
+
|
||||
+/* Helper function to create a masked string representation of an entry */
|
||||
+static char *
|
||||
+create_masked_entry_string(Slapi_Entry *original_entry, int *len)
|
||||
+{
|
||||
+ Slapi_Attr *attr = NULL;
|
||||
+ char *entry_str = NULL;
|
||||
+ char *current_pos = NULL;
|
||||
+ char *line_start = NULL;
|
||||
+ char *next_line = NULL;
|
||||
+ char *colon_pos = NULL;
|
||||
+ int has_password_attrs = 0;
|
||||
+
|
||||
+ if (original_entry == NULL) {
|
||||
+ return NULL;
|
||||
+ }
|
||||
+
|
||||
+ /* Single pass through attributes to check for password attributes */
|
||||
+ for (slapi_entry_first_attr(original_entry, &attr); attr != NULL;
|
||||
+ slapi_entry_next_attr(original_entry, attr, &attr)) {
|
||||
+
|
||||
+ char *attr_name = NULL;
|
||||
+ slapi_attr_get_type(attr, &attr_name);
|
||||
+
|
||||
+ if (is_password_attribute(attr_name)) {
|
||||
+ has_password_attrs = 1;
|
||||
+ break;
|
||||
+ }
|
||||
+ }
|
||||
+
|
||||
+ /* If no password attributes, return original string - no masking needed */
|
||||
+ entry_str = slapi_entry2str(original_entry, len);
|
||||
+ if (!has_password_attrs) {
|
||||
+ return entry_str;
|
||||
+ }
|
||||
+
|
||||
+ /* Process the string in-place, replacing password values */
|
||||
+ current_pos = entry_str;
|
||||
+ while ((line_start = current_pos) != NULL && *line_start != '\0') {
|
||||
+ /* Find the end of current line */
|
||||
+ next_line = strchr(line_start, '\n');
|
||||
+ if (next_line != NULL) {
|
||||
+ *next_line = '\0'; /* Temporarily terminate line */
|
||||
+ current_pos = next_line + 1;
|
||||
+ } else {
|
||||
+ current_pos = NULL; /* Last line */
|
||||
+ }
|
||||
+
|
||||
+ /* Find the colon that separates attribute name from value */
|
||||
+ colon_pos = strchr(line_start, ':');
|
||||
+ if (colon_pos != NULL) {
|
||||
+ char saved_colon = *colon_pos;
|
||||
+ *colon_pos = '\0'; /* Temporarily null-terminate attribute name */
|
||||
+
|
||||
+ /* Check if this is a password attribute that needs masking */
|
||||
+ if (is_password_attribute(line_start)) {
|
||||
+ strcpy(colon_pos + 1, " **********************");
|
||||
+ }
|
||||
+
|
||||
+ *colon_pos = saved_colon; /* Restore colon */
|
||||
+ }
|
||||
+
|
||||
+ /* Restore newline if it was there */
|
||||
+ if (next_line != NULL) {
|
||||
+ *next_line = '\n';
|
||||
+ }
|
||||
+ }
|
||||
+
|
||||
+ /* Update length since we may have shortened the string */
|
||||
+ *len = strlen(entry_str);
|
||||
+ return entry_str; /* Return the modified original string */
|
||||
+}
|
||||
+
|
||||
void
|
||||
write_audit_log_entry(Slapi_PBlock *pb)
|
||||
{
|
||||
@@ -248,7 +331,21 @@ add_entry_attrs(Slapi_Entry *entry, lenstr *l)
|
||||
{
|
||||
slapi_entry_attr_find(entry, req_attr, &entry_attr);
|
||||
if (entry_attr) {
|
||||
- log_entry_attr(entry_attr, req_attr, l);
|
||||
+ if (strcmp(req_attr, PSEUDO_ATTR_UNHASHEDUSERPASSWORD) == 0) {
|
||||
+ /* Do not write the unhashed clear-text password */
|
||||
+ continue;
|
||||
+ }
|
||||
+
|
||||
+ /* Check if this is a password attribute that needs masking */
|
||||
+ if (is_password_attribute(req_attr)) {
|
||||
+ /* userpassword/rootdn password - mask the value */
|
||||
+ addlenstr(l, "#");
|
||||
+ addlenstr(l, req_attr);
|
||||
+ addlenstr(l, ": **********************\n");
|
||||
+ } else {
|
||||
+ /* Regular attribute - log normally */
|
||||
+ log_entry_attr(entry_attr, req_attr, l);
|
||||
+ }
|
||||
}
|
||||
}
|
||||
} else {
|
||||
@@ -262,13 +359,11 @@ add_entry_attrs(Slapi_Entry *entry, lenstr *l)
|
||||
continue;
|
||||
}
|
||||
|
||||
- if (strcasecmp(attr, SLAPI_USERPWD_ATTR) == 0 ||
|
||||
- strcasecmp(attr, CONFIG_ROOTPW_ATTRIBUTE) == 0)
|
||||
- {
|
||||
+ if (is_password_attribute(attr)) {
|
||||
/* userpassword/rootdn password - mask the value */
|
||||
addlenstr(l, "#");
|
||||
addlenstr(l, attr);
|
||||
- addlenstr(l, ": ****************************\n");
|
||||
+ addlenstr(l, ": **********************\n");
|
||||
continue;
|
||||
}
|
||||
log_entry_attr(entry_attr, attr, l);
|
||||
@@ -354,6 +449,10 @@ write_audit_file(
|
||||
break;
|
||||
}
|
||||
}
|
||||
+
|
||||
+ /* Check if this is a password attribute that needs masking */
|
||||
+ int is_password_attr = is_password_attribute(mods[j]->mod_type);
|
||||
+
|
||||
switch (operationtype) {
|
||||
case LDAP_MOD_ADD:
|
||||
addlenstr(l, "add: ");
|
||||
@@ -378,18 +477,27 @@ write_audit_file(
|
||||
break;
|
||||
}
|
||||
if (operationtype != LDAP_MOD_IGNORE) {
|
||||
- for (i = 0; mods[j]->mod_bvalues != NULL && mods[j]->mod_bvalues[i] != NULL; i++) {
|
||||
- char *buf, *bufp;
|
||||
- len = strlen(mods[j]->mod_type);
|
||||
- len = LDIF_SIZE_NEEDED(len, mods[j]->mod_bvalues[i]->bv_len) + 1;
|
||||
- buf = slapi_ch_malloc(len);
|
||||
- bufp = buf;
|
||||
- slapi_ldif_put_type_and_value_with_options(&bufp, mods[j]->mod_type,
|
||||
- mods[j]->mod_bvalues[i]->bv_val,
|
||||
- mods[j]->mod_bvalues[i]->bv_len, 0);
|
||||
- *bufp = '\0';
|
||||
- addlenstr(l, buf);
|
||||
- slapi_ch_free((void **)&buf);
|
||||
+ if (is_password_attr) {
|
||||
+ /* Add masked password */
|
||||
+ for (i = 0; mods[j]->mod_bvalues != NULL && mods[j]->mod_bvalues[i] != NULL; i++) {
|
||||
+ addlenstr(l, mods[j]->mod_type);
|
||||
+ addlenstr(l, ": **********************\n");
|
||||
+ }
|
||||
+ } else {
|
||||
+ /* Add actual values for non-password attributes */
|
||||
+ for (i = 0; mods[j]->mod_bvalues != NULL && mods[j]->mod_bvalues[i] != NULL; i++) {
|
||||
+ char *buf, *bufp;
|
||||
+ len = strlen(mods[j]->mod_type);
|
||||
+ len = LDIF_SIZE_NEEDED(len, mods[j]->mod_bvalues[i]->bv_len) + 1;
|
||||
+ buf = slapi_ch_malloc(len);
|
||||
+ bufp = buf;
|
||||
+ slapi_ldif_put_type_and_value_with_options(&bufp, mods[j]->mod_type,
|
||||
+ mods[j]->mod_bvalues[i]->bv_val,
|
||||
+ mods[j]->mod_bvalues[i]->bv_len, 0);
|
||||
+ *bufp = '\0';
|
||||
+ addlenstr(l, buf);
|
||||
+ slapi_ch_free((void **)&buf);
|
||||
+ }
|
||||
}
|
||||
}
|
||||
addlenstr(l, "-\n");
|
||||
@@ -400,7 +508,7 @@ write_audit_file(
|
||||
e = change;
|
||||
addlenstr(l, attr_changetype);
|
||||
addlenstr(l, ": add\n");
|
||||
- tmp = slapi_entry2str(e, &len);
|
||||
+ tmp = create_masked_entry_string(e, &len);
|
||||
tmpsave = tmp;
|
||||
while ((tmp = strchr(tmp, '\n')) != NULL) {
|
||||
tmp++;
|
||||
diff --git a/ldap/servers/slapd/slapi-private.h b/ldap/servers/slapd/slapi-private.h
|
||||
index dfb0e272a..af2180e55 100644
|
||||
--- a/ldap/servers/slapd/slapi-private.h
|
||||
+++ b/ldap/servers/slapd/slapi-private.h
|
||||
@@ -843,6 +843,7 @@ void task_cleanup(void);
|
||||
/* for reversible encyrption */
|
||||
#define SLAPI_MB_CREDENTIALS "nsmultiplexorcredentials"
|
||||
#define SLAPI_REP_CREDENTIALS "nsds5ReplicaCredentials"
|
||||
+#define SLAPI_REP_BOOTSTRAP_CREDENTIALS "nsds5ReplicaBootstrapCredentials"
|
||||
int pw_rever_encode(Slapi_Value **vals, char *attr_name);
|
||||
int pw_rever_decode(char *cipher, char **plain, const char *attr_name);
|
||||
|
||||
diff --git a/src/lib389/lib389/chaining.py b/src/lib389/lib389/chaining.py
|
||||
index 533b83ebf..33ae78c8b 100644
|
||||
--- a/src/lib389/lib389/chaining.py
|
||||
+++ b/src/lib389/lib389/chaining.py
|
||||
@@ -134,7 +134,7 @@ class ChainingLink(DSLdapObject):
|
||||
"""
|
||||
|
||||
# Create chaining entry
|
||||
- super(ChainingLink, self).create(rdn, properties, basedn)
|
||||
+ link = super(ChainingLink, self).create(rdn, properties, basedn)
|
||||
|
||||
# Create mapping tree entry
|
||||
dn_comps = ldap.explode_dn(properties['nsslapd-suffix'][0])
|
||||
@@ -149,6 +149,7 @@ class ChainingLink(DSLdapObject):
|
||||
self._mts.ensure_state(properties=mt_properties)
|
||||
except ldap.ALREADY_EXISTS:
|
||||
pass
|
||||
+ return link
|
||||
|
||||
|
||||
class ChainingLinks(DSLdapObjects):
|
||||
--
|
||||
2.51.1
|
||||
|
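The test code in the patch above calls a `check_password_masked` helper whose definition lies outside these hunks. As orientation only, a minimal self-contained sketch of the kind of scan such a helper performs (the two-value return, sample log text, and value names here are assumptions, not the real lib389 helper, which also tracks a hashed form) might look like:

```python
# Hypothetical, simplified stand-in for the test helper; not the real lib389 code.
MASKED_PASSWORD = "**********************"  # 22 asterisks, matching the auditlog.c patch


def check_password_masked(log_text, masked_value, actual_value):
    """Scan an audit-log excerpt; return (masked_found, actual_found)."""
    lines = log_text.splitlines()
    masked_found = any(masked_value in line for line in lines)
    actual_found = any(actual_value in line for line in lines)
    return masked_found, actual_found


sample_log = (
    "dn: cn=testagmt,cn=replica,cn=config\n"
    "changetype: add\n"
    "#nsDS5ReplicaCredentials: **********************\n"
)
print(check_password_masked(sample_log, MASKED_PASSWORD, "Secret123"))  # → (True, False)
```

The three assertions in the test then reduce to: the mask must appear, and neither the clear-text nor the hashed credential may.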
@ -0,0 +1,137 @@
From 284da99d0cd1ad16c702f4a4f68d2a479ac41576 Mon Sep 17 00:00:00 2001
From: Viktor Ashirov <vashirov@redhat.com>
Date: Wed, 18 Jun 2025 11:12:28 +0200
Subject: [PATCH] Issue 6819 - Incorrect pwdpolicysubentry returned for an
entry with user password policy

Bug Description:
When both subtree and user password policies exist, pwdpolicysubentry
points to the subtree password policy instead of the user password policy.

Fix Description:
Update the template for the CoS pointer definition to use the
`operational-default` modifier instead of `operational`.

Fixes: https://github.com/389ds/389-ds-base/issues/6819

Reviewed by: @droideck, @tbordaz (Thanks!)

(cherry picked from commit 622c191302879035ef7450a29aa7569ee768c3ab)
Signed-off-by: Masahiro Matsuya <mmatsuya@redhat.com>
---
.../password/pwdPolicy_attribute_test.py | 73 +++++++++++++++++--
src/lib389/lib389/pwpolicy.py | 2 +-
2 files changed, 66 insertions(+), 9 deletions(-)

diff --git a/dirsrvtests/tests/suites/password/pwdPolicy_attribute_test.py b/dirsrvtests/tests/suites/password/pwdPolicy_attribute_test.py
index c2c1e47fb..0dde8d637 100644
--- a/dirsrvtests/tests/suites/password/pwdPolicy_attribute_test.py
+++ b/dirsrvtests/tests/suites/password/pwdPolicy_attribute_test.py
@@ -59,17 +59,39 @@ def test_user(topology_st, request):
return user


-@pytest.fixture(scope="module")
-def password_policy(topology_st, test_user):
+@pytest.fixture(scope="function")
+def password_policy(topology_st, request, test_user):
"""Set up password policy for subtree and user"""

pwp = PwPolicyManager(topology_st.standalone)
policy_props = {}
- log.info('Create password policy for subtree {}'.format(OU_PEOPLE))
- pwp.create_subtree_policy(OU_PEOPLE, policy_props)
+ log.info(f"Create password policy for subtree {OU_PEOPLE}")
+ try:
+ pwp.create_subtree_policy(OU_PEOPLE, policy_props)
+ except ldap.ALREADY_EXISTS:
+ log.info(f"Subtree password policy for {OU_PEOPLE} already exists, skipping")
+
+ log.info(f"Create password policy for user {TEST_USER_DN}")
+ try:
+ pwp.create_user_policy(TEST_USER_DN, policy_props)
+ except ldap.ALREADY_EXISTS:
+ log.info(f"User password policy for {TEST_USER_DN} already exists, skipping")
+
+ def fin():
+ log.info(f"Delete password policy for subtree {OU_PEOPLE}")
+ try:
+ pwp.delete_local_policy(OU_PEOPLE)
+ except ValueError:
+ log.info(f"Subtree password policy for {OU_PEOPLE} doesn't exist, skipping")
+
+ log.info(f"Delete password policy for user {TEST_USER_DN}")
+ try:
+ pwp.delete_local_policy(TEST_USER_DN)
+ except ValueError:
+ log.info(f"User password policy for {TEST_USER_DN} doesn't exist, skipping")
+
+ request.addfinalizer(fin)

- log.info('Create password policy for user {}'.format(TEST_USER_DN))
- pwp.create_user_policy(TEST_USER_DN, policy_props)

@pytest.mark.skipif(ds_is_older('1.4.3.3'), reason="Not implemented")
def test_pwd_reset(topology_st, test_user):
@@ -257,8 +279,43 @@ def test_pwd_min_age(topology_st, test_user, password_policy):
log.info('Bind as DM')
topology_st.standalone.simple_bind_s(DN_DM, PASSWORD)
user.reset_password(TEST_USER_PWD)
- pwp.delete_local_policy(TEST_USER_DN)
- pwp.delete_local_policy(OU_PEOPLE)
+
+
+def test_pwdpolicysubentry(topology_st, password_policy):
+ """Verify that the 'pwdpolicysubentry' attr works as expected:
+ a user policy should take priority over a subtree policy.
+
+ :id: 4ab0c62a-623b-40b4-af67-99580c77b36c
+ :setup: Standalone instance, a test user,
+ password policy entries for a user and a subtree
+ :steps:
+ 1. Create a subtree policy
+ 2. Create a user policy
+ 3. Search for 'pwdpolicysubentry' in the user entry
+ 4. Delete the user policy
+ 5. Search for 'pwdpolicysubentry' in the user entry
+ :expectedresults:
+ 1. Success
+ 2. Success
+ 3. Should point to the user policy entry
+ 4. Success
+ 5. Should point to the subtree policy entry
+
+ """
+
+ users = UserAccounts(topology_st.standalone, OU_PEOPLE, rdn=None)
+ user = users.get(TEST_USER_NAME)
+
+ pwp_subentry = user.get_attr_vals_utf8('pwdpolicysubentry')[0]
+ assert 'nsPwPolicyEntry_subtree' not in pwp_subentry
+ assert 'nsPwPolicyEntry_user' in pwp_subentry
+
+ pwp = PwPolicyManager(topology_st.standalone)
+ pwp.delete_local_policy(TEST_USER_DN)
+ pwp_subentry = user.get_attr_vals_utf8('pwdpolicysubentry')[0]
+ assert 'nsPwPolicyEntry_subtree' in pwp_subentry
+ assert 'nsPwPolicyEntry_user' not in pwp_subentry
+

if __name__ == '__main__':
# Run isolated
diff --git a/src/lib389/lib389/pwpolicy.py b/src/lib389/lib389/pwpolicy.py
index 7ffe449cc..6a47a44fe 100644
--- a/src/lib389/lib389/pwpolicy.py
+++ b/src/lib389/lib389/pwpolicy.py
@@ -168,7 +168,7 @@ class PwPolicyManager(object):

# The CoS specification entry at the subtree level
cos_pointer_defs = CosPointerDefinitions(self._instance, dn)
- cos_pointer_defs.create(properties={'cosAttribute': 'pwdpolicysubentry default operational',
+ cos_pointer_defs.create(properties={'cosAttribute': 'pwdpolicysubentry default operational-default',
'cosTemplateDn': cos_template.dn,
'cn': 'nsPwPolicy_CoS'})
except ldap.LDAPError as e:
--
2.51.1

||||
@ -0,0 +1,572 @@
|
||||
From 23e56fd01eaa24a2fa945430f91600dd9c726d34 Mon Sep 17 00:00:00 2001
|
||||
From: Simon Pichugin <spichugi@redhat.com>
|
||||
Date: Tue, 19 Aug 2025 14:30:15 -0700
|
||||
Subject: [PATCH] Issue 6936 - Make user/subtree policy creation idempotent
|
||||
(#6937)
|
||||
|
||||
Description: Correct the CLI mapping typo to use 'nsslapd-pwpolicy-local',
|
||||
rework subtree policy detection to validate CoS templates and add user-policy detection.
|
||||
Make user/subtree policy creation idempotent via ensure_state, and improve deletion
|
||||
logic to distinguish subtree vs user policies and fail if none exist.
|
||||
|
||||
Add a test suite (pwp_history_local_override_test.py) exercising global-only and local-only
|
||||
history enforcement, local overriding global counts, immediate effect of dsconf updates,
|
||||
and fallback to global after removing a user policy, ensuring reliable behavior
|
||||
and preventing regressions.
|
||||
|
||||
Fixes: https://github.com/389ds/389-ds-base/issues/6936
|
||||
|
||||
Reviewed by: @mreynolds389 (Thanks!)
|
||||
|
||||
(cherry picked from commit da4eea126cc9019f540b57c1db9dec7988cade10)
|
||||
Signed-off-by: Masahiro Matsuya <mmatsuya@redhat.com>
|
||||
---
|
||||
.../pwp_history_local_override_test.py | 351 ++++++++++++++++++
|
||||
src/lib389/lib389/cli_conf/pwpolicy.py | 4 +-
|
||||
src/lib389/lib389/pwpolicy.py | 107 ++++--
|
||||
3 files changed, 424 insertions(+), 38 deletions(-)
|
||||
create mode 100644 dirsrvtests/tests/suites/password/pwp_history_local_override_test.py
|
||||
|
||||
diff --git a/dirsrvtests/tests/suites/password/pwp_history_local_override_test.py b/dirsrvtests/tests/suites/password/pwp_history_local_override_test.py
|
||||
new file mode 100644
|
||||
index 000000000..6d72725fa
|
||||
--- /dev/null
|
||||
+++ b/dirsrvtests/tests/suites/password/pwp_history_local_override_test.py
|
||||
@@ -0,0 +1,351 @@
|
||||
+# --- BEGIN COPYRIGHT BLOCK ---
|
||||
+# Copyright (C) 2025 Red Hat, Inc.
|
||||
+# All rights reserved.
|
||||
+#
|
||||
+# License: GPL (version 3 or any later version).
|
||||
+# See LICENSE for details.
|
||||
+# --- END COPYRIGHT BLOCK ---
|
||||
+#
|
||||
+import os
|
||||
+import time
|
||||
+import ldap
|
||||
+import pytest
|
||||
+import subprocess
|
||||
+import logging
|
||||
+
|
||||
+from lib389._constants import DEFAULT_SUFFIX, DN_DM, PASSWORD, DN_CONFIG
|
||||
+from lib389.topologies import topology_st
|
||||
+from lib389.idm.user import UserAccounts
|
||||
+from lib389.idm.domain import Domain
|
||||
+from lib389.pwpolicy import PwPolicyManager
|
||||
+
|
||||
+pytestmark = pytest.mark.tier1
|
||||
+
|
||||
+DEBUGGING = os.getenv("DEBUGGING", default=False)
|
||||
+if DEBUGGING:
|
||||
+ logging.getLogger(__name__).setLevel(logging.DEBUG)
|
||||
+else:
|
||||
+ logging.getLogger(__name__).setLevel(logging.INFO)
|
||||
+log = logging.getLogger(__name__)
|
||||
+
|
||||
+OU_DN = f"ou=People,{DEFAULT_SUFFIX}"
|
||||
+USER_ACI = '(targetattr="userpassword || passwordHistory")(version 3.0; acl "pwp test"; allow (all) userdn="ldap:///self";)'
|
||||
+
|
||||
+
|
||||
+@pytest.fixture(autouse=True, scope="function")
|
||||
+def restore_global_policy(topology_st, request):
|
||||
+ """Snapshot and restore global password policy around each test in this file."""
|
||||
+ inst = topology_st.standalone
|
||||
+ inst.simple_bind_s(DN_DM, PASSWORD)
|
||||
+
|
||||
+ attrs = [
|
||||
+ 'nsslapd-pwpolicy-local',
|
||||
+ 'nsslapd-pwpolicy-inherit-global',
|
||||
+ 'passwordHistory',
|
||||
+ 'passwordInHistory',
|
||||
+ 'passwordChange',
|
||||
+ ]
|
||||
+
|
||||
+ entry = inst.getEntry(DN_CONFIG, ldap.SCOPE_BASE, '(objectClass=*)', attrs)
|
||||
+ saved = {attr: entry.getValue(attr) for attr in attrs}
|
||||
+
|
||||
+ def fin():
|
||||
+ inst.simple_bind_s(DN_DM, PASSWORD)
|
||||
+ for attr, value in saved.items():
|
||||
+ inst.config.replace(attr, value)
|
||||
+
|
||||
+ request.addfinalizer(fin)
|
||||
+
|
||||
+
|
||||
+@pytest.fixture(scope="function")
|
||||
+def setup_entries(topology_st, request):
|
||||
+ """Create test OU and user, and install an ACI for self password changes."""
|
||||
+
|
||||
+ inst = topology_st.standalone
|
||||
+
|
||||
+ suffix = Domain(inst, DEFAULT_SUFFIX)
|
||||
+ suffix.add('aci', USER_ACI)
|
||||
+
|
||||
+ users = UserAccounts(inst, DEFAULT_SUFFIX)
|
||||
+ try:
|
||||
+ user = users.create_test_user(uid=1)
|
||||
+ except ldap.ALREADY_EXISTS:
|
||||
+ user = users.get("test_user_1")
|
||||
+
|
||||
+ def fin():
|
||||
+ pwp = PwPolicyManager(inst)
|
||||
+ try:
|
||||
+ pwp.delete_local_policy(OU_DN)
|
||||
+ except Exception as e:
|
||||
+ if "No password policy" in str(e):
|
||||
+ pass
|
||||
+ else:
|
||||
+ raise e
|
||||
+ try:
|
||||
+ pwp.delete_local_policy(user.dn)
|
||||
+ except Exception as e:
|
||||
+ if "No password policy" in str(e):
|
||||
+ pass
|
||||
+ else:
|
||||
+ raise e
|
||||
+ suffix.remove('aci', USER_ACI)
|
||||
+ request.addfinalizer(fin)
|
||||
+
|
||||
+ return user
|
||||
+
|
||||
+
|
||||
+def set_user_password(inst, user, new_password, bind_as_user_password=None, expect_violation=False):
|
||||
+ if bind_as_user_password is not None:
|
||||
+ user.rebind(bind_as_user_password)
|
||||
+ try:
|
||||
+ user.reset_password(new_password)
|
||||
+ if expect_violation:
|
||||
+ pytest.fail("Password change unexpectedly succeeded")
|
||||
+ except ldap.CONSTRAINT_VIOLATION:
|
||||
+ if not expect_violation:
|
||||
+ pytest.fail("Password change unexpectedly rejected with CONSTRAINT_VIOLATION")
|
||||
+ finally:
|
||||
+ inst.simple_bind_s(DN_DM, PASSWORD)
|
||||
+ time.sleep(1)
|
||||
+
|
||||
+
|
||||
+def set_global_history(inst, enabled: bool, count: int, inherit_global: str = 'on'):
|
||||
+ inst.simple_bind_s(DN_DM, PASSWORD)
|
||||
+ inst.config.replace('nsslapd-pwpolicy-local', 'on')
|
||||
+ inst.config.replace('nsslapd-pwpolicy-inherit-global', inherit_global)
|
||||
+ inst.config.replace('passwordHistory', 'on' if enabled else 'off')
|
||||
+ inst.config.replace('passwordInHistory', str(count))
|
||||
+ inst.config.replace('passwordChange', 'on')
|
||||
+ time.sleep(1)
|
||||
+
|
||||
+
|
||||
+def ensure_local_subtree_policy(inst, count: int, track_update_time: str = 'on'):
|
||||
+ pwp = PwPolicyManager(inst)
|
||||
+ pwp.create_subtree_policy(OU_DN, {
|
||||
+ 'passwordChange': 'on',
|
||||
+ 'passwordHistory': 'on',
|
||||
+ 'passwordInHistory': str(count),
|
||||
+ 'passwordTrackUpdateTime': track_update_time,
|
||||
+ })
|
||||
+ time.sleep(1)
|
||||
+
|
||||
+
|
||||
+def set_local_history_via_cli(inst, count: int):
|
||||
+ sbin_dir = inst.get_sbin_dir()
|
||||
+ inst_name = inst.serverid
|
||||
+ cmd = [f"{sbin_dir}/dsconf", inst_name, "localpwp", "set", f"--pwdhistorycount={count}", OU_DN]
|
||||
+ rc = subprocess.call(cmd)
|
||||
+ assert rc == 0, f"dsconf command failed rc={rc}: {' '.join(cmd)}"
|
||||
+ time.sleep(1)
|
||||
+
|
||||
+
|
||||
+def test_global_history_only_enforced(topology_st, setup_entries):
|
||||
+ """Global-only history enforcement with count 2
|
||||
+
|
||||
+ :id: 3d8cf35b-4a33-4587-9814-ebe18b7a1f92
|
||||
+ :setup: Standalone instance, test OU and user, ACI for self password changes
|
||||
+ :steps:
|
||||
+ 1. Remove local policies
|
||||
+ 2. Set global policy: passwordHistory=on, passwordInHistory=2
|
||||
+ 3. Set password to Alpha1, then change to Alpha2 and Alpha3 as the user
|
||||
+ 4. Attempt to change to Alpha1 and Alpha2
|
||||
+ 5. Attempt to change to Alpha4
|
||||
+ :expectedresults:
|
||||
+ 1. Success
|
||||
+ 2. Success
|
||||
+ 3. Success
|
||||
+ 4. Changes to Welcome1 and Welcome2 are rejected with CONSTRAINT_VIOLATION
|
||||
+ 5. Change to Welcome4 is accepted
|
||||
+ """
|
||||
+ inst = topology_st.standalone
|
||||
+ inst.simple_bind_s(DN_DM, PASSWORD)
|
||||
+
|
||||
+ set_global_history(inst, enabled=True, count=2)
|
||||
+
|
||||
+ user = setup_entries
|
||||
+ user.reset_password('Alpha1')
|
||||
+ set_user_password(inst, user, 'Alpha2', bind_as_user_password='Alpha1')
|
||||
+ set_user_password(inst, user, 'Alpha3', bind_as_user_password='Alpha2')
|
||||
+
|
||||
+ # Within last 2
|
||||
+ set_user_password(inst, user, 'Alpha2', bind_as_user_password='Alpha3', expect_violation=True)
|
||||
+ set_user_password(inst, user, 'Alpha1', bind_as_user_password='Alpha3', expect_violation=True)
|
||||
+
|
||||
+ # New password should be allowed
|
||||
+ set_user_password(inst, user, 'Alpha4', bind_as_user_password='Alpha3', expect_violation=False)
|
||||
+
|
||||
+
|
||||
+def test_local_overrides_global_history(topology_st, setup_entries):
|
||||
+ """Local subtree policy (history=3) overrides global (history=1)
|
||||
+
|
||||
+ :id: 97c22f56-5ea6-40c1-8d8c-1cece3bf46fd
|
||||
+ :setup: Standalone instance, test OU and user
|
||||
+ :steps:
|
||||
+ 1. Set global policy passwordInHistory=1
|
||||
+ 2. Create local subtree policy on the OU with passwordInHistory=3
|
||||
+ 3. Set password to Bravo1, then change to Bravo2 and Bravo3 as the user
|
||||
+ 4. Attempt to change to Bravo1
|
||||
+ 5. Attempt to change to Bravo5
|
||||
+ :expectedresults:
|
||||
+ 1. Success
|
||||
+ 2. Success
|
||||
+ 3. Success
|
||||
+ 4. Change to Welcome1 is rejected (local policy wins)
|
||||
+ 5. Change to Welcome5 is accepted
|
||||
+ """
|
||||
+ inst = topology_st.standalone
|
||||
+ inst.simple_bind_s(DN_DM, PASSWORD)
|
||||
+
|
||||
+ set_global_history(inst, enabled=True, count=1, inherit_global='on')
|
||||
+
|
||||
+ ensure_local_subtree_policy(inst, count=3)
|
||||
+
|
||||
+ user = setup_entries
|
||||
+ user.reset_password('Bravo1')
|
||||
+ set_user_password(inst, user, 'Bravo2', bind_as_user_password='Bravo1')
|
||||
+ set_user_password(inst, user, 'Bravo3', bind_as_user_password='Bravo2')
|
||||
+
|
||||
+ # Third prior should be rejected under local policy count=3
|
||||
+ set_user_password(inst, user, 'Bravo1', bind_as_user_password='Bravo3', expect_violation=True)
|
||||
+
|
||||
+ # New password allowed
|
||||
+ set_user_password(inst, user, 'Bravo5', bind_as_user_password='Bravo3', expect_violation=False)
|
||||
+
|
||||
+
|
||||
+def test_change_local_history_via_cli_affects_enforcement(topology_st, setup_entries):
|
||||
+ """Changing local policy via CLI is enforced immediately
|
||||
+
|
||||
+ :id: 5a6d0d14-4009-4bad-86e1-cde5000c43dc
|
||||
+ :setup: Standalone instance, test OU and user, dsconf available
|
||||
+ :steps:
|
||||
+ 1. Ensure local subtree policy passwordInHistory=3
|
||||
+ 2. Set password to Charlie1, then change to Charlie2 and Charlie3 as the user
|
||||
+ 3. Attempt to change to Charlie1 (within last 3)
|
||||
+ 4. Run: dsconf <inst> localpwp set --pwdhistorycount=1 "ou=product testing,<suffix>"
|
||||
+ 5. Attempt to change to Charlie1 again
|
||||
+ :expectedresults:
|
||||
+ 1. Success
|
||||
+ 2. Success
|
||||
+ 3. Change to Welcome1 is rejected
|
||||
+ 4. CLI command succeeds
|
||||
+ 5. Change to Welcome1 now succeeds (only last 1 is disallowed)
|
||||
+ """
|
||||
+ inst = topology_st.standalone
|
||||
+ inst.simple_bind_s(DN_DM, PASSWORD)
|
||||
+
|
||||
+ ensure_local_subtree_policy(inst, count=3)
|
||||
+
|
||||
+ user = setup_entries
|
||||
+ user.reset_password('Charlie1')
|
||||
+ set_user_password(inst, user, 'Charlie2', bind_as_user_password='Charlie1', expect_violation=False)
|
||||
+ set_user_password(inst, user, 'Charlie3', bind_as_user_password='Charlie2', expect_violation=False)
|
||||
+
|
||||
+ # With count=3, Welcome1 is within history
|
||||
+ set_user_password(inst, user, 'Charlie1', bind_as_user_password='Charlie3', expect_violation=True)
|
||||
+
|
||||
+ # Reduce local count to 1 via CLI to exercise CLI mapping and updated code
|
||||
+ set_local_history_via_cli(inst, count=1)
|
||||
+
|
||||
+ # Now Welcome1 should be allowed
|
||||
+ set_user_password(inst, user, 'Charlie1', bind_as_user_password='Charlie3', expect_violation=False)
|
||||
+
|
||||
+
|
||||
+def test_history_local_only_enforced(topology_st, setup_entries):
|
||||
+ """Local-only history enforcement with count 3
|
||||
+
|
||||
+ :id: af6ff34d-ac94-4108-a7b6-2b589c960154
|
||||
+ :setup: Standalone instance, test OU and user
|
||||
+ :steps:
|
||||
+ 1. Disable global password history (passwordHistory=off, passwordInHistory=0, inherit off)
|
||||
+ 2. Ensure local subtree policy with passwordInHistory=3
|
||||
+ 3. Set password to Delta1, then change to Delta2 and Delta3 as the user
|
||||
+ 4. Attempt to change to Delta1
|
||||
+ 5. Attempt to change to Delta5
|
||||
+ 6. Change once more to Delta6, then change to Delta1
|
||||
+ :expectedresults:
|
||||
+ 1. Success
|
||||
+ 2. Success
|
||||
+ 3. Success
|
||||
+ 4. Change to Welcome1 is rejected (within last 3)
|
||||
+ 5. Change to Welcome5 is accepted
|
||||
+ 6. Welcome1 is now older than the last 3 and is accepted
|
||||
+ """
|
||||
+ inst = topology_st.standalone
|
||||
+ inst.simple_bind_s(DN_DM, PASSWORD)
|
||||
+
|
||||
+ set_global_history(inst, enabled=False, count=0, inherit_global='off')
|
||||
+
|
||||
+ ensure_local_subtree_policy(inst, count=3)
|
||||
+
|
||||
+ user = setup_entries
|
||||
+ user.reset_password('Delta1')
|
||||
+ set_user_password(inst, user, 'Delta2', bind_as_user_password='Delta1')
|
||||
+ set_user_password(inst, user, 'Delta3', bind_as_user_password='Delta2')
|
||||
+
|
||||
+ # Within last 2
|
||||
+ set_user_password(inst, user, 'Delta1', bind_as_user_password='Delta3', expect_violation=True)
|
||||
+
|
||||
+ # New password allowed
|
||||
+ set_user_password(inst, user, 'Delta5', bind_as_user_password='Delta3', expect_violation=False)
|
||||
+
|
||||
+ # Now Welcome1 is older than last 2 after one more change
|
||||
+ set_user_password(inst, user, 'Delta6', bind_as_user_password='Delta5', expect_violation=False)
|
||||
+ set_user_password(inst, user, 'Delta1', bind_as_user_password='Delta6', expect_violation=False)
|
||||
+
|
||||
+
|
||||
+def test_user_policy_detection_and_enforcement(topology_st, setup_entries):
|
||||
+ """User local policy is detected and enforced; removal falls back to global policy
|
||||
+
|
||||
+ :id: 2213126a-1f47-468c-8337-0d2ee5d2d585
|
||||
+ :setup: Standalone instance, test OU and user
|
||||
+ :steps:
|
||||
+ 1. Set global policy passwordInHistory=1
|
||||
+ 2. Create a user local password policy on the user with passwordInHistory=3
|
||||
+ 3. Verify is_user_policy(USER_DN) is True
|
||||
+ 4. Set password to Echo1, then change to Echo2 and Echo3 as the user
|
||||
+ 5. Attempt to change to Echo1 (within last 3)
|
||||
+ 6. Delete the user local policy
|
||||
+ 7. Verify is_user_policy(USER_DN) is False
|
||||
+ 8. Attempt to change to Echo1 again (now only last 1 disallowed by global)
|
||||
+ :expectedresults:
|
||||
+ 1. Success
|
||||
+ 2. Success
|
||||
+ 3. is_user_policy returns True
|
||||
+ 4. Success
|
||||
+ 5. Change to Welcome1 is rejected
|
||||
+ 6. Success
|
||||
+ 7. is_user_policy returns False
|
||||
+ 8. Change to Welcome1 succeeds (two back is allowed by global=1)
|
||||
+ """
+ inst = topology_st.standalone
+ inst.simple_bind_s(DN_DM, PASSWORD)
+
+ set_global_history(inst, enabled=True, count=1, inherit_global='on')
+
+ pwp = PwPolicyManager(inst)
+ user = setup_entries
+ pwp.create_user_policy(user.dn, {
+ 'passwordChange': 'on',
+ 'passwordHistory': 'on',
+ 'passwordInHistory': '3',
+ })
+
+ assert pwp.is_user_policy(user.dn) is True
+
+ user.reset_password('Echo1')
+ set_user_password(inst, user, 'Echo2', bind_as_user_password='Echo1', expect_violation=False)
+ set_user_password(inst, user, 'Echo3', bind_as_user_password='Echo2', expect_violation=False)
+ set_user_password(inst, user, 'Echo1', bind_as_user_password='Echo3', expect_violation=True)
+
+ pwp.delete_local_policy(user.dn)
+ assert pwp.is_user_policy(user.dn) is False
+
+ # With only global=1, Echo1 (two back) is allowed
+ set_user_password(inst, user, 'Echo1', bind_as_user_password='Echo3', expect_violation=False)
+
+
+if __name__ == '__main__':
+ # Run isolated
+ # -s for DEBUG mode
+ CURRENT_FILE = os.path.realpath(__file__)
+ pytest.main("-s %s" % CURRENT_FILE)
diff --git a/src/lib389/lib389/cli_conf/pwpolicy.py b/src/lib389/lib389/cli_conf/pwpolicy.py
index 2d4ba9b21..a3e59a90c 100644
--- a/src/lib389/lib389/cli_conf/pwpolicy.py
+++ b/src/lib389/lib389/cli_conf/pwpolicy.py
@@ -1,5 +1,5 @@
# --- BEGIN COPYRIGHT BLOCK ---
-# Copyright (C) 2023 Red Hat, Inc.
+# Copyright (C) 2025 Red Hat, Inc.
# All rights reserved.
#
# License: GPL (version 3 or any later version).
@@ -43,7 +43,7 @@ def _get_pw_policy(inst, targetdn, log, use_json=None):
targetdn = 'cn=config'
policydn = targetdn
basedn = targetdn
- attr_list.extend(['passwordisglobalpolicy', 'nsslapd-pwpolicy_local'])
+ attr_list.extend(['passwordisglobalpolicy', 'nsslapd-pwpolicy-local'])
all_attrs = inst.config.get_attrs_vals_utf8(attr_list)
attrs = {k: v for k, v in all_attrs.items() if len(v) > 0}
else:
diff --git a/src/lib389/lib389/pwpolicy.py b/src/lib389/lib389/pwpolicy.py
index 6a47a44fe..539c230a9 100644
--- a/src/lib389/lib389/pwpolicy.py
+++ b/src/lib389/lib389/pwpolicy.py
@@ -1,5 +1,5 @@
# --- BEGIN COPYRIGHT BLOCK ---
-# Copyright (C) 2018 Red Hat, Inc.
+# Copyright (C) 2025 Red Hat, Inc.
# All rights reserved.
#
# License: GPL (version 3 or any later version).
@@ -7,6 +7,7 @@
# --- END COPYRIGHT BLOCK ---

import ldap
+from ldap import filter as ldap_filter
from lib389._mapped_object import DSLdapObject, DSLdapObjects
from lib389.backend import Backends
from lib389.config import Config
@@ -74,19 +75,56 @@ class PwPolicyManager(object):
}

def is_subtree_policy(self, dn):
- """Check if the entry has a subtree password policy. If we can find a
- template entry it is subtree policy
+ """Check if a subtree password policy exists for a given entry DN.

- :param dn: Entry DN with PwPolicy set up
+ A subtree policy is indicated by the presence of any CoS template
+ (under `cn=nsPwPolicyContainer,<dn>`) that has a `pwdpolicysubentry`
+ attribute pointing to an existing entry with objectClass `passwordpolicy`.
+
+ :param dn: Entry DN to check for subtree policy
:type dn: str

- :returns: True if the entry has a subtree policy, False otherwise
+ :returns: True if a subtree policy exists, False otherwise
+ :rtype: bool
"""
- cos_templates = CosTemplates(self._instance, 'cn=nsPwPolicyContainer,{}'.format(dn))
try:
- cos_templates.get('cn=nsPwTemplateEntry,%s' % dn)
- return True
- except:
+ container_basedn = 'cn=nsPwPolicyContainer,{}'.format(dn)
+ templates = CosTemplates(self._instance, container_basedn).list()
+ for tmpl in templates:
+ pwp_dn = tmpl.get_attr_val_utf8('pwdpolicysubentry')
+ if not pwp_dn:
+ continue
+ # Validate that the referenced entry exists and is a passwordpolicy
+ pwp_entry = PwPolicyEntry(self._instance, pwp_dn)
+ if pwp_entry.exists() and pwp_entry.present('objectClass', 'passwordpolicy'):
+ return True
+ except ldap.LDAPError:
+ pass
+ return False
+
+ def is_user_policy(self, dn):
+ """Check if the entry has a user password policy.
+
+ A user policy is indicated by the target entry having a
+ `pwdpolicysubentry` attribute that points to an existing
+ entry with objectClass `passwordpolicy`.
+
+ :param dn: Entry DN to check
+ :type dn: str
+
+ :returns: True if the entry has a user policy, False otherwise
+ :rtype: bool
+ """
+ try:
+ entry = Account(self._instance, dn)
+ if not entry.exists():
+ return False
+ pwp_dn = entry.get_attr_val_utf8('pwdpolicysubentry')
+ if not pwp_dn:
+ return False
+ pwp_entry = PwPolicyEntry(self._instance, pwp_dn)
+ return pwp_entry.exists() and pwp_entry.present('objectClass', 'passwordpolicy')
+ except ldap.LDAPError:
return False

def create_user_policy(self, dn, properties):
@@ -114,10 +152,10 @@ class PwPolicyManager(object):
pwp_containers = nsContainers(self._instance, basedn=parentdn)
pwp_container = pwp_containers.ensure_state(properties={'cn': 'nsPwPolicyContainer'})

- # Create policy entry
+ # Create or update the policy entry
properties['cn'] = 'cn=nsPwPolicyEntry_user,%s' % dn
pwp_entries = PwPolicyEntries(self._instance, pwp_container.dn)
- pwp_entry = pwp_entries.create(properties=properties)
+ pwp_entry = pwp_entries.ensure_state(properties=properties)
try:
# Add policy to the entry
user_entry.replace('pwdpolicysubentry', pwp_entry.dn)
@@ -152,32 +190,27 @@ class PwPolicyManager(object):
pwp_containers = nsContainers(self._instance, basedn=dn)
pwp_container = pwp_containers.ensure_state(properties={'cn': 'nsPwPolicyContainer'})

- # Create policy entry
- pwp_entry = None
+ # Create or update the policy entry
properties['cn'] = 'cn=nsPwPolicyEntry_subtree,%s' % dn
pwp_entries = PwPolicyEntries(self._instance, pwp_container.dn)
- pwp_entry = pwp_entries.create(properties=properties)
- try:
- # The CoS template entry (nsPwTemplateEntry) that has the pwdpolicysubentry
- # value pointing to the above (nsPwPolicyEntry) entry
- cos_template = None
- cos_templates = CosTemplates(self._instance, pwp_container.dn)
- cos_template = cos_templates.create(properties={'cosPriority': '1',
- 'pwdpolicysubentry': pwp_entry.dn,
- 'cn': 'cn=nsPwTemplateEntry,%s' % dn})
-
- # The CoS specification entry at the subtree level
- cos_pointer_defs = CosPointerDefinitions(self._instance, dn)
- cos_pointer_defs.create(properties={'cosAttribute': 'pwdpolicysubentry default operational-default',
- 'cosTemplateDn': cos_template.dn,
- 'cn': 'nsPwPolicy_CoS'})
- except ldap.LDAPError as e:
- # Something went wrong, remove what we have done
- if pwp_entry is not None:
- pwp_entry.delete()
- if cos_template is not None:
- cos_template.delete()
- raise e
+ pwp_entry = pwp_entries.ensure_state(properties=properties)
+
+ # Ensure the CoS template entry (nsPwTemplateEntry) that points to the
+ # password policy entry
+ cos_templates = CosTemplates(self._instance, pwp_container.dn)
+ cos_template = cos_templates.ensure_state(properties={
+ 'cosPriority': '1',
+ 'pwdpolicysubentry': pwp_entry.dn,
+ 'cn': 'cn=nsPwTemplateEntry,%s' % dn
+ })
+
+ # Ensure the CoS specification entry at the subtree level
+ cos_pointer_defs = CosPointerDefinitions(self._instance, dn)
+ cos_pointer_defs.ensure_state(properties={
+ 'cosAttribute': 'pwdpolicysubentry default operational-default',
+ 'cosTemplateDn': cos_template.dn,
+ 'cn': 'nsPwPolicy_CoS'
+ })

# make sure that local policies are enabled
self.set_global_policy({'nsslapd-pwpolicy-local': 'on'})
@@ -244,10 +277,12 @@ class PwPolicyManager(object):
if self.is_subtree_policy(entry.dn):
parentdn = dn
subtree = True
- else:
+ elif self.is_user_policy(entry.dn):
dn_comps = ldap.dn.explode_dn(dn)
dn_comps.pop(0)
parentdn = ",".join(dn_comps)
+ else:
+ raise ValueError('The target entry dn does not have a password policy')

# Starting deleting the policy, ignore the parts that might already have been removed
pwp_container = nsContainer(self._instance, 'cn=nsPwPolicyContainer,%s' % parentdn)
--
2.51.1

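The `is_user_policy` check added in the patch above reduces to: read the target entry's `pwdpolicysubentry` attribute and confirm it points at an existing entry whose objectClass includes `passwordpolicy`. The same decision logic can be sketched against a toy dict-backed directory (plain Python, not lib389; the DNs below are made up for illustration):

```python
# Toy model of the user-policy detection: the "directory" is just a dict
# mapping DN -> attribute dict. Hypothetical data, not a live server.

def is_user_policy(directory, dn):
    """True if dn carries a pwdpolicysubentry pointing at a passwordpolicy entry."""
    entry = directory.get(dn)
    if entry is None:
        return False
    pwp_dn = entry.get('pwdpolicysubentry')
    if not pwp_dn:
        return False
    pwp = directory.get(pwp_dn)
    # The referenced entry must exist and really be a password policy
    return pwp is not None and 'passwordpolicy' in pwp.get('objectClass', [])

directory = {
    'uid=demo,ou=people,dc=example,dc=com': {
        'pwdpolicysubentry': 'cn=nsPwPolicyEntry_user,ou=people,dc=example,dc=com',
    },
    'cn=nsPwPolicyEntry_user,ou=people,dc=example,dc=com': {
        'objectClass': ['top', 'passwordpolicy'],
    },
    'uid=plain,ou=people,dc=example,dc=com': {},
}
```

Note that a dangling `pwdpolicysubentry` reference yields False, which mirrors why the rewritten method validates the referenced entry instead of trusting the attribute alone.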
76
SOURCES/0062-Issue-6641-Fix-memory-leaks.patch
Normal file
@ -0,0 +1,76 @@
From 4a17dc8ef8f226b9d733f3f8fc72bce5e506eb40 Mon Sep 17 00:00:00 2001
From: Viktor Ashirov <vashirov@redhat.com>
Date: Wed, 10 Sep 2025 13:16:26 +0200
Subject: [PATCH] Issue 6641 - Fix memory leaks

Description:
Partial backport from 9cede9cdcbfb10e864ba0d91053efdabbe937eca

Relates: https://github.com/389ds/389-ds-base/issues/6910
(cherry picked from commit cec5596acb0fb82ca34ee98b7881312dd7ba602c)
Signed-off-by: Masahiro Matsuya <mmatsuya@redhat.com>
---
ldap/servers/plugins/automember/automember.c | 7 ++++---
ldap/servers/plugins/memberof/memberof.c | 13 ++++++++++---
2 files changed, 14 insertions(+), 6 deletions(-)

diff --git a/ldap/servers/plugins/automember/automember.c b/ldap/servers/plugins/automember/automember.c
index fde92ee12..1b1da39b3 100644
--- a/ldap/servers/plugins/automember/automember.c
+++ b/ldap/servers/plugins/automember/automember.c
@@ -1755,9 +1755,10 @@ automember_update_member_value(Slapi_Entry *member_e, const char *group_dn, char

mod_pb = slapi_pblock_new();
/* Do a single mod with error overrides for DEL/ADD */
- result = slapi_single_modify_internal_override(mod_pb, slapi_sdn_new_dn_byval(group_dn), mods,
- automember_get_plugin_id(), 0);
-
+ Slapi_DN *sdn = slapi_sdn_new_normdn_byref(group_dn);
+ result = slapi_single_modify_internal_override(mod_pb, sdn, mods,
+ automember_get_plugin_id(), 0);
+ slapi_sdn_free(&sdn);
if(add){
if (result != LDAP_SUCCESS) {
slapi_log_err(SLAPI_LOG_ERR, AUTOMEMBER_PLUGIN_SUBSYSTEM,
diff --git a/ldap/servers/plugins/memberof/memberof.c b/ldap/servers/plugins/memberof/memberof.c
index f3dc7cf00..ce1788e35 100644
--- a/ldap/servers/plugins/memberof/memberof.c
+++ b/ldap/servers/plugins/memberof/memberof.c
@@ -1647,6 +1647,7 @@ memberof_call_foreach_dn(Slapi_PBlock *pb __attribute__((unused)), Slapi_DN *sdn
/* We already did the search for this backend, don't
* do it again when we fall through */
do_suffix_search = PR_FALSE;
+ slapi_pblock_init(search_pb);
}
}
} else if (!all_backends) {
@@ -3745,6 +3746,10 @@ memberof_replace_list(Slapi_PBlock *pb, MemberOfConfig *config, Slapi_DN *group_

pre_index++;
} else {
+ if (pre_index >= pre_total || post_index >= post_total) {
+ /* Don't overrun pre_array/post_array */
+ break;
+ }
/* decide what to do */
int cmp = memberof_compare(
config,
@@ -4438,10 +4443,12 @@ memberof_add_memberof_attr(LDAPMod **mods, const char *dn, char *add_oc)

while (1) {
slapi_pblock_init(mod_pb);
-
+ Slapi_DN *sdn = slapi_sdn_new_normdn_byref(dn);
/* Internal mod with error overrides for DEL/ADD */
- rc = slapi_single_modify_internal_override(mod_pb, slapi_sdn_new_normdn_byref(dn), single_mod,
- memberof_get_plugin_id(), SLAPI_OP_FLAG_BYPASS_REFERRALS);
+ rc = slapi_single_modify_internal_override(mod_pb, sdn, single_mod,
+ memberof_get_plugin_id(),
+ SLAPI_OP_FLAG_BYPASS_REFERRALS);
+ slapi_sdn_free(&sdn);
if (rc == LDAP_OBJECT_CLASS_VIOLATION) {
if (!add_oc || added_oc) {
/*
--
2.51.1

@ -0,0 +1,363 @@
From 16cde9b2e584a75f987c1e5f1151d8703f23263e Mon Sep 17 00:00:00 2001
From: tbordaz <tbordaz@redhat.com>
Date: Mon, 1 Sep 2025 18:23:33 +0200
Subject: [PATCH] Issue 6933 - When deferred memberof update is enabled after
the server crashed it should not launch memberof fixup task by default
(#6935)

Bug description:
When deferred memberof update is enabled, the updates of the
group and the members is done with different TXN.
So there is a risk that at the time of a crash the membership
('memberof') are invalid.
To repair this we should run a memberof fixup task.
The problem is that this task is resource intensive and
should be, by default, scheduled by the administrator.

Fix description:
The fix introduces a new memberof config parameter 'launchFixup'
that is 'off' by default.
After a crash, when it is 'on' the server launch the fixup
task. If it is 'off' it logs a warning.

fixes: #6933

Reviewed by: Simon Pichugin (Thanks !)

(cherry picked from commit 72f621c56114e1fd3ba3f6c25c731496b881075a)
Signed-off-by: Masahiro Matsuya <mmatsuya@redhat.com>
---
.../suites/memberof_plugin/regression_test.py | 109 ++++++++++++------
ldap/servers/plugins/memberof/memberof.c | 13 ++-
ldap/servers/plugins/memberof/memberof.h | 2 +
.../plugins/memberof/memberof_config.c | 11 ++
.../lib389/cli_conf/plugins/memberof.py | 9 ++
src/lib389/lib389/plugins.py | 30 +++++
6 files changed, 136 insertions(+), 38 deletions(-)

diff --git a/dirsrvtests/tests/suites/memberof_plugin/regression_test.py b/dirsrvtests/tests/suites/memberof_plugin/regression_test.py
index 9ba40a0c3..976729c2f 100644
--- a/dirsrvtests/tests/suites/memberof_plugin/regression_test.py
+++ b/dirsrvtests/tests/suites/memberof_plugin/regression_test.py
@@ -1289,15 +1289,19 @@ def test_shutdown_on_deferred_memberof(topology_st):
:setup: Standalone Instance
:steps:
1. Enable memberof plugin to scope SUFFIX
- 2. create 1000 users
- 3. Create a large groups with 500 members
+ 2. create 500 users
+ 3. Create a large groups with 250 members
4. Restart the instance (using the default 2 minutes timeout)
5. Check that users memberof and group members are in sync.
- 6. Modify the group to have 10 members.
+ 6. Modify the group to have 250 others members.
7. Restart the instance with short timeout
- 8. Check that fixup task is in progress
- 9. Wait until fixup task is completed
- 10. Check that users memberof and group members are in sync.
+ 8. Check that the instance needs fixup
+ 9. Check that deferred thread did not run fixup
+ 10. Allow deferred thread to run fixup
+ 11. Modify the group to have 250 others members.
+ 12. Restart the instance with short timeout
+ 13. Check that the instance needs fixup
+ 14. Check that deferred thread did run fixup
:expectedresults:
1. should succeed
2. should succeed
@@ -1308,14 +1312,18 @@ def test_shutdown_on_deferred_memberof(topology_st):
7. should succeed
8. should succeed
9. should succeed
- 10. should succeed
"""

inst = topology_st.standalone
+ inst.stop()
+ lpath = inst.ds_error_log._get_log_path()
+ os.unlink(lpath)
+ inst.start()
inst.config.loglevel(vals=(ErrorLog.DEFAULT,ErrorLog.PLUGIN))
errlog = DirsrvErrorLog(inst)
test_timeout = 900

+
# Step 1. Enable memberof plugin to scope SUFFIX
memberof = MemberOfPlugin(inst)
delay=0
@@ -1336,8 +1344,8 @@ def test_shutdown_on_deferred_memberof(topology_st):
#Creates users and groups
users_dn = []

- # Step 2. create 1000 users
- for i in range(1000):
+ # Step 2. create 500 users
+ for i in range(500):
CN = '%s%d' % (USER_CN, i)
users = UserAccounts(inst, SUFFIX)
user_props = TEST_USER_PROPERTIES.copy()
@@ -1347,7 +1355,7 @@ def test_shutdown_on_deferred_memberof(topology_st):

# Step 3. Create a large groups with 250 members
groups = Groups(inst, SUFFIX)
- testgroup = groups.create(properties={'cn': 'group500', 'member': users_dn[0:249]})
+ testgroup = groups.create(properties={'cn': 'group50', 'member': users_dn[0:249]})

# Step 4. Restart the instance (using the default 2 minutes timeout)
time.sleep(10)
@@ -1361,7 +1369,7 @@ def test_shutdown_on_deferred_memberof(topology_st):
check_memberof_consistency(inst, testgroup)

# Step 6. Modify the group to get another big group.
- testgroup.replace('member', users_dn[500:999])
+ testgroup.replace('member', users_dn[250:499])

# Step 7. Restart the instance with short timeout
pattern = 'deferred_thread_func - thread has stopped'
@@ -1374,40 +1382,71 @@ def test_shutdown_on_deferred_memberof(topology_st):
nbcleanstop = len(errlog.match(pattern))
assert nbcleanstop == original_nbcleanstop

- original_nbfixupmsg = count_global_fixup_message(errlog)
log.info(f'Instance restarted after timeout at {datetime.now().strftime("%Y-%m-%d %H:%M:%S")}')
inst.restart()
assert inst.status()
log.info(f'Restart completed at {datetime.now().strftime("%Y-%m-%d %H:%M:%S")}')

+ # Step 9.
# Check that memberofneedfixup is present
- dse = DSEldif(inst)
- assert dse.get(memberof.dn, 'memberofneedfixup', single=True)
-
- # Step 8. Check that fixup task is in progress
- # Note we have to wait as there may be some delay
- elapsed_time = 0
- nbfixupmsg = count_global_fixup_message(errlog)
- while nbfixupmsg[0] == original_nbfixupmsg[0]:
- assert elapsed_time <= test_timeout
- assert inst.status()
- time.sleep(5)
- elapsed_time += 5
- nbfixupmsg = count_global_fixup_message(errlog)
-
- # Step 9. Wait until fixup task is completed
- while nbfixupmsg[1] == original_nbfixupmsg[1]:
- assert elapsed_time <= test_timeout
- assert inst.status()
- time.sleep(10)
- elapsed_time += 10
- nbfixupmsg = count_global_fixup_message(errlog)
-
- # Step 10. Check that users memberof and group members are in sync.
+ # and fixup task was not launched because by default launch_fixup is no
+ memberof = MemberOfPlugin(inst)
+ memberof.set_memberofdeferredupdate("on")
+ if (memberof.get_memberofdeferredupdate() and memberof.get_memberofdeferredupdate().lower() != "on"):
+ pytest.skip("Memberof deferred update not enabled or not supported.");
+ else:
+ delay=10
+ value = memberof.get_memberofneedfixup()
+ assert ((str(value).lower() == "yes") or (str(value).lower() == "true"))
+ assert len(errlog.match('.*It is recommended to launch memberof fixup task.*')) == 1
+
+ # Step 10. allow the server to launch the fixup task
+ inst.stop()
+ inst.deleteErrorLogs()
+ inst.start()
+ log.info(f'set memberoflaunchfixup=ON')
+ memberof.set_memberoflaunchfixup('on')
+ inst.restart()
+
+ # Step 11. Modify the group to get another big group.
+ testgroup.replace('member', users_dn[250:499])
+
+ # Step 12. then kill/reset errorlog/restart
+ _kill_instance(inst, sig=signal.SIGKILL, delay=5)
+ log.info(f'Instance restarted after timeout at {datetime.now().strftime("%Y-%m-%d %H:%M:%S")}')
+ inst.restart()
+ assert inst.status()
+ log.info(f'Restart completed at {datetime.now().strftime("%Y-%m-%d %H:%M:%S")}')
+
+ # step 13. Check that memberofneedfixup is present
+ memberof = MemberOfPlugin(inst)
+ value = memberof.get_memberofneedfixup()
+ assert ((str(value).lower() == "yes") or (str(value).lower() == "true"))
+
+ # step 14. fixup task was not launched because by default launch_fixup is no
+ assert len(errlog.match('.*It is recommended to launch memberof fixup task.*')) == 0
+
+ # Check that users memberof and group members are in sync.
time.sleep(delay)
check_memberof_consistency(inst, testgroup)


+ def fin():
+
+ for dn in users_dn:
+ try:
+ inst.delete_s(dn)
+ except ldap.NO_SUCH_OBJECT:
+ pass
+
+ try:
+ inst.delete_s(testgroup.dn)
+ except ldap.NO_SUCH_OBJECT:
+ pass
+
+ request.addfinalizer(fin)
+
+
if __name__ == '__main__':
# Run isolated
# -s for DEBUG mode
diff --git a/ldap/servers/plugins/memberof/memberof.c b/ldap/servers/plugins/memberof/memberof.c
index ce1788e35..2ee7ee319 100644
--- a/ldap/servers/plugins/memberof/memberof.c
+++ b/ldap/servers/plugins/memberof/memberof.c
@@ -1012,9 +1012,16 @@ deferred_thread_func(void *arg)
* keep running this thread until plugin is signaled to close
*/
g_incr_active_threadcnt();
- if (memberof_get_config()->need_fixup && perform_needed_fixup()) {
- slapi_log_err(SLAPI_LOG_ALERT, MEMBEROF_PLUGIN_SUBSYSTEM,
- "Failure occured during global fixup task: memberof values are invalid\n");
+ if (memberof_get_config()->need_fixup) {
+ if (memberof_get_config()->launch_fixup) {
+ if (perform_needed_fixup()) {
+ slapi_log_err(SLAPI_LOG_ALERT, MEMBEROF_PLUGIN_SUBSYSTEM,
+ "Failure occurred during global fixup task: memberof values are invalid\n");
+ }
+ } else {
+ slapi_log_err(SLAPI_LOG_WARNING, MEMBEROF_PLUGIN_SUBSYSTEM,
+ "It is recommended to launch memberof fixup task\n");
+ }
}
slapi_log_err(SLAPI_LOG_PLUGIN, MEMBEROF_PLUGIN_SUBSYSTEM,
"deferred_thread_func - thread is starting "
diff --git a/ldap/servers/plugins/memberof/memberof.h b/ldap/servers/plugins/memberof/memberof.h
index c11d901ab..f2bb1d1cf 100644
--- a/ldap/servers/plugins/memberof/memberof.h
+++ b/ldap/servers/plugins/memberof/memberof.h
@@ -44,6 +44,7 @@
#define MEMBEROF_DEFERRED_UPDATE_ATTR "memberOfDeferredUpdate"
#define MEMBEROF_AUTO_ADD_OC "memberOfAutoAddOC"
#define MEMBEROF_NEED_FIXUP "memberOfNeedFixup"
+#define MEMBEROF_LAUNCH_FIXUP "memberOfLaunchFixup"
#define NSMEMBEROF "nsMemberOf"
#define MEMBEROF_ENTRY_SCOPE_EXCLUDE_SUBTREE "memberOfEntryScopeExcludeSubtree"
#define DN_SYNTAX_OID "1.3.6.1.4.1.1466.115.121.1.12"
@@ -138,6 +139,7 @@ typedef struct memberofconfig
PLHashTable *fixup_cache;
Slapi_Task *task;
int need_fixup;
+ PRBool launch_fixup;
} MemberOfConfig;

/* The key to access the hash table is the normalized DN
diff --git a/ldap/servers/plugins/memberof/memberof_config.c b/ldap/servers/plugins/memberof/memberof_config.c
index 89c44b014..e17c91fb9 100644
--- a/ldap/servers/plugins/memberof/memberof_config.c
+++ b/ldap/servers/plugins/memberof/memberof_config.c
@@ -472,6 +472,7 @@ memberof_apply_config(Slapi_PBlock *pb __attribute__((unused)),
const char *deferred_update = NULL;
char *auto_add_oc = NULL;
const char *needfixup = NULL;
+ const char *launchfixup = NULL;
int num_vals = 0;

*returncode = LDAP_SUCCESS;
@@ -508,6 +509,7 @@ memberof_apply_config(Slapi_PBlock *pb __attribute__((unused)),
deferred_update = slapi_entry_attr_get_ref(e, MEMBEROF_DEFERRED_UPDATE_ATTR);
auto_add_oc = slapi_entry_attr_get_charptr(e, MEMBEROF_AUTO_ADD_OC);
needfixup = slapi_entry_attr_get_ref(e, MEMBEROF_NEED_FIXUP);
+ launchfixup = slapi_entry_attr_get_ref(e, MEMBEROF_LAUNCH_FIXUP);

if (auto_add_oc == NULL) {
auto_add_oc = slapi_ch_strdup(NSMEMBEROF);
@@ -628,6 +630,15 @@ memberof_apply_config(Slapi_PBlock *pb __attribute__((unused)),
theConfig.deferred_update = PR_FALSE;
}
}
+ theConfig.launch_fixup = PR_FALSE;
+ if (theConfig.deferred_update) {
+ /* The automatic fixup task is only triggered when
+ * deferred update is on
+ */
+ if (launchfixup && (strcasecmp(launchfixup, "on") == 0)) {
+ theConfig.launch_fixup = PR_TRUE;
+ }
+ }

if (allBackends) {
if (strcasecmp(allBackends, "on") == 0) {
diff --git a/src/lib389/lib389/cli_conf/plugins/memberof.py b/src/lib389/lib389/cli_conf/plugins/memberof.py
index 90c1af2c3..598fe0bbc 100644
--- a/src/lib389/lib389/cli_conf/plugins/memberof.py
+++ b/src/lib389/lib389/cli_conf/plugins/memberof.py
@@ -23,6 +23,8 @@ arg_to_attr = {
'scope': 'memberOfEntryScope',
'exclude': 'memberOfEntryScopeExcludeSubtree',
'autoaddoc': 'memberOfAutoAddOC',
+ 'deferredupdate': 'memberOfDeferredUpdate',
+ 'launchfixup': 'memberOfLaunchFixup',
'config_entry': 'nsslapd-pluginConfigArea'
}

@@ -119,6 +121,13 @@ def _add_parser_args(parser):
help='If an entry does not have an object class that allows the memberOf attribute '
'then the memberOf plugin will automatically add the object class listed '
'in the memberOfAutoAddOC parameter')
+ parser.add_argument('--deferredupdate', choices=['on', 'off'], type=str.lower,
+ help='Specifies that the updates of the members are done after the completion '
+ 'of the update of the target group. In addition each update (group/members) '
+ 'uses its own transaction')
+ parser.add_argument('--launchfixup', choices=['on', 'off'], type=str.lower,
+ help='Specify that if the server disorderly shutdown (crash, kill,..) then '
+ 'at restart the memberof fixup task is launched automatically')


def create_parser(subparsers):
diff --git a/src/lib389/lib389/plugins.py b/src/lib389/lib389/plugins.py
index 25b49dae4..4f177adef 100644
--- a/src/lib389/lib389/plugins.py
+++ b/src/lib389/lib389/plugins.py
@@ -962,6 +962,36 @@ class MemberOfPlugin(Plugin):

self.remove_all('memberofdeferredupdate')

+ def get_memberofneedfixup(self):
+ """Get memberofneedfixup attribute"""
+
+ return self.get_attr_val_utf8_l('memberofneedfixup')
+
+ def get_memberofneedfixup_formatted(self):
+ """Display memberofneedfixup attribute"""
+
+ return self.display_attr('memberofneedfixup')
+
+ def get_memberoflaunchfixup(self):
+ """Get memberoflaunchfixup attribute"""
+
+ return self.get_attr_val_utf8_l('memberoflaunchfixup')
+
+ def get_memberoflaunchfixup_formatted(self):
+ """Display memberoflaunchfixup attribute"""
+
+ return self.display_attr('memberoflaunchfixup')
+
+ def set_memberoflaunchfixup(self, value):
+ """Set memberoflaunchfixup attribute"""
+
+ self.set('memberoflaunchfixup', value)
+
+ def remove_memberoflaunchfixup(self):
+ """Remove all memberoflaunchfixup attributes"""
+
+ self.remove_all('memberoflaunchfixup')
+
def get_autoaddoc(self):
"""Get memberofautoaddoc attribute"""

--
2.51.1

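The gating added in `memberof_apply_config` and `deferred_thread_func` above can be restated in a few lines: `launch_fixup` defaults to off, it is only honored when deferred updates are enabled, and at restart a pending fixup either runs or merely emits the warning the test greps for. A pure-Python restatement of that decision logic (function names and return values are illustrative, not part of the plugin):

```python
def resolve_launch_fixup(deferred_update, launchfixup_attr):
    """Mirror memberof_apply_config: launch_fixup is honored only when
    deferred update is enabled, and only for an explicit (case-insensitive) 'on'."""
    if not deferred_update:
        return False
    return launchfixup_attr is not None and launchfixup_attr.lower() == 'on'

def on_deferred_thread_start(need_fixup, launch_fixup):
    """Mirror deferred_thread_func: decide what happens at restart after a crash."""
    if not need_fixup:
        return 'no-op'
    # With launch_fixup off (the default), only a warning is logged
    return 'run-fixup' if launch_fixup else 'warn'
```

This is why step 9 of the regression test expects exactly one "It is recommended to launch memberof fixup task" message before `memberoflaunchfixup` is set to `on`, and none after.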
@ -0,0 +1,831 @@
From 4667e657fe4d3eab1e900cc1f278bc9a9e2fcf0a Mon Sep 17 00:00:00 2001
From: Viktor Ashirov <vashirov@redhat.com>
Date: Mon, 18 Aug 2025 09:13:12 +0200
Subject: [PATCH] Issue 6928 - The parentId attribute is indexed with improper
matching rule

Bug Description:
`parentId` attribute contains integer values and needs to be indexed with
`integerOrderingMatch` matching rule. This attribute is a system attribute
and the configuration entry for this attribute is created when a backend
is created. The bug is that the per backend configuration entry does not
contain `nsMatchingRule: integerOrderingMatch`.

Fix Description:
* Update `ldbm_instance_create_default_indexes` to support matching rules
and update default system index configuration for `parentId` to include
`integerOrderingMatch` matching rule.
* Add healthcheck linter for default system indexes and indexes created
by RetroCL and USN plugins.

Fixes: https://github.com/389ds/389-ds-base/issues/6928
Fixes: https://github.com/389ds/389-ds-base/issues/6915

Reviewed by: @progier389, @tbordaz (Thanks!)

(cherry picked from commit fd45579f8111c371852686dafe761fe535a5bef3)
Signed-off-by: Masahiro Matsuya <mmatsuya@redhat.com>
---
dirsrvtests/tests/suites/basic/basic_test.py | 2 +-
.../healthcheck/health_system_indexes_test.py | 456 ++++++++++++++++++
ldap/ldif/template-dse.ldif.in | 8 +
ldap/servers/slapd/back-ldbm/instance.c | 32 +-
src/lib389/lib389/backend.py | 133 ++++-
src/lib389/lib389/lint.py | 29 ++
6 files changed, 645 insertions(+), 15 deletions(-)
create mode 100644 dirsrvtests/tests/suites/healthcheck/health_system_indexes_test.py

diff --git a/dirsrvtests/tests/suites/basic/basic_test.py b/dirsrvtests/tests/suites/basic/basic_test.py
index 8bf89cb33..4a45f9dbe 100644
--- a/dirsrvtests/tests/suites/basic/basic_test.py
+++ b/dirsrvtests/tests/suites/basic/basic_test.py
@@ -461,7 +461,7 @@ def test_basic_db2index(topology_st):
topology_st.standalone.db2index(bename=DEFAULT_BENAME, attrs=indexes)
log.info('Checking the server logs for %d backend indexes INFO' % numIndexes)
for indexNum, index in enumerate(indexes):
- if index in "entryrdn":
+ if index in ["entryrdn", "ancestorid"]:
assert topology_st.standalone.searchErrorsLog(
'INFO - bdb_db2index - ' + DEFAULT_BENAME + ':' + ' Indexing ' + index)
else:
diff --git a/dirsrvtests/tests/suites/healthcheck/health_system_indexes_test.py b/dirsrvtests/tests/suites/healthcheck/health_system_indexes_test.py
new file mode 100644
index 000000000..61972d60c
--- /dev/null
+++ b/dirsrvtests/tests/suites/healthcheck/health_system_indexes_test.py
@@ -0,0 +1,456 @@
+# --- BEGIN COPYRIGHT BLOCK ---
+# Copyright (C) 2025 Red Hat, Inc.
+# All rights reserved.
+#
+# License: GPL (version 3 or any later version).
+# See LICENSE for details.
+# --- END COPYRIGHT BLOCK ---
+#
+
+import pytest
+import os
+
+from lib389.backend import Backends
+from lib389.index import Index
+from lib389.plugins import (
+    USNPlugin,
+    RetroChangelogPlugin,
+)
+from lib389.utils import logging, ds_is_newer
+from lib389.cli_base import FakeArgs
+from lib389.topologies import topology_st
+from lib389.cli_ctl.health import health_check_run
+
+pytestmark = pytest.mark.tier1
+
+CMD_OUTPUT = "No issues found."
+JSON_OUTPUT = "[]"
+log = logging.getLogger(__name__)
+
+
+@pytest.fixture(scope="function")
+def usn_plugin_enabled(topology_st, request):
+    """Fixture to enable USN plugin and ensure cleanup after test"""
+    standalone = topology_st.standalone
+
+    log.info("Enable USN plugin")
+    usn_plugin = USNPlugin(standalone)
+    usn_plugin.enable()
+    standalone.restart()
+
+    def cleanup():
+        log.info("Disable USN plugin")
+        usn_plugin.disable()
+        standalone.restart()
+
+    request.addfinalizer(cleanup)
+    return usn_plugin
+
+
+@pytest.fixture(scope="function")
+def retrocl_plugin_enabled(topology_st, request):
+    """Fixture to enable RetroCL plugin and ensure cleanup after test"""
+    standalone = topology_st.standalone
+
+    log.info("Enable RetroCL plugin")
+    retrocl_plugin = RetroChangelogPlugin(standalone)
+    retrocl_plugin.enable()
+    standalone.restart()
+
+    def cleanup():
+        log.info("Disable RetroCL plugin")
+        retrocl_plugin.disable()
+        standalone.restart()
+
+    request.addfinalizer(cleanup)
+    return retrocl_plugin
+
+
+@pytest.fixture(scope="function")
+def log_buffering_enabled(topology_st, request):
+    """Fixture to enable log buffering and restore original setting after test"""
+    standalone = topology_st.standalone
+
+    original_value = standalone.config.get_attr_val_utf8("nsslapd-accesslog-logbuffering")
+
+    log.info("Enable log buffering")
+    standalone.config.set("nsslapd-accesslog-logbuffering", "on")
+
+    def cleanup():
+        log.info("Restore original log buffering setting")
+        standalone.config.set("nsslapd-accesslog-logbuffering", original_value)
+
+    request.addfinalizer(cleanup)
+    return standalone
+
+
+def run_healthcheck_and_flush_log(topology, instance, searched_code, json, searched_code2=None):
+    args = FakeArgs()
+    args.instance = instance.serverid
+    args.verbose = instance.verbose
+    args.list_errors = False
+    args.list_checks = False
+    args.check = [
+        "config",
+        "refint",
+        "backends",
+        "monitor-disk-space",
+        "logs",
+        "memberof",
+    ]
+    args.dry_run = False
+
+    # If we are using BDB as a backend, we will get error DSBLE0006 on new versions
+    if (
+        ds_is_newer("3.0.0")
+        and instance.get_db_lib() == "bdb"
+        and (searched_code is CMD_OUTPUT or searched_code is JSON_OUTPUT)
+    ):
+        searched_code = "DSBLE0006"
+
+    if json:
+        log.info("Use healthcheck with --json option")
+        args.json = json
+        health_check_run(instance, topology.logcap.log, args)
+        assert topology.logcap.contains(searched_code)
+        log.info("healthcheck returned searched code: %s" % searched_code)
+
+        if searched_code2 is not None:
+            assert topology.logcap.contains(searched_code2)
+            log.info("healthcheck returned searched code: %s" % searched_code2)
+    else:
+        log.info("Use healthcheck without --json option")
+        args.json = json
+        health_check_run(instance, topology.logcap.log, args)
+
+        assert topology.logcap.contains(searched_code)
+        log.info("healthcheck returned searched code: %s" % searched_code)
+
+        if searched_code2 is not None:
+            assert topology.logcap.contains(searched_code2)
+            log.info("healthcheck returned searched code: %s" % searched_code2)
+
+    log.info("Clear the log")
+    topology.logcap.flush()
+
+
+def test_missing_parentid(topology_st, log_buffering_enabled):
+    """Check if healthcheck returns DSBLE0007 code when parentId system index is missing
+
+    :id: 2653f16f-cc9c-4fad-9d8c-86a3457c6d0d
+    :setup: Standalone instance
+    :steps:
+        1. Create DS instance
+        2. Remove parentId index
+        3. Use healthcheck without --json option
+        4. Use healthcheck with --json option
+        5. Re-add the parentId index
+        6. Use healthcheck without --json option
+        7. Use healthcheck with --json option
+    :expectedresults:
+        1. Success
+        2. Success
+        3. healthcheck reports DSBLE0007 code and related details
+        4. healthcheck reports DSBLE0007 code and related details
+        5. Success
+        6. healthcheck reports no issues found
+        7. healthcheck reports no issues found
+    """
+
+    RET_CODE = "DSBLE0007"
+    PARENTID_DN = "cn=parentid,cn=index,cn=userroot,cn=ldbm database,cn=plugins,cn=config"
+
+    standalone = topology_st.standalone
+
+    log.info("Remove parentId index")
+    parentid_index = Index(standalone, PARENTID_DN)
+    parentid_index.delete()
+
+    run_healthcheck_and_flush_log(topology_st, standalone, json=False, searched_code=RET_CODE)
+    run_healthcheck_and_flush_log(topology_st, standalone, json=True, searched_code=RET_CODE)
+
+    log.info("Re-add the parentId index")
+    backend = Backends(standalone).get("userRoot")
+    backend.add_index("parentid", ["eq"], matching_rules=["integerOrderingMatch"])
+
+    run_healthcheck_and_flush_log(topology_st, standalone, json=False, searched_code=CMD_OUTPUT)
+    run_healthcheck_and_flush_log(topology_st, standalone, json=True, searched_code=JSON_OUTPUT)
+
+
+def test_missing_matching_rule(topology_st, log_buffering_enabled):
+    """Check if healthcheck returns DSBLE0007 code when parentId index is missing integerOrderingMatch
+
+    :id: 7ffa71db-8995-430a-bed8-59bce944221c
+    :setup: Standalone instance
+    :steps:
+        1. Create DS instance
+        2. Remove integerOrderingMatch matching rule from parentId index
+        3. Use healthcheck without --json option
+        4. Use healthcheck with --json option
+        5. Re-add the matching rule
+        6. Use healthcheck without --json option
+        7. Use healthcheck with --json option
+    :expectedresults:
+        1. Success
+        2. Success
+        3. healthcheck reports DSBLE0007 code and related details
+        4. healthcheck reports DSBLE0007 code and related details
+        5. Success
+        6. healthcheck reports no issues found
+        7. healthcheck reports no issues found
+    """
+
+    RET_CODE = "DSBLE0007"
+    PARENTID_DN = "cn=parentid,cn=index,cn=userroot,cn=ldbm database,cn=plugins,cn=config"
+
+    standalone = topology_st.standalone
+
+    log.info("Remove integerOrderingMatch matching rule from parentId index")
+    parentid_index = Index(standalone, PARENTID_DN)
+    parentid_index.remove("nsMatchingRule", "integerOrderingMatch")
+
+    run_healthcheck_and_flush_log(topology_st, standalone, json=False, searched_code=RET_CODE)
+    run_healthcheck_and_flush_log(topology_st, standalone, json=True, searched_code=RET_CODE)
+
+    log.info("Re-add the integerOrderingMatch matching rule")
+    parentid_index = Index(standalone, PARENTID_DN)
+    parentid_index.add("nsMatchingRule", "integerOrderingMatch")
+
+    run_healthcheck_and_flush_log(topology_st, standalone, json=False, searched_code=CMD_OUTPUT)
+    run_healthcheck_and_flush_log(topology_st, standalone, json=True, searched_code=JSON_OUTPUT)
+
+
+def test_usn_plugin_missing_entryusn(topology_st, usn_plugin_enabled, log_buffering_enabled):
+    """Check if healthcheck returns DSBLE0007 code when USN plugin is enabled but entryusn index is missing
+
+    :id: 4879dfc8-cd96-43e6-9ebc-053fc8e64ad0
+    :setup: Standalone instance
+    :steps:
+        1. Create DS instance
+        2. Enable USN plugin
+        3. Remove entryusn index
+        4. Use healthcheck without --json option
+        5. Use healthcheck with --json option
+        6. Re-add the entryusn index
+        7. Use healthcheck without --json option
+        8. Use healthcheck with --json option
+    :expectedresults:
+        1. Success
+        2. Success
+        3. Success
+        4. healthcheck reports DSBLE0007 code and related details
+        5. healthcheck reports DSBLE0007 code and related details
+        6. Success
+        7. healthcheck reports no issues found
+        8. healthcheck reports no issues found
+    """
+
+    RET_CODE = "DSBLE0007"
+    ENTRYUSN_DN = "cn=entryusn,cn=index,cn=userroot,cn=ldbm database,cn=plugins,cn=config"
+
+    standalone = topology_st.standalone
+
+    log.info("Remove entryusn index")
+    entryusn_index = Index(standalone, ENTRYUSN_DN)
+    entryusn_index.delete()
+
+    run_healthcheck_and_flush_log(topology_st, standalone, json=False, searched_code=RET_CODE)
+    run_healthcheck_and_flush_log(topology_st, standalone, json=True, searched_code=RET_CODE)
+
+    log.info("Re-add the entryusn index")
+    backend = Backends(standalone).get("userRoot")
+    backend.add_index("entryusn", ["eq"], matching_rules=["integerOrderingMatch"])
+
+    run_healthcheck_and_flush_log(topology_st, standalone, json=False, searched_code=CMD_OUTPUT)
+    run_healthcheck_and_flush_log(topology_st, standalone, json=True, searched_code=JSON_OUTPUT)
+
+
+def test_usn_plugin_missing_matching_rule(topology_st, usn_plugin_enabled, log_buffering_enabled):
+    """Check if healthcheck returns DSBLE0007 code when USN plugin is enabled but entryusn index is missing integerOrderingMatch
+
+    :id: b00b419f-2ca6-451f-a9b2-f22ad6b10718
+    :setup: Standalone instance
+    :steps:
+        1. Create DS instance
+        2. Enable USN plugin
+        3. Remove integerOrderingMatch matching rule from entryusn index
+        4. Use healthcheck without --json option
+        5. Use healthcheck with --json option
+        6. Re-add the matching rule
+        7. Use healthcheck without --json option
+        8. Use healthcheck with --json option
+    :expectedresults:
+        1. Success
+        2. Success
+        3. Success
+        4. healthcheck reports DSBLE0007 code and related details
+        5. healthcheck reports DSBLE0007 code and related details
+        6. Success
+        7. healthcheck reports no issues found
+        8. healthcheck reports no issues found
+    """
+
+    RET_CODE = "DSBLE0007"
+    ENTRYUSN_DN = "cn=entryusn,cn=index,cn=userroot,cn=ldbm database,cn=plugins,cn=config"
+
+    standalone = topology_st.standalone
+
+    log.info("Remove integerOrderingMatch matching rule from entryusn index")
+    entryusn_index = Index(standalone, ENTRYUSN_DN)
+    entryusn_index.remove("nsMatchingRule", "integerOrderingMatch")
+
+    run_healthcheck_and_flush_log(topology_st, standalone, json=False, searched_code=RET_CODE)
+    run_healthcheck_and_flush_log(topology_st, standalone, json=True, searched_code=RET_CODE)
+
+    log.info("Re-add the integerOrderingMatch matching rule")
+    entryusn_index = Index(standalone, ENTRYUSN_DN)
+    entryusn_index.add("nsMatchingRule", "integerOrderingMatch")
+
+    run_healthcheck_and_flush_log(topology_st, standalone, json=False, searched_code=CMD_OUTPUT)
+    run_healthcheck_and_flush_log(topology_st, standalone, json=True, searched_code=JSON_OUTPUT)
+
+
+def test_retrocl_plugin_missing_changenumber(topology_st, retrocl_plugin_enabled, log_buffering_enabled):
+    """Check if healthcheck returns DSBLE0007 code when RetroCL plugin is enabled but changeNumber index is missing from changelog backend
+
+    :id: 3e1a3625-4e6f-4e23-868d-6f32e018ad7e
+    :setup: Standalone instance
+    :steps:
+        1. Create DS instance
+        2. Enable RetroCL plugin
+        3. Remove changeNumber index from changelog backend
+        4. Use healthcheck without --json option
+        5. Use healthcheck with --json option
+        6. Re-add the changeNumber index
+        7. Use healthcheck without --json option
+        8. Use healthcheck with --json option
+    :expectedresults:
+        1. Success
+        2. Success
+        3. Success
+        4. healthcheck reports DSBLE0007 code and related details
+        5. healthcheck reports DSBLE0007 code and related details
+        6. Success
+        7. healthcheck reports no issues found
+        8. healthcheck reports no issues found
+    """
+
+    RET_CODE = "DSBLE0007"
+
+    standalone = topology_st.standalone
+
+    log.info("Remove changeNumber index from changelog backend")
+    changenumber_dn = "cn=changenumber,cn=index,cn=changelog,cn=ldbm database,cn=plugins,cn=config"
+    changenumber_index = Index(standalone, changenumber_dn)
+    changenumber_index.delete()
+
+    run_healthcheck_and_flush_log(topology_st, standalone, json=False, searched_code=RET_CODE)
+    run_healthcheck_and_flush_log(topology_st, standalone, json=True, searched_code=RET_CODE)
+
+    log.info("Re-add the changeNumber index")
+    backends = Backends(standalone)
+    changelog_backend = backends.get("changelog")
+    changelog_backend.add_index("changenumber", ["eq"], matching_rules=["integerOrderingMatch"])
+    log.info("Successfully re-added changeNumber index")
+
+    run_healthcheck_and_flush_log(topology_st, standalone, json=False, searched_code=CMD_OUTPUT)
+    run_healthcheck_and_flush_log(topology_st, standalone, json=True, searched_code=JSON_OUTPUT)
+
+
+def test_retrocl_plugin_missing_matching_rule(topology_st, retrocl_plugin_enabled, log_buffering_enabled):
+    """Check if healthcheck returns DSBLE0007 code when RetroCL plugin is enabled but changeNumber index is missing integerOrderingMatch
+
+    :id: 1c68b1b2-90a9-4ec0-815a-a626b20744fe
+    :setup: Standalone instance
+    :steps:
+        1. Create DS instance
+        2. Enable RetroCL plugin
+        3. Remove integerOrderingMatch matching rule from changeNumber index
+        4. Use healthcheck without --json option
+        5. Use healthcheck with --json option
+        6. Re-add the matching rule
+        7. Use healthcheck without --json option
+        8. Use healthcheck with --json option
+    :expectedresults:
+        1. Success
+        2. Success
+        3. Success
+        4. healthcheck reports DSBLE0007 code and related details
+        5. healthcheck reports DSBLE0007 code and related details
+        6. Success
+        7. healthcheck reports no issues found
+        8. healthcheck reports no issues found
+    """
+
+    RET_CODE = "DSBLE0007"
+
+    standalone = topology_st.standalone
+
+    log.info("Remove integerOrderingMatch matching rule from changeNumber index")
+    changenumber_dn = "cn=changenumber,cn=index,cn=changelog,cn=ldbm database,cn=plugins,cn=config"
+    changenumber_index = Index(standalone, changenumber_dn)
+    changenumber_index.remove("nsMatchingRule", "integerOrderingMatch")
+
+    run_healthcheck_and_flush_log(topology_st, standalone, json=False, searched_code=RET_CODE)
+    run_healthcheck_and_flush_log(topology_st, standalone, json=True, searched_code=RET_CODE)
+
+    log.info("Re-add the integerOrderingMatch matching rule")
+    changenumber_index = Index(standalone, changenumber_dn)
+    changenumber_index.add("nsMatchingRule", "integerOrderingMatch")
+    log.info("Successfully re-added integerOrderingMatch to changeNumber index")
+
+    run_healthcheck_and_flush_log(topology_st, standalone, json=False, searched_code=CMD_OUTPUT)
+    run_healthcheck_and_flush_log(topology_st, standalone, json=True, searched_code=JSON_OUTPUT)
+
+
+def test_multiple_missing_indexes(topology_st, log_buffering_enabled):
+    """Check if healthcheck returns DSBLE0007 code when multiple system indexes are missing
+
+    :id: f7cfcd6e-3c47-4ba5-bb2b-1f8e7a29c899
+    :setup: Standalone instance
+    :steps:
+        1. Create DS instance
+        2. Remove multiple system indexes (parentId, nsUniqueId)
+        3. Use healthcheck without --json option
+        4. Use healthcheck with --json option
+        5. Re-add the missing indexes
+        6. Use healthcheck without --json option
+        7. Use healthcheck with --json option
+    :expectedresults:
+        1. Success
+        2. Success
+        3. healthcheck reports DSBLE0007 code and related details
+        4. healthcheck reports DSBLE0007 code and related details
+        5. Success
+        6. healthcheck reports no issues found
+        7. healthcheck reports no issues found
+    """
+
+    RET_CODE = "DSBLE0007"
+    PARENTID_DN = "cn=parentid,cn=index,cn=userroot,cn=ldbm database,cn=plugins,cn=config"
+    NSUNIQUEID_DN = "cn=nsuniqueid,cn=index,cn=userroot,cn=ldbm database,cn=plugins,cn=config"
+
+    standalone = topology_st.standalone
+
+    log.info("Remove multiple system indexes")
+    for index_dn in [PARENTID_DN, NSUNIQUEID_DN]:
+        index = Index(standalone, index_dn)
+        index.delete()
+        log.info(f"Successfully removed index: {index_dn}")
+
+    run_healthcheck_and_flush_log(topology_st, standalone, json=False, searched_code=RET_CODE)
+    run_healthcheck_and_flush_log(topology_st, standalone, json=True, searched_code=RET_CODE)
+
+    log.info("Re-add the missing system indexes")
+    backend = Backends(standalone).get("userRoot")
+    backend.add_index("parentid", ["eq"], matching_rules=["integerOrderingMatch"])
+    backend.add_index("nsuniqueid", ["eq"])
+
+    run_healthcheck_and_flush_log(topology_st, standalone, json=False, searched_code=CMD_OUTPUT)
+    run_healthcheck_and_flush_log(topology_st, standalone, json=True, searched_code=JSON_OUTPUT)
+
+
+if __name__ == "__main__":
+    # Run isolated
+    # -s for DEBUG mode
+    CURRENT_FILE = os.path.realpath(__file__)
diff --git a/ldap/ldif/template-dse.ldif.in b/ldap/ldif/template-dse.ldif.in
index 2ddaf5fb3..c2754adf8 100644
--- a/ldap/ldif/template-dse.ldif.in
+++ b/ldap/ldif/template-dse.ldif.in
@@ -973,6 +973,14 @@ cn: aci
 nssystemindex: true
 nsindextype: pres
 
+dn: cn=ancestorid,cn=default indexes, cn=config,cn=ldbm database,cn=plugins,cn=config
+objectclass: top
+objectclass: nsIndex
+cn: ancestorid
+nssystemindex: true
+nsindextype: eq
+nsmatchingrule: integerOrderingMatch
+
 dn: cn=cn,cn=default indexes, cn=config,cn=ldbm database,cn=plugins,cn=config
 objectclass: top
 objectclass: nsIndex
diff --git a/ldap/servers/slapd/back-ldbm/instance.c b/ldap/servers/slapd/back-ldbm/instance.c
index e82cd17cc..f6a9817a7 100644
--- a/ldap/servers/slapd/back-ldbm/instance.c
+++ b/ldap/servers/slapd/back-ldbm/instance.c
@@ -16,7 +16,7 @@
 
 /* Forward declarations */
 static void ldbm_instance_destructor(void **arg);
-Slapi_Entry *ldbm_instance_init_config_entry(char *cn_val, char *v1, char *v2, char *v3, char *v4);
+Slapi_Entry *ldbm_instance_init_config_entry(char *cn_val, char *v1, char *v2, char *v3, char *v4, char *mr);
 
 
 /* Creates and initializes a new ldbm_instance structure.
@@ -127,7 +127,7 @@ done:
  * Take a bunch of strings, and create a index config entry
  */
 Slapi_Entry *
-ldbm_instance_init_config_entry(char *cn_val, char *val1, char *val2, char *val3, char *val4)
+ldbm_instance_init_config_entry(char *cn_val, char *val1, char *val2, char *val3, char *val4, char *mr)
 {
     Slapi_Entry *e = slapi_entry_alloc();
     struct berval *vals[2];
@@ -162,6 +162,12 @@ ldbm_instance_init_config_entry(char *cn_val, char *val1, char *val2, char *val3
         slapi_entry_add_values(e, "nsIndexType", vals);
     }
 
+    if (mr) {
+        val.bv_val = mr;
+        val.bv_len = strlen(mr);
+        slapi_entry_add_values(e, "nsMatchingRule", vals);
+    }
+
     return e;
 }
 
@@ -184,24 +190,24 @@ ldbm_instance_create_default_indexes(backend *be)
      * ACL routines.
      */
    if (entryrdn_get_switch()) { /* subtree-rename: on */
-        e = ldbm_instance_init_config_entry(LDBM_ENTRYRDN_STR, "subtree", 0, 0, 0);
+        e = ldbm_instance_init_config_entry(LDBM_ENTRYRDN_STR, "subtree", 0, 0, 0, 0);
         ldbm_instance_config_add_index_entry(inst, e, flags);
         slapi_entry_free(e);
     } else {
-        e = ldbm_instance_init_config_entry(LDBM_ENTRYDN_STR, "eq", 0, 0, 0);
+        e = ldbm_instance_init_config_entry(LDBM_ENTRYDN_STR, "eq", 0, 0, 0, 0);
         ldbm_instance_config_add_index_entry(inst, e, flags);
         slapi_entry_free(e);
     }
 
-    e = ldbm_instance_init_config_entry(LDBM_PARENTID_STR, "eq", 0, 0, 0);
+    e = ldbm_instance_init_config_entry(LDBM_PARENTID_STR, "eq", 0, 0, 0, "integerOrderingMatch");
     ldbm_instance_config_add_index_entry(inst, e, flags);
     slapi_entry_free(e);
 
-    e = ldbm_instance_init_config_entry("objectclass", "eq", 0, 0, 0);
+    e = ldbm_instance_init_config_entry("objectclass", "eq", 0, 0, 0, 0);
     ldbm_instance_config_add_index_entry(inst, e, flags);
     slapi_entry_free(e);
 
-    e = ldbm_instance_init_config_entry("aci", "pres", 0, 0, 0);
+    e = ldbm_instance_init_config_entry("aci", "pres", 0, 0, 0, 0);
     ldbm_instance_config_add_index_entry(inst, e, flags);
     slapi_entry_free(e);
 
@@ -211,26 +217,26 @@ ldbm_instance_create_default_indexes(backend *be)
     slapi_entry_free(e);
 #endif
 
-    e = ldbm_instance_init_config_entry(LDBM_NUMSUBORDINATES_STR, "pres", 0, 0, 0);
+    e = ldbm_instance_init_config_entry(LDBM_NUMSUBORDINATES_STR, "pres", 0, 0, 0, 0);
     ldbm_instance_config_add_index_entry(inst, e, flags);
     slapi_entry_free(e);
 
-    e = ldbm_instance_init_config_entry(SLAPI_ATTR_UNIQUEID, "eq", 0, 0, 0);
+    e = ldbm_instance_init_config_entry(SLAPI_ATTR_UNIQUEID, "eq", 0, 0, 0, 0);
     ldbm_instance_config_add_index_entry(inst, e, flags);
     slapi_entry_free(e);
 
     /* For MMR, we need this attribute (to replace use of dncomp in delete). */
-    e = ldbm_instance_init_config_entry(ATTR_NSDS5_REPLCONFLICT, "eq", "pres", 0, 0);
+    e = ldbm_instance_init_config_entry(ATTR_NSDS5_REPLCONFLICT, "eq", "pres", 0, 0, 0);
     ldbm_instance_config_add_index_entry(inst, e, flags);
     slapi_entry_free(e);
 
     /* write the dse file only on the final index */
-    e = ldbm_instance_init_config_entry(SLAPI_ATTR_NSCP_ENTRYDN, "eq", 0, 0, 0);
+    e = ldbm_instance_init_config_entry(SLAPI_ATTR_NSCP_ENTRYDN, "eq", 0, 0, 0, 0);
     ldbm_instance_config_add_index_entry(inst, e, flags);
     slapi_entry_free(e);
 
     /* ldbm_instance_config_add_index_entry(inst, 2, argv); */
-    e = ldbm_instance_init_config_entry(LDBM_PSEUDO_ATTR_DEFAULT, "none", 0, 0, 0);
+    e = ldbm_instance_init_config_entry(LDBM_PSEUDO_ATTR_DEFAULT, "none", 0, 0, 0, 0);
     attr_index_config(be, "ldbm index init", 0, e, 1, 0, NULL);
     slapi_entry_free(e);
 
@@ -239,7 +245,7 @@ ldbm_instance_create_default_indexes(backend *be)
      * ancestorid is special, there is actually no such attr type
      * but we still want to use the attr index file APIs.
      */
-    e = ldbm_instance_init_config_entry(LDBM_ANCESTORID_STR, "eq", 0, 0, 0);
+    e = ldbm_instance_init_config_entry(LDBM_ANCESTORID_STR, "eq", 0, 0, 0, "integerOrderingMatch");
     attr_index_config(be, "ldbm index init", 0, e, 1, 0, NULL);
     slapi_entry_free(e);
 }
diff --git a/src/lib389/lib389/backend.py b/src/lib389/lib389/backend.py
index cee073ea7..a97def17e 100644
--- a/src/lib389/lib389/backend.py
+++ b/src/lib389/lib389/backend.py
@@ -34,7 +34,8 @@ from lib389.encrypted_attributes import EncryptedAttr, EncryptedAttrs
 # This is for sample entry creation.
 from lib389.configurations import get_sample_entries
 
-from lib389.lint import DSBLE0001, DSBLE0002, DSBLE0003, DSVIRTLE0001, DSCLLE0001
+from lib389.lint import DSBLE0001, DSBLE0002, DSBLE0003, DSBLE0007, DSVIRTLE0001, DSCLLE0001
+from lib389.plugins import USNPlugin
 
 
 class BackendLegacy(object):
@@ -531,6 +532,136 @@ class Backend(DSLdapObject):
             self._log.debug(f"_lint_cl_trimming - backend ({suffix}) is not replicated")
             pass
 
+    def _lint_system_indexes(self):
+        """Check that system indexes are correctly configured"""
+        bename = self.lint_uid()
+        suffix = self.get_attr_val_utf8('nsslapd-suffix')
+        indexes = self.get_indexes()
+
+        # Default system indexes taken from ldap/servers/slapd/back-ldbm/instance.c
+        expected_system_indexes = {
+            'entryrdn': {'types': ['subtree'], 'matching_rule': None},
+            'parentId': {'types': ['eq'], 'matching_rule': 'integerOrderingMatch'},
+            'ancestorId': {'types': ['eq'], 'matching_rule': 'integerOrderingMatch'},
+            'objectClass': {'types': ['eq'], 'matching_rule': None},
+            'aci': {'types': ['pres'], 'matching_rule': None},
+            'nscpEntryDN': {'types': ['eq'], 'matching_rule': None},
+            'nsUniqueId': {'types': ['eq'], 'matching_rule': None},
+            'nsds5ReplConflict': {'types': ['eq', 'pres'], 'matching_rule': None}
+        }
+
+        # Default system indexes taken from ldap/ldif/template-dse.ldif.in
+        expected_system_indexes.update({
+            'nsCertSubjectDN': {'types': ['eq'], 'matching_rule': None},
+            'numsubordinates': {'types': ['pres'], 'matching_rule': None},
+            'nsTombstoneCSN': {'types': ['eq'], 'matching_rule': None},
+            'targetuniqueid': {'types': ['eq'], 'matching_rule': None}
+        })
+
+        # RetroCL plugin creates its own backend with an additional index for changeNumber
+        # See ldap/servers/plugins/retrocl/retrocl_create.c
+        if suffix.lower() == 'cn=changelog':
+            expected_system_indexes.update({
+                'changeNumber': {'types': ['eq'], 'matching_rule': 'integerOrderingMatch'}
+            })
+
+        # USN plugin requires entryusn attribute indexed for equality with integerOrderingMatch rule
+        # See ldap/ldif/template-dse.ldif.in
+        try:
+            usn_plugin = USNPlugin(self._instance)
+            if usn_plugin.status():
+                expected_system_indexes.update({
+                    'entryusn': {'types': ['eq'], 'matching_rule': 'integerOrderingMatch'}
+                })
+        except Exception as e:
+            self._log.debug(f"_lint_system_indexes - Error checking USN plugin: {e}")
+
+        discrepancies = []
+        remediation_commands = []
+        reindex_attrs = set()
+
+        for attr_name, expected_config in expected_system_indexes.items():
+            try:
+                index = indexes.get(attr_name)
+                # Check if index exists
+                if index is None:
+                    discrepancies.append(f"Missing system index: {attr_name}")
+                    # Generate remediation command
+                    index_types = ' '.join([f"--add-type {t}" for t in expected_config['types']])
+                    cmd = f"dsconf YOUR_INSTANCE backend index add {bename} --attr {attr_name} {index_types}"
+                    if expected_config['matching_rule']:
+                        cmd += f" --add-mr {expected_config['matching_rule']}"
+                    remediation_commands.append(cmd)
+                    reindex_attrs.add(attr_name)  # New index needs reindexing
+                else:
+                    # Index exists, check configuration
+                    actual_types = index.get_attr_vals_utf8('nsIndexType') or []
+                    actual_mrs = index.get_attr_vals_utf8('nsMatchingRule') or []
+
+                    # Normalize to lowercase for comparison
+                    actual_types = [t.lower() for t in actual_types]
+                    expected_types = [t.lower() for t in expected_config['types']]
+
+                    # Check index types
+                    missing_types = set(expected_types) - set(actual_types)
+                    if missing_types:
+                        discrepancies.append(f"Index {attr_name} missing types: {', '.join(missing_types)}")
+                        missing_type_args = ' '.join([f"--add-type {t}" for t in missing_types])
+                        cmd = f"dsconf YOUR_INSTANCE backend index set {bename} --attr {attr_name} {missing_type_args}"
+                        remediation_commands.append(cmd)
+                        reindex_attrs.add(attr_name)
+
+                    # Check matching rules
+                    expected_mr = expected_config['matching_rule']
+                    if expected_mr:
+                        actual_mrs_lower = [mr.lower() for mr in actual_mrs]
+                        if expected_mr.lower() not in actual_mrs_lower:
+                            discrepancies.append(f"Index {attr_name} missing matching rule: {expected_mr}")
+                            # Add the missing matching rule
+                            cmd = f"dsconf YOUR_INSTANCE backend index set {bename} --attr {attr_name} --add-mr {expected_mr}"
+                            remediation_commands.append(cmd)
+                            reindex_attrs.add(attr_name)
+
+            except Exception as e:
+                self._log.debug(f"_lint_system_indexes - Error checking index {attr_name}: {e}")
+                discrepancies.append(f"Unable to check index {attr_name}: {str(e)}")
+
+        if discrepancies:
+            report = copy.deepcopy(DSBLE0007)
+            report['check'] = f'backends:{bename}:system_indexes'
+            report['items'] = [suffix]
+
+            expected_indexes_list = []
+            for attr_name, config in expected_system_indexes.items():
+                types_str = "', '".join(config['types'])
+                index_desc = f"- {attr_name}: index type{'s' if len(config['types']) > 1 else ''} '{types_str}'"
+                if config['matching_rule']:
+                    index_desc += f" with matching rule '{config['matching_rule']}'"
+                expected_indexes_list.append(index_desc)
+
+            formatted_expected_indexes = '\n'.join(expected_indexes_list)
+            report['detail'] = report['detail'].replace('EXPECTED_INDEXES', formatted_expected_indexes)
+            report['detail'] = report['detail'].replace('DISCREPANCIES', '\n'.join([f"- {d}" for d in discrepancies]))
+
+            formatted_commands = '\n'.join([f" # {cmd}" for cmd in remediation_commands])
+            report['fix'] = report['fix'].replace('REMEDIATION_COMMANDS', formatted_commands)
+
+            # Generate specific reindex commands for affected attributes
+            if reindex_attrs:
+                reindex_commands = []
+                for attr in sorted(reindex_attrs):
+                    reindex_cmd = f"dsconf YOUR_INSTANCE backend index reindex {bename} --attr {attr}"
+                    reindex_commands.append(f" # {reindex_cmd}")
+                formatted_reindex_commands = '\n'.join(reindex_commands)
+            else:
+                formatted_reindex_commands = " # No reindexing needed"
+
+            report['fix'] = report['fix'].replace('REINDEX_COMMANDS', formatted_reindex_commands)
+            report['fix'] = report['fix'].replace('YOUR_INSTANCE', self._instance.serverid)
+            report['fix'] = report['fix'].replace('BACKEND_NAME', bename)
+            yield report
+
     def create_sample_entries(self, version):
         """Creates sample entries under nsslapd-suffix value
 
diff --git a/src/lib389/lib389/lint.py b/src/lib389/lib389/lint.py
|
||||
index 3d3c79ea3..1e48c790d 100644
|
||||
--- a/src/lib389/lib389/lint.py
|
||||
+++ b/src/lib389/lib389/lint.py
|
||||
@@ -57,6 +57,35 @@ DSBLE0003 = {
|
||||
'fix': """You need to import an LDIF file, or create the suffix entry, in order to initialize the database."""
|
||||
}
|
||||
|
||||
+DSBLE0007 = {
|
||||
+ 'dsle': 'DSBLE0007',
|
||||
+ 'severity': 'HIGH',
|
||||
+ 'description': 'Missing or incorrect system indexes.',
|
||||
+ 'items': [],
|
||||
+ 'detail': """System indexes are essential for proper directory server operation. Missing or
|
||||
+incorrectly configured system indexes can lead to poor search performance, replication
|
||||
+issues, and other operational problems.
|
||||
+
|
||||
+The following system indexes should be present with correct configuration:
|
||||
+EXPECTED_INDEXES
|
||||
+
|
||||
+Current discrepancies:
|
||||
+DISCREPANCIES
|
||||
+""",
|
||||
+ 'fix': """Add the missing system indexes or fix the incorrect configurations using dsconf:
|
||||
+
|
||||
+REMEDIATION_COMMANDS
|
||||
+
|
||||
+After adding or modifying indexes, you may need to reindex the affected attributes:
|
||||
+
|
||||
+REINDEX_COMMANDS
|
||||
+
|
||||
+WARNING: Reindexing can be resource-intensive and may impact server performance on a live system.
|
||||
+Consider scheduling reindexing during maintenance windows or periods of low activity. For production
|
||||
+systems, you may want to reindex offline or use the --wait option to monitor task completion.
|
||||
+"""
|
||||
+}
|
||||
+
|
||||
# Config checks
|
||||
DSCLE0001 = {
|
||||
'dsle': 'DSCLE0001',
|
||||
--
|
||||
2.51.1
|
||||
|
||||
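The lint check in this patch fills the DSBLE0007 template by string substitution of placeholder tokens (EXPECTED_INDEXES, DISCREPANCIES, REMEDIATION_COMMANDS, REINDEX_COMMANDS, YOUR_INSTANCE). A minimal standalone sketch of that mechanism, using a simplified stand-in template rather than the real lib389 objects:

```python
import copy

# Simplified stand-in for the DSBLE0007 template in src/lib389/lib389/lint.py.
DSBLE0007 = {
    'dsle': 'DSBLE0007',
    'severity': 'HIGH',
    'detail': "Expected:\nEXPECTED_INDEXES\n\nDiscrepancies:\nDISCREPANCIES\n",
    'fix': "Run:\nREMEDIATION_COMMANDS\nThen reindex:\nREINDEX_COMMANDS\n",
}

def build_report(bename, discrepancies, commands, reindex_attrs, serverid):
    """Fill the placeholder tokens the same way the lint check does."""
    report = copy.deepcopy(DSBLE0007)  # deepcopy so the shared template is never mutated
    report['check'] = f'backends:{bename}:system_indexes'
    report['detail'] = report['detail'].replace(
        'DISCREPANCIES', '\n'.join(f"- {d}" for d in discrepancies))
    report['detail'] = report['detail'].replace('EXPECTED_INDEXES', '(see docs)')
    report['fix'] = report['fix'].replace(
        'REMEDIATION_COMMANDS', '\n'.join(f"  # {c}" for c in commands))
    reindex = '\n'.join(
        f"  # dsconf YOUR_INSTANCE backend index reindex {bename} --attr {a}"
        for a in sorted(reindex_attrs)) or "  # No reindexing needed"
    report['fix'] = report['fix'].replace('REINDEX_COMMANDS', reindex)
    # Instance name is substituted last so it applies to every inserted command.
    report['fix'] = report['fix'].replace('YOUR_INSTANCE', serverid)
    return report

report = build_report(
    'userRoot',
    ['Index aci missing index type: pres'],
    ['dsconf YOUR_INSTANCE backend index set userRoot --attr aci --add-type pres'],
    {'aci'}, 'localhost')
```

Substituting YOUR_INSTANCE after the command lists is what lets the same template text serve every instance name.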
@ -0,0 +1,606 @@
From fb28c3a318fa87ff194aeb7f29c0fc1846918d81 Mon Sep 17 00:00:00 2001
From: tbordaz <tbordaz@redhat.com>
Date: Fri, 3 Oct 2025 15:11:12 +0200
Subject: [PATCH] Issue 6966 - On large DB, unlimited IDL scan limit reduce the
 SRCH performance (#6967)

Bug description:
RFE 2435 removed the limit on the IDList size.
A side effect is that for subtree/one-level searches some IDLs can be
huge. For example, a subtree search on the base suffix will build an
IDL up to the number of entries in the DB.
Building such a big IDL accounts for +90% of the etime of the
operation.

Fix description:
Using fine grain indexing we can limit the IDL for the parentid
(onelevel) and ancestorid (subtree) indexes.
It adds a new backend config parameter, nsslapd-systemidlistscanlimit,
which provides the default value for the parentid/ancestorid limits.
The default value is 5000.
When creating a new backend it creates the parentid/ancestorid
indexes with nsIndexIDListScanLimit set to the above limit.
At startup the fine grain limit is either taken from nsIndexIDListScanLimit
or falls back to nsslapd-systemidlistscanlimit.
During a search request it uses the standard fine grain mechanism.
On my tests it improves throughput and response time by ~50 times.

fixes: #6966

Reviewed by: Mark Reynolds, Pierre Rogier, William Brown and Simon
Piguchin (Thanks to you all !!!)

(cherry picked from commit b53181715937135b1c80ff34d56e9e21b53fe889)
Signed-off-by: Masahiro Matsuya <mmatsuya@redhat.com>
---
 .../tests/suites/config/config_test.py        | 31 +++++--
 .../paged_results/paged_results_test.py       | 25 ++++--
 ldap/servers/slapd/back-ldbm/back-ldbm.h      |  1 +
 ldap/servers/slapd/back-ldbm/index.c          |  2 +
 ldap/servers/slapd/back-ldbm/instance.c       | 89 ++++++++++++++++---
 ldap/servers/slapd/back-ldbm/ldbm_config.c    | 30 +++++++
 ldap/servers/slapd/back-ldbm/ldbm_config.h    |  1 +
 .../slapd/back-ldbm/ldbm_index_config.c       |  8 ++
 src/lib389/lib389/backend.py                  | 33 ++++++-
 src/lib389/lib389/cli_conf/backend.py         | 20 +++++
 10 files changed, 213 insertions(+), 27 deletions(-)

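The startup behavior described in the fix (take the per-index nsIndexIDListScanLimit if one is configured, otherwise fall back to the backend-wide nsslapd-systemidlistscanlimit default of 5000) can be sketched as follows. Function names here are illustrative, not part of the server:

```python
def effective_scanlimit(index_limits, system_default=5000):
    """Pick the effective fine-grain limit for parentid/ancestorid.

    index_limits: per-index limit values read from nsIndexIDListScanLimit
    (may be empty when the attribute is not configured on the index entry).
    """
    if index_limits:
        return index_limits[0]   # the configured fine-grain definition wins
    return system_default        # fall back to nsslapd-systemidlistscanlimit

def scanlimit_value(limit):
    # The value written on the index entry, e.g. "limit=5000 type=eq flags=AND"
    return f"limit={limit} type=eq flags=AND"
```

With no per-index setting, `effective_scanlimit([])` yields the system default; a per-index value such as 100 overrides it.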
diff --git a/dirsrvtests/tests/suites/config/config_test.py b/dirsrvtests/tests/suites/config/config_test.py
index 19232c87d..430176602 100644
--- a/dirsrvtests/tests/suites/config/config_test.py
+++ b/dirsrvtests/tests/suites/config/config_test.py
@@ -514,17 +514,19 @@ def test_ndn_cache_enabled(topo):
         topo.standalone.config.set('nsslapd-ndn-cache-max-size', 'invalid_value')
 
 
-def test_require_index(topo):
-    """Test nsslapd-ignore-virtual-attrs configuration attribute
+def test_require_index(topo, request):
+    """Validate that unindexed searches are rejected
 
     :id: fb6e31f2-acc2-4e75-a195-5c356faeb803
     :setup: Standalone instance
     :steps:
         1. Set "nsslapd-require-index" to "on"
-        2. Test an unindexed search is rejected
+        2. ancestorid/idlscanlimit to 100
+        3. Test an unindexed search is rejected
     :expectedresults:
         1. Success
         2. Success
+        3. Success
     """
 
     # Set the config
@@ -535,6 +537,10 @@ def test_require_index(topo):
 
     db_cfg = DatabaseConfig(topo.standalone)
     db_cfg.set([('nsslapd-idlistscanlimit', '100')])
+    backend = Backends(topo.standalone).get_backend(DEFAULT_SUFFIX)
+    ancestorid_index = backend.get_index('ancestorid')
+    ancestorid_index.replace("nsIndexIDListScanLimit", ensure_bytes("limit=100 type=eq flags=AND"))
+    topo.standalone.restart()
 
     users = UserAccounts(topo.standalone, DEFAULT_SUFFIX)
     for i in range(101):
@@ -545,11 +551,16 @@ def test_require_index(topo):
     with pytest.raises(ldap.UNWILLING_TO_PERFORM):
         raw_objects.filter("(description=test*)")
 
+    def fin():
+        ancestorid_index.replace("nsIndexIDListScanLimit", ensure_bytes("limit=5000 type=eq flags=AND"))
+
+    request.addfinalizer(fin)
+
 
 
 @pytest.mark.skipif(ds_is_older('1.4.2'), reason="The config setting only exists in 1.4.2 and higher")
-def test_require_internal_index(topo):
-    """Test nsslapd-ignore-virtual-attrs configuration attribute
+def test_require_internal_index(topo, request):
+    """Ensure internal operations require indexed attributes
 
     :id: 22b94f30-59e3-4f27-89a1-c4f4be036f7f
     :setup: Standalone instance
@@ -580,6 +591,10 @@ def test_require_internal_index(topo):
     # Create a bunch of users
     db_cfg = DatabaseConfig(topo.standalone)
     db_cfg.set([('nsslapd-idlistscanlimit', '100')])
+    backend = Backends(topo.standalone).get_backend(DEFAULT_SUFFIX)
+    ancestorid_index = backend.get_index('ancestorid')
+    ancestorid_index.replace("nsIndexIDListScanLimit", ensure_bytes("limit=100 type=eq flags=AND"))
+    topo.standalone.restart()
     users = UserAccounts(topo.standalone, DEFAULT_SUFFIX)
     for i in range(102, 202):
         users.create_test_user(uid=i)
@@ -604,6 +619,12 @@ def test_require_internal_index(topo):
     with pytest.raises(ldap.UNWILLING_TO_PERFORM):
         user.delete()
 
+    def fin():
+        ancestorid_index.replace("nsIndexIDListScanLimit", ensure_bytes("limit=5000 type=eq flags=AND"))
+
+    request.addfinalizer(fin)
+
+
 
 if __name__ == '__main__':
     # Run isolated
diff --git a/dirsrvtests/tests/suites/paged_results/paged_results_test.py b/dirsrvtests/tests/suites/paged_results/paged_results_test.py
index 1ed11c891..8835be8fa 100644
--- a/dirsrvtests/tests/suites/paged_results/paged_results_test.py
+++ b/dirsrvtests/tests/suites/paged_results/paged_results_test.py
@@ -317,19 +317,19 @@ def test_search_success(topology_st, create_user, page_size, users_num):
     del_users(users_list)
 
 
-@pytest.mark.parametrize("page_size,users_num,suffix,attr_name,attr_value,expected_err", [
+@pytest.mark.parametrize("page_size,users_num,suffix,attr_name,attr_value,expected_err, restart", [
     (50, 200, 'cn=config,%s' % DN_LDBM, 'nsslapd-idlistscanlimit', '100',
-     ldap.UNWILLING_TO_PERFORM),
+     ldap.UNWILLING_TO_PERFORM, True),
     (5, 15, DN_CONFIG, 'nsslapd-timelimit', '20',
-     ldap.UNAVAILABLE_CRITICAL_EXTENSION),
+     ldap.UNAVAILABLE_CRITICAL_EXTENSION, False),
     (21, 50, DN_CONFIG, 'nsslapd-sizelimit', '20',
-     ldap.SIZELIMIT_EXCEEDED),
+     ldap.SIZELIMIT_EXCEEDED, False),
     (21, 50, DN_CONFIG, 'nsslapd-pagedsizelimit', '5',
-     ldap.SIZELIMIT_EXCEEDED),
+     ldap.SIZELIMIT_EXCEEDED, False),
     (5, 50, 'cn=config,%s' % DN_LDBM, 'nsslapd-lookthroughlimit', '20',
-     ldap.ADMINLIMIT_EXCEEDED)])
+     ldap.ADMINLIMIT_EXCEEDED, False)])
 def test_search_limits_fail(topology_st, create_user, page_size, users_num,
-                            suffix, attr_name, attr_value, expected_err):
+                            suffix, attr_name, attr_value, expected_err, restart):
     """Verify that search with a simple paged results control
     throws expected exceptoins when corresponding limits are
     exceeded.
@@ -351,6 +351,15 @@ def test_search_limits_fail(topology_st, create_user, page_size, users_num,
 
     users_list = add_users(topology_st, users_num, DEFAULT_SUFFIX)
     attr_value_bck = change_conf_attr(topology_st, suffix, attr_name, attr_value)
+    ancestorid_index = None
+    if attr_name == 'nsslapd-idlistscanlimit':
+        backend = Backends(topology_st.standalone).get_backend(DEFAULT_SUFFIX)
+        ancestorid_index = backend.get_index('ancestorid')
+        ancestorid_index.replace("nsIndexIDListScanLimit", ensure_bytes("limit=100 type=eq flags=AND"))
+
+    if (restart):
+        log.info('Instance restarted')
+        topology_st.standalone.restart()
     conf_param_dict = {attr_name: attr_value}
     search_flt = r'(uid=test*)'
     searchreq_attrlist = ['dn', 'sn']
@@ -403,6 +412,8 @@ def test_search_limits_fail(topology_st, create_user, page_size, users_num,
         else:
             break
     finally:
+        if ancestorid_index:
+            ancestorid_index.replace("nsIndexIDListScanLimit", ensure_bytes("limit=5000 type=eq flags=AND"))
         del_users(users_list)
         change_conf_attr(topology_st, suffix, attr_name, attr_value_bck)
 
diff --git a/ldap/servers/slapd/back-ldbm/back-ldbm.h b/ldap/servers/slapd/back-ldbm/back-ldbm.h
index d17ec644b..cde30cedd 100644
--- a/ldap/servers/slapd/back-ldbm/back-ldbm.h
+++ b/ldap/servers/slapd/back-ldbm/back-ldbm.h
@@ -554,6 +554,7 @@ struct ldbminfo
     int li_mode;
     int li_lookthroughlimit;
    int li_allidsthreshold;
+    int li_system_allidsthreshold;
     char *li_directory;
     int li_reslimit_lookthrough_handle;
     uint64_t li_dbcachesize;
diff --git a/ldap/servers/slapd/back-ldbm/index.c b/ldap/servers/slapd/back-ldbm/index.c
index 30fa09ebb..63f0196c1 100644
--- a/ldap/servers/slapd/back-ldbm/index.c
+++ b/ldap/servers/slapd/back-ldbm/index.c
@@ -999,6 +999,8 @@ index_read_ext_allids(
     }
     if (pb) {
         slapi_pblock_get(pb, SLAPI_SEARCH_IS_AND, &is_and);
+    } else if (strcasecmp(type, LDBM_ANCESTORID_STR) == 0) {
+        is_and = 1;
     }
     ai_flags = is_and ? INDEX_ALLIDS_FLAG_AND : 0;
     /* the caller can pass in a value of 0 - just ignore those - but if the index
diff --git a/ldap/servers/slapd/back-ldbm/instance.c b/ldap/servers/slapd/back-ldbm/instance.c
index f6a9817a7..29299b992 100644
--- a/ldap/servers/slapd/back-ldbm/instance.c
+++ b/ldap/servers/slapd/back-ldbm/instance.c
@@ -16,7 +16,7 @@
 
 /* Forward declarations */
 static void ldbm_instance_destructor(void **arg);
-Slapi_Entry *ldbm_instance_init_config_entry(char *cn_val, char *v1, char *v2, char *v3, char *v4, char *mr);
+Slapi_Entry *ldbm_instance_init_config_entry(char *cn_val, char *v1, char *v2, char *v3, char *v4, char *mr, char *scanlimit);
 
 
 /* Creates and initializes a new ldbm_instance structure.
@@ -127,7 +127,7 @@ done:
  * Take a bunch of strings, and create a index config entry
  */
 Slapi_Entry *
-ldbm_instance_init_config_entry(char *cn_val, char *val1, char *val2, char *val3, char *val4, char *mr)
+ldbm_instance_init_config_entry(char *cn_val, char *val1, char *val2, char *val3, char *val4, char *mr, char *scanlimit)
 {
     Slapi_Entry *e = slapi_entry_alloc();
     struct berval *vals[2];
@@ -168,6 +168,11 @@ ldbm_instance_init_config_entry(char *cn_val, char *val1, char *val2, char *val3
         slapi_entry_add_values(e, "nsMatchingRule", vals);
     }
 
+    if (scanlimit) {
+        val.bv_val = scanlimit;
+        val.bv_len = strlen(scanlimit);
+        slapi_entry_add_values(e, "nsIndexIDListScanLimit", vals);
+    }
     return e;
 }
 
@@ -180,8 +185,59 @@ ldbm_instance_create_default_indexes(backend *be)
 {
     Slapi_Entry *e;
     ldbm_instance *inst = (ldbm_instance *)be->be_instance_info;
+    struct ldbminfo *li = (struct ldbminfo *)be->be_database->plg_private;
     /* write the dse file only on the final index */
     int flags = LDBM_INSTANCE_CONFIG_DONT_WRITE;
+    char *ancestorid_indexes_limit = NULL;
+    char *parentid_indexes_limit = NULL;
+    struct attrinfo *ai = NULL;
+    struct index_idlistsizeinfo *iter;
+    int cookie;
+    int limit;
+
+    ainfo_get(be, (char *)LDBM_ANCESTORID_STR, &ai);
+    if (ai && ai->ai_idlistinfo) {
+        iter = (struct index_idlistsizeinfo *)dl_get_first(ai->ai_idlistinfo, &cookie);
+        if (iter) {
+            limit = iter->ai_idlistsizelimit;
+            slapi_log_err(SLAPI_LOG_BACKLDBM, "ldbm_instance_create_default_indexes",
+                          "set ancestorid limit to %d from attribute index\n",
+                          limit);
+        } else {
+            limit = li->li_system_allidsthreshold;
+            slapi_log_err(SLAPI_LOG_BACKLDBM, "ldbm_instance_create_default_indexes",
+                          "set ancestorid limit to %d from default (fail to read limit)\n",
+                          limit);
+        }
+        ancestorid_indexes_limit = slapi_ch_smprintf("limit=%d type=eq flags=AND", limit);
+    } else {
+        ancestorid_indexes_limit = slapi_ch_smprintf("limit=%d type=eq flags=AND", li->li_system_allidsthreshold);
+        slapi_log_err(SLAPI_LOG_BACKLDBM, "ldbm_instance_create_default_indexes",
+                      "set ancestorid limit to %d from default (no attribute or limit)\n",
+                      li->li_system_allidsthreshold);
+    }
+
+    ainfo_get(be, (char *)LDBM_PARENTID_STR, &ai);
+    if (ai && ai->ai_idlistinfo) {
+        iter = (struct index_idlistsizeinfo *)dl_get_first(ai->ai_idlistinfo, &cookie);
+        if (iter) {
+            limit = iter->ai_idlistsizelimit;
+            slapi_log_err(SLAPI_LOG_BACKLDBM, "ldbm_instance_create_default_indexes",
+                          "set parentid limit to %d from attribute index\n",
+                          limit);
+        } else {
+            limit = li->li_system_allidsthreshold;
+            slapi_log_err(SLAPI_LOG_BACKLDBM, "ldbm_instance_create_default_indexes",
+                          "set parentid limit to %d from default (fail to read limit)\n",
+                          limit);
+        }
+        parentid_indexes_limit = slapi_ch_smprintf("limit=%d type=eq flags=AND", limit);
+    } else {
+        parentid_indexes_limit = slapi_ch_smprintf("limit=%d type=eq flags=AND", li->li_system_allidsthreshold);
+        slapi_log_err(SLAPI_LOG_BACKLDBM, "ldbm_instance_create_default_indexes",
+                      "set parentid limit to %d from default (no attribute or limit)\n",
+                      li->li_system_allidsthreshold);
+    }
 
     /*
      * Always index (entrydn or entryrdn), parentid, objectclass,
@@ -190,24 +246,29 @@ ldbm_instance_create_default_indexes(backend *be)
      * ACL routines.
      */
     if (entryrdn_get_switch()) { /* subtree-rename: on */
-        e = ldbm_instance_init_config_entry(LDBM_ENTRYRDN_STR, "subtree", 0, 0, 0, 0);
+        e = ldbm_instance_init_config_entry(LDBM_ENTRYRDN_STR, "subtree", 0, 0, 0, 0, 0);
         ldbm_instance_config_add_index_entry(inst, e, flags);
         slapi_entry_free(e);
     } else {
-        e = ldbm_instance_init_config_entry(LDBM_ENTRYDN_STR, "eq", 0, 0, 0, 0);
+        e = ldbm_instance_init_config_entry(LDBM_ENTRYDN_STR, "eq", 0, 0, 0, 0, 0);
         ldbm_instance_config_add_index_entry(inst, e, flags);
         slapi_entry_free(e);
     }
 
-    e = ldbm_instance_init_config_entry(LDBM_PARENTID_STR, "eq", 0, 0, 0, "integerOrderingMatch");
+    e = ldbm_instance_init_config_entry(LDBM_PARENTID_STR, "eq", 0, 0, 0, "integerOrderingMatch", parentid_indexes_limit);
+    ldbm_instance_config_add_index_entry(inst, e, flags);
+    attr_index_config(be, "ldbm index init", 0, e, 1, 0, NULL);
+    slapi_entry_free(e);
+
+    e = ldbm_instance_init_config_entry("objectclass", "eq", 0, 0, 0, 0, 0);
     ldbm_instance_config_add_index_entry(inst, e, flags);
     slapi_entry_free(e);
 
-    e = ldbm_instance_init_config_entry("objectclass", "eq", 0, 0, 0, 0);
+    e = ldbm_instance_init_config_entry("aci", "pres", 0, 0, 0, 0, 0);
     ldbm_instance_config_add_index_entry(inst, e, flags);
     slapi_entry_free(e);
 
-    e = ldbm_instance_init_config_entry("aci", "pres", 0, 0, 0, 0);
+    e = ldbm_instance_init_config_entry(LDBM_NUMSUBORDINATES_STR, "pres", 0, 0, 0, 0, 0);
     ldbm_instance_config_add_index_entry(inst, e, flags);
     slapi_entry_free(e);
 
@@ -221,22 +282,22 @@ ldbm_instance_create_default_indexes(backend *be)
     ldbm_instance_config_add_index_entry(inst, e, flags);
     slapi_entry_free(e);
 
-    e = ldbm_instance_init_config_entry(SLAPI_ATTR_UNIQUEID, "eq", 0, 0, 0, 0);
+    e = ldbm_instance_init_config_entry(SLAPI_ATTR_UNIQUEID, "eq", 0, 0, 0, 0, 0);
     ldbm_instance_config_add_index_entry(inst, e, flags);
     slapi_entry_free(e);
 
     /* For MMR, we need this attribute (to replace use of dncomp in delete). */
-    e = ldbm_instance_init_config_entry(ATTR_NSDS5_REPLCONFLICT, "eq", "pres", 0, 0, 0);
+    e = ldbm_instance_init_config_entry(ATTR_NSDS5_REPLCONFLICT, "eq", "pres", 0, 0, 0, 0);
     ldbm_instance_config_add_index_entry(inst, e, flags);
     slapi_entry_free(e);
 
     /* write the dse file only on the final index */
-    e = ldbm_instance_init_config_entry(SLAPI_ATTR_NSCP_ENTRYDN, "eq", 0, 0, 0, 0);
+    e = ldbm_instance_init_config_entry(SLAPI_ATTR_NSCP_ENTRYDN, "eq", 0, 0, 0, 0, 0);
     ldbm_instance_config_add_index_entry(inst, e, flags);
     slapi_entry_free(e);
 
     /* ldbm_instance_config_add_index_entry(inst, 2, argv); */
-    e = ldbm_instance_init_config_entry(LDBM_PSEUDO_ATTR_DEFAULT, "none", 0, 0, 0, 0);
+    e = ldbm_instance_init_config_entry(LDBM_PSEUDO_ATTR_DEFAULT, "none", 0, 0, 0, 0, 0);
     attr_index_config(be, "ldbm index init", 0, e, 1, 0, NULL);
     slapi_entry_free(e);
 
@@ -245,11 +306,15 @@ ldbm_instance_create_default_indexes(backend *be)
      * ancestorid is special, there is actually no such attr type
      * but we still want to use the attr index file APIs.
      */
-        e = ldbm_instance_init_config_entry(LDBM_ANCESTORID_STR, "eq", 0, 0, 0, "integerOrderingMatch");
+        e = ldbm_instance_init_config_entry(LDBM_ANCESTORID_STR, "eq", 0, 0, 0, "integerOrderingMatch", ancestorid_indexes_limit);
+        ldbm_instance_config_add_index_entry(inst, e, flags);
         attr_index_config(be, "ldbm index init", 0, e, 1, 0, NULL);
         slapi_entry_free(e);
     }
 
+    slapi_ch_free_string(&ancestorid_indexes_limit);
+    slapi_ch_free_string(&parentid_indexes_limit);
+
     return 0;
 }

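The instance.c changes above build nsIndexIDListScanLimit values of the form `limit=N type=eq flags=AND`. A small sketch of how such a spec string decomposes into its keyword fields (the server's real parser lives in the C index code and may accept additional keywords such as `values=`; this is only illustrative):

```python
def parse_scanlimit(spec):
    """Split a 'limit=N type=eq flags=AND' spec into a dict.

    Only the limit field is converted to an integer; other keywords are
    kept as strings. Sketch only, not the server's actual parser.
    """
    out = {}
    for token in spec.split():
        key, _, value = token.partition('=')
        out[key] = int(value) if key == 'limit' else value
    return out
```

For example, the default value created for a new backend, `limit=5000 type=eq flags=AND`, carries the ID-list cap (5000), the index type it applies to (eq), and the AND-component flag.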
diff --git a/ldap/servers/slapd/back-ldbm/ldbm_config.c b/ldap/servers/slapd/back-ldbm/ldbm_config.c
index b7bceabf2..f8d8f7474 100644
--- a/ldap/servers/slapd/back-ldbm/ldbm_config.c
+++ b/ldap/servers/slapd/back-ldbm/ldbm_config.c
@@ -366,6 +366,35 @@ ldbm_config_allidsthreshold_set(void *arg, void *value, char *errorbuf __attribu
     return retval;
 }
 
+static void *
+ldbm_config_system_allidsthreshold_get(void *arg)
+{
+    struct ldbminfo *li = (struct ldbminfo *)arg;
+
+    return (void *)((uintptr_t)(li->li_system_allidsthreshold));
+}
+
+static int
+ldbm_config_system_allidsthreshold_set(void *arg, void *value, char *errorbuf __attribute__((unused)), int phase __attribute__((unused)), int apply)
+{
+    struct ldbminfo *li = (struct ldbminfo *)arg;
+    int retval = LDAP_SUCCESS;
+    int val = (int)((uintptr_t)value);
+
+    /* Do whatever we can to make sure the data is ok. */
+
+    /* Catch attempts to configure a stupidly low ancestorid allidsthreshold */
+    if ((val > -1) && (val < 5000)) {
+        val = 5000;
+    }
+
+    if (apply) {
+        li->li_system_allidsthreshold = val;
+    }
+
+    return retval;
+}
+
 static void *
 ldbm_config_pagedallidsthreshold_get(void *arg)
 {
@@ -945,6 +974,7 @@ static config_info ldbm_config[] = {
     {CONFIG_LOOKTHROUGHLIMIT, CONFIG_TYPE_INT, "5000", &ldbm_config_lookthroughlimit_get, &ldbm_config_lookthroughlimit_set, CONFIG_FLAG_ALWAYS_SHOW | CONFIG_FLAG_ALLOW_RUNNING_CHANGE},
     {CONFIG_MODE, CONFIG_TYPE_INT_OCTAL, "0600", &ldbm_config_mode_get, &ldbm_config_mode_set, CONFIG_FLAG_ALWAYS_SHOW | CONFIG_FLAG_ALLOW_RUNNING_CHANGE},
     {CONFIG_IDLISTSCANLIMIT, CONFIG_TYPE_INT, "2147483646", &ldbm_config_allidsthreshold_get, &ldbm_config_allidsthreshold_set, CONFIG_FLAG_ALWAYS_SHOW | CONFIG_FLAG_ALLOW_RUNNING_CHANGE},
+    {CONFIG_SYSTEMIDLISTSCANLIMIT, CONFIG_TYPE_INT, "5000", &ldbm_config_system_allidsthreshold_get, &ldbm_config_system_allidsthreshold_set, CONFIG_FLAG_ALWAYS_SHOW | CONFIG_FLAG_ALLOW_RUNNING_CHANGE},
     {CONFIG_DIRECTORY, CONFIG_TYPE_STRING, "", &ldbm_config_directory_get, &ldbm_config_directory_set, CONFIG_FLAG_ALWAYS_SHOW | CONFIG_FLAG_ALLOW_RUNNING_CHANGE | CONFIG_FLAG_SKIP_DEFAULT_SETTING},
     {CONFIG_MAXPASSBEFOREMERGE, CONFIG_TYPE_INT, "100", &ldbm_config_maxpassbeforemerge_get, &ldbm_config_maxpassbeforemerge_set, 0},

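The setter above silently raises too-low values: anything between 0 and 4999 becomes 5000, while -1 (unlimited) and values of 5000 or more pass through unchanged. In Python terms, the clamp in `ldbm_config_system_allidsthreshold_set` behaves like:

```python
def clamp_system_scanlimit(val):
    """Mirror of the C sanity check `if ((val > -1) && (val < 5000)) val = 5000;`.

    -1 (treated as unlimited) is preserved; 0..4999 is raised to the 5000
    floor; anything >= 5000 is accepted as-is.
    """
    if -1 < val < 5000:
        return 5000
    return val
```

This keeps nsslapd-systemidlistscanlimit from being set to a value so small that parentid/ancestorid lookups would constantly fall back to ALLIDS.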
diff --git a/ldap/servers/slapd/back-ldbm/ldbm_config.h b/ldap/servers/slapd/back-ldbm/ldbm_config.h
index 48446193e..004e5ea7e 100644
--- a/ldap/servers/slapd/back-ldbm/ldbm_config.h
+++ b/ldap/servers/slapd/back-ldbm/ldbm_config.h
@@ -60,6 +60,7 @@ struct config_info
 #define CONFIG_RANGELOOKTHROUGHLIMIT "nsslapd-rangelookthroughlimit"
 #define CONFIG_PAGEDLOOKTHROUGHLIMIT "nsslapd-pagedlookthroughlimit"
 #define CONFIG_IDLISTSCANLIMIT "nsslapd-idlistscanlimit"
+#define CONFIG_SYSTEMIDLISTSCANLIMIT "nsslapd-systemidlistscanlimit"
 #define CONFIG_PAGEDIDLISTSCANLIMIT "nsslapd-pagedidlistscanlimit"
 #define CONFIG_DIRECTORY "nsslapd-directory"
 #define CONFIG_MODE "nsslapd-mode"
diff --git a/ldap/servers/slapd/back-ldbm/ldbm_index_config.c b/ldap/servers/slapd/back-ldbm/ldbm_index_config.c
index 38e7368e1..bae2a64b9 100644
--- a/ldap/servers/slapd/back-ldbm/ldbm_index_config.c
+++ b/ldap/servers/slapd/back-ldbm/ldbm_index_config.c
@@ -384,6 +384,14 @@ ldbm_instance_config_add_index_entry(
         }
     }
 
+    /* get nsIndexIDListScanLimit and its values, and add them */
+    if (0 == slapi_entry_attr_find(e, "nsIndexIDListScanLimit", &attr)) {
+        for (j = slapi_attr_first_value(attr, &sval); j != -1; j = slapi_attr_next_value(attr, j, &sval)) {
+            attrValue = slapi_value_get_berval(sval);
+            eBuf = PR_sprintf_append(eBuf, "nsIndexIDListScanLimit: %s\n", attrValue->bv_val);
+        }
+    }
+
     ldbm_config_add_dse_entry(li, eBuf, flags);
     if (eBuf) {
         PR_smprintf_free(eBuf);
diff --git a/src/lib389/lib389/backend.py b/src/lib389/lib389/backend.py
index a97def17e..03290ac1c 100644
--- a/src/lib389/lib389/backend.py
+++ b/src/lib389/lib389/backend.py
@@ -541,8 +541,8 @@ class Backend(DSLdapObject):
         # Default system indexes taken from ldap/servers/slapd/back-ldbm/instance.c
         expected_system_indexes = {
             'entryrdn': {'types': ['subtree'], 'matching_rule': None},
-            'parentId': {'types': ['eq'], 'matching_rule': 'integerOrderingMatch'},
-            'ancestorId': {'types': ['eq'], 'matching_rule': 'integerOrderingMatch'},
+            'parentId': {'types': ['eq'], 'matching_rule': 'integerOrderingMatch', 'scanlimit': 'limit=5000 type=eq flags=AND'},
+            'ancestorId': {'types': ['eq'], 'matching_rule': 'integerOrderingMatch', 'scanlimit': 'limit=5000 type=eq flags=AND'},
             'objectClass': {'types': ['eq'], 'matching_rule': None},
             'aci': {'types': ['pres'], 'matching_rule': None},
             'nscpEntryDN': {'types': ['eq'], 'matching_rule': None},
@@ -592,12 +592,15 @@
                     cmd = f"dsconf YOUR_INSTANCE backend index add {bename} --attr {attr_name} {index_types}"
                     if expected_config['matching_rule']:
                         cmd += f" --add-mr {expected_config['matching_rule']}"
+                    if expected_config['scanlimit']:
+                        cmd += f" --add-scanlimit {expected_config['scanlimit']}"
                     remediation_commands.append(cmd)
                     reindex_attrs.add(attr_name)  # New index needs reindexing
                 else:
                     # Index exists, check configuration
                     actual_types = index.get_attr_vals_utf8('nsIndexType') or []
                     actual_mrs = index.get_attr_vals_utf8('nsMatchingRule') or []
+                    actual_scanlimit = index.get_attr_vals_utf8('nsIndexIDListScanLimit') or []
 
                     # Normalize to lowercase for comparison
                     actual_types = [t.lower() for t in actual_types]
@@ -623,6 +626,19 @@
                             remediation_commands.append(cmd)
                             reindex_attrs.add(attr_name)
 
+                    # Check fine grain definitions for parentid ONLY
+                    expected_scanlimit = expected_config['scanlimit']
+                    if (attr_name.lower() == "parentid") and expected_scanlimit and (len(actual_scanlimit) == 0):
+                        discrepancies.append(f"Index {attr_name} missing fine grain definition of IDs limit: {expected_mr}")
+                        # Add the missing scanlimit
+                        if expected_mr:
+                            cmd = f"dsconf YOUR_INSTANCE backend index set {bename} --attr {attr_name} --add-mr {expected_mr} --add-scanlimit {expected_scanlimit}"
+                        else:
+                            cmd = f"dsconf YOUR_INSTANCE backend index set {bename} --attr {attr_name} --add-scanlimit {expected_scanlimit}"
+                        remediation_commands.append(cmd)
+                        reindex_attrs.add(attr_name)
+
+
             except Exception as e:
                 self._log.debug(f"_lint_system_indexes - Error checking index {attr_name}: {e}")
                 discrepancies.append(f"Unable to check index {attr_name}: {str(e)}")
@@ -852,12 +868,13 @@
             return
         raise ValueError("Can not delete index because it does not exist")
 
-    def add_index(self, attr_name, types, matching_rules=None, reindex=False):
+    def add_index(self, attr_name, types, matching_rules=None, idlistscanlimit=None, reindex=False):
         """ Add an index.
 
         :param attr_name - name of the attribute to index
         :param types - a List of index types(eq, pres, sub, approx)
         :param matching_rules - a List of matching rules for the index
+        :param idlistscanlimit - a List of fine grain definitions for scanning limit
         :param reindex - If set to True then index the attribute after creating it.
         """
 
@@ -887,6 +904,15 @@
         # Only add if there are actually rules present in the list.
         if len(mrs) > 0:
             props['nsMatchingRule'] = mrs
+
+        if idlistscanlimit is not None:
+            scanlimits = []
+            for scanlimit in idlistscanlimit:
+                scanlimits.append(scanlimit)
+            # Only add if there are actually limits in the list.
+            if len(scanlimits) > 0:
+                props['nsIndexIDListScanLimit'] = mrs
+
         new_index.create(properties=props, basedn="cn=index," + self._dn)
 
         if reindex:
@@ -1193,6 +1219,7 @@ class DatabaseConfig(DSLdapObject):
             'nsslapd-lookthroughlimit',
             'nsslapd-mode',
             'nsslapd-idlistscanlimit',
+            'nsslapd-systemidlistscanlimit',
             'nsslapd-directory',
             'nsslapd-import-cachesize',
             'nsslapd-idl-switch',
diff --git a/src/lib389/lib389/cli_conf/backend.py b/src/lib389/lib389/cli_conf/backend.py
index 4dc67d563..d57cb9433 100644
--- a/src/lib389/lib389/cli_conf/backend.py
+++ b/src/lib389/lib389/cli_conf/backend.py
@@ -39,6 +39,7 @@ arg_to_attr = {
     'mode': 'nsslapd-mode',
     'state': 'nsslapd-state',
     'idlistscanlimit': 'nsslapd-idlistscanlimit',
+    'systemidlistscanlimit': 'nsslapd-systemidlistscanlimit',
     'directory': 'nsslapd-directory',
     'dbcachesize': 'nsslapd-dbcachesize',
     'logdirectory': 'nsslapd-db-logdirectory',
@@ -587,6 +588,21 @@ def backend_set_index(inst, basedn, log, args):
             except ldap.NO_SUCH_ATTRIBUTE:
                 raise ValueError('Can not delete matching rule type because it does not exist')
 
+    if args.replace_scanlimit is not None:
+        for replace_scanlimit in args.replace_scanlimit:
+            index.replace('nsIndexIDListScanLimit', replace_scanlimit)
+
+    if args.add_scanlimit is not None:
+        for add_scanlimit in args.add_scanlimit:
+            index.add('nsIndexIDListScanLimit', add_scanlimit)
+
+    if args.del_scanlimit is not None:
+        for del_scanlimit in args.del_scanlimit:
+            try:
+                index.remove('nsIndexIDListScanLimit', del_scanlimit)
+            except ldap.NO_SUCH_ATTRIBUTE:
+                raise ValueError('Can not delete a fine grain limit definition because it does not exist')
+
     if args.reindex:
         be.reindex(attrs=[args.attr])
     log.info("Index successfully updated")
@@ -908,6 +924,9 @@ def create_parser(subparsers):
     edit_index_parser.add_argument('--del-type', action='append', help='Removes an index type from the index: (eq, sub, pres, or approx)')
     edit_index_parser.add_argument('--add-mr', action='append', help='Adds a matching-rule to the index')
     edit_index_parser.add_argument('--del-mr', action='append', help='Removes a matching-rule from the index')
+    edit_index_parser.add_argument('--add-scanlimit', action='append', help='Adds a fine grain limit definiton to the index')
+    edit_index_parser.add_argument('--replace-scanlimit', action='append', help='Replaces a fine grain limit definiton to the index')
+    edit_index_parser.add_argument('--del-scanlimit', action='append', help='Removes a fine grain limit definiton to the index')
     edit_index_parser.add_argument('--reindex', action='store_true', help='Re-indexes the database after editing the index')
     edit_index_parser.add_argument('be_name', help='The backend name or suffix')
 
@@ -1034,6 +1053,7 @@ def create_parser(subparsers):
                                       'will check when examining candidate entries in response to a search request')
     set_db_config_parser.add_argument('--mode', help='Specifies the permissions used for newly created index files')
     set_db_config_parser.add_argument('--idlistscanlimit', help='Specifies the number of entry IDs that are searched during a search operation')
+    set_db_config_parser.add_argument('--systemidlistscanlimit', help='Specifies the number of entry IDs that are fetch from ancestorid/parentid indexes')
     set_db_config_parser.add_argument('--directory', help='Specifies absolute path to database instance')
     set_db_config_parser.add_argument('--dbcachesize', help='Specifies the database index cache size in bytes')
     set_db_config_parser.add_argument('--logdirectory', help='Specifies the path to the directory that contains the database transaction logs')
--
|
||||
2.51.1
|
||||
|
||||
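The `--add-scanlimit`/`--replace-scanlimit` values above are passed through verbatim as `nsIndexIDListScanLimit` attribute values, which use the documented keyword syntax `limit=N type=... flags=... values=...`. A minimal sketch of parsing such a definition (the helper name and sample values are ours, not part of the patch):

```python
def parse_scanlimit(definition):
    """Split a fine-grained scan limit definition such as
    'limit=4000 type=eq flags=AND values=inetorgperson' into a dict.
    The 'limit' keyword is an integer cap on entry IDs (-1 = unlimited);
    the other keywords may carry comma-separated lists."""
    parsed = {}
    for token in definition.split():
        keyword, _, value = token.partition("=")
        if keyword == "limit":
            parsed[keyword] = int(value)
        else:
            parsed[keyword] = value.split(",")
    return parsed

spec = parse_scanlimit("limit=4000 type=eq flags=AND values=inetorgperson")
print(spec)
```

This is only an illustration of the value format the new CLI options carry; validation of the keywords is done server-side.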
@ -0,0 +1,240 @@
From a5079d745c620393602bc83a9a83c174c2405301 Mon Sep 17 00:00:00 2001
From: tbordaz <tbordaz@redhat.com>
Date: Tue, 14 Oct 2025 15:12:31 +0200
Subject: [PATCH] Issue 6979 - Improve the way to detect asynchronous
 operations in the access logs (#6980)

Bug description:
Asynchronous operations are prone to making the server unresponsive.
The detection of those operations is not easy.
Access logs should contain a way to easily retrieve the
operations (and the connections) with async searches

Fix description:
When dispatching a new operation, if the count of
uncompleted operations on the connection exceeds
a threshold (2), then add a note to the 'notes='
field in the access log

fixes: #6979

Reviewed by: Pierre Rogier, Simon Pichugin (Thanks !)

(cherry picked from commit 1f0210264545d4e674507e8962c81b2e9d3b28b6)
Signed-off-by: Masahiro Matsuya <mmatsuya@redhat.com>
---
 dirsrvtests/tests/suites/basic/basic_test.py | 69 +++++++++++++++++++-
 ldap/servers/slapd/connection.c              | 23 ++++++-
 ldap/servers/slapd/daemon.c                  |  2 +
 ldap/servers/slapd/result.c                  |  2 +
 ldap/servers/slapd/slap.h                    |  1 +
 ldap/servers/slapd/slapi-plugin.h            |  2 +
 6 files changed, 95 insertions(+), 4 deletions(-)

diff --git a/dirsrvtests/tests/suites/basic/basic_test.py b/dirsrvtests/tests/suites/basic/basic_test.py
index 4a45f9dbe..7f13fac1a 100644
--- a/dirsrvtests/tests/suites/basic/basic_test.py
+++ b/dirsrvtests/tests/suites/basic/basic_test.py
@@ -245,6 +245,72 @@ def test_basic_ops(topology_st, import_example_ldif):
         assert False
     log.info('test_basic_ops: PASSED')
 
+def test_basic_search_asynch(topology_st, request):
+    """
+    Tests that asynchronous searches generate the strings 'notes=B'
+    and 'notes=N' in access logs
+
+    :id: 1b761421-d2bb-487b-813e-2278123fd13c
+    :parametrized: no
+    :setup: Standalone instance, create test user to search with filter (uid=*).
+
+    :steps:
+        1. Create a test user
+        2. Trigger async searches
+        3. Verify access logs contain 'notes=B' within 10 attempts
+        4. Verify access logs contain 'notes=N' within 10 attempts
+
+    :expectedresults:
+        1. Success
+        2. Success
+        3. Success
+        4. Success
+
+    """
+
+    log.info('Running test_basic_search_asynch...')
+
+    search_filter = "(uid=*)"
+    topology_st.standalone.restart()
+    topology_st.standalone.config.set("nsslapd-accesslog-logbuffering", "off")
+    topology_st.standalone.config.set("nsslapd-maxthreadsperconn", "3")
+
+    try:
+        users = UserAccounts(topology_st.standalone, DEFAULT_SUFFIX, rdn=None)
+        user = users.create_test_user()
+    except ldap.LDAPError as e:
+        log.fatal('Failed to create test user: error ' + e.args[0]['desc'])
+        assert False
+
+    for attempt in range(10):
+        msgids = []
+        for i in range(5):
+            searchid = topology_st.standalone.search(DEFAULT_SUFFIX, ldap.SCOPE_SUBTREE, search_filter)
+            msgids.append(searchid)
+
+        for msgid in msgids:
+            rtype, rdata = topology_st.standalone.result(msgid)
+
+        # verify if some operations got blocked
+        error_lines = topology_st.standalone.ds_access_log.match('.*notes=.*B.* details.*')
+        if len(error_lines) > 0:
+            log.info('test_basic_search_asynch: found "notes=B" after %d attempt(s)' % (attempt + 1))
+            break
+
+    assert attempt < 10
+
+    # verify if some operations got flagged Not synchronous
+    error_lines = topology_st.standalone.ds_access_log.match('.*notes=.*N.* details.*')
+    assert len(error_lines) > 0
+
+    def fin():
+        user.delete()
+        topology_st.standalone.config.set("nsslapd-accesslog-logbuffering", "on")
+        topology_st.standalone.config.set("nsslapd-maxthreadsperconn", "5")
+
+    request.addfinalizer(fin)
+
+    log.info('test_basic_search_asynch: PASSED')
 
 def test_basic_import_export(topology_st, import_example_ldif):
     """Test online and offline LDIF import & export
@@ -1771,9 +1837,6 @@ def test_dscreate_with_different_rdn(dscreate_test_rdn_value):
     else:
         assert True
 
-
-
-
 if __name__ == '__main__':
     # Run isolated
     # -s for DEBUG mode
diff --git a/ldap/servers/slapd/connection.c b/ldap/servers/slapd/connection.c
index 10a8cc577..6c5ef5291 100644
--- a/ldap/servers/slapd/connection.c
+++ b/ldap/servers/slapd/connection.c
@@ -23,6 +23,7 @@
 #include "prlog.h" /* for PR_ASSERT */
 #include "fe.h"
 #include <sasl/sasl.h>
+#include <stdbool.h>
 #if defined(LINUX)
 #include <netinet/tcp.h> /* for TCP_CORK */
 #endif
@@ -568,6 +569,19 @@ connection_dispatch_operation(Connection *conn, Operation *op, Slapi_PBlock *pb)
     /* Set the start time */
     slapi_operation_set_time_started(op);
 
+    /* It is difficult to detect false asynchronous operations.
+     * Indeed, because of thread scheduling, a previous
+     * operation may have sent its result but not yet updated
+     * the completed count.
+     * To avoid false positives let's set a limit of 2.
+     */
+    if ((conn->c_opsinitiated - conn->c_opscompleted) > 2) {
+        unsigned int opnote;
+        opnote = slapi_pblock_get_operation_notes(pb);
+        opnote |= SLAPI_OP_NOTE_ASYNCH_OP; /* the operation is dispatched while others are running */
+        slapi_pblock_set_operation_notes(pb, opnote);
+    }
+
     /* If the minimum SSF requirements are not met, only allow
      * bind and extended operations through. The bind and extop
      * code will ensure that only SASL binds and startTLS are
@@ -1006,10 +1020,16 @@ connection_wait_for_new_work(Slapi_PBlock *pb, int32_t interval)
         slapi_log_err(SLAPI_LOG_TRACE, "connection_wait_for_new_work", "no work to do\n");
         ret = CONN_NOWORK;
     } else {
+        Connection *conn = wqitem;
         /* make new pb */
-        slapi_pblock_set(pb, SLAPI_CONNECTION, wqitem);
+        slapi_pblock_set(pb, SLAPI_CONNECTION, conn);
         slapi_pblock_set_op_stack_elem(pb, op_stack_obj);
         slapi_pblock_set(pb, SLAPI_OPERATION, op_stack_obj->op);
+        if (conn->c_flagblocked) {
+            /* flag this new operation that it was blocked by maxthreadsperconn */
+            slapi_pblock_set_operation_notes(pb, SLAPI_OP_NOTE_ASYNCH_BLOCKED);
+            conn->c_flagblocked = false;
+        }
     }
 
     pthread_mutex_unlock(&work_q_lock);
@@ -1869,6 +1889,7 @@ connection_threadmain(void *arg)
     } else {
         /* keep count of how many times maxthreads has blocked an operation */
         conn->c_maxthreadsblocked++;
+        conn->c_flagblocked = true;
         if (conn->c_maxthreadsblocked == 1 && connection_has_psearch(conn)) {
             slapi_log_err(SLAPI_LOG_NOTICE, "connection_threadmain",
                           "Connection (conn=%" PRIu64 ") has a running persistent search "
diff --git a/ldap/servers/slapd/daemon.c b/ldap/servers/slapd/daemon.c
index bef75e4a3..2534483c1 100644
--- a/ldap/servers/slapd/daemon.c
+++ b/ldap/servers/slapd/daemon.c
@@ -25,6 +25,7 @@
 #include <sys/wait.h>
 #include <pthread.h>
 #include <stdint.h>
+#include <stdbool.h>
 #if defined(HAVE_MNTENT_H)
 #include <mntent.h>
 #endif
@@ -1673,6 +1674,7 @@ setup_pr_read_pds(Connection_Table *ct)
     } else {
         if (c->c_threadnumber >= c->c_max_threads_per_conn) {
             c->c_maxthreadsblocked++;
+            c->c_flagblocked = true;
             if (c->c_maxthreadsblocked == 1 && connection_has_psearch(c)) {
                 slapi_log_err(SLAPI_LOG_NOTICE, "connection_threadmain",
                               "Connection (conn=%" PRIu64 ") has a running persistent search "
diff --git a/ldap/servers/slapd/result.c b/ldap/servers/slapd/result.c
index f40556de8..f000e32f1 100644
--- a/ldap/servers/slapd/result.c
+++ b/ldap/servers/slapd/result.c
@@ -1945,6 +1945,8 @@ static struct slapi_note_map notemap[] = {
     {SLAPI_OP_NOTE_SIMPLEPAGED, "P", "Paged Search"},
     {SLAPI_OP_NOTE_FULL_UNINDEXED, "A", "Fully Unindexed Filter"},
     {SLAPI_OP_NOTE_FILTER_INVALID, "F", "Filter Element Missing From Schema"},
+    {SLAPI_OP_NOTE_ASYNCH_OP, "N", "Not synchronous operation"},
+    {SLAPI_OP_NOTE_ASYNCH_BLOCKED, "B", "Blocked because too many operations"},
 };
 
 #define SLAPI_NOTEMAP_COUNT (sizeof(notemap) / sizeof(struct slapi_note_map))
diff --git a/ldap/servers/slapd/slap.h b/ldap/servers/slapd/slap.h
index 82550527c..36d26bf4a 100644
--- a/ldap/servers/slapd/slap.h
+++ b/ldap/servers/slapd/slap.h
@@ -1720,6 +1720,7 @@ typedef struct conn
     int32_t c_anon_access;
     int32_t c_max_threads_per_conn;
     int32_t c_bind_auth_token;
+    bool c_flagblocked; /* Flag the next read operation as blocked */
 } Connection;
 #define CONN_FLAG_SSL 1 /* Is this connection an SSL connection or not ? \
                          * Used to direct I/O code when SSL is handled differently \
diff --git a/ldap/servers/slapd/slapi-plugin.h b/ldap/servers/slapd/slapi-plugin.h
index 677be1db0..6517665a9 100644
--- a/ldap/servers/slapd/slapi-plugin.h
+++ b/ldap/servers/slapd/slapi-plugin.h
@@ -7336,6 +7336,8 @@ typedef enum _slapi_op_note_t {
     SLAPI_OP_NOTE_SIMPLEPAGED = 0x02,
     SLAPI_OP_NOTE_FULL_UNINDEXED = 0x04,
     SLAPI_OP_NOTE_FILTER_INVALID = 0x08,
+    SLAPI_OP_NOTE_ASYNCH_OP = 0x10,
+    SLAPI_OP_NOTE_ASYNCH_BLOCKED = 0x20,
 } slapi_op_note_t;
 
--
2.51.1

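The dispatch-time check above tolerates counter lag by only flagging an operation once more than two are still outstanding on the connection. That logic can be modeled in isolation (the flag constants are the values added to `slapi_op_note_t` in the patch; the helper name is ours):

```python
# Note flags taken from the patch's slapi_op_note_t additions.
SLAPI_OP_NOTE_ASYNCH_OP = 0x10       # rendered as notes=N in the access log
SLAPI_OP_NOTE_ASYNCH_BLOCKED = 0x20  # rendered as notes=B in the access log

def async_note(ops_initiated, ops_completed, notes=0, threshold=2):
    """Mirror connection_dispatch_operation(): flag the newly dispatched
    operation as asynchronous only when more than `threshold` operations
    on the connection are still uncompleted. A previous operation may have
    sent its result without yet bumping the completed count, so a small
    threshold avoids false positives."""
    if (ops_initiated - ops_completed) > threshold:
        notes |= SLAPI_OP_NOTE_ASYNCH_OP
    return notes
```

For example, five initiated operations with only one completed would be flagged, while a difference of two (plausible counter lag) would not.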
@ -0,0 +1,76 @@
From f4992f3038078ff96a7982d7a6fcced1c3870e16 Mon Sep 17 00:00:00 2001
From: Simon Pichugin <spichugi@redhat.com>
Date: Thu, 16 Oct 2025 22:00:13 -0700
Subject: [PATCH] Issue 7047 - MemberOf plugin logs null attribute name on
 fixup task completion (#7048)

Description: The MemberOf plugin logged "(null)" instead of the attribute
name when the global fixup task completed. This occurred because the config
structure containing the attribute name was freed before the completion log
message was written.

This fix moves the memberof_free_config() call to after the log statement,
ensuring the attribute name is available for logging.

Additionally, the test_shutdown_on_deferred_memberof test has been improved
to properly verify the fixup task behavior by checking that both the "started"
and "finished" log messages contain the correct attribute name.

Fixes: https://github.com/389ds/389-ds-base/issues/7047

Reviewed by: @tbordaz (Thanks!)

(cherry picked from commit 777187a89f13bc00dc03b0e9370333cdfc299da9)
Signed-off-by: Masahiro Matsuya <mmatsuya@redhat.com>
---
 .../suites/memberof_plugin/regression_test.py | 21 +++++++++++++++++--
 ldap/servers/plugins/memberof/memberof.c      |  1 +
 2 files changed, 20 insertions(+), 2 deletions(-)

diff --git a/dirsrvtests/tests/suites/memberof_plugin/regression_test.py b/dirsrvtests/tests/suites/memberof_plugin/regression_test.py
index 976729c2f..7b5410b67 100644
--- a/dirsrvtests/tests/suites/memberof_plugin/regression_test.py
+++ b/dirsrvtests/tests/suites/memberof_plugin/regression_test.py
@@ -1423,8 +1423,25 @@ def test_shutdown_on_deferred_memberof(topology_st):
     value = memberof.get_memberofneedfixup()
     assert ((str(value).lower() == "yes") or (str(value).lower() == "true"))
 
-    # step 14. fixup task was not launched because by default launch_fixup is no
-    assert len(errlog.match('.*It is recommended to launch memberof fixup task.*')) == 0
+    # step 14. Verify the global fixup started/finished messages
+    attribute_name = 'memberOf'
+    started_lines = errlog.match('.*Memberof plugin started the global fixup task for attribute .*')
+    assert len(started_lines) >= 1
+    for line in started_lines:
+        log.info(f'Started line: {line}')
+        assert f'attribute {attribute_name}' in line
+
+    # Wait for finished messages to appear, then verify no nulls are present
+    finished_lines = []
+    for _ in range(60):
+        finished_lines = errlog.match('.*Memberof plugin finished the global fixup task.*')
+        if finished_lines:
+            break
+        time.sleep(1)
+    assert len(finished_lines) >= 1
+    for line in finished_lines:
+        log.info(f'Finished line: {line}')
+        assert '(null)' not in line
 
     # Check that users memberof and group members are in sync.
     time.sleep(delay)
diff --git a/ldap/servers/plugins/memberof/memberof.c b/ldap/servers/plugins/memberof/memberof.c
index 2ee7ee319..b52bc0331 100644
--- a/ldap/servers/plugins/memberof/memberof.c
+++ b/ldap/servers/plugins/memberof/memberof.c
@@ -926,6 +926,7 @@ perform_needed_fixup()
     slapi_ch_free_string(&td.filter_str);
     slapi_log_err(SLAPI_LOG_INFO, MEMBEROF_PLUGIN_SUBSYSTEM,
                   "Memberof plugin finished the global fixup task for attribute %s\n", config.memberof_attr);
+    memberof_free_config(&config);
     return rc;
 }
 
--
2.51.1

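The underlying bug is an ordering issue: the config was released before the completion message that reads from it. The fixed log-then-free ordering can be sketched with toy stand-ins (all names here are ours, not the plugin's API):

```python
class MemberofConfig:
    """Toy stand-in for the plugin's config structure."""
    def __init__(self, attr):
        self.memberof_attr = attr

def free_config(config):
    # Mimic memberof_free_config(): after this, the attribute name is gone.
    config.memberof_attr = None

def finish_fixup(config, log):
    # Fixed ordering: format the completion message while the config is
    # still alive, then free it. The bug freed first, so the log read a
    # dangling value and printed "(null)".
    attr = config.memberof_attr if config.memberof_attr is not None else "(null)"
    log.append(f"Memberof plugin finished the global fixup task for attribute {attr}")
    free_config(config)
```

Swapping the two statements inside `finish_fixup` reproduces the "(null)" symptom the test now guards against.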
@ -0,0 +1,42 @@
From 496277d9a69a559f690530145ffa5bb2f7b0e837 Mon Sep 17 00:00:00 2001
From: tbordaz <tbordaz@redhat.com>
Date: Mon, 20 Oct 2025 14:30:52 +0200
Subject: [PATCH] Issue 7032 - The new ipahealthcheck test
 ipahealthcheck.ds.backends.BackendsCheck raises CRITICAL issue (#7036)

Bug description:
The bug fix #6966 adds a 'scanlimit' to one of the system
indexes ('parentid'), so not all of them have such an attribute.
In healthcheck this attribute (i.e. key) can be missing, but
the code assumed it was present.

Fix description:
Get 'parentid' from the dict with the proper routine
(Thanks Florence Renaud for the debug/fix)

fixes: #7032

Reviewed by: Pierre Rogier and Simon Pichugin (thank you !)

(cherry picked from commit ea8d4c8c2261861118cf8ae20dffb0e5a466e9d2)
Signed-off-by: Masahiro Matsuya <mmatsuya@redhat.com>
---
 src/lib389/lib389/backend.py | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/src/lib389/lib389/backend.py b/src/lib389/lib389/backend.py
index 03290ac1c..e74d1fbf9 100644
--- a/src/lib389/lib389/backend.py
+++ b/src/lib389/lib389/backend.py
@@ -627,7 +627,7 @@ class Backend(DSLdapObject):
                 reindex_attrs.add(attr_name)
 
             # Check fine grain definitions for parentid ONLY
-            expected_scanlimit = expected_config['scanlimit']
+            expected_scanlimit = expected_config.get('scanlimit')
             if (attr_name.lower() == "parentid") and expected_scanlimit and (len(actual_scanlimit) == 0):
                 discrepancies.append(f"Index {attr_name} missing fine grain definition of IDs limit: {expected_mr}")
             # Add the missing scanlimit
--
2.51.1

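The one-line fix relies on `dict.get` returning `None` for an absent key instead of raising `KeyError`. A standalone illustration (the index dicts below are made up for the example; only `parentid` carries a scanlimit, matching the situation described in the bug):

```python
indexes = {
    "parentid": {"types": ["eq"], "scanlimit": "limit=5000 type=eq"},
    "objectclass": {"types": ["eq"]},  # system index without a 'scanlimit' key
}

for name, expected_config in indexes.items():
    # expected_config['scanlimit'] would raise KeyError for 'objectclass';
    # .get() yields None, which is falsy, so the scanlimit check is skipped.
    expected_scanlimit = expected_config.get("scanlimit")
    if name == "parentid" and expected_scanlimit:
        print(f"{name}: fine grain limit present")
```

This is why the original `expected_config['scanlimit']` crashed the BackendsCheck as soon as it reached any system index other than `parentid`.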
@ -0,0 +1,320 @@
From 747eed3acc8ec05c8a0740080d058a295631b807 Mon Sep 17 00:00:00 2001
From: Mark Reynolds <mreynolds@redhat.com>
Date: Wed, 20 Aug 2025 13:44:47 -0400
Subject: [PATCH] Issue 6947 - Revise time skew check in healthcheck tool and
 add option to exclude checks

Description:

The current check reports a critical warning if time skew is greater than
1 day - even if "nsslapd-ignore-time-skew" is set to "on". If we are ignoring
time skew we should still report a warning if it's very significant, like
30 days.

Also added an option to exclude checks.

Relates: https://github.com/389ds/389-ds-base/issues/6947

Reviewed by: progier, spichugi, viktor (Thanks!!!)

(cherry picked from commit 7ac6d61df5d696b2e0c7911379448daecf10e652)
Signed-off-by: Masahiro Matsuya <mmatsuya@redhat.com>
---
 .../suites/healthcheck/health_config_test.py  |  3 +-
 .../healthcheck/health_security_test.py       |  1 +
 .../healthcheck/health_tunables_test.py       |  1 +
 .../suites/healthcheck/healthcheck_test.py    | 37 +++++++++++++++-
 src/lib389/lib389/cli_ctl/health.py           | 34 +++++++++++----
 src/lib389/lib389/dseldif.py                  | 42 +++++++++++++++----
 src/lib389/lib389/lint.py                     | 11 +++++
 7 files changed, 109 insertions(+), 20 deletions(-)

diff --git a/dirsrvtests/tests/suites/healthcheck/health_config_test.py b/dirsrvtests/tests/suites/healthcheck/health_config_test.py
index 747699486..f6fe220b8 100644
--- a/dirsrvtests/tests/suites/healthcheck/health_config_test.py
+++ b/dirsrvtests/tests/suites/healthcheck/health_config_test.py
@@ -10,7 +10,7 @@
 import pytest
 import os
 import subprocess
-
+import time
 from lib389.backend import Backends, DatabaseConfig
 from lib389.cos import CosTemplates, CosPointerDefinitions
 from lib389.dbgen import dbgen_users
@@ -46,6 +46,7 @@ def run_healthcheck_and_flush_log(topology, instance, searched_code, json, searc
     args.list_checks = False
     args.check = ['config', 'refint', 'backends', 'monitor-disk-space', 'logs', 'memberof']
     args.dry_run = False
+    args.exclude_check = []
 
     if json:
         log.info('Use healthcheck with --json option')
diff --git a/dirsrvtests/tests/suites/healthcheck/health_security_test.py b/dirsrvtests/tests/suites/healthcheck/health_security_test.py
index ebd330d95..753658037 100644
--- a/dirsrvtests/tests/suites/healthcheck/health_security_test.py
+++ b/dirsrvtests/tests/suites/healthcheck/health_security_test.py
@@ -46,6 +46,7 @@ def run_healthcheck_and_flush_log(topology, instance, searched_code, json, searc
     args.list_checks = False
     args.check = None
     args.dry_run = False
+    args.exclude_check = []
 
     if json:
         log.info('Use healthcheck with --json option')
diff --git a/dirsrvtests/tests/suites/healthcheck/health_tunables_test.py b/dirsrvtests/tests/suites/healthcheck/health_tunables_test.py
index 5e80c8038..2d9ae90da 100644
--- a/dirsrvtests/tests/suites/healthcheck/health_tunables_test.py
+++ b/dirsrvtests/tests/suites/healthcheck/health_tunables_test.py
@@ -32,6 +32,7 @@ def run_healthcheck_and_flush_log(topology, instance, searched_code=None, json=F
     args.verbose = instance.verbose
     args.list_errors = list_errors
    args.list_checks = list_checks
+    args.exclude_check = []
     args.check = check
     args.dry_run = False
     args.json = json
diff --git a/dirsrvtests/tests/suites/healthcheck/healthcheck_test.py b/dirsrvtests/tests/suites/healthcheck/healthcheck_test.py
index f45688dbb..ef49240f7 100644
--- a/dirsrvtests/tests/suites/healthcheck/healthcheck_test.py
+++ b/dirsrvtests/tests/suites/healthcheck/healthcheck_test.py
@@ -41,6 +41,7 @@ def run_healthcheck_and_flush_log(topology, instance, searched_code=None, json=F
     args.list_errors = list_errors
     args.list_checks = list_checks
     args.check = check
+    args.exclude_check = []
     args.dry_run = False
     args.json = json
 
@@ -265,8 +266,40 @@ def test_healthcheck_check_option(topology_st):
         run_healthcheck_and_flush_log(topology_st, standalone, searched_code=JSON_OUTPUT, json=True, check=[item])
 
 
-@pytest.mark.ds50873
-@pytest.mark.bz1685160
+def test_healthcheck_exclude_option(topology_st):
+    """Check functionality of HealthCheck Tool with --exclude-check option
+
+    :id: a4e2103c-67b8-4359-a8ba-67a8650cd3b7
+    :setup: Standalone instance
+    :steps:
+        1. Set check to exclude from list
+        2. Run HealthCheck
+    :expectedresults:
+        1. Success
+        2. Success
+    """
+
+    inst = topology_st.standalone
+
+    exclude_list = [
+        ('config:passwordscheme', 'config:passwordscheme',
+         'config:securitylog_buffering'),
+        ('config', 'config:', 'backends:userroot:mappingtree')
+    ]
+
+    for exclude, unwanted, wanted in exclude_list:
+        unwanted_pattern = 'Checking ' + unwanted
+        wanted_pattern = 'Checking ' + wanted
+
+        log.info('Exclude check: %s unwanted: %s wanted: %s',
+                 exclude, unwanted, wanted)
+
+        run_healthcheck_exclude(topology_st.logcap, inst,
+                                unwanted=unwanted_pattern,
+                                wanted=wanted_pattern,
+                                exclude_check=exclude)
+
+
 @pytest.mark.skipif(ds_is_older("1.4.1"), reason="Not implemented")
 def test_healthcheck_standalone_tls(topology_st):
     """Check functionality of HealthCheck Tool on TLS enabled standalone instance with no errors
diff --git a/src/lib389/lib389/cli_ctl/health.py b/src/lib389/lib389/cli_ctl/health.py
index d85e3906a..38540e0df 100644
--- a/src/lib389/lib389/cli_ctl/health.py
+++ b/src/lib389/lib389/cli_ctl/health.py
@@ -75,6 +75,9 @@ def _list_errors(log):
 
 
 def _list_checks(inst, specs: Iterable[str]):
+    if specs is None:
+        yield []
+        return
     o_uids = dict(_list_targets(inst))
     for s in specs:
         wanted, rest = DSLint._dslint_parse_spec(s)
@@ -85,19 +88,27 @@ def _list_checks(inst, specs: Iterable[str]):
             for l in o_uids[wanted].lint_list(rest):
                 yield o_uids[wanted], l
         else:
-            raise ValueError('No such object specifier')
+            raise ValueError('No such object specifier: ' + wanted)
 
 
 def _print_checks(inst, log, specs: Iterable[str]) -> None:
     for o, s in _list_checks(inst, specs):
         log.info(f'{o.lint_uid()}:{s[0]}')
 
-def _run(inst, log, args, checks):
+
+def _run(inst, log, args, checks, exclude_checks):
     if not args.json:
         log.info("Beginning lint report, this could take a while ...")
 
     report = []
+    excludes = []
+    for _, skip in exclude_checks:
+        excludes.append(skip[0])
+
     for o, s in checks:
+        if s[0] in excludes:
+            continue
+
         if not args.json:
             log.info(f"Checking {o.lint_uid()}:{s[0]} ...")
         try:
@@ -119,12 +130,12 @@ def _run(inst, log, args, checks):
         if count > 1:
             plural = "s"
         if not args.json:
-            log.info("{} Issue{} found! Generating report ...".format(count, plural))
+            log.info(f"{count} Issue{plural} found! Generating report ...")
             idx = 1
             for item in report:
                 _format_check_output(log, item, idx)
                 idx += 1
-            log.info('\n\n===== End Of Report ({} Issue{} found) ====='.format(count, plural))
+            log.info(f'\n\n===== End Of Report ({count} Issue{plural} found) =====')
         else:
             log.info(json.dumps(report, indent=4))
 
@@ -147,17 +158,21 @@ def health_check_run(inst, log, args):
     dsrc_inst = dsrc_to_ldap(DSRC_HOME, args.instance, log.getChild('dsrc'))
     dsrc_inst = dsrc_arg_concat(args, dsrc_inst)
     try:
-        inst = connect_instance(dsrc_inst=dsrc_inst, verbose=args.verbose, args=args)
+        inst = connect_instance(dsrc_inst=dsrc_inst, verbose=args.verbose,
+                                args=args)
     except Exception as e:
-        raise ValueError('Failed to connect to Directory Server instance: ' + str(e))
+        raise ValueError('Failed to connect to Directory Server instance: ' +
+                         str(e)) from e
 
     checks = args.check or dict(_list_targets(inst)).keys()
-
+    exclude_checks = args.exclude_check
+    print("MARK excl: " + str(exclude_checks))
     if args.list_checks or args.dry_run:
         _print_checks(inst, log, checks)
         return
 
-    _run(inst, log, args, _list_checks(inst, checks))
+    _run(inst, log, args, _list_checks(inst, checks),
+         _list_checks(inst, exclude_checks))
 
     disconnect_instance(inst)
 
@@ -175,3 +190,6 @@ def create_parser(subparsers):
     run_healthcheck_parser.add_argument('--check', nargs='+', default=None,
                                         help='Areas to check. These can be obtained by --list-checks. Every element on the left of the colon (:)'
                                              ' may be replaced by an asterisk if multiple options on the right are available.')
+    run_healthcheck_parser.add_argument('--exclude-check', nargs='+', default=[],
+                                        help='Areas to skip. These can be obtained by --list-checks. Every element on the left of the colon (:)'
+                                             ' may be replaced by an asterisk if multiple options on the right are available.')
diff --git a/src/lib389/lib389/dseldif.py b/src/lib389/lib389/dseldif.py
index 31577c9fa..3104a7b6f 100644
--- a/src/lib389/lib389/dseldif.py
+++ b/src/lib389/lib389/dseldif.py
@@ -23,7 +23,8 @@ from lib389.lint import (
     DSPERMLE0002,
     DSSKEWLE0001,
     DSSKEWLE0002,
-    DSSKEWLE0003
+    DSSKEWLE0003,
+    DSSKEWLE0004
 )
 
 
@@ -66,26 +67,49 @@ class DSEldif(DSLint):
         return 'dseldif'
 
     def _lint_nsstate(self):
+        """
+        Check the nsState attribute, which contains the CSN generator time
+        diffs, for excessive replication time skew
+        """
+        ignoring_skew = False
+        skew_high = 86400  # 1 day
+        skew_medium = 43200  # 12 hours
+        skew_low = 21600  # 6 hours
+
+        ignore_skew = self.get("cn=config", "nsslapd-ignore-time-skew")
+        if ignore_skew is not None and ignore_skew[0].lower() == "on":
+            # If we are ignoring time skew only report a warning if the skew
+            # is significant
+            ignoring_skew = True
+            skew_high = 86400 * 365  # Report a warning for skew over a year
+            skew_medium = 99999999999
+            skew_low = 99999999999
+
         suffixes = self.readNsState()
         for suffix in suffixes:
             # Check the local offset first
             report = None
-            skew = int(suffix['time_skew'])
-            if skew >= 86400:
-                # 24 hours - replication will break
-                report = copy.deepcopy(DSSKEWLE0003)
-            elif skew >= 43200:
+            skew = abs(int(suffix['time_skew']))
+            if skew >= skew_high:
+                if ignoring_skew:
+                    # Ignoring skew, but it's too excessive not to report it
+                    report = copy.deepcopy(DSSKEWLE0004)
+                else:
+                    # 24 hours of skew - replication will break
+                    report = copy.deepcopy(DSSKEWLE0003)
+            elif skew >= skew_medium:
                 # 12 hours
                 report = copy.deepcopy(DSSKEWLE0002)
-            elif skew >= 21600:
+            elif skew >= skew_low:
                 # 6 hours
                 report = copy.deepcopy(DSSKEWLE0001)
             if report is not None:
                 report['items'].append(suffix['suffix'])
                 report['items'].append('Time Skew')
                 report['items'].append('Skew: ' + suffix['time_skew_str'])
-                report['fix'] = report['fix'].replace('YOUR_INSTANCE', self._instance.serverid)
-                report['check'] = f'dseldif:nsstate'
+                report['fix'] = report['fix'].replace('YOUR_INSTANCE',
+                                                      self._instance.serverid)
+                report['check'] = 'dseldif:nsstate'
                 yield report
 
     def _update(self):
diff --git a/src/lib389/lib389/lint.py b/src/lib389/lib389/lint.py
index 1e48c790d..fe39a5d59 100644
--- a/src/lib389/lib389/lint.py
+++ b/src/lib389/lib389/lint.py
@@ -518,6 +518,17 @@ Also look at https://access.redhat.com/documentation/en-us/red_hat_directory_ser
 and find the paragraph "Too much time skew"."""
 }
 
+DSSKEWLE0004 = {
+    'dsle': 'DSSKEWLE0004',
+    'severity': 'Low',
+    'description': 'Extensive time skew.',
+    'items': ['Replication'],
+    'detail': """The time skew is over 365 days. If the time skew continues to
+increase eventually serious replication problems can occur.""",
+    'fix': """Avoid making changes to the system time, and make sure the clocks
+on all the replicas are correct."""
+}
+
 DSLOGNOTES0001 = {
     'dsle': 'DSLOGNOTES0001',
     'severity': 'Medium',
--
2.51.1

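The threshold logic from the `dseldif.py` hunk above can be condensed into a small pure function (the constants are taken directly from the patch; the helper name is ours):

```python
def skew_severity(skew_seconds, ignoring_skew=False):
    """Mirror _lint_nsstate(): return the lint code to report for a given
    replication time skew in seconds, or None if no report is warranted."""
    skew_high, skew_medium, skew_low = 86400, 43200, 21600  # 24h, 12h, 6h
    if ignoring_skew:
        # nsslapd-ignore-time-skew is on: only flag truly excessive skew
        skew_high = 86400 * 365  # over a year
        skew_medium = skew_low = 99999999999
    skew = abs(int(skew_seconds))  # the patch also makes the check sign-agnostic
    if skew >= skew_high:
        return "DSSKEWLE0004" if ignoring_skew else "DSSKEWLE0003"
    if skew >= skew_medium:
        return "DSSKEWLE0002"
    if skew >= skew_low:
        return "DSSKEWLE0001"
    return None
```

For example, a 25-hour skew yields the critical DSSKEWLE0003 report normally, but nothing at all when skew is being ignored, while a multi-year skew still surfaces as the new low-severity DSSKEWLE0004.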
@ -0,0 +1,79 @@
From 3ed914e5b7a668fbf90c4a2f425ce166901018b6 Mon Sep 17 00:00:00 2001
From: Viktor Ashirov <vashirov@redhat.com>
Date: Tue, 18 Nov 2025 14:17:09 +0100
Subject: [PATCH] Issue 6901 - Update changelog trimming logging (#7102)

Description:
* Set SLAPI_LOG_ERR for the message in `_cl5DispatchTrimThread`
* Add the number of scanned entries to the log.

Fixes: https://github.com/389ds/389-ds-base/issues/6901

Reviewed by: @mreynolds389, @progier389, @tbordaz (Thanks!)

(cherry picked from commit 375d317cbe39c7792cdc608f236846e18252d6b1)
Signed-off-by: Masahiro Matsuya <mmatsuya@redhat.com>
---
 ldap/servers/plugins/replication/cl5_api.c | 11 +++++++----
 1 file changed, 7 insertions(+), 4 deletions(-)

diff --git a/ldap/servers/plugins/replication/cl5_api.c b/ldap/servers/plugins/replication/cl5_api.c
index 5d4edea92..21d2f5b8b 100644
--- a/ldap/servers/plugins/replication/cl5_api.c
+++ b/ldap/servers/plugins/replication/cl5_api.c
@@ -2082,7 +2082,7 @@ _cl5DispatchDBThreads(void)
                           NULL, PR_PRIORITY_NORMAL, PR_GLOBAL_THREAD,
                           PR_UNJOINABLE_THREAD, DEFAULT_THREAD_STACKSIZE);
     if (NULL == pth) {
-        slapi_log_err(SLAPI_LOG_REPL, repl_plugin_name_cl,
+        slapi_log_err(SLAPI_LOG_ERR, repl_plugin_name_cl,
                       "_cl5DispatchDBThreads - Failed to create trimming thread"
                       "; NSPR error - %d\n",
                       PR_GetError());
@@ -3687,7 +3687,7 @@ _cl5TrimFile(Object *obj, long *numToTrim)
     slapi_operation_parameters op = {0};
     ReplicaId csn_rid;
     void *it;
-    int finished = 0, totalTrimmed = 0, count;
+    int finished = 0, totalTrimmed = 0, totalScanned = 0, count, scanned;
     PRBool abort;
     char strCSN[CSN_STRSIZE];
     int rc;
@@ -3704,6 +3704,7 @@ _cl5TrimFile(Object *obj, long *numToTrim)
     while (!finished && !slapi_is_shutting_down()) {
         it = NULL;
         count = 0;
+        scanned = 0;
         txnid = NULL;
         abort = PR_FALSE;
 
@@ -3720,6 +3721,7 @@ _cl5TrimFile(Object *obj, long *numToTrim)
 
         finished = _cl5GetFirstEntry(obj, &entry, &it, txnid);
         while (!finished && !slapi_is_shutting_down()) {
+            scanned++;
             /*
              * This change can be trimmed if it exceeds purge
              * parameters and has been seen by all consumers.
@@ -3809,6 +3811,7 @@ _cl5TrimFile(Object *obj, long *numToTrim)
                           rc, db_strerror(rc));
         } else {
             totalTrimmed += count;
+            totalScanned += scanned;
         }
     }
 
@@ -3818,8 +3821,8 @@ _cl5TrimFile(Object *obj, long *numToTrim)
     ruv_destroy(&ruv);
 
     if (totalTrimmed) {
-        slapi_log_err(SLAPI_LOG_REPL, repl_plugin_name_cl, "_cl5TrimFile - Trimmed %d changes from the changelog\n",
-                      totalTrimmed);
+        slapi_log_err(SLAPI_LOG_REPL, repl_plugin_name_cl, "_cl5TrimFile - Scanned %d records, and trimmed %d changes from the changelog\n",
+                      totalScanned, totalTrimmed);
     }
 }
 
--
2.51.1

@ -0,0 +1,70 @@
From 902805365a07ccb8210ec5a8867431c17999ae4a Mon Sep 17 00:00:00 2001
From: Mark Reynolds <mreynolds@redhat.com>
Date: Tue, 18 Nov 2025 15:04:45 -0500
Subject: [PATCH] Issue 7007 - Improve paged result search locking

Description:

Hold the paged result connection hash mutex while acquiring the global
connection paged result lock. Otherwise there is a window where the
mutex could be removed and lead to a crash.

Relates: https://github.com/389ds/389-ds-base/issues/7007

Reviewed by: progier, spichugi, and tbordaz (Thanks!!!)

(cherry picked from commit 17968b55bc481aaef775c51131cc93a70b86793d)
Signed-off-by: Masahiro Matsuya <mmatsuya@redhat.com>
---
ldap/servers/slapd/pagedresults.c | 8 ++++----
1 file changed, 4 insertions(+), 4 deletions(-)

diff --git a/ldap/servers/slapd/pagedresults.c b/ldap/servers/slapd/pagedresults.c
index 4aa1fa3e5..a18081c63 100644
--- a/ldap/servers/slapd/pagedresults.c
+++ b/ldap/servers/slapd/pagedresults.c
@@ -801,6 +801,7 @@ pagedresults_cleanup(Connection *conn, int needlock)
prp->pr_current_be = NULL;
if (prp->pr_mutex) {
PR_DestroyLock(prp->pr_mutex);
+ prp->pr_mutex = NULL;
}
memset(prp, '\0', sizeof(PagedResults));
}
@@ -841,6 +842,7 @@ pagedresults_cleanup_all(Connection *conn, int needlock)
prp = conn->c_pagedresults.prl_list + i;
if (prp->pr_mutex) {
PR_DestroyLock(prp->pr_mutex);
+ prp->pr_mutex = NULL;
}
if (prp->pr_current_be && prp->pr_search_result_set &&
prp->pr_current_be->be_search_results_release) {
@@ -1022,11 +1024,10 @@ pagedresults_lock(Connection *conn, int index)
}
pthread_mutex_lock(pageresult_lock_get_addr(conn));
prp = conn->c_pagedresults.prl_list + index;
- pthread_mutex_unlock(pageresult_lock_get_addr(conn));
if (prp->pr_mutex) {
PR_Lock(prp->pr_mutex);
}
- return;
+ pthread_mutex_unlock(pageresult_lock_get_addr(conn));
}

void
@@ -1038,11 +1039,10 @@ pagedresults_unlock(Connection *conn, int index)
}
pthread_mutex_lock(pageresult_lock_get_addr(conn));
prp = conn->c_pagedresults.prl_list + index;
- pthread_mutex_unlock(pageresult_lock_get_addr(conn));
if (prp->pr_mutex) {
PR_Unlock(prp->pr_mutex);
}
- return;
+ pthread_mutex_unlock(pageresult_lock_get_addr(conn));
}

int
--
2.51.1

@ -0,0 +1,30 @@
From 85c82d8e7c95eeb76788311b33544f8086cec31b Mon Sep 17 00:00:00 2001
From: Thierry Bordaz <tbordaz@redhat.com>
Date: Wed, 26 Nov 2025 10:38:40 +0100
Subject: [PATCH] Issue 6966 - (2nd) On large DB, unlimited IDL scan limit
 reduce the SRCH performance

(cherry picked from commit e098ee776d94c2d4dac6f2a01473d63d8db54954)
Signed-off-by: Masahiro Matsuya <mmatsuya@redhat.com>
---
ldap/servers/slapd/back-ldbm/instance.c | 4 ----
1 file changed, 4 deletions(-)

diff --git a/ldap/servers/slapd/back-ldbm/instance.c b/ldap/servers/slapd/back-ldbm/instance.c
index 29299b992..65084c61c 100644
--- a/ldap/servers/slapd/back-ldbm/instance.c
+++ b/ldap/servers/slapd/back-ldbm/instance.c
@@ -278,10 +278,6 @@ ldbm_instance_create_default_indexes(backend *be)
slapi_entry_free(e);
#endif

- e = ldbm_instance_init_config_entry(LDBM_NUMSUBORDINATES_STR, "pres", 0, 0, 0, 0);
- ldbm_instance_config_add_index_entry(inst, e, flags);
- slapi_entry_free(e);
-
e = ldbm_instance_init_config_entry(SLAPI_ATTR_UNIQUEID, "eq", 0, 0, 0, 0, 0);
ldbm_instance_config_add_index_entry(inst, e, flags);
slapi_entry_free(e);
--
2.51.1

@ -0,0 +1,143 @@
From 106cd8af10368dac8f1c3897436f9ca32bc13685 Mon Sep 17 00:00:00 2001
From: Viktor Ashirov <vashirov@redhat.com>
Date: Tue, 4 Nov 2025 12:05:51 +0100
Subject: [PATCH] Issue 7056 - DSBLE0007 doesn't generate remediation steps for
 missing indexes

Bug Description:
dsctl healthcheck doesn't generate remediation steps for missing
indexes, instead it prints an error message:

```
- Unable to check index ancestorId: No object exists given the filter criteria: ancestorId (&(&(objectclass=nsIndex))(|(cn=ancestorId)))
```

Fix Description:
Catch `ldap.NO_SUCH_OBJECT` when index is missing and generate
remediation instructions.
Update remediation instructions for missing index.
Fix failing tests due to missing idlistscanlimit.

Fixes: https://github.com/389ds/389-ds-base/issues/7056

Reviewed by: @progier389, @droideck (Thank you!)

(cherry picked from commit 0a85d7bcca0422ff1a8e20b219727410333c1a4f)
Signed-off-by: Masahiro Matsuya <mmatsuya@redhat.com>
---
.../healthcheck/health_system_indexes_test.py | 9 ++++--
src/lib389/lib389/backend.py | 28 ++++++++++++-------
2 files changed, 24 insertions(+), 13 deletions(-)

diff --git a/dirsrvtests/tests/suites/healthcheck/health_system_indexes_test.py b/dirsrvtests/tests/suites/healthcheck/health_system_indexes_test.py
index 61972d60c..6293340ca 100644
--- a/dirsrvtests/tests/suites/healthcheck/health_system_indexes_test.py
+++ b/dirsrvtests/tests/suites/healthcheck/health_system_indexes_test.py
@@ -171,7 +171,8 @@ def test_missing_parentid(topology_st, log_buffering_enabled):

log.info("Re-add the parentId index")
backend = Backends(standalone).get("userRoot")
- backend.add_index("parentid", ["eq"], matching_rules=["integerOrderingMatch"])
+ backend.add_index("parentid", ["eq"], matching_rules=["integerOrderingMatch"],
+ idlistscanlimit=['limit=5000 type=eq flags=AND'])

run_healthcheck_and_flush_log(topology_st, standalone, json=False, searched_code=CMD_OUTPUT)
run_healthcheck_and_flush_log(topology_st, standalone, json=True, searched_code=JSON_OUTPUT)
@@ -259,7 +260,8 @@ def test_usn_plugin_missing_entryusn(topology_st, usn_plugin_enabled, log_buffer

log.info("Re-add the entryusn index")
backend = Backends(standalone).get("userRoot")
- backend.add_index("entryusn", ["eq"], matching_rules=["integerOrderingMatch"])
+ backend.add_index("entryusn", ["eq"], matching_rules=["integerOrderingMatch"],
+ idlistscanlimit=['limit=5000 type=eq flags=AND'])

run_healthcheck_and_flush_log(topology_st, standalone, json=False, searched_code=CMD_OUTPUT)
run_healthcheck_and_flush_log(topology_st, standalone, json=True, searched_code=JSON_OUTPUT)
@@ -443,7 +445,8 @@ def test_multiple_missing_indexes(topology_st, log_buffering_enabled):

log.info("Re-add the missing system indexes")
backend = Backends(standalone).get("userRoot")
- backend.add_index("parentid", ["eq"], matching_rules=["integerOrderingMatch"])
+ backend.add_index("parentid", ["eq"], matching_rules=["integerOrderingMatch"],
+ idlistscanlimit=['limit=5000 type=eq flags=AND'])
backend.add_index("nsuniqueid", ["eq"])

run_healthcheck_and_flush_log(topology_st, standalone, json=False, searched_code=CMD_OUTPUT)
diff --git a/src/lib389/lib389/backend.py b/src/lib389/lib389/backend.py
index e74d1fbf9..14b64d1d3 100644
--- a/src/lib389/lib389/backend.py
+++ b/src/lib389/lib389/backend.py
@@ -541,8 +541,8 @@ class Backend(DSLdapObject):
# Default system indexes taken from ldap/servers/slapd/back-ldbm/instance.c
expected_system_indexes = {
'entryrdn': {'types': ['subtree'], 'matching_rule': None},
- 'parentId': {'types': ['eq'], 'matching_rule': 'integerOrderingMatch', 'scanlimit': 'limit=5000 type=eq flags=AND'},
- 'ancestorId': {'types': ['eq'], 'matching_rule': 'integerOrderingMatch', 'scanlimit': 'limit=5000 type=eq flags=AND'},
+ 'parentid': {'types': ['eq'], 'matching_rule': 'integerOrderingMatch', 'scanlimit': 'limit=5000 type=eq flags=AND'},
+ 'ancestorid': {'types': ['eq'], 'matching_rule': 'integerOrderingMatch', 'scanlimit': 'limit=5000 type=eq flags=AND'},
'objectClass': {'types': ['eq'], 'matching_rule': None},
'aci': {'types': ['pres'], 'matching_rule': None},
'nscpEntryDN': {'types': ['eq'], 'matching_rule': None},
@@ -584,15 +584,24 @@ class Backend(DSLdapObject):
for attr_name, expected_config in expected_system_indexes.items():
try:
index = indexes.get(attr_name)
+ except ldap.NO_SUCH_OBJECT:
+ # Index is missing
+ index = None
+ except Exception as e:
+ self._log.debug(f"_lint_system_indexes - Error getting index {attr_name}: {e}")
+ discrepancies.append(f"Unable to check index {attr_name}: {str(e)}")
+ continue
+
+ try:
# Check if index exists
if index is None:
discrepancies.append(f"Missing system index: {attr_name}")
# Generate remediation command
- index_types = ' '.join([f"--add-type {t}" for t in expected_config['types']])
+ index_types = ' '.join([f"--index-type {t}" for t in expected_config['types']])
cmd = f"dsconf YOUR_INSTANCE backend index add {bename} --attr {attr_name} {index_types}"
- if expected_config['matching_rule']:
- cmd += f" --add-mr {expected_config['matching_rule']}"
- if expected_config['scanlimit']:
+ if expected_config.get('matching_rule'):
+ cmd += f" --matching-rule {expected_config['matching_rule']}"
+ if expected_config.get('scanlimit'):
cmd += f" --add-scanlimit {expected_config['scanlimit']}"
remediation_commands.append(cmd)
reindex_attrs.add(attr_name) # New index needs reindexing
@@ -616,7 +625,7 @@ class Backend(DSLdapObject):
reindex_attrs.add(attr_name)

# Check matching rules
- expected_mr = expected_config['matching_rule']
+ expected_mr = expected_config.get('matching_rule')
if expected_mr:
actual_mrs_lower = [mr.lower() for mr in actual_mrs]
if expected_mr.lower() not in actual_mrs_lower:
@@ -638,7 +647,6 @@ class Backend(DSLdapObject):
remediation_commands.append(cmd)
reindex_attrs.add(attr_name)

-
except Exception as e:
self._log.debug(f"_lint_system_indexes - Error checking index {attr_name}: {e}")
discrepancies.append(f"Unable to check index {attr_name}: {str(e)}")
@@ -907,11 +915,11 @@ class Backend(DSLdapObject):

if idlistscanlimit is not None:
scanlimits = []
- for scanlimit in idlistscanlimit:
+ for scanlimit in idlistscanlimit:
scanlimits.append(scanlimit)
# Only add if there are actually limits in the list.
if len(scanlimits) > 0:
- props['nsIndexIDListScanLimit'] = mrs
+ props['nsIndexIDListScanLimit'] = scanlimits

new_index.create(properties=props, basedn="cn=index," + self._dn)

--
2.51.1

@ -52,7 +52,7 @@ ExcludeArch: i686
Summary: 389 Directory Server (base)
Name: 389-ds-base
Version: 1.4.3.39
Release: %{?relprefix}15%{?prerel}%{?dist}
Release: %{?relprefix}19%{?prerel}%{?dist}
License: GPL-3.0-or-later WITH GPL-3.0-389-ds-base-exception AND (0BSD OR Apache-2.0 OR MIT) AND (Apache-2.0 OR Apache-2.0 WITH LLVM-exception OR MIT) AND (Apache-2.0 OR BSD-2-Clause OR MIT) AND (Apache-2.0 OR BSL-1.0) AND (Apache-2.0 OR LGPL-2.1-or-later OR MIT) AND (Apache-2.0 OR MIT OR Zlib) AND (Apache-2.0 OR MIT) AND (MIT OR Apache-2.0) AND Unicode-3.0 AND (MIT OR Unlicense) AND Apache-2.0 AND BSD-3-Clause AND MIT AND MPL-2.0
URL: https://www.port389.org
Group: System Environment/Daemons
@ -347,6 +347,30 @@ Patch46: 0046-Issue-6686-CLI-Re-enabling-user-accounts-that-reache.patc
Patch47: 0047-Issue-6302-Allow-to-run-replication-status-without-a.patch
Patch48: 0048-Issue-6857-uiduniq-allow-specifying-match-rules-in-t.patch
Patch49: 0049-Issue-6859-str2filter-is-not-fully-applying-matching.patch
Patch50: 0050-Issue-6787-Improve-error-message-when-bulk-import-co.patch
Patch51: 0051-Issue-6641-modrdn-fails-when-a-user-is-member-of-mul.patch
Patch52: 0052-Issue-6470-Some-replication-status-data-are-reset-up.patch
Patch53: 0053-Issue-3729-RFE-Extend-log-of-operations-statistics-i.patch
Patch54: 0054-Issue-3729-cont-RFE-Extend-log-of-operations-statist.patch
Patch55: 0055-Issue-5710-subtree-search-statistics-for-index-looku.patch
Patch56: 0056-Issue-6764-statistics-about-index-lookup-report-a-wr.patch
Patch57: 0057-Issue-6470-Cont-Some-replication-status-data-are-res.patch
Patch58: 0058-Issue-6895-Crash-if-repl-keep-alive-entry-can-not-be.patch
Patch59: 0059-Issue-6884-Mask-password-hashes-in-audit-logs-6885.patch
Patch60: 0060-Issue-6819-Incorrect-pwdpolicysubentry-returned-for-.patch
Patch61: 0061-Issue-6936-Make-user-subtree-policy-creation-idempot.patch
Patch62: 0062-Issue-6641-Fix-memory-leaks.patch
Patch63: 0063-Issue-6933-When-deferred-memberof-update-is-enabled-.patch
Patch64: 0064-Issue-6928-The-parentId-attribute-is-indexed-with-im.patch
Patch65: 0065-Issue-6966-On-large-DB-unlimited-IDL-scan-limit-redu.patch
Patch66: 0066-Issue-6979-Improve-the-way-to-detect-asynchronous-op.patch
Patch67: 0067-Issue-7047-MemberOf-plugin-logs-null-attribute-name-.patch
Patch68: 0068-Issue-7032-The-new-ipahealthcheck-test-ipahealthchec.patch
Patch69: 0069-Issue-6947-Revise-time-skew-check-in-healthcheck-too.patch
Patch70: 0070-Issue-6901-Update-changelog-trimming-logging-7102.patch
Patch71: 0071-Issue-7007-Improve-paged-result-search-locking.patch
Patch72: 0072-Issue-6966-2nd-On-large-DB-unlimited-IDL-scan-limit-.patch
Patch73: 0073-Issue-7056-DSBLE0007-doesn-t-generate-remediation-st.patch


#Patch100: cargo.patch
@ -972,6 +996,34 @@ exit 0
%doc README.md

%changelog
* Fri Dec 05 2025 Masahiro Matsuya <mmatsuya@redhat.com> - 1.4.3.39-19
- Resolves: RHEL-117759 - Replication online reinitialization of a large database gets stalled. [rhel-8.10.z]

* Wed Dec 03 2025 Masahiro Matsuya <mmatsuya@redhat.com> - 1.4.3.39-18
- Reverts: RHEL-123241 - Attribute uniqueness is not enforced upon modrdn operation [rhel-8.10.z]

* Wed Nov 26 2025 Masahiro Matsuya <mmatsuya@redhat.com> - 1.4.3.39-17
- Resolves: RHEL-80491 - Can't rename users member of automember rule [rhel-8.10.z]
- Resolves: RHEL-87191 - Some replication status data are reset upon a restart. [rhel-8.10.z]
- Resolves: RHEL-89785 - Extend log of operations statistics in access log
- Resolves: RHEL-111226 - Error showing local password policy on web UI [rhel-8.10.z]
- Resolves: RHEL-113976 - AddressSanitizer: memory leak in memberof_add_memberof_attr [rhel-8.10.z]
- Resolves: RHEL-117457 - subtree search statistics for index lookup does not report ancestorid/entryrdn lookups
- Resolves: RHEL-117752 - Crash if repl keep alive entry can not be created [rhel-8.10.z]
- Resolves: RHEL-117759 - Replication online reinitialization of a large database gets stalled. [rhel-8.10.z]
- Resolves: RHEL-117765 - Statistics about index lookup report a wrong duration [rhel-8.10.z]
- Resolves: RHEL-123228 - Improve the way to detect asynchronous operations in the access logs [rhel-8.10.z]
- Resolves: RHEL-123241 - Attribute uniqueness is not enforced upon modrdn operation [rhel-8.10.z]
- Resolves: RHEL-123254 - Typo in errors log after a Memberof fixup task. [rhel-8.10.z]
- Resolves: RHEL-123269 - LDAP high CPU usage while handling indexes with IDL scan limit at INT_MAX [rhel-8.10.z]
- Resolves: RHEL-123276 - The new ipahealthcheck test ipahealthcheck.ds.backends.BackendsCheck raises CRITICAL issue [rhel-8.10.z]
- Resolves: RHEL-123363 - When deferred memberof update is enabled after the server crashed it should not launch memberof fixup task by default [rhel-8.10.z]
- Resolves: RHEL-123365 - IPA health check up script shows time skew is over 24 hours [rhel-8.10.z]
- Resolves: RHEL-123920 - Changelog trimming - add number of scanned entries to the log [rhel-8.10.z]
- Resolves: RHEL-126512 - Created user password hash available to see in audit log [rhel-8.10.z]
- Resolves: RHEL-129578 - Fix paged result search locking [rhel-8.10.z]
- Resolves: RHEL-130900 - On RHDS 12.6 The user password policy for a user was created, but the pwdpolicysubentry attribute for this user incorrectly points to the People OU password policy instead of the specific user policy. [rhel-8.10.z]

* Mon Aug 18 2025 Viktor Ashirov <vashirov@redhat.com> - 1.4.3.39-15
- Resolves: RHEL-109028 - Allow Uniqueness plugin to search uniqueness attributes using custom matching rules [rhel-8.10.z]