Wednesday, May 3, 2023

[389-users] Re: 389 Ldap Cleanallruv Replica Crash

Thanks Mark and Thierry for checking into this.


From: Thierry Bordaz <>
Sent: Wednesday, May 3, 2023 4:14 AM
To: General discussion list for the 389 Directory server project. <>; Juan Quintanilla <>
Subject: Re: [389-users] 389 Ldap Cleanallruv Replica Crash


Hi Juan,

Thanks for raising this issue. The crash can be reproduced, and I have opened a ticket to track it.

It is a side effect of a changelog (CL) refactoring done in the 2.x branch.

best regards

On 5/2/23 21:00, Juan Quintanilla wrote:

I recently installed 389-ds-base-libs-2.2.6-2.el8.x86_64 and 389-ds-base-2.2.6-2.el8.x86_64 on an AlmaLinux 8 server, but I'm encountering an issue with removing offline replicas from our existing 389 LDAP deployment.

When the command below is executed on one of the suppliers:

dsconf INSTANCE_NAME repl-tasks cleanallruv --suffix "ou=sample,dc=test,dc=dom" --replica-id 20 --force-cleaning

The entry is removed from the LDAP supplier, and when the change is propagated to the secondary supplier it is also removed with no problem. The issue is when the change reaches the consumer: the slapd process instantly crashes. When the consumer instance is brought back up, the entry that needed to be removed is gone.

Has anyone encountered a similar issue with consumers crashing during a CLEANALLRUV or CLEANRUV task?

I also tried running a CLEANRUV task on each server; the suppliers have no issue, but when the command is run on the read-only consumers the slapd process crashes.

ldapmodify -x -D "cn=manager" -W <<EOF
dn: cn=replica,cn=ou\3Dsample\2Cdc\3Dtest\2Cdc\3Ddom,cn=mapping tree,cn=config
changetype: modify
replace: nsds5task
nsds5task: CLEANRUV20
EOF
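
For reference, a stuck or repeatedly failing CLEANALLRUV task can also be aborted through the same replica entry. A minimal sketch, reusing the bind DN and replica ID 20 from above (the ABORT CLEANALLRUV&lt;rid&gt; task value comes from the 389-ds replication task interface; verify the exact form against your server version before relying on it):

ldapmodify -x -D "cn=manager" -W <<EOF
dn: cn=replica,cn=ou\3Dsample\2Cdc\3Dtest\2Cdc\3Ddom,cn=mapping tree,cn=config
changetype: modify
replace: nsds5task
nsds5task: ABORT CLEANALLRUV20
EOF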

There is no recorded error in the logs to indicate the reason for the crash.
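
When slapd dies without writing anything to the error log, the crashing stack usually has to come from a core dump instead. A minimal sketch, assuming the consumer host uses systemd-coredump and the 389-ds-base debuginfo packages are installed (commands are illustrative, not from the original report):

coredumpctl list ns-slapd
coredumpctl info ns-slapd   # prints the backtrace of the most recent crash
coredumpctl gdb ns-slapd    # open the core in gdb, e.g. for "thread apply all bt"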



_______________________________________________
389-users mailing list
