So node2 returns an LDAP_BUSY error, which is relatively rare (and usually happens just after logging the
"Retry count exceeded" error ...), because transactions get aborted too often due to db lock conflicts.
You can try to tweak nsslapd-db-deadlock-policy (there is a short sketch of how at the end of this message), but it is a bit puzzling:
During a bulk import (i.e. importing entries from another supplier), only the import is active on the backend, so
there should not be any db lock conflicts during an import.
And I do not understand why deleting and recreating the agreements could solve such issues,
especially since agreements toward and from the target replica are disabled while the replication is in progress.
Whether they exist or not should not change anything ...
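For what it's worth, you can check that state directly. A minimal sketch, assuming a local instance and Directory Manager credentials (both placeholders):

# Agreements live under the replica entry in cn=mapping tree,cn=config;
# nsds5ReplicaEnabled is "off" while an agreement is disabled.
ldapsearch -x -D "cn=Directory Manager" -W -H ldap://localhost \
    -b "cn=mapping tree,cn=config" \
    "(objectClass=nsds5ReplicationAgreement)" nsds5ReplicaEnabled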
Unless there is a bug somewhere and a db lock is leaking. But that does not ring any bells in my mind ...
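If you do want to experiment with the deadlock policy anyway, here is a minimal sketch, assuming the classic BDB backend and Directory Manager credentials; on recent versions the attribute may live on a backend-specific sub-entry instead, so check where it sits on your build:

# Read the current value; it is an integer selecting one of the Berkeley DB
# lock-detect policies (see the 389 DS configuration reference for the
# exact mapping on your version).
ldapsearch -x -D "cn=Directory Manager" -W -H ldap://localhost \
    -b "cn=config,cn=ldbm database,cn=plugins,cn=config" \
    -s base nsslapd-db-deadlock-policy

# Change it; the value below is only an illustration, not a recommendation.
ldapmodify -x -D "cn=Directory Manager" -W -H ldap://localhost <<EOF
dn: cn=config,cn=ldbm database,cn=plugins,cn=config
changetype: modify
replace: nsslapd-db-deadlock-policy
nsslapd-db-deadlock-policy: 6
EOF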
"Retry count exceeded" error ...) because txn get aborted too often because of db locks conflicts
You can try to tweak the nsslapd-db-deadlock-policy but it is a bit puzzling:
During a bulk import (i.e importing entries from another supplier), only the import is active on the backend, so
there should not be db lock conflict during an import
And I do not understand why deleting and recreating the agreements could solve such issues.
Especially since agreement toward and from the target replica are disabled while the replication is in progress.
The fact that they exist or not, should not change anything ...
Unless there is a bug somewhere an a db lock is leaking. But that does not ring any bells in mind mind ...
On Mon, Dec 9, 2024 at 6:37 PM Luiz Quirino via 389-users <389-users@lists.fedoraproject.org> wrote:
Hi, Pierre,
I appreciate your insightful approach to addressing this issue.
I reviewed the logs on node01, which was sending the data for replication, and noticed that at the exact moment the error was recorded on node02, node01 logged LDAP error 51 ("Server is busy").
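For reference, this is roughly how I correlated the two sides, assuming default log locations (the instance names below are placeholders for ours):

# The server that returned the busy result records it as err=51 in its
# access log; the supplier side reports it as text in its errors log.
grep "err=51" /var/log/dirsrv/slapd-node02/access
grep -i "busy" /var/log/dirsrv/slapd-node01/errors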
Given that node01 and node02 are virtual machines on the same network segment, I do not believe this issue is related to the network interface.
This leads me to focus on the minimum resource requirements or internal parameters of the 389 Directory Server as potential causes.
Currently, both nodes are configured with 4 vCPUs and 4 GB of RAM each.
I suspect that some internal parameter in the 389 DS configuration might be contributing to this issue.
Looking forward to your insights.
--
389 Directory Server Development Team