On 6/7/21 9:39 AM, Marco Favero wrote:
> Gasp, I suspect the problem is here. In the agreements I see
>
> dn: cn=it 2--\3E1,cn=replica,cn=c\3Dit,cn=mapping tree,cn=config
> objectClass: top
> objectClass: nsds5replicationagreement
> cn: it 2-->1
> cn: it 2--\>1
> nsDS5ReplicaRoot: c=it
> description: it 2-->1
> nsDS5ReplicaHost: srv1.example.com
> nsDS5ReplicaPort: 389
> nsDS5ReplicaBindMethod: simple
> nsDS5ReplicaTransportInfo: LDAP
> nsDS5ReplicaBindDN: cn=replication manager,cn=config
> nsds50ruv: {replicageneration} 60704f730000c3500000
> nsds50ruv: {replica 50001 ldap://srv1.example.com:389} 607424dd0000c3510000 60ba18fb0000c3510000
> nsds50ruv: {replica 50000 ldap://srv.example.com:389} 6074264a0000c3500000 60ba190f0000c3500000
> nsds50ruv: {replica 50002 ldap://srv2.example.com:389} 607426410000c3520000 60ba19050000c3520000
> nsruvReplicaLastModified: {replica 50001 ldap://srv1.example.com:389} 00000000
> nsruvReplicaLastModified: {replica 50000 ldap://srv.example.com:389} 00000000
> nsruvReplicaLastModified: {replica 50002 ldap://srv2.example.com:389} 00000000
> nsds5replicareapactive: 0
> nsds5replicaLastUpdateStart: 20210604124542Z
> nsds5replicaLastUpdateEnd: 20210604124542Z
> nsds5replicaChangesSentSinceStartup:: NTAwMDI6NC8wIA==
>
> The replica ID 50000 corresponds to srv3.example.com, the first host installed in a set of three multi-master servers. The load balancer host is srv.example.com. As suggested by dscreate, I put the balancer host in the "full_machine_name" parameter on all the LDAP servers. For a reason I don't know, the full_machine_name (the load balancer host) was written into the RUV in place of the FQDN of the machine hosting the dirsrv installation: in this case, srv.example.com in place of srv3.example.com.
Hi Marco,
the hostname in the RUV (nsds50ruv) comes from the 'nsslapd-localhost'
attribute in the 'cn=config' entry (dse.ldif). I am unsure of the impact
of this erroneous value (srv.example.com instead of srv3.example.com)
in the RUV.
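You can quickly verify what each instance has there with a base search
on cn=config; a sketch, assuming a Directory Manager bind (adjust the
host and credentials to your setup):

ldapsearch -x -LLL -H ldap://srv3.example.com:389 \
  -D "cn=Directory Manager" -W \
  -b "cn=config" -s base nsslapd-localhost

If srv3 reports 'nsslapd-localhost: srv.example.com', that would
confirm where the balancer name in the RUV comes from.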
IMHO what matters for the RA to start a replication session is
nsds5ReplicaHost and the replicageneration. Of course it would be
better if the hosts in the RUV elements were valid, but I am not sure
that explains why srv1->srv3 stopped working.
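To see what the agreement itself reports, you could also read its
status attributes; again a sketch assuming a Directory Manager bind:

ldapsearch -x -LLL -H ldap://srv1.example.com:389 \
  -D "cn=Directory Manager" -W \
  -b "cn=mapping tree,cn=config" \
  "(objectClass=nsds5replicationagreement)" \
  nsds5replicaLastUpdateStatus nsds5replicaUpdateInProgress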
If you can reproduce the problem, I would recommend enabling
replication logging (nsslapd-errorlog-level: 8192) on both sides (srv1
and srv3) and reproducing the failure of the RA. Then isolate the
failing replication session from the access and error logs.
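Something along these lines should do it (a sketch, assuming a
Directory Manager bind; the instance name in the log path is a
placeholder):

ldapmodify -x -H ldap://srv1.example.com:389 \
  -D "cn=Directory Manager" -W <<EOF
dn: cn=config
changetype: modify
replace: nsslapd-errorlog-level
nsslapd-errorlog-level: 8192
EOF

# reproduce the failure, then pull the replication messages, e.g.:
grep NSMMReplicationPlugin /var/log/dirsrv/slapd-<instance>/errors

Remember to reset nsslapd-errorlog-level once you have the trace, as
level 8192 is very verbose.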
regards
thierry
>
> I suspect that if I reinstall all the servers with their own hostname in "full_machine_name", my issue will be resolved.
>
> Any idea?
>
> Thank you very much
_______________________________________________
389-users mailing list -- 389-users@lists.fedoraproject.org
To unsubscribe send an email to 389-users-leave@lists.fedoraproject.org
Fedora Code of Conduct: https://docs.fedoraproject.org/en-US/project/code-of-conduct/
List Guidelines: https://fedoraproject.org/wiki/Mailing_list_guidelines
List Archives: https://lists.fedoraproject.org/archives/list/389-users@lists.fedoraproject.org
Do not reply to spam on the list, report it: https://pagure.io/fedora-infrastructure