Tuesday, May 20, 2014

[389-commits] Branch '389-ds-base-1.3.2' - dirsrvtests/tickets ldap/schema ldap/servers

dirsrvtests/tickets/ticket47676_test.py | 485 ++++++++++++++++++++++++++
ldap/schema/01core389.ldif | 5
ldap/servers/plugins/replication/repl5_init.c | 98 +++++
ldap/servers/slapd/schema.c | 304 +++++++++++++++-
ldap/servers/slapd/slap.h | 36 +
ldap/servers/slapd/slapi-private.h | 3
6 files changed, 922 insertions(+), 9 deletions(-)

New commits:
commit e35d20194b415167ec0fbb0f0ec62e39e3808a32
Author: Thierry bordaz (tbordaz) <tbordaz@redhat.com>
Date: Tue Feb 4 11:50:45 2014 +0100

Ticket 47676 : Replication of the schema fails 'master branch' -> 1.2.11 or 1.3.1

Bug Description:

Since https://fedorahosted.org/389/ticket/47490 and https://fedorahosted.org/389/ticket/47541, a supplier schema is
pushed to the consumer on the condition that the consumer schema is not a superset of the supplier schema.

With https://fedorahosted.org/389/ticket/47647, the objectclass 'printer-uri' was removed from master.
Starting in 1.3.1, the unhashed#user#password pseudo attribute has been removed from the schema.

A consequence is that replication of the schema fails: master->1.2.11 and master->1.3.1

Fix Description:

Replication plugin initialization (multimaster_start) creates the following three entries:
dn: cn=replSchema,cn=config
objectClass: top
objectClass: nsSchemaPolicy
cn: replSchema

dn: cn=consumerUpdatePolicy,cn=replSchema,cn=config
objectClass: top
objectClass: nsSchemaPolicy
cn: consumerUpdatePolicy
schemaUpdateObjectclassAccept: printer-uri-oid
schemaUpdateAttributeAccept: 2.16.840.1.113730.3.1.2110
schemaUpdateObjectclassReject: dummy-objectclass-name
schemaUpdateAttributeReject: dummy-attribute-name

dn: cn=supplierUpdatePolicy,cn=replSchema,cn=config
objectClass: top
objectClass: nsSchemaPolicy
cn: supplierUpdatePolicy
schemaUpdateObjectclassAccept: printer-uri-oid
schemaUpdateAttributeAccept: 2.16.840.1.113730.3.1.2110
schemaUpdateObjectclassReject: dummy-objectclass-name
schemaUpdateAttributeReject: dummy-attribute-name

schemaUpdateObjectclassAccept, schemaUpdateAttributeAccept, schemaUpdateObjectclassReject and schemaUpdateAttributeReject
are optional multi-valued attributes.
Their values are strings giving an objectclass/attribute name or OID.
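
For illustration only, a deployment could extend the supplier policy with an additional accept value.
The following is a minimal sketch using python-ldap (the same binding used by the test below); the
connection details and the 'exampleObjectclass' value are hypothetical placeholders, and, as in the
test, a restart is needed before the new policy is taken into account:

import ldap

# Hypothetical connection details -- adjust host, port and credentials.
conn = ldap.initialize("ldap://localhost:389")
conn.simple_bind_s("cn=Directory Manager", "password")

# Hypothetical value: always accept the objectclass named 'exampleObjectclass'
# during schema replication, bypassing the normal supplier/consumer comparison.
mod = [(ldap.MOD_ADD, 'schemaUpdateObjectclassAccept', 'exampleObjectclass')]
conn.modify_s("cn=supplierUpdatePolicy,cn=replSchema,cn=config", mod)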

During a replication session, if the consumer schema needs to be updated (because nsSchemaCSN differs), the checks are:
On the supplier side:
OBJECTCLASSES
For each objectclass OC in the consumer schema:
if 'cn=supplierUpdatePolicy,cn=replSchema,cn=config' contains the value
schemaUpdateObjectclassAccept: <oc_name or oc_oid>
then this OC is "accepted" without checking that
- OC exists in the supplier schema
- supplier's OC >= consumer's OC

if it contains the value
schemaUpdateObjectclassReject: <oc_name or oc_oid>
then the supplier schema is not pushed (rejected)

If neither value exists, the "normal" processing determines whether the schema can be pushed
(if supplier's OC < consumer's OC, the schema is not pushed)

ATTRIBUTES
It does the same processing as above for each attribute, looking for values
schemaUpdateAttributeAccept and schemaUpdateAttributeReject

On the consumer side:

OBJECTCLASSES
It does the same processing as above for each OC in the supplier schema, checking against
the entry 'cn=consumerUpdatePolicy,cn=replSchema,cn=config'.

ATTRIBUTES
It does the same processing as above for each AT in the supplier schema, checking against
the entry 'cn=consumerUpdatePolicy,cn=replSchema,cn=config' (see the sketch after this description).
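
The per-element decision described above can be summarized with a short, illustrative sketch
(the function name element_allows_update and the policy dict are hypothetical and not the
schema.c implementation):

ACCEPT = "accept"   # corresponds to schemaUpdate*Accept values
REJECT = "reject"   # corresponds to schemaUpdate*Reject values

def element_allows_update(name_or_oid, policy, supplier_is_superset):
    """Decide whether one objectclass/attribute blocks the schema update.

    policy maps a name or OID to ACCEPT or REJECT; supplier_is_superset is
    the result of the normal check (supplier definition >= consumer definition).
    Returns True if this element does not block pushing/accepting the schema.
    """
    action = policy.get(name_or_oid)
    if action == ACCEPT:
        return True                   # accepted, superset check skipped
    if action == REJECT:
        return False                  # presence of this element rejects the update
    return supplier_is_superset       # normal processing

# For example, with the reject policy used later in the test:
policy = {'OCticket47676': REJECT}
assert not element_allows_update('OCticket47676', policy, supplier_is_superset=True)
assert element_allows_update('person', policy, supplier_is_superset=True)

Both supplier and consumer apply the same logic, each against its own policy entry.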

https://fedorahosted.org/389/ticket/47676

Reviewed by: Rich Megginson (Thanks Rich!)

Platforms tested: F17/F19(jenkins)

Flag Day: no

Doc impact: no

diff --git a/dirsrvtests/tickets/ticket47676_test.py b/dirsrvtests/tickets/ticket47676_test.py
new file mode 100644
index 0000000..8ba5956
--- /dev/null
+++ b/dirsrvtests/tickets/ticket47676_test.py
@@ -0,0 +1,485 @@
+'''
+Created on Nov 7, 2013
+
+@author: tbordaz
+'''
+import os
+import sys
+import time
+import ldap
+import logging
+import socket
+import pytest
+import re
+from lib389 import DirSrv, Entry, tools
+from lib389.tools import DirSrvTools
+from lib389._constants import *
+from lib389.properties import *
+from constants import *
+from lib389._constants import REPLICAROLE_MASTER
+
+logging.getLogger(__name__).setLevel(logging.DEBUG)
+log = logging.getLogger(__name__)
+
+#
+# important part. We can deploy Master1 and Master2 on different versions
+#
+installation1_prefix = None
+installation2_prefix = None
+
+SCHEMA_DN = "cn=schema"
+TEST_REPL_DN = "cn=test_repl, %s" % SUFFIX
+OC_NAME = 'OCticket47676'
+OC_OID_EXT = 2
+MUST = "(postalAddress $ postalCode)"
+MAY = "(member $ street)"
+
+OC2_NAME = 'OC2ticket47676'
+OC2_OID_EXT = 3
+MUST_2 = "(postalAddress $ postalCode)"
+MAY_2 = "(member $ street)"
+
+REPL_SCHEMA_POLICY_CONSUMER = "cn=consumerUpdatePolicy,cn=replSchema,cn=config"
+REPL_SCHEMA_POLICY_SUPPLIER = "cn=supplierUpdatePolicy,cn=replSchema,cn=config"
+
+OTHER_NAME = 'other_entry'
+MAX_OTHERS = 10
+
+BIND_NAME = 'bind_entry'
+BIND_DN = 'cn=%s, %s' % (BIND_NAME, SUFFIX)
+BIND_PW = 'password'
+
+ENTRY_NAME = 'test_entry'
+ENTRY_DN = 'cn=%s, %s' % (ENTRY_NAME, SUFFIX)
+ENTRY_OC = "top person %s" % OC_NAME
+
+BASE_OID = "1.2.3.4.5.6.7.8.9.10"
+
+def _oc_definition(oid_ext, name, must=None, may=None):
+ oid = "%s.%d" % (BASE_OID, oid_ext)
+ desc = 'To test ticket 47490'
+ sup = 'person'
+ if not must:
+ must = MUST
+ if not may:
+ may = MAY
+
+ new_oc = "( %s NAME '%s' DESC '%s' SUP %s AUXILIARY MUST %s MAY %s )" % (oid, name, desc, sup, must, may)
+ return new_oc
+class TopologyMaster1Master2(object):
+ def __init__(self, master1, master2):
+ master1.open()
+ self.master1 = master1
+
+ master2.open()
+ self.master2 = master2
+
+
+@pytest.fixture(scope="module")
+def topology(request):
+ '''
+ This fixture is used to create a replicated topology for the 'module'.
+ The replicated topology is MASTER1 <-> Master2.
+ At the beginning, a master1 instance and/or a master2 instance may already exist.
+ A backup of master1 and/or master2 may also exist.
+
+ Principle:
+ If master1 instance exists:
+ restart it
+ If master2 instance exists:
+ restart it
+ If backup of master1 AND backup of master2 exists:
+ create or rebind to master1
+ create or rebind to master2
+
+ restore master1 from backup
+ restore master2 from backup
+ else:
+ Cleanup everything
+ remove instances
+ remove backups
+ Create instances
+ Initialize replication
+ Create backups
+ '''
+ global installation1_prefix
+ global installation2_prefix
+
+ # allocate master1 on a given deployment
+ master1 = DirSrv(verbose=False)
+ if installation1_prefix:
+ args_instance[SER_DEPLOYED_DIR] = installation1_prefix
+
+ # Args for the master1 instance
+ args_instance[SER_HOST] = HOST_MASTER_1
+ args_instance[SER_PORT] = PORT_MASTER_1
+ args_instance[SER_SERVERID_PROP] = SERVERID_MASTER_1
+ args_master = args_instance.copy()
+ master1.allocate(args_master)
+
+ # allocate master2 on a given deployment
+ master2 = DirSrv(verbose=False)
+ if installation2_prefix:
+ args_instance[SER_DEPLOYED_DIR] = installation2_prefix
+
+ # Args for the master2 instance
+ args_instance[SER_HOST] = HOST_MASTER_2
+ args_instance[SER_PORT] = PORT_MASTER_2
+ args_instance[SER_SERVERID_PROP] = SERVERID_MASTER_2
+ args_master = args_instance.copy()
+ master2.allocate(args_master)
+
+
+ # Get the status of the backups
+ backup_master1 = master1.checkBackupFS()
+ backup_master2 = master2.checkBackupFS()
+
+ # Get the status of the instance and restart it if it exists
+ instance_master1 = master1.exists()
+ if instance_master1:
+ master1.stop(timeout=10)
+ master1.start(timeout=10)
+
+ instance_master2 = master2.exists()
+ if instance_master2:
+ master2.stop(timeout=10)
+ master2.start(timeout=10)
+
+ if backup_master1 and backup_master2:
+ # The backups exist, assuming they are correct
+ # we just re-init the instances with them
+ if not instance_master1:
+ master1.create()
+ # Used to retrieve configuration information (dbdir, confdir...)
+ master1.open()
+
+ if not instance_master2:
+ master2.create()
+ # Used to retrieve configuration information (dbdir, confdir...)
+ master2.open()
+
+ # restore master1 from backup
+ master1.stop(timeout=10)
+ master1.restoreFS(backup_master1)
+ master1.start(timeout=10)
+
+ # restore master2 from backup
+ master2.stop(timeout=10)
+ master2.restoreFS(backup_master2)
+ master2.start(timeout=10)
+ else:
+ # We should get here only in two cases:
+ # - This is the first time a test involves these instances,
+ # so we need to create everything
+ # - Something weird happened (instance/backup destroyed)
+ # so we discard everything and recreate all
+
+ # Remove all the backups. So even if we have a specific backup file
+ # (e.g. backup_master) we clear all backups that an instance may have created
+ if backup_master1:
+ master1.clearBackupFS()
+ if backup_master2:
+ master2.clearBackupFS()
+
+ # Remove all the instances
+ if instance_master1:
+ master1.delete()
+ if instance_master2:
+ master2.delete()
+
+ # Create the instances
+ master1.create()
+ master1.open()
+ master2.create()
+ master2.open()
+
+ #
+ # Now prepare the Master-Master topology
+ #
+ # First Enable replication
+ master1.replica.enableReplication(suffix=SUFFIX, role=REPLICAROLE_MASTER, replicaId=REPLICAID_MASTER_1)
+ master2.replica.enableReplication(suffix=SUFFIX, role=REPLICAROLE_MASTER, replicaId=REPLICAID_MASTER_2)
+
+ # Initialize the supplier->consumer
+
+ properties = {RA_NAME: r'meTo_$host:$port',
+ RA_BINDDN: defaultProperties[REPLICATION_BIND_DN],
+ RA_BINDPW: defaultProperties[REPLICATION_BIND_PW],
+ RA_METHOD: defaultProperties[REPLICATION_BIND_METHOD],
+ RA_TRANSPORT_PROT: defaultProperties[REPLICATION_TRANSPORT]}
+ repl_agreement = master1.agreement.create(suffix=SUFFIX, host=master2.host, port=master2.port, properties=properties)
+
+ if not repl_agreement:
+ log.fatal("Fail to create a replica agreement")
+ sys.exit(1)
+
+ log.debug("%s created" % repl_agreement)
+
+ properties = {RA_NAME: r'meTo_$host:$port',
+ RA_BINDDN: defaultProperties[REPLICATION_BIND_DN],
+ RA_BINDPW: defaultProperties[REPLICATION_BIND_PW],
+ RA_METHOD: defaultProperties[REPLICATION_BIND_METHOD],
+ RA_TRANSPORT_PROT: defaultProperties[REPLICATION_TRANSPORT]}
+ master2.agreement.create(suffix=SUFFIX, host=master1.host, port=master1.port, properties=properties)
+
+ master1.agreement.init(SUFFIX, HOST_MASTER_2, PORT_MASTER_2)
+ master1.waitForReplInit(repl_agreement)
+
+ # Check replication is working fine
+ master1.add_s(Entry((TEST_REPL_DN, {
+ 'objectclass': "top person".split(),
+ 'sn': 'test_repl',
+ 'cn': 'test_repl'})))
+ loop = 0
+ while loop <= 10:
+ try:
+ ent = master2.getEntry(TEST_REPL_DN, ldap.SCOPE_BASE, "(objectclass=*)")
+ break
+ except ldap.NO_SUCH_OBJECT:
+ time.sleep(1)
+ loop += 1
+
+ # Time to create the backups
+ master1.stop(timeout=10)
+ master1.backupfile = master1.backupFS()
+ master1.start(timeout=10)
+
+ master2.stop(timeout=10)
+ master2.backupfile = master2.backupFS()
+ master2.start(timeout=10)
+
+ #
+ # Here we have two master instances
+ # with replication working, either coming from a backup recovery
+ # or from a fresh (re)init
+ # Time to return the topology
+ return TopologyMaster1Master2(master1, master2)
+
+
+def test_ticket47676_init(topology):
+ """
+ This test adds:
+ - an objectclass with MAY 'member'
+ - an entry ('bind_entry') used to bind with
+ - some dummy entries used as 'member' values
+ It also enables ACL and replication error logging.
+
+ """
+
+
+ topology.master1.log.info("Add %s that allows 'member' attribute" % OC_NAME)
+ new_oc = _oc_definition(OC_OID_EXT, OC_NAME, must = MUST, may = MAY)
+ topology.master1.addSchema('objectClasses', new_oc)
+
+
+ # entry used to bind with
+ topology.master1.log.info("Add %s" % BIND_DN)
+ topology.master1.add_s(Entry((BIND_DN, {
+ 'objectclass': "top person".split(),
+ 'sn': BIND_NAME,
+ 'cn': BIND_NAME,
+ 'userpassword': BIND_PW})))
+
+ # enable acl error logging
+ mod = [(ldap.MOD_REPLACE, 'nsslapd-errorlog-level', str(128+8192))] # ACL + REPL
+ topology.master1.modify_s(DN_CONFIG, mod)
+ topology.master2.modify_s(DN_CONFIG, mod)
+
+ # add dummy entries
+ for cpt in range(MAX_OTHERS):
+ name = "%s%d" % (OTHER_NAME, cpt)
+ topology.master1.add_s(Entry(("cn=%s,%s" % (name, SUFFIX), {
+ 'objectclass': "top person".split(),
+ 'sn': name,
+ 'cn': name})))
+
+def test_ticket47676_skip_oc_at(topology):
+ '''
+ This test adds an entry on MASTER1 (where 47676 is fixed) and checks that the entry is replicated
+ to MASTER2 (even if 47676 is NOT fixed on MASTER2). It then updates the entry on MASTER2.
+ If the schema has successfully been pushed, the update on MASTER2 should succeed.
+ '''
+ topology.master1.log.info("\n\n######################### ADD ######################\n")
+
+ # bind as 'cn=Directory manager'
+ topology.master1.log.info("Bind as %s and add the add the entry with specific oc" % DN_DM)
+ topology.master1.simple_bind_s(DN_DM, PASSWORD)
+
+ # Prepare the entry with multivalued members
+ entry = Entry(ENTRY_DN)
+ entry.setValues('objectclass', 'top', 'person', 'OCticket47676')
+ entry.setValues('sn', ENTRY_NAME)
+ entry.setValues('cn', ENTRY_NAME)
+ entry.setValues('postalAddress', 'here')
+ entry.setValues('postalCode', '1234')
+ members = []
+ for cpt in range(MAX_OTHERS):
+ name = "%s%d" % (OTHER_NAME, cpt)
+ members.append("cn=%s,%s" % (name, SUFFIX))
+ members.append(BIND_DN)
+ entry.setValues('member', members)
+
+ topology.master1.log.info("Try to add Add %s should be successful" % ENTRY_DN)
+ topology.master1.add_s(entry)
+
+ #
+ # Now check the entry has been replicated
+ #
+ topology.master2.simple_bind_s(DN_DM, PASSWORD)
+ topology.master1.log.info("Try to retrieve %s from Master2" % ENTRY_DN)
+ loop = 0
+ while loop <= 10:
+ try:
+ ent = topology.master2.getEntry(ENTRY_DN, ldap.SCOPE_BASE, "(objectclass=*)")
+ break
+ except ldap.NO_SUCH_OBJECT:
+ time.sleep(2)
+ loop += 1
+ assert loop <= 10
+
+ # Now update the entry on Master2 (as DM because 47676 is possibly not fixed on M2)
+ topology.master1.log.info("Update %s on M2" % ENTRY_DN)
+ mod = [(ldap.MOD_REPLACE, 'description', 'test_add')]
+ topology.master2.modify_s(ENTRY_DN, mod)
+
+ topology.master1.simple_bind_s(DN_DM, PASSWORD)
+ loop = 0
+ while loop <= 10:
+ ent = topology.master1.getEntry(ENTRY_DN, ldap.SCOPE_BASE, "(objectclass=*)")
+ if ent.hasAttr('description') and (ent.getValue('description') == 'test_add'):
+ break
+ time.sleep(1)
+ loop += 1
+
+ assert ent.getValue('description') == 'test_add'
+
+def test_ticket47676_reject_action(topology):
+
+ topology.master1.log.info("\n\n######################### REJECT ACTION ######################\n")
+
+ topology.master1.simple_bind_s(DN_DM, PASSWORD)
+ topology.master2.simple_bind_s(DN_DM, PASSWORD)
+
+ # make master1 refuse to push the schema if OC_NAME is present in the consumer schema
+ mod = [(ldap.MOD_ADD, 'schemaUpdateObjectclassReject', '%s' % (OC_NAME) )]
+ topology.master1.modify_s(REPL_SCHEMA_POLICY_SUPPLIER, mod)
+
+ # Restart is required to take into account that policy
+ topology.master1.stop(timeout=10)
+ topology.master1.start(timeout=10)
+
+ # Add a new OC on M1 so that schema CSN will change and M1 will try to push the schema
+ topology.master1.log.info("Add %s on M1" % OC2_NAME)
+ new_oc = _oc_definition(OC2_OID_EXT, OC2_NAME, must = MUST, may = MAY)
+ topology.master1.addSchema('objectClasses', new_oc)
+
+ # Sanity check that the schema has been updated on M1
+ topology.master1.log.info("Check %s is in M1" % OC2_NAME)
+ ent = topology.master1.getEntry(SCHEMA_DN, ldap.SCOPE_BASE, "(objectclass=*)", ["objectclasses"])
+ assert ent.hasAttr('objectclasses')
+ found = False
+ for objectclass in ent.getValues('objectclasses'):
+ if str(objectclass).find(OC2_NAME) >= 0:
+ found = True
+ break
+ assert found
+
+ # Do an update of M1 so that M1 will try to push the schema
+ topology.master1.log.info("Update %s on M1" % ENTRY_DN)
+ mod = [(ldap.MOD_REPLACE, 'description', 'test_reject')]
+ topology.master1.modify_s(ENTRY_DN, mod)
+
+ # Check that replication occurred, and therefore that M1 also attempted to push the schema
+ topology.master1.log.info("Check updated %s on M2" % ENTRY_DN)
+ loop = 0
+ while loop <= 10:
+ ent = topology.master2.getEntry(ENTRY_DN, ldap.SCOPE_BASE, "(objectclass=*)", ['description'])
+ if ent.hasAttr('description') and ent.getValue('description') == 'test_reject':
+ # update was replicated
+ break
+ time.sleep(2)
+ loop += 1
+ assert loop <= 10
+
+ # Check that the schema has not been pushed
+ topology.master1.log.info("Check %s is not in M2" % OC2_NAME)
+ ent = topology.master2.getEntry(SCHEMA_DN, ldap.SCOPE_BASE, "(objectclass=*)", ["objectclasses"])
+ assert ent.hasAttr('objectclasses')
+ found = False
+ for objectclass in ent.getValues('objectclasses'):
+ if str(objectclass).find(OC2_NAME) >= 0:
+ found = True
+ break
+ assert not found
+
+ topology.master1.log.info("\n\n######################### NO MORE REJECT ACTION ######################\n")
+
+ # make master1 apply no specific action for OC_NAME
+ mod = [(ldap.MOD_DELETE, 'schemaUpdateObjectclassReject', '%s' % (OC_NAME) )]
+ topology.master1.modify_s(REPL_SCHEMA_POLICY_SUPPLIER, mod)
+
+ # Restart is required to take into account that policy
+ topology.master1.stop(timeout=10)
+ topology.master1.start(timeout=10)
+
+ # Do an update of M1 so that M1 will try to push the schema
+ topology.master1.log.info("Update %s on M1" % ENTRY_DN)
+ mod = [(ldap.MOD_REPLACE, 'description', 'test_no_more_reject')]
+ topology.master1.modify_s(ENTRY_DN, mod)
+
+ # Check that replication occurred, and therefore that M1 also attempted to push the schema
+ topology.master1.log.info("Check updated %s on M2" % ENTRY_DN)
+ loop = 0
+ while loop <= 10:
+ ent = topology.master2.getEntry(ENTRY_DN, ldap.SCOPE_BASE, "(objectclass=*)", ['description'])
+ if ent.hasAttr('description') and ent.getValue('description') == 'test_no_more_reject':
+ # update was replicated
+ break
+ time.sleep(2)
+ loop += 1
+ assert loop <= 10
+
+ # Check that the schema has been pushed
+ topology.master1.log.info("Check %s is in M2" % OC2_NAME)
+ ent = topology.master2.getEntry(SCHEMA_DN, ldap.SCOPE_BASE, "(objectclass=*)", ["objectclasses"])
+ assert ent.hasAttr('objectclasses')
+ found = False
+ for objectclass in ent.getValues('objectclasses'):
+ if str(objectclass).find(OC2_NAME) >= 0:
+ found = True
+ break
+ assert found
+
+def test_ticket47676_final(topology):
+ topology.master1.stop(timeout=10)
+ topology.master2.stop(timeout=10)
+
+def run_isolated():
+ '''
+ run_isolated is used to run these test cases independently of a test scheduler (xunit, py.test, ...).
+ To run them isolated, without py.test, you need to:
+ - edit this file and comment out the '@pytest.fixture' line before the 'topology' function
+ - set the installation prefix
+ - run this program
+ '''
+ global installation1_prefix
+ global installation2_prefix
+ installation1_prefix = None
+ installation2_prefix = None
+
+ topo = topology(True)
+ topo.master1.log.info("\n\n######################### Ticket 47676 ######################\n")
+ test_ticket47676_init(topo)
+
+ test_ticket47676_skip_oc_at(topo)
+ test_ticket47676_reject_action(topo)
+
+ test_ticket47676_final(topo)
+
+
+
+
+if __name__ == '__main__':
+ run_isolated()
+
diff --git a/ldap/schema/01core389.ldif b/ldap/schema/01core389.ldif
index b9baae7..85b1860 100644
--- a/ldap/schema/01core389.ldif
+++ b/ldap/schema/01core389.ldif
@@ -154,6 +154,10 @@ attributeTypes: ( 2.16.840.1.113730.3.1.2152 NAME 'nsds5ReplicaProtocolTimeout'
attributeTypes: ( 2.16.840.1.113730.3.1.2154 NAME 'nsds5ReplicaBackoffMin' DESC 'Netscape defined attribute type' SYNTAX 1.3.6.1.4.1.1466.115.121.1.27 SINGLE-VALUE X-ORIGIN 'Netscape Directory Server' )
attributeTypes: ( 2.16.840.1.113730.3.1.2155 NAME 'nsds5ReplicaBackoffMax' DESC 'Netscape defined attribute type' SYNTAX 1.3.6.1.4.1.1466.115.121.1.27 SINGLE-VALUE X-ORIGIN 'Netscape Directory Server' )
attributeTypes: ( 2.16.840.1.113730.3.1.2156 NAME 'nsslapd-sasl-max-buffer-size' DESC 'Netscape defined attribute type' SYNTAX 1.3.6.1.4.1.1466.115.121.1.27 SINGLE-VALUE X-ORIGIN 'Netscape Directory Server' )
+attributeTypes: ( 2.16.840.1.113730.3.1.2165 NAME 'schemaUpdateObjectclassAccept' DESC 'Netscape defined attribute type' SYNTAX 1.3.6.1.4.1.1466.115.121.1.15 X-ORIGIN 'Netscape Directory Server' )
+attributeTypes: ( 2.16.840.1.113730.3.1.2166 NAME 'schemaUpdateObjectclassReject' DESC 'Netscape defined attribute type' SYNTAX 1.3.6.1.4.1.1466.115.121.1.15 X-ORIGIN 'Netscape Directory Server' )
+attributeTypes: ( 2.16.840.1.113730.3.1.2167 NAME 'schemaUpdateAttributeAccept' DESC 'Netscape defined attribute type' SYNTAX 1.3.6.1.4.1.1466.115.121.1.15 X-ORIGIN 'Netscape Directory Server' )
+attributeTypes: ( 2.16.840.1.113730.3.1.2168 NAME 'schemaUpdateAttributeReject' DESC 'Netscape defined attribute type' SYNTAX 1.3.6.1.4.1.1466.115.121.1.15 X-ORIGIN 'Netscape Directory Server' )
#
# objectclasses
#
@@ -165,6 +169,7 @@ objectClasses: ( 2.16.840.1.113730.3.2.110 NAME 'nsMappingTree' DESC 'Netscape d
objectClasses: ( 2.16.840.1.113730.3.2.104 NAME 'nsContainer' DESC 'Netscape defined objectclass' SUP top MUST ( CN ) X-ORIGIN 'Netscape Directory Server' )
objectClasses: ( 2.16.840.1.113730.3.2.108 NAME 'nsDS5Replica' DESC 'Netscape defined objectclass' SUP top MUST ( nsDS5ReplicaRoot $ nsDS5ReplicaId ) MAY (cn $ nsds5ReplicaCleanRUV $ nsds5ReplicaAbortCleanRUV $ nsDS5ReplicaType $ nsDS5ReplicaBindDN $ nsState $ nsDS5ReplicaName $ nsDS5Flags $ nsDS5Task $ nsDS5ReplicaReferral $ nsDS5ReplicaAutoReferral $ nsds5ReplicaPurgeDelay $ nsds5ReplicaTombstonePurgeInterval $ nsds5ReplicaChangeCount $ nsds5ReplicaLegacyConsumer $ nsds5ReplicaProtocolTimeout $ nsds5ReplicaBackoffMin $ nsds5ReplicaBackoffMax ) X-ORIGIN 'Netscape Directory Server' )
objectClasses: ( 2.16.840.1.113730.3.2.113 NAME 'nsTombstone' DESC 'Netscape defined objectclass' SUP top MAY ( nsParentUniqueId $ nscpEntryDN ) X-ORIGIN 'Netscape Directory Server' )
+objectClasses: ( 2.16.840.1.113730.3.2.115 NAME 'nsSchemaPolicy' DESC 'Netscape defined objectclass' SUP top MAY ( cn $ schemaUpdateObjectclassAccept $ schemaUpdateObjectclassReject $ schemaUpdateAttributeAccept $ schemaUpdateAttributeReject) X-ORIGIN 'Netscape Directory Server' )
objectClasses: ( 2.16.840.1.113730.3.2.103 NAME 'nsDS5ReplicationAgreement' DESC 'Netscape defined objectclass' SUP top MUST ( cn ) MAY ( nsds5ReplicaCleanRUVNotified $ nsDS5ReplicaHost $ nsDS5ReplicaPort $ nsDS5ReplicaTransportInfo $ nsDS5ReplicaBindDN $ nsDS5ReplicaCredentials $ nsDS5ReplicaBindMethod $ nsDS5ReplicaRoot $ nsDS5ReplicatedAttributeList $ nsDS5ReplicatedAttributeListTotal $ nsDS5ReplicaUpdateSchedule $ nsds5BeginReplicaRefresh $ description $ nsds50ruv $ nsruvReplicaLastModified $ nsds5ReplicaTimeout $ nsds5replicaChangesSentSinceStartup $ nsds5replicaLastUpdateEnd $ nsds5replicaLastUpdateStart $ nsds5replicaLastUpdateStatus $ nsds5replicaUpdateInProgress $ nsds5replicaLastInitEnd $ nsds5ReplicaEnabled $ nsds5replicaLastInitStart $ nsds5replicaLastInitStatus $ nsds5debugreplicatimeout $ nsds5replicaBusyWaitTime $ nsds5ReplicaStripAttrs $ nsds5replicaSessionPauseTime $ nsds5ReplicaProtocolTimeout ) X-ORIGIN 'Netscape Directory Server' )
objectClasses: ( 2.16.840.1.113730.3.2.39 NAME 'nsslapdConfig' DESC 'Netscape defined objectclass' SUP top MAY ( cn ) X-ORIGIN 'Netscape Directory Server' )
objectClasses: ( 2.16.840.1.113730.3.2.317 NAME 'nsSaslMapping' DESC 'Netscape defined objectclass' SUP top MUST ( cn $ nsSaslMapRegexString $ nsSaslMapBaseDNTemplate $ nsSaslMapFilterTemplate ) MAY ( nsSaslMapPriority ) X-ORIGIN 'Netscape Directory Server' )
diff --git a/ldap/servers/plugins/replication/repl5_init.c b/ldap/servers/plugins/replication/repl5_init.c
index ee923c9..56f01a1 100644
--- a/ldap/servers/plugins/replication/repl5_init.c
+++ b/ldap/servers/plugins/replication/repl5_init.c
@@ -644,7 +644,100 @@ check_for_ldif_dump(Slapi_PBlock *pb)
}
return return_value;
}
-
+/*
+ * If the entries do not exist, create the schema replication policy entries.
+ * Returns 0 on success.
+ */
+static int
+create_repl_schema_policy()
+{
+ /* DN part of this entry_string: no need to be optimized. */
+ char entry_string[1024];
+ Slapi_PBlock *pb;
+ Slapi_Entry *e ;
+ int return_value;
+ char *repl_schema_top, *repl_schema_supplier, *repl_schema_consumer;
+ char *default_supplier_policy = NULL;
+ char *default_consumer_policy = NULL;
+ int rc = 0;
+
+ slapi_schema_get_repl_entries(&repl_schema_top, &repl_schema_supplier, &repl_schema_consumer, &default_supplier_policy, &default_consumer_policy);
+
+ /* Create cn=replSchema,cn=config */
+ PR_snprintf(entry_string, sizeof(entry_string), "dn: %s\nobjectclass: top\nobjectclass: nsSchemaPolicy\ncn: replSchema\n", repl_schema_top);
+ e = slapi_str2entry(entry_string, 0);
+ pb = slapi_pblock_new();
+ slapi_add_entry_internal_set_pb(pb, e, NULL, /* controls */
+ repl_get_plugin_identity(PLUGIN_MULTIMASTER_REPLICATION), 0 /* flags */);
+ slapi_add_internal_pb(pb);
+ slapi_pblock_get(pb, SLAPI_PLUGIN_INTOP_RESULT, &return_value);
+ if (return_value != LDAP_SUCCESS && return_value != LDAP_ALREADY_EXISTS) {
+ slapi_log_error(SLAPI_LOG_FATAL, repl_plugin_name, "Warning: unable to "
+ "create configuration entry %s: %s\n", repl_schema_top,
+ ldap_err2string(return_value));
+ rc = -1;
+ slapi_entry_free (e); /* The entry was not consumed */
+ goto done;
+ }
+ slapi_pblock_destroy(pb);
+
+ /* Create cn=supplierUpdatePolicy,cn=replSchema,cn=config */
+ PR_snprintf(entry_string, sizeof(entry_string), "dn: %s\nobjectclass: top\nobjectclass: nsSchemaPolicy\ncn: supplierUpdatePolicy\n%s",
+ repl_schema_supplier,
+ default_supplier_policy ? default_supplier_policy : "");
+ e = slapi_str2entry(entry_string, 0);
+ pb = slapi_pblock_new();
+ slapi_add_entry_internal_set_pb(pb, e, NULL, /* controls */
+ repl_get_plugin_identity(PLUGIN_MULTIMASTER_REPLICATION), 0 /* flags */);
+ slapi_add_internal_pb(pb);
+ slapi_pblock_get(pb, SLAPI_PLUGIN_INTOP_RESULT, &return_value);
+ if (return_value != LDAP_SUCCESS && return_value != LDAP_ALREADY_EXISTS) {
+ slapi_log_error(SLAPI_LOG_FATAL, repl_plugin_name, "Warning: unable to "
+ "create configuration entry %s: %s\n", repl_schema_supplier,
+ ldap_err2string(return_value));
+ rc = -1;
+ slapi_entry_free(e); /* The entry was not consumed */
+ goto done;
+ }
+ slapi_pblock_destroy(pb);
+
+ /* Create cn=consumerUpdatePolicy,cn=replSchema,cn=config */
+ PR_snprintf(entry_string, sizeof(entry_string), "dn: %s\nobjectclass: top\nobjectclass: nsSchemaPolicy\ncn: consumerUpdatePolicy\n%s",
+ repl_schema_consumer,
+ default_consumer_policy ? default_consumer_policy : "");
+ e = slapi_str2entry(entry_string, 0);
+ pb = slapi_pblock_new();
+ slapi_add_entry_internal_set_pb(pb, e, NULL, /* controls */
+ repl_get_plugin_identity(PLUGIN_MULTIMASTER_REPLICATION), 0 /* flags */);
+ slapi_add_internal_pb(pb);
+ slapi_pblock_get(pb, SLAPI_PLUGIN_INTOP_RESULT, &return_value);
+ if (return_value != LDAP_SUCCESS && return_value != LDAP_ALREADY_EXISTS) {
+ slapi_log_error(SLAPI_LOG_FATAL, repl_plugin_name, "Warning: unable to "
+ "create configuration entry %s: %s\n", repl_schema_consumer,
+ ldap_err2string(return_value));
+ rc = -1;
+ slapi_entry_free(e); /* The entry was not consumed */
+ goto done;
+ }
+ slapi_pblock_destroy(pb);
+ pb = NULL;
+
+
+
+ /* Load the policies of the schema replication */
+ if (slapi_schema_load_repl_policies()) {
+ slapi_log_error(SLAPI_LOG_FATAL, repl_plugin_name, "Warning: unable to "
+ "load the schema replication policies\n");
+ rc = -1;
+ goto done;
+ }
+done:
+ if (pb) {
+ slapi_pblock_destroy(pb);
+ pb = NULL;
+ }
+ return rc;
+}

static PRBool is_ldif_dump = PR_FALSE;

@@ -715,6 +808,9 @@ multimaster_start( Slapi_PBlock *pb )
if (rc != 0)
goto out;
}
+ rc = create_repl_schema_policy();
+ if (rc != 0)
+ goto out;

/* check if the replica's data was reloaded offline and we need
to reinitialize replica's changelog. This should be done
diff --git a/ldap/servers/slapd/schema.c b/ldap/servers/slapd/schema.c
index d7eed74..bd0e006 100644
--- a/ldap/servers/slapd/schema.c
+++ b/ldap/servers/slapd/schema.c
@@ -86,6 +86,55 @@ static char *schema_user_defined_origin[] = {
NULL
};

+/* The policies for the replication of the schema are
+ * - base policy
+ * - extended policies
+ * Those policies are enforced when the server is acting as a supplier and
+ * when it is acting as a consumer
+ *
+ * Base policy:
+ * Supplier: before pushing the schema, the supplier checks that each objectclass/attribute of
+ * the consumer schema is a subset of the objectclass/attribute of the supplier schema
+ * Consumer: before accepting a schema (from replication), the consumer checks that
+ * each objectclass/attribute of the consumer schema is a subset of the objectclass/attribute
+ * of the supplier schema
+ * Extended policies:
+ * They are stored in repl_schema_policy_t and specify an "action" to be taken
+ * for specific objectclasses/attributes.
+ * Supplier: extended policies are stored in entry "cn=supplierUpdatePolicy,cn=replSchema,cn=config"
+ * and loaded into the static variable supplier_policy.
+ * Before pushing the schema, for each objectclass/attribute defined in supplier_policy:
+ * if its "action" is REPL_SCHEMA_UPDATE_ACCEPT_VALUE, it is not checked that the
+ * attribute/objectclass of the consumer is a subset of the attribute/objectclass
+ * of the supplier schema.
+ *
+ * if its "action" is REPL_SCHEMA_UPDATE_REJECT_VALUE and the consumer schema contains
+ * attribute/objectclass, then schema is not pushed
+ *
+ * Consumer: extended policies are stored in entry "cn=consumerUpdatePolicy,cn=replSchema,cn=config"
+ * and loaded into the static variable consumer_policy.
+ * before accepting a schema (from replication), for each objectclass/attribute defined in
+ * consumer_policy:
+ * if its "action" is REPL_SCHEMA_UPDATE_ACCEPT_VALUE, it is not checked that the
+ * attribute/objectclass of the consumer is a subset of the attribute/objectclass
+ * of the supplier schema.
+ *
+ * if its "action" is REPL_SCHEMA_UPDATE_REJECT_VALUE and the consumer schema contains
+ * attribute/objectclass, then schema is not accepted
+ *
+ */
+
+typedef struct schema_item {
+ int action; /* REPL_SCHEMA_UPDATE_ACCEPT_VALUE or REPL_SCHEMA_UPDATE_REJECT_VALUE */
+ char *name_or_oid;
+ struct schema_item *next;
+} schema_item_t;
+
+typedef struct repl_schema_policy {
+ schema_item_t *objectclasses;
+ schema_item_t *attributes;
+} repl_schema_policy_t;
+
/*
* pschemadse is based on the general implementation in dse
*/
@@ -145,7 +194,7 @@ static void schema_create_errormsg( char *errorbuf, size_t errorbufsize,
#else
;
