Multi-Master LDAP Replication + Keepalived

🎃 kr0m

Previously, we explained how to set up an LDAP server, but that scenario provided neither data redundancy nor high availability. To solve both problems, we will configure LDAP in multi-master mode and serve it through a VIP managed by keepalived. This way, data is replicated automatically from one LDAP server to the other, regardless of which one receives the change.

When the first server goes down, the VIP migrates automatically to the other one without users noticing anything. The only limitation of this setup is that the two servers must be on the same network segment, since that is a requirement of VRRP and therefore of keepalived.


ORIGINAL SERVER

We generate the password for the replication user:

slappasswd
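
The hash that slappasswd prints is what replaces the {SSHA}XXXXXXXXXXXXXXXX placeholder in the LDIF below. If we prefer to skip the interactive prompt, the password can also be passed with -s (THE_SYNC_PASSWORD is just a placeholder here, and note that it ends up in the shell history):

slappasswd -s THE_SYNC_PASSWORD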

We insert the user’s data:

vi sync_user.ldif

dn: cn=syncuser,dc=alfaexploit,dc=com
changetype: add
objectClass: top
objectClass: person
cn: syncuser
sn: syncuser
description: LDAP synchronization user
userPassword: {SSHA}XXXXXXXXXXXXXXXX

ldapadd -x -D "cn=Manager,dc=alfaexploit,dc=com" -W -f sync_user.ldif
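
To confirm that the entry exists and the password works, we can bind as syncuser itself (replace THE_SYNC_PASSWORD with the password chosen above; passing it with -w on the command line is only meant for a quick test):

ldapsearch -x -D "cn=syncuser,dc=alfaexploit,dc=com" -w THE_SYNC_PASSWORD -b "dc=alfaexploit,dc=com" "(cn=syncuser)" dn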

We configure LDAP to replicate data with the indicated user:

vi /etc/openldap/slapd.conf

moduleload syncprov

# Give syncuser DN limitless searches
limits dn.exact="cn=syncuser,dc=alfaexploit,dc=com" time.soft=unlimited time.hard=unlimited size.soft=unlimited size.hard=unlimited
access to * by dn="cn=syncuser,dc=alfaexploit,dc=com" read by * read

# LDAP Sync - Master
serverID 1
overlay syncprov
syncprov-checkpoint 100 10
syncprov-sessionlog 100

# LDAP Sync - Slave
syncrepl rid=001 provider=ldap://192.168.40.141 bindmethod=simple binddn="cn=syncuser,dc=alfaexploit,dc=com" credentials=XXXXXX searchbase="dc=alfaexploit,dc=com" filter="(objectClass=*)" attrs="*" schemachecking=on type=refreshAndPersist interval=00:00:00:30 retry="60 +"
mirrormode on
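
Before restarting, we can validate the syntax of the modified configuration with slaptest, which refuses to pass a file with errors:

slaptest -f /etc/openldap/slapd.conf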

We restart the service:

/etc/init.d/slapd restart


NEW SERVER

First, we set it up with an identical configuration to server 1 without replication and check that everything works correctly.

We configure LDAP to replicate data with the indicated user:

vi /etc/openldap/slapd.conf

moduleload syncprov

# Give syncuser DN limitless searches
limits dn.exact="cn=syncuser,dc=alfaexploit,dc=com" time.soft=unlimited time.hard=unlimited size.soft=unlimited size.hard=unlimited
access to * by dn="cn=syncuser,dc=alfaexploit,dc=com" read by * read

# LDAP Sync - Master
serverID 2
overlay syncprov
syncprov-checkpoint 100 10
syncprov-sessionlog 100

# LDAP Sync - Slave
syncrepl rid=001 provider=ldap://192.168.40.189 bindmethod=simple binddn="cn=syncuser,dc=alfaexploit,dc=com" credentials=XXXXXX searchbase="dc=alfaexploit,dc=com" filter="(objectClass=*)" attrs="*" schemachecking=on type=refreshAndPersist interval=00:00:00:30 retry="60 +"
mirrormode on

We delete the DB to resynchronize from scratch:

/etc/init.d/slapd stop
cd /var/lib/openldap-data/
cp DB_CONFIG /root/
rm *
cp /root/DB_CONFIG ./
chown ldap:ldap DB_CONFIG
/etc/init.d/slapd start
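
If we want to watch the initial synchronization as it happens, slapd's sync loglevel can be enabled temporarily in /etc/openldap/slapd.conf and the syslog output followed (the exact log file depends on the local syslog configuration; /var/log/messages is just an example):

loglevel stats sync

tail -f /var/log/messages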

The replication configuration parameters are:

 syncrepl rid=<replica ID>
                provider=ldap[s]://<hostname>[:port]
                [type=refreshOnly|refreshAndPersist]
                [interval=dd:hh:mm:ss]
                [retry=[<retry interval> <# of retries>]+]
                searchbase=<base DN>
                [filter=<filter str>]
                [scope=sub|one|base]
                [attrs=<attr list>]
                [attrsonly]
                [sizelimit=<limit>]
                [timelimit=<limit>]
                [schemachecking=on|off]
                [bindmethod=simple|sasl]
                [binddn=<DN>]
                [saslmech=<mech>]
                [authcid=<identity>]
                [authzid=<identity>]
                [credentials=<passwd>]
                [realm=<realm>]
                [secprops=<properties>]
                [starttls=yes|critical]
                [tls_cert=<file>]
                [tls_key=<file>]
                [tls_cacert=<file>]
                [tls_cacertdir=<path>]
                [tls_reqcert=never|allow|try|demand]
                [tls_ciphersuite=<ciphers>]
                [tls_crlcheck=none|peer|all]
                [logbase=<base DN>]
                [logfilter=<filter str>]
                [syncdata=default|accesslog|changelog]
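
For reference, in the configuration above retry="60 +" tells the consumer to retry every 60 seconds indefinitely if the connection to the provider is lost, and interval only applies to type=refreshOnly (with refreshAndPersist the connection stays open and changes are pushed as they arrive). A more gradual retry schedule would look like this:

retry="30 10 300 +"

With that value the consumer retries every 30 seconds up to 10 times and then every 300 seconds indefinitely.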

We check that the content of the tree has been replicated:

ssh SERVER1 (192.168.40.189)
shelldap --server 192.168.40.189 --binddn cn=Manager,dc=alfaexploit,dc=com
ls

ssh SERVER2 (192.168.40.141)
shelldap --server 192.168.40.141 --binddn cn=Manager,dc=alfaexploit,dc=com
ls
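
Another quick way to verify that both masters have converged, without browsing the tree, is to compare the contextCSN operational attribute of the base entry on each server; the values should match once replication has caught up:

ldapsearch -x -H ldap://192.168.40.189 -b "dc=alfaexploit,dc=com" -s base contextCSN
ldapsearch -x -H ldap://192.168.40.141 -b "dc=alfaexploit,dc=com" -s base contextCSN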

With this, we would already have replication but not high availability.


KEEPALIVED

We install and configure keepalived:

emerge -av keepalived

On the first server:

vi /etc/keepalived/keepalived.conf

! Configuration File for keepalived

global_defs {
   notification_email {
     sys@alfaexploit.com
   }
   notification_email_from ldap00@rack
   smtp_server localhost
   smtp_connect_timeout 30
   router_id LVS_MAIN_LDAP
}

vrrp_script chk_ldap {
    script "killall -0 slapd"
    interval 2 # check every 2 seconds
    weight 2 # adds 2 priority points if OK
}

vrrp_instance VI_LDAP_1 {
    state MASTER
    interface eth0
    virtual_router_id 10
    priority 100
    advert_int 1
    authentication {
        auth_type PASS
        auth_pass XXXXXXXXX
    }
    virtual_ipaddress {
        AAA.BBB.CCC.DDD/MASK dev INTERFACE
    }

    track_script {
        chk_ldap
    }
}

On the second:

vi /etc/keepalived/keepalived.conf

! Configuration File for keepalived

global_defs {
   notification_email {
     sys@alfaexploit.com
   }
   notification_email_from ldap01@rack
   smtp_server localhost
   smtp_connect_timeout 30
   router_id LVS_MAIN_LDAP
}

vrrp_script chk_ldap {
    script "killall -0 slapd"
    interval 2
    weight 2
}

vrrp_instance VI_LDAP_1 {
    state BACKUP
    interface eth0
    virtual_router_id 10
    priority 99
    advert_int 1
    authentication {
        auth_type PASS
        auth_pass XXXXXXXXX
    }

    virtual_ipaddress {
        AAA.BBB.CCC.DDD/MASK dev INTERFACE
    }

    track_script {
        chk_ldap
    }
}
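
With these values, while slapd is running on both nodes the effective priorities are 100 + 2 = 102 on the first server and 99 + 2 = 101 on the second, so the first one keeps the VIP. If slapd dies on the first server, chk_ldap fails, its priority falls back to 100, which is now lower than 101, and the VIP moves to the second node; once slapd is back, the VIP returns.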

WARNING: For all of this to work automatically, a few details must be taken into account. slapd should be configured to listen on its final IPs, including the VIP (by default it binds to all available interfaces). Since the backup node does not actually hold the VIP, allowing slapd to bind to an address the machine does not own requires a couple of kernel tweaks:

vi /etc/sysctl.conf
net.ipv4.ip_forward = 1
net.ipv4.conf.default.rp_filter = 0
net.ipv4.conf.all.rp_filter = 0
net.ipv4.conf.all.accept_source_route = 0
net.ipv4.conf.default.accept_source_route = 0
net.ipv4.conf.all.accept_redirects = 0
net.ipv4.conf.default.accept_redirects = 0
net.ipv4.conf.all.secure_redirects = 0
net.ipv4.conf.default.secure_redirects = 0
net.ipv4.ip_nonlocal_bind = 1

sysctl -p
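
On Gentoo the listener URLs are normally passed to slapd through the OPTS variable in /etc/conf.d/slapd. Assuming that layout, binding explicitly to the VIP and to the node's own IP would look something like this (AAA.BBB.CCC.DDD is the VIP, 192.168.40.189 the first node's address); this is exactly the bind that ip_nonlocal_bind makes possible on the node that does not currently hold the VIP:

vi /etc/conf.d/slapd

OPTS="-h 'ldap://AAA.BBB.CCC.DDD ldap://192.168.40.189 ldapi://%2fvar%2frun%2fopenldap%2fslapd.sock'"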

We start keepalived and add it to the default runlevel:

/etc/init.d/keepalived start
rc-update add keepalived default
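
We can check which node currently holds the VIP and that failover actually works:

ip addr show eth0

The VIP should be listed as an additional address on the active node. Stopping slapd on that node (/etc/init.d/slapd stop) should make chk_ldap fail and, a couple of seconds later, the VIP should show up on the other server.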

Services that use LDAP obviously must be configured to connect to the VIP.
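
For clients that read /etc/openldap/ldap.conf, for example, this is just a matter of pointing URI at the VIP (AAA.BBB.CCC.DDD being the address defined in keepalived.conf):

vi /etc/openldap/ldap.conf

URI ldap://AAA.BBB.CCC.DDD
BASE dc=alfaexploit,dc=com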

NOTE: If you want to learn more about keepalived, you can read this previous article.

If one of the nodes fails, simply reconnecting it is not enough: you have to export the LDAP tree from the healthy master, import it on the damaged server, and let it synchronize again.

MASTER:

slapcat -v -l backup_openldap.ldif
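
The resulting LDIF must be copied to the failed node before importing it, for example with scp (adjust the destination host and path as needed):

scp backup_openldap.ldif root@192.168.40.141:/root/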

SLAVE:

/etc/init.d/slapd stop
cp -rp /var/lib/openldap-data/ /var/lib/openldap-data_ori
cd /var/lib/openldap-data/
rm *
cp /var/lib/openldap-data_ori/DB_CONFIG /var/lib/openldap-data/
slapadd -v -l /root/backup_openldap.ldif
chown ldap:ldap *
/etc/init.d/slapd start

If you liked the article, you can treat me to a RedBull here