Redis synchronization can be problematic when the dataset to transfer is large. While the master is sending the dataset to a slave, any writes made on the master in the meantime are buffered in RAM; once the slave has loaded the dataset, those buffered writes are streamed to it.
If the output buffer that the master keeps for the slave is too small to hold the writes accumulated during the transfer, the master closes the connection and the synchronization restarts from scratch. This can repeat indefinitely, forming an infinite loop.
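As a back-of-the-envelope check (the write rate and sync duration below are hypothetical figures, not measurements from this incident), the slave output buffer must hold at least the master's write rate multiplied by the duration of the full sync:

```shell
# Hypothetical figures: ~5 MB/s of writes on the master, ~120 s full sync.
WRITE_RATE_MB_PER_S=5
SYNC_DURATION_S=120
REQUIRED_MB=$((WRITE_RATE_MB_PER_S * SYNC_DURATION_S))
echo "slave output buffer should hold at least ${REQUIRED_MB} MB"  # 600 MB
```

If the configured hard limit is below this figure, the loop described above is almost guaranteed.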
In the master logs, we can see something like this:
[4394] 06 Jan 23:42:48.977 # Client addr=X.X.X.X:41702 fd=22 name= age=1287 idle=463 flags=S db=0 sub=0 psub=0 multi=-1 qbuf=0 qbuf-free=0 obl=0 oll=19354 omem=536875344 events=rw cmd=sync scheduled to be closed ASAP for overcoming of output buffer limits.
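The omem field in that line is the size in bytes of the output buffer for the slave connection (flags=S, cmd=sync). A quick way to pull it out of the log line (pasted into a variable here for illustration):

```shell
# The log line from the master, as a shell variable.
LOG='[4394] 06 Jan 23:42:48.977 # Client addr=X.X.X.X:41702 fd=22 name= age=1287 idle=463 flags=S db=0 sub=0 psub=0 multi=-1 qbuf=0 qbuf-free=0 obl=0 oll=19354 omem=536875344 events=rw cmd=sync scheduled to be closed ASAP for overcoming of output buffer limits.'
# Extract omem (output buffer size in bytes) and convert to MB.
OMEM=$(echo "$LOG" | grep -o 'omem=[0-9]*' | cut -d= -f2)
echo "omem = ${OMEM} bytes (~$((OMEM / 1024 / 1024)) MB)"  # ~512 MB
```

The same fields can be inspected live on the master with CLIENT LIST, looking for the connection with flags=S.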
We are going to increase the buffer limits, keeping in mind that the master will need extra RAM to hold those buffered changes.
All of the following commands are executed on the master.
We check the current values so that we can revert them once synchronization has finished:

CONFIG GET client-output-buffer-limit
1) "client-output-buffer-limit"
2) "normal 0 0 0 slave 268435456 67108864 60 pubsub 33554432 8388608 60"
We change the values, raising both the hard and soft limits for slave clients to 512 MB (536870912 bytes):

CONFIG SET client-output-buffer-limit "normal 0 0 0 slave 536870912 536870912 0 pubsub 33554432 8388608 60"
OK
We make sure that the change has been applied:

CONFIG GET client-output-buffer-limit
1) "client-output-buffer-limit"
2) "normal 0 0 0 slave 536870912 536870912 0 pubsub 33554432 8388608 60"
If the slave is still not synchronizing, we keep increasing the buffer on the master until synchronization completes successfully.
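Once replication is stable, the limits can be lowered back to the values recorded earlier (the string below is the previous configuration we saw; substitute your own if it differed). Note that CONFIG SET is not persistent: to survive a restart, the setting also has to be written to redis.conf.

```shell
# Restore the previous slave output buffer limits (values taken from the
# earlier CONFIG GET on this master).
redis-cli config set client-output-buffer-limit \
    "normal 0 0 0 slave 268435456 67108864 60 pubsub 33554432 8388608 60"
```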