<div dir="ltr">It seems that one of your assumptions is that a cluster would operate across data centers. This is not recommended for RabbitMQ and we don't use it that way - we use shovels between clusters since we, like you, can deal with eventual consistency.<div>
<br></div><div>Our clusters are similar to what you describe. For us, (almost) all queues are durable and mirrored because we can't tolerate message loss. We have seen significant sensitivity to partitioning in Windows-based clusters under very heavy load; we do not see this on Linux, which also sustains at least twice the load.</div>
<div><br></div><div>There is an F5 in front of the cluster, but rather than load balancing it simply acts as a persistent router. We've found that by directing all traffic to one node and letting it replicate to the other nodes, we don't have to deal with issues if a network partition occurs, and they *do* occur (earlier this week one of the virtual NICs stopped on one of the Rabbit nodes, for example). The F5 will detect this very quickly and divert traffic to another node if necessary. (Note to others: we've found that this arrangement scales significantly better than round-robin load balancing for mirrored durable queues.)</div>
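<div><br></div><div>(For anyone without an F5 who wants to reproduce the "persistent router" idea: the sketch below shows an analogous active/backup setup in HAProxy. This is not our actual configuration and the hostnames are invented; it only illustrates "all traffic to one node, fail over on health-check failure".)</div>
<pre>
```text
# Analogous HAProxy sketch, not our F5 config; hostnames are placeholders.
listen rabbitmq
    bind *:5672
    mode tcp
    # All traffic goes to rabbit1; the backups only receive traffic
    # when rabbit1 fails its TCP health check.
    server rabbit1 rabbit1.example.com:5672 check inter 2s fall 2
    server rabbit2 rabbit2.example.com:5672 check inter 2s fall 2 backup
    server rabbit3 rabbit3.example.com:5672 check inter 2s fall 2 backup
```
</pre>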
<div><br></div><div>We have clusters in each data center; for clusters that need to replicate to a different data center there is the shovel. However, there is a chance of message loss if the application sends the message to the cluster in one DC and that entire DC is hit by a meteor before the message can be delivered to the other data center. For most scenarios, however, the messages will be eventually delivered when the DC comes back up.</div>
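<div><br></div><div>(For illustration, a static shovel along these lines, in the Erlang-terms rabbitmq.config style, does the cross-DC replication; the broker URIs and queue name below are placeholders, not our real ones. The on_confirm ack mode is what gives the eventual delivery described above: a mid-transfer failure causes redelivery rather than loss.)</div>
<pre>
```erlang
%% Sketch of a static shovel in rabbitmq.config; URIs and names are invented.
[
  {rabbitmq_shovel,
   [{shovels,
     [{dc1_to_dc2,
       [{sources,      [{broker, "amqp://user:pass@rabbit.dc1.example"}]},
        {destinations, [{broker, "amqp://user:pass@rabbit.dc2.example"}]},
        {queue, <<"replicated-events">>},
        %% on_confirm: ack the source only after the destination confirms,
        %% so a failure mid-transfer causes redelivery, not message loss.
        {ack_mode, on_confirm}]}]}]}
].
```
</pre>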
<div><br></div><div>Hope that helps a little...</div><div><br></div><div>Cheers,</div><div><br></div><div>-ronc</div></div><div class="gmail_extra"><br><br><div class="gmail_quote">On Thu, May 22, 2014 at 12:04 PM, Steffen Daniel Jensen <span dir="ltr"><<a href="mailto:steffen.daniel.jensen@gmail.com" target="_blank">steffen.daniel.jensen@gmail.com</a>></span> wrote:<br>
<blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex"><div dir="ltr">Hi Simon,<div><br></div><div>Thank you for your reply.<br><div class="gmail_extra"><div class="gmail_quote">
<div class=""><blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left-width:1px;border-left-color:rgb(204,204,204);border-left-style:solid;padding-left:1ex">
<br>
<blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left-width:1px;border-left-color:rgb(204,204,204);border-left-style:solid;padding-left:1ex">
We have two data centers connected closely by LAN.<br>
We are interested in a *reliable cluster* setup. It must be a cluster<br>
because we want clients to be able to connect to each node<br>
transparently. Federation is not an option.<br>
</blockquote>
<br>
I hope you realise that you are asking for a lot here! You should read up on the CAP theorem if you have not already done so.<br></blockquote><div><br></div></div><div>Yes, I know. But I am not asking for an unreasonable amount, IMO :-)</div>
<div>I am aware of the CAP theorem, but I don't see how my request violates it. I am willing to live with eventual consistency.</div><div class=""><div><br></div><blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left-width:1px;border-left-color:rgb(204,204,204);border-left-style:solid;padding-left:1ex">
<blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left-width:1px;border-left-color:rgb(204,204,204);border-left-style:solid;padding-left:1ex">1. It happens that the firewall/switch is restarted, and maybe a few<br>
ping messages are lost.<br>
2. The setup should survive data center crash<br>
3. All queues are durable and mirrored, all messages are persisted, all<br>
publishes are confirmed<br>
There are 3 cluster-recovery settings<br>
a) ignore: A cross-data-center network breakdown would cause message<br>
loss on the node that is restarted in order to rejoin.<br>
b) pause_minority: If we choose the same number of nodes in each data<br>
center, the whole cluster will pause. If we don't, only the data center<br>
with the most nodes can survive.<br>
c) auto_heal: If the cluster detects a network partition, there is a<br>
potential for message loss when rejoining.<br>
[I would really like a resync-setting similar to the one described below]<br>
Question 1: Is it even possible to have a fully reliable setup in such a<br>
setting?<br>
</blockquote>
<br>
Depends how you define "fully reliable". If you want Consistency (i.e. mirrored queues), Availability (i.e. neither data centre pauses) and Partition tolerance (no loss of data from either side if the network goes down between them) then I'm afraid you can't.<br>
</blockquote><div><br></div></div><div>What I mean when I say "reliable" is: All subscribers at the time of publish will eventually get the message.</div><div><br></div><div>That should be possible, assuming that all live inconsistent nodes will eventually rejoin (without dumping messages). I know this is not the case in rabbitmq, but it is definitely theoretically possible. I guess this is what is usually referred to as eventual consistency.</div>
<div class="">
<div> </div><blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left-width:1px;border-left-color:rgb(204,204,204);border-left-style:solid;padding-left:1ex"><blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left-width:1px;border-left-color:rgb(204,204,204);border-left-style:solid;padding-left:1ex">
In reality we probably won't have actual network partitions, and it will<br>
most probably only be a very short network downtime.<br>
Question 2: Is it possible to adjust how long it takes rabbitmq to<br>
decide "node down"?<br>
</blockquote>
<br>
Yes, see <a href="http://www.rabbitmq.com/nettick.html" target="_blank">http://www.rabbitmq.com/nettick.html</a></blockquote><div><br></div></div><div>Thank you! (!)</div><div>I have been looking for that one. But I am surprised to see that it is actually 60 seconds. Then I really don't understand how I could have seen so many clusters ending up partitioned.</div>
<div><br></div><div>Do you know what the consequence of doubling it might be?</div><div><br></div><div>RabbitMQ writes:</div><div><span style="color:rgb(85,85,85);font-family:Verdana,sans-serif;font-size:13px;line-height:18px">Increasing the </span><span style="color:rgb(51,51,51);font-family:'Courier New',Courier,monospace;font-size:13px;white-space:nowrap;line-height:18px">net_ticktime</span><span style="color:rgb(85,85,85);font-family:Verdana,sans-serif;font-size:13px;line-height:18px"> across all nodes in a cluster will make the cluster more resilient to short network outages, but it will take longer for remaining nodes to detect crashed nodes.</span></div>
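<div><br></div><div>(For concreteness, doubling it would look like the sketch below in rabbitmq.config; as I understand the docs, the value must be identical on every node in the cluster.)</div>
<pre>
```erlang
%% rabbitmq.config sketch: the kernel net_ticktime must match on all nodes.
[
  {kernel, [{net_ticktime, 120}]}  %% default is 60 seconds
].
```
</pre>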
<div><br></div><div>More specifically, I wonder what happens while a node is actually cut off in its own network, but before it finds out. In our setup all publishes go to all-HA subscriber queues, with publisher confirms. So I would expect a distributed agreement that the message has been persisted. Will a publisher confirm then block until the node decides that the other nodes are down, and then succeed?</div>
<div class="">
<div><br></div><blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left-width:1px;border-left-color:rgb(204,204,204);border-left-style:solid;padding-left:1ex"><blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left-width:1px;border-left-color:rgb(204,204,204);border-left-style:solid;padding-left:1ex">
It is much better to have a halted rabbitmq for some seconds than to<br>
have message loss.<br>
Question 3: Assume that we are using the ignore setting, and that we<br>
have only two nodes in the cluster. Would the following be a full<br>
recovery with zero message loss?<br>
0. Decide which node survives, Ns, and which should be restarted, Nr.<br>
1. Refuse all connections to Nr except from a special recovery<br>
application. (One could change the IP so that running services can't<br>
connect, or similar.)<br>
2. Consume and republish all messages from Nr to Ns.<br>
3. Restart Nr<br>
Then the cluster should be up-and-running again.<br>
</blockquote>
<br>
That sounds like it would work. You're losing some availability and consistency, and your message ordering will change. You have a pretty good chance of duplicating lots of messages too (any that were in the queues when the partition happened). Assuming you're happy with that it sounds reasonable.<br>
</blockquote><div><br></div></div><div>The duplication is OK -- and assuming that the queues are usually empty, it won't really happen much, I think. </div><div>But -- I am sure that RabbitMQ does not guarantee exactly-once delivery anyway.</div>
<div>For that reason, we will build in idempotency for critical messages.</div><div><br></div><div>Ordering can always get scrambled when consumers nack messages, so we are not assuming ordering either.</div><div><br>
</div>
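<div><br></div><div>(A minimal sketch of what I mean by consumer-side idempotency, assuming publishers stamp each message with a unique message_id property; in a real deployment the seen-set would live in shared storage rather than in memory.)</div>
<pre>
```python
from collections import OrderedDict

class IdempotentHandler:
    """Wraps a handler so each message_id causes side effects at most once.

    Sketch only: the seen-set is a bounded in-memory structure here; a real
    consumer would keep it in shared storage (database, Redis, ...) so that
    duplicates are detected across restarts and across consumers.
    """

    def __init__(self, handler, max_remembered=100_000):
        self._handler = handler
        self._seen = OrderedDict()  # message_id -> None, insertion-ordered
        self._max = max_remembered

    def handle(self, message_id, body):
        """Return True if processed, False if recognised as a duplicate."""
        if message_id in self._seen:
            return False  # duplicate redelivery: safe to ack, skip the work
        self._handler(body)
        self._seen[message_id] = None
        if len(self._seen) > self._max:
            self._seen.popitem(last=False)  # evict the oldest remembered id
        return True

# Example: a redelivered message triggers the side effect only once.
processed = []
dedup = IdempotentHandler(processed.append)
dedup.handle("msg-42", "critical payload")
dedup.handle("msg-42", "critical payload")  # duplicate delivery
print(processed)  # -> ['critical payload']
```
</pre>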
<div><br></div><div>About the CAP theorem in relation to RabbitMQ:</div><div>Reliable messaging (zero message loss) is often preferred in SOA settings. I wonder why vmware/pivotal/... chose not to prioritize this guarantee. Federation aims at it, but its synchronization is a little too weak. It would be preferable if it could communicate consumption of messages; then one could mirror queues between upstream/downstream exchanges and gain even more "availability". One would give up consistency a little further, but it would make the setup above possible, I think. I know it definitely doesn't come out-of-the-box, and it is not part of AMQP, AFAIK, but it seems possible.<br>
</div><div><br></div><div>Thank you, Simon!</div><span class="HOEnZb"><font color="#888888"><div><br></div><div>-- S</div></font></span></div></div></div></div>
<br>_______________________________________________<br>
rabbitmq-discuss mailing list<br>
<a href="mailto:rabbitmq-discuss@lists.rabbitmq.com">rabbitmq-discuss@lists.rabbitmq.com</a><br>
<a href="https://lists.rabbitmq.com/cgi-bin/mailman/listinfo/rabbitmq-discuss" target="_blank">https://lists.rabbitmq.com/cgi-bin/mailman/listinfo/rabbitmq-discuss</a><br>
<br></blockquote></div><br></div>