<div dir="ltr"><div><div><div><div><div>Hey all,<br><br>I'm reviving a really old question here, but I suspect the story has changed between 2.8.4 and 3.1.1.<br><br></div>See prior messages in this thread for context, but in a nutshell, we're looking for guidance on whether to start up our clustered brokers sequentially or in parallel.<br>
<br></div>I know that sequential startup is the ideal approach, but after an uncontrolled shutdown it doesn't reliably work: the first node may time out waiting for a later node.<br><br></div>When this happens, starting the nodes in parallel gets past the problem. However, my understanding is that parallel startup carries its own risks. (See my original first message in this thread.)<br>
<br></div>At the end of the day, we just need a set of scripts that will idempotently start/stop the cluster reliably. It's infeasible to expect an operator (i.e. not me) to assess the current cluster state and then guess which approach to take.<br>
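For concreteness, here's a minimal dry-run sketch of the kind of wrapper script I have in mind: try sequential startup first, and if a node times out waiting on its peers, fall back to launching the rest in parallel and letting mnesia sort out the order. The node names (mq1-mq3), ssh access, pid-file path, and timeout are all placeholders, not our real setup:<br>

```shell
#!/bin/sh
# Sketch only: mq1-mq3, ssh access, the pid-file path, and the 30s
# timeout are assumptions for illustration, not our actual config.
NODES="mq1 mq2 mq3"
WAIT_SECS=30
RUN="echo"   # dry-run: print the commands instead of executing them

start_node() {
    # rabbitmq-server -detached returns immediately; rabbitmqctl wait
    # blocks until the node has fully booted (or we give up via timeout).
    $RUN ssh "$1" rabbitmq-server -detached
    $RUN timeout "$WAIT_SECS" ssh "$1" rabbitmqctl wait /var/run/rabbitmq/pid
}

sequential_ok=true
for node in $NODES; do
    start_node "$node" || { sequential_ok=false; break; }
done

if [ "$sequential_ok" = false ]; then
    # Sequential start stalled: launch every node in parallel and let
    # them find each other.
    for node in $NODES; do
        $RUN ssh "$node" rabbitmq-server -detached &
    done
    wait
fi
```

Setting RUN="" would make it actually execute; the point is that the operator runs one idempotent script rather than assessing cluster state by hand.<br>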
<br></div>Has the guidance changed between 2.8.4 and 3.1.1? I know it's basically a mnesia issue - I just don't know what improvements have been made since 2.8.4.<br><br>Thanks,<br><br>Matt<br></div><div class="gmail_extra">
<br><br><div class="gmail_quote">On Thu, Jul 26, 2012 at 8:45 AM, Matt Pietrek <span dir="ltr"><<a href="mailto:mpietrek@skytap.com" target="_blank">mpietrek@skytap.com</a>></span> wrote:<br><blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex">
Francesco,<br><br>Thanks for the quick reply. A couple of replies/questions:<br><br>If I'm understanding what you're saying, we should be starting up our brokers sequentially. However, in my experience this hasn't worked. For instance, we've seen mq1 stall in its startup, waiting for mq3 to start. But mq3 can't start (per the sequential logic) until mq1 finishes starting up. Per advice I received from you previously (below), we've moved to async startup of the brokers:<br>
<br><a href="http://lists.rabbitmq.com/pipermail/rabbitmq-discuss/2012-June/020689.html" target="_blank">http://lists.rabbitmq.com/pipermail/rabbitmq-discuss/2012-June/020689.html</a><br><br><pre>><i> Question 2
</i>><i> ---------------
</i>><i> Related to the above scenario, is there any danger (after an unplanned
</i>><i> shutdown), in simply letting all the nodes start in parallel and
</i>><i> letting Mnesia's waiting sort out the order? It seems to work OK in my
</i>><i> limited testing so far, but I don't know if we're risking data loss.
</i>
It should be fine, but in general it's better to do cluster operations
sequentially and at one site. In this specific case it should be OK.<br><br></pre>As it stands now, we're in a catch-22: if we do sequential startup, we risk deadlock when the nodes start in the wrong order, but if we do async startup, we run into the problem described in this thread.<br>
<br>--------<div class="im"><br>> Uhm. It looks like mnesia is detecting a deadlock, and I'm not sure why. What<br>
> happens if you don't kill it? Does it terminate by itself, eventually?<br><br></div>I've let it wait for a good long time (30 minutes +) before killing it.<br><br>Thanks much for your help,<br><br>Matt<div class="HOEnZb">
<div class="h5"><br><br><div class="gmail_quote">
On Thu, Jul 26, 2012 at 2:40 AM, Francesco Mazzoli <span dir="ltr"><<a href="mailto:francesco@rabbitmq.com" target="_blank">francesco@rabbitmq.com</a>></span> wrote:<br><blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex">
Hi Matt,<br>
<br>
At Wed, 25 Jul 2012 11:48:56 -0700,<br>
<div>Matt Pietrek wrote:<br>
> We have a 3 node cluster (mq1, mq2, mq3) running 2.8.4 supporting a small<br>
> number of HA queues. During startup of the cluster, we start all nodes in<br>
> parallel.<br>
<br>
</div>This is not a good idea when dealing with clustering. RabbitMQ clustering is<br>
basically a thin layer over mnesia clustering, and we need to do some additional<br>
bookkeeping that is prone to race conditions (e.g. storing the online nodes at<br>
shutdown). We are putting effort into making this process more reliable on the<br>
rabbit side.<br>
<br>
For this reason you should always execute clustering operations sequentially.<br>
<div><br>
> Usually everything works fine. However, we've just recently seen one of the<br>
> nodes (mq3) won't start, i.e., the rabbitmqctl wait <pid> doesn't complete.<br>
><br>
> I can log in to the management UI on mq1 and mq2, so they're at least<br>
> minimally running.<br>
><br>
> Luckily, we've turned on verbose Mnesia logging. Here's what the failing node<br>
> (mq3) shows in the console spew:<br>
><br>
</div>> [...]<br>
<div>><br>
> The pattern of "Getting table rabbit_durable_exchange (disc_copies) from node<br>
> rabbit@mq1:" cycles between mq1 and mq2 repeatedly until I kill mq3.<br>
<br>
</div>Uhm. It looks like mnesia is detecting a deadlock, and I'm not sure why. What<br>
happens if you don't kill it? Does it terminate by itself, eventually?<br>
<div><br>
> What other sort of information can I provide or look for when this situation<br>
> repeats?<br>
<br>
</div>Well, the normal rabbit logs would help.<br>
<br>
--<br>
Francesco * Often in error, never in doubt<br>
</blockquote></div><br>
</div></div></blockquote></div><br></div>