[rabbitmq-discuss] Failover in Clustering mode

CharSyam charsyam at gmail.com
Mon Sep 3 14:56:38 BST 2012


Thanks, Francesco and Matthias.

Oh, I'm sorry for asking a stupid question.

Yes, I killed the disc node (node1), and then I tried to declare a queue with
the same name as the one that had lived on node1.

And Matthias, your answer is very clear.

I also understand now that clustering is different from queue mirroring; I had
thought clustering did something like mirroring.

So if I want to support failover, I have to run RabbitMQ with mirrored queues.
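
For reference, here is a rough sketch of what I think the mirrored-queue
declaration would look like from a client. This is only my assumption: it uses
the Python pika library and the x-ha-policy queue argument that 2.x-era brokers
accept for mirroring (as I understand it, newer releases configure mirroring
with broker-side policies, e.g. rabbitmqctl set_policy, instead). Please
correct me if this is wrong.

    import pika

    # Rough sketch, not verified: connect to a broker (hostname is a placeholder).
    connection = pika.BlockingConnection(
        pika.ConnectionParameters(host='localhost'))
    channel = connection.channel()

    # Declare a durable queue mirrored across all nodes in the cluster,
    # so a surviving node can take over if the queue's home node dies.
    channel.queue_declare(
        queue='task_queue',
        durable=True,
        arguments={'x-ha-policy': 'all'})

    connection.close()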

Thank you.

2012/8/31 Matthias Radestock <matthias at rabbitmq.com>

> On 29/08/12 12:42, CharSyam wrote:
>
>> I made a RabbitMQ cluster with 3 nodes.
>>
>> Originally, there were one disc node and two RAM nodes.
>>
>> It worked well.
>>
>> But when I killed the node1 process,
>>
>> the queue_declare command failed (the queue name is "task_queue"),
>>
>> and I couldn't declare it again until I recovered node1.
>>
>
> That is the expected behaviour when the queue is declared as 'durable'.
>
> While a queue's "home" node is down, clients connected to other nodes and
> attempting to re-declare the queue will get a 404-NOT_FOUND.
>
> If other nodes were able to re-declare such queues they would then
> conflict with the original queue when that gets recovered.
>
> Matthias.
>
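
To make sure I understood the 404 behaviour, here is a rough sketch of what I
believe a client connected to one of the surviving nodes would see while node1
is down. Again this assumes the pika client; 'node2' is just a placeholder
hostname, and the exact exception class may differ between pika versions.

    import pika

    # Rough sketch: connect to a node that is still up ('node2' is a placeholder).
    connection = pika.BlockingConnection(
        pika.ConnectionParameters(host='node2'))
    channel = connection.channel()

    try:
        # Re-declaring the durable queue whose home node is down is refused
        # by the broker with a 404 NOT_FOUND channel error.
        channel.queue_declare(queue='task_queue', durable=True)
    except pika.exceptions.ChannelClosed as err:
        print("queue_declare was refused: %s" % (err,))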