Thanks, Francesco and Matthias.

Sorry for asking such a basic question.

Yes, I killed the disc node (node1) and then tried to declare a queue with the same name as the one that lived on node1.

Matthias, your answer is very clear. I now understand that clustering is not the same as mirroring; I had assumed clustering behaved like mirroring. So if I want to support failover, I have to run RabbitMQ with mirrored queues.
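For example, I think the declaration would end up looking roughly like this (just a sketch with the Python pika client; the host name is made up, and the x-ha-policy queue argument is how the 2.x docs describe mirroring, while newer releases use an ha-mode policy set with rabbitmqctl instead):

import pika

# Connect to any live node in the cluster ('node2' is just a placeholder).
connection = pika.BlockingConnection(pika.ConnectionParameters(host='node2'))
channel = connection.channel()

# Declare the queue as durable and request mirroring across all nodes.
# On RabbitMQ 2.x this is the x-ha-policy queue argument; on 3.0+ the
# argument is ignored and an "ha-mode: all" policy does the same job.
channel.queue_declare(
    queue='task_queue',
    durable=True,
    arguments={'x-ha-policy': 'all'},
)

connection.close()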
Thank you.

2012/8/31 Matthias Radestock <matthias@rabbitmq.com>:
> On 29/08/12 12:42, CharSyam wrote:
>> I made a RabbitMQ cluster with 3 nodes.
>>
>> Originally, there was 1 disk node and 2 ram nodes.
>>
>> It worked well, but when I killed the process of node1, the
>> queue_declare command failed (the queue name is "task_queue") and I
>> couldn't declare it again until I recovered node1.
>
> That is the expected behaviour when the queue is declared as 'durable'.
>
> While a queue's "home" node is down, clients connected to other nodes
> and attempting to re-declare the queue will get a 404-NOT_FOUND.
>
> If other nodes were able to re-declare such queues they would then
> conflict with the original queue when that gets recovered.
>
> Matthias.
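Just to check my understanding of the 404 part above, this is roughly how I expect it to show up on the client side while node1 is down (again only a sketch with the Python pika client; the host name is a placeholder, and the exception class is the one current pika raises when the broker closes the channel):

import pika
import pika.exceptions

connection = pika.BlockingConnection(pika.ConnectionParameters(host='node2'))
channel = connection.channel()

try:
    # Re-declaring the durable queue while its home node (node1) is down
    # is refused by the broker with a channel-level error.
    channel.queue_declare(queue='task_queue', durable=True)
except pika.exceptions.ChannelClosedByBroker as err:
    # reply_code is 404 (NOT_FOUND) here, matching the behaviour
    # described above.
    print('declare failed:', err.reply_code, err.reply_text)

connection.close()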