[rabbitmq-discuss] Cluster and nodes shutdown: not_found, "no queue 'XXXX' in vhost 'YYY'", 'queue.declare'
Simon MacMullen
simon at rabbitmq.com
Thu Dec 1 10:18:11 GMT 2011
On 30/11/11 20:18, Tiago Cruz wrote:
> I'm using RabbitMQ 2.6.1's mirrored queues without 'x-ha-policy' (yet!),
> and I just ran into a weird situation.
>
> I'm using keepalived to fail over the IP address that clients connect to,
> and my queues are declared with "Durable=True"
<snip>
If you're not using x-ha-policy, then you are not using mirrored queues.
By default queues only exist on one node (qwc1 in your case).
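For context, here is a minimal sketch of the kind of declare described above, using the Python pika client purely as an illustration (the thread doesn't say which client or language Tiago is using): durable, but with no 'x-ha-policy' argument, so the queue lives only on its home node.

import pika

# Hypothetical example: connect to the node behind the keepalived address
# (assumed here to be qwc1) and declare the two queues named in the thread.
connection = pika.BlockingConnection(pika.ConnectionParameters(host='qwc1'))
channel = connection.channel()

# Durable means the queue definition and persistent messages survive a
# restart of the queue's home node - it does NOT copy the queue to other nodes.
channel.queue_declare(queue='videos', durable=True)
channel.queue_declare(queue='sender', durable=True)

connection.close()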
> If I shut down _qwc2_, everything keeps working fine.
>
> If I shut down _qwc1_, the cluster stops working, and I get these error
> messages in the log:
<snip>
> When node qwc1 comes back up, everything works fine again.
>
> If, before the shutdown, I delete the queues 'videos' and 'sender',
> everything keeps working fine.
>
> Is this behaviour correct? I thought everything would keep working even
> if one node of the cluster crashed :)
The behaviour is as designed. If you declare a queue on a single node and
that node then goes down, attempts to redeclare it return not_found. The
idea is that the queue should still be there when the node comes back up -
so while it is down, Rabbit won't simply recreate the queue on one of the
other nodes, since you would then end up with two copies of the same queue
once the original node returned.
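As a rough illustration of what that looks like from the client side (again assuming pika, and assuming keepalived has moved the address to qwc2; the exception class name is from pika 1.x and is an assumption), redeclaring the queue on a surviving node while qwc1 is down is refused with the 404 not_found from the subject line rather than recreated:

import pika

connection = pika.BlockingConnection(pika.ConnectionParameters(host='qwc2'))
channel = connection.channel()

try:
    # qwc1 (the queue's home node) is down, so the broker refuses to
    # recreate 'videos' here rather than risk a duplicate copy later.
    channel.queue_declare(queue='videos', durable=True)
except pika.exceptions.ChannelClosedByBroker as err:
    # e.g. 404, "NOT_FOUND - no queue 'videos' in vhost '/'"
    print(err.reply_code, err.reply_text)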
But it sounds like you want mirrored queues, so use 'x-ha-policy'.
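A sketch of that suggestion on RabbitMQ 2.6.x (again with pika, as an assumption): pass the 'x-ha-policy' queue argument at declare time so the queue is mirrored across the cluster rather than tied to qwc1.

import pika

connection = pika.BlockingConnection(pika.ConnectionParameters(host='qwc1'))
channel = connection.channel()

# 'all' mirrors the queue to every node in the cluster (a fixed list of
# nodes is also possible via the 'nodes' value of x-ha-policy).
ha_args = {'x-ha-policy': 'all'}
channel.queue_declare(queue='videos', durable=True, arguments=ha_args)
channel.queue_declare(queue='sender', durable=True, arguments=ha_args)

connection.close()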
Cheers, Simon
--
Simon MacMullen
RabbitMQ, VMware