[rabbitmq-discuss] Losing messages in a Cluster configuration of RabbitMQ

Upendra Sharma upendras at hotmail.com
Tue Jul 3 11:22:19 BST 2012

Thanks for your response.
After some deeper digging I think I have found the cause. Compared to single-node mode, RabbitMQ in clustered mode appears to take slightly longer (probably a few milliseconds) to create a new exchange or queue. I had inadvertently made some assumptions that started to go wrong in the clustered setup because of this slight delay: the message was getting published before a client could bind a queue to the exchange. My guess is that RabbitMQ was discarding the message published to that exchange because there was no queue to collect it; when the client later did bind a queue to the exchange, the message had already been discarded, and the client would keep waiting for it until timeout.
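If that diagnosis is right, the fix is an ordering change on the client side: declare the reply exchange and bind a queue to it before the request is handed to N1, so a routable queue always exists by the time the response is published. A minimal sketch, assuming the channel object exposes pika BlockingChannel's declare/bind methods (names here are illustrative, not from the original code):

```python
import uuid

def setup_reply_path(channel):
    """Create the per-request reply exchange AND bind a queue to it
    before the request goes out, so the response can never be published
    to an exchange with no bound queue (where RabbitMQ drops it).

    `channel` is assumed to behave like a pika BlockingChannel, i.e. to
    expose exchange_declare/queue_declare/queue_bind with these keywords.
    """
    exchange_name = uuid.uuid4().hex  # e.g. 3fe546be8aa341b7b174b29a56e63797
    channel.exchange_declare(exchange=exchange_name, exchange_type="fanout")
    # Server-named, exclusive queue for this client's reply.
    result = channel.queue_declare(queue="", exclusive=True)
    channel.queue_bind(exchange=exchange_name, queue=result.method.queue)
    # Only now is it safe to tell N1 where to send the response.
    return exchange_name, result.method.queue
```

Only after this returns should C1 send its request naming the exchange; C1-T1 can then call start_consuming() on the already-bound queue. Alternatively, publishing the response with the AMQP mandatory flag would make the broker return an unroutable message to the publisher instead of silently dropping it, which at least makes the race visible.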

> From: ask at rabbitmq.com
> Date: Mon, 2 Jul 2012 15:24:07 +0100
> To: upendra.sharma at gmail.com
> CC: rabbitmq-discuss at lists.rabbitmq.com
> Subject: Re: [rabbitmq-discuss] Losing messages in a Cluster configuration of RabbitMQ
> On 28 Jun 2012, at 22:24, Upendra Sharma wrote:
> > Hi all,
> > 
> > I am facing a weird problem of losing messages, even though I am publishing them as persistent messages with pika.BasicProperties(delivery_mode=2). Here is the scenario:
> > 
> > I have two RabbitMQ brokers, namely rabbit at vm11 and rabbit at vm22; these two have been set up in a cluster configuration as shown below:
> > [{nodes,[{disc,[rabbit at vm11,rabbit at vm22]}]},
> >  {running_nodes,[rabbit at vm22,rabbit at vm11]}]
> > 
> > I have two clients (written using pika.BlockingConnection()); let's call these clients C1 and N1.
> > 1.) C1 creates a unique exchange (exchange name generated using UUID, say 3fe546be8aa341b7b174b29a56e63797).
> > 2.) C1 then spawns a thread, say C1-T1, which connects to the RabbitMQ server and waits for a response on this exchange using channel.start_consuming().
> > 3.) C1 then sends a message to N1 and in the message provides the name of the exchange (3fe546be8aa341b7b174b29a56e63797) where N1 should send the response.
> > 4.) Once C1-T1 gets the response, it hands over the response to C1 and dies.
> > 
> > In my current setup I have 100 client processes like C1, i.e. C1, C2 ... C100, and one N1.
> > 
> > This setup works perfectly fine when RabbitMQ is in a single-node setup, but when I put it in a cluster setup, it starts to lose messages. What I mean is that thread C1-T1 never gets a response and times out, writing an ERROR to my log file.
> > 
> > The trouble is I am losing as many as 50% of the messages.
> > 
> RabbitMQ delivers messages round-robin, so when you say 50% of the messages
> are lost, that sounds like too much of a coincidence to me.
> What often happens is a broken consumer process that just sits
> there stealing messages; you can verify that by looking at the `rabbitmqctl list_queues`
> output:
>     $ sudo rabbitmqctl list_queues -p $your_vhost name messages_ready messages_unacknowledged consumers
> If you have more consumers and unacknowledged messages than you expected then
> you probably have a stale consumer process that you have to kill.
> _______________________________________________
> rabbitmq-discuss mailing list
> rabbitmq-discuss at lists.rabbitmq.com
> https://lists.rabbitmq.com/cgi-bin/mailman/listinfo/rabbitmq-discuss
