Thanks for your response.

After some deeper digging I think I have found the cause. Compared to single-node mode, RabbitMQ in a clustered setup takes slightly longer (probably a few milliseconds more) to create a new exchange or queue. I had inadvertently made some assumptions that started to go wrong in the clustered setup because of this slight delay: the message was being published before the client could bind a queue to the exchange. My guess is that RabbitMQ was dropping the message published to that exchange because there was no queue to collect it, so by the time the client did bind a queue to the exchange the message was already gone, and the client kept waiting for it until the timeout.
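For what it's worth, the fix on my side is to make sure the reply queue is declared and bound (and the consumer registered) before N1 is told which exchange to publish to; once the queue is bound, the message sits in it even if the consumer isn't running yet. A rough sketch of that ordering is below. It is illustrative only: it assumes pika's BlockingConnection API, and the host, exchange type, routing key and the send_request_to_n1() helper are placeholders rather than my actual code.

import uuid
import pika

# Connect to one of the cluster nodes (host name is a placeholder).
connection = pika.BlockingConnection(pika.ConnectionParameters(host='vm11'))
channel = connection.channel()

# Per-request exchange named with a UUID, as in the original setup.
exchange_name = uuid.uuid4().hex
channel.exchange_declare(exchange=exchange_name, exchange_type='direct')

# Declare and bind the reply queue BEFORE anything is published to the exchange.
result = channel.queue_declare(queue='', exclusive=True)
reply_queue = result.method.queue
channel.queue_bind(queue=reply_queue, exchange=exchange_name, routing_key='response')

def on_response(ch, method, properties, body):
    # Hand the response back to the main thread here, then stop consuming.
    ch.basic_ack(delivery_tag=method.delivery_tag)
    ch.stop_consuming()

channel.basic_consume(queue=reply_queue, on_message_callback=on_response)

# Only now tell N1 which exchange to reply on (placeholder for the real request path).
# send_request_to_n1(exchange_name)

channel.start_consuming()
connection.close()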
thanks,
-upendra


> From: ask@rabbitmq.com
> Date: Mon, 2 Jul 2012 15:24:07 +0100
> To: upendra.sharma@gmail.com
> CC: rabbitmq-discuss@lists.rabbitmq.com
> Subject: Re: [rabbitmq-discuss] Losing messages in a Cluster configuration of RabbitMQ
>
> On 28 Jun 2012, at 22:24, Upendra Sharma wrote:
>
> > Hi all,
> >
> > I am facing a weird problem of losing messages, even though I am publishing them as persistent messages with pika.BasicProperties(delivery_mode=2). Here is the scenario:
> >
> > I have two RabbitMQ brokers, rabbit@vm11 and rabbit@vm22; these two have been set up in a cluster configuration as shown below:
> > [{nodes,[{disc,[rabbit@vm11,rabbit@vm22]}]},
> >  {running_nodes,[rabbit@vm22,rabbit@vm11]}]
> >
> > I have two clients (written using pika.BlockingConnection()); let's call these clients C1 and N1.
> > 1.) C1 creates a unique exchange (exchange name generated using a UUID, say 3fe546be8aa341b7b174b29a56e63797).
> > 2.) C1 then spawns a thread, say C1-T1, which connects to the RabbitMQ server and waits for a response on this exchange using channel.start_consuming().
> > 3.) C1 then sends a message to N1 and in the message provides the name of the exchange (3fe546be8aa341b7b174b29a56e63797) where N1 should send the response.
> > 4.) Once C1-T1 gets the response, it hands the response over to C1 and dies.
> >
> > In my current setup I have 100 client processes like C1, i.e. C1, C2 ... C100, and one N1.
> >
> > This setup works perfectly fine when RabbitMQ is in a single-node setup, but when I put it in a cluster setup it starts to lose messages. What I mean is that thread C1-T1 never gets a response and times out, writing an ERROR in my log file.
> >
> > The trouble is I am losing as many as 50% of the messages.
> >
>
> RabbitMQ delivers messages in round-robin, so when you say 50% of the messages
> are lost it sounds like too much of a coincidence to me.
>
> What can often happen is having a broken consumer process that just sits
> there stealing messages; you can verify that by looking at the `rabbitmqctl list_queues`
> output:
>
> $ sudo rabbitmqctl list_queues -p $your_vhost name messages_ready messages_unacknowledged consumers
>
> If you have more consumers and unacknowledged messages than you expected then
> you probably have a stale consumer process that you have to kill.