[rabbitmq-discuss] Strange message loss behavior with mirrored queues
mpietrek at skytap.com
Fri Feb 3 18:57:10 GMT 2012
I'm writing some basic throughput tests to verify that RabbitMQ will scale
to our needs in HA environments. In one scenario I'm seeing substantial
message loss. It may be Pika client-library behavior, but even if so, what
I'm seeing is very odd.
* Ubuntu 10.04 with RabbitMQ 2.7.1.
* Broker instances are all disk nodes.
* Queues are declared as durable and x-ha-policy:all
* Clients are written in Python using Pika 0.9.5.
* For this test, no transactions or publisher-confirms are used.
The test code is extremely simple. It simply reads or writes messages (up to
100,000) as quickly as it can, and shows the throughput rate at 1,000-message
intervals.
If I run my "write" test against a single broker instance, I rapidly write
all the messages and after the app exits, I see 100,000 messages in the
queue, as expected.
However, if I add two more brokers to the cluster (such that the queue is
now mirrored), the identical write test yields only ~25K messages in the
queue. (They are the first 25K messages written, FWIW.)
My app calls channel.disconnect() after doing all the writes, so I'd expect
it to flush its internal buffers before exiting.
I can supply my entire app (about 150 lines of Python) if desired, but for
now, here are the relevant pieces:
    for i in range(100000):
        clientlib.write_message(QUEUENAME, "abc_%s" % (i))

    def connect(self, hostname, use_transactions):
        self.connection = pika.BlockingConnection(
            pika.ConnectionParameters(host=hostname))
        self.channel = self.connection.channel()
        self.use_transactions = use_transactions  # This is False for this test

    def write_message(self, queue_name, json_string):
        # (body truncated in the archive; it is a plain basic_publish
        # of json_string to queue_name on the default exchange)
        self.channel.basic_publish(exchange='',
                                   routing_key=queue_name,
                                   body=json_string)