[rabbitmq-discuss] Feature Req / Bug list

Simon MacMullen simon at rabbitmq.com
Fri Oct 25 11:37:20 BST 2013

First of all, thank you for taking the time to generate a test case. I 
haven't reproduced anything yet, but rest assured that I will attempt to 
do so.

I do have some quibbles with a couple of your assertions though:

On 24/10/2013 7:30PM, Graeme N wrote:
> - Bug: even though node and management port listeners are specified, the
> first instance started will still incorrectly bind to port 55672 for the
> management interface.

Any node with the management plugin will attempt to bind to this port to 
serve an HTTP redirect to the new port (55672 was the old mgmt port in 2.x).

Note that, unlike all the other port bindings, failure to bind this one 
will not prevent the server from starting up (we moved the management 
port off 55672 in the first place because it sometimes could not be bound).

So yeah, this is pretty hard coded, but it's meant as a fairly invisible 
usability / migration thing. If it bothers you, you can turn it off by 
modifying your script to:

-rabbitmq_management listener [{port,4444$n},{redirect_old_port,false}]

This redirect port will be going away in 3.3.0.
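If you prefer a config file to a command-line flag, the equivalent can go 
in rabbitmq.config. A hedged sketch (the port number here is illustrative; 
the structure mirrors the `-rabbitmq_management listener ...` flag above):

    %% rabbitmq.config -- disable the old-port redirect listener
    [{rabbitmq_management,
      [{listener, [{port,              15672},
                   {redirect_old_port, false}]}]}].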

> - populate queues with 1000 messages each in parallel: ./populate_queues.sh
> - Note: shows low delivery rates noted before on spinning disks (60-80
> msgs/sec), even though my VM storage is on btrfs RAID10 capable of
> sustained block writes > 200MB/s. iostat shows the VM is only generating
> 1-8 MB/s of IO. Looking at messages under
> /var/lib/rabbitmq/mnesia/rabbit2 at localhost/queues, they seem to be
> chunked into 64, 68, 72, and 84 kiB files before being delivered to the
> 16MiB msg_store_persistent/*.rdq files. This implies a lot of random IO
> while delivering messages, which explains why the performance problems
> disappear when switching to SSDs, even just two SSDs in RAID1. Typically
> with other data stores we'd expect to see on-disk chunks that are
> multiples of 128 MiB to properly leverage RAID block IO, in both
> incoming and finalized data stores. The effect of this is that it takes
> ~20m to load ~32 MiB of messages, which is pretty awful.

Err, your script invokes amqp-publish(1) in a loop. 100,000 times. I 
suspect most of the slowness is due to the time taken to fork that many 
processes, open and close that many AMQP connections, and so on. I would 
guess this goes faster on an SSD because you can fork() faster.
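For scale: ~20 minutes for 100,000 invocations works out to roughly 12 ms 
per message before any disk I/O happens. A quick sketch (not from the 
thread, and with no broker involved) of how much a per-message fork/exec 
costs compared to staying in one process:

```python
# Hedged sketch: time N trivial process spawns, analogous to invoking
# amqp-publish once per message (minus the AMQP connection handshake),
# against an in-process loop that does no forking at all.
import subprocess
import time

N = 200  # scaled down from the 100,000 invocations in the test case

# Fork/exec a do-nothing process once per "message".
start = time.monotonic()
for _ in range(N):
    subprocess.run(["true"], check=True)
spawn_seconds = time.monotonic() - start

# The in-process equivalent: no fork, no connection setup.
start = time.monotonic()
for _ in range(N):
    pass
loop_seconds = time.monotonic() - start

print(f"{N} spawns: {spawn_seconds:.3f}s; bare loop: {loop_seconds:.6f}s")
```

Even without opening a single AMQP connection, the spawn loop is orders of 
magnitude slower, which is consistent with the script (not the message 
store) dominating the load time.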

Certainly I can populate 100 queues with 1,000 messages each in a rather 
small fraction of a second with the PerfTest tool 
(http://www.rabbitmq.com/java-tools.html) if the same message goes to 
all queues 1,000 times, or in less than 10 seconds if each message is unique.

Cheers, Simon

Simon MacMullen
RabbitMQ, Pivotal
