[rabbitmq-discuss] problem with new persister with 1000 topics/queues

Matthias Radestock matthias at rabbitmq.com
Thu Oct 28 23:44:35 BST 2010


Alex,

alex chen wrote:
> I am setting the prefetch count to 50. Each message is 20 KB, so total
> usage should be 1000 queues * 20 KB * 50, which is about 1 GB.

It's a bit more than that due to overheads and fragmentation. But not 
much more.
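
For a rough back-of-the-envelope check (the 1.2x overhead factor below
is only an illustrative assumption, not a measured figure):

    # Estimate of memory held by prefetched, unacked messages, using
    # the numbers from your mail: 1000 queues, prefetch 50, 20 KB each.
    queues = 1000
    prefetch = 50
    msg_size_kb = 20

    base_kb = queues * prefetch * msg_size_kb   # 1,000,000 KB, ~1 GB
    overhead = 1.2   # assumed per-message overhead + fragmentation
    print("base:          %.2f GB" % (base_kb / 1024.0 ** 2))
    print("with overhead: %.2f GB" % (base_kb * overhead / 1024.0 ** 2))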

> After all the messages were consumed, the broker's memory still
> remained at 4 GB. Is this expected, or is there a memory leak?

So rabbit is taking up 4GB when all the queues are empty? How many
queues are there? 1000? And how many connections?

> Another observation with 2.1.1 is that most of the rdq files in
> msg_store_persistent were not deleted until all the messages were
> consumed.

2.1.1 is less eager to delete rdq files than previous versions. This
helps reduce the number of writes in cases where a message is routed to
multiple queues and consumed and ack'd very quickly. So this is nothing
to worry about.

> because the broker was so busy, "rabbitmqctl list_queues" could not
> return the queue length

How frequently are you running 'rabbitmqctl list_queues'? If it is run 
fairly frequently, it can significantly delay the freeing of memory, 
since queues don't get a chance to hibernate.
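
If you only need the depth of a particular queue, a passive
queue.declare from a client returns its message count without touching
the other queues. A minimal sketch with a recent version of the Python
'pika' client (the queue name is just an example):

    import pika

    # Passive declare: asks the broker for an existing queue's stats
    # instead of creating it; the declare-ok carries the message count.
    conn = pika.BlockingConnection(pika.ConnectionParameters('localhost'))
    ch = conn.channel()
    ok = ch.queue_declare(queue='topic.0001', passive=True)
    print("messages ready: %d" % ok.method.message_count)
    conn.close()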


Regards,

Matthias.

