[rabbitmq-discuss] problem with new persister with 1000 topics/queues

Matthias Radestock matthias at rabbitmq.com
Wed Oct 27 11:21:55 BST 2010


alex chen wrote:
> I tested the 2.1.1 release today with 1000 queues.  When the message store 
> reached 120 GB, the broker's memory reached 4 GB and it started to slow 
> down the publishers.  The publish rate dropped from 80 MB/sec to less 
> than 1 MB/sec.  Memory stayed around 4 GB for a long time.

That is expected. When clients are publishing at a higher rate than the 
broker can write to disk, and consumers aren't draining the queues, then 
eventually rabbit will have to pause producers so it can page messages 
to disk and free up some memory.

> I tried to change the memory threshold from 0.4 to 0.7, and restarted 
> the broker.  However, the log showed that the broker's memory limit is 
> 3.2 GB (our machine has 8 GB RAM).  It seems to me release 2.1.1 would 
> not allow the broker to use more than 0.4 of the total RAM.

That threshold setting via vm_memory_high_watermark definitely still 
works in 2.1.1.
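For reference, a minimal sketch of how that would look in rabbitmq.config (an Erlang term file; the file's exact location depends on your platform and packaging, and the 0.7 here is just the fraction you mentioned):

```erlang
%% rabbitmq.config -- let the broker use up to 70% of system RAM
%% before the memory alarm triggers and producers are throttled.
[
  {rabbit, [
    {vm_memory_high_watermark, 0.7}
  ]}
].
```

The broker logs the computed absolute memory limit at startup, so checking the log after a restart is the easiest way to confirm the new setting actually took effect.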

> the 2.1.1 release has a build error from rabbitmq_ssl.erl.  I commented 
> out "-include(ssl/src/ssl_int.hrl)" to make it build.

Ah, that's a dependency on the Erlang/OTP sources. I thought we had 
eradicated all of those. Clearly not. I have filed a bug.



More information about the rabbitmq-discuss mailing list