[rabbitmq-discuss] problem with new persister with 1000 topics/queues

alex chen chen650 at yahoo.com
Tue Aug 24 21:09:49 BST 2010

> Based on your previous comments I am assuming you have 100/1000 *queues*,

Right. We bind each topic to its own queue.
> With 1000 queues the memory is more fragmented than with 100 queues and a lot
> more things are going on in parallel. That makes it harder for the persister to
> keep memory usage in check. Particularly if your file descriptor limit is also

So is there a plan to improve the memory usage in the new persister?  If not, we 
will have to order new machines with 16 GB RAM instead of the current 8 GB.  If we 
increase the file descriptor limit, would that reduce the memory usage?
I saw the following line in rabbit.log:
"Limiting to approx 16284 file handles (14654 sockets)"
Is this too low?
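For anyone checking the same thing on their own box: the limit RabbitMQ reports at startup is derived from the per-process file descriptor limit it inherits, so you can inspect and raise that limit from the shell before the broker starts. A minimal sketch (the 65536 value and the Debian-style /etc/default/rabbitmq-server location are assumptions for illustration; the right mechanism depends on your distribution and how you launch the broker):

```shell
# Show the per-process open-file limit currently in effect for this shell;
# a broker started from this shell inherits this value.
ulimit -n

# To raise it for the broker, one common approach on Debian/Ubuntu packages
# is to add a line such as
#   ulimit -n 65536
# to /etc/default/rabbitmq-server (sourced before the broker starts),
# then restart rabbitmq-server and confirm the new value from the
# "Limiting to approx ..." line in rabbit.log.
```

Note that raising the descriptor limit mainly lets the broker keep more queue files and sockets open at once; whether it also lowers memory pressure in your workload is exactly the question above.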

Thanks a lot for all the input you have provided on this problem.  It will help us 
figure out the system bottleneck and plan our hardware accordingly.



More information about the rabbitmq-discuss mailing list