[rabbitmq-discuss] problem with new persister with 1000 topics/queues

alex chen chen650 at yahoo.com
Wed Aug 18 22:06:25 BST 2010

> [{rabbit,  [{msg_store_file_size_limit, 67108864}]}].

This increases the write throughput, but the memory usage is still high.

> We can reproduce that here. I have a few theories about this, but it's
> really pending more testing and debugging. Is there any chance you could
> send us your test code - it'd be good to see exactly what you're doing?

Attached please find amqp_consumer.c, which I modified to show our use case. The
main parameters are:
  amqp_boolean_t durable = 1;
  amqp_boolean_t auto_delete = 0;
  amqp_boolean_t no_ack = 1;
  int prefetch_count = 50;
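Since the attachment was scrubbed from the archive, here is a minimal sketch (not the actual attachment) of how those flags map onto librabbitmq calls; the queue name "my-queue" and the connection/channel setup are assumed and elided:

```c
#include <amqp.h>

/* Sketch only: declare a durable queue, set the prefetch window,
 * and start a no-ack consumer with the parameters listed above. */
static void setup_consumer(amqp_connection_state_t conn, amqp_channel_t chan)
{
    amqp_boolean_t durable = 1;     /* queue survives broker restart */
    amqp_boolean_t auto_delete = 0; /* keep queue when consumers go away */
    amqp_boolean_t no_ack = 1;      /* broker does not wait for acks */
    int prefetch_count = 50;        /* unacked messages allowed in flight */

    amqp_queue_declare(conn, chan, amqp_cstring_bytes("my-queue"),
                       0 /* passive */, durable, 0 /* exclusive */,
                       auto_delete, amqp_empty_table);
    amqp_basic_qos(conn, chan, 0 /* prefetch_size */,
                   (uint16_t)prefetch_count, 0 /* global */);
    amqp_basic_consume(conn, chan, amqp_cstring_bytes("my-queue"),
                       amqp_empty_bytes, 0 /* no_local */, no_ack,
                       0 /* exclusive */, amqp_empty_table);
}
```

One thing worth noting: because no_ack is set, the broker never waits for acknowledgements from this consumer, so the basic.qos prefetch limit may effectively be ignored and messages pushed as fast as the broker can send them.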

> Mmm. Try seeing what happens if you reduce the high watermark to 0.2 or
> lower - yes, publishing will be slower, but it might help with the
> memory usage on consuming.

The memory usage on consuming becomes lower, but the throughput also goes down.
So far the highest throughput we get is by setting the memory high watermark to
0.6 (20-30 MB/sec consume rate). The only problem is that memory then grows to
10 GB; decreasing the watermark lowers the memory usage, but at the same time
the throughput drops.

I think the big puzzle is: for both 100 topics and 1000 topics, when a total of
200 GB of messages is queued, the number of messages is the same (10M at a 20 KB
message size). Why does the 1000-topic case use much more memory than the
100-topic case?


-------------- next part --------------
A non-text attachment was scrubbed...
Name: amqp_consumer.c
Type: text/x-csrc
Size: 6929 bytes
Desc: not available
URL: <http://lists.rabbitmq.com/pipermail/rabbitmq-discuss/attachments/20100818/72cea609/attachment.c>
