[rabbitmq-discuss] memory consumption during message delivery
Cermak, Marek
Marek.Cermak at Honeywell.com
Wed Apr 3 13:52:37 BST 2013
Hi Tim,
thanks for the answer.
I would reckon that 'slowing down' the transfer from disk to memory may be the way to go. It is definitely not desirable for all purposes, but for particular cases - like the one I tested - it would be nice to be able to set some values :) Another effect I saw was the use of a huge number of file descriptors, which ultimately limits the transfer from RMQ to the consumer, as RMQ has to wait. Delivery was very slow (at least in comparison with publishing).
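The only knob I have found on the consumer side so far is basic.qos prefetch, which at least caps how many unacknowledged messages the broker pushes at once. A minimal sketch using the Python pika client (the queue name and prefetch value are just examples, and as far as I can tell this does not stop the queue itself from paging messages back into memory):

    import pika

    connection = pika.BlockingConnection(pika.ConnectionParameters('localhost'))
    channel = connection.channel()

    # Cap the number of unacknowledged messages the broker will push to
    # this consumer at any one time.
    channel.basic_qos(prefetch_count=10)

    def on_message(ch, method, properties, body):
        # process the (possibly multi-MB) payload here, then acknowledge it
        ch.basic_ack(delivery_tag=method.delivery_tag)

    channel.basic_consume(queue='big-messages', on_message_callback=on_message)
    channel.start_consuming()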
I know 1G of memory is not much, but I am looking for the limits for various purposes, so this is the way to go in my case.
thanks
Marek
From: rabbitmq-discuss-bounces at lists.rabbitmq.com [mailto:rabbitmq-discuss-bounces at lists.rabbitmq.com] On Behalf Of Tim Watson
Sent: 03 April, 2013 14:36
To: Discussions about RabbitMQ
Subject: Re: [rabbitmq-discuss] memory consumption during message delivery
Marek,
On 3 Apr 2013, at 13:16, Cermak, Marek wrote:
Hi everybody,
I did some measurements of memory consumption of RMQ (v3.0.2) while publishing and delivering messages and found some interesting behaviour during delivery. In short: messages were stored in mnesia transient storage on disk, and when a consumer started to consume them, all of them were read into memory in one chunk, which was almost deadly for RMQ and the server.
The mnesia database is used to store schema elements (queue definitions, exchanges, users, etc.); however, messages are *never* stored in mnesia. They are stored in a custom file-system storage layer - the persister - which was purpose-written for RabbitMQ.
The scenario was 1+2+8+10 pairs of bound direct exchanges and queues, each pair receiving messages of a particular size (1 kB, 64 kB, 1 MB and 8 MB); after a while a consumer was started to 'empty' each of the queues. The setup looked roughly like the sketch below. The consumer of the queue with the biggest messages triggered an avalanche of data being transferred from disk to memory. Similar behaviour can always be seen when you fill a queue up to the roof with bigger messages and then start the consumer.
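For illustration, the declaration and publishing side looked roughly like this (a Python pika sketch; the exchange/queue names, message counts and one-pair-per-size layout are illustrative, not the exact test code):

    import pika

    # one (exchange, queue) pair per message size, for illustration
    SIZES = {'1k': 1024, '64k': 64 * 1024, '1m': 1024 * 1024, '8m': 8 * 1024 * 1024}

    connection = pika.BlockingConnection(pika.ConnectionParameters('localhost'))
    channel = connection.channel()

    for name, size in SIZES.items():
        exchange, queue = 'x-' + name, 'q-' + name
        channel.exchange_declare(exchange=exchange, exchange_type='direct')
        # non-durable queue -> transient storage, as in the measurements
        channel.queue_declare(queue=queue)
        channel.queue_bind(queue=queue, exchange=exchange, routing_key=queue)
        payload = b'x' * size
        for _ in range(100):  # keep publishing until a large backlog builds up
            channel.basic_publish(exchange=exchange, routing_key=queue, body=payload)

    connection.close()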
[snip]
Tested with 1 GB of memory, 64-bit Debian GNU/Linux, Erlang R15B, RMQ v3.0.2.
Is this expected behaviour? Can it somehow be tuned/adjusted? Thanks.
Well if you've got a bunch of data on disk that needs to be delivered to a consumer later on, it'll need to be read into memory in order to be transferred onto the wire. One obvious way to avoid getting into this position would be to allocate a good deal more memory - if you're expecting a large volume of data in your queues, then 1Gb seems quite small.
How would you like to be able to tune it? There are some parameters of the message store which are configurable - see http://www.rabbitmq.com/configure.html - but I suspect these will not do what you want, and I'd strongly advise against fiddling with them: as far as we can tell, they are already optimally tuned. In fact, I really suspect that the right thing for you to do is to add more memory. Rabbit will use as much memory as possible to get its job done, so allocating more memory is the way to go, IMO.
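For reference, those settings live in rabbitmq.config (Erlang term format). The entries below show, to the best of my knowledge, the documented defaults - and again, I'd leave the msg_store one alone:

    [
      {rabbit, [
        %% fraction of system RAM at which RabbitMQ starts blocking publishers
        {vm_memory_high_water_mark, 0.4},
        %% size in bytes at which message store segment files roll over
        {msg_store_file_size_limit, 16777216}
      ]}
    ].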
Cheers,
Tim
Marek