[rabbitmq-discuss] Memory problems
Marek Majkowski
majek04 at gmail.com
Wed Jul 14 14:02:58 BST 2010
Nicolás,
First, sorry for the slow reply; I must have missed your message somehow.
2010/6/3 Nicolás César <nico at nicocesar.com>:
> A process -> A_q queue -> B process -> fat_queue -> many C processes (even
> though I have many Cs, A is much faster)
>
> Here is the problem when everything is almost empty:
>
> rabbitmqctl list_queues memory | awk '{T += $1;}END{print T;}'
> 5703040
>
> this means that the total memory used for queues is around 5MB but RES
> memory for the beam.smp process is 358MB !!!
You're right, the rabbitmqctl memory report isn't very useful.
Actually, it's impossible to give decent per-queue memory statistics. Consider
a fanout exchange, where a single message gets distributed into multiple queues.
RabbitMQ doesn't actually copy the message, it shares it between
queues - what should per-queue memory usage return in that case?
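To illustrate the double-counting (a hand-wavy sketch of mine in Python, not RabbitMQ's actual internals):

```python
import sys

# Illustration only: one message body referenced by two queues, as with
# fanout routing. Nothing is copied, so summing "per-queue" sizes
# overstates the real memory use.
body = b"x" * 1024 * 1024        # a 1 MiB message body

queue_a = [body]                 # both queues reference the SAME object;
queue_b = [body]                 # no copy is made

per_queue = sys.getsizeof(body)  # what each queue would "charge" for it
naive_total = 2 * per_queue      # summing per-queue stats double-counts
actual = per_queue               # only one copy really exists in memory

print(naive_total // actual)     # 2
```

With N bound queues the naive sum is N times the real footprint of a shared body, which is why no single per-queue number can be "right".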
> And when I start using it, things get worse. beam.smp just jumps from 400m
> to 1.6G (in jumps of 100-200m per second) and well... my VPS provider just
> decides to kill almost everything on that machine. Right now I'm killing the
> B process after a bunch of messages, but actually I don't know how to
> estimate how much memory rabbit will be using per message.
RabbitMQ stores message bodies as Erlang binaries. These are handled
by a slightly different garbage collection algorithm than the normal Erlang heap
(that's another reason why we don't really know how much memory
messages are using).
Generally, the garbage collector once in a while decides that it needs to
defragment the memory. To do so it allocates a big memory block
and copies the live data there. After that's done, it frees the old memory block.
But for a while Erlang will use _twice_ as much memory as it really needs.
That can be one of the reasons why you see the huge memory consumption.
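As a back-of-the-envelope sketch (my own illustration, not Erlang's actual allocator): suppose the live data really is around the 358MB you observed.

```python
# Illustration only: during copy-compaction the old region and the
# freshly-copied new region exist at the same time, so peak usage is
# roughly double the live set.
live_mb = 358                          # hypothetical live data, in MB

peak_during_compaction = 2 * live_mb   # old block + new block coexist
after_compaction = live_mb             # old block is freed afterwards

print(peak_during_compaction)          # 716
```

So a transient RES spike near double the steady-state figure is expected behaviour, not necessarily a leak.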
Another reason may be a badly tuned garbage collector. In one
of my installations I noticed that setting some magic Erlang GC option made
rabbit use far less memory. But tuning GC internals is not something we want
to recommend.
> I've heard the "new persister" (what a name, eh!) is "much better" with
> "memory issues":
> http://lists.rabbitmq.com/pipermail/rabbitmq-discuss/2010-May/007251.html
It definitely is. If you can afford compiling rabbit from source, please give
it a try. Just do "hg up bug21673; make run". More info here:
http://www.rabbitmq.com/build-server.html#obtaining
Using Erlang R13B03 or later may also help (it introduced better
garbage collection for binaries).
> taken from:
> http://lists.rabbitmq.com/pipermail/rabbitmq-discuss/2010-February/006397.html
>
>> =INFO REPORT==== 16-Feb-2010::04:42:05 ===
>> Memory limit set to 3196MB.
>
> I'd love that functionality! when was implemented? or what parameters I need
> to pass to rabbit?
The idea behind it is to tell producers that they should stop
sending messages when rabbit is using too much memory. To make it useful,
your client must support the channel.flow AMQP command, which tells the
client to stop sending. I'm afraid python-amqplib doesn't support that.
Some server side documentation:
http://www.rabbitmq.com/extensions.html#memsup
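For reference, the threshold can be set in rabbitmq.config; a sketch (the 0.4 fraction-of-RAM value is just an example - check the page above for the exact option and default in your version):

```
%% /etc/rabbitmq/rabbitmq.config - Erlang terms; note the trailing dot.
[
  {rabbit, [
    {vm_memory_high_watermark, 0.4}   %% raise the alarm at 40% of system RAM
  ]}
].
```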
This functionality works a bit differently in the new persister
branch. There, the limit is a hint to rabbit about how much memory it may use.
If it's using more, data gets paged out to disk. So theoretically you should
never see channel.flow (if you have a fast enough disk).
Cheers,
Marek Majkowski