[rabbitmq-discuss] Memory problems

Nicolás César nico at nicocesar.com
Thu Jun 3 13:38:38 BST 2010


Hi all

First I need to send kudos to all of you. I've been using Rabbit for a year
now and it has given me a lot of satisfaction. It reminded me of the day I
first used a database instead of a bunch of files... the same, but with data
flows! Like a marriage, satisfaction and problems come in the same pack!
Some background info:

 * I'm running rabbitmq 1.7.0-3 on a Debian testing distribution with
python-amqplib 0.6.1-1
 * It's a VPS with 2GB of RAM, *BUT* when used memory is about 80%,
"tasks may be killed off pre-maturely, we do this to prevent a total OOM
scenario where the VPS goes dead" (literal words from the provider)
 * I was using ~15 queues with 1kb messages and a rate of ~20
messages/minute

About a week ago I made the following changes:
 * I'm now using 50 queues with a rate of ~60 messages/minute
 * One of them (I call it "the fat queue") has 50kb messages
 * Processing at the rear end runs at a fixed rate... so I need Rabbit to
act as a memory/disk buffer

A: High-speed producer: 50000 1kb messages in a second or so.
B: Translator: reads from A_q, generates a 50kb message from each single 1kb
message, and publishes it on the fat_queue.
C: (Slow) fixed-speed consumer: grabs a 50kb message and does its magic.

A process -> A_q queue -> B process -> fat_queue -> many C processes (even
though I have many Cs, A is much faster)
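Roughly, my B process looks like this (a simplified sketch, not my exact
code: translate() is a stand-in for the real 1kb -> 50kb work, and the
connection details are just the amqplib defaults):

```python
# Sketch of the B translator: consume from A_q, publish fat messages
# to fat_queue. translate() is a placeholder for the real work.

def translate(body):
    # Placeholder: expand a 1kb payload into a ~50kb one.
    return body * 50

def run_translator():
    from amqplib import client_0_8 as amqp  # python-amqplib 0.6

    conn = amqp.Connection(host='localhost:5672',
                           userid='guest', password='guest')
    ch = conn.channel()
    ch.queue_declare(queue='A_q', durable=True, auto_delete=False)
    ch.queue_declare(queue='fat_queue', durable=True, auto_delete=False)

    def on_message(msg):
        # delivery_mode=2 marks the fat message as persistent
        fat = amqp.Message(translate(msg.body), delivery_mode=2)
        ch.basic_publish(fat, exchange='', routing_key='fat_queue')
        ch.basic_ack(msg.delivery_tag)

    ch.basic_consume(queue='A_q', callback=on_message)
    while True:
        ch.wait()  # blocks, dispatching one message at a time

# run_translator()  # uncomment to run against a live broker
```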


Here is the problem, when everything is almost empty:

rabbitmqctl list_queues memory | awk '{T += $1;}END{print T;}'
5703040

this means the total memory used by the queues is around 5MB, but the RES
memory of the beam.smp process is 358MB!!!

And when I start using it, things get worse: beam.smp just jumps from 400M
to 1.6G (in jumps of 100-200M per second) and, well... my VPS provider just
decides to kill almost everything on that machine. Right now I'm killing the
B process after a bunch of messages, but I actually don't know how to
estimate how much memory Rabbit will be using per message.
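My own back-of-envelope worry (my numbers and overhead factor, not an
official RabbitMQ formula): if one burst from A ends up as a fat_queue
backlog held in RAM, it's far more than this VPS has:

```python
# Rough estimate of RAM needed if the whole fat_queue backlog stays
# in memory. The 2x overhead factor is my guess, not a measured value.
MESSAGES = 50000          # one burst from producer A
BODY_BYTES = 50 * 1024    # 50kb per fat_queue message
OVERHEAD = 2.0            # assumed per-message broker overhead factor

backlog_bytes = MESSAGES * BODY_BYTES * OVERHEAD
print("%.1f GB" % (backlog_bytes / 1024.0 ** 3))  # -> 4.8 GB
```

Which would explain why beam.smp blows past 1.6G long before the queue
drains, if nothing is paged to disk.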


I've heard the "new persister" (what a name, eh!) is "much better" with
"memory issues":
http://lists.rabbitmq.com/pipermail/rabbitmq-discuss/2010-May/007251.html

taken from:
http://lists.rabbitmq.com/pipermail/rabbitmq-discuss/2010-February/006397.html

> =INFO REPORT==== 16-Feb-2010::04:42:05 ===
> Memory limit set to 3196MB.

I'd love that functionality! When was it implemented? Or what parameters do
I need to pass to Rabbit?
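For what it's worth, I did find a vm_memory_high_watermark setting mentioned
for the memory-based flow control; is that the knob behind that "Memory limit
set to" line? If I read it right, something like this in
/etc/rabbitmq/rabbitmq.config would set it (0.4 of physical RAM seems to be
the default; please correct me if this isn't the right parameter):

```erlang
%% /etc/rabbitmq/rabbitmq.config
%% Fraction of physical RAM Rabbit may use before flow control kicks in.
[{rabbit, [{vm_memory_high_watermark, 0.4}]}].
```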

Greetings!

Nico César