[rabbitmq-discuss] Over memory threshold?

Marek Majkowski majek04 at gmail.com
Thu Jun 16 15:00:39 BST 2011


On Thu, Jun 16, 2011 at 10:50, yaohui <yaohui1984 at qq.com> wrote:
> When there are plenty of messages in the queues, memory usage can go over
> the high watermark, which is 1/4 of physical memory by default. After some
> messages are written to disk, memory drops back below the high watermark.
> But I found that sometimes, after going over the high watermark, RabbitMQ
> does nothing even though memory stays above it, and then I can't send any
> messages. Why aren't the messages written to disk?

Hi,

Unfortunately it's not that simple. Rabbit does its best to free memory,
but in practice it's not always able to do enough. For example, Rabbit doesn't
control how the Erlang VM handles garbage collection.

In an over-memory situation Rabbit tries to write messages out to disk,
frees the memory they were using, and prays that the Erlang GC will
eventually release that memory.
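
If you want to see what the broker thinks it is using when the alarm fires,
rabbitmqctl is a good first stop. A rough sketch (the exact keys in the
output, and the beam process name, depend on your version and platform):

  # Broker status, including a memory breakdown and the configured
  # vm_memory_high_watermark limit (key names vary between versions).
  rabbitmqctl status

  # Also watch the Erlang VM from the OS side, since the GC may hold on
  # to memory it hasn't handed back yet (Linux procps; the process may be
  # called beam or beam.smp).
  ps -C beam.smp -o rss,vsz,cmd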

But in some corner cases it's possible for Rabbit to get stuck. For
example, if you create more queues than you have memory for: since queue
metadata must always be kept in memory, Rabbit will hit the memory
watermark and will never be able to recover.
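
A quick way to check whether you're in that situation is to ask Rabbit how
many queues you have and how much memory each queue process is using. Just a
sketch; here "memory" is the per-queue-process memory in bytes as reported
by rabbitmqctl:

  # How many queues exist on this node?
  rabbitmqctl list_queues name | wc -l

  # Message count and queue-process memory (in bytes) for each queue.
  rabbitmqctl list_queues name messages memory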

Also, there is some memory cost associated with every message, even
when the message is stored on disk (unless you use the magical
rabbitmq-toke plugin, which aims to solve this problem).

So, take a look at rabbitmq-toke, monitor RabbitMQ memory
usage using the rabbitmq-management plugin, and just buy
more RAM if that's not enough!
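
If you prefer to poll it over HTTP, the management plugin exposes the same
numbers. As a sketch, with the default guest/guest credentials and default
listener port assumed (both may differ in your setup):

  # Per-node statistics from the management API, including memory used
  # and the configured memory limit (credentials and port are assumptions).
  curl -s -u guest:guest http://localhost:55672/api/nodes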

Cheers,
   Marek

