[rabbitmq-discuss] OOM kill

Jason McIntosh mcintoshj at gmail.com
Fri Mar 7 17:43:47 GMT 2014


As I recall, the memory limit doesn't prevent Rabbit from using more
memory; it's the point at which it stops accepting new connections, and it
would still allow existing connections to flood the system. From the docs:
Memory-Based Flow Control

The RabbitMQ server detects the total amount of RAM installed in the
computer on startup and when "rabbitmqctl set_vm_memory_high_watermark
<fraction>" is executed. By default, when the RabbitMQ server uses above 40%
of the installed RAM, it raises a memory alarm and blocks all connections.
Once the memory alarm has cleared (e.g. due to the server paging messages
to disk or delivering them to clients) normal service resumes.
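
For what it's worth, the fraction itself is easy to change (just a sketch,
with 0.2 used as an example value, not tested on your box). At runtime:

    rabbitmqctl set_vm_memory_high_watermark 0.2

or, persistently, in rabbitmq.config:

    [{rabbit, [{vm_memory_high_watermark, 0.2}]}].

Either way that only moves the alarm threshold - it isn't a hard cap on how
much memory the Erlang VM can end up using.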


No clue about the plugins memory.
Jason


On Fri, Mar 7, 2014 at 11:16 AM, Dmitry Andrianov <
dmitry.andrianov at alertme.com> wrote:

> Hello.
> We are trying to load-test a RabbitMQ server in different configurations on
> an Amazon EC2 node.
> Most of our tests end with the Linux OOM killer intervening and killing Rabbit.
> That is something I cannot really understand, especially given that it is
> reproducible even with vm_memory_high_watermark set to 0.2 and no other
> processes running on the box.
> So if someone could shed some light on this issue, it would help a lot.
>
> Below is the status response taken not long before the process was killed:
>
>  {os,{unix,linux}},
>  {erlang_version,
>      "Erlang R15B01 (erts-5.9.1) [source] [64-bit] [async-threads:30]
> [kernel-poll:true]\n"},
>  {memory,
>      [{total,1984625336},
>       {connection_procs,1235761240},
>       {queue_procs,142935728},
>       {plugins,-44730984},
>       {other_proc,34065132},
>       {mnesia,32001976},
>       {mgmt_db,233149752},
>       {msg_index,13545296},
>       {other_ets,49861192},
>       {binary,148478688},
>       {code,20404184},
>       {atom,793505},
>       {other_system,118359627}]},
>  {vm_memory_high_watermark,0.2},
>  {vm_memory_limit,804643635},
>  {disk_free_limit,50000000},
>  {disk_free,1843380224},
>  {file_descriptors,
>      [{total_limit,64900},
>       {total_used,11681},
>       {sockets_limit,58408},
>       {sockets_used,11677}]},
>  {processes,[{limit,1048576},{used,198557}]},
>  {run_queue,6},
>  {uptime,575}]
>
> A couple of things there strike me as strange:
>
> 1. {vm_memory_limit,804643635}, yet memory {total,1984625336}. How is
> that possible? https://www.rabbitmq.com/memory.html says the Erlang
> process can take up to twice the configured size, so I expected some
> overshoot, but this is definitely more than twice (rough numbers below).
>
> 2. {plugins,-44730984} - how is this value even possible?
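>
> (Rough arithmetic on point 1, using the figures above: 1,984,625,336 /
> 804,643,635 is about 2.47, so total memory is nearly two and a half times
> the limit; and 804,643,635 / 0.2 is 4,023,218,175 bytes, so the limit
> corresponds to roughly 3.75 GiB of detected RAM.)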
>
> And just in case it is needed, below is /proc/X/status
>
> VmPeak:     2267580 kB
> VmSize:     2110952 kB
> VmLck:           0 kB
> VmHWM:     2096276 kB
> VmRSS:     2020848 kB
> VmData:     2075168 kB
> VmStk:          88 kB
> VmExe:        1604 kB
> VmLib:        4848 kB
> VmPTE:        4304 kB
> Threads:    31
>
>
> Thanks in advance!
>
>
> _______________________________________________
> rabbitmq-discuss mailing list
> rabbitmq-discuss at lists.rabbitmq.com
> https://lists.rabbitmq.com/cgi-bin/mailman/listinfo/rabbitmq-discuss
>



-- 
Jason McIntosh
https://github.com/jasonmcintosh/
573-424-7612