[rabbitmq-discuss] Rabbitmq 3.2.4 running out of memory
srikanth tns
srikanthtns at gmail.com
Wed Aug 27 01:13:32 BST 2014
Hi,

Thanks for suggesting the changes. We have applied the relevant changes to the
config file for statistics and upgraded the RAM. Things seem to be stable now,
but we will keep monitoring the usage to see if it jumps again.

Here is a screenshot of how it is looking now. Please let me know if this looks
promising. Also, is there any possibility of memory leaks, and how can we
avoid them?
[image: Inline image 1]
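In case it helps anyone else on the list, here is a minimal sketch of the
statistics change we applied in rabbitmq.config. Note that
force_fine_statistics is my assumption of the relevant management-agent
setting (it is not named explicitly in this thread), so please check the docs
for your version before copying it:

%% /etc/rabbitmq/rabbitmq.config -- sketch only; force_fine_statistics is
%% assumed to be the fine-grained statistics knob being discussed here
[
  {rabbitmq_management_agent, [{force_fine_statistics, false}]}
].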
Memory stats
{erlang_version,
"Erlang R14B04 (erts-5.8.5) [source] [64-bit] [smp:4:4] [rq:4]
[async-threads:30] [kernel-poll:true]\n"},
{memory,
[{total,4337647904},
{connection_procs,338878104},
{queue_procs,1193259768},
{plugins,328401480},
{other_proc,566879328},
{mnesia,331579864},
{mgmt_db,755366192},
{msg_index,22422152},
{other_ets,58232552},
{binary,402509464},
{code,17776116},
{atom,1621321},
{other_system,320721563}]},
{vm_memory_high_watermark,0.7},
{vm_memory_limit,23567893708},
{disk_free_limit,1000000000},
{disk_free,19428507648},
{file_descriptors,
[{total_limit,327579},
{total_used,4289},
{sockets_limit,294819},
{sockets_used,4287}]},
{processes,[{limit,1048576},{used,197320}]},
{run_queue,0},
{uptime,4970}]

{memory,
[{total,2662846904},
{connection_procs,188698640},
{queue_procs,929974408},
{plugins,265628672},
{other_proc,285984288},
{mnesia,333067736},
{mgmt_db,9096},
{msg_index,14039664},
{other_ets,34617944},
{binary,298184672},
{code,17779316},
{atom,1620513},
{other_system,293241955}]},
{vm_memory_high_watermark,0.7},
{vm_memory_limit,23567893708},
{disk_free_limit,1000000000},
{disk_free,19331850240},
{file_descriptors,
[{total_limit,327579},
{total_used,3360},
{sockets_limit,294819},
{sockets_used,3358}]},
{processes,[{limit,1048576},{used,154555}]},
{run_queue,0},
{uptime,3812}]

{memory,
[{total,2548103256},
{connection_procs,216784920},
{queue_procs,846726064},
{plugins,247802288},
{other_proc,260433432},
{mnesia,333461048},
{mgmt_db,9096},
{msg_index,8830272},
{other_ets,33553920},
{binary,291250680},
{code,17776116},
{atom,1620513},
{other_system,289854907}]},
{vm_memory_high_watermark,0.7},
{vm_memory_limit,23567893708},
{disk_free_limit,1000000000},
{disk_free,19382099968},
{file_descriptors,
[{total_limit,327579},
{total_used,3112},
{sockets_limit,294819},
{sockets_used,3110}]},
{processes,[{limit,1048576},{used,143173}]},
{run_queue,0},
{uptime,3777}]
On Tue, Aug 26, 2014 at 12:50 PM, Michael Klishin <mklishin at pivotal.io>
wrote:
> On 26 August 2014 at 23:47:50, srikanth tns (srikanthtns at gmail.com)
> wrote:
> > What changes do we see in the UI statistics? I have applied
> > it on our test instances, and one of the main charts for message
> > rates is gone.
>
> Yes, message rate stats depend on fine-grained statistics. This is
> mentioned in the docs.
>
> > Also, we are bumping up the RAM from 16G to 32G.
>
> You can also bump the watermark ratio, depending on what other software
> may be running on the same machine/VM. (But remember that the OS and
> system services also need RAM, as do file system caches, so going above
> 0.85 is probably not a good idea.)
> --
> MK
>
> Staff Software Engineer, Pivotal/RabbitMQ
>
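For anyone tuning the ratio Michael mentions, a minimal sketch of the
watermark setting in rabbitmq.config (0.7 is simply the value our nodes
already report in the memory stats above, not a recommendation):

%% rabbitmq.config -- watermark sketch; 0.7 matches the
%% vm_memory_high_watermark shown in the status output above
[
  {rabbit, [{vm_memory_high_watermark, 0.7}]}
].

With this ratio the broker computes vm_memory_limit as total RAM x 0.7; the
23,567,893,708-byte limit in the dumps above works out to roughly 33.7 GB
(about 31 GiB) of detected RAM, which lines up with the upgrade to 32G.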
Attachment: image.png (image/png, 73551 bytes)
URL: <http://lists.rabbitmq.com/pipermail/rabbitmq-discuss/attachments/20140826/5de2b51d/attachment.png>