[rabbitmq-discuss] RabbitMQ crashes hard when it runs out of memory

Stephen Day sjaday at gmail.com
Thu Oct 22 22:22:44 BST 2009


As a quick experiment to isolate the effect of garbage collection, I ran this:

4> memory().
[{total,367371832},
 {processes,139434000},
 {processes_used,139430112},
 {system,227937832},
 {atom,514765},
 {atom_used,488348},
 {binary,157628784},
 {code,3880064},
 {ets,64744732}]

(rabbit@vs-dfw-ctl11)5> [erlang:garbage_collect(P) || P <- erlang:processes()].
[true,true,true,true,true,true,true,true,true,true,true,
 true,true,true,true,true,true,true,true,true,true,true,true,
 true,true,true,true,true,true|...]

(rabbit@vs-dfw-ctl11)6> memory().
[{total,145833144},
 {processes,50900752},
 {processes_used,50896864},
 {system,94932392},
 {atom,514765},
 {atom_used,488348},
 {binary,24622512},
 {code,3880064},
 {ets,64745716}]

This really cut down on usage, so it's likely that binary garbage collection is
falling behind RabbitMQ's requirements. How do I track uncollected binary heap
usage down to a particular process?
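
One rough way to attribute refc-binary usage to individual processes, sketched
below and untested on this node, is to sum the sizes reported by
process_info(P, binary) for every live process and sort the result:

F = fun(P) ->
        %% each entry is believed to be {Ptr, Size, RefCount}; sum the sizes of
        %% the refc binaries this process still references
        case process_info(P, binary) of
            {binary, Bins} -> lists:sum([Size || {_Ptr, Size, _RefC} <- Bins]);
            undefined      -> 0
        end
    end,
lists:sublist(lists:reverse(lists:keysort(2,
    [{P, F(P)} || P <- erlang:processes()])), 10).

The ten largest entries should point at whichever processes are pinning the
binary heap.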

_steve

On Thu, Oct 22, 2009 at 1:50 PM, Stephen Day <sjaday at gmail.com> wrote:

> Unfortunately, the system has crashed since the last outputs I provided,
> but the behavior remains. There definitely seems to be some memory held up
> in the persister, but I don't think this is the main source. Below, I printed
> out the memory for the process, gc'd it, then printed it again:
>
> 1> process_info(whereis(rabbit_persister)).
> [{registered_name,rabbit_persister},
>  {current_function,{gen_server2,process_next_msg,8}},
>  {initial_call,{proc_lib,init_p,5}},
>  {status,waiting},
>  {message_queue_len,0},
>  {messages,[]},
>  {links,[<0.76.0>,<0.188.0>]},
>  {dictionary,[{'$ancestors',[rabbit_sup,<0.75.0>]},
>               {'$initial_call',{rabbit_persister,init,1}}]},
>  {trap_exit,true},
>  {error_handler,error_handler},
>  {priority,normal},
>  {group_leader,<0.74.0>},
>  {total_heap_size,43398670},
>  {heap_size,5135590},
>  {stack_size,13},
>  {reductions,128289510},
>  {garbage_collection,[{fullsweep_after,65535},
>                       {minor_gcs,49}]},
>  {suspending,[]}]
> 2> garbage_collect(whereis(rabbit_persister)).
> true
> 3> process_info(whereis(rabbit_persister)).
> [{registered_name,rabbit_persister},
>  {current_function,{gen_server2,process_next_msg,8}},
>  {initial_call,{proc_lib,init_p,5}},
>  {status,waiting},
>  {message_queue_len,0},
>  {messages,[]},
>  {links,[<0.76.0>,<0.188.0>]},
>  {dictionary,[{'$ancestors',[rabbit_sup,<0.75.0>]},
>               {'$initial_call',{rabbit_persister,init,1}}]},
>  {trap_exit,true},
>  {error_handler,error_handler},
>  {priority,normal},
>  {group_leader,<0.74.0>},
>  {total_heap_size,987},
>  {heap_size,610},
>  {stack_size,13},
>  {reductions,133572480},
>  {garbage_collection,[{fullsweep_after,65535},{minor_gcs,6}]},
>  {suspending,[]}]
>
> So, even though this collected quite a bit of memory, we can see that
> the binary allocation is still large:
>
> 4> memory().
> [{total,906056008},
>  {processes,72681252},
>  {processes_used,72668564},
>  {system,833374756},
>  {atom,515733},
>  {atom_used,490081},
>  {binary,769103232},
>  {code,3890441},
>  {ets,58694668}]
>
> Is there a way I can print the allocators for this binary memory?
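>
> (If "the allocators" means the emulator's binary_alloc instances, one option,
> sketched here and untested, is to ask the runtime for allocator statistics:)
>
> erlang:system_info({allocator, binary_alloc}).
>
> (That dumps per-instance carrier and block statistics for binary_alloc, which
> shows how much memory the allocator holds versus how much sits in live blocks,
> though it does not say which processes still reference the binaries.)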
>
> -Stephen
>