[rabbitmq-discuss] ETS memory usage
Simone Sciarrati
s.sciarrati at gmail.com
Mon Oct 7 10:41:55 BST 2013
Hi,
I am investigating memory usage on one of our nodes in a cluster of 3
x c1.xlarge instances in EC2 (Ubuntu 12.04, RabbitMQ 2.8.4 and Erlang
R14B04; 2 disk nodes and one RAM node). We are planning to upgrade to
3.1.x, but regardless of the version I would like to understand how to
extract information about what is consuming the memory.
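For reference, this is roughly how I have been poking at it from the
node's Erlang shell (a rough sketch; ets:info(Tab, memory) reports words,
so I multiply by the word size to get bytes):
%% top-level breakdown, then ETS tables ranked by bytes (largest first)
1> erlang:memory([total, processes, ets, binary]).
2> WordSize = erlang:system_info(wordsize).
3> lists:reverse(lists:sort([{ets:info(T, memory) * WordSize,
       ets:info(T, name), ets:info(T, owner)} || T <- ets:all()])).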
Right now, one of the nodes is showing 2.3GB used (high watermark
2.7GB), of which 2GB is held in ETS according to the management console:
1> ets:i().
id name type size mem owner
----------------------------------------------------------------------------
12 cookies set 0 300 auth
4111 code set 733 82416 code_server
8208 code_names set 53 7387 code_server
49169 mnesia_subscr duplicate_bag 1 307 mnesia_subscr
65570 httpc_manager__session_cookie_db bag 0 300 httpc_manager
90149 file_handle_cache_client set 51 1194 file_handle_cache
94244 file_handle_cache_elders set 36 696 file_handle_cache
98339 ign_requests set 0 300 inet_gethost_native
102438 ign_req_index set 0 300 inet_gethost_native
229436 vertices set 0 300 rabbit
233533 edges set 0 300 rabbit
237630 neighbours bag 2 314 rabbit
253993 rabbit_memory_monitor set 35 1118 rabbit_memory_monitor
446504 rabbit_msg_store_file_summary ordered_set 1 104 msg_store_transient
450623 rabbit_msg_store_ets_index set 6565 99801 msg_store_transient
454720 rabbit_msg_store_shared_file_handles ordered_set 0 90 msg_store_transient
458817 rabbit_msg_store_cur_file set 6562 447542 msg_store_transient
462914 rabbit_msg_store_flying set 0 300 msg_store_transient
467013 rabbit_msg_store_file_summary ordered_set 384 5466 msg_store_persistent
471110 rabbit_msg_store_ets_index set 2627724 39819062 msg_store_persistent
475207 rabbit_msg_store_shared_file_handles ordered_set 803 13741 msg_store_persistent
479304 rabbit_msg_store_cur_file set 9193 553491 msg_store_persistent
483401 rabbit_msg_store_flying set 0 300 msg_store_persistent
2625617 anon ordered_set 67 15491 <0.9548.283>
2629654 anon ordered_set 1308748 215451711 <0.9548.283>
2633803 anon ordered_set 2562 187731 <0.9548.283>
2637898 anon ordered_set 809 42884 <0.9548.283>
2641996 anon ordered_set 642 70378 <0.9548.283>
2646094 anon ordered_set 134 7406 <0.9548.283>
2650189 anon ordered_set 177 10405 <0.9548.283>
3072079 mnesia_transient_decision set 24 684 mnesia_recover
3092563 mnesia_transient_decision set 23 668 mnesia_recover
3162139 mnesia_transient_decision set 6 396 mnesia_recover
3182612 shell_records ordered_set 0 90 <0.24264.361>
ac_tab ac_tab set 90 5663 application_controller
disk_log_names disk_log_names set 0 300 disk_log_server
disk_log_pids disk_log_pids set 0 300 disk_log_server
file_io_servers file_io_servers set 2 468 file_server_2
global_locks global_locks set 0 300 global_name_server
global_names global_names set 1 319 global_name_server
global_names_ext global_names_ext set 0 300 global_name_server
global_pid_ids global_pid_ids bag 0 300 global_name_server
global_pid_names global_pid_names bag 2 317 global_name_server
gm_group gm_group set 0 300 mnesia_monitor
httpc_manager__handler_db httpc_manager__handler_db set 0 300 httpc_manager
httpc_manager__session_db httpc_manager__session_db set 0 300 httpc_manager
inet_cache inet_cache bag 0 300 inet_db
inet_db inet_db set 29 615 inet_db
inet_hosts_byaddr inet_hosts_byaddr bag 0 300 inet_db
inet_hosts_byname inet_hosts_byname bag 0 300 inet_db
inet_hosts_file_byaddr inet_hosts_file_byaddr bag 0 300 inet_db
inet_hosts_file_byname inet_hosts_file_byname bag 0 300 inet_db
mirrored_sup_childspec mirrored_sup_childspec ordered_set 1 116 mnesia_monitor
mnesia_decision mnesia_decision set 1 308 mnesia_recover
mnesia_gvar mnesia_gvar set 704 12494 mnesia_monitor
mnesia_held_locks mnesia_held_locks bag 0 300 mnesia_locker
mnesia_lock_queue mnesia_lock_queue bag 0 300 mnesia_locker
mnesia_stats mnesia_stats set 7 349 mnesia_monitor
mnesia_sticky_locks mnesia_sticky_locks set 0 300 mnesia_locker
mnesia_tid_locks mnesia_tid_locks bag 0 300 mnesia_locker
pg2_fixed_table pg2_fixed_table ordered_set 14 310 pg2_fixed
pg_local_table pg_local_table ordered_set 0 90 pg_local
rabbit_durable_exchange rabbit_durable_exchange set 51 1657 mnesia_monitor
rabbit_durable_queue rabbit_durable_queue set 57 1978 mnesia_monitor
rabbit_durable_route rabbit_durable_route set 118 5477 mnesia_monitor
rabbit_exchange rabbit_exchange set 51 1657 mnesia_monitor
rabbit_exchange_serial rabbit_exchange_serial set 0 300 mnesia_monitor
rabbit_listener rabbit_listener bag 3 468 mnesia_monitor
rabbit_queue rabbit_queue set 66 2323 mnesia_monitor
rabbit_registry rabbit_registry set 7 370 rabbit_registry
rabbit_reverse_route rabbit_reverse_route ordered_set 162 7530 mnesia_monitor
rabbit_route rabbit_route ordered_set 162 7530 mnesia_monitor
rabbit_semi_durable_route rabbit_semi_durable_route ordered_set 118 5385 mnesia_monitor
rabbit_topic_trie_binding rabbit_topic_trie_binding ordered_set 96 4489 mnesia_monitor
rabbit_topic_trie_edge rabbit_topic_trie_edge ordered_set 56 2299 mnesia_monitor
rabbit_topic_trie_node rabbit_topic_trie_node ordered_set 93 2892 mnesia_monitor
rabbit_user rabbit_user set 2 338 mnesia_monitor
rabbit_user_permission rabbit_user_permission set 1 332 mnesia_monitor
rabbit_vhost rabbit_vhost set 1 311 mnesia_monitor
schema schema set 19 2695 mnesia_monitor
sys_dist sys_dist set 3 423 net_kernel
timer_interval_tab timer_interval_tab set 6 420 timer_server
timer_tab timer_tab ordered_set 6 255 timer_server
ok
According to this output, the two tables consuming most of the memory are:
2629654 anon ordered_set 1308748 215451711 <0.9548.283>
471110 rabbit_msg_store_ets_index set 2627724 39819062 msg_store_persistent
The owner of the first one, <0.9548.283>, is:
<0.9548.283> rabbit_mgmt_db:init/1 2584 27448279 0
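For completeness, this is more or less how I tied the anonymous tables
back to their owner from the remote shell (a quick sketch, using the
pid from the listing above):
%% owner process details, plus the ETS tables it owns
1> P = pid(0, 9548, 283).
2> erlang:process_info(P, [registered_name, initial_call, memory, message_queue_len]).
3> [{ets:info(T, memory), ets:info(T, name)} || T <- ets:all(), ets:info(T, owner) =:= P].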
Looking at this output, the sum of the memory doesn't seem to add up to
more than 2GB. Also, the number of objects in rabbit_msg_store_ets_index
is more than 2.5 million, which doesn't seem correct when the number of
messages in all queues on this node is never more than 150k at any time
(perhaps my understanding of this number is incorrect and it doesn't
relate to the number of messages).
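One sanity check I am not sure about: if the mem column from ets:i() is
in words rather than bytes (as ets:info(Tab, memory) is), then the two
largest tables alone would account for roughly 1.9GB on a 64-bit VM:
1> WordSize = erlang:system_info(wordsize).    %% 8 on a 64-bit VM
2> (215451711 + 39819062) * WordSize.          %% ~2.04e9 bytes, i.e. ~1.9GB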
All queues are durable.
Any help in understanding where the memory might be going would be
greatly appreciated. This is a generic question, not specifically tied
to the RabbitMQ version (the same or a different problem might present
itself with later versions at some point).
Thanks a lot,
Simone