Dear Simon,

My tool uses company-internal libraries, so I cannot publish it.
Would you like more details of this test so that you can reproduce it on your own?

Regards,
MK

2012/6/27 Simon MacMullen <simon@rabbitmq.com>
Hi Michał - please can you keep rabbitmq-discuss on CC?

So as I said, the limit is only the point at which Rabbit stops accepting new messages. In the general case this should be enough to stop further memory consumption - but in your case it looks like it isn't. If you were able to post your test tool in a way that would make it easy for us to run, then that might be the easiest way for us to help you. At the moment we just don't have enough information.
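
For reference, the limit being discussed here is the vm_memory_high_watermark setting. A minimal rabbitmq.config sketch (0.4 is the default fraction of installed RAM, which on a node with roughly 4GB works out to about the 1.6GB you are seeing):

[
  {rabbit, [
    %% stop accepting publishes once ~40% of installed RAM is in use
    {vm_memory_high_watermark, 0.4}
  ]}
].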

Cheers, Simon

On 27/06/12 09:36, Michał Kiędyś wrote:
Simon,

My question stems from the fact that Rabbit can consume more than 4GB
even when the limit is set to 1.6GB.
In this scenario it reports usage of 2.7GB, but the real usage is more than 4GB.

rabbit@arch-task-mq-8
(http://arch-task-mq-7:55672/#/nodes/rabbit%40arch-task-mq-8)
File descriptors:   734 / 1024
Socket descriptors: 701 / 829
Erlang processes:   5795 / 1048576
Memory:             2.7GB (?)  (1.6GB high watermark)
Disk space:         49.6GB     (4.0GB low watermark)
Uptime:             12m 33s
Type:               RAM


After a while the kernel kills the Rabbit process:

Mem-info:
DMA per-cpu:
cpu 0 hot: high 186, batch 31 used:8
cpu 0 cold: high 62, batch 15 used:48
cpu 1 hot: high 186, batch 31 used:108
cpu 1 cold: high 62, batch 15 used:55
cpu 2 hot: high 186, batch 31 used:118
cpu 2 cold: high 62, batch 15 used:53
cpu 3 hot: high 186, batch 31 used:89
cpu 3 cold: high 62, batch 15 used:55
DMA32 per-cpu: empty
Normal per-cpu: empty
HighMem per-cpu: empty
Free pages: 12076kB (0kB HighMem)
Active:0 inactive:741324 dirty:0 writeback:9 unstable:0 free:3023
slab:101876 mapped:3649 pagetables:2586
DMA free:12092kB min:8196kB low:10244kB high:12292kB active:0kB
inactive:2965168kB present:4202496kB pages_scanned:32 all_unreclaimable? no
lowmem_reserve[]: 0 0 0 0
DMA32 free:0kB min:0kB low:0kB high:0kB active:0kB inactive:0kB
present:0kB pages_scanned:0 all_unreclaimable? no
lowmem_reserve[]: 0 0 0 0
Normal free:0kB min:0kB low:0kB high:0kB active:0kB inactive:0kB
present:0kB pages_scanned:0 all_unreclaimable? no
lowmem_reserve[]: 0 0 0 0
HighMem free:0kB min:128kB low:128kB high:128kB active:0kB inactive:0kB
present:0kB pages_scanned:0 all_unreclaimable? no
lowmem_reserve[]: 0 0 0 0
DMA: 172*4kB 533*8kB 170*16kB 41*32kB 11*64kB 1*128kB 1*256kB 1*512kB
0*1024kB 1*2048kB 0*4096kB = 12632kB
DMA32: empty
Normal: empty
HighMem: empty
Swap cache: add 4358, delete 4243, find 0/0, race 0+0
Free swap = 1031136kB
Total swap = 1048568kB
Free swap: 1031136kB
1050624 pages of RAM
26588 reserved pages
17300 pages shared
83 pages swap cached
Out of Memory: Kill process 2213 (rabbitmq-server) score 14598295 and
children.
Out of memory: Killed process 2227 (beam.smp).


Is this OK?


Regards,
MK

2012/6/22 Simon MacMullen <simon@rabbitmq.com>

Hi Michał.

This is quite vague - if we can't see the source of your test tool
it's hard to see what it's actually doing.

The server can use more memory than the high watermark; that's just
the point at which it stops accepting new messages from the network.
This should greatly cut the extent to which it can consume more
memory, but will not eliminate it.

There is an existing issue where the processes used by connections
do not close when the connection is closed and memory use is above
the watermark. When the memory use drops the processes will go.
Could your test application be opening new connections?
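
If it helps to rule that out, something like this (a sketch - check the rabbitmqctl man page for the exact info items on your version) will show how many connections the broker still considers open and whether they are blocked:

rabbitmqctl -n rabbit@arch-task-mq-8 list_connections peer_address peer_port state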

Also, you say:

> The readers have been disconnected by the server ahead of time.

does this mean that huge numbers of messages are building up in the
server? Note that in the default configuration there is a
per-message cost in memory of a hundred bytes or so even when the
message has been paged out to disc, so that might explain why so
much memory is being used.
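
(Purely to illustrate the scale - rough arithmetic, not a measurement from your nodes: at ~100 bytes of overhead per message, every million messages sitting in a queue adds on the order of 100MB of memory, even after the message bodies themselves have been paged out to disc.)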

I hope this helps explain what you are seeing. But I'm not exactly
sure what you are doing...

Cheers, Simon


On 22/06/12 14:09, Michał Kiędyś wrote:

Hi,

Software version: 2.8.2
The cluster has been stressed with 1000 writers and 100 readers.
Message size is 100kB.
Test configuration:

_readers node #1_

test.ConnectionPerWorker=true
test.WritersCount=0
test.ReadersCount=33
test.Durable=true
test.QueuesCount=1
test.AutoAck=false
test.ExchangeType=direct
test.QueueNamePrefix=direct
test.Host=arch-task-mq-7.atm

_readers node #2_

test.ConnectionPerWorker=true
test.WritersCount=0
test.ReadersCount=33
test.Durable=true
test.QueuesCount=1
test.AutoAck=false
test.ExchangeType=direct
test.QueueNamePrefix=direct
test.Host=arch-task-mq-8.atm

_readers node #3_

test.ConnectionPerWorker=true
test.WritersCount=0
test.ReadersCount=33
test.Durable=true
test.QueuesCount=1
test.AutoAck=false
test.ExchangeType=direct
test.QueueNamePrefix=direct
test.Host=arch-task-mq-8.atm

_writers node #4_

test.ConnectionPerWorker=true
test.WritersCount=333
test.ReadersCount=0
test.Durable=true
test.QueuesCount=1
test.AutoAck=false
test.ExchangeType=direct
test.QueueNamePrefix=direct
test.BodySize=102400
# available units: s(seconds), m(minutes), h(hours) d(days)
test.TestDuration=3h
test.Host=arch-task-mq-8.atm

_writers node #5_

test.ConnectionPerWorker=true
test.WritersCount=333
test.ReadersCount=0
test.Durable=true
test.QueuesCount=1
test.AutoAck=false
test.ExchangeType=direct
test.QueueNamePrefix=direct
test.BodySize=102400
# available units: s(seconds), m(minutes), h(hours) d(days)
test.TestDuration=3h
test.Host=arch-task-mq-7.atm

_writers node #6_

test.ConnectionPerWorker=true
test.WritersCount=334
test.ReadersCount=0
test.Durable=true
test.QueuesCount=1
test.AutoAck=false
test.ExchangeType=direct
test.QueueNamePrefix=direct
test.BodySize=102400
# available units: s(seconds), m(minutes), h(hours) d(days)
test.TestDuration=3h
test.Host=arch-task-mq-8.atm


_Actual test state:_

Running worker-1000w-100r-100kB
Preparing tests on arch-task-mq-1
Preparing tests on arch-task-mq-2
Preparing tests on arch-task-mq-3
Preparing tests on arch-task-mq-4
Preparing tests on arch-task-mq-5
Preparing tests on arch-task-mq-6
Preparations done, starting testing procedure
Start tests on arch-task-mq-1
Start tests on arch-task-mq-2
Start tests on arch-task-mq-3
Start tests on arch-task-mq-4
Start tests on arch-task-mq-5
Start tests on arch-task-mq-6
Waiting for tests to finish
Tests done on arch-task-mq-5
Tests done on arch-task-mq-6
Tests done on arch-task-mq-4


The readers have been disconnected by the server ahead of time.


_Actual cluster state (data from Management Plugin view):_

rabbit@arch-task-mq-7 (Disc, Stats *)
File descriptors:   392 / 1024
Socket descriptors: 334 / 829
Erlang processes:   2885 / 1048576
Memory:             540.2MB    (1.6GB high watermark)
Disk space:         49.6GB     (4.0GB low watermark)
Uptime:             21h 14m

rabbit@arch-task-mq-8 (RAM)
File descriptors:   692 / 1024
Socket descriptors: 668 / 829
Erlang processes:   5522 / 1048576
Memory:             1.8GB (?)  (1.6GB high watermark)
Disk space:         46.1GB     (4.0GB low watermark)
Uptime:             21h 16m

The number of processes is growing all the time even though no
messages are being published or received.
All publishers have been blocked. After some time I killed the
publisher processes, but RabbitMQ still sees them as connected and
blocked. :)

Some logs:

mkiedys@arch-task-mq-8:/var/log/rabbitmq$ cat rabbit@arch-task-mq-8.log |grep vm_memory_high|tail -n 20
vm_memory_high_watermark clear. Memory used:1709148224 allowed:1717986918
vm_memory_high_watermark set. Memory used:2135174984 allowed:1717986918
vm_memory_high_watermark clear. Memory used:1593121728 allowed:1717986918
vm_memory_high_watermark set. Memory used:2043534608 allowed:1717986918
vm_memory_high_watermark clear. Memory used:1681947128 allowed:1717986918
vm_memory_high_watermark set. Memory used:2088225952 allowed:1717986918
vm_memory_high_watermark clear. Memory used:1710494800 allowed:1717986918
vm_memory_high_watermark set. Memory used:2208875080 allowed:1717986918
vm_memory_high_watermark clear. Memory used:1713902032 allowed:1717986918
vm_memory_high_watermark set. Memory used:2122564032 allowed:1717986918
vm_memory_high_watermark clear. Memory used:1663616264 allowed:1717986918
vm_memory_high_watermark set. Memory used:2098909664 allowed:1717986918
vm_memory_high_watermark clear. Memory used:1712666136 allowed:1717986918
vm_memory_high_watermark set. Memory used:2088814360 allowed:1717986918
vm_memory_high_watermark clear. Memory used:1640273568 allowed:1717986918
vm_memory_high_watermark set. Memory used:2116966952 allowed:1717986918
vm_memory_high_watermark clear. Memory used:1715305176 allowed:1717986918
vm_memory_high_watermark set. Memory used:2186572648 allowed:1717986918
vm_memory_high_watermark clear. Memory used:1716620504 allowed:1717986918
vm_memory_high_watermark set. Memory used:2180898440 allowed:1717986918

mkiedys@arch-task-mq-8:/var/log/rabbitmq$ cat rabbit@arch-task-mq-8.log |grep vm_memory_high|wc -l
2935

Why does the server consume more memory than the 1.6GB limit?

Regards,
MK


_______________________________________________
rabbitmq-discuss mailing list
rabbitmq-discuss@lists.rabbitmq.com
https://lists.rabbitmq.com/cgi-bin/mailman/listinfo/rabbitmq-discuss


--
Simon MacMullen
RabbitMQ, VMware



--
Simon MacMullen
RabbitMQ, VMware