Hi,<div><br></div><div>Software version: 2.8.2</div><div>The cluster has been stressed with 1000 writers and 100 readers. Message size is 100kB.</div>
<div><div>Test configuration:</div></div><div><br></div><div><u>readers node #1</u></div><div><div>test.ConnectionPerWorker=true</div><div>test.WritersCount=0</div><div>test.ReadersCount=33</div><div>test.Durable=true</div>
<div>test.QueuesCount=1</div><div>test.AutoAck=false</div><div>test.ExchangeType=direct</div><div>test.QueueNamePrefix=direct</div></div><div>test.Host=arch-task-mq-7.atm</div><div><br></div><div><u>readers node #2</u></div>
<div><div>test.ConnectionPerWorker=true</div><div>test.WritersCount=0</div><div>test.ReadersCount=33</div><div>test.Durable=true</div><div>test.QueuesCount=1</div><div>test.AutoAck=false</div><div>test.ExchangeType=direct</div>
<div>test.QueueNamePrefix=direct</div><div>test.Host=arch-task-mq-8.atm</div></div><div><br></div><div><u>readers node #3</u></div><div><div>test.ConnectionPerWorker=true</div><div>test.WritersCount=0</div><div>test.ReadersCount=33</div>
<div>test.Durable=true</div><div>test.QueuesCount=1</div><div>test.AutoAck=false</div><div>test.ExchangeType=direct</div><div>test.QueueNamePrefix=direct</div><div>test.Host=arch-task-mq-8.atm</div></div><div><br></div><div>
<u>writers node #4</u></div><div><div>test.ConnectionPerWorker=true</div><div>test.WritersCount=333</div><div>test.ReadersCount=0</div><div>test.Durable=true</div><div>test.QueuesCount=1</div><div>test.AutoAck=false</div>
<div>test.ExchangeType=direct</div><div>test.QueueNamePrefix=direct</div><div>test.BodySize=102400</div><div># available units: s(seconds), m(minutes), h(hours) d(days)</div><div>test.TestDuration=3h</div><div>test.Host=arch-task-mq-8.atm</div>
</div><div><br></div><div><u>writers node #5</u></div><div><div>test.ConnectionPerWorker=true</div><div>test.WritersCount=333</div><div>test.ReadersCount=0</div><div>test.Durable=true</div><div>test.QueuesCount=1</div><div>test.AutoAck=false</div>
<div>test.ExchangeType=direct</div><div>test.QueueNamePrefix=direct</div><div>test.BodySize=102400</div><div># available units: s(seconds), m(minutes), h(hours) d(days)</div><div>test.TestDuration=3h</div><div>test.Host=arch-task-mq-7.atm</div>
</div><div><br></div><div><u>writers node #6</u></div><div><div>test.ConnectionPerWorker=true</div><div>test.WritersCount=334</div><div>test.ReadersCount=0</div><div>test.Durable=true</div><div>test.QueuesCount=1</div><div>test.AutoAck=false</div>
<div>test.ExchangeType=direct</div><div>test.QueueNamePrefix=direct</div><div>test.BodySize=102400</div><div># available units: s(seconds), m(minutes), h(hours) d(days)</div><div>test.TestDuration=3h</div><div>test.Host=arch-task-mq-8.atm</div>
</div><div><br></div><div><br></div><div><u>Current test status:</u></div><div><div>Running worker-1000w-100r-100kB</div><div>Preparing tests on arch-task-mq-1</div><div>Preparing tests on arch-task-mq-2</div><div>Preparing tests on arch-task-mq-3</div>
<div>Preparing tests on arch-task-mq-4</div><div>Preparing tests on arch-task-mq-5</div><div>Preparing tests on arch-task-mq-6</div><div>Preparations done, starting testing procedure</div><div>Start tests on arch-task-mq-1</div>
<div>Start tests on arch-task-mq-2</div><div>Start tests on arch-task-mq-3</div><div>Start tests on arch-task-mq-4</div><div>Start tests on arch-task-mq-5</div><div>Start tests on arch-task-mq-6</div><div>Waiting for tests to finish</div>
<div>Tests done on arch-task-mq-5</div><div>Tests done on arch-task-mq-6</div><div>Tests done on arch-task-mq-4</div></div><div><br></div><div><br></div><div>The readers were disconnected by the server ahead of time.</div>
<div><br></div><div><br></div><div><u>Current cluster state (data from the Management Plugin view):</u></div><div><table border="1" cellpadding="4"><tr><th>Name</th><th>File descriptors (used / available)</th><th>Socket descriptors (used / available)</th><th>Erlang processes (used / available)</th><th>Memory</th><th>Disk space</th><th>Uptime</th><th>Type</th></tr><tr><td>rabbit@arch-task-mq-7</td><td>392 / 1024</td><td>334 / 829</td><td>2885 / 1048576</td><td>540.2MB (1.6GB high watermark)</td><td>49.6GB (4.0GB low watermark)</td><td>21h 14m</td><td>Disc Stats *</td></tr><tr><td>rabbit@arch-task-mq-8</td><td>692 / 1024</td><td>668 / 829</td><td>5522 / 1048576</td><td>1.8GB (?) (1.6GB high watermark)</td><td>46.1GB (4.0GB low watermark)</td><td>21h 16m</td><td>RAM</td></tr></table></div><div><br></div>
<div>The number of Erlang processes keeps growing even though no messages are being published or received.</div><div>All publishers have been blocked. After some time I killed the publisher processes, but RabbitMQ still sees them as connected and blocked. :)</div>
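<div><br></div><div>Killed clients lingering as "connected" usually means the broker never learned that the sockets died: with no AMQP heartbeats negotiated and no TCP keepalive, the server only notices a dead peer once the OS gives up on the connection. A sketch of one mitigation, assuming the stock /etc/rabbitmq/rabbitmq.config location (client-side heartbeats would also work):</div>

```
%% Sketch only — assumes the standard rabbitmq.config file.
%% Enables OS-level TCP keepalive probing on listener sockets so that
%% abruptly killed clients are eventually detected and cleaned up.
[
  {rabbit, [
    {tcp_listen_options, [
      binary,
      {packet, raw},
      {reuseaddr, true},
      {backlog, 128},
      {nodelay, true},
      {keepalive, true}   %% let the OS probe idle peers
    ]}
  ]}
].
```

<div>Keepalive intervals are governed by kernel settings (e.g. net.ipv4.tcp_keepalive_time on Linux), so detection can still take a while with defaults.</div>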
<div><br></div><div>Some logs:</div><div><br></div><div><div><div>mkiedys@arch-task-mq-8:/var/log/rabbitmq$ cat rabbit@arch-task-mq-8.log |grep vm_memory_high|tail -n 20</div><div>vm_memory_high_watermark clear. Memory used:1709148224 allowed:1717986918</div>
<div>vm_memory_high_watermark set. Memory used:2135174984 allowed:1717986918</div><div>vm_memory_high_watermark clear. Memory used:1593121728 allowed:1717986918</div><div>vm_memory_high_watermark set. Memory used:2043534608 allowed:1717986918</div>
<div>vm_memory_high_watermark clear. Memory used:1681947128 allowed:1717986918</div><div>vm_memory_high_watermark set. Memory used:2088225952 allowed:1717986918</div><div>vm_memory_high_watermark clear. Memory used:1710494800 allowed:1717986918</div>
<div>vm_memory_high_watermark set. Memory used:2208875080 allowed:1717986918</div><div>vm_memory_high_watermark clear. Memory used:1713902032 allowed:1717986918</div><div>vm_memory_high_watermark set. Memory used:2122564032 allowed:1717986918</div>
<div>vm_memory_high_watermark clear. Memory used:1663616264 allowed:1717986918</div><div>vm_memory_high_watermark set. Memory used:2098909664 allowed:1717986918</div><div>vm_memory_high_watermark clear. Memory used:1712666136 allowed:1717986918</div>
<div>vm_memory_high_watermark set. Memory used:2088814360 allowed:1717986918</div><div>vm_memory_high_watermark clear. Memory used:1640273568 allowed:1717986918</div><div>vm_memory_high_watermark set. Memory used:2116966952 allowed:1717986918</div>
<div>vm_memory_high_watermark clear. Memory used:1715305176 allowed:1717986918</div><div>vm_memory_high_watermark set. Memory used:2186572648 allowed:1717986918</div><div>vm_memory_high_watermark clear. Memory used:1716620504 allowed:1717986918</div>
<div>vm_memory_high_watermark set. Memory used:2180898440 allowed:1717986918</div></div><div><br></div><div>mkiedys@arch-task-mq-8:/var/log/rabbitmq$ cat rabbit@arch-task-mq-8.log |grep vm_memory_high|wc -l</div><div>2935</div>
</div><div><br></div><div>Why does the server consume more memory than the 1.6GB limit?</div><div><br></div><div>Regards,</div><div>MK</div>
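<div><br></div><div>P.S. For reference, the allowed:1717986918 figure in the log appears to be the default watermark fraction applied to the machine's RAM; a quick sanity check, assuming 4 GiB of total memory and RabbitMQ's default vm_memory_high_watermark of 0.4:</div>

```python
# Where "allowed:1717986918" likely comes from (assumptions: the node has
# 4 GiB of RAM and the default vm_memory_high_watermark fraction of 0.4).
total_ram = 4 * 1024 ** 3            # 4 GiB in bytes
watermark_fraction = 0.4             # RabbitMQ default
allowed = int(total_ram * watermark_fraction)
print(allowed)                       # 1717986918, matching the log
```

<div>Note the watermark is the point at which RabbitMQ raises the memory alarm and blocks publishers, not a hard cap: usage is sampled periodically and the Erlang VM releases memory lazily, so transient overshoots past 1.6GB like the set/clear pairs in the log can occur before the alarm takes effect.</div>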