[rabbitmq-discuss] Per-Connection Flow Control - RMQ 2.8.1

DawgTool dawgtool at aol.com
Wed Apr 25 17:14:44 BST 2012

Hi Matthias,

On 4/25/12 11:50 AM, Matthias Radestock wrote:
> On 25/04/12 15:41, DawgTool wrote:
>> I haven't run these same tests on 2.7.1 yet, since it's our production
>> machines (which are currently stable... enough).
> I see. So, just to be clear, we are now no longer looking at the 
> original problem you reported, namely that you thought 2.7.1 could 
> handle a considerably higher rate than 2.8.1?
Comparing 2.7.1 to 2.8.1 has become less of a priority than getting 
2.8.1 to behave the same as our current 2.7.1 system.
(And by "behave" I mean throughput more than anything else.)
>> The spiking and the locking is not unusual? ;)
>> Maybe the large gaps between queue memory and actual memory is not
>> unusual also? ;)
>> Maybe the flat queue sizes for several seconds while hundreds of
>> thousands of records are added and purged? ;)
>> All joking aside.... =)
In case you thought otherwise, this really is a good-natured poke, so 
please don't take my joking personally. =)

> Running, as you were, with considerably higher flow control credits 
> than standard makes all of the above worse. One of the reasons we 
> introduced flow control (and picked the standard settings we have) is 
> to reduce the magnitude of these effects.
Agreed; the reason I was raising the credits was to give my publishers 
more time to publish before they got blocked. At the default setting of 
{200,50}, my minimum publishing rate of 4k/s (normal rate is 90k/s) 
blocked almost instantly.
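To make it concrete what {200,50} means here, a minimal sketch of the credit mechanism (class and method names are illustrative, not RabbitMQ's internals): the publisher starts with 200 credits, spends one per message, and is granted 50 more each time the downstream process finishes handling 50 messages; at zero credits it blocks.

```python
class CreditFlow:
    """Toy model of a {InitialCredit, MoreCreditAfter} credit pair."""

    def __init__(self, initial_credit=200, more_credit_after=50):
        self.credit = initial_credit
        self.more_credit_after = more_credit_after
        self.handled = 0  # messages processed downstream since last grant

    def publish(self):
        """Spend one credit; return False (publisher blocked) at zero."""
        if self.credit == 0:
            return False
        self.credit -= 1
        return True

    def downstream_handled_one(self):
        """Downstream processed a message; credit is granted in batches."""
        self.handled += 1
        if self.handled == self.more_credit_after:
            self.handled = 0
            self.credit += self.more_credit_after

flow = CreditFlow()
sent = 0
# A fast publisher with a stalled consumer blocks after 200 messages.
while flow.publish():
    sent += 1
print(sent)  # 200
```

Raising the pair (as I was doing) just moves the point at which the publisher stalls, at the cost of letting more messages pile up in the broker first.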
The flat queue sizes concern me more than the queue memory vs. total 
memory gap; I expected that records would pile up in the broker/exchange.
>> When RabbitMQ is cleaning its index/journals and data files, the queues
>> are all locked.
> Paging to disk happens largely asynchronously. However, ordering and 
> other causality constraints may require certain operations to wait 
> until others have completed. There is not much we can do about that.
I assumed the same thing: 'purging'/paging would be done 
asynchronously. But in all my tests with non-persistent messages, paging 
locked everything without fail, regardless of disk type (SAS vs. RAM 
disk). I haven't gone through the code enough to know when this is the 
case, but I would assume it's the index/journal rebuilding and not the 
disk I/O.
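For what it's worth, the ~4GB figure would line up with the default vm_memory_high_watermark of 0.4 on a host with 10 GiB of RAM (the 10 GiB is my guess to match the observation; substitute the actual memory of the test machine):

```python
# Back-of-the-envelope check on where paging to disk kicks in, assuming
# the default vm_memory_high_watermark of 0.4 and a hypothetical 10 GiB
# host. RabbitMQ starts pushing messages (even transient ones) to disk
# as memory use approaches this limit.
GIB = 1024 ** 3
total_ram = 10 * GIB
vm_memory_high_watermark = 0.4           # RabbitMQ default
memory_limit = total_ram * vm_memory_high_watermark
print(memory_limit / GIB)  # 4.0 (GiB)
```

The 1.5-2GB figure on the smaller systems would similarly correspond to roughly 4-5 GiB of RAM under the same default.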
>> The current flow control {200,50} hides a lot of this
> Yep. Quite intentionally.
>> On my test machine, all the non-persistent messages were dropped to disk
>> (causing a lock) around 4GB every time. On smaller systems, it's usually
>> around 1.5 to 2GB. This happens regardless of any settings on RabbitMQ
>> or mnesia (unless I missed something).
> With the default credit settings I can get publishers to pause for a 
> few seconds every now and then, but not longer. Are you seeing 
> something different?
You can easily reproduce the poor behaviour by pushing volume (3-4k/s) 
to a topic exchange whose queues have a short TTL (90000ms).
I can send my JSON object if you want to see the configuration I am 
using. Summary: 1 exchange (topic) -> 14 queues (TTL: 90000ms).
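A rough back-of-the-envelope for that setup (assuming, possibly wrongly, that every message matches every queue's binding) shows why hundreds of thousands of records are resident at steady state:

```python
# Steady-state backlog for the topology above: each queue holds roughly
# (publish rate x TTL) messages, since messages expire 90s after arrival.
# The all-queues-match assumption is mine; the real routing keys may not
# fan out to all 14 queues.
publish_rate = 4_000          # messages/second (my stated minimum rate)
ttl_seconds = 90_000 / 1000   # per-queue x-message-ttl of 90000 ms
queues = 14

backlog_per_queue = publish_rate * ttl_seconds
total_resident = backlog_per_queue * queues
print(int(backlog_per_queue), int(total_resident))  # 360000 5040000
```

At the normal 90k/s rate those numbers are over 20x larger, which is consistent with hitting the memory watermark and paging quickly.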
> Regards,
> Matthias.
