[rabbitmq-discuss] RabbitMQ experience

Matthias Radestock matthias at lshift.net
Mon Jan 26 23:19:20 GMT 2009


Gordon,

Gordon Sim wrote:
> A more meaningful result is often the throughput as measured from 
> publishing client to receiving client.

Indeed. That plus latency for the same setup are the performance figures 
we usually look at.
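
For illustration, a rough sketch of that kind of end-to-end measurement,
using the Python 'pika' client (1.x API) against a local broker; the queue
name, message count and single-process setup are arbitrary choices here,
not our actual test harness:

import threading, time
import pika

N = 20000
QUEUE = 'perf-test'   # arbitrary queue name

def produce():
    conn = pika.BlockingConnection(pika.ConnectionParameters('localhost'))
    ch = conn.channel()
    for _ in range(N):
        # stamp each message with its send time (same host, so clocks agree)
        ch.basic_publish(exchange='', routing_key=QUEUE,
                         body=repr(time.time()).encode())
    conn.close()

latencies = []
conn = pika.BlockingConnection(pika.ConnectionParameters('localhost'))
ch = conn.channel()
ch.queue_declare(queue=QUEUE, auto_delete=True)

def on_message(ch_, method, props, body):
    # one-way latency: receive time minus the timestamp in the body
    latencies.append(time.time() - float(body.decode()))
    if len(latencies) == N:
        ch_.stop_consuming()

start = time.time()
threading.Thread(target=produce, daemon=True).start()
ch.basic_consume(queue=QUEUE, on_message_callback=on_message, auto_ack=True)
ch.start_consuming()
elapsed = time.time() - start

print(f"throughput: {N / elapsed:.0f} msg/s, "
      f"mean latency: {1000 * sum(latencies) / len(latencies):.1f} ms")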

> (Adding in a reply queue and  periodic status updates back from the
> consumer to the producer also allows such a test to 'throttle' itself
> and avoid the overload that Matthias refers to).

It is actually surprisingly difficult to determine the *maximum* 
sustainable throughput that way. My experiments have shown that in a 
test where the feedback loop ensures a constant lag (the number of 
messages "in flight", i.e. the difference between the number of messages 
sent by the producer and received by the consumer), a plot of lag 
against throughput exhibits some peculiar characteristics:

- there are local maxima, due to buffering and other effects

- the graph is very sensitive to the setup

- the graph changes over time, due in part to the effects of JITs

- sampling intervals have to be very long to get anything approaching 
reproducible results

And all that happens when the feedback loop has been minimised by 
colocating the producer and consumer in the same O/S process and using 
internal message passing for the feedback. Routing the feedback through 
the broker would make the results even more unpredictable.
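
For concreteness, a rough sketch of a constant-lag test in that colocated
form, again with the Python 'pika' client; the lag target, queue name and
run length are arbitrary, and the feedback travels over an in-process
condition variable rather than through the broker:

import threading, time
import pika

LAG = 100          # target number of messages in flight (arbitrary)
DURATION = 30      # seconds; long sampling intervals, as noted above
QUEUE = 'lag-test' # arbitrary queue name

received = 0
lock = threading.Condition()

def consume():
    conn = pika.BlockingConnection(pika.ConnectionParameters('localhost'))
    ch = conn.channel()
    ch.queue_declare(queue=QUEUE, auto_delete=True)

    def on_message(ch_, method, props, body):
        global received
        with lock:
            received += 1
            lock.notify()                 # the in-process feedback path

    ch.basic_consume(queue=QUEUE, on_message_callback=on_message,
                     auto_ack=True)
    ch.start_consuming()

conn = pika.BlockingConnection(pika.ConnectionParameters('localhost'))
ch = conn.channel()
ch.queue_declare(queue=QUEUE, auto_delete=True)
threading.Thread(target=consume, daemon=True).start()

sent = 0
deadline = time.time() + DURATION
while time.time() < deadline:
    with lock:
        # throttle: block while the lag (sent - received) is at the target
        while sent - received >= LAG:
            lock.wait()
    ch.basic_publish(exchange='', routing_key=QUEUE, body=b'x')
    sent += 1

print(f"lag {LAG}: ~{received / DURATION:.0f} msg/s")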

That's why the goal of writing a "press a button and get the maximum 
throughput figure" test has so far eluded us. Coming up with a test 
that delivers results accurate to within +/-20% isn't too hard. But that 
is far too insensitive for performance regression tests, where we are 
interested in spotting variations of as little as 2%.
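
To illustrate the gap (with entirely made-up figures), the arithmetic
looks something like this:

import statistics

# placeholder throughput readings from repeated runs of the same build, msg/s
runs = [21_000, 17_500, 24_000, 19_000, 22_500]
mean = statistics.mean(runs)
sem = statistics.stdev(runs) / len(runs) ** 0.5   # standard error of the mean
half_width = 1.96 * sem                           # rough 95% confidence bound

print(f"mean {mean:.0f} msg/s, ~95% CI +/- {100 * half_width / mean:.1f}% of mean")
# Unless that half-width is comfortably below 2%, a genuine 2% slowdown is
# indistinguishable from run-to-run noise without much longer sampling or
# many more repetitions.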


Matthias.



