[rabbitmq-discuss] direct queue throughput
matthew at lshift.net
Sun Aug 30 22:14:16 BST 2009
On Sun, Aug 30, 2009 at 03:13:26PM -0500, Chuck Remes wrote:
> I don't disbelieve you but in my testing I have never seen a rabbit
> process go above 100%. This is using Erlang 13B and the "generic" unix
> rabbitmq 1.5.5 under OSX. Perhaps the defaults disable SMP?
I don't think we're doing anything special under OS X to disable SMP.
http://developer.studivz.net/tag/erlang/ has some information about
recent developments in the Erlang interpreter for SMP support.
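For reference, a running Erlang node will tell you whether it was started with SMP support, and how many scheduler threads it is using, via the standard erlang:system_info/1 queries (the shell session below is just an illustration):

```erlang
%% In an Erlang shell on the node in question:

%% true if the emulator was built and started with SMP support
erlang:system_info(smp_support).

%% number of scheduler threads - normally one per core when SMP is on,
%% and 1 when the node is running the non-SMP emulator
erlang:system_info(schedulers).
```

If smp_support comes back false on OS X, that would explain never seeing beam go above 100%.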
It's possible that I'm overstating the contribution of SMP to Rabbit,
but I've definitely seen Rabbit use more than 1 CPU under Linux.
Just a quick test - 100 queues, one producer, default direct exchange,
firing messages into every queue in turn. Output of top:
PID USER PR NI VIRT RES SHR S %CPU %MEM TIME+ COMMAND
29500 matthew 20 0 805m 542m 2284 S 203 13.7 1:23.53 beam.smp
and it sits there at > 190% CPU for most of the test. This is on a
quad-core - one core is totally maxed out with the producer. Having run
this test for a little while, my load average is now sitting at 3.1,
which makes sense.
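A test along these lines can be sketched with the RabbitMQ Erlang client (this uses the client's current API; the queue names, message count, and payload are invented for illustration - the original test's exact parameters aren't given above):

```erlang
%% Sketch: declare 100 queues and publish to each in turn via the
%% default direct exchange, whose routing key is the queue name.
-module(direct_test).
-include_lib("amqp_client/include/amqp_client.hrl").
-export([run/0]).

run() ->
    {ok, Conn} = amqp_connection:start(#amqp_params_network{}),
    {ok, Ch} = amqp_connection:open_channel(Conn),
    Queues = [list_to_binary("q" ++ integer_to_list(N))
              || N <- lists:seq(1, 100)],
    [amqp_channel:call(Ch, #'queue.declare'{queue = Q}) || Q <- Queues],
    %% round-robin publishes across all queues; 1000 rounds is arbitrary
    [amqp_channel:cast(Ch,
                       #'basic.publish'{exchange = <<>>, routing_key = Q},
                       #amqp_msg{payload = <<"x">>})
     || _ <- lists:seq(1, 1000), Q <- Queues],
    amqp_connection:close(Conn).
```

Watching top while this runs (against a broker on a multi-core Linux box) is enough to see beam.smp climb past 100%.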
So no, we're not scaling linearly with the number of cores available,
and we will continue to try to improve this. But we definitely are
taking at least some advantage of parallel hardware. Without wishing to
bash OS X unnecessarily, it is known to fare badly in multithreaded
testing - I seem to recall a number of multithreaded MySQL benchmarks
in which OS X performed poorly because (hazy memory) its threads were
actually quite heavyweight - IIRC, they took the lightweight Mach
threads and wrapped several layers around them, ending up with
something very heavy and slow. However, the reports I'm thinking of
date from several years ago, and I wouldn't be surprised if Apple have
improved things since, assuming they took some time out from polishing
pretty buttons! ;)