[rabbitmq-discuss] limit number of messages buffered in memory in new persister
matthew at lshift.net
Fri Mar 5 11:59:17 GMT 2010
On Thu, Mar 04, 2010 at 06:25:52PM -0800, alex chen wrote:
> We tested publish+commit and found that it is much slower than publish only.
It always will be. Rabbit buffers messages in several locations before
they hit disk. By issuing the commit, you're forcing all those buffers
to be drained and the message to be fsync'd to disk. At the end of the
day, you are going to be limited by the speed at which you can fsync.
What you can do is publish messages in parallel, using different
channels. The server can coalesce these commits under some circumstances,
so you could publish, say, 100 messages on one channel and then
issue the commit. Whilst you're waiting for the commit_ok to come back,
you could publish on another channel. You'd need to build your client in
this way, but it would mean you're never halting publishes.
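A minimal sketch of that pipelining pattern (mine, not from the thread; the `Channel` class is an illustrative stub standing in for a real AMQP channel in tx mode, and the batch size of 100 and two channels are just example values):

```python
import threading

class Channel:
    """Stub for an AMQP channel in tx mode. In a real client, publish()
    would send basic.publish and commit() would block on tx.commit-ok."""
    def __init__(self, name):
        self.name = name
        self.pending = []
        self.committed = []

    def publish(self, msg):
        self.pending.append(msg)

    def commit(self):
        # In a real client this blocks until the broker fsyncs and replies.
        self.committed.extend(self.pending)
        self.pending = []

def pipelined_publish(messages, n_channels=2, batch_size=100):
    """Round-robin batches of publishes across channels. Each batch's
    commit runs on a worker thread, so the publisher only ever waits
    for a channel's previous commit when it comes back round to that
    channel - it never halts publishing overall."""
    channels = [Channel("ch%d" % i) for i in range(n_channels)]
    in_flight = [None] * n_channels
    for i in range(0, len(messages), batch_size):
        idx = (i // batch_size) % n_channels
        ch = channels[idx]
        # Wait for this channel's earlier commit_ok before reusing it;
        # commits on the other channels stay in flight meanwhile.
        if in_flight[idx] is not None:
            in_flight[idx].join()
        for msg in messages[i:i + batch_size]:
            ch.publish(msg)
        t = threading.Thread(target=ch.commit)
        t.start()
        in_flight[idx] = t
    for t in in_flight:
        if t is not None:
            t.join()
    return channels
```

With a real client you would replace the stub with channels that have issued tx.select, but the scheduling logic stays the same.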
> For single client with commit on every publish, it can do about 20 msg/sec.
That's pretty low. See my other email - you should be able to get much
higher than that with some careful tuning. Experimentation and testing
are the order of the day - it's very dependent on your hard disc, what
else is being written to the disk, the file system, kernel and the
direction of the prevailing wind.
> It took about 40ms for client to get the tx_commit_ok from server.
> It seems to me the bottleneck was not fsync, because disk was not busy.
> also i tried commenting out file:sync() from the src code and it has no effect.
That is very curious. Also, don't do that - if it crashes at that point
you're not going to be able to recover much data from disk ;) How big
are the messages you're writing?
> if commit on 100 publishes, the performance increases to above 1000 msg/sec.
> Is there anyway to speed up the performance for commit on every publish?
Yes. Better hard discs / SSDs and kernel/fs tuning.
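As a back-of-envelope check (my arithmetic, using the ~40 ms commit round trip reported earlier in the thread), the numbers are consistent with the commit round trip being the bottleneck:

```python
commit_rtt = 0.040  # seconds per tx.commit round trip, per the thread

# One commit per publish: throughput is capped by the round trip.
serial_ceiling = 1 / commit_rtt          # = 25 msg/s, near the observed ~20

# One commit per 100 publishes: the same round trip is amortised.
batched_ceiling = 100 / commit_rtt       # = 2500 msg/s, consistent with >1000 observed
```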