[rabbitmq-discuss] limit number of messages buffered in memory in new persister

Matthew Sackman matthew at lshift.net
Tue Mar 2 11:23:09 GMT 2010

Hi Alex,

On Tue, Mar 02, 2010 at 12:38:28AM -0800, alex chen wrote:
> We are using the new persister.  When a client publishes at a high rate, messages are buffered in memory before being written to disk.  In this case, if the server crashes, those messages are lost.  I am wondering if RabbitMQ could add a server-side configuration parameter to limit the number of messages that have not yet been written to disk?

There is no such parameter, and adding one would be very tricky indeed
given the number of processes, and corresponding mailboxes, that
messages can sit in. Your best bet for limiting this is to use
transactions: if you put the publishing channel into transactional mode
and then issue a commit every, say, 100 publishes, you ensure that no
more than 100 messages (from that publisher) are ever pending being
written to disk. However, be warned that a commit must also fsync
various files, which is an expensive operation on rotating disks. SSDs
can do fsyncs in roughly constant time, about 1000 times faster than
rotating disks. Thus if performance is important here, you will want to
carefully balance the number of outstanding messages against throughput,
and maybe investigate SSDs if you find performance suffers too much.
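The batching pattern above can be sketched as follows. This is a minimal illustration using a stand-in channel object; with a real AMQP client you would put the channel into transactional mode (tx.select) and issue tx.commit after each batch. The `FakeChannel` class and `publish_in_batches` helper are hypothetical names for illustration, not part of any client library.

```python
# Sketch of batched transactional publishing: commit every
# `batch_size` publishes, so at most that many messages from this
# publisher are ever pending a write to disk.

class FakeChannel:
    """Stand-in for an AMQP channel (hypothetical, for illustration)."""
    def __init__(self):
        self.pending = 0   # publishes since the last commit
        self.commits = 0   # number of commits issued

    def tx_select(self):
        # A real channel would enter transactional mode here.
        pass

    def basic_publish(self, body):
        self.pending += 1

    def tx_commit(self):
        # On a real broker this is where the fsync cost is paid.
        self.pending = 0
        self.commits += 1

def publish_in_batches(channel, messages, batch_size=100):
    channel.tx_select()
    for i, msg in enumerate(messages, start=1):
        channel.basic_publish(msg)
        if i % batch_size == 0:
            channel.tx_commit()
    if len(messages) % batch_size != 0:
        channel.tx_commit()   # flush the final partial batch

ch = FakeChannel()
publish_in_batches(ch, ["msg-%d" % n for n in range(250)], batch_size=100)
print(ch.commits)  # 3 commits: two full batches of 100, one of 50
```

The trade-off the batch size controls is exactly the one described above: a larger batch means fewer fsyncs (better throughput) but more messages at risk if the server crashes mid-batch.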

Note that when multiple publishers issue commits at the same time, we
coalesce the commits, so the worst case is one fsync per commit, and it
is frequently fewer than that.
