[rabbitmq-discuss] detecting publish failure across restart

Matthias Radestock matthias at rabbitmq.com
Tue Jun 15 10:38:13 BST 2010


On 01/06/10 19:55, Jesse Myers wrote:
> Can someone elaborate on the expected bottlenecks when using
> transactions? From what I've mocked up so far, it looks like the
> majority of the overhead is associated with tx.commit (no surprise
> there) and that I can get much better performance by batching
> messages within a single transaction.

Yes, that is correct. The main cost is the fsync that a commit requires,
which is unavoidable. Also, in the current ("old") persister, commits
happen at most once every 2ms, to allow multiple concurrent
transactions to be coalesced into a single fsync. The new persister is
somewhat more clever here.
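To illustrate why batching inside one transaction helps (this is a hypothetical sketch, not client or broker code; the class and function names are made up): if every tx.commit costs one fsync, then committing once per batch of N messages cuts the number of fsyncs by roughly a factor of N.

```python
# Hypothetical sketch: count how many commits (and hence fsyncs) occur
# when committing after every message vs. after every batch of N.

class FakeTxChannel:
    """Stand-in for an AMQP channel; it only counts operations."""
    def __init__(self):
        self.publishes = 0
        self.commits = 0

    def basic_publish(self, body):
        self.publishes += 1

    def tx_commit(self):
        self.commits += 1  # each commit would force an fsync on the broker


def publish_committing_every(channel, messages, batch_size):
    """Publish each message immediately; commit once per batch_size messages."""
    for i, msg in enumerate(messages, start=1):
        channel.basic_publish(msg)
        if i % batch_size == 0:
            channel.tx_commit()
    if len(messages) % batch_size != 0:
        channel.tx_commit()  # commit the final partial batch


per_message = FakeTxChannel()
publish_committing_every(per_message, ["m"] * 1000, batch_size=1)

batched = FakeTxChannel()
publish_committing_every(batched, ["m"] * 1000, batch_size=100)

print(per_message.commits)  # 1000 commits -> 1000 fsyncs
print(batched.commits)      # 10 commits -> 10 fsyncs
```

The same 1000 publishes are sent either way; only the commit frequency, and so the fsync frequency, changes.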

> I also looked at running multiple tx.commit operations concurrently
> in different channels in different threads, but there doesn't
> actually appear to be any performance benefit to doing so. Am I right
> to conclude that only a single transaction is allowed at a time per
> connection?

In the current persister, transactions involving the *same queue* are
completely serialised. That changes in the new persister.

> If so, this suggests queuing up messages client side and having a
> single worker thread submit messages in batches. Does this sound
> reasonable?

You don't need to submit the messages/acks themselves in batches; in
fact, doing so is detrimental to performance. Just send them straight
away, but avoid performing a *commit* after every action.
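A sketch of that shape (hypothetical names throughout; a fake channel stands in for a real AMQP channel): each message is published the moment it arrives, with no client-side buffering, and tx.commit is only issued every few actions or on an explicit flush.

```python
# Hypothetical sketch of the advice above: publish immediately, but
# commit only every `commit_every` publishes (plus a final flush).

class FakeChannel:
    """Stand-in for an AMQP channel; it records operations in order."""
    def __init__(self):
        self.log = []

    def basic_publish(self, body):
        self.log.append(("publish", body))

    def tx_commit(self):
        self.log.append(("commit", None))


class TxPublisher:
    def __init__(self, channel, commit_every=100):
        self.channel = channel
        self.commit_every = commit_every
        self.uncommitted = 0

    def send(self, body):
        # Publish straight away -- the broker can start work on this
        # message while later ones are still being produced.
        self.channel.basic_publish(body)
        self.uncommitted += 1
        if self.uncommitted >= self.commit_every:
            self.flush()

    def flush(self):
        if self.uncommitted:
            self.channel.tx_commit()
            self.uncommitted = 0


ch = FakeChannel()
pub = TxPublisher(ch, commit_every=3)
for i in range(7):
    pub.send(f"msg-{i}")
pub.flush()

commits = [op for op, _ in ch.log if op == "commit"]
print(len(commits))  # 3 commits for 7 messages (after 3, 6, and the flush)
```

Note that the publishes interleave with the commits in the log: nothing is held back on the client, only the commit is deferred.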



More information about the rabbitmq-discuss mailing list