[rabbitmq-discuss] When is persister.log.previous made?
matthias at lshift.net
Mon Oct 19 13:31:43 BST 2009
> One of our systems overfed the rabbit, and it's now a bit angry. The
> strange problem that we're having, though, is that it seems that the
> commit after the thousandth ack'd message is triggering the persister
> log rollover, which is then causing beam to run out of memory. The
> only thing we're doing is basic_get'ing messages, acking them, and
> then committing (the channel is transactional). We've also tried
> committing every thousand messages; either way, the first commit after
> the thousandth ack'd message is causing the persister log rollover,
> followed immediately by beam crashing due to being out of RAM.
The logic for when the persister log is rolled over is quite complex,
involving both time-based and message-count-based heuristics.
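As a rough illustration (this is a sketch, not RabbitMQ's actual Erlang code, and the threshold values are made up to mirror the roughly-one-thousand-messages behaviour you describe), a combined time/count heuristic might look like:

```python
import time

# Illustrative sketch only -- not RabbitMQ internals. The thresholds
# (1000 entries, 5 seconds) are invented for illustration.
class RolloverPolicy:
    def __init__(self, max_entries=1000, max_age_seconds=5.0):
        self.max_entries = max_entries
        self.max_age_seconds = max_age_seconds
        self.entries_since_roll = 0
        self.last_roll = time.monotonic()

    def record_entry(self):
        """Count one persister log entry (e.g. an ack or commit marker)."""
        self.entries_since_roll += 1

    def should_roll(self):
        """Roll when either the entry-count or the age threshold is hit."""
        aged = time.monotonic() - self.last_roll >= self.max_age_seconds
        full = self.entries_since_roll >= self.max_entries
        return aged or full

    def rolled(self):
        """Reset both counters after a rollover has been performed."""
        self.entries_since_roll = 0
        self.last_roll = time.monotonic()
```

Under a policy like this, a steady stream of acks trips the count threshold first, which would match the "first commit after the thousandth ack" pattern you are seeing.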
As an aside, why are you using transactions on the consuming side? I
have yet to come across a use case where that is genuinely required.
> Does anybody know why rolling the persister log would cause the system
> to run out of memory? It seems like a strange place to need to
> allocate a lot of RAM.
When rolling the persister log, rabbit writes a snapshot of all
currently persisted messages, so it needs to allocate memory in
proportion to the entire persisted backlog, not just the entries
written since the last rollover.
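A toy model of that behaviour (not RabbitMQ internals; the class and method names here are invented for illustration) shows why the allocation is proportional to the backlog rather than to recent activity:

```python
import pickle

# Toy model, not RabbitMQ code: a persister that counts incremental log
# entries and, on rollover, serialises a snapshot of *every* message
# still outstanding. The snapshot's size tracks the whole backlog,
# which is why an overfed queue can exhaust RAM at rollover time.
class ToyPersister:
    def __init__(self):
        self.outstanding = {}   # msg_id -> payload of unacked persistent messages
        self.log_entries = 0    # entries appended since the last rollover

    def publish(self, msg_id, payload):
        self.outstanding[msg_id] = payload
        self.log_entries += 1

    def ack(self, msg_id):
        self.outstanding.pop(msg_id, None)
        self.log_entries += 1

    def roll(self):
        """Simulate rollover: snapshot all outstanding messages at once."""
        snapshot = pickle.dumps(self.outstanding)  # whole backlog in memory
        self.log_entries = 0
        return len(snapshot)                       # bytes the snapshot took
```

With, say, a million 1 kB messages outstanding, roll() has to materialise on the order of a gigabyte in one go, even if only a handful of log entries were written since the previous rollover.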
> Also, will the future (1.8?) persister still do the process described
> in https://dev.rabbitmq.com/wiki/RabbitPersisterDesign ?
> Writing the entire rabbit state to disk every so often doesn't seem
> like it would work terribly well when storing huge amounts of data.