[rabbitmq-discuss] max amount of persisted data
Matthias Radestock
matthias at lshift.net
Tue Mar 17 12:05:40 GMT 2009
Tsuraan,
Tony Garnock-Jones wrote:
> Matthias Radestock wrote:
>> tsuraan wrote:
>>> Would replacing the queue currently used for unacked_message_q in
>>> rabbit_channel.erl with a queue that overflows to a tempfile be
>>> sufficient to resolve this?
>> The short answer is 'no'.
>
> :-) Or, seen another way, "yes". It's a start. It'd work for when
> non-persistent messages were being sent through the broker.
No, it won't. My original answer is correct. unacked_message_q in
rabbit_channel keeps track of the ids of messages that have been sent to
consumers but have not been ack'ed yet. It does not hold undelivered
messages and it does not hold message content. That all happens in
rabbit_amqqueue_process' #q.q field (which I reckon Tony thought you
were talking about), and rabbit_persister's #psnapshot.queues field (for
persistent messages).
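
To make that concrete, here is a rough sketch using the stdlib 'queue'
module; the module name and tuple shapes are made up for illustration
and are not what the rabbit source actually does. The channel-side
structure only tracks small {DeliveryTag, MsgId} pairs, whereas the
queue process holds the messages themselves, so spilling the former to
a tempfile does not free the memory occupied by message content.

%% Illustrative sketch only; not the actual RabbitMQ code.
-module(unacked_sketch).
-export([demo/0]).

demo() ->
    %% AMQP-queue process side: undelivered messages, content included
    QueueContents = queue:from_list(
                      [{message, Id, <<"...payload...">>} || Id <- [101, 102, 103]]),

    %% channel side: only {DeliveryTag, MsgId} pairs for messages that
    %% have been delivered to a consumer but not yet ack'ed (no payloads)
    Unacked = queue:in({2, 102}, queue:in({1, 101}, queue:new())),

    {queue:len(QueueContents), queue:to_list(Unacked)}.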
> The full solution needs to carefully balance three interacting components:
>
> - persistent storage of messages
> - the memory needs of the queue-data-structure of messages in each
> AMQP-queue
> - TX-class transactions
+ fanout (not keeping multiple copies of messages; see the sketch below)
+ clustering (which affects decisions on where messages should be
persisted, and how recovery works)
+ non-linear access (for priorities and message expiry)
+ a few other things we will no doubt discover
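
Just to illustrate the fanout point (the module, function names and use
of 'dict' below are invented for the example, not a description of
rabbit): fanning a message out to N queues can keep a single
reference-counted copy of the body, with each queue recording only the
message id.

%% Illustrative sketch only: one shared, reference-counted copy of each
%% message body; the queues themselves would store just the ids.
-module(fanout_sketch).
-export([publish/4, ack/2]).

%% Store maps MsgId -> {RefCount, Body}; NQueues is the number of
%% queues the message is routed to.
publish(MsgId, Body, NQueues, Store) ->
    dict:store(MsgId, {NQueues, Body}, Store).

%% A queue is done with the message: decrement the count, and drop the
%% single stored copy once no queue references it any more.
ack(MsgId, Store) ->
    case dict:fetch(MsgId, Store) of
        {1, _Body}           -> dict:erase(MsgId, Store);
        {N, Body} when N > 1 -> dict:store(MsgId, {N - 1, Body}, Store)
    end.

The body is kept once no matter how many queues it is routed to; only
the per-queue bookkeeping grows with the fanout.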
Matthias.