[rabbitmq-discuss] limit number of messages buffered in memory in new persister
matthias at lshift.net
Wed Mar 3 09:22:35 GMT 2010
alex chen wrote:
> One problem is that the client needs to keep a copy of uncommitted
> messages to handle a broker restart.
Publishing is asynchronous. Hence a publisher has no idea when a
particular message has been accepted by the server and safely written to
disk for recovery in the event of a restart, *except* by using
transactions. So if you don't use transactions, you'd still have to keep
copies of messages ... but you won't know how many, or for how long.
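To make that concrete, here is a minimal sketch of the publisher-side buffering this implies. The class and method names are hypothetical, and the channel is a stand-in for a real AMQP channel; the point is only that copies live in the buffer until a (synchronous) tx.commit returns:

```python
from collections import deque

class BufferedPublisher:
    """Illustrative sketch: keep copies of published messages until a
    tx.commit succeeds, so they can be republished if the broker
    restarts mid-transaction. 'channel' stands in for a real AMQP
    channel object."""

    def __init__(self, channel):
        self.channel = channel
        self.uncommitted = deque()  # copies awaiting commit

    def publish(self, exchange, routing_key, body):
        self.uncommitted.append((exchange, routing_key, body))
        self.channel.basic_publish(exchange, routing_key, body)

    def commit(self):
        # tx.commit is synchronous: once it returns, the broker has
        # accepted everything published in this transaction.
        self.channel.tx_commit()
        self.uncommitted.clear()

    def republish_after_restart(self):
        # On reconnect, resend anything never confirmed by a commit.
        pending = list(self.uncommitted)
        self.uncommitted.clear()
        for exchange, routing_key, body in pending:
            self.publish(exchange, routing_key, body)
```

Without the commit step there is no event that lets the buffer be drained, which is exactly the "you won't know how many and for how long" problem above.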
> Looking into the server's src code, it seems to me rabbit_reader is
> the process that accepts the publish. I am wondering, when it sends
> the message to the message writer's inbox, can it wait for the ack
> from the message writer? Then it could stop accepting publishes from
> the client if there are too many un-acked messages.
We have spent a good few person-weeks discussing various options along
those lines. Here are a few points to note:
1) introducing any synchrony into the message flow carries a significant
performance cost.
2) there is no way for a server to stop accepting publishes without also
ceasing to accept anything else, notably acks, since any AMQP
connection can carry a mixture of commands.
3) AMQP 0-8/9/9-1 are in desperate need of an inexpensive mechanism that
allows clients to find out when publishes have been accepted. We have a
few design ideas here and may extend the protocol to implement them. Not
any time soon though.
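For illustration, one shape such a mechanism could take (purely a design sketch; AMQP 0-8/9/9-1 has no such frame, and all names here are hypothetical) is client-assigned sequence numbers with a single cumulative acknowledgement from the server:

```python
class PublishTracker:
    """Hypothetical client-side tracker for an inexpensive publish-ack
    scheme: each publish gets a sequence number, and the server sends
    one cumulative 'everything up to N is accepted' notification.
    Design sketch only -- not part of any existing AMQP version."""

    def __init__(self):
        self.next_seq = 1
        self.pending = {}  # seq -> message copy awaiting server ack

    def record_publish(self, body):
        seq = self.next_seq
        self.next_seq += 1
        self.pending[seq] = body
        return seq

    def handle_ack_up_to(self, n):
        # One cumulative ack releases many buffered copies at once,
        # which is what makes the mechanism cheap.
        for seq in [s for s in self.pending if s <= n]:
            del self.pending[seq]
```

The appeal over transactions is that the acknowledgement is asynchronous and batched, so the publisher never has to stall waiting for a commit.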
> This would simplify the client's publishing a lot - no need to use commit
I don't see commits as overly burdensome. After all, what you are
suggesting above is in effect a synchronous publish, in which case just
introduce a helper method that does a publish followed by a commit.
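Such a helper is a few lines. This sketch assumes a channel on which tx.select has already been issued; the method names follow a pika-style Python client API, hedged accordingly:

```python
def publish_synchronously(channel, exchange, routing_key, body):
    """Publish followed by a commit: once tx_commit returns, the
    broker has accepted the message. Assumes the channel is already
    in transactional mode (tx.select issued earlier)."""
    channel.basic_publish(exchange, routing_key, body)
    channel.tx_commit()
```

Wrapping the pair in one call gives exactly the synchronous-publish behaviour asked for, at the cost of one round trip per message.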
> no need to listen for channel flow.
There are good reasons for the existence of something like channel.flow:
it allows clients to continue performing operations other than publish,
notably ack'ing of messages.
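A sketch of how a client might react to channel.flow makes the distinction concrete (illustrative names and a stub channel, not any particular client library): publishes are held locally while flow is inactive, but acks keep going out, since acks help the broker drain its backlog:

```python
class FlowAwareClient:
    """Illustrative handling of channel.flow: while the broker has set
    flow inactive, defer outgoing publishes locally but keep sending
    acks. 'channel' is a stand-in for a real AMQP channel."""

    def __init__(self, channel):
        self.channel = channel
        self.flow_active = True
        self.held = []  # publishes deferred while flow is off

    def on_channel_flow(self, active):
        self.flow_active = active
        if active:
            # Flow restored: flush everything we held back.
            for args in self.held:
                self.channel.basic_publish(*args)
            self.held.clear()

    def publish(self, exchange, routing_key, body):
        if self.flow_active:
            self.channel.basic_publish(exchange, routing_key, body)
        else:
            self.held.append((exchange, routing_key, body))

    def ack(self, delivery_tag):
        # Acks are still sent while flow is inactive -- this is the
        # capability channel.flow preserves and a hard stop would not.
        self.channel.basic_ack(delivery_tag)
```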