[rabbitmq-discuss] Confusing disk free space limit warning
mark.hingston at vardr.com
Tue Sep 18 03:22:49 BST 2012
On 17/09/12 7:41 PM, Matthias Radestock wrote:
> Ah. That's a bug in the stomp plug-in. It's been around since 2.8.3.
> Will fix.
> The upshot is that from then onwards alarm handling is broken, which
> explains why the subsequent clearing of the disk alarm wasn't
> unblocking producers for you.
Ok, thanks for the explanation. So is my best option to disable the
stomp plugin until this is fixed?
> The fact that the server is "slow" because it encountered an alarm
> condition is neither here nor there - as far as the client is concerned
> it's just talking to a slow server.
> So your question really comes down to how would you expect a client to
> detect and deal with a slow server / congested network.
Thanks for the explanation. I guess now I'm trying to figure out what
the best way is to defend against this situation so that my messages
don't get lost. To make this happen I was thinking that the client
should not report the message successfully sent to our upstream
components until it gets some confirmation from rabbit that the message
has actually been added to a queue. On that note, it looks like a while
back Simon MacMullen gave this answer in the "Guaranteed delivery" topic:
" At the moment the only way you can guarantee that a publish has gone
through is to publish inside a transaction - when the transaction commit
completes the message is on disk (assuming durable queue / persistent
message). This is a little heavyweight though. In the future we intend
to introduce streaming publisher acks to do the same job in a more async
[...]"
Is there any chance a more lightweight alternative to accomplish this
has emerged in the past couple of years?
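For context, the client-side bookkeeping I have in mind would look roughly
like the sketch below: track each published message by a delivery tag and
only report it upstream once the broker has confirmed it. The broker class
here is a stand-in I made up for illustration, not a real AMQP client
library; the ack semantics (including the "multiple" flag) mirror how I
understand broker acknowledgements to work.

```python
class StubBroker:
    """Stand-in for RabbitMQ: just records what it is asked to publish."""
    def __init__(self):
        self.queued = []

    def publish(self, tag, message):
        self.queued.append((tag, message))


class ConfirmingPublisher:
    """Only report a message 'sent' after the broker has acked its tag."""
    def __init__(self, broker):
        self.broker = broker
        self.next_tag = 1
        self.unconfirmed = {}   # tag -> message still awaiting an ack
        self.delivered = []     # messages safe to report upstream

    def publish(self, message):
        tag = self.next_tag
        self.next_tag += 1
        self.unconfirmed[tag] = message
        self.broker.publish(tag, message)
        return tag

    def handle_ack(self, tag, multiple=False):
        # AMQP-style ack: with multiple=True, a single ack confirms
        # every outstanding tag up to and including 'tag'.
        tags = [t for t in self.unconfirmed if t <= tag] if multiple else [tag]
        for t in tags:
            self.delivered.append(self.unconfirmed.pop(t))


pub = ConfirmingPublisher(StubBroker())
for msg in ["a", "b", "c"]:
    pub.publish(msg)

pub.handle_ack(2, multiple=True)   # broker confirms tags 1 and 2
print(pub.delivered)               # ['a', 'b']
print(sorted(pub.unconfirmed))     # [3]
```

The point is just that nothing gets reported to our upstream components
until it leaves the unconfirmed set, so a dropped or unacked message stays
visible to the sender.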