[rabbitmq-discuss] Handling network reliability problems on the publisher side
simon at rabbitmq.com
Mon Dec 13 15:42:56 GMT 2010
On 13/12/10 15:17, Stefan Kaes wrote:
> It looks like no programming cleverness on our side can achieve what we
> want: have the publisher block as soon as a message cannot be accepted
> by the intended rabbitmq server, because, as far as we understand the
> amqp protocol, no protocol level acknowledgements are sent which the
> publisher could wait for (and optionally timeout), before sending the
> next message.
> This is all fine to achieve high throughput on the publisher side, but
> doesn’t quite fit our use case. For some messages, we really need to
> make reasonably sure the message has been received by a rabbitmq server,
> before we continue. We could then use timeouts to detect network
> partitioning and buffer messages locally until they can be sent (only if
> none of our three redundant servers can be reached, of course).
> * is the analysis correct?
Yes. Publishes are fully async in AMQP.
> * is the JSON-RPC plugin stable enough to be used in production?
Interesting question. Our official position is that, like all the plugins,
it is "unsupported". However, it's pretty clear that some plugins are
more supported than others, and JSON-RPC is pretty low down the pecking
order.
> * maybe there’s a better solution available?
The next release will feature publisher acknowledgements, where Rabbit
will stream basic.acks back at you (if you ask for that). This sounds
like exactly what you're looking for.
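To illustrate the confirm-based flow described above, here is a minimal sketch. The `StubChannel` class is hypothetical, standing in for a real AMQP client so the example is self-contained; a real client would wait on an actual basic.ack from the broker rather than a simulated one.

```python
import queue
import threading

class StubChannel:
    """Hypothetical stand-in for a real AMQP channel in confirm mode.
    A real client would receive basic.ack frames from the broker;
    here an ack is simulated on a timer."""
    def __init__(self):
        self._acks = queue.Queue()

    def basic_publish(self, body):
        # The broker acks asynchronously; simulated with a short delay.
        threading.Timer(0.01, self._acks.put, args=(True,)).start()

    def wait_for_confirm(self, timeout):
        try:
            return self._acks.get(timeout=timeout)
        except queue.Empty:
            return False  # treat a timeout as "not confirmed"

def publish_blocking(ch, body, timeout=5.0):
    """Publish one message and block until the broker confirms it.
    Returns False on timeout so the caller can buffer and retry,
    as the original poster wants to do during a network partition."""
    ch.basic_publish(body)
    return ch.wait_for_confirm(timeout)

ch = StubChannel()
confirmed = publish_blocking(ch, b"important message")
```

The timeout is the key piece for the use case in the question: an unconfirmed publish signals that the server may be unreachable, at which point the application can fail over to another node or buffer locally.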
In the meantime, you could get away with publishing inside a
transaction. When you commit the transaction, and get commit-ok back,
you know Rabbit has got the message (and persisted it if it's
persistent). Unfortunately this imposes a cost per commit - tweak the
number of publishes per commit to tune performance versus maximum number
of outstanding messages.
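The batching trade-off can be sketched as follows. The `TxChannel` class is a hypothetical stand-in whose `tx_commit` mirrors AMQP's tx.commit (a real client blocks until commit-ok comes back); only the batching logic is the point here.

```python
class TxChannel:
    """Hypothetical transactional channel. tx_commit mirrors AMQP's
    tx.commit: once commit-ok returns, all pending publishes are
    known to be on the broker."""
    def __init__(self):
        self.pending = 0    # publishes not yet committed (at risk)
        self.committed = 0  # publishes the broker has confirmed

    def basic_publish(self, body):
        self.pending += 1

    def tx_commit(self):
        self.committed += self.pending
        self.pending = 0

def publish_batched(ch, messages, batch_size=100):
    """Commit every batch_size publishes. A larger batch amortises the
    per-commit round trip (better throughput); a smaller batch bounds
    the number of messages outstanding if the connection fails."""
    for i, body in enumerate(messages, start=1):
        ch.basic_publish(body)
        if i % batch_size == 0:
            ch.tx_commit()
    if ch.pending:
        ch.tx_commit()  # flush the final partial batch

ch = TxChannel()
publish_batched(ch, [b"msg"] * 250, batch_size=100)
```

With `batch_size=1` this degenerates to one round trip per message (maximum safety, minimum throughput), which is the dial the paragraph above describes.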
Staff Engineer, RabbitMQ
SpringSource, a division of VMware