[rabbitmq-discuss] request for help!

Robert Godfrey rob.j.godfrey at gmail.com
Mon May 10 16:18:06 BST 2010

Hi Tim,

Thanks for the feedback.

On 10 May 2010 15:37, Tim Fox <tim.fox at jboss.com> wrote:
> I've spent this morning going through the 1.0 PR3 spec. Firstly, it's
> considerably simpler than 0.10, which is great news :)
> Here's my 2p:
> One thing I find quite strange is that the core spec doesn't actually
> seem to mandate any queueing semantics anywhere. I've nothing
> particularly against that - in fact, the idea that a node can do
> different types of ordering is actually quite nice, however it's not a
> queueing protocol. Shouldn't AMQP therefore be renamed to AMTP (Advanced
> Message Transfer Protocol) ? ;)

:-) It is deliberate that we've chosen to break the specification up
into distinct layers.  The parts covered in PR3 do not form the
"whole" of AMQP but what we have been so far referring to as the
"core".  It is in the plan that on top of this will come definitions
of specific node types (such as Queues) and the behaviours that one
can expect from them.  An "AMQP Broker" will be defined in terms of
the node types and behaviours required to be supported.  That is not to
say that you can't have a messaging broker which speaks AMQP which is
not an "AMQP Broker".  One of the design goals was to enable vendors
of existing messaging products to layer AMQP as a protocol on top of
their existing product - something that proved very difficult with the
0-x protocols due to the very specific model it imposed on broker
behaviour right down to the transport layer.

> On a more serious note, my main concerns are mainly around complexity,
> and verbosity of the wire format. The latter I suppose is not completely
> independent from the former.
> Regarding complexity. IMO a large part of the complexity in the spec.
> seems to come from the way it tries to provide a once and only once
> delivery guarantee. AIUI the way the spec. implements this guarantee is
> something like the following when transferring a message from A to B:
> a) message to be sent from A-->B
> b) ack sent back from B-->A
> c) "ack of ack" sent from A-->B - now the delivery tag can be removed
> from the senders cache

Just to be clear, while this behaviour is permitted under the spec, it
is not mandated that every message exchange follows this pattern.
Firstly, the protocol will support different reliability guarantees
agreed at the link level (at most once, at least once, exactly once,
etc.) which will allow simpler patterns where the extra guarantees are
not required.  Secondly, even when performing exactly once messaging
these acknowledgements are expected to be batched if you are looking
for reasonable performance.
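
As a minimal sketch of how batching amortises the settlement traffic
(all names here are illustrative, not field names from the spec): the
sender holds transfers as unsettled until a single disposition covering
a range of delivery ids settles the whole batch.

```python
# Hypothetical sketch, not the spec's actual state machine: a sender
# retains unsettled transfers until a batched disposition arrives.

class Sender:
    def __init__(self):
        self.next_id = 0
        self.unsettled = {}          # delivery id -> message payload

    def transfer(self, message):
        delivery_id = self.next_id
        self.next_id += 1
        self.unsettled[delivery_id] = message
        return delivery_id

    def on_disposition(self, first, last):
        # One disposition frame acknowledges the whole range
        # [first, last], so the "ack" cost is shared by the batch.
        for delivery_id in range(first, last + 1):
            self.unsettled.pop(delivery_id, None)

s = Sender()
for i in range(10):
    s.transfer(b"msg")
s.on_disposition(0, 7)               # settle the first eight in one frame
assert sorted(s.unsettled) == [8, 9]
```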

> This results in a complex set of message states, and puts the burden on
> both sides of the link to maintain a map of delivery tags, which would
> also have to be persisted in order to provide once and only once delivery
> guarantee in event of failures of node(s). This will also require
> several syncs to storage at each transition (for durable messaging).
> I.e. slow

For each message you will need to sync when it is assigned to a
consumer and when it is dequeued if you want exactly once.  I'm not
sure how you can easily do it with fewer (other than combining the
enqueue with an assignment to a consumer).  If you are happy with at
least once then you can remove the first of these (though you'd still
need the enqueue sync - obviously).  If you want at most once, you
dequeue on sending.
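
To make the sync points concrete, here is a rough sketch under stated
assumptions: "Journal" and "send" are stand-ins for a durable store and
the wire, not anything defined by the spec.

```python
# Illustrative only: where the syncs fall for each reliability mode.

class Journal:
    def __init__(self):
        self.records, self.syncs = [], 0

    def write(self, record):
        self.records.append(record)

    def sync(self):
        self.syncs += 1              # stands in for an fsync to disk

def send(consumer, msg):
    consumer.append(msg)             # stands in for a network write

def deliver_exactly_once(journal, msg, consumer):
    journal.write(("assigned", msg))
    journal.sync()                   # sync 1: record the assignment
    send(consumer, msg)
    journal.write(("dequeued", msg))
    journal.sync()                   # sync 2: record the dequeue

def deliver_at_least_once(journal, msg, consumer):
    send(consumer, msg)              # no assignment sync: a crash
    journal.write(("dequeued", msg)) # here may cause redelivery
    journal.sync()

def deliver_at_most_once(journal, msg, consumer):
    journal.write(("dequeued", msg))
    journal.sync()                   # dequeue before sending: a crash
    send(consumer, msg)              # here may lose the message

j, box = Journal(), []
deliver_exactly_once(j, b"m1", box)
assert j.syncs == 2                  # two syncs per delivery
deliver_at_least_once(j, b"m2", box)
assert j.syncs == 3                  # one fewer for at least once
```

(The enqueue sync needed to make the message durable in the first place
is omitted here, since it is common to all three modes.)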

> Perhaps a simpler way of getting the once and only guarantee is to
> forget the delivery tag altogether and allow the sender to specify a
> de-duplication-id - this is just a user generated id - e.g. a String or
> a byte[], (can be generated from user application domain concepts - e.g.
> order number).

I'm not sure what the difference between this and the delivery-tag
is... or are you just suggesting that in the case where you don't need
exactly once then you don't need a semantically meaningful (at the
application level) tag?

> When sending a message this id can be specified on the transfer. The
> receiving end can then maintain a de-duplication cache. The
> de-duplication cache can be implemented as a circular buffer which just
> overwrites itself when full (this is what we do in HornetQ for reliable
> bridging between nodes), this means the interaction c) is not necessary
> or can just be sent intermittently to allow the cache to be cleared. The
> de-dup cache still requires syncing to non volatile storage to give the
> once and only once (for durable messages), however it requires less
> writes than the method described in the spec, and it has one less
> interaction (you can get rid of the "ack of ack")

This is pretty much how the spec works.  There is nothing to prevent
you implementing your de-dup cache as a circular buffer.  One thing
that was missed in PR3 but has been rectified since we sent it out is
that sequence-no for the low watermark of unsettled transfers should
be on the transfer frame as well as the disposition frame.  Inspecting
this value allows you to clear from your circular buffer safely.
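
A minimal sketch of that clearing rule (illustrative names, not spec
fields): the receiver remembers the delivery ids it has accepted, and
the low watermark of unsettled transfers carried on each incoming
transfer bounds what it must remember, which is what makes a fixed-size
circular buffer a valid implementation.

```python
# Sketch of a de-dup cache cleared via the sender's low watermark.

class DedupCache:
    def __init__(self):
        self.seen = set()            # ids of transfers already accepted

    def accept(self, delivery_id, low_watermark):
        # Ids below the watermark are settled at the sender and can
        # never be resent, so they are safe to forget.
        self.seen = {d for d in self.seen if d >= low_watermark}
        if delivery_id in self.seen:
            return False             # duplicate: reject
        self.seen.add(delivery_id)
        return True

c = DedupCache()
assert c.accept(0, 0) and c.accept(1, 0)
assert not c.accept(1, 0)            # blind resend is filtered out
assert c.accept(5, 2)                # watermark 2 clears ids 0 and 1
assert c.seen == {5}
```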

> On recovery after system failure, the sender just blindly sends the
> messages again, on receipt at the server any messages seen before will
> just be rejected. No need for reattaching, sending maps of unsettled
> transfers or other complex stuff like that.

> By removing all this delivery tag book-keeping and session re-attachment
> stuff, which seems unnecessary to me, would result in a dramatic
> simplification.

I presume by this you mean link reattachment - there is no concept of
session reattachment in PR3.  I'm not sure exactly how much
simplification this gives... the retained state needs to be pretty
much the same (the sender needs to hold the in-doubt messages, the
receiver the de-dup ids of the in-doubt transfers).  On re-attach all
we are doing is sending this state (which you will be keeping anyway).
The advantage in terms of not sending unnecessary duplicates is
possibly only marginal for many people (though it does help solve the
case of resuming an interrupted transfer of a large message), however
it also allows the disambiguation of the case where one side has
actually lost state, so we can determine what to do with the messages
where we can no longer guarantee exactly once.  Sending as maps rather
than as a sequence allows each end to retain the data without
necessarily having to remember the sequence.  The point is really that
you can *choose* to implement this all using circular buffers (you
don't *need* to implement a map)... but we aren't forcing clients to
persistently remember the sequence in which they sent/received.
> Regarding verbosity of the wire format for message transfer; if you're
> just passing a 12 byte message (e.g. stock price - 4 byte identifier + 8
> byte price) then the overall encoded size is much higher than 12 bytes.
> This will kill performance for small messages, making any AMQP compliant
> implementation unable to compete in the world of lightweight
> publish/subscribe messaging with other, non AMQP implementations which
> don't have to conform to the AMQP wire format and can produce much more
> lightweight encodings. The key to perf with lightweight pub/sub is to
> make the encoded message size as small as possible and cram as many
> messages as you can into single socket writes.
> Now, lightweight pub/sub may not be the target domain for AMQP, in which
> case it does not need to worry about it, however if a particular
> messaging system supports multiple protocols including AMQP, it will not
> do much for the adoption of AMQP if the best performance is not
> achievable using the AMQP protocol - users will fall back to using the
> proprietary protocol offered by the vendor.

I haven't actually worked out what the per message overhead will be...
we've tried to make as many fields optional as possible to reduce
overhead, and we'll continue to do so.  The efficiency (in terms of
number of bytes per value) of the encoding has not been a focus simply
because the protocol is designed to admit alternative encodings at a
later date.  In many use cases we've seen that the ease of
encoding/decoding has actually proven a bigger determinant on
performance than the number of bytes on the wire.  Our view was that
trying to optimise this stuff too early would perhaps lead us to focus
on the wrong areas.  Having said all this, AMQP is specifically not
targeting high volume low latency pub/sub at this juncture... and this
is an area where protocols targeted at that market will always likely
have an advantage over a more general protocol.

> A short comment on transactions. I have to be honest here, I spent about
> 30 mins reading the chapter on transactions several times. I have to say
> at the end of it I am not much further understanding it. :(
> However maybe that is moot - a part of me is thinking that transactions
> don't really belong in the core spec. Perhaps the core spec should be
> concerned with allowing the reliable movement of messages between nodes.
> With that in place, transactions could be layered on top in another spec (?)

The idea is that the transactions are already layered on top of the
transport, rather than being a part of it as in prior versions of AMQP.
It is quite possible that the transaction documentation needs a little
more polishing to make this clear :-)  There will be more work
coming out on this, including more detailed work on distributed
transactions.

> --
> Sent from my BBC Micro Model B

The computer of champions :-)

-- Rob
