[rabbitmq-discuss] request for help!

Tim Fox tim.fox at jboss.com
Mon May 10 17:11:09 BST 2010

On 10/05/10 16:18, Robert Godfrey wrote:
> Hi Tim,
> thanks for the feedback
> On 10 May 2010 15:37, Tim Fox<tim.fox at jboss.com>  wrote:
>> I've spent this morning going through the 1.0 PR3 spec. Firstly, it's
>> considerably simpler than 0.10, which is great news :)
>> Here's my 2p:
>> One thing I find quite strange is that the core spec doesn't actually
>> seem to mandate any queueing semantics anywhere. I've nothing
>> particularly against that - in fact, the idea that a node can do
>> different types of ordering is actually quite nice, however it's not a
>> queueing protocol. Shouldn't AMQP therefore be renamed to AMTP (Advanced
>> Message Transfer Protocol) ? ;)
> :-) It is deliberate that we've chosen to break the specification up
> into distinct layers.  The parts covered in PR3 do not form the
> "whole" of AMQP but what we have been so far referring to as the
> "core".  It is in the plan that on top of this will come definitions
> of specific node types (such as Queues) and the behaviours that one
> can expect from them.  An "AMQP Broker" will be defined in terms of the
> node types and behaviours required to be supported.  That is not to
> say that you can't have a messaging broker which speaks AMQP which is
> not an "AMQP Broker".  One of the design goals was to enable vendors
> of existing messaging products to layer AMQP as a protocol on top of
> their existing product - something that proved very difficult with the
> 0-x protocols due to the very specific model it imposed on broker
> behaviour right down to the transport layer.
>> On a more serious note, my main concerns are mainly around complexity,
>> and verbosity of the wire format. The latter I suppose is not completely
>> independent from the former.
>> Regarding complexity. IMO a large part of the complexity in the spec.
>> seems to come from the way it tries to provide a once and only once
>> delivery guarantee. AIUI the way the spec. implements this guarantee is
>> something like the following when transferring a message from A to B:
>> a) message to be sent from A-->B
>> b) ack sent back from B-->A
>> c) "ack of ack" sent from A-->B - now the delivery tag can be removed
>> from the senders cache
> Just to be clear, while this behaviour is permitted under the spec, it
> is not mandated that every message exchange follows this pattern.
> Firstly the protocol will support different reliability guarantees
> agreed at the link level (at most once, at least once, exactly once,
> etc) which will allow simpler patterns where the extra guarantees are
> not required.
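To make the a/b/c exchange above concrete, here's a rough Python sketch of it (the class and method names are invented for illustration, they're not from the spec):

```python
# Hypothetical sketch of the three-step exactly-once handshake:
# a) transfer A --> B, b) ack B --> A, c) "ack of ack" A --> B.
# Names are illustrative, not taken from the AMQP 1.0 spec.

class SenderA:
    def __init__(self):
        self.unsettled = {}              # delivery-tag -> message, kept until step (c)

    def transfer(self, tag, message, receiver):
        # a) message sent A --> B; tag cached until the ack arrives
        self.unsettled[tag] = message
        receiver.on_transfer(tag, message, self)

    def on_ack(self, tag, receiver):
        # b) ack arrived from B; c) send the "ack of ack" and only
        # now drop the delivery tag from the sender's cache
        del self.unsettled[tag]
        receiver.on_ack_of_ack(tag)

class ReceiverB:
    def __init__(self):
        self.seen = {}                   # delivery-tag -> message, kept until step (c)
        self.delivered = []

    def on_transfer(self, tag, message, sender):
        if tag not in self.seen:         # a redelivery of a known tag is ignored
            self.seen[tag] = message
            self.delivered.append(message)
        sender.on_ack(tag, self)         # b) ack sent back B --> A

    def on_ack_of_ack(self, tag):
        # c) the sender has settled, so the receiver can forget the tag too
        self.seen.pop(tag, None)
```

Note that the receiver has to hold the tag until step (c), which is exactly the extra state and extra interaction being discussed below.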
I believe you can use a simpler pattern, but still retain the once and 
only once guarantee.

This is how we do it in HornetQ to give us once and only once when 
bridging messages from one server A to another B.

The sender doesn't maintain any delivery map at all. Each message has a 
de-duplication-id which can either be set by the client when the message 
was originally sent, or can be set by the server and derived from the 
message and node id of the sender.

Since the id can be derived from the message, or is already persisted as 
part of the message, there's no need to persist it separately or maintain 
a map on the client side.

The receiving node B simply maintains a circular cache of received ids 
(which is optionally persisted). If B sees the same de-duplication-id 
more than once it simply ignores the message.

After failure, the sender can just carry on sending any unacked messages 
as normal and the server will reject any dups.
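In rough Python, the receiver side looks something like this (a sketch of the idea only; the names are made up, this isn't HornetQ's actual API):

```python
from collections import OrderedDict

# Illustrative sketch of the de-duplication scheme described above.
# The receiving node keeps a fixed-size circular cache of recently
# seen de-duplication ids and silently drops any repeat.

class DedupCache:
    """Fixed-size circular cache of recently seen de-duplication ids."""
    def __init__(self, capacity):
        self.capacity = capacity
        self.ids = OrderedDict()

    def seen_before(self, dedup_id):
        if dedup_id in self.ids:
            return True
        self.ids[dedup_id] = True
        if len(self.ids) > self.capacity:
            self.ids.popitem(last=False)   # evict the oldest id
        return False

class ReceivingNode:
    def __init__(self, cache_size=1000):
        self.cache = DedupCache(cache_size)
        self.accepted = []

    def receive(self, message):
        # The id travels with (or is derived from) the message itself,
        # so the sender needs no delivery map at all.
        if self.cache.seen_before(message["dedup_id"]):
            return False                   # duplicate: silently ignore
        self.accepted.append(message)
        return True
```

The cache size just needs to cover the window of messages that could plausibly be redelivered after a failure.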

B sends acks back to A asynchronously in a different stream (so 
everything is pipelined for performance). When A receives the ack, then 
the message can be removed from storage.

This gives us once and only once without having the c) interaction above.
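Putting the sender side together with that, a rough sketch of the whole pipelined flow (again, invented names, not real HornetQ code):

```python
# Hypothetical sketch of the pipelined bridge: the sender streams
# messages without waiting, acks come back on a separate path, and a
# message leaves the sender's storage only when its ack arrives.
# After a failure, anything still in storage is simply re-sent and
# the receiver drops the duplicates.

class BridgeReceiver:
    def __init__(self, sender):
        self.sender = sender
        self.seen = set()            # de-duplication ids already processed
        self.delivered = []

    def receive(self, message):
        if message["id"] not in self.seen:
            self.seen.add(message["id"])
            self.delivered.append(message["body"])
        # the ack travels back asynchronously on its own stream;
        # modelled here as a direct call for simplicity
        self.sender.on_ack(message["id"])

class BridgeSender:
    def __init__(self):
        self.storage = {}            # id -> message, persisted until acked

    def send(self, message, receiver):
        self.storage[message["id"]] = message
        receiver.receive(message)    # no blocking wait for the ack

    def on_ack(self, msg_id):
        # ack arrived on the return stream: safe to forget the message
        self.storage.pop(msg_id, None)

    def resend_unacked(self, receiver):
        # after failure, just replay everything unacked; the receiver
        # rejects any duplicates via its id set
        for message in list(self.storage.values()):
            receiver.receive(message)
```

There's no step (c) anywhere in this flow, and the sender's only write to storage is the message itself.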

I believe this gives us one less transfer on the wire and, more 
importantly, one less write to storage - on the sender side.

I can see that your scheme accomplishes the same goal, but it seems 
somewhat more complex, unless I am missing something (quite possible ;) )
