[rabbitmq-discuss] RabbitMQ .NET client BasicConsume and HandleBasicDeliver question

Simone Busoli simone.busoli at gmail.com
Wed Jun 15 20:36:17 BST 2011


I'd say that if you need strict ordering then the subscriber will have to
handle it, perhaps by buffering incoming messages as soon as it detects a
gap in the sequence and resuming processing once the missing message
arrives. This gets messy quickly, however, if you happen to lose another
message while you are still waiting for the first one. And so on,
recursively.
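As a rough illustration, such a resequencing buffer could look like the
following (a minimal sketch in Python, independent of any RabbitMQ client
library; it assumes the producer stamps each message with a contiguous
integer sequence number starting at 0):

```python
class Resequencer:
    """Buffers out-of-order messages and releases them in sequence order.

    Assumes the producer stamps each message with a contiguous integer
    sequence number starting at 0.
    """

    def __init__(self):
        self.next_seq = 0   # next sequence number we expect to process
        self.pending = {}   # buffered out-of-order messages, keyed by seq

    def accept(self, seq, message):
        """Accept one incoming message; return the list of messages that
        can now be processed, in order (possibly empty)."""
        self.pending[seq] = message
        ready = []
        # Drain the buffer for as long as the expected message is present.
        while self.next_seq in self.pending:
            ready.append(self.pending.pop(self.next_seq))
            self.next_seq += 1
        return ready
```

A message that arrives ahead of a gap simply sits in `pending` until the
requeued message is redelivered, which is exactly the unbounded-memory
risk described above.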

Are there any other approaches to achieve ordering guarantees?
On Jun 15, 2011 6:14 PM, "Emile Joubert" <emile at rabbitmq.com> wrote:
> Hi,
>
> Rabbit preserves order of messages under strict conditions, as
> discussed. Your case does not meet those conditions, so the ordering
> guarantees do not apply. The current version of rabbit (2.5.0 at the
> time of writing) will therefore not deliver those messages in order
> (unless the queue happens to be empty when requeues take place).
> Unfortunately stopping the producer will not change that.
>
> You could try to meet the conditions under which the broker preserves
> message order or you could detect the problem and attempt to deal with
> it downstream.
>
>
> -Emile
>
>
> On 15/06/11 16:31, T-zex wrote:
>> Thanks again!
>>
>> So, in case we detect that the current message sequence number is
>> bigger than expected, do we need to loop through the queue until the
>> required message is found? Does this mean that the producer must be
>> stopped? Otherwise all the other messages at the front of the queue
>> will become unacked, go back to the queue, and new ones will appear at
>> the front - basically we want to avoid shuffling.
>>
>>
>>
>> On Wed, Jun 15, 2011 at 4:03 PM, Emile Joubert <emile at rabbitmq.com> wrote:
>>> Hi,
>>>
>>> Message order guarantees only apply along the same path between a single
>>> producer and consumer. The guarantees do not hold if messages get
>>> requeued. Rabbit requeues messages at the back of the queue, i.e.
>>> treats them as new messages.
>>>
>>> Your producer may add a sequence number to messages so that consumers
>>> can detect this. Consumers may also inspect the "redelivered" flag to
>>> check whether a message has been delivered before.
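For illustration, that detection step could be sketched like this (a
Python sketch, independent of any client library; the producer-side
sequence numbers and the classification names are assumptions made for
this sketch, not part of the .NET client API):

```python
def classify(expected_seq, seq, redelivered):
    """Classify a delivery relative to the sequence number the consumer
    expects next.

    expected_seq -- next sequence number the consumer expects
    seq          -- sequence number stamped on the message by the producer
    redelivered  -- the broker's redelivered flag for this delivery
    """
    if seq == expected_seq:
        return "in-order"
    if seq < expected_seq and redelivered:
        # Already processed once; safe to ack and drop.
        return "duplicate-redelivery"
    if seq > expected_seq:
        # An earlier message was requeued behind this one.
        return "gap"
    return "out-of-order"
```

The "gap" case is the one that forces the consumer to either buffer or
discard until the requeued message comes around again.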
>>>
>>>
>>> -Emile
>>>
>>>
>>> On 15/06/11 15:34, T-zex wrote:
>>>> Thank you Emile.
>>>>
>>>> The problem I'm trying to solve is guaranteed message order: if I
>>>> fail to ack after BasicGet(), the message will go to the back of the
>>>> queue anyway. With noAck = true, there is a risk of losing a message
>>>> when the process fails. I can't find a way to make RabbitMQ behave
>>>> as a queue in a simple single-reader scenario...
>>>>
>>>> On Wed, Jun 15, 2011 at 1:42 PM, Emile Joubert <emile at rabbitmq.com> wrote:
>>>>> Hi,
>>>>>
>>>>> Yes, if you want to retrieve a single message synchronously then use
>>>>> IModel.BasicGet(). BasicConsume() is for asynchronous delivery of
>>>>> messages, which is faster than BasicGet() and often leads to simpler
>>>>> client code.
>>>>>
>>>>>
>>>>>
>>>>> On 15/06/11 13:35, T-zex wrote:
>>>>>> In my case it doesn't matter whether it is IModel.Close() or
>>>>>> IModel.ChannelFlow(false), because the process must be terminated
>>>>>> anyway.
>>>>>> We need an atomic operation which would ack the message and would
>>>>>> not fetch another one. It seems that BasicConsume is unable to
>>>>>> guarantee that, so maybe BasicGet is a better option?
>>>>>>
>>>>>> On Wed, Jun 15, 2011 at 12:03 PM, Emile Joubert <emile at rabbitmq.com> wrote:
>>>>>>> Hi,
>>>>>>>
>>>>>>> Prefetch count limits the number of unacknowledged messages, so
>>>>>>> you would need to acknowledge the last message to get the next one
>>>>>>> if the prefetch count is set to 1.
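That interplay between the prefetch limit and acknowledgements can be
modelled with a toy queue that withholds delivery while an unacked message
is outstanding (a Python sketch of the semantics only, not of the .NET
client API):

```python
from collections import deque

class PrefetchQueue:
    """Toy model of a broker queue with a per-channel prefetch limit."""

    def __init__(self, prefetch=1):
        self.prefetch = prefetch  # max unacked messages in flight
        self.queue = deque()      # messages waiting for delivery
        self.unacked = 0          # messages delivered but not yet acked

    def publish(self, msg):
        self.queue.append(msg)

    def deliver(self):
        """Deliver the next message, or None if the prefetch limit is hit
        or the queue is empty."""
        if self.unacked >= self.prefetch or not self.queue:
            return None
        self.unacked += 1
        return self.queue.popleft()

    def ack(self):
        self.unacked -= 1
```

With prefetch=1 a second deliver() yields nothing until the first message
is acked, which is the one-message-at-a-time behaviour asked about below.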
>>>>>>>
>>>>>>> You should never need to close the channel for flow control. You
>>>>>>> can use IModel.ChannelFlow() if you need to temporarily stop the
>>>>>>> broker from delivering messages. This works independently of the
>>>>>>> prefetch count.
>>>>>>>
>>>>>>>
>>>>>>> -Emile
>>>>>>>
>>>>>>>
>>>>>>> On 15/06/11 10:55, T-zex wrote:
>>>>>>>> Thank you!
>>>>>>>>
>>>>>>>> When the prefetch count is set to 1, will the channel try to
>>>>>>>> fetch the next message as soon as HandleBasicDeliver returns?
>>>>>>>> Should I invoke Channel.Close() on a different thread (to avoid
>>>>>>>> blocking) and then return?
>>>>>>>>
>>>>>>>> On Wed, Jun 15, 2011 at 10:33 AM, Emile Joubert <emile at rabbitmq.com> wrote:
>>>>>>>>> Hi,
>>>>>>>>>
>>>>>>>>> On 15/06/11 10:08, T-zex wrote:
>>>>>>>>>> Hi,
>>>>>>>>>>
>>>>>>>>>> We are overriding the DefaultBasicConsumer.HandleBasicDeliver
>>>>>>>>>> method and using IModel.BasicConsume to subscribe to incoming
>>>>>>>>>> messages:
>>>>>>>>>> model.BasicConsume(queue, false, instanceOfDerivedConsumer);
>>>>>>>>>>
>>>>>>>>>> When the application fails to ack a message, that message is
>>>>>>>>>> requeued. How can I guarantee that BasicConsume receives one
>>>>>>>>>> message at a time? How can I deterministically stop consuming
>>>>>>>>>> when the application detects that it is unable to process a
>>>>>>>>>> message, and prevent the second message from arriving? How can
>>>>>>>>>> I make sure that there is one and only one unacked message, and
>>>>>>>>>> that it is at the head of the broker queue?
>>>>>>>>>
>>>>>>>>> The solution is to set the prefetch count of the channel to 1
>>>>>>>>> using IModel.BasicQos(). Also see
>>>>>>>>> http://www.rabbitmq.com/amqp-0-9-1-quickref.html#basic.qos .
>>>>>>>>> Note that rabbit does not implement the prefetch size or global
>>>>>>>>> parameters of the same method.
> _______________________________________________
> rabbitmq-discuss mailing list
> rabbitmq-discuss at lists.rabbitmq.com
> https://lists.rabbitmq.com/cgi-bin/mailman/listinfo/rabbitmq-discuss

