[rabbitmq-discuss] RabbitMQ .NET client BasicConsume and HandleBasicDeliver question

Emile Joubert emile at rabbitmq.com
Wed Jun 15 16:03:19 BST 2011


Hi,

Message order guarantees only apply along the same path between a single
producer and consumer. The guarantees do not hold if messages get
requeued. Rabbit requeues messages at the back of the queue, i.e.
treats them as new messages.

Your producer can add a sequence number to each message so that
consumers can detect reordering. Consumers can also inspect the
"redelivered" flag to check whether a message has been delivered before.
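Both techniques can be sketched roughly like this (assuming the
2011-era .NET client API; the "seq-no" header name and the checking
logic are illustrative, not part of the library):

```csharp
using System;
using System.Collections;
using RabbitMQ.Client;

class OrderedConsumer : DefaultBasicConsumer
{
    private long lastSeq = -1;

    public OrderedConsumer(IModel model) : base(model) { }

    // Producer side: attach an application-level sequence number header
    // ("seq-no" is an arbitrary name chosen here, not an AMQP field).
    public static void PublishWithSeq(IModel ch, string queue,
                                      long seq, byte[] body)
    {
        IBasicProperties props = ch.CreateBasicProperties();
        props.Headers = new Hashtable { { "seq-no", seq } };
        ch.BasicPublish("", queue, props, body);
    }

    public override void HandleBasicDeliver(string consumerTag,
        ulong deliveryTag, bool redelivered, string exchange,
        string routingKey, IBasicProperties properties, byte[] body)
    {
        // Broker-set flag: true if this message may have been
        // delivered before (e.g. after a requeue).
        if (redelivered)
            Console.WriteLine("redelivered - possible duplicate");

        long seq = Convert.ToInt64(properties.Headers["seq-no"]);
        if (seq != lastSeq + 1)
            Console.WriteLine("sequence gap - reordered or lost message");
        lastSeq = seq;

        Model.BasicAck(deliveryTag, false);
    }
}
```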


-Emile


On 15/06/11 15:34, T-zex wrote:
> Thank you Emile.
> 
> What I'm trying to solve is guaranteed message order: if I fail to
> ack after BasicGet(), the message will go to the end of the queue
> anyway. If noAck = true, there is a risk of losing a message when the
> process fails. I can't find a way to make RabbitMQ work as a queue in
> a simple single-reader scenario...
> 
> On Wed, Jun 15, 2011 at 1:42 PM, Emile Joubert <emile at rabbitmq.com> wrote:
>> Hi,
>>
>> Yes, if you want to retrieve a single message synchronously then use
>> IModel.BasicGet() . BasicConsume() is for asynchronous delivery of
>> messages, which is faster than BasicGet() and often leads to simpler
>> client code.
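>> A minimal synchronous-retrieval sketch (assuming the 2011-era .NET
>> client; the queue name and processing step are illustrative):
>>
>> ```csharp
>> // Pull a single message; noAck: false, so it must be acked explicitly.
>> BasicGetResult result = channel.BasicGet("my-queue", false);
>> if (result != null)   // null means the queue was empty
>> {
>>     Process(result.Body);                        // application logic
>>     channel.BasicAck(result.DeliveryTag, false); // ack this one message
>> }
>> ```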
>>
>>
>>
>> On 15/06/11 13:35, T-zex wrote:
>>> In my case it doesn't matter whether it is IModel.Close() or
>>> IModel.ChannelFlow(false), because the process must be terminated
>>> anyway. We need an atomic operation which would ack the message and
>>> would not fetch the next one. It seems that BasicConsume is unable
>>> to guarantee that, so maybe BasicGet is a better option?
>>>
>>> On Wed, Jun 15, 2011 at 12:03 PM, Emile Joubert <emile at rabbitmq.com> wrote:
>>>> Hi,
>>>>
>>>> Prefetch count limits the number of unacknowledged messages, so you
>>>> would need to acknowledge the last message to get the next one if the
>>>> prefetch count was set to 1.
>>>>
>>>> You should never need to close the channel for flow control. You can use
>>>> IModel.ChannelFlow() if you need to temporarily stop the broker from
>>>> delivering messages. This works independently of the prefetch count.
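>>>> A sketch of both knobs (2011-era .NET client API assumed):
>>>>
>>>> ```csharp
>>>> // At most one unacknowledged message on this channel at a time.
>>>> // prefetchSize 0 = no byte limit; global false = applies per consumer.
>>>> channel.BasicQos(0, 1, false);
>>>>
>>>> // Pause deliveries on this channel without closing it...
>>>> channel.ChannelFlow(false);
>>>> // ...and resume them later.
>>>> channel.ChannelFlow(true);
>>>> ```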
>>>>
>>>>
>>>> -Emile
>>>>
>>>>
>>>> On 15/06/11 10:55, T-zex wrote:
>>>>> Thank you!
>>>>>
>>>>> When the prefetch count is set to 1, will the channel try to fetch
>>>>> the next message as soon as HandleBasicDeliver returns? Should I
>>>>> invoke Channel.Close() on a different thread (to avoid blocking)
>>>>> and then return?
>>>>>
>>>>> On Wed, Jun 15, 2011 at 10:33 AM, Emile Joubert <emile at rabbitmq.com> wrote:
>>>>>> Hi,
>>>>>>
>>>>>> On 15/06/11 10:08, T-zex wrote:
>>>>>>> Hi,
>>>>>>>
>>>>>>> We are overriding DefaultBasicConsumer.HandleBasicDeliver method and
>>>>>>> use IModel.BasicConsume to subscribe to incoming messages:
>>>>>>> model.BasicConsume(queue, false, instanceOfDerivedConsumer);
>>>>>>>
>>>>>>> When the application fails to ack a message, that message is
>>>>>>> requeued. How can I guarantee that BasicConsume receives one
>>>>>>> message at a time? How do I deterministically stop consuming when
>>>>>>> the application detects that it is unable to process a message,
>>>>>>> and prevent the second message from arriving? How can I make sure
>>>>>>> that there is one and only one unacked message, and that it is at
>>>>>>> the head of the broker queue?
>>>>>>
>>>>>> The solution is to set the prefetch count of the channel to 1 using
>>>>>> IModel.BasicQos(). Also see
>>>>>> http://www.rabbitmq.com/amqp-0-9-1-quickref.html#basic.qos .
>>>>>> Note that RabbitMQ does not implement the prefetch-size or global
>>>>>> parameters of the same method.


