[rabbitmq-discuss] How to do blocking asynchronously with Pika? I want 1 message at a time and stop receiving until I'm ready again.

Scott Chapman Scott.Chapman at servicenow.com
Thu Feb 21 19:12:59 GMT 2013


Matthias,
Thanks for the good info below.
What I want to do is get a message, ack it immediately, and not receive
another one until I'm done processing the first.
If the client crashes during processing, a human will be notified and come
along later and clean it up.

So I would want to do:
Receive one message at a time.
Ack it.
Process it.
-> Receive one message at a time.

It sounds like the easy way to do this is to set the prefetch_count to 1
and follow the simple flow above.  No need to cancel the subscription.
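That flow might look like the following sketch with pika (the pika 1.x BlockingConnection API and the queue name "task_queue" are assumptions, not from this thread; the broker connection is deferred into run() so the helper is usable on its own):

```python
def process(body):
    """Placeholder for the real work; here we just decode the payload."""
    return body.decode()

def run():
    # pika is imported here so the helper above works without the
    # dependency installed; names follow the pika 1.x API.
    import pika

    conn = pika.BlockingConnection(pika.ConnectionParameters("localhost"))
    ch = conn.channel()
    ch.basic_qos(prefetch_count=1)  # at most one unacked message in flight

    def on_message(channel, method, properties, body):
        channel.basic_ack(delivery_tag=method.delivery_tag)  # ack first ...
        process(body)  # ... then process; a crash here needs manual cleanup

    ch.basic_consume(queue="task_queue", on_message_callback=on_message)
    ch.start_consuming()  # callbacks run one at a time, so the next
                          # message is handled only after this one returns
```

Because callbacks in a BlockingConnection are dispatched one at a time, the next message is not handled until the current callback returns, even though acking first lets the server send it early.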

Scott


On 2/21/13 11:04 AM, "Matthias Radestock" <matthias at rabbitmq.com> wrote:

>Scott,
>
>On 21/02/13 18:11, Scott Chapman wrote:
>> If the 1st message arrives, I ack it, and while I'm processing it the
>> 2nd message arrives, will it be held _in_the_client_?
>
>Yes.
>
>> So if the client gets rebooted, the message will be lost?
>
>No. The server holds on to messages until they have been acknowledged.
>
>You may want to read
>http://www.rabbitmq.com/tutorials/tutorial-two-python.html, if you
>haven't done so already.
>
>> It appears that I can set the basic.qos prefetch-count to zero to solve
>> this problem?
>
>No. Zero means "unlimited". But there is no problem to start with.
>
>> If I decide to cancel the consumer, I assume the proper order of events
>> would be:
>>
>> Enable Consumer
>> Receive message 1
>> Cancel Consumer
>> Ack Message
>> Process Message
>> -> back to Enable Consumer.
>
>What do you want to happen when the client crashes during Process Message?
>
>The typical consumer sequence, in AMQP commands, is
>
>- basic.qos{prefetch=n}
>- basic.consume
>- on basic.deliver (i.e. receiving of a message)
>   - process message
>   - basic.ack
>
>If the prefetch is set to 1 (though see note below), we can guarantee
>that the client only ever has at most one message sent to it until it
>issues an ack. And sending the ack after processing ensures that if
>anything goes wrong during processing, e.g. the client crashes, then the
>server will requeue the message, so other clients (or the same client
>when it reconnects) can process it.
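
The sequence above could be sketched with pika as follows (a sketch only: the pika 1.x API and the queue name "work" are assumptions; the import is deferred so the helper stands alone):

```python
def handle(body):
    """Stand-in for real message processing; returns the payload length."""
    return len(body)

def consume():
    import pika  # deferred import; assumes the pika 1.x client library

    conn = pika.BlockingConnection(pika.ConnectionParameters("localhost"))
    ch = conn.channel()
    ch.basic_qos(prefetch_count=1)                # basic.qos{prefetch=1}

    def on_deliver(channel, method, properties, body):
        handle(body)                              # process the message ...
        channel.basic_ack(delivery_tag=method.delivery_tag)  # ... then basic.ack
        # A crash before the ack makes the server requeue the message.

    ch.basic_consume(queue="work", on_message_callback=on_deliver)  # basic.consume
    ch.start_consuming()
```
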
>
>In some circumstances you may want to do the following instead:
>
>- basic.qos{prefetch=n}
>- basic.consume
>- on basic.deliver
>   - basic.cancel
>   - process message until a safe point from which the client can recover
>   - basic.ack
>   - do more, possibly expensive, processing of the message
>   - basic.consume
>
>which is closer to what you envisaged. But that's a lot more complex and
>expensive, so only warranted if the workload fits that special pattern
>of message processing being very expensive and splittable into two phases.
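
This two-phase, cancel-based variant might be sketched like so (again pika 1.x and an illustrative queue name are assumed; phase_one and phase_two are hypothetical stand-ins for the cheap and expensive phases):

```python
QUEUE = "work"  # illustrative queue name

def phase_one(body):
    """Cheap phase: reach a safe point from which the client can recover."""
    return body.decode()

def phase_two(text):
    """Expensive phase, performed only after the ack."""
    return text.upper()

def consume():
    import pika  # deferred import; assumes the pika 1.x client library

    conn = pika.BlockingConnection(pika.ConnectionParameters("localhost"))
    ch = conn.channel()
    ch.basic_qos(prefetch_count=1)

    def on_deliver(channel, method, properties, body):
        channel.basic_cancel(method.consumer_tag)            # basic.cancel
        text = phase_one(body)                               # recoverable work
        channel.basic_ack(delivery_tag=method.delivery_tag)  # basic.ack
        phase_two(text)                                      # expensive work
        channel.basic_consume(queue=QUEUE,                   # basic.consume again
                              on_message_callback=on_deliver)

    ch.basic_consume(queue=QUEUE, on_message_callback=on_deliver)
    ch.start_consuming()
```
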
>
>> I think one would not want to ACK the message before canceling the
>> consumer because it might immediately dump another message into the
>> consumer?
>
>Correct.
>
>Now back to setting the prefetch. For maximum performance you'd want to
>tune it such that whenever the client has finished processing a message,
>the next message has just arrived from the server moments earlier.
>That way the client never sits there idle waiting for a message or,
>conversely, buffers a whole bunch of messages that could be processed by
>other consumers. The right figure here is highly dependent on workload
>and network latency.
>
>
>Regards,
>
>Matthias.
