[rabbitmq-discuss] Push to back of Queue on NAck

Ceri Storey ceri at lshift.net
Fri Jul 26 18:32:21 BST 2013


(26/07/13 18:03), Tom Anderson wrote:
> On 26/07/13 17:07, Ceri Storey wrote:
>> (26/07/13 16:49), Tom Anderson wrote:
>>> Implemented exactly as described there, it yields an infinite loop
>>> for unprocessable messages. You might therefore also want to keep a
>>> count of the number of processing attempts in a header on the
>>> message, and more thoroughly reject messages which reach some
>>> maximum number of attempts. I think you could do the final rejection
>>> by setting a routing key on the message when you reject it for the
>>> last time, and having exchange B be a direct exchange which routes
>>> to either queue B or some final deadletter queue.
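
For what it's worth, here is a rough sketch of that retry-count-header idea in Python with pika (1.x style). The header name, the exchange and queue names, the attempt limit and handle() are all invented for illustration:

import pika

MAX_ATTEMPTS = 5  # illustrative limit

def on_message(channel, method, properties, body):
    headers = dict(properties.headers or {})
    attempts = headers.get('x-attempts', 0) + 1
    try:
        handle(body)  # placeholder for the real processing
        channel.basic_ack(delivery_tag=method.delivery_tag)
    except Exception:
        headers['x-attempts'] = attempts
        # Exchange B is a direct exchange: 'retry' routes back into the
        # retry path, 'failed' routes to a final dead-letter queue.
        key = 'retry' if attempts < MAX_ATTEMPTS else 'failed'
        channel.basic_publish(
            exchange='exchange-b', routing_key=key, body=body,
            properties=pika.BasicProperties(headers=headers))
        channel.basic_ack(delivery_tag=method.delivery_tag)

connection = pika.BlockingConnection(pika.ConnectionParameters('localhost'))
channel = connection.channel()
channel.basic_consume(queue='queue-a', on_message_callback=on_message)
channel.start_consuming()

Since basic.reject can't change a message's routing key, republishing with the incremented header and then acking the original is about the closest you can get to "setting a routing key when you reject".
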
>>>
>>> If you want exponential backoff in the retries, then life gets more
>>> complicated (multiple timeout queues, selected between by a routing
>>> key set by the consumer of A?). We are currently pussyfooting around
>>> this issue at my company. I will report back here if we ever
>>> implement a good solution!
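
One broker-side way to get the backoff is a handful of delay queues, each with its own per-queue TTL, dead-lettering back into the work exchange; the consumer then picks a delay queue by routing key according to the attempt count. A sketch, with all names and TTLs made up:

import pika

connection = pika.BlockingConnection(pika.ConnectionParameters('localhost'))
channel = connection.channel()

channel.exchange_declare(exchange='work', exchange_type='direct')
channel.exchange_declare(exchange='delay', exchange_type='direct')

# One delay queue per backoff step; when a message's TTL expires it is
# dead-lettered back to the 'work' exchange for another attempt.
for seconds in (1, 2, 4, 8, 16):
    name = 'delay.%ds' % seconds
    channel.queue_declare(
        queue=name,
        durable=True,
        arguments={
            'x-message-ttl': seconds * 1000,
            'x-dead-letter-exchange': 'work',
            'x-dead-letter-routing-key': 'job',
        })
    channel.queue_bind(queue=name, exchange='delay',
                       routing_key='%ds' % seconds)

# On failure the consumer republishes to 'delay' with a routing key chosen
# from the attempt count, e.g. the third attempt waits four seconds.
channel.basic_publish(exchange='delay', routing_key='4s', body=b'payload')
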
>>
>> I've just written some code to do exactly this; limited retries with
>> exponential backoff. That said, we're kind of cheating in that we
>> store the retry state in a secondary datastore and buffer messages in
>> the client.
>>
>> So the scheme works like this:
>>
>>   * When we post each message, we assign it a unique message_id
>>   * When we receive a message, we look up its due time by its
>>     message_id property in our datastore
>>   * Stash the message in a heap queue
>>   * When the message becomes due, remove it from the heap queue and
>>     pass it to the client code.
>>   * If the client code succeeds, then we finally ack the message.
>>     Otherwise, we reject the message.
>>
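
In outline, ours looks something like the following (pika 1.x style; lookup_due_time and deliver are placeholders for the datastore query and the client code):

import heapq
import time
import pika

pending = []  # heap of (due_time, delivery_tag, body)

def on_message(channel, method, properties, body):
    # The due time was recorded against the message_id when it was posted.
    due = lookup_due_time(properties.message_id)
    heapq.heappush(pending, (due, method.delivery_tag, body))

def drain(channel):
    # Hand over anything that has become due; ack on success, reject otherwise.
    while pending and pending[0][0] <= time.time():
        due, tag, body = heapq.heappop(pending)
        try:
            deliver(body)  # the client code
            channel.basic_ack(delivery_tag=tag)
        except Exception:
            channel.basic_reject(delivery_tag=tag, requeue=False)

connection = pika.BlockingConnection(pika.ConnectionParameters('localhost'))
channel = connection.channel()
channel.basic_qos(prefetch_count=100)  # bounds the local buffer
channel.basic_consume(queue='jobs', on_message_callback=on_message)

while True:
    connection.process_data_events(time_limit=1)
    drain(channel)
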
>
> I'm currently writing almost exactly the same thing! The difference is
> that I'm putting the due time in a header on the message rather than in
> a lookaside store, and that my component moves messages from a queue to
> an exchange, rather than from a queue to client code directly.
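
If I follow, that would be something along these lines, with the header name, queue and exchange invented for the sake of the sketch (pika 1.x style):

import time
import pika

def on_message(channel, method, properties, body):
    due = (properties.headers or {}).get('x-due-at', 0)  # epoch seconds
    delay = max(0, due - time.time())

    def forward():
        # Move the message on to the destination exchange, then ack it.
        channel.basic_publish(exchange='work', routing_key=method.routing_key,
                              body=body, properties=properties)
        channel.basic_ack(delivery_tag=method.delivery_tag)

    # Hold the delivery until it becomes due.
    connection.call_later(delay, forward)

connection = pika.BlockingConnection(pika.ConnectionParameters('localhost'))
channel = connection.channel()
channel.basic_consume(queue='delayed', on_message_callback=on_message)
channel.start_consuming()
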
>
>> Whilst you can scale this horizontally, you will need enough buffer
>> space to hold a reasonable proportion of your queue, although /what/
>> proportion depends on how much you care about timeliness.
>>
>
> I'm not sure I understand. Don't you need enough space to hold all the
> messages that could be delayed at any given time? In our case, that
> happens not to be all that large, fortunately.
I've set the prefetch buffer to a conservative number of messages, mostly
to avoid accidentally causing a denial of service against my own
application. To be clear, this means we won't see some messages until
after they become due, but that's okay in our case.
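
Concretely that is just basic_qos on the consuming channel; the number below is invented:

import pika

connection = pika.BlockingConnection(pika.ConnectionParameters('localhost'))
channel = connection.channel()
# Hold at most 200 unacknowledged messages in this consumer, which bounds
# how much of the queue we buffer locally.
channel.basic_qos(prefetch_count=200)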

