[rabbitmq-discuss] performance with thousands of auto_delete queues

Muharem Hrnjadovic mh at foldr3.com
Wed Nov 23 12:45:13 GMT 2011

On 11/17/2011 05:20 PM, Ask Solem wrote:
> On 17 Nov 2011, at 14:42, Muharem Hrnjadovic wrote:
>>> An auto_delete queue is only deleted when it's empty,
>>> so you have to collect the results.
>> How does one collect the results? We do
>>   result = TaskSet(tasks=subtasks).apply_async()
>>   # Wait for all subtasks to complete.
>>   while not result.ready():
>>       time.sleep(0.25)
>>   the_results = result.join()
>> Is there something we need to do beyond that?
> This would collect the result, but maybe there are cases where it's
> not collected, or you have so many tasks that using one queue per task
> is not feasible.
> If the process that publishes the task and the process that collects
> the result are always the same, you can use reply-to style replies
> (one queue per publisher instead of one queue per task). There's no
> built-in support for this in Celery, but adding the capability to
> your tasks should be fairly easy.
This is an interesting idea. It would be great if you could sketch out
what the "one queue per publisher" solution would look like.
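[Editorial note: to make the suggestion above concrete, here is a rough, broker-free sketch of the reply-to pattern. Every name in it is invented for illustration; the in-memory `queues` dict merely stands in for RabbitMQ. With a real broker you would instead set the message's `reply_to` and `correlation_id` properties, as in the classic AMQP RPC pattern. The key idea: each publisher owns one reply queue, and the worker routes each result back to the queue named in the task message.]

```python
import uuid
from collections import defaultdict, deque

# Stand-in for the broker: queue name -> pending messages.
queues = defaultdict(deque)

def worker(task_queue):
    """Drain the task queue, sending each result to the publisher's
    reply queue named in the message (the reply-to pattern)."""
    while queues[task_queue]:
        msg = queues[task_queue].popleft()
        result = msg["args"][0] * 2  # the "task" itself, for demo purposes
        queues[msg["reply_to"]].append(
            {"correlation_id": msg["correlation_id"], "result": result}
        )

class Publisher:
    """One reply queue per publisher instead of one queue per task."""

    def __init__(self):
        self.reply_queue = "reply.%s" % uuid.uuid4()
        self.pending = {}  # correlation_id -> result (None until replied)

    def publish(self, task_queue, *args):
        cid = str(uuid.uuid4())
        queues[task_queue].append(
            {"args": args,
             "reply_to": self.reply_queue,
             "correlation_id": cid}
        )
        self.pending[cid] = None
        return cid

    def collect(self):
        """Match replies on our single queue back to their tasks."""
        while queues[self.reply_queue]:
            reply = queues[self.reply_queue].popleft()
            self.pending[reply["correlation_id"]] = reply["result"]
        return self.pending

p = Publisher()
ids = [p.publish("tasks", n) for n in (1, 2, 3)]
worker("tasks")
results = p.collect()
# results maps each correlation id to its result, e.g. results[ids[0]] == 2
```

However many tasks are published, only the single `reply.<uuid>` queue per publisher ever exists, so the broker no longer accumulates thousands of auto_delete result queues.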

> The best thing you can do right now is to set an expiry
> time for the results, that would probably help at least in the short term.
I did upgrade to the latest version and see that task queues are garbage
collected. Thanks for the advice!
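[Editorial note: for reference, in Celery of that era the expiry mentioned above was controlled by the `CELERY_TASK_RESULT_EXPIRES` setting (the exact name and default may vary by version); with the AMQP result backend it bounds how long result queues linger. A minimal config sketch:]

```python
# celeryconfig.py -- results (and, with the AMQP result backend,
# their queues) are dropped after this many seconds.
CELERY_TASK_RESULT_EXPIRES = 3600  # one hour instead of the longer default
```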


Best regards/Mit freundlichen Grüßen

Muharem Hrnjadovic <mh at foldr3.com>
Public key id   : B2BBFCFC
Key fingerprint : A5A3 CC67 2B87 D641 103F  5602 219F 6B60 B2BB FCFC
