[rabbitmq-discuss] Rabbitmq With Celery Problem, Error 541

Ask Solem ask at rabbitmq.com
Tue Nov 22 16:14:34 GMT 2011


On 18 Nov 2011, at 18:53, Mike Grouchy wrote:
> 
> 
> Okay,
> 
> So I enabled the pool and set it to 12 (seemed like a reasonable
> number for BROKER_POOL_LIMIT). I ran the task the first time and it
> seemed to run okay. I just got back from lunch and tried running it
> again to see if enabling the pool had fixed the problem, but now we
> have some more data points.
> 
> I have attached the celery log output (from --INFO) and the rabbitmq.log
> 
> https://gist.github.com/81e6f762410e6e4813aa
> 
> So it seems like we are running into a too-many-processes error, but I
> thought the upper bound was around 30K, which I can't imagine I'm
> actually hitting (even though there are around 60k tasks).
> 
> I can't seem to replicate it consistently, which is making it hard to
> narrow down to a specific test case.


I guess you are running out of Erlang processes because of the result queues.
The AMQP result backend is rather stupid in that it creates one queue per
result, but this is necessary since results must be delivered even when
there is no consumer currently waiting for them.

I'd suggest switching to one of the other result backends (redis, database,
memcached); there is a config sketch below.
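
For example, the Redis backend is only a couple of settings in
celeryconfig.py. This is a rough sketch assuming the 2.x style setting
names and a Redis server on localhost; the exact names depend on your
Celery version:

    # celeryconfig.py -- assumes Celery 2.x settings and a local Redis server
    CELERY_RESULT_BACKEND = "redis"
    CELERY_REDIS_HOST = "localhost"
    CELERY_REDIS_PORT = 6379
    CELERY_REDIS_DB = 0

    # Let stored results expire so the backend doesn't grow without bound.
    CELERY_TASK_RESULT_EXPIRES = 3600  # seconds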

Alternatively, if you use the results in RPC-like calls (the process that
retrieves the result is always the same process that published the task),
you could write your own backend that creates one queue per publisher
instead; see the rough sketch below. Of course this will eventually lead
to problems too, if the number of publishers exceeds the process limit.
You could also increase the Erlang process limit itself; how to do that
should be in the list archives.
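
If you want to play with the one-queue-per-publisher idea outside of
Celery first, the pattern looks roughly like this with kombu. This is
only an illustration, not Celery's backend API: the exchange and queue
names, the "rpc_requests" exchange and the surrounding request handling
are all made up, and it assumes a reasonably recent kombu (where
BrokerConnection is available as Connection):

    # Sketch: every publishing process owns a single reply queue, reused
    # for all of its calls, instead of declaring one queue per result.
    import uuid
    from kombu import Connection, Exchange, Queue

    REPLY_EXCHANGE = Exchange("rpc_replies", type="direct")

    publisher_id = uuid.uuid4().hex
    reply_queue = Queue("rpc_reply.%s" % publisher_id,
                        exchange=REPLY_EXCHANGE,
                        routing_key=publisher_id,
                        auto_delete=True)

    def call(conn, payload):
        """Publish a request and block until the matching reply arrives."""
        correlation_id = uuid.uuid4().hex
        producer = conn.Producer()
        producer.publish(payload,
                         exchange="rpc_requests",   # assumed to exist already
                         routing_key="rpc_requests",
                         declare=[reply_queue],
                         reply_to=publisher_id,
                         correlation_id=correlation_id)

        replies = []

        def on_reply(body, message):
            if message.properties.get("correlation_id") == correlation_id:
                replies.append(body)
            message.ack()

        # The consuming side would publish its result to REPLY_EXCHANGE,
        # using the request's reply_to value as the routing key.
        with conn.Consumer(reply_queue, callbacks=[on_reply]):
            while not replies:
                conn.drain_events(timeout=5)   # raises socket.timeout if idle
        return replies[0]

    if __name__ == "__main__":
        with Connection("amqp://guest:guest@localhost//") as conn:
            print(call(conn, {"args": [2, 2]}))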

