[rabbitmq-discuss] ll_alloc errors on RabbitMQ 2.1.1
Manuel DE FERRAN
mdeferran at ubikod.com
Thu Nov 18 08:09:24 GMT 2010
On Thu, Nov 18, 2010 at 6:49 AM, Matthias Radestock
<matthias at rabbitmq.com>wrote:
> Manuel DE FERRAN wrote:
> On Mon, Nov 15, 2010 at 12:31 PM, Matthew Sackman <matthew at rabbitmq.com> wrote:
>> On Mon, Nov 15, 2010 at 11:56:25AM +0100, Manuel DE FERRAN wrote:
>> > recently we upgraded to version 2.1.1. We picked up the official
>> > package and installed it on a Debian lenny, and we face memory
>> > allocation problems: 'll_alloc: Cannot allocate 486539264 bytes of
>> > memory (of type "port_tab")' when invoking rabbitmqctl.
>> How much RAM is installed in the machine? How much is available? Have
>> you performed any configuration of Rabbit such as altering the
>> vm_high_memory_watermark value? Is it a 32-bit or 64-bit installation?
>> 1.5 GB is assigned to this machine (it's a Xen virtual machine). There
>> is more than 500MB of available memory. We freed up to 1GB, and that did
>> not fix the issue.
>> We are using the default configuration, meaning no configuration at all.
>> And it's a 64-bit installation; we grabbed the debian package from
> So the rabbitmq-server itself is running fine and you only encounter
> problems with rabbitmqctl? Strange indeed.
> What version of Erlang is running on that machine?
"erl" returns the following header:
Erlang R13B04 (erts-5.7.5) [source] [64-bit] [rq:1] [async-threads:0]
Erlang was installed from the official Debian lenny package; the exact
version is 1:13.b.4-dfsg-5.
> Looking at the Erlang source code, the error would appear to crop up when
> the operating system is configured with a very high file descriptor limit.
> What does 'ulimit -n' report? (NB: you should run that as the 'rabbitmq'
> user). If the figure is in the many millions, I suggest you set a lower
> limit in the o/s config. Alternatively, set the ERL_MAX_PORTS environment
> variable.
That's a good point. The ulimit was returning about 600k as the open-file
limit. We lowered it to about 32k, and so far we cannot reproduce the problem.
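For anyone hitting the same error, a rough sketch of the workaround we applied is below. The 32768 value is just what we picked, not a recommendation; pick a limit that suits your workload, and note that ERL_MAX_PORTS is the environment variable suggested above.

```shell
# Rough sketch of the workaround (the 32768 value is illustrative).

# 1. See what open-file limit the Erlang VM will inherit:
ulimit -n

# 2. Lower the soft limit for the current shell before starting
#    rabbitmq-server (or set it persistently in /etc/security/limits.conf):
ulimit -n 32768

# 3. Alternatively, cap the Erlang port table directly, so a very high
#    OS file descriptor limit can no longer inflate the "port_tab" table:
export ERL_MAX_PORTS=32768
```

Either lowering the OS limit or setting ERL_MAX_PORTS should keep the port table allocation at a sane size; we only needed the first.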
> I wonder whether there is some strange hypervisor interaction going on.
> Could you perhaps try a different kind of VM?
Yes, probably; we will try it if it fails again. Thanks!