[rabbitmq-discuss] RabbitMQ filling up?

Martí Planellas Rosell marti.planellas at acknowledgement.co.uk
Fri Jan 10 18:01:59 GMT 2014


I've recently had some problems with RabbitMQ freezing every now and again, 
apparently at random, and I'm not sure what could be wrong.

When I run rabbitmqctl report, I get a lot of entries like these:

Channels:
pid name connection number user vhost transactional confirm consumer_count messages_unacknowledged messages_unconfirmed messages_uncommitted acks_uncommitted prefetch_count client_flow_blocked
<'rabbit@edfdr-qb-wb01'.3.8110.0> 127.0.0.1:38774 -> 127.0.0.1:5672 (1) <'rabbit@edfdr-qb-wb01'.3.7966.0> 1 guest / false false 0 0 0 0 0 0 false
<'rabbit@edfdr-qb-wb01'.3.9512.0> 127.0.0.1:38775 -> 127.0.0.1:5672 (1) <'rabbit@edfdr-qb-wb01'.3.9507.0> 1 guest / false false 0 0 0 0 0 0 false
<'rabbit@edfdr-qb-wb01'.3.9516.0> 127.0.0.1:38775 -> 127.0.0.1:5672 (2) <'rabbit@edfdr-qb-wb01'.3.9507.0> 2 guest / false false 0 0 0 0 0 0 false
<'rabbit@edfdr-qb-wb01'.3.9520.0> 127.0.0.1:38775 -> 127.0.0.1:5672 (3) <'rabbit@edfdr-qb-wb01'.3.9507.0> 3 guest / false false 0 0 0 0 0 0 false
<'rabbit@edfdr-qb-wb01'.3.9524.0> 127.0.0.1:38775 -> 127.0.0.1:5672 (4) <'rabbit@edfdr-qb-wb01'.3.9507.0> 4 guest / false false 0 0 0 0 0 0 false
<'rabbit@edfdr-qb-wb01'.3.9528.0> 127.0.0.1:38775 -> 127.0.0.1:5672 (5) <'rabbit@edfdr-qb-wb01'.3.9507.0> 5 guest / false false 0 0 0 0 0 0 false
<'rabbit@edfdr-qb-wb01'.3.9532.0> 127.0.0.1:38775 -> 127.0.0.1:5672 (6) <'rabbit@edfdr-qb-wb01'.3.9507.0> 6 guest / false false 0 0 0 0 0 0 false
<'rabbit@edfdr-qb-wb01'.3.9536.0> 127.0.0.1:38775 -> 127.0.0.1:5672 (7) <'rabbit@edfdr-qb-wb01'.3.9507.0> 7 guest / false false 0 0 0 0 0 0 false
<'rabbit@edfdr-qb-wb01'.3.9540.0> 127.0.0.1:38775 -> 127.0.0.1:5672 (8) <'rabbit@edfdr-qb-wb01'.3.9507.0> 8 guest / false false 0 0 0 0 0 0 false
<'rabbit@edfdr-qb-wb01'.3.9544.0> 127.0.0.1:38775 -> 127.0.0.1:5672 (9) <'rabbit@edfdr-qb-wb01'.3.9507.0> 9 guest / false false 0 0 0 0 0 0 false
...
(the listing continues like this up to 1895)
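For a rough count, rather than scrolling through the whole report, I've been running something like this (the count includes a header/footer line or two from rabbitmqctl):

    rabbitmqctl list_channels number connection | wc -l    # rough channel count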

And a lot of:

Consumers on /:
queue_name channel_pid consumer_tag ack_required
updates <'rabbit@edfdr-qb-wb01'.3.16940.0> node-amqp-21172-0.15091347694396973 false
updates <'rabbit@edfdr-qb-wb01'.3.16944.0> node-amqp-21172-0.618321357993409 false
updates <'rabbit@edfdr-qb-wb01'.3.16948.0> node-amqp-21172-0.9028398699592799 false
updates <'rabbit@edfdr-qb-wb01'.3.16952.0> node-amqp-21172-0.17504629422910511 false
updates <'rabbit@edfdr-qb-wb01'.3.16956.0> node-amqp-21172-0.6401850257534534 false
updates <'rabbit@edfdr-qb-wb01'.3.16960.0> node-amqp-21172-0.697384244762361 false
updates <'rabbit@edfdr-qb-wb01'.3.16964.0> node-amqp-21172-0.41653046221472323 false
updates <'rabbit@edfdr-qb-wb01'.3.16968.0> node-amqp-21172-0.23239665268920362 false
...
(about 40 of these in total)
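Same thing for a rough consumer count on the default vhost:

    rabbitmqctl list_consumers -p / | wc -l    # rough consumer count on /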

In theory I only have one producer (a PHP process) and one consumer (a NodeJS 
server).
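For context, the NodeJS side uses node-amqp and, as far as I can tell, only subscribes once at startup. This is a simplified sketch from memory, not the exact code (the queue options and the message handler are placeholders):

    var amqp = require('amqp');
    var connection = amqp.createConnection({ host: '127.0.0.1' });

    connection.on('ready', function () {
      // declare the 'updates' queue and attach what should be the single consumer
      connection.queue('updates', { autoDelete: false }, function (q) {
        // ack: false matches the ack_required=false shown in the report
        q.subscribe({ ack: false }, function (message) {
          // handle the update here
        });
      });
    });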

Does anyone have any idea what's going on here?

Eventually the beam.smp process climbs above 50% CPU and memory usage and 
stops delivering messages.
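When it gets into that state, this is roughly what I've been checking:

    rabbitmqctl status                           # memory breakdown and file descriptor usage
    rabbitmqctl list_connections name channels   # channel count per connection
    ps -C beam.smp -o pid,pcpu,pmem,comm         # beam.smp CPU / memory at that moment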

Any help will be greatly appreciated.

Thanks!