[rabbitmq-discuss] RabbitMQ+STOMP Plugin: Bulk Message Allocation Behavior?

Darien Kindlund darien at kindlund.com
Tue Feb 3 18:46:22 GMT 2009


Hi Tony,

Using: RabbitMQ 1.5.0 / Stomp 1.5.0

Maybe you can shed some light on this.  I have a topic exchange set
up with one (durable) queue bound to it.  All messages are sent to
the exchange by a single Net::Stomp Perl process, each with the
proper routing key, so that every message ends up in the shared
queue.
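
For concreteness, the sending side looks roughly like this (host,
port, credentials, and the destination name are placeholders, and
I'm glossing over the exact mapping of the STOMP destination onto
the exchange/binding):

    #!/usr/bin/perl
    use strict;
    use warnings;
    use Net::Stomp;

    # Connect to the broker's STOMP adapter (connection details
    # are placeholders).
    my $stomp = Net::Stomp->new({ hostname => 'localhost', port => 61613 });
    $stomp->connect({ login => 'guest', passcode => 'guest' });

    # Publish one message; in the real setup this goes through the
    # topic exchange with a routing key matching the shared queue's
    # binding.
    $stomp->send({
        destination => '/queue/shared.work',   # placeholder destination
        body        => 'message payload',
    });

    $stomp->disconnect;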

I also have about 75 to 100 Perl processes, all using Net::Stomp,
subscribed to the same queue and receiving these messages.  Each
process subscribes with client acknowledgement (ack => 'client'),
receives one message, processes it, then sends back an explicit ACK
so that the message is removed from the shared queue.
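
Each worker is essentially this loop (again, the destination and
connection details are placeholders, and process_message() stands
in for the real work):

    #!/usr/bin/perl
    use strict;
    use warnings;
    use Net::Stomp;

    # Stub for the real application work.
    sub process_message {
        my ($body) = @_;
        print "processed: $body\n";
    }

    my $stomp = Net::Stomp->new({ hostname => 'localhost', port => 61613 });
    $stomp->connect({ login => 'guest', passcode => 'guest' });

    # Subscribe with client acknowledgement so a message stays
    # unacknowledged on the queue until we explicitly ACK it.
    $stomp->subscribe({
        destination => '/queue/shared.work',   # placeholder destination
        ack         => 'client',
    });

    while (1) {
        my $frame = $stomp->receive_frame;   # block until a message arrives
        process_message($frame->body);
        $stomp->ack({ frame => $frame });    # now remove it from the queue
    }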

Here is the order of events:

0) No perl processes are subscribed to the queue.
1) 50,000 messages are sent to the exchange and are routed to the
queue (where they are stored).
2) 50 perl processes start up and subscribe to the queue and start
processing the messages.
... wait 1-2 minutes ...
3) rabbitmqctl shows that messages_ready=0, BUT
messages_unacknowledged=messages_uncommitted=30000 (see the
rabbitmqctl invocation after this list).
4) 10 new perl processes start up and subscribe to the queue.  At this
point, these 10 new perl processes NEVER obtain any of the remaining
30,000 messages
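
For reference, the numbers above come from something like the
following (using rabbitmqctl's list_queues queue-info items):

    rabbitmqctl list_queues name messages_ready \
        messages_unacknowledged messages_uncommitted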

Here's the problem:

I want each Perl process to handle ONE and only one message at a
time, yet it seems that RabbitMQ (or the STOMP adapter) pre-allocates
more than one message per STOMP connection.  Bottom line: one Perl
process can be overloaded with messages while another sits idle with
nothing to process.

Could you confirm that this is, in fact, happening?  If so, could
you recommend how I should resolve it?
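
For what it's worth, what I'd hope for is a way to cap the prefetch
at one message per subscription.  A minimal sketch, assuming the
STOMP adapter honored a per-subscription 'prefetch-count' header
(the header name is my guess and may not exist; Net::Stomp simply
passes unrecognized headers through to the SUBSCRIBE frame):

    $stomp->subscribe({
        destination      => '/queue/shared.work',   # placeholder
        ack              => 'client',
        'prefetch-count' => 1,   # assumed header; I'm hoping it maps
                                 # to basic.qos on the channel
    });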

Thanks again,
-- Darien
