[rabbitmq-discuss] Queue depth and no. of consumers.
michael neidhardt
meem.ok at gmail.com
Mon Oct 31 14:31:26 GMT 2011
Hi and thanks for the quick reply.
I do use ack, but not QoS. My program is as follows, using Perl and
Net::RabbitMQ:
-----------------------
use Net::RabbitMQ;

# $mqhost and handle_file() are defined elsewhere in the real program.
my ($exchange, $queue, $routing_key) = ('myx', 'isi', 'isi');
my $mq      = Net::RabbitMQ->new();
my $channel = 1;

$mq->connect($mqhost, { user => "guest", password => "guest" });
$mq->channel_open($channel);
$mq->exchange_declare($channel, $exchange,
                      { exchange_type => 'direct',
                        auto_delete   => 0,
                        durable       => 0 });
$mq->queue_declare($channel, $queue, { auto_delete => 0, durable => 0 });
$mq->queue_bind($channel, $queue, $exchange, $routing_key);
$mq->consume($channel, $queue, { no_ack => 0 });

while (1) {
    my $msg = $mq->recv();
    handle_file($msg->{body});
    $mq->ack($channel, $msg->{delivery_tag});
}
-----------------------
With no QoS set, I assume each consumer fetches as many messages as
possible (whatever that means) and buffers them itself. I will try
setting QoS/prefetch_count to 1, though I seem to remember having read
that this can cause trouble. I would assume it's no problem; do you
have any thoughts on whether it's a good idea?
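For reference, this is roughly what I'd add before the consume call, assuming my version of Net::RabbitMQ exposes the basic_qos method (a sketch, not tested against a live broker):

```perl
# Limit unacknowledged deliveries to one per consumer, so the broker
# stops pushing further messages until the current one is ack'd.
# basic_qos must be issued before consume for it to take effect.
$mq->basic_qos($channel, { prefetch_count => 1 });
$mq->consume($channel, $queue, { no_ack => 0 });
```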
As I wrote earlier, we have several thousand files to process, each of
which may contain up to several hundred thousand records (around 2 KB
each).
In an earlier test, I let the processes handling files push the ID of
each record to a queue (simply a publish added to the above code after
the ack). The consumer for this queue (which uses auto-ack) would do a
lookup in a PostgreSQL DB, and nothing else.
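Concretely, the extra publish looked roughly like this (the routing key 'isi_ids' is a placeholder I'm using here, not the real name; Net::RabbitMQ's publish takes channel, routing key, body, then options):

```perl
# After ack'ing the file message, push each record ID to a second queue.
for my $record_id (@record_ids) {
    $mq->publish($channel, 'isi_ids', $record_id, { exchange => $exchange });
}
```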
After about 50 million records, the vm_memory_high_watermark alarm
would be set, and shortly after that I got
{"timeout waiting for channel.flow_ok{active=false}",none}.
Eventually the whole system froze.
I guess this timeout is caused by my client not reacting to the flow
control from the RabbitMQ server. Is that correct? Unfortunately, the
client I use has no methods for handling channel.flow. Should I expect
to deal with this myself in normal operation, or can a client library
handle it for me?
Oh, and the big question: is it out of the question to handle approx.
300 million messages (where the payload is essentially a bigint) over
a few days?
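As a sanity check on the required rate, plain arithmetic, assuming the work is spread evenly over three days:

```perl
# 300 million messages over 3 days: what sustained rate does that need?
my $messages = 300_000_000;
my $seconds  = 3 * 24 * 60 * 60;    # 259,200 seconds in three days
printf "required rate: %.0f msg/s\n", $messages / $seconds;  # prints 1157
```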
Regards,
Michael.