[rabbitmq-discuss] Performance degrades with increasing queue depth
Paolo Negri
hungryblank at gmail.com
Mon Sep 7 23:23:10 BST 2009
Hi everyone,

The publisher code provided does indeed look likely to produce the
channel leak. Try the following instead:
require "rubygems"
require "bunny"
EXCHANGE = "raw_telemetry"
QUEUE = "process_telemetry"
count ||= 0
prev_time ||= Time.now
msg_server = Bunny.new(:spec => '08')
msg_server.start
exchange = msg_server.exchange(EXCHANGE, :type => :fanout)
msg_server.queue(QUEUE).bind(EXCHANGE)
loop do
exchange.publish("This is my msg")
count += 1
if count > 100
t = Time.now
puts("msgs pushes per sec: #{count / (t - prev_time)}")
prev_time = t
count = 0
end
end
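The important change is that the exchange object is created once, before
the loop, and reused for every publish; the original test_push.rb calls
msg_server.exchange(EXCHANGE) on every iteration, which is what looks
likely to leak channels.

On Chuck's question below about popping versus having messages pushed:
if the Bunny version you have installed provides Queue#subscribe (treat
that as an assumption and check the gem's README for the exact block
signature), a push-style consumer is also worth trying. A minimal sketch:

require "rubygems"
require "bunny"

QUEUE = "process_telemetry"

msg_server = Bunny.new(:spec => '08')
msg_server.start

count = 0
prev_time = Time.now

# Let the broker deliver messages instead of polling with pop.
msg_server.queue(QUEUE).subscribe do |msg|
  # msg holds the delivered message, in whatever form your Bunny version yields
  count += 1
  if count > 100
    t = Time.now
    puts("msgs per sec: #{count / (t - prev_time)}")
    prev_time = t
    count = 0
  end
end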
Paolo
On Mon, Sep 7, 2009 at 11:28 PM, Aisha Fenton <aisha.fenton at gmail.com> wrote:
> Hi Chuck.
> I don't know Suhail. And his problem may be different from mine.
> To isolate the problem, I've reduced what I'm doing down to the attached code
> files. One pushes messages, the other pops them.
> I see the problem even when only test_push.rb is running (as long as the
> queue is sufficiently pre-populated). Nothing else is using rabbitmq when
> I'm doing the test.
> I'm getting the same problem on several different machines here -- my local
> Mac OS X box and a server-class Debian box.
>
> exchange - durable or not?
>
> not durable
>
> exchange - type? (Looks like "fanout" from the original post)
>
> fanout
>
> number of bindings per exchange
>
> 1
>
> queue - durable?
>
> not durable
>
> publishing settings - require ack? persistent? immediate?
>
> no ack, non persistent, and non immediate.
>
> message size? 1K according to the original post
>
> < 1K
>
>
> test_push.rb
> ---
> require "rubygems"
> require "bunny"
> EXCHANGE = "raw_telemetry"
> QUEUE = "process_telemetry"
> count ||= 0
> prev_time ||= Time.now
> msg_server = Bunny.new(:spec => '08')
> msg_server.start
> msg_server.exchange(EXCHANGE, :type => :fanout)
> msg_server.queue(QUEUE).bind(EXCHANGE)
> loop do
>   msg_server.exchange(EXCHANGE).publish("This is my msg")
>
>   count += 1
>   if count > 100
>     t = Time.now
>     puts("msgs pushes per sec: #{count / (t - prev_time)}")
>     prev_time = t
>     count = 0
>   end
> end
> test_pop.rb
> ---
> require "rubygems"
> require "bunny"
> @msg_server = Bunny.new(:spec => '08')
> @msg_server.start
> def start
>   count ||= 0
>   prev_time ||= Time.now
>   queue = @msg_server.queue("process_telemetry")
>   loop do
>     result = queue.pop
>     next if result == :queue_empty
>
>     count += 1
>     if count > 100
>       t = Time.now
>       puts("msgs pops per sec: #{count / (t - prev_time)}")
>       prev_time = t
>       count = 0
>     end
>   end
> end
>
> begin
>   start()
> ensure
>   puts "stopping"
>   @msg_server.stop
> end
>
>
>
> On 8/09/2009, at 6:38 AM, Chuck Remes wrote:
>
> Are you running the same code as Aisha? Are you the same person, or do you
> work together? I'm getting a bit confused in this thread.
> And, someone needs to start showing some code. I'm able to push hundreds of
> messages per second and never see a drop off in delivery performance
> regardless of the queue size (until I run out of memory). I don't think it
> is fair to blame rabbit until we can see your code and understand what you
> are doing.
> Also, please specify the following things:
> exchange - durable or not?
> exchange - type? (Looks like "fanout" from the original post)
> number of bindings per exchange
> queue - durable?
> publishing settings - require ack? persistent? immediate?
> message size? 1K according to the original post
> Is your test machine paging or swapping? What is the 'top' output for your
> process and for rabbit at various points of the run cycle? Is it lower or
> higher when you are getting 300 msg/s than when you are getting 80 msg/s?
> What is your subscriber doing with the data it receives? If you make that
> process a "no op" how many messages per second can you handle? Does it ever
> drop off related to queue size? Why are you "popping" messages from the
> queue instead of having them pushed automatically?
> This thread has been really frustrating because there has NOT been very much
> information shared. All I see is "it's slow." Give us more information.
> cr
>
> On Sep 7, 2009, at 11:34 AM, Suhail Doshi wrote:
>
> 2009-09-07 16:33:10,822 INFO [id: 95705166] Processing complete, acknowledged.
> 2009-09-07 16:33:10,826 INFO [id: 95705166] Received queue item, processing...
> 2009-09-07 16:33:10,878 INFO [id: 95705166] Processing complete, acknowledged.
> 2009-09-07 16:33:10,882 INFO [id: 95705166] Received queue item, processing...
> 2009-09-07 16:33:12,839 INFO [id: 95705166] Processing complete, acknowledged.
> 2009-09-07 16:33:12,839 INFO [id: 95705166] Received queue item, processing...
> 2009-09-07 16:33:13,531 INFO [id: 95705166] Processing complete, acknowledged.
> 2009-09-07 16:33:13,531 INFO [id: 95705166] Received queue item, processing...
> As you can see, there's a pause between :10 and :12, even though each item
> gets processed pretty fast once it arrives. Odd, isn't it?
> Suhail
> On Mon, Sep 7, 2009 at 9:32 AM, Suhail Doshi <digitalwarfare at gmail.com>
> wrote:
>>
>> I almost feel as though rabbitmq can't push items to my consumers fast
>> enough. I had a backed-up queue with 20 consumers, spawned another, and
>> watched the items get processed; it just seemed so slow, with numerous
>> pauses between items, and I don't believe it is the consumers' fault.
>> I am using the Python library for the consumer part.
>> Suhail
>>
>> On Mon, Sep 7, 2009 at 8:23 AM, Chuck Remes <cremes.devlist at mac.com>
>> wrote:
>>>
>>> On Sep 7, 2009, at 1:07 AM, aisha fenton wrote:
>>>
>>> > Hi,
>>> > I'm sure I'm doing something wrong since I can't find reference to
>>> > this anywhere else. What I'm seeing is that the performance of
>>> > draining a queue gets slower as the queue size increases.
>>> >
>>> > I'm aware of the issue in RabbitMQ 1.6 where, once it runs out of
>>> > physical memory, its performance degrades because it starts swapping
>>> > out. But I'm not anywhere close to running out of memory yet, and the
>>> > degradation starts almost immediately and increases linearly as the
>>> > queue depth grows.
>>> >
>>> > I am publishing 500mps to an exchange. Each message is about 1KB. I
>>> > have a single fanout queue bound to the exchange. A single consumer is
>>> > popping messages off the queue.
>>> >
>>> > When the queue is less than 20,000 messages I can pop 300mps off the
>>> > queue. When the queue holds 200,000 messages the performance drops to
>>> > 40-80mps.
>>> >
>>> > I'm running RabbitMQ 1.6.0, and neither rabbitmq, the consumer, nor the
>>> > publisher is using more than 40% CPU.
>>> >
>>> > I assume I shouldn't be seeing this? Any help much appreciated.
>>>
>>> You didn't include any code, but I'm going to take a stab in the dark
>>> anyway. If you are using the ruby amqp gem you might be doing
>>> something like this:
>>>
>>> def next_message
>>>   exchange = MQ.fanout 'foo'
>>>   queue = MQ.queue 'bar'
>>>
>>>   queue.bind exchange
>>>
>>>   newest_message = queue.pop
>>> end
>>>
>>> In the code above the call to MQ.<whatever> is opening a new channel
>>> to rabbitmq each time the method is called. You are essentially
>>> leaking channels all over the place every time you try to pop a new
>>> message.
>>>
>>> If you aren't using the ruby amqp stuff, I still recommend checking
>>> your code for whatever library you chose. You only need to allocate a
>>> few channels and reuse them.
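>>>
>>> As a sketch of the reuse pattern (using the same MQ calls as the
>>> fragment above, so treat the exact API as an assumption): set the
>>> channel, exchange, and queue up once, then have the method reuse them.
>>>
>>> @mq       ||= MQ.new            # one channel, opened once
>>> @exchange ||= @mq.fanout 'foo'  # declared once, then reused
>>> @queue    ||= @mq.queue 'bar'
>>> @queue.bind @exchange
>>>
>>> def next_message
>>>   @queue.pop                    # no new channel per call
>>> end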
>>>
>>> cr
>>>
>>>
>>> _______________________________________________
>>> rabbitmq-discuss mailing list
>>> rabbitmq-discuss at lists.rabbitmq.com
>>> http://lists.rabbitmq.com/cgi-bin/mailman/listinfo/rabbitmq-discuss
>>
>>
>>
>> --
>> http://mixpanel.com
>> Blog: http://blog.mixpanel.com
>
>
>
> --
> http://mixpanel.com
> Blog: http://blog.mixpanel.com
>
>
>