[rabbitmq-discuss] Form Submission - Add A Question
Sean Treadway
sean at soundcloud.com
Wed Nov 26 12:39:45 GMT 2008
Ben,
On Nov 24, 2008, at 8:45 PM, Ben Hood wrote:
> What I meant by absorbing back pressure would be an example where the
> consuming client pulls every message from the socket and buffers it
> internally. This would negate the effect of back pressure and just
> exercise memory pressure on the client application to compensate for
> this. This would be an example where you would be throwing away the flow
> control provided by TCP (under the assumption that you are using TCP
> as a transport).
That the client library is snarfing up everything the broker sends
was what I initially thought you meant too. But then I looked hard,
and I'm pretty sure this Ruby client doesn't pull all frames off the
socket and parse them. With the test case provided there is some
logging of the protocol parsing, and the decoding of each frame only
happens when the processing of each message returns control back to
the event loop (every 10 seconds). So then I thought that setting the
TCP window size to something very small could make TCP flow control
kick in earlier, limiting the bytes and hence the messages sent by
the broker.
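For what it's worth, here's roughly what I mean as a standalone
Python sketch (the numbers are made up, and a real client would set
this on its AMQP socket before connecting):

  import socket

  # Hypothetical sketch: shrink the receive buffer so the TCP window
  # the kernel advertises is small, making the broker's sends block
  # sooner once we stop reading.
  sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
  # SO_RCVBUF needs to be set before connect() to influence the
  # window negotiated during the handshake.
  sock.setsockopt(socket.SOL_SOCKET, socket.SO_RCVBUF, 4096)
  sock.connect(("localhost", 5672))  # assumes a local broker on 5672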
>> This makes it clearer:
>> http://lettuce.squarespace.com/faq/queues/when-declaring-a-queue-what-does-the-message-count-field-mea.html
>> But on a practical level, I think it's important to point out
>> explicitly that mid-flight messages for a consumer with
>> no_ack=false are not included in this count.
>
> Sorry, I don't quite follow this. Can you explain?
This is just a suggestion to include a bit more information in the FAQ
entry. Coming from our application's point of view rather than the
broker's point of view, it was tough for me to put together a picture
of how all the pieces fit together.
This is what I was seeing while trying to understand 'message count':
The producer publishes 3 messages, the broker accepts these 3 and
increments the message count for each. A consumer connects, the
broker puts the 3 messages in a packet on the wire and decrements the
broker's message count by 3. The consumer parses the frames from the
wire and delivers 1 message to the application, the client application
updates its message count by 1.
At this point, 3 messages exist - the producer has a count of 0, the
broker has a count of 0, the consumer application has a count of 1.
There are 2 unaccounted-for messages somewhere. I went on a hunt for
these messages. With your and Matthias' help, I believe they exist in
the consumer's socket read buffer.
These messages in the read buffer are the ones that are in mid-
flight. Our client application logic per consumer could take up to
10 minutes per unit of work before the next message is parsed and
handed to our application, so those extra messages end up in "mid-
flight" for 10-20 minutes, which isn't very intuitive given the
name. In the meantime, we could have started up another client
application to begin working on those mid-flight messages, had we
known they existed by checking the 'message count'.
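To make that check concrete, here's a rough sketch with the Python
pika client (not the Ruby client we actually use; the queue name is
made up):

  import pika

  conn = pika.BlockingConnection(pika.ConnectionParameters("localhost"))
  ch = conn.channel()

  # passive=True asks the broker for the queue's stats without
  # (re)declaring it; the returned message_count covers ready
  # messages only, not unacked mid-flight ones.
  ok = ch.queue_declare(queue="work", passive=True)
  if ok.method.message_count > 0:
      print("%d ready messages - maybe start another worker"
            % ok.method.message_count)
  conn.close()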
>
> This sounds more like an application of basic.qos because you want to
> distribute messages based on the various consumers' ability to process
> units of work. By setting the prefetch window, you get more fine
> grained consumer flow control.
Indeed this is just what we're looking for.
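The call itself is tiny; in the same illustrative Python/pika style
(assuming a channel 'ch' as in the sketch above):

  # Ask the broker to hold back deliveries so at most one unacked
  # message is outstanding on this channel at a time.
  ch.basic_qos(prefetch_count=1)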
>> It looks like branch 19684 (rabbitmqctl list_queues messages_ready)
>> gives us
>> the statistic we can use to tune our consumer pools. Are there any
>> plans of
>> also exposing the 'messages_ready' statistic over AMQP?
>
> Hmmm, difficult question. Some of the Redhat guys have been talking
> about putting more managementy stuff into the protocol, but I don't
> know what the current situation is.
>
> And coincidentally we have been talking about using the codegen to
> add some proprietary methods to Rabbit, but this is just talk.
>
> There were further discussions about SNMP as well.
>
> I think for now backing an embedded RPC handler using JSON as a
> codec will pay even money. This would be quite simple to knock up with
> the Erlang client for example.
However the management numbers are exposed, I would plan on writing a
task that reads and publishes the numbers to a "statistics" exchange,
so consumers can adjust themselves without introducing dependencies on
other protocols. This would put the management protocol/setup/
dependency cost on 1 producer, versus paying that cost to read some
stats on N consumers across M machine installations.
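Sketched out, that task might look something like this (Python again;
the exchange name, polling interval, and shelling out to rabbitmqctl
are all just assumptions of this example):

  import json
  import subprocess
  import time
  import pika

  conn = pika.BlockingConnection(pika.ConnectionParameters("localhost"))
  ch = conn.channel()
  ch.exchange_declare(exchange="statistics", exchange_type="fanout")

  while True:
      # Scrape the broker-side numbers from rabbitmqctl...
      out = subprocess.check_output(
          ["rabbitmqctl", "list_queues", "name", "messages_ready"])
      stats = {}
      for line in out.decode().splitlines():
          parts = line.split("\t")
          if len(parts) == 2 and parts[1].isdigit():
              stats[parts[0]] = int(parts[1])
      # ...and republish them over AMQP for the consumers to read.
      ch.basic_publish(exchange="statistics", routing_key="",
                       body=json.dumps(stats))
      time.sleep(10)  # arbitrary polling interval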
Ideally, the system management numbers could be consumed by queues
bound to system exchanges, without introducing the step of
republishing them. Like 'amq.statistics.messages-per-second.queue-
name'. But the monitoring/security requirements probably vary
greatly between applications, so we'd all probably end up writing
custom solutions to meet our application needs no matter what is
decided.
>> Or would branch 18577 (basic.qos) with pre-fetch set to 1 give us
>> the count
>> of un-acknowledged messages we're looking for from a passive
>> queue.declare?
>
> I'm not sure I understand - if you set the prefetch size to be one,
> only one outstanding message per channel is allowed - I don't think
> this answers your question though.
I'm pretty sure that with only 1 outstanding message per channel, the
message count will be something we can use for smaller queue depths,
because the window of time before the message is parsed from the wire
and in the client application's control is a fraction of the time that
it takes before the client application acknowledges the message.
We'll then just need to set up our client applications to have 1
consumer per channel.
If pre-fetch was 2, the message count would be 2 less per channel,
even though the client application was only working on 1 message. One
message would be on the client's socket, the other message would be in
the client's logic/worker loop. For us, message count for these
queues should be interpreted as "number of messages that are not
currently within client application logic", so we'll set the pre-fetch
to 1, to make sure there are 0 messages left on the client's socket.
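Concretely, a worker under that scheme might look like this (the same
pika-flavoured sketch; do_unit_of_work is a hypothetical stand-in for
our 10-minute job):

  def run_worker():
      conn = pika.BlockingConnection(
          pika.ConnectionParameters("localhost"))
      ch = conn.channel()
      ch.basic_qos(prefetch_count=1)  # 0 extra messages on our socket

      def handle(ch, method, properties, body):
          do_unit_of_work(body)  # may take minutes (hypothetical)
          ch.basic_ack(delivery_tag=method.delivery_tag)

      ch.basic_consume(queue="work", on_message_callback=handle)
      ch.start_consuming()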
> No, this is just a way to quickly grab that one message that wasn't
> acked. It would make no difference if you were to start another
> consumer for the same queue on a different channel - this is 6 of one
> and half a dozen of the other.
Good to know that the delivery priorities between basic.consume and
basic.get are equivalent.
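For completeness, the synchronous pull looks like this in the same
sketch style (reusing the ch and do_unit_of_work assumed above):

  # basic.get: pull a single message on demand instead of registering
  # a consumer; method is None when the queue is empty.
  method, properties, body = ch.basic_get(queue="work", auto_ack=False)
  if method is not None:
      do_unit_of_work(body)
      ch.basic_ack(delivery_tag=method.delivery_tag)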
Thanks, I hope this helps you understand one of our uses of RabbitMQ,
Sean