[rabbitmq-discuss] flow control issues

romary.kremer at gmail.com romary.kremer at gmail.com
Fri Sep 17 15:32:09 BST 2010


There are still some points I don't understand very well about flow
control.

As Marek said, in versions prior to 2.0.0 the broker sent a Channel.flow
AMQP method to every client once rabbit reached its memory limit.
However, I still don't understand how production came to be blocked with
that version.

At first, I thought that the client application had to implement its own
pause/resume logic, and that the FlowListener interface would be the way
to do that. I realized only later that the producer was paused even
without using a FlowListener, for instance when running the MultiCastMain
sample from Java client 1.7.2.
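
For reference, here is roughly what we had in mind (a minimal sketch,
assuming the 1.7.x-era Java client where Channel exposes a
setFlowListener(...) hook; the exact registration method may differ
between client versions, and the "paused" flag is our own, not part of
the library):

    import java.util.concurrent.atomic.AtomicBoolean;
    import com.rabbitmq.client.Channel;
    import com.rabbitmq.client.FlowListener;

    public class PausablePublisher {
        // Our own pause flag, checked before each publish attempt.
        private final AtomicBoolean paused = new AtomicBoolean(false);

        public void install(Channel channel) {
            channel.setFlowListener(new FlowListener() {
                public void handleFlow(boolean active) {
                    // active == false means the broker asked us to stop
                    // publishing on this channel.
                    paused.set(!active);
                }
            });
        }

        public boolean isPaused() {
            return paused.get();
        }
    }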

Can you give us some hints about how production was actually paused,
other than by the broker ceasing to read from the socket?
Does the broker shut down the connection, or only the channel, if a
content-bearing message is sent after the channel.flow?
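
If, as Marek suggested, channel.basicPublish can simply block inside the
socket write once the broker stops reading, one client-side guard we are
considering is to publish from a dedicated thread and bound the wait.
This is only a sketch of our own idea, not something the client library
provides (GuardedPublisher and tryPublish are hypothetical names):

    import java.util.concurrent.Callable;
    import java.util.concurrent.ExecutorService;
    import java.util.concurrent.Executors;
    import java.util.concurrent.Future;
    import java.util.concurrent.TimeUnit;
    import java.util.concurrent.TimeoutException;
    import com.rabbitmq.client.Channel;

    public class GuardedPublisher {
        private final ExecutorService exec = Executors.newSingleThreadExecutor();
        private final Channel channel;

        public GuardedPublisher(Channel channel) {
            this.channel = channel;
        }

        // Returns false if the publish did not complete within timeoutMs,
        // e.g. because the broker has blocked the connection. Note that
        // the publish is not cancelled; it may still go through later.
        public boolean tryPublish(final String exchange, final String routingKey,
                                  final byte[] body, long timeoutMs) throws Exception {
            Future<?> f = exec.submit(new Callable<Void>() {
                public Void call() throws Exception {
                    channel.basicPublish(exchange, routingKey, null, body);
                    return null;
                }
            });
            try {
                f.get(timeoutMs, TimeUnit.MILLISECONDS);
                return true;
            } catch (TimeoutException e) {
                return false;
            }
        }
    }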

What is the meaning of the "blocked" and "blocking" statuses that we can
see when listing connections? I have often noticed that a producer
connection in the "blocked" state leads to the consumer's connection
becoming "blocking".

Sharing the same connection between producer and consumer, each on its
own Channel, may therefore not be safe, which is very unfortunate!
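
For context, here is roughly how we are multiplexing both roles over one
connection (a sketch against a 2.x-style Java client API; the
ConnectionFactory setup and the "peer-in"/"peer-out" queue names are
placeholders, and the queues are assumed to already exist):

    import com.rabbitmq.client.Channel;
    import com.rabbitmq.client.Connection;
    import com.rabbitmq.client.ConnectionFactory;
    import com.rabbitmq.client.QueueingConsumer;

    public class SharedConnectionPeer {
        public static void main(String[] args) throws Exception {
            ConnectionFactory factory = new ConnectionFactory();
            // host/port and SSL configuration omitted
            Connection conn = factory.newConnection();

            // One channel per role, both multiplexed on the same TCP
            // connection.
            Channel pubChannel = conn.createChannel();
            Channel subChannel = conn.createChannel();

            QueueingConsumer consumer = new QueueingConsumer(subChannel);
            subChannel.basicConsume("peer-in", true, consumer);

            // If the broker blocks this connection because of publishes on
            // pubChannel, everything the client sends - including traffic
            // on subChannel - is held up too, which would explain the
            // "blocked" producer dragging the consumer along with it.
            pubChannel.basicPublish("", "peer-out", null, "hello".getBytes());

            conn.close();
        }
    }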

Best regards,

Romary.

On 16 Sep 2010, at 14:49, Marek Majkowski wrote:

> On Wed, Sep 15, 2010 at 19:11, romary.kremer at gmail.com
> <romary.kremer at gmail.com> wrote:
>> Yes. You will never hear "FlowListener.handleFlow()" and it may be
>> possible for channel.publish to block (though I would need to consult
>> the sources to be sure).
>>
>> It seems to me that the FlowListener interface is likely to be
>> deprecated then, isn't it?
>
> It's more complex than that, but yes, you won't hear anything on  
> that interface.
>
>> It does not really matter for us anyway, since we were on the wrong
>> track using that.
>> Does this new implementation keep the broker compliant with the
>> specification, then?
>
> Yes, the spec doesn't force us to send channel.flow. We implemented
> that for a while but realized that it doesn't solve our problems.
>
>> This would be acceptable for our needs only if we can somehow
>> guarantee that it is an upper bound!
>
> Optimistically, the upper bound is no more than your memory usage
> divided by the disk throughput.
>
>> But it is possible to create a very pessimistic environment in which
>> the memory usage will not drop - and the connection could be stuck
>> for a long time (though it's pretty unlikely).
>
>> ... Not that unlikely, considering my little experiment with the
>> MultiCastMain sample (see my previous reply about it for details):
>> I get a blocked connection 100% of the time.
>> What would be, based on your knowledge and your intuition, "a very
>> pessimistic environment in which the memory usage will not drop"?
>> I think the experiment I have done with MultiCastMain may be the
>> beginning of an answer to that question, although I would never have
>> expected a single producer to have enough power to flood the broker.
>
> Okay. The memory can stay high for a lot of reasons. If the metadata
> that rabbit never releases is using more memory than the threshold,
> rabbit will just get stuck.
>
> Next thing: remember, we don't control how Erlang uses memory - and it
> has pretty complex GC and memory allocation mechanisms.
>
> If you think you have enough memory for all the connections, queues,
> exchanges and bindings, plus some memory for the messages, and you
> still hit the limit and get stuck - feel free to tune the Erlang GC
> internals:
>  http://www.erlang.org/doc/man/erts_alloc.html
>
>> Weren't Channels designed for that? In our environment we have
>> (naively?) considered using Channels to separate message production
>> from consumption.
>> Since we are targeting 10,000 peers doing both production and
>> consumption, multiplying the number of connections by 2 is not
>> negligible at all in terms of scalability.
>> Moreover, as I reported later on, we use SSL to authenticate the
>> broker, and we are still unclear about memory leaks induced by SSL
>> connections. Doubling the number of connections would not be
>> negligible in terms of memory occupation either.
>> In conclusion, we are unlikely to implement our peers with 2
>> connections to the same broker.
>> What would you recommend to us then? And could you give us a better
>> understanding of the use cases for channels?
>
> Yes, channels were designed exactly for that. On the other hand, AMQP
> has a few pretty serious issues. For example, once you open a channel
> you're free to publish a message, and the broker can't 'refuse' to
> accept a message. Channel.flow can be sent from the broker to the
> client, but only *after* the channel is opened, so there is a window
> in which you can still publish (after channel.open, before
> channel.flow). Sorry, there just isn't any other way of forbidding
> publishes than by stopping the whole connection. I don't like it
> either, but it's the only way.
>
>
> Cheers,
> Marek


