[rabbitmq-discuss] hanging on message sending with py-amqplib
matthew at rabbitmq.com
Fri Oct 1 18:19:36 BST 2010
On Thu, Sep 30, 2010 at 04:52:46PM -0500, tsuraan wrote:
> The way I'm repopulating the queues is by binding each queue to the
> amq.direct exchange with the queue name as the routing key, and then
> sending messages to that exchange with the desired queue's name as the
> key. The messages are being created as persistent messages, and the
> sends are done in a transactional channel with a commit every hundred
> messages.
Ok, so you should be seeing a fair amount of disk activity.
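The pattern described above - publish per-message, commit the transaction every N messages - can be sketched generically; the helper below is a hypothetical illustration, not code from the thread, and the py-amqplib wiring in the comment uses made-up host and queue names:

```python
def publish_in_batches(publish, commit, payloads, batch_size=100):
    """Call publish() for each payload, calling commit() after every
    batch_size messages and once more for any remainder at the end."""
    pending = 0
    for payload in payloads:
        publish(payload)
        pending += 1
        if pending == batch_size:
            commit()
            pending = 0
    if pending:
        commit()

# With py-amqplib this would be wired up roughly as follows
# (hypothetical host and queue name):
#
#   from amqplib import client_0_8 as amqp
#   conn = amqp.Connection(host="localhost:5672")
#   ch = conn.channel()
#   ch.tx_select()                       # transactional channel
#   ch.queue_bind(queue="my_queue", exchange="amq.direct",
#                 routing_key="my_queue")
#   publish_in_batches(
#       lambda body: ch.basic_publish(
#           amqp.Message(body, delivery_mode=2),   # persistent
#           exchange="amq.direct", routing_key="my_queue"),
#       ch.tx_commit, payloads)
```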
> While this is happening, the consumers are consuming from
> the queues. If I stop the consumers, then the hangs also stop.
Are the consumers part of the same clients that are doing the sending or
are they different clients?
> I also don't ever seem to get hangs when doing normal message sending
> through the exchanges that I normally use; at least I haven't seen any
> evidence or heard any bug reports that would indicate that's
> happening. There is nothing in my sasl log, and the only unusual
> thing in my other log is that I'm sometimes hitting the high water
> mark, but I haven't seen that happen while I've been attempting to
> send messages.
At what rate are the consumers consuming messages? Rabbit is optimised
to get rid of messages quickly, so if a queue is quite long it will
drive consumers as fast as possible but ignore publishes until the
queue has become empty... however, that should really only express
itself internally within Rabbit - the client shouldn't actually see any
impact on publishing, unless it was also doing something synchronous
from time to time, like a tx.commit. Are you finding it blocks when it
hits a tx.commit?
I don't think there's anything wrong with what you're doing. But this
might be a client library issue.
Seeing as you're using a client to shovel messages from one broker to
another, I would suggest experimenting with the shovel plugin - I'd
configure it in the new broker (where it's easiest to install) and just
have it drag all the messages over from the old broker. If that works,
then it does point to something going wrong in the python client
library.
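For reference, a shovel set up on the new broker to drain the old one might look roughly like this in the Erlang config file - the shovel name, URIs and queue name here are invented, and the exact keys should be checked against the shovel plugin documentation for your Rabbit version:

```erlang
{rabbitmq_shovel,
 [{shovels,
   [{drain_old_broker,                        %% hypothetical shovel name
     [{sources,      [{broker, "amqp://guest:guest@old-host:5672"}]},
      {destinations, [{broker, "amqp://"}]},  %% the local (new) broker
      {queue, <<"my_queue">>},                %% queue to drag over
      {publish_properties, [{delivery_mode, 2}]}  %% keep messages persistent
     ]}]}]}
```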
Socket flushing will block if the broker has hit its memory high
watermark. This is because we now use TCP backpressure to block any
clients which are sending us messages. If you have the sender and the
consumer on the same connection, this will certainly affect you, and it
is possible that memory use will be worse (more fragmented) if you have
a consumer and a publisher at the same time rather than just a
publisher, so you might just be hitting the limits more often. It could
be that the socket flush doesn't correctly return even when the socket
becomes writable again.
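The backpressure mechanism itself is just ordinary TCP flow control, and the blocking behaviour can be reproduced with plain sockets - a rough, self-contained sketch (nothing here is Rabbit-specific; the buffer sizes and loop count are arbitrary):

```python
import socket

# A server that never reads, standing in for a broker that has stopped
# consuming from the socket. Shrink the buffers so the effect shows up
# after only a few kilobytes.
server = socket.socket()
server.setsockopt(socket.SOL_SOCKET, socket.SO_RCVBUF, 4096)
server.bind(("127.0.0.1", 0))
server.listen(1)

client = socket.socket()
client.setsockopt(socket.SOL_SOCKET, socket.SO_SNDBUF, 4096)
client.connect(server.getsockname())
peer, _ = server.accept()

# Non-blocking so we get an error instead of a hang; a blocking client
# (like the py-amqplib socket flush) would simply stall at this point.
client.setblocking(False)

sent, blocked = 0, False
try:
    for _ in range(100000):
        sent += client.send(b"x" * 1024)
except OSError:
    blocked = True  # both buffers full: the peer has "pushed back"

print("sent %d bytes before the send would have blocked" % sent)
client.close(); peer.close(); server.close()
```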