[rabbitmq-discuss] Stop producer and queue continue growing...
matthew at lshift.net
Wed Mar 17 12:40:10 GMT 2010
On Wed, Mar 17, 2010 at 09:15:21AM -0300, Gustavo Aquino wrote:
> > Look, for example: I'm monitoring queue size, and I see that we are coming
> > to our limit. One way to guarantee the messages inside the server is to stop
> > the producers and redirect them to another server, so in theory I can
> > guarantee that the messages inside server1 will be consumed and the server
> > will not lose messages or crash.
Right, monitoring queue size is not a good idea in general, though it
may be sufficient in your case. Erlang is a GC'd language and it decides
by itself when to GC. So the same rabbit instance with exactly the same
messages in it can take up wildly different amounts of memory. This is
just one of the reasons why monitoring queue size is not a great idea.
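That said, the redirect scheme from the quoted message is easy to sketch. The broker names and the high-water threshold below are invented for illustration, and in practice the depth figure would come from something like `rabbitmqctl list_queues` or the management plugin, with all the caveats above about what that number does and doesn't tell you:

```python
# Hypothetical sketch of threshold-based producer redirection.
# HIGH_WATER and the server names are assumptions, not Rabbit defaults.

HIGH_WATER = 1000  # messages; an assumed limit for this example


def pick_broker(primary_depth, primary="server1", standby="server2"):
    """Route new publishes to the standby once the primary queue is too deep."""
    return standby if primary_depth >= HIGH_WATER else primary


print(pick_broker(10))    # server1
print(pick_broker(5000))  # server2
```

A real deployment would poll the depth periodically and would probably want some hysteresis so producers don't flap between brokers around the threshold.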
The new persister goes to great lengths to ensure that eventually,
Rabbit will always be able to accept another message, assuming you don't
run out of disk space. If you combine the new persister with the
rabbitmq-toke plugin then you really can get to the point at which there
is no per-message RAM cost. However, yes, that does mean that overall
throughput will be lower as messages are written to and read from disk.
Fast disks and plenty of RAM can alleviate this problem, though not
eliminate it entirely.
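To illustrate the idea of no per-message RAM cost (this is a toy model, not Rabbit's actual persister or the toke plugin), consider a queue that writes message bodies straight to a file and keeps only a small offset index in memory:

```python
# Toy disk-backed FIFO queue: message bodies live on disk, and the only
# per-message RAM cost is one (offset, length) pair in the index.
import os
import tempfile


class DiskQueue:
    def __init__(self, path):
        self.f = open(path, "w+b")
        self.index = []  # (offset, length) per message - the RAM cost
        self.head = 0

    def publish(self, body: bytes):
        self.f.seek(0, os.SEEK_END)
        self.index.append((self.f.tell(), len(body)))
        self.f.write(body)

    def consume(self) -> bytes:
        off, n = self.index[self.head]
        self.head += 1
        self.f.seek(off)
        return self.f.read(n)


q = DiskQueue(os.path.join(tempfile.mkdtemp(), "q.dat"))
q.publish(b"hello")
q.publish(b"world")
print(q.consume())  # b'hello'
print(q.consume())  # b'world'
```

Every publish and consume here touches the disk, which is exactly the throughput trade-off described above.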
But in general, if you need low latency then that can only be achieved
by ensuring that the queues stay empty, or very close to empty. If that
is the case, then you should have no issues at all with running out of
RAM or having to deal with badly overloaded Rabbits.
> > How do I duplicate resources across multiple brokers using RabbitMQ? I saw
> > that Rabbit doesn't have a default way to do an HA cluster; your proposal
> > is based on HP.
You can do active/passive failover - we have an OCF script now and this
can be set up with Pacemaker, but this'll only ensure that persistent
messages survive node failures. As I've recently demonstrated in other
posts to the mailing list, the rabbitmq-shovel can very effectively be
used to duplicate messages to several other brokers in a way which
ensures the brokers have the same ordering of messages within them. That
may be sufficient for you to ensure that messages are duplicated to
several brokers, thus ensuring they will survive node failure, but
obviously your consumers need to be a bit smarter to consume from all
such nodes and perform deduplication, unless you plan to solve this
issue through some other means.
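The duplicate-then-deduplicate scheme can be sketched in a few lines. This is illustrative Python, not the rabbitmq-shovel plugin itself; the message-id scheme and the in-memory "brokers" are assumptions made for the example:

```python
# Sketch: a shovel-like relay copies each message, in order, to every
# broker; a consumer reading from all brokers drops duplicates by id.
from collections import deque


def shovel(source, brokers):
    """Relay every message from `source` to each broker, preserving order."""
    while source:
        msg = source.popleft()
        for b in brokers:
            b.append(msg)


def consume_dedup(brokers):
    """Consume from all brokers, keeping only the first copy of each id."""
    seen, out = set(), []
    for b in brokers:
        while b:
            msg_id, body = b.popleft()
            if msg_id not in seen:
                seen.add(msg_id)
                out.append(body)
    return out


source = deque([(1, "a"), (2, "b")])
b1, b2 = deque(), deque()
shovel(source, [b1, b2])
print(consume_dedup([b1, b2]))  # ['a', 'b']
```

Since the relay appends to every broker in the same order, each broker holds the same sequence, and any one surviving broker is enough to recover all messages.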