[rabbitmq-discuss] Stop producer and queue continue growing...

Gustavo Aquino aquino.gustavo at gmail.com
Wed Mar 17 12:15:21 GMT 2010


Hi Matthew,

Can you help me with these doubts?

Regards.

On Fri, Mar 12, 2010 at 1:03 PM, Gustavo Aquino <aquino.gustavo at gmail.com> wrote:

> Matthew,
>
>
> On Fri, Mar 12, 2010 at 11:58 AM, Matthew Sackman <matthew at lshift.net> wrote:
>
>> On Fri, Mar 12, 2010 at 11:36:31AM -0300, Gustavo Aquino wrote:
>> > This is my biggest concern.
>>
>> It's the nature of AMQP.
>>
>> > Consider this scenario: I have one exchange and one persistent queue.
>> > I'm posting transient messages to the exchange (for better performance).
>> > If those messages are still in a buffer when the server crashes, I will
>> > lose everything in the buffer, and that is a big problem... I cannot
>> > lose any messages,
>>
>> Not true. There is no messaging product available (nor will there ever
>> be) that can guarantee it can't lose messages. Do you have infinite disk
>> space? Is there no event that can cause power loss? Even in the event of
>> the sun exploding you want to ensure no message loss?
>>
>
> If the sun explodes, money will be the least of our problems. ;-)
>
> We don't have infinite disk space, but the primary proposal is monitoring,
> so we can be proactive and act before a problem happens.
>
> For example, suppose I'm monitoring queue size and see that we are
> approaching our limit. One way to protect the messages already on the
> server is to stop the producers and redirect them to another server;
> in theory I can then guarantee that the messages inside server1 will
> be consumed, and the server won't lose messages or crash.
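>
> Something like this is what I have in mind for the monitoring side (a
> rough sketch using the Python pika client; the queue name, host names
> and threshold are made up, and redirect_producers_to is a hypothetical
> helper):
>
>     import pika
>
>     # A passive declare does not create the queue; it only inspects it
>     # and reports the current message count.
>     conn = pika.BlockingConnection(pika.ConnectionParameters('server1'))
>     channel = conn.channel()
>     ok = channel.queue_declare(queue='orders', passive=True)
>
>     DEPTH_LIMIT = 100000  # made-up threshold for this example
>     if ok.method.message_count > DEPTH_LIMIT:
>         # Stop publishing to server1 and point the producers at
>         # server2 before the queue hits its real limit.
>         redirect_producers_to('server2')  # hypothetical helper
>
>     conn.close()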
>
>
> Also, how do I guarantee that a posted message will be consumed? The
> consumer doesn't know about the message that was posted.
>
>
>>
>> The fact is, message loss is unavoidable. You can take many steps to
>> mitigate it: use transactions and buffer messages in the client so
>> that if the connection to the broker dies you can reconnect and
>> republish. At that point you risk duplicates, so you need
>> application logic on the consumer side to watch for duplicates and
>> remove them as necessary.
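>>
>> In outline, the publishing side might look like this (a rough sketch
>> with the Python pika client; the exchange and routing key names are
>> illustrative):
>>
>>     import pika
>>
>>     conn = pika.BlockingConnection(pika.ConnectionParameters('localhost'))
>>     channel = conn.channel()
>>     channel.tx_select()  # put the channel into transactional mode
>>
>>     def publish_once(body, msg_id):
>>         # A client-assigned message_id lets the consumer spot
>>         # duplicates if we reconnect and republish after a failure.
>>         channel.basic_publish(
>>             exchange='my-exchange',
>>             routing_key='my-key',
>>             body=body,
>>             properties=pika.BasicProperties(message_id=msg_id))
>>         channel.tx_commit()  # the broker owns the message only once
>>                              # this returns without error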
>>
>> > these messages are business, they represent money, so if I lose one we
>> > lose money, and that cannot happen. On the other hand, I need to be as
>> > fast as possible: latencies in nanoseconds, with 1 millisecond as the
>> > hard limit, so posting messages to a persistent queue is out of scope.
>> > Or is there a way to be fast with persistence?
>>
>> Right, so you can't afford the delay of writing to disk, and yet you
>> want to ensure messages can't be lost in the event of a crash, or
>> comet strike. You may wish then to explore multiple brokers: publish,
>> in transactions, to duplicated resources (queues) on every broker, and
>> then simultaneously consume from all of them, deduplicating as you go.
>> Thus you don't need to write to disk, and by sheer strength of numbers
>> you should be able to survive crashes. However, really these various
>> brokers should be in different data centres, probably on different
>> continents, so the 1ms max delay may be challenging...
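>>
>> The consuming side of that scheme might look roughly like so (again
>> the Python pika client; the in-memory set stands in for real,
>> persistent dedup storage, and process is a hypothetical handler):
>>
>>     import pika
>>
>>     seen = set()  # in production this must be bounded and durable
>>
>>     def on_message(channel, method, properties, body):
>>         # Every broker delivers its own copy of each message;
>>         # keep the first one and drop the rest.
>>         if properties.message_id not in seen:
>>             seen.add(properties.message_id)
>>             process(body)  # hypothetical application handler
>>         channel.basic_ack(delivery_tag=method.delivery_tag)
>>
>>     def consume_from(host):
>>         # Run one of these per broker, e.g. one thread or process each.
>>         conn = pika.BlockingConnection(pika.ConnectionParameters(host))
>>         ch = conn.channel()
>>         ch.basic_consume(queue='my-queue', on_message_callback=on_message)
>>         ch.start_consuming()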
>>
>
> How do I duplicate resources across multiple brokers using RabbitMQ? I saw
> that Rabbit doesn't have a default way to do an HA cluster; your proposal
> is basically HA done by hand.
>
>
>>
>> Matthew
>>
>
>