[rabbitmq-discuss] [Minimum Air Induction] Introducing Shovel: An AMQP Relay
dialtone at gmail.com
Sat Sep 20 17:45:37 BST 2008
On Sep 20, 2008, at 7:27 AM, Ben Hood wrote:
>> I hope so :). The main problem is the absolute necessity to not lose
>> any single one of the messages. Nothing can be lost.
> Sounds familiar. If you use transactional persistent messaging, this
> will be guaranteed. Sounds like an expensive setup for log statements
> though :-)
A transaction per message is indeed too expensive; I've tried this and
it was too slow. One thing that I wasn't able to understand is what a
transaction gives me in rabbitmq. I mean, I'm interested in single
messages and not in a bunch of them at a time (unless I aggregate them
on shovel as below), so for single messages transactions seemed the
wrong tool for me.
>> Here is already the first 'problem': if the known exchange is down,
>> the line would be lost forever. This is simple enough though:
>> rabbitmq can run embedded in mochiweb together with shovel, every
>> queue durable and every message too. In this case if mochiweb fails
>> I won't have to worry; if shovel disconnects it won't send lines to
>> anyone and won't even remove them from the queue, so nothing is lost
>> here; if rabbitmq dies I hope it brings everything down with itself,
>> traffic is rebalanced on the remaining nodes and nothing is lost.
> Phew! That was a sentence.....what do you mean by Rabbit bringing
> everything down with itself?
In the event of rabbitmq crashing I would like the whole thing to
crash, so that I'm sure there won't be lines generated without also
being queued. This is the embedded rabbitmq of course.
> And if you're using an embedded RabbitMQ instance, how is the Shovel
> application supposed to failover to other Rabbit nodes?
What do you mean? I suppose shovel would have a list of backup brokers
and would use them in the event the main one dies.
>> Right after this component there's another rabbitmq server, that we
>> can call the local rabbitmq, which is local to the mochiweb server,
>> in the same subnet. This server would collect everything that the
>> various mochiweb+rabbitmq servers send, persist it and forward it to
>> a central location. Again, everything is durable so there should be
>> no risk of losing messages.
> Why have this middleman? Why not just have the embedded Rabbit
> instances forward straight to the remote brokers?
I thought so for performance reasons. Since the first transmission is
in the same subnet it's fast, and it would free the webserver nodes
from any more trouble. Of course there's nothing wrong if I just
remove this middleman and directly send stuff to the remote brokers.
> Have you considered doing the coalescing in Shovel (i.e. on the
> sending side rather than on the receiving side)?
Yes, but I was worried about losing some messages in the process. I
suppose it can work this way too, with transactions this time though.
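Sender-side coalescing could look something like the following rough
sketch: buffer incoming log lines and flush them as one batched
payload once the batch is full (a real relay would also flush on a
timeout). The `Coalescer` class, `BATCH_SIZE`, and the `publish`
callback are all illustrative names, not Shovel's actual API.

```python
# Hypothetical sender-side coalescing: many small log lines become one
# larger payload, amortizing the per-publish (or per-transaction) cost.
BATCH_SIZE = 100

class Coalescer:
    def __init__(self, publish):
        self.publish = publish      # callback that sends one payload
        self.buffer = []

    def add(self, line):
        self.buffer.append(line)
        if len(self.buffer) >= BATCH_SIZE:
            self.flush()

    def flush(self):
        if self.buffer:
            # one newline-delimited payload per batch
            self.publish("\n".join(self.buffer))
            self.buffer = []

sent = []
c = Coalescer(sent.append)
for i in range(250):
    c.add("line %d" % i)
c.flush()
# 250 lines -> two full batches of 100 plus a final partial flush
```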
> Maybe you also want to compress stuff if you're sending it over a WAN.
Yes, one thing I was thinking is to just gzip the body of the message
myself before sending it, but I haven't looked into rabbitmq to see if
it supports this feature.
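Doing the gzip in the application is straightforward since AMQP treats
the body as opaque bytes; a minimal sketch (the `pack`/`unpack` names
are mine, and the consumer on the far side would need to agree on the
encoding, e.g. via the message's content-encoding property):

```python
import gzip

def pack(body: bytes) -> bytes:
    # compress the message body before publishing over the WAN
    return gzip.compress(body)

def unpack(body: bytes) -> bytes:
    # the consumer on the far side inflates it again
    return gzip.decompress(body)

# repetitive log data compresses very well
line = b"GET /index.html 200 1234 ..." * 40
packed = pack(line)
assert unpack(packed) == line
assert len(packed) < len(line)
```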
>> So, how should shovel behave:
>> Well, it should be pretty sure that every message was delivered to
>> the final
>> location, so I think its way of working would be:
>> 1. receive message from embedded consumer
>> 2. publish message to remote host
>> 3. wait for ack
>> 4. ack the rabbitmq container
>> 5. the rabbitmq container at this point can remove the message
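The five steps above can be modeled as a toy relay loop: the crucial
point is that the local (embedded) broker is only acked after the
remote broker has confirmed the publish, so a crash in between causes
at worst a redelivery, never a loss. The `Local`/`Remote` classes here
are stand-ins for real AMQP channels, not any actual client API.

```python
from collections import deque

class Local:
    def __init__(self, messages):
        # unacked messages; a durable broker would keep these across a crash
        self.pending = deque(messages)
    def get(self):
        return self.pending[0] if self.pending else None
    def ack(self, msg):
        assert self.pending[0] == msg
        self.pending.popleft()           # broker may now delete it

class Remote:
    def __init__(self):
        self.stored = []
    def publish(self, msg):
        self.stored.append(msg)
        return True                      # ack from the remote broker

def shovel(local, remote):
    while (msg := local.get()) is not None:  # 1. receive
        if remote.publish(msg):              # 2. publish, 3. wait for ack
            local.ack(msg)                   # 4.-5. ack; local may delete

local, remote = Local(["a", "b", "c"]), Remote()
shovel(local, remote)
```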
> What happens when Shovel fails between step 3 and 4? Or there is a
> network failure just after the remote broker sends the ack and just
> before it would have been received by Shovel? This sounds like the
> Byzantine General's problem. Maybe there is something you can do in
> the application to achieve the idempotency your application requires.
Well, what would happen is that the message would not be marked as
sent, so it will become a duplicate once it is sent again.
>> Now... I'm not sure if there's an ack confirmation message so that
>> the consumer is 100% sure that the confirmation was received. I
>> suppose there isn't, so this means that the system will maybe have
>> duplicates at the end and I'll have to take care of this somehow
>> (any suggestions?).
> Not quite sure what you mean here. Can you elaborate?
It's exactly what you said above. I was already thinking about how the
ack transmission can fail, and thus how to deal with the duplicate
messages that are created when the ack is lost.
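One common way to deal with those duplicates, sketched below, is to
stamp each message with a unique id on the producer and have the final
consumer drop ids it has already seen, making redelivery idempotent.
This is my own illustration (the seen-set would have to be persisted
in a real system, and ids eventually expired).

```python
import uuid

def make_message(body):
    # producer side: every message carries a unique id
    return {"id": str(uuid.uuid4()), "body": body}

class DedupConsumer:
    def __init__(self):
        self.seen = set()       # persisted storage in a real system
        self.processed = []
    def handle(self, msg):
        if msg["id"] in self.seen:
            return False        # duplicate from a lost ack: drop it
        self.seen.add(msg["id"])
        self.processed.append(msg["body"])
        return True

c = DedupConsumer()
m = make_message("log line 1")
assert c.handle(m) is True
assert c.handle(m) is False     # redelivery of the same message is ignored
```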
>> Another small problem is the current state of Shovel, where it
>> crashes when a connection is dropped. A change that I would like to
>> make (or would like to see) is that it should be able to reconnect
>> to the remote host with an exponential backoff, so that it starts
>> retransmitting as soon as possible.
> Sure, the OTP supervisor could potentially handle this.
Great, I'll have to study them more.
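The backoff schedule itself is simple enough to sketch: delays double
on every failed reconnect attempt up to a cap, and a real supervisor
would reset to the base delay after the first successful connection.
The generator below is illustrative, not Shovel or OTP code.

```python
def backoff_delays(base=1.0, factor=2.0, cap=60.0):
    # yields 1, 2, 4, 8, ... seconds, never exceeding the cap
    delay = base
    while True:
        yield min(delay, cap)
        delay *= factor

delays = backoff_delays()
first_six = [next(delays) for _ in range(6)]
# first_six == [1.0, 2.0, 4.0, 8.0, 16.0, 32.0]
```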
>> So it basically means that we have 8 minutes to react to such a
>> failure. Does this also sound reasonable? And if so... what possible
>> fixes can I look at? Ultimately... does this sound like something
>> that rabbitmq can be good at?
> ATM, queues are memory bound, so as indicated in a previous thread,
> you would have to calibrate this with your own application and
> production scenario. Just test it and find out where the limit is.
> BTW, we do intend to implement the disk overflow mechanism discussed
> with Edwin. Just don't know when it'll get done.
I suppose a solution that would avoid part of this problem is to
remove the middleman rabbitmq. There is one question I have about AMQP
though: a durable queue and exchange don't persist messages by
default, right? What happens when a persistent message enters a
durable exchange? I suppose my tests weren't too accurate then... is a
persistent message slower than a non-persistent one? Because I
obtained wonderful numbers with messages not explicitly marked as
persistent: about 8000 messages per second on the write side of the
connection, with the bottleneck being the saturated network, and about
3-4K messages per second on the read side, with the bottleneck most
probably being the python client. So would these numbers confirm
themselves, or are they simply completely wrong? At least I need about
2500-3000 messages per second because, given the constraint with
memory-bound queues, the consumers must be as fast as the webserver,
otherwise the messages start to pile up.
Valentino Volonghi aka Dialtone
Now running MacOS X 10.5
Home Page: http://www.twisted.it