[rabbitmq-discuss] [Minimum Air Induction] Introducing Shovel: An AMQP Relay
Valentino Volonghi
dialtone at gmail.com
Sat Sep 20 17:45:37 BST 2008
On Sep 20, 2008, at 7:27 AM, Ben Hood wrote:
>> I hope so :). The main problem is the absolute necessity to not lose
>> any single one of the messages. Nothing can be lost.
>
> Sounds familiar. If you use transactional persistent messaging, this
> will be guaranteed. Sounds like an expensive setup for log statements
> though :-)
A transaction per message is indeed too expensive; I've tried this and it's
too slow. One thing that I wasn't able to understand is what a transaction
gives me in rabbitmq. I'm interested in single messages rather than in dealing
with a bunch of them at a time (unless I aggregate them on shovel as you
suggest below), so for single messages transactions seemed like the wrong tool.
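To make the cost concrete, this is roughly what the per-message-transaction
pattern looks like from Python. It's only a sketch using the pika client (not
necessarily the client my tests used), and the host, queue name and payloads
are made up:

    import pika

    # Sketch only: host, queue name and payloads are hypothetical.
    connection = pika.BlockingConnection(pika.ConnectionParameters(host="localhost"))
    channel = connection.channel()
    channel.queue_declare(queue="loglines", durable=True)

    channel.tx_select()  # put the channel into transactional mode

    # One transaction per message: each tx_commit returns only once the broker
    # has safely accepted (and, for persistent messages, stored) the publish,
    # which is what makes this pattern so slow.
    for line in (b"log line 1", b"log line 2"):
        channel.basic_publish(
            exchange="",
            routing_key="loglines",
            body=line,
            properties=pika.BasicProperties(delivery_mode=2),  # persistent
        )
        channel.tx_commit()

    connection.close()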
>> Here is already the first 'problem', if the known exchange is down the line
>> would be lost forever, this is simple enough though and rabbitmq would run
>> embedded in mochiweb together with shovel. Every queue durable and every
>> message too. In this case if mochiweb fails I won't have to worry, if shovel
>> disconnects it won't send lines to anyone and wouldn't even remove them from
>> the queue so nothing is lost here, if rabbitmq dies I hope it brings
>> everything down with itself, traffic is rebalanced on the remaining servers
>> and nothing is lost.
>
> Phew! That was a sentence.....what do you mean by Rabbit bringing
> everything down with itself?
In the event of rabbitmq crashing I would like the whole thing to crash, so
that I'm sure there won't be lines generated without also being handled. This
applies to the embedded rabbitmq, of course.
> And if you're using an embedded RabbitMQ instance, how is the Shovel
> application supposed to failover to other Rabbit nodes?
What do you mean? I suppose shovel would have a list of backup rabbitmq nodes
and would use them in the event the main one dies.
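A minimal sketch of what I imagine, in Python with pika for illustration (the
hostnames are invented, and the real logic would of course live inside shovel
itself):

    import pika

    # Hypothetical broker list: the main node first, then the backups.
    BROKERS = ["rabbit-main.example.com", "rabbit-backup1.example.com"]

    def connect_with_failover(hosts):
        """Try each broker in turn and return the first connection that succeeds."""
        last_error = None
        for host in hosts:
            try:
                return pika.BlockingConnection(pika.ConnectionParameters(host=host))
            except pika.exceptions.AMQPConnectionError as exc:
                last_error = exc  # this node is down, fall through to the next one
        raise last_error

    connection = connect_with_failover(BROKERS)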
>> Right after this component there's another rabbitmq server, that we can
>> call local rabbitmq, which is local to the mochiweb server, in the same
>> subnet. This server would collect everything that various
>> mochiweb+rabbitmq+shovel servers send, persist it and forward it to a
>> central location. Again, everything is durable so there should be no risk
>> of losing messages.
>
> Why have this middleman? Why not just have the embedded Rabbit
> instances forward straight to the remote brokers?
I thought of it for performance reasons: since the first transmission is in
the same subnet it's fast, and it frees the webserver nodes from any further
work. Of course there's nothing wrong with removing this middleman and sending
straight to the remote brokers.
> Have you considered doing the coalescing in Shovel (i.e. on the
> sending side rather than on the receiving side)?
Yes, I was worried about losing some messages in the process. I suppose it can
work this way too, though this time with transactions.
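What I have in mind is something like this: accumulate a batch on the sending
side and commit a single transaction per batch, so the cost of the commit is
amortised over many messages. A pika sketch, with an arbitrary batch size and
a hypothetical produce_log_lines() generator standing in for the real source
of lines:

    import pika

    BATCH_SIZE = 100  # arbitrary; would be tuned against real traffic

    connection = pika.BlockingConnection(pika.ConnectionParameters(host="localhost"))
    channel = connection.channel()
    channel.queue_declare(queue="loglines", durable=True)
    channel.tx_select()

    def flush(lines):
        """Publish a whole batch inside a single transaction."""
        for line in lines:
            channel.basic_publish(
                exchange="",
                routing_key="loglines",
                body=line,
                properties=pika.BasicProperties(delivery_mode=2),
            )
        channel.tx_commit()  # one commit for the whole batch instead of one per message

    batch = []
    for line in produce_log_lines():  # hypothetical generator of log lines
        batch.append(line)
        if len(batch) >= BATCH_SIZE:
            flush(batch)
            batch = []
    if batch:
        flush(batch)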
> Maybe you also want to compress stuff if you're sending it over a WAN.
Yes, one thing I was thinking of is to just gzip the body of the message
myself before sending it, but I haven't looked into whether rabbitmq already
supports this.
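As far as I can tell AMQP doesn't compress payloads for you, but the basic
properties do have a content_encoding field, so the idea would be something
like this (a sketch; it's entirely up to the consumer to honour the encoding):

    import gzip
    import pika

    connection = pika.BlockingConnection(pika.ConnectionParameters(host="localhost"))
    channel = connection.channel()
    channel.queue_declare(queue="loglines", durable=True)

    # Compress the body ourselves and record the encoding in the message
    # properties so the consumer knows to gunzip it.
    body = gzip.compress(b"a long log line worth compressing before the WAN hop")

    channel.basic_publish(
        exchange="",
        routing_key="loglines",
        body=body,
        properties=pika.BasicProperties(
            delivery_mode=2,           # persistent
            content_encoding="gzip",   # consumer-side hint, not enforced by the broker
        ),
    )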
>>
>> So, how should shovel behave:
>>
>> Well, it should be pretty sure that every message was delivered to the
>> final location, so I think its way of working would be:
>>
>> 1. receive message from embedded consumer
>> 2. publish message to remote host
>> 3. wait for ack
>> 4. ack the rabbitmq container
>> 5. the rabbitmq container at this point can remove the message
>
> What happens when Shovel fails between step 3 and 4? Or there is a
> network failure just after the remote broker sends the ack and just
> before it would have been received by Shovel? This sounds like the
> Byzantine General's problem. Maybe there is something you can do in
> the application to achieve the idempotency your application requires.
Well, what would happen is that the message would not be marked as sent, so it
would become a duplicate once sent again.
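To make that window concrete, here is roughly the loop I mean, sketched in
Python with pika rather than the Erlang shovel would actually use (hosts and
queue names are made up). The tx_commit against the remote broker plays the
part of "wait for ack", and the basic_ack on the local side is step 4; a crash
between the two is exactly what produces a duplicate:

    import pika

    # Hypothetical hosts and queue names, for illustration only.
    local = pika.BlockingConnection(pika.ConnectionParameters(host="localhost")).channel()
    remote = pika.BlockingConnection(pika.ConnectionParameters(host="remote-broker")).channel()

    local.queue_declare(queue="loglines", durable=True)
    remote.queue_declare(queue="loglines", durable=True)
    remote.tx_select()

    for method, properties, body in local.consume(queue="loglines"):
        # 1. receive a message from the embedded broker
        # 2. publish it to the remote host
        remote.basic_publish(
            exchange="",
            routing_key="loglines",
            body=body,
            properties=pika.BasicProperties(delivery_mode=2),
        )
        # 3. "wait for ack": tx_commit returns once the remote broker has the message
        remote.tx_commit()
        # A crash right here leaves the message unacked locally, so it will be
        # redelivered later and show up as a duplicate downstream.
        # 4./5. ack the local broker so it can remove the message
        local.basic_ack(delivery_tag=method.delivery_tag)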
>> Now... I'm not sure if there's an ack confirmation message so that the
>> consumer is 100% sure that the confirmation was received. I suppose there
>> isn't, so this means that the system will maybe have duplicates at the end
>> and I'll have to take care of this somehow (any suggestions?).
>
> Not quite sure what you mean here. Can you elaborate?
It's exactly what you said above. I was already thinking about a way to
confirm the ack transmission, and thus about how to deal with the duplicate
messages that are created when the ack is lost.
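One way I could do it at the application level is to stamp every line with a
unique id on the producer and have the final consumer drop anything it has
already seen. A sketch; the in-memory set is only a placeholder for something
durable enough, and process() stands in for the real handler:

    import uuid
    import pika

    # Producer side: give every line a unique id in the standard message_id property.
    def publish_line(channel, line):
        channel.basic_publish(
            exchange="",
            routing_key="loglines",
            body=line,
            properties=pika.BasicProperties(
                delivery_mode=2,
                message_id=str(uuid.uuid4()),
            ),
        )

    # Consumer side: remember the ids already processed and skip the repeats.
    seen = set()

    def handle(channel, method, properties, body):
        if properties.message_id in seen:
            channel.basic_ack(delivery_tag=method.delivery_tag)  # duplicate: ack and drop
            return
        seen.add(properties.message_id)
        process(body)  # hypothetical application handler
        channel.basic_ack(delivery_tag=method.delivery_tag)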
>> Another small problem is the current state of Shovel where it basically
>> crashes when a connection is dropped, a change that I would like to make
>> (or I would like to see) is that it should be able to reconnect to the
>> remote host with an exponential backoff so that it starts retransmitting
>> as soon as possible.
>
> Sure, the OTP supervisor could potentially handle this.
Great, I'll have to study them more.
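In the meantime, the behaviour I'd like amounts to something like this (a
Python sketch of the policy only, not of the OTP code): retry the connection
with an exponentially growing, capped delay and resume publishing as soon as a
connection succeeds.

    import time
    import pika

    def connect_with_backoff(host, base_delay=1.0, max_delay=60.0):
        """Keep retrying the connection, doubling the delay up to a cap."""
        delay = base_delay
        while True:
            try:
                return pika.BlockingConnection(pika.ConnectionParameters(host=host))
            except pika.exceptions.AMQPConnectionError:
                time.sleep(delay)
                delay = min(delay * 2, max_delay)  # exponential backoff, capped

    connection = connect_with_backoff("remote-broker.example.com")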
>> So it basically means that we have 8 minutes to react to such a failure.
>> Does this also sound reasonable? And if so... what possible fixes can I
>> look for? Ultimately... does this sound like something that rabbitmq can
>> be good at?
>
> ATM, queues are memory bound, so as indicated in a previous thread,
> you would have to calibrate this with your own application and
> production scenario. Just test it and find out where the limit is.
>
> BTW, we do intend to implement the disk overflow mechanism discussed
> with Edwin. Just don't know when it'll get done.
I suppose a way to avoid part of this problem would be to remove the middleman
rabbitmq. There is one question I have about AMQP though: a durable queue and
a durable exchange don't persist messages by themselves, right? What happens
if a persistent message enters a durable exchange that has no queues bound to
it?
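To make the question concrete, this is the situation I mean, sketched with
pika (names are hypothetical): a durable exchange with nothing bound to it,
receiving a message marked persistent.

    import pika

    connection = pika.BlockingConnection(pika.ConnectionParameters(host="localhost"))
    channel = connection.channel()

    # Durable here means the exchange *definition* survives a broker restart.
    channel.exchange_declare(exchange="logs", exchange_type="direct", durable=True)

    # No queue is declared or bound, which is exactly the case I'm asking
    # about: where does a persistent message go?
    channel.basic_publish(
        exchange="logs",
        routing_key="webserver-1",
        body=b"a persistent log line",
        properties=pika.BasicProperties(delivery_mode=2),  # persistent message
    )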
I suppose my tests weren't too accurate then. Is a persistent message much
slower than a non-persistent one? Because I obtained wonderful numbers from
messages not explicitly marked as persistent: around 8000 messages per second
on the write side of the connection, with the bottleneck being the saturated
network, and about 3-4K messages per second on the read side, with the
bottleneck most probably being the python client. Would these numbers pretty
much hold up, or are they simply completely wrong? I need at least about
2500-3000 requests per second because, given the constraint of memory-bound
queues, the component should be as fast as the webserver, otherwise messages
start to pile up.
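If it helps, a straightforward way to redo the comparison would be a loop like
this (a pika sketch, which may not match the client my original numbers came
from; message size and counts are arbitrary):

    import time
    import pika

    connection = pika.BlockingConnection(pika.ConnectionParameters(host="localhost"))
    channel = connection.channel()
    channel.queue_declare(queue="bench", durable=True)

    def publish_rate(count, persistent):
        """Publish `count` messages and return the achieved messages per second."""
        props = pika.BasicProperties(delivery_mode=2 if persistent else 1)
        start = time.time()
        for _ in range(count):
            channel.basic_publish(exchange="", routing_key="bench",
                                  body=b"x" * 200, properties=props)
        return count / (time.time() - start)

    print("non-persistent:", publish_rate(10000, persistent=False))
    print("persistent:    ", publish_rate(10000, persistent=True))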
--
Valentino Volonghi aka Dialtone
Now running MacOS X 10.5
Home Page: http://www.twisted.it
http://www.adroll.com