[rabbitmq-discuss] Unexplainable behaviour with shovel plugin.
michael.laing at nytimes.com
Sat Mar 1 18:00:36 GMT 2014
Interesting. We don't use persistent messages. In fact the proxy clusters,
which stand between our internal clients and the core clusters, explicitly
remove persistence in case our clients 'forget'. We rely on replication
instead; our persistence requirements are 'outsourced' to a global
Cassandra cluster. So no disk IO hence no IO wait - our primo defense
against network partitions in AWS/EC2, and a nice performance boost.
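Stripping persistence at the proxy layer, as described above, amounts to rewriting the AMQP delivery_mode property before republishing. A minimal sketch of the idea (a hypothetical helper, not the actual proxy code):

```python
def force_transient(properties: dict) -> dict:
    """Rewrite message properties so the broker treats the message as transient.

    In AMQP 0-9-1, delivery_mode 2 means persistent and 1 means transient;
    a proxy that 'removes persistence' forces delivery_mode to 1.
    """
    props = dict(properties)  # copy so the caller's dict is untouched
    props["delivery_mode"] = 1
    return props

# A client that 'forgot' and published persistently still ends up transient:
print(force_transient({"delivery_mode": 2, "content_type": "application/json"}))
```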
Although principles play a part too: idempotency - when in doubt,
reconnect/resend/resubscribe; tolerate replicas. And we try to
realistically engineer for 5 9's of reliability or more, not 100%, as we
can decompose that target into realistic actions/costs.
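To put the 5 9's target in concrete terms (my own arithmetic, not figures from this thread):

```python
# A 5 9's (99.999%) availability target allows roughly 5.26 minutes
# of downtime per year.

def downtime_per_year(availability: float) -> float:
    """Allowed downtime in minutes per year for a given availability."""
    minutes_per_year = 365.25 * 24 * 60
    return (1.0 - availability) * minutes_per_year

def serial_availability(*components: float) -> float:
    """Availability of independent components in series multiplies,
    which is one way to decompose an end-to-end target into per-layer budgets."""
    result = 1.0
    for a in components:
        result *= a
    return result

print(round(downtime_per_year(0.99999), 2))   # ~5.26 minutes/year
print(serial_availability(0.99999, 0.99999))  # two 5 9's layers in series
```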
On Sat, Mar 1, 2014 at 12:21 PM, Jason McIntosh <mcintoshj at gmail.com> wrote:
> On our systems, we've seen consistent 400/sec on some queues. During a
> heavy data load roughly 2500/sec per queue (these are short lived usually).
> Usually at that point flow control kicks in as our consumers can't quite
> keep up. We use x-consistent-hashes to get around network latency and
> shovel each queue in a hash. So publishers publish to a fanout exchange with
> a random routing key; that exchange is bound to an x-consistent-hash exchange bound
> to 8 queues. Each of the 8 queues is shoveled independently with a 1500
> prefetch. We've not been able to overload this mechanism easily - the
> drive IO is typically our limiting factor. Or the consumers as stated on
> the remote side. And that's because we're doing persistent messages,
> publisher confirms, and a whole lot of checks to make sure we don't ever
> lose anything.
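The layout Jason describes can be simulated to see why random routing keys balance load across the 8 queues. This sketch uses an ordinary hash of the key, not the exchange's actual hash ring, so it only illustrates the distribution idea:

```python
import hashlib
import random
from collections import Counter

NUM_QUEUES = 8

def queue_for(routing_key: str) -> int:
    """Map a routing key to one of NUM_QUEUES queues by hashing it.

    Illustration only: the real x-consistent-hash exchange uses its own
    hash ring and per-queue weights, not a simple modulo.
    """
    digest = hashlib.md5(routing_key.encode()).hexdigest()
    return int(digest, 16) % NUM_QUEUES

random.seed(42)
counts = Counter(queue_for(str(random.random())) for _ in range(10_000))

# With random keys, each of the 8 queues receives roughly 1/8 of the load.
print(sorted(counts.items()))
```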
> On Sat, Mar 1, 2014 at 10:22 AM, Laing, Michael <michael.laing at nytimes.com
> > wrote:
>> Our volumes are quite variable on the shovels, representing a high
>> overall degree of variability in our message volumes.
>> Just looking over the last 24 hours, shovel volume ranged from 25/sec to
>> 2,500/sec on our Oregon core cluster.
>> On Fri, Feb 28, 2014 at 1:14 PM, Ben Hood <0x6e6562 at gmail.com> wrote:
>>> On Fri, Feb 28, 2014 at 12:45 PM, Laing, Michael
>>> <michael.laing at nytimes.com> wrote:
>>> > So I turned to shovels for more simplicity and control, at the expense
>>> > of more difficult configuration.
>>> Yes, it is quite a low level tool, but I guess sometimes your
>>> requirements are intricate enough to need to reach down to the lower
>>> levels.
>>> > Some of our core clusters support the 'retail' layer of instances that
>>> > gateway to clients (candles?). We are introducing federation into one of
>>> > these communication links because we want the propagation of client
>>> > bindings from the gateway instance to the core - an excellent feature of
>>> > federation and an important refinement for us.
>>> Using federation to implement an AMQP gateway seems like a common
>>> pattern. One wonders why it didn't go into the AMQP spec ....
>>> > Initially I had thought that the 'new' federation replaced the 'old'
>>> > shovel, but this is not true - each tool has its place although their
>>> > capabilities overlap.
>>> > With easier configuration in 3.3, the lowly shovel may get its due!
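The easier configuration in 3.3 refers to dynamic shovels, which can be declared at runtime as parameters instead of in rabbitmq.config. A sketch with placeholder names and URIs (the prefetch value mirrors the 1500 mentioned earlier in the thread):

```shell
# Declare a dynamic shovel at runtime (RabbitMQ 3.3+).
# Shovel name, queue names, and URIs here are placeholders.
rabbitmqctl set_parameter shovel my-shovel \
  '{"src-uri": "amqp://localhost", "src-queue": "source-queue",
    "dest-uri": "amqp://remote-host", "dest-queue": "dest-queue",
    "prefetch-count": 1500}'
```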
>>> It's interesting to see that the shovel still lives on, despite it
>>> being quite an agricultural component. What sort of message volumes
>>> are you guys processing with this, BTW?
>>> Thanks for being so detailed about your experiences, it's much
>>> appreciated.
>>> rabbitmq-discuss mailing list
>>> rabbitmq-discuss at lists.rabbitmq.com
> Jason McIntosh