[rabbitmq-discuss] Introducing the RabbitMQ Sharding Plugin
videlalvaro at gmail.com
Thu Jul 10 21:49:37 BST 2014
You need to issue 8 basic.consume commands, whether from the same process
or from separate ones. Of course you could use the same callback for all of them.
About Shovel, I don't think it will work out of the box with this, but
perhaps Simon can correct me.
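A minimal sketch of what those 8 basic.consume calls look like from a single
process, assuming the Python pika client; the queue names ("bobs.data.1"
through "bobs.data.8") are illustrative, and the connection details are
kept inside a function since they depend on your setup:

```python
# Sketch: eight consumers, one process, one shared callback.
# Assumptions: pika client installed, queues bobs.data.1..8 already exist.
QUEUES = ["bobs.data.%d" % i for i in range(1, 9)]

def handle(ch, method, properties, body):
    # One callback shared by all eight consumers.
    print("got %r via %s" % (body, method.routing_key))
    ch.basic_ack(delivery_tag=method.delivery_tag)

def consume_all(host="localhost"):
    import pika  # local import: only needed when actually connecting
    conn = pika.BlockingConnection(pika.ConnectionParameters(host))
    ch = conn.channel()
    for q in QUEUES:  # eight basic.consume commands, one per queue
        ch.basic_consume(queue=q, on_message_callback=handle)
    ch.start_consuming()
```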
On Thu, Jul 10, 2014 at 6:06 PM, Jason McIntosh <mcintoshj at gmail.com> wrote:
> Yeah, this is still the issue for us - how would you dynamically increase
> your consumers easily from, say, a Java process? Generally, using something
> like SimpleMessageListenerContainer, it starts up and consumes from a
> queue "foo.data" - so would we need to start up 8 separate consumer
> processes? Or would a single consumer bound to that queue handle this when
> set to 8 concurrent consumers? Or would we need to listen for a shard
> change event to allow it to do this? I'd also ask about the shovel process
> - would the shovel just consume from "foo.data", and how would that handle
> the multiple queues?
> On Thu, Jul 10, 2014 at 10:41 AM, Alvaro Videla <videlalvaro at gmail.com>
>> If you have 8 queues per shard, then each shard will require 8 consumers
>> to get the data out of the 8 queues that lie behind it. The plugin
>> will take care of selecting the queue with the fewest consumers
>> whenever a new basic.consume arrives.
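A toy model of the selection rule described above (not the plugin's actual
code, and the queue names are made up) might look like:

```python
# Toy model: each incoming basic.consume is attached to the shard
# queue that currently has the fewest consumers.
counts = {"shard.q.%d" % i: 0 for i in range(1, 9)}  # hypothetical names

def attach_consumer(counts):
    q = min(counts, key=counts.get)  # queue with the fewest consumers
    counts[q] += 1
    return q

# Eight consume calls end up spread evenly over the eight queues.
picked = [attach_consumer(counts) for _ in range(8)]
```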
>> On Thu, Jul 10, 2014 at 5:00 PM, Jason McIntosh <mcintoshj at gmail.com>
>>> Since I just saw this crop up again on the list, thought I'd ask again
>>> on this plugin as it looks danged handy. With geographically distributed
>>> data centers, we're using RabbitMQ to replicate data and information to
>>> remote data centers. But with the WAN latency, we hit an issue where a
>>> single queue would seemingly get backlogged trying to shovel messages
>>> across the WAN. So we used the x-consistent-hash exchange to basically do something
>>> like this:
>>> Publish to a fanout exchange "bob"
>>> bob is bound to "bob.multi"
>>> bob.multi is an x-consistent-hash exchange bound to 8 queues, "bobs.data.1"
>>> through "bobs.data.8". We then shovel "bobs.data.1" through 8 to a remote
>>> system that has a single exchange "bob" with a single queue "bobs.data".
>>> We have the shovel config override the routing key and exchange on the
>>> publish side so things land in one spot. If I didn't have to script this
>>> logic out, I'd be more than happy to switch to using the "shard" exchange.
>>> And the goal on the remote side is to have a single consumer (java
>>> process) connected to queue "bobs.data" on the remote side, and a single
>>> publisher publishing to "bob" on the publish side. I THINK I may have
>>> misunderstood the earlier comment where we'd need 1 consumer per shard.
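The fan-in topology described above (fanout "bob" -> x-consistent-hash
"bob.multi" -> 8 queues) could be declared roughly as follows, assuming
pika and the rabbitmq_consistent_hash_exchange plugin; note that for
x-consistent-hash exchanges the binding key is the queue's weight:

```python
# Sketch of the publish-side topology from the email above.
# Assumptions: pika client, consistent-hash exchange plugin enabled.
QUEUES = ["bobs.data.%d" % i for i in range(1, 9)]

def declare_topology(channel):
    channel.exchange_declare(exchange="bob", exchange_type="fanout")
    channel.exchange_declare(exchange="bob.multi",
                             exchange_type="x-consistent-hash")
    channel.exchange_bind(destination="bob.multi", source="bob")
    for q in QUEUES:
        channel.queue_declare(queue=q)
        # Binding key "1" gives every queue equal weight in the hash ring.
        channel.queue_bind(queue=q, exchange="bob.multi", routing_key="1")
```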
>>> On Thu, Apr 10, 2014 at 3:10 PM, Alvaro Videla <videlalvaro at gmail.com>
>>>> On Thu, Apr 10, 2014 at 10:07 PM, Jason McIntosh <mcintoshj at gmail.com>
>>>> > The one thing I didn't see in the post was how many consumers you'd
>>>> > need. It sounds like you'd need at least N consumers, where N is the
>>>> > number of nodes in the shard? With the shoveling config, we'd need to
>>>> > grow consumers to adapt to the cluster size, it sounds like.
>>>> Yes, at the moment you would need 1 consumer per shard. So if the
>>>> shard-per-node value is 4, then you'd need 4 consumers per node in the
>>>> cluster.
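The back-of-the-envelope arithmetic implied above: with one consumer per
shard queue, the consumer count scales with cluster size (numbers here are
illustrative only):

```python
# One consumer per shard queue, shards-per-node queues on each node.
def consumers_needed(nodes, shards_per_node):
    return nodes * shards_per_node

# e.g. a 3-node cluster with shards-per-node = 4 needs 12 consumers.
total = consumers_needed(3, 4)
```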
>>> Jason McIntosh
> Jason McIntosh