[rabbitmq-discuss] RabbitMQ 3.0 Policy ha-all delete issue
Matthias Radestock
matthias at rabbitmq.com
Tue Nov 20 20:26:35 GMT 2012
Mark,
On 20/11/12 19:29, Mark Ward wrote:
> The function request typically goes into a queue that is static. For
> example a ping service would have a queue based upon the computer's name.
> Any ping request to the computer would be put into the queue. Results of
> the ping would generate a new queue. Each result will be placed into a
> unique result queue that is created for each result published. The result
> queue is based upon a name that the server and client both know. I went
> with this design to eliminate the need to build routing logic to route
> results from a queue to waiting callers. I also didn't want a caller's
> result to be dependent upon another caller's result within a single queue.
Could you create one reply queue per thread, assuming there can't be
more than one pending rpc per thread? Or maintain a pool of reply queues.
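A rough sketch of the pooling idea, in Python. The names and structure here are illustrative, and the actual broker interaction (queue.declare, basic.consume) is omitted; the point is just that reply-queue names get reused across calls instead of a fresh queue being declared (and mirrored) per result:

```python
import itertools
import queue

class ReplyQueuePool:
    """Pool of reply-queue names: each RPC call borrows an idle name
    and returns it when the reply has been consumed.

    In a real client you would declare the queue on first checkout
    and keep a consumer attached to it for the queue's lifetime.
    """

    def __init__(self):
        self._free = queue.SimpleQueue()   # names not currently in use
        self._counter = itertools.count()  # suffix for newly created names

    def acquire(self):
        try:
            return self._free.get_nowait()           # reuse an idle name
        except queue.Empty:
            return "reply.%d" % next(self._counter)  # grow pool on demand

    def release(self, name):
        self._free.put(name)               # hand the name back for reuse
```

With one pool per connection (or per thread, if there is at most one pending rpc per thread), the set of reply queues stays small and stable, which is what makes mirroring them affordable.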
> I will think about the result queue not being mirrored as this would greatly
> improve performance but does make rabbitMQ server maintenance more
> difficult. If the node with the queue is shutdown and the client connects
> to a new node the result queue would be recreated.
I see. If you can move off the one-queue-per-reply model then making the
reply queues HA won't be so much of an issue. Without that, though, the
penalty is massive.
> I ran into issues with my cluster testing when messages become large enough
> to negatively impact the cluster. My test cluster only has 1 gig of ram per
> computer (3 servers). The message size plus slowly draining queues will
> easily destroy this test cluster. I would run into issues with the erlang
> node heartbeat. I would also run into issues where erlang would run out of
> ram even with the default high watermark + flow control. Erlang would crash
> bringing down the rabbitmq node.
Interesting. How large are the messages?
> If a large message is detected it will be split into smaller packets. A
> split indicator message will be placed into the original target queue. The
> actual message will be split into smaller packets and published into a newly
> created queue just for the split. Each split message will have an equal
> split queue.
> The original target queue may have two or more subscribers. A subscriber
> will receive a split indicator message. It will then begin to subscribe and
> drain the message's split queue. When completed all messages are acked.
> This allows the split message queue to be processed by a single client while
> the original queue can have any number of subscribers. When the split queue
> is finished it is disposed of.
That's pretty neat.
Perhaps you could somehow recycle the split message queues rather than
deleting them.
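For what it's worth, the split/reassemble protocol you describe could be sketched roughly as below. This is a simplification under stated assumptions: byte payloads, a made-up chunk size, and a JSON indicator message; the publishes to the target queue and the split queue are left out:

```python
import json

CHUNK_SIZE = 64 * 1024  # illustrative split threshold / packet size

def split_message(body, split_queue, chunk_size=CHUNK_SIZE):
    """Producer side: return (indicator, packets).

    The small indicator message is published to the original target
    queue; the packets go to a dedicated split queue that a single
    consumer drains before acking everything.
    """
    packets = [body[i:i + chunk_size]
               for i in range(0, len(body), chunk_size)]
    indicator = json.dumps({"split_queue": split_queue,
                            "count": len(packets)})
    return indicator, packets

def reassemble(indicator, packets):
    """Consumer side: check the packet count from the indicator and
    join the drained packets back into the original payload."""
    meta = json.loads(indicator)
    assert len(packets) == meta["count"], "missing split packets"
    return b"".join(packets)
```

Keeping the indicator in the shared queue while the bulk data sits in its own queue is what lets any subscriber pick up the work while exactly one of them drains the split queue.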
> I know i am getting a performance hit with the dynamic queues but their use
> made the two scenarios much easier to implement.
If the performance you are getting is good enough for your application
then there is no compelling reason to change anything.
Regards,
Matthias.