Alexis, <br><br><div class="gmail_quote"><blockquote class="gmail_quote" style="border-left: 1px solid rgb(204, 204, 204); margin: 0pt 0pt 0pt 0.8ex; padding-left: 1ex;">
Can you clarify what you mean by "failover of session state" please?<br>
I am assuming you mean that when a node dies, then another node can<br>
'take over' from it, without loss of any state that the first node has<br>
committed (to some other node and/or persistent store), and that the<br>
second node can 'take over' in some sense 'faster' than recovery by<br>
restarting the first node from disk would.</blockquote><div><br>More or less.&nbsp; I'm personally not that concerned with the timeliness of recovery for committed messages.&nbsp; I'm mostly concerned with messages being dropped by the system because no bindings matching a particular message's routing key existed in the exchange at the time of the enqueue command (because the one node containing the queue which matched that routing key has suffered a disaster).&nbsp; To go further, I really want the ability to define something like "failover bindings" in the exchange (e.g. create queue Q with node A as master and node B for failover), such that if delivery to a queue "Q" on node "A" fails, the exchange declares node "B" the new master for queue "Q" and adjusts all subsequent routing.&nbsp; It seems that if the persister were using mnesia to record session state (queue data), and one could define "failover bindings" like those above, then the exchange could fail over a queue to a secondary node transparently to all producers and consumers in the system, save for the ones directly connected to the node which suffered the disaster.<br>
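To make the idea concrete, here is a minimal sketch of how such "failover bindings" might behave. Nothing like this exists in RabbitMQ today; the names (FailoverBinding, Exchange, node_up) are hypothetical and purely illustrative:<br>

```python
# Hypothetical sketch of "failover bindings": each binding names a master
# node and a failover node. When delivery to the master fails, the exchange
# promotes the failover node to master and all subsequent routing follows it.
# All class and parameter names here are invented for illustration.

class FailoverBinding:
    def __init__(self, routing_key, queue, master, failover):
        self.routing_key = routing_key
        self.queue = queue
        self.master = master      # e.g. node "A"
        self.failover = failover  # e.g. node "B"

class Exchange:
    def __init__(self, bindings, node_up):
        self.bindings = bindings
        self.node_up = node_up    # callable: is this node alive?

    def route(self, routing_key):
        """Return (node, queue) delivery targets, promoting the failover
        node whenever the current master is down."""
        targets = []
        for b in self.bindings:
            if b.routing_key != routing_key:
                continue
            if not self.node_up(b.master):
                # master has suffered a disaster: declare the failover
                # node the new master and adjust subsequent routing
                b.master, b.failover = b.failover, b.master
            targets.append((b.master, b.queue))
        return targets
```

For example, with queue "Q" bound to master "A" and failover "B", killing "A" would transparently redirect routing to "B" on the next delivery attempt, and routing would stay on "B" afterwards.<br>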
&nbsp;</div><blockquote class="gmail_quote" style="border-left: 1px solid rgb(204, 204, 204); margin: 0pt 0pt 0pt 0.8ex; padding-left: 1ex;">I ask because this case is<br>
not quite the same as 'exactly once' delivery.</blockquote><div><br>Yes, I was just referring to my belief that if I want to eliminate this single point of failure I would need to create redundant queues on separate nodes and have some kind of out-of-band mechanism to guard against duplicate processing (or that I would need to make sure that all downstream operations were idempotent).<br>
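The duplicate-processing guard could be as simple as recording processed message ids in shared storage and skipping ids already seen. The sketch below is illustrative only: a real deployment would use a durable store (e.g. a database row with a unique constraint) rather than an in-memory set, and the message-id scheme is assumed, not part of AMQP routing itself:<br>

```python
# Illustrative out-of-band duplicate guard: two redundant queues may each
# deliver a copy of the same message, so the consumer checks a shared
# record of processed message ids before doing any work. The in-memory
# set stands in for a durable shared store.

processed_ids = set()

def handle(message_id, payload, do_work):
    """Process a message once; skip duplicates from the redundant queue."""
    if message_id in processed_ids:
        return False              # already handled: drop the duplicate
    do_work(payload)              # must succeed before we record the id
    processed_ids.add(message_id)
    return True
```

Alternatively, if do_work is itself idempotent, the guard can be dropped entirely and duplicates processed harmlessly.<br>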
&nbsp;</div><blockquote class="gmail_quote" style="border-left: 1px solid rgb(204, 204, 204); margin: 0pt 0pt 0pt 0.8ex; padding-left: 1ex;"><br>
<br>
Cheers,<br>
<br>
alexis<br>
<div><div></div><div class="h5"><br>
<br>
<br>
<br>
<br>
On Sun, Jul 26, 2009 at 6:45 PM, charles<br>
woerner<<a href="mailto:charleswoerner.lists@gmail.com">charleswoerner.lists@gmail.com</a>> wrote:<br>
> Thanks Arvind.&nbsp; That seems to confirm our belief that rabbitmq doesn't<br>
> handle failover of session state at the moment.&nbsp; So in the case of competing<br>
> consumers taking messages from a rabbitmq broker cluster, it sounds like<br>
> once-and-only-once delivery is somewhat up to the application to implement<br>
> by arranging for each message to be delivered to redundant queues (i.e. 2<br>
> separate queues with similar bindings residing on different hosts), then<br>
> either coordinating among your consumers to ensure once-and-only-once delivery<br>
> using a database, or simply making your workflow idempotent with respect to<br>
> duplicate messages.<br>
><br>
> Ideally, I'd like my broker cluster to handle this.&nbsp; I'm looking into Apache<br>
> Qpid as it seems to be more mature with respect to availability.&nbsp; The main<br>
> downside with rabbitmq for me right now is the lack of at least an option<br>
> for session state replication.� It sounds like a future release of rabbitmq<br>
> with pluggable persisters will make distributed session state possible via<br>
> mnesia.<br>
><br>
><br>
</div></div>> _______________________________________________<br>
> rabbitmq-discuss mailing list<br>
> <a href="mailto:rabbitmq-discuss@lists.rabbitmq.com">rabbitmq-discuss@lists.rabbitmq.com</a><br>
> <a href="http://lists.rabbitmq.com/cgi-bin/mailman/listinfo/rabbitmq-discuss" target="_blank">http://lists.rabbitmq.com/cgi-bin/mailman/listinfo/rabbitmq-discuss</a><br>
><br>
><br>
</blockquote></div><br>