[rabbitmq-discuss] Are publisher confirms at all aware of HA (replicated) durable queues?

Steve Rehrauer steve.rehrauer at gmail.com
Wed Oct 5 14:54:36 BST 2011


Okay, thanks.

I'm trying to chase down why I'm not seeing all the events I've
published, in a test that shuts down the master node while events are
being produced.  I'd like to understand what I might be doing wrong.
Good to know that I can rely on robust publisher confirms!

I've pruned the test back to a very simple thing now.  I have two
nodes in the cluster.

1. With both nodes running, I produce an event.
2. I verify that I can consume this event.  I verify that I get a
publisher confirmation from the broker.
3. Using Alice, I stop the master node.
4. I produce another event.
5. I verify that I can consume this event.

I'm not seeing the event from step 5.  I wonder if I'm not setting up
the consumer properly.
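For reference, steps 1 and 2 look roughly like the following sketch (host,
queue name, and message body are illustrative; this assumes the standard
RabbitMQ Java client API with publisher confirms enabled via confirmSelect(),
and the x-ha-policy queue argument from the original question):

```java
import com.rabbitmq.client.*;
import java.util.HashMap;
import java.util.Map;

public class ConfirmTest {
    public static void main(String[] args) throws Exception {
        ConnectionFactory factory = new ConnectionFactory();
        factory.setHost("localhost"); // illustrative host

        Connection conn = factory.newConnection();
        Channel ch = conn.createChannel();

        // Durable queue, mirrored across all nodes via x-ha-policy.
        Map<String, Object> queueArgs = new HashMap<String, Object>();
        queueArgs.put("x-ha-policy", "all");
        ch.queueDeclare("events", true, false, false, queueArgs);

        // Step 1: publish with publisher confirms enabled.
        ch.confirmSelect();
        ch.basicPublish("", "events", MessageProperties.PERSISTENT_TEXT_PLAIN,
                        "event-1".getBytes());
        ch.waitForConfirms(); // blocks until the broker confirms (or nacks)

        // Step 2: consume the event back.
        GetResponse resp = ch.basicGet("events", true);
        System.out.println(resp == null ? "missed" : new String(resp.getBody()));

        conn.close();
    }
}
```

(This requires a running broker, so it's only a sketch of the shape of the
test, not something runnable standalone.)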

When I stop the master node in step 3, I see the consumer catch a
ShutdownSignalException, with "reason:
{#method<connection.close>(reply-code=320,
reply-text=CONNECTION_FORCED - broker forced connection closure with
reason 'shutdown'".  The consumer reconnects to the broker,
successfully as far as I can tell.

When I try to produce another event in step 4, the producer also
catches a ShutdownSignalException, with "clean connection shutdown;
reason: Attempt to use closed channel".  The producer also reconnects
to the broker, successfully as far as I can tell.

After the producer reconnects, I see it succeed in basicPublish().  I
also see a publish confirmation from the broker for this event.

However, the consumer is never seeing it.
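One detail worth calling out in any reconnect path (a sketch with
illustrative names, not the actual test code): confirm mode is per-channel
state in AMQP, so after the producer reconnects it must call confirmSelect()
again on the new channel, or confirms will never arrive on it.

```java
import com.rabbitmq.client.*;

public class ProducerReconnect {
    // Called from the producer's ShutdownSignalException handler.
    static Channel reconnect(ConnectionFactory factory) throws Exception {
        Connection conn = factory.newConnection();
        Channel ch = conn.createChannel();
        // A fresh channel starts in non-confirm mode, so confirm mode
        // must be re-enabled here; otherwise waitForConfirms() or a
        // ConfirmListener on this channel will never see confirms.
        ch.confirmSelect();
        return ch;
    }
}
```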

Q1: Is it surprising that the producer sees this in step 4?  I assumed
that it would be the same for the producer as for the consumer; if the
master node goes down, both sides must reconnect to the broker?

Q2: I think I am asking for consumer cancellation notification in the
consumer, but I never see the callback happen.  Should I expect to, in
the scenario I've outlined above?  I'm now a little concerned I'm not
setting it up properly, and this is why my consumer never sees this
event.  Here's the Java code that I use, which is in a class that
derives from RabbitMQ's ConnectionFactory.

   public DerivedConnectionFactory() {
      super();

      ...

      Map<String, Object> clientProperties = new HashMap<String, Object>();
      clientProperties.put("consumer_cancel_notify", Boolean.valueOf(true));
      this.setClientProperties(clientProperties);
   }
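For comparison, the RabbitMQ documentation describes consumer cancel
notification as being advertised inside a nested "capabilities" table within
the client properties, rather than as a top-level key (and note that
setClientProperties() replaces the default properties wholesale). A minimal
sketch of that nested shape (class and method names are illustrative):

```java
import java.util.HashMap;
import java.util.Map;

public class CapabilityProps {
    // Build client properties with the capability nested under
    // "capabilities", rather than as a top-level key.
    static Map<String, Object> withCancelNotify() {
        Map<String, Object> capabilities = new HashMap<String, Object>();
        capabilities.put("consumer_cancel_notify", Boolean.TRUE);

        Map<String, Object> clientProperties = new HashMap<String, Object>();
        clientProperties.put("capabilities", capabilities);
        return clientProperties;
    }

    public static void main(String[] args) {
        Map<String, Object> props = withCancelNotify();
        @SuppressWarnings("unchecked")
        Map<String, Object> caps = (Map<String, Object>) props.get("capabilities");
        System.out.println(caps.get("consumer_cancel_notify"));
    }
}
```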

Thanks, --Steve

On Wed, Oct 5, 2011 at 5:18 AM, Matthias Radestock
<matthias at rabbitmq.com> wrote:
> Steve,
>
> On 04/10/11 18:05, Steve Rehrauer wrote:
>>
>> If I'm asking the broker for publisher confirms, and I have a cluster
>> of nodes, and I have durable queues that are replicated (x-ha-policy:
>> "all"), does the broker send the confirm as soon as a message is
>> persisted in the queues of the master node?  It doesn't wait for those
>> queues to be replicated to the slaves?
>
> From http://www.rabbitmq.com/ha.html#behaviour
>
> <quote>
> As the chosen slave becomes the master, no messages that are published to
> the mirrored-queue during this time will be lost: messages published to a
> mirrored-queue are always published directly to the master and all slaves.
> Thus should the master fail, the messages continue to be sent to the slaves
> and will be added to the queue once the promotion of a slave to the master
> completes.
>
> Similarly, messages published by clients using publisher confirms will still
> be confirmed correctly even if the master (or any slaves) fail between the
> message being published and the message being able to be confirmed to the
> publisher.
> </quote>
>
> Matthias.
>

