[rabbitmq-discuss] Database Corruption Possibilities
Allan Kamau
kamauallan at gmail.com
Fri Jun 3 13:53:55 BST 2011
On Fri, Jun 3, 2011 at 2:40 PM, Alvaro Videla <videlalvaro at gmail.com> wrote:
> Just FYI,
> RabbitMQ does not use Mnesia as message storage. It has its own persister
> module.
> AFAIK Mnesia is only used for storing routing tables, Exchanges and Queues
> meta information.
> -Alvaro
> On Jun 3, 2011, at 1:35 PM, Alex Lovell-Troy wrote:
>
> There are a handful of approaches to this problem which all seem to follow a
> similar pattern. Until there is a way to replace Mnesia with a replicable
> back end, we've all written something of our own that wraps each AMQP
> transaction with a transactional write to the datastore of our choice.
> I've personally seen this done with MySQL, MongoDB, and Riak.
> @jbrisbin has done some intriguing work implementing a specific queue
> that sends messages to Riak. I believe there is also a piece that pushes
> messages from Riak into RabbitMQ.
> At least, that was the state of the art the last time I looked.
> Anyone else?
> -alex
> On Fri, Jun 3, 2011 at 12:21 PM, Ozan Seymen <Ozan.Seymen at tdpg.com> wrote:
>>
>> Hi all,
>>
>>
>>
>> Can someone please explain the scenarios in which Rabbit's message
>> storage (is it Mnesia?) might get corrupted in a way that is not
>> recoverable?
>>
>>
>>
>> In the solution I am working on, I simply cannot afford to lose any
>> messages. In order to secure this, I will:
>>
>>
>>
>> · Rely on publisher confirms. This should ensure that the broker will
>> always confirm whether it has assumed responsibility for, and persisted,
>> the message.
>>
>> · Implement durable queues/exchanges.
>>
>> · Enable acks in consumers to prevent losing messages if a consumer
>> dies halfway through. I will solve the ordering problem on the consumer
>> side (see the sketch below).
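
A minimal sketch of those three measures, assuming the Python pika client
(the queue name here is illustrative, and the exact confirm behaviour
differs between pika versions):

# Publisher confirms, durable entities and explicit consumer acks with pika.
import pika

connection = pika.BlockingConnection(pika.ConnectionParameters("localhost"))
channel = connection.channel()

channel.confirm_delivery()                           # publisher confirms
channel.queue_declare(queue="orders", durable=True)  # durable queue

# delivery_mode=2 marks the message persistent so the broker writes it to disk.
channel.basic_publish(
    exchange="",
    routing_key="orders",
    body=b"payload",
    properties=pika.BasicProperties(delivery_mode=2),
)

def handle(ch, method, properties, body):
    # ... process the message first ...
    ch.basic_ack(delivery_tag=method.delivery_tag)   # ack only after processing

channel.basic_consume(queue="orders", on_message_callback=handle)
channel.start_consuming()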
>>
>>
>>
>> Even though all of the above prevent message loss in normal conditions,
>> none of them covers the case where data gets corrupted in the broker. There
>> is a window (albeit small) in which things might go wrong: the broker assumes
>> responsibility (the message is on disk) and, before the message is sent to
>> the consumer, the broker experiences problems which corrupt the storage. If I
>> can't bring the broker up with all previously persisted data, I've lost
>> messages: the producer has forgotten about the message because the broker
>> accepted responsibility, and the consumers have no idea about it as it was
>> never delivered to them.
>>
>>
>>
>> Am I totally paranoid and beyond help? Even so, I would really
>> appreciate any info you guys can share.
>>
>>
>>
>> Cheers,
>>
>> oseymen
>>
>>
>>
>>
>>
>>
I would recommend a solution similar to Alex's, but using PostgreSQL.
Have each message sent to a special queue, and attach to this queue a
consuming application that simply inserts each message into a database
table. Then have the same application send the message on to your
ordinary queue, and only then acknowledge its consumption of the
message it received (a sketch follows below). You may implement
something similar in reverse to ensure that you know which messages
have been processed at any time.
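
Here is a minimal sketch of that relay, assuming pika and psycopg2 (the
queue names, connection string and messages_produced table are
illustrative):

# Consume from the special queue, record each message in PostgreSQL,
# forward it to the ordinary queue, and only then ack the original.
import pika
import psycopg2

db = psycopg2.connect("dbname=msglog user=rabbit")  # hypothetical database
connection = pika.BlockingConnection(pika.ConnectionParameters("localhost"))
channel = connection.channel()
channel.queue_declare(queue="audit", durable=True)
channel.queue_declare(queue="work", durable=True)

def on_message(ch, method, properties, body):
    with db, db.cursor() as cur:  # commits the insert on exit
        cur.execute("INSERT INTO messages_produced (body) VALUES (%s)", (body,))
    ch.basic_publish(exchange="", routing_key="work", body=body,
                     properties=pika.BasicProperties(delivery_mode=2))
    ch.basic_ack(delivery_tag=method.delivery_tag)  # ack last, after insert + forward

channel.basic_consume(queue="audit", on_message_callback=on_message)
channel.start_consuming()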
Then, when things go wrong, you will have a listing of the messages
produced and another listing of the messages successfully consumed.
You can then write a small script (sketched below) that injects the
produced-but-not-yet-successfully-processed messages into the queue of
the newly initialized RabbitMQ installation.
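
A small replay sketch under the same assumptions (a hypothetical
messages_consumed table records which produced messages were processed):

# Republish any message recorded as produced but never recorded as consumed.
import pika
import psycopg2

db = psycopg2.connect("dbname=msglog user=rabbit")  # hypothetical database
connection = pika.BlockingConnection(pika.ConnectionParameters("localhost"))
channel = connection.channel()
channel.queue_declare(queue="work", durable=True)

with db.cursor() as cur:
    cur.execute("""SELECT p.body
                   FROM messages_produced p
                   LEFT JOIN messages_consumed c ON c.produced_id = p.id
                   WHERE c.produced_id IS NULL""")
    for (body,) in cur:  # body column assumed to be bytea
        channel.basic_publish(exchange="", routing_key="work", body=bytes(body),
                              properties=pika.BasicProperties(delivery_mode=2))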
Ideally you may want to develop this application as a multi-threaded
application to make use of more cores on your server. The application
could be hosted on a different server with the DB running on an SSD;
from there, periodically make backups to your usual mechanical HDD,
and subsequently to another HDD which you take offsite every day,
interchanging it with another the next morning.
Testing the implementation of these work-arounds is very important to
ensure it all works as planned. The messages-produced and
messages-consumed tables could also be used for other purposes.
Why PostgreSQL?
1) MyISAM may not scale very well under significantly high
concurrent-write loads.
2) Table corruption is a possibility; that is why there is a repair
utility in MySQL's bin directory (the last time I checked).
3) The numerous other MVCC-based engine offerings under MySQL are very
new and may not be as scalable as PostgreSQL. It has taken the leading
MVCC-based database companies decades of developing, refining and
honing their MVCC engines; concurrency is a challenging problem.
4) I think one now (just as before) needs to pay for MySQL.
5) It is possible that referential and data integrity are still not
fully appreciated in the MySQL community, as they may be seen as
clashing with its emphasis on speed.
Allan.