[rabbitmq-discuss] RabbitMQ and Splunk
Michael Vierling
MVierling at attinteractive.com
Thu Nov 4 20:13:41 GMT 2010
The tsv format may work fine. I'll defer my feedback until I can properly test it, though.
-----Original Message-----
From: Matthias Radestock [mailto:matthias at rabbitmq.com]
Sent: Thursday, November 04, 2010 12:33 PM
To: Michael Vierling
Cc: rabbitmq-discuss at lists.rabbitmq.com
Subject: Re: [rabbitmq-discuss] RabbitMQ and Splunk
Michael,
On 04/11/10 19:03, Michael Vierling wrote:
> Let me turn the question around: why have messy, poorly formatted log
> data? Splunk has extensive tools to extract fields. But it will
> always be true that having clean, well-formatted log data goes a long
> way towards making any extraction process easier and more reliable.
No arguing with that. However,
1) rabbitmqadmin is not outputting *log* data (and neither, for that
matter, are ps, iostat, etc.), which makes things like sticking timestamps
at the beginning of every line feel rather artificial. Of course there
are perfectly legitimate reasons for wanting to process non-log data
with tools like Splunk, but the logical place for the supporting code to
live is as a plug-in to Splunk.
2) what exactly is wrong with the existing formats output by
rabbitmqadmin? The default table format is designed for human
readability. The tsv format is designed for post-processing with
standard Unix tools (e.g. "rabbitmqadmin -f tsv -q list connections |
cut -f 2 | while read conn ; do rabbitmqadmin close connection ${conn} ;
done" will close all connections). And the json format works well for
post-processing by programmatic means.
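For instance, consuming the json output takes only a few lines of Python. The sample below is a sketch; the field names ("name", "messages") are illustrative of the kind of columns "rabbitmqadmin -f json list queues" emits, not a guaranteed schema:

```python
import json

# Sample resembling what "rabbitmqadmin -f json list queues" might emit;
# the field names here are illustrative, not a guaranteed schema.
sample = '[{"name": "tasks", "messages": 3}, {"name": "logs", "messages": 0}]'

queues = json.loads(sample)

# Pick out queues with a backlog of unconsumed messages.
backlogged = [q["name"] for q in queues if q["messages"] > 0]
print(backlogged)
```

In a real pipeline the sample string would be replaced by the captured stdout of rabbitmqadmin.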
> Hundreds of lines of code are needed to parse iostat, ps and the others.
I appreciate that, but surely the same isn't true of the rabbitmqadmin
outputs. In particular, the tsv and json outputs should be a breeze to
parse; if they are not, then that is certainly something we'd want to fix.
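To illustrate how little code the tsv format needs, here is a sketch using Python's stdlib csv module with a tab delimiter; the header and column values are made up for the example, not the exact columns rabbitmqadmin produces:

```python
import csv
import io

# Sample resembling "rabbitmqadmin -f tsv list connections" output;
# the header and values are illustrative only.
sample = "name\tuser\n127.0.0.1:54321 -> 127.0.0.1:5672\tguest\n"

# The stdlib csv module handles tab-separated data directly.
reader = csv.DictReader(io.StringIO(sample), delimiter="\t")
rows = list(reader)
users = [row["user"] for row in rows]
print(users)
```

No hand-written parsing loop is needed; each line splits cleanly on tabs.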
Regards,
Matthias.