[rabbitmq-discuss] Taking error_logger out of the "fast path" (was Re: beam process using 100% cpu and lots of memory)
Tony Garnock-Jones
tonyg at lshift.net
Thu Feb 25 20:54:48 GMT 2010
majek04 wrote:
> I think we can take several approaches:
> - use syslog and/or rewrite the logging infrastructure
I've been interested in syslog for a while; did you know there's a
syslog *protocol*? It'd be cool to have a syslog plugin, for in- and
outbound traffic, that spoke that protocol...
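For reference, the outbound half really is tiny. A rough sketch, assuming
the classic RFC 3164 "<PRI>TAG: text" format over UDP port 514; the
function name and facility choice below are made up for illustration:

    %% Sketch only: PRI is facility * 8 + severity; local7/info picked
    %% arbitrarily.  A real plugin would keep the socket open, of course.
    send_syslog(Host, Tag, Text) ->
        Pri = 23 * 8 + 6,
        Packet = io_lib:format("<~b>~s: ~s", [Pri, Tag, Text]),
        {ok, Sock} = gen_udp:open(0, [binary]),
        ok = gen_udp:send(Sock, Host, 514, Packet),
        gen_udp:close(Sock).

The inbound side, and anything resembling a proper plugin, is obviously
more work, but the wire format itself isn't the hard part.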
> - invent filter on top of the logger, which would stop sending
> new errors if there are too many of them. Something similar
> to "last message repeated XX time(s)" from syslog.
Perhaps, but in the case of a high rate of connections/disconnections
such as the OP is experiencing, that's tantamount to throwing away most
of the information (namely, the IP address and port), which we could do
just as well by not logging it at all. Unless you really do have
something adaptive in mind. Sounds tricky to get right ;-)
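For concreteness, the naive version of that filter is tiny (sketch only,
all names invented), but it only buys anything when consecutive reports
are literally identical, which they wouldn't be here once the address and
port differ:

    %% Hypothetical suppression loop: forward a report the first time it is
    %% seen, then just count repeats until something different arrives.
    suppress_loop(Last, Count) ->
        receive
            {log, Last} ->                    %% identical to the previous one
                suppress_loop(Last, Count + 1);
            {log, New} ->
                maybe_report_repeats(Count),
                error_logger:info_report(New),
                suppress_loop(New, 0)
        end.

    maybe_report_repeats(0) -> ok;
    maybe_report_repeats(N) ->
        error_logger:info_msg("last report repeated ~b time(s)~n", [N]).

Making it adaptive, rather than just collapsing duplicates, is where it
gets tricky.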
> - move logger away from the critical path, by doing logging
> asynchronously. This can lead to out-of-memory crash.
Yep. So if, as Matthew says, the problem is a long error_logger message
queue, then avoiding *that* might leave us OK even with the error_logger
as it stands: you know, the usual O(n^2) problem with selective receive
over a long mailbox. Perhaps dusting off the old buffering intermediary
process?
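For the archaeologically inclined, the shape of that intermediary is
roughly this (sketch from memory, all names invented):

    %% Producers fire-and-forget at the proxy; the proxy keeps the backlog in
    %% its own queue and hands the real logger one report at a time, waiting
    %% for an ack before sending the next.  The logger's mailbox thus stays
    %% at length <= 1, so any selective receive it performs stays cheap; the
    %% backlog lives in the proxy's queue instead.
    proxy_loop(Logger, Q, LoggerIdle) ->
        receive
            {log, Report} when LoggerIdle ->
                Logger ! {do_log, Report},
                proxy_loop(Logger, Q, false);
            {log, Report} ->
                proxy_loop(Logger, queue:in(Report, Q), LoggerIdle);
            {logged, Logger} ->
                case queue:out(Q) of
                    {{value, R}, Q1} -> Logger ! {do_log, R},
                                        proxy_loop(Logger, Q1, false);
                    {empty, Q1}      -> proxy_loop(Logger, Q1, true)
                end
        end.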
Actually, since I think we're using rabbit_log *almost* everywhere we
need to (but, frustratingly, not in tcp_acceptor! perhaps because it is
for generic, non-rabbit use as well?), we might be able to bypass the
problem well enough by switching rabbit_log to being an instance of
gen_server2.
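That is, roughly (from memory, so treat the details as approximate; only
info/2 shown, and I'm assuming gen_server2 keeps its usual drop-in
gen_server callback interface):

    %% The API stays cast-based, so callers never block on logging; and since
    %% gen_server2 drains the process mailbox into its internal queue, a big
    %% backlog of log requests no longer makes selective receives expensive.
    -module(rabbit_log).
    -behaviour(gen_server2).
    -export([start_link/0, info/2]).
    -export([init/1, handle_call/3, handle_cast/2, handle_info/2,
             terminate/2, code_change/3]).

    start_link() -> gen_server2:start_link({local, ?MODULE}, ?MODULE, [], []).

    info(Fmt, Args) -> gen_server2:cast(?MODULE, {info, Fmt, Args}).

    init([]) -> {ok, none}.

    handle_call(_Request, _From, State) -> {reply, ok, State}.

    handle_cast({info, Fmt, Args}, State) ->
        error_logger:info_msg(Fmt, Args),
        {noreply, State}.

    handle_info(_Info, State) -> {noreply, State}.

    terminate(_Reason, _State) -> ok.

    code_change(_OldVsn, State, _Extra) -> {ok, State}.

Worth a try before anything more elaborate, anyway.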
Tony