
Centralized Logging Using Rsyslog

As we've been expanding, we've found that tracking performance or errors across dozens of different services isn't trivial. There are lots of solutions to this, including Scribe and Flume. For now, we've decided to go a more traditional route for aggregating logs. This keeps things simple and doesn't require us to alter how our services talk to the logging infrastructure (even if they just log to a file). So far, we've had great success with this method. Here's how we set it up:

Shipping Logs

While logging and log management are definitely not among the most interesting engineering challenges, they are among the more important. When trying to track down a problem within a cluster of servers, having a central log that can be tailed, mined, analyzed, stored, and backed up is a tremendous asset. To do this, we use rsyslog, primarily because of its straightforward configuration and its ability to filter based on syslog header fields. Excellent documentation and lots of examples don't hurt either.

Rsyslog's architecture is simple but elegant. The daemon runs on a central server and accepts log messages via TCP (port 10514) and/or UDP (port 514) from rsyslog daemons on other nodes in the cluster. Each remote syslog packet begins with a header containing a timestamp for the entry, followed by the message itself. These messages usually contain a service name, a severity level, the message text, and occasionally some metadata, like this:

107 tacobot.local [Helium] INFO QueueConsumer - Connected to Beanstalk; waiting for jobs.
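
For reference, enabling those listeners on the central server takes only a few lines of rsyslog.conf. A minimal sketch in the legacy directive syntax, assuming the stock imudp and imtcp input modules and the ports mentioned above:

# Load the UDP and TCP input modules and listen on the ports described above
$ModLoad imudp
$UDPServerRun 514
$ModLoad imtcp
$InputTCPServerRun 10514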

These messages are logged from dozens of different services and components within our infrastructure: the machines' operating systems themselves; services like Apache, Postgres, and Cassandra; and our messaging applications written in Python and Java. It's difficult to overstate the value of bringing the status of all of these different services together in one place.

While sharing a common protocol is definitely valuable, one must also ensure that each service is able to speak it. With deep roots in the UNIX tradition, interfacing with syslog from Python is dead simple:

>>> import syslog
>>> syslog.syslog('Processing started')
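
If you also want messages tagged with a program name and sent to a particular facility (which pairs nicely with the filtering described later), the standard library handles that too. A small sketch; the 'helium' ident and LOCAL1 facility here are illustrative choices, not our exact configuration:

>>> import syslog
>>> syslog.openlog('helium', 0, syslog.LOG_LOCAL1)  # ident, options, facility
>>> syslog.syslog(syslog.LOG_INFO, 'Processing started')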

We had a bit more difficulty teaching our Java-based services to play nicely with syslog, but eventually straightened things out. These services use Log4J, a Swiss Army knife for logging messages in Java. While Log4J is tremendously flexible and allows for non-XML-based configuration, its documentation (especially for remote syslog'ing) is a bit light. It took us some time to figure out that while Log4J's SyslogAppender supports UDP-based logging to remote servers, it does not support TCP-based logging (our default). With a default failure mode of printing nothing at all (i.e., no errors indicating that there's a problem), and with no logs showing up at their destination, this was a bit difficult to track down.

After some research, we discovered that the solution was rather simple: enabling UDP logging in rsyslog.conf on the server receiving logs, and updating our Log4J configuration as follows:

log4j.rootLogger=INFO, SYSLOG
log4j.appender.SYSLOG=org.apache.log4j.net.SyslogAppender
log4j.appender.SYSLOG.syslogHost=localhost
log4j.appender.SYSLOG.layout=org.apache.log4j.PatternLayout
log4j.appender.SYSLOG.layout.ConversionPattern=%-4r [%t] %-5p %c %x - %m%n
log4j.appender.SYSLOG.Header=true
log4j.appender.SYSLOG.Facility=LOCAL1

This configuration specifies a root log level of INFO, with those messages routed to the SYSLOG appender. They go to a local syslog daemon, which forwards them on to our central logging server. The conversion pattern is a fairly standard format string specifying how we'd like the actual log messages to appear. Setting "Header" to true instructs Log4J to include a syslog header with the hostname and timestamp, which the central rsyslog daemon uses to split the incoming logs it receives into different files based on the service and/or host that sent them.
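
On each node, the forwarding step itself is a single rule in the local rsyslog.conf. A sketch, where the loghost name is a placeholder and '@@' means forward over TCP (a single '@' would forward over UDP):

# Send everything logged to the LOCAL1 facility on to the central loghost
local1.* @@loghost.example.com:10514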

The actual use of this logger is fairly simple. In the application, it’s just a matter of adding:

import org.apache.log4j.Logger;
import org.apache.log4j.PropertyConfigurator;

PropertyConfigurator.configure("log4j.properties");
private static final Logger logger = Logger.getLogger("Helium");
logger.info("Launching Queue Consumer...");

Filtering on Loghost

One of the biggest challenges with centralized logging is figuring out how to sort incoming messages. With only a few exceptions, we do this based on the service type (queue, mongodb, postgres, web, etc.). We’re able to glean this from the hostname of the server that sends the message to our loghost.

One of the nice things about rsyslog's filtering capabilities is that it supports regular expressions. In our case, a full regular expression match wasn't necessary because each of our hostnames begins with the host's service type.

Here’s an example set of lines from our rsyslog.conf on the loghost (the machine receiving all of our syslog messages):

if $hostname startswith 'queue' then /var/log/ua/queue.log
& ~

Rsyslog automatically breaks the header out into properties, referenced as $<property>.

You can get a full list of the properties here: http://www.rsyslog.com/doc/property_replacer.html

In this case, we're using rsyslog's 'startswith' comparison. It's a little faster than a full regular expression match because it doesn't have to try every character offset in the target property. So, for every incoming message, if its hostname starts with 'queue', it came from a queue server and gets written to the queue service log (data aggregated across all of the queue servers).
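
If a hostname prefix ever stopped being enough, the same match could be written as a property-based regex filter instead. A sketch, with '^queue' as a purely illustrative pattern:

:hostname, regex, "^queue" /var/log/ua/queue.log
& ~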

We could just as easily do additional sorting if we wanted to. Say we only wanted Error level log messages or above from the queue services (syslog severities run from 0 for emergency through 7 for debug, so "error or above" means a numeric severity of 3 or less), and we wanted those to go to a file called /var/log/ua/queue_errors.log:

if $hostname startswith 'queue' and $syslogseverity <= 3 then /var/log/ua/queue_errors.log
& ~

The "& ~" is really important. It's shorthand for "discard this message and move on to the next one." Because rsyslog by default keeps matching a message against every rule in the file, you need to explicitly tell it to stop once a match has been handled, or the message will also land in any later log file whose filter it matches. For example, if you have 10 if statements filtering incoming messages and a log message matches number 6, then without the '& ~' it would continue through the remaining rules and, at minimum, end up in the catchall file described below.
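
As an aside, newer rsyslog releases deprecate the '~' discard action in favor of 'stop', so on a current version the same rule would read:

if $hostname startswith 'queue' then /var/log/ua/queue.log
& stop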

Finally, we need a place to put all messages which don’t match our filters:

*.* /var/log/ua/other.log

This is a catchall, so we can see which messages our filters missed. In our case, since we filter on hostname, what lands here comes either from misconfigured hosts or from syslog clients that don't include a hostname for some reason. This makes it easy to figure out which syslog client needs reconfiguration.

Performance

Rsyslog is quite efficient. However, we thought it wise to decouple the incoming message queue from the action queue, so that a flood of incoming messages can't stall the actions that write them out. We will, after all, have many dozens of machines all shipping their logs to this one loghost server.

From /etc/rsyslog.conf:

# Decouple incoming queue from action queue.
$MainMsgQueueFileName /var/log/rsyslog.main.q
$ActionQueueFileName /var/log/rsyslog.action.q
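
Giving each queue a file name like this makes it disk-assisted, so it can spill to disk under load. If we wanted to tune further, legacy rsyslog exposes a few related directives; a sketch, with the queue size chosen purely for illustration:

# Keep both queues in memory, spilling to the files above when they grow too large
$MainMsgQueueType LinkedList
$MainMsgQueueSize 50000
$ActionQueueType LinkedList
$ActionQueueSaveOnShutdown on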

Logrotate and Backups

One of the main purposes of aggregating our logs is to make them easy to back up. A combination of logrotate and custom backup software lets us store logs in S3 so we can do log analysis later using Hive.

The logrotate configuration, placed somewhere in /etc/logrotate.d/:

/var/log/ua/*.log {
    daily
    missingok
    rotate 10
    compress
    create 640 syslog adm
    sharedscripts
    postrotate
        service rsyslog reload > /dev/null
    endscript
    lastaction
        ualogbackup
    endscript
}

The ualogbackup binary then takes all of the *.1.gz files and uploads them into a specific S3 bucket. lastaction is a really handy logrotate directive: it ensures that all of the logs have been rotated before its commands run. We found that a combination of sharedscripts and postrotate alone wasn't enough to guarantee a consistent state.
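
ualogbackup itself is an in-house tool, so its code isn't shown here. Purely as a hypothetical sketch of the kind of work it does (the boto client, bucket name, and key layout are all assumptions, not the real implementation), an upload pass might look like:

import glob
import os
import socket

import boto  # hypothetical choice of S3 client; not necessarily what ualogbackup uses

conn = boto.connect_s3()  # credentials come from the environment
bucket = conn.get_bucket('example-log-archive')  # placeholder bucket name

for path in glob.glob('/var/log/ua/*.1.gz'):
    # Namespace keys by hostname so multiple loghosts don't collide
    key = bucket.new_key('%s/%s' % (socket.gethostname(), os.path.basename(path)))
    key.set_contents_from_filename(path)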

Now we’ve got a system where all of our logs are backed up and ready to be analyzed.