November 8, 2011


Filed under: syslog-ng — lancevermilion @ 1:34 pm

Syslog-ng FAQ

This FAQ covers old syslog-ng versions, 1.x and 2.0. If you are looking for information about recent syslog-ng versions, please check the official FAQ now hosted by Balabit. Every mailing list should have a list of frequently asked questions, and the answers usually given to those questions. Here’s one for syslog-ng.

Disclaimer: Use this information at your own risk, I cannot be held responsible for how you use this information and any consequences that may result. However, every effort has been made to ensure the technical accuracy of this document.

Most questions are taken from actual posts to the syslog-ng mailing list. Truly horrible grammar and spelling were cleaned up, but most questions are identical to the original post.

Any new entries should be submitted to the new FAQ at Balabit, not here.

Important Syslog-ng and syslog links

syslog/syslog-ng Graphical Interfaces


Getting started

  • Syslog-ng 2.x requires glib and eventlog, which reside in /usr, and thus cannot be used on systems where /usr is mounted during boot, after syslog-ng starts. The latest snapshots (and future releases) of syslog-ng 2.x link to GLib and EventLog statically, so those libraries will not need to be present at boot time.

    The eventlog library was written by the syslog-ng author, and can be downloaded from the Balabit site.

    You can download GLib from the main GTK project site.

  • I miss this or that very important feature from syslog-ng 2.x that was present in syslog-ng 1.6. From Bazsi:
    syslog-ng 2.x is a complete reimplementation of syslog-ng, and even though I plan to make it upward compatible with syslog-ng 1.6, I might have forgotten something. So please post to the mailing list if you find missing or incompatible features.


  • What’s with this libol stuff, and which one do I need? libol is a library written by the author of syslog-ng, Balazs Scheidler, which is used in syslog-ng 1.6.x and below. A built copy of libol needs to be present on a system when these versions of syslog-ng are built.

    libol does *not* need to be installed, however. A built copy can be left in a typical build directory like /usr/src and given as a parameter to syslog-ng’s configure script. Run ‘./configure --help’ in the syslog-ng source directory for more information. For information about versions of libol and which branch of syslog-ng they correspond to, see here.

  • When attempting to use match(“.asx”) in my filter it returns anything containing “asx”. I only need to return those lines with the period before the asx, i.e. a file extension. For some reason syslog-ng seems to ignore my specification of the . before the asx. I have tried searching with ..asx and \.asx and /.asx but nothing works no matter what I do. Any suggestions? match() expects an extended regular expression, _and_ syslog-ng performs \-based escaping on strings to make it possible to include ” within a string. Therefore you need to specify match("\\.asx"): the doubled backslash reaches the regexp engine as \., and it will match a single dot followed by the string asx. See this post for more explanation of this issue:

  • I run Linux and see that I can choose one of two types of UNIX socket for my main syslog source. Which one is correct? You should choose unix-stream over unix-dgram for the same reasons you’d choose TCP (stream) over UDP (datagram): increased reliability, ordered delivery and client-side notification of failure.

    Along the same lines, you should choose unix-dgram over unix-stream for the same reasons you’d choose UDP (datagram) over TCP (stream): less possibility of denial of service by opening many connections (local-only vulnerability though), less overhead, don’t care to know if the remote end actually received the message.

    Most of us setting up syslog-ng tend to desire the benefits of unix-stream, and Bazsi recommends its use. See his commentary in the official reference manual.
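    As a hedged sketch of that recommendation, a unix-stream() source can also carry an explicit connection cap to bound the local denial-of-service exposure mentioned above (the max-connections value here is an arbitrary example, not a recommendation):

    ```
    # Local /dev/log source using a stream socket; cap simultaneous
    # connections to limit local denial-of-service exposure.
    source s_local {
            unix-stream("/dev/log" max-connections(256));
            internal();
    };
    ```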

  • Hi, can someone please help me get the compile right? OS: RHEL ES 3 update 4

    The ./configure went well, no errors.
    But the make did not go so well:

    Error message(s) :

    >> snip
    gcc -g -O2 -Wall -I/usr/local/include/libol -D_GNU_SOURCE -o
    syslog-ng main.o sources.o center.o filters.o destinations.o log.o cfgfile.o
    cfg-grammar.o cfg-lex.o affile.o afsocket.o afunix.o afinet.o afinter.o afuser.o
    afstreams.o afprogram.o afremctrl.o nscache.o utils.o syslog-names.o macros.o -lnsl
    /usr/local/lib/libol.a -lnsl -Wl,-Bstatic
    -Wl,-Bdynamic
    cfg-lex.o(.text+0x45f): In function `yylex':
    /root/rpm/syslog-ng/syslog-ng-1.6.8/src/cfg-lex.c:1123: undefined
    reference to `yywrap'
    cfg-lex.o(.text+0xb33): In function `input':
    /root/rpm/syslog-ng/syslog-ng-1.6.8/src/cfg-lex.c:1450: undefined
    reference to `yywrap'
    collect2: ld returned 1 exit status 
    make[3]: *** [syslog-ng] Error 1 
    make[3]: Leaving directory `/root/rpm/syslog-ng/syslog-ng-1.6.8/src' 
    make[2]: *** [all-recursive] Error 1 
    make[2]: Leaving directory `/root/rpm/syslog-ng/syslog-ng-1.6.8/src' 
    make[1]: *** [all] Error 2 
    make[1]: Leaving directory `/root/rpm/syslog-ng/syslog-ng-1.6.8/src' 
    make: *** [all-recursive] Error 1 

    Can someone please explain what went wrong? As previously noted by another poster, this is a problem with the flex version on your system. If you use a flex version higher than 2.5.4 you’re out of luck, unless you patch the sources. The reason is that the people developing flex had the “interesting” idea of changing the way the lexer parses the language file.

    The fix is to downgrade your flex or to patch cfg-lex.l with a %option field disabling yywrap. From the top of my head it should read:

     %option noyywrap

    Or you define the missing function yourself (I think), something like:

    int yywrap(void) { return 1; }

    Just my 2 cents, since this issue turns up on almost every OSS project out there and people hit this very problem all the time; I want to spread the information and have Google find it once and forever ;). Best regards,
    Roberto Nibali, ratz

    ps.: Let’s hope I got it right

    Note from Rob Munsch:
    It should be noted that one will get an *identical* error, specifically the ‘undefined reference to yywrap’, if flex is *not installed at all*.
    …those who hit this error should first ensure that they have flex/m4 installed before they start screwing with the cfg-lex.l code. I’ve been happily compiling various things on this (new) system for a while now, and those of us newer at this than some others tend to fall into the trap of thinking that if most things compile, then we aren’t missing any vital steps of a compile process, like, say, preprocessors….

  • Thanks for the patch, I just patched it, and I can’t recompile libol. I did a make clean after I patched it, then tried:
     $ make
     Making all in utils
     make[1]: Entering directory `/home/src/libol-0.2.17/utils'
     make[1]: Nothing to be done for `all'.
     make[1]: Leaving directory `/home/src/libol-0.2.17/utils'
     Making all in src
     make[1]: Entering directory `/home/src/libol-0.2.17/src'
     /usr/src/libol-0.2.17/utils/make_class io.c.xt
     /bin/sh: /usr/src/libol-0.2.17/utils/make_class: No such file or directory
     make[1]: *** [io.c.x] Error 127
     make[1]: Leaving directory `/home/src/libol-0.2.17/src'
     make: *** [all-recursive] Error 1

    I’m not sure what this means. Should I try the patch again? I just did a 

    $ patch ORIG_FILE DIFF_FILE

    This comes from a missing Scheme interpreter; touch io.c.x or install scsh.

    For much more on libol and Scheme in syslog-ng, read this post by Bazsi.

  • If I replace my syslog daemon with syslog-ng, what side effects can it have? Glad you asked; the most common side effect is being happy with a superior syslog daemon.

    Another common result is that system logfiles grow to huge sizes. This isn’t syslog-ng’s fault, but a side effect of syslog-ng logging to different logfiles than your old syslog daemon. Change your log rotation program’s config files to rotate the new log names/locations or change syslog-ng’s config file to make it log to the same files as your old syslog daemon.
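    As a sketch of the second approach, the destinations below simply reuse the classic sysklogd file names, so an existing log rotation setup keeps matching unchanged (the paths are illustrative; check what your old syslog.conf actually used):

    ```
    # Write to the same files the stock syslogd used, so existing
    # rotation configs still apply.
    destination d_messages { file("/var/log/messages"); };
    destination d_authlog  { file("/var/log/auth.log"); };
    ```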

Running it

  • I’m new to syslog-ng. Is there a way for syslog-ng and syslogd to co-exist? Our servers are managed by another group, and they don’t support syslog-ng. Can you pipe all syslogd messages to syslog-ng? Yes, syslog-ng can accept messages from stock syslogd using the udp() source.
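    A minimal sketch of such a source, assuming the stock syslogd forwards over the standard UDP port 514 (the source name is an example):

    ```
    # Accept messages forwarded by a classic syslogd
    # (e.g. a "*.* @loghost" line in its syslog.conf).
    source s_udp { udp(ip("") port(514)); };
    ```

    Wire s_udp into a log{} statement with whatever destination you already use.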
  • I want a catch-all log destination and can’t seem to find out how in the documentation or examples. Jay Guerette helped out with:

    Filters are optional. A catchall should appear in your .conf before all other entries, and can look something like:

    destination catchall { file("/var/log/catchall"); };
    log { source(src); destination(catchall); };
  • I want to replace syslogd *and* klogd on my Linux box with syslog-ng. Use a source line like this in your conf file to read kernel messages, too:
    source src { file("/proc/kmsg"); unix-stream("/dev/log"); internal(); };


    1. Do not run klogd and syslog-ng fetching local kernel messages at the same time. It may cause syslog-ng to block, which makes logging unusable for all local daemons.
    2. Current selinux policy distributed for RHEL4 supports syslog-ng via a boolean named “use_syslogng”. But on the non-working host (using “pipe”), the following happens:

       avc: denied { write } for pid=2190 comm="syslog-ng" name="kmsg" dev=proc ino=-268435446 scontext=root:system_r:syslogd_t tcontext=system_u:object_r:proc_kmsg_t tclass=file

       Please don’t use “pipe” at all for /proc/kmsg; use file("/proc/kmsg") instead.

      Thanks to Peter Bieringer for contributing this information

    3. If you find yourself getting lots of kernel messages on the console after replacing klogd with syslog-ng: set the kernel’s console log level. This is done automatically by klogd but not by syslog-ng. Something like “dmesg -n4” should help.
  • I have been trying syslog-ng and am extremely happy with the power of using it. I have one question: when using the program option under destination drivers, my Perl script gets launched when I start syslog-ng, but executes once and then dies. I am using this script to page any time I see a log entry, but it only runs once. Your script can read log messages on its stdin, so instead of fetching a single line and exiting, keep reading your input like this:

    while (<>) {
            # send to pager
    }
  • Is it possible to create sockets with syslog-ng similar to how you can do so with syslogd? The reason being that I’m running some applications chrooted, and need to open a /dev/log socket inside that chroot jail. Of course you can. Just add a source:
    source local { unix-stream("/dev/log"); internal(); };
    source jail1 { unix-stream("/jail/dns/dev/log"); };
    source jail2 { unix-stream("/jail/www/dev/log"); };

    Or you can do this using a single source:

    source local {
            unix-stream("/dev/log");
            unix-stream("/jail/dns/dev/log");
            unix-stream("/jail/www/dev/log");
            internal();
    };

    Note that postfix appears to need a log socket in its chroot jail, or its logging will stop when you reload syslog-ng:

    source postfix { unix-stream("/var/spool/postfix/dev/log" keep-alive(yes)); };
  • Directories with names like “Error”, “SCSI”, “”, are showing up in the directory that holds the syslogs for the different hosts we monitor. Has anyone seen these random directories? Any suggestions on how to deal with them? From the description it’s apparent that logs are being stored in your filesystem with a macro similar to this:


    destination std { file( "/var/log/$HOST/$FACILITY"); };

    …so that you have directories created with the value of $HOST. This is bad. The host entry in syslog messages is often set to a bad value, especially with messages originating from the UNIX kernel, like SCSI error messages.

    The best fix is to *never* create files or directories based on unfiltered input from the network (you’d do well to remember that in general). Set the option keep_hostname(no), and syslog-ng will always replace the hostname field (possibly using DNS, so make sure your local caching DNS is set up correctly).

    Here’s the way to keep the hostnames in the log files but ALSO log safely to the filesystem:

    options { keep_hostname(yes); use_dns(no); };

    source src {
    	unix-stream("/dev/log" keep-alive(yes));
    	udp();
    	internal();
    };

    # set it up
    destination logip {
    	file("/var/log/$HOST_FROM/$FACILITY"
    	owner(syslog-ng) group(syslog-ng) perm(0600) dir_perm(0700) create_dirs(yes));
    };

    # log it
    log { source(src); destination(logip); };

    Since you don’t use DNS, your $HOST_FROM directory name will be an IP address, but since you keep_hostname(yes) you’ll still have the hostname AS SENT inside the actual logfile. How’s that for a good setup? I quite like it! 😉 If you still really want to use hostnames for directory or file names, read on:
    When still using hostnames (from DNS) for directory names, the author of this FAQ couldn’t make garbled $HOST macros go away until he modified all clients to run syslog-ng and transfer over TCP. Both steps might not be required (syslog-ng over UDP might be sufficient), though there’s little reason *not* to use TCP. Modern TCP/IP stacks are tuned to handle lots of web connections, so even a central host for hundreds of machines can use TCP without issues from the use of TCP alone. Under most circumstances you will hit I/O problems committing that many hosts’ logs to disk much sooner.

  • DNS: I want to use fully qualified domain names in my logs; I have many different hosts named ns1 or www, and don’t want the logs mixed up. Also, I have a question concerning the use_dns option. Is this a global option only, or is there some way to change this per source or destination? First of all, make sure that you have a reliable DNS cache nearby. Nearby may be on the same host or network segment, or even at your upstream provider; just make sure that you can reach it reliably. syslog-ng blocks on DNS lookups, so you can stall all logging if you start getting DNS timeouts.

    Internal syslog-ng DNS caching has recently been worked on, and reportedly works well. This appears to be a good alternative to running a local caching DNS server (‘dns_cache(yes);’).

    The use_dns option can be specified on a per-source basis (so can the keep_hostname option).

    See also: the section on hostname options directly below.
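    As a sketch of the per-source form (source name and values are examples, following syslog-ng 1.6 option placement):

    ```
    # Global default: no DNS lookups...
    options { use_dns(no); };
    # ...but resolve names for this one network source.
    source s_net { udp(ip("") port(514) use_dns(yes) keep_hostname(no)); };
    ```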

  • What is with all the “hostname” options? When syslog-ng receives a message, it tries to rewrite the hostname it contains unless keep_hostname is true. If the hostname is to be rewritten (i.e. keep_hostname is false), it checks whether chain_hostnames (or long_hostnames, which is an alias for chain_hostnames) is true. If chain_hostnames is true, the name of the host syslog-ng received the message from is appended to the hostname; otherwise the hostname is replaced.

    So if you have a message which has hostname “server”, and which resolves to “server2”, the following happens:

                         keep_hostname(yes)   keep_hostname(no)
    chain_hostname(yes)  server               server/server2
    chain_hostname(no)   server               server2

    I hope this makes things clear.
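    In config terms, the “server2” case above (keep_hostname(no) with chain_hostname(no)) corresponds to settings like:

    ```
    # Rewrite the sender-supplied hostname with the resolved one,
    # without chaining the two names together.
    options { keep_hostname(no); chain_hostnames(no); };
    ```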

  • I have this config file:
     filter f_local0 { facility(local0); };
     filter f_local1 { facility(local1); };
     destination df_local1 {
             file("/mnt/log/$R_YEAR-$R_MONTH-$R_DAY/$SOURCEIP/local.log"
             template("$FULLDATE <> $PROGRAM <> $MSGONLY\n")
             template_escape(no));
     };
     log {
             source(s_tcp); source(s_internal); source(s_udp); source(s_unix);
             filter(f_local0); filter(f_local1);
             destination(df_local1);
     };

    When an event arrives at the system via facility local0 or local1, it is never written to the file. Is this a bug in syslog-ng or a failure in my config? If I understand you correctly, the problem is that you’re using two filters which exclude each other: multiple filters in a log statement are logically ANDed. If you want to catch messages from local0 and local1, use a filter like this:

    filter f_local01 { facility(local0) or facility(local1); }; 


  • I archive my logs like this:
    file("/var/log/HOSTS/$HOST/$YEAR/$MONTH/$DAY/$FACILITY_$HOST_$YEAR_$MONTH_$DAY" owner(root) group(root) perm(0600) dir_perm(0700) create_dirs(yes) ); 

    …and over time my archive takes up lots of space. When will syslog-ng implement compression so that I can compress logs automatically? You don’t need syslog-ng to compress them; install bzip2 and run a nightly cronjob like this:

     /usr/bin/find /var/log/HOSTS ! -name "*bz2" -type f ! -path "*`/bin/date +%Y/%m/%d`*" -exec /usr/bin/bzip2 {} \; 

    This might need some explaining: find all non-bzipped files that aren’t from today (syslog-ng might still write to them) and compress them with bzip2. This was tested on Debian GNU/Linux with GNU find version 4.1.7.

    Submitted by Michael King:
    We started with the compression script you have, but changed it to use gzip compression (less space efficient, but more time efficient: bzip2 takes approximately 20 minutes to decompress what gzip handles in 2 or 3 minutes), and added a quick find-and-delete for empty directories and files modified more than 14 days ago.

    # Current policy is:
    # Find all non-Archived files that aren't from today, and archive them
    # Archive Logs are deleted after 14 days
    # To adjust retention, change -mtime +14 to the number of days to keep
    # Archive old logs
    /usr/bin/find /var/log/HOSTS ! -name "*.gz" -type f ! -path "*`/bin/date +%Y/%m/%d`*" -exec /usr/bin/gzip {} \;
    # Delete old archives
    find /var/log/HOSTS/ -daystart -mtime +14 -type f -exec rm {} \;
    # Delete empty directories
    find /var/log/HOSTS/ -depth -type d -empty -exec rmdir {} \;
  • My syslog-ng.conf has
    destination std { file( "/var/log/$HOST/$YEAR$MONTH/$FACILITY" create_dirs(yes)); };

    What happens is if /var/log/$HOST/$YEAR$MONTH does not exist, syslog-ng creates that directory, but its owner is root:other, I think because the daemon’s effective user ID is root. I want to change the directory’s owner. Is this possible?

    Yes, you can do this using the owner(), group() and perm() options.
    For example:

    destination d_file { file("/var/log/$HOST/log" create_dirs(yes)
     owner("log") group("log") perm(0600)); };
  • a) I have snmptrapd running so that any trap it receives should be logged to local1. I have a filter sending anything received via local1 to a specific file:

    filter snmptrap { facility(local1); };
    destination snmptraps { file("/var/log/snmptraps"); };

    Unfortunately a number of traps are getting cut off at a specific point, and the remainder of the trap ends up in syslog and not in the proper destination. syslog defaults to 1024-byte messages, but this value is tunable in syslog-ng 1.5, where you can set it to a higher value:

    options { log_msg_size(8192); };

    b) Andreas Schulze points out: “We are running snmptrapd and syslog-ng 1.5.x under Solaris 8 and observed exactly the same problem.

    This doesn’t fix the problem for us. It seems there is a problem in the syslog(3) implementation, at least on Solaris, and maybe on Linux too. This is important because snmptrapd feeds its messages to syslog-ng via syslog(3), so syslog-ng never gets the complete message: it’s truncated in libc before syslog-ng receives it.

    Our solution was to patch snmptrapd to log its messages via a local Unix DGRAM socket and use this socket as a message source for syslog-ng. This fixes the problem and has worked very stably for more than a year in our environment.

    Basically you’re screwed on Solaris, but hopefully other implementations aren’t as brain-dead.
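    The syslog-ng side of that workaround might look like the sketch below (the socket path is hypothetical; it must match whatever path the patched snmptrapd writes to):

    ```
    # Read snmptrapd's messages directly from a datagram socket,
    # bypassing the truncating syslog(3) implementation in libc.
    source s_snmptrapd { unix-dgram("/var/run/snmptrapd.sock"); };
    destination d_snmptraps { file("/var/log/snmptraps"); };
    log { source(s_snmptrapd); destination(d_snmptraps); };
    ```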

  • It seems I have syslog clients with unsynchronized clocks, and files created with the time macros have the wrong date. What I want is for the files to be created with the time/date they are received. There’s an option for that: the use_time_recvd() boolean.
    options { use_time_recvd(true); };
  • What conf settings can I use in my syslog-ng.conf file so that messages are written to disk the instant they are received? Add sync(0) to your config file:
    options { sync(0); };
  • I want to run syslog-ng chrooted and as a non-root user. Use syslog-ng 1.5.x’s own -C and -u flags when starting it.
  • I want to rewrite my logs into a specific format. Syslog-ng 1.5.3 added support for user-definable log file formats. Here’s how to use it:

    destination my_file {
            file("/var/log/messages" template("$ISODATE $TAG $FULLHOST $MESSAGE"));
    };

    For an explanation of available macros read this post.

  • I have been experiencing a problem with a syslog-ng (1.4.11) server seemingly only allowing 10 connections to a tcp() source. A quick tour of the code found the offending code at line 341 in afinet.c:
    self->super.max_connections = 10;

    It’s limited because otherwise it’d be easy to mount a DoS attack against the logger host. You can change this limit at run time, without changing the source, with the max-connections option to tcp():

    source src { tcp(max-connections(100)); };

  • Can output from programs started by syslog-ng be captured and logged by syslog-ng? It’s on the todo list. As long as it is not implemented, you might try to redirect the program’s output to a named pipe like this:
    destination d_swatch { program("swatch 2> /var/run/swatch.err"); };
     source s_swatch { pipe("/var/run/swatch.err"); };

Getting fancy

  • The whole point of setting up a loghost was to report on the logs. How can syslog-ng help? Syslog-ng is not about reporting on messages. Syslog-ng is a “sink” for syslog messages: once syslog-ng commits them to some sort of storage (filesystem, database, line printer, etc.), it is up to you to scan them.

    That being said, Nate Campi’s “newlogcheck” page shows how he filters all messages through swatch in real time, and also uses syslog-ng’s “match” option to alert on certain message strings.
    The fact that this stuff works is a result of syslog-ng’s flexibility, not because it was written to be all things to all people. Syslog-ng is a quality daemon because it tries to stay good at one thing and one thing only: being a syslog server.

    Look at the links part of this page for the link to Nate’s newlogcheck page, and for the link to the Log Analysis page. You’ll find plenty of information on log parsing there.

  • I want to input my logs into a database in real time. Why can’t I do it? You can; there’s just nothing built into syslog-ng that knows about databases. You simply need to take advantage of syslog-ng’s ability to pipe to a program. Follow the links in the links part of this page to read up on how others have done it.
  • How much log volume can syslog-ng handle? The limits to throughput in syslog-ng are similar to those of most other network applications: network and disk I/O are the limiting factors.

    Here’s a report from Kevin Kadow about throughput on his busy loghost:

    Our log volume has been growing slowly over time, I recently checked my primary
    logger and noticed that the raw log volume for this past Monday (the 24-hour
    period from midnight to midnight) was 10,982,118,488 bytes, or 10.22GiB.
    Peak hourly volume was 913MiB. Peak one-second volume in that hour was 2,626
    messages totaling 440KiB, no duplicates.
    System specs:
    OpenBSD on a Dell 2650 (single 2.8GHz P4) running syslog-ng 1.6.x.
    Logs are written to a Dell PERC 3/Di SCSI RAID-0, as a 2-drive stripe.
  • I’m using syslog-ng over redirected ports inside an SSH channel, and whenever I HUP syslog-ng, the SSH channel closes. syslog-ng closes TCP connections when a SIGHUP is received, but you can change this behaviour with the keep-alive option:
    destination remote_tcp { tcp("loghost" port(1514) keep-alive(yes)); };
    source tcp_listen { tcp(ip("") port(5140) keep-alive(yes)); };
  • I’ve successfully set up syslog-ng to tunnel through stunnel. I’m having one problem though: all messages come through with a hostname of “localhost”, presumably since stunnel is connecting from localhost on the syslog server…. Keep the hostname as sent by the remote syslog daemon:
    options { keep_hostname(yes); };
  • I am trying to set up a central log host and am having trouble getting events registered on the central server. It looks like the remote host does connect to the central host, but nothing shows up in any log for it. Here is the central loghost config file:

    options { keep_hostname(yes); };
    source gateway {
            unix-stream("/dev/log"); internal();
            udp(ip("") port(514));
    };
    source tcpgateway {
            unix-stream("/dev/log"); internal();
            tcp(ip("") port(514) max_connections(1000));
    };
    destination hosts {
            file("/var/log/HOSTS/$HOST/$FACILITY"
            owner(root) group(root) perm(0600) dir_perm(0700) create_dirs(yes));
    };
    log {
            source(gateway); destination(hosts);
    };
    log {
            source(tcpgateway); destination(hosts);
    };

    Don’t duplicate source drivers (unix-stream(“/dev/log”) and internal()) across your source{} statements. syslog-ng will open /dev/log once for each time you list it, and the same goes for any TCP/IP ports, files, etc. List each driver once, and reuse the source{} in additional log{} statements.

    You’ll want something more like:

    options { keep_hostname(yes); };
    source local {
            unix-stream("/dev/log"); internal();
    };
    source gateway {
            udp(ip("") port(514));
    };
    source tcpgateway {
            tcp(ip("") port(514) max_connections(1000));
    };
    destination hosts {
            file("/var/log/HOSTS/$HOST/$FACILITY"
            owner(root) group(root) perm(0600) dir_perm(0700) create_dirs(yes));
    };
    log { source(local); destination(hosts); };
    log { source(gateway); destination(hosts); };
    log { source(tcpgateway); destination(hosts); };
  • I am having problems with syslog-ng on an SELinux-aware machine. The kernel will not allow me to open /proc/kmsg for kernel messages. The error message I get looks like:

    Oct 24 14:03:06 shadowlance kernel: audit(1130178038.432:2): avc: denied { read } for pid=2690 comm="syslog-ng" name="kmsg" dev=proc ino=-268435446 scontext=user_u:system_r:syslogd_t tcontext=system_u:object_r:proc_kmsg_t tclass=file

    What can I do?

    These errors are a sign that the SELinux policies either are not syslog-ng aware or have not been enabled yet. Make sure you have the latest policies for your distribution and then use getsebool to see if use_syslogng is turned on:

    # getsebool use_syslogng
    use_syslogng --> inactive
    # setsebool -P use_syslogng=1

    Restart the syslog daemon and the problem should be fixed. If not, you will need to contact your distribution’s SELinux team for more guidance.

  • I am trying to send all important messages from a bunch of other machines to a central syslog-ng server via TCP. I chose TCP partly because the same log server gets all kinds of less important stuff via UDP from other machines, which can easily be distinguished that way, but also because I expected TCP to be more reliable. Unfortunately, this does not seem to be the case: when the connection has died for any reason, the client only discovers this when it tries to send the next message to the server. Only then does it wait until “time_reopen” is over and establish a new connection; the message that originally triggered this, and whatever comes in between, is lost. This has been discussed quite a bit lately on the mailing list. A single line can be written to a TCP socket without an error after the connection has been lost; it is not until the next message is written that the error condition is reported by the kernel.

Performance Tips

  • What are some tips for optimizing a really busy loghost running syslog-ng? In no particular order:
  • If you use DNS, at least keep a caching DNS server running on the local host and make use of it – or better yet don’t use DNS.
  • You can post-process logs on an analysis host later and resolve hostnames at that time if you need to. On your loghost your main concern is keeping up with the incoming log stream; the last thing you want is to make the recording of events rely on an external lookup. syslog-ng blocks on DNS lookups (as noted elsewhere in this FAQ), so slow or failed DNS lookups will slow down or stop ALL destinations.
      • Don’t log to the console or a tty, under heavy load they won’t be able to read the messages as fast as syslog-ng sends them, slowing down syslog-ng too much.
      • Don’t use regular expressions in your filters. Instead of:

        filter f_xntp_filter_regexp {
        	# original line: "xntpd[1567]: time error -1159.777379 is way too large (set clock manually)"
        	program("xntpd") and
        	match("time error .* is way too large .* set clock manually");
        };

        Use this instead:

        filter f_xntp_filter_no_regexp {
        	# original line: "xntpd[1567]: time error -1159.777379 is way too large (set clock manually)"
        	program("xntpd") and
        	match("time error") and match("is way too large") and match("set clock manually");
        };

        Under heavy, heavy logging load, the regexp version used dramatically more CPU than the plain-match version (the CPU usage graphs from the original page are not reproduced here).

        Note that the results at the bottom of the graphs show that the test with heavy regexp use caused huge delays, almost 25% lost messages (the test only sent 5,000 messages!) and hammered the CPU. The test without regexps was one where I sent 50,000 messages, and it hardly used any CPU, didn’t drop any messages and all the messages made it across in under a second (not all 50,000, each individual message made it in under a second). Note that the “Pace” of 500/sec is simply how fast they were injected to the syslog system using the syslog() system call (from perl using Unix::Syslog).

        NOTE: when not using regexps and matching on different pieces of the message, you might match messages that you don’t mean to. There is only a small risk of this, and it is much better than running out of CPU resources on your log server under most circumstances. It is your call to make.

        Please don’t ask me for the scripts that generated these graphs, I wrote them for work and it probably wouldn’t be possible to ever release them. I hope to one day write some like it in my free time and release them…but that may be a pipe dream. 😦


    • Be sure to increase your maximum connections to a TCP source, as described here
    • There’s a good chance you’ll want to set per-destination buffers. The official reference manual covers the subject. The idea is to make sure that when you have multiple log destinations that might block somewhat “normally” (TCP and FIFO come to mind), they don’t interfere with each other’s buffering. If you have a TCP connection whose buffer is full because of an extended network problem, but only a temporary problem feeding logs into a FIFO, you can avoid losing any data in the FIFO (assuming your buffer size is large enough to handle the backlog) if you set up separate buffers.

      If our TCP destination connection drops because the regional syslog server is down for a syslog-ng upgrade or kernel patch, we want events bound for the TCP destination to be held in the buffer and sent across once the connection is re-established. If our single shared buffer is already full because of FIFO problems with a local process, we can’t buffer a single message for the duration of the TCP connection outage. Ouch.

      The catch with implementing per-destination buffers is that the log_fifo_size option was only added to TCP destinations in version 1.6.6, so you need to upgrade to syslog-ng 1.6.6 or later (I suggest the latest stable version).
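      A hedged sketch of per-destination buffers under syslog-ng 1.6.6 or later (the hostname, FIFO path and buffer sizes are arbitrary examples; log_fifo_size counts messages, not bytes):

      ```
      # Each destination gets its own output buffer, so a stalled TCP
      # connection cannot exhaust the buffer space the FIFO relies on.
      destination d_tcp  { tcp("loghost" port(514) log_fifo_size(16384)); };
      destination d_fifo { pipe("/var/run/logfifo" log_fifo_size(4096)); };
      ```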

  • You probably need to increase the size of your UDP receive buffers on your loghost. See this doc about UDP buffer sizing and how to modify it.
  • If you have many clients, you might well run out of file descriptors (the default limit is around 1000), in which case syslog-ng cannot open new files. The workaround is to raise the maximum file descriptor limit (ulimit -n) before starting syslog-ng; the best place to do this is the init script.
