Gheek.net

September 11, 2015

Use Perl to Create Multi-page or Multi-Document Visio(s) using Visio Plugin for PowerShell

Filed under: perl, PowerShell, Visio — lancevermilion @ 12:36 pm

I draw Visio diagrams regularly, and I have been motivated to make them more and more dynamic, so that only the artistic touches that make a diagram look the way you want are left to do by hand. There is no reason to waste time typing in data you already have in an Excel Spreadsheet, SharePoint Foundation List, or DB. I am using Excel Spreadsheets for this example, but you can use whatever you want; obviously some additional work would be needed to adapt it. The drawing itself can be done in VBA, in PowerShell using the Visio module/plugin, or in C# (I will learn that one day). I decided on the Visio module/plugin for PowerShell by Saveen Reddy because it was the quickest for me to learn and use. He is the author of a larger project called Visio Automation.

I did create my own Stencil Pack and added my Data Graphics to it. I prefer diagrams whose shapes have simple, unique hostnames, so they can be linked to an Excel Spreadsheet, SharePoint Foundation List, or DB using Visio's external data feature. Since I use the unique hostname as the key in the XLS, I can match it to the name of the shape on the page and then populate the Data Graphic with all the fields from the external data. I also keep a connection list, in the format of C followed by any number of digits, which allows the WAN links to be linked to the external data as well.

Here is a little useful PowerShell to quickly select specific shapes all at once.

# Select all rectangles (aka didn't match criteria for specific stencil)
Select-VisioShape None
$shapes = Get-VisioShape *
$shapeids = $shapes | where { $_.NameU -like "*rectangle*"} | Select ID
if ($shapeids) { Select-VisioShape $shapeids.ID }

# Select all Dynamic connectors
Select-VisioShape None
$shapes = Get-VisioShape *
$shapeids = $shapes | where { $_.NameU -like "*Dynamic connector*"} | Select ID
if ($shapeids) { Select-VisioShape $shapeids.ID }

Resize a rectangle shape (the default shape in the code that draws the diagrams automatically) on all pages

#
# Go page by page and resize each rectangle and set other page attributes
#

# If the Visio plugin for PowerShell doesn't have a connection to Visio yet, connect to it.
if (-Not (Test-VisioApplication) ) 
{
    Connect-VisioApplication
}

# Sometimes you will need to also attach to the document.
$app = Get-VisioApplication


$pages = Get-VisioPage | where { $_.Name } | Select Name
foreach ( $page in $pages )
{
    if ( "background" -eq $page.Name ) {break}
    Write-Host Processing: $page.Name
    Set-VisioPage -Name $page.Name
    $shapes = Get-VisioShape *
    $rectids = $shapes | where { $_.NameU -like "*rectangle*"} | Select ID
    if ($rectids)
    {
        Write-Host $rectids.ID
        Select-VisioShape $rectids.ID
        Set-VisioShapeCell -Width 0.6049 -Height 0.475
        Set-VisioPageLayout -Orientation Landscape -BackgroundPage background
        Set-VisioPageCell -PageWidth 11.0 -PageHeight 8.5
    }
    Select-VisioShape None
}

Because I am not a decent PowerShell programmer/scripter/hacker/etc., I wrote a Perl script that outputs a PowerShell script; I drop that into PowerShell ISE and let it do my dirty work for me.🙂

My script requires an input file in the following format.

label = "Name of Page/Document in Visio"
DEV1,DEV2,Connector_Label,Connector_Type

Real Data

label = "Site A"
ATTCLOUD,DEV1,Connector_Label,Connector_Type
DEV1,DEV2,Connector_Label,Connector_Type
DEV2,DEV3,Connector_Label,Connector_Type
DEV2,DEV5,Connector_Label,Connector_Type
DEV2,DEV4,Connector_Label,Connector_Type
XOCLOUD,DEV4,Connector_Label,Connector_Type

Currently Connector_Label is ignored, but it could be used in other implementations.
Connector types are: Straight, RightAngle, or Curved (the default).
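To make the input format concrete, here is a minimal Python sketch (the Perl script below is the real tool) that parses the sample above into a per-site dict of nodes and numbered C-links:

```python
# Minimal sketch of the input-format parsing; names and structure are
# illustrative, not the Perl script's actual data layout.
def parse_topology(lines):
    sites, site, cnt = {}, None, 0
    for line in lines:
        line = line.strip()
        if not line:
            continue
        if line.startswith("label"):
            # label = "Site A"  ->  site key "Site A"
            site = line.split(" = ", 1)[1].strip('"')
            sites[site] = {"nodes": set(), "links": {}}
        else:
            frm, to, label, linetype = line.split(",")
            cnt += 1
            sites[site]["nodes"].update([frm, to])
            sites[site]["links"]["C%d" % cnt] = (frm, to, label, linetype)
    return sites

sample = '''label = "Site A"
ATTCLOUD,DEV1,Connector_Label,Straight
DEV1,DEV2,Connector_Label,Straight'''
topo = parse_topology(sample.splitlines())
```

Each link key (C1, C2, ...) matches the connection-list naming used for linking WAN links to external data.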

Perl Script to Create a PowerShell Script to Dynamically Draw Diagrams

#!/usr/local/bin/perl

my $filename = $ARGV[0];
my @arr = do {
    open my $fh, "<", $filename
        or die "could not open $filename: $!";
    <$fh>;
};

my $hash = {};
my $save_location = 'c:';

# Where to place each diagram
# 0 = Each Diagram goes on new page
# 1 = Each Diagram goes in its own Visio file
my $newdocper = 0;

my $site = '';
my $site_clean = '';
my $cnt = 0;
for my $line (@arr)
{
  chomp($line);
  if ( $line =~ /label/ )
  {
    $site = '';
    (undef, $site) = split(/ = /, $line);
    $site_clean = $site;
    $site_clean =~ s/"//g;
    $site_clean =~ s/-/_/g;
    $site_clean =~ s/#/NUM/g;
    $site_clean =~ s/&/_/g;
    $site_clean =~ s/\(//g;
    $site_clean =~ s/\)//g;
    $site_clean =~ s/\//_/g;
    $site_clean =~ s/ /_/g;
    $site_clean =~ s/\./_/g;
  }
  else
  {
    $cnt++;
    my ($from, $to, $label, $linetype) = split(/,/, $line);
    my $to_clean = $to;
    $to_clean =~ s/"//g;
    $to_clean =~ s/-/_/g;
    $to_clean =~ s/#/NUM/g;
    $to_clean =~ s/&/_/g;
    $to_clean =~ s/\(//g;
    $to_clean =~ s/\)//g;
    $to_clean =~ s/\//_/g;
    $to_clean =~ s/ /_/g;
    $to_clean =~ s/\./_/g;
    my $to_normal = $to;
    my $from_clean = $from;
    $from_clean =~ s/"//g;
    $from_clean =~ s/-/_/g;
    $from_clean =~ s/#/NUM/g;
    $from_clean =~ s/&/_/g;
    $from_clean =~ s/\(//g;
    $from_clean =~ s/\)//g;
    $from_clean =~ s/\//_/g;
    $from_clean =~ s/ /_/g;
    $from_clean =~ s/\./_/g;
    my $from_normal = $from;

    # Build Node portion of hash
    $hash->{$site_clean}->{'nodes'}->{$from}->{'clean'} = $from_clean;
    $hash->{$site_clean}->{'nodes'}->{$from}->{'normal'} = $from_normal;
    $hash->{$site_clean}->{'nodes'}->{$to}->{'clean'} = $to_clean;
    $hash->{$site_clean}->{'nodes'}->{$to}->{'normal'} = $to_normal;

    # Build links portion of hash
    $hash->{$site_clean}->{'links'}->{$cnt}->{'from'} = $from_clean;
    $hash->{$site_clean}->{'links'}->{$cnt}->{'to'} = $to_clean;
    $hash->{$site_clean}->{'links'}->{$cnt}->{'label'} = $label;
    $hash->{$site_clean}->{'links'}->{$cnt}->{'linetype'} = $linetype;
    
  }
}

print "Set-StrictMode -Version 2\n";
print "\$ErrorActionPreference = \"Stop\"\n";
print "Import-Module Visio\n";
print "\$options = New-Object VisioAutomation.Models.DirectedGraph.MSAGLLayoutOptions\n";
print "\$d = New-VisioDirectedGraph\n";
print "\$app = New-Object -ComObject Visio.Application \n";
print "\$app.visible = \$true \n";
print "\$docs = \$app.Documents \n";
print "\$doc = \$docs.Add(\"DTLNET_U.vst\") \n";
print "\$pages = \$app.ActiveDocument.Pages \n";
print "\$page = \$pages.Item(1)\n";
print "\$stencil = \$app.Documents.Add(\"My_Network_Stencil_Pack.vss\")\n";
print "\$backgroundborder = \$stencil.Masters.Item(\"Background Border\")\n";
print "\$infobar = \$stencil.Masters.Item(\"Info Bar on Background\")\n";
print "\$page.Name = \"background\"\n";
print "\$page.AutoSize = \$false\n";
print "\$page.Background = \$true\n";
print "\$page.Document.PrintLandscape = \$true\n";
print "\$page.document.PrintFitOnPages = \$true\n";
print "\$bg = \$page.Drop(\$backgroundborder, 5.5, 4.25) \n";
print "\$bginfobar = \$page.Drop(\$infobar, 8.0646, 0.95) \n";
print "\$page.CenterDrawing\n";
print "if (-Not (Test-VisioApplication) ) \n";
print "{\n";
print "    Connect-VisioApplication\n";
print "}\n";
print "if (-Not (Test-VisioDocument) )\n";
print "{\n";
print "    Set-VisioDocument -Name Drawing1\n";
print "}\n";
print "Set-VisioPageCell -PageWidth 11.0 -PageHeight 8.5\n";
for my $sitekey ( sort keys %$hash )
{
  print "\$d = New-VisioDirectedGraph\n";
  #my $nodecnt = 1;
  #my @custprops = ();
  # Print out all Nodes per Site
  for my $node ( sort keys %{$hash->{$sitekey}->{'nodes'}} )
  {
    my $node_label = $hash->{$sitekey}->{'nodes'}->{$node}->{'normal'};
    my $node_name = $hash->{$sitekey}->{'nodes'}->{$node}->{'clean'};
    my $node_stencil = "BASIC_U.VSS";
    my $node_shape = "Rectangle";
    my $node_stencil_my = "My_Network_Stencil_Pack.vss";
    my $node_shape_l2switch = "Cisco L2 Switch DG";
    my $node_shape_l3switch = "Cisco L3 Switch DG";
    my $node_shape_rtr = "Cisco Router DG";
    my $node_shape_fw = "Cisco ASA 5500 Series DG";
    my $node_shape_cloud = "Cloud";
    my $node_shape_rectangle = "Rectangle DG";
    my $nodename = "\$$node_name = \$d.AddShape(\"$node_name\",\"$node_label\", \"$node_stencil\", \"$node_shape\")\n";
    #my $nodename = "\$$node_name = \$d.AddShape(\"$node_name\",\"$node_label\", \"$node_stencil_my\", \"$node_shape_rectangle\")\n";
    $nodename = "\$$node_name = \$d.AddShape(\"$node_name\",\"$node_label\", \"$node_stencil_my\", \"$node_shape_cloud\")\n" if ( $node_name =~ /qmoe/i );
    $nodename = "\$$node_name = \$d.AddShape(\"$node_name\",\"$node_label\", \"$node_stencil_my\", \"$node_shape_l3switch\")\n" if ( $node_name =~ /3850/ );
    $nodename = "\$$node_name = \$d.AddShape(\"$node_name\",\"$node_label\", \"$node_stencil_my\", \"$node_shape_l3switch\")\n" if ( $node_name =~ /6500/ );
    $nodename = "\$$node_name = \$d.AddShape(\"$node_name\",\"$node_label\", \"$node_stencil_my\", \"$node_shape_l2switch\")\n" if ( $node_name =~ /as/ );
    $nodename = "\$$node_name = \$d.AddShape(\"$node_name\",\"$node_label\", \"$node_stencil_my\", \"$node_shape_rtr\")\n" if ( $node_name =~ /wr\d\d/i );
    $nodename = "\$$node_name = \$d.AddShape(\"$node_name\",\"$node_label\", \"$node_stencil_my\", \"$node_shape_rtr\")\n" if ( $node_name =~ /vrf_/i );
    $nodename = "\$$node_name = \$d.AddShape(\"$node_name\",\"$node_label\", \"$node_stencil_my\", \"$node_shape_rtr\")\n" if ( $node_name =~ /rt\d\d/i );
    $nodename = "\$$node_name = \$d.AddShape(\"$node_name\",\"$node_label\", \"$node_stencil_my\", \"$node_shape_fw\")\n" if ( $node_name =~ /fw\d\d/i );
    print $nodename;
  }

  for my $linknum ( sort keys %{$hash->{$sitekey}->{'links'}} )
  {
    # Print out all links per Site
    my $linknum_ = "C" . $linknum;
    my $from_ = $hash->{$sitekey}->{'links'}->{$linknum}->{'from'};
    my $to_ = $hash->{$sitekey}->{'links'}->{$linknum}->{'to'};
    my $label_ = $hash->{$sitekey}->{'links'}->{$linknum}->{'label'};
    my $linetype_ = $hash->{$sitekey}->{'links'}->{$linknum}->{'linetype'};
    print "\$d.AddConnection(\"$linknum_\",\$$from_,\$$to_,\"$linknum_\",\"$linetype_\")\n";
  }
  print "New-VisioDocument\n" if ( $newdocper );
  print "\$p = New-VisioPage -Name \"$sitekey\"  -Height 8.5 -Width 11\n" if ( ! $newdocper );
  print "\$d.Render(\$p,\$options)\n";
  print "\$shapes = Get-VisioShape *\n";
  print "\$rectids = \$shapes | where { \$_.NameU -like \"*rectangle*\"} | Select ID\n";
  print "if (\$rectids)\n";
  print "{\n";
  print "    Select-VisioShape \$rectids.ID\n";
  print "    Set-VisioShapeCell -Width 0.6049 -Height 0.475\n";
  print "    Set-VisioPageLayout -Orientation Landscape -BackgroundPage background\n";
  print "    Set-VisioPageCell -PageWidth 11.0 -PageHeight 8.5\n";
  print "}\n";
  print "Select-VisioShape None\n";

  print "Save-VisioDocument \"$save_location\\$sitekey.vsd\"\n" if ( $newdocper );
  print "\$od = Get-VisioDocument -ActiveDocument\n" if ( $newdocper );
  print "Close-VisioDocument -Documents \$od\n" if ( $newdocper );
}
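The repeated s/// substitutions that clean the site and device names could be factored into one helper. Here is a hypothetical Python equivalent of that sanitization, for readers following the logic:

```python
import re

# Same cleanup the Perl applies to site and device names so they are
# safe to use as PowerShell variable names: strip quotes and parens,
# turn '#' into 'NUM', and replace other separators with underscores.
def clean_name(name):
    name = name.replace('"', '').replace('(', '').replace(')', '')
    name = name.replace('#', 'NUM')
    return re.sub(r'[-&/ .]', '_', name)
```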

Example of Generated PowerShell

Set-StrictMode -Version 2
$ErrorActionPreference = "Stop"
Import-Module Visio
$options = New-Object VisioAutomation.Models.DirectedGraph.MSAGLLayoutOptions
$d = New-VisioDirectedGraph
$app = New-Object -ComObject Visio.Application 
$app.visible = $true 
$docs = $app.Documents 
$doc = $docs.Add("DTLNET_U.vst") 
$pages = $app.ActiveDocument.Pages 
$page = $pages.Item(1)
# You would have to comment out the following three lines since you likely don't have them in a Stencil Pack like I do.
$stencil = $app.Documents.Add("My_Network_Stencil_Pack.vss")
$backgroundborder = $stencil.Masters.Item("Background Border")
$infobar = $stencil.Masters.Item("Info Bar on Background")
$page.Name = "background"
$page.AutoSize = $false
$page.Background = $true
$page.Document.PrintLandscape = $true
$page.document.PrintFitOnPages = $true
# You would have to comment out the following two lines since you likely don't have them in a Stencil Pack like I do.
$bg = $page.Drop($backgroundborder, 5.5, 4.25) 
$bginfobar = $page.Drop($infobar, 8.0646, 0.95) 
$page.CenterDrawing
if (-Not (Test-VisioApplication) ) 
{
    Connect-VisioApplication
}
if (-Not (Test-VisioDocument) )
{
    Set-VisioDocument -Name Drawing1
}
Set-VisioPageCell -PageWidth 11.0 -PageHeight 8.5
$d = New-VisioDirectedGraph
$ATTCLOUD = $d.AddShape("ATTCLOUD","ATTCLOUD", "BASIC_U.VSS", "Rectangle")
$DEV1 = $d.AddShape("DEV1","DEV1", "BASIC_U.VSS", "Rectangle")
$DEV2 = $d.AddShape("DEV2","DEV2", "BASIC_U.VSS", "Rectangle")
$DEV3 = $d.AddShape("DEV3","DEV3", "BASIC_U.VSS", "Rectangle")
$DEV4 = $d.AddShape("DEV4","DEV4", "BASIC_U.VSS", "Rectangle")
$DEV5 = $d.AddShape("DEV5","DEV5", "BASIC_U.VSS", "Rectangle")
$XOCLOUD = $d.AddShape("XOCLOUD","XOCLOUD", "BASIC_U.VSS", "Rectangle")
$d.AddConnection("C1",$ATTCLOUD,$DEV1,"C1","Straight")
$d.AddConnection("C2",$DEV1,$DEV2,"C2","Straight")
$d.AddConnection("C3",$DEV2,$DEV3,"C3","Straight")
$d.AddConnection("C4",$DEV2,$DEV5,"C4","Straight")
$d.AddConnection("C5",$DEV2,$DEV4,"C5","Straight")
$d.AddConnection("C6",$XOCLOUD,$DEV4,"C6","Straight")
$p = New-VisioPage -Name "Site_A"  -Height 8.5 -Width 11
$d.Render($p,$options)
$shapes = Get-VisioShape *
$rectids = $shapes | where { $_.NameU -like "*rectangle*"} | Select ID
if ($rectids)
{
    Select-VisioShape $rectids.ID
    Set-VisioShapeCell -Width 0.6049 -Height 0.475
    Set-VisioPageLayout -Orientation Landscape -BackgroundPage background
    Set-VisioPageCell -PageWidth 11.0 -PageHeight 8.5
}
Select-VisioShape None

Here is a picture of what it would draw.

[Image: Dummy_Net_Diagram]

Using Perl, create a CSV from data that is in a list format

Filed under: perl — lancevermilion @ 11:40 am

I needed to convert a list of data that was already set up in a key/value format into a CSV. The problem was that the same keys did not exist in every chunk of data that made up the list. For example, given the sample list below, I would expect to get the following.

Sample List

start data chunk abc
  key1 value1
  key2 value2
start data chunk def
  key2 value2
start data chunk ghi
  key1 value1
start data chunk jkl
  key1 value1
  key2 value2

Desired Sample CSV output

start data chunk,abc,def,ghi,jkl
key1,value1,,value1,value1
key2,value2,value2,,value2
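For comparison, the same pivot can be sketched in Python (the Perl version I actually use follows below):

```python
# Pivot a key/value list into CSV: one column per chunk, blank cells
# where a chunk is missing a key.
def list_to_csv(lines):
    chunks, keys, cur = {}, [], None
    for line in lines:
        if line.startswith("start data chunk "):
            cur = line.split("start data chunk ", 1)[1]
            chunks[cur] = {}
        elif line.strip():
            k, v = line.strip().split(" ", 1)
            chunks[cur][k] = v
            if k not in keys:
                keys.append(k)
    names = sorted(chunks)
    rows = ["start data chunk," + ",".join(names)]
    for k in sorted(keys):
        rows.append(k + "," + ",".join(chunks[n].get(k, "") for n in names))
    return "\n".join(rows)

sample = """start data chunk abc
  key1 value1
  key2 value2
start data chunk def
  key2 value2
start data chunk ghi
  key1 value1
start data chunk jkl
  key1 value1
  key2 value2"""
csv_out = list_to_csv(sample.splitlines())
```

Running it on the sample list produces exactly the desired CSV above, with empty cells for the missing keys.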

I wrote some Perl code to accomplish this. It is not pretty, and I am sure people will say it can be done better, faster, and in fewer lines, but it works for my purposes. Enjoy the use of it if it helps you.

Here is the code

#!/usr/local/bin/perl
use strict;

my $filename = $ARGV[0];
my @arr = do {
    open my $fh, "<", $filename
        or die "could not open $filename: $!";
    <$fh>;
};

# Unique set of Keys
my $href_keys = {};
# List of Key1/Key2/Values in data provided
my $href_data = {};
# New list of Key/Values as an array
# If Key2 does not exist for Key1 it will be added as a blank values. This allows a proper CSV to be presented.
my $href_csv = {};

my $tmpkey = '';

for my $line (@arr)
{
  chomp($line);
  $line =~ s/\r//g; #Just in case you have CR
  $line =~ s/\n//g; #Just in case you have New Line / LF
  if ( $line =~ /^(start data chunk) (.*)/ )
  {
    $tmpkey = $2;
    $href_keys->{$tmpkey}->{$1} = $2;
    $href_data->{$1} = '';
  }
  if ( $line =~ /^\s+(.*) (value\d.*)/ )
  {
    $href_keys->{$tmpkey}->{$1} = $2;
    $href_data->{$1} = '';
  }
}


for my $href_data_k1 ( sort keys %$href_data )
{
  my $newkey = $href_data_k1;
  $newkey =~ s/(start data chunk)/_$1/g;
  for my $href_keys_k1 ( sort keys %$href_keys )
  {
    my $val = '';
    $val = $href_keys->{$href_keys_k1}->{$href_data_k1} if ( $href_keys->{$href_keys_k1}->{$href_data_k1} );
    push(@{$href_csv->{"$newkey"}}, $val);
  }
}

for my $href_csv_k1 ( sort keys %$href_csv )
{
  my $href_csv_newk1 = $href_csv_k1;
  $href_csv_newk1 =~ s/_//g;
  print "$href_csv_newk1,", join(",", @{$href_csv->{$href_csv_k1}}), "\n";
}

October 30, 2012

TFTP Server and CentOS with SELINUX set to enforcing

Filed under: linux, SELINUX, TFTP Server — lancevermilion @ 11:11 am

I had fun trying to figure out why I couldn't get a simple TFTP server to work on CentOS 6.3. I turned off iptables, turned on debug (added -d to EXTRAOPTIONS="" in /etc/sysconfig/xinetd), and checked /var/log/messages. I was still failing. While looking at the output of tcpdump I saw the TFTP connection come in, but nothing ever went back to the host requesting TFTP. I should have looked at the audit log (/var/log/audit/audit.log), but didn't because I totally spaced it.

I guess I was thinking TFTP (in.tftpd) would have already been added to the SELINUX policy. I was wrong, and found a bug ID pointing to this in FC11 (https://bugzilla.redhat.com/show_bug.cgi?id=511839). I followed the instructions in the comment from that bug ID (which I have copied below) and everything worked like a charm.

Erik Auerswald 2009-10-16 03:43:36 EDT
Well, finally I’ve got TFTP write access working with SELinux in enforcing mode.

The target directory seems to have the correct context:

# semanage fcontext -a -t tftpdir_rw_t '/var/lib/tftpboot(/.*)?'
/usr/sbin/semanage: File context for /var/lib/tftpboot(/.*)? already defined

A file created therein (by root, the dir has 0755 permissions, owner root:root) has file context tftpdir_t. If changed to tftpdir_rw_t in.tftpd can write to it:

# touch /var/lib/tftpboot/rms.cfg
# chmod 666 /var/lib/tftpboot/rms.cfg
# ls --context /var/lib/tftpboot/
-rw-rw-rw-. root root unconfined_u:object_r:tftpdir_t:s0 rms.cfg
# chcon -t tftpdir_rw_t /var/lib/tftpboot/rms.cfg

Not knowing much about SELinux I don’t know if this a bug. It is violating the principle of least surprise for people knowing only the traditional TFTP configuration/usage, i.e. without SELinux.

The directory /var/lib/tftpboot is intended to be written by in.tftpd. The files to be written to need to be manually created and set to mode 666. The selinuxtroubleshoot output could be enhanced by mentioning ‘chcon -t tftpdir_rw_t’ for files intended to be written to. IMHO this would help the administrator new to SELinux and keep the spirit of manually allowing TFTP write access only.

[Of course directories, file owners and file permissions used by in.tftpd can be changed via commandline option. The description above fits the Fedora 11 default configuration and reflects pre-SELinux best practice.]

September 12, 2012

iotop2csv – One way to graph the batch output of iotop

Filed under: linux, perl — lancevermilion @ 5:38 pm

I had a need to know what was writing to disk, how much it was writing, and how often. iotop does a pretty good job of giving you a top-like view of disk I/O activity. I decided I wanted to graph this, to hopefully make it easier to see how heavy writing to the disk might correlate with sluggish CLI response.

The output is in CSV, so you can drop it into Excel (if you wish) and create a simple PivotChart with it. Select all data from the CSV (after it is imported/converted to rows/columns using data import or text to columns), then choose Insert PivotChart.

Choose the fields timestamp, tid, read, write, and cmd.
Timestamp goes in Axis Fields (Categories).
Values, cmd, and tid go in Legend Fields.
Sum of read and Sum of write go in Values.
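If you'd rather aggregate outside Excel, the same per-command sums can be sketched in a few lines of Python, assuming the CSV header shown later in the post:

```python
import csv
import io
from collections import defaultdict

# Sum the read/write columns per command, like the Excel PivotChart does.
def sum_io_per_cmd(csv_text):
    totals = defaultdict(lambda: [0.0, 0.0])
    for row in csv.DictReader(io.StringIO(csv_text)):
        totals[row["cmd"]][0] += float(row["read"])
        totals[row["cmd"]][1] += float(row["write"])
    return dict(totals)

sample = """timestamp,tid,user,read,total read each tid,write,total write each tid,cmd,sample number,Percent of Total Read,Percent of Total Write
16:27:00,6003,mysql,0.00,0,318.04,318.04,mysqld,2,0.00,100.00
16:27:00,4959,root,0.00,0,3.74,3.74,nailslogd,2,0.00,1.16
"""
io_totals = sum_io_per_cmd(sample)
```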

I never quite got to the smoking gun (just yet) but I was able to see what applications were busy little apps writing to the disk.

I do not warranty this code in any way. Use it as you wish. If you find it useful, please write back saying so. If you find it incorrect, or you improve/fix it, please write back on that as well.

Here is the code

#!/usr/bin/perl -w
use strict;
# Enabling strict with strict refs breaks the summary output (when ctrl+c is pressed)
# because of the error Can't use string ("") as a HASH ref while "strict refs" in use.
no strict 'refs';

#Initialize an empty hashref
my $hash_ref = {};

#Catch CTRL+C and then call printsummary subroutine
$SIG{'INT'} = \&printsummary;

#Pipe the output from iotop to a file handle
open fh, "iotop -bt|" or die $!;

# Build the initial Total Value key/values
$hash_ref->{'tsample'} = 0;
$hash_ref->{'tread'} = 0;
$hash_ref->{'twrite'} = 0;

# Divider of columns
my $div = ",";

#Print the header
print "#" x 50 . "\n";
print "Output from iotop -bt\n";
print "#" x 50 . "\n";
print "timestamp" . $div . "tid" . $div . "user" . $div . "read" . $div . "total read each tid" . $div . "write" . $div . "total write each tid" . $div . "cmd" . $div . "sample number" . $div . "Percent of Total Read" . $div . "Percent of Total Write" . "\n";

#While data from iotop process it. Since we get lots of 0.00 read and write values
#don't waste time storing/printing values that are useless. Only store lines that
#have a value to read/write.
while(<fh>) {
  my ($ts,$tid,$prio,$user,$read,undef,$write,undef,undef,undef,undef,undef,$cmd) = split(/\s+/, "$_");
  chomp($cmd);

  #Make sure we fill each tid with dummy values if the tid is a real valid number
  if ( !$hash_ref->{$tid} && ( $tid =~ m/^-?\d+$/ || $tid =~ m/^-?\d+[\/|\.]\d+$/ ) )
  {
    $hash_ref->{$tid}->{'user'} = 'null';
    $hash_ref->{$tid}->{'read'} = 0;
    $hash_ref->{$tid}->{'write'} = 0;
    $hash_ref->{$tid}->{'cmd'} = 'null';
    $hash_ref->{$tid}->{'samples'} = 0;
  }

  #Continue if we have already created a key for the tid and the tid is a real valid number
  if ( $hash_ref->{$tid} && ( $tid =~ m/^-?\d+$/ || $tid =~ m/^-?\d+[\/|\.]\d+$/ ) )
  {
    $hash_ref->{$tid}->{'user'} = $user;
    $hash_ref->{$tid}->{'read'} += $read if $read =~ m/^-?\d+$/ || $read =~ m/^-?\d+[\/|\.]\d+$/;
    $hash_ref->{$tid}->{'write'} += $write if $write =~ m/^-?\d+$/ || $write =~ m/^-?\d+[\/|\.]\d+$/;
    $hash_ref->{$tid}->{'cmd'} = $cmd;
    $hash_ref->{$tid}->{'samples'}++;

    # create totals key/value pair
    $hash_ref->{'tsample'}++;
    $hash_ref->{'tread'} += $read if $read =~ m/^-?\d+$/ || $read =~ m/^-?\d+[\/|\.]\d+$/;
    $hash_ref->{'twrite'} += $write if $write =~ m/^-?\d+$/ || $write =~ m/^-?\d+[\/|\.]\d+$/;

    # Print values of the read/write as they happen
    if ( ( $read =~ m/^-?\d+$/ || $read =~ m/^-?\d+[\/|\.]\d+$/ || $write =~ m/^-?\d+$/ || $write =~ m/^-?\d+[\/|\.]\d+$/ ) && ( $read > 0 || $write > 0 ) )
    {
      my $ptr = 0;
      my $ptw = 0;
      $ptr = ( $hash_ref->{$tid}->{'read'} / $hash_ref->{'tread'} ) * 100 if ( $read > 0 ) && ( $hash_ref->{'tread'} > 0 );
      $ptw = ( $hash_ref->{$tid}->{'write'} / $hash_ref->{'twrite'} ) * 100 if ( $write > 0 ) && ( $hash_ref->{'twrite'} > 0 );

      print $ts . "$div";
      print $tid . "$div";
      print $hash_ref->{$tid}->{'user'} . "$div";
      print $read . "$div";
      print "$hash_ref->{$tid}->{'read'}" . "$div";
      print $write . "$div";
      print "$hash_ref->{$tid}->{'write'}" . "$div";
      print $hash_ref->{$tid}->{'cmd'} . "$div";
      print $hash_ref->{$tid}->{'samples'} . "$div";
      print sprintf("%.2f", $ptr) . "$div";
      print sprintf("%.2f", $ptw) . "\n";
    }
  }
}

# Close the filehandle when we are done.
close(fh);

#Sub routine to print out a summary of disk I/O.
sub printsummary {
print "#" x 50 . "\n";
print "Caught Ctrl+C. Printing Disk I/O summary.\n";
print "#" x 50 . "\n";
print "Total Bytes Read:  $hash_ref->{'tread'}\n";
print "Total Bytes Write: $hash_ref->{'twrite'}\n";
print "#" x 50 . "\n";

for my $id ( keys %$hash_ref )
  {
  if ( $hash_ref->{$id}->{'read'} > 0 || $hash_ref->{$id}->{'write'} > 0 )
    {
      my $psptr = 0;
      my $psptw = 0;
      $psptr = ( $hash_ref->{$id}->{'read'} / $hash_ref->{'tread'} ) * 100 if ( $hash_ref->{'tread'} > 0 );
      $psptw = ( $hash_ref->{$id}->{'write'} / $hash_ref->{'twrite'} ) * 100 if ( $hash_ref->{'twrite'} > 0 );

      print "$id" . " ";
      print $hash_ref->{$id}->{'user'} . " ";
      print $hash_ref->{$id}->{'read'} . " ";
      print $hash_ref->{$id}->{'write'} . " ";
      print $hash_ref->{$id}->{'cmd'} . " ";
      print $hash_ref->{$id}->{'samples'} . " ";
      print sprintf("%.2f", $psptr) . " ";
      print sprintf("%.2f", $psptw);
      print "\n"
    }
  }
  print "#" x 50 . "\n";
}

Here is the csv output

sudo perl iotop2csv.pl 
##################################################
Output from iotop -bt
##################################################
timestamp,tid,user,read,total read each tid,write,total write each tid,cmd,sample number,Percent of Total Read,Percent of Total Write
16:27:00,6003,mysql,0.00,0,318.04,318.04,mysqld,2,0.00,100.00
16:27:00,4959,root,0.00,0,3.74,3.74,nailslogd,2,0.00,1.16
16:27:00,5992,mysql,0.00,0,3.74,3.74,mysqld,2,0.00,1.15
16:27:00,5994,mysql,0.00,0,314.30,314.3,mysqld,2,0.00,49.12
16:27:01,511,root,0.00,0,101.83,101.83,[kjournald],3,0.00,13.73
16:27:01,6557,mysql,0.00,0,3.77,3.77,mysqld,3,0.00,0.51
16:27:02,6530,mysql,0.00,0,7.55,7.55,mysqld,4,0.00,1.00
16:27:03,6009,mysql,0.00,0,7.54,7.54,mysqld,5,0.00,0.99
16:27:03,25220,mysql,3.77,3.77,11.31,11.31,mysqld,5,100.00,1.47
16:27:04,6534,mysql,3.77,3.77,97.94,97.94,mysqld,6,50.00,11.26
16:27:04,1864,root,0.00,0,199.65,199.65,[kjournald],6,0.00,18.67

Here is an example graph via Excel

September 4, 2012

Example of an optimized nested SELECT statements to CASE statements in SQL (using SQLite)

Filed under: sql, SQLite — lancevermilion @ 6:26 pm

I had written a webpage that needed to do a single SQL query and return the information. The information had a key column named 'AssetnameFindingDetailURL', and computations needed to be done based on a combination of a few other columns, 'StatusDisplayOverride' and 'Severity'. To achieve this I originally just did a HUGE (1110-line) SQL query with nested SELECT statements. This proved to be very slow and almost impossible to understand. I decided it could be shortened, so I spent about an hour having a go at using a CASE statement. The results were quite good: I cut the execution time from 0.4 seconds down to 0.1 seconds, and from 1110 lines of SQL down to 110. I am sure this can be shortened even more, but it is beyond my interest at the moment to make it shorter without procedures/etc. Hopefully someone will find this useful.

How the Syntax changed

ROUND
( 
  (
    (
      (
        SELECT 
          COUNT(severity)
        FROM
          vms
        WHERE
          (StatusDisplayOverride = 'TBS-To Be Submitted' OR StatusDisplayOverride = 'S-Submitted') 
      )*100.0
    )/COUNT(severity)
  ),2
) AS 'Total_%_Done'

Was turned into

ROUND
(
 COUNT
  (
    CASE 
      WHEN (StatusDisplayOverride = 'TBS-To Be Submitted' OR StatusDisplayOverride = 'S-Submitted') THEN 
        Severity 
    END
  )*100.00 / COUNT(Severity), 2 
) AS CatAPercent,
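The trick is conditional aggregation: COUNT only counts non-NULL values, so a CASE with no ELSE yields the row for matching statuses and NULL otherwise, replacing one nested SELECT per bucket. A minimal sqlite3 demo in Python (toy table and values, not the real vms schema):

```python
import sqlite3

# Conditional aggregation: COUNT() skips NULLs, so a CASE with no ELSE
# counts only the rows that match, replacing a nested SELECT per bucket.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE vms (StatusDisplayOverride TEXT, Severity TEXT)")
conn.executemany("INSERT INTO vms VALUES (?, ?)", [
    ("S-Submitted", "1-Category I"),
    ("TBS-To Be Submitted", "1-Category I"),
    ("O-Open", "2-Category II"),
    ("S-Submitted", "2-Category II"),
])
pct, total = conn.execute("""
    SELECT ROUND(COUNT(CASE WHEN StatusDisplayOverride IN
                 ('TBS-To Be Submitted', 'S-Submitted')
                 THEN Severity END) * 100.00 / COUNT(Severity), 2),
           COUNT(Severity)
    FROM vms
""").fetchone()
```

With 3 of the 4 toy rows matching, the single pass returns 75.0 percent done out of 4 findings, with no subquery per status.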

Example output

A|54.48|525|286|100.0|64|64|0|0|29|35|53.21|389|207|47|128|60|147|21.43|70|15|36|18|12|3|0.0|2|0|2|0|0|0
B|73.91|23|17|100.0|3|3|0|0|3|0|76.92|13|10|0|2|10|0|57.14|7|4|3|0|4|0||0|0|0|0|0|0
C|41.67|12|5|100.0|5|5|0|0|5|0|0.0|3|0|3|0|0|0|0.0|3|0|3|0|0|0|0.0|1|0|1|0|0|0
D|12.5|16|2|100.0|1|1|0|0|1|0|0.0|6|0|6|0|0|0|12.5|8|1|7|0|1|0|0.0|1|0|1|0|0|0
E|87.5|8|7|100.0|1|1|0|0|1|0|85.71|7|6|0|1|6|0||0|0|0|0|0|0||0|0|0|0|0|0
F|12.5|8|1|100.0|1|1|0|0|1|0|0.0|4|0|4|0|0|0|0.0|3|0|3|0|0|0||0|0|0|0|0|0
G|100.0|13|13|100.0|2|2|0|0|2|0|100.0|9|9|0|0|9|0|100.0|2|2|0|0|2|0||0|0|0|0|0|0
H|81.4|43|35|100.0|12|12|0|0|4|8|81.48|27|22|0|5|17|5|25.0|4|1|2|1|1|0||0|0|0|0|0|0
I|37.5|48|18|100.0|11|11|0|0|3|8|25.0|28|7|18|3|2|5|0.0|9|0|9|0|0|0||0|0|0|0|0|0
J|54.81|312|171|100.0|18|18|0|0|6|12|54.28|269|146|0|117|16|130|28.0|25|7|0|17|4|3||0|0|0|0|0|0
K|40.48|42|17|100.0|10|10|0|0|3|7|30.43|23|7|16|0|0|7|0.0|9|0|9|0|0|0||0|0|0|0|0|0

Here is the Short version of the SQL (CASE statement).

SELECT
  'ALL DEVICES',
  ROUND( COUNT(CASE WHEN (StatusDisplayOverride = 'TBS-To Be Submitted' OR StatusDisplayOverride = 'S-Submitted') THEN Severity END)*100.00 / COUNT(Severity), 2 ) AS CatAPercent,
  COUNT(Severity) AS CatAT,
  COUNT( CASE WHEN (StatusDisplayOverride='S-Submitted' OR StatusDisplayOverride='TBS-To Be Submitted') THEN Severity END ) AS CatAD,
  ROUND( COUNT(CASE WHEN (StatusDisplayOverride = 'TBS-To Be Submitted' OR StatusDisplayOverride = 'S-Submitted') AND Severity='1-Category I' THEN Severity END)*100.00 / COUNT( CASE WHEN Severity='1-Category I' THEN Severity END ), 2 ) AS Cat1Percent,
  COUNT( CASE WHEN Severity='1-Category I' THEN Severity END ) AS Cat1T,
  COUNT( CASE WHEN (StatusDisplayOverride='S-Submitted' OR StatusDisplayOverride='TBS-To Be Submitted') AND Severity='1-Category I' THEN Severity END ) AS Cat1D,
  COUNT( CASE WHEN StatusDisplayOverride='O-Open' AND Severity='1-Category I' THEN Severity END ) AS Cat1O,
  COUNT( CASE WHEN StatusDisplayOverride='PR-Pending Research' AND Severity='1-Category I' THEN Severity END ) AS Cat1PR,
  COUNT( CASE WHEN StatusDisplayOverride='TBS-To Be Submitted' AND Severity='1-Category I' THEN Severity END ) AS Cat1TBS,
  COUNT( CASE WHEN StatusDisplayOverride='S-Submitted' AND Severity='1-Category I' THEN Severity END ) AS Cat1S,
  ROUND( COUNT(CASE WHEN (StatusDisplayOverride = 'TBS-To Be Submitted' OR StatusDisplayOverride = 'S-Submitted') AND Severity='2-Category II' THEN Severity END)*100.00 / COUNT( CASE WHEN Severity='2-Category II' THEN Severity END ), 2 ) AS Cat2Percent,
  COUNT( CASE WHEN Severity='2-Category II' THEN Severity END ) AS Cat2T,
  COUNT( CASE WHEN (StatusDisplayOverride='S-Submitted' OR StatusDisplayOverride='TBS-To Be Submitted' ) AND Severity='2-Category II' THEN Severity END ) AS Cat2D,
  COUNT( CASE WHEN StatusDisplayOverride='O-Open' AND Severity='2-Category II' THEN Severity END ) AS Cat2O,
  COUNT( CASE WHEN StatusDisplayOverride='PR-Pending Research' AND Severity='2-Category II' THEN Severity END ) AS Cat2PR,
  COUNT( CASE WHEN StatusDisplayOverride='TBS-To Be Submitted' AND Severity='2-Category II' THEN Severity END ) AS Cat2TBS,
  COUNT( CASE WHEN StatusDisplayOverride='S-Submitted' AND Severity='2-Category II' THEN Severity END ) AS Cat2S,
  ROUND( COUNT(CASE WHEN (StatusDisplayOverride = 'TBS-To Be Submitted' OR StatusDisplayOverride = 'S-Submitted') AND Severity='3-Category III' THEN Severity END)*100.00 / COUNT( CASE WHEN Severity='3-Category III' THEN Severity END ), 2 ) AS Cat3Percent,
  COUNT( CASE WHEN Severity='3-Category III' THEN Severity END ) AS Cat3T,
  COUNT( CASE WHEN ( StatusDisplayOverride='S-Submitted' OR StatusDisplayOverride='TBS-To Be Submitted' ) AND Severity='3-Category III' THEN Severity END ) AS Cat3D,
  COUNT( CASE WHEN StatusDisplayOverride='O-Open' AND Severity='3-Category III' THEN Severity END ) AS Cat3O,
  COUNT( CASE WHEN StatusDisplayOverride='PR-Pending Research' AND Severity='3-Category III' THEN Severity END ) AS Cat3PR,
  COUNT( CASE WHEN StatusDisplayOverride='TBS-To Be Submitted' AND Severity='3-Category III' THEN Severity END ) AS Cat3TBS,
  COUNT( CASE WHEN StatusDisplayOverride='S-Submitted' AND Severity='3-Category III' THEN Severity END ) AS Cat3S,
  ROUND( COUNT(CASE WHEN (StatusDisplayOverride = 'TBS-To Be Submitted' OR StatusDisplayOverride = 'S-Submitted') AND Severity='4-Category IV' THEN Severity END)*100.00 / COUNT( CASE WHEN Severity='4-Category IV' THEN Severity END ), 2 ) AS Cat4Percent,
  COUNT( CASE WHEN Severity='4-Category IV' THEN Severity END ) AS Cat4T,
  COUNT( CASE WHEN (StatusDisplayOverride='S-Submitted' OR StatusDisplayOverride='TBS-To Be Submitted' ) AND Severity='4-Category IV' THEN Severity END ) AS Cat4D,
  COUNT( CASE WHEN StatusDisplayOverride='O-Open' AND Severity='4-Category IV' THEN Severity END ) AS Cat4O,
  COUNT( CASE WHEN StatusDisplayOverride='PR-Pending Research' AND Severity='4-Category IV' THEN Severity END ) AS Cat4PR,
  COUNT( CASE WHEN StatusDisplayOverride='TBS-To Be Submitted' AND Severity='4-Category IV' THEN Severity END ) AS Cat4TBS,
  COUNT( CASE WHEN StatusDisplayOverride='S-Submitted' AND Severity='4-Category IV' THEN Severity END ) AS Cat4S
FROM
  vms
UNION ALL
SELECT
  UPPER(AssetNameFindingDetailURL) as 'Asset Name',
  CatAPercent,
  CatAT,
  CatAD,
  Cat1Percent,
  Cat1T,
  Cat1D,
  Cat1O,
  Cat1PR,
  Cat1TBS,
  Cat1S,
  Cat2Percent,
  Cat2T,
  Cat2D,
  Cat2O,
  Cat2PR,
  Cat2TBS,
  Cat2S,
  Cat3Percent,
  Cat3T,
  Cat3D,
  Cat3O,
  Cat3PR,
  Cat3TBS,
  Cat3S,
  Cat4Percent,
  Cat4T,
  Cat4D,
  Cat4O,
  Cat4PR,
  Cat4TBS,
  Cat4S
FROM
  (
    SELECT
      AssetNameFindingDetailURL,
      ROUND( COUNT(CASE WHEN (StatusDisplayOverride = 'TBS-To Be Submitted' OR StatusDisplayOverride = 'S-Submitted') THEN Severity END)*100.00 / COUNT(Severity), 2 ) AS CatAPercent,
      COUNT(Severity) AS CatAT,
      COUNT( CASE WHEN (StatusDisplayOverride='S-Submitted' OR StatusDisplayOverride='TBS-To Be Submitted') THEN Severity END ) AS CatAD,
      ROUND( COUNT(CASE WHEN (StatusDisplayOverride = 'TBS-To Be Submitted' OR StatusDisplayOverride = 'S-Submitted') AND Severity='1-Category I' THEN Severity END)*100.00 / COUNT( CASE WHEN Severity='1-Category I' THEN Severity END ), 2 ) AS Cat1Percent,
      COUNT( CASE WHEN Severity='1-Category I' THEN Severity END ) AS Cat1T,
      COUNT( CASE WHEN ( StatusDisplayOverride='S-Submitted' OR StatusDisplayOverride='TBS-To Be Submitted' ) AND Severity='1-Category I' THEN Severity END ) AS Cat1D,
      COUNT( CASE WHEN StatusDisplayOverride='O-Open' AND Severity='1-Category I' THEN Severity END ) AS Cat1O,
      COUNT( CASE WHEN StatusDisplayOverride='PR-Pending Research' AND Severity='1-Category I' THEN Severity END ) AS Cat1PR,
      COUNT( CASE WHEN StatusDisplayOverride='TBS-To Be Submitted' AND Severity='1-Category I' THEN Severity END ) AS Cat1TBS,
      COUNT( CASE WHEN StatusDisplayOverride='S-Submitted' AND Severity='1-Category I' THEN Severity END ) AS Cat1S,
      ROUND( COUNT(CASE WHEN (StatusDisplayOverride = 'TBS-To Be Submitted' OR StatusDisplayOverride = 'S-Submitted') AND Severity='2-Category II' THEN Severity END)*100.00 / COUNT( CASE WHEN Severity='2-Category II' THEN Severity END ), 2 ) AS Cat2Percent,
      COUNT( CASE WHEN Severity='2-Category II' THEN Severity END ) AS Cat2T,
      COUNT( CASE WHEN ( StatusDisplayOverride='S-Submitted' OR StatusDisplayOverride='TBS-To Be Submitted' ) AND Severity='2-Category II' THEN Severity END ) AS Cat2D,
      COUNT( CASE WHEN StatusDisplayOverride='O-Open' AND Severity='2-Category II' THEN Severity END ) AS Cat2O,
      COUNT( CASE WHEN StatusDisplayOverride='PR-Pending Research' AND Severity='2-Category II' THEN Severity END ) AS Cat2PR,
      COUNT( CASE WHEN StatusDisplayOverride='TBS-To Be Submitted' AND Severity='2-Category II' THEN Severity END ) AS Cat2TBS,
      COUNT( CASE WHEN StatusDisplayOverride='S-Submitted' AND Severity='2-Category II' THEN Severity END ) AS Cat2S,
      ROUND( COUNT(CASE WHEN (StatusDisplayOverride = 'TBS-To Be Submitted' OR StatusDisplayOverride = 'S-Submitted') AND Severity='3-Category III' THEN Severity END)*100.00 / COUNT( CASE WHEN Severity='3-Category III' THEN Severity END ), 2 ) AS Cat3Percent,
      COUNT( CASE WHEN Severity='3-Category III' THEN Severity END ) AS Cat3T,
      COUNT( CASE WHEN ( StatusDisplayOverride='S-Submitted' OR StatusDisplayOverride='TBS-To Be Submitted' ) AND Severity='3-Category III' THEN Severity END ) AS Cat3D,
      COUNT( CASE WHEN StatusDisplayOverride='O-Open' AND Severity='3-Category III' THEN Severity END ) AS Cat3O,
      COUNT( CASE WHEN StatusDisplayOverride='PR-Pending Research' AND Severity='3-Category III' THEN Severity END ) AS Cat3PR,
      COUNT( CASE WHEN StatusDisplayOverride='TBS-To Be Submitted' AND Severity='3-Category III' THEN Severity END ) AS Cat3TBS,
      COUNT( CASE WHEN StatusDisplayOverride='S-Submitted' AND Severity='3-Category III' THEN Severity END ) AS Cat3S,
      ROUND( COUNT(CASE WHEN (StatusDisplayOverride = 'TBS-To Be Submitted' OR StatusDisplayOverride = 'S-Submitted') AND Severity='4-Category IV' THEN Severity END)*100.00 / COUNT( CASE WHEN Severity='4-Category IV' THEN Severity END ), 2 ) AS Cat4Percent,
      COUNT( CASE WHEN Severity='4-Category IV' THEN Severity END ) AS Cat4T,
      COUNT( CASE WHEN ( StatusDisplayOverride='S-Submitted' OR StatusDisplayOverride='TBS-To Be Submitted' ) AND Severity='4-Category IV' THEN Severity END ) AS Cat4D,
      COUNT( CASE WHEN StatusDisplayOverride='O-Open' AND Severity='4-Category IV' THEN Severity END ) AS Cat4O,
      COUNT( CASE WHEN StatusDisplayOverride='PR-Pending Research' AND Severity='4-Category IV' THEN Severity END ) AS Cat4PR,
      COUNT( CASE WHEN StatusDisplayOverride='TBS-To Be Submitted' AND Severity='4-Category IV' THEN Severity END ) AS Cat4TBS,
      COUNT( CASE WHEN StatusDisplayOverride='S-Submitted' AND Severity='4-Category IV' THEN Severity END ) AS Cat4S
    FROM
      vms
    GROUP BY
      AssetNameFindingDetailURL
  ) AS ABC;
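The trick the short version relies on is conditional aggregation: a `CASE` expression inside `COUNT()` returns `NULL` for rows that don't match, and `COUNT()` skips NULLs, so one pass over the table yields all the per-status and per-severity buckets. A minimal sketch of that pattern, assuming SQLite and a `vms` table reduced to the three columns the query actually touches (the sample hostnames are made up):

```python
import sqlite3

# Build a tiny in-memory vms table and compute the Cat I roll-up the
# same way the short version does: CASE expressions inside COUNT().
con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE vms (AssetNameFindingDetailURL TEXT, Severity TEXT, StatusDisplayOverride TEXT)")
con.executemany("INSERT INTO vms VALUES (?,?,?)", [
    ("host1", "1-Category I",  "S-Submitted"),
    ("host1", "1-Category I",  "O-Open"),
    ("host2", "1-Category I",  "TBS-To Be Submitted"),
    ("host2", "2-Category II", "O-Open"),
])
row = con.execute("""
    SELECT
      -- IN (...) is shorthand for the OR pair used in the post
      ROUND(COUNT(CASE WHEN StatusDisplayOverride IN ('TBS-To Be Submitted','S-Submitted')
                        AND Severity='1-Category I' THEN 1 END) * 100.00
            / COUNT(CASE WHEN Severity='1-Category I' THEN 1 END), 2) AS Cat1Percent,
      COUNT(CASE WHEN Severity='1-Category I' THEN 1 END) AS Cat1T
    FROM vms
""").fetchone()
print(row)  # 2 of the 3 Cat I findings are done: (66.67, 3)
```

Because everything happens in one scan, adding another bucket costs only another `CASE`, not another subquery.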

Here is the long version of the SQL (correlated, nested SELECT statements).

SELECT
  '*ALL DEVICES*' as 'Asset Name',
  round( (
    (
      (
        SELECT
          count(severity)
        FROM
          vms
        WHERE
          (StatusDisplayOverride = 'TBS-To Be Submitted' OR StatusDisplayOverride = 'S-Submitted')
      )*100.0
    )/count(severity)
  ),2) as 'Total_%_Done',
  count(severity) as Total,
  (
      SELECT
        count(severity)
      FROM
        vms
      WHERE
        (StatusDisplayOverride = 'TBS-To Be Submitted' OR StatusDisplayOverride = 'S-Submitted')
  ) as 'Total_Done',
  round( (
    (
      (
        SELECT
          count(severity)
        FROM
          vms
        WHERE
          (StatusDisplayOverride = 'TBS-To Be Submitted' OR StatusDisplayOverride = 'S-Submitted') AND Severity = '1-Category I'
      )*100.0
    )/ (
     SELECT count(severity) FROM vms WHERE Severity = '1-Category I'
  )
  ),2) as '%_Done_Cat1',
  (
    SELECT
      count(severity)
    FROM
      vms
    WHERE
      Severity = '1-Category I'
  ) as Total_Cat1s,
  (
      SELECT
        count(severity)
      FROM
        vms
      WHERE
        (StatusDisplayOverride = 'TBS-To Be Submitted' OR StatusDisplayOverride = 'S-Submitted') AND Severity = '1-Category I'
  ) as 'Done_Cat1',
  (
    SELECT
      count(severity)
    FROM
      vms
    WHERE
      Severity = '1-Category I' AND StatusDisplayOverride = 'O-Open'
  ) as O_Cat1,
  (
    SELECT
      count(severity)
    FROM
      vms
    WHERE
      Severity = '1-Category I' AND StatusDisplayOverride = 'PR-Pending Research'
  ) as PR_Cat1,
  (
    SELECT
      count(severity)
    FROM
      vms
    WHERE
      Severity = '1-Category I' AND StatusDisplayOverride = 'TBS-To Be Submitted'
  ) as TBS_Cat1,
  (
    SELECT
      count(severity)
    FROM
      vms
    WHERE
      Severity = '1-Category I' AND StatusDisplayOverride = 'S-Submitted'
  ) as S_Cat1,
  round( (
    (
      (
        SELECT
          count(severity)
        FROM
          vms
        WHERE
          (StatusDisplayOverride = 'TBS-To Be Submitted' OR StatusDisplayOverride = 'S-Submitted') AND Severity = '2-Category II'
      )*100.0
    )/ (
     SELECT count(severity) FROM vms WHERE Severity = '2-Category II'
  )
  ),2) as '%_Done_Cat2',
  (
    SELECT
      count(severity)
    FROM
      vms
    WHERE
      Severity = '2-Category II'
  ) as Total_Cat2s,
  (
      SELECT
        count(severity)
      FROM
        vms
      WHERE
        (StatusDisplayOverride = 'TBS-To Be Submitted' OR StatusDisplayOverride = 'S-Submitted') AND Severity = '2-Category II'
  ) as 'Done_Cat2',
  (
    SELECT
      count(severity)
    FROM
      vms
    WHERE
      Severity = '2-Category II' AND StatusDisplayOverride = 'O-Open'
  ) as O_Cat2,
  (
    SELECT
      count(severity)
    FROM
      vms
    WHERE
      Severity = '2-Category II' AND StatusDisplayOverride = 'PR-Pending Research'
  ) as PR_Cat2,
  (
    SELECT
      count(severity)
    FROM
      vms
    WHERE
      Severity = '2-Category II' AND StatusDisplayOverride = 'TBS-To Be Submitted'
  ) as TBS_Cat2,
  (
    SELECT
      count(severity)
    FROM
      vms
    WHERE
      Severity = '2-Category II' AND StatusDisplayOverride = 'S-Submitted'
  ) as S_Cat2,
  round( (
    (
      (
        SELECT
          count(severity)
        FROM
          vms
        WHERE
          (StatusDisplayOverride = 'TBS-To Be Submitted' OR StatusDisplayOverride = 'S-Submitted') AND Severity = '3-Category III'
      )*100.0
    )/ (
     SELECT count(severity) FROM vms WHERE Severity = '3-Category III'
  )
  ),2) as '%_Done_Cat3',
  (
    SELECT
      count(severity)
    FROM
      vms
    WHERE
      Severity = '3-Category III'
  ) as Total_Cat3s,
  (
      SELECT
        count(severity)
      FROM
        vms
      WHERE
        (StatusDisplayOverride = 'TBS-To Be Submitted' OR StatusDisplayOverride = 'S-Submitted') AND Severity = '3-Category III'
  ) as 'Done_Cat3',
  (
    SELECT
      count(severity)
    FROM
      vms
    WHERE
      Severity = '3-Category III' AND StatusDisplayOverride = 'O-Open'
  ) as O_Cat3,
  (
    SELECT
      count(severity)
    FROM
      vms
    WHERE
      Severity = '3-Category III' AND StatusDisplayOverride = 'PR-Pending Research'
  ) as PR_Cat3,
  (
    SELECT
      count(severity)
    FROM
      vms
    WHERE
      Severity = '3-Category III' AND StatusDisplayOverride = 'TBS-To Be Submitted'
  ) as TBS_Cat3,
  (
    SELECT
      count(severity)
    FROM
      vms
    WHERE
      Severity = '3-Category III' AND StatusDisplayOverride = 'S-Submitted'
  ) as S_Cat3,
  round( (
    (
      (
        SELECT
          count(severity)
        FROM
          vms
        WHERE
          (StatusDisplayOverride = 'TBS-To Be Submitted' OR StatusDisplayOverride = 'S-Submitted') AND Severity = '4-Category IV'
      )*100.0
    )/ (
     SELECT count(severity) FROM vms WHERE Severity = '4-Category IV'
  )
  ),2) as '%_Done_Cat4',
  (
    SELECT
      count(severity)
    FROM
      vms
    WHERE
      Severity = '4-Category IV'
  ) as Total_Cat4s,
  (
      SELECT
        count(severity)
      FROM
        vms
      WHERE
        (StatusDisplayOverride = 'TBS-To Be Submitted' OR StatusDisplayOverride = 'S-Submitted') AND Severity = '4-Category IV'
  ) as 'Done_Cat4',
  (
    SELECT
      count(severity)
    FROM
      vms
    WHERE
      Severity = '4-Category IV' AND StatusDisplayOverride = 'O-Open'
  ) as O_Cat4,
  (
    SELECT
      count(severity)
    FROM
      vms
    WHERE
      Severity = '4-Category IV' AND StatusDisplayOverride = 'PR-Pending Research'
  ) as PR_Cat4,
  (
    SELECT
      count(severity)
    FROM
      vms
    WHERE
      Severity = '4-Category IV' AND StatusDisplayOverride = 'TBS-To Be Submitted'
  ) as TBS_Cat4,
  (
    SELECT
      count(severity)
    FROM
      vms
    WHERE
      Severity = '4-Category IV' AND StatusDisplayOverride = 'S-Submitted'
  ) as S_Cat4
FROM
  vms
UNION ALL
SELECT
  UPPER(AssetNameFindingDetailURL) as 'Asset Name',
  round( (
    (
      (
        SELECT
          count(severity)
        FROM
          vms
        WHERE
          (StatusDisplayOverride = 'TBS-To Be Submitted' OR StatusDisplayOverride = 'S-Submitted') AND vv.AssetNameFindingDetailURL = AssetNameFindingDetailURL
      )*100.0
    )/count(severity)
  ),2) as 'Total_%_Done',
  count(severity) as Total,
  (
      SELECT
        count(severity)
      FROM
        vms
      WHERE
        (StatusDisplayOverride = 'TBS-To Be Submitted' OR StatusDisplayOverride = 'S-Submitted') AND vv.AssetNameFindingDetailURL = AssetNameFindingDetailURL
  ) as 'Total_Done',
  round( (
    (
      (
        SELECT
          count(severity)
        FROM
          vms
        WHERE
          (StatusDisplayOverride = 'TBS-To Be Submitted' OR StatusDisplayOverride = 'S-Submitted') AND Severity = '1-Category I' AND vv.AssetNameFindingDetailURL = AssetNameFindingDetailURL
      )*100.0
    )/ (
     SELECT count(severity) FROM vms WHERE Severity = '1-Category I' AND vv.AssetNameFindingDetailURL = AssetNameFindingDetailURL
  )
  ),2) as '%_Done_Cat1',
  (
    SELECT
      count(severity)
    FROM
      vms
    WHERE
      Severity = '1-Category I' AND vv.AssetNameFindingDetailURL = AssetNameFindingDetailURL
  ) as Total_Cat1s,
  (
      SELECT
        count(severity)
      FROM
        vms
      WHERE
        (StatusDisplayOverride = 'TBS-To Be Submitted' OR StatusDisplayOverride = 'S-Submitted') AND Severity = '1-Category I' AND vv.AssetNameFindingDetailURL = AssetNameFindingDetailURL
  ) as 'Done_Cat1',
  (
    SELECT
      count(severity)
    FROM
      vms
    WHERE
      Severity = '1-Category I' AND StatusDisplayOverride = 'O-Open' AND vv.AssetNameFindingDetailURL = AssetNameFindingDetailURL
  ) as O_Cat1,
  (
    SELECT
      count(severity)
    FROM
      vms
    WHERE
      Severity = '1-Category I' AND StatusDisplayOverride = 'PR-Pending Research' AND vv.AssetNameFindingDetailURL = AssetNameFindingDetailURL
  ) as PR_Cat1,
  (
    SELECT
      count(severity)
    FROM
      vms
    WHERE
      Severity = '1-Category I' AND StatusDisplayOverride = 'TBS-To Be Submitted' AND vv.AssetNameFindingDetailURL = AssetNameFindingDetailURL
  ) as TBS_Cat1,
  (
    SELECT
      count(severity)
    FROM
      vms
    WHERE
      Severity = '1-Category I' AND StatusDisplayOverride = 'S-Submitted' AND vv.AssetNameFindingDetailURL = AssetNameFindingDetailURL
  ) as S_Cat1,
  round( (
    (
      (
        SELECT
          count(severity)
        FROM
          vms
        WHERE
          (StatusDisplayOverride = 'TBS-To Be Submitted' OR StatusDisplayOverride = 'S-Submitted') AND Severity = '2-Category II' AND vv.AssetNameFindingDetailURL = AssetNameFindingDetailURL
      )*100.0
    )/ (
     SELECT count(severity) FROM vms WHERE Severity = '2-Category II' AND vv.AssetNameFindingDetailURL = AssetNameFindingDetailURL
  )
  ),2) as '%_Done_Cat2',
  (
    SELECT
      count(severity)
    FROM
      vms
    WHERE
      Severity = '2-Category II' AND vv.AssetNameFindingDetailURL = AssetNameFindingDetailURL
  ) as Total_Cat2s,
  (
      SELECT
        count(severity)
      FROM
        vms
      WHERE
        (StatusDisplayOverride = 'TBS-To Be Submitted' OR StatusDisplayOverride = 'S-Submitted') AND Severity = '2-Category II' AND vv.AssetNameFindingDetailURL = AssetNameFindingDetailURL
  ) as 'Done_Cat2',
  (
    SELECT
      count(severity)
    FROM
      vms
    WHERE
      Severity = '2-Category II' AND StatusDisplayOverride = 'O-Open' AND vv.AssetNameFindingDetailURL = AssetNameFindingDetailURL
  ) as O_Cat2,
  (
    SELECT
      count(severity)
    FROM
      vms
    WHERE
      Severity = '2-Category II' AND StatusDisplayOverride = 'PR-Pending Research' AND vv.AssetNameFindingDetailURL = AssetNameFindingDetailURL
  ) as PR_Cat2,
  (
    SELECT
      count(severity)
    FROM
      vms
    WHERE
      Severity = '2-Category II' AND StatusDisplayOverride = 'TBS-To Be Submitted' AND vv.AssetNameFindingDetailURL = AssetNameFindingDetailURL
  ) as TBS_Cat2,
  (
    SELECT
      count(severity)
    FROM
      vms
    WHERE
      Severity = '2-Category II' AND StatusDisplayOverride = 'S-Submitted' AND vv.AssetNameFindingDetailURL = AssetNameFindingDetailURL
  ) as S_Cat2,
  round( (
    (
      (
        SELECT
          count(severity)
        FROM
          vms
        WHERE
          (StatusDisplayOverride = 'TBS-To Be Submitted' OR StatusDisplayOverride = 'S-Submitted') AND Severity = '3-Category III' AND vv.AssetNameFindingDetailURL = AssetNameFindingDetailURL
      )*100.0
    )/ (
     SELECT count(severity) FROM vms WHERE Severity = '3-Category III' AND vv.AssetNameFindingDetailURL = AssetNameFindingDetailURL
  )
  ),2) as '%_Done_Cat3',
  (
    SELECT
      count(severity)
    FROM
      vms
    WHERE
      Severity = '3-Category III' AND vv.AssetNameFindingDetailURL = AssetNameFindingDetailURL
  ) as Total_Cat3s,
  (
      SELECT
        count(severity)
      FROM
        vms
      WHERE
        (StatusDisplayOverride = 'TBS-To Be Submitted' OR StatusDisplayOverride = 'S-Submitted') AND Severity = '3-Category III' AND vv.AssetNameFindingDetailURL = AssetNameFindingDetailURL
  ) as 'Done_Cat3',
  (
    SELECT
      count(severity)
    FROM
      vms
    WHERE
      Severity = '3-Category III' AND StatusDisplayOverride = 'O-Open' AND vv.AssetNameFindingDetailURL = AssetNameFindingDetailURL
  ) as O_Cat3,
  (
    SELECT
      count(severity)
    FROM
      vms
    WHERE
      Severity = '3-Category III' AND StatusDisplayOverride = 'PR-Pending Research' AND vv.AssetNameFindingDetailURL = AssetNameFindingDetailURL
  ) as PR_Cat3,
  (
    SELECT
      count(severity)
    FROM
      vms
    WHERE
      Severity = '3-Category III' AND StatusDisplayOverride = 'TBS-To Be Submitted' AND vv.AssetNameFindingDetailURL = AssetNameFindingDetailURL
  ) as TBS_Cat3,
  (
    SELECT
      count(severity)
    FROM
      vms
    WHERE
      Severity = '3-Category III' AND StatusDisplayOverride = 'S-Submitted' AND vv.AssetNameFindingDetailURL = AssetNameFindingDetailURL
  ) as S_Cat3,
  round( (
    (
      (
        SELECT
          count(severity)
        FROM
          vms
        WHERE
          (StatusDisplayOverride = 'TBS-To Be Submitted' OR StatusDisplayOverride = 'S-Submitted') AND Severity = '4-Category IV' AND vv.AssetNameFindingDetailURL = AssetNameFindingDetailURL
      )*100.0
    )/ (
     SELECT count(severity) FROM vms WHERE Severity = '4-Category IV' AND vv.AssetNameFindingDetailURL = AssetNameFindingDetailURL
  )
  ),2) as '%_Done_Cat4',
  (
    SELECT
      count(severity)
    FROM
      vms
    WHERE
      Severity = '4-Category IV' AND vv.AssetNameFindingDetailURL = AssetNameFindingDetailURL
  ) as Total_Cat4s,
  (
      SELECT
        count(severity)
      FROM
        vms
      WHERE
        (StatusDisplayOverride = 'TBS-To Be Submitted' OR StatusDisplayOverride = 'S-Submitted') AND Severity = '4-Category IV' AND vv.AssetNameFindingDetailURL = AssetNameFindingDetailURL
  ) as 'Done_Cat4',
  (
    SELECT
      count(severity)
    FROM
      vms
    WHERE
      Severity = '4-Category IV' AND StatusDisplayOverride = 'O-Open' AND vv.AssetNameFindingDetailURL = AssetNameFindingDetailURL
  ) as O_Cat4,
  (
    SELECT
      count(severity)
    FROM
      vms
    WHERE
      Severity = '4-Category IV' AND StatusDisplayOverride = 'PR-Pending Research' AND vv.AssetNameFindingDetailURL = AssetNameFindingDetailURL
  ) as PR_Cat4,
  (
    SELECT
      count(severity)
    FROM
      vms
    WHERE
      Severity = '4-Category IV' AND StatusDisplayOverride = 'TBS-To Be Submitted' AND vv.AssetNameFindingDetailURL = AssetNameFindingDetailURL
  ) as TBS_Cat4,
  (
    SELECT
      count(severity)
    FROM
      vms
    WHERE
      Severity = '4-Category IV' AND StatusDisplayOverride = 'S-Submitted' AND vv.AssetNameFindingDetailURL = AssetNameFindingDetailURL
  ) as S_Cat4
FROM
  vms as vv
GROUP BY AssetNameFindingDetailURL;
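Each inner SELECT in the long version is a correlated subquery: it references the outer table's alias (`vv`), so it is re-evaluated once per output row, and the outer `FROM` must declare exactly that alias or SQLite fails with `no such column: vv.AssetNameFindingDetailURL`. A stripped-down sketch of the pattern, with a made-up two-column table:

```python
import sqlite3

# Correlated-subquery pattern from the long version: the inner SELECT
# refers to the outer alias vv. If the outer FROM were "vms AS vw"
# instead, vv.* would be unresolvable and SQLite would raise
# OperationalError: no such column: vv.AssetNameFindingDetailURL.
con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE vms (AssetNameFindingDetailURL TEXT, Severity TEXT)")
con.executemany("INSERT INTO vms VALUES (?,?)", [
    ("host1", "1-Category I"),
    ("host1", "2-Category II"),
    ("host2", "1-Category I"),
])
rows = con.execute("""
    SELECT
      AssetNameFindingDetailURL,
      (SELECT COUNT(*) FROM vms
        WHERE Severity = '1-Category I'
          AND vv.AssetNameFindingDetailURL = AssetNameFindingDetailURL) AS Cat1T
    FROM vms AS vv
    GROUP BY AssetNameFindingDetailURL
""").fetchall()
print(sorted(rows))  # [('host1', 1), ('host2', 1)]
```

This is also why the short version scales better: the long version re-scans `vms` for every column of every row, while the `CASE`-inside-`COUNT` form does one pass.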

Array
(
    [0] => Array
        (
            [file] => /var/www/html/itdb/php/reports.php
            [line] => 811
            [function] => db_execute
            [args] => Array
                (
                    [0] => PDO Object
                        (
                        )

                    [1] => SELECT
  '*ALL DEVICES*' as 'Asset Name',
  round( (
    (
      (
        SELECT
          count(severity)
        FROM
          vms
        WHERE
          (StatusDisplayOverride = 'TBS-To Be Submitted' OR StatusDisplayOverride = 'S-Submitted')
      )*100.0
    )/count(severity)
  ),2) as 'Total_%_Done',
  count(severity) as Total,
  (
      SELECT
        count(severity)
      FROM
        vms
      WHERE
        (StatusDisplayOverride = 'TBS-To Be Submitted' OR StatusDisplayOverride = 'S-Submitted')
  ) as 'Total_Done',
  round( (
    (
      (
        SELECT
          count(severity)
        FROM
          vms
        WHERE
          (StatusDisplayOverride = 'TBS-To Be Submitted' OR StatusDisplayOverride = 'S-Submitted') AND Severity = '1-Category I'
      )*100.0
    )/ (
     SELECT count(severity) FROM vms WHERE Severity = '1-Category I'
  )
  ),2) as '%_Done_Cat1',
  (
    SELECT
      count(severity)
    FROM
      vms
    WHERE
      Severity = '1-Category I'
  ) as Total_Cat1s,
  (
      SELECT
        count(severity)
      FROM
        vms
      WHERE
        (StatusDisplayOverride = 'TBS-To Be Submitted' OR StatusDisplayOverride = 'S-Submitted') AND Severity = '1-Category I'
  ) as 'Done_Cat1',
  (
    SELECT
      count(severity)
    FROM
      vms
    WHERE
      Severity = '1-Category I' AND StatusDisplayOverride = 'O-Open'
  ) as O_Cat1,
  (
    SELECT
      count(severity)
    FROM
      vms
    WHERE
      Severity = '1-Category I' AND StatusDisplayOverride = 'PR-Pending Research'
  ) as PR_Cat1,
  (
    SELECT
      count(severity)
    FROM
      vms
    WHERE
      Severity = '1-Category I' AND StatusDisplayOverride = 'TBS-To Be Submitted'
  ) as TBS_Cat1,
  (
    SELECT
      count(severity)
    FROM
      vms
    WHERE
      Severity = '1-Category I' AND StatusDisplayOverride = 'S-Submitted'
  ) as S_Cat1,
  round( (
    (
      (
        SELECT
          count(severity)
        FROM
          vms
        WHERE
          (StatusDisplayOverride = 'TBS-To Be Submitted' OR StatusDisplayOverride = 'S-Submitted') AND Severity = '2-Category II'
      )*100.0
    )/ (
     SELECT count(severity) FROM vms WHERE Severity = '2-Category II'
  )
  ),2) as '%_Done_Cat2',
  (
    SELECT
      count(severity)
    FROM
      vms
    WHERE
      Severity = '2-Category II'
  ) as Total_Cat2s,
  (
      SELECT
        count(severity)
      FROM
        vms
      WHERE
        (StatusDisplayOverride = 'TBS-To Be Submitted' OR StatusDisplayOverride = 'S-Submitted') AND Severity = '2-Category II'
  ) as 'Done_Cat2',
  (
    SELECT
      count(severity)
    FROM
      vms
    WHERE
      Severity = '2-Category II' AND StatusDisplayOverride = 'O-Open'
  ) as O_Cat2,
  (
    SELECT
      count(severity)
    FROM
      vms
    WHERE
      Severity = '2-Category II' AND StatusDisplayOverride = 'PR-Pending Research'
  ) as PR_Cat2,
  (
    SELECT
      count(severity)
    FROM
      vms
    WHERE
      Severity = '2-Category II' AND StatusDisplayOverride = 'TBS-To Be Submitted'
  ) as TBS_Cat2,
  (
    SELECT
      count(severity)
    FROM
      vms
    WHERE
      Severity = '2-Category II' AND StatusDisplayOverride = 'S-Submitted'
  ) as S_Cat2,
  round( (
    (
      (
        SELECT
          count(severity)
        FROM
          vms
        WHERE
          (StatusDisplayOverride = 'TBS-To Be Submitted' OR StatusDisplayOverride = 'S-Submitted') AND Severity = '3-Category III'
      )*100.0
    )/ (
     SELECT count(severity) FROM vms WHERE Severity = '3-Category III'
  )
  ),2) as '%_Done_Cat3',
  (
    SELECT
      count(severity)
    FROM
      vms
    WHERE
      Severity = '3-Category III'
  ) as Total_Cat3s,
  (
      SELECT
        count(severity)
      FROM
        vms
      WHERE
        (StatusDisplayOverride = 'TBS-To Be Submitted' OR StatusDisplayOverride = 'S-Submitted') AND Severity = '3-Category III'
  ) as 'Done_Cat3',
  (
    SELECT
      count(severity)
    FROM
      vms
    WHERE
      Severity = '3-Category III' AND StatusDisplayOverride = 'O-Open'
  ) as O_Cat3,
  (
    SELECT
      count(severity)
    FROM
      vms
    WHERE
      Severity = '3-Category III' AND StatusDisplayOverride = 'PR-Pending Research'
  ) as PR_Cat3,
  (
    SELECT
      count(severity)
    FROM
      vms
    WHERE
      Severity = '3-Category III' AND StatusDisplayOverride = 'TBS-To Be Submitted'
  ) as TBS_Cat3,
  (
    SELECT
      count(severity)
    FROM
      vms
    WHERE
      Severity = '3-Category III' AND StatusDisplayOverride = 'S-Submitted'
  ) as S_Cat3,
  round( (
    (
      (
        SELECT
          count(severity)
        FROM
          vms
        WHERE
          (StatusDisplayOverride = 'TBS-To Be Submitted' OR StatusDisplayOverride = 'S-Submitted') AND Severity = '4-Category IV'
      )*100.0
    )/ (
     SELECT count(severity) FROM vms WHERE Severity = '4-Category IV'
  )
  ),2) as '%_Done_Cat4',
  (
    SELECT
      count(severity)
    FROM
      vms
    WHERE
      Severity = '4-Category IV'
  ) as Total_Cat4s,
  (
      SELECT
        count(severity)
      FROM
        vms
      WHERE
        (StatusDisplayOverride = 'TBS-To Be Submitted' OR StatusDisplayOverride = 'S-Submitted') AND Severity = '4-Category IV'
  ) as 'Done_Cat4',
  (
    SELECT
      count(severity)
    FROM
      vms
    WHERE
      Severity = '4-Category IV' AND StatusDisplayOverride = 'O-Open'
  ) as O_Cat4,
  (
    SELECT
      count(severity)
    FROM
      vms
    WHERE
      Severity = '4-Category IV' AND StatusDisplayOverride = 'PR-Pending Research'
  ) as PR_Cat4,
  (
    SELECT
      count(severity)
    FROM
      vms
    WHERE
      Severity = '4-Category IV' AND StatusDisplayOverride = 'TBS-To Be Submitted'
  ) as TBS_Cat4,
  (
    SELECT
      count(severity)
    FROM
      vms
    WHERE
      Severity = '4-Category IV' AND StatusDisplayOverride = 'S-Submitted'
  ) as S_Cat4
FROM
  vms
UNION ALL
SELECT
  UPPER(AssetNameFindingDetailURL) as 'Asset Name',
  round( (
    (
      (
        SELECT
          count(severity)
        FROM
          vms
        WHERE
          (StatusDisplayOverride = 'TBS-To Be Submitted' OR StatusDisplayOverride = 'S-Submitted') AND vv.AssetNameFindingDetailURL = AssetNameFindingDetailURL
      )*100.0
    )/count(severity)
  ),2) as 'Total_%_Done',
  count(severity) as Total,
  (
      SELECT
        count(severity)
      FROM
        vms
      WHERE
        (StatusDisplayOverride = 'TBS-To Be Submitted' OR StatusDisplayOverride = 'S-Submitted') AND vv.AssetNameFindingDetailURL = AssetNameFindingDetailURL
  ) as 'Total_Done',
  round( (
    (
      (
        SELECT
          count(severity)
        FROM
          vms
        WHERE
          (StatusDisplayOverride = 'TBS-To Be Submitted' OR StatusDisplayOverride = 'S-Submitted') AND Severity = '1-Category I' AND vv.AssetNameFindingDetailURL = AssetNameFindingDetailURL
      )*100.0
    )/ (
     SELECT count(severity) FROM vms WHERE Severity = '1-Category I' AND vv.AssetNameFindingDetailURL = AssetNameFindingDetailURL
  )
  ),2) as '%_Done_Cat1',
  (
    SELECT
      count(severity)
    FROM
      vms
    WHERE
      Severity = '1-Category I' AND vv.AssetNameFindingDetailURL = AssetNameFindingDetailURL
  ) as Total_Cat1s,
  (
      SELECT
        count(severity)
      FROM
        vms
      WHERE
        (StatusDisplayOverride = 'TBS-To Be Submitted' OR StatusDisplayOverride = 'S-Submitted') AND Severity = '1-Category I' AND vv.AssetNameFindingDetailURL = AssetNameFindingDetailURL
  ) as 'Done_Cat1',
  (
    SELECT
      count(severity)
    FROM
      vms
    WHERE
      Severity = '1-Category I' AND StatusDisplayOverride = 'O-Open' AND vv.AssetNameFindingDetailURL = AssetNameFindingDetailURL
  ) as O_Cat1,
  (
    SELECT
      count(severity)
    FROM
      vms
    WHERE
      Severity = '1-Category I' AND StatusDisplayOverride = 'PR-Pending Research' AND vv.AssetNameFindingDetailURL = AssetNameFindingDetailURL
  ) as PR_Cat1,
  (
    SELECT
      count(severity)
    FROM
      vms
    WHERE
      Severity = '1-Category I' AND StatusDisplayOverride = 'TBS-To Be Submitted' AND vv.AssetNameFindingDetailURL = AssetNameFindingDetailURL
  ) as TBS_Cat1,
  (
    SELECT
      count(severity)
    FROM
      vms
    WHERE
      Severity = '1-Category I' AND StatusDisplayOverride = 'S-Submitted' AND vv.AssetNameFindingDetailURL = AssetNameFindingDetailURL
  ) as S_Cat1,
  round( (
    (
      (
        SELECT
          count(severity)
        FROM
          vms
        WHERE
          (StatusDisplayOverride = 'TBS-To Be Submitted' OR StatusDisplayOverride = 'S-Submitted') AND Severity = '2-Category II' AND vv.AssetNameFindingDetailURL = AssetNameFindingDetailURL
      )*100.0
    )/ (
     SELECT count(severity) FROM vms WHERE Severity = '2-Category II' AND vv.AssetNameFindingDetailURL = AssetNameFindingDetailURL
  )
  ),2) as '%_Done_Cat2',
  (
    SELECT
      count(severity)
    FROM
      vms
    WHERE
      Severity = '2-Category II' AND vv.AssetNameFindingDetailURL = AssetNameFindingDetailURL
  ) as Total_Cat2s,
  (
      SELECT
        count(severity)
      FROM
        vms
      WHERE
        (StatusDisplayOverride = 'TBS-To Be Submitted' OR StatusDisplayOverride = 'S-Submitted') AND Severity = '2-Category II' AND vv.AssetNameFindingDetailURL = AssetNameFindingDetailURL
  ) as 'Done_Cat2',
  (
    SELECT
      count(severity)
    FROM
      vms
    WHERE
      Severity = '2-Category II' AND StatusDisplayOverride = 'O-Open' AND vv.AssetNameFindingDetailURL = AssetNameFindingDetailURL
  ) as O_Cat2,
  (
    SELECT
      count(severity)
    FROM
      vms
    WHERE
      Severity = '2-Category II' AND StatusDisplayOverride = 'PR-Pending Research' AND vv.AssetNameFindingDetailURL = AssetNameFindingDetailURL
  ) as PR_Cat2,
  (
    SELECT
      count(severity)
    FROM
      vms
    WHERE
      Severity = '2-Category II' AND StatusDisplayOverride = 'TBS-To Be Submitted' AND vv.AssetNameFindingDetailURL = AssetNameFindingDetailURL
  ) as TBS_Cat2,
  (
    SELECT
      count(severity)
    FROM
      vms
    WHERE
      Severity = '2-Category II' AND StatusDisplayOverride = 'S-Submitted' AND vv.AssetNameFindingDetailURL = AssetNameFindingDetailURL
  ) as S_Cat2,
  round( (
    (
      (
        SELECT
          count(severity)
        FROM
          vms
        WHERE
          (StatusDisplayOverride = 'TBS-To Be Submitted' OR StatusDisplayOverride = 'S-Submitted') AND Severity = '3-Category III' AND vv.AssetNameFindingDetailURL = AssetNameFindingDetailURL
      )*100.0
    )/ (
     SELECT count(severity) FROM vms WHERE Severity = '3-Category III' AND vv.AssetNameFindingDetailURL = AssetNameFindingDetailURL
  )
  ),2) as '%_Done_Cat3',
  (
    SELECT
      count(severity)
    FROM
      vms
    WHERE
      Severity = '3-Category III' AND vv.AssetNameFindingDetailURL = AssetNameFindingDetailURL
  ) as Total_Cat3s,
  (
      SELECT
        count(severity)
      FROM
        vms
      WHERE
        (StatusDisplayOverride = 'TBS-To Be Submitted' OR StatusDisplayOverride = 'S-Submitted') AND Severity = '3-Category III' AND vv.AssetNameFindingDetailURL = AssetNameFindingDetailURL
  ) as 'Done_Cat3',
  (
    SELECT
      count(severity)
    FROM
      vms
    WHERE
      Severity = '3-Category III' AND StatusDisplayOverride = 'O-Open' AND vv.AssetNameFindingDetailURL = AssetNameFindingDetailURL
  ) as O_Cat3,
  (
    SELECT
      count(severity)
    FROM
      vms
    WHERE
      Severity = '3-Category III' AND StatusDisplayOverride = 'PR-Pending Research' AND vv.AssetNameFindingDetailURL = AssetNameFindingDetailURL
  ) as PR_Cat3,
  (
    SELECT
      count(severity)
    FROM
      vms
    WHERE
      Severity = '3-Category III' AND StatusDisplayOverride = 'TBS-To Be Submitted' AND vv.AssetNameFindingDetailURL = AssetNameFindingDetailURL
  ) as TBS_Cat3,
  (
    SELECT
      count(severity)
    FROM
      vms
    WHERE
      Severity = '3-Category III' AND StatusDisplayOverride = 'S-Submitted' AND vv.AssetNameFindingDetailURL = AssetNameFindingDetailURL
  ) as S_Cat3,
  round( (
    (
      (
        SELECT
          count(severity)
        FROM
          vms
        WHERE
          (StatusDisplayOverride = 'TBS-To Be Submitted' OR StatusDisplayOverride = 'S-Submitted') AND Severity = '4-Category IV' AND vv.AssetNameFindingDetailURL = AssetNameFindingDetailURL
      )*100.0
    )/ (
     SELECT count(severity) FROM vms WHERE Severity = '4-Category IV' AND vv.AssetNameFindingDetailURL = AssetNameFindingDetailURL
  )
  ),2) as '%_Done_Cat4',
  (
    SELECT
      count(severity)
    FROM
      vms
    WHERE
      Severity = '4-Category IV' AND vv.AssetNameFindingDetailURL = AssetNameFindingDetailURL
  ) as Total_Cat4s,
  (
      SELECT
        count(severity)
      FROM
        vms
      WHERE
        (StatusDisplayOverride = 'TBS-To Be Submitted' OR StatusDisplayOverride = 'S-Submitted') AND Severity = '4-Category IV' AND vv.AssetNameFindingDetailURL = AssetNameFindingDetailURL
  ) as 'Done_Cat4',
  (
    SELECT
      count(severity)
    FROM
      vms
    WHERE
      Severity = '4-Category IV' AND StatusDisplayOverride = 'O-Open' AND vv.AssetNameFindingDetailURL = AssetNameFindingDetailURL
  ) as O_Cat4,
  (
    SELECT
      count(severity)
    FROM
      vms
    WHERE
      Severity = '4-Category IV' AND StatusDisplayOverride = 'PR-Pending Research' AND vv.AssetNameFindingDetailURL = AssetNameFindingDetailURL
  ) as PR_Cat4,
  (
    SELECT
      count(severity)
    FROM
      vms
    WHERE
      Severity = '4-Category IV' AND StatusDisplayOverride = 'TBS-To Be Submitted' AND vv.AssetNameFindingDetailURL = AssetNameFindingDetailURL
  ) as TBS_Cat4,
  (
    SELECT
      count(severity)
    FROM
      vms
    WHERE
      Severity = '4-Category IV' AND StatusDisplayOverride = 'S-Submitted' AND vv.AssetNameFindingDetailURL = AssetNameFindingDetailURL
  ) as S_Cat4
FROM
  vms as vv
GROUP BY AssetNameFindingDetailURL
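
As an aside, the repeated correlated subqueries above can usually be collapsed into a single grouped pass with conditional aggregation. This is only a sketch: it assumes an SQLite-style engine (where `sum(<boolean>)` counts matching rows) and invents a tiny vms table with one host; the column names and status values are the ones used in the query above.

```shell
# Sketch only: conditional aggregation on an invented vms table.  Each
# sum(<boolean>) replaces one correlated subquery from the query above.
sqlite3 <<'SQL'
CREATE TABLE vms (AssetNameFindingDetailURL TEXT, Severity TEXT, StatusDisplayOverride TEXT);
INSERT INTO vms VALUES
  ('host1', '3-Category III', 'O-Open'),
  ('host1', '3-Category III', 'PR-Pending Research'),
  ('host1', '3-Category III', 'TBS-To Be Submitted'),
  ('host1', '3-Category III', 'S-Submitted');
SELECT
  AssetNameFindingDetailURL,
  count(*) AS Total_Cat3s,
  sum(StatusDisplayOverride IN ('TBS-To Be Submitted', 'S-Submitted')) AS Done_Cat3,
  round(sum(StatusDisplayOverride IN ('TBS-To Be Submitted', 'S-Submitted')) * 100.0
        / count(*), 2) AS '%_Done_Cat3'
FROM vms
WHERE Severity = '3-Category III'
GROUP BY AssetNameFindingDetailURL;
SQL
```

This computes Total_Cat3s, Done_Cat3, and %_Done_Cat3 in one scan of the table instead of one scan per column.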

May 24, 2012

How to find files with permissions that are more permissive than 0640

Filed under: linux, perl — lancevermilion @ 6:40 pm

I regularly need to dig through directories and find files whose permissions are more (or less) restrictive than some baseline. I couldn’t work out how to express that with “find -perm”, so I wrote a Perl script around a simple find piped to ls -la.

Enjoy.

As a Perl script

#!/usr/bin/perl
use strict;
use warnings;

my @array = `find $ARGV[0] -type f \\( ! -iname "." -or ! -iname ".." \\) | xargs ls -la`;
foreach (@array)
{
  my @line  = split(/\s+/, $_);
  my $perms = $line[0];   # permission string, e.g. -rw-r--r--
  my $file  = $line[8];
  chomp($file);
  my @perm  = split(//, $perms);
  my $match = 0;
  # Anything set beyond -rw-r----- (0640) counts as more permissive.
  $match++ if ( $perm[0] ne '-' );                      # special file type
  $match++ if ( $perm[3] ne '-' );                      # owner execute
  $match++ if ( "$perm[5]$perm[6]" ne '--' );           # group write/execute
  $match++ if ( "$perm[7]$perm[8]$perm[9]" ne '---' );  # any world access
  my $permstr = join '', @perm;
  print "$file ($permstr More Permissive than -rw-r-----)\n" if ( $match > 0 );
}

As a one-liner

perl -e 'my @array = `find /var/log/ -type f \\( ! -iname "." -or ! -iname ".." \\) | xargs ls -la`; foreach(@array){my @line = split(/\s+/, $_); my $perms = $line[0]; my $file = $line[8]; chomp($file); my @perm = split(//, $perms); my $match = 0; $match++ if ( $perm[0] ne "-" ); $match++ if ( $perm[3] ne "-" ); $match++ if ( "$perm[5]$perm[6]" ne "--" ); $match++ if ( "$perm[7]$perm[8]$perm[9]" ne "---" ); my $permstr = join "", @perm; print "$file ($permstr More Permissive than -rw-r-----)\n" if ( $match > 0 );}'

Here is an example output

/var/log/sa/sa16 (-rw-r--r-- More Permissive than -rw-r-----)
/var/log/sa/sa17 (-rw-r--r-- More Permissive than -rw-r-----)
/var/log/sa/sa18 (-rw-r--r-- More Permissive than -rw-r-----)
/var/log/sa/sa19 (-rw-r--r-- More Permissive than -rw-r-----)
/var/log/sa/sa20 (-rw-r--r-- More Permissive than -rw-r-----)
/var/log/sa/sa21 (-rw-r--r-- More Permissive than -rw-r-----)
/var/log/sa/sa22 (-rw-r--r-- More Permissive than -rw-r-----)
/var/log/sa/sa23 (-rw-r--r-- More Permissive than -rw-r-----)
/var/log/sa/sa24 (-rw-r--r-- More Permissive than -rw-r-----)
/var/log/sa/sar15 (-rw-r--r-- More Permissive than -rw-r-----)
/var/log/sa/sar16 (-rw-r--r-- More Permissive than -rw-r-----)
/var/log/sa/sar17 (-rw-r--r-- More Permissive than -rw-r-----)
/var/log/sa/sar18 (-rw-r--r-- More Permissive than -rw-r-----)
/var/log/sa/sar19 (-rw-r--r-- More Permissive than -rw-r-----)
/var/log/sa/sar20 (-rw-r--r-- More Permissive than -rw-r-----)
/var/log/sa/sar21 (-rw-r--r-- More Permissive than -rw-r-----)
/var/log/sa/sar22 (-rw-r--r-- More Permissive than -rw-r-----)
/var/log/sa/sar23 (-rw-r--r-- More Permissive than -rw-r-----)
/var/log/wtmp (-rw-rw-r-- More Permissive than -rw-r-----)
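
For what it’s worth, GNU find can express the same check natively with its `-perm /mode` form, which I didn’t find at the time. This is a self-contained sketch (the demo directory and file names are invented):

```shell
# 0137 is the complement of 0640 (u+x = 0100, g+wx = 0030, o+rwx = 0007).
# GNU find's -perm /0137 matches any file with at least one of those extra
# bits set, i.e. anything more permissive than -rw-r-----.
dir=$(mktemp -d)
touch "$dir/strict" "$dir/loose"
chmod 0640 "$dir/strict"   # exactly rw-r-----
chmod 0644 "$dir/loose"    # world-readable, i.e. more permissive
find "$dir" -type f -perm /0137
rm -rf "$dir"
```

On a GNU system this prints only the world-readable file, matching what the Perl script flags.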

May 3, 2012

pt-summary from Percona Toolkit

Filed under: Freemind, linux, Percona Toolkit, perl — lancevermilion @ 4:04 pm

Here is the bash script ‘pt-summary’ from the Percona Toolkit. I have written a Perl script (pt2mm.pl) that slurps in much of this data (and other info) and generates XML for use with Freemind.

#!/bin/sh

# This program is part of Percona Toolkit: http://www.percona.com/software/
# See "COPYRIGHT, LICENSE, AND WARRANTY" at the end of this file for legal
# notices and disclaimers.

set -u

# ########################################################################
# Globals, settings, helper functions
# ########################################################################
TOOL="pt-summary"
POSIXLY_CORRECT=1
export POSIXLY_CORRECT

# ###########################################################################
# log_warn_die package
# This package is a copy without comments from the original.  The original
# with comments and its test file can be found in the Bazaar repository at,
#   lib/bash/log_warn_die.sh
#   t/lib/bash/log_warn_die.sh
# See https://launchpad.net/percona-toolkit for more information.
# ###########################################################################


set -u

PTFUNCNAME=""
PTDEBUG="${PTDEBUG:-""}"
EXIT_STATUS=0

log() {
   TS=$(date +%F-%T | tr :- _);
   echo "$TS $*"
}

warn() {
   log "$*" >&2
   EXIT_STATUS=1
}

die() {
   warn "$*"
   exit 1
}

_d () {
   [ "$PTDEBUG" ] && echo "# $PTFUNCNAME: $(log "$*")" >&2
}

# ###########################################################################
# End log_warn_die package
# ###########################################################################

# ###########################################################################
# parse_options package
# This package is a copy without comments from the original.  The original
# with comments and its test file can be found in the Bazaar repository at,
#   lib/bash/parse_options.sh
#   t/lib/bash/parse_options.sh
# See https://launchpad.net/percona-toolkit for more information.
# ###########################################################################





set -u

ARGV=""           # Non-option args (probably input files)
EXT_ARGV=""       # Everything after -- (args for an external command)
HAVE_EXT_ARGV=""  # Got --, everything else is put into EXT_ARGV
OPT_ERRS=0        # How many command line option errors
OPT_VERSION=""    # If --version was specified
OPT_HELP=""       # If --help was specified
PO_DIR=""         # Directory with program option spec files

usage() {
   local file="$1"

   local usage=$(grep '^Usage: ' "$file")
   echo $usage
   echo
   echo "For more information, 'man $TOOL' or 'perldoc $file'."
}

usage_or_errors() {
   local file="$1"

   if [ "$OPT_VERSION" ]; then
      local version=$(grep '^pt-[^ ]\+ [0-9]' "$file")
      echo "$version"
      return 1
   fi

   if [ "$OPT_HELP" ]; then
      usage "$file"
      echo
      echo "Command line options:"
      echo
      perl -e '
         use strict;
         use warnings FATAL => qw(all);
         my $lcol = 20;         # Allow this much space for option names.
         my $rcol = 80 - $lcol; # The terminal is assumed to be 80 chars wide.
         my $name;
         while ( <> ) {
            my $line = $_;
            chomp $line;
            if ( $line =~ s/^long:/  --/ ) {
               $name = $line;
            }
            elsif ( $line =~ s/^desc:// ) {
               $line =~ s/ +$//mg;
               my @lines = grep { $_      }
                           $line =~ m/(.{0,$rcol})(?:\s+|\Z)/g;
               if ( length($name) >= $lcol ) {
                  print $name, "\n", (q{ } x $lcol);
               }
               else {
                  printf "%-${lcol}s", $name;
               }
               print join("\n" . (q{ } x $lcol), @lines);
               print "\n";
            }
         }
      ' "$PO_DIR"/*
      echo
      echo "Options and values after processing arguments:"
      echo
      for opt in $(ls "$PO_DIR"); do
         local varname="OPT_$(echo "$opt" | tr a-z- A-Z_)"
         local varvalue="${!varname}"
         printf -- "  --%-30s %s" "$opt" "${varvalue:-(No value)}"
         echo
      done
      return 1
   fi

   if [ $OPT_ERRS -gt 0 ]; then
      echo
      usage "$file"
      return 1
   fi

   return 0
}

option_error() {
   local err="$1"
   OPT_ERRS=$(($OPT_ERRS + 1))
   echo "$err" >&2
}

parse_options() {
   local file="$1"
   shift

   ARGV=""
   EXT_ARGV=""
   HAVE_EXT_ARGV=""
   OPT_ERRS=0
   OPT_VERSION=""
   OPT_HELP=""
   PO_DIR="$TMPDIR/po"

   if [ ! -d "$PO_DIR" ]; then
      mkdir "$PO_DIR"
      if [ $? -ne 0 ]; then
         echo "Cannot mkdir $PO_DIR" >&2
         exit 1
      fi
   fi

   rm -rf "$PO_DIR"/*
   if [ $? -ne 0 ]; then
      echo "Cannot rm -rf $PO_DIR/*" >&2
      exit 1
   fi

   _parse_pod "$file"  # Parse POD into program option (po) spec files
   _eval_po            # Eval po into existence with default values

   if [ $# -ge 2 ] &&  [ "$1" = "--config" ]; then
      shift  # --config
      local user_config_files="$1"
      shift  # that ^
      local IFS=","
      for user_config_file in $user_config_files; do
         _parse_config_files "$user_config_file"
      done
   else
      _parse_config_files "/etc/percona-toolkit/percona-toolkit.conf" "/etc/percona-toolkit/$TOOL.conf" "$HOME/.percona-toolkit.conf" "$HOME/.$TOOL.conf"
   fi

   _parse_command_line "$@"
}

_parse_pod() {
   local file="$1"

   cat "$file" | PO_DIR="$PO_DIR" perl -ne '
      BEGIN { $/ = ""; }
      next unless $_ =~ m/^=head1 OPTIONS/;
      while ( defined(my $para = <>) ) {
         last if $para =~ m/^=head1/;
         chomp;
         if ( $para =~ m/^=item --(\S+)/ ) {
            my $opt  = $1;
            my $file = "$ENV{PO_DIR}/$opt";
            open my $opt_fh, ">", $file or die "Cannot open $file: $!";
            print $opt_fh "long:$opt\n";
            $para = <>;
            chomp;
            if ( $para =~ m/^[a-z ]+:/ ) {
               map {
                  chomp;
                  my ($attrib, $val) = split(/: /, $_);
                  print $opt_fh "$attrib:$val\n";
               } split(/; /, $para);
               $para = <>;
               chomp;
            }
            my ($desc) = $para =~ m/^([^?.]+)/;
            print $opt_fh "desc:$desc.\n";
            close $opt_fh;
         }
      }
      last;
   '
}

_eval_po() {
   local IFS=":"
   for opt_spec in "$PO_DIR"/*; do
      local opt=""
      local default_val=""
      local neg=0
      local size=0
      while read key val; do
         case "$key" in
            long)
               opt=$(echo $val | sed 's/-/_/g' | tr [:lower:] [:upper:])
               ;;
            default)
               default_val="$val"
               ;;
            "short form")
               ;;
            type)
               [ "$val" = "size" ] && size=1
               ;;
            desc)
               ;;
            negatable)
               if [ "$val" = "yes" ]; then
                  neg=1
               fi
               ;;
            *)
               echo "Invalid attribute in $opt_spec: $key" >&2
               exit 1
         esac 
      done < "$opt_spec"

      if [ -z "$opt" ]; then
         echo "No long attribute in option spec $opt_spec" >&2
         exit 1
      fi

      if [ $neg -eq 1 ]; then
         if [ -z "$default_val" ] || [ "$default_val" != "yes" ]; then
            echo "Option $opt_spec is negatable but not default: yes" >&2
            exit 1
         fi
      fi

      if [ $size -eq 1 -a -n "$default_val" ]; then
         default_val=$(size_to_bytes $default_val)
      fi

      eval "OPT_${opt}"="$default_val"
   done
}

_parse_config_files() {

   for config_file in "$@"; do
      test -f "$config_file" || continue

      while read config_opt; do

         echo "$config_opt" | grep '^[ ]*[^#]' >/dev/null 2>&1 || continue

         config_opt="$(echo "$config_opt" | sed -e 's/^ *//g' -e 's/ *$//g' -e 's/[ ]*=[ ]*/=/' -e 's/[ ]*#.*$//')"

         [ "$config_opt" = "" ] && continue

         if ! [ "$HAVE_EXT_ARGV" ]; then
            config_opt="--$config_opt"
         fi

         _parse_command_line "$config_opt"

      done < "$config_file"

      HAVE_EXT_ARGV=""  # reset for each file

   done
}

_parse_command_line() {
   local opt=""
   local val=""
   local next_opt_is_val=""
   local opt_is_ok=""
   local opt_is_negated=""
   local real_opt=""
   local required_arg=""
   local spec=""

   for opt in "$@"; do
      if [ "$opt" = "--" -o "$opt" = "----" ]; then
         HAVE_EXT_ARGV=1
         continue
      fi
      if [ "$HAVE_EXT_ARGV" ]; then
         if [ "$EXT_ARGV" ]; then
            EXT_ARGV="$EXT_ARGV $opt"
         else
            EXT_ARGV="$opt"
         fi
         continue
      fi

      if [ "$next_opt_is_val" ]; then
         next_opt_is_val=""
         if [ $# -eq 0 ] || [ $(expr "$opt" : "-") -eq 1 ]; then
            option_error "$real_opt requires a $required_arg argument"
            continue
         fi
         val="$opt"
         opt_is_ok=1
      else
         if [ $(expr "$opt" : "-") -eq 0 ]; then
            if [ -z "$ARGV" ]; then
               ARGV="$opt"
            else
               ARGV="$ARGV $opt"
            fi
            continue
         fi

         real_opt="$opt"

         if $(echo $opt | grep '^--no-' >/dev/null); then
            opt_is_negated=1
            opt=$(echo $opt | sed 's/^--no-//')
         else
            opt_is_negated=""
            opt=$(echo $opt | sed 's/^-*//')
         fi

         if $(echo $opt | grep '^[a-z-][a-z-]*=' >/dev/null 2>&1); then
            val="$(echo $opt | awk -F= '{print $2}')"
            opt="$(echo $opt | awk -F= '{print $1}')"
         fi

         if [ -f "$TMPDIR/po/$opt" ]; then
            spec="$TMPDIR/po/$opt"
         else
            spec=$(grep "^short form:-$opt\$" "$TMPDIR"/po/* | cut -d ':' -f 1)
            if [ -z "$spec"  ]; then
               option_error "Unknown option: $real_opt"
               continue
            fi
         fi

         required_arg=$(cat "$spec" | awk -F: '/^type:/{print $2}')
         if [ "$required_arg" ]; then
            if [ "$val" ]; then
               opt_is_ok=1
            else
               next_opt_is_val=1
            fi
         else
            if [ "$val" ]; then
               option_error "Option $real_opt does not take a value"
               continue
            fi 
            if [ "$opt_is_negated" ]; then
               val=""
            else
               val="yes"
            fi
            opt_is_ok=1
         fi
      fi

      if [ "$opt_is_ok" ]; then
         opt=$(cat "$spec" | grep '^long:' | cut -d':' -f2 | sed 's/-/_/g' | tr [:lower:] [:upper:])

         if grep "^type:size" "$spec" >/dev/null; then
            val=$(size_to_bytes $val)
         fi

         eval "OPT_$opt"="'$val'"

         opt=""
         val=""
         next_opt_is_val=""
         opt_is_ok=""
         opt_is_negated=""
         real_opt=""
         required_arg=""
         spec=""
      fi
   done
}

size_to_bytes() {
   local size="$1"
   echo $size | perl -ne '%f=(B=>1, K=>1_024, M=>1_048_576, G=>1_073_741_824, T=>1_099_511_627_776); m/^(\d+)([kMGT])?/i; print $1 * $f{uc($2 || "B")};'
}

# ###########################################################################
# End parse_options package
# ###########################################################################

# ###########################################################################
# tmpdir package
# This package is a copy without comments from the original.  The original
# with comments and its test file can be found in the Bazaar repository at,
#   lib/bash/tmpdir.sh
#   t/lib/bash/tmpdir.sh
# See https://launchpad.net/percona-toolkit for more information.
# ###########################################################################


set -u

TMPDIR=""

mk_tmpdir() {
   local dir="${1:-""}"

   if [ -n "$dir" ]; then
      if [ ! -d "$dir" ]; then
         mkdir "$dir" || die "Cannot make tmpdir $dir"
      fi
      TMPDIR="$dir"
   else
      local tool="${0##*/}"
      local pid="$$"
      TMPDIR=`mktemp -d /tmp/${tool}.${pid}.XXXXX` \
         || die "Cannot make secure tmpdir"
   fi
}

rm_tmpdir() {
   if [ -n "$TMPDIR" ] && [ -d "$TMPDIR" ]; then
      rm -rf "$TMPDIR"
   fi
   TMPDIR=""
}

# ###########################################################################
# End tmpdir package
# ###########################################################################

# ###########################################################################
# alt_cmds package
# This package is a copy without comments from the original.  The original
# with comments and its test file can be found in the Bazaar repository at,
#   lib/bash/alt_cmds.sh
#   t/lib/bash/alt_cmds.sh
# See https://launchpad.net/percona-toolkit for more information.
# ###########################################################################


set -u

_seq() {
   local i="$1"
   awk "BEGIN { for(i=1; i<=$i; i++) print i; }"
}

_pidof() {
   local cmd="$1"
   if ! pidof "$cmd" 2>/dev/null; then
      ps -eo pid,ucomm | awk -v comm="$cmd" '$2 == comm { print $1 }'
   fi
}

_lsof() {
   local pid="$1"
   if ! lsof -p $pid 2>/dev/null; then
      /bin/ls -l /proc/$pid/fd 2>/dev/null
   fi
}



_which() {
   if [ -x /usr/bin/which ]; then
      /usr/bin/which "$1" 2>/dev/null | awk '{print $1}'
   elif which which 1>/dev/null 2>&1; then
      which "$1" 2>/dev/null | awk '{print $1}'
   else
      echo "$1"
   fi
}

# ###########################################################################
# End alt_cmds package
# ###########################################################################

# ###########################################################################
# summary_common package
# This package is a copy without comments from the original.  The original
# with comments and its test file can be found in the Bazaar repository at,
#   lib/bash/summary_common.sh
#   t/lib/bash/summary_common.sh
# See https://launchpad.net/percona-toolkit for more information.
# ###########################################################################


set -u

CMD_FILE="$( _which file 2>/dev/null )"
CMD_NM="$( _which nm 2>/dev/null )"
CMD_OBJDUMP="$( _which objdump 2>/dev/null )"

get_nice_of_pid () {
   local pid="$1"
   local niceness="$(ps -p $pid -o nice | awk '$1 !~ /[^0-9]/ {print $1; exit}')"

   if [ -n "${niceness}" ]; then
      echo $niceness
   else
      local tmpfile="$TMPDIR/nice_through_c.tmp.c"
      _d "Getting the niceness from ps failed, somehow. We are about to try this:"
      cat <<EOC > "$tmpfile"

int main(void) {
   int priority = getpriority(PRIO_PROCESS, $pid);
   if ( priority == -1 && errno == ESRCH ) {
      return 1;
   }
   else {
      printf("%d\\n", priority);
      return 0;
   }
}

EOC
      local c_comp=$(_which gcc)
      if [ -z "${c_comp}" ]; then
         c_comp=$(_which cc)
      fi
      _d "$tmpfile: $( cat "$tmpfile" )"
      _d "$c_comp -xc \"$tmpfile\" -o \"$tmpfile\" && eval \"$tmpfile\""
      $c_comp -xc "$tmpfile" -o "$tmpfile" 2>/dev/null && eval "$tmpfile" 2>/dev/null
      if [ $? -ne 0 ]; then
         echo "?"
         _d "Failed to get a niceness value for $pid"
      fi
   fi
}

get_oom_of_pid () {
   local pid="$1"
   local oom_adj=""

   if [ -n "${pid}" -a -e /proc/cpuinfo ]; then
      if [ -s "/proc/$pid/oom_score_adj" ]; then
         oom_adj=$(cat "/proc/$pid/oom_score_adj" 2>/dev/null)
         _d "For $pid, the oom value is $oom_adj, retrieved from oom_score_adj"
      else
         oom_adj=$(cat "/proc/$pid/oom_adj" 2>/dev/null)
         _d "For $pid, the oom value is $oom_adj, retrieved from oom_adj"
      fi
   fi

   if [ -n "${oom_adj}" ]; then
      echo "${oom_adj}"
   else
      echo "?"
      _d "Can't find the oom value for $pid"
   fi
}

has_symbols () {
   local executable="$(_which "$1")"
   local has_symbols=""

   if    [ "${CMD_FILE}" ] \
      && [ "$($CMD_FILE "${executable}" | grep 'not stripped' )" ]; then
      has_symbols=1
   elif    [ "${CMD_NM}" ] \
        || [ "${CMD_OBJDUMP}" ]; then
      if    [ "${CMD_NM}" ] \
         && [ ! "$("${CMD_NM}" -- "${executable}" 2>&1 | grep 'File format not recognized' )" ]; then
         if [ -z "$( $CMD_NM -- "${executable}" 2>&1 | grep ': no symbols' )" ]; then
            has_symbols=1
         fi
      elif [ -z "$("${CMD_OBJDUMP}" -t -- "${executable}" | grep '^no symbols$' )" ]; then
         has_symbols=1
      fi
   fi

   if [ "${has_symbols}" ]; then
      echo "Yes"
   else
      echo "No"
   fi
}

setup_data_dir () {
   local existing_dir="$1"
   local data_dir=""
   if [ -z "$existing_dir" ]; then
      mkdir "$TMPDIR/data" || die "Cannot mkdir $TMPDIR/data"
      data_dir="$TMPDIR/data"
   else
      if [ ! -d "$existing_dir" ]; then
         mkdir "$existing_dir" || die "Cannot mkdir $existing_dir"
      elif [ "$( ls -A "$existing_dir" )" ]; then
         die "--save-samples directory isn't empty, halting."
      fi
      touch "$existing_dir/test" || die "Cannot write to $existing_dir"
      rm "$existing_dir/test"    || die "Cannot rm $existing_dir/test"
      data_dir="$existing_dir"
   fi
   echo "$data_dir"
}

get_var () {
   local varname="$1"
   local file="$2"
   awk -v pattern="${varname}" '$1 == pattern { if (length($2)) { len = length($1); print substr($0, len+index(substr($0, len+1), $2)) } }' "${file}"
}

# ###########################################################################
# End summary_common package
# ###########################################################################

# ###########################################################################
# report_formatting package
# This package is a copy without comments from the original.  The original
# with comments and its test file can be found in the Bazaar repository at,
#   lib/bash/report_formatting.sh
#   t/lib/bash/report_formatting.sh
# See https://launchpad.net/percona-toolkit for more information.
# ###########################################################################


set -u

POSIXLY_CORRECT=1
export POSIXLY_CORRECT

fuzzy_formula='
   rounded = 0;
   if (fuzzy_var <= 10 ) {
      rounded   = 1;
   }
   factor = 1;
   while ( rounded == 0 ) {
      if ( fuzzy_var <= 50 * factor ) {
         fuzzy_var = sprintf("%.0f", fuzzy_var / (5 * factor)) * 5 * factor;
         rounded   = 1;
      }
      else if ( fuzzy_var <= 100  * factor) {
         fuzzy_var = sprintf("%.0f", fuzzy_var / (10 * factor)) * 10 * factor;
         rounded   = 1;
      }
      else if ( fuzzy_var <= 250  * factor) {
         fuzzy_var = sprintf("%.0f", fuzzy_var / (25 * factor)) * 25 * factor;
         rounded   = 1;
      }
      factor = factor * 10;
   }'

fuzz () {
   awk -v fuzzy_var="$1" "BEGIN { ${fuzzy_formula} print fuzzy_var;}"
}

fuzzy_pct () {
   local pct="$(awk -v one="$1" -v two="$2" 'BEGIN{ if (two > 0) { printf "%d", one/two*100; } else {print 0} }')";
   echo "$(fuzz "${pct}")%"
}

section () {
   local str="$1"
   awk -v var="${str} _" 'BEGIN {
      line = sprintf("# %-60s", var);
      i = index(line, "_");
      x = substr(line, i);
      gsub(/[_ \t]/, "#", x);
      printf("%s%s\n", substr(line, 1, i-1), x);
   }'
}

NAME_VAL_LEN=12
name_val () {
   printf "%+*s | %s\n" "${NAME_VAL_LEN}" "$1" "$2"
}

shorten() {
   local num="$1"
   local prec="${2:-2}"
   local div="${3:-1024}"

   echo "$num" | awk -v prec="$prec" -v div="$div" '
   {
      size = 4;
      val  = $1;

      unit = val >= 1099511627776 ? "T" : val >= 1073741824 ? "G" : val >= 1048576 ? "M" : val >= 1024 ? "k" : "";

      while ( int(val) && !(val % 1024) ) {
         val /= 1024;
      }

      while ( val > 1000 ) {
         val /= div;
      }

      printf "%.*f%s", prec, val, unit;
   }
   '
}

group_concat () {
   sed -e '{H; $!d;}' -e 'x' -e 's/\n[[:space:]]*\([[:digit:]]*\)[[:space:]]*/, \1x/g' -e 's/[[:space:]][[:space:]]*/ /g' -e 's/, //' "${1}"
}

# ###########################################################################
# End report_formatting package
# ###########################################################################

# ###########################################################################
# collect_system_info package
# This package is a copy without comments from the original.  The original
# with comments and its test file can be found in the Bazaar repository at,
#   lib/bash/collect_system_info.sh
#   t/lib/bash/collect_system_info.sh
# See https://launchpad.net/percona-toolkit for more information.
# ###########################################################################



set -u

setup_commands () {
   CMD_SYSCTL="$(_which sysctl 2>/dev/null )"
   CMD_DMIDECODE="$(_which dmidecode 2>/dev/null )"
   CMD_ZONENAME="$(_which zonename 2>/dev/null )"
   CMD_DMESG="$(_which dmesg 2>/dev/null )"
   CMD_FILE="$(_which file 2>/dev/null )"
   CMD_LSPCI="$(_which lspci 2>/dev/null )"
   CMD_PRTDIAG="$(_which prtdiag 2>/dev/null )"
   CMD_SMBIOS="$(_which smbios 2>/dev/null )"
   CMD_GETENFORCE="$(_which getenforce 2>/dev/null )"
   CMD_PRTCONF="$(_which prtconf 2>/dev/null )"
   CMD_LVS="$(_which lvs 2>/dev/null)"
   CMD_VGS="$(_which vgs 2>/dev/null)"
   CMD_PRSTAT="$(_which prstat 2>/dev/null)"
   CMD_ISAINFO="$(_which isainfo 2>/dev/null)"
   CMD_TOP="$(_which top 2>/dev/null)"
   CMD_ARCCONF="$( _which arcconf 2>/dev/null )"
   CMD_HPACUCLI="$( _which hpacucli 2>/dev/null )"
   CMD_MEGACLI64="$( _which MegaCli64 2>/dev/null )"
   CMD_VMSTAT="$(_which vmstat 2>/dev/null)"
   CMD_IP="$( _which ip 2>/dev/null )"
   CMD_NETSTAT="$( _which netstat 2>/dev/null )"
   CMD_PSRINFO="$( _which psrinfo 2>/dev/null )"
   CMD_SWAPCTL="$( _which swapctl 2>/dev/null )"
   CMD_LSB_RELEASE="$( _which lsb_release 2>/dev/null )"
   CMD_ETHTOOL="$( _which ethtool 2>/dev/null )"
   CMD_GETCONF="$( _which getconf 2>/dev/null )"
}

collect_system_data () { local PTFUNCNAME=collect_system_data;
   local data_dir="$1"

   if [ -r /var/log/dmesg -a -s /var/log/dmesg ]; then
      cat "/var/log/dmesg" > "$data_dir/dmesg_file"
   fi

   $CMD_SYSCTL -a > "$data_dir/sysctl" 2>/dev/null

   if [ "${CMD_LSPCI}" ]; then
      $CMD_LSPCI > "$data_dir/lspci_file" 2>/dev/null
   fi

   local platform="$(uname -s)"
   echo "platform    $platform"   >> "$data_dir/summary"
   echo "hostname    $(uname -n)" >> "$data_dir/summary"
   uptime >> "$data_dir/uptime"

   processor_info "$data_dir"
   find_release_and_kernel "$platform" >> "$data_dir/summary"
   cpu_and_os_arch "$platform"         >> "$data_dir/summary"
   find_virtualization "$platform" "$data_dir/dmesg_file" "$data_dir/lspci_file" >> "$data_dir/summary"
   dmidecode_system_info               >> "$data_dir/summary"

   if [ "${platform}" = "SunOS" -a "${CMD_ZONENAME}" ]; then
      echo "zonename    $($CMD_ZONENAME)" >> "$data_dir/summary"
   fi

   if [ -x /lib/libc.so.6 ]; then
      echo "compiler    $(/lib/libc.so.6 | grep 'Compiled by' | cut -c13-)" >> "$data_dir/summary"
   fi

   local rss=$(ps -eo rss 2>/dev/null | awk '/[0-9]/{total += $1 * 1024} END {print total}')
   echo "rss    ${rss}" >> "$data_dir/summary"

   [ "$CMD_DMIDECODE" ] && $CMD_DMIDECODE > "$data_dir/dmidecode" 2>/dev/null

   find_memory_stats "$platform" > "$data_dir/memory"
   [ "$OPT_SUMMARIZE_MOUNTS" ] && mounted_fs_info "$platform" > "$data_dir/mounted_fs"
   raid_controller   "$data_dir/dmesg_file" "$data_dir/lspci_file" >> "$data_dir/summary"

   local controller="$(get_var raid_controller "$data_dir/summary")"
   propietary_raid_controller "$data_dir/raid-controller" "$data_dir/summary" "$data_dir" "$controller"

   [ "${platform}" = "Linux" ] && linux_exclusive_collection "$data_dir"

   if [ "$CMD_IP" -a "$OPT_SUMMARIZE_NETWORK" ]; then
      $CMD_IP -s link > "$data_dir/ip"
      network_device_info "$data_dir/ip" > "$data_dir/network_devices"
   fi

   [ "$CMD_SWAPCTL" ] && $CMD_SWAPCTL -s > "$data_dir/swapctl"

   if [ "$OPT_SUMMARIZE_PROCESSES" ]; then
      top_processes > "$data_dir/processes"
      notable_processes_info > "$data_dir/notable_procs"

      if [ "$CMD_VMSTAT" ]; then
         touch "$data_dir/vmstat"
         (
            $CMD_VMSTAT 1 $OPT_SLEEP > "$data_dir/vmstat"
         ) &
      fi
   fi

   for file in $data_dir/*; do
      [ "$file" = "$data_dir/vmstat" ] && continue
      [ ! -s "$file" ] && rm "$file"
   done
}

linux_exclusive_collection () { local PTFUNCNAME=linux_exclusive_collection;
   local data_dir="$1"

   echo "threading    $(getconf GNU_LIBPTHREAD_VERSION)" >> "$data_dir/summary"

   local getenforce=""
   [ "$CMD_GETENFORCE" ] && getenforce="$($CMD_GETENFORCE 2>&1)"
   echo "getenforce    ${getenforce:-"No SELinux detected"}" >> "$data_dir/summary"

   echo "swappiness    $(awk '/vm.swappiness/{print $3}' "$data_dir/sysctl")" >> "$data_dir/summary"

   local dirty_ratio="$(awk '/vm.dirty_ratio/{print $3}' "$data_dir/sysctl")"
   local dirty_bg_ratio="$(awk '/vm.dirty_background_ratio/{print $3}' "$data_dir/sysctl")"
   if [ "$dirty_ratio" -a "$dirty_bg_ratio" ]; then
      echo "dirtypolicy    $dirty_ratio, $dirty_bg_ratio" >> "$data_dir/summary"
   fi

   local dirty_bytes="$(awk '/vm.dirty_bytes/{print $3}' "$data_dir/sysctl")"
   if [ "$dirty_bytes" ]; then
      echo "dirtystatus     $(awk '/vm.dirty_bytes/{print $3}' "$data_dir/sysctl"), $(awk '/vm.dirty_background_bytes/{print $3}' "$data_dir/sysctl")" >> "$data_dir/summary"
   fi

   schedulers_and_queue_size "$data_dir/summary" > "$data_dir/partitioning"

   for file in dentry-state file-nr inode-nr; do
      echo "${file}    $(cat /proc/sys/fs/${file} 2>&1)" >> "$data_dir/summary"
   done

   [ "$CMD_LVS" -a -x "$CMD_LVS" ] && $CMD_LVS 1>"$data_dir/lvs" 2>"$data_dir/lvs.stderr"

   [ "$CMD_VGS" -a -x "$CMD_VGS" ] && \
      $CMD_VGS -o vg_name,vg_size,vg_free 2>/dev/null > "$data_dir/vgs"

   [ "$CMD_NETSTAT" -a "$OPT_SUMMARIZE_NETWORK" ] && \
      $CMD_NETSTAT -antp > "$data_dir/netstat" 2>/dev/null
}

network_device_info () {
   local ip_minus_s_file="$1"

   if [ "$CMD_ETHTOOL" ]; then
      local tempfile="$TMPDIR/ethtool_output_temp"
      for device in $( awk '/^[1-9]/{ print $2 }'  "$ip_minus_s_file" \
                        | awk -F: '{print $1}'     \
                        | grep -v '^lo\|^in\|^gr'  \
                        | sort -u ); do
         ethtool $device > "$tempfile" 2>/dev/null

         if ! grep -q 'No data available' "$tempfile"; then
            cat "$tempfile"
         fi
      done
   fi
}

find_release_and_kernel () { local PTFUNCNAME=find_release_and_kernel;
   local platform="$1"

   local kernel=""
   local release=""
   if [ "${platform}" = "Linux" ]; then
      kernel="$(uname -r)"
      if [ -e /etc/fedora-release ]; then
         release=$(cat /etc/fedora-release);
      elif [ -e /etc/redhat-release ]; then
         release=$(cat /etc/redhat-release);
      elif [ -e /etc/system-release ]; then
         release=$(cat /etc/system-release);
      elif [ "$CMD_LSB_RELEASE" ]; then
         release="$($CMD_LSB_RELEASE -ds) ($($CMD_LSB_RELEASE -cs))"
      elif [ -e /etc/lsb-release ]; then
         release=$(grep DISTRIB_DESCRIPTION /etc/lsb-release |awk -F'=' '{print $2}' |sed 's#"##g');
      elif [ -e /etc/debian_version ]; then
         release="Debian-based version $(cat /etc/debian_version)";
         if [ -e /etc/apt/sources.list ]; then
             local code=` awk  '/^deb/ {print $3}' /etc/apt/sources.list       \
                        | awk -F/ '{print $1}'| awk 'BEGIN {FS="|"}{print $1}' \
                        | sort | uniq -c | sort -rn | head -n1 | awk '{print $2}'`
             release="${release} (${code})"
      fi
      elif ls /etc/*release >/dev/null 2>&1; then
         if grep -q DISTRIB_DESCRIPTION /etc/*release; then
            release=$(grep DISTRIB_DESCRIPTION /etc/*release | head -n1);
         else
            release=$(cat /etc/*release | head -n1);
         fi
      fi
   elif     [ "${platform}" = "FreeBSD" ] \
         || [ "${platform}" = "NetBSD"  ] \
         || [ "${platform}" = "OpenBSD" ]; then
      release="$(uname -r)"
      kernel="$($CMD_SYSCTL -n "kern.osrevision")"
   elif [ "${platform}" = "SunOS" ]; then
      release="$(head -n1 /etc/release)"
      if [ -z "${release}" ]; then
         release="$(uname -r)"
      fi
      kernel="$(uname -v)"
   fi
   echo "kernel    $kernel"
   echo "release    $release"
}

cpu_and_os_arch () { local PTFUNCNAME=cpu_and_os_arch;
   local platform="$1"

   local CPU_ARCH='32-bit'
   local OS_ARCH='32-bit'
   if [ "${platform}" = "Linux" ]; then
      if grep -q ' lm ' /proc/cpuinfo; then
         CPU_ARCH='64-bit'
      fi
   elif [ "${platform}" = "FreeBSD" ] || [ "${platform}" = "NetBSD" ]; then
      if $CMD_SYSCTL "hw.machine_arch" | grep -v 'i[36]86' >/dev/null; then
         CPU_ARCH='64-bit'
      fi
   elif [ "${platform}" = "OpenBSD" ]; then
      if $CMD_SYSCTL "hw.machine" | grep -v 'i[36]86' >/dev/null; then
         CPU_ARCH='64-bit'
      fi
   elif [ "${platform}" = "SunOS" ]; then
      if $CMD_ISAINFO -b | grep 64 >/dev/null ; then
         CPU_ARCH="64-bit"
      fi
   fi
   if [ -z "$CMD_FILE" ]; then
      if [ "$CMD_GETCONF" ] && $CMD_GETCONF LONG_BIT 1>/dev/null 2>&1; then
         OS_ARCH="$($CMD_GETCONF LONG_BIT 2>/dev/null)-bit"
      else
         OS_ARCH='N/A'
      fi
   elif $CMD_FILE /bin/sh | grep '64-bit' >/dev/null; then
       OS_ARCH='64-bit'
   fi

   echo "CPU_ARCH    $CPU_ARCH"
   echo "OS_ARCH    $OS_ARCH"
}

find_virtualization () { local PTFUNCNAME=find_virtualization;
   local platform="$1"
   local dmesg_file="$2"
   local lspci_file="$3"

   local tempfile="$TMPDIR/find_virtualization.tmp"

   local virt=""
   if [ -s "$dmesg_file" ]; then
      virt="$(find_virtualization_dmesg "$dmesg_file")"
   fi
   if [ -z "${virt}" ] && [ -s "$lspci_file" ]; then
      if grep -qi "virtualbox" "$lspci_file" ; then
         virt="VirtualBox"
      elif grep -qi "vmware" "$lspci_file" ; then
         virt="VMWare"
      fi
   elif [ "${platform}" = "FreeBSD" ]; then
      if ps -o stat | grep J ; then
         virt="FreeBSD Jail"
      fi
   elif [ "${platform}" = "SunOS" ]; then
      if [ "$CMD_PRTDIAG" ] && $CMD_PRTDIAG > "$tempfile" 2>/dev/null; then
         virt="$(find_virtualization_generic "$tempfile" )"
      elif [ "$CMD_SMBIOS" ] && $CMD_SMBIOS > "$tempfile" 2>/dev/null; then
         virt="$(find_virtualization_generic "$tempfile" )"
      fi
   elif [ -e /proc/user_beancounters ]; then
      virt="OpenVZ/Virtuozzo"
   fi
   echo "virt    ${virt:-"No virtualization detected"}"
}

find_virtualization_generic() { local PTFUNCNAME=find_virtualization_generic;
   local file="$1"
   if grep -i -e "virtualbox" "$file" >/dev/null; then
      echo "VirtualBox"
   elif grep -i -e "vmware" "$file" >/dev/null; then
      echo "VMWare"
   fi
}

find_virtualization_dmesg () { local PTFUNCNAME=find_virtualization_dmesg;
   local file="$1"
   if grep -qi -e "vmware" -e "vmxnet" -e 'paravirtualized kernel on vmi' "${file}"; then
      echo "VMWare";
   elif grep -qi -e 'paravirtualized kernel on xen' -e 'Xen virtual console' "${file}"; then
      echo "Xen";
   elif grep -qi "qemu" "${file}"; then
      echo "QEmu";
   elif grep -qi 'paravirtualized kernel on KVM' "${file}"; then
      echo "KVM";
   elif grep -q "VBOX" "${file}"; then
      echo "VirtualBox";
   elif grep -qi 'hd.: Virtual .., ATA.*drive' "${file}"; then
      echo "Microsoft VirtualPC";
   fi
}

dmidecode_system_info () { local PTFUNCNAME=dmidecode_system_info;
   if [ "${CMD_DMIDECODE}" ]; then
      local vendor="$($CMD_DMIDECODE -s "system-manufacturer" 2>/dev/null | sed 's/ *$//g')"
      echo "vendor    ${vendor}"
      if [ "${vendor}" ]; then
         local product="$($CMD_DMIDECODE -s "system-product-name" 2>/dev/null | sed 's/ *$//g')"
         local version="$($CMD_DMIDECODE -s "system-version" 2>/dev/null | sed 's/ *$//g')"
         local chassis="$($CMD_DMIDECODE -s "chassis-type" 2>/dev/null | sed 's/ *$//g')"
         local servicetag="$($CMD_DMIDECODE -s "system-serial-number" 2>/dev/null | sed 's/ *$//g')"
         local system="${vendor}; ${product}; v${version} (${chassis})"

         echo "system    ${system}"
         echo "servicetag    ${servicetag:-"Not found"}"
      fi
   fi
}

find_memory_stats () { local PTFUNCNAME=find_memory_stats;
   local platform="$1"

   if [ "${platform}" = "Linux" ]; then
      free -b
      cat /proc/meminfo
   elif [ "${platform}" = "SunOS" ]; then
      $CMD_PRTCONF | awk -F: '/Memory/{print $2}'
   fi
}

mounted_fs_info () { local PTFUNCNAME=mounted_fs_info;
   local platform="$1"

   if [ "${platform}" != "SunOS" ]; then
      local cmd="df -h"
      if [ "${platform}" = "Linux" ]; then
         cmd="df -h -P"
      fi
      $cmd | sort > "$TMPDIR/mounted_fs_info.tmp"
      mount | sort | join "$TMPDIR/mounted_fs_info.tmp" -
   fi
}

raid_controller () { local PTFUNCNAME=raid_controller;
   local dmesg_file="$1"
   local lspci_file="$2"

   local tempfile="$TMPDIR/raid_controller.tmp"

   local controller=""
   if [ -s "$lspci_file" ]; then
      controller="$(find_raid_controller_lspci "$lspci_file")"
   fi
   if [ -z "${controller}" ] && [ -s "$dmesg_file" ]; then
      controller="$(find_raid_controller_dmesg "$dmesg_file")"
   fi

   echo "raid_controller    ${controller:-"No RAID controller detected"}"
}

find_raid_controller_dmesg () { local PTFUNCNAME=find_raid_controller_dmesg;
   local file="$1"
   local pat='scsi[0-9].*: .*'
   if grep -qi "${pat}megaraid" "${file}"; then
      echo 'LSI Logic MegaRAID SAS'
   elif grep -q "Fusion MPT SAS" "${file}"; then
      echo 'Fusion-MPT SAS'
   elif grep -q "${pat}aacraid" "${file}"; then
      echo 'AACRAID'
   elif grep -q "${pat}3ware [0-9]* Storage Controller" "${file}"; then
      echo '3Ware'
   fi
}

find_raid_controller_lspci () { local PTFUNCNAME=find_raid_controller_lspci;
   local file="$1"
   if grep -q "RAID bus controller: LSI Logic / Symbios Logic MegaRAID SAS" "${file}"; then
      echo 'LSI Logic MegaRAID SAS'
   elif grep -q "Fusion-MPT SAS" "${file}"; then
      echo 'Fusion-MPT SAS'
   elif grep -q "RAID bus controller: LSI Logic / Symbios Logic Unknown" "${file}"; then
      echo 'LSI Logic Unknown'
   elif grep -q "RAID bus controller: Adaptec AAC-RAID" "${file}"; then
      echo 'AACRAID'
   elif grep -q "3ware [0-9]* Storage Controller" "${file}"; then
      echo '3Ware'
   elif grep -q "Hewlett-Packard Company Smart Array" "${file}"; then
      echo 'HP Smart Array'
   elif grep -q " RAID bus controller: " "${file}"; then
      awk -F: '/RAID bus controller\:/ {print $3" "$5" "$6}' "${file}"
   fi
}

schedulers_and_queue_size () { local PTFUNCNAME=schedulers_and_queue_size;
   local file="$1"

   local disks="$(ls /sys/block/ | grep -v -e ram -e loop -e 'fd[0-9]' | xargs echo)"
   echo "internal::disks    $disks" >> "$file"

   for disk in $disks; do
      if [ -e "/sys/block/${disk}/queue/scheduler" ]; then
         echo "internal::${disk}    $(cat /sys/block/${disk}/queue/scheduler | grep -o '\[.*\]') $(cat /sys/block/${disk}/queue/nr_requests)" >> "$file"
         fdisk -l "/dev/${disk}" 2>/dev/null
      fi
   done
}

top_processes () { local PTFUNCNAME=top_processes;
   if [ "$CMD_PRSTAT" ]; then
      $CMD_PRSTAT | head
   elif [ "$CMD_TOP" ]; then
      local cmd="$CMD_TOP -bn 1"
      if    [ "${platform}" = "FreeBSD" ] \
         || [ "${platform}" = "NetBSD"  ] \
         || [ "${platform}" = "OpenBSD" ]; then
         cmd="$CMD_TOP -b -d 1"
      fi
      $cmd \
         | sed -e 's# *$##g' -e '/./{H;$!d;}' -e 'x;/PID/!d;' \
         | grep . \
         | head
   fi
}

notable_processes_info () { local PTFUNCNAME=notable_processes_info;
   local format="%5s    %+2d    %s\n"
   local sshd_pid=$(ps -eo pid,args | awk '$2 ~ /\/usr\/sbin\/sshd/ { print $1; exit }')

   echo "  PID    OOM    COMMAND"

   if [ "$sshd_pid" ]; then
      printf "$format" "$sshd_pid" "$(get_oom_of_pid $sshd_pid)" "sshd"
   else
      printf "%5s    %3s    %s\n" "?" "?" "sshd doesn't appear to be running"
   fi

   local PTDEBUG=""
   ps -eo pid,ucomm | grep '^[0-9]' | while read pid proc; do
      [ "$sshd_pid" ] && [ "$sshd_pid" = "$pid" ] && continue
      local oom="$(get_oom_of_pid $pid)"
      if [ "$oom" ] && [ "$oom" != "?" ] && [ "$oom" = "-17" ]; then
         printf "$format" "$pid" "$oom" "$proc"
      fi
   done
}

processor_info () { local PTFUNCNAME=processor_info;
   local data_dir="$1"
   if [ -f /proc/cpuinfo ]; then
      cat /proc/cpuinfo > "$data_dir/proc_cpuinfo_copy" 2>/dev/null
   elif [ "${platform}" = "SunOS" ]; then
      $CMD_PSRINFO -v > "$data_dir/psrinfo_minus_v"
   fi 
}

propietary_raid_controller () { local PTFUNCNAME=propietary_raid_controller;
   local file="$1"
   local variable_file="$2"
   local data_dir="$3"
   local controller="$4"

   notfound=""
   if [ "${controller}" = "AACRAID" ]; then
      if [ -z "$CMD_ARCCONF" ]; then
         notfound="e.g. http://www.adaptec.com/en-US/support/raid/scsi_raid/ASR-2120S/"
      elif $CMD_ARCCONF getconfig 1 > "$file" 2>/dev/null; then
         echo "internal::raid_opt    1" >> "$variable_file"
      fi
   elif [ "${controller}" = "HP Smart Array" ]; then
      if [ -z "$CMD_HPACUCLI" ]; then
         notfound="your package repository or the manufacturer's website"
      elif $CMD_HPACUCLI ctrl all show config > "$file" 2>/dev/null; then
         echo "internal::raid_opt    2" >> "$variable_file"
      fi
   elif [ "${controller}" = "LSI Logic MegaRAID SAS" ]; then
      if [ -z "$CMD_MEGACLI64" ]; then 
         notfound="your package repository or the manufacturer's website"
      else
         echo "internal::raid_opt    3" >> "$variable_file"
         $CMD_MEGACLI64 -AdpAllInfo -aALL -NoLog > "$data_dir/lsi_megaraid_adapter_info.tmp" 2>/dev/null
         $CMD_MEGACLI64 -AdpBbuCmd -GetBbuStatus -aALL -NoLog > "$data_dir/lsi_megaraid_bbu_status.tmp" 2>/dev/null
         $CMD_MEGACLI64 -LdPdInfo -aALL -NoLog > "$data_dir/lsi_megaraid_devices.tmp" 2>/dev/null
      fi
   fi

   if [ "${notfound}" ]; then
      echo "internal::raid_opt    0" >> "$variable_file"
      echo "   RAID controller software not found; try getting it from" > "$file"
      echo "   ${notfound}" >> "$file"
   fi
}

# ###########################################################################
# End collect_system_info package
# ###########################################################################

# ###########################################################################
# report_system_info package
# This package is a copy without comments from the original.  The original
# with comments and its test file can be found in the Bazaar repository at,
#   lib/bash/report_system_info.sh
#   t/lib/bash/report_system_info.sh
# See https://launchpad.net/percona-toolkit for more information.
# ###########################################################################


set -u

   
parse_proc_cpuinfo () { local PTFUNCNAME=parse_proc_cpuinfo;
   local file="$1"
   local virtual="$(grep -c ^processor "${file}")";
   local physical="$(grep 'physical id' "${file}" | sort -u | wc -l)";
   local cores="$(grep 'cpu cores' "${file}" | head -n 1 | cut -d: -f2)";

   [ "${physical}" = "0" ] && physical="${virtual}"
   [ -z "${cores}" ] && cores=0

   cores=$((${cores} * ${physical}));
   local htt=""
   if [ "${cores}" -gt 0 ] && [ "${cores}" -lt "${virtual}" ]; then htt=yes; else htt=no; fi

   name_val "Processors" "physical = ${physical}, cores = ${cores}, virtual = ${virtual}, hyperthreading = ${htt}"

   awk -F: '/cpu MHz/{print $2}' "${file}" \
      | sort | uniq -c > "$TMPDIR/parse_proc_cpuinfo_cpu.unq"
   name_val "Speeds" "$(group_concat "$TMPDIR/parse_proc_cpuinfo_cpu.unq")"

   awk -F: '/model name/{print $2}' "${file}" \
      | sort | uniq -c > "$TMPDIR/parse_proc_cpuinfo_model.unq"
   name_val "Models" "$(group_concat "$TMPDIR/parse_proc_cpuinfo_model.unq")"

   awk -F: '/cache size/{print $2}' "${file}" \
      | sort | uniq -c > "$TMPDIR/parse_proc_cpuinfo_cache.unq"
   name_val "Caches" "$(group_concat "$TMPDIR/parse_proc_cpuinfo_cache.unq")"
}

parse_sysctl_cpu_freebsd() { local PTFUNCNAME=parse_sysctl_cpu_freebsd;
   local file="$1"
   [ -e "$file" ] || return;
   local virtual="$(awk '/hw.ncpu/{print $2}' "$file")"
   name_val "Processors" "virtual = ${virtual}"
   name_val "Speeds" "$(awk '/hw.clockrate/{print $2}' "$file")"
   name_val "Models" "$(awk -F: '/hw.model/{print substr($2, 2)}' "$file")"
}

parse_sysctl_cpu_netbsd() { local PTFUNCNAME=parse_sysctl_cpu_netbsd;
   local file="$1"

   [ -e "$file" ] || return

   local virtual="$(awk '/hw.ncpu /{print $NF}' "$file")"
   name_val "Processors" "virtual = ${virtual}"
   name_val "Models" "$(awk -F: '/hw.model/{print $3}' "$file")"
}

parse_sysctl_cpu_openbsd() { local PTFUNCNAME=parse_sysctl_cpu_openbsd;
   local file="$1"

   [ -e "$file" ] || return

   name_val "Processors" "$(awk -F= '/hw.ncpu=/{print $2}' "$file")"
   name_val "Speeds" "$(awk -F= '/hw.cpuspeed/{print $2}' "$file")"
   name_val "Models" "$(awk -F= '/hw.model/{print substr($2, 1, index($2, " "))}' "$file")"
}

parse_psrinfo_cpus() { local PTFUNCNAME=parse_psrinfo_cpus;
   local file="$1"

   [ -e "$file" ] || return

   name_val "Processors" "$(grep -c 'Status of .* processor' "$file")"
   awk '/operates at/ {
      start = index($0, " at ") + 4;
      end   = length($0) - start - 4
      print substr($0, start, end);
   }' "$file" | sort | uniq -c > "$TMPDIR/parse_psrinfo_cpus.tmp"
   name_val "Speeds" "$(group_concat "$TMPDIR/parse_psrinfo_cpus.tmp")"
}

parse_free_minus_b () { local PTFUNCNAME=parse_free_minus_b;
   local file="$1"

   [ -e "$file" ] || return

   local physical=$(awk '/Mem:/{print $3}' "${file}")
   local swap_alloc=$(awk '/Swap:/{print $2}' "${file}")
   local swap_used=$(awk '/Swap:/{print $3}' "${file}")
   local virtual=$(shorten $(($physical + $swap_used)) 1)

   name_val "Total"   $(shorten $(awk '/Mem:/{print $2}' "${file}") 1)
   name_val "Free"    $(shorten $(awk '/Mem:/{print $4}' "${file}") 1)
   name_val "Used"    "physical = $(shorten ${physical} 1), swap allocated = $(shorten ${swap_alloc} 1), swap used = $(shorten ${swap_used} 1), virtual = ${virtual}"
   name_val "Buffers" $(shorten $(awk '/Mem:/{print $6}' "${file}") 1)
   name_val "Caches"  $(shorten $(awk '/Mem:/{print $7}' "${file}") 1)
   name_val "Dirty"  "$(awk '/Dirty:/ {print $2, $3}' "${file}")"
}

parse_memory_sysctl_freebsd() { local PTFUNCNAME=parse_memory_sysctl_freebsd;
   local file="$1"

   [ -e "$file" ] || return

   local physical=$(awk '/hw.realmem:/{print $2}' "${file}")
   local mem_hw=$(awk '/hw.physmem:/{print $2}' "${file}")
   local mem_used=$(awk '
      /hw.physmem/                   { mem_hw       = $2; }
      /vm.stats.vm.v_inactive_count/ { mem_inactive = $2; }
      /vm.stats.vm.v_cache_count/    { mem_cache    = $2; }
      /vm.stats.vm.v_free_count/     { mem_free     = $2; }
      /hw.pagesize/                  { pagesize     = $2; }
      END {
         mem_inactive *= pagesize;
         mem_cache    *= pagesize;
         mem_free     *= pagesize;
         print mem_hw - mem_inactive - mem_cache - mem_free;
      }
   ' "$file");
   name_val "Total"   $(shorten ${mem_hw} 1)
   name_val "Virtual" $(shorten ${physical} 1)
   name_val "Used"    $(shorten ${mem_used} 1)
}

parse_memory_sysctl_netbsd() { local PTFUNCNAME=parse_memory_sysctl_netbsd;
   local file="$1"
   local swapctl_file="$2"

   [ -e "$file" ] && [ -e "$swapctl_file" ] || return

   local swap_mem="$(echo "$(awk '{print $2;}' "$swapctl_file")*512" | bc -l)"
   name_val "Total"   $(shorten "$(awk '/hw.physmem /{print $NF}' "$file")" 1)
   name_val "User"    $(shorten "$(awk '/hw.usermem /{print $NF}' "$file")" 1)
   name_val "Swap"    $(shorten ${swap_mem} 1)
}

parse_memory_sysctl_openbsd() { local PTFUNCNAME=parse_memory_sysctl_openbsd;
   local file="$1"
   local swapctl_file="$2"

   [ -e "$file" ] && [ -e "$swapctl_file" ] || return

   local swap_mem="$(echo "$(awk '{print $2;}' "$swapctl_file")*512" | bc -l)"
   name_val "Total"   $(shorten "$(awk -F= '/hw.physmem/{print $2}' "$file")" 1)
   name_val "User"    $(shorten "$(awk -F= '/hw.usermem/{print $2}' "$file")" 1)
   name_val "Swap"    $(shorten ${swap_mem} 1)
}

parse_dmidecode_mem_devices () { local PTFUNCNAME=parse_dmidecode_mem_devices;
   local file="$1"

   [ -e "$file" ] || return

   echo "  Locator   Size     Speed             Form Factor   Type          Type Detail"
   echo "  ========= ======== ================= ============= ============= ==========="
   sed    -e '/./{H;$!d;}' \
          -e 'x;/Memory Device\n/!d;' \
          -e 's/: /:/g' \
          -e 's/</{/g' \
          -e 's/>/}/g' \
          -e 's/[ \t]*\n/\n/g' \
       "${file}" \
       | awk -F: '/Size|Type|Form.Factor|Type.Detail|[^ ]Locator/{printf("|%s", $2)}/Speed/{print "|" $2}' \
       | sed -e 's/No Module Installed/{EMPTY}/' \
       | sort \
       | awk -F'|' '{printf("  %-9s %-8s %-17s %-13s %-13s %-8s\n", $4, $2, $7, $3, $5, $6);}'
}

parse_ip_s_link () { local PTFUNCNAME=parse_ip_s_link;
   local file="$1"

   [ -e "$file" ] || return

   echo "  interface  rx_bytes rx_packets  rx_errors   tx_bytes tx_packets  tx_errors"
   echo "  ========= ========= ========== ========== ========== ========== =========="

   awk "/^[1-9][0-9]*:/ {
      save[\"iface\"] = substr(\$2, 1, index(\$2, \":\") - 1);
      new = 1;
   }
   \$0 !~ /[^0-9 ]/ {
      if ( new == 1 ) {
         new = 0;
         fuzzy_var = \$1; ${fuzzy_formula} save[\"bytes\"] = fuzzy_var;
         fuzzy_var = \$2; ${fuzzy_formula} save[\"packs\"] = fuzzy_var;
         fuzzy_var = \$3; ${fuzzy_formula} save[\"errs\"]  = fuzzy_var;
      }
      else {
         fuzzy_var = \$1; ${fuzzy_formula} tx_bytes   = fuzzy_var;
         fuzzy_var = \$2; ${fuzzy_formula} tx_packets = fuzzy_var;
         fuzzy_var = \$3; ${fuzzy_formula} tx_errors  = fuzzy_var;
         printf \"  %-8s %10.0f %10.0f %10.0f %10.0f %10.0f %10.0f\\n\", save[\"iface\"], save[\"bytes\"], save[\"packs\"], save[\"errs\"], tx_bytes, tx_packets, tx_errors;
      }
   }" "$file"
}

parse_ethtool () { local PTFUNCNAME=parse_ethtool;
   local file="$1"

   [ -e "$file" ] || return

   echo "  Device    Speed     Duplex"
   echo "  ========= ========= ========="


   awk '
      /^Settings for / {
         device               = substr($3, 1, index($3, ":") ? index($3, ":")-1 : length($3));
         device_names[device] = device;
      }
      /Speed:/  { devices[device ",speed"]  = $2 }
      /Duplex:/ { devices[device ",duplex"] = $2 }
      END {
         for ( device in device_names ) {
            printf("  %-10s %-10s %-10s\n",
               device,
               devices[device ",speed"],
               devices[device ",duplex"]);
         }
      }
   ' "$file"

}

parse_netstat () { local PTFUNCNAME=parse_netstat;
   local file="$1"

   [ -e "$file" ] || return

   echo "  Connections from remote IP addresses"
   awk '$1 ~ /^tcp/ && $5 ~ /^[1-9]/ {
      print substr($5, 1, index($5, ":") - 1);
   }' "${file}" | sort | uniq -c \
      | awk "{
         fuzzy_var=\$1;
         ${fuzzy_formula}
         printf \"    %-15s %5d\\n\", \$2, fuzzy_var;
         }" \
      | sort -n -t . -k 1,1 -k 2,2 -k 3,3 -k 4,4
   echo "  Connections to local IP addresses"
   awk '$1 ~ /^tcp/ && $5 ~ /^[1-9]/ {
      print substr($4, 1, index($4, ":") - 1);
   }' "${file}" | sort | uniq -c \
      | awk "{
         fuzzy_var=\$1;
         ${fuzzy_formula}
         printf \"    %-15s %5d\\n\", \$2, fuzzy_var;
         }" \
      | sort -n -t . -k 1,1 -k 2,2 -k 3,3 -k 4,4
   echo "  Connections to top 10 local ports"
   awk '$1 ~ /^tcp/ && $5 ~ /^[1-9]/ {
      print substr($4, index($4, ":") + 1);
   }' "${file}" | sort | uniq -c | sort -rn | head -n10 \
      | awk "{
         fuzzy_var=\$1;
         ${fuzzy_formula}
         printf \"    %-15s %5d\\n\", \$2, fuzzy_var;
         }" | sort
   echo "  States of connections"
   awk '$1 ~ /^tcp/ {
      print $6;
   }' "${file}" | sort | uniq -c | sort -rn \
      | awk "{
         fuzzy_var=\$1;
         ${fuzzy_formula}
         printf \"    %-15s %5d\\n\", \$2, fuzzy_var;
         }" | sort
}

parse_filesystems () { local PTFUNCNAME=parse_filesystems;
   local file="$1"
   local platform="$2"

   [ -e "$file" ] || return

   local spec="$(awk "
      BEGIN {
         device     = 10;
         fstype     = 4;
         options    = 4;
      }
      /./ {
         f_device     = \$1;
         f_fstype     = \$10;
         f_options    = substr(\$11, 2, length(\$11) - 2);
         if ( \"$2\" ~ /(Free|Open|Net)BSD/ ) {
            f_fstype  = substr(\$9, 2, length(\$9) - 2);
            f_options = substr(\$0, index(\$0, \",\") + 2);
            f_options = substr(f_options, 1, length(f_options) - 1);
         }
         if ( length(f_device) > device ) {
            device=length(f_device);
         }
         if ( length(f_fstype) > fstype ) {
            fstype=length(f_fstype);
         }
         if ( length(f_options) > options ) {
            options=length(f_options);
         }
      }
      END{
         print \"%-\" device \"s %5s %4s %-\" fstype \"s %-\" options \"s %s\";
      }
   " "${file}")"

   awk "
      BEGIN {
         spec=\"  ${spec}\\n\";
         printf spec, \"Filesystem\", \"Size\", \"Used\", \"Type\", \"Opts\", \"Mountpoint\";
      }
      {
         f_fstype     = \$10;
         f_options    = substr(\$11, 2, length(\$11) - 2);
         if ( \"$2\" ~ /(Free|Open|Net)BSD/ ) {
            f_fstype  = substr(\$9, 2, length(\$9) - 2);
            f_options = substr(\$0, index(\$0, \",\") + 2);
            f_options = substr(f_options, 1, length(f_options) - 1);
         }
         printf spec, \$1, \$2, \$5, f_fstype, f_options, \$6;
      }
   " "${file}"
}

parse_fdisk () { local PTFUNCNAME=parse_fdisk;
   local file="$1"

   [ -s "$file" ] || return

   awk '
      BEGIN {
         format="%-12s %4s %10s %10s %18s\n";
         printf(format, "Device", "Type", "Start", "End", "Size");
         printf(format, "============", "====", "==========", "==========", "==================");
      }
      /Disk.*bytes/ {
         disk = substr($2, 1, length($2) - 1);
         size = $5;
         printf(format, disk, "Disk", "", "", size);
      }
      /Units/ {
         units = $9;
      }
      /^\/dev/ {
         if ( $2 == "*" ) {
            start = $3;
            end   = $4;
         }
         else {
            start = $2;
            end   = $3;
         }
         printf(format, $1, "Part", start, end, sprintf("%.0f", (end - start) * units));
      }
   ' "${file}"
}

parse_ethernet_controller_lspci () { local PTFUNCNAME=parse_ethernet_controller_lspci;
   local file="$1"

   [ -e "$file" ] || return

   grep -i ethernet "${file}" | cut -d: -f3 | while read line; do
      name_val "Controller" "${line}"
   done
}

parse_hpacucli () { local PTFUNCNAME=parse_hpacucli;
   local file="$1"
   [ -e "$file" ] || return
   grep 'logicaldrive\|physicaldrive' "${file}"
}

parse_arcconf () { local PTFUNCNAME=parse_arcconf;
   local file="$1"

   [ -e "$file" ] || return

   local model="$(awk -F: '/Controller Model/{print $2}' "${file}")"
   local chan="$(awk -F: '/Channel description/{print $2}' "${file}")"
   local cache="$(awk -F: '/Installed memory/{print $2}' "${file}")"
   local status="$(awk -F: '/Controller Status/{print $2}' "${file}")"
   name_val "Specs" "$(echo "$model" | sed -e 's/ //'),${chan},${cache} cache,${status}"

   local battery=""
   if grep -q "ZMM" "$file"; then
      battery="$(grep -A2 'Controller ZMM Information' "$file" \
                  | awk '/Status/ {s=$4}
                         END      {printf "ZMM %s", s}')"
   else
      battery="$(grep -A5 'Controller Battery Info' "${file}" \
         | awk '/Capacity remaining/ {c=$4}
               /Status/             {s=$3}
               /Time remaining/     {t=sprintf("%dd%dh%dm", $7, $9, $11)}
               END                  {printf("%d%%, %s remaining, %s", c, t, s)}')"
   fi
   name_val "Battery" "${battery}"

   echo
   echo "  LogicalDev Size      RAID Disks Stripe Status  Cache"
   echo "  ========== ========= ==== ===== ====== ======= ======="
   for dev in $(awk '/Logical device number/{print $4}' "${file}"); do
      sed -n -e "/^Logical device .* ${dev}$/,/^$\|^Logical device number/p" "${file}" \
      | awk '
         /Logical device name/               {d=$5}
         /Size/                              {z=$3 " " $4}
         /RAID level/                        {r=$4}
         /Group [0-9]/                       {g++}
         /Stripe-unit size/                  {p=$4 " " $5}
         /Status of logical/                 {s=$6}
         /Write-cache mode.*Ena.*write-back/ {c="On (WB)"}
         /Write-cache mode.*Ena.*write-thro/ {c="On (WT)"}
         /Write-cache mode.*Disabled/        {c="Off"}
         END {
            printf("  %-10s %-9s %4d %5d %-6s %-7s %-7s\n",
               d, z, r, g, p, s, c);
         }'
   done

   echo
   echo "  PhysiclDev State   Speed         Vendor  Model        Size        Cache"
   echo "  ========== ======= ============= ======= ============ =========== ======="

   local tempresult=""
   sed -n -e '/Physical Device information/,/^$/p' "${file}" \
      | awk -F: '
         /Device #[0-9]/ {
            device=substr($0, index($0, "#"));
            devicenames[device]=device;
         }
         /Device is a/ {
            devices[device ",isa"] = substr($0, index($0, "is a") + 5);
         }
         /State/ {
            devices[device ",state"] = substr($2, 2);
         }
         /Transfer Speed/ {
            devices[device ",speed"] = substr($2, 2);
         }
         /Vendor/ {
            devices[device ",vendor"] = substr($2, 2);
         }
         /Model/ {
            devices[device ",model"] = substr($2, 2);
         }
         /Size/ {
            devices[device ",size"] = substr($2, 2);
         }
         /Write Cache/ {
            if ( $2 ~ /Enabled .write-back./ )
               devices[device ",cache"] = "On (WB)";
            else
               if ( $2 ~ /Enabled .write-th/ )
                  devices[device ",cache"] = "On (WT)";
               else
                  devices[device ",cache"] = "Off";
         }
         END {
            for ( device in devicenames ) {
               if ( devices[device ",isa"] ~ /Hard drive/ ) {
                  printf("  %-10s %-7s %-13s %-7s %-12s %-11s %-7s\n",
                     devices[device ",isa"],
                     devices[device ",state"],
                     devices[device ",speed"],
                     devices[device ",vendor"],
                     devices[device ",model"],
                     devices[device ",size"],
                     devices[device ",cache"]);
               }
            }
         }'
}

parse_fusionmpt_lsiutil () { local PTFUNCNAME=parse_fusionmpt_lsiutil;
   local file="$1"
   echo
   awk '/LSI.*Firmware/ { print " ", $0 }' "${file}"
   grep . "${file}" | sed -n -e '/B___T___L/,$ {s/^/  /; p}'
}

parse_lsi_megaraid_adapter_info () { local PTFUNCNAME=parse_lsi_megaraid_adapter_info;
   local file="$1"

   [ -e "$file" ] || return

   local name="$(awk -F: '/Product Name/{print substr($2, 2)}' "${file}")";
   local int=$(awk '/Host Interface/{print $4}' "${file}");
   local prt=$(awk '/Number of Backend Port/{print $5}' "${file}");
   local bbu=$(awk '/^BBU             :/{print $3}' "${file}");
   local mem=$(awk '/Memory Size/{print $4}' "${file}");
   local vdr=$(awk '/Virtual Drives/{print $4}' "${file}");
   local dvd=$(awk '/Degraded/{print $3}' "${file}");
   local phy=$(awk '/^  Disks/{print $3}' "${file}");
   local crd=$(awk '/Critical Disks/{print $4}' "${file}");
   local fad=$(awk '/Failed Disks/{print $4}' "${file}");

   name_val "Model" "${name}, ${int} interface, ${prt} ports"
   name_val "Cache" "${mem} Memory, BBU ${bbu}"
}

parse_lsi_megaraid_bbu_status () { local PTFUNCNAME=parse_lsi_megaraid_bbu_status;
   local file="$1"

   [ -e "$file" ] || return

   local charge=$(awk '/Relative State/{print $5}' "${file}");
   local temp=$(awk '/^Temperature/{print $2}' "${file}");
   local soh=$(awk '/isSOHGood:/{print $2}' "${file}");
   name_val "BBU" "${charge}% Charged, Temperature ${temp}C, isSOHGood=${soh}"
}

format_lvs () { local PTFUNCNAME=format_lvs;
   local file="$1"
   if [ -e "$file" ]; then
      grep -v "open failed" "$file"
   else
      echo "Unable to collect information";
   fi
}

parse_lsi_megaraid_devices () { local PTFUNCNAME=parse_lsi_megaraid_devices;
   local file="$1"

   [ -e "$file" ] || return

   echo
   echo "  PhysiclDev Type State   Errors Vendor  Model        Size"
   echo "  ========== ==== ======= ====== ======= ============ ==========="
   for dev in $(awk '/Device Id/{print $3}' "${file}"); do
      sed -e '/./{H;$!d;}' -e "x;/Device Id: ${dev}/!d;" "${file}" \
      | awk '
         /Media Type/                        {d=substr($0, index($0, ":") + 2)}
         /PD Type/                           {t=$3}
         /Firmware state/                    {s=$3}
         /Media Error Count/                 {me=$4}
         /Other Error Count/                 {oe=$4}
         /Predictive Failure Count/          {pe=$4}
         /Inquiry Data/                      {v=$3; m=$4;}
         /Raw Size/                          {z=$3}
         END {
            printf("  %-10s %-4s %-7s %6s %-7s %-12s %-7s\n",
               substr(d, 1, 10), t, s, me "/" oe "/" pe, v, m, z);
         }'
   done
}

parse_lsi_megaraid_virtual_devices () { local PTFUNCNAME=parse_lsi_megaraid_virtual_devices;
   local file="$1"

   [ -e "$file" ] || return

   echo
   echo "  VirtualDev Size      RAID Level Disks SpnDpth Stripe Status  Cache"
   echo "  ========== ========= ========== ===== ======= ====== ======= ========="
   awk '
      /^Virtual (Drive|Disk):/ {
         device              = $3;
         devicenames[device] = device;
      }
      /Number Of Drives/ {
         devices[device ",numdisks"] = substr($0, index($0, ":") + 1);
      }
      /^Name/ {
         devices[device ",name"] = substr($0, index($0, ":") + 1) > "" ? substr($0, index($0, ":") + 1) : "(no name)";
      }
      /RAID Level/ {
         devices[device ",primary"]   = substr($3, index($3, "-") + 1, 1);
         devices[device ",secondary"] = substr($4, index($4, "-") + 1, 1);
         devices[device ",qualifier"] = substr($NF, index($NF, "-") + 1, 1);
      }
      /Span Depth/ {
         devices[device ",spandepth"] = substr($2, index($2, ":") + 1);
      }
      /Number of Spans/ {
         devices[device ",numspans"] = $4;
      }
      /^Size/ {
         devices[device ",size"] = substr($0, index($0, ":") + 1);
      }
      /^State/ {
         devices[device ",state"] = substr($0, index($0, ":") + 2);
      }
      /^Stripe? Size/ {
         devices[device ",stripe"] = substr($0, index($0, ":") + 1);
      }
      /^Current Cache Policy/ {
         devices[device ",wpolicy"] = $4 ~ /WriteBack/ ? "WB" : "WT";
         devices[device ",rpolicy"] = $5 ~ /ReadAheadNone/ ? "no RA" : "RA";
      }
      END {
         for ( device in devicenames ) {
            raid = 0;
            if ( devices[device ",primary"] == 1 ) {
               raid = 1;
               if ( devices[device ",secondary"] == 3 ) {
                  raid = 10;
               }
            }
            else {
               if ( devices[device ",primary"] == 5 ) {
                  raid = 5;
               }
            }
            printf("  %-10s %-9s %-10s %5d %7s %6s %-7s %s\n",
               device devices[device ",name"],
               devices[device ",size"],
               raid " (" devices[device ",primary"] "-" devices[device ",secondary"] "-" devices[device ",qualifier"] ")",
               devices[device ",numdisks"],
               devices[device ",spandepth"] "-" devices[device ",numspans"],
               devices[device ",stripe"], devices[device ",state"],
               devices[device ",wpolicy"] ", " devices[device ",rpolicy"]);
         }
      }' "${file}"
}

format_vmstat () { local PTFUNCNAME=format_vmstat;
   local file="$1"

   [ -e "$file" ] || return

   awk "
      BEGIN {
         format = \"  %2s %2s  %4s %4s %5s %5s %6s %6s %3s %3s %3s %3s %3s\n\";
      }
      /procs/ {
         print  \"  procs  ---swap-- -----io---- ---system---- --------cpu--------\";
      }
      /bo/ {
         printf format, \"r\", \"b\", \"si\", \"so\", \"bi\", \"bo\", \"ir\", \"cs\", \"us\", \"sy\", \"il\", \"wa\", \"st\";
      }
      \$0 !~ /r/ {
            fuzzy_var = \$1;   ${fuzzy_formula}  r   = fuzzy_var;
            fuzzy_var = \$2;   ${fuzzy_formula}  b   = fuzzy_var;
            fuzzy_var = \$7;   ${fuzzy_formula}  si  = fuzzy_var;
            fuzzy_var = \$8;   ${fuzzy_formula}  so  = fuzzy_var;
            fuzzy_var = \$9;   ${fuzzy_formula}  bi  = fuzzy_var;
            fuzzy_var = \$10;  ${fuzzy_formula}  bo  = fuzzy_var;
            fuzzy_var = \$11;  ${fuzzy_formula}  ir  = fuzzy_var;
            fuzzy_var = \$12;  ${fuzzy_formula}  cs  = fuzzy_var;
            fuzzy_var = \$13;                    us  = fuzzy_var;
            fuzzy_var = \$14;                    sy  = fuzzy_var;
            fuzzy_var = \$15;                    il  = fuzzy_var;
            fuzzy_var = \$16;                    wa  = fuzzy_var;
            fuzzy_var = \$17;                    st  = fuzzy_var;
            printf format, r, b, si, so, bi, bo, ir, cs, us, sy, il, wa, st;
         }
   " "${file}"
}

processes_section () { local PTFUNCNAME=processes_section;
   local top_process_file="$1"
   local notable_procs_file="$2"
   local vmstat_file="$3"
   local platform="$4"

   section "Top Processes"
   cat "$top_process_file"
   section "Notable Processes"
   cat "$notable_procs_file"
   if [ -e "$vmstat_file" ]; then
      section "Simplified and fuzzy rounded vmstat (wait please)"
      wait # For the process we forked that was gathering vmstat samples
      if [ "${platform}" = "Linux" ]; then
         format_vmstat "$vmstat_file"
      else
         cat "$vmstat_file"
      fi
   fi
}

section_Processor () {
   local platform="$1"
   local data_dir="$2"

   section "Processor"

   if [ -e "$data_dir/proc_cpuinfo_copy" ]; then
      parse_proc_cpuinfo "$data_dir/proc_cpuinfo_copy"
   elif [ "${platform}" = "FreeBSD" ]; then
      parse_sysctl_cpu_freebsd "$data_dir/sysctl"
   elif [ "${platform}" = "NetBSD" ]; then
      parse_sysctl_cpu_netbsd "$data_dir/sysctl"
   elif [ "${platform}" = "OpenBSD" ]; then
      parse_sysctl_cpu_openbsd "$data_dir/sysctl"
   elif [ "${platform}" = "SunOS" ]; then
      parse_psrinfo_cpus "$data_dir/psrinfo_minus_v"
   fi
}

section_Memory () {
   local platform="$1"
   local data_dir="$2"

   section "Memory"
   if [ "${platform}" = "Linux" ]; then
      parse_free_minus_b "$data_dir/memory"
   elif [ "${platform}" = "FreeBSD" ]; then
      parse_memory_sysctl_freebsd "$data_dir/sysctl"
   elif [ "${platform}" = "NetBSD" ]; then
      parse_memory_sysctl_netbsd "$data_dir/sysctl" "$data_dir/swapctl"
   elif [ "${platform}" = "OpenBSD" ]; then
      parse_memory_sysctl_openbsd "$data_dir/sysctl" "$data_dir/swapctl"
   elif [ "${platform}" = "SunOS" ]; then
      name_val "Memory" "$(cat "$data_dir/memory")"
   fi

   local rss=$( get_var "rss" "$data_dir/summary" )
   name_val "UsedRSS" "$(shorten ${rss} 1)"

   if [ "${platform}" = "Linux" ]; then
      name_val "Swappiness" "$(get_var "swappiness" "$data_dir/summary")"
      name_val "DirtyPolicy" "$(get_var "dirtypolicy" "$data_dir/summary")"
      local dirty_status="$(get_var "dirtystatus" "$data_dir/summary")"
      if [ -n "$dirty_status" ]; then
         name_val "DirtyStatus" "$dirty_status"
      fi
   fi

   if [ -s "$data_dir/dmidecode" ]; then
      parse_dmidecode_mem_devices "$data_dir/dmidecode"
   fi
}

parse_uptime () {
   local file="$1"

   awk ' / up / {
            printf substr($0, index($0, " up ")+4 );
         }
         !/ up / {
            printf $0;
         }
' "$file"
}

report_system_summary () { local PTFUNCNAME=report_system_summary;
   local data_dir="$1"

   section "Percona Toolkit System Summary Report"


   [ -e "$data_dir/summary" ] \
      || die "The data directory doesn't have a summary file, exiting."

   local platform="$(get_var "platform" "$data_dir/summary")"
   name_val "Date" "`date -u +'%F %T UTC'` (local TZ: `date +'%Z %z'`)"
   name_val "Hostname" "$(get_var hostname "$data_dir/summary")"
   name_val "Uptime" "$(parse_uptime "$data_dir/uptime")"

   if [ "$(get_var "vendor" "$data_dir/summary")" ]; then
      name_val "System" "$(get_var "system" "$data_dir/summary")";
      name_val "Service Tag" "$(get_var "servicetag" "$data_dir/summary")";
   fi

   name_val "Platform" "${platform}"
   local zonename="$(get_var zonename "$data_dir/summary")";
   [ -n "${zonename}" ] && name_val "Zonename" "$zonename"

   name_val "Release" "$(get_var "release" "$data_dir/summary")"
   name_val "Kernel" "$(get_var "kernel" "$data_dir/summary")"

   name_val "Architecture" "CPU = $(get_var "CPU_ARCH" "$data_dir/summary"), OS = $(get_var "OS_ARCH" "$data_dir/summary")"

   local threading="$(get_var threading "$data_dir/summary")"
   local compiler="$(get_var compiler "$data_dir/summary")"
   [ -n "$threading" ] && name_val "Threading" "$threading"
   [ -n "$compiler"  ] && name_val "Compiler" "$compiler"

   local getenforce="$(get_var getenforce "$data_dir/summary")"
   [ -n "$getenforce" ] && name_val "SELinux" "${getenforce}";

   name_val "Virtualized" "$(get_var "virt" "$data_dir/summary")"

   section_Processor "$platform" "$data_dir"

   section_Memory    "$platform" "$data_dir"


   if [ -s "$data_dir/mounted_fs" ]; then
      section "Mounted Filesystems"
      parse_filesystems "$data_dir/mounted_fs" "${platform}"
   fi

   if [ "${platform}" = "Linux" ]; then

      section "Disk Schedulers And Queue Size"
      local disks="$( get_var "internal::disks" "$data_dir/summary" )"
      for disk in $disks; do
         local scheduler="$( get_var "internal::${disk}" "$data_dir/summary" )"
         name_val "${disk}" "${scheduler:-"UNREADABLE"}"
      done

      section "Disk Partitioning"
      parse_fdisk "$data_dir/partitioning"

      section "Kernel Inode State"
      for file in dentry-state file-nr inode-nr; do
         name_val "${file}" "$(get_var "${file}" "$data_dir/summary")"
      done

      section "LVM Volumes"
      format_lvs "$data_dir/lvs"
      section "LVM Volume Groups"
      format_lvs "$data_dir/vgs"
   fi

   section "RAID Controller"
   local controller="$(get_var "raid_controller" "$data_dir/summary")"
   name_val "Controller" "$controller"
   local key="$(get_var "internal::raid_opt" "$data_dir/summary")"
   case "$key" in
      0)
         cat "$data_dir/raid-controller"
         ;;
      1)
         parse_arcconf "$data_dir/raid-controller"
         ;;
      2)
         parse_hpacucli "$data_dir/raid-controller"
         ;;
      3)
         [ -e "$data_dir/lsi_megaraid_adapter_info.tmp" ] && \
            parse_lsi_megaraid_adapter_info "$data_dir/lsi_megaraid_adapter_info.tmp"
         [ -e "$data_dir/lsi_megaraid_bbu_status.tmp" ] && \
            parse_lsi_megaraid_bbu_status "$data_dir/lsi_megaraid_bbu_status.tmp"
         if [ -e "$data_dir/lsi_megaraid_devices.tmp" ]; then
            parse_lsi_megaraid_virtual_devices "$data_dir/lsi_megaraid_devices.tmp"
            parse_lsi_megaraid_devices "$data_dir/lsi_megaraid_devices.tmp"
         fi
         ;;
   esac

   if [ "${OPT_SUMMARIZE_NETWORK}" ]; then
      if [ "${platform}" = "Linux" ]; then
         section "Network Config"
         if [ -s "$data_dir/lspci_file" ]; then
            parse_ethernet_controller_lspci "$data_dir/lspci_file"
         fi
         if grep "net.ipv4.tcp_fin_timeout" "$data_dir/sysctl" > /dev/null 2>&1; then
            name_val "FIN Timeout" "$(awk '/net.ipv4.tcp_fin_timeout/{print $NF}' "$data_dir/sysctl")"
            name_val "Port Range" "$(awk '/net.ipv4.ip_local_port_range/{print $NF}' "$data_dir/sysctl")"
         fi
      fi


      if [ -s "$data_dir/ip" ]; then
         section "Interface Statistics"
         parse_ip_s_link "$data_dir/ip"
      fi

      if [ -s "$data_dir/network_devices" ]; then
         section "Network Devices"
         parse_ethtool "$data_dir/network_devices"
      fi

      if [ "${platform}" = "Linux" -a -e "$data_dir/netstat" ]; then
         section "Network Connections"
         parse_netstat "$data_dir/netstat"
      fi
   fi

   [ "$OPT_SUMMARIZE_PROCESSES" ] && processes_section           \
                                       "$data_dir/processes"     \
                                       "$data_dir/notable_procs" \
                                       "$data_dir/vmstat"        \
                                       "$platform"

   section "The End"
}

# ###########################################################################
# End report_system_info package
# ###########################################################################

# ##############################################################################
# The main() function is called at the end of the script.  This makes it
# testable.  Major bits of parsing are separated into functions for testability.
# ##############################################################################
main () { local PTFUNCNAME=main;
   trap sigtrap HUP INT TERM

   local RAN_WITH="--sleep=$OPT_SLEEP --save-samples=$OPT_SAVE_SAMPLES --read-samples=$OPT_READ_SAMPLES"

   # Begin by setting the $PATH to include some common locations that are not
   # always in the $PATH, including the "sbin" locations, and some common
   # locations for proprietary management software, such as RAID controllers.
   export PATH="${PATH}:/usr/local/bin:/usr/bin:/bin:/usr/libexec"
   export PATH="${PATH}:/usr/local/sbin:/usr/sbin:/sbin"
   export PATH="${PATH}:/usr/StorMan/:/opt/MegaRAID/MegaCli/"

   setup_commands

   _d "Starting $0 $RAN_WITH"

   # Set up temporary files.
   mk_tmpdir

   local data_dir="$(setup_data_dir "${OPT_SAVE_SAMPLES:-""}")"

   if [ -n "$OPT_READ_SAMPLES" -a -d "$OPT_READ_SAMPLES" ]; then
      data_dir="$OPT_READ_SAMPLES"
   else
      collect_system_data "$data_dir" 2>"$data_dir/collect.err"
   fi

   report_system_summary "$data_dir"

   rm_tmpdir
}

sigtrap() { local PTFUNCNAME=sigtrap;
   warn "Caught signal, forcing exit"
   rm_tmpdir
   exit $EXIT_STATUS
}

# Execute the program if it was not included from another file.  This makes it
# possible to include without executing, and thus test.
if    [ "${0##*/}" = "$TOOL" ] \
   || [ "${0##*/}" = "bash" -a "$_" = "$0" ]; then

   # Set up temporary dir.
   mk_tmpdir
   # Parse command line options.
   parse_options "$0" "$@"
   usage_or_errors "$0"
   po_status=$?
   rm_tmpdir

   if [ $po_status -ne 0 ]; then
      exit $po_status
   fi

   main "$@"
fi


# ############################################################################
# Documentation
# ############################################################################
:<<'DOCUMENTATION'
=pod

=head1 NAME

pt-summary - Summarize system information nicely.

=head1 SYNOPSIS

Usage: pt-summary

pt-summary conveniently summarizes the status and configuration of a server.
It is not a tuning tool or diagnosis tool.  It produces a report that is easy
to diff and can be pasted into emails without losing the formatting.  This
tool works well on many types of Unix systems.

Download and run:

   wget http://percona.com/get/pt-summary
   bash ./pt-summary

=head1 RISKS

The following section is included to inform users about the potential risks,
whether known or unknown, of using this tool.  The two main categories of risks
are those created by the nature of the tool (e.g. read-only tools vs. read-write
tools) and those created by bugs.

pt-summary is a read-only tool.  It should be very low-risk.

At the time of this release, we know of no bugs that could harm users.

The authoritative source for updated information is always the online issue
tracking system.  Issues that affect this tool will be marked as such.  You can
see a list of such issues at the following URL:
L<http://www.percona.com/bugs/pt-summary>.

See also L<"BUGS"> for more information on filing bugs and getting help.

=head1 DESCRIPTION

pt-summary runs a large variety of commands to inspect system status and
configuration, saves the output into files in a temporary directory, and
then runs Unix commands on these results to format them nicely.  It works
best when executed as a privileged user, but will also work without privileges,
although some output might not be possible to generate without root.

=head1 OUTPUT

Many of the outputs from this tool are deliberately rounded to show their
magnitude but not the exact detail. This is called fuzzy-rounding. The idea is
that it doesn't matter whether a particular counter is 918 or 921; such a small
variation is insignificant, and only makes the output hard to compare to other
servers. Fuzzy-rounding rounds in larger increments as the input grows. It
begins by rounding to the nearest 5, then the nearest 10, nearest 25, and then
repeats by a factor of 10 larger (50, 100, 250), and so on, as the input grows.
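
The scheme described above can be pictured with a small shell/awk helper. This
is an illustrative sketch only, not the tool's exact code: pick a step of 5, 10,
or 25 scaled by a growing power of ten, then round to the nearest multiple of
that step.

```shell
# Sketch of fuzzy-rounding (illustrative, not pt-summary's exact formula):
# choose a step of 5, 10, or 25 scaled by a power of ten that fits the input,
# then round the value to the nearest multiple of that step.
fuzzy_round () {
   awk -v v="$1" 'BEGIN {
      factor = 1;
      while ( 1 ) {
         if      ( v <=  50 * factor ) { step =  5 * factor }
         else if ( v <= 100 * factor ) { step = 10 * factor }
         else if ( v <= 250 * factor ) { step = 25 * factor }
         else    { factor *= 10; continue }
         printf("%d\n", sprintf("%.0f", v / step) * step);
         exit;
      }
   }'
}
fuzzy_round 918   # -> 900
fuzzy_round 921   # -> 900
```

Note how 918 and 921 both land on 900, which is exactly why fuzzy-rounded
reports diff cleanly against each other.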

The following is a simple report generated from a CentOS virtual machine,
broken into sections with commentary following each section. Some long lines
are reformatted for clarity when reading this documentation as a manual page in
a terminal.

 # Percona Toolkit System Summary Report ######################
         Date | 2012-03-30 00:58:07 UTC (local TZ: EDT -0400)
     Hostname | localhost.localdomain
       Uptime | 20:58:06 up 1 day, 20 min, 1 user,
                load average: 0.14, 0.18, 0.18
       System | innotek GmbH; VirtualBox; v1.2 ()
  Service Tag | 0
     Platform | Linux
      Release | CentOS release 5.5 (Final)
       Kernel | 2.6.18-194.el5
 Architecture | CPU = 32-bit, OS = 32-bit
    Threading | NPTL 2.5
     Compiler | GNU CC version 4.1.2 20080704 (Red Hat 4.1.2-48).
      SELinux | Enforcing
  Virtualized | VirtualBox

This section shows the current date and time, and a synopsis of the server and
operating system.

 # Processor ##################################################
   Processors | physical = 1, cores = 0, virtual = 1, hyperthreading = no
       Speeds | 1x2510.626
       Models | 1xIntel(R) Core(TM) i5-2400S CPU @ 2.50GHz
       Caches | 1x6144 KB

This section is derived from F</proc/cpuinfo>.

 # Memory #####################################################
        Total | 503.2M
         Free | 29.0M
         Used | physical = 474.2M, swap allocated = 1.0M,
                swap used = 16.0k, virtual = 474.3M
      Buffers | 33.9M
       Caches | 262.6M
        Dirty | 396 kB
      UsedRSS | 201.9M
   Swappiness | 60
  DirtyPolicy | 40, 10
  Locator  Size  Speed    Form Factor  Type    Type Detail
  =======  ====  =====    ===========  ====    ===========

Information about memory is gathered from C<free>. The Used statistic is the
total of the rss sizes displayed by C<ps>. The Dirty statistic for the cached
value comes from F</proc/meminfo>. On Linux, the swappiness settings are
gathered from C<sysctl>. The final portion of this section is a table of the
DIMMs, which comes from C<dmidecode>. In this example there is no output.
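
The rss total behind the UsedRSS statistic can be sketched as below. This is a
simplified illustration (the tool's actual command may differ): sum an RSS
column of kilobyte values, such as `ps -eo rss=` would print, and report
megabytes.

```shell
# Simplified sketch: sum RSS values (kB, one per line, as "ps -eo rss="
# prints them) and report the total in megabytes.
rss_total () {
   awk '{ total += $1 } END { printf("%.1fM\n", total / 1024) }'
}
printf '1024\n2048\n' | rss_total   # -> 3.0M
```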

 # Mounted Filesystems ########################################
   Filesystem                       Size Used Type  Opts Mountpoint
   /dev/mapper/VolGroup00-LogVol00   15G  17% ext3  rw   /
   /dev/sda1                         99M  13% ext3  rw   /boot
   tmpfs                            252M   0% tmpfs rw   /dev/shm

The mounted filesystem section is a combination of information from C<mount> and
C<df>. This section is skipped if you disable L<"--summarize-mounts">.

 # Disk Schedulers And Queue Size #############################
         dm-0 | UNREADABLE
         dm-1 | UNREADABLE
          hdc | [cfq] 128
          md0 | UNREADABLE
          sda | [cfq] 128

The disk scheduler information is extracted from the F</sys> filesystem in
Linux.
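
As a sketch of what the extraction involves (assuming the usual sysfs format,
where the active scheduler is the bracketed entry in
/sys/block/<dev>/queue/scheduler):

```shell
# Sketch: pull the active (bracketed) scheduler out of a line such as
# "noop anticipatory deadline [cfq]" from /sys/block/<dev>/queue/scheduler.
active_scheduler () {
   printf '%s\n' "$1" | sed 's/.*\(\[[^]]*\]\).*/\1/'
}
active_scheduler "noop anticipatory deadline [cfq]"   # -> [cfq]
```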

 # Disk Partitioning ##########################################
 Device       Type      Start        End               Size
 ============ ==== ========== ========== ==================
 /dev/sda     Disk                              17179869184
 /dev/sda1    Part          1         13           98703360
 /dev/sda2    Part         14       2088        17059230720

Information about disk partitioning comes from C<fdisk -l>.

 # Kernel Inode State #########################################
 dentry-state | 10697 8559  45 0  0  0
      file-nr | 960   0  50539
     inode-nr | 14059 8139

These lines are from the files of the same name in the F</proc/sys/fs>
directory on Linux. Read the C<proc> man page to learn about the meaning of
these files on your system.

 # LVM Volumes ################################################
 LV       VG         Attr   LSize   Origin Snap% Move Log Copy% Convert
 LogVol00 VolGroup00 -wi-ao 269.00G                                      
 LogVol01 VolGroup00 -wi-ao   9.75G   

This section shows the output of C<lvs>.

 # RAID Controller ############################################
   Controller | No RAID controller detected

The tool can detect a variety of RAID controllers by examining C<lspci> and
C<dmesg> information. If the controller software is installed on the system, in
many cases it is able to execute status commands and show a summary of the RAID
controller's status and configuration. If your system is not supported, please
file a bug report.

 # Network Config #############################################
   Controller | Intel Corporation 82540EM Gigabit Ethernet Controller
  FIN Timeout | 60
   Port Range | 61000

The network controllers attached to the system are detected from C<lspci>. The
TCP/IP protocol configuration parameters are extracted from C<sysctl>.  You can
skip this section by disabling the L<"--summarize-network"> option.

 # Interface Statistics #######################################
 interface rx_bytes rx_packets rx_errors tx_bytes tx_packets tx_errors
 ========= ======== ========== ========= ======== ========== =========
 lo        60000000      12500         0 60000000      12500         0
 eth0      15000000      80000         0  1500000      10000         0
 sit0             0          0         0        0          0         0

Interface statistics are gathered from C<ip -s link> and are fuzzy-rounded. The
columns are received and transmitted bytes, packets, and errors.  You can skip
this section by disabling the L<"--summarize-network"> option.

 # Network Connections ########################################
   Connections from remote IP addresses
     127.0.0.1           2
   Connections to local IP addresses
     127.0.0.1           2
   Connections to top 10 local ports
     38346               1
     60875               1
   States of connections
     ESTABLISHED         5
     LISTEN              8

This section shows a summary of network connections, retrieved from C<netstat>
and "fuzzy-rounded" to make them easier to compare when the numbers grow large.
There are two sub-sections showing how many connections there are per origin
and destination IP address, and a sub-section showing the count of ports in
use.  The section ends with the count of the network connections' states.  You
can skip this section by disabling the L<"--summarize-network"> option.

 # Top Processes ##############################################
   PID USER  PR  NI  VIRT  RES  SHR S %CPU %MEM    TIME+  COMMAND
     1 root  15   0  2072  628  540 S  0.0  0.1   0:02.55 init
     2 root  RT  -5     0    0    0 S  0.0  0.0   0:00.00 migration/0
     3 root  34  19     0    0    0 S  0.0  0.0   0:00.03 ksoftirqd/0
     4 root  RT  -5     0    0    0 S  0.0  0.0   0:00.00 watchdog/0
     5 root  10  -5     0    0    0 S  0.0  0.0   0:00.97 events/0
     6 root  10  -5     0    0    0 S  0.0  0.0   0:00.00 khelper
     7 root  10  -5     0    0    0 S  0.0  0.0   0:00.00 kthread
    10 root  10  -5     0    0    0 S  0.0  0.0   0:00.13 kblockd/0
    11 root  20  -5     0    0    0 S  0.0  0.0   0:00.00 kacpid
 # Notable Processes ##########################################
   PID    OOM    COMMAND
  2028    +0    sshd

This section shows the first few lines of C<top> so that you can see what
processes are actively using CPU time.  The notable processes include the SSH
daemon and any process whose out-of-memory-killer priority is set to 17. You
can skip this section by disabling the L<"--summarize-processes"> option.

 # Simplified and fuzzy rounded vmstat (wait please) ##########
   procs  ---swap-- -----io---- ---system---- --------cpu--------
    r  b    si   so    bi    bo     ir     cs  us  sy  il  wa  st
    2  0     0    0     3    15     30    125   0   0  99   0   0
    0  0     0    0     0     0   1250    800   6  10  84   0   0
    0  0     0    0     0     0   1000    125   0   0 100   0   0
    0  0     0    0     0     0   1000    125   0   0 100   0   0
    0  0     0    0     0   450   1000    125   0   1  88  11   0
 # The End ####################################################

This section is a trimmed-down sample of C<vmstat 1 5>, so you can see the
general status of the system at present. The values in the table are
fuzzy-rounded, except for the CPU columns.  You can skip this section by
disabling the L<"--summarize-processes"> option.

=head1 OPTIONS

=over

=item --config

type: string

Read this comma-separated list of config files.  If specified, this must be the
first option on the command line.

=item --help

Print help and exit.

=item --save-samples

type: string

Save the collected data in this directory.

=item --read-samples

type: string

Create a report from the files in this directory.

=item --summarize-mounts

default: yes; negatable: yes

Report on mounted filesystems and disk usage.

=item --summarize-network

default: yes; negatable: yes

Report on network controllers and configuration.

=item --summarize-processes

default: yes; negatable: yes

Report on top processes and C<vmstat> output.

=item --sleep

type: int; default: 5

How long to sleep when gathering samples from vmstat.

=item --version

Print tool's version and exit.

=back

=head1 SYSTEM REQUIREMENTS

This tool requires the Bourne shell (F</bin/sh>).

=head1 BUGS

For a list of known bugs, see L<http://www.percona.com/bugs/pt-summary>.

Please report bugs at L<https://bugs.launchpad.net/percona-toolkit>.
Include the following information in your bug report:

=over

=item * Complete command-line used to run the tool

=item * Tool L<"--version">

=item * MySQL version of all servers involved

=item * Output from the tool including STDERR

=item * Input files (log/dump/config files, etc.)

=back

If possible, include debugging output by running the tool with C<PTDEBUG>;
see L<"ENVIRONMENT">.

=head1 DOWNLOADING

Visit L<http://www.percona.com/software/percona-toolkit/> to download the
latest release of Percona Toolkit.  Or, get the latest release from the
command line:

   wget percona.com/get/percona-toolkit.tar.gz

   wget percona.com/get/percona-toolkit.rpm

   wget percona.com/get/percona-toolkit.deb

You can also get individual tools from the latest release:

   wget percona.com/get/TOOL

Replace C<TOOL> with the name of any tool.

=head1 AUTHORS

Baron Schwartz and Kevin van Zonneveld (http://kevin.vanzonneveld.net)

=head1 ABOUT PERCONA TOOLKIT

This tool is part of Percona Toolkit, a collection of advanced command-line
tools developed by Percona for MySQL support and consulting.  Percona Toolkit
was forked from two projects in June, 2011: Maatkit and Aspersa.  Those
projects were created by Baron Schwartz and developed primarily by him and
Daniel Nichter, both of whom are employed by Percona.  Visit
L<http://www.percona.com/software/> for more software developed by Percona.

=head1 COPYRIGHT, LICENSE, AND WARRANTY

This program is copyright 2010-2011 Baron Schwartz, 2011-2012 Percona Inc.
Feedback and improvements are welcome.

THIS PROGRAM IS PROVIDED "AS IS" AND WITHOUT ANY EXPRESS OR IMPLIED
WARRANTIES, INCLUDING, WITHOUT LIMITATION, THE IMPLIED WARRANTIES OF
MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE.

This program is free software; you can redistribute it and/or modify it under
the terms of the GNU General Public License as published by the Free Software
Foundation, version 2; OR the Perl Artistic License.  On UNIX and similar
systems, you can issue `man perlgpl' or `man perlartistic' to read these
licenses.

You should have received a copy of the GNU General Public License along with
this program; if not, write to the Free Software Foundation, Inc., 59 Temple
Place, Suite 330, Boston, MA  02111-1307  USA.

=head1 VERSION

pt-summary 2.1.1

=cut

DOCUMENTATION

pt2mm – Perl Script to slurp pt-summary and make XML ready for Freemind

Filed under: Freemind, linux, Percona Toolkit, perl — lancevermilion @ 4:03 pm

I copied the pt-summary script from Percona Toolkit and put it in the post linked below, in case you don’t want to install the whole toolkit.

Percona Toolkit – pt-summary shell script
https://gheeknet.wordpress.com/?p=518

I wrote this so you can do a poor man's manually scripted inventory/documentation of your Linux servers (RHEL-based). The script's output is nicely formatted XML that can be dropped right into a Freemind file.
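
The transformation itself is simple. Here is a rough shell/awk sketch of the
idea (the real pt2mm is Perl, and this is not its exact logic): each
`name | value` line of pt-summary output becomes a Freemind <attribute>
element, with the value XML-escaped.

```shell
# Illustrative sketch (the real pt2mm is Perl): convert "name | value" lines
# from pt-summary output into Freemind <attribute> elements.
to_attributes () {
   awk -F'|' 'NF == 2 {
      name = $1; value = $2;
      sub(/ +$/, "", name);           # keep leading padding, as in the example .mm
      gsub(/^ +| +$/, "", value);
      gsub(/&/, "\\&amp;", value);    # escape XML special characters
      gsub(/</, "\\&lt;",  value);
      gsub(/>/, "\\&gt;",  value);
      printf("<attribute NAME=\"%s\" VALUE=\"%s\"/>\n", name, value);
   }'
}
printf '    Platform | Linux\n' | to_attributes
# -> <attribute NAME="    Platform" VALUE="Linux"/>
```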

Find the .mm file you will use in Freemind (pick a location in the file and copy/paste the node info into it), or create a clean one like the example below.

Example:

<map version="0.9.0">
<!-- To view this file, download free mind mapping software FreeMind from http://freemind.sourceforge.net -->
<node COLOR="#338800" CREATED="1335892427459" ID="ID_1651254375" MODIFIED="1335914182567" TEXT="My Linux Servers">
  <node CREATED="1336084328712" FOLDED="true" ID="ID_545525996" STYLE="fork" TEXT="MON01">
    <node CREATED="1336084330688" FOLDED="true" ID="ID_565609429"  TEXT="System Summary">
      <node CREATED="1336084330688" FOLDED="true" ID="ID_392279737"  TEXT="System Summary">
        <attribute_layout NAME_WIDTH="79" VALUE_WIDTH="450"/>
        <attribute NAME="        Date" VALUE="2012-05-03 22:32:10 UTC (local TZ: MST -0700)"/>
        <attribute NAME="    Hostname" VALUE="centos54.linux.domain"/>
        <attribute NAME="      Uptime" VALUE="41 days, 23:39,  1 user,  load average: 0.68, 0.38, 0.31"/>
        <attribute NAME="      System" VALUE="Dell Inc.; PowerEdge 1950; vNot Specified (&lt;OUT OF SPEC&gt;)"/>
        <attribute NAME=" Service Tag" VALUE="some service tag id"/>
        <attribute NAME="    Platform" VALUE="Linux"/>
        <attribute NAME="     Release" VALUE="CentOS release 5.4 (Final)"/>
        <attribute NAME="      Kernel" VALUE="2.6.18-164.el5PAE"/>
        <attribute NAME="Architecture" VALUE="CPU = 64-bit, OS = 32-bit"/>
        <attribute NAME="   Threading" VALUE="NPTL 2.5"/>
        <attribute NAME="    Compiler" VALUE="GNU CC version 4.1.2 20080704 (Red Hat 4.1.2-44)."/>
        <attribute NAME="     SELinux" VALUE="Permissive"/>
        <attribute NAME=" Virtualized" VALUE="No virtualization detected"/>
      </node>
    </node>
    <node CREATED="1336084330689" FOLDED="true" ID="ID_9081042188"  TEXT="Processor">
      <node CREATED="1336084330689" FOLDED="true" ID="ID_3133169832"  TEXT="Processor">
        <attribute_layout NAME_WIDTH="79" VALUE_WIDTH="450"/>
        <attribute NAME="  Processors" VALUE="physical = 1, cores = 4, virtual = 4, hyperthreading = no"/>
        <attribute NAME="      Speeds" VALUE="4x1995.049"/>
        <attribute NAME="      Models" VALUE="4xIntel(R) Xeon(R) CPU E5335 @ 2.00GHz"/>
        <attribute NAME="      Caches" VALUE="4x4096 KB"/>
      </node>
    </node>
    <node CREATED="1336084330689" FOLDED="true" ID="ID_4516226642" STYLE="fork" TEXT="Memory">
      <node CREATED="1336084330689" FOLDED="true" ID="ID_9303160679" TEXT="Memory Summary">
        <node CREATED="1336084330689" FOLDED="true" ID="ID_6187080024" TEXT="Memory Summary">
          <attribute_layout NAME_WIDTH="79" VALUE_WIDTH="450"/>
          <attribute NAME="       Total" VALUE="2.0G"/>
          <attribute NAME="        Free" VALUE="81.9M"/>
          <attribute NAME="        Used" VALUE="physical = 1.9G, swap allocated = 2.0G, swap used = 780.0k, virtual = 1.9G"/>
          <attribute NAME="     Buffers" VALUE="183.9M"/>
          <attribute NAME="      Caches" VALUE="777.1M"/>
          <attribute NAME="       Dirty" VALUE="6316 kB"/>
          <attribute NAME="     UsedRSS" VALUE="1.8G"/>
          <attribute NAME="  Swappiness" VALUE="60"/>
          <attribute NAME=" DirtyPolicy" VALUE="40, 10"/>
        </node>
      </node>
      <node CREATED="1336084330689" FOLDED="true" ID="ID_7505443526" TEXT="Memory Banks">
        <node CREATED="1336084330689" FOLDED="true" ID="ID_2992848714" TEXT="Memory Banks">
          <attribute_layout NAME_WIDTH="79" VALUE_WIDTH="450"/>
          <attribute NAME="Locator" VALUE="   Size     Speed             Form Factor   Type          Type Detail"/>
          <attribute NAME="DIMM1" VALUE="     1024 MB  667 MHz           FB-DIMM       DDR2 FB-DIMM  Synchronous"/>
          <attribute NAME="DIMM2" VALUE="     1024 MB  667 MHz           FB-DIMM       DDR2 FB-DIMM  Synchronous"/>
          <attribute NAME="DIMM3" VALUE="     {EMPTY}  Unknown           FB-DIMM       DDR2 FB-DIMM  Synchronous"/>
          <attribute NAME="DIMM4" VALUE="     {EMPTY}  Unknown           FB-DIMM       DDR2 FB-DIMM  Synchronous"/>
          <attribute NAME="DIMM5" VALUE="     {EMPTY}  Unknown           FB-DIMM       DDR2 FB-DIMM  Synchronous"/>
          <attribute NAME="DIMM6" VALUE="     {EMPTY}  Unknown           FB-DIMM       DDR2 FB-DIMM  Synchronous"/>
          <attribute NAME="DIMM7" VALUE="     {EMPTY}  Unknown           FB-DIMM       DDR2 FB-DIMM  Synchronous"/>
          <attribute NAME="DIMM8" VALUE="     {EMPTY}  Unknown           FB-DIMM       DDR2 FB-DIMM  Synchronous"/>
        </node>
      </node>
    </node>
    <node CREATED="1336084330689" FOLDED="true" ID="ID_5249004985" TEXT="Mounted Filesystems">
      <node CREATED="1336084330689" FOLDED="true" ID="ID_7641022705" TEXT="Mounted Filesystems">
        <attribute_layout NAME_WIDTH="79" VALUE_WIDTH="450"/>
        <attribute NAME="Mountpoint" VALUE="Filesystem, Size, Used, Type, Opts"/>
        <attribute NAME="/boot" VALUE="/dev/sda1, 99M, 18%, ext3, rw,nosuid"/>
        <attribute NAME="/var" VALUE="/dev/sda3, 21G, 40%, ext3, rw"/>
        <attribute NAME="/tmp" VALUE="/dev/sda5, 494M, 24%, ext3, rw"/>
        <attribute NAME="/home" VALUE="/dev/sda6, 1.9G, 53%, ext3, rw,nosuid"/>
        <attribute NAME="/" VALUE="/dev/sda7, 3.8G, 33%, ext3, rw"/>
        <attribute NAME="/usr" VALUE="/dev/sda8, 3.8G, 35%, ext3, rw"/>
        <attribute NAME="/dev/shm" VALUE="tmpfs, 1014M, 0%, tmpfs, rw"/>
      </node>
    </node>
    <node CREATED="1336084330689" FOLDED="true" ID="ID_5445514046" STYLE="fork" TEXT="Network">
      <node CREATED="1336084330699" FOLDED="true" ID="ID_931470206" TEXT="bond0">
        <node CREATED="1336084330699" FOLDED="true" ID="ID_4868000148" TEXT="bond0">
          <attribute_layout NAME_WIDTH="79" VALUE_WIDTH="450"/>
          <attribute NAME="MIITOOL" VALUE="10 Mbit, half duplex, link ok"/>
          <attribute NAME="ONBOOT" VALUE="yes"/>
          <attribute NAME="IPADDR" VALUE="10.0.1.70"/>
        </node>
      </node>
      <node CREATED="1336084330708" FOLDED="true" ID="ID_3056584660" TEXT="bond1">
        <node CREATED="1336084330708" FOLDED="true" ID="ID_5465277235" TEXT="bond1">
          <attribute_layout NAME_WIDTH="79" VALUE_WIDTH="450"/>
          <attribute NAME="MIITOOL" VALUE="10 Mbit, half duplex, link ok"/>
          <attribute NAME="ONBOOT" VALUE="yes"/>
          <attribute NAME="IPADDR" VALUE="10.0.6.7"/>
        </node>
      </node>
      <node CREATED="1336084330717" FOLDED="true" ID="ID_5479778297" TEXT="eth0">
        <node CREATED="1336084330717" FOLDED="true" ID="ID_5414246572" TEXT="eth0">
          <attribute_layout NAME_WIDTH="79" VALUE_WIDTH="450"/>
          <attribute NAME="MIITOOL" VALUE="negotiated, link ok"/>
          <attribute NAME="HWADDR" VALUE="00:19:B9:E6:2B:31"/>
          <attribute NAME="ONBOOT" VALUE="yes"/>
          <attribute NAME="MASTER" VALUE="bond0"/>
        </node>
      </node>
      <node CREATED="1336084330726" FOLDED="true" ID="ID_6661200172" TEXT="eth1">
        <node CREATED="1336084330726" FOLDED="true" ID="ID_2473564425" TEXT="eth1">
          <attribute_layout NAME_WIDTH="79" VALUE_WIDTH="450"/>
          <attribute NAME="MIITOOL" VALUE="negotiated, link ok"/>
          <attribute NAME="HWADDR" VALUE="00:19:B9:E6:2B:33"/>
          <attribute NAME="ONBOOT" VALUE="yes"/>
          <attribute NAME="MASTER" VALUE="bond1"/>
        </node>
      </node>
      <node CREATED="1336084330734" FOLDED="true" ID="ID_6276371651" TEXT="eth2">
        <node CREATED="1336084330734" FOLDED="true" ID="ID_7696366231" TEXT="eth2">
          <attribute_layout NAME_WIDTH="79" VALUE_WIDTH="450"/>
          <attribute NAME="MIITOOL" VALUE="10 Mbit, half duplex, no link"/>
          <attribute NAME="HWADDR" VALUE="00:15:17:2E:B6:CE"/>
          <attribute NAME="ONBOOT" VALUE="no"/>
        </node>
      </node>
      <node CREATED="1336084330742" FOLDED="true" ID="ID_2722883920" TEXT="eth3">
        <node CREATED="1336084330742" FOLDED="true" ID="ID_1804908795" TEXT="eth3">
          <attribute_layout NAME_WIDTH="79" VALUE_WIDTH="450"/>
          <attribute NAME="MIITOOL" VALUE="10 Mbit, half duplex, no link"/>
          <attribute NAME="HWADDR" VALUE="00:15:17:2E:B6:CF"/>
          <attribute NAME="ONBOOT" VALUE="no"/>
        </node>
      </node>
    </node>
    <node CREATED="1336084330743" FOLDED="true" ID="ID_5436431323" TEXT="Applications">
      <node CREATED="1336084330743" FOLDED="true" ID="ID_9595296346" TEXT="Applications">
        <attribute_layout NAME_WIDTH="125" VALUE_WIDTH="450"/>
        <attribute NAME="aide" VALUE="RPM: aide-0.13.1-6.el5 InstallDate: Wed 04 Apr 2012 11:04:38 AM MST"/>
        <attribute NAME="bind-libs" VALUE="RPM: bind-libs-9.3.6-4.P1.el5 InstallDate: Thu 08 Dec 2011 08:16:01 PM MST"/>
        <attribute NAME="expect" VALUE="RPM: expect-5.43.0-5.1 InstallDate: Thu 08 Dec 2011 08:15:09 PM MST"/>
        <attribute NAME="iplike" VALUE="RPM: iplike-1.0.9-1 InstallDate: Fri 09 Dec 2011 04:10:44 PM MST"/>
        <attribute NAME="java-1.6.0-openjdk" VALUE="RPM: java-1.6.0-openjdk-1.6.0.0-1.25.1.10.6.el5_8 InstallDate: Mon 19 Mar 2012 05:54:07 PM MST"/>
        <attribute NAME="kernel-PAE" VALUE="RPM: kernel-PAE-2.6.18-164.el5 InstallDate: Thu 08 Dec 2011 08:18:02 PM MST"/>
        <attribute NAME="libpcap" VALUE="RPM: libpcap-0.9.4-14.el5 InstallDate: Thu 08 Dec 2011 08:16:02 PM MST"/>
        <attribute NAME="McAfeeVSEForLinux" VALUE="RPM: McAfeeVSEForLinux-1.7.0-28611 InstallDate: Mon 19 Mar 2012 06:06:42 PM MST"/>
        <attribute NAME="MFEcma" VALUE="RPM: MFEcma-4.6.0-1694 InstallDate: Mon 19 Mar 2012 06:02:44 PM MST"/>
        <attribute NAME="MFErt" VALUE="RPM: MFErt-2.0-0 InstallDate: Mon 19 Mar 2012 06:02:43 PM MST"/>
        <attribute NAME="minicom" VALUE="RPM: minicom-2.1-3 InstallDate: Mon 26 Mar 2012 06:18:35 PM MST"/>
        <attribute NAME="net-snmp-utils" VALUE="RPM: net-snmp-utils-5.3.2.2-14.el5_7.1 InstallDate: Mon 06 Feb 2012 09:59:29 AM MST"/>
        <attribute NAME="ntp" VALUE="RPM: ntp-4.2.2p1-9.el5.centos.2 InstallDate: Thu 08 Dec 2011 08:16:38 PM MST"/>
        <attribute NAME="opennms" VALUE="RPM: opennms-1.8.4-1 InstallDate: Thu 08 Dec 2011 08:17:51 PM MST"/>
        <attribute NAME="opennms-core" VALUE="RPM: opennms-core-1.8.4-1 InstallDate: Thu 08 Dec 2011 08:17:10 PM MST"/>
        <attribute NAME="opennms-docs" VALUE="RPM: opennms-docs-1.8.4-1 InstallDate: Thu 08 Dec 2011 08:15:35 PM MST"/>
        <attribute NAME="opennms-plugin-provisioning-rancid" VALUE="RPM: opennms-plugin-provisioning-rancid-1.8.4-1 InstallDate: Thu 08 Dec 2011 08:17:31 PM MST"/>
        <attribute NAME="opennms-webapp-jetty" VALUE="RPM: opennms-webapp-jetty-1.8.4-1 InstallDate: Thu 08 Dec 2011 08:17:30 PM MST"/>
        <attribute NAME="openssh" VALUE="RPM: openssh-4.3p2-36.el5 InstallDate: Thu 08 Dec 2011 08:17:33 PM MST"/>
        <attribute NAME="openssh-server" VALUE="RPM: openssh-server-4.3p2-36.el5 InstallDate: Thu 08 Dec 2011 08:17:39 PM MST"/>
        <attribute NAME="openssl" VALUE="RPM: openssl-0.9.8e-12.el5 InstallDate: Thu 08 Dec 2011 08:15:44 PM MST"/>
        <attribute NAME="postgresql-server" VALUE="RPM: postgresql-server-8.4.3-1PGDG.rhel5 InstallDate: Thu 08 Dec 2011 08:17:36 PM MST"/>
        <attribute NAME="rancid" VALUE="RPM: rancid-2.3.4-1 InstallDate: Thu 08 Dec 2011 08:16:06 PM MST"/>
        <attribute NAME="security-blanket" VALUE="RPM: security-blanket-4.0.8-r17082.el5 InstallDate: Mon 19 Mar 2012 06:08:18 PM MST"/>
        <attribute NAME="syslog-ng-premium-edition" VALUE="RPM: syslog-ng-premium-edition-3.0.5-1.rhel5 InstallDate: Tue 20 Dec 2011 01:58:31 PM MST"/>
        <attribute NAME="viewvc" VALUE="RPM: viewvc-1.0.12-1.el5.rf InstallDate: Thu 08 Dec 2011 08:17:54 PM MST"/>
      </node>
    </node>
  </node>
</node>

Here is the actual script “pt2mm.pl” that does all the work.

#!/usr/bin/perl
# Author: Lance Vermilion <scripting(a)gheek.net>
# Purpose: Convert output from pt-summary (a Percona Toolkit script)
#          to XML that can be pasted directly into a
#          <freemind_file>.mm file.
# Date: May 2, 2012
# Version: 0.1
use strict;
use warnings;
use Sys::Hostname;
use Time::HiRes qw(gettimeofday);

# The applist is the only configurable section here. Leave everything else alone.
my @applist = ( 'McAfeeVSEForLinux','MFEcma','MFErt','mpr','syslog-ng-premium-edition','java-1.6.0-openjdk','heartbeat','security-blanket','tomcat5','tomcat6','mysql-server','postgresql-server','drbd','kernel-PAE','openssh','openssh-server','openssl','kmod-drbd-PAE','bind-chroot','bind','bind-libs','net-snmp-utils','kmod-drbd','caching-nameserver','opennms','opennms-core','opennms-webapp-jetty','opennms-docs','opennms-plugin-provisioning-rancid','iplike','minicom','expect','aide','ntp','viewvc','rancid','libpcap','libpcap-devel' );

my $node_started = 0;
my $begin_skip = 0;
my $tmpsection = '';
my $host = hostname;
$host =~ s/\..*$//g;
my @network = `egrep "^DEVICE|^HWADDR|^IPADDR|^MASTER|^ONBOOT" /etc/sysconfig/network-scripts/ifcfg-[eb]*`;
my @allrpm = `rpm -qa --queryformat '%{name}|RPM: %{name}-%{version}-%{release} InstallDate: %{installtime:date}\n'`;


my $pad2 = "  ";
my $pad4 = "    ";
my $pad6 = "      ";
my $pad8 = "        ";
my $pad10 = "          ";

print "$pad2<node CREATED=\"" . int (gettimeofday * 1000) . "\" FOLDED=\"true\" ID=\"ID_" . int(rand(10000000000)) . "\" STYLE=\"fork\" TEXT=\"".uc($host)."\">\n";
for my $line (`sudo pt-summary --no-summarize-processes --no-summarize-network`)
{
  chomp ($line);
  if ( $line =~ /^# Disk Schedulers And Queue Size/ ) 
  {
    $begin_skip = 1;
  }
  elsif ( $line =~ /^# (.*) #+/ && $begin_skip != 1 ) 
  {
    my $SECTION_HEADER = $1;
    $SECTION_HEADER =~ s/Percona Toolkit //g;
    $SECTION_HEADER =~ s/ Report//g;
    my $STYLE='';

    if ( $SECTION_HEADER =~ /System Summary/ )
    {
      print "$pad4<node CREATED=\"" . int (gettimeofday * 1000) . "\" FOLDED=\"true\" ID=\"ID_" . int(rand(10000000000)) . "\" $STYLE TEXT=\"$SECTION_HEADER\">\n";
      print "$pad6<node CREATED=\"" . int (gettimeofday * 1000) . "\" FOLDED=\"true\" ID=\"ID_" . int(rand(10000000000)) . "\" $STYLE TEXT=\"$SECTION_HEADER\">\n";
      print "$pad8<attribute_layout NAME_WIDTH=\"79\" VALUE_WIDTH=\"450\"/>\n";
    }
    elsif ( $SECTION_HEADER =~ /Processor/ )
    {
      if ( $node_started == 1 )
      {
        print "$pad6</node>\n";
        print "$pad4</node>\n";
      }
      print "$pad4<node CREATED=\"" . int (gettimeofday * 1000) . "\" FOLDED=\"true\" ID=\"ID_" . int(rand(10000000000)) . "\" $STYLE TEXT=\"$SECTION_HEADER\">\n";
      print "$pad6<node CREATED=\"" . int (gettimeofday * 1000) . "\" FOLDED=\"true\" ID=\"ID_" . int(rand(10000000000)) . "\" $STYLE TEXT=\"$SECTION_HEADER\">\n";
      print "$pad8<attribute_layout NAME_WIDTH=\"79\" VALUE_WIDTH=\"450\"/>\n";
    }
    elsif ( $SECTION_HEADER =~ /Memory/ )
    {
      $STYLE='STYLE="fork"';
      print "$pad6</node>\n";
      print "$pad4</node>\n";
      print "$pad4<node CREATED=\"" . int (gettimeofday * 1000) . "\" FOLDED=\"true\" ID=\"ID_" . int(rand(10000000000)) . "\" $STYLE TEXT=\"$SECTION_HEADER\">\n";
      print "$pad6<node CREATED=\"" . int (gettimeofday * 1000) . "\" FOLDED=\"true\" ID=\"ID_" . int(rand(10000000000)) . "\" TEXT=\"$SECTION_HEADER Summary\">\n";
      print "$pad8<node CREATED=\"" . int (gettimeofday * 1000) . "\" FOLDED=\"true\" ID=\"ID_" . int(rand(10000000000)) . "\" TEXT=\"$SECTION_HEADER Summary\">\n";
      print "$pad10<attribute_layout NAME_WIDTH=\"79\" VALUE_WIDTH=\"450\"/>\n";
    }
    elsif ( $SECTION_HEADER =~ /Mounted Filesystems/ )
    {
      print "$pad8</node>\n";
      print "$pad6</node>\n";
      print "$pad4</node>\n";
      print "$pad4<node CREATED=\"" . int (gettimeofday * 1000)  . "\" FOLDED=\"true\" ID=\"ID_" . int(rand(10000000000)) . "\" TEXT=\"$SECTION_HEADER\">\n";
      print "$pad6<node CREATED=\"" . int (gettimeofday * 1000) . "\" FOLDED=\"true\" ID=\"ID_" . int(rand(10000000000)) . "\" TEXT=\"$SECTION_HEADER\">\n";
      print "$pad8<attribute_layout NAME_WIDTH=\"79\" VALUE_WIDTH=\"450\"/>\n";
    }

    $tmpsection = $SECTION_HEADER;
    $node_started = 1;

  }
  elsif ( $line !~ /^# .* #+/ && $begin_skip != 1) 
  {
    if ( $line =~ /(.*) \| (.*)/ && $node_started == 1 )
    {
      my $key = $1;
      my $value = $2;
      if ( $tmpsection !~ /Memory/ )
      {
        print "$pad8<attribute NAME=\"$key\" VALUE=\"$value\"/>\n";
      } else {
        print "$pad10<attribute NAME=\"$key\" VALUE=\"$value\"/>\n";
      }
    }
    elsif ( $line =~ /(Locator)(.*)/ )
    {
      print "$pad8</node>\n";
      print "$pad6</node>\n";
      print "$pad6<node CREATED=\"" . int (gettimeofday * 1000) . "\" FOLDED=\"true\" ID=\"ID_" . int(rand(10000000000)) . "\" TEXT=\"Memory Banks\">\n";
      print "$pad8<node CREATED=\"" . int (gettimeofday * 1000) . "\" FOLDED=\"true\" ID=\"ID_" . int(rand(10000000000)) . "\" TEXT=\"Memory Banks\">\n";
      print "$pad10<attribute_layout NAME_WIDTH=\"79\" VALUE_WIDTH=\"450\"/>\n";
      print "$pad10<attribute NAME=\"$1\" VALUE=\"$2\"/>\n";
    }
    elsif ( $line =~ /(DIMM[0-9])(.*)/ )
    {
      print "$pad10<attribute NAME=\"$1\" VALUE=\"$2\"/>\n";
    }
    elsif ( $line =~ /[0-9]\%|Opts.*Mountpoint/ )
    {
      my ( undef, $fs, $sz, $used, $type, $opts, $mnt ) = split (/\s+/, $line);
      print "$pad8<attribute NAME=\"$mnt\" VALUE=\"$fs, $sz, $used, $type, $opts\"/>\n";
    }
  }
}
print "$pad6</node>\n";
print "$pad4</node>\n";
print "$pad4<node CREATED=\"" . int (gettimeofday * 1000)  . "\" FOLDED=\"true\" ID=\"ID_" . int(rand(10000000000)) . "\" STYLE=\"fork\" TEXT=\"Network\">\n";
my $int_details_started = 0;
for my $int (@network)
{
  $int =~ s/.*DEVICE=/DEVICE=/;
  $int =~ s/.*HWADDR=/HWADDR=/;
  $int =~ s/.*IPADDR=/IPADDR=/;
  $int =~ s/.*MASTER=/MASTER=/;
  $int =~ s/.*ONBOOT=/ONBOOT=/;
  if ( $int =~ /DEVICE=(.*)/ )
  {
    my $DEVICE=$1;
    if ( $int_details_started == 1 )
    {
      print "$pad8</node>\n";
      print "$pad6</node>\n";
    }
    my $miitool = 'interface disabled';
    my $miitooloutput = `sudo /sbin/mii-tool $DEVICE 2>/dev/null`;
    $miitool = $miitooloutput if ( $miitooloutput ne '' );
    $miitool =~ s/^(eth|bond)[0-9]: //g;
    chomp($miitool);
    print "$pad6<node CREATED=\"" . int (gettimeofday * 1000) . "\" FOLDED=\"true\" ID=\"ID_" . int(rand(10000000000)) . "\" TEXT=\"$DEVICE\">\n";
    print "$pad8<node CREATED=\"" . int (gettimeofday * 1000) . "\" FOLDED=\"true\" ID=\"ID_" . int(rand(10000000000)) . "\" TEXT=\"$DEVICE\">\n";
    print "$pad10<attribute_layout NAME_WIDTH=\"79\" VALUE_WIDTH=\"450\"/>\n";
    print "$pad10<attribute NAME=\"MIITOOL\" VALUE=\"$miitool\"/>\n";
    $int_details_started = 1;
  }
  elsif ( $int =~ /(HWADDR)=(.*)/ )
  {
    print "$pad10<attribute NAME=\"$1\" VALUE=\"$2\"/>\n";
  }
  elsif ( $int =~ /(IPADDR)=(.*)/ )
  {
    print "$pad10<attribute NAME=\"$1\" VALUE=\"$2\"/>\n";
  }
  elsif ( $int =~ /(MASTER)=(.*)/ )
  {
    print "$pad10<attribute NAME=\"$1\" VALUE=\"$2\"/>\n";
  }
  elsif ( $int =~ /(ONBOOT)=(.*)/ )
  {
    print "$pad10<attribute NAME=\"$1\" VALUE=\"$2\"/>\n";
  }
}
print "$pad8</node>\n";
print "$pad6</node>\n";
print "$pad4</node>\n";
# Get list of Important Applications
print "$pad4<node CREATED=\"" . int (gettimeofday * 1000) . "\" FOLDED=\"true\" ID=\"ID_" . int(rand(10000000000)) . "\" TEXT=\"Applications\">\n";
print "$pad6<node CREATED=\"" . int (gettimeofday * 1000) . "\" FOLDED=\"true\" ID=\"ID_" . int(rand(10000000000)) . "\" TEXT=\"Applications\">\n";
print "$pad8<attribute_layout NAME_WIDTH=\"125\" VALUE_WIDTH=\"450\"/>\n";
for my $app (sort { lc($a) cmp lc($b) } @applist)
{
  for my $rpm (sort { lc($a) cmp lc($b) } @allrpm)
  {
    chomp ( $rpm );
    my ( $appname, $rpmdetail ) = split(/\|/, $rpm);
    print "$pad8<attribute NAME=\"$app\" VALUE=\"$rpmdetail\"/>\n" if ( $app eq $appname );
  }
}
print "$pad6</node>\n";
print "$pad4</node>\n";
print "$pad2</node>\n";
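The script prints a single <node>…</node> fragment to STDOUT, so the workflow is to capture that fragment and splice it between an existing node's open and close tags in the .mm file. Here is a minimal sketch of that splice; the map skeleton, the "Servers" parent node, the stand-in fragment, and the filenames are all illustrative assumptions, not output of pt2mm.pl.

```shell
# Illustrative only. On a real host you would generate the fragment with:
#   sudo ./pt2mm.pl > fragment.xml
# Here a stand-in node fakes that output so the splice can be shown end to end.
printf '  <node CREATED="0" FOLDED="true" ID="ID_1" TEXT="MYHOST"/>\n' > fragment.xml

# Wrap the fragment in a minimal FreeMind map skeleton (assumed layout).
{
  printf '<map version="0.9.0">\n<node TEXT="Servers">\n'
  cat fragment.xml
  printf '</node>\n</map>\n'
} > servers.mm

grep -c '<node' servers.mm   # prints 2: the "Servers" node plus the fragment
```

Opening servers.mm in FreeMind should then show MYHOST folded under Servers; repeating the splice once per host is how a multi-host map gets built.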

November 8, 2011

Syslog-ng :: expanded-syslog-ng.conf (from campin.net)

Filed under: syslog-ng — lancevermilion @ 1:50 pm

Today Campin.net was unavailable, so I captured a few pages I used frequently from the Google cache.

#----------------------------------------------------------------------
#  Program:  syslog-ng.conf
#  Notes:    Embedded most of the manual notes within the configuration
#            file.  The original manual can be found at:
#
#            http://www.balabit.com/products/syslog_ng/reference/book1.html
#            http://www.campin.net/syslog-ng/faq.html
#            or
#            https://gheeknet.wordpress.com/2011/11/08/syslog-ng-faq-from-campin-net/
#
#            Many people may find placing all of this information in a
#            configuration file a bit redundant, but I have found that
#            with a few extra comments and references, 
#            maintaining these beasties is much easier.
#
#            This particular log file was taken from the examples that
#            are given at the different web sites, and made to emulate
#            the logs of a Mandrake Linux system as much as possible.
#            Of course, Unix is Unix, is Linux.  It should be generic
#            enough for any Unix system.
#----------------------------------------------------------------------
#  16-Mar-03 - REP - Added some extra definitions to the file.
#  15-Mar-03 - REP - Added back the comments on filtering.
#  27-Feb-03 - REP - Further modified for local environment.
#  27-Feb-03 - REP - Updated for new configuration and version 1.6.0
#  12-Dec-02 - REP - Continued updates for writing to databases.
#  30-Nov-02 - REP - Initial creation for testing.

#----------------------------------------------------------------------
#  Options
#----------------------------------------------------------------------
#
#  Name                       Values   Description
#  -------------------------  -------  ------------------------------------
#  bad_hostname               reg exp  A regexp which matches hostnames 
#                                      which should not be taken as such.
#  chain_hostnames            y/n      Enable or disable the chained 
#                                      hostname format.
#  create_dirs                y/n      Enable or disable directory creation 
#                                      for destination files.
#  dir_group                  groupid
#  dir_owner                  userid
#  dir_perm                   perm
#  dns_cache                  y/n      Enable or disable DNS cache usage.
#  dns_cache_expire           num      Number of seconds while a successful 
#                                      lookup is cached.
#  dns_cache_expire_failed    num      Number of seconds while a failed 
#                                      lookup is cached.
#  dns_cache_size             num      Number of hostnames in the DNS cache.
#  gc_busy_threshold          num      Sets the threshold value for the 
#                                      garbage collector, when syslog-ng is 
#                                      busy. GC phase starts when the number 
#                                      of allocated objects reaches this 
#                                      number. Default: 3000.
#  gc_idle_threshold          num      Sets the threshold value for the 
#                                      garbage collector, when syslog-ng is 
#                                      idle. GC phase starts when the number 
#                                      of allocated objects reaches this 
#                                      number. Default: 100.
#  group                      groupid
#  keep_hostname              y/n      Enable or disable hostname rewriting.
#                                      This means that if the log entry had
#                                      been passed through at least one other
#                                      logging system, the ORIGINAL hostname
#                                      will be kept attached to the log.  
#                                      Otherwise the last logger will be
#                                      considered the log entry owner and
#                                      the log entry will appear to have 
#                                      come from that host.
#  log_fifo_size              num      The number of lines fitting to the 
#                                      output queue
#  log_msg_size               num      Maximum length of message in bytes.
#  long_hostnames             on/off   This option appears to only
#                                      really have an effect on the
#                                      local system: it removes the
#                                      source of the log.  As an
#                                      example, normally the local
#                                      logs will state src@hostname,
#                                      but with this feature off, the
#                                      source is not reported.
#  mark                       num      The number of seconds between two 
#                                      MARK lines. NOTE: not implemented 
#                                      yet.
#  owner                      userid
#  perm                       perm
#  stats                      num      The number of seconds between two 
#                                      STATS.
#  sync                       num      The number of lines buffered before 
#                                      written to file
#  time_reap                  num      The time to wait before an idle 
#                                      destination file is closed.
#  time_reopen                num      The time to wait before a dead 
#                                      connection is reestablished
#  use_dns                    y/n      Enable or disable DNS usage. 
#                                      syslog-ng blocks on DNS queries, 
#                                      so enabling DNS may lead to a 
#                                      Denial of Service attack. To 
#                                      prevent DoS, protect your 
#                                      syslog-ng network endpoint with 
#                                      firewall rules, and make sure
#                                      that all hosts which may reach
#                                      syslog-ng are resolvable.
#  use_fqdn                   y/n      Add Fully Qualified Domain Name 
#                                      instead of short hostname.
#  use_time_recvd             y/n      Use the time a message is 
#                                      received instead of the one 
#                                      specified in the message.
#----------------------------------------------------------------------
#  15-Mar-03 - REP - Since some of the clocks are not quite right, we
#                    are going to go ahead and just use the local time
#                    as the master time.
#  12-Mar-03 - REP - We have taken a few configuration options from the
#                    newer Solaris configuration because some of the 
#                    reasons are valid for us as well.  We have increased
#                    the log_msg_size and log_fifo_size to increase the
#                    amount of buffering that we do.  While for most
#                    systems this may not have a noticeable effect, it
#                    will for systems that are at the end of a lot of
#                    logging systems.
#  20-Dec-02 - REP - Changed the stats() time from the default of 10
#                    minutes to once an hour.
#----------------------------------------------------------------------
options 
  {
    chain_hostnames(no);
    create_dirs (no);
    dir_perm(0755); 
    dns_cache(yes);
    keep_hostname(yes);
    log_fifo_size(2048);
    log_msg_size(8192);
    long_hostnames(on);
    perm(0644); 
    stats(3600);
    sync(0);
    time_reopen (10);
    use_dns(yes);
    use_fqdn(yes);
  };

#----------------------------------------------------------------------
#  Sources
#----------------------------------------------------------------------
#
#  fifo/pipe     - The pipe driver opens a named pipe with the 
#                  specified name, and listens for messages. It's 
#                  used as the native message getting protocol on 
#                  HP-UX. 
#  file          - Usually the kernel presents its messages in a 
#                  special file (/dev/kmsg on BSDs, /proc/kmsg on 
#                  Linux), so to read such special files, you'll need 
#                  the file() driver. Please note that you can't use 
#                  this driver to follow a file like tail -f does.
#  internal      - All internally generated messages "come" from this 
#                  special source. If you want warnings, errors and 
#                  notices from syslog-ng itself, you have to include 
#                  this source in one of your source statements. 
#  sun-streams   - Solaris uses its STREAMS API to send messages to 
#                  the syslogd process. You'll have to compile 
#                  syslog-ng with this driver compiled in (see 
#                  ./configure --help).
#
#                  Newer versions of Solaris (2.5.1 and above) use a 
#                  new IPC mechanism in addition to STREAMS, called a 
#                  door, to confirm delivery of a message. Syslog-ng 
#                  supports this new IPC mechanism with the door() 
#                  option.
#
#                  The sun-streams() driver has a single required 
#                  argument, specifying the STREAMS device to open and 
#                  a single option. 
#  tcp/udp       - These drivers let you receive messages from the 
#                  network, and as the name of the drivers show, you 
#                  can use both UDP and TCP as transport. 
#
#                  UDP is a simple datagram oriented protocol, which 
#                  provides "best effort service" to transfer 
#                  messages between hosts. It may lose messages, and 
#                  no attempt is made to retransmit such lost 
#                  messages at the protocol level. 
#
#                  TCP provides connection-oriented service, which 
#                  basically means a flow-controlled message pipeline. 
#                  In this pipeline, each message is acknowledged, and 
#                  retransmission is done for lost packets. Generally 
#                  it's safer to use TCP, because lost connections can 
#                  be detected, and no messages get lost, but 
#                  traditionally the syslog protocol uses UDP. 
#
#                  Neither the tcp() nor the udp() driver requires 
#                  positional parameters. By default they bind to 
#                  0.0.0.0:514, 
#                  which means that syslog-ng will listen on all 
#                  available interfaces, port 514. To limit accepted 
#                  connections to one interface only, use the 
#                  localip() parameter as described below. 
#
#    Options:
#
#    Name            Type    Description                       Default
#    --------------  ------  --------------------------------  --------
#    ip or localip   string  The IP address to bind to. Note   0.0.0.0
#                            that this is not the address 
#                            where messages are accepted 
#                            from.
#    keep-alive      y/n     Available for tcp() only, and     yes
#                            specifies whether to close 
#                            connections upon the receipt 
#                            of a SIGHUP signal.
#    max-connections number  Specifies the maximum number of   10
#                            simultaneous connections.
#    port or         number  The port number to bind to.       514
#    localport
#    --------------  ------  --------------------------------  --------
#
#  unix-stream/unix-dgram - These two drivers behave similarly: 
#                   they open the given AF_UNIX socket, and start 
#                   listening on them for messages. unix-stream() is 
#                   primarily used on Linux, and uses SOCK_STREAM 
#                   semantics (connection oriented, no messages are 
#                   lost), unix-dgram() is used on BSDs, and uses 
#                   SOCK_DGRAM semantics, this may result in lost 
#                   local messages, if the system is overloaded.
#
#                   To avoid denial of service attacks when using 
#                   connection-oriented protocols, the number of 
#                   simultaneously accepted connections should be 
#                   limited. This can be achieved using the 
#                   max-connections() parameter. The default value of 
#                   this parameter is quite strict; you might have to 
#                   increase it on a busy system.
#
#                   Both unix-stream and unix-dgram have a single 
#                   required positional argument, specifying the 
#                   filename of the socket to create, and several 
#                   optional parameters. 
#
#    Options:
#
#    Name            Type    Description                       Default
#    --------------  ------  --------------------------------  --------
#    group           string  Set the gid of the socket.        root
#    keep-alive      y/n     Selects whether to keep           yes
#                            connections opened when 
#                            syslog-ng is restarted, can be 
#                            used only with unix-stream().
#    max-connections num     Limits the number of              10
#                            simultaneously opened 
#                            connections. Can be used only 
#                            with unix-stream().
#    owner           string  Set the uid of the socket.        root
#    perm            num     Set the permission mask. For      0666
#                            octal numbers prefix the number 
#                            with '0', e.g. use 0755 for 
#                            rwxr-xr-x.
#----------------------------------------------------------------------
#  Notes:    For Linux systems (and especially RedHat derivatives), 
#            they have a second logging process for kernel messages.  
#            This source is /proc/kmsg.  If you are running this on a 
#            system that is not Linux, then the source entry for this 
#            should be removed.
#
#            It seems that there are some performance questions related
#            to what type of source stream should be used for Linux 
#            boxes.  The documentation states the /dev/log should use
#            unix-stream, but from the mailing list it has been
#            strongly suggested that unix-dgram be used.
#
#  WARNING:  TCP wrappers has been enabled for this system, and unless
#            you also place entries in /etc/hosts.allow for each of the
#            devices that will be delivering logs via TCP, you will
#            NOT receive the logs.
#
#            Also note that if there is any form of a local firewall,
#            this will also need to be altered such that the incoming
#            and possibly outgoing packets are allowed by the firewall
#            rules.
#----------------------------------------------------------------------
#  There has been a lot of debate on whether everything should be put
#  to a single source, or breakdown all the sources into individual
#  streams.  The greatest flexibility would be in many, but the most
#  simple is the single.  Since we wrote this file, we have chosen the
#  route of maximum flexibility.
#
#  For those of you that like simplicity, this could have also been
#  done as the follows:
#
#  source src 
#    {
#      internal();
#      pipe("/proc/kmsg" log_prefix("kernel: "));
#      tcp(ip(127.0.0.1) port(4800) keep-alive(yes));
#      udp();
#      unix-stream("/dev/log");
#    };
#
#  You would also have to change all the log statements to only 
#  reference the now single source stream.
#----------------------------------------------------------------------
#  16-Mar-03 - REP - The default number of allowed TCP connects is set
#                    very low for a logserver.  This value should only
#                    be set greater than the default for servers that
#                    will actually be serving that many systems.
#----------------------------------------------------------------------
source s_dgram
  { unix-dgram("/dev/log"); };

source s_internal
  { internal(); };

source s_kernel
  { pipe("/proc/kmsg" log_prefix("kernel: ")); };

source s_tcp
  { tcp(port(4800) keep-alive(yes) max_connections(100)); };

#----------------------------------------------------------------------
#  Destinations
#----------------------------------------------------------------------
#
#  fifo/pipe    - This driver sends messages to a named pipe like 
#                 /dev/xconsole
#
#                 The pipe driver has a single required parameter, 
#                 specifying the filename of the pipe to open, and 
#                 no options. 
#  file         - The file driver is one of the most important 
#                 destination drivers in syslog-ng. It allows you to 
#                 output messages to the named file, or as you'll see 
#                 to a set of files.
#
#                 The destination filename may include macros which 
#                 get expanded when the message is written; thus a 
#                 simple file() driver may result in several files 
#                 to be created. Macros can be included by prefixing 
#                 the macro name with a '$' sign (without the quotes), 
#                 just like in Perl/PHP.
#
#                 If the expanded filename refers to a directory 
#                 which doesn't exist, it will be created depending 
#                 on the create_dirs() setting (both global and a per 
#                 destination option)
#
#                 WARNING: since the state of each created file must 
#                 be tracked by syslog-ng, it consumes some memory 
#                 for each file. If no new messages are written to a 
#                 file within 60 seconds (controlled by the time_reap 
#                 global option), it's closed, and its state is freed.
#
#                 Exploiting this, a DoS attack can be mounted against 
#                 your system if the number of possible destination 
#                 files, and the memory they need, exceeds the amount 
#                 your logserver has.
#
#                 The most suspicious macro is $PROGRAM, where the 
#                 number of possible variations is quite high, so in 
#                 untrusted environments $PROGRAM usage should be 
#                 avoided. 
#
#    Macros:
#
#    Name               Description
#    -----------------  -----------------------------------------------
#    DATE               Date of the transaction.
#    DAY                The day of month the message was sent.
#    FACILITY           The name of the facility the message is tagged 
#                       as coming from.
#    FULLDATE           Long form of the date of the transaction.
#    FULLHOST           Full hostname of the system that sent the log.
#    HOST               The name of the source host the message 
#                       originated from. If the message traverses 
#                       several hosts, and chain_hostnames() is on, 
#                       the first one is used.
#    HOUR               The hour of day the message was sent.
#    ISODATE            Date in ISO format.
#    MIN                The minute the message was sent.
#    MONTH              The month the message was sent.
#    MSG or MESSAGE     Message contents. 
#    PRIORITY or LEVEL  The priority of the message. 
#    PROGRAM            The name of the program the message was sent by.
#    SEC                The second the message was sent.
#    TAG                The priority and facility encoded as a 2 digit 
#                       hexadecimal number.
#    TZ                 The time zone name or abbreviation, e.g. 'PDT'.
#    TZOFFSET           The time-zone as hour offset from GMT. e.g. 
#                       '-0700'
#    WEEKDAY            The 3-letter name of the day of week the 
#                       message was sent, e.g. 'Thu'.
#    YEAR               The year the message was sent.
#    -----------------  -----------------------------------------------
#
#    Time expansion macros can either use the time specified in the 
#    log message, i.e. the time the log message was sent, or the time 
#    the message was received by the log server. This is controlled 
#    by the use_time_recvd() option.
#
#    Options:
#
#    Name            Type    Description                       Default
#    --------------  ------  --------------------------------  --------
#    compress        y/n     Compress the resulting logfile    global
#                            using zlib. NOTE: this is not     setting
#                            implemented as of 1.3.14.
#    create_dirs     y/n     Enable creating non-existing      no
#                            directories.
#    dir_perm        num     The permission mask of            0600
#                            directories created by 
#                            syslog-ng. Log directories are 
#                            only created if a file after 
#                            macro expansion refers to a 
#                            non-existing directory, and dir 
#                            creation is enabled using 
#                            create_dirs().
#    encrypt         y/n     Encrypt the resulting file.       global
#                            NOTE: this is not implemented as  setting
#                            of 1.3.14.
#    fsync           y/n     Forces an fsync() call on the 
#                            destination fd after each write. 
#                            Note: this may degrade 
#                            performance seriously
#    group           string  Set the group of the created      root
#                            filename to the one specified.
#    log_fifo_size   num     The number of entries in the      global
#                            output fifo.                      setting
#    owner           string  Set the owner of the created      root
#                            filename to the one specified.
#    perm            num     The permission mask of the file   0600
#                            if it is created by syslog-ng.
#    remove_if_older num     If set to a value higher than 0,  0
#                            before writing to a file, 
#                            syslog-ng checks whether this 
#                            file is older than the specified 
#                            amount of time (specified in 
#                            seconds). If so, it removes the 
#                            existing file and the line to 
#                            be written is the first line in 
#                            a new file with the same name. 
#                            In combination with e.g. the 
#                            $WEEKDAY macro, this can be 
#                            used for simple log rotation, 
#                            in case not all history needs to 
#                            be kept. 
#    sync_freq       num     The logfile is synced when this   global
#                            number of messages has been       setting
#                            written to it.
#    template        string  Specifies a template which 
#                            specifies the logformat to be 
#                            used in this file. The possible 
#                            macros are the same as in 
#                            destination filenames.
#    template_escape y/n     Turns on escaping ' and " in      yes
#                            templated output files. It is 
#                            useful for generating SQL 
#                            statements and quoting string 
#                            contents so that parts of your 
#                            log message don't get 
#                            interpreted as commands to the 
#                            SQL server.
#    --------------  ------  --------------------------------  --------
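#
#    As a sketch of the remove_if_older()/$WEEKDAY combination 
#    described above (hypothetical path; 518400 seconds = 6 days), 
#    a simple 7-day self-rotating destination might look like:
#
#    destination weeklog { file("/var/log/weekly/$WEEKDAY.log"
#                          remove_if_older(518400) create_dirs(yes)); };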
#
#  program      - This driver fork()s and executes the given program 
#                 with the given arguments and sends messages down to 
#                 the stdin of the child.
#
#                 The program driver has a single required parameter, 
#                 specifying a program name to start, and no options. 
#                 The program is executed with the help of the current 
#                 shell, so the command may include both file patterns 
#                 and I/O redirection; they will be processed. 
#
#                 NOTE: the program is executed once at startup, and 
#                 kept running until SIGHUP or exit. The reason is to 
#                 prevent starting up a large number of programs for 
#                 messages, which would imply an easy DoS. 
#  tcp/udp      - This driver sends messages to another host on the 
#                 local intranet or internet using either UDP or TCP 
#                 protocol.
#
#                 Both drivers have a single required argument 
#                 specifying the destination host address, where 
#                 messages should be sent, and several optional 
#                 parameters. Note that this differs from source 
#                 drivers, where local bind address is implied, and 
#                 none of the parameters are required. 
#
#    Options:
#
#    Name            Type    Description                       Default
#    --------------  ------  --------------------------------  --------
#    localip         string  The IP address to bind to before  0.0.0.0
#                            connecting to target.
#    localport       num     The port number to bind to.       0
#    port/destport   num     The port number to connect to.    514
#    --------------  ------  --------------------------------  --------
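#
#    For example (addresses are placeholders from the documentation 
#    range), a TCP destination bound to a specific local interface 
#    might be sketched as:
#
#    destination d_tcploghost { tcp("192.0.2.10" port(514)
#                               localip("192.0.2.1")); };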
#  usertty      - This driver writes messages to the terminal of a 
#                 logged-in user.
#
#                 The usertty driver has a single required argument, 
#                 specifying a username who should receive a copy of 
#                 matching messages, and no optional arguments. 
#  unix-dgram/  - This driver sends messages to a unix socket in 
#  unix-stream    either SOCK_DGRAM or SOCK_STREAM mode.
#
#                 Both drivers have a single required argument 
#                 specifying the name of the socket to connect to, and 
#                 no optional arguments. 
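#
#                 A minimal sketch (hypothetical socket path) of each:
#
#                 destination d_dgramsock  { unix-dgram("/var/run/log.sock"); };
#                 destination d_streamsock { unix-stream("/var/run/log.sock"); };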
#----------------------------------------------------------------------

#----------------------------------------------------------------------
#  Standard Log file locations
#----------------------------------------------------------------------
destination authlog        { file("/var/log/auth.log"); };
destination bootlog        { file("/var/log/boot.log"); };
destination debug          { file("/var/log/debug"); };
destination explan         { file("/var/log/explanations"); };
destination messages       { file("/var/log/messages"); };
destination routers        { file("/var/log/routers.log"); };
destination secure         { file("/var/log/secure"); };
destination spooler        { file("/var/log/spooler"); };
destination syslog         { file("/var/log/syslog"); };
destination user           { file("/var/log/user.log"); };

#----------------------------------------------------------------------
#  Special catch all destination sorting by host
#----------------------------------------------------------------------
destination hosts          { file("/var/log/HOSTS/$HOST/$YEAR/$MONTH/$DAY/${FACILITY}_${HOST}_${YEAR}_${MONTH}_${DAY}" 
			     owner(root) group(root) perm(0600) dir_perm(0700) create_dirs(yes)); };

#----------------------------------------------------------------------
#  Forward to a loghost server
#----------------------------------------------------------------------
#destination loghost       { udp("10.1.1.254" port(514)); };

#----------------------------------------------------------------------
#  Mail subsystem logs
#----------------------------------------------------------------------
destination mail           { file("/var/log/mail.log"); };
destination mailerr        { file("/var/log/mail/errors"); };
destination mailinfo       { file("/var/log/mail/info"); };
destination mailwarn       { file("/var/log/mail/warnings"); };

#----------------------------------------------------------------------
#  INN news subsystem
#----------------------------------------------------------------------
destination newscrit       { file("/var/log/news/critical"); };
destination newserr        { file("/var/log/news/errors"); };
destination newsnotice     { file("/var/log/news/notice"); };
destination newswarn       { file("/var/log/news/warnings"); };

#----------------------------------------------------------------------
#  Cron subsystem
#----------------------------------------------------------------------
destination cron           { file("/var/log/cron.log"); };
destination crondebug      { file("/var/log/cron/debug"); }; 
destination cronerr        { file("/var/log/cron/errors"); }; 
destination croninfo       { file("/var/log/cron/info"); }; 
destination cronwarn       { file("/var/log/cron/warnings"); }; 

#----------------------------------------------------------------------
#  LPR subsystem
#----------------------------------------------------------------------
destination lpr            { file("/var/log/lpr.log"); };
destination lprerr         { file("/var/log/lpr/errors"); }; 
destination lprinfo        { file("/var/log/lpr/info"); }; 
destination lprwarn        { file("/var/log/lpr/warnings"); }; 

#----------------------------------------------------------------------
#  Kernel messages
#----------------------------------------------------------------------
destination kern           { file("/var/log/kern.log"); };
destination kernerr        { file("/var/log/kernel/errors"); }; 
destination kerninfo       { file("/var/log/kernel/info"); }; 
destination kernwarn       { file("/var/log/kernel/warnings"); }; 

#----------------------------------------------------------------------
#  Daemon messages
#----------------------------------------------------------------------
destination daemon         { file("/var/log/daemon.log"); };
destination daemonerr      { file("/var/log/daemons/errors"); };
destination daemoninfo     { file("/var/log/daemons/info"); };
destination daemonwarn     { file("/var/log/daemons/warnings"); };

#----------------------------------------------------------------------
#  Console warnings
#----------------------------------------------------------------------
destination console        { file("/dev/tty12"); };

#----------------------------------------------------------------------
#  All users
#----------------------------------------------------------------------
destination users          { usertty("*"); };

#----------------------------------------------------------------------
#  Examples of programs that accept syslog messages and do something
#  programmatically with them.
#----------------------------------------------------------------------
#destination mail-alert     { program("/usr/local/bin/syslog-mail"); };
#destination mail-perl      { program("/usr/local/bin/syslog-mail-perl"); };

#----------------------------------------------------------------------
#  Piping to Swatch
#----------------------------------------------------------------------
#destination swatch         { program("/usr/bin/swatch --read-pipe=\"cat /dev/fd/0\""); };

#----------------------------------------------------------------------
#  Database notes:
#
#  Overall there seems to be three primary methods of putting data from
#  syslog-ng into a database.  Each of these has certain pros and cons.
#
#  FIFO file:    Simply piping the template data into a First In, First
#                Out file.  This will create a stream of data that will
#                not require any sort of marker or identifier of how
#                much data has been read.  This is the most elegant of
#                the solutions and probably the most unstable.
#
#                Pros:  Very fast data writes and reads.  Data being
#                       inserted into a database will be near real
#                       time.
#
#                Cons:  Least stable of all the possible solutions,
#                       and could require a lot of custom work to
#                       make function on any particular Unix system.
#
#                       Loss of the pipe file will cause complete
#                       data loss, and all following data that would
#                       have been written to the FIFO file.
#
#  Buffer file:  While very similar to a FIFO file, this would be a
#                text file which would buffer all the template 
#                output information.  Another program run from cron or
#                a similar service would then source the buffer
#                files and process the data into the database.
#
#                Pros:  Little chance of losing data since everything
#                       will be written to a physical file much like
#                       the regular logging process.  
#
#                       This method gives a tremendous amount of 
#                       flexibility since there would be yet another
#                       opportunity to filter logs prior to inserting
#                       any data into the database.
#
#                Cons:  Because there must be some interval between
#                       the processing of the buffer files, there will
#                       be a lag before the data is inserted in to the
#                       database.  
#
#                       There is also a slight chance of data corruption 
#                       (ie a bad insert command) if the system crashes 
#                       during a write, although this scenario is very 
#                       unlikely.  
#
#                       Another possible issue is that because multiple 
#                       buffer files may be written, the sourcing 
#                       program could fall behind on data 
#                       insertion if there is a very large quantity of 
#                       logs being written.  This will totally depend 
#                       on the system that this is running on.
#
#  Program:      The least elegant of the solutions.  This method is to
#                send the stream of data through some further interpreter
#                program such as something in Perl or C.  That program
#                will then take some action based on the data, which
#                could include writing to a database similarly to the
#                program "sqlsyslogd".
#
#                Pros:  Allows complete control of the data, and as much
#                       post processing as required.
#
#                Cons:  Slowest of all the forms.  Since the data will
#                       have to go through some post processing, data
#                       being written to the database will remain 
#                       behind the actual log records.  This lag 
#                       creates a window in which logging can be lost, 
#                       either due to a system crash or high load on
#                       the logging system.
#
#----------------------------------------------------------------------

#----------------------------------------------------------------------
#  Writing to a MySQL database:
#
#  Assumes a table/database structure of:
#
#            CREATE DATABASE syslog;
#            USE syslog;
#
#            CREATE TABLE logs ( host varchar(32) default NULL, 
#                                facility varchar(10) default NULL, 
#                                priority varchar(10) default NULL, 
#                                level varchar(10) default NULL, 
#                                tag varchar(10) default NULL, 
#                                date date default NULL, 
#                                time time default NULL, 
#                                program varchar(15) default NULL, 
#                                msg text, seq int(10) unsigned NOT NULL auto_increment, 
#                                PRIMARY KEY (seq), 
#                                KEY host (host), 
#                                KEY seq (seq), 
#                                KEY program (program), 
#                                KEY time (time), 
#                                KEY date (date), 
#                                KEY priority (priority), 
#                                KEY facility (facility)) 
#                                TYPE=MyISAM;
#
#----------------------------------------------------------------------
#  Piping method
#----------------------------------------------------------------------
#destination database       { pipe("/tmp/mysql.pipe"
#                             template("INSERT INTO logs (host, facility, 
#                             priority, level, tag, date, time, program, 
#                             msg) VALUES ( '$HOST', '$FACILITY', '$PRIORITY', 
#                             '$LEVEL', '$TAG', '$YEAR-$MONTH-$DAY', 
#                             '$HOUR:$MIN:$SEC', '$PROGRAM', '$MSG' );\n") 
#                             template-escape(yes)); };
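#
#  On the consuming side, a minimal sketch (hypothetical database 
#  user and database names) to drain the pipe into MySQL from a 
#  shell would be:
#
#      mkfifo /tmp/mysql.pipe
#      while true; do mysql -u sysloguser syslog < /tmp/mysql.pipe; done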

#----------------------------------------------------------------------
#  Buffer file method
#----------------------------------------------------------------------
destination database       { file("/var/log/dblog/fulllog.$YEAR.$MONTH.$DAY.$HOUR.$MIN.$SEC"
                             template("INSERT INTO logs (host, facility, 
                             priority, level, tag, date, time, program, 
                             msg) VALUES ( '$HOST', '$FACILITY', '$PRIORITY', 
                             '$LEVEL', '$TAG', '$YEAR-$MONTH-$DAY', 
                             '$HOUR:$MIN:$SEC', '$PROGRAM', '$MSG' );\n") 
			     owner(root) group(root) perm(0600) 
			     dir_perm(0700) create_dirs(yes)
                             template-escape(yes)); };

#----------------------------------------------------------------------
#  Program method (alternate using sqlsyslogd):
#
#  Notes:  This is not a bad process, but lacks very much flexibility
#          unless more changes are made to the source of sqlsyslogd.
#          This is because sqlsyslogd assumes the data in a larger
#          object style instead of breaking it down into smaller
#          columnar pieces.
#----------------------------------------------------------------------
#destination database       { program("/usr/local/sbin/sqlsyslogd -u
#                             sqlsyslogd -t logs sqlsyslogs2 -p"); };

#----------------------------------------------------------------------
#  Since we probably will not be putting ALL of our logs in the database
#  we better plan on capturing the data that we will be discarding for
#  later review to ensure we did not throw anything away we really
#  should have captured.
#----------------------------------------------------------------------
destination db_discard     { file("/var/log/discard.log"); };

#----------------------------------------------------------------------
#  Filters
#----------------------------------------------------------------------
#
#  Functions:
#
#  Name               Synopsis                        Description
#  --------------  ------------------------------  --------------------
#  facility        facility(facility[,facility])   Match messages 
#                                                  having one of the 
#                                                  listed facility 
#                                                  codes.
#  filter          filter(filtername)              Call another filter 
#                                                  rule and evaluate 
#                                                  its value.
#  host            host(regexp)                    Match messages by 
#                                                  using a regular 
#                                                  expression against 
#                                                  the hostname field 
#                                                  of log messages.
#  level/priority  level(pri[,pri1..pri2[,pri3]])  Match messages based 
#                                                  on priority.
#  match           match(regexp)                   Match a regular 
#                                                  expression against 
#                                                  the message itself.
#  program         program(regexp)                 Match messages by 
#                                                  using a regular 
#                                                  expression against 
#                                                  the program name 
#                                                  field of log messages
#----------------------------------------------------------------------
#  NOTES: 
#
#  Getting filtering to work right can be difficult because while the
#  syntax is fairly simple, it is not well documented.  To illustrate
#  a brief lesson on filtering and to explain the majority of the
#  mechanics, we shall use the filter from the PostgreSQL database
#  how-to page found at:  http://www.umialumni.com/~ben/SYSLOG-DOC.html
#                            
#  This is a perfect and somewhat complex example to use.  In its
#  original form it resembles:
#                            
#  filter f_postgres { not(
#                            (host("syslogdb") and facility(cron) and level(info))
#                         or (facility(user) and level(notice)
#                                 and ( match(" gethostbyaddr: ")
#                                       or match("last message repeated ")
#                                      )
#                             )
#                         or ( facility(local3) and level(notice)
#                               and match(" SYSMON NORMAL "))
#                         or ( facility(mail) and level(warning)
#                               and match(" writable directory")
#                            )
#                         or (  ( host("dbserv1.somecompany.com")
#                                 or host("dbserv2.somecompany.com")
#                               )
#                               and facility(auth) and level(info)
#                               and match("su oracle") and match(" succeeded for root on /dev/")
#                            )
#                         ); };
#
#  While in this form, it does not convey a tremendous amount of 
#  insight into what the specific filter is attempting to accomplish.
#  In reformatting the filter to resemble something a bit more 
#  human-readable, it would look like:
#
#  filter f_postgres { not
#                        (
#                          (
#                            host("syslogdb") and 
#                            facility(cron) and 
#                            level(info)
#                          ) or 
#                          (
#                            facility(user) and 
#                            level(notice) and 
#                            ( 
#                              match(" gethostbyaddr: ") or 
#                              match("last message repeated ")
#                            )
#                          ) or 
#                          ( 
#                            facility(local3) and 
#                            level(notice) and 
#                            match(" SYSMON NORMAL ")
#                          ) or 
#                          ( 
#                            facility(mail) and 
#                            level(warning) and 
#                            match(" writable directory")
#                          ) or 
#                          ( 
#                            ( 
#                              host("dbserv1.somecompany.com") or 
#                              host("dbserv2.somecompany.com")
#                            ) and 
#                            facility(auth) and 
#                            level(info) and 
#                            match("su oracle") and 
#                            match(" succeeded for root on /dev/")
#                          )
#                        ); 
#                   };
#
#  Now in this form we can begin to see what this filter has been
#  attempting to accomplish.  We can further break down each logical
#  section and explain the different methods:
#
#  [1]  As in all statements in syslog-ng, each of the beginnings and
#       endings must be with a curly bracket "{" "}" to clearly denote
#       the start and finish.
#
#       In this filter, the entire filter is prefixed by a "not" to 
#       indicate that these are the messages that we are NOT interested
#       in and should be the ones filtered out.  All lines of logs that
#       do not match these lines will be sent to the destination.
#
#  { not
#
#  [2]  The first major part of the filter is actually a compound
#       filter that has two parts.  Because the two parts are separated
#       by an "or", only one of the two parts must be matched for that
#       line of log to be filtered.
#
#  [2a]  In the first part of this filter there are three requirements
#        to be met for the filter to take effect.  These are the host 
#        string "syslogdb", the facility "cron", and the syslog level
#        of info.
#
#  (
#    (
#      host("syslogdb") and 
#      facility(cron) and 
#      level(info)
#    ) or 
#
#  [2b]  In the second part of the filter, which in itself is a 
#        compound filter, there are three requirements as well.  These
#        are that the facility of "user", and the log level of "notice"
#        are met in addition to one of the two string matches that are
#        shown in the example.
#
#  (
#    facility(user) and 
#    level(notice) and 
#    ( 
#      match(" gethostbyaddr: ") or 
#      match("last message repeated ")
#    )
#  ) or
#
#  [3]  In this section of the filter there are once again three
#       requirements to fire off a match: a facility of "local3",
#       a log level of "notice", and a string match of " SYSMON NORMAL ".
#
#  ( 
#    facility(local3) and 
#    level(notice) and 
#    match(" SYSMON NORMAL ")
#  ) or 
#
#  [4]  This part of the filter is very similar to the previous
#       filter, but with different search patterns.
#
#  ( 
#    facility(mail) and 
#    level(warning) and 
#    match(" writable directory")
#  ) or 
#
#  [5]  The last section of the filter is also a compound filter:
#       to take effect, it requires that one of two hosts is 
#       matched, and that the facility of "auth" and log level of 
#       "info" occur in addition to the two string matches.
#
#  (  
#    ( 
#      host("dbserv1.somecompany.com") or 
#      host("dbserv2.somecompany.com")
#    ) and 
#    facility(auth) and 
#    level(info) and 
#    match("su oracle") and 
#    match(" succeeded for root on /dev/")
#  )
#
#  [6]  As in all command sets in syslog-ng, each of the statements 
#       must be properly closed with the correct ending punctuation
#       AND a semi-colon.  Do not omit either, or you will be faced
#       with an error.
#
#  ); };
#
#  While this may not be the most complete example, it does cover the
#  majority of the options and features that are available within the
#  current version of syslog-ng.
#----------------------------------------------------------------------

#----------------------------------------------------------------------
#  Standard filters for the standard destinations.
#----------------------------------------------------------------------
filter      f_auth         { facility(auth, authpriv); };
filter      f_authpriv     { facility(authpriv); }; 
filter      f_cron         { facility(cron); };
filter      f_daemon       { facility(daemon); };
filter      f_kern         { facility(kern); };
filter      f_local1       { facility(local1); };
filter      f_local2       { facility(local2); };
filter      f_local3       { facility(local3); };
filter      f_local4       { facility(local4); };
filter      f_local5       { facility(local5); };
filter      f_local6       { facility(local6); };
filter      f_local7       { facility(local7); };
filter      f_lpr          { facility(lpr); };
filter      f_mail         { facility(mail); };
filter      f_messages     { facility(daemon, kern, user); };
filter      f_news         { facility(news); };
filter      f_spooler      { facility(uucp,news) and level(crit); };
filter      f_syslog       { not facility(auth, authpriv) and not facility(mail); };
filter      f_user         { facility(user); };

#----------------------------------------------------------------------
#  Other catch-all filters
#----------------------------------------------------------------------
filter      f_crit         { level(crit); };
#filter     f_debug        { not facility(auth, authpriv, news, mail); };
filter      f_debug        { level(debug); };
filter      f_emergency    { level(emerg); };
filter      f_err          { level(err); };
filter      f_info         { level(info); };
filter      f_notice       { level(notice); };
filter      f_warn         { level(warn); };

#----------------------------------------------------------------------
#  Filter for the MySQL database pipe.  These are things that we really
#  do not care to see; otherwise they may fill up our database with
#  garbage.
#----------------------------------------------------------------------
#filter      f_db           { not facility(kern) and level(info, warning) or
#                             not facility(user) and level(notice) or
#                             not facility(local2) and level(debug); };
#
#filter      f_db           { not match("last message repeated ") or
#                             not match("emulate rawmode for keycode"); };
#
#filter      f_discard      { facility(kern) and level(info, warning) or
#                             facility(user) and level(notice) or
#			     facility(local2) and level(debug); };
#
#filter      f_discard      { match("last message repeated ") or
#                             match("emulate rawmode for keycode"); };

#----------------------------------------------------------------------
#  Logging
#----------------------------------------------------------------------
#
#  Notes:  When applying filters, remember that each subsequent filter
#          acts as a filter on the previous data flow.  This means that
#          if the first filter limits the flow to only data from the
#          auth system, a subsequent filter for authpriv will cause
#          no data to be written.  An example of this would be:
#
# log { source(s_dgram);
#       source(s_internal);
#       source(s_kernel);
#       source(s_tcp);
#       source(s_udp);      filter(f_auth); 
#                           filter(f_authpriv);  destination(authlog); };
#
#          So, one can cancel out the other.
#
#  There are also certain flags that can be attached to each of the log
#  statements:
#
#  Flag      Description
#  --------  ----------------------------------------------------------
#  catchall  This flag means that the source of the message is ignored, 
#            only the filters are taken into account when matching 
#            messages.
#  fallback  This flag makes a log statement 'fallback'. Being a 
#            fallback statement means that only messages not matching 
#            any 'non-fallback' log statements will be dispatched.
#  final     This flag means that the processing of log statements ends 
#            here. Note that this doesn't necessarily mean that 
#            matching messages will be stored only once, as they may 
#            have matched log statements processed prior to the 
#            current one.
#----------------------------------------------------------------------
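#----------------------------------------------------------------------
#  As a sketch only (these statements are illustrative and commented
#  out, not part of this configuration), the flags above attach to a
#  log statement like this:
#
#  log { source(s_dgram);
#        source(s_tcp);      filter(f_emergency);
#                            destination(hosts);  flags(final); };
#
#  log { source(s_dgram);
#        source(s_tcp);                           destination(hosts);
#                            flags(fallback); };
#----------------------------------------------------------------------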

#----------------------------------------------------------------------
#  Standard logging
#----------------------------------------------------------------------
log { source(s_dgram);
      source(s_internal);
      source(s_tcp);      filter(f_auth);      destination(authlog); };
log { source(s_dgram);
      source(s_internal);
      source(s_tcp);      filter(f_local7);    destination(bootlog); };
#log{ source(s_dgram);
#      source(s_internal);
#      source(s_kernel);
#      source(s_tcp);
#      source(s_udp);      filter(f_debug);     destination(debug); };
log { source(s_dgram);
      source(s_internal);
      source(s_tcp);      filter(f_local1);    destination(explan); };
log { source(s_dgram);
      source(s_internal);
      source(s_tcp);      filter(f_local5);    destination(routers); };
log { source(s_dgram);
      source(s_internal);
      source(s_tcp);      filter(f_messages);  destination(messages); };
log { source(s_dgram);
      source(s_internal);
      source(s_tcp);      filter(f_authpriv);  destination(secure); }; 
log { source(s_dgram);
      source(s_internal);
      source(s_tcp);      filter(f_spooler);   destination(spooler); };
log { source(s_dgram);
      source(s_internal);
      source(s_kernel);
      source(s_tcp);      filter(f_syslog);    destination(syslog); };
#log { source(s_dgram);
#      source(s_internal);
#      source(s_kernel);
#      source(s_tcp);
#      source(s_udp);                       destination(syslog); };
log { source(s_dgram);
      source(s_internal);
      source(s_tcp);      filter(f_user);      destination(user); };

#----------------------------------------------------------------------
#  Special catch all destination sorting by host
#----------------------------------------------------------------------
log { source(s_dgram);
      source(s_internal);
      source(s_kernel);
      source(s_tcp);                           destination(hosts); };

#----------------------------------------------------------------------
#  Send to a loghost
#----------------------------------------------------------------------
#log { source(s_dgram);
#      source(s_internal);
#      source(s_kernel);
#      source(s_tcp);                           destination(loghost); };

#----------------------------------------------------------------------
#  Mail subsystem logging
#----------------------------------------------------------------------
#log { source(s_dgram);
#      source(s_internal);
#      source(s_kernel);
#      source(s_tcp);
#      source(s_udp);     filter(f_mail);      destination(mail); };
log { source(s_dgram);
      source(s_internal);
      source(s_tcp);      filter(f_mail); 
                          filter(f_err);       destination(mailerr); };
log { source(s_dgram);
      source(s_internal);
      source(s_tcp);      filter(f_mail); 
                          filter(f_info);      destination(mailinfo); };
log { source(s_dgram);
      source(s_internal);
      source(s_tcp);      filter(f_mail); 
                          filter(f_notice);    destination(mailinfo); };
log { source(s_dgram);
      source(s_internal);
      source(s_tcp);      filter(f_mail); 
                          filter(f_warn);      destination(mailwarn); };

#----------------------------------------------------------------------
#  INN subsystem logging
#----------------------------------------------------------------------
log { source(s_dgram);
      source(s_internal);
      source(s_tcp);      filter(f_news); 
                          filter(f_crit);      destination(newscrit); };
log { source(s_dgram);
      source(s_internal);
      source(s_tcp);      filter(f_news); 
                          filter(f_err);       destination(newserr); };
log { source(s_dgram);
      source(s_internal);
      source(s_tcp);      filter(f_news); 
                          filter(f_notice);    destination(newsnotice); };
log { source(s_dgram);
      source(s_internal);
      source(s_tcp);      filter(f_news); 
                          filter(f_warn);      destination(newswarn); };

#----------------------------------------------------------------------
#  Cron subsystem logging
#----------------------------------------------------------------------
#log { source(s_dgram);
#      source(s_internal);
#      source(s_tcp);
#      source(s_udp);     filter(f_cron);      destination(crondebug); };
log { source(s_dgram);
      source(s_internal);
      source(s_tcp);      filter(f_cron); 
                          filter(f_err);       destination(cronerr); };
log { source(s_dgram);
      source(s_internal);
      source(s_tcp);      filter(f_cron); 
                          filter(f_info);      destination(croninfo); };
log { source(s_dgram);
      source(s_internal);
      source(s_tcp);      filter(f_cron); 
                          filter(f_warn);      destination(cronwarn); };

#----------------------------------------------------------------------
#  LPR subsystem logging
#----------------------------------------------------------------------
#log { source(s_dgram);
#      source(s_internal);
#      source(s_tcp);
#      source(s_udp);     filter(f_lpr);       destination(lpr); };
log { source(s_dgram);
      source(s_internal);
      source(s_tcp);      filter(f_lpr); 
                          filter(f_err);       destination(lprerr); };
log { source(s_dgram);
      source(s_internal);
      source(s_tcp);      filter(f_lpr); 
                          filter(f_info);      destination(lprinfo); };
log { source(s_dgram);
      source(s_internal);
      source(s_tcp);      filter(f_lpr);   
                          filter(f_warn);      destination(lprwarn); }; 

#----------------------------------------------------------------------
#  Kernel subsystem logging
#----------------------------------------------------------------------
#log { source(s_dgram);
#      source(s_internal);
#      source(s_kernel);
#      source(s_tcp);
#      source(s_udp);     filter(f_kern);      destination(kern); };
log { source(s_dgram);
      source(s_internal);
      source(s_kernel);
      source(s_tcp);      filter(f_kern); 
                          filter(f_err);       destination(kernerr); }; 
log { source(s_dgram);
      source(s_internal);
      source(s_kernel);
      source(s_tcp);      filter(f_kern); 
                          filter(f_info);      destination(kerninfo); }; 
log { source(s_dgram);
      source(s_internal);
      source(s_kernel);
      source(s_tcp);      filter(f_kern);
                          filter(f_warn);      destination(kernwarn); }; 

#----------------------------------------------------------------------
#  Daemon subsystem logging
#----------------------------------------------------------------------
#log { source(s_dgram);
#      source(s_internal);
#      source(s_tcp);
#      source(s_udp);     filter(f_daemon);    destination(daemon); };
log { source(s_dgram);
      source(s_internal);
      source(s_tcp);      filter(f_daemon);    
                          filter(f_err);       destination(daemonerr); };
log { source(s_dgram);
      source(s_internal);
      source(s_tcp);      filter(f_daemon); 
                          filter(f_info);      destination(daemoninfo); };
log { source(s_dgram);
      source(s_internal);
      source(s_tcp);      filter(f_daemon);    
                          filter(f_warn);      destination(daemonwarn); };

#----------------------------------------------------------------------
#  Console logging
#----------------------------------------------------------------------
#  16-Mar-03 - REP - Removed logging to the console for performance
#                    reasons.  Since we are not really going to be 
#                    looking at the console all the time, why log there
#                    anyway.
#----------------------------------------------------------------------
#log { source(s_dgram);
#      source(s_internal);
#      source(s_kernel);
#      source(s_tcp);      filter(f_syslog);    destination(console); };

#----------------------------------------------------------------------
#  Logging to a database
#----------------------------------------------------------------------
#log { source(s_dgram);
#      source(s_internal);
#      source(s_kernel);
#      source(s_tcp);      filter(f_db);        destination(database); };
#log { source(s_dgram);
#      source(s_internal);
#      source(s_kernel);
#      source(s_tcp);      filter(f_discard);   destination(db_discard); };

Syslog-ng FAQ (from campin.net)

Filed under: syslog-ng — lancevermilion @ 1:34 pm

Syslog-ng FAQ

This FAQ covers old syslog-ng versions, 1.X and 2.0. If you are looking for information about recent syslog-ng versions, please check the official FAQ now hosted by Balabit at: http://www.balabit.com/wiki/syslog-ng-faq

Every mailing list should have a list of frequently asked questions, and the answers usually given to those questions. Here’s one for syslog-ng.

Disclaimer: Use this information at your own risk, I cannot be held responsible for how you use this information and any consequences that may result. However, every effort has been made to ensure the technical accuracy of this document.

Most questions are taken from actual posts to the syslog-ng mailing list. Truly horrible grammar and spelling were cleaned up, but most questions are identical to the original post.

Any new entries should be submitted to the new FAQ at Balabit, not here.

Important Syslog-ng and syslog links

syslog/syslog-ng Graphical Interfaces

Contents

Getting started

  • Syslog-ng 2.x requires glib and eventlog, which reside in /usr, and thus cannot be used on systems where /usr is mounted during boot time, after syslog-ng starts. The latest snapshots (and future releases) of syslog-ng 2.x link to GLib and EventLog statically, so those libraries will not need to be present at boot time.

    The eventlog library was written by the syslog-ng author, and can be downloaded from the Balabit site.

    You can download glib from here, linked to from the main GTK site here.

  • I miss this/that very important feature from syslog-ng 2.x while it was present in syslog-ng 1.6.

    From Bazsi:
    syslog-ng 2.x is a complete reimplementation of syslog-ng, and even though I plan to make it upward compatible with syslog-ng 1.6, I might have forgotten something. So please post to the mailing list if you find missing or incompatible features.

  • What’s with this libol stuff, and which one do I need?

    libol is a library written by the author of syslog-ng, Balazs Scheidler, which is used in syslog-ng 1.6.x and below. A built copy of libol needs to be present on a system when these versions of syslog-ng are built.

    libol does *not* need to be installed, however. A built copy can be left in a typical build directory like /usr/src, and given as a parameter to syslog-ng’s configure script. Run ‘./configure --help’ in the syslog-ng source directory for more information. For information about versions of libol and which branch of syslog-ng they correspond to, see here.

  • When attempting to use match(“.asx”) in my filter it returns anything containing “asx”. I only want to match lines with the period before the asx, i.e. a file extension. For some reason syslog-ng seems to ignore my specification of the . before the asx. I have tried searching with ..asx and \.asx and /.asx but nothing works no matter what I do. Any suggestions?

    match() expects an extended regular expression, _and_ syslog-ng performs \-based escaping on strings to make it possible to include ” within a string. Therefore you need to specify:

    match("\\.asx")

    …which will match a single dot followed by the string asx. See this post for more explanation of this issue: https://lists.balabit.hu/pipermail/syslog-ng/2003-December/005616.html

  • I run Linux and see that I can choose one of two types of UNIX socket for my main syslog source. Which one is correct?

    You should choose unix-stream over unix-dgram for the same reasons you’d choose TCP (stream) over UDP (datagram): increased reliability, ordered delivery and client-side notification of failure.

    Along the same lines, you should choose unix-dgram over unix-stream for the same reasons you’d choose UDP (datagram) over TCP (stream): less possibility of denial of service by opening many connections (local-only vulnerability though), less overhead, don’t care to know if the remote end actually received the message.

    Most of us setting up syslog-ng tend to desire the benefits of unix-stream, and Bazsi recommends its use. See his commentary in the official reference manual.
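    A minimal sketch contrasting the two source flavors (the path and the max-connections value are illustrative assumptions, not taken from this FAQ; only one of the two should bind /dev/log at a time):

     # stream socket: reliable, ordered, connection-oriented
     source s_local { unix-stream("/dev/log" max-connections(256)); };

     # datagram socket: lower overhead, no delivery feedback
     # source s_local { unix-dgram("/dev/log"); };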

  • Hi, can someone please help me to get the compile right?

    OS: RHEL ES 3 update 4

    The ./configure went well, no errors.
    But the make did not go so well:

    Error message(s) :

    >> snip
    gcc -g -O2 -Wall -I/usr/local/include/libol -D_GNU_SOURCE -o
    syslog-ng main.o sources.o center.o filters.o destinations.o log.o cfgfile.o 
    cfg-grammar.o cfg-lex.o affile.o afsocket.o afunix.o afinet.o afinter.o afuser.o 
    afstreams.o afprogram.o afremctrl.o nscache.o utils.o syslog-names.o macros.o -lnsl 
    -lresolv 
    /usr/local/lib/libol.a -lnsl -Wl,-Bstatic 
    -Wl,-Bdynamic cfg-lex.o(.text+0x45f): In function `yylex': 
    /root/rpm/syslog-ng/syslog-ng-1.6.8/src/cfg-lex.c:1123: undefined 
    reference to 
    yywrap' 
    cfg-lex.o(.text+0xb33): In function `input': 
    /root/rpm/syslog-ng/syslog-ng-1.6.8/src/cfg-lex.c:1450: undefined 
    reference to 
    yywrap' 
    collect2: ld returned 1 exit status 
    make[3]: *** [syslog-ng] Error 1 
    make[3]: Leaving directory `/root/rpm/syslog-ng/syslog-ng-1.6.8/src' 
    make[2]: *** [all-recursive] Error 1 
    make[2]: Leaving directory `/root/rpm/syslog-ng/syslog-ng-1.6.8/src' 
    make[1]: *** [all] Error 2 
    make[1]: Leaving directory `/root/rpm/syslog-ng/syslog-ng-1.6.8/src' 
    make: *** [all-recursive] Error 1 

    Can someone plz explain what went wrong?

    As previously noted by another poster, this is a problem with the flex version on your system. If you use a flex with a higher version than 2.5.4 you’re out of luck, unless you patch the sources. The reason for this is that the people developing flex had the “interesting” idea of changing the way the lexer parses the language file.

    The fix is to downgrade your flex, or to patch cfg-lex.l with a %option field disabling yywrap. From the top of my head it should read:

     %option noyywrap

    Or you define the options as follows (I think):

     %{
    prototype functionname(parameters);
     %}

    Just my 2 cents, since this issue turns up on almost every OSS project out there and people hit this very problem all the time; I thus want to enlarge the information and have google find it once and forever 😉

    Best regards,
    Roberto Nibali, ratz

    ps.: Let’s hope I got it right

    Note from Rob Munsch:
    It should be noted that one will get an *identical* error, specifically the ‘undefined reference to yywrap’, if flex is *not installed at all*.
    …those who hit this error should first ensure that they have flex/m4 installed before they start screwing with the cfg-lex.l code. I’ve been happily compiling various things on this (new) system for a while now, and those of us newer at this than others tend to fall into the trap of thinking that if most things compile, then we aren’t missing any vital steps of a compile process, like, say, preprocessors.

  • Thanks for the patch, I just patched it, and I can’t recompile libol. I did a make clean after I patched it, then tried:
     $ make
     Making all in utils
     make[1]: Entering directory `/home/src/libol-0.2.17/utils'
     make[1]: Nothing to be done for `all'.
     make[1]: Leaving directory `/home/src/libol-0.2.17/utils'
     Making all in src
     make[1]: Entering directory `/home/src/libol-0.2.17/src'
     /usr/src/libol-0.2.17/utils/make_class io.c.xt
     /bin/sh: /usr/src/libol-0.2.17/utils/make_class: No such file or directory
     make[1]: *** [io.c.x] Error 127
     make[1]: Leaving directory `/home/src/libol-0.2.17/src'
     make: *** [all-recursive] Error 1

    I’m not sure what this means. Should I try the patch again? I just did a 

    $ patch -ORIG_FILE -DIFF_FILE

    This comes from a missing Scheme interpreter; touch io.c.x or install scsh.

    For much more on libol and scheme in syslog-ng, read this post by Bazsi

  • If I replace my syslog daemon with syslog-ng, what side effects can it have?

    Glad you asked; the most common side effect is being happy with a superior syslog daemon.

    Another common result is that system logfiles grow to huge sizes. This isn’t syslog-ng’s fault, but a side effect of syslog-ng logging to different logfiles than your old syslog daemon. Change your log rotation program’s config files to rotate the new log names/locations or change syslog-ng’s config file to make it log to the same files as your old syslog daemon.

Running it

  • I’m new to syslog-ng. Is there a way for syslog-ng and syslogd to co-exist? Our servers are managed by another group, and they don’t support syslog-ng. Can you pipe all syslogd messages to syslog-ng?

    Yes, syslog-ng can accept messages from a stock syslogd using the udp() source.
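    A minimal sketch of such a source (the bind address and port 514 are the conventional syslog defaults, assumed here rather than taken from this FAQ):

     source s_udp { udp(ip(0.0.0.0) port(514)); };

    On the client side, a stock syslogd forwards to this host with an entry like "*.* @loghost" in /etc/syslog.conf.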
  • I want a catch-all log destination and can’t seem to find out how in the documentation or examples.

    Jay Guerette helped out with:

    Filters are optional. A catchall should appear in your .conf before all other entries, and can look something like:

    destination catchall {
            file("/var/log/catchall");
    };
    log {
            source(syslog);
            destination(catchall);
    };
  • I want to replace syslogd *and* klogd on my Linux box with syslog-ng.

    Use a source line like this in your conf file to read kernel messages, too:
    source src { file("/proc/kmsg"); unix-stream("/dev/log"); internal(); };

    Notes:

    1. Do not run klogd and syslog-ng fetching local kernel messages at the same time. It may cause syslog-ng to block, which makes all locally logging daemons unusable.
    2. The current selinux policy distributed for RHEL4 supports syslog-ng via a boolean named “use_syslogng”. But on the non-working host (using “pipe”), the following happens:

       avc: denied { write } for pid=2190 comm="syslog-ng" name="kmsg" dev=proc ino=-268435446 scontext=root:system_r:syslogd_t tcontext=system_u:object_r:proc_kmsg_t tclass=file

       Please don’t use “pipe” at all for /proc/kmsg.

      Thanks to Peter Bieringer for contributing this information

    3. If you find yourself getting lots of kernel messages on the console after replacing klogd with syslog-ng: set the kernel’s console log level. This is done automatically by klogd but not by syslog-ng. Something like “dmesg -n4” should help.
  • I have been trying syslog-ng and am extremely happy with its power. I have one question: when using the program option under destination drivers, my PERL script gets launched when I start syslog-ng, but executes once and then dies. I am using this script to page any time I see a log entry, but it only runs the first time.

    You can read log messages on your stdin, so instead of fetching a single line and exiting, keep reading your input like this:
    #!/usr/bin/perl
    while (<>) {
            # send to pager
    }
  • Is it possible to create sockets with syslog-ng similar to how you can with syslogd? The reason being that I’m running some applications chrooted, and need to open a /dev/log socket inside the chroot jail.

    Of course you can. Just add a source:
    source local { unix-stream("/dev/log"); internal(); };
    source jail1 { unix-stream("/jail/dns/dev/log"); };
    source jail2 { unix-stream("/jail/www/dev/log"); };

    You can also do this using a single source:

    source local {
    	unix-stream("/dev/log");
    	internal();
    	unix-stream("/jail/dns/dev/log");
    };

    Note that postfix appears to need a log socket in its chroot jail, or its logging will stop when you reload syslog-ng:

    source postfix { unix-stream("/var/spool/postfix/dev/log" keep-alive(yes)); };
  • Directories with names like “Error”, “SCSI”, “”, are showing up in the directory that holds the syslogs for the different hosts that we monitor. Has anyone seen these random directories? Any suggestions on how to deal with them?

    From the description it’s apparent that logs are being stored in your filesystem with a macro similar to this:

    destination std { file( "/var/log/$HOST/$FACILITY"); };

    …so that you have directories created with the value of $HOST. This is bad. The host entry in syslog messages is often set to a bad value, especially with messages originating from the UNIX kernel, like SCSI error messages.

    The best fix for this is to *never* create files or directories based on unfiltered input from the network (you’d do well to remember that in general). Set keep_hostname(no), and syslog-ng will always replace the hostname field (possibly using DNS, so make sure your local caching DNS is set up correctly).

    Here’s the way to keep the hostnames in the log files but ALSO log safely to the filesystem:

    options {
            long_hostnames(off);
    	keep_hostname(yes);
    	use_dns(no);
    };
    
    source src {
    	internal();
    	unix-stream("/dev/log" keep-alive(yes));
    };
    
    # set it up
    destination logip {
    	file("/archive/logs/HOSTS/$HOST_FROM/$FACILITY/$YEAR$MONTH/$FACILITY$YEAR$MONTH$DAY"
    	owner(syslog-ng) group(syslog-ng) perm(0600) dir_perm(0700) create_dirs(yes)
    	template("$DATE $FULLHOST $PROGRAM $TAG [$FACILITY.$LEVEL] $MESSAGE\n")  );
    };
    
    # log it
    log {
    	source(src);
    	destination(logip);
    };

    Since you don’t use DNS, your $HOST_FROM directory name will be an IP, but since you keep_hostname(yes) you’ll still have the hostname AS SENT inside the actual logfile. How’s that for a good setup? I quite like it! 😉

    If you still really want to use hostnames for directory or file names, read on:
    When still using hostnames (from the DNS) for directory names, the author of this FAQ didn’t have garbled $HOST macros go away until he modified all clients to run syslog-ng and transfer over TCP. Both steps might not be required; syslog-ng over UDP might be sufficient, though there’s little reason *not* to use TCP. Modern TCP/IP stacks are tuned to handle lots of web connections, so even a central host for hundreds of machines can use TCP without issues from the use of TCP alone. Under most circumstances, I/O problems from trying to commit that many hosts’ logs to disk will show up much sooner.

  • DNS: I want to use fully qualified domain names in my logs; I have many different hosts named ns1 or www, and don’t want the logs mixed up. Also, I have a question concerning the use_dns option. Is this a global option only, or is there some way to change it per source or destination?

    First of all, make sure that you have a reliable DNS cache nearby. Nearby may be on the same host or network segment, or even at your upstream provider. Just make sure that you can reach it reliably. syslog-ng blocks on DNS lookups, so you can stop all logging if you start getting DNS timeouts.

    Internal syslog-ng DNS caching has recently been worked on, and reportedly works well. This appears to be a good alternative to running a local caching DNS server (‘dns_cache(yes);’).

    The use_dns option can be specified on a per-source basis (so can the keep_hostname option).

    See also: the section on hostname options directly below.
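    As a sketch, the global forms of these options look like this (the exact placement of per-source overrides inside a source driver varies by syslog-ng version, so check your release’s reference manual):

     options { use_dns(no); dns_cache(yes); };   # global default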

  • What is with all the “hostname” options?

    When syslog-ng receives a message it tries to rewrite the hostname it contains, unless keep_hostname is true. If the hostname is to be rewritten (i.e. keep_hostname is false), it checks whether chain_hostnames (or long_hostname, which is an alias for chain_hostnames) is true. If chain_hostnames is true, the name of the host syslog-ng received the message from is appended to the hostname; otherwise it is replaced.

    So if you have a message which has hostname “server”, and which resolves to “server2”, the following happens:

                          keep_hostname(yes)   keep_hostname(no)
    chain_hostname(yes)   server               server/server2
    chain_hostname(no)    server               server2

    I hope this makes things clear.
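    For instance, the combination that yields “server/server2” in the table above would be written in the config as (a sketch):

     options { keep_hostname(no); chain_hostnames(yes); };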

  • I have this config file:

     filter f_local0 { facility(local0); };
     filter f_local1 { facility(local1); };
     destination df_local1 {
         file("/mnt/log/$R_YEAR-$R_MONTH-$R_DAY/$SOURCEIP/local.log"
              template("$FULLDATE <> $PROGRAM <> $MSGONLY\n")
              template_escape(no));
     };
     log { source(s_tcp); source(s_internal); source(s_udp);
           source(s_unix); filter(f_local0); filter(f_local1);
           destination(df_local1); };

    When an event arrives at the system on facility local0 or local1, it is never registered in the file. Is this a bug in syslog-ng or a failure in my config?

    If I understand you correctly, the problem is that you’re using two filters which exclude each other; when using multiple filters they are logically ANDed. If you want to catch messages from local0 and local1, use a filter like this:

    filter f_local01 { facility(local0) or facility(local1); };

  • I archive my logs like this:
    file("/var/log/HOSTS/$HOST/$YEAR/$MONTH/$DAY/$FACILITY_$HOST_$YEAR_$MONTH_$DAY"
         owner(root) group(root) perm(0600) dir_perm(0700) create_dirs(yes) );

    …and over time my archive takes up lots of space. When will syslog-ng implement compression so that I can compress logs automatically?

    You don’t need syslog-ng to compress them; install bzip2 and run a nightly cronjob like this:

     /usr/bin/find /var/log/HOSTS ! -name "*bz2" -type f ! -path "*`/bin/date +%Y/%m/%d`*" -exec /usr/bin/bzip2 {} \; 

    This might need some explaining: find all non-bzipped files that aren’t from today (syslog-ng might still write to them) and compress them with bzip2. This was tested on Debian GNU/Linux with GNU find version 4.1.7.

    Submitted by Michael King:
    We started with the compression script you have, but changed it to use gzip compression. (Less space efficient, but more time efficient: bzip2 takes approximately 20 minutes to decompress vs 2 or 3 minutes for gzip.) We also added a quick find-and-delete for empty directories and for files modified more than 14 days ago.

    # Current policy is:
    # Find all non-Archived files that aren't from today, and archive them
    # Archive Logs are deleted after 14 days
    #
    #Changes.   Change -mtime +14 to the number of days to keep
    
    # Archive old logs
    /usr/bin/find /var/log/HOSTS ! -name "*.gz" -type f ! -path "*`/bin/date +%Y/%m/%d`*" -exec /usr/bin/gzip {} \;
    
    # Delete old archives
    find /var/log/HOSTS/ -daystart -mtime +14 -type f -exec rm {} \;
    
    # Delete empty directories
    find /var/log/HOSTS/ -depth -type d -empty -exec rmdir {} \;
  • My syslog-ng.conf has
    destination std { file( "/var/log/$HOST/$YEAR$MONTH/$FACILITY" create_dirs(yes)); };

    What happens is that if /var/log/$HOST/$YEAR$MONTH does not exist, syslog-ng makes that dir, but its owner is root:other, I think because the daemon’s effective user ID is root. I want to change the dir’s owner. Is this possible?

    Yes, you can do this using the owner(), group() and perm() options.
    For example:

    destination d_file { file("/var/log/$HOST/log" create_dirs(yes)
     owner("log") group("log") perm(0600)); };
  • a) I have snmptrapd running so that any trap it receives should be logged to local1. I have a filter taking anything received via local1 to a specific file:
    filter snmptrap { facility(local1); };
    destination snmptraps { file("/var/log/snmptraps"); };

    Unfortunately a number of traps are getting cut off at a specific point, and the remainder of the trap ends up in syslog rather than in the proper destination.

    syslog defaults to 1024-byte messages, but this value is tunable in syslog-ng 1.5, where you can set it higher:

    options { log_msg_size(8192); };

    b) Andreas Schulze points out: “We are running snmptrapd and syslog-ng 1.5.x under Solaris 8 and observed exactly the same problem.

    This doesn’t fix the problem for us. It seems that there is a problem in the syslog(3) implementation, at least on Solaris; maybe on Linux, too. This is important, because snmptrapd feeds its messages via syslog(3) to syslog-ng, so syslog-ng never gets the correct message: it is truncated in libc before syslog-ng receives it.

    Our solution was to patch snmptrapd to log its messages via a local Unix DGRAM socket and use this socket as a message source for syslog-ng. This fixes the problem and has worked fine and very stably for more than a year in our environment.”

    Basically you’re screwed on Solaris, but hopefully other implementations aren’t as brain-dead.

  • It seems I have syslog clients with unsynchronized clocks, and I have files that were created with the time macros whose dates are wrong. What I want is for the files to be created with the time/date they are received.

    There’s an option for that: the use_time_recvd() boolean.
    options { use_time_recvd(true); };
  • What conf settings can I use in my syslog-ng.conf file so that messages are written to disk the instant they are received?

    Add sync(0) to your config file:
    options { sync(0); };
  • I want to run syslog-ng chrooted and as a non-root user.

    Use syslog-ng 1.5.x’s own -C and -u flags when starting it.
  • I want to rewrite my logs into a specific format. Syslog-ng 1.5.3 added support for user-definable log file formats. Here’s how to use it:
    destination my_file {
            file("/var/log/messages" template("$ISODATE $TAG $FULLHOST $MESSAGE"));
    };

    For an explanation of available macros read this post.
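    The template only takes effect once the destination is used in a log path; a sketch (s_all is an assumed source name):

    log { source(s_all); destination(my_file); };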

  • I have been experiencing a problem with a syslog-ng (1.4.11) server seemingly only allowing 10 connections to a tcp() source. A quick tour of the code found the offending code at line 341 in afinet.c:
    self->super.max_connections = 10;

    It’s limited because otherwise it’d be easy to mount a DoS attack against the logger host. You can change this limit run-time without changing the source with the max-connection option to tcp():

    source src { tcp(max-connections(100)); };

  • Can output from programs started by syslog-ng be captured and logged by syslog-ng? It’s on the todo list. As long as it is not implemented, you might try to redirect the program’s output to a named pipe like this:
    destination d_swatch { program("swatch 2> /var/run/swatch.err"); };
    source s_swatch { pipe("/var/run/swatch.err"); };
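    To close the loop, wire both halves up in log statements. A sketch, where s_local, f_watch and d_messages are assumed names for your local source, the filter feeding swatch, and a file destination for swatch’s stderr:

    log { source(s_local); filter(f_watch); destination(d_swatch); };
    log { source(s_swatch); destination(d_messages); };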

Getting fancy

  • The whole point of setting up a loghost was to report on the logs. How can syslog-ng help? Syslog-ng is not about reporting on messages. Syslog-ng is a “sink” for syslog messages. Once syslog-ng commits them to some sort of storage (filesystem, database, line printer, etc.), it is up to you to scan them.

    That being said, Nate Campi’s “newlogcheck” page shows how he filters all messages through swatch in real time, and also uses syslog-ng’s “match” option to alert on certain message strings.
    The fact that this stuff works is a result of syslog-ng’s flexibility, not because it was written to be all things to all people. Syslog-ng is a quality daemon because it tries to stay good at one thing and one thing only: being a syslog server.

    Look at the links part of this page for the link to Nate’s newlogcheck page, and for the link to the Log Analysis page. You’ll find plenty of information on log parsing there.

  • I want to input my logs into a database in real time – why can’t I do it? You can; there’s just nothing built into syslog-ng that knows about databases. You simply need to take advantage of syslog-ng’s ability to pipe to a program. Follow the links in the links part of this page to read up on how others have done it.
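    For example, a program() destination can feed one message per line to a loader script that does the INSERTs. A sketch – the script path is hypothetical, s_all is an assumed source name, and the template() option on program() assumes a syslog-ng version that supports it:

    destination d_db {
            program("/usr/local/bin/log2db.pl"
                    template("$ISODATE $HOST $FACILITY $LEVEL $MESSAGE\n"));
    };
    log { source(s_all); destination(d_db); };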
  • How much log volume can syslog-ng handle? The limits to throughput in syslog-ng are similar to those of most other network applications, where network and disk I/O are the limiting factors.

    Here’s a report from Kevin Kadow about throughput on his busy loghost:

    Our log volume has been growing slowly over time, I recently checked my primary
    logger and noticed that the raw log volume for this past Monday (the 24-hour
    period from midnight to midnight) was 10,982,118,488 bytes, or 10.22GiB.
    
    Peak hourly volume was 913MiB. Peak one-second volume in that hour was 2,626
    messages totaling 440KiB, no duplicates.
    
    System specs:
    OpenBSD on a Dell 2650 (single 2.8GHz P4) running syslog-ng 1.6.X.
    Logs are written to a Dell PERC 3/Di SCSI RAID-0, as a 2-drive stripe.
  • I’m using syslog-ng over redirected ports inside an SSH channel, and whenever I HUP syslog-ng, the SSH channel closes. syslog-ng closes TCP connections when a SIGHUP is received, but you can change this behaviour with the keep-alive option:
    destination remote_tcp { tcp("loghost" port(1514) keep-alive(yes)); };
    source tcp_listen { tcp(ip("10.0.0.1") port(5140) keep-alive(yes)); };
  • I’ve successfully set up syslog-ng to tunnel through stunnel. I’m having one problem though: all messages come through with a hostname of “localhost”, presumably since stunnel is coming from localhost on the syslog server. Keep the hostname as sent by the remote syslog daemon:
    options { keep_hostname(yes); };
  • I am trying to set up a central loghost and am having trouble getting events registered on the central server. It looks like the remote host does connect to the central host, but nothing shows in a log anywhere for it. Here is the central loghost config file:
    options {
            long_hostnames(off);
            sync(0);
            stats(43200);
            dns_cache(yes);
            use_fqdn(no);
            keep_hostname(yes);
            use_dns(yes);
    };
    
    source gateway {
            unix-stream("/dev/log");
            internal();
            udp(ip(0.0.0.0) port(514));
    };
    
    source tcpgateway {
            unix-stream("/dev/log");
            internal();
            tcp(ip(0.0.0.0) port(514) max_connections(1000));
    };
    
    destination hosts {
            file("/var/log/syslogs/$HOST/$FACILITY"
            owner(root) group(root) perm(0600) dir_perm(0700)
    create_dirs(yes));
    };
    
    log {
            source(gateway); destination(hosts);
    };
    
    log {
            source(tcpgateway); destination(hosts);
    };

    Don’t duplicate sources across your source{} statements (the unix-stream("/dev/log"); and internal(); directives appear in both). syslog-ng is going to open /dev/log once for each time you list it, and the same goes for any TCP/IP ports, files, etc. List them once and reuse the source{} in additional log{} statements.

    You’ll want something more like:

    options {
            long_hostnames(off);
            sync(0);
            stats(43200);
            dns_cache(yes);
            use_fqdn(no);
            keep_hostname(yes);
            use_dns(yes);
    };
    
    source local {
            unix-stream("/dev/log");
            internal();
    };
    
    source gateway {
            udp(ip(0.0.0.0) port(514));
    };
    
    source tcpgateway {
            tcp(ip(0.0.0.0) port(514) max_connections(1000));
    };
    
    destination hosts {
            file("/var/log/syslogs/$HOST/$FACILITY"
            owner(root) group(root) perm(0600) dir_perm(0700)
    create_dirs(yes));
    };
    
    log {
            source(gateway);
    	destination(hosts);
    };
    
    log {
            source(local);
    	destination(hosts);
    };
    
    log {
            source(tcpgateway);
    	destination(hosts);
    };
  • I am having problems with syslog-ng on an SELinux-aware machine. The kernel will not allow me to open /proc/kmsg for kernel messages. The error message I get looks like:

    Oct 24 14:03:06 shadowlance kernel: audit(1130178038.432:2): avc: denied { read } for pid=2690 comm="syslog-ng" name="kmsg" dev=proc ino=-268435446 scontext=user_u:system_r:syslogd_t tcontext=system_u:object_r:proc_kmsg_t tclass=file

    What can I do?

    These errors are a sign that either the SELinux policies are not syslog-ng aware or have not been enabled yet. Make sure you have the latest policies for your distribution and then use getsebool to see if use_syslogng is turned on:

    # getsebool use_syslogng
    use_syslogng --> inactive
    # setsebool -P use_syslogng=1

    Restart syslog-ng and the problem should be fixed. If not, you will need to contact your distribution’s SELinux team for more guidance.

  • I am trying to send all important messages from a bunch of other machines to a central syslog-ng server via TCP. I chose TCP partly because the same log server gets all kinds of less important stuff via UDP from other machines, which can easily be distinguished that way, but partly also because I expected TCP to be more reliable. Unfortunately, this does not seem to be the case: when the connection has died for any reason, the client will only discover this when it tries to send the next message to the server. Only then does it start to wait until “time_reopen” is over and establish a new connection – the message that originally triggered this, and whatever comes in between, is lost. This has been discussed quite a bit lately on the mailing list. A single line is written to a TCP socket without an error when the connection has been lost. It is not until the next message is written that the error condition is reported by the kernel.

Performance Tips

  • What are some tips for optimizing a really busy loghost running syslog-ng? In no particular order:
      • If you use DNS, at least keep a caching DNS server running on the local host and make use of it – or better yet don’t use DNS.

        You can post-process logs on an analysis host later on and resolve hostnames at that time if you need to. On your loghost your main concern is keeping up with the incoming log stream – the last thing you want to do is make the recording of events rely on an external lookup. syslog-ng blocks on DNS lookups (as noted elsewhere in this FAQ), so you’ll slow down/stop ALL destinations with slow/failed DNS lookups.
      • Don’t log to the console or a tty, under heavy load they won’t be able to read the messages as fast as syslog-ng sends them, slowing down syslog-ng too much.
      • Don’t use regular expressions in your filters. Instead of:
        filter f_xntp_filter_regexp {
        	# original line: "xntpd[1567]: time error -1159.777379 is way too large (set clock manually)"
        	program("xntpd") and
        	match("time error .* is way too large .* set clock manually");
        };

        Use this instead:

        filter f_xntp_filter_no_regexp {
        	# original line: "xntpd[1567]: time error -1159.777379 is way too large (set clock manually)"
        	program("xntpd") and
        	match("time error") and match("is way too large") and match("set clock manually");
        };

        Under heavy, heavy logging load you’ll see far higher CPU usage when using regexps than when not:

        [CPU usage graphs: one run with regexp filters, one with plain substring matches]
        Note that the results at the bottom of the graphs show that the test with heavy regexp use caused huge delays, almost 25% lost messages (the test only sent 5,000 messages!) and hammered the CPU. The test without regexps was one where I sent 50,000 messages, and it hardly used any CPU, didn’t drop any messages and all the messages made it across in under a second (not all 50,000, each individual message made it in under a second). Note that the “Pace” of 500/sec is simply how fast they were injected to the syslog system using the syslog() system call (from perl using Unix::Syslog).

        NOTE: when not using regexps and matching on different pieces of the message, you might match messages that you don’t mean to. There is only a small risk of this, and it is much better than running out of CPU resources on your log server under most circumstances. It is your call to make.

        Please don’t ask me for the scripts that generated these graphs, I wrote them for work and it probably wouldn’t be possible to ever release them. I hope to one day write some like it in my free time and release them…but that may be a pipe dream.😦

 

    • Be sure to increase your maximum connections to a TCP source, as described here
    • There’s a good chance you’ll want to set per-destination buffers. The official reference manual covers the subject here. The idea is to make sure that when you have multiple log destinations that might block somewhat “normally” (TCP and FIFO come to mind), they don’t interfere with each other’s buffering. If you have a TCP connection maxed out in its buffer because of an extended network problem, but have only a temporary problem feeding logs into a FIFO, you can avoid losing any data in the FIFO (assuming your buffer size is large enough to handle the backup) if you set up separate buffers.

      If our TCP destination connection drops because the regional syslog server is down for a syslog-ng upgrade or kernel patch, we want events bound for the TCP destination to be held in the buffer and sent across once the connection is re-established. If our bucket is already filled because of FIFO problems to a local process, we can’t buffer a single message for the entire duration of the TCP connection outage. Ouch.

      The catch with per-destination buffers is that the log_fifo_size option was only added to TCP destinations in version 1.6.6, so you need to upgrade to syslog-ng 1.6.6 or later (I suggest the latest stable version).
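      A sketch of per-destination buffers (destination names and sizes are purely illustrative; log_fifo_size() is counted in messages, and on tcp() it requires 1.6.6+):

      destination d_loghost { tcp("loghost" port(514) log_fifo_size(16384)); };
      destination d_fifo { pipe("/var/run/log.fifo" log_fifo_size(2048)); };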

  • You probably need to increase the size of your UDP receive buffers on your loghost. See this doc about UDP buffer sizing and how to modify it.
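    On Linux, for example, the relevant knobs are the net.core receive-buffer sysctls (values are illustrative – size them for your traffic):

    # /etc/sysctl.conf, or apply at run time with sysctl -w
    net.core.rmem_max = 8388608
    net.core.rmem_default = 262144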
  • If you have many clients, you might well run out of fds (the default limit on file descriptors is around 1000), and syslog-ng might then be unable to open files. The workaround is to raise the maximum number of file handles (ulimit -n) before starting syslog-ng; the best place to do this is the init script.
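    An init-script sketch (the 8192 value and the daemon path are illustrative):

```shell
#!/bin/sh
# Raise the per-process fd limit before starting syslog-ng; if the hard
# limit forbids it, fall through and report whatever we actually got.
ulimit -n 8192 2>/dev/null || true
echo "fd limit: $(ulimit -n)"
# exec /usr/sbin/syslog-ng    # enable in a real init script
```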