Setup LVM

The following tutorial shows how to combine two 1TB disks into a single 2TB logical volume

Add the two disks to the machine. In this case it's a VM, so I added two 1TB disks via the VM setup options

Check the disks have been added using fdisk; the -l flag lists all the physical disks attached to the host

fdisk -l

Now you need to create a partition on each disk, again using fdisk

fdisk /dev/sdb

Command (m for help): m
Command action
   a   toggle a bootable flag
   b   edit bsd disklabel
   c   toggle the DOS compatibility flag
   d   delete a partition
   l   list known partition types
   m   print this menu
   n   add a new partition
   o   create a new empty DOS partition table
   p   print the partition table
   q   quit without saving changes
   s   create a new empty Sun disklabel
   t   change a partition's system id
   u   change display/entry units
   v   verify the partition table
   w   write table to disk and exit
   x   extra functionality (experts only)

Command (m for help):

Select n to add a new partition, then p for a primary partition, accepting the defaults for partition number and size

n

p

Now select t to change the partition type, entering 8e for Linux LVM, then w to write the table to disk and exit

t

8e

w

Repeat the above fdisk procedure for every disk you want in the volume; in my case that was /dev/sdb and /dev/sdc. Now you can create the logical volume

pvcreate /dev/sdb1 /dev/sdc1
pvdisplay
vgcreate rsync-vol /dev/sdb1 /dev/sdc1
lvcreate --name nas-data --size 1.9T rsync-vol
lvdisplay
lvscan
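
As an aside, if you'd rather the logical volume fill all the remaining space in the volume group than give it a fixed size, lvcreate can allocate by extents instead:

lvcreate --name nas-data -l 100%FREE rsync-vol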

Now format the logical volume

mkfs.ext3 /dev/mapper/rsync--vol-nas--data

Then mount

mount /dev/mapper/rsync--vol-nas--data /mnt/nas-vol 
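
If the mount point doesn't exist yet, create it first, and add an fstab entry so the volume comes back after a reboot (a minimal sketch, assuming the paths above):

mkdir -p /mnt/nas-vol
echo "/dev/mapper/rsync--vol-nas--data /mnt/nas-vol ext3 defaults 0 2" >> /etc/fstab
mount -a    # check the fstab entry mounts cleanly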

Setup Check MK CentOS 6


I have been using Nagios for years and frankly love this piece of software. Recently I wanted to set up Check_MK, which is an add-on for Nagios. The reason for my interest was the "livestatus" module that Check_MK uses; I was finding NDOUtils was crashing and causing weekly maintenance work, so I wanted to look at other options

The installation is for CentOS 6, going from a fresh minimal installation.

As it's a minimal installation I need to install nano & wget

yum -y install nano wget

First of all disable SELinux, as it causes issues! If you want it enabled then please create the exceptions

nano /etc/sysconfig/selinux
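
Set the SELINUX line to disabled; a reboot is needed for the change to fully take effect, though setenforce 0 drops SELinux to permissive immediately on the running system

SELINUX=disabled

setenforce 0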

Then you need to add the EPEL repository; I've also included Remi

wget http://dl.fedoraproject.org/pub/epel/6/x86_64/epel-release-6-8.noarch.rpm
wget http://rpms.famillecollet.com/enterprise/remi-release-6.rpm
rpm -Uvh remi-release-6*.rpm epel-release-6*.rpm

Now install the following applications. I've included gcc & make as we are going to manually compile the latest version of Check_MK

yum -y install nagios gcc make httpd gcc-c++ autoconf automake mlocate xinetd check-mk-agent nagios-plugins-all.x86_64

Now go to the Check_MK website and get the latest version; 1.2.5i2 is the latest at the time of writing

wget --no-check-certificate https://mathias-kettner.de/download/check_mk-1.2.5i2.tar.gz

Then move to an appropriate directory and extract the archive

tar -xvvzf check_mk-1.2.5i2.tar.gz

Since we installed Apache and Nagios in the previous step, we need to start the services now, as the Check_MK setup will automatically detect them

service nagios start
service httpd start 

Now go into the directory where you extracted Check_MK and start the setup

cd check_mk-1.2.5i2

./setup.sh

Within the setup I followed most of the defaults; it should automatically detect your Nagios and Apache installations, so just use the defaults. The program should compile without any errors

After the install, restart Nagios & Apache

service nagios restart
service httpd restart

As this is a brand new install you will need to set a Nagios password to access the web page

htpasswd -c /etc/nagios/passwd nagiosadmin

If you want PNP graphs, which I would recommend, then they can be installed via yum

yum -y install pnp4nagios.x86_64

Then you need to tell Nagios to use pnp4nagios for its performance data processing

nano /etc/nagios/objects/commands.cfg

Comment out the default perfdata command definitions and add new ones that hand the data to process_perfdata.pl:

# 'process-host-perfdata' command definition
#define command{
#       command_name    process-host-perfdata
#       command_line    /usr/bin/printf "%b" "$LASTHOSTCHECK$\t$HOSTNAME$\t$HOSTSTATE$\t$HOSTATTEMPT$\t$HOSTSTATETYPE$\t$HOSTEXECUTIONTIME$\t$HOSTOUTPUT$\t$HOSTPERFDATA$\n" >> /var/log/nagios/host-perfdata.out
#       }
#

# 'process-service-perfdata' command definition
#define command{
#       command_name    process-service-perfdata
#       command_line    /usr/bin/printf "%b" "$LASTSERVICECHECK$\t$HOSTNAME$\t$SERVICEDESC$\t$SERVICESTATE$\t$SERVICEATTEMPT$\t$SERVICESTATETYPE$\t$SERVICEEXECUTIONTIME$\t$LATENCY$\t$SERVICEOUTPUT$\t$SERVICEPERFDATA$\n" >> /var/log/nagios/service-perfdata.out
#       }
#

define command {
       command_name    process-service-perfdata
       command_line    /usr/bin/perl /usr/libexec/pnp4nagios/process_perfdata.pl
}

define command {
       command_name    process-host-perfdata
       command_line    /usr/bin/perl /usr/libexec/pnp4nagios/process_perfdata.pl -d HOSTPERFDATA
}

Then check the nagios.cfg file to make sure it's processing performance data; change the following if not already set

nano /etc/nagios/nagios.cfg

process_performance_data=1

# HOST AND SERVICE PERFORMANCE DATA PROCESSING COMMANDS
# These commands are run after every host and service check is
# performed.  These commands are executed only if the
# enable_performance_data option (above) is set to 1.  The command
# argument is the short name of a command definition that you
# define in your host configuration file.  Read the HTML docs for
# more information on performance data.

host_perfdata_command=process-host-perfdata
service_perfdata_command=process-service-perfdata

If, like me, you intend to use livestatus and want remote access to it, you need to set up xinetd

nano /etc/xinetd.d/livestatus

Then copy the configuration below into the file


service livestatus
{
	type		= UNLISTED
	port		= 6557
	socket_type	= stream
	protocol	= tcp
	wait		= no
# limit to 100 connections per second. Disable 3 secs if above.
	cps             = 100 3
# set the number of maximum allowed parallel instances of unixcat.
# Please make sure that this value is at least as high as
# the number of threads defined with num_client_threads in
# etc/mk-livestatus/nagios.cfg
        instances       = 500
# limit the maximum number of simultaneous connections from
# one source IP address
        per_source      = 250
# Disable TCP delay, makes connection more responsive
	flags           = NODELAY
	user		= nagios
	server		= /usr/bin/unixcat
	server_args     = /var/spool/nagios/cmd/live
# configure the IP address(es) of your Nagios server here:
#	only_from       = 127.0.0.1 10.0.20.1 10.0.20.2
	disable		= no
}

Then restart xinetd

service xinetd restart
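
To check livestatus is answering, you can fire a simple query at the socket locally with unixcat, or at the TCP port from another machine (a quick sanity check, assuming the socket path from the config above):

echo 'GET status' | unixcat /var/spool/nagios/cmd/live
echo 'GET status' | nc nagios-server 6557    # from a remote host; substitute your server's address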

Copy from Linux box to VMware datastore

I needed to transfer some backup "VMDK" files from our Linux-based backup server to our VMware datastores, and I used SSH to do the job. Make sure your destination VMware server has SSH enabled. Hope this simple hint helps others!

192.168.5.40 is the VMware host; I am running the command from the Linux backup server's command line

ssh 192.168.5.40 "cat > /vmfs/volumes/datastore1/sbsfolder/sda-flat.vmdk" < sda-flat.vmdk
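
If you prefer, scp does the same job in one step (assuming the same datastore path):

scp sda-flat.vmdk root@192.168.5.40:/vmfs/volumes/datastore1/sbsfolder/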


Clonezilla CentOS 6

A very quick guide on how I set up "Clonezilla" to run as a server on CentOS 6

wget http://sourceforge.net/projects/drbl/files/drbl_stable/1.10.31/drbl-1.10.31-1drbl.i386.rpm/download
yum install perl-Digest-SHA perl-Digest-SHA1
rpm -ivh drbl-1.10.31-1drbl.i386.rpm
yum update
/opt/drbl/sbin/drblsrv -i

When I first ran through the above commands I got an error

(found some error -> A suitable kernel rpm package is NOT found in these…)

To fix it, all I did was run yum update again

yum update

Now I can setup the DRBL server

/opt/drbl/sbin/drblsrv -i

Now setup the parameters

/opt/drbl/sbin/drblpush -i

Once the parameters have been setup run

/opt/drbl/sbin/dcs

Make sure you know the number of clients, then set the job to run once that number of clients has connected

I had some issues where the restore from PXE would go straight into a shell saying:

/bin/bash/ tty 

To fix this, just restart the NFS server

/etc/init.d/nfs restart

LXC Ubuntu

My notes on LXC setup with Ubuntu

By default, lxc-create places the container’s root filesystem as a directory tree at /var/lib/lxc/CN/rootfs

apt-get install lxc

aptitude install bridge-utils libvirt-bin debootstrap

lxc-create -t ubuntu -n lxc1


This will default to using the same version and architecture as your machine;
additional options are available (--help will list them). Login/password are ubuntu/ubuntu.

screen -d -m -S lxc1 lxc-start -n lxc1

lxc-console -n lxc1 -t 1

For our deployment we would want to use cloning for rapid deployment – lxc-clone -o C1 -n C2 clones container C1 into a new container C2
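
A rough clone-and-start sequence using the lxc1 container from earlier (the source container should be stopped before cloning):

lxc-stop -n lxc1              # source container must not be running
lxc-clone -o lxc1 -n lxc2     # clone lxc1 into a new container lxc2
lxc-start -n lxc2 -d          # start the clone in the background
lxc-ls                        # list containers to confirm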

Changing IP

Set up a new gateway IP on the bridge interface

ifconfig lxcbr0:1 172.18.10.1 netmask 255.255.255.0 up

then add the NAT rule so the IPs can route out

iptables -t nat -A POSTROUTING -s 172.18.10.0/24 ! -d 172.18.10.0/24 -j MASQUERADE
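
The MASQUERADE rule only routes traffic if kernel IP forwarding is on, so enable it if it isn't already:

sysctl -w net.ipv4.ip_forward=1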

============

Setup VLAN

apt-get install vlan

vconfig add eth0 200

root@lxc2:~# ifconfig eth0.200
eth0.200 Link encap:Ethernet HWaddr 00:16:3e:71:db:7e
BROADCAST MULTICAST MTU:1500 Metric:1
RX packets:0 errors:0 dropped:0 overruns:0 frame:0
TX packets:0 errors:0 dropped:0 overruns:0 carrier:0
collisions:0 txqueuelen:0
RX bytes:0 (0.0 B) TX bytes:0 (0.0 B)

To make the VLAN interface persistent, add it to the interfaces file

nano /etc/network/interfaces

iface eth0.200 inet static
address 172.18.10.2
netmask 255.255.255.0

ifup eth0.200

root@lxc2:~# cat /proc/net/vlan/eth0.200
eth0.200 VID: 200 REORDER_HDR: 1 dev->priv_flags: 1
total frames received 0
total bytes received 0
Broadcast/Multicast Rcvd 0

total frames transmitted 6
total bytes transmitted 468
Device: eth0
INGRESS priority mappings: 0:0 1:0 2:0 3:0 4:0 5:0 6:0 7:0
EGRESS priority mappings:
root@lxc2:~#

Ref:

https://help.ubuntu.com/12.04/serverguide/lxc.html

http://askubuntu.com/questions/293275/what-is-lxc-and-how-to-get-started

Adding Radius checks to Nagios

I needed to monitor some of our company's RADIUS servers. The Nagios server requires a RADIUS client to be set up; this can then interact with the check_radius plugin

Nagios Server = 172.17.17.201

RADIUS Server = 192.168.2.66

The check is done with the check_radius plugin.

First of all make sure the Nagios plugins package is installed; the plugin will be found at

/usr/lib/nagios/plugins/check_radius

Install radiusclient

Next you need to install the radius client software –

radiusclient-0.3.2-0.2.el5.rf.i386.rpm

Install with

rpm -ivh radiusclient-0.3.2-0.2.el5.rf.i386.rpm

Now that’s installed you need to edit some of its config files –

/etc/radiusclient/radiusclient.conf

and

/etc/radiusclient/servers

radiusclient.conf

change the “authserver”

authserver 192.168.2.66

then run –

chown -R nagios:nagios /etc/radiusclient/radiusclient.conf

This makes sure the nagios user can read the file, as it is passed to the check_radius command via -F

radiusclient/servers

Add in the server host or IP and the secret key

#Server Name or Client/Server pair	Key
#----------------			---------------
#portmaster.elemental.net		hardlyasecret
#portmaster2.elemental.net		donttellanyone

192.168.2.66				testing123

Then run the command below to set up the permissions so nagios can read the servers file

chown -R nagios:nagios /etc/radiusclient/servers

Adding to Nagios

Add the command into commands.cfg

/usr/lib/nagios/plugins/check_radius -H 192.168.2.66 -F /etc/radiusclient/radiusclient.conf -u xonetest -p xonetest -P 1812

This is what I have in the config file

define command{
        command_name    check_radius
        command_line    /usr/lib/nagios/plugins/check_radius -H 192.168.2.66 -F /etc/radiusclient/radiusclient.conf -u xonetest -p xonetest -P 1812
}

Add to localhost.cfg

define service{
        use                     local-service   ; Name of service template to use
        host_name               xone
        service_description     RADIUS_DRWOTSON
        check_command           check_radius
        notifications_enabled   1
}

Add the Nagios server to the clients file on the Radius Server

You need to add the Nagios server to the RADIUS server's clients file so it knows to accept auth requests from that server; if not, the requests will be ignored

nano /etc/raddb/clients

172.17.17.201 testing123

TEST!

[root@xone html]# tail -f /var/log/radius/radius.log

Tue Jan 19 19:51:59 2010 : Info: Ready to process requests.

Tue Jan 19 19:52:06 2010 : Auth: Login OK: [xonetest/xonetest] (from client 172.17.17.201 port 0)

Auto mount Linux

A guide on how to setup auto-mounting in Linux

Install autofs

yum install autofs

Then edit the main config file

nano /etc/auto.master

In here you set up the path followed by a separate config file; the config file completes the mount path

/mnt/auto /etc/auto.auto --timeout=60

so the full path is /mnt/auto/back – the “/mnt/auto” is taken from the master & the “back” is taken from auto.auto

nano /etc/auto.auto

back -fstype=reiserfs,noatime,data=writeback :/dev/disk/by-label/DR_BACKUP_DRIVE

Now to mount the drive, cd into /mnt/auto/back – this will auto-mount the drive

Remember to reload or restart autofs whenever changes have been made to the config files
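
On CentOS that's (a restart also works):

service autofs reload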

———

If you want to use the NTFS file system with Linux then do the following

Make sure ntfs-3g is installed, otherwise you will not be able to write to the drive; my working auto.auto entry is shown below

2TB -fstype=auto,async :/dev/sdc1

You can use :/dev/sdc1 for the disk, but it is better to use :/dev/disk/by-label/DR_Backup, as if the device name changes
(i.e. sdd1 instead of sdc1) then the mount will fail. To use the label, just give the disk a name

The --timeout=60 is set to auto-dismount the drive after a certain amount of inactivity, which means it won't corrupt when
removed

ntfslabel -f /dev/sdc1 DISK_BACKUP

SSH Rsync

Quick hint on how to Rsync folders over SSH

The command will pull the data from "/backup-remote" on the "remoteserver" and sync it into the local folder /mnt/bkup/


rsync --stats --progress -avz -e 'ssh -p 22' root@remoteserver:/backup-remote /mnt/bkup/
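
If you want to preview what would be transferred before committing, the same command takes a dry-run flag:

rsync --dry-run --stats -avz -e 'ssh -p 22' root@remoteserver:/backup-remote /mnt/bkup/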

NRPE Nagios Remote Linux

Adding a remote Linux machine

Use the NRPE daemon to execute Nagios plugins on the remote server and report back to the monitoring host server.

Create Nagios user account on remote server to be monitored:

# useradd nagios

# passwd nagios

Download and Install Nagios Plugins

[root@xone /]# yum install nagios.i386 nagios-plugins.i386 nagios-plugins-nrpe.i386 nagios-nrpe.i386

You need the openssl-devel package installed to compile plugins with SSL support.

yum -y install openssl-devel

Edit the NRPE xinetd file

nano /etc/xinetd.d/nrpe

change “only_from” and add the IP of the Nagios server – Remember to change disable = no

# default: off
# description: NRPE (Nagios Remote Plugin Executor)
service nrpe
{
	flags           = REUSE
	type            = UNLISTED
	port            = 5666
	socket_type     = stream
	wait            = no
	user            = nagios
	group           = nagios
	server          = /usr/sbin/nrpe
	server_args     = -c /etc/nagios/nrpe.cfg --inetd
	log_on_failure  += USERID
	disable         = no
	only_from       = 127.0.0.1 192.168.2.65
}

Specify the Nagios server 192.168.2.65

Add nrpe to the services file

nano /etc/services

nrpe 5666/tcp # NRPE

service xinetd restart

Run = netstat -at |grep nrpe – this shows the host is listening for the requests

[root@xone /]# netstat -at |grep nrpe

tcp 0 0 *:nrpe *:* LISTEN

Then run = /usr/lib/nagios/plugins/check_nrpe -H localhost

Should look like =

[root@xone /]# /usr/lib/nagios/plugins/check_nrpe -H localhost

NRPE v2.12

============================

Open Port 5666 on Firewall

============================
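
On CentOS 6 that's an iptables rule on the remote host, roughly (adapt to your existing ruleset):

iptables -I INPUT -p tcp --dport 5666 -j ACCEPT
service iptables save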

Now setup the Nagios server config for the remote host

Make sure the plugins are installed –

yum install nagios-plugins-nrpe.i386 nagios-nrpe.i386

then run /usr/lib/nagios/plugins/check_nrpe -H 192.168.2.66

If all is ok you should get the output = NRPE v2.12

[root@mampi /]# /usr/lib/nagios/plugins/check_nrpe -H 192.168.2.66

NRPE v2.12

Create NRPE Command Definition

nano /etc/nagios/objects/commands.cfg

Add the following:

###############################################################################
# NRPE CHECK COMMAND
#
# Command to use NRPE to check remote host systems
###############################################################################

define command{
        command_name    check_nrpe
        command_line    $USER1$/check_nrpe -H $HOSTADDRESS$ -c $ARG1$
}

Create Linux Object template

In order to be able to add the remote Linux machine to Nagios we need to create an object template file and add some object definitions.

Create new linux-box-remote object template file: linux-box-remote.cfg

/etc/nagios/objects/linux-box-remote.cfg

Here we add all the necessary definitions. If a host is already defined elsewhere then it's not defined here – the example below uses the xone host template defined elsewhere, and there is a host definition commented out ("xeno-r") to show the format

define host{
        name                    linux-box-remote        ; Name of this template
        use                     generic-host            ; Inherit default values
        check_period            24x7
        check_interval          5
        retry_interval          1
        max_check_attempts      10
        check_command           check-host-alive
        notification_period     24x7
        notification_interval   30
        notification_options    d,r
        contact_groups          admins
        register                0                       ; DONT REGISTER THIS - ITS A TEMPLATE
}

#define host{
#        use            linux-box-remote        ; Inherit default values from a template
#        host_name      xeno-r                  ; The name we're giving to this server
#        alias          xeno-r                  ; A longer name for the server
#        address        192.168.2.66            ; IP address of the server
#}

define service{
        use                     generic-service
        host_name               xone
        service_description     CPU Load
        check_command           check_nrpe!check_load
}

define service{
        use                     generic-service
        host_name               xone
        service_description     Current Users
        check_command           check_nrpe!check_users
}

define service{
        use                     generic-service
        host_name               xone
        service_description     /dev/hda1 Free Space
        check_command           check_nrpe!check_hda1
}

define service{
        use                     generic-service
        host_name               xone
        service_description     Total Processes
        check_command           check_nrpe!check_total_procs
}

define service{
        use                     generic-service
        host_name               xone
        service_description     Zombie Processes
        check_command           check_nrpe!check_zombie_procs
}


Lastly, add the newly created cfg file to the Nagios config file so it knows to load it –

nano /etc/nagios/nagios.cfg

# Definitions for monitoring the local (Linux) host

cfg_file=/etc/nagios/objects/linux-box-remote.cfg

cfg_file=/etc/nagios/objects/linux-box-remote-swift.cfg

How to monitor a process on a remote Linux host, passing the commands using NRPE

The commands are defined in the nrpe.cfg file on the remote machine, for example:

command[check_john]=/usr/lib/nagios/plugins/check_procs -c 1:30 -C john
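
After reloading NRPE on the remote machine (restart xinetd), the new command can be tested from the Nagios server before wiring up the service:

/usr/lib/nagios/plugins/check_nrpe -H 192.168.2.66 -c check_john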

Then on the Nagios server machine define the check

define service{
        use                     generic-service
        host_name               swift
        service_description     Check John
        check_command           check_nrpe!check_john
}


Finally, verify the configuration and restart Nagios

nagios -v /etc/nagios/nagios.cfg

service nagios restart

Adding images to Nagios

Adding images to the Nagios GUI

The images are kept in /usr/local/nagios/share/images/logos for my CentOS installation. If yours is different you can find them with something like "locate linux40.png"

Once you have an image it needs to be defined in the templates config file

Just add the image file name i.e. = icon_image linux40.png

The variable is “icon_image”

####################### RED HAT IMAGE ##############################

define host {
        name            redhat-img
        register        0
        icon_image      linux40.png
}

####################################################################

############################ ROUTER IMAGES #########################

define host {
        name            router-img
        register        0
        icon_image      switch40.png
}

###################################################################

define host {
        name            win-img
        register        0
        icon_image      win40.png
}

Once the image is defined in the templates file it needs to be called from the relevant host file.

Add the name to the host or service to get it to load =

define host{
        use     linux-server,host-pnp,redhat-img

————

define service{
        use     linux-server,disk-img