Thursday, 28 March 2013

RHCSA: Mounting Filesystems with /etc/fstab

The /etc/fstab file controls how filesystems are mounted on a Linux system, both at boot time and when they are introduced later by a user, such as a USB flash drive. It's a file that used to hold great mystery for me; in my earlier days as a Linux user I inadvertently screwed up the filesystem mounts and trashed a few systems because I didn't know how to troubleshoot it correctly. Fortunately, once you have spent some time looking at /etc/fstab, it really isn't so difficult to understand.

Here's a sample /etc/fstab:

 [root@cruithne ~]# cat /etc/fstab   
 # /etc/fstab  
 # Created by anaconda on Wed Mar 27 21:22:12 2013  
 # Accessible filesystems, by reference, are maintained under '/dev/disk'  
 # See man pages fstab(5), findfs(8), mount(8) and/or blkid(8) for more info  
 /dev/mapper/vg_cruithne-lv_root /            ext4  defaults    1 1  
 UUID=59376313-6474-445e-8af7-b175d9ea4fb5 /boot          ext4  defaults    1 2  
 /dev/mapper/vg_cruithne-lv_home /home          ext4  defaults    1 2  
 /dev/mapper/vg_cruithne-LogVol02 swap          swap  defaults    0 0  
 tmpfs          /dev/shm        tmpfs  defaults    0 0  
 devpts         /dev/pts        devpts gid=5,mode=620 0 0  
 sysfs          /sys          sysfs  defaults    0 0  
 proc          /proc          proc  defaults    0 0  

Admittedly, it doesn't look that easy to understand initially. It's a bit easier once you know that the file simply has one line per filesystem, with six fields on each line:

  • Field 1: The device or filesystem to be mounted. 
  • Field 2: The mount point, ie where you would like the device/filesystem to appear on your box.
  • Field 3: The filesystem type. Examples include ext3, ext4, ReiserFS, iso9660, cifs, swap and nfs.
  • Field 4: Mount options. These control how the filesystem is mounted: for example, the noauto option prevents a filesystem from being mounted automatically at boot or by the mount -a command, while acl enables access control lists on a filesystem. There are too many options to list here, but they are fully documented in the man pages for fstab and mount.
  • Field 5: Dump value. Determines the archiving frequency for the dump backup utility: set this to 1 for daily, 2 for every other day, and so on. Setting it to 0 disables the feature.
  • Field 6: Filesystem checking. This field determines the order in which filesystems are checked by fsck at boot time. The root filesystem should be 1, other local filesystems 2, and removable devices 0 to disable fsck checking.
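Reading one line of the sample above against those six fields makes the mapping concrete (the annotation is added here; the values are from the /boot entry in the fstab shown earlier):

```
# device                                    mount   type  options   dump  fsck
UUID=59376313-6474-445e-8af7-b175d9ea4fb5   /boot   ext4  defaults  1     2
```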
So now you know what those fields mean, it might be useful to have an example. I've plugged a USB stick into my machine and I'd like it to always appear at /mnt/usb. The first thing I like to do is check what the kernel has identified it as. I like to use the command dmesg | tail, as below:

 [root@cruithne ~]# dmesg | tail  
 sd 4:0:0:0: [sdb] Write Protect is off  
 sd 4:0:0:0: [sdb] Mode Sense: 23 00 00 00  
 sd 4:0:0:0: [sdb] Assuming drive cache: write through  
 sd 4:0:0:0: [sdb] Assuming drive cache: write through  
  sdb: sdb1  
 sd 4:0:0:0: [sdb] Assuming drive cache: write through  
 sd 4:0:0:0: [sdb] Attached SCSI removable disk  
 EXT4-fs (sdb1): recovery complete  
 EXT4-fs (sdb1): mounted filesystem with ordered data mode. Opts:   
 SELinux: initialized (dev sdb1, type ext4), uses xattr  

We can see quite clearly that it has been identified as /dev/sdb with a single partition /dev/sdb1. Let's check it out with fdisk:

 [root@cruithne ~]# fdisk -cul /dev/sdb  
 Disk /dev/sdb: 4027 MB, 4027580416 bytes  
 168 heads, 62 sectors/track, 755 cylinders, total 7866368 sectors  
 Units = sectors of 1 * 512 = 512 bytes  
 Sector size (logical/physical): 512 bytes / 512 bytes  
 I/O size (minimum/optimal): 512 bytes / 512 bytes  
 Disk identifier: 0x00090022  
   Device Boot   Start     End   Blocks  Id System  
 /dev/sdb1      2048   7866367   3932160  83 Linux  
 [root@cruithne ~]#   

At the bottom there we can see that the partition type is 83, the standard type for Linux filesystems. Let's format it with ext4 so that we can start using it to save files:

 [root@cruithne ~]# mkfs.ext4 /dev/sdb1  
 mke2fs 1.41.12 (17-May-2010)  
 Filesystem label=  
 OS type: Linux  
 Block size=4096 (log=2)  
 Fragment size=4096 (log=2)  
 Stride=0 blocks, Stripe width=0 blocks  
 245760 inodes, 983040 blocks  
 49152 blocks (5.00%) reserved for the super user  
 First data block=0  
 Maximum filesystem blocks=1006632960  
 30 block groups  
 32768 blocks per group, 32768 fragments per group  
 8192 inodes per group  
 Superblock backups stored on blocks:   
      32768, 98304, 163840, 229376, 294912, 819200, 884736  
 Writing inode tables: done                
 Creating journal (16384 blocks): done  
 Writing superblocks and filesystem accounting information: done  
 This filesystem will be automatically checked every 39 mounts or  
 180 days, whichever comes first. Use tune2fs -c or -i to override.  
 [root@cruithne ~]#   

Now we need to mount it in our folder:

 [root@cruithne ~]# mkdir /mnt/usb  
 [root@cruithne ~]# mount /dev/sdb1 /mnt/usb  
 [root@cruithne ~]# mount | grep /dev/sdb1  
 /dev/sdb1 on /mnt/usb type ext4 (rw)  
 [root@cruithne ~]#   

OK, we can see that it's successfully mounted there. This isn't going to survive a reboot, though; for that we'll need to open /etc/fstab and add a line at the bottom:

 # /etc/fstab  
 # Created by anaconda on Wed Mar 27 21:22:12 2013  
 # Accessible filesystems, by reference, are maintained under '/dev/disk'  
 # See man pages fstab(5), findfs(8), mount(8) and/or blkid(8) for more info  
 /dev/mapper/vg_cruithne-lv_root /            ext4  defaults    1 1  
 UUID=59376313-6474-445e-8af7-b175d9ea4fb5 /boot          ext4  defaults    1 2  
 /dev/mapper/vg_cruithne-lv_home /home          ext4  defaults    1 2  
 /dev/mapper/vg_cruithne-LogVol02 swap          swap  defaults    0 0  
 tmpfs          /dev/shm        tmpfs  defaults    0 0  
 devpts         /dev/pts        devpts gid=5,mode=620 0 0  
 sysfs          /sys          sysfs  defaults    0 0  
 proc          /proc          proc  defaults    0 0  
 /dev/sdb1        /mnt/usb        ext4  defaults    0 0  

In short, we are mounting /dev/sdb1 at mount point /mnt/usb, it's formatted with ext4 and will be using the default mount options. We've added 0 0 at the end to disable the dump option and fsck checking on this drive.

Let's see if that's worked: unmount the drive and then use the mount -a command to remount anything in fstab that uses the auto option (included in the "defaults" set of options we used).

 [root@cruithne ~]# umount /mnt/usb/  
 [root@cruithne ~]# mount | grep /dev/sdb1  
 [root@cruithne ~]# mount -a   
 [root@cruithne ~]# mount | grep /dev/sdb1  
 /dev/sdb1 on /mnt/usb type ext4 (rw)  
 [root@cruithne ~]#   

In the commands above, I've unmounted the drive from its mount point and then used grep to see if it is still mounted. I've used mount -a to remount anything listed in /etc/fstab and then used grep to check again. We can see this time that /dev/sdb1 has been mounted at /mnt/usb as we specified.
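One refinement worth mentioning: the kernel assigns names like /dev/sdb1 in detection order, so the stick could come up as /dev/sdc1 if another drive is plugged in at boot. Keying the fstab entry on the filesystem's UUID avoids this. Here's a sketch of building such an entry; it simulates blkid's output format so it's self-contained, and the UUID value is reused from the sample fstab purely for illustration:

```shell
# blkid prints a line like this for a partition; simulate it here so the
# sketch is self-contained (the UUID value is illustrative only):
line='/dev/sdb1: UUID="59376313-6474-445e-8af7-b175d9ea4fb5" TYPE="ext4"'

# Pull out the UUID and print a UUID-based fstab entry for /mnt/usb.
uuid=$(printf '%s\n' "$line" | sed 's/.*UUID="\([^"]*\)".*/\1/')
printf 'UUID=%s /mnt/usb ext4 defaults 0 0\n' "$uuid"
```

On a real system you'd read the value straight from the device with blkid /dev/sdb1 (or blkid -s UUID -o value /dev/sdb1) rather than a hard-coded string.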

That's a very basic run-through of what we can do. We can also use /etc/fstab to automatically mount remote filesystems such as Samba and NFS shares. I'll cover those in a part 2 when I get round to doing it.

Monday, 25 March 2013

RHCSA: Creating Logical Volume Snapshots

The concept of logical volumes was a bit tough for me to wrap my head around initially but, as with many things in this world, all it takes is a little practice. I've blogged previously about the concepts of logical volume management; here's a quick how-to explaining how to create a snapshot of a logical volume. Snapshot logical volumes are a useful feature of LVM storage. An LVM snapshot is connected to another volume and temporarily preserves that volume's changed data. The snapshot provides a self-consistent image of the original volume at a point in time, so that its data can be backed up in a consistent state.

To start off with, let's take a look and make sure we have enough space to create this snapshot volume. Let's use the pvdisplay command:

 [root@centos ~]# pvdisplay   
  --- Physical volume ---  
  PV Name        /dev/sda2  
  VG Name        vg_centos  
  PV Size        100.00 GiB / not usable 4.00 MiB  
  Allocatable      yes   
  PE Size        4.00 MiB  
  Total PE       25599  
  Free PE        14335  
  Allocated PE     11264  
  PV UUID        SZ8KN0-OQ8Z-CHcM-R0SH-k1eO-jQAX-1bbNKa  

On this system we only have a single physical volume. This is what our volume group, the container for our logical volumes, sits on top of. The space is broken up into 25599 chunks of 4 MiB, known as physical extents (PE). We can see that 11264 PE are already used, but we still have plenty left; we could even make a new volume group here if we needed to. Let's take a look at our volume groups now, this time using the vgdisplay command:
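Those pvdisplay figures are easy to sanity-check with a little shell arithmetic (the extent counts are copied from the output above):

```shell
# Each physical extent is 4 MiB; multiply out the extent counts from pvdisplay.
total_pe=25599; free_pe=14335; alloc_pe=11264
echo "Total: $(( total_pe * 4 )) MiB"   # 102396 MiB, just short of 100 GiB
echo "Free:  $(( free_pe  * 4 )) MiB"   # 57340 MiB, about 56 GiB
echo "Used:  $(( alloc_pe * 4 )) MiB"   # 45056 MiB, exactly 44 GiB
```

The 4 MiB shortfall against the 100 GiB PV size is the "not usable 4.00 MiB" reported by pvdisplay.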

 [root@centos ~]# vgdisplay   
  --- Volume group ---  
  VG Name        vg_centos  
  System ID         
  Format        lvm2  
  Metadata Areas    1  
  Metadata Sequence No 4  
  VG Access       read/write  
  VG Status       resizable  
  MAX LV        0  
  Cur LV        3  
  Open LV        3  
  Max PV        0  
  Cur PV        1  
  Act PV        1  
  VG Size        100.00 GiB  
  PE Size        4.00 MiB  
  Total PE       25599  
  Alloc PE / Size    11264 / 44.00 GiB  
  Free PE / Size    14335 / 56.00 GiB  
  VG UUID        AhQMcj-dHAa-zd6a-Dq4c-ymOl-qBOx-4XxDI8  

Looking again at "Free PE", we can see we have 14335 of those 4 MiB physical extents available to us, around 56GB. Let's check out the logical volume for the /home directory, using lvdisplay:

 [root@centos ~]# lvdisplay /dev/vg_centos/lv_home  
  --- Logical volume ---  
  LV Path        /dev/vg_centos/lv_home  
  LV Name        lv_home  
  VG Name        vg_centos  
  LV UUID        a19hzZ-llnX-3hdb-dCQM-NTPR-n4gZ-T90ANE  
  LV Write Access    read/write  
  LV Creation host, time centos, 2013-03-23 23:39:38 +0000  
  LV Status       available  
  # open         1  
  LV Size        20.00 GiB  
  Current LE       5120  
  Segments        1  
  Allocation       inherit  
  Read ahead sectors   auto  
  - currently set to   256  
  Block device      253:2  

We seem to have more than enough space left in our volume group for our snapshot requirements. The whole lv_home logical volume is 20GB. Red Hat's official documentation suggests a snapshot of around 3-5% of the origin's size for a volume whose data rarely changes. In this case, I'm creating a snapshot of the volume that houses my users' /home directories, which I'd expect to change fairly frequently. I've got plenty of space in the volume group, so I'm going to go for 25% and create a 5GB snapshot.
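The sizing arithmetic is trivial, but as a sketch of the decision (numbers taken from this post):

```shell
# 25% of the 20 GiB origin volume gives the snapshot size used in this post.
origin_gib=20
pct=25
snap_gib=$(( origin_gib * pct / 100 ))
echo "${snap_gib}G"   # the value handed to lvcreate's -L option
```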

Let's create our 5GB snapshot of the lv_home logical volume:

 [root@centos ~]# lvcreate -s -n lv_home_snap -L 5G /dev/vg_centos/lv_home  
  Logical volume "lv_home_snap" created  

Let's take a good look at this command. We create a new logical volume using the lvcreate command. The -s option makes it a snapshot. We name the new logical volume with the -n option; here I've called it lv_home_snap. I've specified the size I want the snapshot logical volume to be with the -L option, stating 5G for 5 gigabytes. Finally, I've specified the logical volume that I want to use as the origin of my snapshot volume.

Let's check to see if it has been created to our requirements, with the lvs command:

 [root@centos mugwriter]# lvs  
  LV      VG    Attr   LSize Pool Origin Data% Move Log Cpy%Sync Convert  
  lv_home   vg_centos owi-aos-- 20.00g                         
  lv_home_snap vg_centos swi-a-s-- 5.00g   lv_home  0.21               
  lv_root   vg_centos -wi-ao--- 20.00g                         
  lv_swap   vg_centos -wi-ao--- 4.00g      

There it is on the second line. Apologies for the formatting here; in the terminal, lv_home shows under the "Origin" column, so you can see that it is a snapshot of the lv_home logical volume.

Let's test our snapshot to make sure it's working. I've created a copy of the /etc directory and dropped it in my home folder:
 [root@centos mugwriter]# ls -l  
 total 8156  
 drwxr-xr-x. 107 root root  12288 Mar 24 22:45 etc  
 -rw-r--r--.  1 root root 8338615 Mar 24 23:14 tar.etc.tgz  
 [root@centos mugwriter]#   

If our snapshot is working correctly, we should be able to delete any of these files and recover them from the snapshot. But, oops, I've deleted the whole of my normal user's home directory!

 [root@centos home]# ls  
 lost+found mugwriter  
 [root@centos home]# rm -rf mugwriter/  

Luckily, as the systems administrator here, I can restore this from my snapshot volume.

 [root@centos home]# mkdir /snapmount  
 [root@centos home]# mount /dev/vg_centos/lv_home_snap /snapmount/  
 [root@centos home]# ls -l /snapmount/  
 total 20  
 drwx------. 2 root root 16384 Mar 23 23:39 lost+found  
 drwx------. 4 mugwriter mugwriter 4096 Mar 24 09:32 mugwriter  
 [root@centos home]# cp -r /snapmount/mugwriter/ /home  
 [root@centos home]# ls -l /home
 total 20  
 drwx------. 2 root root 16384 Mar 23 23:39 lost+found  
 drwx------. 4 root root 4096 Mar 25 00:11 mugwriter 

So that's how we create a snapshot logical volume. Note in the final listing that the restored directory is owned by root: plain cp -r doesn't preserve ownership, so cp -a (archive mode) would be the better choice to keep the original owner and permissions. Hopefully you won't ever accidentally delete a user's entire home directory, but if you do you can restore it before somebody notices.

Tuesday, 19 March 2013

RHCSA: Deploy a default configuration HTTP server

Deploying a web server on a Red Hat system is very simple indeed. It's really only a matter of installing the required packages, ensuring the service runs at boot time and opening up the port on the firewall. Putting something interesting on there is up to you though...

As root, install the webserver package along with any dependencies:

 [root@mugwriter ~]# yum install httpd  
 Loaded plugins: product-id, refresh-packagekit, rhnplugin, security,  
        : subscription-manager  
 This system is receiving updates from Red Hat Subscription Management.  
 This system is not registered with RHN Classic or RHN Satellite.  
 You can use rhn_register to register.  
 RHN Satellite or RHN Classic support will be disabled.  
 rhel-6-client-rhev-agent-rpms              | 2.8 kB   00:00     
 rhel-6-desktop-rpms                   | 3.7 kB   00:00     
 Setting up Install Process  
 Resolving Dependencies  
 --> Running transaction check  
 ---> Package httpd.x86_64 0:2.2.15-26.el6 will be installed  
 --> Finished Dependency Resolution  
 Dependencies Resolved  
  Package   Arch     Version        Repository         Size  
  httpd    x86_64    2.2.15-26.el6     rhel-6-desktop-rpms    821 k  
 Transaction Summary  
 Install    1 Package(s)  
 Total download size: 821 k  
 Installed size: 2.9 M  
 Is this ok [y/N]: Y  
 Downloading Packages:  
 httpd-2.2.15-26.el6.x86_64.rpm                                                              | 821 kB   00:06     
 Running rpm_check_debug  
 Running Transaction Test  
 Transaction Test Succeeded  
 Running Transaction  
  Installing : httpd-2.2.15-26.el6.x86_64                                                                  1/1   
  Verifying : httpd-2.2.15-26.el6.x86_64                                                                  1/1   
  httpd.x86_64 0:2.2.15-26.el6                                                                            
 [root@mugwriter ~]#   

Once this has completed, check to see if the service has started:

 [root@mugwriter ~]# service httpd status  
 httpd is stopped  

As expected, it hasn't been started automatically; I suspect this is for security reasons. OK, let's start the service:

 [root@mugwriter ~]# service httpd start  
 Starting httpd:                 [ OK ]     

Let's make sure that the service runs automatically at boot time; otherwise we'll need to start it from the command line every time. We can use chkconfig to see which runlevels it starts at from boot, and change them as we see fit:
 [root@mugwriter ~]# chkconfig --list httpd  
 httpd          0:off     1:off     2:off     3:off     4:off     5:off     6:off  
 [root@mugwriter ~]# chkconfig httpd on  
 [root@mugwriter ~]# chkconfig --list httpd  
 httpd          0:off     1:off     2:on     3:on     4:on     5:on     6:off  

Right, we should be able to see the default Apache web server index page now. Launch a browser and go to http://localhost and you should see the Apache test page.

Great, it's working! Some useful content to serve up might be nice, though. You'll need to add your content to the /var/www/html directory on your system, starting with a new index.html.

If you want anybody other than users on your system to see this marvel of website design, you're going to need to open up port 80 on your firewall. This is easy enough to do, and you don't need to trouble yourself with iptables directly. There are a couple of tools on a RHEL system for administering your firewall: if you are working at the CLI, you can use system-config-firewall-tui; if you are using a graphical interface, you can either launch system-config-firewall from the command line or access it through the System > Administration > Firewall menu.

Put a tick in the box next to the WWW option to open up port 80 on the firewall and then click Apply. I told you it was easy! Now you're all set to start serving up something a little more useful, perhaps a locally hosted Red Hat software repository?

Sunday, 17 March 2013

Searching with GNU Grep

GNU grep is installed by default on pretty much any Linux system you are likely to come across. It's a useful utility for searching files for strings, words, regular expressions and so on. By default, it displays the lines in a file that match the input. Here are a few examples of how it can be used:

 #grep 'word' /home/mugwriter/mybigfile  

This would return the lines containing 'word' in /home/mugwriter/mybigfile. While this is useful enough, there are other ways we can use grep that might be more useful. For example, the above command will only search a single file for the pattern we're trying to match. We can search a whole directory tree using the -r switch:

 #grep -r 'word' /home/mugwriter/  

The -r switch makes grep search recursively, so in this instance it will search all of the files under a directory. This is useful on the occasions when we need to find a pattern in a file but aren't sure which file it's in. Another use case is when there are multiple files containing the matching pattern and we want to pipe the results to sed to make amendments. But the previous example only matches the string 'word' literally, so any occurrence of 'Word' or 'wOrd' will be left out of the results. In other words, if we need case-insensitive results from grep, we need another switch. This is where -i comes in:

 #grep -ir 'word' /home/mugwriter/  

Now we're starting to get something a bit more useful. Grep will now match 'Word' at the start of a sentence with a capital W, it will match 'WORD', and any other combination of upper and lower case.
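Here's case-insensitive matching in action, on a small sample file made up for the demonstration. It also shows that grep matches substrings (like 'sword'), and that adding -w restricts matches to whole words:

```shell
# Build a small sample file (made up for this demonstration).
printf 'word\nWord\nWORD\nsword\nbird\n' > /tmp/grep_demo.txt

# Case-insensitive search: matches word, Word, WORD and sword.
grep -i 'word' /tmp/grep_demo.txt

# Add -w to match whole words only, which excludes 'sword'.
grep -iw 'word' /tmp/grep_demo.txt
```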

We can also pipe the standard output of other commands into grep; one of the ways I use it most frequently is to search for running processes. Let's see if anybody is using the iftop program:

 [bob@mugwriter ~]$ ps aux | grep iftop  
 root   23861 0.0 0.1 187208 3092 pts/8  S+  11:30  0:00 sudo iftop -i wlan0  
 root   23867 0.1 0.2 352096 5624 pts/8  Sl+ 11:30  0:04 iftop -i wlan0  
 bob  24577 0.0 0.0 103248  856 pts/11  S+  12:20  0:00 grep iftop  
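Notice the last line of that output: the grep process itself matches, because its own command line contains 'iftop'. A common trick is to wrap one character of the pattern in brackets; the regex '[i]ftop' still matches the string 'iftop' but no longer matches its own command line, so grep stops listing itself:

```shell
# '[i]ftop' is a regex matching the literal string 'iftop', but the grep
# process's own command line now contains '[i]ftop', which the regex does
# not match. The '|| true' is only so the sketch exits cleanly when no
# iftop process is running.
ps aux | grep '[i]ftop' || true
```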

We can use it for searching log files too. Let's check for users that have used the sudo command:

 #grep sudo /var/log/secure*  

On its own, that's not really all that useful. The above will search all of the /var/log/secure files, including those that have been rotated out and datestamped on a weekly basis. You should rightly expect a lot of output from that command on a system that gets much usage. How about we narrow it down a bit: let's find the times sudo was used but failed, by piping the output from our last command back into grep:

 # grep sudo /var/log/secure* | grep failure  

This will output all of the times that users on the system tried to use the sudo command but failed, possibly due to entering a password incorrectly, or perhaps because they aren't in the /etc/sudoers file and so aren't allowed to use sudo. We could still get a lot of output from this, so let's pipe the results to wc and use it to count the number of lines we get from our query:

 # grep sudo /var/log/secure* | grep failure | wc -l  

We can see that we get 5 lines of output from grep that match our query.
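As an aside, grep can do the counting itself with the -c switch, which for a single file is equivalent to piping through wc -l. A quick demonstration with some made-up log lines (note that with multiple files, grep -c prints one count per file rather than a grand total):

```shell
# Two made-up log lines containing 'failure', out of three lines total.
printf 'sudo: auth failure\nsudo: session opened\nsudo: auth failure\n' > /tmp/secure_demo

grep sudo /tmp/secure_demo | grep failure | wc -l   # counting via the pipe
grep -c failure /tmp/secure_demo                    # grep counting directly
```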

I'll put some more useful examples of usage up when I get the chance; these are just a few of the ways grep can be put to use.

Friday, 8 March 2013

RHCSA: Logical Volume Management

You can take a collection of disks on a system, group them together into a volume group, and then carve this volume group up into logical volumes of varying sizes, not necessarily the same size as the original disks. You can also take several physical disks, convert them into one volume group and make a single logical volume from it.

On RHEL systems, logical volumes can be managed using the CLI or using system-config-lvm. system-config-lvm isn't installed by default, however, so it will need to be added from the repo if you wish to use it. Real men use the command line though, as we all know, so let's press on and ignore the GUI.

Logical Volume Management Steps

The first step is to convert your disk partitions into Linux LVM partitions. Check your disks using:

#fdisk -l

This will list your disks. For this example we will assume we have two 250GB disks at /dev/sdd and /dev/sde. Convert the partitions using:

#fdisk /dev/sdd

Press ‘l’ to list the partition types available. We need to change to Linux LVM, which is listed as type 8e. Press ‘t’ to change the partition type and enter ‘8e’. Press ‘w’ to write changes. Then do the same thing for /dev/sde.

The second step is to turn the new LVM partitions into "physical volumes". We do this using the pvcreate command:

#pvcreate /dev/sdd1
#pvcreate /dev/sde1

We can view our physical volumes with the pvs command, or in more detail using pvdisplay.

The third step is to create a volume group by combining our physical volumes. We do this using vgcreate.

#vgcreate vg_new /dev/sdd1 /dev/sde1

You should get a message saying the volume group has been created. You can view volume groups using vgs or vgdisplay.

You will now need to create a logical volume, using lvcreate (notice a pattern here?):

#lvcreate -L 400G -n lv_new vg_new

Essentially, this creates a 400GB logical volume named lv_new on the vg_new volume group.
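One small detail worth knowing about -L: LVM allocates space in whole physical extents (4 MiB each by default), so a requested size is rounded up to the next extent boundary. The rounding is easy to sketch:

```shell
# Round a requested size up to a whole number of 4 MiB extents.
pe_mib=4
req_mib=10                                        # a deliberately non-aligned request
extents=$(( (req_mib + pe_mib - 1) / pe_mib ))    # ceiling division
echo "$(( extents * pe_mib )) MiB allocated for a ${req_mib} MiB request"
```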

You can check your logical volumes using lvs or lvdisplay (more patterns!). You get far more information with lvdisplay.

Following this you will need to put a filesystem on the volume. You can do this with:

#mkfs.ext4 /dev/vg_new/lv_new

Following this, you will need to create a mount point for your new filesystem.

#mkdir /mnt/new_disk

Add an entry to /etc/fstab to ensure that it is brought up at boot time:

#vi /etc/fstab

Append something relevant, eg:

/dev/vg_new/lv_new /mnt/new_disk ext4 defaults 0 0


Then mount the new volume:

#mount -a

Test that the new volume is operating as required by creating a test file on it.

Resizing Logical Volumes

We can resize a logical volume without having to unmount it first using the lvresize command:

#lvresize -L 500G /dev/vg_new/lv_new

Check the logical volume has grown with the lvs command.

To shrink the logical volume, first of all make sure that you have a valid backup of the data, because any data in the area that is removed will be lost. Assuming we have a backup, we unmount the volume:

#umount /mnt/new_disk

Now we run the file system checker on the logical volume:

#fsck -f /dev/vg_new/lv_new

Next we need to shrink the filesystem with the resize2fs command:

#resize2fs /dev/vg_new/lv_new 400G

Once we have done this, we can use lvresize again to shrink the logical volume:

#lvresize -L 400G /dev/vg_new/lv_new

We get a warning at this point indicating the high risk of data loss, so really make sure you have a good backup. (Newer LVM versions can resize the filesystem and the volume together with lvresize -r, which avoids accidentally shrinking the volume below the filesystem.) Use lvs once again to verify the new size of the logical volume.

Finally, we can remount the logical volume:

#mount -t ext4 /dev/vg_new/lv_new /mnt/new_disk

We can use the df -h command to see how much disk space we have, in human-readable format.