Updates to the Red Hat Enterprise Clustering and Storage Management course

by Wander Boessenkool (Red Hat)

With the release of the updated Red Hat Enterprise Clustering and Storage Management Course (RH436) for Red Hat Enterprise Linux 6, a couple of new subjects have been introduced, while others have been updated to reflect the changes in the Red Hat High-Availability Add-On in the move from Red Hat Enterprise Linux 5 to Red Hat Enterprise Linux 6.

One of the most noticeable new subjects in this updated course is the inclusion of an introduction to highly available, distributed, scalable storage using Red Hat Storage Server. Other updates include the use of multipathed storage throughout the course, as well as coverage of the XFS® file system.

Below you will find a sneak peek at Red Hat Storage Server, based on materials from the updated course.

As an introduction to Red Hat Storage Server, we will discuss below how to set up a 2×2 distributed-replicated volume. We will assume that you have already installed four machines, named node1, node2, node3, and node4, using the Red Hat Storage ISO image that can be downloaded from Red Hat Network.

The first thing we’ll do after installing Red Hat Storage Server on our systems is to verify that the glusterd daemon has been enabled and started. The default installation will have performed those steps for us, but it never hurts to verify.

[root@nodeY ~]# chkconfig --list glusterd
glusterd        0:off   1:off   2:off   3:on    4:off   5:on    6:off
[root@nodeY ~]# service glusterd status
glusterd (pid  2265) is running...
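
If either of these checks comes back negative, the daemon can be enabled and started by hand using the standard Red Hat Enterprise Linux 6 service tools (a quick sketch; no Red Hat Storage-specific steps are involved):

[root@nodeY ~]# chkconfig glusterd on
[root@nodeY ~]# service glusterd start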

The next step is to combine our Red Hat Storage Server machines into a Trusted Storage Pool. To do this, we run the gluster peer probe <othernode> command on one of our machines to add the others.

[root@node1 ~]# gluster peer probe node2.example.com
Probe successful
[root@node1 ~]# gluster peer probe node3.example.com
Probe successful
[root@node1 ~]# gluster peer probe node4.example.com
Probe successful

Now we can verify the status of our peers by running the following command:

[root@node1 ~]# gluster peer status
Number of Peers: 3

Hostname: node2.example.com
Uuid: 8047aad1-4f63-4ef1-8977-c1b5f06eb87b
State: Peer in Cluster (Connected)

Hostname: node3.example.com
Uuid: 3bca95a6-0d5d-431f-95a2-96edb3a2ce16
State: Peer in Cluster (Connected)

Hostname: node4.example.com
Uuid: c81bb568-4ec8-4c85-b560-30972a7bc0bf
State: Peer in Cluster (Connected)

Now that we have our machines configured, we can start to create a volume. Volumes in Red Hat Storage Server consist of bricks: XFS-formatted file systems mounted on your nodes. In this example we will assume that you have already created logical volumes on your nodes to hold these file systems. We will be using /dev/vgsrv/brick1 on node1, /dev/vgsrv/brick2 on node2, and so on.
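
If you still need to create these logical volumes, a minimal sketch could look like the following (the physical device /dev/vdb and the 10 GB size are assumptions for illustration only; adjust them to your own hardware, and use brick2, brick3, and brick4 on the other nodes):

[root@node1 ~]# pvcreate /dev/vdb
[root@node1 ~]# vgcreate vgsrv /dev/vdb
[root@node1 ~]# lvcreate -n brick1 -L 10G vgsrv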

With our logical volumes in place, we can create our XFS file systems. Since Red Hat Storage Server makes extensive use of extended attributes, we will format our XFS file systems with an inode size of 512 bytes (double the default) so that there is enough room in each inode to store all extended attributes. Repeat the step below for all your bricks on all your nodes:

[root@node1 ~]# mkfs -t xfs -i size=512 /dev/vgsrv/brick1

Since bricks need to be persistently mounted before we can use them, we will create mount points for all our bricks, then add an entry for each of them to /etc/fstab so that they are mounted at startup. In our example we will be using /mnt/bricks/brickX, where X is our brick number. Repeat the commands below for all your bricks on all your nodes.

[root@node1 ~]# mkdir -p /mnt/bricks/brick1
[root@node1 ~]# echo "/dev/vgsrv/brick1 /mnt/bricks/brick1 xfs defaults 1 2" >> /etc/fstab
[root@node1 ~]# mount -a

With all our bricks in place and mounted, we can start to create a volume. Red Hat Storage volumes can be distributed, replicated, or striped. It is also possible to use a combination of these methods, e.g. a distributed-replicated volume. In our example we will be creating a 2×2 distributed-replicated volume using four bricks (note the replica 2 option in the command below, which tells gluster that each consecutive pair of bricks forms a replica set):

[root@node1 ~]# gluster volume create demovol replica 2 \
> node1.example.com:/mnt/bricks/brick1 \
> node2.example.com:/mnt/bricks/brick2 \
> node3.example.com:/mnt/bricks/brick3 \
> node4.example.com:/mnt/bricks/brick4
Creation of volume demovol has been successful. Please start the volume to access data.

With our volume created, we can ask for some information about it. Note that the Status is still set to Created; if we actually want to use this volume, we will need to start it first.

[root@node1 ~]# gluster volume info demovol
Volume Name: demovol
Type: Distributed-Replicate
Volume ID: e6f2fedf-bbdd-49fe-9402-f5e217d0327c
Status: Created
Number of Bricks: 2 x 2 = 4
Transport-type: tcp
Bricks:
Brick1: node1.example.com:/mnt/bricks/brick1
Brick2: node2.example.com:/mnt/bricks/brick2
Brick3: node3.example.com:/mnt/bricks/brick3
Brick4: node4.example.com:/mnt/bricks/brick4

To start this volume we can use the gluster volume start <volname> command:

[root@node1 ~]# gluster volume start demovol
Starting volume demovol has been successful

With our volume started, we can request some detailed status information. This tells us which of our bricks are online, whether the self-heal daemon is running (for replicated volumes), and whether the built-in NFS server is active.

[root@node1 ~]# gluster volume status demovol
Status of volume: demovol
Gluster process                                         Port    Online  Pid
------------------------------------------------------------------------------
Brick node1.example.com:/mnt/bricks/brick1              24009   Y       22978
Brick node2.example.com:/mnt/bricks/brick2              24009   Y       22667
Brick node3.example.com:/mnt/bricks/brick3              24009   Y       2408
Brick node4.example.com:/mnt/bricks/brick4              24009   Y       3165
NFS Server on localhost                                 38467   Y       22983
Self-heal Daemon on localhost                           N/A     Y       22988
NFS Server on node3.example.com                         38467   Y       2413
Self-heal Daemon on node3.example.com                   N/A     Y       2419
NFS Server on node2.example.com                         38467   Y       22672
Self-heal Daemon on node2.example.com                   N/A     Y       22678
NFS Server on node4.example.com                         38467   Y       3171
Self-heal Daemon on node4.example.com                   N/A     Y       3177

Now that we've verified that our volume is active, we can start accessing it from our clients. To do this you can use the glusterfs file system type provided by the glusterfs-fuse package (also called the Native Client), which provides the most feature-rich and robust experience. For systems where you cannot use the native client, you can also access your volumes over NFSv3 using TCP, or over the CIFS protocol. For more information about configuring clients, please refer to the Red Hat Storage 2.0 Administration Guide or the Red Hat Enterprise Clustering and Storage Management Course (RH436).
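
As a quick illustration (the client hostname and mount point below are arbitrary examples, not part of the course material), mounting demovol with the native client could look like this:

[root@client ~]# yum -y install glusterfs-fuse
[root@client ~]# mkdir -p /mnt/demovol
[root@client ~]# mount -t glusterfs node1.example.com:/demovol /mnt/demovol

Alternatively, the same volume can be mounted over NFSv3 using TCP (the exact mount options you need may vary with your environment):

[root@client ~]# mount -t nfs -o vers=3,proto=tcp node1.example.com:/demovol /mnt/demovol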

This was just a short sneak peek into one of the many topics covered in the updated Red Hat Enterprise Clustering and Storage Management (RH436) course. During the course you will get hands-on experience configuring iSCSI targets and initiators, multipathing, High-Availability clustering using the Red Hat High-Availability Add-On, Red Hat Storage Server and clients, and much more.

This work is licensed under a Creative Commons Attribution-NonCommercial-NoDerivs 3.0 Unported License.
