Havana 2013.2 instances backed by a GlusterFS 3.4.1 replicated volume on a two-node Fedora 19 cluster

The setup of a two-node GlusterFS 3.4.1 cluster follows below. Havana 2013.2 RDO, installed via `packstack --allinone` on one of the boxes, has Cinder tuned to create volumes on replicated GlusterFS 3.4.1 storage. Several samples of creating bootable Cinder volumes from images are then walked through step by step, providing a proof of concept for the articles mentioned here.

Please view these articles first: http://www.mirantis.com/blog/openstack-havana-glusterfs-and-what-improved-support-really-means/ and https://wiki.openstack.org/wiki/CinderSupportMatrix

Per https://forge.gluster.org/hadoop/pages/InstallingAndConfiguringGlusterFS

On server1, run the following command:

  ssh-keygen (Hit Enter to accept all of the defaults)

On server1, run the following command for each other node in the cluster (here, server4):

  ssh-copy-id -i ~/.ssh/id_rsa.pub root@server4
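A quick optional check that passwordless login now works:

  ssh root@server4 uname -n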

See also https://forge.gluster.org/hadoop/pages/InstallingAndConfiguringGlusterFS for steps 5), 6), 7) of that guide.

[root@server1 ~]#   yum install glusterfs glusterfs-server glusterfs-fuse
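The same packages are needed on the second box; a minimal sketch, assuming server4 uses the same Fedora 19 repositories and the SSH key set up above:

# ssh root@server4 'yum -y install glusterfs glusterfs-server glusterfs-fuse'
# ssh root@server4 'service glusterd start && chkconfig glusterd on'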

[root@server1 ~(keystone_admin)]# service glusterd status

Redirecting to /bin/systemctl status  glusterd.service

glusterd.service - GlusterFS an clustered file-system server

Loaded: loaded (/usr/lib/systemd/system/glusterd.service; enabled)

Active: active (running) since Sat 2013-11-02 13:44:42 MSK; 1h 42min ago

Process: 2699 ExecStart=/usr/sbin/glusterd -p /run/glusterd.pid (code=exited, status=0/SUCCESS)

Main PID: 2700 (glusterd)

CGroup: name=systemd:/system/glusterd.service

├─2700 /usr/sbin/glusterd -p /run/glusterd.pid

├─2902 /usr/sbin/glusterfsd -s server1 --volfile-id cinder-volumes02.server1.home-boris-node-replicate -p /var/l…

├─5376 /usr/sbin/glusterfs -s localhost  --volfile-id gluster/glustershd -p /var/lib/glusterd/glustershd/run/glus…

├─6675 /usr/sbin/glusterfs -s localhost  --volfile-id gluster/nfs -p /var/lib/glusterd/nfs/run/nfs.pid -l /var/lo…

└─6683 /sbin/rpc.statd

Nov 02 13:44:40 server1 systemd[1]: Starting GlusterFS an clustered file-system server…

Nov 02 13:44:42 server1 systemd[1]: Started GlusterFS an clustered file-system server.

Nov 02 13:46:52 server1 rpc.statd[5383]: Version 1.2.7 starting

Nov 02 13:46:52 server1 sm-notify[5384]: Version 1.2.7 starting

Temporarily stop iptables, so that peer probing and volume creation are not blocked by the firewall:

[root@server1 ~]# service iptables stop

[root@server1 ~]# service iptables status

Redirecting to /bin/systemctl status  iptables.service

iptables.service - IPv4 firewall with iptables

Loaded: loaded (/usr/lib/systemd/system/iptables.service; enabled)

Active: failed (Result: exit-code) since Sat 2013-11-02 12:59:10 MSK; 5s ago

Process: 14306 ExecStop=/usr/libexec/iptables/iptables.init stop (code=exited, status=1/FAILURE)

Main PID: 472 (code=exited, status=0/SUCCESS)

CGroup: name=systemd:/system/iptables.service

Nov 02 12:59:10 server1 systemd[1]: Stopping IPv4 firewall with iptables…

Nov 02 12:59:10 server1 iptables.init[14306]: iptables: Flushing firewall rules: [  OK  ]

Nov 02 12:59:10 server1 iptables.init[14306]: iptables: Setting chains to policy ACCEPT: raw security mangle nat fil…ILED]

Nov 02 12:59:10 server1 iptables.init[14306]: iptables: Unloading modules:  iptable_nat[FAILED]

Nov 02 12:59:10 server1 systemd[1]: iptables.service: control process exited, code=exited status=1

Nov 02 12:59:10 server1 systemd[1]: Stopped IPv4 firewall with iptables.

Nov 02 12:59:10 server1 systemd[1]: Unit iptables.service entered failed state.

[root@server1 ~]# gluster peer probe server4

peer probe: success

[root@server1 ~]# gluster peer  status

Number of Peers: 1

Hostname: server4

Port: 24007

Uuid: 4062c822-74d5-45e9-8eaa-8353845332de

State: Peer in Cluster (Connected)

[root@server1 ~]# gluster volume create cinder-volumes02  replica 2 \

server1:/home/boris/node-replicate  server4:/home/boris/node-replicate

volume create: cinder-volumes02: success: please start the volume to access data

[root@server1 ~]# gluster volume start cinder-volumes02

volume start: cinder-volumes02: success
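It is worth verifying at this point that both bricks are online; a quick check:

# gluster volume status cinder-volumes02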

Next, apply the tunables usually recommended for hosting VM images on GlusterFS:

[root@server1 ~]# gluster volume set cinder-volumes02  quick-read off
volume set: success

[root@server1 ~]# gluster volume set cinder-volumes02  cluster.eager-lock on
volume set: success

[root@server1 ~]# gluster volume set cinder-volumes02  performance.stat-prefetch off
volume set: success

[root@server1 ~]# gluster volume info

Volume Name: cinder-volumes02

Type: Replicate

Volume ID: 1a1566ed-34f7-4264-b0b4-91cf9526b5ef

Status: Started

Number of Bricks: 1 x 2 = 2

Transport-type: tcp

Bricks:

Brick1: server1:/home/boris/node-replicate

Brick2: server4:/home/boris/node-replicate

Options Reconfigured:
performance.quick-read: off
cluster.eager-lock: on
performance.stat-prefetch: off

[root@server1 ~]# service iptables start

Redirecting to /bin/systemctl start  iptables.service

[root@server1 ~]# service iptables status

Redirecting to /bin/systemctl status  iptables.service

iptables.service - IPv4 firewall with iptables

Loaded: loaded (/usr/lib/systemd/system/iptables.service; enabled)

Active: active (exited) since Sat 2013-11-02 13:10:17 MSK; 5s ago

Process: 14306 ExecStop=/usr/libexec/iptables/iptables.init stop (code=exited, status=1/FAILURE)

Process: 17699 ExecStart=/usr/libexec/iptables/iptables.init start (code=exited, status=0/SUCCESS)

Nov 02 13:10:17 server1 iptables.init[17699]: iptables: Applying firewall rules: [  OK  ]

Nov 02 13:10:17 server1 systemd[1]: Started IPv4 firewall with iptables.

[root@server1 ~(keystone_admin)]# gluster peer status

Number of Peers: 1

Hostname: server4

Uuid: 4062c822-74d5-45e9-8eaa-8353845332de

State: Peer in Cluster (Connected)

[root@server1 ~]# gluster volume info

Volume Name: cinder-volumes02

Type: Replicate

Volume ID: 1a1566ed-34f7-4264-b0b4-91cf9526b5ef

Status: Started

Number of Bricks: 1 x 2 = 2

Transport-type: tcp

Bricks:

Brick1: server1:/home/boris/node-replicate

Brick2: server4:/home/boris/node-replicate

Options Reconfigured:
performance.quick-read: off
cluster.eager-lock: on
performance.stat-prefetch: off

Update /etc/sysconfig/iptables on the second box:

Add to *filter section

-A INPUT -m state --state NEW -m tcp -p tcp --dport 111 -j ACCEPT
-A INPUT -m state --state NEW -m tcp -p tcp --dport 24007 -j ACCEPT
-A INPUT -m state --state NEW -m tcp -p tcp --dport 24008 -j ACCEPT
-A INPUT -m state --state NEW -m tcp -p tcp --dport 24009 -j ACCEPT
-A INPUT -m state --state NEW -m tcp -p tcp --dport 24010 -j ACCEPT
-A INPUT -m state --state NEW -m tcp -p tcp --dport 24011 -j ACCEPT
-A INPUT -m state --state NEW -m tcp -p tcp --dport 38465:38469 -j ACCEPT
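Then restart iptables on the second box so the new rules take effect, and optionally verify that the glusterd port is open:

# service iptables restart
# iptables -L -n | grep 24007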

Watching replication
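A simple way to watch the replication at work from the command line (a sketch, assuming a scratch mount point /mnt/gtest and the brick paths created above): write a file through a glusterfs mount and check that it shows up in both bricks.

# mkdir -p /mnt/gtest
# mount -t glusterfs server1:/cinder-volumes02 /mnt/gtest
# dd if=/dev/zero of=/mnt/gtest/probe.img bs=1M count=10
# ls -lh /home/boris/node-replicate/
# ssh root@server4 ls -lh /home/boris/node-replicate/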

Configuring Cinder to Add GlusterFS


# openstack-config --set /etc/cinder/cinder.conf DEFAULT volume_driver cinder.volume.drivers.glusterfs.GlusterfsDriver
# openstack-config --set /etc/cinder/cinder.conf DEFAULT glusterfs_shares_config /etc/cinder/shares.conf
# openstack-config --set /etc/cinder/cinder.conf DEFAULT glusterfs_mount_point_base /var/lib/cinder/volumes
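For reference, the three commands above simply write these keys into the [DEFAULT] section of /etc/cinder/cinder.conf:

[DEFAULT]
volume_driver = cinder.volume.drivers.glusterfs.GlusterfsDriver
glusterfs_shares_config = /etc/cinder/shares.conf
glusterfs_mount_point_base = /var/lib/cinder/volumes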
Then tune file /etc/cinder/shares.conf
# vi /etc/cinder/shares.conf
    192.168.1.147:cinder-volumes02
:wq
Update the iptables firewall (remember that the firewalld service should have been disabled on F19 from the beginning, to keep the changes done by neutron/quantum in place):
# iptables-save  >  iptables.dump
**********************
 Add to *filter section:
**********************
-A INPUT -m state --state NEW -m tcp -p tcp --dport 111 -j ACCEPT
-A INPUT -m state --state NEW -m tcp -p tcp --dport 24007 -j ACCEPT
-A INPUT -m state --state NEW -m tcp -p tcp --dport 24008 -j ACCEPT
-A INPUT -m state --state NEW -m tcp -p tcp --dport 24009 -j ACCEPT
-A INPUT -m state --state NEW -m tcp -p tcp --dport 24010 -j ACCEPT
-A INPUT -m state --state NEW -m tcp -p tcp --dport 24011 -j ACCEPT
-A INPUT -m state --state NEW -m tcp -p tcp --dport 38465:38469 -j ACCEPT

# iptables-restore <  iptables.dump
# service iptables restart

Now get the glusterfs volume mounted on Havana's predefined directory by restarting the Cinder services:

# for i in api scheduler volume; do service openstack-cinder-${i} restart; done

[root@server1 ~(keystone_admin)]# df -h
Filesystem                      Size  Used Avail Use% Mounted on
/dev/mapper/fedora_5-root       193G   48G  135G  27% /
devtmpfs                        3.9G     0  3.9G   0% /dev
tmpfs                           3.9G  140K  3.9G   1% /dev/shm
tmpfs                           3.9G  948K  3.9G   1% /run
tmpfs                           3.9G     0  3.9G   0% /sys/fs/cgroup
tmpfs                           3.9G   92K  3.9G   1% /tmp
/dev/loop0                      928M  1.3M  860M   1% /srv/node/device1
/dev/sda1                       477M   87M  362M  20% /boot
tmpfs                           3.9G  948K  3.9G   1% /run/netns
192.168.1.147:cinder-volumes02  116G   61G   50G  56% /var/lib/cinder/volumes/e879618364aca859f13701bb918b087f

Building an Ubuntu Server 13.10 instance utilizing a Cinder bootable volume replicated via GlusterFS 3.4.1

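The CLI equivalent of this build looks roughly as follows (a sketch: the image ID for Ubuntu 13.10, Salamander1030, is taken from the nova image-list shown further below; the 10 GB size, flavor, and instance name are assumptions):

# cinder create --image-id 67d9f757-43ca-4204-985d-5ecdb31e8ec7 --display_name UbuntuServer1310 10
# cinder list
# nova boot --flavor 2 --block-device-mapping vda=<volume-id>:::0 UbuntuServer1310

Wait until cinder list reports the new volume as "available" and substitute its ID for <volume-id>.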

Building a Windows 2012 evaluation instance utilizing a Cinder bootable volume replicated via GlusterFS 3.4.1 (this run is on a second two-node cluster, ovirt1/ovirt2, with replicated volume cinder-vols)

[root@ovirt1 ~(keystone_admin)]# service glusterd status

Redirecting to /bin/systemctl status  glusterd.service

glusterd.service - GlusterFS an clustered file-system server

Loaded: loaded (/usr/lib/systemd/system/glusterd.service; enabled)

Active: active (running) since Mon 2013-11-04 22:31:55 VOLT; 21min ago

Process: 2962 ExecStart=/usr/sbin/glusterd -p /run/glusterd.pid (code=exited, status=0/SUCCESS)

Main PID: 2963 (glusterd)

CGroup: name=systemd:/system/glusterd.service

├─ 2963 /usr/sbin/glusterd -p /run/glusterd.pid

├─ 3245 /usr/sbin/glusterfsd -s ovirt1 --volfile-id cinder-vols.ovirt1.fdr-set-node-replicate -p /var/lib/gluste…

├─ 6031 /usr/sbin/glusterfs -s localhost --volfile-id gluster/glustershd -p /var/lib/glusterd/glustershd/run/glu…

├─11335 /usr/sbin/glusterfs -s localhost --volfile-id gluster/nfs -p /var/lib/glusterd/nfs/run/nfs.pid -l /var/l…

└─11343 /sbin/rpc.statd

Nov 04 22:31:51 ovirt1.localdomain systemd[1]: Starting GlusterFS an clustered file-system server…

Nov 04 22:31:55 ovirt1.localdomain systemd[1]: Started GlusterFS an clustered file-system server.

Nov 04 22:35:11 ovirt1.localdomain rpc.statd[6038]: Version 1.2.7 starting

Nov 04 22:35:11 ovirt1.localdomain sm-notify[6039]: Version 1.2.7 starting

Nov 04 22:35:11 ovirt1.localdomain GlusterFS[6026]: [2013-11-04 18:35:11.400008] C [nfs.c:271:nfs_start_subvol_lookup_…ctory

Nov 04 22:53:23 ovirt1.localdomain rpc.statd[11343]: Version 1.2.7 starting

[root@ovirt1 ~(keystone_admin)]# df -h

Filesystem                  Size  Used Avail Use% Mounted on

/dev/mapper/fedora00-root   169G   74G   87G  46% /

devtmpfs                    3.9G     0  3.9G   0% /dev

tmpfs                       3.9G   84K  3.9G   1% /dev/shm

tmpfs                       3.9G  956K  3.9G   1% /run

tmpfs                       3.9G     0  3.9G   0% /sys/fs/cgroup

tmpfs                       3.9G  116K  3.9G   1% /tmp

/dev/loop0                  928M  1.3M  860M   1% /srv/node/device1

/dev/sdb1                   477M   87M  361M  20% /boot

tmpfs                       3.9G  956K  3.9G   1% /run/netns

192.168.1.137:/cinder-vols  164G   73G   83G  47% /var/lib/cinder/volumes/8a78781567bbf747a694c25ae4494d9c

[root@ovirt1 ~(keystone_admin)]# gluster peer status

Number of Peers: 1

Hostname: ovirt2

Uuid: 2aa2dfb5-d266-4474-89c1-c5c011eec025

State: Peer in Cluster (Connected)

[root@ovirt1 ~(keystone_admin)]# gluster volume info cinder-vols

Volume Name: cinder-vols

Type: Replicate

Volume ID: e8eab40f-3401-4893-ba25-121bd4e0a74e

Status: Started

Number of Bricks: 1 x 2 = 2

Transport-type: tcp

Bricks:

Brick1: ovirt1:/fdr/set/node-replicate

Brick2: ovirt2:/fdr/set/node-replicate

Options Reconfigured:
performance.quick-read: off
cluster.eager-lock: on
performance.stat-prefetch: off

[root@ovirt1 ~(keystone_admin)]# nova image-list

+--------------------------------------+---------------------------------+--------+--------+
| ID                                   | Name                            | Status | Server |
+--------------------------------------+---------------------------------+--------+--------+
| 291f7c8b-043b-4656-9285-244770f127e5 | Fedora19image                   | ACTIVE |        |
| 67d9f757-43ca-4204-985d-5ecdb31e8ec7 | Salamander1030                  | ACTIVE |        |
| 624681da-f48f-43d9-968e-1e3da6cc75a3 | Windows Server 2012 R2 Std Eval | ACTIVE |        |
| bd01f02d-e0bf-4cc5-aa35-ff97ebd9c1ef | cirros                          | ACTIVE |        |
+--------------------------------------+---------------------------------+--------+--------+

[root@ovirt1 ~(keystone_admin)]# cinder create --image-id  \
624681da-f48f-43d9-968e-1e3da6cc75a3 --display_name Windows2012VL 20
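Once cinder list reports Windows2012VL as "available", an instance can be booted from it along the same lines (flavor and instance name are assumptions; <volume-id> comes from cinder list):

# cinder list
# nova boot --flavor 3 --block-device-mapping vda=<volume-id>:::0 Windows2012VL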
