GlusterFS-volume-based Havana 2013.2 instances on an NFS-like standalone storage server with GlusterFS 3.4.1 on Fedora 19


This is a snapshot of the differences between the Grizzly and Havana releases with respect to GlusterFS integration.

Grizzly vs. Havana:

* Glance
  - Grizzly: could point to filesystem images mounted with GlusterFS, but had to copy the VM image to deploy it.
  - Havana: can now point to the Cinder interface, removing the need to copy the image.
* Cinder
  - Grizzly: integrated with GlusterFS, but only with FUSE-mounted volumes.
  - Havana: can now use the libgfapi-QEMU integration for KVM hypervisors.
* Nova
  - Grizzly: no integration with GlusterFS.
  - Havana: can now use the libgfapi-QEMU integration.
* Swift
  - Grizzly: GlusterFS maintained a separate repository of changes to the Swift proxy layer.
  - Havana: Swift patches are now merged upstream, providing a cleaner break between API and implementation.

On a GlusterFS Fedora 19 server that is part of a cluster, the Cinder tuning procedure should be the same.
First step: set up Havana RC1 RDO on Fedora 19.

Next, install the GlusterFS server on the Cinder host:

#   yum install glusterfs glusterfs-server glusterfs-fuse

#   systemctl status glusterd

glusterd.service - GlusterFS, a clustered file-system server

Loaded: loaded (/usr/lib/systemd/system/glusterd.service; enabled)

Active: active (running) since Fri 2013-10-18 13:47:51 MSK; 2h 37min ago

Process: 1126 ExecStart=/usr/sbin/glusterd -p /run/ (code=exited, status=0/SUCCESS)

Main PID: 1136 (glusterd)

CGroup: name=systemd:/system/glusterd.service

├─1136 /usr/sbin/glusterd -p /run/

├─8861 /usr/sbin/glusterfsd -s --volfile-id cinder-volume. -…

├─8878 /usr/sbin/glusterfs -s localhost --volfile-id gluster/nfs -p /var/lib/glusterd/nfs/run/ -l /var/lo…

└─8885 /sbin/rpc.statd

Oct 18 13:47:51 localhost.localdomain  systemd[1]: Started GlusterFS, a clustered file-system server.

Oct 18 13:58:19 localhost.localdomain  rpc.statd[8885]: Version 1.2.7 starting

Oct 18 13:58:19 localhost.localdomain  sm-notify[8886]: Version 1.2.7 starting

Oct 18 13:58:19 localhost.localdomain  rpc.statd[8885]: Initializing NSM state

#   mkdir -p /rhs/brick1/cinder-volume

#  gluster volume create cinder-volume <hostname>:/rhs/brick1/cinder-volume

#  gluster volume start cinder-volume

#  gluster volume info

Volume Name: cinder-volume

Type: Distribute

Volume ID: d52c0ba1-d7b1-495d-8f14-07ff03e7db95

Status: Started

Number of Bricks: 1

Transport-type: tcp
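Once the volume is started, its health can be checked with the standard gluster CLI before wiring Cinder to it; a quick sketch, using the volume name created above:

```shell
# Show whether the brick process and the built-in NFS server for the
# volume are online (ports, PIDs, online status).
gluster volume status cinder-volume

# On a multi-node cluster, peer membership could be verified as well;
# on a single-node setup like this one the peer list is simply empty.
gluster peer status
```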



A sample of using a striped Gluster volume may be viewed here:

Configuring Cinder to Add GlusterFS

# openstack-config --set /etc/cinder/cinder.conf DEFAULT volume_driver cinder.volume.drivers.glusterfs.GlusterfsDriver

# openstack-config --set /etc/cinder/cinder.conf DEFAULT glusterfs_shares_config /etc/cinder/shares.conf

# openstack-config --set /etc/cinder/cinder.conf DEFAULT glusterfs_mount_point_base /var/lib/cinder/volumes

# vi /etc/cinder/shares.conf
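The shares file lists one Gluster share per line in host:/volume form; the hostname below is a placeholder for the actual storage server, not a value from this setup:

```
<hostname>:/cinder-volume
```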


# iptables-save >  iptables.dump

Add to *filter section:

-A INPUT -m state --state NEW -m tcp -p tcp --dport 111 -j ACCEPT

-A INPUT -m state --state NEW -m tcp -p tcp --dport 24007 -j ACCEPT

-A INPUT -m state --state NEW -m tcp -p tcp --dport 24008 -j ACCEPT

-A INPUT -m state --state NEW -m tcp -p tcp --dport 24009 -j ACCEPT

-A INPUT -m state --state NEW -m tcp -p tcp --dport 24010 -j ACCEPT

-A INPUT -m state --state NEW -m tcp -p tcp --dport 24011 -j ACCEPT

-A INPUT -m state --state NEW -m tcp -p tcp --dport 38465:38469 -j ACCEPT

# iptables-restore <  iptables.dump

# service iptables restart
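To confirm the new rules actually took effect, the live INPUT chain can be inspected (24007 is the glusterd management port, 38465-38469 the Gluster NFS range):

```shell
# List the INPUT chain numerically and filter for the Gluster ports
# opened above; each rule added to *filter should appear here.
iptables -L INPUT -n | grep -E '24007|24008|38465'
```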

Restarting the openstack-cinder-volume service mounts the GlusterFS volume with no ownership problems:

# for i in api scheduler volume

> do

> service openstack-cinder-${i} restart

> done

# df -h
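If the restart succeeded, the GlusterFS share should now be FUSE-mounted under the configured mount point base; a quick sanity check, assuming the default RDO log location:

```shell
# The Cinder GlusterFS driver mounts shares as fuse.glusterfs filesystems
# under glusterfs_mount_point_base (/var/lib/cinder/volumes here).
mount | grep fuse.glusterfs

# Any mount or permission errors would surface in the volume service log.
tail -n 20 /var/log/cinder/volume.log
```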

Filesystem                   Size  Used Avail Use% Mounted on

/dev/mapper/fedora-root      164G   29G  127G  19% /

devtmpfs                     3.9G     0  3.9G   0% /dev

tmpfs                           3.9G  148K  3.9G   1% /dev/shm

tmpfs                           3.9G  1.1M  3.9G   1% /run

tmpfs                           3.9G     0  3.9G   0% /sys/fs/cgroup

tmpfs                           3.9G  800K  3.9G   1% /tmp

/dev/sda1                    477M   87M  362M  20% /boot

/dev/loop0                   928M  1.4M  860M   1% /srv/node/device1

tmpfs                           3.9G  1.1M  3.9G   1% /run/netns

<hostname>:cinder-volume     164G   29G  127G  19% /var/lib/cinder/volumes/f39d1b2d7e2a2e48af66eceba039b139

# nova image-list

+--------------------------------------+------------------+--------+--------+
| ID                                   | Name             | Status | Server |
+--------------------------------------+------------------+--------+--------+
| 59758edc-da8d-444e-b0a0-d93d323fc026 | F19Image         | ACTIVE |        |
| df912358-b227-43a5-94a3-edc874c577bc | UbuntuSalamander | ACTIVE |        |
| ae07d1ba-41de-44e9-877a-455f8956d86f | cirros           | ACTIVE |        |
+--------------------------------------+------------------+--------+--------+


Creating a Havana volume in GlusterFS storage via the command line:

#  cinder create --image-id 59758edc-da8d-444e-b0a0-d93d323fc026 --display_name Fedora19VL 5
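Because the image is copied into the new volume, creation is not instantaneous; a hypothetical polling loop like the one below waits for the volume to leave the transient states. The volume ID is the one returned by the `cinder create` call (it appears in the `cinder list` output), and the parsing assumes the Havana client's table output format:

```shell
# Placeholder: the ID reported by `cinder create` above.
VOL_ID=da344703-dcf9-450e-9e34-cafb331f80f6

while :; do
  # Extract the value of the "status" row from the cinder show table.
  status=$(cinder show "$VOL_ID" | awk -F'|' '$2 ~ /status/ {gsub(/ /,"",$3); print $3}')
  echo "volume status: $status"
  case "$status" in
    creating|downloading) sleep 5 ;;   # still being built from the image
    *) break ;;                        # available, error, or in-use
  esac
done
```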

# cinder list

+--------------------------------------+--------+--------------+------+-------------+----------+--------------------------------------+
|                  ID                  | Status | Display Name | Size | Volume Type | Bootable |             Attached to              |
+--------------------------------------+--------+--------------+------+-------------+----------+--------------------------------------+
| 0474ead2-61a8-41dd-8f8d-ef3000266403 | in-use |              |  5   |     None    |   true   | 779b306b-3cb2-48ea-9711-2c42c508b577 |
| da344703-dcf9-450e-9e34-cafb331f80f6 | in-use |  Fedora19VL  |  5   |     None    |   true   | 1a8e5fa5-6a79-43f0-84ee-58e2099b1ebe |
+--------------------------------------+--------+--------------+------+-------------+----------+--------------------------------------+


# ls -l /var/lib/cinder/volumes/f39d1b2d7e2a2e48af66eceba039b139

total 5528248

-rw-rw-rw-. 1 qemu qemu 5368709120 Oct 18 16:19 volume-0474ead2-61a8-41dd-8f8d-ef3000266403

-rw-rw-rw-. 1 qemu qemu 5368709120 Oct 18 16:19 volume-da344703-dcf9-450e-9e34-cafb331f80f6
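Note that the two files show a 5 GB apparent size each while the directory total is only about 5.3 GB, so the raw images are sparse. A quick way to see actual allocation and image metadata (assuming qemu-img is installed, which it is on a KVM compute/storage host):

```shell
cd /var/lib/cinder/volumes/f39d1b2d7e2a2e48af66eceba039b139

# du reports blocks actually allocated, not the sparse apparent size.
du -h volume-*

# qemu-img shows the image format and virtual size for one of the volumes.
qemu-img info volume-da344703-dcf9-450e-9e34-cafb331f80f6
```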



Screenshots from another F19 instance, dual-booting with the first:




Creating an Ubuntu 13.10 Server bootable volume in GlusterFS storage via the cinder command line:



