oVirt 3.3 & 3.3.1 hackery on Fedora 19

November 16, 2013

***********************************************************************************

UPDATE on 12/07/2013: Even with a downgraded apache-sshd you might repeatedly get "Unexpected connection interruption" when attempting to add a new host. In this case run, one more time,

# engine-setup

on the master server. In my experience this helped several times (3.3.1 on F19). oVirt 3.3.0.1 never required this hack.

UPDATE on 11/23/2013: The same scheme works for 3.3.1, with "yum downgrade apache-sshd" required to be able to add a new host. When creating a VM it is possible to select NIC1 "ovirtmgmt/ovirtmgmt". I also found http://www.ovirt.org/Features/Detailed_OSN_Integration regarding setting up Neutron (Quantum) to create VLANs (external provider).

**********************************************************************************

My final target was to create a two-node oVirt 3.3 cluster and virtual machines using replicated GlusterFS 3.4.1 volumes. Choosing firewalld as the configured firewall seems unacceptable for this purpose at the moment; selecting the iptables firewall allows the task to be completed. However, this is only my personal preference: the IPv4 firewall with iptables just works for me with no pain, and I clearly understand what to do when problems come up, nothing more.

First, fix the NFS server bug still affecting F19: https://bugzilla.redhat.com/show_bug.cgi?id=970595

Please also be aware of http://www.ovirt.org/OVirt_3.3_TestDay#Known_issues

Quote:

Known issues: host installation

Fedora 19: If the ovirtmgmt bridge is not successfully installed during initial host-setup, manually click on the host, setup networks, and add the ovirtmgmt bridge. It is recommended to disable NetworkManager as well.

End quote

Second, put the following under /etc/sysconfig/network-scripts:

[root@ovirt1 network-scripts]# cat ifcfg-ovirtmgmt

DEVICE=ovirtmgmt

TYPE=Bridge

ONBOOT=yes

DELAY=0

BOOTPROTO=static

IPADDR=192.168.1.142

NETMASK=255.255.255.0

GATEWAY=192.168.1.1

DNS1=83.221.202.254

NM_CONTROLLED="no"

In particular, on my box:

[root@ovirt1 network-scripts]# cat ifcfg-enp2s0

BOOTPROTO=none

TYPE="Ethernet"

ONBOOT="yes"

NAME="enp2s0"

BRIDGE="ovirtmgmt"

HWADDR=00:22:15:63:e4:e2

Disable NetworkManager and enable the network service.
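On F19 this amounts to something like the following (a sketch; it assumes the legacy network init script from initscripts is present):

# systemctl stop NetworkManager
# systemctl disable NetworkManager
# chkconfig network on
# systemctl restart network
# ip addr show ovirtmgmt      # verify the bridge came up with 192.168.1.142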

Skipping these two steps in my case crashed the install per

http://community.redhat.com/up-and-running-with-ovirt-3-3/

The first for the obvious reason; the second left vdsmd down during the install, and engine.log generated a bunch of errors complaining about the absence of the ovirtmgmt network. The web console was effectively useless (again, in my case), unable to manage storage domains stuck in Down status.

See also: http://www.mail-archive.com/users@ovirt.org/msg11394.html

Follow http://community.redhat.com/up-and-running-with-ovirt-3-3/

$ sudo yum localinstall http://ovirt.org/releases/ovirt-release-fedora.noarch.rpm -y

$ sudo yum install ovirt-engine-setup-plugin-allinone -y

Before running engine-setup:

[root@ovirt1 ~]# yum install ovirt-engine-websocket-proxy

Loaded plugins: langpacks, refresh-packagekit, versionlock

Resolving Dependencies

--> Running transaction check

--> Package ovirt-engine-websocket-proxy.noarch 0:3.3.0.1-1.fc19 will be installed

--> Finished Dependency Resolution

Dependencies Resolved

===================================================

Package                                 Arch              Version                   Repository               Size

===================================================

Installing:

ovirt-engine-websocket-proxy            noarch            3.3.0.1-1.fc19            ovirt-stable             12 k

Transaction Summary

===================================================

Install  1 Package

Total download size: 12 k

Installed size: 18 k

Is this ok [y/d/N]: y

Downloading packages:

ovirt-engine-websocket-proxy-3.3.0.1-1.fc19.noarch.rpm                                      |  12 kB  00:00:02

Running transaction check

Running transaction test

Transaction test succeeded

Running transaction

Installing : ovirt-engine-websocket-proxy-3.3.0.1-1.fc19.noarch                                              1/1

Verifying  : ovirt-engine-websocket-proxy-3.3.0.1-1.fc19.noarch                                              1/1

Installed:

ovirt-engine-websocket-proxy.noarch 0:3.3.0.1-1.fc19

Complete!

[root@ovirt1 ~]# engine-setup

[ INFO  ] Stage: Initializing

[ INFO  ] Stage: Environment setup

Configuration files: ['/etc/ovirt-engine-setup.conf.d/10-packaging-aio.conf', '/etc/ovirt-engine-setup.conf.d/10-packaging.conf']

Log file: /var/log/ovirt-engine/setup/ovirt-engine-setup-20131112004446.log

Version: otopi-1.1.2 (otopi-1.1.2-1.fc19)

[ INFO  ] Hardware supports virtualization

[ INFO  ] Stage: Environment packages setup

[ INFO  ] Stage: Programs detection

[ INFO  ] Stage: Environment setup

[ INFO  ] Stage: Environment customization

Configure VDSM on this host? (Yes, No) [No]: Yes

Local storage domain path [/var/lib/images]:

Local storage domain name [local_storage]:

–== PACKAGES ==–

[ INFO  ] Checking for product updates…

[ INFO  ] No product updates found

–== NETWORK CONFIGURATION ==–

Host fully qualified DNS name of this server [ovirt1.localdomain]:

[WARNING] Failed to resolve ovirt1.localdomain using DNS, it can be resolved only locally

          firewalld was detected on your computer. Do you wish Setup to configure it? (yes, no) [yes]:  no

         iptables firewall was detected on your computer. Do you wish Setup to configure it? (yes, no) [yes]:  yes

–== DATABASE CONFIGURATION ==–

Where is the database located? (Local, Remote) [Local]:

Setup can configure the local postgresql server automatically for the engine to run. This may conflict with existing applications.

Would you like Setup to automatically configure postgresql, or prefer to perform that manually? (Automatic, Manual) [Automatic]:

Using existing credentials

–== OVIRT ENGINE CONFIGURATION ==–

Engine admin password:

Confirm engine admin password:

Application mode (Both, Virt, Gluster) [Both]:

Default storage type: (NFS, FC, ISCSI, POSIXFS) [NFS]:

–== PKI CONFIGURATION ==–

Organization name for certificate [localdomain]:

–== APACHE CONFIGURATION ==–

Setup can configure the default page of the web server to present the application home page. This may conflict with existing applications.

Do you wish to set the application as the default page of the web server? (Yes, No) [Yes]:

Setup can configure apache to use SSL using a certificate issued from the internal CA.

Do you wish Setup to configure that, or prefer to perform that manually? (Automatic, Manual) [Automatic]:

–== SYSTEM CONFIGURATION ==–

Configure an NFS share on this server to be used as an ISO Domain? (Yes, No) [Yes]:

Local ISO domain path [/var/lib/exports/iso]:

Local ISO domain name [ISO_DOMAIN]:

Configure WebSocket Proxy on this machine? (Yes, No) [Yes]:

–== END OF CONFIGURATION ==–

[ INFO  ] Stage: Setup validation

[WARNING] Less than 16384MB of memory is available

–== CONFIGURATION PREVIEW ==–

Database name                      : engine

Database secured connection        : False

Database host                      : localhost

Database user name                 : engine

Database host name validation      : False

Datbase port                       : 5432

NFS setup                          : True

PKI organization                   : localdomain

NFS mount point                    : /var/lib/exports/iso

Application mode                   : both

  Firewall manager                   : iptables

Configure WebSocket Proxy          : True

Host FQDN                          : ovirt1.localdomain

Datacenter storage type            : nfs

Configure local database           : True

Set application as default page    : True

Configure Apache SSL               : True

Configure VDSM on this host        : True

Local storage domain directory     : /var/lib/images

Please confirm installation settings (OK, Cancel) [OK]:

[ INFO  ] Stage: Transaction setup

[ INFO  ] Stopping engine service

[ INFO  ] Stopping websocket-proxy service

[ INFO  ] Stage: Misc configuration

[ INFO  ] Stage: Package installation

[ INFO  ] Stage: Misc configuration

[ INFO  ] Initializing PostgreSQL

[ INFO  ] Creating PostgreSQL database

[ INFO  ] Configurating PostgreSQL

[ INFO  ] Creating database schema

[ INFO  ] Creating CA

[ INFO  ] Configurating WebSocket Proxy

[ INFO  ] Generating post install configuration file '/etc/ovirt-engine-setup.conf.d/20-setup-ovirt-post.conf'

[ INFO  ] Stage: Transaction commit

[ INFO  ] Stage: Closing up

–== SUMMARY ==–

[WARNING] Less than 16384MB of memory is available

A default ISO NFS share has been created on this host.

If IP based access restrictions are required, edit:

entry /var/lib/exports/iso in /etc/exports.d/ovirt-engine-iso-domain.exports

SSH fingerprint: 90:16:09:69:8A:D8:43:C9:87:A7:CF:1A:A3:3B:71:44

Internal CA 5F:2E:12:99:32:55:07:11:C9:F9:AB:58:02:C9:A6:8E:16:91:CA:C1

Web access is enabled at:

http://ovirt1.localdomain:80/ovirt-engine

https://ovirt1.localdomain:443/ovirt-engine

Please use the user “admin” and password specified in order to login into oVirt Engine

–== END OF SUMMARY ==–

[ INFO  ] Starting engine service

[ INFO  ] Restarting httpd

[ INFO  ] Waiting for VDSM host to become operational. This may take several minutes…

[ INFO  ] The VDSM Host is now operational

[ INFO  ] Restarting nfs services

[ INFO  ] Generating answer file '/var/lib/ovirt-engine/setup/answers/20131112005106-setup.conf'

[ INFO  ] Stage: Clean up

Log file is located at /var/log/ovirt-engine/setup/ovirt-engine-setup-20131112004446.log

[ INFO  ] Stage: Pre-termination

[ INFO  ] Stage: Termination

[ INFO  ] Execution of setup completed successfully
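Once setup completes, a quick sanity check that the engine is actually up and answering on the URLs from the summary (a sketch):

# systemctl status ovirt-engine
# curl -I http://ovirt1.localdomain/ovirt-engine/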

Installing 3.3.1 does not require the separate ovirt-engine-websocket-proxy step and looks like this:

[root@ovirt1 ~]# engine-setup
[ INFO  ] Stage: Initializing
[ INFO  ] Stage: Environment setup
Configuration files: ['/etc/ovirt-engine-setup.conf.d/10-packaging-aio.conf', '/etc/ovirt-engine-setup.conf.d/10-packaging.conf']
Log file: /var/log/ovirt-engine/setup/ovirt-engine-setup-20131122143650.log
Version: otopi-1.1.2 (otopi-1.1.2-1.fc19)
[ INFO  ] Hardware supports virtualization
[ INFO  ] Stage: Environment packages setup
[ INFO  ] Stage: Programs detection
[ INFO  ] Stage: Environment setup
[ INFO  ] Stage: Environment customization

–== PACKAGES ==–

[ INFO  ] Checking for product updates…
[ INFO  ] No product updates found

–== ALL IN ONE CONFIGURATION ==–

Configure VDSM on this host? (Yes, No) [No]: Yes
Local storage domain path [/var/lib/images]:
Local storage domain name [local_storage]:

–== NETWORK CONFIGURATION ==–

Host fully qualified DNS name of this server [ovirt1.localdomain]:
[WARNING] Failed to resolve ovirt1.localdomain using DNS, it can be resolved only locally

–== DATABASE CONFIGURATION ==–

Where is the database located? (Local, Remote) [Local]:
Setup can configure the local postgresql server automatically for the engine to run. This may conflict with existing applications.
Would you like Setup to automatically configure postgresql, or prefer to perform that manually? (Automatic, Manual) [Automatic]:
Using existing credentials

–== OVIRT ENGINE CONFIGURATION ==–

Engine admin password:
Confirm engine admin password:
Application mode (Both, Virt, Gluster) [Both]:
Default storage type: (NFS, FC, ISCSI, POSIXFS) [NFS]:

–== PKI CONFIGURATION ==–

Organization name for certificate [localdomain]:

–== APACHE CONFIGURATION ==–

Setup can configure the default page of the web server to present the application home page. This may conflict with existing applications.
Do you wish to set the application as the default page of the web server? (Yes, No) [Yes]:
Setup can configure apache to use SSL using a certificate issued from the internal CA.
Do you wish Setup to configure that, or prefer to perform that manually? (Automatic, Manual) [Automatic]:

–== SYSTEM CONFIGURATION ==–

Configure an NFS share on this server to be used as an ISO Domain? (Yes, No) [Yes]:
Local ISO domain path [/var/lib/exports/iso]:
Local ISO domain name [ISO_DOMAIN]:
Configure WebSocket Proxy on this machine? (Yes, No) [Yes]:

–== END OF CONFIGURATION ==–

[ INFO  ] Stage: Setup validation
[WARNING] Less than 16384MB of memory is available

–== CONFIGURATION PREVIEW ==–

Database name                      : engine
Database secured connection        : False
Database host                      : localhost
Database user name                 : engine
Database host name validation      : False
Datbase port                       : 5432
NFS setup                          : True
PKI organization                   : localdomain
NFS mount point                    : /var/lib/exports/iso
Application mode                   : both
Configure WebSocket Proxy          : True
Host FQDN                          : ovirt1.localdomain
Datacenter storage type            : nfs
Configure local database           : True
Set application as default page    : True
Configure Apache SSL               : True
Configure VDSM on this host        : True
Local storage domain directory     : /var/lib/images

Please confirm installation settings (OK, Cancel) [OK]:

[ INFO  ] Stage: Transaction setup
[ INFO  ] Stopping engine service
[ INFO  ] Stopping websocket-proxy service
[ INFO  ] Stage: Misc configuration
[ INFO  ] Stage: Package installation
[ INFO  ] Stage: Misc configuration
[ INFO  ] Initializing PostgreSQL
[ INFO  ] Creating PostgreSQL database
[ INFO  ] Configurating PostgreSQL
[ INFO  ] Creating database schema
[ INFO  ] Creating CA
[ INFO  ] Configurating WebSocket Proxy
[ INFO  ] Generating post install configuration file '/etc/ovirt-engine-setup.conf.d/20-setup-ovirt-post.conf'
[ INFO  ] Stage: Transaction commit
[ INFO  ] Stage: Closing up

–== SUMMARY ==–

[WARNING] Less than 16384MB of memory is available
An ISO NFS share has been created on this host.
If IP based access restrictions are required, edit:
entry /var/lib/exports/iso in /etc/exports.d/ovirt-engine-iso-domain.exports
SSH fingerprint: DB:C5:99:16:0D:67:4B:F5:62:99:B2:D3:E2:C7:7F:59
Internal CA 93:BB:05:42:C6:6F:00:28:A1:F1:90:C5:3E:E3:91:D6:1F:1B:17:3D
The following network ports should be opened:
tcp:111
tcp:2049
tcp:32803
tcp:443
tcp:49152-49216
tcp:5432
tcp:5634-6166
tcp:6100
tcp:662
tcp:80
tcp:875
tcp:892
udp:111
udp:32769
udp:662
udp:875
udp:892
An example of the required configuration for iptables can be found at:
/etc/ovirt-engine/iptables.example
In order to configure firewalld, copy the files from
/etc/ovirt-engine/firewalld to /etc/firewalld/services
and execute the following commands:
firewall-cmd -service ovirt-postgres
firewall-cmd -service ovirt-https
firewall-cmd -service ovirt-aio
firewall-cmd -service ovirt-websocket-proxy
firewall-cmd -service ovirt-nfs
firewall-cmd -service ovirt-http
Web access is enabled at:
http://ovirt1.localdomain:80/ovirt-engine
https://ovirt1.localdomain:443/ovirt-engine
Please use the user “admin” and password specified in order to login into oVirt Engine

–== END OF SUMMARY ==–

[ INFO  ] Starting engine service
[ INFO  ] Restarting httpd
[ INFO  ] Waiting for VDSM host to become operational. This may take several minutes…
[ INFO  ] The VDSM Host is now operational
[ INFO  ] Restarting nfs services
[ INFO  ] Generating answer file '/var/lib/ovirt-engine/setup/answers/20131122144055-setup.conf'
[ INFO  ] Stage: Clean up
Log file is located at /var/log/ovirt-engine/setup/ovirt-engine-setup-20131122143650.log
[ INFO  ] Stage: Pre-termination
[ INFO  ] Stage: Termination
[ INFO  ] Execution of setup completed successfully

Not sure it's a must, but I've also updated /etc/sysconfig/iptables with:

-A INPUT -m state --state NEW -m tcp -p tcp --dport 24007 -j ACCEPT
-A INPUT -m state --state NEW -m tcp -p tcp --dport 24008 -j ACCEPT
-A INPUT -m state --state NEW -m tcp -p tcp --dport 24009 -j ACCEPT
-A INPUT -m state --state NEW -m tcp -p tcp --dport 24010 -j ACCEPT
-A INPUT -m state --state NEW -m tcp -p tcp --dport 24011 -j ACCEPT
-A INPUT -m state --state NEW -m tcp -p tcp --dport 38465:38469 -j ACCEPT
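To make the new rules active, reload iptables (a sketch, assuming the iptables service rather than firewalld is managing the firewall, as chosen during engine-setup):

# systemctl restart iptables
# iptables -L -n | grep 24007     # quick check that the GlusterFS ports are open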

VMs running on different hosts of the two-node cluster, started via the web console:

[root@ovirt1 ~]# service libvirtd status

Redirecting to /bin/systemctl status  libvirtd.service

libvirtd.service – Virtualization daemon

Loaded: loaded (/usr/lib/systemd/system/libvirtd.service; enabled)

Active: active (running) since Fri 2013-11-22 10:31:07 VOLT; 54min ago

Main PID: 1131 (libvirtd)

CGroup: name=systemd:/system/libvirtd.service

├─1131 /usr/sbin/libvirtd --listen

└─8606 /usr/bin/qemu-system-x86_64 -machine accel=kvm -name UbuntuSalamander -S -machine pc-1.0,accel=kvm,usb=of…

Nov 22 10:31:07 ovirt1.localdomain libvirtd[1131]: 2013-11-22 06:31:07.778+0000: 1131: info : libvirt version: 1.0.5.7….org)

Nov 22 10:31:07 ovirt1.localdomain libvirtd[1131]: 2013-11-22 06:31:07.778+0000: 1131: debug : virLogParseOutputs:1331…d.log

[root@ovirt1 ~]# ssh ovirt2

Last login: Fri Nov 22 10:45:26 2013

[root@ovirt2 ~]# service libvirtd  status

Redirecting to /bin/systemctl status  libvirtd.service

libvirtd.service – Virtualization daemon

Loaded: loaded (/usr/lib/systemd/system/libvirtd.service; enabled)

Active: active (running) since Fri 2013-11-22 10:44:47 VOLT; 41min ago

Main PID: 1019 (libvirtd)

CGroup: name=systemd:/system/libvirtd.service

├─1019 /usr/sbin/libvirtd --listen

└─2776 /usr/bin/qemu-system-x86_64 -machine accel=kvm -name VF19NW -S -machine pc-1.0,accel=kvm,usb=off -cpu Pen…

Nov 22 10:44:48 ovirt2.localdomain libvirtd[1019]: 2013-11-22 06:44:48.317+0000: 1019: info : libvirt version: 1.0.5.7….org)

Nov 22 10:44:48 ovirt2.localdomain libvirtd[1019]: 2013-11-22 06:44:48.317+0000: 1019: debug : virLogParseOutputs:1331…d.log


Virtual machines using replicated glusterfs 3.4.1 volumes

Add the new host via the web console. Make sure that on the new host you have previously run:

$ sudo yum localinstall http://ovirt.org/releases/ovirt-release-fedora.noarch.rpm -y

otherwise it will stay incompatible with oVirt 3.3 (3.2 at maximum).

Set up the ovirtmgmt bridge on the new host as well, disable firewalld, and enable the iptables firewall manager.
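A minimal sketch of that switch on F19 (assuming the iptables-services package, which provides the iptables systemd unit):

# systemctl stop firewalld
# systemctl disable firewalld
# yum install -y iptables-services
# systemctl enable iptables
# systemctl start iptables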

On server ovirt1, run the following commands before adding the new host ovirt2:

# ssh-keygen (Hit Enter to accept all of the defaults)

# ssh-copy-id -i ~/.ssh/id_rsa.pub root@ovirt2
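A quick check that key-based root login to the new host works before the engine tries to install it (a sketch; the hostname shown is simply what I expect on this setup):

# ssh root@ovirt2 hostname
ovirt2.localdomain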

Even with a downgraded apache-sshd you might repeatedly get "Unexpected connection interruption"; in this case run, one more time,

# engine-setup

on the master server. In my experience it helped several times (3.3.1 on F19). oVirt 3.3.0.1 never required this hack.
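For completeness, the apache-sshd downgrade mentioned in the updates above amounts to something like this (a sketch; restarting ovirt-engine afterwards is my assumption, so the engine picks up the older library):

# yum downgrade apache-sshd
# systemctl restart ovirt-engine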

Version 3.3.1 allows creating Gluster volumes via the GUI, automatically configuring the required options for volumes created through the graphical environment.

Regarding designing GlusterFS volumes for a production environment, see https://forge.gluster.org/hadoop/pages/InstallingAndConfiguringGlusterFS

  

Double-check via the command line:

 # gluster volume info

Volume Name: ovirt-data02
Type: Replicate
Volume ID: b1cf98c9-5525-48d4-9fb0-bde47d7a98b0
Status: Started
Number of Bricks: 1 x 2 = 2
Transport-type: tcp
Bricks:
Brick1: ovirt1.localdomain:/home/boris/node-replicate
Brick2: 192.168.1.127:/home/boris/node-replicate
Options Reconfigured:
storage.owner-gid: 36
storage.owner-uid: 36
cluster.server-quorum-type: server
cluster.quorum-type: auto
network.remote-dio: enable
cluster.eager-lock: enable
performance.stat-prefetch: off
performance.io-cache: off
performance.read-ahead: off
performance.quick-read: off
auth.allow: *
user.cifs: enable
nfs.disable: off

Creating an XFS-based replicated Gluster volume via oVirt 3.3.1 per https://forge.gluster.org/hadoop/pages/InstallingAndConfiguringGlusterFS (brick preparation is sketched below):
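A sketch of how the XFS brick behind this volume could be prepared, assuming /dev/sda3 is the spare partition mounted at /mnt/brick1 (as the df output below shows) and following the guide's recommendation of a 512-byte inode size:

# mkfs.xfs -f -i size=512 /dev/sda3
# mkdir -p /mnt/brick1
# echo "/dev/sda3 /mnt/brick1 xfs noatime,inode64 0 0" >> /etc/fstab
# mount /mnt/brick1
# mkdir -p /mnt/brick1/node-replicate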
 
 [root@ovirt1 ~]# gluster volume info ovirt-data05
Volume Name: ovirt-data05
Type: Replicate
Volume ID: ff0955b6-668a-4eab-acf0-606456ee0005
Status: Started
Number of Bricks: 1 x 2 = 2
Transport-type: tcp
Bricks:
Brick1: ovirt1.localdomain:/mnt/brick1/node-replicate
Brick2: 192.168.1.127:/mnt/brick1/node-replicate
Options Reconfigured:
nfs.disable: off
user.cifs: enable
auth.allow: *
storage.owner-uid: 36
storage.owner-gid: 36
 
[root@ovirt1 ~]# mount | grep xfs
selinuxfs on /sys/fs/selinux type selinuxfs (rw,relatime)
/dev/sda3 on /mnt/brick1 type xfs (rw,noatime,seclabel,attr2,inode64,noquota)
 
[root@ovirt1 ~]# df -h
Filesystem                               Size  Used Avail Use% Mounted on
/dev/mapper/fedora00-root                145G   26G  112G  19% /
devtmpfs                                 3.9G     0  3.9G   0% /dev
tmpfs                                    3.9G  2.2M  3.9G   1% /dev/shm
tmpfs                                    3.9G 1004K  3.9G   1% /run
tmpfs                                    3.9G     0  3.9G   0% /sys/fs/cgroup
tmpfs                                    3.9G   76K  3.9G   1% /tmp
/dev/sda1                                477M  105M  344M  24% /boot
/dev/sda3                                 98G   19G   80G  19% /mnt/brick1
ovirt1.localdomain:ovirt-data05           98G   19G   80G  19% /rhev/data-center/mnt/glusterSD/ovirt1.localdomain:ovirt-data05
ovirt1.localdomain:/var/lib/exports/iso  145G   26G  112G  19% /rhev/data-center/mnt/ovirt1.localdomain:_var_lib_exports_iso
192.168.1.137:/var/lib/exports/export    145G   26G  112G  19% /rhev/data-center/mnt/192.168.1.137:_var_lib_exports_export
ovirt1.localdomain:ovirt-data02          145G   26G  112G  19% /rhev/data-center/mnt/glusterSD/ovirt1.localdomain:ovirt-data02
 

Creating a GlusterFS 3.4.1 cluster with ovirt1 and ovirt2 via the CLI (on oVirt 3.3.0).
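The probe that forms the trusted pool is not shown in the transcript below; on ovirt1 it is a one-liner (sketch):

# gluster peer probe ovirt2

After that, both nodes report each other as peers: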

[root@ovirt1 ~]# gluster peer status

Number of Peers: 1

Hostname: ovirt2

Uuid: 8355d741-fc2d-4484-b6e3-ca0ef99658c1

State: Peer in Cluster (Connected)

[root@ovirt1 ~]# ssh ovirt2

Last login: Sat Nov 16 10:23:11 2013 from ovirt1.localdomain

[root@ovirt2 ~]# gluster peer status

Number of Peers: 1

Hostname: 192.168.1.120

Uuid: 3d00042b-4e44-4680-98f7-98b814354001

State: Peer in Cluster (Connected)

Then create a replicated volume visible in the web console, make GlusterFS storage based on this volume, and convert it into the Data (Master) domain:

[root@ovirt1 ~]# gluster volume create data02-share replica 2 \
ovirt1:/GLSD/node-replicate ovirt2:/GLSD/node-replicate

volume create: data02-share: success: please start the volume to access data

Follow http://community.redhat.com/ovirt-3-3-glusterized/ carefully regarding:

1. Editing /etc/glusterfs/glusterd.vol to add the line "option rpc-auth-allow-insecure on"

2. Running "gluster volume set data02-share server.allow-insecure on"

before starting the volume; otherwise you won't be able to start VMs (see the sketch below).
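The glusterd.vol change only takes effect after glusterd is restarted; a sketch of both steps on each node (the restart requirement is my understanding, since glusterd reads that file at startup):

# grep rpc-auth-allow-insecure /etc/glusterfs/glusterd.vol
    option rpc-auth-allow-insecure on
# systemctl restart glusterd
# gluster volume set data02-share server.allow-insecure on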

Then set the right ownership for the manually created volume:

[root@ovirt1 ~]#  gluster volume set  data02-share  storage.owner-uid 36
[root@ovirt1 ~]#  gluster volume  set data02-share  storage.owner-gid 36

[root@ovirt1 ~]# gluster volume set data02-share quick-read off

volume set: success

[root@ovirt1 ~]# gluster volume set data02-share cluster.eager-lock on

volume set: success

[root@ovirt1 ~]# gluster volume set data02-share performance.stat-prefetch off

volume set: success
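At this point the volume can be started so the engine can use it (a sketch):

[root@ovirt1 ~]# gluster volume start data02-share

volume start: data02-share: success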

[root@ovirt1 ~]# gluster volume info

Volume Name: data02-share

Type: Replicate

Volume ID: 282545cd-583b-4211-a0f4-22eea4142953

Status: Started

Number of Bricks: 1 x 2 = 2

Transport-type: tcp

Bricks:

Brick1: ovirt1:/GLSD/node-replicate

Brick2: ovirt2:/GLSD/node-replicate

Options Reconfigured:

performance.stat-prefetch: off

cluster.eager-lock: on

performance.quick-read: off

storage.owner-uid: 36

storage.owner-gid: 36

server.allow-insecure: on

Check the VM disk image directory on both bricks; the image, lease, and meta files should be identical on ovirt1 and ovirt2:

[root@ovirt1 a16d3f36-1a40-4867-9ecb-bbae78189c03]# ls -la

total 5651976

drwxr-xr-x. 2 vdsm kvm       4096 Nov 16 02:01 .

drwxr-xr-x. 3 vdsm kvm       4096 Nov 16 02:01 ..

-rw-rw—-. 2 vdsm kvm 9663676416 Nov 16 10:44 b6fc0ebd-1e49-4056-b7b0-2f8167867e5f

-rw-rw—-. 2 vdsm kvm    1048576 Nov 16 02:01 b6fc0ebd-1e49-4056-b7b0-2f8167867e5f.lease

-rw-r–r–. 2 vdsm kvm        268 Nov 16 02:01 b6fc0ebd-1e49-4056-b7b0-2f8167867e5f.meta

[root@ovirt1 a16d3f36-1a40-4867-9ecb-bbae78189c03]# ssh ovirt2

Last login: Sat Nov 16 10:26:05 2013 from ovirt1.localdomain

[root@ovirt2 ~]# cd /GLSD/node-replicate/12c1221b-c500-4d21-87ac-1cdd0e0d5269/images/a16d3f36-1a40-4867-9ecb-bbae78189c03

[root@ovirt2 a16d3f36-1a40-4867-9ecb-bbae78189c03]# ls -la

total 5043492

drwxr-xr-x. 2 vdsm kvm       4096 Nov 16 02:01 .

drwxr-xr-x. 3 vdsm kvm       4096 Nov 16 02:01 ..

-rw-rw—-. 2 vdsm kvm 9663676416 Nov 16 10:44 b6fc0ebd-1e49-4056-b7b0-2f8167867e5f

-rw-rw—-. 2 vdsm kvm    1048576 Nov 16 02:01 b6fc0ebd-1e49-4056-b7b0-2f8167867e5f.lease

-rw-r–r–. 2 vdsm kvm        268 Nov 16 02:01 b6fc0ebd-1e49-4056-b7b0-2f8167867e5f.meta

[root@ovirt2 a16d3f36-1a40-4867-9ecb-bbae78189c03]# ls -la

total 5065892

drwxr-xr-x. 2 vdsm kvm       4096 Nov 16 02:01 .

drwxr-xr-x. 3 vdsm kvm       4096 Nov 16 02:01 ..

-rw-rw—-. 2 vdsm kvm 9663676416 Nov 16 10:45 b6fc0ebd-1e49-4056-b7b0-2f8167867e5f

-rw-rw—-. 2 vdsm kvm    1048576 Nov 16 02:01 b6fc0ebd-1e49-4056-b7b0-2f8167867e5f.lease

-rw-r–r–. 2 vdsm kvm        268 Nov 16 02:01 b6fc0ebd-1e49-4056-b7b0-2f8167867e5f.meta

[root@ovirt2 a16d3f36-1a40-4867-9ecb-bbae78189c03]# ls -la

total 5295140

drwxr-xr-x. 2 vdsm kvm       4096 Nov 16 02:01 .

drwxr-xr-x. 3 vdsm kvm       4096 Nov 16 02:01 ..

-rw-rw—-. 2 vdsm kvm 9663676416 Nov 16 10:47 b6fc0ebd-1e49-4056-b7b0-2f8167867e5f

-rw-rw—-. 2 vdsm kvm    1048576 Nov 16 02:01 b6fc0ebd-1e49-4056-b7b0-2f8167867e5f.lease

-rw-r–r–. 2 vdsm kvm        268 Nov 16 02:01 b6fc0ebd-1e49-4056-b7b0-2f8167867e5f.meta

The filesystem layout looks like this:

[root@ovirt1 a16d3f36-1a40-4867-9ecb-bbae78189c03]# df -h

Filesystem                               Size  Used Avail Use% Mounted on

/dev/mapper/fedora00-root                145G   24G  113G  18% /

devtmpfs                                 3.9G     0  3.9G   0% /dev

tmpfs                                    3.9G  100K  3.9G   1% /dev/shm

tmpfs                                    3.9G  1.1M  3.9G   1% /run

tmpfs                                    3.9G     0  3.9G   0% /sys/fs/cgroup

tmpfs                                    3.9G   76K  3.9G   1% /tmp

/dev/sdb3                                477M   87M  362M  20% /boot

ovirt1.localdomain:data02-share          125G   10G  109G   9% /rhev/data-center/mnt/glusterSD/ovirt1.localdomain:data02-share

ovirt1.localdomain:/var/lib/exports/iso  145G   24G  113G  18% /rhev/data-center/mnt/ovirt1.localdomain:_var_lib_exports_iso

192.168.1.120:/var/lib/exports/export    145G   24G  113G  18% /rhev/data-center/mnt/192.168.1.120:_var_lib_exports_export

Hidden issues 

To make the environment stable, the Storage Pool Manager was moved to ovirt2.localdomain.

In this case NFS mount requests from ovirt2 are satisfied successfully; a quick check and the detailed filesystem layout follow below.
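One way to confirm from ovirt2 that the NFS exports on ovirt1 are reachable (a sketch; showmount only lists the exports, the engine does the actual mounting):

[root@ovirt2 ~]# showmount -e ovirt1.localdomain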

Detailed filesystem layout on ovirt1 and ovirt2:

[root@ovirt1 ~]# df -h
Filesystem                               Size  Used Avail Use% Mounted on
/dev/mapper/fedora00-root                145G   31G  107G  23% /
devtmpfs                                 3.9G     0  3.9G   0% /dev
tmpfs                                    3.9G  104K  3.9G   1% /dev/shm
tmpfs                                    3.9G  1.1M  3.9G   1% /run
tmpfs                                    3.9G     0  3.9G   0% /sys/fs/cgroup
tmpfs                                    3.9G   80K  3.9G   1% /tmp
/dev/sdb3                                477M   87M  362M  20% /boot
ovirt1.localdomain:data02-share          125G   16G  104G  13% /rhev/data-center/mnt/glusterSD/ovirt1.localdomain:data02-share
ovirt1.localdomain:/var/lib/exports/iso  145G   31G  107G  23% /rhev/data-center/mnt/ovirt1.localdomain:_var_lib_exports_iso
192.168.1.120:/var/lib/exports/export    145G   31G  107G  23% /rhev/data-center/mnt/192.168.1.120:_var_lib_exports_export

[root@ovirt1 ~]# ssh ovirt2

Last login: Sun Nov 17 15:04:29 2013 from ovirt1.localdomain

[root@ovirt2 ~]# ifconfig

lo: flags=73<UP,LOOPBACK,RUNNING>  mtu 65536
inet 127.0.0.1  netmask 255.0.0.0
inet6 ::1  prefixlen 128  scopeid 0x10<host>
loop  txqueuelen 0  (Local Loopback)
RX packets 17083  bytes 95312048 (90.8 MiB)
RX errors 0  dropped 0  overruns 0  frame 0
TX packets 17083  bytes 95312048 (90.8 MiB)
TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0

ovirtmgmt: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1500
inet 192.168.1.130  netmask 255.255.255.0  broadcast 192.168.1.255
inet6 fe80::222:15ff:fe63:e4e2  prefixlen 64  scopeid 0x20<link>
ether 00:22:15:63:e4:e2  txqueuelen 0  (Ethernet)
RX packets 1876878  bytes 451006322 (430.1 MiB)
RX errors 0  dropped 0  overruns 0  frame 0
TX packets 2049680  bytes 218222806 (208.1 MiB)
TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0

p37p1: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1500
inet6 fe80::222:15ff:fe63:e4e2  prefixlen 64  scopeid 0x20<link>
ether 00:22:15:63:e4:e2  txqueuelen 1000  (Ethernet)
RX packets 1877201  bytes 477310768 (455.1 MiB)
RX errors 0  dropped 0  overruns 0  frame 0
TX packets 2049698  bytes 218224910 (208.1 MiB)
TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0
device interrupt 17

[root@ovirt2 ~]# df -h
Filesystem                               Size  Used Avail Use% Mounted on
/dev/mapper/fedora02-root                125G   16G  104G  13% /
devtmpfs                                 3.9G     0  3.9G   0% /dev
tmpfs                                    3.9G   92K  3.9G   1% /dev/shm
tmpfs                                    3.9G  984K  3.9G   1% /run
tmpfs                                    3.9G     0  3.9G   0% /sys/fs/cgroup
tmpfs                                    3.9G   44K  3.9G   1% /tmp
/dev/sda3                                477M   87M  362M  20% /boot
ovirt1.localdomain:data02-share          125G   16G  104G  13% /rhev/data-center/mnt/glusterSD/ovirt1.localdomain:data02-share
ovirt1.localdomain:/var/lib/exports/iso  145G   31G  107G  23% /rhev/data-center/mnt/ovirt1.localdomain:_var_lib_exports_iso
192.168.1.120:/var/lib/exports/export    145G   31G  107G  23% /rhev/data-center/mnt/192.168.1.120:_var_lib_exports_export


Attempt to install oVirt 3.3 & 3.3.1 on Fedora 19

November 13, 2013

***********************************************************************************

UPDATE on 12/07/2013  Even with downgraded apache-sshd you might constantly get “Unexpected connection interruption” attempting to add new host, in this case run one more time 

# engine-setup

on master server . Via my experience it helped several times (3.3.1 on F19).  Ovirt 3.3.0.1 never required the last hack

UPDATE on 11/23/2013.  Same schema would work for 3.3.1
with “yum downgrade apache-sshd” to be able to add new
host. When creating VM it’s possible to select NIC1
“ovirtmgmt/ovirtmgmt”.   I was able to find  http://www.ovirt.org/Features/Detailed_OSN_Integration regarding set up  Neutron(Quantum) to create VLANs (external provider) **********************************************************************************

Following bellow is attempt  to create two node oVirt 3.3 cluster and virtual machines using replicated glusterfs 3.4.1 volumes. Choice of firewalld as configured firewall seems to be unacceptable for this purpose in meantime. Selection of iptables firewall allows to complete the task. IPv4 firewall with iptables  just works for me with no pain and I clearly understand what to do when problems come up, nothing else. I also believe that any post pretending for “Howto” should be reproduced by any newcomer easily and successfully without  frustration or  disappointment.

First fix bug with NFS Server still affecting F19 :-  https://bugzilla.redhat.com/show_bug.cgi?id=970595

Please, also be aware of http://www.ovirt.org/OVirt_3.3_TestDay#Known_issues

Quote :

Known issues : host installation

Fedora 19: If the ovirtmgmt bridge is not successfully installed during initial host-setup, manually click on the host, setup networks, and add the ovirtmgmt bridge. It is recommended to disable NetworkManager as well.

End quote

Second put under /etc/sysconfig/network-scripts 

[root@ovirt1 network-scripts]# cat ifcfg-ovirtmgmt

DEVICE=ovirtmgmt

TYPE=Bridge

ONBOOT=yes

DELAY=0

BOOTPROTO=static

IPADDR=192.168.1.142

NETMASK=255.255.255.0

GATEWAY=192.168.1.1

DNS1=83.221.202.254

NM_CONTROLLED=”no

In particular (my box) :

[root@ovirt1 network-scripts]# cat ifcfg-enp2s0

BOOTPROTO=none

TYPE=”Ethernet”

ONBOOT=”yes”

NAME=”enp2s0″

BRIDGE=”ovirtmgmt”

HWADDR=00:22:15:63:e4:e2

Disable NetworkManager and enable service network.

Skipping this two steps in my case crashed install per

http://community.redhat.com/up-and-running-with-ovirt-3-3/

First by obvious reason,second didn’t bring vdsmd during install and engine.log generated a bunch of errors complaining absence network ovirtmgmt. Web console was actually useless  (again in my case) not managing storage domains in down status.

View also : http://www.mail-archive.com/users@ovirt.org/msg11394.html

Follow http://community.redhat.com/up-and-running-with-ovirt-3-3/

$ sudo yum localinstall http://ovirt.org/releases/ovirt-release-fedora.noarch.rpm -y

$ sudo yum install ovirt-engine-setup-plugin-allinone -y

Before run engine-setup :-

[root@ovirt1 ~]# yum install ovirt-engine-websocket-proxy

Loaded plugins: langpacks, refresh-packagekit, versionlock

Resolving Dependencies

–>; Running transaction check

–> Package ovirt-engine-websocket-proxy.noarch 0:3.3.0.1-1.fc19 will be installed

–> Finished Dependency Resolution

Dependencies Resolved

===================================================

Package                                 Arch              Version                   Repository               Size

===================================================

Installing:

ovirt-engine-websocket-proxy            noarch            3.3.0.1-1.fc19            ovirt-stable             12 k

Transaction Summary

===================================================

Install  1 Package

Total download size: 12 k

Installed size: 18 k

Is this ok [y/d/N]: y

Downloading packages:

ovirt-engine-websocket-proxy-3.3.0.1-1.fc19.noarch.rpm                                      |  12 kB  00:00:02

Running transaction check

Running transaction test

Transaction test succeeded

Running transaction

Installing : ovirt-engine-websocket-proxy-3.3.0.1-1.fc19.noarch                                              1/1

Verifying  : ovirt-engine-websocket-proxy-3.3.0.1-1.fc19.noarch                                              1/1

Installed:

ovirt-engine-websocket-proxy.noarch 0:3.3.0.1-1.fc19

Complete!

[root@ovirt1 ~]# engine-setup

[ INFO  ] Stage: Initializing

[ INFO  ] Stage: Environment setup

Configuration files: [‘/etc/ovirt-engine-setup.conf.d/10-packaging-aio.conf’, ‘/etc/ovirt-engine-setup.conf.d/10-packaging.conf’]

Log file: /var/log/ovirt-engine/setup/ovirt-engine-setup-20131112004446.log

Version: otopi-1.1.2 (otopi-1.1.2-1.fc19)

[ INFO  ] Hardware supports virtualization

[ INFO  ] Stage: Environment packages setup

[ INFO  ] Stage: Programs detection

[ INFO  ] Stage: Environment setup

[ INFO  ] Stage: Environment customization

Configure VDSM on this host? (Yes, No) [No]: Yes

Local storage domain path [/var/lib/images]:

Local storage domain name [local_storage]:

–== PACKAGES ==–

[ INFO  ] Checking for product updates…

[ INFO  ] No product updates found

–== NETWORK CONFIGURATION ==–

Host fully qualified DNS name of this server [ovirt1.localdomain]:

[WARNING] Failed to resolve ovirt1.localdomain using DNS, it can be resolved only locally

          firewalld was detected on your computer. Do you wish Setup to configure it? (yes, no) [yes]:  no

         iptables firewall was detected on your computer. Do you wish Setup to configure it? (yes, no) [yes]:  yes

–== DATABASE CONFIGURATION ==–

Where is the database located? (Local, Remote) [Local]:

Setup can configure the local postgresql server automatically for the engine to run. This may conflict with existing applications.

Would you like Setup to automatically configure postgresql, or prefer to perform that manually? (Automatic, Manual) [Automatic]:

Using existing credentials

–== OVIRT ENGINE CONFIGURATION ==–

Engine admin password:

Confirm engine admin password:

Application mode (Both, Virt, Gluster) [Both]:

Default storage type: (NFS, FC, ISCSI, POSIXFS) [NFS]:

–== PKI CONFIGURATION ==–

Organization name for certificate [localdomain]:

–== APACHE CONFIGURATION ==–

Setup can configure the default page of the web server to present the application home page. This may conflict with existing applications.

Do you wish to set the application as the default page of the web server? (Yes, No) [Yes]:

Setup can configure apache to use SSL using a certificate issued from the internal CA.

Do you wish Setup to configure that, or prefer to perform that manually? (Automatic, Manual) [Automatic]:

–== SYSTEM CONFIGURATION ==–

Configure an NFS share on this server to be used as an ISO Domain? (Yes, No) [Yes]:

Local ISO domain path [/var/lib/exports/iso]:

Local ISO domain name [ISO_DOMAIN]:

Configure WebSocket Proxy on this machine? (Yes, No) [Yes]:

–== END OF CONFIGURATION ==–

[ INFO  ] Stage: Setup validation

[WARNING] Less than 16384MB of memory is available

–== CONFIGURATION PREVIEW ==–

Database name                      : engine

Database secured connection        : False

Database host                      : localhost

Database user name                 : engine

Database host name validation      : False

Datbase port                       : 5432

NFS setup                          : True

PKI organization                   : localdomain

NFS mount point                    : /var/lib/exports/iso

Application mode                   : both

  Firewall manager                   : iptables

Configure WebSocket Proxy          : True

Host FQDN                          : ovirt1.localdomain

Datacenter storage type            : nfs

Configure local database           : True

Set application as default page    : True

Configure Apache SSL               : True

Configure VDSM on this host        : True

Local storage domain directory     : /var/lib/images

Please confirm installation settings (OK, Cancel) [OK]:

[ INFO  ] Stage: Transaction setup

[ INFO  ] Stopping engine service

[ INFO  ] Stopping websocket-proxy service

[ INFO  ] Stage: Misc configuration

[ INFO  ] Stage: Package installation

[ INFO  ] Stage: Misc configuration

[ INFO  ] Initializing PostgreSQL

[ INFO  ] Creating PostgreSQL database

[ INFO  ] Configurating PostgreSQL

[ INFO  ] Creating database schema

[ INFO  ] Creating CA

[ INFO  ] Configurating WebSocket Proxy

[ INFO  ] Generating post install configuration file ‘/etc/ovirt-engine-setup.conf.d/20-setup-ovirt-post.conf’

[ INFO  ] Stage: Transaction commit

[ INFO  ] Stage: Closing up

–== SUMMARY ==–

[WARNING] Less than 16384MB of memory is available

A default ISO NFS share has been created on this host.

If IP based access restrictions are required, edit:

entry /var/lib/exports/iso in /etc/exports.d/ovirt-engine-iso-domain.exports

SSH fingerprint: 90:16:09:69:8A:D8:43:C9:87:A7:CF:1A:A3:3B:71:44

Internal CA 5F:2E:12:99:32:55:07:11:C9:F9:AB:58:02:C9:A6:8E:16:91:CA:C1

Web access is enabled at:

http://ovirt1.localdomain:80/ovirt-engine

https://ovirt1.localdomain:443/ovirt-engine

Please use the user “admin” and password specified in order to login into oVirt Engine

–== END OF SUMMARY ==–

[ INFO  ] Starting engine service

[ INFO  ] Restarting httpd

[ INFO  ] Waiting for VDSM host to become operational. This may take several minutes…

[ INFO  ] The VDSM Host is now operational

[ INFO  ] Restarting nfs services

[ INFO  ] Generating answer file ‘/var/lib/ovirt-engine/setup/answers/20131112005106-setup.conf’

[ INFO  ] Stage: Clean up

Log file is located at /var/log/ovirt-engine/setup/ovirt-engine-setup-20131112004446.log

[ INFO  ] Stage: Pre-termination

[ INFO  ] Stage: Termination

[ INFO  ] Execution of setup completed successfully

Not sure it’s a must, but I’ve also updated /etc/sysconfig/iptables with

-A INPUT -m state –state NEW -m tcp -p tcp –dport 24007 -j ACCEPT
-A INPUT -m state –state NEW -m tcp -p tcp –dport 24008 -j ACCEPT
-A INPUT -m state –state NEW -m tcp -p tcp –dport 24009 -j ACCEPT 
-A INPUT -m state –state NEW -m tcp -p tcp –dport 24010 -j ACCEPT 
-A INPUT -m state –state NEW -m tcp -p tcp –dport 24011 -j ACCEPT 
-A INPUT -m state –state NEW -m tcp -p tcp –dport 38465:38469 -j ACCEPT

Install 3.3.1 doesn’t require  ovirt-engine-websocket-proxy and looks like

[root@ovirt1 ~]# engine-setup
[ INFO  ] Stage: Initializing
[ INFO  ] Stage: Environment setup
Configuration files: [‘/etc/ovirt-engine-setup.conf.d/10-packaging-aio.conf’, ‘/etc/ovirt-engine-setup.conf.d/10-packaging.conf’]
Log file: /var/log/ovirt-engine/setup/ovirt-engine-setup-20131122143650.log
Version: otopi-1.1.2 (otopi-1.1.2-1.fc19)
[ INFO  ] Hardware supports virtualization
[ INFO  ] Stage: Environment packages setup
[ INFO  ] Stage: Programs detection
[ INFO  ] Stage: Environment setup
[ INFO  ] Stage: Environment customization

–== PACKAGES ==–

[ INFO  ] Checking for product updates…
[ INFO  ] No product updates found

–== ALL IN ONE CONFIGURATION ==–

Configure VDSM on this host? (Yes, No) [No]: Yes
Local storage domain path [/var/lib/images]:
Local storage domain name [local_storage]:

–== NETWORK CONFIGURATION ==–

Host fully qualified DNS name of this server [ovirt1.localdomain]:
[WARNING] Failed to resolve ovirt1.localdomain using DNS, it can be resolved only locally

–== DATABASE CONFIGURATION ==–

Where is the database located? (Local, Remote) [Local]:
Setup can configure the local postgresql server automatically for the engine to run. This may conflict with existing applications.
Would you like Setup to automatically configure postgresql, or prefer to perform that manually? (Automatic, Manual) [Automatic]:
Using existing credentials

–== OVIRT ENGINE CONFIGURATION ==–

Engine admin password:
Confirm engine admin password:
Application mode (Both, Virt, Gluster) [Both]:
Default storage type: (NFS, FC, ISCSI, POSIXFS) [NFS]:

–== PKI CONFIGURATION ==–

Organization name for certificate [localdomain]:

–== APACHE CONFIGURATION ==–

Setup can configure the default page of the web server to present the application home page. This may conflict with existing applications.
Do you wish to set the application as the default page of the web server? (Yes, No) [Yes]:
Setup can configure apache to use SSL using a certificate issued from the internal CA.
Do you wish Setup to configure that, or prefer to perform that manually? (Automatic, Manual) [Automatic]:

–== SYSTEM CONFIGURATION ==–

Configure an NFS share on this server to be used as an ISO Domain? (Yes, No) [Yes]:
Local ISO domain path [/var/lib/exports/iso]:
Local ISO domain name [ISO_DOMAIN]:
Configure WebSocket Proxy on this machine? (Yes, No) [Yes]:

–== END OF CONFIGURATION ==–

[ INFO  ] Stage: Setup validation
[WARNING] Less than 16384MB of memory is available

–== CONFIGURATION PREVIEW ==–

Database name                      : engine
Database secured connection        : False
Database host                      : localhost
Database user name                 : engine
Database host name validation      : False
Datbase port                       : 5432
NFS setup                          : True
PKI organization                   : localdomain
NFS mount point                    : /var/lib/exports/iso
Application mode                   : both
Configure WebSocket Proxy          : True
Host FQDN                          : ovirt1.localdomain
Datacenter storage type            : nfs
Configure local database           : True
Set application as default page    : True
Configure Apache SSL               : True
Configure VDSM on this host        : True
Local storage domain directory     : /var/lib/images

Please confirm installation settings (OK, Cancel) [OK]:

[ INFO  ] Stage: Transaction setup
[ INFO  ] Stopping engine service
[ INFO  ] Stopping websocket-proxy service
[ INFO  ] Stage: Misc configuration
[ INFO  ] Stage: Package installation
[ INFO  ] Stage: Misc configuration
[ INFO  ] Initializing PostgreSQL
[ INFO  ] Creating PostgreSQL database
[ INFO  ] Configurating PostgreSQL
[ INFO  ] Creating database schema
[ INFO  ] Creating CA
[ INFO  ] Configurating WebSocket Proxy
[ INFO  ] Generating post install configuration file ‘/etc/ovirt-engine-setup.conf.d/20-setup-ovirt-post.conf’
[ INFO  ] Stage: Transaction commit
[ INFO  ] Stage: Closing up

–== SUMMARY ==–

[WARNING] Less than 16384MB of memory is available
An ISO NFS share has been created on this host.
If IP based access restrictions are required, edit:
entry /var/lib/exports/iso in /etc/exports.d/ovirt-engine-iso-domain.exports
SSH fingerprint: DB:C5:99:16:0D:67:4B:F5:62:99:B2:D3:E2:C7:7F:59
Internal CA 93:BB:05:42:C6:6F:00:28:A1:F1:90:C5:3E:E3:91:D6:1F:1B:17:3D
The following network ports should be opened:
tcp:111
tcp:2049
tcp:32803
tcp:443
tcp:49152-49216
tcp:5432
tcp:5634-6166
tcp:6100
tcp:662
tcp:80
tcp:875
tcp:892
udp:111
udp:32769
udp:662
udp:875
udp:892
An example of the required configuration for iptables can be found at:
/etc/ovirt-engine/iptables.example
In order to configure firewalld, copy the files from
/etc/ovirt-engine/firewalld to /etc/firewalld/services
and execute the following commands:
firewall-cmd -service ovirt-postgres
firewall-cmd -service ovirt-https
firewall-cmd -service ovirt-aio
firewall-cmd -service ovirt-websocket-proxy
firewall-cmd -service ovirt-nfs
firewall-cmd -service ovirt-http
Web access is enabled at:
http://ovirt1.localdomain:80/ovirt-engine
https://ovirt1.localdomain:443/ovirt-engine
Please use the user “admin” and password specified in order to login into oVirt Engine

–== END OF SUMMARY ==–

[ INFO  ] Starting engine service
[ INFO  ] Restarting httpd
[ INFO  ] Waiting for VDSM host to become operational. This may take several minutes…
[ INFO  ] The VDSM Host is now operational
[ INFO  ] Restarting nfs services
[ INFO  ] Generating answer file ‘/var/lib/ovirt-engine/setup/answers/20131122144055-setup.conf’
[ INFO  ] Stage: Clean up
Log file is located at /var/log/ovirt-engine/setup/ovirt-engine-setup-20131122143650.log
[ INFO  ] Stage: Pre-termination
[ INFO  ] Stage: Termination
[ INFO  ] Execution of setup completed successfully

Updated /etc/sysconfig/iptables with

-A INPUT -m state –state NEW -m tcp -p tcp –dport 24007 -j ACCEPT
-A INPUT -m state –state NEW -m tcp -p tcp –dport 24008 -j ACCEPT
-A INPUT -m state –state NEW -m tcp -p tcp –dport 24009 -j ACCEPT
-A INPUT -m state –state NEW -m tcp -p tcp –dport 24010 -j ACCEPT
-A INPUT -m state –state NEW -m tcp -p tcp –dport 24011 -j ACCEPT
-A INPUT -m state –state NEW -m tcp -p tcp –dport 38465:38469 -j ACCEPT

VMs running on different hosts of two node cluster started via Web Console

[root@ovirt1 ~]# service libvirtd status

Redirecting to /bin/systemctl status  libvirtd.service

libvirtd.service – Virtualization daemon

Loaded: loaded (/usr/lib/systemd/system/libvirtd.service; enabled)

Active: active (running) since Fri 2013-11-22 10:31:07 VOLT; 54min ago

Main PID: 1131 (libvirtd)

CGroup: name=systemd:/system/libvirtd.service

├─1131 /usr/sbin/libvirtd –listen

└─8606 /usr/bin/qemu-system-x86_64 -machine accel=kvm -name UbuntuSalamander -S -machine pc-1.0,accel=kvm,usb=of…

Nov 22 10:31:07 ovirt1.localdomain libvirtd[1131]: 2013-11-22 06:31:07.778+0000: 1131: info : libvirt version: 1.0.5.7….org)

Nov 22 10:31:07 ovirt1.localdomain libvirtd[1131]: 2013-11-22 06:31:07.778+0000: 1131: debug : virLogParseOutputs:1331…d.log

[root@ovirt1 ~]# ssh ovirt2

Last login: Fri Nov 22 10:45:26 2013

[root@ovirt2 ~]# service libvirtd  status

Redirecting to /bin/systemctl status  libvirtd.service

libvirtd.service – Virtualization daemon

Loaded: loaded (/usr/lib/systemd/system/libvirtd.service; enabled)

Active: active (running) since Fri 2013-11-22 10:44:47 VOLT; 41min ago

Main PID: 1019 (libvirtd)

CGroup: name=systemd:/system/libvirtd.service

├─1019 /usr/sbin/libvirtd –listen

└─2776 /usr/bin/qemu-system-x86_64 -machine accel=kvm -name VF19NW -S -machine pc-1.0,accel=kvm,usb=off -cpu Pen…

Nov 22 10:44:48 ovirt2.localdomain libvirtd[1019]: 2013-11-22 06:44:48.317+0000: 1019: info : libvirt version: 1.0.5.7….org)

Nov 22 10:44:48 ovirt2.localdomain libvirtd[1019]: 2013-11-22 06:44:48.317+0000: 1019: debug : virLogParseOutputs:1331…d.log

 

Virtual machines using replicated glusterfs 3.4.1 volumes

Add new host via Web Console.  Make sure that on new host you previously ran :-

$ sudo yum localinstall http://ovirt.org/releases/ovirt-release-fedora.noarch.rpm -y

otherwise it stays incompatible with oVirt 3.3 ( 3.2 as maximum )

Set up ovirtmgmt bridge, disabled firewalld and enabled iptables firewall manager

On server ovirt1, run the following commands before adding new host ovirt2

# ssh-keygen (Hit Enter to accept all of the defaults)
# ssh-copy-id -i ~/.ssh/id_rsa.pub root@ovirt2

Even with downgraded apache-sshd you might constantly get “Unexpected connection interruption” in this case run one more time  # engine-setup on master server . Via my experience it helped several times (3.3.1 on F19) . Ovirt 3.3.0.1 never required the last hack

Version 3.3.1 allows to create Gluster volumes via GUI, automatically configuring required features for volume been created via graphical environment.

Regarding design glusterfs volumes for production environment view https://forge.gluster.org/hadoop/pages/InstallingAndConfiguringGlusterFS

Double check via command line

 # gluster volume info

Volume Name: ovirt-data02
Type: Replicate
Volume ID: b1cf98c9-5525-48d4-9fb0-bde47d7a98b0
Status: Started
Number of Bricks: 1 x 2 = 2
Transport-type: tcp
Bricks:
Brick1: ovirt1.localdomain:/home/boris/node-replicate
Brick2: 192.168.1.127:/home/boris/node-replicate
Options Reconfigured:
storage.owner-gid: 36
storage.owner-uid: 36
cluster.server-quorum-type: server
cluster.quorum-type: auto
network.remote-dio: enable
cluster.eager-lock: enable
performance.stat-prefetch: off
performance.io-cache: off
performance.read-ahead: off
performance.quick-read: off
auth.allow: *
user.cifs: enable

nfs.disable: off

Creating XFS based replicated gluster volume via oVirt 3.3.1 per  https://forge.gluster.org/hadoop/pages/InstallingAndConfiguringGlusterFS
 
 [root@ovirt1 ~]# gluster volume info ovirt-data05
Volume Name: ovirt-data05
Type: Replicate
Volume ID: ff0955b6-668a-4eab-acf0-606456ee0005
Status: Started
Number of Bricks: 1 x 2 = 2
Transport-type: tcp
Bricks:
Brick1: ovirt1.localdomain:/mnt/brick1/node-replicate
Brick2: 192.168.1.127:/mnt/brick1/node-replicate
Options Reconfigured:
nfs.disable: off
user.cifs: enable
auth.allow: *
storage.owner-uid: 36
storage.owner-gid: 36
 
[root@ovirt1 ~]# mount | grep xfs
selinuxfs on /sys/fs/selinux type selinuxfs (rw,relatime)
/dev/sda3 on /mnt/brick1 type xfs (rw,noatime,seclabel,attr2,inode64,noquota)
 
[root@ovirt1 ~]# df -h
Filesystem                               Size  Used Avail Use% Mounted on
/dev/mapper/fedora00-root                145G   26G  112G  19% /
devtmpfs                                 3.9G     0  3.9G   0% /dev
tmpfs                                    3.9G  2.2M  3.9G   1% /dev/shm
tmpfs                                    3.9G 1004K  3.9G   1% /run
tmpfs                                    3.9G     0  3.9G   0% /sys/fs/cgroup
tmpfs                                    3.9G   76K  3.9G   1% /tmp
/dev/sda1                                477M  105M  344M  24% /boot
/dev/sda3                                 98G   19G   80G  19% /mnt/brick1
ovirt1.localdomain:ovirt-data05           98G   19G   80G  19% /rhev/data-center/mnt/glusterSD/ovirt1.localdomain:ovirt-data05
ovirt1.localdomain:/var/lib/exports/iso  145G   26G  112G  19% /rhev/data-center/mnt/ovirt1.localdomain:_var_lib_exports_iso
192.168.1.137:/var/lib/exports/export    145G   26G  112G  19% /rhev/data-center/mnt/192.168.1.137:_var_lib_exports_export
ovirt1.localdomain:ovirt-data02          145G   26G  112G  19% /rhev/data-center/mnt/glusterSD/ovirt1.localdomain:ovirt-data02

 

 

 

Creating  glusterfs 3.4.1 cluster  with ovirt1 and ovirt2 via CLI. (3.3.0)

[root@ovirt1 ~]# gluster peer status

Number of Peers: 1

Hostname: ovirt2

Uuid: 8355d741-fc2d-4484-b6e3-ca0ef99658c1

State: Peer in Cluster (Connected)

[root@ovirt1 ~]# ssh ovirt2

Last login: Sat Nov 16 10:23:11 2013 from ovirt1.localdomain

[root@ovirt2 ~]# gluster peer status

Number of Peers: 1

Hostname: 192.168.1.120

Uuid: 3d00042b-4e44-4680-98f7-98b814354001

State: Peer in Cluster (Connected)

then create replicated volume  visible in Web Console, make Glusterfs storage based on this volume and convert into Data(Master)

[root@ovirt1 ~]# gluster volume create data02-share  replica 2 \

ovirt1:/GLSD/node-replicate ovirt2:/GLSD/node-replicate

volume create: data02-share: success: please start the volume to access data

Follow carefully http://community.redhat.com/ovirt-3-3-glusterized/ regarding

1. Editing /etc/glusterfs/glusterd.vol add line

“option rpc-auth-allow-insecure on”

2. gluster volume set data server.allow-insecure on

before starting volume , otherwise you won’t be able to start vms.

Then set right permissions for manually created volume :-  

[root@ovirt1 ~]#  gluster volume set  data02-share  storage.owner-uid 36
[root@ovirt1 ~]#  gluster volume  set data02-share  storage.owner-gid 36

[root@ovirt1 ~]# gluster volume set data02-share quick-read off

volume set: success

[root@ovirt1 ~]# gluster volume set data02-share cluster.eager-lock on

volume set: success

[root@ovirt1 ~]# gluster volume set data02-share performance.stat-prefetch off

volume set: success

[root@ovirt1 ~]# gluster volume info

Volume Name: data02-share

Type: Replicate

Volume ID: 282545cd-583b-4211-a0f4-22eea4142953

Status: Started

Number of Bricks: 1 x 2 = 2

Transport-type: tcp

Bricks:

Brick1: ovirt1:/GLSD/node-replicate

Brick2: ovirt2:/GLSD/node-replicate

Options Reconfigured:

performance.stat-prefetch: off

cluster.eager-lock: on

performance.quick-read: off

storage.owner-uid: 36

storage.owner-gid: 36

server.allow-insecure: on

[root@ovirt1 a16d3f36-1a40-4867-9ecb-bbae78189c03]# ls -la

total 5651976

drwxr-xr-x. 2 vdsm kvm       4096 Nov 16 02:01 .

drwxr-xr-x. 3 vdsm kvm       4096 Nov 16 02:01 ..

-rw-rw—-. 2 vdsm kvm 9663676416 Nov 16 10:44 b6fc0ebd-1e49-4056-b7b0-2f8167867e5f

-rw-rw—-. 2 vdsm kvm    1048576 Nov 16 02:01 b6fc0ebd-1e49-4056-b7b0-2f8167867e5f.lease

-rw-r–r–. 2 vdsm kvm        268 Nov 16 02:01 b6fc0ebd-1e49-4056-b7b0-2f8167867e5f.meta

[root@ovirt1 a16d3f36-1a40-4867-9ecb-bbae78189c03]# ssh ovirt2

Last login: Sat Nov 16 10:26:05 2013 from ovirt1.localdomain

[root@ovirt2 ~]# cd /GLSD/node-replicate/12c1221b-c500-4d21-87ac-1cdd0e0d5269/images/a16d3f36-1a40-4867-9ecb-bbae78189c03

[root@ovirt2 a16d3f36-1a40-4867-9ecb-bbae78189c03]# ls -la

total 5043492

drwxr-xr-x. 2 vdsm kvm       4096 Nov 16 02:01 .

drwxr-xr-x. 3 vdsm kvm       4096 Nov 16 02:01 ..

-rw-rw—-. 2 vdsm kvm 9663676416 Nov 16 10:44 b6fc0ebd-1e49-4056-b7b0-2f8167867e5f

-rw-rw—-. 2 vdsm kvm    1048576 Nov 16 02:01 b6fc0ebd-1e49-4056-b7b0-2f8167867e5f.lease

-rw-r–r–. 2 vdsm kvm        268 Nov 16 02:01 b6fc0ebd-1e49-4056-b7b0-2f8167867e5f.meta

[root@ovirt2 a16d3f36-1a40-4867-9ecb-bbae78189c03]# ls -la

total 5065892

drwxr-xr-x. 2 vdsm kvm       4096 Nov 16 02:01 .

drwxr-xr-x. 3 vdsm kvm       4096 Nov 16 02:01 ..

-rw-rw—-. 2 vdsm kvm 9663676416 Nov 16 10:45 b6fc0ebd-1e49-4056-b7b0-2f8167867e5f

-rw-rw—-. 2 vdsm kvm    1048576 Nov 16 02:01 b6fc0ebd-1e49-4056-b7b0-2f8167867e5f.lease

-rw-r–r–. 2 vdsm kvm        268 Nov 16 02:01 b6fc0ebd-1e49-4056-b7b0-2f8167867e5f.meta

[root@ovirt2 a16d3f36-1a40-4867-9ecb-bbae78189c03]# ls -la

total 5295140

drwxr-xr-x. 2 vdsm kvm       4096 Nov 16 02:01 .

drwxr-xr-x. 3 vdsm kvm       4096 Nov 16 02:01 ..

-rw-rw—-. 2 vdsm kvm 9663676416 Nov 16 10:47 b6fc0ebd-1e49-4056-b7b0-2f8167867e5f

-rw-rw—-. 2 vdsm kvm    1048576 Nov 16 02:01 b6fc0ebd-1e49-4056-b7b0-2f8167867e5f.lease

-rw-r–r–. 2 vdsm kvm        268 Nov 16 02:01 b6fc0ebd-1e49-4056-b7b0-2f8167867e5f.meta

Filesystem layout looks like :

[root@ovirt1 a16d3f36-1a40-4867-9ecb-bbae78189c03]# df -h

Filesystem                               Size  Used Avail Use% Mounted on

/dev/mapper/fedora00-root                145G   24G  113G  18% /

devtmpfs                                 3.9G     0  3.9G   0% /dev

tmpfs                                    3.9G  100K  3.9G   1% /dev/shm

tmpfs                                    3.9G  1.1M  3.9G   1% /run

tmpfs                                    3.9G     0  3.9G   0% /sys/fs/cgroup

tmpfs                                    3.9G   76K  3.9G   1% /tmp

/dev/sdb3                                477M   87M  362M  20% /boot

ovirt1.localdomain:data02-share          125G   10G  109G   9% /rhev/data-center/mnt/glusterSD/ovirt1.localdomain:data02-share

ovirt1.localdomain:/var/lib/exports/iso  145G   24G  113G  18% /rhev/data-center/mnt/ovirt1.localdomain:_var_lib_exports_iso

192.168.1.120:/var/lib/exports/export    145G   24G  113G  18% /rhev/data-center/mnt/192.168.1.120:_var_lib_exports_export

Setting up Ubuntu Salamander Server KVM  via oVirt 3.3 on F19

Hidden issues 

To make the environment stable, the Storage Pool Manager was moved to ovirt2.localdomain:

In this case NFS mount requests from ovirt2 are satisfied successfully. View the next snapshot:

Detailed filesystems layout on ovirt1 and ovirt2

[root@ovirt1 ~]# df -h
Filesystem                               Size  Used Avail Use% Mounted on
/dev/mapper/fedora00-root                145G   31G  107G  23% /
devtmpfs                                 3.9G     0  3.9G   0% /dev
tmpfs                                    3.9G  104K  3.9G   1% /dev/shm
tmpfs                                    3.9G  1.1M  3.9G   1% /run
tmpfs                                    3.9G     0  3.9G   0% /sys/fs/cgroup
tmpfs                                    3.9G   80K  3.9G   1% /tmp
/dev/sdb3                                477M   87M  362M  20% /boot
ovirt1.localdomain:data02-share          125G   16G  104G  13% /rhev/data-center/mnt/glusterSD/ovirt1.localdomain:data02-share
ovirt1.localdomain:/var/lib/exports/iso  145G   31G  107G  23% /rhev/data-center/mnt/ovirt1.localdomain:_var_lib_exports_iso
192.168.1.120:/var/lib/exports/export    145G   31G  107G  23% /rhev/data-center/mnt/192.168.1.120:_var_lib_exports_export

[root@ovirt1 ~]# ssh ovirt2

Last login: Sun Nov 17 15:04:29 2013 from ovirt1.localdomain

[root@ovirt2 ~]# ifconfig

lo: flags=73<UP,LOOPBACK,RUNNING>  mtu 65536
inet 127.0.0.1  netmask 255.0.0.0
inet6 ::1  prefixlen 128  scopeid 0x10<host>
loop  txqueuelen 0  (Local Loopback)
RX packets 17083  bytes 95312048 (90.8 MiB)
RX errors 0  dropped 0  overruns 0  frame 0
TX packets 17083  bytes 95312048 (90.8 MiB)
TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0

ovirtmgmt: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1500
inet 192.168.1.130  netmask 255.255.255.0  broadcast 192.168.1.255
inet6 fe80::222:15ff:fe63:e4e2  prefixlen 64  scopeid 0x20<link>
ether 00:22:15:63:e4:e2  txqueuelen 0  (Ethernet)
RX packets 1876878  bytes 451006322 (430.1 MiB)
RX errors 0  dropped 0  overruns 0  frame 0
TX packets 2049680  bytes 218222806 (208.1 MiB)
TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0

p37p1: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1500
inet6 fe80::222:15ff:fe63:e4e2  prefixlen 64  scopeid 0x20<link>
ether 00:22:15:63:e4:e2  txqueuelen 1000  (Ethernet)
RX packets 1877201  bytes 477310768 (455.1 MiB)
RX errors 0  dropped 0  overruns 0  frame 0
TX packets 2049698  bytes 218224910 (208.1 MiB)
TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0
device interrupt 17

[root@ovirt2 ~]# df -h
Filesystem                               Size  Used Avail Use% Mounted on
/dev/mapper/fedora02-root                125G   16G  104G  13% /
devtmpfs                                 3.9G     0  3.9G   0% /dev
tmpfs                                    3.9G   92K  3.9G   1% /dev/shm
tmpfs                                    3.9G  984K  3.9G   1% /run
tmpfs                                    3.9G     0  3.9G   0% /sys/fs/cgroup
tmpfs                                    3.9G   44K  3.9G   1% /tmp
/dev/sda3                                477M   87M  362M  20% /boot
ovirt1.localdomain:data02-share          125G   16G  104G  13% /rhev/data-center/mnt/glusterSD/ovirt1.localdomain:data02-share
ovirt1.localdomain:/var/lib/exports/iso  145G   31G  107G  23% /rhev/data-center/mnt/ovirt1.localdomain:_var_lib_exports_iso
192.168.1.120:/var/lib/exports/export    145G   31G  107G  23% /rhev/data-center/mnt/192.168.1.120:_var_lib_exports_export

Spice vs VNC


GlusterFS replicated volume based Havana 2013.2 instances on a two-node GlusterFS 3.4.1 Fedora 19 cluster

November 2, 2013

A two-node gluster 3.4.1 cluster setup follows below. Havana 2013.2 RDO, installed via `packstack --allinone` on one of the boxes, has cinder tuned to create volumes on replicated glusterfs 3.4.1 storage. Several samples of creating bootable cinder volumes from images are described step by step, which provides a proof of the concepts of the articles mentioned below.

Please view first this nice article: http://www.mirantis.com/blog/openstack-havana-glusterfs-and-what-improved-support-really-means/ and also https://wiki.openstack.org/wiki/CinderSupportMatrix
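For reference, the RDO Havana all-in-one install mentioned above boils down to roughly the following (a minimal sketch of what was run on one of the boxes; the repo URL is the RDO quickstart location published for Havana and may have moved since):

# yum install -y http://rdo.fedorapeople.org/rdo-release.rpm

# yum install -y openstack-packstack

# packstack --allinone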

Per https://forge.gluster.org/hadoop/pages/InstallingAndConfiguringGlusterFS

On server1, run the following command:

  ssh-keygen (Hit Enter to accept all of the defaults)

On server1, run the following command for each other node (server) in the cluster:

  ssh-copy-id -i ~/.ssh/id_rsa.pub root@server4
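A quick sanity check (my own addition, not part of the linked guide) is to confirm that key-based login now works:

  ssh root@server4 hostname

If it prints the remote hostname without asking for a password, the keys were copied correctly.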

View also https://forge.gluster.org/hadoop/pages/InstallingAndConfiguringGlusterFS for steps 5), 6), 7)

[root@server1 ~]#   yum install glusterfs glusterfs-server glusterfs-fuse

[root@server1 ~(keystone_admin)]# service glusterd status

Redirecting to /bin/systemctl status  glusterd.service

glusterd.service – GlusterFS an clustered file-system server

Loaded: loaded (/usr/lib/systemd/system/glusterd.service; enabled)

Active: active (running) since Sat 2013-11-02 13:44:42 MSK; 1h 42min ago

Process: 2699 ExecStart=/usr/sbin/glusterd -p /run/glusterd.pid (code=exited, status=0/SUCCESS)

Main PID: 2700 (glusterd)

CGroup: name=systemd:/system/glusterd.service

├─2700 /usr/sbin/glusterd -p /run/glusterd.pid

├─2902 /usr/sbin/glusterfsd -s server1 --volfile-id cinder-volumes02.server1.home-boris-node-replicate -p /var/l…

├─5376 /usr/sbin/glusterfs -s localhost  --volfile-id gluster/glustershd -p /var/lib/glusterd/glustershd/run/glus…

├─6675 /usr/sbin/glusterfs -s localhost  --volfile-id gluster/nfs -p /var/lib/glusterd/nfs/run/nfs.pid -l /var/lo…

└─6683 /sbin/rpc.statd

Nov 02 13:44:40 server1 systemd[1]: Starting GlusterFS an clustered file-system server…

Nov 02 13:44:42 server1 systemd[1]: Started GlusterFS an clustered file-system server.

Nov 02 13:46:52 server1 rpc.statd[5383]: Version 1.2.7 starting

Nov 02 13:46:52 server1 sm-notify[5384]: Version 1.2.7 starting

[root@server1 ~]# service iptables stop

[root@server1 ~]# service iptables status

Redirecting to /bin/systemctl status  iptables.service

iptables.service – IPv4 firewall with iptables

Loaded: loaded (/usr/lib/systemd/system/iptables.service; enabled)

Active: failed (Result: exit-code) since Sat 2013-11-02 12:59:10 MSK; 5s ago

Process: 14306 ExecStop=/usr/libexec/iptables/iptables.init stop (code=exited, status=1/FAILURE)

Main PID: 472 (code=exited, status=0/SUCCESS)

CGroup: name=systemd:/system/iptables.service

Nov 02 12:59:10 server1 systemd[1]: Stopping IPv4 firewall with iptables…

Nov 02 12:59:10 server1 iptables.init[14306]: iptables: Flushing firewall rules: [  OK  ]

Nov 02 12:59:10 server1 iptables.init[14306]: iptables: Setting chains to policy ACCEPT: raw security mangle nat fil…ILED]

Nov 02 12:59:10 server1 iptables.init[14306]: iptables: Unloading modules:  iptable_nat[FAILED]

Nov 02 12:59:10 server1 systemd[1]: iptables.service: control process exited, code=exited status=1

Nov 02 12:59:10 server1 systemd[1]: Stopped IPv4 firewall with iptables.

Nov 02 12:59:10 server1 systemd[1]: Unit iptables.service entered failed state.

[root@server1 ~]# gluster peer probe server4

peer probe: success

[root@server1 ~]# gluster peer  status

Number of Peers: 1

Hostname: server4

Port: 24007

Uuid: 4062c822-74d5-45e9-8eaa-8353845332de

State: Peer in Cluster (Connected)

[root@server1 ~]# gluster volume create cinder-volumes02  replica 2 \

server1:/home/boris/node-replicate  server4:/home/boris/node-replicate

volume create: cinder-volumes02: success: please start the volume to access data

[root@server1 ~]# gluster volume start cinder-volumes02

volume start: cinder-volumes02: success

[root@server1 ~]# gluster volume set cinder-volumes02  quick-read off
volume set: success

[root@server1 ~]# gluster volume set cinder-volumes02  cluster.eager-lock on
volume set: success

[root@server1 ~]# gluster volume set cinder-volumes02  performance.stat-prefetch off
volume set: success

[root@server1 ~]# gluster volume info

Volume Name: cinder-volumes02

Type: Replicate

Volume ID: 1a1566ed-34f7-4264-b0b4-91cf9526b5ef

Status: Started

Number of Bricks: 1 x 2 = 2

Transport-type: tcp

Bricks:

Brick1: server1:/home/boris/node-replicate

Brick2: server4:/home/boris/node-replicate

Options Reconfigured:
performance.quick-read: off
cluster.eager-lock: on
performance.stat-prefetch: off

[root@server1 ~]# service iptables start

Redirecting to /bin/systemctl start  iptables.service

[root@server1 ~]# service iptables status

Redirecting to /bin/systemctl status  iptables.service

iptables.service – IPv4 firewall with iptables

Loaded: loaded (/usr/lib/systemd/system/iptables.service; enabled)

Active: active (exited) since Sat 2013-11-02 13:10:17 MSK; 5s ago

Process: 14306 ExecStop=/usr/libexec/iptables/iptables.init stop (code=exited, status=1/FAILURE)

Process: 17699 ExecStart=/usr/libexec/iptables/iptables.init start (code=exited, status=0/SUCCESS)

Nov 02 13:10:17 server1 iptables.init[17699]: iptables: Applying firewall rules: [  OK  ]

Nov 02 13:10:17 server1 systemd[1]: Started IPv4 firewall with iptables.

[root@server1 ~(keystone_admin)]# gluster peer status

Number of Peers: 1

Hostname: server4

Uuid: 4062c822-74d5-45e9-8eaa-8353845332de

State: Peer in Cluster (Connected)

[root@server1 ~]# gluster volume info

Volume Name: cinder-volumes02

Type: Replicate

Volume ID: 1a1566ed-34f7-4264-b0b4-91cf9526b5ef

Status: Started

Number of Bricks: 1 x 2 = 2

Transport-type: tcp

Bricks:

Brick1: server1:/home/boris/node-replicate

Brick2: server4:/home/boris/node-replicate

Options Reconfigured:
performance.quick-read: off
cluster.eager-lock: on
performance.stat-prefetch: off

Update /etc/sysconfig/iptables on second box :-

Add to *filter section

-A INPUT -m state --state NEW -m tcp -p tcp --dport 111 -j ACCEPT
-A INPUT -m state --state NEW -m tcp -p tcp --dport 24007 -j ACCEPT
-A INPUT -m state --state NEW -m tcp -p tcp --dport 24008 -j ACCEPT
-A INPUT -m state --state NEW -m tcp -p tcp --dport 24009 -j ACCEPT
-A INPUT -m state --state NEW -m tcp -p tcp --dport 24010 -j ACCEPT
-A INPUT -m state --state NEW -m tcp -p tcp --dport 24011 -j ACCEPT
-A INPUT -m state --state NEW -m tcp -p tcp --dport 38465:38469 -j ACCEPT
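After saving /etc/sysconfig/iptables on the second box, reload the rules so gluster traffic from the peer is accepted (a minimal sketch; the grep is just a visual check):

# service iptables restart

# iptables -L INPUT -n | grep 24007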

Watching replication

Configuring Cinder to Add GlusterFS

 

# openstack-config --set /etc/cinder/cinder.conf DEFAULT volume_driver cinder.volume.drivers.glusterfs.GlusterfsDriver
# openstack-config --set /etc/cinder/cinder.conf DEFAULT glusterfs_shares_config /etc/cinder/shares.conf
# openstack-config --set /etc/cinder/cinder.conf DEFAULT glusterfs_mount_point_base /var/lib/cinder/volumes
Then edit the file /etc/cinder/shares.conf:
# vi /etc/cinder/shares.conf
    192.168.1.147:cinder-volumes02
:wq
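Before going further it does not hurt to double-check that all three options actually landed in cinder.conf (plain grep, nothing Havana-specific):

# grep -E 'volume_driver|glusterfs_shares_config|glusterfs_mount_point_base' /etc/cinder/cinder.conf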
Update the iptables firewall (remember that the firewalld service should be disabled on F19 from the beginning to keep the changes done by neutron/quantum in place):
# iptables-save  >  iptables.dump
**********************
 Add to *filter section:
**********************
-A INPUT -m state --state NEW -m tcp -p tcp --dport 111 -j ACCEPT
-A INPUT -m state --state NEW -m tcp -p tcp --dport 24007 -j ACCEPT
-A INPUT -m state --state NEW -m tcp -p tcp --dport 24008 -j ACCEPT
-A INPUT -m state --state NEW -m tcp -p tcp --dport 24009 -j ACCEPT
-A INPUT -m state --state NEW -m tcp -p tcp --dport 24010 -j ACCEPT
-A INPUT -m state --state NEW -m tcp -p tcp --dport 24011 -j ACCEPT
-A INPUT -m state --state NEW -m tcp -p tcp --dport 38465:38469 -j ACCEPT

# iptables-restore <  iptables.dump
# service iptables restart

Now mount the glusterfs volume on the directory predefined by Havana by restarting the cinder services:

# for i in api scheduler volume; do service openstack-cinder-${i} restart; done

[root@server1 ~(keystone_admin)]# df -h
Filesystem                      Size  Used Avail Use% Mounted on
/dev/mapper/fedora_5-root       193G   48G  135G  27% /
devtmpfs                        3.9G     0  3.9G   0% /dev
tmpfs                           3.9G  140K  3.9G   1% /dev/shm
tmpfs                           3.9G  948K  3.9G   1% /run
tmpfs                           3.9G     0  3.9G   0% /sys/fs/cgroup
tmpfs                           3.9G   92K  3.9G   1% /tmp
/dev/loop0                      928M  1.3M  860M   1% /srv/node/device1
/dev/sda1                       477M   87M  362M  20% /boot
tmpfs                           3.9G  948K  3.9G   1% /run/netns
192.168.1.147:cinder-volumes02  116G   61G   50G  56% /var/lib/cinder/volumes/e879618364aca859f13701bb918b087f

Building an Ubuntu Server 13.10 instance utilizing a cinder bootable volume replicated via glusterfs 3.4.1

Building a Windows 2012 evaluation instance utilizing a cinder bootable volume replicated via glusterfs 3.4.1

[root@ovirt1 ~(keystone_admin)]# service glusterd status

Redirecting to /bin/systemctl status  glusterd.service

glusterd.service – GlusterFS an clustered file-system server

Loaded: loaded (/usr/lib/systemd/system/glusterd.service; enabled)

Active: active (running) since Mon 2013-11-04 22:31:55 VOLT; 21min ago

Process: 2962 ExecStart=/usr/sbin/glusterd -p /run/glusterd.pid (code=exited, status=0/SUCCESS)

Main PID: 2963 (glusterd)

CGroup: name=systemd:/system/glusterd.service

├─ 2963 /usr/sbin/glusterd -p /run/glusterd.pid

├─ 3245 /usr/sbin/glusterfsd -s ovirt1 --volfile-id cinder-vols.ovirt1.fdr-set-node-replicate -p /var/lib/gluste…

├─ 6031 /usr/sbin/glusterfs -s localhost --volfile-id gluster/glustershd -p /var/lib/glusterd/glustershd/run/glu…

├─11335 /usr/sbin/glusterfs -s localhost --volfile-id gluster/nfs -p /var/lib/glusterd/nfs/run/nfs.pid -l /var/l…

└─11343 /sbin/rpc.statd

Nov 04 22:31:51 ovirt1.localdomain systemd[1]: Starting GlusterFS an clustered file-system server…

Nov 04 22:31:55 ovirt1.localdomain systemd[1]: Started GlusterFS an clustered file-system server.

Nov 04 22:35:11 ovirt1.localdomain rpc.statd[6038]: Version 1.2.7 starting

Nov 04 22:35:11 ovirt1.localdomain sm-notify[6039]: Version 1.2.7 starting

Nov 04 22:35:11 ovirt1.localdomain GlusterFS[6026]: [2013-11-04 18:35:11.400008] C [nfs.c:271:nfs_start_subvol_lookup_…ctory

Nov 04 22:53:23 ovirt1.localdomain rpc.statd[11343]: Version 1.2.7 starting

[root@ovirt1 ~(keystone_admin)]# df -h

Filesystem                  Size  Used Avail Use% Mounted on

/dev/mapper/fedora00-root   169G   74G   87G  46% /

devtmpfs                    3.9G     0  3.9G   0% /dev

tmpfs                       3.9G   84K  3.9G   1% /dev/shm

tmpfs                       3.9G  956K  3.9G   1% /run

tmpfs                       3.9G     0  3.9G   0% /sys/fs/cgroup

tmpfs                       3.9G  116K  3.9G   1% /tmp

/dev/loop0                  928M  1.3M  860M   1% /srv/node/device1

/dev/sdb1                   477M   87M  361M  20% /boot

tmpfs                       3.9G  956K  3.9G   1% /run/netns

192.168.1.137:/cinder-vols  164G   73G   83G  47% /var/lib/cinder/volumes/8a78781567bbf747a694c25ae4494d9c

[root@ovirt1 ~(keystone_admin)]# gluster peer status

Number of Peers: 1

Hostname: ovirt2

Uuid: 2aa2dfb5-d266-4474-89c1-c5c011eec025

State: Peer in Cluster (Connected)

[root@ovirt1 ~(keystone_admin)]# gluster volume info cinder-vols

Volume Name: cinder-vols

Type: Replicate

Volume ID: e8eab40f-3401-4893-ba25-121bd4e0a74e

Status: Started

Number of Bricks: 1 x 2 = 2

Transport-type: tcp

Bricks:

Brick1: ovirt1:/fdr/set/node-replicate

Brick2: ovirt2:/fdr/set/node-replicate

Options Reconfigured:
performance.quick-read: off
cluster.eager-lock: on
performance.stat-prefetch: off

[root@ovirt1 ~(keystone_admin)]# nova image-list

+--------------------------------------+---------------------------------+--------+--------+
| ID                                   | Name                            | Status | Server |
+--------------------------------------+---------------------------------+--------+--------+
| 291f7c8b-043b-4656-9285-244770f127e5 | Fedora19image                   | ACTIVE |        |
| 67d9f757-43ca-4204-985d-5ecdb31e8ec7 | Salamander1030                  | ACTIVE |        |
| 624681da-f48f-43d9-968e-1e3da6cc75a3 | Windows Server 2012 R2 Std Eval | ACTIVE |        |
| bd01f02d-e0bf-4cc5-aa35-ff97ebd9c1ef | cirros                          | ACTIVE |        |
+--------------------------------------+---------------------------------+--------+--------+

[root@ovirt1 ~(keystone_admin)]# cinder create --image-id  \
624681da-f48f-43d9-968e-1e3da6cc75a3 --display_name Windows2012VL 20
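Once glance finishes copying the image into the new volume it should show up as available; watching that is simply a matter of re-running cinder list (standard CLI, the display name is the one given above):

[root@ovirt1 ~(keystone_admin)]# cinder list | grep Windows2012VL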