Attempt to install oVirt 3.3 & 3.3.1 on Fedora 19

***********************************************************************************

UPDATE on 12/07/2013: Even with the downgraded apache-sshd you might still constantly get “Unexpected connection interruption” when attempting to add a new host. In this case, run one more time

# engine-setup

on the master server. In my experience it helped several times (3.3.1 on F19). oVirt 3.3.0.1 never required this last hack.

UPDATE on 11/23/2013: The same scheme works for 3.3.1, with “yum downgrade apache-sshd” needed to be able to add a new host. When creating a VM it’s possible to select NIC1 “ovirtmgmt/ovirtmgmt”. I was able to find http://www.ovirt.org/Features/Detailed_OSN_Integration regarding setting up Neutron (Quantum) to create VLANs (external provider).

***********************************************************************************

What follows is an attempt to create a two-node oVirt 3.3 cluster and virtual machines using replicated GlusterFS 3.4.1 volumes. Choosing firewalld as the configured firewall seems unacceptable for this purpose at the moment; selecting the iptables firewall allows the task to be completed. An IPv4 firewall with iptables just works for me with no pain, and I clearly understand what to do when problems come up. I also believe that any post claiming to be a “Howto” should be easily and successfully reproducible by any newcomer, without frustration or disappointment.

First, fix the NFS server bug still affecting F19: https://bugzilla.redhat.com/show_bug.cgi?id=970595

Please also be aware of http://www.ovirt.org/OVirt_3.3_TestDay#Known_issues

Quote :

Known issues : host installation

Fedora 19: If the ovirtmgmt bridge is not successfully installed during initial host-setup, manually click on the host, setup networks, and add the ovirtmgmt bridge. It is recommended to disable NetworkManager as well.

End quote

Second, put the following under /etc/sysconfig/network-scripts:

[root@ovirt1 network-scripts]# cat ifcfg-ovirtmgmt

DEVICE=ovirtmgmt

TYPE=Bridge

ONBOOT=yes

DELAY=0

BOOTPROTO=static

IPADDR=192.168.1.142

NETMASK=255.255.255.0

GATEWAY=192.168.1.1

DNS1=83.221.202.254

NM_CONTROLLED="no"

In particular (on my box):

[root@ovirt1 network-scripts]# cat ifcfg-enp2s0

BOOTPROTO=none

TYPE="Ethernet"

ONBOOT="yes"

NAME="enp2s0"

BRIDGE="ovirtmgmt"

HWADDR=00:22:15:63:e4:e2

Disable NetworkManager and enable the network service.
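The switch can be scripted. A minimal sketch, run as root on the host; the systemctl calls are printed rather than executed here so the sketch is safe to dry-run anywhere (remove the `echo` to actually execute them):

```shell
# Dry-run sketch of the NetworkManager -> network service switch.
# On the real host, replace `echo "$cmd"` with `$cmd` (or eval "$cmd").
for cmd in \
    'systemctl stop NetworkManager' \
    'systemctl disable NetworkManager' \
    'systemctl enable network' \
    'systemctl start network'
do
    echo "$cmd"
done
```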

Skipping these two steps in my case crashed the install per

http://community.redhat.com/up-and-running-with-ovirt-3-3/

The first for an obvious reason; the second didn’t bring up vdsmd during install, and engine.log generated a bunch of errors complaining about the absent ovirtmgmt network. The web console was actually useless (again, in my case), unable to manage storage domains in down status.

See also: http://www.mail-archive.com/users@ovirt.org/msg11394.html

Follow http://community.redhat.com/up-and-running-with-ovirt-3-3/

$ sudo yum localinstall http://ovirt.org/releases/ovirt-release-fedora.noarch.rpm -y

$ sudo yum install ovirt-engine-setup-plugin-allinone -y

Before running engine-setup:

[root@ovirt1 ~]# yum install ovirt-engine-websocket-proxy

Loaded plugins: langpacks, refresh-packagekit, versionlock

Resolving Dependencies

--> Running transaction check

--> Package ovirt-engine-websocket-proxy.noarch 0:3.3.0.1-1.fc19 will be installed

--> Finished Dependency Resolution

Dependencies Resolved

===================================================

Package                                 Arch              Version                   Repository               Size

===================================================

Installing:

ovirt-engine-websocket-proxy            noarch            3.3.0.1-1.fc19            ovirt-stable             12 k

Transaction Summary

===================================================

Install  1 Package

Total download size: 12 k

Installed size: 18 k

Is this ok [y/d/N]: y

Downloading packages:

ovirt-engine-websocket-proxy-3.3.0.1-1.fc19.noarch.rpm                                      |  12 kB  00:00:02

Running transaction check

Running transaction test

Transaction test succeeded

Running transaction

Installing : ovirt-engine-websocket-proxy-3.3.0.1-1.fc19.noarch                                              1/1

Verifying  : ovirt-engine-websocket-proxy-3.3.0.1-1.fc19.noarch                                              1/1

Installed:

ovirt-engine-websocket-proxy.noarch 0:3.3.0.1-1.fc19

Complete!

[root@ovirt1 ~]# engine-setup

[ INFO  ] Stage: Initializing

[ INFO  ] Stage: Environment setup

Configuration files: ['/etc/ovirt-engine-setup.conf.d/10-packaging-aio.conf', '/etc/ovirt-engine-setup.conf.d/10-packaging.conf']

Log file: /var/log/ovirt-engine/setup/ovirt-engine-setup-20131112004446.log

Version: otopi-1.1.2 (otopi-1.1.2-1.fc19)

[ INFO  ] Hardware supports virtualization

[ INFO  ] Stage: Environment packages setup

[ INFO  ] Stage: Programs detection

[ INFO  ] Stage: Environment setup

[ INFO  ] Stage: Environment customization

Configure VDSM on this host? (Yes, No) [No]: Yes

Local storage domain path [/var/lib/images]:

Local storage domain name [local_storage]:

--== PACKAGES ==--

[ INFO  ] Checking for product updates…

[ INFO  ] No product updates found

--== NETWORK CONFIGURATION ==--

Host fully qualified DNS name of this server [ovirt1.localdomain]:

[WARNING] Failed to resolve ovirt1.localdomain using DNS, it can be resolved only locally

          firewalld was detected on your computer. Do you wish Setup to configure it? (yes, no) [yes]:  no

         iptables firewall was detected on your computer. Do you wish Setup to configure it? (yes, no) [yes]:  yes

--== DATABASE CONFIGURATION ==--

Where is the database located? (Local, Remote) [Local]:

Setup can configure the local postgresql server automatically for the engine to run. This may conflict with existing applications.

Would you like Setup to automatically configure postgresql, or prefer to perform that manually? (Automatic, Manual) [Automatic]:

Using existing credentials

--== OVIRT ENGINE CONFIGURATION ==--

Engine admin password:

Confirm engine admin password:

Application mode (Both, Virt, Gluster) [Both]:

Default storage type: (NFS, FC, ISCSI, POSIXFS) [NFS]:

--== PKI CONFIGURATION ==--

Organization name for certificate [localdomain]:

--== APACHE CONFIGURATION ==--

Setup can configure the default page of the web server to present the application home page. This may conflict with existing applications.

Do you wish to set the application as the default page of the web server? (Yes, No) [Yes]:

Setup can configure apache to use SSL using a certificate issued from the internal CA.

Do you wish Setup to configure that, or prefer to perform that manually? (Automatic, Manual) [Automatic]:

--== SYSTEM CONFIGURATION ==--

Configure an NFS share on this server to be used as an ISO Domain? (Yes, No) [Yes]:

Local ISO domain path [/var/lib/exports/iso]:

Local ISO domain name [ISO_DOMAIN]:

Configure WebSocket Proxy on this machine? (Yes, No) [Yes]:

--== END OF CONFIGURATION ==--

[ INFO  ] Stage: Setup validation

[WARNING] Less than 16384MB of memory is available

--== CONFIGURATION PREVIEW ==--

Database name                      : engine

Database secured connection        : False

Database host                      : localhost

Database user name                 : engine

Database host name validation      : False

Datbase port                       : 5432

NFS setup                          : True

PKI organization                   : localdomain

NFS mount point                    : /var/lib/exports/iso

Application mode                   : both

Firewall manager                   : iptables

Configure WebSocket Proxy          : True

Host FQDN                          : ovirt1.localdomain

Datacenter storage type            : nfs

Configure local database           : True

Set application as default page    : True

Configure Apache SSL               : True

Configure VDSM on this host        : True

Local storage domain directory     : /var/lib/images

Please confirm installation settings (OK, Cancel) [OK]:

[ INFO  ] Stage: Transaction setup

[ INFO  ] Stopping engine service

[ INFO  ] Stopping websocket-proxy service

[ INFO  ] Stage: Misc configuration

[ INFO  ] Stage: Package installation

[ INFO  ] Stage: Misc configuration

[ INFO  ] Initializing PostgreSQL

[ INFO  ] Creating PostgreSQL database

[ INFO  ] Configurating PostgreSQL

[ INFO  ] Creating database schema

[ INFO  ] Creating CA

[ INFO  ] Configurating WebSocket Proxy

[ INFO  ] Generating post install configuration file '/etc/ovirt-engine-setup.conf.d/20-setup-ovirt-post.conf'

[ INFO  ] Stage: Transaction commit

[ INFO  ] Stage: Closing up

--== SUMMARY ==--

[WARNING] Less than 16384MB of memory is available

A default ISO NFS share has been created on this host.

If IP based access restrictions are required, edit:

entry /var/lib/exports/iso in /etc/exports.d/ovirt-engine-iso-domain.exports

SSH fingerprint: 90:16:09:69:8A:D8:43:C9:87:A7:CF:1A:A3:3B:71:44

Internal CA 5F:2E:12:99:32:55:07:11:C9:F9:AB:58:02:C9:A6:8E:16:91:CA:C1

Web access is enabled at:

http://ovirt1.localdomain:80/ovirt-engine

https://ovirt1.localdomain:443/ovirt-engine

Please use the user "admin" and password specified in order to login into oVirt Engine

--== END OF SUMMARY ==--

[ INFO  ] Starting engine service

[ INFO  ] Restarting httpd

[ INFO  ] Waiting for VDSM host to become operational. This may take several minutes…

[ INFO  ] The VDSM Host is now operational

[ INFO  ] Restarting nfs services

[ INFO  ] Generating answer file '/var/lib/ovirt-engine/setup/answers/20131112005106-setup.conf'

[ INFO  ] Stage: Clean up

Log file is located at /var/log/ovirt-engine/setup/ovirt-engine-setup-20131112004446.log

[ INFO  ] Stage: Pre-termination

[ INFO  ] Stage: Termination

[ INFO  ] Execution of setup completed successfully

Not sure it’s a must, but I’ve also updated /etc/sysconfig/iptables with:

-A INPUT -m state --state NEW -m tcp -p tcp --dport 24007 -j ACCEPT
-A INPUT -m state --state NEW -m tcp -p tcp --dport 24008 -j ACCEPT
-A INPUT -m state --state NEW -m tcp -p tcp --dport 24009 -j ACCEPT
-A INPUT -m state --state NEW -m tcp -p tcp --dport 24010 -j ACCEPT
-A INPUT -m state --state NEW -m tcp -p tcp --dport 24011 -j ACCEPT
-A INPUT -m state --state NEW -m tcp -p tcp --dport 38465:38469 -j ACCEPT
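These Gluster-related rules can also be generated in a loop instead of copied by hand. The sketch below only prints the rules to stdout; paste them into /etc/sysconfig/iptables above the final REJECT rule and restart the iptables service:

```shell
# Print ACCEPT rules for the Gluster management, brick, and Gluster-NFS
# port ranges listed above (ports as used by glusterfs 3.4 with a few bricks).
for p in 24007 24008 24009 24010 24011 38465:38469; do
    printf -- '-A INPUT -m state --state NEW -m tcp -p tcp --dport %s -j ACCEPT\n' "$p"
done
```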

Installing 3.3.1 doesn’t require ovirt-engine-websocket-proxy and looks like:

[root@ovirt1 ~]# engine-setup
[ INFO  ] Stage: Initializing
[ INFO  ] Stage: Environment setup
Configuration files: ['/etc/ovirt-engine-setup.conf.d/10-packaging-aio.conf', '/etc/ovirt-engine-setup.conf.d/10-packaging.conf']
Log file: /var/log/ovirt-engine/setup/ovirt-engine-setup-20131122143650.log
Version: otopi-1.1.2 (otopi-1.1.2-1.fc19)
[ INFO  ] Hardware supports virtualization
[ INFO  ] Stage: Environment packages setup
[ INFO  ] Stage: Programs detection
[ INFO  ] Stage: Environment setup
[ INFO  ] Stage: Environment customization

--== PACKAGES ==--

[ INFO  ] Checking for product updates…
[ INFO  ] No product updates found

--== ALL IN ONE CONFIGURATION ==--

Configure VDSM on this host? (Yes, No) [No]: Yes
Local storage domain path [/var/lib/images]:
Local storage domain name [local_storage]:

--== NETWORK CONFIGURATION ==--

Host fully qualified DNS name of this server [ovirt1.localdomain]:
[WARNING] Failed to resolve ovirt1.localdomain using DNS, it can be resolved only locally

--== DATABASE CONFIGURATION ==--

Where is the database located? (Local, Remote) [Local]:
Setup can configure the local postgresql server automatically for the engine to run. This may conflict with existing applications.
Would you like Setup to automatically configure postgresql, or prefer to perform that manually? (Automatic, Manual) [Automatic]:
Using existing credentials

--== OVIRT ENGINE CONFIGURATION ==--

Engine admin password:
Confirm engine admin password:
Application mode (Both, Virt, Gluster) [Both]:
Default storage type: (NFS, FC, ISCSI, POSIXFS) [NFS]:

--== PKI CONFIGURATION ==--

Organization name for certificate [localdomain]:

--== APACHE CONFIGURATION ==--

Setup can configure the default page of the web server to present the application home page. This may conflict with existing applications.
Do you wish to set the application as the default page of the web server? (Yes, No) [Yes]:
Setup can configure apache to use SSL using a certificate issued from the internal CA.
Do you wish Setup to configure that, or prefer to perform that manually? (Automatic, Manual) [Automatic]:

--== SYSTEM CONFIGURATION ==--

Configure an NFS share on this server to be used as an ISO Domain? (Yes, No) [Yes]:
Local ISO domain path [/var/lib/exports/iso]:
Local ISO domain name [ISO_DOMAIN]:
Configure WebSocket Proxy on this machine? (Yes, No) [Yes]:

--== END OF CONFIGURATION ==--

[ INFO  ] Stage: Setup validation
[WARNING] Less than 16384MB of memory is available

--== CONFIGURATION PREVIEW ==--

Database name                      : engine
Database secured connection        : False
Database host                      : localhost
Database user name                 : engine
Database host name validation      : False
Datbase port                       : 5432
NFS setup                          : True
PKI organization                   : localdomain
NFS mount point                    : /var/lib/exports/iso
Application mode                   : both
Configure WebSocket Proxy          : True
Host FQDN                          : ovirt1.localdomain
Datacenter storage type            : nfs
Configure local database           : True
Set application as default page    : True
Configure Apache SSL               : True
Configure VDSM on this host        : True
Local storage domain directory     : /var/lib/images

Please confirm installation settings (OK, Cancel) [OK]:

[ INFO  ] Stage: Transaction setup
[ INFO  ] Stopping engine service
[ INFO  ] Stopping websocket-proxy service
[ INFO  ] Stage: Misc configuration
[ INFO  ] Stage: Package installation
[ INFO  ] Stage: Misc configuration
[ INFO  ] Initializing PostgreSQL
[ INFO  ] Creating PostgreSQL database
[ INFO  ] Configurating PostgreSQL
[ INFO  ] Creating database schema
[ INFO  ] Creating CA
[ INFO  ] Configurating WebSocket Proxy
[ INFO  ] Generating post install configuration file '/etc/ovirt-engine-setup.conf.d/20-setup-ovirt-post.conf'
[ INFO  ] Stage: Transaction commit
[ INFO  ] Stage: Closing up

--== SUMMARY ==--

[WARNING] Less than 16384MB of memory is available
An ISO NFS share has been created on this host.
If IP based access restrictions are required, edit:
entry /var/lib/exports/iso in /etc/exports.d/ovirt-engine-iso-domain.exports
SSH fingerprint: DB:C5:99:16:0D:67:4B:F5:62:99:B2:D3:E2:C7:7F:59
Internal CA 93:BB:05:42:C6:6F:00:28:A1:F1:90:C5:3E:E3:91:D6:1F:1B:17:3D
The following network ports should be opened:
tcp:111
tcp:2049
tcp:32803
tcp:443
tcp:49152-49216
tcp:5432
tcp:5634-6166
tcp:6100
tcp:662
tcp:80
tcp:875
tcp:892
udp:111
udp:32769
udp:662
udp:875
udp:892
An example of the required configuration for iptables can be found at:
/etc/ovirt-engine/iptables.example
In order to configure firewalld, copy the files from
/etc/ovirt-engine/firewalld to /etc/firewalld/services
and execute the following commands:
firewall-cmd -service ovirt-postgres
firewall-cmd -service ovirt-https
firewall-cmd -service ovirt-aio
firewall-cmd -service ovirt-websocket-proxy
firewall-cmd -service ovirt-nfs
firewall-cmd -service ovirt-http
Web access is enabled at:
http://ovirt1.localdomain:80/ovirt-engine
https://ovirt1.localdomain:443/ovirt-engine
Please use the user "admin" and password specified in order to login into oVirt Engine

--== END OF SUMMARY ==--

[ INFO  ] Starting engine service
[ INFO  ] Restarting httpd
[ INFO  ] Waiting for VDSM host to become operational. This may take several minutes…
[ INFO  ] The VDSM Host is now operational
[ INFO  ] Restarting nfs services
[ INFO  ] Generating answer file '/var/lib/ovirt-engine/setup/answers/20131122144055-setup.conf'
[ INFO  ] Stage: Clean up
Log file is located at /var/log/ovirt-engine/setup/ovirt-engine-setup-20131122143650.log
[ INFO  ] Stage: Pre-termination
[ INFO  ] Stage: Termination
[ INFO  ] Execution of setup completed successfully

Updated /etc/sysconfig/iptables with:

-A INPUT -m state --state NEW -m tcp -p tcp --dport 24007 -j ACCEPT
-A INPUT -m state --state NEW -m tcp -p tcp --dport 24008 -j ACCEPT
-A INPUT -m state --state NEW -m tcp -p tcp --dport 24009 -j ACCEPT
-A INPUT -m state --state NEW -m tcp -p tcp --dport 24010 -j ACCEPT
-A INPUT -m state --state NEW -m tcp -p tcp --dport 24011 -j ACCEPT
-A INPUT -m state --state NEW -m tcp -p tcp --dport 38465:38469 -j ACCEPT

VMs running on different hosts of the two-node cluster, started via the web console:

[root@ovirt1 ~]# service libvirtd status

Redirecting to /bin/systemctl status  libvirtd.service

libvirtd.service – Virtualization daemon

Loaded: loaded (/usr/lib/systemd/system/libvirtd.service; enabled)

Active: active (running) since Fri 2013-11-22 10:31:07 VOLT; 54min ago

Main PID: 1131 (libvirtd)

CGroup: name=systemd:/system/libvirtd.service

├─1131 /usr/sbin/libvirtd --listen

└─8606 /usr/bin/qemu-system-x86_64 -machine accel=kvm -name UbuntuSalamander -S -machine pc-1.0,accel=kvm,usb=of…

Nov 22 10:31:07 ovirt1.localdomain libvirtd[1131]: 2013-11-22 06:31:07.778+0000: 1131: info : libvirt version: 1.0.5.7….org)

Nov 22 10:31:07 ovirt1.localdomain libvirtd[1131]: 2013-11-22 06:31:07.778+0000: 1131: debug : virLogParseOutputs:1331…d.log

[root@ovirt1 ~]# ssh ovirt2

Last login: Fri Nov 22 10:45:26 2013

[root@ovirt2 ~]# service libvirtd  status

Redirecting to /bin/systemctl status  libvirtd.service

libvirtd.service – Virtualization daemon

Loaded: loaded (/usr/lib/systemd/system/libvirtd.service; enabled)

Active: active (running) since Fri 2013-11-22 10:44:47 VOLT; 41min ago

Main PID: 1019 (libvirtd)

CGroup: name=systemd:/system/libvirtd.service

├─1019 /usr/sbin/libvirtd --listen

└─2776 /usr/bin/qemu-system-x86_64 -machine accel=kvm -name VF19NW -S -machine pc-1.0,accel=kvm,usb=off -cpu Pen…

Nov 22 10:44:48 ovirt2.localdomain libvirtd[1019]: 2013-11-22 06:44:48.317+0000: 1019: info : libvirt version: 1.0.5.7….org)

Nov 22 10:44:48 ovirt2.localdomain libvirtd[1019]: 2013-11-22 06:44:48.317+0000: 1019: debug : virLogParseOutputs:1331…d.log

 

Virtual machines using replicated glusterfs 3.4.1 volumes

Add the new host via the web console. Make sure that on the new host you previously ran:

$ sudo yum localinstall http://ovirt.org/releases/ovirt-release-fedora.noarch.rpm -y

otherwise it stays incompatible with oVirt 3.3 (3.2 at maximum).

Set up the ovirtmgmt bridge, disable firewalld, and enable the iptables firewall manager.

On server ovirt1, run the following commands before adding the new host ovirt2:

# ssh-keygen (Hit Enter to accept all of the defaults)
# ssh-copy-id -i ~/.ssh/id_rsa.pub root@ovirt2
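A non-interactive version of the two commands above; `HOST` is a placeholder for the new host. Since ssh-copy-id needs the host to be reachable, it is only printed here:

```shell
# Create an RSA key pair with an empty passphrase if one doesn't exist yet,
# then show the ssh-copy-id command to run against the new host.
HOST=root@ovirt2
mkdir -p ~/.ssh && chmod 700 ~/.ssh
[ -f ~/.ssh/id_rsa ] || ssh-keygen -t rsa -N '' -f ~/.ssh/id_rsa -q
echo "now run: ssh-copy-id -i ~/.ssh/id_rsa.pub $HOST"
```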

Even with the downgraded apache-sshd you might still constantly get “Unexpected connection interruption”; in this case run engine-setup one more time on the master server. In my experience it helped several times (3.3.1 on F19). oVirt 3.3.0.1 never required this last hack.

Version 3.3.1 allows creating Gluster volumes via the GUI, automatically configuring the required options for volumes created through the graphical environment.

Regarding the design of GlusterFS volumes for a production environment, see https://forge.gluster.org/hadoop/pages/InstallingAndConfiguringGlusterFS

Double-check via the command line:

 # gluster volume info

Volume Name: ovirt-data02
Type: Replicate
Volume ID: b1cf98c9-5525-48d4-9fb0-bde47d7a98b0
Status: Started
Number of Bricks: 1 x 2 = 2
Transport-type: tcp
Bricks:
Brick1: ovirt1.localdomain:/home/boris/node-replicate
Brick2: 192.168.1.127:/home/boris/node-replicate
Options Reconfigured:
storage.owner-gid: 36
storage.owner-uid: 36
cluster.server-quorum-type: server
cluster.quorum-type: auto
network.remote-dio: enable
cluster.eager-lock: enable
performance.stat-prefetch: off
performance.io-cache: off
performance.read-ahead: off
performance.quick-read: off
auth.allow: *
user.cifs: enable
nfs.disable: off
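Applying that option set to a fresh volume one `gluster volume set` at a time is tedious. A sketch that merely prints the needed commands (the volume name `myvol` is a placeholder; review the output, then pipe it to `sh` on a node where gluster is installed):

```shell
# Emit "gluster volume set" commands for the virt-friendly option set
# shown above. VOL is a placeholder volume name.
VOL=myvol
while read -r key val; do
    echo "gluster volume set $VOL $key $val"
done <<'EOF'
storage.owner-uid 36
storage.owner-gid 36
cluster.server-quorum-type server
cluster.quorum-type auto
network.remote-dio enable
cluster.eager-lock enable
performance.stat-prefetch off
performance.io-cache off
performance.read-ahead off
performance.quick-read off
EOF
```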

Creating an XFS-based replicated Gluster volume via oVirt 3.3.1, per https://forge.gluster.org/hadoop/pages/InstallingAndConfiguringGlusterFS
 
 [root@ovirt1 ~]# gluster volume info ovirt-data05
Volume Name: ovirt-data05
Type: Replicate
Volume ID: ff0955b6-668a-4eab-acf0-606456ee0005
Status: Started
Number of Bricks: 1 x 2 = 2
Transport-type: tcp
Bricks:
Brick1: ovirt1.localdomain:/mnt/brick1/node-replicate
Brick2: 192.168.1.127:/mnt/brick1/node-replicate
Options Reconfigured:
nfs.disable: off
user.cifs: enable
auth.allow: *
storage.owner-uid: 36
storage.owner-gid: 36
 
[root@ovirt1 ~]# mount | grep xfs
selinuxfs on /sys/fs/selinux type selinuxfs (rw,relatime)
/dev/sda3 on /mnt/brick1 type xfs (rw,noatime,seclabel,attr2,inode64,noquota)
 
[root@ovirt1 ~]# df -h
Filesystem                               Size  Used Avail Use% Mounted on
/dev/mapper/fedora00-root                145G   26G  112G  19% /
devtmpfs                                 3.9G     0  3.9G   0% /dev
tmpfs                                    3.9G  2.2M  3.9G   1% /dev/shm
tmpfs                                    3.9G 1004K  3.9G   1% /run
tmpfs                                    3.9G     0  3.9G   0% /sys/fs/cgroup
tmpfs                                    3.9G   76K  3.9G   1% /tmp
/dev/sda1                                477M  105M  344M  24% /boot
/dev/sda3                                 98G   19G   80G  19% /mnt/brick1
ovirt1.localdomain:ovirt-data05           98G   19G   80G  19% /rhev/data-center/mnt/glusterSD/ovirt1.localdomain:ovirt-data05
ovirt1.localdomain:/var/lib/exports/iso  145G   26G  112G  19% /rhev/data-center/mnt/ovirt1.localdomain:_var_lib_exports_iso
192.168.1.137:/var/lib/exports/export    145G   26G  112G  19% /rhev/data-center/mnt/192.168.1.137:_var_lib_exports_export
ovirt1.localdomain:ovirt-data02          145G   26G  112G  19% /rhev/data-center/mnt/glusterSD/ovirt1.localdomain:ovirt-data02

 

 

 

Creating a GlusterFS 3.4.1 cluster with ovirt1 and ovirt2 via the CLI (3.3.0):

[root@ovirt1 ~]# gluster peer status

Number of Peers: 1

Hostname: ovirt2

Uuid: 8355d741-fc2d-4484-b6e3-ca0ef99658c1

State: Peer in Cluster (Connected)

[root@ovirt1 ~]# ssh ovirt2

Last login: Sat Nov 16 10:23:11 2013 from ovirt1.localdomain

[root@ovirt2 ~]# gluster peer status

Number of Peers: 1

Hostname: 192.168.1.120

Uuid: 3d00042b-4e44-4680-98f7-98b814354001

State: Peer in Cluster (Connected)

Then create a replicated volume visible in the web console, make a GlusterFS storage domain based on this volume, and convert it into Data (Master):

[root@ovirt1 ~]# gluster volume create data02-share replica 2 \
ovirt1:/GLSD/node-replicate ovirt2:/GLSD/node-replicate

volume create: data02-share: success: please start the volume to access data

Follow carefully http://community.redhat.com/ovirt-3-3-glusterized/ regarding:

1. Editing /etc/glusterfs/glusterd.vol to add the line

"option rpc-auth-allow-insecure on"

2. Running gluster volume set data02-share server.allow-insecure on

before starting the volume; otherwise you won’t be able to start VMs.
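The glusterd.vol edit from step 1 can be scripted with sed. The sketch below demonstrates the edit on a scratch copy in /tmp; the management-volume content shown is a trimmed, hypothetical example, and on a real node you would point sed at /etc/glusterfs/glusterd.vol instead:

```shell
# Write a minimal, hypothetical glusterd.vol to a scratch file.
cat > /tmp/glusterd.vol <<'EOF'
volume management
    type mgmt/glusterd
    option working-directory /var/lib/glusterd
end-volume
EOF

# Insert the allow-insecure option just before the closing end-volume.
sed -i '/^end-volume/i \    option rpc-auth-allow-insecure on' /tmp/glusterd.vol

# Show the result of the edit.
grep 'rpc-auth-allow-insecure' /tmp/glusterd.vol
```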

Then set the right permissions for the manually created volume:

[root@ovirt1 ~]#  gluster volume set  data02-share  storage.owner-uid 36
[root@ovirt1 ~]#  gluster volume  set data02-share  storage.owner-gid 36

[root@ovirt1 ~]# gluster volume set data02-share quick-read off

volume set: success

[root@ovirt1 ~]# gluster volume set data02-share cluster.eager-lock on

volume set: success

[root@ovirt1 ~]# gluster volume set data02-share performance.stat-prefetch off

volume set: success

[root@ovirt1 ~]# gluster volume info

Volume Name: data02-share

Type: Replicate

Volume ID: 282545cd-583b-4211-a0f4-22eea4142953

Status: Started

Number of Bricks: 1 x 2 = 2

Transport-type: tcp

Bricks:

Brick1: ovirt1:/GLSD/node-replicate

Brick2: ovirt2:/GLSD/node-replicate

Options Reconfigured:

performance.stat-prefetch: off

cluster.eager-lock: on

performance.quick-read: off

storage.owner-uid: 36

storage.owner-gid: 36

server.allow-insecure: on

[root@ovirt1 a16d3f36-1a40-4867-9ecb-bbae78189c03]# ls -la

total 5651976

drwxr-xr-x. 2 vdsm kvm       4096 Nov 16 02:01 .

drwxr-xr-x. 3 vdsm kvm       4096 Nov 16 02:01 ..

-rw-rw—-. 2 vdsm kvm 9663676416 Nov 16 10:44 b6fc0ebd-1e49-4056-b7b0-2f8167867e5f

-rw-rw—-. 2 vdsm kvm    1048576 Nov 16 02:01 b6fc0ebd-1e49-4056-b7b0-2f8167867e5f.lease

-rw-r–r–. 2 vdsm kvm        268 Nov 16 02:01 b6fc0ebd-1e49-4056-b7b0-2f8167867e5f.meta

[root@ovirt1 a16d3f36-1a40-4867-9ecb-bbae78189c03]# ssh ovirt2

Last login: Sat Nov 16 10:26:05 2013 from ovirt1.localdomain

[root@ovirt2 ~]# cd /GLSD/node-replicate/12c1221b-c500-4d21-87ac-1cdd0e0d5269/images/a16d3f36-1a40-4867-9ecb-bbae78189c03

[root@ovirt2 a16d3f36-1a40-4867-9ecb-bbae78189c03]# ls -la

total 5043492

drwxr-xr-x. 2 vdsm kvm       4096 Nov 16 02:01 .

drwxr-xr-x. 3 vdsm kvm       4096 Nov 16 02:01 ..

-rw-rw—-. 2 vdsm kvm 9663676416 Nov 16 10:44 b6fc0ebd-1e49-4056-b7b0-2f8167867e5f

-rw-rw—-. 2 vdsm kvm    1048576 Nov 16 02:01 b6fc0ebd-1e49-4056-b7b0-2f8167867e5f.lease

-rw-r–r–. 2 vdsm kvm        268 Nov 16 02:01 b6fc0ebd-1e49-4056-b7b0-2f8167867e5f.meta

[root@ovirt2 a16d3f36-1a40-4867-9ecb-bbae78189c03]# ls -la

total 5065892

drwxr-xr-x. 2 vdsm kvm       4096 Nov 16 02:01 .

drwxr-xr-x. 3 vdsm kvm       4096 Nov 16 02:01 ..

-rw-rw—-. 2 vdsm kvm 9663676416 Nov 16 10:45 b6fc0ebd-1e49-4056-b7b0-2f8167867e5f

-rw-rw—-. 2 vdsm kvm    1048576 Nov 16 02:01 b6fc0ebd-1e49-4056-b7b0-2f8167867e5f.lease

-rw-r–r–. 2 vdsm kvm        268 Nov 16 02:01 b6fc0ebd-1e49-4056-b7b0-2f8167867e5f.meta

[root@ovirt2 a16d3f36-1a40-4867-9ecb-bbae78189c03]# ls -la

total 5295140

drwxr-xr-x. 2 vdsm kvm       4096 Nov 16 02:01 .

drwxr-xr-x. 3 vdsm kvm       4096 Nov 16 02:01 ..

-rw-rw—-. 2 vdsm kvm 9663676416 Nov 16 10:47 b6fc0ebd-1e49-4056-b7b0-2f8167867e5f

-rw-rw—-. 2 vdsm kvm    1048576 Nov 16 02:01 b6fc0ebd-1e49-4056-b7b0-2f8167867e5f.lease

-rw-r–r–. 2 vdsm kvm        268 Nov 16 02:01 b6fc0ebd-1e49-4056-b7b0-2f8167867e5f.meta

The filesystem layout looks like:

[root@ovirt1 a16d3f36-1a40-4867-9ecb-bbae78189c03]# df -h

Filesystem                               Size  Used Avail Use% Mounted on

/dev/mapper/fedora00-root                145G   24G  113G  18% /

devtmpfs                                 3.9G     0  3.9G   0% /dev

tmpfs                                    3.9G  100K  3.9G   1% /dev/shm

tmpfs                                    3.9G  1.1M  3.9G   1% /run

tmpfs                                    3.9G     0  3.9G   0% /sys/fs/cgroup

tmpfs                                    3.9G   76K  3.9G   1% /tmp

/dev/sdb3                                477M   87M  362M  20% /boot

ovirt1.localdomain:data02-share          125G   10G  109G   9% /rhev/data-center/mnt/glusterSD/ovirt1.localdomain:data02-share

ovirt1.localdomain:/var/lib/exports/iso  145G   24G  113G  18% /rhev/data-center/mnt/ovirt1.localdomain:_var_lib_exports_iso

192.168.1.120:/var/lib/exports/export    145G   24G  113G  18% /rhev/data-center/mnt/192.168.1.120:_var_lib_exports_export

Setting up Ubuntu Salamander Server KVM  via oVirt 3.3 on F19

Hidden issues 

To make the environment stable, the Storage Pool Manager was moved to ovirt2.localdomain. In this case, NFS mount requests from ovirt2 are satisfied successfully.

Detailed filesystem layout on ovirt1 and ovirt2:

[root@ovirt1 ~]# df -h
Filesystem                               Size  Used Avail Use% Mounted on
/dev/mapper/fedora00-root                145G   31G  107G  23% /
devtmpfs                                 3.9G     0  3.9G   0% /dev
tmpfs                                    3.9G  104K  3.9G   1% /dev/shm
tmpfs                                    3.9G  1.1M  3.9G   1% /run
tmpfs                                    3.9G     0  3.9G   0% /sys/fs/cgroup
tmpfs                                    3.9G   80K  3.9G   1% /tmp
/dev/sdb3                                477M   87M  362M  20% /boot
ovirt1.localdomain:data02-share          125G   16G  104G  13% /rhev/data-center/mnt/glusterSD/ovirt1.localdomain:data02-share
ovirt1.localdomain:/var/lib/exports/iso  145G   31G  107G  23% /rhev/data-center/mnt/ovirt1.localdomain:_var_lib_exports_iso
192.168.1.120:/var/lib/exports/export    145G   31G  107G  23% /rhev/data-center/mnt/192.168.1.120:_var_lib_exports_export

[root@ovirt1 ~]# ssh ovirt2

Last login: Sun Nov 17 15:04:29 2013 from ovirt1.localdomain

[root@ovirt2 ~]# ifconfig

lo: flags=73  mtu 65536
inet 127.0.0.1  netmask 255.0.0.0
inet6 ::1  prefixlen 128  scopeid 0x10
loop  txqueuelen 0  (Local Loopback)
RX packets 17083  bytes 95312048 (90.8 MiB)
RX errors 0  dropped 0  overruns 0  frame 0
TX packets 17083  bytes 95312048 (90.8 MiB)
TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0

ovirtmgmt: flags=4163  mtu 1500
inet 192.168.1.130  netmask 255.255.255.0  broadcast 192.168.1.255
inet6 fe80::222:15ff:fe63:e4e2  prefixlen 64  scopeid 0x20
ether 00:22:15:63:e4:e2  txqueuelen 0  (Ethernet)
RX packets 1876878  bytes 451006322 (430.1 MiB)
RX errors 0  dropped 0  overruns 0  frame 0
TX packets 2049680  bytes 218222806 (208.1 MiB)
TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0

p37p1: flags=4163  mtu 1500
inet6 fe80::222:15ff:fe63:e4e2  prefixlen 64  scopeid 0x20
ether 00:22:15:63:e4:e2  txqueuelen 1000  (Ethernet)
RX packets 1877201  bytes 477310768 (455.1 MiB)
RX errors 0  dropped 0  overruns 0  frame 0
TX packets 2049698  bytes 218224910 (208.1 MiB)
TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0
device interrupt 17

[root@ovirt2 ~]# df -h
Filesystem                               Size  Used Avail Use% Mounted on
/dev/mapper/fedora02-root                125G   16G  104G  13% /
devtmpfs                                 3.9G     0  3.9G   0% /dev
tmpfs                                    3.9G   92K  3.9G   1% /dev/shm
tmpfs                                    3.9G  984K  3.9G   1% /run
tmpfs                                    3.9G     0  3.9G   0% /sys/fs/cgroup
tmpfs                                    3.9G   44K  3.9G   1% /tmp
/dev/sda3                                477M   87M  362M  20% /boot
ovirt1.localdomain:data02-share          125G   16G  104G  13% /rhev/data-center/mnt/glusterSD/ovirt1.localdomain:data02-share
ovirt1.localdomain:/var/lib/exports/iso  145G   31G  107G  23% /rhev/data-center/mnt/ovirt1.localdomain:_var_lib_exports_iso
192.168.1.120:/var/lib/exports/export    145G   31G  107G  23% /rhev/data-center/mnt/192.168.1.120:_var_lib_exports_export

Spice vs VNC
