oVirt 3.3.2 hackery on Fedora 19

December 21, 2013

My goal was to build a two-node oVirt 3.3.2 cluster and run virtual machines on replicated GlusterFS 3.4.1 volumes backed by XFS-formatted partitions. Tuning the cluster environment with an IPv4 iptables firewall is my personal preference. I also learned along the way that PostgreSQL needs a sufficiently large shared memory allocation, much like Informix or Oracle (I was an Informix DBA at Verizon for about five years; it was a nice time).

   oVirt is an open source alternative to VMware vSphere, and provides an awesome KVM management interface for multi-node virtualization.

A clean install of oVirt 3.3.2 was performed as follows:

1. Created ovirtmgmt bridge

[root@ovirt1 network-scripts]# cat ifcfg-ovirtmgmt

DEVICE=ovirtmgmt
TYPE=Bridge
ONBOOT=yes
DELAY=0
BOOTPROTO=static
IPADDR=192.168.1.142
NETMASK=255.255.255.0
GATEWAY=192.168.1.1
DNS1=83.221.202.254
NM_CONTROLLED="no"

The bridged interface, in particular, on my box:

 [root@ovirt1 network-scripts]# cat ifcfg-enp2s0

BOOTPROTO=none
TYPE="Ethernet"
ONBOOT="yes"
NAME="enp2s0"
BRIDGE="ovirtmgmt"
HWADDR=00:22:15:63:e4:e2

2. Applied the fix for the NFS server bug: https://bugzilla.redhat.com/show_bug.cgi?id=970595

3. Set up IPv4 firewall with iptables

4. Disabled NetworkManager and enabled network service 

5. To be able to perform the current 3.3.2 install on F19, increased kernel.shmmax per

http://postgresql.1045698.n5.nabble.com/How-to-install-latest-stable-postgresql-on-Debian-td5005417.html

# sysctl -w kernel.shmmax=419430400
kernel.shmmax = 419430400
# sysctl -n kernel.shmmax
419430400 
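Note that sysctl -w only changes the running kernel; to keep the larger limit across reboots, the value can be persisted in a drop-in file. The file name below is my own choice following the usual /etc/sysctl.d convention, not something engine-setup creates:

```shell
# Persist the larger shmmax across reboots (run as root).
cat > /etc/sysctl.d/99-postgresql-shmmax.conf <<'EOF'
# PostgreSQL on the oVirt engine host needs more SysV shared memory (BZ 1039616)
kernel.shmmax = 419430400
EOF
# Load the new setting from the drop-in file immediately.
sysctl -p /etc/sysctl.d/99-postgresql-shmmax.conf
```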

This appears to be a known issue (http://www.ovirt.org/OVirt_3.3.2_release_notes): "On Fedora 19 with recent versions of PostgreSQL it may be necessary to manually change kernel.shmmax settings" (BZ 1039616).

Otherwise, setup fails at the Misc Configuration stage, and systemctl status postgresql.service reports a server crash during setup. The runtime shared memory mapping:

[root@ovirt1 ~]# systemctl list-units | grep postgres
postgresql.service          loaded active running   PostgreSQL database server

[root@ovirt1 ~]# ipcs -a

------ Message Queues --------
key        msqid      owner      perms      used-bytes   messages

------ Shared Memory Segments --------
key        shmid      owner      perms      bytes      nattch     status
0x00000000 0          root       644        80         2
0x00000000 32769      root       644        16384      2
0x00000000 65538      root       644        280        2
0x00000000 163843     boris      600        4194304    2          dest
0x0052e2c1 360452     postgres   600        43753472   8
0x00000000 294917     boris      600        2097152    2          dest
0x0112e4a1 393222     root       600        1000       11
0x00000000 425991     boris      600        393216     2          dest
0x00000000 557065     boris      600        1048576    2          dest

------ Semaphore Arrays --------
key        semid      owner      perms      nsems
0x000000a7 65536      root       600        1
0x0052e2c1 458753     postgres   600        17
0x0052e2c2 491522     postgres   600        17
0x0052e2c3 524291     postgres   600        17
0x0052e2c4 557060     postgres   600        17
0x0052e2c5 589829     postgres   600        17
0x0052e2c6 622598     postgres   600        17
0x0052e2c7 655367     postgres   600        17
0x0052e2c8 688136     postgres   600        17
0x0052e2c9 720905     postgres   600        17
0x0052e2ca 753674     postgres   600        17

After creating the replicated gluster volume ovirt-data02 via the Web Admin console, I manually ran:

gluster volume set ovirt-data02 auth.allow 192.168.1.* ;
gluster volume set ovirt-data02 group virt  ;
gluster volume set ovirt-data02 cluster.quorum-type auto ;
gluster volume set ovirt-data02 performance.cache-size 1GB ;
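For repeatability, the same four options can be applied from a small loop. This is only a sketch of the commands above, written as a dry run that prints the command lines; on a real gluster node you would execute them instead:

```shell
# Dry-run sketch: build the four "gluster volume set" commands from above.
# Printing instead of executing; run the printed lines on a gluster node.
set -f                        # keep 192.168.1.* literal (no glob expansion)
VOL=ovirt-data02
CMDS=""
for opt in "auth.allow 192.168.1.*" "group virt" \
           "cluster.quorum-type auto" "performance.cache-size 1GB"; do
  CMDS="${CMDS}gluster volume set $VOL $opt"$'\n'
done
printf '%s' "$CMDS"
```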

Currently apache-sshd is 0.9.0-3 (see https://bugzilla.redhat.com/show_bug.cgi?id=1021273).

Adding a new host works fine; /etc/sysconfig/iptables on the master server just needs:
-A INPUT -p tcp -m multiport --dports 24007:24108 -j ACCEPT
-A INPUT -p tcp --dport 111 -j ACCEPT
-A INPUT -p udp --dport 111 -j ACCEPT
-A INPUT -p tcp -m multiport --dports 38465:38485 -j ACCEPT

Personally, I hit one issue during second-host deployment: a "service vdsmd restart" on the second host was required so the system could bring it up at the end of installation. Both installs behaved exactly the same.

[root@hv02 ~]# service vdsmd status
Redirecting to /bin/systemctl status  vdsmd.service
vdsmd.service - Virtual Desktop Server Manager
Loaded: loaded (/usr/lib/systemd/system/vdsmd.service; enabled)
Active: active (running) since Tue 2013-12-24 15:40:40 MSK; 50s ago
Process: 2896 ExecStartPre=/usr/libexec/vdsm/vdsmd_init_common.sh --pre-start (code=exited, status=0/SUCCESS)

Main PID: 3166 (vdsm)
CGroup: name=systemd:/system/vdsmd.service
└─3166 /usr/bin/python /usr/share/vdsm/vdsm

Dec 24 15:40:41 hv02.localdomain python[3192]: detected unhandled Python exception in '/usr/bin/vdsm-tool'
Dec 24 15:40:41 hv02.localdomain vdsm[3166]: [427B blob data]
Dec 24 15:40:41 hv02.localdomain vdsm[3166]: vdsm vds WARNING Unable to load the json rpc server module. Ple…led.
Dec 24 15:40:41 hv02.localdomain python[3166]: DIGEST-MD5 client step 2
Dec 24 15:40:41 hv02.localdomain python[3166]: DIGEST-MD5 parse_server_challenge()
Dec 24 15:40:41 hv02.localdomain python[3166]: DIGEST-MD5 ask_user_info()
Dec 24 15:40:41 hv02.localdomain python[3166]: DIGEST-MD5 client step 2
Dec 24 15:40:41 hv02.localdomain python[3166]: DIGEST-MD5 ask_user_info()
Dec 24 15:40:41 hv02.localdomain python[3166]: DIGEST-MD5 make_client_response()
Dec 24 15:40:41 hv02.localdomain python[3166]: DIGEST-MD5 client step 3

[root@hv02 ~]# service vdsmd restart
Redirecting to /bin/systemctl restart  vdsmd.service

[root@hv02 ~]# service vdsmd status
Redirecting to /bin/systemctl status  vdsmd.service
vdsmd.service - Virtual Desktop Server Manager
Loaded: loaded (/usr/lib/systemd/system/vdsmd.service; enabled)
Active: active (running) since Tue 2013-12-24 15:41:42 MSK; 2s ago
Process: 3355 ExecStopPost=/usr/libexec/vdsm/vdsmd_init_common.sh --post-stop (code=exited, status=0/SUCCESS)
Process: 3358 ExecStartPre=/usr/libexec/vdsm/vdsmd_init_common.sh --pre-start (code=exited, status=0/SUCCESS)

Main PID: 3418 (vdsm)
CGroup: name=systemd:/system/vdsmd.service
└─3418 /usr/bin/python /usr/share/vdsm/vdsm

Dec 24 15:41:42 hv02.localdomain vdsmd_init_common.sh[3358]: vdsm: Running test_conflicting_conf
Dec 24 15:41:42 hv02.localdomain vdsmd_init_common.sh[3358]: SUCCESS: ssl configured to true. No conflicts
Dec 24 15:41:42 hv02.localdomain systemd[1]: Started Virtual Desktop Server Manager.
Dec 24 15:41:43 hv02.localdomain vdsm[3418]: vdsm vds WARNING Unable to load the json rpc server module. Ple…led.
Dec 24 15:41:43 hv02.localdomain python[3418]: DIGEST-MD5 client step 2
Dec 24 15:41:43 hv02.localdomain python[3418]: DIGEST-MD5 parse_server_challenge()
Dec 24 15:41:43 hv02.localdomain python[3418]: DIGEST-MD5 ask_user_info()
Dec 24 15:41:43 hv02.localdomain python[3418]: DIGEST-MD5 client step 2
Dec 24 15:41:43 hv02.localdomain python[3418]: DIGEST-MD5 ask_user_info()
Dec 24 15:41:43 hv02.localdomain python[3418]: DIGEST-MD5 make_client_response()
Dec 24 15:41:43 hv02.localdomain python[3418]: DIGEST-MD5 client step 3
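
This restart-until-running workaround can be wrapped in a small helper. retry_until_ok below is my own hypothetical function, not part of vdsm or oVirt; it simply retries a command until it succeeds or the attempts run out:

```shell
# Hypothetical helper: retry a command until it exits 0, up to N attempts.
retry_until_ok() {
  local tries=$1; shift
  local i
  for ((i = 1; i <= tries; i++)); do
    "$@" && return 0      # command succeeded
    sleep 1               # brief pause before the next attempt
  done
  return 1                # gave up
}

# Intended use on the stuck host (needs root, so shown commented out):
# service vdsmd restart
# retry_until_ok 30 systemctl is-active --quiet vdsmd.service
```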

Moreover, if the same report comes up during the core install on the first server, while waiting for the host to become VDSM operational, the install hangs for a while and finally fails to bring up the master server. The workaround is the same. Once again, this is just my personal experience; it is a random error during the core "all in one" install.
[root@ovirt1 ~]# iptables -L
Chain INPUT (policy ACCEPT)
target     prot opt source               destination
ACCEPT     all  --  anywhere             anywhere
ACCEPT     icmp --  anywhere             anywhere             icmp any
ACCEPT     all  --  anywhere             anywhere             state RELATED,ESTABLISHED
ACCEPT     tcp  --  anywhere             anywhere             state NEW tcp dpt:ssh
ACCEPT     tcp  --  anywhere             anywhere             state NEW tcp dpt:postgres
ACCEPT     tcp  --  anywhere             anywhere             state NEW tcp dpt:https
ACCEPT     tcp  --  anywhere             anywhere             state NEW tcp dpts:xprtld:6166
ACCEPT     tcp  --  anywhere             anywhere             state NEW tcp dpts:49152:49216
ACCEPT     tcp  --  anywhere             anywhere             state NEW tcp dpt:synchronet-db
ACCEPT     tcp  --  anywhere             anywhere             state NEW tcp dpt:sunrpc
ACCEPT     udp  --  anywhere             anywhere             state NEW udp dpt:sunrpc
ACCEPT     tcp  --  anywhere             anywhere             state NEW tcp dpt:pftp
ACCEPT     udp  --  anywhere             anywhere             state NEW udp dpt:pftp
ACCEPT     tcp  --  anywhere             anywhere             state NEW tcp dpt:rquotad
ACCEPT     udp  --  anywhere             anywhere             state NEW udp dpt:rquotad
ACCEPT     tcp  --  anywhere             anywhere             state NEW tcp dpt:892
ACCEPT     udp  --  anywhere             anywhere             state NEW udp dpt:892
ACCEPT     tcp  --  anywhere             anywhere             state NEW tcp dpt:nfs
ACCEPT     udp  --  anywhere             anywhere             state NEW udp dpt:filenet-rpc
ACCEPT     tcp  --  anywhere             anywhere             state NEW tcp dpt:32803
ACCEPT     tcp  --  anywhere             anywhere             state NEW tcp dpt:http
ACCEPT     tcp  --  anywhere             anywhere             multiport dports 24007:24108
ACCEPT     tcp  --  anywhere             anywhere             tcp dpt:sunrpc
ACCEPT     udp  --  anywhere             anywhere             udp dpt:sunrpc
ACCEPT     tcp  --  anywhere             anywhere             multiport dports 38465:38485
REJECT     all  --  anywhere             anywhere             reject-with icmp-host-prohibited
Chain FORWARD (policy ACCEPT)
target     prot opt source               destination
Chain OUTPUT (policy ACCEPT)
target     prot opt source               destination

[root@ovirt1 ~]# ssh ovirt2

Last login: Sat Dec 21 23:17:05 2013 from ovirt1.localdomain

[root@ovirt2 ~]# iptables -L
Chain INPUT (policy ACCEPT)
target     prot opt source               destination
ACCEPT     all  --  anywhere             anywhere             state RELATED,ESTABLISHED
ACCEPT     all  --  anywhere             anywhere
ACCEPT     tcp  --  anywhere             anywhere             tcp dpt:54321
ACCEPT     tcp  --  anywhere             anywhere             tcp dpt:ssh
ACCEPT     udp  --  anywhere             anywhere             udp dpt:snmp
ACCEPT     tcp  --  anywhere             anywhere             tcp dpt:16514
ACCEPT     tcp  --  anywhere             anywhere             multiport dports xprtld:6166
ACCEPT     tcp  --  anywhere             anywhere             multiport dports 49152:49216
ACCEPT     tcp  --  anywhere             anywhere             tcp dpt:24007
ACCEPT     tcp  --  anywhere             anywhere             tcp dpt:webcache
ACCEPT     udp  --  anywhere             anywhere             udp dpt:sunrpc
ACCEPT     tcp  --  anywhere             anywhere             tcp dpt:38465
ACCEPT     tcp  --  anywhere             anywhere             tcp dpt:38466
ACCEPT     tcp  --  anywhere             anywhere             tcp dpt:sunrpc
ACCEPT     tcp  --  anywhere             anywhere             tcp dpt:38467
ACCEPT     tcp  --  anywhere             anywhere             tcp dpt:nfs
ACCEPT     tcp  --  anywhere             anywhere             tcp dpt:38469
ACCEPT     tcp  --  anywhere             anywhere             tcp dpt:39543
ACCEPT     tcp  --  anywhere             anywhere             tcp dpt:55863
ACCEPT     tcp  --  anywhere             anywhere             tcp dpt:38468
ACCEPT     udp  --  anywhere             anywhere             udp dpt:963
ACCEPT     tcp  --  anywhere             anywhere             tcp dpt:965
ACCEPT     tcp  --  anywhere             anywhere             tcp dpt:ctdb
ACCEPT     tcp  --  anywhere             anywhere             tcp dpt:netbios-ssn
ACCEPT     tcp  --  anywhere             anywhere             tcp dpt:microsoft-ds
ACCEPT     tcp  --  anywhere             anywhere             tcp dpts:24007:24108
ACCEPT     tcp  --  anywhere             anywhere             tcp dpts:49152:49251
REJECT     all  --  anywhere             anywhere             reject-with icmp-host-prohibited
Chain FORWARD (policy ACCEPT)
target     prot opt source               destination
REJECT     all  --  anywhere             anywhere             PHYSDEV match ! --physdev-is-bridged reject-with icmp-host-prohibited
Chain OUTPUT (policy ACCEPT)
target     prot opt source               destination

Creating XFS replicated Gluster Storage

[root@ovirt1 ~]# pvcreate /dev/sda3
[root@ovirt1 ~]# vgcreate vg_virt /dev/sda3
[root@ovirt1 ~]# lvcreate -L 91000M -n lv_gluster  vg_virt  /dev/sda3
Logical volume "lv_gluster" created
[root@ovirt1 ~]# lvscan
ACTIVE            '/dev/fedora00/root' [170.90 GiB] inherit
ACTIVE            '/dev/fedora00/swap' [7.89 GiB] inherit
ACTIVE            '/dev/vg_virt/lv_gluster' [88.87 GiB] inherit
[root@ovirt1 ~]# mkfs.xfs -f -i size=512 /dev/mapper/vg_virt-lv_gluster

meta-data=/dev/mapper/vg_virt-lv_gluster isize=512    agcount=16, agsize=1456000 blks
         =                               sectsz=4096  attr=2, projid32bit=0
data     =                               bsize=4096   blocks=23296000, imaxpct=25
         =                               sunit=0      swidth=0 blks
naming   =version 2                      bsize=4096   ascii-ci=0
log      =internal log                   bsize=4096   blocks=11375, version=2
         =                               sectsz=4096  sunit=1 blks, lazy-count=1
realtime =none                           extsz=4096   blocks=0, rtextents=0

[root@ovirt1 ~]# mkdir /data1
[root@ovirt1 ~]# chown -R 36:36 /data1
[root@ovirt1 ~]# echo "/dev/mapper/vg_virt-lv_gluster  /data1  xfs     defaults    1 2" >> /etc/fstab
[root@ovirt1 ~]# mount -a

Creating a replicated gluster volume based on XFS LVM via the Web Admin Console

In the df output below, the last line corresponds to the replicated gluster volume ovirt-data05, based on the XFS-formatted LVM partition /dev/mapper/vg_virt-lv_gluster mounted via /etc/fstab (similar on both peers):

[root@ovirt1 ~]# df -h
Filesystem                               Size  Used Avail Use% Mounted on
/dev/mapper/fedora00-root                169G   35G  125G  22% /
devtmpfs                                 3.9G     0  3.9G   0% /dev
tmpfs                                    3.9G  152K  3.9G   1% /dev/shm
tmpfs                                    3.9G  988K  3.9G   1% /run
tmpfs                                    3.9G     0  3.9G   0% /sys/fs/cgroup
tmpfs                                    3.9G   80K  3.9G   1% /tmp
/dev/sda1                                477M   87M  361M  20% /boot
ovirt1.localdomain:ovirt-data02          169G   35G  125G  22% /rhev/data-center/mnt/glusterSD/ovirt1.localdomain:ovirt-data02
192.168.1.137:/var/lib/exports/export    169G   35G  125G  22% /rhev/data-center/mnt/192.168.1.137:_var_lib_exports_export
ovirt1.localdomain:/var/lib/exports/iso  169G   35G  125G  22% /rhev/data-center/mnt/ovirt1.localdomain:_var_lib_exports_iso
/dev/mapper/vg_virt-lv_gluster            89G   36M   89G   1% /data1
ovirt1.localdomain:ovirt-data05         89G   36M   89G   1% /rhev/data-center/mnt/glusterSD/ovirt1.localdomain:ovirt-data05
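
For reference, the Web Admin volume creation roughly corresponds to the CLI sequence below. The brick paths under /data1 are an assumption of mine (the console chooses its own brick directories), so this is only a dry-run sketch that prints the commands:

```shell
# Sketch of the CLI equivalent of creating ovirt-data05 in Web Admin.
# Brick directories under /data1 are assumed, not taken from the real setup.
VOL=ovirt-data05
BRICK1="ovirt1.localdomain:/data1/${VOL}"
BRICK2="ovirt2.localdomain:/data1/${VOL}"
CREATE="gluster volume create $VOL replica 2 $BRICK1 $BRICK2"
START="gluster volume start $VOL"
printf '%s\n%s\n' "$CREATE" "$START"
```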

Fedora 20 KVM installation on XFS Gluster domain

