“Setting up Multi-Node OpenStack RDO Havana + Gluster Backend + Neutron VLAN” on CentOS 6.5, with Controller and Compute nodes each having two Ethernet adapters, per Andrew Lau

December 28, 2013

Why CentOS 6.5? It has the libgfapi library back-ported (http://www.gluster.org/2012/11/integration-with-kvmqemu/), which allows qemu to work natively with glusterfs 3.4.1 volumes (https://bugzilla.redhat.com/show_bug.cgi?id=848070). See also http://rhn.redhat.com/errata/RHEA-2013-1859.html, in particular bug 956919 – “Develop native qemu-gluster driver for Cinder”. The general concept may be seen here: http://www.mirantis.com/blog/openstack-havana-glusterfs-and-what-improved-support-really-means. I am very thankful to Andrew Lau for his sample answer-file for setups of the kind “Controller + Compute Node + Compute Node …”. His “Howto” [1] is perfect; nevertheless, even having a box with 3 Ethernet adapters I was unable to reproduce his setup exactly. Later I realised that I simply hadn't fixed the epel-*.repo files, and I decided to switch to another setup: baseurl should be uncommented and mirrorlist, on the contrary, commented out. I believe this is a very personal issue. For some reason I had to install EPEL manually on CentOS 6.5: packstack failed on internet-enabled boxes, and the epel-*.repo files also required manual intervention to make packstack finally happy.
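A quick, informal way to confirm that the back-ported qemu on CentOS 6.5 really links against libgfapi (assuming the stock binary location /usr/libexec/qemu-kvm) is:

# ldd /usr/libexec/qemu-kvm | grep gfapi
libgfapi.so.0 => /usr/lib64/libgfapi.so.0 (0x...)

An empty result would mean the qemu build in use has no native gluster support and every volume access will go through the FUSE mount instead.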

Differences :-

1. The RDO Controller and Compute node setup based on Andrew Lau's multi-node.packstack [1] is a bit different from the original:

No gluster volumes for cinder, nova or glance are created before the RDO packstack install, and there is no separate network like 172.16.0.0 for gluster cluster management;

just the original 192.168.1.0/24 network, with internet access alive, is used in the RDO setup (an answer-file pretty close to Andrew's is attached at the end).

2. Set up LBaaS :-

Edit /etc/neutron/neutron.conf and add the following in the default section:

[DEFAULT]
service_plugins = neutron.services.loadbalancer.plugin.LoadBalancerPlugin

Already there

Then edit the /etc/openstack-dashboard/local_settings file and search for enable_lb and set it to true:

OPENSTACK_NEUTRON_NETWORK = {
'enable_lb': True
}

Done

# vi /etc/neutron/lbaas_agent.ini – already done, no changes

device_driver=neutron.services.loadbalancer.drivers.haproxy.namespace_driver.HaproxyNSDriver
interface_driver=neutron.agent.linux.interface.OVSInterfaceDriver
user_group=haproxy

Comment out the line in the service_providers section:
service_provider=LOADBALANCER:Haproxy:neutron.services.loadbalancer.drivers.haproxy.plugin_driver.HaproxyOnHostPluginDriver:default

Nothing to remove

service neutron-lbaas-agent start – already running, restarted
chkconfig neutron-lbaas-agent on – skipped
service neutron-server restart – done
service httpd restart – done

All done.
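A quick check that the LBaaS agent actually registered with Neutron, and that pools can be created, might look something like this (the subnet id is a placeholder for the private subnet of this setup):

# neutron agent-list | grep -i loadbalancer
# neutron lb-pool-create --lb-method ROUND_ROBIN --name test-pool --protocol HTTP --subnet-id <private-subnet-id>
# neutron lb-pool-list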

HAProxy is supposed to manage a landscape with several controllers: one host acts as the frontend and the rest as backend servers, providing HA for the OpenStack services running on the controllers. It is a separate host. View :-

http://openstack.redhat.com/Load_Balance_OpenStack_API#HAProxy

In the current Controller+Compute setup there is no need for HAProxy; otherwise a third host would be needed to load-balance openstack-nova-compute.

So the “yum install haproxy” in the LBaaS section of [1] is hard to understand.
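For completeness, if a dedicated HAProxy host were fronting several controllers per the link above, a minimal fragment of /etc/haproxy/haproxy.cfg might look like the sketch below; the VIP 192.168.1.140 and the second controller 192.168.1.138 are hypothetical, only 192.168.1.137 exists in this setup:

listen nova-api 192.168.1.140:8774
    mode tcp
    balance roundrobin
    server controller1 192.168.1.137:8774 check
    server controller2 192.168.1.138:8774 check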

3. At the end of the RDO install the br-ex bridge and OVS port eth0 have been created.

4. Gluster volumes backing Nova, Glance and Cinder have been created after the RDO install. Havana is tuned for the cinder-volumes gluster backend after the RDO installation.

5. HA is implemented via keepalived per [1] after the RDO install, because the interface on the Master changes to “br-ex”.

Initial repositories set up per [1]

# yum install -y  http://rdo.fedorapeople.org/openstack-havana/rdo-release-havana.rpm
# cd /etc/yum.repos.d/
# wget http://download.gluster.org/pub/gluster/glusterfs/LATEST/RHEL/glusterfs-epel.repo
# yum install -y openstack-packstack python-netaddr
# yum install -y glusterfs glusterfs-fuse glusterfs-server

In case packstack fails to install EPEL :-

[root@hv02 ~]# wget http://dl.fedoraproject.org/pub/epel/6/x86_64/epel-release-6-8.noarch.rpm
[root@hv02 ~]# wget http://rpms.famillecollet.com/enterprise/remi-release-6.rpm
[root@hv02 ~]# rpm -Uvh remi-release-6*.rpm epel-release-6*.rpm

[root@hv02 ~]# ls -1 /etc/yum.repos.d/epel* /etc/yum.repos.d/remi.repo
/etc/yum.repos.d/epel.repo
/etc/yum.repos.d/epel-testing.repo
/etc/yum.repos.d/remi.repo

In case packstack next fails to resolve dependencies :-
Update the epel*.repo files as well: uncomment baseurl and comment out mirrorlist (a one-liner for this is shown below).
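Assuming the stock epel-release-6-8 repo layout, the flip can be scripted in one line:

# sed -i -e 's/^#baseurl=/baseurl=/' -e 's/^mirrorlist=/#mirrorlist=/' /etc/yum.repos.d/epel*.repo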

System core setup

- Controller node: Nova, Keystone, Cinder, Glance, Neutron  (hv02)
- Compute node: Nova (nova-compute), Neutron (openvswitch-agent)  (hv01)

The NetworkManager service was disabled, the network service enabled (commands below), and the system rebooted before the RDO installation.
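On CentOS 6.5 that amounts to the usual chkconfig/service calls:

# chkconfig NetworkManager off
# chkconfig network on
# service NetworkManager stop
# service network start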

[root@hv02 ~]# packstack --answer-file=multi-node.packstack
Welcome to Installer setup utility
Packstack changed given value  to required value /root/.ssh/id_rsa.pub
Installing:
Clean Up…                                            [ DONE ]
Setting up ssh keys…                                 [ DONE ]
Discovering hosts’ details…                          [ DONE ]
Adding pre install manifest entries…                 [ DONE ]
Installing time synchronization via NTP…             [ DONE ]
Adding MySQL manifest entries…                       [ DONE ]
Adding QPID manifest entries…                        [ DONE ]
Adding Keystone manifest entries…                    [ DONE ]
Adding Glance Keystone manifest entries…             [ DONE ]
Adding Glance manifest entries…                      [ DONE ]
Installing dependencies for Cinder…                  [ DONE ]
Adding Cinder Keystone manifest entries…             [ DONE ]
Adding Cinder manifest entries…                      [ DONE ]
Adding Nova API manifest entries…                    [ DONE ]
Adding Nova Keystone manifest entries…               [ DONE ]
Adding Nova Cert manifest entries…                   [ DONE ]
Adding Nova Conductor manifest entries…              [ DONE ]
Adding Nova Compute manifest entries…                [ DONE ]
Adding Nova Scheduler manifest entries…              [ DONE ]
Adding Nova VNC Proxy manifest entries…              [ DONE ]
Adding Nova Common manifest entries…                 [ DONE ]
Adding Openstack Network-related Nova manifest entries…[ DONE ]
Adding Neutron API manifest entries…                 [ DONE ]
Adding Neutron Keystone manifest entries…            [ DONE ]
Adding Neutron L3 manifest entries…                  [ DONE ]
Adding Neutron L2 Agent manifest entries…            [ DONE ]
Adding Neutron DHCP Agent manifest entries…          [ DONE ]
Adding Neutron LBaaS Agent manifest entries…         [ DONE ]
Adding Neutron Metadata Agent manifest entries…      [ DONE ]
Adding OpenStack Client manifest entries…            [ DONE ]
Adding Horizon manifest entries…                     [ DONE ]
Adding Heat manifest entries…                        [ DONE ]
Adding Heat Keystone manifest entries…               [ DONE ]
Adding Ceilometer manifest entries…                  [ DONE ]
Adding Ceilometer Keystone manifest entries…         [ DONE ]
Adding post install manifest entries…                [ DONE ]
Preparing servers…                                   [ DONE ]
Installing Dependencies…                             [ DONE ]
Copying Puppet modules and manifests…                [ DONE ]
Applying Puppet manifests…
Applying 192.168.1.127_prescript.pp
Applying 192.168.1.137_prescript.pp
192.168.1.127_prescript.pp :               [ DONE ]
192.168.1.137_prescript.pp :               [ DONE ]
Applying 192.168.1.127_ntpd.pp
Applying 192.168.1.137_ntpd.pp
192.168.1.127_ntpd.pp :                         [ DONE ]
192.168.1.137_ntpd.pp :                         [ DONE ]
Applying 192.168.1.137_mysql.pp
Applying 192.168.1.137_qpid.pp
192.168.1.137_mysql.pp :                       [ DONE ]
192.168.1.137_qpid.pp :                         [ DONE ]
Applying 192.168.1.137_keystone.pp
Applying 192.168.1.137_glance.pp
Applying 192.168.1.137_cinder.pp
192.168.1.137_keystone.pp :                 [ DONE ]
192.168.1.137_glance.pp :                     [ DONE ]
192.168.1.137_cinder.pp :                     [ DONE ]
Applying 192.168.1.137_api_nova.pp
192.168.1.137_api_nova.pp :                 [ DONE ]
Applying 192.168.1.137_nova.pp
Applying 192.168.1.127_nova.pp
192.168.1.137_nova.pp :                         [ DONE ]
192.168.1.127_nova.pp :                         [ DONE ]
Applying 192.168.1.127_neutron.pp
Applying 192.168.1.137_neutron.pp
192.168.1.127_neutron.pp :                   [ DONE ]
192.168.1.137_neutron.pp :                   [ DONE ]
Applying 192.168.1.137_osclient.pp
Applying 192.168.1.137_horizon.pp
Applying 192.168.1.137_heat.pp
Applying 192.168.1.137_ceilometer.pp
192.168.1.137_osclient.pp :                 [ DONE ]
192.168.1.137_horizon.pp :                   [ DONE ]
192.168.1.137_heat.pp :                         [ DONE ]
192.168.1.137_ceilometer.pp :             [ DONE ]
Applying 192.168.1.127_postscript.pp
Applying 192.168.1.137_postscript.pp
192.168.1.127_postscript.pp :             [ DONE ]
192.168.1.137_postscript.pp :             [ DONE ]
[ DONE ]
Finalizing…                                          [ DONE ]

**** Installation completed successfully ******

Additional information:
* File /root/keystonerc_admin has been created on OpenStack client host 192.168.1.137. To use the command line tools you need to source the file.
* NOTE : A certificate was generated to be used for ssl, You should change the ssl certificate configured in /etc/httpd/conf.d/ssl.conf on 192.168.1.137 to use a CA signed cert.
* To access the OpenStack Dashboard browse to https://192.168.1.137/dashboard.
Please, find your login credentials stored in the keystonerc_admin in your home directory.
* The installation log file is available at: /var/tmp/packstack/20131226-230226-PzmL7R/openstack-setup.log
* The generated manifests are available at: /var/tmp/packstack/20131226-230226-PzmL7R/manifests

Services on Controller Node :-

Services on Compute Node :-

Post install configuration

On Controller :

[root@hv02 network-scripts(keystone_admin)]# cat ifcfg-br-ex

DEVICE="br-ex"
BOOTPROTO="static"
IPADDR="192.168.1.137"
NETMASK="255.255.255.0"
DNS1="83.221.202.254"
BROADCAST="192.168.1.255"
GATEWAY="192.168.1.1"
NM_CONTROLLED="no"
DEFROUTE="yes"
IPV4_FAILURE_FATAL="yes"
IPV6INIT=no
ONBOOT="yes"
TYPE="OVSBridge"
DEVICETYPE="ovs"

[root@hv02 network-scripts(keystone_admin)]# cat ifcfg-eth0

NAME="eth0"
HWADDR=90:E6:BA:2D:11:EB
ONBOOT="yes"
TYPE="OVSPort"
DEVICETYPE="ovs"
OVS_BRIDGE=br-ex
NM_CONTROLLED=no
IPV6INIT=no

Pre install configuration

[root@hv02 network-scripts(keystone_admin)]# cat ifcfg-eth1

DEVICE=eth1
HWADDR=00:0C:76:E0:1E:C5
TYPE=Ethernet
ONBOOT=yes
NM_CONTROLLED=no
BOOTPROTO=none

Post install configuration

[root@hv02 ~(keystone_admin)]# ovs-vsctl show
e059cd59-21c8-48f8-ad7c-b9e1de9a986b
    Bridge br-int
        Port "int-br-eth1"
            Interface "int-br-eth1"
        Port br-int
            Interface br-int
                type: internal
        Port "qvo5252ab82-49"
            tag: 1
            Interface "qvo5252ab82-49"
        Port "tape1849acb-66"
            tag: 1
            Interface "tape1849acb-66"
                type: internal
        Port "qr-9017c241-f3"
            tag: 1
            Interface "qr-9017c241-f3"
                type: internal
    Bridge br-ex
        Port br-ex
            Interface br-ex
                type: internal
        Port "eth0"
            Interface "eth0"
        Port "qg-14fcad42-83"
            Interface "qg-14fcad42-83"
                type: internal
    Bridge "br-eth1"
        Port "br-eth1"
            Interface "br-eth1"
                type: internal
        Port "eth1"
            Interface "eth1"
        Port "phy-br-eth1"
            Interface "phy-br-eth1"
    ovs_version: "1.11.0"

On Compute node :-

[root@hv01 network-scripts]# cat ifcfg-eth0

DEVICE=eth0
TYPE=Ethernet
UUID=e25e1975-50db-4421-ae39-676708d480db
ONBOOT=yes
NM_CONTROLLED=no
BOOTPROTO=none
IPADDR=192.168.1.127
PREFIX=24
GATEWAY=192.168.1.1
DNS1=83.221.202.254
DEFROUTE=yes
IPV4_FAILURE_FATAL=yes
IPV6INIT=no
NAME="System eth0"
HWADDR=00:22:15:63:E4:E2
[root@hv01 network-scripts]# cat ifcfg-eth1

DEVICE=eth1
HWADDR=00:22:15:63:F9:9F
TYPE=Ethernet
ONBOOT=yes
NM_CONTROLLED=no
BOOTPROTO=none

Glusterfs replicated volumes for glance, nova and cinder-volumes were created after reboot.
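The nova2 and glance2 volumes are not spelled out step by step here; judging from the “gluster volume info” output further down, they would have been created roughly like this (the cinder-volumes02 volume is created explicitly in the cinder section below):

# gluster peer probe hv01.localdomain
# gluster volume create nova2 replica 2 hv01.localdomain:/data2/nova hv02.localdomain:/data2/nova
# gluster volume start nova2
# gluster volume set nova2 auth.allow 192.168.1.*
# gluster volume create glance2 replica 2 hv01.localdomain:/data2/glance hv02.localdomain:/data2/glance
# gluster volume start glance2
# gluster volume set glance2 auth.allow 192.168.1.*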

At this point implement HA via keepalived, with /etc/keepalived/keepalived.conf on hv02:

vrrp_instance VI_1 {
interface  br-ex
state MASTER
virtual_router_id 10
priority 100   # master 100
virtual_ipaddress {
192.168.1.134
}
}

and another one on hv01:

vrrp_instance VI_1 {
interface eth0
state BACKUP
virtual_router_id 10
priority 99 # master 100
virtual_ipaddress {
192.168.1.134
}
}

I just follow [1], but the interface for MASTER is “br-ex”.

Enable the “keepalived” service and reboot the boxes, for example:
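# yum install -y keepalived     # on both hv01 and hv02
# chkconfig keepalived on
# shutdown -r now

After the boxes come back up, the VIP 192.168.1.134 should sit on the master:

# ip addr show br-ex | grep 192.168.1.134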

Tuning glance and nova per [1]  http://www.andrewklau.com/getting-started-with-multi-node-openstack-rdo-havana-gluster-backend-neutron/

Just in case, I reproduce the instructions from [1]:

# mkdir -p /mnt/gluster/{glance,nova} # On Controller
# mkdir -p /mnt/gluster/nova          # On Compute
# mount -t glusterfs 192.168.1.134:/nova2 /mnt/gluster/nova/
# mount -t glusterfs 192.168.1.134:/glance2 /mnt/gluster/glance/

Update /etc/glance/glance-api.conf  
    filesystem_store_datadir = /mnt/gluster/glance/images

# mkdir -p /mnt/gluster/glance/images
# chown -R glance:glance /mnt/gluster/glance/
# service openstack-glance-api restart

For all Compute Nodes (you may have more than one, and the Controller as well if openstack-nova-compute runs on it):

# mkdir /mnt/gluster/nova/instance/
# chown -R nova:nova /mnt/gluster/nova/instance/

Update /etc/nova/nova.conf
  instances_path = /mnt/gluster/nova/instance

# service openstack-nova-compute restart

Quoting ends
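The same two edits can also be scripted with openstack-config from openstack-utils, for example:

# openstack-config --set /etc/glance/glance-api.conf DEFAULT filesystem_store_datadir /mnt/gluster/glance/images
# openstack-config --set /etc/nova/nova.conf DEFAULT instances_path /mnt/gluster/nova/instance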

Post-install creation of cinder-volumes :-

Configuring Cinder to Add GlusterFS

# gluster volume create cinder-volumes02  replica 2 hv01.localdomain:/data2/cinder hv02.localdomain:/data2/cinder

# gluster volume start cinder-volumes02

# gluster volume set cinder-volumes02  auth.allow 192.168.1.*

# openstack-config --set /etc/cinder/cinder.conf DEFAULT volume_driver cinder.volume.drivers.glusterfs.GlusterfsDriver

# openstack-config --set /etc/cinder/cinder.conf DEFAULT glusterfs_shares_config /etc/cinder/shares.conf

# openstack-config --set /etc/cinder/cinder.conf DEFAULT glusterfs_mount_point_base /var/lib/cinder/volumes

 # vi /etc/cinder/shares.conf

    192.168.1.134:cinder-volumes02

:wq

Update /etc/sysconfig/iptables (if it hasn't been done earlier) :-

-A INPUT -p tcp -m multiport --dport 24007:24047 -j ACCEPT

-A INPUT -p tcp --dport 111 -j ACCEPT

-A INPUT -p udp --dport 111 -j ACCEPT

-A INPUT -p tcp -m multiport --dport 38465:38485 -j ACCEPT

Comment out

-A FORWARD -j REJECT --reject-with icmp-host-prohibited

-A INPUT -j REJECT --reject-with icmp-host-prohibited

# service iptables restart

Restarting the openstack-cinder services mounts the glusterfs volume :-

 # for i in api scheduler volume ; do service openstack-cinder-${i} restart ;done

At this point the RDO packstack run has completed and the post-configuration tuning is done.

On Controller :-

[root@hv02 ~(keystone_admin)]# df -h
Filesystem                       Size  Used Avail Use% Mounted on
/dev/mapper/vg_hv02-LogVol00     154G   16G  131G  11% /
tmpfs                            3.9G  232K  3.9G   1% /dev/shm
/dev/sdb1                        485M   70M  390M  16% /boot
/dev/mapper/vg_havana-lv_havana   98G  2.8G   95G   3% /data2
192.168.1.134:/glance2            98G  2.9G   95G   3% /mnt/gluster/glance2
192.168.1.134:/nova2              98G  2.9G   95G   3% /mnt/gluster/nova2
192.168.1.134:/cinder-volumes02   98G  2.9G   95G   3% /var/lib/cinder/volumes/77b8406d9f60712274c66a84844feb8a
192.168.1.134:/cinder-volumes02   98G  2.9G   95G   3% /var/lib/nova/mnt/77b8406d9f60712274c66a84844feb8a
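A simple way to check whether a guest attaches its Cinder volume natively via libgfapi or through the FUSE mount is to look at the libvirt domain XML on the compute node hosting it; instance-00000002 is the instance name reported by “nova show” further down:

# virsh dumpxml instance-00000002 | grep -E "source (file|protocol)"

A <source protocol='gluster' .../> element means the native qemu-gluster driver is in play; a plain <source file='/var/lib/nova/mnt/.../volume-...'/> means the attachment goes through the FUSE mount.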

[root@hv02 ~(keystone_admin)]# cat /etc/fstab

#
# /etc/fstab
# Created by anaconda on Sat Dec 28 10:47:59 2013
#
# Accessible filesystems, by reference, are maintained under '/dev/disk'
# See man pages fstab(5), findfs(8), mount(8) and/or blkid(8) for more info
#
/dev/mapper/vg_hv02-LogVol00 /                       ext4    defaults        1 1
UUID=0a7bffa6-d133-4cd6-bdaf-06a00af0b340 /boot    ext4    defaults  1 2

/dev/mapper/vg_hv02-LogVol01 swap                    swap    defaults        0 0
tmpfs                   /dev/shm               tmpfs   defaults        0 0
devpts                  /dev/pts                devpts  gid=5,mode=620  0 0
sysfs                   /sys                    sysfs   defaults        0 0
proc                    /proc                     proc    defaults        0 0
/dev/mapper/vg_havana-lv_havana    /data2  xfs     defaults        1 2
192.168.1.134:/glance2  /mnt/gluster/glance2  glusterfs defaults,_netdev 0 0
192.168.1.134:/nova2    /mnt/gluster/nova2     glusterfs defaults,_netdev

[root@hv02 ~(keystone_admin)]# gluster volume info nova2
Volume Name: nova2
Type: Replicate
Volume ID: 3a04a896-8080-4172-b3fb-c89c028c6944
Status: Started
Number of Bricks: 1 x 2 = 2
Transport-type: tcp
Bricks:
Brick1: hv01.localdomain:/data2/nova
Brick2: hv02.localdomain:/data2/nova
Options Reconfigured:
auth.allow: 192.168.1.*

[root@hv02 ~(keystone_admin)]# gluster volume info glance2
Volume Name: glance2
Type: Replicate
Volume ID: c7b31eaa-6dea-49c2-9d09-ec4dcd65c560
Status: Started
Number of Bricks: 1 x 2 = 2
Transport-type: tcp
Bricks:
Brick1: hv01.localdomain:/data2/glance
Brick2: hv02.localdomain:/data2/glance
Options Reconfigured:
auth.allow: 192.168.1.*

[root@hv02 ~(keystone_admin)]# gluster volume info cinder-volumes02
Volume Name: cinder-volumes02
Type: Replicate
Volume ID: 639e6afa-dc29-4fd7-8d3c-95f655383d1c
Status: Started
Number of Bricks: 1 x 2 = 2
Transport-type: tcp
Bricks:
Brick1: hv01.localdomain:/data2/cinder
Brick2: hv02.localdomain:/data2/cinder
Options Reconfigured:
auth.allow: 192.168.1.*

On Compute :-


[root@hv02 ~(keystone_admin)]# ssh hv01
Last login: Mon Dec 30 11:09:16 2013 from hv02

[root@hv01 ~]# df -h
Filesystem Size Used Avail Use% Mounted on
/dev/mapper/vg_hv01-LogVol00 154G 4.5G 142G 4% /
tmpfs 3.9G 84K 3.9G 1% /dev/shm
/dev/sdb1 485M 70M 390M 16% /boot
/dev/mapper/vg_havana-lv_havana 98G 3.1G 95G 4% /data2
192.168.1.134:/nova2 98G 3.1G 95G 4% /mnt/gluster/nova2
192.168.1.134:/cinder-volumes02 98G 3.1G 95G 4% /var/lib/nova/mnt/77b8406d9f60712274c66a84844feb8a

[root@hv01 ~]# cat /etc/fstab

#
# /etc/fstab
# Created by anaconda on Sat Dec 28 10:14:16 2013
#
# Accessible filesystems, by reference, are maintained under '/dev/disk'
# See man pages fstab(5), findfs(8), mount(8) and/or blkid(8) for more info
#
/dev/mapper/vg_hv01-LogVol00 /                       ext4    defaults        1 1
UUID=21afa600-9b18-4aea-bfb7-16b73eaee3de /boot                   ext4    defaults        1 2
/dev/mapper/vg_hv01-LogVol01       swap            swap    defaults        0 0
tmpfs                   /dev/shm             tmpfs   defaults        0 0
devpts                  /dev/pts               devpts  gid=5,mode=620  0 0
sysfs                   /sys                      sysfs   defaults        0 0
proc                    /proc                    proc    defaults        0 0
/dev/mapper/vg_havana-lv_havana    /data2  xfs     defaults        1 2
192.168.1.134:/nova2   /mnt/gluster/nova2  glusterfs defaults,_netdev 0 0

On Controller :-

[root@hv02 ~(keystone_admin)]# openstack-status

== Nova services ==

openstack-nova-api:                     active
openstack-nova-cert:                    active
openstack-nova-compute:                 active
openstack-nova-network:                 dead      (disabled on boot)
openstack-nova-scheduler:               active
openstack-nova-conductor:               active

== Glance services ==

openstack-glance-api:                   active
openstack-glance-registry:              active

== Keystone service ==

openstack-keystone:                     active

== Horizon service ==

openstack-dashboard:                    000
== neutron services ==
neutron-server:                         active
neutron-dhcp-agent:                     active
neutron-l3-agent:                       active
neutron-metadata-agent:                 active
neutron-lbaas-agent:                    active
neutron-openvswitch-agent:              active

== Cinder services ==

openstack-cinder-api:                   active
openstack-cinder-scheduler:             active
openstack-cinder-volume:                active

== Ceilometer services ==

openstack-ceilometer-api:               active
openstack-ceilometer-central:           active
openstack-ceilometer-compute:           active
openstack-ceilometer-collector:         active
openstack-ceilometer-alarm-notifier:    active
openstack-ceilometer-alarm-evaluator:   active

== Heat services ==

openstack-heat-api:                     active
openstack-heat-api-cfn:                 dead      (disabled on boot)
openstack-heat-api-cloudwatch:          dead      (disabled on boot)
openstack-heat-engine:                  active

== Support services ==

mysqld:                                 active
libvirtd:                               active
openvswitch:                            active
messagebus:                             active
tgtd:                                   active
qpidd:                                  active
memcached:                              active

== Keystone users ==

+———————————-+————+———+———————-+
|                id                |    name    | enabled |        email         |
+———————————-+————+———+———————-+
| 0b6cc1c84d194a4fbf6be1cd3343167e |   admin    |   True  |    test@test.com     |
| 1415f2952fc34b419abc8a0d75130e30 | ceilometer |   True  | ceilometer@localhost |
| d77e11979821441da8157103011cae5a |   cinder   |   True  |   cinder@localhost   |
| 2860d02458904f9aa0f89afed6bcc423 |   glance   |   True  |   glance@localhost   |
| 78a8beeeb277493e96feae3127ea0607 |    heat    |   True  |    heat@localhost    |
| 002a2b8fcbfb47a1a588e74e51cb1f3a |  neutron   |   True  |  neutron@localhost   |
| 1b558e148aff4f618120f0f7f547f064 |    nova    |   True  |    nova@localhost    |
+———————————-+————+———+———————-+

== Glance images ==

+————————————–+—————–+————-+——————+———–+——–+
| ID                                   | Name            | Disk Format | Container Format | Size      | Status |
+————————————–+—————–+————-+——————+———–+——–+
| 02ef79b4-081b-4966-8b11-10492449fba5 | f19image        | qcow2       | bare             | 237371392 | active |
| 6eb9e748-5786-4072-b2cf-4c2a91da2bf3 | Ubuntu1310image | qcow2       | bare             | 243728384 | active |
+————————————–+—————–+————-+——————+———–+——–+

== Nova managed services ==

+——————+——————+———-+———+——-+—————————-+—————–+
| Binary           | Host             | Zone     | Status  | State | Updated_at                 | Disabled Reason |
+——————+——————+———-+———+——-+—————————-+—————–+
| nova-consoleauth | hv02.localdomain | internal | enabled | up    | 2013-12-28T11:06:32.000000 | None            |
| nova-scheduler   | hv02.localdomain | internal | enabled | up    | 2013-12-28T11:06:32.000000 | None            |
| nova-conductor   | hv02.localdomain | internal | enabled | up    | 2013-12-28T11:06:35.000000 | None            |
| nova-cert        | hv02.localdomain | internal | enabled | up    | 2013-12-28T11:06:32.000000 | None            |
| nova-compute     | hv02.localdomain | nova     | enabled | up    | 2013-12-28T11:06:33.000000 | None            |
| nova-compute     | hv01.localdomain | nova     | enabled | up    | 2013-12-28T11:06:32.000000 | None            |

+——————+——————+———-+———+——-+—————————-+—————–+

== Nova networks ==

+————————————–+———+——+
| ID                                   | Label   | Cidr |
+————————————–+———+——+
| 56456fcb-8696-4e63-894e-635681c911e4 | private | None |
| d4e83ac8-c257-4fee-a551-5d711087c238 | public  | None |
+————————————–+———+——+

== Nova instance flavors ==

+—-+———–+———–+——+———–+——+——-+————-+———–+
| ID | Name      | Memory_MB | Disk | Ephemeral | Swap | VCPUs | RXTX_Factor | Is_Public |
+—-+———–+———–+——+———–+——+——-+————-+———–+
| 1  | m1.tiny   | 512       | 1    | 0         |      | 1     | 1.0         | True      |
| 2  | m1.small  | 2048      | 20   | 0         |      | 1     | 1.0         | True      |
| 3  | m1.medium | 4096      | 40   | 0         |      | 2     | 1.0         | True      |
| 4  | m1.large  | 8192      | 80   | 0         |      | 4     | 1.0         | True      |
| 5  | m1.xlarge | 16384     | 160  | 0         |      | 8     | 1.0         | True      |
+—-+———–+———–+——+———–+——+——-+————-+———–+

== Nova instances ==

+————————————–+——————+——–+————+————-+——————————–+
| ID                                   | Name             | Status | Task State | Power State | Networks                       |
+————————————–+——————+——–+————+————-+——————————–+
| 7a9da01f-499c-4d27-9b7a-1b1307b767a8 | UbuntuSalamander | ACTIVE | None       | Running     | private=10.0.0.4, 192.168.1.60 |
| 4db2876c-cedd-4d2b-853c-e156bcb20592 | VF19RS1          | ACTIVE | None       | Running     | private=10.0.0.2, 192.168.1.59 |
+————————————–+——————+——–+————+————-+——————————–|

Detailed info about both instances

 [root@hv02 ~(keystone_admin)]# nova show 7a9da01f-499c-4d27-9b7a-1b1307b767a8

+————————————–+———————————————————-+
| Property                             | Value                                                    |
+————————————–+———————————————————-+
| status                               | ACTIVE                                                   |
| updated                              | 2013-12-28T10:43:53Z                                     |
| OS-EXT-STS:task_state                | None                                                     |
| OS-EXT-SRV-ATTR:host                 | hv02.localdomain                                         |
| key_name                             | key2                                                     |
| image                                | Attempt to boot from volume – no image supplied          |
| private network                      | 10.0.0.4, 192.168.1.60                                   |
| hostId                               | 2d47a35fc92addd418ba8dd8df73233732a0e880b2e4e1ffac907091 |
| OS-EXT-STS:vm_state                  | active                                                   |
| OS-EXT-SRV-ATTR:instance_name        | instance-00000002                                        |
| OS-SRV-USG:launched_at               | 2013-12-28T10:43:53.000000                               |
| OS-EXT-SRV-ATTR:hypervisor_hostname  | hv02.localdomain                                         |
| flavor                               | m1.small (2)                                             |
| id                                   | 7a9da01f-499c-4d27-9b7a-1b1307b767a8                     |
| security_groups                      | [{u'name': u'default'}]                                  |
| OS-SRV-USG:terminated_at             | None                                                     |
| user_id                              | 0b6cc1c84d194a4fbf6be1cd3343167e                         |
| name                                 | UbuntuSalamander                                         |
| created                              | 2013-12-28T10:43:40Z                                     |
| tenant_id                            | dc2ec9f2a8404c22b46566f567bebc49                         |
| OS-DCF:diskConfig                    | MANUAL                                                   |
| metadata                             | {}                                                       |
| os-extended-volumes:volumes_attached | [{u'id': u'eaf06b2e-23d0-4a65-bbba-6d464f6c0441'}]       |
| accessIPv4                           |                                                          |
| accessIPv6                           |                                                          |
| progress                             | 0                                                        |
| OS-EXT-STS:power_state               | 1                                                        |
| OS-EXT-AZ:availability_zone          | nova                                                     |
| config_drive                         |                                                          |
+————————————–+———————————————————-+

[root@hv02 ~(keystone_admin)]# nova show 4db2876c-cedd-4d2b-853c-e156bcb20592

+————————————–+———————————————————-+
| Property                             | Value                                                    |
+————————————–+———————————————————-+
| status                               | ACTIVE                                                   |
| updated                              | 2013-12-28T10:20:31Z                                     |
| OS-EXT-STS:task_state                | None                                                     |
| OS-EXT-SRV-ATTR:host                 | hv01.localdomain                                         |
| key_name                             | key2                                                     |
| image                                | Attempt to boot from volume – no image supplied          |
| private network                      | 10.0.0.2, 192.168.1.59                                   |
| hostId                               | fc6ed5fd7d8a2f3c510671ff8485af9e340d4244246eb0aff55f1a0d |
| OS-EXT-STS:vm_state                  | active                                                   |
| OS-EXT-SRV-ATTR:instance_name        | instance-00000001                                        |
| OS-SRV-USG:launched_at               | 2013-12-28T10:20:31.000000                               |
| OS-EXT-SRV-ATTR:hypervisor_hostname  | hv01.localdomain                                         |
| flavor                               | m1.small (2)                                             |
| id                                   | 4db2876c-cedd-4d2b-853c-e156bcb20592                     |
| security_groups                      | [{u'name': u'default'}]                                  |
| OS-SRV-USG:terminated_at             | None                                                     |
| user_id                              | 0b6cc1c84d194a4fbf6be1cd3343167e                         |
| name                                 | VF19RS1                                                  |
| created                              | 2013-12-28T10:20:22Z                                     |
| tenant_id                            | dc2ec9f2a8404c22b46566f567bebc49                         |
| OS-DCF:diskConfig                    | MANUAL                                                   |
| metadata                             | {}                                                       |
| os-extended-volumes:volumes_attached | [{u'id': u'c1ebdd6c-2be0-451e-b3ba-b93cbc5b506b'}]       |
| accessIPv4                           |                                                          |
| accessIPv6                           |                                                          |
| progress                             | 0                                                        |
| OS-EXT-STS:power_state               | 1                                                        |
| OS-EXT-AZ:availability_zone          | nova                                                     |
| config_drive                         |                                                          |
+————————————–+———————————————————-+

  Testing Windows 2012 Server evaluation cloud instance :-

[root@hv02 Downloads(keystone_admin)]# gunzip -cd windows_server_2012_r2_standard_eval_kvm_20131117.qcow2.gz | glance image-create --property hypervisor_type=kvm --name "Windows Server 2012 R2 Std Eval" --container-format bare --disk-format vhd
+—————————-+————————————–+
| Property                   | Value                                |
+—————————-+————————————–+
| Property ‘hypervisor_type’ | kvm                                  |
| checksum                   | 83c08f00b784e551a79ac73348b47360     |
| container_format           | bare                                 |
| created_at                 | 2014-01-09T13:27:24                  |
| deleted                    | False                                |
| deleted_at                 | None                                 |
| disk_format                | vhd                                  |
| id                         | d55b81c5-2370-4d3e-8cb1-323e7a8fa9da |
| is_public                  | False                                |
| min_disk                   | 0                                    |
| min_ram                    | 0                                    |
| name                       | Windows Server 2012 R2 Std Eval      |
| owner                      | dc2ec9f2a8404c22b46566f567bebc49     |
| protected                  | False                                |
| size                       | 17182752768                          |
| status                     | active                               |
| updated_at                 | 2014-01-09T13:52:18                  |
+—————————-+————————————–+

[root@hv02 Downloads(keystone_admin)]# nova image-list
+————————————–+———————————+——–+——–+
| ID                                   | Name                            | Status | Server |
+————————————–+———————————+——–+——–+
| 6bb391f6-f330-406a-95eb-a12fd3db93d5 | UbuntuSalamanderImage           | ACTIVE |        |
| d55b81c5-2370-4d3e-8cb1-323e7a8fa9da | Windows Server 2012 R2 Std Eval | ACTIVE
| c8265abc-5499-414d-94c3-0376cd652281 | fedora19image                   | ACTIVE |        |
| 545aa5a8-b3b8-4fbd-9c86-c523d7790b49 | fedora20image                   | ACTIVE |        |
+————————————–+———————————+——–+——–+

[root@hv02 Downloads(keystone_admin)]# cinder create --image-id d55b81c5-2370-4d3e-8cb1-323e7a8fa9da --display_name Windows2012LVG 20
+———————+————————————–+
|       Property      |                Value                 |
+———————+————————————–+
|     attachments     |                  []                  |
|  availability_zone  |                 nova                 |
|       bootable      |                false                 |
|      created_at     |      2014-01-09T13:58:49.761145      |
| display_description |                 None                 |
|     display_name    |            Windows2012LVG            |
|          id         | fb78c942-1cf7-4f8c-b264-1a3997d03eef |
|       image_id      | d55b81c5-2370-4d3e-8cb1-323e7a8fa9da |
|       metadata      |                  {}                  |
|         size        |                  20                  |
|     snapshot_id     |                 None                 |
|     source_volid    |                 None                 |
|        status       |               creating               |
|     volume_type     |                 None                 |
+———————+————————————–+


[root@hv02 f6b9512bf949cf4bedd1cd604742797e(keystone_admin)]# ls -lah
total 8.5G
drwxr-xr-x. 3 root   root    173 Jan  9 17:58 .
drwxr-xr-x. 6 cinder cinder 4.0K Jan  8 14:12 ..
-rw-rw-rw-. 1 root   root    12G Jan  9 14:56 volume-1ef5e77f-3ac2-42ab-97e6-ebb04a872461
-rw-rw-rw-. 1 root   root    10G Jan  8 22:52 volume-42671dcc-3295-4d9c-a040-6ff031277b73
-rw-rw-rw-. 1 root   root    20G Jan  9 17:58 volume-fb78c942-1cf7-4f8c-b264-1a3997d03eef

[root@hv02 f6b9512bf949cf4bedd1cd604742797e(keystone_admin)]# cinder list
+————————————–+————-+———————+——+————-+———-+————————————–+
|                  ID                  |    Status   |     Display Name    | Size | Volume Type | Bootable |             Attached to              |
+————————————–+————-+———————+——+————-+———-+————————————–+
| 1ef5e77f-3ac2-42ab-97e6-ebb04a872461 |    in-use   |       VF19VLG2      |  12  | performance |   true   | 6b40285c-ce03-4194-b247-013c6f11ff42 |
| 42671dcc-3295-4d9c-a040-6ff031277b73 |    in-use   | UbuntuSalamanderVLG |  10  | performance |   true   | ebd3063e-00c7-4ea8-aed4-63919ebddb42 |
| fb78c942-1cf7-4f8c-b264-1a3997d03eef | downloading |    Windows2012LVG   |  20  |     None    |  false   |                                      |
+————————————–+————-+———————+——+————-+———-+————————————–+
[root@hv02 f6b9512bf949cf4bedd1cd604742797e(keystone_admin)]# cinder list
+————————————–+——–+———————+——+————-+———-+————————————–+
|                  ID                  | Status |     Display Name    | Size | Volume Type | Bootable |             Attached to              |
+————————————–+——–+———————+——+————-+———-+————————————–+
| 1ef5e77f-3ac2-42ab-97e6-ebb04a872461 | in-use |       VF19VLG2      |  12  | performance |   true   | 6b40285c-ce03-4194-b247-013c6f11ff42 |
| 42671dcc-3295-4d9c-a040-6ff031277b73 | in-use | UbuntuSalamanderVLG |  10  | performance |   true   | ebd3063e-00c7-4ea8-aed4-63919ebddb42 |
| fb78c942-1cf7-4f8c-b264-1a3997d03eef | in-use |    Windows2012LVG   |  20  |     None    |   true   | 2950e393-eb37-4991-9e16-fa7ca24b678a |
+————————————–+——–+———————+——+————-+———-+————————————–+

[root@hv02 f6b9512bf949cf4bedd1cd604742797e(keystone_admin)]# nova list

+————————————–+——————+———–+————+————-+——————————–+
| ID                                   | Name             | Status    | Task State | Power State | Networks                       |
+————————————–+——————+———–+————+————-+——————————–+
| ebd3063e-00c7-4ea8-aed4-63919ebddb42 | UbuntuSalamander | SUSPENDED | None       | Shutdown    | private=10.0.0.4, 192.168.1.60 |
| 6b40285c-ce03-4194-b247-013c6f11ff42 | VF19RS2          | SUSPENDED | None       | Shutdown    | private=10.0.0.2, 192.168.1.59 |
| 2950e393-eb37-4991-9e16-fa7ca24b678a | Win2012SRV       | ACTIVE    | None       | Running     | private=10.0.0.5, 192.168.1.61 |
+————————————–+——————+———–+————+————-+——————————–+
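For reference, the Win2012SRV instance above was booted from the Cinder volume created a moment ago; a sketch of the Havana-era boot-from-volume call (the net-id is a placeholder) would look roughly like:

# nova boot --flavor 2 --key-name key2 --block-device-mapping vda=fb78c942-1cf7-4f8c-b264-1a3997d03eef:::0 --nic net-id=<private-net-id> Win2012SRV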

[root@hv02 f6b9512bf949cf4bedd1cd604742797e(keystone_admin)]# nova show  2950e393-eb37-4991-9e16-fa7ca24b678a

+————————————–+———————————————————-+
| Property                             | Value                                                    |
+————————————–+———————————————————-+
| status                               | ACTIVE                                                   |
| updated                              | 2014-01-09T19:37:09Z                           |
| OS-EXT-STS:task_state                | None                                             |
| OS-EXT-SRV-ATTR:host                 | hv01.localdomain                                         |
| key_name                             | key2                                                     |
| image                                | Attempt to boot from volume – no image supplied          |
| private network                      | 10.0.0.5, 192.168.1.61                                   |
| hostId                               | fc6ed5fd7d8a2f3c510671ff8485af9e340d4244246eb0aff55f1a0d |
| OS-EXT-STS:vm_state                  | active                                           |
| OS-EXT-SRV-ATTR:instance_name        | instance-00000013                                        |
| OS-SRV-USG:launched_at               | 2014-01-09T14:26:34.000000                               |
| OS-EXT-SRV-ATTR:hypervisor_hostname  | hv01.localdomain                                         |
| flavor                               | m1.small (2)                                             |
| id                                   | 2950e393-eb37-4991-9e16-fa7ca24b678a                     |
| security_groups                      | [{u'name': u'default'}]                         |
| OS-SRV-USG:terminated_at             | None                                                     |
| user_id                              | 0b6cc1c84d194a4fbf6be1cd3343167e                         |
| name                                 | Win2012SRV                                           |
| created                              | 2014-01-09T14:26:24Z                             |
| tenant_id                            | dc2ec9f2a8404c22b46566f567bebc49                         |
| OS-DCF:diskConfig                    | MANUAL                                                   |
| metadata                             | {}                                                       |
| os-extended-volumes:volumes_attached | [{u'id': u'fb78c942-1cf7-4f8c-b264-1a3997d03eef'}]       |
| accessIPv4                           |                                                          |
| accessIPv6                           |                                                          |
| progress                             | 0                                                        |
| OS-EXT-STS:power_state               | 1                                                        |
| OS-EXT-AZ:availability_zone          | nova                                                     |
| config_drive                         |                                                          |
+————————————–+———————————————————-+

System info :-

REFERENCES.

1. http://www.andrewklau.com/getting-started-with-multi-node-openstack-rdo-havana-gluster-backend-neutron/
2. http://openstack.redhat.com/forum/discussion/607/havana-mutlinode-with-neutron

Answer file :

[general]

# Path to a Public key to install on servers. If a usable key has not

# been installed on the remote servers the user will be prompted for a

# password and this key will be installed so the password will not be

# required again

CONFIG_SSH_KEY=

# Set to ‘y’ if you would like Packstack to install MySQL

CONFIG_MYSQL_INSTALL=y

# Set to ‘y’ if you would like Packstack to install OpenStack Image

# Service (Glance)

CONFIG_GLANCE_INSTALL=y

# Set to ‘y’ if you would like Packstack to install OpenStack Block

# Storage (Cinder)

CONFIG_CINDER_INSTALL=y

# Set to ‘y’ if you would like Packstack to install OpenStack Compute

# (Nova)

CONFIG_NOVA_INSTALL=y

# Set to ‘y’ if you would like Packstack to install OpenStack

# Networking (Neutron)

CONFIG_NEUTRON_INSTALL=y

# Set to ‘y’ if you would like Packstack to install OpenStack

# Dashboard (Horizon)

CONFIG_HORIZON_INSTALL=y

# Set to ‘y’ if you would like Packstack to install OpenStack Object

# Storage (Swift)

CONFIG_SWIFT_INSTALL=n

# Set to ‘y’ if you would like Packstack to install OpenStack

# Metering (Ceilometer)

CONFIG_CEILOMETER_INSTALL=y

# Set to ‘y’ if you would like Packstack to install Heat

CONFIG_HEAT_INSTALL=y

# Set to ‘y’ if you would like Packstack to install the OpenStack

# Client packages. An admin “rc” file will also be installed

CONFIG_CLIENT_INSTALL=y

# Comma separated list of NTP servers. Leave plain if Packstack

# should not install ntpd on instances.

CONFIG_NTP_SERVERS=0.au.pool.ntp.org,1.au.pool.ntp.org,2.au.pool.ntp.org,3.au.pool.ntp.org

# Set to ‘y’ if you would like Packstack to install Nagios to monitor

# openstack hosts

CONFIG_NAGIOS_INSTALL=n

# Comma separated list of servers to be excluded from installation in

# case you are running Packstack the second time with the same answer

# file and don’t want Packstack to touch these servers. Leave plain if

# you don’t need to exclude any server.

EXCLUDE_SERVERS=

# The IP address of the server on which to install MySQL

CONFIG_MYSQL_HOST=192.168.1.137

# Username for the MySQL admin user

CONFIG_MYSQL_USER=root

# Password for the MySQL admin user

CONFIG_MYSQL_PW=1279e9bb292c48e5

# The IP address of the server on which to install the QPID service

CONFIG_QPID_HOST=192.168.1.137

CONFIG_QPID_ENABLE_SSL=n

CONFIG_QPID_ENABLE_AUTH=n

CONFIG_NEUTRON_LBAAS_HOSTS=192.168.1.137,192.168.1.127

CONFIG_RH_USER=n

CONFIG_RH_PW=n

CONFIG_RH_BETA_REPO=n

CONFIG_SATELLITE_URL=n

CONFIG_SATELLITE_USER=n

CONFIG_SATELLITE_PW=n

CONFIG_SATELLITE_AKEY=n

CONFIG_SATELLITE_CACERT=n

CONFIG_SATELLITE_PROFILE=n

CONFIG_SATELLITE_FLAGS=novirtinfo

CONFIG_SATELLITE_PROXY=n

CONFIG_SATELLITE_PROXY_USER=n

CONFIG_SATELLITE_PROXY_PW=n

# The IP address of the server on which to install Keystone

CONFIG_KEYSTONE_HOST=192.168.1.137

# The password to use for the Keystone to access DB

CONFIG_KEYSTONE_DB_PW=6cde8da7a3ca4bc0

# The token to use for the Keystone service api

CONFIG_KEYSTONE_ADMIN_TOKEN=c9a7f68c19e448b48c9f520df5771851

# The password to use for the Keystone admin user

CONFIG_KEYSTONE_ADMIN_PW=6fa29c9cb0264385

# The password to use for the Keystone demo user

CONFIG_KEYSTONE_DEMO_PW=6dc04587dd234ac9

# Keystone token format. Use either UUID or PKI

CONFIG_KEYSTONE_TOKEN_FORMAT=PKI

# The IP address of the server on which to install Glance

CONFIG_GLANCE_HOST=192.168.1.137

# The password to use for the Glance to access DB

CONFIG_GLANCE_DB_PW=1c135a665b70481d

# The password to use for the Glance to authenticate with Keystone

CONFIG_GLANCE_KS_PW=9c32f5a3bfb54966

# The IP address of the server on which to install Cinder

CONFIG_CINDER_HOST=192.168.1.137

# The password to use for the Cinder to access DB

CONFIG_CINDER_DB_PW=d9e997c7f6ec4f3b

# The password to use for the Cinder to authenticate with Keystone

CONFIG_CINDER_KS_PW=ae0e15732c104989

# The Cinder backend to use, valid options are: lvm, gluster, nfs

CONFIG_CINDER_BACKEND=lvm

# Create Cinder’s volumes group. This should only be done for testing

# on a proof-of-concept installation of Cinder.  This will create a

# file-backed volume group and is not suitable for production usage.

CONFIG_CINDER_VOLUMES_CREATE=y

# Cinder’s volumes group size. Note that actual volume size will be

# extended with 3% more space for VG metadata.

CONFIG_CINDER_VOLUMES_SIZE=20G

# A single or comma separated list of gluster volume shares to mount,

# eg: ip-address:/vol-name

# CONFIG_CINDER_GLUSTER_MOUNTS=192.168.1.137:/CINDER-VOLUMES

# A single or comma separated list of NFS exports to mount, eg: ip-

# address:/export-name

CONFIG_CINDER_NFS_MOUNTS=

# The IP address of the server on which to install the Nova API

# service

CONFIG_NOVA_API_HOST=192.168.1.137

# The IP address of the server on which to install the Nova Cert

# service

CONFIG_NOVA_CERT_HOST=192.168.1.137

# The IP address of the server on which to install the Nova VNC proxy

CONFIG_NOVA_VNCPROXY_HOST=192.168.1.137

# A comma separated list of IP addresses on which to install the Nova

# Compute services

CONFIG_NOVA_COMPUTE_HOSTS=192.168.1.137,192.168.1.127

# The IP address of the server on which to install the Nova Conductor

# service

CONFIG_NOVA_CONDUCTOR_HOST=192.168.1.137

# The password to use for the Nova to access DB

CONFIG_NOVA_DB_PW=34bf4442200c4c93

# The password to use for the Nova to authenticate with Keystone

CONFIG_NOVA_KS_PW=beaf384bc2b941ca

# The IP address of the server on which to install the Nova Scheduler

# service

CONFIG_NOVA_SCHED_HOST=192.168.1.137

# The overcommitment ratio for virtual to physical CPUs. Set to 1.0

# to disable CPU overcommitment

CONFIG_NOVA_SCHED_CPU_ALLOC_RATIO=32.0

# The overcommitment ratio for virtual to physical RAM. Set to 1.0 to

# disable RAM overcommitment

CONFIG_NOVA_SCHED_RAM_ALLOC_RATIO=3.0

# Private interface for Flat DHCP on the Nova compute servers

CONFIG_NOVA_COMPUTE_PRIVIF=eth1

# The list of IP addresses of the server on which to install the Nova

# Network service

CONFIG_NOVA_NETWORK_HOSTS=192.168.1.137

# Nova network manager

CONFIG_NOVA_NETWORK_MANAGER=nova.network.manager.FlatDHCPManager

# Public interface on the Nova network server

CONFIG_NOVA_NETWORK_PUBIF=eth0

# Private interface for network manager on the Nova network server

CONFIG_NOVA_NETWORK_PRIVIF=eth1

# IP Range for network manager

CONFIG_NOVA_NETWORK_FIXEDRANGE=192.168.32.0/22

# IP Range for Floating IP’s

CONFIG_NOVA_NETWORK_FLOATRANGE=10.3.4.0/22

# Name of the default floating pool to which the specified floating

# ranges are added to

CONFIG_NOVA_NETWORK_DEFAULTFLOATINGPOOL=nova

# Automatically assign a floating IP to new instances

CONFIG_NOVA_NETWORK_AUTOASSIGNFLOATINGIP=n

# First VLAN for private networks

CONFIG_NOVA_NETWORK_VLAN_START=100

# Number of networks to support

CONFIG_NOVA_NETWORK_NUMBER=1

# Number of addresses in each private subnet

CONFIG_NOVA_NETWORK_SIZE=255

# The IP addresses of the server on which to install the Neutron

# server

CONFIG_NEUTRON_SERVER_HOST=192.168.1.137

# The password to use for Neutron to authenticate with Keystone

CONFIG_NEUTRON_KS_PW=53d71f31745b431e

# The password to use for Neutron to access DB

CONFIG_NEUTRON_DB_PW=ab7d7088075b4727

# A comma separated list of IP addresses on which to install Neutron

# L3 agent

CONFIG_NEUTRON_L3_HOSTS=192.168.1.137

# The name of the bridge that the Neutron L3 agent will use for

# external traffic, or ‘provider’ if using provider networks

CONFIG_NEUTRON_L3_EXT_BRIDGE=br-ex

# A comma separated list of IP addresses on which to install Neutron

# DHCP agent

CONFIG_NEUTRON_DHCP_HOSTS=192.168.1.137

# The name of the L2 plugin to be used with Neutron

CONFIG_NEUTRON_L2_PLUGIN=openvswitch

# A comma separated list of IP addresses on which to install Neutron

# metadata agent

CONFIG_NEUTRON_METADATA_HOSTS=192.168.1.137

# A comma separated list of IP addresses on which to install Neutron

# metadata agent

CONFIG_NEUTRON_METADATA_PW=d7ae6de0e6ef4d5e

# The type of network to allocate for tenant networks

CONFIG_NEUTRON_LB_TENANT_NETWORK_TYPE=local

# A comma separated list of VLAN ranges for the Neutron linuxbridge

# plugin

CONFIG_NEUTRON_LB_VLAN_RANGES=

# A comma separated list of interface mappings for the Neutron

# linuxbridge plugin

CONFIG_NEUTRON_LB_INTERFACE_MAPPINGS=

# Type of network to allocate for tenant networks

CONFIG_NEUTRON_OVS_TENANT_NETWORK_TYPE=vlan

# A comma separated list of VLAN ranges for the Neutron openvswitch

# plugin

CONFIG_NEUTRON_OVS_VLAN_RANGES=physnet1:10:20

# A comma separated list of bridge mappings for the Neutron

# openvswitch plugin

CONFIG_NEUTRON_OVS_BRIDGE_MAPPINGS=physnet1:br-eth1

# A comma separated list of colon-separated OVS bridge:interface

# pairs. The interface will be added to the associated bridge.

CONFIG_NEUTRON_OVS_BRIDGE_IFACES=br-eth1:eth1

# A comma separated list of tunnel ranges for the Neutron openvswitch

# plugin

CONFIG_NEUTRON_OVS_TUNNEL_RANGES=

# Override the IP used for GRE tunnels on this hypervisor to the IP

# found on the specified interface (defaults to the HOST IP)

CONFIG_NEUTRON_OVS_TUNNEL_IF=

# The IP address of the server on which to install the OpenStack

# client packages. An admin “rc” file will also be installed

CONFIG_OSCLIENT_HOST=192.168.1.137

# The IP address of the server on which to install Horizon

CONFIG_HORIZON_HOST=192.168.1.137

# To set up Horizon communication over https set this to “y”

CONFIG_HORIZON_SSL=y

# PEM encoded certificate to be used for ssl on the https server,

# leave blank if one should be generated, this certificate should not

# require a passphrase

CONFIG_SSL_CERT=

# Keyfile corresponding to the certificate if one was entered

CONFIG_SSL_KEY=

# The IP address on which to install the Swift proxy service

# (currently only single proxy is supported)

CONFIG_SWIFT_PROXY_HOSTS=192.168.1.137

# The password to use for the Swift to authenticate with Keystone

CONFIG_SWIFT_KS_PW=311d3891e9e140b9

# A comma separated list of IP addresses on which to install the

# Swift Storage services, each entry should take the format

# <ipaddress>[/dev], for example 127.0.0.1/vdb will install /dev/vdb
# on 127.0.0.1 as a swift storage device(packstack does not create the
# filesystem, you must do this first), if /dev is omitted Packstack
# will create a loopback device for a test setup
CONFIG_SWIFT_STORAGE_HOSTS=192.168.1.137
# Number of swift storage zones, this number MUST be no bigger than
# the number of storage devices configured
CONFIG_SWIFT_STORAGE_ZONES=1
# Number of swift storage replicas, this number MUST be no bigger
# than the number of storage zones configured
CONFIG_SWIFT_STORAGE_REPLICAS=1
# FileSystem type for storage nodes
CONFIG_SWIFT_STORAGE_FSTYPE=ext4
# Whether to provision for demo usage and testing
CONFIG_PROVISION_DEMO=n
# The CIDR network address for the floating IP subnet
CONFIG_PROVISION_DEMO_FLOATRANGE=172.24.4.224/28
# Whether to configure tempest for testing
CONFIG_PROVISION_TEMPEST=n
# The uri of the tempest git repository to use
CONFIG_PROVISION_TEMPEST_REPO_URI=https://github.com/openstack/tempest.git
# The revision of the tempest git repository to use
CONFIG_PROVISION_TEMPEST_REPO_REVISION=master
# Whether to configure the ovs external bridge in an all-in-one
# deployment
CONFIG_PROVISION_ALL_IN_ONE_OVS_BRIDGE=n
# The IP address of the server on which to install Heat service
CONFIG_HEAT_HOST=192.168.1.137
# The password used by Heat user to authenticate against MySQL
CONFIG_HEAT_DB_PW=0f593f0e8ac94b20
# The password to use for the Heat to authenticate with Keystone
CONFIG_HEAT_KS_PW=22a4dee89e0e490b
# Set to ‘y’ if you would like Packstack to install Heat CloudWatch
# API
CONFIG_HEAT_CLOUDWATCH_INSTALL=n
# Set to ‘y’ if you would like Packstack to install Heat
# CloudFormation API
CONFIG_HEAT_CFN_INSTALL=n
# The IP address of the server on which to install Heat CloudWatch
# API service
CONFIG_HEAT_CLOUDWATCH_HOST=192.168.1.137
# The IP address of the server on which to install Heat
# CloudFormation API service
CONFIG_HEAT_CFN_HOST=192.168.1.137
# The IP address of the server on which to install Ceilometer
CONFIG_CEILOMETER_HOST=192.168.1.137
# Secret key for signing metering messages.
CONFIG_CEILOMETER_SECRET=70ca460aa5354ef8
# The password to use for Ceilometer to authenticate with Keystone
CONFIG_CEILOMETER_KS_PW=72858e26b4cd40c2
# To subscribe each server to EPEL enter “y”
CONFIG_USE_EPEL=y
# A comma separated list of URLs to any additional yum repositories
# to install
CONFIG_REPO=
# The IP address of the server on which to install the Nagios server
CONFIG_NAGIOS_HOST=192.168.1.137
# The password of the nagiosadmin user on the Nagios server
CONFIG_NAGIOS_PW=c3832621eebd4d48


oVirt 3.3.2 hackery on Fedora 19

December 21, 2013

My final target was to create a two-node oVirt 3.3.2 cluster and virtual machines using replicated glusterfs 3.4.1 volumes based on XFS-formatted partitions. The choice of the IPv4 firewall with iptables for tuning the cluster environment and synchronization is my personal preference. Now I also know that postgres requires enough shared memory allocation, like Informix or Oracle (I was an Informix DBA at Verizon for about 5 years; it was a nice time…).

   oVirt is an open source alternative to VMware vSphere, and provides an awesome KVM management interface for multi-node virtualization.

oVirt 3.3.2 clean install was performed as follows :-

1. Created ovirtmgmt bridge

[root@ovirt1 network-scripts]# cat ifcfg-ovirtmgmt

DEVICE=ovirtmgmt
TYPE=Bridge
ONBOOT=yes
DELAY=0
BOOTPROTO=static
IPADDR=192.168.1.142
NETMASK=255.255.255.0
GATEWAY=192.168.1.1
DNS1=83.221.202.254
NM_CONTROLLED="no"

 In particular (my box) :

 [root@ovirt1 network-scripts]# cat ifcfg-enp2s0

BOOTPROTO=none
TYPE="Ethernet"
ONBOOT="yes"
NAME="enp2s0"
BRIDGE="ovirtmgmt"
HWADDR=00:22:15:63:e4:e2

2. Fixed bug with NFS Server:   https://bugzilla.redhat.com/show_bug.cgi?id=970595

3. Set up IPv4 firewall with iptables

4. Disabled NetworkManager and enabled network service 

5. To be able to perform the current 3.3.2 install on F19, set up per

http://postgresql.1045698.n5.nabble.com/How-to-install-latest-stable-postgresql-on-Debian-td5005417.html

# sysctl -w kernel.shmmax=419430400
kernel.shmmax = 419430400
# sysctl -n kernel.shmmax
419430400 
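To make the value survive reboots it can also be written to /etc/sysctl.conf:

# echo "kernel.shmmax = 419430400" >> /etc/sysctl.conf
# sysctl -p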

This appears to be a known issue (http://www.ovirt.org/OVirt_3.3.2_release_notes): on Fedora 19 with recent versions of PostgreSQL it may be necessary to manually change the kernel.shmmax setting (BZ 1039616).

Otherwise, setup fails to perform Misc Configuration, and systemctl status postgresql.service reports a server crash during setup. Runtime shared memory mapping :-

[root@ovirt1 ~]# systemctl list-units | grep postgres
postgresql.service          loaded active running   PostgreSQL database server

[root@ovirt1 ~]# ipcs -a

------ Message Queues --------
key        msqid      owner      perms      used-bytes   messages

------ Shared Memory Segments --------
key        shmid      owner      perms      bytes      nattch     status
0x00000000 0          root       644        80         2
0x00000000 32769      root       644        16384      2
0x00000000 65538      root       644        280        2
0x00000000 163843     boris      600        4194304    2          dest
0x0052e2c1 360452     postgres   600        43753472   8
0x00000000 294917     boris      600        2097152    2          dest
0x0112e4a1 393222     root       600        1000       11
0x00000000 425991     boris      600        393216     2          dest
0x00000000 557065     boris      600        1048576    2          dest

------ Semaphore Arrays --------
key        semid      owner      perms      nsems
0x000000a7 65536      root       600        1
0x0052e2c1 458753     postgres   600        17
0x0052e2c2 491522     postgres   600        17
0x0052e2c3 524291     postgres   600        17
0x0052e2c4 557060     postgres   600        17
0x0052e2c5 589829     postgres   600        17
0x0052e2c6 622598     postgres   600        17
0x0052e2c7 655367     postgres   600        17
0x0052e2c8 688136     postgres   600        17
0x0052e2c9 720905     postgres   600        17
0x0052e2ca 753674     postgres   600        17

After creating the replicated gluster volume ovirt-data02 via Web Admin, I manually ran :

gluster volume set ovirt-data02 auth.allow 192.168.1.* ;
gluster volume set ovirt-data02 group virt  ;
gluster volume set ovirt-data02 cluster.quorum-type auto ;
gluster volume set ovirt-data02 performance.cache-size 1GB ;
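
The options applied above can be double-checked with gluster volume info, e.g. :

# gluster volume info ovirt-data02 | grep -E "auth.allow|quorum-type|cache-size"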

Currently apache-sshd is at 0.9.0-3 : https://bugzilla.redhat.com/show_bug.cgi?id=1021273

Adding a new host works fine; /etc/sysconfig/iptables on the master server just needs :
-A INPUT -p tcp -m multiport --dport 24007:24108  -j ACCEPT
-A INPUT -p tcp --dport 111 -j ACCEPT
-A INPUT -p udp --dport 111 -j ACCEPT
-A INPUT -p tcp -m multiport --dport 38465:38485 -j ACCEPT
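
After editing /etc/sysconfig/iptables the rules have to be reloaded; assuming the iptables-services style service is in use (firewalld is not used in this setup), something like :

# service iptables restart
# iptables -L -n | grep -E "24007|38465"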

Personally, I experienced one issue during the second host deployment which required a vdsmd service restart on the second host to allow the system to bring it up at the end of installation. Both installs behaved exactly the same way.

[root@hv02 ~]# service vdsmd status
Redirecting to /bin/systemctl status  vdsmd.service
vdsmd.service – Virtual Desktop Server Manager
Loaded: loaded (/usr/lib/systemd/system/vdsmd.service; enabled)
Active: active (running) since Tue 2013-12-24 15:40:40 MSK; 50s ago
Process: 2896 ExecStartPre=/usr/libexec/vdsm/vdsmd_init_common.sh --pre-start (code=exited, status=0/SUCCESS)

Main PID: 3166 (vdsm)
CGroup: name=systemd:/system/vdsmd.service
└─3166 /usr/bin/python /usr/share/vdsm/vdsm

Dec 24 15:40:41 hv02.localdomain python[3192]: detected unhandled Python exception in '/usr/bin/vdsm-tool'
Dec 24 15:40:41 hv02.localdomain vdsm[3166]: [427B blob data]
Dec 24 15:40:41 hv02.localdomain vdsm[3166]: vdsm vds WARNING Unable to load the json rpc server module. Ple…led.
Dec 24 15:40:41 hv02.localdomain python[3166]: DIGEST-MD5 client step 2
Dec 24 15:40:41 hv02.localdomain python[3166]: DIGEST-MD5 parse_server_challenge()
Dec 24 15:40:41 hv02.localdomain python[3166]: DIGEST-MD5 ask_user_info()
Dec 24 15:40:41 hv02.localdomain python[3166]: DIGEST-MD5 client step 2
Dec 24 15:40:41 hv02.localdomain python[3166]: DIGEST-MD5 ask_user_info()
Dec 24 15:40:41 hv02.localdomain python[3166]: DIGEST-MD5 make_client_response()
Dec 24 15:40:41 hv02.localdomain python[3166]: DIGEST-MD5 client step 3

[root@hv02 ~]# service vdsmd restart
Redirecting to /bin/systemctl restart  vdsmd.service

[root@hv02 ~]# service vdsmd status
Redirecting to /bin/systemctl status  vdsmd.service
vdsmd.service – Virtual Desktop Server Manager
Loaded: loaded (/usr/lib/systemd/system/vdsmd.service; enabled)
Active: active (running) since Tue 2013-12-24 15:41:42 MSK; 2s ago
Process: 3355 ExecStopPost=/usr/libexec/vdsm/vdsmd_init_common.sh --post-stop (code=exited, status=0/SUCCESS)
Process: 3358 ExecStartPre=/usr/libexec/vdsm/vdsmd_init_common.sh --pre-start (code=exited, status=0/SUCCESS)

Main PID: 3418 (vdsm)
CGroup: name=systemd:/system/vdsmd.service
└─3418 /usr/bin/python /usr/share/vdsm/vdsm

Dec 24 15:41:42 hv02.localdomain vdsmd_init_common.sh[3358]: vdsm: Running test_conflicting_conf
Dec 24 15:41:42 hv02.localdomain vdsmd_init_common.sh[3358]: SUCCESS: ssl configured to true. No conflicts
Dec 24 15:41:42 hv02.localdomain systemd[1]: Started Virtual Desktop Server Manager.
Dec 24 15:41:43 hv02.localdomain vdsm[3418]: vdsm vds WARNING Unable to load the json rpc server module. Ple…led.
Dec 24 15:41:43 hv02.localdomain python[3418]: DIGEST-MD5 client step 2
Dec 24 15:41:43 hv02.localdomain python[3418]: DIGEST-MD5 parse_server_challenge()
Dec 24 15:41:43 hv02.localdomain python[3418]: DIGEST-MD5 ask_user_info()
Dec 24 15:41:43 hv02.localdomain python[3418]: DIGEST-MD5 client step 2
Dec 24 15:41:43 hv02.localdomain python[3418]: DIGEST-MD5 ask_user_info()
Dec 24 15:41:43 hv02.localdomain python[3418]: DIGEST-MD5 make_client_response()
Dec 24 15:41:43 hv02.localdomain python[3418]: DIGEST-MD5 client step 3

Moreover, if the same report comes up during the core install on the first server while awaiting the host to become VDSM operational, the install will hang for a while and finally won't bring up the master server. The workaround is the same. Once again, this is just my personal experience; it is a random error during the core "all in one" install.

Final iptables state on both cluster nodes :-

[root@ovirt1 ~]# iptables -L
Chain INPUT (policy ACCEPT)
target     prot opt source               destination
ACCEPT     all  --  anywhere             anywhere
ACCEPT     icmp --  anywhere             anywhere             icmp any
ACCEPT     all  --  anywhere             anywhere             state RELATED,ESTABLISHED
ACCEPT     tcp  --  anywhere             anywhere             state NEW tcp dpt:ssh
ACCEPT     tcp  --  anywhere             anywhere             state NEW tcp dpt:postgres
ACCEPT     tcp  --  anywhere             anywhere             state NEW tcp dpt:https
ACCEPT     tcp  --  anywhere             anywhere             state NEW tcp dpts:xprtld:6166
ACCEPT     tcp  --  anywhere             anywhere             state NEW tcp dpts:49152:49216
ACCEPT     tcp  --  anywhere             anywhere             state NEW tcp dpt:synchronet-db
ACCEPT     tcp  --  anywhere             anywhere             state NEW tcp dpt:sunrpc
ACCEPT     udp  --  anywhere             anywhere             state NEW udp dpt:sunrpc
ACCEPT     tcp  --  anywhere             anywhere             state NEW tcp dpt:pftp
ACCEPT     udp  --  anywhere             anywhere             state NEW udp dpt:pftp
ACCEPT     tcp  --  anywhere             anywhere             state NEW tcp dpt:rquotad
ACCEPT     udp  --  anywhere             anywhere             state NEW udp dpt:rquotad
ACCEPT     tcp  --  anywhere             anywhere             state NEW tcp dpt:892
ACCEPT     udp  --  anywhere             anywhere             state NEW udp dpt:892
ACCEPT     tcp  --  anywhere             anywhere             state NEW tcp dpt:nfs
ACCEPT     udp  --  anywhere             anywhere             state NEW udp dpt:filenet-rpc
ACCEPT     tcp  --  anywhere             anywhere             state NEW tcp dpt:32803
ACCEPT     tcp  --  anywhere             anywhere             state NEW tcp dpt:http
ACCEPT     tcp  --  anywhere             anywhere             multiport dports 24007:24108
ACCEPT     tcp  --  anywhere             anywhere             tcp dpt:sunrpc
ACCEPT     udp  --  anywhere             anywhere             udp dpt:sunrpc
ACCEPT     tcp  --  anywhere             anywhere             multiport dports 38465:38485
REJECT     all  --  anywhere             anywhere             reject-with icmp-host-prohibited
Chain FORWARD (policy ACCEPT)
target     prot opt source               destination
Chain OUTPUT (policy ACCEPT)
target     prot opt source               destination

[root@ovirt1 ~]# ssh ovirt2

Last login: Sat Dec 21 23:17:05 2013 from ovirt1.localdomain

[root@ovirt2 ~]# iptables -L
Chain INPUT (policy ACCEPT)
target     prot opt source               destination
ACCEPT     all  --  anywhere             anywhere             state RELATED,ESTABLISHED
ACCEPT     all  --  anywhere             anywhere
ACCEPT     tcp  --  anywhere             anywhere             tcp dpt:54321
ACCEPT     tcp  --  anywhere             anywhere             tcp dpt:ssh
ACCEPT     udp  --  anywhere             anywhere             udp dpt:snmp
ACCEPT     tcp  --  anywhere             anywhere             tcp dpt:16514
ACCEPT     tcp  --  anywhere             anywhere             multiport dports xprtld:6166
ACCEPT     tcp  --  anywhere             anywhere             multiport dports 49152:49216
ACCEPT     tcp  --  anywhere             anywhere             tcp dpt:24007
ACCEPT     tcp  --  anywhere             anywhere             tcp dpt:webcache
ACCEPT     udp  --  anywhere             anywhere             udp dpt:sunrpc
ACCEPT     tcp  --  anywhere             anywhere             tcp dpt:38465
ACCEPT     tcp  --  anywhere             anywhere             tcp dpt:38466
ACCEPT     tcp  --  anywhere             anywhere             tcp dpt:sunrpc
ACCEPT     tcp  --  anywhere             anywhere             tcp dpt:38467
ACCEPT     tcp  --  anywhere             anywhere             tcp dpt:nfs
ACCEPT     tcp  --  anywhere             anywhere             tcp dpt:38469
ACCEPT     tcp  --  anywhere             anywhere             tcp dpt:39543
ACCEPT     tcp  --  anywhere             anywhere             tcp dpt:55863
ACCEPT     tcp  --  anywhere             anywhere             tcp dpt:38468
ACCEPT     udp  --  anywhere             anywhere             udp dpt:963
ACCEPT     tcp  --  anywhere             anywhere             tcp dpt:965
ACCEPT     tcp  --  anywhere             anywhere             tcp dpt:ctdb
ACCEPT     tcp  --  anywhere             anywhere             tcp dpt:netbios-ssn
ACCEPT     tcp  --  anywhere             anywhere             tcp dpt:microsoft-ds
ACCEPT     tcp  --  anywhere             anywhere             tcp dpts:24007:24108
ACCEPT     tcp  --  anywhere             anywhere             tcp dpts:49152:49251
REJECT     all  --  anywhere             anywhere             reject-with icmp-host-prohibited
Chain FORWARD (policy ACCEPT)
target     prot opt source               destination
REJECT     all  --  anywhere             anywhere             PHYSDEV match ! --physdev-is-bridged reject-with icmp-host-prohibited
Chain OUTPUT (policy ACCEPT)
target     prot opt source               destination

Creating XFS replicated Gluster Storage

[root@ovirt1 ~]# pvcreate /dev/sda3
[root@ovirt1 ~]# vgcreate vg_virt /dev/sda3
[root@ovirt1 ~]# lvcreate -L 91000M -n lv_gluster  vg_virt  /dev/sda3
Logical volume "lv_gluster" created
[root@ovirt1 ~]# lvscan
ACTIVE            '/dev/fedora00/root' [170.90 GiB] inherit
ACTIVE            '/dev/fedora00/swap' [7.89 GiB] inherit
ACTIVE            '/dev/vg_virt/lv_gluster' [88.87 GiB] inherit
[root@ovirt1 ~]# mkfs.xfs -f -i size=512 /dev/mapper/vg_virt-lv_gluster

meta-data=/dev/mapper/vg_virt-lv_gluster isize=512    agcount=16, agsize=1456000 blks
=                       sectsz=4096  attr=2, projid32bit=0
data     =                       bsize=4096   blocks=23296000, imaxpct=25
=                       sunit=0      swidth=0 blks
naming   =version 2              bsize=4096   ascii-ci=0
log      =internal log           bsize=4096   blocks=11375, version=2
=                       sectsz=4096  sunit=1 blks, lazy-count=1
realtime =none                   extsz=4096   blocks=0, rtextents=0

[root@ovirt1 ~]# mkdir /data1
[root@ovirt1 ~]# chown -R 36:36 /data1
[root@ovirt1 ~]# echo "/dev/mapper/vg_virt-lv_gluster  /data1  xfs     defaults    1 2" >> /etc/fstab
[root@ovirt1 ~]# mount -a
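
A quick sanity check that the brick filesystem is mounted as expected (paths as above) :

# df -hT /data1
# xfs_info /data1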

Creating a replicated gluster volume based on the XFS LVM partition via the Web Admin Console

The last line of the df output below corresponds to the ovirt-data05 replicated gluster volume based on the XFS-formatted LVM partition /dev/mapper/vg_virt-lv_gluster mounted via /etc/fstab (similar on both peers); a rough CLI equivalent is sketched after the output.

[root@ovirt1 ~]# df -h
Filesystem                               Size  Used Avail Use% Mounted on
/dev/mapper/fedora00-root                169G   35G  125G  22% /
devtmpfs                                 3.9G     0  3.9G   0% /dev
tmpfs                                    3.9G  152K  3.9G   1% /dev/shm
tmpfs                                    3.9G  988K  3.9G   1% /run
tmpfs                                    3.9G     0  3.9G   0% /sys/fs/cgroup
tmpfs                                    3.9G   80K  3.9G   1% /tmp
/dev/sda1                             477M   87M  361M  20% /boot

ovirt1.localdomain:ovirt-data02            169G   35G  125G  22% /rhev/data-center/mnt/glusterSD/ovirt1.localdomain:ovirt-data02

192.168.1.137:/var/lib/exports/export    169G   35G  125G  22% /rhev/data-center/mnt/192.168.1.137:_var_lib_exports_export

ovirt1.localdomain:/var/lib/exports/iso  169G   35G  125G  22% /rhev/data-center/mnt/ovirt1.localdomain:_var_lib_exports_iso

/dev/mapper/vg_virt-lv_gluster            89G   36M   89G   1% /data1

ovirt1.localdomain:ovirt-data05         89G   36M   89G   1% /rhev/data-center/mnt/glusterSD/ovirt1.localdomain:ovirt-data05
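
For reference, roughly the same ovirt-data05 volume could be created from the command line instead of the Web Admin Console; a hedged sketch, assuming bricks under /data1 on both peers (the actual brick paths chosen by the Web Admin are not shown above) :

# mkdir /data1/ovirt-data05 ; ssh ovirt2 mkdir /data1/ovirt-data05
# gluster volume create ovirt-data05 replica 2 ovirt1.localdomain:/data1/ovirt-data05 ovirt2.localdomain:/data1/ovirt-data05
# gluster volume start ovirt-data05
# chown -R 36:36 /data1/ovirt-data05 ; ssh ovirt2 chown -R 36:36 /data1/ovirt-data05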

Fedora 20 KVM installation on XFS Gluster domain

