RDO Setup Two Real Node (Controller+Compute) IceHouse Neutron ML2&OVS&VXLAN Cluster on CentOS 7

July 29, 2014

As of 07/28/2014 the bug https://ask.openstack.org/en/question/35705/attempt-of-rdo-aio-install-icehouse-on-centos-7/ is still pending, and the workaround suggested there should be applied during a two-node RDO packstack installation.
A successful Neutron ML2&OVS&VXLAN multi-node setup requires a correct version of plugin.ini -> /etc/neutron/plugins/ml2/ml2_conf.ini, which packstack appears to generate with errors.

Two boxes have been set up, each one having 2 NICs (enp2s0, enp5s1), for the
Controller && Compute Nodes setup. Before running
`packstack --answer-file=TwoNodeVXLAN.txt` SELinux was set to permissive on both nodes. Both enp5s1 interfaces were assigned IPs and set to promiscuous mode (192.168.0.127, 192.168.0.137). The firewalld and NetworkManager services are disabled; the IPv4 firewall with iptables and the network service are enabled and running. Packstack is bound to the public IP of interface enp2s0, 192.168.1.127; the Compute Node is 192.168.1.137 (view answer-file).
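
For reference, the pre-install preparation described above boils down to a few commands run on both nodes. This is only a sketch of a typical sequence, not the exact commands used here (it assumes the iptables-services package is installed):

# set SELinux to permissive for the current boot and persistently
setenforce 0
sed -i 's/^SELINUX=.*/SELINUX=permissive/' /etc/selinux/config
# disable NetworkManager and firewalld, enable the legacy network and iptables services
systemctl disable NetworkManager firewalld
systemctl enable network iptables
# put the VXLAN data interface into promiscuous mode
ip link set dev enp5s1 promisc on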

Setup configuration

- Controller node: Nova, Keystone, Cinder, Glance, Neutron (using Open vSwitch plugin && VXLAN)
- Compute node: Nova (nova-compute), Neutron (openvswitch-agent)

icehouse1.localdomain   –  Controller (192.168.1.127)
icehouse2.localdomain   –  Compute   (192.168.1.137)

[root@icehouse1 ~(keystone_admin)]# cat TwoNodeVXLAN.txt

[general]
CONFIG_SSH_KEY=/root/.ssh/id_rsa.pub
CONFIG_MYSQL_INSTALL=y
CONFIG_GLANCE_INSTALL=y
CONFIG_CINDER_INSTALL=y
CONFIG_NOVA_INSTALL=y
CONFIG_NEUTRON_INSTALL=y
CONFIG_HORIZON_INSTALL=y
CONFIG_SWIFT_INSTALL=n
CONFIG_CEILOMETER_INSTALL=y
CONFIG_HEAT_INSTALL=n
CONFIG_CLIENT_INSTALL=y
CONFIG_NTP_SERVERS=
CONFIG_NAGIOS_INSTALL=y
EXCLUDE_SERVERS=
CONFIG_DEBUG_MODE=n
CONFIG_VMWARE_BACKEND=n
CONFIG_MYSQL_HOST=192.168.1.127
CONFIG_MYSQL_USER=root
CONFIG_MYSQL_PW=a7f0349d1f7a4ab0
CONFIG_AMQP_SERVER=rabbitmq
CONFIG_AMQP_HOST=192.168.1.127
CONFIG_AMQP_ENABLE_SSL=n
CONFIG_AMQP_ENABLE_AUTH=n
CONFIG_AMQP_NSS_CERTDB_PW=0915db728b00409caf4b6e433b756308
CONFIG_AMQP_SSL_PORT=5671
CONFIG_AMQP_SSL_CERT_FILE=/etc/pki/tls/certs/amqp_selfcert.pem
CONFIG_AMQP_SSL_KEY_FILE=/etc/pki/tls/private/amqp_selfkey.pem
CONFIG_AMQP_SSL_SELF_SIGNED=y
CONFIG_AMQP_AUTH_USER=amqp_user
CONFIG_AMQP_AUTH_PASSWORD=f16d26ff54cd4033
CONFIG_KEYSTONE_HOST=192.168.1.127
CONFIG_KEYSTONE_DB_PW=32419736ee454c2c
CONFIG_KEYSTONE_ADMIN_TOKEN=836891519cb640458551556447a5a644
CONFIG_KEYSTONE_ADMIN_PW=4ebab181262d4224
CONFIG_KEYSTONE_DEMO_PW=56eb6360019e45bf
CONFIG_KEYSTONE_TOKEN_FORMAT=PKI
CONFIG_GLANCE_HOST=192.168.1.127
CONFIG_GLANCE_DB_PW=e51feef536104b49
CONFIG_GLANCE_KS_PW=2458775cd64848cb
CONFIG_CINDER_HOST=192.168.1.127
CONFIG_CINDER_DB_PW=bcf3b09c9c4144e2
CONFIG_CINDER_KS_PW=888c59cc113e4489
CONFIG_CINDER_BACKEND=lvm
CONFIG_CINDER_VOLUMES_CREATE=y
CONFIG_CINDER_VOLUMES_SIZE=15G
CONFIG_CINDER_GLUSTER_MOUNTS=
CONFIG_CINDER_NFS_MOUNTS=
CONFIG_VCENTER_HOST=192.168.1.127
CONFIG_VCENTER_USER=
CONFIG_VCENTER_PASSWORD=
CONFIG_NOVA_API_HOST=192.168.1.127
CONFIG_NOVA_CERT_HOST=192.168.1.127
CONFIG_NOVA_VNCPROXY_HOST=192.168.1.127
CONFIG_NOVA_COMPUTE_HOSTS=192.168.1.137
CONFIG_NOVA_CONDUCTOR_HOST=192.168.1.127
CONFIG_NOVA_DB_PW=8cc18e22eaeb4c4d
CONFIG_NOVA_KS_PW=aaf8cf4c60224150
CONFIG_NOVA_SCHED_HOST=192.168.1.127
CONFIG_NOVA_SCHED_CPU_ALLOC_RATIO=16.0
CONFIG_NOVA_SCHED_RAM_ALLOC_RATIO=1.5
CONFIG_NOVA_COMPUTE_PRIVIF=p4p1
CONFIG_NOVA_NETWORK_HOSTS=192.168.1.127
CONFIG_NOVA_NETWORK_MANAGER=nova.network.manager.FlatDHCPManager
CONFIG_NOVA_NETWORK_PUBIF=enp2s0
CONFIG_NOVA_NETWORK_PRIVIF=enp5s1
CONFIG_NOVA_NETWORK_FIXEDRANGE=192.168.32.0/22
CONFIG_NOVA_NETWORK_FLOATRANGE=10.3.4.0/22
CONFIG_NOVA_NETWORK_DEFAULTFLOATINGPOOL=nova
CONFIG_NOVA_NETWORK_AUTOASSIGNFLOATINGIP=n
CONFIG_NOVA_NETWORK_VLAN_START=100
CONFIG_NOVA_NETWORK_NUMBER=1
CONFIG_NOVA_NETWORK_SIZE=255
CONFIG_VCENTER_HOST=192.168.1.127
CONFIG_VCENTER_USER=
CONFIG_VCENTER_PASSWORD=
CONFIG_VCENTER_CLUSTER_NAME=
CONFIG_NEUTRON_SERVER_HOST=192.168.1.127
CONFIG_NEUTRON_KS_PW=5f11f559abc94440
CONFIG_NEUTRON_DB_PW=0302dcfeb69e439f
CONFIG_NEUTRON_L3_HOSTS=192.168.1.127
CONFIG_NEUTRON_L3_EXT_BRIDGE=br-ex
CONFIG_NEUTRON_DHCP_HOSTS=192.168.1.127
CONFIG_NEUTRON_LBAAS_HOSTS=
CONFIG_NEUTRON_L2_PLUGIN=ml2
CONFIG_NEUTRON_METADATA_HOSTS=192.168.1.127
CONFIG_NEUTRON_METADATA_PW=227f7bbc8b6f4f74
############################################
CONFIG_NEUTRON_ML2_TYPE_DRIVERS=vxlan
CONFIG_NEUTRON_ML2_TENANT_NETWORK_TYPES=vxlan
############################################
CONFIG_NEUTRON_ML2_MECHANISM_DRIVERS=openvswitch
CONFIG_NEUTRON_ML2_FLAT_NETWORKS=*
CONFIG_NEUTRON_ML2_VLAN_RANGES=
CONFIG_NEUTRON_ML2_TUNNEL_ID_RANGES=1001:2000
CONFIG_NEUTRON_ML2_VXLAN_GROUP=239.1.1.2
CONFIG_NEUTRON_ML2_VNI_RANGES=1001:2000
CONFIG_NEUTRON_L2_AGENT=openvswitch
CONFIG_NEUTRON_LB_TENANT_NETWORK_TYPE=local
CONFIG_NEUTRON_LB_VLAN_RANGES=
CONFIG_NEUTRON_LB_INTERFACE_MAPPINGS=
#########################################
CONFIG_NEUTRON_OVS_TENANT_NETWORK_TYPE=vxlan
CONFIG_NEUTRON_OVS_VLAN_RANGES=
CONFIG_NEUTRON_OVS_BRIDGE_MAPPINGS=physnet1:br-ex
CONFIG_NEUTRON_OVS_BRIDGE_IFACES=
CONFIG_NEUTRON_OVS_TUNNEL_RANGES=1001:2000
CONFIG_NEUTRON_OVS_TUNNEL_IF=enp5s1
########################################
CONFIG_NEUTRON_OVS_VXLAN_UDP_PORT=4789
CONFIG_OSCLIENT_HOST=192.168.1.127
CONFIG_HORIZON_HOST=192.168.1.127
CONFIG_HORIZON_SSL=n
CONFIG_SSL_CERT=
CONFIG_SSL_KEY=
CONFIG_SWIFT_PROXY_HOSTS=192.168.1.127
CONFIG_SWIFT_KS_PW=63d3108083ac495b
CONFIG_SWIFT_STORAGE_HOSTS=192.168.1.127
CONFIG_SWIFT_STORAGE_ZONES=1
CONFIG_SWIFT_STORAGE_REPLICAS=1
CONFIG_SWIFT_STORAGE_FSTYPE=ext4
CONFIG_SWIFT_HASH=ebf91dbf930c49ca
CONFIG_SWIFT_STORAGE_SIZE=2G
CONFIG_PROVISION_DEMO=y
CONFIG_PROVISION_TEMPEST=n
CONFIG_PROVISION_DEMO_FLOATRANGE=172.24.4.224/28
CONFIG_PROVISION_TEMPEST_REPO_URI=https://github.com/openstack/tempest.git
CONFIG_PROVISION_TEMPEST_REPO_REVISION=master
CONFIG_PROVISION_ALL_IN_ONE_OVS_BRIDGE=n
CONFIG_HEAT_HOST=192.168.1.127
CONFIG_HEAT_DB_PW=f0be2b0fa2044183
CONFIG_HEAT_AUTH_ENC_KEY=29419b1f4e574e5e
CONFIG_HEAT_KS_PW=d5c39c630c364c5b
CONFIG_HEAT_CLOUDWATCH_INSTALL=n
CONFIG_HEAT_CFN_INSTALL=n
CONFIG_HEAT_CLOUDWATCH_HOST=192.168.1.127
CONFIG_HEAT_CFN_HOST=192.168.1.127
CONFIG_CEILOMETER_HOST=192.168.1.127
CONFIG_CEILOMETER_SECRET=d1ed1459830e4288
CONFIG_CEILOMETER_KS_PW=84f18f2e478f4230
CONFIG_MONGODB_HOST=192.168.1.127
CONFIG_NAGIOS_HOST=192.168.1.127
CONFIG_NAGIOS_PW=e2d02c03b5664ffe
CONFIG_USE_EPEL=y
CONFIG_REPO=
CONFIG_RH_USER=
CONFIG_RH_PW=
CONFIG_RH_BETA_REPO=n
CONFIG_SATELLITE_URL=
CONFIG_SATELLITE_USER=
CONFIG_SATELLITE_PW=
CONFIG_SATELLITE_AKEY=
CONFIG_SATELLITE_CACERT=
CONFIG_SATELLITE_PROFILE=
CONFIG_SATELLITE_FLAGS=
CONFIG_SATELLITE_PROXY=
CONFIG_SATELLITE_PROXY_USER=
CONFIG_SATELLITE_PROXY_PW=

[root@icehouse1 ~(keystone_admin)]# cat /etc/neutron/plugin.ini
[ml2]
type_drivers = vxlan
tenant_network_types = vxlan
mechanism_drivers =openvswitch
[ml2_type_flat]
[ml2_type_vlan]
[ml2_type_gre]
[ml2_type_vxlan]
vni_ranges =1001:2000
vxlan_group =239.1.1.2
[OVS]
local_ip=192.168.1.127
enable_tunneling=True
integration_bridge=br-int
tunnel_bridge=br-tun
[securitygroup]
enable_security_group = True
firewall_driver=neutron.agent.linux.iptables_firewall.OVSHybridIptablesFirewallDriver
[agent]
polling_interval=2
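
If plugin.ini has to be corrected by hand after packstack (as the bug referenced above implies), the change only takes effect once the Neutron services are restarted; a minimal sketch:

# on the controller
systemctl restart neutron-server neutron-openvswitch-agent
# on the compute node
systemctl restart neutron-openvswitch-agent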

[root@icehouse1 ~(keystone_admin)]# ls -l /etc/neutron
total 64
-rw-r--r--. 1 root root      193 Jul 29 16:15 api-paste.ini
-rw-r-----. 1 root neutron  3853 Jul 29 16:14 dhcp_agent.ini
-rw-r-----. 1 root neutron   208 Jul 29 16:15 fwaas_driver.ini
-rw-r-----. 1 root neutron  3431 Jul 29 16:14 l3_agent.ini
-rw-r-----. 1 root neutron  1400 Jun  8 01:38 lbaas_agent.ini
-rw-r-----. 1 root neutron  1481 Jul 29 16:15 metadata_agent.ini
-rw-r-----. 1 root neutron 19150 Jul 29 16:15 neutron.conf
lrwxrwxrwx. 1 root root       37 Jul 29 16:14 plugin.ini -> /etc/neutron/plugins/ml2/ml2_conf.ini
-rw-r--r--. 1 root root      452 Jul 29 17:11 plugin.out
drwxr-xr-x. 4 root root       34 Jul 29 16:14 plugins
-rw-r-----. 1 root neutron  6148 Jun  8 01:38 policy.json
-rw-r--r--. 1 root root       78 Jul  2 15:11 release
-rw-r--r--. 1 root root     1216 Jun  8 01:38 rootwrap.conf

On Controller

[root@icehouse1 ~(keystone_admin)]# ovs-vsctl show
2742fa6e-78bf-440e-a2c1-cb48242ea565
    Bridge br-ex
        Port phy-br-ex
            Interface phy-br-ex
        Port "qg-76f29fee-9c"
            Interface "qg-76f29fee-9c"
                type: internal
        Port br-ex
            Interface br-ex
                type: internal
        Port "enp2s0"
            Interface "enp2s0"
    Bridge br-tun
        Port "vxlan-c0a80089"
            Interface "vxlan-c0a80089"
                type: vxlan
                options: {in_key=flow, local_ip="192.168.0.127", out_key=flow, remote_ip="192.168.0.137"}
        Port patch-int
            Interface patch-int
                type: patch
                options: {peer=patch-tun}
        Port br-tun
            Interface br-tun
                type: internal
    Bridge br-int
        Port "qr-8cad61e3-ce"
            tag: 1
            Interface "qr-8cad61e3-ce"
                type: internal
        Port patch-tun
            Interface patch-tun
                type: patch
                options: {peer=patch-int}
        Port "tapff8659ee-8d"
            tag: 1
            Interface "tapff8659ee-8d"
                type: internal
        Port br-int
            Interface br-int
                type: internal
        Port int-br-ex
            Interface int-br-ex
    ovs_version: "2.0.0"

On Compute

[root@icehouse2 ~]# ovs-vsctl show
642d8c9f-116e-4b44-842a-e975e506fe24
    Bridge br-ex
        Port phy-br-ex
            Interface phy-br-ex
        Port br-ex
            Interface br-ex
                type: internal
    Bridge br-tun
        Port br-tun
            Interface br-tun
                type: internal
        Port patch-int
            Interface patch-int
                type: patch
                options: {peer=patch-tun}
        Port "vxlan-c0a8007f"
            Interface "vxlan-c0a8007f"
                type: vxlan
                options: {in_key=flow, local_ip="192.168.0.137", out_key=flow, remote_ip="192.168.0.127"}
    Bridge br-int
        Port patch-tun
            Interface patch-tun
                type: patch
                options: {peer=patch-int}
        Port int-br-ex
            Interface int-br-ex
        Port "qvodc2c598a-b3"
            tag: 1
            Interface "qvodc2c598a-b3"
        Port br-int
            Interface br-int
                type: internal
        Port "qvo25cbd1fa-96"
            tag: 1
            Interface "qvo25cbd1fa-96"
    ovs_version: "2.0.0"
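
A quick way to confirm that the vxlan-* ports above really carry tenant traffic is to watch UDP port 4789 (the value of CONFIG_NEUTRON_OVS_VXLAN_UDP_PORT) on the tunnel interface while pinging between instances on different nodes; a sketch, run on either node:

tcpdump -n -i enp5s1 udp port 4789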


RDO IceHouse Setup Two Node (Controller+Compute) Neutron ML2&OVS&VLAN Cluster on Fedora 20

June 22, 2014

Two KVMs have been created, each one having 2 virtual NICs (eth0, eth1), for the Controller && Compute Nodes setup. Before running `packstack --answer-file=TwoNodeML2&OVS&VLAN.txt` SELinux was set to permissive on both nodes. Both eth1 interfaces were assigned IPs from the VLAN libvirt subnet before installation and set to promiscuous mode (192.168.122.127, 192.168.122.137). Packstack is bound to the public IP of eth0, 192.169.142.127; the Compute Node is 192.169.142.137.

The answer file used by packstack is here: http://textuploader.com/k9xo

 [root@ip-192-169-142-127 ~(keystone_admin)]# openstack-status
== Nova services ==
openstack-nova-api:                     active
openstack-nova-cert:                    active
openstack-nova-compute:                 inactive  (disabled on boot)
openstack-nova-network:                 inactive  (disabled on boot)
openstack-nova-scheduler:               active
openstack-nova-volume:                  inactive  (disabled on boot)
openstack-nova-conductor:               active
== Glance services ==
openstack-glance-api:                   active
openstack-glance-registry:              active
== Keystone service ==
openstack-keystone:                     active
== Horizon service ==
openstack-dashboard:                    active
== neutron services ==
neutron-server:                         active
neutron-dhcp-agent:                     active
neutron-l3-agent:                       active
neutron-metadata-agent:                 active
neutron-lbaas-agent:                    inactive  (disabled on boot)
neutron-openvswitch-agent:              active
neutron-linuxbridge-agent:              inactive  (disabled on boot)
neutron-ryu-agent:                      inactive  (disabled on boot)
neutron-nec-agent:                      inactive  (disabled on boot)
neutron-mlnx-agent:                     inactive  (disabled on boot)
== Cinder services ==
openstack-cinder-api:                   active
openstack-cinder-scheduler:             active
openstack-cinder-volume:                active
openstack-cinder-backup:                inactive  (disabled on boot)
== Ceilometer services ==
openstack-ceilometer-api:               failed
openstack-ceilometer-central:           active
openstack-ceilometer-compute:           inactive  (disabled on boot)
openstack-ceilometer-collector:         active
openstack-ceilometer-alarm-notifier:    active
openstack-ceilometer-alarm-evaluator:   active
== Support services ==
openvswitch:                            active
dbus:                                   active
tgtd:                                   active
rabbitmq-server:                        active
memcached:                              active
== Keystone users ==
+———————————-+————+———+———————-+
|                id                |    name    | enabled |        email         |
+———————————-+————+———+———————-+
| 42ceb5a601b041f0a5669868dd7f7663 |   admin    |   True  |    test@test.com     |
| d602599e69904691a6094d86f07b6121 | ceilometer |   True  | ceilometer@localhost |
| cc11c36f6e9a4bb7b050db7a380a51db |   cinder   |   True  |   cinder@localhost   |
| c3b1e25936a241bfa63c791346f179fc |   glance   |   True  |   glance@localhost   |
| d2bfcd4e6fc44478899b0a2544df0b00 |  neutron   |   True  |  neutron@localhost   |
| 3d572a8e32b94ac09dd3318cd84fd932 |    nova    |   True  |    nova@localhost    |
+———————————-+————+———+———————-+
== Glance images ==
+————————————–+—————–+————-+——————+———–+——–+
| ID                                   | Name            | Disk Format | Container Format | Size      | Status |
+————————————–+—————–+————-+——————+———–+——–+
| 898a4245-d191-46b8-ac87-e0f1e1873cb1 | CirrOS31        | qcow2       | bare             | 13147648  | active |
| c4647c90-5160-48b1-8b26-dba69381b6fa | Ubuntu 06/18/14 | qcow2       | bare             | 254149120 | active |
+————————————–+—————–+————-+——————+———–+——–+
== Nova managed services ==
+——————+—————————————-+———-+———+——-+—————————-+—————–+
| Binary           | Host                                   | Zone     | Status  | State | Updated_at                 | Disabled Reason |
+——————+—————————————-+———-+———+——-+—————————-+—————–+
| nova-consoleauth | ip-192-169-142-127.ip.secureserver.net | internal | enabled | up    | 2014-06-22T10:39:20.000000 | –               |
| nova-scheduler   | ip-192-169-142-127.ip.secureserver.net | internal | enabled | up    | 2014-06-22T10:39:21.000000 | –               |
| nova-conductor   | ip-192-169-142-127.ip.secureserver.net | internal | enabled | up    | 2014-06-22T10:39:23.000000 | –               |
| nova-cert        | ip-192-169-142-127.ip.secureserver.net | internal | enabled | up    | 2014-06-22T10:39:20.000000 | –               |
| nova-compute     | ip-192-169-142-137.ip.secureserver.net | nova     | enabled | up    | 2014-06-22T10:39:23.000000 | –               |
+——————+—————————————-+———-+———+——-+—————————-+—————–+
== Nova networks ==
+————————————–+———+——+
| ID                                   | Label   | Cidr |
+————————————–+———+——+
| 577b7ba7-adad-4051-a03f-787eb8bd55f6 | public  | –    |
| 70298098-a022-4a6b-841f-cef13524d86f | private | –    |
| 7459c84b-b460-4da2-8f24-e0c840be2637 | int     | –    |
+————————————–+———+——+
== Nova instance flavors ==
+—-+———–+———–+——+———–+——+——-+————-+———–+
| ID | Name      | Memory_MB | Disk | Ephemeral | Swap | VCPUs | RXTX_Factor | Is_Public |
+—-+———–+———–+——+———–+——+——-+————-+———–+
| 1  | m1.tiny   | 512       | 1    | 0         |      | 1     | 1.0         | True      |
| 2  | m1.small  | 2048      | 20   | 0         |      | 1     | 1.0         | True      |
| 3  | m1.medium | 4096      | 40   | 0         |      | 2     | 1.0         | True      |
| 4  | m1.large  | 8192      | 80   | 0         |      | 4     | 1.0         | True      |
| 5  | m1.xlarge | 16384     | 160  | 0         |      | 8     | 1.0         | True      |
+—-+———–+———–+——+———–+——+——-+————-+———–+
== Nova instances ==
+————————————–+————-+———–+————+————-+————————————+
| ID                                   | Name        | Status    | Task State | Power State | Networks                           |
+————————————–+————-+———–+————+————-+————————————+
| 388bbe10-87b2-40e5-a6ee-b87b05116d51 | CirrOS445   | ACTIVE    | –          | Running     | private=30.0.0.14, 192.169.142.155 |
| 4d380c79-3213-45c0-8e4c-cef2dd19836d | UbuntuSRV01 | SUSPENDED | –          | Shutdown    | private=30.0.0.13, 192.169.142.154 |
+————————————–+————-+———–+————+————-+————————————+

[root@ip-192-169-142-127 ~(keystone_admin)]# nova-manage service list
Binary           Host                                 Zone             Status     State Updated_At
nova-consoleauth ip-192-169-142-127.ip.secureserver.net internal         enabled    :-)   2014-06-22 10:40:00
nova-scheduler   ip-192-169-142-127.ip.secureserver.net internal         enabled    :-)   2014-06-22 10:40:01
nova-conductor   ip-192-169-142-127.ip.secureserver.net internal         enabled    :-)   2014-06-22 10:40:03
nova-cert        ip-192-169-142-127.ip.secureserver.net internal         enabled    :-)   2014-06-22 10:40:00
nova-compute     ip-192-169-142-137.ip.secureserver.net nova             enabled    :-)   2014-06-22 10:40:03

[root@ip-192-169-142-127 ~(keystone_admin)]# neutron agent-list
+————————————–+——————–+—————————————-+——-+—————-+
| id                                   | agent_type         | host                                   | alive | admin_state_up |
+————————————–+——————–+—————————————-+——-+—————-+
| 61160392-4c97-4e8f-a902-1e55867e4425 | DHCP agent         | ip-192-169-142-127.ip.secureserver.net | :-)   | True           |
| 6cd022b9-9eb8-4d1e-9991-01dfe678eba5 | Open vSwitch agent | ip-192-169-142-137.ip.secureserver.net | :-)   | True           |
| 893a1a71-5709-48e9-b1a4-11e02f5eca15 | Metadata agent     | ip-192-169-142-127.ip.secureserver.net | :-)   | True           |
| bb29c2dc-2db6-487c-a262-32cecf85c608 | L3 agent           | ip-192-169-142-127.ip.secureserver.net | :-)   | True           |
| d7456233-53ba-4ae4-8936-3448f6ea9d65 | Open vSwitch agent | ip-192-169-142-127.ip.secureserver.net | :-)   | True           |
+————————————–+——————–+—————————————-+——-+—————-+

[root@ip-192-169-142-127 network-scripts(keystone_admin)]# cat ifcfg-br-ex
DEVICE="br-ex"
BOOTPROTO="static"
IPADDR="192.169.142.127"
NETMASK="255.255.255.0"
DNS1="83.221.202.254"
BROADCAST="192.169.142.255"
GATEWAY="192.169.142.1"
NM_CONTROLLED="no"
DEFROUTE="yes"
IPV4_FAILURE_FATAL="yes"
IPV6INIT=no
ONBOOT="yes"
TYPE="OVSBridge"
DEVICETYPE="ovs"

 [root@ip-192-169-142-127 network-scripts(keystone_admin)]# cat ifcfg-eth0
DEVICE="eth0"
# HWADDR=90:E6:BA:2D:11:EB
ONBOOT="yes"
TYPE="OVSPort"
DEVICETYPE="ovs"
OVS_BRIDGE=br-ex
NM_CONTROLLED=no
IPV6INIT=no

 [root@ip-192-169-142-127 network-scripts(keystone_admin)]# cat ifcfg-eth1
TYPE=Ethernet
BOOTPROTO=none
DEVICE=eth1
ONBOOT=yes
IPADDR=192.168.122.127
PREFIX=24
# HWADDR=52:54:00:EE:94:93
NM_CONTROLLED=no

 [root@ip-192-169-142-127 ~(keystone_admin)]# ovs-vsctl show
86e16ac0-c2e6-4eb4-a311-cee56fe86800
Bridge br-ex
Port “eth0″
Interface “eth0″
Port “qg-068e0e7a-95″
Interface “qg-068e0e7a-95″
type: internal
Port br-ex
Interface br-ex
type: internal
Bridge “br-eth1″
Port “eth1″
Interface “eth1″
Port “phy-br-eth1″
Interface “phy-br-eth1″
Port “br-eth1″
Interface “br-eth1″
type: internal
Bridge br-int
Port “qr-16b1ea2b-fc”
tag: 1
Interface “qr-16b1ea2b-fc”
type: internal
Port “qr-2bb007df-e1″
tag: 2
Interface “qr-2bb007df-e1″
type: internal
Port “tap1c48d234-23″
tag: 2
Interface “tap1c48d234-23″
type: internal
Port br-int
Interface br-int
type: internal
Port “tap26440f58-b0″
tag: 1
Interface “tap26440f58-b0″
type: internal
Port “int-br-eth1″
Interface “int-br-eth1″
ovs_version: “2.1.2”

[root@ip-192-169-142-127 neutron]# cat plugin.ini
[ml2]
type_drivers = vlan
tenant_network_types = vlan
mechanism_drivers = openvswitch
[ml2_type_vlan]
network_vlan_ranges = physnet1:100:200
[ovs]
network_vlan_ranges = physnet1:100:200
tenant_network_type = vlan
enable_tunneling = False
integration_bridge = br-int
bridge_mappings = physnet1:br-eth1
local_ip = 192.168.122.127
[AGENT]
polling_interval = 2
[SECURITYGROUP]
firewall_driver = neutron.agent.linux.iptables_firewall.OVSHybridIptablesFirewallDriver

Checksum offloading disabled on eth1 of Compute Node
[root@ip-192-169-142-137 neutron]# /usr/sbin/ethtool --offload eth1 tx off
Actual changes:
tx-checksumming: off
    tx-checksum-ip-generic: off
tcp-segmentation-offload: off
    tx-tcp-segmentation: off [requested on]
    tx-tcp-ecn-segmentation: off [requested on]
    tx-tcp6-segmentation: off [requested on]
udp-fragmentation-offload: off [requested on]
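
The ethtool change above does not survive a reboot. One way to make it persistent (a sketch, assuming the initscripts network service manages eth1 and supports ETHTOOL_OPTS) is to add the option to the interface file on the Compute Node:

# appended to /etc/sysconfig/network-scripts/ifcfg-eth1
ETHTOOL_OPTS="-K eth1 tx off"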

 


Two Real Node (Controller+Compute) IceHouse Neutron OVS&GRE Cluster on Fedora 20

June 4, 2014

Two boxes have been set up, each one having 2 NICs (p37p1, p4p1), for the Controller && Compute Nodes setup. Before running `packstack --answer-file=TwoRealNodeOVS&GRE.txt` SELinux was set to permissive on both nodes. Both p4p1 interfaces were assigned IPs and set to promiscuous mode (192.168.0.127, 192.168.0.137). The firewalld and NetworkManager services are disabled; the IPv4 firewall with iptables and the network service are enabled and running. Packstack is bound to the public IP of interface p37p1, 192.168.1.127; the Compute Node is 192.168.1.137 (view answer-file).

 Setup configuration

- Controller node: Nova, Keystone, Cinder, Glance, Neutron (using Open vSwitch plugin && GRE )
- Compute node: Nova (nova-compute), Neutron (openvswitch-agent)

icehouse1.localdomain   –  Controller (192.168.1.127)

icehouse2.localdomain   –  Compute   (192.168.1.137)

Post packstack install updates :-

1. nova.conf && metadata_agent.ini on Controller updated per

Two Real Node IceHouse Neutron OVS&GRE

These updates enable nova-api to listen on port 9697.

View section -

"Metadata support configured on Controller+NeutronServer Node"

 2. File /etc/sysconfig/iptables updated on both nodes with the following lines added to the *filter section :-

-A INPUT -p gre -j ACCEPT
-A OUTPUT -p gre -j ACCEPT

Service iptables restarted 
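
Once the rules are in and the tunnel is up, GRE traffic between the nodes can be confirmed directly on the data interface; a sketch, run on the controller while pinging an instance hosted on the compute node:

tcpdump -n -i p4p1 ip proto 47    # 47 = GRE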

 ***************************************

 On Controller+NeutronServer

 ***************************************

[root@icehouse1 network-scripts(keystone_admin)]# cat ifcfg-br-ex
DEVICE="br-ex"
BOOTPROTO="static"
IPADDR="192.168.1.127"
NETMASK="255.255.255.0"
DNS1="83.221.202.254"
BROADCAST="192.168.1.255"
GATEWAY="192.168.1.1"
NM_CONTROLLED="no"
DEFROUTE="yes"
IPV4_FAILURE_FATAL="yes"
IPV6INIT=no
ONBOOT="yes"
TYPE="OVSBridge"
DEVICETYPE="ovs"

[root@icehouse1 network-scripts(keystone_admin)]# cat ifcfg-p37p1
DEVICE=p37p1
ONBOOT="yes"
TYPE="OVSPort"
DEVICETYPE="ovs"
OVS_BRIDGE=br-ex
NM_CONTROLLED=no
IPV6INIT=no

[root@icehouse1 network-scripts(keystone_admin)]# cat ifcfg-p4p1
TYPE=Ethernet
BOOTPROTO=none
DEFROUTE=yes
IPV4_FAILURE_FATAL=no
IPV6INIT=yes
IPV6_AUTOCONF=yes
IPV6_DEFROUTE=yes
IPV6_FAILURE_FATAL=no
DEVICE=p4p1
UUID=dbc361f1-805b-4f57-8150-cbc24ab7ee1a
ONBOOT=yes
IPADDR=192.168.0.127
PREFIX=24
# GATEWAY=192.168.0.1
DNS1=83.221.202.254
# HWADDR=00:E0:53:13:17:4C
IPV6_PEERDNS=yes
IPV6_PEERROUTES=yes
NM_CONTROLLED=no

[root@icehouse1 network-scripts(keystone_admin)]# ovs-vsctl show
119e5be5-5ef6-4f39-875c-ab1dfdb18972
Bridge br-int
Port “qr-209f67c4-b1″
tag: 1
Interface “qr-209f67c4-b1″
type: internal
Port br-int
Interface br-int
type: internal
Port patch-tun
Interface patch-tun
type: patch
options: {peer=patch-int}
Port “tapb5da1c7e-50″
tag: 1
Interface “tapb5da1c7e-50″
type: internal
Bridge br-ex
Port “qg-22a1fffe-91″
Interface “qg-22a1fffe-91″
type: internal
Port “p37p1″
Interface “p37p1″
Port br-ex
Interface br-ex
type: internal
Bridge br-tun
Port patch-int
Interface patch-int
type: patch
options: {peer=patch-tun}
Port br-tun
Interface br-tun
type: internal
Port “gre-1″
Interface “gre-1″
type: gre
options: {in_key=flow, local_ip=”192.168.0.127″, out_key=flow, remote_ip=”192.168.0.137″}
ovs_version: “2.1.2”

**********************************

On Compute

**********************************

[root@icehouse2 network-scripts]# cat ifcfg-p37p1
TYPE=Ethernet
BOOTPROTO=none
DEFROUTE=yes
IPV4_FAILURE_FATAL=no
IPV6INIT=yes
IPV6_AUTOCONF=yes
IPV6_DEFROUTE=yes
IPV6_FAILURE_FATAL=no
DEVICE=p37p1
UUID=b29ecd0e-7093-4ba9-8a2d-79ac74e93ea5
ONBOOT=yes
IPADDR=192.168.1.137
PREFIX=24
GATEWAY=192.168.1.1
DNS1=83.221.202.254
HWADDR=90:E6:BA:2D:11:EB
IPV6_PEERDNS=yes
IPV6_PEERROUTES=yes
NM_CONTROLLED=no

[root@icehouse2 network-scripts]# cat ifcfg-p4p1
TYPE=Ethernet
BOOTPROTO=none
DEFROUTE=yes
IPV4_FAILURE_FATAL=no
IPV6INIT=yes
IPV6_AUTOCONF=yes
IPV6_DEFROUTE=yes
IPV6_FAILURE_FATAL=no
DEVICE=p4p1
UUID=a57d6dd3-32fe-4a9f-a6d0-614e004bfdf6
ONBOOT=yes
IPADDR=192.168.0.137
PREFIX=24
GATEWAY=192.168.0.1
DNS1=83.221.202.254
HWADDR=00:0C:76:E0:1E:C5
IPV6_PEERDNS=yes
IPV6_PEERROUTES=yes
NM_CONTROLLED=no

[root@icehouse2 network-scripts]# ovs-vsctl show
2dd63952-602e-4370-900f-85d8c984a0cb
Bridge br-int
Port “qvo615e1af7-f4″
tag: 3
Interface “qvo615e1af7-f4″
Port “qvoe78bebdb-36″
tag: 3
Interface “qvoe78bebdb-36″
Port br-int
Interface br-int
type: internal
Port “qvo9ccf821f-87″
tag: 3
Interface “qvo9ccf821f-87″
Port patch-tun
Interface patch-tun
type: patch
options: {peer=patch-int}
Bridge br-tun
Port patch-int
Interface patch-int
type: patch
options: {peer=patch-tun}
Port “gre-2″
Interface “gre-2″
type: gre
options: {in_key=flow, local_ip=”192.168.0.137″, out_key=flow, remote_ip=”192.168.0.127″}
Port br-tun
Interface br-tun
type: internal
ovs_version: "2.1.2"

**************************************************

Update dhcp_agent.ini and create dnsmasq.conf

**************************************************

[root@icehouse1 neutron(keystone_admin)]# cat  dhcp_agent.ini

[DEFAULT]
debug = False
resync_interval = 30
interface_driver =neutron.agent.linux.interface.OVSInterfaceDriver
dhcp_driver = neutron.agent.linux.dhcp.Dnsmasq
use_namespaces = True
enable_isolated_metadata = False
enable_metadata_network = False
dhcp_delete_namespaces = False
dnsmasq_config_file = /etc/neutron/dnsmasq.conf
root_helper=sudo neutron-rootwrap /etc/neutron/rootwrap.conf
state_path=/var/lib/neutron

[root@icehouse1 neutron(keystone_admin)]# cat  dnsmasq.conf
log-facility = /var/log/neutron/dnsmasq.log
log-dhcp
# Line added
dhcp-option=26,1454
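
dhcp-option=26,1454 pushes an MTU of 1454 to the instances over DHCP, leaving room for the GRE encapsulation overhead on a 1500-byte physical link. For the new dnsmasq option to be picked up, the DHCP agent has to be restarted; roughly:

systemctl restart neutron-dhcp-agent
# inside a freshly booted instance the reduced MTU should then be visible:
# ip link show eth0 | grep mtu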

**************************************************************************

Metadata support configured on Controller+NeutronServer Node :- 

***************************************************************************

[root@icehouse1 ~(keystone_admin)]# ip netns
qrouter-269dfed8-e314-4a23-b693-b891ba00582e
qdhcp-79eb80f1-d550-4f4c-9670-f8e10b43e7eb

[root@icehouse1 ~(keystone_admin)]# ip netns exec qrouter-269dfed8-e314-4a23-b693-b891ba00582e iptables -S -t nat | grep 169.254

-A neutron-l3-agent-PREROUTING -d 169.254.169.254/32 -p tcp -m tcp --dport 80 -j REDIRECT --to-ports 9697

[root@icehouse1 ~(keystone_admin)]# ip netns exec qrouter-269dfed8-e314-4a23-b693-b891ba00582e netstat -anpt

Active Internet connections (servers and established)
Proto Recv-Q Send-Q Local Address           Foreign Address         State       PID/Program name
tcp        0      0 0.0.0.0:9697            0.0.0.0:*               LISTEN      5212/python

[root@icehouse1 ~(keystone_admin)]# ps -ef | grep 5212


root      5212     1  0 11:40 ?        00:00:00 /usr/bin/python /bin/neutron-ns-metadata-proxy --pid_file=/var/lib/neutron/external/pids/269dfed8-e314-4a23-b693-b891ba00582e.pid --metadata_proxy_socket=/var/lib/neutron/metadata_proxy --router_id=269dfed8-e314-4a23-b693-b891ba00582e --state_path=/var/lib/neutron --metadata_port=9697 --verbose --log-file=neutron-ns-metadata-proxy-269dfed8-e314-4a23-b693-b891ba00582e.log --log-dir=/var/log/neutron
root     21188  4697  0 14:29 pts/0    00:00:00 grep --color=auto 5212

[root@icehouse1 ~(keystone_admin)]# netstat -anpt | grep 9697

tcp        0      0 0.0.0.0:9697            0.0.0.0:*               LISTEN      1228/python       


[root@icehouse1 ~(keystone_admin)]# ps -ef | grep 1228

nova      1228     1  0 11:38 ?          00:00:56 /usr/bin/python /usr/bin/nova-api
nova      3623  1228  0 11:39 ?        00:00:00 /usr/bin/python /usr/bin/nova-api
nova      3626  1228  0 11:39 ?        00:00:00 /usr/bin/python /usr/bin/nova-api
nova      3719  1228  0 11:39 ?        00:00:12 /usr/bin/python /usr/bin/nova-api
nova      3720  1228  0 11:39 ?        00:00:10 /usr/bin/python /usr/bin/nova-api
nova      3775  1228  0 11:39 ?        00:00:01 /usr/bin/python /usr/bin/nova-api
nova      3776  1228  0 11:39 ?        00:00:01 /usr/bin/python /usr/bin/nova-api
root     21230  4697  0 14:29 pts/0    00:00:00 grep --color=auto 1228
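
So the chain is: the REDIRECT rule in the router namespace sends requests for 169.254.169.254:80 to port 9697, where neutron-ns-metadata-proxy listens and relays them via the metadata agent to nova-api. A simple end-to-end check, run from inside any booted instance rather than on the hosts:

curl http://169.254.169.254/latest/meta-data/instance-id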

[root@icehouse1 ~(keystone_admin)]# nova-manage service list
Binary           Host                                 Zone             Status     State Updated_At
nova-consoleauth icehouse1.localdomain                internal         enabled    :-)   2014-06-03 10:39:08
nova-scheduler   icehouse1.localdomain                internal         enabled    :-)   2014-06-03 10:39:08
nova-conductor   icehouse1.localdomain                internal         enabled    :-)   2014-06-03 10:39:08
nova-cert        icehouse1.localdomain                internal         enabled    :-)   2014-06-03 10:39:08
nova-compute     icehouse2.localdomain                nova             enabled    :-)   2014-06-03 10:39:07

[root@icehouse1 ~(keystone_admin)]# neutron agent-list
+--------------------------------------+--------------------+-----------------------+-------+----------------+
| id                                   | agent_type         | host                  | alive | admin_state_up |
+--------------------------------------+--------------------+-----------------------+-------+----------------+
| 4f37a350-2613-4a2b-95b2-b3bd4ee075a0 | L3 agent           | icehouse1.localdomain | :-)   | True           |
| 5b800eb7-aaf8-476a-8197-d13a0fc931c6 | Metadata agent     | icehouse1.localdomain | :-)   | True           |
| 5ce5e6fe-4d17-4ce0-9e6e-2f3b255ffeb0 | Open vSwitch agent | icehouse1.localdomain | :-)   | True           |
| 7f88512a-c59a-4ea4-8494-02e910cae034 | DHCP agent         | icehouse1.localdomain | :-)   | True           |
| a23e4d51-3cbc-42ee-845a-f5c17dff2370 | Open vSwitch agent | icehouse2.localdomain | :-)   | True           |
+--------------------------------------+--------------------+-----------------------+-------+----------------+



Two Node (Controller+Compute) IceHouse Neutron OVS&GRE Cluster on Fedora 20

June 2, 2014

Two KVMs have been created, each one having 2 virtual NICs (eth0, eth1), for the Controller && Compute Nodes setup. Before running `packstack --answer-file=twoNode-answer.txt` SELinux was set to permissive on both nodes. Both eth1 interfaces were assigned IPs from the GRE libvirt subnet before installation and set to promiscuous mode (192.168.122.127, 192.168.122.137). Packstack is bound to the public IP of eth0, 192.169.142.127; the Compute Node is 192.169.142.137.

ANSWER FILE for Two Node IceHouse Neutron OVS&GRE and the updated *.ini, *.conf files after packstack setup: http://textuploader.com/0ts8

Two libvirt subnets were created on the F20 KVM Server to support the installation:

Public subnet:  192.169.142.0/24

GRE Tunnel Support subnet:  192.168.122.0/24

1. Create a new libvirt network (other than your default 192.168.x.x) file:

$ cat openstackvms.xml
<network>
 <name>openstackvms</name>
 <uuid>d0e9964a-f91a-40c0-b769-a609aee41bf2</uuid>
 <forward mode='nat'>
 <nat>
 <port start='1024' end='65535'/>
 </nat>
 </forward>
 <bridge name='virbr1' stp='on' delay='0' />
 <mac address='52:54:00:60:f8:6e'/>
 <ip address='192.169.142.1' netmask='255.255.255.0'>
 <dhcp>
 <range start='192.169.142.2' end='192.169.142.254' />
 </dhcp>
 </ip>
 </network>
 2. Define the above network:
  $ virsh net-define openstackvms.xml
3. Start the network and enable it for "autostart"
 $ virsh net-start openstackvms
 $ virsh net-autostart openstackvms

4. List your libvirt networks to see if it reflects:
  $ virsh net-list
  Name                 State      Autostart     Persistent
  ----------------------------------------------------------
  default              active     yes           yes
  openstackvms         active     yes           yes

5. Optionally, list your bridge devices:
  $ brctl show
  bridge name     bridge id               STP enabled     interfaces
  virbr0          8000.5254003339b3       yes             virbr0-nic
  virbr1          8000.52540060f86e       yes             virbr1-nic

After packstack 2 Node (Controller+Compute) IceHouse OVS&GRE setup :-

[root@ip-192-169-142-127 ~]# mysql -u root -p
Enter password:
Welcome to the MariaDB monitor.  Commands end with ; or \g.
Your MariaDB connection id is 143
Server version: 5.5.36-MariaDB-wsrep MariaDB Server, wsrep_25.9.r3961

Copyright (c) 2000, 2014, Oracle, Monty Program Ab and others.

Type 'help;' or '\h' for help. Type '\c' to clear the current input statement.

MariaDB [(none)]> show databases ;
+——————–+
| Database           |
+——————–+
| information_schema |
| cinder             |
| glance             |
| keystone           |
| mysql              |
| nova               |
| ovs_neutron        |
| performance_schema |
| test               |
+——————–+
9 rows in set (0.02 sec)

MariaDB [(none)]> use ovs_neutron ;
Reading table information for completion of table and column names
You can turn off this feature to get a quicker startup with -A

Database changed
MariaDB [ovs_neutron]> show tables ;
+—————————+
| Tables_in_ovs_neutron     |
+—————————+
| agents                    |
| alembic_version           |
| allowedaddresspairs       |
| dnsnameservers            |
| externalnetworks          |
| extradhcpopts             |
| floatingips               |
| ipallocationpools         |
| ipallocations             |
| ipavailabilityranges      |
| networkdhcpagentbindings  |
| networks                  |
| ovs_network_bindings      |
| ovs_tunnel_allocations    |
| ovs_tunnel_endpoints      |
| ovs_vlan_allocations      |
| portbindingports          |
| ports                     |
| quotas                    |
| routerl3agentbindings     |
| routerroutes              |
| routers                   |
| securitygroupportbindings |
| securitygrouprules        |
| securitygroups            |
| servicedefinitions        |
| servicetypes              |
| subnetroutes              |
| subnets                   |
+—————————+
29 rows in set (0.00 sec)

MariaDB [ovs_neutron]> select * from networks ;
+———————————-+————————————–+———+——–+—————-+——–+
| tenant_id                        | id                                   | name    | status | admin_state_up | shared |
+———————————-+————————————–+———+——–+—————-+——–+
| 179e44f8f53641da89c3eb5d07405523 | 3854bc88-ae14-47b0-9787-233e54ffe7e5 | private | ACTIVE |              1 |      0 |
| 179e44f8f53641da89c3eb5d07405523 | 6e8dc33d-55d4-47b5-925f-e1fa96128c02 | public  | ACTIVE |              1 |      1 |
+———————————-+————————————–+———+——–+—————-+——–+
2 rows in set (0.00 sec)

MariaDB [ovs_neutron]> select * from routers ;
+———————————-+————————————–+———+——–+—————-+————————————–+————-+
| tenant_id                        | id                                   | name    | status | admin_state_up | gw_port_id                           | enable_snat |
+———————————-+————————————–+———+——–+—————-+————————————–+————-+
| 179e44f8f53641da89c3eb5d07405523 | 94829282-e6b2-4364-b640-8c0980218a4f | ROUTER3 | ACTIVE |              1 | df9711e4-d1a2-4255-9321-69105fbd8665 |           1 |
+———————————-+————————————–+———+——–+—————-+————————————–+————-+
1 row in set (0.00 sec)

MariaDB [ovs_neutron]>

*********************

On Controller :-

********************

[root@ip-192-169-142-127 network-scripts]# cat ifcfg-br-ex
DEVICE="br-ex"
BOOTPROTO="static"
IPADDR="192.169.142.127"
NETMASK="255.255.255.0"
DNS1="83.221.202.254"
BROADCAST="192.169.142.255"
GATEWAY="192.169.142.1"
NM_CONTROLLED="no"
DEFROUTE="yes"
IPV4_FAILURE_FATAL="yes"
IPV6INIT=no
ONBOOT="yes"
TYPE="OVSBridge"
DEVICETYPE="ovs"

[root@ip-192-169-142-127 network-scripts]# cat ifcfg-eth0
DEVICE=eth0
ONBOOT="yes"
TYPE="OVSPort"
DEVICETYPE="ovs"
OVS_BRIDGE=br-ex
NM_CONTROLLED=no
IPV6INIT=no

[root@ip-192-169-142-127 network-scripts]# cat ifcfg-eth1
TYPE=Ethernet
BOOTPROTO=none
DEFROUTE=yes
IPV4_FAILURE_FATAL=no
IPV6INIT=yes
IPV6_AUTOCONF=yes
IPV6_DEFROUTE=yes
IPV6_FAILURE_FATAL=no
DEVICE=eth1
ONBOOT=yes
IPADDR=192.168.122.127
PREFIX=24
GATEWAY=192.168.122.1
DNS1=83.221.202.254
NM_CONTROLLED=no
IPV6_PEERDNS=yes
IPV6_PEERROUTES=yes

*************************************

ovs-vsctl show output on controller

*************************************

[root@ip-192-169-142-127 ~(keystone_admin)]# ovs-vsctl show
dc2c76d6-40e3-496e-bdec-470452758c32
Bridge br-tun
Port patch-int
Interface patch-int
type: patch
options: {peer=patch-tun}
Port “gre-1″
Interface “gre-1″
type: gre
options: {in_key=flow, local_ip=”192.168.122.127″, out_key=flow, remote_ip=”192.168.122.137″}
Port br-tun
Interface br-tun
type: internal
Bridge br-int
Port br-int
Interface br-int
type: internal
Port patch-tun
Interface patch-tun
type: patch
options: {peer=patch-int}
Port “tap7acb7666-aa”
tag: 1
Interface “tap7acb7666-aa”
type: internal
Port “qr-a26fe722-07″
tag: 1
Interface “qr-a26fe722-07″
type: internal
Bridge br-ex
Port “qg-df9711e4-d1″
Interface “qg-df9711e4-d1″
type: internal
Port “eth0″
Interface “eth0″
Port br-ex
Interface br-ex
type: internal
ovs_version: “2.1.2”

********************

On Compute:-

********************

[root@ip-192-169-142-137 network-scripts]# cat ifcfg-eth0
TYPE=Ethernet
BOOTPROTO=static
DEFROUTE=yes
IPV4_FAILURE_FATAL=no
IPV6INIT=yes
IPV6_AUTOCONF=yes
IPV6_DEFROUTE=yes
IPV6_FAILURE_FATAL=no
DEVICE=eth0
UUID=f96e561d-d14c-4fb1-9657-0c935f7f5721
ONBOOT=yes
IPADDR=192.169.142.137
PREFIX=24
GATEWAY=192.169.142.1
DNS1=83.221.202.254
HWADDR=52:54:00:67:AC:04
NM_CONTROLLED=no
IPV6_PEERDNS=yes
IPV6_PEERROUTES=yes

[root@ip-192-169-142-137 network-scripts]# cat ifcfg-eth1
TYPE=Ethernet
BOOTPROTO=none
DEFROUTE=yes
IPV4_FAILURE_FATAL=no
IPV6INIT=yes
IPV6_AUTOCONF=yes
IPV6_DEFROUTE=yes
IPV6_FAILURE_FATAL=no
DEVICE=eth1
ONBOOT=yes
IPADDR=192.168.122.137
PREFIX=24
GATEWAY=192.168.122.1
DNS1=83.221.202.254
NM_CONTROLLED=no
IPV6_PEERDNS=yes
IPV6_PEERROUTES=yes

************************************

ovs-vsctl show output on compute

************************************

[root@ip-192-169-142-137 ~]# ovs-vsctl show
1c6671de-fcdf-4a29-9eee-96c949848fff
Bridge br-tun
Port “gre-2″
Interface “gre-2″
type: gre
options: {in_key=flow, local_ip=”192.168.122.137″, out_key=flow, remote_ip=”192.168.122.127″}
Port patch-int
Interface patch-int
type: patch
options: {peer=patch-tun}
Port br-tun
Interface br-tun
type: internal
Bridge br-int
Port “qvo87038189-3f”
tag: 1
Interface “qvo87038189-3f”
Port patch-tun
Interface patch-tun
type: patch
options: {peer=patch-int}
Port br-int
Interface br-int
type: internal
ovs_version: “2.1.2”

[root@ip-192-169-142-137 ~]# brctl show
bridge name    bridge id        STP enabled    interfaces
qbr87038189-3f        8000.2abf9e69f97c    no        qvb87038189-3f
tap87038189-3f

*************************

Metadata verification 

*************************

[root@ip-192-169-142-127 ~(keystone_admin)]# ip netns exec qrouter-94829282-e6b2-4364-b640-8c0980218a4f iptables -S -t nat | grep 169.254
-A neutron-l3-agent-PREROUTING -d 169.254.169.254/32 -p tcp -m tcp --dport 80 -j REDIRECT --to-ports 9697

[root@ip-192-169-142-127 ~(keystone_admin)]# ip netns exec qrouter-94829282-e6b2-4364-b640-8c0980218a4f netstat -anpt
Active Internet connections (servers and established)
Proto Recv-Q Send-Q Local Address           Foreign Address         State       PID/Program name
tcp        0      0 0.0.0.0:9697            0.0.0.0:*               LISTEN      3771/python
[root@ip-192-169-142-127 ~(keystone_admin)]# ps -ef | 3771
bash: 3771: command not found…

[root@ip-192-169-142-127 ~(keystone_admin)]# ps -ef | grep 3771
root      3771     1  0 13:58 ?        00:00:00 /usr/bin/python /bin/neutron-ns-metadata-proxy --pid_file=/var/lib/neutron/external/pids/94829282-e6b2-4364-b640-8c0980218a4f.pid --metadata_proxy_socket=/var/lib/neutron/metadata_proxy --router_id=94829282-e6b2-4364-b640-8c0980218a4f --state_path=/var/lib/neutron --metadata_port=9697 --verbose --log-file=neutron-ns-metadata-proxy-94829282-e6b2-4364-b640-8c0980218a4f.log --log-dir=/var/log/neutron

[root@ip-192-169-142-127 ~(keystone_admin)]# netstat -anpt | grep 9697

tcp        0      0 0.0.0.0:9697            0.0.0.0:*               LISTEN      1024/python

[root@ip-192-169-142-127 ~(keystone_admin)]# ps -ef | grep 1024
nova      1024     1  1 13:58 ?        00:00:05 /usr/bin/python /usr/bin/nova-api
nova      3369  1024  0 13:58 ?        00:00:00 /usr/bin/python /usr/bin/nova-api
nova      3370  1024  0 13:58 ?        00:00:00 /usr/bin/python /usr/bin/nova-api
nova      3397  1024  0 13:58 ?        00:00:02 /usr/bin/python /usr/bin/nova-api
nova      3398  1024  0 13:58 ?        00:00:02 /usr/bin/python /usr/bin/nova-api
nova      3423  1024  0 13:58 ?        00:00:00 /usr/bin/python /usr/bin/nova-api
nova      3424  1024  0 13:58 ?        00:00:00 /usr/bin/python /usr/bin/nova-api
root      4947  4301  0 14:06 pts/0    00:00:00 grep --color=auto 1024



Two Real Node (Controller+Compute) RDO IceHouse Neutron OVS&VLAN Cluster on Fedora 20 Setup

May 27, 2014

Two boxes, each one having 2 NICs (p37p1, p4p1), have been set up for the (Controller+NeutronServer) && Compute Nodes.

Setup configuration

- Controller node: Nova, Keystone, Cinder, Glance, Neutron (using Open vSwitch plugin && VLAN)
- Compute node: Nova (nova-compute), Neutron (openvswitch-agent)

icehouse1.localdomain   –  Controller (192.168.1.127)

icehouse2.localdomain   –  Compute   (192.168.1.137)

Before running `packstack --answer-file=TwoRealNode-answer.txt` SELinux was set to permissive on both nodes. Interfaces p4p1 on both nodes were set to promiscuous mode (and HWADDR was commented out).

Specifics of the answer-file on real F20 boxes :-

CONFIG_NOVA_COMPUTE_PRIVIF=p4p1
CONFIG_NOVA_NETWORK_PUBIF=p37p1
CONFIG_NOVA_NETWORK_PRIVIF=p4p1
CONFIG_NEUTRON_OVS_TENANT_NETWORK_TYPE=vlan
CONFIG_NEUTRON_OVS_VLAN_RANGES=physnet1:100:200
CONFIG_NEUTRON_OVS_BRIDGE_MAPPINGS=physnet1:br-p4p1
CONFIG_NEUTRON_OVS_BRIDGE_IFACES=br-p4p1:p4p1

Post installation steps :-

1. NetworkManager should be disabled on both nodes, service network enabled.

2. Syntax of ifcfg-* files of the corresponding OVS ports should follow RHEL 6.5 notation rather than F20

3. Special care should be taken to bring up p4p1 (in my case)

4. Post-install reconfiguration of *.ini && *.conf files: http://textuploader.com/9oec

5. Configuration of the p4p1 interfaces :-

# cat ifcfg-p4p1
TYPE=Ethernet
BOOTPROTO=none
DEVICE=p4p1
ONBOOT=yes
NM_CONTROLLED=no
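
With CONFIG_NEUTRON_OVS_BRIDGE_MAPPINGS=physnet1:br-p4p1 and CONFIG_NEUTRON_OVS_BRIDGE_IFACES=br-p4p1:p4p1, packstack wires this physical interface into a dedicated OVS bridge; the net effect is roughly equivalent to the following (shown only for illustration, packstack performs this itself):

ovs-vsctl add-br br-p4p1
ovs-vsctl add-port br-p4p1 p4p1
ip link set p4p1 up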

Metadata access verification on Controller:-

[root@icehouse1 ~(keystone_admin)]# ip netns

qdhcp-a2bf6363-6447-47f5-a243-b998d206d593

qrouter-2462467b-ea0a-4a40-a093-493572010694

[root@icehouse1 ~(keystone_admin)]# ip netns exec qrouter-2462467b-ea0a-4a40-a093-493572010694  iptables -S -t nat | grep 169

-A neutron-l3-agent-PREROUTING -d 169.254.169.254/32 -p tcp -m tcp --dport 80 -j REDIRECT --to-ports 9697

[root@icehouse1 ~(keystone_admin)]# ip netns exec qrouter-2462467b-ea0a-4a40-a093-493572010694  netstat -anpt

Active Internet connections (servers and established)

Proto Recv-Q Send-Q Local Address           Foreign Address         State       PID/Program name   

tcp        0      0 0.0.0.0:8775            0.0.0.0:*               LISTEN      6156/python  

[root@icehouse1 ~(keystone_admin)]# ps -ef | grep 6156

root      5691  4082  0 07:58 pts/0    00:00:00 grep --color=auto 6156
root      6156     1  0 06:04 ?        00:00:00 /usr/bin/python /bin/neutron-ns-metadata-proxy --pid_file=/var/lib/neutron/external/pids/2462467b-ea0a-4a40-a093-493572010694.pid --metadata_proxy_socket=/var/lib/neutron/metadata_proxy --router_id=2462467b-ea0a-4a40-a093-493572010694 --state_path=/var/lib/neutron --metadata_port=8775 --verbose --log-file=neutron-ns-metadata-proxy-2462467b-ea0a-4a40-a093-493572010694.log --log-dir=/var/log/neutron

[root@icehouse1 ~(keystone_admin)]# netstat -anpt | grep 8775

tcp        0      0 0.0.0.0:8775            0.0.0.0:*               LISTEN      1224/python 

[root@icehouse1 ~(keystone_admin)]# ps -aux | grep 1224

nova      1224  0.7  0.7 337092 65052 ?        Ss   05:59   0:46 /usr/bin/python /usr/bin/nova-api

boris     3789  0.0  0.1 504676 12248 ?        Sl   06:01   0:00 /usr/libexec/tracker-store

Verifying dhcp lease for private IPs for instances currently running :-

[root@icehouse1 ~(keystone_admin)]# ip netns exec qdhcp-a2bf6363-6447-47f5-a243-b998d206d593 ifconfig

lo: flags=73<UP,LOOPBACK,RUNNING>  mtu 65536
inet 127.0.0.1  netmask 255.0.0.0
inet6 ::1  prefixlen 128  scopeid 0x10<host>
loop  txqueuelen 0  (Local Loopback)
RX packets 3  bytes 1728 (1.6 KiB)
RX errors 0  dropped 0  overruns 0  frame 0
TX packets 3  bytes 1728 (1.6 KiB)
TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0

tapa7e1ac48-7b: flags=67<UP,BROADCAST,RUNNING>  mtu 1500
inet 10.0.0.11  netmask 255.255.255.0  broadcast 10.0.0.255
inet6 fe80::f816:3eff:fe9d:874d  prefixlen 64  scopeid 0x20<link>
ether fa:16:3e:9d:87:4d  txqueuelen 0  (Ethernet)
RX packets 3364  bytes 626074 (611.4 KiB)
RX errors 0  dropped 35  overruns 0  frame 0
TX packets 2124  bytes 427060 (417.0 KiB)
TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0

[root@icehouse1 ~(keystone_admin)]# ip netns exec qdhcp-a2bf6363-6447-47f5-a243-b998d206d593 tcpdump -ln -i tapa7e1ac48-7b

tcpdump: verbose output suppressed, use -v or -vv for full protocol decode

listening on tapa7e1ac48-7b, link-type EN10MB (Ethernet), capture size 65535 bytes

11:07:02.388376 ARP, Request who-has 10.0.0.11 tell 10.0.0.38, length 46

11:07:02.388399 ARP, Reply 10.0.0.11 is-at fa:16:3e:9d:87:4d, length 28

11:07:12.239833 IP 10.0.0.43.bootpc > 10.0.0.11.bootps: BOOTP/DHCP, Request from fa:16:3e:40:da:a1, length 300

11:07:12.240491 IP 10.0.0.11.bootps > 10.0.0.43.bootpc: BOOTP/DHCP, Reply, length 324

11:07:12.313087 ARP, Request who-has 10.0.0.43 (Broadcast) tell 0.0.0.0, length 46

11:07:13.313070 ARP, Request who-has 10.0.0.43 (Broadcast) tell 0.0.0.0, length 46

11:07:15.634980 IP 0.0.0.0.bootpc > 255.255.255.255.bootps: BOOTP/DHCP, Request from fa:16:3e:6b:81:ff, length 280

11:07:15.635595 IP 10.0.0.11.bootps > 10.0.0.31.bootpc: BOOTP/DHCP, Reply, length 324

11:07:15.635954 IP 10.0.0.31 > 10.0.0.11: ICMP 10.0.0.31 udp port bootpc unreachable, length 360

11:07:17.254260 ARP, Request who-has 10.0.0.43 tell 10.0.0.11, length 28

11:07:17.254866 ARP, Reply 10.0.0.43 is-at fa:16:3e:40:da:a1, length 46

11:07:20.644135 ARP, Request who-has 10.0.0.11 tell 10.0.0.31, length 28

11:07:20.644157 ARP, Reply 10.0.0.11 is-at fa:16:3e:9d:87:4d, length 28

11:07:45.972179 IP 10.0.0.38.bootpc > 10.0.0.11.bootps: BOOTP/DHCP, Request from fa:16:3e:9d:67:df, length 300

11:07:45.973023 IP 10.0.0.11.bootps > 10.0.0.38.bootpc: BOOTP/DHCP, Reply, length 324

11:07:50.980701 ARP, Request who-has 10.0.0.11 tell 10.0.0.38, length 46

11:07:50.980725 ARP, Reply 10.0.0.11 is-at fa:16:3e:9d:87:4d, length 28

11:07:55.821920 IP 10.0.0.43.bootpc > 10.0.0.11.bootps: BOOTP/DHCP, Request from fa:16:3e:40:da:a1, length 300

11:07:55.822423 IP 10.0.0.11.bootps > 10.0.0.43.bootpc: BOOTP/DHCP, Reply, length 324

11:07:55.898024 ARP, Request who-has 10.0.0.43 (Broadcast) tell 0.0.0.0, length 46

11:07:56.897994 ARP, Request who-has 10.0.0.43 (Broadcast) tell 0.0.0.0, length 46

11:08:00.823637 ARP, Request who-has 10.0.0.11 tell 10.0.0.43, length 46

******************

On Controller

******************

[root@icehouse1 ~(keystone_admin)]# ovs-vsctl show
a675c73e-c707-4f29-af60-57fb7c3f81c4
    Bridge br-int
        Port "int-br-p4p1"
            Interface "int-br-p4p1"
        Port br-int
            Interface br-int
                type: internal
        Port "qr-bbba6fd3-a3"
            tag: 1
            Interface "qr-bbba6fd3-a3"
                type: internal
        Port "qvo61d82a0f-32"
            tag: 1
            Interface "qvo61d82a0f-32"
        Port "tapa7e1ac48-7b"
            tag: 1
            Interface "tapa7e1ac48-7b"
                type: internal
        Port "qvof8c8a1a2-51"
            tag: 1
            Interface "qvof8c8a1a2-51"
    Bridge br-ex
        Port "p37p1"
            Interface "p37p1"
        Port br-ex
            Interface br-ex
                type: internal
        Port "qg-3787602d-29"
            Interface "qg-3787602d-29"
                type: internal
    Bridge "br-p4p1"
        Port "p4p1"
            Interface "p4p1"
        Port "phy-br-p4p1"
            Interface "phy-br-p4p1"
        Port "br-p4p1"
            Interface "br-p4p1"
                type: internal
    ovs_version: "2.0.1"

****************

On Compute

****************

[root@icehouse2 ]# ovs-vsctl show
bf768fc8-d18b-4762-bdd2-a410fcf88a9b
    Bridge "br-p4p1"
        Port "br-p4p1"
            Interface "br-p4p1"
                type: internal
        Port "phy-br-p4p1"
            Interface "phy-br-p4p1"
        Port "p4p1"
            Interface "p4p1"
    Bridge br-int
        Port br-int
            Interface br-int
                type: internal
        Port "int-br-p4p1"
            Interface "int-br-p4p1"
        Port "qvoe5a82d77-d4"
            tag: 8
            Interface "qvoe5a82d77-d4"
    ovs_version: "2.0.1"

[root@icehouse1 ~(keystone_admin)]# openstack-status

== Nova services ==

openstack-nova-api:                     active

openstack-nova-cert:                    active

openstack-nova-compute:                 active

openstack-nova-network:                 inactive  (disabled on boot)

openstack-nova-scheduler:               active

openstack-nova-volume:                  inactive  (disabled on boot)

openstack-nova-conductor:               active

== Glance services ==

openstack-glance-api:                   active

openstack-glance-registry:              active

== Keystone service ==

openstack-keystone:                     active

== Horizon service ==

openstack-dashboard:                    active

== neutron services ==

neutron-server:                         active

neutron-dhcp-agent:                     active

neutron-l3-agent:                       active

neutron-metadata-agent:                 active

neutron-lbaas-agent:                    inactive  (disabled on boot)

neutron-openvswitch-agent:              active

neutron-linuxbridge-agent:              inactive  (disabled on boot)

neutron-ryu-agent:                      inactive  (disabled on boot)

neutron-nec-agent:                      inactive  (disabled on boot)

neutron-mlnx-agent:                     inactive  (disabled on boot)

== Swift services ==

openstack-swift-proxy:                  active

openstack-swift-account:                active

openstack-swift-container:              active

openstack-swift-object:                 active

== Cinder services ==

openstack-cinder-api:                   active

openstack-cinder-scheduler:             active

openstack-cinder-volume:                active

openstack-cinder-backup:                inactive

== Ceilometer services ==

openstack-ceilometer-api:               active

openstack-ceilometer-central:           active

openstack-ceilometer-compute:           active

openstack-ceilometer-collector:         active

openstack-ceilometer-alarm-notifier:    active

openstack-ceilometer-alarm-evaluator:   active

== Support services ==

libvirtd:                               active

openvswitch:                            active

dbus:                                   active

tgtd:                                   active

rabbitmq-server:                        active

memcached:                              active

== Keystone users ==

+———————————-+————+———+———————-+

|                id                |    name    | enabled |        email         |

+———————————-+————+———+———————-+

| df9165cd160846b19f73491e0bc041c2 |   admin    |   True  |    test@test.com     |

| bafe2fc4d51a400a99b1b41ef50d4afd | ceilometer |   True  | ceilometer@localhost |

| df59d0782f174a34a3a73215300c64ca |   cinder   |   True  |   cinder@localhost   |

| ca624394c9d941b6ad0a07363ab668b2 |   glance   |   True  |   glance@localhost   |

| fb5125484a1f4b7aaf8503025eb018ba |  neutron   |   True  |  neutron@localhost   |

| 64912bc3726c48db8f003ce79d8fe746 |    nova    |   True  |    nova@localhost    |

| 6d8b48605d3b476097d89486813360c0 |   swift    |   True  |   swift@localhost    |

+———————————-+————+———+———————-+

== Glance images ==

+————————————–+—————–+————-+——————+———–+——–+

| ID                                   | Name            | Disk Format | Container Format | Size      | Status |

+————————————–+—————–+————-+——————+———–+——–+

| 8593a43a-2449-4b49-918f-9871011249a7 | CirrOS31        | qcow2       | bare             | 13147648  | active |

| 4be72a99-06e0-477d-b446-b597435455a9 | Fedora20image   | qcow2       | bare             | 210829312 | active |

| 28470072-f317-4a72-b3e8-3fffbe6a7661 | UubuntuServer14 | qcow2       | bare             | 253559296 | active |

+————————————–+—————–+————-+——————+———–+——–+

== Nova managed services ==

+——————+———————–+———-+———+——-+—————————-+—————–+

| Binary           | Host                  | Zone     | Status  | State | Updated_at                 | Disabled Reason |

+——————+———————–+———-+———+——-+—————————-+—————–+

| nova-consoleauth | icehouse1.localdomain | internal | enabled | up    | 2014-05-25T03:03:05.000000 | –               |

| nova-scheduler   | icehouse1.localdomain | internal | enabled | up    | 2014-05-25T03:03:05.000000 | –               |

| nova-conductor   | icehouse1.localdomain | internal | enabled | up    | 2014-05-25T03:03:13.000000 | –               |

| nova-compute     | icehouse1.localdomain | nova     | enabled | up    | 2014-05-25T03:03:10.000000 | –               |

| nova-cert        | icehouse1.localdomain | internal | enabled | up    | 2014-05-25T03:03:05.000000 | –               |

| nova-compute     | icehouse2.localdomain | nova     | enabled | up    | 2014-05-25T03:03:13.000000 | –               |

+——————+———————–+———-+———+——-+—————————-+—————–+

== Nova networks ==

+————————————–+———+——+

| ID                                   | Label   | Cidr |

+————————————–+———+——+

| 09e18ced-8c22-4166-a1a1-cbceece46884 | public  | –    |

| a2bf6363-6447-47f5-a243-b998d206d593 | private | –    |

+————————————–+———+——+

== Nova instance flavors ==

+—-+———–+———–+——+———–+——+——-+————-+———–+

| ID | Name      | Memory_MB | Disk | Ephemeral | Swap | VCPUs | RXTX_Factor | Is_Public |

+—-+———–+———–+——+———–+——+——-+————-+———–+

| 1  | m1.tiny   | 512       | 1    | 0         |      | 1     | 1.0         | True      |

| 2  | m1.small  | 2048      | 20   | 0         |      | 1     | 1.0         | True      |

| 3  | m1.medium | 4096      | 40   | 0         |      | 2     | 1.0         | True      |

| 4  | m1.large  | 8192      | 80   | 0         |      | 4     | 1.0         | True      |

| 5  | m1.xlarge | 16384     | 160  | 0         |      | 8     | 1.0         | True      |

+—-+———–+———–+——+———–+——+——-+————-+———–+

== Nova instances ==

+————————————–+————–+———–+————+————-+———————————+

| ID                                   | Name         | Status    | Task State | Power State | Networks                        |

+————————————–+————–+———–+————+————-+———————————+

| b661a130-fdb7-41eb-aba5-588924634c9d | CirrOS302    | ACTIVE    | –          | Running     | private=10.0.0.31, 192.168.1.63 |

| 5d1dbb9d-7bef-4e51-be8d-4270ddd3d4cc | CirrOS351    | ACTIVE    | –          | Running     | private=10.0.0.39, 192.168.1.66 |

| ef73a897-8700-4999-ab25-49f25b896f34 | CirrOS370    | ACTIVE    | –          | Running     | private=10.0.0.40, 192.168.1.69 |

| 02718e21-edb9-4b59-8bb7-21e0290650fd | CirrOS390    | SUSPENDED | –          | Shutdown    | private=10.0.0.41, 192.168.1.67 |                           |

| 6992e37c-48c7-49b6-b6fc-8e35fe240704 | UbuntuSRV350 | SUSPENDED | –          | Shutdown    | private=10.0.0.38, 192.168.1.62 |

| 9953ed52-b666-4fe1-ac35-23621122af5a | VF20RS02     | ACTIVE    | –          | Running     | private=10.0.0.43, 192.168.1.71 |

+————————————–+————–+———–+————+————-+———————————+

[root@icehouse1 ~(keystone_admin)]# nova-manage service list
Binary           Host                                 Zone             Status     State Updated_At
nova-consoleauth icehouse1.localdomain                internal         enabled    :-)   2014-05-27 10:16:15
nova-scheduler   icehouse1.localdomain                internal         enabled    :-)   2014-05-27 10:16:15
nova-conductor   icehouse1.localdomain                internal         enabled    :-)   2014-05-27 10:16:14
nova-compute     icehouse1.localdomain                nova             enabled    :-)   2014-05-27 10:16:18
nova-cert        icehouse1.localdomain                internal         enabled    :-)   2014-05-27 10:16:15
nova-compute     icehouse2.localdomain                nova             enabled    :-)   2014-05-27 10:16:12

[root@icehouse1 ~(keystone_admin)]# neutron agent-list

+--------------------------------------+--------------------+-----------------------+-------+----------------+
| id                                   | agent_type         | host                  | alive | admin_state_up |
+--------------------------------------+--------------------+-----------------------+-------+----------------+
| 6775fac7-d594-4272-8447-f136b54247e8 | L3 agent           | icehouse1.localdomain | :-)   | True           |
| 77fdc8a9-0d77-4f53-9cdd-1c732f0cfdb1 | Metadata agent     | icehouse1.localdomain | :-)   | True           |
| 8f70b2c4-c65b-4d0b-9808-ba494c764d99 | Open vSwitch agent | icehouse1.localdomain | :-)   | True           |
| a86f1272-2afb-43b5-a7e6-e5fc6df565b5 | Open vSwitch agent | icehouse2.localdomain | :-)   | True           |
| e72bdcd5-3dd1-4994-860f-e21d4a58dd4c | DHCP agent         | icehouse1.localdomain | :-)   | True           |
+--------------------------------------+--------------------+-----------------------+-------+----------------+


 
   


 
Windows 2012 evaluation Server running on Compute Node :-

Setup Horizon Dashboard-2014.1 on F20 Havana Controller (firefox upgrade up to 29.0-5)

May 3, 2014

It’s hard to know what the right thing is. Once you know, it’s hard not to do it.
                       Harry Fertig (Kingsley). The Confession (1999 film)

A recent Firefox upgrade to 29.0-5 on Fedora 20 breaks login to the Dashboard Console of the Havana F20 Controller that was set up per VNC Console in Dashboard on Two Node Neutron GRE+OVS F20 Cluster.

The procedure below backports the F21 packages python-django-horizon-2014.1-1, python-django-openstack-auth-1.1.5-1 and python-pbr-0.7.0-2 via a manual install of the corresponding SRC.RPMs and invocation of the rpmbuild utility to produce F20 packages. The hard thing to know is which packages to backport.

I had to perform an AIO RDO IceHouse setup via packstack on a specially created VM and run `rpm -qa | grep django` there to obtain the required list. Officially RDO Havana comes with F20, and it appears that the most recent Firefox upgrade breaks Horizon Dashboard, which is supposed to be maintained as a supported component for F20.

Download from Net :-

[boris@dfw02 Downloads]$ ls -l *.src.rpm
-rw-r--r--. 1 boris boris 4252988 May  3 08:21 python-django-horizon-2014.1-1.fc21.src.rpm
-rw-r--r--. 1 boris boris   47126 May  3 08:37 python-django-openstack-auth-1.1.5-1.fc21.src.rpm
-rw-r--r--. 1 boris boris   83761 May  3 08:48 python-pbr-0.7.0-2.fc21.src.rpm

Install src.rpms and build
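The SRC.RPM install step itself is not shown below; a minimal sketch (assuming the downloaded files sit in ~/Downloads and rpmbuild uses the default ~/rpmbuild tree) would be:

[boris@dfw02 Downloads]$ rpm -ivh python-django-horizon-2014.1-1.fc21.src.rpm        # unpacks spec into ~/rpmbuild/SPECS
[boris@dfw02 Downloads]$ rpm -ivh python-django-openstack-auth-1.1.5-1.fc21.src.rpm
[boris@dfw02 Downloads]$ rpm -ivh python-pbr-0.7.0-2.fc21.src.rpm
[boris@dfw02 Downloads]$ cd ~/rpmbuild/SPECS
# if a build fails on missing BuildRequires, something like `sudo yum-builddep <spec>` may be needed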

[boris@dfw02 SPECS]$ rpmbuild -bb python-django-openstack-auth.spec

[boris@dfw02 SPECS]$ rpmbuild -bb python-pbr.spec

Then install rpms as preventive step before core package build

[boris@dfw02 noarch]$sudo yum install python-django-openstack-auth-1.1.5-1.fc20.noarch.rpm

[boris@dfw02 noarch]$sudo yum install  python-pbr-0.7.0-2.fc20.noarch.rpm

[boris@dfw02 noarch]$ cd -

/home/boris/rpmbuild/SPECS

Core build to succeed :-

[boris@dfw02 SPECS]$ rpmbuild -bb python-django-horizon.spec

[boris@dfw02 SPECS]$ cd ../RPMS/n*

[boris@dfw02 noarch]$ ls -l
total 6616
-rw-rw-r--. 1 boris boris 3293068 May  3 09:01 openstack-dashboard-2014.1-1.fc20.noarch.rpm
-rw-rw-r--. 1 boris boris  732020 May  3 09:01 openstack-dashboard-theme-2014.1-1.fc20.noarch.rpm
-rw-rw-r--. 1 boris boris  160868 May  3 08:51 python3-pbr-0.7.0-2.fc20.noarch.rpm
-rw-rw-r--. 1 boris boris  823332 May  3 09:01 python-django-horizon-2014.1-1.fc20.noarch.rpm
-rw-rw-r--. 1 boris boris 1548752 May  3 09:01 python-django-horizon-doc-2014.1-1.fc20.noarch.rpm
-rw-rw-r--. 1 boris boris   43944 May  3 08:39 python-django-openstack-auth-1.1.5-1.fc20.noarch.rpm
-rw-rw-r--. 1 boris boris  158204 May  3 08:51 python-pbr-0.7.0-2.fc20.noarch.rpm

[boris@dfw02 noarch]$ ls *.rpm > inst

[boris@dfw02 noarch]$ vi inst

[boris@dfw02 noarch]$ chmod u+x inst
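The content of the edited `inst` file is not shown in the original; presumably the bare list of rpm file names is turned into a small install script along these lines (a sketch; the package set matches the yum transaction below):

#!/bin/bash
# install the freshly built Horizon packages in one yum transaction
sudo yum install openstack-dashboard-2014.1-1.fc20.noarch.rpm \
                 openstack-dashboard-theme-2014.1-1.fc20.noarch.rpm \
                 python-django-horizon-2014.1-1.fc20.noarch.rpm \
                 python-django-horizon-doc-2014.1-1.fc20.noarch.rpm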

[boris@dfw02 noarch]$ ./inst

[sudo] password for boris:

Loaded plugins: langpacks, priorities, refresh-packagekit

Examining openstack-dashboard-2014.1-1.fc20.noarch.rpm: openstack-dashboard-2014.1-1.fc20.noarch

Marking openstack-dashboard-2014.1-1.fc20.noarch.rpm as an update to openstack-dashboard-2013.2.3-1.fc20.noarch

Examining openstack-dashboard-theme-2014.1-1.fc20.noarch.rpm: openstack-dashboard-theme-2014.1-1.fc20.noarch

Marking openstack-dashboard-theme-2014.1-1.fc20.noarch.rpm to be installed

Examining python-django-horizon-2014.1-1.fc20.noarch.rpm: python-django-horizon-2014.1-1.fc20.noarch

Marking python-django-horizon-2014.1-1.fc20.noarch.rpm as an update to python-django-horizon-2013.2.3-1.fc20.noarch

Examining python-django-horizon-doc-2014.1-1.fc20.noarch.rpm: python-django-horizon-doc-2014.1-1.fc20.noarch

Marking python-django-horizon-doc-2014.1-1.fc20.noarch.rpm to be installed

Resolving Dependencies

--> Running transaction check
---> Package openstack-dashboard.noarch 0:2013.2.3-1.fc20 will be updated
---> Package openstack-dashboard.noarch 0:2014.1-1.fc20 will be an update
---> Package openstack-dashboard-theme.noarch 0:2014.1-1.fc20 will be installed
---> Package python-django-horizon.noarch 0:2013.2.3-1.fc20 will be updated
---> Package python-django-horizon.noarch 0:2014.1-1.fc20 will be an update
---> Package python-django-horizon-doc.noarch 0:2014.1-1.fc20 will be installed
--> Finished Dependency Resolution

Dependencies Resolved

=========================================================================================================

Package                   Arch   Version          Repository                                       Size

=========================================================================================================

Installing:

openstack-dashboard-theme noarch 2014.1-1.fc20    /openstack-dashboard-theme-2014.1-1.fc20.noarch 1.5 M

python-django-horizon-doc noarch 2014.1-1.fc20    /python-django-horizon-doc-2014.1-1.fc20.noarch  24 M

Updating:

openstack-dashboard       noarch 2014.1-1.fc20    /openstack-dashboard-2014.1-1.fc20.noarch        14 M

python-django-horizon     noarch 2014.1-1.fc20    /python-django-horizon-2014.1-1.fc20.noarch     3.3 M

Transaction Summary

=========================================================================================================

Install  2 Packages

Upgrade  2 Packages

 

Total size: 42 M

Is this ok [y/d/N]: y

Downloading packages:

Running transaction check

Running transaction test

Transaction test succeeded

Running transaction

Updating   : python-django-horizon-2014.1-1.fc20.noarch                                            1/6

Updating   : openstack-dashboard-2014.1-1.fc20.noarch                                              2/6

warning: /etc/openstack-dashboard/local_settings created as /etc/openstack-dashboard/local_settings.rpmnew

Installing : openstack-dashboard-theme-2014.1-1.fc20.noarch                                        3/6

Installing : python-django-horizon-doc-2014.1-1.fc20.noarch                                        4/6

Cleanup    : openstack-dashboard-2013.2.3-1.fc20.noarch                                            5/6

Cleanup    : python-django-horizon-2013.2.3-1.fc20.noarch                                          6/6

Verifying  : openstack-dashboard-theme-2014.1-1.fc20.noarch                                        1/6

Verifying  : openstack-dashboard-2014.1-1.fc20.noarch                                              2/6

Verifying  : python-django-horizon-doc-2014.1-1.fc20.noarch                                        3/6

Verifying  : python-django-horizon-2014.1-1.fc20.noarch                                            4/6

Verifying  : openstack-dashboard-2013.2.3-1.fc20.noarch                                            5/6

Verifying  : python-django-horizon-2013.2.3-1.fc20.noarch                                          6/6

Installed:

openstack-dashboard-theme.noarch 0:2014.1-1.fc20    python-django-horizon-doc.noarch 0:2014.1-1.fc20

Updated:

openstack-dashboard.noarch 0:2014.1-1.fc20         python-django-horizon.noarch 0:2014.1-1.fc20

Complete!

[root@dfw02 ~(keystone_admin)]$ rpm -qa | grep django

python-django-horizon-doc-2014.1-1.fc20.noarch

python-django-horizon-2014.1-1.fc20.noarch

python-django-1.6.3-1.fc20.noarch

python-django-nose-1.2-1.fc20.noarch

python-django-bash-completion-1.6.3-1.fc20.noarch

python-django-openstack-auth-1.1.5-1.fc20.noarch

python-django-appconf-0.6-2.fc20.noarch

python-django-compressor-1.3-2.fc20.noarch

Admin’s reports regarding Cluster status

Ubuntu Trusty Server VM running


RDO Havana Neutron Namespaces Troubleshooting for OVS&VLAN(GRE) Config

April 14, 2014

The  OpenStack Networking components are deployed on the Controller, Compute, and Network nodes in the following configuration:

In case of Two Node Development Cluster :-

Controller node: hosts the Neutron server service, which provides the networking API and communicates with and tracks the agents.

DHCP agent: spawns and controls dnsmasq processes to provide leases to instances. This agent also spawns neutron-ns-metadata-proxy processes as part of the metadata system.

Metadata agent: provides a metadata proxy to the nova-api metadata service. The neutron-ns-metadata-proxy processes direct the traffic they receive in their namespaces to this proxy.

OVS plugin agent: Controls OVS network bridges and routes between them via patch, tunnel, or tap without requiring an external OpenFlow controller.

L3 agent: performs L3 forwarding and NAT.

In case of Three Node or more ( several Compute Nodes) :-

A separate box hosts the Neutron server and all of the services mentioned above.

Compute node: has an OVS plugin agent and openstack-nova-compute service.

Namespaces (View  Identifying and Troubleshooting Neutron Namespaces )

For each network you create, the Network node (or Controller node, if combined) will have a unique network namespace (netns) created by the DHCP and Metadata agents. The netns hosts an interface and IP addresses for dnsmasq and the neutron-ns-metadata-proxy. You can view the namespaces with the `ip netns list`  command, and can interact with the namespaces with the `ip netns exec namespace command`   command.

Every l2-agent/private network has an associated dhcp namespace and

Every l3-agent/router has an associated router namespace.

Network namespace starts with dhcp- followed by the ID of the network.

Router namespace starts with qrouter- followed by the ID of the router.
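As a quick illustration of this naming convention, a small sketch (plain bash, assuming admin credentials are already sourced) that prints each network's qdhcp-* namespace next to its ID:

# list every network ID known to Neutron and show the matching dhcp namespace
for netid in $(neutron net-list | awk -F'|' 'NR>3 && NF>2 {gsub(/ /,"",$2); print $2}') ; do
    echo "network ${netid} -> $(ip netns list | grep ${netid})"
done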

Source admin credentials and get network list

[root@dfw02 ~(keystone_admin)]$ neutron net-list

+————————————–+——+—————————————————–+

| id                                   | name | subnets                                             |

+————————————–+——+—————————————————–+

| 1eea88bb-4952-4aa4-9148-18b61c22d5b7 | int  | fa930cea-3d51-4cbe-a305-579f12aa53c0 10.0.0.0/24    |

| 426bb226-0ab9-440d-ba14-05634a17fb2b | int1 | 9e0d457b-c4c4-45cf-84e2-4ac7550f3b06 40.0.0.0/24    |

| 780ce2f3-2e6e-4881-bbac-857813f9a8e0 | ext  | f30e5a16-a055-4388-a6ea-91ee142efc3d 192.168.1.0/24 |

+————————————–+——+—————————————————–+

Using the network IDs above, grep the output of `ip netns list` to get the tenants' qdhcp-* namespace names:

[root@dfw02 ~(keystone_admin)]$ ip netns list | grep 1eea88bb-4952-4aa4-9148-18b61c22d5b7

qdhcp-1eea88bb-4952-4aa4-9148-18b61c22d5b7

[root@dfw02 ~(keystone_admin)]$ ip netns list | grep 426bb226-0ab9-440d-ba14-05634a17fb2b

qdhcp-426bb226-0ab9-440d-ba14-05634a17fb2b

Check a tenant's namespace by getting its interface IP and pinging that IP inside the namespace:

[root@dfw02 ~(keystone_admin)]$ ip netns exec qdhcp-426bb226-0ab9-440d-ba14-05634a17fb2b ifconfig

lo: flags=73  mtu 65536
inet 127.0.0.1  netmask 255.0.0.0
inet6 ::1  prefixlen 128  scopeid 0x10
loop  txqueuelen 0  (Local Loopback)
RX packets 35  bytes 4416 (4.3 KiB)
RX errors 0  dropped 0  overruns 0  frame 0
TX packets 35  bytes 4416 (4.3 KiB)
TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0

ns-343b0090-24: flags=4163  mtu 1500
inet 40.0.0.3  netmask 255.255.255.0  broadcast 40.0.0.255

inet6 fe80::f816:3eff:fe01:8b55  prefixlen 64  scopeid 0x20
ether fa:16:3e:01:8b:55  txqueuelen 1000  (Ethernet)
RX packets 3251  bytes 386284 (377.2 KiB)
RX errors 0  dropped 0  overruns 0  frame 0
TX packets 1774  bytes 344082 (336.0 KiB)
TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0

[root@dfw02 ~(keystone_admin)]$ ip netns exec qdhcp-426bb226-0ab9-440d-ba14-05634a17fb2b ping  -c 3 40.0.0.3
PING 40.0.0.3 (40.0.0.3) 56(84) bytes of data.
64 bytes from 40.0.0.3: icmp_seq=1 ttl=64 time=0.041 ms
64 bytes from 40.0.0.3: icmp_seq=2 ttl=64 time=0.035 ms
64 bytes from 40.0.0.3: icmp_seq=3 ttl=64 time=0.034 ms

— 40.0.0.3 ping statistics —
3 packets transmitted, 3 received, 0% packet loss, time 1999ms
rtt min/avg/max/mdev = 0.034/0.036/0.041/0.007 ms

Now verify that a separate dnsmasq process is running for each tenant's namespace:

[root@dfw02 ~(keystone_admin)]$ ps -aux | grep dhcp

neutron   2320  0.3  0.3 263908 30696 ?        Ss   08:18   2:14 /usr/bin/python /usr/bin/neutron-dhcp-agent --config-file /usr/share/neutron/neutron-dist.conf --config-file /etc/neutron/neutron.conf --config-file /etc/neutron/dhcp_agent.ini --log-file /var/log/neutron/dhcp-agent.log

nobody    3529  0.0  0.0  15532   832 ?        S    08:20   0:00 dnsmasq --no-hosts --no-resolv --strict-order --bind-interfaces --interface=ns-40dd712c-e4 --except-interface=lo --pid-file=/var/lib/neutron/dhcp/1eea88bb-4952-4aa4-9148-18b61c22d5b7/pid --dhcp-hostsfile=/var/lib/neutron/dhcp/1eea88bb-4952-4aa4-9148-18b61c22d5b7/host --dhcp-optsfile=/var/lib/neutron/dhcp/1eea88bb-4952-4aa4-9148-18b61c22d5b7/opts --leasefile-ro --dhcp-range=set:tag0,10.0.0.0,static,120s --dhcp-lease-max=256 --conf-file=/etc/neutron/dnsmasq.conf --domain=openstacklocal

nobody    3530  0.0  0.0  15532   944 ?        S    08:20   0:00 dnsmasq --no-hosts --no-resolv --strict-order --bind-interfaces --interface=ns-343b0090-24 --except-interface=lo --pid-file=/var/lib/neutron/dhcp/426bb226-0ab9-440d-ba14-05634a17fb2b/pid --dhcp-hostsfile=/var/lib/neutron/dhcp/426bb226-0ab9-440d-ba14-05634a17fb2b/host --dhcp-optsfile=/var/lib/neutron/dhcp/426bb226-0ab9-440d-ba14-05634a17fb2b/opts --leasefile-ro --dhcp-range=set:tag0,40.0.0.0,static,120s --dhcp-lease-max=256 --conf-file=/etc/neutron/dnsmasq.conf --domain=openstacklocal

List interfaces inside dhcp namespace

[root@dfw02 ~(keystone_admin)]$ ip netns exec qdhcp-426bb226-0ab9-440d-ba14-05634a17fb2b ip a

1: lo: mtu 65536 qdisc noqueue state UNKNOWN group default
link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
inet 127.0.0.1/8 scope host lo
valid_lft forever preferred_lft forever
inet6 ::1/128 scope host
valid_lft forever preferred_lft forever

2: ns-343b0090-24: mtu 1500 qdisc pfifo_fast state UP group default qlen 1000
link/ether fa:16:3e:01:8b:55 brd ff:ff:ff:ff:ff:ff
inet 40.0.0.3/24 brd 40.0.0.255 scope global ns-343b0090-24
valid_lft forever preferred_lft forever
inet6 fe80::f816:3eff:fe01:8b55/64 scope link
valid_lft forever preferred_lft forever

(A)( From the instance to a router)

Check routing inside dhcp namespace

[root@dfw02 ~(keystone_admin)]$ ip netns exec qdhcp-426bb226-0ab9-440d-ba14-05634a17fb2b  ip r

default via 40.0.0.1 dev ns-343b0090-24

40.0.0.0/24 dev ns-343b0090-24  proto kernel  scope link  src 40.0.0.3

Check routing inside the router namespace

[root@dfw02 ~(keystone_admin)]$ ip netns exec qrouter-86b3008c-297f-4301-9bdc-766b839785f1 ip r

default via 192.168.1.1 dev qg-9c090153-08

40.0.0.0/24 dev qr-e031db6b-d0  proto kernel  scope link  src 40.0.0.1

192.168.1.0/24 dev qg-9c090153-08  proto kernel  scope link  src 192.168.1.114

Get the router list, then use a similar grep with the router IDs to obtain the router namespaces:

[root@dfw02 ~(keystone_admin)]$ neutron router-list

+————————————–+———+—————————————————————————–+

| id                                   | name    | external_gateway_info                                                       |

+————————————–+———+—————————————————————————–+

| 86b3008c-297f-4301-9bdc-766b839785f1 | router2 | {“network_id”: “780ce2f3-2e6e-4881-bbac-857813f9a8e0″, “enable_snat”: true} |

| bf360d81-79fb-4636-8241-0a843f228fc8 | router1 | {“network_id”: “780ce2f3-2e6e-4881-bbac-857813f9a8e0″, “enable_snat”: true} |

+————————————–+———+—————————————————————————–+

Now get qrouter-* namespaces via `ip netns list` command :-

[root@dfw02 ~(keystone_admin)]$ ip netns list | grep 86b3008c-297f-4301-9bdc-766b839785f1
qrouter-86b3008c-297f-4301-9bdc-766b839785f1

[root@dfw02 ~(keystone_admin)]$ ip netns list | grep  bf360d81-79fb-4636-8241-0a843f228fc8
qrouter-bf360d81-79fb-4636-8241-0a843f228fc8

Now verify L3 forwarding & NAT via `iptables -L -t nat` inside the router namespace, and check that port 80 of 169.254.169.254 is redirected to the RDO Havana Controller's host (which in my configuration runs the Neutron Server service along with all agents) at metadata port 8700.

[root@dfw02 ~(keystone_admin)]$ ip netns exec qrouter-86b3008c-297f-4301-9bdc-766b839785f1 iptables -L -t nat

Chain PREROUTING (policy ACCEPT)

target     prot opt source               destination

neutron-l3-agent-PREROUTING  all  —  anywhere             anywhere

Chain INPUT (policy ACCEPT)

target     prot opt source               destination

Chain OUTPUT (policy ACCEPT)

target     prot opt source               destination

neutron-l3-agent-OUTPUT  all  —  anywhere             anywhere

Chain POSTROUTING (policy ACCEPT)

target     prot opt source               destination

neutron-l3-agent-POSTROUTING  all  —  anywhere             anywhere

neutron-postrouting-bottom  all  —  anywhere             anywhere

Chain neutron-l3-agent-OUTPUT (1 references)

target     prot opt source               destination

DNAT       all  —  anywhere             dfw02.localdomain    to:40.0.0.2

DNAT       all  —  anywhere             dfw02.localdomain    to:40.0.0.6

DNAT       all  —  anywhere             dfw02.localdomain    to:40.0.0.5

Chain neutron-l3-agent-POSTROUTING (1 references)

target     prot opt source               destination

ACCEPT     all  —  anywhere             anywhere             ! ctstate DNAT

Chain neutron-l3-agent-PREROUTING (1 references)

target     prot opt source               destination

REDIRECT   tcp  —  anywhere             169.254.169.254      tcp dpt:http redir ports 8700

DNAT       all  —  anywhere             dfw02.localdomain    to:40.0.0.2

DNAT       all  —  anywhere             dfw02.localdomain    to:40.0.0.6

DNAT       all  —  anywhere             dfw02.localdomain    to:40.0.0.5

Chain neutron-l3-agent-float-snat (1 references)

target     prot opt source               destination

SNAT       all  —  40.0.0.2             anywhere             to:192.168.1.107

SNAT       all  —  40.0.0.6             anywhere             to:192.168.1.104

SNAT       all  —  40.0.0.5             anywhere             to:192.168.1.110

Chain neutron-l3-agent-snat (1 references)

target     prot opt source               destination

neutron-l3-agent-float-snat  all  —  anywhere             anywhere

SNAT       all  —  40.0.0.0/24          anywhere             to:192.168.1.114

Chain neutron-postrouting-bottom (1 references)

target     prot opt source               destination

neutron-l3-agent-snat  all  —  anywhere             anywhere

[root@dfw02 ~(keystone_admin)]$ ip netns exec qrouter-bf360d81-79fb-4636-8241-0a843f228fc8  iptables -L -t nat

Chain PREROUTING (policy ACCEPT)

target     prot opt source               destination

neutron-l3-agent-PREROUTING  all  —  anywhere             anywhere

Chain INPUT (policy ACCEPT)

target     prot opt source               destination

Chain OUTPUT (policy ACCEPT)

target     prot opt source               destination

neutron-l3-agent-OUTPUT  all  —  anywhere             anywhere

Chain POSTROUTING (policy ACCEPT)

target     prot opt source               destination

neutron-l3-agent-POSTROUTING  all  —  anywhere             anywhere

neutron-postrouting-bottom  all  —  anywhere             anywhere

Chain neutron-l3-agent-OUTPUT (1 references)

target     prot opt source               destination

DNAT       all  —  anywhere             dfw02.localdomain    to:10.0.0.2

Chain neutron-l3-agent-POSTROUTING (1 references)

target     prot opt source               destination

ACCEPT     all  —  anywhere             anywhere             ! ctstate DNAT

Chain neutron-l3-agent-PREROUTING (1 references)

target     prot opt source               destination

REDIRECT   tcp  —  anywhere             169.254.169.254      tcp dpt:http redir ports 8700

DNAT       all  —  anywhere             dfw02.localdomain    to:10.0.0.2

Chain neutron-l3-agent-float-snat (1 references)

target     prot opt source               destination

SNAT       all  —  10.0.0.2             anywhere             to:192.168.1.103

Chain neutron-l3-agent-snat (1 references)

target     prot opt source               destination

neutron-l3-agent-float-snat  all  —  anywhere             anywhere

SNAT       all  —  10.0.0.0/24          anywhere             to:192.168.1.100

Chain neutron-postrouting-bottom (1 references)

target     prot opt source               destination

neutron-l3-agent-snat  all  —  anywhere             anywhere

(B) ( through a NAT rule in the router namespace)

Check the NAT table

[root@dfw02 ~(keystone_admin)]$ ip netns exec qrouter-86b3008c-297f-4301-9bdc-766b839785f1 iptables -t nat -S

-P PREROUTING ACCEPT

-P INPUT ACCEPT

-P OUTPUT ACCEPT

-P POSTROUTING ACCEPT

-N neutron-l3-agent-OUTPUT

-N neutron-l3-agent-POSTROUTING

-N neutron-l3-agent-PREROUTING

-N neutron-l3-agent-float-snat

-N neutron-l3-agent-snat

-N neutron-postrouting-bottom

-A PREROUTING -j neutron-l3-agent-PREROUTING

-A OUTPUT -j neutron-l3-agent-OUTPUT

-A POSTROUTING -j neutron-l3-agent-POSTROUTING

-A POSTROUTING -j neutron-postrouting-bottom

-A neutron-l3-agent-OUTPUT -d 192.168.1.112/32 -j DNAT --to-destination 40.0.0.2

-A neutron-l3-agent-OUTPUT -d 192.168.1.113/32 -j DNAT --to-destination 40.0.0.4

-A neutron-l3-agent-OUTPUT -d 192.168.1.104/32 -j DNAT --to-destination 40.0.0.6

-A neutron-l3-agent-OUTPUT -d 192.168.1.110/32 -j DNAT --to-destination 40.0.0.5

-A neutron-l3-agent-POSTROUTING ! -i qg-9c090153-08 ! -o qg-9c090153-08 -m conntrack ! --ctstate DNAT -j ACCEPT

-A neutron-l3-agent-PREROUTING -d 169.254.169.254/32 -p tcp -m tcp --dport 80 -j REDIRECT --to-ports 8700

-A neutron-l3-agent-PREROUTING -d 192.168.1.112/32 -j DNAT --to-destination 40.0.0.2

-A neutron-l3-agent-PREROUTING -d 192.168.1.113/32 -j DNAT --to-destination 40.0.0.4

-A neutron-l3-agent-PREROUTING -d 192.168.1.104/32 -j DNAT --to-destination 40.0.0.6

-A neutron-l3-agent-PREROUTING -d 192.168.1.110/32 -j DNAT --to-destination 40.0.0.5

-A neutron-l3-agent-float-snat -s 40.0.0.2/32 -j SNAT --to-source 192.168.1.112

-A neutron-l3-agent-float-snat -s 40.0.0.4/32 -j SNAT --to-source 192.168.1.113

-A neutron-l3-agent-float-snat -s 40.0.0.6/32 -j SNAT --to-source 192.168.1.104

-A neutron-l3-agent-float-snat -s 40.0.0.5/32 -j SNAT --to-source 192.168.1.110

-A neutron-l3-agent-snat -j neutron-l3-agent-float-snat

-A neutron-l3-agent-snat -s 40.0.0.0/24 -j SNAT --to-source 192.168.1.114

-A neutron-postrouting-bottom -j neutron-l3-agent-snat

[root@dfw02 ~(keystone_admin)]$ ip netns exec qrouter-bf360d81-79fb-4636-8241-0a843f228fc8 iptables -t nat -S

-P PREROUTING ACCEPT

-P INPUT ACCEPT

-P OUTPUT ACCEPT

-P POSTROUTING ACCEPT

-N neutron-l3-agent-OUTPUT

-N neutron-l3-agent-POSTROUTING

-N neutron-l3-agent-PREROUTING

-N neutron-l3-agent-float-snat

-N neutron-l3-agent-snat

-N neutron-postrouting-bottom

-A PREROUTING -j neutron-l3-agent-PREROUTING

-A OUTPUT -j neutron-l3-agent-OUTPUT

-A POSTROUTING -j neutron-l3-agent-POSTROUTING

-A POSTROUTING -j neutron-postrouting-bottom

-A neutron-l3-agent-OUTPUT -d 192.168.1.103/32 -j DNAT --to-destination 10.0.0.2

-A neutron-l3-agent-POSTROUTING ! -i qg-54e34740-87 ! -o qg-54e34740-87 -m conntrack ! --ctstate DNAT -j ACCEPT

-A neutron-l3-agent-PREROUTING -d 169.254.169.254/32 -p tcp -m tcp --dport 80 -j REDIRECT --to-ports 8700

-A neutron-l3-agent-PREROUTING -d 192.168.1.103/32 -j DNAT --to-destination 10.0.0.2

-A neutron-l3-agent-float-snat -s 10.0.0.2/32 -j SNAT --to-source 192.168.1.103

-A neutron-l3-agent-snat -j neutron-l3-agent-float-snat

-A neutron-l3-agent-snat -s 10.0.0.0/24 -j SNAT --to-source 192.168.1.100

-A neutron-postrouting-bottom -j neutron-l3-agent-snat

Ping to verify network connections

[root@dfw02 ~(keystone_admin)]$ ip netns exec qrouter-86b3008c-297f-4301-9bdc-766b839785f1 ping 8.8.8.8

PING 8.8.8.8 (8.8.8.8) 56(84) bytes of data.

64 bytes from 8.8.8.8: icmp_seq=1 ttl=47 time=42.6 ms

64 bytes from 8.8.8.8: icmp_seq=2 ttl=47 time=40.8 ms

64 bytes from 8.8.8.8: icmp_seq=3 ttl=47 time=41.6 ms

64 bytes from 8.8.8.8: icmp_seq=4 ttl=47 time=41.0 ms

Verify the service listening on port 8700 inside the router namespaces; the output looks like this:

(C) (to an instance of the neutron-ns-metadata-proxy)

[root@dfw02 ~(keystone_admin)]$ ip netns exec qrouter-86b3008c-297f-4301-9bdc-766b839785f1  netstat -lntp | grep 8700

tcp        0      0 0.0.0.0:8700            0.0.0.0:*               LISTEN      4946/python

Check process with pid 4946

[root@dfw02 ~(keystone_admin)]$ ps -ef | grep 4946

root      4946     1  0 08:58 ?        00:00:00 /usr/bin/python /bin/neutron-ns-metadata-proxy --pid_file=/var/lib/neutron/external/pids/86b3008c-297f-4301-9bdc-766b839785f1.pid --metadata_proxy_socket=/var/lib/neutron/metadata_proxy --router_id=86b3008c-297f-4301-9bdc-766b839785f1 --state_path=/var/lib/neutron --metadata_port=8700 --verbose --log-file=neutron-ns-metadata-proxy-86b3008c-297f-4301-9bdc-766b839785f1.log --log-dir=/var/log/neutron

root     10396 11489  0 16:33 pts/3    00:00:00 grep --color=auto 4946

[root@dfw02 ~(keystone_admin)]$ ip netns exec qrouter-bf360d81-79fb-4636-8241-0a843f228fc8  netstat -lntp | grep 8700

tcp        0      0 0.0.0.0:8700            0.0.0.0:*               LISTEN      4746/python

Check process with pid 4746

[root@dfw02 ~(keystone_admin)]$ ps -ef | grep 4746

root      4746     1  0 08:58 ?        00:00:00 /usr/bin/python /bin/neutron-ns-metadata-proxy --pid_file=/var/lib/neutron/external/pids/bf360d81-79fb-4636-8241-0a843f228fc8.pid --metadata_proxy_socket=/var/lib/neutron/metadata_proxy --router_id=bf360d81-79fb-4636-8241-0a843f228fc8 --state_path=/var/lib/neutron --metadata_port=8700 --verbose --log-file=neutron-ns-metadata-proxy-bf360d81-79fb-4636-8241-0a843f228fc8.log --log-dir=/var/log/neutron

Now run the following commands inside the router namespaces to check the status of the neutron metadata port :-

[root@dfw02 ~(keystone_admin)]$ ip netns exec qrouter-86b3008c-297f-4301-9bdc-766b839785f1  netstat -na

Active Internet connections (servers and established)

Proto Recv-Q Send-Q Local Address           Foreign Address         State

tcp        0      0 0.0.0.0:8700            0.0.0.0:*               LISTEN

Active UNIX domain sockets (servers and established)

Proto RefCnt Flags       Type       State         I-Node   Path

[root@dfw02 ~(keystone_admin)]$ ip netns exec qrouter-bf360d81-79fb-4636-8241-0a843f228fc8  netstat -na

Active Internet connections (servers and established)

Proto Recv-Q Send-Q Local Address           Foreign Address         State

tcp        0      0 0.0.0.0:8700            0.0.0.0:*               LISTEN

Active UNIX domain sockets (servers and established)

Proto RefCnt Flags       Type       State         I-Node   Path

Outside the router namespaces it looks like this:

(D) (to the actual Nova metadata service)

[root@dfw02 ~(keystone_admin)]$ netstat -lntp | grep 8700

tcp        0      0 0.0.0.0:8700            0.0.0.0:*               LISTEN      2746/python

Check process with pid  2746

[root@dfw02 ~(keystone_admin)]$ ps -ef | grep 2746

nova      2746     1  0 08:57 ?        00:02:31 /usr/bin/python /usr/bin/nova-api --logfile /var/log/nova/api.log

nova      2830  2746  0 08:57 ?        00:00:00 /usr/bin/python /usr/bin/nova-api --logfile /var/log/nova/api.log

nova      2851  2746  0 08:57 ?        00:00:10 /usr/bin/python /usr/bin/nova-api --logfile /var/log/nova/api.log

nova      2858  2746  0 08:57 ?        00:00:02 /usr/bin/python /usr/bin/nova-api --logfile /var/log/nova/api.log

root      9976 11489  0 16:31 pts/3    00:00:00 grep --color=auto 2746

So we have actually verified the statement from Direct access to Nova metadata:

in an environment running Neutron, a request from your instance must traverse a number of steps:

1. From the instance to a router, (A)

2. Through a NAT rule in the router namespace,  (B)

3. To an instance of the neutron-ns-metadata-proxy, (C)

4. To the actual Nova metadata service (D)

References

1. OpenStack Networking concepts


HowTo access metadata from RDO Havana Instance on Fedora 20

April 5, 2014

Per Direct_access_to_Nova_metadata

In an environment running Neutron, a request from your instance must traverse a number of steps:

1. From the instance to a router,
2. Through a NAT rule in the router namespace,
3. To an instance of the neutron-ns-metadata-proxy,
4. To the actual Nova metadata service

Reproducing Direct_access_to_Nova_metadata I was able to get only the list of available EC2 metadata, but not the values. However, the major concern is getting the values of the metadata obtained in the post Direct_access_to_Nova_metadata and also at the /openstack location. The latter seem to me no less important than those present in the EC2 list, and they are also not provided by it.

The commands run below verify that the Nova&Neutron setup has been performed successfully; otherwise, passing the four steps 1,2,3,4 will fail and force you to analyse the corresponding log files (view References). It doesn't matter whether you set up the cloud environment manually or via RDO packstack.

Run on Controller Node :-

[root@dallas1 ~(keystone_admin)]$ ip netns list

qrouter-cb80b040-f13f-4a67-a39e-353b1c873a0d
qdhcp-166d9651-d299-47df-a5a1-b368e87b612f

Check the routing in the Cloud controller's router namespace; it should show that port 80 for 169.254.169.254 is redirected to the host at port 8700:

[root@dallas1 ~(keystone_admin)]$ ip netns exec qrouter-cb80b040-f13f-4a67-a39e-353b1c873a0d iptables -L -t nat | grep 169

REDIRECT   tcp  —  anywhere             169.254.169.254      tcp dpt:http redir ports  8700

Check routing table inside the router namespace:

[root@dallas1 ~(keystone_admin)]$ ip netns exec qrouter-cb80b040-f13f-4a67-a39e-353b1c873a0d ip r

default via 192.168.1.1 dev qg-8fbb6202-3d
10.0.0.0/24 dev qr-2dd1ba70-34  proto kernel  scope link  src 10.0.0.1
192.168.1.0/24 dev qg-8fbb6202-3d  proto kernel  scope link  src 192.168.1.100

[root@dallas1 ~(keystone_admin)]$ ip netns exec qrouter-cb80b040-f13f-4a67-a39e-353b1c873a0d netstat -na

Active Internet connections (servers and established)

Proto Recv-Q Send-Q Local Address           Foreign Address         State
tcp        0      0 0.0.0.0:8700            0.0.0.0:*               LISTEN   
Active UNIX domain sockets (servers and established)
Proto RefCnt Flags       Type       State         I-Node   Path

[root@dallas1 ~(keystone_admin)]$ ip netns exec qdhcp-166d9651-d299-47df-a5a1-b368e87b612f netstat -na

Active Internet connections (servers and established)
Proto Recv-Q Send-Q Local Address           Foreign Address         State
tcp        0      0 10.0.0.3:53             0.0.0.0:*               LISTEN
tcp6       0      0 fe80::f816:3eff:feef:53 :::*                    LISTEN
udp        0      0 10.0.0.3:53             0.0.0.0:*
udp        0      0 0.0.0.0:67              0.0.0.0:*
udp6       0      0 fe80::f816:3eff:feef:53 :::*
Active UNIX domain sockets (servers and established)
Proto RefCnt Flags       Type       State         I-Node   Path

[root@dallas1 ~(keystone_admin)]$ iptables-save | grep 8700

-A INPUT -p tcp -m multiport --dports 8700 -m comment --comment "001 metadata incoming" -j ACCEPT
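If such a rule were missing, it could be added along these lines (a sketch, assuming the iptables service is in use as elsewhere in this post; the comment text merely mirrors the rule above):

# iptables -I INPUT -p tcp -m multiport --dports 8700 -m comment --comment "001 metadata incoming" -j ACCEPT
# service iptables save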

[root@dallas1 ~(keystone_admin)]$ netstat -lntp | grep 8700

tcp        0      0 0.0.0.0:8700            0.0.0.0:*               LISTEN      2830/python  

[root@dallas1 ~(keystone_admin)]$ ps -ef | grep 2830
nova      2830     1  0 09:41 ?        00:00:57 /usr/bin/python /usr/bin/nova-api --logfile /var/log/nova/api.log
nova      2856  2830  0 09:41 ?        00:00:00 /usr/bin/python /usr/bin/nova-api --logfile /var/log/nova/api.log
nova      2874  2830  0 09:41 ?        00:00:09 /usr/bin/python /usr/bin/nova-api --logfile /var/log/nova/api.log
nova      2875  2830  0 09:41 ?        00:00:01 /usr/bin/python /usr/bin/nova-api --logfile /var/log/nova/api.log

1. At this point you should be able (inside any running Havana instance) to point your browser ("links" at least, if there is no lightweight X environment) to

http://169.254.169.254/openstack/latest (not EC2)

The response will be: meta_data.json password vendor_data.json

What is cURL: http://curl.haxx.se/docs/faq.html#What_is_cURL

Now you should be able to run on F20 instance

[root@vf20rs0404 ~] # curl http://169.254.169.254/openstack/latest/meta_data.json | tee meta_data.json

%  Total    % Received % Xferd  Average Speed   Time    Time     Time  Current

Dload  Upload   Total   Spent    Left  Speed

100  1286  100  1286    0     0   1109      0  0:00:01  0:00:01 –:–:–  1127

. . . . . . . .

"uuid": "10142280-44a2-4830-acce-f12f3849cb32",
"availability_zone": "nova",
"hostname": "vf20rs0404.novalocal",
"launch_index": 0,
"public_keys": {"key2": "ssh-rsa . . . . .  Generated by Nova\n"},
"name": "VF20RS0404"

On another instance (in my case Ubuntu 14.04 )

 root@ubuntutrs0407:~#curl http://169.254.169.254/openstack/latest/meta_data.json | tee meta_data.json

Total    % Received % Xferd  Average Speed   Time    Time     Time  Current

Dload  Upload   Total   Spent    Left  Speed

100  1292  100  1292    0     0    444      0  0:00:02  0:00:02 –:–:–   446

{"random_seed": "...",
"uuid": "8c79e60c-4f1d-44e5-8446-b42b4d94c4fc",
"availability_zone": "nova",
"hostname": "ubuntutrs0407.novalocal",
"launch_index": 0,
"public_keys": {"key2": "ssh-rsa .... Generated by Nova\n"},
"name": "UbuntuTRS0407"}

Running VMs on Compute node:-

[root@dallas1 ~(keystone_boris)]$ nova list

+————————————–+—————+———–+————+————-+—————————–+

| ID                                   | Name          | Status    | Task State | Power State | Networks                    |

+————————————–+—————+———–+————+————-+—————————–+

| d0f947b1-ff6a-4ff0-b858-b63a3d07cca3 | UbuntuTRS0405 | SUSPENDED | None       | Shutdown    | int=10.0.0.7, 192.168.1.106 |

| 8c79e60c-4f1d-44e5-8446-b42b4d94c4fc | UbuntuTRS0407 | ACTIVE    | None       | Running     | int=10.0.0.6, 192.168.1.107 |

| 8775924c-dbbd-4fbb-afb8-7e38d9ac7615 | VF20RS037     | SUSPENDED | None       | Shutdown    | int=10.0.0.2, 192.168.1.115 |

| d22a2376-33da-4a0e-a066-d334bd2e511d | VF20RS0402    | SUSPENDED | None       | Shutdown    | int=10.0.0.4, 192.168.1.103 |

| 10142280-44a2-4830-acce-f12f3849cb32 | VF20RS0404    | ACTIVE    | None       | Running     | int=10.0.0.5, 192.168.1.105 |

+————————————–+—————+———–+————+————-+——————–

Launching browser to http://169.254.169.254/openstack/latest/meta_data.json on another Two Node Neutron GRE+OVS F20 Cluster. Output is sent directly to browser

2. I have provided some information about the OpenStack metadata API, which is available at /openstack. If you are interested in the EC2 metadata API instead, the browser should be pointed to http://169.254.169.254/latest/meta-data/ , which allows you to retrieve any of the displayed parameters.

For instance, via CLI :-

ubuntu@ubuntutrs0407:~$ curl  http://169.254.169.254/latest/meta-data/instance-id

i-000000a4

ubuntu@ubuntutrs0407:~$ curl  http://169.254.169.254/latest/meta-data/public-hostname

ubuntutrs0407.novalocal

ubuntu@ubuntutrs0407:~$ curl  http://169.254.169.254/latest/meta-data/public-ipv4

192.168.1.107

To verify the instance-id, launch virt-manager connected to the Compute Node; it shows the same value "000000a4".
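Without virt-manager, the same check can be done from the Compute Node shell (a sketch, assuming the default libvirt driver naming where the Nova instance id i-000000a4 maps to the libvirt domain instance-000000a4):

[root@dallas2 ~]# virsh list --all                                   # domains are named instance-<hex id>
[root@dallas2 ~]# virsh dumpxml instance-000000a4 | grep -i '<name\|uuid'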

Another option in text mode is “links” browser

$ ssh -l ubuntu -i key2.pem 192.168.1.109

Inside Ubuntu 14.04 instance  :-

# apt-get -y install links

# links

Press ESC to get to menu:-

 

 

 

 

References

1.https://ask.openstack.org/en/question/10140/wget-http1692541692542009-04-04meta-datainstance-id-error-404/


Setup Dashboard&VNC console on Two Node Controller&Compute Neutron GRE+OVS+Gluster Fedora 20 Cluster

March 13, 2014

This post follows up Gluster 3.4.2 on Two Node Controller&Compute Neutron GRE+OVS Fedora 20 Cluster; in particular, it can be performed after the Basic Setup to make system management more comfortable than with the CLI alone.

It's also easy to create an instance via the Dashboard by placing a customization script (the analog of --user-data) in the Post-Creation panel:

#cloud-config
password: mysecret
chpasswd: { expire: False }
ssh_pwauth: True

This makes it possible to log in as "fedora" and to set MTU=1457 inside the VM (GRE tunneling).
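The same cloud-config can also be passed from the command line when booting; a minimal sketch (the image and net IDs are placeholders, the file name cloud-config.txt is an assumption, the key pair name follows the example below):

[root@dfw02 ~(keystone_boris)]$ nova boot --flavor m1.small --image <Fedora-20-image-id> \
      --key-name key2 --nic net-id=<int-net-id> \
      --user-data ./cloud-config.txt VF20RS015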

   Key-pair submitted upon creation works like this :

[root@dfw02 Downloads(keystone_boris)]$ ssh -l fedora -i key2.pem  192.168.1.109
Last login: Sat Mar 15 07:47:45 2014

[fedora@vf20rs015 ~]$ uname -a
Linux vf20rs015.novalocal 3.13.6-200.fc20.x86_64 #1 SMP Fri Mar 7 17:02:28 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux

[fedora@vf20rs015 ~]$ ifconfig
eth0: flags=4163  mtu 1457
inet 40.0.0.7  netmask 255.255.255.0  broadcast 40.0.0.255
inet6 fe80::f816:3eff:fe1e:1de6  prefixlen 64  scopeid 0x20
ether fa:16:3e:1e:1d:e6  txqueuelen 1000  (Ethernet)
RX packets 225  bytes 25426 (24.8 KiB)
RX errors 0  dropped 0  overruns 0  frame 0
TX packets 221  bytes 23674 (23.1 KiB)
TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0

lo: flags=73  mtu 65536
inet 127.0.0.1  netmask 255.0.0.0
inet6 ::1  prefixlen 128  scopeid 0x10
loop  txqueuelen 0  (Local Loopback)
RX packets 0  bytes 0 (0.0 B)
RX errors 0  dropped 0  overruns 0  frame 0
TX packets 0  bytes 0 (0.0 B)
TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0

The setup described at the link mentioned above was originally suggested by Kashyap Chamarthy for VMs running on a non-default Libvirt subnet. From my side came an attempt to reproduce this setup on physical F20 boxes and an arbitrary network not connected to Libvirt, preventive updates to the mysql.user table (which allowed remote connections for nova-compute and neutron-openvswitch-agent from Compute to Controller), and changes to /etc/sysconfig/iptables to enable the Gluster 3.4.2 setup on F20 systems (view http://bderzhavets.blogspot.com/2014/03/setup-gluster-342-on-two-node-neutron.html). I have also fixed a typo in dhcp_agent.ini (the reference to "dnsmasq.conf") and added to dnsmasq.conf the line "dhcp-option=26,1454". The updated configuration files are critical for launching an instance without a "Customization script" and allow working with a usual ssh keypair; when the updates are done, the instance gets created with MTU 1454. View [2].

This setup is pretty much focused on the ability to transfer neutron metadata from the Controller to the Compute F20 nodes and is done manually with no answer files. It stops exactly at the point when `nova boot ..` loads an instance on Compute, which obtains an internal IP via DHCP running on the Controller and may be assigned a floating IP to be able to communicate with the Internet. No attempt to set up the dashboard was made, since the core target was neutron GRE+OVS functionality (just a proof of concept).
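For reference, the dhcp_agent.ini / dnsmasq.conf tweak mentioned above can be applied roughly like this (a sketch, assuming openstack-utils is installed; the MTU value 1454 is the one used in this post for GRE):

# point the DHCP agent at a custom dnsmasq config and lower the MTU handed out to instances
openstack-config --set /etc/neutron/dhcp_agent.ini DEFAULT dnsmasq_config_file /etc/neutron/dnsmasq.conf
echo "dhcp-option=26,1454" >> /etc/neutron/dnsmasq.conf
systemctl restart neutron-dhcp-agent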

Setup

- Controller node: Nova, Keystone, Cinder, Glance, Neutron (using Open vSwitch plugin and GRE tunneling ), Dashboard

 – Compute node: Nova (nova-compute), Neutron (openvswitch-agent)

dfw02.localdomain   -  Controller (192.168.1.127)
dfw01.localdomain   -  Compute    (192.168.1.137)

1. First step follows  http://docs.openstack.org/havana/install-guide/install/yum/content/install_dashboard.html   and  http://docs.openstack.org/havana/install-guide/install/yum/content/dashboard-session-database.html Sequence of actions per manuals above :-

# yum install memcached python-memcached mod_wsgi openstack-dashboard

Modify the value of CACHES['default']['LOCATION'] in /etc/openstack-dashboard/local_settings to match the ones set in /etc/sysconfig/memcached. Open /etc/openstack-dashboard/local_settings and look for this line:

CACHES = {
    'default': {
        'BACKEND': 'django.core.cache.backends.memcached.MemcachedCache',
        'LOCATION': '127.0.0.1:11211'
    }
}

Update the ALLOWED_HOSTS in local_settings.py to include the addresses you wish to access the dashboard from. Edit /etc/openstack-dashboard/local_settings:

ALLOWED_HOSTS = ['Controller-IP', 'my-desktop']

This guide assumes that you are running the Dashboard on the controller node. You can easily run the dashboard on a separate server, by changing the appropriate settings in local_settings.py. Edit /etc/openstack-dashboard/local_settings and change OPENSTACK_HOST to the hostname of your Identity Service:

OPENSTACK_HOST = “Controller-IP”

Start the Apache web server and memcached:

# service httpd restart
# systemctl start memcached
# systemctl enable memcached

To configure the MySQL database, create the dash database:

mysql> CREATE DATABASE dash;

Create a MySQL user for the newly-created dash database that has full control of the database. Replace DASH_DBPASS with a password for the new user:

mysql> GRANT ALL ON dash.* TO 'dash'@'%' IDENTIFIED BY 'fedora';
mysql> GRANT ALL ON dash.* TO 'dash'@'localhost' IDENTIFIED BY 'fedora';

In the local_settings file /etc/openstack-dashboard/local_settings

SESSION_ENGINE = 'django.contrib.sessions.backends.db'

DATABASES = {
    'default': {
        # Database configuration here
        'ENGINE': 'django.db.backends.mysql',
        'NAME': 'dash',
        'USER': 'dash',
        'PASSWORD': 'fedora',
        'HOST': 'Controller-IP',
        'default-character-set': 'utf8'
    }
}

After configuring the local_settings as shown, you can run the manage.py syncdb command to populate this newly-created database.

# /usr/share/openstack-dashboard/manage.py syncdb

Attempting to run syncdb you might get an error saying that 'dash'@'yourhost' is not authorized to connect (using password: YES). Then (for instance, in my case):

# mysql -u root -p

MariaDB [(none)]> SELECT User, Host, Password FROM mysql.user;

MariaDB [(none)]> insert into mysql.user(User,Host,Password) values ('dash','dallas1.localdomain',' ');
Query OK, 1 row affected, 4 warnings (0.00 sec)

MariaDB [(none)]> UPDATE mysql.user SET Password = PASSWORD('fedora')
    -> WHERE User = 'dash';
Query OK, 1 row affected (0.00 sec) Rows matched: 3  Changed: 1  Warnings: 0

MariaDB [(none)]> SELECT User, Host, Password FROM mysql.user;

.   .  .  .

| dash     | %                   | *C9E492EC67084E4255B200FD34BDF396E3CE1A36 |

| dash     | localhost       | *C9E492EC67084E4255B200FD34BDF396E3CE1A36 |

| dash     | dallas1.localdomain | *C9E492EC67084E4255B200FD34BDF396E3CE1A36 | +———-+———————+——————————————-+

20 rows in set (0.00 sec)

That is exactly the same issue which comes up when starting the openstack-nova-scheduler & openstack-nova-conductor services during basic installation of the Controller on Fedora 20. View Basic setup, in particular :-

Set table mysql.user in proper status

shell> mysql -u root -p
mysql> insert into mysql.user (User,Host,Password) values ('nova','dfw02.localdomain',' ');
mysql> UPDATE mysql.user SET Password = PASSWORD('nova')
    ->    WHERE User = 'nova';
mysql> FLUSH PRIVILEGES;

Start, enable nova-{api,scheduler,conductor} services

  $ for i in start enable status; \
    do systemctl $i openstack-nova-api; done

  $ for i in start enable status; \
    do systemctl $i openstack-nova-scheduler; done

  $ for i in start enable status; \
    do systemctl $i openstack-nova-conductor; done

 # service httpd restart

Finally on Controller (dfw02  – 192.168.1.127)  file /etc/openstack-dashboard/local_settings  looks like http://bderzhavets.wordpress.com/2014/03/14/sample-of-etcopenstack-dashboardlocal_settings/

At this point the dashboard is functional, but instance console sessions are unavailable via the dashboard. I didn't get any error code, just:

Instance Detail: VF20RS03

OverviewLogConsole

Loading…

2. The second step is skipped in the manual mentioned above, but is well known to experienced users: https://ask.openstack.org/en/question/520/vnc-console-in-dashboard-fails-to-connect-ot-server-code-1006/

**************************************

Controller  dfw02 – 192.168.1.127

**************************************

# ssh-keygen (Hit Enter to accept all of the defaults)

# ssh-copy-id -i ~/.ssh/id_rsa.pub root@dfw01

[root@dfw02 ~(keystone_boris)]$ ssh -L 5900:127.0.0.1:5900 -N -f -l root 192.168.1.137
[root@dfw02 ~(keystone_boris)]$ ssh -L 5901:127.0.0.1:5901 -N -f -l root 192.168.1.137
[root@dfw02 ~(keystone_boris)]$ ssh -L 5902:127.0.0.1:5902 -N -f -l root 192.168.1.137
[root@dfw02 ~(keystone_boris)]$ ssh -L 5903:127.0.0.1:5903 -N -f -l root 192.168.1.137
[root@dfw02 ~(keystone_boris)]$ ssh -L 5904:127.0.0.1:5904 -N -f -l root 192.168.1.137
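The five tunnels above can also be opened with an equivalent loop (a sketch; the VNC port range 5900-5904 simply matches the commands above):

[root@dfw02 ~(keystone_boris)]$ for p in 5900 5901 5902 5903 5904 ; do ssh -L ${p}:127.0.0.1:${p} -N -f -l root 192.168.1.137 ; done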

Compute’s  IP is 192.168.1.137

Update /etc/nova/nova.conf:

novncproxy_host=0.0.0.0

novncproxy_port=6080

novncproxy_base_url=http://192.168.1.127:6080/vnc_auto.html

[root@dfw02 ~(keystone_admin)]$ systemctl enable openstack-nova-consoleauth.service
ln -s ‘/usr/lib/systemd/system/openstack-nova-consoleauth.service’ ‘/etc/systemd/system/multi-user.target.wants/openstack-nova-consoleauth.service’
[root@dfw02 ~(keystone_admin)]$ systemctl enable openstack-nova-novncproxy.service
ln -s ‘/usr/lib/systemd/system/openstack-nova-novncproxy.service’ ‘/etc/systemd/system/multi-user.target.wants/openstack-nova-novncproxy.service’

[root@dfw02 ~(keystone_admin)]$ systemctl start openstack-nova-consoleauth.service
[root@dfw02 ~(keystone_admin)]$ systemctl start openstack-nova-novncproxy.service

[root@dfw02 ~(keystone_admin)]$ systemctl status openstack-nova-consoleauth.service

openstack-nova-consoleauth.service – OpenStack Nova VNC console auth Server

Loaded: loaded (/usr/lib/systemd/system/openstack-nova-consoleauth.service; enabled)

Active: active (running) since Thu 2014-03-13 19:14:45 MSK; 20min ago

Main PID: 14679 (nova-consoleaut)

CGroup: /system.slice/openstack-nova-consoleauth.service

└─14679 /usr/bin/python /usr/bin/nova-consoleauth --logfile /var/log/nova/consoleauth.log

Mar 13 19:14:45 dfw02.localdomain systemd[1]: Started OpenStack Nova VNC console auth Server.

[root@dfw02 ~(keystone_admin)]$ systemctl status openstack-nova-novncproxy.service

openstack-nova-novncproxy.service – OpenStack Nova NoVNC Proxy Server

Loaded: loaded (/usr/lib/systemd/system/openstack-nova-novncproxy.service; enabled)

Active: active (running) since Thu 2014-03-13 19:14:58 MSK; 20min ago

Main PID: 14762 (nova-novncproxy)

CGroup: /system.slice/openstack-nova-novncproxy.service

├─14762 /usr/bin/python /usr/bin/nova-novncproxy --web /usr/share/novnc/

└─17166 /usr/bin/python /usr/bin/nova-novncproxy --web /usr/share/novnc/

Mar 13 19:23:54 dfw02.localdomain nova-novncproxy[14762]: 20: 127.0.0.1: Path: ‘/websockify’

Mar 13 19:23:54 dfw02.localdomain nova-novncproxy[14762]: 20: connecting to: 127.0.0.1:5900

Mar 13 19:23:55 dfw02.localdomain nova-novncproxy[14762]: 19: 127.0.0.1: ignoring empty handshake

Mar 13 19:24:31 dfw02.localdomain nova-novncproxy[14762]: 22: 127.0.0.1: ignoring socket not ready

Mar 13 19:24:32 dfw02.localdomain nova-novncproxy[14762]: 23: 127.0.0.1: Plain non-SSL (ws://) WebSocket connection

Mar 13 19:24:32 dfw02.localdomain nova-novncproxy[14762]: 23: 127.0.0.1: Version hybi-13, base64: ‘True’

Mar 13 19:24:32 dfw02.localdomain nova-novncproxy[14762]: 23: 127.0.0.1: Path: ‘/websockify’

Mar 13 19:24:32 dfw02.localdomain nova-novncproxy[14762]: 23: connecting to: 127.0.0.1:5901

Mar 13 19:24:37 dfw02.localdomain nova-novncproxy[14762]: 26: 127.0.0.1: ignoring empty handshake

Mar 13 19:24:37 dfw02.localdomain nova-novncproxy[14762]: 25: 127.0.0.1: ignoring empty handshake

Hint: Some lines were ellipsized, use -l to show in full.

[root@dfw02 ~(keystone_admin)]$ netstat -lntp | grep 6080

tcp        0      0 0.0.0.0:6080            0.0.0.0:*               LISTEN      14762/python

*********************************

Compute  dfw01 – 192.168.1.137

*********************************

Update  /etc/nova/nova.conf:

vnc_enabled=True

novncproxy_base_url=http://192.168.1.127:6080/vnc_auto.html

vncserver_listen=0.0.0.0

vncserver_proxyclient_address=192.168.1.137
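These compute-side nova.conf settings can also be applied non-interactively (a sketch, assuming openstack-utils / openstack-config is available on the node):

openstack-config --set /etc/nova/nova.conf DEFAULT vnc_enabled True
openstack-config --set /etc/nova/nova.conf DEFAULT novncproxy_base_url http://192.168.1.127:6080/vnc_auto.html
openstack-config --set /etc/nova/nova.conf DEFAULT vncserver_listen 0.0.0.0
openstack-config --set /etc/nova/nova.conf DEFAULT vncserver_proxyclient_address 192.168.1.137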

# systemctl restart openstack-nova-compute

Finally :-

[root@dfw02 ~(keystone_admin)]$ systemctl list-units | grep nova

openstack-nova-api.service                      loaded active running   OpenStack Nova API Server
openstack-nova-conductor.service           loaded active running   OpenStack Nova Conductor Server
openstack-nova-consoleauth.service       loaded active running   OpenStack Nova VNC console auth Server
openstack-nova-novncproxy.service         loaded active running   OpenStack Nova NoVNC Proxy Server
openstack-nova-scheduler.service            loaded active running   OpenStack Nova Scheduler Server

[root@dfw02 ~(keystone_admin)]$ nova-manage service list

Binary           Host                                 Zone             Status     State Updated_At

nova-scheduler   dfw02.localdomain                    internal         enabled    :-)   2014-03-13 16:56:54

nova-conductor   dfw02.localdomain                    internal         enabled    :-)   2014-03-13 16:56:54

nova-compute     dfw01.localdomain                     nova             enabled    :-)   2014-03-13 16:56:45

nova-consoleauth dfw02.localdomain                   internal         enabled    :-)   2014-03-13 16:56:47

[root@dfw02 ~(keystone_admin)]$ neutron agent-list

+————————————–+——————–+——————-+——-+—————-+

| id                                   | agent_type         | host              | alive | admin_state_up |

+————————————–+——————–+——————-+——-+—————-+

| 037b985d-7a7d-455b-8536-76eed40b0722 | L3 agent           | dfw02.localdomain | :-)   | True           |

| 22438ee9-b4ea-4316-9492-eb295288f61a | Open vSwitch agent | dfw02.localdomain | :-)   | True           |

| 76ed02e2-978f-40d0-879e-1a2c6d1f7915 | DHCP agent         | dfw02.localdomain | :-)   | True           |

| 951632a3-9744-4ff4-a835-c9f53957c617 | Open vSwitch agent | dfw01.localdomain | :-)   | True           |

+————————————–+——————–+——————-+——-+—————-+

Users console views :-

    Admin Console views :-

[root@dallas2 ~]# service openstack-nova-compute status -l
Redirecting to /bin/systemctl status  -l openstack-nova-compute.service
openstack-nova-compute.service – OpenStack Nova Compute Server
Loaded: loaded (/usr/lib/systemd/system/openstack-nova-compute.service; enabled)
Active: active (running) since Thu 2014-03-20 16:29:07 MSK; 6h ago
Main PID: 1685 (nova-compute)
CGroup: /system.slice/openstack-nova-compute.service
├─1685 /usr/bin/python /usr/bin/nova-compute --logfile /var/log/nova/compute.log
└─3552 /usr/sbin/glusterfs --volfile-id=cinder-volumes012 --volfile-server=192.168.1.130 /var/lib/nova/mnt/1c9688348ab38662e3ac8fb121077d34

Mar 20 22:20:15 dallas2.localdomain sudo[11210]: nova : TTY=unknown ; PWD=/ ; USER=root ; COMMAND=/bin/nova-rootwrap /etc/nova/rootwrap.conf ip link set qvb372fd13e-d2 up
Mar 20 22:20:15 dallas2.localdomain sudo[11213]: nova : TTY=unknown ; PWD=/ ; USER=root ; COMMAND=/bin/nova-rootwrap /etc/nova/rootwrap.conf ip link set qvb372fd13e-d2 promisc on
Mar 20 22:20:16 dallas2.localdomain sudo[11216]: nova : TTY=unknown ; PWD=/ ; USER=root ; COMMAND=/bin/nova-rootwrap /etc/nova/rootwrap.conf ip link set qvo372fd13e-d2 up
Mar 20 22:20:16 dallas2.localdomain sudo[11219]: nova : TTY=unknown ; PWD=/ ; USER=root ; COMMAND=/bin/nova-rootwrap /etc/nova/rootwrap.conf ip link set qvo372fd13e-d2 promisc on
Mar 20 22:20:16 dallas2.localdomain sudo[11222]: nova : TTY=unknown ; PWD=/ ; USER=root ; COMMAND=/bin/nova-rootwrap /etc/nova/rootwrap.conf ip link set qbr372fd13e-d2 up
Mar 20 22:20:16 dallas2.localdomain sudo[11225]: nova : TTY=unknown ; PWD=/ ; USER=root ; COMMAND=/bin/nova-rootwrap /etc/nova/rootwrap.conf brctl addif qbr372fd13e-d2 qvb372fd13e-d2
Mar 20 22:20:16 dallas2.localdomain sudo[11228]: nova : TTY=unknown ; PWD=/ ; USER=root ; COMMAND=/bin/nova-rootwrap /etc/nova/rootwrap.conf ovs-vsctl -- --may-exist add-port br-int qvo372fd13e-d2 -- set Interface qvo372fd13e-d2 external-ids:iface-id=372fd13e-d283-43ba-9a4e-a1684660f4ce external-ids:iface-status=active external-ids:attached-mac=fa:16:3e:0d:4a:12 external-ids:vm-uuid=9679d849-7e4b-4cb5-b644-43279d53f01b
Mar 20 22:20:16 dallas2.localdomain ovs-vsctl[11230]: ovs|00001|vsctl|INFO|Called as /bin/ovs-vsctl -- --may-exist add-port br-int qvo372fd13e-d2 -- set Interface qvo372fd13e-d2 external-ids:iface-id=372fd13e-d283-43ba-9a4e-a1684660f4ce external-ids:iface-status=active external-ids:attached-mac=fa:16:3e:0d:4a:12 external-ids:vm-uuid=9679d849-7e4b-4cb5-b644-43279d53f01b
Mar 20 22:20:16 dallas2.localdomain sudo[11244]: nova : TTY=unknown ; PWD=/ ; USER=root ; COMMAND=/bin/nova-rootwrap /etc/nova/rootwrap.conf tee /sys/class/net/tap372fd13e-d2/brport/hairpin_mode
Mar 20 22:25:53 dallas2.localdomain nova-compute[1685]: 2014-03-20 22:25:53.102 1685 WARNING nova.compute.manager [-] Found 5 in the database and 2 on the hypervisor.

[root@dallas2 ~]# ovs-vsctl show
3e7422a7-8828-4e7c-b595-8a5b6504bc08
Bridge br-int
Port “qvod0e086e7-32″
tag: 1
Interface “qvod0e086e7-32″
Port br-int
            Interface br-int
type: internal
Port “qvo372fd13e-d2″
tag: 1
            Interface “qvo372fd13e-d2″
Port “qvob49ecf5e-8e”
tag: 1
Interface “qvob49ecf5e-8e”
Port “qvo756757a8-40″
tag: 1
Interface “qvo756757a8-40″
Port patch-tun
Interface patch-tun
type: patch
options: {peer=patch-int}
Port “qvo4d1f9115-03″
tag: 1
Interface “qvo4d1f9115-03″
Bridge br-tun
Port “gre-1″
Interface “gre-1″
type: gre
options: {in_key=flow, local_ip=”192.168.1.140″, out_key=flow, remote_ip=”192.168.1.130″}
Port br-tun
Interface br-tun
type: internal
Port patch-int
Interface patch-int
type: patch
options: {peer=patch-tun}
ovs_version: “2.0.0”

[root@dallas1 ~(keystone_boris)]$ nova list
+————————————–+————–+———–+————+————-+—————————–+
| ID                                   | Name         | Status    | Task State | Power State | Networks                    |
+————————————–+————–+———–+————+————-+—————————–+
| 690d29ae-4c3c-4b2e-b2df-e4d654668336 | UbuntuSRS007 | SUSPENDED | None       | Shutdown    | int=10.0.0.6, 192.168.1.103 |
| 9c791573-1238-44c4-a103-6873fddc17d1 | UbuntuTS019  | SUSPENDED | None       | Shutdown    | int=10.0.0.5, 192.168.1.107 |
| 70db20be-efa6-4a96-bf39-6250962784a3 | VF20RS015    | SUSPENDED | None       | Shutdown    | int=10.0.0.7, 192.168.1.101 |
| 3c888e6a-dd4f-489a-82bb-1f1f9ce6a696 | VF20RS017    | ACTIVE    | None       | Running     | int=10.0.0.4, 192.168.1.102 |
| 9679d849-7e4b-4cb5-b644-43279d53f01b | VF20RS024    | ACTIVE    | None       | Running     | int=10.0.0.2, 192.168.1.105 |
+————————————–+————–+———–+————+————-+—————————–+
[root@dallas1 ~(keystone_boris)]$ nova show 9679d849-7e4b-4cb5-b644-43279d53f01b
+————————————–+———————————————————-+
| Property                             | Value                                                    |
+————————————–+———————————————————-+
| status                               | ACTIVE                                                   |
| updated                              | 2014-03-20T18:20:16Z                                     |
| OS-EXT-STS:task_state                | None                                                     |
| key_name                             | key2                                                     |
| image                                | Attempt to boot from volume – no image supplied          |
| int network                          | 10.0.0.2, 192.168.1.105                                  |
| hostId                               | 8477c225f2a46d84dcd609798bf5ee71cc8d20b44256b3b2a54b723f |
| OS-EXT-STS:vm_state                  | active                                                   |
| OS-SRV-USG:launched_at               | 2014-03-20T18:20:16.000000                               |
| flavor                               | m1.small (2)                                             |
| id                                   | 9679d849-7e4b-4cb5-b644-43279d53f01b                     |
| security_groups                      | [{u'name': u'default'}]                                  |
| OS-SRV-USG:terminated_at             | None                                                     |
| user_id                              | df4a984ce2f24848a6b84aaa99e296f1                         |
| name                                 | VF20RS024                                                |
| created                              | 2014-03-20T18:20:10Z                                     |
| tenant_id                            | e896be65e94a4893b870bc29ba86d7eb                         |
| OS-DCF:diskConfig                    | MANUAL                                                   |
| metadata                             | {}                                                       |
| os-extended-volumes:volumes_attached | [{u'id': u'abc0f5b8-5144-42b7-b49f-a42a20ddd88f'}]       |
| accessIPv4                           |                                                          |
| accessIPv6                           |                                                          |
| progress                             | 0                                                        |
| OS-EXT-STS:power_state               | 1                                                        |
| OS-EXT-AZ:availability_zone          | nova                                                     |
| config_drive                         |                                                          |
+————————————–+———————————————————-+
[root@dallas1 ~(keystone_boris)]$ ls -l /FDR/Replicate
total 8383848
-rw-rw-rw-. 2 root root 5368709120 Mar 17 21:58 volume-4b807fe8-dcd2-46eb-b7dd-6ab10641c32a
-rw-rw-rw-. 2 root root 5368709120 Mar 20 18:26 volume-4df4fadf-1be9-4a09-b51c-723b8a6b9c23
-rw-rw-rw-. 2 root root 5368709120 Mar 19 13:46 volume-6ccc137a-6361-42ee-8925-57c6a2eeccf4
-rw-rw-rw-. 2 qemu qemu 5368709120 Mar 20 23:18 volume-abc0f5b8-5144-42b7-b49f-a42a20ddd88f
-rw-rw-rw-. 2 qemu qemu 5368709120 Mar 20 23:18 volume-ec9670b8-fa64-46e9-9695-641f51bf1421

[root@dallas1 ~(keystone_boris)]$ ssh 192.168.1.140
Last login: Thu Mar 20 20:15:49 2014
[root@dallas2 ~]# ls -l /FDR/Replicate
total 8383860
-rw-rw-rw-. 2 root root 5368709120 Mar 17 21:58 volume-4b807fe8-dcd2-46eb-b7dd-6ab10641c32a
-rw-rw-rw-. 2 root root 5368709120 Mar 20 18:26 volume-4df4fadf-1be9-4a09-b51c-723b8a6b9c23
-rw-rw-rw-. 2 root root 5368709120 Mar 19 13:46 volume-6ccc137a-6361-42ee-8925-57c6a2eeccf4
-rw-rw-rw-. 2 qemu qemu 5368709120 Mar 20 23:19 volume-abc0f5b8-5144-42b7-b49f-a42a20ddd88f
-rw-rw-rw-. 2 qemu qemu 5368709120 Mar 20 23:19 volume-ec9670b8-fa64-46e9-9695-641f51bf1421


Setup Gluster 3.4.2 on Two Node Controller&Compute Neutron GRE+OVS Fedora 20 Cluster

March 10, 2014

This post is an update for http://bderzhavets.blogspot.com/2014/01/setting-up-two-physical-node-openstack.html . It is focused on a Gluster 3.4.2 implementation, including tuning of the /etc/sysconfig/iptables files on the Controller and Compute Nodes.
It covers copying the ssh key from the master node to the compute node, step by step verification of gluster volume "replica 2" functionality, and switching the RDO Havana cinder services to work with a gluster volume created to store the instances' bootable cinder volumes, for a performance improvement. Of course, creating gluster bricks under "/" is not recommended; there should be a separate mount point with an "xfs" filesystem to store the gluster bricks on each node.

The manual RDO Havana setup itself was originally suggested by Kashyap Chamarthy for F20 VMs running on a non-default Libvirt subnet. From my side came an attempt to reproduce this setup on physical F20 boxes and an arbitrary network not connected to Libvirt, preventive updates of the mysql.user table (allowing remote connections for nova-compute and neutron-openvswitch-agent from Compute to Controller), and changes to /etc/sysconfig/iptables to enable the Gluster 3.4.2 setup on F20 systems (view http://bderzhavets.blogspot.com/2014/03/setup-gluster-342-on-two-node-neutron.html). I have also fixed a typo in dhcp_agent.ini (the reference to "dnsmasq.conf") and added the line "dhcp-option=26,1454" to dnsmasq.conf; both changes are sketched right after the link below. The updated configuration files are critical for launching instances without a "Customization script" and allow working with a usual ssh keypair. Once the updates are done, an instance gets created with MTU 1454. View [2]. The original setup is focused on the ability to transfer neutron metadata from the Controller to the Compute F20 node and is done manually, with no answer files. It stops exactly at the point when `nova boot ..` loads an instance on Compute, which obtains an internal IP via the DHCP running on the Controller and may be assigned a floating IP to communicate with the Internet. No attempt to set up the dashboard has been made, since the core target was neutron GRE+OVS functionality (just a proof of concept). Regarding the Dashboard & VNC console setup, view :-
Setup Dashboard&VNC console on Two Node Controller&Compute Neutron GRE+OVS+Gluster Fedora 20 Cluster
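
For reference, the two DHCP related changes mentioned above boil down to the following (a minimal sketch; the paths are the standard Neutron ones, 1454 is the MTU that accounts for the GRE overhead):

# /etc/neutron/dhcp_agent.ini  (point the agent at a custom dnsmasq config)
dnsmasq_config_file = /etc/neutron/dnsmasq.conf

# /etc/neutron/dnsmasq.conf  (push MTU 1454 to instances via DHCP option 26)
dhcp-option=26,1454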

Updated setup procedure itself may be viewed here

Setup 

- Controller node: Nova, Keystone, Cinder, Glance, Neutron (using Open vSwitch plugin and GRE tunneling )

 – Compute node: Nova (nova-compute), Neutron (openvswitch-agent)

dallas1.localdomain   –  Controller (192.168.1.130)

dallas2.localdomain   –  Compute   (192.168.1.140)

The first step is tuning /etc/sysconfig/iptables for the IPv4 iptables firewall (service firewalld should be disabled):-

Update /etc/sysconfig/iptables on both nodes:-

-A INPUT -p tcp -m multiport --dport 24007:24047 -j ACCEPT
-A INPUT -p tcp --dport 111 -j ACCEPT
-A INPUT -p udp --dport 111 -j ACCEPT
-A INPUT -p tcp -m multiport --dport 38465:38485 -j ACCEPT

Comment out the lines below, ignoring the instruction from http://kashyapc.fedorapeople.org/virt/openstack/neutron-configs-GRE-OVS-two-node.txt . This is critical for Gluster functionality: with these rules active you would be limited to thin LVM cinder volumes, you would not even be able to mount remotely with the "-t glusterfs" option, and Gluster replication would be dead forever.

# -A FORWARD -j REJECT --reject-with icmp-host-prohibited
# -A INPUT -j REJECT --reject-with icmp-host-prohibited

Restart service iptables on both nodes
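
On Fedora 20 that amounts to something like the following (a small sketch; the second command just checks that the Gluster ports are now accepted):

# service iptables restart
# iptables -L -n | grep -E '24007|38465|111'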

Second step:-

On dallas1, run the following commands :

# ssh-keygen (Hit Enter to accept all of the defaults)
# ssh-copy-id -i ~/.ssh/id_rsa.pub root@dallas2

On both nodes run :-

# yum  -y install glusterfs glusterfs-server glusterfs-fuse
# service glusterd start
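
It is probably also worth enabling glusterd at boot on both nodes (my addition, not strictly required for this test):

# systemctl enable glusterd.service
# systemctl status glusterd.service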

On dallas1

# gluster peer probe dallas2.localdomain
It should return "success".

[root@dallas1 ~(keystone_admin)]$ gluster peer status

Number of Peers: 1
Hostname: dallas2.localdomain
Uuid: b3b1cf43-2fec-4904-82d4-b9be03f77c5f
State: Peer in Cluster (Connected)
On dallas2
[root@dallas2 ~]# gluster peer status
Number of Peers: 1
Hostname: 192.168.1.130
Uuid: a57433dd-4a1a-4442-a5ae-ba2f682e5c79
State: Peer in Cluster (Connected)

*************************************************************************************
On Controller (192.168.1.130)  & Compute nodes (192.168.1.140)
**********************************************************************************

Verify ports availability:-

[root@dallas1 ~(keystone_admin)]$ netstat -lntp | grep gluster
tcp    0      0 0.0.0.0:655        0.0.0.0:*    LISTEN      2591/glusterfs
tcp    0      0 0.0.0.0:49152      0.0.0.0:*    LISTEN      2524/glusterfsd
tcp    0      0 0.0.0.0:2049       0.0.0.0:*    LISTEN      2591/glusterfs
tcp    0      0 0.0.0.0:38465      0.0.0.0:*    LISTEN      2591/glusterfs
tcp    0      0 0.0.0.0:38466      0.0.0.0:*    LISTEN      2591/glusterfs
tcp    0      0 0.0.0.0:49155      0.0.0.0:*    LISTEN      2525/glusterfsd
tcp    0      0 0.0.0.0:38468      0.0.0.0:*    LISTEN      2591/glusterfs
tcp    0      0 0.0.0.0:38469      0.0.0.0:*    LISTEN      2591/glusterfs
tcp    0      0 0.0.0.0:24007      0.0.0.0:*    LISTEN      2380/glusterd

************************************

Switching Cinder to Gluster volume

************************************

# gluster volume create cinder-volumes012  replica 2 dallas1.localdomain:/FDR/Replicate   dallas2.localdomain:/FDR/Replicate force
# gluster volume start cinder-volumes012
# gluster volume set cinder-volumes012  auth.allow 192.168.1.*
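
The "Options Reconfigured" section in the output below also shows storage.owner-uid/gid set to 165 (the cinder user); if your volume does not have them yet, something like this would set them (a sketch, using the same value 165 as in the output):

# gluster volume set cinder-volumes012 storage.owner-uid 165
# gluster volume set cinder-volumes012 storage.owner-gid 165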

[root@dallas1 ~(keystone_admin)]$ gluster volume info cinder-volumes012

Volume Name: cinder-volumes012
Type: Replicate
Volume ID: 9ee31c6c-0ae3-4fee-9886-b9cb6a518f48
Status: Started
Number of Bricks: 1 x 2 = 2
Transport-type: tcp
Bricks:
Brick1: dallas1.localdomain:/FDR/Replicate
Brick2: dallas2.localdomain:/FDR/Replicate
Options Reconfigured:
storage.owner-gid: 165
storage.owner-uid: 165
auth.allow: 192.168.1.*

[root@dallas1 ~(keystone_admin)]$ gluster volume status cinder-volumes012

Status of volume: cinder-volumes012
Gluster process                                                    Port    Online    Pid
——————————————————————————
Brick dallas1.localdomain:/FDR/Replicate         49155    Y    2525
Brick dallas2.localdomain:/FDR/Replicate         49152    Y    1615
NFS Server on localhost                                  2049    Y    2591
Self-heal Daemon on localhost                         N/A    Y    2596
NFS Server on dallas2.localdomain                   2049    Y    2202
Self-heal Daemon on dallas2.localdomain          N/A    Y    2197

# openstack-config --set /etc/cinder/cinder.conf DEFAULT volume_driver cinder.volume.drivers.glusterfs.GlusterfsDriver
# openstack-config --set /etc/cinder/cinder.conf DEFAULT glusterfs_shares_config /etc/cinder/shares.conf
# openstack-config --set /etc/cinder/cinder.conf DEFAULT glusterfs_mount_point_base /var/lib/cinder/volumes
# vi /etc/cinder/shares.conf
192.168.1.130:cinder-volumes012
:wq
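
After these three openstack-config calls the Gluster related part of /etc/cinder/cinder.conf should look roughly like this (a quick check, not an exhaustive listing):

[root@dallas1 ~(keystone_admin)]$ grep -E 'volume_driver|glusterfs' /etc/cinder/cinder.conf
volume_driver = cinder.volume.drivers.glusterfs.GlusterfsDriver
glusterfs_shares_config = /etc/cinder/shares.conf
glusterfs_mount_point_base = /var/lib/cinder/volumes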

Make sure all thin LVM based cinder volumes have been deleted (check via `cinder list`); if any are still there, delete them all, for instance as sketched below.
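
Something along these lines would do (a sketch; VOLUME_ID is a placeholder for whatever `cinder list` reports):

[root@dallas1 ~(keystone_admin)]$ cinder list
[root@dallas1 ~(keystone_admin)]$ cinder delete VOLUME_ID     # repeat for every leftover volume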

[root@dallas1 ~(keystone_admin)]$ for i in api scheduler volume ; do service openstack-cinder-${i} restart ; done

This should add a row to the `df -h` output:

192.168.1.130:cinder-volumes012  187G   32G  146G  18% /var/lib/cinder/volumes/1c9688348ab38662e3ac8fb121077d34

[root@dallas1 ~(keystone_admin)]$ openstack-status

== Nova services ==

openstack-nova-api:                        active
openstack-nova-cert:                       inactive  (disabled on boot)
openstack-nova-compute:               inactive  (disabled on boot)
openstack-nova-network:                inactive  (disabled on boot)
openstack-nova-scheduler:             active
openstack-nova-volume:                  inactive  (disabled on boot)
openstack-nova-conductor:             active
== Glance services ==
openstack-glance-api:                   active
openstack-glance-registry:           active
== Keystone service ==
openstack-keystone:                     active
== neutron services ==
neutron-server:                         active
neutron-dhcp-agent:                active
neutron-l3-agent:                     active
neutron-metadata-agent:        active
neutron-lbaas-agent:                    inactive  (disabled on boot)
neutron-openvswitch-agent:       active
neutron-linuxbridge-agent:         inactive  (disabled on boot)
neutron-ryu-agent:                      inactive  (disabled on boot)
neutron-nec-agent:                      inactive  (disabled on boot)
neutron-mlnx-agent:                   inactive  (disabled on boot)
== Cinder services ==
openstack-cinder-api:                   active
openstack-cinder-scheduler:        active
openstack-cinder-volume:             active
== Support services ==
mysqld:                                 inactive  (disabled on boot)
libvirtd:                               active
openvswitch:                            active
dbus:                                   active
tgtd:                                   active
qpidd:                                  active
== Keystone users ==
+———————————-+———+———+——-+
|                id                |   name  | enabled | email |
+———————————-+———+———+——-+
| 871cf99617ff40e09039185aa7ab11f8 |  admin  |   True  |       |
| df4a984ce2f24848a6b84aaa99e296f1 |  boris  |   True  |       |
| 57fc5466230b497a9f206a20618dbe25 |  cinder |   True  |       |
| cdb2e5af7bae4c5486a1e3e2f42727f0 |  glance |   True  |       |
| adb14139a0874c74b14d61d2d4f22371 | neutron |   True  |       |
| 2485122e3538409c8a6fa2ea4343cedf |   nova  |   True  |       |
+———————————-+———+———+——-+
== Glance images ==
+————————————–+———————+————-+——————+———–+——–+
| ID                                   | Name                | Disk Format | Container Format | Size      | Status |
+————————————–+———————+————-+——————+———–+——–+
| 592faef8-308a-4438-867a-17adf685cde4 | CirrOS 31           | qcow2       | bare             | 13147648  | active |
| d0e90250-5814-4685-9b8d-65ec9daa7117 | Fedora 20 x86_64    | qcow2       | bare             | 214106112 | active |
| 3e6eea8e-32e6-4373-9eb1-e04b8a3167f9 | Ubuntu Server 13.10 | qcow2       | bare             | 244777472 | active |
+————————————–+———————+————-+——————+———–+——–+
== Nova managed services ==
+—————-+———————+———-+———+——-+—————————-+—————–+
| Binary         | Host                | Zone     | Status  | State | Updated_at                 | Disabled Reason |
+—————-+———————+———-+———+——-+—————————-+—————–+
| nova-scheduler | dallas1.localdomain | internal | enabled | up    | 2014-03-09T14:19:31.000000 | None            |
| nova-conductor | dallas1.localdomain | internal | enabled | up    | 2014-03-09T14:19:30.000000 | None            |
| nova-compute   | dallas2.localdomain | nova     | enabled | up    | 2014-03-09T14:19:33.000000 | None            |
+—————-+———————+———-+———+——-+—————————-+—————–+
== Nova networks ==
+————————————–+——-+——+
| ID                                   | Label | Cidr |
+————————————–+——-+——+
| 0ed406bf-3552-4036-9006-440f3e69618e | ext   | None |
| 166d9651-d299-47df-a5a1-b368e87b612f | int   | None |
+————————————–+——-+——+
== Nova instance flavors ==
+—-+———–+———–+——+———–+——+——-+————-+———–+
| ID | Name      | Memory_MB | Disk | Ephemeral | Swap | VCPUs | RXTX_Factor | Is_Public |
+—-+———–+———–+——+———–+——+——-+————-+———–+
| 1  | m1.tiny   | 512       | 1    | 0         |      | 1     | 1.0         | True      |
| 2  | m1.small  | 2048      | 20   | 0         |      | 1     | 1.0         | True      |
| 3  | m1.medium | 4096      | 40   | 0         |      | 2     | 1.0         | True      |
| 4  | m1.large  | 8192      | 80   | 0         |      | 4     | 1.0         | True      |
| 5  | m1.xlarge | 16384     | 160  | 0         |      | 8     | 1.0         | True      |
+—-+———–+———–+——+———–+——+——-+————-+———–+
== Nova instances ==
+—-+——+——–+————+————-+———-+
| ID | Name | Status | Task State | Power State | Networks |
+—-+——+——–+————+————-+———-+
+—-+——+——–+————+————-+———-+

[root@dallas1 ~(keystone_boris)]$ nova list

+————————————–+———–+——–+————+————-+—————————–+
| ID                                   | Name      | Status | Task State | Power State | Networks                    |
+————————————–+———–+——–+————+————-+—————————–+
| 9566adec-9406-4c3e-bce5-109ecb8bcf6b | VF20RS007 | ACTIVE | None       | Running     | int=10.0.0.2, 192.168.1.101 |
| 8142ee4c-ef56-4b61-8a0b-ecd82d21484f | VF20RS012 | ACTIVE | None       | Running     | int=10.0.0.4, 192.168.1.102 |
+————————————–+———–+——–+————+————-+—————————–+
[root@dallas1 ~(keystone_boris)]$ df -h

Filesystem                       Size  Used Avail Use% Mounted on
/dev/mapper/fedora01-root        187G   32G  146G  18% /
devtmpfs                         3.9G     0  3.9G   0% /dev
tmpfs                            3.9G  184K  3.9G   1% /dev/shm
tmpfs                            3.9G  9.1M  3.9G   1% /run
tmpfs                            3.9G     0  3.9G   0% /sys/fs/cgroup
tmpfs                            3.9G  464K  3.9G   1% /tmp
/dev/sdb5                        477M  122M  327M  28% /boot
tmpfs                            3.9G  9.1M  3.9G   1% /run/netns
192.168.1.130:cinder-volumes012  187G   32G  146G  18% /var/lib/cinder/volumes/1c9688348ab38662e3ac8fb121077d34

(neutron) agent-list

+————————————–+——————–+———————+——-+—————-+
| id                                   | agent_type         | host                | alive | admin_state_up |
+————————————–+——————–+———————+——-+—————-+
| 3ed1cd15-81af-4252-9d6f-e9bb140bf6cf | L3 agent           | dallas1.localdomain | :-)   | True           |
| a088a6df-633c-4959-a316-510c99f3876b | DHCP agent         | dallas1.localdomain | :-)   | True           |
| a3e5200c-b391-4930-b3ee-58c8d1b13c73 | Open vSwitch agent | dallas1.localdomain | :-)   | True           |
| b6da839a-0d93-44ad-9793-6d0919fbb547 | Open vSwitch agent | dallas2.localdomain | :-)   | True           |
+————————————–+——————–+———————+——-+—————-+
If Controller has been correctly set up:-

[root@dallas1 ~(keystone_admin)]$ netstat -lntp | grep python
tcp    0     0 0.0.0.0:8700      0.0.0.0:*     LISTEN      1160/python
tcp    0     0 0.0.0.0:35357     0.0.0.0:*     LISTEN      1163/python
tcp   0      0 0.0.0.0:9696      0.0.0.0:*      LISTEN      1165/python
tcp   0      0 0.0.0.0:8773      0.0.0.0:*      LISTEN      1160/python
tcp   0      0 0.0.0.0:8774      0.0.0.0:*      LISTEN      1160/python
tcp   0      0 0.0.0.0:9191      0.0.0.0:*      LISTEN      1173/python
tcp   0      0 0.0.0.0:8776      0.0.0.0:*      LISTEN      8169/python
tcp   0      0 0.0.0.0:5000      0.0.0.0:*      LISTEN      1163/python
tcp   0      0 0.0.0.0:9292      0.0.0.0:*      LISTEN      1168/python 

**********************************************
Creating instance utilizing glusterfs volume
**********************************************

[root@dallas1 ~(keystone_boris)]$ glance image-list

+————————————–+———————+————-+——————+———–+——–+
| ID                                   | Name                | Disk Format | Container Format | Size      | Status |
+————————————–+———————+————-+——————+———–+——–+
| 592faef8-308a-4438-867a-17adf685cde4 | CirrOS 31           | qcow2       | bare             | 13147648  | active |
| d0e90250-5814-4685-9b8d-65ec9daa7117 | Fedora 20 x86_64    | qcow2       | bare             | 214106112 | active |
| 3e6eea8e-32e6-4373-9eb1-e04b8a3167f9 | Ubuntu Server 13.10 | qcow2       | bare             | 244777472 | active |
+————————————–+———————+————-+——————+———–+——–+

I have to note that the schema `cinder create --image-id .. --display_name VOL_NAME SIZE` && `nova boot --flavor 2 --user-data=./myfile.txt --block_device_mapping vda=volume_id:::0 VM_NAME` does not, at the moment, work reliably for me.

As of 03/11, the standard schema via `cinder create --image-id IMAGE_ID --display_name VOL_NAME SIZE` && `nova boot --flavor 2 --user-data=./myfile.txt --block_device_mapping vda=VOLUME_ID:::0 INSTANCE_NAME` started to work fine. However, the schema described below, on the contrary, stopped working on glusterfs based cinder volumes.
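
For the record, the two schemas side by side (IMAGE_ID, VOL_NAME, VOLUME_ID, SIZE and INSTANCE_NAME are placeholders of mine; the one-step variant is the one demonstrated next):

# Two-step: pre-create the bootable volume, then boot from it
$ cinder create --image-id IMAGE_ID --display_name VOL_NAME SIZE
$ nova boot --flavor 2 --user-data=./myfile.txt --block_device_mapping vda=VOLUME_ID:::0 INSTANCE_NAME

# One-step: let nova create the volume from the image at boot time
$ nova boot --flavor 2 --user-data=./myfile.txt --block-device source=image,id=IMAGE_ID,dest=volume,size=SIZE,shutdown=preserve,bootindex=0 INSTANCE_NAME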

[root@dallas1 ~(keystone_boris)]$ nova boot --flavor 2 --user-data=./myfile.txt --block-device source=image,id=d0e90250-5814-4685-9b8d-65ec9daa7117,dest=volume,size=5,shutdown=preserve,bootindex=0 VF20RS012

+————————————–+————————————————-+
| Property                             | Value                                           |
+————————————–+————————————————-+
| status                               | BUILD                                           |
| updated                              | 2014-03-09T12:41:22Z                            |
| OS-EXT-STS:task_state                | scheduling                                      |
| key_name                             | None                                            |
| image                                | Attempt to boot from volume – no image supplied |
| hostId                               |                                                 |
| OS-EXT-STS:vm_state                  | building                                        |
| OS-SRV-USG:launched_at               | None                                            |
| flavor                               | m1.small                                        |
| id                                   | 8142ee4c-ef56-4b61-8a0b-ecd82d21484f            |
| security_groups                      | [{u'name': u'default'}]                         |
| OS-SRV-USG:terminated_at             | None                                            |
| user_id                              | df4a984ce2f24848a6b84aaa99e296f1                |
| name                                 | VF20RS012                                       |
| adminPass                            | eFDhC8ZSCFU2                                    |
| tenant_id                            | e896be65e94a4893b870bc29ba86d7eb                |
| created                              | 2014-03-09T12:41:22Z                            |
| OS-DCF:diskConfig                    | MANUAL                                          |
| metadata                             | {}                                              |
| os-extended-volumes:volumes_attached | []                                              |
| accessIPv4                           |                                                 |
| accessIPv6                           |                                                 |
| progress                             | 0                                               |
| OS-EXT-STS:power_state               | 0                                               |
| OS-EXT-AZ:availability_zone          | nova                                            |
| config_drive                         |                                                 |
+————————————–+————————————————-+

[root@dallas1 ~(keystone_boris)]$ nova list

+————————————–+———–+———–+———————-+————-+—————————–+
| ID                                   | Name      | Status    | Task State           | Power State | Networks                    |
+————————————–+———–+———–+———————-+————-+—————————–+
| 9566adec-9406-4c3e-bce5-109ecb8bcf6b | VF20RS007 | SUSPENDED | None                 | Shutdown    | int=10.0.0.2, 192.168.1.101 |
| 8142ee4c-ef56-4b61-8a0b-ecd82d21484f | VF20RS012 | BUILD     | block_device_mapping | NOSTATE     |                             |
+————————————–+———–+———–+———————-+————-+—————————–+
WAIT …
[root@dallas1 ~(keystone_boris)]$ nova list
+————————————–+———–+———–+————+————-+—————————–+
| ID                                   | Name      | Status    | Task State | Power State | Networks                    |
+————————————–+———–+———–+————+————-+—————————–+
| 9566adec-9406-4c3e-bce5-109ecb8bcf6b | VF20RS007 | SUSPENDED | None       | Shutdown    | int=10.0.0.2, 192.168.1.101 |
| 8142ee4c-ef56-4b61-8a0b-ecd82d21484f | VF20RS012 | ACTIVE    | None       | Running     | int=10.0.0.4                |
+————————————–+———–+———–+————+————-+—————————–+
[root@dallas1 ~(keystone_boris)]$ neutron floatingip-create ext

Created a new floatingip:

+———————+————————————–+
| Field               | Value                                |
+———————+————————————–+
| fixed_ip_address    |                                      |
| floating_ip_address | 192.168.1.102                        |
| floating_network_id | 0ed406bf-3552-4036-9006-440f3e69618e |
| id                  | 5c74667d-9b22-4092-ae0a-70ff3a06e785 |
| port_id             |                                      |
| router_id           |                                      |
| tenant_id           | e896be65e94a4893b870bc29ba86d7eb     |
+———————+————————————–+

[root@dallas1 ~(keystone_boris)]$ neutron port-list --device-id 8142ee4c-ef56-4b61-8a0b-ecd82d21484f

+————————————–+——+——————-+———————————————————————————+
| id                                   | name | mac_address       | fixed_ips                                                                       |
+————————————–+——+——————-+———————————————————————————+
| dc60b5f4-739e-49bd-a004-3ef806e2b488 |      | fa:16:3e:70:56:cc | {“subnet_id”: “2e838119-3e2e-46e8-b7cc-6d00975046f2″, “ip_address”: “10.0.0.4”} |
+————————————–+——+——————-+———————————————————————————+

[root@dallas1 ~(keystone_boris)]$ neutron floatingip-associate 5c74667d-9b22-4092-ae0a-70ff3a06e785 dc60b5f4-739e-49bd-a004-3ef806e2b488

Associated floatingip 5c74667d-9b22-4092-ae0a-70ff3a06e785

[root@dallas1 ~(keystone_boris)]$ ping 192.168.1.102

PING 192.168.1.102 (192.168.1.102) 56(84) bytes of data.
64 bytes from 192.168.1.102: icmp_seq=1 ttl=63 time=6.23 ms
64 bytes from 192.168.1.102: icmp_seq=2 ttl=63 time=0.702 ms
64 bytes from 192.168.1.102: icmp_seq=3 ttl=63 time=1.07 ms
64 bytes from 192.168.1.102: icmp_seq=4 ttl=63 time=0.693 ms
64 bytes from 192.168.1.102: icmp_seq=5 ttl=63 time=1.80 ms
64 bytes from 192.168.1.102: icmp_seq=6 ttl=63 time=0.750 ms
^C

[root@dallas1 ~(keystone_boris)]$ nova list

+————————————–+———–+——–+————+————-+—————————–+
| ID                                   | Name      | Status | Task State | Power State | Networks                    |
+————————————–+———–+——–+————+————-+—————————–+
| 9566adec-9406-4c3e-bce5-109ecb8bcf6b | VF20RS007 | ACTIVE | None       | Running     | int=10.0.0.2, 192.168.1.101 |
| 8142ee4c-ef56-4b61-8a0b-ecd82d21484f | VF20RS012 | ACTIVE | None       | Running     | int=10.0.0.4, 192.168.1.102 |
+————————————–+———–+——–+————+————-+—————————–+
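
The floating IP sequence above (create a floating IP on the ext network, find the instance's port, associate the two) can be strung together; a rough sketch using the same CLI calls (the awk parsing of the table output is fragile and shown only to illustrate the order of operations):

INSTANCE_ID=8142ee4c-ef56-4b61-8a0b-ecd82d21484f        # any ID from `nova list`
PORT_ID=$(neutron port-list --device-id $INSTANCE_ID | awk 'NR==4 {print $2}')
FIP_ID=$(neutron floatingip-create ext | awk '/ id /{print $4}')
neutron floatingip-associate $FIP_ID $PORT_ID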

[root@dallas1 ~(keystone_boris)]$ cinder list

+————————————–+——–+————–+——+————-+———-+————————————–+
|                  ID                  | Status | Display Name | Size | Volume Type | Bootable |             Attached to              |
+————————————–+——–+————–+——+————-+———-+————————————–+
| 575be853-b104-458e-bc72-1785ef524416 | in-use |              |  5   |     None    |   true   | 8142ee4c-ef56-4b61-8a0b-ecd82d21484f |
| 9794bd45-8923-4f3e-a48f-fa1d62a964f8  | in-use |              |  5   |     None    |   true   | 9566adec-9406-4c3e-bce5-109ecb8bcf6b |
+————————————–+——–+————–+——+————-+———-+——————————

On Compute:-

[root@dallas1 ~]# ssh 192.168.1.140

Last login: Sun Mar  9 16:46:40 2014

[root@dallas2 ~]# df -h

Filesystem                       Size  Used Avail Use% Mounted on
/dev/mapper/fedora01-root        187G   18G  160G  11% /
devtmpfs                         3.9G     0  3.9G   0% /dev
tmpfs                            3.9G  3.1M  3.9G   1% /dev/shm
tmpfs                            3.9G  9.4M  3.9G   1% /run
tmpfs                            3.9G     0  3.9G   0% /sys/fs/cgroup
tmpfs                            3.9G  115M  3.8G   3% /tmp
/dev/sdb5                        477M  122M  327M  28% /boot
192.168.1.130:cinder-volumes012  187G   32G  146G  18% /var/lib/nova/mnt/1c9688348ab38662e3ac8fb121077d34

[root@dallas2 ~]# ps -ef| grep nova

nova      1548     1  0 16:29 ?        00:00:42 /usr/bin/python /usr/bin/nova-compute –logfile /var/log/nova/compute.log

root      3005     1  0 16:34 ?        00:00:38 /usr/sbin/glusterfs –volfile-id=cinder-volumes012 –volfile-server=192.168.1.130 /var/lib/nova/mnt/1c9688348ab38662e3ac8fb121077d34

qemu      4762     1 58 16:42 ?        00:52:17 /usr/bin/qemu-system-x86_64 -name instance-00000061 -S -machine pc-i440fx-1.6,accel=tcg,usb=off -cpu Penryn,+osxsave,+xsave,+pdcm,+xtpr,+tm2,+est,+smx,+vmx,+ds_cpl,+monitor,+dtes64,+pbe,+tm,+ht,+ss,+acpi,+ds,+vme -m 2048 -realtime mlock=off -smp 1,sockets=1,cores=1,threads=1 -uuid 8142ee4c-ef56-4b61-8a0b-ecd82d21484f -smbios type=1,manufacturer=Fedora Project,product=OpenStack Nova,version=2013.2.2-1.fc20,serial=6050001e-8c00-00ac-818a-90e6ba2d11eb,uuid=8142ee4c-ef56-4b61-8a0b-ecd82d21484f -no-user-config -nodefaults -chardev socket,id=charmonitor,path=/var/lib/libvirt/qemu/instance-00000061.monitor,server,nowait -mon chardev=charmonitor,id=monitor,mode=control -rtc base=utc -no-shutdown -device piix3-usb-uhci,id=usb,bus=pci.0,addr=0x1.0x2 -drive file=/var/lib/nova/mnt/1c9688348ab38662e3ac8fb121077d34/volume-575be853-b104-458e-bc72-1785ef524416,if=none,id=drive-virtio-disk0,format=raw,serial=575be853-b104-458e-bc72-1785ef524416,cache=none -device virtio-blk-pci,scsi=off,bus=pci.0,addr=0x4,drive=drive-virtio-disk0,id=virtio-disk0,bootindex=1 -netdev tap,fd=24,id=hostnet0 -device virtio-net-pci,netdev=hostnet0,id=net0,mac=fa:16:3e:70:56:cc,bus=pci.0,addr=0x3 -chardev file,id=charserial0,path=/var/lib/nova/instances/8142ee4c-ef56-4b61-8a0b-ecd82d21484f/console.log -device isa-serial,chardev=charserial0,id=serial0 -chardev pty,id=charserial1 -device isa-serial,chardev=charserial1,id=serial1 -device usb-tablet,id=input0 -vnc 127.0.0.1:0 -k en-us -device cirrus-vga,id=video0,bus=pci.0,addr=0x2 -device virtio-balloon-pci,id=balloon0,bus=pci.0,addr=0x5

qemu      6330     1 44 16:49 ?        00:36:02 /usr/bin/qemu-system-x86_64 -name instance-0000005f -S -machine pc-i440fx-1.6,accel=tcg,usb=off -cpu Penryn,+osxsave,+xsave,+pdcm,+xtpr,+tm2,+est,+smx,+vmx,+ds_cpl,+monitor,+dtes64,+pbe,+tm,+ht,+ss,+acpi,+ds,+vme -m 2048 -realtime mlock=off -smp 1,sockets=1,cores=1,threads=1 -uuid 9566adec-9406-4c3e-bce5-109ecb8bcf6b -smbios type=1,manufacturer=Fedora Project,product=OpenStack Nova,version=2013.2.2-1.fc20,serial=6050001e-8c00-00ac-818a-90e6ba2d11eb,uuid=9566adec-9406-4c3e-bce5-109ecb8bcf6b -no-user-config -nodefaults -chardev socket,id=charmonitor,path=/var/lib/libvirt/qemu/instance-0000005f.monitor,server,nowait -mon chardev=charmonitor,id=monitor,mode=control -rtc base=utc -no-shutdown -device piix3-usb-uhci,id=usb,bus=pci.0,addr=0x1.0x2 -drive file=/var/lib/nova/mnt/1c9688348ab38662e3ac8fb121077d34/volume-9794bd45-8923-4f3e-a48f-fa1d62a964f8,if=none,id=drive-virtio-disk0,format=raw,serial=9794bd45-8923-4f3e-a48f-fa1d62a964f8,cache=none -device virtio-blk-pci,scsi=off,bus=pci.0,addr=0x4,drive=drive-virtio-disk0,id=virtio-disk0,bootindex=1 -netdev tap,fd=27,id=hostnet0 -device virtio-net-pci,netdev=hostnet0,id=net0,mac=fa:16:3e:50:84:72,bus=pci.0,addr=0x3 -chardev file,id=charserial0,path=/var/lib/nova/instances/9566adec-9406-4c3e-bce5-109ecb8bcf6b/console.log -device isa-serial,chardev=charserial0,id=serial0 -chardev pty,id=charserial1 -device isa-serial,chardev=charserial1,id=serial1 -device usb-tablet,id=input0 -vnc 127.0.0.1:1 -k en-us -device cirrus-vga,id=video0,bus=pci.0,addr=0x2 -incoming fd:24 -device virtio-balloon-pci,id=balloon0,bus=pci.0,addr=0x5

root     24713 24622  0 18:11 pts/4    00:00:00 grep –color=auto nova

[root@dallas2 ~]# ps -ef| grep neutron

neutron   1549     1  0 16:29 ?        00:00:53 /usr/bin/python /usr/bin/neutron-openvswitch-agent –config-file /usr/share/neutron/neutron-dist.conf –config-file /etc/neutron/neutron.conf –config-file /etc/neutron/plugins/openvswitch/ovs_neutron_plugin.ini –log-file /var/log/neutron/openvswitch-agent.log

root     24981 24622  0 18:12 pts/4    00:00:00 grep –color=auto neutron

  Top at Compute node (192.168.1.140)

      Runtime at Compute node ( dallas2 192.168.1.140)

 ******************************************************

Building Ubuntu 14.04 instance via cinder volume

******************************************************

[root@dallas1 ~(keystone_boris)]$ glance image-list
+————————————–+———————+————-+——————+———–+——–+
| ID                                   | Name                | Disk Format | Container Format | Size      | Status |
+————————————–+———————+————-+——————+———–+——–+
| 592faef8-308a-4438-867a-17adf685cde4 | CirrOS 31           | qcow2       | bare             | 13147648  | active |
| d0e90250-5814-4685-9b8d-65ec9daa7117 | Fedora 20 x86_64    | qcow2       | bare             | 214106112 | active |
| c2b7c3ed-e25d-44c4-a5e7-4e013c4a8b00 | Ubuntu 14.04        | qcow2       | bare             | 264176128 | active |
| 3e6eea8e-32e6-4373-9eb1-e04b8a3167f9 | Ubuntu Server 13.10 | qcow2       | bare             | 244777472 | active |
+————————————–+———————+————-+——————+———–+——–+
[root@dallas1 ~(keystone_boris)]$ cinder create --image-id c2b7c3ed-e25d-44c4-a5e7-4e013c4a8b00 --display_name UbuntuTrusty 5
+———————+————————————–+
|       Property      |                Value                 |
+———————+————————————–+
|     attachments     |                  []                  |
|  availability_zone  |                 nova                 |
|       bootable      |                false                 |
|      created_at     |      2014-03-10T06:35:39.873978      |
| display_description |                 None                 |
|     display_name    |             UbuntuTrusty             |
|          id         | 8bcc02a7-b9ba-4cd6-a6b9-0574889bf8d2 |
|       image_id      | c2b7c3ed-e25d-44c4-a5e7-4e013c4a8b00 |
|       metadata      |                  {}                  |
|         size        |                  5                   |
|     snapshot_id     |                 None                 |
|     source_volid    |                 None                 |
|        status       |               creating               |
|     volume_type     |                 None                 |
+———————+————————————–+

[root@dallas1 ~(keystone_boris)]$ cinder list

+————————————–+———–+————–+——+————-+———-+————————————–+
|                  ID                  |   Status  | Display Name | Size | Volume Type | Bootable |             Attached to              |
+————————————–+———–+————–+——+————-+———-+————————————–+
| 56ceaaa8-c0ec-45f3-98a4-555c1231b34e |   in-use  |              |  5   |     None    |   true   | e29606c5-582f-4766-ae1b-52043a698743 |
| 575be853-b104-458e-bc72-1785ef524416 |   in-use  |              |  5   |     None    |   true   | 8142ee4c-ef56-4b61-8a0b-ecd82d21484f |
| 8bcc02a7-b9ba-4cd6-a6b9-0574889bf8d2 | available | UbuntuTrusty |  5   |     None    |   true   |                                      |
| 9794bd45-8923-4f3e-a48f-fa1d62a964f8 |   in-use  |              |  5   |     None    |   true   | 9566adec-9406-4c3e-bce5-109ecb8bcf6b |
+————————————–+———–+————–+——+————-+———-+————————————–+
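
Before booting from it, the new volume has to reach the "available" state (as it already has above); a simple wait loop (a sketch, using the volume ID just created):

VOLUME_ID=8bcc02a7-b9ba-4cd6-a6b9-0574889bf8d2
until cinder list | grep $VOLUME_ID | grep -q available ; do sleep 5 ; done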

[root@dallas1 ~(keystone_boris)]$  nova boot --flavor 2 --user-data=./myfile.txt --block_device_mapping vda=8bcc02a7-b9ba-4cd6-a6b9-0574889bf8d2:::0 UbuntuTR01

+————————————–+—————————————————-+
| Property                             | Value                                              |
+————————————–+—————————————————-+
| status                               | BUILD                                              |
| updated                              | 2014-03-10T06:40:14Z                               |
| OS-EXT-STS:task_state                | scheduling                                         |
| key_name                             | None                                               |
| image                                | Attempt to boot from volume – no image supplied    |
| hostId                               |                                                    |
| OS-EXT-STS:vm_state                  | building                                           |
| OS-SRV-USG:launched_at               | None                                               |
| flavor                               | m1.small                                           |
| id                                   | 0859e52d-c07b-4f56-ac79-2b37080d2843               |
| security_groups                      | [{u'name': u'default'}]                            |
| OS-SRV-USG:terminated_at             | None                                               |
| user_id                              | df4a984ce2f24848a6b84aaa99e296f1                   |
| name                                 | UbuntuTR01                                         |
| adminPass                            | L8VuhttJMbJf                                       |
| tenant_id                            | e896be65e94a4893b870bc29ba86d7eb                   |
| created                              | 2014-03-10T06:40:13Z                               |
| OS-DCF:diskConfig                    | MANUAL                                             |
| metadata                             | {}                                                 |
| os-extended-volumes:volumes_attached | [{u'id': u'8bcc02a7-b9ba-4cd6-a6b9-0574889bf8d2'}] |
| accessIPv4                           |                                                    |
| accessIPv6                           |                                                    |
| progress                             | 0                                                  |
| OS-EXT-STS:power_state               | 0                                                  |
| OS-EXT-AZ:availability_zone          | nova                                               |
| config_drive                         |                                                    |
+————————————–+—————————————————-+

[root@dallas1 ~(keystone_boris)]$ nova list

+————————————–+————+———–+————+————-+—————————–+
| ID                                   | Name       | Status    | Task State | Power State | Networks                    |
+————————————–+————+———–+————+————-+—————————–+
| 0859e52d-c07b-4f56-ac79-2b37080d2843 | UbuntuTR01 | ACTIVE    | None       | Running     | int=10.0.0.6                |
| 9566adec-9406-4c3e-bce5-109ecb8bcf6b | VF20RS007  | SUSPENDED | None       | Shutdown    | int=10.0.0.2, 192.168.1.101 |
| 8142ee4c-ef56-4b61-8a0b-ecd82d21484f | VF20RS012  | SUSPENDED | None       | Shutdown    | int=10.0.0.4, 192.168.1.102 |
| e29606c5-582f-4766-ae1b-52043a698743 | VF20RS016  | ACTIVE    | None       | Running     | int=10.0.0.5, 192.168.1.103 |
+————————————–+————+———–+————+————-+—————————–+

[root@dallas1 ~(keystone_boris)]$ neutron floatingip-create ext

Created a new floatingip:

+———————+————————————–+
| Field               | Value                                |
+———————+————————————–+
| fixed_ip_address    |                                      |
| floating_ip_address | 192.168.1.104                        |
| floating_network_id | 0ed406bf-3552-4036-9006-440f3e69618e |
| id                  | 9498ac85-82b0-468a-b526-64a659080ab9 |
| port_id             |                                      |
| router_id           |                                      |
| tenant_id           | e896be65e94a4893b870bc29ba86d7eb     |
+———————+————————————–+

[root@dallas1 ~(keystone_boris)]$ neutron port-list --device-id 0859e52d-c07b-4f56-ac79-2b37080d2843

+————————————–+——+——————-+———————————————————————————+
| id                                   | name | mac_address       | fixed_ips                                                                       |
+————————————–+——+——————-+———————————————————————————+
| 1f02fe57-d844-4fd8-a325-646f27163c8b |      | fa:16:3e:3f:a3:d4 | {“subnet_id”: “2e838119-3e2e-46e8-b7cc-6d00975046f2″, “ip_address”: “10.0.0.6”} |
+————————————–+——+——————-+———————————————————————————+

[root@dallas1 ~(keystone_boris)]$ neutron floatingip-associate  9498ac85-82b0-468a-b526-64a659080ab9 1f02fe57-d844-4fd8-a325-646f27163c8b

Associated floatingip 9498ac85-82b0-468a-b526-64a659080ab9

[root@dallas1 ~(keystone_boris)]$ ping 192.168.1.104

PING 192.168.1.104 (192.168.1.104) 56(84) bytes of data.
64 bytes from 192.168.1.104: icmp_seq=1 ttl=63 time=2.35 ms
64 bytes from 192.168.1.104: icmp_seq=2 ttl=63 time=2.56 ms
64 bytes from 192.168.1.104: icmp_seq=3 ttl=63 time=1.17 ms
64 bytes from 192.168.1.104: icmp_seq=4 ttl=63 time=4.08 ms
64 bytes from 192.168.1.104: icmp_seq=5 ttl=63 time=2.19 ms
^C


Up to date procedure of creating Cinder's ThinLVM based cloud instances (F20, Ubuntu 13.10) on a Fedora 20 Havana Compute Node.

March 4, 2014

  This post follows up  http://bderzhavets.wordpress.com/2014/01/24/setting-up-two-physical-node-openstack-rdo-havana-neutron-gre-on-fedora-20-boxes-with-both-controller-and-compute-nodes-each-one-having-one-ethernet-adapter/

In my experience, `cinder create --image-id Image_id --display_name .....` && `nova boot --flavor 2 --user-data=./myfile.txt --block_device_mapping vda=Volume_id:::0 <VM_NAME>` doesn't work any longer, giving an error :-

$ tail -f /var/log/nova/compute.log  reports :-

 2014-03-03 13:28:43.646 1344 WARNING nova.virt.libvirt.driver [req-1bd6630e-b799-4d78-b702-f06da5f1464b df4a984ce2f24848a6b84aaa99e296f1 e896be65e94a4893b870bc29b a86d7eb] [instance: f621815f-3805-4f52-a878-9040c6a4af53] File injection into a boot from volume instance is not supported

Followed by a Python stack trace and a Nova exception.

The workaround for this issue follows below. First stop and then start the "tgtd" daemon :-

[root@dallas1 ~(keystone_admin)]$ service tgtd stop
Redirecting to /bin/systemctl stop  tgtd.service
[root@dallas1 ~(keystone_admin)]$ service tgtd status
Redirecting to /bin/systemctl status  tgtd.service
tgtd.service – tgtd iSCSI target daemon
Loaded: loaded (/usr/lib/systemd/system/tgtd.service; enabled)
Active: inactive (dead) since Tue 2014-03-04 11:46:18 MSK; 8s ago
Process: 11978 ExecStop=/usr/sbin/tgtadm –op delete –mode system (code=exited, status=0/SUCCESS)
Process: 11974 ExecStop=/usr/sbin/tgt-admin –update ALL -c /dev/null (code=exited, status=0/SUCCESS)
Process: 11972 ExecStop=/usr/sbin/tgtadm –op update –mode sys –name State -v offline (code=exited, status=0/SUCCESS)
Process: 1797 ExecStartPost=/usr/sbin/tgtadm –op update –mode sys –name State -v ready (code=exited, status=0/SUCCESS)
Process: 1791 ExecStartPost=/usr/sbin/tgt-admin -e -c $TGTD_CONFIG (code=exited, status=0/SUCCESS)
Process: 1790 ExecStartPost=/usr/sbin/tgtadm –op update –mode sys –name State -v offline (code=exited, status=0/SUCCESS)
Process: 1173 ExecStartPost=/bin/sleep 5 (code=exited, status=0/SUCCESS)
Process: 1172 ExecStart=/usr/sbin/tgtd -f $TGTD_OPTS (code=exited, status=0/SUCCESS)
Main PID: 1172 (code=exited, status=0/SUCCESS)

Mar 04 11:14:04 dallas1.localdomain tgtd[1172]: tgtd: work_timer_start(146) use timer_fd based scheduler
Mar 04 11:14:04 dallas1.localdomain tgtd[1172]: tgtd: bs_init_signalfd(271) could not open backing-store module direct…store
Mar 04 11:14:04 dallas1.localdomain tgtd[1172]: tgtd: bs_init(390) use signalfd notification
Mar 04 11:14:09 dallas1.localdomain systemd[1]: Started tgtd iSCSI target daemon.
Mar 04 11:26:01 dallas1.localdomain tgtd[1172]: tgtd: device_mgmt(246) sz:69 params:path=/dev/cinder-volumes/volume-a0…2864d
Mar 04 11:26:01 dallas1.localdomain tgtd[1172]: tgtd: bs_thread_open(412) 16
Mar 04 11:33:32 dallas1.localdomain tgtd[1172]: tgtd: device_mgmt(246) sz:69 params:path=/dev/cinder-volumes/volume-01…f2969
Mar 04 11:33:32 dallas1.localdomain tgtd[1172]: tgtd: bs_thread_open(412) 16
Mar 04 11:46:18 dallas1.localdomain systemd[1]: Stopping tgtd iSCSI target daemon…
Mar 04 11:46:18 dallas1.localdomain systemd[1]: Stopped tgtd iSCSI target daemon.
Hint: Some lines were ellipsized, use -l to show in full.

[root@dallas1 ~(keystone_admin)]$ service tgtd start
Redirecting to /bin/systemctl start  tgtd.service
[root@dallas1 ~(keystone_admin)]$ service tgtd status -l
Redirecting to /bin/systemctl status  -l tgtd.service
tgtd.service – tgtd iSCSI target daemon
Loaded: loaded (/usr/lib/systemd/system/tgtd.service; enabled)
Active: active (running) since Tue 2014-03-04 11:46:40 MSK; 4s ago
Process: 11978 ExecStop=/usr/sbin/tgtadm –op delete –mode system (code=exited, status=0/SUCCESS)
Process: 11974 ExecStop=/usr/sbin/tgt-admin –update ALL -c /dev/null (code=exited, status=0/SUCCESS)
Process: 11972 ExecStop=/usr/sbin/tgtadm –op update –mode sys –name State -v offline (code=exited, status=0/SUCCESS)
Process: 12084 ExecStartPost=/usr/sbin/tgtadm –op update –mode sys –name State -v ready (code=exited, status=0/SUCCESS)
Process: 12078 ExecStartPost=/usr/sbin/tgt-admin -e -c $TGTD_CONFIG (code=exited, status=0/SUCCESS)
Process: 12076 ExecStartPost=/usr/sbin/tgtadm –op update –mode sys –name State -v offline (code=exited, status=0/SUCCESS)
Process: 12052 ExecStartPost=/bin/sleep 5 (code=exited, status=0/SUCCESS)
Main PID: 12051 (tgtd)
CGroup: /system.slice/tgtd.service
└─12051 /usr/sbin/tgtd -f

Mar 04 11:46:35 dallas1.localdomain systemd[1]: Starting tgtd iSCSI target daemon…
Mar 04 11:46:35 dallas1.localdomain tgtd[12051]: librdmacm: Warning: couldn’t read ABI version.
Mar 04 11:46:35 dallas1.localdomain tgtd[12051]: librdmacm: Warning: assuming: 4
Mar 04 11:46:35 dallas1.localdomain tgtd[12051]: librdmacm: Fatal: unable to get RDMA device list
Mar 04 11:46:35 dallas1.localdomain tgtd[12051]: tgtd: iser_ib_init(3351) Failed to initialize RDMA; load kernel modules?
Mar 04 11:46:35 dallas1.localdomain tgtd[12051]: tgtd: work_timer_start(146) use timer_fd based scheduler
Mar 04 11:46:35 dallas1.localdomain tgtd[12051]: tgtd: bs_init_signalfd(271) could not open backing-store module directory /usr/lib64/tgt/backing-store
Mar 04 11:46:35 dallas1.localdomain tgtd[12051]: tgtd: bs_init(390) use signalfd notification
Mar 04 11:46:40 dallas1.localdomain systemd[1]: Started tgtd iSCSI target daemon.
[root@dallas1 ~(keystone_admin)]$ for i in api scheduler volume ; do service openstack-cinder-${i} restart ;done
Redirecting to /bin/systemctl restart  openstack-cinder-api.service
Redirecting to /bin/systemctl restart  openstack-cinder-scheduler.service
Redirecting to /bin/systemctl restart  openstack-cinder-volume.service
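
Condensed, the whole workaround is just the following (the same commands as above, gathered in one place):

[root@dallas1 ~(keystone_admin)]$ service tgtd stop
[root@dallas1 ~(keystone_admin)]$ service tgtd start
[root@dallas1 ~(keystone_admin)]$ for i in api scheduler volume ; do service openstack-cinder-${i} restart ; done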
[root@dallas1 ~(keystone_Boris)]$ glance image-list

+————————————–+———————+————-+——————+———–+——–+
| ID                                   | Name                | Disk Format | Container Format | Size      | Status |
+————————————–+———————+————-+——————+———–+——–+
| 592faef8-308a-4438-867a-17adf685cde4 | CirrOS 31           | qcow2       | bare             | 13147648  | active |
| d0e90250-5814-4685-9b8d-65ec9daa7117 | Fedora 20 x86_64    | qcow2       | bare             | 214106112 | active |
| 3e6eea8e-32e6-4373-9eb1-e04b8a3167f9 | Ubuntu Server 13.10 | qcow2       | bare             | 244777472 | active |
+————————————–+———————+————-+——————+———–+——–+

Create a thin LVM based instance via Nova in one command, with login "fedora" and password "mysecret" (set through the user-data file, see the sketch below).
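
The myfile.txt passed via --user-data is not shown in this post; a minimal cloud-config that would give the image's default "fedora" user the password "mysecret" could look like this (my assumption about its contents, not necessarily the author's exact file):

#cloud-config
password: mysecret
chpasswd: { expire: False }
ssh_pwauth: True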

[root@dallas1 ~(keystone_boris)]$ glance image-list

+————————————–+———————+————-+——————+———–+——–+
| ID                                   | Name                | Disk Format | Container Format | Size      | Status |
+————————————–+———————+————-+——————+———–+——–+
| 592faef8-308a-4438-867a-17adf685cde4 | CirrOS 31           | qcow2       | bare             | 13147648  | active |
| d0e90250-5814-4685-9b8d-65ec9daa7117 | Fedora 20 x86_64    | qcow2       | bare             | 214106112 | active |
| 3e6eea8e-32e6-4373-9eb1-e04b8a3167f9 | Ubuntu Server 13.10 | qcow2       | bare             | 244777472 | active |
+————————————–+———————+————-+——————+———–+——–+
[root@dallas1 ~(keystone_boris)]$ nova boot --flavor 2 --user-data=./myfile.txt --block-device source=image,id=d0e90250-5814-4685-9b8d-65ec9daa7117,dest=volume,size=5,shutdown=preserve,bootindex=0 VF20RS01

+————————————–+————————————————-+
| Property                             | Value                                           |
+————————————–+————————————————-+
| status                               | BUILD                                           |
| updated                              | 2014-03-07T05:50:18Z                            |
| OS-EXT-STS:task_state                | scheduling                                      |
| key_name                             | None                                            |
| image                                | Attempt to boot from volume – no image supplied |
| hostId                               |                                                 |
| OS-EXT-STS:vm_state                  | building                                        |
| OS-SRV-USG:launched_at               | None                                            |
| flavor                               | m1.small                                        |
| id                                   | 770e33f7-7aab-49f1-95ca-3cf343f744ef            |
| security_groups                      | [{u'name': u'default'}]                         |
| OS-SRV-USG:terminated_at             | None                                            |
| user_id                              | df4a984ce2f24848a6b84aaa99e296f1                |
| name                                 | VF20RS01                                        |
| adminPass                            | CqjGVUm9bbs9                                    |
| tenant_id                            | e896be65e94a4893b870bc29ba86d7eb                |
| created                              | 2014-03-07T05:50:18Z                            |
| OS-DCF:diskConfig                    | MANUAL                                          |
| metadata                             | {}                                              |
| os-extended-volumes:volumes_attached | []                                              |
| accessIPv4                           |                                                 |
| accessIPv6                           |                                                 |
| progress                             | 0                                               |
| OS-EXT-STS:power_state               | 0                                               |
| OS-EXT-AZ:availability_zone          | nova                                            |
| config_drive                         |                                                 |
+————————————–+————————————————-+

[root@dallas1 ~(keystone_boris)]$ nova list

+————————————–+———-+——–+———————-+————-+———-+
| ID                                   | Name     | Status | Task State           | Power State | Networks |
+————————————–+———-+——–+———————-+————-+———-+
| 770e33f7-7aab-49f1-95ca-3cf343f744ef | VF20RS01 | BUILD  | block_device_mapping | NOSTATE     |          |
+————————————–+———-+——–+———————-+————-+———-+

[root@dallas1 ~(keystone_boris)]$ nova list

+————————————–+———-+——–+———————-+————-+———-+
| ID                                   | Name     | Status | Task State           | Power State | Networks |
+————————————–+———-+——–+———————-+————-+———-+
| 770e33f7-7aab-49f1-95ca-3cf343f744ef | VF20RS01 | BUILD  | block_device_mapping | NOSTATE     |          |
+————————————–+———-+——–+———————-+————-+———-+
[root@dallas1 ~(keystone_boris)]$ nova list

+————————————–+———-+——–+————+————-+————–+
| ID                                   | Name     | Status | Task State | Power State | Networks     |
+————————————–+———-+——–+————+————-+————–+
| 770e33f7-7aab-49f1-95ca-3cf343f744ef | VF20RS01 | ACTIVE | None       | Running     | int=10.0.0.2 |
+————————————–+———-+——–+————+————-+————–+

[root@dallas1 ~(keystone_boris)]$ neutron floatingip-create ext

Created a new floatingip:

+———————+————————————–+
| Field               | Value                                |
+———————+————————————–+
| fixed_ip_address    |                                      |
| floating_ip_address | 192.168.1.101                        |
| floating_network_id | 0ed406bf-3552-4036-9006-440f3e69618e |
| id                  | f7d9cd3f-e544-4f23-821d-0307ed4eb852 |
| port_id             |                                      |
| router_id           |                                      |
| tenant_id           | e896be65e94a4893b870bc29ba86d7eb     |
+———————+————————————–+

[root@dallas1 ~(keystone_boris)]$ neutron port-list --device-id 770e33f7-7aab-49f1-95ca-3cf343f744ef

+————————————–+——+——————-+———————————————————————————+
| id                                   | name | mac_address       | fixed_ips                                                                       |
+————————————–+——+——————-+———————————————————————————+
| 8b5f142e-ce99-40e0-bbbe-620b201c0323 |      | fa:16:3e:0d:c4:e6 | {“subnet_id”: “2e838119-3e2e-46e8-b7cc-6d00975046f2″, “ip_address”: “10.0.0.2”} |
+————————————–+——+——————-+———————————————————————————+

[root@dallas1 ~(keystone_boris)]$ neutron floatingip-associate f7d9cd3f-e544-4f23-821d-0307ed4eb852 8b5f142e-ce99-40e0-bbbe-620b201c0323
Associated floatingip f7d9cd3f-e544-4f23-821d-0307ed4eb852

[root@dallas1 ~(keystone_boris)]$ ping 192.168.1.101

PING 192.168.1.101 (192.168.1.101) 56(84) bytes of data.
64 bytes from 192.168.1.101: icmp_seq=1 ttl=63 time=7.75 ms
64 bytes from 192.168.1.101: icmp_seq=2 ttl=63 time=1.06 ms
64 bytes from 192.168.1.101: icmp_seq=3 ttl=63 time=1.27 ms
64 bytes from 192.168.1.101: icmp_seq=4 ttl=63 time=1.43 ms
64 bytes from 192.168.1.101: icmp_seq=5 ttl=63 time=1.80 ms
64 bytes from 192.168.1.101: icmp_seq=6 ttl=63 time=0.916 ms
64 bytes from 192.168.1.101: icmp_seq=7 ttl=63 time=0.919 ms
64 bytes from 192.168.1.101: icmp_seq=8 ttl=63 time=0.930 ms
64 bytes from 192.168.1.101: icmp_seq=9 ttl=63 time=0.977 ms
64 bytes from 192.168.1.101: icmp_seq=10 ttl=63 time=0.690 ms
^C

— 192.168.1.101 ping statistics —

10 packets transmitted, 10 received, 0% packet loss, time 9008ms

rtt min/avg/max/mdev = 0.690/1.776/7.753/2.015 ms

[root@dallas1 ~(keystone_boris)]$ glance image-list

+————————————–+———————+————-+——————+———–+——–+
| ID                                   | Name                | Disk Format | Container Format | Size      | Status |
+————————————–+———————+————-+——————+———–+——–+
| 592faef8-308a-4438-867a-17adf685cde4 | CirrOS 31           | qcow2       | bare             | 13147648  | active |
| d0e90250-5814-4685-9b8d-65ec9daa7117 | Fedora 20 x86_64    | qcow2       | bare             | 214106112 | active |
| 3e6eea8e-32e6-4373-9eb1-e04b8a3167f9 | Ubuntu Server 13.10 | qcow2       | bare             | 244777472 | active |
+————————————–+———————+————-+——————+———–+——–+

[root@dallas1 ~(keystone_boris)]$ nova boot --flavor 2 --user-data=./myfile.txt --block-device source=image,id=3e6eea8e-32e6-4373-9eb1-e04b8a3167f9,dest=volume,size=5,shutdown=preserve,bootindex=0 UbuntuRS01

+————————————–+————————————————-+
| Property                             | Value                                           |
+————————————–+————————————————-+
| status                               | BUILD                                           |
| updated                              | 2014-03-07T05:53:44Z                            |
| OS-EXT-STS:task_state                | scheduling                                      |
| key_name                             | None                                            |
| image                                | Attempt to boot from volume – no image supplied |
| hostId                               |                                                 |
| OS-EXT-STS:vm_state                  | building                                        |
| OS-SRV-USG:launched_at               | None                                            |
| flavor                               | m1.small                                        |
| id                                   | bfcb2120-942f-4d3f-a173-93f6076a4be8            |
| security_groups                      | [{u'name': u'default'}]                         |
| OS-SRV-USG:terminated_at             | None                                            |
| user_id                              | df4a984ce2f24848a6b84aaa99e296f1                |
| name                                 | UbuntuRS01                                      |
| adminPass                            | bXND2XTsvuA4                                    |
| tenant_id                            | e896be65e94a4893b870bc29ba86d7eb                |
| created                              | 2014-03-07T05:53:44Z                            |
| OS-DCF:diskConfig                    | MANUAL                                          |
| metadata                             | {}                                              |
| os-extended-volumes:volumes_attached | []                                              |
| accessIPv4                           |                                                 |
| accessIPv6                           |                                                 |
| progress                             | 0                                               |
| OS-EXT-STS:power_state               | 0                                               |
| OS-EXT-AZ:availability_zone          | nova                                            |
| config_drive                         |                                                 |
+————————————–+————————————————-+

[root@dallas1 ~(keystone_boris)]$ nova list

+————————————–+————+——–+———————-+————-+—————————–+
| ID                                   | Name       | Status | Task State           | Power State | Networks                    |
+————————————–+————+——–+———————-+————-+—————————–+
| bfcb2120-942f-4d3f-a173-93f6076a4be8 | UbuntuRS01 | BUILD  | block_device_mapping | NOSTATE     |                             |
| 770e33f7-7aab-49f1-95ca-3cf343f744ef | VF20RS01   | ACTIVE | None                 | Running     | int=10.0.0.2, 192.168.1.101 |
+————————————–+————+——–+———————-+————-+—————————–+

[root@dallas1 ~(keystone_boris)]$ nova list

+————————————–+————+——–+————+————-+—————————–+
| ID                                   | Name       | Status | Task State | Power State | Networks                    |
+————————————–+————+——–+————+————-+—————————–+
| bfcb2120-942f-4d3f-a173-93f6076a4be8 | UbuntuRS01 | ACTIVE | None       | Running     | int=10.0.0.4                |
| 770e33f7-7aab-49f1-95ca-3cf343f744ef | VF20RS01   | ACTIVE | None       | Running     | int=10.0.0.2, 192.168.1.101 |
+————————————–+————+——–+————+————-+—————————–+

[root@dallas1 ~(keystone_boris)]$ neutron floatingip-create ext

Created a new floatingip:

+———————+————————————–+
| Field               | Value                                |
+———————+————————————–+
| fixed_ip_address    |                                      |
| floating_ip_address | 192.168.1.102                        |
| floating_network_id | 0ed406bf-3552-4036-9006-440f3e69618e |
| id                  | b3d3f262-5142-4a99-9b8d-431c231cb1d7 |
| port_id             |                                      |
| router_id           |                                      |
| tenant_id           | e896be65e94a4893b870bc29ba86d7eb     |
+———————+————————————–+

[root@dallas1 ~(keystone_boris)]$ neutron port-list --device-id bfcb2120-942f-4d3f-a173-93f6076a4be8

+————————————–+——+——————-+———————————————————————————+
| id                                   | name | mac_address       | fixed_ips                                                                       |
+————————————–+——+——————-+———————————————————————————+
| c81ca027-8f9b-49c3-af10-adc60f5d4d12 |      | fa:16:3e:ac:86:50 | {“subnet_id”: “2e838119-3e2e-46e8-b7cc-6d00975046f2″, “ip_address”: “10.0.0.4”} |
+————————————–+——+——————-+———————————————————————————+

[root@dallas1 ~(keystone_boris)]$ neutron floatingip-associate b3d3f262-5142-4a99-9b8d-431c231cb1d7 c81ca027-8f9b-49c3-af10-adc60f5d4d12

Associated floatingip b3d3f262-5142-4a99-9b8d-431c231cb1d7

[root@dallas1 ~(keystone_boris)]$ ping 192.168.1.102

PING 192.168.1.102 (192.168.1.102) 56(84) bytes of data.
64 bytes from 192.168.1.102: icmp_seq=1 ttl=63 time=3.84 ms
64 bytes from 192.168.1.102: icmp_seq=2 ttl=63 time=3.06 ms
64 bytes from 192.168.1.102: icmp_seq=3 ttl=63 time=6.58 ms
64 bytes from 192.168.1.102: icmp_seq=4 ttl=63 time=7.98 ms
64 bytes from 192.168.1.102: icmp_seq=5 ttl=63 time=2.09 ms
64 bytes from 192.168.1.102: icmp_seq=6 ttl=63 time=1.06 ms
64 bytes from 192.168.1.102: icmp_seq=7 ttl=63 time=3.55 ms
64 bytes from 192.168.1.102: icmp_seq=8 ttl=63 time=2.01 ms
64 bytes from 192.168.1.102: icmp_seq=9 ttl=63 time=1.05 ms
64 bytes from 192.168.1.102: icmp_seq=10 ttl=63 time=3.45 ms
64 bytes from 192.168.1.102: icmp_seq=11 ttl=63 time=2.31 ms
64 bytes from 192.168.1.102: icmp_seq=12 ttl=63 time=0.977 ms
^C

— 192.168.1.102 ping statistics —

12 packets transmitted, 12 received, 0% packet loss, time 11014ms

rtt min/avg/max/mdev = 0.977/3.168/7.985/2.091 ms
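The floating IP workflow above (create the floating IP, look up the instance's port, associate the two) repeats for every instance, so it can be wrapped in a small helper. This is only a sketch, under the assumption that the external network is called ext, the right keystonerc file is already sourced and the instance has a single port:

#!/bin/bash
# Sketch: assign a floating IP from the "ext" network to an instance by name.
INSTANCE=$1
VM_ID=$(nova list | awk -v n="$INSTANCE" '$4 == n {print $2}')
PORT_ID=$(neutron port-list --device-id "$VM_ID" | awk 'NR==4 {print $2}')
FIP_ID=$(neutron floatingip-create ext | awk '$2 == "id" {print $4}')
neutron floatingip-associate "$FIP_ID" "$PORT_ID"
echo "Associated floating IP $FIP_ID with port $PORT_ID of $INSTANCE"

Usage would be, for example, `./assign_fip.sh UbuntuRS01`.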

[root@dallas1 ~(keystone_boris)]$ nova boot --flavor 2 --user-data=./myfile.txt --block-device source=image,id=d0e90250-5814-4685-9b8d-65ec9daa7117,dest=volume,size=5,shutdown=preserve,bootindex=0 VF20GLX

+————————————–+————————————————-+
| Property                             | Value                                           |
+————————————–+————————————————-+
| status                               | BUILD                                           |
| updated                              | 2014-03-07T05:58:40Z                            |
| OS-EXT-STS:task_state                | scheduling                                      |
| key_name                             | None                                            |
| image                                | Attempt to boot from volume – no image supplied |
| hostId                               |                                                 |
| OS-EXT-STS:vm_state                  | building                                        |
| OS-SRV-USG:launched_at               | None                                            |
| flavor                               | m1.small                                        |
| id                                   | 62ff1641-2c96-470f-9147-9272d68d2e5c            |
| security_groups                      | [{u'name': u'default'}]                         |
| OS-SRV-USG:terminated_at             | None                                            |
| user_id                              | df4a984ce2f24848a6b84aaa99e296f1                |
| name                                 | VF20GLX                                         |
| adminPass                            | E9KXeLp8fWig                                    |
| tenant_id                            | e896be65e94a4893b870bc29ba86d7eb                |
| created                              | 2014-03-07T05:58:40Z                            |
| OS-DCF:diskConfig                    | MANUAL                                          |
| metadata                             | {}                                              |
| os-extended-volumes:volumes_attached | []                                              |
| accessIPv4                           |                                                 |
| accessIPv6                           |                                                 |
| progress                             | 0                                               |
| OS-EXT-STS:power_state               | 0                                               |
| OS-EXT-AZ:availability_zone          | nova                                            |
| config_drive                         |                                                 |
+————————————–+————————————————-+
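The contents of myfile.txt are not shown in this post. Since cloud-init also accepts a plain shell script as user-data, a minimal hypothetical example could look like this (the credentials are placeholders for testing only):

#!/bin/bash
# Hypothetical myfile.txt passed via --user-data; cloud-init runs it once on
# first boot as root. It only enables console/ssh password login for testing.
echo "fedora:fedora" | chpasswd
sed -i 's/^PasswordAuthentication no/PasswordAuthentication yes/' /etc/ssh/sshd_config
systemctl restart sshd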

[root@dallas1 ~(keystone_boris)]$ nova list

+————————————–+————+——–+———————-+————-+—————————–+
| ID                                   | Name       | Status | Task State           | Power State | Networks                    |
+————————————–+————+——–+———————-+————-+—————————–+
| bfcb2120-942f-4d3f-a173-93f6076a4be8 | UbuntuRS01 | ACTIVE | None                 | Running     | int=10.0.0.4, 192.168.1.102 |
| 62ff1641-2c96-470f-9147-9272d68d2e5c | VF20GLX    | BUILD  | block_device_mapping | NOSTATE     |                             |
| 770e33f7-7aab-49f1-95ca-3cf343f744ef | VF20RS01   | ACTIVE | None                 | Running     | int=10.0.0.2, 192.168.1.101 |
+————————————–+————+——–+———————-+————-+—————————–+

[root@dallas1 ~(keystone_boris)]$ nova list

+————————————–+————+——–+———————-+————-+—————————–+
| ID                                   | Name       | Status | Task State           | Power State | Networks                    |
+————————————–+————+——–+———————-+————-+—————————–+
| bfcb2120-942f-4d3f-a173-93f6076a4be8 | UbuntuRS01 | ACTIVE | None                 | Running     | int=10.0.0.4, 192.168.1.102 |
| 62ff1641-2c96-470f-9147-9272d68d2e5c | VF20GLX    | BUILD  | block_device_mapping | NOSTATE     |                             |
| 770e33f7-7aab-49f1-95ca-3cf343f744ef | VF20RS01   | ACTIVE | None                 | Running     | int=10.0.0.2, 192.168.1.101 |
+————————————–+————+——–+———————-+————-+—————————–+

[root@dallas1 ~(keystone_boris)]$ nova list

+————————————–+————+——–+————+————-+—————————–+
| ID                                   | Name       | Status | Task State | Power State | Networks                    |
+————————————–+————+——–+————+————-+—————————–+
| bfcb2120-942f-4d3f-a173-93f6076a4be8 | UbuntuRS01 | ACTIVE | None       | Running     | int=10.0.0.4, 192.168.1.102 |
| 62ff1641-2c96-470f-9147-9272d68d2e5c | VF20GLX    | ACTIVE | None       | Running     | int=10.0.0.5                |
| 770e33f7-7aab-49f1-95ca-3cf343f744ef | VF20RS01   | ACTIVE | None       | Running     | int=10.0.0.2, 192.168.1.101 |
+————————————–+————+——–+————+————-+—————————–+

[root@dallas1 ~(keystone_boris)]$ neutron floatingip-create ext

Created a new floatingip:

+———————+————————————–+
| Field               | Value                                |
+———————+————————————–+
| fixed_ip_address    |                                      |
| floating_ip_address | 192.168.1.103                        |
| floating_network_id | 0ed406bf-3552-4036-9006-440f3e69618e |
| id                  | 3fb87bb2-f485-4f1c-b2b7-7c5d90588d27 |
| port_id             |                                      |
| router_id           |                                      |
| tenant_id           | e896be65e94a4893b870bc29ba86d7eb     |
+———————+————————————–+

[root@dallas1 ~(keystone_boris)]$ neutron port-list --device-id 62ff1641-2c96-470f-9147-9272d68d2e5c

+————————————–+——+——————-+———————————————————————————+
| id                                   | name | mac_address       | fixed_ips                                                                       |
+————————————–+——+——————-+———————————————————————————+
| 0845ad30-4d2c-487d-8847-2b6e3e8b9b9d |      | fa:16:3e:2c:84:62 | {“subnet_id”: “2e838119-3e2e-46e8-b7cc-6d00975046f2″, “ip_address”: “10.0.0.5”} |
+————————————–+——+——————-+———————————————————————————+

[root@dallas1 ~(keystone_boris)]$ neutron floatingip-associate 3fb87bb2-f485-4f1c-b2b7-7c5d90588d27 0845ad30-4d2c-487d-8847-2b6e3e8b9b9d

Associated floatingip 3fb87bb2-f485-4f1c-b2b7-7c5d90588d27

[root@dallas1 ~(keystone_boris)]$ ping 192.168.1.103

PING 192.168.1.103 (192.168.1.103) 56(84) bytes of data.
64 bytes from 192.168.1.103: icmp_seq=1 ttl=63 time=4.08 ms
64 bytes from 192.168.1.103: icmp_seq=2 ttl=63 time=1.59 ms
64 bytes from 192.168.1.103: icmp_seq=3 ttl=63 time=1.22 ms
64 bytes from 192.168.1.103: icmp_seq=4 ttl=63 time=1.49 ms
64 bytes from 192.168.1.103: icmp_seq=5 ttl=63 time=1.11 ms
64 bytes from 192.168.1.103: icmp_seq=6 ttl=63 time=0.980 ms
64 bytes from 192.168.1.103: icmp_seq=7 ttl=63 time=6.71 ms
^C

— 192.168.1.103 ping statistics —

7 packets transmitted, 7 received, 0% packet loss, time 6007ms

rtt min/avg/max/mdev = 0.980/2.458/6.711/1.996 ms

[root@dallas1 ~(keystone_boris)]$ nova list
+————————————–+————+——–+————+————-+—————————–+
| ID                                   | Name       | Status | Task State | Power State | Networks                    |
+————————————–+————+——–+————+————-+—————————–+
| bfcb2120-942f-4d3f-a173-93f6076a4be8 | UbuntuRS01 | ACTIVE | None       | Running     | int=10.0.0.4, 192.168.1.102 |
| 62ff1641-2c96-470f-9147-9272d68d2e5c | VF20GLX    | ACTIVE | None       | Running     | int=10.0.0.5, 192.168.1.103 |
| 770e33f7-7aab-49f1-95ca-3cf343f744ef | VF20RS01   | ACTIVE | None       | Running     | int=10.0.0.2, 192.168.1.101 |
+————————————–+————+——–+————+————-+—————————–+

[root@dallas1 ~(keystone_admin)]$  vgdisplay
….

— Volume group —
VG Name               cinder-volumes
System ID
Format                lvm2
Metadata Areas        1
Metadata Sequence No  66
VG Access             read/write
VG Status             resizable
MAX LV                0
Cur LV                3
Open LV               3
Max PV                0
Cur PV                1
Act PV                1
VG Size               20.00 GiB
PE Size               4.00 MiB
Total PE              5119
Alloc PE / Size       3840 / 15.00 GiB
Free  PE / Size       1279 / 5.00 GiB
VG UUID               M11ikP-i6sd-ftwG-3XIH-F9wt-cSHe-m9kCtU


….

Three volumes have been created, each one 5 GB.

 [root@dallas1 ~(keystone_admin)]$ losetup -a

/dev/loop0: [64768]:14 (/cinder-volumes)
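For reference, a loopback-backed cinder-volumes VG like the one above is usually created along these lines (a sketch; the exact commands and file path are assumptions, only the 20 GB size matches the vgdisplay output):

# Sketch: create a sparse 20 GB backing file, attach it to a loop device and
# turn it into the LVM volume group that cinder-volume consumes.
dd if=/dev/zero of=/cinder-volumes bs=1 count=0 seek=20G
losetup /dev/loop0 /cinder-volumes
pvcreate /dev/loop0
vgcreate cinder-volumes /dev/loop0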

The same warning messages appear in the log, but now it works:

2014-03-03 23:50:19.851 6729 WARNING nova.virt.libvirt.driver [req-98443a14-3c3f-49f5-bf21-c183531a1778 df4a984ce2f24848a6b84aaa99e296f1 e896be65e94a4893b870bc29ba86d7eb] [instance: baffc298-3b45-4e01-8891-1e6510e3dc0e] File injection into a boot from volume instance is not supported

2014-03-03 23:50:21.439 6729 WARNING nova.virt.libvirt.volume [req-98443a14-3c3f-49f5-bf21-c183531a1778 df4a984ce2f24848a6b84aaa99e296f1 e896be65e94a4893b870bc29ba86d7eb] ISCSI volume not yet found at: vda. Will rescan & retry.  Try number: 0

2014-03-03 23:50:21.518 6729 WARNING nova.virt.libvirt.vif [req-98443a14-3c3f-49f5-bf21-c183531a1778 df4a984ce2f24848a6b84aaa99e296f1 e896be65e94a4893b870bc29ba86d7eb] Deprecated: The LibvirtHybridOVSBridgeDriver VIF driver is now deprecated and will be removed in the next release. Please use the LibvirtGenericVIFDriver VIF driver, together with a network plugin that reports the ‘vif_type’ attribute

2014-03-03 23:52:12.020 6729 WARNING nova.virt.libvirt.driver [req-1ea0e44e-b651-4f79-9d83-1ba872534440 df4a984ce2f24848a6b84aaa99e296f1 e896be65e94a4893b870bc29ba86d7eb] [instance: a64a7a24-ff8a-4d01-aa59-80393a4213df] File injection into a boot from volume instance is not supported

2014-03-03 23:52:13.629 6729 WARNING nova.virt.libvirt.volume [req-1ea0e44e-b651-4f79-9d83-1ba872534440 df4a984ce2f24848a6b84aaa99e296f1 e896be65e94a4893b870bc29ba86d7eb] ISCSI volume not yet found at: vda. Will rescan & retry.  Try number: 0

2014-03-03 23:52:13.709 6729 WARNING nova.virt.libvirt.vif [req-1ea0e44e-b651-4f79-9d83-1ba872534440 df4a984ce2f24848a6b84aaa99e296f1 e896be65e94a4893b870bc29ba86d7eb] Deprecated: The LibvirtHybridOVSBridgeDriver VIF driver is now deprecated and will be removed in the next release. Please use the LibvirtGenericVIFDriver VIF driver, together with a network plugin that reports the ‘vif_type’ attribute

2014-03-03 23:56:11.127 6729 WARNING nova.compute.manager [-] Found 4 in the database and 1 on the hypervisor.


USB Redirection hack on “Two Node Controller&Compute Neutron GRE+OVS” Fedora 20 Cluster

February 28, 2014
 
I clearly understand that only an incomplete Havana RDO setup allows me to activate spice USB redirection when communicating with cloud instances. There is no dashboard (administrative web console) on this cluster. All information about nova instance status and neutron subnets, routers and ports has to be obtained via the CLI, and managing instances, subnets, routers, ports and rules is also done via the CLI, carefully sourcing the appropriate "keystonerc_user" file so that you work in the environment of a particular user of a particular tenant. Also note that to create a new instance I must have no more than four entries in `nova list`; then I can reliably create one more. This has been tested on two "Two Node Neutron GRE+OVS" systems and is related to `nova quota-show` for the tenant (10 instances is the default). Having 3 VMs on the Compute node, I brought up openstack-nova-compute on the Controller and was able to create 2 more VMs on Compute and 5 VMs on the Controller. View https://ask.openstack.org/en/question/11746/openstack-nova-scheduler-service-cannot-any-longer-connect-to-amqp-server-performing-nova-boot-on-fedora-20/
Manual Setup  ( view [2]  http://bderzhavets.blogspot.com/2014/01/setting-up-two-physical-node-openstack.html )
- Controller node: Nova, Keystone, Cinder, Glance, Neutron (using Open vSwitch plugin and GRE tunneling )
– Compute node: Nova (nova-compute), Neutron (openvswitch-agent)

dfw02.localdomain   –  Controller (192.168.1.127)

dfw01.localdomain   –  Compute   (192.168.1.137)

[root@dfw02 ~(keystone_admin)]$ openstack-status

== Nova services ==

openstack-nova-api:                     active

openstack-nova-cert:                    inactive  (disabled on boot)
openstack-nova-compute:                 inactive  (disabled on boot)
openstack-nova-network:                 inactive  (disabled on boot)
openstack-nova-scheduler:               active
openstack-nova-volume:                  inactive  (disabled on boot)
openstack-nova-conductor:               active
== Glance services ==
openstack-glance-api:                   active
openstack-glance-registry:              active
== Keystone service ==
openstack-keystone:                     active
== neutron services ==
neutron-server:                         active
neutron-dhcp-agent:                     active
neutron-l3-agent:                       active
neutron-metadata-agent:                 active
neutron-lbaas-agent:                    inactive  (disabled on boot)
neutron-openvswitch-agent:              active
neutron-linuxbridge-agent:              inactive  (disabled on boot)
neutron-ryu-agent:                      inactive  (disabled on boot)
neutron-nec-agent:                      inactive  (disabled on boot)
neutron-mlnx-agent:                     inactive  (disabled on boot)
== Cinder services ==
openstack-cinder-api:                   active
openstack-cinder-scheduler:             active
openstack-cinder-volume:                active
== Ceilometer services ==
openstack-ceilometer-api:               inactive  (disabled on boot)
openstack-ceilometer-central:           inactive  (disabled on boot)
openstack-ceilometer-compute:           active
openstack-ceilometer-collector:         inactive  (disabled on boot)
openstack-ceilometer-alarm-notifier:    inactive  (disabled on boot)
openstack-ceilometer-alarm-evaluator:   inactive  (disabled on boot)
== Support services ==
mysqld:                                 inactive  (disabled on boot)
libvirtd:                               active
openvswitch:                            active
dbus:                                   active
tgtd:                                   active
qpidd:                                  active
== Keystone users ==
+———————————-+———+———+——-+
|                id                |   name  | enabled | email |
+———————————-+———+———+——-+
| 970ed56ef7bc41d59c54f5ed8a1690dc |  admin  |   True  |       |
| 162021e787c54cac906ab3296a386006 |  boris  |   True  |       |
| 1beeaa4b20454048bf23f7d63a065137 |  cinder |   True  |       |
| 006c2728df9146bd82fab04232444abf |  glance |   True  |       |
| 5922aa93016344d5a5d49c0a2dab458c | neutron |   True  |       |
| af2f251586564b46a4f60cdf5ff6cf4f |   nova  |   True  |       |
+———————————-+———+———+——-+

== Glance images ==

+————————————–+———————————+————-+——————+————-+——–+
| ID                                   | Name                            | Disk Format | Container Format | Size        | Status |
+————————————–+———————————+————-+——————+————-+——–+
| a6e8ef59-e492-46e2-8147-fd8b1a65ed73 | CentOS 6.5 image                | qcow2       | bare             | 344457216   | active |
| dc992799-7831-4933-b6ee-7b81868f808b | CirrOS31                        | qcow2       | bare             | 13147648    | active |
| 03c9ad20-b0a3-4b71-aa08-2728ecb66210 | Fedora 19 x86_64                | qcow2       | bare             | 237371392   | active |
| de93ee44-4085-4111-b022-a7437da8feac | Fedora 20 image                 | qcow2       | bare             | 214106112   | active |
| e70591fc-6905-4e57-84b7-4ffa7c001864 | Ubuntu Server 13.10             | qcow2       | bare             | 244514816   | active |
| b7d54434-1cc6-4770-82f3-c8619952575c | Ubuntu Trusty Tar 02/23/14      | qcow2       | bare             | 261029888   | active |
| 07071d00-fb85-4b32-a9b4-d515088700d0 | Windows Server 2012 R2 Std Eval | vhd         | bare             | 17182752768 | active |
+————————————–+———————————+————-+——————+————-+——–+

== Nova managed services ==

+—————-+——————-+———-+———+——-+—————————-+—————–+
| Binary         | Host              | Zone     | Status  | State | Updated_at                 | Disabled Reason |
+—————-+——————-+———-+———+——-+—————————-+—————–+
| nova-scheduler | dfw02.localdomain | internal | enabled | up    | 2014-02-28T06:32:03.000000 | None            |
| nova-conductor | dfw02.localdomain | internal | enabled | up    | 2014-02-28T06:32:03.000000 | None            |
| nova-compute   | dfw01.localdomain | nova     | enabled | up    | 2014-02-28T06:31:59.000000 | None            |
+—————-+——————-+———-+———+——-+—————————-+—————–+

== Nova networks ==

+————————————–+——-+——+
| ID                                   | Label | Cidr |
+————————————–+——-+——+
| 1eea88bb-4952-4aa4-9148-18b61c22d5b7 | int   | None |
| 426bb226-0ab9-440d-ba14-05634a17fb2b | int1  | None |
| 780ce2f3-2e6e-4881-bbac-857813f9a8e0 | ext   | None |
+————————————–+——-+——+

== Nova instance flavors ==

+—-+———–+———–+——+———–+——+——-+————-+———–+
| ID | Name      | Memory_MB | Disk | Ephemeral | Swap | VCPUs | RXTX_Factor | Is_Public |
+—-+———–+———–+——+———–+——+——-+————-+———–+
| 1  | m1.tiny   | 512       | 1    | 0         |      | 1     | 1.0         | True      |
| 2  | m1.small  | 2048      | 20   | 0         |      | 1     | 1.0         | True      |
| 3  | m1.medium | 4096      | 40   | 0         |      | 2     | 1.0         | True      |
| 4  | m1.large  | 8192      | 80   | 0         |      | 4     | 1.0         | True      |
| 5  | m1.xlarge | 16384     | 160  | 0         |      | 8     | 1.0         | True      |
+—-+———–+———–+——+———–+——+——-+————-+———–+

== Nova instances ==

+—-+——+——–+————+————-+———-+
| ID | Name | Status | Task State | Power State | Networks |
+—-+——+——–+————+————-+———-+

+—-+——+——–+————+————-+———-+
[root@dfw02 ~(keystone_boris)]$ nova list
+————————————–+———–+———–+————+————-+——————————+
| ID                                   | Name      | Status    | Task State | Power State | Networks                     |
+————————————–+———–+———–+————+————-+——————————+
| 5fcd83c3-1d4e-4b11-bfe5-061a03b73174 | UbuntuRSX | SUSPENDED | None       | Shutdown    | int1=40.0.0.5, 192.168.1.120 |
| 7953950c-112c-4c59-b183-5cbd06eabcf6 | VF19WXL   | SUSPENDED | None       | Shutdown    | int1=40.0.0.6, 192.168.1.121 |
| 784e8afc-d41a-4c2e-902a-8e109a40f7db | VF20GLS   | SUSPENDED | None       | Shutdown    | int1=40.0.0.4, 192.168.1.102 |
| 9b156b85-a6a1-4f15-bffa-6fdb124f8cff | VF20WXL   | SUSPENDED | None       | Shutdown    | int1=40.0.0.2, 192.168.1.101 |
+————————————–+———–+———–+————+————-+——————————+
 [root@dfw02 ~(keystone_admin)]$ nova-manage service list

Binary           Host                                 Zone             Status     State Updated_At
nova-scheduler   dfw02.localdomain                    internal         enabled    :-)   2014-02-28 11:47:25
nova-conductor   dfw02.localdomain                    internal         enabled    :-)   2014-02-28 11:47:25
nova-compute     dfw01.localdomain                    nova             enabled    :-)   2014-02-28 11:47:19

[root@dfw02 ~(keystone_admin)]$ neutron agent-list

+————————————–+——————–+——————-+——-+—————-+
| id                                   | agent_type         | host              | alive | admin_state_up |
+————————————–+——————–+——————-+——-+—————-+
| 037b985d-7a7d-455b-8536-76eed40b0722 | L3 agent           | dfw02.localdomain | :-)   | True           |
| 22438ee9-b4ea-4316-9492-eb295288f61a | Open vSwitch agent | dfw02.localdomain | :-)   | True           |
| 76ed02e2-978f-40d0-879e-1a2c6d1f7915 | DHCP agent         | dfw02.localdomain | :-)   | True           |
| 951632a3-9744-4ff4-a835-c9f53957c617 | Open vSwitch agent | dfw01.localdomain | :-)   | True           |
+————————————–+——————–+——————-+——-+—————-+

Create F20 instance per http://bderzhavets.blogspot.com/2014/01/setting-up-two-physical-node-openstack.html 

and run on newly built instance :-

# yum -y update
# yum -y install spice-vdagent
# reboot

Connect via virt-manager and switch to Properties tab :-

  

1. Switch to Spice Server
2. Switch to Video QXL
3. Add Hardware "Spice agent (spicevmc)"
4. Add Hardware "USB Redirection" (Spice channel); a CLI check of the resulting devices is sketched below
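As a quick sanity check, the devices added through virt-manager should show up in the libvirt domain XML on the Compute node. This is only a sketch; the instance name is an example taken from this article:

# Sketch: confirm spice graphics, QXL video, the spicevmc channel and USB
# redirection devices are present in the domain XML.
virsh dumpxml instance-0000004a | grep -E "spice|qxl|redirdev"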
Then :- 

[root@dfw02 ~(keystone_boris)]$  nova reboot VF20GLS 

Plug in USB pen on Controller

[ 6443.772131] usb 1-2.1: USB disconnect, device number 5
[ 6523.996983] usb 1-2.1: new full-speed USB device number 6 using uhci_hcd
[ 6524.278848] usb 1-2.1: New USB device found, idVendor=0951, idProduct=160e
[ 6524.281206] usb 1-2.1: New USB device strings: Mfr=1, Product=2, SerialNumber=3
[ 6524.282055] usb 1-2.1: Product: DataTraveler 2.0
[ 6524.284851] usb 1-2.1: Manufacturer: Kingston
[ 6524.290527] usb 1-2.1: SerialNumber: 000AEB920161SK861E1301F6
[ 6524.369667] usb-storage 1-2.1:1.0: USB Mass Storage device detected
[ 6524.379638] scsi4 : usb-storage 1-2.1:1.0
[ 6525.420794] scsi 4:0:0:0: Direct-Access     Kingston DataTraveler 2.0 1.00 PQ: 0 ANSI: 2
[ 6525.459504] sd 4:0:0:0: Attached scsi generic sg0 type 0
[ 6525.526419] sd 4:0:0:0: [sdb] 7856128 512-byte logical blocks: (4.02 GB/3.74 GiB)
[ 6525.554959] sd 4:0:0:0: [sdb] Write Protect is off
[ 6525.555010] sd 4:0:0:0: [sdb] Mode Sense: 23 00 00 00
[ 6525.571552] sd 4:0:0:0: [sdb] No Caching mode page found
[ 6525.573029] sd 4:0:0:0: [sdb] Assuming drive cache: write through
[ 6525.667624] sd 4:0:0:0: [sdb] No Caching mode page found
[ 6525.669322] sd 4:0:0:0: [sdb] Assuming drive cache: write through
[ 6525.816841]  sdb: sdb1
[ 6525.887493] sd 4:0:0:0: [sdb] No Caching mode page found
[ 6525.889142] sd 4:0:0:0: [sdb] Assuming drive cache: write through
[ 6525.890478] sd 4:0:0:0: [sdb] Attached SCSI removable disk

$ sudo mount /dev/sdb1 /mnt/usbpen

[ 5685.621007] FAT-fs (sdb1): Volume was not properly unmounted. Some data may be corrupt. Please run fsck.

[ 5685.631218] SELinux: initialized (dev sdb1, type vfat), uses genfs_contexts

Set up a lightweight X Window System & Fluxbox on the F20 instance ([1]) and make sure it is completely functional and can read and write to the USB pen.

Nova status verification

Neutron status verification

On dfw02 (Controller), run the following commands:

ssh-keygen (Hit Enter to accept all of the defaults)
ssh-copy-id -i ~/.ssh/id_rsa.pub root@dfw01 (Compute)

Add to /etc/rc.d/rc.local lines :-

ssh -L 5900:127.0.0.1:5900 -N -f -l root 192.168.1.137
ssh -L 5901:127.0.0.1:5901 -N -f -l root 192.168.1.137
ssh -L 5902:127.0.0.1:5902 -N -f -l root 192.168.1.137

so that spicy can comfortably connect to instances running on the Compute node.
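With these tunnels in place, a spice client on the Controller reaches instance consoles on the Compute node through localhost; for example, assuming the first instance's spice display listens on port 5900:

# Connect to the first spice display on the Compute node via the local tunnel.
spicy -h 127.0.0.1 -p 5900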

Build fresh spice-gtk packages :-

$ rpm -iv spice-gtk-0.23-1.fc21.src.rpm
$ cd ~/rpmbuild/SPECS
$ sudo yum install intltool gtk2-devel usbredir-devel libusb1-devel libgudev1-devel pixman-devel openssl-devel  libjpeg-turbo-devel celt051-devel pulseaudio-libs-devel pygtk2-devel python-devel zlib-devel cyrus-sasl-devel libcacard-devel gobject-introspection-devel  dbus-glib-devel libacl-devel polkit-devel gtk-doc vala-tools gtk3-devel spice-protocol opus-devel
$ rpmbuild -bb ./spice-gtk.spec
$ cd ../RPMS/x86_64

Install the rpms just built, because spicy is not yet on the system:

[boris@dfw02 x86_64]$  sudo yum install spice-glib-0.23-1.fc20.x86_64.rpm \
spice-glib-devel-0.23-1.fc20.x86_64.rpm \
spice-gtk-0.23-1.fc20.x86_64.rpm \
spice-gtk3-0.23-1.fc20.x86_64.rpm \
spice-gtk3-devel-0.23-1.fc20.x86_64.rpm \
spice-gtk3-vala-0.23-1.fc20.x86_64.rpm \
spice-gtk-debuginfo-0.23-1.fc20.x86_64.rpm \
spice-gtk-devel-0.23-1.fc20.x86_64.rpm  \
spice-gtk-python-0.23-1.fc20.x86_64.rpm \
spice-gtk-tools-0.23-1.fc20.x86_64.rpm

Verify new spice-gtk install on F20 :-

[boris@dfw02 x86_64]$ rpm -qa | grep spice-
spice-gtk-tools-0.23-1.fc20.x86_64
spice-server-0.12.4-3.fc20.x86_64
spice-glib-devel-0.23-1.fc20.x86_64
spice-gtk3-vala-0.23-1.fc20.x86_64
spice-gtk3-devel-0.23-1.fc20.x86_64
spice-gtk-python-0.23-1.fc20.x86_64
spice-vdagent-0.15.0-1.fc20.x86_64
spice-gtk-devel-0.23-1.fc20.x86_64
spice-gtk-0.23-1.fc20.x86_64
spice-gtk-debuginfo-0.23-1.fc20.x86_64
spice-glib-0.23-1.fc20.x86_64
spice-gtk3-0.23-1.fc20.x86_64
spice-protocol-0.12.6-2.fc20.noarch

Connection via spice will give a warning; just ignore this message.

References

1. http://bderzhavets.blogspot.com/2014/02/setup-light-weight-x-windows_2.html
2. http://bderzhavets.blogspot.com/2014/01/setting-up-two-physical-node-openstack.html


Ongoing problems with “Two Real Controller&Compute Nodes Neutron GRE + OVS” setup on F20 via native Havana Repos

February 16, 2014

**************************************************************
UPDATE on 02/23/2014  
**************************************************************

To create a new instance I must have no more than 4 entries in `nova list`. Then I can sequentially restart the qpidd and openstack-nova-scheduler services (actually, it is not always necessary) and I will reliably be able to create one more instance. This has been tested on two "Two Node Neutron GRE+OVS+Gluster Backend for Cinder" clusters. It is related to `nova quota-show` for the tenant (10 instances is the default). Having 3 VMs on Compute, I brought up openstack-nova-compute on the Controller and was able to create 2 more VMs on Compute and 5 VMs on the Controller. All testing details are here: http://bderzhavets.blogspot.com/2014/02/next-attempt-to-set-up-two-node-neutron.html

Syntax like :

[root@dallas1 ~(keystone_admin)]$  nova quota-defaults
+—————————–+——-+
| Quota                       | Limit |
+—————————–+——-+
| instances                   | 10    |
| cores                       | 20    |
| ram                         | 51200 |
| floating_ips                | 10    |
| fixed_ips                   | -1    |
| metadata_items              | 128   |
| injected_files              | 5     |
| injected_file_content_bytes | 10240 |
| injected_file_path_bytes    | 255   |
| key_pairs                   | 100   |
| security_groups             | 10    |
| security_group_rules        | 20    |
+—————————–+——-+
[root@dallas1 ~(keystone_admin)]$  nova quota-class-update --instances 20 default

[root@dallas1 ~(keystone_admin)]$  nova quota-defaults
+—————————–+——-+
| Quota                       | Limit |
+—————————–+——-+
| instances                   | 20    |
| cores                       | 20    |
| ram                         | 51200 |
| floating_ips                | 10    |
| fixed_ips                   | -1    |
| metadata_items              | 128   |
| injected_files              | 5     |
| injected_file_content_bytes | 10240 |
| injected_file_path_bytes    | 255   |
| key_pairs                   | 100   |
| security_groups             | 10    |
| security_group_rules        | 20    |
+—————————–+——-+

doesn’t work for me
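A possible alternative, not verified on this cluster, is to raise the quota for the particular tenant instead of the default quota class:

# Possible alternative: raise the instances quota for one tenant directly.
# TENANT_ID is the tenant's keystone id, e.g. taken from `keystone tenant-list`.
nova quota-update --instances 20 $TENANT_ID
nova quota-show --tenant $TENANT_ID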

********************************************************************

Current status: F20 requires manual intervention in the MariaDB (MySQL) database regarding the root & nova password entries for the FQDN of the Controller host. I was also never able to start neutron-server via the account suggested by Kashyap, only as root:password@Controller (FQDN). The Neutron openvswitch agent and Neutron L3 agent don't start at the point described in the first manual, only when the Neutron metadata agent is up and running. Notice also that in the meantime the services
openstack-nova-conductor & openstack-nova-scheduler won't start if the mysql.user table is not ready with the nova account password at the Controller's FQDN. All these updates are reflected in the reference links attached as text docs.
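The MariaDB intervention mentioned above boils down to granting the nova DB account access from the Controller's FQDN; a sketch, with the password as a placeholder for the value in nova.conf:

# Sketch: allow the nova DB user to connect from the controller's FQDN.
mysql -u root -p <<'EOF'
GRANT ALL PRIVILEGES ON nova.* TO 'nova'@'dfw02.localdomain' IDENTIFIED BY 'NOVA_DBPASS';
FLUSH PRIVILEGES;
EOF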

The instance number on this snapshot is instance-0000004a (hex). This number is always increasing; this is the 74th instance created, counting from instance-00000001.
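The hex-to-decimal conversion can be checked from the shell:

# The suffix 4a is hexadecimal; converting it confirms this is the 74th
# instance created since instance-00000001.
printf '%d\n' 0x4a    # prints 74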

Detailed information about instances above:

[root@dfw02 ~(keystone_admin)]$ nova list

+————————————–+————+———–+————+————-+—————————–+
| ID                                   | Name       | Status    | Task State | Power State | Networks                    |
+————————————–+————+———–+————+————-+—————————–+
| e52f8f4d-5d01-4237-a1ed-79ee53ecc88a | UbuntuSX5  | SUSPENDED | None       | Shutdown    | int=10.0.0.2, 192.168.1.114 |
| 6c094d16-fda7-43fa-8f24-22e02e7a2fc6 | UbuntuVLG1 | ACTIVE    | None       | Running     | int=10.0.0.6, 192.168.1.118 |
| 526b803d-ded5-48d8-857a-f622f6082c18 | VF20GLF    | ACTIVE    | None       | Running     | int=10.0.0.5, 192.168.1.119 |
| c3a4c6d4-8618-4c4f-becb-0c53c2b3ad91 | VF20GLX    | ACTIVE    | None       | Running     | int=10.0.0.7, 192.168.1.117 |
| 4b619f27-30ba-4bd0-bd03-019b3c5d4bf7 | VF20SX4    | SUSPENDED | None       | Shutdown    | int=10.0.0.4, 192.168.1.110 |
+————————————–+————+———–+————+————-+—————————–+

[root@dfw02 ~(keystone_admin)]$ nova show 526b803d-ded5-48d8-857a-f622f6082c18
+————————————–+———————————————————-+
| Property                             | Value                                                    |
+————————————–+———————————————————-+
| status                               | ACTIVE                                                   |
| updated                              | 2014-02-17T13:10:14Z                                     |
| OS-EXT-STS:task_state                | None                                                     |
| OS-EXT-SRV-ATTR:host                 | dfw01.localdomain                                        |
| key_name                             | None                                                     |
| image                                | Attempt to boot from volume – no image supplied          |
| int network                          | 10.0.0.5, 192.168.1.119                                  |
| hostId                               | b67c11ccfa8ac8c9ed019934fa650d307f91e7fe7bbf8cd6874f3e01 |
| OS-EXT-STS:vm_state                  | active                                                   |
| OS-EXT-SRV-ATTR:instance_name        | instance-0000004a                                        |
| OS-SRV-USG:launched_at               | 2014-02-17T11:08:13.000000                               |
| OS-EXT-SRV-ATTR:hypervisor_hostname  | dfw01.localdomain                                        |
| flavor                               | m1.small (2)                                             |
| id                                   | 526b803d-ded5-48d8-857a-f622f6082c18                     |
| security_groups                      | [{u'name': u'default'}]                                  |
| OS-SRV-USG:terminated_at             | None                                                     |
| user_id                              | 970ed56ef7bc41d59c54f5ed8a1690dc                         |
| name                                 | VF20GLF                                                  |
| created                              | 2014-02-17T11:08:07Z                                     |
| tenant_id                            | d0a0acfdb62b4cc8a2bfa8d6a08bb62f                         |
| OS-DCF:diskConfig                    | MANUAL                                                   |
| metadata                             | {}                                                       |
| os-extended-volumes:volumes_attached | [{u'id': u'296d02ff-6e2a-424a-bd79-e75ed52875fc'}]       |
| accessIPv4                           |                                                          |
| accessIPv6                           |                                                          |
| progress                             | 0                                                        |
| OS-EXT-STS:power_state               | 1                                                        |
| OS-EXT-AZ:availability_zone          | nova                                                     |
| config_drive                         |                                                          |
+————————————–+———————————————————-+

Instance numbers form an increasing sequence: old instances get removed, new ones get created.

Top at Compute :-

Top at Controller :-

[root@dfw02 ~(keystone_admin)]$ nova-manage service list

Binary           Host                                 Zone             Status     State Updated_At
nova-scheduler   dfw02.localdomain                    internal         enabled    :-)   2014-02-17 15:20:11
nova-conductor   dfw02.localdomain                    internal         enabled    :-)   2014-02-17 15:20:11
nova-compute     dfw01.localdomain                    nova             enabled    :-)   2014-02-17 15:20:12

Also watch carefully the `ovs-vsctl show` outputs on Controller & Compute for the presence of these blocks.
On controller :
Port “gre-2″
            Interface “gre-2″
                type: gre
                options: {in_key=flow, local_ip=”192.168.1.130″, out_key=flow, remote_ip=”192.168.1.140″}
and this one on compute:
Port “gre-1″
            Interface “gre-1″
                type: gre
                options: {in_key=flow, local_ip=”192.168.1.140″, out_key=flow, remote_ip=”192.168.1.130″}
is important for success. It may disappear from the `ovs-vsctl show` report.
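A quick way to check for it on both nodes:

# Sketch: confirm the GRE tunnel port is still present on br-tun.
ovs-vsctl show | grep -A 3 'Port "gre-'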

Initial starting point for testing. Continue per http://bderzhavets.blogspot.com/2014/01/setting-up-two-physical-node-openstack.html

System is functional.
Controller – dallas1.localdomain 192.168.1.130
Compute  –  dallas2.localdomain 192.168.1.140

[root@dallas1 ~(keystone_admin)]$ date
Sat Feb 15 11:05:12 MSK 2014
[root@dallas1 ~(keystone_admin)]$ openstack-status

== Nova services ==
openstack-nova-api:                     active
openstack-nova-cert:                    inactive  (disabled on boot)
openstack-nova-compute:                 inactive  (disabled on boot)
openstack-nova-network:                 inactive  (disabled on boot)
openstack-nova-scheduler:               active
openstack-nova-volume:                  inactive  (disabled on boot)
openstack-nova-conductor:               active
== Glance services ==
openstack-glance-api:                   active
openstack-glance-registry:              active
== Keystone service ==
openstack-keystone:                     active
== neutron services ==
neutron-server:                         active
neutron-dhcp-agent:                     active
neutron-l3-agent:                       active
neutron-metadata-agent:                 active
neutron-lbaas-agent:                    inactive  (disabled on boot)
neutron-openvswitch-agent:              active
neutron-linuxbridge-agent:              inactive  (disabled on boot)
neutron-ryu-agent:                      inactive  (disabled on boot)
neutron-nec-agent:                      inactive  (disabled on boot)
neutron-mlnx-agent:                     inactive  (disabled on boot)
== Cinder services ==
openstack-cinder-api:                   active
openstack-cinder-scheduler:             active
openstack-cinder-volume:                active
== Support services ==
mysqld:                                 inactive  (disabled on boot)
libvirtd:                               active
openvswitch:                            active
dbus:                                   active
tgtd:                                   active
qpidd:                                  active
== Keystone users ==
+———————————-+———+———+——-+
|                id                |   name  | enabled | email |
+———————————-+———+———+——-+
| 974006673310455e8893e692f1d9350b |  admin  |   True  |       |
| fbba3a8646dc44e28e5200381d77493b |  cinder |   True  |       |
| 0214c6ae6ebc4d6ebeb3e68d825a1188 |  glance |   True  |       |
| abb1fa95b0ec448ea8da3cc99d61d301 | kashyap |   True  |       |
| 329b3ca03a894b319420b3a166d461b5 | neutron |   True  |       |
| 89b3f7d54dd04648b0519f8860bd0f2a |   nova  |   True  |       |
+———————————-+———+———+——-+
== Glance images ==
+————————————–+———————+————-+——————+———–+——–+
| ID                                   | Name                | Disk Format | Container Format | Size      | Status |
+————————————–+———————+————-+——————+———–+——–+
| 2cada436-30a7-425b-9e75-ce56764cdd13 | Cirros31            | qcow2       | bare             | 13147648  | active |
| c0b90f9e-fd47-46da-b98b-1144a41a6c08 | Fedora 20 x86_64    | qcow2       | bare             | 214106112 | active |
| 2dcc95ad-ebef-43f1-ae14-8d28e6f8194b | Ubuntu 13.10 Server | qcow2       | bare             | 244711424 | active |
+————————————–+———————+————-+——————+———–+——–+
== Nova managed services ==
+—————-+———————+———-+———+——-+—————————-+—————–+
| Binary         | Host                | Zone     | Status  | State | Updated_at                 | Disabled Reason |
+—————-+———————+———-+———+——-+—————————-+—————–+
| nova-scheduler | dallas1.localdomain | internal | enabled | up    | 2014-02-15T08:14:54.000000 | None            |
| nova-conductor | dallas1.localdomain | internal | enabled | up    | 2014-02-15T08:14:54.000000 | None            |
| nova-compute   | dallas2.localdomain | nova     | enabled | up    | 2014-02-15T08:14:59.000000 | None            |
+—————-+———————+———-+———+——-+—————————-+—————–+
== Nova networks ==
+————————————–+——-+——+
| ID                                   | Label | Cidr |
+————————————–+——-+——+
| 082249a5-08f4-478f-b176-effad0ef6843 | ext   | None |
| cea0463e-1ef2-46ac-a449-d1c265f5ed7c | int   | None |
+————————————–+——-+——+
== Nova instance flavors ==
+—-+———–+———–+——+———–+——+——-+————-+———–+
| ID | Name      | Memory_MB | Disk | Ephemeral | Swap | VCPUs | RXTX_Factor | Is_Public |
+—-+———–+———–+——+———–+——+——-+————-+———–+
| 1  | m1.tiny   | 512       | 1    | 0         |      | 1     | 1.0         | True      |
| 2  | m1.small  | 2048      | 20   | 0         |      | 1     | 1.0         | True      |
| 3  | m1.medium | 4096      | 40   | 0         |      | 2     | 1.0         | True      |
| 4  | m1.large  | 8192      | 80   | 0         |      | 4     | 1.0         | True      |
| 5  | m1.xlarge | 16384     | 160  | 0         |      | 8     | 1.0         | True      |
+—-+———–+———–+——+———–+——+——-+————-+———–+
== Nova instances ==
+————————————–+———–+——–+————+————-+—————————–+
| ID                                   | Name      | Status | Task State | Power State | Networks                    |
+————————————–+———–+——–+————+————-+—————————–+
| 56794a06-3c2b-45d1-a024-aeb94be598dc | Cirrus311 | ACTIVE | None       | Running     | int=10.0.0.2, 192.168.1.101 |
| 6adf0838-bfcf-4980-a0a4-6a541facf9c9 | UbuntuSRV | ACTIVE | None       | Running     | int=10.0.0.5, 192.168.1.103 |
| c0e2bea9-aba8-48b2-adce-ceeee47a607b | VF20WXL   | ACTIVE | None       | Running     | int=10.0.0.4, 192.168.1.102 |
+————————————–+———–+——–+————+————-+—————————–+

Looks good on both Controller and Compute

[root@dallas1 nova]# ovs-vsctl show
2790327e-fde5-4f35-9c99-b1180353b29e
    Bridge br-int
        Port br-int
            Interface br-int
                type: internal
        Port "qr-f38eb3d5-20"
            tag: 1
            Interface "qr-f38eb3d5-20"
                type: internal
        Port patch-tun
            Interface patch-tun
                type: patch
                options: {peer=patch-int}
        Port "tap5d1add26-f3"
            tag: 1
            Interface "tap5d1add26-f3"
                type: internal
    Bridge br-ex
        Port "p37p1"
            Interface "p37p1"
        Port br-ex
            Interface br-ex
                type: internal
        Port "qg-0dea8587-32"
            Interface "qg-0dea8587-32"
                type: internal
    Bridge br-tun
        Port "gre-2"
            Interface "gre-2"
                type: gre
                options: {in_key=flow, local_ip="192.168.1.130", out_key=flow, remote_ip="192.168.1.140"}
        Port br-tun
            Interface br-tun
                type: internal
        Port patch-int
            Interface patch-int
                type: patch
                options: {peer=patch-tun}
    ovs_version: "2.0.0"

[root@dallas2 ~]# ovs-vsctl show
b2e33386-ca7e-46e2-b97e-6bbf511727ac
    Bridge br-int
        Port br-int
            Interface br-int
                type: internal
        Port "qvo30c356f8-c0"
            tag: 1
            Interface "qvo30c356f8-c0"
        Port "qvoa5c6c346-78"
            tag: 1
            Interface "qvoa5c6c346-78"
        Port "qvo56bfcccb-86"
            tag: 1
            Interface "qvo56bfcccb-86"
        Port "qvo051565c4-dd"
            tag: 1
            Interface "qvo051565c4-dd"
        Port patch-tun
            Interface patch-tun
                type: patch
                options: {peer=patch-int}
    Bridge br-tun
        Port patch-int
            Interface patch-int
                type: patch
                options: {peer=patch-tun}
        Port "gre-1"
            Interface "gre-1"
                type: gre
                options: {in_key=flow, local_ip="192.168.1.140", out_key=flow, remote_ip="192.168.1.130"}
        Port br-tun
            Interface br-tun
                type: internal
    ovs_version: "2.0.0"

[root@dallas1 ~(keystone_admin)]$ nova boot --flavor 2 --user-data=./myfile.txt --image 2dcc95ad-ebef-43f1-ae14-8d28e6f8194b UbuntuSRV

+————————————–+————————————–+
| Property                             | Value                                |
+————————————–+————————————–+
| OS-EXT-STS:task_state                | scheduling                           |
| image                                | Ubuntu 13.10 Server                  |
| OS-EXT-STS:vm_state                  | building                             |
| OS-EXT-SRV-ATTR:instance_name        | instance-00000003                    |
| OS-SRV-USG:launched_at               | None                                 |
| flavor                               | m1.small                             |
| id                                   | 6adf0838-bfcf-4980-a0a4-6a541facf9c9 |
| security_groups                      | [{u'name': u'default'}]              |
| user_id                              | 974006673310455e8893e692f1d9350b     |
| OS-DCF:diskConfig                    | MANUAL                               |
| accessIPv4                           |                                      |
| accessIPv6                           |                                      |
| progress                             | 0                                    |
| OS-EXT-STS:power_state               | 0                                    |
| OS-EXT-AZ:availability_zone          | nova                                 |
| config_drive                         |                                      |
| status                               | BUILD                                |
| updated                              | 2014-02-15T07:24:54Z                 |
| hostId                               |                                      |
| OS-EXT-SRV-ATTR:host                 | None                                 |
| OS-SRV-USG:terminated_at             | None                                 |
| key_name                             | None                                 |
| OS-EXT-SRV-ATTR:hypervisor_hostname  | None                                 |
| name                                 | UbuntuSRV                            |
| adminPass                            | T2ArvfucEGqr                         |
| tenant_id                            | b5c0d0d4d31e4f3785362f2716df0b0f     |
| created                              | 2014-02-15T07:24:54Z                 |
| os-extended-volumes:volumes_attached | []                                   |
| metadata                             | {}                                   |
+————————————–+————————————–+

[root@dallas1 ~(keystone_admin)]$ nova list

+————————————–+———–+——–+————+————-+—————————–+
| ID                                   | Name      | Status | Task State | Power State | Networks                    |
+————————————–+———–+——–+————+————-+—————————–+
| 56794a06-3c2b-45d1-a024-aeb94be598dc | Cirrus311 | ACTIVE | None       | Running     | int=10.0.0.2, 192.168.1.101 |
| 6adf0838-bfcf-4980-a0a4-6a541facf9c9 | UbuntuSRV | BUILD  | spawning   | NOSTATE     | int=10.0.0.5                |
| c0e2bea9-aba8-48b2-adce-ceeee47a607b | VF20WXL   | ACTIVE | None       | Running     | int=10.0.0.4, 192.168.1.102 |
+————————————–+———–+——–+————+————-+—————————–+

[root@dallas1 ~(keystone_admin)]$ date

Sat Feb 15 11:25:36 MSK 2014

[root@dallas1 ~(keystone_admin)]$ nova list

+————————————–+———–+——–+————+————-+—————————–+
| ID                                   | Name      | Status | Task State | Power State | Networks                    |
+————————————–+———–+——–+————+————-+—————————–+
| 56794a06-3c2b-45d1-a024-aeb94be598dc | Cirrus311 | ACTIVE | None       | Running     | int=10.0.0.2, 192.168.1.101 |
| 6adf0838-bfcf-4980-a0a4-6a541facf9c9 | UbuntuSRV | ACTIVE | None       | Running     | int=10.0.0.5                |
| c0e2bea9-aba8-48b2-adce-ceeee47a607b | VF20WXL   | ACTIVE | None       | Running     | int=10.0.0.4, 192.168.1.102 |
+————————————–+———–+——–+————+————-+—————————–+

/var/log/nova/scheduler.log (last message about 1 hour before the successful `nova boot ..`); F20, Ubuntu 13.10 and Cirros images loaded OK.

I believe I still have a couple of `nova boot ..` attempts left.

Here is /var/log/nova/scheduler.log:-

2014-02-15 09:34:07.612 1111 ERROR nova.openstack.common.rpc.impl_qpid [req-4363af36-3f90-49b9-abaa-835797891946 None None] Unable to connect to AMQP server: [Errno 111] ECONNREFUSED. Sleeping 8 seconds

2014-02-15 09:34:15.617 1111 ERROR nova.openstack.common.rpc.impl_qpid [req-4363af36-3f90-49b9-abaa-835797891946 None None] Unable to connect to AMQP server: [Errno 111] ECONNREFUSED. Sleeping 16 seconds

2014-02-15 09:34:31.628 1111 ERROR nova.openstack.common.rpc.impl_qpid [req-4363af36-3f90-49b9-abaa-835797891946 None None] Unable to connect to AMQP server: [Errno 111] ECONNREFUSED. Sleeping 32 seconds

2014-02-15 09:35:03.630 1111 ERROR nova.openstack.common.rpc.impl_qpid [req-4363af36-3f90-49b9-abaa-835797891946 None None] Unable to connect to AMQP server: [Errno 111] ECONNREFUSED. Sleeping 60 seconds

2014-02-15 09:36:03.663 1111 ERROR nova.openstack.common.rpc.impl_qpid [req-4363af36-3f90-49b9-abaa-835797891946 None None] Unable to connect to AMQP server: [Errno 111] ECONNREFUSED. Sleeping 60 seconds

The last record in log :-

2014-02-15 09:37:03.713 1111 ERROR nova.openstack.common.rpc.impl_qpid [req-4363af36-3f90-49b9-abaa-835797891946 None None] Unable to connect to AMQP server: [Errno 111] ECONNREFUSED. Sleeping 60 seconds

Nothing else, still working

[root@dallas1 ~(keystone_admin)]$ date

Sat Feb 15 12:44:33 MSK 2014

[root@dallas1 Downloads(keystone_admin)]$ nova image-list

+————————————–+———————+——–+——–+
| ID                                   | Name                | Status | Server |
+————————————–+———————+——–+——–+
| 2cada436-30a7-425b-9e75-ce56764cdd13 | Cirros31            | ACTIVE |        |
| fd1cd492-d7d8-4fc3-961a-0b43f9aa148d | Fedora 20 Image     | ACTIVE |        |
| c0b90f9e-fd47-46da-b98b-1144a41a6c08 | Fedora 20 x86_64    | ACTIVE |        |
| 2dcc95ad-ebef-43f1-ae14-8d28e6f8194b | Ubuntu 13.10 Server | ACTIVE |        |
+————————————–+———————+——–+——–+

[root@dallas1 Downloads(keystone_admin)]$ cd

[root@dallas1 ~(keystone_admin)]$ nova boot --flavor 2 --user-data=./myfile.txt --image fd1cd492-d7d8-4fc3-961a-0b43f9aa148d VF20GLS

+————————————–+————————————–+
| Property                             | Value                                |
+————————————–+————————————–+
| OS-EXT-STS:task_state                | scheduling                           |
| image                                | Fedora 20 Image                      |
| OS-EXT-STS:vm_state                  | building                             |
| OS-EXT-SRV-ATTR:instance_name        | instance-00000004                    |
| OS-SRV-USG:launched_at               | None                                 |
| flavor                               | m1.small                             |
| id                                   | e948e74c-86e5-46e3-9df1-5b7ab890cb8a |
| security_groups                      | [{u'name': u'default'}]              |
| user_id                              | 974006673310455e8893e692f1d9350b     |
| OS-DCF:diskConfig                    | MANUAL                               |
| accessIPv4                           |                                      |
| accessIPv6                           |                                      |
| progress                             | 0                                    |
| OS-EXT-STS:power_state               | 0                                    |
| OS-EXT-AZ:availability_zone          | nova                                 |
| config_drive                         |                                      |
| status                               | BUILD                                |
| updated                              | 2014-02-15T09:04:22Z                 |
| hostId                               |                                      |
| OS-EXT-SRV-ATTR:host                 | None                                 |
| OS-SRV-USG:terminated_at             | None                                 |
| key_name                             | None                                 |
| OS-EXT-SRV-ATTR:hypervisor_hostname  | None                                 |
| name                                 | VF20GLS                              |
| adminPass                            | i5Lb79SybSpV                         |
| tenant_id                            | b5c0d0d4d31e4f3785362f2716df0b0f     |
| created                              | 2014-02-15T09:04:22Z                 |
| os-extended-volumes:volumes_attached | []                                   |
| metadata                             | {}                                   |
+————————————–+————————————–+

[root@dallas1 ~(keystone_admin)]$ nova list

+————————————–+———–+———–+————+————-+—————————–+
| ID                                   | Name      | Status    | Task State | Power State | Networks                    |
+————————————–+———–+———–+————+————-+—————————–+
| 56794a06-3c2b-45d1-a024-aeb94be598dc | Cirrus311 | ACTIVE    | None       | Running     | int=10.0.0.2, 192.168.1.101 |
| 6adf0838-bfcf-4980-a0a4-6a541facf9c9 | UbuntuSRV | ACTIVE    | None       | Running     | int=10.0.0.5, 192.168.1.103 |
| e948e74c-86e5-46e3-9df1-5b7ab890cb8a | VF20GLS   | BUILD     | spawning   | NOSTATE     | int=10.0.0.6                |
| c0e2bea9-aba8-48b2-adce-ceeee47a607b | VF20WXL   | SUSPENDED | None       | Shutdown    | int=10.0.0.4, 192.168.1.102 |
+————————————–+———–+———–+————+————-+—————————–+

[root@dallas1 ~(keystone_admin)]$ nova list

+————————————–+———–+———–+————+————-+—————————–+
| ID                                   | Name      | Status    | Task State | Power State | Networks                    |
+————————————–+———–+———–+————+————-+—————————–+
| 56794a06-3c2b-45d1-a024-aeb94be598dc | Cirrus311 | ACTIVE    | None       | Running     | int=10.0.0.2, 192.168.1.101 |
| 6adf0838-bfcf-4980-a0a4-6a541facf9c9 | UbuntuSRV | ACTIVE    | None       | Running     | int=10.0.0.5, 192.168.1.103 |
| e948e74c-86e5-46e3-9df1-5b7ab890cb8a | VF20GLS   | ACTIVE    | None       | Running     | int=10.0.0.6                |
| c0e2bea9-aba8-48b2-adce-ceeee47a607b | VF20WXL   | SUSPENDED | None       | Shutdown    | int=10.0.0.4, 192.168.1.102 |
+————————————–+———–+———–+————+————-+—————————–+

[root@dallas1 ~(keystone_admin)]$ neutron floatingip-create ext

Created a new floatingip:

+———————+————————————–+
| Field               | Value                                |
+———————+————————————–+
| fixed_ip_address    |                                      |
| floating_ip_address | 192.168.1.104                        |
| floating_network_id | 082249a5-08f4-478f-b176-effad0ef6843 |
| id                  | b582d8f9-8e44-4282-a71c-20f36f2e3d89 |
| port_id             |                                      |
| router_id           |                                      |
| tenant_id           | b5c0d0d4d31e4f3785362f2716df0b0f     |
+———————+————————————–+

[root@dallas1 ~(keystone_admin)]$ neutron port-list --device-id e948e74c-86e5-46e3-9df1-5b7ab890cb8a

+————————————–+——+——————-+———————————————————————————+
| id                                   | name | mac_address       | fixed_ips                                                                       |
+————————————–+——+——————-+———————————————————————————+
| 30c356f8-c0e9-439b-b68e-6c1e950b39ef |      | fa:16:3e:7f:4a:57 | {“subnet_id”: “3d75d529-9a18-46d3-ac08-7cb4c733636c”, “ip_address”: “10.0.0.6”} |
+————————————–+——+——————-+———————————————————————————+

[root@dallas1 ~(keystone_admin)]$ neutron floatingip-associate b582d8f9-8e44-4282-a71c-20f36f2e3d89 30c356f8-c0e9-439b-b68e-6c1e950b39ef

Associated floatingip b582d8f9-8e44-4282-a71c-20f36f2e3d89

[root@dallas1 ~(keystone_admin)]$ ping 192.168.1.104
PING 192.168.1.104 (192.168.1.104) 56(84) bytes of data.
64 bytes from 192.168.1.104: icmp_seq=1 ttl=63 time=3.67 ms
64 bytes from 192.168.1.104: icmp_seq=2 ttl=63 time=0.758 ms
64 bytes from 192.168.1.104: icmp_seq=3 ttl=63 time=0.687 ms
64 bytes from 192.168.1.104: icmp_seq=4 ttl=63 time=0.731 ms
64 bytes from 192.168.1.104: icmp_seq=5 ttl=63 time=0.767 ms
64 bytes from 192.168.1.104: icmp_seq=6 ttl=63 time=0.713 ms
64 bytes from 192.168.1.104: icmp_seq=7 ttl=63 time=0.817 ms
64 bytes from 192.168.1.104: icmp_seq=8 ttl=63 time=0.741 ms
64 bytes from 192.168.1.104: icmp_seq=9 ttl=63 time=0.703 ms
^C

— 192.168.1.104 ping statistics —

9 packets transmitted, 9 received, 0% packet loss, time 8002ms

rtt min/avg/max/mdev = 0.687/1.065/3.674/0.923 ms

[root@dallas1 ~(keystone_admin)]$ date
Sat Feb 15 13:15:13 MSK 2014
 

Check same log :-

2014-02-15 09:36:03.663 1111 ERROR nova.openstack.common.rpc.impl_qpid [req-4363af36-3f90-49b9-abaa-835797891946 None None] Unable to connect to AMQP server: [Errno 111] ECONNREFUSED. Sleeping 60 seconds

Last record still the same :-

2014-02-15 09:37:03.713 1111 ERROR nova.openstack.common.rpc.impl_qpid [req-4363af36-3f90-49b9-abaa-835797891946 None None] Unable to connect to AMQP server: [Errno 111] ECONNREFUSED. Sleeping 60 seconds



Top at Compute Node :-

[root@dallas2 ~]# virsh list --all

Id    Name                           State

—————————————————-
4     instance-00000001              running
5     instance-00000003              running
9     instance-00000005              running
10    instance-00000002              running
11    instance-00000004              running

Finally, I get ERROR&NOSTATE at 16:28

[root@dallas1 ~(keystone_admin)]$ nova list
+————————————–+———–+———–+————+————-+—————————–+
| ID                                   | Name      | Status    | Task State | Power State | Networks                    |
+————————————–+———–+———–+————+————-+—————————–+
| 56794a06-3c2b-45d1-a024-aeb94be598dc | Cirrus311 | SUSPENDED | None       | Shutdown    | int=10.0.0.2, 192.168.1.101 |
| ee3ff870-91b7-4d14-bb06-e9a6603f0a83 | UbuntuSLM | ERROR     | None       | NOSTATE     |                             |
| 6adf0838-bfcf-4980-a0a4-6a541facf9c9 | UbuntuSRV | SUSPENDED | None       | Shutdown    | int=10.0.0.5, 192.168.1.103 |
| 5e43c64d-41cf-413f-b317-0046b070a7a4 | VF20GLS   | SUSPENDED | None       | Shutdown    | int=10.0.0.7, 192.168.1.105 |
| e948e74c-86e5-46e3-9df1-5b7ab890cb8a | VF20GLS   | SUSPENDED | None       | Shutdown    | int=10.0.0.6, 192.168.1.104 |
| c0e2bea9-aba8-48b2-adce-ceeee47a607b | VF20WXL   | SUSPENDED | None       | Shutdown    | int=10.0.0.4, 192.168.1.102 |
+————————————–+———–+———–+————+————-+—————————–+

[root@dallas1 ~(keystone_admin)]$ date
Sat Feb 15 16:28:35 MSK 2014

I was allowed to create 5 instances. The sixth one goes to ERROR&NOSTATE.

Then keep the number of instances at no more than four and, optionally, restart the services:
# service qpidd restart
# service openstack-nova-scheduler restart
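
On Fedora 20 the `service` commands above are redirected to systemd, so the equivalent form (assuming the stock RDO unit names) is:

# systemctl restart qpidd
# systemctl restart openstack-nova-scheduler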

Then you may run   :-

[root@dallas1 ~(keystone_admin)]$ nova boot --flavor 2 --user-data=./myfile.txt --image 14cf6e7b-9aed-40c6-8185-366eb0c4c397 UbuntuSL3

+————————————–+————————————–+
| Property                             | Value                                |
+————————————–+————————————–+
| OS-EXT-STS:task_state                | scheduling                           |
| image                                | Ubuntu Salamander Server             |
| OS-EXT-STS:vm_state                  | building                             |
| OS-EXT-SRV-ATTR:instance_name        | instance-00000009                    |
| OS-SRV-USG:launched_at               | None                                 |
| flavor                               | m1.small                             |
| id                                   | 2712446b-3442-4af2-a330-c9365736ee73 |
| security_groups                      | [{u'name': u'default'}]              |
| user_id                              | 974006673310455e8893e692f1d9350b     |
| OS-DCF:diskConfig                    | MANUAL                               |
| accessIPv4                           |                                      |
| accessIPv6                           |                                      |
| progress                             | 0                                    |
| OS-EXT-STS:power_state               | 0                                    |
| OS-EXT-AZ:availability_zone          | nova                                 |
| config_drive                         |                                      |
| status                               | BUILD                                |
| updated                              | 2014-02-15T12:44:36Z                 |
| hostId                               |                                      |
| OS-EXT-SRV-ATTR:host                 | None                                 |
| OS-SRV-USG:terminated_at             | None                                 |
| key_name                             | None                                 |
| OS-EXT-SRV-ATTR:hypervisor_hostname  | None                                 |
| name                                 | UbuntuSL3                            |
| adminPass                            | zq3n5FCktcYB                         |
| tenant_id                            | b5c0d0d4d31e4f3785362f2716df0b0f     |
| created                              | 2014-02-15T12:44:36Z                 |
| os-extended-volumes:volumes_attached | []                                   |
| metadata                             | {}                                   |
+————————————–+————————————–+

[root@dallas1 ~(keystone_admin)]$ nova list
+————————————–+———–+———–+————+————-+—————————–+
| ID                                   | Name      | Status    | Task State | Power State | Networks                    |
+————————————–+———–+———–+————+————-+—————————–+
| 56794a06-3c2b-45d1-a024-aeb94be598dc | Cirrus311 | SUSPENDED | None       | Shutdown    | int=10.0.0.2, 192.168.1.101 |
| 2712446b-3442-4af2-a330-c9365736ee73 | UbuntuSL3 | BUILD     | spawning   | NOSTATE     | int=10.0.0.6                |
| 6adf0838-bfcf-4980-a0a4-6a541facf9c9 | UbuntuSRV | SUSPENDED | None       | Shutdown    | int=10.0.0.5, 192.168.1.103 |
| 5e43c64d-41cf-413f-b317-0046b070a7a4 | VF20GLS   | ACTIVE    | None       | Running     | int=10.0.0.7, 192.168.1.105 |
| c0e2bea9-aba8-48b2-adce-ceeee47a607b | VF20WXL   | SUSPENDED | None       | Shutdown    | int=10.0.0.4, 192.168.1.102 |
+————————————–+———–+———–+————+————-+—————————–+

[root@dallas1 ~(keystone_admin)]$ nova list
+————————————–+———–+———–+————+————-+—————————–+
| ID                                   | Name      | Status    | Task State | Power State | Networks                    |
+————————————–+———–+———–+————+————-+—————————–+
| 56794a06-3c2b-45d1-a024-aeb94be598dc | Cirrus311 | SUSPENDED | None       | Shutdown    | int=10.0.0.2, 192.168.1.101 |
| 2712446b-3442-4af2-a330-c9365736ee73 | UbuntuSL3 | ACTIVE    | None       | Running     | int=10.0.0.6                |
| 6adf0838-bfcf-4980-a0a4-6a541facf9c9 | UbuntuSRV | SUSPENDED | None       | Shutdown    | int=10.0.0.5, 192.168.1.103 |
| 5e43c64d-41cf-413f-b317-0046b070a7a4 | VF20GLS   | ACTIVE    | None       | Running     | int=10.0.0.7, 192.168.1.105 |
| c0e2bea9-aba8-48b2-adce-ceeee47a607b | VF20WXL   | SUSPENDED | None       | Shutdown    | int=10.0.0.4, 192.168.1.102 |
+————————————–+———–+———–+————+————-+—————————–+

Here is a sample on another cluster :-

First remove one old instance if the count is already 5, then run `nova boot` for the new instance; otherwise there is a big chance to get “ERROR&NOSTATE” instead of “BUILD&SPAWNING” status. The scheduler log (/var/log/nova/scheduler.log) explains the reason for the rejection: the AMQP server cannot be connected to once the instance limit has been exceeded.
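
A minimal sketch of that pre-boot cleanup (the instance name OldVM and the image id placeholder below are hypothetical):

$ nova list                                                  # count the entries; keep it at 4 or fewer
$ nova delete OldVM                                          # free a slot if the count is already 5
$ service qpidd restart ; service openstack-nova-scheduler restart
$ nova boot --flavor 2 --user-data=./myfile.txt --image <image-id> NewVM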

[root@dfw02 ~(keystone_admin)]$ nova boot --flavor 2 --user-data=./myfile.txt --block_device_mapping vda=4cb4c501-c7b1-4c42-ba26-0141fcde038b:::0 VF20SX4


+————————————–+—————————————————-+
| Property                             | Value                                              |
+————————————–+—————————————————-+
| OS-EXT-STS:task_state                | scheduling                                         |
| image                                | Attempt to boot from volume – no image supplied    |
| OS-EXT-STS:vm_state                  | building                                           |
| OS-EXT-SRV-ATTR:instance_name        | instance-0000003c                                  |
| OS-SRV-USG:launched_at               | None                                               |
| flavor                               | m1.small                                           |
| id                                   | 4b619f27-30ba-4bd0-bd03-019b3c5d4bf7               |
| security_groups                      | [{u'name': u'default'}]                            |
| user_id                              | 970ed56ef7bc41d59c54f5ed8a1690dc                   |
| OS-DCF:diskConfig                    | MANUAL                                             |
| accessIPv4                           |                                                    |
| accessIPv6                           |                                                    |
| progress                             | 0                                                  |
| OS-EXT-STS:power_state               | 0                                                  |
| OS-EXT-AZ:availability_zone          | nova                                               |
| config_drive                         |                                                    |
| status                               | BUILD                                              |
| updated                              | 2014-02-16T06:15:34Z                               |
| hostId                               |                                                    |
| OS-EXT-SRV-ATTR:host                 | None                                               |
| OS-SRV-USG:terminated_at             | None                                               |
| key_name                             | None                                               |
| OS-EXT-SRV-ATTR:hypervisor_hostname  | None                                               |
| name                                 | VF20SX4                                            |
| adminPass                            | C8r6vtF3kHJi                                       |
| tenant_id                            | d0a0acfdb62b4cc8a2bfa8d6a08bb62f                   |
| created                              | 2014-02-16T06:15:33Z                               |
| os-extended-volumes:volumes_attached | [{u'id': u'4cb4c501-c7b1-4c42-ba26-0141fcde038b'}] |
| metadata                             | {}                                                 |
+————————————–+—————————————————-+

[root@dfw02 ~(keystone_admin)]$ nova list

+————————————–+——————+———–+————+————-+—————————–+
| ID                                   | Name             | Status    | Task State | Power State | Networks                    |
+————————————–+——————+———–+————+————-+—————————–+
| 964fd0b0-b331-4b0c-a1d5-118bf8a40abf | CentOS6.5        | SUSPENDED | None       | Shutdown    | int=10.0.0.5, 192.168.1.105 |
| 3f2db906-567c-48b0-967e-799b2bffe277 | Cirros312        | SUSPENDED | None       | Shutdown    | int=10.0.0.2, 192.168.1.101 |
| 95a36074-5145-4959-b3b3-2651f2ac1a9c | UbuntuSalamander | SUSPENDED | None       | Shutdown    | int=10.0.0.8, 192.168.1.104 |
| 4b619f27-30ba-4bd0-bd03-019b3c5d4bf7 | VF20SX4          | ACTIVE    | None       | Running     | int=10.0.0.4                |
| 55f6e0bc-281e-480d-b88f-193207ea4d4a | VF20XWL          | SUSPENDED | None       | Shutdown    | int=10.0.0.7, 192.168.1.108 |
+————————————–+——————+———–+————+————-+—————————–+

[root@dfw02 ~(keystone_admin)]$ nova show 4b619f27-30ba-4bd0-bd03-019b3c5d4bf7

+————————————–+———————————————————-+
| Property                             | Value                                                    |
+————————————–+———————————————————-+
| status                               | ACTIVE                                                   |
| updated                              | 2014-02-16T06:15:39Z                                     |
| OS-EXT-STS:task_state                | None                                                     |
| OS-EXT-SRV-ATTR:host                 | dfw01.localdomain                                        |
| key_name                             | None                                                     |
| image                                | Attempt to boot from volume – no image supplied          |
| int network                          | 10.0.0.4, 192.168.1.110                                  |
| hostId                               | b67c11ccfa8ac8c9ed019934fa650d307f91e7fe7bbf8cd6874f3e01 |
| OS-EXT-STS:vm_state                  | active                                                   |
| OS-EXT-SRV-ATTR:instance_name        | instance-0000003c                                        |
| OS-SRV-USG:launched_at               | 2014-02-16T06:15:39.000000                               |
| OS-EXT-SRV-ATTR:hypervisor_hostname  | dfw01.localdomain                                        |
| flavor                               | m1.small (2)                                             |
| id                                   | 4b619f27-30ba-4bd0-bd03-019b3c5d4bf7                     |
| security_groups                      | [{u'name': u'default'}]                                  |
| OS-SRV-USG:terminated_at             | None                                                     |
| user_id                              | 970ed56ef7bc41d59c54f5ed8a1690dc                         |
| name                                 | VF20SX4                                                  |
| created                              | 2014-02-16T06:15:33Z                                     |
| tenant_id                            | d0a0acfdb62b4cc8a2bfa8d6a08bb62f                         |
| OS-DCF:diskConfig                    | MANUAL                                                   |
| metadata                             | {}                                                       |
| os-extended-volumes:volumes_attached | [{u'id': u'4cb4c501-c7b1-4c42-ba26-0141fcde038b'}]       |
| accessIPv4                           |                                                          |
| accessIPv6                           |                                                          |
| progress                             | 0                                                        |
| OS-EXT-STS:power_state               | 1                                                        |
| OS-EXT-AZ:availability_zone          | nova                                                     |
| config_drive                         |                                                          |
+————————————–+———————————————————-+

  Tenant network testing

[root@dfw02 ~]#  cat  keystonerc_boris
export OS_USERNAME=boris
export OS_TENANT_NAME=ostenant
export OS_PASSWORD=fedora
export OS_AUTH_URL=http://192.168.1.127:35357/v2.0/
export PS1='[\u@\h \W(keystone_boris)]$ '

[root@dfw02 ~]# . keystonerc_boris

[root@dfw02 ~(keystone_boris)]$ neutron net-list
+————————————–+——+—————————————+
| id                                   | name | subnets                               |
+————————————–+——+—————————————+
| 780ce2f3-2e6e-4881-bbac-857813f9a8e0 | ext  | f30e5a16-a055-4388-a6ea-91ee142efc3d  |
+————————————–+——+—————————————+

[root@dfw02 ~(keystone_boris)]$ neutron router-create router2
Created a new router:
+———————–+————————————–+
| Field                 | Value                                |
+———————–+————————————–+
| admin_state_up        | True                                 |
| external_gateway_info |                                      |
| id                    | 86b3008c-297f-4301-9bdc-766b839785f1 |
| name                  | router2                              |
| status                | ACTIVE                               |
| tenant_id             | 4dacfff9e72c4245a48d648ee23468d5     |
+———————–+————————————–+

[root@dfw02 ~(keystone_boris)]$ neutron router-gateway-set router2 ext
Set gateway for router router2

[root@dfw02 ~(keystone_boris)]$  neutron net-create int1
Created a new network:
+—————-+————————————–+
| Field          | Value                                |
+—————-+————————————–+
| admin_state_up | True                                 |
| id             | 426bb226-0ab9-440d-ba14-05634a17fb2b |
| name           | int1                                 |
| shared         | False                                |
| status         | ACTIVE                               |
| subnets        |                                      |
| tenant_id      | 4dacfff9e72c4245a48d648ee23468d5     |
+—————-+————————————–+

[root@dfw02 ~(keystone_boris)]$ neutron subnet-create int1 40.0.0.0/24 --dns_nameservers list=true 83.221.202.254
Created a new subnet:
+——————+——————————————–+
| Field            | Value                                      |
+——————+——————————————–+
| allocation_pools | {“start”: “40.0.0.2”, “end”: “40.0.0.254”} |
| cidr             | 40.0.0.0/24                                |
| dns_nameservers  | 83.221.202.254                             |
| enable_dhcp      | True                                       |
| gateway_ip       | 40.0.0.1                                   |
| host_routes      |                                            |
| id               | 9e0d457b-c4c4-45cf-84e2-4ac7550f3b06       |
| ip_version       | 4                                          |
| name             |                                            |
| network_id       | 426bb226-0ab9-440d-ba14-05634a17fb2b       |
| tenant_id        | 4dacfff9e72c4245a48d648ee23468d5           |
+——————+——————————————–+

[root@dfw02 ~(keystone_boris)]$  neutron router-interface-add router2 9e0d457b-c4c4-45cf-84e2-4ac7550f3b06
Added interface e031db6b-d0cc-4c57-877b-53b1c6946870 to router router2.
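
A quick optional check (a sketch, not part of the original transcript) that the new interface really hangs off router2:

$ neutron router-port-list router2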

[root@dfw02 ~(keystone_boris)]$ neutron subnet-list
+————————————–+——+————-+——————————————–+
| id                                   | name | cidr        | allocation_pools                           |
+————————————–+——+————-+——————————————–+
| 9e0d457b-c4c4-45cf-84e2-4ac7550f3b06 |      | 40.0.0.0/24 | {“start”: “40.0.0.2”, “end”: “40.0.0.254”} |
+————————————–+——+————-+——————————————–+

[root@dfw02 ~(keystone_boris)]$ neutron security-group-rule-create --protocol icmp \
>   --direction ingress --remote-ip-prefix 0.0.0.0/0 default
Created a new security_group_rule:
+——————-+————————————–+
| Field             | Value                                |
+——————-+————————————–+
| direction         | ingress                              |
| ethertype         | IPv4                                 |
| id                | 4a6deddf-9350-4f98-97d7-a54cf6ebaa9a |
| port_range_max    |                                      |
| port_range_min    |                                      |
| protocol          | icmp                                 |
| remote_group_id   |                                      |
| remote_ip_prefix  | 0.0.0.0/0                            |
| security_group_id | 4b72bcaf-b456-4222-afcc-8885326b96b2 |
| tenant_id         | 4dacfff9e72c4245a48d648ee23468d5     |
+——————-+————————————–+

[root@dfw02 ~(keystone_boris)]$ neutron security-group-rule-create --protocol tcp \
>   --port-range-min 22 --port-range-max 22 \
>   --direction ingress --remote-ip-prefix 0.0.0.0/0 default
Created a new security_group_rule:
+——————-+————————————–+
| Field             | Value                                |
+——————-+————————————–+
| direction         | ingress                              |
| ethertype         | IPv4                                 |
| id                | 7a461936-ffbc-4968-975b-3d27ec975e04 |
| port_range_max    | 22                                   |
| port_range_min    | 22                                   |
| protocol          | tcp                                  |
| remote_group_id   |                                      |
| remote_ip_prefix  | 0.0.0.0/0                            |
| security_group_id | 4b72bcaf-b456-4222-afcc-8885326b96b2 |
| tenant_id         | 4dacfff9e72c4245a48d648ee23468d5     |
+——————-+————————————–+

[root@dfw02 ~(keystone_boris)]$ glance image-list
+————————————–+———————+————-+——————+———–+——–+
| ID                                   | Name                | Disk Format | Container Format | Size      | Status |
+————————————–+———————+————-+——————+———–+——–+
| b9782f24-2f37-4f8d-9467-f622507788bf | CentOS6.5 image     | qcow2       | bare             | 344457216 | active |
| dc992799-7831-4933-b6ee-7b81868f808b | CirrOS31            | qcow2       | bare             | 13147648  | active |
| 03c9ad20-b0a3-4b71-aa08-2728ecb66210 | Fedora 19 x86_64    | qcow2       | bare             | 237371392 | active |
| de93ee44-4085-4111-b022-a7437da8feac | Fedora 20 image     | qcow2       | bare             | 214106112 | active |
| e70591fc-6905-4e57-84b7-4ffa7c001864 | Ubuntu Server 13.10 | qcow2       | bare             | 244514816 | active |
| dd8c11d2-f9b0-4b77-9b5e-534c7ac0fd58 | Ubuntu Trusty image | qcow2       | bare             | 246022144 | active |
+————————————–+———————+————-+——————+———–+——–+

[root@dfw02 ~(keystone_boris)]$ cinder create --image-id de93ee44-4085-4111-b022-a7437da8feac --display_name VF20VLG02 7
+———————+————————————–+
|       Property      |                Value                 |
+———————+————————————–+
|     attachments     |                  []                  |
|  availability_zone  |                 nova                 |
|       bootable      |                false                 |
|      created_at     |      2014-02-21T06:36:21.753407      |
| display_description |                 None                 |
|     display_name    |              VF20VLG02               |
|          id         | c3b09e44-1868-43c6-baaa-1ffcb4b80fb1 |
|       image_id      | de93ee44-4085-4111-b022-a7437da8feac |
|       metadata      |                  {}                  |
|         size        |                  7                   |
|     snapshot_id     |                 None                 |
|     source_volid    |                 None                 |
|        status       |               creating               |
|     volume_type     |                 None                 |
+———————+————————————–+

[root@dfw02 ~(keystone_boris)]$ cinder list
+————————————–+————-+————–+——+————-+———-+————-+
|                  ID                  |    Status   | Display Name | Size | Volume Type | Bootable | Attached to |
+————————————–+————-+————–+——+————-+———-+————-+
| c3b09e44-1868-43c6-baaa-1ffcb4b80fb1 | downloading |  VF20VLG02   |  7   |     None    |  false   |             |
+————————————–+————-+————–+——+————-+———-+————-+

[root@dfw02 ~(keystone_boris)]$ cinder list
+————————————–+———–+————–+——+————-+———-+————-+
|                  ID                  |   Status  | Display Name | Size | Volume Type | Bootable | Attached to |
+————————————–+———–+————–+——+————-+———-+————-+
| c3b09e44-1868-43c6-baaa-1ffcb4b80fb1 | available |  VF20VLG02   |  7   |     None    |   true   |             |
+————————————–+———–+————–+——+————-+———-+————-+

[root@dfw02 ~(keystone_boris)]$ nova boot --flavor 2 --user-data=./myfile.txt --block_device_mapping vda=c3b09e44-1868-43c6-baaa-1ffcb4b80fb1:::0 VF20XWS
+————————————–+—————————————————-+
| Property                             | Value                                              |
+————————————–+—————————————————-+
| status                               | BUILD                                              |
| updated                              | 2014-02-21T06:49:42Z                               |
| OS-EXT-STS:task_state                | scheduling                                         |
| key_name                             | None                                               |
| image                                | Attempt to boot from volume – no image supplied    |
| hostId                               |                                                    |
| OS-EXT-STS:vm_state                  | building                                           |
| OS-SRV-USG:launched_at               | None                                               |
| flavor                               | m1.small                                           |
| id                                   | c4573327-dd99-4e57-941e-3d35aacb637c               |
| security_groups                      | [{u'name': u'default'}]                            |
| OS-SRV-USG:terminated_at             | None                                               |
| user_id                              | 162021e787c54cac906ab3296a386006                   |
| name                                 | VF20XWS                                            |
| adminPass                            | YkPYdW58gz7K                                       |
| tenant_id                            | 4dacfff9e72c4245a48d648ee23468d5                   |
| created                              | 2014-02-21T06:49:42Z                               |
| OS-DCF:diskConfig                    | MANUAL                                             |
| metadata                             | {}                                                 |
| os-extended-volumes:volumes_attached | [{u'id': u'c3b09e44-1868-43c6-baaa-1ffcb4b80fb1'}] |
| accessIPv4                           |                                                    |
| accessIPv6                           |                                                    |
| progress                             | 0                                                  |
| OS-EXT-STS:power_state               | 0                                                  |
| OS-EXT-AZ:availability_zone          | nova                                               |
| config_drive                         |                                                    |
+————————————–+—————————————————-+

[root@dfw02 ~(keystone_boris)]$ nova list
+————————————–+———+——–+————+————-+—————+
| ID                                   | Name    | Status | Task State | Power State | Networks      |
+————————————–+———+——–+————+————-+—————+
| c4573327-dd99-4e57-941e-3d35aacb637c | VF20XWS | ACTIVE | None       | Running     | int1=40.0.0.2 |
+————————————–+———+——–+————+————-+—————+

[root@dfw02 ~(keystone_boris)]$ neutron floatingip-create ext
Created a new floatingip:
+———————+————————————–+
| Field               | Value                                |
+———————+————————————–+
| fixed_ip_address    |                                      |
| floating_ip_address | 192.168.1.115                        |
| floating_network_id | 780ce2f3-2e6e-4881-bbac-857813f9a8e0 |
| id                  | 64dd749f-6127-4d0f-ba51-8a9978b8c211 |
| port_id             |                                      |
| router_id           |                                      |
| tenant_id           | 4dacfff9e72c4245a48d648ee23468d5     |
+———————+————————————–+

[root@dfw02 ~(keystone_boris)]$ neutron port-list --device-id c4573327-dd99-4e57-941e-3d35aacb637c
+————————————–+——+——————-+———————————————————————————+
| id                                   | name | mac_address       | fixed_ips                                                                       |
+————————————–+——+——————-+———————————————————————————+
| 2d6c6569-44c3-44b2-8bed-cdc8dde12336 |      | fa:16:3e:10:a0:e3 | {“subnet_id”: “9e0d457b-c4c4-45cf-84e2-4ac7550f3b06″, “ip_address”: “40.0.0.2”} |
+————————————–+——+——————-+———————————————————————————+

[root@dfw02 ~(keystone_boris)]$ neutron floatingip-associate 64dd749f-6127-4d0f-ba51-8a9978b8c211 2d6c6569-44c3-44b2-8bed-cdc8dde12336
Associated floatingip 64dd749f-6127-4d0f-ba51-8a9978b8c211

[root@dfw02 ~(keystone_boris)]$ neutron floatingip-show 64dd749f-6127-4d0f-ba51-8a9978b8c211
+———————+————————————–+
| Field               | Value                                |
+———————+————————————–+
| fixed_ip_address    | 40.0.0.2                             |
| floating_ip_address | 192.168.1.115                        |
| floating_network_id | 780ce2f3-2e6e-4881-bbac-857813f9a8e0 |
| id                  | 64dd749f-6127-4d0f-ba51-8a9978b8c211 |
| port_id             | 2d6c6569-44c3-44b2-8bed-cdc8dde12336 |
| router_id           | 86b3008c-297f-4301-9bdc-766b839785f1 |
| tenant_id           | 4dacfff9e72c4245a48d648ee23468d5     |
+———————+————————————–+

[root@dfw02 ~(keystone_boris)]$ ping 192.168.1.115
PING 192.168.1.115 (192.168.1.115) 56(84) bytes of data.
64 bytes from 192.168.1.115: icmp_seq=1 ttl=63 time=3.80 ms
64 bytes from 192.168.1.115: icmp_seq=2 ttl=63 time=1.13 ms
64 bytes from 192.168.1.115: icmp_seq=3 ttl=63 time=0.954 ms
64 bytes from 192.168.1.115: icmp_seq=4 ttl=63 time=1.01 ms
64 bytes from 192.168.1.115: icmp_seq=5 ttl=63 time=0.999 ms
64 bytes from 192.168.1.115: icmp_seq=6 ttl=63 time=0.809 ms
64 bytes from 192.168.1.115: icmp_seq=7 ttl=63 time=1.02 ms
^C

The original text of the documents was posted on fedorapeople.org by Kashyap in November 2013.
The attached ones are tuned for the new IPs and should no longer contain the typos of the original version. They also contain the MySQL preventive updates currently required for openstack-nova-compute and neutron-openvswitch-agent to connect remotely to the Controller Node; the MySQL part is mine. All attached *.conf and *.ini files have been updated for my network as well.
In the meantime I am quite sure that using Libvirt's default and non-default networks for creating the Controller and Compute nodes as F20 VMs is not important. The configs allow metadata to be sent from the Controller to the Compute node on real physical boxes. Just one Ethernet controller per box should be required when using GRE tunnelling for an RDO Havana on Fedora 20 manual setup.

  References

1. http://textuploader.com/1hin
2. http://textuploader.com/1hey
3. http://kashyapc.fedorapeople.org/virt/openstack/Two-node-Havana-setup.txt
4. http://kashyapc.fedorapeople.org/virt/openstack/neutron-configs-GRE-OVS-two-node.txt
5. http://bderzhavets.blogspot.com/2014/01/setting-up-two-physical-node-openstack.html


Surfing the Internet & SSH connection on (to) a cloud instance of Fedora 20 via Neutron GRE

February 4, 2014

When you meet GRE tunnelling for the first time you have to understand that GRE encapsulation adds 24 bytes of overhead, and a lot of problems arise from that; see http://www.cisco.com/en/US/tech/tk827/tk369/technologies_tech_note09186a0080093f1f.shtml

In particular, the Two Node (Controller+Compute) RDO Havana cluster on Fedora 20 hosts that I built per the guidelines from http://kashyapc.wordpress.com/2013/11/23/neutron-configs-for-a-two-node-openstack-havana-setup-on-fedora-20/ was a Neutron GRE cluster. Hence, for any instance that has been set up (Fedora or Ubuntu), problems with network communication arise immediately: apt-get update simply refuses to work on an Ubuntu Salamander Server instance (the default MTU value for the Ethernet interface is 1500).

A lightweight X Windows environment (fluxbox) has also been set up on the Fedora 20 cloud instance for quick Internet access.

The solution is simple: set the MTU to 1400, and only on the cloud instances.

Place in /etc/rc.d/rc.local (or /etc/rc.local for Ubuntu Server) :-

#!/bin/sh
ifconfig eth0 mtu 1400 up ;
exit 0
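
An alternative (a sketch only, assuming the dnsmasq-based neutron-dhcp-agent runs on the Controller) is to push the lower MTU to instances via DHCP option 26, so no per-instance rc.local edit is needed; instances pick it up on the next lease renewal:

# cat > /etc/neutron/dnsmasq-neutron.conf << EOF
dhcp-option-force=26,1400
EOF
# openstack-config --set /etc/neutron/dhcp_agent.ini DEFAULT dnsmasq_config_file /etc/neutron/dnsmasq-neutron.conf
# service neutron-dhcp-agent restart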

At least for the time being I don't see problems with the LAN or with routing to the Internet (via a simple D-Link router) on F19, F20 and Ubuntu 13.10 Server cloud instances, or on the LAN's hosts.

For a better understanding of what this is all about, please view http://bderzhavets.blogspot.com/2014/01/setting-up-two-physical-node-openstack.html  [1].

Launch the instance via :

[root@dfw02 ~(keystone_admin)]$ nova boot --flavor 2 --user-data=./myfile.txt --block_device_mapping vda=3cb671c2-06d8-4b3a-aca6-476b66fb309a:::0 VMF20RS

where

[root@dfw02 ~(keystone_admin)]$ cinder list
+————————————–+——–+————–+——+————-+———-+————————————–+
|                  ID                  | Status | Display Name | Size | Volume Type | Bootable |             Attached to              |
+————————————–+——–+————–+——+————-+———-+————————————–+
| 3cb671c2-06d8-4b3a-aca6-476b66fb309a | available |  Fedora20VOL |  9   |     None    |   true   |                                      |
| 49d5b872-3720-4915-ad1e-ec428e956558 | in-use |   VF20VOL    |  9   |     None    |   true   | 0e0b4f69-4cff-4423-ba9d-71c8eb53af16 |
| b4831720-941f-41a7-b747-1810df49b261 | in-use | UbuntuSALVG  |  7   |     None    |   true   | 5d750d44-0cad-4a02-8432-0ee10e988b2c |
+————————————–+——–+————–+——+————-+———-+————————————–+

and

[root@dfw02 ~(keystone_admin)]$ cat myfile.txt

#cloud-config
password: mysecret
chpasswd: { expire: False }
ssh_pwauth: True

Then
[root@dfw02 ~(keystone_admin)]$ nova list

+————————————–+—————+———–+————+————-+—————————–+
| ID                                   | Name          | Status    | Task State | Power State | Networks                    |
+————————————–+—————+———–+————+————-+—————————–+
| 964fd0b0-b331-4b0c-a1d5-118bf8a40abf | CentOS6.5     | SUSPENDED | None       | Shutdown    | int=10.0.0.5, 192.168.1.105 |
| 3f2db906-567c-48b0-967e-799b2bffe277 | Cirros312     | SUSPENDED | None       | Shutdown    | int=10.0.0.2, 192.168.1.101 |
| 5d750d44-0cad-4a02-8432-0ee10e988b2c | UbuntuSaucySL | SUSPENDED | None       | Shutdown    | int=10.0.0.8, 192.168.1.112 |
| 0e0b4f69-4cff-4423-ba9d-71c8eb53af16 | VF20KVM       | SUSPENDED | None       | Shutdown    | int=10.0.0.7, 192.168.1.109 |
| 10306d33-9684-4dab-a017-266fb9ab496a | VMF20RS       | ACTIVE    | None       | Running     | int=10.0.0.4                |
+————————————–+—————+———–+————+————-+—————————–+

[root@dfw02 ~(keystone_admin)]$ neutron port-list --device-id 10306d33-9684-4dab-a017-266fb9ab496a

+————————————–+——+——————-+———————————————————————————+
| id                                   | name | mac_address       | fixed_ips                                                                       |
+————————————–+——+——————-+———————————————————————————+
| fa982101-e2d9-4d21-be9d-7d485c792ce1 |      | fa:16:3e:57:e2:67 | {“subnet_id”: “fa930cea-3d51-4cbe-a305-579f12aa53c0″, “ip_address”: “10.0.0.4”} |
+————————————–+——+——————-+——————————————————————————–

[root@dfw02 ~(keystone_admin)]$ neutron floatingip-create ext

Created a new floatingip:
+———————+————————————–+
| Field               | Value                                |
+———————+————————————–+
| fixed_ip_address    |                                      |
| floating_ip_address | 192.168.1.115                        |
| floating_network_id | 780ce2f3-2e6e-4881-bbac-857813f9a8e0 |
| id                  | d9f1b47d-c4b1-4865-92d2-c1d9964a35fb |
| port_id             |                                      |
| router_id           |                                      |
| tenant_id           | d0a0acfdb62b4cc8a2bfa8d6a08bb62f     |
+———————+————————————–+

[root@dfw02 ~(keystone_admin)]$  neutron floatingip-associate d9f1b47d-c4b1-4865-92d2-c1d9964a35fb fa982101-e2d9-4d21-be9d-7d485c792ce1

[root@dfw02 ~(keystone_admin)]$ ping  192.168.1.115

Connect via virt-manager from the Controller to the Compute node and log into the text mode console as “fedora” with the known password “mysecret”. Set the MTU to 1400, create a new sudoer user, then reboot the instance. Now ssh from the Controller works in the traditional way :

[root@dfw02 ~(keystone_admin)]$ nova list | grep VMF20RS
| 10306d33-9684-4dab-a017-266fb9ab496a | VMF20RS       | SUSPENDED | resuming   | Shutdown    | int=10.0.0.4, 192.168.1.115 |

[root@dfw02 ~(keystone_admin)]$ nova list | grep VMF20RS

| 10306d33-9684-4dab-a017-266fb9ab496a | VMF20RS       | ACTIVE    | None       | Running     | int=10.0.0.4, 192.168.1.115 |

[root@dfw02 ~(keystone_admin)]$ ssh root@192.168.1.115

root@192.168.1.115’s password:
Last login: Sat Feb  1 12:32:12 2014 from 192.168.1.127
[root@vmf20rs ~]# uname -a
Linux vmf20rs.novalocal 3.12.8-300.fc20.x86_64 #1 SMP Thu Jan 16 01:07:50 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux

[root@vmf20rs ~]# ifconfig
eth0: flags=4163  mtu 1400
inet 10.0.0.4  netmask 255.255.255.0  broadcast 10.0.0.255

inet6 fe80::f816:3eff:fe57:e267  prefixlen 64  scopeid 0x20
ether fa:16:3e:57:e2:67  txqueuelen 1000  (Ethernet)
RX packets 591788  bytes 770176441 (734.4 MiB)
RX errors 0  dropped 0  overruns 0  frame 0
TX packets 196309  bytes 20105918 (19.1 MiB)
TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0

lo: flags=73  mtu 65536
inet 127.0.0.1  netmask 255.0.0.0
inet6 ::1  prefixlen 128  scopeid 0x10
loop  txqueuelen 0  (Local Loopback)
RX packets 2  bytes 140 (140.0 B)
RX errors 0  dropped 0  overruns 0  frame 0
TX packets 2  bytes 140 (140.0 B)
TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0

Text mode Internet access works as well, via “links” for instance :-

Set up a lightweight X Windows environment on the F20 cloud instance and run the Fedora 20 cloud instance in a Spice session via virt-manager (the Controller connects to the Compute node via virt-manager). The Spice console and QXL are specified in virt-manager, then `nova reboot VF20WRT` is issued.

# yum install xorg-x11-server-Xorg xorg-x11-xdm fluxbox \
xorg-x11-drv-ati xorg-x11-drv-evdev xorg-x11-drv-fbdev \
xorg-x11-drv-intel xorg-x11-drv-mga xorg-x11-drv-nouveau \
xorg-x11-drv-openchrome xorg-x11-drv-qxl xorg-x11-drv-synaptics \
xorg-x11-drv-vesa xorg-x11-drv-vmmouse xorg-x11-drv-vmware \
xorg-x11-drv-wacom xorg-x11-font-utils xorg-x11-drv-modesetting \
xorg-x11-glamor xorg-x11-utils xterm

# yum install dejavu-fonts-common \
dejavu-sans-fonts \
dejavu-sans-mono-fonts \
dejavu-serif-fonts

# echo "exec fluxbox" > ~/.xinitrc
# startx

Fedora 20 cloud instance running in Spice Session via virt-manager with QXL 64 MB of VRAM  :-

Shutting down fluxbox :-

Done

Now run `nova suspend VF20WRT`

Connecting to Fedora 20 cloud instance via spicy from Compute node :-

Fluxbox on Ubuntu 13.10 Server Cloud Instance:-

References

1. http://bderzhavets.blogspot.com/2014/01/setting-up-two-physical-node-openstack.html


Setup Light Weight X Windows environment on Fedora 20 Cloud instance and running F20 cloud instance in Spice session via virt-manager or spicy

February 3, 2014

The following builds a lightweight X Windows environment on a Fedora 20 cloud instance and demonstrates running the same instance in a Spice session via virt-manager (the Controller connects to the Compute node via virt-manager). The Spice console and QXL are specified in virt-manager, then the instance is rebooted via Nova.

This post follows up [1] http://bderzhavets.blogspot.ru/2014/01/setting-up-two-physical-node-openstack.html, getting things on cloud instances ready to work without an openstack-dashboard setup (the RDO Havana administrative web console).

Needless to say, Spice console behaviour with a running X server is much better than in a VNC session, where one X server actually runs inside a client of another one on the Controller node (F20).

The spice-gtk source rpm was installed on both boxes of the cluster and rebuilt :-
$ rpm -iv spice-gtk-0.22-1.fc21.src.rpm
$ cd ~/rpmbuild/SPECS
$ sudo yum install intltool gtk2-devel usbredir-devel libusb1-devel libgudev1-devel pixman-devel openssl-devel  libjpeg-turbo-devel celt051-devel pulseaudio-libs-devel pygtk2-devel python-devel zlib-devel cyrus-sasl-devel libcacard-devel gobject-introspection-devel  dbus-glib-devel libacl-devel polkit-devel gtk-doc vala-tools gtk3-devel spice-protocol

$ rpmbuild -bb ./spice-gtk.spec
$ cd ../RPMS/x86_64

The RPMs that have been built are then installed, because spicy is not on the system:

[boris@dfw02 x86_64]$  sudo yum install spice-glib-0.22-2.fc20.x86_64.rpm \
spice-glib-devel-0.22-2.fc20.x86_64.rpm \
spice-gtk-0.22-2.fc20.x86_64.rpm \
spice-gtk3-0.22-2.fc20.x86_64.rpm \
spice-gtk3-devel-0.22-2.fc20.x86_64.rpm \
spice-gtk3-vala-0.22-2.fc20.x86_64.rpm \
spice-gtk-debuginfo-0.22-2.fc20.x86_64.rpm \
spice-gtk-devel-0.22-2.fc20.x86_64.rpm  \
spice-gtk-python-0.22-2.fc20.x86_64.rpm \
spice-gtk-tools-0.22-2.fc20.x86_64.rpm

Next we install X Windows on F20 to run fluxbox (by the way, after hours of googling I was unable to find the required set of packages and just picked them up during a KDE environment installation via yum, which I actually don't need at all on a Fedora cloud instance).

# yum install xorg-x11-server-Xorg xorg-x11-xdm fluxbox \
xorg-x11-drv-ati xorg-x11-drv-evdev xorg-x11-drv-fbdev \
xorg-x11-drv-intel xorg-x11-drv-mga xorg-x11-drv-nouveau \
xorg-x11-drv-openchrome xorg-x11-drv-qxl xorg-x11-drv-synaptics \
xorg-x11-drv-vesa xorg-x11-drv-vmmouse xorg-x11-drv-vmware \
xorg-x11-drv-wacom xorg-x11-font-utils xorg-x11-drv-modesetting \
xorg-x11-glamor xorg-x11-utils xterm

Install some fonts :-

# yum install dejavu-fonts-common \
dejavu-sans-fonts \
dejavu-sans-mono-fonts \
dejavu-serif-fonts

We are ready to go :-

# echo "exec fluxbox" > ~/.xinitrc
# startx


Next:  $ yum -y install firefox
Then, via an xterm:
$ /usr/bin/firefox &

Fedora 20 cloud instance running in Spice Session via virt-manager with QXL (64 MB of VRAM)  :-

Connecting via spicy from Compute Node to same F20 instance :-


   
  

    
  After port mapping :-
# ssh -L 5900:localhost:5900 -N -f -l root 192.168.1.137
Spicy may then connect from the Controller to the Fedora 20 instance.
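
With that tunnel in place the connection from the Controller is simply (a usage sketch; port 5900 assumes the instance owns the first Spice console on the Compute node):

# spicy -h localhost -p 5900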


 



“Setting up Two Physical-Node OpenStack RDO Havana + Gluster Backend for Cinder + Neutron GRE” on Fedora 20 boxes with both Controller and Compute nodes each one having one Ethernet adapter

January 24, 2014

**************************************************************
UPDATE on 02/23/2014  
**************************************************************

To create a new instance I must have no more than 4 entries in `nova list`. Then I can sequentially restart the qpidd and openstack-nova-scheduler services (actually, it is not always necessary) and I will be able to create one new instance for sure. It has been tested on two “Two Node Neutron GRE+OVS+Gluster Backend for Cinder” clusters. It is related to `nova quota-show` for the tenant (10 instances is the default). Having 3 VMs on Compute I brought up openstack-nova-compute on the Controller and was able to create 2 more VMs on Compute and 5 VMs on the Controller.
All testing details are here: http://bderzhavets.blogspot.com/2014/02/next-attempt-to-set-up-two-node-neutron.html. Syntax like :

[root@dallas1 ~(keystone_admin)]$  nova quota-defaults
+—————————–+——-+
| Quota                       | Limit |
+—————————–+——-+
| instances                   | 10    |
| cores                       | 20    |
| ram                         | 51200 |
| floating_ips                | 10    |
| fixed_ips                   | -1    |
| metadata_items              | 128   |
| injected_files              | 5     |
| injected_file_content_bytes | 10240 |
| injected_file_path_bytes    | 255   |
| key_pairs                   | 100   |
| security_groups             | 10    |
| security_group_rules        | 20    |
+—————————–+——-+
[root@dallas1 ~(keystone_admin)]$  nova quota-class-update --instances 20 default

[root@dallas1 ~(keystone_admin)]$  nova quota-defaults
+—————————–+——-+
| Quota                       | Limit |
+—————————–+——-+
| instances                   | 20    |
| cores                       | 20    |
| ram                         | 51200 |
| floating_ips                | 10    |
| fixed_ips                   | -1    |
| metadata_items              | 128   |
| injected_files              | 5     |
| injected_file_content_bytes | 10240 |
| injected_file_path_bytes    | 255   |
| key_pairs                   | 100   |
| security_groups             | 10    |
| security_group_rules        | 20    |
+—————————–+——-+

doesn’t work for me
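
A possible alternative (not verified here, just a sketch) is to raise the limit for the specific tenant instead of the default quota class; the tenant id can be taken from `keystone tenant-list`:

$ keystone tenant-list                            # pick the tenant id
$ nova quota-update --instances 20 <tenant-id>    # per-tenant override
$ nova quota-show --tenant <tenant-id>            # verify the new limit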
****************************************************************

1. F19 and F20 have been installed via volumes based on glusterfs and show good performance on the Compute node. Yum works stably on F19 and a bit slower on F20.
2. CentOS 6.5 was installed only via a glance image (cinder shows ERROR status for the volume); network operations are slower than on the Fedoras.
3. Ubuntu 13.10 Server was installed via a volume based on glusterfs and was able to obtain internal and floating IPs. Network speed is close to Fedora 19.
4. Turning on the Gluster backend for Cinder on the F20 Two-Node Neutron GRE Cluster (Controller+Compute) improves performance significantly. Due to a known F20 bug the glusterfs FS was ext4.
5. On any cloud instance the MTU should be set to 1400 for proper communication over the GRE tunnel.

The post below follows up the two Fedora 20 VMs setup described in :-
  http://kashyapc.fedorapeople.org/virt/openstack/Two-node-Havana-setup.txt
  http://kashyapc.fedorapeople.org/virt/openstack/neutron-configs-GRE-OVS-two-node.txt
  Both cases have been tested above: default and non-default libvirt networks.
In the meantime I believe that using libvirt networks for creating the Controller and Compute nodes as F20 VMs is not important. The configs allow metadata to be sent from the Controller to the Compute node on real physical boxes. Just one Ethernet controller per box should be required when using GRE tunnelling for an RDO Havana on Fedora 20 manual setup.
  Currently F20 requires manual intervention in the MariaDB (MySQL) database regarding the presence of the root and nova passwords at the FQDN of the Controller host. I was also never able to start neutron-server via the account suggested by Kashyap, only as root:password@Controller (FQDN). The Neutron openvswitch agent and the Neutron L3 agent don't start at the point described in the first manual, only once the Neutron metadata agent is up and running. Notice also that the openstack-nova-conductor and openstack-nova-scheduler services won't start unless the mysql.user table is ready with the nova account password at the Controller's FQDN. All these updates are reflected in the reference links attached as text docs.

The manuals mentioned above require some editing, in the author's opinion, as well.

Manual Setup  for two different physical boxes running Fedora 20 with the most recent `yum -y update`

- Controller node: Nova, Keystone, Cinder, Glance, Neutron (using Open vSwitch plugin and GRE tunneling )

 – Compute node: Nova (nova-compute), Neutron (openvswitch-agent)

dfw02.localdomain   –  Controller (192.168.1.127)

dfw01.localdomain   –  Compute   (192.168.1.137)

Two instances are running on Compute node :-

VF19RS instance has 192.168.1.102 as floating IP,

CirrOS 3.1 instance has 192.168.1.101 as floating IP.

Cloud instances running on Compute perform commands like nslookup and traceroute. `yum install` and `yum -y update` work on the Fedora 19 instance; for the time being the network on VF19 is stable but relatively slow. It may be that the Realtek 8169 integrated on the board is not good enough for GRE and it is a problem of my hardware (dfw01 is built with a Q9550, ASUS P5Q3, 8 GB DDR3, SATA 2 Seagate 500 GB). CentOS 6.5 with “RDO Havana+Glusterfs+Neutron VLAN” works on the same box (dual booting with F20) much faster. That is a first impression. I've also changed neutron.conf's connection credentials to MySQL to be able to run the neutron-server service. The Neutron L3 agent and Neutron openvswitch agent require some effort to be started on the Controller.
The manual mentioned above requires some editing, in the author's opinion, as well.

[root@dfw02 ~(keystone_admin)]$ openstack-status

== Nova services ==

openstack-nova-api:                     active
openstack-nova-cert:                    inactive  (disabled on boot)
openstack-nova-compute:                 inactive  (disabled on boot)
openstack-nova-network:                 inactive  (disabled on boot)
openstack-nova-scheduler:               active
openstack-nova-volume:                  inactive  (disabled on boot)
openstack-nova-conductor:               active
== Glance services ==
openstack-glance-api:                   active
openstack-glance-registry:              active
== Keystone service ==
openstack-keystone:                     active
== neutron services ==
neutron-server:                         active
neutron-dhcp-agent:                     active
neutron-l3-agent:                       active
neutron-metadata-agent:                 active
neutron-lbaas-agent:                    inactive  (disabled on boot)
neutron-openvswitch-agent:              active
neutron-linuxbridge-agent:              inactive  (disabled on boot)
neutron-ryu-agent:                      inactive  (disabled on boot)
neutron-nec-agent:                      inactive  (disabled on boot)
neutron-mlnx-agent:                     inactive  (disabled on boot)
== Cinder services ==
openstack-cinder-api:                   active
openstack-cinder-scheduler:             active
openstack-cinder-volume:                active
== Ceilometer services ==
openstack-ceilometer-api:               inactive  (disabled on boot)
openstack-ceilometer-central:           inactive  (disabled on boot)
openstack-ceilometer-compute:           active
openstack-ceilometer-collector:         inactive  (disabled on boot)
openstack-ceilometer-alarm-notifier:    inactive  (disabled on boot)
openstack-ceilometer-alarm-evaluator:   inactive  (disabled on boot)
== Support services ==
mysqld:                                 inactive  (disabled on boot)
libvirtd:                               active
openvswitch:                            active
dbus:                                   active
tgtd:                                   active
qpidd:                                  active
== Keystone users ==
+———————————-+———+———+——-+
|                id                |   name  | enabled | email |
+———————————-+———+———+——-+
| 970ed56ef7bc41d59c54f5ed8a1690dc |  admin  |   True  |       |
| 1beeaa4b20454048bf23f7d63a065137 |  cinder |   True  |       |
| 006c2728df9146bd82fab04232444abf |  glance |   True  |       |
| 5922aa93016344d5a5d49c0a2dab458c | neutron |   True  |       |
| af2f251586564b46a4f60cdf5ff6cf4f |   nova  |   True  |       |
+———————————-+———+———+——-+
== Glance images ==
+————————————–+——————+————-+——————+———–+——–+
| ID                                   | Name             | Disk Format | Container Format | Size      | Status |
+————————————–+——————+————-+——————+———–+——–+
| dc992799-7831-4933-b6ee-7b81868f808b | CirrOS31         | qcow2       | bare             | 13147648  | active |
| 03c9ad20-b0a3-4b71-aa08-2728ecb66210 | Fedora 19 x86_64 | qcow2       | bare             | 237371392 | active |
+————————————–+——————+————-+——————+———–+——–+
== Nova managed services ==
 +—————-+——————-+———-+———+——-+—————————-+—————–+
| Binary         | Host              | Zone     | Status  | State | Updated_at                 | Disabled Reason |
+—————-+——————-+———-+———+——-+—————————-+—————–+
| nova-scheduler | dfw02.localdomain | internal | enabled | up    | 2014-01-23T22:36:15.000000 | None            |
| nova-conductor | dfw02.localdomain | internal | enabled | up    | 2014-01-23T22:36:11.000000 | None            |
| nova-compute   | dfw01.localdomain | nova     | enabled | up    | 2014-01-23T22:36:10.000000 | None            |
+—————-+——————-+———-+———+——-+—————————-+—————–+
== Nova networks ==
+————————————–+——-+——+
| ID                                   | Label | Cidr |
+————————————–+——-+——+
| 1eea88bb-4952-4aa4-9148-18b61c22d5b7 | int   | None |
| 780ce2f3-2e6e-4881-bbac-857813f9a8e0 | ext   | None |
+————————————–+——-+——+
== Nova instance flavors ==
+—-+———–+———–+——+———–+——+——-+————-+———–+
| ID | Name      | Memory_MB | Disk | Ephemeral | Swap | VCPUs | RXTX_Factor | Is_Public |
+—-+———–+———–+——+———–+——+——-+————-+———–+
| 1  | m1.tiny   | 512       | 1    | 0         |      | 1     | 1.0         | True      |
| 2  | m1.small  | 2048      | 20   | 0         |      | 1     | 1.0         | True      |
| 3  | m1.medium | 4096      | 40   | 0         |      | 2     | 1.0         | True      |
| 4  | m1.large  | 8192      | 80   | 0         |      | 4     | 1.0         | True      |
| 5  | m1.xlarge | 16384     | 160  | 0         |      | 8     | 1.0         | True      |
+—-+———–+———–+——+———–+——+——-+————-+———–+
== Nova instances ==
+————————————–+———–+——–+————+————-+—————————–+
| ID                                   | Name      | Status | Task State | Power State | Networks                    |
+————————————–+———–+——–+————+————-+—————————–+
| 3f2db906-567c-48b0-967e-799b2bffe277 | Cirros312 | ACTIVE | None       | Running     | int=10.0.0.2, 192.168.1.101 |
| 82dfc826-46cd-4b4c-a0f6-bac5f7132dec | VF19RS    | ACTIVE | None       | Running     | int=10.0.0.4, 192.168.1.102 |
+————————————–+———–+——–+————+————-+—————————–+
[root@dfw02 ~(keystone_admin)]$ nova-manage service list
Binary           Host                                 Zone             Status     State Updated_At
nova-scheduler   dfw02.localdomain                    internal         enabled    :-)   2014-01-23 22:39:05
nova-conductor   dfw02.localdomain                    internal         enabled    :-)   2014-01-23 22:39:11
nova-compute     dfw01.localdomain                    nova             enabled    :-)   2014-01-23 22:39:10
[root@dfw02 ~(keystone_admin)]$ ovs-vsctl show
7d78d536-3612-416e-bce6-24605088212f
Bridge br-int
Port br-int
Interface br-int
type: internal
Port patch-tun
Interface patch-tun
type: patch
options: {peer=patch-int}
Port “tapf933e768-42″
tag: 1
Interface “tapf933e768-42″
Port “tap40dd712c-e4″
tag: 1
Interface “tap40dd712c-e4″
Bridge br-ex
Port “p37p1″
Interface “p37p1″
Port br-ex
Interface br-ex
type: internal
Port “tap54e34740-87″
Interface “tap54e34740-87″
Bridge br-tun
Port br-tun
Interface br-tun
type: internal
Port “gre-2″
Interface “gre-2″
type: gre
options: {in_key=flow, local_ip=”192.168.1.127″, out_key=flow, remote_ip=”192.168.1.137″}
Port patch-int
Interface patch-int
type: patch
options: {peer=patch-tun}
ovs_version: “2.0.0”

Running instances on dfw01.localdomain :

[root@dfw02 ~(keystone_admin)]$ nova list

+————————————–+———–+——–+————+————-+—————————–+
| ID                                   | Name      | Status | Task State | Power State | Networks                    |
+————————————–+———–+——–+————+————-+—————————–+
| 3f2db906-567c-48b0-967e-799b2bffe277 | Cirros312 | ACTIVE | None       | Running     | int=10.0.0.2, 192.168.1.101 |
| 82dfc826-46cd-4b4c-a0f6-bac5f7132dec | VF19RS    | ACTIVE | None       | Running     | int=10.0.0.4, 192.168.1.102 |
+————————————–+———–+——–+————+————-+—————————–+
[root@dfw02 ~(keystone_admin)]$ nova-manage service list
Binary           Host                                 Zone             Status     State Updated_At
nova-scheduler   dfw02.localdomain                    internal         enabled    :-)   2014-01-23 22:25:45
nova-conductor   dfw02.localdomain                    internal         enabled    :-)   2014-01-23 22:25:41
nova-compute     dfw01.localdomain                    nova             enabled    :-)   2014-01-23 22:25:50

Fedora 19 instance loaded via :
[root@dfw02 ~(keystone_admin)]$ nova image-list

+————————————–+——————+——–+——–+
| ID                                   | Name             | Status | Server |

+————————————–+——————+——–+——–+
| dc992799-7831-4933-b6ee-7b81868f808b | CirrOS31         | ACTIVE |        |
| 03c9ad20-b0a3-4b71-aa08-2728ecb66210 | Fedora 19 x86_64 | ACTIVE |        |
+————————————–+——————+——–+——–+

[root@dfw02 ~(keystone_admin)]$  nova boot --flavor 2 --user-data=./myfile.txt --image 03c9ad20-b0a3-4b71-aa08-2728ecb66210 VF19RS

where

[root@dfw02 ~(keystone_admin)]$  cat ./myfile.txt
#cloud-config
password: mysecret
chpasswd: { expire: False }
ssh_pwauth: True

Snapshots  done on dfw01 host with VNC consoles opened via virt-manager :-

   

Snapshots  done on dfw02 host via virt-manager connection to dfw01 :-

  

Setup Light Weight X Windows environment on Fedora 20 Cloud instance and running F20 cloud instance in Spice session via virt-manager or spicy http://bderzhavets.blogspot.com/2014/02/setup-light-weight-x-windows_2.html

 Next we install X Windows on F20 to run fluxbox (by the way, after hours of googling I was unable to find the required set of packages and just picked them up during a KDE environment installation via yum, which I actually don't need at all on a Fedora cloud instance).

# yum install xorg-x11-server-Xorg xorg-x11-xdm fluxbox \
xorg-x11-drv-ati xorg-x11-drv-evdev xorg-x11-drv-fbdev \
xorg-x11-drv-intel xorg-x11-drv-mga xorg-x11-drv-nouveau \
xorg-x11-drv-openchrome xorg-x11-drv-qxl xorg-x11-drv-synaptics \
xorg-x11-drv-vesa xorg-x11-drv-vmmouse xorg-x11-drv-vmware \
xorg-x11-drv-wacom xorg-x11-font-utils xorg-x11-drv-modesetting \
xorg-x11-glamor xorg-x11-utils xterm

# yum install feh xcompmgr lxappearance xscreensaver dmenu

For details, view http://blog.bodhizazen.net/linux/a-5-minute-guide-to-fluxbox/

# mkdir .fluxbox/backgrounds

Add to ~/.fluxbox/menu file

[submenu] (Wallpapers)
[wallpapers] (~/.fluxbox/backgrounds) {feh --bg-scale}
[end] 

to be able to set wallpapers.

Install some fonts :-

# yum install dejavu-fonts-common \
dejavu-sans-fonts \
dejavu-sans-mono-fonts \
dejavu-serif-fonts

 We are ready to go :-

# echo “exec fluxbox” > ~/.xinitrc
# startx

To be able to surf the Internet, set MTU 1400 on the cloud instances only :-
# ifconfig eth0 mtu 1400 up
Otherwise, it won't be possible due to GRE encapsulation.

[root@dfw02 ~(keystone_admin)]$ nova list | grep LXW
| 492af969-72c0-4235-ac4e-d75d3778fd0a | VF20LXW          | ACTIVE    | None       | Running     | int=10.0.0.4, 192.168.1.106 |
[root@dfw02 ~(keystone_admin)]$ nova show 492af969-72c0-4235-ac4e-d75d3778fd0a
+————————————–+———————————————————-+
| Property                             | Value                                                    |
+————————————–+———————————————————-+
| status                               | ACTIVE                                                   |
| updated                              | 2014-02-06T09:38:52Z                                     |
| OS-EXT-STS:task_state                | None                                                     |
| OS-EXT-SRV-ATTR:host                 | dfw01.localdomain                                        |
| key_name                             | None                                                     |
| image                                | Attempt to boot from volume – no image supplied          |
| int network                          | 10.0.0.4, 192.168.1.106                                  |
| hostId                               | b67c11ccfa8ac8c9ed019934fa650d307f91e7fe7bbf8cd6874f3e01 |
| OS-EXT-STS:vm_state                  | active                                                   |
| OS-EXT-SRV-ATTR:instance_name        | instance-00000021                                        |
| OS-SRV-USG:launched_at               | 2014-02-05T17:47:38.000000                               |
| OS-EXT-SRV-ATTR:hypervisor_hostname  | dfw01.localdomain                                        |
| flavor                               | m1.small (2)                                             |
| id                                   | 492af969-72c0-4235-ac4e-d75d3778fd0a                     |
| security_groups                      | [{u'name': u'default'}]                                  |
| OS-SRV-USG:terminated_at             | None                                                     |
| user_id                              | 970ed56ef7bc41d59c54f5ed8a1690dc                         |
| name                                 | VF20LXW                                                  |
| created                              | 2014-02-05T17:47:33Z                                     |
| tenant_id                            | d0a0acfdb62b4cc8a2bfa8d6a08bb62f                         |
| OS-DCF:diskConfig                    | MANUAL                                                   |
| metadata                             | {}                                                       |
| os-extended-volumes:volumes_attached | [{u'id': u'd0c5706d-4193-4925-9140-29dea801b447'}]       |
| accessIPv4                           |                                                          |
| accessIPv6                           |                                                          |
| progress                             | 0                                                        |
| OS-EXT-STS:power_state               | 1                                                        |
| OS-EXT-AZ:availability_zone          | nova                                                     |
| config_drive                         |                                                          |
+————————————–+———————————————————-+

Switching to a Spice session improves the X server behaviour on the F20 cloud instance.

# ssh -L 5900:localhost:5900 -N -f -l <user> 192.168.1.137   ( Compute node IP-address )
# ssh -L 5901:localhost:5901 -N -f -l <user> 192.168.1.137
# ssh -L 5902:localhost:5902 -N -f -l <user> 192.168.1.137
# spicy -h localhost -p 590(X)

See also "Surfing Internet & SSH connectoin on (to) cloud instance of Fedora 20 via Neutron GRE" http://bderzhavets.wordpress.com/2014/02/04/surfing-internet-ssh-connectoin-on-to-cloud-instance-of-fedora-20-via-neutron-gre/

The same command, `ifconfig eth0 mtu 1400 up`, also makes ssh from the Controller and Compute nodes work.

[root@dfw02 nova(keystone_admin)]$ nova list

+————————————–+———–+———–+————+————-+—————————–+
| ID                                   | Name      | Status    | Task State | Power State | Networks                    |
+————————————–+———–+———–+————+————-+—————————–+
| 964fd0b0-b331-4b0c-a1d5-118bf8a40abf | CentOS6.5 | SUSPENDED | None       | Shutdown    | int=10.0.0.5, 192.168.1.105 |
| 3f2db906-567c-48b0-967e-799b2bffe277 | Cirros312 | SUSPENDED | None       | Shutdown    | int=10.0.0.2, 192.168.1.101 |
| 14c49bfe-f99c-4f31-918e-dcf0fd42b49d | VF19RST   | ACTIVE    | None       | Running     | int=10.0.0.4, 192.168.1.103 |
| 5aa903c5-624d-4dde-9e3c-49996d4a5edc | VF19VLGL  | SUSPENDED | None       | Shutdown    | int=10.0.0.6, 192.168.1.104 |
| 0e0b4f69-4cff-4423-ba9d-71c8eb53af16 | VF20KVM   | ACTIVE    | None       | Running     | int=10.0.0.7, 192.168.1.109 |
+————————————–+———–+———–+————+————-+—————————–+


[root@dfw02 nova(keystone_admin)]$ ssh fedora@192.168.1.109
fedora@192.168.1.109’s password:
Last login: Thu Jan 30 15:54:04 2014 from 192.168.1.127

 
[fedora@vf20kvm ~]$ ifconfig
eth0: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1400
inet 10.0.0.7  netmask 255.255.255.0  broadcast 10.0.0.255
inet6 fe80::f816:3eff:fec6:e89a  prefixlen 64  scopeid 0x20
ether fa:16:3e:c6:e8:9a  txqueuelen 1000  (Ethernet)
RX packets 630779  bytes 877092770 (836.4 MiB)
RX errors 0  dropped 0  overruns 0  frame 0
TX packets 166603  bytes 14706620 (14.0 MiB)
TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0

lo: flags=73<UP,LOOPBACK,RUNNING>  mtu 65536
inet 127.0.0.1  netmask 255.0.0.0
inet6 ::1  prefixlen 128  scopeid 0x10
loop  txqueuelen 0  (Local Loopback)
RX packets 2  bytes 140 (140.0 B)
RX errors 0  dropped 0  overruns 0  frame 0
TX packets 2  bytes 140 (140.0 B)
TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0

So, booting the cloud instance via `nova boot --user-data=./myfile.txt ...` gives access to the command line, where the MTU of eth0 can be set to 1400; this makes the instance reachable over ssh from the Controller and Compute Nodes and also makes internet surfing possible, in text and graphical mode, for Fedora 19/20 and Ubuntu 13.10/12.04.
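
Since the MTU has to be set on every boot anyway, it can also be baked into the user-data itself. A sketch of an extended myfile.txt (my addition; bootcmd entries are run by cloud-init early on each boot):

#cloud-config
password: mysecret
chpasswd: { expire: False }
ssh_pwauth: True
bootcmd:
 - ifconfig eth0 mtu 1400 up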

[root@dfw02 ~(keystone_admin)]$ ip netns list

qdhcp-1eea88bb-4952-4aa4-9148-18b61c22d5b7
qrouter-bf360d81-79fb-4636-8241-0a843f228fc8


[root@dfw02 ~(keystone_admin)]$ ip netns exec qrouter-bf360d81-79fb-4636-8241-0a843f228fc8 ip a
 1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default
link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
inet 127.0.0.1/8 scope host lo
valid_lft forever preferred_lft forever
inet6 ::1/128 scope host
valid_lft forever preferred_lft forever
2: qr-f933e768-42: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP group default qlen 1000
link/ether fa:16:3e:6a:d3:f0 brd ff:ff:ff:ff:ff:ff
inet 10.0.0.1/24 brd 10.0.0.255 scope global qr-f933e768-42
valid_lft forever preferred_lft forever
inet6 fe80::f816:3eff:fe6a:d3f0/64 scope link
valid_lft forever preferred_lft forever
3: qg-54e34740-87: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP group default qlen 1000
link/ether fa:16:3e:00:9a:0d brd ff:ff:ff:ff:ff:ff
inet 192.168.1.100/24 brd 192.168.1.255 scope global qg-54e34740-87
valid_lft forever preferred_lft forever
inet 192.168.1.101/32 brd 192.168.1.101 scope global qg-54e34740-87
valid_lft forever preferred_lft forever
inet 192.168.1.102/32 brd 192.168.1.102 scope global qg-54e34740-87
valid_lft forever preferred_lft forever
inet6 fe80::f816:3eff:fe00:9a0d/64 scope link
valid_lft forever preferred_lft forever
[root@dfw02 ~(keystone_admin)]$ ip netns exec qdhcp-1eea88bb-4952-4aa4-9148-18b61c22d5b7 ip a
 1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default
link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
inet 127.0.0.1/8 scope host lo
valid_lft forever preferred_lft forever
inet6 ::1/128 scope host
valid_lft forever preferred_lft forever
2: ns-40dd712c-e4: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP group default qlen 1000
link/ether fa:16:3e:93:44:f8 brd ff:ff:ff:ff:ff:ff
inet 10.0.0.3/24 brd 10.0.0.255 scope global ns-40dd712c-e4
valid_lft forever preferred_lft forever
inet6 fe80::f816:3eff:fe93:44f8/64 scope link
valid_lft forever preferred_lft forever
[root@dfw02 ~(keystone_admin)]$ ip netns exec qrouter-bf360d81-79fb-4636-8241-0a843f228fc8  ip r
default via 192.168.1.1 dev qg-54e34740-87
10.0.0.0/24 dev qr-f933e768-42  proto kernel  scope link  src 10.0.0.1
192.168.1.0/24 dev qg-54e34740-87  proto kernel  scope link  src 192.168.1.100
[root@dfw02 ~(keystone_admin)]$ ip netns exec qrouter-bf360d81-79fb-4636-8241-0a843f228fc8 \
> iptables -L -t nat | grep 169
REDIRECT   tcp  --  anywhere             169.254.169.254      tcp dpt:http redir  ports 8700

[root@dfw02 ~(keystone_admin)]$ neutron net-list
+————————————–+——+—————————————————–+
| id                                   | name | subnets                                             |
+————————————–+——+—————————————————–+
| 1eea88bb-4952-4aa4-9148-18b61c22d5b7 | int  | fa930cea-3d51-4cbe-a305-579f12aa53c0 10.0.0.0/24    |
| 780ce2f3-2e6e-4881-bbac-857813f9a8e0 | ext  | f30e5a16-a055-4388-a6ea-91ee142efc3d 192.168.1.0/24 |
+————————————–+——+—————————————————–+
[root@dfw02 ~(keystone_admin)]$ neutron subnet-list
+————————————–+——+—————-+—————————————————-+
| id                                   | name | cidr           | allocation_pools                                   |
+————————————–+——+—————-+—————————————————-+
| fa930cea-3d51-4cbe-a305-579f12aa53c0 |      | 10.0.0.0/24    | {“start”: “10.0.0.2”, “end”: “10.0.0.254”}         |
| f30e5a16-a055-4388-a6ea-91ee142efc3d |      | 192.168.1.0/24 | {“start”: “192.168.1.100”, “end”: “192.168.1.200”} |
+————————————–+——+—————-+—————————————————-+
[root@dfw02 ~(keystone_admin)]$ neutron floatingip-list
+————————————–+——————+———————+————————————–+
| id                                   | fixed_ip_address | floating_ip_address | port_id                              |
+————————————–+——————+———————+————————————–+
| 9d15609c-9465-4254-bdcb-43f072b6c7d4 | 10.0.0.2         | 192.168.1.101       | e4cb68c4-b932-4c83-86cd-72c75289114a |
| af9c6ba6-e0ca-498e-8f67-b9327f75d93f | 10.0.0.4         | 192.168.1.102       | c6914ab9-6c83-4cb6-bf18-f5b0798ec513 |
+————————————–+——————+———————+————————————–+
[root@dfw02 ~(keystone_admin)]$ neutron floatingip-show af9c6ba6-e0ca-498e-8f67-b9327f75d93f
+———————+————————————–+
| Field               | Value                                |
+———————+————————————–+
| fixed_ip_address    | 10.0.0.4                             |
| floating_ip_address | 192.168.1.102                        |
| floating_network_id | 780ce2f3-2e6e-4881-bbac-857813f9a8e0 |
| id                  | af9c6ba6-e0ca-498e-8f67-b9327f75d93f |
| port_id             | c6914ab9-6c83-4cb6-bf18-f5b0798ec513 |
| router_id           | bf360d81-79fb-4636-8241-0a843f228fc8 |
| tenant_id           | d0a0acfdb62b4cc8a2bfa8d6a08bb62f     |
+———————+————————————–+
[root@dfw02 ~(keystone_admin)]$ neutron floatingip-show  9d15609c-9465-4254-bdcb-43f072b6c7d4
+———————+————————————–+
| Field               | Value                                |
+———————+————————————–+
| fixed_ip_address    | 10.0.0.2                             |
| floating_ip_address | 192.168.1.101                        |
| floating_network_id | 780ce2f3-2e6e-4881-bbac-857813f9a8e0 |
| id                  | 9d15609c-9465-4254-bdcb-43f072b6c7d4 |
| port_id             | e4cb68c4-b932-4c83-86cd-72c75289114a |
| router_id           | bf360d81-79fb-4636-8241-0a843f228fc8 |
| tenant_id           | d0a0acfdb62b4cc8a2bfa8d6a08bb62f     |
+———————+————————————–+
Snapshot :-

*****************************************
Configuring Cinder to Add GlusterFS
*****************************************

# gluster volume create cinder-volumes05  replica 2 dfw02.localdomain:/data1/cinder5  dfw01.localdomain:/data1/cinder5
# gluster volume start cinder-volumes05
# gluster volume set cinder-volumes05  auth.allow 192.168.1.*
# openstack-config --set /etc/cinder/cinder.conf DEFAULT volume_driver cinder.volume.drivers.glusterfs.GlusterfsDriver
# openstack-config --set /etc/cinder/cinder.conf DEFAULT glusterfs_shares_config /etc/cinder/shares.conf
# openstack-config --set /etc/cinder/cinder.conf DEFAULT glusterfs_mount_point_base /var/lib/cinder/volumes
# vi /etc/cinder/shares.conf

192.168.1.127:cinder-volumes05

:wq
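
Before restarting the Cinder services it is worth checking that the new volume is started and mountable from the Controller; a quick sanity check (/mnt/test is just a throw-away mount point):

# gluster volume info cinder-volumes05
# mkdir -p /mnt/test
# mount -t glusterfs 192.168.1.127:/cinder-volumes05 /mnt/test
# df -h /mnt/test
# umount /mnt/test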

Update /etc/sysconfig/iptables:-

-A INPUT -p tcp -m multiport --dport 24007:24047 -j ACCEPT
-A INPUT -p tcp --dport 111 -j ACCEPT
-A INPUT -p udp --dport 111 -j ACCEPT
-A INPUT -p tcp -m multiport --dport 38465:38485 -j ACCEPT

Comment Out

-A FORWARD -j REJECT --reject-with icmp-host-prohibited
-A INPUT -j REJECT --reject-with icmp-host-prohibited

# service iptables restart

To mount gluster volume for cinder backend in current setup :-
# losetup -fv /cinder-volumes
# cinder delete a94b97f5-120b-40bd-b59e-8962a5cb6296
The commands above delete the test volume testvol1 created in Kashyap's original setup.

Skipping this step can cause openstack-cinder-volume to fail to restart in certain situations.

# for i in api scheduler volume ; do service openstack-cinder-${i} restart ; done

Verification of service status :-

[root@dfw02 cinder(keystone_admin)]$ service openstack-cinder-volume status -l
Redirecting to /bin/systemctl status  -l openstack-cinder-volume.service
openstack-cinder-volume.service – OpenStack Cinder Volume Server
   Loaded: loaded (/usr/lib/systemd/system/openstack-cinder-volume.service; enabled)
   Active: active (running) since Sat 2014-01-25 07:43:10 MSK; 6s ago
 Main PID: 21727 (cinder-volume)
   CGroup: /system.slice/openstack-cinder-volume.service
           ├─21727 /usr/bin/python /usr/bin/cinder-volume –config-file /usr/share/cinder/cinder-dist.conf –config-file /etc/cinder/cinder.conf –logfile /var/log/cinder/volume.log
           ├─21736 /usr/bin/python /usr/bin/cinder-volume –config-file /usr/share/cinder/cinder-dist.conf –config-file /etc/cinder/cinder.conf –logfile /var/log/cinder/volume.log
           └─21793 /usr/sbin/glusterfs –volfile-id=cinder-volumes05 –volfile-server=192.168.1.127 /var/lib/cinder/volumes/62f75cf6996a8a6bcc0d343be378c10a
Jan 25 07:43:10 dfw02.localdomain systemd[1]: Started OpenStack Cinder Volume Server.
Jan 25 07:43:11 dfw02.localdomain cinder-volume[21727]: 2014-01-25 07:43:11.402 21736 WARNING cinder.volume.manager [req-69c0060b-b5bf-4bce-8a8e-f2218dec7638 None None] Unable to update stats, driver is uninitialized
Jan 25 07:43:11 dfw02.localdomain sudo[21754]: cinder : TTY=unknown ; PWD=/ ; USER=root ; COMMAND=/bin/cinder-rootwrap /etc/cinder/rootwrap.conf mount -t glusterfs 192.168.1.127:cinder-volumes05 /var/lib/cinder/volumes/62f75cf6996a8a6bcc0d343be378c10a
Jan 25 07:43:11 dfw02.localdomain sudo[21803]: cinder : TTY=unknown ; PWD=/ ; USER=root ; COMMAND=/bin/cinder-rootwrap /etc/cinder/rootwrap.conf df –portability –block-size 1 /var/lib/cinder/volumes/62f75cf6996a8a6bcc0d343be378c10a

[root@dfw02 cinder(keystone_admin)]$ df -h
Filesystem                      Size  Used Avail Use% Mounted on
/dev/mapper/fedora00-root        96G  7.4G   84G   9% /
devtmpfs                        3.9G     0  3.9G   0% /dev
tmpfs                           3.9G  152K  3.9G   1% /dev/shm
tmpfs                           3.9G  1.2M  3.9G   1% /run
tmpfs                           3.9G     0  3.9G   0% /sys/fs/cgroup
tmpfs                           3.9G  184K  3.9G   1% /tmp
/dev/sda5                       477M  101M  347M  23% /boot
/dev/mapper/fedora00-data1       77G   53M   73G   1% /data1
tmpfs                           3.9G  1.2M  3.9G   1% /run/netns
192.168.1.127:cinder-volumes05   77G   52M   73G   1% /var/lib/cinder/volumes/62f75cf6996a8a6bcc0d343be378c10a

At runtime on Compute Node :-

[root@dfw01 ~]# df -h
Filesystem                      Size  Used Avail Use% Mounted on
/dev/mapper/fedora-root          96G   54G   38G  59% /
devtmpfs                        3.9G     0  3.9G   0% /dev
tmpfs                           3.9G  484K  3.9G   1% /dev/shm
tmpfs                           3.9G  1.3M  3.9G   1% /run
tmpfs                           3.9G     0  3.9G   0% /sys/fs/cgroup
tmpfs                           3.9G   36K  3.9G   1% /tmp
/dev/sda5                       477M  121M  327M  27% /boot
/dev/mapper/fedora-data1         77G  6.7G   67G  10% /data1
192.168.1.127:cinder-volumes05   77G  6.7G   67G  10% /var/lib/nova/mnt/62f75cf6996a8a6bcc0d343be378c10a

[root@dfw02 ~(keystone_admin)]$ nova image-list
+————————————–+——————+——–+——–+
| ID                                   | Name             | Status | Server |
+————————————–+——————+——–+——–+
| dc992799-7831-4933-b6ee-7b81868f808b | CirrOS31         | ACTIVE |        |
| 03c9ad20-b0a3-4b71-aa08-2728ecb66210 | Fedora 19 x86_64 | ACTIVE |        |
+————————————–+——————+——–+——–+

[root@dfw02 ~(keystone_admin)]$ cinder create --image-id 03c9ad20-b0a3-4b71-aa08-2728ecb66210 \
> --display-name Fedora19VLG 7

+———————+————————————–+
|       Property      |                Value                 |
+———————+————————————–+
|     attachments     |                  []                  |
|  availability_zone  |                 nova                 |
|       bootable      |                false                 |
|      created_at     |      2014-01-25T03:45:21.124690      |
| display_description |                 None                 |
|     display_name    |             Fedora19VLG              |
|          id         | 5f0f096b-192a-435b-bdbc-5063ed5c6366 |
|       image_id      | 03c9ad20-b0a3-4b71-aa08-2728ecb66210 |
|       metadata      |                  {}                  |
|         size        |                  7                   |
|     snapshot_id     |                 None                 |
|     source_volid    |                 None                 |
|        status       |               creating               |
|     volume_type     |                 None                 |
+———————+————————————–+

[root@dfw02 cinder5(keystone_admin)]$ cinder list

+————————————–+———–+————–+——+————-+———-+————-+
|                  ID                  |   Status  | Display Name | Size | Volume Type | Bootable | Attached to |
+————————————–+———–+————–+——+————-+———-+————-+
| 5f0f096b-192a-435b-bdbc-5063ed5c6366 | available | Fedora19VLG  |  7   |     None    |   true   |             |
+————————————–+———–+————–+——+————-+———-+————–

**********************************************************************************
UPDATE on 03/09/2014. In the meantime I am able to boot an instance from a glusterfs-backed cinder volume only via the following command :-
**********************************************************************************
[root@dallas1 ~(keystone_boris)]$ nova boot --flavor 2 --user-data=./myfile.txt --block-device source=image,id=d0e90250-5814-4685-9b8d-65ec9daa7117,dest=volume,size=5,shutdown=preserve,bootindex=0 VF20RS012

***********************************************************************************
Update on 03/11/2014.
***********************************************************************************
The standard schema via `cinder create --image-id IMAGE_ID --display_name VOL_NAME SIZE` && `nova boot --flavor 2 --user-data=./myfile.txt --block_device_mapping vda=VOLUME_ID:::0 INSTANCE_NAME` started to work fine. The schema described in the previous UPDATE of 03/09/14, on the contrary, stopped working smoothly on glusterfs-based cinder volumes.
    However, even when it ends up in "Error" status, it still creates a glusterfs cinder volume (with a system id) which is quite healthy and may be used to build a new instance of F20 or Ubuntu 14.04, whatever the original image was, via CLI or Dashboard. It looks like a bug in Nova & Neutron interprocess communication, I would say synchronization at boot up.
     Please view :-

“Provide an API for external services to send defined events to the compute service for synchronization. This includes immediate needs for nova-neutron interaction around boot timing and network info updates”
    https://blueprints.launchpad.net/nova/+spec/admin-event-callback-api  
 and bug report :-
    https://bugs.launchpad.net/nova/+bug/1280357

Booting an instance from the volume created on glusterfs

[root@dfw02 ~(keystone_admin)]$ nova boot --flavor 2 --user-data=./myfile.txt --block_device_mapping vda=5f0f096b-192a-435b-bdbc-5063ed5c6366:::0 VF19VLGL

+————————————–+—————————————————-+
| Property                             | Value                                              |
+————————————–+—————————————————-+
| OS-EXT-STS:task_state                | scheduling                                         |
| image                                | Attempt to boot from volume – no image supplied    |
| OS-EXT-STS:vm_state                  | building                                           |
| OS-EXT-SRV-ATTR:instance_name        | instance-00000005                                  |
| OS-SRV-USG:launched_at               | None                                               |
| flavor                               | m1.small                                           |
| id                                   | 5aa903c5-624d-4dde-9e3c-49996d4a5edc               |
| security_groups                      | [{u'name': u'default'}]                            |
| user_id                              | 970ed56ef7bc41d59c54f5ed8a1690dc                   |
| OS-DCF:diskConfig                    | MANUAL                                             |
| accessIPv4                           |                                                    |
| accessIPv6                           |                                                    |
| progress                             | 0                                                  |
| OS-EXT-STS:power_state               | 0                                                  |
| OS-EXT-AZ:availability_zone          | nova                                               |
| config_drive                         |                                                    |
| status                               | BUILD                                              |
| updated                              | 2014-01-25T03:59:12Z                               |
| hostId                               |                                                    |
| OS-EXT-SRV-ATTR:host                 | None                                               |
| OS-SRV-USG:terminated_at             | None                                               |
| key_name                             | None                                               |
| OS-EXT-SRV-ATTR:hypervisor_hostname  | None                                               |
| name                                 | VF19VLGL                                           |
| adminPass                            | Aq4LBKP9rBGF                                       |
| tenant_id                            | d0a0acfdb62b4cc8a2bfa8d6a08bb62f                   |
| created                              | 2014-01-25T03:59:12Z                               |
| os-extended-volumes:volumes_attached | [{u'id': u'5f0f096b-192a-435b-bdbc-5063ed5c6366'}] |
| metadata                             | {}                                                 |
+————————————–+—————————————————-+

In just a second the new instance will be booted from the volume created on glusterfs (Fedora 20: Qemu 1.6, Libvirt 1.1.3).

[root@dfw02 ~(keystone_admin)]$ nova list

+————————————–+———–+———–+————+————-+—————————–+
| ID                                   | Name      | Status    | Task State | Power State | Networks                    |
+————————————–+———–+———–+————+————-+—————————–+
| 3f2db906-567c-48b0-967e-799b2bffe277 | Cirros312 | SUSPENDED | None       | Shutdown    | int=10.0.0.2, 192.168.1.101 |
| aaeada4a-6a83-4cbc-ac8b-96a8b1fa81ad | VF19GL    | SUSPENDED | None       | Shutdown    | int=10.0.0.5, 192.168.1.103 |
| 5aa903c5-624d-4dde-9e3c-49996d4a5edc | VF19VLGL  | ACTIVE    | None       | Running     | int=10.0.0.6                |
+————————————–+———–+———–+————+————-+—————————–+

[root@dfw02 ~(keystone_admin)]$ neutron port-list --device-id 5aa903c5-624d-4dde-9e3c-49996d4a5edc

+————————————–+——+——————-+———————————————————————————+
| id                                   | name | mac_address       | fixed_ips                                                                       |
+————————————–+——+——————-+———————————————————————————+
| 7196be1f-9216-4bfd-ac8b-9903780936d9 |      | fa:16:3e:4b:97:90 | {"subnet_id": "fa930cea-3d51-4cbe-a305-579f12aa53c0", "ip_address": "10.0.0.6"} |
+--------------------------------------+------+-------------------+----------------------------------------------------------------------------------+

[root@dfw02 ~(keystone_admin)]$ neutron floatingip-list

+————————————–+——————+———————+————————————–+
| id                                   | fixed_ip_address | floating_ip_address | port_id                              |
+————————————–+——————+———————+————————————–+
| 04ccafab-1878-44f6-b5ab-a1e2ea1faa97 | 10.0.0.5         | 192.168.1.103       | 1d10dc02-c0f2-4225-ae61-db281f3af69c |
| 9d15609c-9465-4254-bdcb-43f072b6c7d4 | 10.0.0.2         | 192.168.1.101       | e4cb68c4-b932-4c83-86cd-72c75289114a |
| af9c6ba6-e0ca-498e-8f67-b9327f75d93f | 10.0.0.4         | 192.168.1.102       | c6914ab9-6c83-4cb6-bf18-f5b0798ec513 |
| c8bb83a0-73cd-4cc1-9dc1-5f1f6c74e86e |                  | 192.168.1.104       |                                      |
+————————————–+——————+———————+————————————–+

[root@dfw02 ~(keystone_admin)]$ neutron floatingip-associate c8bb83a0-73cd-4cc1-9dc1-5f1f6c74e86e 7196be1f-9216-4bfd-ac8b-9903780936d9
Associated floatingip c8bb83a0-73cd-4cc1-9dc1-5f1f6c74e86e

[root@dfw02 ~(keystone_admin)]$ neutron floatingip-show c8bb83a0-73cd-4cc1-9dc1-5f1f6c74e86e

+———————+————————————–+
| Field               | Value                                |
+———————+————————————–+
| fixed_ip_address    | 10.0.0.6                             |
| floating_ip_address | 192.168.1.104                        |
| floating_network_id | 780ce2f3-2e6e-4881-bbac-857813f9a8e0 |
| id                  | c8bb83a0-73cd-4cc1-9dc1-5f1f6c74e86e |
| port_id             | 7196be1f-9216-4bfd-ac8b-9903780936d9 |
| router_id           | bf360d81-79fb-4636-8241-0a843f228fc8 |
| tenant_id           | d0a0acfdb62b4cc8a2bfa8d6a08bb62f     |
+———————+————————————–+

[root@dfw02 ~(keystone_admin)]$ ping 192.168.1.104

PING 192.168.1.104 (192.168.1.104) 56(84) bytes of data.

64 bytes from 192.168.1.104: icmp_seq=1 ttl=63 time=4.19 ms
64 bytes from 192.168.1.104: icmp_seq=2 ttl=63 time=1.32 ms
64 bytes from 192.168.1.104: icmp_seq=3 ttl=63 time=1.06 ms
64 bytes from 192.168.1.104: icmp_seq=4 ttl=63 time=1.11 ms
64 bytes from 192.168.1.104: icmp_seq=5 ttl=63 time=1.13 ms
64 bytes from 192.168.1.104: icmp_seq=6 ttl=63 time=1.02 ms
64 bytes from 192.168.1.104: icmp_seq=7 ttl=63 time=1.05 ms
64 bytes from 192.168.1.104: icmp_seq=8 ttl=63 time=1.08 ms
64 bytes from 192.168.1.104: icmp_seq=9 ttl=63 time=0.974 ms
64 bytes from 192.168.1.104: icmp_seq=10 ttl=63 time=1.03 ms

The I/O speed improvement is noticeable on boot up and on disk-heavy operations.

A CentOS 6.5 instance was able to start its own X server in a VNC session from F20, in other words to act as a client of the F20 host's X server (?).

Setting up Ubuntu 13.10 cloud instance

 [root@dfw02 ~(keystone_admin)]$ nova list | grep UbuntuSalamander

| 812d369d-e351-469e-8820-a2d0d8740716 | UbuntuSalamander | ACTIVE    | None       | Running     | int=10.0.0.8, 192.168.1.110 |

 [root@dfw02 ~(keystone_admin)]$ nova show 812d369d-e351-469e-8820-a2d0d8740716

+————————————–+———————————————————-+
| Property                             | Value                                                    |
+————————————–+———————————————————-+
| status                               | ACTIVE                                                   |
| updated                              | 2014-01-31T04:46:30Z                                     |
| OS-EXT-STS:task_state                | None                                                     |
| OS-EXT-SRV-ATTR:host                 | dfw01.localdomain                                        |
| key_name                             | None                                                     |
| image                                | Attempt to boot from volume – no image supplied          |
| int network                          | 10.0.0.8, 192.168.1.110                                  |
| hostId                               | b67c11ccfa8ac8c9ed019934fa650d307f91e7fe7bbf8cd6874f3e01 |
| OS-EXT-STS:vm_state                  | active                                                   |
| OS-EXT-SRV-ATTR:instance_name        | instance-00000016                                        |
| OS-SRV-USG:launched_at               | 2014-01-31T04:46:30.000000                               |
| OS-EXT-SRV-ATTR:hypervisor_hostname  | dfw01.localdomain                                        |
| flavor                               | m1.small (2)                                             |
| id                                   | 812d369d-e351-469e-8820-a2d0d8740716                     |
| security_groups                      | [{u'name': u'default'}]                                  |
| OS-SRV-USG:terminated_at             | None                                                     |
| user_id                              | 970ed56ef7bc41d59c54f5ed8a1690dc                         |
| name                                 | UbuntuSalamander                                         |
| created                              | 2014-01-31T04:46:25Z                                     |
| tenant_id                            | d0a0acfdb62b4cc8a2bfa8d6a08bb62f                         |
| OS-DCF:diskConfig                    | MANUAL                                                   |
| metadata                             | {}                                                       |
| os-extended-volumes:volumes_attached | [{u'id': u'34bdf9d9-5bcc-4b62-8140-919c00fe07df'}]       |
| accessIPv4                           |                                                          |
| accessIPv6                           |                                                          |
| progress                             | 0                                                        |
| OS-EXT-STS:power_state               | 1                                                        |
| OS-EXT-AZ:availability_zone          | nova                                                     |
| config_drive                         |                                                          |
+————————————–+———————————————————-+

[root@dfw02 ~(keystone_admin)]$ ssh ubuntu@192.168.1.110
ubuntu@192.168.1.110’s password: 


Welcome to Ubuntu 13.10 (GNU/Linux 3.11.0-15-generic x86_64)
* Documentation:  https://help.ubuntu.com/
System information as of Fri Jan 31 05:13:19 UTC 2014

System load:  0.08              Processes:           73
Usage of /:   11.4% of 6.86GB   Users logged in:     1
Memory usage: 3%                IP address for eth0: 10.0.0.8
Swap usage:   0%
Graph this data and manage this system at:

https://landscape.canonical.com/

Get cloud support with Ubuntu Advantage Cloud Guest:

http://www.ubuntu.com/business/services/cloud

Last login: Fri Jan 31 05:13:25 2014 from 192.168.1.127

ubuntu@ubuntusalamander:~$ ifconfig
eth0      Link encap:Ethernet  HWaddr fa:16:3e:1e:16:35
inet addr:10.0.0.8  Bcast:10.0.0.255  Mask:255.255.255.0
inet6 addr: fe80::f816:3eff:fe1e:1635/64 Scope:Link
UP BROADCAST RUNNING MULTICAST  MTU:1400  Metric:1
RX packets:854 errors:0 dropped:0 overruns:0 frame:0
TX packets:788 errors:0 dropped:0 overruns:0 carrier:0
collisions:0 txqueuelen:1000
RX bytes:85929 (85.9 KB)  TX bytes:81060 (81.0 KB)

lo        Link encap:Local Loopback
inet addr:127.0.0.1  Mask:255.0.0.0
inet6 addr: ::1/128 Scope:Host
UP LOOPBACK RUNNING  MTU:65536  Metric:1
RX packets:0 errors:0 dropped:0 overruns:0 frame:0
TX packets:0 errors:0 dropped:0 overruns:0 carrier:0
collisions:0 txqueuelen:0
RX bytes:0 (0.0 B)  TX bytes:0 (0.0 B)

Setting up a lightweight X environment on the Ubuntu instance :-

$ sudo  apt-get install xorg openbox
Reboot
$ startx
Right mouse click on desktop opens X-terminal
$ sudo apt-get install firefox
$ /usr/bin/firefox

Testing a tenant's ability to create networks, routers, and instances

[root@dfw02 ~]#  cat  keystonerc_boris
export OS_USERNAME=boris
export OS_TENANT_NAME=ostenant
export OS_PASSWORD=fedora
export OS_AUTH_URL=http://192.168.1.127:35357/v2.0/
export PS1='[\u@\h \W(keystone_boris)]$ '
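
The keystonerc_boris file above assumes that the ostenant tenant and the boris user already exist. If they do not, they can be created under keystone_admin roughly like this (a sketch; check `keystone role-list` for the exact member role name on your install):

[root@dfw02 ~(keystone_admin)]$ keystone tenant-create --name ostenant
[root@dfw02 ~(keystone_admin)]$ keystone user-create --name boris --pass fedora
[root@dfw02 ~(keystone_admin)]$ keystone user-role-add --user boris --role _member_ --tenant ostenant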

[root@dfw02 ~]# . keystonerc_boris

[root@dfw02 ~(keystone_boris)]$ neutron net-list

+————————————–+——+—————————————+

| id                                   | name | subnets                               |

+————————————–+——+—————————————+
| 780ce2f3-2e6e-4881-bbac-857813f9a8e0 | ext  | f30e5a16-a055-4388-a6ea-91ee142efc3d  |
+————————————–+——+—————————————+

[root@dfw02 ~(keystone_boris)]$ neutron router-create router2

Created a new router:

+———————–+————————————–+
| Field                 | Value                                |
+———————–+————————————–+
| admin_state_up        | True                                 |
| external_gateway_info |                                      |
| id                    | 86b3008c-297f-4301-9bdc-766b839785f1 |
| name                  | router2                              |
| status                | ACTIVE                               |
| tenant_id             | 4dacfff9e72c4245a48d648ee23468d5     |
+———————–+————————————–+

[root@dfw02 ~(keystone_boris)]$ neutron router-gateway-set router2 ext

Set gateway for router router2

[root@dfw02 ~(keystone_boris)]$  neutron net-create int1

Created a new network:

+—————-+————————————–+
| Field          | Value                                |
+—————-+————————————–+
| admin_state_up | True                                 |
| id             | 426bb226-0ab9-440d-ba14-05634a17fb2b |
| name           | int1                                 |
| shared         | False                                |
| status         | ACTIVE                               |
| subnets        |                                      |
| tenant_id      | 4dacfff9e72c4245a48d648ee23468d5     |
+—————-+————————————–+

[root@dfw02 ~(keystone_boris)]$ neutron subnet-create int1 40.0.0.0/24 --dns_nameservers list=true 83.221.202.254

Created a new subnet:

+——————+——————————————–+
| Field            | Value                                      |
+——————+——————————————–+
| allocation_pools | {“start”: “40.0.0.2”, “end”: “40.0.0.254”} |
| cidr             | 40.0.0.0/24                                |
| dns_nameservers  | 83.221.202.254                             |
| enable_dhcp      | True                                       |
| gateway_ip       | 40.0.0.1                                   |
| host_routes      |                                            |
| id               | 9e0d457b-c4c4-45cf-84e2-4ac7550f3b06       |
| ip_version       | 4                                          |
| name             |                                            |
| network_id       | 426bb226-0ab9-440d-ba14-05634a17fb2b       |
| tenant_id        | 4dacfff9e72c4245a48d648ee23468d5           |
+——————+——————————————–+

[root@dfw02 ~(keystone_boris)]$  neutron router-interface-add router2 9e0d457b-c4c4-45cf-84e2-4ac7550f3b06

Added interface e031db6b-d0cc-4c57-877b-53b1c6946870 to router router2.

[root@dfw02 ~(keystone_boris)]$ neutron subnet-list

+————————————–+——+————-+——————————————–+
| id                                   | name | cidr        | allocation_pools                           |
+————————————–+——+————-+——————————————–+
| 9e0d457b-c4c4-45cf-84e2-4ac7550f3b06 |      | 40.0.0.0/24 | {“start”: “40.0.0.2”, “end”: “40.0.0.254”} |
+————————————–+——+————-+——————————————–+

[root@dfw02 ~(keystone_boris)]$ neutron security-group-rule-create --protocol icmp \
>   --direction ingress --remote-ip-prefix 0.0.0.0/0 default

Created a new security_group_rule:

+——————-+————————————–+
| Field             | Value                                |
+——————-+————————————–+
| direction         | ingress                              |
| ethertype         | IPv4                                 |
| id                | 4a6deddf-9350-4f98-97d7-a54cf6ebaa9a |
| port_range_max    |                                      |
| port_range_min    |                                      |
| protocol          | icmp                                 |
| remote_group_id   |                                      |
| remote_ip_prefix  | 0.0.0.0/0                            |
| security_group_id | 4b72bcaf-b456-4222-afcc-8885326b96b2 |
| tenant_id         | 4dacfff9e72c4245a48d648ee23468d5     |
+——————-+————————————–+

[root@dfw02 ~(keystone_boris)]$ neutron security-group-rule-create --protocol tcp \
>   --port-range-min 22 --port-range-max 22 \
>   --direction ingress --remote-ip-prefix 0.0.0.0/0 default

Created a new security_group_rule:

+——————-+————————————–+
| Field             | Value                                |
+——————-+————————————–+
| direction         | ingress                              |
| ethertype         | IPv4                                 |
| id                | 7a461936-ffbc-4968-975b-3d27ec975e04 |
| port_range_max    | 22                                   |
| port_range_min    | 22                                   |
| protocol          | tcp                                  |
| remote_group_id   |                                      |
| remote_ip_prefix  | 0.0.0.0/0                            |
| security_group_id | 4b72bcaf-b456-4222-afcc-8885326b96b2 |
| tenant_id         | 4dacfff9e72c4245a48d648ee23468d5     |
+——————-+————————————–+

[root@dfw02 ~(keystone_boris)]$ glance image-list

+————————————–+———————+————-+——————+———–+——–+
| ID                                   | Name                | Disk Format | Container Format | Size      | Status |
+————————————–+———————+————-+——————+———–+——–+
| b9782f24-2f37-4f8d-9467-f622507788bf | CentOS6.5 image     | qcow2       | bare             | 344457216 | active |
| dc992799-7831-4933-b6ee-7b81868f808b | CirrOS31            | qcow2       | bare             | 13147648  | active |
| 03c9ad20-b0a3-4b71-aa08-2728ecb66210 | Fedora 19 x86_64    | qcow2       | bare             | 237371392 | active |
| de93ee44-4085-4111-b022-a7437da8feac | Fedora 20 image     | qcow2       | bare             | 214106112 | active |
| e70591fc-6905-4e57-84b7-4ffa7c001864 | Ubuntu Server 13.10 | qcow2       | bare             | 244514816 | active |
| dd8c11d2-f9b0-4b77-9b5e-534c7ac0fd58 | Ubuntu Trusty image | qcow2       | bare             | 246022144 | active |
+————————————–+———————+————-+——————+———–+——–+

[root@dfw02 ~(keystone_boris)]$ cinder create --image-id de93ee44-4085-4111-b022-a7437da8feac --display_name VF20VLG02 7

+———————+————————————–+
|       Property      |                Value                 |
+———————+————————————–+
|     attachments     |                  []                  |
|  availability_zone  |                 nova                 |
|       bootable      |                false                 |
|      created_at     |      2014-02-21T06:36:21.753407      |
| display_description |                 None                 |
|     display_name    |              VF20VLG02               |
|          id         | c3b09e44-1868-43c6-baaa-1ffcb4b80fb1 |
|       image_id      | de93ee44-4085-4111-b022-a7437da8feac |
|       metadata      |                  {}                  |
|         size        |                  7                   |
|     snapshot_id     |                 None                 |
|     source_volid    |                 None                 |
|        status       |               creating               |
|     volume_type     |                 None                 |
+———————+————————————–+

[root@dfw02 ~(keystone_boris)]$ cinder list

+————————————–+————-+————–+——+————-+———-+————-+
|                  ID                  |    Status   | Display Name | Size | Volume Type | Bootable | Attached to |
+————————————–+————-+————–+——+————-+———-+————-+
| c3b09e44-1868-43c6-baaa-1ffcb4b80fb1 | downloading |  VF20VLG02   |  7   |     None    |  false   |             |
+————————————–+————-+————–+——+————-+———-+————-+

[root@dfw02 ~(keystone_boris)]$ cinder list

+————————————–+———–+————–+——+————-+———-+————-+
|                  ID                  |   Status  | Display Name | Size | Volume Type | Bootable | Attached to |
+————————————–+———–+————–+——+————-+———-+————-+
| c3b09e44-1868-43c6-baaa-1ffcb4b80fb1 | available |  VF20VLG02   |  7   |     None    |   true   |             |
+————————————–+———–+————–+——+————-+———-+————-+

[root@dfw02 ~(keystone_boris)]$ nova boot --flavor 2 --user-data=./myfile.txt --block_device_mapping vda=c3b09e44-1868-43c6-baaa-1ffcb4b80fb1:::0 VF20XWS

+————————————–+—————————————————-+
| Property                             | Value                                              |
+————————————–+—————————————————-+
| status                               | BUILD                                              |
| updated                              | 2014-02-21T06:49:42Z                               |
| OS-EXT-STS:task_state                | scheduling                                         |
| key_name                             | None                                               |
| image                                | Attempt to boot from volume – no image supplied    |
| hostId                               |                                                    |
| OS-EXT-STS:vm_state                  | building                                           |
| OS-SRV-USG:launched_at               | None                                               |
| flavor                               | m1.small                                           |
| id                                   | c4573327-dd99-4e57-941e-3d35aacb637c               |
| security_groups                      | [{u'name': u'default'}]                            |
| OS-SRV-USG:terminated_at             | None                                               |
| user_id                              | 162021e787c54cac906ab3296a386006                   |
| name                                 | VF20XWS                                            |
| adminPass                            | YkPYdW58gz7K                                       |
| tenant_id                            | 4dacfff9e72c4245a48d648ee23468d5                   |
| created                              | 2014-02-21T06:49:42Z                               |
| OS-DCF:diskConfig                    | MANUAL                                             |
| metadata                             | {}                                                 |
| os-extended-volumes:volumes_attached | [{u'id': u'c3b09e44-1868-43c6-baaa-1ffcb4b80fb1'}] |
| accessIPv4                           |                                                    |
| accessIPv6                           |                                                    |
| progress                             | 0                                                  |
| OS-EXT-STS:power_state               | 0                                                  |
| OS-EXT-AZ:availability_zone          | nova                                               |
| config_drive                         |                                                    |
+————————————–+—————————————————-+

[root@dfw02 ~(keystone_boris)]$ nova list

+————————————–+———+——–+————+————-+—————+
| ID                                   | Name    | Status | Task State | Power State | Networks      |
+————————————–+———+——–+————+————-+—————+
| c4573327-dd99-4e57-941e-3d35aacb637c | VF20XWS | ACTIVE | None       | Running     | int1=40.0.0.2 |
+————————————–+———+——–+————+————-+—————+

[root@dfw02 ~(keystone_boris)]$ neutron floatingip-create ext

Created a new floatingip:

+———————+————————————–+
| Field               | Value                                |
+———————+————————————–+
| fixed_ip_address    |                                      |
| floating_ip_address | 192.168.1.115                        |
| floating_network_id | 780ce2f3-2e6e-4881-bbac-857813f9a8e0 |
| id                  | 64dd749f-6127-4d0f-ba51-8a9978b8c211 |
| port_id             |                                      |
| router_id           |                                      |
| tenant_id           | 4dacfff9e72c4245a48d648ee23468d5     |
+———————+————————————–+

[root@dfw02 ~(keystone_boris)]$ neutron port-list --device-id c4573327-dd99-4e57-941e-3d35aacb637c

+————————————–+——+——————-+———————————————————————————+
| id                                   | name | mac_address       | fixed_ips                                                                       |
+————————————–+——+——————-+———————————————————————————+
| 2d6c6569-44c3-44b2-8bed-cdc8dde12336 |      | fa:16:3e:10:a0:e3 | {“subnet_id”: “9e0d457b-c4c4-45cf-84e2-4ac7550f3b06″, “ip_address”: “40.0.0.2”} |
+————————————–+——+——————-+———————————————————————————+

[root@dfw02 ~(keystone_boris)]$ neutron floatingip-associate 64dd749f-6127-4d0f-ba51-8a9978b8c211 2d6c6569-44c3-44b2-8bed-cdc8dde12336

Associated floatingip 64dd749f-6127-4d0f-ba51-8a9978b8c211

[root@dfw02 ~(keystone_boris)]$ neutron floatingip-show 64dd749f-6127-4d0f-ba51-8a9978b8c211

+———————+————————————–+
| Field               | Value                                |
+———————+————————————–+
| fixed_ip_address    | 40.0.0.2                             |
| floating_ip_address | 192.168.1.115                        |
| floating_network_id | 780ce2f3-2e6e-4881-bbac-857813f9a8e0 |
| id                  | 64dd749f-6127-4d0f-ba51-8a9978b8c211 |
| port_id             | 2d6c6569-44c3-44b2-8bed-cdc8dde12336 |
| router_id           | 86b3008c-297f-4301-9bdc-766b839785f1 |
| tenant_id           | 4dacfff9e72c4245a48d648ee23468d5     |
+———————+————————————–+

[root@dfw02 ~(keystone_boris)]$ ping 192.168.1.115

PING 192.168.1.115 (192.168.1.115) 56(84) bytes of data.
64 bytes from 192.168.1.115: icmp_seq=1 ttl=63 time=3.80 ms
64 bytes from 192.168.1.115: icmp_seq=2 ttl=63 time=1.13 ms
64 bytes from 192.168.1.115: icmp_seq=3 ttl=63 time=0.954 ms
64 bytes from 192.168.1.115: icmp_seq=4 ttl=63 time=1.01 ms
64 bytes from 192.168.1.115: icmp_seq=5 ttl=63 time=0.999 ms
64 bytes from 192.168.1.115: icmp_seq=6 ttl=63 time=0.809 ms
64 bytes from 192.168.1.115: icmp_seq=7 ttl=63 time=1.02 ms
^C

The original text of these documents was posted on fedoraproject.org by Kashyap.
   The attached ones are tuned for the new IPs and should no longer contain the typos of the original version. They also contain the MySQL preventive updates currently required for openstack-nova-compute and neutron-openvswitch-agent remote connections to the Controller Node to succeed. The MySQL stuff is mine. All attached *.conf and *.ini files have been updated for my network as well.
   In the meantime I am quite sure that using libvirt's default and non-default networks for creating the Controller and Compute nodes as F20 VMs is not important. The configs allow metadata to be sent from the Controller to the Compute node on real physical boxes. Just one Ethernet controller per box should be required when using GRE tunnelling with an RDO Havana on Fedora 20 manual setup.
 

References

  1. http://textuploader.com/1hin
  2. http://textuploader.com/1hey
  3. http://kashyapc.fedorapeople.org/virt/openstack/neutron-configs-GRE-OVS-two-node.txt
  4. http://bderzhavets.blogspot.com/2014/01/setting-up-two-physical-node-openstack.html


“Setting up Multi-Node OpenStack RDO Havana + Gluster Backend + Neutron VLAN” on CentOS 6.5 with both Controller and Compute nodes each one having two Ethernet adapters per Andrew Lau

December 28, 2013

Why CentOS 6.5? It has the libgfapi library back-ported ( http://www.gluster.org/2012/11/integration-with-kvmqemu/ ), which allows native Qemu to work directly with glusterfs 3.4.1 volumes: https://bugzilla.redhat.com/show_bug.cgi?id=848070 . View also http://rhn.redhat.com/errata/RHEA-2013-1859.html , in particular bug 956919 – Develop native qemu-gluster driver for Cinder. The general concept may be seen here: http://www.mirantis.com/blog/openstack-havana-glusterfs-and-what-improved-support-really-means . I am very thankful to Andrew Lau for his sample answer-file for setups of the kind "Controller + Compute Node + Compute Node ...". His "Howto" [1] is perfect, no matter that even having a box with 3 Ethernet adapters I was unable to reproduce his setup exactly. Later I realised that I just hadn't fixed the epel-*.repo files and decided to switch to another setup. Baseurl should be uncommented and mirrorlist, on the contrary, commented out; I believe it's a very personal issue. For some reason I had to install EPEL manually on CentOS 6.5: packstack failed on internet-enabled boxes, and the epel-*.repo files also required manual intervention to make packstack finally happy.

Differences :-

1. The RDO Controller and Compute nodes setup based on Andrew Lau's multi-node.packstack [1] is a bit different from the original.

No gluster volumes for cinder, nova or glance are created before the RDO packstack install, and there is no separate network like 172.16.0.0 for gluster cluster management;

just the original network 192.168.1.0/24, with internet access alive, is used in the RDO setup (the answer-file, pretty close to Andrew's, is attached).

2. Set up LBaaS :-

Edit /etc/neutron/neutron.conf and add the following in the [DEFAULT] section:

[DEFAULT]
service_plugins = neutron.services.loadbalancer.plugin.LoadBalancerPlugin

Already there

Then edit the /etc/openstack-dashboard/local_settings file and search for enable_lb and set it to true:

OPENSTACK_NEUTRON_NETWORK = {
    'enable_lb': True
}

Done

# vi /etc/neutron/lbaas_agent.ini   (already done, no changes needed)

device_driver=neutron.services.loadbalancer.drivers.haproxy.namespace_driver.HaproxyNSDriver
interface_driver=neutron.agent.linux.interface.OVSInterfaceDriver
user_group=haproxy

Comment out the line in the service_providers section:
service_provider=LOADBALANCER:Haproxy:neutron.services.loadbalancer.drivers.haproxy.plugin_driver.HaproxyOnHostPluginDriver:default

Nothing to remove

service neutron-lbaas-agent start   (already running; restarted)
chkconfig neutron-lbaas-agent on    (skipped)
service neutron-server restart      (done)
service httpd restart               (done)

All done.
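
A quick way to confirm the LBaaS agent really works after these changes is to create a test pool on one of the tenant subnets and check that it goes ACTIVE; a sketch with a placeholder subnet id:

# neutron lb-pool-create --name testpool --lb-method ROUND_ROBIN --protocol HTTP --subnet-id <SUBNET_ID>
# neutron lb-pool-list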

Haproxy is supposed to manage a landscape with several controllers: one of them is considered the frontend and the rest are backend servers providing the HA openstack services running on the controllers. It is a separate host. View :-

http://openstack.redhat.com/Load_Balance_OpenStack_API#HAProxy

In the current Controller+Compute setup there is no need for haproxy; otherwise a third host would be needed to load-balance openstack-nova-compute.

So the "yum install haproxy" in the LBaaS section of [1] is hard to understand.

3. At the end of the RDO install the br-ex bridge and the OVS port eth0 have been created.

4. Gluster volumes backing Nova, Glance and Cinder have been created after the RDO install. Havana is tuned for the cinder-volumes gluster backend after the RDO installation.

5. HA is implemented via keepalived per [1] after the RDO install, due to the interface changing to "br-ex" on the Master.

Initial repositories set up per [1]

# yum install -y  http://rdo.fedorapeople.org/openstack-havana/rdo-release-havana.rpm
# cd /etc/yum.repos.d/
# wget http://download.gluster.org/pub/gluster/glusterfs/LATEST/RHEL/glusterfs-epel.repo
# yum install -y openstack-packstack python-netaddr
# yum install -y glusterfs glusterfs-fuse glusterfs-server

In case packstack fails to install EPEL :-

[root@hv02 ~]# wget http://dl.fedoraproject.org/pub/epel/6/x86_64/epel-release-6-8.noarch.rpm
[root@hv02 ~]# wget http://rpms.famillecollet.com/enterprise/remi-release-6.rpm
[root@hv02 ~]# rpm -Uvh remi-release-6*.rpm epel-release-6*.rpm

[root@hv02 ~]# ls -1 /etc/yum.repos.d/epel* /etc/yum.repos.d/remi.repo
/etc/yum.repos.d/epel.repo
/etc/yum.repos.d/epel-testing.repo
/etc/yum.repos.d/remi.repo

In case of a subsequent packstack failure to resolve dependencies:
also update the epel*.repo files: uncomment baseurl and comment out mirrorlist.
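
The same repo tweak can be applied in one shot; a sketch, assuming the stock epel.repo / epel-testing.repo layout:

# sed -i -e 's|^#baseurl=|baseurl=|' -e 's|^mirrorlist=|#mirrorlist=|' /etc/yum.repos.d/epel*.repo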

System core setup

- Controller node: Nova, Keystone, Cinder, Glance, Neutron  (hv02)
- Compute node: Nova (nova-compute), Neutron (openvswitch-agent)  (hv01)

Service NetworkManager disabled, service network enabled, system rebooted before RDO installation

[root@hv02 ~]# packstack --answer-file=multi-node.packstack
Welcome to Installer setup utility
Packstack changed given value  to required value /root/.ssh/id_rsa.pub
Installing:
Clean Up…                                            [ DONE ]
Setting up ssh keys…                                 [ DONE ]
Discovering hosts’ details…                          [ DONE ]
Adding pre install manifest entries…                 [ DONE ]
Installing time synchronization via NTP…             [ DONE ]
Adding MySQL manifest entries…                       [ DONE ]
Adding QPID manifest entries…                        [ DONE ]
Adding Keystone manifest entries…                    [ DONE ]
Adding Glance Keystone manifest entries…             [ DONE ]
Adding Glance manifest entries…                      [ DONE ]
Installing dependencies for Cinder…                  [ DONE ]
Adding Cinder Keystone manifest entries…             [ DONE ]
Adding Cinder manifest entries…                      [ DONE ]
Adding Nova API manifest entries…                    [ DONE ]
Adding Nova Keystone manifest entries…               [ DONE ]
Adding Nova Cert manifest entries…                   [ DONE ]
Adding Nova Conductor manifest entries…              [ DONE ]
Adding Nova Compute manifest entries…                [ DONE ]
Adding Nova Scheduler manifest entries…              [ DONE ]
Adding Nova VNC Proxy manifest entries…              [ DONE ]
Adding Nova Common manifest entries…                 [ DONE ]
Adding Openstack Network-related Nova manifest entries…[ DONE ]
Adding Neutron API manifest entries…                 [ DONE ]
Adding Neutron Keystone manifest entries…            [ DONE ]
Adding Neutron L3 manifest entries…                  [ DONE ]
Adding Neutron L2 Agent manifest entries…            [ DONE ]
Adding Neutron DHCP Agent manifest entries…          [ DONE ]
Adding Neutron LBaaS Agent manifest entries…         [ DONE ]
Adding Neutron Metadata Agent manifest entries…      [ DONE ]
Adding OpenStack Client manifest entries…            [ DONE ]
Adding Horizon manifest entries…                     [ DONE ]
Adding Heat manifest entries…                        [ DONE ]
Adding Heat Keystone manifest entries…               [ DONE ]
Adding Ceilometer manifest entries…                  [ DONE ]
Adding Ceilometer Keystone manifest entries…         [ DONE ]
Adding post install manifest entries…                [ DONE ]
Preparing servers…                                   [ DONE ]
Installing Dependencies…                             [ DONE ]
Copying Puppet modules and manifests…                [ DONE ]
Applying Puppet manifests…
Applying 192.168.1.127_prescript.pp
Applying 192.168.1.137_prescript.pp
192.168.1.127_prescript.pp :               [ DONE ]
192.168.1.137_prescript.pp :               [ DONE ]
Applying 192.168.1.127_ntpd.pp
Applying 192.168.1.137_ntpd.pp
192.168.1.127_ntpd.pp :                         [ DONE ]
192.168.1.137_ntpd.pp :                         [ DONE ]
Applying 192.168.1.137_mysql.pp
Applying 192.168.1.137_qpid.pp
192.168.1.137_mysql.pp :                       [ DONE ]
192.168.1.137_qpid.pp :                         [ DONE ]
Applying 192.168.1.137_keystone.pp
Applying 192.168.1.137_glance.pp
Applying 192.168.1.137_cinder.pp
192.168.1.137_keystone.pp :                 [ DONE ]
192.168.1.137_glance.pp :                     [ DONE ]
192.168.1.137_cinder.pp :                     [ DONE ]
Applying 192.168.1.137_api_nova.pp
192.168.1.137_api_nova.pp :                 [ DONE ]
Applying 192.168.1.137_nova.pp
Applying 192.168.1.127_nova.pp
192.168.1.137_nova.pp :                         [ DONE ]
192.168.1.127_nova.pp :                         [ DONE ]
Applying 192.168.1.127_neutron.pp
Applying 192.168.1.137_neutron.pp
192.168.1.127_neutron.pp :                   [ DONE ]
192.168.1.137_neutron.pp :                   [ DONE ]
Applying 192.168.1.137_osclient.pp
Applying 192.168.1.137_horizon.pp
Applying 192.168.1.137_heat.pp
Applying 192.168.1.137_ceilometer.pp
192.168.1.137_osclient.pp :                 [ DONE ]
192.168.1.137_horizon.pp :                   [ DONE ]
192.168.1.137_heat.pp :                         [ DONE ]
192.168.1.137_ceilometer.pp :             [ DONE ]
Applying 192.168.1.127_postscript.pp
Applying 192.168.1.137_postscript.pp
192.168.1.127_postscript.pp :             [ DONE ]
192.168.1.137_postscript.pp :             [ DONE ]
[ DONE ]
Finalizing…                                          [ DONE ]

**** Installation completed successfully ******

Additional information:
* File /root/keystonerc_admin has been created on OpenStack client host 192.168.1.137. To use the command line tools you need to source the file.
* NOTE : A certificate was generated to be used for ssl, You should change the ssl certificate configured in /etc/httpd/conf.d/ssl.conf on 192.168.1.137 to use a CA signed cert.
* To access the OpenStack Dashboard browse to https://192.168.1.137/dashboard.
Please, find your login credentials stored in the keystonerc_admin in your home directory.
* The installation log file is available at: /var/tmp/packstack/20131226-230226-PzmL7R/openstack-setup.log
* The generated manifests are available at: /var/tmp/packstack/20131226-230226-PzmL7R/manifests

Services on Controller Node :-

Services on Compute Node :-

Post install configuration

On Controller :

[root@hv02 network-scripts(keystone_admin)]# cat ifcfg-br-ex

DEVICE="br-ex"
BOOTPROTO="static"
IPADDR="192.168.1.137"
NETMASK="255.255.255.0"
DNS1="83.221.202.254"
BROADCAST="192.168.1.255"
GATEWAY="192.168.1.1"
NM_CONTROLLED="no"
DEFROUTE="yes"
IPV4_FAILURE_FATAL="yes"
IPV6INIT=no
ONBOOT="yes"
TYPE="OVSBridge"
DEVICETYPE="ovs"

[root@hv02 network-scripts(keystone_admin)]# cat ifcfg-eth0

NAME="eth0"
HWADDR=90:E6:BA:2D:11:EB
ONBOOT="yes"
TYPE="OVSPort"
DEVICETYPE="ovs"
OVS_BRIDGE=br-ex
NM_CONTROLLED=no
IPV6INIT=no

Pre install configuration

[root@hv02 network-scripts(keystone_admin)]# cat ifcfg-eth1

DEVICE=eth1
HWADDR=00:0C:76:E0:1E:C5
TYPE=Ethernet
ONBOOT=yes
NM_CONTROLLED=no
BOOTPROTO=none

Post install configuration

[root@hv02 ~(keystone_admin)]# ovs-vsctl show
e059cd59-21c8-48f8-ad7c-b9e1de9a986b
    Bridge br-int
        Port "int-br-eth1"
            Interface "int-br-eth1"
        Port br-int
            Interface br-int
                type: internal
        Port "qvo5252ab82-49"
            tag: 1
            Interface "qvo5252ab82-49"
        Port "tape1849acb-66"
            tag: 1
            Interface "tape1849acb-66"
                type: internal
        Port "qr-9017c241-f3"
            tag: 1
            Interface "qr-9017c241-f3"
                type: internal
    Bridge br-ex
        Port br-ex
            Interface br-ex
                type: internal
        Port "eth0"
            Interface "eth0"
        Port "qg-14fcad42-83"
            Interface "qg-14fcad42-83"
                type: internal
    Bridge "br-eth1"
        Port "br-eth1"
            Interface "br-eth1"
                type: internal
        Port "eth1"
            Interface "eth1"
        Port "phy-br-eth1"
            Interface "phy-br-eth1"
    ovs_version: "1.11.0"

On Compute node :-

[root@hv01 network-scripts]# cat ifcfg-eth0

DEVICE=eth0
TYPE=Ethernet
UUID=e25e1975-50db-4421-ae39-676708d480db
ONBOOT=yes
NM_CONTROLLED=no
BOOTPROTO=none
IPADDR=192.168.1.127
PREFIX=24
GATEWAY=192.168.1.1
DNS1=83.221.202.254
DEFROUTE=yes
IPV4_FAILURE_FATAL=yes
IPV6INIT=no
NAME=”System eth0″
HWADDR=00:22:15:63:E4:E2
[root@hv01 network-scripts]# cat ifcfg-eth1

DEVICE=eth1
HWADDR=00:22:15:63:F9:9F
TYPE=Ethernet
ONBOOT=yes
NM_CONTROLLED=no
BOOTPROTO=none

GlusterFS replicated volumes for glance, nova and cinder-volumes are created after reboot.

At this point, implement HA via keepalived with /etc/keepalived/keepalived.conf on hv02 :

vrrp_instance VI_1 {
interface  br-ex
state MASTER
virtual_router_id 10
priority 100   # master 100
virtual_ipaddress {
192.168.1.134
}
}

and another one on hv01 :

vrrp_instance VI_1 {
interface eth0
state BACKUP
virtual_router_id 10
priority 99 # master 100
virtual_ipaddress {
192.168.1.134
}
}

I just followed [1], but the interface for MASTER is "br-ex".

Enable the "keepalived" service and reboot both boxes.
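
A short sketch of enabling keepalived and verifying that the virtual IP from the configs above comes up on the MASTER (the ip addr check is illustrative, not taken from the original log):

# chkconfig keepalived on
# service keepalived start
# ip addr show br-ex | grep 192.168.1.134

On hv02 (MASTER) the VIP 192.168.1.134 should appear on br-ex; on hv01 it only shows up after a failover.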

Tuning glance and nova per [1]  http://www.andrewklau.com/getting-started-with-multi-node-openstack-rdo-havana-gluster-backend-neutron/

Just in case, I reproduce the instructions from [1] :

# mkdir -p /mnt/gluster/{glance,nova} # On Controller
# mkdir -p /mnt/gluster/nova          # On Compute
# mount -t glusterfs 192.168.1.134:/nova2 /mnt/gluster/nova/
# mount -t glusterfs 192.168.1.134:/glance2 /mnt/gluster/glance/

Update /etc/glance/glance-api.conf  
    filesystem_store_datadir = /mnt/gluster/glance/images

# mkdir -p /mnt/gluster/glance/images
# chown -R glance:glance /mnt/gluster/glance/
# service openstack-glance-api restart
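
The same glance-api.conf change can also be applied with openstack-config, the helper already used for cinder later in this post (a sketch, same file and option as above):

# openstack-config --set /etc/glance/glance-api.conf DEFAULT filesystem_store_datadir /mnt/gluster/glance/images
# service openstack-glance-api restart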

For all Compute Nodes (you may have more than one, and also the Controller if openstack-nova-compute runs on it) :

# mkdir /mnt/gluster/nova/instance/
# chown -R nova:nova /mnt/gluster/nova/instance/

Update  /etc/nova/nova.conf
  instances_path = /mnt/gluster/nova/instance

# service openstack-nova-compute restart
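
Likewise, the nova.conf edit above can be scripted (a hedged equivalent of the manual change):

# openstack-config --set /etc/nova/nova.conf DEFAULT instances_path /mnt/gluster/nova/instance
# service openstack-nova-compute restart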

End of quoted instructions.

Post-installation creation of cinder-volumes :-

Configuring Cinder to Add GlusterFS

# gluster volume create cinder-volumes02  replica 2 hv01.localdomain:/data2/cinder hv02.localdomain:/data2/cinder

# gluster volume start cinder-volumes02

# gluster volume set cinder-volumes02  auth.allow 192.168.1.*

# openstack-config --set /etc/cinder/cinder.conf DEFAULT volume_driver cinder.volume.drivers.glusterfs.GlusterfsDriver

# openstack-config --set /etc/cinder/cinder.conf DEFAULT glusterfs_shares_config /etc/cinder/shares.conf

# openstack-config --set /etc/cinder/cinder.conf DEFAULT glusterfs_mount_point_base /var/lib/cinder/volumes

 # vi /etc/cinder/shares.conf

    192.168.1.134:cinder-volumes02

:wq

Update /etc/sysconfig/iptables (if it hasn't been done earlier) :-

-A INPUT -p tcp -m multiport --dport 24007:24047 -j ACCEPT

-A INPUT -p tcp --dport 111 -j ACCEPT

-A INPUT -p udp --dport 111 -j ACCEPT

-A INPUT -p tcp -m multiport --dport 38465:38485 -j ACCEPT

Comment Out

-A FORWARD -j REJECT --reject-with icmp-host-prohibited

-A INPUT -j REJECT --reject-with icmp-host-prohibited

# service iptables restart

Restarting the openstack-cinder-volume service mounts the GlusterFS volume :-

 # for i in api scheduler volume ; do service openstack-cinder-${i} restart ;done
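
After the restart, the share listed in /etc/cinder/shares.conf should be mounted automatically under glusterfs_mount_point_base; a quick hedged check:

# mount | grep cinder-volumes02
# grep -i gluster /var/log/cinder/volume.log | tail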

After RDO packstack has completed and the post-configuration tuning is done :-

On Controller :-

[root@hv02 ~(keystone_admin)]# df -h
Filesystem                       Size  Used Avail Use% Mounted on
/dev/mapper/vg_hv02-LogVol00     154G   16G  131G  11% /
tmpfs                            3.9G  232K  3.9G   1% /dev/shm
/dev/sdb1                        485M   70M  390M  16% /boot
/dev/mapper/vg_havana-lv_havana   98G  2.8G   95G   3% /data2
192.168.1.134:/glance2            98G  2.9G   95G   3% /mnt/gluster/glance2
192.168.1.134:/nova2              98G  2.9G   95G   3% /mnt/gluster/nova2
192.168.1.134:/cinder-volumes02   98G  2.9G   95G   3% /var/lib/cinder/volumes/77b8406d9f60712274c66a84844feb8a
192.168.1.134:/cinder-volumes02   98G  2.9G   95G   3% /var/lib/nova/mnt/77b8406d9f60712274c66a84844feb8a

[root@hv02 ~(keystone_admin)]# cat /etc/fstab

#
# /etc/fstab
# Created by anaconda on Sat Dec 28 10:47:59 2013
#
# Accessible filesystems, by reference, are maintained under '/dev/disk'
# See man pages fstab(5), findfs(8), mount(8) and/or blkid(8) for more info
#
/dev/mapper/vg_hv02-LogVol00 /                       ext4    defaults        1 1
UUID=0a7bffa6-d133-4cd6-bdaf-06a00af0b340 /boot    ext4    defaults  1 2

/dev/mapper/vg_hv02-LogVol01 swap                    swap    defaults        0 0
tmpfs                   /dev/shm               tmpfs   defaults        0 0
devpts                  /dev/pts                devpts  gid=5,mode=620  0 0
sysfs                   /sys                    sysfs   defaults        0 0
proc                    /proc                     proc    defaults        0 0
/dev/mapper/vg_havana-lv_havana    /data2  xfs     defaults        1 2
192.168.1.134:/glance2  /mnt/gluster/glance2  glusterfs defaults,_netdev 0 0
192.168.1.134:/nova2    /mnt/gluster/nova2     glusterfs defaults,_netdev

[root@hv02 ~(keystone_admin)]# gluster volume info nova2
Volume Name: nova2
Type: Replicate
Volume ID: 3a04a896-8080-4172-b3fb-c89c028c6944
Status: Started
Number of Bricks: 1 x 2 = 2
Transport-type: tcp
Bricks:
Brick1: hv01.localdomain:/data2/nova
Brick2: hv02.localdomain:/data2/nova
Options Reconfigured:
auth.allow: 192.168.1.*

[root@hv02 ~(keystone_admin)]# gluster volume info glance2
Volume Name: glance2
Type: Replicate
Volume ID: c7b31eaa-6dea-49c2-9d09-ec4dcd65c560
Status: Started
Number of Bricks: 1 x 2 = 2
Transport-type: tcp
Bricks:
Brick1: hv01.localdomain:/data2/glance
Brick2: hv02.localdomain:/data2/glance
Options Reconfigured:
auth.allow: 192.168.1.*

[root@hv02 ~(keystone_admin)]# gluster volume info cinder-volumes02
Volume Name: cinder-volumes02
Type: Replicate
Volume ID: 639e6afa-dc29-4fd7-8d3c-95f655383d1c
Status: Started
Number of Bricks: 1 x 2 = 2
Transport-type: tcp
Bricks:
Brick1: hv01.localdomain:/data2/cinder
Brick2: hv02.localdomain:/data2/cinder
Options Reconfigured:
auth.allow: 192.168.1.*

On Compute :-


[root@hv02 ~(keystone_admin)]# ssh hv01
Last login: Mon Dec 30 11:09:16 2013 from hv02

[root@hv01 ~]# df -h
Filesystem Size Used Avail Use% Mounted on
/dev/mapper/vg_hv01-LogVol00 154G 4.5G 142G 4% /
tmpfs 3.9G 84K 3.9G 1% /dev/shm
/dev/sdb1 485M 70M 390M 16% /boot
/dev/mapper/vg_havana-lv_havana 98G 3.1G 95G 4% /data2
192.168.1.134:/nova2 98G 3.1G 95G 4% /mnt/gluster/nova2
192.168.1.134:/cinder-volumes02 98G 3.1G 95G 4% /var/lib/nova/mnt/77b8406d9f60712274c66a84844feb8a

[root@hv01 ~]# cat /etc/fstab

#
# /etc/fstab
# Created by anaconda on Sat Dec 28 10:14:16 2013
#
# Accessible filesystems, by reference, are maintained under '/dev/disk'
# See man pages fstab(5), findfs(8), mount(8) and/or blkid(8) for more info
#
/dev/mapper/vg_hv01-LogVol00 /                       ext4    defaults        1 1
UUID=21afa600-9b18-4aea-bfb7-16b73eaee3de /boot                   ext4    defaults        1 2
/dev/mapper/vg_hv01-LogVol01       swap            swap    defaults        0 0
tmpfs                   /dev/shm             tmpfs   defaults        0 0
devpts                  /dev/pts               devpts  gid=5,mode=620  0 0
sysfs                   /sys                      sysfs   defaults        0 0
proc                    /proc                    proc    defaults        0 0
/dev/mapper/vg_havana-lv_havana    /data2  xfs     defaults        1 2
192.168.1.134:/nova2   /mnt/gluster/nova2  glusterfs defaults,_netdev 0 0

On Controller :-

[root@hv02 ~(keystone_admin)]# openstack-status

== Nova services ==

openstack-nova-api:                     active
openstack-nova-cert:                    active
openstack-nova-compute:                 active
openstack-nova-network:                 dead      (disabled on boot)
openstack-nova-scheduler:               active
openstack-nova-conductor:               active

== Glance services ==

openstack-glance-api:                   active
openstack-glance-registry:              active

== Keystone service ==

openstack-keystone:                     active

== Horizon service ==

openstack-dashboard:                    000
== neutron services ==
neutron-server:                         active
neutron-dhcp-agent:                     active
neutron-l3-agent:                       active
neutron-metadata-agent:                 active
neutron-lbaas-agent:                    active
neutron-openvswitch-agent:              active

== Cinder services ==

openstack-cinder-api:                   active
openstack-cinder-scheduler:             active
openstack-cinder-volume:                active

== Ceilometer services ==

openstack-ceilometer-api:               active
openstack-ceilometer-central:           active
openstack-ceilometer-compute:           active
openstack-ceilometer-collector:         active
openstack-ceilometer-alarm-notifier:    active
openstack-ceilometer-alarm-evaluator:   active

== Heat services ==

openstack-heat-api:                     active
openstack-heat-api-cfn:                 dead      (disabled on boot)
openstack-heat-api-cloudwatch:          dead      (disabled on boot)
openstack-heat-engine:                  active

== Support services ==

mysqld:                                 active
libvirtd:                               active
openvswitch:                            active
messagebus:                             active
tgtd:                                   active
qpidd:                                  active
memcached:                              active

== Keystone users ==

+———————————-+————+———+———————-+
|                id                |    name    | enabled |        email         |
+———————————-+————+———+———————-+
| 0b6cc1c84d194a4fbf6be1cd3343167e |   admin    |   True  |    test@test.com     |
| 1415f2952fc34b419abc8a0d75130e30 | ceilometer |   True  | ceilometer@localhost |
| d77e11979821441da8157103011cae5a |   cinder   |   True  |   cinder@localhost   |
| 2860d02458904f9aa0f89afed6bcc423 |   glance   |   True  |   glance@localhost   |
| 78a8beeeb277493e96feae3127ea0607 |    heat    |   True  |    heat@localhost    |
| 002a2b8fcbfb47a1a588e74e51cb1f3a |  neutron   |   True  |  neutron@localhost   |
| 1b558e148aff4f618120f0f7f547f064 |    nova    |   True  |    nova@localhost    |
+———————————-+————+———+———————-+

== Glance images ==

+————————————–+—————–+————-+——————+———–+——–+
| ID                                   | Name            | Disk Format | Container Format | Size      | Status |
+————————————–+—————–+————-+——————+———–+——–+
| 02ef79b4-081b-4966-8b11-10492449fba5 | f19image        | qcow2       | bare             | 237371392 | active |
| 6eb9e748-5786-4072-b2cf-4c2a91da2bf3 | Ubuntu1310image | qcow2       | bare             | 243728384 | active |
+————————————–+—————–+————-+——————+———–+——–+

== Nova managed services ==

+——————+——————+———-+———+——-+—————————-+—————–+
| Binary           | Host             | Zone     | Status  | State | Updated_at                 | Disabled Reason |
+——————+——————+———-+———+——-+—————————-+—————–+
| nova-consoleauth | hv02.localdomain | internal | enabled | up    | 2013-12-28T11:06:32.000000 | None            |
| nova-scheduler   | hv02.localdomain | internal | enabled | up    | 2013-12-28T11:06:32.000000 | None            |
| nova-conductor   | hv02.localdomain | internal | enabled | up    | 2013-12-28T11:06:35.000000 | None            |
| nova-cert        | hv02.localdomain | internal | enabled | up    | 2013-12-28T11:06:32.000000 | None            |
| nova-compute     | hv02.localdomain | nova     | enabled | up    | 2013-12-28T11:06:33.000000 | None            |
| nova-compute     | hv01.localdomain | nova     | enabled | up    | 2013-12-28T11:06:32.000000 | None            |

+——————+——————+———-+———+——-+—————————-+—————–+

== Nova networks ==

+————————————–+———+——+
| ID                                   | Label   | Cidr |
+————————————–+———+——+
| 56456fcb-8696-4e63-894e-635681c911e4 | private | None |
| d4e83ac8-c257-4fee-a551-5d711087c238 | public  | None |
+————————————–+———+——+

== Nova instance flavors ==

+—-+———–+———–+——+———–+——+——-+————-+———–+
| ID | Name      | Memory_MB | Disk | Ephemeral | Swap | VCPUs | RXTX_Factor | Is_Public |
+—-+———–+———–+——+———–+——+——-+————-+———–+
| 1  | m1.tiny   | 512       | 1    | 0         |      | 1     | 1.0         | True      |
| 2  | m1.small  | 2048      | 20   | 0         |      | 1     | 1.0         | True      |
| 3  | m1.medium | 4096      | 40   | 0         |      | 2     | 1.0         | True      |
| 4  | m1.large  | 8192      | 80   | 0         |      | 4     | 1.0         | True      |
| 5  | m1.xlarge | 16384     | 160  | 0         |      | 8     | 1.0         | True      |
+—-+———–+———–+——+———–+——+——-+————-+———–+

== Nova instances ==

+————————————–+——————+——–+————+————-+——————————–+
| ID                                   | Name             | Status | Task State | Power State | Networks                       |
+————————————–+——————+——–+————+————-+——————————–+
| 7a9da01f-499c-4d27-9b7a-1b1307b767a8 | UbuntuSalamander | ACTIVE | None       | Running     | private=10.0.0.4, 192.168.1.60 |
| 4db2876c-cedd-4d2b-853c-e156bcb20592 | VF19RS1          | ACTIVE | None       | Running     | private=10.0.0.2, 192.168.1.59 |
+————————————–+——————+——–+————+————-+——————————–|

Detailed info about both instances

 [root@hv02 ~(keystone_admin)]# nova show 7a9da01f-499c-4d27-9b7a-1b1307b767a8

+————————————–+———————————————————-+
| Property                             | Value                                                    |
+————————————–+———————————————————-+
| status                               | ACTIVE                                                   |
| updated                              | 2013-12-28T10:43:53Z                                     |
| OS-EXT-STS:task_state                | None                                                     |
| OS-EXT-SRV-ATTR:host                 | hv02.localdomain                                         |
| key_name                             | key2                                                     |
| image                                | Attempt to boot from volume – no image supplied          |
| private network                      | 10.0.0.4, 192.168.1.60                                   |
| hostId                               | 2d47a35fc92addd418ba8dd8df73233732a0e880b2e4e1ffac907091 |
| OS-EXT-STS:vm_state                  | active                                                   |
| OS-EXT-SRV-ATTR:instance_name        | instance-00000002                                        |
| OS-SRV-USG:launched_at               | 2013-12-28T10:43:53.000000                               |
| OS-EXT-SRV-ATTR:hypervisor_hostname  | hv02.localdomain                                         |
| flavor                               | m1.small (2)                                             |
| id                                   | 7a9da01f-499c-4d27-9b7a-1b1307b767a8                     |
| security_groups                      | [{u'name': u'default'}]                                  |
| OS-SRV-USG:terminated_at             | None                                                     |
| user_id                              | 0b6cc1c84d194a4fbf6be1cd3343167e                         |
| name                                 | UbuntuSalamander                                         |
| created                              | 2013-12-28T10:43:40Z                                     |
| tenant_id                            | dc2ec9f2a8404c22b46566f567bebc49                         |
| OS-DCF:diskConfig                    | MANUAL                                                   |
| metadata                             | {}                                                       |
| os-extended-volumes:volumes_attached | [{u'id': u'eaf06b2e-23d0-4a65-bbba-6d464f6c0441'}]       |
| accessIPv4                           |                                                          |
| accessIPv6                           |                                                          |
| progress                             | 0                                                        |
| OS-EXT-STS:power_state               | 1                                                        |
| OS-EXT-AZ:availability_zone          | nova                                                     |
| config_drive                         |                                                          |
+————————————–+———————————————————-+

[root@hv02 ~(keystone_admin)]# nova show 4db2876c-cedd-4d2b-853c-e156bcb20592

+————————————–+———————————————————-+
| Property                             | Value                                                    |
+————————————–+———————————————————-+
| status                               | ACTIVE                                                   |
| updated                              | 2013-12-28T10:20:31Z                                     |
| OS-EXT-STS:task_state                | None                                                     |
| OS-EXT-SRV-ATTR:host                 | hv01.localdomain                                         |
| key_name                             | key2                                                     |
| image                                | Attempt to boot from volume – no image supplied          |
| private network                      | 10.0.0.2, 192.168.1.59                                   |
| hostId                               | fc6ed5fd7d8a2f3c510671ff8485af9e340d4244246eb0aff55f1a0d |
| OS-EXT-STS:vm_state                  | active                                                   |
| OS-EXT-SRV-ATTR:instance_name        | instance-00000001                                        |
| OS-SRV-USG:launched_at               | 2013-12-28T10:20:31.000000                               |
| OS-EXT-SRV-ATTR:hypervisor_hostname  | hv01.localdomain                                         |
| flavor                               | m1.small (2)                                             |
| id                                   | 4db2876c-cedd-4d2b-853c-e156bcb20592                     |
| security_groups                      | [{u'name': u'default'}]                                  |
| OS-SRV-USG:terminated_at             | None                                                     |
| user_id                              | 0b6cc1c84d194a4fbf6be1cd3343167e                         |
| name                                 | VF19RS1                                                  |
| created                              | 2013-12-28T10:20:22Z                                     |
| tenant_id                            | dc2ec9f2a8404c22b46566f567bebc49                         |
| OS-DCF:diskConfig                    | MANUAL                                                   |
| metadata                             | {}                                                       |
| os-extended-volumes:volumes_attached | [{u'id': u'c1ebdd6c-2be0-451e-b3ba-b93cbc5b506b'}]       |
| accessIPv4                           |                                                          |
| accessIPv6                           |                                                          |
| progress                             | 0                                                        |
| OS-EXT-STS:power_state               | 1                                                        |
| OS-EXT-AZ:availability_zone          | nova                                                     |
| config_drive                         |                                                          |
+————————————–+———————————————————-+

  Testing Windows 2012 Server evaluation cloud instance :-

[root@hv02 Downloads(keystone_admin)]# gunzip -cd windows_server_2012_r2_standard_eval_kvm_20131117.qcow2.gz | glance image-create --property hypervisor_type=kvm  --name "Windows Server 2012 R2 Std Eval" --container-format bare --disk-format vhd
+—————————-+————————————–+
| Property                   | Value                                |
+—————————-+————————————–+
| Property ‘hypervisor_type’ | kvm                                  |
| checksum                   | 83c08f00b784e551a79ac73348b47360     |
| container_format           | bare                                 |
| created_at                 | 2014-01-09T13:27:24                  |
| deleted                    | False                                |
| deleted_at                 | None                                 |
| disk_format                | vhd                                  |
| id                         | d55b81c5-2370-4d3e-8cb1-323e7a8fa9da |
| is_public                  | False                                |
| min_disk                   | 0                                    |
| min_ram                    | 0                                    |
| name                       | Windows Server 2012 R2 Std Eval      |
| owner                      | dc2ec9f2a8404c22b46566f567bebc49     |
| protected                  | False                                |
| size                       | 17182752768                          |
| status                     | active                               |
| updated_at                 | 2014-01-09T13:52:18                  |
+—————————-+————————————–+

[root@hv02 Downloads(keystone_admin)]# nova image-list
+————————————–+———————————+——–+——–+
| ID                                   | Name                            | Status | Server |
+————————————–+———————————+——–+——–+
| 6bb391f6-f330-406a-95eb-a12fd3db93d5 | UbuntuSalamanderImage           | ACTIVE |        |
| d55b81c5-2370-4d3e-8cb1-323e7a8fa9da | Windows Server 2012 R2 Std Eval | ACTIVE |        |
| c8265abc-5499-414d-94c3-0376cd652281 | fedora19image                   | ACTIVE |        |
| 545aa5a8-b3b8-4fbd-9c86-c523d7790b49 | fedora20image                   | ACTIVE |        |
+————————————–+———————————+——–+——–+

[root@hv02 Downloads(keystone_admin)]# cinder create --image-id d55b81c5-2370-4d3e-8cb1-323e7a8fa9da --display_name Windows2012LVG 20
+———————+————————————–+
|       Property      |                Value                 |
+———————+————————————–+
|     attachments     |                  []                  |
|  availability_zone  |                 nova                 |
|       bootable      |                false                 |
|      created_at     |      2014-01-09T13:58:49.761145      |
| display_description |                 None                 |
|     display_name    |            Windows2012LVG            |
|          id         | fb78c942-1cf7-4f8c-b264-1a3997d03eef |
|       image_id      | d55b81c5-2370-4d3e-8cb1-323e7a8fa9da |
|       metadata      |                  {}                  |
|         size        |                  20                  |
|     snapshot_id     |                 None                 |
|     source_volid    |                 None                 |
|        status       |               creating               |
|     volume_type     |                 None                 |
+———————+————————————–+

[root@hv02 f6b9512bf949cf4bedd1cd604742797e(keystone_admin)]# ls -lah
total 8.5G
drwxr-xr-x. 3 root   root    173 Jan  9 17:58 .
drwxr-xr-x. 6 cinder cinder 4.0K Jan  8 14:12 ..
-rw-rw-rw-. 1 root   root    12G Jan  9 14:56 volume-1ef5e77f-3ac2-42ab-97e6-ebb04a872461
-rw-rw-rw-. 1 root   root    10G Jan  8 22:52 volume-42671dcc-3295-4d9c-a040-6ff031277b73
-rw-rw-rw-. 1 root   root    20G Jan  9 17:58 volume-fb78c942-1cf7-4f8c-b264-1a3997d03eef

[root@hv02 f6b9512bf949cf4bedd1cd604742797e(keystone_admin)]# cinder list
+————————————–+————-+———————+——+————-+———-+————————————–+
|                  ID                  |    Status   |     Display Name    | Size | Volume Type | Bootable |             Attached to              |
+————————————–+————-+———————+——+————-+———-+————————————–+
| 1ef5e77f-3ac2-42ab-97e6-ebb04a872461 |    in-use   |       VF19VLG2      |  12  | performance |   true   | 6b40285c-ce03-4194-b247-013c6f11ff42 |
| 42671dcc-3295-4d9c-a040-6ff031277b73 |    in-use   | UbuntuSalamanderVLG |  10  | performance |   true   | ebd3063e-00c7-4ea8-aed4-63919ebddb42 |
| fb78c942-1cf7-4f8c-b264-1a3997d03eef | downloading |    Windows2012LVG   |  20  |     None    |  false   |                                      |
+————————————–+————-+———————+——+————-+———-+————————————–+
[root@hv02 f6b9512bf949cf4bedd1cd604742797e(keystone_admin)]# cinder list
+————————————–+——–+———————+——+————-+———-+————————————–+
|                  ID                  | Status |     Display Name    | Size | Volume Type | Bootable |             Attached to              |
+————————————–+——–+———————+——+————-+———-+————————————–+
| 1ef5e77f-3ac2-42ab-97e6-ebb04a872461 | in-use |       VF19VLG2      |  12  | performance |   true   | 6b40285c-ce03-4194-b247-013c6f11ff42 |
| 42671dcc-3295-4d9c-a040-6ff031277b73 | in-use | UbuntuSalamanderVLG |  10  | performance |   true   | ebd3063e-00c7-4ea8-aed4-63919ebddb42 |
| fb78c942-1cf7-4f8c-b264-1a3997d03eef | in-use |    Windows2012LVG   |  20  |     None    |   true   | 2950e393-eb37-4991-9e16-fa7ca24b678a |
+————————————–+——–+———————+——+————-+———-+————————————–+

[root@hv02 f6b9512bf949cf4bedd1cd604742797e(keystone_admin)]# nova list

+————————————–+——————+———–+————+————-+——————————–+
| ID                                   | Name             | Status    | Task State | Power State | Networks                       |
+————————————–+——————+———–+————+————-+——————————–+
| ebd3063e-00c7-4ea8-aed4-63919ebddb42 | UbuntuSalamander | SUSPENDED | None       | Shutdown    | private=10.0.0.4, 192.168.1.60 |
| 6b40285c-ce03-4194-b247-013c6f11ff42 | VF19RS2          | SUSPENDED | None       | Shutdown    | private=10.0.0.2, 192.168.1.59 |
| 2950e393-eb37-4991-9e16-fa7ca24b678a | Win2012SRV       | ACTIVE    | None       | Running     | private=10.0.0.5, 192.168.1.61 |
+————————————–+——————+———–+————+————-+——————————–+
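
The exact command used to boot Win2012SRV is not shown in the post; a minimal sketch, assuming the instance was booted from the Windows2012LVG volume above (flavor 2 and key2 follow the nova show output below; the net-id placeholder is hypothetical and would be taken from neutron net-list):

# nova boot --flavor 2 --key-name key2 \
     --block-device-mapping vda=fb78c942-1cf7-4f8c-b264-1a3997d03eef:::0 \
     --nic net-id=<private-network-id> Win2012SRV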

[root@hv02 f6b9512bf949cf4bedd1cd604742797e(keystone_admin)]# nova show  2950e393-eb37-4991-9e16-fa7ca24b678a

+————————————–+———————————————————-+
| Property                             | Value                                                    |
+————————————–+———————————————————-+
| status                               | ACTIVE                                                   |
| updated                              | 2014-01-09T19:37:09Z                           |
| OS-EXT-STS:task_state                | None                                             |
| OS-EXT-SRV-ATTR:host                 | hv01.localdomain                                         |
| key_name                             | key2                                                     |
| image                                | Attempt to boot from volume – no image supplied          |
| private network                      | 10.0.0.5, 192.168.1.61                                   |
| hostId                               | fc6ed5fd7d8a2f3c510671ff8485af9e340d4244246eb0aff55f1a0d |
| OS-EXT-STS:vm_state                  | active                                           |
| OS-EXT-SRV-ATTR:instance_name        | instance-00000013                                        |
| OS-SRV-USG:launched_at               | 2014-01-09T14:26:34.000000                               |
| OS-EXT-SRV-ATTR:hypervisor_hostname  | hv01.localdomain                                         |
| flavor                               | m1.small (2)                                             |
| id                                   | 2950e393-eb37-4991-9e16-fa7ca24b678a                     |
| security_groups                      | [{u'name': u'default'}]                         |
| OS-SRV-USG:terminated_at             | None                                                     |
| user_id                              | 0b6cc1c84d194a4fbf6be1cd3343167e                         |
| name                                 | Win2012SRV                                           |
| created                              | 2014-01-09T14:26:24Z                             |
| tenant_id                            | dc2ec9f2a8404c22b46566f567bebc49                         |
| OS-DCF:diskConfig                    | MANUAL                                                   |
| metadata                             | {}                                                       |
| os-extended-volumes:volumes_attached | [{u'id': u'fb78c942-1cf7-4f8c-b264-1a3997d03eef'}]       |
| accessIPv4                           |                                                          |
| accessIPv6                           |                                                          |
| progress                             | 0                                                        |
| OS-EXT-STS:power_state               | 1                                                        |
| OS-EXT-AZ:availability_zone          | nova                                                     |
| config_drive                         |                                                          |
+————————————–+———————————————————-+

System info :-

REFERENCES.

1. http://www.andrewklau.com/getting-started-with-multi-node-openstack-rdo-havana-gluster-backend-neutron/
2. http://openstack.redhat.com/forum/discussion/607/havana-mutlinode-with-neutron

Answer file :

[general]

# Path to a Public key to install on servers. If a usable key has not

# been installed on the remote servers the user will be prompted for a

# password and this key will be installed so the password will not be

# required again

CONFIG_SSH_KEY=

# Set to ‘y’ if you would like Packstack to install MySQL

CONFIG_MYSQL_INSTALL=y

# Set to ‘y’ if you would like Packstack to install OpenStack Image

# Service (Glance)

CONFIG_GLANCE_INSTALL=y

# Set to ‘y’ if you would like Packstack to install OpenStack Block

# Storage (Cinder)

CONFIG_CINDER_INSTALL=y

# Set to ‘y’ if you would like Packstack to install OpenStack Compute

# (Nova)

CONFIG_NOVA_INSTALL=y

# Set to ‘y’ if you would like Packstack to install OpenStack

# Networking (Neutron)

CONFIG_NEUTRON_INSTALL=y

# Set to ‘y’ if you would like Packstack to install OpenStack

# Dashboard (Horizon)

CONFIG_HORIZON_INSTALL=y

# Set to ‘y’ if you would like Packstack to install OpenStack Object

# Storage (Swift)

CONFIG_SWIFT_INSTALL=n

# Set to ‘y’ if you would like Packstack to install OpenStack

# Metering (Ceilometer)

CONFIG_CEILOMETER_INSTALL=y

# Set to ‘y’ if you would like Packstack to install Heat

CONFIG_HEAT_INSTALL=y

# Set to ‘y’ if you would like Packstack to install the OpenStack

# Client packages. An admin “rc” file will also be installed

CONFIG_CLIENT_INSTALL=y

# Comma separated list of NTP servers. Leave plain if Packstack

# should not install ntpd on instances.

CONFIG_NTP_SERVERS=0.au.pool.ntp.org,1.au.pool.ntp.org,2.au.pool.ntp.org,3.au.pool.ntp.org

# Set to ‘y’ if you would like Packstack to install Nagios to monitor

# openstack hosts

CONFIG_NAGIOS_INSTALL=n

# Comma separated list of servers to be excluded from installation in

# case you are running Packstack the second time with the same answer

# file and don’t want Packstack to touch these servers. Leave plain if

# you don’t need to exclude any server.

EXCLUDE_SERVERS=

# The IP address of the server on which to install MySQL

CONFIG_MYSQL_HOST=192.168.1.137

# Username for the MySQL admin user

CONFIG_MYSQL_USER=root

# Password for the MySQL admin user

CONFIG_MYSQL_PW=1279e9bb292c48e5

# The IP address of the server on which to install the QPID service

CONFIG_QPID_HOST=192.168.1.137

CONFIG_QPID_ENABLE_SSL=n

CONFIG_QPID_ENABLE_AUTH=n

CONFIG_NEUTRON_LBAAS_HOSTS=192.168.1.137,192.168.1.127

CONFIG_RH_USER=n

CONFIG_RH_PW=n

CONFIG_RH_BETA_REPO=n

CONFIG_SATELLITE_URL=n

CONFIG_SATELLITE_USER=n

CONFIG_SATELLITE_PW=n

CONFIG_SATELLITE_AKEY=n

CONFIG_SATELLITE_CACERT=n

CONFIG_SATELLITE_PROFILE=n

CONFIG_SATELLITE_FLAGS=novirtinfo

CONFIG_SATELLITE_PROXY=n

CONFIG_SATELLITE_PROXY_USER=n

CONFIG_SATELLITE_PROXY_PW=n

# The IP address of the server on which to install Keystone

CONFIG_KEYSTONE_HOST=192.168.1.137

# The password to use for the Keystone to access DB

CONFIG_KEYSTONE_DB_PW=6cde8da7a3ca4bc0

# The token to use for the Keystone service api

CONFIG_KEYSTONE_ADMIN_TOKEN=c9a7f68c19e448b48c9f520df5771851

# The password to use for the Keystone admin user

CONFIG_KEYSTONE_ADMIN_PW=6fa29c9cb0264385

# The password to use for the Keystone demo user

CONFIG_KEYSTONE_DEMO_PW=6dc04587dd234ac9

# Kestone token format. Use either UUID or PKI

CONFIG_KEYSTONE_TOKEN_FORMAT=PKI

# The IP address of the server on which to install Glance

CONFIG_GLANCE_HOST=192.168.1.137

# The password to use for the Glance to access DB

CONFIG_GLANCE_DB_PW=1c135a665b70481d

# The password to use for the Glance to authenticate with Keystone

CONFIG_GLANCE_KS_PW=9c32f5a3bfb54966

# The IP address of the server on which to install Cinder

CONFIG_CINDER_HOST=192.168.1.137

# The password to use for the Cinder to access DB

CONFIG_CINDER_DB_PW=d9e997c7f6ec4f3b

# The password to use for the Cinder to authenticate with Keystone

CONFIG_CINDER_KS_PW=ae0e15732c104989

# The Cinder backend to use, valid options are: lvm, gluster, nfs

CONFIG_CINDER_BACKEND=lvm

# Create Cinder’s volumes group. This should only be done for testing

# on a proof-of-concept installation of Cinder.  This will create a

# file-backed volume group and is not suitable for production usage.

CONFIG_CINDER_VOLUMES_CREATE=y

# Cinder’s volumes group size. Note that actual volume size will be

# extended with 3% more space for VG metadata.

CONFIG_CINDER_VOLUMES_SIZE=20G

# A single or comma separated list of gluster volume shares to mount,

# eg: ip-address:/vol-name

# CONFIG_CINDER_GLUSTER_MOUNTS=192.168.1.137:/CINDER-VOLUMES

# A single or comma seprated list of NFS exports to mount, eg: ip-

# address:/export-name

CONFIG_CINDER_NFS_MOUNTS=

# The IP address of the server on which to install the Nova API

# service

CONFIG_NOVA_API_HOST=192.168.1.137

# The IP address of the server on which to install the Nova Cert

# service

CONFIG_NOVA_CERT_HOST=192.168.1.137

# The IP address of the server on which to install the Nova VNC proxy

CONFIG_NOVA_VNCPROXY_HOST=192.168.1.137

# A comma separated list of IP addresses on which to install the Nova

# Compute services

CONFIG_NOVA_COMPUTE_HOSTS=192.168.1.137,192.168.1.127

# The IP address of the server on which to install the Nova Conductor

# service

CONFIG_NOVA_CONDUCTOR_HOST=192.168.1.137

# The password to use for the Nova to access DB

CONFIG_NOVA_DB_PW=34bf4442200c4c93

# The password to use for the Nova to authenticate with Keystone

CONFIG_NOVA_KS_PW=beaf384bc2b941ca

# The IP address of the server on which to install the Nova Scheduler

# service

CONFIG_NOVA_SCHED_HOST=192.168.1.137

# The overcommitment ratio for virtual to physical CPUs. Set to 1.0

# to disable CPU overcommitment

CONFIG_NOVA_SCHED_CPU_ALLOC_RATIO=32.0

# The overcommitment ratio for virtual to physical RAM. Set to 1.0 to

# disable RAM overcommitment

CONFIG_NOVA_SCHED_RAM_ALLOC_RATIO=3.0

# Private interface for Flat DHCP on the Nova compute servers

CONFIG_NOVA_COMPUTE_PRIVIF=eth1

# The list of IP addresses of the server on which to install the Nova

# Network service

CONFIG_NOVA_NETWORK_HOSTS=192.168.1.137

# Nova network manager

CONFIG_NOVA_NETWORK_MANAGER=nova.network.manager.FlatDHCPManager

# Public interface on the Nova network server

CONFIG_NOVA_NETWORK_PUBIF=eth0

# Private interface for network manager on the Nova network server

CONFIG_NOVA_NETWORK_PRIVIF=eth1

# IP Range for network manager

CONFIG_NOVA_NETWORK_FIXEDRANGE=192.168.32.0/22

# IP Range for Floating IP’s

CONFIG_NOVA_NETWORK_FLOATRANGE=10.3.4.0/22

# Name of the default floating pool to which the specified floating

# ranges are added to

CONFIG_NOVA_NETWORK_DEFAULTFLOATINGPOOL=nova

# Automatically assign a floating IP to new instances

CONFIG_NOVA_NETWORK_AUTOASSIGNFLOATINGIP=n

# First VLAN for private networks

CONFIG_NOVA_NETWORK_VLAN_START=100

# Number of networks to support

CONFIG_NOVA_NETWORK_NUMBER=1

# Number of addresses in each private subnet

CONFIG_NOVA_NETWORK_SIZE=255

# The IP addresses of the server on which to install the Neutron

# server

CONFIG_NEUTRON_SERVER_HOST=192.168.1.137

# The password to use for Neutron to authenticate with Keystone

CONFIG_NEUTRON_KS_PW=53d71f31745b431e

# The password to use for Neutron to access DB

CONFIG_NEUTRON_DB_PW=ab7d7088075b4727

# A comma separated list of IP addresses on which to install Neutron

# L3 agent

CONFIG_NEUTRON_L3_HOSTS=192.168.1.137

# The name of the bridge that the Neutron L3 agent will use for

# external traffic, or ‘provider’ if using provider networks

CONFIG_NEUTRON_L3_EXT_BRIDGE=br-ex

# A comma separated list of IP addresses on which to install Neutron

# DHCP agent

CONFIG_NEUTRON_DHCP_HOSTS=192.168.1.137

# The name of the L2 plugin to be used with Neutron

CONFIG_NEUTRON_L2_PLUGIN=openvswitch

# A comma separated list of IP addresses on which to install Neutron

# metadata agent

CONFIG_NEUTRON_METADATA_HOSTS=192.168.1.137

# A comma separated list of IP addresses on which to install Neutron

# metadata agent

CONFIG_NEUTRON_METADATA_PW=d7ae6de0e6ef4d5e

# The type of network to allocate for tenant networks

CONFIG_NEUTRON_LB_TENANT_NETWORK_TYPE=local

# A comma separated list of VLAN ranges for the Neutron linuxbridge

# plugin

CONFIG_NEUTRON_LB_VLAN_RANGES=

# A comma separated list of interface mappings for the Neutron

# linuxbridge plugin

CONFIG_NEUTRON_LB_INTERFACE_MAPPINGS=

# Type of network to allocate for tenant networks

CONFIG_NEUTRON_OVS_TENANT_NETWORK_TYPE=vlan

# A comma separated list of VLAN ranges for the Neutron openvswitch

# plugin

CONFIG_NEUTRON_OVS_VLAN_RANGES=physnet1:10:20

# A comma separated list of bridge mappings for the Neutron

# openvswitch plugin

CONFIG_NEUTRON_OVS_BRIDGE_MAPPINGS=physnet1:br-eth1

# A comma separated list of colon-separated OVS bridge:interface

# pairs. The interface will be added to the associated bridge.

CONFIG_NEUTRON_OVS_BRIDGE_IFACES=br-eth1:eth1

# A comma separated list of tunnel ranges for the Neutron openvswitch

# plugin

CONFIG_NEUTRON_OVS_TUNNEL_RANGES=

# Override the IP used for GRE tunnels on this hypervisor to the IP

# found on the specified interface (defaults to the HOST IP)

CONFIG_NEUTRON_OVS_TUNNEL_IF=

# The IP address of the server on which to install the OpenStack

# client packages. An admin “rc” file will also be installed

CONFIG_OSCLIENT_HOST=192.168.1.137

# The IP address of the server on which to install Horizon

CONFIG_HORIZON_HOST=192.168.1.137

# To set up Horizon communication over https set this to “y”

CONFIG_HORIZON_SSL=y

# PEM encoded certificate to be used for ssl on the https server,

# leave blank if one should be generated, this certificate should not

# require a passphrase

CONFIG_SSL_CERT=

# Keyfile corresponding to the certificate if one was entered

CONFIG_SSL_KEY=

# The IP address on which to install the Swift proxy service

# (currently only single proxy is supported)

CONFIG_SWIFT_PROXY_HOSTS=192.168.1.137

# The password to use for the Swift to authenticate with Keystone

CONFIG_SWIFT_KS_PW=311d3891e9e140b9

# A comma separated list of IP addresses on which to install the

# Swift Storage services, each entry should take the format

# [/dev], for example 127.0.0.1/vdb will install /dev/vdb
# on 127.0.0.1 as a swift storage device(packstack does not create the
# filesystem, you must do this first), if /dev is omitted Packstack
# will create a loopback device for a test setup
CONFIG_SWIFT_STORAGE_HOSTS=192.168.1.137
# Number of swift storage zones, this number MUST be no bigger than
# the number of storage devices configured
CONFIG_SWIFT_STORAGE_ZONES=1
# Number of swift storage replicas, this number MUST be no bigger
# than the number of storage zones configured
CONFIG_SWIFT_STORAGE_REPLICAS=1
# FileSystem type for storage nodes
CONFIG_SWIFT_STORAGE_FSTYPE=ext4
# Whether to provision for demo usage and testing
CONFIG_PROVISION_DEMO=n
# The CIDR network address for the floating IP subnet
CONFIG_PROVISION_DEMO_FLOATRANGE=172.24.4.224/28
# Whether to configure tempest for testing
CONFIG_PROVISION_TEMPEST=n
# The uri of the tempest git repository to use
CONFIG_PROVISION_TEMPEST_REPO_URI=https://github.com/openstack/tempest.git
# The revision of the tempest git repository to use
CONFIG_PROVISION_TEMPEST_REPO_REVISION=master
# Whether to configure the ovs external bridge in an all-in-one
# deployment
CONFIG_PROVISION_ALL_IN_ONE_OVS_BRIDGE=n
# The IP address of the server on which to install Heat service
CONFIG_HEAT_HOST=192.168.1.137
# The password used by Heat user to authenticate against MySQL
CONFIG_HEAT_DB_PW=0f593f0e8ac94b20
# The password to use for the Heat to authenticate with Keystone
CONFIG_HEAT_KS_PW=22a4dee89e0e490b
# Set to ‘y’ if you would like Packstack to install Heat CloudWatch
# API
CONFIG_HEAT_CLOUDWATCH_INSTALL=n
# Set to ‘y’ if you would like Packstack to install Heat
# CloudFormation API
CONFIG_HEAT_CFN_INSTALL=n
# The IP address of the server on which to install Heat CloudWatch
# API service
CONFIG_HEAT_CLOUDWATCH_HOST=192.168.1.137
# The IP address of the server on which to install Heat
# CloudFormation API service
CONFIG_HEAT_CFN_HOST=192.168.1.137
# The IP address of the server on which to install Ceilometer
CONFIG_CEILOMETER_HOST=192.168.1.137
# Secret key for signing metering messages.
CONFIG_CEILOMETER_SECRET=70ca460aa5354ef8
# The password to use for Ceilometer to authenticate with Keystone
CONFIG_CEILOMETER_KS_PW=72858e26b4cd40c2
# To subscribe each server to EPEL enter “y”
CONFIG_USE_EPEL=y
# A comma separated list of URLs to any additional yum repositories
# to install
CONFIG_REPO=
# The IP address of the server on which to install the Nagios server
CONFIG_NAGIOS_HOST=192.168.1.137
# The password of the nagiosadmin user on the Nagios server
CONFIG_NAGIOS_PW=c3832621eebd4d48


oVirt 3.3.2 hackery on Fedora 19

December 21, 2013

My final target was to create a two node oVirt 3.3.2 cluster and virtual machines using replicated glusterfs 3.4.1 volumes based on XFS formatted partitions. The choice of an IPv4 firewall with iptables for tuning the cluster environment and synchronization is my personal preference. Now I also know that postgres requires enough shared memory allocation, like Informix or Oracle (I was an Informix DBA at Verizon for about 5 years; it was a nice time).

   oVirt is an open source alternative to VMware vSphere, and provides an awesome KVM management interface for multi-node virtualization.

oVirt 3.3.2 clean install was performed as follows :-

1. Created ovirtmgmt bridge

[root@ovirt1 network-scripts]# cat ifcfg-ovirtmgmt

DEVICE=ovirtmgmt
TYPE=Bridge
ONBOOT=yes
DELAY=0
BOOTPROTO=static
IPADDR=192.168.1.142
NETMASK=255.255.255.0
GATEWAY=192.168.1.1
DNS1=83.221.202.254
NM_CONTROLLED="no"

 In particular (my box) :

 [root@ovirt1 network-scripts]# cat ifcfg-enp2s0

BOOTPROTO=none
TYPE="Ethernet"
ONBOOT="yes"
NAME="enp2s0"
BRIDGE="ovirtmgmt"
HWADDR=00:22:15:63:e4:e2
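
With enp2s0 slaved to ovirtmgmt, the change is activated through the network service (NetworkManager is disabled in step 4 below); a hedged check:

# service network restart
# brctl show ovirtmgmt

enp2s0 should be listed as a port of the bridge.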

2. Fixed bug with NFS Server:   https://bugzilla.redhat.com/show_bug.cgi?id=970595

3. Set up IPv4 firewall with iptables

4. Disabled NetworkManager and enabled network service 

5. To be able to perform the current 3.3.2 install on F19, set up per

http://postgresql.1045698.n5.nabble.com/How-to-install-latest-stable-postgresql-on-Debian-td5005417.html

# sysctl -w kernel.shmmax=419430400
kernel.shmmax = 419430400
# sysctl -n kernel.shmmax
419430400 
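
To keep the setting across reboots it can also be written to /etc/sysctl.conf (standard sysctl behaviour, same value as above):

# echo "kernel.shmmax = 419430400" >> /etc/sysctl.conf
# sysctl -p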

This appears to be a known issue (http://www.ovirt.org/OVirt_3.3.2_release_notes): on Fedora 19 with recent versions of PostgreSQL it may be necessary to manually change the kernel.shmmax setting (BZ 1039616).

Otherwise, setup fails to perform Misc Configuration; systemctl status postgresql.service reports a server crash during setup. Runtime shared memory mapping :-

[root@ovirt1 ~]# systemctl list-units | grep postgres
postgresql.service          loaded active running   PostgreSQL database server

[root@ovirt1 ~]# ipcs -a

—— Message Queues ——–
key        msqid      owner      perms      used-bytes   messages

—— Shared Memory Segments ——–
key        shmid      owner      perms      bytes      nattch     status
0x00000000 0          root       644        80         2
0x00000000 32769      root       644        16384      2
0x00000000 65538      root       644        280        2
0x00000000 163843     boris      600        4194304    2          dest
0x0052e2c1 360452     postgres   600        43753472   8
0x00000000 294917     boris      600        2097152    2          dest
0x0112e4a1 393222     root       600        1000       11
0x00000000 425991     boris      600        393216     2          dest
0x00000000 557065     boris      600        1048576    2          dest

—— Semaphore Arrays ——–
key        semid      owner      perms      nsems
0x000000a7 65536      root       600        1
0x0052e2c1 458753     postgres   600        17
0x0052e2c2 491522     postgres   600        17
0x0052e2c3 524291     postgres   600        17
0x0052e2c4 557060     postgres   600        17
0x0052e2c5 589829     postgres   600        17
0x0052e2c6 622598     postgres   600        17
0x0052e2c7 655367     postgres   600        17
0x0052e2c8 688136     postgres   600        17
0x0052e2c9 720905     postgres   600        17
0x0052e2ca 753674     postgres   600        17

After creating the replicated gluster volume ovirt-data02 via Web Admin, I manually ran :

gluster volume set ovirt-data02 auth.allow 192.168.1.* ;
gluster volume set ovirt-data02 group virt  ;
gluster volume set ovirt-data02 cluster.quorum-type auto ;
gluster volume set ovirt-data02 performance.cache-size 1GB ;

Currently apache-sshd is 0.9.0-3: https://bugzilla.redhat.com/show_bug.cgi?id=1021273

Adding a new host works fine; /etc/sysconfig/iptables on the master server just needs to contain :
-A INPUT -p tcp -m multiport --dport 24007:24108  -j ACCEPT
-A INPUT -p tcp --dport 111 -j ACCEPT
-A INPUT -p udp --dport 111 -j ACCEPT
-A INPUT -p tcp -m multiport --dport 38465:38485 -j ACCEPT
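
After editing /etc/sysconfig/iptables the rules have to be reloaded; a short hedged check that the gluster ports are now open:

# service iptables restart
# iptables -L -n | grep 24007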

Personally, I experienced one issue during the second host deployment: it required a vdsmd service restart on the second host to let the system bring it up at the end of installation. Both installs behaved exactly the same.

[root@hv02 ~]# service vdsmd status
Redirecting to /bin/systemctl status  vdsmd.service
vdsmd.service – Virtual Desktop Server Manager
Loaded: loaded (/usr/lib/systemd/system/vdsmd.service; enabled)
Active: active (running) since Tue 2013-12-24 15:40:40 MSK; 50s ago
Process: 2896 ExecStartPre=/usr/libexec/vdsm/vdsmd_init_common.sh –pre-start (code=exited, status=0/SUCCESS)

Main PID: 3166 (vdsm)
CGroup: name=systemd:/system/vdsmd.service
└─3166 /usr/bin/python /usr/share/vdsm/vdsm

Dec 24 15:40:41 hv02.localdomain python[3192]: detected unhandled Python exception in ‘/usr/bin/vdsm-tool’
Dec 24 15:40:41 hv02.localdomain vdsm[3166]: [427B blob data]
Dec 24 15:40:41 hv02.localdomain vdsm[3166]: vdsm vds WARNING Unable to load the json rpc server module. Ple…led.
Dec 24 15:40:41 hv02.localdomain python[3166]: DIGEST-MD5 client step 2
Dec 24 15:40:41 hv02.localdomain python[3166]: DIGEST-MD5 parse_server_challenge()
Dec 24 15:40:41 hv02.localdomain python[3166]: DIGEST-MD5 ask_user_info()
Dec 24 15:40:41 hv02.localdomain python[3166]: DIGEST-MD5 client step 2
Dec 24 15:40:41 hv02.localdomain python[3166]: DIGEST-MD5 ask_user_info()
Dec 24 15:40:41 hv02.localdomain python[3166]: DIGEST-MD5 make_client_response()
Dec 24 15:40:41 hv02.localdomain python[3166]: DIGEST-MD5 client step 3

[root@hv02 ~]# service vdsmd restart
Redirecting to /bin/systemctl restart  vdsmd.service

[root@hv02 ~]# service vdsmd status
Redirecting to /bin/systemctl status  vdsmd.service
vdsmd.service – Virtual Desktop Server Manager
Loaded: loaded (/usr/lib/systemd/system/vdsmd.service; enabled)
Active: active (running) since Tue 2013-12-24 15:41:42 MSK; 2s ago
Process: 3355 ExecStopPost=/usr/libexec/vdsm/vdsmd_init_common.sh –post-stop (code=exited, status=0/SUCCESS)
Process: 3358 ExecStartPre=/usr/libexec/vdsm/vdsmd_init_common.sh –pre-start (code=exited, status=0/SUCCESS)

Main PID: 3418 (vdsm)
CGroup: name=systemd:/system/vdsmd.service
└─3418 /usr/bin/python /usr/share/vdsm/vdsm

Dec 24 15:41:42 hv02.localdomain vdsmd_init_common.sh[3358]: vdsm: Running test_conflicting_conf
Dec 24 15:41:42 hv02.localdomain vdsmd_init_common.sh[3358]: SUCCESS: ssl configured to true. No conflicts
Dec 24 15:41:42 hv02.localdomain systemd[1]: Started Virtual Desktop Server Manager.
Dec 24 15:41:43 hv02.localdomain vdsm[3418]: vdsm vds WARNING Unable to load the json rpc server module. Ple…led.
Dec 24 15:41:43 hv02.localdomain python[3418]: DIGEST-MD5 client step 2
Dec 24 15:41:43 hv02.localdomain python[3418]: DIGEST-MD5 parse_server_challenge()
Dec 24 15:41:43 hv02.localdomain python[3418]: DIGEST-MD5 ask_user_info()
Dec 24 15:41:43 hv02.localdomain python[3418]: DIGEST-MD5 client step 2
Dec 24 15:41:43 hv02.localdomain python[3418]: DIGEST-MD5 ask_user_info()
Dec 24 15:41:43 hv02.localdomain python[3418]: DIGEST-MD5 make_client_response()
Dec 24 15:41:43 hv02.localdomain python[3418]: DIGEST-MD5 client step 3

Moreover, if during the core install on the first server the same report comes up while awaiting the host to become VDSM operational, the install will hang for a while and finally won't bring up the master server. The workaround is the same. Once again, this is my personal experience; it is a random error during the core "all in one" install.

[root@ovirt1 ~]# iptables -L
Chain INPUT (policy ACCEPT)
target     prot opt source               destination
ACCEPT     all  —  anywhere             anywhere
ACCEPT     icmp —  anywhere             anywhere             icmp any
ACCEPT     all  —  anywhere             anywhere             state RELATED,ESTABLISHED
ACCEPT     tcp  —  anywhere             anywhere             state NEW tcp dpt:ssh
ACCEPT     tcp  —  anywhere             anywhere             state NEW tcp dpt:postgres
ACCEPT     tcp  —  anywhere             anywhere             state NEW tcp dpt:https
ACCEPT     tcp  —  anywhere             anywhere             state NEW tcp dpts:xprtld:6166
ACCEPT     tcp  —  anywhere             anywhere             state NEW tcp dpts:49152:49216
ACCEPT     tcp  —  anywhere             anywhere             state NEW tcp dpt:synchronet-db
ACCEPT     tcp  —  anywhere             anywhere             state NEW tcp dpt:sunrpc
ACCEPT     udp  —  anywhere             anywhere             state NEW udp dpt:sunrpc
ACCEPT     tcp  —  anywhere             anywhere             state NEW tcp dpt:pftp
ACCEPT     udp  —  anywhere             anywhere             state NEW udp dpt:pftp
ACCEPT     tcp  —  anywhere             anywhere             state NEW tcp dpt:rquotad
ACCEPT     udp  —  anywhere             anywhere             state NEW udp dpt:rquotad
ACCEPT     tcp  —  anywhere             anywhere             state NEW tcp dpt:892
ACCEPT     udp  —  anywhere             anywhere             state NEW udp dpt:892
ACCEPT     tcp  —  anywhere             anywhere             state NEW tcp dpt:nfs
ACCEPT     udp  —  anywhere             anywhere             state NEW udp dpt:filenet-rpc
ACCEPT     tcp  —  anywhere             anywhere             state NEW tcp dpt:32803
ACCEPT     tcp  —  anywhere             anywhere             state NEW tcp dpt:http
ACCEPT     tcp  —  anywhere             anywhere             multiport dports 24007:24108
ACCEPT     tcp  —  anywhere             anywhere             tcp dpt:sunrpc
ACCEPT     udp  —  anywhere             anywhere             udp dpt:sunrpc
ACCEPT     tcp  —  anywhere             anywhere             multiport dports 38465:38485
REJECT     all  —  anywhere             anywhere             reject-with icmp-host-prohibited
Chain FORWARD (policy ACCEPT)
target     prot opt source               destination
Chain OUTPUT (policy ACCEPT)
target     prot opt source               destination

[root@ovirt1 ~]# ssh ovirt2

Last login: Sat Dec 21 23:17:05 2013 from ovirt1.localdomain

[root@ovirt2 ~]# iptables -L
Chain INPUT (policy ACCEPT)
target     prot opt source               destination
ACCEPT     all  —  anywhere             anywhere             state RELATED,ESTABLISHED
ACCEPT     all  —  anywhere             anywhere
ACCEPT     tcp  —  anywhere             anywhere             tcp dpt:54321
ACCEPT     tcp  —  anywhere             anywhere             tcp dpt:ssh
ACCEPT     udp  —  anywhere             anywhere             udp dpt:snmp
ACCEPT     tcp  —  anywhere             anywhere             tcp dpt:16514
ACCEPT     tcp  —  anywhere             anywhere             multiport dports xprtld:6166
ACCEPT     tcp  —  anywhere             anywhere             multiport dports 49152:49216
ACCEPT     tcp  —  anywhere             anywhere             tcp dpt:24007
ACCEPT     tcp  —  anywhere             anywhere             tcp dpt:webcache
ACCEPT     udp  —  anywhere             anywhere             udp dpt:sunrpc
ACCEPT     tcp  —  anywhere             anywhere             tcp dpt:38465
ACCEPT     tcp  —  anywhere             anywhere             tcp dpt:38466
ACCEPT     tcp  —  anywhere             anywhere             tcp dpt:sunrpc
ACCEPT     tcp  —  anywhere             anywhere             tcp dpt:38467
ACCEPT     tcp  —  anywhere             anywhere             tcp dpt:nfs
ACCEPT     tcp  —  anywhere             anywhere             tcp dpt:38469
ACCEPT     tcp  —  anywhere             anywhere             tcp dpt:39543
ACCEPT     tcp  —  anywhere             anywhere             tcp dpt:55863
ACCEPT     tcp  —  anywhere             anywhere             tcp dpt:38468
ACCEPT     udp  —  anywhere             anywhere             udp dpt:963
ACCEPT     tcp  —  anywhere             anywhere             tcp dpt:965
ACCEPT     tcp  —  anywhere             anywhere             tcp dpt:ctdb
ACCEPT     tcp  —  anywhere             anywhere             tcp dpt:netbios-ssn
ACCEPT     tcp  —  anywhere             anywhere             tcp dpt:microsoft-ds
ACCEPT     tcp  —  anywhere             anywhere             tcp dpts:24007:24108
ACCEPT     tcp  —  anywhere             anywhere             tcp dpts:49152:49251
REJECT     all  —  anywhere             anywhere             reject-with icmp-host-prohibited
Chain FORWARD (policy ACCEPT)
target     prot opt source               destination
REJECT     all  —  anywhere             anywhere             PHYSDEV match ! –physdev-is-bridged reject-with icmp-host-prohibited
Chain OUTPUT (policy ACCEPT)
target     prot opt source               destination

Creating XFS replicated Gluster Storage

[root@ovirt1 ~]# pvcreate /dev/sda3
[root@ovirt1 ~]# vgcreate vg_virt /dev/sda3
[root@ovirt1 ~]# lvcreate -L 91000M -n lv_gluster  vg_virt  /dev/sda3
Logical volume "lv_gluster" created
[root@ovirt1 ~]# lvscan
ACTIVE            '/dev/fedora00/root' [170.90 GiB] inherit
ACTIVE            '/dev/fedora00/swap' [7.89 GiB] inherit
ACTIVE            '/dev/vg_virt/lv_gluster' [88.87 GiB] inherit
[root@ovirt1 ~]# mkfs.xfs -f -i size=512 /dev/mapper/vg_virt-lv_gluster

meta-data=/dev/mapper/vg_virt-lv_gluster isize=512    agcount=16, agsize=1456000 blks
=                       sectsz=4096  attr=2, projid32bit=0
data     =                       bsize=4096   blocks=23296000, imaxpct=25
=                       sunit=0      swidth=0 blks
naming   =version 2              bsize=4096   ascii-ci=0
log      =internal log           bsize=4096   blocks=11375, version=2
=                       sectsz=4096  sunit=1 blks, lazy-count=1
realtime =none                   extsz=4096   blocks=0, rtextents=0

[root@ovirt1 ~]# mkdir /data1
[root@ovirt1 ~]# chown -R 36:36 /data1
[root@ovirt1 ~]# echo "/dev/mapper/vg_virt-lv_gluster  /data1  xfs     defaults    1 2" >> /etc/fstab
[root@ovirt1 ~]# mount -a
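
For completeness, a sketch of the matching brick preparation on the second peer; the device name /dev/sda3 is an assumption here and should be adjusted to ovirt2's actual disk layout:

[root@ovirt2 ~]# pvcreate /dev/sda3
[root@ovirt2 ~]# vgcreate vg_virt /dev/sda3
[root@ovirt2 ~]# lvcreate -L 91000M -n lv_gluster vg_virt
[root@ovirt2 ~]# mkfs.xfs -f -i size=512 /dev/mapper/vg_virt-lv_gluster
[root@ovirt2 ~]# mkdir /data1 ; chown -R 36:36 /data1
[root@ovirt2 ~]# echo "/dev/mapper/vg_virt-lv_gluster  /data1  xfs  defaults  1 2" >> /etc/fstab
[root@ovirt2 ~]# mount -a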

  Creating a replicated gluster volume based on the XFS LVM via the Web Admin Console

The last line corresponds to ovirt-data05, a replicated gluster volume based on the XFS-formatted LVM partition /dev/mapper/vg_virt-lv_gluster mounted via /etc/fstab (set up similarly on both peers)

[root@ovirt1 ~]# df -h
Filesystem                               Size  Used Avail Use% Mounted on
/dev/mapper/fedora00-root                169G   35G  125G  22% /
devtmpfs                                 3.9G     0  3.9G   0% /dev
tmpfs                                    3.9G  152K  3.9G   1% /dev/shm
tmpfs                                    3.9G  988K  3.9G   1% /run
tmpfs                                    3.9G     0  3.9G   0% /sys/fs/cgroup
tmpfs                                    3.9G   80K  3.9G   1% /tmp
/dev/sda1                             477M   87M  361M  20% /boot

ovirt1.localdomain:ovirt-data02            169G   35G  125G  22% /rhev/data-center/mnt/glusterSD/ovirt1.localdomain:ovirt-data02

192.168.1.137:/var/lib/exports/export    169G   35G  125G  22% /rhev/data-center/mnt/192.168.1.137:_var_lib_exports_export

ovirt1.localdomain:/var/lib/exports/iso  169G   35G  125G  22% /rhev/data-center/mnt/ovirt1.localdomain:_var_lib_exports_iso

/dev/mapper/vg_virt-lv_gluster            89G   36M   89G   1% /data1

ovirt1.localdomain:ovirt-data05         89G   36M   89G   1% /rhev/data-center/mnt/glusterSD/ovirt1.localdomain:ovirt-data05

Fedora 20 KVM installation on XFS Gluster domain


oVirt 3.3 & 3.3.1 hackery on Fedora 19

November 16, 2013

***********************************************************************************

UPDATE on 12/07/2013:  Even with a downgraded apache-sshd you might repeatedly get "Unexpected connection interruption" when attempting to add a new host; in this case run one more time

# engine-setup

on the master server. In my experience it helped several times (3.3.1 on F19). oVirt 3.3.0.1 never required this last hack.

UPDATE on 11/23/2013:  The same schema works for 3.3.1
with "yum downgrade apache-sshd" applied to be able to add a new host. When creating a VM it's possible to select NIC1 "ovirtmgmt/ovirtmgmt".  See also http://www.ovirt.org/Features/Detailed_OSN_Integration regarding setting up Neutron (Quantum) to create VLANs (external provider)

**********************************************************************************

My final target was to create a two node oVirt 3.3 cluster and virtual machines using replicated glusterfs 3.4.1 volumes. Choosing firewalld as the configured firewall seems unacceptable for this purpose for the time being; selecting the iptables firewall allows the task to be completed. However, this is only my personal preference: the IPv4 firewall with iptables just works for me with no pain, and I clearly understand what to do when problems come up, nothing else.

First, fix the NFS Server bug still affecting F19: https://bugzilla.redhat.com/show_bug.cgi?id=970595

Please, also be aware of http://www.ovirt.org/OVirt_3.3_TestDay#Known_issues

Quote :

Known issues : host installation

Fedora 19: If the ovirtmgmt bridge is not successfully installed during initial host-setup, manually click on the host, setup networks, and add the ovirtmgmt bridge. It is recommended to disable NetworkManager as well.

End quote

Second, put the following under /etc/sysconfig/network-scripts:

[root@ovirt1 network-scripts]# cat ifcfg-ovirtmgmt

DEVICE=ovirtmgmt

TYPE=Bridge

ONBOOT=yes

DELAY=0

BOOTPROTO=static

IPADDR=192.168.1.142

NETMASK=255.255.255.0

GATEWAY=192.168.1.1

DNS1=83.221.202.254

NM_CONTROLLED="no"

In particular (my box) :

[root@ovirt1 network-scripts]# cat ifcfg-enp2s0

BOOTPROTO=none

TYPE="Ethernet"

ONBOOT="yes"

NAME="enp2s0"

BRIDGE="ovirtmgmt"

HWADDR=00:22:15:63:e4:e2

Disable NetworkManager and enable service network.
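
A minimal sketch of that switch on Fedora 19 (the chkconfig call targets the legacy network init script, which systemd picks up through its SysV generator):

# systemctl stop NetworkManager ; systemctl disable NetworkManager
# chkconfig network on
# service network start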

Skipping these two steps in my case crashed the install per

http://community.redhat.com/up-and-running-with-ovirt-3-3/

The first for an obvious reason; the second didn't bring up vdsmd during the install, and engine.log generated a bunch of errors complaining about the absence of the ovirtmgmt network. The web console was actually useless (again, in my case), unable to manage storage domains stuck in down status.

View also : http://www.mail-archive.com/users@ovirt.org/msg11394.html

Follow http://community.redhat.com/up-and-running-with-ovirt-3-3/

$ sudo yum localinstall http://ovirt.org/releases/ovirt-release-fedora.noarch.rpm -y

$ sudo yum install ovirt-engine-setup-plugin-allinone -y

Before running engine-setup:

[root@ovirt1 ~]# yum install ovirt-engine-websocket-proxy

Loaded plugins: langpacks, refresh-packagekit, versionlock

Resolving Dependencies

--> Running transaction check

--> Package ovirt-engine-websocket-proxy.noarch 0:3.3.0.1-1.fc19 will be installed

--> Finished Dependency Resolution

Dependencies Resolved

===================================================

Package                                 Arch              Version                   Repository               Size

===================================================

Installing:

ovirt-engine-websocket-proxy            noarch            3.3.0.1-1.fc19            ovirt-stable             12 k

Transaction Summary

===================================================

Install  1 Package

Total download size: 12 k

Installed size: 18 k

Is this ok [y/d/N]: y

Downloading packages:

ovirt-engine-websocket-proxy-3.3.0.1-1.fc19.noarch.rpm                                      |  12 kB  00:00:02

Running transaction check

Running transaction test

Transaction test succeeded

Running transaction

Installing : ovirt-engine-websocket-proxy-3.3.0.1-1.fc19.noarch                                              1/1

Verifying  : ovirt-engine-websocket-proxy-3.3.0.1-1.fc19.noarch                                              1/1

Installed:

ovirt-engine-websocket-proxy.noarch 0:3.3.0.1-1.fc19

Complete!

[root@ovirt1 ~]# engine-setup

[ INFO  ] Stage: Initializing

[ INFO  ] Stage: Environment setup

Configuration files: ['/etc/ovirt-engine-setup.conf.d/10-packaging-aio.conf', '/etc/ovirt-engine-setup.conf.d/10-packaging.conf']

Log file: /var/log/ovirt-engine/setup/ovirt-engine-setup-20131112004446.log

Version: otopi-1.1.2 (otopi-1.1.2-1.fc19)

[ INFO  ] Hardware supports virtualization

[ INFO  ] Stage: Environment packages setup

[ INFO  ] Stage: Programs detection

[ INFO  ] Stage: Environment setup

[ INFO  ] Stage: Environment customization

Configure VDSM on this host? (Yes, No) [No]: Yes

Local storage domain path [/var/lib/images]:

Local storage domain name [local_storage]:

–== PACKAGES ==–

[ INFO  ] Checking for product updates…

[ INFO  ] No product updates found

–== NETWORK CONFIGURATION ==–

Host fully qualified DNS name of this server [ovirt1.localdomain]:

[WARNING] Failed to resolve ovirt1.localdomain using DNS, it can be resolved only locally

          firewalld was detected on your computer. Do you wish Setup to configure it? (yes, no) [yes]:  no

         iptables firewall was detected on your computer. Do you wish Setup to configure it? (yes, no) [yes]:  yes

–== DATABASE CONFIGURATION ==–

Where is the database located? (Local, Remote) [Local]:

Setup can configure the local postgresql server automatically for the engine to run. This may conflict with existing applications.

Would you like Setup to automatically configure postgresql, or prefer to perform that manually? (Automatic, Manual) [Automatic]:

Using existing credentials

–== OVIRT ENGINE CONFIGURATION ==–

Engine admin password:

Confirm engine admin password:

Application mode (Both, Virt, Gluster) [Both]:

Default storage type: (NFS, FC, ISCSI, POSIXFS) [NFS]:

–== PKI CONFIGURATION ==–

Organization name for certificate [localdomain]:

–== APACHE CONFIGURATION ==–

Setup can configure the default page of the web server to present the application home page. This may conflict with existing applications.

Do you wish to set the application as the default page of the web server? (Yes, No) [Yes]:

Setup can configure apache to use SSL using a certificate issued from the internal CA.

Do you wish Setup to configure that, or prefer to perform that manually? (Automatic, Manual) [Automatic]:

–== SYSTEM CONFIGURATION ==–

Configure an NFS share on this server to be used as an ISO Domain? (Yes, No) [Yes]:

Local ISO domain path [/var/lib/exports/iso]:

Local ISO domain name [ISO_DOMAIN]:

Configure WebSocket Proxy on this machine? (Yes, No) [Yes]:

–== END OF CONFIGURATION ==–

[ INFO  ] Stage: Setup validation

[WARNING] Less than 16384MB of memory is available

–== CONFIGURATION PREVIEW ==–

Database name                      : engine

Database secured connection        : False

Database host                      : localhost

Database user name                 : engine

Database host name validation      : False

Datbase port                       : 5432

NFS setup                          : True

PKI organization                   : localdomain

NFS mount point                    : /var/lib/exports/iso

Application mode                   : both

  Firewall manager                   : iptables

Configure WebSocket Proxy          : True

Host FQDN                          : ovirt1.localdomain

Datacenter storage type            : nfs

Configure local database           : True

Set application as default page    : True

Configure Apache SSL               : True

Configure VDSM on this host        : True

Local storage domain directory     : /var/lib/images

Please confirm installation settings (OK, Cancel) [OK]:

[ INFO  ] Stage: Transaction setup

[ INFO  ] Stopping engine service

[ INFO  ] Stopping websocket-proxy service

[ INFO  ] Stage: Misc configuration

[ INFO  ] Stage: Package installation

[ INFO  ] Stage: Misc configuration

[ INFO  ] Initializing PostgreSQL

[ INFO  ] Creating PostgreSQL database

[ INFO  ] Configurating PostgreSQL

[ INFO  ] Creating database schema

[ INFO  ] Creating CA

[ INFO  ] Configurating WebSocket Proxy

[ INFO  ] Generating post install configuration file ‘/etc/ovirt-engine-setup.conf.d/20-setup-ovirt-post.conf’

[ INFO  ] Stage: Transaction commit

[ INFO  ] Stage: Closing up

–== SUMMARY ==–

[WARNING] Less than 16384MB of memory is available

A default ISO NFS share has been created on this host.

If IP based access restrictions are required, edit:

entry /var/lib/exports/iso in /etc/exports.d/ovirt-engine-iso-domain.exports

SSH fingerprint: 90:16:09:69:8A:D8:43:C9:87:A7:CF:1A:A3:3B:71:44

Internal CA 5F:2E:12:99:32:55:07:11:C9:F9:AB:58:02:C9:A6:8E:16:91:CA:C1

Web access is enabled at:

http://ovirt1.localdomain:80/ovirt-engine

https://ovirt1.localdomain:443/ovirt-engine

Please use the user “admin” and password specified in order to login into oVirt Engine

–== END OF SUMMARY ==–

[ INFO  ] Starting engine service

[ INFO  ] Restarting httpd

[ INFO  ] Waiting for VDSM host to become operational. This may take several minutes…

[ INFO  ] The VDSM Host is now operational

[ INFO  ] Restarting nfs services

[ INFO  ] Generating answer file ‘/var/lib/ovirt-engine/setup/answers/20131112005106-setup.conf’

[ INFO  ] Stage: Clean up

Log file is located at /var/log/ovirt-engine/setup/ovirt-engine-setup-20131112004446.log

[ INFO  ] Stage: Pre-termination

[ INFO  ] Stage: Termination

[ INFO  ] Execution of setup completed successfully

Installing 3.3.1 doesn't require ovirt-engine-websocket-proxy and looks like this:

[root@ovirt1 ~]# engine-setup
[ INFO  ] Stage: Initializing
[ INFO  ] Stage: Environment setup
Configuration files: ['/etc/ovirt-engine-setup.conf.d/10-packaging-aio.conf', '/etc/ovirt-engine-setup.conf.d/10-packaging.conf']
Log file: /var/log/ovirt-engine/setup/ovirt-engine-setup-20131122143650.log
Version: otopi-1.1.2 (otopi-1.1.2-1.fc19)
[ INFO  ] Hardware supports virtualization
[ INFO  ] Stage: Environment packages setup
[ INFO  ] Stage: Programs detection
[ INFO  ] Stage: Environment setup
[ INFO  ] Stage: Environment customization

–== PACKAGES ==–

[ INFO  ] Checking for product updates…
[ INFO  ] No product updates found

–== ALL IN ONE CONFIGURATION ==–

Configure VDSM on this host? (Yes, No) [No]: Yes
Local storage domain path [/var/lib/images]:
Local storage domain name [local_storage]:

–== NETWORK CONFIGURATION ==–

Host fully qualified DNS name of this server [ovirt1.localdomain]:
[WARNING] Failed to resolve ovirt1.localdomain using DNS, it can be resolved only locally

–== DATABASE CONFIGURATION ==–

Where is the database located? (Local, Remote) [Local]:
Setup can configure the local postgresql server automatically for the engine to run. This may conflict with existing applications.
Would you like Setup to automatically configure postgresql, or prefer to perform that manually? (Automatic, Manual) [Automatic]:
Using existing credentials

–== OVIRT ENGINE CONFIGURATION ==–

Engine admin password:
Confirm engine admin password:
Application mode (Both, Virt, Gluster) [Both]:
Default storage type: (NFS, FC, ISCSI, POSIXFS) [NFS]:

–== PKI CONFIGURATION ==–

Organization name for certificate [localdomain]:

–== APACHE CONFIGURATION ==–

Setup can configure the default page of the web server to present the application home page. This may conflict with existing applications.
Do you wish to set the application as the default page of the web server? (Yes, No) [Yes]:
Setup can configure apache to use SSL using a certificate issued from the internal CA.
Do you wish Setup to configure that, or prefer to perform that manually? (Automatic, Manual) [Automatic]:

–== SYSTEM CONFIGURATION ==–

Configure an NFS share on this server to be used as an ISO Domain? (Yes, No) [Yes]:
Local ISO domain path [/var/lib/exports/iso]:
Local ISO domain name [ISO_DOMAIN]:
Configure WebSocket Proxy on this machine? (Yes, No) [Yes]:

–== END OF CONFIGURATION ==–

[ INFO  ] Stage: Setup validation
[WARNING] Less than 16384MB of memory is available

–== CONFIGURATION PREVIEW ==–

Database name                      : engine
Database secured connection        : False
Database host                      : localhost
Database user name                 : engine
Database host name validation      : False
Datbase port                       : 5432
NFS setup                          : True
PKI organization                   : localdomain
NFS mount point                    : /var/lib/exports/iso
Application mode                   : both
Configure WebSocket Proxy          : True
Host FQDN                          : ovirt1.localdomain
Datacenter storage type            : nfs
Configure local database           : True
Set application as default page    : True
Configure Apache SSL               : True
Configure VDSM on this host        : True
Local storage domain directory     : /var/lib/images

Please confirm installation settings (OK, Cancel) [OK]:

[ INFO  ] Stage: Transaction setup
[ INFO  ] Stopping engine service
[ INFO  ] Stopping websocket-proxy service
[ INFO  ] Stage: Misc configuration
[ INFO  ] Stage: Package installation
[ INFO  ] Stage: Misc configuration
[ INFO  ] Initializing PostgreSQL
[ INFO  ] Creating PostgreSQL database
[ INFO  ] Configurating PostgreSQL
[ INFO  ] Creating database schema
[ INFO  ] Creating CA
[ INFO  ] Configurating WebSocket Proxy
[ INFO  ] Generating post install configuration file ‘/etc/ovirt-engine-setup.conf.d/20-setup-ovirt-post.conf’
[ INFO  ] Stage: Transaction commit
[ INFO  ] Stage: Closing up

–== SUMMARY ==–

[WARNING] Less than 16384MB of memory is available
An ISO NFS share has been created on this host.
If IP based access restrictions are required, edit:
entry /var/lib/exports/iso in /etc/exports.d/ovirt-engine-iso-domain.exports
SSH fingerprint: DB:C5:99:16:0D:67:4B:F5:62:99:B2:D3:E2:C7:7F:59
Internal CA 93:BB:05:42:C6:6F:00:28:A1:F1:90:C5:3E:E3:91:D6:1F:1B:17:3D
The following network ports should be opened:
tcp:111
tcp:2049
tcp:32803
tcp:443
tcp:49152-49216
tcp:5432
tcp:5634-6166
tcp:6100
tcp:662
tcp:80
tcp:875
tcp:892
udp:111
udp:32769
udp:662
udp:875
udp:892
An example of the required configuration for iptables can be found at:
/etc/ovirt-engine/iptables.example
In order to configure firewalld, copy the files from
/etc/ovirt-engine/firewalld to /etc/firewalld/services
and execute the following commands:
firewall-cmd -service ovirt-postgres
firewall-cmd -service ovirt-https
firewall-cmd -service ovirt-aio
firewall-cmd -service ovirt-websocket-proxy
firewall-cmd -service ovirt-nfs
firewall-cmd -service ovirt-http
Web access is enabled at:
http://ovirt1.localdomain:80/ovirt-engine
https://ovirt1.localdomain:443/ovirt-engine
Please use the user “admin” and password specified in order to login into oVirt Engine

–== END OF SUMMARY ==–

[ INFO  ] Starting engine service
[ INFO  ] Restarting httpd
[ INFO  ] Waiting for VDSM host to become operational. This may take several minutes…
[ INFO  ] The VDSM Host is now operational
[ INFO  ] Restarting nfs services
[ INFO  ] Generating answer file ‘/var/lib/ovirt-engine/setup/answers/20131122144055-setup.conf’
[ INFO  ] Stage: Clean up
Log file is located at /var/log/ovirt-engine/setup/ovirt-engine-setup-20131122143650.log
[ INFO  ] Stage: Pre-termination
[ INFO  ] Stage: Termination
[ INFO  ] Execution of setup completed successfully

 Not sure it’s a must, but I’ve also updated /etc/sysconfig/iptables with

-A INPUT -m state --state NEW -m tcp -p tcp --dport 24007 -j ACCEPT
-A INPUT -m state --state NEW -m tcp -p tcp --dport 24008 -j ACCEPT
-A INPUT -m state --state NEW -m tcp -p tcp --dport 24009 -j ACCEPT
-A INPUT -m state --state NEW -m tcp -p tcp --dport 24010 -j ACCEPT
-A INPUT -m state --state NEW -m tcp -p tcp --dport 24011 -j ACCEPT
-A INPUT -m state --state NEW -m tcp -p tcp --dport 38465:38469 -j ACCEPT
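
The edited /etc/sysconfig/iptables only takes effect after the rules are reloaded; a quick sketch, assuming the iptables service is the active firewall:

# service iptables restart          # or: systemctl restart iptables.service
# iptables -L -n | grep 24007       # verify the gluster ports are now open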

VMs running on different hosts of two node cluster started via Web Console

[root@ovirt1 ~]# service libvirtd status

Redirecting to /bin/systemctl status  libvirtd.service

libvirtd.service – Virtualization daemon

Loaded: loaded (/usr/lib/systemd/system/libvirtd.service; enabled)

Active: active (running) since Fri 2013-11-22 10:31:07 VOLT; 54min ago

Main PID: 1131 (libvirtd)

CGroup: name=systemd:/system/libvirtd.service

├─1131 /usr/sbin/libvirtd --listen

└─8606 /usr/bin/qemu-system-x86_64 -machine accel=kvm -name UbuntuSalamander -S -machine pc-1.0,accel=kvm,usb=of…

Nov 22 10:31:07 ovirt1.localdomain libvirtd[1131]: 2013-11-22 06:31:07.778+0000: 1131: info : libvirt version: 1.0.5.7….org)

Nov 22 10:31:07 ovirt1.localdomain libvirtd[1131]: 2013-11-22 06:31:07.778+0000: 1131: debug : virLogParseOutputs:1331…d.log

[root@ovirt1 ~]# ssh ovirt2

Last login: Fri Nov 22 10:45:26 2013

[root@ovirt2 ~]# service libvirtd  status

Redirecting to /bin/systemctl status  libvirtd.service

libvirtd.service – Virtualization daemon

Loaded: loaded (/usr/lib/systemd/system/libvirtd.service; enabled)

Active: active (running) since Fri 2013-11-22 10:44:47 VOLT; 41min ago

Main PID: 1019 (libvirtd)

CGroup: name=systemd:/system/libvirtd.service

├─1019 /usr/sbin/libvirtd --listen

└─2776 /usr/bin/qemu-system-x86_64 -machine accel=kvm -name VF19NW -S -machine pc-1.0,accel=kvm,usb=off -cpu Pen…

Nov 22 10:44:48 ovirt2.localdomain libvirtd[1019]: 2013-11-22 06:44:48.317+0000: 1019: info : libvirt version: 1.0.5.7….org)

Nov 22 10:44:48 ovirt2.localdomain libvirtd[1019]: 2013-11-22 06:44:48.317+0000: 1019: debug : virLogParseOutputs:1331…d.log


Virtual machines using replicated glusterfs 3.4.1 volumes

Add the new host via the Web Console.  Make sure that on the new host you previously ran:

$ sudo yum localinstall http://ovirt.org/releases/ovirt-release-fedora.noarch.rpm -y

otherwise it stays incompatible with oVirt 3.3 (3.2 at maximum).

Set up the ovirtmgmt bridge, disable firewalld and enable the iptables firewall manager, for example as sketched below.
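
The firewalld-to-iptables switch itself is only a couple of commands; a sketch, assuming the iptables-services package provides the iptables unit on this Fedora release:

# systemctl stop firewalld ; systemctl disable firewalld
# yum install -y iptables-services
# systemctl enable iptables.service ; systemctl start iptables.service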

On server ovirt1, run the following commands before adding the new host ovirt2:

# ssh-keygen (Hit Enter to accept all of the defaults)

# ssh-copy-id -i ~/.ssh/id_rsa.pub root@ovirt2
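
It may be worth verifying that passwordless root ssh actually works before adding the host, e.g.:

# ssh root@ovirt2 hostname          # should print ovirt2.localdomain without asking for a password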

Even with a downgraded apache-sshd you might repeatedly get "Unexpected connection interruption"; in this case run one more time

# engine-setup

on the master server. In my experience it helped several times (3.3.1 on F19). oVirt 3.3.0.1 never required this last hack.

Version 3.3.1 allows creating Gluster volumes via the GUI, automatically configuring the required features for a volume created in the graphical environment.

Regarding the design of glusterfs volumes for a production environment, see https://forge.gluster.org/hadoop/pages/InstallingAndConfiguringGlusterFS

  

Double check via command line

 # gluster volume info

Volume Name: ovirt-data02
Type: Replicate
Volume ID: b1cf98c9-5525-48d4-9fb0-bde47d7a98b0
Status: Started
Number of Bricks: 1 x 2 = 2
Transport-type: tcp
Bricks:
Brick1: ovirt1.localdomain:/home/boris/node-replicate
Brick2: 192.168.1.127:/home/boris/node-replicate
Options Reconfigured:
storage.owner-gid: 36
storage.owner-uid: 36
cluster.server-quorum-type: server
cluster.quorum-type: auto
network.remote-dio: enable
cluster.eager-lock: enable
performance.stat-prefetch: off
performance.io-cache: off
performance.read-ahead: off
performance.quick-read: off
auth.allow: *
user.cifs: enable
nfs.disable: off
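
These options match the usual virt-store tuning. As a side note, GlusterFS 3.4 ships a predefined "virt" option group, so roughly the same set can be applied to a freshly created volume in one shot; a sketch, assuming the group file /var/lib/glusterd/groups/virt is present:

# gluster volume set ovirt-data02 group virt
# gluster volume set ovirt-data02 storage.owner-uid 36
# gluster volume set ovirt-data02 storage.owner-gid 36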

Creating an XFS-based replicated gluster volume via oVirt 3.3.1 per  https://forge.gluster.org/hadoop/pages/InstallingAndConfiguringGlusterFS
 
 [root@ovirt1 ~]# gluster volume info ovirt-data05
Volume Name: ovirt-data05
Type: Replicate
Volume ID: ff0955b6-668a-4eab-acf0-606456ee0005
Status: Started
Number of Bricks: 1 x 2 = 2
Transport-type: tcp
Bricks:
Brick1: ovirt1.localdomain:/mnt/brick1/node-replicate
Brick2: 192.168.1.127:/mnt/brick1/node-replicate
Options Reconfigured:
nfs.disable: off
user.cifs: enable
auth.allow: *
storage.owner-uid: 36
storage.owner-gid: 36
 
[root@ovirt1 ~]# mount | grep xfs
selinuxfs on /sys/fs/selinux type selinuxfs (rw,relatime)
/dev/sda3 on /mnt/brick1 type xfs (rw,noatime,seclabel,attr2,inode64,noquota)
 
[root@ovirt1 ~]# df -h
Filesystem                               Size  Used Avail Use% Mounted on
/dev/mapper/fedora00-root                145G   26G  112G  19% /
devtmpfs                                 3.9G     0  3.9G   0% /dev
tmpfs                                    3.9G  2.2M  3.9G   1% /dev/shm
tmpfs                                    3.9G 1004K  3.9G   1% /run
tmpfs                                    3.9G     0  3.9G   0% /sys/fs/cgroup
tmpfs                                    3.9G   76K  3.9G   1% /tmp
/dev/sda1                                477M  105M  344M  24% /boot
/dev/sda3                                 98G   19G   80G  19% /mnt/brick1
ovirt1.localdomain:ovirt-data05           98G   19G   80G  19% /rhev/data-center/mnt/glusterSD/ovirt1.localdomain:ovirt-data05
ovirt1.localdomain:/var/lib/exports/iso  145G   26G  112G  19% /rhev/data-center/mnt/ovirt1.localdomain:_var_lib_exports_iso
192.168.1.137:/var/lib/exports/export    145G   26G  112G  19% /rhev/data-center/mnt/192.168.1.137:_var_lib_exports_export
ovirt1.localdomain:ovirt-data02          145G   26G  112G  19% /rhev/data-center/mnt/glusterSD/ovirt1.localdomain:ovirt-data02
 

Creating a glusterfs 3.4.1 cluster with ovirt1 and ovirt2 via the CLI (3.3.0)

[root@ovirt1 ~]# gluster peer status

Number of Peers: 1

Hostname: ovirt2

Uuid: 8355d741-fc2d-4484-b6e3-ca0ef99658c1

State: Peer in Cluster (Connected)

[root@ovirt1 ~]# ssh ovirt2

Last login: Sat Nov 16 10:23:11 2013 from ovirt1.localdomain

[root@ovirt2 ~]# gluster peer status

Number of Peers: 1

Hostname: 192.168.1.120

Uuid: 3d00042b-4e44-4680-98f7-98b814354001

State: Peer in Cluster (Connected)

Then create a replicated volume visible in the Web Console, make Glusterfs storage based on this volume and convert it into Data (Master):

[root@ovirt1 ~]# gluster volume create data02-share  replica 2 \

ovirt1:/GLSD/node-replicate ovirt2:/GLSD/node-replicate

volume create: data02-share: success: please start the volume to access data

Follow carefully http://community.redhat.com/ovirt-3-3-glusterized/ regarding

1. Editing /etc/glusterfs/glusterd.vol to add the line

"option rpc-auth-allow-insecure on"

2. gluster volume set data server.allow-insecure on

before starting the volume; otherwise you won't be able to start VMs.
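
A minimal sketch of those two steps on this cluster (volume name data02-share assumed); note that the option has to live inside the "volume management" block of glusterd.vol, so edit the file rather than blindly appending to it, and restart glusterd on both peers:

# vi /etc/glusterfs/glusterd.vol            # add: option rpc-auth-allow-insecure on
# service glusterd restart                  # repeat on ovirt2
# gluster volume set data02-share server.allow-insecure on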

Then set the right permissions for the manually created volume:

[root@ovirt1 ~]#  gluster volume set  data02-share  storage.owner-uid 36
[root@ovirt1 ~]#  gluster volume  set data02-share  storage.owner-gid 36

[root@ovirt1 ~]# gluster volume set data02-share quick-read off

volume set: success

[root@ovirt1 ~]# gluster volume set data02-share cluster.eager-lock on

volume set: success

[root@ovirt1 ~]# gluster volume set data02-share performance.stat-prefetch off

volume set: success

[root@ovirt1 ~]# gluster volume info

Volume Name: data02-share

Type: Replicate

Volume ID: 282545cd-583b-4211-a0f4-22eea4142953

Status: Started

Number of Bricks: 1 x 2 = 2

Transport-type: tcp

Bricks:

Brick1: ovirt1:/GLSD/node-replicate

Brick2: ovirt2:/GLSD/node-replicate

Options Reconfigured:

performance.stat-prefetch: off

cluster.eager-lock: on

performance.quick-read: off

storage.owner-uid: 36

storage.owner-gid: 36

server.allow-insecure: on

[root@ovirt1 a16d3f36-1a40-4867-9ecb-bbae78189c03]# ls -la

total 5651976

drwxr-xr-x. 2 vdsm kvm       4096 Nov 16 02:01 .

drwxr-xr-x. 3 vdsm kvm       4096 Nov 16 02:01 ..

-rw-rw----. 2 vdsm kvm 9663676416 Nov 16 10:44 b6fc0ebd-1e49-4056-b7b0-2f8167867e5f

-rw-rw----. 2 vdsm kvm    1048576 Nov 16 02:01 b6fc0ebd-1e49-4056-b7b0-2f8167867e5f.lease

-rw-r--r--. 2 vdsm kvm        268 Nov 16 02:01 b6fc0ebd-1e49-4056-b7b0-2f8167867e5f.meta

[root@ovirt1 a16d3f36-1a40-4867-9ecb-bbae78189c03]# ssh ovirt2

Last login: Sat Nov 16 10:26:05 2013 from ovirt1.localdomain

[root@ovirt2 ~]# cd /GLSD/node-replicate/12c1221b-c500-4d21-87ac-1cdd0e0d5269/images/a16d3f36-1a40-4867-9ecb-bbae78189c03

[root@ovirt2 a16d3f36-1a40-4867-9ecb-bbae78189c03]# ls -la

total 5043492

drwxr-xr-x. 2 vdsm kvm       4096 Nov 16 02:01 .

drwxr-xr-x. 3 vdsm kvm       4096 Nov 16 02:01 ..

-rw-rw----. 2 vdsm kvm 9663676416 Nov 16 10:44 b6fc0ebd-1e49-4056-b7b0-2f8167867e5f

-rw-rw----. 2 vdsm kvm    1048576 Nov 16 02:01 b6fc0ebd-1e49-4056-b7b0-2f8167867e5f.lease

-rw-r--r--. 2 vdsm kvm        268 Nov 16 02:01 b6fc0ebd-1e49-4056-b7b0-2f8167867e5f.meta

[root@ovirt2 a16d3f36-1a40-4867-9ecb-bbae78189c03]# ls -la

total 5065892

drwxr-xr-x. 2 vdsm kvm       4096 Nov 16 02:01 .

drwxr-xr-x. 3 vdsm kvm       4096 Nov 16 02:01 ..

-rw-rw----. 2 vdsm kvm 9663676416 Nov 16 10:45 b6fc0ebd-1e49-4056-b7b0-2f8167867e5f

-rw-rw----. 2 vdsm kvm    1048576 Nov 16 02:01 b6fc0ebd-1e49-4056-b7b0-2f8167867e5f.lease

-rw-r--r--. 2 vdsm kvm        268 Nov 16 02:01 b6fc0ebd-1e49-4056-b7b0-2f8167867e5f.meta

[root@ovirt2 a16d3f36-1a40-4867-9ecb-bbae78189c03]# ls -la

total 5295140

drwxr-xr-x. 2 vdsm kvm       4096 Nov 16 02:01 .

drwxr-xr-x. 3 vdsm kvm       4096 Nov 16 02:01 ..

-rw-rw----. 2 vdsm kvm 9663676416 Nov 16 10:47 b6fc0ebd-1e49-4056-b7b0-2f8167867e5f

-rw-rw----. 2 vdsm kvm    1048576 Nov 16 02:01 b6fc0ebd-1e49-4056-b7b0-2f8167867e5f.lease

-rw-r--r--. 2 vdsm kvm        268 Nov 16 02:01 b6fc0ebd-1e49-4056-b7b0-2f8167867e5f.meta

The filesystem layout looks like:

[root@ovirt1 a16d3f36-1a40-4867-9ecb-bbae78189c03]# df -h

Filesystem                               Size  Used Avail Use% Mounted on

/dev/mapper/fedora00-root                145G   24G  113G  18% /

devtmpfs                                 3.9G     0  3.9G   0% /dev

tmpfs                                    3.9G  100K  3.9G   1% /dev/shm

tmpfs                                    3.9G  1.1M  3.9G   1% /run

tmpfs                                    3.9G     0  3.9G   0% /sys/fs/cgroup

tmpfs                                    3.9G   76K  3.9G   1% /tmp

/dev/sdb3                                477M   87M  362M  20% /boot

ovirt1.localdomain:data02-share          125G   10G  109G   9% /rhev/data-center/mnt/glusterSD/ovirt1.localdomain:data02-share

ovirt1.localdomain:/var/lib/exports/iso  145G   24G  113G  18% /rhev/data-center/mnt/ovirt1.localdomain:_var_lib_exports_iso

192.168.1.120:/var/lib/exports/export    145G   24G  113G  18% /rhev/data-center/mnt/192.168.1.120:_var_lib_exports_export

Hidden issues 

To make the environment stable, the Storage Pool Manager was moved to ovirt2.localdomain:

In this case NFS mount requests from ovirt2 are satisfied successfully.  View the next snapshot:

Detailed filesystems layout on ovirt1 and ovirt2

[root@ovirt1 ~]# df -h
Filesystem                               Size  Used Avail Use% Mounted on
/dev/mapper/fedora00-root                145G   31G  107G  23% /
devtmpfs                                 3.9G     0  3.9G   0% /dev
tmpfs                                    3.9G  104K  3.9G   1% /dev/shm
tmpfs                                    3.9G  1.1M  3.9G   1% /run
tmpfs                                    3.9G     0  3.9G   0% /sys/fs/cgroup
tmpfs                                    3.9G   80K  3.9G   1% /tmp
/dev/sdb3                                477M   87M  362M  20% /boot
ovirt1.localdomain:data02-share          125G   16G  104G  13% /rhev/data-center/mnt/glusterSD/ovirt1.localdomain:data02-share
ovirt1.localdomain:/var/lib/exports/iso  145G   31G  107G  23% /rhev/data-center/mnt/ovirt1.localdomain:_var_lib_exports_iso
192.168.1.120:/var/lib/exports/export    145G   31G  107G  23% /rhev/data-center/mnt/192.168.1.120:_var_lib_exports_export

[root@ovirt1 ~]# ssh ovirt2

Last login: Sun Nov 17 15:04:29 2013 from ovirt1.localdomain

[root@ovirt2 ~]# ifconfig

lo: flags=73<UP,LOOPBACK,RUNNING>  mtu 65536
inet 127.0.0.1  netmask 255.0.0.0
inet6 ::1  prefixlen 128  scopeid 0x10
loop  txqueuelen 0  (Local Loopback)
RX packets 17083  bytes 95312048 (90.8 MiB)
RX errors 0  dropped 0  overruns 0  frame 0
TX packets 17083  bytes 95312048 (90.8 MiB)
TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0

ovirtmgmt: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1500
inet 192.168.1.130  netmask 255.255.255.0  broadcast 192.168.1.255
inet6 fe80::222:15ff:fe63:e4e2  prefixlen 64  scopeid 0x20
ether 00:22:15:63:e4:e2  txqueuelen 0  (Ethernet)
RX packets 1876878  bytes 451006322 (430.1 MiB)
RX errors 0  dropped 0  overruns 0  frame 0
TX packets 2049680  bytes 218222806 (208.1 MiB)
TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0

p37p1: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1500
inet6 fe80::222:15ff:fe63:e4e2  prefixlen 64  scopeid 0x20
ether 00:22:15:63:e4:e2  txqueuelen 1000  (Ethernet)
RX packets 1877201  bytes 477310768 (455.1 MiB)
RX errors 0  dropped 0  overruns 0  frame 0
TX packets 2049698  bytes 218224910 (208.1 MiB)
TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0
device interrupt 17

[root@ovirt2 ~]# df -h
Filesystem                               Size  Used Avail Use% Mounted on
/dev/mapper/fedora02-root                125G   16G  104G  13% /
devtmpfs                                 3.9G     0  3.9G   0% /dev
tmpfs                                    3.9G   92K  3.9G   1% /dev/shm
tmpfs                                    3.9G  984K  3.9G   1% /run
tmpfs                                    3.9G     0  3.9G   0% /sys/fs/cgroup
tmpfs                                    3.9G   44K  3.9G   1% /tmp
/dev/sda3                                477M   87M  362M  20% /boot
ovirt1.localdomain:data02-share          125G   16G  104G  13% /rhev/data-center/mnt/glusterSD/ovirt1.localdomain:data02-share
ovirt1.localdomain:/var/lib/exports/iso  145G   31G  107G  23% /rhev/data-center/mnt/ovirt1.localdomain:_var_lib_exports_iso
192.168.1.120:/var/lib/exports/export    145G   31G  107G  23% /rhev/data-center/mnt/192.168.1.120:_var_lib_exports_export


Attempt to install oVirt 3.3 & 3.3.1 on Fedora 19

November 13, 2013

***********************************************************************************

UPDATE on 12/07/2013:  Even with a downgraded apache-sshd you might repeatedly get "Unexpected connection interruption" when attempting to add a new host; in this case run one more time

# engine-setup

on the master server. In my experience it helped several times (3.3.1 on F19). oVirt 3.3.0.1 never required this last hack.

UPDATE on 11/23/2013:  The same schema works for 3.3.1 with "yum downgrade apache-sshd" applied to be able to add a new host. When creating a VM it's possible to select NIC1 "ovirtmgmt/ovirtmgmt".  See also http://www.ovirt.org/Features/Detailed_OSN_Integration regarding setting up Neutron (Quantum) to create VLANs (external provider)

**********************************************************************************

Following below is an attempt to create a two node oVirt 3.3 cluster and virtual machines using replicated glusterfs 3.4.1 volumes. Choosing firewalld as the configured firewall seems unacceptable for this purpose for the time being; selecting the iptables firewall allows the task to be completed. The IPv4 firewall with iptables just works for me with no pain, and I clearly understand what to do when problems come up, nothing else. I also believe that any post claiming to be a "Howto" should be reproducible by any newcomer easily and successfully, without frustration or disappointment.

First, fix the NFS Server bug still affecting F19: https://bugzilla.redhat.com/show_bug.cgi?id=970595

Please, also be aware of http://www.ovirt.org/OVirt_3.3_TestDay#Known_issues

Quote :

Known issues : host installation

Fedora 19: If the ovirtmgmt bridge is not successfully installed during initial host-setup, manually click on the host, setup networks, and add the ovirtmgmt bridge. It is recommended to disable NetworkManager as well.

End quote

Second, put the following under /etc/sysconfig/network-scripts:

[root@ovirt1 network-scripts]# cat ifcfg-ovirtmgmt

DEVICE=ovirtmgmt

TYPE=Bridge

ONBOOT=yes

DELAY=0

BOOTPROTO=static

IPADDR=192.168.1.142

NETMASK=255.255.255.0

GATEWAY=192.168.1.1

DNS1=83.221.202.254

NM_CONTROLLED="no"

In particular (my box) :

[root@ovirt1 network-scripts]# cat ifcfg-enp2s0

BOOTPROTO=none

TYPE="Ethernet"

ONBOOT="yes"

NAME="enp2s0"

BRIDGE="ovirtmgmt"

HWADDR=00:22:15:63:e4:e2

Disable NetworkManager and enable service network.

Skipping these two steps in my case crashed the install per

http://community.redhat.com/up-and-running-with-ovirt-3-3/

The first for an obvious reason; the second didn't bring up vdsmd during the install, and engine.log generated a bunch of errors complaining about the absence of the ovirtmgmt network. The web console was actually useless (again, in my case), unable to manage storage domains stuck in down status.

View also : http://www.mail-archive.com/users@ovirt.org/msg11394.html

Follow http://community.redhat.com/up-and-running-with-ovirt-3-3/

$ sudo yum localinstall http://ovirt.org/releases/ovirt-release-fedora.noarch.rpm -y

$ sudo yum install ovirt-engine-setup-plugin-allinone -y

Before running engine-setup:

[root@ovirt1 ~]# yum install ovirt-engine-websocket-proxy

Loaded plugins: langpacks, refresh-packagekit, versionlock

Resolving Dependencies

--> Running transaction check

--> Package ovirt-engine-websocket-proxy.noarch 0:3.3.0.1-1.fc19 will be installed

--> Finished Dependency Resolution

Dependencies Resolved

===================================================

Package                                 Arch              Version                   Repository               Size

===================================================

Installing:

ovirt-engine-websocket-proxy            noarch            3.3.0.1-1.fc19            ovirt-stable             12 k

Transaction Summary

===================================================

Install  1 Package

Total download size: 12 k

Installed size: 18 k

Is this ok [y/d/N]: y

Downloading packages:

ovirt-engine-websocket-proxy-3.3.0.1-1.fc19.noarch.rpm                                      |  12 kB  00:00:02

Running transaction check

Running transaction test

Transaction test succeeded

Running transaction

Installing : ovirt-engine-websocket-proxy-3.3.0.1-1.fc19.noarch                                              1/1

Verifying  : ovirt-engine-websocket-proxy-3.3.0.1-1.fc19.noarch                                              1/1

Installed:

ovirt-engine-websocket-proxy.noarch 0:3.3.0.1-1.fc19

Complete!

[root@ovirt1 ~]# engine-setup

[ INFO  ] Stage: Initializing

[ INFO  ] Stage: Environment setup

Configuration files: ['/etc/ovirt-engine-setup.conf.d/10-packaging-aio.conf', '/etc/ovirt-engine-setup.conf.d/10-packaging.conf']

Log file: /var/log/ovirt-engine/setup/ovirt-engine-setup-20131112004446.log

Version: otopi-1.1.2 (otopi-1.1.2-1.fc19)

[ INFO  ] Hardware supports virtualization

[ INFO  ] Stage: Environment packages setup

[ INFO  ] Stage: Programs detection

[ INFO  ] Stage: Environment setup

[ INFO  ] Stage: Environment customization

Configure VDSM on this host? (Yes, No) [No]: Yes

Local storage domain path [/var/lib/images]:

Local storage domain name [local_storage]:

–== PACKAGES ==–

[ INFO  ] Checking for product updates…

[ INFO  ] No product updates found

–== NETWORK CONFIGURATION ==–

Host fully qualified DNS name of this server [ovirt1.localdomain]:

[WARNING] Failed to resolve ovirt1.localdomain using DNS, it can be resolved only locally

          firewalld was detected on your computer. Do you wish Setup to configure it? (yes, no) [yes]:  no

         iptables firewall was detected on your computer. Do you wish Setup to configure it? (yes, no) [yes]:  yes

–== DATABASE CONFIGURATION ==–

Where is the database located? (Local, Remote) [Local]:

Setup can configure the local postgresql server automatically for the engine to run. This may conflict with existing applications.

Would you like Setup to automatically configure postgresql, or prefer to perform that manually? (Automatic, Manual) [Automatic]:

Using existing credentials

–== OVIRT ENGINE CONFIGURATION ==–

Engine admin password:

Confirm engine admin password:

Application mode (Both, Virt, Gluster) [Both]:

Default storage type: (NFS, FC, ISCSI, POSIXFS) [NFS]:

–== PKI CONFIGURATION ==–

Organization name for certificate [localdomain]:

–== APACHE CONFIGURATION ==–

Setup can configure the default page of the web server to present the application home page. This may conflict with existing applications.

Do you wish to set the application as the default page of the web server? (Yes, No) [Yes]:

Setup can configure apache to use SSL using a certificate issued from the internal CA.

Do you wish Setup to configure that, or prefer to perform that manually? (Automatic, Manual) [Automatic]:

–== SYSTEM CONFIGURATION ==–

Configure an NFS share on this server to be used as an ISO Domain? (Yes, No) [Yes]:

Local ISO domain path [/var/lib/exports/iso]:

Local ISO domain name [ISO_DOMAIN]:

Configure WebSocket Proxy on this machine? (Yes, No) [Yes]:

–== END OF CONFIGURATION ==–

[ INFO  ] Stage: Setup validation

[WARNING] Less than 16384MB of memory is available

–== CONFIGURATION PREVIEW ==–

Database name                      : engine

Database secured connection        : False

Database host                      : localhost

Database user name                 : engine

Database host name validation      : False

Datbase port                       : 5432

NFS setup                          : True

PKI organization                   : localdomain

NFS mount point                    : /var/lib/exports/iso

Application mode                   : both

  Firewall manager                   : iptables

Configure WebSocket Proxy          : True

Host FQDN                          : ovirt1.localdomain

Datacenter storage type            : nfs

Configure local database           : True

Set application as default page    : True

Configure Apache SSL               : True

Configure VDSM on this host        : True

Local storage domain directory     : /var/lib/images

Please confirm installation settings (OK, Cancel) [OK]:

[ INFO  ] Stage: Transaction setup

[ INFO  ] Stopping engine service

[ INFO  ] Stopping websocket-proxy service

[ INFO  ] Stage: Misc configuration

[ INFO  ] Stage: Package installation

[ INFO  ] Stage: Misc configuration

[ INFO  ] Initializing PostgreSQL

[ INFO  ] Creating PostgreSQL database

[ INFO  ] Configurating PostgreSQL

[ INFO  ] Creating database schema

[ INFO  ] Creating CA

[ INFO  ] Configurating WebSocket Proxy

[ INFO  ] Generating post install configuration file ‘/etc/ovirt-engine-setup.conf.d/20-setup-ovirt-post.conf’

[ INFO  ] Stage: Transaction commit

[ INFO  ] Stage: Closing up

–== SUMMARY ==–

[WARNING] Less than 16384MB of memory is available

A default ISO NFS share has been created on this host.

If IP based access restrictions are required, edit:

entry /var/lib/exports/iso in /etc/exports.d/ovirt-engine-iso-domain.exports

SSH fingerprint: 90:16:09:69:8A:D8:43:C9:87:A7:CF:1A:A3:3B:71:44

Internal CA 5F:2E:12:99:32:55:07:11:C9:F9:AB:58:02:C9:A6:8E:16:91:CA:C1

Web access is enabled at:

http://ovirt1.localdomain:80/ovirt-engine

https://ovirt1.localdomain:443/ovirt-engine

Please use the user “admin” and password specified in order to login into oVirt Engine

–== END OF SUMMARY ==–

[ INFO  ] Starting engine service

[ INFO  ] Restarting httpd

[ INFO  ] Waiting for VDSM host to become operational. This may take several minutes…

[ INFO  ] The VDSM Host is now operational

[ INFO  ] Restarting nfs services

[ INFO  ] Generating answer file ‘/var/lib/ovirt-engine/setup/answers/20131112005106-setup.conf’

[ INFO  ] Stage: Clean up

Log file is located at /var/log/ovirt-engine/setup/ovirt-engine-setup-20131112004446.log

[ INFO  ] Stage: Pre-termination

[ INFO  ] Stage: Termination

[ INFO  ] Execution of setup completed successfully

Not sure it’s a must, but I’ve also updated /etc/sysconfig/iptables with

-A INPUT -m state --state NEW -m tcp -p tcp --dport 24007 -j ACCEPT
-A INPUT -m state --state NEW -m tcp -p tcp --dport 24008 -j ACCEPT
-A INPUT -m state --state NEW -m tcp -p tcp --dport 24009 -j ACCEPT
-A INPUT -m state --state NEW -m tcp -p tcp --dport 24010 -j ACCEPT
-A INPUT -m state --state NEW -m tcp -p tcp --dport 24011 -j ACCEPT
-A INPUT -m state --state NEW -m tcp -p tcp --dport 38465:38469 -j ACCEPT

Installing 3.3.1 doesn't require ovirt-engine-websocket-proxy and looks like this:

[root@ovirt1 ~]# engine-setup
[ INFO  ] Stage: Initializing
[ INFO  ] Stage: Environment setup
Configuration files: ['/etc/ovirt-engine-setup.conf.d/10-packaging-aio.conf', '/etc/ovirt-engine-setup.conf.d/10-packaging.conf']
Log file: /var/log/ovirt-engine/setup/ovirt-engine-setup-20131122143650.log
Version: otopi-1.1.2 (otopi-1.1.2-1.fc19)
[ INFO  ] Hardware supports virtualization
[ INFO  ] Stage: Environment packages setup
[ INFO  ] Stage: Programs detection
[ INFO  ] Stage: Environment setup
[ INFO  ] Stage: Environment customization

–== PACKAGES ==–

[ INFO  ] Checking for product updates…
[ INFO  ] No product updates found

–== ALL IN ONE CONFIGURATION ==–

Configure VDSM on this host? (Yes, No) [No]: Yes
Local storage domain path [/var/lib/images]:
Local storage domain name [local_storage]:

–== NETWORK CONFIGURATION ==–

Host fully qualified DNS name of this server [ovirt1.localdomain]:
[WARNING] Failed to resolve ovirt1.localdomain using DNS, it can be resolved only locally

–== DATABASE CONFIGURATION ==–

Where is the database located? (Local, Remote) [Local]:
Setup can configure the local postgresql server automatically for the engine to run. This may conflict with existing applications.
Would you like Setup to automatically configure postgresql, or prefer to perform that manually? (Automatic, Manual) [Automatic]:
Using existing credentials

–== OVIRT ENGINE CONFIGURATION ==–

Engine admin password:
Confirm engine admin password:
Application mode (Both, Virt, Gluster) [Both]:
Default storage type: (NFS, FC, ISCSI, POSIXFS) [NFS]:

–== PKI CONFIGURATION ==–

Organization name for certificate [localdomain]:

–== APACHE CONFIGURATION ==–

Setup can configure the default page of the web server to present the application home page. This may conflict with existing applications.
Do you wish to set the application as the default page of the web server? (Yes, No) [Yes]:
Setup can configure apache to use SSL using a certificate issued from the internal CA.
Do you wish Setup to configure that, or prefer to perform that manually? (Automatic, Manual) [Automatic]:

–== SYSTEM CONFIGURATION ==–

Configure an NFS share on this server to be used as an ISO Domain? (Yes, No) [Yes]:
Local ISO domain path [/var/lib/exports/iso]:
Local ISO domain name [ISO_DOMAIN]:
Configure WebSocket Proxy on this machine? (Yes, No) [Yes]:

–== END OF CONFIGURATION ==–

[ INFO  ] Stage: Setup validation
[WARNING] Less than 16384MB of memory is available

–== CONFIGURATION PREVIEW ==–

Database name                      : engine
Database secured connection        : False
Database host                      : localhost
Database user name                 : engine
Database host name validation      : False
Datbase port                       : 5432
NFS setup                          : True
PKI organization                   : localdomain
NFS mount point                    : /var/lib/exports/iso
Application mode                   : both
Configure WebSocket Proxy          : True
Host FQDN                          : ovirt1.localdomain
Datacenter storage type            : nfs
Configure local database           : True
Set application as default page    : True
Configure Apache SSL               : True
Configure VDSM on this host        : True
Local storage domain directory     : /var/lib/images

Please confirm installation settings (OK, Cancel) [OK]:

[ INFO  ] Stage: Transaction setup
[ INFO  ] Stopping engine service
[ INFO  ] Stopping websocket-proxy service
[ INFO  ] Stage: Misc configuration
[ INFO  ] Stage: Package installation
[ INFO  ] Stage: Misc configuration
[ INFO  ] Initializing PostgreSQL
[ INFO  ] Creating PostgreSQL database
[ INFO  ] Configurating PostgreSQL
[ INFO  ] Creating database schema
[ INFO  ] Creating CA
[ INFO  ] Configurating WebSocket Proxy
[ INFO  ] Generating post install configuration file ‘/etc/ovirt-engine-setup.conf.d/20-setup-ovirt-post.conf’
[ INFO  ] Stage: Transaction commit
[ INFO  ] Stage: Closing up

–== SUMMARY ==–

[WARNING] Less than 16384MB of memory is available
An ISO NFS share has been created on this host.
If IP based access restrictions are required, edit:
entry /var/lib/exports/iso in /etc/exports.d/ovirt-engine-iso-domain.exports
SSH fingerprint: DB:C5:99:16:0D:67:4B:F5:62:99:B2:D3:E2:C7:7F:59
Internal CA 93:BB:05:42:C6:6F:00:28:A1:F1:90:C5:3E:E3:91:D6:1F:1B:17:3D
The following network ports should be opened:
tcp:111
tcp:2049
tcp:32803
tcp:443
tcp:49152-49216
tcp:5432
tcp:5634-6166
tcp:6100
tcp:662
tcp:80
tcp:875
tcp:892
udp:111
udp:32769
udp:662
udp:875
udp:892
An example of the required configuration for iptables can be found at:
/etc/ovirt-engine/iptables.example
In order to configure firewalld, copy the files from
/etc/ovirt-engine/firewalld to /etc/firewalld/services
and execute the following commands:
firewall-cmd -service ovirt-postgres
firewall-cmd -service ovirt-https
firewall-cmd -service ovirt-aio
firewall-cmd -service ovirt-websocket-proxy
firewall-cmd -service ovirt-nfs
firewall-cmd -service ovirt-http
Web access is enabled at:
http://ovirt1.localdomain:80/ovirt-engine
https://ovirt1.localdomain:443/ovirt-engine
Please use the user “admin” and password specified in order to login into oVirt Engine

–== END OF SUMMARY ==–

[ INFO  ] Starting engine service
[ INFO  ] Restarting httpd
[ INFO  ] Waiting for VDSM host to become operational. This may take several minutes…
[ INFO  ] The VDSM Host is now operational
[ INFO  ] Restarting nfs services
[ INFO  ] Generating answer file ‘/var/lib/ovirt-engine/setup/answers/20131122144055-setup.conf’
[ INFO  ] Stage: Clean up
Log file is located at /var/log/ovirt-engine/setup/ovirt-engine-setup-20131122143650.log
[ INFO  ] Stage: Pre-termination
[ INFO  ] Stage: Termination
[ INFO  ] Execution of setup completed successfully

Updated /etc/sysconfig/iptables with

-A INPUT -m state --state NEW -m tcp -p tcp --dport 24007 -j ACCEPT
-A INPUT -m state --state NEW -m tcp -p tcp --dport 24008 -j ACCEPT
-A INPUT -m state --state NEW -m tcp -p tcp --dport 24009 -j ACCEPT
-A INPUT -m state --state NEW -m tcp -p tcp --dport 24010 -j ACCEPT
-A INPUT -m state --state NEW -m tcp -p tcp --dport 24011 -j ACCEPT
-A INPUT -m state --state NEW -m tcp -p tcp --dport 38465:38469 -j ACCEPT

VMs running on different hosts of two node cluster started via Web Console

[root@ovirt1 ~]# service libvirtd status

Redirecting to /bin/systemctl status  libvirtd.service

libvirtd.service – Virtualization daemon

Loaded: loaded (/usr/lib/systemd/system/libvirtd.service; enabled)

Active: active (running) since Fri 2013-11-22 10:31:07 VOLT; 54min ago

Main PID: 1131 (libvirtd)

CGroup: name=systemd:/system/libvirtd.service

├─1131 /usr/sbin/libvirtd --listen

└─8606 /usr/bin/qemu-system-x86_64 -machine accel=kvm -name UbuntuSalamander -S -machine pc-1.0,accel=kvm,usb=of…

Nov 22 10:31:07 ovirt1.localdomain libvirtd[1131]: 2013-11-22 06:31:07.778+0000: 1131: info : libvirt version: 1.0.5.7….org)

Nov 22 10:31:07 ovirt1.localdomain libvirtd[1131]: 2013-11-22 06:31:07.778+0000: 1131: debug : virLogParseOutputs:1331…d.log

[root@ovirt1 ~]# ssh ovirt2

Last login: Fri Nov 22 10:45:26 2013

[root@ovirt2 ~]# service libvirtd  status

Redirecting to /bin/systemctl status  libvirtd.service

libvirtd.service – Virtualization daemon

Loaded: loaded (/usr/lib/systemd/system/libvirtd.service; enabled)

Active: active (running) since Fri 2013-11-22 10:44:47 VOLT; 41min ago

Main PID: 1019 (libvirtd)

CGroup: name=systemd:/system/libvirtd.service

├─1019 /usr/sbin/libvirtd --listen

└─2776 /usr/bin/qemu-system-x86_64 -machine accel=kvm -name VF19NW -S -machine pc-1.0,accel=kvm,usb=off -cpu Pen…

Nov 22 10:44:48 ovirt2.localdomain libvirtd[1019]: 2013-11-22 06:44:48.317+0000: 1019: info : libvirt version: 1.0.5.7….org)

Nov 22 10:44:48 ovirt2.localdomain libvirtd[1019]: 2013-11-22 06:44:48.317+0000: 1019: debug : virLogParseOutputs:1331…d.log

 

Virtual machines using replicated glusterfs 3.4.1 volumes

Add the new host via the Web Console.  Make sure that on the new host you previously ran:

$ sudo yum localinstall http://ovirt.org/releases/ovirt-release-fedora.noarch.rpm -y

otherwise it stays incompatible with oVirt 3.3 (3.2 at maximum).

Set up the ovirtmgmt bridge, disable firewalld and enable the iptables firewall manager.

On server ovirt1, run the following commands before adding the new host ovirt2:

# ssh-keygen (Hit Enter to accept all of the defaults)
# ssh-copy-id -i ~/.ssh/id_rsa.pub root@ovirt2

Even with a downgraded apache-sshd you might repeatedly get "Unexpected connection interruption"; in this case run one more time  # engine-setup  on the master server. In my experience it helped several times (3.3.1 on F19). oVirt 3.3.0.1 never required this last hack.

Version 3.3.1 allows creating Gluster volumes via the GUI, automatically configuring the required features for a volume created in the graphical environment.

Regarding the design of glusterfs volumes for a production environment, see https://forge.gluster.org/hadoop/pages/InstallingAndConfiguringGlusterFS

Double check via command line

 # gluster volume info

Volume Name: ovirt-data02
Type: Replicate
Volume ID: b1cf98c9-5525-48d4-9fb0-bde47d7a98b0
Status: Started
Number of Bricks: 1 x 2 = 2
Transport-type: tcp
Bricks:
Brick1: ovirt1.localdomain:/home/boris/node-replicate
Brick2: 192.168.1.127:/home/boris/node-replicate
Options Reconfigured:
storage.owner-gid: 36
storage.owner-uid: 36
cluster.server-quorum-type: server
cluster.quorum-type: auto
network.remote-dio: enable
cluster.eager-lock: enable
performance.stat-prefetch: off
performance.io-cache: off
performance.read-ahead: off
performance.quick-read: off
auth.allow: *
user.cifs: enable

nfs.disable: off

Creating an XFS-based replicated gluster volume via oVirt 3.3.1 per  https://forge.gluster.org/hadoop/pages/InstallingAndConfiguringGlusterFS
 
 [root@ovirt1 ~]# gluster volume info ovirt-data05
Volume Name: ovirt-data05
Type: Replicate
Volume ID: ff0955b6-668a-4eab-acf0-606456ee0005
Status: Started
Number of Bricks: 1 x 2 = 2
Transport-type: tcp
Bricks:
Brick1: ovirt1.localdomain:/mnt/brick1/node-replicate
Brick2: 192.168.1.127:/mnt/brick1/node-replicate
Options Reconfigured:
nfs.disable: off
user.cifs: enable
auth.allow: *
storage.owner-uid: 36
storage.owner-gid: 36
 
[root@ovirt1 ~]# mount | grep xfs
selinuxfs on /sys/fs/selinux type selinuxfs (rw,relatime)
/dev/sda3 on /mnt/brick1 type xfs (rw,noatime,seclabel,attr2,inode64,noquota)
 
[root@ovirt1 ~]# df -h
Filesystem                               Size  Used Avail Use% Mounted on
/dev/mapper/fedora00-root                145G   26G  112G  19% /
devtmpfs                                 3.9G     0  3.9G   0% /dev
tmpfs                                    3.9G  2.2M  3.9G   1% /dev/shm
tmpfs                                    3.9G 1004K  3.9G   1% /run
tmpfs                                    3.9G     0  3.9G   0% /sys/fs/cgroup
tmpfs                                    3.9G   76K  3.9G   1% /tmp
/dev/sda1                                477M  105M  344M  24% /boot
/dev/sda3                                 98G   19G   80G  19% /mnt/brick1
ovirt1.localdomain:ovirt-data05           98G   19G   80G  19% /rhev/data-center/mnt/glusterSD/ovirt1.localdomain:ovirt-data05
ovirt1.localdomain:/var/lib/exports/iso  145G   26G  112G  19% /rhev/data-center/mnt/ovirt1.localdomain:_var_lib_exports_iso
192.168.1.137:/var/lib/exports/export    145G   26G  112G  19% /rhev/data-center/mnt/192.168.1.137:_var_lib_exports_export
ovirt1.localdomain:ovirt-data02          145G   26G  112G  19% /rhev/data-center/mnt/glusterSD/ovirt1.localdomain:ovirt-data02

Creating a glusterfs 3.4.1 cluster with ovirt1 and ovirt2 via the CLI (3.3.0)

[root@ovirt1 ~]# gluster peer status

Number of Peers: 1

Hostname: ovirt2

Uuid: 8355d741-fc2d-4484-b6e3-ca0ef99658c1

State: Peer in Cluster (Connected)

[root@ovirt1 ~]# ssh ovirt2

Last login: Sat Nov 16 10:23:11 2013 from ovirt1.localdomain

[root@ovirt2 ~]# gluster peer status

Number of Peers: 1

Hostname: 192.168.1.120

Uuid: 3d00042b-4e44-4680-98f7-98b814354001

State: Peer in Cluster (Connected)

Then create a replicated volume visible in the Web Console, make Glusterfs storage based on this volume and convert it into Data (Master):

[root@ovirt1 ~]# gluster volume create data02-share  replica 2 \

ovirt1:/GLSD/node-replicate ovirt2:/GLSD/node-replicate

volume create: data02-share: success: please start the volume to access data

Follow carefully http://community.redhat.com/ovirt-3-3-glusterized/ regarding

1. Editing /etc/glusterfs/glusterd.vol to add the line

"option rpc-auth-allow-insecure on"

2. gluster volume set data server.allow-insecure on

before starting the volume; otherwise you won't be able to start VMs.

Then set the right permissions for the manually created volume:

[root@ovirt1 ~]#  gluster volume set  data02-share  storage.owner-uid 36
[root@ovirt1 ~]#  gluster volume  set data02-share  storage.owner-gid 36

[root@ovirt1 ~]# gluster volume set data02-share quick-read off

volume set: success

[root@ovirt1 ~]# gluster volume set data02-share cluster.eager-lock on

volume set: success

[root@ovirt1 ~]# gluster volume set data02-share performance.stat-prefetch off

volume set: success

[root@ovirt1 ~]# gluster volume info

Volume Name: data02-share

Type: Replicate

Volume ID: 282545cd-583b-4211-a0f4-22eea4142953

Status: Started

Number of Bricks: 1 x 2 = 2

Transport-type: tcp

Bricks:

Brick1: ovirt1:/GLSD/node-replicate

Brick2: ovirt2:/GLSD/node-replicate

Options Reconfigured:

performance.stat-prefetch: off

cluster.eager-lock: on

performance.quick-read: off

storage.owner-uid: 36

storage.owner-gid: 36

server.allow-insecure: on

[root@ovirt1 a16d3f36-1a40-4867-9ecb-bbae78189c03]# ls -la

total 5651976

drwxr-xr-x. 2 vdsm kvm       4096 Nov 16 02:01 .

drwxr-xr-x. 3 vdsm kvm       4096 Nov 16 02:01 ..

-rw-rw----. 2 vdsm kvm 9663676416 Nov 16 10:44 b6fc0ebd-1e49-4056-b7b0-2f8167867e5f

-rw-rw----. 2 vdsm kvm    1048576 Nov 16 02:01 b6fc0ebd-1e49-4056-b7b0-2f8167867e5f.lease

-rw-r--r--. 2 vdsm kvm        268 Nov 16 02:01 b6fc0ebd-1e49-4056-b7b0-2f8167867e5f.meta

[root@ovirt1 a16d3f36-1a40-4867-9ecb-bbae78189c03]# ssh ovirt2

Last login: Sat Nov 16 10:26:05 2013 from ovirt1.localdomain

[root@ovirt2 ~]# cd /GLSD/node-replicate/12c1221b-c500-4d21-87ac-1cdd0e0d5269/images/a16d3f36-1a40-4867-9ecb-bbae78189c03

[root@ovirt2 a16d3f36-1a40-4867-9ecb-bbae78189c03]# ls -la

total 5043492

drwxr-xr-x. 2 vdsm kvm       4096 Nov 16 02:01 .

drwxr-xr-x. 3 vdsm kvm       4096 Nov 16 02:01 ..

-rw-rw----. 2 vdsm kvm 9663676416 Nov 16 10:44 b6fc0ebd-1e49-4056-b7b0-2f8167867e5f

-rw-rw----. 2 vdsm kvm    1048576 Nov 16 02:01 b6fc0ebd-1e49-4056-b7b0-2f8167867e5f.lease

-rw-r--r--. 2 vdsm kvm        268 Nov 16 02:01 b6fc0ebd-1e49-4056-b7b0-2f8167867e5f.meta

[root@ovirt2 a16d3f36-1a40-4867-9ecb-bbae78189c03]# ls -la

total 5065892

drwxr-xr-x. 2 vdsm kvm       4096 Nov 16 02:01 .

drwxr-xr-x. 3 vdsm kvm       4096 Nov 16 02:01 ..

-rw-rw----. 2 vdsm kvm 9663676416 Nov 16 10:45 b6fc0ebd-1e49-4056-b7b0-2f8167867e5f

-rw-rw----. 2 vdsm kvm    1048576 Nov 16 02:01 b6fc0ebd-1e49-4056-b7b0-2f8167867e5f.lease

-rw-r--r--. 2 vdsm kvm        268 Nov 16 02:01 b6fc0ebd-1e49-4056-b7b0-2f8167867e5f.meta

[root@ovirt2 a16d3f36-1a40-4867-9ecb-bbae78189c03]# ls -la

total 5295140

drwxr-xr-x. 2 vdsm kvm       4096 Nov 16 02:01 .

drwxr-xr-x. 3 vdsm kvm       4096 Nov 16 02:01 ..

-rw-rw----. 2 vdsm kvm 9663676416 Nov 16 10:47 b6fc0ebd-1e49-4056-b7b0-2f8167867e5f

-rw-rw----. 2 vdsm kvm    1048576 Nov 16 02:01 b6fc0ebd-1e49-4056-b7b0-2f8167867e5f.lease

-rw-r--r--. 2 vdsm kvm        268 Nov 16 02:01 b6fc0ebd-1e49-4056-b7b0-2f8167867e5f.meta

The filesystem layout looks like this:

[root@ovirt1 a16d3f36-1a40-4867-9ecb-bbae78189c03]# df -h

Filesystem                               Size  Used Avail Use% Mounted on

/dev/mapper/fedora00-root                145G   24G  113G  18% /

devtmpfs                                 3.9G     0  3.9G   0% /dev

tmpfs                                    3.9G  100K  3.9G   1% /dev/shm

tmpfs                                    3.9G  1.1M  3.9G   1% /run

tmpfs                                    3.9G     0  3.9G   0% /sys/fs/cgroup

tmpfs                                    3.9G   76K  3.9G   1% /tmp

/dev/sdb3                                477M   87M  362M  20% /boot

ovirt1.localdomain:data02-share          125G   10G  109G   9% /rhev/data-center/mnt/glusterSD/ovirt1.localdomain:data02-share

ovirt1.localdomain:/var/lib/exports/iso  145G   24G  113G  18% /rhev/data-center/mnt/ovirt1.localdomain:_var_lib_exports_iso

192.168.1.120:/var/lib/exports/export    145G   24G  113G  18% /rhev/data-center/mnt/192.168.1.120:_var_lib_exports_export

Setting up Ubuntu Salamander Server KVM via oVirt 3.3 on F19

Hidden issues

To make the environment stable the Storage Pool Manager was moved to ovirt2.localdomain; in this case NFS mount requests from ovirt2 are satisfied successfully. View the next snapshot:

Detailed filesystem layout on ovirt1 and ovirt2

[root@ovirt1 ~]# df -h
Filesystem                               Size  Used Avail Use% Mounted on
/dev/mapper/fedora00-root                145G   31G  107G  23% /
devtmpfs                                 3.9G     0  3.9G   0% /dev
tmpfs                                    3.9G  104K  3.9G   1% /dev/shm
tmpfs                                    3.9G  1.1M  3.9G   1% /run
tmpfs                                    3.9G     0  3.9G   0% /sys/fs/cgroup
tmpfs                                    3.9G   80K  3.9G   1% /tmp
/dev/sdb3                                477M   87M  362M  20% /boot
ovirt1.localdomain:data02-share          125G   16G  104G  13% /rhev/data-center/mnt/glusterSD/ovirt1.localdomain:data02-share
ovirt1.localdomain:/var/lib/exports/iso  145G   31G  107G  23% /rhev/data-center/mnt/ovirt1.localdomain:_var_lib_exports_iso
192.168.1.120:/var/lib/exports/export    145G   31G  107G  23% /rhev/data-center/mnt/192.168.1.120:_var_lib_exports_export

[root@ovirt1 ~]# ssh ovirt2

Last login: Sun Nov 17 15:04:29 2013 from ovirt1.localdomain

[root@ovirt2 ~]# ifconfig

lo: flags=73  mtu 65536
inet 127.0.0.1  netmask 255.0.0.0
inet6 ::1  prefixlen 128  scopeid 0x10
loop  txqueuelen 0  (Local Loopback)
RX packets 17083  bytes 95312048 (90.8 MiB)
RX errors 0  dropped 0  overruns 0  frame 0
TX packets 17083  bytes 95312048 (90.8 MiB)
TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0

ovirtmgmt: flags=4163  mtu 1500
inet 192.168.1.130  netmask 255.255.255.0  broadcast 192.168.1.255
inet6 fe80::222:15ff:fe63:e4e2  prefixlen 64  scopeid 0x20
ether 00:22:15:63:e4:e2  txqueuelen 0  (Ethernet)
RX packets 1876878  bytes 451006322 (430.1 MiB)
RX errors 0  dropped 0  overruns 0  frame 0
TX packets 2049680  bytes 218222806 (208.1 MiB)
TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0

p37p1: flags=4163  mtu 1500
inet6 fe80::222:15ff:fe63:e4e2  prefixlen 64  scopeid 0x20
ether 00:22:15:63:e4:e2  txqueuelen 1000  (Ethernet)
RX packets 1877201  bytes 477310768 (455.1 MiB)
RX errors 0  dropped 0  overruns 0  frame 0
TX packets 2049698  bytes 218224910 (208.1 MiB)
TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0
device interrupt 17

[root@ovirt2 ~]# df -h
Filesystem                               Size  Used Avail Use% Mounted on
/dev/mapper/fedora02-root                125G   16G  104G  13% /
devtmpfs                                 3.9G     0  3.9G   0% /dev
tmpfs                                    3.9G   92K  3.9G   1% /dev/shm
tmpfs                                    3.9G  984K  3.9G   1% /run
tmpfs                                    3.9G     0  3.9G   0% /sys/fs/cgroup
tmpfs                                    3.9G   44K  3.9G   1% /tmp
/dev/sda3                                477M   87M  362M  20% /boot
ovirt1.localdomain:data02-share          125G   16G  104G  13% /rhev/data-center/mnt/glusterSD/ovirt1.localdomain:data02-share
ovirt1.localdomain:/var/lib/exports/iso  145G   31G  107G  23% /rhev/data-center/mnt/ovirt1.localdomain:_var_lib_exports_iso
192.168.1.120:/var/lib/exports/export    145G   31G  107G  23% /rhev/data-center/mnt/192.168.1.120:_var_lib_exports_export

Spice vs VNC

Glusterfs replicated volume based Havana 2013.2 instances on Server With GlusterFS 3.4.1 Fedora 19 in two node cluster

November 2, 2013

The two node gluster 3.4.1 cluster setup follows below. Havana 2013.2 RDO has been installed via `packstack --allinone` on one of the boxes, with cinder tuned to create volumes on replicated glusterfs 3.4.1 storage. Several samples of creating bootable cinder volumes from images are described step by step, which actually provides a proof of concept for the article mentioned below.

Please view first this nice article: http://www.mirantis.com/blog/openstack-havana-glusterfs-and-what-improved-support-really-means/ and https://wiki.openstack.org/wiki/CinderSupportMatrix

Per https://forge.gluster.org/hadoop/pages/InstallingAndConfiguringGlusterFS

On server1, run the following command:

  ssh-keygen (hit Enter to accept all of the defaults)

Then, still on server1, run the following command for each other node (server) in the cluster:

  ssh-copy-id -i ~/.ssh/id_rsa.pub root@server4

View also https://forge.gluster.org/hadoop/pages/InstallingAndConfiguringGlusterFS for steps 5), 6), 7).
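
For a cluster with more than two nodes the same key distribution can be scripted; a minimal sketch (host names server2–server4 are placeholders for your own nodes):

# ssh-keygen                                  ( hit Enter to accept the defaults )
# for host in server2 server3 server4; do ssh-copy-id -i ~/.ssh/id_rsa.pub root@${host}; done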

[root@server1 ~]#   yum install glusterfs glusterfs-server glusterfs-fuse

[root@server1 ~(keystone_admin)]# service glusterd status

Redirecting to /bin/systemctl status  glusterd.service

glusterd.service – GlusterFS an clustered file-system server

Loaded: loaded (/usr/lib/systemd/system/glusterd.service; enabled)

Active: active (running) since Sat 2013-11-02 13:44:42 MSK; 1h 42min ago

Process: 2699 ExecStart=/usr/sbin/glusterd -p /run/glusterd.pid (code=exited, status=0/SUCCESS)

Main PID: 2700 (glusterd)

CGroup: name=systemd:/system/glusterd.service

├─2700 /usr/sbin/glusterd -p /run/glusterd.pid

├─2902 /usr/sbin/glusterfsd -s server1 --volfile-id cinder-volumes02.server1.home-boris-node-replicate -p /var/l…

├─5376 /usr/sbin/glusterfs -s localhost  --volfile-id gluster/glustershd -p /var/lib/glusterd/glustershd/run/glus…

├─6675 /usr/sbin/glusterfs -s localhost  --volfile-id gluster/nfs -p /var/lib/glusterd/nfs/run/nfs.pid -l /var/lo…

└─6683 /sbin/rpc.statd

Nov 02 13:44:40 server1 systemd[1]: Starting GlusterFS an clustered file-system server…

Nov 02 13:44:42 server1 systemd[1]: Started GlusterFS an clustered file-system server.

Nov 02 13:46:52 server1 rpc.statd[5383]: Version 1.2.7 starting

Nov 02 13:46:52 server1 sm-notify[5384]: Version 1.2.7 starting

[root@server1 ~]# service iptables stop

[root@server1 ~]# service iptables status

Redirecting to /bin/systemctl status  iptables.service

iptables.service – IPv4 firewall with iptables

Loaded: loaded (/usr/lib/systemd/system/iptables.service; enabled)

Active: failed (Result: exit-code) since Sat 2013-11-02 12:59:10 MSK; 5s ago

Process: 14306 ExecStop=/usr/libexec/iptables/iptables.init stop (code=exited, status=1/FAILURE)

Main PID: 472 (code=exited, status=0/SUCCESS)

CGroup: name=systemd:/system/iptables.service

Nov 02 12:59:10 server1 systemd[1]: Stopping IPv4 firewall with iptables…

Nov 02 12:59:10 server1 iptables.init[14306]: iptables: Flushing firewall rules: [  OK  ]

Nov 02 12:59:10 server1 iptables.init[14306]: iptables: Setting chains to policy ACCEPT: raw security mangle nat fil…ILED]

Nov 02 12:59:10 server1 iptables.init[14306]: iptables: Unloading modules:  iptable_nat[FAILED]

Nov 02 12:59:10 server1 systemd[1]: iptables.service: control process exited, code=exited status=1

Nov 02 12:59:10 server1 systemd[1]: Stopped IPv4 firewall with iptables.

Nov 02 12:59:10 server1 systemd[1]: Unit iptables.service entered failed state.

[root@server1 ~]# gluster peer probe server4

peer probe: success

[root@server1 ~]# gluster peer  status

Number of Peers: 1

Hostname: server4

Port: 24007

Uuid: 4062c822-74d5-45e9-8eaa-8353845332de

State: Peer in Cluster (Connected)

[root@server1 ~]# gluster volume create cinder-volumes02  replica 2 \

server1:/home/boris/node-replicate  server4:/home/boris/node-replicate

volume create: cinder-volumes02: success: please start the volume to access data

[root@server1 ~]# gluster volume start cinder-volumes02

volume start: cinder-volumes02: success

[root@server1 ~]# gluster volume set cinder-volumes02  quick-read off
volume set: success

[root@server1 ~]# gluster volume set cinder-volumes02  cluster.eager-lock on
volume set: success

[root@server1 ~]# gluster volume set cinder-volumes02  performance.stat-prefetch off
volume set: success

[root@server1 ~]# gluster volume info

Volume Name: cinder-volumes02

Type: Replicate

Volume ID: 1a1566ed-34f7-4264-b0b4-91cf9526b5ef

Status: Started

Number of Bricks: 1 x 2 = 2

Transport-type: tcp

Bricks:

Brick1: server1:/home/boris/node-replicate

Brick2: server4:/home/boris/node-replicate

Options Reconfigured:
performance.quick-read: off
cluster.eager-lock: on
performance.stat-prefetch: off

[root@server1 ~]# service iptables start

Redirecting to /bin/systemctl start  iptables.service

[root@server1 ~]# service iptables status

Redirecting to /bin/systemctl status  iptables.service

iptables.service – IPv4 firewall with iptables

Loaded: loaded (/usr/lib/systemd/system/iptables.service; enabled)

Active: active (exited) since Sat 2013-11-02 13:10:17 MSK; 5s ago

Process: 14306 ExecStop=/usr/libexec/iptables/iptables.init stop (code=exited, status=1/FAILURE)

Process: 17699 ExecStart=/usr/libexec/iptables/iptables.init start (code=exited, status=0/SUCCESS)

Nov 02 13:10:17 server1 iptables.init[17699]: iptables: Applying firewall rules: [  OK  ]

Nov 02 13:10:17 server1 systemd[1]: Started IPv4 firewall with iptables.

[root@server1 ~(keystone_admin)]# gluster peer status

Number of Peers: 1

Hostname: server4

Uuid: 4062c822-74d5-45e9-8eaa-8353845332de

State: Peer in Cluster (Connected)

[root@server1 ~]# gluster volume info

Volume Name: cinder-volumes02

Type: Replicate

Volume ID: 1a1566ed-34f7-4264-b0b4-91cf9526b5ef

Status: Started

Number of Bricks: 1 x 2 = 2

Transport-type: tcp

Bricks:

Brick1: server1:/home/boris/node-replicate

Brick2: server4:/home/boris/node-replicate

Options Reconfigured:
performance.quick-read: off
cluster.eager-lock: on
performance.stat-prefetch: off

Update /etc/sysconfig/iptables on second box :-

Add to *filter section

-A INPUT -m state --state NEW -m tcp -p tcp --dport 111 -j ACCEPT
-A INPUT -m state --state NEW -m tcp -p tcp --dport 24007 -j ACCEPT
-A INPUT -m state --state NEW -m tcp -p tcp --dport 24008 -j ACCEPT
-A INPUT -m state --state NEW -m tcp -p tcp --dport 24009 -j ACCEPT
-A INPUT -m state --state NEW -m tcp -p tcp --dport 24010 -j ACCEPT
-A INPUT -m state --state NEW -m tcp -p tcp --dport 24011 -j ACCEPT
-A INPUT -m state --state NEW -m tcp -p tcp --dport 38465:38469 -j ACCEPT
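
Instead of editing /etc/sysconfig/iptables by hand, the same rules can be inserted from the shell and then saved; a sketch, assuming the iptables-services package is installed on the box:

# for port in 111 24007 24008 24009 24010 24011 38465:38469; do iptables -I INPUT -m state --state NEW -p tcp --dport ${port} -j ACCEPT; done
# service iptables save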

Watching replication

Configuring Cinder to Add GlusterFS

# openstack-config --set /etc/cinder/cinder.conf DEFAULT volume_driver cinder.volume.drivers.glusterfs.GlusterfsDriver
# openstack-config --set /etc/cinder/cinder.conf DEFAULT glusterfs_shares_config /etc/cinder/shares.conf
# openstack-config --set /etc/cinder/cinder.conf DEFAULT glusterfs_mount_point_base /var/lib/cinder/volumes
Then tune the file /etc/cinder/shares.conf :
# vi /etc/cinder/shares.conf
    192.168.1.147:cinder-volumes02
:wq
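A quick way to double-check what was written (openstack-config also supports --get to read values back):

# openstack-config --get /etc/cinder/cinder.conf DEFAULT volume_driver
# openstack-config --get /etc/cinder/cinder.conf DEFAULT glusterfs_shares_config
# cat /etc/cinder/shares.conf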
Update the iptables firewall (remember that the firewalld service should be disabled on F19 from the beginning, to keep the changes done by neutron/quantum in place):
# iptables-save  >  iptables.dump
**********************
 Add to *filter section:
**********************
-A INPUT -m state --state NEW -m tcp -p tcp --dport 111 -j ACCEPT
-A INPUT -m state --state NEW -m tcp -p tcp --dport 24007 -j ACCEPT
-A INPUT -m state --state NEW -m tcp -p tcp --dport 24008 -j ACCEPT
-A INPUT -m state --state NEW -m tcp -p tcp --dport 24009 -j ACCEPT
-A INPUT -m state --state NEW -m tcp -p tcp --dport 24010 -j ACCEPT
-A INPUT -m state --state NEW -m tcp -p tcp --dport 24011 -j ACCEPT
-A INPUT -m state --state NEW -m tcp -p tcp --dport 38465:38469 -j ACCEPT

# iptables-restore <  iptables.dump
# service iptables restart

Now mount the glusterfs volume on Havana’s predefined directory by restarting the cinder services:

# for i in api scheduler volume; do service openstack-cinder-${i} restart; done

[root@server1 ~(keystone_admin)]# df -h
Filesystem                      Size  Used Avail Use% Mounted on
/dev/mapper/fedora_5-root       193G   48G  135G  27% /
devtmpfs                                    3.9G     0  3.9G   0% /dev
tmpfs                                          3.9G  140K  3.9G   1% /dev/shm
tmpfs                                          3.9G  948K  3.9G   1% /run
tmpfs                                          3.9G     0  3.9G   0% /sys/fs/cgroup
tmpfs                                          3.9G   92K  3.9G   1% /tmp
/dev/loop0                                  928M  1.3M  860M   1% /srv/node/device1
/dev/sda1                                   477M   87M  362M  20% /boot
tmpfs                                          3.9G  948K  3.9G   1% /run/netns

192.168.1.147:cinder-volumes02  116G   61G   50G  56% /var/lib/cinder/volumes/e879618364aca859f13701bb918b087f

Building Ubuntu Server 13.10 utilizing a cinder bootable volume replicated via glusterfs 3.4.1

Building a Windows 2012 evaluation instance utilizing a cinder bootable volume replicated via glusterfs 3.4.1

[root@ovirt1 ~(keystone_admin)]# service glusterd status

Redirecting to /bin/systemctl status  glusterd.service

glusterd.service – GlusterFS an clustered file-system server

Loaded: loaded (/usr/lib/systemd/system/glusterd.service; enabled)

Active: active (running) since Mon 2013-11-04 22:31:55 VOLT; 21min ago

Process: 2962 ExecStart=/usr/sbin/glusterd -p /run/glusterd.pid (code=exited, status=0/SUCCESS)

Main PID: 2963 (glusterd)

CGroup: name=systemd:/system/glusterd.service

├─ 2963 /usr/sbin/glusterd -p /run/glusterd.pid

├─ 3245 /usr/sbin/glusterfsd -s ovirt1 --volfile-id cinder-vols.ovirt1.fdr-set-node-replicate -p /var/lib/gluste…

├─ 6031 /usr/sbin/glusterfs -s localhost --volfile-id gluster/glustershd -p /var/lib/glusterd/glustershd/run/glu…

├─11335 /usr/sbin/glusterfs -s localhost --volfile-id gluster/nfs -p /var/lib/glusterd/nfs/run/nfs.pid -l /var/l…

└─11343 /sbin/rpc.statd

Nov 04 22:31:51 ovirt1.localdomain systemd[1]: Starting GlusterFS an clustered file-system server…

Nov 04 22:31:55 ovirt1.localdomain systemd[1]: Started GlusterFS an clustered file-system server.

Nov 04 22:35:11 ovirt1.localdomain rpc.statd[6038]: Version 1.2.7 starting

Nov 04 22:35:11 ovirt1.localdomain sm-notify[6039]: Version 1.2.7 starting

Nov 04 22:35:11 ovirt1.localdomain GlusterFS[6026]: [2013-11-04 18:35:11.400008] C [nfs.c:271:nfs_start_subvol_lookup_…ctory

Nov 04 22:53:23 ovirt1.localdomain rpc.statd[11343]: Version 1.2.7 starting

[root@ovirt1 ~(keystone_admin)]# df -h

Filesystem                  Size  Used Avail Use% Mounted on

/dev/mapper/fedora00-root   169G   74G   87G  46% /

devtmpfs                    3.9G     0  3.9G   0% /dev

tmpfs                       3.9G   84K  3.9G   1% /dev/shm

tmpfs                       3.9G  956K  3.9G   1% /run

tmpfs                       3.9G     0  3.9G   0% /sys/fs/cgroup

tmpfs                       3.9G  116K  3.9G   1% /tmp

/dev/loop0                  928M  1.3M  860M   1% /srv/node/device1

/dev/sdb1                   477M   87M  361M  20% /boot

tmpfs                       3.9G  956K  3.9G   1% /run/netns

192.168.1.137:/cinder-vols  164G   73G   83G  47% /var/lib/cinder/volumes/8a78781567bbf747a694c25ae4494d9c

[root@ovirt1 ~(keystone_admin)]# gluster peer status

Number of Peers: 1

Hostname: ovirt2

Uuid: 2aa2dfb5-d266-4474-89c1-c5c011eec025

State: Peer in Cluster (Connected)

[root@ovirt1 ~(keystone_admin)]# gluster volume info cinder-vols

Volume Name: cinder-vols

Type: Replicate

Volume ID: e8eab40f-3401-4893-ba25-121bd4e0a74e

Status: Started

Number of Bricks: 1 x 2 = 2

Transport-type: tcp

Bricks:

Brick1: ovirt1:/fdr/set/node-replicate

Brick2: ovirt2:/fdr/set/node-replicate

Options Reconfigured:
performance.quick-read: off
cluster.eager-lock: on
performance.stat-prefetch: off

[root@ovirt1 ~(keystone_admin)]# nova image-list

+————————————–+———————————+——–+——–+

| ID                                   | Name                            | Status | Server |

+————————————–+———————————+——–+——–+

| 291f7c8b-043b-4656-9285-244770f127e5 | Fedora19image                   | ACTIVE |        |

| 67d9f757-43ca-4204-985d-5ecdb31e8ec7 | Salamander1030                  | ACTIVE |        |

| 624681da-f48f-43d9-968e-1e3da6cc75a3 | Windows Server 2012 R2 Std Eval | ACTIVE |        |

| bd01f02d-e0bf-4cc5-aa35-ff97ebd9c1ef | cirros                          | ACTIVE |        |

+————————————–+———————————+——–+——–+

[root@ovirt1 ~(keystone_admin)]# cinder create --image-id  \
624681da-f48f-43d9-968e-1e3da6cc75a3 --display_name Windows2012VL 20


Neutron basic RDO setup (havana 2013.2) to have original LAN as external on Fedora 19 with native Ethernet interfaces names

October 31, 2013

Follow http://openstack.redhat.com/Quickstart as normal; just after

 $ sudo yum install -y openstack-packstack

 run   $ sudo yum -y update

one more time to upgrade python-backports to 1.0.4

Create the following files under /etc/sysconfig/network-scripts :

[root@openstack network-scripts(keystone_admin)]# cat ifcfg-br-ex
DEVICE="br-ex"
BOOTPROTO="static"
IPADDR="192.168.1.135"
NETMASK="255.255.255.0"
DNS1="83.221.202.254"
BROADCAST="192.168.1.255"
GATEWAY="192.168.1.1"
NM_CONTROLLED="no"
DEFROUTE="yes"
IPV4_FAILURE_FATAL="yes"
IPV6INIT=no
ONBOOT="yes"
TYPE="OVSBridge"
DEVICETYPE="ovs"

In my particular case ifcfg-enp2s0 is responsible for active ethernet interface p37p1.

[root@openstack network-scripts(keystone_admin)]# cat ifcfg-enp2s0
NAME="enp2s0"
HWADDR=90:E6:BA:2D:11:EB
ONBOOT="yes"
TYPE="OVSPort"
DEVICETYPE="ovs"
OVS_BRIDGE=br-ex
NM_CONTROLLED=no
IPV6INIT=no

Enable the network service. Due to the known bugs

Bug 981583 – Openstack firewall rules are not enabled after reboot

https://bugzilla.redhat.com/show_bug.cgi?id=981583

Bug 981652 – firewalld does not cover openstack/packstack use case.

https://bugzilla.redhat.com/show_bug.cgi?id=981652

Run:-

# yum -y install iptables-services
# systemctl disable firewalld
# systemctl enable iptables

then  reboot
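
After the reboot a quick sanity check can confirm the bridge and the firewall came up as expected; a sketch, with the interface names and IP taken from the files above:

# ovs-vsctl show               ( enp2s0 should be listed as a port of br-ex )
# ip addr show br-ex           ( 192.168.1.135 should now sit on the bridge )
# systemctl is-active iptables network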

In the dashboard, delete router1 and the public network.

# source keystonerc_admin
# nova secgroup-add-rule default icmp -1 -1 0.0.0.0/0
# nova secgroup-add-rule default tcp 22 22 0.0.0.0/0
# neutron net-create public --router:external=True
# neutron subnet-create public 192.168.1.0/24 --name vlan --enable_dhcp False \
--allocation_pool start=192.168.1.57,end=192.168.1.92  \
--gateway 192.168.1.1
# neutron floatingip-create public
# EXTERNAL_NETWORK_ID=`neutron net-list | grep public | awk '{ print $2 }'`
# INT_SUBNET_ID=`neutron subnet-list | grep private_subnet | awk '{ print $2}'`
# SERVICE_TENANT_ID=`keystone tenant-list | grep service | awk '{ print $2}'`
# neutron router-create --name router2 --tenant-id $SERVICE_TENANT_ID router2
# neutron router-gateway-set router2  $EXTERNAL_NETWORK_ID
# neutron router-interface-add router2  $INT_SUBNET_ID
# neutron subnet-update $INT_SUBNET_ID --dns_nameservers list=true 83.221.202.254
# neutron subnet-update $INT_SUBNET_ID --gateway_ip 10.0.0.1
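
A few optional checks after the router is wired up (object names as created above):

# neutron router-list
# neutron router-port-list router2
# neutron net-show public
# neutron subnet-show vlan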

Create images via command line :-

# glance image-create --name 'Fedora19image' --disk-format qcow2 --container-format bare --is-public true \
--copy-from http://cloud.fedoraproject.org/fedora-19.x86_64.qcow2

# glance image-create --name 'UbuntuServer13.10image' \
--disk-format qcow2 \
--container-format bare --is-public true \
--copy-from http://cloud-images.ubuntu.com/saucy/current/saucy-server-cloudimg-amd64-disk1.img
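
Images created with --copy-from are downloaded asynchronously, so it can take a while before they become usable; watching the status is enough, for example:

# glance image-list
# glance image-show 'Fedora19image'            ( status should move from queued/saving to active )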

From this point you can proceed as suggested in

Glusterfs Striped volumes based Havana 2013.2 instances on NFS-Like Standalone Storage Server With GlusterFS 3.4.1 Fedora 19

http://bderzhavets.blogspot.ru/2013/10/glusterfs-striped-volumes-based-havana.html


Glusterfs Striped volumes based Havana 2013.2 instances on NFS-Like Standalone Storage Server With GlusterFS 3.4.1 Fedora 19

October 22, 2013

Please view first this nice article: http://www.mirantis.com/blog/openstack-havana-glusterfs and https://wiki.openstack.org/wiki/CinderSupportMatrix. Here goes a sample of a more complicated gluster volume structure for storing Havana instances’ bootable cinder volumes, versus http://bderzhavets.wordpress.com/2013/10/18/glusterfss-volume-based-havana-rc1-instances-on-nfs-like-standalone-storage-server-with-glusterfs-3-4-1-fedora-19/

[root@localhost boris(keystone_admin)]# gluster volume create \

cinder-volumes stripe 3 \

192.168.1.142:/home/boris/node1 \

192.168.1.142:/home/boris/node2 \

192.168.1.142:/home/boris/node3

volume create: cinder-volumes: success: please start the volume to access data

[root@localhost boris(keystone_admin)]# gluster volume start cinder-volumes

volume start: cinder-volumes: success

[root@localhost boris(keystone_admin)]# gluster volume info

Volume Name: cinder-volume

Type: Distribute

Volume ID: be95ce0d-47ea-47f5-beee-66196c546d20

Status: Started

Number of Bricks: 1

Transport-type: tcp

Bricks:

Brick1: 192.168.1.142:/rhs/brick1/cinder-volume

Volume Name: cinder-volumes

Type: Stripe

Volume ID: 14b7de86-7b0e-4a60-b21b-4990b1222f43

Status: Started

Number of Bricks: 1 x 3 = 3

Transport-type: tcp

Bricks:

Brick1: 192.168.1.142:/home/boris/node1

Brick2: 192.168.1.142:/home/boris/node2

Brick3: 192.168.1.142:/home/boris/node3

[root@localhost boris(keystone_admin)]# systemctl status glusterd

glusterd.service – GlusterFS an clustered file-system server

Loaded: loaded (/usr/lib/systemd/system/glusterd.service; enabled)

Active: active (running) since Sun 2013-10-20 13:47:11 MSK; 2h 21min ago

Process: 1350 ExecStart=/usr/sbin/glusterd -p /run/glusterd.pid (code=exited, status=0/SUCCESS)

Main PID: 1380 (glusterd)

CGroup: name=systemd:/system/glusterd.service

├─ 1380 /usr/sbin/glusterd -p /run/glusterd.pid

├─ 2217 /usr/sbin/glusterfsd -s 192.168.1.142 --volfile-id cinder-volume.192.168.1.142.rhs-brick1-cinder-volume …

├─11156 /usr/sbin/glusterfsd -s 192.168.1.142 --volfile-id cinder-volumes.192.168.1.142.home-boris-node1 -p /var…

├─11165 /usr/sbin/glusterfsd -s 192.168.1.142 --volfile-id cinder-volumes.192.168.1.142.home-boris-node2 -p /var…

├─11174 /usr/sbin/glusterfsd -s 192.168.1.142 --volfile-id cinder-volumes.192.168.1.142.home-boris-node3 -p /var…

├─11190 /usr/sbin/glusterfs -s localhost --volfile-id gluster/nfs -p /var/lib/glusterd/nfs/run/nfs.pid -l /var/l…

└─11198 /sbin/rpc.statd

Oct 20 13:47:11 localhost.localdomain rpc.statd[2261]: Version 1.2.7 starting

Oct 20 13:47:11 localhost.localdomain systemd[1]: Started GlusterFS an clustered file-system server.

Oct 20 13:58:50 localhost.localdomain rpc.statd[11198]: Version 1.2.7 starting

Oct 20 13:58:50 localhost.localdomain sm-notify[11199]: Version 1.2.7 starting

 Configuring Cinder to Add GlusterFS

  If it has not been done

# openstack-config --set /etc/cinder/cinder.conf DEFAULT volume_driver cinder.volume.drivers.glusterfs.GlusterfsDriver

# openstack-config --set /etc/cinder/cinder.conf DEFAULT glusterfs_shares_config /etc/cinder/shares.conf

# openstack-config --set /etc/cinder/cinder.conf DEFAULT glusterfs_mount_point_base /var/lib/cinder/volumes

  Update shares to point to another gluster volume 

   # vi /etc/cinder/shares.conf

    192.168.1.142:cinder-volumes

:wq

  If it has not been done

# iptables-save > iptables.dump

Add to *filter section:

-A INPUT -m state --state NEW -m tcp -p tcp --dport 111 -j ACCEPT

-A INPUT -m state --state NEW -m tcp -p tcp --dport 24007 -j ACCEPT

-A INPUT -m state --state NEW -m tcp -p tcp --dport 24008 -j ACCEPT

-A INPUT -m state --state NEW -m tcp -p tcp --dport 24009 -j ACCEPT

-A INPUT -m state --state NEW -m tcp -p tcp --dport 24010 -j ACCEPT

-A INPUT -m state --state NEW -m tcp -p tcp --dport 24011 -j ACCEPT

-A INPUT -m state --state NEW -m tcp -p tcp --dport 38465:38469 -j ACCEPT

# iptables-restore <  iptables.dump

# service iptables restart

[root@localhost boris(keystone_admin)]# df -h

Filesystem                   Size  Used Avail Use% Mounted on

/dev/mapper/fedora00-root    145G   60G   78G  44% /

devtmpfs                     3.9G     0  3.9G   0% /dev

tmpfs                        3.9G  648K  3.9G   1% /dev/shm

tmpfs                        3.9G 1020K  3.9G   1% /run

tmpfs                        3.9G     0  3.9G   0% /sys/fs/cgroup

tmpfs                        3.9G   96K  3.9G   1% /tmp

/dev/sdb3                    477M  104M  344M  24% /boot

/dev/loop0                   928M  1.4M  860M   1% /srv/node/device1

192.168.1.142:cinder-volume  145G   60G   78G  44% /var/lib/cinder/volumes/4b0a6960f94e0c2a28a479c06957e35a

tmpfs                        3.9G 1020K  3.9G   1% /run/netns

Ready to restart cinder services  

  [root@localhost boris(keystone_admin)]# for i in api scheduler volume; do service openstack-cinder-${i} restart; done

Redirecting to /bin/systemctl restart  openstack-cinder-api.service

Redirecting to /bin/systemctl restart  openstack-cinder-scheduler.service

Redirecting to /bin/systemctl restart  openstack-cinder-volume.service

New directory mounted :

[root@localhost boris(keystone_admin)]# df -h

Filesystem                    Size  Used Avail Use% Mounted on

/dev/mapper/fedora00-root     145G   59G   78G  44% /

devtmpfs                      3.9G     0  3.9G   0% /dev

tmpfs                         3.9G  648K  3.9G   1% /dev/shm

tmpfs                         3.9G 1020K  3.9G   1% /run

tmpfs                         3.9G     0  3.9G   0% /sys/fs/cgroup

tmpfs                         3.9G  108K  3.9G   1% /tmp

/dev/sdb3                     477M  104M  344M  24% /boot

/dev/loop0                    928M  1.4M  860M   1% /srv/node/device1

tmpfs                         3.9G 1020K  3.9G   1% /run/netns

192.168.1.142:cinder-volumes  433G  177G  234G  44% /var/lib/cinder/volumes/76078cf31245409735587a509f24ab39

[root@localhost boris(keystone_admin)]# nova image-list

+————————————–+—————————-+——–+——–+

| ID                                   | Name                       | Status | Server |

+————————————–+—————————-+——–+——–+

| 807758c4-20fd-41b2-a943-27c80a651fc7 | Fedora19image              | ACTIVE |        |

| 35098ebe-f04d-4c26-bd8c-0bec0f0fb892 | UbuntuSaucyRelease         | ACTIVE |        |

| e8e426f8-8898-42a5-9b3f-0374d20e318b | UbuntuServer1310.image     | ACTIVE |        |

| 8fdc5c6e-1554-4ad2-bcaf-f4aefee7d690 | Windos Server2012 Std Eval | ACTIVE |        |

| 10295ec8-3d8d-4623-9490-f217f8498878 | cirros                     | ACTIVE |        |

+————————————–+—————————-+——–+——–+

[root@localhost boris(keystone_admin)]# cinder create --image-id 807758c4-20fd-41b2-a943-27c80a651fc7  --display_name Fedora19VL 5

+———————+————————————–+

|       Property      |                Value                 |

+———————+————————————–+

|     attachments     |                  []                  |

|  availability_zone  |                 nova                 |

|       bootable      |                false                 |

|      created_at     |      2013-10-20T10:05:22.586342      |

| display_description |                 None                 |

|     display_name    |              Fedora19VL              |

|          id         | d11f00de-e543-4ed5-9686-a0421639c5e3 |

|       image_id      | 807758c4-20fd-41b2-a943-27c80a651fc7 |

|       metadata      |                  {}                  |

|         size        |                  5                   |

|     snapshot_id     |                 None                 |

|     source_volid    |                 None                 |

|        status       |               creating               |

|     volume_type     |                 None                 |

+———————+————————————–+

[root@localhost boris(keystone_admin)]# cinder list

+————————————–+———–+————–+——+————-+———-+————-+

|                  ID                  |   Status  | Display Name | Size | Volume Type | Bootable | Attached to |

+————————————–+———–+————–+——+————-+———-+————-+

| d11f00de-e543-4ed5-9686-a0421639c5e3 | available |  Fedora19VL  |  5   |     None    |   true   |             |

+————————————–+———–+————–+——+————-+———-+————-+

[root@localhost boris(keystone_admin)]# cinder create --image-id 35098ebe-f04d-4c26-bd8c-0bec0f0fb892 --display_name SalamanderVL 5

+---------------------+--------------------------------------+
|       Property      |                Value                 |
+---------------------+--------------------------------------+
|     attachments     |                  []                  |
|  availability_zone  |                 nova                 |
|       bootable      |                false                 |
|      created_at     |      2013-10-20T10:07:01.897362      |
| display_description |                 None                 |
|     display_name    |             SalamanderVL             |
|          id         | a0102e10-52cb-49d6-bd58-f0365968f721 |
|       image_id      | 35098ebe-f04d-4c26-bd8c-0bec0f0fb892 |
|       metadata      |                  {}                  |
|         size        |                  5                   |
|     snapshot_id     |                 None                 |
|     source_volid    |                 None                 |
|        status       |               creating               |
|     volume_type     |                 None                 |
+---------------------+--------------------------------------+

[root@localhost boris(keystone_admin)]# cinder list

+————————————–+————-+————–+——+————-+———-+————-+

|                  ID                  |    Status   | Display Name | Size | Volume Type | Bootable | Attached to |

+————————————–+————-+————–+——+————-+———-+————-+

| a0102e10-52cb-49d6-bd58-f0365968f721 | downloading | SalamanderVL |  5   |     None    |   true   |             |

| d11f00de-e543-4ed5-9686-a0421639c5e3 |  available  |  Fedora19VL  |  5   |     None    |   true   |             |

+————————————–+————-+————–+——+————-+———-+————-+

[root@localhost boris(keystone_admin)]# cinder list

+————————————–+———–+————–+——+————-+———-+————-+

|                  ID                  |   Status  | Display Name | Size | Volume Type | Bootable | Attached to |

+————————————–+———–+————–+——+————-+———-+————-+

| a0102e10-52cb-49d6-bd58-f0365968f721 | available | SalamanderVL |  5   |     None    |   true   |             |

| d11f00de-e543-4ed5-9686-a0421639c5e3 | available |  Fedora19VL  |  5   |     None    |   true   |             |

+————————————–+———–+————–+——+————-+———-+————-+

[root@localhost boris(keystone_admin)]# ls -l /var/lib/cinder/volumes/76078cf31245409735587a509f24ab39

total 1496292

-rw-rw-rw-. 1 root root 5368709120 Oct 20 14:07 volume-a0102e10-52cb-49d6-bd58-f0365968f721

-rw-rw-rw-. 1 root root 5368709120 Oct 20 14:05 volume-d11f00de-e543-4ed5-9686-a0421639c5e3

[root@localhost boris(keystone_admin)]# nova list

+————————————–+——————-+——–+————+————-+——————————–+

| ID                                   | Name              | Status | Task State | Power State | Networks                       |

+————————————–+——————-+——–+————+————-+——————————–+

| 9754374c-bcf2-4f34-a931-649b417d939d | UbuntuSalamander1 | ACTIVE | None       | Running     | private=10.0.0.3, 192.168.1.61 |

+————————————–+——————-+——–+————+————-+——————————–+

[root@localhost boris(keystone_admin)]# nova show  9754374c-bcf2-4f34-a931-649b417d939d

+————————————–+———————————————————-+

| Property                             | Value                                                    |

+————————————–+———————————————————-+

| status                               | ACTIVE                                                   |

| updated                              | 2013-10-20T10:16:44Z                                     |

| OS-EXT-STS:task_state                | None                                                     |

| OS-EXT-SRV-ATTR:host                 | localhost.localdomain                                    |

| key_name                             | key2                                                     |

| image                                | Attempt to boot from volume – no image supplied          |

| private network                      | 10.0.0.3, 192.168.1.61                                   |

| hostId                               | 8a06ef818edcb74efba027817c5f14cb4d1d38c0fcd1dde73a9356d6 |

| OS-EXT-STS:vm_state                  | active                                                   |

| OS-EXT-SRV-ATTR:instance_name        | instance-00000013                                        |

| OS-SRV-USG:launched_at               | 2013-10-20T10:16:44.000000                               |

| OS-EXT-SRV-ATTR:hypervisor_hostname  | localhost.localdomain                                    |

| flavor                               | m1.small (2)                                             |

| id                                   | 9754374c-bcf2-4f34-a931-649b417d939d                     |

| security_groups                      | [{u'name': u'default'}]                                  |

| OS-SRV-USG:terminated_at             | None                                                     |

| user_id                              | 239b0f75c0514945acb5bb2041ccd89d                         |

| name                                 | UbuntuSalamander1                                        |

| created                              | 2013-10-20T10:16:35Z                                     |

| tenant_id                            | 644515d6bdf14f53a44f226fd7c9f69b                         |

| OS-DCF:diskConfig                    | MANUAL                                                   |

| metadata                             | {}                                                       |

| os-extended-volumes:volumes_attached | [{u'id': u'a0102e10-52cb-49d6-bd58-f0365968f721'}]       |

| accessIPv4                           |                                                          |

| accessIPv6                           |                                                          |

| progress                             | 0                                                        |

| OS-EXT-STS:power_state               | 1                                                        |

| OS-EXT-AZ:availability_zone          | nova                                                     |

| config_drive                         |                                                          |

+————————————–+———————————————————-+

[root@localhost boris(keystone_admin)]# cinder list

+————————————–+———–+————–+——+————-+———-+————————————–+

|                  ID                  |   Status  | Display Name | Size | Volume Type | Bootable |             Attached to              |

+————————————–+———–+————–+——+————-+———-+————————————–+

| a0102e10-52cb-49d6-bd58-f0365968f721 |   in-use  | SalamanderVL |  5   |     None    |   true   | 9754374c-bcf2-4f34-a931-649b417d939d |

| d11f00de-e543-4ed5-9686-a0421639c5e3 | available |  Fedora19VL  |  5   |     None    |   true   |                                      |

+————————————–+———–+————–+——+————-+———-+————————————–+

[root@localhost boris(keystone_admin)]# ls -lah /var/lib/cinder/volumes/76078cf31245409735587a509f24ab39

total 1.8G

drwxr-xr-x. 3 root   root   4.0K Oct 20 14:07 .

drwxr-xr-x. 4 cinder cinder 4.0K Oct 20 14:03 ..

-rw-rw-rw-. 1 qemu   qemu   5.0G Oct 20 14:28 volume-a0102e10-52cb-49d6-bd58-f0365968f721

-rw-rw-rw-. 1 root   root   5.0G Oct 20 14:05 volume-d11f00de-e543-4ed5-9686-a0421639c5e3

[root@localhost boris(keystone_admin)]# cd /home/boris

[root@localhost boris(keystone_admin)]# du | grep node.$

637972    ./node3

638368    ./node2

637964    ./node1

Another series of snapshots

Loading Windows 2012 Server via a bootable cinder volume located on the glusterfs stripe 3 volume.

[root@localhost (keystone_admin)]# nova image-list

+————————————–+——————————+——–+——–+

| ID                                   | Name                         | Status | Server |

+————————————–+——————————+——–+——–+

| 59758edc-da8d-444e-b0a0-d93d323fc026 | F19Image                     | ACTIVE |        |

| df912358-b227-43a5-94a3-edc874c577bc | UbuntuSalamander             | ACTIVE |        |

| 1e26928b-5df0-4097-bbc6-46832dc8361b | Windows Server 2012 Std Eval | ACTIVE |        |

| ae07d1ba-41de-44e9-877a-455f8956d86f | cirros                       | ACTIVE |        |

+————————————–+——————————+——–+——–+

[root@localhost (keystone_admin)]# cinder create --image-id 1e26928b-5df0-4097-bbc6-46832dc8361b  --display_name WinSRV2012 20

+———————+————————————–+

|       Property      |                Value                 |

+———————+————————————–+

|     attachments     |                  []                  |

|  availability_zone  |                 nova                 |

|       bootable      |                false                 |

|      created_at     |      2013-10-22T06:03:05.019987      |

| display_description |                 None                 |

|     display_name    |              WinSRV2012              |

|          id         | 5812c7e5-d071-43cb-8a1c-81346167d72a |

|       image_id      | 1e26928b-5df0-4097-bbc6-46832dc8361b |

|       metadata      |                  {}                  |

|         size        |                  20                  |

|     snapshot_id     |                 None                 |

|     source_volid    |                 None                 |

|        status       |               creating               |

|     volume_type     |                 None                 |

+———————+————————————–+

[root@localhost boris(keystone_admin)]# cinder list

+————————————–+————-+————–+——+————-+———-+————————————–+

|                  ID                  |    Status   | Display Name | Size | Volume Type | Bootable |             Attached to              |

+————————————–+————-+————–+——+————-+———-+————————————–+

| 5812c7e5-d071-43cb-8a1c-81346167d72a | downloading |  WinSRV2012  |  20  |     None    |  false   |                                      |

| be59b5ae-863b-41e2-abf6-de6756398722 |    in-use   | SalamanderVL |  5   |     None    |   true   | 82ba6ffe-dc01-4fcc-b247-7995a5ca7cb8 |

+————————————–+————-+————–+——+————-+———-+————————————–+

[root@localhost boris(keystone_admin)]# ls -lah /var/lib/cinder/volumes/1d42fffa70f6647a45514f6fb5ce40ca

total 16G

drwxr-xr-x. 3 root   root   4.0K Oct 22 10:03 .

drwxr-xr-x. 5 cinder cinder 4.0K Oct 20 16:30 ..

-rw-rw-rw-. 1 root   root    16G Oct 22 10:21 volume-5812c7e5-d071-43cb-8a1c-81346167d72a

-rw-rw-rw-. 1 root   root   5.0G Oct 21 11:12 volume-be59b5ae-863b-41e2-abf6-de6756398722

[root@localhost boris(keystone_admin)]# du | grep node.$

5517480    ./node1

5516684    ./node3

5514680    ./node2

[root@localhost boris(keystone_admin)]# du | grep node.$

6551272    ./node1

6550168    ./node3

6549416    ./node2

[root@localhost boris(keystone_admin)]# cinder list

+————————————–+———–+————–+——+————-+———-+————————————–+

|                  ID                  |   Status  | Display Name | Size | Volume Type | Bootable |             Attached to              |

+————————————–+———–+————–+——+————-+———-+————————————–+

| 5812c7e5-d071-43cb-8a1c-81346167d72a | available |  WinSRV2012  |  20  |     None    |   true   |                                      |

| be59b5ae-863b-41e2-abf6-de6756398722 |   in-use  | SalamanderVL |  5   |     None    |   true   | 82ba6ffe-dc01-4fcc-b247-7995a5ca7cb8 |

+————————————–+———–+————–+——+————-+———-+————————————–+

[root@localhost boris(keystone_admin)]# ls -lah /var/lib/cinder/volumes/1d42fffa70f6647a45514f6fb5ce40ca

total 19G

drwxr-xr-x. 3 root   root   4.0K Oct 22 10:03 .

drwxr-xr-x. 5 cinder cinder 4.0K Oct 20 16:30 ..

-rw-rw-rw-. 1 root   root    20G Oct 22 10:24 volume-5812c7e5-d071-43cb-8a1c-81346167d72a

-rw-rw-rw-. 1 root   root   5.0G Oct 21 11:12 volume-be59b5ae-863b-41e2-abf6-de6756398722

[root@localhost boris(keystone_admin)]# df -h

Filesystem                    Size  Used Avail Use% Mounted on

/dev/mapper/fedora-root       164G   70G   86G  45% /

devtmpfs                      3.9G     0  3.9G   0% /dev

tmpfs                         3.9G   92K  3.9G   1% /dev/shm

tmpfs                         3.9G  964K  3.9G   1% /run

tmpfs                         3.9G     0  3.9G   0% /sys/fs/cgroup

tmpfs                         3.9G  112K  3.9G   1% /tmp

/dev/sda1                     477M   87M  362M  20% /boot

/dev/loop0                    928M  1.4M  860M   1% /srv/node/device1

192.168.1.135:cinder-volumes  490G  209G  257G  45% /var/lib/cinder/volumes/1d42fffa70f6647a45514f6fb5ce40ca

tmpfs                         3.9G  964K  3.9G   1% /run/netns

[root@localhost boris(keystone_admin)]# cinder list

+————————————–+——–+————–+——+————-+———-+————————————–+

|                  ID                  | Status | Display Name | Size | Volume Type | Bootable |             Attached to              |

+————————————–+——–+————–+——+————-+———-+————————————–+

| 5812c7e5-d071-43cb-8a1c-81346167d72a | in-use |  WinSRV2012  |  20  |     None    |   true   | d6e702c3-16e6-4890-bbe7-22b19ed05263 |

| be59b5ae-863b-41e2-abf6-de6756398722 | in-use | SalamanderVL |  5   |     None    |   true   | 82ba6ffe-dc01-4fcc-b247-7995a5ca7cb8 |

+————————————–+——–+————–+——+————-+———-+————————————–+


Glusterfs volume based Havana 2013.2 instances on NFS-Like Standalone Storage Server With GlusterFS 3.4.1 Fedora 19

October 18, 2013

  Following http://www.gluster.org/category/openstack/

This is a snapshot showing the difference between the Grizzly and Havana releases with GlusterFS.

Glance – Grizzly: could point to filesystem images mounted with GlusterFS, but had to copy the VM image to deploy it. Havana: can now point to the Cinder interface, removing the need to copy the image.
Cinder – Grizzly: integrated with GlusterFS, but only with FUSE-mounted volumes. Havana: can now use the libgfapi-QEMU integration for KVM hypervisors.
Nova – Grizzly: no integration with GlusterFS. Havana: can now use the libgfapi-QEMU integration.
Swift – Grizzly: GlusterFS maintained a separate repository of changes to the Swift proxy layer. Havana: Swift patches are now merged upstream, providing a cleaner break between API and implementation.

Actually, on a GlusterFS F19 server included in a cluster, the cinder tuning procedure is the same.
First step – set up Havana RC1 RDO on Fedora 19 per

http://bderzhavets.blogspot.ru/2013/10/neutron-basic-rdo-setup-havana-to-have.html

Next – installing GlusterFS Server on Cinder host

#   yum install glusterfs glusterfs-server glusterfs-fuse

#   systemctl status glusterd

glusterd.service – GlusterFS an clustered file-system server

Loaded: loaded (/usr/lib/systemd/system/glusterd.service; enabled)

Active: active (running) since Fri 2013-10-18 13:47:51 MSK; 2h 37min ago

Process: 1126 ExecStart=/usr/sbin/glusterd -p /run/glusterd.pid (code=exited, status=0/SUCCESS)

Main PID: 1136 (glusterd)

CGroup: name=systemd:/system/glusterd.service

├─1136 /usr/sbin/glusterd -p /run/glusterd.pid

├─8861 /usr/sbin/glusterfsd -s 192.168.1.135 --volfile-id cinder-volume.192.168.1.135.rhs-brick1-cinder-volume -…

├─8878 /usr/sbin/glusterfs -s localhost --volfile-id gluster/nfs -p /var/lib/glusterd/nfs/run/nfs.pid -l /var/lo…

└─8885 /sbin/rpc.statd

Oct 18 13:47:51 localhost.localdomain  systemd[1]: Started GlusterFS an clustered file-system server.

Oct 18 13:58:19 localhost.localdomain  rpc.statd[8885]: Version 1.2.7 starting

Oct 18 13:58:19 localhost.localdomain  sm-notify[8886]: Version 1.2.7 starting

Oct 18 13:58:19 localhost.localdomain  rpc.statd[8885]: Initializing NSM state

#   mkdir -p /rhs/brick1/cinder-volume

#  gluster volume create cinder-volume 192.168.1.135:/rhs/brick1/cinder-volume

#  gluster volume start cinder-volume

#  gluster volume info

Volume Name: cinder-volume

Type: Distribute

Volume ID: d52c0ba1-d7b1-495d-8f14-07ff03e7db95

Status: Started

Number of Bricks: 1

Transport-type: tcp

Bricks:

Brick1: 192.168.1.135:/rhs/brick1/cinder-volume

A sample utilizing a striped gluster volume may be viewed here:

http://bderzhavets.blogspot.ru/2013/10/glusterfs-striped-volumes-based-havana.html

Configuring Cinder to Add GlusterFS

# openstack-config --set /etc/cinder/cinder.conf DEFAULT volume_driver cinder.volume.drivers.glusterfs.GlusterfsDriver

# openstack-config --set /etc/cinder/cinder.conf DEFAULT glusterfs_shares_config /etc/cinder/shares.conf

# openstack-config --set /etc/cinder/cinder.conf DEFAULT glusterfs_mount_point_base /var/lib/cinder/volumes

 # vi /etc/cinder/shares.conf

    192.168.1.135:cinder-volume

:wq

# iptables-save >  iptables.dump

Add to *filter section:

-A INPUT -m state --state NEW -m tcp -p tcp --dport 111 -j ACCEPT

-A INPUT -m state --state NEW -m tcp -p tcp --dport 24007 -j ACCEPT

-A INPUT -m state --state NEW -m tcp -p tcp --dport 24008 -j ACCEPT

-A INPUT -m state --state NEW -m tcp -p tcp --dport 24009 -j ACCEPT

-A INPUT -m state --state NEW -m tcp -p tcp --dport 24010 -j ACCEPT

-A INPUT -m state --state NEW -m tcp -p tcp --dport 24011 -j ACCEPT

-A INPUT -m state --state NEW -m tcp -p tcp --dport 38465:38469 -j ACCEPT

# iptables-restore <  iptables.dump

# service iptables restart

Restarting the openstack-cinder services mounts the glusterfs volume with no ownership problems:

# for i in api scheduler volume

>  do

> service openstack-cinder-${i} restart

 > done

 # df -h

Filesystem                   Size  Used Avail Use% Mounted on

/dev/mapper/fedora-root      164G   29G  127G  19% /

devtmpfs                     3.9G     0  3.9G   0% /dev

tmpfs                           3.9G  148K  3.9G   1% /dev/shm

tmpfs                           3.9G  1.1M  3.9G   1% /run

tmpfs                           3.9G     0  3.9G   0% /sys/fs/cgroup

tmpfs                           3.9G  800K  3.9G   1% /tmp

/dev/sda1                    477M   87M  362M  20% /boot

/dev/loop0                   928M  1.4M  860M   1% /srv/node/device1

tmpfs                           3.9G  1.1M  3.9G   1% /run/netns 

192.168.1.135:cinder-volume  164G   29G  127G  19% /var/lib/cinder/volumes/f39d1b2d7e2a2e48af66eceba039b139

 # nova image-list

+————————————–+——————+——–+——–+

| ID                                   | Name             | Status | Server |

+————————————–+——————+——–+——–+

| 59758edc-da8d-444e-b0a0-d93d323fc026 | F19Image         | ACTIVE |        |

| df912358-b227-43a5-94a3-edc874c577bc | UbuntuSalamander | ACTIVE |        |

| ae07d1ba-41de-44e9-877a-455f8956d86f | cirros           | ACTIVE |        |

+————————————–+——————+——–+——–+

Creating a Havana volume in glusterfs storage via the command line:

#  cinder create --image-id 59758edc-da8d-444e-b0a0-d93d323fc026  --display_name Fedora19VL 5

# cinder list

+--------------------------------------+--------+--------------+------+-------------+----------+--------------------------------------+
|                  ID                  | Status | Display Name | Size | Volume Type | Bootable |             Attached to              |
+--------------------------------------+--------+--------------+------+-------------+----------+--------------------------------------+
| 0474ead2-61a8-41dd-8f8d-ef3000266403 | in-use |              |  5   |     None    |   true   | 779b306b-3cb2-48ea-9711-2c42c508b577 |
| da344703-dcf9-450e-9e34-cafb331f80f6 | in-use |  Fedora19VL  |  5   |     None    |   true   | 1a8e5fa5-6a79-43f0-84ee-58e2099b1ebe |
+--------------------------------------+--------+--------------+------+-------------+----------+--------------------------------------+

 # ls -l /var/lib/cinder/volumes/f39d1b2d7e2a2e48af66eceba039b139

total 5528248

-rw-rw-rw-. 1 qemu qemu 5368709120 Oct 18 16:19 volume-0474ead2-61a8-41dd-8f8d-ef3000266403

-rw-rw-rw-. 1 qemu qemu 5368709120 Oct 18 16:19 volume-da344703-dcf9-450e-9e34-cafb331f80f6

Screen shots on another F19 instance dual booting with first

Creating via cinder command line Ubuntu 13.10 Server bootable volume in glusterfs storage :
References

1. http://www.gluster.org/community/documentation/index.php/GlusterFS_Cinder


Neutron basic RDO setup (havana) to have original LAN as external on Fedora 19

October 5, 2013

***************************
UPDATE on 11/10/2013
***************************

Setting up Havana RC1, due to

    https://bugzilla.redhat.com/show_bug.cgi?id=1012001

requires editing /etc/qpid/qpidd.conf and adding the line ‘auth=no’ per

http://openstack.redhat.com/forum/discussion/605/fedora-19-just-upgrading-qpid-cpp-will-break-your-installation#Item_3

some time after the qpid puppet has completed, followed by a qpidd service restart, during the running installation.
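
A minimal sketch of that workaround (qpidd.conf is a plain key=value file, so appending the line is enough):

# echo 'auth=no' >> /etc/qpid/qpidd.conf
# systemctl restart qpidd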

*************************

Follow http://openstack.redhat.com/Neutron-Quickstart as normal.
When done, switch to eth0 per

http://unix.stackexchange.com/questions/81834/how-can-i-change-the-default-ens33-network-device-to-old-eth0-on-fedora-19

Remove biosdevname if it is installed (yum remove biosdevname).
Disable the udev rule: ln -s /dev/null /etc/udev/rules.d/80-net-name-slot.rules

Reboot
and  create under /etc/sysconfig/network-scripts

[root@localhost network-scripts]# cat ifcfg-br-ex
DEVICE="br-ex"
BOOTPROTO="static"
IPADDR="192.168.1.125"
NETMASK="255.255.255.0"
DNS1="83.221.202.254"
BROADCAST="192.168.1.255"
GATEWAY="192.168.1.1"
NM_CONTROLLED="no"
DEFROUTE="yes"
IPV4_FAILURE_FATAL="yes"
IPV6INIT=no
ONBOOT="yes"
TYPE="OVSBridge"
DEVICETYPE="ovs"

[root@localhost network-scripts]# cat ifcfg-eth0
NAME="eth0"
HWADDR=90:E6:BA:2D:11:EB
ONBOOT="yes"
TYPE="OVSPort"
DEVICETYPE="ovs"
OVS_BRIDGE=br-ex
NM_CONTROLLED=no
IPV6INIT=no

Enable the network service and turn off interface eth0.

Due to known bugs

Bug 981583 – Openstack firewall rules are not enabled after reboot

https://bugzilla.redhat.com/show_bug.cgi?id=981583

Bug 981652 – firewalld does not cover openstack/packstack use case.

https://bugzilla.redhat.com/show_bug.cgi?id=981652

Run:-
# yum -y install iptables-services
# systemctl disable firewalld
# systemctl enable iptables
then  reboot

In the dashboard, delete router1 and the public network.

# source keystonerc_admin
# nova secgroup-add-rule default icmp -1 -1 0.0.0.0/0
# nova secgroup-add-rule default tcp 22 22 0.0.0.0/0
# neutron net-create public --router:external=True
# neutron subnet-create public 192.168.1.0/24 --name vlan --enable_dhcp False \
--allocation_pool start=192.168.1.57,end=192.168.1.92  \
--gateway 192.168.1.1
# neutron floatingip-create public

# EXTERNAL_NETWORK_ID=`neutron net-list | grep public | awk '{ print $2 }'`
# INT_SUBNET_ID=`neutron subnet-list | grep private_subnet | awk '{ print $2}'`
# SERVICE_TENANT_ID=`keystone tenant-list | grep service | awk '{ print $2}'`
# neutron router-create --name router2 --tenant-id $SERVICE_TENANT_ID router2
# neutron router-gateway-set router2  $EXTERNAL_NETWORK_ID
# neutron router-interface-add router2  $INT_SUBNET_ID
# neutron subnet-update $INT_SUBNET_ID --dns_nameservers list=true 83.221.202.254
# neutron subnet-update $INT_SUBNET_ID --gateway_ip 10.0.0.1

Create images via command line :-

# glance image-create --name 'Fedora19image' --disk-format qcow2 --container-format bare --is-public true \
--copy-from http://cloud.fedoraproject.org/fedora-19.x86_64.qcow2

# glance image-create --name 'UbuntuServer13.10image' \
--disk-format qcow2 \
--container-format bare --is-public true \
--copy-from http://cloud-images.ubuntu.com/saucy/current/saucy-server-cloudimg-amd64-disk1.img

You can also fix the issue per

http://openstack.redhat.com/forum/discussion/554/havana-horizon-no-formats-available-for-images

View snapshots at  http://bderzhavets.blogspot.ru/2013/10/neutron-basic-rdo-setup-havana-to-have.html

Creating volumes on Havana RDO openstack

By default (if you do not follow [1]) the cinder-volumes VG gets created under
/var/lib/cinder as a loop-mounted sparse file:

[root@localhost ~(keystone_admin)]# cd /var/lib/cinder
[root@localhost cinder(keystone_admin)]# ls -l
total 16777236
-rw-r--r--. 1 root   root   22118662144 Oct  7 16:04 cinder-volumes
drwxr-xr-x. 2 cinder cinder        4096 Oct  7 15:48 tmp
[root@localhost cinder(keystone_admin)]# losetup -a
/dev/loop0: [64768]:6034065 (/srv/loopback-device/device1)
/dev/loop1: [64768]:918092 (/var/lib/cinder/cinder-volumes)
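
For reference, a loop-backed VG like this can also be (re)created by hand; an illustrative sketch only, with the size and the loop device name chosen as examples (packstack normally does this for you):

# truncate -s 20G /var/lib/cinder/cinder-volumes
# losetup -f --show /var/lib/cinder/cinder-volumes          ( prints the allocated loop device, e.g. /dev/loop1 )
# pvcreate /dev/loop1
# vgcreate cinder-volumes /dev/loop1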

[root@localhost ~(keystone_admin)]# nova image-list
+————————————–+————————+——–+——–+
| ID                                   | Name                   | Status | Server |
+————————————–+————————+——–+——–+
| 73ddfddf-833d-4eda-869f-e26321c20a2e | Fedora19image          | ACTIVE |        |
| 2d5f5596-c5f5-401a-ae16-388b5dae78f2 | UbuntuServer13.10image | ACTIVE |        |
| 0415ec26-d202-4fb7-b6a0-3e7923547e98 | cirros                 | ACTIVE |        |
+————————————–+————————+——–+——–+
[root@localhost ~(keystone_admin)]# cinder create --image-id 2d5f5596-c5f5-401a-ae16-388b5dae78f2  --display_name SalamanderVG 7
+———————+————————————–+
|       Property      |                Value                 |
+———————+————————————–+
|     attachments     |                  []                  |
|  availability_zone  |                 nova                 |
|       bootable      |                False                 |
|      created_at     |      2013-10-07T11:39:32.001108      |
| display_description |                 None                 |
|     display_name    |             SalamanderVG             |
|          id         | 624f7b78-bb1e-411a-afc6-e3190187af38 |
|       image_id      | 2d5f5596-c5f5-401a-ae16-388b5dae78f2 |
|       metadata      |                  {}                  |
|         size        |                  7                   |
|     snapshot_id     |                 None                 |
|     source_volid    |                 None                 |
|        status       |               creating               |
|     volume_type     |                 None                 |
+———————+————————————–+
[root@localhost ~(keystone_admin)]# nova volume-list
+————————————–+————-+————–+——+————-+————-+
| ID                                   | Status      | Display Name | Size | Volume Type | Attached to |
+————————————–+————-+————–+——+————-+————-+
| 624f7b78-bb1e-411a-afc6-e3190187af38 | downloading | SalamanderVG | 7    | None        |             |
+————————————–+————-+————–+——+————-+————-+
[root@localhost ~(keystone_admin)]# cinder create --image-id 73ddfddf-833d-4eda-869f-e26321c20a2e \

--display_name Fedora19VG 7
+———————+————————————–+
|       Property      |                Value                 |
+———————+————————————–+
|     attachments     |                  []                  |
|  availability_zone  |                 nova                 |
|       bootable      |                False                 |
|      created_at     |      2013-10-07T11:42:32.708633      |
| display_description |                 None                 |
|     display_name    |              Fedora19VG              |
|          id         | d2745ee6-9166-4ace-9fb6-826999eddcd0 |
|       image_id      | 73ddfddf-833d-4eda-869f-e26321c20a2e |
|       metadata      |                  {}                  |
|         size        |                  7                   |
|     snapshot_id     |                 None                 |
|     source_volid    |                 None                 |
|        status       |               creating               |
|     volume_type     |                 None                 |
+———————+————————————–+
[root@localhost ~(keystone_admin)]# nova volume-list

+--------------------------------------+-------------+--------------+------+-------------+-------------+
| ID                                   | Status      | Display Name | Size | Volume Type | Attached to |
+--------------------------------------+-------------+--------------+------+-------------+-------------+
| d2745ee6-9166-4ace-9fb6-826999eddcd0 | downloading | Fedora19VG   | 7    | None        |             |
| 624f7b78-bb1e-411a-afc6-e3190187af38 | available   | SalamanderVG | 7    | None        |             |
+--------------------------------------+-------------+--------------+------+-------------+-------------+

[root@localhost ~(keystone_admin)]# nova volume-list

+--------------------------------------+--------+--------------+------+-------------+--------------------------------------+
| ID                                   | Status | Display Name | Size | Volume Type | Attached to                          |
+--------------------------------------+--------+--------------+------+-------------+--------------------------------------+
| d2745ee6-9166-4ace-9fb6-826999eddcd0 | in-use | Fedora19VG   | 7    | None        | 5dc6569f-42d8-49fb-a3d5-7f3089249952 |
| 624f7b78-bb1e-411a-afc6-e3190187af38 | in-use | SalamanderVG | 7    | None        | 3e0a32b4-1045-4d30-9921-b1c2c5140639 |
+--------------------------------------+--------+--------------+------+-------------+--------------------------------------+

[root@localhost ~(keystone_admin)]# pvscan | grep cinder-volumes
PV /dev/loop1   VG cinder-volumes   lvm2 [20.60 GiB / 6.60 GiB free]

REFERENCES

1. http://funwithlinux.net/2013/08/install-openstack-grizzly-on-fedora-19-using-packstack-with-quantum-networking/

2. http://www.blog.sandro-mathys.ch/2013/08/install-rdo-havana-2-on-fedora-19-and.html


Quantum basic RDO setup (grizzly) to have original LAN as external on Fedora 19

September 20, 2013
Follow http://openstack.redhat.com/Neutron-Quickstart as normal. Do the renaming below only if the box has a single Ethernet interface;
otherwise it is easier to keep the native interface names for the OVS ports.
When done switch to eth0 per

http://unix.stackexchange.com/questions/81834/how-can-i-change-the-default-ens33-network-device-to-old-eth0-on-fedora-19

      1. Remove biosdevname if it is installed: yum remove biosdevname
      2. Disable the udev rule: ln -s /dev/null /etc/udev/rules.d/80-net-name-slot.rules
      3. Reboot
Then create the following files under /etc/sysconfig/network-scripts.

[root@localhost network-scripts]# cat ifcfg-br-ex

DEVICE="br-ex"
BOOTPROTO="static"
IPADDR="192.168.1.52"
NETMASK="255.255.255.0"
DNS1="83.221.202.254"
BROADCAST="192.168.1.255"
GATEWAY="192.168.1.1"
NM_CONTROLLED="no"
DEFROUTE="yes"
IPV4_FAILURE_FATAL="yes"
IPV6INIT=no
ONBOOT="yes"
TYPE="OVSBridge"
DEVICETYPE="ovs"

[root@localhost network-scripts]# cat ifcfg-eth0

NAME="eth0"
HWADDR=90:E6:BA:2D:11:EB
ONBOOT="no"
TYPE="OVSPort"
DEVICETYPE="ovs"
OVS_BRIDGE=br-ex
NM_CONTROLLED=no
IPV6INIT=no

Then enable the network service and reboot.
Take interface eth0 down, update ONBOOT="no" to ONBOOT="yes" in ifcfg-eth0, then restart the network service.
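After the restart, it is worth verifying that eth0 really became a port on the br-ex OVS bridge and that br-ex now carries the host IP (a quick check, not part of the original steps):

# ovs-vsctl show              # eth0 should be listed as a Port of Bridge br-ex
# ip addr show br-ex          # br-ex should hold 192.168.1.52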
In the dashboard delete router1 and the public network.
Create router2 and an internal interface to the private network.
#   source keystonerc_admin
#   nova secgroup-add-rule default icmp -1 -1 0.0.0.0/0
#   nova secgroup-add-rule default tcp 22 22 0.0.0.0/0
#   quantum net-create public --router:external=True
#   quantum subnet-create public 192.168.1.0/24 --name vlan \
      --enable_dhcp False --allocation_pool \
      start=192.168.1.57,end=192.168.1.62 \
      --gateway 192.168.1.1
#  quantum floatingip-create public
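To confirm what has just been created, the resources may be listed (these checks are an addition, not from the original write-up):

#  quantum net-show public        # router:external should be True
#  quantum floatingip-list        # the new floating IP appears with an empty fixed_ip_address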
***********

Next step :

***********
#  source keystonerc_admin
#  EXTERNAL_NETWORK_ID=`quantum net-list | grep public | awk '{ print $2 }'`
# quantum router-gateway-set router2  $EXTERNAL_NETWORK_ID
# INT_SUBNET_ID=`quantum subnet-list | grep private_subnet | awk '{ print $2 }'`
# quantum subnet-update $INT_SUBNET_ID --dns_nameservers list=true 83.221.202.254
# quantum subnet-update $INT_SUBNET_ID --gateway_ip 10.0.0.1
**************************************************************************
Router2 and the internal interface to the private network may also be created via CLI:
**************************************************************************

# EXTERNAL_NETWORK_ID=`quantum net-list | grep public | awk '{ print $2 }'`
# INT_SUBNET_ID=`quantum subnet-list | grep private_subnet | awk '{ print $2 }'`
# SERVICE_TENANT_ID=`keystone tenant-list | grep service | awk '{ print $2 }'`
# quantum router-create --name router2 --tenant-id $SERVICE_TENANT_ID router2
# quantum router-gateway-set router2  $EXTERNAL_NETWORK_ID
# quantum router-interface-add router2  $INT_SUBNET_ID
# quantum subnet-update $INT_SUBNET_ID --dns_nameservers list=true 83.221.202.254
# quantum subnet-update $INT_SUBNET_ID --gateway_ip 10.0.0.1
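Once the router gateway and DNS are set, a floating IP can be attached to a running instance; a minimal sketch with the nova client (the instance name VF19 and the address are illustrative, not from the original session):

# nova add-floating-ip VF19 192.168.1.59
# nova list        # the instance should now show the 192.168.1.x address next to its private IP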

View  http://openstack.redhat.com/forum/discussion/196/quantum-basic-setup/p1

View snapshots at  http://bderzhavets.blogspot.ru/2013/09/quantum-basic-rdo-setup-grizzly-to-have_16.html

To make the configuration persistent between reboots (working around the known bugs below):

Bug 981583 - Openstack firewall rules are not enabled after reboot   

https://bugzilla.redhat.com/show_bug.cgi?id=981583

Bug 981652 - firewalld does not cover openstack/packstack use case   

https://bugzilla.redhat.com/show_bug.cgi?id=981652

Run:-

# yum -y install iptables-services
# systemctl disable firewalld
# systemctl enable iptables
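Assuming the rules generated by packstack are already loaded, they can then be saved so the iptables service restores them at boot (standard iptables-services usage, not quoted from the bug reports):

# service iptables save
# systemctl start iptables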

Remote noVNC access to cloud instances via web browser:

[root@localhost ~(keystone_admin)]# nova list
+--------------------------------------+------------+-----------+--------------------------------+
| ID                                   | Name       | Status    | Networks                       |
+--------------------------------------+------------+-----------+--------------------------------+
| 27616e5c-a08d-4c18-8366-038a03dec77c | Ubuntu1310 | ACTIVE    | private=10.0.0.6, 192.168.1.63 |
| ca57df26-ae59-4ea0-a9c3-b21b1e862947 | VF19BD     | SUSPENDED | private=10.0.0.3, 192.168.1.59 |
| d37ccd48-0ba4-4e28-aa0b-eb43deb8b948 | WinSRV2012 | ACTIVE    | private=10.0.0.5, 192.168.1.61 |
+--------------------------------------+------------+-----------+--------------------------------+
[root@localhost ~(keystone_admin)]# nova get-vnc-console 27616e5c-a08d-4c18-8366-038a03dec77c novnc
+——-+————————————————————————————+
| Type  | Url                                                                              |
+——-+————————————————————————————+
| novnc | http://192.168.1.145:6080/vnc_auto.html?token=f8945baa-37bd-4c0c-abd4-17fb4e93e163 |
+——-+————————————————————————————+
[root@localhost ~(keystone_admin)]# nova get-vnc-console d37ccd48-0ba4-4e28-aa0b-eb43deb8b948 novnc
+——-+————————————————————————————+
| Type  | Url                                                                              |
+——-+————————————————————————————+
| novnc | http://192.168.1.145:6080/vnc_auto.html?token=093f7649-e478-48e3-aaed-41ed207dff6e |
+——-+————————————————————————————+
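For the URLs above to work from another machine, nova.conf has to point the noVNC proxy at a routable address. A hedged sketch with openstack-config, reusing the 192.168.1.145 address from the output above (the option names are the standard nova.conf ones, not taken from this post):

# openstack-config --set /etc/nova/nova.conf DEFAULT novncproxy_base_url http://192.168.1.145:6080/vnc_auto.html
# openstack-config --set /etc/nova/nova.conf DEFAULT vncserver_proxyclient_address 192.168.1.145
# service openstack-nova-compute restart   # so the new URL is handed out in get-vnc-console replies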


Quantum basic RDO setup (grizzly) to have original LAN as external on CentOS 6.4

September 15, 2013

Attempting to follow http://allthingsopen.com/2013/08/23/openstack-packstack-installation-with-external-connectivity/
I got an error after running :-
# packstack --allinone --quantum-l3-ext-bridge=eth0
It reports that OVS port eth0 already exists. The approach below generally follows the RDO discussion at http://openstack.redhat.com/forum/discussion/196/quantum-basic-setup/p1

Follow http://openstack.redhat.com/Neutron-Quickstart as normal.
When done, create the following files under /etc/sysconfig/network-scripts:

[root@Server64 network-scripts]# cat ifcfg-br-ex
DEVICE="br-ex"
BOOTPROTO="none"
IPADDR="192.168.1.42"
NETMASK="255.255.255.0"
DNS1="83.221.202.254"
BROADCAST="192.168.1.255"
GATEWAY="192.168.1.1"
NM_CONTROLLED="no"
DEFROUTE="yes"
IPV4_FAILURE_FATAL="yes"
IPV6INIT=no
ONBOOT="yes"
TYPE="OVSBridge"
DEVICETYPE="ovs"

[root@Server64 network-scripts]# cat ifcfg-eth0
DEVICE="eth0"
ONBOOT="yes"
# HWADDR="1C:C1:DE:76:19:70"
HWADDR="00:22:15:63:E4:E2"
TYPE="OVSPort"
DEVICETYPE="ovs"
OVS_BRIDGE=br-ex
NM_CONTROLLED=no
IPV6INIT=no

Run script as root :-

for i in /etc/quantum/*.ini
do
sed -i "s/^[# ]*ovs_use_veth.*$/ovs_use_veth = True/g" $i
done

sed -i \
-e "s/^[# ]*enable_isolated_metadata.*$/enable_isolated_metadata = True/g" \
-e "s/^[# ]*enable_metadata_network.*$/enable_metadata_network = True/g" \
/etc/quantum/dhcp_agent.ini
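A quick grep confirms the substitutions took effect before proceeding (an extra check, not part of the original script):

# grep -H "^ovs_use_veth" /etc/quantum/*.ini
# grep -H "metadata" /etc/quantum/dhcp_agent.ini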

# chkconfig network on

REBOOT
Disable autoconnect eth0.
REBOOT

Remove the old public (external) network and create a new one as required.
Recreate the router in the dashboard and add an internal interface to the
private network.

#   source keystonerc_admin
#   nova secgroup-add-rule default icmp -1 -1 0.0.0.0/0
#   nova secgroup-add-rule default tcp 22 22 0.0.0.0/0
#   quantum net-create public --router:external=True
#   quantum subnet-create public 192.168.1.0/24 --name vlan --enable_dhcp False --allocation_pool start=192.168.1.57,end=192.168.1.62 \
    --gateway 192.168.1.1
#   quantum floatingip-create public

#   quantum net-list
[root@Server64 ~(keystone_admin)]# quantum router-list
+--------------------------------------+---------+-----------------------+
| id                                   | name    | external_gateway_info |
+--------------------------------------+---------+-----------------------+
| c56c1cc1-a11b-454c-9ccb-17dc7e62f475 | router1 |                       |
+--------------------------------------+---------+-----------------------+
[root@Server64 ~(keystone_admin)]# quantum net-list
+————————————–+———+—————————————————–+
| id                                   | name    | subnets                                             |
+————————————–+———+—————————————————–+
| 6823b670-231c-4b31-9325-12dc098087b2 | private | 203320cc-cd60-486d-b092-eec99740c4cc 10.0.0.0/24    |
| c9615975-beb4-461a-9aad-b740a3350bf5 | public  | 40568df0-9bae-4578-8ae9-56d0ae7d4a2e 192.168.1.0/24 |
+————————————–+———+—————————————————–+
#   quantum router-gateway-set c56c1cc1-a11b-454c-9ccb-17dc7e62f475 c9615975-beb4-461a-9aad-b740a3350bf5

[root@Server64 ~(keystone_admin)]# quantum subnet-list
+————————————–+—————-+—————-+————————————————–+
| id                                   | name           | cidr           | allocation_pools                                 |
+————————————–+—————-+—————-+————————————————–+
| 203320cc-cd60-486d-b092-eec99740c4cc | private_subnet | 10.0.0.0/24    | {“start”: “10.0.0.2”, “end”: “10.0.0.254”}       |
| 40568df0-9bae-4578-8ae9-56d0ae7d4a2e | vlan           | 192.168.1.0/24 | {“start”: “192.168.1.57”, “end”: “192.168.1.62”} |
+————————————–+—————-+—————-+————————————————–+
[root@Server64 ~(keystone_admin)]# quantum subnet-update 203320cc-cd60-486d-b092-eec99740c4cc --dns_nameservers list=true 83.221.202.254
Updated subnet: 203320cc-cd60-486d-b092-eec99740c4cc
[root@Server64 ~(keystone_admin)]# quantum subnet-update 203320cc-cd60-486d-b092-eec99740c4cc --gateway_ip 10.0.0.1
Updated subnet: 203320cc-cd60-486d-b092-eec99740c4cc

Alternatively, the same may be done via CLI :-

# EXTERNAL_NETWORK_ID=`quantum net-list | grep public | awk '{ print $2 }'`
# quantum router-gateway-set router1 $EXTERNAL_NETWORK_ID
# INT_SUBNET_ID=`quantum subnet-list | grep private_subnet | awk '{ print $2 }'`
# quantum subnet-update $INT_SUBNET_ID --dns_nameservers list=true 83.221.202.254
# quantum subnet-update $INT_SUBNET_ID --gateway_ip 10.0.0.1

For better snapshots view another blog entry :-

http://bderzhavets.blogspot.ru/2013/09/quantum-basic-rdo-setup-grizzly-to-have.html

Dashboard

Running F19 instance routed to original LAN as external

Running Internet browser on F19 instance  via original router on the LAN

References
1. http://openstack.redhat.com/forum/discussion/196/quantum-basic-setup/p1


Attempt to build Qemu 1.3 spice enabled on Ubuntu 12.10

January 29, 2013

Qemu 1.3 doesn't support spice on Ubuntu 12.10 at the moment. View the build log:

https://launchpadlibrarian.net/129880187/buildlog_ubuntu-quantal-amd64.qemu_1.3.0%2Bdfsg-1~exp3ubuntu9_FAILEDTOBUILD.txt.gz

ERROR
ERROR: User requested feature spice
ERROR: configure was not able to find it
ERROR
make: *** [configure-stamp] Error 1
dpkg-buildpackage: error: debian/rules build-arch gave error exit status 2
******************************************************************************
Build finished at 20130129-1938
FAILED [dpkg-buildpackage died]
******************************************************************************

 

View Serge Hallyn's response here:

https://bugs.launchpad.net/ubuntu/+source/qemu/+bug/1109256

Some private information about myself may be found at

http://nauchi61.ru


Set up Qemu-kvm 1.1 & Spice-gtk 0.12 with USB redirection on Ubuntu Precise

June 24, 2012

$ sudo add-apt-repository ppa:bderzhavets/lib-usbredir81
$ sudo apt-get update
$ sudo apt-get install qemu-kvm qemu qemu-common qemu-utils \
seabios vgabios \
spice-client libusb-1.0-0 libusb-1.0-0-dev \
libusbredir libusbredir-dev usbredir-server \
libspice-protocol-dev libspice-server-dev \
libspice-client-glib-2.0-1 libspice-client-glib-2.0-dev \
libspice-client-gtk-2.0-1 libspice-client-gtk-2.0-dev \
libspice-client-gtk-3.0-1 libspice-client-gtk-3.0-dev \
python-spice-client-gtk spice-client-gtk

$ sudo apt-get install virtinst virt-manager virt-viewer
$ sudo ln -s /etc/apparmor.d/usr.sbin.libvirtd /etc/apparmor.d/disable/

$ sudo adduser $USER libvirtd
REBOOT
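The apparmor.d/disable symlink above only takes effect the next time profiles are loaded; to drop the libvirtd profile from the running kernel immediately (standard AppArmor tooling, not part of the original steps):

$ sudo apparmor_parser -R /etc/apparmor.d/usr.sbin.libvirtd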

Link to PPA  Qemu-kvm 1.1


Set up qemu-kvm-1.0+noroms as spice enabled qemu server vs qemu-kvm-spice on Ubuntu Precise

May 22, 2012

This post follows up Bug #998435 qemu-kvm-spice doesn’t support spice/qxl installs

The build below is based on the upstream (vs. Linaro) version of qemu-kvm 1.0 on Ubuntu Precise. View the bug description above for details of the qemu-kvm-spice misbehavior.

$ sudo add-apt-repository ppa:bderzhavets/lib-usbredir80
$ sudo apt-get update
$ sudo apt-get install qemu-kvm qemu qemu-common qemu-utils \
spice-client libusb-1.0-0 libusb-1.0-0-dev \
libusbredir libusbredir-dev usbredir-server \
libspice-protocol-dev libspice-server-dev \
libspice-client-glib-2.0-1 libspice-client-glib-2.0-dev \
libspice-client-gtk-2.0-1 libspice-client-gtk-2.0-dev \
libspice-client-gtk-3.0-1 libspice-client-gtk-3.0-dev \
python-spice-client-gtk spice-client-gtk

$ sudo apt-get install virtinst virt-manager virt-viewer
$ sudo ln -s /etc/apparmor.d/usr.sbin.libvirtd /etc/apparmor.d/disable/

$ sudo adduser $USER libvirtd
REBOOT

*************************************************************************************
Link to PPA Set up qemu-kvm-1.0+noroms as spice enabled qemu server
*************************************************************************************

Set up qemu-kvm-1.0+noroms as spice enabled qemu server & Spice 0.10.1 with Visio patches for Windows
*************************************************************************************
Link to PPA Set up qemu-kvm-1.0+noroms as spice enabled qemu server & Spice 0.10.1 with Visio patches for Windows
*************************************************************************************


Set up Spice-Gtk 0.12 with USB redirection on Ubuntu Precise

May 2, 2012

Qemu-kvm 1.0 has been built based on branch
http://cgit.freedesktop.org/~jwrdegoede/qemu/log/?h=qemu-kvm-1.0-usbredir
as of 04/29/2012. It contains all required usb redirection patches on
top of QEMU-KVM 1.0 release

$ git clone git://people.freedesktop.org/~jwrdegoede/qemu
$ cd qemu
$ git checkout -B qemu-kvm-1.0-usbredir origin/qemu-kvm-1.0-usbredir
$ cd ..
$ cp -R qemu qemu-kvm-1.0-usbredir043012
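For a local build of that checkout (instead of the PPA packages below), a minimal configure sketch might look as follows; the flags are assumed from the QEMU 1.0 configure script, not taken from the original packaging:

$ cd qemu-kvm-1.0-usbredir043012
$ ./configure --prefix=/usr --enable-spice --enable-usb-redir
$ make -j4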

libcap-dev added to debian/control for virtfs support.

Build requires spice and spice-protocol 0.10.1 and the most recent usbredir 0.4.3
as of 04/02/2012.

$ sudo add-apt-repository ppa:bderzhavets/lib-usbredir75
$ sudo apt-get update
$ sudo apt-get install qemu-kvm qemu qemu-common qemu-utils \
spice-client libusb-1.0-0 libusb-1.0-0-dev \
libusbredir libusbredir-dev usbredir-server \
libspice-protocol-dev libspice-server-dev \
libspice-client-glib-2.0-1 libspice-client-glib-2.0-dev \
libspice-client-gtk-2.0-1 libspice-client-gtk-2.0-dev \
libspice-client-gtk-3.0-1 libspice-client-gtk-3.0-dev \
python-spice-client-gtk spice-client-gtk

$ sudo apt-get install virtinst virt-manager virt-viewer
$ sudo ln -s /etc/apparmor.d/usr.sbin.libvirtd /etc/apparmor.d/disable/

$ sudo adduser $USER libvirtd
REBOOT

*************************************************************************
Link to PPA Set up Spice-Gtk 0.12 on Ubuntu Precise (v.3)
*************************************************************************


Set up Spice-Gtk 0.11 with USB redirection on Ubuntu Precise

March 17, 2012

Build requires spice & spice-protocol 0.10.1 and the most recent usbredir 0.4.3 as of 04/02/2012. Also view the recent commit at http://cgit.freedesktop.org/spice/spice-gtk, converted to 0001-usbredir-Check-for-existing-usb-channels-after-libus.patch for spice-gtk-0.11.
Qemu-kvm 1.0 has been built based on branch http://cgit.freedesktop.org/~jwrdegoede/qemu/log/?h=qemu-kvm-1.0-usbredir
as of 04/05/2012.
It contains all required usb redirection patches on top of QEMU-KVM 1.0 release

$ git clone git://people.freedesktop.org/~jwrdegoede/qemu
$ cd qemu
$ git checkout -B qemu-kvm-1.0-usbredir origin/qemu-kvm-1.0-usbredir
$ cd ..
$ cp -R qemu qemu-kvm-1.0-usbredir040712

****************
Link to PPA V.4
****************
$ sudo add-apt-repository ppa:bderzhavets/lib-usbredir71
$ sudo apt-get update
$ sudo apt-get install qemu-kvm qemu qemu-common qemu-utils \
spice-client libusb-1.0-0 libusb-1.0-0-dev \
libusbredir libusbredir-dev usbredir-server \
libspice-protocol-dev libspice-server-dev \
libspice-client-glib-2.0-1 libspice-client-glib-2.0-dev \
libspice-client-gtk-2.0-1 libspice-client-gtk-2.0-dev \
libspice-client-gtk-3.0-1 libspice-client-gtk-3.0-dev \
python-spice-client-gtk spice-client-gtk

$ sudo apt-get install virtinst virt-manager virt-viewer
$ sudo ln -s /etc/apparmor.d/usr.sbin.libvirtd /etc/apparmor.d/disable/

$ sudo adduser $USER libvirtd
REBOOT
View also bug report for F16 affecting Ubuntu as well



Start VM with virtfs support :
$ sudo /usr/bin/kvm -cpu host -enable-kvm \
-name VF15HQ -m 2048 \
-drive file=/dev/sdb5,if=virtio,media=disk,aio=native,cache=off \
-net nic,model=virtio -net user -localtime \
-usb -vga qxl -spice port=5900,disable-ticketing \
-device virtio-serial \
-chardev spicevmc,id=vdagent,name=vdagent \
-device virtserialport,chardev=vdagent,name=com.redhat.spice.0 \
-readconfig /etc/qemu/ich9-ehci-uhci.cfg \
-chardev spicevmc,name=usbredir,id=usbredirchardev1 \
-device usb-redir,chardev=usbredirchardev1,id=usbredirdev1,debug=3 \
-chardev spicevmc,name=usbredir,id=usbredirchardev2 \
-device usb-redir,chardev=usbredirchardev2,id=usbredirdev2,debug=3 \
-chardev spicevmc,name=usbredir,id=usbredirchardev3 \
-device usb-redir,chardev=usbredirchardev3,id=usbredirdev3,debug=3 \
-virtfs local,path=/home/boris,security_model=passthrough,mount_tag=host_share
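With the VM started this way, a client session can be opened with spicy (shipped in spice-client-gtk); the host and port match the -spice option above:

$ spicy -h localhost -p 5900

The server-side spice log from such a session follows.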





do_spice_init: starting 0.10.1
spice_server_add_interface: SPICE_INTERFACE_MIGRATION
spice_server_add_interface: SPICE_INTERFACE_KEYBOARD
spice_server_add_interface: SPICE_INTERFACE_MOUSE
spice_server_add_interface: SPICE_INTERFACE_QXL
red_worker_main: begin
display_channel_create: create display channel
cursor_channel_create: create cursor channel
*** EHCI support is under development ***
spice_server_char_device_add_interface: CHAR_DEVICE usbredir
spice_server_char_device_add_interface: CHAR_DEVICE usbredir
spice_server_char_device_add_interface: CHAR_DEVICE usbredir
reds_handle_auth_mechanism: Auth method: 1
reds_handle_main_link:
reds_disconnect:
reds_show_new_channel: channel 1:0, connected successfully, over Non Secure link
main_channel_link: add main channel client
reds_handle_main_link: NEW Client 0x7f5ed6e44cb0 mcc 0x7f5ed6e443a0 connect-id 1804289383
main_channel_handle_parsed: net test: latency 0.225000 ms, bitrate 9061946902 bps (8642.146017 Mbps)
reds_handle_auth_mechanism: Auth method: 1
reds_show_new_channel: channel 3:0, connected successfully, over Non Secure link
inputs_connect: inputs channel client create
reds_handle_auth_mechanism: Auth method: 1
reds_show_new_channel: channel 4:0, connected successfully, over Non Secure link
red_dispatcher_set_cursor_peer:
reds_handle_auth_mechanism: Auth method: 1
handle_dev_cursor_connect: cursor connect
red_connect_cursor: add cursor channel client
listen_to_new_client_channel: NEW ID = 0
reds_show_new_channel: channel 2:0, connected successfully, over Non Secure link
red_dispatcher_set_display_peer:
handle_dev_display_connect: connect
handle_new_display_channel: add display channel client
reds_handle_auth_mechanism: Auth method: 1
handle_new_display_channel: New display (client 0x7f5ed6e44cb0) dcc 0x7f5e30602c30 stream 0x7f5ed6e450a0
handle_new_display_channel: jpeg disabled
handle_new_display_channel: zlib-over-glz disabled
listen_to_new_client_channel: NEW ID = 0
reds_show_new_channel: channel 9:0, connected successfully, over Non Secure link
reds_handle_auth_mechanism: Auth method: 1
reds_show_new_channel: channel 9:1, connected successfully, over Non Secure link
kvm: usbredirparser info: Peer version: spice-gtk 0.11

kvm: usbredirparser info: Peer version: spice-gtk 0.11

reds_handle_auth_mechanism: Auth method: 1
reds_show_new_channel: channel 9:2, connected successfully, over Non Secure link
kvm: usbredirparser info: Peer version: spice-gtk 0.11

display_channel_client_wait_for_init: creating encoder with id == 0
spice_server_add_interface: SPICE_INTERFACE_TABLET
handle_dev_set_mouse_mode: mouse mode 2
display_channel_release_item: not pushed (101)
spice_server_remove_interface: remove SPICE_INTERFACE_TABLET
inputs_detach_tablet:
handle_dev_set_mouse_mode: mouse mode 1
red_channel_client_disconnect: 0x7f5e30602c30 (channel 0x7f5e30045920 type 2 id 0)
display_channel_client_on_disconnect:
*********************************************
F17 usbredir enabled VM in spicy session
*********************************************







Set up Spice-Gtk 0.9 with USB redirection on Ubuntu Precise

February 2, 2012

******************************************************************************
UPDATE on 02/14/2012 Set up Spice-Gtk 0.9 on Ubuntu Oneiric
******************************************************************************
New upstream release.
– add USB redirection support, see Hans comments in the log and that
post for details: http://hansdegoede.livejournal.com/11084.html
– introduce SpiceGtkSession to deal with session-wide Gtk events, such
as clipboard, instead of doing it per display
– many cursor and keyboard handling improvements
– handle the new “semi-seamless” migration
– support new Spice mini-headers
– better coroutines: fibers on windows & jmp on linux
– add Vala vapi bindings generation
– Add command line options for setting the cache size and the glz
window size
– Add a USB device selection widget to libspice-client-gtk
– many bug fixes and code improvements
Build requires spice-protocol 0.10.1 and the most recent usbredir 0.3.3
******************************************************************************************
Link to PPA Set up Spice-Gtk 0.9 on Ubuntu Precise (v.3)
**********************************************************************************