"Setting up Two Physical-Node OpenStack RDO Havana + Gluster Backend for Cinder + Neutron GRE" on Fedora 20 boxes, with Controller and Compute nodes each having one Ethernet adapter

January 24, 2014

**************************************************************
UPDATE on 02/23/2014  
**************************************************************

To create a new instance, I must have no more than 4 entries in `nova list`. Then I can sequentially restart the qpidd and openstack-nova-scheduler services (actually, it is not always necessary) and I will be able to create one new instance for sure. It has been tested on two "Two-Node Neutron GRE+OVS+Gluster Backend for Cinder" clusters. It is related to `nova quota-show` for the tenant (10 instances is the default). Having 3 VMs on Compute, I brought up openstack-nova-compute on the Controller and was able to create 2 more VMs on Compute and 5 VMs on the Controller.
All testing details are here: http://bderzhavets.blogspot.com/2014/02/next-attempt-to-set-up-two-node-neutron.html. Syntax like:

[root@dallas1 ~(keystone_admin)]$  nova quota-defaults
+—————————–+——-+
| Quota                       | Limit |
+—————————–+——-+
| instances                   | 10    |
| cores                       | 20    |
| ram                         | 51200 |
| floating_ips                | 10    |
| fixed_ips                   | -1    |
| metadata_items              | 128   |
| injected_files              | 5     |
| injected_file_content_bytes | 10240 |
| injected_file_path_bytes    | 255   |
| key_pairs                   | 100   |
| security_groups             | 10    |
| security_group_rules        | 20    |
+—————————–+——-+
[root@dallas1 ~(keystone_admin)]$  nova quota-class-update --instances 20 default

[root@dallas1 ~(keystone_admin)]$  nova quota-defaults
+—————————–+——-+
| Quota                       | Limit |
+—————————–+——-+
| instances                   | 20    |
| cores                       | 20    |
| ram                         | 51200 |
| floating_ips                | 10    |
| fixed_ips                   | -1    |
| metadata_items              | 128   |
| injected_files              | 5     |
| injected_file_content_bytes | 10240 |
| injected_file_path_bytes    | 255   |
| key_pairs                   | 100   |
| security_groups             | 10    |
| security_group_rules        | 20    |
+—————————–+——-+

doesn't work for me.
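A per-tenant override via `nova quota-update` may be worth trying instead of the class update; a sketch, assuming you look the tenant id up first (substitute your own):

[root@dallas1 ~(keystone_admin)]$ keystone tenant-list
[root@dallas1 ~(keystone_admin)]$ nova quota-update --instances 20 <tenant-id>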
****************************************************************

1. F19 and F20 have been installed via volumes based on GlusterFS and show good performance on the Compute node. Yum works stably on F19 and a bit more slowly on F20.
2. CentOS 6.5 was installed only via a Glance image (Cinder shows ERROR status for its volume); network operations are slower than on the Fedoras.
3. Ubuntu 13.10 Server was installed via a volume based on GlusterFS and was able to obtain internal and floating IPs. Network speed is close to Fedora 19.
4. Turning on the Gluster backend for Cinder on the F20 two-node Neutron GRE cluster (Controller+Compute) improves performance significantly. Due to a known F20 bug, the filesystem under GlusterFS was ext4.
5. On any cloud instance the MTU should be set to 1400 for proper communication through the GRE tunnel.

The post below follows up on the two Fedora 20 VMs setup described in:
  http://kashyapc.fedorapeople.org/virt/openstack/Two-node-Havana-setup.txt
  http://kashyapc.fedorapeople.org/virt/openstack/neutron-configs-GRE-OVS-two-node.txt
  Both cases have been tested above: default and non-default libvirt networks.
In the meantime I believe that using libvirt networks for creating the Controller and Compute nodes as F20 VMs is not important. The configs allow metadata to be sent from Controller to Compute on real physical boxes. Just one Ethernet adapter per box should be required when using GRE tunnelling for an RDO Havana on Fedora 20 manual setup.
  Currently, F20 requires manual intervention in the MariaDB (MySQL) database regarding the presence of the root and nova passwords at the FQDN of the Controller host. I was also never able to start neutron-server via the account suggested by Kashyap, only as root:password@Controller (FQDN). The Neutron Open vSwitch agent and Neutron L3 agent don't start at the point described in the first manual, only when the Neutron metadata agent is up and running. Notice also that the openstack-nova-conductor and openstack-nova-scheduler services won't start unless the mysql.user table is ready with the nova account password at the Controller's FQDN. All these updates are reflected in the reference links attached as text docs.
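For reference, the MariaDB intervention mentioned above boils down to granting the service accounts access at the Controller's FQDN. A minimal sketch for the nova account (the database name `nova` and the NOVA_DBPASS placeholder are assumptions; adjust to your setup):

# mysql -u root -p
MariaDB [(none)]> GRANT ALL PRIVILEGES ON nova.* TO 'nova'@'dfw02.localdomain' IDENTIFIED BY 'NOVA_DBPASS';
MariaDB [(none)]> FLUSH PRIVILEGES;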

The manuals mentioned above also require some editing, in the author's opinion.

Manual setup for two different physical boxes running Fedora 20, each with the most recent `yum -y update` applied:

- Controller node: Nova, Keystone, Cinder, Glance, Neutron (using the Open vSwitch plugin and GRE tunneling)

- Compute node: Nova (nova-compute), Neutron (openvswitch-agent)

dfw02.localdomain  -  Controller (192.168.1.127)

dfw01.localdomain  -  Compute   (192.168.1.137)

Two instances are running on the Compute node:

VF19RS instance has floating IP 192.168.1.102,

CirrOS 3.1 instance has floating IP 192.168.1.101.

Cloud instances running on Compute perform commands like nslookup and traceroute. `yum install` and `yum -y update` work on the Fedora 19 instance; however, for the time being the network on VF19 is stable but relatively slow. It may be that the Realtek 8169 integrated on board is not good enough for GRE, and it's a problem of my hardware (dfw01 is built with a Q9550, an ASUS P5Q3, 8 GB DDR3 and a SATA 2 Seagate 500 GB). CentOS 6.5 with "RDO Havana+Glusterfs+Neutron VLAN" works much faster on the same box (dual-booting with F20). That is a first impression. I've also changed neutron.conf's MySQL connection credentials to be able to run the neutron-server service. The Neutron L3 agent and Neutron Open vSwitch agent require some effort to be started on the Controller.
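The neutron.conf change mentioned above is just the database connection string. A sketch of the line that worked here, assuming the `ovs_neutron` database name from Kashyap's guide:

# openstack-config --set /etc/neutron/neutron.conf database connection mysql://root:password@dfw02.localdomain/ovs_neutron
# service neutron-server restart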

[root@dfw02 ~(keystone_admin)]$ openstack-status

== Nova services ==

openstack-nova-api:                     active
openstack-nova-cert:                    inactive  (disabled on boot)
openstack-nova-compute:                 inactive  (disabled on boot)
openstack-nova-network:                 inactive  (disabled on boot)
openstack-nova-scheduler:               active
openstack-nova-volume:                  inactive  (disabled on boot)
openstack-nova-conductor:               active
== Glance services ==
openstack-glance-api:                   active
openstack-glance-registry:              active
== Keystone service ==
openstack-keystone:                     active
== neutron services ==
neutron-server:                         active
neutron-dhcp-agent:                     active
neutron-l3-agent:                       active
neutron-metadata-agent:                 active
neutron-lbaas-agent:                    inactive  (disabled on boot)
neutron-openvswitch-agent:              active
neutron-linuxbridge-agent:              inactive  (disabled on boot)
neutron-ryu-agent:                      inactive  (disabled on boot)
neutron-nec-agent:                      inactive  (disabled on boot)
neutron-mlnx-agent:                     inactive  (disabled on boot)
== Cinder services ==
openstack-cinder-api:                   active
openstack-cinder-scheduler:             active
openstack-cinder-volume:                active
== Ceilometer services ==
openstack-ceilometer-api:               inactive  (disabled on boot)
openstack-ceilometer-central:           inactive  (disabled on boot)
openstack-ceilometer-compute:           active
openstack-ceilometer-collector:         inactive  (disabled on boot)
openstack-ceilometer-alarm-notifier:    inactive  (disabled on boot)
openstack-ceilometer-alarm-evaluator:   inactive  (disabled on boot)
== Support services ==
mysqld:                                 inactive  (disabled on boot)
libvirtd:                               active
openvswitch:                            active
dbus:                                   active
tgtd:                                   active
qpidd:                                  active
== Keystone users ==
+———————————-+———+———+——-+
|                id                |   name  | enabled | email |
+———————————-+———+———+——-+
| 970ed56ef7bc41d59c54f5ed8a1690dc |  admin  |   True  |       |
| 1beeaa4b20454048bf23f7d63a065137 |  cinder |   True  |       |
| 006c2728df9146bd82fab04232444abf |  glance |   True  |       |
| 5922aa93016344d5a5d49c0a2dab458c | neutron |   True  |       |
| af2f251586564b46a4f60cdf5ff6cf4f |   nova  |   True  |       |
+———————————-+———+———+——-+
== Glance images ==
+————————————–+——————+————-+——————+———–+——–+
| ID                                   | Name             | Disk Format | Container Format | Size      | Status |
+————————————–+——————+————-+——————+———–+——–+
| dc992799-7831-4933-b6ee-7b81868f808b | CirrOS31         | qcow2       | bare             | 13147648  | active |
| 03c9ad20-b0a3-4b71-aa08-2728ecb66210 | Fedora 19 x86_64 | qcow2       | bare             | 237371392 | active |
+————————————–+——————+————-+——————+———–+——–+
== Nova managed services ==
 +—————-+——————-+———-+———+——-+—————————-+—————–+
| Binary         | Host              | Zone     | Status  | State | Updated_at                 | Disabled Reason |
+—————-+——————-+———-+———+——-+—————————-+—————–+
| nova-scheduler | dfw02.localdomain | internal | enabled | up    | 2014-01-23T22:36:15.000000 | None            |
| nova-conductor | dfw02.localdomain | internal | enabled | up    | 2014-01-23T22:36:11.000000 | None            |
| nova-compute   | dfw01.localdomain | nova     | enabled | up    | 2014-01-23T22:36:10.000000 | None            |
+—————-+——————-+———-+———+——-+—————————-+—————–+
== Nova networks ==
+————————————–+——-+——+
| ID                                   | Label | Cidr |
+————————————–+——-+——+
| 1eea88bb-4952-4aa4-9148-18b61c22d5b7 | int   | None |
| 780ce2f3-2e6e-4881-bbac-857813f9a8e0 | ext   | None |
+————————————–+——-+——+
== Nova instance flavors ==
+—-+———–+———–+——+———–+——+——-+————-+———–+
| ID | Name      | Memory_MB | Disk | Ephemeral | Swap | VCPUs | RXTX_Factor | Is_Public |
+—-+———–+———–+——+———–+——+——-+————-+———–+
| 1  | m1.tiny   | 512       | 1    | 0         |      | 1     | 1.0         | True      |
| 2  | m1.small  | 2048      | 20   | 0         |      | 1     | 1.0         | True      |
| 3  | m1.medium | 4096      | 40   | 0         |      | 2     | 1.0         | True      |
| 4  | m1.large  | 8192      | 80   | 0         |      | 4     | 1.0         | True      |
| 5  | m1.xlarge | 16384     | 160  | 0         |      | 8     | 1.0         | True      |
+—-+———–+———–+——+———–+——+——-+————-+———–+
== Nova instances ==
+————————————–+———–+——–+————+————-+—————————–+
| ID                                   | Name      | Status | Task State | Power State | Networks                    |
+————————————–+———–+——–+————+————-+—————————–+
| 3f2db906-567c-48b0-967e-799b2bffe277 | Cirros312 | ACTIVE | None       | Running     | int=10.0.0.2, 192.168.1.101 |
| 82dfc826-46cd-4b4c-a0f6-bac5f7132dec | VF19RS    | ACTIVE | None       | Running     | int=10.0.0.4, 192.168.1.102 |
+————————————–+———–+——–+————+————-+—————————–+
[root@dfw02 ~(keystone_admin)]$ nova-manage service list
Binary           Host                                 Zone             Status     State Updated_At
nova-scheduler   dfw02.localdomain                    internal         enabled    :-)   2014-01-23 22:39:05
nova-conductor   dfw02.localdomain                    internal         enabled    :-)   2014-01-23 22:39:11
nova-compute     dfw01.localdomain                    nova             enabled    :-)   2014-01-23 22:39:10
[root@dfw02 ~(keystone_admin)]$ ovs-vsctl show
7d78d536-3612-416e-bce6-24605088212f
    Bridge br-int
        Port br-int
            Interface br-int
                type: internal
        Port patch-tun
            Interface patch-tun
                type: patch
                options: {peer=patch-int}
        Port "tapf933e768-42"
            tag: 1
            Interface "tapf933e768-42"
        Port "tap40dd712c-e4"
            tag: 1
            Interface "tap40dd712c-e4"
    Bridge br-ex
        Port "p37p1"
            Interface "p37p1"
        Port br-ex
            Interface br-ex
                type: internal
        Port "tap54e34740-87"
            Interface "tap54e34740-87"
    Bridge br-tun
        Port br-tun
            Interface br-tun
                type: internal
        Port "gre-2"
            Interface "gre-2"
                type: gre
                options: {in_key=flow, local_ip="192.168.1.127", out_key=flow, remote_ip="192.168.1.137"}
        Port patch-int
            Interface patch-int
                type: patch
                options: {peer=patch-tun}
    ovs_version: "2.0.0"

Running instances on dfw01.localdomain :

[root@dfw02 ~(keystone_admin)]$ nova list

+————————————–+———–+——–+————+————-+—————————–+
| ID                                   | Name      | Status | Task State | Power State | Networks                    |
+————————————–+———–+——–+————+————-+—————————–+
| 3f2db906-567c-48b0-967e-799b2bffe277 | Cirros312 | ACTIVE | None       | Running     | int=10.0.0.2, 192.168.1.101 |
| 82dfc826-46cd-4b4c-a0f6-bac5f7132dec | VF19RS    | ACTIVE | None       | Running     | int=10.0.0.4, 192.168.1.102 |
+————————————–+———–+——–+————+————-+—————————–+
[root@dfw02 ~(keystone_admin)]$ nova-manage service list
Binary           Host                                 Zone             Status     State Updated_At
nova-scheduler   dfw02.localdomain                    internal         enabled    :-)   2014-01-23 22:25:45
nova-conductor   dfw02.localdomain                    internal         enabled    :-)   2014-01-23 22:25:41
nova-compute     dfw01.localdomain                    nova             enabled    :-)   2014-01-23 22:25:50

The Fedora 19 instance was launched via:
[root@dfw02 ~(keystone_admin)]$ nova image-list

+--------------------------------------+------------------+--------+--------+
| ID                                   | Name             | Status | Server |
+--------------------------------------+------------------+--------+--------+
| dc992799-7831-4933-b6ee-7b81868f808b | CirrOS31         | ACTIVE |        |
| 03c9ad20-b0a3-4b71-aa08-2728ecb66210 | Fedora 19 x86_64 | ACTIVE |        |
+--------------------------------------+------------------+--------+--------+

[root@dfw02 ~(keystone_admin)]$  nova boot --flavor 2 --user-data=./myfile.txt --image 03c9ad20-b0a3-4b71-aa08-2728ecb66210 VF19RS

where

[root@dfw02 ~(keystone_admin)]$  cat ./myfile.txt
#cloud-config
password: mysecret
chpasswd: { expire: False }
ssh_pwauth: True

Snapshots taken on the dfw01 host with VNC consoles opened via virt-manager :-

[ screenshots omitted ]

Snapshots taken on the dfw02 host via a virt-manager connection to dfw01 :-

[ screenshots omitted ]

Setup Light Weight X Windows environment on Fedora 20 Cloud instance and running F20 cloud instance in Spice session via virt-manager or spicy http://bderzhavets.blogspot.com/2014/02/setup-light-weight-x-windows_2.html

 Next we install X windows on F20 to run fluxbox ( by the way after hours of googling I was unable to find requied set of packages and just picked them up during KDE Env installation via yum , which I actually don’t need at all on cloud instance of Fedora )

# yum install xorg-x11-server-Xorg xorg-x11-xdm fluxbox \
xorg-x11-drv-ati xorg-x11-drv-evdev xorg-x11-drv-fbdev \
xorg-x11-drv-intel xorg-x11-drv-mga xorg-x11-drv-nouveau \
xorg-x11-drv-openchrome xorg-x11-drv-qxl xorg-x11-drv-synaptics \
xorg-x11-drv-vesa xorg-x11-drv-vmmouse xorg-x11-drv-vmware \
xorg-x11-drv-wacom xorg-x11-font-utils xorg-x11-drv-modesetting \
xorg-x11-glamor xorg-x11-utils xterm

# yum install feh xcompmgr lxappearance xscreensaver dmenu

For details, see http://blog.bodhizazen.net/linux/a-5-minute-guide-to-fluxbox/

# mkdir -p ~/.fluxbox/backgrounds

Add to the ~/.fluxbox/menu file:

[submenu] (Wallpapers)
[wallpapers] (~/.fluxbox/backgrounds) {feh --bg-scale}
[end]

to be able to set wallpapers.

Install some fonts :-

# yum install dejavu-fonts-common \
dejavu-sans-fonts \
dejavu-sans-mono-fonts \
dejavu-serif-fonts

 We are ready to go :-

# echo "exec fluxbox" > ~/.xinitrc
# startx

To be able to surf the internet, set MTU to 1400 (only on cloud instances):
#  ifconfig eth0 mtu 1400 up
Otherwise it won't be possible, due to GRE encapsulation.
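Instead of running ifconfig inside every guest, the DHCP agent on the Controller can hand out the lower MTU itself. A sketch, assuming the stock Havana config paths (DHCP option 26 is the standard interface-MTU option):

# openstack-config --set /etc/neutron/dhcp_agent.ini DEFAULT dnsmasq_config_file /etc/neutron/dnsmasq-neutron.conf
# echo "dhcp-option-force=26,1400" > /etc/neutron/dnsmasq-neutron.conf
# service neutron-dhcp-agent restart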

[root@dfw02 ~(keystone_admin)]$ nova list | grep LXW
| 492af969-72c0-4235-ac4e-d75d3778fd0a | VF20LXW          | ACTIVE    | None       | Running     | int=10.0.0.4, 192.168.1.106 |
[root@dfw02 ~(keystone_admin)]$ nova show 492af969-72c0-4235-ac4e-d75d3778fd0a
+————————————–+———————————————————-+
| Property                             | Value                                                    |
+————————————–+———————————————————-+
| status                               | ACTIVE                                                   |
| updated                              | 2014-02-06T09:38:52Z                                     |
| OS-EXT-STS:task_state                | None                                                     |
| OS-EXT-SRV-ATTR:host                 | dfw01.localdomain                                        |
| key_name                             | None                                                     |
| image                                | Attempt to boot from volume – no image supplied          |
| int network                          | 10.0.0.4, 192.168.1.106                                  |
| hostId                               | b67c11ccfa8ac8c9ed019934fa650d307f91e7fe7bbf8cd6874f3e01 |
| OS-EXT-STS:vm_state                  | active                                                   |
| OS-EXT-SRV-ATTR:instance_name        | instance-00000021                                        |
| OS-SRV-USG:launched_at               | 2014-02-05T17:47:38.000000                               |
| OS-EXT-SRV-ATTR:hypervisor_hostname  | dfw01.localdomain                                        |
| flavor                               | m1.small (2)                                             |
| id                                   | 492af969-72c0-4235-ac4e-d75d3778fd0a                     |
| security_groups                      | [{u'name': u'default'}]                                  |
| OS-SRV-USG:terminated_at             | None                                                     |
| user_id                              | 970ed56ef7bc41d59c54f5ed8a1690dc                         |
| name                                 | VF20LXW                                                  |
| created                              | 2014-02-05T17:47:33Z                                     |
| tenant_id                            | d0a0acfdb62b4cc8a2bfa8d6a08bb62f                         |
| OS-DCF:diskConfig                    | MANUAL                                                   |
| metadata                             | {}                                                       |
| os-extended-volumes:volumes_attached | [{u'id': u'd0c5706d-4193-4925-9140-29dea801b447'}]       |
| accessIPv4                           |                                                          |
| accessIPv6                           |                                                          |
| progress                             | 0                                                        |
| OS-EXT-STS:power_state               | 1                                                        |
| OS-EXT-AZ:availability_zone          | nova                                                     |
| config_drive                         |                                                          |
+————————————–+———————————————————-+

Switching to Spice session improves X-Server behaviour on F20 cloud instance.

# ssh -L 5900:localhost:5900 -N -f -l root 192.168.1.137   (192.168.1.137 is the Compute node's IP address)
# ssh -L 5901:localhost:5901 -N -f -l root 192.168.1.137
# ssh -L 5902:localhost:5902 -N -f -l root 192.168.1.137
# spicy -h localhost -p 590X   (X = 0, 1 or 2, matching the instance's Spice display)

View also "Surfing Internet & SSH connectoin on (to) cloud instance of Fedora 20 via Neutron GRE": https://bderzhavets.wordpress.com/2014/02/04/surfing-internet-ssh-connectoin-on-to-cloud-instance-of-fedora-20-via-neutron-gre/

The same command, `ifconfig eth0 mtu 1400 up`, makes ssh work from the Controller and Compute nodes.
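To make the setting survive reboots inside a Fedora guest, it can also be written into the interface config; a sketch, assuming the default ifcfg layout of the Fedora cloud images:

[fedora@vf20kvm ~]$ sudo sh -c 'echo MTU=1400 >> /etc/sysconfig/network-scripts/ifcfg-eth0'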

[root@dfw02 nova(keystone_admin)]$ nova list

+————————————–+———–+———–+————+————-+—————————–+
| ID                                   | Name      | Status    | Task State | Power State | Networks                    |
+————————————–+———–+———–+————+————-+—————————–+
| 964fd0b0-b331-4b0c-a1d5-118bf8a40abf | CentOS6.5 | SUSPENDED | None       | Shutdown    | int=10.0.0.5, 192.168.1.105 |
| 3f2db906-567c-48b0-967e-799b2bffe277 | Cirros312 | SUSPENDED | None       | Shutdown    | int=10.0.0.2, 192.168.1.101 |
| 14c49bfe-f99c-4f31-918e-dcf0fd42b49d | VF19RST   | ACTIVE    | None       | Running     | int=10.0.0.4, 192.168.1.103 |
| 5aa903c5-624d-4dde-9e3c-49996d4a5edc | VF19VLGL  | SUSPENDED | None       | Shutdown    | int=10.0.0.6, 192.168.1.104 |
| 0e0b4f69-4cff-4423-ba9d-71c8eb53af16 | VF20KVM   | ACTIVE    | None       | Running     | int=10.0.0.7, 192.168.1.109 |
+————————————–+———–+———–+————+————-+—————————–+


[root@dfw02 nova(keystone_admin)]$ ssh fedora@192.168.1.109
fedora@192.168.1.109's password:
Last login: Thu Jan 30 15:54:04 2014 from 192.168.1.127

 
[fedora@vf20kvm ~]$ ifconfig
eth0: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1400
        inet 10.0.0.7  netmask 255.255.255.0  broadcast 10.0.0.255
        inet6 fe80::f816:3eff:fec6:e89a  prefixlen 64  scopeid 0x20<link>
        ether fa:16:3e:c6:e8:9a  txqueuelen 1000  (Ethernet)
        RX packets 630779  bytes 877092770 (836.4 MiB)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 166603  bytes 14706620 (14.0 MiB)
        TX errors 0  dropped 0  overruns 0  carrier 0  collisions 0

lo: flags=73<UP,LOOPBACK,RUNNING>  mtu 65536
        inet 127.0.0.1  netmask 255.0.0.0
        inet6 ::1  prefixlen 128  scopeid 0x10<host>
        loop  txqueuelen 0  (Local Loopback)
        RX packets 2  bytes 140 (140.0 B)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 2  bytes 140 (140.0 B)
        TX errors 0  dropped 0  overruns 0  carrier 0  collisions 0

So, loading a cloud instance via `nova boot --user-data=./myfile.txt ...` gives access to the command line so that the MTU for eth0 can be set to 1400; this makes the instance available for ssh connections from the Controller and Compute nodes, and also makes internet surfing possible in text and graphical mode for Fedora 19/20 and Ubuntu 13.10/12.04.

[root@dfw02 ~(keystone_admin)]$ ip netns list

qdhcp-1eea88bb-4952-4aa4-9148-18b61c22d5b7
qrouter-bf360d81-79fb-4636-8241-0a843f228fc8


[root@dfw02 ~(keystone_admin)]$ ip netns exec qrouter-bf360d81-79fb-4636-8241-0a843f228fc8 ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host
       valid_lft forever preferred_lft forever
2: qr-f933e768-42: <BROADCAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP group default qlen 1000
    link/ether fa:16:3e:6a:d3:f0 brd ff:ff:ff:ff:ff:ff
    inet 10.0.0.1/24 brd 10.0.0.255 scope global qr-f933e768-42
       valid_lft forever preferred_lft forever
    inet6 fe80::f816:3eff:fe6a:d3f0/64 scope link
       valid_lft forever preferred_lft forever
3: qg-54e34740-87: <BROADCAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP group default qlen 1000
    link/ether fa:16:3e:00:9a:0d brd ff:ff:ff:ff:ff:ff
    inet 192.168.1.100/24 brd 192.168.1.255 scope global qg-54e34740-87
       valid_lft forever preferred_lft forever
    inet 192.168.1.101/32 brd 192.168.1.101 scope global qg-54e34740-87
       valid_lft forever preferred_lft forever
    inet 192.168.1.102/32 brd 192.168.1.102 scope global qg-54e34740-87
       valid_lft forever preferred_lft forever
    inet6 fe80::f816:3eff:fe00:9a0d/64 scope link
       valid_lft forever preferred_lft forever
[root@dfw02 ~(keystone_admin)]$ ip netns exec qdhcp-1eea88bb-4952-4aa4-9148-18b61c22d5b7 ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host
       valid_lft forever preferred_lft forever
2: ns-40dd712c-e4: <BROADCAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP group default qlen 1000
    link/ether fa:16:3e:93:44:f8 brd ff:ff:ff:ff:ff:ff
    inet 10.0.0.3/24 brd 10.0.0.255 scope global ns-40dd712c-e4
       valid_lft forever preferred_lft forever
    inet6 fe80::f816:3eff:fe93:44f8/64 scope link
       valid_lft forever preferred_lft forever
[root@dfw02 ~(keystone_admin)]$ ip netns exec qrouter-bf360d81-79fb-4636-8241-0a843f228fc8  ip r
default via 192.168.1.1 dev qg-54e34740-87
10.0.0.0/24 dev qr-f933e768-42  proto kernel  scope link  src 10.0.0.1
192.168.1.0/24 dev qg-54e34740-87  proto kernel  scope link  src 192.168.1.100
[root@dfw02 ~(keystone_admin)]$ ip netns exec qrouter-bf360d81-79fb-4636-8241-0a843f228fc8 \
> iptables -L -t nat | grep 169
REDIRECT   tcp  --  anywhere             169.254.169.254      tcp dpt:http redir ports 8700
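That REDIRECT rule is what answers metadata requests; from inside any instance it can be verified against the standard EC2-style endpoint:

$ curl http://169.254.169.254/latest/meta-data/instance-id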

[root@dfw02 ~(keystone_admin)]$ neutron net-list
+————————————–+——+—————————————————–+
| id                                   | name | subnets                                             |
+————————————–+——+—————————————————–+
| 1eea88bb-4952-4aa4-9148-18b61c22d5b7 | int  | fa930cea-3d51-4cbe-a305-579f12aa53c0 10.0.0.0/24    |
| 780ce2f3-2e6e-4881-bbac-857813f9a8e0 | ext  | f30e5a16-a055-4388-a6ea-91ee142efc3d 192.168.1.0/24 |
+————————————–+——+—————————————————–+
[root@dfw02 ~(keystone_admin)]$ neutron subnet-list
+————————————–+——+—————-+—————————————————-+
| id                                   | name | cidr           | allocation_pools                                   |
+————————————–+——+—————-+—————————————————-+
| fa930cea-3d51-4cbe-a305-579f12aa53c0 |      | 10.0.0.0/24    | {"start": "10.0.0.2", "end": "10.0.0.254"}         |
| f30e5a16-a055-4388-a6ea-91ee142efc3d |      | 192.168.1.0/24 | {"start": "192.168.1.100", "end": "192.168.1.200"} |
+————————————–+——+—————-+—————————————————-+
[root@dfw02 ~(keystone_admin)]$ neutron floatingip-list
+————————————–+——————+———————+————————————–+
| id                                   | fixed_ip_address | floating_ip_address | port_id                              |
+————————————–+——————+———————+————————————–+
| 9d15609c-9465-4254-bdcb-43f072b6c7d4 | 10.0.0.2         | 192.168.1.101       | e4cb68c4-b932-4c83-86cd-72c75289114a |
| af9c6ba6-e0ca-498e-8f67-b9327f75d93f | 10.0.0.4         | 192.168.1.102       | c6914ab9-6c83-4cb6-bf18-f5b0798ec513 |
+————————————–+——————+———————+————————————–+
[root@dfw02 ~(keystone_admin)]$ neutron floatingip-show af9c6ba6-e0ca-498e-8f67-b9327f75d93f
+———————+————————————–+
| Field               | Value                                |
+———————+————————————–+
| fixed_ip_address    | 10.0.0.4                             |
| floating_ip_address | 192.168.1.102                        |
| floating_network_id | 780ce2f3-2e6e-4881-bbac-857813f9a8e0 |
| id                  | af9c6ba6-e0ca-498e-8f67-b9327f75d93f |
| port_id             | c6914ab9-6c83-4cb6-bf18-f5b0798ec513 |
| router_id           | bf360d81-79fb-4636-8241-0a843f228fc8 |
| tenant_id           | d0a0acfdb62b4cc8a2bfa8d6a08bb62f     |
+———————+————————————–+
[root@dfw02 ~(keystone_admin)]$ neutron floatingip-show  9d15609c-9465-4254-bdcb-43f072b6c7d4
+———————+————————————–+
| Field               | Value                                |
+———————+————————————–+
| fixed_ip_address    | 10.0.0.2                             |
| floating_ip_address | 192.168.1.101                        |
| floating_network_id | 780ce2f3-2e6e-4881-bbac-857813f9a8e0 |
| id                  | 9d15609c-9465-4254-bdcb-43f072b6c7d4 |
| port_id             | e4cb68c4-b932-4c83-86cd-72c75289114a |
| router_id           | bf360d81-79fb-4636-8241-0a843f228fc8 |
| tenant_id           | d0a0acfdb62b4cc8a2bfa8d6a08bb62f     |
+———————+————————————–+
Snapshot :- [ screenshot omitted ]

*****************************************
Configuring Cinder to Add GlusterFS
*****************************************

# gluster volume create cinder-volumes05 replica 2 dfw02.localdomain:/data1/cinder5 dfw01.localdomain:/data1/cinder5
# gluster volume start cinder-volumes05
# gluster volume set cinder-volumes05 auth.allow 192.168.1.*
# openstack-config --set /etc/cinder/cinder.conf DEFAULT volume_driver cinder.volume.drivers.glusterfs.GlusterfsDriver
# openstack-config --set /etc/cinder/cinder.conf DEFAULT glusterfs_shares_config /etc/cinder/shares.conf
# openstack-config --set /etc/cinder/cinder.conf DEFAULT glusterfs_mount_point_base /var/lib/cinder/volumes
# vi /etc/cinder/shares.conf

192.168.1.127:cinder-volumes05

:wq
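Before wiring Cinder to the share, it is worth confirming that the volume is really up and replicating across both bricks:

# gluster volume info cinder-volumes05
# gluster volume status cinder-volumes05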

Update /etc/sysconfig/iptables:

-A INPUT -p tcp -m multiport --dport 24007:24047 -j ACCEPT
-A INPUT -p tcp --dport 111 -j ACCEPT
-A INPUT -p udp --dport 111 -j ACCEPT
-A INPUT -p tcp -m multiport --dport 38465:38485 -j ACCEPT

Comment out:

-A FORWARD -j REJECT --reject-with icmp-host-prohibited
-A INPUT -j REJECT --reject-with icmp-host-prohibited

# service iptables restart

To mount the gluster volume for the cinder backend in the current setup:

# losetup -fv /cinder-volumes
# cinder delete a94b97f5-120b-40bd-b59e-8962a5cb6296

The lines above delete testvol1, created by Kashyap. Ignoring this step would cause openstack-cinder-volume to fail to restart in a particular situation.

# for i in api scheduler volume ; do service openstack-cinder-${i} restart ; done

Verification of service status :-

[root@dfw02 cinder(keystone_admin)]$ service openstack-cinder-volume status -l
Redirecting to /bin/systemctl status  -l openstack-cinder-volume.service
openstack-cinder-volume.service – OpenStack Cinder Volume Server
   Loaded: loaded (/usr/lib/systemd/system/openstack-cinder-volume.service; enabled)
   Active: active (running) since Sat 2014-01-25 07:43:10 MSK; 6s ago
 Main PID: 21727 (cinder-volume)
   CGroup: /system.slice/openstack-cinder-volume.service
           ├─21727 /usr/bin/python /usr/bin/cinder-volume --config-file /usr/share/cinder/cinder-dist.conf --config-file /etc/cinder/cinder.conf --logfile /var/log/cinder/volume.log
           ├─21736 /usr/bin/python /usr/bin/cinder-volume --config-file /usr/share/cinder/cinder-dist.conf --config-file /etc/cinder/cinder.conf --logfile /var/log/cinder/volume.log
           └─21793 /usr/sbin/glusterfs --volfile-id=cinder-volumes05 --volfile-server=192.168.1.127 /var/lib/cinder/volumes/62f75cf6996a8a6bcc0d343be378c10a
Jan 25 07:43:10 dfw02.localdomain systemd[1]: Started OpenStack Cinder Volume Server.
Jan 25 07:43:11 dfw02.localdomain cinder-volume[21727]: 2014-01-25 07:43:11.402 21736 WARNING cinder.volume.manager [req-69c0060b-b5bf-4bce-8a8e-f2218dec7638 None None] Unable to update stats, driver is uninitialized
Jan 25 07:43:11 dfw02.localdomain sudo[21754]: cinder : TTY=unknown ; PWD=/ ; USER=root ; COMMAND=/bin/cinder-rootwrap /etc/cinder/rootwrap.conf mount -t glusterfs 192.168.1.127:cinder-volumes05 /var/lib/cinder/volumes/62f75cf6996a8a6bcc0d343be378c10a
Jan 25 07:43:11 dfw02.localdomain sudo[21803]: cinder : TTY=unknown ; PWD=/ ; USER=root ; COMMAND=/bin/cinder-rootwrap /etc/cinder/rootwrap.conf df --portability --block-size 1 /var/lib/cinder/volumes/62f75cf6996a8a6bcc0d343be378c10a

[root@dfw02 cinder(keystone_admin)]$ df -h
Filesystem                      Size  Used Avail Use% Mounted on
/dev/mapper/fedora00-root        96G  7.4G   84G   9% /
devtmpfs                        3.9G     0  3.9G   0% /dev
tmpfs                           3.9G  152K  3.9G   1% /dev/shm
tmpfs                           3.9G  1.2M  3.9G   1% /run
tmpfs                           3.9G     0  3.9G   0% /sys/fs/cgroup
tmpfs                           3.9G  184K  3.9G   1% /tmp
/dev/sda5                       477M  101M  347M  23% /boot
/dev/mapper/fedora00-data1       77G   53M   73G   1% /data1
tmpfs                           3.9G  1.2M  3.9G   1% /run/netns
192.168.1.127:cinder-volumes05   77G   52M   73G   1% /var/lib/cinder/volumes/62f75cf6996a8a6bcc0d343be378c10a

At runtime on Compute Node :-

[root@dfw01 ~]# df -h
Filesystem                      Size  Used Avail Use% Mounted on
/dev/mapper/fedora-root          96G   54G   38G  59% /
devtmpfs                        3.9G     0  3.9G   0% /dev
tmpfs                           3.9G  484K  3.9G   1% /dev/shm
tmpfs                           3.9G  1.3M  3.9G   1% /run
tmpfs                           3.9G     0  3.9G   0% /sys/fs/cgroup
tmpfs                           3.9G   36K  3.9G   1% /tmp
/dev/sda5                       477M  121M  327M  27% /boot
/dev/mapper/fedora-data1         77G  6.7G   67G  10% /data1
192.168.1.127:cinder-volumes05   77G  6.7G   67G  10% /var/lib/nova/mnt/62f75cf6996a8a6bcc0d343be378c10a

[root@dfw02 ~(keystone_admin)]$ nova image-list
+————————————–+——————+——–+——–+
| ID                                   | Name             | Status | Server |
+————————————–+——————+——–+——–+
| dc992799-7831-4933-b6ee-7b81868f808b | CirrOS31         | ACTIVE |        |
| 03c9ad20-b0a3-4b71-aa08-2728ecb66210 | Fedora 19 x86_64 | ACTIVE |        |
+————————————–+——————+——–+——–+

[root@dfw02 ~(keystone_admin)]$ cinder create --image-id 03c9ad20-b0a3-4b71-aa08-2728ecb66210 \
> --display-name Fedora19VLG 7

+———————+————————————–+
|       Property      |                Value                 |
+———————+————————————–+
|     attachments     |                  []                  |
|  availability_zone  |                 nova                 |
|       bootable      |                false                 |
|      created_at     |      2014-01-25T03:45:21.124690      |
| display_description |                 None                 |
|     display_name    |             Fedora19VLG              |
|          id         | 5f0f096b-192a-435b-bdbc-5063ed5c6366 |
|       image_id      | 03c9ad20-b0a3-4b71-aa08-2728ecb66210 |
|       metadata      |                  {}                  |
|         size        |                  7                   |
|     snapshot_id     |                 None                 |
|     source_volid    |                 None                 |
|        status       |               creating               |
|     volume_type     |                 None                 |
+———————+————————————–+

[root@dfw02 cinder5(keystone_admin)]$ cinder list

+--------------------------------------+-----------+--------------+------+-------------+----------+-------------+
|                  ID                  |   Status  | Display Name | Size | Volume Type | Bootable | Attached to |
+--------------------------------------+-----------+--------------+------+-------------+----------+-------------+
| 5f0f096b-192a-435b-bdbc-5063ed5c6366 | available | Fedora19VLG  |  7   |     None    |   true   |             |
+--------------------------------------+-----------+--------------+------+-------------+----------+-------------+

**********************************************************************************
UPDATE on 03/09/2014. In the meantime, I am able to load an instance via a glusterfs cinder volume only via the command:
**********************************************************************************
[root@dallas1 ~(keystone_boris)]$ nova boot --flavor 2 --user-data=./myfile.txt --block-device source=image,id=d0e90250-5814-4685-9b8d-65ec9daa7117,dest=volume,size=5,shutdown=preserve,bootindex=0 VF20RS012

***********************************************************************************
Update on 03/11/2014.
***********************************************************************************
The standard schema via `cinder create --image-id IMAGE_ID --display_name VOL_NAME SIZE` && `nova boot --flavor 2 --user-data=./myfile.txt --block_device_mapping vda=VOLUME_ID:::0 INSTANCE_NAME` started to work fine. The schema described in the previous update (03/09/14), on the contrary, stopped working smoothly on glusterfs-based cinder volumes.
    However, even when it ends up with "Error" status, it creates a glusterfs cinder volume (with system_id) which is quite healthy and may be utilized for building a new instance of F20 or Ubuntu 14.04 (whatever the original image was) via CLI or Dashboard. It looks like a kind of bug in Nova & Neutron interprocess communication, I would say in synchronization at boot-up.
     Please view:

“Provide an API for external services to send defined events to the compute service for synchronization. This includes immediate needs for nova-neutron interaction around boot timing and network info updates”
    https://blueprints.launchpad.net/nova/+spec/admin-event-callback-api  
 and bug report :-
    https://bugs.launchpad.net/nova/+bug/1280357
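In practice the workaround is simply to delete the errored instance and boot again from the surviving (healthy) volume; a sketch with hypothetical names and a placeholder volume id:

[root@dallas1 ~(keystone_boris)]$ nova delete VF20RS012
[root@dallas1 ~(keystone_boris)]$ cinder list
[root@dallas1 ~(keystone_boris)]$ nova boot --flavor 2 --user-data=./myfile.txt --block_device_mapping vda=<volume-id>:::0 VF20RS013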

Loading an instance via a created volume on Glusterfs:

[root@dfw02 ~(keystone_admin)]$ nova boot --flavor 2 --user-data=./myfile.txt --block_device_mapping vda=5f0f096b-192a-435b-bdbc-5063ed5c6366:::0 VF19VLGL

+————————————–+—————————————————-+
| Property                             | Value                                              |
+————————————–+—————————————————-+
| OS-EXT-STS:task_state                | scheduling                                         |
| image                                | Attempt to boot from volume – no image supplied    |
| OS-EXT-STS:vm_state                  | building                                           |
| OS-EXT-SRV-ATTR:instance_name        | instance-00000005                                  |
| OS-SRV-USG:launched_at               | None                                               |
| flavor                               | m1.small                                           |
| id                                   | 5aa903c5-624d-4dde-9e3c-49996d4a5edc               |
| security_groups                      | [{u'name': u'default'}]                            |
| user_id                              | 970ed56ef7bc41d59c54f5ed8a1690dc                   |
| OS-DCF:diskConfig                    | MANUAL                                             |
| accessIPv4                           |                                                    |
| accessIPv6                           |                                                    |
| progress                             | 0                                                  |
| OS-EXT-STS:power_state               | 0                                                  |
| OS-EXT-AZ:availability_zone          | nova                                               |
| config_drive                         |                                                    |
| status                               | BUILD                                              |
| updated                              | 2014-01-25T03:59:12Z                               |
| hostId                               |                                                    |
| OS-EXT-SRV-ATTR:host                 | None                                               |
| OS-SRV-USG:terminated_at             | None                                               |
| key_name                             | None                                               |
| OS-EXT-SRV-ATTR:hypervisor_hostname  | None                                               |
| name                                 | VF19VLGL                                           |
| adminPass                            | Aq4LBKP9rBGF                                       |
| tenant_id                            | d0a0acfdb62b4cc8a2bfa8d6a08bb62f                   |
| created                              | 2014-01-25T03:59:12Z                               |
| os-extended-volumes:volumes_attached | [{u'id': u'5f0f096b-192a-435b-bdbc-5063ed5c6366'}] |
| metadata                             | {}                                                 |
+————————————–+—————————————————-+

In just a second the new instance will be booted via the created volume on Glusterfs (Fedora 20: Qemu 1.6, Libvirt 1.1.3).

[root@dfw02 ~(keystone_admin)]$ nova list

+————————————–+———–+———–+————+————-+—————————–+
| ID                                   | Name      | Status    | Task State | Power State | Networks                    |
+————————————–+———–+———–+————+————-+—————————–+
| 3f2db906-567c-48b0-967e-799b2bffe277 | Cirros312 | SUSPENDED | None       | Shutdown    | int=10.0.0.2, 192.168.1.101 |
| aaeada4a-6a83-4cbc-ac8b-96a8b1fa81ad | VF19GL    | SUSPENDED | None       | Shutdown    | int=10.0.0.5, 192.168.1.103 |
| 5aa903c5-624d-4dde-9e3c-49996d4a5edc | VF19VLGL  | ACTIVE    | None       | Running     | int=10.0.0.6                |
+————————————–+———–+———–+————+————-+—————————–+

[root@dfw02 ~(keystone_admin)]$ neutron port-list --device-id 5aa903c5-624d-4dde-9e3c-49996d4a5edc

+--------------------------------------+------+-------------------+---------------------------------------------------------------------------------+
| id                                   | name | mac_address       | fixed_ips                                                                       |
+--------------------------------------+------+-------------------+---------------------------------------------------------------------------------+
| 7196be1f-9216-4bfd-ac8b-9903780936d9 |      | fa:16:3e:4b:97:90 | {"subnet_id": "fa930cea-3d51-4cbe-a305-579f12aa53c0", "ip_address": "10.0.0.6"} |
+--------------------------------------+------+-------------------+---------------------------------------------------------------------------------+

[root@dfw02 ~(keystone_admin)]$ neutron floatingip-list

+————————————–+——————+———————+————————————–+
| id                                   | fixed_ip_address | floating_ip_address | port_id                              |
+————————————–+——————+———————+————————————–+
| 04ccafab-1878-44f6-b5ab-a1e2ea1faa97 | 10.0.0.5         | 192.168.1.103       | 1d10dc02-c0f2-4225-ae61-db281f3af69c |
| 9d15609c-9465-4254-bdcb-43f072b6c7d4 | 10.0.0.2         | 192.168.1.101       | e4cb68c4-b932-4c83-86cd-72c75289114a |
| af9c6ba6-e0ca-498e-8f67-b9327f75d93f | 10.0.0.4         | 192.168.1.102       | c6914ab9-6c83-4cb6-bf18-f5b0798ec513 |
| c8bb83a0-73cd-4cc1-9dc1-5f1f6c74e86e |                  | 192.168.1.104       |                                      |
+————————————–+——————+———————+————————————–+

[root@dfw02 ~(keystone_admin)]$ neutron floatingip-associate c8bb83a0-73cd-4cc1-9dc1-5f1f6c74e86e 7196be1f-9216-4bfd-ac8b-9903780936d9
Associated floatingip c8bb83a0-73cd-4cc1-9dc1-5f1f6c74e86e

[root@dfw02 ~(keystone_admin)]$ neutron floatingip-show c8bb83a0-73cd-4cc1-9dc1-5f1f6c74e86e

+———————+————————————–+
| Field               | Value                                |
+———————+————————————–+
| fixed_ip_address    | 10.0.0.6                             |
| floating_ip_address | 192.168.1.104                        |
| floating_network_id | 780ce2f3-2e6e-4881-bbac-857813f9a8e0 |
| id                  | c8bb83a0-73cd-4cc1-9dc1-5f1f6c74e86e |
| port_id             | 7196be1f-9216-4bfd-ac8b-9903780936d9 |
| router_id           | bf360d81-79fb-4636-8241-0a843f228fc8 |
| tenant_id           | d0a0acfdb62b4cc8a2bfa8d6a08bb62f     |
+———————+————————————–+

[root@dfw02 ~(keystone_admin)]$ ping 192.168.1.104

PING 192.168.1.104 (192.168.1.104) 56(84) bytes of data.

64 bytes from 192.168.1.104: icmp_seq=1 ttl=63 time=4.19 ms
64 bytes from 192.168.1.104: icmp_seq=2 ttl=63 time=1.32 ms
64 bytes from 192.168.1.104: icmp_seq=3 ttl=63 time=1.06 ms
64 bytes from 192.168.1.104: icmp_seq=4 ttl=63 time=1.11 ms
64 bytes from 192.168.1.104: icmp_seq=5 ttl=63 time=1.13 ms
64 bytes from 192.168.1.104: icmp_seq=6 ttl=63 time=1.02 ms
64 bytes from 192.168.1.104: icmp_seq=7 ttl=63 time=1.05 ms
64 bytes from 192.168.1.104: icmp_seq=8 ttl=63 time=1.08 ms
64 bytes from 192.168.1.104: icmp_seq=9 ttl=63 time=0.974 ms
64 bytes from 192.168.1.104: icmp_seq=10 ttl=63 time=1.03 ms

The I/O speed improvement is noticeable at boot-up and in disk operations.

The CentOS 6.5 instance was able to start its own X server in a VNC session from F20; in other words, it was a client of the F20 host's X server (?).

Setting up Ubuntu 13.10 cloud instance

 [root@dfw02 ~(keystone_admin)]$ nova list | grep UbuntuSalamander

| 812d369d-e351-469e-8820-a2d0d8740716 | UbuntuSalamander | ACTIVE    | None       | Running     | int=10.0.0.8, 192.168.1.110 |

 [root@dfw02 ~(keystone_admin)]$ nova show 812d369d-e351-469e-8820-a2d0d8740716

+————————————–+———————————————————-+
| Property                             | Value                                                    |
+————————————–+———————————————————-+
| status                               | ACTIVE                                                   |
| updated                              | 2014-01-31T04:46:30Z                                     |
| OS-EXT-STS:task_state                | None                                                     |
| OS-EXT-SRV-ATTR:host                 | dfw01.localdomain                                        |
| key_name                             | None                                                     |
| image                                | Attempt to boot from volume – no image supplied          |
| int network                          | 10.0.0.8, 192.168.1.110                                  |
| hostId                               | b67c11ccfa8ac8c9ed019934fa650d307f91e7fe7bbf8cd6874f3e01 |
| OS-EXT-STS:vm_state                  | active                                                   |
| OS-EXT-SRV-ATTR:instance_name        | instance-00000016                                        |
| OS-SRV-USG:launched_at               | 2014-01-31T04:46:30.000000                               |
| OS-EXT-SRV-ATTR:hypervisor_hostname  | dfw01.localdomain                                        |
| flavor                               | m1.small (2)                                             |
| id                                   | 812d369d-e351-469e-8820-a2d0d8740716                     |
| security_groups                      | [{u'name': u'default'}]                                  |
| OS-SRV-USG:terminated_at             | None                                                     |
| user_id                              | 970ed56ef7bc41d59c54f5ed8a1690dc                         |
| name                                 | UbuntuSalamander                                         |
| created                              | 2014-01-31T04:46:25Z                                     |
| tenant_id                            | d0a0acfdb62b4cc8a2bfa8d6a08bb62f                         |
| OS-DCF:diskConfig                    | MANUAL                                                   |
| metadata                             | {}                                                       |
| os-extended-volumes:volumes_attached | [{u'id': u'34bdf9d9-5bcc-4b62-8140-919c00fe07df'}]       |
| accessIPv4                           |                                                          |
| accessIPv6                           |                                                          |
| progress                             | 0                                                        |
| OS-EXT-STS:power_state               | 1                                                        |
| OS-EXT-AZ:availability_zone          | nova                                                     |
| config_drive                         |                                                          |
+————————————–+———————————————————-+

[root@dfw02 ~(keystone_admin)]$ ssh ubuntu@192.168.1.110
ubuntu@192.168.1.110's password:


Welcome to Ubuntu 13.10 (GNU/Linux 3.11.0-15-generic x86_64)
* Documentation:  https://help.ubuntu.com/
System information as of Fri Jan 31 05:13:19 UTC 2014

System load:  0.08              Processes:           73
Usage of /:   11.4% of 6.86GB   Users logged in:     1
Memory usage: 3%                IP address for eth0: 10.0.0.8
Swap usage:   0%
Graph this data and manage this system at:
https://landscape.canonical.com/
Get cloud support with Ubuntu Advantage Cloud Guest:
http://www.ubuntu.com/business/services/cloud
Last login: Fri Jan 31 05:13:25 2014 from 192.168.1.127

ubuntu@ubuntusalamander:~$ ifconfig
eth0      Link encap:Ethernet  HWaddr fa:16:3e:1e:16:35
inet addr:10.0.0.8  Bcast:10.0.0.255  Mask:255.255.255.0
inet6 addr: fe80::f816:3eff:fe1e:1635/64 Scope:Link
UP BROADCAST RUNNING MULTICAST  MTU:1400  Metric:1
RX packets:854 errors:0 dropped:0 overruns:0 frame:0
TX packets:788 errors:0 dropped:0 overruns:0 carrier:0
collisions:0 txqueuelen:1000
RX bytes:85929 (85.9 KB)  TX bytes:81060 (81.0 KB)

lo        Link encap:Local Loopback
inet addr:127.0.0.1  Mask:255.0.0.0
inet6 addr: ::1/128 Scope:Host
UP LOOPBACK RUNNING  MTU:65536  Metric:1
RX packets:0 errors:0 dropped:0 overruns:0 frame:0
TX packets:0 errors:0 dropped:0 overruns:0 carrier:0
collisions:0 txqueuelen:0
RX bytes:0 (0.0 B)  TX bytes:0 (0.0 B)

Setting up a lightweight X environment on the Ubuntu instance:

$ sudo apt-get install xorg openbox
Reboot.
$ startx
A right mouse click on the desktop opens an X terminal.
$ sudo apt-get install firefox
$ /usr/bin/firefox

Testing a tenant's ability to create networks, routers and instances

[root@dfw02 ~]#  cat  keystonerc_boris
export OS_USERNAME=boris
export OS_TENANT_NAME=ostenant
export OS_PASSWORD=fedora
export OS_AUTH_URL=http://192.168.1.127:35357/v2.0/
export PS1='[\u@\h \W(keystone_boris)]$ '

[root@dfw02 ~]# . keystonerc_boris

[root@dfw02 ~(keystone_boris)]$ neutron net-list

+--------------------------------------+------+---------------------------------------+
| id                                   | name | subnets                               |
+--------------------------------------+------+---------------------------------------+
| 780ce2f3-2e6e-4881-bbac-857813f9a8e0 | ext  | f30e5a16-a055-4388-a6ea-91ee142efc3d  |
+--------------------------------------+------+---------------------------------------+

[root@dfw02 ~(keystone_boris)]$ neutron router-create router2

Created a new router:

+———————–+————————————–+
| Field                 | Value                                |
+———————–+————————————–+
| admin_state_up        | True                                 |
| external_gateway_info |                                      |
| id                    | 86b3008c-297f-4301-9bdc-766b839785f1 |
| name                  | router2                              |
| status                | ACTIVE                               |
| tenant_id             | 4dacfff9e72c4245a48d648ee23468d5     |
+———————–+————————————–+

[root@dfw02 ~(keystone_boris)]$ neutron router-gateway-set router2 ext

Set gateway for router router2

[root@dfw02 ~(keystone_boris)]$  neutron net-create int1

Created a new network:

+—————-+————————————–+
| Field          | Value                                |
+—————-+————————————–+
| admin_state_up | True                                 |
| id             | 426bb226-0ab9-440d-ba14-05634a17fb2b |
| name           | int1                                 |
| shared         | False                                |
| status         | ACTIVE                               |
| subnets        |                                      |
| tenant_id      | 4dacfff9e72c4245a48d648ee23468d5     |
+—————-+————————————–+

[root@dfw02 ~(keystone_boris)]$ neutron subnet-create int1 40.0.0.0/24 --dns_nameservers list=true 83.221.202.254

Created a new subnet:

+——————+——————————————–+
| Field            | Value                                      |
+——————+——————————————–+
| allocation_pools | {"start": "40.0.0.2", "end": "40.0.0.254"} |
| cidr             | 40.0.0.0/24                                |
| dns_nameservers  | 83.221.202.254                             |
| enable_dhcp      | True                                       |
| gateway_ip       | 40.0.0.1                                   |
| host_routes      |                                            |
| id               | 9e0d457b-c4c4-45cf-84e2-4ac7550f3b06       |
| ip_version       | 4                                          |
| name             |                                            |
| network_id       | 426bb226-0ab9-440d-ba14-05634a17fb2b       |
| tenant_id        | 4dacfff9e72c4245a48d648ee23468d5           |
+——————+——————————————–+

[root@dfw02 ~(keystone_boris)]$  neutron router-interface-add router2 9e0d457b-c4c4-45cf-84e2-4ac7550f3b06

Added interface e031db6b-d0cc-4c57-877b-53b1c6946870 to router router2.

[root@dfw02 ~(keystone_boris)]$ neutron subnet-list

+————————————–+——+————-+——————————————–+
| id                                   | name | cidr        | allocation_pools                           |
+————————————–+——+————-+——————————————–+
| 9e0d457b-c4c4-45cf-84e2-4ac7550f3b06 |      | 40.0.0.0/24 | {"start": "40.0.0.2", "end": "40.0.0.254"} |
+————————————–+——+————-+——————————————–+

[root@dfw02 ~(keystone_boris)]$ neutron security-group-rule-create --protocol icmp \
>   --direction ingress --remote-ip-prefix 0.0.0.0/0 default

Created a new security_group_rule:

+——————-+————————————–+
| Field             | Value                                |
+——————-+————————————–+
| direction         | ingress                              |
| ethertype         | IPv4                                 |
| id                | 4a6deddf-9350-4f98-97d7-a54cf6ebaa9a |
| port_range_max    |                                      |
| port_range_min    |                                      |
| protocol          | icmp                                 |
| remote_group_id   |                                      |
| remote_ip_prefix  | 0.0.0.0/0                            |
| security_group_id | 4b72bcaf-b456-4222-afcc-8885326b96b2 |
| tenant_id         | 4dacfff9e72c4245a48d648ee23468d5     |
+——————-+————————————–+

[root@dfw02 ~(keystone_boris)]$ neutron security-group-rule-create --protocol tcp \
>   --port-range-min 22 --port-range-max 22 \
>   --direction ingress --remote-ip-prefix 0.0.0.0/0 default

Created a new security_group_rule:

+——————-+————————————–+
| Field             | Value                                |
+——————-+————————————–+
| direction         | ingress                              |
| ethertype         | IPv4                                 |
| id                | 7a461936-ffbc-4968-975b-3d27ec975e04 |
| port_range_max    | 22                                   |
| port_range_min    | 22                                   |
| protocol          | tcp                                  |
| remote_group_id   |                                      |
| remote_ip_prefix  | 0.0.0.0/0                            |
| security_group_id | 4b72bcaf-b456-4222-afcc-8885326b96b2 |
| tenant_id         | 4dacfff9e72c4245a48d648ee23468d5     |
+——————-+————————————–+

[root@dfw02 ~(keystone_boris)]$ glance image-list

+————————————–+———————+————-+——————+———–+——–+
| ID                                   | Name                | Disk Format | Container Format | Size      | Status |
+————————————–+———————+————-+——————+———–+——–+
| b9782f24-2f37-4f8d-9467-f622507788bf | CentOS6.5 image     | qcow2       | bare             | 344457216 | active |
| dc992799-7831-4933-b6ee-7b81868f808b | CirrOS31            | qcow2       | bare             | 13147648  | active |
| 03c9ad20-b0a3-4b71-aa08-2728ecb66210 | Fedora 19 x86_64    | qcow2       | bare             | 237371392 | active |
| de93ee44-4085-4111-b022-a7437da8feac | Fedora 20 image     | qcow2       | bare             | 214106112 | active |
| e70591fc-6905-4e57-84b7-4ffa7c001864 | Ubuntu Server 13.10 | qcow2       | bare             | 244514816 | active |
| dd8c11d2-f9b0-4b77-9b5e-534c7ac0fd58 | Ubuntu Trusty image | qcow2       | bare             | 246022144 | active |
+————————————–+———————+————-+——————+———–+——–+

[root@dfw02 ~(keystone_boris)]$ cinder create --image-id de93ee44-4085-4111-b022-a7437da8feac --display_name VF20VLG02 7

+———————+————————————–+
|       Property      |                Value                 |
+———————+————————————–+
|     attachments     |                  []                  |
|  availability_zone  |                 nova                 |
|       bootable      |                false                 |
|      created_at     |      2014-02-21T06:36:21.753407      |
| display_description |                 None                 |
|     display_name    |              VF20VLG02               |
|          id         | c3b09e44-1868-43c6-baaa-1ffcb4b80fb1 |
|       image_id      | de93ee44-4085-4111-b022-a7437da8feac |
|       metadata      |                  {}                  |
|         size        |                  7                   |
|     snapshot_id     |                 None                 |
|     source_volid    |                 None                 |
|        status       |               creating               |
|     volume_type     |                 None                 |
+———————+————————————–+

[root@dfw02 ~(keystone_boris)]$ cinder list

+————————————–+————-+————–+——+————-+———-+————-+
|                  ID                  |    Status   | Display Name | Size | Volume Type | Bootable | Attached to |
+————————————–+————-+————–+——+————-+———-+————-+
| c3b09e44-1868-43c6-baaa-1ffcb4b80fb1 | downloading |  VF20VLG02   |  7   |     None    |  false   |             |
+————————————–+————-+————–+——+————-+———-+————-+

[root@dfw02 ~(keystone_boris)]$ cinder list

+————————————–+———–+————–+——+————-+———-+————-+
|                  ID                  |   Status  | Display Name | Size | Volume Type | Bootable | Attached to |
+————————————–+———–+————–+——+————-+———-+————-+
| c3b09e44-1868-43c6-baaa-1ffcb4b80fb1 | available |  VF20VLG02   |  7   |     None    |   true   |             |
+————————————–+———–+————–+——+————-+———-+————-+

[root@dfw02 ~(keystone_boris)]$ nova boot --flavor 2 --user-data=./myfile.txt --block_device_mapping vda=c3b09e44-1868-43c6-baaa-1ffcb4b80fb1:::0 VF20XWS

+————————————–+—————————————————-+
| Property                             | Value                                              |
+————————————–+—————————————————-+
| status                               | BUILD                                              |
| updated                              | 2014-02-21T06:49:42Z                               |
| OS-EXT-STS:task_state                | scheduling                                         |
| key_name                             | None                                               |
| image                                | Attempt to boot from volume – no image supplied    |
| hostId                               |                                                    |
| OS-EXT-STS:vm_state                  | building                                           |
| OS-SRV-USG:launched_at               | None                                               |
| flavor                               | m1.small                                           |
| id                                   | c4573327-dd99-4e57-941e-3d35aacb637c               |
| security_groups                      | [{u'name': u'default'}]                            |
| OS-SRV-USG:terminated_at             | None                                               |
| user_id                              | 162021e787c54cac906ab3296a386006                   |
| name                                 | VF20XWS                                            |
| adminPass                            | YkPYdW58gz7K                                       |
| tenant_id                            | 4dacfff9e72c4245a48d648ee23468d5                   |
| created                              | 2014-02-21T06:49:42Z                               |
| OS-DCF:diskConfig                    | MANUAL                                             |
| metadata                             | {}                                                 |
| os-extended-volumes:volumes_attached | [{u'id': u'c3b09e44-1868-43c6-baaa-1ffcb4b80fb1'}] |
| accessIPv4                           |                                                    |
| accessIPv6                           |                                                    |
| progress                             | 0                                                  |
| OS-EXT-STS:power_state               | 0                                                  |
| OS-EXT-AZ:availability_zone          | nova                                               |
| config_drive                         |                                                    |
+————————————–+—————————————————-+

[root@dfw02 ~(keystone_boris)]$ nova list

+————————————–+———+——–+————+————-+—————+
| ID                                   | Name    | Status | Task State | Power State | Networks      |
+————————————–+———+——–+————+————-+—————+
| c4573327-dd99-4e57-941e-3d35aacb637c | VF20XWS | ACTIVE | None       | Running     | int1=40.0.0.2 |
+————————————–+———+——–+————+————-+—————+

[root@dfw02 ~(keystone_boris)]$ neutron floatingip-create ext

Created a new floatingip:

+———————+————————————–+
| Field               | Value                                |
+———————+————————————–+
| fixed_ip_address    |                                      |
| floating_ip_address | 192.168.1.115                        |
| floating_network_id | 780ce2f3-2e6e-4881-bbac-857813f9a8e0 |
| id                  | 64dd749f-6127-4d0f-ba51-8a9978b8c211 |
| port_id             |                                      |
| router_id           |                                      |
| tenant_id           | 4dacfff9e72c4245a48d648ee23468d5     |
+———————+————————————–+

[root@dfw02 ~(keystone_boris)]$ neutron port-list --device-id c4573327-dd99-4e57-941e-3d35aacb637c

+————————————–+——+——————-+———————————————————————————+
| id                                   | name | mac_address       | fixed_ips                                                                       |
+————————————–+——+——————-+———————————————————————————+
| 2d6c6569-44c3-44b2-8bed-cdc8dde12336 |      | fa:16:3e:10:a0:e3 | {"subnet_id": "9e0d457b-c4c4-45cf-84e2-4ac7550f3b06", "ip_address": "40.0.0.2"} |
+————————————–+——+——————-+———————————————————————————+

[root@dfw02 ~(keystone_boris)]$ neutron floatingip-associate 64dd749f-6127-4d0f-ba51-8a9978b8c211 2d6c6569-44c3-44b2-8bed-cdc8dde12336

Associated floatingip 64dd749f-6127-4d0f-ba51-8a9978b8c211

[root@dfw02 ~(keystone_boris)]$ neutron floatingip-show 64dd749f-6127-4d0f-ba51-8a9978b8c211

+———————+————————————–+
| Field               | Value                                |
+———————+————————————–+
| fixed_ip_address    | 40.0.0.2                             |
| floating_ip_address | 192.168.1.115                        |
| floating_network_id | 780ce2f3-2e6e-4881-bbac-857813f9a8e0 |
| id                  | 64dd749f-6127-4d0f-ba51-8a9978b8c211 |
| port_id             | 2d6c6569-44c3-44b2-8bed-cdc8dde12336 |
| router_id           | 86b3008c-297f-4301-9bdc-766b839785f1 |
| tenant_id           | 4dacfff9e72c4245a48d648ee23468d5     |
+———————+————————————–+

[root@dfw02 ~(keystone_boris)]$ ping 192.168.1.115

PING 192.168.1.115 (192.168.1.115) 56(84) bytes of data.
64 bytes from 192.168.1.115: icmp_seq=1 ttl=63 time=3.80 ms
64 bytes from 192.168.1.115: icmp_seq=2 ttl=63 time=1.13 ms
64 bytes from 192.168.1.115: icmp_seq=3 ttl=63 time=0.954 ms
64 bytes from 192.168.1.115: icmp_seq=4 ttl=63 time=1.01 ms
64 bytes from 192.168.1.115: icmp_seq=5 ttl=63 time=0.999 ms
64 bytes from 192.168.1.115: icmp_seq=6 ttl=63 time=0.809 ms
64 bytes from 192.168.1.115: icmp_seq=7 ttl=63 time=1.02 ms
^C

The original text of the documents was posted on fedoraproject.org by Kashyap.
   The attached ones are tuned for the new IPs and should no longer contain the typos of the original version. They also contain the MySQL preventive updates currently required for the openstack-nova-compute and neutron-openvswitch-agent remote connections to the Controller node to succeed. The MySQL stuff is mine. All attached *.conf and *.ini files have been updated for my network as well.
   In the meantime I am quite sure that using libvirt's default and non-default networks for creating the Controller and Compute nodes as F20 VMs is not important. The configs allow metadata to be sent from Controller to Compute on real physical boxes. Just one Ethernet adapter per box should be required when using GRE tunnelling for an RDO Havana on Fedora 20 manual setup.
 

References

  1. http://textuploader.com/1hin
  2. http://textuploader.com/1hey
  3. http://kashyapc.fedorapeople.org/virt/openstack/neutron-configs-GRE-OVS-two-node.txt
  4. http://bderzhavets.blogspot.com/2014/01/setting-up-two-physical-node-openstack.html