Creating Servers via REST API on RDO Mitaka && Keystone API V3

April 29, 2016

As usual, an ssh keypair for a particular tenant is supposed to be created after sourcing that tenant's credentials, and afterwards it works for that tenant. For some reason, upgrading the Keystone API version to v3 breaks this scheme with regard to REST API POST requests issued for server creation. I am not sure whether what follows below is a workaround or whether it is supposed to work this way.

Assign the admin role to user admin on project demo via the openstack client

[root@ip-192-169-142-127 ~(keystone_admin)]# openstack project list| \
grep demo > list2

[root@ip-192-169-142-127 ~(keystone_admin)]# openstack user list| \
grep admin >> list2

[root@ip-192-169-142-127 ~(keystone_admin)]# openstack role list|\
grep admin >> list2

[root@ip-192-169-142-127 ~(keystone_admin)]# cat list2
| 052b16e56537467d8161266b52a43b54 | demo |
| b6f2f511caa44f4e94ce5b2a5809dc50 | admin |
| f40413a0de92494680ed8b812f2bf266 | admin |

[root@ip-192-169-142-127 ~(keystone_admin)]# openstack role add \
--project \
052b16e56537467d8161266b52a43b54 \
--user b6f2f511caa44f4e94ce5b2a5809dc50 \
f40413a0de92494680ed8b812f2bf266
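
For reference, the openstack client also accepts names instead of UUIDs here, so the same assignment can usually be written more compactly (a sketch, assuming the default domain):

# openstack role add --project demo --user admin admin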

*********************************************************************
Run the following to obtain a token scoped to project "demo"
*********************************************************************

# . keystonerc_admin
# curl -i -H "Content-Type: application/json" -d \
' { "auth":
{ "identity":
{ "methods": ["password"], "password":
{ "user":
{ "name": "admin", "domain":
{ "id": "default" }, "password": "7049f834927e4468" }
}
},
"scope":
{ "project":
{ "name": "demo", "domain":
{ "id": "default" }
}
}
}
}' http://192.169.142.127:5000/v3/auth/tokens ; echo
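
With Keystone v3 the scoped token itself comes back in the X-Subject-Token response header rather than in the JSON body. A minimal sketch for capturing it from the shell, assuming the JSON body above has been saved to a file such as token-demo.json (the file name is used here only for illustration):

# curl -si -H "Content-Type: application/json" -d @token-demo.json \
  http://192.169.142.127:5000/v3/auth/tokens | awk '/X-Subject-Token/ {print $2}' | tr -d '\r'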

Screenshot from 2016-04-28 19-47-00

The ssh keypair "oskeydemoV3" was created while sourcing keystonerc_admin

Screenshot from 2016-04-28 19-50-02

Admin Console shows

Screenshot from 2016-04-28 20-28-57

***************************************************************************************
Submit "oskeydemoV3" as the value of key_name in the Chrome REST Client environment && issue the POST request to create the server; "key_name" will be accepted ( vs. the case when the ssh keypair was created by tenant demo )
*************************************************************************************

Screenshot from 2016-04-28 19-52-24

Now log into the dashboard as user demo

Screenshot from 2016-04-28 19-56-25

Verify that the created keypair "oskeydemoV3" allows logging into the server

Screenshot from 2016-04-28 19-58-56


Creating Servers via REST API on RDO Mitaka via Chrome Advanced REST Client

April 21, 2016

In the post below we demonstrate the Chrome Advanced REST Client successfully issuing REST API POST requests to create RDO Mitaka servers (VMs), as well as getting information about servers via GET requests. All required HTTP headers are configured in the GUI environment, as is the request body field for server creation.

The installed Keystone API version is v2.0.

Following [ 1 ], to authenticate access to OpenStack services you are supposed first of all to issue an authentication request. If the request succeeds, the server returns an authentication token.

Source keystonerc_demo on the Controller or on the Compute node (it doesn't
matter). Then run this cURL command to request a token:

curl -s -X POST http://192.169.142.54:5000/v2.0/tokens \
-H "Content-Type: application/json" \
-d '{"auth": {"tenantName": "'"$OS_TENANT_NAME"'", "passwordCredentials": {"username": "'"$OS_USERNAME"'", "password": "'"$OS_PASSWORD"'"}}}' \
| python -m json.tool

to get an authentication token, and scroll down to the bottom :-

"token": {
    "audit_ids": [
        "ce1JojlRSiO6TmMTDW3QNQ"
    ],
    "expires": "2016-04-21T18:26:28Z",
    "id": "0cfb3ec7a10c4f549a3dc138cf8a270a",      <== X-Auth-Token
    "issued_at": "2016-04-21T17:26:28.246724Z",
    "tenant": {
        "description": "default tenant",
        "enabled": true,
        "id": "1578b57cfd8d43278098c5266f64e49f",  <=== Demo tenant's id
        "name": "demo"
    }
},
"user": {
    "id": "8e1e992eee474c3ab7a08ffde678e35b",
    "name": "demo",
    "roles": [
        {
            "name": "heat_stack_owner"
        },
        {
            "name": "_member_"
        }
    ],
    "roles_links": [],
    "username": "demo"
}
}
}
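
Instead of scrolling through the output, the token id and the tenant id can be pulled out programmatically; a small sketch reusing the same request (the python one-liner is only an illustration):

curl -s -X POST http://192.169.142.54:5000/v2.0/tokens \
-H "Content-Type: application/json" \
-d '{"auth": {"tenantName": "'"$OS_TENANT_NAME"'", "passwordCredentials": {"username": "'"$OS_USERNAME"'", "password": "'"$OS_PASSWORD"'"}}}' \
| python -c 'import sys,json; t=json.load(sys.stdin)["access"]["token"]; print(t["id"]); print(t["tenant"]["id"])'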

********************************************************************************************
The original request to obtain a token may be issued via the Chrome Advanced REST Client as well
********************************************************************************************

Scrolling down shows the returned token and demo's tenant id

Required output

{
    "access": {
        "token": {
            "issued_at": "2016-04-21T21:56:52.668252Z",
            "expires": "2016-04-21T22:56:52Z",
            "id": "dd119ea14e97416b834ca72aab7f8b5a",
            "tenant": {
                "description": "default tenant",
                "enabled": true,
                "id": "1578b57cfd8d43278098c5266f64e49f",
                "name": "demo"
            }

*****************************************************************************
Next, create an ssh keypair via the CLI or dashboard for the particular tenant :-
*****************************************************************************
nova keypair-add oskeymitaka0417 > oskeymitaka0417.pem
chmod 600 *.pem
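
Once a server booted with this key_name gets a floating IP, the private key allows logging in; for example (the IP is a placeholder and the login user depends on the image, e.g. cirros, fedora or ubuntu):

ssh -i oskeymitaka0417.pem cirros@<floating-ip-of-server>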

******************************************************************************************
Below are a couple of sample REST API POST requests starting servers, as they are usually issued and described.
******************************************************************************************

curl -g -i -X POST http://192.169.142.54:8774/v2/1578b57cfd8d43278098c5266f64e49f/servers -H "User-Agent: python-novaclient" -H "Content-Type: application/json" -H "Accept: application/json" -H "X-Auth-Token: 0cfb3ec7a10c4f549a3dc138cf8a270a" -d '{"server": {"name": "CirrOSDevs03", "key_name" : "oskeymitaka0417", "imageRef": "2e148cd0-7dac-49a7-8a79-2efddbd83852", "flavorRef": "1", "max_count": 1, "min_count": 1, "networks": [{"uuid": "e7c90970-c304-4f51-9d65-4be42318487c"}], "security_groups": [{"name": "default"}]}}'

curl -g -i -X POST http://192.169.142.54:8774/v2/1578b57cfd8d43278098c5266f64e49f/servers -H "User-Agent: python-novaclient" -H "Content-Type: application/json" -H "Accept: application/json" -H "X-Auth-Token: 0cfb3ec7a10c4f549a3dc138cf8a270a" -d '{"server": {"name": "VF23Devs03", "key_name" : "oskeymitaka0417", "imageRef": "5b00b1a8-30d1-4e9d-bf7d-5f1abed5173b", "flavorRef": "2", "max_count": 1, "min_count": 1, "networks": [{"uuid": "e7c90970-c304-4f51-9d65-4be42318487c"}], "security_groups": [{"name": "default"}]}}'
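
A matching GET request may be used to check the result; a sketch using the same tenant id and X-Auth-Token as in the POST samples above:

curl -g -s -X GET http://192.169.142.54:8774/v2/1578b57cfd8d43278098c5266f64e49f/servers/detail \
-H "Accept: application/json" -H "X-Auth-Token: 0cfb3ec7a10c4f549a3dc138cf8a270a" | python -m json.tool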

**********************************************************************************
We are going to issue the REST API POST requests creating servers
via the Chrome Advanced REST Client
**********************************************************************************

[root@ip-192-169-142-54 ~(keystone_demo)]# glance image-list

+————————————–+———————–+
| ID                                   | Name                  |
+————————————–+———————–+
| 28b590fa-05c8-4706-893a-54efc4ca8cd6 | cirros                |
| 9c78c3da-b25b-4b26-9d24-514185e99c00 | Ubuntu1510Cloud-image |
| a050a122-a1dc-40d0-883f-25617e452d90 | VF23Cloud-image       |
+————————————–+———————–+

[root@ip-192-169-142-54 ~(keystone_demo)]# neutron net-list
+————————————–+————–+—————————————-+
| id                                   | name         | subnets                                |
+————————————–+————–+—————————————-+
| 43daa7c3-4e04-4661-8e78-6634b06d63f3 | public       | 71e0197b-fe9a-4643-b25f-65424d169492   |
|                                      |              | 192.169.142.0/24                       |
| 292a2f21-70af-48ef-b100-c0639a8ffb22 | demo_network | d7aa6f0f-33ba-430d-a409-bd673bed7060   |
|                                      |              | 50.0.0.0/24                            |
+————————————–+————–+—————————————-+

First, the required headers were created in the corresponding fields, and the
following fragment was placed in the Raw Payload area of the Chrome Client

{"server":
{"name": "VF23Devs03",
"key_name" : "oskeymitaka0420",
"imageRef" : "a050a122-a1dc-40d0-883f-25617e452d90",
"flavorRef": "2",
"max_count": 1,
"min_count": 1,
"networks": [{"uuid": "292a2f21-70af-48ef-b100-c0639a8ffb22"}],
"security_groups": [{"name": "default"}]
}
}

Launching Fedora 23 Server :-

Next, an Ubuntu 15.10 server (VM) will be created by changing the image id in the Advanced REST Client GUI environment
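
The payload differs from the previous one only in the server name and in imageRef, which now points at the Ubuntu1510Cloud-image id from the glance image-list above (a sketch; the server name is arbitrary):

{"server":
{"name": "Ubuntu1510Devs03",
"key_name" : "oskeymitaka0420",
"imageRef" : "9c78c3da-b25b-4b26-9d24-514185e99c00",
"flavorRef": "2",
"max_count": 1,
"min_count": 1,
"networks": [{"uuid": "292a2f21-70af-48ef-b100-c0639a8ffb22"}],
"security_groups": [{"name": "default"}]
}
}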

Make sure that servers have been created and are currently up and running

***************************************************************************************
Now launch the Chrome REST Client again to verify the servers via a GET request
***************************************************************************************


Neutron workflow for Docker Hypervisor running on DVR Cluster RDO Mitaka in an appropriate amount of detail && HA support for Glance storage used to load nova-docker instances

April 6, 2016

Why does DVR come into the picture ?

This refreshes in memory a similar problem with the Nova-Docker driver (Kilo), with which I had the same kind of problems (VXLAN connection Controller <==> Compute) on F22 (OVS 2.4.0), while the same driver worked fine on CentOS 7.1 (OVS 2.3.1). My guess is that the Nova-Docker driver has a problem with OVS 2.4.0, no matter which of the stable/kilo, stable/liberty, stable/mitaka branches is checked out for the driver build.

I have to note that the issue is related specifically to the ML2&OVS&VXLAN setup; an RDO Mitaka ML2&OVS&VLAN deployment works with Nova-Docker (stable/mitaka) with no problems.

I have not run ovs-ofctl dump-flows on the br-tun bridges and so on, because even having proved the malfunction I cannot file it to BZ. The Nova-Docker driver is not packaged for RDO, so it's upstream stuff. Upstream won't consider an issue which involves building the driver from source on RDO Mitaka (RC1).

Thus, as a quick and efficient workaround, I suggest a DVR deployment setup. It results in South-North traffic being forwarded straight from the host running the Docker Hypervisor to the Internet and vice versa, due to the basic "fg" functionality ( the outgoing interface of the fip-namespace, residing on the Compute node with the L3 agent running in "dvr" agent_mode ).

**************************
Procedure in details
**************************

First install repositories for RDO Mitaka (the most recent build passed CI):-
# yum -y install yum-plugin-priorities
# cd /etc/yum.repos.d
# curl -O https://trunk.rdoproject.org/centos7-mitaka/delorean-deps.repo
# curl -O https://trunk.rdoproject.org/centos7-mitaka/current-passed-ci/delorean.repo
# yum -y install openstack-packstack (Controller only)

Now proceed as follows :-

1. Here is   Answer file to deploy pre DVR Cluster
2. See pre-deployment actions to be undertaken on Controller/Storage Node  

Before the DVR setup, switch the Glance back end to Swift ( Swift is configured in the answer file as follows )

CONFIG_SWIFT_STORAGES=/dev/vdb1,/dev/vdc1,/dev/vdd1
CONFIG_SWIFT_STORAGE_ZONES=3
CONFIG_SWIFT_STORAGE_REPLICAS=3
CONFIG_SWIFT_STORAGE_FSTYPE=xfs
CONFIG_SWIFT_HASH=a55607bff10c4210
CONFIG_SWIFT_STORAGE_SIZE=10G

Upon setup completion, on the storage node :-

[root@ip-192-169-142-127 ~]# df -h
Filesystem               Size  Used Avail Use% Mounted on
/dev/mapper/centos-root   45G  5.3G   40G  12% /
devtmpfs                 2.8G     0  2.8G   0% /dev
tmpfs                    2.8G  204K  2.8G   1% /dev/shm
tmpfs                    2.8G   25M  2.8G   1% /run
tmpfs                    2.8G     0  2.8G   0% /sys/fs/cgroup
/dev/vdc1                 10G  2.5G  7.5G  25% /srv/node/vdc1
/dev/vdb1                 10G  2.5G  7.5G  25% /srv/node/vdb1
/dev/vdd1                 10G  2.5G  7.5G  25% /srv/node/vdd1

/dev/vda1                497M  211M  286M  43% /boot
tmpfs                    567M  4.0K  567M   1% /run/user/42
tmpfs                    567M  8.0K  567M   1% /run/user/1000

****************************
Update  glance-api.conf
****************************

[glance_store]
stores = swift
default_store = swift
swift_store_auth_address = http://192.169.142.127:5000/v2.0/
swift_store_user = services:glance
swift_store_key = f6a9398960534797 

swift_store_create_container_on_put = True
os_region_name=RegionOne

# openstack-service restart glance

# keystone user-role-add --tenant_id=$UUID_SERVICES_TENANT \
--user=$UUID_GLANCE_USER --role=$UUID_ResellerAdmin_ROLE

The value f6a9398960534797 corresponds to CONFIG_GLANCE_KS_PW in the answer file, i.e. the Keystone glance password used for authentication
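
To confirm that Glance now stores images in Swift, the per-image chunks can be listed in the glance container after any upload (a quick check, mirroring the verification used later in this post):

# . keystonerc_glance
# swift list glance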

2. Convert the cluster to DVR as advised in "RDO Liberty DVR Neutron workflow on CentOS 7.2":
http://dbaxps.blogspot.com/2015/10/rdo-liberty-rc-dvr-deployment.html
Just one note: on RDO Mitaka, on each compute node run

# ovs-vsctl add-br br-ex
# ovs-vsctl add-port br-ex eth0

Then configure

***********************************************************
On Controller (X=2) and Computes X=(3,4) update :-
***********************************************************

# cat ifcfg-br-ex
DEVICE="br-ex"
BOOTPROTO="static"
IPADDR="192.169.142.1(X)7"
NETMASK="255.255.255.0"
DNS1="83.221.202.254"
BROADCAST="192.169.142.255"
GATEWAY="192.169.142.1"
NM_CONTROLLED="no"
DEFROUTE="yes"
IPV4_FAILURE_FATAL="yes"
IPV6INIT=no
ONBOOT="yes"
TYPE="OVSIntPort"
OVS_BRIDGE=br-ex
DEVICETYPE="ovs"

# cat ifcfg-eth0
DEVICE="eth0"
ONBOOT="yes"
TYPE="OVSPort"
DEVICETYPE="ovs"
OVS_BRIDGE=br-ex
NM_CONTROLLED=no
IPV6INIT=no

***************************
Then run script
***************************

#!/bin/bash -x
chkconfig network on
systemctl stop NetworkManager
systemctl disable NetworkManager
service network restart

Reboot node.

**********************************************
Nova-Docker Setup on each Compute
**********************************************

# curl -sSL https://get.docker.com/ | sh
# usermod -aG docker nova      ( seems not help to set 660 for docker.sock )
# systemctl start docker
# systemctl enable docker
# chmod 666  /var/run/docker.sock (add to /etc/rc.d/rc.local)
# easy_install pip
# git clone -b stable/mitaka   https://github.com/openstack/nova-docker

*******************
Driver build
*******************

# cd nova-docker
# pip install -r requirements.txt
# python setup.py install

********************************************
Switch nova-compute to DockerDriver
********************************************
vi /etc/nova/nova.conf
compute_driver=novadocker.virt.docker.DockerDriver

******************************************************************
Next on Controller/Network Node and each Compute Node
******************************************************************

mkdir /etc/nova/rootwrap.d
vi /etc/nova/rootwrap.d/docker.filters

[Filters]
# nova/virt/docker/driver.py: 'ln', '-sf', '/var/run/netns/.*'
ln: CommandFilter, /bin/ln, root

**********************************************************
Nova Compute Service restart on Compute Nodes
**********************************************************
# systemctl restart openstack-nova-compute
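
At this point it is worth confirming that the compute node really switched to the Docker hypervisor; a sketch (the id comes from the first command, and with the DockerDriver in place the hypervisor type should show up as docker):

# . keystonerc_admin
# nova hypervisor-list
# nova hypervisor-show <hypervisor-id> | grep hypervisor_type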

***********************************************
Glance API Service restart on Controller
**********************************************
vi /etc/glance/glance-api.conf

container_formats=ami,ari,aki,bare,ovf,ova,docker
# systemctl restart openstack-glance-api

**************************************************
Network flow on Compute in a bit more details
**************************************************

When a floating IP gets assigned to a VM, what actually happens is the following ( [1] ) :-

The same explanation may be found in ([4]); only the style there is not step by step, and in particular it contains a detailed description of the reverse network flow and the Proxy ARP functionality.

1. The fip- namespace is created on the local compute node
(if it does not already exist)
2. A new port rfp- gets created on the qrouter- namespace
(if it does not already exist)
3. The rfp port on the qrouter namespace is assigned the associated floating IP address
4. The fpr port on the fip namespace gets created and linked via a point-to-point network to the rfp port of the qrouter namespace
5. The fip namespace gateway port fg- is assigned an additional address
from the public network range to set up the ARP proxy point
6. The fg- port is configured as a Proxy ARP

*********************
Flow itself  ( [1] ):
*********************

1. The VM initiating the transmission sends a packet via the default gateway,
and br-int forwards the traffic to the local DVR gateway port (qr-).
2. DVR routes the packet using its routing table to the rfp- port.
3. The NAT rule is applied to the packet, replacing the source IP of the VM
with the assigned floating IP, and then it gets sent through the rfp- port,
which connects to the fip namespace via the point-to-point network
169.254.31.28/31.
4. The packet is received on the fpr- port in the fip namespace
and then routed outside through the fg- port.
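
These namespaces and ports can be inspected directly on the compute node; a sketch (the namespace suffixes are environment-specific):

# ip netns                                      ( expect qrouter-<router-id> and fip-<ext-net-id> )
# ip netns exec fip-<ext-net-id> ip addr        ( shows the fg- and fpr- ports )
# ip netns exec qrouter-<router-id> ip rule     ( source routing towards the rfp- port )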

dvr273   Screenshot from 2016-04-06 22-17-32

[root@ip-192-169-142-137 ~(keystone_demo)]# nova list

+————————————–+—————-+——–+————+————-+—————————————–+
| ID                                   | Name           | Status | Task State | Power State | Networks                                |
+————————————–+—————-+——–+————+————-+—————————————–+
| 957814c1-834e-47e5-9236-ef228455fe36 | UbuntuDevs01   | ACTIVE | –          | Running     | demo_network=50.0.0.12, 192.169.142.151 |
| 65dd55b9-23ea-4e5b-aeed-4db259436df2 | derbyGlassfish | ACTIVE | –          | Running     | demo_network=50.0.0.13, 192.169.142.153 |
| f9311d57-4352-48a6-a042-b36393e0af7a | fedora22docker | ACTIVE | –          | Running     | demo_network=50.0.0.14, 192.169.142.154 |
+————————————–+—————-+——–+————+————-+—————————————–+

[root@ip-192-169-142-137 ~(keystone_demo)]# docker ps

CONTAINER ID        IMAGE                      COMMAND                  CREATED             STATUS              PORTS               NAMES

336679f5bf7a        kumarpraveen/fedora-sshd   “/usr/bin/supervisord”   About an hour ago   Up About an hour                        nova-f9311d57-4352-48a6-a042-b36393e0af7a
8bb2ce01e671        derby/docker-glassfish41   “/sbin/my_init”          2 hours ago         Up 2 hours                              nova-65dd55b9-23ea-4e5b-aeed-4db259436df2
fe5eb55a4c9d        rastasheep/ubuntu-sshd     “/usr/sbin/sshd -D”      3 hours ago         Up 3 hours                              nova-957814c1-834e-47e5-9236-ef228455fe36

[root@ip-192-169-142-137 ~(keystone_demo)]# nova show f9311d57-4352-48a6-a042-b36393e0af7a | grep image
| image                                | kumarpraveen/fedora-sshd (93345f0b-fcbd-41e4-b335-a4ecb8b59e73) |
[root@ip-192-169-142-137 ~(keystone_demo)]# nova show 65dd55b9-23ea-4e5b-aeed-4db259436df2 | grep image
| image                                | derby/docker-glassfish41 (9f2cd9bc-7840-47c1-81e8-3bc0f76426ec) |
[root@ip-192-169-142-137 ~(keystone_demo)]# nova show 957814c1-834e-47e5-9236-ef228455fe36 | grep image
| image                                | rastasheep/ubuntu-sshd (29c057f1-3c7d-43e3-80e6-dc8fef1ea035) |

[root@ip-192-169-142-137 ~(keystone_demo)]# . keystonerc_glance
[root@ip-192-169-142-137 ~(keystone_glance)]# glance image-list

+————————————–+————————–+
| ID                                   | Name                     |

+————————————–+————————–+
| 27551b28-6df7-4b0e-a0c8-322b416092c1 | cirros                   |
| 9f2cd9bc-7840-47c1-81e8-3bc0f76426ec | derby/docker-glassfish41 |
| 93345f0b-fcbd-41e4-b335-a4ecb8b59e73 | kumarpraveen/fedora-sshd |
| 29c057f1-3c7d-43e3-80e6-dc8fef1ea035 | rastasheep/ubuntu-sshd   |
+————————————–+————————–+

[root@ip-192-169-142-137 ~(keystone_glance)]# swift list glance

29c057f1-3c7d-43e3-80e6-dc8fef1ea035
29c057f1-3c7d-43e3-80e6-dc8fef1ea035-00001
29c057f1-3c7d-43e3-80e6-dc8fef1ea035-00002

93345f0b-fcbd-41e4-b335-a4ecb8b59e73
93345f0b-fcbd-41e4-b335-a4ecb8b59e73-00001
93345f0b-fcbd-41e4-b335-a4ecb8b59e73-00002
93345f0b-fcbd-41e4-b335-a4ecb8b59e73-00003
93345f0b-fcbd-41e4-b335-a4ecb8b59e73-00004
93345f0b-fcbd-41e4-b335-a4ecb8b59e73-00005

9f2cd9bc-7840-47c1-81e8-3bc0f76426ec
9f2cd9bc-7840-47c1-81e8-3bc0f76426ec-00001
9f2cd9bc-7840-47c1-81e8-3bc0f76426ec-00002
9f2cd9bc-7840-47c1-81e8-3bc0f76426ec-00003
9f2cd9bc-7840-47c1-81e8-3bc0f76426ec-00004
9f2cd9bc-7840-47c1-81e8-3bc0f76426ec-00005
9f2cd9bc-7840-47c1-81e8-3bc0f76426ec-00006

Screenshot from 2016-04-06 18-08-30     Screenshot from 2016-04-06 18-08-46

Screenshot from 2016-04-06 18-09-28


Setting up Nova-Docker on Multi Node DVR Cluster RDO Mitaka

April 1, 2016

UPDATE 04/03/2016
   In the meantime it is better to use the repositories for RC1
   rather than the Delorean trunks
END UPDATE

DVR && the Nova-Docker driver (stable/mitaka) tested fine on RDO Mitaka (build 20160329) with none of the issues described in the previous post for RDO Liberty.
So, create a DVR deployment with Controller/Network + N(*)Compute Nodes. Switch to the Docker Hypervisor on each Compute Node and make the required updates to glance and the filters file on the Controller. You are all set. Nova-Docker instances' FIP(s) are available from outside via the Neutron Distributed Router (DNAT) using the "fg" interface ( fip-namespace ) residing on the same host as the Docker Hypervisor. South-North traffic does not involve VXLAN tunneling on DVR systems.

Why does DVR come into the picture ?

This refreshes in memory a similar problem with the Nova-Docker driver (Kilo),
with which I had the same kind of problems (VXLAN connection Controller <==> Compute)
on F22 (OVS 2.4.0), while the same driver worked fine on CentOS 7.1 (OVS 2.3.1).
My guess is that the Nova-Docker driver has a problem with OVS 2.4.0,
no matter which of the stable/kilo, stable/liberty, stable/mitaka branches
is checked out for the driver build.

I have not run ovs-ofctl dump-flows on the br-tun bridges and so on,
because even having proved the malfunction I cannot file it to BZ.
The Nova-Docker driver is not packaged for RDO, so it's upstream stuff;
upstream won't consider an issue which involves building the driver from source
on RDO Mitaka (RC1).

Thus, as a quick and efficient workaround, I suggest a DVR deployment setup,
to kill two birds with one stone. It results in South-North traffic
being forwarded straight from the host running the Docker Hypervisor to the Internet
and vice versa, due to the basic "fg" functionality (the outgoing interface of
the fip-namespace, residing on the Compute node with the L3 agent running in "dvr"
agent_mode).

dvr273

**************************
Procedure in details
**************************

First install repositories for RDO Mitaka (the most recent build passed CI):-
# yum -y install yum-plugin-priorities
# cd /etc/yum.repos.d
# curl -O https://trunk.rdoproject.org/centos7-mitaka/delorean-deps.repo
# curl -O https://trunk.rdoproject.org/centos7-mitaka/current-passed-ci/delorean.repo
# yum -y install openstack-packstack (Controller only)

Now proceed as follows :-

1. Here is   Answer file to deploy pre DVR Cluster
2. Convert cluster to DVR as advised in  “RDO Liberty DVR Neutron workflow on CentOS 7.2”  :-

http://dbaxps.blogspot.com/2015/10/rdo-liberty-rc-dvr-deployment.html

Just one note: on RDO Mitaka, on each compute node first create br-ex and add port eth0

# ovs-vsctl add-br br-ex
# ovs-vsctl add-port br-ex eth0

Then configure

*********************************
Compute nodes X=(3,4)
*********************************

# cat ifcfg-br-ex
DEVICE="br-ex"
BOOTPROTO="static"
IPADDR="192.169.142.1(X)7"
NETMASK="255.255.255.0"
DNS1="83.221.202.254"
BROADCAST="192.169.142.255"
GATEWAY="192.169.142.1"
NM_CONTROLLED="no"
DEFROUTE="yes"
IPV4_FAILURE_FATAL="yes"
IPV6INIT=no
ONBOOT="yes"
TYPE="OVSIntPort"
OVS_BRIDGE=br-ex
DEVICETYPE="ovs"

# cat ifcfg-eth0

DEVICE="eth0"
ONBOOT="yes"
TYPE="OVSPort"
DEVICETYPE="ovs"
OVS_BRIDGE=br-ex
NM_CONTROLLED=no
IPV6INIT=no

***************************
Then run script
***************************

#!/bin/bash -x
chkconfig network on
systemctl stop NetworkManager
systemctl disable NetworkManager
service network restart

Reboot node.

**********************************************
Nova-Docker Setup on each Compute
**********************************************

# curl -sSL https://get.docker.com/ | sh
# usermod -aG docker nova      ( seems not help to set 660 for docker.sock )
# systemctl start docker
# systemctl enable docker
# chmod 666  /var/run/docker.sock (add to /etc/rc.d/rc.local)
# easy_install pip
# git clone -b stable/mitaka   https://github.com/openstack/nova-docker

*******************
Driver build
*******************

# cd nova-docker
# pip install -r requirements.txt
# python setup.py install

********************************************
Switch nova-compute to DockerDriver
********************************************

vi /etc/nova/nova.conf
compute_driver=novadocker.virt.docker.DockerDriver

******************************************************************
Next on Controller/Network Node and each Compute Node
******************************************************************

mkdir /etc/nova/rootwrap.d
vi /etc/nova/rootwrap.d/docker.filters

[Filters]
# nova/virt/docker/driver.py: 'ln', '-sf', '/var/run/netns/.*'
ln: CommandFilter, /bin/ln, root

**********************************************************
Nova Compute Service restart on Compute Nodes
**********************************************************
# systemctl restart openstack-nova-compute
***********************************************
Glance API Service restart on Controller
**********************************************
vi /etc/glance/glance-api.conf
container_formats=ami,ari,aki,bare,ovf,ova,docker

# systemctl restart openstack-glance-api

Screenshot from 2016-04-03 12-22-34                                          Screenshot from 2016-04-03 12-57-09                                          Screenshot from 2016-04-03 12-32-41

Screenshot from 2016-04-03 14-39-11

**************************************************************************************
Build on Compute GlassFish 4.1 docker image per
http://bderzhavets.blogspot.com/2015/01/hacking-dockers-phusionbaseimage-to.html  and upload to glance :-
**************************************************************************************

[root@ip-192-169-142-137 ~(keystone_admin)]# docker images

REPOSITORY                 TAG                 IMAGE ID            CREATED              SIZE
derby/docker-glassfish41   latest              3a6b84ec9206        About a minute ago   1.155 GB
rastasheep/ubuntu-sshd     latest              70e0ac74c691        2 days ago           251.6 MB
phusion/baseimage          latest              772dd063a060        3 months ago         305.1 MB
tutum/tomcat               latest              2edd730bbedd        7 months ago         539.9 MB
larsks/thttpd              latest              a31ab5050b67        15 months ago        1.058 MB

[root@ip-192-169-142-137 ~(keystone_admin)]# docker save derby/docker-glassfish41 | openstack image create derby/docker-glassfish41 --public --container-format docker --disk-format raw

+——————+——————————————————+
| Field            | Value                                                |
+——————+——————————————————+
| checksum         | 9bea6dd0bcd8d0d7da2d82579c0e658a                     |
| container_format | docker                                               |
| created_at       | 2016-04-01T14:29:20Z                                 |
| disk_format      | raw                                                  |
| file             | /v2/images/acf03d15-b7c5-4364-b00f-603b6a5d9af2/file |
| id               | acf03d15-b7c5-4364-b00f-603b6a5d9af2                 |
| min_disk         | 0                                                    |
| min_ram          | 0                                                    |
| name             | derby/docker-glassfish41                             |
| owner            | 31b24d4b1574424abe53b9a5affc70c8                     |
| protected        | False                                                |
| schema           | /v2/schemas/image                                    |
| size             | 1175020032                                           |
| status           | active                                               |
| tags             |                                                      |
| updated_at       | 2016-04-01T14:30:13Z                                 |
| virtual_size     | None                                                 |
| visibility       | public                                               |
+——————+——————————————————+
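
The image uploaded above can then be booted like any other Nova instance, via the dashboard or the CLI; a CLI sketch (the keypair and network id are placeholders for your environment):

# nova boot --image "derby/docker-glassfish41" --flavor m1.small \
  --key-name <keypair> --nic net-id=<demo_network-id> derbyGlassfish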

[root@ip-192-169-142-137 ~(keystone_admin)]# docker ps

CONTAINER ID        IMAGE                      COMMAND               CREATED             STATUS              PORTS               NAMES

8f551d35f2d7        derby/docker-glassfish41   “/sbin/my_init”       39 seconds ago      Up 31 seconds                           nova-faba725e-e031-4edb-bf2c-41c6dfc188c1
dee4425261e8        tutum/tomcat               “/run.sh”             About an hour ago   Up About an hour                        nova-13450558-12d7-414c-bcd2-d746495d7a57
41d2ebc54d75        rastasheep/ubuntu-sshd     “/usr/sbin/sshd -D”   2 hours ago         Up About an hour                        nova-04ddea42-10a3-4a08-9f00-df60b5890ee9

[root@ip-192-169-142-137 ~(keystone_admin)]# docker logs 8f551d35f2d7

*** Running /etc/my_init.d/00_regen_ssh_host_keys.sh…
No SSH host key available. Generating one…
*** Running /etc/my_init.d/01_sshd_start.sh…
Creating SSH2 RSA key; this may take some time …
Creating SSH2 DSA key; this may take some time …
Creating SSH2 ECDSA key; this may take some time …
Creating SSH2 ED25519 key; this may take some time …
invoke-rc.d: policy-rc.d denied execution of restart.
SSH KEYS regenerated by Boris just in case !
SSHD started !

*** Running /etc/my_init.d/database.sh…
Derby database started !
*** Running /etc/my_init.d/run.sh…

Bad Network Configuration.  DNS can not resolve the hostname:
java.net.UnknownHostException: instance-00000006: instance-00000006: unknown error

Waiting for domain1 to start ……
Successfully started the domain : domain1
domain  Location: /opt/glassfish4/glassfish/domains/domain1
Log File: /opt/glassfish4/glassfish/domains/domain1/logs/server.log
Admin Port: 4848
Command start-domain executed successfully.
=> Modifying password of admin to random in Glassfish
spawn asadmin --user admin change-admin-password
Enter the admin password>
Enter the new admin password>
Enter the new admin password again>
Command change-admin-password executed successfully.

A fairly complex docker image, built by a "docker expert" like myself 😉,
gets launched, and the nova-docker instance seems to run
several daemons at a time properly ( sshd enabled )
[boris@fedora23wks Downloads]$ ssh root@192.169.142.156

root@192.169.142.156’s password:
Last login: Fri Apr  1 15:33:06 2016 from 192.169.142.1
root@instance-00000006:~# ps -ef

UID        PID  PPID  C STIME TTY          TIME CMD
root         1     0  0 14:32 ?        00:00:00 /usr/bin/python3 -u /sbin/my_init
root       100     1  0 14:33 ?        00:00:00 /bin/bash /etc/my_init.d/run.sh
root       103     1  0 14:33 ?        00:00:00 /usr/sbin/sshd
root       170     1  0 14:33 ?        00:00:03 /opt/jdk1.8.0_25/bin/java -Djava.library.path=/op
root       427   100  0 14:33 ?        00:00:02 java -jar /opt/glassfish4/bin/../glassfish/lib/cl
root       444   427  2 14:33 ?        00:01:23 /opt/jdk1.8.0_25/bin/java -cp /opt/glassfish4/gla

root      1078     0  0 15:32 ?        00:00:00 bash
root      1110   103  0 15:33 ?        00:00:00 sshd: root@pts/0
root      1112  1110  0 15:33 pts/0    00:00:00 -bash
root      1123  1112  0 15:33 pts/0    00:00:00 ps -ef

Glassfish is running indeed


Setup Docker Hypervisor on Multi Node DVR Cluster RDO Mitaka

March 31, 2016

UPDATE 04/01/2016

  DVR && the Nova-Docker driver (stable/mitaka) tested fine on RDO Mitaka (build 20160329) with none of the issues described in the link for RDO Liberty. So, create a DVR deployment with Controller/Network + N(*)Compute Nodes. Switch to the Docker Hypervisor on each Compute Node and make the required updates to glance and the filters file on the Controller. You are all set. Nova-Docker instances' FIP(s) are available from outside via the Neutron Distributed Router (DNAT) using the "fg" interface ( fip-namespace ) residing on the same host as the Docker Hypervisor. South-North traffic does not involve VXLAN tunneling on DVR systems.

END UPDATE

Perform a two node cluster deployment: Controller + Network&Compute (ML2&OVS&VXLAN). Another configuration available via packstack is Controller+Storage+Compute&Network.
The deployment schema below will start all four Neutron agents on the Compute node ( which is supposed to run the Nova-Docker instances ). Thus routing via the VXLAN tunnel will be excluded. Nova-Docker instances will be routed to the Internet and vice versa via the local neutron router (DNAT/SNAT) residing on the same host where the Docker Hypervisor is running.

For a multi node solution, testing DVR with the Nova-Docker driver is required.

For now this has been tested only on an RDO Liberty DVR system :-
An RDO Liberty DVR cluster was switched to Nova-Docker (stable/liberty) successfully. Containers (instances) may be launched on Compute Nodes and are available via their fip(s) due to neutron (DNAT) routing via the "fg" interface of the corresponding fip-namespace.  Snapshots here

The question will be closed if I am able to get the same results on RDO Mitaka, which would solve the problem of a Multi Node Docker Hypervisor deployment across Compute nodes, not using VXLAN tunnels for South-North traffic, supported by the Metadata, L3 and openvswitch neutron agents, with a single dhcp agent providing
private IPs and residing on the Controller/Network Node.
SELINUX should be set to permissive mode after the RDO deployment.

First install repositories for RDO Mitaka (the most recent build passed CI):-
# yum -y install yum-plugin-priorities
# cd /etc/yum.repos.d
# curl -O https://trunk.rdoproject.org/centos7-mitaka/delorean-deps.repo
# curl -O https://trunk.rdoproject.org/centos7-mitaka/current-passed-ci/delorean.repo
# yum -y install openstack-packstack (Controller only)

********************************************

Answer file for RDO Mitaka deployment

********************************************

[general]

CONFIG_SSH_KEY=/root/.ssh/id_rsa.pub

CONFIG_DEFAULT_PASSWORD=

CONFIG_SERVICE_WORKERS=%{::processorcount}

CONFIG_MARIADB_INSTALL=y

CONFIG_GLANCE_INSTALL=y

CONFIG_CINDER_INSTALL=y

CONFIG_MANILA_INSTALL=n

CONFIG_NOVA_INSTALL=y

CONFIG_NEUTRON_INSTALL=y

CONFIG_HORIZON_INSTALL=y

CONFIG_SWIFT_INSTALL=y

CONFIG_CEILOMETER_INSTALL=y

CONFIG_AODH_INSTALL=y

CONFIG_GNOCCHI_INSTALL=y

CONFIG_SAHARA_INSTALL=n

CONFIG_HEAT_INSTALL=n

CONFIG_TROVE_INSTALL=n

CONFIG_IRONIC_INSTALL=n

CONFIG_CLIENT_INSTALL=y

CONFIG_NTP_SERVERS=

CONFIG_NAGIOS_INSTALL=y

EXCLUDE_SERVERS=

CONFIG_DEBUG_MODE=n

CONFIG_CONTROLLER_HOST=192.169.142.127
CONFIG_COMPUTE_HOSTS=192.169.142.137
CONFIG_NETWORK_HOSTS=192.169.142.137

CONFIG_VMWARE_BACKEND=n

CONFIG_UNSUPPORTED=n

CONFIG_USE_SUBNETS=n

CONFIG_VCENTER_HOST=

CONFIG_VCENTER_USER=

CONFIG_VCENTER_PASSWORD=

CONFIG_VCENTER_CLUSTER_NAMES=

CONFIG_STORAGE_HOST=192.169.142.127

CONFIG_SAHARA_HOST=192.169.142.127

CONFIG_USE_EPEL=y

CONFIG_REPO=

CONFIG_ENABLE_RDO_TESTING=n

CONFIG_RH_USER=

CONFIG_SATELLITE_URL=

CONFIG_RH_SAT6_SERVER=

CONFIG_RH_PW=

CONFIG_RH_OPTIONAL=y

CONFIG_RH_PROXY=

CONFIG_RH_SAT6_ORG=

CONFIG_RH_SAT6_KEY=

CONFIG_RH_PROXY_PORT=

CONFIG_RH_PROXY_USER=

CONFIG_RH_PROXY_PW=

CONFIG_SATELLITE_USER=

CONFIG_SATELLITE_PW=

CONFIG_SATELLITE_AKEY=

CONFIG_SATELLITE_CACERT=

CONFIG_SATELLITE_PROFILE=

CONFIG_SATELLITE_FLAGS=

CONFIG_SATELLITE_PROXY=

CONFIG_SATELLITE_PROXY_USER=

CONFIG_SATELLITE_PROXY_PW=

CONFIG_SSL_CACERT_FILE=/etc/pki/tls/certs/selfcert.crt

CONFIG_SSL_CACERT_KEY_FILE=/etc/pki/tls/private/selfkey.key

CONFIG_SSL_CERT_DIR=~/packstackca/

CONFIG_SSL_CACERT_SELFSIGN=y

CONFIG_SELFSIGN_CACERT_SUBJECT_C=–

CONFIG_SELFSIGN_CACERT_SUBJECT_ST=State

CONFIG_SELFSIGN_CACERT_SUBJECT_L=City

CONFIG_SELFSIGN_CACERT_SUBJECT_O=openstack

CONFIG_SELFSIGN_CACERT_SUBJECT_OU=packstack

CONFIG_SELFSIGN_CACERT_SUBJECT_CN=ip-192-169-142-127.ip.secureserver.net

CONFIG_SELFSIGN_CACERT_SUBJECT_MAIL=admin@ip-192-169-142-127.ip.secureserver.net

CONFIG_AMQP_BACKEND=rabbitmq

CONFIG_AMQP_HOST=192.169.142.127

CONFIG_AMQP_ENABLE_SSL=n

CONFIG_AMQP_ENABLE_AUTH=n

CONFIG_AMQP_NSS_CERTDB_PW=PW_PLACEHOLDER

CONFIG_AMQP_AUTH_USER=amqp_user

CONFIG_AMQP_AUTH_PASSWORD=PW_PLACEHOLDER

CONFIG_MARIADB_HOST=192.169.142.127

CONFIG_MARIADB_USER=root

CONFIG_MARIADB_PW=7207ae344ed04957

CONFIG_KEYSTONE_DB_PW=abcae16b785245c3

CONFIG_KEYSTONE_DB_PURGE_ENABLE=True

CONFIG_KEYSTONE_REGION=RegionOne

CONFIG_KEYSTONE_ADMIN_TOKEN=3ad2de159f9649afb0c342ba57e637d9

CONFIG_KEYSTONE_ADMIN_EMAIL=root@localhost

CONFIG_KEYSTONE_ADMIN_USERNAME=admin

CONFIG_KEYSTONE_ADMIN_PW=7049f834927e4468

CONFIG_KEYSTONE_DEMO_PW=bf737b785cfa4398

CONFIG_KEYSTONE_API_VERSION=v2.0

CONFIG_KEYSTONE_TOKEN_FORMAT=UUID

CONFIG_KEYSTONE_SERVICE_NAME=httpd

CONFIG_KEYSTONE_IDENTITY_BACKEND=sql

CONFIG_KEYSTONE_LDAP_URL=ldap://12.0.0.127

CONFIG_KEYSTONE_LDAP_USER_DN=

CONFIG_KEYSTONE_LDAP_USER_PASSWORD=

CONFIG_KEYSTONE_LDAP_SUFFIX=

CONFIG_KEYSTONE_LDAP_QUERY_SCOPE=one

CONFIG_KEYSTONE_LDAP_PAGE_SIZE=-1

CONFIG_KEYSTONE_LDAP_USER_SUBTREE=

CONFIG_KEYSTONE_LDAP_USER_FILTER=

CONFIG_KEYSTONE_LDAP_USER_OBJECTCLASS=

CONFIG_KEYSTONE_LDAP_USER_ID_ATTRIBUTE=

CONFIG_KEYSTONE_LDAP_USER_NAME_ATTRIBUTE=

CONFIG_KEYSTONE_LDAP_USER_MAIL_ATTRIBUTE=

CONFIG_KEYSTONE_LDAP_USER_ENABLED_ATTRIBUTE=

CONFIG_KEYSTONE_LDAP_USER_ENABLED_MASK=-1

CONFIG_KEYSTONE_LDAP_USER_ENABLED_DEFAULT=TRUE

CONFIG_KEYSTONE_LDAP_USER_ENABLED_INVERT=n

CONFIG_KEYSTONE_LDAP_USER_ATTRIBUTE_IGNORE=

CONFIG_KEYSTONE_LDAP_USER_DEFAULT_PROJECT_ID_ATTRIBUTE=

CONFIG_KEYSTONE_LDAP_USER_ALLOW_CREATE=n

CONFIG_KEYSTONE_LDAP_USER_ALLOW_UPDATE=n

CONFIG_KEYSTONE_LDAP_USER_ALLOW_DELETE=n

CONFIG_KEYSTONE_LDAP_USER_PASS_ATTRIBUTE=

CONFIG_KEYSTONE_LDAP_USER_ENABLED_EMULATION_DN=

CONFIG_KEYSTONE_LDAP_USER_ADDITIONAL_ATTRIBUTE_MAPPING=

CONFIG_KEYSTONE_LDAP_GROUP_SUBTREE=

CONFIG_KEYSTONE_LDAP_GROUP_FILTER=

CONFIG_KEYSTONE_LDAP_GROUP_OBJECTCLASS=

CONFIG_KEYSTONE_LDAP_GROUP_ID_ATTRIBUTE=

CONFIG_KEYSTONE_LDAP_GROUP_NAME_ATTRIBUTE=

CONFIG_KEYSTONE_LDAP_GROUP_MEMBER_ATTRIBUTE=

CONFIG_KEYSTONE_LDAP_GROUP_DESC_ATTRIBUTE=

CONFIG_KEYSTONE_LDAP_GROUP_ATTRIBUTE_IGNORE=

CONFIG_KEYSTONE_LDAP_GROUP_ALLOW_CREATE=n

CONFIG_KEYSTONE_LDAP_GROUP_ALLOW_UPDATE=n

CONFIG_KEYSTONE_LDAP_GROUP_ALLOW_DELETE=n

CONFIG_KEYSTONE_LDAP_GROUP_ADDITIONAL_ATTRIBUTE_MAPPING=

CONFIG_KEYSTONE_LDAP_USE_TLS=n

CONFIG_KEYSTONE_LDAP_TLS_CACERTDIR=

CONFIG_KEYSTONE_LDAP_TLS_CACERTFILE=

CONFIG_KEYSTONE_LDAP_TLS_REQ_CERT=demand

CONFIG_GLANCE_DB_PW=41264fc52ffd4fe8

CONFIG_GLANCE_KS_PW=f6a9398960534797

CONFIG_GLANCE_BACKEND=file

CONFIG_CINDER_DB_PW=5ac08c6d09ba4b69

CONFIG_CINDER_DB_PURGE_ENABLE=True

CONFIG_CINDER_KS_PW=c8cb1ecb8c2b4f6f

CONFIG_CINDER_BACKEND=lvm

CONFIG_CINDER_VOLUMES_CREATE=y

CONFIG_CINDER_VOLUMES_SIZE=2G

CONFIG_CINDER_GLUSTER_MOUNTS=

CONFIG_CINDER_NFS_MOUNTS=

CONFIG_CINDER_NETAPP_LOGIN=

CONFIG_CINDER_NETAPP_PASSWORD=

CONFIG_CINDER_NETAPP_HOSTNAME=

CONFIG_CINDER_NETAPP_SERVER_PORT=80

CONFIG_CINDER_NETAPP_STORAGE_FAMILY=ontap_cluster

CONFIG_CINDER_NETAPP_TRANSPORT_TYPE=http

CONFIG_CINDER_NETAPP_STORAGE_PROTOCOL=nfs

CONFIG_CINDER_NETAPP_SIZE_MULTIPLIER=1.0

CONFIG_CINDER_NETAPP_EXPIRY_THRES_MINUTES=720

CONFIG_CINDER_NETAPP_THRES_AVL_SIZE_PERC_START=20

CONFIG_CINDER_NETAPP_THRES_AVL_SIZE_PERC_STOP=60

CONFIG_CINDER_NETAPP_NFS_SHARES=

CONFIG_CINDER_NETAPP_NFS_SHARES_CONFIG=/etc/cinder/shares.conf

CONFIG_CINDER_NETAPP_VOLUME_LIST=

CONFIG_CINDER_NETAPP_VFILER=

CONFIG_CINDER_NETAPP_PARTNER_BACKEND_NAME=

CONFIG_CINDER_NETAPP_VSERVER=

CONFIG_CINDER_NETAPP_CONTROLLER_IPS=

CONFIG_CINDER_NETAPP_SA_PASSWORD=

CONFIG_CINDER_NETAPP_ESERIES_HOST_TYPE=linux_dm_mp

CONFIG_CINDER_NETAPP_WEBSERVICE_PATH=/devmgr/v2

CONFIG_CINDER_NETAPP_STORAGE_POOLS=

CONFIG_IRONIC_DB_PW=PW_PLACEHOLDER

CONFIG_IRONIC_KS_PW=PW_PLACEHOLDER

CONFIG_NOVA_DB_PURGE_ENABLE=True

CONFIG_NOVA_DB_PW=1e1b5aeeeaf342a8

CONFIG_NOVA_KS_PW=d9583177a2444f06

CONFIG_NOVA_SCHED_CPU_ALLOC_RATIO=16.0

CONFIG_NOVA_SCHED_RAM_ALLOC_RATIO=1.5

CONFIG_NOVA_COMPUTE_MIGRATE_PROTOCOL=tcp

CONFIG_NOVA_COMPUTE_MANAGER=nova.compute.manager.ComputeManager

CONFIG_VNC_SSL_CERT=

CONFIG_VNC_SSL_KEY=

CONFIG_NOVA_PCI_ALIAS=

CONFIG_NOVA_PCI_PASSTHROUGH_WHITELIST=

CONFIG_NOVA_COMPUTE_PRIVIF=

CONFIG_NOVA_NETWORK_MANAGER=nova.network.manager.FlatDHCPManager

CONFIG_NOVA_NETWORK_PUBIF=eth0

CONFIG_NOVA_NETWORK_PRIVIF=

CONFIG_NOVA_NETWORK_FIXEDRANGE=192.168.32.0/22

CONFIG_NOVA_NETWORK_FLOATRANGE=10.3.4.0/22

CONFIG_NOVA_NETWORK_AUTOASSIGNFLOATINGIP=n

CONFIG_NOVA_NETWORK_VLAN_START=100

CONFIG_NOVA_NETWORK_NUMBER=1

CONFIG_NOVA_NETWORK_SIZE=255

CONFIG_NEUTRON_KS_PW=808e36e154bd4cee

CONFIG_NEUTRON_DB_PW=0e2b927a21b44737

CONFIG_NEUTRON_L3_EXT_BRIDGE=br-ex

CONFIG_NEUTRON_METADATA_PW=a965cd23ed2f4502

CONFIG_LBAAS_INSTALL=n

CONFIG_NEUTRON_METERING_AGENT_INSTALL=n

CONFIG_NEUTRON_FWAAS=n

CONFIG_NEUTRON_VPNAAS=n

CONFIG_NEUTRON_ML2_TYPE_DRIVERS=vxlan
CONFIG_NEUTRON_ML2_TENANT_NETWORK_TYPES=vxlan
CONFIG_NEUTRON_ML2_MECHANISM_DRIVERS=openvswitch
CONFIG_NEUTRON_ML2_FLAT_NETWORKS=*
CONFIG_NEUTRON_ML2_VLAN_RANGES=
CONFIG_NEUTRON_ML2_TUNNEL_ID_RANGES=1001:2000
CONFIG_NEUTRON_ML2_VXLAN_GROUP=239.1.1.2
CONFIG_NEUTRON_ML2_VNI_RANGES=1001:2000
CONFIG_NEUTRON_L2_AGENT=openvswitch

CONFIG_NEUTRON_ML2_SUPPORTED_PCI_VENDOR_DEVS=['15b3:1004', '8086:10ca']

CONFIG_NEUTRON_ML2_SRIOV_AGENT_REQUIRED=n

CONFIG_NEUTRON_ML2_SRIOV_INTERFACE_MAPPINGS=

CONFIG_NEUTRON_LB_INTERFACE_MAPPINGS=

CONFIG_NEUTRON_OVS_BRIDGE_MAPPINGS=physnet1:br-ex

CONFIG_NEUTRON_OVS_BRIDGE_IFACES=

CONFIG_NEUTRON_OVS_TUNNEL_IF=eth1

CONFIG_NEUTRON_OVS_TUNNEL_SUBNETS=

CONFIG_NEUTRON_OVS_VXLAN_UDP_PORT=4789

CONFIG_MANILA_DB_PW=PW_PLACEHOLDER

CONFIG_MANILA_KS_PW=PW_PLACEHOLDER

CONFIG_MANILA_BACKEND=generic

CONFIG_MANILA_NETAPP_DRV_HANDLES_SHARE_SERVERS=false

CONFIG_MANILA_NETAPP_TRANSPORT_TYPE=https

CONFIG_MANILA_NETAPP_LOGIN=admin

CONFIG_MANILA_NETAPP_PASSWORD=

CONFIG_MANILA_NETAPP_SERVER_HOSTNAME=

CONFIG_MANILA_NETAPP_STORAGE_FAMILY=ontap_cluster

CONFIG_MANILA_NETAPP_SERVER_PORT=443

CONFIG_MANILA_NETAPP_AGGREGATE_NAME_SEARCH_PATTERN=(.*)

CONFIG_MANILA_NETAPP_ROOT_VOLUME_AGGREGATE=

CONFIG_MANILA_NETAPP_ROOT_VOLUME_NAME=root

CONFIG_MANILA_NETAPP_VSERVER=

CONFIG_MANILA_GENERIC_DRV_HANDLES_SHARE_SERVERS=true

CONFIG_MANILA_GENERIC_VOLUME_NAME_TEMPLATE=manila-share-%s

CONFIG_MANILA_GENERIC_SHARE_MOUNT_PATH=/shares

CONFIG_MANILA_SERVICE_IMAGE_LOCATION=https://www.dropbox.com/s/vi5oeh10q1qkckh/ubuntu_1204_nfs_cifs.qcow2

CONFIG_MANILA_SERVICE_INSTANCE_USER=ubuntu

CONFIG_MANILA_SERVICE_INSTANCE_PASSWORD=ubuntu

CONFIG_MANILA_NETWORK_TYPE=neutron

CONFIG_MANILA_NETWORK_STANDALONE_GATEWAY=

CONFIG_MANILA_NETWORK_STANDALONE_NETMASK=

CONFIG_MANILA_NETWORK_STANDALONE_SEG_ID=

CONFIG_MANILA_NETWORK_STANDALONE_IP_RANGE=

CONFIG_MANILA_NETWORK_STANDALONE_IP_VERSION=4

CONFIG_MANILA_GLUSTERFS_SERVERS=

CONFIG_MANILA_GLUSTERFS_NATIVE_PATH_TO_PRIVATE_KEY=

CONFIG_MANILA_GLUSTERFS_VOLUME_PATTERN=

CONFIG_MANILA_GLUSTERFS_TARGET=

CONFIG_MANILA_GLUSTERFS_MOUNT_POINT_BASE=

CONFIG_MANILA_GLUSTERFS_NFS_SERVER_TYPE=gluster

CONFIG_MANILA_GLUSTERFS_PATH_TO_PRIVATE_KEY=

CONFIG_MANILA_GLUSTERFS_GANESHA_SERVER_IP=

CONFIG_HORIZON_SSL=n

CONFIG_HORIZON_SECRET_KEY=33cade531a764c858e4e6c22488f379f

CONFIG_HORIZON_SSL_CERT=

CONFIG_HORIZON_SSL_KEY=

CONFIG_HORIZON_SSL_CACERT=

CONFIG_SWIFT_KS_PW=30911de72a15427e

CONFIG_SWIFT_STORAGES=

CONFIG_SWIFT_STORAGE_ZONES=1

CONFIG_SWIFT_STORAGE_REPLICAS=1

CONFIG_SWIFT_STORAGE_FSTYPE=ext4

CONFIG_SWIFT_HASH=a55607bff10c4210

CONFIG_SWIFT_STORAGE_SIZE=2G

CONFIG_HEAT_DB_PW=PW_PLACEHOLDER

CONFIG_HEAT_AUTH_ENC_KEY=0ef4161f3bb24230

CONFIG_HEAT_KS_PW=PW_PLACEHOLDER

CONFIG_HEAT_CLOUDWATCH_INSTALL=n

CONFIG_HEAT_CFN_INSTALL=n

CONFIG_HEAT_DOMAIN=heat

CONFIG_HEAT_DOMAIN_ADMIN=heat_admin

CONFIG_HEAT_DOMAIN_PASSWORD=PW_PLACEHOLDER

CONFIG_PROVISION_DEMO=n

CONFIG_PROVISION_TEMPEST=n

CONFIG_PROVISION_DEMO_FLOATRANGE=172.24.4.224/28

CONFIG_PROVISION_IMAGE_NAME=cirros

CONFIG_PROVISION_IMAGE_URL=http://download.cirros-cloud.net/0.3.4/cirros-0.3.4-x86_64-disk.img

CONFIG_PROVISION_IMAGE_FORMAT=qcow2

CONFIG_PROVISION_IMAGE_SSH_USER=cirros

CONFIG_TEMPEST_HOST=

CONFIG_PROVISION_TEMPEST_USER=

CONFIG_PROVISION_TEMPEST_USER_PW=PW_PLACEHOLDER

CONFIG_PROVISION_TEMPEST_FLOATRANGE=172.24.4.224/28

CONFIG_PROVISION_TEMPEST_REPO_URI=https://github.com/openstack/tempest.git

CONFIG_PROVISION_TEMPEST_REPO_REVISION=master

CONFIG_RUN_TEMPEST=n

CONFIG_RUN_TEMPEST_TESTS=smoke

CONFIG_PROVISION_OVS_BRIDGE=n

CONFIG_CEILOMETER_SECRET=19ae0e7430174349

CONFIG_CEILOMETER_KS_PW=337b08d4b3a44753

CONFIG_CEILOMETER_SERVICE_NAME=httpd

CONFIG_CEILOMETER_COORDINATION_BACKEND=redis

CONFIG_MONGODB_HOST=192.169.142.127

CONFIG_REDIS_MASTER_HOST=192.169.142.127

CONFIG_REDIS_PORT=6379

CONFIG_REDIS_HA=n

CONFIG_REDIS_SLAVE_HOSTS=

CONFIG_REDIS_SENTINEL_HOSTS=

CONFIG_REDIS_SENTINEL_CONTACT_HOST=

CONFIG_REDIS_SENTINEL_PORT=26379

CONFIG_REDIS_SENTINEL_QUORUM=2

CONFIG_REDIS_MASTER_NAME=mymaster

CONFIG_AODH_KS_PW=acdd500a5fed4700

CONFIG_GNOCCHI_DB_PW=cf11b5d6205f40e7

CONFIG_GNOCCHI_KS_PW=36eba4690b224044

CONFIG_TROVE_DB_PW=PW_PLACEHOLDER

CONFIG_TROVE_KS_PW=PW_PLACEHOLDER

CONFIG_TROVE_NOVA_USER=trove

CONFIG_TROVE_NOVA_TENANT=services

CONFIG_TROVE_NOVA_PW=PW_PLACEHOLDER

CONFIG_SAHARA_DB_PW=PW_PLACEHOLDER

CONFIG_SAHARA_KS_PW=PW_PLACEHOLDER

CONFIG_NAGIOS_PW=02f168ee8edd44e4

**********************************************************************

Upon completion, connect to the external network on the Compute Node :-

**********************************************************************

[root@ip-192-169-142-137 network-scripts(keystone_admin)]# cat ifcfg-br-ex

DEVICE="br-ex"
BOOTPROTO="static"
IPADDR="172.124.4.137"
NETMASK="255.255.255.0"
DNS1="83.221.202.254"
BROADCAST="172.124.4.255"
GATEWAY="172.124.4.1"
NM_CONTROLLED="no"
TYPE="OVSIntPort"
OVS_BRIDGE=br-ex
DEVICETYPE="ovs"
DEFROUTE="yes"
IPV4_FAILURE_FATAL="yes"
IPV6INIT=no

[root@ip-192-169-142-137 network-scripts(keystone_admin)]# cat ifcfg-eth2

DEVICE="eth2"
# HWADDR=00:22:15:63:E4:E2
ONBOOT="yes"
TYPE="OVSPort"
DEVICETYPE="ovs"
OVS_BRIDGE=br-ex
NM_CONTROLLED=no
IPV6INIT=no

[root@ip-192-169-142-137 network-scripts(keystone_admin)]# cat start.sh

#!/bin/bash -x
chkconfig network on
systemctl stop NetworkManager
systemctl disable NetworkManager
service network restart

**********************************************
Verification Compute node status
**********************************************

[root@ip-192-169-142-137 ~(keystone_admin)]# openstack-status

== Nova services ==
openstack-nova-api:                     inactive  (disabled on boot)
openstack-nova-compute:                 active
openstack-nova-network:                 inactive  (disabled on boot)
openstack-nova-scheduler:               inactive  (disabled on boot)
== neutron services ==
neutron-server:                         inactive  (disabled on boot)
neutron-dhcp-agent:                     active
neutron-l3-agent:                          active
neutron-metadata-agent:               active
neutron-openvswitch-agent:          active

==ceilometer services==
openstack-ceilometer-api:               inactive  (disabled on boot)
openstack-ceilometer-central:         inactive  (disabled on boot)
openstack-ceilometer-compute:       active
openstack-ceilometer-collector:       inactive  (disabled on boot)
== Support services ==
openvswitch:                            active
dbus:                                        active
Warning novarc not sourced

[root@ip-192-169-142-137 ~(keystone_admin)]# nova-manage version
13.0.0-0.20160329105656.7662fb9.el7.centos

Also install  python-openstackclient on Compute

******************************************
Verification of status on Controller
******************************************

[root@ip-192-169-142-127 ~(keystone_admin)]# openstack-status

== Nova services ==
openstack-nova-api:                     active
openstack-nova-compute:                 inactive  (disabled on boot)
openstack-nova-network:                 inactive  (disabled on boot)
openstack-nova-scheduler:               active
openstack-nova-cert:                    active
openstack-nova-conductor:               active
openstack-nova-console:                 inactive  (disabled on boot)
openstack-nova-consoleauth:             active
openstack-nova-xvpvncproxy:             inactive  (disabled on boot)
== Glance services ==
openstack-glance-api:                   active
openstack-glance-registry:              active
== Keystone service ==
openstack-keystone:                     inactive  (disabled on boot)
== Horizon service ==
openstack-dashboard:                    active
== neutron services ==
neutron-server:                        active
neutron-dhcp-agent:                 inactive  (disabled on boot)
neutron-l3-agent:                      inactive  (disabled on boot)
neutron-metadata-agent:           inactive  (disabled on boot)
== Swift services ==
openstack-swift-proxy:                  active
openstack-swift-account:                active
openstack-swift-container:              active
openstack-swift-object:                 active
== Cinder services ==
openstack-cinder-api:                   active
openstack-cinder-scheduler:             active
openstack-cinder-volume:                active
openstack-cinder-backup:                active
== Ceilometer services ==
openstack-ceilometer-api:               inactive  (disabled on boot)
openstack-ceilometer-central:           active
openstack-ceilometer-compute:           inactive  (disabled on boot)
openstack-ceilometer-collector:         active
openstack-ceilometer-notification:      active
== Support services ==
mysqld:                               inactive  (disabled on boot)
dbus:                                   active
target:                                 active
rabbitmq-server:                  active
memcached:                        active

== Keystone users ==

+———————————-+————+———+———————-+

|                id                |    name    | enabled |        email         |

+———————————-+————+———+———————-+
| f7dbea6e5b704c7d8e77e88c1ce1fce8 |   admin    |   True  |    root@localhost    |
| baf4ee3fe0e749f982747ffe68e0e562 |    aodh    |   True  |    aodh@localhost    |
| 770d5c0974fb49998440b1080e5939a0 |   boris    |   True  |                      |
| f88d8e83df0f43a991cb7ff063a2439f | ceilometer |   True  | ceilometer@localhost |
| e7a92f59f081403abd9c0f92c4f8d8d0 |   cinder   |   True  |   cinder@localhost   |
| 58e531b5eba74db2b4559aaa16561900 |   glance   |   True  |   glance@localhost   |
| d215d99466aa481f847df2a909c139f7 |  gnocchi   |   True  |  gnocchi@localhost   |
| 5d3433f7d54d40d8b9eeb576582cc672 |  neutron   |   True  |  neutron@localhost   |
| 3a50997aa6fc4c129dff624ed9745b94 |    nova    |   True  |    nova@localhost    |
| ef1a323f98cb43c789e4f84860afea35 |   swift    |   True  |   swift@localhost    |
+———————————-+————+———+———————-+

== Glance images ==

+————————————–+————————–+
| ID                                   | Name                     |
+————————————–+————————–+
| cbf88266-0b49-4bc2-9527-cc9c9da0c1eb | derby/docker-glassfish41 |
| 5d0a97c3-c717-46ac-a30f-86208ea0d31d | larsks/thttpd            |
| 80eb0d7d-17ae-49c7-997f-38d8a3aeeabd | rastasheep/ubuntu-sshd   |
+————————————–+————————–+

== Nova managed services ==

+—-+——————+—————————————-+———-+———+——-+—————————-+—————–+

| Id | Binary           | Host                                   | Zone     | Status  | State | Updated_at                 | Disabled Reason |

+—-+——————+—————————————-+———-+———+——-+—————————-+—————–+
| 5  | nova-cert        | ip-192-169-142-127.ip.secureserver.net | internal | enabled | up    | 2016-03-31T09:59:53.000000 |                |
| 6  | nova-consoleauth | ip-192-169-142-127.ip.secureserver.net | internal | enabled | up    | 2016-03-31T09:59:52.000000 | –               |
| 7  | nova-scheduler   | ip-192-169-142-127.ip.secureserver.net | internal | enabled | up    | 2016-03-31T09:59:52.000000 | –               |
| 8  | nova-conductor   | ip-192-169-142-127.ip.secureserver.net | internal | enabled | up    | 2016-03-31T09:59:54.000000 | –               |
| 10 | nova-compute     | ip-192-169-142-137.ip.secureserver.net | nova     | enabled | up    | 2016-03-31T09:59:55.000000 | –               |

+—-+——————+—————————————-+———-+———+——-+—————————-+—————–+

== Nova networks ==

+————————————–+————–+——+
| ID                                   | Label        | Cidr |
+————————————–+————–+——+
| 47798c88-29e5-4dee-8206-d0f9b7e19130 | public       | –    |
| 8f849505-0550-4f6c-8c73-6b8c9ec56789 | private      | –    |
| bcfcf3c3-c651-4ae7-b7ee-fdafae04a2a9 | demo_network | –    |
+————————————–+————–+——+

== Nova instance flavors ==

+—-+———–+———–+——+———–+——+——-+————-+———–+
| ID | Name      | Memory_MB | Disk | Ephemeral | Swap | VCPUs | RXTX_Factor | Is_Public |
+—-+———–+———–+——+———–+——+——-+————-+———–+
| 1  | m1.tiny   | 512       | 1    | 0         |      | 1     | 1.0         | True      |
| 2  | m1.small  | 2048      | 20   | 0         |      | 1     | 1.0         | True      |
| 3  | m1.medium | 4096      | 40   | 0         |      | 2     | 1.0         | True      |
| 4  | m1.large  | 8192      | 80   | 0         |      | 4     | 1.0         | True      |
| 5  | m1.xlarge | 16384     | 160  | 0         |      | 8     | 1.0         | True      |
+—-+———–+———–+——+———–+——+——-+————-+———–+

== Nova instances ==

+————————————–+——————+———————————-+——–+————+————-+—————————————+
| ID                                   | Name             | Tenant ID                        | Status | Task State | Power State | Networks                              |
+————————————–+——————+———————————-+——–+————+————-+—————————————+

| c8284258-f9c0-4b81-8cd0-db6e7cbf8d48 | UbuntuRastasheep | 32df2fd0c85745c9901b2247ec4905bc | ACTIVE | –          | Running     | demo_network=90.0.0.15, 172.124.4.154 |
| 50f22f8a-e6ff-4b8b-8c15-f3b9bbd1aad2 | derbyGlassfish   | 32df2fd0c85745c9901b2247ec4905bc | ACTIVE | –          | Running     | demo_network=90.0.0.16, 172.124.4.155 |
| 03664d5e-f3c5-4ebb-9109-e96189150626 | testLars         | 32df2fd0c85745c9901b2247ec4905bc | ACTIVE | –          | Running     | demo_network=90.0.0.14, 172.124.4.153 |
+————————————–+——————+———————————-+——–+————+————-+—————————————+

*********************************
Nova-Docker Setup on Compute
*********************************

# curl -sSL https://get.docker.com/ | sh
# usermod -aG docker nova      ( seems not help to set 660 for docker.sock )
# systemctl start docker
# systemctl enable docker
# chmod 666  /var/run/docker.sock (add to /etc/rc.d/rc.local)
# easy_install pip
# git clone -b stable/mitaka   https://github.com/openstack/nova-docker

*******************
Driver build
*******************

# cd nova-docker
# pip install -r requirements.txt
# python setup.py install

********************************************
Switch nova-compute to DockerDriver
********************************************
vi /etc/nova/nova.conf
compute_driver=novadocker.virt.docker.DockerDriver

***********************************
Next one on Controller
***********************************

mkdir /etc/nova/rootwrap.d
vi /etc/nova/rootwrap.d/docker.filters

[Filters]
# nova/virt/docker/driver.py: 'ln', '-sf', '/var/run/netns/.*'
ln: CommandFilter, /bin/ln, root

****************************************************
Nova Compute Service restart on Compute
****************************************************

# systemctl restart openstack-nova-compute

****************************************
Glance API Service restart on Controller
****************************************

vi /etc/glance/glance-api.conf
container_formats=ami,ari,aki,bare,ovf,ova,docker
# systemctl restart openstack-glance-api

Build on Compute GlassFish 4.1 docker image per
http://bderzhavets.blogspot.com/2015/01/hacking-dockers-phusionbaseimage-to.html  and upload to glance :-

[root@ip-192-169-142-137 ~(keystone_admin)]# docker images

REPOSITORY                 TAG                 IMAGE ID            CREATED             SIZE

derby/docker-glassfish41   latest              615ce2c6a21f        29 minutes ago      1.155 GB
rastasheep/ubuntu-sshd     latest              70e0ac74c691        32 hours ago        251.6 MB
phusion/baseimage          latest              772dd063a060        3 months ago        305.1 MB
larsks/thttpd              latest              a31ab5050b67        15 months ago       1.058 MB

[root@ip-192-169-142-137 ~(keystone_admin)]# docker save derby/docker-glassfish41 | openstack image create derby/docker-glassfish41 --public --container-format docker --disk-format raw

+——————+——————————————————+
| Field            | Value                                                |
+——————+——————————————————+
| checksum         | dca755d516e35d947ae87ff8bef8fa8f                     |
| container_format | docker                                               |
| created_at       | 2016-03-31T09:32:53Z                                 |
| disk_format      | raw                                                  |
| file             | /v2/images/cbf88266-0b49-4bc2-9527-cc9c9da0c1eb/file |
| id               | cbf88266-0b49-4bc2-9527-cc9c9da0c1eb                 |
| min_disk         | 0                                                    |
| min_ram          | 0                                                    |
| name             | derby/docker-glassfish41                             |
| owner            | 677c4fec97d14b8db0639086f5d59f7d                     |
| protected        | False                                                |
| schema           | /v2/schemas/image                                    |
| size             | 1175030784                                           |
| status           | active                                               |
| tags             |                                                      |
| updated_at       | 2016-03-31T09:33:58Z                                 |
| virtual_size     | None                                                 |
| visibility       | public                                               |
+——————+——————————————————+

Now launch the derbyGlassfish instance via the dashboard and assign a floating IP
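The same can be done from the CLI, sourcing the tenant's credentials (a hedged sketch; the flavor name, network id and floating IP pool are placeholders to adjust per environment, and 172.124.4.155 is the floating IP that ended up assigned in this run):

# nova boot --image derby/docker-glassfish41 --flavor m1.small \
  --nic net-id=<demo_network-id> derbyGlassfish
# nova floating-ip-create <external-pool>
# nova floating-ip-associate derbyGlassfish 172.124.4.155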

Access to Glassfish instance via FIP 172.124.4.155

[root@ip-192-169-142-137 ~(keystone_admin)]# docker ps

CONTAINER ID        IMAGE                      COMMAND                  CREATED             STATUS              PORTS               NAMES

70ac259e9176        derby/docker-glassfish41   “/sbin/my_init”          3 minutes ago       Up 3 minutes                            nova-50f22f8a-e6ff-4b8b-8c15-f3b9bbd1aad2
a0826911eabe        rastasheep/ubuntu-sshd     “/usr/sbin/sshd -D”      About an hour ago   Up About an hour                        nova-c8284258-f9c0-4b81-8cd0-db6e7cbf8d48
7923487076d5        larsks/thttpd              “/thttpd -D -l /dev/s”   About an hour ago   Up About an hour                        nova-03664d5e-f3c5-4ebb-9109-e96189150626


Storage Node (LVMiSCSI) deployment for RDO Kilo on CentOS 7.2

January 4, 2016

The RDO deployment below has been done via a straightforward RDO Kilo packstack run and demonstrates that the Storage Node can work as a traditional iSCSI Target Server, with each Compute Node acting as an iSCSI initiator client. This functionality is provided by tuning the Cinder && Glance services running on the Storage Node.
Following below is the setup for a 3 node deployment test, Controller/Network & Compute & Storage, on RDO Kilo (CentOS 7.2), performed on a Fedora 23 host with KVM/Libvirt Hypervisor (32 GB RAM, Intel Core i7-4790 Haswell CPU, ASUS Z97-P). Three VMs (4 GB RAM, 4 VCPUS) have been set up: the Controller/Network VM with two VNICs (external/management subnet, vteps subnet), the Compute Node VM with two VNICs (management, vteps subnets), and the Storage Node VM with one VNIC (management).

Setup :-

192.169.142.127 – Controller/Network Node
192.169.142.137 – Compute Node
192.169.142.157 – Storage Node (LVMiSCSI)

Deployment could be done via answer-file from https://www.linux.com/community/blogs/133-general-linux/864102-storage-node-lvmiscsi-deployment-for-rdo-liberty-on-centos-71

Notice that the Glance, Cinder and Swift services are not running on the Controller. Connection to http://StorageNode-IP:8776/v1/xxxxxx/types will be satisfied as soon as the dependencies introduced by https://review.openstack.org/192883 are satisfied on the Storage Node; otherwise it could be done only via a second run of the RDO Kilo installer, with this port (cinder-api) already responding on the Controller, which had previously been set up as the first storage node. Thanks to Javier Pena, who did this troubleshooting in https://bugzilla.redhat.com/show_bug.cgi?id=1234038. The issue has been fixed in the RDO Liberty release.
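A quick, hedged way to confirm that cinder-api on the Storage Node answers on that port once everything is in place (the token and tenant id are placeholders):

# curl -s -H "X-Auth-Token: <admin-token>" \
  http://192.169.142.157:8776/v1/<tenant-id>/types | python -m json.tool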

 

SantiagoController1

Storage Node

SantiagoStorage1

SantiagoStorage2

SantiagoStorage3

Compute Node

SantiagoCompute1

[root@ip-192-169-142-137 ~(keystone_admin)]# iscsiadm -m session -P 3
iSCSI Transport Class version 2.0-870
version 6.2.0.873-30
Target: iqn.2010-10.org.openstack:volume-3ab60233-5110-4915-9998-7cec7d3ac919 (non-flash)
Current Portal: 192.169.142.157:3260,1
Persistent Portal: 192.169.142.157:3260,1
**********
Interface:
**********
Iface Name: default
Iface Transport: tcp
Iface Initiatorname: iqn.1994-05.com.redhat:576fc73f30e9
Iface IPaddress: 192.169.142.137
Iface HWaddress: <empty>
Iface Netdev: <empty>
SID: 1
iSCSI Connection State: LOGGED IN
iSCSI Session State: LOGGED_IN
Internal iscsid Session State: NO CHANGE
*********
Timeouts:
*********
Recovery Timeout: 120
Target Reset Timeout: 30
LUN Reset Timeout: 30
Abort Timeout: 15
*****
CHAP:
*****
username: hBbbvVmompAY6ikd8DJF
password: ********
username_in: <empty>
password_in: ********
************************
Negotiated iSCSI params:
************************
HeaderDigest: None
DataDigest: None
MaxRecvDataSegmentLength: 262144
MaxXmitDataSegmentLength: 262144
FirstBurstLength: 65536
MaxBurstLength: 262144
ImmediateData: Yes
InitialR2T: Yes
MaxOutstandingR2T: 1
************************
Attached SCSI devices:
************************
Host Number: 2 State: running
scsi2 Channel 00 Id 0 Lun: 0
Attached scsi disk sda State: running
Target: iqn.2010-10.org.openstack:volume-2087aa9a-7984-4f4e-b00d-e461fcd02099 (non-flash)
Current Portal: 192.169.142.157:3260,1
Persistent Portal: 192.169.142.157:3260,1
**********
Interface:
**********
Iface Name: default
Iface Transport: tcp
Iface Initiatorname: iqn.1994-05.com.redhat:576fc73f30e9
Iface IPaddress: 192.169.142.137
Iface HWaddress: <empty>
Iface Netdev: <empty>
SID: 2
iSCSI Connection State: LOGGED IN
iSCSI Session State: LOGGED_IN
Internal iscsid Session State: NO CHANGE
*********
Timeouts:
*********
Recovery Timeout: 120
Target Reset Timeout: 30
LUN Reset Timeout: 30
Abort Timeout: 15
*****
CHAP:
*****
username: TB8qiKbMdrWwoLBPdCTs
password: ********
username_in: <empty>
password_in: ********
************************
Negotiated iSCSI params:
************************
HeaderDigest: None
DataDigest: None
MaxRecvDataSegmentLength: 262144
MaxXmitDataSegmentLength: 262144
FirstBurstLength: 65536
MaxBurstLength: 262144
ImmediateData: Yes
InitialR2T: Yes
MaxOutstandingR2T: 1
************************
Attached SCSI devices:
************************
Host Number: 3 State: running
scsi3 Channel 00 Id 0 Lun: 0
Attached scsi disk sdb State: running
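For cross-checking on the Storage Node side, the LVM volumes backing these two iSCSI targets can be listed (a minimal sketch, assuming packstack's default cinder-volumes volume group and the LIO target stack):

[root@ip-192-169-142-157 ~]# lvs cinder-volumes
[root@ip-192-169-142-157 ~]# targetcli ls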


Attempt to set up HAProxy/Keepalived 3 Node Controller on RDO Liberty per Javier Pena

November 18, 2015

URGENT UPDATE 11/18/2015
Please, view https://github.com/beekhof/osp-ha-deploy/commit/b2e01e86ca93cfad9ad01d533b386b4c9607c60d
It looks like work in progress.
See also https://www.redhat.com/archives/rdo-list/2015-November/msg00168.html
END UPDATE

Actually, the setup below closely follows https://github.com/beekhof/osp-ha-deploy/blob/master/HA-keepalived.md

As far as I know, Cisco's schema has been implemented :-
Keepalived, HAProxy, Galera for MySQL, manual install, at least 3 controller nodes. I just highlighted several steps which, I believe, allowed me to bring this work to success. Javier is using a flat external network provider for the Controllers cluster, disabling NetworkManager && enabling the network service from the very start; there is one step which I was unable to skip. It is disabling the IPs of the eth0 interfaces && restarting the network service right before running `ovs-vsctl add-port br-eth0 eth0`, per the Neutron building instructions of the mentioned "Howto", which seems to be one of the best I've ever seen.

I guess that, due to this sequence of steps, the external network is still pingable even on a three node Controller Cluster that has already been built and seems to run OK :-

However, had I disabled the eth0 IPs from the start, I would have lost connectivity right away when switching from NetworkManager to the network service. In general, the external network is supposed to be pingable from the qrouter namespace due to the Neutron router's DNAT/SNAT iptables forwarding, but not from the Controller. I am also aware that when an Ethernet interface becomes an OVS port of an OVS bridge, its IP is supposed to be suppressed. When an external network provider is not used, br-ex gets an available IP on the external network; using an external network provider changes the situation. Details may be seen here :-

https://www.linux.com/community/blogs/133-general-linux/858156-multiple-external-networks-with-a-single-l3-agent-testing-on-rdo-liberty-per-lars-kellogg-stedman

[root@hacontroller1 ~(keystone_admin)]# systemctl status NetworkManager
NetworkManager.service – Network Manager
Loaded: loaded (/usr/lib/systemd/system/NetworkManager.service; disabled)
Active: inactive (dead)

[root@hacontroller1 ~(keystone_admin)]# systemctl status network
network.service – LSB: Bring up/down networking
Loaded: loaded (/etc/rc.d/init.d/network)
Active: active (exited) since Wed 2015-11-18 08:36:53 MSK; 2h 10min ago
Process: 708 ExecStart=/etc/rc.d/init.d/network start (code=exited, status=0/SUCCESS)

Nov 18 08:36:47 hacontroller1.example.com network[708]: Bringing up loopback interface:  [  OK  ]
Nov 18 08:36:51 hacontroller1.example.com network[708]: Bringing up interface eth0:  [  OK  ]
Nov 18 08:36:53 hacontroller1.example.com network[708]: Bringing up interface eth1:  [  OK  ]
Nov 18 08:36:53 hacontroller1.example.com systemd[1]: Started LSB: Bring up/down networking.

[root@hacontroller1 ~(keystone_admin)]# ifconfig
eth0: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1500
inet6 fe80::5054:ff:fe6d:926a  prefixlen 64  scopeid 0x20<link>
ether 52:54:00:6d:92:6a  txqueuelen 1000  (Ethernet)
RX packets 5036  bytes 730778 (713.6 KiB)
RX errors 0  dropped 12  overruns 0  frame 0
TX packets 15715  bytes 930045 (908.2 KiB)
TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0

eth1: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1500
inet 192.169.142.221  netmask 255.255.255.0  broadcast 192.169.142.255
inet6 fe80::5054:ff:fe5e:9644  prefixlen 64  scopeid 0x20<link>
ether 52:54:00:5e:96:44  txqueuelen 1000  (Ethernet)
RX packets 1828396  bytes 283908183 (270.7 MiB)
RX errors 0  dropped 13  overruns 0  frame 0
TX packets 1839312  bytes 282429736 (269.3 MiB)
TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0

lo: flags=73<UP,LOOPBACK,RUNNING>  mtu 65536
inet 127.0.0.1  netmask 255.0.0.0
inet6 ::1  prefixlen 128  scopeid 0x10<host>
loop  txqueuelen 0  (Local Loopback)
RX packets 869067  bytes 69567890 (66.3 MiB)
RX errors 0  dropped 0  overruns 0  frame 0
TX packets 869067  bytes 69567890 (66.3 MiB)
TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0

[root@hacontroller1 ~(keystone_admin)]# ping -c 3  10.10.10.1
PING 10.10.10.1 (10.10.10.1) 56(84) bytes of data.
64 bytes from 10.10.10.1: icmp_seq=1 ttl=64 time=2.04 ms
64 bytes from 10.10.10.1: icmp_seq=2 ttl=64 time=0.103 ms
64 bytes from 10.10.10.1: icmp_seq=3 ttl=64 time=0.118 ms

— 10.10.10.1 ping statistics —

3 packets transmitted, 3 received, 0% packet loss, time 2001ms

rtt min/avg/max/mdev = 0.103/0.754/2.043/0.911 ms

 

Both the management and external networks are emulated by corresponding Libvirt networks
on the F23 Virtualization Server. A total of four VMs have been set up, 3 of them for Controller nodes and one for Compute (4 VCPUS, 4 GB RAM)

[root@fedora23wks ~]# cat openstackvms.xml ( for eth1's )

<network>
<name>openstackvms</name>
<uuid>d0e9964a-f91a-40c0-b769-a609aee41bf2</uuid>
<forward mode='nat'>
<nat>
<port start='1024' end='65535'/>
</nat>
</forward>
<bridge name='virbr1' stp='on' delay='0' />
<mac address='52:54:00:60:f8:6d'/>
<ip address='192.169.142.1' netmask='255.255.255.0'>
<dhcp>
<range start='192.169.142.2' end='192.169.142.254' />
</dhcp>
</ip>
</network>

[root@fedora23wks ~]# cat public.xml ( for external network provider )

<network>
<name>public</name>
<uuid>d0e9965b-f92c-40c1-b749-b609aed42cf2</uuid>
<forward mode='nat'>
<nat>
<port start='1024' end='65535'/>
</nat>
</forward>
<bridge name='virbr2' stp='on' delay='0' />
<mac address='52:54:00:60:f8:6d'/>
<ip address='10.10.10.1' netmask='255.255.255.0'>
<dhcp>
<range start='10.10.10.2' end='10.10.10.254' />
</dhcp>
</ip>
</network>

Only one file is bit different on Controller Nodes , it is l3_agent.ini

[root@hacontroller1 neutron(keystone_demo)]# cat l3_agent.ini | grep -v ^# | grep -v ^$

[DEFAULT]
interface_driver = neutron.agent.linux.interface.OVSInterfaceDriver
handle_internal_only_routers = True
send_arp_for_ha = 3
metadata_ip = controller-vip.example.com
external_network_bridge =
gateway_external_network_id =
[AGENT]

*************************************************************************************
Due to posted “UPDATE” on the top of  the blog entry in meantime
perfect solution is provided by
https://github.com/beekhof/osp-ha-deploy/commit/b2e01e86ca93cfad9ad01d533b386b4c9607c60d
The commit has been done on 11/14/2015 right after discussion at RDO mailing list.
*************************************************************************************

One more step which I did ( not sure whether it really has
to be done at this point in time ):
the IPs on the eth0 interfaces were disabled just before
running `ovs-vsctl add-port br-eth0 eth0` (a minimal sketch follows the list below) :-

1. Updated ifcfg-eth0 files on all Controllers
2. `service network restart` on all Controllers
3. `ovs-vsctl add-port br-eth0 eth0`on all Controllers
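A minimal sketch of that sequence on one controller; the ifcfg-eth0 content is an assumption matching a typical external network provider setup, adjust to your environment:

# cat /etc/sysconfig/network-scripts/ifcfg-eth0
DEVICE=eth0
ONBOOT=yes
BOOTPROTO=none
NM_CONTROLLED=no

# service network restart
# ovs-vsctl add-port br-eth0 eth0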

*****************************************************************************************
Targeting just a POC ( to get floating IPs accessible from the Fedora 23 Virtualization host ), the resulting Controllers Cluster setup was :-
*****************************************************************************************

I installed only

Keystone
Glance
Neutron
Nova
Horizon

**************************
UPDATE to official docs
**************************
[root@hacontroller1 ~(keystone_admin)]# cat   keystonerc_admin
export OS_USERNAME=admin
export OS_TENANT_NAME=admin
export OS_PROJECT_NAME=admin
export OS_REGION_NAME=regionOne
export OS_PASSWORD=keystonetest
export OS_AUTH_URL=http://controller-vip.example.com:35357/v2.0/
export OS_SERVICE_ENDPOINT=http://controller-vip.example.com:35357/v2.0
export OS_SERVICE_TOKEN=2fbe298b385e132da335
export PS1='[\u@\h \W(keystone_admin)]\$ '

Due to Galera synchronous multi-master replication running between the Controllers, commands like :-

# su keystone -s /bin/sh -c "keystone-manage db_sync"
# su glance -s /bin/sh -c "glance-manage db_sync"
# neutron-db-manage --config-file /etc/neutron/neutron.conf --config-file /etc/neutron/plugin.ini upgrade head
# su nova -s /bin/sh -c "nova-manage db sync"

are supposed to be run just once, from Controller node 1 for instance.

************************
Compute Node setup:-
*************************

Compute setup

**********************
On all nodes
**********************

[root@hacontroller1 neutron(keystone_demo)]# cat /etc/hosts
192.169.142.220 controller-vip.example.com controller-vip
192.169.142.221 hacontroller1.example.com hacontroller1
192.169.142.222 hacontroller2.example.com hacontroller2
192.169.142.223 hacontroller3.example.com hacontroller3
192.169.142.224 compute.example.con compute
127.0.0.1   localhost localhost.localdomain localhost4 localhost4.localdomain4
::1         localhost localhost.localdomain localhost6 localhost6.localdomain6

[root@hacontroller1 ~(keystone_admin)]# cat /etc/neutron/neutron.conf | grep -v ^$| grep -v ^#

[DEFAULT]
bind_host = 192.169.142.22(X)
auth_strategy = keystone
notification_driver = neutron.openstack.common.notifier.rpc_notifier
notify_nova_on_port_status_changes = True
notify_nova_on_port_data_changes = True
core_plugin = neutron.plugins.ml2.plugin.Ml2Plugin
service_plugins = router,lbaas
router_scheduler_driver = neutron.scheduler.l3_agent_scheduler.ChanceScheduler
dhcp_agents_per_network = 2
api_workers = 2
rpc_workers = 2
l3_ha = True
min_l3_agents_per_router = 2
max_l3_agents_per_router = 2

[matchmaker_redis]
[matchmaker_ring]
[quotas]
[agent]
[keystone_authtoken]
auth_uri = http://controller-vip.example.com:5000/
identity_uri = http://127.0.0.1:5000
admin_tenant_name = %SERVICE_TENANT_NAME%
admin_user = %SERVICE_USER%
admin_password = %SERVICE_PASSWORD%
auth_plugin = password
auth_url = http://controller-vip.example.com:35357/
username = neutron
password = neutrontest
project_name = services
[database]
connection = mysql://neutron:neutrontest@controller-vip.example.com:3306/neutron
max_retries = -1
[nova]
nova_region_name = regionOne
project_domain_id = default
project_name = services
user_domain_id = default
password = novatest
username = compute
auth_url = http://controller-vip.example.com:35357/
auth_plugin = password
[oslo_concurrency]
[oslo_policy]
[oslo_messaging_amqp]
[oslo_messaging_qpid]
[oslo_messaging_rabbit]
rabbit_hosts = hacontroller1,hacontroller2,hacontroller3
rabbit_ha_queues = true
[qos]

[root@hacontroller1 haproxy(keystone_demo)]# cat haproxy.cfg
global
daemon
stats socket /var/lib/haproxy/stats
defaults
mode tcp
maxconn 10000
timeout connect 5s
timeout client 30s
timeout server 30s
listen monitor
bind 192.169.142.220:9300
mode http
monitor-uri /status
stats enable
stats uri /admin
stats realm Haproxy\ Statistics
stats auth root:redhat
stats refresh 5s
frontend vip-db
bind 192.169.142.220:3306
timeout client 90m
default_backend db-vms-galera
backend db-vms-galera
option httpchk
stick-table type ip size 1000
stick on dst
timeout server 90m
server rhos8-node1 192.169.142.221:3306 check inter 1s port 9200 backup on-marked-down shutdown-sessions
server rhos8-node2 192.169.142.222:3306 check inter 1s port 9200 backup on-marked-down shutdown-sessions
server rhos8-node3 192.169.142.223:3306 check inter 1s port 9200 backup on-marked-down shutdown-sessions
# Note the RabbitMQ entry is only needed for CloudForms compatibility
# and should be removed in the future
frontend vip-rabbitmq
option clitcpka
bind 192.169.142.220:5672
timeout client 900m
default_backend rabbitmq-vms
backend rabbitmq-vms
option srvtcpka
balance roundrobin
timeout server 900m
server rhos8-node1 192.169.142.221:5672 check inter 1s
server rhos8-node2 192.169.142.222:5672 check inter 1s
server rhos8-node3 192.169.142.223:5672 check inter 1s
frontend vip-keystone-admin
bind 192.169.142.220:35357
default_backend keystone-admin-vms
timeout client 600s
backend keystone-admin-vms
balance roundrobin
timeout server 600s
server rhos8-node1 192.169.142.221:35357 check inter 1s on-marked-down shutdown-sessions
server rhos8-node2 192.169.142.222:35357 check inter 1s on-marked-down shutdown-sessions
server rhos8-node3 192.169.142.223:35357 check inter 1s on-marked-down shutdown-sessions
frontend vip-keystone-public
bind 192.169.142.220:5000
default_backend keystone-public-vms
timeout client 600s
backend keystone-public-vms
balance roundrobin
timeout server 600s
server rhos8-node1 192.169.142.221:5000 check inter 1s on-marked-down shutdown-sessions
server rhos8-node2 192.169.142.222:5000 check inter 1s on-marked-down shutdown-sessions
server rhos8-node3 192.169.142.223:5000 check inter 1s on-marked-down shutdown-sessions
frontend vip-glance-api
bind 192.169.142.220:9191
default_backend glance-api-vms
backend glance-api-vms
balance roundrobin
server rhos8-node1 192.169.142.221:9191 check inter 1s
server rhos8-node2 192.169.142.222:9191 check inter 1s
server rhos8-node3 192.169.142.223:9191 check inter 1s
frontend vip-glance-registry
bind 192.169.142.220:9292
default_backend glance-registry-vms
backend glance-registry-vms
balance roundrobin
server rhos8-node1 192.169.142.221:9292 check inter 1s
server rhos8-node2 192.169.142.222:9292 check inter 1s
server rhos8-node3 192.169.142.223:9292 check inter 1s
frontend vip-cinder
bind 192.169.142.220:8776
default_backend cinder-vms
backend cinder-vms
balance roundrobin
server rhos8-node1 192.169.142.221:8776 check inter 1s
server rhos8-node2 192.169.142.222:8776 check inter 1s
server rhos8-node3 192.169.142.223:8776 check inter 1s
frontend vip-swift
bind 192.169.142.220:8080
default_backend swift-vms
backend swift-vms
balance roundrobin
server rhos8-node1 192.169.142.221:8080 check inter 1s
server rhos8-node2 192.169.142.222:8080 check inter 1s
server rhos8-node3 192.169.142.223:8080 check inter 1s
frontend vip-neutron
bind 192.169.142.220:9696
default_backend neutron-vms
backend neutron-vms
balance roundrobin
server rhos8-node1 192.169.142.221:9696 check inter 1s
server rhos8-node2 192.169.142.222:9696 check inter 1s
server rhos8-node3 192.169.142.223:9696 check inter 1s
frontend vip-nova-vnc-novncproxy
bind 192.169.142.220:6080
default_backend nova-vnc-novncproxy-vms
backend nova-vnc-novncproxy-vms
balance roundrobin
timeout tunnel 1h
server rhos8-node1 192.169.142.221:6080 check inter 1s
server rhos8-node2 192.169.142.222:6080 check inter 1s
server rhos8-node3 192.169.142.223:6080 check inter 1s
frontend nova-metadata-vms
bind 192.169.142.220:8775
default_backend nova-metadata-vms
backend nova-metadata-vms
balance roundrobin
server rhos8-node1 192.169.142.221:8775 check inter 1s
server rhos8-node2 192.169.142.222:8775 check inter 1s
server rhos8-node3 192.169.142.223:8775 check inter 1s
frontend vip-nova-api
bind 192.169.142.220:8774
default_backend nova-api-vms
backend nova-api-vms
balance roundrobin
server rhos8-node1 192.169.142.221:8774 check inter 1s
server rhos8-node2 192.169.142.222:8774 check inter 1s
server rhos8-node3 192.169.142.223:8774 check inter 1s
frontend vip-horizon
bind 192.169.142.220:80
timeout client 180s
default_backend horizon-vms
backend horizon-vms
balance roundrobin
timeout server 180s
mode http
cookie SERVERID insert indirect nocache
server rhos8-node1 192.169.142.221:80 check inter 1s cookie rhos8-horizon1 on-marked-down shutdown-sessions
server rhos8-node2 192.169.142.222:80 check inter 1s cookie rhos8-horizon2 on-marked-down shutdown-sessions
server rhos8-node3 192.169.142.223:80 check inter 1s cookie rhos8-horizon3 on-marked-down shutdown-sessions
frontend vip-heat-cfn
bind 192.169.142.220:8000
default_backend heat-cfn-vms
backend heat-cfn-vms
balance roundrobin
server rhos8-node1 192.169.142.221:8000 check inter 1s
server rhos8-node2 192.169.142.222:8000 check inter 1s
server rhos8-node3 192.169.142.223:8000 check inter 1s
frontend vip-heat-cloudw
bind 192.169.142.220:8003
default_backend heat-cloudw-vms
backend heat-cloudw-vms
balance roundrobin
server rhos8-node1 192.169.142.221:8003 check inter 1s
server rhos8-node2 192.169.142.222:8003 check inter 1s
server rhos8-node3 192.169.142.223:8003 check inter 1s
frontend vip-heat-srv
bind 192.169.142.220:8004
default_backend heat-srv-vms
backend heat-srv-vms
balance roundrobin
server rhos8-node1 192.169.142.221:8004 check inter 1s
server rhos8-node2 192.169.142.222:8004 check inter 1s
server rhos8-node3 192.169.142.223:8004 check inter 1s
frontend vip-ceilometer
bind 192.169.142.220:8777
timeout client 90s
default_backend ceilometer-vms
backend ceilometer-vms
balance roundrobin
timeout server 90s
server rhos8-node1 192.169.142.221:8777 check inter 1s
server rhos8-node2 192.169.142.222:8777 check inter 1s
server rhos8-node3 192.169.142.223:8777 check inter 1s
frontend vip-sahara
bind 192.169.142.220:8386
default_backend sahara-vms
backend sahara-vms
balance roundrobin
server rhos8-node1 192.169.142.221:8386 check inter 1s
server rhos8-node2 192.169.142.222:8386 check inter 1s
server rhos8-node3 192.169.142.223:8386 check inter 1s
frontend vip-trove
bind 192.169.142.220:8779
default_backend trove-vms
backend trove-vms
balance roundrobin
server rhos8-node1 192.169.142.221:8779 check inter 1s
server rhos8-node2 192.169.142.222:8779 check inter 1s
server rhos8-node3 192.169.142.223:8779 check inter 1s

[root@hacontroller1 ~(keystone_demo)]# cat /etc/my.cnf.d/galera.cnf
[mysqld]
skip-name-resolve=1
binlog_format=ROW
default-storage-engine=innodb
innodb_autoinc_lock_mode=2
innodb_locks_unsafe_for_binlog=1
max_connections=8192
query_cache_size=0
query_cache_type=0
bind_address=192.169.142.22(X)
wsrep_provider=/usr/lib64/galera/libgalera_smm.so
wsrep_cluster_name="galera_cluster"
wsrep_cluster_address="gcomm://192.169.142.221,192.169.142.222,192.169.142.223"
wsrep_slave_threads=1
wsrep_certify_nonPK=1
wsrep_max_ws_rows=131072
wsrep_max_ws_size=1073741824
wsrep_debug=0
wsrep_convert_LOCK_to_trx=0
wsrep_retry_autocommit=1
wsrep_auto_increment_control=1
wsrep_drupal_282555_workaround=0
wsrep_causal_reads=0
wsrep_notify_cmd=
wsrep_sst_method=rsync
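To confirm all three controllers actually joined the Galera cluster, a quick check on any node (credentials are assumed to be available to the mysql client):

# mysql -e "SHOW STATUS LIKE 'wsrep_cluster_size';"
# mysql -e "SHOW STATUS LIKE 'wsrep_local_state_comment';"

Expected values are 3 and Synced respectively.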

[root@hacontroller1 ~(keystone_demo)]# cat /etc/keepalived/keepalived.conf
vrrp_script chk_haproxy {
script "/usr/bin/killall -0 haproxy"
interval 2
}
vrrp_instance VI_PUBLIC {
interface eth1
state BACKUP
virtual_router_id 52
priority 101
virtual_ipaddress {
192.169.142.220 dev eth1
}
track_script {
chk_haproxy
}
# Avoid failback
nopreempt
}
vrrp_sync_group VG1
group {
VI_PUBLIC
}
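To see which controller currently holds the VIP defined above, a minimal check run on each node:

# ip addr show eth1 | grep 192.169.142.220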

*************************************************************************
The most difficult  procedure is re-syncing Galera Mariadb cluster
*************************************************************************

https://github.com/beekhof/osp-ha-deploy/blob/master/keepalived/galera-bootstrap.md

Nova services start without waiting for the Galera databases to get in sync.
After the sync is done, a database reconnection via `openstack-service restart nova` is required on every Controller, regardless of systemctl reporting that the services are up and running. Also, the most likely reason for VMs failing to reach the Nova metadata server at boot is a failure to start the neutron-l3-agent service on each Controller: with the classical design, VMs access metadata via neutron-ns-metadata-proxy running in the qrouter namespace. The neutron-l3-agents may be started with no problems and sometimes just need to be restarted.
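A hedged sketch of that recovery sequence on each controller (openstack-service is provided by the openstack-utils package):

# openstack-service restart nova
# systemctl restart neutron-l3-agent
# nova service-list
# neutron agent-list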

RUN Time Snapshots. Keepalived status on Controller’s nodes

HA Neutron router belonging to tenant demo created via Neutron CLI

***********************************************************************

 At this point hacontroller1 goes down. On hacontroller2 run :-

***********************************************************************

[root@hacontroller2 ~(keystone_admin)]# neutron l3-agent-list-hosting-router RouterHA

+--------------------------------------+---------------------------+----------------+-------+----------+
| id                                   | host                      | admin_state_up | alive | ha_state |
+--------------------------------------+---------------------------+----------------+-------+----------+
| a03409d2-fbe9-492c-a954-e1bdf7627491 | hacontroller2.example.com | True           | :-)   | active   |
| 0d6e658a-e796-4cff-962f-06e455fce02f | hacontroller1.example.com | True           | xxx   | active   |
+--------------------------------------+---------------------------+----------------+-------+----------+

***********************************************************************

 At this point hacontroller2 goes down. hacontroller1 goes up :-

***********************************************************************

Nova Services status on all Controllers

Neutron Services status on all Controllers

Compute Node status

******************************************************************************
Cloud VM (L3) at runtime . Accessibility from F23 Virtualization Host,
running HA 3  Nodes Controller and Compute Node VMs (L2)
******************************************************************************

[root@fedora23wks ~]# ping  10.10.10.103

PING 10.10.10.103 (10.10.10.103) 56(84) bytes of data.
64 bytes from 10.10.10.103: icmp_seq=1 ttl=63 time=1.14 ms
64 bytes from 10.10.10.103: icmp_seq=2 ttl=63 time=0.813 ms
64 bytes from 10.10.10.103: icmp_seq=3 ttl=63 time=0.636 ms
64 bytes from 10.10.10.103: icmp_seq=4 ttl=63 time=0.778 ms
64 bytes from 10.10.10.103: icmp_seq=5 ttl=63 time=0.493 ms
^C

— 10.10.10.103 ping statistics —

5 packets transmitted, 5 received, 0% packet loss, time 4001ms

rtt min/avg/max/mdev = 0.493/0.773/1.146/0.218 ms

[root@fedora23wks ~]# ssh -i oskey1.priv fedora@10.10.10.103
Last login: Tue Nov 17 09:02:30 2015
[fedora@vf23dev ~]$ uname -a
Linux vf23dev.novalocal 4.2.5-300.fc23.x86_64 #1 SMP Tue Oct 27 04:29:56 UTC 2015 x86_64 x86_64 x86_64 GNU/Linux

********************************************************************************
Verifying the neutron workflow on the 3 node controller cluster built via the patch :-
********************************************************************************

[root@hacontroller1 ~(keystone_admin)]# ovs-ofctl show br-eth0
OFPT_FEATURES_REPLY (xid=0x2): dpid:0000baf0db1a854f
n_tables:254, n_buffers:256
capabilities: FLOW_STATS TABLE_STATS PORT_STATS QUEUE_STATS ARP_MATCH_IP
actions: output enqueue set_vlan_vid set_vlan_pcp strip_vlan mod_dl_src mod_dl_dst mod_nw_src mod_nw_dst mod_nw_tos mod_tp_src mod_tp_dst
1(eth0): addr:52:54:00:aa:0e:fc
config:     0
state:      0
speed: 0 Mbps now, 0 Mbps max
2(phy-br-eth0): addr:46:c0:e0:30:72:92
config:     0
state:      0
speed: 0 Mbps now, 0 Mbps max
LOCAL(br-eth0): addr:ba:f0:db:1a:85:4f
config:     0
state:      0
speed: 0 Mbps now, 0 Mbps max
OFPT_GET_CONFIG_REPLY (xid=0x4): frags=normal miss_send_len=0

[root@hacontroller1 ~(keystone_admin)]# ovs-ofctl dump-flows  br-eth0
NXST_FLOW reply (xid=0x4):
cookie=0x0, duration=15577.057s, table=0, n_packets=50441, n_bytes=3262529, idle_age=2, priority=4,in_port=2,dl_vlan=3 actions=strip_vlan,NORMAL
cookie=0x0, duration=15765.938s, table=0, n_packets=31225, n_bytes=1751795, idle_age=0, priority=2,in_port=2 actions=drop
cookie=0x0, duration=15765.974s, table=0, n_packets=39982, n_bytes=42838752, idle_age=1, priority=0 actions=NORMAL

Check `ovs-vsctl show`

Bridge br-int
fail_mode: secure
Port “tapc8488877-45”
tag: 4
Interface “tapc8488877-45”
type: internal
Port br-int
Interface br-int
type: internal
Port patch-tun
Interface patch-tun
type: patch
options: {peer=patch-int}
Port “tap14aa6eeb-70”
tag: 2
Interface “tap14aa6eeb-70”
type: internal
Port “qr-8f5b3f4a-45”
tag: 2
Interface “qr-8f5b3f4a-45”
type: internal
Port “int-br-eth0”
Interface “int-br-eth0″
type: patch
options: {peer=”phy-br-eth0”}
Port “qg-34893aa0-17”
tag: 3

[root@hacontroller2 ~(keystone_demo)]# ovs-ofctl show  br-eth0
OFPT_FEATURES_REPLY (xid=0x2): dpid:0000b6bfa2bafd45
n_tables:254, n_buffers:256
capabilities: FLOW_STATS TABLE_STATS PORT_STATS QUEUE_STATS ARP_MATCH_IP
actions: output enqueue set_vlan_vid set_vlan_pcp strip_vlan mod_dl_src mod_dl_dst mod_nw_src mod_nw_dst mod_nw_tos mod_tp_src mod_tp_dst
1(eth0): addr:52:54:00:73:df:29
config:     0
state:      0
speed: 0 Mbps now, 0 Mbps max
2(phy-br-eth0): addr:be:89:61:87:56:20
config:     0
state:      0
speed: 0 Mbps now, 0 Mbps max
LOCAL(br-eth0): addr:b6:bf:a2:ba:fd:45
config:     0
state:      0
speed: 0 Mbps now, 0 Mbps max
OFPT_GET_CONFIG_REPLY (xid=0x4): frags=normal miss_send_len=0

[root@hacontroller2 ~(keystone_demo)]# ovs-ofctl dump-flows  br-eth0
NXST_FLOW reply (xid=0x4):
cookie=0x0, duration=15810.746s, table=0, n_packets=0, n_bytes=0, idle_age=15810, priority=4,in_port=2,dl_vlan=2 actions=strip_vlan,NORMAL
cookie=0x0, duration=16105.662s, table=0, n_packets=31849, n_bytes=1786827, idle_age=0, priority=2,in_port=2 actions=drop
cookie=0x0, duration=16105.696s, table=0, n_packets=39762, n_bytes=2100763, idle_age=0, priority=0 actions=NORMAL

Check `ovs-vsctl show`
Bridge br-int
fail_mode: secure
Port “qg-34893aa0-17”
tag: 2
Interface “qg-34893aa0-17”
type: internal


RDO Liberty Set up for three Nodes (Controller+Network+Compute) ML2&OVS&VXLAN on CentOS 7.1

October 22, 2015

As advertised officially

In addition to the comprehensive OpenStack services, libraries and clients, this release also provides Packstack, a simple installer for proof-of-concept installations, as small as a single all-in-one box and RDO Manager an OpenStack deployment and management tool for production environments based on the OpenStack TripleO project

In the posting below I intend to test packstack on Liberty performing a classic three node deployment. If packstack succeeds, then post installation actions like VRRP or DVR setups might be committed as well. One of the real problems for packstack is HA Controller(s) setup; here RDO Manager is supposed to have a significant advantage, replacing a lot of manual configuration with a comprehensive CLI.

Following below is a brief instruction for a three node deployment test, Controller && Network && Compute, of RDO Liberty, which was performed on a Fedora 22 host with KVM/Libvirt Hypervisor (16 GB RAM, Intel Core i7-4790 Haswell CPU, ASUS Z97-P). Three VMs (4 GB RAM, 4 VCPUS) have been set up: the Controller VM with one VNIC (management subnet), the Network Node VM with three VNICs (management, vteps, external subnets), and the Compute Node VM with two VNICs (management, vteps subnets).

SELINUX stays in enforcing mode.

I avoid using the default libvirt subnet 192.168.122.0/24 for any purpose related to the VMs serving as RDO Liberty nodes; for some reason it causes network congestion when forwarding packets to the Internet and vice versa.

Three Libvirt networks created

# cat openstackvms.xml
<network>
<name>openstackvms</name>
<uuid>d0e9964a-f91a-40c0-b769-a609aee41bf2</uuid>
<forward mode='nat'>
<nat>
<port start='1024' end='65535'/>
</nat>
</forward>
<bridge name='virbr1' stp='on' delay='0' />
<mac address='52:54:00:60:f8:6d'/>
<ip address='192.169.142.1' netmask='255.255.255.0'>
<dhcp>
<range start='192.169.142.2' end='192.169.142.254' />
</dhcp>
</ip>
</network>

[root@vfedora22wks ~]# cat public.xml
<network>
<name>public</name>
<uuid>d1e9965b-f92c-40c1-b749-b609aed42cf2</uuid>
<forward mode='nat'>
<nat>
<port start='1024' end='65535'/>
</nat>
</forward>
<bridge name='virbr2' stp='on' delay='0' />
<mac address='52:54:00:60:f8:6d'/>
<ip address='172.24.4.225' netmask='255.255.255.240'>
<dhcp>
<range start='172.24.4.226' end='172.24.4.238' />
</dhcp>
</ip>
</network>

[root@vfedora22wks ~]# cat vteps.xml
<network>
<name>vteps</name>
<uuid>d2e9965b-f92c-40c1-b749-b609aed42cf2</uuid>
<forward mode='nat'>
<nat>
<port start='1024' end='65535'/>
</nat>
</forward>
<bridge name='virbr3' stp='on' delay='0' />
<mac address='52:54:00:60:f8:6d'/>
<ip address='10.0.0.1' netmask='255.255.255.0'>
<dhcp>
<range start='10.0.0.1' end='10.0.0.254' />
</dhcp>
</ip>
</network>
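These networks can be defined and started with virsh before verifying them (a minimal sketch using the XML files above):

# virsh net-define openstackvms.xml && virsh net-start openstackvms && virsh net-autostart openstackvms
# virsh net-define public.xml && virsh net-start public && virsh net-autostart public
# virsh net-define vteps.xml && virsh net-start vteps && virsh net-autostart vteps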

# virsh net-list
Name                 State      Autostart     Persistent
————————————————————————–
default               active        yes           yes
openstackvms     active        yes           yes
public                active        yes           yes
vteps                 active         yes          yes

*********************************************************************************
1. The first Libvirt subnet "openstackvms" serves as the management network.
All 3 VMs are attached to this subnet.
**********************************************************************************
2. The second Libvirt subnet "public" serves for simulation of the external network. The Network Node is attached to "public"; later on, the "eth2" interface (which belongs to "public") is converted into an OVS port of br-ex on the Network Node. Via bridge virbr2 (172.24.4.225) this Libvirt subnet provides the VMs running on the Compute Node with access to the Internet, since it matches the external network created by the packstack installation, 172.24.4.224/28.
***********************************************************************************
3. The third Libvirt subnet "vteps" serves for VTEP endpoint simulation. The Network and Compute Node VMs are attached to this subnet.
***********************************************************************************

*********************
Answer-file :-
*********************

[root@ip-192-169-142-127 ~(keystone_admin)]# cat answer3Nodet.txt
[general]
CONFIG_SSH_KEY=/root/.ssh/id_rsa.pub
CONFIG_DEFAULT_PASSWORD=
CONFIG_MARIADB_INSTALL=y
CONFIG_GLANCE_INSTALL=y
CONFIG_CINDER_INSTALL=y
CONFIG_NOVA_INSTALL=y
CONFIG_NEUTRON_INSTALL=y
CONFIG_HORIZON_INSTALL=y
CONFIG_SWIFT_INSTALL=y
CONFIG_CEILOMETER_INSTALL=y
CONFIG_HEAT_INSTALL=n
CONFIG_CLIENT_INSTALL=y
CONFIG_NTP_SERVERS=
CONFIG_NAGIOS_INSTALL=y
EXCLUDE_SERVERS=
CONFIG_DEBUG_MODE=n
CONFIG_CONTROLLER_HOST=192.169.142.127
CONFIG_COMPUTE_HOSTS=192.169.142.137
# In case of two Compute nodes
# CONFIG_COMPUTE_HOSTS=192.169.142.137,192.169.142.157
CONFIG_NETWORK_HOSTS=192.169.142.147
CONFIG_VMWARE_BACKEND=n
CONFIG_UNSUPPORTED=n
CONFIG_VCENTER_HOST=
CONFIG_VCENTER_USER=
CONFIG_VCENTER_PASSWORD=
CONFIG_VCENTER_CLUSTER_NAME=
CONFIG_STORAGE_HOST=192.169.142.127
CONFIG_USE_EPEL=y
CONFIG_REPO=
CONFIG_RH_USER=
CONFIG_SATELLITE_URL=
CONFIG_RH_PW=
CONFIG_RH_OPTIONAL=y
CONFIG_RH_PROXY=
CONFIG_RH_PROXY_PORT=
CONFIG_RH_PROXY_USER=
CONFIG_RH_PROXY_PW=
CONFIG_SATELLITE_USER=
CONFIG_SATELLITE_PW=
CONFIG_SATELLITE_AKEY=
CONFIG_SATELLITE_CACERT=
CONFIG_SATELLITE_PROFILE=
CONFIG_SATELLITE_FLAGS=
CONFIG_SATELLITE_PROXY=
CONFIG_SATELLITE_PROXY_USER=
CONFIG_SATELLITE_PROXY_PW=
CONFIG_AMQP_BACKEND=rabbitmq
CONFIG_AMQP_HOST=192.169.142.127
CONFIG_AMQP_ENABLE_SSL=n
CONFIG_AMQP_ENABLE_AUTH=n
CONFIG_AMQP_NSS_CERTDB_PW=PW_PLACEHOLDER
CONFIG_AMQP_SSL_PORT=5671
CONFIG_AMQP_SSL_CERT_FILE=/etc/pki/tls/certs/amqp_selfcert.pem
CONFIG_AMQP_SSL_KEY_FILE=/etc/pki/tls/private/amqp_selfkey.pem
CONFIG_AMQP_SSL_SELF_SIGNED=y
CONFIG_AMQP_AUTH_USER=amqp_user
CONFIG_AMQP_AUTH_PASSWORD=PW_PLACEHOLDER
CONFIG_MARIADB_HOST=192.169.142.127
CONFIG_MARIADB_USER=root
CONFIG_MARIADB_PW=7207ae344ed04957
CONFIG_KEYSTONE_DB_PW=abcae16b785245c3
CONFIG_KEYSTONE_REGION=RegionOne
CONFIG_KEYSTONE_ADMIN_TOKEN=3ad2de159f9649afb0c342ba57e637d9
CONFIG_KEYSTONE_ADMIN_PW=7049f834927e4468
CONFIG_KEYSTONE_DEMO_PW=bf737b785cfa4398
CONFIG_KEYSTONE_TOKEN_FORMAT=UUID
CONFIG_KEYSTONE_SERVICE_NAME=httpd
CONFIG_GLANCE_DB_PW=41264fc52ffd4fe8
CONFIG_GLANCE_KS_PW=f6a9398960534797
CONFIG_GLANCE_BACKEND=file
CONFIG_CINDER_DB_PW=5ac08c6d09ba4b69
CONFIG_CINDER_KS_PW=c8cb1ecb8c2b4f6f
CONFIG_CINDER_BACKEND=lvm
CONFIG_CINDER_VOLUMES_CREATE=y
CONFIG_CINDER_VOLUMES_SIZE=10G
CONFIG_CINDER_GLUSTER_MOUNTS=
CONFIG_CINDER_NFS_MOUNTS=
CONFIG_CINDER_NETAPP_LOGIN=
CONFIG_CINDER_NETAPP_PASSWORD=
CONFIG_CINDER_NETAPP_HOSTNAME=
CONFIG_CINDER_NETAPP_SERVER_PORT=80
CONFIG_CINDER_NETAPP_STORAGE_FAMILY=ontap_cluster
CONFIG_CINDER_NETAPP_TRANSPORT_TYPE=http
CONFIG_CINDER_NETAPP_STORAGE_PROTOCOL=nfs
CONFIG_CINDER_NETAPP_SIZE_MULTIPLIER=1.0
CONFIG_CINDER_NETAPP_EXPIRY_THRES_MINUTES=720
CONFIG_CINDER_NETAPP_THRES_AVL_SIZE_PERC_START=20
CONFIG_CINDER_NETAPP_THRES_AVL_SIZE_PERC_STOP=60
CONFIG_CINDER_NETAPP_NFS_SHARES_CONFIG=
CONFIG_CINDER_NETAPP_VOLUME_LIST=
CONFIG_CINDER_NETAPP_VFILER=
CONFIG_CINDER_NETAPP_VSERVER=
CONFIG_CINDER_NETAPP_CONTROLLER_IPS=
CONFIG_CINDER_NETAPP_SA_PASSWORD=
CONFIG_CINDER_NETAPP_WEBSERVICE_PATH=/devmgr/v2
CONFIG_CINDER_NETAPP_STORAGE_POOLS=
CONFIG_NOVA_DB_PW=1e1b5aeeeaf342a8
CONFIG_NOVA_KS_PW=d9583177a2444f06
CONFIG_NOVA_SCHED_CPU_ALLOC_RATIO=16.0
CONFIG_NOVA_SCHED_RAM_ALLOC_RATIO=1.5
CONFIG_NOVA_COMPUTE_MIGRATE_PROTOCOL=tcp
CONFIG_NOVA_COMPUTE_PRIVIF=eth1
CONFIG_NOVA_NETWORK_MANAGER=nova.network.manager.FlatDHCPManager
CONFIG_NOVA_NETWORK_PUBIF=eth0
CONFIG_NOVA_NETWORK_PRIVIF=eth1
CONFIG_NOVA_NETWORK_FIXEDRANGE=192.168.32.0/22
CONFIG_NOVA_NETWORK_FLOATRANGE=10.3.4.0/22
CONFIG_NOVA_NETWORK_DEFAULTFLOATINGPOOL=nova
CONFIG_NOVA_NETWORK_AUTOASSIGNFLOATINGIP=n
CONFIG_NOVA_NETWORK_VLAN_START=100
CONFIG_NOVA_NETWORK_NUMBER=1
CONFIG_NOVA_NETWORK_SIZE=255
CONFIG_NEUTRON_KS_PW=808e36e154bd4cee
CONFIG_NEUTRON_DB_PW=0e2b927a21b44737
CONFIG_NEUTRON_L3_EXT_BRIDGE=br-ex
CONFIG_NEUTRON_L2_PLUGIN=ml2
CONFIG_NEUTRON_METADATA_PW=a965cd23ed2f4502
CONFIG_LBAAS_INSTALL=n
CONFIG_NEUTRON_METERING_AGENT_INSTALL=n
CONFIG_NEUTRON_FWAAS=n
CONFIG_NEUTRON_ML2_TYPE_DRIVERS=vxlan
CONFIG_NEUTRON_ML2_TENANT_NETWORK_TYPES=vxlan
CONFIG_NEUTRON_ML2_MECHANISM_DRIVERS=openvswitch
CONFIG_NEUTRON_ML2_FLAT_NETWORKS=*
CONFIG_NEUTRON_ML2_VLAN_RANGES=
CONFIG_NEUTRON_ML2_TUNNEL_ID_RANGES=1001:2000
CONFIG_NEUTRON_ML2_VXLAN_GROUP=239.1.1.2
CONFIG_NEUTRON_ML2_VNI_RANGES=1001:2000
CONFIG_NEUTRON_L2_AGENT=openvswitch
CONFIG_NEUTRON_LB_TENANT_NETWORK_TYPE=local
CONFIG_NEUTRON_LB_VLAN_RANGES=
CONFIG_NEUTRON_LB_INTERFACE_MAPPINGS=
CONFIG_NEUTRON_OVS_TENANT_NETWORK_TYPE=vxlan
CONFIG_NEUTRON_OVS_VLAN_RANGES=
CONFIG_NEUTRON_OVS_BRIDGE_MAPPINGS=physnet1:br-ex
CONFIG_NEUTRON_OVS_BRIDGE_IFACES=
CONFIG_NEUTRON_OVS_TUNNEL_RANGES=1001:2000
# This is the VXLAN tunnel endpoint interface.
# It should be assigned an IP from the vteps network
# before running packstack (see the ifcfg-eth1 sketch right after this answer file)
CONFIG_NEUTRON_OVS_TUNNEL_IF=eth1
CONFIG_NEUTRON_OVS_VXLAN_UDP_PORT=4789
CONFIG_HORIZON_SSL=n
CONFIG_SSL_CERT=
CONFIG_SSL_KEY=
CONFIG_SSL_CACHAIN=
CONFIG_SWIFT_KS_PW=8f75bfd461234c30
CONFIG_SWIFT_STORAGES=
CONFIG_SWIFT_STORAGE_ZONES=1
CONFIG_SWIFT_STORAGE_REPLICAS=1
CONFIG_SWIFT_STORAGE_FSTYPE=ext4
CONFIG_SWIFT_HASH=a60aacbedde7429a
CONFIG_SWIFT_STORAGE_SIZE=2G
CONFIG_PROVISION_DEMO=y
CONFIG_PROVISION_TEMPEST=n
CONFIG_PROVISION_TEMPEST_USER=
CONFIG_PROVISION_TEMPEST_USER_PW=44faa4ebc3da4459
CONFIG_PROVISION_DEMO_FLOATRANGE=172.24.4.224/28
CONFIG_PROVISION_TEMPEST_REPO_URI=https://github.com/openstack/tempest.git
CONFIG_PROVISION_TEMPEST_REPO_REVISION=master
CONFIG_PROVISION_ALL_IN_ONE_OVS_BRIDGE=n
CONFIG_HEAT_DB_PW=PW_PLACEHOLDER
CONFIG_HEAT_AUTH_ENC_KEY=fc3fb7fee61e46b0
CONFIG_HEAT_KS_PW=PW_PLACEHOLDER
CONFIG_HEAT_CLOUDWATCH_INSTALL=n
CONFIG_HEAT_USING_TRUSTS=y
CONFIG_HEAT_CFN_INSTALL=n
CONFIG_HEAT_DOMAIN=heat
CONFIG_HEAT_DOMAIN_ADMIN=heat_admin
CONFIG_HEAT_DOMAIN_PASSWORD=PW_PLACEHOLDER
CONFIG_CEILOMETER_SECRET=19ae0e7430174349
CONFIG_CEILOMETER_KS_PW=337b08d4b3a44753
CONFIG_MONGODB_HOST=192.169.142.127
CONFIG_NAGIOS_PW=02f168ee8edd44e4
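As noted in the CONFIG_NEUTRON_OVS_TUNNEL_IF comment above, eth1 on the Network and Compute nodes should already carry an address from the vteps subnet before packstack runs. A minimal ifcfg-eth1 sketch (the addresses match the VTEPs seen later in the `ovs-vsctl show` output: 10.0.0.147 on the Network Node, 10.0.0.137 on the Compute Node; adjust per node):

# cat /etc/sysconfig/network-scripts/ifcfg-eth1
DEVICE=eth1
ONBOOT=yes
BOOTPROTO=static
IPADDR=10.0.0.147
NETMASK=255.255.255.0
NM_CONTROLLED=no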
**************************************
At this point run on Controller:-
**************************************
Keep SELINUX=enforcing ( RDO Liberty is supposed to handle this)
# yum -y  install centos-release-openstack-liberty
# yum -y  install openstack-packstack
# packstack --answer-file=./answer3Node.txt
**********************************************************************************
Upon packstack completion, create the following files on the Network Node,
designed to match the external network created by the installer
**********************************************************************************

[root@ip-192-169-142-147 network-scripts]# cat ifcfg-br-ex
DEVICE="br-ex"
BOOTPROTO="static"
IPADDR="172.24.4.232"
NETMASK="255.255.255.240"
DNS1="83.221.202.254"
BROADCAST="172.24.4.239"
GATEWAY="172.24.4.225"
NM_CONTROLLED="no"
TYPE="OVSIntPort"
OVS_BRIDGE=br-ex
DEVICETYPE="ovs"
DEFROUTE="yes"
IPV4_FAILURE_FATAL="yes"
IPV6INIT=no

[root@ip-192-169-142-147 network-scripts]# cat ifcfg-eth2
DEVICE="eth2"
# HWADDR=00:22:15:63:E4:E2
ONBOOT="yes"
TYPE="OVSPort"
DEVICETYPE="ovs"
OVS_BRIDGE=br-ex
NM_CONTROLLED=no
IPV6INIT=no

*************************************************
Next step to performed on Network Node :-
*************************************************

# chkconfig network on
# systemctl stop NetworkManager
# systemctl disable NetworkManager
# service network restart

The OVS port should be eth2 (the third Ethernet interface on the Network Node).
In a real deployment the Libvirt bridge virbr2 plays the role of your router to the external
network. The OVS bridge br-ex should have an IP belonging to the external network.
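Once the network service restart is done, a quick check that eth2 really became an OVS port of br-ex and that br-ex carries the external IP:

# ovs-vsctl list-ports br-ex
# ip addr show br-ex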

*******************
On Controller :-
*******************

[root@ip-192-169-142-127 ~(keystone_admin)]# netstat -lntp |  grep 35357
tcp6       0      0 :::35357                :::*                    LISTEN      7047/httpd

[root@ip-192-169-142-127 ~(keystone_admin)]# ps -ef | grep 7047
root      7047     1  0 11:22 ?        00:00:00 /usr/sbin/httpd -DFOREGROUND
keystone  7089  7047  0 11:22 ?        00:00:07 keystone-admin  -DFOREGROUND
keystone  7090  7047  0 11:22 ?        00:00:02 keystone-main   -DFOREGROUND
apache    7092  7047  0 11:22 ?        00:00:04 /usr/sbin/httpd -DFOREGROUND
apache    7093  7047  0 11:22 ?        00:00:04 /usr/sbin/httpd -DFOREGROUND
apache    7094  7047  0 11:22 ?        00:00:03 /usr/sbin/httpd -DFOREGROUND
apache    7095  7047  0 11:22 ?        00:00:00 /usr/sbin/httpd -DFOREGROUND
apache    7096  7047  0 11:22 ?        00:00:00 /usr/sbin/httpd -DFOREGROUND
apache    7097  7047  0 11:22 ?        00:00:00 /usr/sbin/httpd -DFOREGROUND
apache    7098  7047  0 11:22 ?        00:00:00 /usr/sbin/httpd -DFOREGROUND
apache    7099  7047  0 11:22 ?        00:00:00 /usr/sbin/httpd -DFOREGROUND
apache    7100  7047  0 11:22 ?        00:00:00 /usr/sbin/httpd -DFOREGROUND
apache    7101  7047  0 11:22 ?        00:00:00 /usr/sbin/httpd -DFOREGROUND
apache    7102  7047  0 11:22 ?        00:00:00 /usr/sbin/httpd -DFOREGROUND
root     28963 17739  0 12:51 pts/1    00:00:00 grep –color=auto 7047

********************
On Network Node
********************

[root@ip-192-169-142-147 ~(keystone_admin)]# neutron agent-list
+————————————–+——————–+—————————————-+——-+—————-+—————————+
| id                                   | agent_type         | host                                   | alive | admin_state_up | binary                    |
+————————————–+——————–+—————————————-+——-+—————-+—————————+
| 217fb0f5-8dd1-4361-aae7-cc9a7d18d6e4 | Open vSwitch agent | ip-192-169-142-147.ip.secureserver.net | :-)   | True           | neutron-openvswitch-agent |
| 5dabfc17-db64-470c-9f01-8d2297d155f3 | L3 agent           | ip-192-169-142-147.ip.secureserver.net | :-)   | True           | neutron-l3-agent          |
| 5e3c6e2f-3f6d-4ede-b058-bc1b317d4ee1 | Open vSwitch agent | ip-192-169-142-137.ip.secureserver.net | :-)   | True           | neutron-openvswitch-agent |
| f0f02931-e7e6-4b01-8b87-46224cb71e6d | DHCP agent         | ip-192-169-142-147.ip.secureserver.net | :-)   | True           | neutron-dhcp-agent        |
| f16a5d9d-55e6-47c3-b509-ca445d05d34d | Metadata agent     | ip-192-169-142-147.ip.secureserver.net | :-)   | True           | neutron-metadata-agent    |
+————————————–+——————–+—————————————-+——-+—————-+—————————+

[root@ip-192-169-142-147 ~(keystone_admin)]# ovs-vsctl show
9221d1c1-008a-464a-ac26-1e0340407714
Bridge br-tun
fail_mode: secure
Port br-tun
Interface br-tun
type: internal
Port “vxlan-0a000089”
Interface “vxlan-0a000089″
type: vxlan
options: {df_default=”true”, in_key=flow, local_ip=”10.0.0.147″, out_key=flow, remote_ip=”10.0.0.137″}
Port patch-int
Interface patch-int
type: patch
options: {peer=patch-tun}
Bridge br-ex
Port phy-br-ex
Interface phy-br-ex
type: patch
options: {peer=int-br-ex}
Port “eth2”
Interface “eth2”
Port “qg-1deeaf96-e8”
Interface “qg-1deeaf96-e8”
type: internal
Port br-ex
Interface br-ex
type: internal
Bridge br-int
fail_mode: secure
Port “qr-1909e3bb-fd”
tag: 2
Interface “qr-1909e3bb-fd”
type: internal
Port “tapfdf24cad-f8”
tag: 2
Interface “tapfdf24cad-f8”
type: internal
Port patch-tun
Interface patch-tun
type: patch
options: {peer=patch-int}
Port br-int
Interface br-int
type: internal
Port int-br-ex
Interface int-br-ex
type: patch
options: {peer=phy-br-ex}
ovs_version: “2.4.0”

[root@ip-192-169-142-147 ~(keystone_admin)]# dmesg | grep promisc
[    2.233302] device ovs-system entered promiscuous mode
[    2.273206] device br-int entered promiscuous mode
[    2.274981] device qr-838ad1f3-7d entered promiscuous mode
[    2.276333] device tap0f21eab4-db entered promiscuous mode
[    2.312740] device br-tun entered promiscuous mode
[    2.314509] device qg-2b712b60-d0 entered promiscuous mode
[    2.315921] device br-ex entered promiscuous mode
[    2.316022] device eth2 entered promiscuous mode
[   10.704329] device qr-838ad1f3-7d left promiscuous mode
[   10.729045] device tap0f21eab4-db left promiscuous mode
[   10.761844] device qg-2b712b60-d0 left promiscuous mode
[  224.746399] device eth2 left promiscuous mode
[  232.173791] device eth2 entered promiscuous mode
[  232.978909] device tap0f21eab4-db entered promiscuous mode
[  233.690854] device qr-838ad1f3-7d entered promiscuous mode
[  233.895213] device qg-2b712b60-d0 entered promiscuous mode
[ 1253.611501] device qr-838ad1f3-7d left promiscuous mode
[ 1254.017129] device qg-2b712b60-d0 left promiscuous mode
[ 1404.697825] device tapfdf24cad-f8 entered promiscuous mode
[ 1421.812107] device qr-1909e3bb-fd entered promiscuous mode
[ 1422.045593] device qg-1deeaf96-e8 entered promiscuous mode
[ 6111.042488] device tap0f21eab4-db left promiscuous mode

[root@ip-192-169-142-147 ~(keystone_admin)]# ip netns exec qrouter-dd26c4ed-f757-416d-a772-64b503ffc497 ip route
default via 172.24.4.225 dev qg-1deeaf96-e8
50.0.0.0/24 dev qr-1909e3bb-fd  proto kernel  scope link  src 50.0.0.1
172.24.4.224/28 dev qg-1deeaf96-e8  proto kernel  scope link  src 172.24.4.227 

[root@ip-192-169-142-147 ~(keystone_admin)]# ip netns exec qrouter-dd26c4ed-f757-416d-a772-64b503ffc497 ifconfig
lo: flags=73<UP,LOOPBACK,RUNNING>  mtu 65536
inet 127.0.0.1  netmask 255.0.0.0
inet6 ::1  prefixlen 128  scopeid 0x10<host>
loop  txqueuelen 0  (Local Loopback)
RX packets 0  bytes 0 (0.0 B)
RX errors 0  dropped 0  overruns 0  frame 0
TX packets 0  bytes 0 (0.0 B)
TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0

qg-1deeaf96-e8: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1500
inet 172.24.4.227  netmask 255.255.255.240  broadcast 172.24.4.239
inet6 fe80::f816:3eff:fe93:12de  prefixlen 64  scopeid 0x20<link>
ether fa:16:3e:93:12:de  txqueuelen 0  (Ethernet)
RX packets 864432  bytes 1185656986 (1.1 GiB)
RX errors 0  dropped 0  overruns 0  frame 0
TX packets 382639  bytes 29347929 (27.9 MiB)
TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0

qr-1909e3bb-fd: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1500
inet 50.0.0.1  netmask 255.255.255.0  broadcast 50.0.0.255
inet6 fe80::f816:3eff:feae:d1e0  prefixlen 64  scopeid 0x20<link>
ether fa:16:3e:ae:d1:e0  txqueuelen 0  (Ethernet)
RX packets 382969  bytes 29386380 (28.0 MiB)
RX errors 0  dropped 0  overruns 0  frame 0
TX packets 864601  bytes 1185686714 (1.1 GiB)
TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0

[root@ip-192-169-142-147 ~(keystone_admin)]# ip netns exec qdhcp-153edd99-9152-49ad-a445-7280aa9df187 ip route
default via 50.0.0.1 dev tapfdf24cad-f8
50.0.0.0/24 dev tapfdf24cad-f8  proto kernel  scope link  src 50.0.0.10 

[root@ip-192-169-142-147 ~(keystone_admin)]# ip netns exec qdhcp-153edd99-9152-49ad-a445-7280aa9df187 ifconfig
lo: flags=73<UP,LOOPBACK,RUNNING>  mtu 65536
inet 127.0.0.1  netmask 255.0.0.0
inet6 ::1  prefixlen 128  scopeid 0x10<host>
loop  txqueuelen 0  (Local Loopback)
RX packets 0  bytes 0 (0.0 B)
RX errors 0  dropped 0  overruns 0  frame 0
TX packets 0  bytes 0 (0.0 B)
TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0

tapfdf24cad-f8: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1500
inet 50.0.0.10  netmask 255.255.255.0  broadcast 50.0.0.255
inet6 fe80::f816:3eff:fe98:c66  prefixlen 64  scopeid 0x20<link>
ether fa:16:3e:98:0c:66  txqueuelen 0  (Ethernet)
RX packets 63  bytes 6445 (6.2 KiB)
RX errors 0  dropped 0  overruns 0  frame 0
TX packets 14  bytes 2508 (2.4 KiB)
TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0

[root@ip-192-169-142-147 ~(keystone_admin)]# ip netns exec qrouter-dd26c4ed-f757-416d-a772-64b503ffc497 ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN
link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
inet 127.0.0.1/8 scope host lo
valid_lft forever preferred_lft forever
inet6 ::1/128 scope host
valid_lft forever preferred_lft forever

16: qr-1909e3bb-fd: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UNKNOWN
link/ether fa:16:3e:ae:d1:e0 brd ff:ff:ff:ff:ff:ff
inet 50.0.0.1/24 brd 50.0.0.255 scope global qr-1909e3bb-fd
valid_lft forever preferred_lft forever
inet6 fe80::f816:3eff:feae:d1e0/64 scope link
valid_lft forever preferred_lft forever

17: qg-1deeaf96-e8: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UNKNOWN
link/ether fa:16:3e:93:12:de brd ff:ff:ff:ff:ff:ff
inet 172.24.4.227/28 brd 172.24.4.239 scope global qg-1deeaf96-e8
valid_lft forever preferred_lft forever
inet 172.24.4.229/32 brd 172.24.4.229 scope global qg-1deeaf96-e8
valid_lft forever preferred_lft forever
inet 172.24.4.230/32 brd 172.24.4.230 scope global qg-1deeaf96-e8
valid_lft forever preferred_lft forever
inet6 fe80::f816:3eff:fe93:12de/64 scope link
valid_lft forever preferred_lft forever



RDO Kilo DVR Deployment (Controller/Network)+Compute+Compute (ML2&OVS&VXLAN) on CentOS 7.1

September 30, 2015

Per http://specs.openstack.org/openstack/neutron-specs/specs/juno/neutron-ovs-dvr.html

1. Neutron DVR implements the fip-namespace on every Compute Node where the VMs are running. Thus VMs with FloatingIPs can forward the traffic to the External Network without routing it via Network Node. (North-South Routing).
2. Neutron DVR implements the L3 Routers across the Compute Nodes, so that tenants intra VM communication will occur with Network Node not involved. (East-West Routing).
3. Neutron Distributed Virtual Router provides the legacy SNAT behavior for the default SNAT for all private VMs. SNAT service is not distributed, it is centralized and the service node will host the service.
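For reference, the DVR behaviour described above boils down to a handful of Neutron options; a hedged sketch of the usual Kilo settings (the packstack answer file below does not set them, so they are typically applied by hand after installation):

/etc/neutron/neutron.conf (Controller):
router_distributed = True

/etc/neutron/l3_agent.ini:
agent_mode = dvr_snat     # on the Controller/Network node
agent_mode = dvr          # on each Compute node

/etc/neutron/plugins/ml2/ml2_conf.ini:
mechanism_drivers = openvswitch,l2population

[agent] section of the OVS agent configuration:
enable_distributed_routing = True
l2_population = True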

Setup configuration

– Controller node: Nova, Keystone, Cinder, Glance,

Neutron (using Open vSwitch plugin && VXLAN )

– (2x) Compute node: Nova (nova-compute),

Neutron (openvswitch-agent,l3-agent,metadata-agent )

Three CentOS 7.1 VMs (4 GB RAM, 4 VCPU, 2 VNICs) have been built for testing

on a Fedora 22 KVM Hypervisor. Two libvirt sub-nets were used: "openstackvms", emulating the External && Mgmt networks 192.169.142.0/24 with gateway virbr1 (192.169.142.1), and "vteps" 10.0.0.0/24, supporting the two VXLAN tunnels between the Controller and Compute Nodes.

# cat openstackvms.xml
<network>
<name>openstackvms</name>
<uuid>d0e9964a-f91a-40c0-b769-a609aee41bf2</uuid>
<forward mode='nat'>
<nat>
<port start='1024' end='65535'/>
</nat>
</forward>
<bridge name='virbr1' stp='on' delay='0' />
<mac address='52:54:00:60:f8:6d'/>
<ip address='192.169.142.1' netmask='255.255.255.0'>
<dhcp>
<range start='192.169.142.2' end='192.169.142.254' />
</dhcp>
</ip>
</network>

# cat vteps.xml

<network>
<name>vteps</name>
<uuid>d0e9965b-f92c-40c1-b749-b609aed42cf2</uuid>
<forward mode='nat'>
<nat>
<port start='1024' end='65535'/>
</nat>
</forward>
<bridge name='virbr2' stp='on' delay='0' />
<mac address='52:54:00:60:f8:6d'/>
<ip address='10.0.0.1' netmask='255.255.255.0'>
<dhcp>
<range start='10.0.0.1' end='10.0.0.254' />
</dhcp>

</ip>
</network>

# virsh net-define openstackvms.xml
# virsh net-start openstackvms
# virsh net-autostart openstackvms

The second libvirt sub-net may be defined and started the same way.

ip-192-169-142-127.ip.secureserver.net – Controller/Network Node
ip-192-169-142-137.ip.secureserver.net – Compute Node
ip-192-169-142-147.ip.secureserver.net – Compute Node

Answer File :-

[general]
CONFIG_SSH_KEY=/root/.ssh/id_rsa.pub
CONFIG_DEFAULT_PASSWORD=
CONFIG_MARIADB_INSTALL=y
CONFIG_GLANCE_INSTALL=y
CONFIG_CINDER_INSTALL=y
CONFIG_NOVA_INSTALL=y
CONFIG_NEUTRON_INSTALL=y
CONFIG_HORIZON_INSTALL=y
CONFIG_SWIFT_INSTALL=y
CONFIG_CEILOMETER_INSTALL=y
CONFIG_HEAT_INSTALL=n
CONFIG_CLIENT_INSTALL=y
CONFIG_NTP_SERVERS=
CONFIG_NAGIOS_INSTALL=y
EXCLUDE_SERVERS=
CONFIG_DEBUG_MODE=n
CONFIG_CONTROLLER_HOST=192.169.142.127
CONFIG_COMPUTE_HOSTS=192.169.142.137,192.169.142.147
CONFIG_NETWORK_HOSTS=192.169.142.127
CONFIG_VMWARE_BACKEND=n
CONFIG_UNSUPPORTED=n
CONFIG_VCENTER_HOST=
CONFIG_VCENTER_USER=
CONFIG_VCENTER_PASSWORD=
CONFIG_VCENTER_CLUSTER_NAME=
CONFIG_STORAGE_HOST=192.169.142.127
CONFIG_USE_EPEL=y
CONFIG_REPO=
CONFIG_RH_USER=
CONFIG_SATELLITE_URL=
CONFIG_RH_PW=
CONFIG_RH_OPTIONAL=y
CONFIG_RH_PROXY=
CONFIG_RH_PROXY_PORT=
CONFIG_RH_PROXY_USER=
CONFIG_RH_PROXY_PW=
CONFIG_SATELLITE_USER=
CONFIG_SATELLITE_PW=
CONFIG_SATELLITE_AKEY=
CONFIG_SATELLITE_CACERT=
CONFIG_SATELLITE_PROFILE=
CONFIG_SATELLITE_FLAGS=
CONFIG_SATELLITE_PROXY=
CONFIG_SATELLITE_PROXY_USER=
CONFIG_SATELLITE_PROXY_PW=
CONFIG_AMQP_BACKEND=rabbitmq
CONFIG_AMQP_HOST=192.169.142.127
CONFIG_AMQP_ENABLE_SSL=n
CONFIG_AMQP_ENABLE_AUTH=n
CONFIG_AMQP_NSS_CERTDB_PW=PW_PLACEHOLDER
CONFIG_AMQP_SSL_PORT=5671
CONFIG_AMQP_SSL_CERT_FILE=/etc/pki/tls/certs/amqp_selfcert.pem
CONFIG_AMQP_SSL_KEY_FILE=/etc/pki/tls/private/amqp_selfkey.pem
CONFIG_AMQP_SSL_SELF_SIGNED=y
CONFIG_AMQP_AUTH_USER=amqp_user
CONFIG_AMQP_AUTH_PASSWORD=PW_PLACEHOLDER
CONFIG_MARIADB_HOST=192.169.142.127
CONFIG_MARIADB_USER=root
CONFIG_MARIADB_PW=7207ae344ed04957
CONFIG_KEYSTONE_DB_PW=abcae16b785245c3
CONFIG_KEYSTONE_REGION=RegionOne
CONFIG_KEYSTONE_ADMIN_TOKEN=3ad2de159f9649afb0c342ba57e637d9
CONFIG_KEYSTONE_ADMIN_PW=7049f834927e4468
CONFIG_KEYSTONE_DEMO_PW=bf737b785cfa4398
CONFIG_KEYSTONE_TOKEN_FORMAT=UUID
CONFIG_KEYSTONE_SERVICE_NAME=httpd
CONFIG_GLANCE_DB_PW=41264fc52ffd4fe8
CONFIG_GLANCE_KS_PW=f6a9398960534797
CONFIG_GLANCE_BACKEND=file
CONFIG_CINDER_DB_PW=5ac08c6d09ba4b69
CONFIG_CINDER_KS_PW=c8cb1ecb8c2b4f6f
CONFIG_CINDER_BACKEND=lvm
CONFIG_CINDER_VOLUMES_CREATE=y
CONFIG_CINDER_VOLUMES_SIZE=5G
CONFIG_CINDER_GLUSTER_MOUNTS=
CONFIG_CINDER_NFS_MOUNTS=
CONFIG_CINDER_NETAPP_LOGIN=
CONFIG_CINDER_NETAPP_PASSWORD=
CONFIG_CINDER_NETAPP_HOSTNAME=
CONFIG_CINDER_NETAPP_SERVER_PORT=80
CONFIG_CINDER_NETAPP_STORAGE_FAMILY=ontap_cluster
CONFIG_CINDER_NETAPP_TRANSPORT_TYPE=http
CONFIG_CINDER_NETAPP_STORAGE_PROTOCOL=nfs
CONFIG_CINDER_NETAPP_SIZE_MULTIPLIER=1.0
CONFIG_CINDER_NETAPP_EXPIRY_THRES_MINUTES=720
CONFIG_CINDER_NETAPP_THRES_AVL_SIZE_PERC_START=20
CONFIG_CINDER_NETAPP_THRES_AVL_SIZE_PERC_STOP=60
CONFIG_CINDER_NETAPP_NFS_SHARES_CONFIG=
CONFIG_CINDER_NETAPP_VOLUME_LIST=
CONFIG_CINDER_NETAPP_VFILER=
CONFIG_CINDER_NETAPP_VSERVER=
CONFIG_CINDER_NETAPP_CONTROLLER_IPS=
CONFIG_CINDER_NETAPP_SA_PASSWORD=
CONFIG_CINDER_NETAPP_WEBSERVICE_PATH=/devmgr/v2
CONFIG_CINDER_NETAPP_STORAGE_POOLS=
CONFIG_NOVA_DB_PW=1e1b5aeeeaf342a8
CONFIG_NOVA_KS_PW=d9583177a2444f06
CONFIG_NOVA_SCHED_CPU_ALLOC_RATIO=16.0
CONFIG_NOVA_SCHED_RAM_ALLOC_RATIO=1.5
CONFIG_NOVA_COMPUTE_MIGRATE_PROTOCOL=tcp
CONFIG_NOVA_COMPUTE_PRIVIF=eth1
CONFIG_NOVA_NETWORK_MANAGER=nova.network.manager.FlatDHCPManager
CONFIG_NOVA_NETWORK_PUBIF=eth0
CONFIG_NOVA_NETWORK_PRIVIF=eth1
CONFIG_NOVA_NETWORK_FIXEDRANGE=192.168.32.0/22
CONFIG_NOVA_NETWORK_FLOATRANGE=10.3.4.0/22
CONFIG_NOVA_NETWORK_DEFAULTFLOATINGPOOL=nova
CONFIG_NOVA_NETWORK_AUTOASSIGNFLOATINGIP=n
CONFIG_NOVA_NETWORK_VLAN_START=100
CONFIG_NOVA_NETWORK_NUMBER=1
CONFIG_NOVA_NETWORK_SIZE=255
CONFIG_NEUTRON_KS_PW=808e36e154bd4cee
CONFIG_NEUTRON_DB_PW=0e2b927a21b44737
CONFIG_NEUTRON_L3_EXT_BRIDGE=br-ex
CONFIG_NEUTRON_L2_PLUGIN=ml2
CONFIG_NEUTRON_METADATA_PW=a965cd23ed2f4502
CONFIG_LBAAS_INSTALL=n
CONFIG_NEUTRON_METERING_AGENT_INSTALL=n
CONFIG_NEUTRON_FWAAS=n
CONFIG_NEUTRON_ML2_TYPE_DRIVERS=vxlan
CONFIG_NEUTRON_ML2_TENANT_NETWORK_TYPES=vxlan
CONFIG_NEUTRON_ML2_MECHANISM_DRIVERS=openvswitch
CONFIG_NEUTRON_ML2_FLAT_NETWORKS=*
CONFIG_NEUTRON_ML2_VLAN_RANGES=
CONFIG_NEUTRON_ML2_TUNNEL_ID_RANGES=1001:2000
CONFIG_NEUTRON_ML2_VXLAN_GROUP=239.1.1.2
CONFIG_NEUTRON_ML2_VNI_RANGES=1001:2000
CONFIG_NEUTRON_L2_AGENT=openvswitch
CONFIG_NEUTRON_LB_TENANT_NETWORK_TYPE=local
CONFIG_NEUTRON_LB_VLAN_RANGES=
CONFIG_NEUTRON_LB_INTERFACE_MAPPINGS=
CONFIG_NEUTRON_OVS_TENANT_NETWORK_TYPE=vxlan
CONFIG_NEUTRON_OVS_VLAN_RANGES=
CONFIG_NEUTRON_OVS_BRIDGE_MAPPINGS=physnet1:br-ex
CONFIG_NEUTRON_OVS_BRIDGE_IFACES=
CONFIG_NEUTRON_OVS_TUNNEL_RANGES=1001:2000
CONFIG_NEUTRON_OVS_TUNNEL_IF=eth1
CONFIG_NEUTRON_OVS_VXLAN_UDP_PORT=4789
CONFIG_HORIZON_SSL=n
CONFIG_SSL_CERT=
CONFIG_SSL_KEY=
CONFIG_SSL_CACHAIN=
CONFIG_SWIFT_KS_PW=8f75bfd461234c30
CONFIG_SWIFT_STORAGES=
CONFIG_SWIFT_STORAGE_ZONES=1
CONFIG_SWIFT_STORAGE_REPLICAS=1
CONFIG_SWIFT_STORAGE_FSTYPE=ext4
CONFIG_SWIFT_HASH=a60aacbedde7429a
CONFIG_SWIFT_STORAGE_SIZE=2G
CONFIG_PROVISION_DEMO=y
CONFIG_PROVISION_TEMPEST=n
CONFIG_PROVISION_TEMPEST_USER=
CONFIG_PROVISION_TEMPEST_USER_PW=44faa4ebc3da4459
CONFIG_PROVISION_DEMO_FLOATRANGE=172.24.4.224/28
CONFIG_PROVISION_TEMPEST_REPO_URI=https://github.com/openstack/tempest.git
CONFIG_PROVISION_TEMPEST_REPO_REVISION=master
CONFIG_PROVISION_ALL_IN_ONE_OVS_BRIDGE=n
CONFIG_HEAT_DB_PW=PW_PLACEHOLDER
CONFIG_HEAT_AUTH_ENC_KEY=fc3fb7fee61e46b0
CONFIG_HEAT_KS_PW=PW_PLACEHOLDER
CONFIG_HEAT_CLOUDWATCH_INSTALL=n
CONFIG_HEAT_USING_TRUSTS=y
CONFIG_HEAT_CFN_INSTALL=n
CONFIG_HEAT_DOMAIN=heat
CONFIG_HEAT_DOMAIN_ADMIN=heat_admin
CONFIG_HEAT_DOMAIN_PASSWORD=PW_PLACEHOLDER
CONFIG_CEILOMETER_SECRET=19ae0e7430174349
CONFIG_CEILOMETER_KS_PW=337b08d4b3a44753
CONFIG_MONGODB_HOST=192.169.142.127
CONFIG_NAGIOS_PW=02f168ee8edd44e4
********************************************************
On Controller (X=2) and Computes X=(3,4) update :-
********************************************************

# cat ifcfg-br-ex
DEVICE="br-ex"
BOOTPROTO="static"
IPADDR="192.169.142.1(X)7"
NETMASK="255.255.255.0"
DNS1="83.221.202.254"
BROADCAST="192.169.142.255"
GATEWAY="192.169.142.1"
NM_CONTROLLED="no"
DEFROUTE="yes"
IPV4_FAILURE_FATAL="yes"
IPV6INIT=no
ONBOOT="yes"
TYPE="OVSIntPort"
OVS_BRIDGE=br-ex
DEVICETYPE="ovs"

# cat ifcfg-eth0
DEVICE="eth0"
ONBOOT="yes"
TYPE="OVSPort"
DEVICETYPE="ovs"
OVS_BRIDGE=br-ex
NM_CONTROLLED=no
IPV6INIT=no
***********
Then
***********

# chkconfig network on
# systemctl stop NetworkManager
# systemctl disable NetworkManager
# service network restart

Reboot
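
After the reboot it is worth verifying that br-ex picked up the static address and that eth0 actually became an OVS port of br-ex; a minimal check:

# ip addr show br-ex | grep inet
# ovs-vsctl list-ports br-ex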

*****************************************
On Controller update neutron.conf
*****************************************

router_distributed = True
dvr_base_mac = fa:16:3f:00:00:00
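
As a non-interactive alternative to editing the file by hand, the same two settings can be applied with openstack-config (a sketch, assuming the openstack-utils package is installed; neutron-server must be restarted afterwards):

# openstack-config --set /etc/neutron/neutron.conf DEFAULT router_distributed True
# openstack-config --set /etc/neutron/neutron.conf DEFAULT dvr_base_mac fa:16:3f:00:00:00
# systemctl restart neutron-server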

*****************
On Controller
*****************

[root@ip-192-169-142-127 neutron(keystone_admin)]# cat l3_agent.ini | grep -v ^#| grep -v ^$

[DEFAULT]
debug = False
interface_driver =neutron.agent.linux.interface.OVSInterfaceDriver
use_namespaces = True
handle_internal_only_routers = True
external_network_bridge = br-ex
metadata_port = 9697
send_arp_for_ha = 3
periodic_interval = 40
periodic_fuzzy_delay = 5
enable_metadata_proxy = True
router_delete_namespaces = False
agent_mode = dvr_snat
allow_automatic_l3agent_failover=False

*********************************
On each Compute Node
*********************************

[root@ip-192-169-142-147 neutron]# cat l3_agent.ini | grep -v ^#| grep -v ^$

[DEFAULT]
interface_driver = neutron.agent.linux.interface.OVSInterfaceDriver
agent_mode = dvr

*******************
On each node
*******************

[root@ip-192-169-142-147 neutron]# cat metadata_agent.ini | grep -v ^#| grep -v ^$

[DEFAULT]
debug = False
auth_url = http://192.169.142.127:35357/v2.0
auth_region = RegionOne
auth_insecure = False
admin_tenant_name = services
admin_user = neutron
admin_password = 808e36e154bd4cee
nova_metadata_ip = 192.169.142.127
nova_metadata_port = 8775
metadata_proxy_shared_secret =a965cd23ed2f4502
metadata_workers =4
metadata_backlog = 4096
cache_url = memory://?default_ttl=5

[root@ip-192-169-142-147 neutron]# cat ml2_conf.ini | grep -v ^#| grep -v ^$

[ml2]
type_drivers = vxlan
tenant_network_types = vxlan
mechanism_drivers =openvswitch,l2population
[ml2_type_flat]
[ml2_type_vlan]
[ml2_type_gre]
[ml2_type_vxlan]
vni_ranges =1001:2000
vxlan_group =239.1.1.2
[securitygroup]
enable_security_group = True
# On Compute nodes
[agent]
l2_population = True

The last entry for [agent] is important for DVR configuration on Kilo ( vs Juno )
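
Note that after updating ml2_conf.ini (and ovs_neutron_plugin.ini below) on a Compute node, the OVS agent has to be restarted for the l2_population and DVR related settings to take effect:

# systemctl restart neutron-openvswitch-agent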

[root@ip-192-169-142-147 openvswitch]# cat ovs_neutron_plugin.ini | grep -v ^#| grep -v ^$

[ovs]
enable_tunneling = True
integration_bridge = br-int
tunnel_bridge = br-tun
local_ip =10.0.0.147
bridge_mappings =physnet1:br-ex
[agent]
polling_interval = 2
tunnel_types =vxlan
vxlan_udp_port =4789
l2population = True
enable_distributed_routing = True
arp_responder = True
[securitygroup]
firewall_driver = neutron.agent.linux.iptables_firewall.OVSHybridIptablesFirewallDriver

*********************************************************************
On each Compute node neutron-l3-agent and neutron-metadata-agent are
supposed to be started.
*********************************************************************

# yum install openstack-neutron-ml2
# systemctl start neutron-l3-agent
# systemctl start neutron-metadata-agent
# systemctl enable neutron-l3-agent
# systemctl enable neutron-metadata-agent
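
A quick check from the Controller (with admin credentials sourced) confirms that the additional L3 and metadata agents on the Compute nodes registered; the IDs will of course differ per deployment:

# neutron agent-list | grep -E "L3 agent|Metadata agent"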

 

DVR01@Kilo

[root@ip-192-169-142-127 ~(keystone_admin)]# neutron l3-agent-list-hosting-router RouterDemo
+————————————–+—————————————-+—————-+——-+———-+
| id | host | admin_state_up | alive | ha_state |
+————————————–+—————————————-+—————-+——-+———-+
| 50388b16-4461-441c-83a4-f7e7084ec415 | ip-192-169-142-127.ip.secureserver.net | True |:-) | |
| 7e89d4a7-7ebf-4a7a-9589-f6694e3637d4 | ip-192-169-142-137.ip.secureserver.net | True |:-) | |
| d18cdf01-6814-489d-bef2-5207c1aac0eb | ip-192-169-142-147.ip.secureserver.net | True |:-) | |
+————————————–+—————————————-+—————-+——-+———-+
[root@ip-192-169-142-127 ~(keystone_admin)]# neutron agent-show 7e89d4a7-7ebf-4a7a-9589-f6694e3637d4
+———————+——————————————————————————-+
| Field | Value |
+———————+——————————————————————————-+
| admin_state_up | True |
| agent_type | L3 agent |
| alive | True |
| binary | neutron-l3-agent |
| configurations | { |
| | “router_id”: “”, |
| | “agent_mode”: “dvr”, |
| | “gateway_external_network_id”: “”, |
| | “handle_internal_only_routers”: true, |
| | “use_namespaces”: true, |
| | “routers”: 1, |
| | “interfaces”: 1, |
| | “floating_ips”: 1, |
| | “interface_driver”: “neutron.agent.linux.interface.OVSInterfaceDriver”, |
| | “external_network_bridge”: “br-ex”, |
| | “ex_gw_ports”: 1 |
| | } |
| created_at | 2015-09-29 07:40:37 |
| description | |
| heartbeat_timestamp | 2015-09-30 09:58:24 |
| host | ip-192-169-142-137.ip.secureserver.net |
| id | 7e89d4a7-7ebf-4a7a-9589-f6694e3637d4 |
| started_at | 2015-09-30 08:08:53 |
| topic | l3_agent |
+———————+————————————————————————–

DVR02@Kilo

Screenshot from 2015-09-30 13-41-49                                          Screenshot from 2015-09-30 13-43-54

 

 

 


CPU Pinning and NUMA Topology on RDO Kilo on Fedora Server 22

August 1, 2015
Posting below follows up http://redhatstackblog.redhat.com/2015/05/05/cpu-pinning-and-numa-topology-awareness-in-openstack-compute/
on RDO Kilo installed on Fedora 22. After the upgrade to the upstream version of openstack-puppet-modules-2015.1.9, the procedure of installing RDO Kilo on F22 changed significantly. Details follow below :-
*****************************************************************************************
RDO Kilo set up on Fedora ( openstack-puppet-modules-2015.1.9-4.fc23.noarch)
*****************************************************************************************
# dnf install -y https://rdoproject.org/repos/rdo-release.rpm
# dnf install -y openstack-packstack
Generate the answer file and update it :-
# packstack --gen-answer-file answer-file-aio.txt
and set CONFIG_KEYSTONE_SERVICE_NAME=httpd
****************************************************************************
I also commented out the second line in /etc/httpd/conf.d/mod_dnssd.conf
****************************************************************************
You might be hit by bug https://bugzilla.redhat.com/show_bug.cgi?id=1249482
As a pre-install step, apply the patch https://review.openstack.org/#/c/209032/
to fix neutron_api.pp. The puppet templates are located in
/usr/lib/python2.7/site-packages/packstack/puppet/templates.
You might also be hit by https://bugzilla.redhat.com/show_bug.cgi?id=1234042
The workaround is in comments 6 and 11.
****************
Then run :-
****************

# packstack --answer-file=./answer-file-aio.txt

The final target is to reproduce the mentioned article on an i7 4790 Haswell CPU box and launch a nova instance with CPU pinning.

[root@fedora22server ~(keystone_admin)]# uname -a
Linux fedora22server.localdomain 4.1.3-200.fc22.x86_64 #1 SMP Wed Jul 22 19:51:58 UTC 2015 x86_64 x86_64 x86_64 GNU/Linux

[root@fedora22server ~(keystone_admin)]# rpm -qa \*qemu\*
qemu-system-x86-2.3.0-6.fc22.x86_64
qemu-img-2.3.0-6.fc22.x86_64
qemu-guest-agent-2.3.0-6.fc22.x86_64
qemu-kvm-2.3.0-6.fc22.x86_64
ipxe-roms-qemu-20150407-1.gitdc795b9f.fc22.noarch
qemu-common-2.3.0-6.fc22.x86_64
libvirt-daemon-driver-qemu-1.2.13.1-2.fc22.x86_64

[root@fedora22server ~(keystone_admin)]# numactl --hardware
available: 1 nodes (0)
node 0 cpus: 0 1 2 3 4 5 6 7
node 0 size: 15991 MB
node 0 free: 4399 MB
node distances:
node 0
0: 10

[root@fedora22server ~(keystone_admin)]# virsh capabilities

<capabilities>
<host>
<uuid>00fd5d2c-dad7-dd11-ad7e-7824af431b53</uuid>
<cpu>
<arch>x86_64</arch>
<model>Haswell-noTSX</model>
<vendor>Intel</vendor>
<topology sockets=’1′ cores=’4′ threads=’2’/>
<feature name=’invtsc’/>
<feature name=’abm’/>
<feature name=’pdpe1gb’/>
<feature name=’rdrand’/>
<feature name=’f16c’/>
<feature name=’osxsave’/>
<feature name=’pdcm’/>
<feature name=’xtpr’/>
<feature name=’tm2’/>
<feature name=’est’/>
<feature name=’smx’/>
<feature name=’vmx’/>
<feature name=’ds_cpl’/>
<feature name=’monitor’/>
<feature name=’dtes64’/>
<feature name=’pbe’/>
<feature name=’tm’/>
<feature name=’ht’/>
<feature name=’ss’/>
<feature name=’acpi’/>
<feature name=’ds’/>
<feature name=’vme’/>
<pages unit=’KiB’ size=’4’/>
<pages unit=’KiB’ size=’2048’/>
</cpu>
<power_management>
<suspend_mem/>
<suspend_disk/>
<suspend_hybrid/>
</power_management>
<migration_features>
<live/>
<uri_transports>
<uri_transport>tcp</uri_transport>
<uri_transport>rdma</uri_transport>
</uri_transports>
</migration_features>
<topology>
<cells num=’1′>
<cell id=’0′>
<memory unit=’KiB’>16374824</memory>
<pages unit=’KiB’ size=’4′>4093706</pages>
<pages unit=’KiB’ size=’2048′>0</pages>
<distances>
<sibling id=’0′ value=’10’/>
</distances>
<cpus num=’8′>
<cpu id=’0′ socket_id=’0′ core_id=’0′ siblings=’0,4’/>
<cpu id=’1′ socket_id=’0′ core_id=’1′ siblings=’1,5’/>
<cpu id=’2′ socket_id=’0′ core_id=’2′ siblings=’2,6’/>
<cpu id=’3′ socket_id=’0′ core_id=’3′ siblings=’3,7’/>
<cpu id=’4′ socket_id=’0′ core_id=’0′ siblings=’0,4’/>
<cpu id=’5′ socket_id=’0′ core_id=’1′ siblings=’1,5’/>
<cpu id=’6′ socket_id=’0′ core_id=’2′ siblings=’2,6’/>
<cpu id=’7′ socket_id=’0′ core_id=’3′ siblings=’3,7’/>
</cpus>
</cell>
</cells>
</topology>

On each Compute node where pinning of virtual machines is to be permitted, open the /etc/nova/nova.conf file and make the following modifications:

Set the vcpu_pin_set value to a list or range of logical CPU cores to reserve for virtual machine processes. OpenStack Compute will ensure guest virtual machine instances are pinned to these virtual CPU cores.
vcpu_pin_set=2,3,6,7

Set reserved_host_memory_mb to reserve RAM for host processes. For the purposes of testing, the default of 512 MB was used:
reserved_host_memory_mb=512

# systemctl restart openstack-nova-compute.service

************************************
SCHEDULER CONFIGURATION
************************************

Update /etc/nova/nova.conf

scheduler_default_filters=RetryFilter,AvailabilityZoneFilter,RamFilter,
ComputeFilter,ComputeCapabilitiesFilter,ImagePropertiesFilter,CoreFilter,
NUMATopologyFilter,AggregateInstanceExtraSpecsFilter

# systemctl restart openstack-nova-scheduler.service

At this point, when creating a guest, you should see changes appear in the guest XML, pinning the guest vCPU(s) to the cores listed in vcpu_pin_set:

<vcpu placement=’static’ cpuset=’2-3,6-7′>1</vcpu>

Append the following to the kernel (vmlinuz) command line in the GRUB2 configuration:
isolcpus=2,3,6,7
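
On Fedora this can be done with grubby instead of editing the GRUB2 configuration by hand; a sketch (a reboot is still required for the isolation to take effect):

# grubby --update-kernel=ALL --args="isolcpus=2,3,6,7"
# grubby --info=ALL | grep isolcpus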

***************
REBOOT
***************
[root@fedora22server ~(keystone_admin)]# nova aggregate-create performance

+----+-------------+-------------------+-------+----------+
| Id | Name        | Availability Zone | Hosts | Metadata |
+----+-------------+-------------------+-------+----------+
| 1  | performance | -                 |       |          |
+----+-------------+-------------------+-------+----------+

[root@fedora22server ~(keystone_admin)]# nova aggregate-set-metadata 1 pinned=true
Metadata has been successfully updated for aggregate 1.
+----+-------------+-------------------+-------+---------------+
| Id | Name        | Availability Zone | Hosts | Metadata      |
+----+-------------+-------------------+-------+---------------+
| 1  | performance | -                 |       | 'pinned=true' |
+----+-------------+-------------------+-------+---------------+

[root@fedora22server ~(keystone_admin)]# nova flavor-create m1.small.performance 6 4096 20 4
+----+----------------------+-----------+------+-----------+------+-------+-------------+-----------+
| ID | Name                 | Memory_MB | Disk | Ephemeral | Swap | VCPUs | RXTX_Factor | Is_Public |
+----+----------------------+-----------+------+-----------+------+-------+-------------+-----------+
| 6  | m1.small.performance | 4096      | 20   | 0         |      | 4     | 1.0         | True      |
+----+----------------------+-----------+------+-----------+------+-------+-------------+-----------+

[root@fedora22server ~(keystone_admin)]# nova flavor-key 6 set hw:cpu_policy=dedicated
[root@fedora22server ~(keystone_admin)]# nova flavor-key 6 set aggregate_instance_extra_specs:pinned=true
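
To confirm that both extra specs landed on the new flavor, nova flavor-show can be used; the extra_specs field should list hw:cpu_policy and aggregate_instance_extra_specs:pinned as set above:

# nova flavor-show m1.small.performance | grep extra_specs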
[root@fedora22server ~(keystone_admin)]# hostname
fedora22server.localdomain

[root@fedora22server ~(keystone_admin)]# nova aggregate-add-host 1 fedora22server.localdomain
Host fedora22server.localdomain has been successfully added for aggregate 1
+----+-------------+-------------------+------------------------------+---------------+
| Id | Name        | Availability Zone | Hosts                        | Metadata      |
+----+-------------+-------------------+------------------------------+---------------+
| 1  | performance | -                 | 'fedora22server.localdomain' | 'pinned=true' |
+----+-------------+-------------------+------------------------------+---------------+

[root@fedora22server ~(keystone_admin)]# . keystonerc_demo
[root@fedora22server ~(keystone_demo)]# glance image-list
+————————————–+———————————+————-+——————+————-+——–+
| ID | Name | Disk Format | Container Format | Size | Status |
+————————————–+———————————+————-+——————+————-+——–+
| bf6f5272-ae26-49ae-b0f9-3c4fcba350f6 | CentOS71Image | qcow2 | bare | 1004994560 | active |
| 05ac955e-3503-4bcf-8413-6a1b3c98aefa | cirros | qcow2 | bare | 13200896 | active |
| 7b2085b8-4fe7-4d32-a5c9-5eadaf8bfc52 | VF22Image | qcow2 | bare | 228599296 | active |
| c695e7fa-a69f-4220-abd8-2269b75af827 | Windows Server 2012 R2 Std Eval | qcow2 | bare | 17182752768 | active |
+————————————–+———————————+————-+——————+————-+——–+

[root@fedora22server ~(keystone_demo)]#neutron net-list

+————————————–+———-+—————————————————–+
| id | name | subnets |
+————————————–+———-+—————————————————–+
| 0daa3a02-c598-4c46-b1ac-368da5542927 | public | 8303b2f3-2de2-44c2-bd5e-fc0966daec53 192.168.1.0/24 |
| c85a4215-1558-4a95-886d-a2f75500e052 | demo_net | 0cab6cbc-dd80-42c6-8512-74d7b2cbf730 50.0.0.0/24 |
+————————————–+———-+—————————————————–+

*************************************************************************
At this point attempt to launch F22 Cloud instance with created flavor
m1.small.performance
*************************************************************************

[root@fedora22server ~(keystone_demo)]# nova boot --image 7b2085b8-4fe7-4d32-a5c9-5eadaf8bfc52 --key-name oskeydev --flavor m1.small.performance --nic net-id=c85a4215-1558-4a95-886d-a2f75500e052 vf22-instance

+————————————–+————————————————–+
| Property | Value |
+————————————–+————————————————–+
| OS-DCF:diskConfig | MANUAL |
| OS-EXT-AZ:availability_zone | nova |
| OS-EXT-STS:power_state | 0 |
| OS-EXT-STS:task_state | scheduling |
| OS-EXT-STS:vm_state | building |
| OS-SRV-USG:launched_at | – |
| OS-SRV-USG:terminated_at | – |
| accessIPv4 | |
| accessIPv6 | |
| adminPass | XsGr87ZLGX8P |
| config_drive | |
| created | 2015-07-31T08:03:49Z |
| flavor | m1.small.performance (6) |
| hostId | |
| id | 4b99f3cf-3126-48f3-9e00-94787f040e43 |
| image | VF22Image (7b2085b8-4fe7-4d32-a5c9-5eadaf8bfc52) |
| key_name | oskeydev |
| metadata | {} |
| name | vf22-instance |
| os-extended-volumes:volumes_attached | [] |
| progress | 0 |
| security_groups | default |
| status | BUILD |
| tenant_id | 14f736e6952644b584b2006353ca51be |
| updated | 2015-07-31T08:03:50Z |
| user_id | 4ece2385b17a4490b6fc5a01ff53350c |
+————————————–+————————————————–+

[root@fedora22server ~(keystone_demo)]#nova list

+————————————–+—————+———+————+————-+———————————–+
| ID | Name | Status | Task State | Power State | Networks |
+————————————–+—————+———+————+————-+———————————–+
| 93906a61-ec0b-481d-b964-2bb99d095646 | CentOS71RLX | SHUTOFF | – | Shutdown | demo_net=50.0.0.21, 192.168.1.159 |
| ac7e9be5-d2dc-4ec0-b0a1-4096b552e578 | VF22Devpin | ACTIVE | – | Running | demo_net=50.0.0.22 |
| b93c9526-ded5-4b7a-ae3a-106b34317744 | VF22Devs | SHUTOFF | – | Shutdown | demo_net=50.0.0.19, 192.168.1.157 |
| bef20a1e-3faa-4726-a301-73ca49666fa6 | WinSrv2012 | SHUTOFF | – | Shutdown | demo_net=50.0.0.16 |
| 4b99f3cf-3126-48f3-9e00-94787f040e43 | vf22-instance | ACTIVE | – | Running | demo_net=50.0.0.23, 192.168.1.160 |
+————————————–+—————+———+————+————-+———————————–+

[root@fedora22server ~(keystone_demo)]#virsh list

 Id    Name                 State
----------------------------------------------------
 2     instance-0000000c    running
 3     instance-0000000d    running
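
Besides dumping the full domain XML, the effective pinning of a particular instance can be checked directly with virsh; for example, for the second instance listed above:

# virsh vcpupin instance-0000000d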

Please see http://redhatstackblog.redhat.com/2015/05/05/cpu-pinning-and-numa-topology-awareness-in-openstack-compute/
for a detailed explanation of the highlighted blocks, keeping in mind that pinning here is done to logical CPU cores (not physical ones, since this is a 4-core CPU with HT enabled). Multiple NUMA cells are also absent, due to the limitations of the i7 47XX Haswell CPU architecture.

[root@fedora22server ~(keystone_demo)]#virsh dumpxml instance-0000000d > vf22-instance.xml
<domain type=’kvm’ id=’3′>
<name>instance-0000000d</name>
<uuid>4b99f3cf-3126-48f3-9e00-94787f040e43</uuid>
<metadata>
<nova:instance xmlns:nova=”http://openstack.org/xmlns/libvirt/nova/1.0″&gt;
<nova:package version=”2015.1.0-3.fc23″/>
<nova:name>vf22-instance</nova:name>
<nova:creationTime>2015-07-31 08:03:54</nova:creationTime>
<nova:flavor name=”m1.small.performance”>
<nova:memory>4096</nova:memory>
<nova:disk>20</nova:disk>
<nova:swap>0</nova:swap>
<nova:ephemeral>0</nova:ephemeral>
<nova:vcpus>4</nova:vcpus>
</nova:flavor>
<nova:owner>
<nova:user uuid=”4ece2385b17a4490b6fc5a01ff53350c”>demo</nova:user>
<nova:project uuid=”14f736e6952644b584b2006353ca51be”>demo</nova:project>
</nova:owner>
<nova:root type=”image” uuid=”7b2085b8-4fe7-4d32-a5c9-5eadaf8bfc52″/>
</nova:instance>
</metadata>
<memory unit=’KiB’>4194304</memory>
<currentMemory unit=’KiB’>4194304</currentMemory>
<vcpu placement=’static’>4</vcpu>
<cputune>
<shares>4096</shares>
<vcpupin vcpu=’0′ cpuset=’2’/>
<vcpupin vcpu=’1′ cpuset=’6’/>
<vcpupin vcpu=’2′ cpuset=’3’/>
<vcpupin vcpu=’3′ cpuset=’7’/>
<emulatorpin cpuset=’2-3,6-7’/>
</cputune>
<numatune>
<memory mode=’strict’ nodeset=’0’/>
<memnode cellid=’0′ mode=’strict’ nodeset=’0’/>
</numatune>
<resource>
<partition>/machine</partition>
</resource>
<sysinfo type=’smbios’>
<system>
<entry name=’manufacturer’>Fedora Project</entry>
<entry name=’product’>OpenStack Nova</entry>
<entry name=’version’>2015.1.0-3.fc23</entry>
<entry name=’serial’>f1b336b1-6abf-4180-865a-b6be5670352e</entry>
<entry name=’uuid’>4b99f3cf-3126-48f3-9e00-94787f040e43</entry>
</system>
</sysinfo>
<os>
<type arch=’x86_64′ machine=’pc-i440fx-2.3′>hvm</type>
<boot dev=’hd’/>
<smbios mode=’sysinfo’/>
</os>
<features>
<acpi/>
<apic/>
</features>
<cpu mode=’host-model’>
<model fallback=’allow’/>
<topology sockets=’2′ cores=’1′ threads=’2’/>
<numa>
<cell id=’0′ cpus=’0-3′ memory=’4194304′ unit=’KiB’/>
</numa>
</cpu>
<clock offset=’utc’>
<timer name=’pit’ tickpolicy=’delay’/>
<timer name=’rtc’ tickpolicy=’catchup’/>
<timer name=’hpet’ present=’no’/>
</clock>
<on_poweroff>destroy</on_poweroff>
<on_reboot>restart</on_reboot>
<on_crash>destroy</on_crash>
<devices>
<emulator>/usr/bin/qemu-kvm</emulator>
<disk type=’file’ device=’disk’>
<driver name=’qemu’ type=’qcow2′ cache=’none’/>
<source file=’/var/lib/nova/instances/4b99f3cf-3126-48f3-9e00-94787f040e43/disk’/>
<backingStore type=’file’ index=’1′>
<format type=’raw’/>
<source file=’/var/lib/nova/instances/_base/6c60a5ed1b3037bbdb2bed198dac944f4c0d09cb’/>
<backingStore/>
</backingStore>
<target dev=’vda’ bus=’virtio’/>
<alias name=’virtio-disk0’/>
<address type=’pci’ domain=’0x0000′ bus=’0x00′ slot=’0x06′ function=’0x0’/>
</disk>
<controller type=’usb’ index=’0′>
<alias name=’usb0’/>
<address type=’pci’ domain=’0x0000′ bus=’0x00′ slot=’0x01′ function=’0x2’/>
</controller>
<controller type=’pci’ index=’0′ model=’pci-root’>
<alias name=’pci.0’/>
</controller>
<controller type=’virtio-serial’ index=’0′>
<alias name=’virtio-serial0’/>
<address type=’pci’ domain=’0x0000′ bus=’0x00′ slot=’0x05′ function=’0x0’/>
</controller>
<interface type=’bridge’>
<mac address=’fa:16:3e:4f:25:03’/>
<source bridge=’qbr567b21fe-52’/>
<target dev=’tap567b21fe-52’/>
<model type=’virtio’/>
<alias name=’net0’/>
<address type=’pci’ domain=’0x0000′ bus=’0x00′ slot=’0x03′ function=’0x0’/>
</interface>
<serial type=’file’>
<source path=’/var/lib/nova/instances/4b99f3cf-3126-48f3-9e00-94787f040e43/console.log’/>
<target port=’0’/>
<alias name=’serial0’/>
</serial>
<serial type=’pty’>
<source path=’/dev/pts/2’/>
<target port=’1’/>
<alias name=’serial1’/>
</serial>
<console type=’file’>
<source path=’/var/lib/nova/instances/4b99f3cf-3126-48f3-9e00-94787f040e43/console.log’/>
<target type=’serial’ port=’0’/>
<alias name=’serial0’/>
</console>
<channel type=’spicevmc’>
<target type=’virtio’ name=’com.redhat.spice.0′ state=’disconnected’/>
<alias name=’channel0’/>
<address type=’virtio-serial’ controller=’0′ bus=’0′ port=’1’/>
</channel>
<input type=’mouse’ bus=’ps2’/>
<input type=’keyboard’ bus=’ps2’/>
<graphics type=’spice’ port=’5901′ autoport=’yes’ listen=’0.0.0.0′ keymap=’en-us’>
<listen type=’address’ address=’0.0.0.0’/>
</graphics>
<sound model=’ich6′>
<alias name=’sound0’/>
<address type=’pci’ domain=’0x0000′ bus=’0x00′ slot=’0x04′ function=’0x0’/>
</sound>
<video>
<model type=’qxl’ ram=’65536′ vram=’65536′ vgamem=’16384′ heads=’1’/>
<alias name=’video0’/>
<address type=’pci’ domain=’0x0000′ bus=’0x00′ slot=’0x02′ function=’0x0’/>
</video>
<memballoon model=’virtio’>
<alias name=’balloon0’/>
<address type=’pci’ domain=’0x0000′ bus=’0x00′ slot=’0x07′ function=’0x0’/>
<stats period=’10’/>
</memballoon>
</devices>
<seclabel type=’dynamic’ model=’selinux’ relabel=’yes’>
<label>system_u:system_r:svirt_t:s0:c359,c706</label>
<imagelabel>system_u:object_r:svirt_image_t:s0:c359,c706</imagelabel>
</seclabel>
</domain>

Screenshot from 2015-07-31 21-55-33                                              Screenshot from 2015-07-31 15-05-53

 


Switching to Dashboard Spice Console in RDO Kilo on Fedora 22

July 3, 2015

*************************
UPDATE 06/27/2015
*************************
# dnf install -y https://rdoproject.org/repos/rdo-release.rpm
# dnf  install -y openstack-packstack  
# dnf install fedora-repos-rawhide
# dnf --enablerepo=rawhide update openstack-packstack
Fedora – Rawhide – Developmental packages for the next Fedora re 1.7 MB/s |  45 MB     00:27
Last metadata expiration check performed 0:00:39 ago on Sat Jun 27 13:23:03 2015.
Dependencies resolved.
==============================================================
Package                       Arch      Version                                Repository  Size
==============================================================
Upgrading:
openstack-packstack           noarch    2015.1-0.7.dev1577.gc9f8c3c.fc23       rawhide    233 k
openstack-packstack-puppet    noarch    2015.1-0.7.dev1577.gc9f8c3c.fc23       rawhide     233 k
Transaction Summary
==============================================================
Upgrade  2 Packages
.  .  .  .  .
# dnf install python3-pyOpenSSL.noarch 
At this point run :-
# packstack --gen-answer-file answer-file-aio.txt
and set
CONFIG_KEYSTONE_SERVICE_NAME=httpd
I also commented out the second line in /etc/httpd/conf.d/mod_dnssd.conf
Then run `packstack --answer-file=./answer-file-aio.txt`; however, at the moment you will still need to pre-patch provision_demo.pp
( see the third patch at http://textuploader.com/yn0v ), the rest should work fine.

Upon completion you may try follow :-
https://www.rdoproject.org/Neutron_with_existing_external_network

I didn't test it on Fedora 22; I just created external and private networks of VXLAN type and configured the following:
 
[root@ServerFedora22 network-scripts(keystone_admin)]# cat ifcfg-br-ex
DEVICE="br-ex"
BOOTPROTO="static"
IPADDR="192.168.1.32"
NETMASK="255.255.255.0"
DNS1="8.8.8.8"
BROADCAST="192.168.1.255"
GATEWAY="192.168.1.1"
NM_CONTROLLED="no"
TYPE="OVSIntPort"
OVS_BRIDGE=br-ex
DEVICETYPE="ovs"
DEFROUTE="yes"
IPV4_FAILURE_FATAL="yes"
IPV6INIT=no

[root@ServerFedora22 network-scripts(keystone_admin)]# cat ifcfg-enp2s0
DEVICE="enp2s0"
ONBOOT="yes"
HWADDR="90:E6:BA:2D:11:EB"
TYPE="OVSPort"
DEVICETYPE="ovs"
OVS_BRIDGE=br-ex
NM_CONTROLLED=no
IPV6INIT=no

When configuration above is done :-

# chkconfig network on
# systemctl stop NetworkManager
# systemctl disable NetworkManager
# reboot

*************************
UPDATE 06/26/2015
*************************

To install RDO Kilo on Fedora 22 :-
after `dnf -y install openstack-packstack`
# cd /usr/lib/python2.7/site-packages/packstack/puppet/templates
Then apply the following 3 patches
# cd ; packstack --gen-answer-file answer-file-aio.txt
Set "CONFIG_NAGIOS_INSTALL=n" in answer-file-aio.txt
# packstack --answer-file=./answer-file-aio.txt

************************
UPDATE 05/19/2015
************************
MATE Desktop supports sound (via the patch mentioned below) on RDO Kilo cloud instances F22, F21, F20. The RDO Kilo AIO install was performed on bare metal.
Also, a Windows Server 2012 (evaluation version) cloud VM provides pretty stable video/sound ( http://www.cloudbase.it/windows-cloud-images/ ).

************************
UPDATE 05/14/2015
************************
I've got sound working on a CentOS 7 VM (connecting to the console via virt-manager) with a slightly updated patch from Y.Kawada, self.type set to "ich6", on RDO Kilo installed on a bare metal AIO testing host running Fedora 22. The same results have been obtained for RDO Kilo on CentOS 7.1. However, a connection to the spice console with cut&&paste and sound enabled may be obtained via spicy (remote connection).

Generated libvirt.xml

<domain type=”kvm”>
<uuid>455877f2-7070-48a7-bb24-e0702be2fbc5</uuid>
<name>instance-00000003</name>
<memory>2097152</memory>
<vcpu cpuset=”0-7″>1</vcpu>
<metadata>
<nova:instance xmlns:nova=”http://openstack.org/xmlns/libvirt/nova/1.0″&gt;
<nova:package version=”2015.1.0-3.el7″/>
<nova:name>CentOS7RSX05</nova:name>
<nova:creationTime>2015-06-14 18:42:11</nova:creationTime>
<nova:flavor name=”m1.small”>
<nova:memory>2048</nova:memory>
<nova:disk>20</nova:disk>
<nova:swap>0</nova:swap>
<nova:ephemeral>0</nova:ephemeral>
<nova:vcpus>1</nova:vcpus>
</nova:flavor>
<nova:owner>
<nova:user uuid=”da79d2c66db747eab942bdbe20bb3f44″>demo</nova:user>
<nova:project uuid=”8c9defac20a74633af4bb4773e45f11e”>demo</nova:project>
</nova:owner>
<nova:root type=”image” uuid=”4a2d708c-7624-439f-9e7e-6e133062e23a”/>
</nova:instance>
</metadata>
<sysinfo type=”smbios”>
<system>
<entry name=”manufacturer”>Fedora Project</entry>
<entry name=”product”>OpenStack Nova</entry>
<entry name=”version”>2015.1.0-3.el7</entry>
<entry name=”serial”>b3fae7c3-10bd-455b-88b7-95e586342203</entry>
<entry name=”uuid”>455877f2-7070-48a7-bb24-e0702be2fbc5</entry>
</system>
</sysinfo>
<os>
<type>hvm</type>
<boot dev=”hd”/>
<smbios mode=”sysinfo”/>
</os>
<features>
<acpi/>
<apic/>
</features>
<cputune>
<shares>1024</shares>
</cputune>
<clock offset=”utc”>
<timer name=”pit” tickpolicy=”delay”/>
<timer name=”rtc” tickpolicy=”catchup”/>
<timer name=”hpet” present=”no”/>
</clock>
<cpu mode=”host-model” match=”exact”>
<topology sockets=”1″ cores=”1″ threads=”1″/>
</cpu>
<devices>
<disk type=”file” device=”disk”>
<driver name=”qemu” type=”qcow2″ cache=”none”/>
<source file=”/var/lib/nova/instances/455877f2-7070-48a7-bb24-e0702be2fbc5/disk”/>
<target bus=”virtio” dev=”vda”/>
</disk>
<interface type=”bridge”>
<mac address=”fa:16:3e:87:4b:29″/>
<model type=”virtio”/>
<source bridge=”qbr8ce9ae7b-f0″/>
<target dev=”tap8ce9ae7b-f0″/>
</interface>
<serial type=”file”>
<source path=”/var/lib/nova/instances/455877f2-7070-48a7-bb24-e0702be2fbc5/console.log”/>
</serial>
<serial type=”pty”/>
<channel type=”spicevmc”>
<target type=”virtio” name=”com.redhat.spice.0″/>
</channel>
<graphics type=”spice” autoport=”yes” keymap=”en-us” listen=”0.0.0.0   “/>
<video>
<model type=”qxl”/>
</video>
<sound model=”ich6″/>
<memballoon model=”virtio”>
<stats period=”10″/>
</memballoon>
</devices>
</domain>

*****************
END UPDATE
*****************
The post follows up http://lxer.com/module/newswire/view/214893/index.html
The most recent `yum update` on F22 significantly improved network performance on cloud VMs (L2). Watching movies on a cloud F22 VM (with "Mate Desktop" installed and functioning pretty smoothly) without sound refreshes spice memories; view https://bugzilla.redhat.com/show_bug.cgi?format=multiple&id=913607
# dnf -y install spice-html5 ( installed on Controller && Compute)
# dnf -y install openstack-nova-spicehtml5proxy (Compute Node)
# rpm -qa | grep openstack-nova-spicehtml5proxy
openstack-nova-spicehtml5proxy-2015.1.0-3.fc23.noarch

***********************************************************************
Update /etc/nova/nova.conf on Controller && Compute Node as follows :-
***********************************************************************

[DEFAULT]
. . . . .
web=/usr/share/spice-html5
. . . . . .
spicehtml5proxy_host=0.0.0.0  (only Compute)
spicehtml5proxy_port=6082     (only Compute)
. . . . . . .
# Disable VNC
vnc_enabled=false
. . . . . . .
[spice]

# Compute Node Management IP 192.169.142.137
html5proxy_base_url=http://192.169.142.137:6082/spice_auto.html
server_proxyclient_address=127.0.0.1 ( only  Compute )
server_listen=0.0.0.0 ( only  Compute )
enabled=true
agent_enabled=true
keymap=en-us

:wq

# service httpd restart ( on Controller )
Next actions to be performed on Compute Node

# service openstack-nova-compute restart
# service openstack-nova-spicehtml5proxy start
# systemctl enable openstack-nova-spicehtml5proxy
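
A quick sanity check on the Compute node, assuming the default port 6082 from the configuration above, is to confirm the HTML5 proxy is actually running and listening:

# systemctl status openstack-nova-spicehtml5proxy
# ss -ntlp | grep 6082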

On Controller

[root@ip-192-169-142-127 ~(keystone_admin)]# nova list --all-tenants
+————————————–+———–+———————————-+———+————+————-+———————————-+
| ID                                   | Name      | Tenant ID                        | Status  | Task State | Power State | Networks                         |
+————————————–+———–+———————————-+———+————+————-+———————————-+
| 6c8ef008-e8e0-4f1c-af17-b5f846f8b2d9 | CirrOSDev | 7e5a0f3ec3fe45dc83ae0947ef52adc3 | SHUTOFF | –          | Shutdown    | demo_net=50.0.0.11, 172.24.4.228 |
| cfd735ea-d9a8-4c4e-9a77-03035f01d443 | VF22DEVS  | 7e5a0f3ec3fe45dc83ae0947ef52adc3 | ACTIVE  | –          | Running     | demo_net=50.0.0.14, 172.24.4.231 |
+————————————–+———–+———————————-+———+————+————-+———————————-+
[root@ip-192-169-142-127 ~(keystone_admin)]# nova get-spice-console cfd735ea-d9a8-4c4e-9a77-03035f01d443  spice-html5
+————-+—————————————————————————————-+
| Type        | Url                                                                                    |

+————-+—————————————————————————————-+
| spice-html5 | http://192.169.142.137:6082/spice_auto.html?token=24fb65c7-e7e9-4727-bad3-ba7c2c29f7f4 |
+————-+—————————————————————————————-+

Session running by virt-manager on Virtualization Host ( F22 )

Connection to Compute Node 192.169.142.137 has been activated


Once again about pros/cons of Systemd and Upstart

May 16, 2015

Upstart advantages.

1. Upstart is easier to port to systems other than Linux, while systemd is rigidly tied to Linux kernel capabilities. Adapting Upstart to work on Debian GNU/kFreeBSD and Debian GNU/Hurd looks like a realistic task, which cannot be said of systemd.

2. Upstart is more familiar to Debian developers, many of whom also participate in the development of Ubuntu. Two members of the Debian Technical Committee (Steve Langasek and Colin Watson) are part of the Upstart development team.

3. Upstart is simpler and more lightweight than systemd; as a result, less code means fewer mistakes. Upstart is better suited for integration with the code of system daemons. The systemd policy boils down to daemon authors having to adapt to upstream (to replace a systemd component, an analog compatible at the level of the external interface has to be provided) instead of upstream providing convenient facilities for daemon developers.

4. Upstart is simpler with respect to maintenance and packaging; the community of Upstart developers is more open to collaboration. With systemd it is necessary to take the systemd methods for granted and follow them, for example, to support a separate "/usr" partition or to use only absolute paths for startup. The shortcomings of Upstart fall into the category of fixable problems; in its current state Upstart is already completely ready for use in Debian 8.0 (Jessie).

5. Upstart has a more familiar model for defining service configuration, unlike systemd, where settings in /etc override the base settings of units defined in the /lib hierarchy. Using Upstart would maintain healthy competition, which would promote the development of different approaches and keep developers in good shape.

Systemd advantages

1. Without a substantial rework of its architecture, Upstart will not be able to catch up with systemd in functionality (for example, the inverted model of dependency startup: instead of starting all required dependencies when a given service starts, in Upstart a service is started upon receipt of an event announcing that its dependencies are available).

2. The use of ptrace interferes with applying Upstart jobs to daemons such as avahi, apache and postfix; activation of a service only upon an actual request to a socket, rather than on indirect signs such as a dependency on the activation of another socket; and the lack of reliable tracking of the state of running processes.

3. Systemd contains a fairly self-sufficient set of components, which allows attention to be concentrated on eliminating problems rather than on extending an Upstart configuration to capabilities that are already present in systemd. For example, Upstart lacks: support for detailed status and logging of daemon activity, multiple socket activation, socket activation for IPv6 and UDP, and a flexible mechanism for restricting resources.

4. Using systemd makes it possible to bring the management facilities of various distributions closer together and unify them. Systemd has already been adopted by RHEL 7.X, CentOS 7.X, Fedora, openSUSE, Sabayon, Mandriva, Arch Linux.

5. Systemd has a larger, more active and more versatile community of developers, which includes engineers from SUSE and Red Hat. When using Upstart a distribution becomes dependent on Canonical, without which Upstart would be left without developers and doomed to stagnation. Participation in Upstart development requires signing an agreement transferring property rights to Canonical. Red Hat, not without reason, decided to replace Upstart with systemd; the Debian project has already been compelled to migrate to systemd. Implementing some boot capabilities in Upstart requires fragments of shell scripts, which makes the initialization process less reliable and more labor-consuming to debug.

6. Systemd support is implemented in GNOME and KDE, which make increasingly active use of systemd capabilities (for example, facilities for managing user sessions and starting each application in a separate cgroup). GNOME continues to be positioned as the main environment of Debian, but relations between the Ubuntu/Upstart and GNOME projects have been visibly strained.

References

http://www.opennet.ru/opennews/art.shtml?num=38762


RDO Kilo Three Node Setup for Controller+Network+Compute (ML2&OVS&VXLAN) on CentOS 7.1

May 9, 2015

Following below is a brief instruction for a traditional three node deployment test Controller && Network && Compute for the oncoming RDO Kilo, which was performed on a Fedora 21 host with KVM/Libvirt Hypervisor (16 GB RAM, Intel Core i7-4771 Haswell CPU, ASUS Z97-P). Three VMs (4 GB RAM, 2 VCPUS) have been set up: the Controller VM with one VNIC (management subnet), the Network Node VM with three VNICS (management, vteps' and external subnets), and the Compute Node VM with two VNICS (management and vteps' subnets).

SELINUX stays in enforcing mode.

Three Libvirt networks created

# cat openstackvms.xml
<network>
<name>openstackvms</name>
<uuid>d0e9964a-f91a-40c0-b769-a609aee41bf2</uuid>
<forward mode='nat'>
<nat>
<port start='1024' end='65535'/>
</nat>
</forward>
<bridge name='virbr2' stp='on' delay='0' />
<mac address='52:54:00:60:f8:6d'/>
<ip address='192.169.142.1' netmask='255.255.255.0'>
<dhcp>
<range start='192.169.142.2' end='192.169.142.254' />
</dhcp>
</ip>
</network>

[root@junoJVC01 ~]# cat public.xml

<network>
<name>public</name>
<uuid>d0e9965b-f92c-40c1-b749-b609aed42cf2</uuid>
<forward mode='nat'>
<nat>
<port start='1024' end='65535'/>
</nat>
</forward>
<bridge name='virbr3' stp='on' delay='0' />
<mac address='52:54:00:60:f8:6d'/>
<ip address='172.24.4.225' netmask='255.255.255.240'>
<dhcp>
<range start='172.24.4.226' end='172.24.4.238' />
</dhcp>
</ip>
</network>

[root@junoJVC01 ~]# cat vteps.xml

<network>
<name>vteps</name>
<uuid>d0e9965b-f92c-40c1-b749-b609aed42cf2</uuid>
<forward mode='nat'>
<nat>
<port start='1024' end='65535'/>
</nat>
</forward>
<bridge name='virbr4' stp='on' delay='0' />
<mac address='52:54:00:60:f8:6d'/>
<ip address='10.0.0.1' netmask='255.255.255.0'>
<dhcp>
<range start='10.0.0.1' end='10.0.0.254' />
</dhcp>
</ip>
</network>
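
The define/start step for these three networks is not shown here; assuming the three XML files above, it follows the same pattern as in the earlier setups, for example:

# for n in openstackvms public vteps ; do virsh net-define $n.xml ; virsh net-start $n ; virsh net-autostart $n ; done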

[root@junoJVC01 ~]# virsh net-list

 Name                 State      Autostart     Persistent
----------------------------------------------------------
 default              active     yes           yes
 openstackvms         active     yes           yes
 public               active     yes           yes
 vteps                active     yes           yes

*********************************************************************************
1. The first Libvirt subnet "openstackvms" serves as the management network.
All 3 VMs are attached to this subnet.
**********************************************************************************
2. The second Libvirt subnet "public" serves to simulate the external network. The Network Node is attached to it; later the "eth3" interface (which belongs to "public") is converted into an OVS port of br-ex on the Network Node. Via interface virbr3 (172.24.4.225) this Libvirt subnet provides the VMs running on the Compute Node with access to the Internet, since it matches the external network 172.24.4.224/28 created by the packstack installation.


***********************************************************************************
3. The third Libvirt subnet "vteps" serves for VTEP endpoint simulation. The Network and Compute Node VMs are attached to this subnet.
***********************************************************************************
Start testing following RH instructions
Per https://www.rdoproject.org/RDO_test_day_Kilo#How_To_Test

# yum install http://rdoproject.org/repos/openstack-kilo/rdo-testing-kilo.rpm
# yum install -y openstack-packstack
*******************************************************
Install rdo-testing-kilo.rpm on all three nodes due to
*******************************************************

https://bugzilla.redhat.com/show_bug.cgi?id=1218750

# yum install http://rdoproject.org/repos/openstack-kilo/rdo-testing-kilo.rpm
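
A quick way to confirm the testing repository is active on each node (the exact repo id may differ) is to check the enabled repo list:

# yum repolist enabled | grep -i kilo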

Keep SELINUX=enforcing.
The package openstack-selinux-0.6.31-1.el7.noarch will be installed by the prescript
puppet on all nodes of the deployment.

*********************
Answer-file :-
*********************

[root@ip-192-169-142-127 ~(keystone_admin)]# cat answer-fileRHTest.txt

[general]
CONFIG_SSH_KEY=/root/.ssh/id_rsa.pub
CONFIG_DEFAULT_PASSWORD=
CONFIG_MARIADB_INSTALL=y
CONFIG_GLANCE_INSTALL=y
CONFIG_CINDER_INSTALL=y
CONFIG_NOVA_INSTALL=y
CONFIG_NEUTRON_INSTALL=y
CONFIG_HORIZON_INSTALL=y
CONFIG_SWIFT_INSTALL=y
CONFIG_CEILOMETER_INSTALL=y
CONFIG_HEAT_INSTALL=n
CONFIG_CLIENT_INSTALL=y
CONFIG_NTP_SERVERS=
CONFIG_NAGIOS_INSTALL=y
EXCLUDE_SERVERS=
CONFIG_DEBUG_MODE=n
CONFIG_CONTROLLER_HOST=192.169.142.127
CONFIG_COMPUTE_HOSTS=192.169.142.137
CONFIG_NETWORK_HOSTS=192.169.142.147
CONFIG_VMWARE_BACKEND=n
CONFIG_UNSUPPORTED=n
CONFIG_VCENTER_HOST=
CONFIG_VCENTER_USER=
CONFIG_VCENTER_PASSWORD=
CONFIG_VCENTER_CLUSTER_NAME=
CONFIG_STORAGE_HOST=192.169.142.127
CONFIG_USE_EPEL=y
CONFIG_REPO=
CONFIG_RH_USER=
CONFIG_SATELLITE_URL=
CONFIG_RH_PW=
CONFIG_RH_OPTIONAL=y
CONFIG_RH_PROXY=
CONFIG_RH_PROXY_PORT=
CONFIG_RH_PROXY_USER=
CONFIG_RH_PROXY_PW=
CONFIG_SATELLITE_USER=
CONFIG_SATELLITE_PW=
CONFIG_SATELLITE_AKEY=
CONFIG_SATELLITE_CACERT=
CONFIG_SATELLITE_PROFILE=
CONFIG_SATELLITE_FLAGS=
CONFIG_SATELLITE_PROXY=
CONFIG_SATELLITE_PROXY_USER=
CONFIG_SATELLITE_PROXY_PW=
CONFIG_AMQP_BACKEND=rabbitmq
CONFIG_AMQP_HOST=192.169.142.127
CONFIG_AMQP_ENABLE_SSL=n
CONFIG_AMQP_ENABLE_AUTH=n
CONFIG_AMQP_NSS_CERTDB_PW=PW_PLACEHOLDER
CONFIG_AMQP_SSL_PORT=5671
CONFIG_AMQP_SSL_CERT_FILE=/etc/pki/tls/certs/amqp_selfcert.pem
CONFIG_AMQP_SSL_KEY_FILE=/etc/pki/tls/private/amqp_selfkey.pem
CONFIG_AMQP_SSL_SELF_SIGNED=y
CONFIG_AMQP_AUTH_USER=amqp_user
CONFIG_AMQP_AUTH_PASSWORD=PW_PLACEHOLDER
CONFIG_MARIADB_HOST=192.169.142.127
CONFIG_MARIADB_USER=root
CONFIG_MARIADB_PW=7207ae344ed04957
CONFIG_KEYSTONE_DB_PW=abcae16b785245c3
CONFIG_KEYSTONE_REGION=RegionOne
CONFIG_KEYSTONE_ADMIN_TOKEN=3ad2de159f9649afb0c342ba57e637d9
CONFIG_KEYSTONE_ADMIN_PW=7049f834927e4468
CONFIG_KEYSTONE_DEMO_PW=bf737b785cfa4398
CONFIG_KEYSTONE_TOKEN_FORMAT=UUID
CONFIG_KEYSTONE_SERVICE_NAME=keystone
CONFIG_GLANCE_DB_PW=41264fc52ffd4fe8
CONFIG_GLANCE_KS_PW=f6a9398960534797
CONFIG_GLANCE_BACKEND=file
CONFIG_CINDER_DB_PW=5ac08c6d09ba4b69
CONFIG_CINDER_KS_PW=c8cb1ecb8c2b4f6f
CONFIG_CINDER_BACKEND=lvm
CONFIG_CINDER_VOLUMES_CREATE=y
CONFIG_CINDER_VOLUMES_SIZE=10G
CONFIG_CINDER_GLUSTER_MOUNTS=
CONFIG_CINDER_NFS_MOUNTS=
CONFIG_CINDER_NETAPP_LOGIN=
CONFIG_CINDER_NETAPP_PASSWORD=
CONFIG_CINDER_NETAPP_HOSTNAME=
CONFIG_CINDER_NETAPP_SERVER_PORT=80
CONFIG_CINDER_NETAPP_STORAGE_FAMILY=ontap_cluster
CONFIG_CINDER_NETAPP_TRANSPORT_TYPE=http
CONFIG_CINDER_NETAPP_STORAGE_PROTOCOL=nfs
CONFIG_CINDER_NETAPP_SIZE_MULTIPLIER=1.0
CONFIG_CINDER_NETAPP_EXPIRY_THRES_MINUTES=720
CONFIG_CINDER_NETAPP_THRES_AVL_SIZE_PERC_START=20
CONFIG_CINDER_NETAPP_THRES_AVL_SIZE_PERC_STOP=60
CONFIG_CINDER_NETAPP_NFS_SHARES_CONFIG=
CONFIG_CINDER_NETAPP_VOLUME_LIST=
CONFIG_CINDER_NETAPP_VFILER=
CONFIG_CINDER_NETAPP_VSERVER=
CONFIG_CINDER_NETAPP_CONTROLLER_IPS=
CONFIG_CINDER_NETAPP_SA_PASSWORD=
CONFIG_CINDER_NETAPP_WEBSERVICE_PATH=/devmgr/v2
CONFIG_CINDER_NETAPP_STORAGE_POOLS=
CONFIG_NOVA_DB_PW=1e1b5aeeeaf342a8
CONFIG_NOVA_KS_PW=d9583177a2444f06
CONFIG_NOVA_SCHED_CPU_ALLOC_RATIO=16.0
CONFIG_NOVA_SCHED_RAM_ALLOC_RATIO=1.5
CONFIG_NOVA_COMPUTE_MIGRATE_PROTOCOL=tcp
CONFIG_NOVA_COMPUTE_PRIVIF=eth1
CONFIG_NOVA_NETWORK_MANAGER=nova.network.manager.FlatDHCPManager
CONFIG_NOVA_NETWORK_PUBIF=eth0
CONFIG_NOVA_NETWORK_PRIVIF=eth1
CONFIG_NOVA_NETWORK_FIXEDRANGE=192.168.32.0/22
CONFIG_NOVA_NETWORK_FLOATRANGE=10.3.4.0/22
CONFIG_NOVA_NETWORK_DEFAULTFLOATINGPOOL=nova
CONFIG_NOVA_NETWORK_AUTOASSIGNFLOATINGIP=n
CONFIG_NOVA_NETWORK_VLAN_START=100
CONFIG_NOVA_NETWORK_NUMBER=1
CONFIG_NOVA_NETWORK_SIZE=255
CONFIG_NEUTRON_KS_PW=808e36e154bd4cee
CONFIG_NEUTRON_DB_PW=0e2b927a21b44737
CONFIG_NEUTRON_L3_EXT_BRIDGE=br-ex
CONFIG_NEUTRON_L2_PLUGIN=ml2
CONFIG_NEUTRON_METADATA_PW=a965cd23ed2f4502
CONFIG_LBAAS_INSTALL=n
CONFIG_NEUTRON_METERING_AGENT_INSTALL=n
CONFIG_NEUTRON_FWAAS=n
CONFIG_NEUTRON_ML2_TYPE_DRIVERS=vxlan
CONFIG_NEUTRON_ML2_TENANT_NETWORK_TYPES=vxlan
CONFIG_NEUTRON_ML2_MECHANISM_DRIVERS=openvswitch
CONFIG_NEUTRON_ML2_FLAT_NETWORKS=*
CONFIG_NEUTRON_ML2_VLAN_RANGES=
CONFIG_NEUTRON_ML2_TUNNEL_ID_RANGES=1001:2000
CONFIG_NEUTRON_ML2_VXLAN_GROUP=239.1.1.2
CONFIG_NEUTRON_ML2_VNI_RANGES=1001:2000
CONFIG_NEUTRON_L2_AGENT=openvswitch
CONFIG_NEUTRON_LB_TENANT_NETWORK_TYPE=local
CONFIG_NEUTRON_LB_VLAN_RANGES=
CONFIG_NEUTRON_LB_INTERFACE_MAPPINGS=
CONFIG_NEUTRON_OVS_TENANT_NETWORK_TYPE=vxlan
CONFIG_NEUTRON_OVS_VLAN_RANGES=
CONFIG_NEUTRON_OVS_BRIDGE_MAPPINGS=physnet1:br-ex
CONFIG_NEUTRON_OVS_BRIDGE_IFACES=
CONFIG_NEUTRON_OVS_TUNNEL_RANGES=1001:2000
CONFIG_NEUTRON_OVS_TUNNEL_IF=eth1
CONFIG_NEUTRON_OVS_VXLAN_UDP_PORT=4789
CONFIG_HORIZON_SSL=n
CONFIG_SSL_CERT=
CONFIG_SSL_KEY=
CONFIG_SSL_CACHAIN=
CONFIG_SWIFT_KS_PW=8f75bfd461234c30
CONFIG_SWIFT_STORAGES=
CONFIG_SWIFT_STORAGE_ZONES=1
CONFIG_SWIFT_STORAGE_REPLICAS=1
CONFIG_SWIFT_STORAGE_FSTYPE=ext4
CONFIG_SWIFT_HASH=a60aacbedde7429a
CONFIG_SWIFT_STORAGE_SIZE=2G
CONFIG_PROVISION_DEMO=y
CONFIG_PROVISION_TEMPEST=n
CONFIG_PROVISION_TEMPEST_USER=
CONFIG_PROVISION_TEMPEST_USER_PW=44faa4ebc3da4459
CONFIG_PROVISION_DEMO_FLOATRANGE=172.24.4.224/28
CONFIG_PROVISION_TEMPEST_REPO_URI=https://github.com/openstack/tempest.git
CONFIG_PROVISION_TEMPEST_REPO_REVISION=master
CONFIG_PROVISION_ALL_IN_ONE_OVS_BRIDGE=n
CONFIG_HEAT_DB_PW=PW_PLACEHOLDER
CONFIG_HEAT_AUTH_ENC_KEY=fc3fb7fee61e46b0
CONFIG_HEAT_KS_PW=PW_PLACEHOLDER
CONFIG_HEAT_CLOUDWATCH_INSTALL=n
CONFIG_HEAT_USING_TRUSTS=y
CONFIG_HEAT_CFN_INSTALL=n
CONFIG_HEAT_DOMAIN=heat
CONFIG_HEAT_DOMAIN_ADMIN=heat_admin
CONFIG_HEAT_DOMAIN_PASSWORD=PW_PLACEHOLDER
CONFIG_CEILOMETER_SECRET=19ae0e7430174349
CONFIG_CEILOMETER_KS_PW=337b08d4b3a44753
CONFIG_MONGODB_HOST=192.169.142.127
CONFIG_NAGIOS_PW=02f168ee8edd44e4

**********************************************************************************
Upon packstack completion, create the following files on the Network Node,
designed to match the external network created by the installer
**********************************************************************************

[root@ip-192-169-142-147 network-scripts]# cat ifcfg-br-ex

DEVICE="br-ex"
BOOTPROTO="static"
IPADDR="172.24.4.227"
NETMASK="255.255.255.240"
DNS1="83.221.202.254"
BROADCAST="172.24.4.239"
GATEWAY="172.24.4.225"
NM_CONTROLLED="no"
TYPE="OVSIntPort"
OVS_BRIDGE=br-ex
DEVICETYPE="ovs"
DEFROUTE="yes"
IPV4_FAILURE_FATAL="yes"
IPV6INIT=no

[root@ip-192-169-142-147 network-scripts]# cat ifcfg-eth3

DEVICE="eth3"
# HWADDR=00:22:15:63:E4:E2
ONBOOT="yes"
TYPE="OVSPort"
DEVICETYPE="ovs"
OVS_BRIDGE=br-ex
NM_CONTROLLED=no
IPV6INIT=no
*************************************************
Next steps to be performed on the Network Node :-
*************************************************

# chkconfig network on
# systemctl stop NetworkManager
# systemctl disable NetworkManager
# service network restart

 


[root@ip-192-169-142-147 ~(keystone_admin)]# ovs-vsctl show

d9a60201-a2c2-4c6a-ad9d-63cc2ae296b3

Bridge br-ex
Port phy-br-ex
Interface phy-br-ex
type: patch
options: {peer=int-br-ex}
Port “eth3”
Interface “eth3”

Port br-ex
Interface br-ex
type: internal
Port “eth2”
Interface “eth2”
Port “qg-d433fa46-e2”
Interface “qg-d433fa46-e2”
type: internal
Bridge br-tun
fail_mode: secure
Port “vxlan-0a000089”
Interface “vxlan-0a000089″
type: vxlan
options: {df_default=”true”, in_key=flow, local_ip=”10.0.0.147″, out_key=flow, remote_ip=”10.0.0.137″}
Port br-tun
Interface br-tun
type: internal
Port patch-int
Interface patch-int
type: patch
options: {peer=patch-tun}
Bridge br-int
fail_mode: secure
Port int-br-ex
Interface int-br-ex
type: patch
options: {peer=phy-br-ex}
Port br-int
Interface br-int
type: internal
Port “tap70da94fb-c1”
tag: 1
Interface “tap70da94fb-c1”
type: internal
Port patch-tun
Interface patch-tun
type: patch
options: {peer=patch-int}
Port “qr-0737c492-f6”
tag: 1
Interface “qr-0737c492-f6”
type: internal
ovs_version: “2.3.1”
**********************************************************
Following below is the Network Node status verification
**********************************************************

[root@ip-192-169-142-147 ~(keystone_admin)]# openstack-status

== neutron services ==

neutron-server:                           inactive  (disabled on boot)
neutron-dhcp-agent:                    active
neutron-l3-agent:                         active
neutron-metadata-agent:              active
neutron-openvswitch-agent:         active
== Support services ==
libvirtd:                               active
openvswitch:                       active
dbus:                                   active
[root@ip-192-169-142-147 ~(keystone_admin)]# neutron net-list

+————————————–+———-+——————————————————+
| id                                   | name     | subnets                                              |
+————————————–+———-+——————————————————+
| 7ecdfc27-57cf-410d-9a76-8e9eb76582cb | public   | 5fc0118a-f710-448d-af67-17dbfe01d5fc 172.24.4.224/28 |
| 98dd1928-96e8-47fb-a2fe-49292ae092ba | demo_net | ba2cded7-5546-4a64-aa49-7ef4d077dee3 50.0.0.0/24     |
+————————————–+———-+——————————————————+

[root@ip-192-169-142-147 ~(keystone_admin)]# neutron router-list

+————————————–+————+——————————————————————————————————————————————————————————————+————-+——-+
| id                                   | name       | external_gateway_info                                                                                                                                                                   | distributed | ha    |

+————————————–+————+——————————————————————————————————————————————————————————————+————-+——-+

| d63ca3f3-5b71-4540-bb5c-01b44ce3081b | RouterDemo | {“network_id”: “7ecdfc27-57cf-410d-9a76-8e9eb76582cb”, “enable_snat”: true, “external_fixed_ips”: [{“subnet_id”: “5fc0118a-f710-448d-af67-17dbfe01d5fc”, “ip_address”: “172.24.4.229”}]} | False       | False |

+————————————–+————+——————————————————————————————————————————————————————————————+————-+——-+

[root@ip-192-169-142-147 ~(keystone_admin)]# neutron router-port-list RouterDemo

+————————————–+——+——————-+————————————————————————————-+
| id                                   | name | mac_address       | fixed_ips                                                                           |
+————————————–+——+——————-+————————————————————————————-+
| 0737c492-f607-4d6a-8e72-ad447453b3c0 |      | fa:16:3e:d7:d0:66 | {“subnet_id”: “ba2cded7-5546-4a64-aa49-7ef4d077dee3”, “ip_address”: “50.0.0.1”}     |
| d433fa46-e203-4fdd-b3f7-dcbc884e9f1e |      | fa:16:3e:02:ef:51 | {“subnet_id”: “5fc0118a-f710-448d-af67-17dbfe01d5fc”, “ip_address”: “172.24.4.229”} |
+————————————–+——+——————-+————————————————————————————-+

[root@ip-192-169-142-147 ~(keystone_admin)]# neutron port-show 0737c492-f607-4d6a-8e72-ad447453b3c0 | grep ACTIVE
| status                | ACTIVE                                                                          |

[root@ip-192-169-142-147 ~(keystone_admin)]# dmesg | grep promisc
[   14.174240] device ovs-system entered promiscuous mode
[   14.184284] device br-ex entered promiscuous mode
[   14.200068] device eth2 entered promiscuous mode
[   14.200253] device eth3 entered promiscuous mode
[   14.207443] device br-int entered promiscuous mode
[   14.209360] device br-tun entered promiscuous mode
[   27.311116] device virbr0-nic entered promiscuous mode
[  142.406262] device tap70da94fb-c1 entered promiscuous mode
[  144.045031] device qr-0737c492-f6 entered promiscuous mode
[  144.792618] device qg-d433fa46-e2 entered promiscuous mode

**************************************************************
Compute Node Status
**************************************************************

[root@ip-192-169-142-137 ~]#  dmesg | grep promisc
[    9.683238] device ovs-system entered promiscuous mode
[    9.699664] device br-ex entered promiscuous mode
[    9.735288] device br-int entered promiscuous mode
[    9.748086] device br-tun entered promiscuous mode
[  137.203583] device qvbe7160159-fd entered promiscuous mode
[  137.288235] device qvoe7160159-fd entered promiscuous mode
[  137.715508] device qvbe90ef79b-80 entered promiscuous mode
[  137.796083] device qvoe90ef79b-80 entered promiscuous mode
[  605.884770] device tape90ef79b-80 entered promiscuous mode
[  767.083214] device qvbbf1c441c-ad entered promiscuous mode
[  767.184783] device qvobf1c441c-ad entered promiscuous mode
[  767.446575] device tapbf1c441c-ad entered promiscuous mode
[  973.679071] device qvb3c3e98d7-2d entered promiscuous mode
[  973.775480] device qvo3c3e98d7-2d entered promiscuous mode
[  973.997621] device tap3c3e98d7-2d entered promiscuous mode
[ 1863.868574] device tapbf1c441c-ad left promiscuous mode
[ 1889.386251] device tape90ef79b-80 left promiscuous mode
[ 2256.698108] device tap3c3e98d7-2d left promiscuous mode
[ 2336.931559] device qvb6597428d-5b entered promiscuous mode
[ 2337.021941] device qvo6597428d-5b entered promiscuous mode
[ 2337.283293] device tap6597428d-5b entered promiscuous mode
[ 4092.577561] device tap6597428d-5b left promiscuous mode
[ 4099.798474] device tap6597428d-5b entered promiscuous mode
[ 5098.563689] device tape90ef79b-80 entered promiscuous mode

[root@ip-192-169-142-137 ~]# ovs-vsctl show
a0cb406e-b028-4b09-8849-e6e2869ab051
Bridge br-tun
fail_mode: secure
Port “vxlan-0a000093”
Interface “vxlan-0a000093″
type: vxlan
options: {df_default=”true”, in_key=flow, local_ip=”10.0.0.137″, out_key=flow, remote_ip=”10.0.0.147″}
Port patch-int
Interface patch-int
type: patch
options: {peer=patch-tun}
Port br-tun
Interface br-tun
type: internal
Bridge br-int
fail_mode: secure
Port “qvoe90ef79b-80”
tag: 1
Interface “qvoe90ef79b-80”
Port br-int
Interface br-int
type: internal
Port “qvobf1c441c-ad”
tag: 1
Interface “qvobf1c441c-ad”
Port patch-tun
Interface patch-tun
type: patch
options: {peer=patch-int}
Port int-br-ex
Interface int-br-ex
type: patch
options: {peer=phy-br-ex}
Port “qvo6597428d-5b”
tag: 1
Interface “qvo6597428d-5b”
Bridge br-ex
Port br-ex
Interface br-ex
type: internal
Port phy-br-ex
Interface phy-br-ex
type: patch
options: {peer=int-br-ex}
ovs_version: “2.3.1”

[root@ip-192-169-142-137 ~]# brctl show

bridge name         bridge id           STP enabled    interfaces
qbr6597428d-5b      8000.1a483dd02cee   no             qvb6597428d-5b
                                                       tap6597428d-5b
qbrbf1c441c-ad      8000.ca2f911ff649   no             qvbbf1c441c-ad
qbre90ef79b-80      8000.16342824f4ba   no             qvbe90ef79b-80
                                                       tape90ef79b-80
**************************************************
Controller Node status verification
**************************************************

[root@ip-192-169-142-127 ~(keystone_admin)]# openstack-status

== Nova services ==

openstack-nova-api:                     active
openstack-nova-cert:                    active
openstack-nova-compute:             inactive  (disabled on boot)
openstack-nova-network:              inactive  (disabled on boot)
openstack-nova-scheduler:           active
openstack-nova-conductor:           active
== Glance services ==
openstack-glance-api:                   active
openstack-glance-registry:            active
== Keystone service ==
openstack-keystone:                     active
== Horizon service ==
openstack-dashboard:                    active
== neutron services ==
neutron-server:                         active
neutron-dhcp-agent:                  inactive  (disabled on boot)
neutron-l3-agent:                       inactive  (disabled on boot)
neutron-metadata-agent:            inactive  (disabled on boot)
== Swift services ==
openstack-swift-proxy:                 active
openstack-swift-account:              active
openstack-swift-container:            active
openstack-swift-object:                 active
== Cinder services ==
openstack-cinder-api:                      active
openstack-cinder-scheduler:            active
openstack-cinder-volume:                active
openstack-cinder-backup:                active
== Ceilometer services ==
openstack-ceilometer-api:                 active
openstack-ceilometer-central:           active
openstack-ceilometer-compute:         inactive  (disabled on boot)
openstack-ceilometer-collector:         active
openstack-ceilometer-alarm-notifier:    active
openstack-ceilometer-alarm-evaluator:   active
openstack-ceilometer-notification:      active
== Support services ==
mysqld:                                    inactive  (disabled on boot)
libvirtd:                                    active
dbus:                                        active
target:                                      active
rabbitmq-server:                       active
memcached:                             active
== Keystone users ==
/usr/lib/python2.7/site-packages/keystoneclient/shell.py:65: DeprecationWarning: The keystone CLI is deprecated in favor of python-openstackclient. For a Python library, continue using python-keystoneclient.

‘python-keystoneclient.’, DeprecationWarning)

+———————————-+————+———+———————-+
|                id                |    name    | enabled |        email         |
+———————————-+————+———+———————-+
| 4e1008fd31944fecbb18cdc215af23ec |   admin    |   True  |    root@localhost    |
| 621b84dd4b904760b8aa0cc7b897c95c | ceilometer |   True  | ceilometer@localhost |
| 4d6cdea3b7bc49948890457808c0f6f8 |   cinder   |   True  |   cinder@localhost   |
| 8393bb4de49a44b798af8b118b9f0eb6 |    demo    |   True  |                      |
| f9be6eaa789e4b3c8771372fffb00230 |   glance   |   True  |   glance@localhost   |
| a518b95a92044ad9a4b04f0be90e385f |  neutron   |   True  |  neutron@localhost   |
| 40dddef540fb4fa5a69fb7baa03de657 |    nova    |   True  |    nova@localhost    |
| 5fbb2b97ab9d4192a3f38f090e54ffb1 |   swift    |   True  |   swift@localhost    |
+———————————-+————+———+———————-+
== Glance images ==
+————————————–+————–+————-+——————+———–+——–+
| ID                                   | Name         | Disk Format | Container Format | Size      | Status |
+————————————–+————–+————-+——————+———–+——–+
| 1b4a6b08-d63c-4d8d-91da-16f6ba177009 | cirros       | qcow2       | bare             | 13200896  | active |
| cb05124d-0d30-43a7-a033-0b7ff0ea1d47 | Fedor21image | qcow2       | bare             | 158443520 | active |
+————————————–+————–+————-+——————+———–+——–+
== Nova managed services ==
+—-+——————+—————————————-+———-+———+——-+—————————-+—————–+

| Id | Binary           | Host                                   | Zone     | Status  | State | Updated_at                 | Disabled Reason |

+—-+——————+—————————————-+———-+———+——-+—————————-+—————–+
| 1  | nova-consoleauth | ip-192-169-142-127.ip.secureserver.net | internal | enabled | up    | 2015-05-09T14:14:16.000000 | –               |
| 2  | nova-scheduler   | ip-192-169-142-127.ip.secureserver.net | internal | enabled | up    | 2015-05-09T14:14:17.000000 | –               |
| 3  | nova-conductor   | ip-192-169-142-127.ip.secureserver.net | internal | enabled | up    | 2015-05-09T14:14:16.000000 | –               |
| 4  | nova-cert        | ip-192-169-142-127.ip.secureserver.net | internal | enabled | up    | 2015-05-09T14:14:17.000000 | –               |
| 5  | nova-compute     | ip-192-169-142-137.ip.secureserver.net | nova     | enabled | up    | 2015-05-09T14:14:21.000000 | –               |

+—-+——————+—————————————-+———-+———+——-+—————————-+—————–+
== Nova networks ==
+————————————–+———-+——+
| ID                                   | Label    | Cidr |

+————————————–+———-+——+
| 7ecdfc27-57cf-410d-9a76-8e9eb76582cb | public   | –    |
| 98dd1928-96e8-47fb-a2fe-49292ae092ba | demo_net | –    |
+————————————–+———-+——+
== Nova instance flavors ==
+—-+———–+———–+——+———–+——+——-+————-+———–+

| ID | Name      | Memory_MB | Disk | Ephemeral | Swap | VCPUs | RXTX_Factor | Is_Public |

+—-+———–+———–+——+———–+——+——-+————-+———–+
| 1  | m1.tiny   | 512       | 1    | 0         |      | 1     | 1.0         | True      |
| 2  | m1.small  | 2048      | 20   | 0         |      | 1     | 1.0         | True      |
| 3  | m1.medium | 4096      | 40   | 0         |      | 2     | 1.0         | True      |
| 4  | m1.large  | 8192      | 80   | 0         |      | 4     | 1.0         | True      |
| 5  | m1.xlarge | 16384     | 160  | 0         |      | 8     | 1.0         | True      |
+—-+———–+———–+——+———–+——+——-+————-+———–+
== Nova instances ==
+—-+——+——–+————+————-+———-+

| ID | Name | Status | Task State | Power State | Networks |

+—-+——+——–+————+————-+———-+
+—-+——+——–+————+————-+———-+

[root@ip-192-169-142-127 ~(keystone_admin)]# nova hypervisor-list

+—-+—————————————-+——-+———+
| ID | Hypervisor hostname                    | State | Status  |
+—-+—————————————-+——-+———+
| 1  | ip-192-169-142-137.ip.secureserver.net | up    | enabled |
+—-+—————————————-+——-+———+

[root@ip-192-169-142-127 ~(keystone_admin)]# neutron agent-list
+————————————–+——————–+—————————————-+——-+—————-+—————————+
| id                                   | agent_type         | host                                   | alive | admin_state_up | binary                    |
+————————————–+——————–+—————————————-+——-+—————-+—————————+

| 22af7b3b-232f-4642-9418-d1c8021c7eb5 | Open vSwitch agent | ip-192-169-142-147.ip.secureserver.net | :-)   | True           | neutron-openvswitch-agent |
| 34e1078c-c75b-4d14-b813-b273ea8f7b86 | L3 agent           | ip-192-169-142-147.ip.secureserver.net | :-)   | True           | neutron-l3-agent          |
| 5d652094-6711-409d-8546-e29c09e03d5a | Metadata agent     | ip-192-169-142-147.ip.secureserver.net | :-)   | True           | neutron-metadata-agent    |
| 8a8ad680-1071-4c7f-8787-ba4ef0a7dfb7 | DHCP agent         | ip-192-169-142-147.ip.secureserver.net | :-)   | True           | neutron-dhcp-agent        |
| d81e97af-c210-4855-af06-fb1d139e2e10 | Open vSwitch agent | ip-192-169-142-137.ip.secureserver.net | :-)   | True           | neutron-openvswitch-agent |
+————————————–+——————–+—————————————-+——-+—————-+—————————+

[root@ip-192-169-142-127 ~(keystone_admin)]# nova service-list

+—-+——————+—————————————-+———-+———+——-+—————————-+—————–+

| Id | Binary           | Host                                   | Zone     | Status  | State | Updated_at                 | Disabled Reason |

+—-+——————+—————————————-+———-+———+——-+—————————-+—————–+
| 1  | nova-consoleauth | ip-192-169-142-127.ip.secureserver.net | internal | enabled | up    | 2015-05-09T14:15:16.000000 | –               |
| 2  | nova-scheduler   | ip-192-169-142-127.ip.secureserver.net | internal | enabled | up    | 2015-05-09T14:15:17.000000 | –               |
| 3  | nova-conductor   | ip-192-169-142-127.ip.secureserver.net | internal | enabled | up    | 2015-05-09T14:15:16.000000 | –               |
| 4  | nova-cert        | ip-192-169-142-127.ip.secureserver.net | internal | enabled | up    | 2015-05-09T14:15:17.000000 | –               |
| 5  | nova-compute     | ip-192-169-142-137.ip.secureserver.net | nova     | enabled | up    | 2015-05-09T14:15:21.000000 | –               |
+—-+——————+—————————————-+———-+———+——-+—————————-+—————–+


Nova libvirt-xen driver fails to schedule instance under Xen 4.4.1 Hypervisor with libxl toolstack

April 13, 2015

UPDATE as of 16/04/2015
For now http://www.slideshare.net/xen_com_mgr/openstack-xenfinal
is supposed to work only with nova networking, per Anthony PERARD; Neutron appears to be an issue.
For details of the troubleshooting and diagnostics obtained (thanks to Ian Campbell) view
http://lists.xen.org/archives/html/xen-devel/2015-04/msg01856.html
END UPDATE

This post is written in regard to two publications from February 2015
First:   http://wiki.xen.org/wiki/OpenStack_via_DevStack
Second : http://www.slideshare.net/xen_com_mgr/openstack-xenfinal

Both of them are devoted to the same subject, the nova libvirt-xen driver. The second one states that everything should be fine once a somewhat mysterious patch gets merged into mainline libvirt. Neither of them works for me: errors show up in libxl-driver.log even with libvirt 1.2.14 (the most recent version at the time of writing).

For a better understanding of the problem being raised, view also https://ask.openstack.org/en/question/64942/nova-libvirt-xen-driver-and-patch-feb-2015-in-upstream-libvirt/

I followed the second, more accurately written, publication :-

On Ubuntu 14.04.2

# apt-get update
# apt-get -y upgrade
# apt-get install xen-hypervisor-4.4-amd64
# sudo reboot

$ git clone https://git.openstack.org/openstack-dev/devstack

Created local.conf under devstack folder as follows :-

[[local|localrc]]
HOST_IP=192.168.1.57
ADMIN_PASSWORD=secret
DATABASE_PASSWORD=$ADMIN_PASSWORD
RABBIT_PASSWORD=$ADMIN_PASSWORD
SERVICE_PASSWORD=$ADMIN_PASSWORD
SERVICE_TOKEN=a682f596-76f3-11e3-b3b2-e716f9080d50

FLOATING_RANGE=192.168.10.0/24
FLAT_INTERFACE=eth0
Q_FLOATING_ALLOCATION_POOL=start=192.168.10.150,end=192.168.10.254
PUBLIC_NETWORK_GATEWAY=192.168.10.15

# Useful logging options for debugging:
DEST=/opt/stack
LOGFILE=$DEST/logs/stack.sh.log
SCREEN_LOGDIR=$DEST/logs/screen

# The default fixed range (10.0.0.0/24) conflicted with an address
# range I was using locally.
FIXED_RANGE=10.254.1.0/24
NETWORK_GATEWAY=10.254.1.1

# Services
disable_service n-net
enable_service n-cauth
enable_service q-svc
enable_service q-agt
enable_service q-dhcp
enable_service q-l3
enable_service q-meta
enable_service horizon
disable_service tempest

# This is a Xen Project host:
LIBVIRT_TYPE=xen

Ran ./stack.sh and the installation completed successfully. Libvirt versions 1.2.2, 1.2.9 and 1.2.14 have been tested: the first one is the default on Trusty, while 1.2.9 and 1.2.14 were built and installed after stack.sh completed. For every libvirt version tested a fresh hardware instance of Ubuntu 14.04.2 was created.

Manual libvirt upgrade was done via :-

# apt-get build-dep libvirt
# tar xvzf libvirt-1.2.14.tar.gz -C /usr/src
# cd /usr/src/libvirt-1.2.14
# ./configure --prefix=/usr/
# make
# make install
# service libvirt-bin restart

root@ubuntu-system:~# virsh –connect xen:///
Welcome to virsh, the virtualization interactive terminal.

Type: ‘help’ for help with commands
‘quit’ to quit

virsh # version
Compiled against library: libvirt 1.2.14
Using library: libvirt 1.2.14
Using API: Xen 1.2.14
Running hypervisor: Xen 4.4.0

Per page 19 of the second post the xen.gz command line was tuned and the image vm_mode metadata was toggled :-

ubuntu@ubuntu-system:~/devstack$ nova image-meta cirros-0.3.2-x86_64-uec set vm_mode=HVM
ubuntu@ubuntu-system:~/devstack$ nova image-meta cirros-0.3.2-x86_64-uec delete vm_mode

An attempt to launch an instance (nova-compute is up) fails; on the Nova side n-sch.log reports the error "No available host found".

The libxl-driver.log reports :-

root@ubuntu-system:/var/log/libvirt/libxl# ls -l
total 32
-rw-r–r– 1 root root 30700 Apr 12 03:47 libxl-driver.log

**************************************************************************************

libxl: debug: libxl_dm.c:1320:libxl__spawn_local_dm: Spawning device-model /usr/bin/qemu-system-i386 with arguments:
libxl: debug: libxl_dm.c:1322:libxl__spawn_local_dm: /usr/bin/qemu-system-i386
libxl: debug: libxl_dm.c:1322:libxl__spawn_local_dm: -xen-domid
libxl: debug: libxl_dm.c:1322:libxl__spawn_local_dm: 2
libxl: debug: libxl_dm.c:1322:libxl__spawn_local_dm: -chardev
libxl: debug: libxl_dm.c:1322:libxl__spawn_local_dm: socket,id=libxl-cmd,path=/var/run/xen/qmp-libxl-2,server,nowait
libxl: debug: libxl_dm.c:1322:libxl__spawn_local_dm: -mon
libxl: debug: libxl_dm.c:1322:libxl__spawn_local_dm: chardev=libxl-cmd,mode=control
libxl: debug: libxl_dm.c:1322:libxl__spawn_local_dm: -nodefaults
libxl: debug: libxl_dm.c:1322:libxl__spawn_local_dm: -xen-attach
libxl: debug: libxl_dm.c:1322:libxl__spawn_local_dm: -name
libxl: debug: libxl_dm.c:1322:libxl__spawn_local_dm: instance-00000002
libxl: debug: libxl_dm.c:1322:libxl__spawn_local_dm: -vnc
libxl: debug: libxl_dm.c:1322:libxl__spawn_local_dm: 127.0.0.1:1
libxl: debug: libxl_dm.c:1322:libxl__spawn_local_dm: -display
libxl: debug: libxl_dm.c:1322:libxl__spawn_local_dm: none
libxl: debug: libxl_dm.c:1322:libxl__spawn_local_dm: -k
libxl: debug: libxl_dm.c:1322:libxl__spawn_local_dm: en-us
libxl: debug: libxl_dm.c:1322:libxl__spawn_local_dm: -machine
libxl: debug: libxl_dm.c:1322:libxl__spawn_local_dm: xenpv
libxl: debug: libxl_dm.c:1322:libxl__spawn_local_dm: -m
libxl: debug: libxl_dm.c:1322:libxl__spawn_local_dm: 513
libxl: debug: libxl_event.c:570:libxl__ev_xswatch_register: watch w=0x7f36cc001990 wpath=/local/domain/0/device-model/2/state token=3/3: register slotnum=3
libxl: debug: libxl_create.c:1356:do_domain_create: ao 0x7f36cc0012e0: inprogress: poller=0x7f36d8013130, flags=i
libxl: debug: libxl_event.c:514:watchfd_callback: watch w=0x7f36cc001990 wpath=/local/domain/0/device-model/2/state token=3/3: event epath=/local/domain/0/device-model/2/state
libxl: debug: libxl_event.c:514:watchfd_callback: watch w=0x7f36cc001990 wpath=/local/domain/0/device-model/2/state token=3/3: event epath=/local/domain/0/device-model/2/state
libxl: debug: libxl_event.c:606:libxl__ev_xswatch_deregister: watch w=0x7f36cc001990 wpath=/local/domain/0/device-model/2/state token=3/3: deregister slotnum=3
libxl: debug: libxl_event.c:618:libxl__ev_xswatch_deregister: watch w=0x7f36cc001990: deregister unregistered
libxl: debug: libxl_qmp.c:696:libxl__qmp_initialize: connected to /var/run/xen/qmp-libxl-2
libxl: debug: libxl_qmp.c:296:qmp_handle_response: message type: qmp
libxl: debug: libxl_qmp.c:546:qmp_send_prepare: next qmp command: ‘{
“execute”: “qmp_capabilities”,
“id”: 1
}

libxl: debug: libxl_qmp.c:296:qmp_handle_response: message type: return
libxl: debug: libxl_qmp.c:546:qmp_send_prepare: next qmp command: ‘{
“execute”: “query-chardev”,
“id”: 2
}

libxl: debug: libxl_qmp.c:296:qmp_handle_response: message type: return
libxl: debug: libxl_qmp.c:546:qmp_send_prepare: next qmp command: ‘{
“execute”: “query-vnc”,
“id”: 3
}

libxl: debug: libxl_qmp.c:296:qmp_handle_response: message type: return
libxl: debug: libxl_event.c:570:libxl__ev_xswatch_register: watch w=0x7f36f284b3e8 wpath=/local/domain/0/backend/vif/2/0/state token=3/4: register slotnum=3
libxl: debug: libxl_event.c:514:watchfd_callback: watch w=0x7f36f284b3e8 wpath=/local/domain/0/backend/vif/2/0/state token=3/4: event epath=/local/domain/0/backend/vif/2/0/state
libxl: debug: libxl_event.c:657:devstate_watch_callback: backend /local/domain/0/backend/vif/2/0/state wanted state 2 still waiting state 1
libxl: debug: libxl_event.c:514:watchfd_callback: watch w=0x7f36f284b3e8 wpath=/local/domain/0/backend/vif/2/0/state token=3/4: event epath=/local/domain/0/backend/vif/2/0/state
libxl: debug: libxl_event.c:653:devstate_watch_callback: backend /local/domain/0/backend/vif/2/0/state wanted state 2 ok
libxl: debug: libxl_event.c:606:libxl__ev_xswatch_deregister: watch w=0x7f36f284b3e8 wpath=/local/domain/0/backend/vif/2/0/state token=3/4: deregister slotnum=3
libxl: debug: libxl_event.c:618:libxl__ev_xswatch_deregister: watch w=0x7f36f284b3e8: deregister unregistered
libxl: debug: libxl_device.c:1023:device_hotplug: calling hotplug script: /etc/xen/scripts/vif-bridge online
libxl: debug: libxl_event.c:618:libxl__ev_xswatch_deregister: watch w=0x7f36f284b470: deregister unregistered
libxl: error: libxl_exec.c:118:libxl_report_child_exitstatus: /etc/xen/scripts/vif-bridge online [-1] exited with error status 1
libxl: error: libxl_device.c:1085:device_hotplug_child_death_cb: script: ip link set vif2.0 name tap5600079c-9e failed
libxl: debug: libxl_event.c:618:libxl__ev_xswatch_deregister: watch w=0x7f36f284b470: deregister unregistered
libxl: error: libxl_create.c:1226:domcreate_attach_vtpms: unable to add nic devices

libxl: debug: libxl_dm.c:1495:kill_device_model: Device Model signaled
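
The step that fails is the vif-bridge hotplug script renaming the Xen vif into the Neutron tap name. A minimal diagnostic sketch (device names taken from the log above; this only reproduces the rename by hand to see the kernel's own error, it is not a fix):

root@ubuntu-system:~# ip link show vif2.0
root@ubuntu-system:~# ip link show | grep tap5600079c-9e
root@ubuntu-system:~# ip link set vif2.0 down
root@ubuntu-system:~# ip link set vif2.0 name tap5600079c-9e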

 


Setup the most recent Nova Docker Driver via Devstack on Fedora 21

March 23, 2015

*********************************************************************************
UPDATE as 03/26/2015
To make the devstack configuration persistent between reboots on Fedora 21, i.e. restartable via ./rejoin-stack.sh, the following services must be enabled :-
*********************************************************************************
systemctl enable rabbitmq-server
systemctl enable openvswitch
systemctl enable httpd
systemctl enable mariadb
systemctl enable mysqld

File /etc/rc.d/rc.local should contain ( in my case ) :-

ip addr flush dev br-ex ;
ip addr add 192.168.10.15/24 dev br-ex ;
ip link set br-ex up ;
route add -net 10.254.1.0/24 gw 192.168.10.15 ;
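
On systemd-based Fedora 21 the rc-local compatibility unit only runs /etc/rc.d/rc.local if the file is marked executable, so (assuming the default rc-local setup) it is worth making sure of that once:

# chmod +x /etc/rc.d/rc.local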

System is supposed to be shutdown via :-
$sudo ./unstack.sh
********************************************************************************

This post follows up http://blog.oddbit.com/2015/02/06/installing-nova-docker-on-fedora-21/ ; however, RDO Juno is not pre-installed here, and the Nova-Docker driver is built first from the top commit of https://git.openstack.org/cgit/stackforge/nova-docker/ . The next step is :-

$ git clone https://git.openstack.org/openstack-dev/devstack
$ cd devstack

Create local.conf under devstack following either of the two links provided
and run ./stack.sh to perform an AIO OpenStack installation, just as it does
on Ubuntu 14.04. All the steps that keep stack.sh from crashing on F21 are described right below.

# yum -y install git docker-io fedora-repos-rawhide
# yum --enablerepo=rawhide install python-six  python-pip python-pbr systemd
# reboot
# yum -y install gcc python-devel ( required for driver build )

$ git clone http://github.com/stackforge/nova-docker.git
$ cd nova-docker
$ sudo pip install .
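
A quick sanity check that the driver really landed in site-packages; the module path below matches the compute_driver value used in local.conf (just a sketch):

$ python -c "import novadocker.virt.docker.driver" && echo "nova-docker driver importable"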

The driver build drops python-six down to 1.2; to raise it back to version 1.9 run

# yum --enablerepo=rawhide reinstall python-six
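
To confirm the reinstall actually brought six back to the expected level, a quick check:

$ python -c "import six; print six.__version__"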

Run devstack with Lars’s local.conf
per http://blog.oddbit.com/2015/02/11/installing-novadocker-with-devstack/
or view  http://bderzhavets.blogspot.com/2015/02/set-up-nova-docker-driver-on-ubuntu.html   for another version of local.conf
*****************************************************************************
My version of local.conf, which lets you define the floating pool as you need, is a bit more flexible than the original
*****************************************************************************
[[local|localrc]]
HOST_IP=192.168.1.57
ADMIN_PASSWORD=secret
MYSQL_PASSWORD=secret
RABBIT_PASSWORD=secret
SERVICE_PASSWORD=secret
FLOATING_RANGE=192.168.10.0/24
FLAT_INTERFACE=eth0
Q_FLOATING_ALLOCATION_POOL=start=192.168.10.150,end=192.168.10.254
PUBLIC_NETWORK_GATEWAY=192.168.10.15

SERVICE_TOKEN=super-secret-admin-token
VIRT_DRIVER=novadocker.virt.docker.DockerDriver

DEST=$HOME/stack
SERVICE_DIR=$DEST/status
DATA_DIR=$DEST/data
LOGFILE=$DEST/logs/stack.sh.log
LOGDIR=$DEST/logs

# The default fixed range (10.0.0.0/24) conflicted with an address
# range I was using locally.
FIXED_RANGE=10.254.1.0/24
NETWORK_GATEWAY=10.254.1.1

# Services

disable_service n-net
enable_service q-svc
enable_service q-agt
enable_service q-dhcp
enable_service q-l3
enable_service q-meta
enable_service horizon
disable_service tempest
# Introduce glance to docker images

[[post-config|$GLANCE_API_CONF]]
[DEFAULT]
container_formats=ami,ari,aki,bare,ovf,ova,docker

# Configure nova to use the nova-docker driver
[[post-config|$NOVA_CONF]]
[DEFAULT]
compute_driver=novadocker.virt.docker.DockerDriver

**************************************************************************************
After stack.sh completes, disable firewalld: devstack does not interact with Fedora's firewalld, while the OpenStack daemons it brings up require the corresponding ports to be opened.
***************************************************************************************

#  systemctl stop firewalld
#  systemctl disable firewalld

$ cd dev*
$ . openrc demo
$ neutron security-group-rule-create --protocol icmp \
--direction ingress --remote-ip-prefix 0.0.0.0/0 default
$ neutron security-group-rule-create --protocol tcp \
--port-range-min 22 --port-range-max 22 \
--direction ingress --remote-ip-prefix 0.0.0.0/0 default
$ neutron security-group-rule-create --protocol tcp \
--port-range-min 80 --port-range-max 80 \
--direction ingress --remote-ip-prefix 0.0.0.0/0 default

Uploading docker image to glance

$ . openrc admin
$  docker pull rastasheep/ubuntu-sshd:14.04
$ docker save rastasheep/ubuntu-sshd:14.04 | glance image-create --is-public=True --container-format=docker --disk-format=raw --name rastasheep/ubuntu-sshd:14.04

Launch new instance via uploaded image :-

$ . openrc demo
$ nova boot --image "rastasheep/ubuntu-sshd:14.04" --flavor m1.tiny \
  --nic net-id=private-net-id UbuntuDocker
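
The net-id above has to be the UUID of the tenant network, and to reach the instance over ssh a floating IP from the pool defined in local.conf must be attached. A possible sequence (the network name "private", the address 192.168.10.151 and the root/root credentials of the rastasheep image are illustrative assumptions, adjust to your environment):

$ neutron net-list | grep private
$ nova floating-ip-create public
$ nova floating-ip-associate UbuntuDocker 192.168.10.151
$ ssh root@192.168.10.151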

To provide internet access for the launched nova-docker instance run :-
# iptables -t nat -A POSTROUTING -o eth0 -j MASQUERADE
Horizon is unavailable, regardless of being installed.


Set up Two Node RDO Juno ML2&OVS&VXLAN Cluster running Docker Hypervisor on Compute Node (CentOS 7, kernel 3.10.0-123.20.1.el7.x86_64)

February 6, 2015

For real applications what matters is getting the Nova-Docker driver set up successfully on Compute Nodes. It is nice when everything works on an AIO Juno host or on the Controller, but that is just a demonstration. Maybe I did something wrong, or maybe it is due to some other reason, but kernel version 3.10.0-123.20.1.el7.x86_64 seems to be the first that brings success on RDO Juno Compute nodes.

Follow http://lxer.com/module/newswire/view/209851/index.html  up to section

“Set up Nova-Docker on Controller && Network Node”

***************************************************
Set up  Nova-Docker Driver on Compute Node
***************************************************

# yum install python-pbr
# yum install docker-io -y
# git clone https://github.com/stackforge/nova-docker
# cd nova-docker
# git checkout stable/juno
# python setup.py install
# systemctl start docker
# systemctl enable docker
# chmod 660  /var/run/docker.sock
#  mkdir /etc/nova/rootwrap.d

************************************************
Create the docker.filters file:
************************************************

vi /etc/nova/rootwrap.d/docker.filters

Insert Lines

# nova-rootwrap command filters for setting up network in the docker driver
# This file should be owned by (and only-writeable by) the root user
[Filters]
# nova/virt/docker/driver.py: 'ln', '-sf', '/var/run/netns/.*'
ln: CommandFilter, /bin/ln, root

*****************************************
Add line /etc/glance/glance-api.conf
*****************************************
container_formats=ami,ari,aki,bare,ovf,ova,docker
:wq

******************************
Update nova.conf
******************************

vi /etc/nova/nova.conf

set "compute_driver = novadocker.virt.docker.DockerDriver"

************************
Restart Services
************************

usermod -G docker nova

systemctl restart openstack-nova-compute (on Compute)
systemctl status openstack-nova-compute
systemctl restart openstack-glance-api (on Controller&amp;&amp;Network )

At this point `scp  /root/keystonerc_admin compute:/root`  from Controller to Compute Node

*************************************************************
Test installation Nova-Docker Driver on Compute Node (RDO Juno , CentOS 7,kernel 3.10.0-123.20.1.el7.x86_64 )
**************************************************************

*******************************************

Setup Ubuntu 14.04 with SSH access

*******************************************

First on Compute node

# docker pull rastasheep/ubuntu-sshd:14.04

# . keystonerc_admin

# docker save rastasheep/ubuntu-sshd:14.04 | glance image-create --is-public=True --container-format=docker --disk-format=raw --name rastasheep/ubuntu-sshd:14.04
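
Before launching anything it doesn't hurt to confirm the image landed in glance with the docker container format; a quick check might look like this (just a sketch):

# glance image-list | grep rastasheep
# glance image-show rastasheep/ubuntu-sshd:14.04 | grep container_format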

Second, on the Controller node launch a Nova-Docker container (running on Compute) via the dashboard and assign a floating IP address

Pic15          Pic16

 

*********************************************
Verify `docker ps ` on Compute Node
*********************************************

[root@juno1dev ~]# ssh 192.168.1.137

Last login: Fri Feb  6 15:38:49 2015 from juno1dev.localdomain

[root@juno2dev ~]# docker ps

CONTAINER ID        IMAGE                          COMMAND               CREATED             STATUS              PORTS               NAMES

ef23d030e35a        rastasheep/ubuntu-sshd:14.04   “/usr/sbin/sshd -D”   7 hours ago         Up 6 minutes                            nova-211bcb54-35ba-4f0a-a150-7e73546d8f46

[root@juno2dev ~]# ip netns

ef23d030e35af63c17698d1f4c6f7d8023c29455e9dff0288ce224657828993a
ca9aa6cb527f2302985817d3410a99c6f406f4820ed6d3f62485781d50f16590
fea73a69337334b36625e78f9a124e19bf956c73b34453f1994575b667e7401b
58834d3bbea1bffa368724527199d73d0d6fde74fa5d24de9cca41c29f978e31
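
The namespace names above appear to be the full docker container IDs, so a particular container's networking can be inspected directly; for instance (IDs taken from the `docker ps` output above):

[root@juno2dev ~]# docker inspect --format '{{.Id}}' ef23d030e35a
[root@juno2dev ~]# ip netns exec ef23d030e35af63c17698d1f4c6f7d8023c29455e9dff0288ce224657828993a ip addr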
********************************
On Controller run :-
********************************

[root@juno1dev ~]# ssh root@192.168.1.173
root@192.168.1.173’s password:
Last login: Fri Feb  6 12:11:19 2015 from 192.168.1.127

root@instance-0000002b:~# apt-get update

Ign http://archive.ubuntu.com trusty InRelease
Ign http://archive.ubuntu.com trusty-updates InRelease
Ign http://archive.ubuntu.com trusty-security InRelease
Hit http://archive.ubuntu.com trusty Release.gpg
Get:1 http://archive.ubuntu.com trusty-updates Release.gpg [933 B]
Get:2 http://archive.ubuntu.com trusty-security Release.gpg [933 B]
Hit http://archive.ubuntu.com trusty Release
Get:3 http://archive.ubuntu.com trusty-updates Release [62.0 kB]
Get:4 http://archive.ubuntu.com trusty-security Release [62.0 kB]
Hit http://archive.ubuntu.com trusty/main Sources
Hit http://archive.ubuntu.com trusty/restricted Sources
Hit http://archive.ubuntu.com trusty/universe Sources
Hit http://archive.ubuntu.com trusty/main amd64 Packages
Hit http://archive.ubuntu.com trusty/restricted amd64 Packages
Hit http://archive.ubuntu.com trusty/universe amd64 Packages
Get:5 http://archive.ubuntu.com trusty-updates/main Sources [208 kB]
Get:6 http://archive.ubuntu.com trusty-updates/restricted Sources [1874 B]
Get:7 http://archive.ubuntu.com trusty-updates/universe Sources [124 kB]
Get:8 http://archive.ubuntu.com trusty-updates/main amd64 Packages [524 kB]
Get:9 http://archive.ubuntu.com trusty-updates/restricted amd64 Packages [14.8 kB]
Get:10 http://archive.ubuntu.com trusty-updates/universe amd64 Packages [318 kB]
Get:11 http://archive.ubuntu.com trusty-security/main Sources [79.8 kB]
Get:12 http://archive.ubuntu.com trusty-security/restricted Sources [1874 B]
Get:13 http://archive.ubuntu.com trusty-security/universe Sources [19.1 kB]
Get:14 http://archive.ubuntu.com trusty-security/main amd64 Packages [251 kB]
Get:15 http://archive.ubuntu.com trusty-security/restricted amd64 Packages [14.8 kB]
Get:16 http://archive.ubuntu.com trusty-security/universe amd64 Packages [110 kB]
Fetched 1793 kB in 9s (199 kB/s)
Reading package lists… Done

If network operations like `apt-get install … ` run afterwards with no problems, the Nova-Docker driver is installed and works on the Compute Node.

**************************************************************************************
Finally, I've set up openstack-nova-compute on the Controller to run several instances with the Qemu/Libvirt driver :-
**************************************************************************************

Pic17          Pic18


Set up Nova-Docker on OpenStack RDO Juno on top of Fedora 21

January 11, 2015
****************************************************************************************
UPDATE as of 01/31/2015 to get Docker && Nova-Docker working on Fedora 21
****************************************************************************************
Per https://github.com/docker/docker/issues/10280
download systemd-218-3.fc22.src.rpm, build the 218-3 rpms and upgrade systemd.
First, install the packages required for rpmbuild :-

$ sudo yum install audit-libs-devel autoconf  automake cryptsetup-devel \
dbus-devel docbook-style-xsl elfutils-devel  \
glib2-devel  gnutls-devel  gobject-introspection-devel \
gperf     gtk-doc intltool kmod-devel libacl-devel \
libblkid-devel     libcap-devel libcurl-devel libgcrypt-devel \
libidn-devel libmicrohttpd-devel libmount-devel libseccomp-devel \
libselinux-devel libtool pam-devel python3-devel python3-lxml \
qrencode-devel  python2-devel  xz-devel

Second:-
$ cd rpmbuild/SPECS
$ rpmbuild -bb systemd.spec
$ cd ../RPMS/x86_64
Third:-

$ sudo yum install libgudev1-218-3.fc21.x86_64.rpm \
libgudev1-devel-218-3.fc21.x86_64.rpm \
systemd-218-3.fc21.x86_64.rpm \
systemd-compat-libs-218-3.fc21.x86_64.rpm \
systemd-debuginfo-218-3.fc21.x86_64.rpm \
systemd-devel-218-3.fc21.x86_64.rpm \
systemd-journal-gateway-218-3.fc21.x86_64.rpm \
systemd-libs-218-3.fc21.x86_64.rpm \
systemd-python-218-3.fc21.x86_64.rpm \
systemd-python3-218-3.fc21.x86_64.rpm
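
After a reboot it is worth verifying that the freshly built systemd is really the one running; it should report 218:

$ systemctl --version | head -1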

****************************************************************************************

Recently Filip Krikava made a fork on github and created a Juno branch using

the latest commit "Fix the problem when an image is not located in the local docker image registry".

Master https://github.com/stackforge/nova-docker.git is targeting the latest Nova (Kilo release); the forked branch is supposed to work for Juno, reasonably including commits after "Merge oslo.i18n". The posting below is meant to test the Juno branch https://github.com/fikovnik/nova-docker.git

Quote ([2]) :-

The Docker driver is a hypervisor driver for Openstack Nova Compute. It was introduced with the Havana release, but lives out-of-tree for Icehouse and Juno. Being out-of-tree has allowed the driver to reach maturity and feature-parity faster than would be possible should it have remained in-tree. It is expected the driver will return to mainline Nova in the Kilo release.

Install required packages to install nova-docker driver per https://wiki.openstack.org/wiki/Docker

***************************
Initial docker setup
***************************

# yum install docker-io -y
# yum install -y python-pip git
# git clone https://github.com/fikovnik/nova-docker.git
# cd nova-docker
# git branch -v -a

master                1ed1820 A note no firewall drivers.
remotes/origin/HEAD   -> origin/master
remotes/origin/juno   1a08ea5 Fix the problem when an image
is not located in the local docker image registry.
remotes/origin/master 1ed1820 A note no firewall drivers.
# git checkout -b juno origin/juno
# python setup.py install
# systemctl start docker
# systemctl enable docker
# chmod 660  /var/run/docker.sock
# pip install pbr
#  mkdir /etc/nova/rootwrap.d

******************************
Update nova.conf
******************************

vi /etc/nova/nova.conf

set "compute_driver = novadocker.virt.docker.DockerDriver"

************************************************
Next, create the docker.filters file:
************************************************

vi /etc/nova/rootwrap.d/docker.filters

Insert Lines
# nova-rootwrap command filters for setting up network in the docker driver
# This file should be owned by (and only-writeable by) the root user
[Filters]
# nova/virt/docker/driver.py: 'ln', '-sf', '/var/run/netns/.*'
ln: CommandFilter, /bin/ln, root

*****************************************
Add line /etc/glance/glance-api.conf
*****************************************

container_formats=ami,ari,aki,bare,ovf,ova,docker

:wq

************************
Restart Services
************************

usermod -G docker nova

systemctl restart openstack-nova-compute
systemctl status openstack-nova-compute
systemctl restart openstack-glance-api

*******************************************************************************
Verification that the nova-docker driver builds on Fedora 21

*******************************************************************************
The build below extends phusion/baseimage to start several daemons at a time when the nova-docker container is launched. It has been tested on Nova-Docker RDO Juno on top of CentOS 7 (view Set up GlassFish 4.1 Nova-Docker Container via phusion/baseimage on RDO Juno). Here it is reproduced on Nova-Docker RDO Juno on top of Fedora 21, coming after a `packstack --allinone` Juno installation on Fedora 21, which ran pretty smoothly.

 FROM phusion/baseimage

MAINTAINER Boris Derzhavets

RUN apt-get update
RUN echo 'root:root' |chpasswd
RUN sed -ri 's/^PermitRootLogin\s+.*/PermitRootLogin yes/' /etc/ssh/sshd_config
RUN sed -ri 's/UsePAM yes/#UsePAM yes/g' /etc/ssh/sshd_config
##################################################
# Hack to avoid external start SSH session inside container,
# otherwise sshd won’t start when docker container loads
##################################################
RUN echo "/usr/sbin/sshd > log & " >>  /etc/my_init.d/00_regen_ssh_host_keys.sh

RUN apt-get update && apt-get install -y wget
RUN wget --no-check-certificate --no-cookies --header "Cookie: oraclelicense=accept-securebackup-cookie" http://download.oracle.com/otn-pub/java/jdk/8u25-b17/jdk-8u25-linux-x64.tar.gz
RUN cp  jdk-8u25-linux-x64.tar.gz /opt
RUN cd /opt; tar -zxvf jdk-8u25-linux-x64.tar.gz
ENV PATH /opt/jdk1.8.0_25/bin:$PATH

RUN apt-get update &&  \
apt-get install -y wget unzip pwgen expect net-tools vim &&  \
wget http://download.java.net/glassfish/4.1/release/glassfish-4.1.zip &&  \
unzip glassfish-4.1.zip -d /opt &&  \
rm glassfish-4.1.zip &&  \
apt-get clean &&  \
rm -rf /var/lib/apt/lists/*
ENV PATH /opt/glassfish4/bin:$PATH

ADD run.sh /etc/my_init.d/
ADD database.sh  /etc/my_init.d/

ADD change_admin_password.sh /change_admin_password.sh
ADD change_admin_password_func.sh /change_admin_password_func.sh
ADD enable_secure_admin.sh /enable_secure_admin.sh
RUN chmod +x /*.sh /etc/my_init.d/*.sh

# 4848 (administration), 8080 (HTTP listener), 8181 (HTTPS listener), 9009 (JPDA debug port)

EXPOSE 22  4848 8080 8181 9009

CMD ["/sbin/my_init"]

***************************************************************
Another option not to touch 00_regen_ssh_host_keys.sh
***************************************************************
# RUN echo "/usr/sbin/sshd > log & " >>  /etc/my_init.d/00_regen_ssh_host_keys.sh

***************************************************************
Create the script 01_sshd_start.sh in the build folder
***************************************************************

#!/bin/bash
/usr/sbin/sshd > log &
and insert in Dockerfile:-
ADD 01_sshd_start.sh /etc/my_init.d/

********************************************************************************
I had to update the database.sh script as follows to make the nova-docker container
start on RDO Juno on top of Fedora 21 ( view http://lxer.com/module/newswire/view/209277/index.html ).
********************************************************************************

# cat database.sh

#!/bin/bash
set -e
asadmin start-database --dbhost 127.0.0.1 --terse=true > log &

The important change is binding dbhost to 127.0.0.1, which is not required when loading a plain docker container. The Nova-Docker driver ( http://www.linux.com/community/blogs/133-general-linux/799569-running-nova-docker-on-openstack-rdo-juno-centos-7 ) seems to be more picky about the --dbhost key value of the Derby database.

*********************
Build image
*********************

[root@junolxc docker-glassfish41]# ls -l

total 44
-rw-r--r--. 1 root root   217 Jan  7 00:27 change_admin_password_func.sh
-rw-r--r--. 1 root root   833 Jan  7 00:27 change_admin_password.sh
-rw-r--r--. 1 root root   473 Jan  7 00:27 circle.yml
-rw-r--r--. 1 root root    44 Jan  7 00:27 database.sh
-rw-r--r--. 1 root root  1287 Jan  7 00:27 Dockerfile
-rw-r--r--. 1 root root   167 Jan  7 00:27 enable_secure_admin.sh
-rw-r--r--. 1 root root 11323 Jan  7 00:27 LICENSE
-rw-r--r--. 1 root root  2123 Jan  7 00:27 README.md
-rw-r--r--. 1 root root   354 Jan  7 00:27 run.sh
[root@junolxc docker-glassfish41]# docker build -t derby/docker-glassfish41 .
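
Before pushing the image into glance it can be smoke-tested with plain docker; a possible quick check (the published 4848 port is only for this local test):

[root@junolxc docker-glassfish41]# docker images | grep docker-glassfish41
[root@junolxc docker-glassfish41]# docker run -d -p 4848:4848 derby/docker-glassfish41
[root@junolxc docker-glassfish41]# docker ps | grep docker-glassfish41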

******************************************
RDO (AIO install)  Juno status on Fedora 21
*******************************************

[root@fedora21 ~(keystone_admin)]# openstack-status
== Nova services ==
openstack-nova-api:                     active
openstack-nova-cert:                    active
openstack-nova-compute:                 active
openstack-nova-network:                 inactive  (disabled on boot)
openstack-nova-scheduler:               active
openstack-nova-conductor:               active
== Glance services ==
openstack-glance-api:                   active
openstack-glance-registry:              active
== Keystone service ==
openstack-keystone:                     active
== Horizon service ==
openstack-dashboard:                    active
== neutron services ==
neutron-server:                         active
neutron-dhcp-agent:                     active
neutron-l3-agent:                       active
neutron-metadata-agent:                 active
neutron-lbaas-agent:                    inactive  (disabled on boot)
neutron-openvswitch-agent:              active
== Swift services ==
openstack-swift-proxy:                  active
openstack-swift-account:                active
openstack-swift-container:              active
openstack-swift-object:                 active
== Cinder services ==
openstack-cinder-api:                   active
openstack-cinder-scheduler:             active
openstack-cinder-volume:                active
openstack-cinder-backup:                active
== Ceilometer services ==
openstack-ceilometer-api:               active
openstack-ceilometer-central:           active
openstack-ceilometer-compute:           active
openstack-ceilometer-collector:         active
openstack-ceilometer-alarm-notifier:    active
openstack-ceilometer-alarm-evaluator:   active
openstack-ceilometer-notification:      active
== Support services ==
libvirtd:                               active
openvswitch:                            active
dbus:                                   active
target:                                 inactive  (disabled on boot)
rabbitmq-server:                        active
memcached:                              active
== Keystone users ==
+———————————-+————+———+———————-+
|                id                |    name    | enabled |        email         |
+———————————-+————+———+———————-+
| edfb1cd3c4d54401ac810b14e8d953f2 |   admin    |   True  |    root@localhost    |
| 783df7494254423aaed3bfe0cc2262af | ceilometer |   True  | ceilometer@localhost |
| 955e7619fc6749f68843030d9da6cef3 |   cinder   |   True  |   cinder@localhost   |
| 1ed0f9f7705341b79f58190ea31160fc |    demo    |   True  |                      |
| 68362c2c7ad642ab9ea31164cad35268 |   glance   |   True  |   glance@localhost   |
| b7dec54d6b984c16afca2935cc09c478 |  neutron   |   True  |  neutron@localhost   |
| c35cad56c0e548aaa6907e0da3eca569 |    nova    |   True  |    nova@localhost    |
| a959def1f10e48d6959a70bc930e8522 |   swift    |   True  |   swift@localhost    |
+———————————-+————+———+———————-+
== Glance images ==
+————————————–+———————————+————-+——————+————+——–+
| ID                                   | Name                            | Disk Format | Container Format | Size       | Status |
+————————————–+———————————+————-+——————+————+——–+
| 08b235e5-7f2b-4bc4-959e-582482037019 | cirros                          | qcow2       | bare             | 13200896   | active |
| fcb9a93a-6a28-413f-853b-4ad362aed0c5 | derby/docker-glassfish41:latest | raw         | docker           | 1112110592 | active |
| 032952ba-5bb3-41cc-9a2a-d4c76d197571 | dba07/docker-glassfish41:latest | raw         | docker           | 1112110592 | active |
| ce0adab4-3f09-45cc-81fa-cd8cc6acc7c1 | rastasheep/ubuntu-sshd:14.04    | raw         | docker           | 263785472  | active |
| 230040b3-c5d1-4bf0-b5e4-9f112fd71c70 | Ubuntu14.04-011014              | qcow2       | bare             | 256311808  | active |
+————————————–+———————————+————-+——————+————+——–+
== Nova managed services ==
+—-+——————+———————-+———-+———+——-+—————————-+—————–+
| Id | Binary           | Host                 | Zone     | Status  | State | Updated_at                 | Disabled Reason |
+—-+——————+———————-+———-+———+——-+—————————-+—————–+
| 1  | nova-consoleauth | fedora21.localdomain | internal | enabled | up    | 2015-01-11T09:45:21.000000 | –               |
| 2  | nova-scheduler   | fedora21.localdomain | internal | enabled | up    | 2015-01-11T09:45:22.000000 | –               |
| 3  | nova-conductor   | fedora21.localdomain | internal | enabled | up    | 2015-01-11T09:45:22.000000 | –               |
| 5  | nova-compute     | fedora21.localdomain | nova     | enabled | up    | 2015-01-11T09:45:20.000000 | –               |
| 6  | nova-cert        | fedora21.localdomain | internal | enabled | up    | 2015-01-11T09:45:29.000000 | –               |
+—-+——————+———————-+———-+———+——-+—————————-+—————–+
== Nova networks ==
+————————————–+————–+——+
| ID                                   | Label        | Cidr |
+————————————–+————–+——+
| 046e1e6f-b09c-4daf-9732-3ed0b6e5fdf8 | public       | –    |
| 76709a1a-61e7-4488-9ecf-96dbd88d4fb6 | private      | –    |
| 7b2c1d87-cea1-40aa-a1d7-dbac3cc99798 | demo_network | –    |
+————————————–+————–+——+
== Nova instance flavors ==
+—-+———–+———–+——+———–+——+——-+————-+———–+
| ID | Name      | Memory_MB | Disk | Ephemeral | Swap | VCPUs | RXTX_Factor | Is_Public |
+—-+———–+———–+——+———–+——+——-+————-+———–+
| 1  | m1.tiny   | 512       | 1    | 0         |      | 1     | 1.0         | True      |
| 2  | m1.small  | 2048      | 20   | 0         |      | 1     | 1.0         | True      |
| 3  | m1.medium | 4096      | 40   | 0         |      | 2     | 1.0         | True      |
| 4  | m1.large  | 8192      | 80   | 0         |      | 4     | 1.0         | True      |
| 5  | m1.xlarge | 16384     | 160  | 0         |      | 8     | 1.0         | True      |
+—-+———–+———–+——+———–+——+——-+————-+———–+
== Nova instances ==
+—-+——+——–+————+————-+———-+
| ID | Name | Status | Task State | Power State | Networks |
+—-+——+——–+————+————-+———-+
+—-+——+——–+————+————-+———-+

*************************
Upload image to glance
*************************

# . keystonerc_admin

# docker save derby/docker-glassfish41:latest | glance image-create --is-public=True --container-format=docker --disk-format=raw --name derby/docker-glassfish41:latest

**********************
Launch instance
**********************
# .  keystonerc_demo

# nova boot --image "derby/docker-glassfish41:latest" --flavor m1.small --key-name oskey57 --nic net-id=demo_network-id DerbyGlassfish41
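
Once the instance goes ACTIVE, a quick cross-check from both sides confirms that a docker container is really backing it (this is an AIO host, so both commands run on the same box):

# nova list | grep DerbyGlassfish41
# docker ps | grep docker-glassfish41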

Derby1F21

Derby2F21

Derby3F21

Derby5F21


Set up GlassFish 4.1 Nova-Docker Container via docker’s phusion/baseimage on RDO Juno

January 9, 2015

The problem here is that phusion/baseimage, per https://github.com/phusion/baseimage-docker , should provide ssh access to the container; however it doesn't. When working with a plain docker container there is an easy workaround suggested by Mykola Gurov in http://stackoverflow.com/questions/27816298/cannot-get-ssh-access-to-glassfish-4-1-docker-container
# docker exec container-id /usr/sbin/sshd -D
*******************************************************************************
To bring sshd back to life, create the script 01_sshd_start.sh in the build folder
*******************************************************************************
#!/bin/bash

if [[ ! -e /etc/ssh/ssh_host_rsa_key ]]; then
echo "No SSH host key available. Generating one..."
export LC_ALL=C
export DEBIAN_FRONTEND=noninteractive
dpkg-reconfigure openssh-server
echo "SSH KEYS regenerated by Boris just in case !"
fi

/usr/sbin/sshd > log &
echo "SSHD started !"

and insert in Dockerfile:-

ADD 01_sshd_start.sh /etc/my_init.d/ 

Following below is the Dockerfile used to build the image for the GlassFish 4.1 nova-docker container. It extends phusion/baseimage and starts three daemons at a time when the nova-docker instance built from this image is launched via the Nova-Docker driver on Juno.

FROM phusion/baseimage
MAINTAINER Boris Derzhavets

RUN apt-get update
RUN echo 'root:root' |chpasswd
RUN sed -ri 's/^PermitRootLogin\s+.*/PermitRootLogin yes/' /etc/ssh/sshd_config
RUN sed -ri 's/UsePAM yes/#UsePAM yes/g' /etc/ssh/sshd_config

RUN apt-get update && apt-get install -y wget
RUN wget --no-check-certificate --no-cookies --header "Cookie: oraclelicense=accept-securebackup-cookie" http://download.oracle.com/otn-pub/java/jdk/8u25-b17/jdk-8u25-linux-x64.tar.gz
RUN cp jdk-8u25-linux-x64.tar.gz /opt
RUN cd /opt; tar -zxvf jdk-8u25-linux-x64.tar.gz
ENV PATH /opt/jdk1.8.0_25/bin:$PATH
RUN apt-get update && \

apt-get install -y wget unzip pwgen expect net-tools vim && \
wget http://download.java.net/glassfish/4.1/release/glassfish-4.1.zip && \
unzip glassfish-4.1.zip -d /opt && \
rm glassfish-4.1.zip && \
apt-get clean && \
rm -rf /var/lib/apt/lists/*

ENV PATH /opt/glassfish4/bin:$PATH

ADD 01_sshd_start.sh /etc/my_init.d/
ADD run.sh /etc/my_init.d/
ADD database.sh /etc/my_init.d/
ADD change_admin_password.sh /change_admin_password.sh
ADD change_admin_password_func.sh /change_admin_password_func.sh
ADD enable_secure_admin.sh /enable_secure_admin.sh
RUN chmod +x /*.sh /etc/my_init.d/*.sh

# 4848 (administration), 8080 (HTTP listener), 8181 (HTTPS listener), 9009 (JPDA debug port)

EXPOSE 22 4848 8080 8181 9009

CMD ["/sbin/my_init"]

********************************************************************************
I had to update the database.sh script as follows to make the nova-docker container
start on RDO Juno
********************************************************************************
# cat database.sh

#!/bin/bash
set -e
asadmin start-database --dbhost 127.0.0.1 --terse=true > log &

The important change is binding dbhost to 127.0.0.1, which is not required when loading a plain docker container. The Nova-Docker driver ( http://www.linux.com/community/blogs/133-general-linux/799569-running-nova-docker-on-openstack-rdo-juno-centos-7 ) seems to be more picky about the --dbhost key value of the Derby database.

*********************
Build image
*********************
[root@junolxc docker-glassfish41]# ls -l
total 44
-rw-r--r--. 1 root root 217 Jan 7 00:27 change_admin_password_func.sh
-rw-r--r--. 1 root root 833 Jan 7 00:27 change_admin_password.sh
-rw-r--r--. 1 root root 473 Jan 7 00:27 circle.yml
-rw-r--r--. 1 root root 44 Jan 7 00:27 database.sh
-rw-r--r--. 1 root root 1287 Jan 7 00:27 Dockerfile
-rw-r--r--. 1 root root 167 Jan 7 00:27 enable_secure_admin.sh
-rw-r--r--. 1 root root 11323 Jan 7 00:27 LICENSE
-rw-r--r--. 1 root root 2123 Jan 7 00:27 README.md
-rw-r--r--. 1 root root 354 Jan 7 00:27 run.sh

[root@junolxc docker-glassfish41]# docker build -t boris/docker-glassfish41 .

*************************
Upload image to glance
*************************
# . keystonerc_admin
# docker save boris/docker-glassfish41:latest | glance image-create --is-public=True --container-format=docker --disk-format=raw --name boris/docker-glassfish41:latest

**********************
Launch instance
**********************
# . keystonerc_demo
# nova boot --image "boris/docker-glassfish41:latest" --flavor m1.small --key-name osxkey --nic net-id=demo_network-id OracleGlassfish41

[root@junodocker (keystone_admin)]# ssh root@192.168.1.175
root@192.168.1.175’s password:
Last login: Fri Jan 9 10:09:50 2015 from 192.168.1.57

root@instance-00000045:~# ps -ef

UID PID PPID C STIME TTY TIME CMD
root 1 0 0 10:15 ? 00:00:00 /usr/bin/python3 -u /sbin/my_init
root 12 1 0 10:15 ? 00:00:00 /usr/sbin/sshd

root 46 1 0 10:15 ? 00:00:08 /opt/jdk1.8.0_25/bin/java -Djava.library.path=/opt/glassfish4/glassfish/lib -cp /opt/glassfish4/glassfish/lib/asadmin/cli-optional.jar:/opt/glassfish4/javadb/lib/derby.jar:/opt/glassfish4/javadb/lib/derbytools.jar:/opt/glassfish4/javadb/lib/derbynet.jar:/opt/glassfish4/javadb/lib/derbyclient.jar com.sun.enterprise.admin.cli.optional.DerbyControl start 127.0.0.1 1527 true /opt/glassfish4/glassfish/databases

root 137 1 0 10:15 ? 00:00:00 /bin/bash /etc/my_init.d/run.sh
root 358 137 0 10:15 ? 00:00:05 java -jar /opt/glassfish4/bin/../glassfish/lib/client/appserver-cli.jar start-domain –debug=false -w

root 375 358 0 10:15 ? 00:02:59 /opt/jdk1.8.0_25/bin/java -cp /opt/glassfish4/glassfish/modules/glassfish.jar -XX:+UnlockDiagnosticVMOptions -XX:NewRatio=2 -XX:MaxPermSize=192m -Xmx512m -client -javaagent:/opt/glassfish4/glassfish/lib/monitor/flashlight-agent.jar -Djavax.xml.accessExternalSchema=all -Djavax.net.ssl.trustStore=/opt/glassfish4/glassfish/domains/domain1/config/cacerts.jks -Djdk.corba.allowOutputStreamSubclass=true -Dfelix.fileinstall.dir=/opt/glassfish4/glassfish/modules/autostart/ -Dorg.glassfish.additionalOSGiBundlesToStart=org.apache.felix.shell,org.apache.felix.gogo.runtime,org.apache.felix.gogo.shell,org.apache.felix.gogo.command,org.apache.felix.shell.remote,org.apache.felix.fileinstall -Dcom.sun.aas.installRoot=/opt/glassfish4/glassfish -Dfelix.fileinstall.poll=5000 -Djava.endorsed.dirs=/opt/glassfish4/glassfish/modules/endorsed:/opt/glassfish4/glassfish/lib/endorsed -Djava.security.policy=/opt/glassfish4/glassfish/domains/domain1/config/server.policy -Dosgi.shell.telnet.maxconn=1 -Dfelix.fileinstall.bundles.startTransient=true -Dcom.sun.enterprise.config.config_environment_factory_class=com.sun.enterprise.config.serverbeans.AppserverConfigEnvironmentFactory -Dfelix.fileinstall.log.level=2 -Djavax.net.ssl.keyStore=/opt/glassfish4/glassfish/domains/domain1/config/keystore.jks -Djava.security.auth.login.config=/opt/glassfish4/glassfish/domains/domain1/config/login.conf -Dfelix.fileinstall.disableConfigSave=false -Dfelix.fileinstall.bundles.new.start=true -Dcom.sun.aas.instanceRoot=/opt/glassfish4/glassfish/domains/domain1 -Dosgi.shell.telnet.port=6666 -Dgosh.args=–nointeractive -Dcom.sun.enterprise.security.httpsOutboundKeyAlias=s1as -Dosgi.shell.telnet.ip=127.0.0.1 -DANTLR_USE_DIRECT_CLASS_LOADING=true -Djava.awt.headless=true -Dcom.ctc.wstx.returnNullForDefaultNamespace=true -Djava.ext.dirs=/opt/jdk1.8.0_25/lib/ext:/opt/jdk1.8.0_25/jre/lib/ext:/opt/glassfish4/glassfish/domains/domain1/lib/ext -Djdbc.drivers=org.apache.derby.jdbc.ClientDriver -Djava.library.path=/opt/glassfish4/glassfish/lib:/usr/java/packages/lib/amd64:/usr/lib64:/lib64:/lib:/usr/lib com.sun.enterprise.glassfish.bootstrap.ASMain -upgrade false -domaindir /opt/glassfish4/glassfish/domains/domain1 -read-stdin true -asadmin-args –host,,,localhost,,,–port,,,4848,,,–secure=false,,,–terse=false,,,–echo=false,,,–interactive=false,,,start-domain,,,–verbose=false,,,–watchdog=true,,,–debug=false,,,–domaindir,,,/opt/glassfish4/glassfish/domains,,,domain1 -domainname domain1 -instancename server -type DAS -verbose false -asadmin-classpath /opt/glassfish4/glassfish/lib/client/appserver-cli.jar -debug false -asadmin-classname com.sun.enterprise.admin.cli.AdminMain

root 1186 12 0 14:02 ? 00:00:00 sshd: root@pts/0
root 1188 1186 0 14:02 pts/0 00:00:00 -bash
root 1226 1188 0 15:45 pts/0 00:00:00 ps -ef

Screenshot from 2015-01-09 09_44_16

Screenshot from 2015-01-09 10_02_57

The original idea of using the ./run.sh script comes from
https://registry.hub.docker.com/u/bonelli/glassfish-4.1/

[root@junodocker ~(keystone_admin)]# docker logs 65a3f4cf1994

*** Running /etc/my_init.d/00_regen_ssh_host_keys.sh…
No SSH host key available. Generating one…
Creating SSH2 RSA key; this may take some time …
Creating SSH2 DSA key; this may take some time …
Creating SSH2 ECDSA key; this may take some time …
Creating SSH2 ED25519 key; this may take some time …
invoke-rc.d: policy-rc.d denied execution of restart.

*** Running /etc/my_init.d/database.sh…
Starting database in Network Server mode on host 127.0.0.1 and port 1527.
——— Derby Network Server Information ——–
Version: CSS10100/10.10.2.0 – (1582446) Build: 1582446 DRDA Product Id: CSS10100
— listing properties —
derby.drda.traceDirectory=/opt/glassfish4/glassfish/databases
derby.drda.maxThreads=0
derby.drda.sslMode=off
derby.drda.keepAlive=true
derby.drda.minThreads=0
derby.drda.portNumber=1527
derby.drda.logConnections=false
derby.drda.timeSlice=0
derby.drda.startNetworkServer=false
derby.drda.host=127.0.0.1
derby.drda.traceAll=false
—————— Java Information ——————
Java Version: 1.8.0_25
Java Vendor: Oracle Corporation
Java home: /opt/jdk1.8.0_25/jre
Java classpath: /opt/glassfish4/glassfish/lib/asadmin/cli-optional.jar:/opt/glassfish4/javadb/lib/derby.jar:/opt/glassfish4/javadb/lib/derbytools.jar:/opt/glassfish4/javadb/lib/derbynet.jar:/opt/glassfish4/javadb/lib/derbyclient.jar
OS name: Linux
OS architecture: amd64
OS version: 3.10.0-123.el7.x86_64
Java user name: root
Java user home: /root
Java user dir: /
java.specification.name: Java Platform API Specification
java.specification.version: 1.8
java.runtime.version: 1.8.0_25-b17
——— Derby Information ——–
[/opt/glassfish4/javadb/lib/derby.jar] 10.10.2.0 – (1582446)
[/opt/glassfish4/javadb/lib/derbytools.jar] 10.10.2.0 – (1582446)
[/opt/glassfish4/javadb/lib/derbynet.jar] 10.10.2.0 – (1582446)
[/opt/glassfish4/javadb/lib/derbyclient.jar] 10.10.2.0 – (1582446)
——————————————————
—————– Locale Information —————–

Current Locale : [English/United States [en_US]]
Found support for locale: [cs]
version: 10.10.2.0 – (1582446)
Found support for locale: [de_DE]
version: 10.10.2.0 – (1582446)
Found support for locale: [es]
version: 10.10.2.0 – (1582446)
Found support for locale: [fr]
version: 10.10.2.0 – (1582446)
Found support for locale: [hu]
version: 10.10.2.0 – (1582446)
Found support for locale: [it]
version: 10.10.2.0 – (1582446)
Found support for locale: [ja_JP]
version: 10.10.2.0 – (1582446)
Found support for locale: [ko_KR]
version: 10.10.2.0 – (1582446)
Found support for locale: [pl]
version: 10.10.2.0 – (1582446)
Found support for locale: [pt_BR]
version: 10.10.2.0 – (1582446)
Found support for locale: [ru]
version: 10.10.2.0 – (1582446)
Found support for locale: [zh_CN]
version: 10.10.2.0 – (1582446)
Found support for locale: [zh_TW]
version: 10.10.2.0 – (1582446)
——————————————————
——————————————————

Starting database in the background.

Log redirected to /opt/glassfish4/glassfish/databases/derby.log.
Command start-database executed successfully.
*** Running /etc/my_init.d/run.sh…
Bad Network Configuration. DNS can not resolve the hostname:
java.net.UnknownHostException: instance-00000045: instance-00000045: unknown error

Waiting for domain1 to start …….
Successfully started the domain : domain1
domain Location: /opt/glassfish4/glassfish/domains/domain1
Log File: /opt/glassfish4/glassfish/domains/domain1/logs/server.log
Admin Port: 4848
Command start-domain executed successfully.
=> Modifying password of admin to random in Glassfish
spawn asadmin –user admin change-admin-password
Enter the admin password>
Enter the new admin password>
Enter the new admin password again>
Command change-admin-password executed successfully.
=> Enabling secure admin login
spawn asadmin enable-secure-admin
Enter admin user name> admin
Enter admin password for user “admin”>
You must restart all running servers for the change in secure admin to take effect.
Command enable-secure-admin executed successfully.
=> Done!
========================================================================
You can now connect to this Glassfish server using:
admin:fCZNVP80JiyI
Please remember to change the above password as soon as possible!
========================================================================
=> Restarting Glassfish server
Waiting for the domain to stop .
Command stop-domain executed successfully.
=> Starting and running Glassfish server
=> Debug mode is set to: false


Running Nova-Docker on OpenStack Juno (CentOS 7)

December 16, 2014

Recently Filip Krikava made a fork on github and created a Juno branch using the latest commit "Fix the problem when an image is not located in the local docker image registry" ( https://github.com/fikovnik/nova-docker/commit/016cc98e2f8950ae3bf5e27912be20c52fc9e40e ).
Master https://github.com/stackforge/nova-docker.git is targeting the latest Nova (Kilo release); the forked branch is supposed to work for Juno, reasonably including commits after "Merge oslo.i18n". The posting below is meant to test the Juno branch https://github.com/fikovnik/nova-docker.git

Quote ([2]) :-

The Docker driver is a hypervisor driver for Openstack Nova Compute. It was introduced with the Havana release, but lives out-of-tree for Icehouse and Juno. Being out-of-tree has allowed the driver to reach maturity and feature-parity faster than would be possible should it have remained in-tree. It is expected the driver will return to mainline Nova in the Kilo release.

This post generally follows ([2]) with detailed instructions for installing the nova-docker driver on RDO Juno (CentOS 7) ([3]).

Install the packages required for the nova-docker driver per https://wiki.openstack.org/wiki/Docker

***************************

Initial docker setup

***************************

# yum install docker-io -y
# yum install -y python-pip git
# git clone https://github.com/fikovnik/nova-docker.git
# cd nova-docker
# git branch -v -a

#  master                1ed1820 A note no firewall drivers.
remotes/origin/HEAD   -> origin/master
remotes/origin/juno   1a08ea5 Fix the problem when an image
is not located in the local docker image registry.
remotes/origin/master 1ed1820 A note no firewall drivers.
# git checkout -b juno origin/juno
# python setup.py install
# systemctl start docker
# systemctl enable docker
# chmod 660  /var/run/docker.sock
# pip install pbr
#  mkdir /etc/nova/rootwrap.d

******************************

Update nova.conf

******************************

vi /etc/nova/nova.conf

set compute_driver = novadocker.virt.docker.DockerDriver
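
The compute_driver option lives in the [DEFAULT] section of /etc/nova/nova.conf; a minimal excerpt of the resulting file (a sketch, other options left as generated by packstack) looks like:

[DEFAULT]
compute_driver = novadocker.virt.docker.DockerDriver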

************************************************

Next, create the docker.filters file:

************************************************

vi /etc/nova/rootwrap.d/docker.filters

Insert Lines

# nova-rootwrap command filters for setting up network in the docker driver

# This file should be owned by (and only-writeable by) the root user

[Filters]

# nova/virt/docker/driver.py: ‘ln’, ‘-sf’, ‘/var/run/netns/.*’

ln: CommandFilter, /bin/ln, root

*****************************************

Add the following line to /etc/glance/glance-api.conf

*****************************************

container_formats=ami,ari,aki,bare,ovf,ova,docker

:wq

************************

Restart Services

************************

usermod -G docker nova

systemctl restart openstack-nova-compute

systemctl status openstack-nova-compute

systemctl restart openstack-glance-api
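
After the restart it is worth confirming that nova-compute actually loaded the Docker driver; a quick check, assuming the default RDO log location, is:

# grep -i docker /var/log/nova/nova-compute.log | tail

A traceback here usually means the rootwrap filter or the docker group membership above was missed.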

******************************

Verify the docker install

******************************

[root@juno ~]# docker run -i -t fedora /bin/bash

Unable to find image ‘fedora’ locally

fedora:latest: The image you are pulling has been verified

00a0c78eeb6d: Pull complete

2f6ab0c1646e: Pull complete

511136ea3c5a: Already exists

Status: Downloaded newer image for fedora:latest

bash-4.3# cat /etc/issue

Fedora release 21 (Twenty One)

Kernel \r on an \m (\l)

[root@juno ~]# docker ps -a

CONTAINER ID        IMAGE                    COMMAND             CREATED             STATUS                        PORTS               NAMES

738e54f9efd4        fedora:latest            “/bin/bash”         3 minutes ago       Exited (127) 25 seconds ago                       stoic_lumiere

14fd0cbba76d        ubuntu:latest            “/bin/bash”         3 minutes ago       Exited (0) 3 minutes ago                          prickly_hypatia

ef1a726d1cd4        fedora:latest            “/bin/bash”         5 minutes ago       Exited (0) 3 minutes ago                          drunk_shockley

0a2da90a269f        ubuntu:latest            “/bin/bash”         11 hours ago        Exited (0) 11 hours ago                           thirsty_kowalevski

5a3288ce0e8e        ubuntu:latest            “/bin/bash”         11 hours ago        Exited (0) 11 hours ago                           happy_leakey

21e84951eabd        tutum/wordpress:latest   “/run.sh”           16 hours ago        Up About an hour                                  nova-bf5f7eb9-900d-48bf-a230-275d65813b0f

*******************

Setup WordPress

*******************

# docker pull tutum/wordpress

# . keystonerc_admin

# docker save tutum/wordpress | glance image-create --is-public=True --container-format=docker --disk-format=raw --name tutum/wordpress



[root@juno ~(keystone_admin)]# glance image-list
+--------------------------------------+-----------------+-------------+------------------+-----------+--------+
| ID                                   | Name            | Disk Format | Container Format | Size      | Status |
+--------------------------------------+-----------------+-------------+------------------+-----------+--------+
| c6d01e60-56c2-443f-bf87-15a0372bc2d9 | cirros          | qcow2       | bare             | 13200896  | active |
| 9d59e7ad-35b4-4c3f-9103-68f85916f36e | tutum/wordpress | raw         | docker           | 517639680 | active |
+--------------------------------------+-----------------+-------------+------------------+-----------+--------+

********************

Start container

********************

$ . keystonerc_demo

[root@juno ~(keystone_demo)]# neutron net-list

+————————————–+————–+——————————————————-+

| id                                   | name         | subnets                                               |

+————————————–+————–+——————————————————-+

| ccfc4bb1-696d-4381-91d7-28ce7c9cb009 | private      | 6c0a34ab-e3f1-458c-b24a-96f5a2149878 10.0.0.0/24      |

| 32c14896-8d47-4a56-b3c6-0dd823f03089 | public       | b1799aef-3f69-429c-9881-f81c74d83060 192.169.142.0/24 |

| a65bff8f-e397-491b-aa97-955864bec2f9 | demo_private | 69012862-f72e-4cd2-a4fc-4106d431cf2f 70.0.0.0/24      |

+————————————–+————–+——————————————————-+

$ nova boot --image "tutum/wordpress" --flavor m1.tiny --key-name osxkey --nic net-id=a65bff8f-e397-491b-aa97-955864bec2f9 WordPress

[root@juno ~(keystone_demo)]# nova list

+————————————–+———–+———+————+————-+—————————————–+

| ID                                   | Name      | Status  | Task State | Power State | Networks                                |

+————————————–+———–+———+————+————-+—————————————–+

| bf5f7eb9-900d-48bf-a230-275d65813b0f |  WordPress   | ACTIVE  | –          | Running     | demo_private=70.0.0.16, 192.169.142.153 |

+—————————-———-+———–+———+————+————-+—————————————–+

[root@juno ~(keystone_demo)]# docker ps -a

CONTAINER ID        IMAGE                    COMMAND             CREATED             STATUS                   PORTS               NAMES

21e84951eabd        tutum/wordpress:latest   “/run.sh”           About an hour ago   Up 11 minutes                                nova-bf5f7eb9-900d-48bf-a230-275d65813b0f

**************************

Starting WordPress

**************************

Immediately after the VM starts (on the non-default libvirt subnet 192.169.142.0/24) the WordPress status is SHUTOFF, so we start WordPress (the browser is launched to the Juno VM 192.169.142.45 from the KVM hypervisor server) :-

   Browser launched to the WordPress container 192.169.142.153 from the KVM hypervisor server

 

 **********************************************************************************

The floating IP assigned to the WordPress container has been used to launch the browser:-

**********************************************************************************

*******************************************************************************************

Another sample demonstrating nova-docker container functionality: the browser is launched to the WordPress nova-docker container (192.169.142.155) from the KVM hypervisor server hosting the libvirt subnet (192.169.142.0/24)

*******************************************************************************************

 

*****************

MySQL Setup

*****************

  # docker pull tutum/mysql

  # .   keystonerc_admin

*****************************

Creating Glance Image

*****************************

#   docker save tutum/mysql:latest | glance image-create --is-public=True --container-format=docker --disk-format=raw --name tutum/mysql:latest

****************************************

Starting Nova-Docker container

****************************************

# .   keystonerc_demo

#   nova boot --image "tutum/mysql:latest" --flavor m1.tiny --key-name osxkey --nic net-id=5fcd01ac-bc8e-450d-be67-f0c274edd041 mysql

 

 [root@ip-192-169-142-45 ~(keystone_demo)]# nova list

+————————————–+—————+——–+————+————-+—————————————–+

| ID                                   | Name          | Status | Task State | Power State | Networks                                |

+————————————–+—————+——–+————+————-+—————————————–+

| 3dbf981f-f28c-4abe-8fd1-09b8b8cad930 | WordPress     | ACTIVE | –          | Running     | demo_network=70.0.0.16, 192.169.142.153 |

| 39eef361-1329-44d9-b05a-f6b4b8693aa3 | mysql         | ACTIVE | –          | Running     | demo_network=70.0.0.19, 192.169.142.155 |

| 626bd8e0-cf1a-4891-aafc-620c464e8a94 | tutum/hipache | ACTIVE | –          | Running     | demo_network=70.0.0.18, 192.169.142.154 |

+————————————–+—————+——–+————+————-+—————————————–+

[root@ip-192-169-142-45 ~(keystone_demo)]# docker ps -a

CONTAINER ID        IMAGE                          COMMAND               CREATED             STATUS                         PORTS               NAMES

3da1e94892aa        tutum/mysql:latest             “/run.sh”             25 seconds ago      Up 23 seconds                                      nova-39eef361-1329-44d9-b05a-f6b4b8693aa3

77538873a273        tutum/hipache:latest           “/run.sh”             30 minutes ago                                                         condescending_leakey

844c75ca5a0e        tutum/hipache:latest           “/run.sh”             31 minutes ago                                                         condescending_turing

f477605840d0        tutum/hipache:latest           “/run.sh”             42 minutes ago      Up 31 minutes                                      nova-626bd8e0-cf1a-4891-aafc-620c464e8a94

3e2fe064d822        rastasheep/ubuntu-sshd:14.04   “/usr/sbin/sshd -D”   About an hour ago   Exited (0) About an hour ago                       test_sshd

8e79f9d8e357        fedora:latest                  “/bin/bash”           About an hour ago   Exited (0) About an hour ago                       evil_colden

9531ab33db8d        ubuntu:latest                  “/bin/bash”           About an hour ago   Exited (0) About an hour ago                       angry_bardeen

df6f3c9007a7        tutum/wordpress:latest         “/run.sh”             2 hours ago         Up About an hour                                   nova-3dbf981f-f28c-4abe-8fd1-09b8b8cad930

 

[root@ip-192-169-142-45 ~(keystone_demo)]# docker logs 3da1e94892aa

=> An empty or uninitialized MySQL volume is detected in /var/lib/mysql

=> Installing MySQL …

=> Done!

=> Creating admin user …

=> Waiting for confirmation of MySQL service startup, trying 0/13 …

=> Creating MySQL user admin with random password

=> Done!

========================================================================

You can now connect to this MySQL Server using:

mysql -uadmin -pfXs5UarEYaow -h -P

Please remember to change the above password as soon as possible!
MySQL user ‘root’ has no password but only allows local connections
========================================================================
141218 20:45:31 mysqld_safe Can't log to error log and syslog at the same time.
Remove all --log-error configuration options for --syslog to take effect.

141218 20:45:31 mysqld_safe Logging to ‘/var/log/mysql/error.log’.
141218 20:45:31 mysqld_safe Starting mysqld daemon with databases from /var/lib/mysql

[root@ip-192-169-142-45 ~(keystone_demo)]# mysql -uadmin -pfXs5UarEYaow -h 192.169.142.155  -P 3306
Welcome to the MariaDB monitor.  Commands end with ; or \g.
Your MySQL connection id is 1
Server version: 5.5.40-0ubuntu0.14.04.1 (Ubuntu)

Copyright (c) 2000, 2014, Oracle, Monty Program Ab and others.

Type ‘help;’ or ‘\h’ for help. Type ‘\c’ to clear the current input statement.

MySQL [(none)]> show databases ;
+——————–+
| Database           |
+——————–+
| information_schema |
| mysql              |
| performance_schema |
+——————–+
3 rows in set (0.01 sec)

MySQL [(none)]>

*******************************************

Setup Ubuntu 14.04 with SSH access

*******************************************

# docker pull rastasheep/ubuntu-sshd:14.04

# . keystonerc_admin

# docker save rastasheep/ubuntu-sshd:14.04 | glance image-create --is-public=True --container-format=docker --disk-format=raw --name rastasheep/ubuntu-sshd:14.04

# . keystonerc_demo

# nova boot --image "rastasheep/ubuntu-sshd:14.04" --flavor m1.tiny --key-name osxkey --nic net-id=5fcd01ac-bc8e-450d-be67-f0c274edd041 ubuntuTrusty

***********************************************************

Log into the dashboard && assign a floating IP via the dashboard :-

***********************************************************
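
The same can be done from the CLI instead of the dashboard; a rough sketch (the external network name "public" is an assumption, substitute the name used in your deployment):

# neutron floatingip-create public
# nova floating-ip-associate ubuntuTrusty <floating-ip-allocated-above>

neutron floatingip-create allocates an address from the external pool, and nova floating-ip-associate binds it to the instance.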

  [root@ip-192-169-142-45 ~(keystone_demo)]# nova list

+————————————–+————–+———+————+————-+—————————————–+

| ID                                   | Name         | Status  | Task State | Power State | Networks                                |

+————————————–+————–+———+————+————-+—————————————–+

| 3dbf981f-f28c-4abe-8fd1-09b8b8cad930 | WordPress    | SHUTOFF | –          | Shutdown    | demo_network=70.0.0.16, 192.169.142.153 |

| 7bbf887f-167c-461e-9ee0-dd4d43605c9e | lamp         | ACTIVE  | –          | Running     | demo_network=70.0.0.20, 192.169.142.156 |

| 39eef361-1329-44d9-b05a-f6b4b8693aa3 | mysql        | SHUTOFF | –          | Shutdown    | demo_network=70.0.0.19, 192.169.142.155 |

| f21dc265-958e-4ed0-9251-31c4bbab35f4 | ubuntuTrusty | ACTIVE  | –          | Running     | demo_network=70.0.0.21, 192.169.142.157 |

+————————————–+————–+———+————+————-+—————————————–+

[root@ip-192-169-142-45 ~(keystone_demo)]# ssh root@192.169.142.157

root@192.169.142.157’s password:

Last login: Fri Dec 19 09:19:40 2014 from ip-192-169-142-45.ip.secureserver.net

root@instance-0000000d:~# cat /etc/issue

Ubuntu 14.04.1 LTS \n \l

root@instance-0000000d:~# ifconfig

lo        Link encap:Local Loopback

inet addr:127.0.0.1  Mask:255.0.0.0

inet6 addr: ::1/128 Scope:Host

UP LOOPBACK RUNNING  MTU:65536  Metric:1

RX packets:0 errors:0 dropped:0 overruns:0 frame:0

TX packets:0 errors:0 dropped:0 overruns:0 carrier:0

collisions:0 txqueuelen:0

RX bytes:0 (0.0 B)  TX bytes:0 (0.0 B)

nse49711e9-93 Link encap:Ethernet  HWaddr fa:16:3e:32:5e:d8

inet addr:70.0.0.21  Bcast:70.0.0.255  Mask:255.255.255.0

inet6 addr: fe80::f816:3eff:fe32:5ed8/64 Scope:Link

UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1

RX packets:2574 errors:0 dropped:0 overruns:0 frame:0

TX packets:1653 errors:0 dropped:0 overruns:0 carrier:0

collisions:0 txqueuelen:1000

RX bytes:2257920 (2.2 MB)  TX bytes:255582 (255.5 KB)

root@instance-0000000d:~# df -h

Filesystem                                                                                         Size  Used Avail Use% Mounted on

/dev/mapper/docker-253:1-4600578-76893e146987bf4b58b42ff6ed80892df938ffba108f22c7a4591b18990e0438  9.8G  302M  9.0G   4% /

tmpfs                                                                                              1.9G     0  1.9G   0% /dev

shm                                                                                                 64M     0   64M   0% /dev/shm

/dev/mapper/centos-root                                                                             36G  9.8G   26G  28% /etc/hosts

tmpfs                                                                                              1.9G     0  1.9G   0% /run/secrets

tmpfs                                                                                              1.9G     0  1.9G   0% /proc/kcore

 

 References

1. http://cloudssky.com/en/blog/Nova-Docker-on-OpenStack-RDO-Juno/

2. https://www.mirantis.com/openstack-portal/external-tutorials/nova-docker-juno/


LVMiSCSI cinder backend for RDO Juno on CentOS 7

November 9, 2014

The current post follows up http://lxer.com/module/newswire/view/207415/index.html. RDO Juno has been installed on the Controller and Compute nodes via packstack as described in the link at lxer.com. The iSCSI target implementation on CentOS 7 differs significantly from CentOS 6.5 and is based on the CLI utility targetcli and the target service. With Enterprise Linux 7, both Red Hat and CentOS, there is a big change in the management of iSCSI targets: the software runs as part of the standard systemd structure. Consequently, there are significant changes in the multi-backend cinder architecture of RDO Juno running on CentOS 7 or Fedora 21 and utilizing LVM-based iSCSI targets.

Create the following entries in /etc/cinder/cinder.conf on the Controller (which, in the case of a two-node cluster, works as the Storage node as well).

#######################

enabled_backends=lvm51,lvm52

#######################

[lvm51]

iscsi_helper=lioadm

volume_group=cinder-volumes51

volume_driver=cinder.volume.drivers.lvm.LVMISCSIDriver

iscsi_ip_address=192.168.1.127

volume_backend_name=LVM_iSCSI51

[lvm52]

iscsi_helper=lioadm

volume_group=cinder-volumes52

volume_driver=cinder.volume.drivers.lvm.LVMISCSIDriver

iscsi_ip_address=192.168.1.127

volume_backend_name=LVM_iSCSI52

 

VGs cinder-volumes52 and cinder-volumes51 are created on /dev/sda6 and /dev/sdb1 respectively

# pvcreate /dev/sda6

# vgcreate cinder-volumes52  /dev/sda6
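
The analogous commands for the second volume group mentioned above are:

# pvcreate /dev/sdb1
# vgcreate cinder-volumes51  /dev/sdb1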

Then issue :-

[root@juno1 ~(keystone_admin)]# cinder type-create lvms

+————————————–+——+

|                  ID                  | Name |

+————————————–+——+

| 64414f3a-7770-4958-b422-8db0c3e2f433 | lvms  |

+————————————–+——+

[root@juno1 ~(keystone_admin)]# cinder type-create lvmz

+————————————–+———+

|                  ID                  |   Name  |

+————————————–+———+

| 29917269-d73f-4c28-b295-59bfbda5d044 | lvmz |

+————————————–+———+

[root@juno1 ~(keystone_admin)]# cinder type-list

+————————————–+———+

|                  ID                  |   Name  |

+————————————–+———+

| 29917269-d73f-4c28-b295-59bfbda5d044 |  lvmz   |

| 64414f3a-7770-4958-b422-8db0c3e2f433 |  lvms   |

+————————————–+———+

[root@juno1 ~(keystone_admin)]# cinder type-key lvmz set volume_backend_name=LVM_iSCSI51

[root@juno1 ~(keystone_admin)]# cinder type-key lvms set volume_backend_name=LVM_iSCSI52
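
To double-check that the keys landed on the right volume types, list the extra specs:

[root@juno1 ~(keystone_admin)]# cinder extra-specs-list

Each type should show its volume_backend_name pointing at LVM_iSCSI51 or LVM_iSCSI52 respectively.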

Then enable and start service target:-

[root@juno1 ~(keystone_admin)]#   systemctl enable target

[root@juno1 ~(keystone_admin)]#   service target start

[root@juno1 ~(keystone_admin)]# service target status

Redirecting to /bin/systemctl status  target.service

target.service – Restore LIO kernel target configuration

Loaded: loaded (/usr/lib/systemd/system/target.service; enabled)

Active: active (exited) since Wed 2014-11-05 13:23:09 MSK; 44min ago

Process: 1611 ExecStart=/usr/bin/targetctl restore (code=exited, status=0/SUCCESS)

Main PID: 1611 (code=exited, status=0/SUCCESS)

CGroup: /system.slice/target.service

Nov 05 13:23:07 juno1.localdomain systemd[1]: Starting Restore LIO kernel target configuration…

Nov 05 13:23:09 juno1.localdomain systemd[1]: Started Restore LIO kernel target configuration.

Now all changes made by creating cinder volumes of the lvms and lvmz types (via the dashboard's volume-create form with its volume type dropdown, or via the cinder CLI) will be persistent in the targetcli> ls output between reboots
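
A quick way to see this is to open targetcli after creating a volume or two; each cinder volume shows up as its own iSCSI target with an iqn.2010-10.org.openstack:volume-<id> name (the same names that appear in the iscsiadm discovery output further below):

[root@juno1 ~(keystone_admin)]# targetcli ls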

[root@juno1 ~(keystone_boris)]# cinder list

+————————————–+——–+——————+——+————-+———-+————————————–+

|                  ID                  | Status |   Display Name   | Size | Volume Type | Bootable |             Attached to              |

+————————————–+——–+——————+——+————-+———-+————————————–+

| 3a4f6878-530a-4a28-87bb-92ee256f63ea | in-use | UbuntuUTLV510851 |  5   |     lvmz    |   true   | efb1762e-6782-4895-bf2b-564f14105b5b |

| 51528876-405d-4a15-abc2-61ad72fc7d7e | in-use |   CentOS7LVG51   |  10  |     lvmz    |   true   | ba3e87fa-ee81-42fc-baed-c59ca6c8a100 |

| ca0694ae-7e8d-4c84-aad8-3f178416dec6 | in-use |  VF20LVG520711   |  7   |     lvms    |   true   | 51a20959-0a0c-4ef6-81ec-2edeab6e3588 |

| dc9e31f0-b27f-4400-a666-688365126f67 | in-use | UbuntuUTLV520711 |  7   |     lvms    |   true   | 1fe7d2c3-58ae-4ee8-8f5f-baf334195a59 |

+————————————–+——–+——————+——+————-+———-+————————————–+

Compare the volume IDs above with the targetcli> ls output

 

  

  

The next snapshot demonstrates lvms && lvmz volumes attached to the corresponding nova instances utilizing the LVMiSCSI cinder backend.

 

On Compute Node iscsiadm output will look as follows :-

[root@juno2 ~]# iscsiadm -m discovery -t sendtargets -p 192.168.1.127

192.168.1.127:3260,1 iqn.2010-10.org.openstack:volume-3a4f6878-530a-4a28-87bb-92ee256f63ea

192.168.1.127:3260,1 iqn.2010-10.org.openstack:volume-ca0694ae-7e8d-4c84-aad8-3f178416dec6

192.168.1.127:3260,1 iqn.2010-10.org.openstack:volume-dc9e31f0-b27f-4400-a666-688365126f67

192.168.1.127:3260,1 iqn.2010-10.org.openstack:volume-51528876-405d-4a15-abc2-61ad72fc7d7e
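
Nova-compute performs the iSCSI login automatically when a volume is attached; the sessions it has opened for the attached volumes can also be listed on the Compute node:

[root@juno2 ~]# iscsiadm -m session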

References

1.  https://www.centos.org/forums/viewtopic.php?f=47&t=48591


RDO Juno Set up Two Real Node (Controller+Compute) Gluster 3.5.2 Cluster ML2&OVS&VXLAN on CentOS 7

November 3, 2014

The post below follows up http://cloudssky.com/en/blog/RDO-OpenStack-Juno-ML2-VXLAN-2-Node-Deployment-On-CentOS-7-With-Packstack/; however, the answer file provided here allows creating the Controller && Compute Node in a single run. Based on the RDO Juno release as of 10/27/2014, it doesn't require creating the OVS bridge br-ex and OVS port enp2s0 on the Compute Node. It also doesn't install the nova-compute service on the Controller. The Gluster 3.5.2 setup is also performed in a way which differs from the similar procedure on the IceHouse && Havana RDO releases. Two boxes have been set up, each one having 2 NICs (enp2s0, enp5s1), for the Controller && Compute Node setup. Before running `packstack --answer-file=TwoNodeVXLAN.txt`, SELINUX was set to permissive on both nodes. Both enp5s1 interfaces were assigned IPs and set up to support the VXLAN tunnel (192.168.0.127, 192.168.0.137). The firewalld and NetworkManager services are disabled; the IPv4 firewall with iptables and the network service are enabled and running. Packstack is bound to the public IP of interface enp2s0, 192.168.1.127; the Compute Node is 192.168.1.137 (view answer file).

I also have to note that, in regard to LVMiSCSI cinder backend support on CentOS 7, the post http://theurbanpenguin.com/wp/?p=3403 is misleading: the name of the service that makes changes done in targetcli persistent between reboots is "target", not "targetd".

To set up the LIO iSCSI target on CentOS 7 (activate LIO kernel support) you have to issue :-
# systemctl enable target
# systemctl start target
# systemctl status target -l
target.service – Restore LIO kernel target configuration
   Loaded: loaded (/usr/lib/systemd/system/target.service; enabled)
   Active: active (exited) since Sat 2014-11-08 14:45:06 MSK; 3h 26min ago
  Process: 1661 ExecStart=/usr/bin/targetctl restore (code=exited, status=0/SUCCESS)
 Main PID: 1661 (code=exited, status=0/SUCCESS)
   CGroup: /system.slice/target.service

Nov 01 14:45:06 juno1.localdomain systemd[1]: Started Restore LIO kernel target configuration.

 

Setup configuration

– Controller node: Nova, Keystone, Cinder, Glance, Neutron (using Open vSwitch plugin && VXLAN )
– Compute node: Nova (nova-compute), Neutron (openvswitch-agent)

juno1.localdomain   –  Controller (192.168.1.127)

juno2.localdomain   –  Compute   (192.168.1.137)

Answer File :-

[general]

CONFIG_SSH_KEY=/root/.ssh/id_rsa.pub

CONFIG_DEFAULT_PASSWORD=

CONFIG_MARIADB_INSTALL=y

CONFIG_GLANCE_INSTALL=y

CONFIG_CINDER_INSTALL=y

CONFIG_NOVA_INSTALL=y

CONFIG_NEUTRON_INSTALL=y

CONFIG_HORIZON_INSTALL=y

CONFIG_SWIFT_INSTALL=y

CONFIG_CEILOMETER_INSTALL=y

CONFIG_HEAT_INSTALL=n

CONFIG_CLIENT_INSTALL=y

CONFIG_NTP_SERVERS=

CONFIG_NAGIOS_INSTALL=y

EXCLUDE_SERVERS=

CONFIG_DEBUG_MODE=n

CONFIG_CONTROLLER_HOST=192.168.1.127

CONFIG_COMPUTE_HOSTS=192.168.1.137

CONFIG_NETWORK_HOSTS=192.168.1.127

CONFIG_VMWARE_BACKEND=n

CONFIG_UNSUPPORTED=n

CONFIG_VCENTER_HOST=

CONFIG_VCENTER_USER=

CONFIG_VCENTER_PASSWORD=

CONFIG_VCENTER_CLUSTER_NAME=

CONFIG_STORAGE_HOST=192.168.1.127

CONFIG_USE_EPEL=y

CONFIG_REPO=

CONFIG_RH_USER=

CONFIG_SATELLITE_URL=

CONFIG_RH_PW=

CONFIG_RH_OPTIONAL=y

CONFIG_RH_PROXY=

CONFIG_RH_PROXY_PORT=

CONFIG_RH_PROXY_USER=

CONFIG_RH_PROXY_PW=

CONFIG_SATELLITE_USER=

CONFIG_SATELLITE_PW=

CONFIG_SATELLITE_AKEY=

CONFIG_SATELLITE_CACERT=

CONFIG_SATELLITE_PROFILE=

CONFIG_SATELLITE_FLAGS=

CONFIG_SATELLITE_PROXY=

CONFIG_SATELLITE_PROXY_USER=

CONFIG_SATELLITE_PROXY_PW=

CONFIG_AMQP_BACKEND=rabbitmq

CONFIG_AMQP_HOST=192.168.1.127

CONFIG_AMQP_ENABLE_SSL=n

CONFIG_AMQP_ENABLE_AUTH=n

CONFIG_AMQP_NSS_CERTDB_PW=PW_PLACEHOLDER

CONFIG_AMQP_SSL_PORT=5671

CONFIG_AMQP_SSL_CERT_FILE=/etc/pki/tls/certs/amqp_selfcert.pem

CONFIG_AMQP_SSL_KEY_FILE=/etc/pki/tls/private/amqp_selfkey.pem

CONFIG_AMQP_SSL_SELF_SIGNED=y

CONFIG_AMQP_AUTH_USER=amqp_user

CONFIG_AMQP_AUTH_PASSWORD=PW_PLACEHOLDER

CONFIG_MARIADB_HOST=192.168.1.127

CONFIG_MARIADB_USER=root

CONFIG_MARIADB_PW=7207ae344ed04957

CONFIG_KEYSTONE_DB_PW=abcae16b785245c3

CONFIG_KEYSTONE_REGION=RegionOne

CONFIG_KEYSTONE_ADMIN_TOKEN=3ad2de159f9649afb0c342ba57e637d9

CONFIG_KEYSTONE_ADMIN_PW=7049f834927e4468

CONFIG_KEYSTONE_DEMO_PW=bf737b785cfa4398

CONFIG_KEYSTONE_TOKEN_FORMAT=UUID

CONFIG_KEYSTONE_SERVICE_NAME=keystone

CONFIG_GLANCE_DB_PW=41264fc52ffd4fe8

CONFIG_GLANCE_KS_PW=f6a9398960534797

CONFIG_GLANCE_BACKEND=file

CONFIG_CINDER_DB_PW=5ac08c6d09ba4b69

CONFIG_CINDER_KS_PW=c8cb1ecb8c2b4f6f

CONFIG_CINDER_BACKEND=lvm

CONFIG_CINDER_VOLUMES_CREATE=y

CONFIG_CINDER_VOLUMES_SIZE=20G

CONFIG_CINDER_GLUSTER_MOUNTS=

CONFIG_CINDER_NFS_MOUNTS=

CONFIG_CINDER_NETAPP_LOGIN=

CONFIG_CINDER_NETAPP_PASSWORD=

CONFIG_CINDER_NETAPP_HOSTNAME=

CONFIG_CINDER_NETAPP_SERVER_PORT=80

CONFIG_CINDER_NETAPP_STORAGE_FAMILY=ontap_cluster

CONFIG_CINDER_NETAPP_TRANSPORT_TYPE=http

CONFIG_CINDER_NETAPP_STORAGE_PROTOCOL=nfs

CONFIG_CINDER_NETAPP_SIZE_MULTIPLIER=1.0

CONFIG_CINDER_NETAPP_EXPIRY_THRES_MINUTES=720

CONFIG_CINDER_NETAPP_THRES_AVL_SIZE_PERC_START=20

CONFIG_CINDER_NETAPP_THRES_AVL_SIZE_PERC_STOP=60

CONFIG_CINDER_NETAPP_NFS_SHARES_CONFIG=

CONFIG_CINDER_NETAPP_VOLUME_LIST=

CONFIG_CINDER_NETAPP_VFILER=

CONFIG_CINDER_NETAPP_VSERVER=

CONFIG_CINDER_NETAPP_CONTROLLER_IPS=

CONFIG_CINDER_NETAPP_SA_PASSWORD=

CONFIG_CINDER_NETAPP_WEBSERVICE_PATH=/devmgr/v2

CONFIG_CINDER_NETAPP_STORAGE_POOLS=

CONFIG_NOVA_DB_PW=1e1b5aeeeaf342a8

CONFIG_NOVA_KS_PW=d9583177a2444f06

CONFIG_NOVA_SCHED_CPU_ALLOC_RATIO=16.0

CONFIG_NOVA_SCHED_RAM_ALLOC_RATIO=1.5

CONFIG_NOVA_COMPUTE_MIGRATE_PROTOCOL=tcp

CONFIG_NOVA_COMPUTE_PRIVIF=enp5s1
CONFIG_NOVA_NETWORK_MANAGER=nova.network.manager.FlatDHCPManager
CONFIG_NOVA_NETWORK_PUBIF=enp2s0
CONFIG_NOVA_NETWORK_PRIVIF=enp5s1
CONFIG_NOVA_NETWORK_FIXEDRANGE=192.168.32.0/22
CONFIG_NOVA_NETWORK_FLOATRANGE=10.3.4.0/22
CONFIG_NOVA_NETWORK_DEFAULTFLOATINGPOOL=nova
CONFIG_NOVA_NETWORK_AUTOASSIGNFLOATINGIP=n
CONFIG_NOVA_NETWORK_VLAN_START=100
CONFIG_NOVA_NETWORK_NUMBER=1
CONFIG_NOVA_NETWORK_SIZE=255
CONFIG_NEUTRON_KS_PW=808e36e154bd4cee
CONFIG_NEUTRON_DB_PW=0e2b927a21b44737
CONFIG_NEUTRON_L3_EXT_BRIDGE=br-ex
CONFIG_NEUTRON_L2_PLUGIN=ml2
CONFIG_NEUTRON_METADATA_PW=a965cd23ed2f4502
CONFIG_LBAAS_INSTALL=n
CONFIG_NEUTRON_METERING_AGENT_INSTALL=n
CONFIG_NEUTRON_FWAAS=n
CONFIG_NEUTRON_ML2_TYPE_DRIVERS=vxlan
CONFIG_NEUTRON_ML2_TENANT_NETWORK_TYPES=vxlan
CONFIG_NEUTRON_ML2_MECHANISM_DRIVERS=openvswitch
CONFIG_NEUTRON_ML2_FLAT_NETWORKS=*
CONFIG_NEUTRON_ML2_VLAN_RANGES=
CONFIG_NEUTRON_ML2_TUNNEL_ID_RANGES=1001:2000
CONFIG_NEUTRON_ML2_VXLAN_GROUP=239.1.1.2
CONFIG_NEUTRON_ML2_VNI_RANGES=1001:2000
CONFIG_NEUTRON_L2_AGENT=openvswitch
CONFIG_NEUTRON_LB_TENANT_NETWORK_TYPE=local
CONFIG_NEUTRON_LB_VLAN_RANGES=
CONFIG_NEUTRON_LB_INTERFACE_MAPPINGS=
CONFIG_NEUTRON_OVS_TENANT_NETWORK_TYPE=vxlan
CONFIG_NEUTRON_OVS_VLAN_RANGES=
CONFIG_NEUTRON_OVS_BRIDGE_MAPPINGS=physnet1:br-ex
CONFIG_NEUTRON_OVS_BRIDGE_IFACES=
CONFIG_NEUTRON_OVS_TUNNEL_RANGES=1001:2000
CONFIG_NEUTRON_OVS_TUNNEL_IF=enp5s1
CONFIG_NEUTRON_OVS_VXLAN_UDP_PORT=4789

CONFIG_HORIZON_SSL=n

CONFIG_SSL_CERT=

CONFIG_SSL_KEY=

CONFIG_SSL_CACHAIN=

CONFIG_SWIFT_KS_PW=8f75bfd461234c30

CONFIG_SWIFT_STORAGES=

CONFIG_SWIFT_STORAGE_ZONES=1

CONFIG_SWIFT_STORAGE_REPLICAS=1

CONFIG_SWIFT_STORAGE_FSTYPE=ext4

CONFIG_SWIFT_HASH=a60aacbedde7429a

CONFIG_SWIFT_STORAGE_SIZE=2G

CONFIG_PROVISION_DEMO=y

CONFIG_PROVISION_TEMPEST=n

CONFIG_PROVISION_TEMPEST_USER=

CONFIG_PROVISION_TEMPEST_USER_PW=44faa4ebc3da4459

CONFIG_PROVISION_DEMO_FLOATRANGE=172.24.4.224/28

CONFIG_PROVISION_TEMPEST_REPO_URI=https://github.com/openstack/tempest.git

CONFIG_PROVISION_TEMPEST_REPO_REVISION=master

CONFIG_PROVISION_ALL_IN_ONE_OVS_BRIDGE=n

CONFIG_HEAT_DB_PW=PW_PLACEHOLDER

CONFIG_HEAT_AUTH_ENC_KEY=fc3fb7fee61e46b0

CONFIG_HEAT_KS_PW=PW_PLACEHOLDER

CONFIG_HEAT_CLOUDWATCH_INSTALL=n

CONFIG_HEAT_USING_TRUSTS=y

CONFIG_HEAT_CFN_INSTALL=n

CONFIG_HEAT_DOMAIN=heat

CONFIG_HEAT_DOMAIN_ADMIN=heat_admin

CONFIG_HEAT_DOMAIN_PASSWORD=PW_PLACEHOLDER

CONFIG_CEILOMETER_SECRET=19ae0e7430174349

CONFIG_CEILOMETER_KS_PW=337b08d4b3a44753

CONFIG_MONGODB_HOST=192.168.1.127

CONFIG_NAGIOS_PW=02f168ee8edd44e4

Updates required on the Controller only :-

[root@juno1 network-scripts(keystone_admin)]# cat ifcfg-br-ex
DEVICE="br-ex"
BOOTPROTO="static"
IPADDR="192.168.1.127"
NETMASK="255.255.255.0"
DNS1="83.221.202.254"
BROADCAST="192.168.1.255"
GATEWAY="192.168.1.1"
NM_CONTROLLED="no"
DEFROUTE="yes"
IPV4_FAILURE_FATAL="yes"
IPV6INIT=no
ONBOOT="yes"
TYPE="OVSIntPort"
OVS_BRIDGE=br-ex
DEVICETYPE="ovs"

[root@juno1 network-scripts(keystone_admin)]# cat ifcfg-enp2s0
DEVICE="enp2s0"
# HWADDR=00:22:15:63:E4:E2
ONBOOT="yes"
TYPE="OVSPort"
DEVICETYPE="ovs"
OVS_BRIDGE=br-ex
NM_CONTROLLED=no
IPV6INIT=no

Setup Gluster Backend for cinder in Juno

*************************************************************************

Update /etc/cinder/cinder.conf to activate the Gluster 3.5.2 backend

*************************************************************************

Gluster 3.5.2 cluster installed per  http://bderzhavets.blogspot.com/2014/08/setup-gluster-352-on-two-node.html

enabled_backends=gluster,lvm52

[gluster]

volume_driver = cinder.volume.drivers.glusterfs.GlusterfsDriver

glusterfs_shares_config = /etc/cinder/shares.conf

glusterfs_mount_point_base = /var/lib/cinder/volumes

volume_backend_name=GLUSTER

[lvm52]

iscsi_helper=lioadm

volume_group=cinder-volumes52

volume_driver=cinder.volume.drivers.lvm.LVMISCSIDriver

iscsi_ip_address=192.168.1.127

volume_backend_name=LVM_iSCSI52
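
The glusterfs_shares_config file referenced above simply lists the Gluster share(s) to be mounted; given the volume used later in this post it would contain a single line along these lines:

# cat /etc/cinder/shares.conf
192.168.1.127:/cinder-volumes57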

Now follow  http://giuliofidente.com/2013/06/openstack-cinder-configure-multiple-backends.html   :-

[root@juno1 ~(keystone_admin)]# cinder type-create lvms

+————————————–+——+

|                  ID                  | Name |

+————————————–+——+

| 64414f3a-7770-4958-b422-8db0c3e2f433 | lvms  |

+————————————–+——+


[root@juno1 ~(keystone_admin)]# cinder type-create gluster

+————————————–+———+

|                  ID                  |   Name  |

+————————————–+———+

| 29917269-d73f-4c28-b295-59bfbda5d044 | gluster |

+————————————–+———+

[root@juno1 ~(keystone_admin)]# cinder type-list

+————————————–+———+

|                  ID                  |   Name  |

+————————————–+———+

| 29917269-d73f-4c28-b295-59bfbda5d044 | gluster |

| 64414f3a-7770-4958-b422-8db0c3e2f433 |   lvms  |

+————————————–+———+

[root@juno1 ~(keystone_admin)]# cinder type-key lvms set volume_backend_name=LVM_iSCSI

[root@juno1 ~(keystone_admin)]# cinder type-key gluster  set volume_backend_name=GLUSTER

Next step is cinder services restart :-

[root@juno1 ~(keystone_demo)]# for i in api scheduler volume ; do service openstack-cinder-${i} restart ; done
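
With multiple backends enabled, each backend registers its own cinder-volume host (in host@backend form), so a quick sanity check after the restart is:

[root@juno1 ~(keystone_admin)]# cinder service-list

Entries like juno1.localdomain@gluster and juno1.localdomain@lvm52 should be reported with state up (the exact host strings depend on the host option in cinder.conf).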

[root@juno1 ~(keystone_admin)]# df -h

Filesystem                       Size  Used Avail Use% Mounted on

/dev/mapper/centos01-root00      147G   17G  130G  12% /

devtmpfs                         3.9G     0  3.9G   0% /dev

tmpfs                            3.9G   96K  3.9G   1% /dev/shm

tmpfs                            3.9G  9.1M  3.9G   1% /run

tmpfs                            3.9G     0  3.9G   0% /sys/fs/cgroup

/dev/loop0                       1.9G  6.0M  1.7G   1% /srv/node/swift_loopback

/dev/sda3                        477M  146M  302M  33% /boot

/dev/mapper/centos01-data5        98G  1.4G   97G   2% /data5

192.168.1.127:/cinder-volumes57   98G  1.4G   97G   2% /var/lib/cinder/volumes/8478b56ad61cf67ab9839fb0a5296965

tmpfs                            3.9G  9.1M  3.9G   1% /run/netns

[root@juno1 ~(keystone_demo)]# gluster volume info

Volume Name: cinder-volumes57

Type: Replicate

Volume ID: c1f2e1d2-0b11-426e-af3d-7af0d1d24d5e

Status: Started

Number of Bricks: 1 x 2 = 2

Transport-type: tcp

Bricks:

Brick1: juno1.localdomain:/data5/data-volumes

Brick2: juno2.localdomain:/data5/data-volumes

Options Reconfigured:

auth.allow: 192.168.1.*

[root@juno1 ~(keystone_demo)]# gluster volume status

Status of volume: cinder-volumes57

Gluster process                        Port    Online    Pid

——————————————————————————

Brick juno1.localdomain:/data5/data-volumes        49152    Y    3806

Brick juno2.localdomain:/data5/data-volumes        49152    Y    3047

NFS Server on localhost                    2049    Y    4146

Self-heal Daemon on localhost                N/A    Y    4141

NFS Server on juno2.localdomain                2049    Y    3881

Self-heal Daemon on juno2.localdomain            N/A    Y    3877

Task Status of Volume cinder-volumes57

——————————————————————————

**********************************************

Creating cinder volume of gluster type:-

**********************************************

[root@juno1 ~(keystone_demo)]# cinder create --volume_type gluster --image-id d83a6fec-ce82-411c-aa11-04cbb34bf2a2 --display_name UbuntuGLS1029 5

[root@juno1 ~(keystone_demo)]# cinder list

+————————————–+——–+—————+——+————-+———-+————————————–+

|                  ID                  | Status |  Display Name | Size | Volume Type | Bootable |             Attached to              |

+————————————–+——–+—————+——+————-+———-+————————————–+

| ca7ac946-3c4e-4544-ba3a-8cd085d5882b | in-use | UbuntuGLS1029 |  5   |   gluster   |   true   | cdb57658-795a-4a6e-82c9-67bf24acd498 |

+————————————–+——–+—————+——+————-+———-+————————————–+

[root@juno1 ~(keystone_demo)]# nova list

+————————————–+————-+———–+————+————-+———————————–+

| ID                                   | Name        | Status    | Task State | Power State | Networks                          |

+————————————–+————-+———–+————+————-+———————————–+

| 5c366eb9-8830-4432-b9bb-06239ae83d8a | CentOS7RS01 | SUSPENDED | –          | Shutdown    | demo_net=40.0.0.25, 192.168.1.161 |

| cdb57658-795a-4a6e-82c9-67bf24acd498 | UbuntuGLS01 | ACTIVE  | –          | Shutdown    | demo_net=40.0.0.22, 192.168.1.157 |

| 39d5312c-e661-4f9f-82ab-db528a7cdc9a | UbuntuRXS52 | ACTIVE    | –          | Running     | demo_net=40.0.0.32, 192.168.1.165 |

| 16911bfa-cf8b-44b7-b46e-8a54c9b3db69 | VF20GLR01   | ACTIVE    | –          | Running     | demo_net=40.0.0.23, 192.168.1.159 |

+————————————–+————-+———–+————+————-+———————————–+

 

 

Get detailed information about server-id :-

[root@juno1 ~(keystone_demo)]# nova show 16911bfa-cf8b-44b7-b46e-8a54c9b3db69

+————————————–+———————————————————-+

| Property                             | Value                                                    |

+————————————–+———————————————————-+

| OS-DCF:diskConfig                    | AUTO                                                     |

| OS-EXT-AZ:availability_zone          | nova                                                     |

| OS-EXT-STS:power_state               | 1                                                        |

| OS-EXT-STS:task_state                | –                                                        |

| OS-EXT-STS:vm_state                  | active                                                   |

| OS-SRV-USG:launched_at               | 2014-11-01T22:20:12.000000                               |

| OS-SRV-USG:terminated_at             | –                                                        |

| accessIPv4                           |                                                          |

| accessIPv6                           |                                                          |

| config_drive                         |                                                          |

| created                              | 2014-11-01T22:20:04Z                                     |

| demo_net network                     | 40.0.0.23, 192.168.1.159                                 |

| flavor                               | m1.small (2)                                             |

| hostId                               | 2e37cbf1f1145a0eaad46d35cbc8f4df3b579bbaf0404855511732a9 |

| id                                   | 16911bfa-cf8b-44b7-b46e-8a54c9b3db69                     |

| image                                | Attempt to boot from volume – no image supplied          |

| key_name                             | oskey45                                                  |

| metadata                             | {}                                                       |

| name                                 | VF20GLR01                                                |

| os-extended-volumes:volumes_attached | [{“id”: “6ff40c2b-c363-42da-8988-5425eca0eea3”}]         |

| progress                             | 0                                                        |

| security_groups                      | default                                                  |

| status                               | ACTIVE                                                   |

| tenant_id                            | b302ecfaf76740189fca446e2e4a9a6e                         |

| updated                              | 2014-11-03T09:29:25Z                                     |

| user_id                              | ad7db1242c7e41ee88bc813873c85da3                         |

+————————————–+———————————————————-+

[root@juno1 ~(keystone_demo)]# cinder show 6ff40c2b-c363-42da-8988-5425eca0eea3 | grep volume_type

volume_type | gluster

*******************************

Gluster cinder-volumes list :-

*******************************

[root@juno1 data-volumes(keystone_demo)]# cinder list

+————————————–+——–+—————+——+————-+———-+————————————–+

|                  ID                  | Status |  Display Name | Size | Volume Type | Bootable |             Attached to              |

+————————————–+——–+—————+——+————-+———-+————————————–+

| 6ff40c2b-c363-42da-8988-5425eca0eea3 | in-use |  VF20VLG0211  |  7   |   gluster   |   true   | 16911bfa-cf8b-44b7-b46e-8a54c9b3db69 |

| 8ade9f17-163d-48ca-bea5-bc9c6ea99b17 | in-use |  UbuntuLVS52  |  5   |     lvms    |   true   | 39d5312c-e661-4f9f-82ab-db528a7cdc9a |

| ca7ac946-3c4e-4544-ba3a-8cd085d5882b | in-use | UbuntuGLS1029 |  5   |   gluster   |   true   | cdb57658-795a-4a6e-82c9-67bf24acd498 |

| d8f77604-f984-4e98-81cc-971003d3fb54 | in-use |   CentOS7VLG  |  10  |   gluster   |   true   | 5c366eb9-8830-4432-b9bb-06239ae83d8a |

+————————————–+——–+—————+——+————-+———-+————————————–+

[root@juno1 data-volumes(keystone_demo)]# ls -la

total 7219560

drwxrwxr-x.   3 root cinder        4096 Nov  3 19:29 .

drwxr-xr-x.   3 root root            25 Nov  1 19:17 ..

drw——-. 252 root root          4096 Nov  3 19:21 .glusterfs

-rw-rw-rw-.   2 qemu qemu    7516192768 Nov  3 19:06 volume-6ff40c2b-c363-42da-8988-5425eca0eea3

-rw-rw-rw-.   2 qemu qemu    5368709120 Nov  3 19:21 volume-ca7ac946-3c4e-4544-ba3a-8cd085d5882b

-rw-rw-rw-.   2 root root   10737418240 Nov  2 10:57 volume-d8f77604-f984-4e98-81cc-971003d3fb54

References

1. http://cloudssky.com/en/blog/RDO-OpenStack-Juno-ML2-VXLAN-2-Node-Deployment-On-CentOS-7-With-Packstack



RDO Setup Two Real Node (Controller+Compute) IceHouse Neutron ML2&OVS&VXLAN Cluster on CentOS 7

July 29, 2014

As of 07/28/2014 the bug https://ask.openstack.org/en/question/35705/attempt-of-rdo-aio-install-icehouse-on-centos-7/ is still pending, and the workaround suggested there should be applied during the two-node RDO packstack installation.
A successful Neutron ML2&&OVS&&VXLAN multi-node setup requires a correct version of plugin.ini -> /etc/neutron/plugins/ml2/ml2_conf.ini, which appears to be generated with errors by packstack.

Two boxes have been set up, each one having 2 NICs (enp2s0, enp5s1), for the Controller && Compute Node setup. Before running
`packstack --answer-file=TwoNodeVXLAN.txt`, SELINUX was set to permissive on both nodes. Both enp5s1 interfaces were assigned IPs and set to promiscuous mode (192.168.0.127, 192.168.0.137). The firewalld and NetworkManager services are disabled; the IPv4 firewall with iptables and the network service are enabled and running. Packstack is bound to the public IP of interface enp2s0, 192.168.1.127; the Compute Node is 192.168.1.137 (view answer file).

Setup configuration

– Controller node: Nova, Keystone, Cinder, Glance, Neutron (using Open vSwitch plugin && VXLAN )
– Compute node: Nova (nova-compute), Neutron (openvswitch-agent)

icehouse1.localdomain   –  Controller (192.168.1.127)
icehouse2.localdomain   –  Compute   (192.168.1.137)

[root@icehouse1 ~(keystone_admin)]# cat TwoNodeVXLAN.txt

[general]
CONFIG_SSH_KEY=/root/.ssh/id_rsa.pub
CONFIG_MYSQL_INSTALL=y
CONFIG_GLANCE_INSTALL=y
CONFIG_CINDER_INSTALL=y
CONFIG_NOVA_INSTALL=y
CONFIG_NEUTRON_INSTALL=y
CONFIG_HORIZON_INSTALL=y
CONFIG_SWIFT_INSTALL=n
CONFIG_CEILOMETER_INSTALL=y
CONFIG_HEAT_INSTALL=n
CONFIG_CLIENT_INSTALL=y
CONFIG_NTP_SERVERS=
CONFIG_NAGIOS_INSTALL=y
EXCLUDE_SERVERS=
CONFIG_DEBUG_MODE=n
CONFIG_VMWARE_BACKEND=n
CONFIG_MYSQL_HOST=192.168.1.127
CONFIG_MYSQL_USER=root
CONFIG_MYSQL_PW=a7f0349d1f7a4ab0
CONFIG_AMQP_SERVER=rabbitmq
CONFIG_AMQP_HOST=192.168.1.127
CONFIG_AMQP_ENABLE_SSL=n
CONFIG_AMQP_ENABLE_AUTH=n
CONFIG_AMQP_NSS_CERTDB_PW=0915db728b00409caf4b6e433b756308
CONFIG_AMQP_SSL_PORT=5671
CONFIG_AMQP_SSL_CERT_FILE=/etc/pki/tls/certs/amqp_selfcert.pem
CONFIG_AMQP_SSL_KEY_FILE=/etc/pki/tls/private/amqp_selfkey.pem
CONFIG_AMQP_SSL_SELF_SIGNED=y
CONFIG_AMQP_AUTH_USER=amqp_user
CONFIG_AMQP_AUTH_PASSWORD=f16d26ff54cd4033
CONFIG_KEYSTONE_HOST=192.168.1.127
CONFIG_KEYSTONE_DB_PW=32419736ee454c2c
CONFIG_KEYSTONE_ADMIN_TOKEN=836891519cb640458551556447a5a644
CONFIG_KEYSTONE_ADMIN_PW=4ebab181262d4224
CONFIG_KEYSTONE_DEMO_PW=56eb6360019e45bf
CONFIG_KEYSTONE_TOKEN_FORMAT=PKI
CONFIG_GLANCE_HOST=192.168.1.127
CONFIG_GLANCE_DB_PW=e51feef536104b49
CONFIG_GLANCE_KS_PW=2458775cd64848cb
CONFIG_CINDER_HOST=192.168.1.127
CONFIG_CINDER_DB_PW=bcf3b09c9c4144e2
CONFIG_CINDER_KS_PW=888c59cc113e4489
CONFIG_CINDER_BACKEND=lvm
CONFIG_CINDER_VOLUMES_CREATE=y
CONFIG_CINDER_VOLUMES_SIZE=15G
CONFIG_CINDER_GLUSTER_MOUNTS=
CONFIG_CINDER_NFS_MOUNTS=
CONFIG_VCENTER_HOST=192.168.1.127
CONFIG_VCENTER_USER=
CONFIG_VCENTER_PASSWORD=
CONFIG_NOVA_API_HOST=192.168.1.127
CONFIG_NOVA_CERT_HOST=192.168.1.127
CONFIG_NOVA_VNCPROXY_HOST=192.168.1.127
CONFIG_NOVA_COMPUTE_HOSTS=192.168.1.137
CONFIG_NOVA_CONDUCTOR_HOST=192.168.1.127
CONFIG_NOVA_DB_PW=8cc18e22eaeb4c4d
CONFIG_NOVA_KS_PW=aaf8cf4c60224150
CONFIG_NOVA_SCHED_HOST=192.168.1.127
CONFIG_NOVA_SCHED_CPU_ALLOC_RATIO=16.0
CONFIG_NOVA_SCHED_RAM_ALLOC_RATIO=1.5
CONFIG_NOVA_COMPUTE_PRIVIF=p4p1
CONFIG_NOVA_NETWORK_HOSTS=192.168.1.127
CONFIG_NOVA_NETWORK_MANAGER=nova.network.manager.FlatDHCPManager
CONFIG_NOVA_NETWORK_PUBIF=enp2s0
CONFIG_NOVA_NETWORK_PRIVIF=enp5s1
CONFIG_NOVA_NETWORK_FIXEDRANGE=192.168.32.0/22
CONFIG_NOVA_NETWORK_FLOATRANGE=10.3.4.0/22
CONFIG_NOVA_NETWORK_DEFAULTFLOATINGPOOL=nova
CONFIG_NOVA_NETWORK_AUTOASSIGNFLOATINGIP=n
CONFIG_NOVA_NETWORK_VLAN_START=100
CONFIG_NOVA_NETWORK_NUMBER=1
CONFIG_NOVA_NETWORK_SIZE=255
CONFIG_VCENTER_HOST=192.168.1.127
CONFIG_VCENTER_USER=
CONFIG_VCENTER_PASSWORD=
CONFIG_VCENTER_CLUSTER_NAME=
CONFIG_NEUTRON_SERVER_HOST=192.168.1.127
CONFIG_NEUTRON_KS_PW=5f11f559abc94440
CONFIG_NEUTRON_DB_PW=0302dcfeb69e439f
CONFIG_NEUTRON_L3_HOSTS=192.168.1.127
CONFIG_NEUTRON_L3_EXT_BRIDGE=br-ex
CONFIG_NEUTRON_DHCP_HOSTS=192.168.1.127
CONFIG_NEUTRON_LBAAS_HOSTS=
CONFIG_NEUTRON_L2_PLUGIN=ml2
CONFIG_NEUTRON_METADATA_HOSTS=192.168.1.127
CONFIG_NEUTRON_METADATA_PW=227f7bbc8b6f4f74
############################################
CONFIG_NEUTRON_ML2_TYPE_DRIVERS=vxlan
CONFIG_NEUTRON_ML2_TENANT_NETWORK_TYPES=vxlan
############################################
CONFIG_NEUTRON_ML2_MECHANISM_DRIVERS=openvswitch
CONFIG_NEUTRON_ML2_FLAT_NETWORKS=*
CONFIG_NEUTRON_ML2_VLAN_RANGES=
CONFIG_NEUTRON_ML2_TUNNEL_ID_RANGES=1001:2000
CONFIG_NEUTRON_ML2_VXLAN_GROUP=239.1.1.2
CONFIG_NEUTRON_ML2_VNI_RANGES=1001:2000
CONFIG_NEUTRON_L2_AGENT=openvswitch
CONFIG_NEUTRON_LB_TENANT_NETWORK_TYPE=local
CONFIG_NEUTRON_LB_VLAN_RANGES=
CONFIG_NEUTRON_LB_INTERFACE_MAPPINGS=
#########################################
CONFIG_NEUTRON_OVS_TENANT_NETWORK_TYPE=vxlan
CONFIG_NEUTRON_OVS_VLAN_RANGES=
CONFIG_NEUTRON_OVS_BRIDGE_MAPPINGS=physnet1:br-ex
CONFIG_NEUTRON_OVS_BRIDGE_IFACES=
CONFIG_NEUTRON_OVS_TUNNEL_RANGES=1001:2000
CONFIG_NEUTRON_OVS_TUNNEL_IF=enp5s1
########################################
CONFIG_NEUTRON_OVS_VXLAN_UDP_PORT=4789
CONFIG_OSCLIENT_HOST=192.168.1.127
CONFIG_HORIZON_HOST=192.168.1.127
CONFIG_HORIZON_SSL=n
CONFIG_SSL_CERT=
CONFIG_SSL_KEY=
CONFIG_SWIFT_PROXY_HOSTS=192.168.1.127
CONFIG_SWIFT_KS_PW=63d3108083ac495b
CONFIG_SWIFT_STORAGE_HOSTS=192.168.1.127
CONFIG_SWIFT_STORAGE_ZONES=1
CONFIG_SWIFT_STORAGE_REPLICAS=1
CONFIG_SWIFT_STORAGE_FSTYPE=ext4
CONFIG_SWIFT_HASH=ebf91dbf930c49ca
CONFIG_SWIFT_STORAGE_SIZE=2G
CONFIG_PROVISION_DEMO=y
CONFIG_PROVISION_TEMPEST=n
CONFIG_PROVISION_DEMO_FLOATRANGE=172.24.4.224/28
CONFIG_PROVISION_TEMPEST_REPO_URI=https://github.com/openstack/tempest.git
CONFIG_PROVISION_TEMPEST_REPO_REVISION=master
CONFIG_PROVISION_ALL_IN_ONE_OVS_BRIDGE=n
CONFIG_HEAT_HOST=192.168.1.127
CONFIG_HEAT_DB_PW=f0be2b0fa2044183
CONFIG_HEAT_AUTH_ENC_KEY=29419b1f4e574e5e
CONFIG_HEAT_KS_PW=d5c39c630c364c5b
CONFIG_HEAT_CLOUDWATCH_INSTALL=n
CONFIG_HEAT_CFN_INSTALL=n
CONFIG_HEAT_CLOUDWATCH_HOST=192.168.1.127
CONFIG_HEAT_CFN_HOST=192.168.1.127
CONFIG_CEILOMETER_HOST=192.168.1.127
CONFIG_CEILOMETER_SECRET=d1ed1459830e4288
CONFIG_CEILOMETER_KS_PW=84f18f2e478f4230
CONFIG_MONGODB_HOST=192.168.1.127
CONFIG_NAGIOS_HOST=192.168.1.127
CONFIG_NAGIOS_PW=e2d02c03b5664ffe
CONFIG_USE_EPEL=y
CONFIG_REPO=
CONFIG_RH_USER=
CONFIG_RH_PW=
CONFIG_RH_BETA_REPO=n
CONFIG_SATELLITE_URL=
CONFIG_SATELLITE_USER=
CONFIG_SATELLITE_PW=
CONFIG_SATELLITE_AKEY=
CONFIG_SATELLITE_CACERT=
CONFIG_SATELLITE_PROFILE=
CONFIG_SATELLITE_FLAGS=
CONFIG_SATELLITE_PROXY=
CONFIG_SATELLITE_PROXY_USER=
CONFIG_SATELLITE_PROXY_PW=

[root@icehouse1 ~(keystone_admin)]# cat /etc/neutron/plugin.ini
[ml2]
type_drivers = vxlan
tenant_network_types = vxlan
mechanism_drivers =openvswitch
[ml2_type_flat]
[ml2_type_vlan]
[ml2_type_gre]
[ml2_type_vxlan]
vni_ranges =1001:2000
vxlan_group =239.1.1.2
[OVS]
local_ip=192.168.1.127
enable_tunneling=True
integration_bridge=br-int
tunnel_bridge=br-tun
[securitygroup]
enable_security_group = True
firewall_driver=neutron.agent.linux.iptables_firewall.OVSHybridIptablesFirewallDriver
[agent]
polling_interval=2
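
After putting the corrected plugin.ini in place, the Neutron services have to be restarted to pick it up; roughly, with the service names as packaged by RDO IceHouse:

On the Controller:

# systemctl restart neutron-server
# systemctl restart neutron-openvswitch-agent

On the Compute node:

# systemctl restart neutron-openvswitch-agent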

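For reference, a minimal sketch of exercising the vxlan tenant network type configured above from the CLI; the network and subnet names are illustrative, not taken from this setup:

# source keystonerc_demo
# neutron net-create demo_net
# neutron subnet-create demo_net 50.0.0.0/24 --name demo_subnet \
        --dns-nameserver 83.221.202.254
# neutron net-show demo_net    # as admin, provider:network_type should report vxlan

The segmentation ID is picked automatically from the vni_ranges 1001:2000 window defined in plugin.ini.
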
[root@icehouse1 ~(keystone_admin)]# ls -l /etc/neutron
total 64
-rw-r--r--. 1 root root      193 Jul 29 16:15 api-paste.ini
-rw-r-----. 1 root neutron  3853 Jul 29 16:14 dhcp_agent.ini
-rw-r-----. 1 root neutron   208 Jul 29 16:15 fwaas_driver.ini
-rw-r-----. 1 root neutron  3431 Jul 29 16:14 l3_agent.ini
-rw-r-----. 1 root neutron  1400 Jun  8 01:38 lbaas_agent.ini
-rw-r-----. 1 root neutron  1481 Jul 29 16:15 metadata_agent.ini
-rw-r-----. 1 root neutron 19150 Jul 29 16:15 neutron.conf
lrwxrwxrwx. 1 root root       37 Jul 29 16:14 plugin.ini -> /etc/neutron/plugins/ml2/ml2_conf.ini
-rw-r--r--. 1 root root      452 Jul 29 17:11 plugin.out
drwxr-xr-x. 4 root root       34 Jul 29 16:14 plugins
-rw-r-----. 1 root neutron  6148 Jun  8 01:38 policy.json
-rw-r--r--. 1 root root       78 Jul  2 15:11 release
-rw-r--r--. 1 root root     1216 Jun  8 01:38 rootwrap.conf

On Controller

[root@icehouse1 ~(keystone_admin)]# ovs-vsctl show
2742fa6e-78bf-440e-a2c1-cb48242ea565
Bridge br-ex
Port phy-br-ex
Interface phy-br-ex
Port "qg-76f29fee-9c"
Interface "qg-76f29fee-9c"
type: internal
Port br-ex
Interface br-ex
type: internal
Port "enp2s0"
Interface "enp2s0"
Bridge br-tun
Port "vxlan-c0a80089"
Interface "vxlan-c0a80089"
type: vxlan
options: {in_key=flow, local_ip="192.168.0.127", out_key=flow, remote_ip="192.168.0.137"}
Port patch-int
Interface patch-int
type: patch
options: {peer=patch-tun}
Port br-tun
Interface br-tun
type: internal
Bridge br-int
Port "qr-8cad61e3-ce"
tag: 1
Interface "qr-8cad61e3-ce"
type: internal
Port patch-tun
Interface patch-tun
type: patch
options: {peer=patch-int}
Port "tapff8659ee-8d"
tag: 1
Interface "tapff8659ee-8d"
type: internal
Port br-int
Interface br-int
type: internal
Port int-br-ex
Interface int-br-ex
ovs_version: "2.0.0"

On Compute

[root@icehouse2 ~]# ovs-vsctl show
642d8c9f-116e-4b44-842a-e975e506fe24
Bridge br-ex
Port phy-br-ex
Interface phy-br-ex
Port br-ex
Interface br-ex
type: internal
Bridge br-tun
Port br-tun
Interface br-tun
type: internal
Port patch-int
Interface patch-int
type: patch
options: {peer=patch-tun}
Port "vxlan-c0a8007f"
Interface "vxlan-c0a8007f"
type: vxlan
options: {in_key=flow, local_ip="192.168.0.137", out_key=flow, remote_ip="192.168.0.127"}
Bridge br-int
Port patch-tun
Interface patch-tun
type: patch
options: {peer=patch-int}
Port int-br-ex
Interface int-br-ex
Port "qvodc2c598a-b3"
tag: 1
Interface "qvodc2c598a-b3"
Port br-int
Interface br-int
type: internal
Port "qvo25cbd1fa-96"
tag: 1
Interface "qvo25cbd1fa-96"
ovs_version: "2.0.0"


RDO IceHouse Setup Two Node (Controller+Compute) Neutron ML2&OVS&VLAN Cluster on Fedora 20

June 22, 2014

Two KVMs have been created, each one having 2 virtual NICs (eth0, eth1), for the Controller && Compute Node setup. Before running `packstack --answer-file=TwoNodeML2&OVS&VLAN.txt`, SELINUX was set to permissive on both nodes. Both eth1 interfaces were assigned IPs from the VLAN libvirt subnet before installation and set to promiscuous mode (192.168.122.127, 192.168.122.137). Packstack was bound to the public (eth0) IPs: 192.169.142.127 on the Controller and 192.169.142.137 on the Compute Node.

The answer file used by packstack is here: http://textuploader.com/k9xo

 [root@ip-192-169-142-127 ~(keystone_admin)]# openstack-status
== Nova services ==
openstack-nova-api:                     active
openstack-nova-cert:                    active
openstack-nova-compute:                 inactive  (disabled on boot)
openstack-nova-network:                 inactive  (disabled on boot)
openstack-nova-scheduler:               active
openstack-nova-volume:                  inactive  (disabled on boot)
openstack-nova-conductor:               active
== Glance services ==
openstack-glance-api:                   active
openstack-glance-registry:              active
== Keystone service ==
openstack-keystone:                     active
== Horizon service ==
openstack-dashboard:                    active
== neutron services ==
neutron-server:                         active
neutron-dhcp-agent:                     active
neutron-l3-agent:                       active
neutron-metadata-agent:                 active
neutron-lbaas-agent:                    inactive  (disabled on boot)
neutron-openvswitch-agent:              active
neutron-linuxbridge-agent:              inactive  (disabled on boot)
neutron-ryu-agent:                      inactive  (disabled on boot)
neutron-nec-agent:                      inactive  (disabled on boot)
neutron-mlnx-agent:                     inactive  (disabled on boot)
== Cinder services ==
openstack-cinder-api:                   active
openstack-cinder-scheduler:             active
openstack-cinder-volume:                active
openstack-cinder-backup:                inactive  (disabled on boot)
== Ceilometer services ==
openstack-ceilometer-api:               failed
openstack-ceilometer-central:           active
openstack-ceilometer-compute:           inactive  (disabled on boot)
openstack-ceilometer-collector:         active
openstack-ceilometer-alarm-notifier:    active
openstack-ceilometer-alarm-evaluator:   active
== Support services ==
openvswitch:                            active
dbus:                                   active
tgtd:                                   active
rabbitmq-server:                        active
memcached:                              active
== Keystone users ==
+———————————-+————+———+———————-+
|                id                |    name    | enabled |        email         |
+———————————-+————+———+———————-+
| 42ceb5a601b041f0a5669868dd7f7663 |   admin    |   True  |    test@test.com     |
| d602599e69904691a6094d86f07b6121 | ceilometer |   True  | ceilometer@localhost |
| cc11c36f6e9a4bb7b050db7a380a51db |   cinder   |   True  |   cinder@localhost   |
| c3b1e25936a241bfa63c791346f179fc |   glance   |   True  |   glance@localhost   |
| d2bfcd4e6fc44478899b0a2544df0b00 |  neutron   |   True  |  neutron@localhost   |
| 3d572a8e32b94ac09dd3318cd84fd932 |    nova    |   True  |    nova@localhost    |
+———————————-+————+———+———————-+
== Glance images ==
+————————————–+—————–+————-+——————+———–+——–+
| ID                                   | Name            | Disk Format | Container Format | Size      | Status |
+————————————–+—————–+————-+——————+———–+——–+
| 898a4245-d191-46b8-ac87-e0f1e1873cb1 | CirrOS31        | qcow2       | bare             | 13147648  | active |
| c4647c90-5160-48b1-8b26-dba69381b6fa | Ubuntu 06/18/14 | qcow2       | bare             | 254149120 | active |
+————————————–+—————–+————-+——————+———–+——–+
== Nova managed services ==
+——————+—————————————-+———-+———+——-+—————————-+—————–+
| Binary           | Host                                   | Zone     | Status  | State | Updated_at                 | Disabled Reason |
+——————+—————————————-+———-+———+——-+—————————-+—————–+
| nova-consoleauth | ip-192-169-142-127.ip.secureserver.net | internal | enabled | up    | 2014-06-22T10:39:20.000000 | –               |
| nova-scheduler   | ip-192-169-142-127.ip.secureserver.net | internal | enabled | up    | 2014-06-22T10:39:21.000000 | –               |
| nova-conductor   | ip-192-169-142-127.ip.secureserver.net | internal | enabled | up    | 2014-06-22T10:39:23.000000 | –               |
| nova-cert        | ip-192-169-142-127.ip.secureserver.net | internal | enabled | up    | 2014-06-22T10:39:20.000000 | –               |
| nova-compute     | ip-192-169-142-137.ip.secureserver.net | nova     | enabled | up    | 2014-06-22T10:39:23.000000 | –               |
+——————+—————————————-+———-+———+——-+—————————-+—————–+
== Nova networks ==
+————————————–+———+——+
| ID                                   | Label   | Cidr |
+————————————–+———+——+
| 577b7ba7-adad-4051-a03f-787eb8bd55f6 | public  | –    |
| 70298098-a022-4a6b-841f-cef13524d86f | private | –    |
| 7459c84b-b460-4da2-8f24-e0c840be2637 | int     | –    |
+————————————–+———+——+
== Nova instance flavors ==
+—-+———–+———–+——+———–+——+——-+————-+———–+
| ID | Name      | Memory_MB | Disk | Ephemeral | Swap | VCPUs | RXTX_Factor | Is_Public |
+—-+———–+———–+——+———–+——+——-+————-+———–+
| 1  | m1.tiny   | 512       | 1    | 0         |      | 1     | 1.0         | True      |
| 2  | m1.small  | 2048      | 20   | 0         |      | 1     | 1.0         | True      |
| 3  | m1.medium | 4096      | 40   | 0         |      | 2     | 1.0         | True      |
| 4  | m1.large  | 8192      | 80   | 0         |      | 4     | 1.0         | True      |
| 5  | m1.xlarge | 16384     | 160  | 0         |      | 8     | 1.0         | True      |
+—-+———–+———–+——+———–+——+——-+————-+———–+
== Nova instances ==
+————————————–+————-+———–+————+————-+————————————+
| ID                                   | Name        | Status    | Task State | Power State | Networks                           |
+————————————–+————-+———–+————+————-+————————————+
| 388bbe10-87b2-40e5-a6ee-b87b05116d51 | CirrOS445   | ACTIVE    | –          | Running     | private=30.0.0.14, 192.169.142.155 |
| 4d380c79-3213-45c0-8e4c-cef2dd19836d | UbuntuSRV01 | SUSPENDED | –          | Shutdown    | private=30.0.0.13, 192.169.142.154 |
+————————————–+————-+———–+————+————-+————————————+

[root@ip-192-169-142-127 ~(keystone_admin)]# nova-manage service list
Binary           Host                                 Zone             Status     State Updated_At
nova-consoleauth ip-192-169-142-127.ip.secureserver.net internal         enabled    :-)   2014-06-22 10:40:00
nova-scheduler   ip-192-169-142-127.ip.secureserver.net internal         enabled    :-)   2014-06-22 10:40:01
nova-conductor   ip-192-169-142-127.ip.secureserver.net internal         enabled    :-)   2014-06-22 10:40:03
nova-cert        ip-192-169-142-127.ip.secureserver.net internal         enabled    :-)   2014-06-22 10:40:00
nova-compute     ip-192-169-142-137.ip.secureserver.net nova             enabled    :-)   2014-06-22 10:40:03

[root@ip-192-169-142-127 ~(keystone_admin)]# neutron agent-list
+————————————–+——————–+—————————————-+——-+—————-+
| id                                   | agent_type         | host                                   | alive | admin_state_up |
+————————————–+——————–+—————————————-+——-+—————-+
| 61160392-4c97-4e8f-a902-1e55867e4425 | DHCP agent         | ip-192-169-142-127.ip.secureserver.net | :-)   | True           |
| 6cd022b9-9eb8-4d1e-9991-01dfe678eba5 | Open vSwitch agent | ip-192-169-142-137.ip.secureserver.net | :-)   | True           |
| 893a1a71-5709-48e9-b1a4-11e02f5eca15 | Metadata agent     | ip-192-169-142-127.ip.secureserver.net | :-)   | True           |
| bb29c2dc-2db6-487c-a262-32cecf85c608 | L3 agent           | ip-192-169-142-127.ip.secureserver.net | :-)   | True           |
| d7456233-53ba-4ae4-8936-3448f6ea9d65 | Open vSwitch agent | ip-192-169-142-127.ip.secureserver.net | :-)   | True           |
+————————————–+——————–+—————————————-+——-+—————-+

[root@ip-192-169-142-127 network-scripts(keystone_admin)]# cat ifcfg-br-ex
DEVICE="br-ex"
BOOTPROTO="static"
IPADDR="192.169.142.127"
NETMASK="255.255.255.0"
DNS1="83.221.202.254"
BROADCAST="192.169.142.255"
GATEWAY="192.169.142.1"
NM_CONTROLLED="no"
DEFROUTE="yes"
IPV4_FAILURE_FATAL="yes"
IPV6INIT=no
ONBOOT="yes"
TYPE="OVSBridge"
DEVICETYPE="ovs"

 [root@ip-192-169-142-127 network-scripts(keystone_admin)]# cat ifcfg-eth0
DEVICE="eth0"
# HWADDR=90:E6:BA:2D:11:EB
ONBOOT="yes"
TYPE="OVSPort"
DEVICETYPE="ovs"
OVS_BRIDGE=br-ex
NM_CONTROLLED=no
IPV6INIT=no

 [root@ip-192-169-142-127 network-scripts(keystone_admin)]# cat ifcfg-eth1
TYPE=Ethernet
BOOTPROTO=none
DEVICE=eth1
ONBOOT=yes
IPADDR=192.168.122.127
PREFIX=24
# HWADDR=52:54:00:EE:94:93
NM_CONTROLLED=no

 [root@ip-192-169-142-127 ~(keystone_admin)]# ovs-vsctl show
86e16ac0-c2e6-4eb4-a311-cee56fe86800
Bridge br-ex
Port "eth0"
Interface "eth0"
Port "qg-068e0e7a-95"
Interface "qg-068e0e7a-95"
type: internal
Port br-ex
Interface br-ex
type: internal
Bridge "br-eth1"
Port "eth1"
Interface "eth1"
Port "phy-br-eth1"
Interface "phy-br-eth1"
Port "br-eth1"
Interface "br-eth1"
type: internal
Bridge br-int
Port "qr-16b1ea2b-fc"
tag: 1
Interface "qr-16b1ea2b-fc"
type: internal
Port "qr-2bb007df-e1"
tag: 2
Interface "qr-2bb007df-e1"
type: internal
Port "tap1c48d234-23"
tag: 2
Interface "tap1c48d234-23"
type: internal
Port br-int
Interface br-int
type: internal
Port "tap26440f58-b0"
tag: 1
Interface "tap26440f58-b0"
type: internal
Port "int-br-eth1"
Interface "int-br-eth1"
ovs_version: "2.1.2"

[root@ip-192-169-142-127 neutron]# cat plugin.ini
[ml2]
type_drivers = vlan
tenant_network_types = vlan
mechanism_drivers = openvswitch
[ml2_type_vlan]
network_vlan_ranges = physnet1:100:200
[ovs]
network_vlan_ranges = physnet1:100:200
tenant_network_type = vlan
enable_tunneling = False
integration_bridge = br-int
bridge_mappings = physnet1:br-eth1
local_ip = 192.168.122.127
[AGENT]
polling_interval = 2
[SECURITYGROUP]
firewall_driver = neutron.agent.linux.iptables_firewall.OVSHybridIptablesFirewallDriver

Checksum offloading disabled on eth1 of Compute Node
[root@ip-192-169-142-137 neutron]# /usr/sbin/ethtool --offload eth1 tx off
Actual changes:
tx-checksumming: off
    tx-checksum-ip-generic: off
tcp-segmentation-offload: off
    tx-tcp-segmentation: off [requested on]
    tx-tcp-ecn-segmentation: off [requested on]
    tx-tcp6-segmentation: off [requested on]
udp-fragmentation-offload: off [requested on]
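
To keep this offload setting across reboots, one option (an assumption on my side, not part of the original setup) is to re-apply it from rc.local:

# echo '/usr/sbin/ethtool --offload eth1 tx off' >> /etc/rc.d/rc.local
# chmod +x /etc/rc.d/rc.local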

Two Real Node (Controller+Compute) IceHouse Neutron OVS&GRE Cluster on Fedora 20

June 4, 2014

Two boxes have been set up, each one having 2 NICs (p37p1, p4p1), for the Controller && Compute Node setup. Before running `packstack --answer-file=TwoRealNodeOVS&GRE.txt`, SELINUX was set to permissive on both nodes. Both p4p1 interfaces were assigned IPs and set to promiscuous mode (192.168.0.127, 192.168.0.137). The firewalld and NetworkManager services were disabled; the IPv4 iptables firewall and the network service are enabled and running. Packstack was bound to the public IP of interface p37p1, 192.168.1.127; the Compute Node is 192.168.1.137 (view the answer-file).

 Setup configuration

– Controller node: Nova, Keystone, Cinder, Glance, Neutron (using Open vSwitch plugin && GRE)
– Compute node: Nova (nova-compute), Neutron (openvswitch-agent)

icehouse1.localdomain   –  Controller (192.168.1.127)

icehouse2.localdomain   –  Compute   (192.168.1.137)

Post packstack install  updates :-

1. nova.conf && metadata_agent.ini on Controller per

Two Real Node IceHouse Neutron OVS&GRE

This update enables nova-api to listen on port 9697

View section –

“Metadata support configured on Controller+NeutronServer Node”

 2. File /etc/sysconfig/iptables updated on both nodes with lines :-

*filter section

-A INPUT -p gre -j ACCEPT
-A OUTPUT -p gre -j ACCEPT

Service iptables restarted 
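
A minimal sketch of an equivalent way to get the same result at runtime and persist it, assuming the stock iptables-services layout:

# iptables -I INPUT -p gre -j ACCEPT
# iptables -I OUTPUT -p gre -j ACCEPT
# service iptables save      # writes the running rules into /etc/sysconfig/iptables
# service iptables restart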

 ***************************************

 On Controller+NeutronServer

 ***************************************

[root@icehouse1 network-scripts(keystone_admin)]# cat ifcfg-br-ex
DEVICE=”br-ex”
BOOTPROTO=”static”
IPADDR=”192.168.1.127″
NETMASK=”255.255.255.0″
DNS1=”83.221.202.254″
BROADCAST=”192.168.1.255″
GATEWAY=”192.168.1.1″
NM_CONTROLLED=”no”
DEFROUTE=”yes”
IPV4_FAILURE_FATAL=”yes”
IPV6INIT=no
ONBOOT=”yes”
TYPE=”OVSBridge”
DEVICETYPE=”ovs”

[root@icehouse1 network-scripts(keystone_admin)]# cat ifcfg-p37p1
DEVICE=p37p1
ONBOOT=”yes”
TYPE=”OVSPort”
DEVICETYPE=”ovs”
OVS_BRIDGE=br-ex
NM_CONTROLLED=no
IPV6INIT=no

[root@icehouse1 network-scripts(keystone_admin)]# cat ifcfg-p4p1
TYPE=Ethernet
BOOTPROTO=none
DEFROUTE=yes
IPV4_FAILURE_FATAL=no
IPV6INIT=yes
IPV6_AUTOCONF=yes
IPV6_DEFROUTE=yes
IPV6_FAILURE_FATAL=no
DEVICE=p4p1
UUID=dbc361f1-805b-4f57-8150-cbc24ab7ee1a
ONBOOT=yes
IPADDR=192.168.0.127
PREFIX=24
# GATEWAY=192.168.0.1
DNS1=83.221.202.254
# HWADDR=00:E0:53:13:17:4C
IPV6_PEERDNS=yes
IPV6_PEERROUTES=yes
NM_CONTROLLED=no

[root@icehouse1 network-scripts(keystone_admin)]# ovs-vsctl show
119e5be5-5ef6-4f39-875c-ab1dfdb18972
Bridge br-int
Port “qr-209f67c4-b1”
tag: 1
Interface “qr-209f67c4-b1”
type: internal
Port br-int
Interface br-int
type: internal
Port patch-tun
Interface patch-tun
type: patch
options: {peer=patch-int}
Port “tapb5da1c7e-50”
tag: 1
Interface “tapb5da1c7e-50”
type: internal
Bridge br-ex
Port “qg-22a1fffe-91”
Interface “qg-22a1fffe-91”
type: internal
Port “p37p1”
Interface “p37p1”
Port br-ex
Interface br-ex
type: internal
Bridge br-tun
Port patch-int
Interface patch-int
type: patch
options: {peer=patch-tun}
Port br-tun
Interface br-tun
type: internal
Port “gre-1”
Interface “gre-1″
type: gre
options: {in_key=flow, local_ip=”192.168.0.127″, out_key=flow, remote_ip=”192.168.0.137”}
ovs_version: “2.1.2”

**********************************

On Compute

**********************************

[root@icehouse2 network-scripts]# cat ifcfg-p37p1
TYPE=Ethernet
BOOTPROTO=none
DEFROUTE=yes
IPV4_FAILURE_FATAL=no
IPV6INIT=yes
IPV6_AUTOCONF=yes
IPV6_DEFROUTE=yes
IPV6_FAILURE_FATAL=no
DEVICE=p37p1
UUID=b29ecd0e-7093-4ba9-8a2d-79ac74e93ea5
ONBOOT=yes
IPADDR=192.168.1.137
PREFIX=24
GATEWAY=192.168.1.1
DNS1=83.221.202.254
HWADDR=90:E6:BA:2D:11:EB
IPV6_PEERDNS=yes
IPV6_PEERROUTES=yes
NM_CONTROLLED=no

[root@icehouse2 network-scripts]# cat ifcfg-p4p1
TYPE=Ethernet
BOOTPROTO=none
DEFROUTE=yes
IPV4_FAILURE_FATAL=no
IPV6INIT=yes
IPV6_AUTOCONF=yes
IPV6_DEFROUTE=yes
IPV6_FAILURE_FATAL=no
DEVICE=p4p1
UUID=a57d6dd3-32fe-4a9f-a6d0-614e004bfdf6
ONBOOT=yes
IPADDR=192.168.0.137
PREFIX=24
GATEWAY=192.168.0.1
DNS1=83.221.202.254
HWADDR=00:0C:76:E0:1E:C5
IPV6_PEERDNS=yes
IPV6_PEERROUTES=yes
NM_CONTROLLED=no

[root@icehouse2 network-scripts]# ovs-vsctl show
2dd63952-602e-4370-900f-85d8c984a0cb
Bridge br-int
Port “qvo615e1af7-f4”
tag: 3
Interface “qvo615e1af7-f4”
Port “qvoe78bebdb-36”
tag: 3
Interface “qvoe78bebdb-36”
Port br-int
Interface br-int
type: internal
Port “qvo9ccf821f-87”
tag: 3
Interface “qvo9ccf821f-87”
Port patch-tun
Interface patch-tun
type: patch
options: {peer=patch-int}
Bridge br-tun
Port patch-int
Interface patch-int
type: patch
options: {peer=patch-tun}
Port “gre-2”
Interface “gre-2”
type: gre
options: {in_key=flow, local_ip=”192.168.0.137″, out_key=flow, remote_ip=”192.168.0.127″}
Port br-tun
Interface br-tun
type: internal
ovs_version: “2.1.2

**************************************************

Update dhcp_agent.ini and create dnsmasq.conf

**************************************************

[root@icehouse1 neutron(keystone_admin)]# cat  dhcp_agent.ini

[DEFAULT]
debug = False
resync_interval = 30
interface_driver =neutron.agent.linux.interface.OVSInterfaceDriver
dhcp_driver = neutron.agent.linux.dhcp.Dnsmasq
use_namespaces = True
enable_isolated_metadata = False
enable_metadata_network = False
dhcp_delete_namespaces = False
dnsmasq_config_file = /etc/neutron/dnsmasq.conf
root_helper=sudo neutron-rootwrap /etc/neutron/rootwrap.conf
state_path=/var/lib/neutron

[root@icehouse1 neutron(keystone_admin)]# cat  dnsmasq.conf
log-facility = /var/log/neutron/dnsmasq.log
log-dhcp
# Line added
dhcp-option=26,1454
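
The dhcp-option=26,1454 line pushes an MTU of 1454 to the instances so that guest frames still fit after GRE encapsulation. A quick way to confirm it took effect (a sketch; the interface name inside the guest and the gateway address are illustrative):

$ ip link show eth0 | grep mtu     # run inside the instance, expect "mtu 1454"
$ ping -M do -s 1400 10.0.0.1      # DF set; should succeed without fragmentation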

**************************************************************************

Metadata support configured on Controller+NeutronServer Node :- 

***************************************************************************

[root@icehouse1 ~(keystone_admin)]# ip netns
qrouter-269dfed8-e314-4a23-b693-b891ba00582e
qdhcp-79eb80f1-d550-4f4c-9670-f8e10b43e7eb

[root@icehouse1 ~(keystone_admin)]# ip netns exec qrouter-269dfed8-e314-4a23-b693-b891ba00582e iptables -S -t nat | grep 169.254

-A neutron-l3-agent-PREROUTING -d 169.254.169.254/32 -p tcp -m tcp --dport 80 -j REDIRECT --to-ports 9697

[root@icehouse1 ~(keystone_admin)]# ip netns exec qrouter-269dfed8-e314-4a23-b693-b891ba00582e netstat -anpt

Active Internet connections (servers and established)
Proto Recv-Q Send-Q Local Address           Foreign Address         State       PID/Program name
tcp        0      0 0.0.0.0:9697            0.0.0.0:*               LISTEN      5212/python

[root@icehouse1 ~(keystone_admin)]# ps -ef | grep 5212


root      5212     1  0 11:40 ?        00:00:00 /usr/bin/python /bin/neutron-ns-metadata-proxy --pid_file=/var/lib/neutron/external/pids/269dfed8-e314-4a23-b693-b891ba00582e.pid --metadata_proxy_socket=/var/lib/neutron/metadata_proxy --router_id=269dfed8-e314-4a23-b693-b891ba00582e --state_path=/var/lib/neutron --metadata_port=9697 --verbose --log-file=neutron-ns-metadata-proxy-269dfed8-e314-4a23-b693-b891ba00582e.log --log-dir=/var/log/neutron
root     21188  4697  0 14:29 pts/0    00:00:00 grep --color=auto 5212

[root@icehouse1 ~(keystone_admin)]# netstat -anpt | grep 9697

tcp        0      0 0.0.0.0:9697            0.0.0.0:*               LISTEN      1228/python       


[root@icehouse1 ~(keystone_admin)]# ps -ef | grep 1228

nova      1228     1  0 11:38 ?          00:00:56 /usr/bin/python /usr/bin/nova-api
nova      3623  1228  0 11:39 ?        00:00:00 /usr/bin/python /usr/bin/nova-api
nova      3626  1228  0 11:39 ?        00:00:00 /usr/bin/python /usr/bin/nova-api
nova      3719  1228  0 11:39 ?        00:00:12 /usr/bin/python /usr/bin/nova-api
nova      3720  1228  0 11:39 ?        00:00:10 /usr/bin/python /usr/bin/nova-api
nova      3775  1228  0 11:39 ?        00:00:01 /usr/bin/python /usr/bin/nova-api
nova      3776  1228  0 11:39 ?        00:00:01 /usr/bin/python /usr/bin/nova-api
root     21230  4697  0 14:29 pts/0    00:00:00 grep --color=auto 1228
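
The end-to-end check is from inside a guest: cloud-init and manual requests reach the metadata service through the namespace redirect shown above. A sketch, run inside any instance on the tenant network:

$ curl http://169.254.169.254/latest/meta-data/instance-id
$ curl http://169.254.169.254/latest/meta-data/public-ipv4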

[root@icehouse1 ~(keystone_admin)]# nova-manage service list
Binary           Host                                 Zone             Status     State Updated_At
nova-consoleauth icehouse1.localdomain                internal         enabled    :-)   2014-06-03 10:39:08
nova-scheduler   icehouse1.localdomain                internal         enabled    :-)   2014-06-03 10:39:08
nova-conductor   icehouse1.localdomain                internal         enabled    :-)   2014-06-03 10:39:08
nova-cert        icehouse1.localdomain                internal         enabled    :-)   2014-06-03 10:39:08
nova-compute     icehouse2.localdomain                nova             enabled    :-)   2014-06-03 10:39:07

[root@icehouse1 ~(keystone_admin)]# neutron agent-list
+————————————–+——————–+———————–+——-+—————-+
| id                                   | agent_type         | host                  | alive | admin_state_up |
+————————————–+——————–+———————–+——-+—————-+
| 4f37a350-2613-4a2b-95b2-b3bd4ee075a0 | L3 agent           | icehouse1.localdomain | :-)   | True           |
| 5b800eb7-aaf8-476a-8197-d13a0fc931c6 | Metadata agent     | icehouse1.localdomain | :-)   | True           |
| 5ce5e6fe-4d17-4ce0-9e6e-2f3b255ffeb0 | Open vSwitch agent | icehouse1.localdomain | :-)   | True           |
| 7f88512a-c59a-4ea4-8494-02e910cae034 | DHCP agent         | icehouse1.localdomain | :-)   | True           |
| a23e4d51-3cbc-42ee-845a-f5c17dff2370 | Open vSwitch agent | icehouse2.localdomain | :-)   | True           |
+————————————–+——————–+———————–+——-+————



Two Node (Controller+Compute) IceHouse Neutron OVS&GRE Cluster on Fedora 20

June 2, 2014

Two KVMs have been created, each one having 2 virtual NICs (eth0, eth1), for the Controller && Compute Node setup. Before running `packstack --answer-file=twoNode-answer.txt`, SELINUX was set to permissive on both nodes. Both eth1 interfaces were assigned IPs from the GRE libvirt subnet before installation and set to promiscuous mode (192.168.122.127, 192.168.122.137). Packstack was bound to the public (eth0) IPs: 192.169.142.127 on the Controller and 192.169.142.137 on the Compute Node.

ANSWER FILE Two Node IceHouse Neutron OVS&GRE  and  updated *.ini , *.conf files after packstack setup  http://textuploader.com/0ts8

Two libvirt subnets were created on the F20 KVM Server to support the installation:

Public subnet: 192.169.142.0/24

GRE tunnel support subnet: 192.168.122.0/24

1. Create a new libvirt network (other than your default 192.168.x.x) file:

$ cat openstackvms.xml
<network>
 <name>openstackvms</name>
 <uuid>d0e9964a-f91a-40c0-b769-a609aee41bf2</uuid>
 <forward mode='nat'>
 <nat>
 <port start='1024' end='65535'/>
 </nat>
 </forward>
 <bridge name='virbr1' stp='on' delay='0' />
 <mac address='52:54:00:60:f8:6e'/>
 <ip address='192.169.142.1' netmask='255.255.255.0'>
 <dhcp>
 <range start='192.169.142.2' end='192.169.142.254' />
 </dhcp>
 </ip>
 </network>
 2. Define the above network:
  $ virsh net-define openstackvms.xml
3. Start the network and enable it for "autostart"
 $ virsh net-start openstackvms
 $ virsh net-autostart openstackvms

4. List your libvirt networks to see if it reflects:
  $ virsh net-list
  Name                 State      Autostart     Persistent
  ----------------------------------------------------------
  default              active     yes           yes
  openstackvms         active     yes           yes

5. Optionally, list your bridge devices:
  $ brctl show
  bridge name     bridge id               STP enabled     interfaces
  virbr0          8000.5254003339b3       yes             virbr0-nic
  virbr1          8000.52540060f86e       yes             virbr1-nic

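With both networks in place, each cluster VM just needs one NIC on the public network and one on the GRE support network. A hedged sketch using virt-install (the image path, RAM and disk sizes are placeholders, not values from this setup):

# virt-install --name controller --ram 4096 --vcpus 2 \
  --disk path=/var/lib/libvirt/images/controller.qcow2,size=40 \
  --network network=openstackvms \
  --network network=default \
  --cdrom /path/to/Fedora-20-x86_64-DVD.iso --graphics vnc

The first --network becomes eth0 (192.169.142.x), the second becomes eth1 (192.168.122.x), matching the layout described above.
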
After packstack 2 Node (Controller+Compute) IceHouse OVS&GRE setup :-

[root@ip-192-169-142-127 ~]# mysql -u root -p
Enter password:
Welcome to the MariaDB monitor.  Commands end with ; or \g.
Your MariaDB connection id is 143
Server version: 5.5.36-MariaDB-wsrep MariaDB Server, wsrep_25.9.r3961

Copyright (c) 2000, 2014, Oracle, Monty Program Ab and others.

Type 'help;' or '\h' for help. Type '\c' to clear the current input statement.

MariaDB [(none)]> show databases ;
+——————–+
| Database           |
+——————–+
| information_schema |
| cinder             |
| glance             |
| keystone           |
| mysql              |
| nova               |
| ovs_neutron        |
| performance_schema |
| test               |
+——————–+
9 rows in set (0.02 sec)

MariaDB [(none)]> use ovs_neutron ;
Reading table information for completion of table and column names
You can turn off this feature to get a quicker startup with -A

Database changed
MariaDB [ovs_neutron]> show tables ;
+—————————+
| Tables_in_ovs_neutron     |
+—————————+
| agents                    |
| alembic_version           |
| allowedaddresspairs       |
| dnsnameservers            |
| externalnetworks          |
| extradhcpopts             |
| floatingips               |
| ipallocationpools         |
| ipallocations             |
| ipavailabilityranges      |
| networkdhcpagentbindings  |
| networks                  |
| ovs_network_bindings      |
| ovs_tunnel_allocations    |
| ovs_tunnel_endpoints      |
| ovs_vlan_allocations      |
| portbindingports          |
| ports                     |
| quotas                    |
| routerl3agentbindings     |
| routerroutes              |
| routers                   |
| securitygroupportbindings |
| securitygrouprules        |
| securitygroups            |
| servicedefinitions        |
| servicetypes              |
| subnetroutes              |
| subnets                   |
+—————————+
29 rows in set (0.00 sec)

MariaDB [ovs_neutron]> select * from networks ;
+———————————-+————————————–+———+——–+—————-+——–+
| tenant_id                        | id                                   | name    | status | admin_state_up | shared |
+———————————-+————————————–+———+——–+—————-+——–+
| 179e44f8f53641da89c3eb5d07405523 | 3854bc88-ae14-47b0-9787-233e54ffe7e5 | private | ACTIVE |              1 |      0 |
| 179e44f8f53641da89c3eb5d07405523 | 6e8dc33d-55d4-47b5-925f-e1fa96128c02 | public  | ACTIVE |              1 |      1 |
+———————————-+————————————–+———+——–+—————-+——–+
2 rows in set (0.00 sec)

MariaDB [ovs_neutron]> select * from routers ;
+———————————-+————————————–+———+——–+—————-+————————————–+————-+
| tenant_id                        | id                                   | name    | status | admin_state_up | gw_port_id                           | enable_snat |
+———————————-+————————————–+———+——–+—————-+————————————–+————-+
| 179e44f8f53641da89c3eb5d07405523 | 94829282-e6b2-4364-b640-8c0980218a4f | ROUTER3 | ACTIVE |              1 | df9711e4-d1a2-4255-9321-69105fbd8665 |           1 |
+———————————-+————————————–+———+——–+—————-+————————————–+————-+
1 row in set (0.00 sec)

MariaDB [ovs_neutron]>

*********************

On Controller :-

********************

[root@ip-192-169-142-127 network-scripts]# cat ifcfg-br-ex
DEVICE="br-ex"
BOOTPROTO="static"
IPADDR="192.169.142.127"
NETMASK="255.255.255.0"
DNS1="83.221.202.254"
BROADCAST="192.169.142.255"
GATEWAY="192.169.142.1"
NM_CONTROLLED="no"
DEFROUTE="yes"
IPV4_FAILURE_FATAL="yes"
IPV6INIT=no
ONBOOT="yes"
TYPE="OVSBridge"
DEVICETYPE="ovs"

[root@ip-192-169-142-127 network-scripts]# cat ifcfg-eth0
DEVICE=eth0
ONBOOT="yes"
TYPE="OVSPort"
DEVICETYPE="ovs"
OVS_BRIDGE=br-ex
NM_CONTROLLED=no
IPV6INIT=no

[root@ip-192-169-142-127 network-scripts]# cat ifcfg-eth1
TYPE=Ethernet
BOOTPROTO=none
DEFROUTE=yes
IPV4_FAILURE_FATAL=no
IPV6INIT=yes
IPV6_AUTOCONF=yes
IPV6_DEFROUTE=yes
IPV6_FAILURE_FATAL=no
DEVICE=eth1
ONBOOT=yes
IPADDR=192.168.122.127
PREFIX=24
GATEWAY=192.168.122.1
DNS1=83.221.202.254
NM_CONTROLLED=no
IPV6_PEERDNS=yes
IPV6_PEERROUTES=yes

*************************************

ovs-vsctl show output on controller

*************************************

[root@ip-192-169-142-127 ~(keystone_admin)]# ovs-vsctl show
dc2c76d6-40e3-496e-bdec-470452758c32
Bridge br-tun
Port patch-int
Interface patch-int
type: patch
options: {peer=patch-tun}
Port "gre-1"
Interface "gre-1"
type: gre
options: {in_key=flow, local_ip="192.168.122.127", out_key=flow, remote_ip="192.168.122.137"}
Port br-tun
Interface br-tun
type: internal
Bridge br-int
Port br-int
Interface br-int
type: internal
Port patch-tun
Interface patch-tun
type: patch
options: {peer=patch-int}
Port "tap7acb7666-aa"
tag: 1
Interface "tap7acb7666-aa"
type: internal
Port "qr-a26fe722-07"
tag: 1
Interface "qr-a26fe722-07"
type: internal
Bridge br-ex
Port "qg-df9711e4-d1"
Interface "qg-df9711e4-d1"
type: internal
Port "eth0"
Interface "eth0"
Port br-ex
Interface br-ex
type: internal
ovs_version: "2.1.2"

********************

On Compute:-

********************

[root@ip-192-169-142-137 network-scripts]# cat ifcfg-eth0
TYPE=Ethernet
BOOTPROTO=static
DEFROUTE=yes
IPV4_FAILURE_FATAL=no
IPV6INIT=yes
IPV6_AUTOCONF=yes
IPV6_DEFROUTE=yes
IPV6_FAILURE_FATAL=no
DEVICE=eth0
UUID=f96e561d-d14c-4fb1-9657-0c935f7f5721
ONBOOT=yes
IPADDR=192.169.142.137
PREFIX=24
GATEWAY=192.169.142.1
DNS1=83.221.202.254
HWADDR=52:54:00:67:AC:04
NM_CONTROLLED=no
IPV6_PEERDNS=yes
IPV6_PEERROUTES=yes

[root@ip-192-169-142-137 network-scripts]# cat ifcfg-eth1
TYPE=Ethernet
BOOTPROTO=none
DEFROUTE=yes
IPV4_FAILURE_FATAL=no
IPV6INIT=yes
IPV6_AUTOCONF=yes
IPV6_DEFROUTE=yes
IPV6_FAILURE_FATAL=no
DEVICE=eth1
ONBOOT=yes
IPADDR=192.168.122.137
PREFIX=24
GATEWAY=192.168.122.1
DNS1=83.221.202.254
NM_CONTROLLED=no
IPV6_PEERDNS=yes
IPV6_PEERROUTES=yes

************************************

ovs-vsctl show output on compute

************************************

[root@ip-192-169-142-137 ~]# ovs-vsctl show
1c6671de-fcdf-4a29-9eee-96c949848fff
Bridge br-tun
Port "gre-2"
Interface "gre-2"
type: gre
options: {in_key=flow, local_ip="192.168.122.137", out_key=flow, remote_ip="192.168.122.127"}
Port patch-int
Interface patch-int
type: patch
options: {peer=patch-tun}
Port br-tun
Interface br-tun
type: internal
Bridge br-int
Port "qvo87038189-3f"
tag: 1
Interface "qvo87038189-3f"
Port patch-tun
Interface patch-tun
type: patch
options: {peer=patch-int}
Port br-int
Interface br-int
type: internal
ovs_version: "2.1.2"

[root@ip-192-169-142-137 ~]# brctl show
bridge name    bridge id        STP enabled    interfaces
qbr87038189-3f        8000.2abf9e69f97c    no        qvb87038189-3f
tap87038189-3f

*************************

Metadata verification 

*************************

[root@ip-192-169-142-127 ~(keystone_admin)]# ip netns exec qrouter-94829282-e6b2-4364-b640-8c0980218a4f iptables -S -t nat | grep 169.254
-A neutron-l3-agent-PREROUTING -d 169.254.169.254/32 -p tcp -m tcp --dport 80 -j REDIRECT --to-ports 9697

[root@ip-192-169-142-127 ~(keystone_admin)]# ip netns exec qrouter-94829282-e6b2-4364-b640-8c0980218a4f netstat -anpt
Active Internet connections (servers and established)
Proto Recv-Q Send-Q Local Address           Foreign Address         State       PID/Program name
tcp        0      0 0.0.0.0:9697            0.0.0.0:*               LISTEN      3771/python
[root@ip-192-169-142-127 ~(keystone_admin)]# ps -ef | 3771
bash: 3771: command not found…

[root@ip-192-169-142-127 ~(keystone_admin)]# ps -ef | grep 3771
root      3771     1  0 13:58 ?        00:00:00 /usr/bin/python /bin/neutron-ns-metadata-proxy --pid_file=/var/lib/neutron/external/pids/94829282-e6b2-4364-b640-8c0980218a4f.pid --metadata_proxy_socket=/var/lib/neutron/metadata_proxy --router_id=94829282-e6b2-4364-b640-8c0980218a4f --state_path=/var/lib/neutron --metadata_port=9697 --verbose --log-file=neutron-ns-metadata-proxy-94829282-e6b2-4364-b640-8c0980218a4f.log --log-dir=/var/log/neutron

[root@ip-192-169-142-127 ~(keystone_admin)]# netstat -anpt | grep 9697

tcp        0      0 0.0.0.0:9697            0.0.0.0:*               LISTEN      1024/python

[root@ip-192-169-142-127 ~(keystone_admin)]# ps -ef | grep 1024
nova      1024     1  1 13:58 ?        00:00:05 /usr/bin/python /usr/bin/nova-api
nova      3369  1024  0 13:58 ?        00:00:00 /usr/bin/python /usr/bin/nova-api
nova      3370  1024  0 13:58 ?        00:00:00 /usr/bin/python /usr/bin/nova-api
nova      3397  1024  0 13:58 ?        00:00:02 /usr/bin/python /usr/bin/nova-api
nova      3398  1024  0 13:58 ?        00:00:02 /usr/bin/python /usr/bin/nova-api
nova      3423  1024  0 13:58 ?        00:00:00 /usr/bin/python /usr/bin/nova-api
nova      3424  1024  0 13:58 ?        00:00:00 /usr/bin/python /usr/bin/nova-api
root      4947  4301  0 14:06 pts/0    00:00:00 grep --color=auto 1024


Two Real Node (Controller+Compute) RDO IceHouse Neutron OVS&VLAN Cluster on Fedora 20 Setup

May 27, 2014

Two boxes, each one having 2 NICs (p37p1, p4p1), have been set up for the (Controller+NeutronServer) && Compute Nodes.

Setup configuration

– Controller node: Nova, Keystone, Cinder, Glance, Neutron (using Open vSwitch plugin && VLAN )
– Compute node: Nova (nova-compute), Neutron (openvswitch-agent)

icehouse1.localdomain   –  Controller (192.168.1.127)

icehouse2.localdomain   –  Compute   (192.168.1.137)

Before running `packstack --answer-file=TwoRealNode-answer.txt`, SELINUX was set to permissive on both nodes. Interfaces p4p1 on both nodes were set to promiscuous mode (e.g. HWADDR was commented out).

Specific of answer-file on real F20 boxes :-

CONFIG_NOVA_COMPUTE_PRIVIF=p4p1

CONFIG_NOVA_NETWORK_PUBIF=p37p1

CONFIG_NOVA_NETWORK_PRIVIF=p4p1

CONFIG_NEUTRON_OVS_TENANT_NETWORK_TYPE=vlan

CONFIG_NEUTRON_OVS_VLAN_RANGES=physnet1:100:200

CONFIG_NEUTRON_OVS_BRIDGE_MAPPINGS=physnet1:br-p4p1

CONFIG_NEUTRON_OVS_BRIDGE_IFACES=br-p4p1:p4p1

Post installation steps :-

1. NetworkManager should be disabled on both nodes, service network enabled.

2. Syntax of ifcfg-* files of the corresponding OVS ports should follow RHEL 6.5 notation rather than F20

3. Special care should be taken to bring up p4p1 (in my case)

4. Post-install reconfiguration of *.ini && *.conf files: http://textuploader.com/9oec

5. Configuration p4p1 interfaces 

# cat ifcfg-p4p1
TYPE=Ethernet
BOOTPROTO=none
DEVICE=p4p1
ONBOOT=yes
NM_CONTROLLED=no
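
For reference, a sketch of what this VLAN wiring makes possible: as admin one can pin a network to a specific VLAN on physnet1 (the name and VLAN ID below are illustrative, within the 100:200 range configured above):

# source keystonerc_admin
# neutron net-create vlan150 --provider:network_type vlan \
        --provider:physical_network physnet1 --provider:segmentation_id 150
# neutron subnet-create vlan150 60.0.0.0/24 --name vlan150_subnet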

Metadata access verification on Controller:-

[root@icehouse1 ~(keystone_admin)]# ip netns

qdhcp-a2bf6363-6447-47f5-a243-b998d206d593

qrouter-2462467b-ea0a-4a40-a093-493572010694

[root@icehouse1 ~(keystone_admin)]# ip netns exec qrouter-2462467b-ea0a-4a40-a093-493572010694  iptables -S -t nat | grep 169

-A neutron-l3-agent-PREROUTING -d 169.254.169.254/32 -p tcp -m tcp --dport 80 -j REDIRECT --to-ports 8775

[root@icehouse1 ~(keystone_admin)]# ip netns exec qrouter-2462467b-ea0a-4a40-a093-493572010694  netstat -anpt

Active Internet connections (servers and established)

Proto Recv-Q Send-Q Local Address           Foreign Address         State       PID/Program name   

tcp        0      0 0.0.0.0:8775            0.0.0.0:*               LISTEN      6156/python  

[root@icehouse1 ~(keystone_admin)]# ps -ef | grep 6156

root      5691  4082  0 07:58 pts/0    00:00:00 grep --color=auto 6156
root      6156     1  0 06:04 ?        00:00:00 /usr/bin/python /bin/neutron-ns-metadata-proxy --pid_file=/var/lib/neutron/external/pids/2462467b-ea0a-4a40-a093-493572010694.pid --metadata_proxy_socket=/var/lib/neutron/metadata_proxy --router_id=2462467b-ea0a-4a40-a093-493572010694 --state_path=/var/lib/neutron --metadata_port=8775 --verbose --log-file=neutron-ns-metadata-proxy-2462467b-ea0a-4a40-a093-493572010694.log --log-dir=/var/log/neutron

[root@icehouse1 ~(keystone_admin)]# netstat -anpt | grep 8775

tcp        0      0 0.0.0.0:8775            0.0.0.0:*               LISTEN      1224/python 

[root@icehouse1 ~(keystone_admin)]# ps -aux | grep 1224

nova      1224  0.7  0.7 337092 65052 ?        Ss   05:59   0:46 /usr/bin/python /usr/bin/nova-api

boris     3789  0.0  0.1 504676 12248 ?        Sl   06:01   0:00 /usr/libexec/tracker-store

Verifying dhcp lease for private IPs for instances currently running :-

[root@icehouse1 ~(keystone_admin)]# ip netns exec qdhcp-a2bf6363-6447-47f5-a243-b998d206d593 ifconfig

lo: flags=73  mtu 65536
inet 127.0.0.1  netmask 255.0.0.0
inet6 ::1  prefixlen 128  scopeid 0x10
loop  txqueuelen 0  (Local Loopback)
RX packets 3  bytes 1728 (1.6 KiB)
RX errors 0  dropped 0  overruns 0  frame 0
TX packets 3  bytes 1728 (1.6 KiB)
TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0

tapa7e1ac48-7b: flags=67  mtu 1500
inet 10.0.0.11  netmask 255.255.255.0  broadcast 10.0.0.255
inet6 fe80::f816:3eff:fe9d:874d  prefixlen 64  scopeid 0x20
ether fa:16:3e:9d:87:4d  txqueuelen 0  (Ethernet)
RX packets 3364  bytes 626074 (611.4 KiB)
RX errors 0  dropped 35  overruns 0  frame 0
TX packets 2124  bytes 427060 (417.0 KiB)
TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0

[root@icehouse1 ~(keystone_admin)]# ip netns exec qdhcp-a2bf6363-6447-47f5-a243-b998d206d593 tcpdump -ln -i tapa7e1ac48-7b

tcpdump: verbose output suppressed, use -v or -vv for full protocol decode

listening on tapa7e1ac48-7b, link-type EN10MB (Ethernet), capture size 65535 bytes

11:07:02.388376 ARP, Request who-has 10.0.0.11 tell 10.0.0.38, length 46

11:07:02.388399 ARP, Reply 10.0.0.11 is-at fa:16:3e:9d:87:4d, length 28

11:07:12.239833 IP 10.0.0.43.bootpc > 10.0.0.11.bootps: BOOTP/DHCP, Request from fa:16:3e:40:da:a1, length 300

11:07:12.240491 IP 10.0.0.11.bootps > 10.0.0.43.bootpc: BOOTP/DHCP, Reply, length 324

11:07:12.313087 ARP, Request who-has 10.0.0.43 (Broadcast) tell 0.0.0.0, length 46

11:07:13.313070 ARP, Request who-has 10.0.0.43 (Broadcast) tell 0.0.0.0, length 46

11:07:15.634980 IP 0.0.0.0.bootpc > 255.255.255.255.bootps: BOOTP/DHCP, Request from fa:16:3e:6b:81:ff, length 280

11:07:15.635595 IP 10.0.0.11.bootps > 10.0.0.31.bootpc: BOOTP/DHCP, Reply, length 324

11:07:15.635954 IP 10.0.0.31 > 10.0.0.11: ICMP 10.0.0.31 udp port bootpc unreachable, length 360

11:07:17.254260 ARP, Request who-has 10.0.0.43 tell 10.0.0.11, length 28

11:07:17.254866 ARP, Reply 10.0.0.43 is-at fa:16:3e:40:da:a1, length 46

11:07:20.644135 ARP, Request who-has 10.0.0.11 tell 10.0.0.31, length 28

11:07:20.644157 ARP, Reply 10.0.0.11 is-at fa:16:3e:9d:87:4d, length 28

11:07:45.972179 IP 10.0.0.38.bootpc > 10.0.0.11.bootps: BOOTP/DHCP, Request from fa:16:3e:9d:67:df, length 300

11:07:45.973023 IP 10.0.0.11.bootps > 10.0.0.38.bootpc: BOOTP/DHCP, Reply, length 324

11:07:50.980701 ARP, Request who-has 10.0.0.11 tell 10.0.0.38, length 46

11:07:50.980725 ARP, Reply 10.0.0.11 is-at fa:16:3e:9d:87:4d, length 28

11:07:55.821920 IP 10.0.0.43.bootpc > 10.0.0.11.bootps: BOOTP/DHCP, Request from fa:16:3e:40:da:a1, length 300

11:07:55.822423 IP 10.0.0.11.bootps > 10.0.0.43.bootpc: BOOTP/DHCP, Reply, length 324

11:07:55.898024 ARP, Request who-has 10.0.0.43 (Broadcast) tell 0.0.0.0, length 46

11:07:56.897994 ARP, Request who-has 10.0.0.43 (Broadcast) tell 0.0.0.0, length 46

11:08:00.823637 ARP, Request who-has 10.0.0.11 tell 10.0.0.43, length 46
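
The same leases can be cross-checked against the files dnsmasq keeps for this network under the DHCP agent state directory (a sketch; the path assumes the default state_path=/var/lib/neutron):

# cd /var/lib/neutron/dhcp/a2bf6363-6447-47f5-a243-b998d206d593
# cat host      # MAC,name,IP records handed out to instances
# cat leases    # current dnsmasq leases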

******************

On Controller

******************

[root@icehouse1 ~(keystone_admin)]# ovs-vsctl show

a675c73e-c707-4f29-af60-57fb7c3f81c4
Bridge br-int
Port "int-br-p4p1"
Interface "int-br-p4p1"
Port br-int
Interface br-int
type: internal
Port "qr-bbba6fd3-a3"
tag: 1
Interface "qr-bbba6fd3-a3"
type: internal
Port "qvo61d82a0f-32"
tag: 1
Interface "qvo61d82a0f-32"
Port "tapa7e1ac48-7b"
tag: 1
Interface "tapa7e1ac48-7b"
type: internal
Port "qvof8c8a1a2-51"
tag: 1
Interface "qvof8c8a1a2-51"
Bridge br-ex
Port "p37p1"
Interface "p37p1"
Port br-ex
Interface br-ex
type: internal
Port "qg-3787602d-29"
Interface "qg-3787602d-29"
type: internal
Bridge "br-p4p1"
Port "p4p1"
Interface "p4p1"
Port "phy-br-p4p1"
Interface "phy-br-p4p1"
Port "br-p4p1"
Interface "br-p4p1"
type: internal
ovs_version: "2.0.1"

****************

On Compute

****************

[root@icehouse2 ]# ovs-vsctl show

bf768fc8-d18b-4762-bdd2-a410fcf88a9b
Bridge "br-p4p1"
Port "br-p4p1"
Interface "br-p4p1"
type: internal
Port "phy-br-p4p1"
Interface "phy-br-p4p1"
Port "p4p1"
Interface "p4p1"
Bridge br-int
Port br-int
Interface br-int
type: internal
Port "int-br-p4p1"
Interface "int-br-p4p1"
Port "qvoe5a82d77-d4"
tag: 8
Interface "qvoe5a82d77-d4"
ovs_version: "2.0.1"

[root@icehouse1 ~(keystone_admin)]# openstack-status

== Nova services ==

openstack-nova-api:                     active

openstack-nova-cert:                    active

openstack-nova-compute:                 active

openstack-nova-network:                 inactive  (disabled on boot)

openstack-nova-scheduler:               active

openstack-nova-volume:                  inactive  (disabled on boot)

openstack-nova-conductor:               active

== Glance services ==

openstack-glance-api:                   active

openstack-glance-registry:              active

== Keystone service ==

openstack-keystone:                     active

== Horizon service ==

openstack-dashboard:                    active

== neutron services ==

neutron-server:                         active

neutron-dhcp-agent:                     active

neutron-l3-agent:                       active

neutron-metadata-agent:                 active

neutron-lbaas-agent:                    inactive  (disabled on boot)

neutron-openvswitch-agent:              active

neutron-linuxbridge-agent:              inactive  (disabled on boot)

neutron-ryu-agent:                      inactive  (disabled on boot)

neutron-nec-agent:                      inactive  (disabled on boot)

neutron-mlnx-agent:                     inactive  (disabled on boot)

== Swift services ==

openstack-swift-proxy:                  active

openstack-swift-account:                active

openstack-swift-container:              active

openstack-swift-object:                 active

== Cinder services ==

openstack-cinder-api:                   active

openstack-cinder-scheduler:             active

openstack-cinder-volume:                active

openstack-cinder-backup:                inactive

== Ceilometer services ==

openstack-ceilometer-api:               active

openstack-ceilometer-central:           active

openstack-ceilometer-compute:           active

openstack-ceilometer-collector:         active

openstack-ceilometer-alarm-notifier:    active

openstack-ceilometer-alarm-evaluator:   active

== Support services ==

libvirtd:                               active

openvswitch:                            active

dbus:                                   active

tgtd:                                   active

rabbitmq-server:                        active

memcached:                              active

== Keystone users ==

+———————————-+————+———+———————-+

|                id                |    name    | enabled |        email         |

+———————————-+————+———+———————-+

| df9165cd160846b19f73491e0bc041c2 |   admin    |   True  |    test@test.com     |

| bafe2fc4d51a400a99b1b41ef50d4afd | ceilometer |   True  | ceilometer@localhost |

| df59d0782f174a34a3a73215300c64ca |   cinder   |   True  |   cinder@localhost   |

| ca624394c9d941b6ad0a07363ab668b2 |   glance   |   True  |   glance@localhost   |

| fb5125484a1f4b7aaf8503025eb018ba |  neutron   |   True  |  neutron@localhost   |

| 64912bc3726c48db8f003ce79d8fe746 |    nova    |   True  |    nova@localhost    |

| 6d8b48605d3b476097d89486813360c0 |   swift    |   True  |   swift@localhost    |

+———————————-+————+———+———————-+

== Glance images ==

+————————————–+—————–+————-+——————+———–+——–+

| ID                                   | Name            | Disk Format | Container Format | Size      | Status |

+————————————–+—————–+————-+——————+———–+——–+

| 8593a43a-2449-4b49-918f-9871011249a7 | CirrOS31        | qcow2       | bare             | 13147648  | active |

| 4be72a99-06e0-477d-b446-b597435455a9 | Fedora20image   | qcow2       | bare             | 210829312 | active |

| 28470072-f317-4a72-b3e8-3fffbe6a7661 | UubuntuServer14 | qcow2       | bare             | 253559296 | active |

+————————————–+—————–+————-+——————+———–+——–+

== Nova managed services ==

+——————+———————–+———-+———+——-+—————————-+—————–+

| Binary           | Host                  | Zone     | Status  | State | Updated_at                 | Disabled Reason |

+——————+———————–+———-+———+——-+—————————-+—————–+

| nova-consoleauth | icehouse1.localdomain | internal | enabled | up    | 2014-05-25T03:03:05.000000 | –               |

| nova-scheduler   | icehouse1.localdomain | internal | enabled | up    | 2014-05-25T03:03:05.000000 | –               |

| nova-conductor   | icehouse1.localdomain | internal | enabled | up    | 2014-05-25T03:03:13.000000 | –               |

| nova-compute     | icehouse1.localdomain | nova     | enabled | up    | 2014-05-25T03:03:10.000000 | –               |

| nova-cert        | icehouse1.localdomain | internal | enabled | up    | 2014-05-25T03:03:05.000000 | –               |

| nova-compute     | icehouse2.localdomain | nova     | enabled | up    | 2014-05-25T03:03:13.000000 | –               |

+——————+———————–+———-+———+——-+—————————-+—————–+

== Nova networks ==

+————————————–+———+——+

| ID                                   | Label   | Cidr |

+————————————–+———+——+

| 09e18ced-8c22-4166-a1a1-cbceece46884 | public  | –    |

| a2bf6363-6447-47f5-a243-b998d206d593 | private | –    |

+————————————–+———+——+

== Nova instance flavors ==

+—-+———–+———–+——+———–+——+——-+————-+———–+

| ID | Name      | Memory_MB | Disk | Ephemeral | Swap | VCPUs | RXTX_Factor | Is_Public |

+—-+———–+———–+——+———–+——+——-+————-+———–+

| 1  | m1.tiny   | 512       | 1    | 0         |      | 1     | 1.0         | True      |

| 2  | m1.small  | 2048      | 20   | 0         |      | 1     | 1.0         | True      |

| 3  | m1.medium | 4096      | 40   | 0         |      | 2     | 1.0         | True      |

| 4  | m1.large  | 8192      | 80   | 0         |      | 4     | 1.0         | True      |

| 5  | m1.xlarge | 16384     | 160  | 0         |      | 8     | 1.0         | True      |

+—-+———–+———–+——+———–+——+——-+————-+———–+

== Nova instances ==

+————————————–+————–+———–+————+————-+———————————+

| ID                                   | Name         | Status    | Task State | Power State | Networks                        |

+————————————–+————–+———–+————+————-+———————————+

| b661a130-fdb7-41eb-aba5-588924634c9d | CirrOS302    | ACTIVE    | –          | Running     | private=10.0.0.31, 192.168.1.63 |

| 5d1dbb9d-7bef-4e51-be8d-4270ddd3d4cc | CirrOS351    | ACTIVE    | –          | Running     | private=10.0.0.39, 192.168.1.66 |

| ef73a897-8700-4999-ab25-49f25b896f34 | CirrOS370    | ACTIVE    | –          | Running     | private=10.0.0.40, 192.168.1.69 |

| 02718e21-edb9-4b59-8bb7-21e0290650fd | CirrOS390    | SUSPENDED | –          | Shutdown    | private=10.0.0.41, 192.168.1.67 |                           |

| 6992e37c-48c7-49b6-b6fc-8e35fe240704 | UbuntuSRV350 | SUSPENDED | –          | Shutdown    | private=10.0.0.38, 192.168.1.62 |

| 9953ed52-b666-4fe1-ac35-23621122af5a | VF20RS02     | ACTIVE    | –          | Running     | private=10.0.0.43, 192.168.1.71 |

+————————————–+————–+———–+————+————-+———————————+

[root@icehouse1 ~(keystone_admin)]# nova-manage service list
Binary           Host                                 Zone             Status     State Updated_At
nova-consoleauth icehouse1.localdomain                internal         enabled    :-)   2014-05-27 10:16:15
nova-scheduler   icehouse1.localdomain                internal         enabled    :-)   2014-05-27 10:16:15
nova-conductor   icehouse1.localdomain                internal         enabled    :-)   2014-05-27 10:16:14
nova-compute     icehouse1.localdomain                nova             enabled    :-)   2014-05-27 10:16:18
nova-cert        icehouse1.localdomain                internal         enabled    :-)   2014-05-27 10:16:15
nova-compute     icehouse2.localdomain                nova             enabled    :-)   2014-05-27 10:16:12

[root@icehouse1 ~(keystone_admin)]# neutron agent-list

+--------------------------------------+--------------------+-----------------------+-------+----------------+
| id | agent_type | host | alive | admin_state_up |
+--------------------------------------+--------------------+-----------------------+-------+----------------+
| 6775fac7-d594-4272-8447-f136b54247e8 | L3 agent | icehouse1.localdomain |:-) | True |
| 77fdc8a9-0d77-4f53-9cdd-1c732f0cfdb1 | Metadata agent | icehouse1.localdomain |:-) | True |
| 8f70b2c4-c65b-4d0b-9808-ba494c764d99 | Open vSwitch agent | icehouse1.localdomain |:-) | True |
| a86f1272-2afb-43b5-a7e6-e5fc6df565b5 | Open vSwitch agent | icehouse2.localdomain |:-) | True |
| e72bdcd5-3dd1-4994-860f-e21d4a58dd4c | DHCP agent | icehouse1.localdomain |:-) | True |
+--------------------------------------+--------------------+-----------------------+-------+----------------+

 Windows 2012 evaluation Server running on Compute Node :-

Setup Horizon Dashboard-2014.1 on F20 Havana Controller (firefox upgrade up to 29.0-5)

May 3, 2014

It’s hard to know what the right thing is. Once you know, it’s hard not to do it.
                       Harry Fertig (Kingsley). The Confession (1999 film)

The recent firefox upgrade to 29.0-5 on Fedora 20 causes login to the Dashboard console to fail on a Havana F20 Controller set up per VNC Console in Dashboard on Two Node Neutron GRE+OVS F20 Cluster.

The procedure below actually backports the F21 packages python-django-horizon-2014.1-1, python-django-openstack-auth-1.1.5-1 and python-pbr-0.7.0-2: the corresponding SRC.RPMs are installed manually and the rpmbuild utility is invoked to produce F20 packages. The hard thing to know is which packages to backport. I had to perform an AIO RDO IceHouse setup via packstack on a specially created VM and run `rpm -qa | grep django` to obtain the required list. Officially RDO Havana comes with F20, and it appears that the most recent firefox upgrade breaks Horizon Dashboard, which is supposed to be maintained as a supported component of F20.

Download from Net :-

[boris@dfw02 Downloads]$ ls -l *.src.rpm

-rw-r--r--. 1 boris boris 4252988 May  3 08:21 python-django-horizon-2014.1-1.fc21.src.rpm

-rw-r--r--. 1 boris boris   47126 May  3 08:37 python-django-openstack-auth-1.1.5-1.fc21.src.rpm

-rw-r--r--. 1 boris boris   83761 May  3 08:48 python-pbr-0.7.0-2.fc21.src.rpm

Install src.rpms and build

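The src.rpm install step itself looks roughly like this (a sketch; yum-builddep comes from yum-utils and is assumed to be installed, it pulls the build dependencies declared by the spec files):

[boris@dfw02 Downloads]$ rpm -ivh python-django-horizon-2014.1-1.fc21.src.rpm \
    python-django-openstack-auth-1.1.5-1.fc21.src.rpm python-pbr-0.7.0-2.fc21.src.rpm
[boris@dfw02 Downloads]$ cd ~/rpmbuild/SPECS
[boris@dfw02 SPECS]$ sudo yum-builddep python-django-horizon.spec
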
[boris@dfw02 SPECS]$ rpmbuild -bb python-django-openstack-auth.spec

[boris@dfw02 SPECS]$ rpmbuild -bb python-pbr.spec

Then install rpms as preventive step before core package build

[boris@dfw02 noarch]$sudo yum install python-django-openstack-auth-1.1.5-1.fc20.noarch.rpm

[boris@dfw02 noarch]$sudo yum install  python-pbr-0.7.0-2.fc20.noarch.rpm

[boris@dfw02 noarch]$ cd –

/home/boris/rpmbuild/SPECS

Core build to succeed :-

[boris@dfw02 SPECS]$ rpmbuild -bb python-django-horizon.spec

[boris@dfw02 SPECS]$ cd ../RPMS/n*

[boris@dfw02 noarch]$ ls -l

total 6616

-rw-rw-r--. 1 boris boris 3293068 May  3 09:01 openstack-dashboard-2014.1-1.fc20.noarch.rpm

-rw-rw-r--. 1 boris boris  732020 May  3 09:01 openstack-dashboard-theme-2014.1-1.fc20.noarch.rpm

-rw-rw-r--. 1 boris boris  160868 May  3 08:51 python3-pbr-0.7.0-2.fc20.noarch.rpm

-rw-rw-r--. 1 boris boris  823332 May  3 09:01 python-django-horizon-2014.1-1.fc20.noarch.rpm

-rw-rw-r--. 1 boris boris 1548752 May  3 09:01 python-django-horizon-doc-2014.1-1.fc20.noarch.rpm

-rw-rw-r--. 1 boris boris   43944 May  3 08:39 python-django-openstack-auth-1.1.5-1.fc20.noarch.rpm

-rw-rw-r--. 1 boris boris  158204 May  3 08:51 python-pbr-0.7.0-2.fc20.noarch.rpm

[boris@dfw02 noarch]$ ls *.rpm > inst

[boris@dfw02 noarch]$ vi inst

[boris@dfw02 noarch]$ chmod u+x inst

[boris@dfw02 noarch]$ ./inst

[sudo] password for boris:

Loaded plugins: langpacks, priorities, refresh-packagekit

Examining openstack-dashboard-2014.1-1.fc20.noarch.rpm: openstack-dashboard-2014.1-1.fc20.noarch

Marking openstack-dashboard-2014.1-1.fc20.noarch.rpm as an update to openstack-dashboard-2013.2.3-1.fc20.noarch

Examining openstack-dashboard-theme-2014.1-1.fc20.noarch.rpm: openstack-dashboard-theme-2014.1-1.fc20.noarch

Marking openstack-dashboard-theme-2014.1-1.fc20.noarch.rpm to be installed

Examining python-django-horizon-2014.1-1.fc20.noarch.rpm: python-django-horizon-2014.1-1.fc20.noarch

Marking python-django-horizon-2014.1-1.fc20.noarch.rpm as an update to python-django-horizon-2013.2.3-1.fc20.noarch

Examining python-django-horizon-doc-2014.1-1.fc20.noarch.rpm: python-django-horizon-doc-2014.1-1.fc20.noarch

Marking python-django-horizon-doc-2014.1-1.fc20.noarch.rpm to be installed

Resolving Dependencies

--> Running transaction check

---> Package openstack-dashboard.noarch 0:2013.2.3-1.fc20 will be updated

---> Package openstack-dashboard.noarch 0:2014.1-1.fc20 will be an update

---> Package openstack-dashboard-theme.noarch 0:2014.1-1.fc20 will be installed

---> Package python-django-horizon.noarch 0:2013.2.3-1.fc20 will be updated

---> Package python-django-horizon.noarch 0:2014.1-1.fc20 will be an update

---> Package python-django-horizon-doc.noarch 0:2014.1-1.fc20 will be installed

--> Finished Dependency Resolution

Dependencies Resolved

=========================================================================================================

Package                   Arch   Version          Repository                                       Size

=========================================================================================================

Installing:

openstack-dashboard-theme noarch 2014.1-1.fc20    /openstack-dashboard-theme-2014.1-1.fc20.noarch 1.5 M

python-django-horizon-doc noarch 2014.1-1.fc20    /python-django-horizon-doc-2014.1-1.fc20.noarch  24 M

Updating:

openstack-dashboard       noarch 2014.1-1.fc20    /openstack-dashboard-2014.1-1.fc20.noarch        14 M

python-django-horizon     noarch 2014.1-1.fc20    /python-django-horizon-2014.1-1.fc20.noarch     3.3 M

Transaction Summary

=========================================================================================================

Install  2 Packages

Upgrade  2 Packages

 

Total size: 42 M

Is this ok [y/d/N]: y

Downloading packages:

Running transaction check

Running transaction test

Transaction test succeeded

Running transaction

Updating   : python-django-horizon-2014.1-1.fc20.noarch                                            1/6

Updating   : openstack-dashboard-2014.1-1.fc20.noarch                                              2/6

warning: /etc/openstack-dashboard/local_settings created as /etc/openstack-dashboard/local_settings.rpmnew

Installing : openstack-dashboard-theme-2014.1-1.fc20.noarch                                        3/6

Installing : python-django-horizon-doc-2014.1-1.fc20.noarch                                        4/6

Cleanup    : openstack-dashboard-2013.2.3-1.fc20.noarch                                            5/6

Cleanup    : python-django-horizon-2013.2.3-1.fc20.noarch                                          6/6

Verifying  : openstack-dashboard-theme-2014.1-1.fc20.noarch                                        1/6

Verifying  : openstack-dashboard-2014.1-1.fc20.noarch                                              2/6

Verifying  : python-django-horizon-doc-2014.1-1.fc20.noarch                                        3/6

Verifying  : python-django-horizon-2014.1-1.fc20.noarch                                            4/6

Verifying  : openstack-dashboard-2013.2.3-1.fc20.noarch                                            5/6

Verifying  : python-django-horizon-2013.2.3-1.fc20.noarch                                          6/6

Installed:

openstack-dashboard-theme.noarch 0:2014.1-1.fc20    python-django-horizon-doc.noarch 0:2014.1-1.fc20

Updated:

openstack-dashboard.noarch 0:2014.1-1.fc20         python-django-horizon.noarch 0:2014.1-1.fc20

Complete!

[root@dfw02 ~(keystone_admin)]$ rpm -qa | grep django

python-django-horizon-doc-2014.1-1.fc20.noarch

python-django-horizon-2014.1-1.fc20.noarch

python-django-1.6.3-1.fc20.noarch

python-django-nose-1.2-1.fc20.noarch

python-django-bash-completion-1.6.3-1.fc20.noarch

python-django-openstack-auth-1.1.5-1.fc20.noarch

python-django-appconf-0.6-2.fc20.noarch

python-django-compressor-1.3-2.fc20.noarch

Admin’s reports regarding Cluster status

 

 

 

     Ubuntu Trusty Server VM running


RDO Havana Neutron Namespaces Troubleshooting for OVS&VLAN(GRE) Config

April 14, 2014

The  OpenStack Networking components are deployed on the Controller, Compute, and Network nodes in the following configuration:

In case of Two Node Development Cluster :-

Controller node: hosts the Neutron server service, which provides the networking API and communicates with and tracks the agents.

DHCP agent: spawns and controls dnsmasq processes to provide leases to instances. This agent also spawns neutron-ns-metadata-proxy processes as part of the metadata system.

Metadata agent: provides a metadata proxy to the nova-api-metadata service. The neutron-ns-metadata-proxy processes direct traffic that they receive in their namespaces to this proxy.

OVS plugin agent: Controls OVS network bridges and routes between them via patch, tunnel, or tap without requiring an external OpenFlow controller.

L3 agent: performs L3 forwarding and NAT.

In case of Three Node or more ( several Compute Nodes) :-

A separate box hosts the Neutron server and all of the services mentioned above.

Compute node: has an OVS plugin agent and openstack-nova-compute service.

Namespaces (View  Identifying and Troubleshooting Neutron Namespaces )

For each network you create, the Network node (or Controller node, if combined) will have a unique network namespace (netns) created by the DHCP and Metadata agents. The netns hosts an interface and IP addresses for dnsmasq and the neutron-ns-metadata-proxy. You can view the namespaces with the `ip netns list` command and interact with a namespace via `ip netns exec <namespace> <command>`.
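
A minimal illustration of that syntax (the network ID below is just a placeholder, not a real value from this cluster):

# List all namespaces present on the node
ip netns list

# Run an arbitrary command inside a namespace, e.g. show its interfaces
ip netns exec qdhcp-<network-id> ip addr show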

Every l2-agent/private network has an associated dhcp namespace and

Every l3-agent/router has an associated router namespace.

Network namespace starts with dhcp- followed by the ID of the network.

Router namespace starts with qrouter- followed by the ID of the router.

Source admin credentials and get network list

[root@dfw02 ~(keystone_admin)]$ neutron net-list

+————————————–+——+—————————————————–+

| id                                   | name | subnets                                             |

+————————————–+——+—————————————————–+

| 1eea88bb-4952-4aa4-9148-18b61c22d5b7 | int  | fa930cea-3d51-4cbe-a305-579f12aa53c0 10.0.0.0/24    |

| 426bb226-0ab9-440d-ba14-05634a17fb2b | int1 | 9e0d457b-c4c4-45cf-84e2-4ac7550f3b06 40.0.0.0/24    |

| 780ce2f3-2e6e-4881-bbac-857813f9a8e0 | ext  | f30e5a16-a055-4388-a6ea-91ee142efc3d 192.168.1.0/24 |

+————————————–+——+—————————————————–+

Using the `ip netns list` command, run the following to get the tenants’ qdhcp-* namespace names:

[root@dfw02 ~(keystone_admin)]$ ip netns list | grep 1eea88bb-4952-4aa4-9148-18b61c22d5b7

qdhcp-1eea88bb-4952-4aa4-9148-18b61c22d5b7

[root@dfw02 ~(keystone_admin)]$ ip netns list | grep 426bb226-0ab9-440d-ba14-05634a17fb2b

qdhcp-426bb226-0ab9-440d-ba14-05634a17fb2b

Check a tenant’s namespace by getting its IP and pinging that IP from inside the namespace:

[root@dfw02 ~(keystone_admin)]$ ip netns exec qdhcp-426bb226-0ab9-440d-ba14-05634a17fb2b ifconfig

lo: flags=73  mtu 65536
inet 127.0.0.1  netmask 255.0.0.0
inet6 ::1  prefixlen 128  scopeid 0x10
loop  txqueuelen 0  (Local Loopback)
RX packets 35  bytes 4416 (4.3 KiB)
RX errors 0  dropped 0  overruns 0  frame 0
TX packets 35  bytes 4416 (4.3 KiB)
TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0

ns-343b0090-24: flags=4163  mtu 1500
inet 40.0.0.3  netmask 255.255.255.0  broadcast 40.0.0.255

inet6 fe80::f816:3eff:fe01:8b55  prefixlen 64  scopeid 0x20
ether fa:16:3e:01:8b:55  txqueuelen 1000  (Ethernet)
RX packets 3251  bytes 386284 (377.2 KiB)
RX errors 0  dropped 0  overruns 0  frame 0
TX packets 1774  bytes 344082 (336.0 KiB)
TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0

[root@dfw02 ~(keystone_admin)]$ ip netns exec qdhcp-426bb226-0ab9-440d-ba14-05634a17fb2b ping  -c 3 40.0.0.3
PING 40.0.0.3 (40.0.0.3) 56(84) bytes of data.
64 bytes from 40.0.0.3: icmp_seq=1 ttl=64 time=0.041 ms
64 bytes from 40.0.0.3: icmp_seq=2 ttl=64 time=0.035 ms
64 bytes from 40.0.0.3: icmp_seq=3 ttl=64 time=0.034 ms

— 40.0.0.3 ping statistics —
3 packets transmitted, 3 received, 0% packet loss, time 1999ms
rtt min/avg/max/mdev = 0.034/0.036/0.041/0.007 ms

Now verify that a separate dnsmasq process is running to serve each tenant’s namespace:

[root@dfw02 ~(keystone_admin)]$ ps -aux | grep dhcp

neutron   2320  0.3  0.3 263908 30696 ?        Ss   08:18   2:14 /usr/bin/python /usr/bin/neutron-dhcp-agent –config-file /usr/share/neutron/neutron-dist.conf –config-file /etc/neutron/neutron.conf –config-file /etc/neutron/dhcp_agent.ini –log-file /var/log/neutron/dhcp-agent.log

nobody    3529  0.0  0.0  15532   832 ?        S    08:20   0:00 dnsmasq –no-hosts –no-resolv –strict-order –bind-interfaces –interface=ns-40dd712c-e4 –except-interface=lo –pid-file=/var/lib/neutron/dhcp/1eea88bb-4952-4aa4-9148-18b61c22d5b7/pid –dhcp-hostsfile=/var/lib/neutron/dhcp/1eea88bb-4952-4aa4-9148-18b61c22d5b7/host –dhcp-optsfile=/var/lib/neutron/dhcp/1eea88bb-4952-4aa4-9148-18b61c22d5b7/opts –leasefile-ro –dhcp-range=set:tag0,10.0.0.0,static,120s –dhcp-lease-max=256 –conf-file=/etc/neutron/dnsmasq.conf –domain=openstacklocal

nobody    3530  0.0  0.0  15532   944 ?        S    08:20   0:00 dnsmasq –no-hosts –no-resolv –strict-order –bind-interfaces –interface=ns-343b0090-24 –except-interface=lo –pid-file=/var/lib/neutron/dhcp/426bb226-0ab9-440d-ba14-05634a17fb2b/pid –dhcp-hostsfile=/var/lib/neutron/dhcp/426bb226-0ab9-440d-ba14-05634a17fb2b/host –dhcp-optsfile=/var/lib/neutron/dhcp/426bb226-0ab9-440d-ba14-05634a17fb2b/opts –leasefile-ro –dhcp-range=set:tag0,40.0.0.0,static,120s –dhcp-lease-max=256 –conf-file=/etc/neutron/dnsmasq.conf –domain=openstacklocal
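
As a quick sanity check (only a sketch), the number of dnsmasq processes should match the number of qdhcp- namespaces:

# Count qdhcp- namespaces and dnsmasq processes; the two numbers should match
ip netns list | grep -c qdhcp-
pgrep -c dnsmasq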

List interfaces inside dhcp namespace

[root@dfw02 ~(keystone_admin)]$ ip netns exec qdhcp-426bb226-0ab9-440d-ba14-05634a17fb2b ip a

1: lo: mtu 65536 qdisc noqueue state UNKNOWN group default
link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
inet 127.0.0.1/8 scope host lo
valid_lft forever preferred_lft forever
inet6 ::1/128 scope host
valid_lft forever preferred_lft forever

2: ns-343b0090-24: mtu 1500 qdisc pfifo_fast state UP group default qlen 1000
link/ether fa:16:3e:01:8b:55 brd ff:ff:ff:ff:ff:ff
inet 40.0.0.3/24 brd 40.0.0.255 scope global ns-343b0090-24
valid_lft forever preferred_lft forever
inet6 fe80::f816:3eff:fe01:8b55/64 scope link
valid_lft forever preferred_lft forever

(A)( From the instance to a router)

Check routing inside dhcp namespace

[root@dfw02 ~(keystone_admin)]$ ip netns exec qdhcp-426bb226-0ab9-440d-ba14-05634a17fb2b  ip r

default via 40.0.0.1 dev ns-343b0090-24

40.0.0.0/24 dev ns-343b0090-24  proto kernel  scope link  src 40.0.0.3

Check routing inside the router namespace

[root@dfw02 ~(keystone_admin)]$ ip netns exec qrouter-86b3008c-297f-4301-9bdc-766b839785f1 ip r

default via 192.168.1.1 dev qg-9c090153-08

40.0.0.0/24 dev qr-e031db6b-d0  proto kernel  scope link  src 40.0.0.1

192.168.1.0/24 dev qg-9c090153-08  proto kernel  scope link  src 192.168.1.114

Get the routers list, then grep the `ip netns list` output by router ID (the same way we did with network IDs) to obtain the router namespaces:

[root@dfw02 ~(keystone_admin)]$ neutron router-list

+————————————–+———+—————————————————————————–+

| id                                   | name    | external_gateway_info                                                       |

+————————————–+———+—————————————————————————–+

| 86b3008c-297f-4301-9bdc-766b839785f1 | router2 | {“network_id”: “780ce2f3-2e6e-4881-bbac-857813f9a8e0”, “enable_snat”: true} |

| bf360d81-79fb-4636-8241-0a843f228fc8 | router1 | {“network_id”: “780ce2f3-2e6e-4881-bbac-857813f9a8e0”, “enable_snat”: true} |

+————————————–+———+—————————————————————————–+

Now get qrouter-* namespaces via `ip netns list` command :-

[root@dfw02 ~(keystone_admin)]$ ip netns list | grep 86b3008c-297f-4301-9bdc-766b839785f1
qrouter-86b3008c-297f-4301-9bdc-766b839785f1

[root@dfw02 ~(keystone_admin)]$ ip netns list | grep  bf360d81-79fb-4636-8241-0a843f228fc8
qrouter-bf360d81-79fb-4636-8241-0a843f228fc8

Now verify L3 forwarding & NAT via `iptables -L -t nat` inside the router namespace, and check that port 80 traffic to 169.254.169.254 is redirected to the RDO Havana Controller’s host (which, in my configuration, runs the Neutron server service along with all agents) at metadata port 8700.

[root@dfw02 ~(keystone_admin)]$ ip netns exec qrouter-86b3008c-297f-4301-9bdc-766b839785f1 iptables -L -t nat

Chain PREROUTING (policy ACCEPT)

target     prot opt source               destination

neutron-l3-agent-PREROUTING  all  —  anywhere             anywhere

Chain INPUT (policy ACCEPT)

target     prot opt source               destination

Chain OUTPUT (policy ACCEPT)

target     prot opt source               destination

neutron-l3-agent-OUTPUT  all  —  anywhere             anywhere

Chain POSTROUTING (policy ACCEPT)

target     prot opt source               destination

neutron-l3-agent-POSTROUTING  all  —  anywhere             anywhere

neutron-postrouting-bottom  all  —  anywhere             anywhere

Chain neutron-l3-agent-OUTPUT (1 references)

target     prot opt source               destination

DNAT       all  —  anywhere             dfw02.localdomain    to:40.0.0.2

DNAT       all  —  anywhere             dfw02.localdomain    to:40.0.0.6

DNAT       all  —  anywhere             dfw02.localdomain    to:40.0.0.5

Chain neutron-l3-agent-POSTROUTING (1 references)

target     prot opt source               destination

ACCEPT     all  —  anywhere             anywhere             ! ctstate DNAT

Chain neutron-l3-agent-PREROUTING (1 references)

target     prot opt source               destination

REDIRECT   tcp  —  anywhere             169.254.169.254      tcp dpt:http redir ports 8700

DNAT       all  —  anywhere             dfw02.localdomain    to:40.0.0.2

DNAT       all  —  anywhere             dfw02.localdomain    to:40.0.0.6

DNAT       all  —  anywhere             dfw02.localdomain    to:40.0.0.5

Chain neutron-l3-agent-float-snat (1 references)

target     prot opt source               destination

SNAT       all  —  40.0.0.2             anywhere             to:192.168.1.107

SNAT       all  —  40.0.0.6             anywhere             to:192.168.1.104

SNAT       all  —  40.0.0.5             anywhere             to:192.168.1.110

Chain neutron-l3-agent-snat (1 references)

target     prot opt source               destination

neutron-l3-agent-float-snat  all  —  anywhere             anywhere

SNAT       all  —  40.0.0.0/24          anywhere             to:192.168.1.114

Chain neutron-postrouting-bottom (1 references)

target     prot opt source               destination

neutron-l3-agent-snat  all  —  anywhere             anywhere

[root@dfw02 ~(keystone_admin)]$ ip netns exec qrouter-bf360d81-79fb-4636-8241-0a843f228fc8  iptables -L -t nat

Chain PREROUTING (policy ACCEPT)

target     prot opt source               destination

neutron-l3-agent-PREROUTING  all  —  anywhere             anywhere

Chain INPUT (policy ACCEPT)

target     prot opt source               destination

Chain OUTPUT (policy ACCEPT)

target     prot opt source               destination

neutron-l3-agent-OUTPUT  all  —  anywhere             anywhere

Chain POSTROUTING (policy ACCEPT)

target     prot opt source               destination

neutron-l3-agent-POSTROUTING  all  —  anywhere             anywhere

neutron-postrouting-bottom  all  —  anywhere             anywhere

Chain neutron-l3-agent-OUTPUT (1 references)

target     prot opt source               destination

DNAT       all  —  anywhere             dfw02.localdomain    to:10.0.0.2

Chain neutron-l3-agent-POSTROUTING (1 references)

target     prot opt source               destination

ACCEPT     all  —  anywhere             anywhere             ! ctstate DNAT

Chain neutron-l3-agent-PREROUTING (1 references)

target     prot opt source               destination

REDIRECT   tcp  —  anywhere             169.254.169.254      tcp dpt:http redir ports 8700

DNAT       all  —  anywhere             dfw02.localdomain    to:10.0.0.2

Chain neutron-l3-agent-float-snat (1 references)

target     prot opt source               destination

SNAT       all  —  10.0.0.2             anywhere             to:192.168.1.103

Chain neutron-l3-agent-snat (1 references)

target     prot opt source               destination

neutron-l3-agent-float-snat  all  —  anywhere             anywhere

SNAT       all  —  10.0.0.0/24          anywhere             to:192.168.1.100

Chain neutron-postrouting-bottom (1 references)

target     prot opt source               destination

neutron-l3-agent-snat  all  —  anywhere             anywhere

(B) ( through a NAT rule in the router namespace)

Check the NAT table

[root@dfw02 ~(keystone_admin)]$ ip netns exec qrouter-86b3008c-297f-4301-9bdc-766b839785f1 iptables -t nat -S

-P PREROUTING ACCEPT

-P INPUT ACCEPT

-P OUTPUT ACCEPT

-P POSTROUTING ACCEPT

-N neutron-l3-agent-OUTPUT

-N neutron-l3-agent-POSTROUTING

-N neutron-l3-agent-PREROUTING

-N neutron-l3-agent-float-snat

-N neutron-l3-agent-snat

-N neutron-postrouting-bottom

-A PREROUTING -j neutron-l3-agent-PREROUTING

-A OUTPUT -j neutron-l3-agent-OUTPUT

-A POSTROUTING -j neutron-l3-agent-POSTROUTING

-A POSTROUTING -j neutron-postrouting-bottom

-A neutron-l3-agent-OUTPUT -d 192.168.1.112/32 -j DNAT –to-destination 40.0.0.2

-A neutron-l3-agent-OUTPUT -d 192.168.1.113/32 -j DNAT –to-destination 40.0.0.4

-A neutron-l3-agent-OUTPUT -d 192.168.1.104/32 -j DNAT –to-destination 40.0.0.6

-A neutron-l3-agent-OUTPUT -d 192.168.1.110/32 -j DNAT –to-destination 40.0.0.5

-A neutron-l3-agent-POSTROUTING ! -i qg-9c090153-08 ! -o qg-9c090153-08 -m conntrack ! –ctstate DNAT -j ACCEPT

-A neutron-l3-agent-PREROUTING -d 169.254.169.254/32 -p tcp -m tcp –dport 80 -j REDIRECT –to-ports 8700

-A neutron-l3-agent-PREROUTING -d 192.168.1.112/32 -j DNAT –to-destination 40.0.0.2

-A neutron-l3-agent-PREROUTING -d 192.168.1.113/32 -j DNAT –to-destination 40.0.0.4

-A neutron-l3-agent-PREROUTING -d 192.168.1.104/32 -j DNAT –to-destination 40.0.0.6

-A neutron-l3-agent-PREROUTING -d 192.168.1.110/32 -j DNAT –to-destination 40.0.0.5

-A neutron-l3-agent-float-snat -s 40.0.0.2/32 -j SNAT –to-source 192.168.1.112

-A neutron-l3-agent-float-snat -s 40.0.0.4/32 -j SNAT –to-source 192.168.1.113

-A neutron-l3-agent-float-snat -s 40.0.0.6/32 -j SNAT –to-source 192.168.1.104

-A neutron-l3-agent-float-snat -s 40.0.0.5/32 -j SNAT –to-source 192.168.1.110

-A neutron-l3-agent-snat -j neutron-l3-agent-float-snat

-A neutron-l3-agent-snat -s 40.0.0.0/24 -j SNAT –to-source 192.168.1.114

-A neutron-postrouting-bottom -j neutron-l3-agent-snat

[root@dfw02 ~(keystone_admin)]$ ip netns exec qrouter-bf360d81-79fb-4636-8241-0a843f228fc8 iptables -t nat -S

-P PREROUTING ACCEPT

-P INPUT ACCEPT

-P OUTPUT ACCEPT

-P POSTROUTING ACCEPT

-N neutron-l3-agent-OUTPUT

-N neutron-l3-agent-POSTROUTING

-N neutron-l3-agent-PREROUTING

-N neutron-l3-agent-float-snat

-N neutron-l3-agent-snat

-N neutron-postrouting-bottom

-A PREROUTING -j neutron-l3-agent-PREROUTING

-A OUTPUT -j neutron-l3-agent-OUTPUT

-A POSTROUTING -j neutron-l3-agent-POSTROUTING

-A POSTROUTING -j neutron-postrouting-bottom

-A neutron-l3-agent-OUTPUT -d 192.168.1.103/32 -j DNAT –to-destination 10.0.0.2

-A neutron-l3-agent-POSTROUTING ! -i qg-54e34740-87 ! -o qg-54e34740-87 -m conntrack ! –ctstate DNAT -j ACCEPT

-A neutron-l3-agent-PREROUTING -d 169.254.169.254/32 -p tcp -m tcp –dport 80 -j REDIRECT –to-ports 8700

-A neutron-l3-agent-PREROUTING -d 192.168.1.103/32 -j DNAT –to-destination 10.0.0.2

-A neutron-l3-agent-float-snat -s 10.0.0.2/32 -j SNAT –to-source 192.168.1.103

-A neutron-l3-agent-snat -j neutron-l3-agent-float-snat

-A neutron-l3-agent-snat -s 10.0.0.0/24 -j SNAT –to-source 192.168.1.100

-A neutron-postrouting-bottom -j neutron-l3-agent-snat

Ping from inside the router namespace to verify external network connectivity:

[root@dfw02 ~(keystone_admin)]$ ip netns exec qrouter-86b3008c-297f-4301-9bdc-766b839785f1 ping 8.8.8.8

PING 8.8.8.8 (8.8.8.8) 56(84) bytes of data.

64 bytes from 8.8.8.8: icmp_seq=1 ttl=47 time=42.6 ms

64 bytes from 8.8.8.8: icmp_seq=2 ttl=47 time=40.8 ms

64 bytes from 8.8.8.8: icmp_seq=3 ttl=47 time=41.6 ms

64 bytes from 8.8.8.8: icmp_seq=4 ttl=47 time=41.0 ms

Verify the service listening on port 8700 inside the router namespaces; the output looks like this :-

(C) (to an instance of the neutron-ns-metadata-proxy)

[root@dfw02 ~(keystone_admin)]$ ip netns exec qrouter-86b3008c-297f-4301-9bdc-766b839785f1  netstat -lntp | grep 8700

tcp        0      0 0.0.0.0:8700            0.0.0.0:*               LISTEN      4946/python

Check process with pid 4946

[root@dfw02 ~(keystone_admin)]$ ps -ef | grep 4946

root      4946     1  0 08:58 ?        00:00:00 /usr/bin/python /bin/neutron-ns-metadata-proxy –pid_file=/var/lib/neutron/external/pids/86b3008c-297f-4301-9bdc-766b839785f1.pid –metadata_proxy_socket=/var/lib/neutron/metadata_proxy –router_id=86b3008c-297f-4301-9bdc-766b839785f1 –state_path=/var/lib/neutron –metadata_port=8700 –verbose –log-file=neutron-ns-metadata-proxy-86b3008c-297f-4301-9bdc-766b839785f1.log –log-dir=/var/log/neutron

root     10396 11489  0 16:33 pts/3    00:00:00 grep –color=auto 4946

[root@dfw02 ~(keystone_admin)]$ ip netns exec qrouter-bf360d81-79fb-4636-8241-0a843f228fc8  netstat -lntp | grep 8700

tcp        0      0 0.0.0.0:8700            0.0.0.0:*               LISTEN      4746/python

Check process with pid 4746

[root@dfw02 ~(keystone_admin)]$ ps -ef | grep 4746

root      4746     1  0 08:58 ?        00:00:00 /usr/bin/python /bin/neutron-ns-metadata-proxy –pid_file=/var/lib/neutron/external/pids/bf360d81-79fb-4636-8241-0a843f228fc8.pid –metadata_proxy_socket=/var/lib/neutron/metadata_proxy –router_id=bf360d81-79fb-4636-8241-0a843f228fc8 –state_path=/var/lib/neutron –metadata_port=8700 –verbose –log-file=neutron-ns-metadata-proxy-bf360d81-79fb-4636-8241-0a843f228fc8.log –log-dir=/var/log/neutron

Now run the following commands inside the router namespaces to check the status of the neutron metadata port :-

[root@dfw02 ~(keystone_admin)]$ ip netns exec qrouter-86b3008c-297f-4301-9bdc-766b839785f1  netstat -na

Active Internet connections (servers and established)

Proto Recv-Q Send-Q Local Address           Foreign Address         State

tcp        0      0 0.0.0.0:8700            0.0.0.0:*               LISTEN

Active UNIX domain sockets (servers and established)

Proto RefCnt Flags       Type       State         I-Node   Path

[root@dfw02 ~(keystone_admin)]$ ip netns exec qrouter-bf360d81-79fb-4636-8241-0a843f228fc8  netstat -na

Active Internet connections (servers and established)

Proto Recv-Q Send-Q Local Address           Foreign Address         State

tcp        0      0 0.0.0.0:8700            0.0.0.0:*               LISTEN

Active UNIX domain sockets (servers and established)

Proto RefCnt Flags       Type       State         I-Node   Path

Outside the router namespaces it looks like this:

(D) (to the actual Nova metadata service)

[root@dfw02 ~(keystone_admin)]$ netstat -lntp | grep 8700

tcp        0      0 0.0.0.0:8700            0.0.0.0:*               LISTEN      2746/python

Check process with pid  2746

[root@dfw02 ~(keystone_admin)]$ ps -ef | grep 2746

nova      2746     1  0 08:57 ?        00:02:31 /usr/bin/python /usr/bin/nova-api –logfile /var/log/nova/api.log

nova      2830  2746  0 08:57 ?        00:00:00 /usr/bin/python /usr/bin/nova-api –logfile /var/log/nova/api.log

nova      2851  2746  0 08:57 ?        00:00:10 /usr/bin/python /usr/bin/nova-api –logfile /var/log/nova/api.log

nova      2858  2746  0 08:57 ?        00:00:02 /usr/bin/python /usr/bin/nova-api –logfile /var/log/nova/api.log

root      9976 11489  0 16:31 pts/3    00:00:00 grep –color=auto 2746

So, we have actually verified the statement from Direct access to Nova metadata:

in an environment running Neutron, a request from your instance must traverse a number of steps:

1. From the instance to a router, (A)

2. Through a NAT rule in the router namespace,  (B)

3. To an instance of the neutron-ns-metadata-proxy, (C)

4. To the actual Nova metadata service (D)
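
A single request from inside a guest exercises all four steps at once; this minimal end-to-end check is just a sketch:

# Run inside any instance; getting the instance id back proves steps (A)-(D) work
curl http://169.254.169.254/latest/meta-data/instance-id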

References

1. OpenStack Networking concepts


HowTo access metadata from RDO Havana Instance on Fedora 20

April 5, 2014

Per  Direct_access _to_Nova_metadata

In an environment running Neutron, a request from your instance must traverse a number of steps:

1. From the instance to a router,
2. Through a NAT rule in the router namespace,
3. To an instance of the neutron-ns-metadata-proxy,
4. To the actual Nova metadata service

   Reproducing Direct_access_to_Nova_metadata I was able to get only the list of available EC2 metadata keys, but not their values. However, the major concern is getting the values of the metadata obtained in the post Direct_access_to_Nova_metadata and also those at the /openstack location. The latter seem to me no less important than the ones present in the EC2 list, and they are also not provided by that list.

The commands run below are meant to verify that the Nova & Neutron setup has been performed successfully; otherwise, passing the four steps 1,2,3,4 will fail and force you to analyse the corresponding log files (view References). It doesn’t matter whether you set up the cloud environment manually or via RDO packstack.

Run on Controller Node :-

[root@dallas1 ~(keystone_admin)]$ ip netns list

qrouter-cb80b040-f13f-4a67-a39e-353b1c873a0d
qdhcp-166d9651-d299-47df-a5a1-b368e87b612f

Check the routing in the cloud controller’s router namespace; it should show port 80 traffic for 169.254.169.254 being redirected to the host at port 8700:

[root@dallas1 ~(keystone_admin)]$ ip netns exec qrouter-cb80b040-f13f-4a67-a39e-353b1c873a0d iptables -L -t nat | grep 169

REDIRECT   tcp  —  anywhere             169.254.169.254      tcp dpt:http redir ports  8700

Check routing table inside the router namespace:

[root@dallas1 ~(keystone_admin)]$ ip netns exec qrouter-cb80b040-f13f-4a67-a39e-353b1c873a0d ip r

default via 192.168.1.1 dev qg-8fbb6202-3d
10.0.0.0/24 dev qr-2dd1ba70-34  proto kernel  scope link  src 10.0.0.1
192.168.1.0/24 dev qg-8fbb6202-3d  proto kernel  scope link  src 192.168.1.100

[root@dallas1 ~(keystone_admin)]$ ip netns exec qrouter-cb80b040-f13f-4a67-a39e-353b1c873a0d netstat -na

Active Internet connections (servers and established)

Proto Recv-Q Send-Q Local Address           Foreign Address         State
tcp        0      0 0.0.0.0:8700            0.0.0.0:*               LISTEN   
Active UNIX domain sockets (servers and established)
Proto RefCnt Flags       Type       State         I-Node   Path

[root@dallas1 ~(keystone_admin)]$ ip netns exec qdhcp-166d9651-d299-47df-a5a1-b368e87b612f netstat -na

Active Internet connections (servers and established)
Proto Recv-Q Send-Q Local Address           Foreign Address         State
tcp        0      0 10.0.0.3:53             0.0.0.0:*               LISTEN
tcp6       0      0 fe80::f816:3eff:feef:53 :::*                    LISTEN
udp        0      0 10.0.0.3:53             0.0.0.0:*
udp        0      0 0.0.0.0:67              0.0.0.0:*
udp6       0      0 fe80::f816:3eff:feef:53 :::*
Active UNIX domain sockets (servers and established)
Proto RefCnt Flags       Type       State         I-Node   Path

[root@dallas1 ~(keystone_admin)]$ iptables-save | grep 8700

-A INPUT -p tcp -m multiport –dports 8700 -m comment –comment “001 metadata incoming” -j ACCEPT

[root@dallas1 ~(keystone_admin)]$ netstat -lntp | grep 8700

tcp        0      0 0.0.0.0:8700            0.0.0.0:*               LISTEN      2830/python  

[root@dallas1 ~(keystone_admin)]$ ps -ef | grep 2830
nova      2830     1  0 09:41 ?        00:00:57 /usr/bin/python /usr/bin/nova-api –logfile /var/log/nova/api.log
nova      2856  2830  0 09:41 ?        00:00:00 /usr/bin/python /usr/bin/nova-api –logfile /var/log/nova/api.log
nova      2874  2830  0 09:41 ?        00:00:09 /usr/bin/python /usr/bin/nova-api –logfile /var/log/nova/api.log
nova      2875  2830  0 09:41 ?        00:00:01 /usr/bin/python /usr/bin/nova-api –logfile /var/log/nova/api.log

1. At this point you should be able (inside any running Havana instance) to launch a browser (at least “links”, if there is no lightweight X environment) and point it to

http://169.254.169.254/openstack/latest (not EC2)

The response will be: meta_data.json password vendor_data.json

 If Light Weight X Environment is unavailable then use “links”

 

 

 What is curl   http://curl.haxx.se/docs/faq.html#What_is_cURL

Now you should be able to run this on the F20 instance:

[root@vf20rs0404 ~] # curl http://169.254.169.254/openstack/latest/meta_data.json | tee meta_data.json

%  Total    % Received % Xferd  Average Speed   Time    Time     Time  Current

Dload  Upload   Total   Spent    Left  Speed

100  1286  100  1286    0     0   1109      0  0:00:01  0:00:01 –:–:–  1127

. . . . . . . .

“uuid”: “10142280-44a2-4830-acce-f12f3849cb32“,

“availability_zone”: “nova”,

“hostname”: “vf20rs0404.novalocal”,

“launch_index”: 0,

“public_keys”: {“key2”: “ssh-rsa . . . . .  Generated by Nova\n”},

“name”: “VF20RS0404”

On another instance (in my case Ubuntu 14.04 )

 root@ubuntutrs0407:~#curl http://169.254.169.254/openstack/latest/meta_data.json | tee meta_data.json

Total    % Received % Xferd  Average Speed   Time    Time     Time  Current

Dload  Upload   Total   Spent    Left  Speed

100  1292  100  1292    0     0    444      0  0:00:02  0:00:02 –:–:–   446

{“random_seed”: “…”,

“uuid”: “8c79e60c-4f1d-44e5-8446-b42b4d94c4fc“,

“availability_zone”: “nova”,

“hostname”: “ubuntutrs0407.novalocal”,

“launch_index”: 0,

“public_keys”: {“key2”: “ssh-rsa …. Generated by Nova\n”},

“name”: “UbuntuTRS0407”}

Running VMs on Compute node:-

[root@dallas1 ~(keystone_boris)]$ nova list

+————————————–+—————+———–+————+————-+—————————–+

| ID                                   | Name          | Status    | Task State | Power State | Networks                    |

+————————————–+—————+———–+————+————-+—————————–+

| d0f947b1-ff6a-4ff0-b858-b63a3d07cca3 | UbuntuTRS0405 | SUSPENDED | None       | Shutdown    | int=10.0.0.7, 192.168.1.106 |

| 8c79e60c-4f1d-44e5-8446-b42b4d94c4fc | UbuntuTRS0407 | ACTIVE    | None       | Running     | int=10.0.0.6, 192.168.1.107 |

| 8775924c-dbbd-4fbb-afb8-7e38d9ac7615 | VF20RS037     | SUSPENDED | None       | Shutdown    | int=10.0.0.2, 192.168.1.115 |

| d22a2376-33da-4a0e-a066-d334bd2e511d | VF20RS0402    | SUSPENDED | None       | Shutdown    | int=10.0.0.4, 192.168.1.103 |

| 10142280-44a2-4830-acce-f12f3849cb32 | VF20RS0404    | ACTIVE    | None       | Running     | int=10.0.0.5, 192.168.1.105 |

+————————————–+—————+———–+————+————-+——————–

Launching browser to http://169.254.169.254/openstack/latest/meta_data.json on another Two Node Neutron GRE+OVS F20 Cluster. Output is sent directly to browser

2. I have provided some information about the OpenStack metadata API, which is available at /openstack, but if you are interested in the EC2 metadata API, the browser should be pointed to  http://169.254.169.254/latest/meta-data/

This allows you to get any of the displayed parameters.

For instance :-

 

   OR via CLI

ubuntu@ubuntutrs0407:~$ curl  http://169.254.169.254/latest/meta-data/instance-id

i-000000a4

ubuntu@ubuntutrs0407:~$ curl  http://169.254.169.254/latest/meta-data/public-hostname

ubuntutrs0407.novalocal

ubuntu@ubuntutrs0407:~$ curl  http://169.254.169.254/latest/meta-data/public-ipv4

192.168.1.107
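
The same CLI approach can dump every top-level EC2 key in one go; this loop is only a sketch:

# Iterate over the top-level EC2 metadata keys and print each value;
# keys that end with "/" are sub-directories and need one more lookup level
for k in $(curl -s http://169.254.169.254/latest/meta-data/); do
    echo "$k : $(curl -s http://169.254.169.254/latest/meta-data/$k)"
done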

To verify the instance-id, launch virt-manager connected to the Compute Node

 

 

which shows the same value, “000000a4”.

Another option in text mode is “links” browser

$ ssh -l ubuntu -i key2.pem 192.168.1.109

Inside Ubuntu 14.04 instance  :-

# apt-get -y install links

# links

Press ESC to get to menu:-

 

 

 

 

References

1.https://ask.openstack.org/en/question/10140/wget-http1692541692542009-04-04meta-datainstance-id-error-404/


Setup Dashboard&VNC console on Two Node Controller&Compute Neutron GRE+OVS+Gluster Fedora 20 Cluster

March 13, 2014

This post follows up Gluster 3.4.2 on Two Node Controller&Compute Neutron GRE+OVS Fedora 20 Cluster; in particular, it can be performed after the Basic Setup to make system management more comfortable than with the CLI only.

It’s also easy to create an instance via the Dashboard by placing a customization script (the analog of –user-data) in the post-creation panel:

#cloud-config
password: mysecret
chpasswd: { expire: False }
ssh_pwauth: True

This lets you log in as “fedora” and set MTU=1457 inside the VM (GRE tunneling).
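
The MTU can be applied inside the guest with a couple of commands; this is only a sketch and assumes the interface name is eth0:

# Apply the MTU for the current session
sudo ip link set dev eth0 mtu 1457
# Persist it across reboots (Fedora ifcfg style)
echo 'MTU="1457"' | sudo tee -a /etc/sysconfig/network-scripts/ifcfg-eth0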

   Key-pair submitted upon creation works like this :

[root@dfw02 Downloads(keystone_boris)]$ ssh -l fedora -i key2.pem  192.168.1.109
Last login: Sat Mar 15 07:47:45 2014

[fedora@vf20rs015 ~]$ uname -a
Linux vf20rs015.novalocal 3.13.6-200.fc20.x86_64 #1 SMP Fri Mar 7 17:02:28 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux

[fedora@vf20rs015 ~]$ ifconfig
eth0: flags=4163  mtu 1457
inet 40.0.0.7  netmask 255.255.255.0  broadcast 40.0.0.255
inet6 fe80::f816:3eff:fe1e:1de6  prefixlen 64  scopeid 0x20
ether fa:16:3e:1e:1d:e6  txqueuelen 1000  (Ethernet)
RX packets 225  bytes 25426 (24.8 KiB)
RX errors 0  dropped 0  overruns 0  frame 0
TX packets 221  bytes 23674 (23.1 KiB)
TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0

lo: flags=73  mtu 65536
inet 127.0.0.1  netmask 255.0.0.0
inet6 ::1  prefixlen 128  scopeid 0x10
loop  txqueuelen 0  (Local Loopback)
RX packets 0  bytes 0 (0.0 B)
RX errors 0  dropped 0  overruns 0  frame 0
TX packets 0  bytes 0 (0.0 B)
TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0

The setup described at the link mentioned above was originally suggested by Kashyap Chamarthy for VMs running on a non-default Libvirt subnet. My contribution was an attempt to reproduce this setup on physical F20 boxes and an arbitrary network not connected to Libvirt: preventive updates to the mysql.user table, which allowed remote connections for nova-compute and neutron-openvswitch-agent from Compute to Controller, and changes to /etc/sysconfig/iptables to enable the Gluster 3.4.2 setup on F20 systems ( view http://bderzhavets.blogspot.com/2014/03/setup-gluster-342-on-two-node-neutron.html ). I have also fixed a typo in dhcp_agent.ini (the reference to “dnsmasq.conf”) and added the line “dhcp-option=26,1454” to dnsmasq.conf. The updated configuration files are critical for launching an instance without a “Customization script” and allow working with a usual ssh keypair. Actually, once the updates are done the instance gets created with MTU 1454. View [2]. That setup is pretty much focused on the ability to transfer neutron metadata from the Controller to the Compute F20 nodes and is done manually with no answer files. It stops exactly at the point where `nova boot ..` loads the instance on Compute, which obtains an internal IP via DHCP running on the Controller and may be assigned a floating IP to be able to communicate with the Internet. No attempt was made to set up the dashboard there, because the core target was neutron GRE+OVS functionality (just a proof of concept).

Setup

– Controller node: Nova, Keystone, Cinder, Glance, Neutron (using Open vSwitch plugin and GRE tunneling ), Dashboard

 – Compute node: Nova (nova-compute), Neutron (openvswitch-agent)

dwf02.localdomain   -  Controller (192.168.1.127) 
dwf01.localdomain   -  Compute   (192.168.1.137)

1. The first step follows  http://docs.openstack.org/havana/install-guide/install/yum/content/install_dashboard.html   and  http://docs.openstack.org/havana/install-guide/install/yum/content/dashboard-session-database.html. The sequence of actions per the manuals above is :-

# yum install memcached python-memcached mod_wsgi openstack-dashboard

Modify the value of CACHES[‘default’][‘LOCATION’] in /etc/openstack-dashboard/local_settings to match the ones set in /etc/sysconfig/memcached. Open /etc/openstack-dashboard/local_settings and look for this line:

CACHES =

{ ‘default’:

{ ‘BACKEND’ : ‘django.core.cache.backends.memcached.MemcachedCache’,

‘LOCATION’ : ‘127.0.0.1:11211’ }

}

Update the ALLOWED_HOSTS in local_settings.py to include the addresses you wish to access the dashboard from. Edit /etc/openstack-dashboard/local_settings:

ALLOWED_HOSTS = [‘Controller-IP’, ‘my-desktop’]

This guide assumes that you are running the Dashboard on the controller node. You can easily run the dashboard on a separate server, by changing the appropriate settings in local_settings.py. Edit /etc/openstack-dashboard/local_settings and change OPENSTACK_HOST to the hostname of your Identity Service:

OPENSTACK_HOST = “Controller-IP”

Start the Apache web server and memcached: # service httpd restart

# systemctl start memcached

# systemctl enable memcached

To configure the MySQL database, create the dash database:

mysql> CREATE DATABASE dash;

Create a MySQL user for the newly-created dash database that has full control of the database. Replace DASH_DBPASS with a password for the new user:

mysql> GRANT ALL ON dash.* TO ‘dash’@’%’ IDENTIFIED BY ‘fedora’;

mysql> GRANT ALL ON dash.* TO ‘dash’@’localhost’ IDENTIFIED BY ‘fedora’;

In the local_settings file /etc/openstack-dashboard/local_settings

SESSION_ENGINE = ‘django.contrib.sessions.backends.db’

DATABASES =

{ ‘default’:

{ # Database configuration here

‘ENGINE’: ‘django.db.backends.mysql’,

‘NAME’: ‘dash’,

‘USER’: ‘dash’, ‘PASSWORD’:

‘fedora’, ‘HOST’: ‘Controller-IP’,

‘default-character-set’: ‘utf8’ }

}

After configuring the local_settings as shown, you can run the manage.py syncdb command to populate this newly-created database.

# /usr/share/openstack-dashboard/manage.py syncdb

Attempting to run syncdb, you might get an error saying that ‘dash’@’yourhost’ is not authorized (using password: YES). Then (for instance, in my case):

# mysql -u root -p

MariaDB [(none)]> SELECT User, Host, Password FROM mysql.user;

MariaDB [(none)]>  insert into mysql.user(User,Host,Password) values (‘dash’,’dallas1.localdomain’,’ ‘);

Query OK, 1 row affected, 4 warnings (0.00 sec)

MariaDB [(none)]> UPDATE mysql.user SET Password = PASSWORD(‘fedora’)

> WHERE User = ‘dash’ ;

Query OK, 1 row affected (0.00 sec) Rows matched: 3  Changed: 1  Warnings: 0

MariaDB [(none)]>  SELECT User, Host, Password FROM mysql.user;

.   .  .  .

| dash     | %                   | *C9E492EC67084E4255B200FD34BDF396E3CE1A36 |

| dash     | localhost       | *C9E492EC67084E4255B200FD34BDF396E3CE1A36 |

| dash     | dallas1.localdomain | *C9E492EC67084E4255B200FD34BDF396E3CE1A36 | +———-+———————+——————————————-+

20 rows in set (0.00 sec)

That is exactly the same issue which comes up when starting the openstack-nova-scheduler & openstack-nova-conductor services during the basic installation of the Controller on Fedora 20. View Basic setup, in particular :-

Set the mysql.user table to the proper status:

shell> mysql -u root -p
mysql> insert into mysql.user (User,Host,Password) values ('nova','dfw02.localdomain',' ');
mysql> UPDATE mysql.user SET Password = PASSWORD('nova')
    ->    WHERE User = 'nova';
mysql> FLUSH PRIVILEGES;

Start, enable nova-{api,scheduler,conductor} services

  $ for i in start enable status; \
    do systemctl $i openstack-nova-api; done

  $ for i in start enable status; \
    do systemctl $i openstack-nova-scheduler; done

  $ for i in start enable status; \
    do systemctl $i openstack-nova-conductor; done

 # service httpd restart

Finally on Controller (dfw02  – 192.168.1.127)  file /etc/openstack-dashboard/local_settings  looks like https://bderzhavets.wordpress.com/2014/03/14/sample-of-etcopenstack-dashboardlocal_settings/

At this point the dashboard is functional, but instance console sessions are unavailable via the dashboard. I didn’t get any error code, just:

Instance Detail: VF20RS03

OverviewLogConsole

Loading…

2. The second step is skipped in the manual mentioned above, however it is well known to experienced users: https://ask.openstack.org/en/question/520/vnc-console-in-dashboard-fails-to-connect-ot-server-code-1006/

**************************************

Controller  dfw02 – 192.168.1.127

**************************************

# ssh-keygen (Hit Enter to accept all of the defaults)

# ssh-copy-id -i ~/.ssh/id_rsa.pub root@dfw01

[root@dfw02 ~(keystone_boris)]$ ssh -L 5900:127.0.0.1:5900 -N -f -l root 192.168.1.137
[root@dfw02 ~(keystone_boris)]$ ssh -L 5901:127.0.0.1:5901 -N -f -l root 192.168.1.137
[root@dfw02 ~(keystone_boris)]$ ssh -L 5902:127.0.0.1:5902 -N -f -l root 192.168.1.137
[root@dfw02 ~(keystone_boris)]$ ssh -L 5903:127.0.0.1:5903 -N -f -l root 192.168.1.137
[root@dfw02 ~(keystone_boris)]$ ssh -L 5904:127.0.0.1:5904 -N -f -l root 192.168.1.137
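
The same five forwards can also be created with a small loop; this is just a convenience sketch using the Compute IP from above:

for p in 5900 5901 5902 5903 5904 ; do
    ssh -L ${p}:127.0.0.1:${p} -N -f -l root 192.168.1.137
done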

Compute’s  IP is 192.168.1.137

Update /etc/nova/nova.conf:

novncproxy_host=0.0.0.0

novncproxy_port=6080

novncproxy_base_url=http://192.168.1.127:6080/vnc_auto.html
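
If you prefer not to edit nova.conf by hand, the same values can be applied with openstack-config (used elsewhere in this setup); just a sketch mirroring the settings above:

openstack-config --set /etc/nova/nova.conf DEFAULT novncproxy_host 0.0.0.0
openstack-config --set /etc/nova/nova.conf DEFAULT novncproxy_port 6080
openstack-config --set /etc/nova/nova.conf DEFAULT novncproxy_base_url http://192.168.1.127:6080/vnc_auto.html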

[root@dfw02 ~(keystone_admin)]$ systemctl enable openstack-nova-consoleauth.service
ln -s ‘/usr/lib/systemd/system/openstack-nova-consoleauth.service’ ‘/etc/systemd/system/multi-user.target.wants/openstack-nova-consoleauth.service’
[root@dfw02 ~(keystone_admin)]$ systemctl enable openstack-nova-novncproxy.service
ln -s ‘/usr/lib/systemd/system/openstack-nova-novncproxy.service’ ‘/etc/systemd/system/multi-user.target.wants/openstack-nova-novncproxy.service’

[root@dfw02 ~(keystone_admin)]$ systemctl start openstack-nova-consoleauth.service
[root@dfw02 ~(keystone_admin)]$ systemctl start openstack-nova-novncproxy.service

[root@dfw02 ~(keystone_admin)]$ systemctl status openstack-nova-consoleauth.service

openstack-nova-consoleauth.service – OpenStack Nova VNC console auth Server

Loaded: loaded (/usr/lib/systemd/system/openstack-nova-consoleauth.service; enabled)

Active: active (running) since Thu 2014-03-13 19:14:45 MSK; 20min ago

Main PID: 14679 (nova-consoleaut)

CGroup: /system.slice/openstack-nova-consoleauth.service

└─14679 /usr/bin/python /usr/bin/nova-consoleauth –logfile /var/log/nova/consoleauth.log

Mar 13 19:14:45 dfw02.localdomain systemd[1]: Started OpenStack Nova VNC console auth Server.

[root@dfw02 ~(keystone_admin)]$ systemctl status openstack-nova-novncproxy.service

openstack-nova-novncproxy.service – OpenStack Nova NoVNC Proxy Server

Loaded: loaded (/usr/lib/systemd/system/openstack-nova-novncproxy.service; enabled)

Active: active (running) since Thu 2014-03-13 19:14:58 MSK; 20min ago

Main PID: 14762 (nova-novncproxy)

CGroup: /system.slice/openstack-nova-novncproxy.service

├─14762 /usr/bin/python /usr/bin/nova-novncproxy –web /usr/share/novnc/

└─17166 /usr/bin/python /usr/bin/nova-novncproxy –web /usr/share/novnc/

Mar 13 19:23:54 dfw02.localdomain nova-novncproxy[14762]: 20: 127.0.0.1: Path: ‘/websockify’

Mar 13 19:23:54 dfw02.localdomain nova-novncproxy[14762]: 20: connecting to: 127.0.0.1:5900

Mar 13 19:23:55 dfw02.localdomain nova-novncproxy[14762]: 19: 127.0.0.1: ignoring empty handshake

Mar 13 19:24:31 dfw02.localdomain nova-novncproxy[14762]: 22: 127.0.0.1: ignoring socket not ready

Mar 13 19:24:32 dfw02.localdomain nova-novncproxy[14762]: 23: 127.0.0.1: Plain non-SSL (ws://) WebSocket connection

Mar 13 19:24:32 dfw02.localdomain nova-novncproxy[14762]: 23: 127.0.0.1: Version hybi-13, base64: ‘True’

Mar 13 19:24:32 dfw02.localdomain nova-novncproxy[14762]: 23: 127.0.0.1: Path: ‘/websockify’

Mar 13 19:24:32 dfw02.localdomain nova-novncproxy[14762]: 23: connecting to: 127.0.0.1:5901

Mar 13 19:24:37 dfw02.localdomain nova-novncproxy[14762]: 26: 127.0.0.1: ignoring empty handshake

Mar 13 19:24:37 dfw02.localdomain nova-novncproxy[14762]: 25: 127.0.0.1: ignoring empty handshake

Hint: Some lines were ellipsized, use -l to show in full.

[root@dfw02 ~(keystone_admin)]$ netstat -lntp | grep 6080

tcp        0      0 0.0.0.0:6080            0.0.0.0:*               LISTEN      14762/python

*********************************

Compute  dfw01 – 192.168.1.137

*********************************

Update  /etc/nova/nova.conf:

vnc_enabled=True

novncproxy_base_url=http://192.168.1.127:6080/vnc_auto.html

vncserver_listen=0.0.0.0

vncserver_proxyclient_address=192.168.1.137

# systemctl restart openstack-nova-compute

Finally :-

[root@dfw02 ~(keystone_admin)]$ systemctl list-units | grep nova

openstack-nova-api.service                      loaded active running   OpenStack Nova API Server
openstack-nova-conductor.service           loaded active running   OpenStack Nova Conductor Server
openstack-nova-consoleauth.service       loaded active running   OpenStack Nova VNC console auth Server
openstack-nova-novncproxy.service         loaded active running   OpenStack Nova NoVNC Proxy Server
openstack-nova-scheduler.service            loaded active running   OpenStack Nova Scheduler Server

[root@dfw02 ~(keystone_admin)]$ nova-manage service list

Binary           Host                                 Zone             Status     State Updated_At

nova-scheduler   dfw02.localdomain                    internal         enabled    :-)   2014-03-13 16:56:54

nova-conductor   dfw02.localdomain                    internal         enabled    :-)   2014-03-13 16:56:54

nova-compute     dfw01.localdomain                     nova             enabled    :-)   2014-03-13 16:56:45

nova-consoleauth dfw02.localdomain                   internal         enabled    :-)   2014-03-13 16:56:47

[root@dfw02 ~(keystone_admin)]$ neutron agent-list

+————————————–+——————–+——————-+——-+—————-+

| id                                   | agent_type         | host              | alive | admin_state_up |

+————————————–+——————–+——————-+——-+—————-+

| 037b985d-7a7d-455b-8536-76eed40b0722 | L3 agent           | dfw02.localdomain | :-)   | True           |

| 22438ee9-b4ea-4316-9492-eb295288f61a | Open vSwitch agent | dfw02.localdomain | :-)   | True           |

| 76ed02e2-978f-40d0-879e-1a2c6d1f7915 | DHCP agent         | dfw02.localdomain | :-)   | True           |

| 951632a3-9744-4ff4-a835-c9f53957c617 | Open vSwitch agent | dfw01.localdomain | :-)   | True           |

+————————————–+——————–+——————-+——-+—————-+

Users console views :-

    Admin Console views :-

[root@dallas2 ~]# service openstack-nova-compute status -l
Redirecting to /bin/systemctl status  -l openstack-nova-compute.service
openstack-nova-compute.service – OpenStack Nova Compute Server
Loaded: loaded (/usr/lib/systemd/system/openstack-nova-compute.service; enabled)
Active: active (running) since Thu 2014-03-20 16:29:07 MSK; 6h ago
Main PID: 1685 (nova-compute)
CGroup: /system.slice/openstack-nova-compute.service
├─1685 /usr/bin/python /usr/bin/nova-compute –logfile /var/log/nova/compute.log
└─3552 /usr/sbin/glusterfs –volfile-id=cinder-volumes012 –volfile-server=192.168.1.130 /var/lib/nova/mnt/1c9688348ab38662e3ac8fb121077d34

Mar 20 22:20:15 dallas2.localdomain sudo[11210]: nova : TTY=unknown ; PWD=/ ; USER=root ; COMMAND=/bin/nova-rootwrap /etc/nova/rootwrap.conf ip link set qvb372fd13e-d2 up
Mar 20 22:20:15 dallas2.localdomain sudo[11213]: nova : TTY=unknown ; PWD=/ ; USER=root ; COMMAND=/bin/nova-rootwrap /etc/nova/rootwrap.conf ip link set qvb372fd13e-d2 promisc on
Mar 20 22:20:16 dallas2.localdomain sudo[11216]: nova : TTY=unknown ; PWD=/ ; USER=root ; COMMAND=/bin/nova-rootwrap /etc/nova/rootwrap.conf ip link set qvo372fd13e-d2 up
Mar 20 22:20:16 dallas2.localdomain sudo[11219]: nova : TTY=unknown ; PWD=/ ; USER=root ; COMMAND=/bin/nova-rootwrap /etc/nova/rootwrap.conf ip link set qvo372fd13e-d2 promisc on
Mar 20 22:20:16 dallas2.localdomain sudo[11222]: nova : TTY=unknown ; PWD=/ ; USER=root ; COMMAND=/bin/nova-rootwrap /etc/nova/rootwrap.conf ip link set qbr372fd13e-d2 up
Mar 20 22:20:16 dallas2.localdomain sudo[11225]: nova : TTY=unknown ; PWD=/ ; USER=root ; COMMAND=/bin/nova-rootwrap /etc/nova/rootwrap.conf brctl addif qbr372fd13e-d2 qvb372fd13e-d2
Mar 20 22:20:16 dallas2.localdomain sudo[11228]: nova : TTY=unknown ; PWD=/ ; USER=root ; COMMAND=/bin/nova-rootwrap /etc/nova/rootwrap.conf ovs-vsctl — –may-exist add-port br-int qvo372fd13e-d2 — set Interface qvo372fd13e-d2 external-ids:iface-id=372fd13e-d283-43ba-9a4e-a1684660f4ce external-ids:iface-status=active external-ids:attached-mac=fa:16:3e:0d:4a:12 external-ids:vm-uuid=9679d849-7e4b-4cb5-b644-43279d53f01b
Mar 20 22:20:16 dallas2.localdomain ovs-vsctl[11230]: ovs|00001|vsctl|INFO|Called as /bin/ovs-vsctl — –may-exist add-port br-int qvo372fd13e-d2 — set Interface qvo372fd13e-d2 external-ids:iface-id=372fd13e-d283-43ba-9a4e-a1684660f4ce external-ids:iface-status=active external-ids:attached-mac=fa:16:3e:0d:4a:12 external-ids:vm-uuid=9679d849-7e4b-4cb5-b644-43279d53f01b
Mar 20 22:20:16 dallas2.localdomain sudo[11244]: nova : TTY=unknown ; PWD=/ ; USER=root ; COMMAND=/bin/nova-rootwrap /etc/nova/rootwrap.conf tee /sys/class/net/tap372fd13e-d2/brport/hairpin_mode
Mar 20 22:25:53 dallas2.localdomain nova-compute[1685]: 2014-03-20 22:25:53.102 1685 WARNING nova.compute.manager [-] Found 5 in the database and 2 on the hypervisor.

[root@dallas2 ~]# ovs-vsctl show
3e7422a7-8828-4e7c-b595-8a5b6504bc08
Bridge br-int
Port “qvod0e086e7-32”
tag: 1
Interface “qvod0e086e7-32”
Port br-int
            Interface br-int
type: internal
Port “qvo372fd13e-d2”
tag: 1
            Interface “qvo372fd13e-d2”
Port “qvob49ecf5e-8e”
tag: 1
Interface “qvob49ecf5e-8e”
Port “qvo756757a8-40”
tag: 1
Interface “qvo756757a8-40”
Port patch-tun
Interface patch-tun
type: patch
options: {peer=patch-int}
Port “qvo4d1f9115-03”
tag: 1
Interface “qvo4d1f9115-03”
Bridge br-tun
Port “gre-1”
Interface “gre-1″
type: gre
options: {in_key=flow, local_ip=”192.168.1.140″, out_key=flow, remote_ip=”192.168.1.130”}
Port br-tun
Interface br-tun
type: internal
Port patch-int
Interface patch-int
type: patch
options: {peer=patch-tun}
ovs_version: “2.0.0”

[root@dallas1 ~(keystone_boris)]$ nova list
+————————————–+————–+———–+————+————-+—————————–+
| ID                                   | Name         | Status    | Task State | Power State | Networks                    |
+————————————–+————–+———–+————+————-+—————————–+
| 690d29ae-4c3c-4b2e-b2df-e4d654668336 | UbuntuSRS007 | SUSPENDED | None       | Shutdown    | int=10.0.0.6, 192.168.1.103 |
| 9c791573-1238-44c4-a103-6873fddc17d1 | UbuntuTS019  | SUSPENDED | None       | Shutdown    | int=10.0.0.5, 192.168.1.107 |
| 70db20be-efa6-4a96-bf39-6250962784a3 | VF20RS015    | SUSPENDED | None       | Shutdown    | int=10.0.0.7, 192.168.1.101 |
| 3c888e6a-dd4f-489a-82bb-1f1f9ce6a696 | VF20RS017    | ACTIVE    | None       | Running     | int=10.0.0.4, 192.168.1.102 |
| 9679d849-7e4b-4cb5-b644-43279d53f01b | VF20RS024    | ACTIVE    | None       | Running     | int=10.0.0.2, 192.168.1.105 |
+————————————–+————–+———–+————+————-+—————————–+
[root@dallas1 ~(keystone_boris)]$ nova show 9679d849-7e4b-4cb5-b644-43279d53f01b
+————————————–+———————————————————-+
| Property                             | Value                                                    |
+————————————–+———————————————————-+
| status                               | ACTIVE                                                   |
| updated                              | 2014-03-20T18:20:16Z                                     |
| OS-EXT-STS:task_state                | None                                                     |
| key_name                             | key2                                                     |
| image                                | Attempt to boot from volume – no image supplied          |
| int network                          | 10.0.0.2, 192.168.1.105                                  |
| hostId                               | 8477c225f2a46d84dcd609798bf5ee71cc8d20b44256b3b2a54b723f |
| OS-EXT-STS:vm_state                  | active                                                   |
| OS-SRV-USG:launched_at               | 2014-03-20T18:20:16.000000                               |
| flavor                               | m1.small (2)                                             |
| id                                   | 9679d849-7e4b-4cb5-b644-43279d53f01b                     |
| security_groups                      | [{u’name’: u’default’}]                                  |
| OS-SRV-USG:terminated_at             | None                                                     |
| user_id                              | df4a984ce2f24848a6b84aaa99e296f1                         |
| name                                 | VF20RS024                                                |
| created                              | 2014-03-20T18:20:10Z                                     |
| tenant_id                            | e896be65e94a4893b870bc29ba86d7eb                         |
| OS-DCF:diskConfig                    | MANUAL                                                   |
| metadata                             | {}                                                       |
| os-extended-volumes:volumes_attached | [{u’id’: u’abc0f5b8-5144-42b7-b49f-a42a20ddd88f‘}]       |
| accessIPv4                           |                                                          |
| accessIPv6                           |                                                          |
| progress                             | 0                                                        |
| OS-EXT-STS:power_state               | 1                                                        |
| OS-EXT-AZ:availability_zone          | nova                                                     |
| config_drive                         |                                                          |
+————————————–+———————————————————-+
[root@dallas1 ~(keystone_boris)]$ ls -l /FDR/Replicate
total 8383848
-rw-rw-rw-. 2 root root 5368709120 Mar 17 21:58 volume-4b807fe8-dcd2-46eb-b7dd-6ab10641c32a
-rw-rw-rw-. 2 root root 5368709120 Mar 20 18:26 volume-4df4fadf-1be9-4a09-b51c-723b8a6b9c23
-rw-rw-rw-. 2 root root 5368709120 Mar 19 13:46 volume-6ccc137a-6361-42ee-8925-57c6a2eeccf4
-rw-rw-rw-. 2 qemu qemu 5368709120 Mar 20 23:18 volume-abc0f5b8-5144-42b7-b49f-a42a20ddd88f
-rw-rw-rw-. 2 qemu qemu 5368709120 Mar 20 23:18 volume-ec9670b8-fa64-46e9-9695-641f51bf1421

[root@dallas1 ~(keystone_boris)]$ ssh 192.168.1.140
Last login: Thu Mar 20 20:15:49 2014
[root@dallas2 ~]# ls -l /FDR/Replicate
total 8383860
-rw-rw-rw-. 2 root root 5368709120 Mar 17 21:58 volume-4b807fe8-dcd2-46eb-b7dd-6ab10641c32a
-rw-rw-rw-. 2 root root 5368709120 Mar 20 18:26 volume-4df4fadf-1be9-4a09-b51c-723b8a6b9c23
-rw-rw-rw-. 2 root root 5368709120 Mar 19 13:46 volume-6ccc137a-6361-42ee-8925-57c6a2eeccf4
-rw-rw-rw-. 2 qemu qemu 5368709120 Mar 20 23:19 volume-abc0f5b8-5144-42b7-b49f-a42a20ddd88f
-rw-rw-rw-. 2 qemu qemu 5368709120 Mar 20 23:19 volume-ec9670b8-fa64-46e9-9695-641f51bf1421


Setup Gluster 3.4.2 on Two Node Controller&Compute Neutron GRE+OVS Fedora 20 Cluster

March 10, 2014

This post is an update to http://bderzhavets.blogspot.com/2014/01/setting-up-two-physical-node-openstack.html. It is focused on the Gluster 3.4.2 implementation, including tuning the /etc/sysconfig/iptables files on the Controller and Compute nodes, copying the ssh key from the master node to the compute node, step-by-step verification of gluster volume replica 2 functionality, and switching the RDO Havana cinder services to work with the gluster volume created to store instances’ bootable cinder volumes for a performance improvement. Of course, creating gluster bricks under “/” is not recommended; there should be a separate mount point with an “xfs” filesystem to store the gluster bricks on each node.
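
A sketch of that recommended brick layout, assuming a spare disk /dev/sdb is available on each node (the device name is an assumption):

# Create an XFS filesystem for the bricks and mount it under /FDR
mkfs.xfs -i size=512 /dev/sdb
mkdir -p /FDR
echo '/dev/sdb  /FDR  xfs  defaults  0 0' >> /etc/fstab
mount -a
mkdir -p /FDR/Replicate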

 The manual RDO Havana setup itself was originally suggested by Kashyap Chamarthy for F20 VMs running on a non-default Libvirt subnet. My contribution was an attempt to reproduce this setup on physical F20 boxes and an arbitrary network not connected to Libvirt: preventive updates to the mysql.user table, which allowed remote connections for nova-compute and neutron-openvswitch-agent from Compute to Controller, and changes to /etc/sysconfig/iptables to enable the Gluster 3.4.2 setup on F20 systems ( view http://bderzhavets.blogspot.com/2014/03/setup-gluster-342-on-two-node-neutron.html ). I have also fixed a typo in dhcp_agent.ini (the reference to “dnsmasq.conf”) and added the line “dhcp-option=26,1454” to dnsmasq.conf. The updated configuration files are critical for launching an instance without a “Customization script” and allow working with a usual ssh keypair. Actually, once the updates are done the instance gets created with MTU 1454. View [2]. The original setup is pretty much focused on the ability to transfer neutron metadata from the Controller to the Compute F20 nodes and is done manually with no answer files. It stops exactly at the point where `nova boot ..` loads the instance on Compute, which obtains an internal IP via DHCP running on the Controller and may be assigned a floating IP to be able to communicate with the Internet. No attempt was made to set up the dashboard there, because the core target was neutron GRE+OVS functionality (just a proof of concept). Regarding Dashboard Setup & VNC Console, view :-
Setup Dashboard&VNC console on Two Node Controller&Compute Neutron GRE+OVS+Gluster Fedora 20 Cluster

Updated setup procedure itself may be viewed here

Setup 

– Controller node: Nova, Keystone, Cinder, Glance, Neutron (using Open vSwitch plugin and GRE tunneling )

 – Compute node: Nova (nova-compute), Neutron (openvswitch-agent)

dallas1.localdomain   –  Controller (192.168.1.130)

dallas2.localdomain   –  Compute   (192.168.1.140)

The first step is tuning /etc/sysconfig/iptables for the IPv4 iptables firewall (the firewalld service should be disabled) :-

Update /etc/sysconfig/iptables on both nodes:-

-A INPUT -p tcp -m multiport –dport 24007:24047 -j ACCEPT
-A INPUT -p tcp –dport 111 -j ACCEPT
-A INPUT -p udp –dport 111 -j ACCEPT
-A INPUT -p tcp -m multiport –dport 38465:38485 -j ACCEPT

Comment out the lines below, ignoring the instruction from http://kashyapc.fedorapeople.org/virt/openstack/neutron-configs-GRE-OVS-two-node.txt. This is critical for Gluster functionality: with these rules active you can only work with thin LVM as cinder volumes, you won’t even be able to do a remote mount with the “-t glusterfs” option, and Gluster replication will be dead forever.

# -A FORWARD -j REJECT –reject-with icmp-host-prohibited
# -A INPUT -j REJECT –reject-with icmp-host-prohibited

Restart the iptables service on both nodes.
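
A sketch of that restart, assuming the iptables-services unit is in use (firewalld disabled, as stated above):

# Run on each node once /etc/sysconfig/iptables has been edited
systemctl restart iptables.service
systemctl status iptables.service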

Second step:-

On dallas1, run the following commands :

# ssh-keygen (Hit Enter to accept all of the defaults)
# ssh-copy-id -i ~/.ssh/id_rsa.pub root@dallas2

On both nodes run :-

# yum  -y install glusterfs glusterfs-server glusterfs-fuse
# service glusterd start

On dallas1

#gluster peer probe dallas2.localdomain
Should return “success”

[root@dallas1 ~(keystone_admin)]$ gluster peer status

Number of Peers: 1
Hostname: dallas2.localdomain
Uuid: b3b1cf43-2fec-4904-82d4-b9be03f77c5f
State: Peer in Cluster (Connected)
On dallas2
[root@dallas2 ~]# gluster peer status
Number of Peers: 1
Hostname: 192.168.1.130
Uuid: a57433dd-4a1a-4442-a5ae-ba2f682e5c79
State: Peer in Cluster (Connected)

*************************************************************************************
On Controller (192.168.1.130)  & Compute nodes (192.168.1.140)
**********************************************************************************

Verify ports availability:-

[root@dallas1 ~(keystone_admin)]$ netstat -lntp | grep gluster
tcp    0      0 0.0.0.0:655        0.0.0.0:*    LISTEN      2591/glusterfs
tcp    0      0 0.0.0.0:49152      0.0.0.0:*    LISTEN      2524/glusterfsd
tcp    0      0 0.0.0.0:2049       0.0.0.0:*    LISTEN      2591/glusterfs
tcp    0      0 0.0.0.0:38465      0.0.0.0:*    LISTEN      2591/glusterfs
tcp    0      0 0.0.0.0:38466      0.0.0.0:*    LISTEN      2591/glusterfs
tcp    0      0 0.0.0.0:49155      0.0.0.0:*    LISTEN      2525/glusterfsd
tcp    0      0 0.0.0.0:38468      0.0.0.0:*    LISTEN      2591/glusterfs
tcp    0      0 0.0.0.0:38469      0.0.0.0:*    LISTEN      2591/glusterfs
tcp    0      0 0.0.0.0:24007      0.0.0.0:*    LISTEN      2380/glusterd

************************************

Switching Cinder to Gluster volume

************************************

# gluster volume create cinder-volumes021  replica 2 dallas1.localdomain:/FDR/Replicate   dallas2.localdomain:/FDR/Replicate force
# gluster volume start cinder-volumes021
# gluster volume set cinder-volumes021  auth.allow 192.168.1.*
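
Before pointing Cinder at the new volume it may be worth mounting it manually once; a sketch using the volume name created above:

# Temporary manual mount of the replicated volume, then clean up
mount -t glusterfs 192.168.1.130:/cinder-volumes021 /mnt
df -h /mnt
umount /mnt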

[root@dallas1 ~(keystone_admin)]$ gluster volume info cinder-volumes012

Volume Name: cinder-volumes012
Type: Replicate
Volume ID: 9ee31c6c-0ae3-4fee-9886-b9cb6a518f48
Status: Started
Number of Bricks: 1 x 2 = 2
Transport-type: tcp
Bricks:
Brick1: dallas1.localdomain:/FDR/Replicate
Brick2: dallas2.localdomain:/FDR/Replicate
Options Reconfigured:
storage.owner-gid: 165
storage.owner-uid: 165
auth.allow: 192.168.1.*

[root@dallas1 ~(keystone_admin)]$ gluster volume status cinder-volumes012

Status of volume: cinder-volumes012
Gluster process                                                    Port    Online    Pid
——————————————————————————
Brick dallas1.localdomain:/FDR/Replicate         49155    Y    2525
Brick dallas2.localdomain:/FDR/Replicate         49152    Y    1615
NFS Server on localhost                                  2049    Y    2591
Self-heal Daemon on localhost                         N/A    Y    2596
NFS Server on dallas2.localdomain                   2049    Y    2202
Self-heal Daemon on dallas2.localdomain          N/A    Y    2197

# openstack-config --set /etc/cinder/cinder.conf DEFAULT volume_driver cinder.volume.drivers.glusterfs.GlusterfsDriver
# openstack-config --set /etc/cinder/cinder.conf DEFAULT glusterfs_shares_config /etc/cinder/shares.conf
# openstack-config --set /etc/cinder/cinder.conf DEFAULT glusterfs_mount_point_base /var/lib/cinder/volumes
# vi /etc/cinder/shares.conf
192.168.1.130:cinder-volumes012
:wq
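
After the three openstack-config calls the relevant section of /etc/cinder/cinder.conf should read roughly as follows (shown for reference only; it is simply the result of the commands above):

[DEFAULT]
volume_driver = cinder.volume.drivers.glusterfs.GlusterfsDriver
glusterfs_shares_config = /etc/cinder/shares.conf
glusterfs_mount_point_base = /var/lib/cinder/volumes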

Make sure all thin LVM based volumes have been deleted (check with `cinder list`); if any remain, delete them before switching the backend.
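
For example (double-check the listing before deleting anything; VOLUME_ID is a placeholder):

[root@dallas1 ~(keystone_admin)]$ cinder list
[root@dallas1 ~(keystone_admin)]$ cinder delete VOLUME_ID     # repeat for every leftover thin LVM volume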

[root@dallas1 ~(keystone_admin)]$ for i in api scheduler volume ; do service openstack-cinder-${i} restart ; done

This should add a row to the `df -h` output:

192.168.1.130:cinder-volumes012  187G   32G  146G  18% /var/lib/cinder/volumes/1c9688348ab38662e3ac8fb121077d34

[root@dallas1 ~(keystone_admin)]$ openstack-status

== Nova services ==

openstack-nova-api:                        active
openstack-nova-cert:                       inactive  (disabled on boot)
openstack-nova-compute:               inactive  (disabled on boot)
openstack-nova-network:                inactive  (disabled on boot)
openstack-nova-scheduler:             active
openstack-nova-volume:                  inactive  (disabled on boot)
openstack-nova-conductor:             active
== Glance services ==
openstack-glance-api:                   active
openstack-glance-registry:           active
== Keystone service ==
openstack-keystone:                     active
== neutron services ==
neutron-server:                         active
neutron-dhcp-agent:                active
neutron-l3-agent:                     active
neutron-metadata-agent:        active
neutron-lbaas-agent:                    inactive  (disabled on boot)
neutron-openvswitch-agent:       active
neutron-linuxbridge-agent:         inactive  (disabled on boot)
neutron-ryu-agent:                      inactive  (disabled on boot)
neutron-nec-agent:                      inactive  (disabled on boot)
neutron-mlnx-agent:                   inactive  (disabled on boot)
== Cinder services ==
openstack-cinder-api:                   active
openstack-cinder-scheduler:        active
openstack-cinder-volume:             active
== Support services ==
mysqld:                                 inactive  (disabled on boot)
libvirtd:                               active
openvswitch:                            active
dbus:                                   active
tgtd:                                   active
qpidd:                                  active
== Keystone users ==
+———————————-+———+———+——-+
|                id                |   name  | enabled | email |
+———————————-+———+———+——-+
| 871cf99617ff40e09039185aa7ab11f8 |  admin  |   True  |       |
| df4a984ce2f24848a6b84aaa99e296f1 |  boris  |   True  |       |
| 57fc5466230b497a9f206a20618dbe25 |  cinder |   True  |       |
| cdb2e5af7bae4c5486a1e3e2f42727f0 |  glance |   True  |       |
| adb14139a0874c74b14d61d2d4f22371 | neutron |   True  |       |
| 2485122e3538409c8a6fa2ea4343cedf |   nova  |   True  |       |
+———————————-+———+———+——-+
== Glance images ==
+————————————–+———————+————-+——————+———–+——–+
| ID                                   | Name                | Disk Format | Container Format | Size      | Status |
+————————————–+———————+————-+——————+———–+——–+
| 592faef8-308a-4438-867a-17adf685cde4 | CirrOS 31           | qcow2       | bare             | 13147648  | active |
| d0e90250-5814-4685-9b8d-65ec9daa7117 | Fedora 20 x86_64    | qcow2       | bare             | 214106112 | active |
| 3e6eea8e-32e6-4373-9eb1-e04b8a3167f9 | Ubuntu Server 13.10 | qcow2       | bare             | 244777472 | active |
+————————————–+———————+————-+——————+———–+——–+
== Nova managed services ==
+—————-+———————+———-+———+——-+—————————-+—————–+
| Binary         | Host                | Zone     | Status  | State | Updated_at                 | Disabled Reason |
+—————-+———————+———-+———+——-+—————————-+—————–+
| nova-scheduler | dallas1.localdomain | internal | enabled | up    | 2014-03-09T14:19:31.000000 | None            |
| nova-conductor | dallas1.localdomain | internal | enabled | up    | 2014-03-09T14:19:30.000000 | None            |
| nova-compute   | dallas2.localdomain | nova     | enabled | up    | 2014-03-09T14:19:33.000000 | None            |
+—————-+———————+———-+———+——-+—————————-+—————–+
== Nova networks ==
+————————————–+——-+——+
| ID                                   | Label | Cidr |
+————————————–+——-+——+
| 0ed406bf-3552-4036-9006-440f3e69618e | ext   | None |
| 166d9651-d299-47df-a5a1-b368e87b612f | int   | None |
+————————————–+——-+——+
== Nova instance flavors ==
+—-+———–+———–+——+———–+——+——-+————-+———–+
| ID | Name      | Memory_MB | Disk | Ephemeral | Swap | VCPUs | RXTX_Factor | Is_Public |
+—-+———–+———–+——+———–+——+——-+————-+———–+
| 1  | m1.tiny   | 512       | 1    | 0         |      | 1     | 1.0         | True      |
| 2  | m1.small  | 2048      | 20   | 0         |      | 1     | 1.0         | True      |
| 3  | m1.medium | 4096      | 40   | 0         |      | 2     | 1.0         | True      |
| 4  | m1.large  | 8192      | 80   | 0         |      | 4     | 1.0         | True      |
| 5  | m1.xlarge | 16384     | 160  | 0         |      | 8     | 1.0         | True      |
+—-+———–+———–+——+———–+——+——-+————-+———–+
== Nova instances ==
+—-+——+——–+————+————-+———-+
| ID | Name | Status | Task State | Power State | Networks |
+—-+——+——–+————+————-+———-+
+—-+——+——–+————+————-+———-+

[root@dallas1 ~(keystone_boris)]$ nova list

+————————————–+———–+——–+————+————-+—————————–+
| ID                                   | Name      | Status | Task State | Power State | Networks                    |
+————————————–+———–+——–+————+————-+—————————–+
| 9566adec-9406-4c3e-bce5-109ecb8bcf6b | VF20RS007 | ACTIVE | None       | Running     | int=10.0.0.2, 192.168.1.101 |
| 8142ee4c-ef56-4b61-8a0b-ecd82d21484f | VF20RS012 | ACTIVE | None       | Running     | int=10.0.0.4, 192.168.1.102 |
+————————————–+———–+——–+————+————-+—————————–+
[root@dallas1 ~(keystone_boris)]$ df -h

Filesystem                       Size  Used Avail Use% Mounted on
/dev/mapper/fedora01-root        187G   32G  146G  18% /
devtmpfs                         3.9G     0  3.9G   0% /dev
tmpfs                            3.9G  184K  3.9G   1% /dev/shm
tmpfs                            3.9G  9.1M  3.9G   1% /run
tmpfs                            3.9G     0  3.9G   0% /sys/fs/cgroup
tmpfs                            3.9G  464K  3.9G   1% /tmp
/dev/sdb5                        477M  122M  327M  28% /boot
tmpfs                            3.9G  9.1M  3.9G   1% /run/netns
192.168.1.130:cinder-volumes012  187G   32G  146G  18% /var/lib/cinder/volumes/1c9688348ab38662e3ac8fb121077d34

(neutron) agent-list

+————————————–+——————–+———————+——-+—————-+
| id                                   | agent_type         | host                | alive | admin_state_up |
+————————————–+——————–+———————+——-+—————-+
| 3ed1cd15-81af-4252-9d6f-e9bb140bf6cf | L3 agent           | dallas1.localdomain | :-)   | True           |
| a088a6df-633c-4959-a316-510c99f3876b | DHCP agent         | dallas1.localdomain | :-)   | True           |
| a3e5200c-b391-4930-b3ee-58c8d1b13c73 | Open vSwitch agent | dallas1.localdomain | :-)   | True           |
| b6da839a-0d93-44ad-9793-6d0919fbb547 | Open vSwitch agent | dallas2.localdomain | :-)   | True           |
+————————————–+——————–+———————+——-+—————-+
If the Controller has been set up correctly:-

[root@dallas1 ~(keystone_admin)]$ netstat -lntp | grep python
tcp    0     0 0.0.0.0:8700      0.0.0.0:*     LISTEN      1160/python
tcp    0     0 0.0.0.0:35357     0.0.0.0:*     LISTEN      1163/python
tcp   0      0 0.0.0.0:9696      0.0.0.0:*      LISTEN      1165/python
tcp   0      0 0.0.0.0:8773      0.0.0.0:*      LISTEN      1160/python
tcp   0      0 0.0.0.0:8774      0.0.0.0:*      LISTEN      1160/python
tcp   0      0 0.0.0.0:9191      0.0.0.0:*      LISTEN      1173/python
tcp   0      0 0.0.0.0:8776      0.0.0.0:*      LISTEN      8169/python
tcp   0      0 0.0.0.0:5000      0.0.0.0:*      LISTEN      1163/python
tcp   0      0 0.0.0.0:9292      0.0.0.0:*      LISTEN      1168/python 
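
These listeners correspond to the standard API ports: 5000/35357 keystone, 9696 neutron-server, 8773/8774 nova-api, 8776 cinder-api, 9191 glance-registry, 9292 glance-api. A quick remote check of the same ports, say from the Compute node, could look like this (a sketch relying on bash's built-in /dev/tcp; not part of the original procedure):

for port in 5000 35357 9696 8773 8774 8776 9191 9292 ; do
  (echo > /dev/tcp/192.169.142.127/$port) 2>/dev/null \
      && echo "port $port open" || echo "port $port CLOSED"
done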

**********************************************
Creating instance utilizing glusterfs volume
**********************************************

[root@dallas1 ~(keystone_boris)]$ glance image-list

+————————————–+———————+————-+——————+———–+——–+
| ID                                   | Name                | Disk Format | Container Format | Size      | Status |
+————————————–+———————+————-+——————+———–+——–+
| 592faef8-308a-4438-867a-17adf685cde4 | CirrOS 31           | qcow2       | bare             | 13147648  | active |
| d0e90250-5814-4685-9b8d-65ec9daa7117 | Fedora 20 x86_64    | qcow2       | bare             | 214106112 | active |
| 3e6eea8e-32e6-4373-9eb1-e04b8a3167f9 | Ubuntu Server 13.10 | qcow2       | bare             | 244777472 | active |
+————————————–+———————+————-+——————+———–+——–+

I have to note that the schema `cinder create --image-id .. --display_name VOL_NAME SIZE` && `nova boot --flavor 2 --user-data=./myfile.txt --block_device_mapping vda=volume_id:::0 VM_NAME` does not work reliably for me at the moment.

As of 03/11 the standard schema via `cinder create --image-id IMAGE_ID --display_name VOL_NAME SIZE` && `nova boot --flavor 2 --user-data=./myfile.txt --block_device_mapping vda=VOLUME_ID:::0 INSTANCE_NAME` started to work fine. However, the schema described below, on the contrary, stopped working on glusterfs-based cinder volumes.
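
For clarity, the two schemas differ as follows (a sketch with placeholder names; IMAGE_ID, VOLUME_ID, VOL_NAME and INSTANCE_NAME stand in for real values):

# Schema 1: pre-create the bootable volume from an image, then boot from it
$ cinder create --image-id IMAGE_ID --display_name VOL_NAME 5
$ nova boot --flavor 2 --user-data=./myfile.txt --block_device_mapping vda=VOLUME_ID:::0 INSTANCE_NAME

# Schema 2: let nova create the volume from the image at boot time
$ nova boot --flavor 2 --user-data=./myfile.txt \
  --block-device source=image,id=IMAGE_ID,dest=volume,size=5,shutdown=preserve,bootindex=0 INSTANCE_NAME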

[root@dallas1 ~(keystone_boris)]$ nova boot --flavor 2 --user-data=./myfile.txt --block-device source=image,id=d0e90250-5814-4685-9b8d-65ec9daa7117,dest=volume,size=5,shutdown=preserve,bootindex=0 VF20RS012

+————————————–+————————————————-+
| Property                             | Value                                           |
+————————————–+————————————————-+
| status                               | BUILD                                           |
| updated                              | 2014-03-09T12:41:22Z                            |
| OS-EXT-STS:task_state                | scheduling                                      |
| key_name                             | None                                            |
| image                                | Attempt to boot from volume – no image supplied |
| hostId                               |                                                 |
| OS-EXT-STS:vm_state                  | building                                        |
| OS-SRV-USG:launched_at               | None                                            |
| flavor                               | m1.small                                        |
| id                                   | 8142ee4c-ef56-4b61-8a0b-ecd82d21484f            |
| security_groups                      | [{u’name’: u’default’}]                         |
| OS-SRV-USG:terminated_at             | None                                            |
| user_id                              | df4a984ce2f24848a6b84aaa99e296f1                |
| name                                 | VF20RS012                                       |
| adminPass                            | eFDhC8ZSCFU2                                    |
| tenant_id                            | e896be65e94a4893b870bc29ba86d7eb                |
| created                              | 2014-03-09T12:41:22Z                            |
| OS-DCF:diskConfig                    | MANUAL                                          |
| metadata                             | {}                                              |
| os-extended-volumes:volumes_attached | []                                              |
| accessIPv4                           |                                                 |
| accessIPv6                           |                                                 |
| progress                             | 0                                               |
| OS-EXT-STS:power_state               | 0                                               |
| OS-EXT-AZ:availability_zone          | nova                                            |
| config_drive                         |                                                 |
+————————————–+————————————————-+

[root@dallas1 ~(keystone_boris)]$ nova list

+————————————–+———–+———–+———————-+————-+—————————–+
| ID                                   | Name      | Status    | Task State           | Power State | Networks                    |
+————————————–+———–+———–+———————-+————-+—————————–+
| 9566adec-9406-4c3e-bce5-109ecb8bcf6b | VF20RS007 | SUSPENDED | None                 | Shutdown    | int=10.0.0.2, 192.168.1.101 |
| 8142ee4c-ef56-4b61-8a0b-ecd82d21484f | VF20RS012 | BUILD     | block_device_mapping | NOSTATE     |                             |
+————————————–+———–+———–+———————-+————-+—————————–+
WAIT …
[root@dallas1 ~(keystone_boris)]$ nova list
+————————————–+———–+———–+————+————-+—————————–+
| ID                                   | Name      | Status    | Task State | Power State | Networks                    |
+————————————–+———–+———–+————+————-+—————————–+
| 9566adec-9406-4c3e-bce5-109ecb8bcf6b | VF20RS007 | SUSPENDED | None       | Shutdown    | int=10.0.0.2, 192.168.1.101 |
| 8142ee4c-ef56-4b61-8a0b-ecd82d21484f | VF20RS012 | ACTIVE    | None       | Running     | int=10.0.0.4                |
+————————————–+———–+———–+————+————-+—————————–+
[root@dallas1 ~(keystone_boris)]$ neutron floatingip-create ext

Created a new floatingip:

+———————+————————————–+
| Field               | Value                                |
+———————+————————————–+
| fixed_ip_address    |                                      |
| floating_ip_address | 192.168.1.102                        |
| floating_network_id | 0ed406bf-3552-4036-9006-440f3e69618e |
| id                  | 5c74667d-9b22-4092-ae0a-70ff3a06e785 |
| port_id             |                                      |
| router_id           |                                      |
| tenant_id           | e896be65e94a4893b870bc29ba86d7eb     |
+———————+————————————–+

[root@dallas1 ~(keystone_boris)]$ neutron port-list --device-id 8142ee4c-ef56-4b61-8a0b-ecd82d21484f

+————————————–+——+——————-+———————————————————————————+
| id                                   | name | mac_address       | fixed_ips                                                                       |
+————————————–+——+——————-+———————————————————————————+
| dc60b5f4-739e-49bd-a004-3ef806e2b488 |      | fa:16:3e:70:56:cc | {“subnet_id”: “2e838119-3e2e-46e8-b7cc-6d00975046f2”, “ip_address”: “10.0.0.4”} |
+————————————–+——+——————-+———————————————————————————+

[root@dallas1 ~(keystone_boris)]$ neutron floatingip-associate 5c74667d-9b22-4092-ae0a-70ff3a06e785 dc60b5f4-739e-49bd-a004-3ef806e2b488

Associated floatingip 5c74667d-9b22-4092-ae0a-70ff3a06e785
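
Since this create-port-list-associate sequence repeats for every instance, it can be scripted. A minimal sketch (a hypothetical helper, not part of the original post; it assumes the external network is named "ext" and parses the table layout shown above):

#!/bin/bash
# assign-fip.sh INSTANCE_ID -- allocate a floating IP from "ext" and bind it to the instance port
INSTANCE_ID=$1
PORT_ID=$(neutron port-list --device-id "$INSTANCE_ID" | awk 'NR==4 {print $2}')
FIP_ID=$(neutron floatingip-create ext | awk '/ id /{print $4}')
neutron floatingip-associate "$FIP_ID" "$PORT_ID"
neutron floatingip-show "$FIP_ID"     # prints the assigned floating_ip_address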

[root@dallas1 ~(keystone_boris)]$ ping 192.168.1.102

PING 192.168.1.102 (192.168.1.102) 56(84) bytes of data.
64 bytes from 192.168.1.102: icmp_seq=1 ttl=63 time=6.23 ms
64 bytes from 192.168.1.102: icmp_seq=2 ttl=63 time=0.702 ms
64 bytes from 192.168.1.102: icmp_seq=3 ttl=63 time=1.07 ms
64 bytes from 192.168.1.102: icmp_seq=4 ttl=63 time=0.693 ms
64 bytes from 192.168.1.102: icmp_seq=5 ttl=63 time=1.80 ms
64 bytes from 192.168.1.102: icmp_seq=6 ttl=63 time=0.750 ms
^C

[root@dallas1 ~(keystone_boris)]$ nova list

+————————————–+———–+——–+————+————-+—————————–+
| ID                                   | Name      | Status | Task State | Power State | Networks                    |
+————————————–+———–+——–+————+————-+—————————–+
| 9566adec-9406-4c3e-bce5-109ecb8bcf6b | VF20RS007 | ACTIVE | None       | Running     | int=10.0.0.2, 192.168.1.101 |
| 8142ee4c-ef56-4b61-8a0b-ecd82d21484f | VF20RS012 | ACTIVE | None       | Running     | int=10.0.0.4, 192.168.1.102 |
+————————————–+———–+——–+————+————-+—————————–+

[root@dallas1 ~(keystone_boris)]$ cinder list

+————————————–+——–+————–+——+————-+———-+————————————–+
|                  ID                  | Status | Display Name | Size | Volume Type | Bootable |             Attached to              |
+————————————–+——–+————–+——+————-+———-+————————————–+
| 575be853-b104-458e-bc72-1785ef524416 | in-use |              |  5   |     None    |   true   | 8142ee4c-ef56-4b61-8a0b-ecd82d21484f |
| 9794bd45-8923-4f3e-a48f-fa1d62a964f8 | in-use |              |  5   |     None    |   true   | 9566adec-9406-4c3e-bce5-109ecb8bcf6b |
+————————————–+——–+————–+——+————-+———-+————————————–+

On Compute:-

[root@dallas1 ~]# ssh 192.168.1.140

Last login: Sun Mar  9 16:46:40 2014

[root@dallas2 ~]# df -h

Filesystem                       Size  Used Avail Use% Mounted on
/dev/mapper/fedora01-root        187G   18G  160G  11% /
devtmpfs                         3.9G     0  3.9G   0% /dev
tmpfs                            3.9G  3.1M  3.9G   1% /dev/shm
tmpfs                            3.9G  9.4M  3.9G   1% /run
tmpfs                            3.9G     0  3.9G   0% /sys/fs/cgroup
tmpfs                            3.9G  115M  3.8G   3% /tmp
/dev/sdb5                        477M  122M  327M  28% /boot
192.168.1.130:cinder-volumes012  187G   32G  146G  18% /var/lib/nova/mnt/1c9688348ab38662e3ac8fb121077d34

[root@dallas2 ~]# ps -ef| grep nova

nova      1548     1  0 16:29 ?        00:00:42 /usr/bin/python /usr/bin/nova-compute --logfile /var/log/nova/compute.log

root      3005     1  0 16:34 ?        00:00:38 /usr/sbin/glusterfs --volfile-id=cinder-volumes012 --volfile-server=192.168.1.130 /var/lib/nova/mnt/1c9688348ab38662e3ac8fb121077d34

qemu      4762     1 58 16:42 ?        00:52:17 /usr/bin/qemu-system-x86_64 -name instance-00000061 -S -machine pc-i440fx-1.6,accel=tcg,usb=off -cpu Penryn,+osxsave,+xsave,+pdcm,+xtpr,+tm2,+est,+smx,+vmx,+ds_cpl,+monitor,+dtes64,+pbe,+tm,+ht,+ss,+acpi,+ds,+vme -m 2048 -realtime mlock=off -smp 1,sockets=1,cores=1,threads=1 -uuid 8142ee4c-ef56-4b61-8a0b-ecd82d21484f -smbios type=1,manufacturer=Fedora Project,product=OpenStack Nova,version=2013.2.2-1.fc20,serial=6050001e-8c00-00ac-818a-90e6ba2d11eb,uuid=8142ee4c-ef56-4b61-8a0b-ecd82d21484f -no-user-config -nodefaults -chardev socket,id=charmonitor,path=/var/lib/libvirt/qemu/instance-00000061.monitor,server,nowait -mon chardev=charmonitor,id=monitor,mode=control -rtc base=utc -no-shutdown -device piix3-usb-uhci,id=usb,bus=pci.0,addr=0x1.0x2 -drive file=/var/lib/nova/mnt/1c9688348ab38662e3ac8fb121077d34/volume-575be853-b104-458e-bc72-1785ef524416,if=none,id=drive-virtio-disk0,format=raw,serial=575be853-b104-458e-bc72-1785ef524416,cache=none -device virtio-blk-pci,scsi=off,bus=pci.0,addr=0x4,drive=drive-virtio-disk0,id=virtio-disk0,bootindex=1 -netdev tap,fd=24,id=hostnet0 -device virtio-net-pci,netdev=hostnet0,id=net0,mac=fa:16:3e:70:56:cc,bus=pci.0,addr=0x3 -chardev file,id=charserial0,path=/var/lib/nova/instances/8142ee4c-ef56-4b61-8a0b-ecd82d21484f/console.log -device isa-serial,chardev=charserial0,id=serial0 -chardev pty,id=charserial1 -device isa-serial,chardev=charserial1,id=serial1 -device usb-tablet,id=input0 -vnc 127.0.0.1:0 -k en-us -device cirrus-vga,id=video0,bus=pci.0,addr=0x2 -device virtio-balloon-pci,id=balloon0,bus=pci.0,addr=0x5

qemu      6330     1 44 16:49 ?        00:36:02 /usr/bin/qemu-system-x86_64 -name instance-0000005f -S -machine pc-i440fx-1.6,accel=tcg,usb=off -cpu Penryn,+osxsave,+xsave,+pdcm,+xtpr,+tm2,+est,+smx,+vmx,+ds_cpl,+monitor,+dtes64,+pbe,+tm,+ht,+ss,+acpi,+ds,+vme -m 2048 -realtime mlock=off -smp 1,sockets=1,cores=1,threads=1 -uuid 9566adec-9406-4c3e-bce5-109ecb8bcf6b -smbios type=1,manufacturer=Fedora Project,product=OpenStack Nova,version=2013.2.2-1.fc20,serial=6050001e-8c00-00ac-818a-90e6ba2d11eb,uuid=9566adec-9406-4c3e-bce5-109ecb8bcf6b -no-user-config -nodefaults -chardev socket,id=charmonitor,path=/var/lib/libvirt/qemu/instance-0000005f.monitor,server,nowait -mon chardev=charmonitor,id=monitor,mode=control -rtc base=utc -no-shutdown -device piix3-usb-uhci,id=usb,bus=pci.0,addr=0x1.0x2 -drive file=/var/lib/nova/mnt/1c9688348ab38662e3ac8fb121077d34/volume-9794bd45-8923-4f3e-a48f-fa1d62a964f8,if=none,id=drive-virtio-disk0,format=raw,serial=9794bd45-8923-4f3e-a48f-fa1d62a964f8,cache=none -device virtio-blk-pci,scsi=off,bus=pci.0,addr=0x4,drive=drive-virtio-disk0,id=virtio-disk0,bootindex=1 -netdev tap,fd=27,id=hostnet0 -device virtio-net-pci,netdev=hostnet0,id=net0,mac=fa:16:3e:50:84:72,bus=pci.0,addr=0x3 -chardev file,id=charserial0,path=/var/lib/nova/instances/9566adec-9406-4c3e-bce5-109ecb8bcf6b/console.log -device isa-serial,chardev=charserial0,id=serial0 -chardev pty,id=charserial1 -device isa-serial,chardev=charserial1,id=serial1 -device usb-tablet,id=input0 -vnc 127.0.0.1:1 -k en-us -device cirrus-vga,id=video0,bus=pci.0,addr=0x2 -incoming fd:24 -device virtio-balloon-pci,id=balloon0,bus=pci.0,addr=0x5

root     24713 24622  0 18:11 pts/4    00:00:00 grep --color=auto nova

[root@dallas2 ~]# ps -ef| grep neutron

neutron   1549     1  0 16:29 ?        00:00:53 /usr/bin/python /usr/bin/neutron-openvswitch-agent --config-file /usr/share/neutron/neutron-dist.conf --config-file /etc/neutron/neutron.conf --config-file /etc/neutron/plugins/openvswitch/ovs_neutron_plugin.ini --log-file /var/log/neutron/openvswitch-agent.log

root     24981 24622  0 18:12 pts/4    00:00:00 grep --color=auto neutron

Top at Compute node (192.168.1.140)

Runtime at Compute node (dallas2, 192.168.1.140)

 ******************************************************

Building Ubuntu 14.04 instance via cinder volume

******************************************************

[root@dallas1 ~(keystone_boris)]$ glance image-list
+————————————–+———————+————-+——————+———–+——–+
| ID                                   | Name                | Disk Format | Container Format | Size      | Status |
+————————————–+———————+————-+——————+———–+——–+
| 592faef8-308a-4438-867a-17adf685cde4 | CirrOS 31           | qcow2       | bare             | 13147648  | active |
| d0e90250-5814-4685-9b8d-65ec9daa7117 | Fedora 20 x86_64    | qcow2       | bare             | 214106112 | active |
| c2b7c3ed-e25d-44c4-a5e7-4e013c4a8b00 | Ubuntu 14.04        | qcow2       | bare             | 264176128 | active |
| 3e6eea8e-32e6-4373-9eb1-e04b8a3167f9 | Ubuntu Server 13.10 | qcow2       | bare             | 244777472 | active |
+————————————–+———————+————-+——————+———–+——–+
[root@dallas1 ~(keystone_boris)]$ cinder create --image-id c2b7c3ed-e25d-44c4-a5e7-4e013c4a8b00 --display_name UbuntuTrusty 5
+———————+————————————–+
|       Property      |                Value                 |
+———————+————————————–+
|     attachments     |                  []                  |
|  availability_zone  |                 nova                 |
|       bootable      |                false                 |
|      created_at     |      2014-03-10T06:35:39.873978      |
| display_description |                 None                 |
|     display_name    |             UbuntuTrusty             |
|          id         | 8bcc02a7-b9ba-4cd6-a6b9-0574889bf8d2 |
|       image_id      | c2b7c3ed-e25d-44c4-a5e7-4e013c4a8b00 |
|       metadata      |                  {}                  |
|         size        |                  5                   |
|     snapshot_id     |                 None                 |
|     source_volid    |                 None                 |
|        status       |               creating               |
|     volume_type     |                 None                 |
+———————+————————————–+

[root@dallas1 ~(keystone_boris)]$ cinder list

+————————————–+———–+————–+——+————-+———-+————————————–+
|                  ID                  |   Status  | Display Name | Size | Volume Type | Bootable |             Attached to              |
+————————————–+———–+————–+——+————-+———-+————————————–+
| 56ceaaa8-c0ec-45f3-98a4-555c1231b34e |   in-use  |              |  5   |     None    |   true   | e29606c5-582f-4766-ae1b-52043a698743 |
| 575be853-b104-458e-bc72-1785ef524416 |   in-use  |              |  5   |     None    |   true   | 8142ee4c-ef56-4b61-8a0b-ecd82d21484f |
| 8bcc02a7-b9ba-4cd6-a6b9-0574889bf8d2 | available | UbuntuTrusty |  5   |     None    |   true   |                                      |
| 9794bd45-8923-4f3e-a48f-fa1d62a964f8 |   in-use  |              |  5   |     None    |   true   | 9566adec-9406-4c3e-bce5-109ecb8bcf6b |
+————————————–+———–+————–+——+————-+———-+————————————–+

[root@dallas1 ~(keystone_boris)]$ nova boot --flavor 2 --user-data=./myfile.txt --block_device_mapping vda=8bcc02a7-b9ba-4cd6-a6b9-0574889bf8d2:::0 UbuntuTR01

+————————————–+—————————————————-+
| Property                             | Value                                              |
+————————————–+—————————————————-+

| status                               | BUILD                                              |
| updated                              | 2014-03-10T06:40:14Z                               |
| OS-EXT-STS:task_state                | scheduling                                         |
| key_name                             | None                                               |
| image                                | Attempt to boot from volume – no image supplied    |
| hostId                               |                                                    |
| OS-EXT-STS:vm_state                  | building                                           |
| OS-SRV-USG:launched_at               | None                                               |
| flavor                               | m1.small                                           |
| id                                   | 0859e52d-c07b-4f56-ac79-2b37080d2843               |
| security_groups                      | [{u’name’: u’default’}]                            |
| OS-SRV-USG:terminated_at             | None                                               |
| user_id                              | df4a984ce2f24848a6b84aaa99e296f1                   |
| name                                 | UbuntuTR01                                         |
| adminPass                            | L8VuhttJMbJf                                       |
| tenant_id                            | e896be65e94a4893b870bc29ba86d7eb                   |
| created                              | 2014-03-10T06:40:13Z                               |
| OS-DCF:diskConfig                    | MANUAL                                             |
| metadata                             | {}                                                 |
| os-extended-volumes:volumes_attached | [{u’id’: u’8bcc02a7-b9ba-4cd6-a6b9-0574889bf8d2′}] |
| accessIPv4                           |                                                    |
| accessIPv6                           |                                                    |
| progress                             | 0                                                  |
| OS-EXT-STS:power_state               | 0                                                  |
| OS-EXT-AZ:availability_zone          | nova                                               |
| config_drive                         |                                                    |
+————————————–+—————————————————-+

[root@dallas1 ~(keystone_boris)]$ nova list

+————————————–+————+———–+————+————-+—————————–+
| ID                                   | Name       | Status    | Task State | Power State | Networks                    |
+————————————–+————+———–+————+————-+—————————–+
| 0859e52d-c07b-4f56-ac79-2b37080d2843 | UbuntuTR01 | ACTIVE    | None       | Running     | int=10.0.0.6                |
| 9566adec-9406-4c3e-bce5-109ecb8bcf6b | VF20RS007  | SUSPENDED | None       | Shutdown    | int=10.0.0.2, 192.168.1.101 |
| 8142ee4c-ef56-4b61-8a0b-ecd82d21484f | VF20RS012  | SUSPENDED | None       | Shutdown    | int=10.0.0.4, 192.168.1.102 |
| e29606c5-582f-4766-ae1b-52043a698743 | VF20RS016  | ACTIVE    | None       | Running     | int=10.0.0.5, 192.168.1.103 |
+————————————–+————+———–+————+————-+—————————–+

[root@dallas1 ~(keystone_boris)]$ neutron floatingip-create ext

Created a new floatingip:

+———————+————————————–+
| Field               | Value                                |
+———————+————————————–+
| fixed_ip_address    |                                      |
| floating_ip_address | 192.168.1.104                        |
| floating_network_id | 0ed406bf-3552-4036-9006-440f3e69618e |
| id                  | 9498ac85-82b0-468a-b526-64a659080ab9 |
| port_id             |                                      |
| router_id           |                                      |
| tenant_id           | e896be65e94a4893b870bc29ba86d7eb     |
+———————+————————————–+

[root@dallas1 ~(keystone_boris)]$ neutron port-list --device-id 0859e52d-c07b-4f56-ac79-2b37080d2843

+————————————–+——+——————-+———————————————————————————+
| id                                   | name | mac_address       | fixed_ips                                                                       |
+————————————–+——+——————-+———————————————————————————+
| 1f02fe57-d844-4fd8-a325-646f27163c8b |      | fa:16:3e:3f:a3:d4 | {“subnet_id”: “2e838119-3e2e-46e8-b7cc-6d00975046f2”, “ip_address”: “10.0.0.6”} |
+————————————–+——+——————-+———————————————————————————+

[root@dallas1 ~(keystone_boris)]$ neutron floatingip-associate  9498ac85-82b0-468a-b526-64a659080ab9 1f02fe57-d844-4fd8-a325-646f27163c8b

Associated floatingip 9498ac85-82b0-468a-b526-64a659080ab9

[root@dallas1 ~(keystone_boris)]$ ping 192.168.1.104

PING 192.168.1.104 (192.168.1.104) 56(84) bytes of data.
64 bytes from 192.168.1.104: icmp_seq=1 ttl=63 time=2.35 ms
64 bytes from 192.168.1.104: icmp_seq=2 ttl=63 time=2.56 ms
64 bytes from 192.168.1.104: icmp_seq=3 ttl=63 time=1.17 ms
64 bytes from 192.168.1.104: icmp_seq=4 ttl=63 time=4.08 ms
64 bytes from 192.168.1.104: icmp_seq=5 ttl=63 time=2.19 ms
^C


Up-to-Date Procedure for Creating Cinder ThinLVM Based Cloud Instances (F20, Ubuntu 13.10) on a Fedora 20 Havana Compute Node

March 4, 2014

  This post follows up  https://bderzhavets.wordpress.com/2014/01/24/setting-up-two-physical-node-openstack-rdo-havana-neutron-gre-on-fedora-20-boxes-with-both-controller-and-compute-nodes-each-one-having-one-ethernet-adapter/

In my experience `cinder create --image-id Image_id --display_name .....` && `nova boot --flavor 2 --user-data=./myfile.txt --block_device_mapping vda=Volume_id:::0 <VM_NAME>` no longer works, giving an error :-

$ tail -f /var/log/nova/compute.log  reports :-

 2014-03-03 13:28:43.646 1344 WARNING nova.virt.libvirt.driver [req-1bd6630e-b799-4d78-b702-f06da5f1464b df4a984ce2f24848a6b84aaa99e296f1 e896be65e94a4893b870bc29b a86d7eb] [instance: f621815f-3805-4f52-a878-9040c6a4af53] File injection into a boot from volume instance is not supported

This is followed by a Python stack trace and a Nova exception.

A workaround for this issue follows below. First stop and then start the "tgtd" daemon :-

[root@dallas1 ~(keystone_admin)]$ service tgtd stop
Redirecting to /bin/systemctl stop  tgtd.service
[root@dallas1 ~(keystone_admin)]$ service tgtd status
Redirecting to /bin/systemctl status  tgtd.service
tgtd.service – tgtd iSCSI target daemon
Loaded: loaded (/usr/lib/systemd/system/tgtd.service; enabled)
Active: inactive (dead) since Tue 2014-03-04 11:46:18 MSK; 8s ago
Process: 11978 ExecStop=/usr/sbin/tgtadm --op delete --mode system (code=exited, status=0/SUCCESS)
Process: 11974 ExecStop=/usr/sbin/tgt-admin --update ALL -c /dev/null (code=exited, status=0/SUCCESS)
Process: 11972 ExecStop=/usr/sbin/tgtadm --op update --mode sys --name State -v offline (code=exited, status=0/SUCCESS)
Process: 1797 ExecStartPost=/usr/sbin/tgtadm --op update --mode sys --name State -v ready (code=exited, status=0/SUCCESS)
Process: 1791 ExecStartPost=/usr/sbin/tgt-admin -e -c $TGTD_CONFIG (code=exited, status=0/SUCCESS)
Process: 1790 ExecStartPost=/usr/sbin/tgtadm --op update --mode sys --name State -v offline (code=exited, status=0/SUCCESS)
Process: 1173 ExecStartPost=/bin/sleep 5 (code=exited, status=0/SUCCESS)
Process: 1172 ExecStart=/usr/sbin/tgtd -f $TGTD_OPTS (code=exited, status=0/SUCCESS)
Main PID: 1172 (code=exited, status=0/SUCCESS)

Mar 04 11:14:04 dallas1.localdomain tgtd[1172]: tgtd: work_timer_start(146) use timer_fd based scheduler
Mar 04 11:14:04 dallas1.localdomain tgtd[1172]: tgtd: bs_init_signalfd(271) could not open backing-store module direct…store
Mar 04 11:14:04 dallas1.localdomain tgtd[1172]: tgtd: bs_init(390) use signalfd notification
Mar 04 11:14:09 dallas1.localdomain systemd[1]: Started tgtd iSCSI target daemon.
Mar 04 11:26:01 dallas1.localdomain tgtd[1172]: tgtd: device_mgmt(246) sz:69 params:path=/dev/cinder-volumes/volume-a0…2864d
Mar 04 11:26:01 dallas1.localdomain tgtd[1172]: tgtd: bs_thread_open(412) 16
Mar 04 11:33:32 dallas1.localdomain tgtd[1172]: tgtd: device_mgmt(246) sz:69 params:path=/dev/cinder-volumes/volume-01…f2969
Mar 04 11:33:32 dallas1.localdomain tgtd[1172]: tgtd: bs_thread_open(412) 16
Mar 04 11:46:18 dallas1.localdomain systemd[1]: Stopping tgtd iSCSI target daemon…
Mar 04 11:46:18 dallas1.localdomain systemd[1]: Stopped tgtd iSCSI target daemon.
Hint: Some lines were ellipsized, use -l to show in full.

[root@dallas1 ~(keystone_admin)]$ service tgtd start
Redirecting to /bin/systemctl start  tgtd.service
[root@dallas1 ~(keystone_admin)]$ service tgtd status -l
Redirecting to /bin/systemctl status  -l tgtd.service
tgtd.service – tgtd iSCSI target daemon
Loaded: loaded (/usr/lib/systemd/system/tgtd.service; enabled)
Active: active (running) since Tue 2014-03-04 11:46:40 MSK; 4s ago
Process: 11978 ExecStop=/usr/sbin/tgtadm --op delete --mode system (code=exited, status=0/SUCCESS)
Process: 11974 ExecStop=/usr/sbin/tgt-admin --update ALL -c /dev/null (code=exited, status=0/SUCCESS)
Process: 11972 ExecStop=/usr/sbin/tgtadm --op update --mode sys --name State -v offline (code=exited, status=0/SUCCESS)
Process: 12084 ExecStartPost=/usr/sbin/tgtadm --op update --mode sys --name State -v ready (code=exited, status=0/SUCCESS)
Process: 12078 ExecStartPost=/usr/sbin/tgt-admin -e -c $TGTD_CONFIG (code=exited, status=0/SUCCESS)
Process: 12076 ExecStartPost=/usr/sbin/tgtadm --op update --mode sys --name State -v offline (code=exited, status=0/SUCCESS)
Process: 12052 ExecStartPost=/bin/sleep 5 (code=exited, status=0/SUCCESS)
Main PID: 12051 (tgtd)
CGroup: /system.slice/tgtd.service
└─12051 /usr/sbin/tgtd -f

Mar 04 11:46:35 dallas1.localdomain systemd[1]: Starting tgtd iSCSI target daemon…
Mar 04 11:46:35 dallas1.localdomain tgtd[12051]: librdmacm: Warning: couldn’t read ABI version.
Mar 04 11:46:35 dallas1.localdomain tgtd[12051]: librdmacm: Warning: assuming: 4
Mar 04 11:46:35 dallas1.localdomain tgtd[12051]: librdmacm: Fatal: unable to get RDMA device list
Mar 04 11:46:35 dallas1.localdomain tgtd[12051]: tgtd: iser_ib_init(3351) Failed to initialize RDMA; load kernel modules?
Mar 04 11:46:35 dallas1.localdomain tgtd[12051]: tgtd: work_timer_start(146) use timer_fd based scheduler
Mar 04 11:46:35 dallas1.localdomain tgtd[12051]: tgtd: bs_init_signalfd(271) could not open backing-store module directory /usr/lib64/tgt/backing-store
Mar 04 11:46:35 dallas1.localdomain tgtd[12051]: tgtd: bs_init(390) use signalfd notification
Mar 04 11:46:40 dallas1.localdomain systemd[1]: Started tgtd iSCSI target daemon.
[root@dallas1 ~(keystone_admin)]$ for i in api scheduler volume ; do service openstack-cinder-${i} restart ;done
Redirecting to /bin/systemctl restart  openstack-cinder-api.service
Redirecting to /bin/systemctl restart  openstack-cinder-scheduler.service
Redirecting to /bin/systemctl restart  openstack-cinder-volume.service
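
In short, the whole workaround boils down to bouncing tgtd and then the three cinder services (condensed from the steps above, run on the Controller):

[root@dallas1 ~(keystone_admin)]$ service tgtd stop
[root@dallas1 ~(keystone_admin)]$ service tgtd start
[root@dallas1 ~(keystone_admin)]$ for i in api scheduler volume ; do service openstack-cinder-${i} restart ; done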
[root@dallas1 ~(keystone_Boris)]$ glance image-list

+————————————–+———————+————-+——————+———–+——–+
| ID                                   | Name                | Disk Format | Container Format | Size      | Status |
+————————————–+———————+————-+——————+———–+——–+
| 592faef8-308a-4438-867a-17adf685cde4 | CirrOS 31           | qcow2       | bare             | 13147648  | active |
| d0e90250-5814-4685-9b8d-65ec9daa7117 | Fedora 20 x86_64    | qcow2       | bare             | 214106112 | active |
| 3e6eea8e-32e6-4373-9eb1-e04b8a3167f9 | Ubuntu Server 13.10 | qcow2       | bare             | 244777472 | active |
+————————————–+———————+————-+——————+———–+——–+

Create a thin LVM backed instance via Nova, with the login "fedora"/"mysecret" set through user-data, in one command
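
The content of myfile.txt is not shown in this post; a cloud-init user-data file along the following lines would produce the "fedora"/"mysecret" login described above (a guess at its layout, not the author's actual file; "fedora" is the default user of the Fedora 20 cloud image):

#cloud-config
password: mysecret
chpasswd: { expire: False }
ssh_pwauth: True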

[root@dallas1 ~(keystone_boris)]$ glance image-list

+————————————–+———————+————-+——————+———–+——–+
| ID                                   | Name                | Disk Format | Container Format | Size      | Status |
+————————————–+———————+————-+——————+———–+——–+
| 592faef8-308a-4438-867a-17adf685cde4 | CirrOS 31           | qcow2       | bare             | 13147648  | active |
| d0e90250-5814-4685-9b8d-65ec9daa7117 | Fedora 20 x86_64    | qcow2       | bare             | 214106112 | active |
| 3e6eea8e-32e6-4373-9eb1-e04b8a3167f9 | Ubuntu Server 13.10 | qcow2       | bare             | 244777472 | active |
+————————————–+———————+————-+——————+———–+——–+
[root@dallas1 ~(keystone_boris)]$ nova boot --flavor 2 --user-data=./myfile.txt --block-device source=image,id=d0e90250-5814-4685-9b8d-65ec9daa7117,dest=volume,size=5,shutdown=preserve,bootindex=0 VF20RS01

+————————————–+————————————————-+
| Property                             | Value                                           |
+————————————–+————————————————-+
| status                               | BUILD                                           |
| updated                              | 2014-03-07T05:50:18Z                            |
| OS-EXT-STS:task_state                | scheduling                                      |
| key_name                             | None                                            |
| image                                | Attempt to boot from volume – no image supplied |
| hostId                               |                                                 |
| OS-EXT-STS:vm_state                  | building                                        |
| OS-SRV-USG:launched_at               | None                                            |
| flavor                               | m1.small                                        |
| id                                   | 770e33f7-7aab-49f1-95ca-3cf343f744ef            |
| security_groups                      | [{u’name’: u’default’}]                         |
| OS-SRV-USG:terminated_at             | None                                            |
| user_id                              | df4a984ce2f24848a6b84aaa99e296f1                |
| name                                 | VF20RS01                                        |
| adminPass                            | CqjGVUm9bbs9                                    |
| tenant_id                            | e896be65e94a4893b870bc29ba86d7eb                |
| created                              | 2014-03-07T05:50:18Z                            |
| OS-DCF:diskConfig                    | MANUAL                                          |
| metadata                             | {}                                              |
| os-extended-volumes:volumes_attached | []                                              |
| accessIPv4                           |                                                 |
| accessIPv6                           |                                                 |
| progress                             | 0                                               |
| OS-EXT-STS:power_state               | 0                                               |
| OS-EXT-AZ:availability_zone          | nova                                            |
| config_drive                         |                                                 |
+————————————–+————————————————-+

[root@dallas1 ~(keystone_boris)]$ nova list

+————————————–+———-+——–+———————-+————-+———-+
| ID                                   | Name     | Status | Task State           | Power State | Networks |
+————————————–+———-+——–+———————-+————-+———-+
| 770e33f7-7aab-49f1-95ca-3cf343f744ef | VF20RS01 | BUILD  | block_device_mapping | NOSTATE     |          |
+————————————–+———-+——–+———————-+————-+———-+

[root@dallas1 ~(keystone_boris)]$ nova list

+————————————–+———-+——–+————+————-+————–+
| ID                                   | Name     | Status | Task State | Power State | Networks     |
+————————————–+———-+——–+————+————-+————–+
| 770e33f7-7aab-49f1-95ca-3cf343f744ef | VF20RS01 | ACTIVE | None       | Running     | int=10.0.0.2 |
+————————————–+———-+——–+————+————-+————–+

[root@dallas1 ~(keystone_boris)]$ neutron floatingip-create ext

Created a new floatingip:

+———————+————————————–+
| Field               | Value                                |
+———————+————————————–+
| fixed_ip_address    |                                      |
| floating_ip_address | 192.168.1.101                        |
| floating_network_id | 0ed406bf-3552-4036-9006-440f3e69618e |
| id                  | f7d9cd3f-e544-4f23-821d-0307ed4eb852 |
| port_id             |                                      |
| router_id           |                                      |
| tenant_id           | e896be65e94a4893b870bc29ba86d7eb     |
+———————+————————————–+

[root@dallas1 ~(keystone_boris)]$ neutron port-list --device-id 770e33f7-7aab-49f1-95ca-3cf343f744ef

+————————————–+——+——————-+———————————————————————————+
| id                                   | name | mac_address       | fixed_ips                                                                       |
+————————————–+——+——————-+———————————————————————————+
| 8b5f142e-ce99-40e0-bbbe-620b201c0323 |      | fa:16:3e:0d:c4:e6 | {“subnet_id”: “2e838119-3e2e-46e8-b7cc-6d00975046f2”, “ip_address”: “10.0.0.2”} |
+————————————–+——+——————-+———————————————————————————+

[root@dallas1 ~(keystone_boris)]$ neutron floatingip-associate f7d9cd3f-e544-4f23-821d-0307ed4eb852 8b5f142e-ce99-40e0-bbbe-620b201c0323
Associated floatingip f7d9cd3f-e544-4f23-821d-0307ed4eb852

[root@dallas1 ~(keystone_boris)]$ ping 192.168.1.101

PING 192.168.1.101 (192.168.1.101) 56(84) bytes of data.
64 bytes from 192.168.1.101: icmp_seq=1 ttl=63 time=7.75 ms
64 bytes from 192.168.1.101: icmp_seq=2 ttl=63 time=1.06 ms
64 bytes from 192.168.1.101: icmp_seq=3 ttl=63 time=1.27 ms
64 bytes from 192.168.1.101: icmp_seq=4 ttl=63 time=1.43 ms
64 bytes from 192.168.1.101: icmp_seq=5 ttl=63 time=1.80 ms
64 bytes from 192.168.1.101: icmp_seq=6 ttl=63 time=0.916 ms
64 bytes from 192.168.1.101: icmp_seq=7 ttl=63 time=0.919 ms
64 bytes from 192.168.1.101: icmp_seq=8 ttl=63 time=0.930 ms
64 bytes from 192.168.1.101: icmp_seq=9 ttl=63 time=0.977 ms
64 bytes from 192.168.1.101: icmp_seq=10 ttl=63 time=0.690 ms
^C

— 192.168.1.101 ping statistics —

10 packets transmitted, 10 received, 0% packet loss, time 9008ms

rtt min/avg/max/mdev = 0.690/1.776/7.753/2.015 ms

[root@dallas1 ~(keystone_boris)]$ glance image-list

+————————————–+———————+————-+——————+———–+——–+
| ID                                   | Name                | Disk Format | Container Format | Size      | Status |
+————————————–+———————+————-+——————+———–+——–+
| 592faef8-308a-4438-867a-17adf685cde4 | CirrOS 31           | qcow2       | bare             | 13147648  | active |
| d0e90250-5814-4685-9b8d-65ec9daa7117 | Fedora 20 x86_64    | qcow2       | bare             | 214106112 | active |
| 3e6eea8e-32e6-4373-9eb1-e04b8a3167f9 | Ubuntu Server 13.10 | qcow2       | bare             | 244777472 | active |
+————————————–+———————+————-+——————+———–+——–+

[root@dallas1 ~(keystone_boris)]$ nova boot --flavor 2 --user-data=./myfile.txt --block-device source=image,id=3e6eea8e-32e6-4373-9eb1-e04b8a3167f9,dest=volume,size=5,shutdown=preserve,bootindex=0 UbuntuRS01

+————————————–+————————————————-+
| Property                             | Value                                           |
+————————————–+————————————————-+
| status                               | BUILD                                           |
| updated                              | 2014-03-07T05:53:44Z                            |
| OS-EXT-STS:task_state                | scheduling                                      |
| key_name                             | None                                            |
| image                                | Attempt to boot from volume – no image supplied |
| hostId                               |                                                 |
| OS-EXT-STS:vm_state                  | building                                        |
| OS-SRV-USG:launched_at               | None                                            |
| flavor                               | m1.small                                        |
| id                                   | bfcb2120-942f-4d3f-a173-93f6076a4be8            |
| security_groups                      | [{u’name’: u’default’}]                         |
| OS-SRV-USG:terminated_at             | None                                            |
| user_id                              | df4a984ce2f24848a6b84aaa99e296f1                |
| name                                 | UbuntuRS01                                      |
| adminPass                            | bXND2XTsvuA4                                    |
| tenant_id                            | e896be65e94a4893b870bc29ba86d7eb                |
| created                              | 2014-03-07T05:53:44Z                            |
| OS-DCF:diskConfig                    | MANUAL                                          |
| metadata                             | {}                                              |
| os-extended-volumes:volumes_attached | []                                              |
| accessIPv4                           |                                                 |
| accessIPv6                           |                                                 |
| progress                             | 0                                               |
| OS-EXT-STS:power_state               | 0                                               |
| OS-EXT-AZ:availability_zone          | nova                                            |
| config_drive                         |                                                 |
+————————————–+————————————————-+

[root@dallas1 ~(keystone_boris)]$ nova list

+————————————–+————+——–+———————-+————-+—————————–+
| ID                                   | Name       | Status | Task State           | Power State | Networks                    |
+————————————–+————+——–+———————-+————-+—————————–+
| bfcb2120-942f-4d3f-a173-93f6076a4be8 | UbuntuRS01 | BUILD  | block_device_mapping | NOSTATE     |                             |
| 770e33f7-7aab-49f1-95ca-3cf343f744ef | VF20RS01   | ACTIVE | None                 | Running     | int=10.0.0.2, 192.168.1.101 |
+————————————–+————+——–+———————-+————-+—————————–+

[root@dallas1 ~(keystone_boris)]$ nova list

+————————————–+————+——–+————+————-+—————————–+
| ID                                   | Name       | Status | Task State | Power State | Networks                    |
+————————————–+————+——–+————+————-+—————————–+
| bfcb2120-942f-4d3f-a173-93f6076a4be8 | UbuntuRS01 | ACTIVE | None       | Running     | int=10.0.0.4                |
| 770e33f7-7aab-49f1-95ca-3cf343f744ef | VF20RS01   | ACTIVE | None       | Running     | int=10.0.0.2, 192.168.1.101 |
+————————————–+————+——–+————+————-+—————————–+

[root@dallas1 ~(keystone_boris)]$ neutron floatingip-create ext

Created a new floatingip:

+———————+————————————–+

| Field               | Value                                |

+———————+————————————–+
| fixed_ip_address    |                                      |
| floating_ip_address | 192.168.1.102                        |
| floating_network_id | 0ed406bf-3552-4036-9006-440f3e69618e |
| id                  | b3d3f262-5142-4a99-9b8d-431c231cb1d7 |
| port_id             |                                      |
| router_id           |                                      |
| tenant_id           | e896be65e94a4893b870bc29ba86d7eb     |
+———————+————————————–+

[root@dallas1 ~(keystone_boris)]$ neutron port-list --device-id bfcb2120-942f-4d3f-a173-93f6076a4be8

+————————————–+——+——————-+———————————————————————————+

| id                                   | name | mac_address       | fixed_ips                                                                       |

+————————————–+——+——————-+———————————————————————————+
| c81ca027-8f9b-49c3-af10-adc60f5d4d12 |      | fa:16:3e:ac:86:50 | {“subnet_id”: “2e838119-3e2e-46e8-b7cc-6d00975046f2”, “ip_address”: “10.0.0.4”} |
+————————————–+——+——————-+———————————————————————————+

[root@dallas1 ~(keystone_boris)]$ neutron floatingip-associate b3d3f262-5142-4a99-9b8d-431c231cb1d7 c81ca027-8f9b-49c3-af10-adc60f5d4d12

Associated floatingip b3d3f262-5142-4a99-9b8d-431c231cb1d7

[root@dallas1 ~(keystone_boris)]$ ping 192.168.1.102

PING 192.168.1.102 (192.168.1.102) 56(84) bytes of data.
64 bytes from 192.168.1.102: icmp_seq=1 ttl=63 time=3.84 ms
64 bytes from 192.168.1.102: icmp_seq=2 ttl=63 time=3.06 ms
64 bytes from 192.168.1.102: icmp_seq=3 ttl=63 time=6.58 ms
64 bytes from 192.168.1.102: icmp_seq=4 ttl=63 time=7.98 ms
64 bytes from 192.168.1.102: icmp_seq=5 ttl=63 time=2.09 ms
64 bytes from 192.168.1.102: icmp_seq=6 ttl=63 time=1.06 ms
64 bytes from 192.168.1.102: icmp_seq=7 ttl=63 time=3.55 ms
64 bytes from 192.168.1.102: icmp_seq=8 ttl=63 time=2.01 ms
64 bytes from 192.168.1.102: icmp_seq=9 ttl=63 time=1.05 ms
64 bytes from 192.168.1.102: icmp_seq=10 ttl=63 time=3.45 ms
64 bytes from 192.168.1.102: icmp_seq=11 ttl=63 time=2.31 ms
64 bytes from 192.168.1.102: icmp_seq=12 ttl=63 time=0.977 ms
^C

— 192.168.1.102 ping statistics —

12 packets transmitted, 12 received, 0% packet loss, time 11014ms

rtt min/avg/max/mdev = 0.977/3.168/7.985/2.091 ms

[root@dallas1 ~(keystone_boris)]$ nova boot --flavor 2 --user-data=./myfile.txt --block-device source=image,id=d0e90250-5814-4685-9b8d-65ec9daa7117,dest=volume,size=5,shutdown=preserve,bootindex=0 VF20GLX

+————————————–+————————————————-+

| Property                             | Value                                           |

+————————————–+————————————————-+
| status                               | BUILD                                           |
| updated                              | 2014-03-07T05:58:40Z                            |
| OS-EXT-STS:task_state                | scheduling                                      |
| key_name                             | None                                            |
| image                                | Attempt to boot from volume – no image supplied |
| hostId                               |                                                 |
| OS-EXT-STS:vm_state                  | building                                        |
| OS-SRV-USG:launched_at               | None                                            |
| flavor                               | m1.small                                        |
| id                                   | 62ff1641-2c96-470f-9147-9272d68d2e5c            |
| security_groups                      | [{u’name’: u’default’}]                         |
| OS-SRV-USG:terminated_at             | None                                            |
| user_id                              | df4a984ce2f24848a6b84aaa99e296f1                |
| name                                 | VF20GLX                                         |
| adminPass                            | E9KXeLp8fWig                                    |
| tenant_id                            | e896be65e94a4893b870bc29ba86d7eb                |
| created                              | 2014-03-07T05:58:40Z                            |
| OS-DCF:diskConfig                    | MANUAL                                          |
| metadata                             | {}                                              |
| os-extended-volumes:volumes_attached | []                                              |
| accessIPv4                           |                                                 |
| accessIPv6                           |                                                 |
| progress                             | 0                                               |
| OS-EXT-STS:power_state               | 0                                               |
| OS-EXT-AZ:availability_zone          | nova                                            |
| config_drive                         |                                                 |
+————————————–+————————————————-+

[root@dallas1 ~(keystone_boris)]$ nova list

+————————————–+————+——–+———————-+————-+—————————–+
| ID                                   | Name       | Status | Task State           | Power State | Networks                    |
+————————————–+————+——–+———————-+————-+—————————–+
| bfcb2120-942f-4d3f-a173-93f6076a4be8 | UbuntuRS01 | ACTIVE | None                 | Running     | int=10.0.0.4, 192.168.1.102 |
| 62ff1641-2c96-470f-9147-9272d68d2e5c | VF20GLX    | BUILD  | block_device_mapping | NOSTATE     |                             |
| 770e33f7-7aab-49f1-95ca-3cf343f744ef | VF20RS01   | ACTIVE | None                 | Running     | int=10.0.0.2, 192.168.1.101 |
+————————————–+————+——–+———————-+————-+—————————–+

[root@dallas1 ~(keystone_boris)]$ nova list

+————————————–+————+——–+————+————-+—————————–+
| ID                                   | Name       | Status | Task State | Power State | Networks                    |
+————————————–+————+——–+————+————-+—————————–+
| bfcb2120-942f-4d3f-a173-93f6076a4be8 | UbuntuRS01 | ACTIVE | None       | Running     | int=10.0.0.4, 192.168.1.102 |
| 62ff1641-2c96-470f-9147-9272d68d2e5c | VF20GLX    | ACTIVE | None       | Running     | int=10.0.0.5                |
| 770e33f7-7aab-49f1-95ca-3cf343f744ef | VF20RS01   | ACTIVE | None       | Running     | int=10.0.0.2, 192.168.1.101 |
+————————————–+————+——–+————+————-+—————————–+

[root@dallas1 ~(keystone_boris)]$ neutron floatingip-create ext

Created a new floatingip:

+———————+————————————–+
| Field               | Value                                |
+———————+————————————–+
| fixed_ip_address    |                                      |
| floating_ip_address | 192.168.1.103                        |
| floating_network_id | 0ed406bf-3552-4036-9006-440f3e69618e |
| id                  | 3fb87bb2-f485-4f1c-b2b7-7c5d90588d27 |
| port_id             |                                      |
| router_id           |                                      |
| tenant_id           | e896be65e94a4893b870bc29ba86d7eb     |
+———————+————————————–+

[root@dallas1 ~(keystone_boris)]$ neutron port-list --device-id 62ff1641-2c96-470f-9147-9272d68d2e5c

+————————————–+——+——————-+———————————————————————————+
| id                                   | name | mac_address       | fixed_ips                                                                       |
+————————————–+——+——————-+———————————————————————————+
| 0845ad30-4d2c-487d-8847-2b6e3e8b9b9d |      | fa:16:3e:2c:84:62 | {“subnet_id”: “2e838119-3e2e-46e8-b7cc-6d00975046f2”, “ip_address”: “10.0.0.5”} |
+————————————–+——+——————-+———————————————————————————+

[root@dallas1 ~(keystone_boris)]$ neutron floatingip-associate 3fb87bb2-f485-4f1c-b2b7-7c5d90588d27 0845ad30-4d2c-487d-8847-2b6e3e8b9b9d

Associated floatingip 3fb87bb2-f485-4f1c-b2b7-7c5d90588d27

[root@dallas1 ~(keystone_boris)]$ ping 192.168.1.103

PING 192.168.1.103 (192.168.1.103) 56(84) bytes of data.
64 bytes from 192.168.1.103: icmp_seq=1 ttl=63 time=4.08 ms
64 bytes from 192.168.1.103: icmp_seq=2 ttl=63 time=1.59 ms
64 bytes from 192.168.1.103: icmp_seq=3 ttl=63 time=1.22 ms
64 bytes from 192.168.1.103: icmp_seq=4 ttl=63 time=1.49 ms
64 bytes from 192.168.1.103: icmp_seq=5 ttl=63 time=1.11 ms
64 bytes from 192.168.1.103: icmp_seq=6 ttl=63 time=0.980 ms
64 bytes from 192.168.1.103: icmp_seq=7 ttl=63 time=6.71 ms
^C

— 192.168.1.103 ping statistics —

7 packets transmitted, 7 received, 0% packet loss, time 6007ms

rtt min/avg/max/mdev = 0.980/2.458/6.711/1.996 ms

[root@dallas1 ~(keystone_boris)]$ nova list
+————————————–+————+——–+————+————-+—————————–+
| ID                                   | Name       | Status | Task State | Power State | Networks                    |
+————————————–+————+——–+————+————-+—————————–+
| bfcb2120-942f-4d3f-a173-93f6076a4be8 | UbuntuRS01 | ACTIVE | None       | Running     | int=10.0.0.4, 192.168.1.102 |
| 62ff1641-2c96-470f-9147-9272d68d2e5c | VF20GLX    | ACTIVE | None       | Running     | int=10.0.0.5, 192.168.1.103 |
| 770e33f7-7aab-49f1-95ca-3cf343f744ef | VF20RS01   | ACTIVE | None       | Running     | int=10.0.0.2, 192.168.1.101 |
+————————————–+————+——–+————+————-+—————————–+
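The floating IP sequence above (create the floating IP, look up the instance port, associate the two) can be collapsed into a few shell lines. This is only a minimal sketch, assuming the tenant credentials are already sourced, the external network is named ext, the instance has exactly one port, and the CLI output columns match the tables shown above:

# Hypothetical helper: attach a new floating IP from network "ext" to a server picked by name
SERVER_NAME=VF20GLX                                                      # the server shown above
SERVER_ID=$(nova list | awk -v n="$SERVER_NAME" '$4 == n {print $2}')    # column 4 is Name, column 2 is ID
PORT_ID=$(neutron port-list --device-id "$SERVER_ID" | awk 'NR == 4 {print $2}')   # single data row assumed
FIP_ID=$(neutron floatingip-create ext | awk '/ id / {print $4}')        # the " id " row of the output table
neutron floatingip-associate "$FIP_ID" "$PORT_ID"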

[root@dallas1 ~(keystone_admin)]$  vgdisplay
….

— Volume group —
VG Name               cinder-volumes
System ID
Format                lvm2
Metadata Areas        1
Metadata Sequence No  66
VG Access             read/write
VG Status             resizable
MAX LV                0
Cur LV                3
Open LV               3
Max PV                0
Cur PV                1
Act PV                1
VG Size               20.00 GiB
PE Size               4.00 MiB
Total PE              5119
Alloc PE / Size       3840 / 15.00 GiB
Free  PE / Size       1279 / 5.00 GiB
VG UUID               M11ikP-i6sd-ftwG-3XIH-F9wt-cSHe-m9kCtU


….

Three volumes have been created, 5 GB each.

 [root@dallas1 ~(keystone_admin)]$ losetup -a

/dev/loop0: [64768]:14 (/cinder-volumes)
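For reference, volumes of that size would normally be created and inspected with the cinder client; a hedged sketch (the display name here is made up, not one of the volumes above):

# Create one more 5 GB volume (backed by the cinder-volumes VG shown above) and list the result
cinder create --display-name test_vol 5
cinder list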

The same warnings appear in the log, but now it works:

2014-03-03 23:50:19.851 6729 WARNING nova.virt.libvirt.driver [req-98443a14-3c3f-49f5-bf21-c183531a1778 df4a984ce2f24848a6b84aaa99e296f1 e896be65e94a4893b870bc29ba86d7eb] [instance: baffc298-3b45-4e01-8891-1e6510e3dc0e] File injection into a boot from volume instance is not supported

2014-03-03 23:50:21.439 6729 WARNING nova.virt.libvirt.volume [req-98443a14-3c3f-49f5-bf21-c183531a1778 df4a984ce2f24848a6b84aaa99e296f1 e896be65e94a4893b870bc29ba86d7eb] ISCSI volume not yet found at: vda. Will rescan & retry.  Try number: 0

2014-03-03 23:50:21.518 6729 WARNING nova.virt.libvirt.vif [req-98443a14-3c3f-49f5-bf21-c183531a1778 df4a984ce2f24848a6b84aaa99e296f1 e896be65e94a4893b870bc29ba86d7eb] Deprecated: The LibvirtHybridOVSBridgeDriver VIF driver is now deprecated and will be removed in the next release. Please use the LibvirtGenericVIFDriver VIF driver, together with a network plugin that reports the ‘vif_type’ attribute

2014-03-03 23:52:12.020 6729 WARNING nova.virt.libvirt.driver [req-1ea0e44e-b651-4f79-9d83-1ba872534440 df4a984ce2f24848a6b84aaa99e296f1 e896be65e94a4893b870bc29ba86d7eb] [instance: a64a7a24-ff8a-4d01-aa59-80393a4213df] File injection into a boot from volume instance is not supported

2014-03-03 23:52:13.629 6729 WARNING nova.virt.libvirt.volume [req-1ea0e44e-b651-4f79-9d83-1ba872534440 df4a984ce2f24848a6b84aaa99e296f1 e896be65e94a4893b870bc29ba86d7eb] ISCSI volume not yet found at: vda. Will rescan & retry.  Try number: 0

2014-03-03 23:52:13.709 6729 WARNING nova.virt.libvirt.vif [req-1ea0e44e-b651-4f79-9d83-1ba872534440 df4a984ce2f24848a6b84aaa99e296f1 e896be65e94a4893b870bc29ba86d7eb] Deprecated: The LibvirtHybridOVSBridgeDriver VIF driver is now deprecated and will be removed in the next release. Please use the LibvirtGenericVIFDriver VIF driver, together with a network plugin that reports the ‘vif_type’ attribute

2014-03-03 23:56:11.127 6729 WARNING nova.compute.manager [-] Found 4 in the database and 1 on the hypervisor.


USB Redirection hack on “Two Node Controller&Compute Neutron GRE+OVS” Fedora 20 Cluster

February 28, 2014
 
I clearly understand that only an incomplete Havana RDO setup allows me to activate SPICE USB redirection when communicating with cloud instances. There is no dashboard (administrative web console) on this cluster. All information regarding nova instance status and neutron subnets, routers and ports has to be obtained via CLI, and managing instances, subnets, routers, ports and rules is also done via CLI, carefully sourcing the “keystonerc_user” file to work in the environment of a particular user of a particular tenant. I also have to mention that to create a new instance I must have no more than four entries in `nova list`; then I am able to create one more instance for sure. This has been tested on two “Two Node Neutron GRE+OVS” systems and is related to the `nova quota-show` limits for the tenant (10 instances is the default). Having 3 VMs on the Compute node, I brought up openstack-nova-compute on the Controller and was able to create 2 more VMs on Compute and 5 VMs on the Controller. View https://ask.openstack.org/en/question/11746/openstack-nova-scheduler-service-cannot-any-longer-connect-to-amqp-server-performing-nova-boot-on-fedora-20/
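The per-tenant limit mentioned above can be read directly from nova; a minimal sketch, where TENANT_NAME is a placeholder for the tenant being checked and the keystone v2 CLI is assumed:

# Look up the tenant id and show its quotas (instances defaults to 10)
TENANT_ID=$(keystone tenant-list | awk '/ TENANT_NAME / {print $2}')
nova quota-show --tenant "$TENANT_ID"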
Manual Setup  ( view [2]  http://bderzhavets.blogspot.com/2014/01/setting-up-two-physical-node-openstack.html )
– Controller node: Nova, Keystone, Cinder, Glance, Neutron (using Open vSwitch plugin and GRE tunneling )
– Compute node: Nova (nova-compute), Neutron (openvswitch-agent)

dfw02.localdomain   –  Controller (192.168.1.127)

dfw01.localdomain   –  Compute   (192.168.1.137)

[root@dfw02 ~(keystone_admin)]$ openstack-status

== Nova services ==

openstack-nova-api:                     active

openstack-nova-cert:                    inactive  (disabled on boot)
openstack-nova-compute:                 inactive  (disabled on boot)
openstack-nova-network:                 inactive  (disabled on boot)
openstack-nova-scheduler:               active
openstack-nova-volume:                  inactive  (disabled on boot)
openstack-nova-conductor:               active
== Glance services ==
openstack-glance-api:                   active
openstack-glance-registry:              active
== Keystone service ==
openstack-keystone:                     active
== neutron services ==
neutron-server:                         active
neutron-dhcp-agent:                     active
neutron-l3-agent:                       active
neutron-metadata-agent:                 active
neutron-lbaas-agent:                    inactive  (disabled on boot)
neutron-openvswitch-agent:              active
neutron-linuxbridge-agent:              inactive  (disabled on boot)
neutron-ryu-agent:                      inactive  (disabled on boot)
neutron-nec-agent:                      inactive  (disabled on boot)
neutron-mlnx-agent:                     inactive  (disabled on boot)
== Cinder services ==
openstack-cinder-api:                   active
openstack-cinder-scheduler:             active
openstack-cinder-volume:                active
== Ceilometer services ==
openstack-ceilometer-api:               inactive  (disabled on boot)
openstack-ceilometer-central:           inactive  (disabled on boot)
openstack-ceilometer-compute:           active
openstack-ceilometer-collector:         inactive  (disabled on boot)
openstack-ceilometer-alarm-notifier:    inactive  (disabled on boot)
openstack-ceilometer-alarm-evaluator:   inactive  (disabled on boot)
== Support services ==
mysqld:                                 inactive  (disabled on boot)
libvirtd:                               active
openvswitch:                            active
dbus:                                   active
tgtd:                                   active
qpidd:                                  active
== Keystone users ==
+———————————-+———+———+——-+
|                id                |   name  | enabled | email |
+———————————-+———+———+——-+
| 970ed56ef7bc41d59c54f5ed8a1690dc |  admin  |   True  |       |
| 162021e787c54cac906ab3296a386006 |  boris  |   True  |       |
| 1beeaa4b20454048bf23f7d63a065137 |  cinder |   True  |       |
| 006c2728df9146bd82fab04232444abf |  glance |   True  |       |
| 5922aa93016344d5a5d49c0a2dab458c | neutron |   True  |       |
| af2f251586564b46a4f60cdf5ff6cf4f |   nova  |   True  |       |
+———————————-+———+———+——-+

== Glance images ==

+————————————–+———————————+————-+——————+————-+——–+
| ID                                   | Name                            | Disk Format | Container Format | Size        | Status |
+————————————–+———————————+————-+——————+————-+——–+
| a6e8ef59-e492-46e2-8147-fd8b1a65ed73 | CentOS 6.5 image                | qcow2       | bare             | 344457216   | active |
| dc992799-7831-4933-b6ee-7b81868f808b | CirrOS31                        | qcow2       | bare             | 13147648    | active |
| 03c9ad20-b0a3-4b71-aa08-2728ecb66210 | Fedora 19 x86_64                | qcow2       | bare             | 237371392   | active |
| de93ee44-4085-4111-b022-a7437da8feac | Fedora 20 image                 | qcow2       | bare             | 214106112   | active |
| e70591fc-6905-4e57-84b7-4ffa7c001864 | Ubuntu Server 13.10             | qcow2       | bare             | 244514816   | active |
| b7d54434-1cc6-4770-82f3-c8619952575c | Ubuntu Trusty Tar 02/23/14      | qcow2       | bare             | 261029888   | active |
| 07071d00-fb85-4b32-a9b4-d515088700d0 | Windows Server 2012 R2 Std Eval | vhd         | bare             | 17182752768 | active |
+————————————–+———————————+————-+——————+————-+——–+

== Nova managed services ==

+—————-+——————-+———-+———+——-+—————————-+—————–+
| Binary         | Host              | Zone     | Status  | State | Updated_at                 | Disabled Reason |
+—————-+——————-+———-+———+——-+—————————-+—————–+
| nova-scheduler | dfw02.localdomain | internal | enabled | up    | 2014-02-28T06:32:03.000000 | None            |
| nova-conductor | dfw02.localdomain | internal | enabled | up    | 2014-02-28T06:32:03.000000 | None            |
| nova-compute   | dfw01.localdomain | nova     | enabled | up    | 2014-02-28T06:31:59.000000 | None            |
+—————-+——————-+———-+———+——-+—————————-+—————–+

== Nova networks ==

+————————————–+——-+——+
| ID                                   | Label | Cidr |
+————————————–+——-+——+
| 1eea88bb-4952-4aa4-9148-18b61c22d5b7 | int   | None |
| 426bb226-0ab9-440d-ba14-05634a17fb2b | int1  | None |
| 780ce2f3-2e6e-4881-bbac-857813f9a8e0 | ext   | None |
+————————————–+——-+——+

== Nova instance flavors ==

+—-+———–+———–+——+———–+——+——-+————-+———–+
| ID | Name      | Memory_MB | Disk | Ephemeral | Swap | VCPUs | RXTX_Factor | Is_Public |
+—-+———–+———–+——+———–+——+——-+————-+———–+
| 1  | m1.tiny   | 512       | 1    | 0         |      | 1     | 1.0         | True      |
| 2  | m1.small  | 2048      | 20   | 0         |      | 1     | 1.0         | True      |
| 3  | m1.medium | 4096      | 40   | 0         |      | 2     | 1.0         | True      |
| 4  | m1.large  | 8192      | 80   | 0         |      | 4     | 1.0         | True      |
| 5  | m1.xlarge | 16384     | 160  | 0         |      | 8     | 1.0         | True      |
+—-+———–+———–+——+———–+——+——-+————-+———–+

== Nova instances ==

+—-+——+——–+————+————-+———-+
| ID | Name | Status | Task State | Power State | Networks |
+—-+——+——–+————+————-+———-+

+—-+——+——–+————+————-+———-+
[root@dfw02 ~(keystone_boris)]$ nova list
+————————————–+———–+———–+————+————-+——————————+
| ID                                   | Name      | Status    | Task State | Power State | Networks                     |
+————————————–+———–+———–+————+————-+——————————+
| 5fcd83c3-1d4e-4b11-bfe5-061a03b73174 | UbuntuRSX | SUSPENDED | None       | Shutdown    | int1=40.0.0.5, 192.168.1.120 |
| 7953950c-112c-4c59-b183-5cbd06eabcf6 | VF19WXL   | SUSPENDED | None       | Shutdown    | int1=40.0.0.6, 192.168.1.121 |
| 784e8afc-d41a-4c2e-902a-8e109a40f7db | VF20GLS   | SUSPENDED | None       | Shutdown    | int1=40.0.0.4, 192.168.1.102 |
| 9b156b85-a6a1-4f15-bffa-6fdb124f8cff | VF20WXL   | SUSPENDED | None       | Shutdown    | int1=40.0.0.2, 192.168.1.101 |
+————————————–+———–+———–+————+————-+——————————+
 [root@dfw02 ~(keystone_admin)]$ nova-manage service list

Binary           Host                                 Zone             Status     State Updated_At
nova-scheduler   dfw02.localdomain                    internal         enabled    :-)   2014-02-28 11:47:25
nova-conductor   dfw02.localdomain                    internal         enabled    :-)   2014-02-28 11:47:25
nova-compute     dfw01.localdomain                    nova             enabled    :-)   2014-02-28 11:47:19

[root@dfw02 ~(keystone_admin)]$ neutron agent-list

+————————————–+——————–+——————-+——-+—————-+
| id                                   | agent_type         | host              | alive | admin_state_up |
+————————————–+——————–+——————-+——-+—————-+
| 037b985d-7a7d-455b-8536-76eed40b0722 | L3 agent           | dfw02.localdomain | :-)   | True           |
| 22438ee9-b4ea-4316-9492-eb295288f61a | Open vSwitch agent | dfw02.localdomain | :-)   | True           |
| 76ed02e2-978f-40d0-879e-1a2c6d1f7915 | DHCP agent         | dfw02.localdomain | :-)   | True           |
| 951632a3-9744-4ff4-a835-c9f53957c617 | Open vSwitch agent | dfw01.localdomain | :-)   | True           |
+————————————–+——————–+——————-+——-+—————-+

Create F20 instance per http://bderzhavets.blogspot.com/2014/01/setting-up-two-physical-node-openstack.html 

and run on the newly built instance:

# yum -y update
# yum -y install spice-vdagent
# reboot

Connect via virt-manager and switch to Properties tab :-

  

1. Switch to Spice Server
2. Switch to Video QXL
3. Add Hardware “Spice agent (spicevmc)”
4. Add Hardware “USB Redirection” (Spice channel)
Then :- 

[root@dfw02 ~(keystone_boris)]$  nova reboot VF20GLS 
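After the reboot it is easy to double-check (a sanity check, not part of the original steps) that the domain XML really carries the SPICE graphics, QXL video and USB redirection devices added in virt-manager; instance-000000XX is a placeholder for the libvirt name reported by virsh:

# On the Compute node hosting the instance
virsh list --all
virsh dumpxml instance-000000XX | grep -Ei 'spice|qxl|redirdev'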

Plug in USB pen on Controller

[ 6443.772131] usb 1-2.1: USB disconnect, device number 5
[ 6523.996983] usb 1-2.1: new full-speed USB device number 6 using uhci_hcd
[ 6524.278848] usb 1-2.1: New USB device found, idVendor=0951, idProduct=160e
[ 6524.281206] usb 1-2.1: New USB device strings: Mfr=1, Product=2, SerialNumber=3
[ 6524.282055] usb 1-2.1: Product: DataTraveler 2.0
[ 6524.284851] usb 1-2.1: Manufacturer: Kingston
[ 6524.290527] usb 1-2.1: SerialNumber: 000AEB920161SK861E1301F6
[ 6524.369667] usb-storage 1-2.1:1.0: USB Mass Storage device detected
[ 6524.379638] scsi4 : usb-storage 1-2.1:1.0
[ 6525.420794] scsi 4:0:0:0: Direct-Access     Kingston DataTraveler 2.0 1.00 PQ: 0 ANSI: 2
[ 6525.459504] sd 4:0:0:0: Attached scsi generic sg0 type 0
[ 6525.526419] sd 4:0:0:0: [sdb] 7856128 512-byte logical blocks: (4.02 GB/3.74 GiB)
[ 6525.554959] sd 4:0:0:0: [sdb] Write Protect is off
[ 6525.555010] sd 4:0:0:0: [sdb] Mode Sense: 23 00 00 00
[ 6525.571552] sd 4:0:0:0: [sdb] No Caching mode page found
[ 6525.573029] sd 4:0:0:0: [sdb] Assuming drive cache: write through
[ 6525.667624] sd 4:0:0:0: [sdb] No Caching mode page found
[ 6525.669322] sd 4:0:0:0: [sdb] Assuming drive cache: write through
[ 6525.816841]  sdb: sdb1
[ 6525.887493] sd 4:0:0:0: [sdb] No Caching mode page found
[ 6525.889142] sd 4:0:0:0: [sdb] Assuming drive cache: write through
[ 6525.890478] sd 4:0:0:0: [sdb] Attached SCSI removable disk

$ sudo mount /dev/sdb1 /mnt/usbpen

[ 5685.621007] FAT-fs (sdb1): Volume was not properly unmounted. Some data may be corrupt. Please run fsck.

[ 5685.631218] SELinux: initialized (dev sdb1, type vfat), uses genfs_contexts
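Since the kernel warns that the volume was not cleanly unmounted, it may be worth running dosfstools' fsck on it once before writing anything; this is optional and not part of the original workflow:

$ sudo umount /mnt/usbpen
$ sudo fsck.vfat -a /dev/sdb1
$ sudo mount /dev/sdb1 /mnt/usbpen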

Set up a lightweight X Windows system & Fluxbox on the F20 instance ( [1] ) and make sure it is completely functional and can read and write to the USB pen.

   Nova status verification

 

 

   Neutron status verification

On dfw02 (Controller), run the following commands:

ssh-keygen (hit Enter to accept all of the defaults)
ssh-copy-id -i ~/.ssh/id_rsa.pub root@dfw01 (Compute)

Add to /etc/rc.d/rc.local lines :-

ssh -L 5900:127.0.0.1:5900 -N -f -l root 192.168.1.137
ssh -L 5901:127.0.0.1:5901 -N -f -l root 192.168.1.137
ssh -L 5902:127.0.0.1:5902 -N -f -l root 192.168.1.137

so that spicy can comfortably connect to instances running on the Compute node.
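The same three forwards can also be produced with a short loop, which is handy if more SPICE ports show up later; a minimal sketch using the same Compute node address:

# Forward local SPICE ports 5900-5902 to the Compute node (192.168.1.137)
for PORT in 5900 5901 5902 ; do
    ssh -L ${PORT}:127.0.0.1:${PORT} -N -f -l root 192.168.1.137
done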

Build fresh spice-gtk packages :-

$ rpm -iv spice-gtk-0.23-1.fc21.src.rpm
$ cd ~/rpmbuild/SPECS
$ sudo yum install intltool gtk2-devel usbredir-devel libusb1-devel libgudev1-devel pixman-devel openssl-devel  libjpeg-turbo-devel celt051-devel pulseaudio-libs-devel pygtk2-devel python-devel zlib-devel cyrus-sasl-devel libcacard-devel gobject-introspection-devel  dbus-glib-devel libacl-devel polkit-devel gtk-doc vala-tools gtk3-devel spice-protocol opus-devel
$ rpmbuild -bb ./spice-gtk.spec
$ cd ../RPMS/x86_64

Install the rpms that have just been built, because spicy is not yet present on the system:

[boris@dfw02 x86_64]$  sudo yum install spice-glib-0.23-1.fc20.x86_64.rpm \
spice-glib-devel-0.23-1.fc20.x86_64.rpm \
spice-gtk-0.23-1.fc20.x86_64.rpm \
spice-gtk3-0.23-1.fc20.x86_64.rpm \
spice-gtk3-devel-0.23-1.fc20.x86_64.rpm \
spice-gtk3-vala-0.23-1.fc20.x86_64.rpm \
spice-gtk-debuginfo-0.23-1.fc20.x86_64.rpm \
spice-gtk-devel-0.23-1.fc20.x86_64.rpm  \
spice-gtk-python-0.23-1.fc20.x86_64.rpm \
spice-gtk-tools-0.23-1.fc20.x86_64.rpm

Verify new spice-gtk install on F20 :-

[boris@dfw02 x86_64]$ rpm -qa | grep spice-
spice-gtk-tools-0.23-1.fc20.x86_64
spice-server-0.12.4-3.fc20.x86_64
spice-glib-devel-0.23-1.fc20.x86_64
spice-gtk3-vala-0.23-1.fc20.x86_64
spice-gtk3-devel-0.23-1.fc20.x86_64
spice-gtk-python-0.23-1.fc20.x86_64
spice-vdagent-0.15.0-1.fc20.x86_64
spice-gtk-devel-0.23-1.fc20.x86_64
spice-gtk-0.23-1.fc20.x86_64
spice-gtk-debuginfo-0.23-1.fc20.x86_64
spice-glib-0.23-1.fc20.x86_64
spice-gtk3-0.23-1.fc20.x86_64
spice-protocol-0.12.6-2.fc20.noarch

Connecting via spice will give a warning; just ignore this message.
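With the SSH forwards from the previous section in place, spicy can then be pointed at the local end of the tunnel; for example (the port depends on which display libvirt assigned to the instance):

$ spicy -h localhost -p 5900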

References

1. http://bderzhavets.blogspot.com/2014/02/setup-light-weight-x-windows_2.html
2. http://bderzhavets.blogspot.com/2014/01/setting-up-two-physical-node-openstack.html


Ongoing problems with “Two Real Controller&Compute Nodes Neutron GRE + OVS” setup on F20 via native Havana Repos

February 16, 2014

**************************************************************
UPDATE on 02/23/2014  
**************************************************************

To create a new instance I must have no more than 4 entries in `nova list`. Then I can sequentially restart the qpidd and openstack-nova-scheduler services (actually, that is not always necessary) and I will be able to create one more instance for sure. This has been tested on two “Two Node Neutron GRE+OVS+Gluster Backend for Cinder” clusters. It is related to the `nova quota-show` limits for the tenant (10 instances is the default). Having 3 VMs on Compute, I brought up openstack-nova-compute on the Controller and was able to create 2 more VMs on Compute and 5 VMs on the Controller. All testing details are here: http://bderzhavets.blogspot.com/2014/02/next-attempt-to-set-up-two-node-neutron.html

Syntax like :

[root@dallas1 ~(keystone_admin)]$  nova quota-defaults
+—————————–+——-+
| Quota                       | Limit |
+—————————–+——-+
| instances                   | 10    |
| cores                       | 20    |
| ram                         | 51200 |
| floating_ips                | 10    |
| fixed_ips                   | -1    |
| metadata_items              | 128   |
| injected_files              | 5     |
| injected_file_content_bytes | 10240 |
| injected_file_path_bytes    | 255   |
| key_pairs                   | 100   |
| security_groups             | 10    |
| security_group_rules        | 20    |
+—————————–+——-+
[root@dallas1 ~(keystone_admin)]$  nova quota-class-update --instances 20 default

[root@dallas1 ~(keystone_admin)]$  nova quota-defaults
+—————————–+——-+
| Quota                       | Limit |
+—————————–+——-+
| instances                   | 20    |
| cores                       | 20    |
| ram                         | 51200 |
| floating_ips                | 10    |
| fixed_ips                   | -1    |
| metadata_items              | 128   |
| injected_files              | 5     |
| injected_file_content_bytes | 10240 |
| injected_file_path_bytes    | 255   |
| key_pairs                   | 100   |
| security_groups             | 10    |
| security_group_rules        | 20    |
+—————————–+——-+

doesn’t work for me
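A possible alternative (untested here) is to raise the quota of the single tenant instead of the quota class; whether that actually avoids the behaviour described in this post is not confirmed. The tenant id used below is the one visible in the nova boot output later in this post:

# Per-tenant quota update, then verify
nova quota-update --instances 20 b5c0d0d4d31e4f3785362f2716df0b0f
nova quota-show --tenant b5c0d0d4d31e4f3785362f2716df0b0f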

********************************************************************

Current status: F20 requires manual intervention in the MariaDB (MySQL) database regarding the presence of the root & nova passwords at the FQDN of the Controller host. I was also never able to start Neutron-Server via the account suggested by Kashyap, only as root:password@Controller (FQDN). The Neutron Openvswitch agent and Neutron L3 agent do not start at the point described in the first manual, only once the Neutron Metadata agent is up and running. Notice also that in the meantime the openstack-nova-conductor & openstack-nova-scheduler services will not start if the mysql.users table is not ready with the nova account password at the Controller's FQDN. All these updates are reflected in the reference links attached as text docs.

The instance number in this snapshot is instance-0000004a (hex). This number keeps increasing: it is the 74th instance created, counting from 00000001.
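The hex suffix maps directly to that count, e.g.:

$ printf "%d\n" 0x4a
74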

Detailed information about instances above:

[root@dfw02 ~(keystone_admin)]$ nova list

+————————————–+————+———–+————+————-+—————————–+
| ID                                   | Name       | Status    | Task State | Power State | Networks                    |
+————————————–+————+———–+————+————-+—————————–+
| e52f8f4d-5d01-4237-a1ed-79ee53ecc88a | UbuntuSX5  | SUSPENDED | None       | Shutdown    | int=10.0.0.2, 192.168.1.114 |
| 6c094d16-fda7-43fa-8f24-22e02e7a2fc6 | UbuntuVLG1 | ACTIVE    | None       | Running     | int=10.0.0.6, 192.168.1.118 |
| 526b803d-ded5-48d8-857a-f622f6082c18 | VF20GLF    | ACTIVE    | None       | Running     | int=10.0.0.5, 192.168.1.119 |
| c3a4c6d4-8618-4c4f-becb-0c53c2b3ad91 | VF20GLX    | ACTIVE    | None       | Running     | int=10.0.0.7, 192.168.1.117 |
| 4b619f27-30ba-4bd0-bd03-019b3c5d4bf7 | VF20SX4    | SUSPENDED | None       | Shutdown    | int=10.0.0.4, 192.168.1.110 |
+————————————–+————+———–+————+————-+—————————–+

[root@dfw02 ~(keystone_admin)]$ nova show 526b803d-ded5-48d8-857a-f622f6082c18
+————————————–+———————————————————-+
| Property                             | Value                                                               |
+————————————–+———————————————————-+
| status                               | ACTIVE                                                            |
| updated                              | 2014-02-17T13:10:14Z                             |
| OS-EXT-STS:task_state                | None                                                  |
| OS-EXT-SRV-ATTR:host                 | dfw01.localdomain                         |
| key_name                             | None                                                     |
| image                                | Attempt to boot from volume - no image supplied          |
| int network                          | 10.0.0.5, 192.168.1.119                                  |
| hostId                               | b67c11ccfa8ac8c9ed019934fa650d307f91e7fe7bbf8cd6874f3e01 |
| OS-EXT-STS:vm_state                  | active                                                   |
| OS-EXT-SRV-ATTR:instance_name        | instance-0000004a                                        |
| OS-SRV-USG:launched_at               | 2014-02-17T11:08:13.000000                               |
| OS-EXT-SRV-ATTR:hypervisor_hostname  | dfw01.localdomain                                        |
| flavor                               | m1.small (2)                                                     |
| id                                   | 526b803d-ded5-48d8-857a-f622f6082c18     |
| security_groups                      | [{u’name’: u’default’}]                          |
| OS-SRV-USG:terminated_at             | None                                              |
| user_id                              | 970ed56ef7bc41d59c54f5ed8a1690dc      |
| name                                 | VF20GLF                                                        |
| created                              | 2014-02-17T11:08:07Z                                |
| tenant_id                            | d0a0acfdb62b4cc8a2bfa8d6a08bb62f    |
| OS-DCF:diskConfig                    | MANUAL                                              |
| metadata                             | {}                                                                 |
| os-extended-volumes:volumes_attached | [{u’id’: u’296d02ff-6e2a-424a-bd79-e75ed52875fc’}]       |
| accessIPv4                           |                                                                     |
| accessIPv6                           |                                                                     |
| progress                             | 0                                                                    |
| OS-EXT-STS:power_state               | 1                                                     |
| OS-EXT-AZ:availability_zone          | nova                                              |
| config_drive                         |                                                                     |
+————————————–+———————————————————-+

Instance numbers form an ever-increasing sequence: old instances get removed, new ones get created.

Top at Compute :-

Top at Controller :-

[root@dfw02 ~(keystone_admin)]$ nova-manage service list

Binary           Host                                 Zone             Status     State Updated_At
nova-scheduler   dfw02.localdomain                    internal         enabled    :-)   2014-02-17 15:20:11
nova-conductor   dfw02.localdomain                    internal         enabled    :-)   2014-02-17 15:20:11
nova-compute     dfw01.localdomain                    nova             enabled    :-)   2014-02-17 15:20:12

Also carefully watch the “ovs-vsctl show” output on both Controller & Compute for the presence of these blocks:
On controller :
Port “gre-2”
            Interface “gre-2”
                type: gre
                options: {in_key=flow, local_ip=”192.168.1.130″, out_key=flow, remote_ip=”192.168.1.140″}
and this one on compute:
Port “gre-1”
            Interface “gre-1”
                type: gre
                options: {in_key=flow, local_ip=”192.168.1.140″, out_key=flow, remote_ip=”192.168.1.130″}
This is important for success; the block can disappear from the “ovs-vsctl show” report.
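A quick way to check for those blocks on both nodes without scanning the whole report (just a grep sketch):

# Run on Controller and Compute; should print the gre-N port with its local/remote IP options
ovs-vsctl show | grep -A 3 'Port "gre-'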

Initial starting point for testing. Continue per http://bderzhavets.blogspot.com/2014/01/setting-up-two-physical-node-openstack.html

System is functional.
Controller – dallas1.localdomain 192.168.1.130
Compute  –  dallas2.localdomain 192.168.1.140

[root@dallas1 ~(keystone_admin)]$ date
Sat Feb 15 11:05:12 MSK 2014
[root@dallas1 ~(keystone_admin)]$ openstack-status

== Nova services ==
openstack-nova-api:                     active
openstack-nova-cert:                    inactive  (disabled on boot)
openstack-nova-compute:                 inactive  (disabled on boot)
openstack-nova-network:                 inactive  (disabled on boot)
openstack-nova-scheduler:               active
openstack-nova-volume:                  inactive  (disabled on boot)
openstack-nova-conductor:               active
== Glance services ==
openstack-glance-api:                   active
openstack-glance-registry:              active
== Keystone service ==
openstack-keystone:                     active
== neutron services ==
neutron-server:                         active
neutron-dhcp-agent:                     active
neutron-l3-agent:                       active
neutron-metadata-agent:                 active
neutron-lbaas-agent:                    inactive  (disabled on boot)
neutron-openvswitch-agent:              active
neutron-linuxbridge-agent:              inactive  (disabled on boot)
neutron-ryu-agent:                      inactive  (disabled on boot)
neutron-nec-agent:                      inactive  (disabled on boot)
neutron-mlnx-agent:                     inactive  (disabled on boot)
== Cinder services ==
openstack-cinder-api:                   active
openstack-cinder-scheduler:             active
openstack-cinder-volume:                active
== Support services ==
mysqld:                                 inactive  (disabled on boot)
libvirtd:                               active
openvswitch:                            active
dbus:                                   active
tgtd:                                   active
qpidd:                                  active
== Keystone users ==
+———————————-+———+———+——-+
|                id                |   name  | enabled | email |
+———————————-+———+———+——-+
| 974006673310455e8893e692f1d9350b |  admin  |   True  |       |
| fbba3a8646dc44e28e5200381d77493b |  cinder |   True  |       |
| 0214c6ae6ebc4d6ebeb3e68d825a1188 |  glance |   True  |       |
| abb1fa95b0ec448ea8da3cc99d61d301 | kashyap |   True  |       |
| 329b3ca03a894b319420b3a166d461b5 | neutron |   True  |       |
| 89b3f7d54dd04648b0519f8860bd0f2a |   nova  |   True  |       |
+———————————-+———+———+——-+
== Glance images ==
+————————————–+———————+————-+——————+———–+——–+
| ID                                   | Name                | Disk Format | Container Format | Size      | Status |
+————————————–+———————+————-+——————+———–+——–+
| 2cada436-30a7-425b-9e75-ce56764cdd13 | Cirros31            | qcow2       | bare             | 13147648  | active |
| c0b90f9e-fd47-46da-b98b-1144a41a6c08 | Fedora 20 x86_64    | qcow2       | bare             | 214106112 | active |
| 2dcc95ad-ebef-43f1-ae14-8d28e6f8194b | Ubuntu 13.10 Server | qcow2       | bare             | 244711424 | active |
+————————————–+———————+————-+——————+———–+——–+
== Nova managed services ==
+—————-+———————+———-+———+——-+—————————-+—————–+
| Binary         | Host                | Zone     | Status  | State | Updated_at                 | Disabled Reason |
+—————-+———————+———-+———+——-+—————————-+—————–+
| nova-scheduler | dallas1.localdomain | internal | enabled | up    | 2014-02-15T08:14:54.000000 | None            |
| nova-conductor | dallas1.localdomain | internal | enabled | up    | 2014-02-15T08:14:54.000000 | None            |
| nova-compute   | dallas2.localdomain | nova     | enabled | up    | 2014-02-15T08:14:59.000000 | None            |
+—————-+———————+———-+———+——-+—————————-+—————–+
== Nova networks ==
+————————————–+——-+——+
| ID                                   | Label | Cidr |
+————————————–+——-+——+
| 082249a5-08f4-478f-b176-effad0ef6843 | ext   | None |
| cea0463e-1ef2-46ac-a449-d1c265f5ed7c | int   | None |
+————————————–+——-+——+
== Nova instance flavors ==
+—-+———–+———–+——+———–+——+——-+————-+———–+
| ID | Name      | Memory_MB | Disk | Ephemeral | Swap | VCPUs | RXTX_Factor | Is_Public |
+—-+———–+———–+——+———–+——+——-+————-+———–+
| 1  | m1.tiny   | 512       | 1    | 0         |      | 1     | 1.0         | True      |
| 2  | m1.small  | 2048      | 20   | 0         |      | 1     | 1.0         | True      |
| 3  | m1.medium | 4096      | 40   | 0         |      | 2     | 1.0         | True      |
| 4  | m1.large  | 8192      | 80   | 0         |      | 4     | 1.0         | True      |
| 5  | m1.xlarge | 16384     | 160  | 0         |      | 8     | 1.0         | True      |
+—-+———–+———–+——+———–+——+——-+————-+———–+
== Nova instances ==
+————————————–+———–+——–+————+————-+—————————–+
| ID                                   | Name      | Status | Task State | Power State | Networks                    |
+————————————–+———–+——–+————+————-+—————————–+
| 56794a06-3c2b-45d1-a024-aeb94be598dc | Cirrus311 | ACTIVE | None       | Running     | int=10.0.0.2, 192.168.1.101 |
| 6adf0838-bfcf-4980-a0a4-6a541facf9c9 | UbuntuSRV | ACTIVE | None       | Running     | int=10.0.0.5, 192.168.1.103 |
| c0e2bea9-aba8-48b2-adce-ceeee47a607b | VF20WXL   | ACTIVE | None       | Running     | int=10.0.0.4, 192.168.1.102 |
+————————————–+———–+——–+————+————-+—————————–+

Looks good on both Controller and Compute

[root@dallas1 nova]# ovs-vsctl show
2790327e-fde5-4f35-9c99-b1180353b29e
    Bridge br-int
        Port br-int
            Interface br-int
                type: internal
        Port "qr-f38eb3d5-20"
            tag: 1
            Interface "qr-f38eb3d5-20"
                type: internal
        Port patch-tun
            Interface patch-tun
                type: patch
                options: {peer=patch-int}
        Port "tap5d1add26-f3"
            tag: 1
            Interface "tap5d1add26-f3"
                type: internal
    Bridge br-ex
        Port "p37p1"
            Interface "p37p1"
        Port br-ex
            Interface br-ex
                type: internal
        Port "qg-0dea8587-32"
            Interface "qg-0dea8587-32"
                type: internal
    Bridge br-tun
        Port "gre-2"
            Interface "gre-2"
                type: gre
                options: {in_key=flow, local_ip="192.168.1.130", out_key=flow, remote_ip="192.168.1.140"}
        Port br-tun
            Interface br-tun
                type: internal
        Port patch-int
            Interface patch-int
                type: patch
                options: {peer=patch-tun}
    ovs_version: "2.0.0"

[root@dallas2 ~]# ovs-vsctl show
b2e33386-ca7e-46e2-b97e-6bbf511727ac
    Bridge br-int
        Port br-int
            Interface br-int
                type: internal
        Port "qvo30c356f8-c0"
            tag: 1
            Interface "qvo30c356f8-c0"
        Port "qvoa5c6c346-78"
            tag: 1
            Interface "qvoa5c6c346-78"
        Port "qvo56bfcccb-86"
            tag: 1
            Interface "qvo56bfcccb-86"
        Port "qvo051565c4-dd"
            tag: 1
            Interface "qvo051565c4-dd"
        Port patch-tun
            Interface patch-tun
                type: patch
                options: {peer=patch-int}
    Bridge br-tun
        Port patch-int
            Interface patch-int
                type: patch
                options: {peer=patch-tun}
        Port "gre-1"
            Interface "gre-1"
                type: gre
                options: {in_key=flow, local_ip="192.168.1.140", out_key=flow, remote_ip="192.168.1.130"}
        Port br-tun
            Interface br-tun
                type: internal
    ovs_version: "2.0.0"

[root@dallas1 ~(keystone_admin)]$ nova boot --flavor 2 --user-data=./myfile.txt --image 2dcc95ad-ebef-43f1-ae14-8d28e6f8194b UbuntuSRV

+————————————–+————————————–+
| Property                             | Value                                |
+————————————–+————————————–+
| OS-EXT-STS:task_state                | scheduling                           |
| image                                | Ubuntu 13.10 Server                  |
| OS-EXT-STS:vm_state                  | building                             |
| OS-EXT-SRV-ATTR:instance_name        | instance-00000003                    |
| OS-SRV-USG:launched_at               | None                                 |
| flavor                               | m1.small                             |
| id                                   | 6adf0838-bfcf-4980-a0a4-6a541facf9c9 |
| security_groups                      | [{u’name’: u’default’}]              |
| user_id                              | 974006673310455e8893e692f1d9350b     |
| OS-DCF:diskConfig                    | MANUAL                               |
| accessIPv4                           |                                      |
| accessIPv6                           |                                      |
| progress                             | 0                                    |
| OS-EXT-STS:power_state               | 0                                    |
| OS-EXT-AZ:availability_zone          | nova                                 |
| config_drive                         |                                      |
| status                               | BUILD                                |
| updated                              | 2014-02-15T07:24:54Z                 |
| hostId                               |                                      |
| OS-EXT-SRV-ATTR:host                 | None                                 |
| OS-SRV-USG:terminated_at             | None                                 |
| key_name                             | None                                 |
| OS-EXT-SRV-ATTR:hypervisor_hostname  | None                                 |
| name                                 | UbuntuSRV                            |
| adminPass                            | T2ArvfucEGqr                         |
| tenant_id                            | b5c0d0d4d31e4f3785362f2716df0b0f     |
| created                              | 2014-02-15T07:24:54Z                 |
| os-extended-volumes:volumes_attached | []                                   |
| metadata                             | {}                                   |
+————————————–+————————————–+

[root@dallas1 ~(keystone_admin)]$ nova list

+————————————–+———–+——–+————+————-+—————————–+
| ID                                   | Name      | Status | Task State | Power State | Networks                    |
+————————————–+———–+——–+————+————-+—————————–+
| 56794a06-3c2b-45d1-a024-aeb94be598dc | Cirrus311 | ACTIVE | None       | Running     | int=10.0.0.2, 192.168.1.101 |
| 6adf0838-bfcf-4980-a0a4-6a541facf9c9 | UbuntuSRV | BUILD  | spawning   | NOSTATE     | int=10.0.0.5                |
| c0e2bea9-aba8-48b2-adce-ceeee47a607b | VF20WXL   | ACTIVE | None       | Running     | int=10.0.0.4, 192.168.1.102 |
+————————————–+———–+——–+————+————-+—————————–+

[root@dallas1 ~(keystone_admin)]$ date

Sat Feb 15 11:25:36 MSK 2014

[root@dallas1 ~(keystone_admin)]$ nova list

+————————————–+———–+——–+————+————-+—————————–+
| ID                                   | Name      | Status | Task State | Power State | Networks                    |
+————————————–+———–+——–+————+————-+—————————–+
| 56794a06-3c2b-45d1-a024-aeb94be598dc | Cirrus311 | ACTIVE | None       | Running     | int=10.0.0.2, 192.168.1.101 |
| 6adf0838-bfcf-4980-a0a4-6a541facf9c9 | UbuntuSRV | ACTIVE | None       | Running     | int=10.0.0.5                |
| c0e2bea9-aba8-48b2-adce-ceeee47a607b | VF20WXL   | ACTIVE | None       | Running     | int=10.0.0.4, 192.168.1.102 |
+————————————–+———–+——–+————+————-+—————————–+

The last message in /var/log/nova/scheduler.log is from about 1 hour before the successful `nova boot ..` runs (the F20, Ubuntu 13.10 and Cirros instances all loaded OK).

I believe I still have a couple of `nova boot ..` attempts left.

Here is /var/log/nova/scheduler.log:

2014-02-15 09:34:07.612 1111 ERROR nova.openstack.common.rpc.impl_qpid [req-4363af36-3f90-49b9-abaa-835797891946 None None] Unable to connect to AMQP server: [Errno 111] ECONNREFUSED. Sleeping 8 seconds

2014-02-15 09:34:15.617 1111 ERROR nova.openstack.common.rpc.impl_qpid [req-4363af36-3f90-49b9-abaa-835797891946 None None] Unable to connect to AMQP server: [Errno 111] ECONNREFUSED. Sleeping 16 seconds

2014-02-15 09:34:31.628 1111 ERROR nova.openstack.common.rpc.impl_qpid [req-4363af36-3f90-49b9-abaa-835797891946 None None] Unable to connect to AMQP server: [Errno 111] ECONNREFUSED. Sleeping 32 seconds

2014-02-15 09:35:03.630 1111 ERROR nova.openstack.common.rpc.impl_qpid [req-4363af36-3f90-49b9-abaa-835797891946 None None] Unable to connect to AMQP server: [Errno 111] ECONNREFUSED. Sleeping 60 seconds

2014-02-15 09:36:03.663 1111 ERROR nova.openstack.common.rpc.impl_qpid [req-4363af36-3f90-49b9-abaa-835797891946 None None] Unable to connect to AMQP server: [Errno 111] ECONNREFUSED. Sleeping 60 seconds

The last record in log :-

2014-02-15 09:37:03.713 1111 ERROR nova.openstack.common.rpc.impl_qpid [req-4363af36-3f90-49b9-abaa-835797891946 None None] Unable to connect to AMQP server: [Errno 111] ECONNREFUSED. Sleeping 60 seconds

Nothing else, still working
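When these ECONNREFUSED records show up, a quick sanity check (a sketch, not something from the original troubleshooting) is whether qpidd is actually running and listening on 5672:

# Verify the AMQP broker on the Controller
systemctl status qpidd
ss -tnl | grep 5672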

[root@dallas1 ~(keystone_admin)]$ date

Sat Feb 15 12:44:33 MSK 2014

[root@dallas1 Downloads(keystone_admin)]$ nova image-list

+————————————–+———————+——–+——–+
| ID                                   | Name                | Status | Server |
+————————————–+———————+——–+——–+
| 2cada436-30a7-425b-9e75-ce56764cdd13 | Cirros31            | ACTIVE |        |
| fd1cd492-d7d8-4fc3-961a-0b43f9aa148d | Fedora 20 Image     | ACTIVE |        |
| c0b90f9e-fd47-46da-b98b-1144a41a6c08 | Fedora 20 x86_64    | ACTIVE |        |
| 2dcc95ad-ebef-43f1-ae14-8d28e6f8194b | Ubuntu 13.10 Server | ACTIVE |        |
+————————————–+———————+——–+——–+

[root@dallas1 Downloads(keystone_admin)]$ cd

[root@dallas1 ~(keystone_admin)]$ nova boot --flavor 2 --user-data=./myfile.txt --image fd1cd492-d7d8-4fc3-961a-0b43f9aa148d VF20GLS

+————————————–+————————————–+
| Property                             | Value                                |
+————————————–+————————————–+
| OS-EXT-STS:task_state                | scheduling                           |
| image                                | Fedora 20 Image                      |
| OS-EXT-STS:vm_state                  | building                             |
| OS-EXT-SRV-ATTR:instance_name        | instance-00000004                    |
| OS-SRV-USG:launched_at               | None                                 |
| flavor                               | m1.small                             |
| id                                   | e948e74c-86e5-46e3-9df1-5b7ab890cb8a |
| security_groups                      | [{u’name’: u’default’}]              |
| user_id                              | 974006673310455e8893e692f1d9350b     |
| OS-DCF:diskConfig                    | MANUAL                               |
| accessIPv4                           |                                      |
| accessIPv6                           |                                      |
| progress                             | 0                                    |
| OS-EXT-STS:power_state               | 0                                    |
| OS-EXT-AZ:availability_zone          | nova                                 |
| config_drive                         |                                      |
| status                               | BUILD                                |
| updated                              | 2014-02-15T09:04:22Z                 |
| hostId                               |                                      |
| OS-EXT-SRV-ATTR:host                 | None                                 |
| OS-SRV-USG:terminated_at             | None                                 |
| key_name                             | None                                 |
| OS-EXT-SRV-ATTR:hypervisor_hostname  | None                                 |
| name                                 | VF20GLS                              |
| adminPass                            | i5Lb79SybSpV                         |
| tenant_id                            | b5c0d0d4d31e4f3785362f2716df0b0f     |
| created                              | 2014-02-15T09:04:22Z                 |
| os-extended-volumes:volumes_attached | []                                   |
| metadata                             | {}                                   |
+————————————–+————————————–+

[root@dallas1 ~(keystone_admin)]$ nova list

+————————————–+———–+———–+————+————-+—————————–+
| ID                                   | Name      | Status    | Task State | Power State | Networks                    |
+————————————–+———–+———–+————+————-+—————————–+
| 56794a06-3c2b-45d1-a024-aeb94be598dc | Cirrus311 | ACTIVE    | None       | Running     | int=10.0.0.2, 192.168.1.101 |
| 6adf0838-bfcf-4980-a0a4-6a541facf9c9 | UbuntuSRV | ACTIVE    | None       | Running     | int=10.0.0.5, 192.168.1.103 |
| e948e74c-86e5-46e3-9df1-5b7ab890cb8a | VF20GLS   | BUILD     | spawning   | NOSTATE     | int=10.0.0.6                |
| c0e2bea9-aba8-48b2-adce-ceeee47a607b | VF20WXL   | SUSPENDED | None       | Shutdown    | int=10.0.0.4, 192.168.1.102 |
+————————————–+———–+———–+————+————-+—————————–+

[root@dallas1 ~(keystone_admin)]$ nova list

+————————————–+———–+———–+————+————-+—————————–+
| ID                                   | Name      | Status    | Task State | Power State | Networks                    |
+————————————–+———–+———–+————+————-+—————————–+
| 56794a06-3c2b-45d1-a024-aeb94be598dc | Cirrus311 | ACTIVE    | None       | Running     | int=10.0.0.2, 192.168.1.101 |
| 6adf0838-bfcf-4980-a0a4-6a541facf9c9 | UbuntuSRV | ACTIVE    | None       | Running     | int=10.0.0.5, 192.168.1.103 |
| e948e74c-86e5-46e3-9df1-5b7ab890cb8a | VF20GLS   | ACTIVE    | None       | Running     | int=10.0.0.6                |
| c0e2bea9-aba8-48b2-adce-ceeee47a607b | VF20WXL   | SUSPENDED | None       | Shutdown    | int=10.0.0.4, 192.168.1.102 |
+————————————–+———–+———–+————+————-+—————————–+

[root@dallas1 ~(keystone_admin)]$ neutron floatingip-create ext

Created a new floatingip:

+———————+————————————–+
| Field               | Value                                |
+———————+————————————–+
| fixed_ip_address    |                                      |
| floating_ip_address | 192.168.1.104                        |
| floating_network_id | 082249a5-08f4-478f-b176-effad0ef6843 |
| id                  | b582d8f9-8e44-4282-a71c-20f36f2e3d89 |
| port_id             |                                      |
| router_id           |                                      |
| tenant_id           | b5c0d0d4d31e4f3785362f2716df0b0f     |
+———————+————————————–+

[root@dallas1 ~(keystone_admin)]$ neutron port-list --device-id e948e74c-86e5-46e3-9df1-5b7ab890cb8a

+————————————–+——+——————-+———————————————————————————+
| id                                   | name | mac_address       | fixed_ips                                                                       |
+————————————–+——+——————-+———————————————————————————+
| 30c356f8-c0e9-439b-b68e-6c1e950b39ef |      | fa:16:3e:7f:4a:57 | {“subnet_id”: “3d75d529-9a18-46d3-ac08-7cb4c733636c”, “ip_address”: “10.0.0.6”} |
+————————————–+——+——————-+———————————————————————————+

[root@dallas1 ~(keystone_admin)]$ neutron floatingip-associate b582d8f9-8e44-4282-a71c-20f36f2e3d89 30c356f8-c0e9-439b-b68e-6c1e950b39ef

Associated floatingip b582d8f9-8e44-4282-a71c-20f36f2e3d89

[root@dallas1 ~(keystone_admin)]$ ping 192.168.1.104
PING 192.168.1.104 (192.168.1.104) 56(84) bytes of data.
64 bytes from 192.168.1.104: icmp_seq=1 ttl=63 time=3.67 ms
64 bytes from 192.168.1.104: icmp_seq=2 ttl=63 time=0.758 ms
64 bytes from 192.168.1.104: icmp_seq=3 ttl=63 time=0.687 ms
64 bytes from 192.168.1.104: icmp_seq=4 ttl=63 time=0.731 ms
64 bytes from 192.168.1.104: icmp_seq=5 ttl=63 time=0.767 ms
64 bytes from 192.168.1.104: icmp_seq=6 ttl=63 time=0.713 ms
64 bytes from 192.168.1.104: icmp_seq=7 ttl=63 time=0.817 ms
64 bytes from 192.168.1.104: icmp_seq=8 ttl=63 time=0.741 ms
64 bytes from 192.168.1.104: icmp_seq=9 ttl=63 time=0.703 ms
^C

— 192.168.1.104 ping statistics —

9 packets transmitted, 9 received, 0% packet loss, time 8002ms

rtt min/avg/max/mdev = 0.687/1.065/3.674/0.923 ms

[root@dallas1 ~(keystone_admin)]$ date
Sat Feb 15 13:15:13 MSK 2014
 

Check same log :-

2014-02-15 09:36:03.663 1111 ERROR nova.openstack.common.rpc.impl_qpid [req-4363af36-3f90-49b9-abaa-835797891946 None None] Unable to connect to AMQP server: [Errno 111] ECONNREFUSED. Sleeping 60 seconds

Last record still the same :-

2014-02-15 09:37:03.713 1111 ERROR nova.openstack.common.rpc.impl_qpid [req-4363af36-3f90-49b9-abaa-835797891946 None None] Unable to connect to AMQP server: [Errno 111] ECONNREFUSED. Sleeping 60 seconds



  Top at Compute Node :-

 

 

[root@dallas2 ~]# virsh list --all

Id    Name                           State

—————————————————-
4     instance-00000001              running
5     instance-00000003              running
9     instance-00000005              running
10    instance-00000002              running
11    instance-00000004              running

Finally, I get ERROR&NOSTATE at 16:28

[root@dallas1 ~(keystone_admin)]$ nova list
+————————————–+———–+———–+————+————-+—————————–+
| ID                                   | Name      | Status    | Task State | Power State | Networks                    |
+————————————–+———–+———–+————+————-+—————————–+
| 56794a06-3c2b-45d1-a024-aeb94be598dc | Cirrus311 | SUSPENDED | None       | Shutdown    | int=10.0.0.2, 192.168.1.101 |
| ee3ff870-91b7-4d14-bb06-e9a6603f0a83 | UbuntuSLM | ERROR     | None       | NOSTATE     |                             |
| 6adf0838-bfcf-4980-a0a4-6a541facf9c9 | UbuntuSRV | SUSPENDED | None       | Shutdown    | int=10.0.0.5, 192.168.1.103 |
| 5e43c64d-41cf-413f-b317-0046b070a7a4 | VF20GLS   | SUSPENDED | None       | Shutdown    | int=10.0.0.7, 192.168.1.105 |
| e948e74c-86e5-46e3-9df1-5b7ab890cb8a | VF20GLS   | SUSPENDED | None       | Shutdown    | int=10.0.0.6, 192.168.1.104 |
| c0e2bea9-aba8-48b2-adce-ceeee47a607b | VF20WXL   | SUSPENDED | None       | Shutdown    | int=10.0.0.4, 192.168.1.102 |
+————————————–+———–+———–+————+————-+—————————–+

[root@dallas1 ~(keystone_admin)]$ date
Sat Feb 15 16:28:35 MSK 2014

I was allowed to create 5 instances; the sixth one goes to ERROR & NOSTATE.

Then keep the number of instances at no more than four and optionally restart the services:
# service qpidd restart
# service openstack-nova-scheduler restart

Then you may run   :-

[root@dallas1 ~(keystone_admin)]$ nova boot --flavor 2 --user-data=./myfile.txt --image 14cf6e7b-9aed-40c6-8185-366eb0c4c397 UbuntuSL3

+————————————–+————————————–+
| Property                             | Value                                |
+————————————–+————————————–+
| OS-EXT-STS:task_state                | scheduling                           |
| image                                | Ubuntu Salamander Server             |
| OS-EXT-STS:vm_state                  | building                             |
| OS-EXT-SRV-ATTR:instance_name        | instance-00000009                    |
| OS-SRV-USG:launched_at               | None                                 |
| flavor                               | m1.small                             |
| id                                   | 2712446b-3442-4af2-a330-c9365736ee73 |
| security_groups                      | [{u’name’: u’default’}]              |
| user_id                              | 974006673310455e8893e692f1d9350b     |
| OS-DCF:diskConfig                    | MANUAL                               |
| accessIPv4                           |                                      |
| accessIPv6                           |                                      |
| progress                             | 0                                    |
| OS-EXT-STS:power_state               | 0                                    |
| OS-EXT-AZ:availability_zone          | nova                                 |
| config_drive                         |                                      |
| status                               | BUILD                                |
| updated                              | 2014-02-15T12:44:36Z                 |
| hostId                               |                                      |
| OS-EXT-SRV-ATTR:host                 | None                                 |
| OS-SRV-USG:terminated_at             | None                                 |
| key_name                             | None                                 |
| OS-EXT-SRV-ATTR:hypervisor_hostname  | None                                 |
| name                                 | UbuntuSL3                            |
| adminPass                            | zq3n5FCktcYB                         |
| tenant_id                            | b5c0d0d4d31e4f3785362f2716df0b0f     |
| created                              | 2014-02-15T12:44:36Z                 |
| os-extended-volumes:volumes_attached | []                                   |
| metadata                             | {}                                   |
+————————————–+————————————–+

[root@dallas1 ~(keystone_admin)]$ nova list
+————————————–+———–+———–+————+————-+—————————–+
| ID                                   | Name      | Status    | Task State | Power State | Networks                    |
+————————————–+———–+———–+————+————-+—————————–+
| 56794a06-3c2b-45d1-a024-aeb94be598dc | Cirrus311 | SUSPENDED | None       | Shutdown    | int=10.0.0.2, 192.168.1.101 |
| 2712446b-3442-4af2-a330-c9365736ee73 | UbuntuSL3 | BUILD     | spawning   | NOSTATE     | int=10.0.0.6                |
| 6adf0838-bfcf-4980-a0a4-6a541facf9c9 | UbuntuSRV | SUSPENDED | None       | Shutdown    | int=10.0.0.5, 192.168.1.103 |
| 5e43c64d-41cf-413f-b317-0046b070a7a4 | VF20GLS   | ACTIVE    | None       | Running     | int=10.0.0.7, 192.168.1.105 |
| c0e2bea9-aba8-48b2-adce-ceeee47a607b | VF20WXL   | SUSPENDED | None       | Shutdown    | int=10.0.0.4, 192.168.1.102 |
+————————————–+———–+———–+————+————-+—————————–+

[root@dallas1 ~(keystone_admin)]$ nova list
+————————————–+———–+———–+————+————-+—————————–+
| ID                                   | Name      | Status    | Task State | Power State | Networks                    |
+————————————–+———–+———–+————+————-+—————————–+
| 56794a06-3c2b-45d1-a024-aeb94be598dc | Cirrus311 | SUSPENDED | None       | Shutdown    | int=10.0.0.2, 192.168.1.101 |
| 2712446b-3442-4af2-a330-c9365736ee73 | UbuntuSL3 | ACTIVE    | None       | Running     | int=10.0.0.6                |
| 6adf0838-bfcf-4980-a0a4-6a541facf9c9 | UbuntuSRV | SUSPENDED | None       | Shutdown    | int=10.0.0.5, 192.168.1.103 |
| 5e43c64d-41cf-413f-b317-0046b070a7a4 | VF20GLS   | ACTIVE    | None       | Running     | int=10.0.0.7, 192.168.1.105 |
| c0e2bea9-aba8-48b2-adce-ceeee47a607b | VF20WXL   | SUSPENDED | None       | Shutdown    | int=10.0.0.4, 192.168.1.102 |
+————————————–+———–+———–+————+————-+—————————–+

Here goes sample on another Cluster :-

First remove one old instance if the count equals 5, then run `nova boot` for the new instance; otherwise there is a big chance to get ERROR & NOSTATE instead of BUILD & spawning status. The log /var/log/nova/scheduler.log will explain the reason for the rejection: the AMQP server cannot be connected to once the instance limit is exceeded. A small pre-flight check is sketched below.
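A minimal, purely illustrative sketch of that check (IMAGE_ID and the server name are placeholders, and the grep pattern assumes the `nova list` table layout shown throughout this post):

# Count instance rows in `nova list`; refuse to boot when five or more already exist
COUNT=$(nova list | grep -cE '^\| [0-9a-f]{8}-')
if [ "$COUNT" -ge 5 ] ; then
    echo "Already $COUNT instances - delete one before booting a new one"
else
    nova boot --flavor 2 --user-data=./myfile.txt --image IMAGE_ID NewServer
fi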

[root@dfw02 ~(keystone_admin)]$ nova boot --flavor 2 --user-data=./myfile.txt --block_device_mapping vda=4cb4c501-c7b1-4c42-ba26-0141fcde038b:::0 VF20SX4


+————————————–+—————————————————-+
| Property                             | Value                                              |
+————————————–+—————————————————-+
| OS-EXT-STS:task_state                | scheduling                                         |
| image                                | Attempt to boot from volume – no image supplied    |
| OS-EXT-STS:vm_state                  | building                                           |
| OS-EXT-SRV-ATTR:instance_name        | instance-0000003c                                  |
| OS-SRV-USG:launched_at               | None                                               |
| flavor                               | m1.small                                           |
| id                                   | 4b619f27-30ba-4bd0-bd03-019b3c5d4bf7               |
| security_groups                      | [{u’name’: u’default’}]                            |
| user_id                              | 970ed56ef7bc41d59c54f5ed8a1690dc                   |
| OS-DCF:diskConfig                    | MANUAL                                             |
| accessIPv4                           |                                                    |
| accessIPv6                           |                                                    |
| progress                             | 0                                                  |
| OS-EXT-STS:power_state               | 0                                                  |
| OS-EXT-AZ:availability_zone          | nova                                               |
| config_drive                         |                                                    |
| status                               | BUILD                                              |
| updated                              | 2014-02-16T06:15:34Z                               |
| hostId                               |                                                    |
| OS-EXT-SRV-ATTR:host                 | None                                               |
| OS-SRV-USG:terminated_at             | None                                               |
| key_name                             | None                                               |
| OS-EXT-SRV-ATTR:hypervisor_hostname  | None                                               |
| name                                 | VF20SX4                                            |
| adminPass                            | C8r6vtF3kHJi                                       |
| tenant_id                            | d0a0acfdb62b4cc8a2bfa8d6a08bb62f                   |
| created                              | 2014-02-16T06:15:33Z                               |
| os-extended-volumes:volumes_attached | [{u’id’: u’4cb4c501-c7b1-4c42-ba26-0141fcde038b’}] |
| metadata                             | {}                                                 |
+————————————–+—————————————————-+

[root@dfw02 ~(keystone_admin)]$ nova list

+————————————–+——————+———–+————+————-+—————————–+
| ID                                   | Name             | Status    | Task State | Power State | Networks                    |
+————————————–+——————+———–+————+————-+—————————–+
| 964fd0b0-b331-4b0c-a1d5-118bf8a40abf | CentOS6.5        | SUSPENDED | None       | Shutdown    | int=10.0.0.5, 192.168.1.105 |
| 3f2db906-567c-48b0-967e-799b2bffe277 | Cirros312        | SUSPENDED | None       | Shutdown    | int=10.0.0.2, 192.168.1.101 |
| 95a36074-5145-4959-b3b3-2651f2ac1a9c | UbuntuSalamander | SUSPENDED | None       | Shutdown    | int=10.0.0.8, 192.168.1.104 |
| 4b619f27-30ba-4bd0-bd03-019b3c5d4bf7 | VF20SX4          | ACTIVE    | None       | Running     | int=10.0.0.4                |
| 55f6e0bc-281e-480d-b88f-193207ea4d4a | VF20XWL          | SUSPENDED | None       | Shutdown    | int=10.0.0.7, 192.168.1.108 |
+————————————–+——————+———–+————+————-+—————————–+

[root@dfw02 ~(keystone_admin)]$ nova show 4b619f27-30ba-4bd0-bd03-019b3c5d4bf7

+————————————–+———————————————————-+
| Property                             | Value                                                    |
+————————————–+———————————————————-+
| status                               | ACTIVE                                                   |
| updated                              | 2014-02-16T06:15:39Z                                     |
| OS-EXT-STS:task_state                | None                                                     |
| OS-EXT-SRV-ATTR:host                 | dfw01.localdomain                                        |
| key_name                             | None                                                     |
| image                                | Attempt to boot from volume – no image supplied          |
| int network                          | 10.0.0.4, 192.168.1.110                                  |
| hostId                               | b67c11ccfa8ac8c9ed019934fa650d307f91e7fe7bbf8cd6874f3e01 |
| OS-EXT-STS:vm_state                  | active                                                   |
| OS-EXT-SRV-ATTR:instance_name        | instance-0000003c                                        |
| OS-SRV-USG:launched_at               | 2014-02-16T06:15:39.000000                               |
| OS-EXT-SRV-ATTR:hypervisor_hostname  | dfw01.localdomain                                        |
| flavor                               | m1.small (2)                                             |
| id                                   | 4b619f27-30ba-4bd0-bd03-019b3c5d4bf7                     |
| security_groups                      | [{u’name’: u’default’}]                                  |
| OS-SRV-USG:terminated_at             | None                                                     |
| user_id                              | 970ed56ef7bc41d59c54f5ed8a1690dc                         |
| name                                 | VF20SX4                                                  |
| created                              | 2014-02-16T06:15:33Z                                     |
| tenant_id                            | d0a0acfdb62b4cc8a2bfa8d6a08bb62f                         |
| OS-DCF:diskConfig                    | MANUAL                                                   |
| metadata                             | {}                                                       |
| os-extended-volumes:volumes_attached | [{u’id’: u’4cb4c501-c7b1-4c42-ba26-0141fcde038b’}]       |
| accessIPv4                           |                                                          |
| accessIPv6                           |                                                          |
| progress                             | 0                                                        |
| OS-EXT-STS:power_state               | 1                                                        |
| OS-EXT-AZ:availability_zone          | nova                                                     |
| config_drive                         |                                                          |
+————————————–+———————————————————-+

  Tenants Network testing

[root@dfw02 ~]#  cat  keystonerc_boris
export OS_USERNAME=boris
export OS_TENANT_NAME=ostenant
export OS_PASSWORD=fedora
export OS_AUTH_URL=http://192.168.1.127:35357/v2.0/
export PS1='[\u@\h \W(keystone_boris)]$ '

[root@dfw02 ~]# . keystonerc_boris

[root@dfw02 ~(keystone_boris)]$ neutron net-list
+————————————–+——+—————————————+
| id                                   | name | subnets                               |
+————————————–+——+—————————————+
| 780ce2f3-2e6e-4881-bbac-857813f9a8e0 | ext  | f30e5a16-a055-4388-a6ea-91ee142efc3d  |
+————————————–+——+—————————————+

[root@dfw02 ~(keystone_boris)]$ neutron router-create router2
Created a new router:
+———————–+————————————–+
| Field                 | Value                                |
+———————–+————————————–+
| admin_state_up        | True                                 |
| external_gateway_info |                                      |
| id                    | 86b3008c-297f-4301-9bdc-766b839785f1 |
| name                  | router2                              |
| status                | ACTIVE                               |
| tenant_id             | 4dacfff9e72c4245a48d648ee23468d5     |
+———————–+————————————–+

[root@dfw02 ~(keystone_boris)]$ neutron router-gateway-set router2 ext
Set gateway for router router2

[root@dfw02 ~(keystone_boris)]$  neutron net-create int1
Created a new network:
+—————-+————————————–+
| Field          | Value                                |
+—————-+————————————–+
| admin_state_up | True                                 |
| id             | 426bb226-0ab9-440d-ba14-05634a17fb2b |
| name           | int1                                 |
| shared         | False                                |
| status         | ACTIVE                               |
| subnets        |                                      |
| tenant_id      | 4dacfff9e72c4245a48d648ee23468d5     |
+—————-+————————————–+

[root@dfw02 ~(keystone_boris)]$ neutron subnet-create int1 40.0.0.0/24 --dns_nameservers list=true 83.221.202.254
Created a new subnet:
+——————+——————————————–+
| Field            | Value                                      |
+——————+——————————————–+
| allocation_pools | {“start”: “40.0.0.2”, “end”: “40.0.0.254”} |
| cidr             | 40.0.0.0/24                                |
| dns_nameservers  | 83.221.202.254                             |
| enable_dhcp      | True                                       |
| gateway_ip       | 40.0.0.1                                   |
| host_routes      |                                            |
| id               | 9e0d457b-c4c4-45cf-84e2-4ac7550f3b06       |
| ip_version       | 4                                          |
| name             |                                            |
| network_id       | 426bb226-0ab9-440d-ba14-05634a17fb2b       |
| tenant_id        | 4dacfff9e72c4245a48d648ee23468d5           |
+——————+——————————————–+

[root@dfw02 ~(keystone_boris)]$  neutron router-interface-add router2 9e0d457b-c4c4-45cf-84e2-4ac7550f3b06
Added interface e031db6b-d0cc-4c57-877b-53b1c6946870 to router router2.

[root@dfw02 ~(keystone_boris)]$ neutron subnet-list
+————————————–+——+————-+——————————————–+
| id                                   | name | cidr        | allocation_pools                           |
+————————————–+——+————-+——————————————–+
| 9e0d457b-c4c4-45cf-84e2-4ac7550f3b06 |      | 40.0.0.0/24 | {“start”: “40.0.0.2”, “end”: “40.0.0.254”} |
+————————————–+——+————-+——————————————–+

[root@dfw02 ~(keystone_boris)]$ neutron security-group-rule-create --protocol icmp \
>   --direction ingress --remote-ip-prefix 0.0.0.0/0 default
Created a new security_group_rule:
+——————-+————————————–+
| Field             | Value                                |
+——————-+————————————–+
| direction         | ingress                              |
| ethertype         | IPv4                                 |
| id                | 4a6deddf-9350-4f98-97d7-a54cf6ebaa9a |
| port_range_max    |                                      |
| port_range_min    |                                      |
| protocol          | icmp                                 |
| remote_group_id   |                                      |
| remote_ip_prefix  | 0.0.0.0/0                            |
| security_group_id | 4b72bcaf-b456-4222-afcc-8885326b96b2 |
| tenant_id         | 4dacfff9e72c4245a48d648ee23468d5     |
+——————-+————————————–+

[root@dfw02 ~(keystone_boris)]$ neutron security-group-rule-create --protocol tcp \
>   --port-range-min 22 --port-range-max 22 \
>   --direction ingress --remote-ip-prefix 0.0.0.0/0 default
Created a new security_group_rule:
+——————-+————————————–+
| Field             | Value                                |
+——————-+————————————–+
| direction         | ingress                              |
| ethertype         | IPv4                                 |
| id                | 7a461936-ffbc-4968-975b-3d27ec975e04 |
| port_range_max    | 22                                   |
| port_range_min    | 22                                   |
| protocol          | tcp                                  |
| remote_group_id   |                                      |
| remote_ip_prefix  | 0.0.0.0/0                            |
| security_group_id | 4b72bcaf-b456-4222-afcc-8885326b96b2 |
| tenant_id         | 4dacfff9e72c4245a48d648ee23468d5     |
+——————-+————————————–+

[root@dfw02 ~(keystone_boris)]$ glance image-list
+————————————–+———————+————-+——————+———–+——–+
| ID                                   | Name                | Disk Format | Container Format | Size      | Status |
+————————————–+———————+————-+——————+———–+——–+
| b9782f24-2f37-4f8d-9467-f622507788bf | CentOS6.5 image     | qcow2       | bare             | 344457216 | active |
| dc992799-7831-4933-b6ee-7b81868f808b | CirrOS31            | qcow2       | bare             | 13147648  | active |
| 03c9ad20-b0a3-4b71-aa08-2728ecb66210 | Fedora 19 x86_64    | qcow2       | bare             | 237371392 | active |
| de93ee44-4085-4111-b022-a7437da8feac | Fedora 20 image     | qcow2       | bare             | 214106112 | active |
| e70591fc-6905-4e57-84b7-4ffa7c001864 | Ubuntu Server 13.10 | qcow2       | bare             | 244514816 | active |
| dd8c11d2-f9b0-4b77-9b5e-534c7ac0fd58 | Ubuntu Trusty image | qcow2       | bare             | 246022144 | active |
+————————————–+———————+————-+——————+———–+——–+

[root@dfw02 ~(keystone_boris)]$ cinder create --image-id de93ee44-4085-4111-b022-a7437da8feac --display_name VF20VLG02 7
+———————+————————————–+
|       Property      |                Value                 |
+———————+————————————–+
|     attachments     |                  []                  |
|  availability_zone  |                 nova                 |
|       bootable      |                false                 |
|      created_at     |      2014-02-21T06:36:21.753407      |
| display_description |                 None                 |
|     display_name    |              VF20VLG02               |
|          id         | c3b09e44-1868-43c6-baaa-1ffcb4b80fb1 |
|       image_id      | de93ee44-4085-4111-b022-a7437da8feac |
|       metadata      |                  {}                  |
|         size        |                  7                   |
|     snapshot_id     |                 None                 |
|     source_volid    |                 None                 |
|        status       |               creating               |
|     volume_type     |                 None                 |
+———————+————————————–+

[root@dfw02 ~(keystone_boris)]$ cinder list
+————————————–+————-+————–+——+————-+———-+————-+
|                  ID                  |    Status   | Display Name | Size | Volume Type | Bootable | Attached to |
+————————————–+————-+————–+——+————-+———-+————-+
| c3b09e44-1868-43c6-baaa-1ffcb4b80fb1 | downloading |  VF20VLG02   |  7   |     None    |  false   |             |
+————————————–+————-+————–+——+————-+———-+————-+

[root@dfw02 ~(keystone_boris)]$ cinder list
+————————————–+———–+————–+——+————-+———-+————-+
|                  ID                  |   Status  | Display Name | Size | Volume Type | Bootable | Attached to |
+————————————–+———–+————–+——+————-+———-+————-+
| c3b09e44-1868-43c6-baaa-1ffcb4b80fb1 | available |  VF20VLG02   |  7   |     None    |   true   |             |
+————————————–+———–+————–+——+————-+———-+————-+
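
Rather than re-running `cinder list` by hand, a small polling loop can wait for the volume to leave the "downloading" state before booting from it. This is only a sketch of mine, not part of the original session; the volume ID is the one created above:

VOL_ID=c3b09e44-1868-43c6-baaa-1ffcb4b80fb1
while true; do
    # Column 4 of the matching cinder table row is the volume status
    STATUS=$(cinder list | awk -v id="$VOL_ID" '$2 == id {print $4}')
    echo "volume status: $STATUS"
    [ "$STATUS" = "available" ] && break
    [ "$STATUS" = "error" ] && { echo "volume creation failed"; break; }
    sleep 10
done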

[root@dfw02 ~(keystone_boris)]$ nova boot --flavor 2  --user-data=./myfile.txt --block_device_mapping vda=c3b09e44-1868-43c6-baaa-1ffcb4b80fb1:::0 VF20XWS
+————————————–+—————————————————-+
| Property                             | Value                                              |
+————————————–+—————————————————-+
| status                               | BUILD                                              |
| updated                              | 2014-02-21T06:49:42Z                               |
| OS-EXT-STS:task_state                | scheduling                                         |
| key_name                             | None                                               |
| image                                | Attempt to boot from volume – no image supplied    |
| hostId                               |                                                    |
| OS-EXT-STS:vm_state                  | building                                           |
| OS-SRV-USG:launched_at               | None                                               |
| flavor                               | m1.small                                           |
| id                                   | c4573327-dd99-4e57-941e-3d35aacb637c               |
| security_groups                      | [{u’name’: u’default’}]                            |
| OS-SRV-USG:terminated_at             | None                                               |
| user_id                              | 162021e787c54cac906ab3296a386006                   |
| name                                 | VF20XWS                                            |
| adminPass                            | YkPYdW58gz7K                                       |
| tenant_id                            | 4dacfff9e72c4245a48d648ee23468d5                   |
| created                              | 2014-02-21T06:49:42Z                               |
| OS-DCF:diskConfig                    | MANUAL                                             |
| metadata                             | {}                                                 |
| os-extended-volumes:volumes_attached | [{u’id’: u’c3b09e44-1868-43c6-baaa-1ffcb4b80fb1′}] |
| accessIPv4                           |                                                    |
| accessIPv6                           |                                                    |
| progress                             | 0                                                  |
| OS-EXT-STS:power_state               | 0                                                  |
| OS-EXT-AZ:availability_zone          | nova                                               |
| config_drive                         |                                                    |
+————————————–+—————————————————-+

[root@dfw02 ~(keystone_boris)]$ nova list
+————————————–+———+——–+————+————-+—————+
| ID                                   | Name    | Status | Task State | Power State | Networks      |
+————————————–+———+——–+————+————-+—————+
| c4573327-dd99-4e57-941e-3d35aacb637c | VF20XWS | ACTIVE | None       | Running     | int1=40.0.0.2 |
+————————————–+———+——–+————+————-+—————+

[root@dfw02 ~(keystone_boris)]$ neutron floatingip-create ext
Created a new floatingip:
+———————+————————————–+
| Field               | Value                                |
+———————+————————————–+
| fixed_ip_address    |                                      |
| floating_ip_address | 192.168.1.115                        |
| floating_network_id | 780ce2f3-2e6e-4881-bbac-857813f9a8e0 |
| id                  | 64dd749f-6127-4d0f-ba51-8a9978b8c211 |
| port_id             |                                      |
| router_id           |                                      |
| tenant_id           | 4dacfff9e72c4245a48d648ee23468d5     |
+———————+————————————–+

[root@dfw02 ~(keystone_boris)]$ neutron port-list --device-id c4573327-dd99-4e57-941e-3d35aacb637c
+————————————–+——+——————-+———————————————————————————+
| id                                   | name | mac_address       | fixed_ips                                                                       |
+————————————–+——+——————-+———————————————————————————+
| 2d6c6569-44c3-44b2-8bed-cdc8dde12336 |      | fa:16:3e:10:a0:e3 | {“subnet_id”: “9e0d457b-c4c4-45cf-84e2-4ac7550f3b06”, “ip_address”: “40.0.0.2”} |
+————————————–+——+——————-+———————————————————————————+

[root@dfw02 ~(keystone_boris)]$ neutron floatingip-associate 64dd749f-6127-4d0f-ba51-8a9978b8c211 2d6c6569-44c3-44b2-8bed-cdc8dde12336
Associated floatingip 64dd749f-6127-4d0f-ba51-8a9978b8c211

[root@dfw02 ~(keystone_boris)]$ neutron floatingip-show 64dd749f-6127-4d0f-ba51-8a9978b8c211
+———————+————————————–+
| Field               | Value                                |
+———————+————————————–+
| fixed_ip_address    | 40.0.0.2                             |
| floating_ip_address | 192.168.1.115                        |
| floating_network_id | 780ce2f3-2e6e-4881-bbac-857813f9a8e0 |
| id                  | 64dd749f-6127-4d0f-ba51-8a9978b8c211 |
| port_id             | 2d6c6569-44c3-44b2-8bed-cdc8dde12336 |
| router_id           | 86b3008c-297f-4301-9bdc-766b839785f1 |
| tenant_id           | 4dacfff9e72c4245a48d648ee23468d5     |
+———————+————————————–+

[root@dfw02 ~(keystone_boris)]$ ping 192.168.1.115
PING 192.168.1.115 (192.168.1.115) 56(84) bytes of data.
64 bytes from 192.168.1.115: icmp_seq=1 ttl=63 time=3.80 ms
64 bytes from 192.168.1.115: icmp_seq=2 ttl=63 time=1.13 ms
64 bytes from 192.168.1.115: icmp_seq=3 ttl=63 time=0.954 ms
64 bytes from 192.168.1.115: icmp_seq=4 ttl=63 time=1.01 ms
64 bytes from 192.168.1.115: icmp_seq=5 ttl=63 time=0.999 ms
64 bytes from 192.168.1.115: icmp_seq=6 ttl=63 time=0.809 ms
64 bytes from 192.168.1.115: icmp_seq=7 ttl=63 time=1.02 ms
^C
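
The floating-IP sequence above (port lookup, allocation, association) can be collapsed into a short script. The sketch below is mine, assumes the instance has exactly one Neutron port, and reuses the IDs from this session only as examples:

SERVER_ID=c4573327-dd99-4e57-941e-3d35aacb637c
# First data row of `neutron port-list` for this instance; column 2 is the port ID
PORT_ID=$(neutron port-list --device-id "$SERVER_ID" | awk 'NR==4 {print $2}')
# Allocate a floating IP on the external network and capture its ID
FIP_ID=$(neutron floatingip-create ext | awk '/ id / {print $4}')
# Bind the floating IP to the instance port and show the result
neutron floatingip-associate "$FIP_ID" "$PORT_ID"
neutron floatingip-show "$FIP_ID"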

The original text of the documents was posted on fedorapeople.org by Kashyap in November 2013.
The attached versions are tuned for the new IPs and should no longer contain the typos of the original version. They also contain the MySQL preventive updates currently required for openstack-nova-compute & neutron-openvswitch-agent to connect remotely to the Controller node; the MySQL part is mine. All attached *.conf & *.ini files have been updated for my network as well.
In the meantime I am quite sure that using libvirt's default and non-default networks for creating the Controller and Compute nodes as F20 VMs is not essential. The configs allow metadata to be sent from the Controller to the Compute node on real physical boxes. Just one Ethernet adapter per box should be required when using GRE tunnelling for an RDO Havana manual setup on Fedora 20.

  References

1. http://textuploader.com/1hin
2. http://textuploader.com/1hey
3. http://kashyapc.fedorapeople.org/virt/openstack/Two-node-Havana-setup.txt
4. http://kashyapc.fedorapeople.org/virt/openstack/neutron-configs-GRE-OVS-two-node.txt
5. http://bderzhavets.blogspot.com/2014/01/setting-up-two-physical-node-openstack.html


Surfing the Internet & SSH connection to a cloud instance of Fedora 20 via Neutron GRE

February 4, 2014

When you meet GRE tunnelling for the first time you have to understand that GRE encapsulation adds 24 bytes of overhead, and a lot of problems arise from that; see http://www.cisco.com/en/US/tech/tk827/tk369/technologies_tech_note09186a0080093f1f.shtml

In particular, the two-node (Controller+Compute) RDO Havana cluster on Fedora 20 hosts that I built per the guidelines from http://kashyapc.wordpress.com/2013/11/23/neutron-configs-for-a-two-node-openstack-havana-setup-on-fedora-20/ was a Neutron GRE cluster. Hence, for any instance that was set up (Fedora or Ubuntu) the network communication problem appeared immediately: apt-get update simply refused to work on an Ubuntu Salamander Server instance (the default MTU value for the Ethernet interface is 1500).

A lightweight X Windows environment (fluxbox) has also been set up on the Fedora 20 cloud instance for quick Internet access.

The solution is simple: set the MTU to 1400 on every cloud instance.

Place the following in /etc/rc.d/rc.local (or /etc/rc.local for Ubuntu Server) :-

#!/bin/sh
ifconfig eth0 mtu 1400 up ;
exit 0
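
An alternative I did not use in this setup, so treat it as an assumption rather than the tested recipe: the 1400 MTU can be pushed to every instance via DHCP by pointing the Neutron DHCP agent at a custom dnsmasq config on the Controller:

# In /etc/neutron/dhcp_agent.ini set:  dnsmasq_config_file = /etc/neutron/dnsmasq-neutron.conf
cat > /etc/neutron/dnsmasq-neutron.conf <<'EOF'
# DHCP option 26 = interface MTU; hand 1400 to all instances
dhcp-option-force=26,1400
EOF
systemctl restart neutron-dhcp-agent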

At least for the time being I don't see problems with the LAN and with routing to the Internet (via a simple D-Link router) on the F19, F20 and Ubuntu 13.10 Server cloud instances or on the LAN's hosts.

For a better understanding of what this is all about, please view http://bderzhavets.blogspot.com/2014/01/setting-up-two-physical-node-openstack.html  [1].

Launch the instance via :

[root@dfw02 ~(keystone_admin)]$ nova boot --flavor 2  --user-data=./myfile.txt  --block_device_mapping vda=3cb671c2-06d8-4b3a-aca6-476b66fb309a:::0 VMF20RS

where

[root@dfw02 ~(keystone_admin)]$ cinder list
+————————————–+——–+————–+——+————-+———-+————————————–+
|                  ID                  | Status | Display Name | Size | Volume Type | Bootable |             Attached to              |
+————————————–+——–+————–+——+————-+———-+————————————–+
| 3cb671c2-06d8-4b3a-aca6-476b66fb309a | available | Fedora20VOL   |  9   |     None    |   true   |                                                                                           |
| 49d5b872-3720-4915-ad1e-ec428e956558 | in-use |   VF20VOL    |  9   |     None    |   true   | 0e0b4f69-4cff-4423-ba9d-71c8eb53af16 |
| b4831720-941f-41a7-b747-1810df49b261 | in-use | UbuntuSALVG  |  7   |     None    |   true   | 5d750d44-0cad-4a02-8432-0ee10e988b2c |
+————————————–+——–+————–+——+————-+———-+————————————–+

and

[root@dfw02 ~(keystone_admin)]$ cat myfile.txt

#cloud-config
password: mysecret
chpasswd: { expire: False }
ssh_pwauth: True
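
The same user-data file could also set the GRE-friendly MTU at first boot via cloud-init's runcmd. This variant is only a sketch of mine (it assumes the guest NIC is eth0) and was not the file used above:

#cloud-config
password: mysecret
chpasswd: { expire: False }
ssh_pwauth: True
runcmd:
 - [ sh, -c, "ifconfig eth0 mtu 1400 up" ]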

Then
[root@dfw02 ~(keystone_admin)]$ nova list

+————————————–+—————+———–+————+————-+—————————–+
| ID                                   | Name          | Status    | Task State | Power State | Networks                    |
+————————————–+—————+———–+————+————-+—————————–+
| 964fd0b0-b331-4b0c-a1d5-118bf8a40abf | CentOS6.5     | SUSPENDED | None       | Shutdown    | int=10.0.0.5, 192.168.1.105 |
| 3f2db906-567c-48b0-967e-799b2bffe277 | Cirros312     | SUSPENDED | None       | Shutdown    | int=10.0.0.2, 192.168.1.101 |
| 5d750d44-0cad-4a02-8432-0ee10e988b2c | UbuntuSaucySL | SUSPENDED | None       | Shutdown    | int=10.0.0.8, 192.168.1.112 |
| 0e0b4f69-4cff-4423-ba9d-71c8eb53af16 | VF20KVM       | SUSPENDED | None       | Shutdown    | int=10.0.0.7, 192.168.1.109 |
| 10306d33-9684-4dab-a017-266fb9ab496a | VMF20RS       | ACTIVE  | None       | Running   | int=10.0.0.4                                  |
+————————————–+—————+———–+————+————-+—————————–+

[root@dfw02 ~(keystone_admin)]$ neutron port-list --device-id 10306d33-9684-4dab-a017-266fb9ab496a

+————————————–+——+——————-+———————————————————————————+
| id                                   | name | mac_address       | fixed_ips                                                                       |
+————————————–+——+——————-+———————————————————————————+
| fa982101-e2d9-4d21-be9d-7d485c792ce1 |      | fa:16:3e:57:e2:67 | {“subnet_id”: “fa930cea-3d51-4cbe-a305-579f12aa53c0”, “ip_address”: “10.0.0.4”} |
+————————————–+——+——————-+——————————————————————————–

[root@dfw02 ~(keystone_admin)]$ neutron floatingip-create ext

Created a new floatingip:
+———————+————————————–+
| Field               | Value                                |
+———————+————————————–+
| fixed_ip_address    |                                      |
| floating_ip_address | 192.168.1.115                        |
| floating_network_id | 780ce2f3-2e6e-4881-bbac-857813f9a8e0 |
| id                  | d9f1b47d-c4b1-4865-92d2-c1d9964a35fb |
| port_id             |                                      |
| router_id           |                                      |
| tenant_id           | d0a0acfdb62b4cc8a2bfa8d6a08bb62f     |
+———————+————————————–+

[root@dfw02 ~(keystone_admin)]$  neutron floatingip-associate d9f1b47d-c4b1-4865-92d2-c1d9964a35fb fa982101-e2d9-4d21-be9d-7d485c792ce1

[root@dfw02 ~(keystone_admin)]$ ping  192.168.1.115

Connect via virt-manager from the Controller to the Compute node and log into the text-mode console as "fedora" with the known password "mysecret". Set the MTU to 1400, create a new sudoer user, then reboot the instance; after that, ssh from the Controller works in the traditional way, as shown right after the short in-guest sketch below.
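
For reference, the in-guest commands boil down to something like the following (the user name "osuser" is an illustrative placeholder of mine, not from the original session):

ifconfig eth0 mtu 1400 up
# On Fedora the wheel group is allowed to sudo, so add the new user to it
useradd -m -G wheel osuser
passwd osuser
reboot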

[root@dfw02 ~(keystone_admin)]$ nova list | grep VMF20RS
| 10306d33-9684-4dab-a017-266fb9ab496a | VMF20RS       | SUSPENDED | resuming   | Shutdown    | int=10.0.0.4, 192.168.1.115 |

[root@dfw02 ~(keystone_admin)]$ nova list | grep VMF20RS

| 10306d33-9684-4dab-a017-266fb9ab496a | VMF20RS       | ACTIVE    | None       | Running     | int=10.0.0.4, 192.168.1.115 |

[root@dfw02 ~(keystone_admin)]$ ssh root@192.168.1.115

root@192.168.1.115’s password:
Last login: Sat Feb  1 12:32:12 2014 from 192.168.1.127
[root@vmf20rs ~]# uname -a
Linux vmf20rs.novalocal 3.12.8-300.fc20.x86_64 #1 SMP Thu Jan 16 01:07:50 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux

[root@vmf20rs ~]# ifconfig
eth0: flags=4163  mtu 1400
inet 10.0.0.4  netmask 255.255.255.0  broadcast 10.0.0.255

inet6 fe80::f816:3eff:fe57:e267  prefixlen 64  scopeid 0x20
ether fa:16:3e:57:e2:67  txqueuelen 1000  (Ethernet)
RX packets 591788  bytes 770176441 (734.4 MiB)
RX errors 0  dropped 0  overruns 0  frame 0
TX packets 196309  bytes 20105918 (19.1 MiB)
TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0

lo: flags=73  mtu 65536
inet 127.0.0.1  netmask 255.0.0.0
inet6 ::1  prefixlen 128  scopeid 0x10
loop  txqueuelen 0  (Local Loopback)
RX packets 2  bytes 140 (140.0 B)
RX errors 0  dropped 0  overruns 0  frame 0
TX packets 2  bytes 140 (140.0 B)
TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0

Text-mode Internet access works as well, for instance via "links" :-

Set up a lightweight X Windows environment on the F20 cloud instance and run the Fedora 20 cloud instance in a Spice session via virt-manager (the Controller connects to the Compute node via virt-manager). The Spice console and QXL are specified in virt-manager, then `nova reboot VF20WRT`.

# yum install xorg-x11-server-Xorg xorg-x11-xdm fluxbox \
xorg-x11-drv-ati xorg-x11-drv-evdev xorg-x11-drv-fbdev \
xorg-x11-drv-intel xorg-x11-drv-mga xorg-x11-drv-nouveau \
xorg-x11-drv-openchrome xorg-x11-drv-qxl xorg-x11-drv-synaptics \
xorg-x11-drv-vesa xorg-x11-drv-vmmouse xorg-x11-drv-vmware \
xorg-x11-drv-wacom xorg-x11-font-utils xorg-x11-drv-modesetting \
xorg-x11-glamor xorg-x11-utils xterm

# yum install dejavu-fonts-common \
dejavu-sans-fonts \
dejavu-sans-mono-fonts \
dejavu-serif-fonts

# echo "exec fluxbox" > ~/.xinitrc
# startx

Fedora 20 cloud instance running in Spice Session via virt-manager with QXL 64 MB of VRAM  :-

Shutting down fluxbox :-

Done

Now run `nova suspend VF20WRT`

Connecting to Fedora 20 cloud instance via spicy from Compute node :-

Fluxbox on Ubuntu 13.10 Server Cloud Instance:-

References

1. http://bderzhavets.blogspot.com/2014/01/setting-up-two-physical-node-openstack.html


Setup Light Weight X Windows environment on Fedora 20 Cloud instance and running F20 cloud instance in Spice session via virt-manager or spicy

February 3, 2014

The following builds a lightweight X Windows environment on a Fedora 20 cloud instance and demonstrates running the same instance in a Spice session via virt-manager (the Controller connects to the Compute node via virt-manager). The Spice console and QXL are specified in virt-manager, then the instance is rebooted via Nova.

This post follows up [1] http://bderzhavets.blogspot.ru/2014/01/setting-up-two-physical-node-openstack.html, getting things on cloud instances ready for work without an openstack-dashboard setup (the RDO Havana administrative web console).

Needless to say, Spice console behaviour with a running X server is much better than in a VNC session, where one X server actually runs inside a client of another one on the Controller node (F20).

The spice-gtk source RPM was installed on both boxes of the cluster and rebuilt:
$ rpm -iv spice-gtk-0.22-1.fc21.src.rpm
$ cd ~/rpmbuild/SPECS
$ sudo yum install intltool gtk2-devel usbredir-devel libusb1-devel libgudev1-devel pixman-devel openssl-devel  libjpeg-turbo-devel celt051-devel pulseaudio-libs-devel pygtk2-devel python-devel zlib-devel cyrus-sasl-devel libcacard-devel gobject-introspection-devel  dbus-glib-devel libacl-devel polkit-devel gtk-doc vala-tools gtk3-devel spice-protocol

$ rpmbuild -bb ./spice-gtk.spec
$ cd ../RPMS/x86_64

The RPMs that were built are then installed, because spicy is not yet on the system:

[boris@dfw02 x86_64]$  sudo yum install spice-glib-0.22-2.fc20.x86_64.rpm \
spice-glib-devel-0.22-2.fc20.x86_64.rpm \
spice-gtk-0.22-2.fc20.x86_64.rpm \
spice-gtk3-0.22-2.fc20.x86_64.rpm \
spice-gtk3-devel-0.22-2.fc20.x86_64.rpm \
spice-gtk3-vala-0.22-2.fc20.x86_64.rpm \
spice-gtk-debuginfo-0.22-2.fc20.x86_64.rpm \
spice-gtk-devel-0.22-2.fc20.x86_64.rpm  \
spice-gtk-python-0.22-2.fc20.x86_64.rpm \
spice-gtk-tools-0.22-2.fc20.x86_64.rpm

Next we install X Windows on F20 to run fluxbox (by the way, after hours of googling I was unable to find the required set of packages and simply picked them up during a KDE environment installation via yum, an environment which I actually don't need at all on a Fedora cloud instance):

# yum install xorg-x11-server-Xorg xorg-x11-xdm fluxbox \
xorg-x11-drv-ati xorg-x11-drv-evdev xorg-x11-drv-fbdev \
xorg-x11-drv-intel xorg-x11-drv-mga xorg-x11-drv-nouveau \
xorg-x11-drv-openchrome xorg-x11-drv-qxl xorg-x11-drv-synaptics \
xorg-x11-drv-vesa xorg-x11-drv-vmmouse xorg-x11-drv-vmware \
xorg-x11-drv-wacom xorg-x11-font-utils xorg-x11-drv-modesetting \
xorg-x11-glamor xorg-x11-utils xterm

Install some fonts :-

# yum install dejavu-fonts-common \
dejavu-sans-fonts \
dejavu-sans-mono-fonts \
dejavu-serif-fonts

We are ready to go :-

# echo "exec fluxbox" > ~/.xinitrc
# startx


Next:  $ yum -y install firefox
Then, via an xterm:
$ /usr/bin/firefox &

Fedora 20 cloud instance running in Spice Session via virt-manager with QXL (64 MB of VRAM)  :-

Connecting via spicy from Compute Node to same F20 instance :-


After port mapping :-
# ssh -L 5900:localhost:5900 -N -f -l root 192.168.1.137
Spicy can then connect from the Controller to the Fedora 20 instance.
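
For example, over the tunnel established above (assuming spicy accepts the usual -h/-p host and port options):

$ spicy -h localhost -p 5900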


“Setting up Two Physical-Node OpenStack RDO Havana + Gluster Backend for Cinder + Neutron GRE” on Fedora 20 boxes with both Controller and Compute nodes each one having one Ethernet adapter

January 24, 2014

**************************************************************
UPDATE on 02/23/2014  
**************************************************************

To create a new instance I must have no more than 4 entries in `nova list`. Then I can sequentially restart the qpidd and openstack-nova-scheduler services (actually, it is not always necessary) and I will be able to create one new instance for sure. This has been tested on two "Two Node Neutron GRE+OVS+Gluster Backend for Cinder" clusters. It is related to the `nova quota-show` values for the tenant (10 instances is the default). Having 3 VMs on Compute, I brought openstack-nova-compute up on the Controller and was able to create 2 more VMs on Compute and 5 VMs on the Controller.
All testing details are here: http://bderzhavets.blogspot.com/2014/02/next-attempt-to-set-up-two-node-neutron.html . Syntax like:

[root@dallas1 ~(keystone_admin)]$  nova quota-defaults
+—————————–+——-+
| Quota                       | Limit |
+—————————–+——-+
| instances                   | 10    |
| cores                       | 20    |
| ram                         | 51200 |
| floating_ips                | 10    |
| fixed_ips                   | -1    |
| metadata_items              | 128   |
| injected_files              | 5     |
| injected_file_content_bytes | 10240 |
| injected_file_path_bytes    | 255   |
| key_pairs                   | 100   |
| security_groups             | 10    |
| security_group_rules        | 20    |
+—————————–+——-+
[root@dallas1 ~(keystone_admin)]$  nova quota-class-update --instances 20 default

[root@dallas1 ~(keystone_admin)]$  nova quota-defaults
+—————————–+——-+
| Quota                       | Limit |
+—————————–+——-+
| instances                   | 20    |
| cores                       | 20    |
| ram                         | 51200 |
| floating_ips                | 10    |
| fixed_ips                   | -1    |
| metadata_items              | 128   |
| injected_files              | 5     |
| injected_file_content_bytes | 10240 |
| injected_file_path_bytes    | 255   |
| key_pairs                   | 100   |
| security_groups             | 10    |
| security_group_rules        | 20    |
+—————————–+——-+

doesn’t work for me
****************************************************************
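
A possible alternative, untested here and therefore only a sketch, is to raise the quota for the individual tenant rather than via the quota class:

# Raise the instance quota for a single tenant (the tenant name "admin" is only an example)
TENANT_ID=$(keystone tenant-list | awk '/ admin / {print $2}')
nova quota-update --instances 20 "$TENANT_ID"
nova quota-show --tenant "$TENANT_ID"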

1. F19 and F20 have been installed via volumes based on glusterfs and show good performance on the Compute node. Yum works stably on F19 and a bit more slowly on F20.
2. CentOS 6.5 was installed only via a glance image (cinder shows ERROR status for the volume); network operations are slower than on the Fedoras.
3. Ubuntu 13.10 Server was installed via a volume based on glusterfs and was able to obtain internal and floating IPs. Network speed is close to Fedora 19.
4. Turning on the Gluster backend for Cinder on the F20 two-node Neutron GRE cluster (Controller+Compute) improves performance significantly. Due to a known F20 bug the glusterfs filesystem was ext4.
5. On any cloud instance the MTU should be set to 1400 for proper communication over the GRE tunnel.

The post below follows up on the two Fedora 20 VMs setup described in :-
  http://kashyapc.fedorapeople.org/virt/openstack/Two-node-Havana-setup.txt
  http://kashyapc.fedorapeople.org/virt/openstack/neutron-configs-GRE-OVS-two-node.txt
  Both cases have been tested above: default and non-default libvirt networks.
In the meantime I believe that using libvirt's networks for creating the Controller and Compute nodes as F20 VMs is not essential. The configs allow metadata to be sent from the Controller to the Compute node on real physical boxes. Just one Ethernet adapter per box should be required when using GRE tunnelling for an RDO Havana manual setup on Fedora 20.
  Current status: F20 requires manual intervention in the MariaDB (MySQL) database regarding the presence of the root & nova passwords at the FQDN of the Controller host. I was also never able to start neutron-server via the account suggested by Kashyap, only as root:password@Controller (FQDN). The Neutron openvswitch agent and the Neutron L3 agent don't start at the point described in the first manual, only once the Neutron metadata agent is up and running. Notice also that in the meantime the services openstack-nova-conductor & openstack-nova-scheduler won't start if the MySQL user table does not yet hold the nova account password at the Controller's FQDN. All these updates are reflected in the reference links attached as text docs.
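
A sketch of the kind of MariaDB intervention meant above (the host name matches this setup, the password values are placeholders, not the real ones):

# On the Controller, as the MySQL root user
mysql -u root -p <<'EOF'
GRANT ALL PRIVILEGES ON nova.* TO 'nova'@'dfw02.localdomain' IDENTIFIED BY 'NOVA_DBPASS';
GRANT ALL PRIVILEGES ON neutron.* TO 'neutron'@'dfw02.localdomain' IDENTIFIED BY 'NEUTRON_DBPASS';
FLUSH PRIVILEGES;
EOF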

The manuals mentioned above require some editing, in the author's opinion, as well.

Manual Setup  for two different physical boxes running Fedora 20 with the most recent `yum -y update`

– Controller node: Nova, Keystone, Cinder, Glance, Neutron (using Open vSwitch plugin and GRE tunneling )

 – Compute node: Nova (nova-compute), Neutron (openvswitch-agent)

dfw02.localdomain  -  Controller (192.168.1.127)

dfw01.localdomain  -  Compute   (192.168.1.137)

Two instances are running on Compute node :-

VF19RS instance has  192.168.1.102 – floating ip ,

CirrOS 3.1 instance has  192.168.1.101 – floating ip

Cloud instances running on the Compute node perform commands like nslookup and traceroute. `yum install` & `yum -y update` work on the Fedora 19 instance; for the time being the network on VF19 is stable but relatively slow. It might be that the Realtek 8169 integrated on the board is not good enough for GRE and that it is a problem of my hardware (dfw01 is built with a Q9550, ASUS P5Q3, 8 GB DDR3 and a SATA 2 Seagate 500 GB). CentOS 6.5 with "RDO Havana+Glusterfs+Neutron VLAN" works much faster on the same box (dual booting with F20). That is a first impression. I've also changed neutron.conf's connection credentials for MySQL to be able to run the neutron-server service. The Neutron L3 agent and Neutron openvswitch agent require some effort to be started on the Controller.
The manual mentioned above requires some editing, in the author's opinion, as well.
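
For Havana the setting in question is the connection line in the [database] section of /etc/neutron/neutron.conf on the Controller; the snippet below is only a sketch with placeholder credentials, not the actual values used here:

# /etc/neutron/neutron.conf
[database]
connection = mysql://neutron:NEUTRON_DBPASS@192.168.1.127/neutron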

[root@dfw02 ~(keystone_admin)]$ openstack-status

== Nova services ==

openstack-nova-api:                     active
openstack-nova-cert:                    inactive  (disabled on boot)
openstack-nova-compute:                 inactive  (disabled on boot)
openstack-nova-network:                 inactive  (disabled on boot)
openstack-nova-scheduler:               active
openstack-nova-volume:                  inactive  (disabled on boot)
openstack-nova-conductor:               active
== Glance services ==
openstack-glance-api:                   active
openstack-glance-registry:              active
== Keystone service ==
openstack-keystone:                     active
== neutron services ==
neutron-server:                         active
neutron-dhcp-agent:                     active
neutron-l3-agent:                       active
neutron-metadata-agent:                 active
neutron-lbaas-agent:                    inactive  (disabled on boot)
neutron-openvswitch-agent:              active
neutron-linuxbridge-agent:              inactive  (disabled on boot)
neutron-ryu-agent:                      inactive  (disabled on boot)
neutron-nec-agent:                      inactive  (disabled on boot)
neutron-mlnx-agent:                     inactive  (disabled on boot)
== Cinder services ==
openstack-cinder-api:                   active
openstack-cinder-scheduler:             active
openstack-cinder-volume:                active
== Ceilometer services ==
openstack-ceilometer-api:               inactive  (disabled on boot)
openstack-ceilometer-central:           inactive  (disabled on boot)
openstack-ceilometer-compute:           active
openstack-ceilometer-collector:         inactive  (disabled on boot)
openstack-ceilometer-alarm-notifier:    inactive  (disabled on boot)
openstack-ceilometer-alarm-evaluator:   inactive  (disabled on boot)
== Support services ==
mysqld:                                 inactive  (disabled on boot)
libvirtd:                               active
openvswitch:                            active
dbus:                                   active
tgtd:                                   active
qpidd:                                  active
== Keystone users ==
+———————————-+———+———+——-+
|                id                |   name  | enabled | email |
+———————————-+———+———+——-+
| 970ed56ef7bc41d59c54f5ed8a1690dc |  admin  |   True  |       |
| 1beeaa4b20454048bf23f7d63a065137 |  cinder |   True  |       |
| 006c2728df9146bd82fab04232444abf |  glance |   True  |       |
| 5922aa93016344d5a5d49c0a2dab458c | neutron |   True  |       |
| af2f251586564b46a4f60cdf5ff6cf4f |   nova  |   True  |       |
+———————————-+———+———+——-+
== Glance images ==
+————————————–+——————+————-+——————+———–+——–+
| ID                                   | Name             | Disk Format | Container Format | Size      | Status |
+————————————–+——————+————-+——————+———–+——–+
| dc992799-7831-4933-b6ee-7b81868f808b | CirrOS31         | qcow2       | bare             | 13147648  | active |
| 03c9ad20-b0a3-4b71-aa08-2728ecb66210 | Fedora 19 x86_64 | qcow2       | bare             | 237371392 | active |
+————————————–+——————+————-+——————+———–+——–+
== Nova managed services ==
 +—————-+——————-+———-+———+——-+—————————-+—————–+
| Binary         | Host              | Zone     | Status  | State | Updated_at                 | Disabled Reason |
+—————-+——————-+———-+———+——-+—————————-+—————–+
| nova-scheduler | dfw02.localdomain | internal | enabled | up    | 2014-01-23T22:36:15.000000 | None            |
| nova-conductor | dfw02.localdomain | internal | enabled | up    | 2014-01-23T22:36:11.000000 | None            |
| nova-compute   | dfw01.localdomain | nova     | enabled | up    | 2014-01-23T22:36:10.000000 | None            |
+—————-+——————-+———-+———+——-+—————————-+—————–+
== Nova networks ==
+————————————–+——-+——+
| ID                                   | Label | Cidr |
+————————————–+——-+——+
| 1eea88bb-4952-4aa4-9148-18b61c22d5b7 | int   | None |
| 780ce2f3-2e6e-4881-bbac-857813f9a8e0 | ext   | None |
+————————————–+——-+——+
== Nova instance flavors ==
+—-+———–+———–+——+———–+——+——-+————-+———–+
| ID | Name      | Memory_MB | Disk | Ephemeral | Swap | VCPUs | RXTX_Factor | Is_Public |
+—-+———–+———–+——+———–+——+——-+————-+———–+
| 1  | m1.tiny   | 512       | 1    | 0         |      | 1     | 1.0         | True      |
| 2  | m1.small  | 2048      | 20   | 0         |      | 1     | 1.0         | True      |
| 3  | m1.medium | 4096      | 40   | 0         |      | 2     | 1.0         | True      |
| 4  | m1.large  | 8192      | 80   | 0         |      | 4     | 1.0         | True      |
| 5  | m1.xlarge | 16384     | 160  | 0         |      | 8     | 1.0         | True      |
+—-+———–+———–+——+———–+——+——-+————-+———–+
== Nova instances ==
+—————&#