TripleO deployment of ‘master’ branch via instack-virt-setup

September 16, 2016

UPDATE 09/23/2016

A fix for bugs 1622683 and 1622720 has been released; see :-
https://bugs.launchpad.net/tripleo/+bug/1622683 
****************************************************
Deploy completed OK the first time
****************************************************

2016-09-23 09:08:28Z [overcloud-AllNodesDeploySteps-yrsd7pkitjij]: CREATE_COMPLETE  Stack CREATE completed successfully
2016-09-23 09:08:28Z [AllNodesDeploySteps]: CREATE_COMPLETE  state changed
2016-09-23 09:08:28Z [overcloud]: CREATE_COMPLETE  Stack CREATE completed successfully
Stack overcloud CREATE_COMPLETE

Overcloud Endpoint: http://10.0.0.6:5000/v2.0
Overcloud Deployed

[stack@instack ~]$ nova list

+--------------------------------------+-------------------------+--------+------------+-------------+---------------------+
| ID                                   | Name                    | Status | Task State | Power State | Networks            |
+--------------------------------------+-------------------------+--------+------------+-------------+---------------------+
| b3d97bcf-9318-48ef-91c7-09c8386a75aa | overcloud-controller-0  | ACTIVE | -          | Running     | ctlplane=192.0.2.13 |
| 148aa223-513d-44d5-b865-2cb2c3dcbc6f | overcloud-novacompute-0 | ACTIVE | -          | Running     | ctlplane=192.0.2.9  |
| e3ee61fb-c243-4454-949d-84c22e66b147 | overcloud-novacompute-1 | ACTIVE | -          | Running     | ctlplane=192.0.2.10 |
+--------------------------------------+-------------------------+--------+------------+-------------+---------------------+

[stack@instack ~]$ mistral environment-list

+-----------+-------------+---------+---------------------+---------------------+
| Name      | Description | Scope   | Created at          | Updated at          |
+-----------+-------------+---------+---------------------+---------------------+
| overcloud | None        | private | 2016-09-23 07:33:40 | 2016-09-23 08:41:29 |
+-----------+-------------+---------+---------------------+---------------------+

[stack@instack ~]$ swift list
ov-jjf6fn4qyjt-0-gfpul73m4fdl-Controller-dekw3w5stcqd
ov-pb3uu5djue-0-lmazr26t3z4u-NovaCompute-sqfaz5lstqov
ov-pb3uu5djue-1-7prlyxolsdhd-NovaCompute-ltmkwmq74iyq
overcloud

[stack@instack ~]$ openstack stack delete overcloud
WARNING: openstackclient.common.utils is deprecated and will be removed after Jun 2017. Please use osc_lib.utils
Are you sure you want to delete this stack(s) [y/N]? y
[stack@instack ~]$ openstack stack list
WARNING: openstackclient.common.utils is deprecated and will be removed after Jun 2017. Please use osc_lib.utils

+--------------------------------------+------------+--------------------+----------------------+--------------+
| ID                                   | Stack Name | Stack Status       | Creation Time        | Updated Time |
+--------------------------------------+------------+--------------------+----------------------+--------------+
| 6e3ae2b6-5ce1-45db-bde5-06d2ce2e571b | overcloud  | DELETE_IN_PROGRESS | 2016-09-23T08:41:38Z | None         |
+--------------------------------------+------------+--------------------+----------------------+--------------+

[stack@instack ~]$ openstack stack list

WARNING: openstackclient.common.utils is deprecated and will be removed after Jun 2017. Please use osc_lib.utils

***************************************************************************
Empty output: the overcloud stack has been deleted
****************************************************************************

[stack@instack ~]$ mistral environment-list

+-----------+-------------+---------+---------------------+---------------------+
| Name      | Description | Scope   | Created at          | Updated at          |
+-----------+-------------+---------+---------------------+---------------------+
| overcloud | None        | private | 2016-09-23 07:33:40 | 2016-09-23 08:41:29 |
+-----------+-------------+---------+---------------------+---------------------+

[stack@instack ~]$ swift list

overcloud

******************************************************************************
Now attempt to redeploy a second time. Success on 09/23/2016
******************************************************************************

[stack@instack ~]$ touch -f  /home/stack/tripleo-heat-templates/puppet/post.yaml
[stack@instack ~]$ ./overcloud-deploy.sh
+ source /home/stack/stackrc
++ export NOVA_VERSION=1.1

++ NOVA_VERSION=1.1
+++ sudo hiera admin_password
++ export OS_PASSWORD=68a350a2972f7ff9e88d0e9ea79056b3e0bb90ec
++ OS_PASSWORD=68a350a2972f7ff9e88d0e9ea79056b3e0bb90ec
++ export OS_AUTH_URL=http://192.0.2.1:5000/v2.0
++ OS_AUTH_URL=http://192.0.2.1:5000/v2.0
++ export OS_USERNAME=admin
++ OS_USERNAME=admin
++ export OS_TENANT_NAME=admin
++ OS_TENANT_NAME=admin
++ export COMPUTE_API_VERSION=1.1
++ COMPUTE_API_VERSION=1.1
++ export OS_BAREMETAL_API_VERSION=1.15
++ OS_BAREMETAL_API_VERSION=1.15
++ export OS_NO_CACHE=True
++ OS_NO_CACHE=True
++ export OS_CLOUDNAME=undercloud
++ OS_CLOUDNAME=undercloud
++ export OS_IMAGE_API_VERSION=1
++ OS_IMAGE_API_VERSION=1
+ openstack overcloud deploy --libvirt-type qemu --ntp-server pool.ntp.org --templates /home/stack/tripleo-heat-templates -e /home/stack/tripleo-heat-templates/overcloud-resource-registry-puppet.yaml -e /home/stack/tripleo-heat-templates/environments/network-isolation.yaml -e /home/stack/tripleo-heat-templates/environments/net-single-nic-with-vlans.yaml -e /home/stack/network_env.yaml --control-scale 1 --compute-scale 2

WARNING: openstackclient.common.utils is deprecated and will be removed after Jun 2017. Please use osc_lib.utils
Removing the current plan files
Uploading new plan files
Started Mistral Workflow. Execution ID: 4d744a89-a2e7-43a5-82af-26bab11e6342
Plan updated
Deploying templates in the directory /home/stack/tripleo-heat-templates
Object GET failed: http://192.0.2.1:8080/v1/AUTH_7ea6220c67c84c828f4249b95886259f/overcloud/overcloud-without-mergepy.yaml 404 Not Found  [first 60 chars of response]

Started Mistral Workflow. Execution ID: 807a7047-a1c3-4686-9be7-11d73e72dfb8
2016-09-23 09:15:34Z [overcloud]: CREATE_IN_PROGRESS  Stack CREATE started
2016-09-23 09:15:34Z [HorizonSecret]: CREATE_IN_PROGRESS  state changed
2016-09-23 09:15:34Z [RabbitCookie]: CREATE_IN_PROGRESS  state changed
2016-09-23 09:15:35Z [ServiceNetMap]: CREATE_IN_PROGRESS  state changed
2016-09-23 09:15:35Z [MysqlRootPassword]: CREATE_IN_PROGRESS  state changed
2016-09-23 09:15:35Z [HeatAuthEncryptionKey]: CREATE_IN_PROGRESS  state changed
2016-09-23 09:15:35Z [PcsdPassword]: CREATE_IN_PROGRESS  state changed
2016-09-23 09:15:35Z [Networks]: CREATE_IN_PROGRESS  state changed
2016-09-23 09:15:35Z [ServiceNetMap]: CREATE_COMPLETE  state changed
2016-09-23 09:15:35Z [RabbitCookie]: CREATE_COMPLETE  state changed
2016-09-23 09:15:35Z [HeatAuthEncryptionKey]: CREATE_COMPLETE  state changed
2016-09-23 09:15:35Z [PcsdPassword]: CREATE_COMPLETE  state changed
2016-09-23 09:15:35Z [HorizonSecret]: CREATE_COMPLETE  state changed

. . . . . .

2016-09-23 09:39:50Z [BlockStorageExtraConfigPost]: CREATE_COMPLETE  state changed
2016-09-23 09:39:51Z [CephStorageExtraConfigPost]: CREATE_COMPLETE  state changed
2016-09-23 09:39:51Z [ComputeExtraConfigPost]: CREATE_COMPLETE  state changed
2016-09-23 09:39:51Z [ObjectStorageExtraConfigPost]: CREATE_COMPLETE  state changed
2016-09-23 09:39:51Z [overcloud-AllNodesDeploySteps-5bfecsxdagiz]: CREATE_COMPLETE  Stack CREATE completed successfully
2016-09-23 09:39:51Z [AllNodesDeploySteps]: CREATE_COMPLETE  state changed
2016-09-23 09:39:51Z [overcloud]: CREATE_COMPLETE  Stack CREATE completed successfully

Stack overcloud CREATE_COMPLETE
Overcloud Endpoint: http://10.0.0.12:5000/v2.0
Overcloud Deployed

END UPDATE

UPDATE 09/21/2016

Workaround for bug 1622720 which allows redeploying a second time.

At run time :-
[stack@instack ~]$ mistral environment-list
+-----------+-------------+---------+---------------------+---------------------+
| Name      | Description | Scope   | Created at          | Updated at          |
+-----------+-------------+---------+---------------------+---------------------+
| overcloud | None        | private | 2016-09-21 12:35:43 | 2016-09-21 12:35:51 |
+-----------+-------------+---------+---------------------+---------------------+

[stack@instack ~]$ swift list
ov-a2o6ekfrck5-0-zesuo2wtu2ed-Controller-ushkojdgxsim
ov-yfn5tgwipf-0-jebdxn5jfduz-NovaCompute-4hjdhzij3czv
ov-yfn5tgwipf-1-vypbavbviwxv-NovaCompute-luo274m3kmn2
overcloud

The listing above is the evidence of the bug: the Mistral environment and the Swift plan containers are left behind.

The next step is the workaround itself, per https://bugs.launchpad.net/tripleo/+bug/1622720/comments/1 :-

[stack@instack ~]$ mistral environment-delete overcloud
Request to delete environment overcloud has been accepted.
[stack@instack ~]$ swift delete --all

$ touch -f /home/stack/tripleo-heat-templates/puppet/post.yaml
$ overcloud-deploy.sh
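The four workaround steps above can be combined into one re-runnable script. This is a sketch: the `run`/`DO_RUN` dry-run gating is my addition; the commands themselves are the ones listed above.

```shell
#!/bin/sh
# Sketch of the bug-1622720 workaround as a single step.
# By default it only prints the commands; set DO_RUN=1 to execute them.
run() {
    if [ "${DO_RUN:-0}" = 1 ]; then "$@"; else echo "$@"; fi
}
cleanup_plan() {
    run mistral environment-delete overcloud
    run swift delete --all
    run touch -f /home/stack/tripleo-heat-templates/puppet/post.yaml
    run ./overcloud-deploy.sh
}
```

Run once as a dry run, inspect the output, then re-run with `DO_RUN=1` from the stack user's home directory.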

See following bugs at Launchpad :-

https://bugs.launchpad.net/tripleo/+bug/1622720
https://bugs.launchpad.net/tripleo/+bug/1622683
https://bugs.launchpad.net/tripleo/+bug/1622720/comments/2

END UPDATE

The Launchpad bug that made introspection hang due to a broken iPXE config was finally resolved on 09/01/2016, so the approach suggested in
TripleO manual deployment of 'master' branch by Carlo Camacho
has been retested. Things have changed in the meantime; what follows is how the post mentioned above worked for me right now on a 32 GB VIRTHOST (i7 4790).

*****************************************
Tune stack environment on VIRTHOST
*****************************************

# useradd stack
# echo "stack:stack" | chpasswd
# echo "stack ALL=(root) NOPASSWD:ALL" | sudo tee -a /etc/sudoers.d/stack
# chmod 0440 /etc/sudoers.d/stack
# su - stack

***************************
Tune stack ENV
**************************

export NODE_DIST=centos7
export NODE_CPU=2
export NODE_MEM=7550
export NODE_COUNT=2
export UNDERCLOUD_NODE_CPU=2
export UNDERCLOUD_NODE_MEM=9000
export FS_TYPE=ext4
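With these values a quick sanity check confirms the VM memory requested from the 32 GB VIRTHOST (two overcloud nodes plus the undercloud VM; the arithmetic below just restates the exports above):

```shell
#!/bin/sh
# Sanity-check the memory footprint of the instack-virt-setup values above.
NODE_MEM=7550 NODE_COUNT=2 UNDERCLOUD_NODE_MEM=9000
TOTAL_MEM=$(( NODE_MEM * NODE_COUNT + UNDERCLOUD_NODE_MEM ))
echo "requested: ${TOTAL_MEM} MB"   # 24100 MB, leaving roughly 8 GB for the host
```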

****************************************************************

Re-login to stack (highlight long line and copy if needed)

****************************************************************
$ sudo yum -y install epel-release sudo
$ sudo yum -y install yum-plugin-priorities
$ sudo curl -o /etc/yum.repos.d/delorean.repo  http://buildlogs.centos.org/centos/7/cloud/x86_64/rdo-trunk-master-tripleo/delorean.repo
$ sudo curl -o /etc/yum.repos.d/delorean-deps.repo  http://trunk.rdoproject.org/centos7/delorean-deps.repo
$ sudo yum install -y instack-undercloud
$ instack-virt-setup

*********************

On instack VM

*********************

Create swap file per http://www.anstack.com/blog/2016/07/04/manually-installing-tripleo-recipe.html  :-

#Add a 4GB swap file to the Undercloud
sudo dd if=/dev/zero of=/swapfile bs=1024 count=4194304
sudo mkswap /swapfile
#Turn ON the swap file
sudo chmod 600 /swapfile
sudo swapon /swapfile
#Enable it on start
echo "/swapfile   swap   swap    defaults        0 0" | sudo tee -a /etc/fstab
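If the swap setup may be re-run, the fstab entry is worth appending idempotently. A minimal sketch, assuming the same `/swapfile` path; the `fstab` parameter is mine, added so the helper can be exercised against a temporary file (on the real system pass `/etc/fstab` and run via sudo):

```shell
#!/bin/sh
# Append the swap entry only if it is not already present, so re-running
# the setup does not duplicate the line in fstab.
add_swap_entry() {
    fstab="$1"    # normally /etc/fstab; parametrised for testing
    grep -q '^/swapfile' "$fstab" 2>/dev/null || \
        echo '/swapfile   swap   swap    defaults        0 0' >> "$fstab"
}
```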

***************************
Restart instack VM
***************************
Next
su - stack
sudo yum -y install yum-plugin-priorities

*************************************
Update .bashrc under ~stack/
*************************************

export USE_DELOREAN_TRUNK=1
export DELOREAN_TRUNK_REPO="http://buildlogs.centos.org/centos/7/cloud/x86_64/rdo-trunk-master-tripleo/"
export DELOREAN_REPO_FILE="delorean.repo"
export FS_TYPE=ext4

************************************

  Re-login to stack

************************************

$ git clone https://github.com/openstack/tripleo-heat-templates
$ git clone https://github.com/openstack-infra/tripleo-ci.git

$ ./tripleo-ci/scripts/tripleo.sh --repo-setup
$ ./tripleo-ci/scripts/tripleo.sh --undercloud
$ source stackrc
$ ./tripleo-ci/scripts/tripleo.sh --overcloud-images
$ ./tripleo-ci/scripts/tripleo.sh --register-nodes
$ ./tripleo-ci/scripts/tripleo.sh --introspect-nodes
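The five tripleo.sh stages above can be driven by one fail-fast loop. A sketch only: the dry-run gating via `DO_RUN` is my addition, and `source stackrc` still has to happen between `--undercloud` and the later stages, as shown above.

```shell
#!/bin/sh
# Run the tripleo.sh stages in order, stopping at the first failure.
# Dry run (prints the commands) unless DO_RUN=1.
run_stages() {
    for stage in --repo-setup --undercloud --overcloud-images \
                 --register-nodes --introspect-nodes; do
        if [ "${DO_RUN:-0}" = 1 ]; then
            ./tripleo-ci/scripts/tripleo.sh "$stage" || {
                echo "failed at $stage" >&2; return 1; }
        else
            echo "./tripleo-ci/scripts/tripleo.sh $stage"
        fi
    done
}
```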

************************************************

Passing the step affected by the bug mentioned above

************************************************

  $ ./tripleo-ci/scripts/tripleo.sh --overcloud-deploy

Issue at start up of Overcloud deployment

###########################################################################################
tripleo.sh — Overcloud create started.
###########################################################################################
See the status of Launchpad bugs 1622720 and 1622683; the UPDATE of 09/21/2016 above provides the links. Backporting the patch https://review.openstack.org/gitweb?p=openstack/tripleo-common.git;a=patch;h=203460176750aeda6c0a2d39ce349ad827053b11
by rebuilding openstack-tripleo-common-5.0.1-0.20160917031337.15c97e6.el7.centos.src.rpm and reinstalling the new rpm didn't work for me.
###########################################################################################
WARNING: openstackclient.common.utils is deprecated and will be removed after Jun 2017. Please use osc_lib.utils
WARNING: openstackclient.common.exceptions is deprecated and will be removed after Jun 2017. Please use osc_lib.exceptions
Creating Swift container to store the plan
Creating plan from template files in: /usr/share/openstack-tripleo-heat-templates/
Plan created
Deploying templates in the directory /usr/share/openstack-tripleo-heat-templates
Object GET failed: http://192.0.2.1:8080/v1/AUTH_b4438648a72446eca04d2d216261c373/overcloud/overcloud-without-mergepy.yaml 404 Not Found  [first 60 chars of response]


Finally the overcloud gets deployed

****************************************************************************************

On the instack VM, verified https://bugs.launchpad.net/tripleo/+bug/1604770 comment #9 :-
****************************************************************************************

[stack@instack ~]$ sudo su -
Last login: Thu Sep 15 16:19:07 UTC 2016 from 192.168.122.1 on pts/1
[root@instack ~]# rpm -qa \*ipxe\*
ipxe-roms-qemu-20160127-1.git6366fa7a.el7.noarch
ipxe-bootimgs-20160127-1.git6366fa7a.el7.noarch

[stack@instack ~]$ openstack stack list

WARNING: openstackclient.common.utils is deprecated and will be removed after Jun 2017. Please use osc_lib.utils

+--------------------------------------+------------+-----------------+----------------------+--------------+
| ID                                   | Stack Name | Stack Status    | Creation Time        | Updated Time |
+--------------------------------------+------------+-----------------+----------------------+--------------+
| 7657df62-da09-4c0f-bbdb-b9c95bdad537 | overcloud  | CREATE_COMPLETE | 2016-09-15T14:48:49Z | None         |
+--------------------------------------+------------+-----------------+----------------------+--------------+

[stack@instack ~]$ nova list

+--------------------------------------+-------------------------+--------+------------+-------------+---------------------+
| ID                                   | Name                    | Status | Task State | Power State | Networks            |
+--------------------------------------+-------------------------+--------+------------+-------------+---------------------+
| 400e1499-5e02-4c92-a41b-814918f0edc3 | overcloud-controller-0  | ACTIVE | -          | Running     | ctlplane=192.0.2.15 |
| 58f3591f-c72f-4d97-9278-a33b3f631248 | overcloud-novacompute-0 | ACTIVE | -          | Running     | ctlplane=192.0.2.6  |
+--------------------------------------+-------------------------+--------+------------+-------------+---------------------+

Management and fixes required in the overcloud

********************************************************************
Fix IP on Compute node && Open 6080 on Controller
********************************************************************

On the compute node, in the [vnc] section of /etc/nova/nova.conf :-

[vnc]
vncserver_proxyclient_address=192.0.2.6
vncserver_listen=0.0.0.0
keymap=en-us
enabled=True
novncproxy_base_url=http://192.0.2.15:6080/vnc_auto.html <===

On the controller :-

Add the following line to /etc/sysconfig/iptables
-A INPUT -p tcp -m multiport --dports 6080 -m comment --comment "novncproxy" -m state --state NEW -j ACCEPT
Save /etc/sysconfig/iptables

#service iptables restart

[root@overcloud-controller-0 ~(keystone_admin)]# netstat -antp | grep 6080

tcp        0      0 192.0.2.15:6080         0.0.0.0:*               LISTEN      8397/python2       
tcp        1      0 192.0.2.8:56080         192.0.2.8:8080          CLOSE_WAIT  11606/gnocchi-metri
tcp        0      0 192.0.2.15:6080         192.0.2.1:47598         ESTABLISHED 28260/python2
tcp        0      0 192.0.2.15:6000         192.0.2.15:36080        TIME_WAIT   -

[root@overcloud-controller-0 ~(keystone_admin)]# ps -ef | grep 8397

nova      8397     1  0 15:06 ?        00:00:05 /usr/bin/python2 /usr/bin/nova-novncproxy --web /usr/share/novnc/
nova     28260  8397  3 17:37 ?        00:00:56 /usr/bin/python2 /usr/bin/nova-novncproxy --web /usr/share/novnc/
root     31149 23941  0 18:06 pts/0    00:00:00 grep --color=auto 8397
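The firewall step above can also be applied idempotently instead of editing /etc/sysconfig/iptables by hand: probe for the rule with `iptables -C` and insert it only when missing. A hypothetical sketch (the helper name is mine; it needs root when actually applied, and `service iptables save` afterwards to persist):

```shell
#!/bin/sh
# Insert the novncproxy rule only if it is not already present.
add_novnc_rule() {
    # The rule arguments, exactly as in the /etc/sysconfig/iptables line above.
    set -- INPUT -p tcp -m multiport --dports 6080 \
           -m comment --comment novncproxy -m state --state NEW -j ACCEPT
    iptables -C "$@" 2>/dev/null || iptables -I "$@"
}
```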

**********************************
Create flavors as follows
**********************************

[root@overcloud-controller-0 ~]# nova flavor-create "m2.small" 2 1000 20 1

+----+----------+-----------+------+-----------+------+-------+-------------+-----------+
| ID | Name     | Memory_MB | Disk | Ephemeral | Swap | VCPUs | RXTX_Factor | Is_Public |
+----+----------+-----------+------+-----------+------+-------+-------------+-----------+
| 2  | m2.small | 1000      | 20   | 0         |      | 1     | 1.0         | True      |
+----+----------+-----------+------+-----------+------+-------+-------------+-----------+

[root@overcloud-controller-0 ~]# nova flavor-list

+----+---------------------+-----------+------+-----------+------+-------+-------------+-----------+
| ID | Name                | Memory_MB | Disk | Ephemeral | Swap | VCPUs | RXTX_Factor | Is_Public |
+----+---------------------+-----------+------+-----------+------+-------+-------------+-----------+
| 1  | 500MB Tiny Instance | 500       | 1    | 0         |      | 1     | 1.0         | True      |
| 2  | m2.small            | 1000      | 20   | 0         |      | 1     | 1.0         | True      |
+----+---------------------+-----------+------+-----------+------+-------+-------------+-----------+

[root@overcloud-controller-0 ~]# glance image-list

+--------------------------------------+---------------+
| ID                                   | Name          |
+--------------------------------------+---------------+
| c9faf86d-4a06-401a-839c-c5bd48ff704a | CirrOS34Cloud |
| 4bf6f43d-8cba-43d7-9e34-347cff2d4769 | UbuntuCloud   |
| 81e031b0-11b7-440b-946f-b8f9e3a83c95 | VF24Cloud     |
+--------------------------------------+---------------+

[root@overcloud-controller-0 ~]# neutron net-list

+--------------------------------------+--------------+--------------------------------------+
| id                                   | name         | subnets                              |
+--------------------------------------+--------------+--------------------------------------+
| 2d0ccb5f-0cc8-4710-819d-7c148137aea2 | public       | 795e0fea-0550-44e8-abf3-afd316cd7843 |
|                                      |              | 192.0.2.0/24                         |
| e2a9edb9-8e01-4e99-83b2-6c6e705967fe | demo_network | 56b70753-e776-4ce8-9b28-650431b43a63 |
|                                      |              | 50.0.0.0/24                          |
+--------------------------------------+--------------+--------------------------------------+

[root@overcloud-controller-0 ~]# nova boot --flavor 2 --key-name oskey09152016 \
--image 81e031b0-11b7-440b-946f-b8f9e3a83c95 \
--nic net-id=e2a9edb9-8e01-4e99-83b2-6c6e705967fe VF24Devs05

+--------------------------------------+--------------------------------------------------+
| Property                             | Value                                            |
+--------------------------------------+--------------------------------------------------+
| OS-DCF:diskConfig                    | MANUAL                                           |
| OS-EXT-AZ:availability_zone          |                                                  |
| OS-EXT-SRV-ATTR:host                 | -                                                |
| OS-EXT-SRV-ATTR:hostname             | vf24devs05                                       |
| OS-EXT-SRV-ATTR:hypervisor_hostname  | -                                                |
| OS-EXT-SRV-ATTR:instance_name        |                                                  |
| OS-EXT-SRV-ATTR:kernel_id            |                                                  |
| OS-EXT-SRV-ATTR:launch_index         | 0                                                |
| OS-EXT-SRV-ATTR:ramdisk_id           |                                                  |
| OS-EXT-SRV-ATTR:reservation_id       | r-psorddod                                       |
| OS-EXT-SRV-ATTR:root_device_name     | -                                                |
| OS-EXT-SRV-ATTR:user_data            | -                                                |
| OS-EXT-STS:power_state               | 0                                                |
| OS-EXT-STS:task_state                | scheduling                                       |
| OS-EXT-STS:vm_state                  | building                                         |
| OS-SRV-USG:launched_at               | -                                                |
| OS-SRV-USG:terminated_at             | -                                                |
| accessIPv4                           |                                                  |
| accessIPv6                           |                                                  |
| adminPass                            | dsFB8vrfUmv4                                     |
| config_drive                         |                                                  |
| created                              | 2016-09-15T12:01:34Z                             |
| description                          | -                                                |
| flavor                               | m2.small (2)                                     |
| hostId                               |                                                  |
| host_status                          |                                                  |
| id                                   | 212e06de-e971-428b-9e94-79dc8d91b6db             |
| image                                | VF24Cloud (81e031b0-11b7-440b-946f-b8f9e3a83c95) |
| key_name                             | oskey09152016                                    |
| locked                               | False                                            |
| metadata                             | {}                                               |
| name                                 | VF24Devs05                                       |
| os-extended-volumes:volumes_attached | []                                               |
| progress                             | 0                                                |
| security_groups                      | default                                          |
| status                               | BUILD                                            |
| tags                                 | []                                               |
| tenant_id                            | a1c9c1c1a1134384b4a496d585981aff                 |
| updated                              | 2016-09-15T12:01:34Z                             |
| user_id                              | e2383104829c45e1a3d70e11cc87d399                 |
+--------------------------------------+--------------------------------------------------+

[root@overcloud-controller-0 ~]# nova list

+--------------------------------------+-------------+--------+------------+-------------+-------------------------------------+
| ID                                   | Name        | Status | Task State | Power State | Networks                            |
+--------------------------------------+-------------+--------+------------+-------------+-------------------------------------+
| c7cea368-9602-421d-beb3-c0ed37379b57 | CirrOSDevs1 | ACTIVE | -          | Running     | demo_network=50.0.0.17, 192.0.2.104 |
| 212e06de-e971-428b-9e94-79dc8d91b6db | VF24Devs05  | BUILD  | spawning   | NOSTATE     | demo_network=50.0.0.15              |
+--------------------------------------+-------------+--------+------------+-------------+-------------------------------------+

[root@overcloud-controller-0 ~]# nova list

+--------------------------------------+-------------+--------+------------+-------------+-------------------------------------+
| ID                                   | Name        | Status | Task State | Power State | Networks                            |
+--------------------------------------+-------------+--------+------------+-------------+-------------------------------------+
| c7cea368-9602-421d-beb3-c0ed37379b57 | CirrOSDevs1 | ACTIVE | -          | Running     | demo_network=50.0.0.17, 192.0.2.104 |
| 212e06de-e971-428b-9e94-79dc8d91b6db | VF24Devs05  | ACTIVE | -          | Running     | demo_network=50.0.0.15              |
+--------------------------------------+-------------+--------+------------+-------------+-------------------------------------+
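Instead of re-running `nova list` by hand until the instance leaves BUILD, a small polling loop can watch the Status column. A sketch, assuming the default table layout shown above (awk field 6 is the Status cell); the helper name and the retry parameters are mine:

```shell
#!/bin/sh
# Poll `nova list` until the named instance becomes ACTIVE (or errors out).
wait_active() {
    name="$1"; tries="${2:-60}"
    while [ "$tries" -gt 0 ]; do
        # In the default table, $4 is the Name cell and $6 the Status cell.
        status=$(nova list | awk -v n="$name" '$4 == n {print $6}')
        [ "$status" = "ACTIVE" ] && return 0
        [ "$status" = "ERROR" ]  && return 1
        sleep 10
        tries=$((tries - 1))
    done
    return 1
}
```

Usage: `wait_active VF24Devs05 && echo ready`.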

Another option is to activate vlan10 following
http://bderzhavets.blogspot.com/2016/07/stable-mitaka-ha-instack-virt-setup.html
and, instead of `./tripleo-ci/scripts/tripleo.sh --overcloud-deploy`,
run the following deployment with network isolation activated :-

#!/bin/bash -x
source /home/stack/stackrc
openstack overcloud deploy \
--control-scale 1 --compute-scale 1 \
--libvirt-type qemu \
--ntp-server pool.ntp.org \
--templates /home/stack/tripleo-heat-templates \
-e /home/stack/tripleo-heat-templates/overcloud-resource-registry-puppet.yaml \
-e /home/stack/tripleo-heat-templates/environments/network-isolation.yaml \
-e /home/stack/tripleo-heat-templates/environments/net-single-nic-with-vlans.yaml \
-e $HOME/network_env.yaml

The presence of overcloud-resource-registry-puppet.yaml among the environment files might explain why the deployment avoided failure even though overcloud-without-mergepy.yaml was not found at its usual location.


Access to TripleO QuickStart overcloud via sshuttle running on F24 WorkStation

August 16, 2016

sshuttle may be installed on Fedora 24 via a straightforward `dnf -y install sshuttle`
[Fedora 24 update: sshuttle-0.78.0-2.fc24].
https://lists.fedoraproject.org/pipermail/package-announce/2016-April/182490.html
So, when F24 has been set up as the workstation for a TripleO QuickStart deployment to VIRTHOST, there is no need to install and tune the FoxyProxy add-on in Firefox, nor to connect from the ansible workstation to the undercloud via `ssh -F ~/.quickstart/ssh.config.ansible undercloud -D 9090`.

What is sshuttle? It's a Python app that uses SSH to create a quick and dirty VPN between your Linux, BSD, or Mac OS X machine and a remote system that has SSH access and Python. Licensed under the GPLv2, sshuttle is a transparent proxy server that lets users fake a VPN with minimal hassle.

========================================
First install and start sshuttle on Fedora 24 :-
========================================

[boris@fedora24wks ~]$ sudo dnf -y install sshuttle
[root@fedora24wks ~]# rpm -qa \*sshuttle\*
sshuttle-0.78.0-2.fc24.noarch

========================================================
Now start sshuttle via ssh.config.ansible, where 10.0.0.0/24 has been set up
as the external network for the overcloud already deployed on VIRTHOST
========================================================

[boris@fedora24wks ~]$ sshuttle -e "ssh -F $HOME/.quickstart/ssh.config.ansible" -r undercloud -v 10.0.0.0/24 &

[3] 16385

[boris@fedora24wks ~]$ Starting sshuttle proxy.
firewall manager: Starting firewall with Python version 3.5.1
firewall manager: ready method name nat.
IPv6 enabled: False
UDP enabled: False
DNS enabled: False
TCP redirector listening on ('127.0.0.1', 12299).
Starting client with Python version 3.5.1
c : connecting to server...
Warning: Permanently added '192.168.1.74' (ECDSA) to the list of known hosts.
Warning: Permanently added 'undercloud' (ECDSA) to the list of known hosts.
Starting server with Python version 2.7.5
s: latency control setting = True
s: available routes:
s: 2/10.0.0.0/24
s: 2/192.0.2.0/24
s: 2/192.168.23.0/24
s: 2/192.168.122.0/24
c : Connected.
firewall manager: setting up.
>> iptables -t nat -N sshuttle-12299
>> iptables -t nat -F sshuttle-12299
>> iptables -t nat -I OUTPUT 1 -j sshuttle-12299
>> iptables -t nat -I PREROUTING 1 -j sshuttle-12299
>> iptables -t nat -A sshuttle-12299 -j REDIRECT --dest 10.0.0.0/24 -p tcp --to-ports 12299 -m ttl ! --ttl 42
>> iptables -t nat -A sshuttle-12299 -j RETURN --dest 127.0.0.1/8 -p tcp
c : Accept TCP: 192.168.1.13:53068 -> 10.0.0.4:80.
c : warning: closed channel 1 got cmd=TCP_STOP_SENDING len=0
c : Accept TCP: 192.168.1.13:53072 -> 10.0.0.4:80.
s: SW’unknown’:Mux#1: deleting (3 remain)
s: SW#6:10.0.0.4:80: deleting (2 remain)
c : warning: closed channel 2 got cmd=TCP_STOP_SENDING len=0
c : Accept TCP: 192.168.1.13:53074 -> 10.0.0.4:80.
s: SW’unknown’:Mux#2: deleting (3 remain)
s: SW#7:10.0.0.4:80: deleting (2 remain)
c : Accept TCP: 192.168.1.13:58210 -> 10.0.0.4:6080.
c : Accept TCP: 192.168.1.13:58212 -> 10.0.0.4:6080.
c : SW’unknown’:Mux#2: deleting (9 remain)
c : SW#11:192.168.1.13:53072: deleting (8 remain)
c : SW’unknown’:Mux#1: deleting (7 remain)
c : SW#9:192.168.1.13:53068: deleting (6 remain)
c : Accept TCP: 192.168.1.13:58214 -> 10.0.0.4:6080.
c : Accept TCP: 192.168.1.13:58216 -> 10.0.0.4:6080.
c : warning: closed channel 4 got cmd=TCP_STOP_SENDING len=0
s: warning: closed channel 4 got cmd=TCP_STOP_SENDING len=0

This creates a transparent proxy server on your local machine for all IP addresses that match 10.0.0.0/24. Any TCP session you initiate to one of the proxied IP addresses will be captured by sshuttle and sent over an ssh session to the remote copy of sshuttle, which will then regenerate the connection on that end, and funnel the data back and forth through ssh. There is no need to install sshuttle on the remote server; the remote server just needs to have python available. sshuttle will automatically upload and run its source code to the remote python.
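The invocation shown earlier can be wrapped in a tiny helper so the subnet and ssh config path become parameters. A sketch using the values from this post (the helper name and the `echo` dry-run are mine; drop the `echo` to actually start the proxy):

```shell
#!/bin/sh
# Build the sshuttle command for a given subnet and ssh config.
tunnel_cmd() {
    subnet="${1:-10.0.0.0/24}"
    cfg="${2:-$HOME/.quickstart/ssh.config.ansible}"
    echo "sshuttle -e 'ssh -F $cfg' -r undercloud -v $subnet"
}
```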

So, disable or remove the FoxyProxy add-on from Firefox (if it has been set up) and terminate any connection from the workstation to the undercloud via `ssh -F ~/.quickstart/ssh.config.ansible undercloud -D 9090`. Restart Firefox and point the browser at http://10.0.0.4/dashboard



TripleO QuickStart HA Setup && Keeping undercloud persistent between cold reboots

June 25, 2016

This post follows up http://lxer.com/module/newswire/view/230814/index.html and might work as a time saver, unless the status of undercloud.qcow2 per http://artifacts.ci.centos.org/artifacts/rdo/images/mitaka/delorean/stable/ requires a fresh installation from scratch.
So, we intend to survive a VIRTHOST cold reboot (downtime), keep the previous undercloud VM and be able to bring it up without a rebuild via quickstart.sh, then resume by logging into the undercloud and immediately running the overcloud deployment. Proceed as follows :-

1. Cleanly delete the overcloud :-

[stack@undercloud ~]$ openstack stack delete overcloud

2. Log into VIRTHOST as stack and gracefully shut down the undercloud :-

[stack@ServerCentOS72 ~]$ virsh shutdown undercloud

**************************************
Shutdown and bring up VIRTHOST
**************************************
Login as root to VIRTHOST :-
[boris@ServerCentOS72 ~]$ sudo su -
[sudo] password for boris:
Last login: Fri Jun 24 16:47:25 MSK 2016 on pts/0

********************************************************************************
This is the core step: do not create /run/user/1001/libvirt as root and then
fix its permissions; just set correct ownership on /run/user. This allows
"stack" to run `virsh list --all` and create /run/user/1001/libvirt
himself. The rest works fine for me.
********************************************************************************

[root@ServerCentOS72 ~]# chown -R stack /run/user
[root@ServerCentOS72 ~]# chgrp -R stack /run/user
[root@ServerCentOS72 ~]# ls -ld  /run/user
drwxr-xr-x. 3 stack stack 60 Jun 24 20:01 /run/user

[root@ServerCentOS72 ~]# su - stack
Last login: Fri Jun 24 16:48:09 MSK 2016 on pts/0
[stack@ServerCentOS72 ~]$ virsh list --all

 Id    Name                           State
----------------------------------------------------
 -     compute_0                      shut off
 -     compute_1                      shut off
 -     control_0                      shut off
 -     control_1                      shut off
 -     control_2                      shut off
 -     undercloud                     shut off

**********************
Make sure :-
**********************
[stack@ServerCentOS72 ~]$ ls -ld /run/user/1001/libvirt
drwx——. 6 stack stack 160 Jun 24 21:38 /run/user/1001/libvirt

[stack@ServerCentOS72 ~]$ virsh start undercloud
Domain undercloud started

[stack@ServerCentOS72 ~]$ virsh list --all

 Id    Name                           State
----------------------------------------------------
 2     undercloud                     running
 -     compute_0                      shut off
 -     compute_1                      shut off
 -     control_0                      shut off
 -     control_1                      shut off
 -     control_2                      shut off

Wait about 5 minutes, then access the undercloud from the workstation :-

[boris@fedora22wks tripleo-quickstart]$ ssh -F /home/boris/.quickstart/ssh.config.ansible undercloud
Warning: Permanently added '192.168.1.75' (ECDSA) to the list of known hosts.
Warning: Permanently added 'undercloud' (ECDSA) to the list of known hosts.
Last login: Fri Jun 24 15:34:40 2016 from gateway
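Rather than a fixed five-minute wait, the undercloud can be polled until SSH answers. `wait_ssh` is a hypothetical helper; the quickstart ssh config path and the `undercloud` host name come from the command above, and TRIES/SLEEP defaults are assumptions.

```shell
# Hypothetical helper: poll until a host in the quickstart ssh config
# accepts SSH connections, instead of guessing how long to wait.
wait_ssh() {
  local host=$1 tries=${TRIES:-60}
  until ssh -F "${SSH_CFG:-$HOME/.quickstart/ssh.config.ansible}" \
            -o BatchMode=yes -o ConnectTimeout=5 "$host" true 2>/dev/null; do
    tries=$((tries - 1))
    [ "$tries" -le 0 ] && return 1
    sleep "${SLEEP:-10}"
  done
}

# Usage: wait_ssh undercloud && ssh -F ~/.quickstart/ssh.config.ansible undercloud
```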

[stack@undercloud ~]$ ls -l
total 1640244

-rw-rw-r--. 1 stack stack   13287936 Jun 24 13:10 cirros.img
-rw-rw-r--. 1 stack stack    3740163 Jun 24 13:10 cirros.initramfs
-rw-rw-r--. 1 stack stack    4979632 Jun 24 13:10 cirros.kernel
-rw-rw-r--. 1  1001  1001      21769 Jun 24 11:56 instackenv.json
-rw-r--r--. 1 root  root   385824684 Jun 24 03:28 ironic-python-agent.initramfs
-rwxr-xr-x. 1 root  root     5158704 Jun 24 03:28 ironic-python-agent.kernel
-rwxr-xr-x. 1 stack stack        487 Jun 24 12:17 network-environment.yaml
-rwxr-xr-x. 1 stack stack        792 Jun 24 12:17 overcloud-deploy-post.sh
-rwxr-xr-x. 1 stack stack       2284 Jun 24 12:17 overcloud-deploy.sh
-rw-rw-r--. 1 stack stack       4324 Jun 24 13:50 overcloud-env.json
-rw-r--r--. 1 root  root    36478203 Jun 24 03:28 overcloud-full.initrd
-rw-r--r--. 1 root  root  1224070144 Jun 24 03:29 overcloud-full.qcow2
-rwxr-xr-x. 1 root  root     5158704 Jun 24 03:29 overcloud-full.vmlinuz
-rw-rw-r--. 1 stack stack        389 Jun 24 14:28 overcloudrc
-rwxr-xr-x. 1 stack stack       3374 Jun 24 12:17 overcloud-validate.sh
-rwxr-xr-x. 1 stack stack        284 Jun 24 12:17 run-tempest.sh
-rw-r--r--. 1 stack stack        161 Jun 24 12:17 skipfile
-rw-------. 1 stack stack        287 Jun 24 12:16 stackrc
-rw-rw-r--. 1 stack stack        232 Jun 24 14:28 tempest-deployer-input.conf
drwxrwxr-x. 9 stack stack       4096 Jun 24 15:23 tripleo-ci
-rw-rw-r--. 1 stack stack       1123 Jun 24 14:28 tripleo-overcloud-passwords
-rw-------. 1 stack stack       6559 Jun 24 11:59 undercloud.conf
-rw-rw-r--. 1 stack stack     782405 Jun 24 12:16 undercloud_install.log
-rwxr-xr-x. 1 stack stack         83 Jun 24 12:00 undercloud-install.sh
-rw-rw-r--. 1 stack stack       1579 Jun 24 12:00 undercloud-passwords.conf
-rw-rw-r--. 1 stack stack       7699 Jun 24 12:17 undercloud_post_install.log
-rwxr-xr-x. 1 stack stack       2780 Jun 24 12:00 undercloud-post-install.sh

[stack@undercloud ~]$ ./overcloud-deploy.sh

This is the fourth redeployment based on the same undercloud VM. The starting
point of the ctlplane DHCP pool obviously keeps increasing.
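To check the current ctlplane allocation-pool starting point, the `neutron subnet-show` output can be filtered. `pool_start` is a hypothetical helper, and the subnet id placeholder is illustrative only.

```shell
# Hypothetical filter: read `neutron subnet-show <subnet>` output on stdin
# and print the "start" address of the first allocation pool, e.g. from a
# row like:  | allocation_pools | {"start": "192.0.2.5", "end": "192.0.2.24"} |
pool_start() {
  awk -F'"' '/"start":/ { print $4; exit }'
}

# Usage (on the undercloud; <ctlplane-subnet-id> is a placeholder):
#   . stackrc
#   neutron subnet-show <ctlplane-subnet-id> | pool_start
```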



Libvirt's pool and volume configuration as built by QuickStart



***************************************************************************
A slightly different way to manage the VMs: log in as stack and invoke
virt-manager via `virt-manager --connect qemu:///session` once /run/user
already has the correct permissions.
***************************************************************************

$ sudo su -
# chown -R stack /run/user
# chgrp -R stack /run/user
^D

[stack@ServerCentOS72 ~]$ virsh list --all
Id Name State
----------------------------------------------------
- compute_0 shut off
- compute_1 shut off
- control_0 shut off
- control_1 shut off
- control_2 shut off
- undercloud shut off

[stack@ServerCentOS72 ~]$ virt-manager --connect qemu:///session
[stack@ServerCentOS72 ~]$ virsh list --all
Id Name State
----------------------------------------------------
2 undercloud running
- compute_0 shut off
- compute_1 shut off
- control_0 shut off
- control_1 shut off
- control_2 shut off



From the workstation, connect to the undercloud :-

[boris@fedora22wks tripleo-quickstart]$ ssh -F /home/boris/.quickstart/ssh.config.ansible undercloud

[stack@undercloud ~]$ ./overcloud-deploy.sh

In several minutes you will see :-




[stack@undercloud ~]$ nova list
+--------------------------------------+-------------------------+--------+------------+-------------+---------------------+
| ID                                   | Name                    | Status | Task State | Power State | Networks            |
+--------------------------------------+-------------------------+--------+------------+-------------+---------------------+
| 40754e8a-461e-4328-b0c4-6740c71e9a0d | overcloud-controller-0  | ACTIVE | -          | Running     | ctlplane=192.0.2.27 |
| df272524-a0bd-4ed7-b95c-92ac779c0b96 | overcloud-controller-1  | ACTIVE | -          | Running     | ctlplane=192.0.2.26 |
| 22802ff4-c472-4500-94d7-415c429073ab | overcloud-controller-2  | ACTIVE | -          | Running     | ctlplane=192.0.2.29 |
| e79a8967-5c81-4ce1-9037-4e07b298d779 | overcloud-novacompute-0 | ACTIVE | -          | Running     | ctlplane=192.0.2.25 |
| 27a7c6ac-a480-4945-b4d5-72e32b3c1886 | overcloud-novacompute-1 | ACTIVE | -          | Running     | ctlplane=192.0.2.28 |
+--------------------------------------+-------------------------+--------+------------+-------------+---------------------+


[stack@undercloud ~]$ ssh heat-admin@192.0.2.27
Last login: Sat Jun 25 09:35:35 2016 from 192.0.2.1
[heat-admin@overcloud-controller-0 ~]$ sudo su -
Last login: Sat Jun 25 09:54:06 UTC 2016 on pts/0

[root@overcloud-controller-0 ~]# .  keystonerc_admin
[root@overcloud-controller-0 ~(keystone_admin)]# pcs status
Cluster name: tripleo_cluster
Last updated: Sat Jun 25 10:04:32 2016        Last change: Sat Jun 25 09:21:21 2016 by root via cibadmin on overcloud-controller-0
Stack: corosync
Current DC: overcloud-controller-2 (version 1.1.13-10.el7_2.2-44eb2dd) - partition with quorum
3 nodes and 127 resources configured

Online: [ overcloud-controller-0 overcloud-controller-1 overcloud-controller-2 ]

Full list of resources:

 ip-172.16.2.5    (ocf::heartbeat:IPaddr2):    Started overcloud-controller-0
 ip-172.16.3.4    (ocf::heartbeat:IPaddr2):    Started overcloud-controller-1
 Clone Set: haproxy-clone [haproxy]
     Started: [ overcloud-controller-0 overcloud-controller-1 overcloud-controller-2 ]
 Master/Slave Set: galera-master [galera]
     Masters: [ overcloud-controller-0 overcloud-controller-1 overcloud-controller-2 ]
 Clone Set: memcached-clone [memcached]
     Started: [ overcloud-controller-0 overcloud-controller-1 overcloud-controller-2 ]
 ip-192.0.2.24    (ocf::heartbeat:IPaddr2):    Started overcloud-controller-2
 ip-10.0.0.4    (ocf::heartbeat:IPaddr2):    Started overcloud-controller-0
 ip-172.16.2.4    (ocf::heartbeat:IPaddr2):    Started overcloud-controller-1
 ip-172.16.1.4    (ocf::heartbeat:IPaddr2):    Started overcloud-controller-2
 Clone Set: rabbitmq-clone [rabbitmq]
     Started: [ overcloud-controller-0 overcloud-controller-1 overcloud-controller-2 ]
 Clone Set: openstack-core-clone [openstack-core]
     Started: [ overcloud-controller-0 overcloud-controller-1 overcloud-controller-2 ]
 Master/Slave Set: redis-master [redis]
     Masters: [ overcloud-controller-1 ]
     Slaves: [ overcloud-controller-0 overcloud-controller-2 ]
 Clone Set: mongod-clone [mongod]
     Started: [ overcloud-controller-0 overcloud-controller-1 overcloud-controller-2 ]
 Clone Set: openstack-aodh-evaluator-clone [openstack-aodh-evaluator]
     Stopped: [ overcloud-controller-0 overcloud-controller-1 overcloud-controller-2 ]
 Clone Set: openstack-nova-scheduler-clone [openstack-nova-scheduler]
     Started: [ overcloud-controller-0 overcloud-controller-1 overcloud-controller-2 ]
 Clone Set: neutron-l3-agent-clone [neutron-l3-agent]
     Started: [ overcloud-controller-0 overcloud-controller-1 overcloud-controller-2 ]
 Clone Set: neutron-netns-cleanup-clone [neutron-netns-cleanup]
     Started: [ overcloud-controller-0 overcloud-controller-1 overcloud-controller-2 ]
 Clone Set: neutron-ovs-cleanup-clone [neutron-ovs-cleanup]
     Started: [ overcloud-controller-0 overcloud-controller-1 overcloud-controller-2 ]
 openstack-cinder-volume    (systemd:openstack-cinder-volume):    Started overcloud-controller-0
 Clone Set: openstack-heat-engine-clone [openstack-heat-engine]
     Started: [ overcloud-controller-0 overcloud-controller-1 overcloud-controller-2 ]
 Clone Set: openstack-ceilometer-api-clone [openstack-ceilometer-api]
     Stopped: [ overcloud-controller-0 overcloud-controller-1 overcloud-controller-2 ]
 Clone Set: openstack-aodh-listener-clone [openstack-aodh-listener]
     Stopped: [ overcloud-controller-0 overcloud-controller-1 overcloud-controller-2 ]
 Clone Set: neutron-metadata-agent-clone [neutron-metadata-agent]
     Started: [ overcloud-controller-0 overcloud-controller-1 overcloud-controller-2 ]
 Clone Set: openstack-gnocchi-metricd-clone [openstack-gnocchi-metricd]
     Started: [ overcloud-controller-0 overcloud-controller-1 overcloud-controller-2 ]
 Clone Set: openstack-aodh-notifier-clone [openstack-aodh-notifier]
     Stopped: [ overcloud-controller-0 overcloud-controller-1 overcloud-controller-2 ]
 Clone Set: openstack-heat-api-clone [openstack-heat-api]
     Started: [ overcloud-controller-0 overcloud-controller-1 overcloud-controller-2 ]
 Clone Set: openstack-ceilometer-collector-clone [openstack-ceilometer-collector]
     Stopped: [ overcloud-controller-0 overcloud-controller-1 overcloud-controller-2 ]
 Clone Set: openstack-glance-api-clone [openstack-glance-api]
     Started: [ overcloud-controller-0 overcloud-controller-1 overcloud-controller-2 ]
 Clone Set: openstack-cinder-scheduler-clone [openstack-cinder-scheduler]
     Started: [ overcloud-controller-0 overcloud-controller-1 overcloud-controller-2 ]
 Clone Set: openstack-nova-api-clone [openstack-nova-api]
     Started: [ overcloud-controller-0 overcloud-controller-1 overcloud-controller-2 ]
 Clone Set: openstack-nova-consoleauth-clone [openstack-nova-consoleauth]
     Started: [ overcloud-controller-0 overcloud-controller-1 overcloud-controller-2 ]
 Clone Set: openstack-sahara-api-clone [openstack-sahara-api]
     Started: [ overcloud-controller-0 overcloud-controller-1 overcloud-controller-2 ]
 Clone Set: openstack-heat-api-cloudwatch-clone [openstack-heat-api-cloudwatch]
     Started: [ overcloud-controller-0 overcloud-controller-1 overcloud-controller-2 ]
 Clone Set: openstack-sahara-engine-clone [openstack-sahara-engine]
     Started: [ overcloud-controller-0 overcloud-controller-1 overcloud-controller-2 ]
 Clone Set: openstack-glance-registry-clone [openstack-glance-registry]
     Started: [ overcloud-controller-0 overcloud-controller-1 overcloud-controller-2 ]
 Clone Set: openstack-gnocchi-statsd-clone [openstack-gnocchi-statsd]
     Stopped: [ overcloud-controller-0 overcloud-controller-1 overcloud-controller-2 ]
 Clone Set: openstack-ceilometer-notification-clone [openstack-ceilometer-notification]
     Started: [ overcloud-controller-0 overcloud-controller-1 overcloud-controller-2 ]
 Clone Set: openstack-cinder-api-clone [openstack-cinder-api]
     Started: [ overcloud-controller-0 overcloud-controller-1 overcloud-controller-2 ]
 Clone Set: neutron-dhcp-agent-clone [neutron-dhcp-agent]
     Started: [ overcloud-controller-0 overcloud-controller-1 overcloud-controller-2 ]
 Clone Set: neutron-openvswitch-agent-clone [neutron-openvswitch-agent]
     Started: [ overcloud-controller-0 overcloud-controller-1 overcloud-controller-2 ]
 Clone Set: openstack-nova-novncproxy-clone [openstack-nova-novncproxy]
     Started: [ overcloud-controller-0 overcloud-controller-1 overcloud-controller-2 ]
 Clone Set: delay-clone [delay]
     Stopped: [ overcloud-controller-0 overcloud-controller-1 overcloud-controller-2 ]
 Clone Set: neutron-server-clone [neutron-server]
     Started: [ overcloud-controller-0 overcloud-controller-1 overcloud-controller-2 ]
 Clone Set: openstack-ceilometer-central-clone [openstack-ceilometer-central]
     Stopped: [ overcloud-controller-0 overcloud-controller-1 overcloud-controller-2 ]
 Clone Set: httpd-clone [httpd]
     Started: [ overcloud-controller-0 overcloud-controller-1 overcloud-controller-2 ]
 Clone Set: openstack-heat-api-cfn-clone [openstack-heat-api-cfn]
     Started: [ overcloud-controller-0 overcloud-controller-1 overcloud-controller-2 ]
 Clone Set: openstack-nova-conductor-clone [openstack-nova-conductor]
     Started: [ overcloud-controller-0 overcloud-controller-1 overcloud-controller-2 ]

Failed Actions:
* openstack-aodh-evaluator_monitor_60000 on overcloud-controller-1 'not running' (7): call=92, status=complete, exitreason='none',
    last-rc-change='Sat Jun 25 09:16:45 2016', queued=0ms, exec=0ms
* openstack-gnocchi-metricd_monitor_60000 on overcloud-controller-1 'not running' (7): call=355, status=complete, exitreason='none',
    last-rc-change='Sat Jun 25 10:00:10 2016', queued=0ms, exec=0ms
* openstack-gnocchi-statsd_start_0 on overcloud-controller-1 'not running' (7): call=313, status=complete, exitreason='none',
    last-rc-change='Sat Jun 25 09:20:51 2016', queued=0ms, exec=2101ms
* openstack-ceilometer-central_start_0 on overcloud-controller-1 'not running' (7): call=328, status=complete, exitreason='none',
    last-rc-change='Sat Jun 25 09:23:05 2016', queued=0ms, exec=2121ms
* openstack-aodh-evaluator_monitor_60000 on overcloud-controller-0 'not running' (7): call=97, status=complete, exitreason='none',
    last-rc-change='Sat Jun 25 09:16:43 2016', queued=0ms, exec=0ms
* openstack-gnocchi-metricd_monitor_60000 on overcloud-controller-0 'not running' (7): call=365, status=complete, exitreason='none',
    last-rc-change='Sat Jun 25 10:00:12 2016', queued=0ms, exec=0ms
* openstack-gnocchi-statsd_start_0 on overcloud-controller-0 'not running' (7): call=324, status=complete, exitreason='none',
    last-rc-change='Sat Jun 25 09:22:32 2016', queued=0ms, exec=2237ms
* openstack-ceilometer-central_start_0 on overcloud-controller-0 'not running' (7): call=342, status=complete, exitreason='none',
    last-rc-change='Sat Jun 25 09:23:32 2016', queued=0ms, exec=2200ms
* openstack-aodh-evaluator_monitor_60000 on overcloud-controller-2 'not running' (7): call=94, status=complete, exitreason='none',
    last-rc-change='Sat Jun 25 09:16:47 2016', queued=0ms, exec=0ms
* openstack-gnocchi-metricd_monitor_60000 on overcloud-controller-2 'not running' (7): call=353, status=complete, exitreason='none',
    last-rc-change='Sat Jun 25 10:00:08 2016', queued=0ms, exec=0ms
* openstack-gnocchi-statsd_start_0 on overcloud-controller-2 'not running' (7): call=318, status=complete, exitreason='none',
    last-rc-change='Sat Jun 25 09:22:39 2016', queued=0ms, exec=2113ms
* openstack-ceilometer-central_start_0 on overcloud-controller-2 'not running' (7): call=322, status=complete, exitreason='none',
    last-rc-change='Sat Jun 25 09:22:48 2016', queued=0ms, exec=2123ms



PCSD Status:
overcloud-controller-0: Online
overcloud-controller-1: Online
overcloud-controller-2: Online

Daemon Status:
corosync: active/enabled
pacemaker: active/enabled
pcsd: active/enabled
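The long `pcs status` listing above mixes Started and Stopped clone sets (the aodh, ceilometer and gnocchi clones are Stopped). A small awk filter makes the stopped ones easy to spot; `stopped_clones` is a hypothetical helper, not a pcs feature.

```shell
# Hypothetical filter: read `pcs status` output on stdin and print the
# names of the clone sets pacemaker currently reports as Stopped.
stopped_clones() {
  awk '/Clone Set:/ { name = $3 } /^[[:space:]]*Stopped:/ { print name }'
}

# Usage (on a controller): pcs status | stopped_clones
```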


RDO TripleO QuickStart HA Setup on Intel Core i7-4790 Desktop (work in progress)

June 18, 2016

This post follows up https://www.linux.com/blog/rdo-triple0-quickstart-ha-setup-intel-core-i7-4790-desktop
In the meantime, undercloud-install and undercloud-post-install (openstack undercloud install, openstack overcloud image upload) are supposed to be performed during the original `bash quickstart.sh --config /path-to/ha.yml $VIRTHOST` run. Neutron network deployment on the undercloud and the HA server configuration have been significantly rebuilt since 06/03/2016. I believe the design below is close to the one proposed in https://remote-lab.net/rdo-manager-ha-openstack-deployment
However , attempt to reproduce http://docs.openstack.org/developer/tripleo-docs/installation/installation.html
results in a hang on `openstack undercloud install`, when it attempts to start openstack-nova-compute on the undercloud. Nova-compute.log reports a failure to connect to 127.0.0.1:5672. Verification via `netstat -antp | grep 5672` shows port 5672 bound only to 192.0.2.1 (the ctlplane IP address).
See also https://www.redhat.com/archives/rdo-list/2016-March/msg00171.html
Quoting (the complaints are not mine) :-
By the way, I’d love to see and help to have an complete installation guide for TripleO powered by RDO on the RDO site (the instack virt setup without quickstart . . . . 
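The netstat check quoted above can be scripted. `bind_addrs` is a hypothetical filter over `netstat -ant`-style output; it assumes IPv4 `addr:port` entries in the local-address column.

```shell
# Hypothetical filter: read `netstat -ant` output on stdin and print the
# local address(es) on which PORT is in LISTEN state.
bind_addrs() {
  awk -v p="$1" '$6 == "LISTEN" && $4 ~ ":" p "$" { split($4, a, ":"); print a[1] }'
}

# Usage (on the undercloud):
#   netstat -ant | bind_addrs 5672
# In the failure mode described above this prints 192.0.2.1 only, while
# nova-compute tries to reach 127.0.0.1:5672.
```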

*****************************
Start on workstation :-
*****************************

$ git clone https://github.com/openstack/tripleo-quickstart
$ cd tripleo-quickstart
$ sudo bash quickstart.sh --install-deps
$ sudo yum -y  install redhat-rpm-config
$ export VIRTHOST=192.168.1.75 #put your own IP here
$ ssh-keygen
$ ssh-copy-id root@$VIRTHOST
$ ssh root@$VIRTHOST uname -a # no root login prompt
######################
# Template code
######################
compute_memory: 6144
compute_vcpu: 1
undercloud_memory: 8192

# Giving the undercloud additional CPUs can greatly improve heat’s
# performance (and result in a shorter deploy time).

undercloud_vcpu: 4

# Create three controller nodes and one compute node.

overcloud_nodes:
  - name: control_0
    flavor: control
  - name: control_1
    flavor: control
  - name: control_2
    flavor: control
  - name: compute_0
    flavor: compute
  - name: compute_1
    flavor: compute

# We don’t need introspection in a virtual environment (because we are
# creating all the "hardware" we really know the necessary
# information).
introspect: false
# Tell tripleo about our environment.

network_isolation: true
extra_args: >-
  --control-scale 3 --compute-scale 2 --neutron-network-type vxlan
  --neutron-tunnel-types vxlan
  -e /usr/share/openstack-tripleo-heat-templates/environments/puppet-pacemaker.yaml
  --ntp-server pool.ntp.org
deploy_timeout: 75
tempest: false
pingtest: true

***********************************************
Then run under tripleo-quickstart
***********************************************

$ bash quickstart.sh --config ./config/general_config/ha.yml  $VIRTHOST

During this run the most important is to reach this point on VIRTHOST

[root@ServerCentOS72 ~]# cd /var/cache/tripleo-quickstart/images

[root@ServerCentOS72 images]# ls -l
total 2638232
-rw-rw-r--. 1 stack stack 2701548544 Jun 17 19:25 83e62624dd7bd637dada343bbf4fe8f1.qcow2
lrwxrwxrwx. 1 stack stack         75 Jun 17 19:25 latest-undercloud.qcow2 -> /var/cache/tripleo-quickstart/images/83e62624dd7bd637dada343bbf4fe8f1.qcow2

Saturday 18 June 2016  12:07:05 +0300 (0:00:00.124)       0:26:21.276

===============================================================================
 tripleo/undercloud : Install the undercloud -------------------------- 1155.95s
/home/boris/tripleo-quickstart/roles/tripleo/undercloud/tasks/install-undercloud.yml:1 
setup/undercloud : Get undercloud vm ip address ------------------------ 81.26s
/home/boris/tripleo-quickstart/roles/libvirt/setup/undercloud/tasks/main.yml:173 
setup/undercloud : Resize undercloud image (call virt-resize) ---------- 76.39s
/home/boris/tripleo-quickstart/roles/libvirt/setup/undercloud/tasks/main.yml:122 
tripleo/undercloud : Prepare the undercloud for deploy ----------------- 70.15s
/home/boris/tripleo-quickstart/roles/tripleo/undercloud/tasks/post-install.yml:27 
setup/undercloud : Upload undercloud volume to storage pool ------------ 53.20s
/home/boris/tripleo-quickstart/roles/libvirt/setup/undercloud/tasks/main.yml:142 
setup/undercloud : Copy instackenv.json to appliance ------------------- 35.25s
/home/boris/tripleo-quickstart/roles/libvirt/setup/undercloud/tasks/main.yml:53
setup/undercloud : Get qcow2 image from cache -------------------------- 32.77s
/home/boris/tripleo-quickstart/roles/libvirt/setup/undercloud/tasks/fetch_image.yml:144 
setup/undercloud : Inject undercloud ssh public key to appliance -------- 7.07s
/home/boris/tripleo-quickstart/roles/libvirt/setup/undercloud/tasks/main.yml:72 
setup ------------------------------------------------------------------- 6.68s
None --------------------------------------------------------------------------
setup/undercloud : Perform selinux relabel on undercloud image ---------- 3.47s
/home/boris/tripleo-quickstart/roles/libvirt/setup/undercloud/tasks/main.yml:94
environment/teardown : Check if libvirt is available -------------------- 1.99s
/home/boris/tripleo-quickstart/roles/environment/teardown/tasks/main.yml:8 ----
setup ------------------------------------------------------------------- 1.92s
/home/boris/.quickstart/playbooks/provision.yml:29 ----------------------------
setup ------------------------------------------------------------------- 1.90s
None --------------------------------------------------------------------------
setup ------------------------------------------------------------------- 1.81s
None --------------------------------------------------------------------------
parts/libvirt : Install packages for libvirt ---------------------------- 1.78s
/home/boris/tripleo-quickstart/roles/parts/libvirt/tasks/main.yml:5 -----------
setup/overcloud : Create overcloud vm storage --------------------------- 1.57s
/home/boris/tripleo-quickstart/roles/libvirt/setup/overcloud/tasks/main.yml:55 
setup/overcloud : Define overcloud vms ---------------------------------- 1.48s
/home/boris/tripleo-quickstart/roles/libvirt/setup/overcloud/tasks/main.yml:67 
provision/teardown : Remove non-root user account ----------------------- 1.41s
/home/boris/tripleo-quickstart/roles/provision/teardown/tasks/main.yml:47 -----
provision/teardown : Wait for processes to exit ------------------------- 1.41s
/home/boris/tripleo-quickstart/roles/provision/teardown/tasks/main.yml:27 -----
environment/teardown : Stop libvirt networks ---------------------------- 1.35s
/home/boris/tripleo-quickstart/roles/environment/teardown/tasks/main.yml:29 ---

+ set +x

##################################
Virtual Environment Setup Complete
##################################

Access the undercloud by:
ssh -F /home/boris/.quickstart/ssh.config.ansible undercloud
There are scripts in the home directory to continue the deploy:
1. overcloud-deploy.sh will deploy the overcloud

The detailed syntax of `openstack overcloud deploy --templates … `,
captured by the snapshot below, may be compared with https://remote-lab.net/rdo-manager-ha-openstack-deployment

$ openstack overcloud deploy --control-scale 3 --compute-scale 2  \
--libvirt-type qemu --ntp-server pool.ntp.org --templates ~/the-cloud/  \
-e ~/the-cloud/environments/puppet-pacemaker.yaml  \
-e ~/the-cloud/environments/network-isolation.yaml  \
-e ~/the-cloud/environments/net-single-nic-with-vlans.yaml  \
-e ~/the-cloud/environments/network-environment.yaml

Screenshot from 2016-06-19 14-29-39
  2.   overcloud-deploy-post.sh will do any post-deploy configuration
  3.   overcloud-validate.sh will run post-deploy validation

Alternatively, you can ignore these scripts and follow the upstream docs,
starting from the overcloud deploy section:

http://ow.ly/1Vc1301iBlb

Then run the 3 scripts mentioned above

[stack@undercloud ~]$ . stackrc
[stack@undercloud ~]$ heat stack-list
+--------------------------------------+------------+-----------------+---------------------+--------------+
| id                                   | stack_name | stack_status    | creation_time       | updated_time |
+--------------------------------------+------------+-----------------+---------------------+--------------+
| 356243b1-a071-45c8-8083-85b9a12532c6 | overcloud  | CREATE_COMPLETE | 2016-06-18T09:09:40 | None         |
+--------------------------------------+------------+-----------------+---------------------+--------------+

[stack@undercloud ~]$ nova list
+--------------------------------------+-------------------------+--------+------------+-------------+---------------------+
| ID                                   | Name                    | Status | Task State | Power State | Networks            |
+--------------------------------------+-------------------------+--------+------------+-------------+---------------------+
| dbb233ab-9108-4a22-b0dd-44c6ef9a481a | overcloud-controller-0  | ACTIVE | -          | Running     | ctlplane=192.0.2.11 |
| 1a91083e-e1ba-43c3-8ad2-78500f6b3ecb | overcloud-controller-1  | ACTIVE | -          | Running     | ctlplane=192.0.2.7  |
| 0b3f6ec8-0a13-4f40-b9e3-4557f1b8c7a3 | overcloud-controller-2  | ACTIVE | -          | Running     | ctlplane=192.0.2.9  |
| 97a8a546-72a0-4431-8065-c1f81103ee25 | overcloud-novacompute-0 | ACTIVE | -          | Running     | ctlplane=192.0.2.10 |
| e87a79db-75f8-437f-8ed7-f29aacfe7339 | overcloud-novacompute-1 | ACTIVE | -          | Running     | ctlplane=192.0.2.8  |
+--------------------------------------+-------------------------+--------+------------+-------------+---------------------+

[stack@undercloud ~]$ neutron net-list
+--------------------------------------+--------------+----------------------------------------+
| id                                   | name         | subnets                                |
+--------------------------------------+--------------+----------------------------------------+
| cde382ae-a7fa-4ebb-bbdc-9e2af9c0df83 | external     | 42fac214-7177-4b4f-8778-105015ed30da   |
|                                      |              | 10.0.0.0/24                            |
| 5fc97bca-fa67-4ede-b4d3-8234c0ace5e5 | storage_mgmt | 719f9a19-2f1d-4eed-914a-430468086f10   |
|                                      |              | 172.16.3.0/24                          |
| 4236d358-b4cd-4fb9-a337-f8a421bb13cd | tenant       | d6f1e772-c0a1-4869-a9bc-b551faf5be8e   |
|                                      |              | 172.16.0.0/24                          |
| a4155b70-a4d8-41bf-bbe6-a5f4e248c5ad | ctlplane     | 199a8e99-d9c7-43f2-8ccd-6a59b8424362   |
|                                      |              | 192.0.2.0/24                           |
| fae53fb0-c5da-427f-b473-bfaa0ab21877 | internal_api | 5f2ff369-1000-4361-8131-b0ae69821b9f   |
|                                      |              | 172.16.2.0/24                          |
| 41862220-b9e6-4000-8341-9fbdb34b47f5 | storage      | d0cf1cac-f841-41dd-923d-47d164c07d0f   |
|                                      |              | 172.16.1.0/24                          |
+--------------------------------------+--------------+----------------------------------------+

[stack@undercloud ~]$ cat overcloudrc
export OS_NO_CACHE=True
export OS_CLOUDNAME=overcloud
export OS_AUTH_URL=http://10.0.0.4:5000/v2.0
export NOVA_VERSION=1.1
export COMPUTE_API_VERSION=1.1
export OS_USERNAME=admin
export no_proxy=,10.0.0.4,192.0.2.6
export OS_PASSWORD=gdjYmYMdB6aWX8PjBUWdCHkem
export PYTHONWARNINGS="ignore:Certificate has no, ignore:A true SSLContext object is not available"
export OS_TENANT_NAME=admin

[stack@undercloud ~]$ nova list
+--------------------------------------+-------------------------+--------+------------+-------------+---------------------+
| ID                                   | Name                    | Status | Task State | Power State | Networks            |
+--------------------------------------+-------------------------+--------+------------+-------------+---------------------+
| dbb233ab-9108-4a22-b0dd-44c6ef9a481a | overcloud-controller-0  | ACTIVE | -          | Running     | ctlplane=192.0.2.11 |
| 1a91083e-e1ba-43c3-8ad2-78500f6b3ecb | overcloud-controller-1  | ACTIVE | -          | Running     | ctlplane=192.0.2.7  |
| 0b3f6ec8-0a13-4f40-b9e3-4557f1b8c7a3 | overcloud-controller-2  | ACTIVE | -          | Running     | ctlplane=192.0.2.9  |
| 97a8a546-72a0-4431-8065-c1f81103ee25 | overcloud-novacompute-0 | ACTIVE | -          | Running     | ctlplane=192.0.2.10 |
| e87a79db-75f8-437f-8ed7-f29aacfe7339 | overcloud-novacompute-1 | ACTIVE | -          | Running     | ctlplane=192.0.2.8  |
+--------------------------------------+-------------------------+--------+------------+-------------+---------------------+

[stack@undercloud ~]$ ssh heat-admin@192.0.2.11

The authenticity of host '192.0.2.11 (192.0.2.11)' can't be established.
ECDSA key fingerprint is 74:99:da:b1:c8:ac:58:e6:65:c1:51:45:64:e4:e9:ed.
Are you sure you want to continue connecting (yes/no)? yes
Warning: Permanently added '192.0.2.11' (ECDSA) to the list of known hosts.
Last login: Sat Jun 18 09:52:37 2016 from 192.0.2.1
[heat-admin@overcloud-controller-0 ~]$ sudo su -
[root@overcloud-controller-0 ~]# vi keystonerc_admin
[root@overcloud-controller-0 ~]# .  keystonerc_admin
[root@overcloud-controller-0 ~(keystone_admin)]# psc status
-bash: psc: command not found
[root@overcloud-controller-0 ~(keystone_admin)]# pcs  status
Cluster name: tripleo_cluster
Last updated: Sat Jun 18 10:01:58 2016        Last change: Sat Jun 18 09:49:22 2016 by root via cibadmin on overcloud-controller-0
Stack: corosync
Current DC: overcloud-controller-1 (version 1.1.13-10.el7_2.2-44eb2dd) - partition with quorum

3 nodes and 127 resources configured

Online: [ overcloud-controller-0 overcloud-controller-1 overcloud-controller-2 ]
Full list of resources:
 ip-192.0.2.6    (ocf::heartbeat:IPaddr2):    Started overcloud-controller-0
 ip-172.16.2.5    (ocf::heartbeat:IPaddr2):    Started overcloud-controller-1
 ip-172.16.3.4    (ocf::heartbeat:IPaddr2):    Started overcloud-controller-2
 Clone Set: haproxy-clone [haproxy]
     Started: [ overcloud-controller-0 overcloud-controller-1 overcloud-controller-2 ]
 Master/Slave Set: galera-master [galera]
     Masters: [ overcloud-controller-0 overcloud-controller-1 overcloud-controller-2 ]
 Clone Set: memcached-clone [memcached]
     Started: [ overcloud-controller-0 overcloud-controller-1 overcloud-controller-2 ]
 ip-10.0.0.4    (ocf::heartbeat:IPaddr2):    Started overcloud-controller-0
 ip-172.16.2.4    (ocf::heartbeat:IPaddr2):    Started overcloud-controller-1
 ip-172.16.1.4    (ocf::heartbeat:IPaddr2):    Started overcloud-controller-2
 Clone Set: rabbitmq-clone [rabbitmq]
     Started: [ overcloud-controller-0 overcloud-controller-1 overcloud-controller-2 ]
 Clone Set: openstack-core-clone [openstack-core]
     Started: [ overcloud-controller-0 overcloud-controller-1 overcloud-controller-2 ]
 Master/Slave Set: redis-master [redis]
     Masters: [ overcloud-controller-1 ]
     Slaves: [ overcloud-controller-0 overcloud-controller-2 ]
 Clone Set: mongod-clone [mongod]
     Started: [ overcloud-controller-0 overcloud-controller-1 overcloud-controller-2 ]
 Clone Set: openstack-aodh-evaluator-clone [openstack-aodh-evaluator]
     Stopped: [ overcloud-controller-0 overcloud-controller-1 overcloud-controller-2 ]
 Clone Set: openstack-nova-scheduler-clone [openstack-nova-scheduler]
     Started: [ overcloud-controller-0 overcloud-controller-1 overcloud-controller-2 ]
 Clone Set: neutron-l3-agent-clone [neutron-l3-agent]
     Started: [ overcloud-controller-0 overcloud-controller-1 overcloud-controller-2 ]
 Clone Set: neutron-netns-cleanup-clone [neutron-netns-cleanup]
     Started: [ overcloud-controller-0 overcloud-controller-1 overcloud-controller-2 ]
 Clone Set: neutron-ovs-cleanup-clone [neutron-ovs-cleanup]
     Started: [ overcloud-controller-0 overcloud-controller-1 overcloud-controller-2 ]
 openstack-cinder-volume    (systemd:openstack-cinder-volume):    Started overcloud-controller-0
 Clone Set: openstack-heat-engine-clone [openstack-heat-engine]
     Started: [ overcloud-controller-0 overcloud-controller-1 overcloud-controller-2 ]
 Clone Set: openstack-ceilometer-api-clone [openstack-ceilometer-api]
     Stopped: [ overcloud-controller-0 overcloud-controller-1 overcloud-controller-2 ]
 Clone Set: openstack-aodh-listener-clone [openstack-aodh-listener]
     Stopped: [ overcloud-controller-0 overcloud-controller-1 overcloud-controller-2 ]
 Clone Set: neutron-metadata-agent-clone [neutron-metadata-agent]
     Started: [ overcloud-controller-0 overcloud-controller-1 overcloud-controller-2 ]
 Clone Set: openstack-gnocchi-metricd-clone [openstack-gnocchi-metricd]
     Started: [ overcloud-controller-0 overcloud-controller-1 overcloud-controller-2 ]
 Clone Set: openstack-aodh-notifier-clone [openstack-aodh-notifier]
     Stopped: [ overcloud-controller-0 overcloud-controller-1 overcloud-controller-2 ]
 Clone Set: openstack-heat-api-clone [openstack-heat-api]
     Started: [ overcloud-controller-0 overcloud-controller-1 overcloud-controller-2 ]
 Clone Set: openstack-ceilometer-collector-clone [openstack-ceilometer-collector]
     Stopped: [ overcloud-controller-0 overcloud-controller-1 overcloud-controller-2 ]
 Clone Set: openstack-glance-api-clone [openstack-glance-api]
     Started: [ overcloud-controller-0 overcloud-controller-1 overcloud-controller-2 ]
 Clone Set: openstack-cinder-scheduler-clone [openstack-cinder-scheduler]
     Started: [ overcloud-controller-0 overcloud-controller-1 overcloud-controller-2 ]
 Clone Set: openstack-nova-api-clone [openstack-nova-api]
     Started: [ overcloud-controller-0 overcloud-controller-1 overcloud-controller-2 ]
 Clone Set: openstack-nova-consoleauth-clone [openstack-nova-consoleauth]
     Started: [ overcloud-controller-0 overcloud-controller-1 overcloud-controller-2 ]
 Clone Set: openstack-sahara-api-clone [openstack-sahara-api]
     Started: [ overcloud-controller-0 overcloud-controller-1 overcloud-controller-2 ]
 Clone Set: openstack-heat-api-cloudwatch-clone [openstack-heat-api-cloudwatch]
     Started: [ overcloud-controller-0 overcloud-controller-1 overcloud-controller-2 ]
 Clone Set: openstack-sahara-engine-clone [openstack-sahara-engine]
     Started: [ overcloud-controller-0 overcloud-controller-1 overcloud-controller-2 ]
 Clone Set: openstack-glance-registry-clone [openstack-glance-registry]
     Started: [ overcloud-controller-0 overcloud-controller-1 overcloud-controller-2 ]
 Clone Set: openstack-gnocchi-statsd-clone [openstack-gnocchi-statsd]
     Stopped: [ overcloud-controller-0 overcloud-controller-1 overcloud-controller-2 ]
 Clone Set: openstack-ceilometer-notification-clone [openstack-ceilometer-notification]
     Started: [ overcloud-controller-0 overcloud-controller-1 overcloud-controller-2 ]
 Clone Set: openstack-cinder-api-clone [openstack-cinder-api]
     Started: [ overcloud-controller-0 overcloud-controller-1 overcloud-controller-2 ]
 Clone Set: neutron-dhcp-agent-clone [neutron-dhcp-agent]
     Started: [ overcloud-controller-0 overcloud-controller-1 overcloud-controller-2 ]
 Clone Set: neutron-openvswitch-agent-clone [neutron-openvswitch-agent]
     Started: [ overcloud-controller-0 overcloud-controller-1 overcloud-controller-2 ]
Clone Set: openstack-nova-novncproxy-clone [openstack-nova-novncproxy]
Started: [ overcloud-controller-0 overcloud-controller-1 overcloud-controller-2 ]
Clone Set: delay-clone [delay]
Stopped: [ overcloud-controller-0 overcloud-controller-1 overcloud-controller-2 ]
Clone Set: neutron-server-clone [neutron-server]
Started: [ overcloud-controller-0 overcloud-controller-1 overcloud-controller-2 ]
Clone Set: openstack-ceilometer-central-clone [openstack-ceilometer-central]
Stopped: [ overcloud-controller-0 overcloud-controller-1 overcloud-controller-2 ]
Clone Set: httpd-clone [httpd]
Started: [ overcloud-controller-0 overcloud-controller-1 overcloud-controller-2 ]
Clone Set: openstack-heat-api-cfn-clone [openstack-heat-api-cfn]
Started: [ overcloud-controller-0 overcloud-controller-1 overcloud-controller-2 ]
Clone Set: openstack-nova-conductor-clone [openstack-nova-conductor]
Started: [ overcloud-controller-0 overcloud-controller-1 overcloud-controller-2 ]

Failed Actions:
* openstack-aodh-evaluator_monitor_60000 on overcloud-controller-1 'not running' (7): call=95, status=complete, exitreason='none',
last-rc-change='Sat Jun 18 09:44:43 2016', queued=0ms, exec=0ms
* openstack-gnocchi-metricd_monitor_60000 on overcloud-controller-1 'not running' (7): call=331, status=complete, exitreason='none',
last-rc-change='Sat Jun 18 09:56:44 2016', queued=0ms, exec=0ms
* openstack-gnocchi-statsd_start_0 on overcloud-controller-1 'not running' (7): call=335, status=complete, exitreason='none',
last-rc-change='Sat Jun 18 09:50:53 2016', queued=0ms, exec=2099ms
* openstack-ceilometer-central_start_0 on overcloud-controller-1 'not running' (7): call=339, status=complete, exitreason='none',
last-rc-change='Sat Jun 18 09:51:17 2016', queued=0ms, exec=2117ms
* openstack-aodh-evaluator_monitor_60000 on overcloud-controller-0 'not running' (7): call=96, status=complete, exitreason='none',
last-rc-change='Sat Jun 18 09:44:40 2016', queued=0ms, exec=0ms
* openstack-gnocchi-metricd_monitor_60000 on overcloud-controller-0 'not running' (7): call=332, status=complete, exitreason='none',
last-rc-change='Sat Jun 18 09:56:42 2016', queued=0ms, exec=0ms
* openstack-gnocchi-statsd_start_0 on overcloud-controller-0 'not running' (7): call=339, status=complete, exitreason='none',
last-rc-change='Sat Jun 18 09:51:13 2016', queued=0ms, exec=2145ms
* openstack-ceilometer-central_start_0 on overcloud-controller-0 'not running' (7): call=341, status=complete, exitreason='none',
last-rc-change='Sat Jun 18 09:51:28 2016', queued=0ms, exec=2147ms
* openstack-aodh-evaluator_start_0 on overcloud-controller-2 'not running' (7): call=368, status=complete, exitreason='none',
last-rc-change='Sat Jun 18 09:53:18 2016', queued=0ms, exec=2107ms
* openstack-gnocchi-metricd_monitor_60000 on overcloud-controller-2 'not running' (7): call=321, status=complete, exitreason='none',
last-rc-change='Sat Jun 18 09:56:46 2016', queued=0ms, exec=0ms
* openstack-gnocchi-statsd_start_0 on overcloud-controller-2 'not running' (7): call=326, status=complete, exitreason='none',
last-rc-change='Sat Jun 18 09:51:06 2016', queued=0ms, exec=2185ms
* openstack-ceilometer-central_start_0 on overcloud-controller-2 'not running' (7): call=378, status=complete, exitreason='none',
last-rc-change='Sat Jun 18 09:54:14 2016', queued=1ms, exec=2116ms

PCSD Status:
overcloud-controller-0: Online
overcloud-controller-1: Online
overcloud-controller-2: Online

Daemon Status:
corosync: active/enabled
pacemaker: active/enabled
pcsd: active/enabled
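
The failed actions above all involve the telemetry services (aodh, gnocchi, ceilometer), which also show as Stopped in the clone sets. A possible follow-up, sketched here as a dry run that only prints the commands rather than executing them, is to clear the failure history so Pacemaker retries the stopped clones (resource names taken from the status output above):

```shell
# Dry run: print (rather than execute) the pcs cleanup commands for the
# telemetry clones reported as Stopped / failed in the status above.
for rsc in openstack-aodh-evaluator openstack-gnocchi-statsd \
           openstack-ceilometer-central; do
    echo "pcs resource cleanup ${rsc}-clone"
done
```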

[root@overcloud-controller-0 ~(keystone_admin)]# ovs-vsctl show
8fea5ee4-62cf-4767-96c8-d9867cab9972
Bridge br-tun
fail_mode: secure
Port br-tun
Interface br-tun
type: internal
Port "vxlan-ac100004"
Interface "vxlan-ac100004"
type: vxlan
options: {df_default="true", in_key=flow, local_ip="172.16.0.6", out_key=flow, remote_ip="172.16.0.4"}
Port "vxlan-ac100005"
Interface "vxlan-ac100005"
type: vxlan
options: {df_default="true", in_key=flow, local_ip="172.16.0.6", out_key=flow, remote_ip="172.16.0.5"}
Port "vxlan-ac100008"
Interface "vxlan-ac100008"
type: vxlan
options: {df_default="true", in_key=flow, local_ip="172.16.0.6", out_key=flow, remote_ip="172.16.0.8"}
Port patch-int
Interface patch-int
type: patch
options: {peer=patch-tun}
Port "vxlan-ac100007"
Interface "vxlan-ac100007"
type: vxlan
options: {df_default="true", in_key=flow, local_ip="172.16.0.6", out_key=flow, remote_ip="172.16.0.7"}
Bridge br-int
fail_mode: secure
Port br-int
Interface br-int
type: internal
Port patch-tun
Interface patch-tun
type: patch
options: {peer=patch-int}
Port int-br-ex
Interface int-br-ex
type: patch
options: {peer=phy-br-ex}
Bridge br-ex
Port br-ex
Interface br-ex
  type: internal
Port "vlan20"
tag: 20
Interface "vlan20"
type: internal
Port "eth0"
Interface "eth0"
Port phy-br-ex
Interface phy-br-ex
type: patch
options: {peer=int-br-ex}
Port "vlan40"
tag: 40
Interface "vlan40"
type: internal
Port "vlan50"
tag: 50
Interface "vlan50"
type: internal
Port "vlan10"
tag: 10
Interface "vlan10"
type: internal
Port "vlan30"
tag: 30
Interface "vlan30"
type: internal
ovs_version: "2.5.0"
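
A side note on the tunnel port names above: the hex suffix is simply the tunnel's remote_ip encoded byte by byte (ac100004 = 172.16.0.4). A small POSIX-shell sketch decodes it:

```shell
# Decode the remote IP embedded in an OVS vxlan port name such as
# "vxlan-ac100004": each pair of hex digits is one octet of remote_ip.
hex=ac100004
printf '%d.%d.%d.%d\n' $(echo "$hex" | sed 's/../0x& /g')
# prints 172.16.0.4
```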

[root@overcloud-controller-0 ~(keystone_admin)]# ifconfig

br-ex: flags=4163  mtu 1500
inet 192.0.2.11  netmask 255.255.255.0  broadcast 192.0.2.255
inet6 fe80::250:dcff:fecf:b7d5  prefixlen 64  scopeid 0x20
ether 00:50:dc:cf:b7:d5  txqueuelen 0  (Ethernet)
RX packets 15254  bytes 29305270 (27.9 MiB)
RX errors 0  dropped 0  overruns 0  frame 0
TX packets 15111  bytes 2037368 (1.9 MiB)
TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0

eth0: flags=4163  mtu 1500
inet6 fe80::250:dcff:fecf:b7d5  prefixlen 64  scopeid 0x20
ether 00:50:dc:cf:b7:d5  txqueuelen 1000  (Ethernet)
RX packets 554865  bytes 314056269 (299.5 MiB)
RX errors 0  dropped 0  overruns 0  frame 0
TX packets 537763  bytes 196316938 (187.2 MiB)
TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0

lo: flags=73  mtu 65536
inet 127.0.0.1  netmask 255.0.0.0
inet6 ::1  prefixlen 128  scopeid 0x10
loop  txqueuelen 0  (Local Loopback)
RX packets 128951  bytes 42842317 (40.8 MiB)
RX errors 0  dropped 0  overruns 0  frame 0
TX packets 128951  bytes 42842317 (40.8 MiB)
TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0

vlan10: flags=4163  mtu 1500
inet 10.0.0.6  netmask 255.255.255.0  broadcast 10.0.0.255
inet6 fe80::2cf7:9cff:fe98:df2e  prefixlen 64  scopeid 0x20
ether 2e:f7:9c:98:df:2e  txqueuelen 0  (Ethernet)
RX packets 1563  bytes 22172141 (21.1 MiB)
RX errors 0  dropped 0  overruns 0  frame 0
TX packets 935  bytes 339459 (331.5 KiB)
TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0

vlan20: flags=4163  mtu 1500
inet 172.16.2.9  netmask 255.255.255.0  broadcast 172.16.2.255
inet6 fe80::9c4a:96ff:fe42:f562  prefixlen 64  scopeid 0x20
ether 9e:4a:96:42:f5:62  txqueuelen 0  (Ethernet)
RX packets 515281  bytes 202417994 (193.0 MiB)
RX errors 0  dropped 0  overruns 0  frame 0
TX packets 498334  bytes 112312907 (107.1 MiB)
TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0

vlan30: flags=4163  mtu 1500
inet 172.16.1.5  netmask 255.255.255.0  broadcast 172.16.1.255
inet6 fe80::8cbe:80ff:fe80:7945  prefixlen 64  scopeid 0x20
ether 8e:be:80:80:79:45  txqueuelen 0  (Ethernet)
RX packets 20275  bytes 45196003 (43.1 MiB)
RX errors 0  dropped 0  overruns 0  frame 0
TX packets 20405  bytes 52618634 (50.1 MiB)
TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0

vlan40: flags=4163  mtu 1500
inet 172.16.3.6  netmask 255.255.255.0  broadcast 172.16.3.255
inet6 fe80::8c06:98ff:fe7a:5b7  prefixlen 64  scopeid 0x20
ether 8e:06:98:7a:05:b7  txqueuelen 0  (Ethernet)
RX packets 2299  bytes 12722091 (12.1 MiB)
RX errors 0  dropped 0  overruns 0  frame 0
TX packets 2557  bytes 26854977 (25.6 MiB)
TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0

vlan50: flags=4163  mtu 1500
inet 172.16.0.6  netmask 255.255.255.0  broadcast 172.16.0.255
inet6 fe80::6454:dff:fe41:90e9  prefixlen 64  scopeid 0x20
ether 66:54:0d:41:90:e9  txqueuelen 0  (Ethernet)
RX packets 107  bytes 9834 (9.6 KiB)
RX errors 0  dropped 0  overruns 0  frame 0
TX packets 121  bytes 12394 (12.1 KiB)
TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0

[root@overcloud-controller-0 ~(keystone_admin)]# route -n

Kernel IP routing table
Destination     Gateway         Genmask         Flags Metric Ref    Use Iface
0.0.0.0         10.0.0.1        0.0.0.0         UG    0      0        0 vlan10
10.0.0.0        0.0.0.0         255.255.255.0   U     0      0        0 vlan10
169.254.169.254 192.0.2.1       255.255.255.255 UGH   0      0        0 br-ex
172.16.0.0      0.0.0.0         255.255.255.0   U     0      0        0 vlan50
172.16.1.0      0.0.0.0         255.255.255.0   U     0      0        0 vlan30
172.16.2.0      0.0.0.0         255.255.255.0   U     0      0        0 vlan20
172.16.3.0      0.0.0.0         255.255.255.0   U     0      0        0 vlan40
192.0.2.0       0.0.0.0         255.255.255.0   U     0      0        0 br-ex

[root@overcloud-controller-0 ~]# cat /etc/os-net-config/config.json | jq '.[]'
[
  {
    "addresses": [
      {
        "ip_netmask": "192.0.2.11/24"
      }
    ],
    "type": "ovs_bridge",
    "use_dhcp": false,
    "routes": [
      {
        "next_hop": "192.0.2.1",
        "ip_netmask": "169.254.169.254/32"
      }
    ],
    "members": [
      {
        "primary": true,
        "name": "nic1",
        "type": "interface"
      },
      {
        "vlan_id": 10,
        "addresses": [
          {
            "ip_netmask": "10.0.0.6/24"
          }
        ],
        "type": "vlan",
        "routes": [
          {
            "next_hop": "10.0.0.1",
            "default": true
          }
        ]
      },
      {
        "vlan_id": 20,
        "addresses": [
          {
            "ip_netmask": "172.16.2.9/24"
          }
        ],
        "type": "vlan"
      },
      {
        "vlan_id": 30,
        "addresses": [
          {
            "ip_netmask": "172.16.1.5/24"
          }
        ],
        "type": "vlan"
      },
      {
        "vlan_id": 40,
        "addresses": [
          {
            "ip_netmask": "172.16.3.6/24"
          }
        ],
        "type": "vlan"
      },
      {
        "vlan_id": 50,
        "addresses": [
          {
            "ip_netmask": "172.16.0.6/24"
          }
        ],
        "type": "vlan"
      }
    ],
    "name": "br-ex",
    "dns_servers": [
      "8.8.8.8",
      "8.8.4.4"
    ]
  }
]
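
Each ip_netmask above uses CIDR prefix notation, while the earlier ifconfig output shows the same masks in dotted form; the two are equivalent. A quick shell sketch of the conversion:

```shell
# Convert a CIDR prefix length (e.g. the /24 in "192.0.2.11/24") to the
# dotted netmask form that ifconfig displays.
prefix=24
mask=$(( (0xffffffff << (32 - prefix)) & 0xffffffff ))
printf '%d.%d.%d.%d\n' $(( (mask >> 24) & 255 )) $(( (mask >> 16) & 255 )) \
       $(( (mask >> 8) & 255 )) $(( mask & 255 ))
# prints 255.255.255.0
```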

************************
On undercloud
************************

[stack@undercloud ~]$ sudo su -
Last login: Sat Jun 18 10:47:31 UTC 2016 on pts/1
[root@undercloud ~]# ovs-vsctl show
7fb4d9b7-4704-410f-845f-6f3f0a1b65cd
Bridge br-ctlplane
Port "vlan10"
tag: 10
Interface "vlan10"
type: internal

Port br-ctlplane
Interface br-ctlplane
type: internal
Port phy-br-ctlplane
Interface phy-br-ctlplane
type: patch
options: {peer=int-br-ctlplane}
Port "eth1"
Interface "eth1"
Bridge br-int
fail_mode: secure
Port br-int
Interface br-int
type: internal
Port "tap41a7c72c-39"
tag: 1
Interface "tap41a7c72c-39"
type: internal
Port int-br-ctlplane
Interface int-br-ctlplane
type: patch
options: {peer=phy-br-ctlplane}
ovs_version: "2.5.0"

[root@undercloud ~]# route -n
Kernel IP routing table
Destination     Gateway         Genmask         Flags Metric Ref    Use Iface
0.0.0.0         192.168.23.1    0.0.0.0         UG    0      0        0 eth0
10.0.0.0        0.0.0.0         255.255.255.0   U     0      0        0 vlan10
192.0.2.0       0.0.0.0         255.255.255.0   U     0      0        0 br-ctlplane
192.168.23.0    0.0.0.0         255.255.255.0   U     0      0        0 eth0
192.168.122.0   0.0.0.0         255.255.255.0   U     0      0        0 virbr0

[root@undercloud ~]# ifconfig
br-ctlplane: flags=4163 mtu 1500
inet 192.0.2.1 netmask 255.255.255.0 broadcast 192.0.2.255

inet6 fe80::2ad:c4ff:fe6f:778a prefixlen 64 scopeid 0x20
ether 00:ad:c4:6f:77:8a txqueuelen 0 (Ethernet)
RX packets 4743446 bytes 382457275 (364.7 MiB)
RX errors 0 dropped 0 overruns 0 frame 0
TX packets 6573214 bytes 31299066406 (29.1 GiB)
TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0

eth0: flags=4163 mtu 1500
inet 192.168.23.46 netmask 255.255.255.0 broadcast 192.168.23.255
inet6 fe80::2ad:c4ff:fe6f:7788 prefixlen 64 scopeid 0x20
ether 00:ad:c4:6f:77:88 txqueuelen 1000 (Ethernet)
RX packets 402911 bytes 1166354846 (1.0 GiB)
RX errors 0 dropped 0 overruns 0 frame 0
TX packets 286351 bytes 63608008 (60.6 MiB)
TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0

eth1: flags=4163 mtu 1500
inet6 fe80::2ad:c4ff:fe6f:778a prefixlen 64 scopeid 0x20
ether 00:ad:c4:6f:77:8a txqueuelen 1000 (Ethernet)
RX packets 4793675 bytes 390579748 (372.4 MiB)
RX errors 0 dropped 0 overruns 0 frame 0
TX packets 6627325 bytes 32167819071 (29.9 GiB)
TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0

lo: flags=73 mtu 65536
inet 127.0.0.1 netmask 255.0.0.0
inet6 ::1 prefixlen 128 scopeid 0x10
loop txqueuelen 0 (Local Loopback)
RX packets 5342779 bytes 31375282714 (29.2 GiB)
RX errors 0 dropped 0 overruns 0 frame 0
TX packets 5342779 bytes 31375282714 (29.2 GiB)
TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0

virbr0: flags=4099 mtu 1500
inet 192.168.122.1 netmask 255.255.255.0 broadcast 192.168.122.255
ether 52:54:00:b7:65:c0 txqueuelen 0 (Ethernet)
RX packets 0 bytes 0 (0.0 B)
RX errors 0 dropped 0 overruns 0 frame 0
TX packets 0 bytes 0 (0.0 B)
TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0

vlan10: flags=4163 mtu 1500
inet 10.0.0.1 netmask 255.255.255.0 broadcast 10.0.0.255
inet6 fe80::c4d1:81ff:fec1:6006 prefixlen 64 scopeid 0x20
ether c6:d1:81:c1:60:06 txqueuelen 0 (Ethernet)
RX packets 49362 bytes 7857042 (7.4 MiB)
RX errors 0 dropped 0 overruns 0 frame 0
TX packets 52980 bytes 868430005 (828.1 MiB)
TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0


Set up a VM to connect to the TripleO QuickStart overcloud via the virt-manager GUI

May 29, 2016

Set up the Gnome Desktop and virt tools on the virtualization server (VIRTHOST) and make a remote connection to virt-manager running on VIRTHOST (192.168.1.75). Then create a VM via virt-manager, as follows, using a standard CentOS 7.2 ISO image. I am aware of the post "Connecting another vm to your tripleo-quickstart deployment" at oddbit.com:
http://blog.oddbit.com/2016/05/19/connecting-another-vm-to-your-tripleo-qu/
and deliberately manage things this way. I am simply wondering whether results similar to those obtained by LarsKS (via in-depth knowledge of the virsh CLI and libvirt features) can be reached with the intuitively much more approachable virt-manager GUI. I realize that the approach suggested below gives up the speed and flexibility of the aforementioned one.

Proceed with VM setup via the virt-manager remote GUI, attaching the "external" and "overcloud" networks to the VM and assigning static IPs to eth0 and eth1, which belong to the corresponding networks.

[root@ServerCentOS72 ~]# virsh net-list

 Name                 State      Autostart     Persistent
----------------------------------------------------------
 default              active     yes           yes
 external             active     yes           yes
 overcloud            active     yes           yes

Looks good; start the install.

Installation completed. The next step is verifying that the new VM can connect to the
overcloud on VIRTHOST: check the static IPs on RemoteConsole and connect
to the controller's dashboard.

Now connect to the VMs running in the overcloud.

Switching eth1 to DHCP mode on RemoteConsole (following the post at oddbit.com)

[root@ServerCentOS72 ~]# virsh dumpxml RemoteConsole | xmllint --xpath '//interface' -
<interface type="network">
<mac address="52:54:00:dd:c6:9d"/>
<source network="overcloud" bridge="brovc"/>
<target dev="vnet1"/>
<model type="virtio"/>
<alias name="net1"/>
<address type="pci" domain="0x0000" bus="0x00" slot="0x04" function="0x0"/>
</interface>

Creating a port on ctlplane (undercloud VM)
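
A hedged sketch of that step, shown as a dry run that only prints the command: following the oddbit.com post, a neutron port is created on ctlplane with the MAC of the RemoteConsole interface shown in the dumpxml output above, so the undercloud's DHCP server will hand that NIC an address (port-create syntax as in the Mitaka-era neutron client):

```shell
# Dry run: print the neutron call that would create a ctlplane port bound to
# RemoteConsole's MAC (taken from the virsh dumpxml output above).
mac=52:54:00:dd:c6:9d
echo "neutron port-create --mac-address ${mac} ctlplane"
```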

On RemoteConsole switch eth1 to DHCP mode via NetworkManager GUI

We are all set


RDO TripleO QuickStart && First impressions

May 27, 2016

I believe the post below will shed some more light on the TripleO QuickStart
procedure suggested on the RDO QuickStart page (32 GB of memory is a must: even the minimal configuration requires 23 GB of RAM at runtime), following tips from "Deploying OpenStack on just one hosted server".

Overcloud deployed.

************************************************************************
First of all, take a look at the network interfaces and routing tables on the undercloud VM
************************************************************************

[root@undercloud ~]# ifconfig

br-ctlplane: flags=4163  mtu 1500
inet 192.0.2.1  netmask 255.255.255.0  broadcast 192.0.2.255

inet6 fe80::285:8cff:feee:4c12  prefixlen 64  scopeid 0x20
ether 00:85:8c:ee:4c:12  txqueuelen 0  (Ethernet)
RX packets 5458173  bytes 430801023 (410.8 MiB)
RX errors 0  dropped 0  overruns 0  frame 0
TX packets 8562456  bytes 31493865046 (29.3 GiB)
TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0

eth0: flags=4163  mtu 1500
inet 192.168.23.28  netmask 255.255.255.0  broadcast 192.168.23.255
inet6 fe80::285:8cff:feee:4c10  prefixlen 64  scopeid 0x20
ether 00:85:8c:ee:4c:10  txqueuelen 1000  (Ethernet)
RX packets 4550861  bytes 7090076105 (6.6 GiB)
RX errors 0  dropped 0  overruns 0  frame 0
TX packets 1597556  bytes 760511620 (725.2 MiB)
TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0

eth1: flags=4163  mtu 1500
inet6 fe80::285:8cff:feee:4c12  prefixlen 64  scopeid 0x20
ether 00:85:8c:ee:4c:12  txqueuelen 1000  (Ethernet)
RX packets 5459780  bytes 430920997 (410.9 MiB)
RX errors 0  dropped 0  overruns 0  frame 0
TX packets 8564443  bytes 31494029129 (29.3 GiB)
TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0

lo: flags=73  mtu 65536
inet 127.0.0.1  netmask 255.0.0.0
inet6 ::1  prefixlen 128  scopeid 0x10
loop  txqueuelen 0  (Local Loopback)
RX packets 4361647  bytes 24858373851 (23.1 GiB)
RX errors 0  dropped 0  overruns 0  frame 0
TX packets 4361647  bytes 24858373851 (23.1 GiB)
TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0

virbr0: flags=4099  mtu 1500
inet 192.168.122.1  netmask 255.255.255.0  broadcast 192.168.122.255
ether 52:54:00:39:0a:ae  txqueuelen 0  (Ethernet)
RX packets 0  bytes 0 (0.0 B)
RX errors 0  dropped 0  overruns 0  frame 0
TX packets 0  bytes 0 (0.0 B)
TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0

vlan10: flags=4163  mtu 1500
inet 10.0.0.1  netmask 255.255.255.0  broadcast 10.0.0.255
inet6 fe80::804e:69ff:fe19:844b  prefixlen 64  scopeid 0x20
ether 82:4e:69:19:84:4b  txqueuelen 0  (Ethernet)
RX packets 0  bytes 0 (0.0 B)
RX errors 0  dropped 0  overruns 0  frame 0
TX packets 12  bytes 816 (816.0 B)
TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0

[root@undercloud ~]# ip route
default via 192.168.23.1 dev eth0
10.0.0.0/24 dev vlan10  proto kernel  scope link  src 10.0.0.1
192.0.2.0/24 dev br-ctlplane  proto kernel  scope link  src 192.0.2.1
192.168.23.0/24 dev eth0  proto kernel  scope link  src 192.168.23.28
192.168.122.0/24 dev virbr0  proto kernel  scope link  src 192.168.122.1

[root@undercloud ~]# ovs-vsctl show
83b044ee-44ac-4575-88b3-4951a6e9847f
Bridge br-int
fail_mode: secure
Port "tapb3ad6627-29"
tag: 1
Interface "tapb3ad6627-29"
type: internal
Port int-br-ctlplane
Interface int-br-ctlplane
type: patch
options: {peer=phy-br-ctlplane}
Port br-int
Interface br-int
type: internal
Bridge br-ctlplane
Port "vlan10"
tag: 10
Interface "vlan10"
type: internal
Port phy-br-ctlplane
Interface phy-br-ctlplane
type: patch
options: {peer=int-br-ctlplane}
Port "eth1"
Interface "eth1"
Port br-ctlplane
Interface br-ctlplane
type: internal
ovs_version: "2.5.0"

*********************************************************
Here are admin credentials for overcloud controller
*********************************************************

[stack@undercloud ~]$ cat overcloudrc
export OS_NO_CACHE=True
export OS_CLOUDNAME=overcloud
export OS_AUTH_URL=http://192.0.2.10:5000/v2.0
export NOVA_VERSION=1.1
export COMPUTE_API_VERSION=1.1
export OS_USERNAME=admin
export no_proxy=,192.0.2.10,192.0.2.10
export OS_PASSWORD=pWyQpHsaXAWskcmYEq2ja4WaU
export PYTHONWARNINGS="ignore:Certificate has no, ignore:A true SSLContext object is not available"
export OS_TENANT_NAME=admin

*******************************
At the same time on VIRTHOST
*******************************

[root@ServerCentOS72 ~]# virsh net-list

Name                 State      Autostart     Persistent
----------------------------------------------------------
default              active     yes           yes
external             active     yes           yes
overcloud            active     yes           yes

[root@ServerCentOS72 ~]# virsh net-dumpxml external

<network>
<name>external</name>
<uuid>d585615b-c1c5-4e30-bf2d-ea247591c2b0</uuid>
<forward mode='nat'>
<nat>
<port start='1024' end='65535'/>
</nat>
</forward>
<bridge name='brext' stp='off' delay='0'/>
<mac address='52:54:00:9d:b4:1d'/>
<ip address='192.168.23.1' netmask='255.255.255.0'>
<dhcp>
<range start='192.168.23.10' end='192.168.23.50'/>
</dhcp>
</ip>
</network>
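
The DHCP range in that definition is what hands eth0 its address on the new VM. A small sketch extracting it with sed (fed from a here-document here; on the host you would pipe the `virsh net-dumpxml external` output in instead):

```shell
# Pull the DHCP range out of a libvirt network definition. The here-document
# stands in for the output of `virsh net-dumpxml external`.
net_xml() {
cat <<'EOF'
<ip address='192.168.23.1' netmask='255.255.255.0'>
<dhcp>
<range start='192.168.23.10' end='192.168.23.50'/>
</dhcp>
</ip>
EOF
}
net_xml | sed -n "s/.*range start='\([^']*\)' end='\([^']*\)'.*/\1 - \2/p"
# prints 192.168.23.10 - 192.168.23.50
```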

[root@ServerCentOS72 ~]# su – stack

Last login: Thu May 26 18:01:31 MSK 2016 on :0

[stack@ServerCentOS72 ~]$ virsh list

 Id    Name                           State
----------------------------------------------------
 2     undercloud                     running
 11    compute_0                      running
 12    control_0                      running

*************************************************************************
Source stackrc and run openstack-status on the undercloud.
The overcloud deployment has already completed on the undercloud VM.
*************************************************************************

[root@undercloud ~]# . stackrc
[root@undercloud ~]# openstack-status

== Nova services ==

openstack-nova-api:                     active
openstack-nova-compute:                 active
openstack-nova-network:                 inactive  (disabled on boot)
openstack-nova-scheduler:               active
openstack-nova-cert:                    active
openstack-nova-conductor:               active
openstack-nova-console:                 inactive  (disabled on boot)
openstack-nova-consoleauth:             inactive  (disabled on boot)
openstack-nova-xvpvncproxy:             inactive  (disabled on boot)

== Glance services ==

openstack-glance-api:                   active
openstack-glance-registry:              active
== Keystone service ==

openstack-keystone:                     inactive  (disabled on boot)

== Horizon service ==
openstack-dashboard:                    404
== neutron services ==

neutron-server:                         active
neutron-dhcp-agent:                     active
neutron-l3-agent:                       inactive  (disabled on boot)
neutron-metadata-agent:                 inactive  (disabled on boot)
neutron-lbaas-agent:                    inactive  (disabled on boot)
neutron-openvswitch-agent:              active
neutron-metering-agent:                 inactive  (disabled on boot)

== Swift services ==

openstack-swift-proxy:                  active
openstack-swift-account:                active
openstack-swift-container:              active
openstack-swift-object:                 active

== Cinder services ==

openstack-cinder-api:                   inactive  (disabled on boot)
openstack-cinder-scheduler:             inactive  (disabled on boot)
openstack-cinder-volume:                inactive  (disabled on boot)
openstack-cinder-backup:                inactive  (disabled on boot)

== Ceilometer services ==

openstack-ceilometer-api:               active
openstack-ceilometer-central:           active
openstack-ceilometer-compute:           inactive  (disabled on boot)
openstack-ceilometer-collector:         active
openstack-ceilometer-notification:      active

== Heat services ==
openstack-heat-api:                     active
openstack-heat-api-cfn:                 active
openstack-heat-api-cloudwatch:          inactive  (disabled on boot)
openstack-heat-engine:                  active

== Sahara services ==

openstack-sahara-all:                   inactive  (disabled on boot)

== Ironic services ==

openstack-ironic-api:                   active
openstack-ironic-conductor:             active

== Support services ==

mysqld:                                 inactive  (disabled on boot)
libvirtd:                               active
openvswitch:                            active
dbus:                                   active
rabbitmq-server:                        active
memcached:                              active

== Keystone users ==

+----------------------------------+------------------+---------+-----------------------------------+
|                id                |       name       | enabled |               email               |
+----------------------------------+------------------+---------+-----------------------------------+
| c1668084d057422ab21c9180424b3e4a |      admin       |   True  |           root@localhost          |
| db938fe459c94cd09fe227a118f8be0f |       aodh       |   True  |           aodh@localhost          |
| 001a56a0872048a592db95dc9885292d |    ceilometer    |   True  |        ceilometer@localhost       |
| e038f5b685b84e6aa601b37312d84a56 |      glance      |   True  |          glance@localhost         |
| d7ddbfd73b814c13926c1ecd5ebe1bb2 |       heat       |   True  |           heat@localhost          |
| dc784308498d40568b649fbf12eaeb51 |      ironic      |   True  |          ironic@localhost         |
| 0c1f829c533240cdbec944236048ee1a | ironic-inspector |   True  | baremetal-introspection@localhost |
| ddbcb1dd885845c698f8d65f6f9ff44f |     neutron      |   True  |         neutron@localhost         |
| 987bd356963e4a5cbf2bd50c50919f9b |       nova       |   True  |           nova@localhost          |
| a5c862796ef24615afc2881e1a59f9d5 |      swift       |   True  |          swift@localhost          |
+----------------------------------+------------------+---------+-----------------------------------+

== Glance images ==

+--------------------------------------+------------------------+-------------+------------------+------------+--------+
| ID                                   | Name                   | Disk Format | Container Format | Size       | Status |
+--------------------------------------+------------------------+-------------+------------------+------------+--------+
| c734ff64-7723-43ee-a5d2-d662e1e206eb | bm-deploy-kernel       | aki         | aki              | 5157360    | active |
| f80e32c4-cfce-4dcc-993a-939800440fbf | bm-deploy-ramdisk      | ari         | ari              | 380554146  | active |
| 8616adc8-7136-4536-8562-5ed9cf129ed2 | overcloud-full         | qcow2       | bare             | 1175351296 | active |
| 73f5bfc7-99c2-46dc-8507-e5978ec61b84 | overcloud-full-initrd  | ari         | ari              | 36444678   | active |
| 0d30aa5d-869c-4716-bdd4-87685e4790ca | overcloud-full-vmlinuz | aki         | aki              | 5157360    | active |
+--------------------------------------+------------------------+-------------+------------------+------------+--------+

== Nova managed services ==

+----+----------------+------------+----------+---------+-------+----------------------------+-----------------+
| Id | Binary         | Host       | Zone     | Status  | State | Updated_at                 | Disabled Reason |
+----+----------------+------------+----------+---------+-------+----------------------------+-----------------+
| 1  | nova-cert      | undercloud | internal | enabled | up    | 2016-05-26T18:41:57.000000 | -               |
| 7  | nova-scheduler | undercloud | internal | enabled | up    | 2016-05-26T18:41:55.000000 | -               |
| 8  | nova-conductor | undercloud | internal | enabled | up    | 2016-05-26T18:41:56.000000 | -               |
| 10 | nova-compute   | undercloud | nova     | enabled | up    | 2016-05-26T18:41:54.000000 | -               |
+----+----------------+------------+----------+---------+-------+----------------------------+-----------------+

== Nova networks ==
+--------------------------------------+----------+------+
| ID                                   | Label    | Cidr |
+--------------------------------------+----------+------+
| c27b8d62-f838-4c7e-8828-64ae1503f4c4 | ctlplane | -    |
+--------------------------------------+----------+------+

== Nova instance flavors ==

+--------------------------------------+---------------+-----------+------+-----------+------+-------+-------------+-----------+
| ID                                   | Name          | Memory_MB | Disk | Ephemeral | Swap | VCPUs | RXTX_Factor | Is_Public |
+--------------------------------------+---------------+-----------+------+-----------+------+-------+-------------+-----------+
| 1320d766-7051-4639-9554-a42e7c7fd958 | control       | 4096      | 40   | 0         |      | 1     | 1.0         | True      |
| 1b0ad845-6273-437f-8573-e4922a256ec7 | block-storage | 4096      | 40   | 0         |      | 1     | 1.0         | True      |
| 27a0e9ee-c909-4d7d-8e86-1eb2e61fb1cb | oooq_control  | 8192      | 49   | 0         |      | 1     | 1.0         | True      |
| 40057aa6-5e8b-4d4b-85d4-f21418d01b5d | baremetal     | 4096      | 40   | 0         |      | 1     | 1.0         | True      |
| 5750def3-dc08-43dd-b194-02d4ea73b8d7 | compute       | 4096      | 40   | 0         |      | 1     | 1.0         | True      |
| 769969da-f429-4f5f-84c9-6456f39539f8 | ceph-storage  | 4096      | 40   | 0         |      | 1     | 1.0         | True      |
| 9c1622bc-ee0f-4dfa-a988-1e89cad47015 | oooq_compute  | 8192      | 49   | 0         |      | 1     | 1.0         | True      |
| a2e5a055-3334-4080-86f9-4887931aee22 | swift-storage | 4096      | 40   | 0         |      | 1     | 1.0         | True      |
| b05b3c15-7928-4f59-9f8d-7d3947e19bee | oooq_ceph     | 8192      | 49   | 0         |      | 1     | 1.0         | True      |
+--------------------------------------+---------------+-----------+------+-----------+------+-------+-------------+-----------+

== Nova instances ==

+--------------------------------------+-------------------------+----------------------------------+--------+------------+-------------+---------------------+
| ID                                   | Name                    | Tenant ID                        | Status | Task State | Power State | Networks            |
+--------------------------------------+-------------------------+----------------------------------+--------+------------+-------------+---------------------+
| 88f841ac-1ca0-4339-ba8a-c2895c0dc57c | overcloud-controller-0  | ccf0e5fdbebb4335ad7875ec821af91d | ACTIVE | -          | Running     | ctlplane=192.0.2.13 |
| f12a1086-7e23-4acb-80a7-8b2efe1e4ef2 | overcloud-novacompute-0 | ccf0e5fdbebb4335ad7875ec821af91d | ACTIVE | -          | Running     | ctlplane=192.0.2.12 |
+--------------------------------------+-------------------------+----------------------------------+--------+------------+-------------+---------------------+

******************************************************
Neutron reports on undercloud VM
******************************************************

[root@undercloud ~]# neutron net-list

+--------------------------------------+----------+------------------------------------------+
| id                                   | name     | subnets                                  |
+--------------------------------------+----------+------------------------------------------+
| c27b8d62-f838-4c7e-8828-64ae1503f4c4 | ctlplane | 631022c3-cfc5-4353-b038-1592cceea57e     |
|                                      |          | 192.0.2.0/24                             |
+--------------------------------------+----------+------------------------------------------+

[root@undercloud ~]# neutron net-show ctlplane

+---------------------------+--------------------------------------+
| Field                     | Value                                |
+---------------------------+--------------------------------------+
| admin_state_up            | True                                 |
| availability_zone_hints   |                                      |
| availability_zones        | nova                                 |
| created_at                | 2016-05-26T11:32:18                  |
| description               |                                      |
| id                        | c27b8d62-f838-4c7e-8828-64ae1503f4c4 |
| ipv4_address_scope        |                                      |
| ipv6_address_scope        |                                      |
| mtu                       | 1500                                 |
| name                      | ctlplane                             |
| provider:network_type     | flat                                 |
| provider:physical_network | ctlplane                             |
| provider:segmentation_id  |                                      |
| router:external           | False                                |
| shared                    | False                                |
| status                    | ACTIVE                               |
| subnets                   | 631022c3-cfc5-4353-b038-1592cceea57e |
| tags                      |                                      |
| tenant_id                 | ccf0e5fdbebb4335ad7875ec821af91d     |
| updated_at                | 2016-05-26T11:32:18                  |
+---------------------------+--------------------------------------+

[root@undercloud ~]# neutron subnet-list

+------------------------------------+------+--------------+------------------------------------+
| id                                 | name | cidr         | allocation_pools                   |
+------------------------------------+------+--------------+------------------------------------+
| 631022c3-cfc5-4353-b038-1592cceea5 |      | 192.0.2.0/24 | {"start": "192.0.2.5", "end":      |
| 7e                                 |      |              | "192.0.2.30"}                      |
+------------------------------------+------+--------------+------------------------------------+

[root@undercloud ~]# neutron subnet-show 631022c3-cfc5-4353-b038-1592cceea57e

+-------------------+---------------------------------------------------------------+
| Field             | Value                                                         |
+-------------------+---------------------------------------------------------------+
| allocation_pools  | {"start": "192.0.2.5", "end": "192.0.2.30"}                   |
| cidr              | 192.0.2.0/24                                                  |
| created_at        | 2016-05-26T11:32:18                                           |
| description       |                                                               |
| dns_nameservers   |                                                               |
| enable_dhcp       | True                                                          |
| gateway_ip        | 192.0.2.1                                                     |
| host_routes       | {"destination": "169.254.169.254/32", "nexthop": "192.0.2.1"} |
| id                | 631022c3-cfc5-4353-b038-1592cceea57e                          |
| ip_version        | 4                                                             |
| ipv6_address_mode |                                                               |
| ipv6_ra_mode      |                                                               |
| name              |                                                               |
| network_id        | c27b8d62-f838-4c7e-8828-64ae1503f4c4                          |
| subnetpool_id     |                                                               |
| tenant_id         | ccf0e5fdbebb4335ad7875ec821af91d                              |
| updated_at        | 2016-05-26T11:32:18                                           |
+-------------------+---------------------------------------------------------------+

**********************************************
When overcloud deployment is done
**********************************************

[stack@undercloud ~]$ heat stack-list

+--------------------------------------+------------+-----------------+---------------------+--------------+
| id                                   | stack_name | stack_status    | creation_time       | updated_time |
+--------------------------------------+------------+-----------------+---------------------+--------------+
| 7002392b-cd2d-439f-b3cd-024979f153a5 | overcloud  | CREATE_COMPLETE | 2016-05-26T13:35:17 | None         |
+--------------------------------------+------------+-----------------+---------------------+--------------+

[stack@undercloud ~]$ nova list

+--------------------------------------+-------------------------+--------+------------+-------------+---------------------+
| ID                                   | Name                    | Status | Task State | Power State | Networks            |
+--------------------------------------+-------------------------+--------+------------+-------------+---------------------+
| 88f841ac-1ca0-4339-ba8a-c2895c0dc57c | overcloud-controller-0  | ACTIVE | -          | Running     | ctlplane=192.0.2.13 |
| f12a1086-7e23-4acb-80a7-8b2efe1e4ef2 | overcloud-novacompute-0 | ACTIVE | -          | Running     | ctlplane=192.0.2.12 |
+--------------------------------------+-------------------------+--------+------------+-------------+---------------------+

*******************************************
Log into overcloud controller
*******************************************

[stack@undercloud ~]$ ssh heat-admin@192.0.2.13
Last login: Thu May 26 16:52:28 2016 from gateway
[heat-admin@overcloud-controller-0 ~]$ sudo su -
Last login: Thu May 26 15:42:23 UTC 2016 on pts/0

[root@overcloud-controller-0 ~]# ls
keystonerc_admin  oskey01.pem
[root@overcloud-controller-0 ~]# . keystonerc_admin

[root@overcloud-controller-0 ~]# ifconfig

br-ex: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1500
inet 192.0.2.13  netmask 255.255.255.0  broadcast 192.0.2.255
inet6 fe80::2f7:7fff:fe1a:ca59  prefixlen 64  scopeid 0x20<link>
ether 00:f7:7f:1a:ca:59  txqueuelen 0  (Ethernet)
RX packets 689651  bytes 1362839189 (1.2 GiB)
RX errors 0  dropped 0  overruns 0  frame 0
TX packets 2346450  bytes 3243444405 (3.0 GiB)
TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0

eth0: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1500
inet6 fe80::2f7:7fff:fe1a:ca59  prefixlen 64  scopeid 0x20<link>
ether 00:f7:7f:1a:ca:59  txqueuelen 1000  (Ethernet)
RX packets 2783352  bytes 4201989574 (3.9 GiB)
RX errors 0  dropped 0  overruns 0  frame 0
TX packets 2876264  bytes 3280863833 (3.0 GiB)
TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0

lo: flags=73<UP,LOOPBACK,RUNNING>  mtu 65536
inet 127.0.0.1  netmask 255.0.0.0
inet6 ::1  prefixlen 128  scopeid 0x10<host>
loop  txqueuelen 0  (Local Loopback)
RX packets 2962545  bytes 8418607495 (7.8 GiB)
RX errors 0  dropped 0  overruns 0  frame 0
TX packets 2962545  bytes 8418607495 (7.8 GiB)
TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0

[root@overcloud-controller-0 ~]# ovs-vsctl show
cc8be4fb-f96f-4679-b85d-d0afc7dd7f72
    Bridge br-int
        fail_mode: secure
        Port "tapb86d48f2-45"
            tag: 2
            Interface "tapb86d48f2-45"
                type: internal
        Port "tapa4fa2a9d-a4"
            tag: 3
            Interface "tapa4fa2a9d-a4"
                type: internal
        Port "qr-eb92ffa9-da"
            tag: 2
            Interface "qr-eb92ffa9-da"
                type: internal
        Port "qr-e8146f9f-51"
            tag: 3
            Interface "qr-e8146f9f-51"
                type: internal
        Port br-int
            Interface br-int
                type: internal
        Port patch-tun
            Interface patch-tun
                type: patch
                options: {peer=patch-int}
        Port int-br-ex
            Interface int-br-ex
                type: patch
                options: {peer=phy-br-ex}
    Bridge br-tun
        fail_mode: secure
        Port "vxlan-c000020c"
            Interface "vxlan-c000020c"
                type: vxlan
                options: {df_default="true", in_key=flow, local_ip="192.0.2.13", out_key=flow, remote_ip="192.0.2.12"}
        Port patch-int
            Interface patch-int
                type: patch
                options: {peer=patch-tun}
        Port br-tun
            Interface br-tun
                type: internal
    Bridge br-ex
        Port phy-br-ex
            Interface phy-br-ex
                type: patch
                options: {peer=int-br-ex}
        Port "qg-df23145d-8f"
            Interface "qg-df23145d-8f"
                type: internal
        Port "qg-53315134-1d"
            Interface "qg-53315134-1d"
                type: internal
        Port br-ex
            Interface br-ex
                type: internal
        Port "eth0"
            Interface "eth0"
    ovs_version: "2.5.0"

***************************************************
Routing table on overcloud controller
***************************************************

[root@overcloud-controller-0 ~]# ip route
default via 192.0.2.1 dev br-ex  proto static
169.254.169.254 via 192.0.2.1 dev br-ex  proto static
192.0.2.0/24 dev br-ex  proto kernel  scope link  src 192.0.2.13

Network topology

[root@overcloud-controller-0 ~]# neutron net-list

+--------------------------------------+--------------+----------------------------------------+
| id                                   | name         | subnets                                |
+--------------------------------------+--------------+----------------------------------------+
| 1dad601c-c865-41d8-94cb-efc634c1fc83 | public       | 12787d8b-1b72-402d-9b93-2821f0a18b7b   |
|                                      |              | 192.0.2.0/24                           |
| 0086836e-2dc3-4d40-a2e2-21f222b159f4 | demo_network | dcc40bfc-9293-47bb-8788-d4b5f090d076   |
|                                      |              | 90.0.0.0/24                            |
| 59168b6e-adca-4ec6-982a-f94a0eb770c8 | private      | ede9bbc2-5099-4d9f-91af-2fd4387d52be   |
|                                      |              | 50.0.0.0/24                            |
+--------------------------------------+--------------+----------------------------------------+

[root@overcloud-controller-0 ~]# nova service-list

+----+------------------+-------------------------------------+----------+---------+-------+----------------------------+-----------------+
| Id | Binary           | Host                                | Zone     | Status  | State | Updated_at                 | Disabled Reason |
+----+------------------+-------------------------------------+----------+---------+-------+----------------------------+-----------------+
| 1  | nova-cert        | overcloud-controller-0              | internal | enabled | up    | 2016-05-26T17:09:20.000000 | -               |
| 2  | nova-consoleauth | overcloud-controller-0              | internal | enabled | up    | 2016-05-26T17:09:20.000000 | -               |
| 5  | nova-scheduler   | overcloud-controller-0              | internal | enabled | up    | 2016-05-26T17:09:22.000000 | -               |
| 6  | nova-conductor   | overcloud-controller-0              | internal | enabled | up    | 2016-05-26T17:09:24.000000 | -               |
| 7  | nova-compute     | overcloud-novacompute-0.localdomain | nova     | enabled | up    | 2016-05-26T17:09:19.000000 | -               |
+----+------------------+-------------------------------------+----------+---------+-------+----------------------------+-----------------+

Running VMs

*************************************************************************
Verification of outbound connectivity: connecting from the undercloud VM
to VMs running in the overcloud via floating IPs belonging to 192.0.2.0/24
*************************************************************************

********************************************************
`ip netns` on overcloud controller
********************************************************

Even a minimal configuration won't work on 16 GB of RAM.
Server memory allocation for a minimal virtual environment:


Backport upstream commits to stable RDO Mitaka release && Deployments with Keystone API V3

May 23, 2016

The post below is written to avoid waiting until a "koji" build appears in the updates repo of the stable RDO Mitaka release, which might take a couple of months or so. The procedure doesn't require knowing how to write a Red Hat source rpm from scratch. It just needs picking up the raw content of git commits from the upstream git repo, converting them into patches, and rebuilding the required src.rpm(s) with the needed patch(es). There is also the not widely known command `rpm -qf`, which is very useful when you need to detect which rpm installed a particular file, for instance to know which src.rpm should be downloaded for a git commit referencing, say, "cinder.rb":

[root@ServerCentOS72 /]# find . -name cinder.rb -print
find: ‘./run/user/1000/gvfs’: Permission denied
./usr/share/openstack-puppet/modules/cinder/lib/puppet/provider/cinder.rb

[root@ServerCentOS72 /]# rpm -qf /usr/share/openstack-puppet/modules/cinder/lib/puppet/provider/cinder.rb
openstack-puppet-modules-8.0.4-2.el7.centos.noarch

*******************************
Thus download from
*******************************

1. https://cbs.centos.org/koji/buildinfo?buildID=10895
openstack-packstack-8.0.0-1.el7.src.rpm

2. https://cbs.centos.org/koji/buildinfo?buildID=10859
openstack-puppet-modules-8.0.4-1.el7.src.rpm

[boris@ServerCentOS72 Downloads]$ ls -l
total 3116
-rw-rw-r--. 1 boris boris  170107 May 21 21:26 openstack-packstack-8.0.0-1.el7.src.rpm
-rw-rw-r--. 1 boris boris 3015046 May 21 18:33 openstack-puppet-modules-8.0.4-1.el7.src.rpm

****************
Then run :-
****************

$ rpm -iv openstack-packstack-8.0.0-1.el7.src.rpm
$ rpm -iv  openstack-puppet-modules-8.0.4-1.el7.src.rpm
$ cd ../rpmbuild

In the folder ~boris/rpmbuild/SOURCES
create two patch files :-

0001-Use-versionless-auth_url-for-cinder.patch
0001-Enable-keystone-v3-support-for-cinder_type.patch
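The patch files themselves come from `git format-patch` run against the upstream repos. A minimal, self-contained sketch of the mechanics in a throwaway repo (in practice you would clone the upstream packstack / puppet module repos and export the real commits; the file content and commit below are illustrative only, the subject is chosen so the generated file name matches):

```shell
set -e
tmp=$(mktemp -d)
cd "$tmp"
git init -q demo && cd demo
git -c user.name=demo -c user.email=demo@example.com commit -q --allow-empty -m "init"
# Stand-in for the real upstream change
mkdir -p cinder/lib/puppet/provider
echo "# fixed provider" > cinder/lib/puppet/provider/cinder.rb
git add .
git -c user.name=demo -c user.email=demo@example.com commit -q -m "Enable keystone v3 support for cinder_type"
# Writes 0001-Enable-keystone-v3-support-for-cinder_type.patch from the last commit
git format-patch -1 HEAD
```

The generated file is exactly what `%patch0 -p1` expects in the SOURCES folder.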

********************************************************************
In the second patch file, insert "cinder" into the path to the *.rb files
********************************************************************

diff --git a/cinder/lib/puppet/provider/cinder_type/openstack.rb b/cinder/lib/puppet/provider/cinder_type/openstack.rb
index feaea49..9aa31c5 100644
--- a/cinder/lib/puppet/provider/cinder_type/openstack.rb
+++ b/cinder/lib/puppet/provider/cinder_type/openstack.rb
@@ -32,6 +32,10 @@ class Puppet::Provider::Cinder < Puppet::Provider::Openstack

. . . . .

diff --git a/cinder/lib/puppet/provider/cinder_type/openstack.rb b/cinder/lib/puppet/provider/cinder_type/openstack.rb
index feaea49..9aa31c5 100644
--- a/cinder/lib/puppet/provider/cinder_type/openstack.rb
+++ b/cinder/lib/puppet/provider/cinder_type/openstack.rb
@@ -7,7 +7,7 @@ Puppet::Type.type(:cinder_type).provide(

. . . . . .

diff --git a/cinder/spec/unit/provider/cinder_spec.rb b/cinder/spec/unit/provider/cinder_spec.rb
index cfc8850..246ae58 100644
--- a/cinder/spec/unit/provider/cinder_spec.rb
+++ b/cinder/spec/unit/provider/cinder_spec.rb
@@ -24,10 +24,12 @@ describe Puppet::Provider::Cinder do

Finally the SOURCES folder would look like :-

**********************
Next step is :-
**********************

$ cd ../SPECS

and update the *.spec files so that the patches placed in the SOURCES
folder are applied to the corresponding *.tar.gz archives before the
build phase itself.

*****************************************
First openstack-packstack.spec :-
*****************************************

Name:           openstack-packstack
Version:        8.0.0
Release:        2%{?milestone}%{?dist}   <== increase 1 to 2
Summary:        Openstack Install Utility
Group:          Applications/System
License:        ASL 2.0 and GPLv2
URL:            https://github.com/openstack/packstack
Source0:        http://tarballs.openstack.org/packstack/packstack-%{upstream_version}.tar.gz
Patch0:         0001-Use-versionless-auth_url-for-cinder.patch  <=== Add line

. . . . . .

%prep
%setup -n packstack-%{upstream_version}
%patch0 -p1  <==  Add line
:wq

*****************************************
Second openstack-puppet-modules.spec
*****************************************

Name:           openstack-puppet-modules
Epoch:          1
Version:        8.0.4
Release:        2%{?milestone}%{?dist}  <===  increase 1 to 2
Summary:        Puppet modules used to deploy OpenStack
License:        ASL 2.0 and GPLv2 and GPLv3
URL:         https://github.com/redhat-openstack
Source0:    https://github.com/redhat-openstack/%{name}/archive/%{upstream_version}.tar.gz
Patch0:    0001-Enable-keystone-v3-support-for-cinder_type.patch  <== Add line

. . . . .

%prep
%setup -q -n %{name}-%{?upstream_version}
%patch0 -p1  <== Add line
:wq

******************************************
Attempt rpmbuild for each spec file
******************************************

$ rpmbuild -bb openstack-packstack.spec
$ rpmbuild -bb openstack-puppet-modules.spec

If a particular build is missing some packages, it will report their names to the screen.
These packages can usually be installed via yum; otherwise you have a problem
with the local build.
If each build's output finishes with a message like

Wrote: /home/boris/rpmbuild/RPMS/noarch/openstack-puppet-modules-8.0.4-2.el7.centos.noarch.rpm
Executing(%clean): /bin/sh -e /var/tmp/rpm-tmp.wX6p3q
+ umask 022
+ cd /home/boris/rpmbuild/BUILD
+ cd openstack-puppet-modules-8.0.4
+ /usr/bin/rm -rf /home/boris/rpmbuild/BUILDROOT/openstack-puppet-modules-8.0.4-2.el7.centos.x86_64
+ exit 0

then everything went fine. In this particular case the results are written
to ../RPMS/noarch

Then

$ cd ../RPMS/noarch

and create an installation script

[boris@ServerCentOS72 SPECS]$ cd ../RPMS/noarch

[boris@ServerCentOS72 noarch]$ ls -l
total 3428
-rwxrwxr-x. 1 boris boris     239 May 21 21:40 install
-rw-rw-r--. 1 boris boris  247312 May 21 21:34 openstack-packstack-8.0.0-2.el7.centos.noarch.rpm
-rw-rw-r--. 1 boris boris   17376 May 21 21:34 openstack-packstack-doc-8.0.0-2.el7.centos.noarch.rpm
-rw-rw-r--. 1 boris boris   16792 May 21 21:34 openstack-packstack-puppet-8.0.0-2.el7.centos.noarch.rpm
-rw-rw-r--. 1 boris boris 3212844 May 21 21:38 openstack-puppet-modules-8.0.4-2.el7.centos.noarch.rpm

[boris@ServerCentOS72 noarch]$ cat install

sudo yum install openstack-packstack-8.0.0-2.el7.centos.noarch.rpm \
openstack-packstack-doc-8.0.0-2.el7.centos.noarch.rpm \
openstack-packstack-puppet-8.0.0-2.el7.centos.noarch.rpm \
openstack-puppet-modules-8.0.4-2.el7.centos.noarch.rpm

****************************
Run install :-
****************************

[boris@ServerCentOS72 noarch]$ ./install
Due to the increased release (1=>2), the old rpms will be replaced by the just-built ones

[root@ServerCentOS72 ~]# rpm -qa  \*openstack-packstack\*
openstack-packstack-doc-8.0.0-2.el7.centos.noarch
openstack-packstack-puppet-8.0.0-2.el7.centos.noarch
openstack-packstack-8.0.0-2.el7.centos.noarch

[root@ServerCentOS72 ~]# rpm -qa \*openstack-puppet-modules\*
openstack-puppet-modules-8.0.4-2.el7.centos.noarch

****************************************************************
From that point on, the following entry in your answer-file :-
****************************************************************
# Identity service API version string. ['v2.0', 'v3']
CONFIG_KEYSTONE_API_VERSION=v3
won't cause the cinder puppet module to crash the packstack run, regardless of the kind of deployment

References
1. https://bugzilla.redhat.com/show_bug.cgi?id=1330289


Creating functional ssh key-pair on RDO Mitaka via Chrome Advanced REST Client

May 2, 2016

The problem here is that the REST API POST request creating an ssh keypair for access to nova servers only uploads the public key to nova; it does not write the RSA private key to disk. Closing the Chrome client results in losing the RSA private key. To avoid losing it, save response-export.json as shown below. Working via the CLI (invoking curl) allows you to upload the RSA public key to Nova and save the RSA private key as a file :-

#!/bin/bash -x
 curl -g -i -X POST \
 http://192.169.142.127:8774/v2/052b16e56537467d8161266b52a43b54/os-keypairs \
 -H "User-Agent: python-novaclient" \
 -H "Content-Type: application/json" -H "Accept: application/json" \
 -H "X-Auth-Token: 2ae281359a8f4b249d5e8cf36c4233c0" -d  \
 '{"keypair": {"name": "oskey2"}}' |  tail -1 >output.json ;
 echo "Genegating rsa privare key for server access as file";
 echo "-----BEGIN RSA PRIVATE KEY-----" >  oskey2.pem ;
 sed 's/\\n/\
 /g' <  output.json | grep -v "keypair" | grep -v "user_id" >>oskey2.pem ;
 chmod 600 oskey2.pem

To start (in a keystone API v3 environment), obtain a project-scoped token via the request

[root@ip-192-169-142-127 ~(keystone_admin)]# curl -i -H "Content-Type: application/json" -d ' { "auth":
{ "identity":
{ "methods": ["password"], "password":
{ "user":
{ "name": "admin", "domain":
{ "id": "default" }, "password": "7049f834927e4468" }
}
},
"scope":
{ "project":
{ "name": "demo", "domain":
{ "id": "default" }
}
}
}
}'  http://192.169.142.127:5000/v3/auth/tokens ; echo

HTTP/1.1 201 Created
Date: Mon, 02 May 2016 10:41:25 GMT
Server: Apache/2.4.6 (CentOS)
X-Subject-Token: xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx  <= token value
Vary: X-Auth-Token
x-openstack-request-id: req-bed4f407-8cbd-4d43-acd5-7450d028bc45
Content-Length: 5791
Connection: close

Content-Type: application/json
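The token value arrives in the X-Subject-Token response header, so a small awk filter is enough to pull it into a shell variable. The sketch below parses a saved copy of the headers (headers.txt and the token string are stand-ins; in practice pipe the `curl -si ... /v3/auth/tokens` output straight into the filter):

```shell
# Fake saved response headers, shaped like the curl -i output above
cat > headers.txt <<'EOF'
HTTP/1.1 201 Created
X-Subject-Token: abc123exampletoken
Vary: X-Auth-Token
EOF
# tr -d '\r' strips the CR that HTTP header lines carry
TOKEN=$(awk '/^X-Subject-Token/ {print $2}' headers.txt | tr -d '\r')
echo "$TOKEN"
```

The resulting $TOKEN is what goes into the X-Auth-Token header of subsequent requests, such as the os-keypairs POST shown earlier.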

*******************************************************************************
Then run the script extracting the rsa private key from response-export.json
*******************************************************************************

#!/bin/bash -x
echo "Generating private key for server access"
echo "-----BEGIN RSA PRIVATE KEY-----" > $1.pem
sed 's/\\n/\
/g' < response-export.json | grep -v "keypair" | grep -v "user_id" >> $1.pem
chmod 600 $1.pem

like :-

# ./filter.sh oskeymitakaV3

***********************************
Shell command [ 1 ]  :-
***********************************

sed 's/\\n/\
/g' < response-export.json

will replace each literal '\n' sequence with a newline in response-export.json.
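A self-contained check of that sed idiom, using a shortened fake key in the same one-line JSON shape nova returns (a real private key is of course much longer):

```shell
# Fake response: the private key arrives as one JSON string with literal \n
cat > response-export.json <<'EOF'
{"keypair": {"private_key": "-----BEGIN RSA PRIVATE KEY-----\nMIIabcEXAMPLE\n-----END RSA PRIVATE KEY-----\n", "user_id": "x"}}
EOF
# Split on the literal \n sequences, then drop the JSON wrapper lines
sed 's/\\n/\
/g' < response-export.json | grep -v "keypair" | grep -v "user_id"
```

Note that the BEGIN header is filtered out here, because after splitting it still sits on the line containing "keypair"; that is exactly why the filter script above echoes "-----BEGIN RSA PRIVATE KEY-----" into the .pem file itself before appending the sed output.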

Now log into the dashboard and verify that the rsa public key was uploaded

Relaunch Chrome Advanced Rest Client and launch the server with
"key_name" : "oskeymitakaV3"

******************************************************************************
Login to server using rsa private key  oskeymitakaV3.pem
******************************************************************************

[boris@fedora23wks json]$ ssh -i oskeymitakaV3.pem ubuntu@192.169.142.169

The authenticity of host '192.169.142.169 (192.169.142.169)' can't be established.
ECDSA key fingerprint is SHA256:khfhZEHHwz7T18oIlKMCKWKY9b6ctsS8XMW5ZpVlRa8.
ECDSA key fingerprint is MD5:25:98:50:9f:b3:37:f3:a1:ed:95:5d:44:f4:03:13:14.
Are you sure you want to continue connecting (yes/no)? yes
Warning: Permanently added '192.169.142.169' (ECDSA) to the list of known hosts.
Welcome to Ubuntu 16.04 LTS (GNU/Linux 4.4.0-21-generic x86_64)
* Documentation:  https://help.ubuntu.com/
Get cloud support with Ubuntu Advantage Cloud Guest:
http://www.ubuntu.com/business/services/cloud
0 packages can be updated.
0 updates are security updates.
The programs included with the Ubuntu system are free software;
the exact distribution terms for each program are described in the
individual files in /usr/share/doc/*/copyright.

Ubuntu comes with ABSOLUTELY NO WARRANTY, to the extent permitted by
applicable law.
To run a command as administrator (user "root"), use "sudo <command>".
See "man sudo_root" for details.
ubuntu@ubuntuxenialdevs:~$


Creating Servers via REST API on RDO Mitaka && Keystone API V3

April 29, 2016

Normally, an ssh keypair for a particular tenant is created after sourcing that tenant's credentials, and it then works for that tenant. For some reason, upgrading the keystone API version to v3 breaks this scheme for REST API POST requests issued to create servers. I am not sure whether what follows is a workaround or whether it is supposed to work this way.

Assign the admin role to user admin on project demo via the openstack client

[root@ip-192-169-142-127 ~(keystone_admin)]# openstack project list| \
grep demo > list2

[root@ip-192-169-142-127 ~(keystone_admin)]# openstack user list| \
grep admin >> list2

[root@ip-192-169-142-127 ~(keystone_admin)]# openstack role list|\
grep admin >> list2

[root@ip-192-169-142-127 ~(keystone_admin)]# cat list2
| 052b16e56537467d8161266b52a43b54 | demo |
| b6f2f511caa44f4e94ce5b2a5809dc50 | admin |
| f40413a0de92494680ed8b812f2bf266 | admin |

[root@ip-192-169-142-127 ~(keystone_admin)]# openstack role add \
--project \
052b16e56537467d8161266b52a43b54 \
--user b6f2f511caa44f4e94ce5b2a5809dc50 \
f40413a0de92494680ed8b812f2bf266

*********************************************************************
Run to obtain a token scoped to "demo"
*********************************************************************

# . keystonerc_admin
# curl -i -H "Content-Type: application/json" -d \
' { "auth":
{ "identity":
{ "methods": ["password"], "password":
{ "user":
{ "name": "admin", "domain":
{ "id": "default" }, "password": "7049f834927e4468" }
}
},
"scope":
{ "project":
{ "name": "demo", "domain":
{ "id": "default" }
}
}
}
}' http://192.169.142.127:5000/v3/auth/tokens ; echo

Screenshot from 2016-04-28 19-47-00

Created ssh keypair "oskeydemoV3" sourcing keystonerc_admin

Screenshot from 2016-04-28 19-50-02

Admin Console shows

Screenshot from 2016-04-28 20-28-57

***************************************************************************************
Submit "oskeydemoV3" as the value for key_name in the Chrome REST Client environment && issue the POST request to create the server; "key_name" will be accepted (vs the case when the ssh keypair was created by tenant demo)
*************************************************************************************

Screenshot from 2016-04-28 19-52-24

Now log into dashboard as demo

Screenshot from 2016-04-28 19-56-25

Verify that the created keypair "oskeydemoV3" allows logging into the server

Screenshot from 2016-04-28 19-58-56


AIO RDO Liberty && several external networks VLAN provider setup

April 28, 2016

The post below addresses the case when an AIO RDO Liberty node has to have external networks of VLAN type with predefined VLAN tags. A straightforward `packstack --allinone` install doesn't achieve the desired network configuration; an external network provider of VLAN type appears to be required. In this particular case, the office networks 10.10.10.0/24 (VLAN tag 157), 10.10.57.0/24 (VLAN tag 172), and 10.10.32.0/24 (VLAN tag 200) already exist when the RDO install runs. If demo_provision was "y", then delete router1 and the created external network of VXLAN type.

I got back to this writing due to a recent post,
https://ask.openstack.org/en/question/91611/how-to-configure-multiple-external-networks-in-rdo-libertymitaka/
where the answer provided contains several misleading steps in configuring VLAN-enabled bridges.

First

***********************************************************
Update /etc/neutron/plugins/ml2/ml2_conf.ini
***********************************************************

[root@ip-192-169-142-52 ml2(keystone_demo)]# cat ml2_conf.ini
[ml2]
type_drivers = flat,vlan,vxlan
tenant_network_types = vlan,vxlan
mechanism_drivers =openvswitch
path_mtu = 0
[ml2_type_flat]
[ml2_type_vlan]
network_vlan_ranges = vlan157:157:157,vlan172:172:172,vlan200:200:200
[ml2_type_gre]
[ml2_type_vxlan]
vni_ranges =10:100
vxlan_group =224.0.0.1
[ml2_type_geneve]
[securitygroup]
enable_security_group = True
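One detail worth spelling out (and an assumption on my part about the node's bridge names): each physical_network label used in network_vlan_ranges above must also be mapped to an OVS bridge in the openvswitch agent configuration, or the neutron net-create commands below will not bind. A sketch of the corresponding fragment of /etc/neutron/plugins/ml2/openvswitch_agent.ini, with illustrative bridge names br-vlan157/br-vlan172/br-vlan200 that should match the bridges actually created on the node:

```ini
[ovs]
bridge_mappings = vlan157:br-vlan157,vlan172:br-vlan172,vlan200:br-vlan200
```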

**************
Then
**************

# openstack-service restart neutron

***************************************************
Invoke external network provider
***************************************************

[root@ip-192-169-142-52 ~(keystone_admin)]# neutron net-create vlan157 --shared --provider:network_type vlan --provider:segmentation_id 157 --provider:physical_network vlan157 --router:external

[root@ip-192-169-142-52 ~(keystone_admin)]# neutron subnet-create --name sub-vlan157 --gateway 10.10.10.1  --allocation-pool start=10.10.10.100,end=10.10.10.200 vlan157 10.10.10.0/24

***********************************************
Create second external network
***********************************************

[root@ip-192-169-142-52 ~(keystone_admin)]# neutron net-create vlan172 --shared --provider:network_type vlan --provider:segmentation_id 172 --provider:physical_network vlan172  --router:external


[root@ip-192-169-142-52 ~(keystone_admin)]# neutron subnet-create --name sub-vlan172 --gateway 10.10.57.1 --allocation-pool start=10.10.57.100,end=10.10.57.200 vlan172 10.10.57.0/24

***********************************************
Create third external network
***********************************************

[root@ip-192-169-142-52 ~(keystone_admin)]# neutron net-create vlan200 --shared --provider:network_type vlan --provider:segmentation_id 200 --provider:physical_network vlan200  --router:external

[root@ip-192-169-142-52 ~(keystone_admin)]# neutron subnet-create --name sub-vlan200 --gateway 10.10.32.1 --allocation-pool start=10.10.32.100,end=10.10.32.200 vlan200 10.10.32.0/24

***********************************************************************
No need to update the subnet (vs [ 1 ]) and no switch to "enable_isolated_metadata=True".
The Neutron L3 agent configuration results in attaching qg-<port-id> interfaces to br-int
***********************************************************************

[root@ip-192-169-142-52 ~(keystone_admin)]# neutron net-show vlan157

+---------------------------+--------------------------------------+
| Field                     | Value                                |
+---------------------------+--------------------------------------+
| admin_state_up            | True                                 |
| id                        | b41e4d36-9a63-4631-abb0-6436f2f50e2e |
| mtu                       | 0                                    |
| name                      | vlan157                              |
| provider:network_type     | vlan                                 |
| provider:physical_network | vlan157                              |
| provider:segmentation_id  | 157                                  |
| router:external           | True                                 |
| shared                    | True                                 |
| status                    | ACTIVE                               |
| subnets                   | bb753fc3-f257-4ce5-aa7c-56648648056b |
| tenant_id                 | b18d25d66bbc48b1ad4b855a9c14da70     |
+---------------------------+--------------------------------------+

[root@ip-192-169-142-52 ~(keystone_admin)]# neutron subnet-show sub-vlan157

+-------------------+------------------------------------------------------------------+
| Field             | Value                                                            |
+-------------------+------------------------------------------------------------------+
| allocation_pools  | {"start": "10.10.10.100", "end": "10.10.10.200"}                 |
| cidr              | 10.10.10.0/24                                                    |
| dns_nameservers   |                                                                  |
| enable_dhcp       | True                                                             |
| gateway_ip        | 10.10.10.1                                                       |
| host_routes       | {"destination": "169.254.169.254/32", "nexthop": "10.10.10.151"} |
| id                | bb753fc3-f257-4ce5-aa7c-56648648056b                             |
| ip_version        | 4                                                                |
| ipv6_address_mode |                                                                  |
| ipv6_ra_mode      |                                                                  |
| name              | sub-vlan157                                                      |
| network_id        | b41e4d36-9a63-4631-abb0-6436f2f50e2e                             |
| subnetpool_id     |                                                                  |
| tenant_id         | b18d25d66bbc48b1ad4b855a9c14da70                                 |
+-------------------+------------------------------------------------------------------+

[root@ip-192-169-142-52 ~(keystone_admin)]# neutron net-show vlan172

+---------------------------+--------------------------------------+
| Field                     | Value                                |
+---------------------------+--------------------------------------+
| admin_state_up            | True                                 |
| id                        | 3714adc9-ab17-4f96-9df2-48a6c0b64513 |
| mtu                       | 0                                    |
| name                      | vlan172                              |
| provider:network_type     | vlan                                 |
| provider:physical_network | vlan172                              |
| provider:segmentation_id  | 172                                  |
| router:external           | True                                 |
| shared                    | True                                 |
| status                    | ACTIVE                               |
| subnets                   | 21419f2f-212b-409a-8021-2b4a2ba6532f |
| tenant_id                 | b18d25d66bbc48b1ad4b855a9c14da70     |
+---------------------------+--------------------------------------+

[root@ip-192-169-142-52 ~(keystone_admin)]# neutron subnet-show sub-vlan172

+-------------------+------------------------------------------------------------------+
| Field             | Value                                                            |
+-------------------+------------------------------------------------------------------+
| allocation_pools  | {"start": "10.10.57.100", "end": "10.10.57.200"}                 |
| cidr              | 10.10.57.0/24                                                    |
| dns_nameservers   |                                                                  |
| enable_dhcp       | True                                                             |
| gateway_ip        | 10.10.57.1                                                       |
| host_routes       | {"destination": "169.254.169.254/32", "nexthop": "10.10.57.151"} |
| id                | 21419f2f-212b-409a-8021-2b4a2ba6532f                             |
| ip_version        | 4                                                                |
| ipv6_address_mode |                                                                  |
| ipv6_ra_mode      |                                                                  |
| name              | sub-vlan172                                                      |
| network_id        | 3714adc9-ab17-4f96-9df2-48a6c0b64513                             |
| subnetpool_id     |                                                                  |
| tenant_id         | b18d25d66bbc48b1ad4b855a9c14da70                                 |
+-------------------+------------------------------------------------------------------+

[root@ip-192-169-142-52 ~(keystone_admin)]# neutron net-show vlan200

+---------------------------+--------------------------------------+
| Field                     | Value                                |
+---------------------------+--------------------------------------+
| admin_state_up            | True                                 |
| id                        | 3dc90ff7-b1df-4079-aca1-cceedb23f440 |
| mtu                       | 0                                    |
| name                      | vlan200                              |
| provider:network_type     | vlan                                 |
| provider:physical_network | vlan200                              |
| provider:segmentation_id  | 200                                  |
| router:external           | True                                 |
| shared                    | True                                 |
| status                    | ACTIVE                               |
| subnets                   | 60181211-ea36-4e4e-8781-f13f743baa19 |
| tenant_id                 | b18d25d66bbc48b1ad4b855a9c14da70     |
+---------------------------+--------------------------------------+

[root@ip-192-169-142-52 ~(keystone_admin)]# neutron subnet-show sub-vlan200

+-------------------+--------------------------------------------------+
| Field             | Value                                            |
+-------------------+--------------------------------------------------+
| allocation_pools  | {"start": "10.10.32.100", "end": "10.10.32.200"} |
| cidr              | 10.10.32.0/24                                    |
| dns_nameservers   |                                                  |
| enable_dhcp       | True                                             |
| gateway_ip        | 10.10.32.1                                       |
| host_routes       |                                                  |
| id                | 60181211-ea36-4e4e-8781-f13f743baa19             |
| ip_version        | 4                                                |
| ipv6_address_mode |                                                  |
| ipv6_ra_mode      |                                                  |
| name              | sub-vlan200                                      |
| network_id        | 3dc90ff7-b1df-4079-aca1-cceedb23f440             |
| subnetpool_id     |                                                  |
| tenant_id         | b18d25d66bbc48b1ad4b855a9c14da70                 |
+-------------------+--------------------------------------------------+

**************
Next Step
**************

# modprobe 8021q
# ovs-vsctl add-br br-vlan
# ovs-vsctl add-port br-vlan eth1
# vconfig add br-vlan 157
# ovs-vsctl add-br br-vlan2
# ovs-vsctl add-port br-vlan2 eth2
# vconfig add br-vlan2 172
# ovs-vsctl add-br br-vlan3
# ovs-vsctl add-port br-vlan3 eth3
# vconfig add br-vlan3  200
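
Note that `vconfig` is deprecated in favor of iproute2 on recent kernels; the same per-bridge VLAN interface can be sketched with `ip link` instead (shown here for br-vlan/VLAN 157 only, assuming the bridge and interface names used above):

```shell
# Equivalent setup via iproute2 instead of vconfig (run as root)
modprobe 8021q
ovs-vsctl add-br br-vlan
ovs-vsctl add-port br-vlan eth1
ip link add link br-vlan name br-vlan.157 type vlan id 157
ip link set dev br-vlan.157 up
```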

******************************
Update l3_agent.ini file
******************************
external_network_bridge =
gateway_external_network_id =

**********************************************
/etc/neutron/plugins/ml2/openvswitch_agent.ini
**********************************************

bridge_mappings = vlan157:br-vlan,vlan172:br-vlan2,vlan200:br-vlan3

*************************************
Update Neutron Configuration
*************************************

# openstack-service restart neutron

*******************************************
Set up config persistent between reboots
*******************************************

/etc/sysconfig/network-scripts/ifcfg-eth1

DEVICE="eth1"
ONBOOT=yes
OVS_BRIDGE=br-vlan
TYPE=OVSPort
DEVICETYPE="ovs"

/etc/sysconfig/network-scripts/ifcfg-br-vlan

DEVICE=br-vlan
BOOTPROTO=none
ONBOOT=yes
TYPE=OVSBridge
DEVICETYPE="ovs"

/etc/sysconfig/network-scripts/ifcfg-br-vlan.157

BOOTPROTO="none"
DEVICE="br-vlan.157"
ONBOOT="yes"
IPADDR="10.10.10.150"
PREFIX="24"
GATEWAY="10.10.10.1"
DNS1="83.221.202.254"
VLAN=yes
NOZEROCONF=yes
USERCTL=no

/etc/sysconfig/network-scripts/ifcfg-eth2

DEVICE="eth2"
ONBOOT=yes
OVS_BRIDGE=br-vlan2
TYPE=OVSPort
DEVICETYPE="ovs"

/etc/sysconfig/network-scripts/ifcfg-br-vlan2

DEVICE=br-vlan2
BOOTPROTO=none
ONBOOT=yes
TYPE=OVSBridge
DEVICETYPE="ovs"

/etc/sysconfig/network-scripts/ifcfg-br-vlan2.172

BOOTPROTO="none"
DEVICE="br-vlan2.172"
ONBOOT="yes"
IPADDR="10.10.57.150"
PREFIX="24"
GATEWAY="10.10.57.1"
DNS1="83.221.202.254"
VLAN=yes
NOZEROCONF=yes

/etc/sysconfig/network-scripts/ifcfg-br-vlan3

DEVICE=br-vlan3
BOOTPROTO=none
ONBOOT=yes
TYPE=OVSBridge
DEVICETYPE="ovs"

/etc/sysconfig/network-scripts/ifcfg-br-vlan3.200

BOOTPROTO="none"
DEVICE="br-vlan3.200"
ONBOOT="yes"
IPADDR="10.10.32.150"
PREFIX="24"
GATEWAY="10.10.32.1"
DNS1="83.221.202.254"
VLAN=yes
NOZEROCONF=yes
USERCTL=no

/etc/sysconfig/network-scripts/ifcfg-eth3

DEVICE="eth3"
ONBOOT=yes
OVS_BRIDGE=br-vlan3
TYPE=OVSPort
DEVICETYPE="ovs"

********************************************
Routing table on AIO RDO Liberty Node
********************************************

[root@ip-192-169-142-52 ~(keystone_admin)]# ip route

default via 10.10.10.1 dev br-vlan.157
10.10.10.0/24 dev br-vlan.157  proto kernel  scope link  src 10.10.10.150
10.10.32.0/24 dev br-vlan3.200  proto kernel  scope link  src 10.10.32.150
10.10.57.0/24 dev br-vlan2.172  proto kernel  scope link  src 10.10.57.150
169.254.0.0/16 dev eth0  scope link  metric 1002
169.254.0.0/16 dev eth1  scope link  metric 1003
169.254.0.0/16 dev eth2  scope link  metric 1004
169.254.0.0/16 dev eth3  scope link  metric 1005
169.254.0.0/16 dev br-vlan3  scope link  metric 1008
169.254.0.0/16 dev br-vlan2  scope link  metric 1009
169.254.0.0/16 dev br-vlan  scope link  metric 1011
192.169.142.0/24 dev eth0  proto kernel  scope link  src 192.169.142.52

****************************************************************************
Notice that both qrouter namespaces are attached to br-int.
No switch to "enable_isolated_metadata=True" as in [ 1 ] is needed.
*****************************************************************************
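
For comparison, the isolated-metadata approach of [ 1 ] (which is deliberately not used here) would be enabled in /etc/neutron/dhcp_agent.ini with roughly the following fragment:

```ini
[DEFAULT]
# Serve metadata from the qdhcp namespace instead of relying on a qrouter
enable_isolated_metadata = True
```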

[root@ip-192-169-142-52 ~(keystone_admin)]# neutron net-list | grep vlan
| 3dc90ff7-b1df-4079-aca1-cceedb23f440 | vlan200   | 60181211-ea36-4e4e-8781-f13f743baa19 10.10.32.0/24 |
| 235c8173-d3f8-407e-ad6a-c1d3d423c763 | vlan172   | c7588239-4941-419b-8d27-ccd970acc4ce 10.10.57.0/24 |
| b41e4d36-9a63-4631-abb0-6436f2f50e2e | vlan157   | bb753fc3-f257-4ce5-aa7c-56648648056b 10.10.10.0/24 |

[root@ip-192-169-142-52 ~(keystone_admin)]# ovs-vsctl show
40286423-e174-4714-9c82-32d026ef47ca
    Bridge br-vlan
        Port "eth1"
            Interface "eth1"
        Port br-vlan
            Interface br-vlan
                type: internal
        Port phy-br-vlan
            Interface phy-br-vlan
                type: patch
                options: {peer=int-br-vlan}
    Bridge br-ex
        Port br-ex
            Interface br-ex
                type: internal
    Bridge br-tun
        fail_mode: secure
        Port br-tun
            Interface br-tun
                type: internal
        Port patch-int
            Interface patch-int
                type: patch
                options: {peer=patch-tun}
    Bridge "br-vlan2"
        Port "phy-br-vlan2"
            Interface "phy-br-vlan2"
                type: patch
                options: {peer="int-br-vlan2"}
        Port "eth2"
            Interface "eth2"
        Port "br-vlan2"
            Interface "br-vlan2"
                type: internal
    Bridge "br-vlan3"
        Port "br-vlan3"
            Interface "br-vlan3"
                type: internal
        Port "phy-br-vlan3"
            Interface "phy-br-vlan3"
                type: patch
                options: {peer="int-br-vlan3"}
        Port "eth3"
            Interface "eth3"
    Bridge br-int
        fail_mode: secure
        Port "qr-4e77c7a3-b5"
            tag: 3
            Interface "qr-4e77c7a3-b5"
                type: internal
        Port "int-br-vlan3"
            Interface "int-br-vlan3"
                type: patch
                options: {peer="phy-br-vlan3"}
        Port "tap8e684c78-a3"
            tag: 2
            Interface "tap8e684c78-a3"
                type: internal
        Port patch-tun
            Interface patch-tun
                type: patch
                options: {peer=patch-int}
        Port "qvoe2761636-b5"
            tag: 4
            Interface "qvoe2761636-b5"
        Port "tap6cd6fadf-31"
            tag: 1
            Interface "tap6cd6fadf-31"
                type: internal
        Port "qg-02f7ff0d-6d"
            tag: 2
            Interface "qg-02f7ff0d-6d"
                type: internal
        Port "qg-943f7831-46"
            tag: 1
            Interface "qg-943f7831-46"
                type: internal
        Port "tap4ef27b41-be"
            tag: 5
            Interface "tap4ef27b41-be"
                type: internal
        Port "qr-f0fd3793-4e"
            tag: 8
            Interface "qr-f0fd3793-4e"
                type: internal
        Port "tapb1435e62-8b"
            tag: 7
            Interface "tapb1435e62-8b"
                type: internal
        Port "qvo1bb76476-05"
            tag: 3
            Interface "qvo1bb76476-05"
        Port "qvocf68fcd8-68"
            tag: 8
            Interface "qvocf68fcd8-68"
        Port "qvo8605f075-25"
            tag: 4
            Interface "qvo8605f075-25"
        Port "qg-08ccc224-1e"
            tag: 7
            Interface "qg-08ccc224-1e"
                type: internal
        Port "tapbb485628-0b"
            tag: 4
            Interface "tapbb485628-0b"
                type: internal
        Port "int-br-vlan2"
            Interface "int-br-vlan2"
                type: patch
                options: {peer="phy-br-vlan2"}
        Port "tapee030534-da"
            tag: 8
            Interface "tapee030534-da"
                type: internal
        Port "qr-4d679697-39"
            tag: 4
            Interface "qr-4d679697-39"
                type: internal
        Port br-int
            Interface br-int
                type: internal
        Port "tap9b38c69e-46"
            tag: 6
            Interface "tap9b38c69e-46"
                type: internal
        Port "tapc166022a-54"
            tag: 3
            Interface "tapc166022a-54"
                type: internal
        Port "qvo66d8f235-d4"
            tag: 8
            Interface "qvo66d8f235-d4"
        Port int-br-vlan
            Interface int-br-vlan
                type: patch
                options: {peer=phy-br-vlan}
    ovs_version: "2.4.0"

[root@ip-192-169-142-52 ~(keystone_admin)]# ip netns
qdhcp-e826aa22-dee0-478d-8bd7-721336e3824a
qrouter-a2f4c7e8-9b63-4ed3-8d9a-faa6158d253b
qdhcp-3dc90ff7-b1df-4079-aca1-cceedb23f440
qdhcp-4481aee1-ef86-4997-bf52-e435aafb9c20
qdhcp-eda69965-c6ee-42be-944f-2d61498e4bea
qdhcp-6768214b-b71c-4178-a0fc-774b2a5d59ef
qdhcp-b41e4d36-9a63-4631-abb0-6436f2f50e2e
qdhcp-03812cc9-69c5-492a-9995-985bf6e1ff13
qdhcp-235c8173-d3f8-407e-ad6a-c1d3d423c763
qdhcp-d958a059-f7bd-4f9f-93a3-3499d20a1fe2
qrouter-c1900dab-447f-4f87-80e7-b4c8ca087d28
qrouter-71237c84-59ca-45dc-a6ec-23eb94c4249d

********************************************************************************
Access to Nova Metadata Server provided via neutron-ns-metadata-proxy
running in corresponding qrouter namespaces  (Neutron L3 Configuration)
********************************************************************************

[root@ip-192-169-142-52 ~(keystone_admin)]# ip netns exec qrouter-a2f4c7e8-9b63-4ed3-8d9a-faa6158d253b netstat -antp

Active Internet connections (servers and established)

Proto Recv-Q Send-Q Local Address           Foreign Address         State       PID/Program name
tcp        0      0 0.0.0.0:9697            0.0.0.0:*               LISTEN      12548/python2    
[root@ip-192-169-142-52 ~(keystone_admin)]# ps aux | grep 12548

neutron  12548  0.0  0.4 281028 35992 ?        S    18:34   0:00 /usr/bin/python2 /bin/neutron-ns-metadata-proxy --pid_file=/var/lib/neutron/external/pids/a2f4c7e8-9b63-4ed3-8d9a-faa6158d253b.pid --metadata_proxy_socket=/var/lib/neutron/metadata_proxy --router_id=a2f4c7e8-9b63-4ed3-8d9a-faa6158d253b --state_path=/var/lib/neutron --metadata_port=9697 --metadata_proxy_user=990 --metadata_proxy_group=988 --verbose --log-file=neutron-ns-metadata-proxy-a2f4c7e8-9b63-4ed3-8d9a-faa6158d253b.log --log-dir=/var/log/neutron
root     32665  0.0  0.0 112644   960 pts/8    S+   19:29   0:00 grep --color=auto 12548

******************************************************************************
OVS flow verification on br-vlan3 and br-vlan2. On each external VLAN network
(vlan172, vlan200) two VMs are pinging each other.
******************************************************************************

[root@ip-192-169-142-52 ~(keystone_admin)]# ovs-ofctl dump-flows br-vlan3 | grep NORMAL
cookie=0x0, duration=3554.739s, table=0, n_packets=33, n_bytes=2074, idle_age=2137, priority=4,in_port=2,dl_vlan=7 actions=mod_vlan_vid:200,NORMAL
cookie=0x0, duration=4204.459s, table=0, n_packets=2102, n_bytes=109304, idle_age=1, priority=0 actions=NORMAL

[root@ip-192-169-142-52 ~(keystone_admin)]# ovs-ofctl dump-flows br-vlan3 | grep NORMAL
cookie=0x0, duration=3557.643s, table=0, n_packets=33, n_bytes=2074, idle_age=2140, priority=4,in_port=2,dl_vlan=7 actions=mod_vlan_vid:200,NORMAL
cookie=0x0, duration=4207.363s, table=0, n_packets=2103, n_bytes=109356, idle_age=2, priority=0 actions=NORMAL

[root@ip-192-169-142-52 ~(keystone_admin)]# ovs-ofctl dump-flows br-vlan3 | grep NORMAL
cookie=0x0, duration=3568.225s, table=0, n_packets=33, n_bytes=2074, idle_age=2151, priority=4,in_port=2,dl_vlan=7 actions=mod_vlan_vid:200,NORMAL
cookie=0x0, duration=4217.945s, table=0, n_packets=2109, n_bytes=109668, idle_age=0, priority=0 actions=NORMAL

[root@ip-192-169-142-52 ~(keystone_admin)]# ovs-ofctl dump-flows br-vlan2 | grep NORMAL
cookie=0x0, duration=4140.528s, table=0, n_packets=11, n_bytes=642, idle_age=695, priority=4,in_port=2,dl_vlan=1 actions=mod_vlan_vid:172,NORMAL
cookie=0x0, duration=4225.918s, table=0, n_packets=2113, n_bytes=109876, idle_age=1, priority=0 actions=NORMAL

[root@ip-192-169-142-52 ~(keystone_admin)]# ovs-ofctl dump-flows br-vlan2 | grep NORMAL
cookie=0x0, duration=4143.600s, table=0, n_packets=11, n_bytes=642, idle_age=698, priority=4,in_port=2,dl_vlan=1 actions=mod_vlan_vid:172,NORMAL
cookie=0x0, duration=4228.990s, table=0, n_packets=2115, n_bytes=109980, idle_age=0, priority=0 actions=NORMAL

[root@ip-192-169-142-52 ~(keystone_admin)]# ovs-ofctl dump-flows br-vlan2 | grep NORMAL
cookie=0x0, duration=4145.912s, table=0, n_packets=11, n_bytes=642, idle_age=700, priority=4,in_port=2,dl_vlan=1 actions=mod_vlan_vid:172,NORMAL
cookie=0x0, duration=4231.302s, table=0, n_packets=2116, n_bytes=110032, idle_age=0, priority=0 actions=NORMAL
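
The key fact in the dumps above is the local-to-provider VLAN translation (`dl_vlan=7` rewritten to 200, `dl_vlan=1` to 172). A minimal sketch of extracting those pairs from `ovs-ofctl dump-flows` output (the function name is illustrative, not part of any OVS tooling):

```python
import re

def vlan_translations(flow_dump):
    """Extract (local_vlan, provider_vlan) pairs from ovs-ofctl dump-flows output."""
    pairs = []
    for line in flow_dump.splitlines():
        m = re.search(r'dl_vlan=(\d+) actions=mod_vlan_vid:(\d+)', line)
        if m:
            pairs.append((int(m.group(1)), int(m.group(2))))
    return pairs

# Sample lines taken from the br-vlan3 dump above
dump = ("cookie=0x0, duration=3554.739s, table=0, n_packets=33, n_bytes=2074, "
        "idle_age=2137, priority=4,in_port=2,dl_vlan=7 actions=mod_vlan_vid:200,NORMAL\n"
        "cookie=0x0, duration=4204.459s, table=0, n_packets=2102, n_bytes=109304, "
        "idle_age=1, priority=0 actions=NORMAL")
print(vlan_translations(dump))  # [(7, 200)]
```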

********************************************************************************
Next question: how does the local vlan tag 7 get assigned?
Run the following commands :-
********************************************************************************

[root@ip-192-169-142-52 ~(keystone_admin)]# neutron net-show vlan200

+---------------------------+--------------------------------------+
| Field                     | Value                                |
+---------------------------+--------------------------------------+
| admin_state_up            | True                                 |
| id                        | 3dc90ff7-b1df-4079-aca1-cceedb23f440 |
| mtu                       | 0                                    |
| name                      | vlan200                              |
| provider:network_type     | vlan                                 |
| provider:physical_network | vlan200                              |
| provider:segmentation_id  | 200                                  |
| router:external           | True                                 |
| shared                    | True                                 |
| status                    | ACTIVE                               |
| subnets                   | 60181211-ea36-4e4e-8781-f13f743baa19 |
| tenant_id                 | b18d25d66bbc48b1ad4b855a9c14da70     |
+---------------------------+--------------------------------------+

[root@ip-192-169-142-52 ~(keystone_admin)]# ip netns | grep 3dc90ff7-b1df-4079-aca1-cceedb23f440
qdhcp-3dc90ff7-b1df-4079-aca1-cceedb23f440

[root@ip-192-169-142-52 ~(keystone_admin)]# ip netns exec qdhcp-3dc90ff7-b1df-4079-aca1-cceedb23f440 ifconfig

lo: flags=73<UP,LOOPBACK,RUNNING>  mtu 65536
inet 127.0.0.1  netmask 255.0.0.0
inet6 ::1  prefixlen 128  scopeid 0x10<host>
loop  txqueuelen 0  (Local Loopback)
RX packets 0  bytes 0 (0.0 B)
RX errors 0  dropped 0  overruns 0  frame 0
TX packets 0  bytes 0 (0.0 B)
TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0
tapb1435e62-8b: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1500
inet 10.10.32.100  netmask 255.255.255.0  broadcast 10.10.32.255
inet6 fe80::f816:3eff:fee3:19f2  prefixlen 64  scopeid 0x20<link>
ether fa:16:3e:e3:19:f2  txqueuelen 0  (Ethernet)
RX packets 27  bytes 1526 (1.4 KiB)
RX errors 0  dropped 0  overruns 0  frame 0
TX packets 8  bytes 648 (648.0 B)
TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0

[root@ip-192-169-142-52 ~(keystone_admin)]# ip netns exec qdhcp-3dc90ff7-b1df-4079-aca1-cceedb23f440 route -n

Kernel IP routing table
Destination     Gateway         Genmask         Flags Metric Ref    Use Iface
0.0.0.0         10.10.32.1      0.0.0.0         UG    0      0        0 tapb1435e62-8b
10.10.32.0      0.0.0.0         255.255.255.0   U     0      0        0 tapb1435e62-8b

[root@ip-192-169-142-52 ~(keystone_admin)]# ovs-vsctl show | grep b1435e62-8b

Port "tapb1435e62-8b"
    Interface "tapb1435e62-8b"

**************************************************************************
Actually, directives mentioned in  [ 1 ]
**************************************************************************

# neutron subnet-create --name vlan100 --gateway 192.168.0.1 --allocation-pool \
start=192.168.0.150,end=192.168.0.200 --enable-dhcp \
--dns-nameserver 192.168.0.1 vlan100 192.168.0.0/24
# neutron subnet-update --host-route destination=169.254.169.254/32,nexthop=192.168.0.151 vlan100

along with the switch to "enable_isolated_metadata=True", are aimed at launching VMs directly on the external fixed-IP pool in qdhcp-3dc90ff7-b1df-4079-aca1-cceedb23f440 without creating a Neutron router, splitting tenants by vlan tag IDs. I might be missing something, but [ 1 ] configures a system where each vlan(XXX) external network would belong to only one tenant, identified by tag (XXX).

That holds unless RBAC policies are created to control who has access to the provider network.

That is not what I intend to do. The Neutron workflow on br-int won't touch the mentioned qdhcp namespace at all. Any external vlan(XXX) network might be used by several tenants, each one having its own VXLAN subnet (identified in the system by VXLAN ID) and its own neutron router(XXX) to external network vlan(XXX). The AIO RDO setup is just a sample; I am talking about the Network Node in a multi-node RDO Liberty deployment.

*********************************************
Fragment from `ovs-vsctl show`
*********************************************
Port "tapb1435e62-8b"
    tag: 7
    Interface "tapb1435e62-8b"

*************************************************************************
The next appearance of vlan tag 7, as expected, is qg-08ccc224-1e,
the outgoing interface of the qrouter-a2f4c7e8-9b63-4ed3-8d9a-faa6158d253b
namespace.
*************************************************************************

[root@ip-192-169-142-52 ~(keystone_admin)]# ip netns exec qrouter-a2f4c7e8-9b63-4ed3-8d9a-faa6158d253b ifconfig

lo: flags=73<UP,LOOPBACK,RUNNING>  mtu 65536
inet 127.0.0.1  netmask 255.0.0.0
inet6 ::1  prefixlen 128  scopeid 0x10<host>
loop  txqueuelen 0  (Local Loopback)
RX packets 0  bytes 0 (0.0 B)
RX errors 0  dropped 0  overruns 0  frame 0
TX packets 0  bytes 0 (0.0 B)
TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0
qg-08ccc224-1e: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1500
inet 10.10.32.101  netmask 255.255.255.0  broadcast 10.10.32.255
inet6 fe80::f816:3eff:fed4:e7d  prefixlen 64  scopeid 0x20<link>
ether fa:16:3e:d4:0e:7d  txqueuelen 0  (Ethernet)
RX packets 0  bytes 0 (0.0 B)
RX errors 0  dropped 0  overruns 0  frame 0
TX packets 28  bytes 1704 (1.6 KiB)
TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0

qr-f0fd3793-4e: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1500
inet 30.0.0.1  netmask 255.255.255.0  broadcast 30.0.0.255
inet6 fe80::f816:3eff:fea9:5422  prefixlen 64  scopeid 0x20<link>
ether fa:16:3e:a9:54:22  txqueuelen 0  (Ethernet)
RX packets 68948  bytes 7192868 (6.8 MiB)
RX errors 0  dropped 0  overruns 0  frame 0
TX packets 68859  bytes 7185051 (6.8 MiB)
TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0

[root@ip-192-169-142-52 ~(keystone_admin)]# ip netns exec qrouter-a2f4c7e8-9b63-4ed3-8d9a-faa6158d253b route -n

Kernel IP routing table
Destination     Gateway         Genmask         Flags Metric Ref    Use Iface
0.0.0.0         10.10.32.1      0.0.0.0         UG    0      0        0 qg-08ccc224-1e
10.10.32.0      0.0.0.0         255.255.255.0   U     0      0        0 qg-08ccc224-1e
30.0.0.0        0.0.0.0         255.255.255.0   U     0      0        0 qr-f0fd3793-4e

*******************************************************************************************************
Now verify the Neutron router connecting the qrouter namespace (which has the interface with tag 7) and the qdhcp namespace created to launch the instances.
RoutesDSA has been created with an external gateway to vlan200 and an internal interface to subnet private07 (30.0.0.0/24), with DHCP enabled and a DNS server defined.
vlan157 and vlan172 are configured as external networks for their corresponding routers as well.
*******************************************************************************************************

[root@ip-192-169-142-52 ~(keystone_admin)]# neutron router-list | grep RoutesDSA

| a2f4c7e8-9b63-4ed3-8d9a-faa6158d253b | RoutesDSA  | {"network_id": "3dc90ff7-b1df-4079-aca1-cceedb23f440", "enable_snat": true, "external_fixed_ips": [{"subnet_id": "60181211-ea36-4e4e-8781-f13f743baa19", "ip_address": "10.10.32.101"}]} | False       | False |

[root@ip-192-169-142-52 ~(keystone_admin)]# ip netns | grep a2f4c7e8-9b63-4ed3-8d9a-faa6158d253b
qrouter-a2f4c7e8-9b63-4ed3-8d9a-faa6158d253b

[root@ip-192-169-142-52 ~(keystone_admin)]# ip netns | grep 3dc90ff7-b1df-4079-aca1-cceedb23f440
qdhcp-3dc90ff7-b1df-4079-aca1-cceedb23f440

[root@ip-192-169-142-52 ~(keystone_admin)]# ip netns exec qdhcp-3dc90ff7-b1df-4079-aca1-cceedb23f440 ifconfig
lo: flags=73<UP,LOOPBACK,RUNNING>  mtu 65536
inet 127.0.0.1  netmask 255.0.0.0
inet6 ::1  prefixlen 128  scopeid 0x10<host>
loop  txqueuelen 0  (Local Loopback)
RX packets 0  bytes 0 (0.0 B)
RX errors 0  dropped 0  overruns 0  frame 0
TX packets 0  bytes 0 (0.0 B)
TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0
tapb1435e62-8b: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1500
inet 10.10.32.100  netmask 255.255.255.0  broadcast 10.10.32.255
inet6 fe80::f816:3eff:fee3:19f2  prefixlen 64  scopeid 0x20<link>
ether fa:16:3e:e3:19:f2  txqueuelen 0  (Ethernet)
RX packets 27  bytes 1526 (1.4 KiB)
RX errors 0  dropped 0  overruns 0  frame 0
TX packets 8  bytes 648 (648.0 B)
TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0

**************************
Finally, run :-
**************************

[root@ip-192-169-142-52 ~(keystone_admin)]# neutron router-port-list RoutesDSA

+--------------------------------------+------+-------------------+-------------------------------------------------------------------------------------+
| id                                   | name | mac_address       | fixed_ips                                                                           |
+--------------------------------------+------+-------------------+-------------------------------------------------------------------------------------+
| 08ccc224-1e23-491a-8eec-c4db0ec00f02 |      | fa:16:3e:d4:0e:7d | {"subnet_id": "60181211-ea36-4e4e-8781-f13f743baa19", "ip_address": "10.10.32.101"} |
| f0fd3793-4e5a-467a-bd3c-e87bc9063d26 |      | fa:16:3e:a9:54:22 | {"subnet_id": "0c962484-3e48-4d86-a17f-16b0b1e5fc4d", "ip_address": "30.0.0.1"}     |
+--------------------------------------+------+-------------------+-------------------------------------------------------------------------------------+

[root@ip-192-169-142-52 ~(keystone_admin)]# neutron subnet-list | grep 0c962484-3e48-4d86-a17f-16b0b1e5fc4d
| 0c962484-3e48-4d86-a17f-16b0b1e5fc4d |               | 30.0.0.0/24   | {"start": "30.0.0.2", "end": "30.0.0.254"}       |
[root@ip-192-169-142-52 ~(keystone_admin)]# neutron subnet-list | grep 60181211-ea36-4e4e-8781-f13f743baa19
| 60181211-ea36-4e4e-8781-f13f743baa19 | sub-vlan200   | 10.10.32.0/24 | {"start": "10.10.32.100", "end": "10.10.32.200"} |

************************************
OVS Flows at br-vlan3
************************************

[root@ip-192-169-142-52 ~(keystone_admin)]# ovs-ofctl dump-flows br-vlan3 | grep NORMAL

cookie=0x0, duration=15793.182s, table=0, n_packets=33, n_bytes=2074, idle_age=14376, priority=4,in_port=2,dl_vlan=7 actions=mod_vlan_vid:200,NORMAL
cookie=0x0, duration=16442.902s, table=0, n_packets=8221, n_bytes=427492, idle_age=1, priority=0 actions=NORMAL

[root@ip-192-169-142-52 ~(keystone_admin)]# ovs-ofctl dump-flows br-vlan3 | grep NORMAL
cookie=0x0, duration=15796.300s, table=0, n_packets=33, n_bytes=2074, idle_age=14379, priority=4,in_port=2,dl_vlan=7 actions=mod_vlan_vid:200,NORMAL
cookie=0x0, duration=16446.020s, table=0, n_packets=8223, n_bytes=427596, idle_age=0, priority=0 actions=NORMAL

************************************************************
OVS Flow for {phy-br-vlan3,int-br-vlan3} veth pair
************************************************************

[root@ip-192-169-142-52 ~(keystone_admin)]# ovs-ofctl show br-vlan3 | grep phy-br-vlan3
2(phy-br-vlan3): addr:da:e4:fb:ba:8b:1a

[root@ip-192-169-142-52 ~(keystone_admin)]# ovs-ofctl show br-int | grep int-br-vlan3
19(int-br-vlan3): addr:b2:a9:9e:89:07:1b

[root@ip-192-169-142-52 ~(keystone_admin)]# ovs-ofctl dump-ports br-vlan3 2
OFPST_PORT reply (xid=0x2): 1 ports
port  2: rx pkts=6977, bytes=304270, drop=0, errs=0, frame=0, over=0, crc=0
tx pkts=55, bytes=7037, drop=0, errs=0, coll=0

[root@ip-192-169-142-52 ~(keystone_admin)]# ovs-ofctl dump-ports br-vlan3 2

OFPST_PORT reply (xid=0x2): 1 ports
port  2: rx pkts=6979, bytes=304354, drop=0, errs=0, frame=0, over=0, crc=0
tx pkts=55, bytes=7037, drop=0, errs=0, coll=0

[root@ip-192-169-142-52 ~(keystone_admin)]# ovs-ofctl dump-ports br-vlan3 2
OFPST_PORT reply (xid=0x2): 1 ports
port  2: rx pkts=6981, bytes=304438, drop=0, errs=0, frame=0, over=0, crc=0
tx pkts=55, bytes=7037, drop=0, errs=0, coll=0
[root@ip-192-169-142-52 ~(keystone_admin)]# ovs-ofctl dump-ports br-int 19
OFPST_PORT reply (xid=0x2): 1 ports
port 19: rx pkts=55, bytes=7037, drop=0, errs=0, frame=0, over=0, crc=0
tx pkts=6991, bytes=304858, drop=0, errs=0, coll=0

[root@ip-192-169-142-52 ~(keystone_admin)]# ovs-ofctl dump-ports br-int 19
OFPST_PORT reply (xid=0x2): 1 ports
port 19: rx pkts=55, bytes=7037, drop=0, errs=0, frame=0, over=0, crc=0
tx pkts=6994, bytes=304984, drop=0, errs=0, coll=0

[root@ip-192-169-142-52 ~(keystone_admin)]# ovs-ofctl dump-ports br-int 19
OFPST_PORT reply (xid=0x2): 1 ports
port 19: rx pkts=55, bytes=7037, drop=0, errs=0, frame=0, over=0, crc=0
tx pkts=7450, bytes=324136, drop=0, errs=0, coll=0
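
The growth of the tx counter on br-int port 19 between consecutive samples (6994 to 7450 packets) is what confirms traffic crossing the {phy-br-vlan3,int-br-vlan3} pair. A small sketch of parsing those counters and computing the delta (the helper name is illustrative):

```python
import re

def port_counters(dump_ports_output):
    """Parse rx/tx packet counts from `ovs-ofctl dump-ports` output for one port."""
    rx = int(re.search(r'rx pkts=(\d+)', dump_ports_output).group(1))
    tx = int(re.search(r'tx pkts=(\d+)', dump_ports_output).group(1))
    return rx, tx

# Two consecutive samples of br-int port 19 taken from the output above
sample1 = ("port 19: rx pkts=55, bytes=7037, drop=0, errs=0, frame=0, over=0, crc=0\n"
           " tx pkts=6994, bytes=304984, drop=0, errs=0, coll=0")
sample2 = ("port 19: rx pkts=55, bytes=7037, drop=0, errs=0, frame=0, over=0, crc=0\n"
           " tx pkts=7450, bytes=324136, drop=0, errs=0, coll=0")

rx1, tx1 = port_counters(sample1)
rx2, tx2 = port_counters(sample2)
print(tx2 - tx1)  # 456 packets transmitted between samples
```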

****************************************************************
Another connectivity test via br-int for vlan157
****************************************************************

[root@ip-192-169-142-52 ~(keystone_admin)]# ip netns exec qdhcp-4481aee1-ef86-4997-bf52-e435aafb9c20 ssh -i oskeyvls.pem cirros@10.10.10.101

$ ping -c 5 10.10.10.108

PING 10.10.10.108 (10.10.10.108): 56 data bytes
64 bytes from 10.10.10.108: seq=0 ttl=63 time=0.706 ms
64 bytes from 10.10.10.108: seq=1 ttl=63 time=0.772 ms
64 bytes from 10.10.10.108: seq=2 ttl=63 time=0.734 ms
64 bytes from 10.10.10.108: seq=3 ttl=63 time=0.740 ms
64 bytes from 10.10.10.108: seq=4 ttl=63 time=0.785 ms

--- 10.10.10.108 ping statistics ---

5 packets transmitted, 5 packets received, 0% packet loss

round-trip min/avg/max = 0.706/0.747/0.785 ms

******************************************************************************
Testing VM1<=>VM2 via floating IPs on external vlan net 10.10.10.0/24
*******************************************************************************

[root@ip-192-169-142-52 ~(keystone_admin)]# nova list --all

+--------------------------------------+--------------+----------------------------------+--------+------------+-------------+---------------------------------+
| ID                                   | Name         | Tenant ID                        | Status | Task State | Power State | Networks                        |
+--------------------------------------+--------------+----------------------------------+--------+------------+-------------+---------------------------------+
| a3d5ecf6-0fdb-4aa3-815f-171871eccb77 | CirrOSDevs01 | f16de8f8497d4f92961018ed836dee19 | ACTIVE | -          | Running     | private=40.0.0.17, 10.10.10.101 |
| 1b65f5db-d7d5-4e92-9a7c-60e7866ff8e5 | CirrOSDevs02 | f16de8f8497d4f92961018ed836dee19 | ACTIVE | -          | Running     | private=40.0.0.18, 10.10.10.110 |
| 46b7dad1-3a7d-4d94-8407-a654cca42750 | VF23Devs01   | f16de8f8497d4f92961018ed836dee19 | ACTIVE | -          | Running     | private=40.0.0.19, 10.10.10.111 |
+--------------------------------------+--------------+----------------------------------+--------+------------+-------------+---------------------------------+

[root@ip-192-169-142-52 ~(keystone_admin)]# ip netns

qdhcp-4481aee1-ef86-4997-bf52-e435aafb9c20
qdhcp-b41e4d36-9a63-4631-abb0-6436f2f50e2e
qrouter-c1900dab-447f-4f87-80e7-b4c8ca087d28

[root@ip-192-169-142-52 ~(keystone_admin)]# ip netns exec qdhcp-4481aee1-ef86-4997-bf52-e435aafb9c20 ssh cirros@10.10.10.110

The authenticity of host '10.10.10.110 (10.10.10.110)' can't be established.
RSA key fingerprint is b8:d3:ec:10:70:a7:da:d4:50:13:a8:2d:01:ba:e4:83.
Are you sure you want to continue connecting (yes/no)? yes
Warning: Permanently added '10.10.10.110' (RSA) to the list of known hosts.
cirros@10.10.10.110's password:

$ ifconfig

eth0      Link encap:Ethernet  HWaddr FA:16:3E:F1:6E:E5
inet addr:40.0.0.18  Bcast:40.0.0.255  Mask:255.255.255.0
inet6 addr: fe80::f816:3eff:fef1:6ee5/64 Scope:Link
UP BROADCAST RUNNING MULTICAST  MTU:1400  Metric:1
RX packets:367 errors:0 dropped:0 overruns:0 frame:0
TX packets:291 errors:0 dropped:0 overruns:0 carrier:0
collisions:0 txqueuelen:1000
RX bytes:36442 (35.5 KiB)  TX bytes:32019 (31.2 KiB)

lo        Link encap:Local Loopback
inet addr:127.0.0.1  Mask:255.0.0.0
inet6 addr: ::1/128 Scope:Host
UP LOOPBACK RUNNING  MTU:16436  Metric:1
RX packets:0 errors:0 dropped:0 overruns:0 frame:0
TX packets:0 errors:0 dropped:0 overruns:0 carrier:0
collisions:0 txqueuelen:0
RX bytes:0 (0.0 B)  TX bytes:0 (0.0 B)

$ curl http://169.254.169.254/latest/meta-data/public-ipv4
10.10.10.110$

$ ssh fedora@10.10.10.111
Host '10.10.10.111' is not in the trusted hosts file.
(fingerprint md5 23:c0:fb:fd:74:80:2f:12:d3:09:2f:9e:dd:19:f1:74)
Do you want to continue connecting? (y/n) y
fedora@10.10.10.111's password:
Last login: Sun Dec 13 15:52:43 2015 from 10.10.10.101
[fedora@vf23devs01 ~]$ ifconfig
eth0: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1400
inet 40.0.0.19  netmask 255.255.255.0  broadcast 40.0.0.255
inet6 fe80::f816:3eff:fea4:1a52  prefixlen 64  scopeid 0x20<link>
ether fa:16:3e:a4:1a:52  txqueuelen 1000  (Ethernet)
RX packets 283  bytes 30213 (29.5 KiB)
RX errors 0  dropped 0  overruns 0  frame 0
TX packets 303  bytes 35022 (34.2 KiB)
TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0
lo: flags=73<UP,LOOPBACK,RUNNING>  mtu 65536
inet 127.0.0.1  netmask 255.0.0.0
inet6 ::1  prefixlen 128  scopeid 0x10<host>
loop  txqueuelen 0  (Local Loopback)
RX packets 0  bytes 0 (0.0 B)
RX errors 0  dropped 0  overruns 0  frame 0
TX packets 0  bytes 0 (0.0 B)
TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0

[fedora@vf23devs01 ~]$ curl http://169.254.169.254/latest/meta-data/public-ipv4
10.10.10.111[fedora@vf23devs01 ~]$
[fedora@vf23devs01 ~]$ curl http://169.254.169.254/latest/meta-data/instance-id
i-00000009[fedora@vf23devs01 ~]$

[fedora@vf23devs01 ~]$


Creating Servers via REST API on RDO Mitaka via Chrome Advanced REST Client

April 21, 2016

In the post below we demonstrate the Chrome Advanced REST Client successfully issuing REST API POST requests to create RDO Mitaka servers (VMs), as well as getting information about servers via GET requests. All required HTTP headers are configured in the GUI environment, as well as the request body field for server creation.

The installed Keystone API version is v2.0.

Following [ 1 ], to authenticate access to OpenStack services you first issue an authentication request to obtain a token. If the request succeeds, the server returns an authentication token.

Source keystonerc_demo on the Controller or on the Compute node; it doesn't
matter. Then run this cURL command to request a token:

curl -s -X POST http://192.169.142.54:5000/v2.0/tokens \
-H "Content-Type: application/json" \
-d '{"auth": {"tenantName": "'"$OS_TENANT_NAME"'", "passwordCredentials": {"username": "'"$OS_USERNAME"'", "password": "'"$OS_PASSWORD"'"}}}' \
| python -m json.tool
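
The same request can be sketched in Python with only the standard library; the endpoint and credentials below are placeholders mirroring the cURL call above, and the request is only built, not sent:

```python
import json
import urllib.request

def build_token_request(auth_url, tenant, username, password):
    """Build an urllib Request for a Keystone v2.0 token (not sent here)."""
    payload = {"auth": {"tenantName": tenant,
                        "passwordCredentials": {"username": username,
                                                "password": password}}}
    return urllib.request.Request(
        auth_url + "/v2.0/tokens",
        data=json.dumps(payload).encode(),
        headers={"Content-Type": "application/json"})

req = build_token_request("http://192.169.142.54:5000", "demo", "demo", "secret")
print(req.get_method())  # POST (urllib infers POST when data is present)
```

To actually obtain the token you would pass `req` to `urllib.request.urlopen` and read `access.token.id` from the JSON response, exactly as the cURL output below shows.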

to get the authentication token, then scroll down to the bottom :-

"token": {
"audit_ids": [
"ce1JojlRSiO6TmMTDW3QNQ"
],
"expires": "2016-04-21T18:26:28Z",
"id": "0cfb3ec7a10c4f549a3dc138cf8a270a",  <== X-Auth-Token
"issued_at": "2016-04-21T17:26:28.246724Z",
"tenant": {
"description": "default tenant",
"enabled": true,
"id": "1578b57cfd8d43278098c5266f64e49f",  <=== demo tenant's id
"name": "demo"
}
},
"user": {
"id": "8e1e992eee474c3ab7a08ffde678e35b",
"name": "demo",
"roles": [
{
"name": "heat_stack_owner"
},
{
"name": "_member_"
}
],
"roles_links": [],
"username": "demo"
}
}
}
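Rather than scrolling through the output by eye, the two values needed for later requests can be pulled out programmatically. A minimal Python sketch using only the standard library; the sample values are copied from the response above:

```python
import json

# Trimmed v2.0 token response (values copied from the output above).
raw = '''
{"access": {"token": {
    "id": "0cfb3ec7a10c4f549a3dc138cf8a270a",
    "expires": "2016-04-21T18:26:28Z",
    "tenant": {"id": "1578b57cfd8d43278098c5266f64e49f", "name": "demo"}}}}
'''

token = json.loads(raw)["access"]["token"]
auth_token = token["id"]           # value for the X-Auth-Token header
tenant_id = token["tenant"]["id"]  # value for the /v2/<tenant_id>/... URL

print(auth_token)
print(tenant_id)
```

In practice this parsing does the same job as piping the cURL output through `python -m json.tool` and reading it manually.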

********************************************************************************************
The original token request can be issued via the Chrome Advanced REST Client as well
********************************************************************************************

Scrolling down shows the returned token and the demo tenant's id

Required output

{
"access":
{
"token":
{
"issued_at": "2016-04-21T21:56:52.668252Z",
"expires": "2016-04-21T22:56:52Z",
"id": "dd119ea14e97416b834ca72aab7f8b5a",
"tenant":
{
"description": "default tenant",
"enabled": true,
"id": "1578b57cfd8d43278098c5266f64e49f",
"name": "demo"
}

*****************************************************************************
Next, create an ssh keypair via the CLI or dashboard for the particular tenant :-
*****************************************************************************
nova keypair-add oskeymitaka0417 > oskeymitaka0417.pem
chmod 600 *.pem

******************************************************************************************
Below are a couple of sample REST API POST requests that start servers, as they are usually issued and described.
******************************************************************************************

curl -g -i -X POST http://192.169.142.54:8774/v2/1578b57cfd8d43278098c5266f64e49f/servers -H "User-Agent: python-novaclient" -H "Content-Type: application/json" -H "Accept: application/json" -H "X-Auth-Token: 0cfb3ec7a10c4f549a3dc138cf8a270a" -d '{"server": {"name": "CirrOSDevs03", "key_name": "oskeymitaka0417", "imageRef": "2e148cd0-7dac-49a7-8a79-2efddbd83852", "flavorRef": "1", "max_count": 1, "min_count": 1, "networks": [{"uuid": "e7c90970-c304-4f51-9d65-4be42318487c"}], "security_groups": [{"name": "default"}]}}'

curl -g -i -X POST http://192.169.142.54:8774/v2/1578b57cfd8d43278098c5266f64e49f/servers -H "User-Agent: python-novaclient" -H "Content-Type: application/json" -H "Accept: application/json" -H "X-Auth-Token: 0cfb3ec7a10c4f549a3dc138cf8a270a" -d '{"server": {"name": "VF23Devs03", "key_name": "oskeymitaka0417", "imageRef": "5b00b1a8-30d1-4e9d-bf7d-5f1abed5173b", "flavorRef": "2", "max_count": 1, "min_count": 1, "networks": [{"uuid": "e7c90970-c304-4f51-9d65-4be42318487c"}], "security_groups": [{"name": "default"}]}}'

**********************************************************************************
We are going to issue the REST API POST requests creating servers
via the Chrome Advanced REST Client
**********************************************************************************

[root@ip-192-169-142-54 ~(keystone_demo)]# glance image-list

+--------------------------------------+-----------------------+
| ID                                   | Name                  |
+--------------------------------------+-----------------------+
| 28b590fa-05c8-4706-893a-54efc4ca8cd6 | cirros                |
| 9c78c3da-b25b-4b26-9d24-514185e99c00 | Ubuntu1510Cloud-image |
| a050a122-a1dc-40d0-883f-25617e452d90 | VF23Cloud-image       |
+--------------------------------------+-----------------------+

[root@ip-192-169-142-54 ~(keystone_demo)]# neutron net-list
+--------------------------------------+--------------+----------------------------------------+
| id                                   | name         | subnets                                |
+--------------------------------------+--------------+----------------------------------------+
| 43daa7c3-4e04-4661-8e78-6634b06d63f3 | public       | 71e0197b-fe9a-4643-b25f-65424d169492   |
|                                      |              | 192.169.142.0/24                       |
| 292a2f21-70af-48ef-b100-c0639a8ffb22 | demo_network | d7aa6f0f-33ba-430d-a409-bd673bed7060   |
|                                      |              | 50.0.0.0/24                            |
+--------------------------------------+--------------+----------------------------------------+

First, the required headers were created in the corresponding fields, and the
following fragment was placed in the Raw Payload area of the Chrome client

{"server":
{"name": "VF23Devs03",
"key_name": "oskeymitaka0420",
"imageRef": "a050a122-a1dc-40d0-883f-25617e452d90",
"flavorRef": "2",
"max_count": 1,
"min_count": 1,
"networks": [{"uuid": "292a2f21-70af-48ef-b100-c0639a8ffb22"}],
"security_groups": [{"name": "default"}]
}
}
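The same payload can be generated programmatically instead of being hand-edited for each server. A minimal Python sketch; the image, flavor, and network ids are the ones shown above:

```python
import json

def server_payload(name, key_name, image_ref, flavor_ref, net_uuid):
    """Build the JSON body for POST /v2/<tenant_id>/servers."""
    return {"server": {
        "name": name,
        "key_name": key_name,
        "imageRef": image_ref,
        "flavorRef": flavor_ref,
        "max_count": 1,
        "min_count": 1,
        "networks": [{"uuid": net_uuid}],
        "security_groups": [{"name": "default"}],
    }}

body = json.dumps(server_payload(
    "VF23Devs03", "oskeymitaka0420",
    "a050a122-a1dc-40d0-883f-25617e452d90",   # imageRef (glance id above)
    "2",                                      # flavorRef
    "292a2f21-70af-48ef-b100-c0639a8ffb22"))  # demo_network uuid
```

The resulting `body` string is exactly what goes into the Raw Payload area (or the `-d` argument of cURL).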

Launching Fedora 23 Server :-

Next, an Ubuntu 15.10 server (VM) will be created by changing the image id in the Advanced REST Client GUI environment

Make sure that servers have been created and are currently up and running

***************************************************************************************
Now launch the Chrome REST Client again to verify the servers via a GET request
***************************************************************************************
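The verification GET carries the same X-Auth-Token header as the POST requests. A minimal Python sketch that only prepares the request (sending it requires a live endpoint); the URL, tenant id, and token are the ones used above:

```python
import urllib.request

def servers_request(base_url, tenant_id, auth_token):
    """Prepare (but do not send) GET /v2/<tenant_id>/servers."""
    req = urllib.request.Request(f"{base_url}/v2/{tenant_id}/servers")
    req.add_header("X-Auth-Token", auth_token)
    req.add_header("Accept", "application/json")
    return req

req = servers_request("http://192.169.142.54:8774",
                      "1578b57cfd8d43278098c5266f64e49f",
                      "0cfb3ec7a10c4f549a3dc138cf8a270a")
# urllib.request.urlopen(req) would return the JSON list of servers
```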


Neutron workflow for Docker Hypervisor running on DVR Cluster RDO Mitaka in an appropriate amount of detail && HA support for Glance storage used to load nova-docker instances

April 6, 2016

Why does DVR come into play?

This recalls a similar problem with the Nova-Docker driver (Kilo), with which I had the same kind of VXLAN connectivity issue (Controller <==> Compute) on F22 (OVS 2.4.0), while the same driver worked fine on CentOS 7.1 (OVS 2.3.1). My guess is that the Nova-Docker driver has a problem with OVS 2.4.0 regardless of which of the stable/kilo, stable/liberty, or stable/mitaka branches is checked out for the driver build.

Note that the issue is specific to the ML2&OVS&VXLAN setup; an RDO Mitaka ML2&OVS&VLAN deployment works with Nova-Docker (stable/mitaka) with no problems.

I have not run ovs-ofctl dump-flows on the br-tun bridges etc., because even having demonstrated the malfunction I cannot file it to BZ. The Nova-Docker driver is not packaged for RDO, so it is upstream stuff, and upstream won't consider an issue that involves building the driver from source on RDO Mitaka (RC1).

Thus, as a quick and efficient workaround, I suggest a DVR deployment. It results in South-North traffic being forwarded directly from the host running the Docker Hypervisor to the Internet and vice versa, thanks to the basic "fg" functionality (the outgoing interface of the fip-namespace, residing on the Compute node with the L3 agent running in "dvr" agent_mode).

**************************
Procedure in details
**************************

First install repositories for RDO Mitaka (the most recent build passed CI):-
# yum -y install yum-plugin-priorities
# cd /etc/yum.repos.d
# curl -O https://trunk.rdoproject.org/centos7-mitaka/delorean-deps.repo
# curl -O https://trunk.rdoproject.org/centos7-mitaka/current-passed-ci/delorean.repo
# yum -y install openstack-packstack (Controller only)

Now proceed as follows :-

1. Here is the answer file to deploy a pre-DVR cluster
2. See the pre-deployment actions to be undertaken on the Controller/Storage node

Before the DVR setup, switch the Glance back end to Swift (Swift is configured in the answer file as follows)

CONFIG_SWIFT_STORAGES=/dev/vdb1,/dev/vdc1,/dev/vdd1
CONFIG_SWIFT_STORAGE_ZONES=3
CONFIG_SWIFT_STORAGE_REPLICAS=3
CONFIG_SWIFT_STORAGE_FSTYPE=xfs
CONFIG_SWIFT_HASH=a55607bff10c4210
CONFIG_SWIFT_STORAGE_SIZE=10G

Upon setup completion, on the storage node :-

[root@ip-192-169-142-127 ~]# df -h
Filesystem               Size  Used Avail Use% Mounted on
/dev/mapper/centos-root   45G  5.3G   40G  12% /
devtmpfs                 2.8G     0  2.8G   0% /dev
tmpfs                    2.8G  204K  2.8G   1% /dev/shm
tmpfs                    2.8G   25M  2.8G   1% /run
tmpfs                    2.8G     0  2.8G   0% /sys/fs/cgroup
/dev/vdc1                 10G  2.5G  7.5G  25% /srv/node/vdc1
/dev/vdb1                 10G  2.5G  7.5G  25% /srv/node/vdb1
/dev/vdd1                 10G  2.5G  7.5G  25% /srv/node/vdd1

/dev/vda1                497M  211M  286M  43% /boot
tmpfs                    567M  4.0K  567M   1% /run/user/42
tmpfs                    567M  8.0K  567M   1% /run/user/1000

****************************
Update  glance-api.conf
****************************

[glance_store]
stores = swift
default_store = swift
swift_store_auth_address = http://192.169.142.127:5000/v2.0/
swift_store_user = services:glance
swift_store_key = f6a9398960534797 

swift_store_create_container_on_put = True
os_region_name=RegionOne

# openstack-service restart glance

# keystone user-role-add --tenant_id=$UUID_SERVICES_TENANT \
--user=$UUID_GLANCE_USER --role=$UUID_ResellerAdmin_ROLE

The value f6a9398960534797 corresponds to CONFIG_GLANCE_KS_PW in the answer file, i.e. the Keystone glance password for authentication.

2. Convert the cluster to DVR as advised in "RDO Liberty DVR Neutron workflow on CentOS 7.2"
http://dbaxps.blogspot.com/2015/10/rdo-liberty-rc-dvr-deployment.html
Just one note for RDO Mitaka: on each compute node run

# ovs-vsctl add-br br-ex
# ovs-vsctl add-port br-ex eth0

Then configure

***********************************************************
On Controller (X=2) and Computes X=(3,4) update :-
***********************************************************

# cat ifcfg-br-ex
DEVICE="br-ex"
BOOTPROTO="static"
IPADDR="192.169.142.1(X)7"
NETMASK="255.255.255.0"
DNS1="83.221.202.254"
BROADCAST="192.169.142.255"
GATEWAY="192.169.142.1"
NM_CONTROLLED="no"
DEFROUTE="yes"
IPV4_FAILURE_FATAL="yes"
IPV6INIT=no
ONBOOT="yes"
TYPE="OVSIntPort"
OVS_BRIDGE=br-ex
DEVICETYPE="ovs"

# cat ifcfg-eth0
DEVICE="eth0"
ONBOOT="yes"
TYPE="OVSPort"
DEVICETYPE="ovs"
OVS_BRIDGE=br-ex
NM_CONTROLLED=no
IPV6INIT=no

***************************
Then run script
***************************

#!/bin/bash -x
chkconfig network on
systemctl stop NetworkManager
systemctl disable NetworkManager
service network restart

Reboot node.

**********************************************
Nova-Docker Setup on each Compute
**********************************************

# curl -sSL https://get.docker.com/ | sh
# usermod -aG docker nova      ( does not seem to help set 660 on docker.sock )
# systemctl start docker
# systemctl enable docker
# chmod 666  /var/run/docker.sock (add to /etc/rc.d/rc.local)
# easy_install pip
# git clone -b stable/mitaka   https://github.com/openstack/nova-docker

*******************
Driver build
*******************

# cd nova-docker
# pip install -r requirements.txt
# python setup.py install

********************************************
Switch nova-compute to DockerDriver
********************************************
vi /etc/nova/nova.conf
compute_driver=novadocker.virt.docker.DockerDriver

******************************************************************
Next on Controller/Network Node and each Compute Node
******************************************************************

mkdir /etc/nova/rootwrap.d
vi /etc/nova/rootwrap.d/docker.filters

[Filters]
# nova/virt/docker/driver.py: ‘ln’, ‘-sf’, ‘/var/run/netns/.*’
ln: CommandFilter, /bin/ln, root

**********************************************************
Nova Compute Service restart on Compute Nodes
**********************************************************
# systemctl restart openstack-nova-compute

***********************************************
Glance API Service restart on Controller
**********************************************
vi /etc/glance/glance-api.conf

container_formats=ami,ari,aki,bare,ovf,ova,docker
# systemctl restart openstack-glance-api

**************************************************
Network flow on Compute in a bit more details
**************************************************

When a floating IP gets assigned to a VM, what actually happens is this ( [1] ) :-

The same explanation may be found in [4], though not in step-by-step style; in particular it contains a detailed description of the reverse network flow and the ARP proxy functionality.

1. The fip- namespace is created on the local compute node (if it does not already exist)
2. A new port rfp- gets created on the qrouter- namespace (if it does not already exist)
3. The rfp port on the qrouter namespace is assigned the associated floating IP address
4. The fpr port on the fip namespace gets created and linked via a point-to-point network to the rfp port of the qrouter namespace
5. The fip namespace gateway port fg- is assigned an additional address from the public network range to set up the ARP proxy point
6. The fg- port is configured as a Proxy ARP

*********************
Flow itself  ( [1] ):
*********************

1. The VM, initiating transmission, sends a packet via the default gateway, and br-int forwards the traffic to the local DVR gateway port (qr-).
2. DVR routes the packet using the routing table to the rfp- port.
3. A NAT rule is applied to the packet, replacing the source IP of the VM with the assigned floating IP, and it is then sent through the rfp- port, which connects to the fip namespace via the point-to-point network 169.254.31.28/31.
4. The packet is received on the fpr- port in the fip namespace and then routed outside through the fg- port.

dvr273Screenshot from 2016-04-06 22-17-32

[root@ip-192-169-142-137 ~(keystone_demo)]# nova list

+--------------------------------------+----------------+--------+------------+-------------+-----------------------------------------+
| ID                                   | Name           | Status | Task State | Power State | Networks                                |
+--------------------------------------+----------------+--------+------------+-------------+-----------------------------------------+
| 957814c1-834e-47e5-9236-ef228455fe36 | UbuntuDevs01   | ACTIVE | -          | Running     | demo_network=50.0.0.12, 192.169.142.151 |
| 65dd55b9-23ea-4e5b-aeed-4db259436df2 | derbyGlassfish | ACTIVE | -          | Running     | demo_network=50.0.0.13, 192.169.142.153 |
| f9311d57-4352-48a6-a042-b36393e0af7a | fedora22docker | ACTIVE | -          | Running     | demo_network=50.0.0.14, 192.169.142.154 |
+--------------------------------------+----------------+--------+------------+-------------+-----------------------------------------+

[root@ip-192-169-142-137 ~(keystone_demo)]# docker ps

CONTAINER ID        IMAGE                      COMMAND                  CREATED             STATUS              PORTS               NAMES

336679f5bf7a        kumarpraveen/fedora-sshd   “/usr/bin/supervisord”   About an hour ago   Up About an hour                        nova-f9311d57-4352-48a6-a042-b36393e0af7a
8bb2ce01e671        derby/docker-glassfish41   “/sbin/my_init”          2 hours ago         Up 2 hours                              nova-65dd55b9-23ea-4e5b-aeed-4db259436df2
fe5eb55a4c9d        rastasheep/ubuntu-sshd     “/usr/sbin/sshd -D”      3 hours ago         Up 3 hours                              nova-957814c1-834e-47e5-9236-ef228455fe36

[root@ip-192-169-142-137 ~(keystone_demo)]# nova show f9311d57-4352-48a6-a042-b36393e0af7a | grep image
| image                                | kumarpraveen/fedora-sshd (93345f0b-fcbd-41e4-b335-a4ecb8b59e73) |
[root@ip-192-169-142-137 ~(keystone_demo)]# nova show 65dd55b9-23ea-4e5b-aeed-4db259436df2 | grep image
| image                                | derby/docker-glassfish41 (9f2cd9bc-7840-47c1-81e8-3bc0f76426ec) |
[root@ip-192-169-142-137 ~(keystone_demo)]# nova show 957814c1-834e-47e5-9236-ef228455fe36 | grep image
| image                                | rastasheep/ubuntu-sshd (29c057f1-3c7d-43e3-80e6-dc8fef1ea035) |

[root@ip-192-169-142-137 ~(keystone_demo)]# . keystonerc_glance
[root@ip-192-169-142-137 ~(keystone_glance)]# glance image-list

+--------------------------------------+--------------------------+
| ID                                   | Name                     |
+--------------------------------------+--------------------------+
| 27551b28-6df7-4b0e-a0c8-322b416092c1 | cirros                   |
| 9f2cd9bc-7840-47c1-81e8-3bc0f76426ec | derby/docker-glassfish41 |
| 93345f0b-fcbd-41e4-b335-a4ecb8b59e73 | kumarpraveen/fedora-sshd |
| 29c057f1-3c7d-43e3-80e6-dc8fef1ea035 | rastasheep/ubuntu-sshd   |
+--------------------------------------+--------------------------+

[root@ip-192-169-142-137 ~(keystone_glance)]# swift list glance

29c057f1-3c7d-43e3-80e6-dc8fef1ea035
29c057f1-3c7d-43e3-80e6-dc8fef1ea035-00001
29c057f1-3c7d-43e3-80e6-dc8fef1ea035-00002

93345f0b-fcbd-41e4-b335-a4ecb8b59e73
93345f0b-fcbd-41e4-b335-a4ecb8b59e73-00001
93345f0b-fcbd-41e4-b335-a4ecb8b59e73-00002
93345f0b-fcbd-41e4-b335-a4ecb8b59e73-00003
93345f0b-fcbd-41e4-b335-a4ecb8b59e73-00004
93345f0b-fcbd-41e4-b335-a4ecb8b59e73-00005

9f2cd9bc-7840-47c1-81e8-3bc0f76426ec
9f2cd9bc-7840-47c1-81e8-3bc0f76426ec-00001
9f2cd9bc-7840-47c1-81e8-3bc0f76426ec-00002
9f2cd9bc-7840-47c1-81e8-3bc0f76426ec-00003
9f2cd9bc-7840-47c1-81e8-3bc0f76426ec-00004
9f2cd9bc-7840-47c1-81e8-3bc0f76426ec-00005
9f2cd9bc-7840-47c1-81e8-3bc0f76426ec-00006

Screenshot from 2016-04-06 18-08-30     Screenshot from 2016-04-06 18-08-46

Screenshot from 2016-04-06 18-09-28

 

 


Setting up Nova-Docker on Multi Node DVR Cluster RDO Mitaka

April 1, 2016

UPDATE 04/03/2016
   In the meantime, it is better to use the repositories for RC1
   rather than the Delorean trunks.
END UPDATE

DVR && Nova-Docker Driver (stable/mitaka) tested fine on RDO Mitaka (build 20160329), with none of the issues described in the previous post for RDO Liberty.
So, create a DVR deployment with Controller/Network + N Compute nodes. Switch to the Docker Hypervisor on each Compute node and make the required updates to glance and the filters file on the Controller. You are all set. Nova-Docker instances' FIPs are available from outside via the Neutron Distributed Router (DNAT) using the "fg" interface (fip-namespace) residing on the same host as the Docker Hypervisor. South-North traffic does not involve VXLAN tunneling on DVR systems.

Why does DVR come into play?

This recalls a similar problem with the Nova-Docker driver (Kilo),
with which I had the same kind of VXLAN connectivity issue (Controller <==> Compute)
on F22 (OVS 2.4.0), while the same driver worked fine on CentOS 7.1 (OVS 2.3.1).
My guess is that the Nova-Docker driver has a problem with OVS 2.4.0
regardless of which of the stable/kilo, stable/liberty, or stable/mitaka
branches is checked out for the driver build.

I have not run ovs-ofctl dump-flows on the br-tun bridges etc.,
because even having demonstrated the malfunction I cannot file it to BZ.
The Nova-Docker driver is not packaged for RDO, so it is upstream stuff,
and upstream won't consider an issue that involves building the driver
from source on RDO Mitaka (RC1).

Thus, as a quick and efficient workaround, I suggest a DVR deployment,
killing two birds with one stone. It results in South-North traffic
being forwarded directly from the host running the Docker Hypervisor
to the Internet and vice versa, thanks to the basic "fg" functionality
(the outgoing interface of the fip-namespace, residing on the Compute node
with the L3 agent running in "dvr" agent_mode).

dvr273

**************************
Procedure in details
**************************

First install repositories for RDO Mitaka (the most recent build passed CI):-
# yum -y install yum-plugin-priorities
# cd /etc/yum.repos.d
# curl -O https://trunk.rdoproject.org/centos7-mitaka/delorean-deps.repo
# curl -O https://trunk.rdoproject.org/centos7-mitaka/current-passed-ci/delorean.repo
# yum -y install openstack-packstack (Controller only)

Now proceed as follows :-

1. Here is the answer file to deploy a pre-DVR cluster
2. Convert the cluster to DVR as advised in "RDO Liberty DVR Neutron workflow on CentOS 7.2" :-

http://dbaxps.blogspot.com/2015/10/rdo-liberty-rc-dvr-deployment.html

Just one note for RDO Mitaka: on each compute node, first create br-ex and add port eth0

# ovs-vsctl add-br br-ex
# ovs-vsctl add-port br-ex eth0

Then configure

*********************************
Compute nodes X=(3,4)
*********************************

# cat ifcfg-br-ex
DEVICE="br-ex"
BOOTPROTO="static"
IPADDR="192.169.142.1(X)7"
NETMASK="255.255.255.0"
DNS1="83.221.202.254"
BROADCAST="192.169.142.255"
GATEWAY="192.169.142.1"
NM_CONTROLLED="no"
DEFROUTE="yes"
IPV4_FAILURE_FATAL="yes"
IPV6INIT=no
ONBOOT="yes"
TYPE="OVSIntPort"
OVS_BRIDGE=br-ex
DEVICETYPE="ovs"

# cat ifcfg-eth0
DEVICE="eth0"
ONBOOT="yes"
TYPE="OVSPort"
DEVICETYPE="ovs"
OVS_BRIDGE=br-ex
NM_CONTROLLED=no
IPV6INIT=no

***************************
Then run script
***************************

#!/bin/bash -x
chkconfig network on
systemctl stop NetworkManager
systemctl disable NetworkManager
service network restart

Reboot node.

**********************************************
Nova-Docker Setup on each Compute
**********************************************

# curl -sSL https://get.docker.com/ | sh
# usermod -aG docker nova      ( does not seem to help set 660 on docker.sock )
# systemctl start docker
# systemctl enable docker
# chmod 666  /var/run/docker.sock (add to /etc/rc.d/rc.local)
# easy_install pip
# git clone -b stable/mitaka   https://github.com/openstack/nova-docker

*******************
Driver build
*******************

# cd nova-docker
# pip install -r requirements.txt
# python setup.py install

********************************************
Switch nova-compute to DockerDriver
********************************************

vi /etc/nova/nova.conf
compute_driver=novadocker.virt.docker.DockerDriver

******************************************************************
Next on Controller/Network Node and each Compute Node
******************************************************************

mkdir /etc/nova/rootwrap.d
vi /etc/nova/rootwrap.d/docker.filters

[Filters]
# nova/virt/docker/driver.py: ‘ln’, ‘-sf’, ‘/var/run/netns/.*’
ln: CommandFilter, /bin/ln, root

**********************************************************
Nova Compute Service restart on Compute Nodes
**********************************************************
# systemctl restart openstack-nova-compute
***********************************************
Glance API Service restart on Controller
**********************************************
vi /etc/glance/glance-api.conf
container_formats=ami,ari,aki,bare,ovf,ova,docker

# systemctl restart openstack-glance-api

Screenshot from 2016-04-03 12-22-34                                          Screenshot from 2016-04-03 12-57-09                                          Screenshot from 2016-04-03 12-32-41

Screenshot from 2016-04-03 14-39-11

**************************************************************************************
Build the GlassFish 4.1 docker image on the Compute node per
http://bderzhavets.blogspot.com/2015/01/hacking-dockers-phusionbaseimage-to.html and upload it to glance :-
**************************************************************************************

[root@ip-192-169-142-137 ~(keystone_admin)]# docker images

REPOSITORY                 TAG                 IMAGE ID            CREATED              SIZE
derby/docker-glassfish41   latest              3a6b84ec9206        About a minute ago   1.155 GB
rastasheep/ubuntu-sshd     latest              70e0ac74c691        2 days ago           251.6 MB
phusion/baseimage          latest              772dd063a060        3 months ago         305.1 MB
tutum/tomcat               latest              2edd730bbedd        7 months ago         539.9 MB
larsks/thttpd              latest              a31ab5050b67        15 months ago        1.058 MB

[root@ip-192-169-142-137 ~(keystone_admin)]# docker save derby/docker-glassfish41 | openstack image create derby/docker-glassfish41 --public --container-format docker --disk-format raw

+------------------+------------------------------------------------------+
| Field            | Value                                                |
+------------------+------------------------------------------------------+
| checksum         | 9bea6dd0bcd8d0d7da2d82579c0e658a                     |
| container_format | docker                                               |
| created_at       | 2016-04-01T14:29:20Z                                 |
| disk_format      | raw                                                  |
| file             | /v2/images/acf03d15-b7c5-4364-b00f-603b6a5d9af2/file |
| id               | acf03d15-b7c5-4364-b00f-603b6a5d9af2                 |
| min_disk         | 0                                                    |
| min_ram          | 0                                                    |
| name             | derby/docker-glassfish41                             |
| owner            | 31b24d4b1574424abe53b9a5affc70c8                     |
| protected        | False                                                |
| schema           | /v2/schemas/image                                    |
| size             | 1175020032                                           |
| status           | active                                               |
| tags             |                                                      |
| updated_at       | 2016-04-01T14:30:13Z                                 |
| virtual_size     | None                                                 |
| visibility       | public                                               |
+------------------+------------------------------------------------------+

[root@ip-192-169-142-137 ~(keystone_admin)]# docker ps

CONTAINER ID        IMAGE                      COMMAND               CREATED             STATUS              PORTS               NAMES

8f551d35f2d7        derby/docker-glassfish41   “/sbin/my_init”       39 seconds ago      Up 31 seconds                           nova-faba725e-e031-4edb-bf2c-41c6dfc188c1
dee4425261e8        tutum/tomcat               “/run.sh”             About an hour ago   Up About an hour                        nova-13450558-12d7-414c-bcd2-d746495d7a57
41d2ebc54d75        rastasheep/ubuntu-sshd     “/usr/sbin/sshd -D”   2 hours ago         Up About an hour                        nova-04ddea42-10a3-4a08-9f00-df60b5890ee9

[root@ip-192-169-142-137 ~(keystone_admin)]# docker logs 8f551d35f2d7

*** Running /etc/my_init.d/00_regen_ssh_host_keys.sh…
No SSH host key available. Generating one…
*** Running /etc/my_init.d/01_sshd_start.sh…
Creating SSH2 RSA key; this may take some time …
Creating SSH2 DSA key; this may take some time …
Creating SSH2 ECDSA key; this may take some time …
Creating SSH2 ED25519 key; this may take some time …
invoke-rc.d: policy-rc.d denied execution of restart.
SSH KEYS regenerated by Boris just in case !
SSHD started !

*** Running /etc/my_init.d/database.sh…
Derby database started !
*** Running /etc/my_init.d/run.sh…

Bad Network Configuration.  DNS can not resolve the hostname:
java.net.UnknownHostException: instance-00000006: instance-00000006: unknown error

Waiting for domain1 to start ……
Successfully started the domain : domain1
domain  Location: /opt/glassfish4/glassfish/domains/domain1
Log File: /opt/glassfish4/glassfish/domains/domain1/logs/server.log
Admin Port: 4848
Command start-domain executed successfully.
=> Modifying password of admin to random in Glassfish
spawn asadmin --user admin change-admin-password
Enter the admin password>
Enter the new admin password>
Enter the new admin password again>
Command change-admin-password executed successfully.

A fairly hard docker image, built by a "docker expert" such as myself 😉,
gets launched, and the nova-docker instance seems to run
several daemons at a time properly (sshd enabled)
[boris@fedora23wks Downloads]$ ssh root@192.169.142.156

root@192.169.142.156's password:
Last login: Fri Apr  1 15:33:06 2016 from 192.169.142.1
root@instance-00000006:~# ps -ef

UID        PID  PPID  C STIME TTY          TIME CMD
root         1     0  0 14:32 ?        00:00:00 /usr/bin/python3 -u /sbin/my_init
root       100     1  0 14:33 ?        00:00:00 /bin/bash /etc/my_init.d/run.sh
root       103     1  0 14:33 ?        00:00:00 /usr/sbin/sshd
root       170     1  0 14:33 ?        00:00:03 /opt/jdk1.8.0_25/bin/java -Djava.library.path=/op
root       427   100  0 14:33 ?        00:00:02 java -jar /opt/glassfish4/bin/../glassfish/lib/cl
root       444   427  2 14:33 ?        00:01:23 /opt/jdk1.8.0_25/bin/java -cp /opt/glassfish4/gla

root      1078     0  0 15:32 ?        00:00:00 bash
root      1110   103  0 15:33 ?        00:00:00 sshd: root@pts/0
root      1112  1110  0 15:33 pts/0    00:00:00 -bash
root      1123  1112  0 15:33 pts/0    00:00:00 ps -ef

Glassfish is running indeed


Setup Docker Hypervisor on Multi Node DVR Cluster RDO Mitaka

March 31, 2016

UPDATE 04/01/2016

  DVR && Nova-Docker Driver (stable/mitaka) tested fine on RDO Mitaka (build 20160329), with none of the issues described in the link for RDO Liberty. So, create a DVR deployment with Controller/Network + N Compute nodes. Switch to the Docker Hypervisor on each Compute node and make the required updates to glance and the filters file on the Controller. You are all set. Nova-Docker instances' FIPs are available from outside via the Neutron Distributed Router (DNAT) using the "fg" interface (fip-namespace) residing on the same host as the Docker Hypervisor. South-North traffic does not involve VXLAN tunneling on DVR systems.

END UPDATE

Perform a two node cluster deployment: Controller + Network&Compute (ML2&OVS&VXLAN). Another configuration available via packstack is Controller+Storage+Compute&Network.
The deployment schema below starts all four Neutron agents on the Compute node (which is supposed to run the Nova-Docker instances). Thus routing via the VXLAN tunnel is excluded. Nova-Docker instances are routed to the Internet and vice versa via the local neutron router (DNAT/SNAT) residing on the same host where the Docker Hypervisor is running.

For a multi node solution, testing DVR with the Nova-Docker driver is required.

So far this has been tested only on an RDO Liberty DVR system :-
The RDO Liberty DVR cluster switched to Nova-Docker (stable/liberty) successfully. Containers (instances) may be launched on Compute nodes and are available via their fip(s) due to neutron (DNAT) routing via the "fg" interface of the corresponding fip-namespace. Snapshots here

The question will be closed if I can get the same results on RDO Mitaka, which would solve the problem of Multi Node Docker Hypervisor deployment across Compute nodes, not using VXLAN tunnels for South-North traffic, supported by the Metadata, L3, and openvswitch neutron agents, with a unique dhcp agent providing private IPs and residing on the Controller/Network node.
SELINUX should be set to permissive mode after the RDO deployment.

First install repositories for RDO Mitaka (the most recent build passed CI):-
# yum -y install yum-plugin-priorities
# cd /etc/yum.repos.d
# curl -O https://trunk.rdoproject.org/centos7-mitaka/delorean-deps.repo
# curl -O https://trunk.rdoproject.org/centos7-mitaka/current-passed-ci/delorean.repo
# yum -y install openstack-packstack (Controller only)

********************************************

Answer file for RDO Mitaka deployment

********************************************

[general]

CONFIG_SSH_KEY=/root/.ssh/id_rsa.pub

CONFIG_DEFAULT_PASSWORD=

CONFIG_SERVICE_WORKERS=%{::processorcount}

CONFIG_MARIADB_INSTALL=y

CONFIG_GLANCE_INSTALL=y

CONFIG_CINDER_INSTALL=y

CONFIG_MANILA_INSTALL=n

CONFIG_NOVA_INSTALL=y

CONFIG_NEUTRON_INSTALL=y

CONFIG_HORIZON_INSTALL=y

CONFIG_SWIFT_INSTALL=y

CONFIG_CEILOMETER_INSTALL=y

CONFIG_AODH_INSTALL=y

CONFIG_GNOCCHI_INSTALL=y

CONFIG_SAHARA_INSTALL=n

CONFIG_HEAT_INSTALL=n

CONFIG_TROVE_INSTALL=n

CONFIG_IRONIC_INSTALL=n

CONFIG_CLIENT_INSTALL=y

CONFIG_NTP_SERVERS=

CONFIG_NAGIOS_INSTALL=y

EXCLUDE_SERVERS=

CONFIG_DEBUG_MODE=n

CONFIG_CONTROLLER_HOST=192.169.142.127
CONFIG_COMPUTE_HOSTS=192.169.142.137
CONFIG_NETWORK_HOSTS=192.169.142.137

CONFIG_VMWARE_BACKEND=n

CONFIG_UNSUPPORTED=n

CONFIG_USE_SUBNETS=n

CONFIG_VCENTER_HOST=

CONFIG_VCENTER_USER=

CONFIG_VCENTER_PASSWORD=

CONFIG_VCENTER_CLUSTER_NAMES=

CONFIG_STORAGE_HOST=192.169.142.127

CONFIG_SAHARA_HOST=192.169.142.127

CONFIG_USE_EPEL=y

CONFIG_REPO=

CONFIG_ENABLE_RDO_TESTING=n

CONFIG_RH_USER=

CONFIG_SATELLITE_URL=

CONFIG_RH_SAT6_SERVER=

CONFIG_RH_PW=

CONFIG_RH_OPTIONAL=y

CONFIG_RH_PROXY=

CONFIG_RH_SAT6_ORG=

CONFIG_RH_SAT6_KEY=

CONFIG_RH_PROXY_PORT=

CONFIG_RH_PROXY_USER=

CONFIG_RH_PROXY_PW=

CONFIG_SATELLITE_USER=

CONFIG_SATELLITE_PW=

CONFIG_SATELLITE_AKEY=

CONFIG_SATELLITE_CACERT=

CONFIG_SATELLITE_PROFILE=

CONFIG_SATELLITE_FLAGS=

CONFIG_SATELLITE_PROXY=

CONFIG_SATELLITE_PROXY_USER=

CONFIG_SATELLITE_PROXY_PW=

CONFIG_SSL_CACERT_FILE=/etc/pki/tls/certs/selfcert.crt

CONFIG_SSL_CACERT_KEY_FILE=/etc/pki/tls/private/selfkey.key

CONFIG_SSL_CERT_DIR=~/packstackca/

CONFIG_SSL_CACERT_SELFSIGN=y

CONFIG_SELFSIGN_CACERT_SUBJECT_C=–

CONFIG_SELFSIGN_CACERT_SUBJECT_ST=State

CONFIG_SELFSIGN_CACERT_SUBJECT_L=City

CONFIG_SELFSIGN_CACERT_SUBJECT_O=openstack

CONFIG_SELFSIGN_CACERT_SUBJECT_OU=packstack

CONFIG_SELFSIGN_CACERT_SUBJECT_CN=ip-192-169-142-127.ip.secureserver.net

CONFIG_SELFSIGN_CACERT_SUBJECT_MAIL=admin@ip-192-169-142-127.ip.secureserver.net

CONFIG_AMQP_BACKEND=rabbitmq

CONFIG_AMQP_HOST=192.169.142.127

CONFIG_AMQP_ENABLE_SSL=n

CONFIG_AMQP_ENABLE_AUTH=n

CONFIG_AMQP_NSS_CERTDB_PW=PW_PLACEHOLDER

CONFIG_AMQP_AUTH_USER=amqp_user

CONFIG_AMQP_AUTH_PASSWORD=PW_PLACEHOLDER

CONFIG_MARIADB_HOST=192.169.142.127

CONFIG_MARIADB_USER=root

CONFIG_MARIADB_PW=7207ae344ed04957

CONFIG_KEYSTONE_DB_PW=abcae16b785245c3

CONFIG_KEYSTONE_DB_PURGE_ENABLE=True

CONFIG_KEYSTONE_REGION=RegionOne

CONFIG_KEYSTONE_ADMIN_TOKEN=3ad2de159f9649afb0c342ba57e637d9

CONFIG_KEYSTONE_ADMIN_EMAIL=root@localhost

CONFIG_KEYSTONE_ADMIN_USERNAME=admin

CONFIG_KEYSTONE_ADMIN_PW=7049f834927e4468

CONFIG_KEYSTONE_DEMO_PW=bf737b785cfa4398

CONFIG_KEYSTONE_API_VERSION=v2.0

CONFIG_KEYSTONE_TOKEN_FORMAT=UUID

CONFIG_KEYSTONE_SERVICE_NAME=httpd

CONFIG_KEYSTONE_IDENTITY_BACKEND=sql

CONFIG_KEYSTONE_LDAP_URL=ldap://12.0.0.127

CONFIG_KEYSTONE_LDAP_USER_DN=

CONFIG_KEYSTONE_LDAP_USER_PASSWORD=

CONFIG_KEYSTONE_LDAP_SUFFIX=

CONFIG_KEYSTONE_LDAP_QUERY_SCOPE=one

CONFIG_KEYSTONE_LDAP_PAGE_SIZE=-1

CONFIG_KEYSTONE_LDAP_USER_SUBTREE=

CONFIG_KEYSTONE_LDAP_USER_FILTER=

CONFIG_KEYSTONE_LDAP_USER_OBJECTCLASS=

CONFIG_KEYSTONE_LDAP_USER_ID_ATTRIBUTE=

CONFIG_KEYSTONE_LDAP_USER_NAME_ATTRIBUTE=

CONFIG_KEYSTONE_LDAP_USER_MAIL_ATTRIBUTE=

CONFIG_KEYSTONE_LDAP_USER_ENABLED_ATTRIBUTE=

CONFIG_KEYSTONE_LDAP_USER_ENABLED_MASK=-1

CONFIG_KEYSTONE_LDAP_USER_ENABLED_DEFAULT=TRUE

CONFIG_KEYSTONE_LDAP_USER_ENABLED_INVERT=n

CONFIG_KEYSTONE_LDAP_USER_ATTRIBUTE_IGNORE=

CONFIG_KEYSTONE_LDAP_USER_DEFAULT_PROJECT_ID_ATTRIBUTE=

CONFIG_KEYSTONE_LDAP_USER_ALLOW_CREATE=n

CONFIG_KEYSTONE_LDAP_USER_ALLOW_UPDATE=n

CONFIG_KEYSTONE_LDAP_USER_ALLOW_DELETE=n

CONFIG_KEYSTONE_LDAP_USER_PASS_ATTRIBUTE=

CONFIG_KEYSTONE_LDAP_USER_ENABLED_EMULATION_DN=

CONFIG_KEYSTONE_LDAP_USER_ADDITIONAL_ATTRIBUTE_MAPPING=

CONFIG_KEYSTONE_LDAP_GROUP_SUBTREE=

CONFIG_KEYSTONE_LDAP_GROUP_FILTER=

CONFIG_KEYSTONE_LDAP_GROUP_OBJECTCLASS=

CONFIG_KEYSTONE_LDAP_GROUP_ID_ATTRIBUTE=

CONFIG_KEYSTONE_LDAP_GROUP_NAME_ATTRIBUTE=

CONFIG_KEYSTONE_LDAP_GROUP_MEMBER_ATTRIBUTE=

CONFIG_KEYSTONE_LDAP_GROUP_DESC_ATTRIBUTE=

CONFIG_KEYSTONE_LDAP_GROUP_ATTRIBUTE_IGNORE=

CONFIG_KEYSTONE_LDAP_GROUP_ALLOW_CREATE=n

CONFIG_KEYSTONE_LDAP_GROUP_ALLOW_UPDATE=n

CONFIG_KEYSTONE_LDAP_GROUP_ALLOW_DELETE=n

CONFIG_KEYSTONE_LDAP_GROUP_ADDITIONAL_ATTRIBUTE_MAPPING=

CONFIG_KEYSTONE_LDAP_USE_TLS=n

CONFIG_KEYSTONE_LDAP_TLS_CACERTDIR=

CONFIG_KEYSTONE_LDAP_TLS_CACERTFILE=

CONFIG_KEYSTONE_LDAP_TLS_REQ_CERT=demand

CONFIG_GLANCE_DB_PW=41264fc52ffd4fe8

CONFIG_GLANCE_KS_PW=f6a9398960534797

CONFIG_GLANCE_BACKEND=file

CONFIG_CINDER_DB_PW=5ac08c6d09ba4b69

CONFIG_CINDER_DB_PURGE_ENABLE=True

CONFIG_CINDER_KS_PW=c8cb1ecb8c2b4f6f

CONFIG_CINDER_BACKEND=lvm

CONFIG_CINDER_VOLUMES_CREATE=y

CONFIG_CINDER_VOLUMES_SIZE=2G

CONFIG_CINDER_GLUSTER_MOUNTS=

CONFIG_CINDER_NFS_MOUNTS=

CONFIG_CINDER_NETAPP_LOGIN=

CONFIG_CINDER_NETAPP_PASSWORD=

CONFIG_CINDER_NETAPP_HOSTNAME=

CONFIG_CINDER_NETAPP_SERVER_PORT=80

CONFIG_CINDER_NETAPP_STORAGE_FAMILY=ontap_cluster

CONFIG_CINDER_NETAPP_TRANSPORT_TYPE=http

CONFIG_CINDER_NETAPP_STORAGE_PROTOCOL=nfs

CONFIG_CINDER_NETAPP_SIZE_MULTIPLIER=1.0

CONFIG_CINDER_NETAPP_EXPIRY_THRES_MINUTES=720

CONFIG_CINDER_NETAPP_THRES_AVL_SIZE_PERC_START=20

CONFIG_CINDER_NETAPP_THRES_AVL_SIZE_PERC_STOP=60

CONFIG_CINDER_NETAPP_NFS_SHARES=

CONFIG_CINDER_NETAPP_NFS_SHARES_CONFIG=/etc/cinder/shares.conf

CONFIG_CINDER_NETAPP_VOLUME_LIST=

CONFIG_CINDER_NETAPP_VFILER=

CONFIG_CINDER_NETAPP_PARTNER_BACKEND_NAME=

CONFIG_CINDER_NETAPP_VSERVER=

CONFIG_CINDER_NETAPP_CONTROLLER_IPS=

CONFIG_CINDER_NETAPP_SA_PASSWORD=

CONFIG_CINDER_NETAPP_ESERIES_HOST_TYPE=linux_dm_mp

CONFIG_CINDER_NETAPP_WEBSERVICE_PATH=/devmgr/v2

CONFIG_CINDER_NETAPP_STORAGE_POOLS=

CONFIG_IRONIC_DB_PW=PW_PLACEHOLDER

CONFIG_IRONIC_KS_PW=PW_PLACEHOLDER

CONFIG_NOVA_DB_PURGE_ENABLE=True

CONFIG_NOVA_DB_PW=1e1b5aeeeaf342a8

CONFIG_NOVA_KS_PW=d9583177a2444f06

CONFIG_NOVA_SCHED_CPU_ALLOC_RATIO=16.0

CONFIG_NOVA_SCHED_RAM_ALLOC_RATIO=1.5

CONFIG_NOVA_COMPUTE_MIGRATE_PROTOCOL=tcp

CONFIG_NOVA_COMPUTE_MANAGER=nova.compute.manager.ComputeManager

CONFIG_VNC_SSL_CERT=

CONFIG_VNC_SSL_KEY=

CONFIG_NOVA_PCI_ALIAS=

CONFIG_NOVA_PCI_PASSTHROUGH_WHITELIST=

CONFIG_NOVA_COMPUTE_PRIVIF=

CONFIG_NOVA_NETWORK_MANAGER=nova.network.manager.FlatDHCPManager

CONFIG_NOVA_NETWORK_PUBIF=eth0

CONFIG_NOVA_NETWORK_PRIVIF=

CONFIG_NOVA_NETWORK_FIXEDRANGE=192.168.32.0/22

CONFIG_NOVA_NETWORK_FLOATRANGE=10.3.4.0/22

CONFIG_NOVA_NETWORK_AUTOASSIGNFLOATINGIP=n

CONFIG_NOVA_NETWORK_VLAN_START=100

CONFIG_NOVA_NETWORK_NUMBER=1

CONFIG_NOVA_NETWORK_SIZE=255

CONFIG_NEUTRON_KS_PW=808e36e154bd4cee

CONFIG_NEUTRON_DB_PW=0e2b927a21b44737

CONFIG_NEUTRON_L3_EXT_BRIDGE=br-ex

CONFIG_NEUTRON_METADATA_PW=a965cd23ed2f4502

CONFIG_LBAAS_INSTALL=n

CONFIG_NEUTRON_METERING_AGENT_INSTALL=n

CONFIG_NEUTRON_FWAAS=n

CONFIG_NEUTRON_VPNAAS=n

CONFIG_NEUTRON_ML2_TYPE_DRIVERS=vxlan
CONFIG_NEUTRON_ML2_TENANT_NETWORK_TYPES=vxlan
CONFIG_NEUTRON_ML2_MECHANISM_DRIVERS=openvswitch
CONFIG_NEUTRON_ML2_FLAT_NETWORKS=*
CONFIG_NEUTRON_ML2_VLAN_RANGES=
CONFIG_NEUTRON_ML2_TUNNEL_ID_RANGES=1001:2000
CONFIG_NEUTRON_ML2_VXLAN_GROUP=239.1.1.2
CONFIG_NEUTRON_ML2_VNI_RANGES=1001:2000
CONFIG_NEUTRON_L2_AGENT=openvswitch

CONFIG_NEUTRON_ML2_SUPPORTED_PCI_VENDOR_DEVS=['15b3:1004', '8086:10ca']

CONFIG_NEUTRON_ML2_SRIOV_AGENT_REQUIRED=n

CONFIG_NEUTRON_ML2_SRIOV_INTERFACE_MAPPINGS=

CONFIG_NEUTRON_LB_INTERFACE_MAPPINGS=

CONFIG_NEUTRON_OVS_BRIDGE_MAPPINGS=physnet1:br-ex

CONFIG_NEUTRON_OVS_BRIDGE_IFACES=

CONFIG_NEUTRON_OVS_TUNNEL_IF=eth1

CONFIG_NEUTRON_OVS_TUNNEL_SUBNETS=

CONFIG_NEUTRON_OVS_VXLAN_UDP_PORT=4789

CONFIG_MANILA_DB_PW=PW_PLACEHOLDER

CONFIG_MANILA_KS_PW=PW_PLACEHOLDER

CONFIG_MANILA_BACKEND=generic

CONFIG_MANILA_NETAPP_DRV_HANDLES_SHARE_SERVERS=false

CONFIG_MANILA_NETAPP_TRANSPORT_TYPE=https

CONFIG_MANILA_NETAPP_LOGIN=admin

CONFIG_MANILA_NETAPP_PASSWORD=

CONFIG_MANILA_NETAPP_SERVER_HOSTNAME=

CONFIG_MANILA_NETAPP_STORAGE_FAMILY=ontap_cluster

CONFIG_MANILA_NETAPP_SERVER_PORT=443

CONFIG_MANILA_NETAPP_AGGREGATE_NAME_SEARCH_PATTERN=(.*)

CONFIG_MANILA_NETAPP_ROOT_VOLUME_AGGREGATE=

CONFIG_MANILA_NETAPP_ROOT_VOLUME_NAME=root

CONFIG_MANILA_NETAPP_VSERVER=

CONFIG_MANILA_GENERIC_DRV_HANDLES_SHARE_SERVERS=true

CONFIG_MANILA_GENERIC_VOLUME_NAME_TEMPLATE=manila-share-%s

CONFIG_MANILA_GENERIC_SHARE_MOUNT_PATH=/shares

CONFIG_MANILA_SERVICE_IMAGE_LOCATION=https://www.dropbox.com/s/vi5oeh10q1qkckh/ubuntu_1204_nfs_cifs.qcow2

CONFIG_MANILA_SERVICE_INSTANCE_USER=ubuntu

CONFIG_MANILA_SERVICE_INSTANCE_PASSWORD=ubuntu

CONFIG_MANILA_NETWORK_TYPE=neutron

CONFIG_MANILA_NETWORK_STANDALONE_GATEWAY=

CONFIG_MANILA_NETWORK_STANDALONE_NETMASK=

CONFIG_MANILA_NETWORK_STANDALONE_SEG_ID=

CONFIG_MANILA_NETWORK_STANDALONE_IP_RANGE=

CONFIG_MANILA_NETWORK_STANDALONE_IP_VERSION=4

CONFIG_MANILA_GLUSTERFS_SERVERS=

CONFIG_MANILA_GLUSTERFS_NATIVE_PATH_TO_PRIVATE_KEY=

CONFIG_MANILA_GLUSTERFS_VOLUME_PATTERN=

CONFIG_MANILA_GLUSTERFS_TARGET=

CONFIG_MANILA_GLUSTERFS_MOUNT_POINT_BASE=

CONFIG_MANILA_GLUSTERFS_NFS_SERVER_TYPE=gluster

CONFIG_MANILA_GLUSTERFS_PATH_TO_PRIVATE_KEY=

CONFIG_MANILA_GLUSTERFS_GANESHA_SERVER_IP=

CONFIG_HORIZON_SSL=n

CONFIG_HORIZON_SECRET_KEY=33cade531a764c858e4e6c22488f379f

CONFIG_HORIZON_SSL_CERT=

CONFIG_HORIZON_SSL_KEY=

CONFIG_HORIZON_SSL_CACERT=

CONFIG_SWIFT_KS_PW=30911de72a15427e

CONFIG_SWIFT_STORAGES=

CONFIG_SWIFT_STORAGE_ZONES=1

CONFIG_SWIFT_STORAGE_REPLICAS=1

CONFIG_SWIFT_STORAGE_FSTYPE=ext4

CONFIG_SWIFT_HASH=a55607bff10c4210

CONFIG_SWIFT_STORAGE_SIZE=2G

CONFIG_HEAT_DB_PW=PW_PLACEHOLDER

CONFIG_HEAT_AUTH_ENC_KEY=0ef4161f3bb24230

CONFIG_HEAT_KS_PW=PW_PLACEHOLDER

CONFIG_HEAT_CLOUDWATCH_INSTALL=n

CONFIG_HEAT_CFN_INSTALL=n

CONFIG_HEAT_DOMAIN=heat

CONFIG_HEAT_DOMAIN_ADMIN=heat_admin

CONFIG_HEAT_DOMAIN_PASSWORD=PW_PLACEHOLDER

CONFIG_PROVISION_DEMO=n

CONFIG_PROVISION_TEMPEST=n

CONFIG_PROVISION_DEMO_FLOATRANGE=172.24.4.224/28

CONFIG_PROVISION_IMAGE_NAME=cirros

CONFIG_PROVISION_IMAGE_URL=http://download.cirros-cloud.net/0.3.4/cirros-0.3.4-x86_64-disk.img

CONFIG_PROVISION_IMAGE_FORMAT=qcow2

CONFIG_PROVISION_IMAGE_SSH_USER=cirros

CONFIG_TEMPEST_HOST=

CONFIG_PROVISION_TEMPEST_USER=

CONFIG_PROVISION_TEMPEST_USER_PW=PW_PLACEHOLDER

CONFIG_PROVISION_TEMPEST_FLOATRANGE=172.24.4.224/28

CONFIG_PROVISION_TEMPEST_REPO_URI=https://github.com/openstack/tempest.git

CONFIG_PROVISION_TEMPEST_REPO_REVISION=master

CONFIG_RUN_TEMPEST=n

CONFIG_RUN_TEMPEST_TESTS=smoke

CONFIG_PROVISION_OVS_BRIDGE=n

CONFIG_CEILOMETER_SECRET=19ae0e7430174349

CONFIG_CEILOMETER_KS_PW=337b08d4b3a44753

CONFIG_CEILOMETER_SERVICE_NAME=httpd

CONFIG_CEILOMETER_COORDINATION_BACKEND=redis

CONFIG_MONGODB_HOST=192.169.142.127

CONFIG_REDIS_MASTER_HOST=192.169.142.127

CONFIG_REDIS_PORT=6379

CONFIG_REDIS_HA=n

CONFIG_REDIS_SLAVE_HOSTS=

CONFIG_REDIS_SENTINEL_HOSTS=

CONFIG_REDIS_SENTINEL_CONTACT_HOST=

CONFIG_REDIS_SENTINEL_PORT=26379

CONFIG_REDIS_SENTINEL_QUORUM=2

CONFIG_REDIS_MASTER_NAME=mymaster

CONFIG_AODH_KS_PW=acdd500a5fed4700

CONFIG_GNOCCHI_DB_PW=cf11b5d6205f40e7

CONFIG_GNOCCHI_KS_PW=36eba4690b224044

CONFIG_TROVE_DB_PW=PW_PLACEHOLDER

CONFIG_TROVE_KS_PW=PW_PLACEHOLDER

CONFIG_TROVE_NOVA_USER=trove

CONFIG_TROVE_NOVA_TENANT=services

CONFIG_TROVE_NOVA_PW=PW_PLACEHOLDER

CONFIG_SAHARA_DB_PW=PW_PLACEHOLDER

CONFIG_SAHARA_KS_PW=PW_PLACEHOLDER

CONFIG_NAGIOS_PW=02f168ee8edd44e4
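A typical way to produce and apply an answer file like the one above is to generate the defaults and then point the host keys at your nodes. The sketch below is self-contained for illustration: `answer.txt` here is a three-line stub, whereas on a real Controller the file would come from `packstack --gen-answer-file=answer.txt`; the host values are the ones used in this setup.

```shell
# Stub answer file standing in for the output of
# `packstack --gen-answer-file=answer.txt` on the Controller.
cat > answer.txt <<'EOF'
CONFIG_CONTROLLER_HOST=192.168.0.1
CONFIG_COMPUTE_HOSTS=192.168.0.1
CONFIG_NETWORK_HOSTS=192.168.0.1
EOF
# Point the deployment at the Controller and Compute/Network nodes.
sed -i -e 's/^CONFIG_CONTROLLER_HOST=.*/CONFIG_CONTROLLER_HOST=192.169.142.127/' \
       -e 's/^CONFIG_COMPUTE_HOSTS=.*/CONFIG_COMPUTE_HOSTS=192.169.142.137/' \
       -e 's/^CONFIG_NETWORK_HOSTS=.*/CONFIG_NETWORK_HOSTS=192.169.142.137/' answer.txt
grep '_HOST' answer.txt
# On the real node the edited file is then applied with:
#   packstack --answer-file=answer.txt
```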

**********************************************************************

Upon completion, connect to the external network on the Compute Node:

**********************************************************************

[root@ip-192-169-142-137 network-scripts(keystone_admin)]# cat ifcfg-br-ex

DEVICE="br-ex"
BOOTPROTO="static"
IPADDR="172.124.4.137"
NETMASK="255.255.255.0"
DNS1="83.221.202.254"
BROADCAST="172.124.4.255"
GATEWAY="172.124.4.1"
NM_CONTROLLED="no"
TYPE="OVSIntPort"
OVS_BRIDGE=br-ex
DEVICETYPE="ovs"
DEFROUTE="yes"
IPV4_FAILURE_FATAL="yes"
IPV6INIT=no

[root@ip-192-169-142-137 network-scripts(keystone_admin)]# cat ifcfg-eth2

DEVICE="eth2"
# HWADDR=00:22:15:63:E4:E2
ONBOOT="yes"
TYPE="OVSPort"
DEVICETYPE="ovs"
OVS_BRIDGE=br-ex
NM_CONTROLLED=no
IPV6INIT=no

[root@ip-192-169-142-137 network-scripts(keystone_admin)]# cat start.sh

#!/bin/bash -x
chkconfig network on
systemctl stop NetworkManager
systemctl disable NetworkManager
service network restart

**********************************************
Verification Compute node status
**********************************************

[root@ip-192-169-142-137 ~(keystone_admin)]# openstack-status

== Nova services ==
openstack-nova-api:                     inactive  (disabled on boot)
openstack-nova-compute:                 active
openstack-nova-network:                 inactive  (disabled on boot)
openstack-nova-scheduler:               inactive  (disabled on boot)
== neutron services ==
neutron-server:                         inactive  (disabled on boot)
neutron-dhcp-agent:                     active
neutron-l3-agent:                       active
neutron-metadata-agent:                 active
neutron-openvswitch-agent:              active

== Ceilometer services ==
openstack-ceilometer-api:               inactive  (disabled on boot)
openstack-ceilometer-central:           inactive  (disabled on boot)
openstack-ceilometer-compute:           active
openstack-ceilometer-collector:         inactive  (disabled on boot)
== Support services ==
openvswitch:                            active
dbus:                                   active
Warning novarc not sourced

[root@ip-192-169-142-137 ~(keystone_admin)]# nova-manage version
13.0.0-0.20160329105656.7662fb9.el7.centos

Also install python-openstackclient on the Compute node.

******************************************
Verification status on Controller
******************************************

[root@ip-192-169-142-127 ~(keystone_admin)]# openstack-status

== Nova services ==
openstack-nova-api:                     active
openstack-nova-compute:                 inactive  (disabled on boot)
openstack-nova-network:                 inactive  (disabled on boot)
openstack-nova-scheduler:               active
openstack-nova-cert:                    active
openstack-nova-conductor:               active
openstack-nova-console:                 inactive  (disabled on boot)
openstack-nova-consoleauth:             active
openstack-nova-xvpvncproxy:             inactive  (disabled on boot)
== Glance services ==
openstack-glance-api:                   active
openstack-glance-registry:              active
== Keystone service ==
openstack-keystone:                     inactive  (disabled on boot)
== Horizon service ==
openstack-dashboard:                    active
== neutron services ==
neutron-server:                         active
neutron-dhcp-agent:                     inactive  (disabled on boot)
neutron-l3-agent:                       inactive  (disabled on boot)
neutron-metadata-agent:                 inactive  (disabled on boot)
== Swift services ==
openstack-swift-proxy:                  active
openstack-swift-account:                active
openstack-swift-container:              active
openstack-swift-object:                 active
== Cinder services ==
openstack-cinder-api:                   active
openstack-cinder-scheduler:             active
openstack-cinder-volume:                active
openstack-cinder-backup:                active
== Ceilometer services ==
openstack-ceilometer-api:               inactive  (disabled on boot)
openstack-ceilometer-central:           active
openstack-ceilometer-compute:           inactive  (disabled on boot)
openstack-ceilometer-collector:         active
openstack-ceilometer-notification:      active
== Support services ==
mysqld:                                 inactive  (disabled on boot)
dbus:                                   active
target:                                 active
rabbitmq-server:                        active
memcached:                              active

== Keystone users ==

+----------------------------------+------------+---------+----------------------+
|                id                |    name    | enabled |        email         |
+----------------------------------+------------+---------+----------------------+
| f7dbea6e5b704c7d8e77e88c1ce1fce8 |   admin    |   True  |    root@localhost    |
| baf4ee3fe0e749f982747ffe68e0e562 |    aodh    |   True  |    aodh@localhost    |
| 770d5c0974fb49998440b1080e5939a0 |   boris    |   True  |                      |
| f88d8e83df0f43a991cb7ff063a2439f | ceilometer |   True  | ceilometer@localhost |
| e7a92f59f081403abd9c0f92c4f8d8d0 |   cinder   |   True  |   cinder@localhost   |
| 58e531b5eba74db2b4559aaa16561900 |   glance   |   True  |   glance@localhost   |
| d215d99466aa481f847df2a909c139f7 |  gnocchi   |   True  |  gnocchi@localhost   |
| 5d3433f7d54d40d8b9eeb576582cc672 |  neutron   |   True  |  neutron@localhost   |
| 3a50997aa6fc4c129dff624ed9745b94 |    nova    |   True  |    nova@localhost    |
| ef1a323f98cb43c789e4f84860afea35 |   swift    |   True  |   swift@localhost    |
+----------------------------------+------------+---------+----------------------+

== Glance images ==

+--------------------------------------+--------------------------+
| ID                                   | Name                     |
+--------------------------------------+--------------------------+
| cbf88266-0b49-4bc2-9527-cc9c9da0c1eb | derby/docker-glassfish41 |
| 5d0a97c3-c717-46ac-a30f-86208ea0d31d | larsks/thttpd            |
| 80eb0d7d-17ae-49c7-997f-38d8a3aeeabd | rastasheep/ubuntu-sshd   |
+--------------------------------------+--------------------------+

== Nova managed services ==

+----+------------------+----------------------------------------+----------+---------+-------+----------------------------+-----------------+
| Id | Binary           | Host                                   | Zone     | Status  | State | Updated_at                 | Disabled Reason |
+----+------------------+----------------------------------------+----------+---------+-------+----------------------------+-----------------+
| 5  | nova-cert        | ip-192-169-142-127.ip.secureserver.net | internal | enabled | up    | 2016-03-31T09:59:53.000000 | -               |
| 6  | nova-consoleauth | ip-192-169-142-127.ip.secureserver.net | internal | enabled | up    | 2016-03-31T09:59:52.000000 | -               |
| 7  | nova-scheduler   | ip-192-169-142-127.ip.secureserver.net | internal | enabled | up    | 2016-03-31T09:59:52.000000 | -               |
| 8  | nova-conductor   | ip-192-169-142-127.ip.secureserver.net | internal | enabled | up    | 2016-03-31T09:59:54.000000 | -               |
| 10 | nova-compute     | ip-192-169-142-137.ip.secureserver.net | nova     | enabled | up    | 2016-03-31T09:59:55.000000 | -               |
+----+------------------+----------------------------------------+----------+---------+-------+----------------------------+-----------------+

== Nova networks ==

+--------------------------------------+--------------+------+
| ID                                   | Label        | Cidr |
+--------------------------------------+--------------+------+
| 47798c88-29e5-4dee-8206-d0f9b7e19130 | public       | -    |
| 8f849505-0550-4f6c-8c73-6b8c9ec56789 | private      | -    |
| bcfcf3c3-c651-4ae7-b7ee-fdafae04a2a9 | demo_network | -    |
+--------------------------------------+--------------+------+

== Nova instance flavors ==

+----+-----------+-----------+------+-----------+------+-------+-------------+-----------+
| ID | Name      | Memory_MB | Disk | Ephemeral | Swap | VCPUs | RXTX_Factor | Is_Public |
+----+-----------+-----------+------+-----------+------+-------+-------------+-----------+
| 1  | m1.tiny   | 512       | 1    | 0         |      | 1     | 1.0         | True      |
| 2  | m1.small  | 2048      | 20   | 0         |      | 1     | 1.0         | True      |
| 3  | m1.medium | 4096      | 40   | 0         |      | 2     | 1.0         | True      |
| 4  | m1.large  | 8192      | 80   | 0         |      | 4     | 1.0         | True      |
| 5  | m1.xlarge | 16384     | 160  | 0         |      | 8     | 1.0         | True      |
+----+-----------+-----------+------+-----------+------+-------+-------------+-----------+

== Nova instances ==

+--------------------------------------+------------------+----------------------------------+--------+------------+-------------+---------------------------------------+
| ID                                   | Name             | Tenant ID                        | Status | Task State | Power State | Networks                              |
+--------------------------------------+------------------+----------------------------------+--------+------------+-------------+---------------------------------------+
| c8284258-f9c0-4b81-8cd0-db6e7cbf8d48 | UbuntuRastasheep | 32df2fd0c85745c9901b2247ec4905bc | ACTIVE | -          | Running     | demo_network=90.0.0.15, 172.124.4.154 |
| 50f22f8a-e6ff-4b8b-8c15-f3b9bbd1aad2 | derbyGlassfish   | 32df2fd0c85745c9901b2247ec4905bc | ACTIVE | -          | Running     | demo_network=90.0.0.16, 172.124.4.155 |
| 03664d5e-f3c5-4ebb-9109-e96189150626 | testLars         | 32df2fd0c85745c9901b2247ec4905bc | ACTIVE | -          | Running     | demo_network=90.0.0.14, 172.124.4.153 |
+--------------------------------------+------------------+----------------------------------+--------+------------+-------------+---------------------------------------+

*********************************
Nova-Docker Setup on Compute
*********************************

# curl -sSL https://get.docker.com/ | sh
# usermod -aG docker nova      ( alone this did not set 660 on docker.sock, hence the chmod below )
# systemctl start docker
# systemctl enable docker
# chmod 666  /var/run/docker.sock (add to /etc/rc.d/rc.local)
# easy_install pip
# git clone -b stable/mitaka   https://github.com/openstack/nova-docker
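Since the `chmod 666 /var/run/docker.sock` workaround above does not survive a reboot, it is added to rc.local. A minimal idempotent sketch of that step follows; `/tmp/rc.local` is a stand-in here for `/etc/rc.d/rc.local` on a real Compute node.

```shell
# Persist the docker.sock permission workaround across reboots.
RC=/tmp/rc.local            # stand-in for /etc/rc.d/rc.local
touch "$RC"
# Append the chmod only if it is not already present (idempotent).
grep -q 'chmod 666 /var/run/docker.sock' "$RC" || \
    echo 'chmod 666 /var/run/docker.sock' >> "$RC"
# rc.local must be executable to run at boot on CentOS 7.
chmod +x "$RC"
```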

*******************
Driver build
*******************

# cd nova-docker
# pip install -r requirements.txt
# python setup.py install

********************************************
Switch nova-compute to DockerDriver
********************************************
vi /etc/nova/nova.conf
compute_driver=novadocker.virt.docker.DockerDriver
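The same driver switch can be done non-interactively instead of with vi. This is a self-contained sketch: `/tmp/nova.conf` stands in for `/etc/nova/nova.conf` on the Compute node, and the stub assumes the stock libvirt driver as the starting value.

```shell
# Stub nova.conf standing in for /etc/nova/nova.conf.
cat > /tmp/nova.conf <<'EOF'
[DEFAULT]
compute_driver=libvirt.LibvirtDriver
EOF
# Switch nova-compute to the nova-docker driver.
sed -i 's|^compute_driver=.*|compute_driver=novadocker.virt.docker.DockerDriver|' /tmp/nova.conf
grep '^compute_driver' /tmp/nova.conf
```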

***********************************
Next one on Controller
***********************************

mkdir /etc/nova/rootwrap.d
vi /etc/nova/rootwrap.d/docker.filters

[Filters]
# nova/virt/docker/driver.py: 'ln', '-sf', '/var/run/netns/.*'
ln: CommandFilter, /bin/ln, root

****************************************************
Nova Compute Service restart on Compute
****************************************************

# systemctl restart openstack-nova-compute

****************************************
Glance API Service restart on Controller
****************************************

vi /etc/glance/glance-api.conf
container_formats=ami,ari,aki,bare,ovf,ova,docker
# systemctl restart openstack-glance-api
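The container_formats edit above can also be scripted. A self-contained sketch, assuming the stock format list without "docker" as the starting value; `/tmp/glance-api.conf` stands in for `/etc/glance/glance-api.conf`:

```shell
# Stub glance-api.conf standing in for /etc/glance/glance-api.conf.
cat > /tmp/glance-api.conf <<'EOF'
container_formats=ami,ari,aki,bare,ovf,ova
EOF
# Append "docker" to the existing format list (& is the matched text).
sed -i 's/^container_formats=.*/&,docker/' /tmp/glance-api.conf
grep '^container_formats' /tmp/glance-api.conf
```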

Build the GlassFish 4.1 docker image on the Compute node per
http://bderzhavets.blogspot.com/2015/01/hacking-dockers-phusionbaseimage-to.html and upload it to Glance:

[root@ip-192-169-142-137 ~(keystone_admin)]# docker images

REPOSITORY                 TAG                 IMAGE ID            CREATED             SIZE
derby/docker-glassfish41   latest              615ce2c6a21f        29 minutes ago      1.155 GB
rastasheep/ubuntu-sshd     latest              70e0ac74c691        32 hours ago        251.6 MB
phusion/baseimage          latest              772dd063a060        3 months ago        305.1 MB
larsks/thttpd              latest              a31ab5050b67        15 months ago       1.058 MB

[root@ip-192-169-142-137 ~(keystone_admin)]# docker save derby/docker-glassfish41 | openstack image create derby/docker-glassfish41 --public --container-format docker --disk-format raw

+------------------+------------------------------------------------------+
| Field            | Value                                                |
+------------------+------------------------------------------------------+
| checksum         | dca755d516e35d947ae87ff8bef8fa8f                     |
| container_format | docker                                               |
| created_at       | 2016-03-31T09:32:53Z                                 |
| disk_format      | raw                                                  |
| file             | /v2/images/cbf88266-0b49-4bc2-9527-cc9c9da0c1eb/file |
| id               | cbf88266-0b49-4bc2-9527-cc9c9da0c1eb                 |
| min_disk         | 0                                                    |
| min_ram          | 0                                                    |
| name             | derby/docker-glassfish41                             |
| owner            | 677c4fec97d14b8db0639086f5d59f7d                     |
| protected        | False                                                |
| schema           | /v2/schemas/image                                    |
| size             | 1175030784                                           |
| status           | active                                               |
| tags             |                                                      |
| updated_at       | 2016-03-31T09:33:58Z                                 |
| virtual_size     | None                                                 |
| visibility       | public                                               |
+------------------+------------------------------------------------------+

Now launch the derbyGlassfish instance via the dashboard and assign a floating IP.

Access the GlassFish instance via FIP 172.124.4.155:

[root@ip-192-169-142-137 ~(keystone_admin)]# docker ps

CONTAINER ID        IMAGE                      COMMAND                  CREATED             STATUS              PORTS               NAMES
70ac259e9176        derby/docker-glassfish41   "/sbin/my_init"          3 minutes ago       Up 3 minutes                            nova-50f22f8a-e6ff-4b8b-8c15-f3b9bbd1aad2
a0826911eabe        rastasheep/ubuntu-sshd     "/usr/sbin/sshd -D"      About an hour ago   Up About an hour                        nova-c8284258-f9c0-4b81-8cd0-db6e7cbf8d48
7923487076d5        larsks/thttpd              "/thttpd -D -l /dev/s"   About an hour ago   Up About an hour                        nova-03664d5e-f3c5-4ebb-9109-e96189150626
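Note how nova-docker names each container "nova-" followed by the Nova instance UUID, so containers in the `docker ps` output above map directly onto rows of `nova list`. A small sketch of recovering the instance ID from a container name (the sample name is taken from the output above):

```shell
# nova-docker container names embed the Nova instance UUID.
name="nova-50f22f8a-e6ff-4b8b-8c15-f3b9bbd1aad2"   # sample from `docker ps`
# Strip the "nova-" prefix to get the instance UUID back.
uuid="${name#nova-}"
echo "$uuid"
# The UUID can then be fed to e.g. `nova show "$uuid"` on the Controller.
```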


Storage Node (LVMiSCSI) deployment for RDO Kilo on CentOS 7.2

January 4, 2016

The RDO deployment below was done via a straightforward RDO Kilo packstack run. It demonstrates that the Storage Node can work as a traditional iSCSI Target Server, with each Compute Node acting as an iSCSI initiator client. This functionality is provided by tuning the Cinder and Glance services running on the Storage Node.
The setup below is a 3-node deployment test (Controller/Network, Compute, and Storage) on RDO Kilo (CentOS 7.2), performed on a Fedora 23 host with a KVM/Libvirt hypervisor (32 GB RAM, Intel Core i7-4790 Haswell CPU, ASUS Z97-P). Three VMs (4 GB RAM, 4 VCPUs each) were set up: the Controller/Network VM with two VNICs (external/management subnet and VTEP subnet), the Compute Node VM with two VNICs (management and VTEP subnets), and the Storage Node VM with one VNIC (management).

Setup :-

192.169.142.127 – Controller/Network Node
192.169.142.137 – Compute Node
192.169.142.157 – Storage Node (LVMiSCSI)

Deployment could be done via answer-file from https://www.linux.com/community/blogs/133-general-linux/864102-storage-node-lvmiscsi-deployment-for-rdo-liberty-on-centos-71

Notice that the Glance, Cinder, and Swift services are not running on the Controller. A connection to http://StorageNode-IP:8776/v1/xxxxxx/types will succeed as soon as the dependencies introduced by https://review.openstack.org/192883 are satisfied on the Storage Node. Otherwise it can only be done via a second run of the RDO Kilo installer, with this port (the cinder-api port) already responding on the Controller, which had previously been set up as the first storage node. Thanks to Javier Pena, who did this troubleshooting in https://bugzilla.redhat.com/show_bug.cgi?id=1234038. The issue has been fixed in the RDO Liberty release.

 

Controller Node

[screenshot: SantiagoController1]

Storage Node

[screenshots: SantiagoStorage1, SantiagoStorage2, SantiagoStorage3]

Compute Node

[screenshot: SantiagoCompute1]

[root@ip-192-169-142-137 ~(keystone_admin)]# iscsiadm -m session -P 3
iSCSI Transport Class version 2.0-870
version 6.2.0.873-30
Target: iqn.2010-10.org.openstack:volume-3ab60233-5110-4915-9998-7cec7d3ac919 (non-flash)
Current Portal: 192.169.142.157:3260,1
Persistent Portal: 192.169.142.157:3260,1
**********
Interface:
**********
Iface Name: default
Iface Transport: tcp
Iface Initiatorname: iqn.1994-05.com.redhat:576fc73f30e9
Iface IPaddress: 192.169.142.137
Iface HWaddress: <empty>
Iface Netdev: <empty>
SID: 1
iSCSI Connection State: LOGGED IN
iSCSI Session State: LOGGED_IN
Internal iscsid Session State: NO CHANGE
*********
Timeouts:
*********
Recovery Timeout: 120
Target Reset Timeout: 30
LUN Reset Timeout: 30
Abort Timeout: 15
*****
CHAP:
*****
username: hBbbvVmompAY6ikd8DJF
password: ********
username_in: <empty>
password_in: ********
************************
Negotiated iSCSI params:
************************
HeaderDigest: None
DataDigest: None
MaxRecvDataSegmentLength: 262144
MaxXmitDataSegmentLength: 262144
FirstBurstLength: 65536
MaxBurstLength: 262144
ImmediateData: Yes
InitialR2T: Yes
MaxOutstandingR2T: 1
************************
Attached SCSI devices:
************************
Host Number: 2 State: running
scsi2 Channel 00 Id 0 Lun: 0
Attached scsi disk sda State: running
Target: iqn.2010-10.org.openstack:volume-2087aa9a-7984-4f4e-b00d-e461fcd02099 (non-flash)
Current Portal: 192.169.142.157:3260,1
Persistent Portal: 192.169.142.157:3260,1
**********
Interface:
**********
Iface Name: default
Iface Transport: tcp
Iface Initiatorname: iqn.1994-05.com.redhat:576fc73f30e9
Iface IPaddress: 192.169.142.137
Iface HWaddress: <empty>
Iface Netdev: <empty>
SID: 2
iSCSI Connection State: LOGGED IN
iSCSI Session State: LOGGED_IN
Internal iscsid Session State: NO CHANGE
*********
Timeouts:
*********
Recovery Timeout: 120
Target Reset Timeout: 30
LUN Reset Timeout: 30
Abort Timeout: 15
*****
CHAP:
*****
username: TB8qiKbMdrWwoLBPdCTs
password: ********
username_in: <empty>
password_in: ********
************************
Negotiated iSCSI params:
************************
HeaderDigest: None
DataDigest: None
MaxRecvDataSegmentLength: 262144
MaxXmitDataSegmentLength: 262144
FirstBurstLength: 65536
MaxBurstLength: 262144
ImmediateData: Yes
InitialR2T: Yes
MaxOutstandingR2T: 1
************************
Attached SCSI devices:
************************
Host Number: 3 State: running
scsi3 Channel 00 Id 0 Lun: 0
Attached scsi disk sdb State: running


Attempt to set up HAProxy/Keepalived 3 Node Controller on RDO Liberty per Javier Pena

November 18, 2015

URGENT UPDATE 11/18/2015
Please, view https://github.com/beekhof/osp-ha-deploy/commit/b2e01e86ca93cfad9ad01d533b386b4c9607c60d
It looks like a work in progress.
See also https://www.redhat.com/archives/rdo-list/2015-November/msg00168.html
END UPDATE

Actually, the setup below closely follows https://github.com/beekhof/osp-ha-deploy/blob/master/HA-keepalived.md

To the best of my knowledge, Cisco's schema has been implemented: Keepalived, HAProxy, and Galera for MySQL, manually installed, with at least 3 controller nodes. I have just highlighted several steps which I believe allowed me to bring this work to success. Javier uses a flat external network provider for the controller cluster, disabling NetworkManager and enabling the network service from the very start. There is one step which I was unable to skip: disabling the IPs on the eth0 interfaces and restarting the network service right before running `ovs-vsctl add-port br-eth0 eth0`, per the Neutron build instructions of the mentioned howto, which seems to be one of the best I have ever seen.

I guess that due to this sequence of steps, even on a three-node controller cluster that has already been built and appears to run fine, the external network is still pingable:

However, had I disabled the eth0 IPs from the start, I would have lost connectivity right away when switching from NetworkManager to the network service. In general, the external network is supposed to be pingable from the qrouter namespace, thanks to the Neutron router's DNAT/SNAT iptables forwarding, but not from the Controller. I am also aware that when an Ethernet interface becomes an OVS port of an OVS bridge, its IP is supposed to be suppressed. When an external network provider is not used, br-ex gets an available IP on the external network; using an external network provider changes the situation. Details may be seen here:

https://www.linux.com/community/blogs/133-general-linux/858156-multiple-external-networks-with-a-single-l3-agent-testing-on-rdo-liberty-per-lars-kellogg-stedman

[root@hacontroller1 ~(keystone_admin)]# systemctl status NetworkManager
NetworkManager.service - Network Manager
Loaded: loaded (/usr/lib/systemd/system/NetworkManager.service; disabled)
Active: inactive (dead)

[root@hacontroller1 ~(keystone_admin)]# systemctl status network
network.service - LSB: Bring up/down networking
Loaded: loaded (/etc/rc.d/init.d/network)
Active: active (exited) since Wed 2015-11-18 08:36:53 MSK; 2h 10min ago
Process: 708 ExecStart=/etc/rc.d/init.d/network start (code=exited, status=0/SUCCESS)

Nov 18 08:36:47 hacontroller1.example.com network[708]: Bringing up loopback interface:  [  OK  ]
Nov 18 08:36:51 hacontroller1.example.com network[708]: Bringing up interface eth0:  [  OK  ]
Nov 18 08:36:53 hacontroller1.example.com network[708]: Bringing up interface eth1:  [  OK  ]
Nov 18 08:36:53 hacontroller1.example.com systemd[1]: Started LSB: Bring up/down networking.

[root@hacontroller1 ~(keystone_admin)]# ifconfig
eth0: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1500
inet6 fe80::5054:ff:fe6d:926a  prefixlen 64  scopeid 0x20<link>
ether 52:54:00:6d:92:6a  txqueuelen 1000  (Ethernet)
RX packets 5036  bytes 730778 (713.6 KiB)
RX errors 0  dropped 12  overruns 0  frame 0
TX packets 15715  bytes 930045 (908.2 KiB)
TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0

eth1: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1500
inet 192.169.142.221  netmask 255.255.255.0  broadcast 192.169.142.255
inet6 fe80::5054:ff:fe5e:9644  prefixlen 64  scopeid 0x20<link>
ether 52:54:00:5e:96:44  txqueuelen 1000  (Ethernet)
RX packets 1828396  bytes 283908183 (270.7 MiB)
RX errors 0  dropped 13  overruns 0  frame 0
TX packets 1839312  bytes 282429736 (269.3 MiB)
TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0

lo: flags=73<UP,LOOPBACK,RUNNING>  mtu 65536
inet 127.0.0.1  netmask 255.0.0.0
inet6 ::1  prefixlen 128  scopeid 0x10<host>
loop  txqueuelen 0  (Local Loopback)
RX packets 869067  bytes 69567890 (66.3 MiB)
RX errors 0  dropped 0  overruns 0  frame 0
TX packets 869067  bytes 69567890 (66.3 MiB)
TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0

[root@hacontroller1 ~(keystone_admin)]# ping -c 3  10.10.10.1
PING 10.10.10.1 (10.10.10.1) 56(84) bytes of data.
64 bytes from 10.10.10.1: icmp_seq=1 ttl=64 time=2.04 ms
64 bytes from 10.10.10.1: icmp_seq=2 ttl=64 time=0.103 ms
64 bytes from 10.10.10.1: icmp_seq=3 ttl=64 time=0.118 ms

--- 10.10.10.1 ping statistics ---

3 packets transmitted, 3 received, 0% packet loss, time 2001ms

rtt min/avg/max/mdev = 0.103/0.754/2.043/0.911 ms

 

Both the management and external networks are emulated by corresponding Libvirt networks
on the F23 virtualization server. Four VMs were set up in total: three for the Controller nodes and one for Compute (each with 4 VCPUS, 4 GB RAM)

[root@fedora23wks ~]# cat openstackvms.xml   # for the eth1's

<network>
<name>openstackvms</name>
<uuid>d0e9964a-f91a-40c0-b769-a609aee41bf2</uuid>
<forward mode='nat'>
<nat>
<port start='1024' end='65535'/>
</nat>
</forward>
<bridge name='virbr1' stp='on' delay='0' />
<mac address='52:54:00:60:f8:6d'/>
<ip address='192.169.142.1' netmask='255.255.255.0'>
<dhcp>
<range start='192.169.142.2' end='192.169.142.254' />
</dhcp>
</ip>
</network>

[root@fedora23wks ~]# cat public.xml   # for the external provider network

<network>
<name>public</name>
<uuid>d0e9965b-f92c-40c1-b749-b609aed42cf2</uuid>
<forward mode='nat'>
<nat>
<port start='1024' end='65535'/>
</nat>
</forward>
<bridge name='virbr2' stp='on' delay='0' />
<mac address='52:54:00:60:f8:6d'/>
<ip address='10.10.10.1' netmask='255.255.255.0'>
<dhcp>
<range start='10.10.10.2' end='10.10.10.254' />
</dhcp>
</ip>
</network>

Only one file differs slightly on the Controller nodes: l3_agent.ini

[root@hacontroller1 neutron(keystone_demo)]# cat l3_agent.ini | grep -v ^# | grep -v ^$

[DEFAULT]
interface_driver = neutron.agent.linux.interface.OVSInterfaceDriver
handle_internal_only_routers = True
send_arp_for_ha = 3
metadata_ip = controller-vip.example.com
external_network_bridge =
gateway_external_network_id =
[AGENT]

*************************************************************************************
As noted in the UPDATE posted at the top of this blog entry, a complete solution
has since been provided by
https://github.com/beekhof/osp-ha-deploy/commit/b2e01e86ca93cfad9ad01d533b386b4c9607c60d
The commit was made on 11/14/2015, right after a discussion on the RDO mailing list.
*************************************************************************************

One more step I performed (I am not sure it really has
to be done at this point): the IPs on the eth0 interfaces
were removed just before running `ovs-vsctl add-port br-eth0 eth0`:-

1. Update the ifcfg-eth0 files on all Controllers
2. Run `service network restart` on all Controllers
3. Run `ovs-vsctl add-port br-eth0 eth0` on all Controllers
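For step 1, the edits can be sketched as the following hypothetical ifcfg pair (standard initscripts/OVS keywords; the exact values are assumptions to adapt per node). eth0 keeps no IP and is enslaved to br-eth0, which also stays IP-less here since the VIP lives on eth1:

```shell
# /etc/sysconfig/network-scripts/ifcfg-eth0 -- IP removed, device becomes an OVS port
DEVICE=eth0
ONBOOT=yes
TYPE=OVSPort
DEVICETYPE=ovs
OVS_BRIDGE=br-eth0
NM_CONTROLLED=no

# /etc/sysconfig/network-scripts/ifcfg-br-eth0 -- the bridge itself, no IP assigned
DEVICE=br-eth0
ONBOOT=yes
TYPE=OVSBridge
DEVICETYPE=ovs
NM_CONTROLLED=no
```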

*****************************************************************************************
Targeting just a POC (floating IPs accessible from the Fedora 23 virtualization host) resulted in a minimal Controller cluster setup:-
*****************************************************************************************

I installed only

Keystone
Glance
Neutron
Nova
Horizon

**************************
UPDATE to official docs
**************************
[root@hacontroller1 ~(keystone_admin)]# cat   keystonerc_admin
export OS_USERNAME=admin
export OS_TENANT_NAME=admin
export OS_PROJECT_NAME=admin
export OS_REGION_NAME=regionOne
export OS_PASSWORD=keystonetest
export OS_AUTH_URL=http://controller-vip.example.com:35357/v2.0/
export OS_SERVICE_ENDPOINT=http://controller-vip.example.com:35357/v2.0
export OS_SERVICE_TOKEN=2fbe298b385e132da335
export PS1='[\u@\h \W(keystone_admin)]\$ '

Because Galera synchronous multi-master replication runs between the Controllers, commands like:-

# su keystone -s /bin/sh -c "keystone-manage db_sync"
# su glance -s /bin/sh -c "glance-manage db_sync"
# neutron-db-manage --config-file /etc/neutron/neutron.conf --config-file /etc/neutron/plugin.ini upgrade head
# su nova -s /bin/sh -c "nova-manage db sync"

are supposed to be run just once, from Controller node 1 for instance.
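Since the syncs must not run concurrently on several Galera members, a simple guard can keep them on one designated node. The sketch below is a dry run (it only echoes the command; the hacontroller1 node name is an assumption taken from this setup):

```shell
#!/bin/sh
# Run schema syncs on one designated Galera member only; other nodes skip.
NODE=$(hostname -s 2>/dev/null || echo hacontroller1)
if [ "$NODE" = "hacontroller1" ]; then
  MSG="would run: su keystone -s /bin/sh -c 'keystone-manage db_sync'"
else
  MSG="skipping db_sync on $NODE (designated node is hacontroller1)"
fi
echo "$MSG"
```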

************************
Compute Node setup:-
*************************


**********************
On all nodes
**********************

[root@hacontroller1 neutron(keystone_demo)]# cat /etc/hosts
192.169.142.220 controller-vip.example.com controller-vip
192.169.142.221 hacontroller1.example.com hacontroller1
192.169.142.222 hacontroller2.example.com hacontroller2
192.169.142.223 hacontroller3.example.com hacontroller3
192.169.142.224 compute.example.com compute
127.0.0.1   localhost localhost.localdomain localhost4 localhost4.localdomain4
::1         localhost localhost.localdomain localhost6 localhost6.localdomain6

[root@hacontroller1 ~(keystone_admin)]# cat /etc/neutron/neutron.conf | grep -v ^$| grep -v ^#

[DEFAULT]
bind_host = 192.169.142.22(X)   # X = 1, 2 or 3, depending on the controller
auth_strategy = keystone
notification_driver = neutron.openstack.common.notifier.rpc_notifier
notify_nova_on_port_status_changes = True
notify_nova_on_port_data_changes = True
core_plugin = neutron.plugins.ml2.plugin.Ml2Plugin
service_plugins = router,lbaas
router_scheduler_driver = neutron.scheduler.l3_agent_scheduler.ChanceScheduler
dhcp_agents_per_network = 2
api_workers = 2
rpc_workers = 2
l3_ha = True
min_l3_agents_per_router = 2
max_l3_agents_per_router = 2

[matchmaker_redis]
[matchmaker_ring]
[quotas]
[agent]
[keystone_authtoken]
auth_uri = http://controller-vip.example.com:5000/
identity_uri = http://127.0.0.1:5000
admin_tenant_name = %SERVICE_TENANT_NAME%
admin_user = %SERVICE_USER%
admin_password = %SERVICE_PASSWORD%
auth_plugin = password
auth_url = http://controller-vip.example.com:35357/
username = neutron
password = neutrontest
project_name = services
[database]
connection = mysql://neutron:neutrontest@controller-vip.example.com:3306/neutron
max_retries = -1
[nova]
nova_region_name = regionOne
project_domain_id = default
project_name = services
user_domain_id = default
password = novatest
username = compute
auth_url = http://controller-vip.example.com:35357/
auth_plugin = password
[oslo_concurrency]
[oslo_policy]
[oslo_messaging_amqp]
[oslo_messaging_qpid]
[oslo_messaging_rabbit]
rabbit_hosts = hacontroller1,hacontroller2,hacontroller3
rabbit_ha_queues = true
[qos]

[root@hacontroller1 haproxy(keystone_demo)]# cat haproxy.cfg
global
daemon
stats socket /var/lib/haproxy/stats
defaults
mode tcp
maxconn 10000
timeout connect 5s
timeout client 30s
timeout server 30s
listen monitor
bind 192.169.142.220:9300
mode http
monitor-uri /status
stats enable
stats uri /admin
stats realm Haproxy\ Statistics
stats auth root:redhat
stats refresh 5s
frontend vip-db
bind 192.169.142.220:3306
timeout client 90m
default_backend db-vms-galera
backend db-vms-galera
option httpchk
stick-table type ip size 1000
stick on dst
timeout server 90m
server rhos8-node1 192.169.142.221:3306 check inter 1s port 9200 backup on-marked-down shutdown-sessions
server rhos8-node2 192.169.142.222:3306 check inter 1s port 9200 backup on-marked-down shutdown-sessions
server rhos8-node3 192.169.142.223:3306 check inter 1s port 9200 backup on-marked-down shutdown-sessions
# Note the RabbitMQ entry is only needed for CloudForms compatibility
# and should be removed in the future
frontend vip-rabbitmq
option clitcpka
bind 192.169.142.220:5672
timeout client 900m
default_backend rabbitmq-vms
backend rabbitmq-vms
option srvtcpka
balance roundrobin
timeout server 900m
server rhos8-node1 192.169.142.221:5672 check inter 1s
server rhos8-node2 192.169.142.222:5672 check inter 1s
server rhos8-node3 192.169.142.223:5672 check inter 1s
frontend vip-keystone-admin
bind 192.169.142.220:35357
default_backend keystone-admin-vms
timeout client 600s
backend keystone-admin-vms
balance roundrobin
timeout server 600s
server rhos8-node1 192.169.142.221:35357 check inter 1s on-marked-down shutdown-sessions
server rhos8-node2 192.169.142.222:35357 check inter 1s on-marked-down shutdown-sessions
server rhos8-node3 192.169.142.223:35357 check inter 1s on-marked-down shutdown-sessions
frontend vip-keystone-public
bind 192.169.142.220:5000
default_backend keystone-public-vms
timeout client 600s
backend keystone-public-vms
balance roundrobin
timeout server 600s
server rhos8-node1 192.169.142.221:5000 check inter 1s on-marked-down shutdown-sessions
server rhos8-node2 192.169.142.222:5000 check inter 1s on-marked-down shutdown-sessions
server rhos8-node3 192.169.142.223:5000 check inter 1s on-marked-down shutdown-sessions
frontend vip-glance-api
bind 192.169.142.220:9191
default_backend glance-api-vms
backend glance-api-vms
balance roundrobin
server rhos8-node1 192.169.142.221:9191 check inter 1s
server rhos8-node2 192.169.142.222:9191 check inter 1s
server rhos8-node3 192.169.142.223:9191 check inter 1s
frontend vip-glance-registry
bind 192.169.142.220:9292
default_backend glance-registry-vms
backend glance-registry-vms
balance roundrobin
server rhos8-node1 192.169.142.221:9292 check inter 1s
server rhos8-node2 192.169.142.222:9292 check inter 1s
server rhos8-node3 192.169.142.223:9292 check inter 1s
frontend vip-cinder
bind 192.169.142.220:8776
default_backend cinder-vms
backend cinder-vms
balance roundrobin
server rhos8-node1 192.169.142.221:8776 check inter 1s
server rhos8-node2 192.169.142.222:8776 check inter 1s
server rhos8-node3 192.169.142.223:8776 check inter 1s
frontend vip-swift
bind 192.169.142.220:8080
default_backend swift-vms
backend swift-vms
balance roundrobin
server rhos8-node1 192.169.142.221:8080 check inter 1s
server rhos8-node2 192.169.142.222:8080 check inter 1s
server rhos8-node3 192.169.142.223:8080 check inter 1s
frontend vip-neutron
bind 192.169.142.220:9696
default_backend neutron-vms
backend neutron-vms
balance roundrobin
server rhos8-node1 192.169.142.221:9696 check inter 1s
server rhos8-node2 192.169.142.222:9696 check inter 1s
server rhos8-node3 192.169.142.223:9696 check inter 1s
frontend vip-nova-vnc-novncproxy
bind 192.169.142.220:6080
default_backend nova-vnc-novncproxy-vms
backend nova-vnc-novncproxy-vms
balance roundrobin
timeout tunnel 1h
server rhos8-node1 192.169.142.221:6080 check inter 1s
server rhos8-node2 192.169.142.222:6080 check inter 1s
server rhos8-node3 192.169.142.223:6080 check inter 1s
frontend nova-metadata-vms
bind 192.169.142.220:8775
default_backend nova-metadata-vms
backend nova-metadata-vms
balance roundrobin
server rhos8-node1 192.169.142.221:8775 check inter 1s
server rhos8-node2 192.169.142.222:8775 check inter 1s
server rhos8-node3 192.169.142.223:8775 check inter 1s
frontend vip-nova-api
bind 192.169.142.220:8774
default_backend nova-api-vms
backend nova-api-vms
balance roundrobin
server rhos8-node1 192.169.142.221:8774 check inter 1s
server rhos8-node2 192.169.142.222:8774 check inter 1s
server rhos8-node3 192.169.142.223:8774 check inter 1s
frontend vip-horizon
bind 192.169.142.220:80
timeout client 180s
default_backend horizon-vms
backend horizon-vms
balance roundrobin
timeout server 180s
mode http
cookie SERVERID insert indirect nocache
server rhos8-node1 192.169.142.221:80 check inter 1s cookie rhos8-horizon1 on-marked-down shutdown-sessions
server rhos8-node2 192.169.142.222:80 check inter 1s cookie rhos8-horizon2 on-marked-down shutdown-sessions
server rhos8-node3 192.169.142.223:80 check inter 1s cookie rhos8-horizon3 on-marked-down shutdown-sessions
frontend vip-heat-cfn
bind 192.169.142.220:8000
default_backend heat-cfn-vms
backend heat-cfn-vms
balance roundrobin
server rhos8-node1 192.169.142.221:8000 check inter 1s
server rhos8-node2 192.169.142.222:8000 check inter 1s
server rhos8-node3 192.169.142.223:8000 check inter 1s
frontend vip-heat-cloudw
bind 192.169.142.220:8003
default_backend heat-cloudw-vms
backend heat-cloudw-vms
balance roundrobin
server rhos8-node1 192.169.142.221:8003 check inter 1s
server rhos8-node2 192.169.142.222:8003 check inter 1s
server rhos8-node3 192.169.142.223:8003 check inter 1s
frontend vip-heat-srv
bind 192.169.142.220:8004
default_backend heat-srv-vms
backend heat-srv-vms
balance roundrobin
server rhos8-node1 192.169.142.221:8004 check inter 1s
server rhos8-node2 192.169.142.222:8004 check inter 1s
server rhos8-node3 192.169.142.223:8004 check inter 1s
frontend vip-ceilometer
bind 192.169.142.220:8777
timeout client 90s
default_backend ceilometer-vms
backend ceilometer-vms
balance roundrobin
timeout server 90s
server rhos8-node1 192.169.142.221:8777 check inter 1s
server rhos8-node2 192.169.142.222:8777 check inter 1s
server rhos8-node3 192.169.142.223:8777 check inter 1s
frontend vip-sahara
bind 192.169.142.220:8386
default_backend sahara-vms
backend sahara-vms
balance roundrobin
server rhos8-node1 192.169.142.221:8386 check inter 1s
server rhos8-node2 192.169.142.222:8386 check inter 1s
server rhos8-node3 192.169.142.223:8386 check inter 1s
frontend vip-trove
bind 192.169.142.220:8779
default_backend trove-vms
backend trove-vms
balance roundrobin
server rhos8-node1 192.169.142.221:8779 check inter 1s
server rhos8-node2 192.169.142.222:8779 check inter 1s
server rhos8-node3 192.169.142.223:8779 check inter 1s

[root@hacontroller1 ~(keystone_demo)]# cat /etc/my.cnf.d/galera.cnf
[mysqld]
skip-name-resolve=1
binlog_format=ROW
default-storage-engine=innodb
innodb_autoinc_lock_mode=2
innodb_locks_unsafe_for_binlog=1
max_connections=8192
query_cache_size=0
query_cache_type=0
bind_address=192.169.142.22(X)   # X = 1, 2 or 3, depending on the controller
wsrep_provider=/usr/lib64/galera/libgalera_smm.so
wsrep_cluster_name="galera_cluster"
wsrep_cluster_address="gcomm://192.169.142.221,192.169.142.222,192.169.142.223"
wsrep_slave_threads=1
wsrep_certify_nonPK=1
wsrep_max_ws_rows=131072
wsrep_max_ws_size=1073741824
wsrep_debug=0
wsrep_convert_LOCK_to_trx=0
wsrep_retry_autocommit=1
wsrep_auto_increment_control=1
wsrep_drupal_282555_workaround=0
wsrep_causal_reads=0
wsrep_notify_cmd=
wsrep_sst_method=rsync

[root@hacontroller1 ~(keystone_demo)]# cat /etc/keepalived/keepalived.conf
vrrp_script chk_haproxy {
script "/usr/bin/killall -0 haproxy"
interval 2
}
vrrp_instance VI_PUBLIC {
interface eth1
state BACKUP
virtual_router_id 52
priority 101
virtual_ipaddress {
192.169.142.220 dev eth1
}
track_script {
chk_haproxy
}
# Avoid failback
nopreempt
}
vrrp_sync_group VG1
group {
VI_PUBLIC
}
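The chk_haproxy script above works because signal 0 delivers nothing: `killall -0 haproxy` merely reports, through its exit status, whether a process named haproxy exists, and keepalived demotes the node when the check fails. A self-contained sketch of the same mechanism using `kill -0` on a scratch process:

```shell
#!/bin/sh
# Signal 0 performs no action; the exit status tells us whether the PID exists.
sleep 5 &
PID=$!
if kill -0 "$PID" 2>/dev/null; then
  STATUS=alive
else
  STATUS=dead
fi
kill "$PID" 2>/dev/null
wait 2>/dev/null
echo "haproxy-like process is $STATUS"
```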

*************************************************************************
The most difficult procedure is re-syncing the Galera MariaDB cluster
*************************************************************************

https://github.com/beekhof/osp-ha-deploy/blob/master/keepalived/galera-bootstrap.md

The Nova services start without waiting for the Galera databases to get in sync.
Once the sync is done, a database update via `openstack-service restart nova` is required on every Controller, regardless of systemctl reporting that the services are up and running. Also, the most likely reason VMs fail to reach the Nova metadata server at boot is a failure of the neutron-l3-agent service on a Controller: by classical design, VMs access metadata via neutron-ns-metadata-proxy running in the qrouter namespace. The neutron-l3-agents usually start without problems and sometimes just need to be restarted.

Runtime snapshots. Keepalived status on the Controller nodes

HA Neutron router belonging to tenant demo, created via the Neutron CLI

***********************************************************************

 At this point hacontroller1 goes down. On hacontroller2 run :-

***********************************************************************

[root@hacontroller2 ~(keystone_admin)]# neutron l3-agent-list-hosting-router RouterHA
+--------------------------------------+---------------------------+----------------+-------+----------+
| id                                   | host                      | admin_state_up | alive | ha_state |
+--------------------------------------+---------------------------+----------------+-------+----------+
| a03409d2-fbe9-492c-a954-e1bdf7627491 | hacontroller2.example.com | True           | :-)   | active   |
| 0d6e658a-e796-4cff-962f-06e455fce02f | hacontroller1.example.com | True           | xxx   | active   |
+--------------------------------------+---------------------------+----------------+-------+----------+

***********************************************************************

 At this point hacontroller2 goes down. hacontroller1 goes up :-

***********************************************************************

Nova Services status on all Controllers

Neutron Services status on all Controllers

Compute Node status

******************************************************************************
Cloud VM (L3) at runtime. Accessibility from the F23 virtualization host,
which runs the 3-node HA Controller and the Compute Node VMs (L2)
******************************************************************************

[root@fedora23wks ~]# ping  10.10.10.103

PING 10.10.10.103 (10.10.10.103) 56(84) bytes of data.
64 bytes from 10.10.10.103: icmp_seq=1 ttl=63 time=1.14 ms
64 bytes from 10.10.10.103: icmp_seq=2 ttl=63 time=0.813 ms
64 bytes from 10.10.10.103: icmp_seq=3 ttl=63 time=0.636 ms
64 bytes from 10.10.10.103: icmp_seq=4 ttl=63 time=0.778 ms
64 bytes from 10.10.10.103: icmp_seq=5 ttl=63 time=0.493 ms
^C

--- 10.10.10.103 ping statistics ---

5 packets transmitted, 5 received, 0% packet loss, time 4001ms

rtt min/avg/max/mdev = 0.493/0.773/1.146/0.218 ms

[root@fedora23wks ~]# ssh -i oskey1.priv fedora@10.10.10.103
Last login: Tue Nov 17 09:02:30 2015
[fedora@vf23dev ~]$ uname -a
Linux vf23dev.novalocal 4.2.5-300.fc23.x86_64 #1 SMP Tue Oct 27 04:29:56 UTC 2015 x86_64 x86_64 x86_64 GNU/Linux

********************************************************************************
Verifying the Neutron workflow on the 3-node Controller cluster built via the patch:-
********************************************************************************

[root@hacontroller1 ~(keystone_admin)]# ovs-ofctl show br-eth0
OFPT_FEATURES_REPLY (xid=0x2): dpid:0000baf0db1a854f
n_tables:254, n_buffers:256
capabilities: FLOW_STATS TABLE_STATS PORT_STATS QUEUE_STATS ARP_MATCH_IP
actions: output enqueue set_vlan_vid set_vlan_pcp strip_vlan mod_dl_src mod_dl_dst mod_nw_src mod_nw_dst mod_nw_tos mod_tp_src mod_tp_dst
1(eth0): addr:52:54:00:aa:0e:fc
config:     0
state:      0
speed: 0 Mbps now, 0 Mbps max
2(phy-br-eth0): addr:46:c0:e0:30:72:92
config:     0
state:      0
speed: 0 Mbps now, 0 Mbps max
LOCAL(br-eth0): addr:ba:f0:db:1a:85:4f
config:     0
state:      0
speed: 0 Mbps now, 0 Mbps max
OFPT_GET_CONFIG_REPLY (xid=0x4): frags=normal miss_send_len=0

[root@hacontroller1 ~(keystone_admin)]# ovs-ofctl dump-flows  br-eth0
NXST_FLOW reply (xid=0x4):
cookie=0x0, duration=15577.057s, table=0, n_packets=50441, n_bytes=3262529, idle_age=2, priority=4,in_port=2,dl_vlan=3 actions=strip_vlan,NORMAL
cookie=0x0, duration=15765.938s, table=0, n_packets=31225, n_bytes=1751795, idle_age=0, priority=2,in_port=2 actions=drop
cookie=0x0, duration=15765.974s, table=0, n_packets=39982, n_bytes=42838752, idle_age=1, priority=0 actions=NORMAL
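Reading these flows: the priority=4 rule strips the internal VLAN tag (dl_vlan=3) from traffic arriving on the patch port from br-int (in_port=2) before it leaves via eth0; priority=2 drops everything else from that port; priority=0 is normal switching. A quick filter to pull the VLAN ids out of a saved dump (one captured sample line is inlined below for illustration; in practice pipe `ovs-ofctl dump-flows br-eth0` through the same filter):

```shell
#!/bin/sh
# Extract dl_vlan ids from ovs-ofctl dump-flows output (sample line inlined).
VLANS=$(grep -o 'dl_vlan=[0-9]*' <<'EOF' | cut -d= -f2
 cookie=0x0, duration=15577.057s, table=0, n_packets=50441, n_bytes=3262529, idle_age=2, priority=4,in_port=2,dl_vlan=3 actions=strip_vlan,NORMAL
EOF
)
echo "$VLANS"
```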

Check `ovs-vsctl show`

Bridge br-int
fail_mode: secure
Port "tapc8488877-45"
tag: 4
Interface "tapc8488877-45"
type: internal
Port br-int
Interface br-int
type: internal
Port patch-tun
Interface patch-tun
type: patch
options: {peer=patch-int}
Port "tap14aa6eeb-70"
tag: 2
Interface "tap14aa6eeb-70"
type: internal
Port "qr-8f5b3f4a-45"
tag: 2
Interface "qr-8f5b3f4a-45"
type: internal
Port "int-br-eth0"
Interface "int-br-eth0"
type: patch
options: {peer="phy-br-eth0"}
Port "qg-34893aa0-17"
tag: 3

[root@hacontroller2 ~(keystone_demo)]# ovs-ofctl show  br-eth0
OFPT_FEATURES_REPLY (xid=0x2): dpid:0000b6bfa2bafd45
n_tables:254, n_buffers:256
capabilities: FLOW_STATS TABLE_STATS PORT_STATS QUEUE_STATS ARP_MATCH_IP
actions: output enqueue set_vlan_vid set_vlan_pcp strip_vlan mod_dl_src mod_dl_dst mod_nw_src mod_nw_dst mod_nw_tos mod_tp_src mod_tp_dst
1(eth0): addr:52:54:00:73:df:29
config:     0
state:      0
speed: 0 Mbps now, 0 Mbps max
2(phy-br-eth0): addr:be:89:61:87:56:20
config:     0
state:      0
speed: 0 Mbps now, 0 Mbps max
LOCAL(br-eth0): addr:b6:bf:a2:ba:fd:45
config:     0
state:      0
speed: 0 Mbps now, 0 Mbps max
OFPT_GET_CONFIG_REPLY (xid=0x4): frags=normal miss_send_len=0

[root@hacontroller2 ~(keystone_demo)]# ovs-ofctl dump-flows  br-eth0
NXST_FLOW reply (xid=0x4):
cookie=0x0, duration=15810.746s, table=0, n_packets=0, n_bytes=0, idle_age=15810, priority=4,in_port=2,dl_vlan=2 actions=strip_vlan,NORMAL
cookie=0x0, duration=16105.662s, table=0, n_packets=31849, n_bytes=1786827, idle_age=0, priority=2,in_port=2 actions=drop
cookie=0x0, duration=16105.696s, table=0, n_packets=39762, n_bytes=2100763, idle_age=0, priority=0 actions=NORMAL

Check `ovs-vsctl show`
Bridge br-int
fail_mode: secure
Port "qg-34893aa0-17"
tag: 2
Interface "qg-34893aa0-17"
type: internal


RDO Liberty Set up for three Nodes (Controller+Network+Compute) ML2&OVS&VXLAN on CentOS 7.1

October 22, 2015

As officially advertised:

In addition to the comprehensive OpenStack services, libraries and clients, this release also provides Packstack, a simple installer for proof-of-concept installations, as small as a single all-in-one box, and RDO Manager, an OpenStack deployment and management tool for production environments based on the OpenStack TripleO project.

In the post below I intend to test packstack on Liberty by performing a classic three-node deployment. If packstack succeeds, post-installation actions such as VRRP or DVR setups may be attempted as well. One of the real problems for packstack is HA Controller(s) setup. Here RDO Manager is supposed to gain a significant advantage, replacing a lot of manual configuration with a comprehensive CLI.

Following below is a brief instruction for a three-node deployment test (Controller && Network && Compute) of RDO Liberty, performed on a Fedora 22 host with KVM/Libvirt hypervisor (16 GB RAM, Intel Core i7-4790 Haswell CPU, ASUS Z97-P). Three VMs (4 GB RAM, 4 VCPUS) have been set up: the Controller VM with one VNIC (management subnet), the Network Node VM with three VNICs (management, VTEP and external subnets), and the Compute Node VM with two VNICs (management and VTEP subnets).

SELINUX stays in enforcing mode.

I avoid using the default libvirt subnet 192.168.122.0/24 for anything related to the VMs serving as RDO Liberty nodes; for some reason it causes network congestion when forwarding packets to the Internet and back.

Three Libvirt networks created

# cat openstackvms.xml
<network>
<name>openstackvms</name>
<uuid>d0e9964a-f91a-40c0-b769-a609aee41bf2</uuid>
<forward mode='nat'>
<nat>
<port start='1024' end='65535'/>
</nat>
</forward>
<bridge name='virbr1' stp='on' delay='0' />
<mac address='52:54:00:60:f8:6d'/>
<ip address='192.169.142.1' netmask='255.255.255.0'>
<dhcp>
<range start='192.169.142.2' end='192.169.142.254' />
</dhcp>
</ip>
</network>

[root@vfedora22wks ~]# cat public.xml
<network>
<name>public</name>
<uuid>d1e9965b-f92c-40c1-b749-b609aed42cf2</uuid>
<forward mode='nat'>
<nat>
<port start='1024' end='65535'/>
</nat>
</forward>
<bridge name='virbr2' stp='on' delay='0' />
<mac address='52:54:00:60:f8:6d'/>
<ip address='172.24.4.225' netmask='255.255.255.240'>
<dhcp>
<range start='172.24.4.226' end='172.24.4.238' />
</dhcp>
</ip>
</network>

[root@vfedora22wks ~]# cat vteps.xml
<network>
<name>vteps</name>
<uuid>d2e9965b-f92c-40c1-b749-b609aed42cf2</uuid>
<forward mode='nat'>
<nat>
<port start='1024' end='65535'/>
</nat>
</forward>
<bridge name='virbr3' stp='on' delay='0' />
<mac address='52:54:00:60:f8:6d'/>
<ip address='10.0.0.1' netmask='255.255.255.0'>
<dhcp>
<range start='10.0.0.1' end='10.0.0.254' />
</dhcp>
</ip>
</network>

# virsh net-list
 Name                 State      Autostart     Persistent
----------------------------------------------------------
 default              active     yes           yes
 openstackvms         active     yes           yes
 public               active     yes           yes
 vteps                active     yes           yes
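The sequence used to bring such networks up from their XML definitions can be sketched as a dry run (commands are echoed; drop the `echo` to execute on the virtualization host, where the XML files are assumed to sit in the current directory):

```shell
#!/bin/sh
# Define, start and autostart each Libvirt network (dry run).
for net in openstackvms public vteps; do
  echo "virsh net-define ${net}.xml"
  echo "virsh net-start ${net}"
  echo "virsh net-autostart ${net}"
done
```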

*********************************************************************************
1. The first Libvirt subnet, "openstackvms", serves as the management network.
All 3 VMs are attached to this subnet.
**********************************************************************************
2. The second Libvirt subnet, "public", simulates the external network. The
Network Node is attached to "public"; later on, its "eth2" interface (which
belongs to "public") is converted into an OVS port of br-ex on the Network Node.
Via bridge virbr2 (172.24.4.225), this Libvirt subnet gives VMs running on the
Compute Node access to the Internet, since it matches the external network
created by the packstack installation, 172.24.4.224/28.
***********************************************************************************
3. The third Libvirt subnet, "vteps", simulates the VTEP endpoints. The Network
and Compute Node VMs are attached to this subnet.
***********************************************************************************

*********************
Answer-file :-
*********************

[root@ip-192-169-142-127 ~(keystone_admin)]# cat answer3Node.txt
[general]
CONFIG_SSH_KEY=/root/.ssh/id_rsa.pub
CONFIG_DEFAULT_PASSWORD=
CONFIG_MARIADB_INSTALL=y
CONFIG_GLANCE_INSTALL=y
CONFIG_CINDER_INSTALL=y
CONFIG_NOVA_INSTALL=y
CONFIG_NEUTRON_INSTALL=y
CONFIG_HORIZON_INSTALL=y
CONFIG_SWIFT_INSTALL=y
CONFIG_CEILOMETER_INSTALL=y
CONFIG_HEAT_INSTALL=n
CONFIG_CLIENT_INSTALL=y
CONFIG_NTP_SERVERS=
CONFIG_NAGIOS_INSTALL=y
EXCLUDE_SERVERS=
CONFIG_DEBUG_MODE=n
CONFIG_CONTROLLER_HOST=192.169.142.127
CONFIG_COMPUTE_HOSTS=192.169.142.137
# In case of two Compute nodes
# CONFIG_COMPUTE_HOSTS=192.169.142.137,192.169.142.157
CONFIG_NETWORK_HOSTS=192.169.142.147
CONFIG_VMWARE_BACKEND=n
CONFIG_UNSUPPORTED=n
CONFIG_VCENTER_HOST=
CONFIG_VCENTER_USER=
CONFIG_VCENTER_PASSWORD=
CONFIG_VCENTER_CLUSTER_NAME=
CONFIG_STORAGE_HOST=192.169.142.127
CONFIG_USE_EPEL=y
CONFIG_REPO=
CONFIG_RH_USER=
CONFIG_SATELLITE_URL=
CONFIG_RH_PW=
CONFIG_RH_OPTIONAL=y
CONFIG_RH_PROXY=
CONFIG_RH_PROXY_PORT=
CONFIG_RH_PROXY_USER=
CONFIG_RH_PROXY_PW=
CONFIG_SATELLITE_USER=
CONFIG_SATELLITE_PW=
CONFIG_SATELLITE_AKEY=
CONFIG_SATELLITE_CACERT=
CONFIG_SATELLITE_PROFILE=
CONFIG_SATELLITE_FLAGS=
CONFIG_SATELLITE_PROXY=
CONFIG_SATELLITE_PROXY_USER=
CONFIG_SATELLITE_PROXY_PW=
CONFIG_AMQP_BACKEND=rabbitmq
CONFIG_AMQP_HOST=192.169.142.127
CONFIG_AMQP_ENABLE_SSL=n
CONFIG_AMQP_ENABLE_AUTH=n
CONFIG_AMQP_NSS_CERTDB_PW=PW_PLACEHOLDER
CONFIG_AMQP_SSL_PORT=5671
CONFIG_AMQP_SSL_CERT_FILE=/etc/pki/tls/certs/amqp_selfcert.pem
CONFIG_AMQP_SSL_KEY_FILE=/etc/pki/tls/private/amqp_selfkey.pem
CONFIG_AMQP_SSL_SELF_SIGNED=y
CONFIG_AMQP_AUTH_USER=amqp_user
CONFIG_AMQP_AUTH_PASSWORD=PW_PLACEHOLDER
CONFIG_MARIADB_HOST=192.169.142.127
CONFIG_MARIADB_USER=root
CONFIG_MARIADB_PW=7207ae344ed04957
CONFIG_KEYSTONE_DB_PW=abcae16b785245c3
CONFIG_KEYSTONE_REGION=RegionOne
CONFIG_KEYSTONE_ADMIN_TOKEN=3ad2de159f9649afb0c342ba57e637d9
CONFIG_KEYSTONE_ADMIN_PW=7049f834927e4468
CONFIG_KEYSTONE_DEMO_PW=bf737b785cfa4398
CONFIG_KEYSTONE_TOKEN_FORMAT=UUID
CONFIG_KEYSTONE_SERVICE_NAME=httpd
CONFIG_GLANCE_DB_PW=41264fc52ffd4fe8
CONFIG_GLANCE_KS_PW=f6a9398960534797
CONFIG_GLANCE_BACKEND=file
CONFIG_CINDER_DB_PW=5ac08c6d09ba4b69
CONFIG_CINDER_KS_PW=c8cb1ecb8c2b4f6f
CONFIG_CINDER_BACKEND=lvm
CONFIG_CINDER_VOLUMES_CREATE=y
CONFIG_CINDER_VOLUMES_SIZE=10G
CONFIG_CINDER_GLUSTER_MOUNTS=
CONFIG_CINDER_NFS_MOUNTS=
CONFIG_CINDER_NETAPP_LOGIN=
CONFIG_CINDER_NETAPP_PASSWORD=
CONFIG_CINDER_NETAPP_HOSTNAME=
CONFIG_CINDER_NETAPP_SERVER_PORT=80
CONFIG_CINDER_NETAPP_STORAGE_FAMILY=ontap_cluster
CONFIG_CINDER_NETAPP_TRANSPORT_TYPE=http
CONFIG_CINDER_NETAPP_STORAGE_PROTOCOL=nfs
CONFIG_CINDER_NETAPP_SIZE_MULTIPLIER=1.0
CONFIG_CINDER_NETAPP_EXPIRY_THRES_MINUTES=720
CONFIG_CINDER_NETAPP_THRES_AVL_SIZE_PERC_START=20
CONFIG_CINDER_NETAPP_THRES_AVL_SIZE_PERC_STOP=60
CONFIG_CINDER_NETAPP_NFS_SHARES_CONFIG=
CONFIG_CINDER_NETAPP_VOLUME_LIST=
CONFIG_CINDER_NETAPP_VFILER=
CONFIG_CINDER_NETAPP_VSERVER=
CONFIG_CINDER_NETAPP_CONTROLLER_IPS=
CONFIG_CINDER_NETAPP_SA_PASSWORD=
CONFIG_CINDER_NETAPP_WEBSERVICE_PATH=/devmgr/v2
CONFIG_CINDER_NETAPP_STORAGE_POOLS=
CONFIG_NOVA_DB_PW=1e1b5aeeeaf342a8
CONFIG_NOVA_KS_PW=d9583177a2444f06
CONFIG_NOVA_SCHED_CPU_ALLOC_RATIO=16.0
CONFIG_NOVA_SCHED_RAM_ALLOC_RATIO=1.5
CONFIG_NOVA_COMPUTE_MIGRATE_PROTOCOL=tcp
CONFIG_NOVA_COMPUTE_PRIVIF=eth1
CONFIG_NOVA_NETWORK_MANAGER=nova.network.manager.FlatDHCPManager
CONFIG_NOVA_NETWORK_PUBIF=eth0
CONFIG_NOVA_NETWORK_PRIVIF=eth1
CONFIG_NOVA_NETWORK_FIXEDRANGE=192.168.32.0/22
CONFIG_NOVA_NETWORK_FLOATRANGE=10.3.4.0/22
CONFIG_NOVA_NETWORK_DEFAULTFLOATINGPOOL=nova
CONFIG_NOVA_NETWORK_AUTOASSIGNFLOATINGIP=n
CONFIG_NOVA_NETWORK_VLAN_START=100
CONFIG_NOVA_NETWORK_NUMBER=1
CONFIG_NOVA_NETWORK_SIZE=255
CONFIG_NEUTRON_KS_PW=808e36e154bd4cee
CONFIG_NEUTRON_DB_PW=0e2b927a21b44737
CONFIG_NEUTRON_L3_EXT_BRIDGE=br-ex
CONFIG_NEUTRON_L2_PLUGIN=ml2
CONFIG_NEUTRON_METADATA_PW=a965cd23ed2f4502
CONFIG_LBAAS_INSTALL=n
CONFIG_NEUTRON_METERING_AGENT_INSTALL=n
CONFIG_NEUTRON_FWAAS=n
CONFIG_NEUTRON_ML2_TYPE_DRIVERS=vxlan
CONFIG_NEUTRON_ML2_TENANT_NETWORK_TYPES=vxlan
CONFIG_NEUTRON_ML2_MECHANISM_DRIVERS=openvswitch
CONFIG_NEUTRON_ML2_FLAT_NETWORKS=*
CONFIG_NEUTRON_ML2_VLAN_RANGES=
CONFIG_NEUTRON_ML2_TUNNEL_ID_RANGES=1001:2000
CONFIG_NEUTRON_ML2_VXLAN_GROUP=239.1.1.2
CONFIG_NEUTRON_ML2_VNI_RANGES=1001:2000
CONFIG_NEUTRON_L2_AGENT=openvswitch
CONFIG_NEUTRON_LB_TENANT_NETWORK_TYPE=local
CONFIG_NEUTRON_LB_VLAN_RANGES=
CONFIG_NEUTRON_LB_INTERFACE_MAPPINGS=
CONFIG_NEUTRON_OVS_TENANT_NETWORK_TYPE=vxlan
CONFIG_NEUTRON_OVS_VLAN_RANGES=
CONFIG_NEUTRON_OVS_BRIDGE_MAPPINGS=physnet1:br-ex
CONFIG_NEUTRON_OVS_BRIDGE_IFACES=
CONFIG_NEUTRON_OVS_TUNNEL_RANGES=1001:2000
# This is VXLAN tunnel endpoint interface
# It should be assigned IP from vteps network
# before running packstack
CONFIG_NEUTRON_OVS_TUNNEL_IF=eth1
CONFIG_NEUTRON_OVS_VXLAN_UDP_PORT=4789
CONFIG_HORIZON_SSL=n
CONFIG_SSL_CERT=
CONFIG_SSL_KEY=
CONFIG_SSL_CACHAIN=
CONFIG_SWIFT_KS_PW=8f75bfd461234c30
CONFIG_SWIFT_STORAGES=
CONFIG_SWIFT_STORAGE_ZONES=1
CONFIG_SWIFT_STORAGE_REPLICAS=1
CONFIG_SWIFT_STORAGE_FSTYPE=ext4
CONFIG_SWIFT_HASH=a60aacbedde7429a
CONFIG_SWIFT_STORAGE_SIZE=2G
CONFIG_PROVISION_DEMO=y
CONFIG_PROVISION_TEMPEST=n
CONFIG_PROVISION_TEMPEST_USER=
CONFIG_PROVISION_TEMPEST_USER_PW=44faa4ebc3da4459
CONFIG_PROVISION_DEMO_FLOATRANGE=172.24.4.224/28
CONFIG_PROVISION_TEMPEST_REPO_URI=https://github.com/openstack/tempest.git
CONFIG_PROVISION_TEMPEST_REPO_REVISION=master
CONFIG_PROVISION_ALL_IN_ONE_OVS_BRIDGE=n
CONFIG_HEAT_DB_PW=PW_PLACEHOLDER
CONFIG_HEAT_AUTH_ENC_KEY=fc3fb7fee61e46b0
CONFIG_HEAT_KS_PW=PW_PLACEHOLDER
CONFIG_HEAT_CLOUDWATCH_INSTALL=n
CONFIG_HEAT_USING_TRUSTS=y
CONFIG_HEAT_CFN_INSTALL=n
CONFIG_HEAT_DOMAIN=heat
CONFIG_HEAT_DOMAIN_ADMIN=heat_admin
CONFIG_HEAT_DOMAIN_PASSWORD=PW_PLACEHOLDER
CONFIG_CEILOMETER_SECRET=19ae0e7430174349
CONFIG_CEILOMETER_KS_PW=337b08d4b3a44753
CONFIG_MONGODB_HOST=192.169.142.127
CONFIG_NAGIOS_PW=02f168ee8edd44e4
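Per the CONFIG_NEUTRON_OVS_TUNNEL_IF comment above, eth1 on the Network and Compute node VMs needs an address from the vteps subnet before packstack runs. A hypothetical sketch for the Network node (the 10.0.0.147 address is an assumption; each node gets its own free address):

```shell
# /etc/sysconfig/network-scripts/ifcfg-eth1 -- VTEP endpoint address (example values)
DEVICE=eth1
ONBOOT=yes
BOOTPROTO=static
IPADDR=10.0.0.147
NETMASK=255.255.255.0
NM_CONTROLLED=no
IPV6INIT=no
```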
**************************************
At this point run on Controller:-
**************************************
Keep SELINUX=enforcing ( RDO Liberty is supposed to handle this)
# yum -y  install centos-release-openstack-liberty
# yum -y  install openstack-packstack
# packstack --answer-file=./answer3Node.txt
**********************************************************************************
Upon packstack completion, create the following files on the Network Node,
designed to match the external network created by the installer
**********************************************************************************

[root@ip-192-169-142-147 network-scripts]# cat ifcfg-br-ex
DEVICE="br-ex"
BOOTPROTO="static"
IPADDR="172.24.4.232"
NETMASK="255.255.255.240"
DNS1="83.221.202.254"
BROADCAST="172.24.4.239"
GATEWAY="172.24.4.225"
NM_CONTROLLED="no"
TYPE="OVSIntPort"
OVS_BRIDGE=br-ex
DEVICETYPE="ovs"
DEFROUTE="yes"
IPV4_FAILURE_FATAL="yes"
IPV6INIT=no

[root@ip-192-169-142-147 network-scripts]# cat ifcfg-eth2
DEVICE="eth2"
# HWADDR=00:22:15:63:E4:E2
ONBOOT="yes"
TYPE="OVSPort"
DEVICETYPE="ovs"
OVS_BRIDGE=br-ex
NM_CONTROLLED=no
IPV6INIT=no

*************************************************
Next steps to be performed on the Network Node :-
*************************************************

# chkconfig network on
# systemctl stop NetworkManager
# systemctl disable NetworkManager
# service network restart

The OVS port should be eth2 (the third Ethernet interface on the Network Node).
In a real deployment, the Libvirt bridge virbr2 is your router to the external
network. The OVS bridge br-ex should have an IP belonging to the external network.

*******************
On Controller :-
*******************

[root@ip-192-169-142-127 ~(keystone_admin)]# netstat -lntp |  grep 35357
tcp6       0      0 :::35357                :::*                    LISTEN      7047/httpd

[root@ip-192-169-142-127 ~(keystone_admin)]# ps -ef | grep 7047
root      7047     1  0 11:22 ?        00:00:00 /usr/sbin/httpd -DFOREGROUND
keystone  7089  7047  0 11:22 ?        00:00:07 keystone-admin  -DFOREGROUND
keystone  7090  7047  0 11:22 ?        00:00:02 keystone-main   -DFOREGROUND
apache    7092  7047  0 11:22 ?        00:00:04 /usr/sbin/httpd -DFOREGROUND
apache    7093  7047  0 11:22 ?        00:00:04 /usr/sbin/httpd -DFOREGROUND
apache    7094  7047  0 11:22 ?        00:00:03 /usr/sbin/httpd -DFOREGROUND
apache    7095  7047  0 11:22 ?        00:00:00 /usr/sbin/httpd -DFOREGROUND
apache    7096  7047  0 11:22 ?        00:00:00 /usr/sbin/httpd -DFOREGROUND
apache    7097  7047  0 11:22 ?        00:00:00 /usr/sbin/httpd -DFOREGROUND
apache    7098  7047  0 11:22 ?        00:00:00 /usr/sbin/httpd -DFOREGROUND
apache    7099  7047  0 11:22 ?        00:00:00 /usr/sbin/httpd -DFOREGROUND
apache    7100  7047  0 11:22 ?        00:00:00 /usr/sbin/httpd -DFOREGROUND
apache    7101  7047  0 11:22 ?        00:00:00 /usr/sbin/httpd -DFOREGROUND
apache    7102  7047  0 11:22 ?        00:00:00 /usr/sbin/httpd -DFOREGROUND
root     28963 17739  0 12:51 pts/1    00:00:00 grep --color=auto 7047

********************
On Network Node
********************

[root@ip-192-169-142-147 ~(keystone_admin)]# neutron agent-list
+--------------------------------------+--------------------+----------------------------------------+-------+----------------+---------------------------+
| id                                   | agent_type         | host                                   | alive | admin_state_up | binary                    |
+--------------------------------------+--------------------+----------------------------------------+-------+----------------+---------------------------+
| 217fb0f5-8dd1-4361-aae7-cc9a7d18d6e4 | Open vSwitch agent | ip-192-169-142-147.ip.secureserver.net | :-)   | True           | neutron-openvswitch-agent |
| 5dabfc17-db64-470c-9f01-8d2297d155f3 | L3 agent           | ip-192-169-142-147.ip.secureserver.net | :-)   | True           | neutron-l3-agent          |
| 5e3c6e2f-3f6d-4ede-b058-bc1b317d4ee1 | Open vSwitch agent | ip-192-169-142-137.ip.secureserver.net | :-)   | True           | neutron-openvswitch-agent |
| f0f02931-e7e6-4b01-8b87-46224cb71e6d | DHCP agent         | ip-192-169-142-147.ip.secureserver.net | :-)   | True           | neutron-dhcp-agent        |
| f16a5d9d-55e6-47c3-b509-ca445d05d34d | Metadata agent     | ip-192-169-142-147.ip.secureserver.net | :-)   | True           | neutron-metadata-agent    |
+--------------------------------------+--------------------+----------------------------------------+-------+----------------+---------------------------+

[root@ip-192-169-142-147 ~(keystone_admin)]# ovs-vsctl show
9221d1c1-008a-464a-ac26-1e0340407714
    Bridge br-tun
        fail_mode: secure
        Port br-tun
            Interface br-tun
                type: internal
        Port "vxlan-0a000089"
            Interface "vxlan-0a000089"
                type: vxlan
                options: {df_default="true", in_key=flow, local_ip="10.0.0.147", out_key=flow, remote_ip="10.0.0.137"}
        Port patch-int
            Interface patch-int
                type: patch
                options: {peer=patch-tun}
    Bridge br-ex
        Port phy-br-ex
            Interface phy-br-ex
                type: patch
                options: {peer=int-br-ex}
        Port "eth2"
            Interface "eth2"
        Port "qg-1deeaf96-e8"
            Interface "qg-1deeaf96-e8"
                type: internal
        Port br-ex
            Interface br-ex
                type: internal
    Bridge br-int
        fail_mode: secure
        Port "qr-1909e3bb-fd"
            tag: 2
            Interface "qr-1909e3bb-fd"
                type: internal
        Port "tapfdf24cad-f8"
            tag: 2
            Interface "tapfdf24cad-f8"
                type: internal
        Port patch-tun
            Interface patch-tun
                type: patch
                options: {peer=patch-int}
        Port br-int
            Interface br-int
                type: internal
        Port int-br-ex
            Interface int-br-ex
                type: patch
                options: {peer=phy-br-ex}
    ovs_version: "2.4.0"
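The OVS agent encodes the VXLAN peer address in the port name: `vxlan-0a000089` is simply remote_ip 10.0.0.137 written in hex. A small Python sketch (the helper name is mine, introduced only for illustration) decodes it:

```python
import ipaddress

def vxlan_port_remote_ip(port_name: str) -> str:
    """Decode a 'vxlan-<hex remote ip>' port name into its dotted-quad peer IP."""
    hex_ip = port_name.split("-", 1)[1]
    return str(ipaddress.ip_address(int(hex_ip, 16)))

print(vxlan_port_remote_ip("vxlan-0a000089"))  # 10.0.0.137
```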

[root@ip-192-169-142-147 ~(keystone_admin)]# dmesg | grep promisc
[    2.233302] device ovs-system entered promiscuous mode
[    2.273206] device br-int entered promiscuous mode
[    2.274981] device qr-838ad1f3-7d entered promiscuous mode
[    2.276333] device tap0f21eab4-db entered promiscuous mode
[    2.312740] device br-tun entered promiscuous mode
[    2.314509] device qg-2b712b60-d0 entered promiscuous mode
[    2.315921] device br-ex entered promiscuous mode
[    2.316022] device eth2 entered promiscuous mode
[   10.704329] device qr-838ad1f3-7d left promiscuous mode
[   10.729045] device tap0f21eab4-db left promiscuous mode
[   10.761844] device qg-2b712b60-d0 left promiscuous mode
[  224.746399] device eth2 left promiscuous mode
[  232.173791] device eth2 entered promiscuous mode
[  232.978909] device tap0f21eab4-db entered promiscuous mode
[  233.690854] device qr-838ad1f3-7d entered promiscuous mode
[  233.895213] device qg-2b712b60-d0 entered promiscuous mode
[ 1253.611501] device qr-838ad1f3-7d left promiscuous mode
[ 1254.017129] device qg-2b712b60-d0 left promiscuous mode
[ 1404.697825] device tapfdf24cad-f8 entered promiscuous mode
[ 1421.812107] device qr-1909e3bb-fd entered promiscuous mode
[ 1422.045593] device qg-1deeaf96-e8 entered promiscuous mode
[ 6111.042488] device tap0f21eab4-db left promiscuous mode

[root@ip-192-169-142-147 ~(keystone_admin)]# ip netns exec qrouter-dd26c4ed-f757-416d-a772-64b503ffc497 ip route
default via 172.24.4.225 dev qg-1deeaf96-e8
50.0.0.0/24 dev qr-1909e3bb-fd  proto kernel  scope link  src 50.0.0.1
172.24.4.224/28 dev qg-1deeaf96-e8  proto kernel  scope link  src 172.24.4.227 

[root@ip-192-169-142-147 ~(keystone_admin)]# ip netns exec qrouter-dd26c4ed-f757-416d-a772-64b503ffc497 ifconfig
lo: flags=73<UP,LOOPBACK,RUNNING>  mtu 65536
inet 127.0.0.1  netmask 255.0.0.0
inet6 ::1  prefixlen 128  scopeid 0x10<host>
loop  txqueuelen 0  (Local Loopback)
RX packets 0  bytes 0 (0.0 B)
RX errors 0  dropped 0  overruns 0  frame 0
TX packets 0  bytes 0 (0.0 B)
TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0

qg-1deeaf96-e8: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1500
inet 172.24.4.227  netmask 255.255.255.240  broadcast 172.24.4.239
inet6 fe80::f816:3eff:fe93:12de  prefixlen 64  scopeid 0x20<link>
ether fa:16:3e:93:12:de  txqueuelen 0  (Ethernet)
RX packets 864432  bytes 1185656986 (1.1 GiB)
RX errors 0  dropped 0  overruns 0  frame 0
TX packets 382639  bytes 29347929 (27.9 MiB)
TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0

qr-1909e3bb-fd: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1500
inet 50.0.0.1  netmask 255.255.255.0  broadcast 50.0.0.255
inet6 fe80::f816:3eff:feae:d1e0  prefixlen 64  scopeid 0x20<link>
ether fa:16:3e:ae:d1:e0  txqueuelen 0  (Ethernet)
RX packets 382969  bytes 29386380 (28.0 MiB)
RX errors 0  dropped 0  overruns 0  frame 0
TX packets 864601  bytes 1185686714 (1.1 GiB)
TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0

[root@ip-192-169-142-147 ~(keystone_admin)]# ip netns exec qdhcp-153edd99-9152-49ad-a445-7280aa9df187 ip route
default via 50.0.0.1 dev tapfdf24cad-f8
50.0.0.0/24 dev tapfdf24cad-f8  proto kernel  scope link  src 50.0.0.10 

[root@ip-192-169-142-147 ~(keystone_admin)]# ip netns exec qdhcp-153edd99-9152-49ad-a445-7280aa9df187 ifconfig
lo: flags=73<UP,LOOPBACK,RUNNING>  mtu 65536
inet 127.0.0.1  netmask 255.0.0.0
inet6 ::1  prefixlen 128  scopeid 0x10<host>
loop  txqueuelen 0  (Local Loopback)
RX packets 0  bytes 0 (0.0 B)
RX errors 0  dropped 0  overruns 0  frame 0
TX packets 0  bytes 0 (0.0 B)
TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0

tapfdf24cad-f8: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1500
inet 50.0.0.10  netmask 255.255.255.0  broadcast 50.0.0.255
inet6 fe80::f816:3eff:fe98:c66  prefixlen 64  scopeid 0x20<link>
ether fa:16:3e:98:0c:66  txqueuelen 0  (Ethernet)
RX packets 63  bytes 6445 (6.2 KiB)
RX errors 0  dropped 0  overruns 0  frame 0
TX packets 14  bytes 2508 (2.4 KiB)
TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0

[root@ip-192-169-142-147 ~(keystone_admin)]# ip netns exec qrouter-dd26c4ed-f757-416d-a772-64b503ffc497 ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN
link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
inet 127.0.0.1/8 scope host lo
valid_lft forever preferred_lft forever
inet6 ::1/128 scope host
valid_lft forever preferred_lft forever

16: qr-1909e3bb-fd: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UNKNOWN
link/ether fa:16:3e:ae:d1:e0 brd ff:ff:ff:ff:ff:ff
inet 50.0.0.1/24 brd 50.0.0.255 scope global qr-1909e3bb-fd
valid_lft forever preferred_lft forever
inet6 fe80::f816:3eff:feae:d1e0/64 scope link
valid_lft forever preferred_lft forever

17: qg-1deeaf96-e8: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UNKNOWN
link/ether fa:16:3e:93:12:de brd ff:ff:ff:ff:ff:ff
inet 172.24.4.227/28 brd 172.24.4.239 scope global qg-1deeaf96-e8
valid_lft forever preferred_lft forever
inet 172.24.4.229/32 brd 172.24.4.229 scope global qg-1deeaf96-e8
valid_lft forever preferred_lft forever
inet 172.24.4.230/32 brd 172.24.4.230 scope global qg-1deeaf96-e8
valid_lft forever preferred_lft forever
inet6 fe80::f816:3eff:fe93:12de/64 scope link
valid_lft forever preferred_lft forever



RDO Kilo DVR Deployment (Controller/Network)+Compute+Compute (ML2&OVS&VXLAN) on CentOS 7.1

September 30, 2015

Per http://specs.openstack.org/openstack/neutron-specs/specs/juno/neutron-ovs-dvr.html

1. Neutron DVR implements the fip namespace on every Compute Node where VMs are running. Thus VMs with floating IPs can forward traffic to the External Network without routing it via the Network Node (North-South routing).
2. Neutron DVR implements L3 routers across the Compute Nodes, so that tenant intra-VM (East-West) communication occurs without involving the Network Node.
3. Neutron Distributed Virtual Router provides the legacy SNAT behavior for the default SNAT for all private VMs. The SNAT service is not distributed; it is centralized, and a designated service node hosts it.

Setup configuration

– Controller node: Nova, Keystone, Cinder, Glance,

Neutron (using Open vSwitch plugin && VXLAN )

– (2x) Compute node: Nova (nova-compute),

Neutron (openvswitch-agent,l3-agent,metadata-agent )

Three CentOS 7.1 VMs (4 GB RAM, 4 VCPU, 2 VNICs) have been built for testing

on a Fedora 22 KVM hypervisor. Two libvirt sub-nets were used: "openstackvms" (192.169.142.0/24, gateway virbr1 at 192.169.142.1) emulating the External && Mgmt networks, and "vteps" (10.0.0.0/24) to support the two VXLAN tunnels between the Controller and Compute Nodes.

# cat openstackvms.xml
<network>
  <name>openstackvms</name>
  <uuid>d0e9964a-f91a-40c0-b769-a609aee41bf2</uuid>
  <forward mode='nat'>
    <nat>
      <port start='1024' end='65535'/>
    </nat>
  </forward>
  <bridge name='virbr1' stp='on' delay='0' />
  <mac address='52:54:00:60:f8:6d'/>
  <ip address='192.169.142.1' netmask='255.255.255.0'>
    <dhcp>
      <range start='192.169.142.2' end='192.169.142.254' />
    </dhcp>
  </ip>
</network>

# cat vteps.xml

<network>
  <name>vteps</name>
  <uuid>d0e9965b-f92c-40c1-b749-b609aed42cf2</uuid>
  <forward mode='nat'>
    <nat>
      <port start='1024' end='65535'/>
    </nat>
  </forward>
  <bridge name='virbr2' stp='on' delay='0' />
  <mac address='52:54:00:60:f8:6d'/>
  <ip address='10.0.0.1' netmask='255.255.255.0'>
    <dhcp>
      <range start='10.0.0.1' end='10.0.0.254' />
    </dhcp>
  </ip>
</network>

# virsh net-define openstackvms.xml
# virsh net-start openstackvms
# virsh net-autostart openstackvms

The second libvirt sub-net may be defined and started the same way.

ip-192-169-142-127.ip.secureserver.net – Controller/Network Node
ip-192-169-142-137.ip.secureserver.net – Compute Node
ip-192-169-142-147.ip.secureserver.net – Compute Node

Answer File :-

[general]
CONFIG_SSH_KEY=/root/.ssh/id_rsa.pub
CONFIG_DEFAULT_PASSWORD=
CONFIG_MARIADB_INSTALL=y
CONFIG_GLANCE_INSTALL=y
CONFIG_CINDER_INSTALL=y
CONFIG_NOVA_INSTALL=y
CONFIG_NEUTRON_INSTALL=y
CONFIG_HORIZON_INSTALL=y
CONFIG_SWIFT_INSTALL=y
CONFIG_CEILOMETER_INSTALL=y
CONFIG_HEAT_INSTALL=n
CONFIG_CLIENT_INSTALL=y
CONFIG_NTP_SERVERS=
CONFIG_NAGIOS_INSTALL=y
EXCLUDE_SERVERS=
CONFIG_DEBUG_MODE=n
CONFIG_CONTROLLER_HOST=192.169.142.127
CONFIG_COMPUTE_HOSTS=192.169.142.137,192.169.142.147
CONFIG_NETWORK_HOSTS=192.169.142.127
CONFIG_VMWARE_BACKEND=n
CONFIG_UNSUPPORTED=n
CONFIG_VCENTER_HOST=
CONFIG_VCENTER_USER=
CONFIG_VCENTER_PASSWORD=
CONFIG_VCENTER_CLUSTER_NAME=
CONFIG_STORAGE_HOST=192.169.142.127
CONFIG_USE_EPEL=y
CONFIG_REPO=
CONFIG_RH_USER=
CONFIG_SATELLITE_URL=
CONFIG_RH_PW=
CONFIG_RH_OPTIONAL=y
CONFIG_RH_PROXY=
CONFIG_RH_PROXY_PORT=
CONFIG_RH_PROXY_USER=
CONFIG_RH_PROXY_PW=
CONFIG_SATELLITE_USER=
CONFIG_SATELLITE_PW=
CONFIG_SATELLITE_AKEY=
CONFIG_SATELLITE_CACERT=
CONFIG_SATELLITE_PROFILE=
CONFIG_SATELLITE_FLAGS=
CONFIG_SATELLITE_PROXY=
CONFIG_SATELLITE_PROXY_USER=
CONFIG_SATELLITE_PROXY_PW=
CONFIG_AMQP_BACKEND=rabbitmq
CONFIG_AMQP_HOST=192.169.142.127
CONFIG_AMQP_ENABLE_SSL=n
CONFIG_AMQP_ENABLE_AUTH=n
CONFIG_AMQP_NSS_CERTDB_PW=PW_PLACEHOLDER
CONFIG_AMQP_SSL_PORT=5671
CONFIG_AMQP_SSL_CERT_FILE=/etc/pki/tls/certs/amqp_selfcert.pem
CONFIG_AMQP_SSL_KEY_FILE=/etc/pki/tls/private/amqp_selfkey.pem
CONFIG_AMQP_SSL_SELF_SIGNED=y
CONFIG_AMQP_AUTH_USER=amqp_user
CONFIG_AMQP_AUTH_PASSWORD=PW_PLACEHOLDER
CONFIG_MARIADB_HOST=192.169.142.127
CONFIG_MARIADB_USER=root
CONFIG_MARIADB_PW=7207ae344ed04957
CONFIG_KEYSTONE_DB_PW=abcae16b785245c3
CONFIG_KEYSTONE_REGION=RegionOne
CONFIG_KEYSTONE_ADMIN_TOKEN=3ad2de159f9649afb0c342ba57e637d9
CONFIG_KEYSTONE_ADMIN_PW=7049f834927e4468
CONFIG_KEYSTONE_DEMO_PW=bf737b785cfa4398
CONFIG_KEYSTONE_TOKEN_FORMAT=UUID
CONFIG_KEYSTONE_SERVICE_NAME=httpd
CONFIG_GLANCE_DB_PW=41264fc52ffd4fe8
CONFIG_GLANCE_KS_PW=f6a9398960534797
CONFIG_GLANCE_BACKEND=file
CONFIG_CINDER_DB_PW=5ac08c6d09ba4b69
CONFIG_CINDER_KS_PW=c8cb1ecb8c2b4f6f
CONFIG_CINDER_BACKEND=lvm
CONFIG_CINDER_VOLUMES_CREATE=y
CONFIG_CINDER_VOLUMES_SIZE=5G
CONFIG_CINDER_GLUSTER_MOUNTS=
CONFIG_CINDER_NFS_MOUNTS=
CONFIG_CINDER_NETAPP_LOGIN=
CONFIG_CINDER_NETAPP_PASSWORD=
CONFIG_CINDER_NETAPP_HOSTNAME=
CONFIG_CINDER_NETAPP_SERVER_PORT=80
CONFIG_CINDER_NETAPP_STORAGE_FAMILY=ontap_cluster
CONFIG_CINDER_NETAPP_TRANSPORT_TYPE=http
CONFIG_CINDER_NETAPP_STORAGE_PROTOCOL=nfs
CONFIG_CINDER_NETAPP_SIZE_MULTIPLIER=1.0
CONFIG_CINDER_NETAPP_EXPIRY_THRES_MINUTES=720
CONFIG_CINDER_NETAPP_THRES_AVL_SIZE_PERC_START=20
CONFIG_CINDER_NETAPP_THRES_AVL_SIZE_PERC_STOP=60
CONFIG_CINDER_NETAPP_NFS_SHARES_CONFIG=
CONFIG_CINDER_NETAPP_VOLUME_LIST=
CONFIG_CINDER_NETAPP_VFILER=
CONFIG_CINDER_NETAPP_VSERVER=
CONFIG_CINDER_NETAPP_CONTROLLER_IPS=
CONFIG_CINDER_NETAPP_SA_PASSWORD=
CONFIG_CINDER_NETAPP_WEBSERVICE_PATH=/devmgr/v2
CONFIG_CINDER_NETAPP_STORAGE_POOLS=
CONFIG_NOVA_DB_PW=1e1b5aeeeaf342a8
CONFIG_NOVA_KS_PW=d9583177a2444f06
CONFIG_NOVA_SCHED_CPU_ALLOC_RATIO=16.0
CONFIG_NOVA_SCHED_RAM_ALLOC_RATIO=1.5
CONFIG_NOVA_COMPUTE_MIGRATE_PROTOCOL=tcp
CONFIG_NOVA_COMPUTE_PRIVIF=eth1
CONFIG_NOVA_NETWORK_MANAGER=nova.network.manager.FlatDHCPManager
CONFIG_NOVA_NETWORK_PUBIF=eth0
CONFIG_NOVA_NETWORK_PRIVIF=eth1
CONFIG_NOVA_NETWORK_FIXEDRANGE=192.168.32.0/22
CONFIG_NOVA_NETWORK_FLOATRANGE=10.3.4.0/22
CONFIG_NOVA_NETWORK_DEFAULTFLOATINGPOOL=nova
CONFIG_NOVA_NETWORK_AUTOASSIGNFLOATINGIP=n
CONFIG_NOVA_NETWORK_VLAN_START=100
CONFIG_NOVA_NETWORK_NUMBER=1
CONFIG_NOVA_NETWORK_SIZE=255
CONFIG_NEUTRON_KS_PW=808e36e154bd4cee
CONFIG_NEUTRON_DB_PW=0e2b927a21b44737
CONFIG_NEUTRON_L3_EXT_BRIDGE=br-ex
CONFIG_NEUTRON_L2_PLUGIN=ml2
CONFIG_NEUTRON_METADATA_PW=a965cd23ed2f4502
CONFIG_LBAAS_INSTALL=n
CONFIG_NEUTRON_METERING_AGENT_INSTALL=n
CONFIG_NEUTRON_FWAAS=n
CONFIG_NEUTRON_ML2_TYPE_DRIVERS=vxlan
CONFIG_NEUTRON_ML2_TENANT_NETWORK_TYPES=vxlan
CONFIG_NEUTRON_ML2_MECHANISM_DRIVERS=openvswitch
CONFIG_NEUTRON_ML2_FLAT_NETWORKS=*
CONFIG_NEUTRON_ML2_VLAN_RANGES=
CONFIG_NEUTRON_ML2_TUNNEL_ID_RANGES=1001:2000
CONFIG_NEUTRON_ML2_VXLAN_GROUP=239.1.1.2
CONFIG_NEUTRON_ML2_VNI_RANGES=1001:2000
CONFIG_NEUTRON_L2_AGENT=openvswitch
CONFIG_NEUTRON_LB_TENANT_NETWORK_TYPE=local
CONFIG_NEUTRON_LB_VLAN_RANGES=
CONFIG_NEUTRON_LB_INTERFACE_MAPPINGS=
CONFIG_NEUTRON_OVS_TENANT_NETWORK_TYPE=vxlan
CONFIG_NEUTRON_OVS_VLAN_RANGES=
CONFIG_NEUTRON_OVS_BRIDGE_MAPPINGS=physnet1:br-ex
CONFIG_NEUTRON_OVS_BRIDGE_IFACES=
CONFIG_NEUTRON_OVS_TUNNEL_RANGES=1001:2000
CONFIG_NEUTRON_OVS_TUNNEL_IF=eth1
CONFIG_NEUTRON_OVS_VXLAN_UDP_PORT=4789
CONFIG_HORIZON_SSL=n
CONFIG_SSL_CERT=
CONFIG_SSL_KEY=
CONFIG_SSL_CACHAIN=
CONFIG_SWIFT_KS_PW=8f75bfd461234c30
CONFIG_SWIFT_STORAGES=
CONFIG_SWIFT_STORAGE_ZONES=1
CONFIG_SWIFT_STORAGE_REPLICAS=1
CONFIG_SWIFT_STORAGE_FSTYPE=ext4
CONFIG_SWIFT_HASH=a60aacbedde7429a
CONFIG_SWIFT_STORAGE_SIZE=2G
CONFIG_PROVISION_DEMO=y
CONFIG_PROVISION_TEMPEST=n
CONFIG_PROVISION_TEMPEST_USER=
CONFIG_PROVISION_TEMPEST_USER_PW=44faa4ebc3da4459
CONFIG_PROVISION_DEMO_FLOATRANGE=172.24.4.224/28
CONFIG_PROVISION_TEMPEST_REPO_URI=https://github.com/openstack/tempest.git
CONFIG_PROVISION_TEMPEST_REPO_REVISION=master
CONFIG_PROVISION_ALL_IN_ONE_OVS_BRIDGE=n
CONFIG_HEAT_DB_PW=PW_PLACEHOLDER
CONFIG_HEAT_AUTH_ENC_KEY=fc3fb7fee61e46b0
CONFIG_HEAT_KS_PW=PW_PLACEHOLDER
CONFIG_HEAT_CLOUDWATCH_INSTALL=n
CONFIG_HEAT_USING_TRUSTS=y
CONFIG_HEAT_CFN_INSTALL=n
CONFIG_HEAT_DOMAIN=heat
CONFIG_HEAT_DOMAIN_ADMIN=heat_admin
CONFIG_HEAT_DOMAIN_PASSWORD=PW_PLACEHOLDER
CONFIG_CEILOMETER_SECRET=19ae0e7430174349
CONFIG_CEILOMETER_KS_PW=337b08d4b3a44753
CONFIG_MONGODB_HOST=192.169.142.127
CONFIG_NAGIOS_PW=02f168ee8edd44e4
********************************************************
On the Controller (X=2) and the Compute nodes (X=3,4) update :-
********************************************************

# cat ifcfg-br-ex
DEVICE="br-ex"
BOOTPROTO="static"
IPADDR="192.169.142.1(X)7"
NETMASK="255.255.255.0"
DNS1="83.221.202.254"
BROADCAST="192.169.142.255"
GATEWAY="192.169.142.1"
NM_CONTROLLED="no"
DEFROUTE="yes"
IPV4_FAILURE_FATAL="yes"
IPV6INIT=no
ONBOOT="yes"
TYPE="OVSIntPort"
OVS_BRIDGE=br-ex
DEVICETYPE="ovs"
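The `(X)` placeholder expands per node, as stated in the banner above (Controller X=2, Computes X=3,4). A throwaway Python sketch (node labels are mine, for illustration) shows the resulting br-ex addresses:

```python
# X values per node: Controller X=2, Computes X=3 and X=4
nodes = {"controller": 2, "compute1": 3, "compute2": 4}
addrs = {name: f"192.169.142.1{x}7" for name, x in nodes.items()}
for name, ip in addrs.items():
    print(name, ip)
# controller 192.169.142.127, compute1 192.169.142.137, compute2 192.169.142.147
```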

# cat ifcfg-eth0
DEVICE="eth0"
ONBOOT="yes"
TYPE="OVSPort"
DEVICETYPE="ovs"
OVS_BRIDGE=br-ex
NM_CONTROLLED=no
IPV6INIT=no
***********
Then
***********

# chkconfig network on
# systemctl stop NetworkManager
# systemctl disable NetworkManager
# service network restart

Reboot

*****************************************
On Controller update neutron.conf
*****************************************

router_distributed = True
dvr_base_mac = fa:16:3f:00:00:00

*****************
On Controller
*****************

[root@ip-192-169-142-127 neutron(keystone_admin)]# cat l3_agent.ini | grep -v ^#| grep -v ^$

[DEFAULT]
debug = False
interface_driver =neutron.agent.linux.interface.OVSInterfaceDriver
use_namespaces = True
handle_internal_only_routers = True
external_network_bridge = br-ex
metadata_port = 9697
send_arp_for_ha = 3
periodic_interval = 40
periodic_fuzzy_delay = 5
enable_metadata_proxy = True
router_delete_namespaces = False
agent_mode = dvr_snat
allow_automatic_l3agent_failover=False

*********************************
On each Compute Node
*********************************

[root@ip-192-169-142-147 neutron]# cat l3_agent.ini | grep -v ^#| grep -v ^$

[DEFAULT]
interface_driver = neutron.agent.linux.interface.OVSInterfaceDriver
agent_mode = dvr

*******************
On each node
*******************

[root@ip-192-169-142-147 neutron]# cat metadata_agent.ini | grep -v ^#| grep -v ^$

[DEFAULT]
debug = False
auth_url = http://192.169.142.127:35357/v2.0
auth_region = RegionOne
auth_insecure = False
admin_tenant_name = services
admin_user = neutron
admin_password = 808e36e154bd4cee
nova_metadata_ip = 192.169.142.127
nova_metadata_port = 8775
metadata_proxy_shared_secret =a965cd23ed2f4502
metadata_workers =4
metadata_backlog = 4096
cache_url = memory://?default_ttl=5

[root@ip-192-169-142-147 neutron]# cat ml2_conf.ini | grep -v ^#| grep -v ^$

[ml2]
type_drivers = vxlan
tenant_network_types = vxlan
mechanism_drivers =openvswitch,l2population
[ml2_type_flat]
[ml2_type_vlan]
[ml2_type_gre]
[ml2_type_vxlan]
vni_ranges =1001:2000
vxlan_group =239.1.1.2
[securitygroup]
enable_security_group = True
# On Compute nodes
[agent]
l2_population = True

The last [agent] entry is important for DVR configuration on Kilo (vs. Juno).

[root@ip-192-169-142-147 openvswitch]# cat ovs_neutron_plugin.ini | grep -v ^#| grep -v ^$

[ovs]
enable_tunneling = True
integration_bridge = br-int
tunnel_bridge = br-tun
local_ip =10.0.0.147
bridge_mappings =physnet1:br-ex
[agent]
polling_interval = 2
tunnel_types =vxlan
vxlan_udp_port =4789
l2population = True
enable_distributed_routing = True
arp_responder = True
[securitygroup]
firewall_driver = neutron.agent.linux.iptables_firewall.OVSHybridIptablesFirewallDriver

*********************************************************************
On each Compute node neutron-l3-agent and neutron-metadata-agent are
supposed to be started.
*********************************************************************

# yum install openstack-neutron-ml2
# systemctl start neutron-l3-agent
# systemctl start neutron-metadata-agent
# systemctl enable neutron-l3-agent
# systemctl enable neutron-metadata-agent

 

DVR01@Kilo

[root@ip-192-169-142-127 ~(keystone_admin)]# neutron l3-agent-list-hosting-router RouterDemo
+--------------------------------------+----------------------------------------+----------------+-------+----------+
| id                                   | host                                   | admin_state_up | alive | ha_state |
+--------------------------------------+----------------------------------------+----------------+-------+----------+
| 50388b16-4461-441c-83a4-f7e7084ec415 | ip-192-169-142-127.ip.secureserver.net | True           | :-)   |          |
| 7e89d4a7-7ebf-4a7a-9589-f6694e3637d4 | ip-192-169-142-137.ip.secureserver.net | True           | :-)   |          |
| d18cdf01-6814-489d-bef2-5207c1aac0eb | ip-192-169-142-147.ip.secureserver.net | True           | :-)   |          |
+--------------------------------------+----------------------------------------+----------------+-------+----------+
[root@ip-192-169-142-127 ~(keystone_admin)]# neutron agent-show 7e89d4a7-7ebf-4a7a-9589-f6694e3637d4
+———————+——————————————————————————-+
| Field | Value |
+———————+——————————————————————————-+
| admin_state_up | True |
| agent_type | L3 agent |
| alive | True |
| binary | neutron-l3-agent |
| configurations | { |
| | "router_id": "", |
| | "agent_mode": "dvr", |
| | "gateway_external_network_id": "", |
| | "handle_internal_only_routers": true, |
| | "use_namespaces": true, |
| | "routers": 1, |
| | "interfaces": 1, |
| | "floating_ips": 1, |
| | "interface_driver": "neutron.agent.linux.interface.OVSInterfaceDriver", |
| | "external_network_bridge": "br-ex", |
| | "ex_gw_ports": 1 |
| | } |
| created_at | 2015-09-29 07:40:37 |
| description | |
| heartbeat_timestamp | 2015-09-30 09:58:24 |
| host | ip-192-169-142-137.ip.secureserver.net |
| id | 7e89d4a7-7ebf-4a7a-9589-f6694e3637d4 |
| started_at | 2015-09-30 08:08:53 |
| topic | l3_agent |
+---------------------+-------------------------------------------------------------------------------+

DVR02@Kilo

Screenshot from 2015-09-30 13-41-49                                          Screenshot from 2015-09-30 13-43-54

 

 

 


CPU Pinning and NUMA Topology on RDO Kilo on Fedora Server 22

August 1, 2015
The posting below follows up http://redhatstackblog.redhat.com/2015/05/05/cpu-pinning-and-numa-topology-awareness-in-openstack-compute/
on RDO Kilo installed on Fedora 22. After the upgrade to the upstream version of openstack-puppet-modules-2015.1.9, the procedure of installing RDO Kilo on F22 changed significantly. Details follow below :-
*****************************************************************************************
RDO Kilo set up on Fedora ( openstack-puppet-modules-2015.1.9-4.fc23.noarch)
*****************************************************************************************
# dnf install -y https://rdoproject.org/repos/rdo-release.rpm
# dnf install -y openstack-packstack
Generate an answer file and update it :-
# packstack --gen-answer-file answer-file-aio.txt
and set CONFIG_KEYSTONE_SERVICE_NAME=httpd
****************************************************************************
I also commented out second line in  /etc/httpd/conf.d/mod_dnssd.conf
****************************************************************************
You might be hit by bug https://bugzilla.redhat.com/show_bug.cgi?id=1249482
As a pre-install step, apply the patch https://review.openstack.org/#/c/209032/
to fix neutron_api.pp. The puppet templates are located in
/usr/lib/python2.7/site-packages/packstack/puppet/templates.
You might also be hit by https://bugzilla.redhat.com/show_bug.cgi?id=1234042
The workaround is in comments 6 and 11.
****************
Then run :-
****************

# packstack --answer-file=./answer-file-aio.txt

The final target is to reproduce the mentioned article on an i7 4790 Haswell CPU box and launch a nova instance with CPU pinning.

[root@fedora22server ~(keystone_admin)]# uname -a
Linux fedora22server.localdomain 4.1.3-200.fc22.x86_64 #1 SMP Wed Jul 22 19:51:58 UTC 2015 x86_64 x86_64 x86_64 GNU/Linux

[root@fedora22server ~(keystone_admin)]# rpm -qa \*qemu\*
qemu-system-x86-2.3.0-6.fc22.x86_64
qemu-img-2.3.0-6.fc22.x86_64
qemu-guest-agent-2.3.0-6.fc22.x86_64
qemu-kvm-2.3.0-6.fc22.x86_64
ipxe-roms-qemu-20150407-1.gitdc795b9f.fc22.noarch
qemu-common-2.3.0-6.fc22.x86_64
libvirt-daemon-driver-qemu-1.2.13.1-2.fc22.x86_64

[root@fedora22server ~(keystone_admin)]# numactl --hardware
available: 1 nodes (0)
node 0 cpus: 0 1 2 3 4 5 6 7
node 0 size: 15991 MB
node 0 free: 4399 MB
node distances:
node   0
  0:  10

[root@fedora22server ~(keystone_admin)]# virsh capabilities

<capabilities>
  <host>
    <uuid>00fd5d2c-dad7-dd11-ad7e-7824af431b53</uuid>
    <cpu>
      <arch>x86_64</arch>
      <model>Haswell-noTSX</model>
      <vendor>Intel</vendor>
      <topology sockets='1' cores='4' threads='2'/>
      <feature name='invtsc'/>
      <feature name='abm'/>
      <feature name='pdpe1gb'/>
      <feature name='rdrand'/>
      <feature name='f16c'/>
      <feature name='osxsave'/>
      <feature name='pdcm'/>
      <feature name='xtpr'/>
      <feature name='tm2'/>
      <feature name='est'/>
      <feature name='smx'/>
      <feature name='vmx'/>
      <feature name='ds_cpl'/>
      <feature name='monitor'/>
      <feature name='dtes64'/>
      <feature name='pbe'/>
      <feature name='tm'/>
      <feature name='ht'/>
      <feature name='ss'/>
      <feature name='acpi'/>
      <feature name='ds'/>
      <feature name='vme'/>
      <pages unit='KiB' size='4'/>
      <pages unit='KiB' size='2048'/>
    </cpu>
    <power_management>
      <suspend_mem/>
      <suspend_disk/>
      <suspend_hybrid/>
    </power_management>
    <migration_features>
      <live/>
      <uri_transports>
        <uri_transport>tcp</uri_transport>
        <uri_transport>rdma</uri_transport>
      </uri_transports>
    </migration_features>
    <topology>
      <cells num='1'>
        <cell id='0'>
          <memory unit='KiB'>16374824</memory>
          <pages unit='KiB' size='4'>4093706</pages>
          <pages unit='KiB' size='2048'>0</pages>
          <distances>
            <sibling id='0' value='10'/>
          </distances>
          <cpus num='8'>
            <cpu id='0' socket_id='0' core_id='0' siblings='0,4'/>
            <cpu id='1' socket_id='0' core_id='1' siblings='1,5'/>
            <cpu id='2' socket_id='0' core_id='2' siblings='2,6'/>
            <cpu id='3' socket_id='0' core_id='3' siblings='3,7'/>
            <cpu id='4' socket_id='0' core_id='0' siblings='0,4'/>
            <cpu id='5' socket_id='0' core_id='1' siblings='1,5'/>
            <cpu id='6' socket_id='0' core_id='2' siblings='2,6'/>
            <cpu id='7' socket_id='0' core_id='3' siblings='3,7'/>
          </cpus>
        </cell>
      </cells>
    </topology>

On each Compute node where pinning of virtual machines is to be permitted, open the /etc/nova/nova.conf file and make the following modifications:

Set the vcpu_pin_set value to a list or range of logical CPU cores to reserve for virtual machine processes. OpenStack Compute will ensure guest virtual machine instances are pinned to these virtual CPU cores.
vcpu_pin_set=2,3,6,7
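Given the sibling map from the `virsh capabilities` output above (core N maps to logical CPUs N and N+4), `vcpu_pin_set=2,3,6,7` reserves both hyperthreads of physical cores 2 and 3. A small Python sketch (the parser mirrors the nova-style CPU-list syntax; the helper names are mine) illustrates this:

```python
def parse_cpu_set(spec: str) -> set:
    """Parse a nova-style CPU list such as '2,3,6,7' or '2-3,6-7'."""
    cpus = set()
    for part in spec.split(","):
        if "-" in part:
            lo, hi = part.split("-")
            cpus.update(range(int(lo), int(hi) + 1))
        else:
            cpus.add(int(part))
    return cpus

# Sibling pairs per the virsh capabilities output: core N -> logical CPUs N, N+4
siblings = {core: {core, core + 4} for core in range(4)}

pinned = parse_cpu_set("2,3,6,7")
fully_reserved = sorted(c for c, s in siblings.items() if s <= pinned)
print(fully_reserved)  # [2, 3]: both hyperthreads of cores 2 and 3 are reserved
```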

Set reserved_host_memory_mb to reserve RAM for host processes. For the purposes of testing, the default of 512 MB was used:
reserved_host_memory_mb=512

# systemctl restart openstack-nova-compute.service

************************************
SCHEDULER CONFIGURATION
************************************

Update /etc/nova/nova.conf

scheduler_default_filters=RetryFilter,AvailabilityZoneFilter,RamFilter,
ComputeFilter,ComputeCapabilitiesFilter,ImagePropertiesFilter,CoreFilter,
NUMATopologyFilter,AggregateInstanceExtraSpecsFilter

# systemctl restart openstack-nova-scheduler.service

At this point, if you create a guest, you may see changes appear in its XML, pinning the guest vCPU(s) to the cores listed in vcpu_pin_set:

<vcpu placement='static' cpuset='2-3,6-7'>1</vcpu>

Add at the end of the grub2 kernel (vmlinuz) command line:
isolcpus=2,3,6,7

***************
REBOOT
***************
[root@fedora22server ~(keystone_admin)]# nova aggregate-create performance

+----+-------------+-------------------+-------+----------+
| Id | Name        | Availability Zone | Hosts | Metadata |
+----+-------------+-------------------+-------+----------+
| 1  | performance | -                 |       |          |
+----+-------------+-------------------+-------+----------+

[root@fedora22server ~(keystone_admin)]# nova aggregate-set-metadata 1 pinned=true
Metadata has been successfully updated for aggregate 1.
+----+-------------+-------------------+-------+---------------+
| Id | Name        | Availability Zone | Hosts | Metadata      |
+----+-------------+-------------------+-------+---------------+
| 1  | performance | -                 |       | 'pinned=true' |
+----+-------------+-------------------+-------+---------------+

[root@fedora22server ~(keystone_admin)]# nova flavor-create m1.small.performance 6 4096 20 4
+----+----------------------+-----------+------+-----------+------+-------+-------------+-----------+
| ID | Name                 | Memory_MB | Disk | Ephemeral | Swap | VCPUs | RXTX_Factor | Is_Public |
+----+----------------------+-----------+------+-----------+------+-------+-------------+-----------+
| 6  | m1.small.performance | 4096      | 20   | 0         |      | 4     | 1.0         | True      |
+----+----------------------+-----------+------+-----------+------+-------+-------------+-----------+

[root@fedora22server ~(keystone_admin)]# nova flavor-key 6 set hw:cpu_policy=dedicated
[root@fedora22server ~(keystone_admin)]# nova flavor-key 6 set aggregate_instance_extra_specs:pinned=true
[root@fedora22server ~(keystone_admin)]# hostname
fedora22server.localdomain

[root@fedora22server ~(keystone_admin)]# nova aggregate-add-host 1 fedora22server.localdomain
Host fedora22server.localdomain has been successfully added for aggregate 1
+----+-------------+-------------------+------------------------------+---------------+
| Id | Name        | Availability Zone | Hosts                        | Metadata      |
+----+-------------+-------------------+------------------------------+---------------+
| 1  | performance | -                 | 'fedora22server.localdomain' | 'pinned=true' |
+----+-------------+-------------------+------------------------------+---------------+

[root@fedora22server ~(keystone_admin)]# . keystonerc_demo
[root@fedora22server ~(keystone_demo)]# glance image-list
+————————————–+———————————+————-+——————+————-+——–+
| ID | Name | Disk Format | Container Format | Size | Status |
+————————————–+———————————+————-+——————+————-+——–+
| bf6f5272-ae26-49ae-b0f9-3c4fcba350f6 | CentOS71Image | qcow2 | bare | 1004994560 | active |
| 05ac955e-3503-4bcf-8413-6a1b3c98aefa | cirros | qcow2 | bare | 13200896 | active |
| 7b2085b8-4fe7-4d32-a5c9-5eadaf8bfc52 | VF22Image | qcow2 | bare | 228599296 | active |
| c695e7fa-a69f-4220-abd8-2269b75af827 | Windows Server 2012 R2 Std Eval | qcow2 | bare | 17182752768 | active |
+————————————–+———————————+————-+——————+————-+——–+

[root@fedora22server ~(keystone_demo)]# neutron net-list

+————————————–+———-+—————————————————–+
| id | name | subnets |
+————————————–+———-+—————————————————–+
| 0daa3a02-c598-4c46-b1ac-368da5542927 | public | 8303b2f3-2de2-44c2-bd5e-fc0966daec53 192.168.1.0/24 |
| c85a4215-1558-4a95-886d-a2f75500e052 | demo_net | 0cab6cbc-dd80-42c6-8512-74d7b2cbf730 50.0.0.0/24 |
+————————————–+———-+—————————————————–+

*************************************************************************
At this point attempt to launch F22 Cloud instance with created flavor
m1.small.performance
*************************************************************************

[root@fedora22server ~(keystone_demo)]# nova boot --image 7b2085b8-4fe7-4d32-a5c9-5eadaf8bfc52 --key-name oskeydev --flavor m1.small.performance --nic net-id=c85a4215-1558-4a95-886d-a2f75500e052 vf22-instance

+--------------------------------------+--------------------------------------------------+
| Property                             | Value                                            |
+--------------------------------------+--------------------------------------------------+
| OS-DCF:diskConfig                    | MANUAL                                           |
| OS-EXT-AZ:availability_zone          | nova                                             |
| OS-EXT-STS:power_state               | 0                                                |
| OS-EXT-STS:task_state                | scheduling                                       |
| OS-EXT-STS:vm_state                  | building                                         |
| OS-SRV-USG:launched_at               | -                                                |
| OS-SRV-USG:terminated_at             | -                                                |
| accessIPv4                           |                                                  |
| accessIPv6                           |                                                  |
| adminPass                            | XsGr87ZLGX8P                                     |
| config_drive                         |                                                  |
| created                              | 2015-07-31T08:03:49Z                             |
| flavor                               | m1.small.performance (6)                         |
| hostId                               |                                                  |
| id                                   | 4b99f3cf-3126-48f3-9e00-94787f040e43             |
| image                                | VF22Image (7b2085b8-4fe7-4d32-a5c9-5eadaf8bfc52) |
| key_name                             | oskeydev                                         |
| metadata                             | {}                                               |
| name                                 | vf22-instance                                    |
| os-extended-volumes:volumes_attached | []                                               |
| progress                             | 0                                                |
| security_groups                      | default                                          |
| status                               | BUILD                                            |
| tenant_id                            | 14f736e6952644b584b2006353ca51be                 |
| updated                              | 2015-07-31T08:03:50Z                             |
| user_id                              | 4ece2385b17a4490b6fc5a01ff53350c                 |
+--------------------------------------+--------------------------------------------------+

[root@fedora22server ~(keystone_demo)]# nova list

+--------------------------------------+---------------+---------+------------+-------------+-----------------------------------+
| ID                                   | Name          | Status  | Task State | Power State | Networks                          |
+--------------------------------------+---------------+---------+------------+-------------+-----------------------------------+
| 93906a61-ec0b-481d-b964-2bb99d095646 | CentOS71RLX   | SHUTOFF | -          | Shutdown    | demo_net=50.0.0.21, 192.168.1.159 |
| ac7e9be5-d2dc-4ec0-b0a1-4096b552e578 | VF22Devpin    | ACTIVE  | -          | Running     | demo_net=50.0.0.22                |
| b93c9526-ded5-4b7a-ae3a-106b34317744 | VF22Devs      | SHUTOFF | -          | Shutdown    | demo_net=50.0.0.19, 192.168.1.157 |
| bef20a1e-3faa-4726-a301-73ca49666fa6 | WinSrv2012    | SHUTOFF | -          | Shutdown    | demo_net=50.0.0.16                |
| 4b99f3cf-3126-48f3-9e00-94787f040e43 | vf22-instance | ACTIVE  | -          | Running     | demo_net=50.0.0.23, 192.168.1.160 |
+--------------------------------------+---------------+---------+------------+-------------+-----------------------------------+

[root@fedora22server ~(keystone_demo)]# virsh list

 Id    Name                State
----------------------------------------------------
 2     instance-0000000c   running
 3     instance-0000000d   running

Please see http://redhatstackblog.redhat.com/2015/05/05/cpu-pinning-and-numa-topology-awareness-in-openstack-compute/
for a detailed explanation of the highlighted blocks, keeping in mind that pinning here is done to logical CPU cores rather than physical ones (the host has a 4-core CPU with Hyper-Threading enabled). Multiple NUMA cells are also absent, due to limitations of the i7 47XX Haswell CPU architecture.

[root@fedora22server ~(keystone_demo)]# virsh dumpxml instance-0000000d > vf22-instance.xml

<domain type='kvm' id='3'>
<name>instance-0000000d</name>
<uuid>4b99f3cf-3126-48f3-9e00-94787f040e43</uuid>
<metadata>
<nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.0">
<nova:package version="2015.1.0-3.fc23"/>
<nova:name>vf22-instance</nova:name>
<nova:creationTime>2015-07-31 08:03:54</nova:creationTime>
<nova:flavor name="m1.small.performance">
<nova:memory>4096</nova:memory>
<nova:disk>20</nova:disk>
<nova:swap>0</nova:swap>
<nova:ephemeral>0</nova:ephemeral>
<nova:vcpus>4</nova:vcpus>
</nova:flavor>
<nova:owner>
<nova:user uuid="4ece2385b17a4490b6fc5a01ff53350c">demo</nova:user>
<nova:project uuid="14f736e6952644b584b2006353ca51be">demo</nova:project>
</nova:owner>
<nova:root type="image" uuid="7b2085b8-4fe7-4d32-a5c9-5eadaf8bfc52"/>
</nova:instance>
</metadata>
<memory unit='KiB'>4194304</memory>
<currentMemory unit='KiB'>4194304</currentMemory>
<vcpu placement='static'>4</vcpu>
<cputune>
<shares>4096</shares>
<vcpupin vcpu='0' cpuset='2'/>
<vcpupin vcpu='1' cpuset='6'/>
<vcpupin vcpu='2' cpuset='3'/>
<vcpupin vcpu='3' cpuset='7'/>
<emulatorpin cpuset='2-3,6-7'/>
</cputune>
<numatune>
<memory mode='strict' nodeset='0'/>
<memnode cellid='0' mode='strict' nodeset='0'/>
</numatune>
<resource>
<partition>/machine</partition>
</resource>
<sysinfo type='smbios'>
<system>
<entry name='manufacturer'>Fedora Project</entry>
<entry name='product'>OpenStack Nova</entry>
<entry name='version'>2015.1.0-3.fc23</entry>
<entry name='serial'>f1b336b1-6abf-4180-865a-b6be5670352e</entry>
<entry name='uuid'>4b99f3cf-3126-48f3-9e00-94787f040e43</entry>
</system>
</sysinfo>
<os>
<type arch='x86_64' machine='pc-i440fx-2.3'>hvm</type>
<boot dev='hd'/>
<smbios mode='sysinfo'/>
</os>
<features>
<acpi/>
<apic/>
</features>
<cpu mode='host-model'>
<model fallback='allow'/>
<topology sockets='2' cores='1' threads='2'/>
<numa>
<cell id='0' cpus='0-3' memory='4194304' unit='KiB'/>
</numa>
</cpu>
<clock offset='utc'>
<timer name='pit' tickpolicy='delay'/>
<timer name='rtc' tickpolicy='catchup'/>
<timer name='hpet' present='no'/>
</clock>
<on_poweroff>destroy</on_poweroff>
<on_reboot>restart</on_reboot>
<on_crash>destroy</on_crash>
<devices>
<emulator>/usr/bin/qemu-kvm</emulator>
<disk type='file' device='disk'>
<driver name='qemu' type='qcow2' cache='none'/>
<source file='/var/lib/nova/instances/4b99f3cf-3126-48f3-9e00-94787f040e43/disk'/>
<backingStore type='file' index='1'>
<format type='raw'/>
<source file='/var/lib/nova/instances/_base/6c60a5ed1b3037bbdb2bed198dac944f4c0d09cb'/>
<backingStore/>
</backingStore>
<target dev='vda' bus='virtio'/>
<alias name='virtio-disk0'/>
<address type='pci' domain='0x0000' bus='0x00' slot='0x06' function='0x0'/>
</disk>
<controller type='usb' index='0'>
<alias name='usb0'/>
<address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x2'/>
</controller>
<controller type='pci' index='0' model='pci-root'>
<alias name='pci.0'/>
</controller>
<controller type='virtio-serial' index='0'>
<alias name='virtio-serial0'/>
<address type='pci' domain='0x0000' bus='0x00' slot='0x05' function='0x0'/>
</controller>
<interface type='bridge'>
<mac address='fa:16:3e:4f:25:03'/>
<source bridge='qbr567b21fe-52'/>
<target dev='tap567b21fe-52'/>
<model type='virtio'/>
<alias name='net0'/>
<address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x0'/>
</interface>
<serial type='file'>
<source path='/var/lib/nova/instances/4b99f3cf-3126-48f3-9e00-94787f040e43/console.log'/>
<target port='0'/>
<alias name='serial0'/>
</serial>
<serial type='pty'>
<source path='/dev/pts/2'/>
<target port='1'/>
<alias name='serial1'/>
</serial>
<console type='file'>
<source path='/var/lib/nova/instances/4b99f3cf-3126-48f3-9e00-94787f040e43/console.log'/>
<target type='serial' port='0'/>
<alias name='serial0'/>
</console>
<channel type='spicevmc'>
<target type='virtio' name='com.redhat.spice.0' state='disconnected'/>
<alias name='channel0'/>
<address type='virtio-serial' controller='0' bus='0' port='1'/>
</channel>
<input type='mouse' bus='ps2'/>
<input type='keyboard' bus='ps2'/>
<graphics type='spice' port='5901' autoport='yes' listen='0.0.0.0' keymap='en-us'>
<listen type='address' address='0.0.0.0'/>
</graphics>
<sound model='ich6'>
<alias name='sound0'/>
<address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x0'/>
</sound>
<video>
<model type='qxl' ram='65536' vram='65536' vgamem='16384' heads='1'/>
<alias name='video0'/>
<address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x0'/>
</video>
<memballoon model='virtio'>
<alias name='balloon0'/>
<address type='pci' domain='0x0000' bus='0x00' slot='0x07' function='0x0'/>
<stats period='10'/>
</memballoon>
</devices>
<seclabel type='dynamic' model='selinux' relabel='yes'>
<label>system_u:system_r:svirt_t:s0:c359,c706</label>
<imagelabel>system_u:object_r:svirt_image_t:s0:c359,c706</imagelabel>
</seclabel>
</domain>
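The pinning layout in the &lt;cputune&gt; block above can be checked programmatically. Below is a minimal sketch, not part of Nova itself; the helper name and the inlined XML fragment are illustrative, taken from the dump above:

```python
import xml.etree.ElementTree as ET

# Fragment of the dumped domain XML carrying the pinning layout
xml_fragment = """
<domain>
  <cputune>
    <shares>4096</shares>
    <vcpupin vcpu='0' cpuset='2'/>
    <vcpupin vcpu='1' cpuset='6'/>
    <vcpupin vcpu='2' cpuset='3'/>
    <vcpupin vcpu='3' cpuset='7'/>
    <emulatorpin cpuset='2-3,6-7'/>
  </cputune>
</domain>
"""

def pinned_cpus(xml_text):
    """Return {vcpu: host_logical_cpu} from the <cputune> section."""
    root = ET.fromstring(xml_text)
    return {int(pin.get('vcpu')): int(pin.get('cpuset'))
            for pin in root.iter('vcpupin')}

pins = pinned_cpus(xml_fragment)
# All four vCPUs land on logical cores 2,3 and their HT siblings 6,7
print(pins)  # {0: 2, 1: 6, 2: 3, 3: 7}
```

The same approach works on the full vf22-instance.xml file via `ET.parse()`.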

Switching to Dashboard Spice Console in RDO Kilo on Fedora 22

July 3, 2015

*************************
UPDATE 06/27/2015
*************************
# dnf install -y https://rdoproject.org/repos/rdo-release.rpm
# dnf install -y openstack-packstack
# dnf install fedora-repos-rawhide
# dnf --enablerepo=rawhide update openstack-packstack
Fedora – Rawhide – Developmental packages for the next Fedora re 1.7 MB/s |  45 MB     00:27
Last metadata expiration check performed 0:00:39 ago on Sat Jun 27 13:23:03 2015.
Dependencies resolved.
==============================================================
Package                       Arch      Version                                Repository  Size
==============================================================
Upgrading:
openstack-packstack           noarch    2015.1-0.7.dev1577.gc9f8c3c.fc23       rawhide    233 k
openstack-packstack-puppet    noarch    2015.1-0.7.dev1577.gc9f8c3c.fc23       rawhide     233 k
Transaction Summary
==============================================================
Upgrade  2 Packages
.  .  .  .  .
# dnf install python3-pyOpenSSL.noarch
At this point run :-
# packstack --gen-answer-file answer-file-aio.txt
and set
CONFIG_KEYSTONE_SERVICE_NAME=httpd
I also commented out the second line in /etc/httpd/conf.d/mod_dnssd.conf
Then run `packstack --answer-file=./answer-file-aio.txt`; however, you will still need to pre-patch provision_demo.pp at the moment
( see the third patch at http://textuploader.com/yn0v ), the rest should work fine.
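The answer-file tweaks above are plain KEY=VALUE substitutions, so they can be scripted instead of edited by hand. A minimal sketch; the `set_option` helper and the in-memory file are illustrative, not part of packstack:

```python
def set_option(lines, key, value):
    """Replace (or append) KEY=VALUE in a packstack answer file's lines."""
    out, found = [], False
    for line in lines:
        if line.split('=', 1)[0].strip() == key:
            out.append(f"{key}={value}")
            found = True
        else:
            out.append(line)
    if not found:
        out.append(f"{key}={value}")
    return out

answers = ["CONFIG_NAGIOS_INSTALL=y", "CONFIG_KEYSTONE_SERVICE_NAME=keystone"]
answers = set_option(answers, "CONFIG_KEYSTONE_SERVICE_NAME", "httpd")
print(answers[1])  # CONFIG_KEYSTONE_SERVICE_NAME=httpd
```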

Upon completion you may try to follow :-
https://www.rdoproject.org/Neutron_with_existing_external_network

I didn't test it on Fedora 22; I just created external and private networks of VXLAN type and configured :-
 
[root@ServerFedora22 network-scripts(keystone_admin)]# cat ifcfg-br-ex

DEVICE="br-ex"
BOOTPROTO="static"
IPADDR="192.168.1.32"
NETMASK="255.255.255.0"
DNS1="8.8.8.8"
BROADCAST="192.168.1.255"
GATEWAY="192.168.1.1"
NM_CONTROLLED="no"
TYPE="OVSIntPort"
OVS_BRIDGE=br-ex
DEVICETYPE="ovs"
DEFROUTE="yes"
IPV4_FAILURE_FATAL="yes"
IPV6INIT=no

[root@ServerFedora22 network-scripts(keystone_admin)]# cat ifcfg-enp2s0

DEVICE="enp2s0"
ONBOOT="yes"
HWADDR="90:E6:BA:2D:11:EB"
TYPE="OVSPort"
DEVICETYPE="ovs"
OVS_BRIDGE=br-ex
NM_CONTROLLED=no
IPV6INIT=no

When the configuration above is done :-

# chkconfig network on
# systemctl stop NetworkManager
# systemctl disable NetworkManager
# reboot

*************************
UPDATE 06/26/2015
*************************

To install RDO Kilo on Fedora 22 :-
after `dnf -y install openstack-packstack`
# cd /usr/lib/python2.7/site-packages/packstack/puppet/templates
Then apply the following 3 patches
# cd ; packstack --gen-answer-file answer-file-aio.txt
Set "CONFIG_NAGIOS_INSTALL=n" in answer-file-aio.txt
# packstack --answer-file=./answer-file-aio.txt

************************
UPDATE 05/19/2015
************************
MATE Desktop supports sound (via the patch mentioned below) on RDO Kilo cloud instances F22, F21, F20. The RDO Kilo AIO install was performed on bare metal.
Also, a Windows Server 2012 (evaluation version) cloud VM provides pretty stable video/sound ( http://www.cloudbase.it/windows-cloud-images/ ).

************************
UPDATE 05/14/2015
************************
I've got sound working on a CentOS 7 VM (connecting to the console via virt-manager) with a slightly updated patch from Y. Kawada, setting self.type to "ich6". RDO Kilo was installed on a bare-metal AIO testing host running Fedora 22. The same results have been obtained for RDO Kilo on CentOS 7.1. However, a connection to the spice console with cut&&paste and sound enabled may be obtained via spicy (remote connection).

Generated libvirt.xml

<domain type="kvm">
<uuid>455877f2-7070-48a7-bb24-e0702be2fbc5</uuid>
<name>instance-00000003</name>
<memory>2097152</memory>
<vcpu cpuset="0-7">1</vcpu>
<metadata>
<nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.0">
<nova:package version="2015.1.0-3.el7"/>
<nova:name>CentOS7RSX05</nova:name>
<nova:creationTime>2015-06-14 18:42:11</nova:creationTime>
<nova:flavor name="m1.small">
<nova:memory>2048</nova:memory>
<nova:disk>20</nova:disk>
<nova:swap>0</nova:swap>
<nova:ephemeral>0</nova:ephemeral>
<nova:vcpus>1</nova:vcpus>
</nova:flavor>
<nova:owner>
<nova:user uuid="da79d2c66db747eab942bdbe20bb3f44">demo</nova:user>
<nova:project uuid="8c9defac20a74633af4bb4773e45f11e">demo</nova:project>
</nova:owner>
<nova:root type="image" uuid="4a2d708c-7624-439f-9e7e-6e133062e23a"/>
</nova:instance>
</metadata>
<sysinfo type="smbios">
<system>
<entry name="manufacturer">Fedora Project</entry>
<entry name="product">OpenStack Nova</entry>
<entry name="version">2015.1.0-3.el7</entry>
<entry name="serial">b3fae7c3-10bd-455b-88b7-95e586342203</entry>
<entry name="uuid">455877f2-7070-48a7-bb24-e0702be2fbc5</entry>
</system>
</sysinfo>
<os>
<type>hvm</type>
<boot dev="hd"/>
<smbios mode="sysinfo"/>
</os>
<features>
<acpi/>
<apic/>
</features>
<cputune>
<shares>1024</shares>
</cputune>
<clock offset="utc">
<timer name="pit" tickpolicy="delay"/>
<timer name="rtc" tickpolicy="catchup"/>
<timer name="hpet" present="no"/>
</clock>
<cpu mode="host-model" match="exact">
<topology sockets="1" cores="1" threads="1"/>
</cpu>
<devices>
<disk type="file" device="disk">
<driver name="qemu" type="qcow2" cache="none"/>
<source file="/var/lib/nova/instances/455877f2-7070-48a7-bb24-e0702be2fbc5/disk"/>
<target bus="virtio" dev="vda"/>
</disk>
<interface type="bridge">
<mac address="fa:16:3e:87:4b:29"/>
<model type="virtio"/>
<source bridge="qbr8ce9ae7b-f0"/>
<target dev="tap8ce9ae7b-f0"/>
</interface>
<serial type="file">
<source path="/var/lib/nova/instances/455877f2-7070-48a7-bb24-e0702be2fbc5/console.log"/>
</serial>
<serial type="pty"/>
<channel type="spicevmc">
<target type="virtio" name="com.redhat.spice.0"/>
</channel>
<graphics type="spice" autoport="yes" keymap="en-us" listen="0.0.0.0"/>
<video>
<model type="qxl"/>
</video>
<sound model="ich6"/>
<memballoon model="virtio">
<stats period="10"/>
</memballoon>
</devices>
</domain>

*****************
END UPDATE
*****************
The post follows up http://lxer.com/module/newswire/view/214893/index.html
The most recent `yum update` on F22 significantly improved network performance on cloud VMs (L2). Watching movies on a cloud F22 VM (with MATE Desktop installed and functioning pretty smoothly), but without sound, refreshes spice memories; view https://bugzilla.redhat.com/show_bug.cgi?format=multiple&id=913607
# dnf -y install spice-html5 ( installed on Controller && Compute)
# dnf -y install openstack-nova-spicehtml5proxy (Compute Node)
# rpm -qa | grep openstack-nova-spicehtml5proxy
openstack-nova-spicehtml5proxy-2015.1.0-3.fc23.noarch

***********************************************************************
Update /etc/nova/nova.conf on Controller && Compute Node as follows :-
***********************************************************************

[DEFAULT]
. . . . .
web=/usr/share/spice-html5
. . . . . .
spicehtml5proxy_host=0.0.0.0  (only Compute)
spicehtml5proxy_port=6082     (only Compute)
. . . . . . .
# Disable VNC
vnc_enabled=false
. . . . . . .
[spice]

# Compute Node Management IP 192.169.142.137
html5proxy_base_url=http://192.169.142.137:6082/spice_auto.html
server_proxyclient_address=127.0.0.1 ( only  Compute )
server_listen=0.0.0.0 ( only  Compute )
enabled=true
agent_enabled=true
keymap=en-us

:wq
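The nova.conf edits above are ordinary INI updates, so the same change can be expressed with Python's configparser. This is a sketch on an in-memory config, not the real /etc/nova/nova.conf; the values mirror the listing above:

```python
import configparser

cfg = configparser.ConfigParser()
# [DEFAULT] entries: disable VNC, point at the spice-html5 assets
cfg['DEFAULT'] = {'vnc_enabled': 'false', 'web': '/usr/share/spice-html5'}
# [spice] entries from the listing above
cfg['spice'] = {
    'html5proxy_base_url': 'http://192.169.142.137:6082/spice_auto.html',
    'enabled': 'true',
    'agent_enabled': 'true',
    'keymap': 'en-us',
}

assert cfg.getboolean('spice', 'enabled') is True
assert cfg['DEFAULT']['vnc_enabled'] == 'false'
```

Writing the result back would be `cfg.write(open(path, 'w'))` against the actual file.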

# service httpd restart ( on Controller )
Next actions to be performed on Compute Node

# service openstack-nova-compute restart
# service openstack-nova-spicehtml5proxy start
# systemctl enable openstack-nova-spicehtml5proxy

On Controller

[root@ip-192-169-142-127 ~(keystone_admin)]# nova list --all-tenants

+--------------------------------------+-----------+----------------------------------+---------+------------+-------------+----------------------------------+
| ID                                   | Name      | Tenant ID                        | Status  | Task State | Power State | Networks                         |
+--------------------------------------+-----------+----------------------------------+---------+------------+-------------+----------------------------------+
| 6c8ef008-e8e0-4f1c-af17-b5f846f8b2d9 | CirrOSDev | 7e5a0f3ec3fe45dc83ae0947ef52adc3 | SHUTOFF | -          | Shutdown    | demo_net=50.0.0.11, 172.24.4.228 |
| cfd735ea-d9a8-4c4e-9a77-03035f01d443 | VF22DEVS  | 7e5a0f3ec3fe45dc83ae0947ef52adc3 | ACTIVE  | -          | Running     | demo_net=50.0.0.14, 172.24.4.231 |
+--------------------------------------+-----------+----------------------------------+---------+------------+-------------+----------------------------------+

[root@ip-192-169-142-127 ~(keystone_admin)]# nova get-spice-console cfd735ea-d9a8-4c4e-9a77-03035f01d443 spice-html5

+-------------+----------------------------------------------------------------------------------------+
| Type        | Url                                                                                    |
+-------------+----------------------------------------------------------------------------------------+
| spice-html5 | http://192.169.142.137:6082/spice_auto.html?token=24fb65c7-e7e9-4727-bad3-ba7c2c29f7f4 |
+-------------+----------------------------------------------------------------------------------------+
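Each get-spice-console URL carries a one-time token as a query parameter; a quick sketch of pulling it out with the standard library (the `console_token` helper is illustrative):

```python
from urllib.parse import urlparse, parse_qs

url = ("http://192.169.142.137:6082/spice_auto.html"
       "?token=24fb65c7-e7e9-4727-bad3-ba7c2c29f7f4")

def console_token(u):
    """Extract the 'token' query parameter from a console URL."""
    return parse_qs(urlparse(u).query)['token'][0]

print(console_token(url))  # 24fb65c7-e7e9-4727-bad3-ba7c2c29f7f4
```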

Session running by virt-manager on Virtualization Host ( F22 )

Connection to Compute Node 192.169.142.137 has been activated


Once again about pros/cons of Systemd and Upstart

May 16, 2015

Upstart advantages.

1. Upstart is easier to port to systems other than Linux, while systemd is tightly bound to Linux kernel capabilities. Adapting Upstart to work on Debian GNU/kFreeBSD and Debian GNU/Hurd looks like a realistic task, which cannot be said of systemd.

2. Upstart is more familiar to the Debian developers, many of whom also participate in Ubuntu development. Two members of the Debian Technical Committee (Steve Langasek and Colin Watson) are part of the Upstart development team.

3. Upstart is simpler and more lightweight than systemd; as a result, less code means fewer bugs. Upstart is better suited for integration with the code of system daemons. The systemd policy amounts to daemon authors having to adapt to its upstream (providing an analog compatible at the level of the external interface when replacing a systemd component), instead of the upstream providing convenient facilities for daemon developers.

4. Upstart is simpler in terms of maintenance and packaging, and the community of Upstart developers is more open to collaboration. With systemd it is necessary to take its methods for granted and follow them, for example supporting a separate "/usr" partition or using only absolute paths for startup. The shortcomings of Upstart belong to the category of fixable problems; in its current state Upstart is already completely ready for use in Debian 8.0 (Jessie).

5. Upstart offers a more familiar model for defining a service configuration, unlike systemd, where the settings in /etc override the base settings of units defined in the /lib hierarchy. Using Upstart would maintain healthy competition, which would encourage the development of different approaches and keep developers in good shape.

Systemd advantages

1. Without a substantial rework of its architecture, Upstart will not be able to catch up with systemd in functionality, for example the inverted model of dependency startup (instead of starting all required dependencies when a given service starts, a service in Upstart is started upon receiving an event that its dependencies are available).

2. Upstart's use of ptrace interferes with its use for daemons such as avahi, apache and postfix. Upstart also lacks the ability to activate a service only upon an actual connection to a socket rather than by indirect signs, dependency on the activation of another socket, and reliable tracking of the state of running processes.

3. Systemd contains a fairly self-sufficient set of components, which allows concentrating attention on fixing problems rather than extending an Upstart configuration toward capabilities already present in systemd. For example, Upstart lacks: support for detailed status reporting and logging of daemon operation, multiple socket activation, socket activation for IPv6 and UDP, and a flexible mechanism for limiting resources.

4. Using systemd makes it possible to bring together and unify the management tools of various distributions. Systemd has already been adopted by RHEL 7.X, CentOS 7.X, Fedora, openSUSE, Sabayon, Mandriva and Arch Linux.

5. Systemd has a more active, larger and more versatile community of developers, including engineers from the SUSE and Red Hat companies. With Upstart a distribution becomes dependent on Canonical, without whose support Upstart would be left without developers and doomed to stagnation. Participation in Upstart development requires signing an agreement transferring property rights to the Canonical company. The Red Hat company decided to replace Upstart with systemd for good reason, and the Debian project has already been compelled to migrate to systemd. Implementing some boot capabilities in Upstart requires fragments of shell scripts, which makes the initialization process less reliable and more labor-intensive to debug.

6. Systemd support is implemented in GNOME and KDE, which use systemd capabilities more and more actively (for example, facilities for managing user sessions and starting each application in a separate cgroup). GNOME continues to be positioned as the main desktop environment of Debian, but relations between the Ubuntu/Upstart and GNOME projects have clearly been strained.

References

http://www.opennet.ru/opennews/art.shtml?num=38762


RDO Kilo Three Node Setup for Controller+Network+Compute (ML2&OVS&VXLAN) on CentOS 7.1

May 9, 2015

Following below is a brief instruction for a traditional three-node deployment test (Controller && Network && Compute) for the oncoming RDO Kilo, which was performed on a Fedora 21 host with KVM/Libvirt hypervisor (16 GB RAM, Intel Core i7-4771 Haswell CPU, ASUS Z97-P). Three VMs (4 GB RAM, 2 VCPUS) have been set up: the Controller VM with one VNIC (management subnet), the Network Node VM with three VNICs (management, vtep's and external subnets), and the Compute Node VM with two VNICs (management and vtep's subnets).

SELINUX stays in enforcing mode.

Three Libvirt networks created

# cat openstackvms.xml

<network>
<name>openstackvms</name>
<uuid>d0e9964a-f91a-40c0-b769-a609aee41bf2</uuid>
<forward mode='nat'>
<nat>
<port start='1024' end='65535'/>
</nat>
</forward>
<bridge name='virbr2' stp='on' delay='0' />
<mac address='52:54:00:60:f8:6d'/>
<ip address='192.169.142.1' netmask='255.255.255.0'>
<dhcp>
<range start='192.169.142.2' end='192.169.142.254' />
</dhcp>
</ip>
</network>

[root@junoJVC01 ~]# cat public.xml

<network>
<name>public</name>
<uuid>d0e9965b-f92c-40c1-b749-b609aed42cf2</uuid>
<forward mode='nat'>
<nat>
<port start='1024' end='65535'/>
</nat>
</forward>
<bridge name='virbr3' stp='on' delay='0' />
<mac address='52:54:00:60:f8:6d'/>
<ip address='172.24.4.225' netmask='255.255.255.240'>
<dhcp>
<range start='172.24.4.226' end='172.24.4.238' />
</dhcp>
</ip>
</network>

[root@junoJVC01 ~]# cat vteps.xml

<network>
<name>vteps</name>
<uuid>d0e9965b-f92c-40c1-b749-b609aed42cf2</uuid>
<forward mode='nat'>
<nat>
<port start='1024' end='65535'/>
</nat>
</forward>
<bridge name='virbr4' stp='on' delay='0' />
<mac address='52:54:00:60:f8:6d'/>
<ip address='10.0.0.1' netmask='255.255.255.0'>
<dhcp>
<range start='10.0.0.1' end='10.0.0.254' />
</dhcp>
</ip>
</network>
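The three network definitions above share the same shape, so a quick sanity check of the bridge name and DHCP range can be scripted. A sketch parsing the vteps definition (the fragment is inlined here for self-containment):

```python
import xml.etree.ElementTree as ET

vteps_xml = """
<network>
  <name>vteps</name>
  <forward mode='nat'/>
  <bridge name='virbr4' stp='on' delay='0'/>
  <ip address='10.0.0.1' netmask='255.255.255.0'>
    <dhcp>
      <range start='10.0.0.1' end='10.0.0.254'/>
    </dhcp>
  </ip>
</network>
"""

net = ET.fromstring(vteps_xml)
bridge = net.find('bridge').get('name')        # which virbrN the net maps to
rng = net.find('./ip/dhcp/range')              # the DHCP lease range
print(bridge, rng.get('start'), rng.get('end'))
# virbr4 10.0.0.1 10.0.0.254
```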

[root@junoJVC01 ~]# virsh net-list

 Name                 State      Autostart     Persistent
----------------------------------------------------------
 default              active     yes           yes
 openstackvms         active     yes           yes
 public               active     yes           yes
 vteps                active     yes           yes

*********************************************************************************
1. The first Libvirt subnet "openstackvms" serves as the management network.
All three VMs are attached to this subnet.
*********************************************************************************
2. The second Libvirt subnet "public" simulates the external network. The Network Node is attached to it; later the "eth3" interface (which belongs to "public") is converted into an OVS port of br-ex on the Network Node. Via interface virbr3 (172.24.4.225) this Libvirt subnet provides VMs running on the Compute Node with access to the Internet, since it matches the external network 172.24.4.224/28 created by the packstack installation.
*********************************************************************************
3. The third Libvirt subnet "vteps" serves for VTEP endpoint simulation. The Network and Compute Node VMs are attached to this subnet.
*********************************************************************************
Start testing following the RH instructions
per https://www.rdoproject.org/RDO_test_day_Kilo#How_To_Test

# yum install http://rdoproject.org/repos/openstack-kilo/rdo-testing-kilo.rpm
# yum install -y openstack-packstack
*******************************************************
Install rdo-testing-kilo.rpm on all three nodes due to
*******************************************************

https://bugzilla.redhat.com/show_bug.cgi?id=1218750

# yum install http://rdoproject.org/repos/openstack-kilo/rdo-testing-kilo.rpm

Keep SELINUX=enforcing.
Package openstack-selinux-0.6.31-1.el7.noarch will be installed by the prescript
puppet on all nodes of the deployment.

*********************
Answer-file :-
*********************

[root@ip-192-169-142-127 ~(keystone_admin)]# cat answer-fileRHTest.txt

[general]
CONFIG_SSH_KEY=/root/.ssh/id_rsa.pub
CONFIG_DEFAULT_PASSWORD=
CONFIG_MARIADB_INSTALL=y
CONFIG_GLANCE_INSTALL=y
CONFIG_CINDER_INSTALL=y
CONFIG_NOVA_INSTALL=y
CONFIG_NEUTRON_INSTALL=y
CONFIG_HORIZON_INSTALL=y
CONFIG_SWIFT_INSTALL=y
CONFIG_CEILOMETER_INSTALL=y
CONFIG_HEAT_INSTALL=n
CONFIG_CLIENT_INSTALL=y
CONFIG_NTP_SERVERS=
CONFIG_NAGIOS_INSTALL=y
EXCLUDE_SERVERS=
CONFIG_DEBUG_MODE=n
CONFIG_CONTROLLER_HOST=192.169.142.127
CONFIG_COMPUTE_HOSTS=192.169.142.137
CONFIG_NETWORK_HOSTS=192.169.142.147
CONFIG_VMWARE_BACKEND=n
CONFIG_UNSUPPORTED=n
CONFIG_VCENTER_HOST=
CONFIG_VCENTER_USER=
CONFIG_VCENTER_PASSWORD=
CONFIG_VCENTER_CLUSTER_NAME=
CONFIG_STORAGE_HOST=192.169.142.127
CONFIG_USE_EPEL=y
CONFIG_REPO=
CONFIG_RH_USER=
CONFIG_SATELLITE_URL=
CONFIG_RH_PW=
CONFIG_RH_OPTIONAL=y
CONFIG_RH_PROXY=
CONFIG_RH_PROXY_PORT=
CONFIG_RH_PROXY_USER=
CONFIG_RH_PROXY_PW=
CONFIG_SATELLITE_USER=
CONFIG_SATELLITE_PW=
CONFIG_SATELLITE_AKEY=
CONFIG_SATELLITE_CACERT=
CONFIG_SATELLITE_PROFILE=
CONFIG_SATELLITE_FLAGS=
CONFIG_SATELLITE_PROXY=
CONFIG_SATELLITE_PROXY_USER=
CONFIG_SATELLITE_PROXY_PW=
CONFIG_AMQP_BACKEND=rabbitmq
CONFIG_AMQP_HOST=192.169.142.127
CONFIG_AMQP_ENABLE_SSL=n
CONFIG_AMQP_ENABLE_AUTH=n
CONFIG_AMQP_NSS_CERTDB_PW=PW_PLACEHOLDER
CONFIG_AMQP_SSL_PORT=5671
CONFIG_AMQP_SSL_CERT_FILE=/etc/pki/tls/certs/amqp_selfcert.pem
CONFIG_AMQP_SSL_KEY_FILE=/etc/pki/tls/private/amqp_selfkey.pem
CONFIG_AMQP_SSL_SELF_SIGNED=y
CONFIG_AMQP_AUTH_USER=amqp_user
CONFIG_AMQP_AUTH_PASSWORD=PW_PLACEHOLDER
CONFIG_MARIADB_HOST=192.169.142.127
CONFIG_MARIADB_USER=root
CONFIG_MARIADB_PW=7207ae344ed04957
CONFIG_KEYSTONE_DB_PW=abcae16b785245c3
CONFIG_KEYSTONE_REGION=RegionOne
CONFIG_KEYSTONE_ADMIN_TOKEN=3ad2de159f9649afb0c342ba57e637d9
CONFIG_KEYSTONE_ADMIN_PW=7049f834927e4468
CONFIG_KEYSTONE_DEMO_PW=bf737b785cfa4398
CONFIG_KEYSTONE_TOKEN_FORMAT=UUID
CONFIG_KEYSTONE_SERVICE_NAME=keystone
CONFIG_GLANCE_DB_PW=41264fc52ffd4fe8
CONFIG_GLANCE_KS_PW=f6a9398960534797
CONFIG_GLANCE_BACKEND=file
CONFIG_CINDER_DB_PW=5ac08c6d09ba4b69
CONFIG_CINDER_KS_PW=c8cb1ecb8c2b4f6f
CONFIG_CINDER_BACKEND=lvm
CONFIG_CINDER_VOLUMES_CREATE=y
CONFIG_CINDER_VOLUMES_SIZE=10G
CONFIG_CINDER_GLUSTER_MOUNTS=
CONFIG_CINDER_NFS_MOUNTS=
CONFIG_CINDER_NETAPP_LOGIN=
CONFIG_CINDER_NETAPP_PASSWORD=
CONFIG_CINDER_NETAPP_HOSTNAME=
CONFIG_CINDER_NETAPP_SERVER_PORT=80
CONFIG_CINDER_NETAPP_STORAGE_FAMILY=ontap_cluster
CONFIG_CINDER_NETAPP_TRANSPORT_TYPE=http
CONFIG_CINDER_NETAPP_STORAGE_PROTOCOL=nfs
CONFIG_CINDER_NETAPP_SIZE_MULTIPLIER=1.0
CONFIG_CINDER_NETAPP_EXPIRY_THRES_MINUTES=720
CONFIG_CINDER_NETAPP_THRES_AVL_SIZE_PERC_START=20
CONFIG_CINDER_NETAPP_THRES_AVL_SIZE_PERC_STOP=60
CONFIG_CINDER_NETAPP_NFS_SHARES_CONFIG=
CONFIG_CINDER_NETAPP_VOLUME_LIST=
CONFIG_CINDER_NETAPP_VFILER=
CONFIG_CINDER_NETAPP_VSERVER=
CONFIG_CINDER_NETAPP_CONTROLLER_IPS=
CONFIG_CINDER_NETAPP_SA_PASSWORD=
CONFIG_CINDER_NETAPP_WEBSERVICE_PATH=/devmgr/v2
CONFIG_CINDER_NETAPP_STORAGE_POOLS=
CONFIG_NOVA_DB_PW=1e1b5aeeeaf342a8
CONFIG_NOVA_KS_PW=d9583177a2444f06
CONFIG_NOVA_SCHED_CPU_ALLOC_RATIO=16.0
CONFIG_NOVA_SCHED_RAM_ALLOC_RATIO=1.5
CONFIG_NOVA_COMPUTE_MIGRATE_PROTOCOL=tcp
CONFIG_NOVA_COMPUTE_PRIVIF=eth1
CONFIG_NOVA_NETWORK_MANAGER=nova.network.manager.FlatDHCPManager
CONFIG_NOVA_NETWORK_PUBIF=eth0
CONFIG_NOVA_NETWORK_PRIVIF=eth1
CONFIG_NOVA_NETWORK_FIXEDRANGE=192.168.32.0/22
CONFIG_NOVA_NETWORK_FLOATRANGE=10.3.4.0/22
CONFIG_NOVA_NETWORK_DEFAULTFLOATINGPOOL=nova
CONFIG_NOVA_NETWORK_AUTOASSIGNFLOATINGIP=n
CONFIG_NOVA_NETWORK_VLAN_START=100
CONFIG_NOVA_NETWORK_NUMBER=1
CONFIG_NOVA_NETWORK_SIZE=255
CONFIG_NEUTRON_KS_PW=808e36e154bd4cee
CONFIG_NEUTRON_DB_PW=0e2b927a21b44737
CONFIG_NEUTRON_L3_EXT_BRIDGE=br-ex
CONFIG_NEUTRON_L2_PLUGIN=ml2
CONFIG_NEUTRON_METADATA_PW=a965cd23ed2f4502
CONFIG_LBAAS_INSTALL=n
CONFIG_NEUTRON_METERING_AGENT_INSTALL=n
CONFIG_NEUTRON_FWAAS=n
CONFIG_NEUTRON_ML2_TYPE_DRIVERS=vxlan
CONFIG_NEUTRON_ML2_TENANT_NETWORK_TYPES=vxlan
CONFIG_NEUTRON_ML2_MECHANISM_DRIVERS=openvswitch
CONFIG_NEUTRON_ML2_FLAT_NETWORKS=*
CONFIG_NEUTRON_ML2_VLAN_RANGES=
CONFIG_NEUTRON_ML2_TUNNEL_ID_RANGES=1001:2000
CONFIG_NEUTRON_ML2_VXLAN_GROUP=239.1.1.2
CONFIG_NEUTRON_ML2_VNI_RANGES=1001:2000
CONFIG_NEUTRON_L2_AGENT=openvswitch
CONFIG_NEUTRON_LB_TENANT_NETWORK_TYPE=local
CONFIG_NEUTRON_LB_VLAN_RANGES=
CONFIG_NEUTRON_LB_INTERFACE_MAPPINGS=
CONFIG_NEUTRON_OVS_TENANT_NETWORK_TYPE=vxlan
CONFIG_NEUTRON_OVS_VLAN_RANGES=
CONFIG_NEUTRON_OVS_BRIDGE_MAPPINGS=physnet1:br-ex
CONFIG_NEUTRON_OVS_BRIDGE_IFACES=
CONFIG_NEUTRON_OVS_TUNNEL_RANGES=1001:2000
CONFIG_NEUTRON_OVS_TUNNEL_IF=eth1
CONFIG_NEUTRON_OVS_VXLAN_UDP_PORT=4789
CONFIG_HORIZON_SSL=n
CONFIG_SSL_CERT=
CONFIG_SSL_KEY=
CONFIG_SSL_CACHAIN=
CONFIG_SWIFT_KS_PW=8f75bfd461234c30
CONFIG_SWIFT_STORAGES=
CONFIG_SWIFT_STORAGE_ZONES=1
CONFIG_SWIFT_STORAGE_REPLICAS=1
CONFIG_SWIFT_STORAGE_FSTYPE=ext4
CONFIG_SWIFT_HASH=a60aacbedde7429a
CONFIG_SWIFT_STORAGE_SIZE=2G
CONFIG_PROVISION_DEMO=y
CONFIG_PROVISION_TEMPEST=n
CONFIG_PROVISION_TEMPEST_USER=
CONFIG_PROVISION_TEMPEST_USER_PW=44faa4ebc3da4459
CONFIG_PROVISION_DEMO_FLOATRANGE=172.24.4.224/28
CONFIG_PROVISION_TEMPEST_REPO_URI=https://github.com/openstack/tempest.git
CONFIG_PROVISION_TEMPEST_REPO_REVISION=master
CONFIG_PROVISION_ALL_IN_ONE_OVS_BRIDGE=n
CONFIG_HEAT_DB_PW=PW_PLACEHOLDER
CONFIG_HEAT_AUTH_ENC_KEY=fc3fb7fee61e46b0
CONFIG_HEAT_KS_PW=PW_PLACEHOLDER
CONFIG_HEAT_CLOUDWATCH_INSTALL=n
CONFIG_HEAT_USING_TRUSTS=y
CONFIG_HEAT_CFN_INSTALL=n
CONFIG_HEAT_DOMAIN=heat
CONFIG_HEAT_DOMAIN_ADMIN=heat_admin
CONFIG_HEAT_DOMAIN_PASSWORD=PW_PLACEHOLDER
CONFIG_CEILOMETER_SECRET=19ae0e7430174349
CONFIG_CEILOMETER_KS_PW=337b08d4b3a44753
CONFIG_MONGODB_HOST=192.169.142.127
CONFIG_NAGIOS_PW=02f168ee8edd44e4
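CONFIG_NEUTRON_ML2_VNI_RANGES and the tunnel ID ranges above use start:end notation. A small sketch of how such a range string expands (the `parse_range` helper is illustrative, not packstack code):

```python
def parse_range(spec):
    """Parse a 'start:end' range string like the VNI/tunnel settings above."""
    start, end = (int(x) for x in spec.split(':'))
    return start, end

start, end = parse_range("1001:2000")
# 1000 VXLAN network identifiers are available for tenant networks
print(start, end, end - start + 1)  # 1001 2000 1000
```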

**********************************************************************************
Upon packstack completion, on the Network Node create the following files,
designed to match the external network created by the installer
**********************************************************************************

[root@ip-192-169-142-147 network-scripts]# cat ifcfg-br-ex

DEVICE="br-ex"
BOOTPROTO="static"
IPADDR="172.24.4.227"
NETMASK="255.255.255.240"
DNS1="83.221.202.254"
BROADCAST="172.24.4.239"
GATEWAY="172.24.4.225"
NM_CONTROLLED="no"
TYPE="OVSIntPort"
OVS_BRIDGE=br-ex
DEVICETYPE="ovs"
DEFROUTE="yes"
IPV4_FAILURE_FATAL="yes"
IPV6INIT=no

[root@ip-192-169-142-147 network-scripts]# cat ifcfg-eth3

DEVICE="eth3"
# HWADDR=00:22:15:63:E4:E2
ONBOOT="yes"
TYPE="OVSPort"
DEVICETYPE="ovs"
OVS_BRIDGE=br-ex
NM_CONTROLLED=no
IPV6INIT=no
*************************************************
Next steps to be performed on the Network Node :-
*************************************************

# chkconfig network on
# systemctl stop NetworkManager
# systemctl disable NetworkManager
# service network restart

 


[root@ip-192-169-142-147 ~(keystone_admin)]# ovs-vsctl show

d9a60201-a2c2-4c6a-ad9d-63cc2ae296b3
    Bridge br-ex
        Port phy-br-ex
            Interface phy-br-ex
                type: patch
                options: {peer=int-br-ex}
        Port "eth3"
            Interface "eth3"
        Port br-ex
            Interface br-ex
                type: internal
        Port "eth2"
            Interface "eth2"
        Port "qg-d433fa46-e2"
            Interface "qg-d433fa46-e2"
                type: internal
    Bridge br-tun
        fail_mode: secure
        Port "vxlan-0a000089"
            Interface "vxlan-0a000089"
                type: vxlan
                options: {df_default="true", in_key=flow, local_ip="10.0.0.147", out_key=flow, remote_ip="10.0.0.137"}
        Port br-tun
            Interface br-tun
                type: internal
        Port patch-int
            Interface patch-int
                type: patch
                options: {peer=patch-tun}
    Bridge br-int
        fail_mode: secure
        Port int-br-ex
            Interface int-br-ex
                type: patch
                options: {peer=phy-br-ex}
        Port br-int
            Interface br-int
                type: internal
        Port "tap70da94fb-c1"
            tag: 1
            Interface "tap70da94fb-c1"
                type: internal
        Port patch-tun
            Interface patch-tun
                type: patch
                options: {peer=patch-int}
        Port "qr-0737c492-f6"
            tag: 1
            Interface "qr-0737c492-f6"
                type: internal
    ovs_version: "2.3.1"
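The OVS VXLAN port name vxlan-0a000089 above is simply the remote VTEP IP (10.0.0.137) rendered as eight hex digits. A sketch of that mapping (the helper name is illustrative, not the agent's actual function):

```python
def vxlan_port_name(remote_ip):
    """Build the vxlan-<hex ip> port name from the remote VTEP IP address."""
    return "vxlan-" + "".join(f"{int(octet):02x}" for octet in remote_ip.split("."))

# 10 -> 0a, 0 -> 00, 0 -> 00, 137 -> 89
print(vxlan_port_name("10.0.0.137"))  # vxlan-0a000089
```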
**********************************************************
Following below is the Network Node status verification
**********************************************************

[root@ip-192-169-142-147 ~(keystone_admin)]# openstack-status

== neutron services ==

neutron-server:                           inactive  (disabled on boot)
neutron-dhcp-agent:                    active
neutron-l3-agent:                         active
neutron-metadata-agent:              active
neutron-openvswitch-agent:         active
== Support services ==
libvirtd:                               active
openvswitch:                       active
dbus:                                   active
[root@ip-192-169-142-147 ~(keystone_admin)]# neutron net-list

+--------------------------------------+----------+------------------------------------------------------+
| id                                   | name     | subnets                                              |
+--------------------------------------+----------+------------------------------------------------------+
| 7ecdfc27-57cf-410d-9a76-8e9eb76582cb | public   | 5fc0118a-f710-448d-af67-17dbfe01d5fc 172.24.4.224/28 |
| 98dd1928-96e8-47fb-a2fe-49292ae092ba | demo_net | ba2cded7-5546-4a64-aa49-7ef4d077dee3 50.0.0.0/24     |
+--------------------------------------+----------+------------------------------------------------------+

[root@ip-192-169-142-147 ~(keystone_admin)]# neutron router-list

+--------------------------------------+------------+------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+-------------+-------+
| id                                   | name       | external_gateway_info                                                                                                                                                                    | distributed | ha    |
+--------------------------------------+------------+------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+-------------+-------+
| d63ca3f3-5b71-4540-bb5c-01b44ce3081b | RouterDemo | {"network_id": "7ecdfc27-57cf-410d-9a76-8e9eb76582cb", "enable_snat": true, "external_fixed_ips": [{"subnet_id": "5fc0118a-f710-448d-af67-17dbfe01d5fc", "ip_address": "172.24.4.229"}]} | False       | False |
+--------------------------------------+------------+------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+-------------+-------+

[root@ip-192-169-142-147 ~(keystone_admin)]# neutron router-port-list RouterDemo

+--------------------------------------+------+-------------------+-------------------------------------------------------------------------------------+
| id                                   | name | mac_address       | fixed_ips                                                                           |
+--------------------------------------+------+-------------------+-------------------------------------------------------------------------------------+
| 0737c492-f607-4d6a-8e72-ad447453b3c0 |      | fa:16:3e:d7:d0:66 | {"subnet_id": "ba2cded7-5546-4a64-aa49-7ef4d077dee3", "ip_address": "50.0.0.1"}     |
| d433fa46-e203-4fdd-b3f7-dcbc884e9f1e |      | fa:16:3e:02:ef:51 | {"subnet_id": "5fc0118a-f710-448d-af67-17dbfe01d5fc", "ip_address": "172.24.4.229"} |
+--------------------------------------+------+-------------------+-------------------------------------------------------------------------------------+

[root@ip-192-169-142-147 ~(keystone_admin)]# neutron port-show 0737c492-f607-4d6a-8e72-ad447453b3c0 | grep ACTIVE
| status                | ACTIVE                                                                          |

[root@ip-192-169-142-147 ~(keystone_admin)]# dmesg | grep promisc
[   14.174240] device ovs-system entered promiscuous mode
[   14.184284] device br-ex entered promiscuous mode
[   14.200068] device eth2 entered promiscuous mode
[   14.200253] device eth3 entered promiscuous mode
[   14.207443] device br-int entered promiscuous mode
[   14.209360] device br-tun entered promiscuous mode
[   27.311116] device virbr0-nic entered promiscuous mode
[  142.406262] device tap70da94fb-c1 entered promiscuous mode
[  144.045031] device qr-0737c492-f6 entered promiscuous mode
[  144.792618] device qg-d433fa46-e2 entered promiscuous mode

**************************************************************
Compute Node Status
**************************************************************

[root@ip-192-169-142-137 ~]#  dmesg | grep promisc
[    9.683238] device ovs-system entered promiscuous mode
[    9.699664] device br-ex entered promiscuous mode
[    9.735288] device br-int entered promiscuous mode
[    9.748086] device br-tun entered promiscuous mode
[  137.203583] device qvbe7160159-fd entered promiscuous mode
[  137.288235] device qvoe7160159-fd entered promiscuous mode
[  137.715508] device qvbe90ef79b-80 entered promiscuous mode
[  137.796083] device qvoe90ef79b-80 entered promiscuous mode
[  605.884770] device tape90ef79b-80 entered promiscuous mode
[  767.083214] device qvbbf1c441c-ad entered promiscuous mode
[  767.184783] device qvobf1c441c-ad entered promiscuous mode
[  767.446575] device tapbf1c441c-ad entered promiscuous mode
[  973.679071] device qvb3c3e98d7-2d entered promiscuous mode
[  973.775480] device qvo3c3e98d7-2d entered promiscuous mode
[  973.997621] device tap3c3e98d7-2d entered promiscuous mode
[ 1863.868574] device tapbf1c441c-ad left promiscuous mode
[ 1889.386251] device tape90ef79b-80 left promiscuous mode
[ 2256.698108] device tap3c3e98d7-2d left promiscuous mode
[ 2336.931559] device qvb6597428d-5b entered promiscuous mode
[ 2337.021941] device qvo6597428d-5b entered promiscuous mode
[ 2337.283293] device tap6597428d-5b entered promiscuous mode
[ 4092.577561] device tap6597428d-5b left promiscuous mode
[ 4099.798474] device tap6597428d-5b entered promiscuous mode
[ 5098.563689] device tape90ef79b-80 entered promiscuous mode

[root@ip-192-169-142-137 ~]# ovs-vsctl show
a0cb406e-b028-4b09-8849-e6e2869ab051
Bridge br-tun
fail_mode: secure
Port "vxlan-0a000093"
Interface "vxlan-0a000093"
type: vxlan
options: {df_default="true", in_key=flow, local_ip="10.0.0.137", out_key=flow, remote_ip="10.0.0.147"}
Port patch-int
Interface patch-int
type: patch
options: {peer=patch-tun}
Port br-tun
Interface br-tun
type: internal
Bridge br-int
fail_mode: secure
Port "qvoe90ef79b-80"
tag: 1
Interface "qvoe90ef79b-80"
Port br-int
Interface br-int
type: internal
Port "qvobf1c441c-ad"
tag: 1
Interface "qvobf1c441c-ad"
Port patch-tun
Interface patch-tun
type: patch
options: {peer=patch-int}
Port int-br-ex
Interface int-br-ex
type: patch
options: {peer=phy-br-ex}
Port "qvo6597428d-5b"
tag: 1
Interface "qvo6597428d-5b"
Bridge br-ex
Port br-ex
Interface br-ex
type: internal
Port phy-br-ex
Interface phy-br-ex
type: patch
options: {peer=int-br-ex}
ovs_version: "2.3.1"

[root@ip-192-169-142-137 ~]# brctl show

bridge name           bridge id           STP enabled    interfaces
qbr6597428d-5b        8000.1a483dd02cee   no             qvb6597428d-5b
                                                         tap6597428d-5b
qbrbf1c441c-ad        8000.ca2f911ff649   no             qvbbf1c441c-ad
qbre90ef79b-80        8000.16342824f4ba   no             qvbe90ef79b-80
                                                         tape90ef79b-80
**************************************************
Controller Node status verification
**************************************************

[root@ip-192-169-142-127 ~(keystone_admin)]# openstack-status

== Nova services ==

openstack-nova-api:                     active
openstack-nova-cert:                    active
openstack-nova-compute:             inactive  (disabled on boot)
openstack-nova-network:              inactive  (disabled on boot)
openstack-nova-scheduler:           active
openstack-nova-conductor:           active
== Glance services ==
openstack-glance-api:                   active
openstack-glance-registry:            active
== Keystone service ==
openstack-keystone:                     active
== Horizon service ==
openstack-dashboard:                    active
== neutron services ==
neutron-server:                         active
neutron-dhcp-agent:                  inactive  (disabled on boot)
neutron-l3-agent:                       inactive  (disabled on boot)
neutron-metadata-agent:            inactive  (disabled on boot)
== Swift services ==
openstack-swift-proxy:                 active
openstack-swift-account:              active
openstack-swift-container:            active
openstack-swift-object:                 active
== Cinder services ==
openstack-cinder-api:                      active
openstack-cinder-scheduler:            active
openstack-cinder-volume:                active
openstack-cinder-backup:                active
== Ceilometer services ==
openstack-ceilometer-api:                 active
openstack-ceilometer-central:           active
openstack-ceilometer-compute:         inactive  (disabled on boot)
openstack-ceilometer-collector:         active
openstack-ceilometer-alarm-notifier:    active
openstack-ceilometer-alarm-evaluator:   active
openstack-ceilometer-notification:      active
== Support services ==
mysqld:                                    inactive  (disabled on boot)
libvirtd:                                    active
dbus:                                        active
target:                                      active
rabbitmq-server:                       active
memcached:                             active
== Keystone users ==
/usr/lib/python2.7/site-packages/keystoneclient/shell.py:65: DeprecationWarning: The keystone CLI is deprecated in favor of python-openstackclient. For a Python library, continue using python-keystoneclient.

'python-keystoneclient.', DeprecationWarning)

+----------------------------------+------------+---------+----------------------+
|                id                |    name    | enabled |        email         |
+----------------------------------+------------+---------+----------------------+
| 4e1008fd31944fecbb18cdc215af23ec |   admin    |   True  |    root@localhost    |
| 621b84dd4b904760b8aa0cc7b897c95c | ceilometer |   True  | ceilometer@localhost |
| 4d6cdea3b7bc49948890457808c0f6f8 |   cinder   |   True  |   cinder@localhost   |
| 8393bb4de49a44b798af8b118b9f0eb6 |    demo    |   True  |                      |
| f9be6eaa789e4b3c8771372fffb00230 |   glance   |   True  |   glance@localhost   |
| a518b95a92044ad9a4b04f0be90e385f |  neutron   |   True  |  neutron@localhost   |
| 40dddef540fb4fa5a69fb7baa03de657 |    nova    |   True  |    nova@localhost    |
| 5fbb2b97ab9d4192a3f38f090e54ffb1 |   swift    |   True  |   swift@localhost    |
+----------------------------------+------------+---------+----------------------+
== Glance images ==
+--------------------------------------+--------------+-------------+------------------+-----------+--------+
| ID                                   | Name         | Disk Format | Container Format | Size      | Status |
+--------------------------------------+--------------+-------------+------------------+-----------+--------+
| 1b4a6b08-d63c-4d8d-91da-16f6ba177009 | cirros       | qcow2       | bare             | 13200896  | active |
| cb05124d-0d30-43a7-a033-0b7ff0ea1d47 | Fedor21image | qcow2       | bare             | 158443520 | active |
+--------------------------------------+--------------+-------------+------------------+-----------+--------+
== Nova managed services ==
+----+------------------+----------------------------------------+----------+---------+-------+----------------------------+-----------------+
| Id | Binary           | Host                                   | Zone     | Status  | State | Updated_at                 | Disabled Reason |
+----+------------------+----------------------------------------+----------+---------+-------+----------------------------+-----------------+
| 1  | nova-consoleauth | ip-192-169-142-127.ip.secureserver.net | internal | enabled | up    | 2015-05-09T14:14:16.000000 | -               |
| 2  | nova-scheduler   | ip-192-169-142-127.ip.secureserver.net | internal | enabled | up    | 2015-05-09T14:14:17.000000 | -               |
| 3  | nova-conductor   | ip-192-169-142-127.ip.secureserver.net | internal | enabled | up    | 2015-05-09T14:14:16.000000 | -               |
| 4  | nova-cert        | ip-192-169-142-127.ip.secureserver.net | internal | enabled | up    | 2015-05-09T14:14:17.000000 | -               |
| 5  | nova-compute     | ip-192-169-142-137.ip.secureserver.net | nova     | enabled | up    | 2015-05-09T14:14:21.000000 | -               |
+----+------------------+----------------------------------------+----------+---------+-------+----------------------------+-----------------+
== Nova networks ==
+--------------------------------------+----------+------+
| ID                                   | Label    | Cidr |
+--------------------------------------+----------+------+
| 7ecdfc27-57cf-410d-9a76-8e9eb76582cb | public   | -    |
| 98dd1928-96e8-47fb-a2fe-49292ae092ba | demo_net | -    |
+--------------------------------------+----------+------+
== Nova instance flavors ==
+----+-----------+-----------+------+-----------+------+-------+-------------+-----------+
| ID | Name      | Memory_MB | Disk | Ephemeral | Swap | VCPUs | RXTX_Factor | Is_Public |
+----+-----------+-----------+------+-----------+------+-------+-------------+-----------+
| 1  | m1.tiny   | 512       | 1    | 0         |      | 1     | 1.0         | True      |
| 2  | m1.small  | 2048      | 20   | 0         |      | 1     | 1.0         | True      |
| 3  | m1.medium | 4096      | 40   | 0         |      | 2     | 1.0         | True      |
| 4  | m1.large  | 8192      | 80   | 0         |      | 4     | 1.0         | True      |
| 5  | m1.xlarge | 16384     | 160  | 0         |      | 8     | 1.0         | True      |
+----+-----------+-----------+------+-----------+------+-------+-------------+-----------+
== Nova instances ==
+----+------+--------+------------+-------------+----------+
| ID | Name | Status | Task State | Power State | Networks |
+----+------+--------+------------+-------------+----------+
+----+------+--------+------------+-------------+----------+

[root@ip-192-169-142-127 ~(keystone_admin)]# nova hypervisor-list

+----+----------------------------------------+-------+---------+
| ID | Hypervisor hostname                    | State | Status  |
+----+----------------------------------------+-------+---------+
| 1  | ip-192-169-142-137.ip.secureserver.net | up    | enabled |
+----+----------------------------------------+-------+---------+

[root@ip-192-169-142-127 ~(keystone_admin)]# neutron agent-list
+--------------------------------------+--------------------+----------------------------------------+-------+----------------+---------------------------+
| id                                   | agent_type         | host                                   | alive | admin_state_up | binary                    |
+--------------------------------------+--------------------+----------------------------------------+-------+----------------+---------------------------+
| 22af7b3b-232f-4642-9418-d1c8021c7eb5 | Open vSwitch agent | ip-192-169-142-147.ip.secureserver.net | :-)   | True           | neutron-openvswitch-agent |
| 34e1078c-c75b-4d14-b813-b273ea8f7b86 | L3 agent           | ip-192-169-142-147.ip.secureserver.net | :-)   | True           | neutron-l3-agent          |
| 5d652094-6711-409d-8546-e29c09e03d5a | Metadata agent     | ip-192-169-142-147.ip.secureserver.net | :-)   | True           | neutron-metadata-agent    |
| 8a8ad680-1071-4c7f-8787-ba4ef0a7dfb7 | DHCP agent         | ip-192-169-142-147.ip.secureserver.net | :-)   | True           | neutron-dhcp-agent        |
| d81e97af-c210-4855-af06-fb1d139e2e10 | Open vSwitch agent | ip-192-169-142-137.ip.secureserver.net | :-)   | True           | neutron-openvswitch-agent |
+--------------------------------------+--------------------+----------------------------------------+-------+----------------+---------------------------+

[root@ip-192-169-142-127 ~(keystone_admin)]# nova service-list

+----+------------------+----------------------------------------+----------+---------+-------+----------------------------+-----------------+
| Id | Binary           | Host                                   | Zone     | Status  | State | Updated_at                 | Disabled Reason |
+----+------------------+----------------------------------------+----------+---------+-------+----------------------------+-----------------+
| 1  | nova-consoleauth | ip-192-169-142-127.ip.secureserver.net | internal | enabled | up    | 2015-05-09T14:15:16.000000 | -               |
| 2  | nova-scheduler   | ip-192-169-142-127.ip.secureserver.net | internal | enabled | up    | 2015-05-09T14:15:17.000000 | -               |
| 3  | nova-conductor   | ip-192-169-142-127.ip.secureserver.net | internal | enabled | up    | 2015-05-09T14:15:16.000000 | -               |
| 4  | nova-cert        | ip-192-169-142-127.ip.secureserver.net | internal | enabled | up    | 2015-05-09T14:15:17.000000 | -               |
| 5  | nova-compute     | ip-192-169-142-137.ip.secureserver.net | nova     | enabled | up    | 2015-05-09T14:15:21.000000 | -               |
+----+------------------+----------------------------------------+----------+---------+-------+----------------------------+-----------------+


Nova libvirt-xen driver fails to schedule instance under Xen 4.4.1 Hypervisor with libxl toolstack

April 13, 2015

UPDATE as of 16/04/2015
For now, http://www.slideshare.net/xen_com_mgr/openstack-xenfinal
is supposed to work only with nova networking, per Anthony PERARD;
Neutron appears to be an issue.
Please view the details of the troubleshooting and diagnostics obtained (thanks to Ian Campbell) at
http://lists.xen.org/archives/html/xen-devel/2015-04/msg01856.html
END UPDATE

This post is written in regard to two publications from February 2015:
First:   http://wiki.xen.org/wiki/OpenStack_via_DevStack
Second : http://www.slideshare.net/xen_com_mgr/openstack-xenfinal

Both are devoted to the same subject: the nova libvirt-xen driver. The second states that everything should be fine once some mysterious patch is merged into mainline libvirt. Neither works for me; both generate errors in libxl-driver.log even with libvirt 1.2.14 (the most recent version at the time of writing).

For a better understanding of the problem raised, see also https://ask.openstack.org/en/question/64942/nova-libvirt-xen-driver-and-patch-feb-2015-in-upstream-libvirt/

I followed the second, more carefully written, one :-

On Ubuntu 14.04.2

# apt-get update
# apt-get -y upgrade
# apt-get install xen-hypervisor-4.4-amd64
# sudo reboot

$ git clone https://git.openstack.org/openstack-dev/devstack

Created local.conf under devstack folder as follows :-

[[local|localrc]]
HOST_IP=192.168.1.57
ADMIN_PASSWORD=secret
DATABASE_PASSWORD=$ADMIN_PASSWORD
RABBIT_PASSWORD=$ADMIN_PASSWORD
SERVICE_PASSWORD=$ADMIN_PASSWORD
SERVICE_TOKEN=a682f596-76f3-11e3-b3b2-e716f9080d50

FLOATING_RANGE=192.168.10.0/24
FLAT_INTERFACE=eth0
Q_FLOATING_ALLOCATION_POOL=start=192.168.10.150,end=192.168.10.254
PUBLIC_NETWORK_GATEWAY=192.168.10.15

# Useful logging options for debugging:
DEST=/opt/stack
LOGFILE=$DEST/logs/stack.sh.log
SCREEN_LOGDIR=$DEST/logs/screen

# The default fixed range (10.0.0.0/24) conflicted with an address
# range I was using locally.
FIXED_RANGE=10.254.1.0/24
NETWORK_GATEWAY=10.254.1.1

# Services
disable_service n-net
enable_service n-cauth
enable_service q-svc
enable_service q-agt
enable_service q-dhcp
enable_service q-l3
enable_service q-meta
enable_service horizon
disable_service tempest

# This is a Xen Project host:
LIBVIRT_TYPE=xen

Ran ./stack.sh, which completed the installation successfully. Libvirt versions 1.2.2, 1.2.9 and 1.2.14 have been tested: the first is the default on Trusty; 1.2.9 and 1.2.14 were built and installed after stack.sh completed. A fresh hardware instance of Ubuntu 14.04.2 was created for every libvirt version tested.

Manual libvirt upgrade was done via :-

# apt-get build-dep libvirt
# tar xvzf libvirt-1.2.14.tar.gz -C /usr/src
# cd /usr/src/libvirt-1.2.14
# ./configure --prefix=/usr/
# make
# make install
# service libvirt-bin restart

root@ubuntu-system:~# virsh --connect xen:///
Welcome to virsh, the virtualization interactive terminal.

Type: 'help' for help with commands
'quit' to quit

virsh # version
Compiled against library: libvirt 1.2.14
Using library: libvirt 1.2.14
Using API: Xen 1.2.14
Running hypervisor: Xen 4.4.0

Per page 19 of second post

xen.gz command line tuned
ubuntu@ubuntu-system:~/devstack$ nova image-meta cirros-0.3.2-x86_64-uec set vm_mode=HVM
ubuntu@ubuntu-system:~/devstack$ nova image-meta cirros-0.3.2-x86_64-uec delete vm_mode

An attempt to launch an instance (nova-compute is up) fails with "No available host found" in n-sch.log on the Nova side.
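When the scheduler rejects a boot request, the reason usually shows up in its own log rather than in libxl's. A minimal sketch of a check (the log path is an assumption based on devstack's SCREEN_LOGDIR from local.conf above; the function name is mine):

```shell
# Hypothetical helper: scan the nova-scheduler screen log for the lines
# that explain why no host passed the filters. The default path assumes
# SCREEN_LOGDIR=/opt/stack/logs/screen, as set in local.conf above.
check_sched_log() {
    local log="${1:-/opt/stack/logs/screen/screen-n-sch.log}"
    grep -iE 'filter results|no valid host|no available host' "$log" | tail -n 5
}
# Usage: check_sched_log            (default devstack path)
#        check_sched_log /path/to/n-sch.log
```

Run `check_sched_log` right after a failed boot; an empty "Filter results" list means every compute host was filtered out before scheduling.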

The libxl-driver.log reports :-

root@ubuntu-system:/var/log/libvirt/libxl# ls -l
total 32
-rw-r–r– 1 root root 30700 Apr 12 03:47 libxl-driver.log

**************************************************************************************

libxl: debug: libxl_dm.c:1320:libxl__spawn_local_dm: Spawning device-model /usr/bin/qemu-system-i386 with arguments:
libxl: debug: libxl_dm.c:1322:libxl__spawn_local_dm: /usr/bin/qemu-system-i386
libxl: debug: libxl_dm.c:1322:libxl__spawn_local_dm: -xen-domid
libxl: debug: libxl_dm.c:1322:libxl__spawn_local_dm: 2
libxl: debug: libxl_dm.c:1322:libxl__spawn_local_dm: -chardev
libxl: debug: libxl_dm.c:1322:libxl__spawn_local_dm: socket,id=libxl-cmd,path=/var/run/xen/qmp-libxl-2,server,nowait
libxl: debug: libxl_dm.c:1322:libxl__spawn_local_dm: -mon
libxl: debug: libxl_dm.c:1322:libxl__spawn_local_dm: chardev=libxl-cmd,mode=control
libxl: debug: libxl_dm.c:1322:libxl__spawn_local_dm: -nodefaults
libxl: debug: libxl_dm.c:1322:libxl__spawn_local_dm: -xen-attach
libxl: debug: libxl_dm.c:1322:libxl__spawn_local_dm: -name
libxl: debug: libxl_dm.c:1322:libxl__spawn_local_dm: instance-00000002
libxl: debug: libxl_dm.c:1322:libxl__spawn_local_dm: -vnc
libxl: debug: libxl_dm.c:1322:libxl__spawn_local_dm: 127.0.0.1:1
libxl: debug: libxl_dm.c:1322:libxl__spawn_local_dm: -display
libxl: debug: libxl_dm.c:1322:libxl__spawn_local_dm: none
libxl: debug: libxl_dm.c:1322:libxl__spawn_local_dm: -k
libxl: debug: libxl_dm.c:1322:libxl__spawn_local_dm: en-us
libxl: debug: libxl_dm.c:1322:libxl__spawn_local_dm: -machine
libxl: debug: libxl_dm.c:1322:libxl__spawn_local_dm: xenpv
libxl: debug: libxl_dm.c:1322:libxl__spawn_local_dm: -m
libxl: debug: libxl_dm.c:1322:libxl__spawn_local_dm: 513
libxl: debug: libxl_event.c:570:libxl__ev_xswatch_register: watch w=0x7f36cc001990 wpath=/local/domain/0/device-model/2/state token=3/3: register slotnum=3
libxl: debug: libxl_create.c:1356:do_domain_create: ao 0x7f36cc0012e0: inprogress: poller=0x7f36d8013130, flags=i
libxl: debug: libxl_event.c:514:watchfd_callback: watch w=0x7f36cc001990 wpath=/local/domain/0/device-model/2/state token=3/3: event epath=/local/domain/0/device-model/2/state
libxl: debug: libxl_event.c:514:watchfd_callback: watch w=0x7f36cc001990 wpath=/local/domain/0/device-model/2/state token=3/3: event epath=/local/domain/0/device-model/2/state
libxl: debug: libxl_event.c:606:libxl__ev_xswatch_deregister: watch w=0x7f36cc001990 wpath=/local/domain/0/device-model/2/state token=3/3: deregister slotnum=3
libxl: debug: libxl_event.c:618:libxl__ev_xswatch_deregister: watch w=0x7f36cc001990: deregister unregistered
libxl: debug: libxl_qmp.c:696:libxl__qmp_initialize: connected to /var/run/xen/qmp-libxl-2
libxl: debug: libxl_qmp.c:296:qmp_handle_response: message type: qmp
libxl: debug: libxl_qmp.c:546:qmp_send_prepare: next qmp command: '{
"execute": "qmp_capabilities",
"id": 1
}

libxl: debug: libxl_qmp.c:296:qmp_handle_response: message type: return
libxl: debug: libxl_qmp.c:546:qmp_send_prepare: next qmp command: '{
"execute": "query-chardev",
"id": 2
}

libxl: debug: libxl_qmp.c:296:qmp_handle_response: message type: return
libxl: debug: libxl_qmp.c:546:qmp_send_prepare: next qmp command: '{
"execute": "query-vnc",
"id": 3
}

libxl: debug: libxl_qmp.c:296:qmp_handle_response: message type: return
libxl: debug: libxl_event.c:570:libxl__ev_xswatch_register: watch w=0x7f36f284b3e8 wpath=/local/domain/0/backend/vif/2/0/state token=3/4: register slotnum=3
libxl: debug: libxl_event.c:514:watchfd_callback: watch w=0x7f36f284b3e8 wpath=/local/domain/0/backend/vif/2/0/state token=3/4: event epath=/local/domain/0/backend/vif/2/0/state
libxl: debug: libxl_event.c:657:devstate_watch_callback: backend /local/domain/0/backend/vif/2/0/state wanted state 2 still waiting state 1
libxl: debug: libxl_event.c:514:watchfd_callback: watch w=0x7f36f284b3e8 wpath=/local/domain/0/backend/vif/2/0/state token=3/4: event epath=/local/domain/0/backend/vif/2/0/state
libxl: debug: libxl_event.c:653:devstate_watch_callback: backend /local/domain/0/backend/vif/2/0/state wanted state 2 ok
libxl: debug: libxl_event.c:606:libxl__ev_xswatch_deregister: watch w=0x7f36f284b3e8 wpath=/local/domain/0/backend/vif/2/0/state token=3/4: deregister slotnum=3
libxl: debug: libxl_event.c:618:libxl__ev_xswatch_deregister: watch w=0x7f36f284b3e8: deregister unregistered
libxl: debug: libxl_device.c:1023:device_hotplug: calling hotplug script: /etc/xen/scripts/vif-bridge online
libxl: debug: libxl_event.c:618:libxl__ev_xswatch_deregister: watch w=0x7f36f284b470: deregister unregistered
libxl: error: libxl_exec.c:118:libxl_report_child_exitstatus: /etc/xen/scripts/vif-bridge online [-1] exited with error status 1
libxl: error: libxl_device.c:1085:device_hotplug_child_death_cb: script: ip link set vif2.0 name tap5600079c-9e failed
libxl: debug: libxl_event.c:618:libxl__ev_xswatch_deregister: watch w=0x7f36f284b470: deregister unregistered
libxl: error: libxl_create.c:1226:domcreate_attach_vtpms: unable to add nic devices

libxl: debug: libxl_dm.c:1495:kill_device_model: Device Model signaled

 


Setup the most recent Nova Docker Driver via Devstack on Fedora 21

March 23, 2015

*********************************************************************************
UPDATE as 03/26/2015
To make the devstack configuration persistent between reboots on Fedora 21, i.e. restartable via ./rejoin-stack.sh, the following services must be enabled :-
*********************************************************************************
systemctl enable rabbitmq-server
systemctl enable openvswitch
systemctl enable httpd
systemctl enable mariadb
systemctl enable mysqld

File /etc/rc.d/rc.local should contain ( in my case ) :-

ip addr flush dev br-ex ;
ip addr add 192.168.10.15/24 dev br-ex ;
ip link set br-ex up ;
route add -net 10.254.1.0/24 gw 192.168.10.15 ;

System is supposed to be shutdown via :-
$sudo ./unstack.sh
********************************************************************************

This post follows up http://blog.oddbit.com/2015/02/06/installing-nova-docker-on-fedora-21/ ; however, RDO Juno is not pre-installed here, and the Nova Docker driver is built first from the top commit of https://git.openstack.org/cgit/stackforge/nova-docker/ . The next step is :-

$ git clone https://git.openstack.org/openstack-dev/devstack
$ cd devstack

Create local.conf under devstack following either of the two links provided,
and run ./stack.sh to perform an AIO OpenStack installation, as it does
on Ubuntu 14.04. All the steps needed to keep stack.sh from crashing on F21 are described below.

# yum -y install git docker-io fedora-repos-rawhide
# yum --enablerepo=rawhide install python-six python-pip python-pbr systemd
# reboot
# yum -y install gcc python-devel   ( required for the driver build )

$ git clone http://github.com/stackforge/nova-docker.git
$ cd nova-docker
$ sudo pip install .

python-six gets downgraded to 1.2 during the driver's build; to raise it back to 1.9, run :-

# yum --enablerepo=rawhide reinstall python-six
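A quick way to confirm the reinstall took effect is to compare the active six version against the driver's minimum. A minimal sketch (the >= 1.9 threshold comes from the note above; `six_ok` is a name I made up):

```shell
# Hypothetical helper: does a six version string satisfy >= 1.9?
six_ok() {
    echo "$1" | awk -F. '{ exit !($1 > 1 || ($1 == 1 && $2 >= 9)) }'
}

# Compare the interpreter's active six against the requirement.
# Falls back to "0" if python or six is missing.
ver=$(python -c 'import six; print(six.__version__)' 2>/dev/null || echo 0)
if six_ok "$ver"; then
    echo "python-six $ver is new enough"
else
    echo "python-six too old; reinstall it from rawhide"
fi
```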

Run devstack with Lars's local.conf
per http://blog.oddbit.com/2015/02/11/installing-novadocker-with-devstack/
or view http://bderzhavets.blogspot.com/2015/02/set-up-nova-docker-driver-on-ubuntu.html for another version of local.conf
*****************************************************************************
My version of local.conf, which lets you define the floating pool as needed, is a bit more flexible than the original
*****************************************************************************
[[local|localrc]]
HOST_IP=192.168.1.57
ADMIN_PASSWORD=secret
MYSQL_PASSWORD=secret
RABBIT_PASSWORD=secret
SERVICE_PASSWORD=secret
FLOATING_RANGE=192.168.10.0/24
FLAT_INTERFACE=eth0
Q_FLOATING_ALLOCATION_POOL=start=192.168.10.150,end=192.168.10.254
PUBLIC_NETWORK_GATEWAY=192.168.10.15

SERVICE_TOKEN=super-secret-admin-token
VIRT_DRIVER=novadocker.virt.docker.DockerDriver

DEST=$HOME/stack
SERVICE_DIR=$DEST/status
DATA_DIR=$DEST/data
LOGFILE=$DEST/logs/stack.sh.log
LOGDIR=$DEST/logs

# The default fixed range (10.0.0.0/24) conflicted with an address
# range I was using locally.
FIXED_RANGE=10.254.1.0/24
NETWORK_GATEWAY=10.254.1.1

# Services

disable_service n-net
enable_service q-svc
enable_service q-agt
enable_service q-dhcp
enable_service q-l3
enable_service q-meta
enable_service horizon
disable_service tempest
# Introduce glance to docker images

[[post-config|$GLANCE_API_CONF]]
[DEFAULT]
container_formats=ami,ari,aki,bare,ovf,ova,docker

# Configure nova to use the nova-docker driver
[[post-config|$NOVA_CONF]]
[DEFAULT]
compute_driver=novadocker.virt.docker.DockerDriver

**************************************************************************************
After stack.sh completes, disable firewalld: devstack does not interact with Fedora's firewalld, yet it brings up OpenStack daemons that require the corresponding ports to be open.
***************************************************************************************

#  systemctl stop firewalld
#  systemctl disable firewalld

$ cd dev*
$ . openrc demo
$ neutron security-group-rule-create --protocol icmp \
--direction ingress --remote-ip-prefix 0.0.0.0/0 default
$ neutron security-group-rule-create --protocol tcp \
--port-range-min 22 --port-range-max 22 \
--direction ingress --remote-ip-prefix 0.0.0.0/0 default
$ neutron security-group-rule-create --protocol tcp \
--port-range-min 80 --port-range-max 80 \
--direction ingress --remote-ip-prefix 0.0.0.0/0 default

Uploading docker image to glance

$ . openrc admin
$  docker pull rastasheep/ubuntu-sshd:14.04
$  docker save rastasheep/ubuntu-sshd:14.04 | glance image-create --is-public=True --container-format=docker --disk-format=raw --name rastasheep/ubuntu-sshd:14.04

Launch new instance via uploaded image :-

$ . openrc demo
$  nova boot --image "rastasheep/ubuntu-sshd:14.04" --flavor m1.tiny \
--nic net-id=private-net-id UbuntuDocker

To provide internet access for launched nova-docker instance run :-
# iptables -t nat -A POSTROUTING -o eth0 -j MASQUERADE
Horizon is unavailable, even though it is installed.


Set up Two Node RDO Juno ML2&OVS&VXLAN Cluster running Docker Hypervisor on Compute Node (CentOS 7, kernel 3.10.0-123.20.1.el7.x86_64)

February 6, 2015

For real applications, what matters is getting the Nova-Docker driver set up successfully on Compute Nodes. It is nice when everything works on an AIO Juno host or on the Controller, but only as a demonstration. Perhaps I did something wrong, or perhaps for some other reason, but kernel version 3.10.0-123.20.1.el7.x86_64 seems to be the first that brings success on RDO Juno Compute nodes.

Follow http://lxer.com/module/newswire/view/209851/index.html  up to section

"Set up Nova-Docker on Controller && Network Node"

***************************************************
Set up  Nova-Docker Driver on Compute Node
***************************************************

# yum install python-pbr
# yum install docker-io -y
# git clone https://github.com/stackforge/nova-docker
# cd nova-docker
# git checkout stable/juno
# python setup.py install
# systemctl start docker
# systemctl enable docker
# chmod 660  /var/run/docker.sock
#  mkdir /etc/nova/rootwrap.d

************************************************
Create the docker.filters file:
************************************************

vi /etc/nova/rootwrap.d/docker.filters

Insert Lines

# nova-rootwrap command filters for setting up network in the docker driver
# This file should be owned by (and only-writeable by) the root user
[Filters]
# nova/virt/docker/driver.py: 'ln', '-sf', '/var/run/netns/.*'
ln: CommandFilter, /bin/ln, root
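The same filters file can be written non-interactively. A minimal sketch (FILTERS defaults to a local file here so the snippet is safe to dry-run; on the Compute Node point it at /etc/nova/rootwrap.d/docker.filters):

```shell
# Write the nova-docker rootwrap filter in one shot. FILTERS is
# parameterized; the real target is /etc/nova/rootwrap.d/docker.filters.
FILTERS=${FILTERS:-docker.filters}
mkdir -p "$(dirname "$FILTERS")"
cat > "$FILTERS" <<'EOF'
# nova-rootwrap command filters for setting up network in the docker driver
# This file should be owned by (and only-writeable by) the root user
[Filters]
# nova/virt/docker/driver.py: 'ln', '-sf', '/var/run/netns/.*'
ln: CommandFilter, /bin/ln, root
EOF
```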

*****************************************
Add line /etc/glance/glance-api.conf
*****************************************
container_formats=ami,ari,aki,bare,ovf,ova,docker
:wq

******************************
Update nova.conf
******************************

vi /etc/nova/nova.conf

set "compute_driver = novadocker.virt.docker.DockerDriver"
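The same edit can be scripted. A minimal sketch (the function name is mine, and the append branch assumes [DEFAULT] is the last section in the file, as in a stock nova.conf):

```shell
# Hypothetical helper: point nova at the nova-docker driver in a given
# nova.conf. Replaces an existing compute_driver line, otherwise appends
# one (assumes [DEFAULT] is the file's last section).
set_compute_driver() {
    local conf="$1"
    local driver="compute_driver = novadocker.virt.docker.DockerDriver"
    if grep -q '^compute_driver' "$conf"; then
        sed -i "s|^compute_driver.*|$driver|" "$conf"
    else
        printf '%s\n' "$driver" >> "$conf"
    fi
}
```

For example, `set_compute_driver /etc/nova/nova.conf`, then restart openstack-nova-compute.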

************************
Restart Services
************************

usermod -G docker nova

systemctl restart openstack-nova-compute (on Compute)
systemctl status openstack-nova-compute
systemctl restart openstack-glance-api (on Controller && Network)

At this point `scp  /root/keystonerc_admin compute:/root`  from Controller to Compute Node

*************************************************************
Test the Nova-Docker Driver installation on the Compute Node (RDO Juno, CentOS 7, kernel 3.10.0-123.20.1.el7.x86_64)
**************************************************************

*******************************************

Setup Ubuntu 14.04 with SSH access

*******************************************

First on Compute node

# docker pull rastasheep/ubuntu-sshd:14.04

# . keystonerc_admin

# docker save rastasheep/ubuntu-sshd:14.04 | glance image-create --is-public=True --container-format=docker --disk-format=raw --name rastasheep/ubuntu-sshd:14.04

Second on Controller node launch Nova-Docker container , running on Compute via dashboard and assign floating IP address


 

*********************************************
Verify `docker ps ` on Compute Node
*********************************************

[root@juno1dev ~]# ssh 192.168.1.137

Last login: Fri Feb  6 15:38:49 2015 from juno1dev.localdomain

[root@juno2dev ~]# docker ps

CONTAINER ID        IMAGE                          COMMAND               CREATED             STATUS              PORTS               NAMES

ef23d030e35a        rastasheep/ubuntu-sshd:14.04   "/usr/sbin/sshd -D"   7 hours ago         Up 6 minutes                            nova-211bcb54-35ba-4f0a-a150-7e73546d8f46

[root@juno2dev ~]# ip netns

ef23d030e35af63c17698d1f4c6f7d8023c29455e9dff0288ce224657828993a
ca9aa6cb527f2302985817d3410a99c6f406f4820ed6d3f62485781d50f16590
fea73a69337334b36625e78f9a124e19bf956c73b34453f1994575b667e7401b
58834d3bbea1bffa368724527199d73d0d6fde74fa5d24de9cca41c29f978e31
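Note that the names `ip netns` prints here are the full (untruncated) docker container IDs, so a short ID from `docker ps` can be matched to its network namespace by prefix. A small sketch using the sample IDs above rather than live output:

```shell
# Short ID as shown by `docker ps` (sample value from the listing above).
short_id="ef23d030e35a"

# Namespace names as printed by `ip netns` (sample values, not live output).
netns_list="ef23d030e35af63c17698d1f4c6f7d8023c29455e9dff0288ce224657828993a
ca9aa6cb527f2302985817d3410a99c6f406f4820ed6d3f62485781d50f16590"

# A prefix match pairs the container with its namespace.
match="$(printf '%s\n' "$netns_list" | grep "^${short_id}")"
echo "container $short_id lives in netns $match"
```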
********************************
On Controller run :-
********************************

[root@juno1dev ~]# ssh root@192.168.1.173
root@192.168.1.173’s password:
Last login: Fri Feb  6 12:11:19 2015 from 192.168.1.127

root@instance-0000002b:~# apt-get update

Ign http://archive.ubuntu.com trusty InRelease
Ign http://archive.ubuntu.com trusty-updates InRelease
Ign http://archive.ubuntu.com trusty-security InRelease
Hit http://archive.ubuntu.com trusty Release.gpg
Get:1 http://archive.ubuntu.com trusty-updates Release.gpg [933 B]
Get:2 http://archive.ubuntu.com trusty-security Release.gpg [933 B]
Hit http://archive.ubuntu.com trusty Release
Get:3 http://archive.ubuntu.com trusty-updates Release [62.0 kB]
Get:4 http://archive.ubuntu.com trusty-security Release [62.0 kB]
Hit http://archive.ubuntu.com trusty/main Sources
Hit http://archive.ubuntu.com trusty/restricted Sources
Hit http://archive.ubuntu.com trusty/universe Sources
Hit http://archive.ubuntu.com trusty/main amd64 Packages
Hit http://archive.ubuntu.com trusty/restricted amd64 Packages
Hit http://archive.ubuntu.com trusty/universe amd64 Packages
Get:5 http://archive.ubuntu.com trusty-updates/main Sources [208 kB]
Get:6 http://archive.ubuntu.com trusty-updates/restricted Sources [1874 B]
Get:7 http://archive.ubuntu.com trusty-updates/universe Sources [124 kB]
Get:8 http://archive.ubuntu.com trusty-updates/main amd64 Packages [524 kB]
Get:9 http://archive.ubuntu.com trusty-updates/restricted amd64 Packages [14.8 kB]
Get:10 http://archive.ubuntu.com trusty-updates/universe amd64 Packages [318 kB]
Get:11 http://archive.ubuntu.com trusty-security/main Sources [79.8 kB]
Get:12 http://archive.ubuntu.com trusty-security/restricted Sources [1874 B]
Get:13 http://archive.ubuntu.com trusty-security/universe Sources [19.1 kB]
Get:14 http://archive.ubuntu.com trusty-security/main amd64 Packages [251 kB]
Get:15 http://archive.ubuntu.com trusty-security/restricted amd64 Packages [14.8 kB]
Get:16 http://archive.ubuntu.com trusty-security/universe amd64 Packages [110 kB]
Fetched 1793 kB in 9s (199 kB/s)
Reading package lists… Done

If network operations like `apt-get install …` subsequently run with no problems, the Nova-Docker driver is installed and working on the Compute Node.

**************************************************************************************
Finally, I’ve set up openstack-nova-compute on the Controller to run several instances with the Qemu/Libvirt driver :-
**************************************************************************************



Set up Nova-Docker on OpenStack RDO Juno on top of Fedora 21

January 11, 2015
****************************************************************************************
UPDATE as of 01/31/2015 to get Docker && Nova-Docker working on Fedora 21
****************************************************************************************
Per https://github.com/docker/docker/issues/10280
download systemd-218-3.fc22.src.rpm, build the 218-3 rpms, and upgrade systemd.
First, the packages needed for rpmbuild :-

$ sudo yum install audit-libs-devel autoconf  automake cryptsetup-devel \
dbus-devel docbook-style-xsl elfutils-devel  \
glib2-devel  gnutls-devel  gobject-introspection-devel \
gperf     gtk-doc intltool kmod-devel libacl-devel \
libblkid-devel     libcap-devel libcurl-devel libgcrypt-devel \
libidn-devel libmicrohttpd-devel libmount-devel libseccomp-devel \
libselinux-devel libtool pam-devel python3-devel python3-lxml \
qrencode-devel  python2-devel  xz-devel

Second:-
$ cd rpmbuild/SPECS
$ rpmbuild -bb systemd.spec
$ cd ../RPMS/x86_64
Third:-

$ sudo yum install libgudev1-218-3.fc21.x86_64.rpm \
libgudev1-devel-218-3.fc21.x86_64.rpm \
systemd-218-3.fc21.x86_64.rpm \
systemd-compat-libs-218-3.fc21.x86_64.rpm \
systemd-debuginfo-218-3.fc21.x86_64.rpm \
systemd-devel-218-3.fc21.x86_64.rpm \
systemd-journal-gateway-218-3.fc21.x86_64.rpm \
systemd-libs-218-3.fc21.x86_64.rpm \
systemd-python-218-3.fc21.x86_64.rpm \
systemd-python3-218-3.fc21.x86_64.rpm

****************************************************************************************

Recently Filip Krikava made a fork on GitHub and created a Juno branch using

the latest commit “Fix the problem when an image is not located in the local docker image registry”.

Master https://github.com/stackforge/nova-docker.git is targeting the latest Nova (Kilo release); the forked branch is supposed to work for Juno, reasonably including commits after “Merge oslo.i18n”. The posting below is supposed to test the Juno branch https://github.com/fikovnik/nova-docker.git

Quote ([2]) :-

The Docker driver is a hypervisor driver for Openstack Nova Compute. It was introduced with the Havana release, but lives out-of-tree for Icehouse and Juno. Being out-of-tree has allowed the driver to reach maturity and feature-parity faster than would be possible should it have remained in-tree. It is expected the driver will return to mainline Nova in the Kilo release.

Install required packages to install nova-docker driver per https://wiki.openstack.org/wiki/Docker

***************************
Initial docker setup
***************************

# yum install docker-io -y
# yum install -y python-pip git
# git clone https://github.com/fikovnik/nova-docker.git
# cd nova-docker
# git branch -v -a

master                1ed1820 A note no firewall drivers.
remotes/origin/HEAD   -> origin/master
remotes/origin/juno   1a08ea5 Fix the problem when an image
is not located in the local docker image registry.
remotes/origin/master 1ed1820 A note no firewall drivers.
# git checkout -b juno origin/juno
# python setup.py install
# systemctl start docker
# systemctl enable docker
# chmod 660  /var/run/docker.sock
# pip install pbr
#  mkdir /etc/nova/rootwrap.d

******************************
Update nova.conf
******************************

vi /etc/nova/nova.conf

set "compute_driver = novadocker.virt.docker.DockerDriver"

************************************************
Next, create the docker.filters file:
************************************************

vi /etc/nova/rootwrap.d/docker.filters

Insert Lines
# nova-rootwrap command filters for setting up network in the docker driver
# This file should be owned by (and only-writeable by) the root user
[Filters]
# nova/virt/docker/driver.py: 'ln', '-sf', '/var/run/netns/.*'
ln: CommandFilter, /bin/ln, root
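The filter above whitelists exactly one escalated command: `ln` run as root from `/bin/ln`. As a toy illustration (this is not the real oslo.rootwrap logic, and `allow_command` is a hypothetical helper), a CommandFilter essentially amounts to a path check before escalation:

```shell
# Toy re-implementation of the idea behind "ln: CommandFilter, /bin/ln, root":
# only the exact whitelisted executable path is approved for escalation.
allow_command() {
    local requested="$1"
    local whitelisted="/bin/ln"
    [ "$requested" = "$whitelisted" ]
}

if allow_command /bin/ln; then echo "ln: allowed"; fi
if ! allow_command /bin/rm; then echo "rm: rejected"; fi
```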

*****************************************
Add line /etc/glance/glance-api.conf
*****************************************

container_formats=ami,ari,aki,bare,ovf,ova,docker

:wq

************************
Restart Services
************************

usermod -G docker nova

systemctl restart openstack-nova-compute
systemctl status openstack-nova-compute
systemctl restart openstack-glance-api

*******************************************************************************
Verifying the nova-docker driver build on Fedora 21

*******************************************************************************
The build below extends phusion/baseimage to start several daemons at once when a nova-docker container is launched. It has been tested on Nova-Docker RDO Juno on top of CentOS 7 (view “Set up GlassFish 4.1 Nova-Docker Container via phusion/baseimage on RDO Juno”). Here it is reproduced on Nova-Docker RDO Juno on top of Fedora 21, following a `packstack --allinone` Juno installation on Fedora 21, which ran pretty smoothly.

FROM phusion/baseimage

MAINTAINER Boris Derzhavets

RUN apt-get update
RUN echo 'root:root' |chpasswd
RUN sed -ri 's/^PermitRootLogin\s+.*/PermitRootLogin yes/' /etc/ssh/sshd_config
RUN sed -ri 's/UsePAM yes/#UsePAM yes/g' /etc/ssh/sshd_config
##################################################
# Hack to avoid external start SSH session inside container,
# otherwise sshd won’t start when docker container loads
##################################################
RUN echo "/usr/sbin/sshd > log & " >>  /etc/my_init.d/00_regen_ssh_host_keys.sh

RUN apt-get update && apt-get install -y wget
RUN wget --no-check-certificate --no-cookies --header "Cookie: oraclelicense=accept-securebackup-cookie" http://download.oracle.com/otn-pub/java/jdk/8u25-b17/jdk-8u25-linux-x64.tar.gz
RUN cp  jdk-8u25-linux-x64.tar.gz /opt
RUN cd /opt; tar -zxvf jdk-8u25-linux-x64.tar.gz
ENV PATH /opt/jdk1.8.0_25/bin:$PATH

RUN apt-get update &&  \
apt-get install -y wget unzip pwgen expect net-tools vim &&  \
wget http://download.java.net/glassfish/4.1/release/glassfish-4.1.zip &&  \
unzip glassfish-4.1.zip -d /opt &&  \
rm glassfish-4.1.zip &&  \
apt-get clean &&  \
rm -rf /var/lib/apt/lists/*
ENV PATH /opt/glassfish4/bin:$PATH

ADD run.sh /etc/my_init.d/
ADD database.sh  /etc/my_init.d/

ADD change_admin_password.sh /change_admin_password.sh
ADD change_admin_password_func.sh /change_admin_password_func.sh
ADD enable_secure_admin.sh /enable_secure_admin.sh
RUN chmod +x /*.sh /etc/my_init.d/*.sh

# 4848 (administration), 8080 (HTTP listener), 8181 (HTTPS listener), 9009 (JPDA debug port)

EXPOSE 22  4848 8080 8181 9009

CMD ["/sbin/my_init"]
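The Dockerfile relies on `/sbin/my_init` executing every script in /etc/my_init.d at container start, in lexicographic order, which is why prefixes like 00_ and 01_ control the startup sequence. A rough simulation of just that ordering behaviour (a sketch, not the rest of what my_init does):

```shell
# Populate a temp directory standing in for /etc/my_init.d.
initdir="$(mktemp -d)"
printf '#!/bin/sh\necho first\n'  > "$initdir/00_keys.sh"
printf '#!/bin/sh\necho second\n' > "$initdir/01_sshd.sh"
chmod +x "$initdir"/*.sh

# Run each script once; glob expansion is already lexicographically sorted,
# so 00_keys.sh runs before 01_sshd.sh.
for script in "$initdir"/*.sh; do
    "$script"
done
rm -rf "$initdir"
```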

***************************************************************
Another option is to leave 00_regen_ssh_host_keys.sh untouched
***************************************************************
# RUN echo "/usr/sbin/sshd > log & " >>  /etc/my_init.d/00_regen_ssh_host_keys.sh

***************************************************************
Create script 01_sshd_start.sh in the build folder
***************************************************************

#!/bin/bash
/usr/sbin/sshd > log &
and insert in Dockerfile:-
ADD 01_sshd_start.sh /etc/my_init.d/

********************************************************************************
I had to update the database.sh script as follows to make the nova-docker container
start on RDO Juno on top of Fedora 21 ( view http://lxer.com/module/newswire/view/209277/index.html ).
********************************************************************************

# cat database.sh

#!/bin/bash
set -e
asadmin start-database --dbhost 127.0.0.1 --terse=true > log &

The important change is binding dbhost to 127.0.0.1, which is not required for loading a plain docker container. The Nova-Docker driver ( http://www.linux.com/community/blogs/133-general-linux/799569-running-nova-docker-on-openstack-rdo-juno-centos-7 ) seems to be more picky about the --dbhost key value of the Derby database.
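One reason the `> log &` form in database.sh loads cleanly is that a backgrounded command never trips `set -e`, even if it later fails, so the container init sequence continues past the database startup. A toy demonstration of that shell behaviour:

```shell
set -e                 # abort the script on any (foreground) failure

( exit 1 ) &           # a failing command in the background is NOT fatal
bg=$!

survived="yes"         # we only reach this line because the failure was backgrounded
echo "script continued: $survived"

wait "$bg" || true     # reap the job; ignore its nonzero exit status
```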

*********************
Build image
*********************

[root@junolxc docker-glassfish41]# ls -l

total 44
-rw-r--r--. 1 root root   217 Jan  7 00:27 change_admin_password_func.sh
-rw-r--r--. 1 root root   833 Jan  7 00:27 change_admin_password.sh
-rw-r--r--. 1 root root   473 Jan  7 00:27 circle.yml
-rw-r--r--. 1 root root    44 Jan  7 00:27 database.sh
-rw-r--r--. 1 root root  1287 Jan  7 00:27 Dockerfile
-rw-r--r--. 1 root root   167 Jan  7 00:27 enable_secure_admin.sh
-rw-r--r--. 1 root root 11323 Jan  7 00:27 LICENSE
-rw-r--r--. 1 root root  2123 Jan  7 00:27 README.md
-rw-r--r--. 1 root root   354 Jan  7 00:27 run.sh
[root@junolxc docker-glassfish41]# docker build -t derby/docker-glassfish41 .

******************************************
RDO (AIO install)  Juno status on Fedora 21
*******************************************

[root@fedora21 ~(keystone_admin)]# openstack-status
== Nova services ==
openstack-nova-api:                     active
openstack-nova-cert:                    active
openstack-nova-compute:                 active
openstack-nova-network:                 inactive  (disabled on boot)
openstack-nova-scheduler:               active
openstack-nova-conductor:               active
== Glance services ==
openstack-glance-api:                   active
openstack-glance-registry:              active
== Keystone service ==
openstack-keystone:                     active
== Horizon service ==
openstack-dashboard:                    active
== neutron services ==
neutron-server:                         active
neutron-dhcp-agent:                     active
neutron-l3-agent:                       active
neutron-metadata-agent:                 active
neutron-lbaas-agent:                    inactive  (disabled on boot)
neutron-openvswitch-agent:              active
== Swift services ==
openstack-swift-proxy:                  active
openstack-swift-account:                active
openstack-swift-container:              active
openstack-swift-object:                 active
== Cinder services ==
openstack-cinder-api:                   active
openstack-cinder-scheduler:             active
openstack-cinder-volume:                active
openstack-cinder-backup:                active
== Ceilometer services ==
openstack-ceilometer-api:               active
openstack-ceilometer-central:           active
openstack-ceilometer-compute:           active
openstack-ceilometer-collector:         active
openstack-ceilometer-alarm-notifier:    active
openstack-ceilometer-alarm-evaluator:   active
openstack-ceilometer-notification:      active
== Support services ==
libvirtd:                               active
openvswitch:                            active
dbus:                                   active
target:                                 inactive  (disabled on boot)
rabbitmq-server:                        active
memcached:                              active
== Keystone users ==
+----------------------------------+------------+---------+----------------------+
|                id                |    name    | enabled |        email         |
+----------------------------------+------------+---------+----------------------+
| edfb1cd3c4d54401ac810b14e8d953f2 |   admin    |   True  |    root@localhost    |
| 783df7494254423aaed3bfe0cc2262af | ceilometer |   True  | ceilometer@localhost |
| 955e7619fc6749f68843030d9da6cef3 |   cinder   |   True  |   cinder@localhost   |
| 1ed0f9f7705341b79f58190ea31160fc |    demo    |   True  |                      |
| 68362c2c7ad642ab9ea31164cad35268 |   glance   |   True  |   glance@localhost   |
| b7dec54d6b984c16afca2935cc09c478 |  neutron   |   True  |  neutron@localhost   |
| c35cad56c0e548aaa6907e0da3eca569 |    nova    |   True  |    nova@localhost    |
| a959def1f10e48d6959a70bc930e8522 |   swift    |   True  |   swift@localhost    |
+----------------------------------+------------+---------+----------------------+
== Glance images ==
+--------------------------------------+---------------------------------+-------------+------------------+------------+--------+
| ID                                   | Name                            | Disk Format | Container Format | Size       | Status |
+--------------------------------------+---------------------------------+-------------+------------------+------------+--------+
| 08b235e5-7f2b-4bc4-959e-582482037019 | cirros                          | qcow2       | bare             | 13200896   | active |
| fcb9a93a-6a28-413f-853b-4ad362aed0c5 | derby/docker-glassfish41:latest | raw         | docker           | 1112110592 | active |
| 032952ba-5bb3-41cc-9a2a-d4c76d197571 | dba07/docker-glassfish41:latest | raw         | docker           | 1112110592 | active |
| ce0adab4-3f09-45cc-81fa-cd8cc6acc7c1 | rastasheep/ubuntu-sshd:14.04    | raw         | docker           | 263785472  | active |
| 230040b3-c5d1-4bf0-b5e4-9f112fd71c70 | Ubuntu14.04-011014              | qcow2       | bare             | 256311808  | active |
+--------------------------------------+---------------------------------+-------------+------------------+------------+--------+
== Nova managed services ==
+----+------------------+----------------------+----------+---------+-------+----------------------------+-----------------+
| Id | Binary           | Host                 | Zone     | Status  | State | Updated_at                 | Disabled Reason |
+----+------------------+----------------------+----------+---------+-------+----------------------------+-----------------+
| 1  | nova-consoleauth | fedora21.localdomain | internal | enabled | up    | 2015-01-11T09:45:21.000000 | -               |
| 2  | nova-scheduler   | fedora21.localdomain | internal | enabled | up    | 2015-01-11T09:45:22.000000 | -               |
| 3  | nova-conductor   | fedora21.localdomain | internal | enabled | up    | 2015-01-11T09:45:22.000000 | -               |
| 5  | nova-compute     | fedora21.localdomain | nova     | enabled | up    | 2015-01-11T09:45:20.000000 | -               |
| 6  | nova-cert        | fedora21.localdomain | internal | enabled | up    | 2015-01-11T09:45:29.000000 | -               |
+----+------------------+----------------------+----------+---------+-------+----------------------------+-----------------+
== Nova networks ==
+--------------------------------------+--------------+------+
| ID                                   | Label        | Cidr |
+--------------------------------------+--------------+------+
| 046e1e6f-b09c-4daf-9732-3ed0b6e5fdf8 | public       | -    |
| 76709a1a-61e7-4488-9ecf-96dbd88d4fb6 | private      | -    |
| 7b2c1d87-cea1-40aa-a1d7-dbac3cc99798 | demo_network | -    |
+--------------------------------------+--------------+------+
== Nova instance flavors ==
+----+-----------+-----------+------+-----------+------+-------+-------------+-----------+
| ID | Name      | Memory_MB | Disk | Ephemeral | Swap | VCPUs | RXTX_Factor | Is_Public |
+----+-----------+-----------+------+-----------+------+-------+-------------+-----------+
| 1  | m1.tiny   | 512       | 1    | 0         |      | 1     | 1.0         | True      |
| 2  | m1.small  | 2048      | 20   | 0         |      | 1     | 1.0         | True      |
| 3  | m1.medium | 4096      | 40   | 0         |      | 2     | 1.0         | True      |
| 4  | m1.large  | 8192      | 80   | 0         |      | 4     | 1.0         | True      |
| 5  | m1.xlarge | 16384     | 160  | 0         |      | 8     | 1.0         | True      |
+----+-----------+-----------+------+-----------+------+-------+-------------+-----------+
== Nova instances ==
+----+------+--------+------------+-------------+----------+
| ID | Name | Status | Task State | Power State | Networks |
+----+------+--------+------------+-------------+----------+
+----+------+--------+------------+-------------+----------+

*************************
Upload image to glance
*************************

# . keystonerc_admin

# docker save derby/docker-glassfish41:latest | glance image-create --is-public=True --container-format=docker --disk-format=raw --name derby/docker-glassfish41:latest

**********************
Launch instance
**********************
# .  keystonerc_demo

# nova boot --image "derby/docker-glassfish41:latest" --flavor m1.small --key-name oskey57 --nic net-id=demo_network-id DerbyGlassfish41



Set up GlassFish 4.1 Nova-Docker Container via docker’s phusion/baseimage on RDO Juno

January 9, 2015

The problem here is that phusion/baseimage, per https://github.com/phusion/baseimage-docker, should provide ssh access to the container; however, it doesn’t. When working with a docker container directly, there is an easy workaround suggested by Mykola Gurov in http://stackoverflow.com/questions/27816298/cannot-get-ssh-access-to-glassfish-4-1-docker-container
# docker exec <container-id> /usr/sbin/sshd -D
*******************************************************************************
To bring sshd back to life, create script 01_sshd_start.sh in the build folder
*******************************************************************************
#!/bin/bash

if [[ ! -e /etc/ssh/ssh_host_rsa_key ]]; then
echo “No SSH host key available. Generating one…”
export LC_ALL=C
export DEBIAN_FRONTEND=noninteractive
dpkg-reconfigure openssh-server
echo “SSH KEYS regenerated by Boris just in case !”
fi

/usr/sbin/sshd > log &
echo “SSHD started !”

and insert in Dockerfile:-

ADD 01_sshd_start.sh /etc/my_init.d/ 
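The guard in 01_sshd_start.sh is a generic create-only-if-missing pattern. A sketch of the same idea, with a temporary path standing in for /etc/ssh/ssh_host_rsa_key, `touch` standing in for dpkg-reconfigure, and `regen_if_missing` a hypothetical helper name:

```shell
# Path that does not exist yet (stand-in for the host key file).
keyfile="$(mktemp -u)"

regen_if_missing() {
    if [ ! -e "$1" ]; then
        echo "No host key available. Generating one..."
        touch "$1"          # stand-in for dpkg-reconfigure openssh-server
    fi
}

regen_if_missing "$keyfile"     # first call generates the file
regen_if_missing "$keyfile"     # second call is a no-op
[ -e "$keyfile" ] && echo "key present"
rm -f "$keyfile"
```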

Below is the Dockerfile used to build the image for a GlassFish 4.1 nova-docker container. It extends phusion/baseimage and starts three daemons at once when the nova-docker instance is launched; the image is prepared for use by the Nova-Docker driver on Juno.

FROM phusion/baseimage
MAINTAINER Boris Derzhavets

RUN apt-get update
RUN echo 'root:root' |chpasswd
RUN sed -ri 's/^PermitRootLogin\s+.*/PermitRootLogin yes/' /etc/ssh/sshd_config
RUN sed -ri 's/UsePAM yes/#UsePAM yes/g' /etc/ssh/sshd_config

RUN apt-get update && apt-get install -y wget
RUN wget --no-check-certificate --no-cookies --header "Cookie: oraclelicense=accept-securebackup-cookie" http://download.oracle.com/otn-pub/java/jdk/8u25-b17/jdk-8u25-linux-x64.tar.gz
RUN cp jdk-8u25-linux-x64.tar.gz /opt
RUN cd /opt; tar -zxvf jdk-8u25-linux-x64.tar.gz
ENV PATH /opt/jdk1.8.0_25/bin:$PATH
RUN apt-get update && \
apt-get install -y wget unzip pwgen expect net-tools vim && \
wget http://download.java.net/glassfish/4.1/release/glassfish-4.1.zip && \
unzip glassfish-4.1.zip -d /opt && \
rm glassfish-4.1.zip && \
apt-get clean && \
rm -rf /var/lib/apt/lists/*

ENV PATH /opt/glassfish4/bin:$PATH

ADD 01_sshd_start.sh /etc/my_init.d/
ADD run.sh /etc/my_init.d/
ADD database.sh /etc/my_init.d/
ADD change_admin_password.sh /change_admin_password.sh
ADD change_admin_password_func.sh /change_admin_password_func.sh
ADD enable_secure_admin.sh /enable_secure_admin.sh
RUN chmod +x /*.sh /etc/my_init.d/*.sh

# 4848 (administration), 8080 (HTTP listener), 8181 (HTTPS listener), 9009 (JPDA debug port)

EXPOSE 22 4848 8080 8181 9009

CMD ["/sbin/my_init"]

********************************************************************************
I had to update the database.sh script as follows to make the nova-docker container
start on RDO Juno
********************************************************************************
# cat database.sh

#!/bin/bash
set -e
asadmin start-database --dbhost 127.0.0.1 --terse=true > log &

The important change is binding dbhost to 127.0.0.1, which is not required for loading a plain docker container. The Nova-Docker driver ( http://www.linux.com/community/blogs/133-general-linux/799569-running-nova-docker-on-openstack-rdo-juno-centos-7 ) seems to be more picky about the --dbhost key value of the Derby database.

*********************
Build image
*********************
[root@junolxc docker-glassfish41]# ls -l
total 44
-rw-r--r--. 1 root root 217 Jan 7 00:27 change_admin_password_func.sh
-rw-r--r--. 1 root root 833 Jan 7 00:27 change_admin_password.sh
-rw-r--r--. 1 root root 473 Jan 7 00:27 circle.yml
-rw-r--r--. 1 root root 44 Jan 7 00:27 database.sh
-rw-r--r--. 1 root root 1287 Jan 7 00:27 Dockerfile
-rw-r--r--. 1 root root 167 Jan 7 00:27 enable_secure_admin.sh
-rw-r--r--. 1 root root 11323 Jan 7 00:27 LICENSE
-rw-r--r--. 1 root root 2123 Jan 7 00:27 README.md
-rw-r--r--. 1 root root 354 Jan 7 00:27 run.sh

[root@junolxc docker-glassfish41]# docker build -t boris/docker-glassfish41 .

*************************
Upload image to glance
*************************
# . keystonerc_admin
# docker save boris/docker-glassfish41:latest | glance image-create --is-public=True --container-format=docker --disk-format=raw --name boris/docker-glassfish41:latest

**********************
Launch instance
**********************
# . keystonerc_demo
# nova boot --image "boris/docker-glassfish41:latest" --flavor m1.small --key-name osxkey --nic net-id=demo_network-id OracleGlassfish41

[root@junodocker (keystone_admin)]# ssh root@192.168.1.175
root@192.168.1.175’s password:
Last login: Fri Jan 9 10:09:50 2015 from 192.168.1.57

root@instance-00000045:~# ps -ef

UID PID PPID C STIME TTY TIME CMD
root 1 0 0 10:15 ? 00:00:00 /usr/bin/python3 -u /sbin/my_init
root 12 1 0 10:15 ? 00:00:00 /usr/sbin/sshd

root 46 1 0 10:15 ? 00:00:08 /opt/jdk1.8.0_25/bin/java -Djava.library.path=/opt/glassfish4/glassfish/lib -cp /opt/glassfish4/glassfish/lib/asadmin/cli-optional.jar:/opt/glassfish4/javadb/lib/derby.jar:/opt/glassfish4/javadb/lib/derbytools.jar:/opt/glassfish4/javadb/lib/derbynet.jar:/opt/glassfish4/javadb/lib/derbyclient.jar com.sun.enterprise.admin.cli.optional.DerbyControl start 127.0.0.1 1527 true /opt/glassfish4/glassfish/databases

root 137 1 0 10:15 ? 00:00:00 /bin/bash /etc/my_init.d/run.sh
root 358 137 0 10:15 ? 00:00:05 java -jar /opt/glassfish4/bin/../glassfish/lib/client/appserver-cli.jar start-domain --debug=false -w

root 375 358 0 10:15 ? 00:02:59 /opt/jdk1.8.0_25/bin/java -cp /opt/glassfish4/glassfish/modules/glassfish.jar -XX:+UnlockDiagnosticVMOptions -XX:NewRatio=2 -XX:MaxPermSize=192m -Xmx512m -client -javaagent:/opt/glassfish4/glassfish/lib/monitor/flashlight-agent.jar -Djavax.xml.accessExternalSchema=all -Djavax.net.ssl.trustStore=/opt/glassfish4/glassfish/domains/domain1/config/cacerts.jks -Djdk.corba.allowOutputStreamSubclass=true -Dfelix.fileinstall.dir=/opt/glassfish4/glassfish/modules/autostart/ -Dorg.glassfish.additionalOSGiBundlesToStart=org.apache.felix.shell,org.apache.felix.gogo.runtime,org.apache.felix.gogo.shell,org.apache.felix.gogo.command,org.apache.felix.shell.remote,org.apache.felix.fileinstall -Dcom.sun.aas.installRoot=/opt/glassfish4/glassfish -Dfelix.fileinstall.poll=5000 -Djava.endorsed.dirs=/opt/glassfish4/glassfish/modules/endorsed:/opt/glassfish4/glassfish/lib/endorsed -Djava.security.policy=/opt/glassfish4/glassfish/domains/domain1/config/server.policy -Dosgi.shell.telnet.maxconn=1 -Dfelix.fileinstall.bundles.startTransient=true -Dcom.sun.enterprise.config.config_environment_factory_class=com.sun.enterprise.config.serverbeans.AppserverConfigEnvironmentFactory -Dfelix.fileinstall.log.level=2 -Djavax.net.ssl.keyStore=/opt/glassfish4/glassfish/domains/domain1/config/keystore.jks -Djava.security.auth.login.config=/opt/glassfish4/glassfish/domains/domain1/config/login.conf -Dfelix.fileinstall.disableConfigSave=false -Dfelix.fileinstall.bundles.new.start=true -Dcom.sun.aas.instanceRoot=/opt/glassfish4/glassfish/domains/domain1 -Dosgi.shell.telnet.port=6666 -Dgosh.args=–nointeractive -Dcom.sun.enterprise.security.httpsOutboundKeyAlias=s1as -Dosgi.shell.telnet.ip=127.0.0.1 -DANTLR_USE_DIRECT_CLASS_LOADING=true -Djava.awt.headless=true -Dcom.ctc.wstx.returnNullForDefaultNamespace=true -Djava.ext.dirs=/opt/jdk1.8.0_25/lib/ext:/opt/jdk1.8.0_25/jre/lib/ext:/opt/glassfish4/glassfish/domains/domain1/lib/ext -Djdbc.drivers=org.apache.derby.jdbc.ClientDriver 
-Djava.library.path=/opt/glassfish4/glassfish/lib:/usr/java/packages/lib/amd64:/usr/lib64:/lib64:/lib:/usr/lib com.sun.enterprise.glassfish.bootstrap.ASMain -upgrade false -domaindir /opt/glassfish4/glassfish/domains/domain1 -read-stdin true -asadmin-args –host,,,localhost,,,–port,,,4848,,,–secure=false,,,–terse=false,,,–echo=false,,,–interactive=false,,,start-domain,,,–verbose=false,,,–watchdog=true,,,–debug=false,,,–domaindir,,,/opt/glassfish4/glassfish/domains,,,domain1 -domainname domain1 -instancename server -type DAS -verbose false -asadmin-classpath /opt/glassfish4/glassfish/lib/client/appserver-cli.jar -debug false -asadmin-classname com.sun.enterprise.admin.cli.AdminMain

root 1186 12 0 14:02 ? 00:00:00 sshd: root@pts/0
root 1188 1186 0 14:02 pts/0 00:00:00 -bash
root 1226 1188 0 15:45 pts/0 00:00:00 ps -ef


Original idea of using ./run.sh script is coming from
https://registry.hub.docker.com/u/bonelli/glassfish-4.1/

[root@junodocker ~(keystone_admin)]# docker logs 65a3f4cf1994

*** Running /etc/my_init.d/00_regen_ssh_host_keys.sh…
No SSH host key available. Generating one…
Creating SSH2 RSA key; this may take some time …
Creating SSH2 DSA key; this may take some time …
Creating SSH2 ECDSA key; this may take some time …
Creating SSH2 ED25519 key; this may take some time …
invoke-rc.d: policy-rc.d denied execution of restart.

*** Running /etc/my_init.d/database.sh…
Starting database in Network Server mode on host 127.0.0.1 and port 1527.
——— Derby Network Server Information ——–
Version: CSS10100/10.10.2.0 – (1582446) Build: 1582446 DRDA Product Id: CSS10100
— listing properties —
derby.drda.traceDirectory=/opt/glassfish4/glassfish/databases
derby.drda.maxThreads=0
derby.drda.sslMode=off
derby.drda.keepAlive=true
derby.drda.minThreads=0
derby.drda.portNumber=1527
derby.drda.logConnections=false
derby.drda.timeSlice=0
derby.drda.startNetworkServer=false
derby.drda.host=127.0.0.1
derby.drda.traceAll=false
—————— Java Information ——————
Java Version: 1.8.0_25
Java Vendor: Oracle Corporation
Java home: /opt/jdk1.8.0_25/jre
Java classpath: /opt/glassfish4/glassfish/lib/asadmin/cli-optional.jar:/opt/glassfish4/javadb/lib/derby.jar:/opt/glassfish4/javadb/lib/derbytools.jar:/opt/glassfish4/javadb/lib/derbynet.jar:/opt/glassfish4/javadb/lib/derbyclient.jar
OS name: Linux
OS architecture: amd64
OS version: 3.10.0-123.el7.x86_64
Java user name: root
Java user home: /root
Java user dir: /
java.specification.name: Java Platform API Specification
java.specification.version: 1.8
java.runtime.version: 1.8.0_25-b17
——— Derby Information ——–
[/opt/glassfish4/javadb/lib/derby.jar] 10.10.2.0 – (1582446)
[/opt/glassfish4/javadb/lib/derbytools.jar] 10.10.2.0 – (1582446)
[/opt/glassfish4/javadb/lib/derbynet.jar] 10.10.2.0 – (1582446)
[/opt/glassfish4/javadb/lib/derbyclient.jar] 10.10.2.0 – (1582446)
——————————————————
—————– Locale Information —————–

Current Locale : [English/United States [en_US]]
Found support for locale: [cs]
version: 10.10.2.0 – (1582446)
Found support for locale: [de_DE]
version: 10.10.2.0 – (1582446)
Found support for locale: [es]
version: 10.10.2.0 – (1582446)
Found support for locale: [fr]
version: 10.10.2.0 – (1582446)
Found support for locale: [hu]
version: 10.10.2.0 – (1582446)
Found support for locale: [it]
version: 10.10.2.0 – (1582446)
Found support for locale: [ja_JP]
version: 10.10.2.0 – (1582446)
Found support for locale: [ko_KR]
version: 10.10.2.0 – (1582446)
Found support for locale: [pl]
version: 10.10.2.0 – (1582446)
Found support for locale: [pt_BR]
version: 10.10.2.0 – (1582446)
Found support for locale: [ru]
version: 10.10.2.0 – (1582446)
Found support for locale: [zh_CN]
version: 10.10.2.0 – (1582446)
Found support for locale: [zh_TW]
version: 10.10.2.0 – (1582446)
——————————————————
——————————————————

Starting database in the background.

Log redirected to /opt/glassfish4/glassfish/databases/derby.log.
Command start-database executed successfully.
*** Running /etc/my_init.d/run.sh…
Bad Network Configuration. DNS can not resolve the hostname:
java.net.UnknownHostException: instance-00000045: instance-00000045: unknown error

Waiting for domain1 to start …….
Successfully started the domain : domain1
domain Location: /opt/glassfish4/glassfish/domains/domain1
Log File: /opt/glassfish4/glassfish/domains/domain1/logs/server.log
Admin Port: 4848
Command start-domain executed successfully.
=> Modifying password of admin to random in Glassfish
spawn asadmin --user admin change-admin-password
Enter the admin password>
Enter the new admin password>
Enter the new admin password again>
Command change-admin-password executed successfully.
=> Enabling secure admin login
spawn asadmin enable-secure-admin
Enter admin user name> admin
Enter admin password for user “admin”>
You must restart all running servers for the change in secure admin to take effect.
Command enable-secure-admin executed successfully.
=> Done!
========================================================================
You can now connect to this Glassfish server using:
admin:fCZNVP80JiyI
Please remember to change the above password as soon as possible!
========================================================================
=> Restarting Glassfish server
Waiting for the domain to stop .
Command stop-domain executed successfully.
=> Starting and running Glassfish server
=> Debug mode is set to: false


Running Nova-Docker on OpenStack Juno (CentOS 7)

December 16, 2014

Recently Filip Krikava made a fork on GitHub and created a Juno branch using the latest commit “Fix the problem when an image is not located in the local docker image registry” ( https://github.com/fikovnik/nova-docker/commit/016cc98e2f8950ae3bf5e27912be20c52fc9e40e )
Master https://github.com/stackforge/nova-docker.git is targeting the latest Nova (Kilo release); the forked branch is supposed to work for Juno, reasonably including commits after “Merge oslo.i18n”. The posting below is supposed to test the Juno branch https://github.com/fikovnik/nova-docker.git

Quote ([2]) :-

The Docker driver is a hypervisor driver for Openstack Nova Compute. It was introduced with the Havana release, but lives out-of-tree for Icehouse and Juno. Being out-of-tree has allowed the driver to reach maturity and feature-parity faster than would be possible should it have remained in-tree. It is expected the driver will return to mainline Nova in the Kilo release.

This post in general follows up ([2]) with detailed instructions for the nova-docker driver install on RDO Juno (CentOS 7) ([3]).

Install the packages required for the nova-docker driver per https://wiki.openstack.org/wiki/Docker

***************************

Initial docker setup

***************************

# yum install docker-io -y
# yum install -y python-pip git
# git clone https://github.com/fikovnik/nova-docker.git
# cd nova-docker
# git branch -v -a

* master                1ed1820 A note no firewall drivers.
  remotes/origin/HEAD   -> origin/master
  remotes/origin/juno   1a08ea5 Fix the problem when an image is not located in the local docker image registry.
  remotes/origin/master 1ed1820 A note no firewall drivers.
# git checkout -b juno origin/juno
# python setup.py install
# systemctl start docker
# systemctl enable docker
# chmod 660  /var/run/docker.sock
# pip install pbr
#  mkdir /etc/nova/rootwrap.d

******************************

Update nova.conf

******************************

vi /etc/nova/nova.conf

set "compute_driver = novadocker.virt.docker.DockerDriver"
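For reference, a minimal sketch of making this edit idempotently from a script. It operates on a scratch copy in /tmp (an assumed stand-in path, so the sketch is safe to run anywhere); on a real node the file is /etc/nova/nova.conf:

```shell
# Append compute_driver to the [DEFAULT] section only if it is not already set.
# /tmp/nova.conf stands in for /etc/nova/nova.conf here.
conf=/tmp/nova.conf
printf '[DEFAULT]\n' > "$conf"
grep -q '^compute_driver' "$conf" || \
  echo 'compute_driver = novadocker.virt.docker.DockerDriver' >> "$conf"
grep '^compute_driver' "$conf"
```

The `grep -q || echo` guard makes the edit safe to re-run, which matters when the same provisioning script is applied more than once.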

************************************************

Next, create the docker.filters file:

************************************************

vi /etc/nova/rootwrap.d/docker.filters

Insert Lines

# nova-rootwrap command filters for setting up network in the docker driver

# This file should be owned by (and only-writeable by) the root user

[Filters]

# nova/virt/docker/driver.py: 'ln', '-sf', '/var/run/netns/.*'

ln: CommandFilter, /bin/ln, root
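The same file can be created non-interactively with a heredoc, which is handy in a kickstart or provisioning script. This sketch writes to /tmp/rootwrap.d (an assumed stand-in directory so it runs anywhere); on a real node the target is /etc/nova/rootwrap.d:

```shell
# Write the rootwrap filter file in one shot; the quoted 'EOF' prevents
# any shell expansion inside the heredoc body.
mkdir -p /tmp/rootwrap.d
cat > /tmp/rootwrap.d/docker.filters <<'EOF'
# nova-rootwrap command filters for setting up network in the docker driver
# This file should be owned by (and only-writeable by) the root user
[Filters]
# nova/virt/docker/driver.py: 'ln', '-sf', '/var/run/netns/.*'
ln: CommandFilter, /bin/ln, root
EOF
grep -c 'CommandFilter' /tmp/rootwrap.d/docker.filters
```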

*****************************************

Add line /etc/glance/glance-api.conf

*****************************************

container_formats=ami,ari,aki,bare,ovf,ova,docker

:wq

************************

Restart Services

************************

usermod -G docker nova

systemctl restart openstack-nova-compute

systemctl status openstack-nova-compute

systemctl restart openstack-glance-api

******************************

Verification docker install

******************************

[root@juno ~]# docker run -i -t fedora /bin/bash

Unable to find image 'fedora' locally

fedora:latest: The image you are pulling has been verified

00a0c78eeb6d: Pull complete

2f6ab0c1646e: Pull complete

511136ea3c5a: Already exists

Status: Downloaded newer image for fedora:latest

bash-4.3# cat /etc/issue

Fedora release 21 (Twenty One)

Kernel \r on an \m (\l)

[root@juno ~]# docker ps -a

CONTAINER ID        IMAGE                    COMMAND             CREATED             STATUS                        PORTS               NAMES
738e54f9efd4        fedora:latest            "/bin/bash"         3 minutes ago       Exited (127) 25 seconds ago                       stoic_lumiere
14fd0cbba76d        ubuntu:latest            "/bin/bash"         3 minutes ago       Exited (0) 3 minutes ago                          prickly_hypatia
ef1a726d1cd4        fedora:latest            "/bin/bash"         5 minutes ago       Exited (0) 3 minutes ago                          drunk_shockley
0a2da90a269f        ubuntu:latest            "/bin/bash"         11 hours ago        Exited (0) 11 hours ago                           thirsty_kowalevski
5a3288ce0e8e        ubuntu:latest            "/bin/bash"         11 hours ago        Exited (0) 11 hours ago                           happy_leakey
21e84951eabd        tutum/wordpress:latest   "/run.sh"           16 hours ago        Up About an hour                                  nova-bf5f7eb9-900d-48bf-a230-275d65813b0f

*******************

Setup WordPress

*******************

# docker pull tutum/wordpress

# . keystonerc_admin

# docker save tutum/wordpress | glance image-create --is-public=True --container-format=docker --disk-format=raw --name tutum/wordpress



[root@juno ~(keystone_admin)]# glance image-list
+--------------------------------------+-----------------+-------------+------------------+-----------+--------+
| ID                                   | Name            | Disk Format | Container Format | Size      | Status |
+--------------------------------------+-----------------+-------------+------------------+-----------+--------+
| c6d01e60-56c2-443f-bf87-15a0372bc2d9 | cirros          | qcow2       | bare             | 13200896  | active |
| 9d59e7ad-35b4-4c3f-9103-68f85916f36e | tutum/wordpress | raw         | docker           | 517639680 | active |
+--------------------------------------+-----------------+-------------+------------------+-----------+--------+

********************

Start container

********************

$ . keystonerc_demo

[root@juno ~(keystone_demo)]# neutron net-list

+--------------------------------------+--------------+-------------------------------------------------------+
| id                                   | name         | subnets                                               |
+--------------------------------------+--------------+-------------------------------------------------------+
| ccfc4bb1-696d-4381-91d7-28ce7c9cb009 | private      | 6c0a34ab-e3f1-458c-b24a-96f5a2149878 10.0.0.0/24      |
| 32c14896-8d47-4a56-b3c6-0dd823f03089 | public       | b1799aef-3f69-429c-9881-f81c74d83060 192.169.142.0/24 |
| a65bff8f-e397-491b-aa97-955864bec2f9 | demo_private | 69012862-f72e-4cd2-a4fc-4106d431cf2f 70.0.0.0/24      |
+--------------------------------------+--------------+-------------------------------------------------------+

$ nova boot --image "tutum/wordpress" --flavor m1.tiny --key-name osxkey --nic net-id=a65bff8f-e397-491b-aa97-955864bec2f9 WordPress

[root@juno ~(keystone_demo)]# nova list

+--------------------------------------+-----------+--------+------------+-------------+-----------------------------------------+
| ID                                   | Name      | Status | Task State | Power State | Networks                                |
+--------------------------------------+-----------+--------+------------+-------------+-----------------------------------------+
| bf5f7eb9-900d-48bf-a230-275d65813b0f | WordPress | ACTIVE | -          | Running     | demo_private=70.0.0.16, 192.169.142.153 |
+--------------------------------------+-----------+--------+------------+-------------+-----------------------------------------+

[root@juno ~(keystone_demo)]# docker ps -a

CONTAINER ID        IMAGE                    COMMAND             CREATED             STATUS                   PORTS               NAMES
21e84951eabd        tutum/wordpress:latest   "/run.sh"           About an hour ago   Up 11 minutes                                nova-bf5f7eb9-900d-48bf-a230-275d65813b0f

**************************

Starting WordPress

**************************

Immediately after the VM starts (on the non-default libvirt subnet 192.169.142.0/24), WordPress is in SHUTOFF status, so we start WordPress. The browser is then launched to the WordPress container at 192.169.142.153 from the KVM hypervisor server hosting the Juno VM (192.169.142.45).

 

**********************************************************************************

The floating IP assigned to the WordPress container is used to launch the browser:-

**********************************************************************************

*******************************************************************************************

Another sample demonstrating nova-docker container functionality: browser launched to the WordPress nova-docker container (192.169.142.155) from the KVM hypervisor server hosting libvirt's subnet (192.169.142.0/24)

*******************************************************************************************

 

*****************

MySQL Setup

*****************

# docker pull tutum/mysql

# . keystonerc_admin

*****************************

Creating Glance Image

*****************************

# docker save tutum/mysql:latest | glance image-create --is-public=True --container-format=docker --disk-format=raw --name tutum/mysql:latest

****************************************

Starting Nova-Docker container

****************************************

# . keystonerc_demo

# nova boot --image "tutum/mysql:latest" --flavor m1.tiny --key-name osxkey --nic net-id=5fcd01ac-bc8e-450d-be67-f0c274edd041 mysql

 

[root@ip-192-169-142-45 ~(keystone_demo)]# nova list

+--------------------------------------+---------------+--------+------------+-------------+-----------------------------------------+
| ID                                   | Name          | Status | Task State | Power State | Networks                                |
+--------------------------------------+---------------+--------+------------+-------------+-----------------------------------------+
| 3dbf981f-f28c-4abe-8fd1-09b8b8cad930 | WordPress     | ACTIVE | -          | Running     | demo_network=70.0.0.16, 192.169.142.153 |
| 39eef361-1329-44d9-b05a-f6b4b8693aa3 | mysql         | ACTIVE | -          | Running     | demo_network=70.0.0.19, 192.169.142.155 |
| 626bd8e0-cf1a-4891-aafc-620c464e8a94 | tutum/hipache | ACTIVE | -          | Running     | demo_network=70.0.0.18, 192.169.142.154 |
+--------------------------------------+---------------+--------+------------+-------------+-----------------------------------------+

[root@ip-192-169-142-45 ~(keystone_demo)]# docker ps -a

CONTAINER ID        IMAGE                          COMMAND               CREATED             STATUS                         PORTS               NAMES
3da1e94892aa        tutum/mysql:latest             "/run.sh"             25 seconds ago      Up 23 seconds                                      nova-39eef361-1329-44d9-b05a-f6b4b8693aa3
77538873a273        tutum/hipache:latest           "/run.sh"             30 minutes ago                                                         condescending_leakey
844c75ca5a0e        tutum/hipache:latest           "/run.sh"             31 minutes ago                                                         condescending_turing
f477605840d0        tutum/hipache:latest           "/run.sh"             42 minutes ago      Up 31 minutes                                      nova-626bd8e0-cf1a-4891-aafc-620c464e8a94
3e2fe064d822        rastasheep/ubuntu-sshd:14.04   "/usr/sbin/sshd -D"   About an hour ago   Exited (0) About an hour ago                       test_sshd
8e79f9d8e357        fedora:latest                  "/bin/bash"           About an hour ago   Exited (0) About an hour ago                       evil_colden
9531ab33db8d        ubuntu:latest                  "/bin/bash"           About an hour ago   Exited (0) About an hour ago                       angry_bardeen
df6f3c9007a7        tutum/wordpress:latest         "/run.sh"             2 hours ago         Up About an hour                                   nova-3dbf981f-f28c-4abe-8fd1-09b8b8cad930

 

[root@ip-192-169-142-45 ~(keystone_demo)]# docker logs 3da1e94892aa

=> An empty or uninitialized MySQL volume is detected in /var/lib/mysql
=> Installing MySQL ...
=> Done!
=> Creating admin user ...
=> Waiting for confirmation of MySQL service startup, trying 0/13 ...
=> Creating MySQL user admin with random password
=> Done!
========================================================================
You can now connect to this MySQL Server using:

mysql -uadmin -pfXs5UarEYaow -h -P

Please remember to change the above password as soon as possible!
MySQL user 'root' has no password but only allows local connections
========================================================================
141218 20:45:31 mysqld_safe Can't log to error log and syslog at the same time.
Remove all --log-error configuration options for --syslog to take effect.
141218 20:45:31 mysqld_safe Logging to '/var/log/mysql/error.log'.
141218 20:45:31 mysqld_safe Starting mysqld daemon with databases from /var/lib/mysql

[root@ip-192-169-142-45 ~(keystone_demo)]# mysql -uadmin -pfXs5UarEYaow -h 192.169.142.155 -P 3306
Welcome to the MariaDB monitor.  Commands end with ; or \g.
Your MySQL connection id is 1
Server version: 5.5.40-0ubuntu0.14.04.1 (Ubuntu)

Copyright (c) 2000, 2014, Oracle, Monty Program Ab and others.

Type 'help;' or '\h' for help. Type '\c' to clear the current input statement.

MySQL [(none)]> show databases ;
+--------------------+
| Database           |
+--------------------+
| information_schema |
| mysql              |
| performance_schema |
+--------------------+
3 rows in set (0.01 sec)

MySQL [(none)]>

*******************************************

Setup Ubuntu 14.04 with SSH access

*******************************************

# docker pull rastasheep/ubuntu-sshd:14.04

# . keystonerc_admin

# docker save rastasheep/ubuntu-sshd:14.04 | glance image-create --is-public=True --container-format=docker --disk-format=raw --name rastasheep/ubuntu-sshd:14.04

# . keystonerc_demo

# nova boot --image "rastasheep/ubuntu-sshd:14.04" --flavor m1.tiny --key-name osxkey --nic net-id=5fcd01ac-bc8e-450d-be67-f0c274edd041 ubuntuTrusty

***********************************************************

Login to dashboard && assign floating IP via dashboard:-

***********************************************************

[root@ip-192-169-142-45 ~(keystone_demo)]# nova list

+--------------------------------------+--------------+---------+------------+-------------+-----------------------------------------+
| ID                                   | Name         | Status  | Task State | Power State | Networks                                |
+--------------------------------------+--------------+---------+------------+-------------+-----------------------------------------+
| 3dbf981f-f28c-4abe-8fd1-09b8b8cad930 | WordPress    | SHUTOFF | -          | Shutdown    | demo_network=70.0.0.16, 192.169.142.153 |
| 7bbf887f-167c-461e-9ee0-dd4d43605c9e | lamp         | ACTIVE  | -          | Running     | demo_network=70.0.0.20, 192.169.142.156 |
| 39eef361-1329-44d9-b05a-f6b4b8693aa3 | mysql        | SHUTOFF | -          | Shutdown    | demo_network=70.0.0.19, 192.169.142.155 |
| f21dc265-958e-4ed0-9251-31c4bbab35f4 | ubuntuTrusty | ACTIVE  | -          | Running     | demo_network=70.0.0.21, 192.169.142.157 |
+--------------------------------------+--------------+---------+------------+-------------+-----------------------------------------+

[root@ip-192-169-142-45 ~(keystone_demo)]# ssh root@192.169.142.157

root@192.169.142.157's password:

Last login: Fri Dec 19 09:19:40 2014 from ip-192-169-142-45.ip.secureserver.net

root@instance-0000000d:~# cat /etc/issue

Ubuntu 14.04.1 LTS \n \l

root@instance-0000000d:~# ifconfig

lo        Link encap:Local Loopback
          inet addr:127.0.0.1  Mask:255.0.0.0
          inet6 addr: ::1/128 Scope:Host
          UP LOOPBACK RUNNING  MTU:65536  Metric:1
          RX packets:0 errors:0 dropped:0 overruns:0 frame:0
          TX packets:0 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:0
          RX bytes:0 (0.0 B)  TX bytes:0 (0.0 B)

nse49711e9-93 Link encap:Ethernet  HWaddr fa:16:3e:32:5e:d8
          inet addr:70.0.0.21  Bcast:70.0.0.255  Mask:255.255.255.0
          inet6 addr: fe80::f816:3eff:fe32:5ed8/64 Scope:Link
          UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
          RX packets:2574 errors:0 dropped:0 overruns:0 frame:0
          TX packets:1653 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:1000
          RX bytes:2257920 (2.2 MB)  TX bytes:255582 (255.5 KB)

root@instance-0000000d:~# df -h

Filesystem                                                                                         Size  Used Avail Use% Mounted on

/dev/mapper/docker-253:1-4600578-76893e146987bf4b58b42ff6ed80892df938ffba108f22c7a4591b18990e0438  9.8G  302M  9.0G   4% /

tmpfs                                                                                              1.9G     0  1.9G   0% /dev

shm                                                                                                 64M     0   64M   0% /dev/shm

/dev/mapper/centos-root                                                                             36G  9.8G   26G  28% /etc/hosts

tmpfs                                                                                              1.9G     0  1.9G   0% /run/secrets

tmpfs                                                                                              1.9G     0  1.9G   0% /proc/kcore

 

References

1. http://cloudssky.com/en/blog/Nova-Docker-on-OpenStack-RDO-Juno/

2. https://www.mirantis.com/openstack-portal/external-tutorials/nova-docker-juno/


LVMiSCSI cinder backend for RDO Juno on CentOS 7

November 9, 2014

This post follows up http://lxer.com/module/newswire/view/207415/index.html. RDO Juno has been installed on the Controller and Compute nodes via packstack as described in the link at lxer.com. The iSCSI initiator implementation on CentOS 7 differs significantly from CentOS 6.5 and is based on the CLI utility targetcli and the target service. With Enterprise Linux 7, both Red Hat and CentOS, there is a big change in the management of iSCSI targets: the software runs as part of the standard systemd structure. Consequently, there are significant changes in the multi-backend cinder architecture of RDO Juno running on CentOS 7 or Fedora 21 utilizing LVM-based iSCSI targets.

Create the following entries in /etc/cinder/cinder.conf on the Controller (which, in the case of a two-node cluster, works as the Storage node as well).

#######################

enabled_backends=lvm51,lvm52

#######################

[lvm51]

iscsi_helper=lioadm

volume_group=cinder-volumes51

volume_driver=cinder.volume.drivers.lvm.LVMISCSIDriver

iscsi_ip_address=192.168.1.127

volume_backend_name=LVM_iSCSI51

[lvm52]

iscsi_helper=lioadm

volume_group=cinder-volumes52

volume_driver=cinder.volume.drivers.lvm.LVMISCSIDriver

iscsi_ip_address=192.168.1.127

volume_backend_name=LVM_iSCSI52
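Since the two stanzas differ only in the numeric suffix, they can also be generated with a small loop. This is just an illustrative sketch: it writes to /tmp/cinder-backends.conf (an assumed scratch path) rather than the live /etc/cinder/cinder.conf, and reuses the backend names, VG names, and IP from the config above:

```shell
# Generate the enabled_backends line plus one [lvmNN] stanza per suffix.
out=/tmp/cinder-backends.conf
echo 'enabled_backends=lvm51,lvm52' > "$out"
for n in 51 52; do
  cat >> "$out" <<EOF

[lvm$n]
iscsi_helper=lioadm
volume_group=cinder-volumes$n
volume_driver=cinder.volume.drivers.lvm.LVMISCSIDriver
iscsi_ip_address=192.168.1.127
volume_backend_name=LVM_iSCSI$n
EOF
done
grep -c '^\[lvm' "$out"
```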

 

VGs cinder-volumes52 and cinder-volumes51 were created on /dev/sda6 and /dev/sdb1 respectively

# pvcreate /dev/sda6

# vgcreate cinder-volumes52  /dev/sda6

Then issue :-

[root@juno1 ~(keystone_admin)]# cinder type-create lvms
+--------------------------------------+------+
|                  ID                  | Name |
+--------------------------------------+------+
| 64414f3a-7770-4958-b422-8db0c3e2f433 | lvms |
+--------------------------------------+------+

[root@juno1 ~(keystone_admin)]# cinder type-create lvmz
+--------------------------------------+------+
|                  ID                  | Name |
+--------------------------------------+------+
| 29917269-d73f-4c28-b295-59bfbda5d044 | lvmz |
+--------------------------------------+------+

[root@juno1 ~(keystone_admin)]# cinder type-list
+--------------------------------------+------+
|                  ID                  | Name |
+--------------------------------------+------+
| 29917269-d73f-4c28-b295-59bfbda5d044 | lvmz |
| 64414f3a-7770-4958-b422-8db0c3e2f433 | lvms |
+--------------------------------------+------+

[root@juno1 ~(keystone_admin)]# cinder type-key lvmz set volume_backend_name=LVM_iSCSI51

[root@juno1 ~(keystone_admin)]# cinder type-key lvms set volume_backend_name=LVM_iSCSI52
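The volume_backend_name key is the link between a cinder volume type and a backend stanza, so it is worth sanity-checking that every name used in a type-key actually exists in cinder.conf. The sketch below does that with an awk one-liner over an embedded sample (so it is self-contained); on a real node you would point it at /etc/cinder/cinder.conf:

```shell
# Build a tiny sample conf, then print each stanza with its backend name.
cat > /tmp/cinder.conf <<'EOF'
[lvm51]
volume_backend_name=LVM_iSCSI51
[lvm52]
volume_backend_name=LVM_iSCSI52
EOF
# -F= splits key=value; s remembers the last [section] header seen.
awk -F= '/^\[/{s=$0} /^volume_backend_name/{print s, $2}' /tmp/cinder.conf
```

This prints one `[section] backend_name` pair per stanza, which can be compared directly against `cinder type-key ... set volume_backend_name=...` commands.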

Then enable and start the target service:-

[root@juno1 ~(keystone_admin)]# systemctl enable target

[root@juno1 ~(keystone_admin)]# systemctl start target

[root@juno1 ~(keystone_admin)]# service target status

Redirecting to /bin/systemctl status target.service
target.service - Restore LIO kernel target configuration
   Loaded: loaded (/usr/lib/systemd/system/target.service; enabled)
   Active: active (exited) since Wed 2014-11-05 13:23:09 MSK; 44min ago
  Process: 1611 ExecStart=/usr/bin/targetctl restore (code=exited, status=0/SUCCESS)
 Main PID: 1611 (code=exited, status=0/SUCCESS)
   CGroup: /system.slice/target.service

Nov 05 13:23:07 juno1.localdomain systemd[1]: Starting Restore LIO kernel target configuration...
Nov 05 13:23:09 juno1.localdomain systemd[1]: Started Restore LIO kernel target configuration.

Now all changes made by creating cinder volumes of types lvms and lvmz (via the dashboard's volume-create dialog with its volume-type dropdown, or via the cinder CLI) will persist in the targetcli> ls output between reboots.

[root@juno1 ~(keystone_boris)]# cinder list

+--------------------------------------+--------+------------------+------+-------------+----------+--------------------------------------+
|                  ID                  | Status |   Display Name   | Size | Volume Type | Bootable |             Attached to              |
+--------------------------------------+--------+------------------+------+-------------+----------+--------------------------------------+
| 3a4f6878-530a-4a28-87bb-92ee256f63ea | in-use | UbuntuUTLV510851 |  5   |     lvmz    |   true   | efb1762e-6782-4895-bf2b-564f14105b5b |
| 51528876-405d-4a15-abc2-61ad72fc7d7e | in-use |   CentOS7LVG51   |  10  |     lvmz    |   true   | ba3e87fa-ee81-42fc-baed-c59ca6c8a100 |
| ca0694ae-7e8d-4c84-aad8-3f178416dec6 | in-use |  VF20LVG520711   |  7   |     lvms    |   true   | 51a20959-0a0c-4ef6-81ec-2edeab6e3588 |
| dc9e31f0-b27f-4400-a666-688365126f67 | in-use | UbuntuUTLV520711 |  7   |     lvms    |   true   | 1fe7d2c3-58ae-4ee8-8f5f-baf334195a59 |
+--------------------------------------+--------+------------------+------+-------------+----------+--------------------------------------+

Compare the 'green'-highlighted volume IDs with the targetcli> ls output

 

  

  

The next snapshot demonstrates lvms && lvmz volumes attached to the corresponding nova instances utilizing the LVMiSCSI cinder backend.

 

On the Compute Node, the iscsiadm output will look as follows :-

[root@juno2 ~]# iscsiadm -m discovery -t sendtargets -p 192.168.1.127

192.168.1.127:3260,1 iqn.2010-10.org.openstack:volume-3a4f6878-530a-4a28-87bb-92ee256f63ea

192.168.1.127:3260,1 iqn.2010-10.org.openstack:volume-ca0694ae-7e8d-4c84-aad8-3f178416dec6

192.168.1.127:3260,1 iqn.2010-10.org.openstack:volume-dc9e31f0-b27f-4400-a666-688365126f67

192.168.1.127:3260,1 iqn.2010-10.org.openstack:volume-51528876-405d-4a15-abc2-61ad72fc7d7e
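Each target name embeds the cinder volume UUID after `volume-`, so a sed pass makes it easy to cross-check the discovery output against `cinder list`. The sample discovery output is embedded below (so the sketch runs anywhere); on a real Compute Node you would pipe the iscsiadm command itself:

```shell
# Two sample discovery lines stand in for live iscsiadm output.
cat > /tmp/targets.txt <<'EOF'
192.168.1.127:3260,1 iqn.2010-10.org.openstack:volume-3a4f6878-530a-4a28-87bb-92ee256f63ea
192.168.1.127:3260,1 iqn.2010-10.org.openstack:volume-ca0694ae-7e8d-4c84-aad8-3f178416dec6
EOF
# Strip everything up to ":volume-", leaving just the cinder volume UUID.
sed 's/.*:volume-//' /tmp/targets.txt
```

The printed UUIDs should match the ID column of `cinder list` on the Controller.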

References

1. https://www.centos.org/forums/viewtopic.php?f=47&t=48591


RDO Juno Set up Two Real Node (Controller+Compute) Gluster 3.5.2 Cluster ML2&OVS&VXLAN on CentOS 7

November 3, 2014

The post below follows up http://cloudssky.com/en/blog/RDO-OpenStack-Juno-ML2-VXLAN-2-Node-Deployment-On-CentOS-7-With-Packstack/; however, the answer file provided here allows creating the Controller && Compute Nodes in a single run. Based on the RDO Juno release as of 10/27/2014, it doesn't require creating the OVS bridge br-ex and OVS port enp2s0 on the Compute Node. It also doesn't install the nova-compute service on the Controller. The Gluster 3.5.2 setup is likewise performed in a way that differs from the similar procedure on the IceHouse && Havana RDO releases. Two boxes were set up, each with 2 NICs (enp2s0, enp5s1), for the Controller && Compute Node setup. Before running `packstack --answer-file=TwoNodeVXLAN.txt`, SELINUX was set to permissive on both nodes. Both enp5s1 interfaces were assigned IPs and set up to support the VXLAN tunnel (192.168.0.127, 192.168.0.137). The firewalld and NetworkManager services were disabled; the IPv4 iptables firewall and the network service were enabled and running. Packstack is bound to the public IP of interface enp2s0, 192.168.1.127; the Compute Node is 192.168.1.137 (see the answer file).

I also have to note that, regarding LVMiSCSI cinder backend support on CentOS 7, the post http://theurbanpenguin.com/wp/?p=3403 is misleading: the service that makes changes done in targetcli persistent between reboots is named "target", not "targetd".

To set up the iSCSI initiator on CentOS 7 (activate LIO kernel support) you have to issue :-
# systemctl enable target
# systemctl start target
# systemctl status target -l
target.service - Restore LIO kernel target configuration
   Loaded: loaded (/usr/lib/systemd/system/target.service; enabled)
   Active: active (exited) since Sat 2014-11-08 14:45:06 MSK; 3h 26min ago
  Process: 1661 ExecStart=/usr/bin/targetctl restore (code=exited, status=0/SUCCESS)
 Main PID: 1661 (code=exited, status=0/SUCCESS)
   CGroup: /system.slice/target.service

Nov 01 14:45:06 juno1.localdomain systemd[1]: Started Restore LIO kernel target configuration.

 

Setup configuration

- Controller node: Nova, Keystone, Cinder, Glance, Neutron (using the Open vSwitch plugin && VXLAN)
– Compute node: Nova (nova-compute), Neutron (openvswitch-agent)

juno1.localdomain   –  Controller (192.168.1.127)

juno2.localdomain   –  Compute   (192.168.1.137)

Answer File :-

[general]

CONFIG_SSH_KEY=/root/.ssh/id_rsa.pub

CONFIG_DEFAULT_PASSWORD=

CONFIG_MARIADB_INSTALL=y

CONFIG_GLANCE_INSTALL=y

CONFIG_CINDER_INSTALL=y

CONFIG_NOVA_INSTALL=y

CONFIG_NEUTRON_INSTALL=y

CONFIG_HORIZON_INSTALL=y

CONFIG_SWIFT_INSTALL=y

CONFIG_CEILOMETER_INSTALL=y

CONFIG_HEAT_INSTALL=n

CONFIG_CLIENT_INSTALL=y

CONFIG_NTP_SERVERS=

CONFIG_NAGIOS_INSTALL=y

EXCLUDE_SERVERS=

CONFIG_DEBUG_MODE=n

CONFIG_CONTROLLER_HOST=192.168.1.127

CONFIG_COMPUTE_HOSTS=192.168.1.137

CONFIG_NETWORK_HOSTS=192.168.1.127

CONFIG_VMWARE_BACKEND=n

CONFIG_UNSUPPORTED=n

CONFIG_VCENTER_HOST=

CONFIG_VCENTER_USER=

CONFIG_VCENTER_PASSWORD=

CONFIG_VCENTER_CLUSTER_NAME=

CONFIG_STORAGE_HOST=192.168.1.127

CONFIG_USE_EPEL=y

CONFIG_REPO=

CONFIG_RH_USER=

CONFIG_SATELLITE_URL=

CONFIG_RH_PW=

CONFIG_RH_OPTIONAL=y

CONFIG_RH_PROXY=

CONFIG_RH_PROXY_PORT=

CONFIG_RH_PROXY_USER=

CONFIG_RH_PROXY_PW=

CONFIG_SATELLITE_USER=

CONFIG_SATELLITE_PW=

CONFIG_SATELLITE_AKEY=

CONFIG_SATELLITE_CACERT=

CONFIG_SATELLITE_PROFILE=

CONFIG_SATELLITE_FLAGS=

CONFIG_SATELLITE_PROXY=

CONFIG_SATELLITE_PROXY_USER=

CONFIG_SATELLITE_PROXY_PW=

CONFIG_AMQP_BACKEND=rabbitmq

CONFIG_AMQP_HOST=192.168.1.127

CONFIG_AMQP_ENABLE_SSL=n

CONFIG_AMQP_ENABLE_AUTH=n

CONFIG_AMQP_NSS_CERTDB_PW=PW_PLACEHOLDER

CONFIG_AMQP_SSL_PORT=5671

CONFIG_AMQP_SSL_CERT_FILE=/etc/pki/tls/certs/amqp_selfcert.pem

CONFIG_AMQP_SSL_KEY_FILE=/etc/pki/tls/private/amqp_selfkey.pem

CONFIG_AMQP_SSL_SELF_SIGNED=y

CONFIG_AMQP_AUTH_USER=amqp_user

CONFIG_AMQP_AUTH_PASSWORD=PW_PLACEHOLDER

CONFIG_MARIADB_HOST=192.168.1.127

CONFIG_MARIADB_USER=root

CONFIG_MARIADB_PW=7207ae344ed04957

CONFIG_KEYSTONE_DB_PW=abcae16b785245c3

CONFIG_KEYSTONE_REGION=RegionOne

CONFIG_KEYSTONE_ADMIN_TOKEN=3ad2de159f9649afb0c342ba57e637d9

CONFIG_KEYSTONE_ADMIN_PW=7049f834927e4468

CONFIG_KEYSTONE_DEMO_PW=bf737b785cfa4398

CONFIG_KEYSTONE_TOKEN_FORMAT=UUID

CONFIG_KEYSTONE_SERVICE_NAME=keystone

CONFIG_GLANCE_DB_PW=41264fc52ffd4fe8

CONFIG_GLANCE_KS_PW=f6a9398960534797

CONFIG_GLANCE_BACKEND=file

CONFIG_CINDER_DB_PW=5ac08c6d09ba4b69

CONFIG_CINDER_KS_PW=c8cb1ecb8c2b4f6f

CONFIG_CINDER_BACKEND=lvm

CONFIG_CINDER_VOLUMES_CREATE=y

CONFIG_CINDER_VOLUMES_SIZE=20G

CONFIG_CINDER_GLUSTER_MOUNTS=

CONFIG_CINDER_NFS_MOUNTS=

CONFIG_CINDER_NETAPP_LOGIN=

CONFIG_CINDER_NETAPP_PASSWORD=

CONFIG_CINDER_NETAPP_HOSTNAME=

CONFIG_CINDER_NETAPP_SERVER_PORT=80

CONFIG_CINDER_NETAPP_STORAGE_FAMILY=ontap_cluster

CONFIG_CINDER_NETAPP_TRANSPORT_TYPE=http

CONFIG_CINDER_NETAPP_STORAGE_PROTOCOL=nfs

CONFIG_CINDER_NETAPP_SIZE_MULTIPLIER=1.0

CONFIG_CINDER_NETAPP_EXPIRY_THRES_MINUTES=720

CONFIG_CINDER_NETAPP_THRES_AVL_SIZE_PERC_START=20

CONFIG_CINDER_NETAPP_THRES_AVL_SIZE_PERC_STOP=60

CONFIG_CINDER_NETAPP_NFS_SHARES_CONFIG=

CONFIG_CINDER_NETAPP_VOLUME_LIST=

CONFIG_CINDER_NETAPP_VFILER=

CONFIG_CINDER_NETAPP_VSERVER=

CONFIG_CINDER_NETAPP_CONTROLLER_IPS=

CONFIG_CINDER_NETAPP_SA_PASSWORD=

CONFIG_CINDER_NETAPP_WEBSERVICE_PATH=/devmgr/v2

CONFIG_CINDER_NETAPP_STORAGE_POOLS=

CONFIG_NOVA_DB_PW=1e1b5aeeeaf342a8

CONFIG_NOVA_KS_PW=d9583177a2444f06

CONFIG_NOVA_SCHED_CPU_ALLOC_RATIO=16.0

CONFIG_NOVA_SCHED_RAM_ALLOC_RATIO=1.5

CONFIG_NOVA_COMPUTE_MIGRATE_PROTOCOL=tcp

CONFIG_NOVA_COMPUTE_PRIVIF=enp5s1
CONFIG_NOVA_NETWORK_MANAGER=nova.network.manager.FlatDHCPManager
CONFIG_NOVA_NETWORK_PUBIF=enp2s0
CONFIG_NOVA_NETWORK_PRIVIF=enp5s1
CONFIG_NOVA_NETWORK_FIXEDRANGE=192.168.32.0/22
CONFIG_NOVA_NETWORK_FLOATRANGE=10.3.4.0/22
CONFIG_NOVA_NETWORK_DEFAULTFLOATINGPOOL=nova
CONFIG_NOVA_NETWORK_AUTOASSIGNFLOATINGIP=n
CONFIG_NOVA_NETWORK_VLAN_START=100
CONFIG_NOVA_NETWORK_NUMBER=1
CONFIG_NOVA_NETWORK_SIZE=255
CONFIG_NEUTRON_KS_PW=808e36e154bd4cee
CONFIG_NEUTRON_DB_PW=0e2b927a21b44737
CONFIG_NEUTRON_L3_EXT_BRIDGE=br-ex
CONFIG_NEUTRON_L2_PLUGIN=ml2
CONFIG_NEUTRON_METADATA_PW=a965cd23ed2f4502
CONFIG_LBAAS_INSTALL=n
CONFIG_NEUTRON_METERING_AGENT_INSTALL=n
CONFIG_NEUTRON_FWAAS=n
CONFIG_NEUTRON_ML2_TYPE_DRIVERS=vxlan
CONFIG_NEUTRON_ML2_TENANT_NETWORK_TYPES=vxlan
CONFIG_NEUTRON_ML2_MECHANISM_DRIVERS=openvswitch
CONFIG_NEUTRON_ML2_FLAT_NETWORKS=*
CONFIG_NEUTRON_ML2_VLAN_RANGES=
CONFIG_NEUTRON_ML2_TUNNEL_ID_RANGES=1001:2000
CONFIG_NEUTRON_ML2_VXLAN_GROUP=239.1.1.2
CONFIG_NEUTRON_ML2_VNI_RANGES=1001:2000
CONFIG_NEUTRON_L2_AGENT=openvswitch
CONFIG_NEUTRON_LB_TENANT_NETWORK_TYPE=local
CONFIG_NEUTRON_LB_VLAN_RANGES=
CONFIG_NEUTRON_LB_INTERFACE_MAPPINGS=
CONFIG_NEUTRON_OVS_TENANT_NETWORK_TYPE=vxlan
CONFIG_NEUTRON_OVS_VLAN_RANGES=
CONFIG_NEUTRON_OVS_BRIDGE_MAPPINGS=physnet1:br-ex
CONFIG_NEUTRON_OVS_BRIDGE_IFACES=
CONFIG_NEUTRON_OVS_TUNNEL_RANGES=1001:2000
CONFIG_NEUTRON_OVS_TUNNEL_IF=enp5s1
CONFIG_NEUTRON_OVS_VXLAN_UDP_PORT=4789

CONFIG_HORIZON_SSL=n

CONFIG_SSL_CERT=

CONFIG_SSL_KEY=

CONFIG_SSL_CACHAIN=

CONFIG_SWIFT_KS_PW=8f75bfd461234c30

CONFIG_SWIFT_STORAGES=

CONFIG_SWIFT_STORAGE_ZONES=1

CONFIG_SWIFT_STORAGE_REPLICAS=1

CONFIG_SWIFT_STORAGE_FSTYPE=ext4

CONFIG_SWIFT_HASH=a60aacbedde7429a

CONFIG_SWIFT_STORAGE_SIZE=2G

CONFIG_PROVISION_DEMO=y

CONFIG_PROVISION_TEMPEST=n

CONFIG_PROVISION_TEMPEST_USER=

CONFIG_PROVISION_TEMPEST_USER_PW=44faa4ebc3da4459

CONFIG_PROVISION_DEMO_FLOATRANGE=172.24.4.224/28

CONFIG_PROVISION_TEMPEST_REPO_URI=https://github.com/openstack/tempest.git

CONFIG_PROVISION_TEMPEST_REPO_REVISION=master

CONFIG_PROVISION_ALL_IN_ONE_OVS_BRIDGE=n

CONFIG_HEAT_DB_PW=PW_PLACEHOLDER

CONFIG_HEAT_AUTH_ENC_KEY=fc3fb7fee61e46b0

CONFIG_HEAT_KS_PW=PW_PLACEHOLDER

CONFIG_HEAT_CLOUDWATCH_INSTALL=n

CONFIG_HEAT_USING_TRUSTS=y

CONFIG_HEAT_CFN_INSTALL=n

CONFIG_HEAT_DOMAIN=heat

CONFIG_HEAT_DOMAIN_ADMIN=heat_admin

CONFIG_HEAT_DOMAIN_PASSWORD=PW_PLACEHOLDER

CONFIG_CEILOMETER_SECRET=19ae0e7430174349

CONFIG_CEILOMETER_KS_PW=337b08d4b3a44753

CONFIG_MONGODB_HOST=192.168.1.127

CONFIG_NAGIOS_PW=02f168ee8edd44e4

Updates on the Controller only :-

[root@juno1 network-scripts(keystone_admin)]# cat ifcfg-br-ex

DEVICE="br-ex"
BOOTPROTO="static"
IPADDR="192.168.1.127"
NETMASK="255.255.255.0"
DNS1="83.221.202.254"
BROADCAST="192.168.1.255"
GATEWAY="192.168.1.1"
NM_CONTROLLED="no"
DEFROUTE="yes"
IPV4_FAILURE_FATAL="yes"
IPV6INIT=no
ONBOOT="yes"
TYPE="OVSIntPort"
OVS_BRIDGE=br-ex
DEVICETYPE="ovs"

[root@juno1 network-scripts(keystone_admin)]# cat ifcfg-enp2s0

DEVICE="enp2s0"
# HWADDR=00:22:15:63:E4:E2
ONBOOT="yes"
TYPE="OVSPort"
DEVICETYPE="ovs"
OVS_BRIDGE=br-ex
NM_CONTROLLED=no
IPV6INIT=no

Setup Gluster Backend for cinder in Juno

*************************************************************************

Update /etc/cinder/cinder.conf to activate the Gluster 3.5.2 backend

*************************************************************************

Gluster 3.5.2 cluster installed per  http://bderzhavets.blogspot.com/2014/08/setup-gluster-352-on-two-node.html

enabled_backends=gluster,lvm52

[gluster]

volume_driver = cinder.volume.drivers.glusterfs.GlusterfsDriver

glusterfs_shares_config = /etc/cinder/shares.conf

glusterfs_mount_point_base = /var/lib/cinder/volumes

volume_backend_name=GLUSTER

[lvm52]

iscsi_helper=lioadm

volume_group=cinder-volumes52

volume_driver=cinder.volume.drivers.lvm.LVMISCSIDriver

iscsi_ip_address=192.168.1.127

volume_backend_name=LVM_iSCSI52

Now follow  http://giuliofidente.com/2013/06/openstack-cinder-configure-multiple-backends.html   :-

[root@juno1 ~(keystone_admin)]# cinder type-create lvms
+--------------------------------------+------+
|                  ID                  | Name |
+--------------------------------------+------+
| 64414f3a-7770-4958-b422-8db0c3e2f433 | lvms |
+--------------------------------------+------+

[root@juno1 ~(keystone_admin)]# cinder type-create gluster
+--------------------------------------+---------+
|                  ID                  |   Name  |
+--------------------------------------+---------+
| 29917269-d73f-4c28-b295-59bfbda5d044 | gluster |
+--------------------------------------+---------+

[root@juno1 ~(keystone_admin)]# cinder type-list
+--------------------------------------+---------+
|                  ID                  |   Name  |
+--------------------------------------+---------+
| 29917269-d73f-4c28-b295-59bfbda5d044 | gluster |
| 64414f3a-7770-4958-b422-8db0c3e2f433 |   lvms  |
+--------------------------------------+---------+

[root@juno1 ~(keystone_admin)]# cinder type-key lvms set volume_backend_name=LVM_iSCSI52

[root@juno1 ~(keystone_admin)]# cinder type-key gluster  set volume_backend_name=GLUSTER

The next step is restarting the cinder services :-

[root@juno1 ~(keystone_demo)]# for i in api scheduler volume ; do service openstack-cinder-${i} restart ; done

[root@juno1 ~(keystone_admin)]# df -h

Filesystem                       Size  Used Avail Use% Mounted on

/dev/mapper/centos01-root00      147G   17G  130G  12% /

devtmpfs                         3.9G     0  3.9G   0% /dev

tmpfs                            3.9G   96K  3.9G   1% /dev/shm

tmpfs                            3.9G  9.1M  3.9G   1% /run

tmpfs                            3.9G     0  3.9G   0% /sys/fs/cgroup

/dev/loop0                       1.9G  6.0M  1.7G   1% /srv/node/swift_loopback

/dev/sda3                        477M  146M  302M  33% /boot

/dev/mapper/centos01-data5        98G  1.4G   97G   2% /data5

192.168.1.127:/cinder-volumes57   98G  1.4G   97G   2% /var/lib/cinder/volumes/8478b56ad61cf67ab9839fb0a5296965

tmpfs                            3.9G  9.1M  3.9G   1% /run/netns

[root@juno1 ~(keystone_demo)]# gluster volume info

Volume Name: cinder-volumes57
Type: Replicate
Volume ID: c1f2e1d2-0b11-426e-af3d-7af0d1d24d5e
Status: Started
Number of Bricks: 1 x 2 = 2
Transport-type: tcp
Bricks:
Brick1: juno1.localdomain:/data5/data-volumes
Brick2: juno2.localdomain:/data5/data-volumes
Options Reconfigured:
auth.allow: 192.168.1.*

[root@juno1 ~(keystone_demo)]# gluster volume status
Status of volume: cinder-volumes57
Gluster process                                    Port    Online  Pid
------------------------------------------------------------------------------
Brick juno1.localdomain:/data5/data-volumes        49152   Y       3806
Brick juno2.localdomain:/data5/data-volumes        49152   Y       3047
NFS Server on localhost                            2049    Y       4146
Self-heal Daemon on localhost                      N/A     Y       4141
NFS Server on juno2.localdomain                    2049    Y       3881
Self-heal Daemon on juno2.localdomain              N/A     Y       3877

Task Status of Volume cinder-volumes57
------------------------------------------------------------------------------

**********************************************

Creating a cinder volume of gluster type :-

**********************************************

[root@juno1 ~(keystone_demo)]# cinder create --volume_type gluster --image-id d83a6fec-ce82-411c-aa11-04cbb34bf2a2 --display_name UbuntuGLS1029 5

[root@juno1 ~(keystone_demo)]# cinder list
+--------------------------------------+--------+---------------+------+-------------+----------+--------------------------------------+
|                  ID                  | Status |  Display Name | Size | Volume Type | Bootable |             Attached to              |
+--------------------------------------+--------+---------------+------+-------------+----------+--------------------------------------+
| ca7ac946-3c4e-4544-ba3a-8cd085d5882b | in-use | UbuntuGLS1029 |  5   |   gluster   |   true   | cdb57658-795a-4a6e-82c9-67bf24acd498 |
+--------------------------------------+--------+---------------+------+-------------+----------+--------------------------------------+
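Creating a bootable volume is asynchronous: `cinder create` returns immediately while the image is copied onto the Gluster share, so the volume passes through "creating"/"downloading" before it is usable. A small polling sketch (the `cinder show` invocation is held in a variable so the helper can be exercised without a live cloud; adjust the timing to taste):

```shell
#!/bin/sh
# Poll a Cinder volume until it settles. CINDER_SHOW is injectable so the
# helper can be tested offline (assumption: "cinder show" prints the usual
# "|  field  |  value  |" table).
CINDER_SHOW=${CINDER_SHOW:-"cinder show"}

wait_for_volume() {
    id=$1; tries=${2:-30}
    while [ "$tries" -gt 0 ]; do
        status=$($CINDER_SHOW "$id" \
            | awk -F'|' '$2 ~ /^ *status *$/ {gsub(/ /,"",$3); print $3; exit}')
        case "$status" in
            available|in-use) echo "$status"; return 0 ;;
            error)            echo "error";   return 1 ;;
        esac
        tries=$((tries - 1)); sleep 5
    done
    return 1
}
```

For example, `wait_for_volume ca7ac946-3c4e-4544-ba3a-8cd085d5882b` would print the final status once the copy completes.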

[root@juno1 ~(keystone_demo)]# nova list
+--------------------------------------+-------------+-----------+------------+-------------+-----------------------------------+
| ID                                   | Name        | Status    | Task State | Power State | Networks                          |
+--------------------------------------+-------------+-----------+------------+-------------+-----------------------------------+
| 5c366eb9-8830-4432-b9bb-06239ae83d8a | CentOS7RS01 | SUSPENDED | -          | Shutdown    | demo_net=40.0.0.25, 192.168.1.161 |
| cdb57658-795a-4a6e-82c9-67bf24acd498 | UbuntuGLS01 | ACTIVE    | -          | Shutdown    | demo_net=40.0.0.22, 192.168.1.157 |
| 39d5312c-e661-4f9f-82ab-db528a7cdc9a | UbuntuRXS52 | ACTIVE    | -          | Running     | demo_net=40.0.0.32, 192.168.1.165 |
| 16911bfa-cf8b-44b7-b46e-8a54c9b3db69 | VF20GLR01   | ACTIVE    | -          | Running     | demo_net=40.0.0.23, 192.168.1.159 |
+--------------------------------------+-------------+-----------+------------+-------------+-----------------------------------+

Get detailed information about the server by ID :-

[root@juno1 ~(keystone_demo)]# nova show 16911bfa-cf8b-44b7-b46e-8a54c9b3db69
+--------------------------------------+----------------------------------------------------------+
| Property                             | Value                                                    |
+--------------------------------------+----------------------------------------------------------+
| OS-DCF:diskConfig                    | AUTO                                                     |
| OS-EXT-AZ:availability_zone          | nova                                                     |
| OS-EXT-STS:power_state               | 1                                                        |
| OS-EXT-STS:task_state                | -                                                        |
| OS-EXT-STS:vm_state                  | active                                                   |
| OS-SRV-USG:launched_at               | 2014-11-01T22:20:12.000000                               |
| OS-SRV-USG:terminated_at             | -                                                        |
| accessIPv4                           |                                                          |
| accessIPv6                           |                                                          |
| config_drive                         |                                                          |
| created                              | 2014-11-01T22:20:04Z                                     |
| demo_net network                     | 40.0.0.23, 192.168.1.159                                 |
| flavor                               | m1.small (2)                                             |
| hostId                               | 2e37cbf1f1145a0eaad46d35cbc8f4df3b579bbaf0404855511732a9 |
| id                                   | 16911bfa-cf8b-44b7-b46e-8a54c9b3db69                     |
| image                                | Attempt to boot from volume - no image supplied          |
| key_name                             | oskey45                                                  |
| metadata                             | {}                                                       |
| name                                 | VF20GLR01                                                |
| os-extended-volumes:volumes_attached | [{"id": "6ff40c2b-c363-42da-8988-5425eca0eea3"}]         |
| progress                             | 0                                                        |
| security_groups                      | default                                                  |
| status                               | ACTIVE                                                   |
| tenant_id                            | b302ecfaf76740189fca446e2e4a9a6e                         |
| updated                              | 2014-11-03T09:29:25Z                                     |
| user_id                              | ad7db1242c7e41ee88bc813873c85da3                         |
+--------------------------------------+----------------------------------------------------------+

[root@juno1 ~(keystone_demo)]# cinder show 6ff40c2b-c363-42da-8988-5425eca0eea3 | grep volume_type
volume_type | gluster

*******************************

Gluster cinder-volumes list :-

*******************************

[root@juno1 data-volumes(keystone_demo)]# cinder list
+--------------------------------------+--------+---------------+------+-------------+----------+--------------------------------------+
|                  ID                  | Status |  Display Name | Size | Volume Type | Bootable |             Attached to              |
+--------------------------------------+--------+---------------+------+-------------+----------+--------------------------------------+
| 6ff40c2b-c363-42da-8988-5425eca0eea3 | in-use |  VF20VLG0211  |  7   |   gluster   |   true   | 16911bfa-cf8b-44b7-b46e-8a54c9b3db69 |
| 8ade9f17-163d-48ca-bea5-bc9c6ea99b17 | in-use |  UbuntuLVS52  |  5   |     lvms    |   true   | 39d5312c-e661-4f9f-82ab-db528a7cdc9a |
| ca7ac946-3c4e-4544-ba3a-8cd085d5882b | in-use | UbuntuGLS1029 |  5   |   gluster   |   true   | cdb57658-795a-4a6e-82c9-67bf24acd498 |
| d8f77604-f984-4e98-81cc-971003d3fb54 | in-use |   CentOS7VLG  |  10  |   gluster   |   true   | 5c366eb9-8830-4432-b9bb-06239ae83d8a |
+--------------------------------------+--------+---------------+------+-------------+----------+--------------------------------------+

[root@juno1 data-volumes(keystone_demo)]# ls -la
total 7219560
drwxrwxr-x.   3 root cinder        4096 Nov  3 19:29 .
drwxr-xr-x.   3 root root            25 Nov  1 19:17 ..
drw-------. 252 root root          4096 Nov  3 19:21 .glusterfs
-rw-rw-rw-.   2 qemu qemu    7516192768 Nov  3 19:06 volume-6ff40c2b-c363-42da-8988-5425eca0eea3
-rw-rw-rw-.   2 qemu qemu    5368709120 Nov  3 19:21 volume-ca7ac946-3c4e-4544-ba3a-8cd085d5882b
-rw-rw-rw-.   2 root root   10737418240 Nov  2 10:57 volume-d8f77604-f984-4e98-81cc-971003d3fb54
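Each file on the brick is named volume-&lt;cinder-id&gt;, so mapping on-disk files back to the `cinder list` output is just a matter of stripping the prefix. A throwaway helper (pure string handling, runs anywhere):

```shell
#!/bin/sh
# Print the Cinder volume ID for each volume-* file name passed as an argument.
volume_ids() {
    for f in "$@"; do
        case "$f" in
            volume-*) echo "${f#volume-}" ;;  # keep only the UUID part
        esac
    done
}

volume_ids volume-6ff40c2b-c363-42da-8988-5425eca0eea3 .glusterfs
# prints: 6ff40c2b-c363-42da-8988-5425eca0eea3
```

In the brick directory itself, `volume_ids volume-*` lists exactly the IDs to look up with `cinder show`.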

References

1. http://cloudssky.com/en/blog/RDO-OpenStack-Juno-ML2-VXLAN-2-Node-Deployment-On-CentOS-7-With-Packstack


RDO Setup Two Real Node (Controller+Compute) IceHouse Neutron ML2&OVS&VXLAN Cluster on CentOS 7

September 5, 2014

As of 07/28/2014, bug https://ask.openstack.org/en/question/35705/attempt-of-rdo-aio-install-icehouse-on-centos-7/ is still pending, and the workaround suggested there should be applied during the two-node RDO packstack installation.

A successful Neutron ML2&OVS&VXLAN multi-node setup requires a correct plugin.ini -> /etc/neutron/plugins/ml2/ml2_conf.ini, which packstack appears to generate with errors.

Two boxes have been set up, each with two NICs (enp2s0, enp5s1), for the Controller && Compute node roles. Before running `packstack --answer-file=TwoNodeVXLAN.txt`, SELINUX was set to permissive on both nodes. Both enp5s1 interfaces were assigned IPs (192.168.0.127, 192.168.0.137) and configured to carry the VXLAN tunnel. The firewalld and NetworkManager services are disabled; the IPv4 iptables firewall and the network service are enabled and running. Packstack is bound to the public IP of interface enp2s0, 192.168.1.127; the Compute node is 192.168.1.137 (see the answer file).

Setup configuration

- Controller node: Nova, Keystone, Cinder, Glance, Neutron (using Open vSwitch plugin && VXLAN)
- Compute node: Nova (nova-compute), Neutron (openvswitch-agent)

icehouse1.localdomain  -  Controller (192.168.1.127)
icehouse2.localdomain  -  Compute    (192.168.1.137)

 [root@icehouse1 ~(keystone_admin)]# cat TwoNodeVXLAN.txt

[general]

CONFIG_SSH_KEY=/root/.ssh/id_rsa.pub

CONFIG_MYSQL_INSTALL=y

CONFIG_GLANCE_INSTALL=y

CONFIG_CINDER_INSTALL=y

CONFIG_NOVA_INSTALL=y

CONFIG_NEUTRON_INSTALL=y

CONFIG_HORIZON_INSTALL=y

CONFIG_SWIFT_INSTALL=n

CONFIG_CEILOMETER_INSTALL=y

CONFIG_HEAT_INSTALL=n

CONFIG_CLIENT_INSTALL=y

CONFIG_NTP_SERVERS=

CONFIG_NAGIOS_INSTALL=y

EXCLUDE_SERVERS=

CONFIG_DEBUG_MODE=n

CONFIG_VMWARE_BACKEND=n

CONFIG_MYSQL_HOST=192.168.1.127

CONFIG_MYSQL_USER=root

CONFIG_MYSQL_PW=a7f0349d1f7a4ab0

CONFIG_AMQP_SERVER=rabbitmq

CONFIG_AMQP_HOST=192.168.1.127

CONFIG_AMQP_ENABLE_SSL=n

CONFIG_AMQP_ENABLE_AUTH=n

CONFIG_AMQP_NSS_CERTDB_PW=0915db728b00409caf4b6e433b756308

CONFIG_AMQP_SSL_PORT=5671

CONFIG_AMQP_SSL_CERT_FILE=/etc/pki/tls/certs/amqp_selfcert.pem

CONFIG_AMQP_SSL_KEY_FILE=/etc/pki/tls/private/amqp_selfkey.pem

CONFIG_AMQP_SSL_SELF_SIGNED=y

CONFIG_AMQP_AUTH_USER=amqp_user

CONFIG_AMQP_AUTH_PASSWORD=f16d26ff54cd4033

CONFIG_KEYSTONE_HOST=192.168.1.127

CONFIG_KEYSTONE_DB_PW=32419736ee454c2c

CONFIG_KEYSTONE_ADMIN_TOKEN=836891519cb640458551556447a5a644

CONFIG_KEYSTONE_ADMIN_PW=4ebab181262d4224

CONFIG_KEYSTONE_DEMO_PW=56eb6360019e45bf

CONFIG_KEYSTONE_TOKEN_FORMAT=PKI

CONFIG_GLANCE_HOST=192.168.1.127

CONFIG_GLANCE_DB_PW=e51feef536104b49

CONFIG_GLANCE_KS_PW=2458775cd64848cb

CONFIG_CINDER_HOST=192.168.1.127

CONFIG_CINDER_DB_PW=bcf3b09c9c4144e2

CONFIG_CINDER_KS_PW=888c59cc113e4489

CONFIG_CINDER_BACKEND=lvm

CONFIG_CINDER_VOLUMES_CREATE=y

CONFIG_CINDER_VOLUMES_SIZE=15G

CONFIG_CINDER_GLUSTER_MOUNTS=

CONFIG_CINDER_NFS_MOUNTS=

CONFIG_VCENTER_HOST=192.168.1.127

CONFIG_VCENTER_USER=

CONFIG_VCENTER_PASSWORD=

CONFIG_NOVA_API_HOST=192.168.1.127

CONFIG_NOVA_CERT_HOST=192.168.1.127

CONFIG_NOVA_VNCPROXY_HOST=192.168.1.127

CONFIG_NOVA_COMPUTE_HOSTS=192.168.1.137

CONFIG_NOVA_CONDUCTOR_HOST=192.168.1.127

CONFIG_NOVA_DB_PW=8cc18e22eaeb4c4d

CONFIG_NOVA_KS_PW=aaf8cf4c60224150

CONFIG_NOVA_SCHED_HOST=192.168.1.127

CONFIG_NOVA_SCHED_CPU_ALLOC_RATIO=16.0

CONFIG_NOVA_SCHED_RAM_ALLOC_RATIO=1.5

CONFIG_NOVA_COMPUTE_PRIVIF=enp5s1

CONFIG_NOVA_NETWORK_HOSTS=192.168.1.127

CONFIG_NOVA_NETWORK_MANAGER=nova.network.manager.FlatDHCPManager

CONFIG_NOVA_NETWORK_PUBIF=enp2s0

CONFIG_NOVA_NETWORK_PRIVIF=enp5s1

CONFIG_NOVA_NETWORK_FIXEDRANGE=192.168.32.0/22

CONFIG_NOVA_NETWORK_FLOATRANGE=10.3.4.0/22

CONFIG_NOVA_NETWORK_DEFAULTFLOATINGPOOL=nova

CONFIG_NOVA_NETWORK_AUTOASSIGNFLOATINGIP=n

CONFIG_NOVA_NETWORK_VLAN_START=100

CONFIG_NOVA_NETWORK_NUMBER=1

CONFIG_NOVA_NETWORK_SIZE=255

CONFIG_VCENTER_HOST=192.168.1.127

CONFIG_VCENTER_USER=

CONFIG_VCENTER_PASSWORD=

CONFIG_VCENTER_CLUSTER_NAME=

CONFIG_NEUTRON_SERVER_HOST=192.168.1.127

CONFIG_NEUTRON_KS_PW=5f11f559abc94440

CONFIG_NEUTRON_DB_PW=0302dcfeb69e439f

CONFIG_NEUTRON_L3_HOSTS=192.168.1.127

CONFIG_NEUTRON_L3_EXT_BRIDGE=br-ex

CONFIG_NEUTRON_DHCP_HOSTS=192.168.1.127

CONFIG_NEUTRON_LBAAS_HOSTS=

CONFIG_NEUTRON_L2_PLUGIN=ml2

CONFIG_NEUTRON_METADATA_HOSTS=192.168.1.127

CONFIG_NEUTRON_METADATA_PW=227f7bbc8b6f4f74

############################################

CONFIG_NEUTRON_ML2_TYPE_DRIVERS=vxlan

CONFIG_NEUTRON_ML2_TENANT_NETWORK_TYPES=vxlan

############################################

CONFIG_NEUTRON_ML2_MECHANISM_DRIVERS=openvswitch

CONFIG_NEUTRON_ML2_FLAT_NETWORKS=*

CONFIG_NEUTRON_ML2_VLAN_RANGES=

CONFIG_NEUTRON_ML2_TUNNEL_ID_RANGES=1001:2000

CONFIG_NEUTRON_ML2_VXLAN_GROUP=239.1.1.2

CONFIG_NEUTRON_ML2_VNI_RANGES=1001:2000

CONFIG_NEUTRON_L2_AGENT=openvswitch

CONFIG_NEUTRON_LB_TENANT_NETWORK_TYPE=local

CONFIG_NEUTRON_LB_VLAN_RANGES=

CONFIG_NEUTRON_LB_INTERFACE_MAPPINGS=

#########################################

CONFIG_NEUTRON_OVS_TENANT_NETWORK_TYPE=vxlan

CONFIG_NEUTRON_OVS_VLAN_RANGES=

CONFIG_NEUTRON_OVS_BRIDGE_MAPPINGS=physnet1:br-ex

CONFIG_NEUTRON_OVS_BRIDGE_IFACES=

CONFIG_NEUTRON_OVS_TUNNEL_RANGES=1001:2000

CONFIG_NEUTRON_OVS_TUNNEL_IF=enp5s1

########################################

CONFIG_NEUTRON_OVS_VXLAN_UDP_PORT=4789

CONFIG_OSCLIENT_HOST=192.168.1.127

CONFIG_HORIZON_HOST=192.168.1.127

CONFIG_HORIZON_SSL=n

CONFIG_SSL_CERT=

CONFIG_SSL_KEY=

CONFIG_SWIFT_PROXY_HOSTS=192.168.1.127

CONFIG_SWIFT_KS_PW=63d3108083ac495b

CONFIG_SWIFT_STORAGE_HOSTS=192.168.1.127

CONFIG_SWIFT_STORAGE_ZONES=1

CONFIG_SWIFT_STORAGE_REPLICAS=1

CONFIG_SWIFT_STORAGE_FSTYPE=ext4

CONFIG_SWIFT_HASH=ebf91dbf930c49ca

CONFIG_SWIFT_STORAGE_SIZE=2G

CONFIG_PROVISION_DEMO=y

CONFIG_PROVISION_TEMPEST=n

CONFIG_PROVISION_DEMO_FLOATRANGE=172.24.4.224/28

CONFIG_PROVISION_TEMPEST_REPO_URI=https://github.com/openstack/tempest.git

CONFIG_PROVISION_TEMPEST_REPO_REVISION=master

CONFIG_PROVISION_ALL_IN_ONE_OVS_BRIDGE=n

CONFIG_HEAT_HOST=192.168.1.127

CONFIG_HEAT_DB_PW=f0be2b0fa2044183

CONFIG_HEAT_AUTH_ENC_KEY=29419b1f4e574e5e

CONFIG_HEAT_KS_PW=d5c39c630c364c5b

CONFIG_HEAT_CLOUDWATCH_INSTALL=n

CONFIG_HEAT_CFN_INSTALL=n

CONFIG_HEAT_CLOUDWATCH_HOST=192.168.1.127

CONFIG_HEAT_CFN_HOST=192.168.1.127

CONFIG_CEILOMETER_HOST=192.168.1.127

CONFIG_CEILOMETER_SECRET=d1ed1459830e4288

CONFIG_CEILOMETER_KS_PW=84f18f2e478f4230

CONFIG_MONGODB_HOST=192.168.1.127

CONFIG_NAGIOS_HOST=192.168.1.127

CONFIG_NAGIOS_PW=e2d02c03b5664ffe

CONFIG_USE_EPEL=y

CONFIG_REPO=

CONFIG_RH_USER=

CONFIG_RH_PW=

CONFIG_RH_BETA_REPO=n

CONFIG_SATELLITE_URL=

CONFIG_SATELLITE_USER=

CONFIG_SATELLITE_PW=

CONFIG_SATELLITE_AKEY=

CONFIG_SATELLITE_CACERT=

CONFIG_SATELLITE_PROFILE=

CONFIG_SATELLITE_FLAGS=

CONFIG_SATELLITE_PROXY=

CONFIG_SATELLITE_PROXY_USER=

CONFIG_SATELLITE_PROXY_PW=
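The VXLAN-relevant keys are scattered through the answer file, and the ML2 and OVS sections must agree on the tunnel ID range for tenant networks to come up. A quick grep-based sanity check, demonstrated on an inline sample (point it at the real TwoNodeVXLAN.txt instead):

```shell
#!/bin/sh
# Verify that the ML2 VNI range and the OVS tunnel range in a packstack
# answer file agree, as they must for VXLAN tenant networks.
check_vxlan_ranges() {
    file=$1
    ml2=$(sed -n 's/^CONFIG_NEUTRON_ML2_VNI_RANGES=//p' "$file")
    ovs=$(sed -n 's/^CONFIG_NEUTRON_OVS_TUNNEL_RANGES=//p' "$file")
    [ -n "$ml2" ] && [ "$ml2" = "$ovs" ]
}

# Self-contained demo on a sample fragment:
cat > /tmp/answers.sample <<'EOF'
CONFIG_NEUTRON_ML2_VNI_RANGES=1001:2000
CONFIG_NEUTRON_OVS_TUNNEL_RANGES=1001:2000
EOF
check_vxlan_ranges /tmp/answers.sample && echo "ranges agree"
```

Running `check_vxlan_ranges TwoNodeVXLAN.txt` before launching packstack catches a mismatch early, instead of after a failed deployment.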

[root@icehouse1 ~(keystone_admin)]# cat /etc/neutron/plugin.ini
[ml2]
type_drivers = vxlan
tenant_network_types = vxlan
mechanism_drivers =openvswitch
[ml2_type_flat]
[ml2_type_vlan]
[ml2_type_gre]
[ml2_type_vxlan]
vni_ranges =1001:2000
vxlan_group =239.1.1.2
[OVS]
local_ip=192.168.0.127
enable_tunneling=True
integration_bridge=br-int
tunnel_bridge=br-tun
[securitygroup]
enable_security_group = True
firewall_driver=neutron.agent.linux.iptables_firewall.OVSHybridIptablesFirewallDriver
[agent]
polling_interval=2

[root@icehouse1 ~(keystone_admin)]# ls -l /etc/neutron
total 64
-rw-r--r--. 1 root root      193 Jul 29 16:15 api-paste.ini
-rw-r-----. 1 root neutron  3853 Jul 29 16:14 dhcp_agent.ini
-rw-r-----. 1 root neutron   208 Jul 29 16:15 fwaas_driver.ini
-rw-r-----. 1 root neutron  3431 Jul 29 16:14 l3_agent.ini
-rw-r-----. 1 root neutron  1400 Jun  8 01:38 lbaas_agent.ini
-rw-r-----. 1 root neutron  1481 Jul 29 16:15 metadata_agent.ini
-rw-r-----. 1 root neutron 19150 Jul 29 16:15 neutron.conf
lrwxrwxrwx. 1 root root       37 Jul 29 16:14 plugin.ini -> /etc/neutron/plugins/ml2/ml2_conf.ini
-rw-r--r--. 1 root root      452 Jul 29 17:11 plugin.out
drwxr-xr-x. 4 root root       34 Jul 29 16:14 plugins
-rw-r-----. 1 root neutron  6148 Jun  8 01:38 policy.json
-rw-r--r--. 1 root root       78 Jul  2 15:11 release
-rw-r--r--. 1 root root     1216 Jun  8 01:38 rootwrap.conf

On Controller

[root@icehouse1 ~(keystone_admin)]# ovs-vsctl show
2742fa6e-78bf-440e-a2c1-cb48242ea565
    Bridge br-ex
        Port phy-br-ex
            Interface phy-br-ex
        Port "qg-76f29fee-9c"
            Interface "qg-76f29fee-9c"
                type: internal
        Port br-ex
            Interface br-ex
                type: internal
        Port "enp2s0"
            Interface "enp2s0"
    Bridge br-tun
        Port "vxlan-c0a80089"
            Interface "vxlan-c0a80089"
                type: vxlan
                options: {in_key=flow, local_ip="192.168.0.127", out_key=flow, remote_ip="192.168.0.137"}
        Port patch-int
            Interface patch-int
                type: patch
                options: {peer=patch-tun}
        Port br-tun
            Interface br-tun
                type: internal
    Bridge br-int
        Port "qr-8cad61e3-ce"
            tag: 1
            Interface "qr-8cad61e3-ce"
                type: internal
        Port patch-tun
            Interface patch-tun
                type: patch
                options: {peer=patch-int}
        Port "tapff8659ee-8d"
            tag: 1
            Interface "tapff8659ee-8d"
                type: internal
        Port br-int
            Interface br-int
                type: internal
        Port int-br-ex
            Interface int-br-ex
    ovs_version: "2.0.0"

On Compute

[root@icehouse2 ~]# ovs-vsctl show
642d8c9f-116e-4b44-842a-e975e506fe24
    Bridge br-ex
        Port phy-br-ex
            Interface phy-br-ex
        Port br-ex
            Interface br-ex
                type: internal
    Bridge br-tun
        Port br-tun
            Interface br-tun
                type: internal
        Port patch-int
            Interface patch-int
                type: patch
                options: {peer=patch-tun}
        Port "vxlan-c0a8007f"
            Interface "vxlan-c0a8007f"
                type: vxlan
                options: {in_key=flow, local_ip="192.168.0.137", out_key=flow, remote_ip="192.168.0.127"}
    Bridge br-int
        Port patch-tun
            Interface patch-tun
                type: patch
                options: {peer=patch-int}
        Port int-br-ex
            Interface int-br-ex
        Port "qvodc2c598a-b3"
            tag: 1
            Interface "qvodc2c598a-b3"
        Port br-int
            Interface br-int
                type: internal
        Port "qvo25cbd1fa-96"
            tag: 1
            Interface "qvo25cbd1fa-96"
    ovs_version: "2.0.0"
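Note that OVS encodes the peer's tunnel IP in the VXLAN port name as hex: vxlan-c0a80089 on the controller points at remote_ip 192.168.0.137, and vxlan-c0a8007f on the compute node points at 192.168.0.127. A one-liner to do the conversion when matching ports to hosts (plain shell arithmetic, nothing OpenStack-specific):

```shell
#!/bin/sh
# Convert a peer IP to the hex suffix used in OVS vxlan-<hex> port names.
vxlan_port_name() {
    IFS=. read -r a b c d <<EOF
$1
EOF
    printf 'vxlan-%02x%02x%02x%02x\n' "$a" "$b" "$c" "$d"
}

vxlan_port_name 192.168.0.137   # -> vxlan-c0a80089
vxlan_port_name 192.168.0.127   # -> vxlan-c0a8007f
```

This makes it easy to confirm, from `ovs-vsctl show` alone, which tunnel endpoint each br-tun port belongs to.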




RDO IceHouse Setup Two Node (Controller+Compute) Neutron ML2&OVS&VLAN Cluster on Fedora 20

June 22, 2014

Two KVMs have been created, each with two virtual NICs (eth0, eth1), for the Controller && Compute node roles. Before running `packstack --answer-file=TwoNodeML2&OVS&VLAN.txt`, SELINUX was set to permissive on both nodes. Both eth1 interfaces were assigned IPs from the VLAN libvirt subnet before installation and set to promiscuous mode (192.168.122.127, 192.168.122.137). Packstack is bound to the public IP of eth0, 192.169.142.127; the Compute node is 192.169.142.137.

The answer file used by packstack is available at http://textuploader.com/k9xo

[root@ip-192-169-142-127 ~(keystone_admin)]# openstack-status
== Nova services ==
openstack-nova-api:                     active
openstack-nova-cert:                    active
openstack-nova-compute:                 inactive  (disabled on boot)
openstack-nova-network:                 inactive  (disabled on boot)
openstack-nova-scheduler:               active
openstack-nova-volume:                  inactive  (disabled on boot)
openstack-nova-conductor:               active
== Glance services ==
openstack-glance-api:                   active
openstack-glance-registry:              active
== Keystone service ==
openstack-keystone:                     active
== Horizon service ==
openstack-dashboard:                    active
== neutron services ==
neutron-server:                         active
neutron-dhcp-agent:                     active
neutron-l3-agent:                       active
neutron-metadata-agent:                 active
neutron-lbaas-agent:                    inactive  (disabled on boot)
neutron-openvswitch-agent:              active
neutron-linuxbridge-agent:              inactive  (disabled on boot)
neutron-ryu-agent:                      inactive  (disabled on boot)
neutron-nec-agent:                      inactive  (disabled on boot)
neutron-mlnx-agent:                     inactive  (disabled on boot)
== Cinder services ==
openstack-cinder-api:                   active
openstack-cinder-scheduler:             active
openstack-cinder-volume:                active
openstack-cinder-backup:                inactive  (disabled on boot)
== Ceilometer services ==
openstack-ceilometer-api:               failed
openstack-ceilometer-central:           active
openstack-ceilometer-compute:           inactive  (disabled on boot)
openstack-ceilometer-collector:         active
openstack-ceilometer-alarm-notifier:    active
openstack-ceilometer-alarm-evaluator:   active
== Support services ==
openvswitch:                            active
dbus:                                   active
tgtd:                                   active
rabbitmq-server:                        active
memcached:                              active
== Keystone users ==
+———————————-+————+———+———————-+
|                id                |    name    | enabled |        email         |
+———————————-+————+———+———————-+
| 42ceb5a601b041f0a5669868dd7f7663 |   admin    |   True  |    test@test.com     |
| d602599e69904691a6094d86f07b6121 | ceilometer |   True  | ceilometer@localhost |
| cc11c36f6e9a4bb7b050db7a380a51db |   cinder   |   True  |   cinder@localhost   |
| c3b1e25936a241bfa63c791346f179fc |   glance   |   True  |   glance@localhost   |
| d2bfcd4e6fc44478899b0a2544df0b00 |  neutron   |   True  |  neutron@localhost   |
| 3d572a8e32b94ac09dd3318cd84fd932 |    nova    |   True  |    nova@localhost    |
+———————————-+————+———+———————-+
== Glance images ==
+————————————–+—————–+————-+——————+———–+——–+
| ID                                   | Name            | Disk Format | Container Format | Size      | Status |
+————————————–+—————–+————-+——————+———–+——–+
| 898a4245-d191-46b8-ac87-e0f1e1873cb1 | CirrOS31        | qcow2       | bare             | 13147648  | active |
| c4647c90-5160-48b1-8b26-dba69381b6fa | Ubuntu 06/18/14 | qcow2       | bare             | 254149120 | active |
+————————————–+—————–+————-+——————+———–+——–+
== Nova managed services ==
+------------------+----------------------------------------+----------+---------+-------+----------------------------+-----------------+
| Binary           | Host                                   | Zone     | Status  | State | Updated_at                 | Disabled Reason |
+------------------+----------------------------------------+----------+---------+-------+----------------------------+-----------------+
| nova-consoleauth | ip-192-169-142-127.ip.secureserver.net | internal | enabled | up    | 2014-06-22T10:39:20.000000 | -               |
| nova-scheduler   | ip-192-169-142-127.ip.secureserver.net | internal | enabled | up    | 2014-06-22T10:39:21.000000 | -               |
| nova-conductor   | ip-192-169-142-127.ip.secureserver.net | internal | enabled | up    | 2014-06-22T10:39:23.000000 | -               |
| nova-cert        | ip-192-169-142-127.ip.secureserver.net | internal | enabled | up    | 2014-06-22T10:39:20.000000 | -               |
| nova-compute     | ip-192-169-142-137.ip.secureserver.net | nova     | enabled | up    | 2014-06-22T10:39:23.000000 | -               |
+------------------+----------------------------------------+----------+---------+-------+----------------------------+-----------------+
== Nova networks ==
+--------------------------------------+---------+------+
| ID                                   | Label   | Cidr |
+--------------------------------------+---------+------+
| 577b7ba7-adad-4051-a03f-787eb8bd55f6 | public  | -    |
| 70298098-a022-4a6b-841f-cef13524d86f | private | -    |
| 7459c84b-b460-4da2-8f24-e0c840be2637 | int     | -    |
+--------------------------------------+---------+------+
== Nova instance flavors ==
+----+-----------+-----------+------+-----------+------+-------+-------------+-----------+
| ID | Name      | Memory_MB | Disk | Ephemeral | Swap | VCPUs | RXTX_Factor | Is_Public |
+----+-----------+-----------+------+-----------+------+-------+-------------+-----------+
| 1  | m1.tiny   | 512       | 1    | 0         |      | 1     | 1.0         | True      |
| 2  | m1.small  | 2048      | 20   | 0         |      | 1     | 1.0         | True      |
| 3  | m1.medium | 4096      | 40   | 0         |      | 2     | 1.0         | True      |
| 4  | m1.large  | 8192      | 80   | 0         |      | 4     | 1.0         | True      |
| 5  | m1.xlarge | 16384     | 160  | 0         |      | 8     | 1.0         | True      |
+----+-----------+-----------+------+-----------+------+-------+-------------+-----------+
== Nova instances ==
+--------------------------------------+-------------+-----------+------------+-------------+------------------------------------+
| ID                                   | Name        | Status    | Task State | Power State | Networks                           |
+--------------------------------------+-------------+-----------+------------+-------------+------------------------------------+
| 388bbe10-87b2-40e5-a6ee-b87b05116d51 | CirrOS445   | ACTIVE    | -          | Running     | private=30.0.0.14, 192.169.142.155 |
| 4d380c79-3213-45c0-8e4c-cef2dd19836d | UbuntuSRV01 | SUSPENDED | -          | Shutdown    | private=30.0.0.13, 192.169.142.154 |
+--------------------------------------+-------------+-----------+------------+-------------+------------------------------------+

[root@ip-192-169-142-127 ~(keystone_admin)]# nova-manage service list
Binary           Host                                 Zone             Status     State Updated_At
nova-consoleauth ip-192-169-142-127.ip.secureserver.net internal         enabled    :-)   2014-06-22 10:40:00
nova-scheduler   ip-192-169-142-127.ip.secureserver.net internal         enabled    :-)   2014-06-22 10:40:01
nova-conductor   ip-192-169-142-127.ip.secureserver.net internal         enabled    :-)   2014-06-22 10:40:03
nova-cert        ip-192-169-142-127.ip.secureserver.net internal         enabled    :-)   2014-06-22 10:40:00
nova-compute     ip-192-169-142-137.ip.secureserver.net nova             enabled    :-)   2014-06-22 10:40:03

[root@ip-192-169-142-127 ~(keystone_admin)]# neutron agent-list
+--------------------------------------+--------------------+----------------------------------------+-------+----------------+
| id                                   | agent_type         | host                                   | alive | admin_state_up |
+--------------------------------------+--------------------+----------------------------------------+-------+----------------+
| 61160392-4c97-4e8f-a902-1e55867e4425 | DHCP agent         | ip-192-169-142-127.ip.secureserver.net | :-)   | True           |
| 6cd022b9-9eb8-4d1e-9991-01dfe678eba5 | Open vSwitch agent | ip-192-169-142-137.ip.secureserver.net | :-)   | True           |
| 893a1a71-5709-48e9-b1a4-11e02f5eca15 | Metadata agent     | ip-192-169-142-127.ip.secureserver.net | :-)   | True           |
| bb29c2dc-2db6-487c-a262-32cecf85c608 | L3 agent           | ip-192-169-142-127.ip.secureserver.net | :-)   | True           |
| d7456233-53ba-4ae4-8936-3448f6ea9d65 | Open vSwitch agent | ip-192-169-142-127.ip.secureserver.net | :-)   | True           |
+--------------------------------------+--------------------+----------------------------------------+-------+----------------+

[root@ip-192-169-142-127 network-scripts(keystone_admin)]# cat ifcfg-br-ex
DEVICE="br-ex"
BOOTPROTO="static"
IPADDR="192.169.142.127"
NETMASK="255.255.255.0"
DNS1="83.221.202.254"
BROADCAST="192.169.142.255"
GATEWAY="192.169.142.1"
NM_CONTROLLED="no"
DEFROUTE="yes"
IPV4_FAILURE_FATAL="yes"
IPV6INIT=no
ONBOOT="yes"
TYPE="OVSBridge"
DEVICETYPE="ovs"

 [root@ip-192-169-142-127 network-scripts(keystone_admin)]# cat ifcfg-eth0
DEVICE="eth0"
# HWADDR=90:E6:BA:2D:11:EB
ONBOOT="yes"
TYPE="OVSPort"
DEVICETYPE="ovs"
OVS_BRIDGE=br-ex
NM_CONTROLLED=no
IPV6INIT=no

 [root@ip-192-169-142-127 network-scripts(keystone_admin)]# cat ifcfg-eth1
TYPE=Ethernet
BOOTPROTO=none
DEVICE=eth1
ONBOOT=yes
IPADDR=192.168.122.127
PREFIX=24
# HWADDR=52:54:00:EE:94:93
NM_CONTROLLED=no

[root@ip-192-169-142-127 ~(keystone_admin)]# ovs-vsctl show
86e16ac0-c2e6-4eb4-a311-cee56fe86800
Bridge br-ex
Port "eth0"
Interface "eth0"
Port "qg-068e0e7a-95"
Interface "qg-068e0e7a-95"
type: internal
Port br-ex
Interface br-ex
type: internal
Bridge "br-eth1"
Port "eth1"
Interface "eth1"
Port "phy-br-eth1"
Interface "phy-br-eth1"
Port "br-eth1"
Interface "br-eth1"
type: internal
Bridge br-int
Port "qr-16b1ea2b-fc"
tag: 1
Interface "qr-16b1ea2b-fc"
type: internal
Port "qr-2bb007df-e1"
tag: 2
Interface "qr-2bb007df-e1"
type: internal
Port "tap1c48d234-23"
tag: 2
Interface "tap1c48d234-23"
type: internal
Port br-int
Interface br-int
type: internal
Port "tap26440f58-b0"
tag: 1
Interface "tap26440f58-b0"
type: internal
Port "int-br-eth1"
Interface "int-br-eth1"
ovs_version: "2.1.2"

[root@ip-192-169-142-127 neutron]# cat plugin.ini
[ml2]
type_drivers = vlan
tenant_network_types = vlan
mechanism_drivers = openvswitch
[ml2_type_vlan]
network_vlan_ranges = physnet1:100:200
[ovs]
network_vlan_ranges = physnet1:100:200
tenant_network_type = vlan
enable_tunneling = False
integration_bridge = br-int
bridge_mappings = physnet1:br-eth1
local_ip = 192.168.122.127
[AGENT]
polling_interval = 2
[SECURITYGROUP]
firewall_driver = neutron.agent.linux.iptables_firewall.OVSHybridIptablesFirewallDriver
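The `network_vlan_ranges` value above maps a physical network name to the pool of VLAN IDs Neutron may hand out to tenant networks. As a minimal sketch of what that setting means (the parsing itself is only an illustration; the variable names are mine), the `physnet1:100:200` string splits into three fields:

```shell
#!/bin/sh
# Parse a network_vlan_ranges entry of the form <physnet>:<min>:<max>,
# the value used in the plugin.ini above.
range="physnet1:100:200"
physnet=${range%%:*}
min=$(echo "$range" | cut -d: -f2)
max=$(echo "$range" | cut -d: -f3)
echo "tenant networks on $physnet get VLAN IDs $min-$max"
# Sanity check: the pool must be a valid 802.1Q VLAN ID range
[ "$min" -ge 1 ] && [ "$max" -le 4094 ] && [ "$min" -le "$max" ] && echo "range OK"
```

Each tenant network created against `physnet1` consumes one ID from this pool, and `bridge_mappings` ties `physnet1` to the `br-eth1` OVS bridge.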

Checksum offloading disabled on eth1 of Compute Node
[root@ip-192-169-142-137 neutron]# /usr/sbin/ethtool --offload eth1 tx off
Actual changes:
tx-checksumming: off
    tx-checksum-ip-generic: off
tcp-segmentation-offload: off
    tx-tcp-segmentation: off [requested on]
    tx-tcp-ecn-segmentation: off [requested on]
    tx-tcp6-segmentation: off [requested on]
udp-fragmentation-offload: off [requested on]
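The `ethtool` change above does not survive a reboot. One way to persist it (an assumption on my part: this relies on an initscripts version whose `ETHTOOL_OPTS` accepts full option strings, not just `-s` settings) is to add the offload flags to the interface's ifcfg file:

```shell
# /etc/sysconfig/network-scripts/ifcfg-eth1 (fragment -- assumed setup,
# not taken from the nodes above). The network service re-applies
# ETHTOOL_OPTS each time eth1 comes up, so tx checksumming stays off
# across reboots.
ETHTOOL_OPTS="-K eth1 tx off"
```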

Two Real Node (Controller+Compute) IceHouse Neutron OVS&GRE Cluster on Fedora 20

June 4, 2014

Two boxes have been set up, each with 2 NICs (p37p1, p4p1), for the Controller && Compute Node roles. Before running `packstack --answer-file=TwoRealNodeOVS&GRE.txt`, SELINUX was set to permissive on both nodes. Both p4p1 interfaces were assigned IPs and set to promiscuous mode (192.168.0.127, 192.168.0.137). The firewalld and NetworkManager services are disabled; the IPv4 iptables firewall and the network service are enabled and running. Packstack is bound to the public IP of interface p37p1, 192.168.1.127; the Compute Node is 192.168.1.137 (view the answer-file).

 Setup configuration

– Controller node: Nova, Keystone, Cinder, Glance, Neutron (using Open vSwitch plugin && GRE)
– Compute node: Nova (nova-compute), Neutron (openvswitch-agent)

icehouse1.localdomain   –  Controller (192.168.1.127)

icehouse2.localdomain   –  Compute   (192.168.1.137)

Post packstack install  updates :-

1. nova.conf && metadata_agent.ini on Controller per

Two Real Node IceHouse Neutron OVS&GRE

These updates enable nova-api to listen on port 9697.

View section –

“Metadata support configured on Controller+NeutronServer Node”

 2. File /etc/sysconfig/iptables updated on both nodes with lines :-

*filter section

-A INPUT -p gre -j ACCEPT
-A OUTPUT -p gre -j ACCEPT

Service iptables restarted 

 ***************************************

 On Controller+NeutronServer

 ***************************************

[root@icehouse1 network-scripts(keystone_admin)]# cat ifcfg-br-ex
DEVICE="br-ex"
BOOTPROTO="static"
IPADDR="192.168.1.127"
NETMASK="255.255.255.0"
DNS1="83.221.202.254"
BROADCAST="192.168.1.255"
GATEWAY="192.168.1.1"
NM_CONTROLLED="no"
DEFROUTE="yes"
IPV4_FAILURE_FATAL="yes"
IPV6INIT=no
ONBOOT="yes"
TYPE="OVSBridge"
DEVICETYPE="ovs"

[root@icehouse1 network-scripts(keystone_admin)]# cat ifcfg-p37p1
DEVICE=p37p1
ONBOOT="yes"
TYPE="OVSPort"
DEVICETYPE="ovs"
OVS_BRIDGE=br-ex
NM_CONTROLLED=no
IPV6INIT=no

[root@icehouse1 network-scripts(keystone_admin)]# cat ifcfg-p4p1
TYPE=Ethernet
BOOTPROTO=none
DEFROUTE=yes
IPV4_FAILURE_FATAL=no
IPV6INIT=yes
IPV6_AUTOCONF=yes
IPV6_DEFROUTE=yes
IPV6_FAILURE_FATAL=no
DEVICE=p4p1
UUID=dbc361f1-805b-4f57-8150-cbc24ab7ee1a
ONBOOT=yes
IPADDR=192.168.0.127
PREFIX=24
# GATEWAY=192.168.0.1
DNS1=83.221.202.254
# HWADDR=00:E0:53:13:17:4C
IPV6_PEERDNS=yes
IPV6_PEERROUTES=yes
NM_CONTROLLED=no

[root@icehouse1 network-scripts(keystone_admin)]# ovs-vsctl show
119e5be5-5ef6-4f39-875c-ab1dfdb18972
Bridge br-int
Port "qr-209f67c4-b1"
tag: 1
Interface "qr-209f67c4-b1"
type: internal
Port br-int
Interface br-int
type: internal
Port patch-tun
Interface patch-tun
type: patch
options: {peer=patch-int}
Port "tapb5da1c7e-50"
tag: 1
Interface "tapb5da1c7e-50"
type: internal
Bridge br-ex
Port "qg-22a1fffe-91"
Interface "qg-22a1fffe-91"
type: internal
Port "p37p1"
Interface "p37p1"
Port br-ex
Interface br-ex
type: internal
Bridge br-tun
Port patch-int
Interface patch-int
type: patch
options: {peer=patch-tun}
Port br-tun
Interface br-tun
type: internal
Port "gre-1"
Interface "gre-1"
type: gre
options: {in_key=flow, local_ip="192.168.0.127", out_key=flow, remote_ip="192.168.0.137"}
ovs_version: "2.1.2"

**********************************

On Compute

**********************************

[root@icehouse2 network-scripts]# cat ifcfg-p37p1
TYPE=Ethernet
BOOTPROTO=none
DEFROUTE=yes
IPV4_FAILURE_FATAL=no
IPV6INIT=yes
IPV6_AUTOCONF=yes
IPV6_DEFROUTE=yes
IPV6_FAILURE_FATAL=no
DEVICE=p37p1
UUID=b29ecd0e-7093-4ba9-8a2d-79ac74e93ea5
ONBOOT=yes
IPADDR=192.168.1.137
PREFIX=24
GATEWAY=192.168.1.1
DNS1=83.221.202.254
HWADDR=90:E6:BA:2D:11:EB
IPV6_PEERDNS=yes
IPV6_PEERROUTES=yes
NM_CONTROLLED=no

[root@icehouse2 network-scripts]# cat ifcfg-p4p1
TYPE=Ethernet
BOOTPROTO=none
DEFROUTE=yes
IPV4_FAILURE_FATAL=no
IPV6INIT=yes
IPV6_AUTOCONF=yes
IPV6_DEFROUTE=yes
IPV6_FAILURE_FATAL=no
DEVICE=p4p1
UUID=a57d6dd3-32fe-4a9f-a6d0-614e004bfdf6
ONBOOT=yes
IPADDR=192.168.0.137
PREFIX=24
GATEWAY=192.168.0.1
DNS1=83.221.202.254
HWADDR=00:0C:76:E0:1E:C5
IPV6_PEERDNS=yes
IPV6_PEERROUTES=yes
NM_CONTROLLED=no

[root@icehouse2 network-scripts]# ovs-vsctl show
2dd63952-602e-4370-900f-85d8c984a0cb
Bridge br-int
Port "qvo615e1af7-f4"
tag: 3
Interface "qvo615e1af7-f4"
Port "qvoe78bebdb-36"
tag: 3
Interface "qvoe78bebdb-36"
Port br-int
Interface br-int
type: internal
Port "qvo9ccf821f-87"
tag: 3
Interface "qvo9ccf821f-87"
Port patch-tun
Interface patch-tun
type: patch
options: {peer=patch-int}
Bridge br-tun
Port patch-int
Interface patch-int
type: patch
options: {peer=patch-tun}
Port "gre-2"
Interface "gre-2"
type: gre
options: {in_key=flow, local_ip="192.168.0.137", out_key=flow, remote_ip="192.168.0.127"}
Port br-tun
Interface br-tun
type: internal
ovs_version: "2.1.2"

**************************************************

Update dhcp_agent.ini and create dnsmasq.conf

**************************************************

[root@icehouse1 neutron(keystone_admin)]# cat  dhcp_agent.ini

[DEFAULT]
debug = False
resync_interval = 30
interface_driver =neutron.agent.linux.interface.OVSInterfaceDriver
dhcp_driver = neutron.agent.linux.dhcp.Dnsmasq
use_namespaces = True
enable_isolated_metadata = False
enable_metadata_network = False
dhcp_delete_namespaces = False
dnsmasq_config_file = /etc/neutron/dnsmasq.conf
root_helper=sudo neutron-rootwrap /etc/neutron/rootwrap.conf
state_path=/var/lib/neutron

[root@icehouse1 neutron(keystone_admin)]# cat  dnsmasq.conf
log-facility = /var/log/neutron/dnsmasq.log
log-dhcp
# Line added
dhcp-option=26,1454
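`dhcp-option=26,1454` makes dnsmasq push an MTU of 1454 to instances, so guest packets still fit in a 1500-byte frame after GRE encapsulation. The arithmetic behind the number, as a sketch (the per-header byte counts are the usual ones for GRE with a key field over IPv4, stated here as an assumption rather than taken from this setup):

```shell
#!/bin/sh
# Why 1454: the outer 1500-byte frame must also carry the outer IP
# header (20 bytes), a GRE header with key (8 bytes), the encapsulated
# Ethernet header (14 bytes), plus a small safety margin (4 bytes).
wire_mtu=1500
overhead=$((20 + 8 + 14 + 4))
echo "instance MTU: $((wire_mtu - overhead))"
```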

**************************************************************************

Metadata support configured on Controller+NeutronServer Node :- 

***************************************************************************

[root@icehouse1 ~(keystone_admin)]# ip netns
qrouter-269dfed8-e314-4a23-b693-b891ba00582e
qdhcp-79eb80f1-d550-4f4c-9670-f8e10b43e7eb

[root@icehouse1 ~(keystone_admin)]# ip netns exec qrouter-269dfed8-e314-4a23-b693-b891ba00582e iptables -S -t nat | grep 169.254

-A neutron-l3-agent-PREROUTING -d 169.254.169.254/32 -p tcp -m tcp --dport 80 -j REDIRECT --to-ports 9697

[root@icehouse1 ~(keystone_admin)]# ip netns exec qrouter-269dfed8-e314-4a23-b693-b891ba00582e netstat -anpt

Active Internet connections (servers and established)
Proto Recv-Q Send-Q Local Address           Foreign Address         State       PID/Program name
tcp        0      0 0.0.0.0:9697            0.0.0.0:*               LISTEN      5212/python

[root@icehouse1 ~(keystone_admin)]# ps -ef | grep 5212


root      5212     1  0 11:40 ?        00:00:00 /usr/bin/python /bin/neutron-ns-metadata-proxy --pid_file=/var/lib/neutron/external/pids/269dfed8-e314-4a23-b693-b891ba00582e.pid --metadata_proxy_socket=/var/lib/neutron/metadata_proxy --router_id=269dfed8-e314-4a23-b693-b891ba00582e --state_path=/var/lib/neutron --metadata_port=9697 --verbose --log-file=neutron-ns-metadata-proxy-269dfed8-e314-4a23-b693-b891ba00582e.log --log-dir=/var/log/neutron
root     21188  4697  0 14:29 pts/0    00:00:00 grep --color=auto 5212

[root@icehouse1 ~(keystone_admin)]# netstat -anpt | grep 9697

tcp        0      0 0.0.0.0:9697            0.0.0.0:*               LISTEN      1228/python       


[root@icehouse1 ~(keystone_admin)]# ps -ef | grep 1228

nova      1228     1  0 11:38 ?          00:00:56 /usr/bin/python /usr/bin/nova-api
nova      3623  1228  0 11:39 ?        00:00:00 /usr/bin/python /usr/bin/nova-api
nova      3626  1228  0 11:39 ?        00:00:00 /usr/bin/python /usr/bin/nova-api
nova      3719  1228  0 11:39 ?        00:00:12 /usr/bin/python /usr/bin/nova-api
nova      3720  1228  0 11:39 ?        00:00:10 /usr/bin/python /usr/bin/nova-api
nova      3775  1228  0 11:39 ?        00:00:01 /usr/bin/python /usr/bin/nova-api
nova      3776  1228  0 11:39 ?        00:00:01 /usr/bin/python /usr/bin/nova-api
root     21230  4697  0 14:29 pts/0    00:00:00 grep --color=auto 1228

[root@icehouse1 ~(keystone_admin)]# nova-manage service list
Binary           Host                                 Zone             Status     State Updated_At
nova-consoleauth icehouse1.localdomain                internal         enabled    :-)   2014-06-03 10:39:08
nova-scheduler   icehouse1.localdomain                internal         enabled    :-)   2014-06-03 10:39:08
nova-conductor   icehouse1.localdomain                internal         enabled    :-)   2014-06-03 10:39:08
nova-cert        icehouse1.localdomain                internal         enabled    :-)   2014-06-03 10:39:08
nova-compute     icehouse2.localdomain                nova             enabled    :-)   2014-06-03 10:39:07

[root@icehouse1 ~(keystone_admin)]# neutron agent-list
+--------------------------------------+--------------------+-----------------------+-------+----------------+
| id                                   | agent_type         | host                  | alive | admin_state_up |
+--------------------------------------+--------------------+-----------------------+-------+----------------+
| 4f37a350-2613-4a2b-95b2-b3bd4ee075a0 | L3 agent           | icehouse1.localdomain | :-)   | True           |
| 5b800eb7-aaf8-476a-8197-d13a0fc931c6 | Metadata agent     | icehouse1.localdomain | :-)   | True           |
| 5ce5e6fe-4d17-4ce0-9e6e-2f3b255ffeb0 | Open vSwitch agent | icehouse1.localdomain | :-)   | True           |
| 7f88512a-c59a-4ea4-8494-02e910cae034 | DHCP agent         | icehouse1.localdomain | :-)   | True           |
| a23e4d51-3cbc-42ee-845a-f5c17dff2370 | Open vSwitch agent | icehouse2.localdomain | :-)   | True           |
+--------------------------------------+--------------------+-----------------------+-------+----------------+



Two Node (Controller+Compute) IceHouse Neutron OVS&GRE Cluster on Fedora 20

June 2, 2014

Two KVMs have been created, each with 2 virtual NICs (eth0, eth1), for the Controller && Compute Node roles. Before running `packstack --answer-file=twoNode-answer.txt`, SELINUX was set to permissive on both nodes. Both eth1 interfaces were assigned IPs from the GRE libvirt subnet before installation and set to promiscuous mode (192.168.122.127, 192.168.122.137). Packstack is bound to the public IP of eth0, 192.169.142.127; the Compute Node is 192.169.142.137.

ANSWER FILE for Two Node IceHouse Neutron OVS&GRE and the updated *.ini, *.conf files after packstack setup: http://textuploader.com/0ts8

Two libvirt subnets were created on the F20 KVM server to support the installation:

Public subnet: 192.169.142.0/24

GRE tunnel support subnet: 192.168.122.0/24

1. Create a new libvirt network definition file (other than your default 192.168.x.x):

$ cat openstackvms.xml
<network>
 <name>openstackvms</name>
 <uuid>d0e9964a-f91a-40c0-b769-a609aee41bf2</uuid>
 <forward mode='nat'>
 <nat>
 <port start='1024' end='65535'/>
 </nat>
 </forward>
 <bridge name='virbr1' stp='on' delay='0' />
 <mac address='52:54:00:60:f8:6e'/>
 <ip address='192.169.142.1' netmask='255.255.255.0'>
 <dhcp>
 <range start='192.169.142.2' end='192.169.142.254' />
 </dhcp>
 </ip>
 </network>
 2. Define the above network:
  $ virsh net-define openstackvms.xml
3. Start the network and enable it for "autostart"
 $ virsh net-start openstackvms
 $ virsh net-autostart openstackvms

4. List your libvirt networks to see if it reflects:
  $ virsh net-list
  Name                 State      Autostart     Persistent
  ----------------------------------------------------------
  default              active     yes           yes
  openstackvms         active     yes           yes

5. Optionally, list your bridge devices:
  $ brctl show
  bridge name     bridge id               STP enabled     interfaces
  virbr0          8000.5254003339b3       yes             virbr0-nic
  virbr1          8000.52540060f86e       yes             virbr1-nic
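The `virbr1` bridge shown by `brctl show` comes straight from the `<bridge name='virbr1' .../>` element of openstackvms.xml. A quick way to pull that name out of a libvirt network definition without `virsh` (the sed pattern is my own sketch, shown here against a trimmed copy of the XML above):

```shell
#!/bin/sh
# Extract the bridge name from a libvirt network XML definition.
cat > /tmp/openstackvms.xml <<'EOF'
<network>
 <name>openstackvms</name>
 <bridge name='virbr1' stp='on' delay='0' />
</network>
EOF
sed -n "s/.*<bridge name='\([^']*\)'.*/\1/p" /tmp/openstackvms.xml
```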

After the packstack run, the 2 Node (Controller+Compute) IceHouse OVS&GRE setup looks as follows :-

[root@ip-192-169-142-127 ~]# mysql -u root -p
Enter password:
Welcome to the MariaDB monitor.  Commands end with ; or \g.
Your MariaDB connection id is 143
Server version: 5.5.36-MariaDB-wsrep MariaDB Server, wsrep_25.9.r3961

Copyright (c) 2000, 2014, Oracle, Monty Program Ab and others.

Type 'help;' or '\h' for help. Type '\c' to clear the current input statement.

MariaDB [(none)]> show databases ;
+--------------------+
| Database           |
+--------------------+
| information_schema |
| cinder             |
| glance             |
| keystone           |
| mysql              |
| nova               |
| ovs_neutron        |
| performance_schema |
| test               |
+--------------------+
9 rows in set (0.02 sec)

MariaDB [(none)]> use ovs_neutron ;
Reading table information for completion of table and column names
You can turn off this feature to get a quicker startup with -A

Database changed
MariaDB [ovs_neutron]> show tables ;
+---------------------------+
| Tables_in_ovs_neutron     |
+---------------------------+
| agents                    |
| alembic_version           |
| allowedaddresspairs       |
| dnsnameservers            |
| externalnetworks          |
| extradhcpopts             |
| floatingips               |
| ipallocationpools         |
| ipallocations             |
| ipavailabilityranges      |
| networkdhcpagentbindings  |
| networks                  |
| ovs_network_bindings      |
| ovs_tunnel_allocations    |
| ovs_tunnel_endpoints      |
| ovs_vlan_allocations      |
| portbindingports          |
| ports                     |
| quotas                    |
| routerl3agentbindings     |
| routerroutes              |
| routers                   |
| securitygroupportbindings |
| securitygrouprules        |
| securitygroups            |
| servicedefinitions        |
| servicetypes              |
| subnetroutes              |
| subnets                   |
+---------------------------+
29 rows in set (0.00 sec)

MariaDB [ovs_neutron]> select * from networks ;
+----------------------------------+--------------------------------------+---------+--------+----------------+--------+
| tenant_id                        | id                                   | name    | status | admin_state_up | shared |
+----------------------------------+--------------------------------------+---------+--------+----------------+--------+
| 179e44f8f53641da89c3eb5d07405523 | 3854bc88-ae14-47b0-9787-233e54ffe7e5 | private | ACTIVE |              1 |      0 |
| 179e44f8f53641da89c3eb5d07405523 | 6e8dc33d-55d4-47b5-925f-e1fa96128c02 | public  | ACTIVE |              1 |      1 |
+----------------------------------+--------------------------------------+---------+--------+----------------+--------+
2 rows in set (0.00 sec)

MariaDB [ovs_neutron]> select * from routers ;
+----------------------------------+--------------------------------------+---------+--------+----------------+--------------------------------------+-------------+
| tenant_id                        | id                                   | name    | status | admin_state_up | gw_port_id                           | enable_snat |
+----------------------------------+--------------------------------------+---------+--------+----------------+--------------------------------------+-------------+
| 179e44f8f53641da89c3eb5d07405523 | 94829282-e6b2-4364-b640-8c0980218a4f | ROUTER3 | ACTIVE |              1 | df9711e4-d1a2-4255-9321-69105fbd8665 |           1 |
+----------------------------------+--------------------------------------+---------+--------+----------------+--------------------------------------+-------------+
1 row in set (0.00 sec)

MariaDB [ovs_neutron]>

*********************

On Controller :-

********************

[root@ip-192-169-142-127 network-scripts]# cat ifcfg-br-ex
DEVICE="br-ex"
BOOTPROTO="static"
IPADDR="192.169.142.127"
NETMASK="255.255.255.0"
DNS1="83.221.202.254"
BROADCAST="192.169.142.255"
GATEWAY="192.169.142.1"
NM_CONTROLLED="no"
DEFROUTE="yes"
IPV4_FAILURE_FATAL="yes"
IPV6INIT=no
ONBOOT="yes"
TYPE="OVSBridge"
DEVICETYPE="ovs"

[root@ip-192-169-142-127 network-scripts]# cat ifcfg-eth0
DEVICE=eth0
ONBOOT="yes"
TYPE="OVSPort"
DEVICETYPE="ovs"
OVS_BRIDGE=br-ex
NM_CONTROLLED=no
IPV6INIT=no

[root@ip-192-169-142-127 network-scripts]# cat ifcfg-eth1
TYPE=Ethernet
BOOTPROTO=none
DEFROUTE=yes
IPV4_FAILURE_FATAL=no
IPV6INIT=yes
IPV6_AUTOCONF=yes
IPV6_DEFROUTE=yes
IPV6_FAILURE_FATAL=no
DEVICE=eth1
ONBOOT=yes
IPADDR=192.168.122.127
PREFIX=24
GATEWAY=192.168.122.1
DNS1=83.221.202.254
NM_CONTROLLED=no
IPV6_PEERDNS=yes
IPV6_PEERROUTES=yes

*************************************

ovs-vsctl show output on controller

*************************************

[root@ip-192-169-142-127 ~(keystone_admin)]# ovs-vsctl show
dc2c76d6-40e3-496e-bdec-470452758c32
Bridge br-tun
Port patch-int
Interface patch-int
type: patch
options: {peer=patch-tun}
Port "gre-1"
Interface "gre-1"
type: gre
options: {in_key=flow, local_ip="192.168.122.127", out_key=flow, remote_ip="192.168.122.137"}
Port br-tun
Interface br-tun
type: internal
Bridge br-int
Port br-int
Interface br-int
type: internal
Port patch-tun
Interface patch-tun
type: patch
options: {peer=patch-int}
Port "tap7acb7666-aa"
tag: 1
Interface "tap7acb7666-aa"
type: internal
Port "qr-a26fe722-07"
tag: 1
Interface "qr-a26fe722-07"
type: internal
Bridge br-ex
Port "qg-df9711e4-d1"
Interface "qg-df9711e4-d1"
type: internal
Port "eth0"
Interface "eth0"
Port br-ex
Interface br-ex
type: internal
ovs_version: "2.1.2"

********************

On Compute:-

********************

[root@ip-192-169-142-137 network-scripts]# cat ifcfg-eth0
TYPE=Ethernet
BOOTPROTO=static
DEFROUTE=yes
IPV4_FAILURE_FATAL=no
IPV6INIT=yes
IPV6_AUTOCONF=yes
IPV6_DEFROUTE=yes
IPV6_FAILURE_FATAL=no
DEVICE=eth0
UUID=f96e561d-d14c-4fb1-9657-0c935f7f5721
ONBOOT=yes
IPADDR=192.169.142.137
PREFIX=24
GATEWAY=192.169.142.1
DNS1=83.221.202.254
HWADDR=52:54:00:67:AC:04
NM_CONTROLLED=no
IPV6_PEERDNS=yes
IPV6_PEERROUTES=yes

[root@ip-192-169-142-137 network-scripts]# cat ifcfg-eth1
TYPE=Ethernet
BOOTPROTO=none
DEFROUTE=yes
IPV4_FAILURE_FATAL=no
IPV6INIT=yes
IPV6_AUTOCONF=yes
IPV6_DEFROUTE=yes
IPV6_FAILURE_FATAL=no
DEVICE=eth1
ONBOOT=yes
IPADDR=192.168.122.137
PREFIX=24
GATEWAY=192.168.122.1
DNS1=83.221.202.254
NM_CONTROLLED=no
IPV6_PEERDNS=yes
IPV6_PEERROUTES=yes

************************************

ovs-vsctl show output on compute

************************************

[root@ip-192-169-142-137 ~]# ovs-vsctl show
1c6671de-fcdf-4a29-9eee-96c949848fff
Bridge br-tun
Port "gre-2"
Interface "gre-2"
type: gre
options: {in_key=flow, local_ip="192.168.122.137", out_key=flow, remote_ip="192.168.122.127"}
Port patch-int
Interface patch-int
type: patch
options: {peer=patch-tun}
Port br-tun
Interface br-tun
type: internal
Bridge br-int
Port "qvo87038189-3f"
tag: 1
Interface "qvo87038189-3f"
Port patch-tun
Interface patch-tun
type: patch
options: {peer=patch-int}
Port br-int
Interface br-int
type: internal
ovs_version: "2.1.2"

[root@ip-192-169-142-137 ~]# brctl show
bridge name    bridge id        STP enabled    interfaces
qbr87038189-3f        8000.2abf9e69f97c    no        qvb87038189-3f
tap87038189-3f

*************************

Metadata verification 

*************************

[root@ip-192-169-142-127 ~(keystone_admin)]# ip netns exec qrouter-94829282-e6b2-4364-b640-8c0980218a4f iptables -S -t nat | grep 169.254
-A neutron-l3-agent-PREROUTING -d 169.254.169.254/32 -p tcp -m tcp --dport 80 -j REDIRECT --to-ports 9697

[root@ip-192-169-142-127 ~(keystone_admin)]# ip netns exec qrouter-94829282-e6b2-4364-b640-8c0980218a4f netstat -anpt
Active Internet connections (servers and established)
Proto Recv-Q Send-Q Local Address           Foreign Address         State       PID/Program name
tcp        0      0 0.0.0.0:9697            0.0.0.0:*               LISTEN      3771/python
[root@ip-192-169-142-127 ~(keystone_admin)]# ps -ef | 3771
bash: 3771: command not found…

[root@ip-192-169-142-127 ~(keystone_admin)]# ps -ef | grep 3771
root      3771     1  0 13:58 ?        00:00:00 /usr/bin/python /bin/neutron-ns-metadata-proxy --pid_file=/var/lib/neutron/external/pids/94829282-e6b2-4364-b640-8c0980218a4f.pid --metadata_proxy_socket=/var/lib/neutron/metadata_proxy --router_id=94829282-e6b2-4364-b640-8c0980218a4f --state_path=/var/lib/neutron --metadata_port=9697 --verbose --log-file=neutron-ns-metadata-proxy-94829282-e6b2-4364-b640-8c0980218a4f.log --log-dir=/var/log/neutron

[root@ip-192-169-142-127 ~(keystone_admin)]# netstat -anpt | grep 9697

tcp        0      0 0.0.0.0:9697            0.0.0.0:*               LISTEN      1024/python

[root@ip-192-169-142-127 ~(keystone_admin)]# ps -ef | grep 1024
nova      1024     1  1 13:58 ?        00:00:05 /usr/bin/python /usr/bin/nova-api
nova      3369  1024  0 13:58 ?        00:00:00 /usr/bin/python /usr/bin/nova-api
nova      3370  1024  0 13:58 ?        00:00:00 /usr/bin/python /usr/bin/nova-api
nova      3397  1024  0 13:58 ?        00:00:02 /usr/bin/python /usr/bin/nova-api
nova      3398  1024  0 13:58 ?        00:00:02 /usr/bin/python /usr/bin/nova-api
nova      3423  1024  0 13:58 ?        00:00:00 /usr/bin/python /usr/bin/nova-api
nova      3424  1024  0 13:58 ?        00:00:00 /usr/bin/python /usr/bin/nova-api
root      4947  4301  0 14:06 pts/0    00:00:00 grep --color=auto 1024
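The checks above follow one chain: the NAT REDIRECT rule sends 169.254.169.254:80 to the metadata proxy on 9697, and the PID listening there leads back to nova-api. The PID-extraction step can be scripted; a minimal sketch that parses a captured netstat-style line (sample data, not live output — on a real node you would pipe `netstat -anpt` in instead):

```shell
#!/bin/sh
# Pull the PID of whatever listens on port 9697 out of `netstat -anpt`
# output. $4 is the local address column, $NF is "PID/Program name".
sample='tcp        0      0 0.0.0.0:9697            0.0.0.0:*               LISTEN      1024/python'
pid=$(echo "$sample" | awk '$4 ~ /:9697$/ {split($NF,a,"/"); print a[1]}')
echo "metadata listener pid: $pid"
```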



Two Real Node (Controller+Compute) RDO IceHouse Neutron OVS&VLAN Cluster on Fedora 20 Setup

May 27, 2014

Two boxes , each one having 2  NICs (p37p1,p4p1) for (Controller+NeutronServer) &amp;&amp; Compute Nodes have been setup.

Setup configuration

– Controller node: Nova, Keystone, Cinder, Glance, Neutron (using Open vSwitch plugin && VLAN )
– Compute node: Nova (nova-compute), Neutron (openvswitch-agent)

icehouse1.localdomain   –  Controller (192.168.1.127)

icehouse2.localdomain   –  Compute   (192.168.1.137)

Before running `packstack --answer-file=TwoRealNode-answer.txt`, SELINUX was set to permissive on both nodes. The p4p1 interfaces on both nodes were set to promiscuous mode (and HWADDR was commented out).

Specifics of the answer-file on real F20 boxes :-

CONFIG_NOVA_COMPUTE_PRIVIF=p4p1
CONFIG_NOVA_NETWORK_PUBIF=p37p1
CONFIG_NOVA_NETWORK_PRIVIF=p4p1
CONFIG_NEUTRON_OVS_TENANT_NETWORK_TYPE=vlan
CONFIG_NEUTRON_OVS_VLAN_RANGES=physnet1:100:200
CONFIG_NEUTRON_OVS_BRIDGE_MAPPINGS=physnet1:br-p4p1
CONFIG_NEUTRON_OVS_BRIDGE_IFACES=br-p4p1:p4p1

Post installation steps :-

1. NetworkManager should be disabled on both nodes, service network enabled.

2. Syntax of the ifcfg-* files for the corresponding OVS ports should follow RHEL 6.5 notation rather than F20's.

3. Special care should be taken to bring up p4p1 (in my case)

4. Post install reconfiguration *.ini  && *.conf   http://textuploader.com/9oec

5. Configuration p4p1 interfaces 

# cat ifcfg-p4p1
TYPE=Ethernet
BOOTPROTO=none
DEVICE=p4p1
ONBOOT=yes
NM_CONTROLLED=no
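Note that the ifcfg syntax has no field for promiscuous mode, so the setting has to be applied some other way on each boot. One option (an assumption on my part, relying on the stock initscripts convention of running `/sbin/ifup-local` after each interface comes up) is a small hook script:

```shell
#!/bin/sh
# /sbin/ifup-local -- called by initscripts with the interface name as
# $1 after the interface is brought up; used here to put p4p1 into
# promiscuous mode, since ifcfg-p4p1 cannot express that.
if [ "$1" = "p4p1" ]; then
    ip link set dev p4p1 promisc on
fi
```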

Metadata access verification on Controller:-

[root@icehouse1 ~(keystone_admin)]# ip netns
qdhcp-a2bf6363-6447-47f5-a243-b998d206d593
qrouter-2462467b-ea0a-4a40-a093-493572010694

[root@icehouse1 ~(keystone_admin)]# ip netns exec qrouter-2462467b-ea0a-4a40-a093-493572010694 iptables -S -t nat | grep 169
-A neutron-l3-agent-PREROUTING -d 169.254.169.254/32 -p tcp -m tcp --dport 80 -j REDIRECT --to-ports 8775

[root@icehouse1 ~(keystone_admin)]# ip netns exec qrouter-2462467b-ea0a-4a40-a093-493572010694 netstat -anpt
Active Internet connections (servers and established)
Proto Recv-Q Send-Q Local Address           Foreign Address         State       PID/Program name
tcp        0      0 0.0.0.0:8775            0.0.0.0:*               LISTEN      6156/python

[root@icehouse1 ~(keystone_admin)]# ps -ef | grep 6156
root      5691  4082  0 07:58 pts/0    00:00:00 grep --color=auto 6156
root      6156     1  0 06:04 ?        00:00:00 /usr/bin/python /bin/neutron-ns-metadata-proxy --pid_file=/var/lib/neutron/external/pids/2462467b-ea0a-4a40-a093-493572010694.pid --metadata_proxy_socket=/var/lib/neutron/metadata_proxy --router_id=2462467b-ea0a-4a40-a093-493572010694 --state_path=/var/lib/neutron --metadata_port=8775 --verbose --log-file=neutron-ns-metadata-proxy-2462467b-ea0a-4a40-a093-493572010694.log --log-dir=/var/log/neutron

[root@icehouse1 ~(keystone_admin)]# netstat -anpt | grep 8775
tcp        0      0 0.0.0.0:8775            0.0.0.0:*               LISTEN      1224/python

[root@icehouse1 ~(keystone_admin)]# ps -aux | grep 1224
nova      1224  0.7  0.7 337092 65052 ?        Ss   05:59   0:46 /usr/bin/python /usr/bin/nova-api
boris     3789  0.0  0.1 504676 12248 ?        Sl   06:01   0:00 /usr/libexec/tracker-store

Verifying DHCP leases for the private IPs of instances currently running :-

[root@icehouse1 ~(keystone_admin)]# ip netns exec qdhcp-a2bf6363-6447-47f5-a243-b998d206d593 ifconfig

lo: flags=73<UP,LOOPBACK,RUNNING>  mtu 65536
inet 127.0.0.1  netmask 255.0.0.0
inet6 ::1  prefixlen 128  scopeid 0x10<host>
loop  txqueuelen 0  (Local Loopback)
RX packets 3  bytes 1728 (1.6 KiB)
RX errors 0  dropped 0  overruns 0  frame 0
TX packets 3  bytes 1728 (1.6 KiB)
TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0

tapa7e1ac48-7b: flags=67<UP,BROADCAST,RUNNING>  mtu 1500
inet 10.0.0.11  netmask 255.255.255.0  broadcast 10.0.0.255
inet6 fe80::f816:3eff:fe9d:874d  prefixlen 64  scopeid 0x20<link>
ether fa:16:3e:9d:87:4d  txqueuelen 0  (Ethernet)
RX packets 3364  bytes 626074 (611.4 KiB)
RX errors 0  dropped 35  overruns 0  frame 0
TX packets 2124  bytes 427060 (417.0 KiB)
TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0

[root@icehouse1 ~(keystone_admin)]# ip netns exec qdhcp-a2bf6363-6447-47f5-a243-b998d206d593 tcpdump -ln -i tapa7e1ac48-7b
tcpdump: verbose output suppressed, use -v or -vv for full protocol decode
listening on tapa7e1ac48-7b, link-type EN10MB (Ethernet), capture size 65535 bytes
11:07:02.388376 ARP, Request who-has 10.0.0.11 tell 10.0.0.38, length 46
11:07:02.388399 ARP, Reply 10.0.0.11 is-at fa:16:3e:9d:87:4d, length 28
11:07:12.239833 IP 10.0.0.43.bootpc > 10.0.0.11.bootps: BOOTP/DHCP, Request from fa:16:3e:40:da:a1, length 300
11:07:12.240491 IP 10.0.0.11.bootps > 10.0.0.43.bootpc: BOOTP/DHCP, Reply, length 324
11:07:12.313087 ARP, Request who-has 10.0.0.43 (Broadcast) tell 0.0.0.0, length 46
11:07:13.313070 ARP, Request who-has 10.0.0.43 (Broadcast) tell 0.0.0.0, length 46
11:07:15.634980 IP 0.0.0.0.bootpc > 255.255.255.255.bootps: BOOTP/DHCP, Request from fa:16:3e:6b:81:ff, length 280
11:07:15.635595 IP 10.0.0.11.bootps > 10.0.0.31.bootpc: BOOTP/DHCP, Reply, length 324
11:07:15.635954 IP 10.0.0.31 > 10.0.0.11: ICMP 10.0.0.31 udp port bootpc unreachable, length 360
11:07:17.254260 ARP, Request who-has 10.0.0.43 tell 10.0.0.11, length 28
11:07:17.254866 ARP, Reply 10.0.0.43 is-at fa:16:3e:40:da:a1, length 46
11:07:20.644135 ARP, Request who-has 10.0.0.11 tell 10.0.0.31, length 28
11:07:20.644157 ARP, Reply 10.0.0.11 is-at fa:16:3e:9d:87:4d, length 28
11:07:45.972179 IP 10.0.0.38.bootpc > 10.0.0.11.bootps: BOOTP/DHCP, Request from fa:16:3e:9d:67:df, length 300
11:07:45.973023 IP 10.0.0.11.bootps > 10.0.0.38.bootpc: BOOTP/DHCP, Reply, length 324
11:07:50.980701 ARP, Request who-has 10.0.0.11 tell 10.0.0.38, length 46
11:07:50.980725 ARP, Reply 10.0.0.11 is-at fa:16:3e:9d:87:4d, length 28
11:07:55.821920 IP 10.0.0.43.bootpc > 10.0.0.11.bootps: BOOTP/DHCP, Request from fa:16:3e:40:da:a1, length 300
11:07:55.822423 IP 10.0.0.11.bootps > 10.0.0.43.bootpc: BOOTP/DHCP, Reply, length 324
11:07:55.898024 ARP, Request who-has 10.0.0.43 (Broadcast) tell 0.0.0.0, length 46
11:07:56.897994 ARP, Request who-has 10.0.0.43 (Broadcast) tell 0.0.0.0, length 46
11:08:00.823637 ARP, Request who-has 10.0.0.11 tell 10.0.0.43, length 46

******************

On Controller

******************

[root@icehouse1 ~(keystone_admin)]# ovs-vsctl show

a675c73e-c707-4f29-af60-57fb7c3f81c4
    Bridge br-int
        Port "int-br-p4p1"
            Interface "int-br-p4p1"
        Port br-int
            Interface br-int
                type: internal
        Port "qr-bbba6fd3-a3"
            tag: 1
            Interface "qr-bbba6fd3-a3"
                type: internal
        Port "qvo61d82a0f-32"
            tag: 1
            Interface "qvo61d82a0f-32"
        Port "tapa7e1ac48-7b"
            tag: 1
            Interface "tapa7e1ac48-7b"
                type: internal
        Port "qvof8c8a1a2-51"
            tag: 1
            Interface "qvof8c8a1a2-51"
    Bridge br-ex
        Port "p37p1"
            Interface "p37p1"
        Port br-ex
            Interface br-ex
                type: internal
        Port "qg-3787602d-29"
            Interface "qg-3787602d-29"
                type: internal
    Bridge "br-p4p1"
        Port "p4p1"
            Interface "p4p1"
        Port "phy-br-p4p1"
            Interface "phy-br-p4p1"
        Port "br-p4p1"
            Interface "br-p4p1"
                type: internal
    ovs_version: "2.0.1"
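When the tree above gets long, it helps to pull out just the ports carrying a given VLAN tag. A quick sketch, parsing a saved `ovs-vsctl show` dump (two sample lines reproduced from the output above; on a live system the reverse lookup is simply `ovs-vsctl port-to-br <port>`):

```shell
# List the ports carrying VLAN tag 1 from a saved `ovs-vsctl show` dump
# (two sample Port/tag line pairs copied from the controller output above).
dump='        Port "qr-bbba6fd3-a3"
            tag: 1
        Port "qvo61d82a0f-32"
            tag: 1'
# Remember the last Port name seen; print it whenever a "tag: 1" line follows.
tagged=$(echo "$dump" | awk '/Port/ {p=$2} /tag: 1/ {gsub(/"/,"",p); print p}')
echo "$tagged"
```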

****************

On Compute

****************

[root@icehouse2 ]# ovs-vsctl show

bf768fc8-d18b-4762-bdd2-a410fcf88a9b
    Bridge "br-p4p1"
        Port "br-p4p1"
            Interface "br-p4p1"
                type: internal
        Port "phy-br-p4p1"
            Interface "phy-br-p4p1"
        Port "p4p1"
            Interface "p4p1"
    Bridge br-int
        Port br-int
            Interface br-int
                type: internal
        Port "int-br-p4p1"
            Interface "int-br-p4p1"
        Port "qvoe5a82d77-d4"
            tag: 8
            Interface "qvoe5a82d77-d4"
    ovs_version: "2.0.1"

[root@icehouse1 ~(keystone_admin)]# openstack-status

== Nova services ==

openstack-nova-api:                     active

openstack-nova-cert:                    active

openstack-nova-compute:                 active

openstack-nova-network:                 inactive  (disabled on boot)

openstack-nova-scheduler:               active

openstack-nova-volume:                  inactive  (disabled on boot)

openstack-nova-conductor:               active

== Glance services ==

openstack-glance-api:                   active

openstack-glance-registry:              active

== Keystone service ==

openstack-keystone:                     active

== Horizon service ==

openstack-dashboard:                    active

== neutron services ==

neutron-server:                         active

neutron-dhcp-agent:                     active

neutron-l3-agent:                       active

neutron-metadata-agent:                 active

neutron-lbaas-agent:                    inactive  (disabled on boot)

neutron-openvswitch-agent:              active

neutron-linuxbridge-agent:              inactive  (disabled on boot)

neutron-ryu-agent:                      inactive  (disabled on boot)

neutron-nec-agent:                      inactive  (disabled on boot)

neutron-mlnx-agent:                     inactive  (disabled on boot)

== Swift services ==

openstack-swift-proxy:                  active

openstack-swift-account:                active

openstack-swift-container:              active

openstack-swift-object:                 active

== Cinder services ==

openstack-cinder-api:                   active

openstack-cinder-scheduler:             active

openstack-cinder-volume:                active

openstack-cinder-backup:                inactive

== Ceilometer services ==

openstack-ceilometer-api:               active

openstack-ceilometer-central:           active

openstack-ceilometer-compute:           active

openstack-ceilometer-collector:         active

openstack-ceilometer-alarm-notifier:    active

openstack-ceilometer-alarm-evaluator:   active

== Support services ==

libvirtd:                               active

openvswitch:                            active

dbus:                                   active

tgtd:                                   active

rabbitmq-server:                        active

memcached:                              active

== Keystone users ==

+----------------------------------+------------+---------+----------------------+
|                id                |    name    | enabled |        email         |
+----------------------------------+------------+---------+----------------------+
| df9165cd160846b19f73491e0bc041c2 |   admin    |   True  |    test@test.com     |
| bafe2fc4d51a400a99b1b41ef50d4afd | ceilometer |   True  | ceilometer@localhost |
| df59d0782f174a34a3a73215300c64ca |   cinder   |   True  |   cinder@localhost   |
| ca624394c9d941b6ad0a07363ab668b2 |   glance   |   True  |   glance@localhost   |
| fb5125484a1f4b7aaf8503025eb018ba |  neutron   |   True  |  neutron@localhost   |
| 64912bc3726c48db8f003ce79d8fe746 |    nova    |   True  |    nova@localhost    |
| 6d8b48605d3b476097d89486813360c0 |   swift    |   True  |   swift@localhost    |
+----------------------------------+------------+---------+----------------------+

== Glance images ==

+--------------------------------------+-----------------+-------------+------------------+-----------+--------+
| ID                                   | Name            | Disk Format | Container Format | Size      | Status |
+--------------------------------------+-----------------+-------------+------------------+-----------+--------+
| 8593a43a-2449-4b49-918f-9871011249a7 | CirrOS31        | qcow2       | bare             | 13147648  | active |
| 4be72a99-06e0-477d-b446-b597435455a9 | Fedora20image   | qcow2       | bare             | 210829312 | active |
| 28470072-f317-4a72-b3e8-3fffbe6a7661 | UubuntuServer14 | qcow2       | bare             | 253559296 | active |
+--------------------------------------+-----------------+-------------+------------------+-----------+--------+

== Nova managed services ==

+------------------+-----------------------+----------+---------+-------+----------------------------+-----------------+
| Binary           | Host                  | Zone     | Status  | State | Updated_at                 | Disabled Reason |
+------------------+-----------------------+----------+---------+-------+----------------------------+-----------------+
| nova-consoleauth | icehouse1.localdomain | internal | enabled | up    | 2014-05-25T03:03:05.000000 | -               |
| nova-scheduler   | icehouse1.localdomain | internal | enabled | up    | 2014-05-25T03:03:05.000000 | -               |
| nova-conductor   | icehouse1.localdomain | internal | enabled | up    | 2014-05-25T03:03:13.000000 | -               |
| nova-compute     | icehouse1.localdomain | nova     | enabled | up    | 2014-05-25T03:03:10.000000 | -               |
| nova-cert        | icehouse1.localdomain | internal | enabled | up    | 2014-05-25T03:03:05.000000 | -               |
| nova-compute     | icehouse2.localdomain | nova     | enabled | up    | 2014-05-25T03:03:13.000000 | -               |
+------------------+-----------------------+----------+---------+-------+----------------------------+-----------------+

== Nova networks ==

+--------------------------------------+---------+------+
| ID                                   | Label   | Cidr |
+--------------------------------------+---------+------+
| 09e18ced-8c22-4166-a1a1-cbceece46884 | public  | -    |
| a2bf6363-6447-47f5-a243-b998d206d593 | private | -    |
+--------------------------------------+---------+------+

== Nova instance flavors ==

+----+-----------+-----------+------+-----------+------+-------+-------------+-----------+
| ID | Name      | Memory_MB | Disk | Ephemeral | Swap | VCPUs | RXTX_Factor | Is_Public |
+----+-----------+-----------+------+-----------+------+-------+-------------+-----------+
| 1  | m1.tiny   | 512       | 1    | 0         |      | 1     | 1.0         | True      |
| 2  | m1.small  | 2048      | 20   | 0         |      | 1     | 1.0         | True      |
| 3  | m1.medium | 4096      | 40   | 0         |      | 2     | 1.0         | True      |
| 4  | m1.large  | 8192      | 80   | 0         |      | 4     | 1.0         | True      |
| 5  | m1.xlarge | 16384     | 160  | 0         |      | 8     | 1.0         | True      |
+----+-----------+-----------+------+-----------+------+-------+-------------+-----------+

== Nova instances ==

+--------------------------------------+--------------+-----------+------------+-------------+---------------------------------+
| ID                                   | Name         | Status    | Task State | Power State | Networks                        |
+--------------------------------------+--------------+-----------+------------+-------------+---------------------------------+
| b661a130-fdb7-41eb-aba5-588924634c9d | CirrOS302    | ACTIVE    | -          | Running     | private=10.0.0.31, 192.168.1.63 |
| 5d1dbb9d-7bef-4e51-be8d-4270ddd3d4cc | CirrOS351    | ACTIVE    | -          | Running     | private=10.0.0.39, 192.168.1.66 |
| ef73a897-8700-4999-ab25-49f25b896f34 | CirrOS370    | ACTIVE    | -          | Running     | private=10.0.0.40, 192.168.1.69 |
| 02718e21-edb9-4b59-8bb7-21e0290650fd | CirrOS390    | SUSPENDED | -          | Shutdown    | private=10.0.0.41, 192.168.1.67 |
| 6992e37c-48c7-49b6-b6fc-8e35fe240704 | UbuntuSRV350 | SUSPENDED | -          | Shutdown    | private=10.0.0.38, 192.168.1.62 |
| 9953ed52-b666-4fe1-ac35-23621122af5a | VF20RS02     | ACTIVE    | -          | Running     | private=10.0.0.43, 192.168.1.71 |
+--------------------------------------+--------------+-----------+------------+-------------+---------------------------------+

[root@icehouse1 ~(keystone_admin)]# nova-manage service list
Binary           Host                                 Zone             Status     State Updated_At
nova-consoleauth icehouse1.localdomain                internal         enabled    :-)   2014-05-27 10:16:15
nova-scheduler   icehouse1.localdomain                internal         enabled    :-)   2014-05-27 10:16:15
nova-conductor   icehouse1.localdomain                internal         enabled    :-)   2014-05-27 10:16:14
nova-compute     icehouse1.localdomain                nova             enabled    :-)   2014-05-27 10:16:18
nova-cert        icehouse1.localdomain                internal         enabled    :-)   2014-05-27 10:16:15
nova-compute     icehouse2.localdomain                nova             enabled    :-)   2014-05-27 10:16:12

[root@icehouse1 ~(keystone_admin)]# neutron agent-list

+--------------------------------------+--------------------+-----------------------+-------+----------------+
| id                                   | agent_type         | host                  | alive | admin_state_up |
+--------------------------------------+--------------------+-----------------------+-------+----------------+
| 6775fac7-d594-4272-8447-f136b54247e8 | L3 agent           | icehouse1.localdomain | :-)   | True           |
| 77fdc8a9-0d77-4f53-9cdd-1c732f0cfdb1 | Metadata agent     | icehouse1.localdomain | :-)   | True           |
| 8f70b2c4-c65b-4d0b-9808-ba494c764d99 | Open vSwitch agent | icehouse1.localdomain | :-)   | True           |
| a86f1272-2afb-43b5-a7e6-e5fc6df565b5 | Open vSwitch agent | icehouse2.localdomain | :-)   | True           |
| e72bdcd5-3dd1-4994-860f-e21d4a58dd4c | DHCP agent         | icehouse1.localdomain | :-)   | True           |
+--------------------------------------+--------------------+-----------------------+-------+----------------+

Windows 2012 evaluation Server running on Compute Node :-

Setup Horizon Dashboard-2014.1 on F20 Havana Controller (firefox upgrade up to 29.0-5)

May 3, 2014

It’s hard to know what the right thing is. Once you know, it’s hard not to do it.
                       Harry Fertig (Kingsley). The Confession (1999 film)

A recent Firefox upgrade to 29.0-5 on Fedora 20 causes login to the Dashboard console to fail on a Havana F20 Controller set up per VNC Console in Dashboard on Two Node Neutron GRE+OVS F20 Cluster.

The procedure below backports the F21 packages python-django-horizon-2014.1-1, python-django-openstack-auth-1.1.5-1 and python-pbr-0.7.0-2 via a manual install of the corresponding SRC.RPMs, invoking the rpmbuild utility to produce F20 packages. The hard thing to know is which packages to backport.

I had to perform an AIO RDO IceHouse setup via packstack on a specially created VM and run `rpm -qa | grep django` there to obtain the required list. Officially RDO Havana comes with F20, and it appears that the most recent Firefox upgrade breaks Horizon Dashboard, which is supposed to be maintained as a supported component for F20.

Download from Net :-

[boris@dfw02 Downloads]$ ls -l *.src.rpm

-rw-r--r--. 1 boris boris 4252988 May  3 08:21 python-django-horizon-2014.1-1.fc21.src.rpm

-rw-r--r--. 1 boris boris   47126 May  3 08:37 python-django-openstack-auth-1.1.5-1.fc21.src.rpm

-rw-r--r--. 1 boris boris   83761 May  3 08:48 python-pbr-0.7.0-2.fc21.src.rpm

Install src.rpms and build
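The SRPM install step itself is not shown in the session; under the standard per-user rpmbuild layout it would look something like the following sketch (filenames taken from the download listing above; this is a hypothetical recap, not the exact commands run):

```shell
# Hypothetical recap of the install step (standard ~/rpmbuild layout assumed):
# installing a .src.rpm as a non-root user unpacks its spec into ~/rpmbuild/SPECS
# and its sources into ~/rpmbuild/SOURCES.
SRPMS="python-django-openstack-auth-1.1.5-1.fc21.src.rpm
python-pbr-0.7.0-2.fc21.src.rpm
python-django-horizon-2014.1-1.fc21.src.rpm"
# for s in $SRPMS; do rpm -ivh "$s"; done    # then: cd ~/rpmbuild/SPECS
echo "$SRPMS" | wc -l   # three SRPMs to install
```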

[boris@dfw02 SPECS]$ rpmbuild -bb python-django-openstack-auth.spec

[boris@dfw02 SPECS]$ rpmbuild -bb python-pbr.spec

Then install rpms as preventive step before core package build

[boris@dfw02 noarch]$ sudo yum install python-django-openstack-auth-1.1.5-1.fc20.noarch.rpm

[boris@dfw02 noarch]$ sudo yum install python-pbr-0.7.0-2.fc20.noarch.rpm

[boris@dfw02 noarch]$ cd -

/home/boris/rpmbuild/SPECS

Core build to succeed :-

[boris@dfw02 SPECS]$ rpmbuild -bb python-django-horizon.spec

[boris@dfw02 SPECS]$ cd ../RPMS/n*

[boris@dfw02 noarch]$ ls -l

total 6616

-rw-rw-r--. 1 boris boris 3293068 May  3 09:01 openstack-dashboard-2014.1-1.fc20.noarch.rpm

-rw-rw-r--. 1 boris boris  732020 May  3 09:01 openstack-dashboard-theme-2014.1-1.fc20.noarch.rpm

-rw-rw-r--. 1 boris boris  160868 May  3 08:51 python3-pbr-0.7.0-2.fc20.noarch.rpm

-rw-rw-r--. 1 boris boris  823332 May  3 09:01 python-django-horizon-2014.1-1.fc20.noarch.rpm

-rw-rw-r--. 1 boris boris 1548752 May  3 09:01 python-django-horizon-doc-2014.1-1.fc20.noarch.rpm

-rw-rw-r--. 1 boris boris   43944 May  3 08:39 python-django-openstack-auth-1.1.5-1.fc20.noarch.rpm

-rw-rw-r--. 1 boris boris  158204 May  3 08:51 python-pbr-0.7.0-2.fc20.noarch.rpm

[boris@dfw02 noarch]$ ls *.rpm > inst

[boris@dfw02 noarch]$ vi inst

[boris@dfw02 noarch]$ chmod u+x inst

[boris@dfw02 noarch]$ ./inst

[sudo] password for boris:

Loaded plugins: langpacks, priorities, refresh-packagekit

Examining openstack-dashboard-2014.1-1.fc20.noarch.rpm: openstack-dashboard-2014.1-1.fc20.noarch

Marking openstack-dashboard-2014.1-1.fc20.noarch.rpm as an update to openstack-dashboard-2013.2.3-1.fc20.noarch

Examining openstack-dashboard-theme-2014.1-1.fc20.noarch.rpm: openstack-dashboard-theme-2014.1-1.fc20.noarch

Marking openstack-dashboard-theme-2014.1-1.fc20.noarch.rpm to be installed

Examining python-django-horizon-2014.1-1.fc20.noarch.rpm: python-django-horizon-2014.1-1.fc20.noarch

Marking python-django-horizon-2014.1-1.fc20.noarch.rpm as an update to python-django-horizon-2013.2.3-1.fc20.noarch

Examining python-django-horizon-doc-2014.1-1.fc20.noarch.rpm: python-django-horizon-doc-2014.1-1.fc20.noarch

Marking python-django-horizon-doc-2014.1-1.fc20.noarch.rpm to be installed

Resolving Dependencies

--> Running transaction check

---> Package openstack-dashboard.noarch 0:2013.2.3-1.fc20 will be updated

---> Package openstack-dashboard.noarch 0:2014.1-1.fc20 will be an update

---> Package openstack-dashboard-theme.noarch 0:2014.1-1.fc20 will be installed

---> Package python-django-horizon.noarch 0:2013.2.3-1.fc20 will be updated

---> Package python-django-horizon.noarch 0:2014.1-1.fc20 will be an update

---> Package python-django-horizon-doc.noarch 0:2014.1-1.fc20 will be installed

--> Finished Dependency Resolution

Dependencies Resolved

=========================================================================================================

Package                   Arch   Version          Repository                                       Size

=========================================================================================================

Installing:

openstack-dashboard-theme noarch 2014.1-1.fc20    /openstack-dashboard-theme-2014.1-1.fc20.noarch 1.5 M

python-django-horizon-doc noarch 2014.1-1.fc20    /python-django-horizon-doc-2014.1-1.fc20.noarch  24 M

Updating:

openstack-dashboard       noarch 2014.1-1.fc20    /openstack-dashboard-2014.1-1.fc20.noarch        14 M

python-django-horizon     noarch 2014.1-1.fc20    /python-django-horizon-2014.1-1.fc20.noarch     3.3 M

Transaction Summary

=========================================================================================================

Install  2 Packages

Upgrade  2 Packages

 

Total size: 42 M

Is this ok [y/d/N]: y

Downloading packages:

Running transaction check

Running transaction test

Transaction test succeeded

Running transaction

Updating   : python-django-horizon-2014.1-1.fc20.noarch                                            1/6

Updating   : openstack-dashboard-2014.1-1.fc20.noarch                                              2/6

warning: /etc/openstack-dashboard/local_settings created as /etc/openstack-dashboard/local_settings.rpmnew

Installing : openstack-dashboard-theme-2014.1-1.fc20.noarch                                        3/6

Installing : python-django-horizon-doc-2014.1-1.fc20.noarch                                        4/6

Cleanup    : openstack-dashboard-2013.2.3-1.fc20.noarch                                            5/6

Cleanup    : python-django-horizon-2013.2.3-1.fc20.noarch                                          6/6

Verifying  : openstack-dashboard-theme-2014.1-1.fc20.noarch                                        1/6

Verifying  : openstack-dashboard-2014.1-1.fc20.noarch                                              2/6

Verifying  : python-django-horizon-doc-2014.1-1.fc20.noarch                                        3/6

Verifying  : python-django-horizon-2014.1-1.fc20.noarch                                            4/6

Verifying  : openstack-dashboard-2013.2.3-1.fc20.noarch                                            5/6

Verifying  : python-django-horizon-2013.2.3-1.fc20.noarch                                          6/6

Installed:

openstack-dashboard-theme.noarch 0:2014.1-1.fc20    python-django-horizon-doc.noarch 0:2014.1-1.fc20

Updated:

openstack-dashboard.noarch 0:2014.1-1.fc20         python-django-horizon.noarch 0:2014.1-1.fc20

Complete!

[root@dfw02 ~(keystone_admin)]$ rpm -qa | grep django

python-django-horizon-doc-2014.1-1.fc20.noarch

python-django-horizon-2014.1-1.fc20.noarch

python-django-1.6.3-1.fc20.noarch

python-django-nose-1.2-1.fc20.noarch

python-django-bash-completion-1.6.3-1.fc20.noarch

python-django-openstack-auth-1.1.5-1.fc20.noarch

python-django-appconf-0.6-2.fc20.noarch

python-django-compressor-1.3-2.fc20.noarch

Admin's reports regarding Cluster status

Ubuntu Trusty Server VM running


RDO Havana Neutron Namespaces Troubleshooting for OVS&VLAN(GRE) Config

April 14, 2014

The  OpenStack Networking components are deployed on the Controller, Compute, and Network nodes in the following configuration:

In case of Two Node Development Cluster :-

Controller node: hosts the Neutron server service, which provides the networking API and communicates with and tracks the agents.

DHCP agent: spawns and controls dnsmasq processes to provide leases to instances. This agent also spawns neutron-ns-metadata-proxy processes as part of the metadata system.

Metadata agent: provides a metadata proxy to the nova-api-metadata service. The neutron-ns-metadata-proxy processes direct traffic that they receive in their namespaces to the proxy.

OVS plugin agent: Controls OVS network bridges and routes between them via patch, tunnel, or tap without requiring an external OpenFlow controller.

L3 agent: performs L3 forwarding and NAT.

In case of Three Node or more ( several Compute Nodes) :-

Separate box hosts Neutron Server and all services mentioned above

Compute node: has an OVS plugin agent and openstack-nova-compute service.

Namespaces (view "Identifying and Troubleshooting Neutron Namespaces")

For each network you create, the Network node (or Controller node, if combined) will have a unique network namespace (netns) created by the DHCP and Metadata agents. The netns hosts an interface and IP addresses for dnsmasq and the neutron-ns-metadata-proxy. You can view the namespaces with the `ip netns list` command, and can interact with them via `ip netns exec <namespace> <command>`.

Every l2-agent/private network has an associated dhcp namespace, and every l3-agent/router has an associated router namespace.

A network namespace name starts with qdhcp- followed by the ID of the network; a router namespace name starts with qrouter- followed by the ID of the router.
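Putting those naming rules together, a small sketch (IDs taken from the listings further down; `ip netns exec` itself needs root, so it is left commented out):

```shell
# Derive namespace names from Neutron object IDs per the naming rules above.
NET_ID=426bb226-0ab9-440d-ba14-05634a17fb2b      # a network ID from `neutron net-list`
ROUTER_ID=86b3008c-297f-4301-9bdc-766b839785f1   # a router ID from `neutron router-list`
DHCP_NS="qdhcp-${NET_ID}"
ROUTER_NS="qrouter-${ROUTER_ID}"
echo "$DHCP_NS"
echo "$ROUTER_NS"
# With root, commands then run inside a namespace like so:
# ip netns exec "$DHCP_NS" ip addr show
```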

Source admin credentials and get network list

[root@dfw02 ~(keystone_admin)]$ neutron net-list

+--------------------------------------+------+-----------------------------------------------------+
| id                                   | name | subnets                                             |
+--------------------------------------+------+-----------------------------------------------------+
| 1eea88bb-4952-4aa4-9148-18b61c22d5b7 | int  | fa930cea-3d51-4cbe-a305-579f12aa53c0 10.0.0.0/24    |
| 426bb226-0ab9-440d-ba14-05634a17fb2b | int1 | 9e0d457b-c4c4-45cf-84e2-4ac7550f3b06 40.0.0.0/24    |
| 780ce2f3-2e6e-4881-bbac-857813f9a8e0 | ext  | f30e5a16-a055-4388-a6ea-91ee142efc3d 192.168.1.0/24 |
+--------------------------------------+------+-----------------------------------------------------+

Using the `ip netns list` command, run the following to get the tenants' qdhcp-* namespace names:

[root@dfw02 ~(keystone_admin)]$ ip netns list | grep 1eea88bb-4952-4aa4-9148-18b61c22d5b7

qdhcp-1eea88bb-4952-4aa4-9148-18b61c22d5b7

[root@dfw02 ~(keystone_admin)]$ ip netns list | grep 426bb226-0ab9-440d-ba14-05634a17fb2b

qdhcp-426bb226-0ab9-440d-ba14-05634a17fb2b

Check a tenant's namespace by getting its IP address and pinging that IP inside the namespace:

[root@dfw02 ~(keystone_admin)]$ ip netns exec qdhcp-426bb226-0ab9-440d-ba14-05634a17fb2b ifconfig

lo: flags=73  mtu 65536
inet 127.0.0.1  netmask 255.0.0.0
inet6 ::1  prefixlen 128  scopeid 0x10
loop  txqueuelen 0  (Local Loopback)
RX packets 35  bytes 4416 (4.3 KiB)
RX errors 0  dropped 0  overruns 0  frame 0
TX packets 35  bytes 4416 (4.3 KiB)
TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0

ns-343b0090-24: flags=4163  mtu 1500
inet 40.0.0.3  netmask 255.255.255.0  broadcast 40.0.0.255

inet6 fe80::f816:3eff:fe01:8b55  prefixlen 64  scopeid 0x20
ether fa:16:3e:01:8b:55  txqueuelen 1000  (Ethernet)
RX packets 3251  bytes 386284 (377.2 KiB)
RX errors 0  dropped 0  overruns 0  frame 0
TX packets 1774  bytes 344082 (336.0 KiB)
TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0

[root@dfw02 ~(keystone_admin)]$ ip netns exec qdhcp-426bb226-0ab9-440d-ba14-05634a17fb2b ping  -c 3 40.0.0.3
PING 40.0.0.3 (40.0.0.3) 56(84) bytes of data.
64 bytes from 40.0.0.3: icmp_seq=1 ttl=64 time=0.041 ms
64 bytes from 40.0.0.3: icmp_seq=2 ttl=64 time=0.035 ms
64 bytes from 40.0.0.3: icmp_seq=3 ttl=64 time=0.034 ms

--- 40.0.0.3 ping statistics ---
3 packets transmitted, 3 received, 0% packet loss, time 1999ms
rtt min/avg/max/mdev = 0.034/0.036/0.041/0.007 ms

Now verify that there is a dnsmasq process serving each tenant's namespace:

[root@dfw02 ~(keystone_admin)]$ ps -aux | grep dhcp

neutron   2320  0.3  0.3 263908 30696 ?        Ss   08:18   2:14 /usr/bin/python /usr/bin/neutron-dhcp-agent --config-file /usr/share/neutron/neutron-dist.conf --config-file /etc/neutron/neutron.conf --config-file /etc/neutron/dhcp_agent.ini --log-file /var/log/neutron/dhcp-agent.log

nobody    3529  0.0  0.0  15532   832 ?        S    08:20   0:00 dnsmasq --no-hosts --no-resolv --strict-order --bind-interfaces --interface=ns-40dd712c-e4 --except-interface=lo --pid-file=/var/lib/neutron/dhcp/1eea88bb-4952-4aa4-9148-18b61c22d5b7/pid --dhcp-hostsfile=/var/lib/neutron/dhcp/1eea88bb-4952-4aa4-9148-18b61c22d5b7/host --dhcp-optsfile=/var/lib/neutron/dhcp/1eea88bb-4952-4aa4-9148-18b61c22d5b7/opts --leasefile-ro --dhcp-range=set:tag0,10.0.0.0,static,120s --dhcp-lease-max=256 --conf-file=/etc/neutron/dnsmasq.conf --domain=openstacklocal

nobody    3530  0.0  0.0  15532   944 ?        S    08:20   0:00 dnsmasq --no-hosts --no-resolv --strict-order --bind-interfaces --interface=ns-343b0090-24 --except-interface=lo --pid-file=/var/lib/neutron/dhcp/426bb226-0ab9-440d-ba14-05634a17fb2b/pid --dhcp-hostsfile=/var/lib/neutron/dhcp/426bb226-0ab9-440d-ba14-05634a17fb2b/host --dhcp-optsfile=/var/lib/neutron/dhcp/426bb226-0ab9-440d-ba14-05634a17fb2b/opts --leasefile-ro --dhcp-range=set:tag0,40.0.0.0,static,120s --dhcp-lease-max=256 --conf-file=/etc/neutron/dnsmasq.conf --domain=openstacklocal
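Each dnsmasq instance is pinned to its namespace's ns-* interface via `--interface=`; a quick way to pull that value out of the ps output (a sketch against one sample line trimmed from the listing above):

```shell
# Extract the --interface= value from a dnsmasq command line.
# Sample trimmed from the ps output above; note the pattern below does not
# match --bind-interfaces or --except-interface=, only --interface=.
line='dnsmasq --no-hosts --no-resolv --strict-order --bind-interfaces --interface=ns-343b0090-24 --except-interface=lo'
iface=$(echo "$line" | grep -o -- '--interface=[^ ]*' | cut -d= -f2)
echo "$iface"
```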

List interfaces inside dhcp namespace

[root@dfw02 ~(keystone_admin)]$ ip netns exec qdhcp-426bb226-0ab9-440d-ba14-05634a17fb2b ip a

1: lo: mtu 65536 qdisc noqueue state UNKNOWN group default
link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
inet 127.0.0.1/8 scope host lo
valid_lft forever preferred_lft forever
inet6 ::1/128 scope host
valid_lft forever preferred_lft forever

2: ns-343b0090-24: mtu 1500 qdisc pfifo_fast state UP group default qlen 1000
link/ether fa:16:3e:01:8b:55 brd ff:ff:ff:ff:ff:ff
inet 40.0.0.3/24 brd 40.0.0.255 scope global ns-343b0090-24
valid_lft forever preferred_lft forever
inet6 fe80::f816:3eff:fe01:8b55/64 scope link
valid_lft forever preferred_lft forever

(A) (From the instance to a router)

Check routing inside dhcp namespace

[root@dfw02 ~(keystone_admin)]$ ip netns exec qdhcp-426bb226-0ab9-440d-ba14-05634a17fb2b  ip r

default via 40.0.0.1 dev ns-343b0090-24

40.0.0.0/24 dev ns-343b0090-24  proto kernel  scope link  src 40.0.0.3

Check routing inside the router namespace

[root@dfw02 ~(keystone_admin)]$ ip netns exec qrouter-86b3008c-297f-4301-9bdc-766b839785f1 ip r

default via 192.168.1.1 dev qg-9c090153-08

40.0.0.0/24 dev qr-e031db6b-d0  proto kernel  scope link  src 40.0.0.1

192.168.1.0/24 dev qg-9c090153-08  proto kernel  scope link  src 192.168.1.114

Get the router list, then use a similar grep on each router ID to obtain the router namespaces:

[root@dfw02 ~(keystone_admin)]$ neutron router-list

+--------------------------------------+---------+-----------------------------------------------------------------------------+
| id                                   | name    | external_gateway_info                                                       |
+--------------------------------------+---------+-----------------------------------------------------------------------------+
| 86b3008c-297f-4301-9bdc-766b839785f1 | router2 | {"network_id": "780ce2f3-2e6e-4881-bbac-857813f9a8e0", "enable_snat": true} |
| bf360d81-79fb-4636-8241-0a843f228fc8 | router1 | {"network_id": "780ce2f3-2e6e-4881-bbac-857813f9a8e0", "enable_snat": true} |
+--------------------------------------+---------+-----------------------------------------------------------------------------+

Now get qrouter-* namespaces via `ip netns list` command :-

[root@dfw02 ~(keystone_admin)]$ ip netns list | grep 86b3008c-297f-4301-9bdc-766b839785f1
qrouter-86b3008c-297f-4301-9bdc-766b839785f1

[root@dfw02 ~(keystone_admin)]$ ip netns list | grep  bf360d81-79fb-4636-8241-0a843f228fc8
qrouter-bf360d81-79fb-4636-8241-0a843f228fc8

Now verify L3 forwarding & NAT via `iptables -L -t nat` inside each router namespace, and check that port 80 traffic for 169.254.169.254 is redirected to the RDO Havana Controller (in my configuration, running the Neutron Server service along with all agents) at metadata port 8700.

[root@dfw02 ~(keystone_admin)]$ ip netns exec qrouter-86b3008c-297f-4301-9bdc-766b839785f1 iptables -L -t nat

Chain PREROUTING (policy ACCEPT)
target     prot opt source               destination
neutron-l3-agent-PREROUTING  all  --  anywhere             anywhere

Chain INPUT (policy ACCEPT)
target     prot opt source               destination

Chain OUTPUT (policy ACCEPT)
target     prot opt source               destination
neutron-l3-agent-OUTPUT  all  --  anywhere             anywhere

Chain POSTROUTING (policy ACCEPT)
target     prot opt source               destination
neutron-l3-agent-POSTROUTING  all  --  anywhere             anywhere
neutron-postrouting-bottom  all  --  anywhere             anywhere

Chain neutron-l3-agent-OUTPUT (1 references)
target     prot opt source               destination
DNAT       all  --  anywhere             dfw02.localdomain    to:40.0.0.2
DNAT       all  --  anywhere             dfw02.localdomain    to:40.0.0.6
DNAT       all  --  anywhere             dfw02.localdomain    to:40.0.0.5

Chain neutron-l3-agent-POSTROUTING (1 references)
target     prot opt source               destination
ACCEPT     all  --  anywhere             anywhere             ! ctstate DNAT

Chain neutron-l3-agent-PREROUTING (1 references)
target     prot opt source               destination
REDIRECT   tcp  --  anywhere             169.254.169.254      tcp dpt:http redir ports 8700
DNAT       all  --  anywhere             dfw02.localdomain    to:40.0.0.2
DNAT       all  --  anywhere             dfw02.localdomain    to:40.0.0.6
DNAT       all  --  anywhere             dfw02.localdomain    to:40.0.0.5

Chain neutron-l3-agent-float-snat (1 references)
target     prot opt source               destination
SNAT       all  --  40.0.0.2             anywhere             to:192.168.1.107
SNAT       all  --  40.0.0.6             anywhere             to:192.168.1.104
SNAT       all  --  40.0.0.5             anywhere             to:192.168.1.110

Chain neutron-l3-agent-snat (1 references)
target     prot opt source               destination
neutron-l3-agent-float-snat  all  --  anywhere             anywhere
SNAT       all  --  40.0.0.0/24          anywhere             to:192.168.1.114

Chain neutron-postrouting-bottom (1 references)
target     prot opt source               destination
neutron-l3-agent-snat  all  --  anywhere             anywhere

[root@dfw02 ~(keystone_admin)]$ ip netns exec qrouter-bf360d81-79fb-4636-8241-0a843f228fc8  iptables -L -t nat

Chain PREROUTING (policy ACCEPT)
target     prot opt source               destination
neutron-l3-agent-PREROUTING  all  --  anywhere             anywhere

Chain INPUT (policy ACCEPT)
target     prot opt source               destination

Chain OUTPUT (policy ACCEPT)
target     prot opt source               destination
neutron-l3-agent-OUTPUT  all  --  anywhere             anywhere

Chain POSTROUTING (policy ACCEPT)
target     prot opt source               destination
neutron-l3-agent-POSTROUTING  all  --  anywhere             anywhere
neutron-postrouting-bottom  all  --  anywhere             anywhere

Chain neutron-l3-agent-OUTPUT (1 references)
target     prot opt source               destination
DNAT       all  --  anywhere             dfw02.localdomain    to:10.0.0.2

Chain neutron-l3-agent-POSTROUTING (1 references)
target     prot opt source               destination
ACCEPT     all  --  anywhere             anywhere             ! ctstate DNAT

Chain neutron-l3-agent-PREROUTING (1 references)
target     prot opt source               destination
REDIRECT   tcp  --  anywhere             169.254.169.254      tcp dpt:http redir ports 8700
DNAT       all  --  anywhere             dfw02.localdomain    to:10.0.0.2

Chain neutron-l3-agent-float-snat (1 references)
target     prot opt source               destination
SNAT       all  --  10.0.0.2             anywhere             to:192.168.1.103

Chain neutron-l3-agent-snat (1 references)
target     prot opt source               destination
neutron-l3-agent-float-snat  all  --  anywhere             anywhere
SNAT       all  --  10.0.0.0/24          anywhere             to:192.168.1.100

Chain neutron-postrouting-bottom (1 references)
target     prot opt source               destination
neutron-l3-agent-snat  all  --  anywhere             anywhere

(B) (Through a NAT rule in the router namespace)

Check the NAT table

[root@dfw02 ~(keystone_admin)]$ ip netns exec qrouter-86b3008c-297f-4301-9bdc-766b839785f1 iptables -t nat -S

-P PREROUTING ACCEPT
-P INPUT ACCEPT
-P OUTPUT ACCEPT
-P POSTROUTING ACCEPT
-N neutron-l3-agent-OUTPUT
-N neutron-l3-agent-POSTROUTING
-N neutron-l3-agent-PREROUTING
-N neutron-l3-agent-float-snat
-N neutron-l3-agent-snat
-N neutron-postrouting-bottom
-A PREROUTING -j neutron-l3-agent-PREROUTING
-A OUTPUT -j neutron-l3-agent-OUTPUT
-A POSTROUTING -j neutron-l3-agent-POSTROUTING
-A POSTROUTING -j neutron-postrouting-bottom
-A neutron-l3-agent-OUTPUT -d 192.168.1.112/32 -j DNAT --to-destination 40.0.0.2
-A neutron-l3-agent-OUTPUT -d 192.168.1.113/32 -j DNAT --to-destination 40.0.0.4
-A neutron-l3-agent-OUTPUT -d 192.168.1.104/32 -j DNAT --to-destination 40.0.0.6
-A neutron-l3-agent-OUTPUT -d 192.168.1.110/32 -j DNAT --to-destination 40.0.0.5
-A neutron-l3-agent-POSTROUTING ! -i qg-9c090153-08 ! -o qg-9c090153-08 -m conntrack ! --ctstate DNAT -j ACCEPT
-A neutron-l3-agent-PREROUTING -d 169.254.169.254/32 -p tcp -m tcp --dport 80 -j REDIRECT --to-ports 8700
-A neutron-l3-agent-PREROUTING -d 192.168.1.112/32 -j DNAT --to-destination 40.0.0.2
-A neutron-l3-agent-PREROUTING -d 192.168.1.113/32 -j DNAT --to-destination 40.0.0.4
-A neutron-l3-agent-PREROUTING -d 192.168.1.104/32 -j DNAT --to-destination 40.0.0.6
-A neutron-l3-agent-PREROUTING -d 192.168.1.110/32 -j DNAT --to-destination 40.0.0.5
-A neutron-l3-agent-float-snat -s 40.0.0.2/32 -j SNAT --to-source 192.168.1.112
-A neutron-l3-agent-float-snat -s 40.0.0.4/32 -j SNAT --to-source 192.168.1.113
-A neutron-l3-agent-float-snat -s 40.0.0.6/32 -j SNAT --to-source 192.168.1.104
-A neutron-l3-agent-float-snat -s 40.0.0.5/32 -j SNAT --to-source 192.168.1.110
-A neutron-l3-agent-snat -j neutron-l3-agent-float-snat
-A neutron-l3-agent-snat -s 40.0.0.0/24 -j SNAT --to-source 192.168.1.114
-A neutron-postrouting-bottom -j neutron-l3-agent-snat

[root@dfw02 ~(keystone_admin)]$ ip netns exec qrouter-bf360d81-79fb-4636-8241-0a843f228fc8 iptables -t nat -S

-P PREROUTING ACCEPT

-P INPUT ACCEPT

-P OUTPUT ACCEPT

-P POSTROUTING ACCEPT

-N neutron-l3-agent-OUTPUT

-N neutron-l3-agent-POSTROUTING

-N neutron-l3-agent-PREROUTING

-N neutron-l3-agent-float-snat

-N neutron-l3-agent-snat

-N neutron-postrouting-bottom

-A PREROUTING -j neutron-l3-agent-PREROUTING

-A OUTPUT -j neutron-l3-agent-OUTPUT

-A POSTROUTING -j neutron-l3-agent-POSTROUTING

-A POSTROUTING -j neutron-postrouting-bottom

-A neutron-l3-agent-OUTPUT -d 192.168.1.103/32 -j DNAT --to-destination 10.0.0.2

-A neutron-l3-agent-POSTROUTING ! -i qg-54e34740-87 ! -o qg-54e34740-87 -m conntrack ! --ctstate DNAT -j ACCEPT

-A neutron-l3-agent-PREROUTING -d 169.254.169.254/32 -p tcp -m tcp --dport 80 -j REDIRECT --to-ports 8700

-A neutron-l3-agent-PREROUTING -d 192.168.1.103/32 -j DNAT --to-destination 10.0.0.2

-A neutron-l3-agent-float-snat -s 10.0.0.2/32 -j SNAT --to-source 192.168.1.103

-A neutron-l3-agent-snat -j neutron-l3-agent-float-snat

-A neutron-l3-agent-snat -s 10.0.0.0/24 -j SNAT --to-source 192.168.1.100

-A neutron-postrouting-bottom -j neutron-l3-agent-snat
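Taken together, the rules above implement a 1:1 mapping between each floating IP and its fixed IP, plus a subnet-wide default SNAT. A minimal shell sketch of that translation logic, using the addresses from the second router's rules (the function names are mine, for illustration only; Neutron itself does this with the iptables rules shown above):

```shell
#!/bin/sh
# Sketch of the address translation the qrouter namespace performs.
# Addresses are taken from the NAT rules above.

# DNAT direction (PREROUTING/OUTPUT): traffic addressed to the floating
# IP is rewritten to the instance's fixed IP.
dnat() {
    case "$1" in
        192.168.1.103) echo 10.0.0.2 ;;
        *)             echo "$1" ;;     # no rule: address unchanged
    esac
}

# SNAT direction (float-snat first, then the subnet default): traffic
# leaving a fixed IP is rewritten to its floating IP, or to the router's
# gateway address 192.168.1.100 when the instance has no floating IP.
snat() {
    case "$1" in
        10.0.0.2)  echo 192.168.1.103 ;;
        10.0.0.*)  echo 192.168.1.100 ;;
        *)         echo "$1" ;;
    esac
}

dnat 192.168.1.103    # prints 10.0.0.2
snat 10.0.0.2         # prints 192.168.1.103
snat 10.0.0.9         # prints 192.168.1.100
```

Note the ordering: neutron-l3-agent-float-snat is consulted before the subnet-wide SNAT rule, which is why the case branches above test the exact address first.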

Ping to verify network connections

[root@dfw02 ~(keystone_admin)]$ ip netns exec qrouter-86b3008c-297f-4301-9bdc-766b839785f1 ping 8.8.8.8

PING 8.8.8.8 (8.8.8.8) 56(84) bytes of data.

64 bytes from 8.8.8.8: icmp_seq=1 ttl=47 time=42.6 ms

64 bytes from 8.8.8.8: icmp_seq=2 ttl=47 time=40.8 ms

64 bytes from 8.8.8.8: icmp_seq=3 ttl=47 time=41.6 ms

64 bytes from 8.8.8.8: icmp_seq=4 ttl=47 time=41.0 ms

Verify that a service is listening on port 8700 inside the router namespaces;

the output should look like this:

(C) (to an instance of the neutron-ns-metadata-proxy)

[root@dfw02 ~(keystone_admin)]$ ip netns exec qrouter-86b3008c-297f-4301-9bdc-766b839785f1  netstat -lntp | grep 8700

tcp        0      0 0.0.0.0:8700            0.0.0.0:*               LISTEN      4946/python

Check process with pid 4946

[root@dfw02 ~(keystone_admin)]$ ps -ef | grep 4946

root      4946     1  0 08:58 ?        00:00:00 /usr/bin/python /bin/neutron-ns-metadata-proxy --pid_file=/var/lib/neutron/external/pids/86b3008c-297f-4301-9bdc-766b839785f1.pid --metadata_proxy_socket=/var/lib/neutron/metadata_proxy --router_id=86b3008c-297f-4301-9bdc-766b839785f1 --state_path=/var/lib/neutron --metadata_port=8700 --verbose --log-file=neutron-ns-metadata-proxy-86b3008c-297f-4301-9bdc-766b839785f1.log --log-dir=/var/log/neutron

root     10396 11489  0 16:33 pts/3    00:00:00 grep --color=auto 4946

[root@dfw02 ~(keystone_admin)]$ ip netns exec qrouter-bf360d81-79fb-4636-8241-0a843f228fc8  netstat -lntp | grep 8700

tcp        0      0 0.0.0.0:8700            0.0.0.0:*               LISTEN      4746/python

Check process with pid 4746

[root@dfw02 ~(keystone_admin)]$ ps -ef | grep 4746

root      4746     1  0 08:58 ?        00:00:00 /usr/bin/python /bin/neutron-ns-metadata-proxy --pid_file=/var/lib/neutron/external/pids/bf360d81-79fb-4636-8241-0a843f228fc8.pid --metadata_proxy_socket=/var/lib/neutron/metadata_proxy --router_id=bf360d81-79fb-4636-8241-0a843f228fc8 --state_path=/var/lib/neutron --metadata_port=8700 --verbose --log-file=neutron-ns-metadata-proxy-bf360d81-79fb-4636-8241-0a843f228fc8.log --log-dir=/var/log/neutron

Now run the following commands inside the router namespaces to check the status of the neutron metadata port:

[root@dfw02 ~(keystone_admin)]$ ip netns exec qrouter-86b3008c-297f-4301-9bdc-766b839785f1  netstat -na

Active Internet connections (servers and established)

Proto Recv-Q Send-Q Local Address           Foreign Address         State

tcp        0      0 0.0.0.0:8700            0.0.0.0:*               LISTEN

Active UNIX domain sockets (servers and established)

Proto RefCnt Flags       Type       State         I-Node   Path

[root@dfw02 ~(keystone_admin)]$ ip netns exec qrouter-bf360d81-79fb-4636-8241-0a843f228fc8  netstat -na

Active Internet connections (servers and established)

Proto Recv-Q Send-Q Local Address           Foreign Address         State

tcp        0      0 0.0.0.0:8700            0.0.0.0:*               LISTEN

Active UNIX domain sockets (servers and established)

Proto RefCnt Flags       Type       State         I-Node   Path

Outside the router namespaces it looks like this:

(D) (to the actual Nova metadata service)

[root@dfw02 ~(keystone_admin)]$ netstat -lntp | grep 8700

tcp        0      0 0.0.0.0:8700            0.0.0.0:*               LISTEN      2746/python

Check process with pid  2746

[root@dfw02 ~(keystone_admin)]$ ps -ef | grep 2746

nova      2746     1  0 08:57 ?        00:02:31 /usr/bin/python /usr/bin/nova-api --logfile /var/log/nova/api.log

nova      2830  2746  0 08:57 ?        00:00:00 /usr/bin/python /usr/bin/nova-api --logfile /var/log/nova/api.log

nova      2851  2746  0 08:57 ?        00:00:10 /usr/bin/python /usr/bin/nova-api --logfile /var/log/nova/api.log

nova      2858  2746  0 08:57 ?        00:00:02 /usr/bin/python /usr/bin/nova-api --logfile /var/log/nova/api.log

root      9976 11489  0 16:31 pts/3    00:00:00 grep --color=auto 2746

So we have actually verified the statement from Direct access to Nova metadata:

In an environment running Neutron, a request from your instance must traverse a number of steps:

1. From the instance to a router, (A)

2. Through a NAT rule in the router namespace,  (B)

3. To an instance of the neutron-ns-metadata-proxy, (C)

4. To the actual Nova metadata service (D)

References

1. OpenStack Networking concepts


HowTo access metadata from RDO Havana Instance on Fedora 20

April 5, 2014

Per  Direct_access _to_Nova_metadata

In an environment running Neutron, a request from your instance must traverse a number of steps:

1. From the instance to a router,
2. Through a NAT rule in the router namespace,
3. To an instance of the neutron-ns-metadata-proxy,
4. To the actual Nova metadata service

Reproducing Direct_access_to_Nova_metadata I was able to get only the list of available EC2 metadata, but not the values. However, the major concern is getting the values of the metadata obtained in the post Direct_access_to_Nova_metadata and also at the /openstack location. The latter seem to me no less important than those present in the EC2 list, and they are also not provided by it.

The commands run below are supposed to verify that the Nova and Neutron setup was performed successfully; otherwise, passing the four steps 1, 2, 3, 4 will fail and force you to analyze the corresponding log files (view References). It doesn't matter whether you set up the cloud environment manually or via RDO packstack.

Run on Controller Node :-

[root@dallas1 ~(keystone_admin)]$ ip netns list

qrouter-cb80b040-f13f-4a67-a39e-353b1c873a0d
qdhcp-166d9651-d299-47df-a5a1-b368e87b612f

Check the NAT rules in the cloud controller's router namespace; they should show that port 80 for 169.254.169.254 is redirected to port 8700 on the host:

[root@dallas1 ~(keystone_admin)]$ ip netns exec qrouter-cb80b040-f13f-4a67-a39e-353b1c873a0d iptables -L -t nat | grep 169

REDIRECT   tcp  --  anywhere             169.254.169.254      tcp dpt:http redir ports 8700

Check routing table inside the router namespace:

[root@dallas1 ~(keystone_admin)]$ ip netns exec qrouter-cb80b040-f13f-4a67-a39e-353b1c873a0d ip r

default via 192.168.1.1 dev qg-8fbb6202-3d
10.0.0.0/24 dev qr-2dd1ba70-34  proto kernel  scope link  src 10.0.0.1
192.168.1.0/24 dev qg-8fbb6202-3d  proto kernel  scope link  src 192.168.1.100

[root@dallas1 ~(keystone_admin)]$ ip netns exec qrouter-cb80b040-f13f-4a67-a39e-353b1c873a0d netstat -na

Active Internet connections (servers and established)

Proto Recv-Q Send-Q Local Address           Foreign Address         State
tcp        0      0 0.0.0.0:8700            0.0.0.0:*               LISTEN   
Active UNIX domain sockets (servers and established)
Proto RefCnt Flags       Type       State         I-Node   Path

[root@dallas1 ~(keystone_admin)]$ ip netns exec qdhcp-166d9651-d299-47df-a5a1-b368e87b612f netstat -na

Active Internet connections (servers and established)
Proto Recv-Q Send-Q Local Address           Foreign Address         State
tcp        0      0 10.0.0.3:53             0.0.0.0:*               LISTEN
tcp6       0      0 fe80::f816:3eff:feef:53 :::*                    LISTEN
udp        0      0 10.0.0.3:53             0.0.0.0:*
udp        0      0 0.0.0.0:67              0.0.0.0:*
udp6       0      0 fe80::f816:3eff:feef:53 :::*
Active UNIX domain sockets (servers and established)
Proto RefCnt Flags       Type       State         I-Node   Path

[root@dallas1 ~(keystone_admin)]$ iptables-save | grep 8700

-A INPUT -p tcp -m multiport --dports 8700 -m comment --comment "001 metadata incoming" -j ACCEPT

[root@dallas1 ~(keystone_admin)]$ netstat -lntp | grep 8700

tcp        0      0 0.0.0.0:8700            0.0.0.0:*               LISTEN      2830/python  

[root@dallas1 ~(keystone_admin)]$ ps -ef | grep 2830
nova      2830     1  0 09:41 ?        00:00:57 /usr/bin/python /usr/bin/nova-api --logfile /var/log/nova/api.log
nova      2856  2830  0 09:41 ?        00:00:00 /usr/bin/python /usr/bin/nova-api --logfile /var/log/nova/api.log
nova      2874  2830  0 09:41 ?        00:00:09 /usr/bin/python /usr/bin/nova-api --logfile /var/log/nova/api.log
nova      2875  2830  0 09:41 ?        00:00:01 /usr/bin/python /usr/bin/nova-api --logfile /var/log/nova/api.log

1. At this point you should be able (inside any running Havana instance) to point your browser ("links" at least, if there is no lightweight X environment) at

http://169.254.169.254/openstack/latest (not EC2)

The response will be: meta_data.json password vendor_data.json

What is curl: http://curl.haxx.se/docs/faq.html#What_is_cURL

Now you should be able to run on the F20 instance:

[root@vf20rs0404 ~] # curl http://169.254.169.254/openstack/latest/meta_data.json | tee meta_data.json

  % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                 Dload  Upload   Total   Spent    Left  Speed
100  1286  100  1286    0     0   1109      0  0:00:01  0:00:01 --:--:--  1127

. . . . . . . .

"uuid": "10142280-44a2-4830-acce-f12f3849cb32",

"availability_zone": "nova",

"hostname": "vf20rs0404.novalocal",

"launch_index": 0,

"public_keys": {"key2": "ssh-rsa . . . . .  Generated by Nova\n"},

"name": "VF20RS0404"

On another instance (in my case Ubuntu 14.04):

root@ubuntutrs0407:~# curl http://169.254.169.254/openstack/latest/meta_data.json | tee meta_data.json

  % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                 Dload  Upload   Total   Spent    Left  Speed
100  1292  100  1292    0     0    444      0  0:00:02  0:00:02 --:--:--   446

{"random_seed": "…",

"uuid": "8c79e60c-4f1d-44e5-8446-b42b4d94c4fc",

"availability_zone": "nova",

"hostname": "ubuntutrs0407.novalocal",

"launch_index": 0,

"public_keys": {"key2": "ssh-rsa …. Generated by Nova\n"},

"name": "UbuntuTRS0407"}
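From a script inside the instance, single fields of meta_data.json can be pulled out without extra tooling. A minimal sketch using sed on the sample values shown above (on a real instance you would feed it the output of `curl http://169.254.169.254/openstack/latest/meta_data.json` instead of the inline sample; use a real JSON parser for anything nested):

```shell
#!/bin/sh
# Sample payload copied from the output above; in practice this would be
# the body returned by curl from 169.254.169.254.
meta='{"uuid": "8c79e60c-4f1d-44e5-8446-b42b4d94c4fc",
"availability_zone": "nova",
"hostname": "ubuntutrs0407.novalocal",
"name": "UbuntuTRS0407"}'

# Naive extractor: adequate for flat, string-valued keys like these.
json_field() {
    printf '%s\n' "$meta" | sed -n "s/.*\"$1\": \"\([^\"]*\)\".*/\1/p"
}

json_field hostname   # prints ubuntutrs0407.novalocal
json_field uuid       # prints 8c79e60c-4f1d-44e5-8446-b42b4d94c4fc
```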

Running VMs on Compute node:-

[root@dallas1 ~(keystone_boris)]$ nova list

+--------------------------------------+---------------+-----------+------------+-------------+-----------------------------+
| ID                                   | Name          | Status    | Task State | Power State | Networks                    |
+--------------------------------------+---------------+-----------+------------+-------------+-----------------------------+
| d0f947b1-ff6a-4ff0-b858-b63a3d07cca3 | UbuntuTRS0405 | SUSPENDED | None       | Shutdown    | int=10.0.0.7, 192.168.1.106 |
| 8c79e60c-4f1d-44e5-8446-b42b4d94c4fc | UbuntuTRS0407 | ACTIVE    | None       | Running     | int=10.0.0.6, 192.168.1.107 |
| 8775924c-dbbd-4fbb-afb8-7e38d9ac7615 | VF20RS037     | SUSPENDED | None       | Shutdown    | int=10.0.0.2, 192.168.1.115 |
| d22a2376-33da-4a0e-a066-d334bd2e511d | VF20RS0402    | SUSPENDED | None       | Shutdown    | int=10.0.0.4, 192.168.1.103 |
| 10142280-44a2-4830-acce-f12f3849cb32 | VF20RS0404    | ACTIVE    | None       | Running     | int=10.0.0.5, 192.168.1.105 |
+--------------------------------------+---------------+-----------+------------+-------------+-----------------------------+

Launching a browser to http://169.254.169.254/openstack/latest/meta_data.json on another two-node Neutron GRE+OVS F20 cluster; the output is sent directly to the browser.

2. I have provided some information about the OpenStack metadata API, which is available at /openstack. If you are interested in the EC2 metadata API, point the browser at http://169.254.169.254/latest/meta-data/

which allows you to get any of the displayed parameters.

For instance:

OR via CLI:

ubuntu@ubuntutrs0407:~$ curl  http://169.254.169.254/latest/meta-data/instance-id

i-000000a4

ubuntu@ubuntutrs0407:~$ curl  http://169.254.169.254/latest/meta-data/public-hostname

ubuntutrs0407.novalocal

ubuntu@ubuntutrs0407:~$ curl  http://169.254.169.254/latest/meta-data/public-ipv4

192.168.1.107
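The three queries above differ only in the key name, so they collapse naturally into a loop. A dry-run sketch (drop the `echo` to actually issue the requests from inside an instance):

```shell
#!/bin/sh
# Enumerate a few EC2 metadata keys against the well-known address.
# The echo makes this a dry run that just prints the curl commands.
base=http://169.254.169.254/latest/meta-data
for key in instance-id public-hostname public-ipv4; do
    echo curl $base/$key
done
```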

To verify the instance-id, launch virt-manager connected to the Compute node,

which shows the same value, "000000a4".

Another option in text mode is the "links" browser.

$ ssh -l ubuntu -i key2.pem 192.168.1.109

Inside Ubuntu 14.04 instance  :-

# apt-get -y install links

# links

Press ESC to get to the menu.

References

1. https://ask.openstack.org/en/question/10140/wget-http1692541692542009-04-04meta-datainstance-id-error-404/


Setup Dashboard&VNC console on Two Node Controller&Compute Neutron GRE+OVS+Gluster Fedora 20 Cluster

March 13, 2014

This post follows up Gluster 3.4.2 on Two Node Controller&Compute Neutron GRE+OVS Fedora 20 Cluster; in particular, it can be performed after Basic Setup to make system management more comfortable than CLI only.

It's also easy to create an instance via the Dashboard:

Place a customization script (the analog of --user-data) in the post-creation panel.

#cloud-config
password: mysecret
chpasswd: { expire: False }
ssh_pwauth: True

This lets you log in as "fedora" and sets MTU=1457 inside the VM (GRE tunneling).
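On the CLI the same cloud-config would be passed with nova boot's --user-data option. A hedged sketch (IMAGE_ID, NET_ID and the flavor here are placeholders, not values from this setup):

```shell
#!/bin/sh
# Write the cloud-config shown above to a file...
cat > user-data.txt <<'EOF'
#cloud-config
password: mysecret
chpasswd: { expire: False }
ssh_pwauth: True
EOF

# ...and pass it at boot time. Dry run: drop the echo to really boot.
# IMAGE_ID and NET_ID are placeholders for your image and network UUIDs.
echo nova boot --image IMAGE_ID --flavor m1.small \
     --user-data user-data.txt --nic net-id=NET_ID VF20RS015
```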

A key pair submitted upon creation works like this:

[root@dfw02 Downloads(keystone_boris)]$ ssh -l fedora -i key2.pem  192.168.1.109
Last login: Sat Mar 15 07:47:45 2014

[fedora@vf20rs015 ~]$ uname -a
Linux vf20rs015.novalocal 3.13.6-200.fc20.x86_64 #1 SMP Fri Mar 7 17:02:28 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux

[fedora@vf20rs015 ~]$ ifconfig
eth0: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1457
inet 40.0.0.7  netmask 255.255.255.0  broadcast 40.0.0.255
inet6 fe80::f816:3eff:fe1e:1de6  prefixlen 64  scopeid 0x20<link>
ether fa:16:3e:1e:1d:e6  txqueuelen 1000  (Ethernet)
RX packets 225  bytes 25426 (24.8 KiB)
RX errors 0  dropped 0  overruns 0  frame 0
TX packets 221  bytes 23674 (23.1 KiB)
TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0

lo: flags=73<UP,LOOPBACK,RUNNING>  mtu 65536
inet 127.0.0.1  netmask 255.0.0.0
inet6 ::1  prefixlen 128  scopeid 0x10<host>
loop  txqueuelen 0  (Local Loopback)
RX packets 0  bytes 0 (0.0 B)
RX errors 0  dropped 0  overruns 0  frame 0
TX packets 0  bytes 0 (0.0 B)
TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0

The setup described at the link mentioned above was originally suggested by Kashyap Chamarthy for VMs running on a non-default libvirt subnet. My contribution was to reproduce this setup on physical F20 boxes and an arbitrary network not managed by libvirt: preventive updates to the mysql.user table, which allowed remote connections for nova-compute and neutron-openvswitch-agent from the Compute node to the Controller, and changes to /etc/sysconfig/iptables to enable the Gluster 3.4.2 setup on F20 systems (view http://bderzhavets.blogspot.com/2014/03/setup-gluster-342-on-two-node-neutron.html). I have also fixed a typo in dhcp_agent.ini (the reference to "dnsmasq.conf") and added to dnsmasq.conf the line "dhcp-option=26,1454". The updated configuration files are critical for launching an instance without a "Customization script" and allow working with a usual ssh key pair. Actually, when the updates are done, the instance gets created with MTU 1454; view [2]. This setup is focused on the ability to transfer Neutron metadata from the Controller to Compute F20 nodes and is done manually with no answer files. It stops exactly at the point when `nova boot ..` loads the instance on Compute, which obtains an internal IP via DHCP running on the Controller and may be assigned a floating IP to communicate with the Internet. No attempt to set up the dashboard was made there, because the core target was Neutron GRE+OVS functionality (just a proof of concept).

Setup

– Controller node: Nova, Keystone, Cinder, Glance, Neutron (using Open vSwitch plugin and GRE tunneling), Dashboard

– Compute node: Nova (nova-compute), Neutron (openvswitch-agent)

dfw02.localdomain   -  Controller (192.168.1.127)
dfw01.localdomain   -  Compute   (192.168.1.137)

1. The first step follows http://docs.openstack.org/havana/install-guide/install/yum/content/install_dashboard.html and http://docs.openstack.org/havana/install-guide/install/yum/content/dashboard-session-database.html. The sequence of actions per the manuals above:

# yum install memcached python-memcached mod_wsgi openstack-dashboard

Modify the value of CACHES['default']['LOCATION'] in /etc/openstack-dashboard/local_settings to match the one set in /etc/sysconfig/memcached. Open /etc/openstack-dashboard/local_settings and look for this line:

CACHES = {
    'default': {
        'BACKEND': 'django.core.cache.backends.memcached.MemcachedCache',
        'LOCATION': '127.0.0.1:11211'
    }
}

Update the ALLOWED_HOSTS in local_settings.py to include the addresses you wish to access the dashboard from. Edit /etc/openstack-dashboard/local_settings:

ALLOWED_HOSTS = ['Controller-IP', 'my-desktop']

This guide assumes that you are running the Dashboard on the controller node. You can easily run the dashboard on a separate server, by changing the appropriate settings in local_settings.py. Edit /etc/openstack-dashboard/local_settings and change OPENSTACK_HOST to the hostname of your Identity Service:

OPENSTACK_HOST = "Controller-IP"

Start the Apache web server and memcached:

# service httpd restart

# systemctl start memcached

# systemctl enable memcached

To configure the MySQL database, create the dash database:

mysql> CREATE DATABASE dash;

Create a MySQL user for the newly-created dash database that has full control of the database. Replace DASH_DBPASS with a password for the new user:

mysql> GRANT ALL ON dash.* TO 'dash'@'%' IDENTIFIED BY 'fedora';

mysql> GRANT ALL ON dash.* TO 'dash'@'localhost' IDENTIFIED BY 'fedora';
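The two GRANT statements differ only in the host part, so a small generator makes it easy to add more hosts later. A sketch (the `grants` helper is mine; pipe its output into `mysql -u root -p` to apply it):

```shell
#!/bin/sh
# Emit GRANT statements for the dash database for a list of hosts.
db=dash
user=dash
pass=fedora

grants() {
    for host in "$@"; do
        echo "GRANT ALL ON $db.* TO '$user'@'$host' IDENTIFIED BY '$pass';"
    done
}

grants '%' localhost
```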

In the local_settings file /etc/openstack-dashboard/local_settings

SESSION_ENGINE = 'django.contrib.sessions.backends.db'

DATABASES = {
    'default': {
        # Database configuration here
        'ENGINE': 'django.db.backends.mysql',
        'NAME': 'dash',
        'USER': 'dash',
        'PASSWORD': 'fedora',
        'HOST': 'Controller-IP',
        'default-character-set': 'utf8'
    }
}

After configuring the local_settings as shown, you can run the manage.py syncdb command to populate this newly-created database.

# /usr/share/openstack-dashboard/manage.py syncdb

Attempting to run syncdb, you might get an error like 'dash'@'yourhost' is not authorized to do it with password 'YES'. Then (for instance, in my case):

# mysql -u root -p

MariaDB [(none)]> SELECT User, Host, Password FROM mysql.user;

MariaDB [(none)]> insert into mysql.user(User,Host,Password) values ('dash','dallas1.localdomain',' ');

Query OK, 1 row affected, 4 warnings (0.00 sec)

MariaDB [(none)]> UPDATE mysql.user SET Password = PASSWORD('fedora')

    -> WHERE User = 'dash' ;

Query OK, 1 row affected (0.00 sec)  Rows matched: 3  Changed: 1  Warnings: 0

MariaDB [(none)]> SELECT User, Host, Password FROM mysql.user;

.   .  .  .

| dash     | %                   | *C9E492EC67084E4255B200FD34BDF396E3CE1A36 |
| dash     | localhost           | *C9E492EC67084E4255B200FD34BDF396E3CE1A36 |
| dash     | dallas1.localdomain | *C9E492EC67084E4255B200FD34BDF396E3CE1A36 |
+----------+---------------------+-------------------------------------------+

20 rows in set (0.00 sec)

That is exactly the same issue which comes up when starting the openstack-nova-scheduler & openstack-nova-conductor services during basic installation of the Controller on Fedora 20. View Basic setup, in particular:

Set table mysql.user in proper status

shell> mysql -u root -p
mysql> insert into mysql.user (User,Host,Password) values ('nova','dfw02.localdomain',' ');
mysql> UPDATE mysql.user SET Password = PASSWORD('nova')
    ->    WHERE User = 'nova';
mysql> FLUSH PRIVILEGES;

Start, enable nova-{api,scheduler,conductor} services

  $ for i in start enable status; \
    do systemctl $i openstack-nova-api; done

  $ for i in start enable status; \
    do systemctl $i openstack-nova-scheduler; done

  $ for i in start enable status; \
    do systemctl $i openstack-nova-conductor; done

 # service httpd restart

Finally on Controller (dfw02  – 192.168.1.127)  file /etc/openstack-dashboard/local_settings  looks like https://bderzhavets.wordpress.com/2014/03/14/sample-of-etcopenstack-dashboardlocal_settings/

At this point the dashboard is functional, but instance console output is unavailable via the dashboard. I didn't get any error code, just:

Instance Detail: VF20RS03

Overview  Log  Console

Loading…

2. The second step is skipped in the mentioned manual, but known to experienced people: https://ask.openstack.org/en/question/520/vnc-console-in-dashboard-fails-to-connect-ot-server-code-1006/

**************************************

Controller  dfw02 – 192.168.1.127

**************************************

# ssh-keygen (Hit Enter to accept all of the defaults)

# ssh-copy-id -i ~/.ssh/id_rsa.pub root@dfw01

[root@dfw02 ~(keystone_boris)]$ ssh -L 5900:127.0.0.1:5900 -N -f -l root 192.168.1.137
[root@dfw02 ~(keystone_boris)]$ ssh -L 5901:127.0.0.1:5901 -N -f -l root 192.168.1.137
[root@dfw02 ~(keystone_boris)]$ ssh -L 5902:127.0.0.1:5902 -N -f -l root 192.168.1.137
[root@dfw02 ~(keystone_boris)]$ ssh -L 5903:127.0.0.1:5903 -N -f -l root 192.168.1.137
[root@dfw02 ~(keystone_boris)]$ ssh -L 5904:127.0.0.1:5904 -N -f -l root 192.168.1.137
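The five tunnels above differ only in the port number, so a loop avoids the repetition. A dry-run sketch (drop the `echo` to actually open the tunnels to the Compute node):

```shell
#!/bin/sh
# Forward local VNC ports 5900-5904 to the same ports on the Compute node.
compute=192.168.1.137
for port in 5900 5901 5902 5903 5904; do
    echo ssh -L $port:127.0.0.1:$port -N -f -l root $compute
done
```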

Compute’s  IP is 192.168.1.137

Update /etc/nova/nova.conf:

novncproxy_host=0.0.0.0

novncproxy_port=6080

novncproxy_base_url=http://192.168.1.127:6080/vnc_auto.html

[root@dfw02 ~(keystone_admin)]$ systemctl enable openstack-nova-consoleauth.service
ln -s '/usr/lib/systemd/system/openstack-nova-consoleauth.service' '/etc/systemd/system/multi-user.target.wants/openstack-nova-consoleauth.service'
[root@dfw02 ~(keystone_admin)]$ systemctl enable openstack-nova-novncproxy.service
ln -s '/usr/lib/systemd/system/openstack-nova-novncproxy.service' '/etc/systemd/system/multi-user.target.wants/openstack-nova-novncproxy.service'

[root@dfw02 ~(keystone_admin)]$ systemctl start openstack-nova-consoleauth.service
[root@dfw02 ~(keystone_admin)]$ systemctl start openstack-nova-novncproxy.service

[root@dfw02 ~(keystone_admin)]$ systemctl status openstack-nova-consoleauth.service

openstack-nova-consoleauth.service – OpenStack Nova VNC console auth Server

Loaded: loaded (/usr/lib/systemd/system/openstack-nova-consoleauth.service; enabled)

Active: active (running) since Thu 2014-03-13 19:14:45 MSK; 20min ago

Main PID: 14679 (nova-consoleaut)

CGroup: /system.slice/openstack-nova-consoleauth.service

└─14679 /usr/bin/python /usr/bin/nova-consoleauth --logfile /var/log/nova/consoleauth.log

Mar 13 19:14:45 dfw02.localdomain systemd[1]: Started OpenStack Nova VNC console auth Server.

[root@dfw02 ~(keystone_admin)]$ systemctl status openstack-nova-novncproxy.service

openstack-nova-novncproxy.service – OpenStack Nova NoVNC Proxy Server

Loaded: loaded (/usr/lib/systemd/system/openstack-nova-novncproxy.service; enabled)

Active: active (running) since Thu 2014-03-13 19:14:58 MSK; 20min ago

Main PID: 14762 (nova-novncproxy)

CGroup: /system.slice/openstack-nova-novncproxy.service

├─14762 /usr/bin/python /usr/bin/nova-novncproxy --web /usr/share/novnc/

└─17166 /usr/bin/python /usr/bin/nova-novncproxy --web /usr/share/novnc/

Mar 13 19:23:54 dfw02.localdomain nova-novncproxy[14762]: 20: 127.0.0.1: Path: '/websockify'

Mar 13 19:23:54 dfw02.localdomain nova-novncproxy[14762]: 20: connecting to: 127.0.0.1:5900

Mar 13 19:23:55 dfw02.localdomain nova-novncproxy[14762]: 19: 127.0.0.1: ignoring empty handshake

Mar 13 19:24:31 dfw02.localdomain nova-novncproxy[14762]: 22: 127.0.0.1: ignoring socket not ready

Mar 13 19:24:32 dfw02.localdomain nova-novncproxy[14762]: 23: 127.0.0.1: Plain non-SSL (ws://) WebSocket connection

Mar 13 19:24:32 dfw02.localdomain nova-novncproxy[14762]: 23: 127.0.0.1: Version hybi-13, base64: 'True'

Mar 13 19:24:32 dfw02.localdomain nova-novncproxy[14762]: 23: 127.0.0.1: Path: '/websockify'

Mar 13 19:24:32 dfw02.localdomain nova-novncproxy[14762]: 23: connecting to: 127.0.0.1:5901

Mar 13 19:24:37 dfw02.localdomain nova-novncproxy[14762]: 26: 127.0.0.1: ignoring empty handshake

Mar 13 19:24:37 dfw02.localdomain nova-novncproxy[14762]: 25: 127.0.0.1: ignoring empty handshake

Hint: Some lines were ellipsized, use -l to show in full.

[root@dfw02 ~(keystone_admin)]$ netstat -lntp | grep 6080

tcp        0      0 0.0.0.0:6080            0.0.0.0:*               LISTEN      14762/python

*********************************

Compute  dfw01 – 192.168.1.137

*********************************

Update  /etc/nova/nova.conf:

vnc_enabled=True

novncproxy_base_url=http://192.168.1.127:6080/vnc_auto.html

vncserver_listen=0.0.0.0

vncserver_proxyclient_address=192.168.1.137

# systemctl restart openstack-nova-compute

Finally :-

[root@dfw02 ~(keystone_admin)]$ systemctl list-units | grep nova

openstack-nova-api.service                      loaded active running   OpenStack Nova API Server
openstack-nova-conductor.service           loaded active running   OpenStack Nova Conductor Server
openstack-nova-consoleauth.service       loaded active running   OpenStack Nova VNC console auth Server
openstack-nova-novncproxy.service         loaded active running   OpenStack Nova NoVNC Proxy Server
openstack-nova-scheduler.service            loaded active running   OpenStack Nova Scheduler Server

[root@dfw02 ~(keystone_admin)]$ nova-manage service list

Binary           Host                                 Zone             Status     State Updated_At

nova-scheduler   dfw02.localdomain                    internal         enabled    :-)   2014-03-13 16:56:54

nova-conductor   dfw02.localdomain                    internal         enabled    :-)   2014-03-13 16:56:54

nova-compute     dfw01.localdomain                     nova             enabled    :-)   2014-03-13 16:56:45

nova-consoleauth dfw02.localdomain                   internal         enabled    :-)   2014-03-13 16:56:47

[root@dfw02 ~(keystone_admin)]$ neutron agent-list

+--------------------------------------+--------------------+-------------------+-------+----------------+
| id                                   | agent_type         | host              | alive | admin_state_up |
+--------------------------------------+--------------------+-------------------+-------+----------------+
| 037b985d-7a7d-455b-8536-76eed40b0722 | L3 agent           | dfw02.localdomain | :-)   | True           |
| 22438ee9-b4ea-4316-9492-eb295288f61a | Open vSwitch agent | dfw02.localdomain | :-)   | True           |
| 76ed02e2-978f-40d0-879e-1a2c6d1f7915 | DHCP agent         | dfw02.localdomain | :-)   | True           |
| 951632a3-9744-4ff4-a835-c9f53957c617 | Open vSwitch agent | dfw01.localdomain | :-)   | True           |
+--------------------------------------+--------------------+-------------------+-------+----------------+

User's console views:

Admin console views:

[root@dallas2 ~]# service openstack-nova-compute status -l
Redirecting to /bin/systemctl status  -l openstack-nova-compute.service
openstack-nova-compute.service – OpenStack Nova Compute Server
Loaded: loaded (/usr/lib/systemd/system/openstack-nova-compute.service; enabled)
Active: active (running) since Thu 2014-03-20 16:29:07 MSK; 6h ago
Main PID: 1685 (nova-compute)
CGroup: /system.slice/openstack-nova-compute.service
├─1685 /usr/bin/python /usr/bin/nova-compute --logfile /var/log/nova/compute.log
└─3552 /usr/sbin/glusterfs --volfile-id=cinder-volumes012 --volfile-server=192.168.1.130 /var/lib/nova/mnt/1c9688348ab38662e3ac8fb121077d34

Mar 20 22:20:15 dallas2.localdomain sudo[11210]: nova : TTY=unknown ; PWD=/ ; USER=root ; COMMAND=/bin/nova-rootwrap /etc/nova/rootwrap.conf ip link set qvb372fd13e-d2 up
Mar 20 22:20:15 dallas2.localdomain sudo[11213]: nova : TTY=unknown ; PWD=/ ; USER=root ; COMMAND=/bin/nova-rootwrap /etc/nova/rootwrap.conf ip link set qvb372fd13e-d2 promisc on
Mar 20 22:20:16 dallas2.localdomain sudo[11216]: nova : TTY=unknown ; PWD=/ ; USER=root ; COMMAND=/bin/nova-rootwrap /etc/nova/rootwrap.conf ip link set qvo372fd13e-d2 up
Mar 20 22:20:16 dallas2.localdomain sudo[11219]: nova : TTY=unknown ; PWD=/ ; USER=root ; COMMAND=/bin/nova-rootwrap /etc/nova/rootwrap.conf ip link set qvo372fd13e-d2 promisc on
Mar 20 22:20:16 dallas2.localdomain sudo[11222]: nova : TTY=unknown ; PWD=/ ; USER=root ; COMMAND=/bin/nova-rootwrap /etc/nova/rootwrap.conf ip link set qbr372fd13e-d2 up
Mar 20 22:20:16 dallas2.localdomain sudo[11225]: nova : TTY=unknown ; PWD=/ ; USER=root ; COMMAND=/bin/nova-rootwrap /etc/nova/rootwrap.conf brctl addif qbr372fd13e-d2 qvb372fd13e-d2
Mar 20 22:20:16 dallas2.localdomain sudo[11228]: nova : TTY=unknown ; PWD=/ ; USER=root ; COMMAND=/bin/nova-rootwrap /etc/nova/rootwrap.conf ovs-vsctl -- --may-exist add-port br-int qvo372fd13e-d2 -- set Interface qvo372fd13e-d2 external-ids:iface-id=372fd13e-d283-43ba-9a4e-a1684660f4ce external-ids:iface-status=active external-ids:attached-mac=fa:16:3e:0d:4a:12 external-ids:vm-uuid=9679d849-7e4b-4cb5-b644-43279d53f01b
Mar 20 22:20:16 dallas2.localdomain ovs-vsctl[11230]: ovs|00001|vsctl|INFO|Called as /bin/ovs-vsctl -- --may-exist add-port br-int qvo372fd13e-d2 -- set Interface qvo372fd13e-d2 external-ids:iface-id=372fd13e-d283-43ba-9a4e-a1684660f4ce external-ids:iface-status=active external-ids:attached-mac=fa:16:3e:0d:4a:12 external-ids:vm-uuid=9679d849-7e4b-4cb5-b644-43279d53f01b
Mar 20 22:20:16 dallas2.localdomain sudo[11244]: nova : TTY=unknown ; PWD=/ ; USER=root ; COMMAND=/bin/nova-rootwrap /etc/nova/rootwrap.conf tee /sys/class/net/tap372fd13e-d2/brport/hairpin_mode
Mar 20 22:25:53 dallas2.localdomain nova-compute[1685]: 2014-03-20 22:25:53.102 1685 WARNING nova.compute.manager [-] Found 5 in the database and 2 on the hypervisor.

[root@dallas2 ~]# ovs-vsctl show
3e7422a7-8828-4e7c-b595-8a5b6504bc08
    Bridge br-int
        Port "qvod0e086e7-32"
            tag: 1
            Interface "qvod0e086e7-32"
        Port br-int
            Interface br-int
                type: internal
        Port "qvo372fd13e-d2"
            tag: 1
            Interface "qvo372fd13e-d2"
        Port "qvob49ecf5e-8e"
            tag: 1
            Interface "qvob49ecf5e-8e"
        Port "qvo756757a8-40"
            tag: 1
            Interface "qvo756757a8-40"
        Port patch-tun
            Interface patch-tun
                type: patch
                options: {peer=patch-int}
        Port "qvo4d1f9115-03"
            tag: 1
            Interface "qvo4d1f9115-03"
    Bridge br-tun
        Port "gre-1"
            Interface "gre-1"
                type: gre
                options: {in_key=flow, local_ip="192.168.1.140", out_key=flow, remote_ip="192.168.1.130"}
        Port br-tun
            Interface br-tun
                type: internal
        Port patch-int
            Interface patch-int
                type: patch
                options: {peer=patch-tun}
    ovs_version: "2.0.0"

[root@dallas1 ~(keystone_boris)]$ nova list
+--------------------------------------+--------------+-----------+------------+-------------+-----------------------------+
| ID                                   | Name         | Status    | Task State | Power State | Networks                    |
+--------------------------------------+--------------+-----------+------------+-------------+-----------------------------+
| 690d29ae-4c3c-4b2e-b2df-e4d654668336 | UbuntuSRS007 | SUSPENDED | None       | Shutdown    | int=10.0.0.6, 192.168.1.103 |
| 9c791573-1238-44c4-a103-6873fddc17d1 | UbuntuTS019  | SUSPENDED | None       | Shutdown    | int=10.0.0.5, 192.168.1.107 |
| 70db20be-efa6-4a96-bf39-6250962784a3 | VF20RS015    | SUSPENDED | None       | Shutdown    | int=10.0.0.7, 192.168.1.101 |
| 3c888e6a-dd4f-489a-82bb-1f1f9ce6a696 | VF20RS017    | ACTIVE    | None       | Running     | int=10.0.0.4, 192.168.1.102 |
| 9679d849-7e4b-4cb5-b644-43279d53f01b | VF20RS024    | ACTIVE    | None       | Running     | int=10.0.0.2, 192.168.1.105 |
+--------------------------------------+--------------+-----------+------------+-------------+-----------------------------+
[root@dallas1 ~(keystone_boris)]$ nova show 9679d849-7e4b-4cb5-b644-43279d53f01b
+————————————–+———————————————————-+
| Property                             | Value                                                    |
+————————————–+———————————————————-+
| status                               | ACTIVE                                                   |
| updated                              | 2014-03-20T18:20:16Z                                     |
| OS-EXT-STS:task_state                | None                                                     |
| key_name                             | key2                                                     |
| image                                | Attempt to boot from volume – no image supplied          |
| int network                          | 10.0.0.2, 192.168.1.105                                  |
| hostId                               | 8477c225f2a46d84dcd609798bf5ee71cc8d20b44256b3b2a54b723f |
| OS-EXT-STS:vm_state                  | active                                                   |
| OS-SRV-USG:launched_at               | 2014-03-20T18:20:16.000000                               |
| flavor                               | m1.small (2)                                             |
| id                                   | 9679d849-7e4b-4cb5-b644-43279d53f01b                     |
| security_groups                      | [{u'name': u'default'}]                                  |
| OS-SRV-USG:terminated_at             | None                                                     |
| user_id                              | df4a984ce2f24848a6b84aaa99e296f1                         |
| name                                 | VF20RS024                                                |
| created                              | 2014-03-20T18:20:10Z                                     |
| tenant_id                            | e896be65e94a4893b870bc29ba86d7eb                         |
| OS-DCF:diskConfig                    | MANUAL                                                   |
| metadata                             | {}                                                       |
| os-extended-volumes:volumes_attached | [{u'id': u'abc0f5b8-5144-42b7-b49f-a42a20ddd88f'}]       |
| accessIPv4                           |                                                          |
| accessIPv6                           |                                                          |
| progress                             | 0                                                        |
| OS-EXT-STS:power_state               | 1                                                        |
| OS-EXT-AZ:availability_zone          | nova                                                     |
| config_drive                         |                                                          |
+————————————–+———————————————————-+
[root@dallas1 ~(keystone_boris)]$ ls -l /FDR/Replicate
total 8383848
-rw-rw-rw-. 2 root root 5368709120 Mar 17 21:58 volume-4b807fe8-dcd2-46eb-b7dd-6ab10641c32a
-rw-rw-rw-. 2 root root 5368709120 Mar 20 18:26 volume-4df4fadf-1be9-4a09-b51c-723b8a6b9c23
-rw-rw-rw-. 2 root root 5368709120 Mar 19 13:46 volume-6ccc137a-6361-42ee-8925-57c6a2eeccf4
-rw-rw-rw-. 2 qemu qemu 5368709120 Mar 20 23:18 volume-abc0f5b8-5144-42b7-b49f-a42a20ddd88f
-rw-rw-rw-. 2 qemu qemu 5368709120 Mar 20 23:18 volume-ec9670b8-fa64-46e9-9695-641f51bf1421

[root@dallas1 ~(keystone_boris)]$ ssh 192.168.1.140
Last login: Thu Mar 20 20:15:49 2014
[root@dallas2 ~]# ls -l /FDR/Replicate
total 8383860
-rw-rw-rw-. 2 root root 5368709120 Mar 17 21:58 volume-4b807fe8-dcd2-46eb-b7dd-6ab10641c32a
-rw-rw-rw-. 2 root root 5368709120 Mar 20 18:26 volume-4df4fadf-1be9-4a09-b51c-723b8a6b9c23
-rw-rw-rw-. 2 root root 5368709120 Mar 19 13:46 volume-6ccc137a-6361-42ee-8925-57c6a2eeccf4
-rw-rw-rw-. 2 qemu qemu 5368709120 Mar 20 23:19 volume-abc0f5b8-5144-42b7-b49f-a42a20ddd88f
-rw-rw-rw-. 2 qemu qemu 5368709120 Mar 20 23:19 volume-ec9670b8-fa64-46e9-9695-641f51bf1421


Setup Gluster 3.4.2 on Two Node Controller&Compute Neutron GRE+OVS Fedora 20 Cluster

March 10, 2014

This post is an update to http://bderzhavets.blogspot.com/2014/01/setting-up-two-physical-node-openstack.html. It focuses on a Gluster 3.4.2 implementation, including tuning the /etc/sysconfig/iptables files on the Controller and Compute Nodes, copying the ssh key from the master node to the compute node, step-by-step verification of gluster volume "replica 2" functionality, and switching the RDO Havana cinder services to a gluster volume created to store instances' bootable cinder volumes, for better performance. Of course, creating gluster bricks under "/" is not recommended; the bricks should live on a separate "xfs" mount point on each node.

The manual RDO Havana setup itself was originally suggested by Kashyap Chamarthy for F20 VMs running on a non-default Libvirt subnet. My contribution was to reproduce this setup on physical F20 boxes on an arbitrary network not managed by Libvirt; to apply preventive updates to the mysql.user table, allowing remote connections for nova-compute and neutron-openvswitch-agent from Compute to Controller; and to change /etc/sysconfig/iptables to enable the Gluster 3.4.2 setup on F20 systems (view http://bderzhavets.blogspot.com/2014/03/setup-gluster-342-on-two-node-neutron.html). I have also fixed a typo in dhcp_agent.ini (the reference to "dnsmasq.conf") and added the line "dhcp-option=26,1454" to dnsmasq.conf. These configuration updates are critical for launching an instance without a "Customization script" and allow working with the usual ssh keypair; once they are in place, an instance gets created with MTU 1454. View [2]. The original setup is focused on the ability to transfer neutron metadata from Controller to Compute F20 nodes and is done manually with no answer files. It stops exactly at the point where `nova boot ..` loads an instance on Compute, which obtains an internal IP via DHCP running on the Controller and may be assigned a floating IP to communicate with the Internet. No attempt to set up the dashboard was made, because the core target was neutron GRE+OVS functionality (just a proof of concept). Regarding Dashboard Setup & VNC Console, view:-
Setup Dashboard&VNC console on Two Node Controller&Compute Neutron GRE+OVS+Gluster Fedora 20 Cluster

Updated setup procedure itself may be viewed here

Setup 

- Controller node: Nova, Keystone, Cinder, Glance, Neutron (using Open vSwitch plugin and GRE tunneling)

- Compute node: Nova (nova-compute), Neutron (openvswitch-agent)

dallas1.localdomain  -  Controller (192.168.1.130)

dallas2.localdomain  -  Compute (192.168.1.140)

First step is tuning /etc/sysconfig/iptables for the IPv4 iptables firewall (the firewalld service should be disabled):-

Update /etc/sysconfig/iptables on both nodes:-

-A INPUT -p tcp -m multiport --dport 24007:24047 -j ACCEPT
-A INPUT -p tcp --dport 111 -j ACCEPT
-A INPUT -p udp --dport 111 -j ACCEPT
-A INPUT -p tcp -m multiport --dport 38465:38485 -j ACCEPT

Comment out the lines below, ignoring the instruction from http://kashyapc.fedorapeople.org/virt/openstack/neutron-configs-GRE-OVS-two-node.txt. This is critical for Gluster functionality: with these rules active you would be limited to thin LVM as cinder volumes, you would not even be able to remote-mount with the "-t glusterfs" option, and Gluster replication would never work.

# -A FORWARD -j REJECT --reject-with icmp-host-prohibited
# -A INPUT -j REJECT --reject-with icmp-host-prohibited

Restart service iptables on both nodes
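The reload step can be sketched as follows (assuming Fedora 20 with the classic iptables service and firewalld disabled; the grep just counts the four Gluster rules added above before reloading):

```shell
# Sanity-check that the four Gluster rules landed, then reload the firewall.
grep -c -- '--dport' /etc/sysconfig/iptables
systemctl restart iptables.service
systemctl is-active iptables.service
```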

Second step:-

On dallas1, run the following commands :

# ssh-keygen (Hit Enter to accept all of the defaults)
# ssh-copy-id -i ~/.ssh/id_rsa.pub root@dallas2

On both nodes run :-

# yum  -y install glusterfs glusterfs-server glusterfs-fuse
# service glusterd start

On dallas1

# gluster peer probe dallas2.localdomain

It should return "success".

[root@dallas1 ~(keystone_admin)]$ gluster peer status

Number of Peers: 1
Hostname: dallas2.localdomain
Uuid: b3b1cf43-2fec-4904-82d4-b9be03f77c5f
State: Peer in Cluster (Connected)
On dallas2
[root@dallas2 ~]# gluster peer status
Number of Peers: 1
Hostname: 192.168.1.130
Uuid: a57433dd-4a1a-4442-a5ae-ba2f682e5c79
State: Peer in Cluster (Connected)

*************************************************************************************
On Controller (192.168.1.130)  & Compute nodes (192.168.1.140)
**********************************************************************************

Verify ports availability:-

[root@dallas1 ~(keystone_admin)]$ netstat -lntp | grep gluster
tcp    0      0 0.0.0.0:655        0.0.0.0:*    LISTEN      2591/glusterfs
tcp    0      0 0.0.0.0:49152      0.0.0.0:*    LISTEN      2524/glusterfsd
tcp    0      0 0.0.0.0:2049       0.0.0.0:*    LISTEN      2591/glusterfs
tcp    0      0 0.0.0.0:38465      0.0.0.0:*    LISTEN      2591/glusterfs
tcp    0      0 0.0.0.0:38466      0.0.0.0:*    LISTEN      2591/glusterfs
tcp    0      0 0.0.0.0:49155      0.0.0.0:*    LISTEN      2525/glusterfsd
tcp    0      0 0.0.0.0:38468      0.0.0.0:*    LISTEN      2591/glusterfs
tcp    0      0 0.0.0.0:38469      0.0.0.0:*    LISTEN      2591/glusterfs
tcp    0      0 0.0.0.0:24007      0.0.0.0:*    LISTEN      2380/glusterd

************************************

Switching Cinder to Gluster volume

************************************

# gluster volume create cinder-volumes012 replica 2 dallas1.localdomain:/FDR/Replicate dallas2.localdomain:/FDR/Replicate force
# gluster volume start cinder-volumes012
# gluster volume set cinder-volumes012 auth.allow 192.168.1.*
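Before pointing Cinder at the volume, a quick client-side replication check can be done (a sketch: /mnt/gluster-test is an arbitrary scratch mount point of my choosing, and the volume name should be whatever `gluster volume info` reports):

```shell
# FUSE-mount the replicated volume and write a probe file.
mkdir -p /mnt/gluster-test
mount -t glusterfs 192.168.1.130:/cinder-volumes012 /mnt/gluster-test
touch /mnt/gluster-test/probe-file
# Replica 2 means the file must show up under /FDR/Replicate on BOTH nodes.
ls -l /FDR/Replicate/probe-file
umount /mnt/gluster-test
```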

[root@dallas1 ~(keystone_admin)]$ gluster volume info cinder-volumes012

Volume Name: cinder-volumes012
Type: Replicate
Volume ID: 9ee31c6c-0ae3-4fee-9886-b9cb6a518f48
Status: Started
Number of Bricks: 1 x 2 = 2
Transport-type: tcp
Bricks:
Brick1: dallas1.localdomain:/FDR/Replicate
Brick2: dallas2.localdomain:/FDR/Replicate
Options Reconfigured:
storage.owner-gid: 165
storage.owner-uid: 165
auth.allow: 192.168.1.*

[root@dallas1 ~(keystone_admin)]$ gluster volume status cinder-volumes012

Status of volume: cinder-volumes012
Gluster process                                 Port     Online   Pid
------------------------------------------------------------------------------
Brick dallas1.localdomain:/FDR/Replicate        49155    Y        2525
Brick dallas2.localdomain:/FDR/Replicate        49152    Y        1615
NFS Server on localhost                         2049     Y        2591
Self-heal Daemon on localhost                   N/A      Y        2596
NFS Server on dallas2.localdomain               2049     Y        2202
Self-heal Daemon on dallas2.localdomain         N/A      Y        2197

# openstack-config --set /etc/cinder/cinder.conf DEFAULT volume_driver cinder.volume.drivers.glusterfs.GlusterfsDriver
# openstack-config --set /etc/cinder/cinder.conf DEFAULT glusterfs_shares_config /etc/cinder/shares.conf
# openstack-config --set /etc/cinder/cinder.conf DEFAULT glusterfs_mount_point_base /var/lib/cinder/volumes
# vi /etc/cinder/shares.conf
192.168.1.130:cinder-volumes012
:wq
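The same edit can be done non-interactively; one "host:volume" pair per line is what the glusterfs cinder driver reads from shares.conf (use the volume name shown by `gluster volume info`):

```shell
# Write the Gluster share list for cinder-volume without opening vi.
echo '192.168.1.130:cinder-volumes012' > /etc/cinder/shares.conf
cat /etc/cinder/shares.conf
```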

Make sure all thin LVM volumes have been deleted (check with `cinder list`); if any remain, delete them first.

[root@dallas1 ~(keystone_admin)]$ for i in api scheduler volume ; do service openstack-cinder-${i} restart ; done

This should add a row to the `df -h` output:

192.168.1.130:cinder-volumes012  187G   32G  146G  18% /var/lib/cinder/volumes/1c9688348ab38662e3ac8fb121077d34
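A quick way to confirm the share got mounted (a sketch; the hash-named mount directory is derived from the share string and will differ per setup):

```shell
# Look for the Gluster share among the mounted filesystems.
df -h | grep ':cinder-volumes'
mount | grep fuse.glusterfs
```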

[root@dallas1 ~(keystone_admin)]$ openstack-status

== Nova services ==

openstack-nova-api:                        active
openstack-nova-cert:                       inactive  (disabled on boot)
openstack-nova-compute:               inactive  (disabled on boot)
openstack-nova-network:                inactive  (disabled on boot)
openstack-nova-scheduler:             active
openstack-nova-volume:                  inactive  (disabled on boot)
openstack-nova-conductor:             active
== Glance services ==
openstack-glance-api:                   active
openstack-glance-registry:           active
== Keystone service ==
openstack-keystone:                     active
== neutron services ==
neutron-server:                         active
neutron-dhcp-agent:                active
neutron-l3-agent:                     active
neutron-metadata-agent:        active
neutron-lbaas-agent:                    inactive  (disabled on boot)
neutron-openvswitch-agent:       active
neutron-linuxbridge-agent:         inactive  (disabled on boot)
neutron-ryu-agent:                      inactive  (disabled on boot)
neutron-nec-agent:                      inactive  (disabled on boot)
neutron-mlnx-agent:                   inactive  (disabled on boot)
== Cinder services ==
openstack-cinder-api:                   active
openstack-cinder-scheduler:        active
openstack-cinder-volume:             active
== Support services ==
mysqld:                                 inactive  (disabled on boot)
libvirtd:                               active
openvswitch:                            active
dbus:                                   active
tgtd:                                   active
qpidd:                                  active
== Keystone users ==
+———————————-+———+———+——-+
|                id                |   name  | enabled | email |
+———————————-+———+———+——-+
| 871cf99617ff40e09039185aa7ab11f8 |  admin  |   True  |       |
| df4a984ce2f24848a6b84aaa99e296f1 |  boris  |   True  |       |
| 57fc5466230b497a9f206a20618dbe25 |  cinder |   True  |       |
| cdb2e5af7bae4c5486a1e3e2f42727f0 |  glance |   True  |       |
| adb14139a0874c74b14d61d2d4f22371 | neutron |   True  |       |
| 2485122e3538409c8a6fa2ea4343cedf |   nova  |   True  |       |
+———————————-+———+———+——-+
== Glance images ==
+————————————–+———————+————-+——————+———–+——–+
| ID                                   | Name                | Disk Format | Container Format | Size      | Status |
+————————————–+———————+————-+——————+———–+——–+
| 592faef8-308a-4438-867a-17adf685cde4 | CirrOS 31           | qcow2       | bare             | 13147648  | active |
| d0e90250-5814-4685-9b8d-65ec9daa7117 | Fedora 20 x86_64    | qcow2       | bare             | 214106112 | active |
| 3e6eea8e-32e6-4373-9eb1-e04b8a3167f9 | Ubuntu Server 13.10 | qcow2       | bare             | 244777472 | active |
+————————————–+———————+————-+——————+———–+——–+
== Nova managed services ==
+—————-+———————+———-+———+——-+—————————-+—————–+
| Binary         | Host                | Zone     | Status  | State | Updated_at                 | Disabled Reason |
+—————-+———————+———-+———+——-+—————————-+—————–+
| nova-scheduler | dallas1.localdomain | internal | enabled | up    | 2014-03-09T14:19:31.000000 | None            |
| nova-conductor | dallas1.localdomain | internal | enabled | up    | 2014-03-09T14:19:30.000000 | None            |
| nova-compute   | dallas2.localdomain | nova     | enabled | up    | 2014-03-09T14:19:33.000000 | None            |
+—————-+———————+———-+———+——-+—————————-+—————–+
== Nova networks ==
+————————————–+——-+——+
| ID                                   | Label | Cidr |
+————————————–+——-+——+
| 0ed406bf-3552-4036-9006-440f3e69618e | ext   | None |
| 166d9651-d299-47df-a5a1-b368e87b612f | int   | None |
+————————————–+——-+——+
== Nova instance flavors ==
+—-+———–+———–+——+———–+——+——-+————-+———–+
| ID | Name      | Memory_MB | Disk | Ephemeral | Swap | VCPUs | RXTX_Factor | Is_Public |
+—-+———–+———–+——+———–+——+——-+————-+———–+
| 1  | m1.tiny   | 512       | 1    | 0         |      | 1     | 1.0         | True      |
| 2  | m1.small  | 2048      | 20   | 0         |      | 1     | 1.0         | True      |
| 3  | m1.medium | 4096      | 40   | 0         |      | 2     | 1.0         | True      |
| 4  | m1.large  | 8192      | 80   | 0         |      | 4     | 1.0         | True      |
| 5  | m1.xlarge | 16384     | 160  | 0         |      | 8     | 1.0         | True      |
+—-+———–+———–+——+———–+——+——-+————-+———–+
== Nova instances ==
+—-+——+——–+————+————-+———-+
| ID | Name | Status | Task State | Power State | Networks |
+—-+——+——–+————+————-+———-+
+—-+——+——–+————+————-+———-+

[root@dallas1 ~(keystone_boris)]$ nova list

+————————————–+———–+——–+————+————-+—————————–+
| ID                                   | Name      | Status | Task State | Power State | Networks                    |
+————————————–+———–+——–+————+————-+—————————–+
| 9566adec-9406-4c3e-bce5-109ecb8bcf6b | VF20RS007 | ACTIVE | None       | Running     | int=10.0.0.2, 192.168.1.101 |
| 8142ee4c-ef56-4b61-8a0b-ecd82d21484f | VF20RS012 | ACTIVE | None       | Running     | int=10.0.0.4, 192.168.1.102 |
+————————————–+———–+——–+————+————-+—————————–+
[root@dallas1 ~(keystone_boris)]$ df -h

Filesystem                       Size  Used Avail Use% Mounted on
/dev/mapper/fedora01-root        187G   32G  146G  18% /
devtmpfs                         3.9G     0  3.9G   0% /dev
tmpfs                            3.9G  184K  3.9G   1% /dev/shm
tmpfs                            3.9G  9.1M  3.9G   1% /run
tmpfs                            3.9G     0  3.9G   0% /sys/fs/cgroup
tmpfs                            3.9G  464K  3.9G   1% /tmp
/dev/sdb5                        477M  122M  327M  28% /boot
tmpfs                            3.9G  9.1M  3.9G   1% /run/netns
192.168.1.130:cinder-volumes012  187G   32G  146G  18% /var/lib/cinder/volumes/1c9688348ab38662e3ac8fb121077d34

(neutron) agent-list

+————————————–+——————–+———————+——-+—————-+
| id                                   | agent_type         | host                | alive | admin_state_up |
+————————————–+——————–+———————+——-+—————-+
| 3ed1cd15-81af-4252-9d6f-e9bb140bf6cf | L3 agent           | dallas1.localdomain | :-)   | True           |
| a088a6df-633c-4959-a316-510c99f3876b | DHCP agent         | dallas1.localdomain | :-)   | True           |
| a3e5200c-b391-4930-b3ee-58c8d1b13c73 | Open vSwitch agent | dallas1.localdomain | :-)   | True           |
| b6da839a-0d93-44ad-9793-6d0919fbb547 | Open vSwitch agent | dallas2.localdomain | :-)   | True           |
+————————————–+——————–+———————+——-+—————-+
If the Controller has been set up correctly:-

[root@dallas1 ~(keystone_admin)]$ netstat -lntp | grep python
tcp    0     0 0.0.0.0:8700      0.0.0.0:*     LISTEN      1160/python
tcp    0     0 0.0.0.0:35357     0.0.0.0:*     LISTEN      1163/python
tcp   0      0 0.0.0.0:9696      0.0.0.0:*      LISTEN      1165/python
tcp   0      0 0.0.0.0:8773      0.0.0.0:*      LISTEN      1160/python
tcp   0      0 0.0.0.0:8774      0.0.0.0:*      LISTEN      1160/python
tcp   0      0 0.0.0.0:9191      0.0.0.0:*      LISTEN      1173/python
tcp   0      0 0.0.0.0:8776      0.0.0.0:*      LISTEN      8169/python
tcp   0      0 0.0.0.0:5000      0.0.0.0:*      LISTEN      1163/python
tcp   0      0 0.0.0.0:9292      0.0.0.0:*      LISTEN      1168/python 

**********************************************
Creating instance utilizing glusterfs volume
**********************************************

[root@dallas1 ~(keystone_boris)]$ glance image-list

+————————————–+———————+————-+——————+———–+——–+
| ID                                   | Name                | Disk Format | Container Format | Size      | Status |
+————————————–+———————+————-+——————+———–+——–+
| 592faef8-308a-4438-867a-17adf685cde4 | CirrOS 31           | qcow2       | bare             | 13147648  | active |
| d0e90250-5814-4685-9b8d-65ec9daa7117 | Fedora 20 x86_64    | qcow2       | bare             | 214106112 | active |
| 3e6eea8e-32e6-4373-9eb1-e04b8a3167f9 | Ubuntu Server 13.10 | qcow2       | bare             | 244777472 | active |
+————————————–+———————+————-+——————+———–+——–+

I have to note that the schema `cinder create --image-id .. --display_name VOL_NAME SIZE` & `nova boot --flavor 2 --user-data=./myfile.txt --block_device_mapping vda=volume_id:::0 VM_NAME` did not work reliably for me at the time.

As of 03/11 the standard schema via `cinder create --image-id IMAGE_ID --display_name VOL_NAME SIZE` & `nova boot --flavor 2 --user-data=./myfile.txt --block_device_mapping vda=VOLUME_ID:::0 INSTANCE_NAME` started to work fine. However, the schema described below, on the contrary, stopped working on glusterfs-based cinder volumes.
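The working two-step schema can be scripted end to end; a sketch (the image ID is the Fedora 20 image from the glance listing above, while the volume and instance names are mine, and the awk field positions follow the clients' table output):

```shell
# 1) Create a bootable 5 GB volume from the Glance image and capture its ID.
VOL_ID=$(cinder create --image-id d0e90250-5814-4685-9b8d-65ec9daa7117 \
         --display_name VF20VOL 5 | awk '/ id /{print $4}')
# 2) Once `cinder list` reports the volume "available", boot from it.
nova boot --flavor 2 --user-data=./myfile.txt \
     --block_device_mapping vda=$VOL_ID:::0 VF20RSNEW
```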

[root@dallas1 ~(keystone_boris)]$ nova boot --flavor 2 --user-data=./myfile.txt --block-device source=image,id=d0e90250-5814-4685-9b8d-65ec9daa7117,dest=volume,size=5,shutdown=preserve,bootindex=0 VF20RS012

+————————————–+————————————————-+
| Property                             | Value                                           |
+————————————–+————————————————-+
| status                               | BUILD                                           |
| updated                              | 2014-03-09T12:41:22Z                            |
| OS-EXT-STS:task_state                | scheduling                                      |
| key_name                             | None                                            |
| image                                | Attempt to boot from volume – no image supplied |
| hostId                               |                                                 |
| OS-EXT-STS:vm_state                  | building                                        |
| OS-SRV-USG:launched_at               | None                                            |
| flavor                               | m1.small                                        |
| id                                   | 8142ee4c-ef56-4b61-8a0b-ecd82d21484f            |
| security_groups                      | [{u'name': u'default'}]                         |
| OS-SRV-USG:terminated_at             | None                                            |
| user_id                              | df4a984ce2f24848a6b84aaa99e296f1                |
| name                                 | VF20RS012                                       |
| adminPass                            | eFDhC8ZSCFU2                                    |
| tenant_id                            | e896be65e94a4893b870bc29ba86d7eb                |
| created                              | 2014-03-09T12:41:22Z                            |
| OS-DCF:diskConfig                    | MANUAL                                          |
| metadata                             | {}                                              |
| os-extended-volumes:volumes_attached | []                                              |
| accessIPv4                           |                                                 |
| accessIPv6                           |                                                 |
| progress                             | 0                                               |
| OS-EXT-STS:power_state               | 0                                               |
| OS-EXT-AZ:availability_zone          | nova                                            |
| config_drive                         |                                                 |
+————————————–+————————————————-+

[root@dallas1 ~(keystone_boris)]$ nova list

+————————————–+———–+———–+———————-+————-+—————————–+
| ID                                   | Name      | Status    | Task State           | Power State | Networks                    |
+————————————–+———–+———–+———————-+————-+—————————–+
| 9566adec-9406-4c3e-bce5-109ecb8bcf6b | VF20RS007 | SUSPENDED | None                 | Shutdown    | int=10.0.0.2, 192.168.1.101 |
| 8142ee4c-ef56-4b61-8a0b-ecd82d21484f | VF20RS012 | BUILD     | block_device_mapping | NOSTATE     |                             |
+————————————–+———–+———–+———————-+————-+—————————–+
WAIT …
[root@dallas1 ~(keystone_boris)]$ nova list
+————————————–+———–+———–+————+————-+—————————–+
| ID                                   | Name      | Status    | Task State | Power State | Networks                    |
+————————————–+———–+———–+————+————-+—————————–+
| 9566adec-9406-4c3e-bce5-109ecb8bcf6b | VF20RS007 | SUSPENDED | None       | Shutdown    | int=10.0.0.2, 192.168.1.101 |
| 8142ee4c-ef56-4b61-8a0b-ecd82d21484f | VF20RS012 | ACTIVE    | None       | Running     | int=10.0.0.4                |
+————————————–+———–+———–+————+————-+—————————–+
[root@dallas1 ~(keystone_boris)]$ neutron floatingip-create ext

Created a new floatingip:

+———————+————————————–+
| Field               | Value                                |
+———————+————————————–+
| fixed_ip_address    |                                      |
| floating_ip_address | 192.168.1.102                        |
| floating_network_id | 0ed406bf-3552-4036-9006-440f3e69618e |
| id                  | 5c74667d-9b22-4092-ae0a-70ff3a06e785 |
| port_id             |                                      |
| router_id           |                                      |
| tenant_id           | e896be65e94a4893b870bc29ba86d7eb     |
+———————+————————————–+

[root@dallas1 ~(keystone_boris)]$ neutron port-list --device-id 8142ee4c-ef56-4b61-8a0b-ecd82d21484f

+————————————–+——+——————-+———————————————————————————+
| id                                   | name | mac_address       | fixed_ips                                                                       |
+————————————–+——+——————-+———————————————————————————+
| dc60b5f4-739e-49bd-a004-3ef806e2b488 |      | fa:16:3e:70:56:cc | {“subnet_id”: “2e838119-3e2e-46e8-b7cc-6d00975046f2”, “ip_address”: “10.0.0.4”} |
+————————————–+——+——————-+———————————————————————————+

[root@dallas1 ~(keystone_boris)]$ neutron floatingip-associate 5c74667d-9b22-4092-ae0a-70ff3a06e785 dc60b5f4-739e-49bd-a004-3ef806e2b488

Associated floatingip 5c74667d-9b22-4092-ae0a-70ff3a06e785
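The allocate/lookup/associate sequence above can be collapsed into a short script (a sketch; VM_ID is the instance UUID from `nova list`, and the awk field positions follow the neutron client table output):

```shell
VM_ID=8142ee4c-ef56-4b61-8a0b-ecd82d21484f
# Allocate a floating IP from the external network and capture its ID.
FIP_ID=$(neutron floatingip-create ext | awk '/ id /{print $4}')
# Find the instance's neutron port and associate the floating IP with it.
PORT_ID=$(neutron port-list --device-id $VM_ID | awk '/ip_address/{print $2}')
neutron floatingip-associate $FIP_ID $PORT_ID
```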

[root@dallas1 ~(keystone_boris)]$ ping 192.168.1.102

PING 192.168.1.102 (192.168.1.102) 56(84) bytes of data.
64 bytes from 192.168.1.102: icmp_seq=1 ttl=63 time=6.23 ms
64 bytes from 192.168.1.102: icmp_seq=2 ttl=63 time=0.702 ms
64 bytes from 192.168.1.102: icmp_seq=3 ttl=63 time=1.07 ms
64 bytes from 192.168.1.102: icmp_seq=4 ttl=63 time=0.693 ms
64 bytes from 192.168.1.102: icmp_seq=5 ttl=63 time=1.80 ms
64 bytes from 192.168.1.102: icmp_seq=6 ttl=63 time=0.750 ms
^C

[root@dallas1 ~(keystone_boris)]$ nova list

+————————————–+———–+——–+————+————-+—————————–+
| ID                                   | Name      | Status | Task State | Power State | Networks                    |
+————————————–+———–+——–+————+————-+—————————–+
| 9566adec-9406-4c3e-bce5-109ecb8bcf6b | VF20RS007 | ACTIVE | None       | Running     | int=10.0.0.2, 192.168.1.101 |
| 8142ee4c-ef56-4b61-8a0b-ecd82d21484f | VF20RS012 | ACTIVE | None       | Running     | int=10.0.0.4, 192.168.1.102 |
+————————————–+———–+——–+————+————-+—————————–+

[root@dallas1 ~(keystone_boris)]$ cinder list

+————————————–+——–+————–+——+————-+———-+————————————–+
|                  ID                  | Status | Display Name | Size | Volume Type | Bootable |             Attached to              |
+————————————–+——–+————–+——+————-+———-+————————————–+

| 575be853-b104-458e-bc72-1785ef524416 | in-use |              |  5   |     None    |   true   | 8142ee4c-ef56-4b61-8a0b-ecd82d21484f |
| 9794bd45-8923-4f3e-a48f-fa1d62a964f8 | in-use |              |  5   |     None    |   true   | 9566adec-9406-4c3e-bce5-109ecb8bcf6b |
+--------------------------------------+--------+--------------+------+-------------+----------+--------------------------------------+

On Compute:-

[root@dallas1 ~]# ssh 192.168.1.140

Last login: Sun Mar  9 16:46:40 2014

[root@dallas2 ~]# df -h

Filesystem                       Size  Used Avail Use% Mounted on
/dev/mapper/fedora01-root        187G   18G  160G  11% /
devtmpfs                         3.9G     0  3.9G   0% /dev
tmpfs                            3.9G  3.1M  3.9G   1% /dev/shm
tmpfs                            3.9G  9.4M  3.9G   1% /run
tmpfs                            3.9G     0  3.9G   0% /sys/fs/cgroup
tmpfs                            3.9G  115M  3.8G   3% /tmp
/dev/sdb5                        477M  122M  327M  28% /boot
192.168.1.130:cinder-volumes012  187G   32G  146G  18% /var/lib/nova/mnt/1c9688348ab38662e3ac8fb121077d34

[root@dallas2 ~]# ps -ef| grep nova

nova      1548     1  0 16:29 ?        00:00:42 /usr/bin/python /usr/bin/nova-compute --logfile /var/log/nova/compute.log

root      3005     1  0 16:34 ?        00:00:38 /usr/sbin/glusterfs --volfile-id=cinder-volumes012 --volfile-server=192.168.1.130 /var/lib/nova/mnt/1c9688348ab38662e3ac8fb121077d34

qemu      4762     1 58 16:42 ?        00:52:17 /usr/bin/qemu-system-x86_64 -name instance-00000061 -S -machine pc-i440fx-1.6,accel=tcg,usb=off -cpu Penryn,+osxsave,+xsave,+pdcm,+xtpr,+tm2,+est,+smx,+vmx,+ds_cpl,+monitor,+dtes64,+pbe,+tm,+ht,+ss,+acpi,+ds,+vme -m 2048 -realtime mlock=off -smp 1,sockets=1,cores=1,threads=1 -uuid 8142ee4c-ef56-4b61-8a0b-ecd82d21484f -smbios type=1,manufacturer=Fedora Project,product=OpenStack Nova,version=2013.2.2-1.fc20,serial=6050001e-8c00-00ac-818a-90e6ba2d11eb,uuid=8142ee4c-ef56-4b61-8a0b-ecd82d21484f -no-user-config -nodefaults -chardev socket,id=charmonitor,path=/var/lib/libvirt/qemu/instance-00000061.monitor,server,nowait -mon chardev=charmonitor,id=monitor,mode=control -rtc base=utc -no-shutdown -device piix3-usb-uhci,id=usb,bus=pci.0,addr=0x1.0x2 -drive file=/var/lib/nova/mnt/1c9688348ab38662e3ac8fb121077d34/volume-575be853-b104-458e-bc72-1785ef524416,if=none,id=drive-virtio-disk0,format=raw,serial=575be853-b104-458e-bc72-1785ef524416,cache=none -device virtio-blk-pci,scsi=off,bus=pci.0,addr=0x4,drive=drive-virtio-disk0,id=virtio-disk0,bootindex=1 -netdev tap,fd=24,id=hostnet0 -device virtio-net-pci,netdev=hostnet0,id=net0,mac=fa:16:3e:70:56:cc,bus=pci.0,addr=0x3 -chardev file,id=charserial0,path=/var/lib/nova/instances/8142ee4c-ef56-4b61-8a0b-ecd82d21484f/console.log -device isa-serial,chardev=charserial0,id=serial0 -chardev pty,id=charserial1 -device isa-serial,chardev=charserial1,id=serial1 -device usb-tablet,id=input0 -vnc 127.0.0.1:0 -k en-us -device cirrus-vga,id=video0,bus=pci.0,addr=0x2 -device virtio-balloon-pci,id=balloon0,bus=pci.0,addr=0x5

qemu      6330     1 44 16:49 ?        00:36:02 /usr/bin/qemu-system-x86_64 -name instance-0000005f -S -machine pc-i440fx-1.6,accel=tcg,usb=off -cpu Penryn,+osxsave,+xsave,+pdcm,+xtpr,+tm2,+est,+smx,+vmx,+ds_cpl,+monitor,+dtes64,+pbe,+tm,+ht,+ss,+acpi,+ds,+vme -m 2048 -realtime mlock=off -smp 1,sockets=1,cores=1,threads=1 -uuid 9566adec-9406-4c3e-bce5-109ecb8bcf6b -smbios type=1,manufacturer=Fedora Project,product=OpenStack Nova,version=2013.2.2-1.fc20,serial=6050001e-8c00-00ac-818a-90e6ba2d11eb,uuid=9566adec-9406-4c3e-bce5-109ecb8bcf6b -no-user-config -nodefaults -chardev socket,id=charmonitor,path=/var/lib/libvirt/qemu/instance-0000005f.monitor,server,nowait -mon chardev=charmonitor,id=monitor,mode=control -rtc base=utc -no-shutdown -device piix3-usb-uhci,id=usb,bus=pci.0,addr=0x1.0x2 -drive file=/var/lib/nova/mnt/1c9688348ab38662e3ac8fb121077d34/volume-9794bd45-8923-4f3e-a48f-fa1d62a964f8,if=none,id=drive-virtio-disk0,format=raw,serial=9794bd45-8923-4f3e-a48f-fa1d62a964f8,cache=none -device virtio-blk-pci,scsi=off,bus=pci.0,addr=0x4,drive=drive-virtio-disk0,id=virtio-disk0,bootindex=1 -netdev tap,fd=27,id=hostnet0 -device virtio-net-pci,netdev=hostnet0,id=net0,mac=fa:16:3e:50:84:72,bus=pci.0,addr=0x3 -chardev file,id=charserial0,path=/var/lib/nova/instances/9566adec-9406-4c3e-bce5-109ecb8bcf6b/console.log -device isa-serial,chardev=charserial0,id=serial0 -chardev pty,id=charserial1 -device isa-serial,chardev=charserial1,id=serial1 -device usb-tablet,id=input0 -vnc 127.0.0.1:1 -k en-us -device cirrus-vga,id=video0,bus=pci.0,addr=0x2 -incoming fd:24 -device virtio-balloon-pci,id=balloon0,bus=pci.0,addr=0x5

root     24713 24622  0 18:11 pts/4    00:00:00 grep --color=auto nova

[root@dallas2 ~]# ps -ef| grep neutron

neutron   1549     1  0 16:29 ?        00:00:53 /usr/bin/python /usr/bin/neutron-openvswitch-agent --config-file /usr/share/neutron/neutron-dist.conf --config-file /etc/neutron/neutron.conf --config-file /etc/neutron/plugins/openvswitch/ovs_neutron_plugin.ini --log-file /var/log/neutron/openvswitch-agent.log

root     24981 24622  0 18:12 pts/4    00:00:00 grep --color=auto neutron

Top at Compute node (192.168.1.140)

Runtime at Compute node (dallas2, 192.168.1.140)

 ******************************************************

Building Ubuntu 14.04 instance via cinder volume

******************************************************

[root@dallas1 ~(keystone_boris)]$ glance image-list
+--------------------------------------+---------------------+-------------+------------------+-----------+--------+
| ID                                   | Name                | Disk Format | Container Format | Size      | Status |
+--------------------------------------+---------------------+-------------+------------------+-----------+--------+
| 592faef8-308a-4438-867a-17adf685cde4 | CirrOS 31           | qcow2       | bare             | 13147648  | active |
| d0e90250-5814-4685-9b8d-65ec9daa7117 | Fedora 20 x86_64    | qcow2       | bare             | 214106112 | active |
| c2b7c3ed-e25d-44c4-a5e7-4e013c4a8b00 | Ubuntu 14.04        | qcow2       | bare             | 264176128 | active |
| 3e6eea8e-32e6-4373-9eb1-e04b8a3167f9 | Ubuntu Server 13.10 | qcow2       | bare             | 244777472 | active |
+--------------------------------------+---------------------+-------------+------------------+-----------+--------+
[root@dallas1 ~(keystone_boris)]$ cinder create --image-id c2b7c3ed-e25d-44c4-a5e7-4e013c4a8b00 --display_name UbuntuTrusty 5
+---------------------+--------------------------------------+
|       Property      |                Value                 |
+---------------------+--------------------------------------+
|     attachments     |                  []                  |
|  availability_zone  |                 nova                 |
|       bootable      |                false                 |
|      created_at     |      2014-03-10T06:35:39.873978      |
| display_description |                 None                 |
|     display_name    |             UbuntuTrusty             |
|          id         | 8bcc02a7-b9ba-4cd6-a6b9-0574889bf8d2 |
|       image_id      | c2b7c3ed-e25d-44c4-a5e7-4e013c4a8b00 |
|       metadata      |                  {}                  |
|         size        |                  5                   |
|     snapshot_id     |                 None                 |
|     source_volid    |                 None                 |
|        status       |               creating               |
|     volume_type     |                 None                 |
+---------------------+--------------------------------------+

[root@dallas1 ~(keystone_boris)]$ cinder list

+--------------------------------------+-----------+--------------+------+-------------+----------+--------------------------------------+
|                  ID                  |   Status  | Display Name | Size | Volume Type | Bootable |             Attached to              |
+--------------------------------------+-----------+--------------+------+-------------+----------+--------------------------------------+
| 56ceaaa8-c0ec-45f3-98a4-555c1231b34e |   in-use  |              |  5   |     None    |   true   | e29606c5-582f-4766-ae1b-52043a698743 |
| 575be853-b104-458e-bc72-1785ef524416 |   in-use  |              |  5   |     None    |   true   | 8142ee4c-ef56-4b61-8a0b-ecd82d21484f |
| 8bcc02a7-b9ba-4cd6-a6b9-0574889bf8d2 | available | UbuntuTrusty |  5   |     None    |   true   |                                      |
| 9794bd45-8923-4f3e-a48f-fa1d62a964f8 |   in-use  |              |  5   |     None    |   true   | 9566adec-9406-4c3e-bce5-109ecb8bcf6b |
+--------------------------------------+-----------+--------------+------+-------------+----------+--------------------------------------+
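A volume must reach `available` status (as `UbuntuTrusty` has above) before `nova boot` can attach it. The sketch below is not part of the original post: it is a hypothetical helper for scraping the Status column out of the `cinder list` table with awk, so a script can wait for the volume. The sample row and volume ID are copied from the transcript above; in live use you would pipe real `cinder list` output instead.

```shell
# Hypothetical helper, assuming the default `cinder list` table layout:
# pipe-delimited, field 2 is the volume ID, field 3 the status.
volume_status() {
    # $1 = volume ID, stdin = `cinder list` output
    awk -v id="$1" -F'|' '$2 ~ id { gsub(/ /, "", $3); print $3 }'
}

# Sample row captured from the transcript above
sample='| 8bcc02a7-b9ba-4cd6-a6b9-0574889bf8d2 | available | UbuntuTrusty |  5   |     None    |   true   |  |'

echo "$sample" | volume_status 8bcc02a7-b9ba-4cd6-a6b9-0574889bf8d2   # prints: available
```

Wrapped in a loop, this lets a script block until the volume is ready, e.g. `until [ "$(cinder list | volume_status $VOL_ID)" = available ]; do sleep 5; done`.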

[root@dallas1 ~(keystone_boris)]$  nova boot --flavor 2 --user-data=./myfile.txt --block_device_mapping vda=8bcc02a7-b9ba-4cd6-a6b9-0574889bf8d2:::0 UbuntuTR01

+--------------------------------------+----------------------------------------------------+
| Property                             | Value                                              |
+--------------------------------------+----------------------------------------------------+
| status                               | BUILD                                              |
| updated                              | 2014-03-10T06:40:14Z                               |
| OS-EXT-STS:task_state                | scheduling                                         |
| key_name                             | None                                               |
| image                                | Attempt to boot from volume - no image supplied    |
| hostId                               |                                                    |
| OS-EXT-STS:vm_state                  | building                                           |
| OS-SRV-USG:launched_at               | None                                               |
| flavor                               | m1.small                                           |
| id                                   | 0859e52d-c07b-4f56-ac79-2b37080d2843               |
| security_groups                      | [{u'name': u'default'}]                            |
| OS-SRV-USG:terminated_at             | None                                               |
| user_id                              | df4a984ce2f24848a6b84aaa99e296f1                   |
| name                                 | UbuntuTR01                                         |
| adminPass                            | L8VuhttJMbJf                                       |
| tenant_id                            | e896be65e94a4893b870bc29ba86d7eb                   |
| created                              | 2014-03-10T06:40:13Z                               |
| OS-DCF:diskConfig                    | MANUAL                                             |
| metadata                             | {}                                                 |
| os-extended-volumes:volumes_attached | [{u'id': u'8bcc02a7-b9ba-4cd6-a6b9-0574889bf8d2'}] |
| accessIPv4                           |                                                    |
| accessIPv6                           |                                                    |
| progress                             | 0                                                  |
| OS-EXT-STS:power_state               | 0                                                  |
| OS-EXT-AZ:availability_zone          | nova                                               |
| config_drive                         |                                                    |
+--------------------------------------+----------------------------------------------------+

[root@dallas1 ~(keystone_boris)]$ nova list

+--------------------------------------+------------+-----------+------------+-------------+-----------------------------+
| ID                                   | Name       | Status    | Task State | Power State | Networks                    |
+--------------------------------------+------------+-----------+------------+-------------+-----------------------------+
| 0859e52d-c07b-4f56-ac79-2b37080d2843 | UbuntuTR01 | ACTIVE    | None       | Running     | int=10.0.0.6                |
| 9566adec-9406-4c3e-bce5-109ecb8bcf6b | VF20RS007  | SUSPENDED | None       | Shutdown    | int=10.0.0.2, 192.168.1.101 |
| 8142ee4c-ef56-4b61-8a0b-ecd82d21484f | VF20RS012  | SUSPENDED | None       | Shutdown    | int=10.0.0.4, 192.168.1.102 |
| e29606c5-582f-4766-ae1b-52043a698743 | VF20RS016  | ACTIVE    | None       | Running     | int=10.0.0.5, 192.168.1.103 |
+--------------------------------------+------------+-----------+------------+-------------+-----------------------------+
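The floating-IP steps that follow need the new instance's UUID, which `nova list` shows next to the name. Not from the original post: a hedged sketch of a hypothetical helper that looks up an instance ID by name in that table, using the same awk-on-pipes approach; the sample row is copied from the output above.

```shell
# Hypothetical helper, assuming the default `nova list` layout:
# pipe-delimited, field 2 is the ID, field 3 the Name.
instance_id() {
    # $1 = instance name, stdin = `nova list` output
    awk -v name="$1" -F'|' '{ id=$2; gsub(/ /, "", id); nm=$3; gsub(/ /, "", nm);
                              if (nm == name) print id }'
}

# Sample row captured from the transcript above
sample='| 0859e52d-c07b-4f56-ac79-2b37080d2843 | UbuntuTR01 | ACTIVE    | None       | Running     | int=10.0.0.6                |'

echo "$sample" | instance_id UbuntuTR01   # prints: 0859e52d-c07b-4f56-ac79-2b37080d2843
```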

[root@dallas1 ~(keystone_boris)]$ neutron floatingip-create ext

Created a new floatingip:

+---------------------+--------------------------------------+
| Field               | Value                                |
+---------------------+--------------------------------------+
| fixed_ip_address    |                                      |
| floating_ip_address | 192.168.1.104                        |
| floating_network_id | 0ed406bf-3552-4036-9006-440f3e69618e |
| id                  | 9498ac85-82b0-468a-b526-64a659080ab9 |
| port_id             |                                      |
| router_id           |                                      |
| tenant_id           | e896be65e94a4893b870bc29ba86d7eb     |
+---------------------+--------------------------------------+

[root@dallas1 ~(keystone_boris)]$ neutron port-list --device-id 0859e52d-c07b-4f56-ac79-2b37080d2843

+--------------------------------------+------+-------------------+---------------------------------------------------------------------------------+
| id                                   | name | mac_address       | fixed_ips                                                                       |
+--------------------------------------+------+-------------------+---------------------------------------------------------------------------------+
| 1f02fe57-d844-4fd8-a325-646f27163c8b |      | fa:16:3e:3f:a3:d4 | {"subnet_id": "2e838119-3e2e-46e8-b7cc-6d00975046f2", "ip_address": "10.0.0.6"} |
+--------------------------------------+------+-------------------+---------------------------------------------------------------------------------+

[root@dallas1 ~(keystone_boris)]$ neutron floatingip-associate  9498ac85-82b0-468a-b526-64a659080ab9 1f02fe57-d844-4fd8-a325-646f27163c8b

Associated floatingip 9498ac85-82b0-468a-b526-64a659080ab9
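The association above needed two IDs gathered by hand: the floating IP ID from `neutron floatingip-create` and the VM's port ID from `neutron port-list --device-id`. Not part of the original post: a sketch of a hypothetical helper that extracts the port ID so the two commands can be chained in a script; the sample row is taken from the port-list output above, with the fixed_ips column truncated to `{...}`.

```shell
# Hypothetical helper: print the first UUID found in `neutron port-list`
# output, which for a single-NIC instance is the port ID in the first column.
# (MAC addresses like fa:16:3e:3f:a3:d4 do not match this pattern.)
first_uuid() {
    grep -oE '[0-9a-f]{8}(-[0-9a-f]{4}){3}-[0-9a-f]{12}' | head -n 1
}

# Sample row captured from the transcript above (fixed_ips column truncated)
sample='| 1f02fe57-d844-4fd8-a325-646f27163c8b |      | fa:16:3e:3f:a3:d4 | {...} |'

echo "$sample" | first_uuid   # prints: 1f02fe57-d844-4fd8-a325-646f27163c8b
```

In live use the whole flow would then be roughly `neutron floatingip-associate <floatingip-id> $(neutron port-list --device-id <vm-id> | first_uuid)`.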

[root@dallas1 ~(keystone_boris)]$ ping 192.168.1.104

PING 192.168.1.104 (192.168.1.104) 56(84) bytes of data.
64 bytes from 192.168.1.104: icmp_seq=1 ttl=63 time=2.35 ms
64 bytes from 192.168.1.104: icmp_seq=2 ttl=63 time=2.56 ms
64 bytes from 192.168.1.104: icmp_seq=3 ttl=63 time=1.17 ms
64 bytes from 192.168.1.104: icmp_seq=4 ttl=63 time=4.08 ms
64 bytes from 192.168.1.104: icmp_seq=5 ttl=63 time=2.19 ms
^C


Up-to-date procedure for creating Cinder ThinLVM-based cloud instances (F20, Ubuntu 13.10) on a Fedora 20 Havana Compute Node.

March 4, 2014

  This post follows up on https://bderzhavets.wordpress.com/2014/01/24/setting-up-two-physical-node-openstack-rdo-havana-neutron-gre-on-fedora-20-boxes-with-both-controller-and-compute-nodes-each-one-having-one-ethernet-adapter/

   In my experience, `cinder create --image-id <Image_id> --display_name ...` followed by `nova boot --flavor 2 --user-data=./myfile.txt --block_device_mapping vda=<Volume_id>:::0 <VM_NAME>` no longer works, giving an error :-

$ tail -f /var/log/nova/compute.log  reports :-

 2014-03-03 13:28:43.646 1344 WARNING nova.virt.libvirt.driver [req-1bd6630e-b799-4d78-b702-f06da5f1464b df4a984ce2f24848a6b84aaa99e296f1 e896be65e94a4893b870bc29b a86d7eb] [instance: f621815f-3805-4f52-a878-9040c6a4af53] File injection into a b