Basic predicates technique per Helen Mironchick: a challenging problem #18 of DEL(x) type (EGE Informatics 2019), shared with the community

April 27, 2019

Denote by DEL(n, m) the statement "a natural number n is divisible without remainder by a positive integer m". For what smallest natural number A is the formula
(DEL(x, 35) ⊕ DEL(x, 56)) → (¬DEL(x, A) ∧ DEL(x, 14)) ∨ ¬DEL(x, 4)
identically true (that is, takes the value 1 for any natural value of the variable x)?

Below D(t) abbreviates DEL(x, t), "+" stands for OR, "*" for AND, and D(14) = D(2)*D(7). Rewriting the implication (a → b = ¬a + b, ¬(a ⊕ b) = (a ≡ b)), the requirement becomes

(D(35) ≡ D(56)) + ¬D(A)*D(2)*D(7) + ¬D(4) ≡ 1

The factor D(2) can be dropped, since wherever D(2) is false, ¬D(4) is already true:

D(35)*D(56) + ¬D(35)*¬D(56) + ¬D(A)*D(7) + ¬D(4) ≡ 1
D(35)*D(56) = D(7)*D(5)*D(8)

¬D(35)*¬D(56) = (¬D(5) + ¬D(7))*(¬D(8) + ¬D(7)) =
= ¬D(5)*¬D(8) + ¬D(7)

D(7)*D(5)*D(8) + ¬D(5)*¬D(8) + ¬D(7) +
+ ¬D(A)*D(7) + ¬D(4) ≡ 1

D(5)*D(8) + ¬D(5)*¬D(8) +
+ ¬D(7) + ¬D(A) + ¬D(4) ≡ 1

D(40) + ¬D(5)*¬D(8) + ¬D(28) + ¬D(A) ≡ 1

Thus A(min) = 40
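The answer can be double-checked by brute force (a verification sketch, not part of the original derivation): for each candidate A, test the formula over a finite range of x. The range 1..10000 is large enough here, since any counterexample is a multiple of 56 or 140 well below that bound.

```python
# Brute-force check: find the smallest A for which
# (DEL(x,35) xor DEL(x,56)) -> (not DEL(x,A) and DEL(x,14)) or not DEL(x,4)
# holds for every natural x in the tested range.

def formula_holds(A, x):
    div = lambda m: x % m == 0          # DEL(x, m)
    premise = div(35) != div(56)        # exclusive or of the two divisibilities
    conclusion = (not div(A) and div(14)) or not div(4)
    return (not premise) or conclusion  # implication

def min_A(x_max=10_000):
    A = 1
    while not all(formula_holds(A, x) for x in range(1, x_max)):
        A += 1
    return A

print(min_A())
```

Running this confirms A(min) = 40, matching the algebraic derivation above.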

Solution of one non-trivial system of boolean equations

March 3, 2019

I am thankful to Helen Mironchick for providing this problem to me, as well as the core idea of building graph-based diagrams for solving a system without transition pairs (usually handled by the mapping method).

((x1 ≡ x3) => (x2 ≡ x4)) => x5 = 1
((x5 ≡ x7) => (x6 ≡ x8)) => x10 = 1
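The chunk above ends at the system itself, so as a cross-check the number of solutions can be counted by brute force. A sketch, assuming the second equation is parenthesized like the first (((x5≡x7)=>(x6≡x8))=>x10) and enumerating only the nine variables that actually appear in the system:

```python
from itertools import product

def imp(a, b):
    """Boolean implication a => b."""
    return (not a) or b

count = 0
# the nine variables appearing in the system: x1..x8 and x10
for x1, x2, x3, x4, x5, x6, x7, x8, x10 in product((0, 1), repeat=9):
    eq1 = imp(imp(x1 == x3, x2 == x4), x5)
    eq2 = imp(imp(x5 == x7, x6 == x8), x10)
    count += eq1 and eq2

print(count)
```

Each equation taken alone has 20 solutions over its five variables; the enumeration above counts the joint solutions of the pair.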

Solution of the equation E(M)⊕E(N) => E(A)*¬E(M & N) ≡ 1 via the calculus of basic predicates by E.A.Mironchick

December 31, 2018

In general we follow the guidelines of the technique developed per the link mentioned above.

Quoting Helen A. Mironchick:
Let Et (x) be a predicate whose truth set is all x for which x & t ≠ 0.
If t is a power of two, then such a predicate will be called basic.
The basic predicate describes (fixes) a single unit in the binary notation.
Further, for brevity, the predicate Et (x) will be denoted by E(t);
we will also denote the truth set of this predicate by E(t).

(quoting ends)

Denote by {X} the binary representation of a natural number X.
The core statement of the post below is :-

Let R, M, N be natural numbers, and let R be the minimal number
satisfying the condition {M OR N} = {R OR (M & N)},
where "OR" is bitwise disjunction and "&" is bitwise conjunction.
Then the smallest A satisfying the equation
E(M)⊕E(N) => E(A)*¬E(M & N) ≡ 1 equals R.

First we intend to show that E(M) v E(N) = E(R) v E(M&N).
Notice also that everywhere below "*" stands for "∧".

Consider the expansions of E(M) and E(N) into logical sums
of basic predicates. Every pair of equal basic predicates
collapses into one, and the logical sum of such predicate pairs
obviously gives E(M&N). The logical sum of all those
remaining is exactly E(R). It remains to apply De Morgan's
laws to
¬(E(M) v E(N)) = ¬(E(R) v E(M & N))
and get the required equality below
¬E(M)*¬E(N) = ¬E(R)*¬E(M&N)  (1)
This equality has a familiar counterpart in the Bitwise2 technique. Since ¬E(N) = Z(N),
Z(M)*Z(N) = Z(M OR N) = Z(R)*Z(M & N)
See :- Solving the equation ¬Z(M)⊕¬Z(N) => ¬A*Z(M&N) ≡ 1 in the Bitwise2 technique  https://informatics-ege.blogspot.com/2018/12/zmn-am-1-bitwise2.html
Thus ¬E(M)*¬E(N) = ¬E(R)*¬E(M & N) can also be obtained that way.
Convert the original equation as follows
E(M)⊕E(N) => E(A)*¬E(M&N) ≡ 1
(E(M)≡E(N)) v E(A)*¬E(M&N) ≡ 1
¬E(M)*¬E(N) v E(M)*E(N) v E(A)*¬E(M&N) ≡ 1
From the decompositions of M and N into basic predicates,
define the numbers REST-M and REST-N such that each of them
has no unit bits in common with M & N; in doing so we
obtain
{REST-M} + {M & N} = {M}
{REST-N} + {M & N} = {N}
Consequently
E(M) = E(REST-M) v E(M&N)
E(N) = E(REST-N) v E(M&N)
Apply formula (1) to ¬E(M)*¬E(N):-
¬E(R)*¬E(M&N) v (E(REST-M) v E(M & N))*(E(REST-N) v E(M&N)) v
v E(A)*¬E(M&N) ≡ 1
¬E(R)*¬E(M&N) v E(REST-M)*E(REST-N) v E (M&N) v
v E(A)*¬E(M&N) ≡ 1
¬E(R) v E(REST-M)*E(REST-N) v E(M&N) v E(A)≡ 1

Thus A(min) = R
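The statement is easy to verify numerically. Note that the minimal R satisfying {M OR N} = {R OR (M & N)} is just M OR N with the bits of M & N cleared, which works out to M XOR N. A brute-force sketch (not from the original post) checking A(min) = R for a few pairs:

```python
def E(t, x):
    """Basic-predicate machinery: E(t) is true at x iff x & t != 0."""
    return (x & t) != 0

def min_A(M, N, x_max=256):
    """Smallest A with  E(M) xor E(N)  =>  E(A) and not E(M & N)  for all x."""
    for A in range(1, (M | N) + 1):
        ok = all(
            (not (E(M, x) != E(N, x))) or (E(A, x) and not E(M & N, x))
            for x in range(1, x_max)
        )
        if ok:
            return A

for M, N in [(3, 5), (35, 56), (12, 10)]:
    R = M ^ N   # minimal R with {M OR N} = {R OR (M & N)}
    print(M, N, min_A(M, N), R)
```

A finite x range suffices here because any smaller A fails already on a single-bit x = 2^b with b a bit of M XOR N.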

Solution of one system of equations in boolean variables of the form x1 => x2 => … => x6 = 1 && y1 => y2 => … => y6 = 1 && x1 => y1 via the Mapping method

June 20, 2017

Original system looks like :-
x1 => x2 => x3 => x4 => x5 => x6 =1
y1 => y2 => y3 => y4 => y5 => y6 =1
x1 => y1 =1

Down here we follow the approach originally developed in
http://www.loiro.ru/files/news/news_943_etodotobrajeniya-mea-2013-10.pdf
Build the basic diagram and define the function F( ) to apply the Mapping method
suggested by E. Mironchick.

Now calculate the number of solutions of the equation
x1 => x2 => x3 => x4 => x5 => x6 = 1 starting with x1 = 1

Calculate the number of solutions of the equation
y1 => y2 => y3 => y4 => y5 => y6 = 1 starting with y1 = 0

So, we intend to calculate the number of {x},{y} tuples breaking the
third equation and afterwards subtract the amount obtained from 43^2.

Keeping in mind that each chain equation has 43 solutions in total, 21 of them with the leading variable equal to 1 and 22 with it equal to 0,

the final answer is :- Count = 43^2 – 21*22 = 1387
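A brute-force cross-check of this count (a sketch; note that in the EGE convention the chain x1 => x2 => … => x6 is evaluated left to right, i.e. ((((x1 => x2) => x3) => x4) => x5) => x6, which is exactly what yields 43 solutions per chain):

```python
from itertools import product

def chain(bits):
    """Left-associative implication chain b1 => b2 => ... => bn."""
    acc = bits[0]
    for b in bits[1:]:
        acc = (not acc) or b
    return acc

xs = [v for v in product((0, 1), repeat=6) if chain(v)]  # x-chain solutions
ys = xs                                                   # y-chain: same 43
count = sum(1 for x in xs for y in ys if (not x[0]) or y[0])  # x1 => y1
print(len(xs), count)
```

This prints 43 solutions per chain and 1387 joint solutions, agreeing with 43^2 – 21*22.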

Solution of one system of equations in boolean variables via bitmasks, in regard to training for the Unified State Examination in Informatics (Russia)

April 16, 2017

In brief, bitmasks are supposed to be a core tool for solving systems of equations in Boolean variables, versus the method suggested at
https://inf-ege.sdamgia.ru/test?theme=264
for task 11, which is pretty similar to the sample analyzed below

*************************************
*************************************
Determine the total number of tuples
{x1,…,x9,y1,…,y9} which, and only which,
satisfy the system :-

((x1 ≡ y1) → (x2 ≡ y2)) ∧ (x1 → x2) ∧ (y1 → y2) = 1
((x2 ≡ y2) → (x3 ≡ y3)) ∧ (x2 → x3) ∧ (y2 → y3) = 1
. . .

((x8 ≡ y8) → (x9 ≡ y9)) ∧ (x8 → x9) ∧ (y8 → y9) = 1

Consider the truncated system :-

(x1 → x2) ∧ (y1 → y2) = 1
(x2 → x3) ∧ (y2 → y3) = 1
. . .

(x8 → x9) ∧ (y8 → y9) = 1

Now build the well-known bitmasks for {x} and {y}

x1 x2 x3 x4 x5 x6 x7 x8 x9
—————————————-
1   1   1   1   1   1   1   1   1
0   1   1   1   1   1   1   1   1
0   0   1   1   1   1   1   1   1
0   0   0   1   1   1   1   1   1
0   0   0   0   1   1   1   1   1
0   0   0   0   0   1   1   1   1
0   0   0   0   0   0   1   1   1
0   0   0   0   0   0   0   1   1
0   0   0   0   0   0   0   0   1
0   0   0   0   0   0   0   0   0

y1 y2 y3 y4 y5 y6 y7 y8 y9
—————————————–
1   1   1   1   1   1   1   1   1
0   1   1   1   1   1   1   1   1
0   0   1   1   1   1   1   1   1
0   0   0   1   1   1   1   1   1
0   0   0   0   1   1   1   1   1
0   0   0   0   0   1   1   1   1
0   0   0   0   0   0   1   1   1
0   0   0   0   0   0   0   1   1
0   0   0   0   0   0   0   0   1
0   0   0   0   0   0   0   0   0

Below we name the first matrix "X" and the second "Y"

For j = 2 to j = 9 consider the following two concatenations:
"X" -> "Y" and "Y" -> "X"

```
First one :-

X                           Y
---------------------       ---------------------
| . . .             |       | . . .             |
---------------------       ---------------------
| j                 |       | j+1               |
---------------------       ---------------------
                            | j+2               |
                            ---------------------
                            | . . .             |
                            ---------------------
                            | 10                |
                            ---------------------
```

Record {j} from X is paired with records {j+1, j+2, . . ., 10} from Y
```
and vice versa the second one :-

Y                           X
---------------------       ---------------------
| . . .             |       | . . .             |
---------------------       ---------------------
| j                 |       | j+1               |
---------------------       ---------------------
                            | j+2               |
                            ---------------------
                            | . . .             |
                            ---------------------
                            | 10                |
                            ---------------------
```

Record {j} from Y is paired with records {j+1, j+2, . . ., 10} from X

We'll get a total of 2*(10-j) tuples making the boolean value of the implication

(x[j-1] ≡ y[j-1]) → (x[j] ≡ y[j]) equal to FALSE

**************************************
For instance when j=3 we get
**************************************

x1 x2 x3 x4 x5 x6 x7 x8 x9
—————————————-
1   1   1   1   1   1   1   1   1
0   1   1   1   1   1   1   1   1
0   0   1   1   1   1   1   1   1   =>
0   0   0   1   1   1   1   1   1
0   0   0   0   1   1   1   1   1
0   0   0   0   0   1   1   1   1
0   0   0   0   0   0   1   1   1
0   0   0   0   0   0   0   1   1
0   0   0   0   0   0   0   0   1
0   0   0   0   0   0   0   0   0

y1 y2 y3 y4 y5 y6 y7 y8 y9
—————————————–
1   1   1   1   1   1   1   1   1
0   1   1   1   1   1   1   1   1
0   0   1   1   1   1   1   1   1
0   0   0   1   1   1   1   1   1 <=
0   0   0   0   1   1   1   1   1 <=
0   0   0   0   0   1   1   1   1 <=
0   0   0   0   0   0   1   1   1 <=
0   0   0   0   0   0   0   1   1 <=
0   0   0   0   0   0   0   0   1 <=
0   0   0   0   0   0   0   0   0 <=

Vice Versa Set :-

y1 y2 y3 y4 y5 y6 y7 y8 y9
—————————————–
1   1   1   1   1   1   1   1   1
0   1   1   1   1   1   1   1   1
0   0   1   1   1   1   1   1   1 =>
0   0   0   1   1   1   1   1   1
0   0   0   0   1   1   1   1   1
0   0   0   0   0   1   1   1   1
0   0   0   0   0   0   1   1   1
0   0   0   0   0   0   0   1   1
0   0   0   0   0   0   0   0   1
0   0   0   0   0   0   0   0   0

x1 x2 x3 x4 x5 x6 x7 x8 x9
—————————————-
1   1   1   1   1   1   1   1   1
0   1   1   1   1   1   1   1   1
0   0   1   1   1   1   1   1   1
0   0   0   1   1   1   1   1   1 <=
0   0   0   0   1   1   1   1   1 <=
0   0   0   0   0   1   1   1   1 <=
0   0   0   0   0   0   1   1   1 <=
0   0   0   0   0   0   0   1   1 <=
0   0   0   0   0   0   0   0   1 <=
0   0   0   0   0   0   0   0   0 <=

****************************************************************************
So when j=3 we have 2*7 = 14 tuples where x2 ≡ y2 is True
and x3 ≡ y3 is False. So (x2 ≡ y2) → (x3 ≡ y3) is actually 1 -> 0,
which is False by definition.
****************************************************************************

This is a sign that the set of tuples generated for each j from {2, 3, 4, …, 9}
should be removed from the 100 total solutions of the truncated system of boolean
equations.

Now calculate :-

s := 0 ;
for j := 2 to 10 do
begin
s := s + (10 - j) ;
end ;
s := 2 * s ;
writeln (s) ;

Finally, we get s=72

The total number of tuples obtained via the Cartesian product of X and Y equals 100. So the number of solutions of the original system is 100 – 72 = 28
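The answer 28 can also be confirmed by directly enumerating all 2^18 tuples of the original (non-truncated) system, a brute-force sketch:

```python
from itertools import product

def imp(a, b):
    """Boolean implication a => b."""
    return (not a) or b

count = 0
for bits in product((0, 1), repeat=18):
    x, y = bits[:9], bits[9:]
    # the eight equations of the original system, i = 1..8 (0-based here)
    ok = all(
        imp(x[i] == y[i], x[i + 1] == y[i + 1])
        and imp(x[i], x[i + 1])
        and imp(y[i], y[i + 1])
        for i in range(8)
    )
    count += ok
print(count)
```

The enumeration agrees with the bitmask argument: 100 tuples satisfy the truncated system, of which 72 break one of the equivalence implications, leaving 28.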

I appreciate the courtesy provided by informatik "BU".
However, not everybody behaves as nicely as "BU" always does.

TripleO QuickStart functionality and recent commit Merge “move the undercloud deploy role to quickstart-extras for composability”

January 3, 2017

##############################
UPDATE  01/04/2017 11:07 AM EST
###############################

Fixed in upstream :-
commit e2e73b94bd88a3f9cc19925a59cbd12ff6172060
Merge: b6dbf6a 6a05cf5
Author: Jenkins
Date:   Wed Jan 4 15:31:59 2017 +0000
Merge “Run extras playbook by default”
commit b6dbf6a084ddc08086c7087af85b575bc7d43799
Merge: e0493a2 7528970

############################
Following commit merged  master
############################

commit 6a05cf5c47f7b46eb1565c910ba9c90ea5f089e4
Author: Sagi Shnaidman
Date:   Tue Dec 6 16:01:30 2016 +0100
Run extras playbook by default
For developer purposes we need scripts for overcloud are ready
in home dir after undercloud install. Now all the
undercloud-scripts and overcloud-scripts tagged tasks are in extras
roles, so we need to run extras playbook by default to get them
Change-Id: I3e216b21dac5a9086374fda9182a9be1cbe75a4f

#################################
END UPDATE
################################

Straightforwardly following https://github.com/openstack/tripleo-quickstart

==> Deploying without instructions

```
$ bash quickstart.sh -p quickstart-extras.yml \
  -r quickstart-extras-requirements.txt \
  --tags all $VIRTHOST
```

You may choose to execute an end to end deployment without displaying the instructions and scripts provided by default. Using the `--tags all` flag will instruct quickstart to provision the environment and deploy both the undercloud and overcloud. Additionally a validation test will be executed to ensure the overcloud is functional.

<==>

*************************************************************************
However, after cloning https://github.com/openstack/tripleo-quickstart and reverting the several most recent merge commits to master
**************************************************************************

$ cd tripleo-quickstart
[boris@fedora24wks tripleo-quickstart]$ ./revert.sh
+ git revert -m 1 --no-commit b6dbf6a084ddc08086c7087af85b575bc7d43799
+ git revert -m 1 --no-commit e0493a24dff0a535a3be644eb565eacbe765c59d
+ git revert -m 1 --no-commit 9dd2eb77e0bacc8497aa91c2fc54b0e64a3745f1
+ git revert -m 1 --no-commit 6fea2c037e831738cd59eef61d4073b9771bf51b
+ git commit -m 'Reverting is done'
[master ffc105a] Reverting is done

Committer: boris
You can suppress this message by setting them explicitly. Run the

git config --global --edit
After doing this, you may fix the identity used for this commit with:
git commit --amend --reset-author

15 files changed, 640 insertions(+), 108 deletions(-)
delete mode 100644 config/general_config/containers_minimal.yml
create mode 100644 roles/tripleo/undercloud/defaults/main.yml
create mode 100644 roles/tripleo/undercloud/meta/main.yml
create mode 100644 roles/tripleo/undercloud/templates/undercloud-install.sh.j2
create mode 100644 roles/tripleo/undercloud/templates/undercloud.conf.j2

******************************************************************************
In particular, un-merging from the master branch the following commits
******************************************************************************

1. Move the undercloud deploy role to quickstart-extras for composability
```In an effort to make more of the tripleo deployment ci more composable
it has been discussed to break out the undercloud deployment into it's
own role.  There are examples where additional configuration is needed
prior to the undercloud installation such as dpdk, and installing in
other ci environments.
This patch moves the undercloud deployment from the quickstart.yml
playbook to the quickstart-extras.yml playbook```

2. 7528970a78545e68da795d91cccb9ab3449e589f

Fix for quickstart.sh requirements

```The correct change did *not* land in
https://review.openstack.org/#/c/410757```

******************************************
This does allow the deployment to run successfully :-
******************************************

[boris@fedora24wks tripleo-quickstart]$ bash quickstart.sh -R newton --config config/general_config/ha.yml -p quickstart-extras.yml -r quickstart-extras-requirements.txt $VIRTHOST

New python executable in /home/boris/.quickstart/bin/python2
Also creating executable in /home/boris/.quickstart/bin/python
Installing setuptools, pip, wheel…done.
Requirement already up-to-date: pip in /home/boris/.quickstart/lib/python2.7/site-packages
Cloning tripleo-quickstart repository…
Cloning into '/home/boris/.quickstart/tripleo-quickstart'…
remote: Counting objects: 5741, done.
remote: Compressing objects: 100% (2/2), done.
remote: Total 5741 (delta 0), reused 0 (delta 0), pack-reused 5739
Receiving objects: 100% (5741/5741), 914.60 KiB | 686.00 KiB/s, done.
Resolving deltas: 100% (2977/2977), done.
Checking connectivity… done.
Fetching origin
~/.quickstart/tripleo-quickstart ~/.quickstart/tripleo-quickstart

Installed /home/boris/.quickstart/.eggs/pbr-1.10.0-py2.7.egg
[pbr] Generating ChangeLog
running install
running build
running install_data
creating /home/boris/.quickstart/usr
creating /home/boris/.quickstart/usr/local
creating /home/boris/.quickstart/usr/local/share
creating /home/boris/.quickstart/usr/local/share/tripleo-quickstart
creating /home/boris/.quickstart/usr/local/share/tripleo-quickstart/roles
creating /home/boris/.quickstart/usr/local/share/tripleo-quickstart/roles/libvirt
creating /home/boris/.quickstart/usr/local/share/tripleo-quickstart/roles/libvirt/teardown
creating /home/boris/.quickstart/usr/local/share/tripleo-quickstart/roles/libvirt/teardown/user
creating /home/boris/.quickstart/usr/local/share/tripleo-quickstart/roles/tripleo-inventory
creating /home/boris/.quickstart/usr/local/share/tripleo-quickstart/roles/provision
creating /home/boris/.quickstart/usr/local/share/tripleo-quickstart/roles/provision/teardown
creating /home/boris/.quickstart/usr/local/share/tripleo-quickstart/roles/parts
creating /home/boris/.quickstart/usr/local/share/tripleo-quickstart/roles/parts/kvm
creating /home/boris/.quickstart/usr/local/share/tripleo-quickstart/roles/libvirt/setup
creating /home/boris/.quickstart/usr/local/share/tripleo-quickstart/roles/libvirt/setup/overcloud
creating /home/boris/.quickstart/usr/local/share/tripleo-quickstart/roles/libvirt/teardown/nodes
creating /home/boris/.quickstart/usr/local/share/tripleo-quickstart/roles/tripleo
creating /home/boris/.quickstart/usr/local/share/tripleo-quickstart/roles/provision/local
creating /home/boris/.quickstart/usr/local/share/tripleo-quickstart/roles/provision/remote
creating /home/boris/.quickstart/usr/local/share/tripleo-quickstart/roles/provision/remote/meta
copying roles/provision/remote/meta/main.yml -> /home/boris/.quickstart/usr/local/share/tripleo-quickstart/roles/provision/remote/meta
creating /home/boris/.quickstart/usr/local/share/tripleo-quickstart/roles/libvirt/setup/overcloud/meta
copying roles/libvirt/setup/overcloud/meta/main.yml -> /home/boris/.quickstart/usr/local/share/tripleo-quickstart/roles/libvirt/setup/overcloud/meta
creating /home/boris/.quickstart/usr/local/share/tripleo-quickstart/roles/environment
creating /home/boris/.quickstart/usr/local/share/tripleo-quickstart/roles/environment/setup
creating /home/boris/.quickstart/usr/local/share/tripleo-quickstart/roles/convert-image
creating /home/boris/.quickstart/usr/local/share/tripleo-quickstart/roles/fetch-images
creating /home/boris/.quickstart/usr/local/share/tripleo-quickstart/roles/libvirt/setup/undercloud
creating /home/boris/.quickstart/usr/local/share/tripleo-quickstart/roles/libvirt/setup/undercloud/files
copying roles/libvirt/setup/undercloud/files/get-undercloud-ip.sh -> /home/boris/.quickstart/usr/local/share/tripleo-quickstart/roles/libvirt/setup/undercloud/files
creating /home/boris/.quickstart/usr/local/share/tripleo-quickstart/roles/overcloud
creating /home/boris/.quickstart/usr/local/share/tripleo-quickstart/roles/provision/support_check
creating /home/boris/.quickstart/usr/local/share/tripleo-quickstart/roles/provision/support_check/meta
copying roles/provision/support_check/meta/main.yml -> /home/boris/.quickstart/usr/local/share/tripleo-quickstart/roles/provision/support_check/meta
creating /home/boris/.quickstart/usr/local/share/tripleo-quickstart/test_plugins
copying test_plugins/equalto.py -> /home/boris/.quickstart/usr/local/share/tripleo-quickstart/test_plugins/
creating /home/boris/.quickstart/usr/local/share/tripleo-quickstart/roles/libvirt/setup/user
creating /home/boris/.quickstart/usr/local/share/tripleo-quickstart/roles/libvirt/setup/user/meta
copying roles/libvirt/setup/user/meta/main.yml -> /home/boris/.quickstart/usr/local/share/tripleo-quickstart/roles/libvirt/setup/user/meta
creating /home/boris/.quickstart/playbooks
copying playbooks/build-images-and-quickstart.yml -> /home/boris/.quickstart/playbooks/
copying playbooks/libvirt-teardown.yml -> /home/boris/.quickstart/playbooks/
copying playbooks/tripleo-roles.yml -> /home/boris/.quickstart/playbooks/
copying playbooks/quickstart-extras.yml -> /home/boris/.quickstart/playbooks/
copying playbooks/noop.yml -> /home/boris/.quickstart/playbooks/
copying playbooks/teardown-provision.yml -> /home/boris/.quickstart/playbooks/
copying playbooks/provision.yml -> /home/boris/.quickstart/playbooks/
copying playbooks/quickstart.yml -> /home/boris/.quickstart/playbooks/
copying playbooks/teardown-nodes.yml -> /home/boris/.quickstart/playbooks/
copying playbooks/build-images.yml -> /home/boris/.quickstart/playbooks/
copying playbooks/teardown.yml -> /home/boris/.quickstart/playbooks/
copying playbooks/libvirt-setup.yml -> /home/boris/.quickstart/playbooks/
copying playbooks/teardown-environment.yml -> /home/boris/.quickstart/playbooks/
creating /home/boris/.quickstart/usr/local/share/tripleo-quickstart/roles/environment/vars
copying roles/environment/vars/redhat.yml -> /home/boris/.quickstart/usr/local/share/tripleo-quickstart/roles/environment/vars
copying roles/environment/vars/main.yml -> /home/boris/.quickstart/usr/local/share/tripleo-quickstart/roles/environment/vars
copying roles/environment/vars/fedora.yml -> /home/boris/.quickstart/usr/local/share/tripleo-quickstart/roles/environment/vars
copying roles/environment/vars/centos-7.yml -> /home/boris/.quickstart/usr/local/share/tripleo-quickstart/roles/environment/vars
creating /home/boris/.quickstart/usr/local/share/tripleo-quickstart/roles/environment/setup/meta
copying roles/environment/setup/meta/main.yml -> /home/boris/.quickstart/usr/local/share/tripleo-quickstart/roles/environment/setup/meta
creating /home/boris/.quickstart/usr/local/share/tripleo-quickstart/roles/parts/kvm/defaults
copying roles/parts/kvm/defaults/main.yml -> /home/boris/.quickstart/usr/local/share/tripleo-quickstart/roles/parts/kvm/defaults
creating /home/boris/.quickstart/usr/local/share/tripleo-quickstart/roles/libvirt/setup/meta
copying roles/libvirt/setup/meta/main.yml -> /home/boris/.quickstart/usr/local/share/tripleo-quickstart/roles/libvirt/setup/meta
creating /home/boris/.quickstart/usr/local/share/tripleo-quickstart/roles/tripleo/defaults
copying roles/tripleo/defaults/main.yml -> /home/boris/.quickstart/usr/local/share/tripleo-quickstart/roles/tripleo/defaults
creating /home/boris/.quickstart/usr/local/share/tripleo-quickstart/roles/convert-image/templates
copying roles/convert-image/templates/convert_image.sh.j2 -> /home/boris/.quickstart/usr/local/share/tripleo-quickstart/roles/convert-image/templates
creating /home/boris/.quickstart/config
creating /home/boris/.quickstart/config/general_config
copying config/general_config/containers_minimal.yml -> /home/boris/.quickstart/config/general_config/
copying config/general_config/minimal.yml -> /home/boris/.quickstart/config/general_config/
copying config/general_config/ha_ipv6.yml -> /home/boris/.quickstart/config/general_config/
copying config/general_config/ha.yml -> /home/boris/.quickstart/config/general_config/
copying config/general_config/minimal_pacemaker.yml -> /home/boris/.quickstart/config/general_config/
copying config/general_config/ceph.yml -> /home/boris/.quickstart/config/general_config/
copying config/general_config/minimal_no_netiso.yml -> /home/boris/.quickstart/config/general_config/
copying config/general_config/ha_big.yml -> /home/boris/.quickstart/config/general_config/
creating /home/boris/.quickstart/usr/local/share/tripleo-quickstart/roles/provision/user
creating /home/boris/.quickstart/usr/local/share/tripleo-quickstart/roles/provision/user/meta
copying roles/provision/user/meta/main.yml -> /home/boris/.quickstart/usr/local/share/tripleo-quickstart/roles/provision/user/meta
creating /home/boris/.quickstart/config/release
copying config/release/master.yml -> /home/boris/.quickstart/config/release/
copying config/release/master-tripleo-ci.yml -> /home/boris/.quickstart/config/release/
copying config/release/liberty.yml -> /home/boris/.quickstart/config/release/
copying config/release/mitaka.yml -> /home/boris/.quickstart/config/release/
copying config/release/newton.yml -> /home/boris/.quickstart/config/release/
creating /home/boris/.quickstart/usr/local/share/tripleo-quickstart/roles/libvirt/meta
copying roles/libvirt/meta/main.yml -> /home/boris/.quickstart/usr/local/share/tripleo-quickstart/roles/libvirt/meta
creating /home/boris/.quickstart/usr/local/share/tripleo-quickstart/roles/tripleo-inventory/defaults
copying roles/tripleo-inventory/defaults/main.yml -> /home/boris/.quickstart/usr/local/share/tripleo-quickstart/roles/tripleo-inventory/defaults
creating /home/boris/.quickstart/usr/local/share/tripleo-quickstart/roles/environment/teardown
creating /home/boris/.quickstart/usr/local/share/tripleo-quickstart/roles/environment/teardown/meta
copying roles/environment/teardown/meta/main.yml -> /home/boris/.quickstart/usr/local/share/tripleo-quickstart/roles/environment/teardown/meta
creating /home/boris/.quickstart/usr/local/share/tripleo-quickstart/roles/common
creating /home/boris/.quickstart/usr/local/share/tripleo-quickstart/roles/common/defaults
copying roles/common/defaults/main.yml -> /home/boris/.quickstart/usr/local/share/tripleo-quickstart/roles/common/defaults
creating /home/boris/.quickstart/usr/local/share/tripleo-quickstart/roles/libvirt/defaults
copying roles/libvirt/defaults/main.yml -> /home/boris/.quickstart/usr/local/share/tripleo-quickstart/roles/libvirt/defaults
creating /home/boris/.quickstart/config/release/stable
copying config/release/stable/mitaka.yml -> /home/boris/.quickstart/config/release/stable
copying config/release/stable/newton.yml -> /home/boris/.quickstart/config/release/stable
creating /home/boris/.quickstart/usr/local/share/tripleo-quickstart/roles/overcloud/meta
copying roles/overcloud/meta/main.yml -> /home/boris/.quickstart/usr/local/share/tripleo-quickstart/roles/overcloud/meta
creating /home/boris/.quickstart/usr/local/share/tripleo-quickstart/roles/provision/remote/templates
copying roles/provision/remote/templates/libvirt.pkla.j2 -> /home/boris/.quickstart/usr/local/share/tripleo-quickstart/roles/provision/remote/templates
creating /home/boris/.quickstart/usr/local/share/tripleo-quickstart/roles/provision/local/meta
copying roles/provision/local/meta/main.yml -> /home/boris/.quickstart/usr/local/share/tripleo-quickstart/roles/provision/local/meta
creating /home/boris/.quickstart/usr/local/share/tripleo-quickstart/roles/parts/libvirt
creating /home/boris/.quickstart/usr/local/share/tripleo-quickstart/roles/tripleo-inventory/templates
copying roles/tripleo-inventory/templates/ssh_config.j2 -> /home/boris/.quickstart/usr/local/share/tripleo-quickstart/roles/tripleo-inventory/templates
copying roles/tripleo-inventory/templates/ssh_config_localhost.j2 -> /home/boris/.quickstart/usr/local/share/tripleo-quickstart/roles/tripleo-inventory/templates
copying roles/tripleo-inventory/templates/inventory.j2 -> /home/boris/.quickstart/usr/local/share/tripleo-quickstart/roles/tripleo-inventory/templates
copying roles/tripleo-inventory/templates/ssh_config_no_undercloud.j2 -> /home/boris/.quickstart/usr/local/share/tripleo-quickstart/roles/tripleo-inventory/templates
copying roles/tripleo-inventory/templates/get-overcloud-nodes.py.j2 -> /home/boris/.quickstart/usr/local/share/tripleo-quickstart/roles/tripleo-inventory/templates
creating /home/boris/.quickstart/usr/local/share/tripleo-quickstart/roles/provision/teardown/meta
copying roles/provision/teardown/meta/main.yml -> /home/boris/.quickstart/usr/local/share/tripleo-quickstart/roles/provision/teardown/meta
creating /home/boris/.quickstart/config/release/trunk
copying config/release/trunk/liberty.yml -> /home/boris/.quickstart/config/release/trunk
copying config/release/trunk/mitaka.yml -> /home/boris/.quickstart/config/release/trunk
copying config/release/trunk/newton.yml -> /home/boris/.quickstart/config/release/trunk
creating /home/boris/.quickstart/usr/local/share/tripleo-quickstart/roles/parts/libvirt/defaults
copying roles/parts/libvirt/defaults/main.yml -> /home/boris/.quickstart/usr/local/share/tripleo-quickstart/roles/parts/libvirt/defaults
creating /home/boris/.quickstart/usr/local/share/tripleo-quickstart/roles/libvirt/setup/undercloud/defaults
copying roles/libvirt/setup/undercloud/defaults/main.yml -> /home/boris/.quickstart/usr/local/share/tripleo-quickstart/roles/libvirt/setup/undercloud/defaults
creating /home/boris/.quickstart/usr/local/share/tripleo-quickstart/roles/environment/setup/templates
copying roles/environment/setup/templates/network.xml.j2 -> /home/boris/.quickstart/usr/local/share/tripleo-quickstart/roles/environment/setup/templates
creating /home/boris/.quickstart/usr/local/share/tripleo-quickstart/roles/tripleo/undercloud
creating /home/boris/.quickstart/usr/local/share/tripleo-quickstart/roles/libvirt/setup/overcloud/templates
copying roles/libvirt/setup/overcloud/templates/baremetalvm.xml.j2 -> /home/boris/.quickstart/usr/local/share/tripleo-quickstart/roles/libvirt/setup/overcloud/templates
copying roles/libvirt/setup/overcloud/templates/volume_pool.xml.j2 -> /home/boris/.quickstart/usr/local/share/tripleo-quickstart/roles/libvirt/setup/overcloud/templates
copying roles/libvirt/setup/overcloud/templates/instackenv.json.j2 -> /home/boris/.quickstart/usr/local/share/tripleo-quickstart/roles/libvirt/setup/overcloud/templates
creating /home/boris/.quickstart/usr/local/share/tripleo-quickstart/roles/tripleo/meta
copying roles/tripleo/meta/main.yml -> /home/boris/.quickstart/usr/local/share/tripleo-quickstart/roles/tripleo/meta
creating /home/boris/.quickstart/usr/local/share/tripleo-quickstart/roles/tripleo-inventory/tests
copying roles/tripleo-inventory/tests/test.yml -> /home/boris/.quickstart/usr/local/share/tripleo-quickstart/roles/tripleo-inventory/tests
copying roles/tripleo-inventory/tests/inventory -> /home/boris/.quickstart/usr/local/share/tripleo-quickstart/roles/tripleo-inventory/tests
creating /home/boris/.quickstart/usr/local/share/tripleo-quickstart/roles/tripleo-inventory/tests/playbooks
copying roles/tripleo-inventory/tests/playbooks/quickstart-usb.yml -> /home/boris/.quickstart/usr/local/share/tripleo-quickstart/roles/tripleo-inventory/tests/playbooks
creating /home/boris/.quickstart/usr/local/share/tripleo-quickstart/roles/fetch-images/meta
copying roles/fetch-images/meta/main.yml -> /home/boris/.quickstart/usr/local/share/tripleo-quickstart/roles/fetch-images/meta
creating /home/boris/.quickstart/config/release/centosci
copying config/release/centosci/master.yml -> /home/boris/.quickstart/config/release/centosci
copying config/release/centosci/liberty.yml -> /home/boris/.quickstart/config/release/centosci
copying config/release/centosci/mitaka-cloudsig-testing.yml -> /home/boris/.quickstart/config/release/centosci
copying config/release/centosci/mitaka.yml -> /home/boris/.quickstart/config/release/centosci
copying config/release/centosci/newton.yml -> /home/boris/.quickstart/config/release/centosci
copying config/release/centosci/master-current-tripleo.yml -> /home/boris/.quickstart/config/release/centosci
copying config/release/centosci/newton-cloudsig-stable.yml -> /home/boris/.quickstart/config/release/centosci
copying config/release/centosci/master-consistent.yml -> /home/boris/.quickstart/config/release/centosci
copying config/release/centosci/newton-consistent.yml -> /home/boris/.quickstart/config/release/centosci
copying config/release/centosci/mitaka-cloudsig-stable.yml -> /home/boris/.quickstart/config/release/centosci
copying config/release/centosci/liberty-consistent.yml -> /home/boris/.quickstart/config/release/centosci
copying config/release/centosci/newton-cloudsig-testing.yml -> /home/boris/.quickstart/config/release/centosci
copying config/release/centosci/mitaka-consistent.yml -> /home/boris/.quickstart/config/release/centosci
creating /home/boris/.quickstart/usr/local/share/tripleo-quickstart/roles/libvirt/setup/undercloud/templates
copying roles/libvirt/setup/undercloud/templates/inject_gating_repo.sh.j2 -> /home/boris/.quickstart/usr/local/share/tripleo-quickstart/roles/libvirt/setup/undercloud/templates
copying roles/libvirt/setup/undercloud/templates/undercloudvm.xml.j2 -> /home/boris/.quickstart/usr/local/share/tripleo-quickstart/roles/libvirt/setup/undercloud/templates
copying roles/libvirt/setup/undercloud/templates/ssh.config.j2 -> /home/boris/.quickstart/usr/local/share/tripleo-quickstart/roles/libvirt/setup/undercloud/templates
copying roles/libvirt/setup/undercloud/templates/update_image.sh.j2 -> /home/boris/.quickstart/usr/local/share/tripleo-quickstart/roles/libvirt/setup/undercloud/templates
creating /home/boris/.quickstart/usr/local/share/tripleo-quickstart/library
copying library/generate_macs.py -> /home/boris/.quickstart/usr/local/share/tripleo-quickstart/library/
creating /home/boris/.quickstart/usr/local/share/tripleo-quickstart/roles/provision/meta
copying roles/provision/meta/main.yml -> /home/boris/.quickstart/usr/local/share/tripleo-quickstart/roles/provision/meta
creating /home/boris/.quickstart/usr/local/share/tripleo-quickstart/roles/provision/defaults
copying roles/provision/defaults/main.yml -> /home/boris/.quickstart/usr/local/share/tripleo-quickstart/roles/provision/defaults
creating /home/boris/.quickstart/usr/local/share/tripleo-quickstart/roles/environment/meta
copying roles/environment/meta/main.yml -> /home/boris/.quickstart/usr/local/share/tripleo-quickstart/roles/environment/meta
creating /home/boris/.quickstart/usr/local/share/tripleo-quickstart/roles/libvirt/teardown/meta
copying roles/libvirt/teardown/meta/main.yml -> /home/boris/.quickstart/usr/local/share/tripleo-quickstart/roles/libvirt/teardown/meta
running install_egg_info
running egg_info
creating /home/boris/.quickstart/tripleo_quickstart.egg-info
writing pbr to /home/boris/.quickstart/tripleo_quickstart.egg-info/pbr.json
writing requirements to /home/boris/.quickstart/tripleo_quickstart.egg-info/requires.txt
writing /home/boris/.quickstart/tripleo_quickstart.egg-info/PKG-INFO
writing top-level names to /home/boris/.quickstart/tripleo_quickstart.egg-info/top_level.txt
[pbr] Processing SOURCES.txt
writing manifest file '/home/boris/.quickstart/tripleo_quickstart.egg-info/SOURCES.txt'
[pbr] In git context, generating filelist from git
warning: no files found matching 'AUTHORS'
warning: no files found matching 'ChangeLog'
warning: no previously-included files matching '*.pyc' found anywhere in distribution
writing manifest file '/home/boris/.quickstart/tripleo_quickstart.egg-info/SOURCES.txt'
Copying /home/boris/.quickstart/tripleo_quickstart.egg-info to /home/boris/.quickstart/lib/python2.7/site-packages/tripleo_quickstart-1.0.1.dev217-py2.7.egg-info
running install_scripts

******************************************************************************
Reverting the commits results in the downloads below and the `setup.py install` runs, which set up the Ansible environment needed for a successful `quickstart.sh` run
******************************************************************************

Collecting ansible==2.2.0.0 (from -r requirements.txt (line 1))

100% |################################| 2.4MB 5.9MB/s
Collecting netaddr>=0.7.18 (from -r requirements.txt (line 2))
100% |################################| 1.5MB 3.8MB/s
Collecting pbr>=1.6 (from -r requirements.txt (line 3))
100% |################################| 102kB 5.9MB/s
Requirement already satisfied: setuptools>=11.3 in /home/boris/.quickstart/lib/python2.7/site-packages (from -r requirements.txt (line 4))
Collecting tripleo-quickstart-extras from git+https://git.openstack.org/openstack/tripleo-quickstart-extras/#egg=tripleo-quickstart-extras (from -r quickstart-extras-requirements.txt (line 1))
Cloning https://git.openstack.org/openstack/tripleo-quickstart-extras/ to /tmp/pip-build-QpkA1O/tripleo-quickstart-extras
Collecting paramiko (from ansible==2.2.0.0->-r requirements.txt (line 1))
100% |################################| 174kB 5.0MB/s
Collecting jinja2 (from ansible==2.2.0.0->-r requirements.txt (line 1))
100% |################################| 266kB 4.0MB/s
Collecting PyYAML (from ansible==2.2.0.0->-r requirements.txt (line 1))
100% |################################| 256kB 3.8MB/s
Collecting pycrypto>=2.6 (from ansible==2.2.0.0->-r requirements.txt (line 1))
100% |################################| 450kB 5.5MB/s
Collecting pyasn1>=0.1.7 (from paramiko->ansible==2.2.0.0->-r requirements.txt (line 1))
Collecting cryptography>=1.1 (from paramiko->ansible==2.2.0.0->-r requirements.txt (line 1))
100% |################################| 430kB 5.9MB/s
Collecting MarkupSafe (from jinja2->ansible==2.2.0.0->-r requirements.txt (line 1))
Collecting idna>=2.0 (from cryptography>=1.1->paramiko->ansible==2.2.0.0->-r requirements.txt (line 1))
100% |################################| 61kB 8.1MB/s
Collecting six>=1.4.1 (from cryptography>=1.1->paramiko->ansible==2.2.0.0->-r requirements.txt (line 1))
Collecting enum34 (from cryptography>=1.1->paramiko->ansible==2.2.0.0->-r requirements.txt (line 1))
Collecting ipaddress (from cryptography>=1.1->paramiko->ansible==2.2.0.0->-r requirements.txt (line 1))
Collecting cffi>=1.4.1 (from cryptography>=1.1->paramiko->ansible==2.2.0.0->-r requirements.txt (line 1))
100% |################################| 389kB 5.4MB/s
Collecting pycparser (from cffi>=1.4.1->cryptography>=1.1->paramiko->ansible==2.2.0.0->-r requirements.txt (line 1))
100% |################################| 235kB 6.1MB/s
100% |################################| 235kB 6.1MB/s
Installing collected packages: pyasn1, idna, six, enum34, ipaddress, pycparser, cffi, cryptography, paramiko, MarkupSafe, jinja2, PyYAML, pycrypto, ansible, netaddr, pbr, tripleo-quickstart-extras

Running setup.py install for pycparser … done
Running setup.py install for cryptography … done
Running setup.py install for MarkupSafe … done
Running setup.py install for PyYAML … done
Running setup.py install for pycrypto … done
Running setup.py install for ansible … done
Running setup.py install for tripleo-quickstart-extras … done
Successfully installed MarkupSafe-0.23 PyYAML-3.12 ansible-2.2.0.0 cffi-1.9.1 cryptography-1.7.1 enum34-1.1.6 idna-2.2 ipaddress-1.0.17 jinja2-2.8.1 netaddr-0.7.18 paramiko-2.1.1 pbr-1.10.0 pyasn1-0.1.9 pycparser-2.17 pycrypto-2.6.1 six-1.10.0 tripleo-quickstart-extras-0.0.1.dev528
~/.quickstart/tripleo-quickstart
—————————————————————————-
|                                ,   .   ,                                 |
|                                )-_”’_-(                                 |
|                               ./ o\ /o \.                                |
|                              . \__/ \__/ .                               |
|                              …   V   …                               |
|                              … – – – …                               |
|                               .   – –   .                                |
|                                `-…..-´                                 |
|   ____         ____      ____        _      _        _             _     |
|  / __ \       / __ \    / __ \      (_)    | |      | |           | |    |
| | |  | | ___ | |  | |  | |  | |_   _ _  ___| | _____| |_ __ _ _ __| |_   |
| | |  | |/ _ \| |  | |  | |  | | | | | |/ __| |/ / __| __/ _` | ‘__| __|  |
| | |__| | | |__| |  | |__| | | | (__|   <\__ \ |_|(_| | |  | |_   |
|  \____/ \___/ \____/    \___\_\\__,_|_|\___|_|\_\___/\__\__,_|_|   \__|  |
|                                                                          |
|                                                                          |
—————————————————————————-

Installing OpenStack newton on host 192.168.0.74
Using directory /home/boris/.quickstart for a local working directory
+ export ANSIBLE_CONFIG=/home/boris/.quickstart/tripleo-quickstart/ansible.cfg
+ ANSIBLE_CONFIG=/home/boris/.quickstart/tripleo-quickstart/ansible.cfg
+ export ANSIBLE_INVENTORY=/home/boris/.quickstart/hosts
+ ANSIBLE_INVENTORY=/home/boris/.quickstart/hosts
+ source /home/boris/.quickstart/tripleo-quickstart/ansible_ssh_env.sh
++ export OPT_WORKDIR=/home/boris/.quickstart
++ OPT_WORKDIR=/home/boris/.quickstart
++ export SSH_CONFIG=/home/boris/.quickstart/ssh.config.ansible
++ SSH_CONFIG=/home/boris/.quickstart/ssh.config.ansible
++ touch /home/boris/.quickstart/ssh.config.ansible
++ export 'ANSIBLE_SSH_ARGS=-F /home/boris/.quickstart/ssh.config.ansible'
++ ANSIBLE_SSH_ARGS='-F /home/boris/.quickstart/ssh.config.ansible'
+ '[' 0 = 0 ']'
+ rm -f /home/boris/.quickstart/hosts
+ '[' 192.168.0.74 = localhost ']'
+ '[' '' = 1 ']'
+ VERBOSITY=vv
+ ansible-playbook -vv /home/boris/.quickstart/playbooks/quickstart-extras.yml -e @config/general_config/ha.yml -e ansible_python_interpreter=/usr/bin/python -e @/home/boris/.quickstart/config/release/newton.yml -e local_working_dir=/home/boris/.quickstart -e virthost=192.168.0.74 -t untagged,provision,environment,undercloud-scripts,overcloud-scripts,undercloud-install,undercloud-post-install,teardown-nodes
Using /home/boris/.quickstart/tripleo-quickstart/ansible.cfg as config file
[WARNING]: provided hosts list is empty, only localhost is available

. . . . . .
PLAY RECAP
*********************************************************************
192.168.0.74               : ok=107  changed=36   unreachable=0    failed=0
localhost                  : ok=19   changed=8    unreachable=0    failed=0
undercloud                 : ok=31   changed=22   unreachable=0    failed=0

Monday 02 January 2017  13:03:48 +0300 (0:00:00.716)       0:32:39.725 ********
=================================================
undercloud-deploy : Install the undercloud ---------------------------- 993.80s
overcloud-prep-images : Prepare the overcloud images for deploy ------- 329.70s
setup/undercloud : Perform selinux relabel on undercloud image -------- 124.89s
setup/undercloud : Resize undercloud image (call virt-resize) ---------- 67.62s
setup/undercloud : Upload undercloud volume to storage pool ------------ 55.47s
setup/undercloud : Copy instackenv.json to appliance ------------------- 36.71s
fetch-images : Get qcow2 image from cache ------------------------------ 30.23s
overcloud-prep-flavors : Prepare the scripts for overcloud flavors ----- 26.48s
setup/undercloud : Get undercloud vm ip address ------------------------ 12.76s
parts/libvirt : Install packages for libvirt ---------------------------- 8.58s
setup/overcloud : Create overcloud vm storage --------------------------- 7.58s
setup/overcloud : Define overcloud vms ---------------------------------- 7.04s
parts/libvirt : If ipxe-roms-qemu is not installed, install a known good version --- 6.98s
setup/undercloud : Inject undercloud ssh public key to appliance -------- 6.77s
teardown/nodes : Delete baremetal vm storage ---------------------------- 6.58s
teardown/nodes : Check overcloud vms ------------------------------------ 6.56s
setup/overcloud : Check if overcloud volumes exist ---------------------- 6.50s
overcloud-prep-network : Prepare the network-isolation required networks on the undercloud --- 6.18s
undercloud-deploy : Create undercloud configuration --------------------- 5.27s
setup ------------------------------------------------------------------- 5.05s
------------------------------------------------------------------------------
+ set +x
[boris@fedora24wks tripleo-quickstart]$ ssh -F /home/boris/.quickstart/ssh.config.ansible undercloud
Warning: Permanently added '192.168.0.74' (ECDSA) to the list of known hosts.
Warning: Permanently added 'undercloud' (ECDSA) to the list of known hosts.
Last login: Mon Jan  2 10:03:44 2017 from gateway
[stack@undercloud ~]$ . stackrc
[stack@undercloud ~]$ ls -l
total 1625036
-rwxr-xr-x. 1 stack stack        770 Jan  2 09:56 containers-default-parameters.yaml
-rw-rw-r--. 1 stack stack      22051 Jan  2 09:34 instackenv.json
-rw-r--r--. 1 root  root   355820146 Dec 29 09:00 ironic-python-agent.initramfs
-rwxr-xr-x. 1 root  root     5393328 Dec 29 09:00 ironic-python-agent.kernel
-rw-r--r--. 1 stack stack        474 Jan  2 09:56 network-environment.yaml
-rwxr-xr-x. 1 stack stack        208 Jan  2 10:03 neutronl3ha.yaml
-rw-rw-r--. 1 stack stack          0 Jan  2 09:56 overcloud_custom_tht_script.log
-rwxr-xr-x. 1 stack stack        293 Jan  2 09:56 overcloud-custom-tht-script.sh
-rwxr-xr-x. 1 stack stack       1012 Jan  2 10:03 overcloud-deploy-post.sh
-rwxr-xr-x. 1 stack stack       2900 Jan  2 10:03 overcloud-deploy.sh
-rw-r--r--. 1 root  root    46801971 Dec 29 09:01 overcloud-full.initrd
-rw-r--r--. 1 root  root  1250309120 Dec 29 09:01 overcloud-full.qcow2
-rwxr-xr-x. 1 root  root     5393328 Dec 29 09:01 overcloud-full.vmlinuz
-rwxr-xr-x. 1 stack stack       3932 Jan  2 09:56 overcloud-prep-containers.sh
-rw-rw-r--. 1 stack stack       7336 Jan  2 10:03 overcloud_prep_flavors.log
-rwxr-xr-x. 1 stack stack       3672 Jan  2 10:02 overcloud-prep-flavors.sh
-rw-rw-r--. 1 stack stack       5039 Jan  2 10:02 overcloud_prep_images.log
-rwxr-xr-x. 1 stack stack        746 Jan  2 09:57 overcloud-prep-images.sh
-rw-rw-r--. 1 stack stack       1315 Jan  2 10:03 overcloud_prep_network.log
-rwxr-xr-x. 1 stack stack        861 Jan  2 10:03 overcloud-prep-network.sh
-rw-------. 1 stack stack        351 Jan  2 09:39 quickstart-hieradata-overrides.yaml
-rw-------. 1 stack stack        587 Jan  2 09:55 stackrc
-rw-------. 1 stack stack       7868 Jan  2 09:39 undercloud.conf
-rw-rw-r--. 1 stack stack     191197 Jan  2 09:56 undercloud_install.log
-rwxr-xr-x. 1 stack stack        151 Jan  2 09:39 undercloud-install.sh
-rw-rw-r--. 1 stack stack       1650 Jan  2 09:40 undercloud-passwords.conf
-rwxr-xr-x. 1 stack stack        494 Jan  2 09:57 upload_images_to_local_registry.py

[stack@undercloud ~]$ ./overcloud-deploy.sh
+ source /home/stack/stackrc
++ NOVA_VERSION=1.1
++ export NOVA_VERSION
++ OS_AUTH_URL=https://192.168.24.2:13000/v2.0
++ PYTHONWARNINGS='ignore:Certificate has no, ignore:A true SSLContext object is not available'
++ export OS_AUTH_URL
++ export PYTHONWARNINGS
++ COMPUTE_API_VERSION=1.1
++ OS_BAREMETAL_API_VERSION=1.15
++ OS_NO_CACHE=True
++ OS_CLOUDNAME=undercloud
++ OS_IMAGE_API_VERSION=1
++ export OS_TENANT_NAME
++ export COMPUTE_API_VERSION
++ export OS_BAREMETAL
++ export OS_NO_CACHE
++ export OS_CLOUDNAME
++ export OS_IMAGE_API_VERSION
+ true
++ openstack hypervisor stats show -c count -f value
+ count=6
+ '[' 6 -gt 0 ']'
+ break
+ openstack overcloud deploy --templates /usr/share/openstack-tripleo-heat-templates
--libvirt-type qemu --control-flavor oooq_control --compute-flavor oooq_compute
--ceph-storage-flavor oooq_ceph --block-storage-flavor oooq_blockstorage
--swift-storage-flavor oooq_objectstorage --timeout 90
-e /usr/share/openstack-tripleo-heat-templates/environments/network-isolation.yaml
-e /usr/share/openstack-tripleo-heat-templates/environments/net-single-nic-with-vlans.yaml
-e /home/stack/network-environment.yaml
-e /usr/share/openstack-tripleo-heat-templates/environments/puppet-pacemaker.yaml
-e /home/stack/neutronl3ha.yaml -e /usr/share/openstack-tripleo-heat-templates/environments/low-memory-usage.yaml
--validation-warnings-fatal --control-scale 3 --compute-scale 1 --ceph-storage-scale 2
--neutron-network-type vxlan --neutron-tunnel-types vxlan --ntp-server pool.ntp.org
-e /usr/share/openstack-tripleo-heat-templates/environments/storage-environment.yaml
Removing the current plan files
Started Mistral Workflow. Execution ID: 017ae06f-2b09-4a90-8022-6d5fd2215674
Plan updated
Deploying templates in the directory /tmp/tripleoclient-TvEeVV/tripleo-heat-templates
Started Mistral Workflow. Execution ID: 7c5a7903-4950-47fe-bffe-8b5e51e0809e
2017-01-02 10:50:42Z [overcloud]: CREATE_IN_PROGRESS  Stack CREATE started
2017-01-02 10:50:42Z [overcloud.HorizonSecret]: CREATE_IN_PROGRESS  state changed
2017-01-02 10:50:43Z [overcloud.PcsdPassword]: CREATE_IN_PROGRESS  state changed
2017-01-02 10:50:43Z [overcloud.RabbitCookie]: CREATE_IN_PROGRESS  state changed
2017-01-02 10:50:43Z [overcloud.Networks]: CREATE_IN_PROGRESS  state changed
2017-01-02 10:50:44Z [overcloud.ServiceNetMap]: CREATE_IN_PROGRESS  state changed
2017-01-02 10:50:44Z [overcloud.HeatAuthEncryptionKey]: CREATE_IN_PROGRESS  state changed
2017-01-02 10:50:44Z [overcloud.Networks]: CREATE_IN_PROGRESS  Stack CREATE started
2017-01-02 10:50:44Z [overcloud.Networks.InternalNetwork]: CREATE_IN_PROGRESS  state changed
2017-01-02 10:50:44Z [overcloud.MysqlRootPassword]: CREATE_IN_PROGRESS  state changed
2017-01-02 10:50:45Z [overcloud.ServiceNetMap]: CREATE_COMPLETE  state changed
. . . . .
2017-01-02 11:42:00Z [overcloud.AllNodesDeploySteps.ControllerPostPuppet.ControllerPostPuppetRestart]: CREATE_IN_PROGRESS  state changed
2017-01-02 11:43:00Z [overcloud.AllNodesDeploySteps.ControllerPostPuppet.ControllerPostPuppetRestart]: CREATE_COMPLETE  state changed
2017-01-02 11:43:01Z [overcloud.AllNodesDeploySteps.ControllerPostPuppet]: CREATE_COMPLETE  Stack CREATE completed successfully
2017-01-02 11:43:02Z [overcloud.AllNodesDeploySteps.ControllerPostPuppet]: CREATE_COMPLETE  state changed
2017-01-02 11:43:02Z [overcloud.AllNodesDeploySteps]: CREATE_COMPLETE  Stack CREATE completed successfully
2017-01-02 11:43:03Z [overcloud.AllNodesDeploySteps]: CREATE_COMPLETE  state changed
2017-01-02 11:43:03Z [overcloud]: CREATE_COMPLETE  Stack CREATE completed successfully
Stack overcloud CREATE_COMPLETE
Started Mistral Workflow. Execution ID: 634338d8-1424-4e31-868b-a4826127a0aa
Overcloud Endpoint: http://10.0.0.7:5000/v2.0
Overcloud Deployed
+ heat stack-list
+ grep -q CREATE_FAILED
WARNING (shell) "heat stack-list" is deprecated, please use "openstack stack list" instead
```
[stack@undercloud ~]$ nova list
+--------------------------------------+-------------------------+--------+------------+-------------+------------------------+
| ID                                   | Name                    | Status | Task State | Power State | Networks               |
+--------------------------------------+-------------------------+--------+------------+-------------+------------------------+
| ecd3870d-83c4-46c8-a7a0-24742f6f22a8 | overcloud-cephstorage-0 | ACTIVE | -          | Running     | ctlplane=192.168.24.6  |
| de9a1166-771e-4a50-b087-23915e97d64f | overcloud-cephstorage-1 | ACTIVE | -          | Running     | ctlplane=192.168.24.16 |
| dc3b86a2-769e-4616-8a17-fcc4ad0db83d | overcloud-controller-0  | ACTIVE | -          | Running     | ctlplane=192.168.24.13 |
| 8290ffbe-3c8b-4d2d-ae0a-bfc0c2e5bd01 | overcloud-controller-1  | ACTIVE | -          | Running     | ctlplane=192.168.24.18 |
| d05025e8-179e-4d66-a15f-1d33ecd661b1 | overcloud-controller-2  | ACTIVE | -          | Running     | ctlplane=192.168.24.10 |
| 4c3c5717-0868-4d93-bd5e-e1c418cd39ac | overcloud-novacompute-0 | ACTIVE | -          | Running     | ctlplane=192.168.24.8  |
+--------------------------------------+-------------------------+--------+------------+-------------+------------------------+
```
[stack@undercloud ~]$ cat overcloudrc
export OS_NO_CACHE=True
export OS_CLOUDNAME=overcloud
export OS_AUTH_URL=http://10.0.0.7:5000/v2.0
export NOVA_VERSION=1.1
export COMPUTE_API_VERSION=1.1
export no_proxy=,10.0.0.7,192.168.24.7
export PYTHONWARNINGS="ignore:Certificate has no, ignore:A true SSLContext object is not available"
The authenticity of host '192.168.24.13 (192.168.24.13)' can't be established.
ECDSA key fingerprint is b2:a5:15:6f:ce:04:39:df:37:3a:eb:81:af:d5:68:c9.
Are you sure you want to continue connecting (yes/no)? yes
Warning: Permanently added '192.168.24.13' (ECDSA) to the list of known hosts.
[root@overcloud-controller-0 ~]# vi overcloudrc
[root@overcloud-controller-0 ~]# . overcloudrc
[root@overcloud-controller-0 ~]# pcs status
Cluster name: tripleo_cluster
Stack: corosync
Current DC: overcloud-controller-2 (version 1.1.15-11.el7_3.2-e174ec8) - partition with quorum
Last updated: Mon Jan  2 11:45:56 2017        Last change: Mon Jan  2 11:41:49 2017 by root via cibadmin on overcloud-controller-1

3 nodes and 19 resources configured

Online: [ overcloud-controller-0 overcloud-controller-1 overcloud-controller-2 ]

Full list of resources:

Clone Set: haproxy-clone [haproxy]
Started: [ overcloud-controller-0 overcloud-controller-1 overcloud-controller-2 ]
Master/Slave Set: galera-master [galera]
Masters: [ overcloud-controller-0 overcloud-controller-1 overcloud-controller-2 ]
Clone Set: rabbitmq-clone [rabbitmq]
Started: [ overcloud-controller-0 overcloud-controller-1 overcloud-controller-2 ]
Master/Slave Set: redis-master [redis]
Masters: [ overcloud-controller-0 ]
Slaves: [ overcloud-controller-1 overcloud-controller-2 ]
openstack-cinder-volume    (systemd:openstack-cinder-volume):    Started overcloud-controller-0

Daemon Status:
corosync: active/enabled
pacemaker: active/enabled
pcsd: active/enabled
[root@overcloud-controller-0 ~]# ceph status
cluster b2826c88-d0d1-11e6-91bc-00ff8b05e286
health HEALTH_OK
monmap e1: 3 mons at {overcloud-controller-0=172.16.1.5:6789/0,overcloud-controller-1=172.16.1.11:6789/0,overcloud-controller-2=172.16.1.6:6789/0}
election epoch 8, quorum 0,1,2 overcloud-controller-0,overcloud-controller-2,overcloud-controller-1
osdmap e15: 2 osds: 2 up, 2 in
flags sortbitwise
pgmap v144: 224 pgs, 6 pools, 0 bytes data, 0 objects
16964 MB used, 85411 MB / 102375 MB avail

224 active+clean

TripleO QuickStart KSM vs instack-virt-setup deploying RDO Newton HA Overcloud

October 12, 2016

=================
UPDATE 10/17/2016
=================
I enabled KSM and KSMTUNED on the CentOS 7.2 VIRTHOST running instack-virt-setup two days ago, alongside an instack-virt-setup HA overcloud deployment. The stack user's .bashrc on the virthost:
[stack@Server72Centos ~]$ cat .bashrc
# .bashrc
# Source global definitions
if [ -f /etc/bashrc ]; then
. /etc/bashrc
fi

# Uncomment the following line if you don't like systemctl's auto-paging feature:
# export SYSTEMD_PAGER=
export NODE_DIST=centos7
export NODE_CPU=2
export NODE_MEM=6500
export NODE_COUNT=4
export UNDERCLOUD_NODE_CPU=2
export UNDERCLOUD_NODE_MEM=8000
export NODE_DISK=45
export UNDERCLOUD_NODE_DISK=35
export FS_TYPE=ext4
# User specific aliases and functions
export LIBVIRT_DEFAULT_URI="qemu:///system"

So far I see no negative side effects from the ksmd daemon running on the VIRTHOST (HA overcloud built via instack-virt-setup). I also note that the performance problems previously caused by around 3.5 GB of swap utilization have been eliminated. The overcloud KVM nodes demonstrate performance close to TripleO QuickStart (in the meantime unavailable for RDO Newton -> trunk/newton).
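For readers who want to quantify the effect, here is a minimal sketch (my own helper, not part of the tripleo-quickstart or instack-virt-setup tooling) that estimates how much memory KSM is currently merging on the virthost. It assumes the standard sysfs interface under /sys/kernel/mm/ksm and falls back gracefully when the kernel does not expose it:

```shell
#!/bin/sh
# Hypothetical helper: estimate memory saved by KSM from the
# pages_sharing counter in sysfs (approximate figure).
KSM=/sys/kernel/mm/ksm
saved_mb=0
if [ -r "$KSM/pages_sharing" ]; then
    shared=$(cat "$KSM/pages_sharing")          # pages backed by a shared KSM page
    page_kb=$(( $(getconf PAGE_SIZE) / 1024 ))  # page size in KB (usually 4)
    saved_mb=$(( shared * page_kb / 1024 ))
    echo "KSM is currently saving approximately ${saved_mb} MB"
else
    echo "KSM sysfs interface not available on this kernel"
fi
```

On a virthost carrying several near-identical CentOS guests one would expect a substantial figure once all overcloud VMs are running; a value of 0 simply means ksmd has not merged anything yet.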

============
END UPDATE
=============

The post below is meant to demonstrate that enabling KSM with QuickStart provides significant relief on a 32 GB VIRTHOST versus much the same deployment described in a previous draft, http://lxer.com/module/newswire/view/234740/index.html. The deployment procedure for TripleO QuickStart is currently a bit more complicated than it was for the Mitaka stable release. The instructions below provide a step-by-step guide, normally not required in a QuickStart environment, for the undercloud VM after you have logged in.

Clone the repo below :-
[jon@fedora24wks release]$ git clone https://github.com/openstack/tripleo-quickstart
[jon@fedora24wks release]$ cd tripleo* ; cd ./config/release

**********************************************
Now verify that newton.yml is here.
**********************************************
[jon@fedora24wks release]$ cat newton.yml
release: newton
undercloud_image_url: http://buildlogs.centos.org/centos/7/cloud/x86_64/tripleo_images/newton/delorean/undercloud.qcow2
overcloud_image_url: http://buildlogs.centos.org/centos/7/cloud/x86_64/tripleo_images/newton/delorean/overcloud-full.tar
ipa_image_url: http://buildlogs.centos.org/centos/7/cloud/x86_64/tripleo_images/newton/delorean/ironic-python-agent.tar

**************************************************************************************
UPDATE ./config/general_config/ha.yml to raise the memory allocation for each HA controller
to at least 6.5 GB ( the minimum needed to avoid a crash in step 5 of overcloud deployment )
**************************************************************************************

[john@fedora24wks tripleo-quickstart]$ cat ./config/general_config/ha.yml
# Deploy an HA openstack environment.
#
# This will require (6144 * 4) == approx. 24GB for the overcloud
# nodes, plus another 8GB for the undercloud, for a total of around
# 32GB.
control_memory: 6500
compute_memory: 6144
undercloud_memory: 8192
# Giving the undercloud additional CPUs can greatly improve heat's
# performance (and result in a shorter deploy time).
undercloud_vcpu: 4
# Create three controller nodes and one compute node.
overcloud_nodes:

  - name: control_0
    flavor: control
  - name: control_1
    flavor: control
  - name: control_2
    flavor: control

  - name: compute_0
    flavor: compute

# We don't need introspection in a virtual environment (because we are
# creating all the "hardware" we really know the necessary
# information).
step_introspect: false

# Tell tripleo about our environment.
network_isolation: true
extra_args: >-
  --control-scale 3 --compute-scale 1 --neutron-network-type vxlan
  --neutron-tunnel-types vxlan
  --ntp-server pool.ntp.org
test_ping: true
enable_pacemaker: true
tempest_config: false
run_tempest: false
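As a quick sanity check of the memory figures in ha.yml, the virthost budget they imply can be computed directly (the arithmetic below is mine, not QuickStart output):

```shell
#!/bin/sh
# Memory budget implied by ha.yml: 3 controllers at the raised 6500 MB,
# 1 compute at 6144 MB, plus the 8192 MB undercloud.
total=$(( 3 * 6500 + 1 * 6144 + 8192 ))
echo "VMs need ${total} MB total, i.e. ~$(( total / 1024 )) GB"
```

With the controller memory raised to 6500 MB the total comes to 33836 MB, slightly over the physical 32 GB, which is exactly the kind of pressure KSM page merging relieves.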

****************************************************************************
Run quickstart.sh to create undercloud VM on VIRTHOST
****************************************************************************

[john@fedora24wks tripleo-quickstart]$ bash quickstart.sh -R newton --config ./config/general_config/ha.yml $VIRTHOST

[john@fedora24wks tripleo-quickstart]$ ssh -F /home/john/.quickstart/ssh.config.ansible undercloud

****************************************************************************************************
QuickStart currently requires manual overcloud deployment.
You are now logged into the undercloud VM running on the VIRTHOST as the stack user.
Building overcloud images is skipped because QuickStart CI already provides them. There is no harm in attempting to build them; it takes a second, since they are already there.
****************************************************************************************************

# Upload pre-built overcloud images
```
[stack@undercloud ~]$ source stackrc
[stack@undercloud ~]$ openstack overcloud image upload
[stack@undercloud ~]$ openstack baremetal import instackenv.json
[stack@undercloud ~]$ openstack baremetal configure boot
[stack@undercloud ~]$ openstack baremetal introspection bulk start
[stack@undercloud ~]$ ironic node-list
[stack@undercloud ~]$ neutron subnet-list
[stack@undercloud ~]$ neutron subnet-update 1b7d82e5-0bf1-4ba5-8008-4aa402598065 \
--dns-nameserver 192.168.122.1
```

**************************************
Create external interface vlan10
*************************************
[stack@undercloud ~]$ sudo vi /etc/sysconfig/network-scripts/ifcfg-vlan10
DEVICE=vlan10
ONBOOT=yes
DEVICETYPE=ovs
TYPE=OVSIntPort
BOOTPROTO=static
OVS_BRIDGE=br-ctlplane
OVS_OPTIONS="tag=10"

[stack@undercloud ~]$ sudo ifup vlan10
[stack@undercloud ~]$ sudo ovs-vsctl show

0d9f9351-93cd-4c83-8eb4-82e8b1ca6665
    Manager "ptcp:6640:127.0.0.1"
        is_connected: true
    Bridge br-ctlplane
        Controller "tcp:127.0.0.1:6633"
            is_connected: true
        fail_mode: secure
        Port br-ctlplane
            Interface br-ctlplane
                type: internal
        Port phy-br-ctlplane
            Interface phy-br-ctlplane
                type: patch
                options: {peer=int-br-ctlplane}
        Port "eth1"
            Interface "eth1"
        Port "vlan10"
            tag: 10
            Interface "vlan10"
                type: internal
    Bridge br-int
        Controller "tcp:127.0.0.1:6633"
            is_connected: true
        fail_mode: secure
        Port int-br-ctlplane
            Interface int-br-ctlplane
                type: patch
                options: {peer=phy-br-ctlplane}
        Port br-int
            Interface br-int
                type: internal
        Port "tapb0b80495-42"
            tag: 1
            Interface "tapb0b80495-42"
                type: internal
    ovs_version: "2.5.0"

*********************************************
Create manually network_env.yaml
*********************************************

[stack@instack ~]$ vi network_env.yaml

{
  "parameter_defaults": {
    "ControlPlaneDefaultRoute": "192.0.2.1",
    "ControlPlaneSubnetCidr": "24",
    "DnsServers": [
      "192.168.122.1"
    ],
    "ExternalAllocationPools": [
      {
        "end": "10.0.0.250",
        "start": "10.0.0.4"
      }
    ],
    "ExternalNetCidr": "10.0.0.1/24",
    "NeutronExternalNetworkBridge": ""
  }
}

$ sudo iptables -A BOOTSTACK_MASQ -s 10.0.0.0/24 ! -d 10.0.0.0/24 -j MASQUERADE -t nat

```
[stack@undercloud ~]$ sudo touch -f \
/usr/share/openstack-tripleo-heat-templates/puppet/post.yaml
```

```
[stack@undercloud ~]$ cat overcloud-deploy.sh
#!/bin/bash -x
source /home/stack/stackrc
openstack overcloud deploy  \
--control-scale 3 --compute-scale 1 \
--libvirt-type qemu \
--ntp-server pool.ntp.org  \
--templates  /usr/share/openstack-tripleo-heat-templates \
-e  /usr/share/openstack-tripleo-heat-templates/environments/puppet-pacemaker.yaml \
-e  /usr/share/openstack-tripleo-heat-templates/environments/network-isolation.yaml \
-e  /usr/share/openstack-tripleo-heat-templates/environments/net-single-nic-with-vlans.yaml \
-e  $HOME/network_env.yaml
```

`[stack@undercloud ~]$ ./overcloud-deploy.sh`
```
+ source /home/stack/stackrc
++ export NOVA_VERSION=1.1
++ NOVA_VERSION=1.1
++ export OS_AUTH_URL=http://192.0.2.1:5000/v2.0
++ OS_AUTH_URL=http://192.0.2.1:5000/v2.0
++ export COMPUTE_API_VERSION=1.1
++ COMPUTE_API_VERSION=1.1
++ export OS_BAREMETAL_API_VERSION=1.15
++ OS_BAREMETAL_API_VERSION=1.15
++ export OS_NO_CACHE=True
++ OS_NO_CACHE=True
++ export OS_CLOUDNAME=undercloud
++ OS_CLOUDNAME=undercloud
++ export OS_IMAGE_API_VERSION=1
++ OS_IMAGE_API_VERSION=1
+ openstack overcloud deploy --control-scale 3 --compute-scale 1 --libvirt-type qemu --ntp-server pool.ntp.org --templates /usr/share/openstack-tripleo-heat-templates -e /usr/share/openstack-tripleo-heat-templates/environments/puppet-pacemaker.yaml -e /usr/share/openstack-tripleo-heat-templates/environments/network-isolation.yaml -e /usr/share/openstack-tripleo-heat-templates/environments/net-single-nic-with-vlans.yaml -e /home/stack/network_env.yaml
WARNING: openstackclient.common.utils is deprecated and will be removed after Jun 2017. Please use osc_lib.utils
Removing the current plan files
Started Mistral Workflow. Execution ID: 5511b4a9-4d0c-4937-9450-e2d9e7e36ab3
Plan updated
Deploying templates in the directory /tmp/tripleoclient-LwH7ZR/tripleo-heat-templates
Started Mistral Workflow. Execution ID: 5e331cfa-4b4e-49dd-bc4c-89b50aa42740
2016-10-12 07:30:50Z [overcloud]: CREATE_IN_PROGRESS  Stack CREATE started
2016-10-12 07:30:51Z [overcloud.HorizonSecret]: CREATE_IN_PROGRESS  state changed
2016-10-12 07:30:52Z [overcloud.PcsdPassword]: CREATE_IN_PROGRESS  state changed
2016-10-12 07:30:52Z [overcloud.MysqlRootPassword]: CREATE_IN_PROGRESS  state changed
2016-10-12 07:30:52Z [overcloud.ServiceNetMap]: CREATE_IN_PROGRESS  state changed
2016-10-12 07:30:53Z [overcloud.HeatAuthEncryptionKey]: CREATE_IN_PROGRESS  state changed
2016-10-12 07:30:53Z [overcloud.RabbitCookie]: CREATE_IN_PROGRESS  state changed
2016-10-12 07:30:54Z [overcloud.Networks]: CREATE_IN_PROGRESS  state changed
2016-10-12 07:30:54Z [overcloud.PcsdPassword]: CREATE_COMPLETE  state changed
2016-10-12 07:30:55Z [overcloud.ServiceNetMap]: CREATE_COMPLETE  state changed
2016-10-12 07:30:55Z [overcloud.HeatAuthEncryptionKey]: CREATE_COMPLETE  state changed
2016-10-12 07:30:55Z [overcloud.Networks]: CREATE_IN_PROGRESS  Stack CREATE started
2016-10-12 07:30:55Z [overcloud.MysqlRootPassword]: CREATE_COMPLETE  state changed
2016-10-12 07:30:55Z [overcloud.RabbitCookie]: CREATE_COMPLETE  state changed
2016-10-12 07:30:55Z [overcloud.Networks.StorageMgmtNetwork]: CREATE_IN_PROGRESS  state changed
2016-10-12 07:30:56Z [overcloud.HorizonSecret]: CREATE_COMPLETE  state changed
2016-10-12 07:30:56Z [overcloud.DefaultPasswords]: CREATE_IN_PROGRESS  state changed
2016-10-12 07:30:56Z [overcloud.Networks.TenantNetwork]: CREATE_IN_PROGRESS  state changed

. . . . . .

2016-10-12 08:18:50Z [overcloud.AllNodesDeploySteps.ControllerDeployment_Step5]: CREATE_COMPLETE  Stack CREATE completed successfully
2016-10-12 08:18:51Z [overcloud.AllNodesDeploySteps.ControllerDeployment_Step5]: CREATE_COMPLETE  state changed
2016-10-12 08:18:51Z [overcloud.AllNodesDeploySteps.ComputePostConfig]: CREATE_IN_PROGRESS  state changed
2016-10-12 08:18:51Z [overcloud.AllNodesDeploySteps.ControllerPostConfig]: CREATE_IN_PROGRESS  state changed
2016-10-12 08:18:51Z [overcloud.AllNodesDeploySteps.BlockStoragePostConfig]: CREATE_IN_PROGRESS  state changed
2016-10-12 08:18:51Z [overcloud.AllNodesDeploySteps.ObjectStoragePostConfig]: CREATE_IN_PROGRESS  state changed
2016-10-12 08:18:51Z [overcloud.AllNodesDeploySteps.CephStoragePostConfig]: CREATE_IN_PROGRESS  state changed
2016-10-12 08:18:53Z [overcloud.AllNodesDeploySteps.ControllerPostConfig]: CREATE_COMPLETE  state changed
2016-10-12 08:18:53Z [overcloud.AllNodesDeploySteps.ObjectStoragePostConfig]: CREATE_COMPLETE  state changed
2016-10-12 08:18:53Z [overcloud.ControllerAllNodesDeployment.0]: SIGNAL_COMPLETE  Unknown
2016-10-12 08:18:54Z [overcloud.AllNodesDeploySteps.ComputePostConfig]: CREATE_COMPLETE  state changed
2016-10-12 08:18:54Z [overcloud.AllNodesDeploySteps.BlockStoragePostConfig]: CREATE_COMPLETE  state changed
2016-10-12 08:18:54Z [overcloud.AllNodesDeploySteps.CephStoragePostConfig]: CREATE_COMPLETE  state changed
2016-10-12 08:18:54Z [overcloud.AllNodesDeploySteps.BlockStorageExtraConfigPost]: CREATE_IN_PROGRESS  state changed
2016-10-12 08:18:55Z [overcloud.AllNodesDeploySteps.ComputeExtraConfigPost]: CREATE_IN_PROGRESS  state changed
2016-10-12 08:18:55Z [overcloud.AllNodesDeploySteps.CephStorageExtraConfigPost]: CREATE_IN_PROGRESS  state changed
2016-10-12 08:18:56Z [overcloud.AllNodesDeploySteps.ControllerExtraConfigPost]: CREATE_IN_PROGRESS  state changed
2016-10-12 08:18:56Z [overcloud.Controller.0.ControllerDeployment]: SIGNAL_COMPLETE  Unknown
2016-10-12 08:18:56Z [overcloud.AllNodesDeploySteps.ObjectStorageExtraConfigPost]: CREATE_IN_PROGRESS  state changed
2016-10-12 08:18:58Z [overcloud.AllNodesDeploySteps.ComputeExtraConfigPost]: CREATE_COMPLETE  state changed
2016-10-12 08:18:58Z [overcloud.AllNodesDeploySteps.BlockStorageExtraConfigPost]: CREATE_COMPLETE  state changed
2016-10-12 08:18:59Z [overcloud.AllNodesDeploySteps.CephStorageExtraConfigPost]: CREATE_COMPLETE  state changed
2016-10-12 08:18:59Z [overcloud.Controller.0.NetworkDeployment]: SIGNAL_COMPLETE  Unknown
2016-10-12 08:18:59Z [overcloud.AllNodesDeploySteps.ControllerExtraConfigPost]: CREATE_COMPLETE  state changed
2016-10-12 08:18:59Z [overcloud.AllNodesDeploySteps.ObjectStorageExtraConfigPost]: CREATE_COMPLETE  state changed
2016-10-12 08:19:00Z [overcloud.AllNodesDeploySteps]: CREATE_COMPLETE  Stack CREATE completed successfully
2016-10-12 08:19:01Z [overcloud.AllNodesDeploySteps]: CREATE_COMPLETE  state changed
2016-10-12 08:19:01Z [overcloud]: CREATE_COMPLETE  Stack CREATE completed successfully

Stack overcloud CREATE_COMPLETE

Overcloud Endpoint: http://10.0.0.10:5000/v2.0
Overcloud Deployed
[stack@undercloud ~]$ . stackrc
[stack@undercloud ~]$ nova list
+--------------------------------------+-------------------------+--------+------------+-------------+---------------------+
| ID                                   | Name                    | Status | Task State | Power State | Networks            |
+--------------------------------------+-------------------------+--------+------------+-------------+---------------------+
| e6951ba8-a467-4c54-a853-b1fa5f1f3d20 | overcloud-controller-0  | ACTIVE | -          | Running     | ctlplane=192.0.2.6  |
| 1a4c436f-0aab-4fb3-bb86-34fbf38bec4a | overcloud-controller-1  | ACTIVE | -          | Running     | ctlplane=192.0.2.12 |
| 5bc7e75d-2a99-4e73-b440-a37b6164c0b6 | overcloud-controller-2  | ACTIVE | -          | Running     | ctlplane=192.0.2.14 |
| 6e379541-37de-4f3b-8667-fbe5284de10b | overcloud-novacompute-0 | ACTIVE | -          | Running     | ctlplane=192.0.2.11 |
+--------------------------------------+-------------------------+--------+------------+-------------+---------------------+

[stack@undercloud ~]$ cat overcloudrc
export OS_NO_CACHE=True
export OS_CLOUDNAME=overcloud
export OS_AUTH_URL=http://10.0.0.10:5000/v2.0
export NOVA_VERSION=1.1
export COMPUTE_API_VERSION=1.1
export no_proxy=,10.0.0.10,192.0.2.10
export PYTHONWARNINGS="ignore:Certificate has no, ignore:A true SSLContext object is not available"
```
[stack@undercloud ~]$ ssh heat-admin@192.0.2.6
The authenticity of host '192.0.2.6 (192.0.2.6)' can't be established.
ECDSA key fingerprint is d1:71:51:eb:72:d2:50:fb:c6:30:13:49:0d:4b:c8:b1.
Are you sure you want to continue connecting (yes/no)? yes
Warning: Permanently added '192.0.2.6' (ECDSA) to the list of known hosts.
[root@overcloud-controller-0 ~]# vi overcloudrc
[root@overcloud-controller-0 ~]# .  overcloudrc
[root@overcloud-controller-0 ~]# pcs status
Cluster name: tripleo_cluster
Last updated: Wed Oct 12 08:21:33 2016  Last change: Wed Oct 12 08:09:48 2016 by root via cibadmin on overcloud-controller-0
Stack: corosync
Current DC: overcloud-controller-2 (version 1.1.13-10.el7_2.4-44eb2dd) - partition with quorum
3 nodes and 19 resources configured

Online: [ overcloud-controller-0 overcloud-controller-1 overcloud-controller-2 ]

Full list of resources:

Clone Set: haproxy-clone [haproxy]
Started: [ overcloud-controller-0 overcloud-controller-1 overcloud-controller-2 ]
Master/Slave Set: galera-master [galera]
Masters: [ overcloud-controller-0 overcloud-controller-1 overcloud-controller-2 ]
Clone Set: rabbitmq-clone [rabbitmq]
Started: [ overcloud-controller-0 overcloud-controller-1 overcloud-controller-2 ]
Master/Slave Set: redis-master [redis]
Masters: [ overcloud-controller-0 ]
Slaves: [ overcloud-controller-1 overcloud-controller-2 ]
openstack-cinder-volume (systemd:openstack-cinder-volume): Started overcloud-controller-0

PCSD Status:
overcloud-controller-0: Online
overcloud-controller-1: Online
overcloud-controller-2: Online

Daemon Status:
corosync: active/enabled
pacemaker: active/enabled
pcsd: active/enabled
```

```
[root@overcloud-controller-0 ~]# clustercheck
HTTP/1.1 200 OK
Content-Type: text/plain
Connection: close
Content-Length: 32
Galera cluster node is synced.
```

Final “top” snapshot on VIRTHOST (for QuickStart) after the same deployment.

Compare the numbers under the SHR header in the reports below.

A cloud F24 VM is running on overcloud-novacompute-0.

Much the same configuration was achieved via instack-virt-setup.

Swap utilization is at least 2.5 GB (up to 3.5 GB) while the F24 cloud VM is running.
***************************************************************************************
System information provided via dashboard (remote sshuttle connection)
***************************************************************************************

Network Configuration

TripleO deployment of ‘master’ branch via instack-virt-setup on VIRTHOST (2)

September 29, 2016

UPDATE 09/29/2016
$ sudo route add -net 192.0.2.0/24 gw 192.0.2.1 (on the instack VM)
is no longer needed; moreover, it breaks ssh connections to the overcloud nodes.
END UPDATE

Upstream is close to the Newton release; the bugs scheduled for RC2 are gone.

What follows is a clean, smoothly running procedure for deploying a TripleO master-branch overcloud via instack-virt-setup on a 32 GB VIRTHOST. Network isolation in the overcloud is pre-configured on the instack (undercloud) VM, a step that is hard to locate in the official docs http://tripleo.org/index.html

and that is silently skipped in TripleO-related blogs. The mistral execution list is also verified upon completion of the overcloud deployment. Running `systemctl status keepalived -l` on an overcloud controller shows gratuitous ARPs being sent, which confirms that network isolation is in place.

Sep 27 14:55:03 overcloud-controller-0 Keepalived_vrrp[18506]:
VRRP_Instance(51) Sending gratuitous ARPs on br-ex for 192.0.2.13
Sep 27 14:55:03 overcloud-controller-0 Keepalived_vrrp[18506]:
VRRP_Instance(56) Entering MASTER STATE
Sep 27 14:55:03 overcloud-controller-0 Keepalived_vrrp[18506]:
VRRP_Instance(56) setting protocol VIPs.
Sep 27 14:55:03 overcloud-controller-0 Keepalived_vrrp[18506]:
VRRP_Instance(56) Sending gratuitous ARPs on vlan20 for 172.16.2.7
Sep 27 14:55:03 overcloud-controller-0 Keepalived_healthcheckers[18505]:
Sep 27 14:55:03 overcloud-controller-0 Keepalived_vrrp[18506]:
VRRP_Instance(52) Entering MASTER STATE
Sep 27 14:55:03 overcloud-controller-0 Keepalived_vrrp[18506]:
VRRP_Instance(52) setting protocol VIPs.
Sep 27 14:55:03 overcloud-controller-0 Keepalived_vrrp[18506]:
VRRP_Instance(52) Sending gratuitous ARPs on br-ex for 10.0.0.4
Sep 27 14:55:03 overcloud-controller-0 Keepalived_healthcheckers[18505]:
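The VIPs being announced can be pulled out of such a log with a one-liner. A minimal sketch over the sample lines above (on a live controller you would feed it `journalctl -u keepalived` instead of the inlined heredoc):

```shell
# Extract the addresses announced via gratuitous ARP from keepalived log lines.
# Sample lines are inlined here to keep the sketch self-contained.
awk '/gratuitous ARPs/ {print $NF}' <<'EOF'
VRRP_Instance(51) Sending gratuitous ARPs on br-ex for 192.0.2.13
VRRP_Instance(56) Sending gratuitous ARPs on vlan20 for 172.16.2.7
VRRP_Instance(52) Sending gratuitous ARPs on br-ex for 10.0.0.4
EOF
```

Each matching line ends with the VIP, so printing the last field lists every address keepalived has claimed.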

*****************************************
Tune stack environment on VIRTHOST
*****************************************
# echo "stack:stack" | chpasswd
# echo "stack ALL=(root) NOPASSWD:ALL" | sudo tee -a /etc/sudoers.d/stack
# chmod 0440 /etc/sudoers.d/stack
# su - stack

***************************
Tune stack ENV
**************************
export NODE_DIST=centos7
export NODE_CPU=2
export NODE_MEM=7550
export NODE_COUNT=3
export UNDERCLOUD_NODE_CPU=4
export UNDERCLOUD_NODE_MEM=9000
export FS_TYPE=ext4

****************************************************************
Re-login to stack (highlight long line and copy if needed)
****************************************************************

$ sudo yum -y install epel-release
$ sudo yum -y install yum-plugin-priorities
$ sudo curl -o /etc/yum.repos.d/delorean.repo \
http://buildlogs.centos.org/centos/7/cloud/x86_64/rdo-trunk-master-tripleo/delorean.repo
$ sudo curl -o /etc/yum.repos.d/delorean-deps.repo \
http://trunk.rdoproject.org/centos7/delorean-deps.repo
$ sudo yum install -y instack-undercloud
$ instack-virt-setup

*********************
On instack VM
*********************
Create swap file per http://www.anstack.com/blog/2016/07/04/manually-installing-tripleo-recipe.html :-

#Add a 4GB swap file to the Undercloud
sudo dd if=/dev/zero of=/swapfile bs=1024 count=4194304
sudo mkswap /swapfile
#Turn ON the swap file
sudo chmod 600 /swapfile
sudo swapon /swapfile
#Enable it on start
echo "/swapfile swap swap defaults 0 0" | sudo tee -a /etc/fstab
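The dd block count above comes from simple arithmetic: with `bs=1024` each block is 1 KiB, so one GiB is 1024*1024 blocks. A tiny sketch of the sizing (the variable name is illustrative):

```shell
# Block count for an N-GiB swap file at bs=1024 (1 KiB per block).
SWAP_GIB=4
COUNT=$(( SWAP_GIB * 1024 * 1024 ))
echo "$COUNT"   # -> 4194304, the count passed to dd above
```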

***************************
Restart instack VM
***************************

Next :-
# su - stack

*************************************
Update .bashrc under ~stack/
*************************************

export USE_DELOREAN_TRUNK=1
export DELOREAN_TRUNK_REPO="http://buildlogs.centos.org/centos/7/cloud/x86_64/rdo-trunk-master-tripleo/"
export DELOREAN_REPO_FILE="delorean.repo"
export FS_TYPE=ext4

[stack@instack ~]$ git clone https://github.com/openstack/tripleo-heat-templates
[stack@instack ~]$ git clone https://github.com/openstack-infra/tripleo-ci.git

[stack@instack ~]$ ./tripleo-ci/scripts/tripleo.sh --repo-setup
[stack@instack ~]$ ./tripleo-ci/scripts/tripleo.sh --undercloud
[stack@instack ~]$ source stackrc
[stack@instack ~]$ ./tripleo-ci/scripts/tripleo.sh --overcloud-images
[stack@instack ~]$ ./tripleo-ci/scripts/tripleo.sh --register-nodes
[stack@instack ~]$ ./tripleo-ci/scripts/tripleo.sh --introspect-nodes

Image file overcloud-full.qcow2 created…
Successfully built all requested images
You must source a stackrc file for the Undercloud.
Attempting to source /home/stack/stackrc
Done
WARNING: openstackclient.common.utils is deprecated and will be removed after Jun 2017. Please use osc_lib.utils
+--------------------------------------+------------------------+-------------+---------+--------+
| ID                                   | Name                   | Disk Format | Size    | Status |
+--------------------------------------+------------------------+-------------+---------+--------+
| 37dba3bf-5683-4a33-b6d5-9ed90e1f189d | overcloud-full-vmlinuz | aki         | 5157296 | active |
+--------------------------------------+------------------------+-------------+---------+--------+
+--------------------------------------+-----------------------+-------------+----------+--------+
| ID                                   | Name                  | Disk Format | Size     | Status |
+--------------------------------------+-----------------------+-------------+----------+--------+
| 0bfd61f2-1c03-43ab-82e5-811c346dadd0 | overcloud-full-initrd | ari         | 42124221 | active |
+--------------------------------------+-----------------------+-------------+----------+--------+
+--------------------------------------+----------------+-------------+------------+--------+
| ID                                   | Name           | Disk Format | Size       | Status |
+--------------------------------------+----------------+-------------+------------+--------+
| d2c41746-fb4c-4438-995b-22811df6f772 | overcloud-full | qcow2       | 1178590720 | active |
+--------------------------------------+----------------+-------------+------------+--------+
+--------------------------------------+------------------+-------------+---------+--------+
| ID                                   | Name             | Disk Format | Size    | Status |
+--------------------------------------+------------------+-------------+---------+--------+
| f237b9a5-33a8-4f33-998a-571059f0522b | bm-deploy-kernel | aki         | 5157296 | active |
+--------------------------------------+------------------+-------------+---------+--------+
+--------------------------------------+-------------------+-------------+-----------+--------+
| ID                                   | Name              | Disk Format | Size      | Status |
+--------------------------------------+-------------------+-------------+-----------+--------+
| 97c40ed7-296f-42b8-9d3c-3d40b36040eb | bm-deploy-ramdisk | ari         | 318648193 | active |
+--------------------------------------+-------------------+-------------+-----------+--------+
#################
tripleo.sh -- Overcloud images - DONE.
#################
#################
[stack@instack ~]$ ./tripleo-ci/scripts/tripleo.sh --register-nodes
#################
tripleo.sh -- Register nodes
#################
You must source a stackrc file for the Undercloud.
Attempting to source /home/stack/stackrc
Done
WARNING: openstackclient.common.utils is deprecated and will be removed after Jun 2017. Please use osc_lib.utils
Started Mistral Workflow. Execution ID: 9a148b8b-fe55-43b1-b3e1-cb13fad49759
Successfully registered node UUID 1f031f3f-edb1-434c-8b6f-c60bffce9941
Successfully registered node UUID cca63d2d-6912-4878-9ea7-a90510fc09b2
Successfully registered node UUID 584bb979-b715-4c08-836f-2200c6d4d937
Started Mistral Workflow. Execution ID: 071e4b85-2b7c-420e-96bd-bbbe980f9db7
Successfully set all nodes to available.
+--------------------------------------+------+---------------+-------------+--------------------+-------------+
| UUID                                 | Name | Instance UUID | Power State | Provisioning State | Maintenance |
+--------------------------------------+------+---------------+-------------+--------------------+-------------+
| 1f031f3f-edb1-434c-8b6f-c60bffce9941 | None | None          | power off   | available          | False       |
| cca63d2d-6912-4878-9ea7-a90510fc09b2 | None | None          | power off   | available          | False       |
| 584bb979-b715-4c08-836f-2200c6d4d937 | None | None          | power off   | available          | False       |
+--------------------------------------+------+---------------+-------------+--------------------+-------------+
#################
tripleo.sh -- Register nodes - DONE.
#################
[stack@instack ~]$ ./tripleo-ci/scripts/tripleo.sh --introspect-nodes
#################
tripleo.sh -- Introspect nodes
#################
You must source a stackrc file for the Undercloud.
Attempting to source /home/stack/stackrc
Done
WARNING: openstackclient.common.utils is deprecated and will be removed after Jun 2017. Please use osc_lib.utils
Setting nodes for introspection to manageable…
Starting introspection of manageable nodes
Started Mistral Workflow. Execution ID: e4e63d1a-3e6c-42d5-9575-e4166853cdd0
Waiting for introspection to finish…
Introspection for UUID 1f031f3f-edb1-434c-8b6f-c60bffce9941 finished successfully.
Introspection for UUID cca63d2d-6912-4878-9ea7-a90510fc09b2 finished successfully.
Introspection for UUID 584bb979-b715-4c08-836f-2200c6d4d937 finished successfully.
Introspection completed.
Setting manageable nodes to available…
Started Mistral Workflow. Execution ID: 1d119a65-a5a8-4b81-b5da-2fd3b15f26e1

#################
tripleo.sh -- Introspect nodes - DONE.
#################

Now create external interface vlan10.

[stack@instack ~]$ sudo vi /etc/sysconfig/network-scripts/ifcfg-vlan10

DEVICE=vlan10
ONBOOT=yes
DEVICETYPE=ovs
TYPE=OVSIntPort
BOOTPROTO=static
OVS_BRIDGE=br-ctlplane
OVS_OPTIONS="tag=10"

[stack@instack ~]$ sudo ifup vlan10

[stack@instack ~]$ sudo ovs-vsctl show
43ccb3e7-74ed-4192-a87d-80b5a71a7e80
    Manager "ptcp:6640:127.0.0.1"
        is_connected: true
    Bridge br-int
        Controller "tcp:127.0.0.1:6633"
            is_connected: true
        fail_mode: secure
        Port int-br-ctlplane
            Interface int-br-ctlplane
                type: patch
                options: {peer=phy-br-ctlplane}
        Port br-int
            Interface br-int
                type: internal
        Port "tap0d0fb165-79"
            tag: 1
            Interface "tap0d0fb165-79"
                type: internal
    Bridge br-ctlplane
        Controller "tcp:127.0.0.1:6633"
            is_connected: true
        fail_mode: secure
        Port "eth1"
            Interface "eth1"
        Port br-ctlplane
            Interface br-ctlplane
                type: internal
        Port "vlan10"
            tag: 10
            Interface "vlan10"
                type: internal
        Port phy-br-ctlplane
            Interface phy-br-ctlplane
                type: patch
                options: {peer=int-br-ctlplane}
    ovs_version: "2.5.0"
[stack@instack ~]$ ifconfig
br-ctlplane: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1500
        inet6 fe80::283:bbff:feda:c642  prefixlen 64  scopeid 0x20<link>
        ether 00:83:bb:da:c6:42  txqueuelen 0  (Ethernet)
        RX packets 43022  bytes 2956223 (2.8 MiB)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 15525  bytes 972453334 (927.4 MiB)
        TX errors 0  dropped 0  overruns 0  carrier 0  collisions 0

eth0: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1500
        inet6 fe80::5054:ff:fe28:530d  prefixlen 64  scopeid 0x20<link>
        ether 52:54:00:28:53:0d  txqueuelen 1000  (Ethernet)
        RX packets 881966  bytes 1281751784 (1.1 GiB)
        RX errors 0  dropped 3  overruns 0  frame 0
        TX packets 539560  bytes 43216702 (41.2 MiB)
        TX errors 0  dropped 0  overruns 0  carrier 0  collisions 0

eth1: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1500
        inet6 fe80::283:bbff:feda:c642  prefixlen 64  scopeid 0x20<link>
        ether 00:83:bb:da:c6:42  txqueuelen 1000  (Ethernet)
        RX packets 43015  bytes 2955825 (2.8 MiB)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 15538  bytes 972454368 (927.4 MiB)
        TX errors 0  dropped 0  overruns 0  carrier 0  collisions 0

lo: flags=73<UP,LOOPBACK,RUNNING>  mtu 65536
        inet6 ::1  prefixlen 128  scopeid 0x10<host>
        loop  txqueuelen 0  (Local Loopback)
        RX packets 867304  bytes 4826379602 (4.4 GiB)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 867304  bytes 4826379602 (4.4 GiB)
        TX errors 0  dropped 0  overruns 0  carrier 0  collisions 0

vlan10: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1500
        inet6 fe80::dc07:f5ff:fe72:2c9  prefixlen 64  scopeid 0x20<link>
        ether de:07:f5:72:02:c9  txqueuelen 0  (Ethernet)
        RX packets 0  bytes 0 (0.0 B)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 12  bytes 816 (816.0 B)
        TX errors 0  dropped 0  overruns 0  carrier 0  collisions 0

Instack IP is 192.168.122.90
[stack@instack ~]$ vi network_env.yaml
{
  "parameter_defaults": {
    "ControlPlaneDefaultRoute": "192.0.2.1",
    "ControlPlaneSubnetCidr": "24",
    "DnsServers": [
      "192.168.122.90"
    ],
    "ExternalAllocationPools": [
      {
        "end": "10.0.0.250",
        "start": "10.0.0.4"
      }
    ],
    "ExternalNetCidr": "10.0.0.1/24",
    "NeutronExternalNetworkBridge": ""
  }
}
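Note that the environment file above is plain JSON; since JSON is a subset of YAML, Heat accepts it as-is. A quick sanity check before deploying catches a stray quote or comma that would otherwise only surface mid-deployment (a sketch assuming python3 is available on the instack VM; the file content is inlined to keep it self-contained):

```shell
# Write the environment file and verify it parses as JSON.
cat > network_env.yaml <<'EOF'
{
  "parameter_defaults": {
    "ControlPlaneDefaultRoute": "192.0.2.1",
    "ControlPlaneSubnetCidr": "24",
    "DnsServers": ["192.168.122.90"],
    "ExternalAllocationPools": [
      {"end": "10.0.0.250", "start": "10.0.0.4"}
    ],
    "ExternalNetCidr": "10.0.0.1/24",
    "NeutronExternalNetworkBridge": ""
  }
}
EOF
python3 -c 'import json; json.load(open("network_env.yaml")); print("valid")'
```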

[stack@instack ~]$ sudo iptables -A BOOTSTACK_MASQ -s 10.0.0.0/24 ! -d 10.0.0.0/24 -j MASQUERADE -t nat
[stack@instack ~]$ vi overcloud-deploy.sh
#!/bin/bash -x
source /home/stack/stackrc
openstack overcloud deploy \
--libvirt-type qemu \
--ntp-server pool.ntp.org \
--templates /home/stack/tripleo-heat-templates \
-e /home/stack/tripleo-heat-templates/overcloud-resource-registry-puppet.yaml \
-e /home/stack/tripleo-heat-templates/environments/network-isolation.yaml \
-e /home/stack/tripleo-heat-templates/environments/net-single-nic-with-vlans.yaml \
-e $HOME/network_env.yaml \
--control-scale 1 --compute-scale 2

[stack@instack ~]$ cat $HOME/network_env.yaml
[stack@instack ~]$ chmod a+x overcloud-deploy.sh
[stack@instack ~]$ touch -f /home/stack/tripleo-heat-templates/puppet/post.yaml

[stack@instack ~]$ neutron subnet-list
+--------------------------------------+------+--------------+---------------------------------------------+
| id                                   | name | cidr         | allocation_pools                            |
+--------------------------------------+------+--------------+---------------------------------------------+
| bc762c84-558a-4091-aeca-b0b1a428e5f1 |      | 192.0.2.0/24 | {"start": "192.0.2.5", "end": "192.0.2.24"} |
+--------------------------------------+------+--------------+---------------------------------------------+
[stack@instack ~]$ neutron subnet-update bc762c84-558a-4091-aeca-b0b1a428e5f1 --dns-nameserver 83.221.202.254
Updated subnet: bc762c84-558a-4091-aeca-b0b1a428e5f1
[stack@instack ~]$ chmod a+x overcloud-deploy.sh
[stack@instack ~]$ ./overcloud-deploy.sh
+ source /home/stack/stackrc
++ export NOVA_VERSION=1.1
++ NOVA_VERSION=1.1
++ export OS_AUTH_URL=http://192.0.2.1:5000/v2.0
++ OS_AUTH_URL=http://192.0.2.1:5000/v2.0
++ export COMPUTE_API_VERSION=1.1
++ COMPUTE_API_VERSION=1.1
++ export OS_BAREMETAL_API_VERSION=1.15
++ OS_BAREMETAL_API_VERSION=1.15
++ export OS_NO_CACHE=True
++ OS_NO_CACHE=True
++ export OS_CLOUDNAME=undercloud
++ OS_CLOUDNAME=undercloud
++ export OS_IMAGE_API_VERSION=1
++ OS_IMAGE_API_VERSION=1
+ openstack overcloud deploy --libvirt-type qemu \
--ntp-server pool.ntp.org --templates \
/home/stack/tripleo-heat-templates \
-e /home/stack/tripleo-heat-templates/overcloud-resource-registry-puppet.yaml \
-e /home/stack/tripleo-heat-templates/environments/network-isolation.yaml \
-e /home/stack/tripleo-heat-templates/environments/net-single-nic-with-vlans.yaml \
-e /home/stack/network_env.yaml \
--control-scale 1 --compute-scale 2

WARNING: openstackclient.common.utils is deprecated and will be removed after Jun 2017. Please use osc_lib.utils
Removing the current plan files
Started Mistral Workflow. Execution ID: 08b899f5-0444-4eb8-8719-a6eba9a81fa0
Plan updated
Deploying templates in the directory /home/stack/tripleo-heat-templates
Started Mistral Workflow. Execution ID: 1c291c68-aec7-49e8-836f-658b06763c92
2016-09-27 14:09:37Z [overcloud]: CREATE_IN_PROGRESS Stack CREATE started
2016-09-27 14:09:37Z [MysqlRootPassword]: CREATE_IN_PROGRESS state changed
2016-09-27 14:09:37Z [RabbitCookie]: CREATE_IN_PROGRESS state changed
2016-09-27 14:09:37Z [HorizonSecret]: CREATE_IN_PROGRESS state changed
2016-09-27 14:09:37Z [HeatAuthEncryptionKey]: CREATE_IN_PROGRESS state changed
2016-09-27 14:09:37Z [Networks]: CREATE_IN_PROGRESS state changed
2016-09-27 14:09:37Z [PcsdPassword]: CREATE_IN_PROGRESS state changed
2016-09-27 14:09:38Z [ServiceNetMap]: CREATE_IN_PROGRESS state changed
2016-09-27 14:09:38Z [overcloud-Networks-oqxpnzrtiaem]: CREATE_IN_PROGRESS Stack CREATE started
2016-09-27 14:09:38Z [InternalNetwork]: CREATE_IN_PROGRESS state changed
2016-09-27 14:09:38Z [ManagementNetwork]: CREATE_IN_PROGRESS state changed
2016-09-27 14:09:38Z [StorageMgmtNetwork]: CREATE_IN_PROGRESS state changed
2016-09-27 14:09:38Z [overcloud-Networks-oqxpnzrtiaem-InternalNetwork-l4jjn3botrgj]: CREATE_IN_PROGRESS Stack CREATE started
2016-09-27 14:09:38Z [InternalApiNetwork]: CREATE_IN_PROGRESS state changed
2016-09-27 14:09:38Z [RabbitCookie]: CREATE_COMPLETE state changed
2016-09-27 14:09:38Z [StorageNetwork]: CREATE_IN_PROGRESS state changed
2016-09-27 14:09:38Z [HorizonSecret]: CREATE_COMPLETE state changed
2016-09-27 14:09:38Z [PcsdPassword]: CREATE_COMPLETE state changed
2016-09-27 14:09:38Z [overcloud-Networks-oqxpnzrtiaem-StorageMgmtNetwork-jnuiofatb5tu]: CREATE_IN_PROGRESS Stack CREATE started
2016-09-27 14:09:38Z [HeatAuthEncryptionKey]: CREATE_COMPLETE state changed
2016-09-27 14:09:38Z [MysqlRootPassword]: CREATE_COMPLETE state changed
2016-09-27 14:09:38Z [StorageMgmtNetwork]: CREATE_IN_PROGRESS state changed
2016-09-27 14:09:38Z [TenantNetwork]: CREATE_IN_PROGRESS state changed
2016-09-27 14:09:38Z [DefaultPasswords]: CREATE_IN_PROGRESS state changed
2016-09-27 14:09:38Z [overcloud-Networks-oqxpnzrtiaem-StorageNetwork-x3pafjtlirmz]: CREATE_IN_PROGRESS Stack CREATE started
2016-09-27 14:09:38Z [StorageNetwork]: CREATE_IN_PROGRESS state changed
2016-09-27 14:09:38Z [ExternalNetwork]: CREATE_IN_PROGRESS state changed
2016-09-27 14:09:38Z [overcloud-Networks-oqxpnzrtiaem-TenantNetwork-2nndp6au2sfp]: CREATE_IN_PROGRESS Stack CREATE started
2016-09-27 14:09:38Z [TenantNetwork]: CREATE_IN_PROGRESS state changed
2016-09-27 14:09:38Z [NetworkExtraConfig]: CREATE_IN_PROGRESS state changed
2016-09-27 14:09:38Z [overcloud-Networks-oqxpnzrtiaem-ExternalNetwork-ht7dkpzmiskb]: CREATE_IN_PROGRESS Stack CREATE started
2016-09-27 14:09:38Z [ManagementNetwork]: CREATE_COMPLETE state changed

. . . . . .

2016-09-27 14:38:22Z [0]: CREATE_COMPLETE state changed
2016-09-27 14:38:22Z [overcloud-AllNodesDeploySteps-zs2vx53nvajt-ControllerDeployment_Step5-is6nnvfdpzxg]: CREATE_COMPLETE Stack CREATE completed successfully
2016-09-27 14:38:22Z [ControllerDeployment_Step5]: CREATE_COMPLETE state changed
2016-09-27 14:38:22Z [BlockStoragePostConfig]: CREATE_IN_PROGRESS state changed
2016-09-27 14:38:22Z [ObjectStoragePostConfig]: CREATE_IN_PROGRESS state changed
2016-09-27 14:38:22Z [ComputePostConfig]: CREATE_IN_PROGRESS state changed
2016-09-27 14:38:22Z [CephStoragePostConfig]: CREATE_IN_PROGRESS state changed
2016-09-27 14:38:22Z [ControllerPostConfig]: CREATE_IN_PROGRESS state changed
2016-09-27 14:38:23Z [BlockStoragePostConfig]: CREATE_COMPLETE state changed
2016-09-27 14:38:23Z [ObjectStoragePostConfig]: CREATE_COMPLETE state changed
2016-09-27 14:38:23Z [ComputePostConfig]: CREATE_COMPLETE state changed
2016-09-27 14:38:24Z [CephStoragePostConfig]: CREATE_COMPLETE state changed
2016-09-27 14:38:24Z [ControllerPostConfig]: CREATE_COMPLETE state changed
2016-09-27 14:38:24Z [CephStorageExtraConfigPost]: CREATE_IN_PROGRESS state changed
2016-09-27 14:38:24Z [ComputeExtraConfigPost]: CREATE_IN_PROGRESS state changed
2016-09-27 14:38:24Z [BlockStorageExtraConfigPost]: CREATE_IN_PROGRESS state changed
2016-09-27 14:38:24Z [ControllerExtraConfigPost]: CREATE_IN_PROGRESS state changed
2016-09-27 14:38:24Z [ObjectStorageExtraConfigPost]: CREATE_IN_PROGRESS state changed
2016-09-27 14:38:25Z [ComputeExtraConfigPost]: CREATE_COMPLETE state changed
2016-09-27 14:38:25Z [CephStorageExtraConfigPost]: CREATE_COMPLETE state changed
2016-09-27 14:38:25Z [ControllerExtraConfigPost]: CREATE_COMPLETE state changed
2016-09-27 14:38:25Z [ObjectStorageExtraConfigPost]: CREATE_COMPLETE state changed
2016-09-27 14:38:25Z [BlockStorageExtraConfigPost]: CREATE_COMPLETE state changed
2016-09-27 14:38:25Z [overcloud-AllNodesDeploySteps-zs2vx53nvajt]: CREATE_COMPLETE Stack CREATE completed successfully
2016-09-27 14:38:26Z [AllNodesDeploySteps]: CREATE_COMPLETE state changed
2016-09-27 14:38:26Z [overcloud]: CREATE_COMPLETE Stack CREATE completed successfully

Stack overcloud CREATE_COMPLETE

Overcloud Endpoint: http://10.0.0.4:5000/v2.0
Overcloud Deployed

***********************************************************
Checking for errors in mistral execution list
which is new in Newton release
***********************************************************

[stack@instack ~]$ mistral execution-list

```
+----------+-------------+---------------+-------------+-------------------+---------+------------+------------+---------------+
| ID       | Workflow ID | Workflow name | Description | Task Execution ID | State   | State info | Created at | Updated at    |
+----------+-------------+---------------+-------------+-------------------+---------+------------+------------+---------------+
| 5bce3202 | bde1cc99-ef | tripleo.plan_ |             |             | SUCCESS | None       | 2016-09-27 | 2016-09-27    |
| -c2b3-47 | e5-40df-    | management.v1 |             |                   |         |            | 12:31:44   | 12:31:58      |
| 35-ad40- | bc5a-359141 | .create_defau |             |                   |         |            |            |               |
| 2b35e775 | e48a73      | lt_deployment |             |                   |         |            |            |               |
| 4647     |             | _plan         |             |                   |         |            |            |               |
| 9a148b8b | 75fb3808    | tripleo.barem |             |             | SUCCESS | None       | 2016-09-27 | 2016-09-27    |
| -fe55-43 | -142c-4d98- | etal.v1.regis |             |                   |         |            | 13:40:34   | 13:40:49      |
| b1-b3e1- | a509-edfcee | ter_or_update |             |                   |         |            |            |               |
| cb13fad4 | 056fe5      |               |             |                   |         |            |            |               |
| 9759     |             |               |             |                   |         |            |            |               |
| 0a450c25 | bcc2d68d-cd | tripleo.barem | sub-        | 191d237c-1322     | SUCCESS | None       | 2016-09-27 | 2016-09-27    |
| -bf68    | da-4919-998 | etal.v1.set_n | workflow    | -4b3c-867e-       |         |            | 13:40:42   | 13:40:45      |
| -487a-af | 4-00e03bbdf | ode_state     | execution   | ddbd7121e01f      |         |            |            |               |
| 98-7125e | db8         |               |             |                   |         |            |            |               |
| 809a737  |             |               |             |                   |         |            |            |               |
| 3e8eff39 | bcc2d68d-cd | tripleo.barem | sub-        | 191d237c-1322     | SUCCESS | None       | 2016-09-27 | 2016-09-27    |
| -f732-49 | da-4919-998 | etal.v1.set_n | workflow    | -4b3c-867e-       |         |            | 13:40:42   | 13:40:45      |
| 76-b541- | 4-00e03bbdf | ode_state     | execution   | ddbd7121e01f      |         |            |            |               |
| b3446c6c | db8         |               |             |                   |         |            |            |               |
| 6fde     |             |               |             |                   |         |            |            |               |
| 55385132 | bcc2d68d-cd | tripleo.barem | sub-        | 191d237c-1322     | SUCCESS | None       | 2016-09-27 | 2016-09-27    |
| -6e79    | da-4919-998 | etal.v1.set_n | workflow    | -4b3c-867e-       |         |            | 13:40:42   | 13:40:45      |
| -400b-   | 4-00e03bbdf | ode_state     | execution   | ddbd7121e01f      |         |            |            |               |
| 965d-1a2 | db8         |               |             |                   |         |            |            |               |
| f6ccada2 |             |               |             |                   |         |            |            |               |
| 4        |             |               |             |                   |         |            |            |               |
| 071e4b85 | 22cd7376    | tripleo.barem |             |             | SUCCESS | None       | 2016-09-27 | 2016-09-27    |
| -2b7c-   | -d6cd-49db- | etal.v1.provi |             |                   |         |            | 13:40:49   | 13:40:56      |
| 420e-    | 801b-74ef4e | de            |             |                   |         |            |            |               |
| 96bd-bbb | 197f3f      |               |             |                   |         |            |            |               |
| e980f9db |             |               |             |                   |         |            |            |               |
| 7        |             |               |             |                   |         |            |            |               |
| 288e2ff2 | bcc2d68d-cd | tripleo.barem | sub-        | 52352bbc-8667-458 | SUCCESS | None       | 2016-09-27 | 2016-09-27    |
| -2161    | da-4919-998 | etal.v1.set_n | workflow    | 5-8099-39f6738381 |         |            | 13:40:49   | 13:40:52      |
| -473f-84 | 4-00e03bbdf | ode_state     | execution   | 5a                |         |            |            |               |
| c9-382bd | db8         |               |             |                   |         |            |            |               |
| 7b6b0bd  |             |               |             |                   |         |            |            |               |
| 28e41b0a | bcc2d68d-cd | tripleo.barem | sub-        | 52352bbc-8667-458 | SUCCESS | None       | 2016-09-27 | 2016-09-27    |
| -eb5f-4e | da-4919-998 | etal.v1.set_n | workflow    | 5-8099-39f6738381 |         |            | 13:40:49   | 13:40:52      |
| 70-b537- | 4-00e03bbdf | ode_state     | execution   | 5a                |         |            |            |               |
| e5da84e2 | db8         |               |             |                   |         |            |            |               |
| af7b     |             |               |             |                   |         |            |            |               |
| 2c30a482 | bcc2d68d-cd | tripleo.barem | sub-        | 52352bbc-8667-458 | SUCCESS | None       | 2016-09-27 | 2016-09-27    |
| -a4fc-   | da-4919-998 | etal.v1.set_n | workflow    | 5-8099-39f6738381 |         |            | 13:40:49   | 13:40:52      |
| 42be-    | 4-00e03bbdf | ode_state     | execution   | 5a                |         |            |            |               |
| a66b-568 | db8         |               |             |                   |         |            |            |               |
| b5bb7fa5 |             |               |             |                   |         |            |            |               |
| 0        |             |               |             |                   |         |            |            |               |
| 2026ebb3 | a3b2b56e-   | tripleo.barem | sub-        | 68dfd5f3-ba31-432 | SUCCESS | None       | 2016-09-27 | 2016-09-27    |
| -2cf9-43 | 0a12-4354   | etal.v1.intro | workflow    | 1-9b0e-           |         |            | 13:41:12   | 13:43:19      |
| 90-90fc- | -b7de-2cbaa | spect         | execution   | 1d35b5181c49      |         |            |            |               |
| 05688171 | c7b3406     |               |             |                   |         |            |            |               |
| 9836     |             |               |             |                   |         |            |            |               |
| e4e63d1a | 4130daad-38 | tripleo.barem |             |             | SUCCESS | None       | 2016-09-27 | 2016-09-27    |
| -3e6c-42 | 0d-4406-b02 | etal.v1.intro |             |                   |         |            | 13:41:12   | 13:43:22      |
| d5-9575- | 1-9e8b4bb3b | spect_managea |             |                   |         |            |            |               |
| e4166853 | 0e1         | ble_nodes     |             |                   |         |            |            |               |
| cdd0     |             |               |             |                   |         |            |            |               |
| 1d119a65 | 29c51fab-   | tripleo.barem |             |             | SUCCESS | None       | 2016-09-27 | 2016-09-27    |
| -a5a8-4b | f6b9-4cc1   | etal.v1.provi |             |                   |         |            | 13:43:21   | 13:43:33      |
| 81-b5da- | -94bc-14144 | de_manageable |             |                   |         |            |            |               |
| 2fd3b15f | bc8284f     | _nodes        |             |                   |         |            |            |               |
| 26e1     |             |               |             |                   |         |            |            |               |
| 9262a91e | 22cd7376    | tripleo.barem | sub-        | 8052fcbb-b511     | SUCCESS | None       | 2016-09-27 | 2016-09-27    |
| -a7bf-   | -d6cd-49db- | etal.v1.provi | workflow    | -423a-bfdc-       |         |            | 13:43:22   | 13:43:30      |
| 439b-    | 801b-74ef4e | de            | execution   | 8555b8e169a8      |         |            |            |               |
| b9fc-e48 | 197f3f      |               |             |                   |         |            |            |               |
| ff49f5aa |             |               |             |                   |         |            |            |               |
| d        |             |               |             |                   |         |            |            |               |
| ab696f08 | bcc2d68d-cd | tripleo.barem | sub-        | da2c83ad-30bf-425 | SUCCESS | None       | 2016-09-27 | 2016-09-27    |
| -76a8    | da-4919-998 | etal.v1.set_n | workflow    | 1-8859-20c8d9b895 |         |            | 13:43:22   | 13:43:25      |
| -478c-8c | 4-00e03bbdf | ode_state     | execution   | cb                |         |            |            |               |
| d2-7b5bb | db8         |               |             |                   |         |            |            |               |
| 3f1fba0  |             |               |             |                   |         |            |            |               |
| 77ede1a7 | bcc2d68d-cd | tripleo.barem | sub-        | da2c83ad-30bf-425 | SUCCESS | None       | 2016-09-27 | 2016-09-27    |
| -4167-48 | da-4919-998 | etal.v1.set_n | workflow    | 1-8859-20c8d9b895 |         |            | 13:43:23   | 13:43:26      |
| 95-b786- | 4-00e03bbdf | ode_state     | execution   | cb                |         |            |            |               |
| 4ff65ce7 | db8         |               |             |                   |         |            |            |               |
| 1c6b     |             |               |             |                   |         |            |            |               |
| feec4ba0 | bcc2d68d-cd | tripleo.barem | sub-        | da2c83ad-30bf-425 | SUCCESS | None       | 2016-09-27 | 2016-09-27    |
| -7331    | da-4919-998 | etal.v1.set_n | workflow    | 1-8859-20c8d9b895 |         |            | 13:43:23   | 13:43:26      |
| -4adb-8a | 4-00e03bbdf | ode_state     | execution   | cb                |         |            |            |               |
| 08-352c3 | db8         |               |             |                   |         |            |            |               |
| 151965c  |             |               |             |                   |         |            |            |               |
| 08b899f5 | 9210744f-   | tripleo.plan_ |             |             | SUCCESS | None       | 2016-09-27 | 2016-09-27    |
| -0444-4e | 42f1-45de-  | management.v1 |             |                   |         |            | 14:09:19   | 14:09:26      |
| b8-8719- | 902a-a1e0a1 | .update_deplo |             |                   |         |            |            |               |
| a6eba9a8 | 4f91bb      | yment_plan    |             |                   |         |            |            |               |
| 1fa0     |             |               |             |                   |         |            |            |               |
| 1c291c68 | feef43e7-28 | tripleo.deplo |             |             | SUCCESS | None       | 2016-09-27 | 2016-09-27    |
| -aec7-49 | 65-4123-b0e | yment.v1.depl |             |                   |         |            | 14:09:26   | 14:09:40      |
| e8-836f- | 9-f4eaf6d5d | oy_plan       |             |                   |         |            |            |               |
| 658b0676 | 77d         |               |             |                   |         |            |            |               |
| 3c92     |             |               |             |                   |         |            |            |               |
+----------+-------------+---------------+-------------+-------------------+---------+------------+------------+---------------+
```

[stack@instack ~]$ sudo route add -net 192.0.2.0/24 gw 192.0.2.1 <=== No longer needed as of 09/29/2016
[stack@instack ~]$ sudo route -n
Kernel IP routing table
Destination Gateway Genmask Flags Metric Ref Use Iface
0.0.0.0 192.168.122.1 0.0.0.0 UG 0 0 0 eth0
10.0.0.0 0.0.0.0 255.255.255.0 U 0 0 0 vlan10
192.0.2.0 192.0.2.1 255.255.255.0 UG 0 0 0 br-ctlplane
192.0.2.0 0.0.0.0 255.255.255.0 U 0 0 0 br-ctlplane
192.168.122.0 0.0.0.0 255.255.255.0 U 0 0 0 eth0

[root@overcloud-controller-0 ~]# nova service-list
+----+------------------+-------------------------------------+----------+---------+-------+----------------------------+-----------------+
| Id | Binary           | Host                                | Zone     | Status  | State | Updated_at                 | Disabled Reason |
+----+------------------+-------------------------------------+----------+---------+-------+----------------------------+-----------------+
| 3  | nova-consoleauth | overcloud-controller-0.localdomain  | internal | enabled | up    | 2016-09-27T14:54:34.000000 | -               |
| 4  | nova-scheduler   | overcloud-controller-0.localdomain  | internal | enabled | up    | 2016-09-27T14:54:35.000000 | -               |
| 5  | nova-conductor   | overcloud-controller-0.localdomain  | internal | enabled | up    | 2016-09-27T14:54:27.000000 | -               |
| 6  | nova-compute     | overcloud-novacompute-0.localdomain | nova     | enabled | up    | 2016-09-27T14:54:26.000000 | -               |
| 7  | nova-compute     | overcloud-novacompute-1.localdomain | nova     | enabled | up    | 2016-09-27T14:54:27.000000 | -               |
+----+------------------+-------------------------------------+----------+---------+-------+----------------------------+-----------------+

****************************************************************************************************

Verification status VIP 10.0.0.4 via keepalived status on overcloud-controller-0.localdomain

****************************************************************************************************
[root@overcloud-controller-0 ~]# systemctl status keepalived -l
● keepalived.service - LVS and VRRP High Availability Monitor
Active: active (running) since Tue 2016-09-27 14:55:01 UTC; 2s ago
Process: 18503 ExecStart=/usr/sbin/keepalived $KEEPALIVED_OPTIONS (code=exited, status=0/SUCCESS)
Main PID: 850 (code=exited, status=0/SUCCESS)
CGroup: /system.slice/keepalived.service
├─18504 /usr/sbin/keepalived -D
├─18505 /usr/sbin/keepalived -D
└─18506 /usr/sbin/keepalived -D

Sep 27 14:55:03 overcloud-controller-0 Keepalived_vrrp[18506]: VRRP_Instance(51) Sending gratuitous ARPs on br-ex for 192.0.2.13
Sep 27 14:55:03 overcloud-controller-0 Keepalived_vrrp[18506]: VRRP_Instance(56) Entering MASTER STATE
Sep 27 14:55:03 overcloud-controller-0 Keepalived_vrrp[18506]: VRRP_Instance(56) setting protocol VIPs.
Sep 27 14:55:03 overcloud-controller-0 Keepalived_vrrp[18506]: VRRP_Instance(56) Sending gratuitous ARPs on vlan20 for 172.16.2.7
Sep 27 14:55:03 overcloud-controller-0 Keepalived_vrrp[18506]: VRRP_Instance(52) Entering MASTER STATE
Sep 27 14:55:03 overcloud-controller-0 Keepalived_vrrp[18506]: VRRP_Instance(52) setting protocol VIPs.
Sep 27 14:55:03 overcloud-controller-0 Keepalived_vrrp[18506]: VRRP_Instance(52) Sending gratuitous ARPs on br-ex for 10.0.0.4

```
********************************
Compute Node Status
********************************

Last login: Tue Sep 27 15:30:34 UTC 2016 on pts/0
[root@overcloud-novacompute-0 ~]# virsh --connect qemu:///system
Welcome to virsh, the virtualization interactive terminal.

Type:  'help' for help with commands
'quit' to quit

virsh # version
Compiled against library: libvirt 1.2.17
Using library: libvirt 1.2.17
Using API: QEMU 1.2.17
Running hypervisor: QEMU 2.3.0

virsh # list --all
Id    Name                           State
----------------------------------------------------
3     instance-00000001              running
```

TripleO deployment of ‘master’ branch via instack-virt-setup

September 16, 2016

UPDATE 09/23/2016

Fix released for (1622683, 1622720) in :-
****************************************************
Deploy completed OK the first time
****************************************************

2016-09-23 09:08:28Z [overcloud-AllNodesDeploySteps-yrsd7pkitjij]: CREATE_COMPLETE  Stack CREATE completed successfully
2016-09-23 09:08:28Z [AllNodesDeploySteps]: CREATE_COMPLETE  state changed
2016-09-23 09:08:28Z [overcloud]: CREATE_COMPLETE  Stack CREATE completed successfully
Stack overcloud CREATE_COMPLETE

Overcloud Endpoint: http://10.0.0.6:5000/v2.0
Overcloud Deployed

[stack@instack ~]$ nova list

```
+--------------------------------------+-------------------------+--------+------------+-------------+---------------------+
| ID                                   | Name                    | Status | Task State | Power State | Networks            |
+--------------------------------------+-------------------------+--------+------------+-------------+---------------------+
| b3d97bcf-9318-48ef-91c7-09c8386a75aa | overcloud-controller-0  | ACTIVE | -          | Running     | ctlplane=192.0.2.13 |
| 148aa223-513d-44d5-b865-2cb2c3dcbc6f | overcloud-novacompute-0 | ACTIVE | -          | Running     | ctlplane=192.0.2.9  |
| e3ee61fb-c243-4454-949d-84c22e66b147 | overcloud-novacompute-1 | ACTIVE | -          | Running     | ctlplane=192.0.2.10 |
+--------------------------------------+-------------------------+--------+------------+-------------+---------------------+
```

[stack@instack ~]$ mistral environment-list

```
+-----------+-------------+---------+---------------------+---------------------+
| Name      | Description | Scope   | Created at          | Updated at          |
+-----------+-------------+---------+---------------------+---------------------+
| overcloud | None        | private | 2016-09-23 07:33:40 | 2016-09-23 08:41:29 |
+-----------+-------------+---------+---------------------+---------------------+
```

[stack@instack ~]$ swift list
ov-jjf6fn4qyjt-0-gfpul73m4fdl-Controller-dekw3w5stcqd
ov-pb3uu5djue-0-lmazr26t3z4u-NovaCompute-sqfaz5lstqov
ov-pb3uu5djue-1-7prlyxolsdhd-NovaCompute-ltmkwmq74iyq
overcloud

[stack@instack ~]$ openstack stack delete overcloud
WARNING: openstackclient.common.utils is deprecated and will be removed after Jun 2017. Please use osc_lib.utils
Are you sure you want to delete this stack(s) [y/N]? y
[stack@instack ~]$ openstack stack list
WARNING: openstackclient.common.utils is deprecated and will be removed after Jun 2017. Please use osc_lib.utils

```
+---------------------+------------+--------------------+----------------------+--------------+
| ID                  | Stack Name | Stack Status       | Creation Time        | Updated Time |
+---------------------+------------+--------------------+----------------------+--------------+
| 6e3ae2b6-5ce1-45db- | overcloud  | DELETE_IN_PROGRESS | 2016-09-23T08:41:38Z | None         |
| bde5-06d2ce2e571b   |            |                    |                      |              |
+---------------------+------------+--------------------+----------------------+--------------+
```

[stack@instack ~]$ openstack stack list

WARNING: openstackclient.common.utils is deprecated and will be removed after Jun 2017. Please use osc_lib.utils

***************************************************************************
Empty output – overcloud stack has been deleted
****************************************************************************

[stack@instack ~]$ mistral environment-list

```
+-----------+-------------+---------+---------------------+---------------------+
| Name      | Description | Scope   | Created at          | Updated at          |
+-----------+-------------+---------+---------------------+---------------------+
| overcloud | None        | private | 2016-09-23 07:33:40 | 2016-09-23 08:41:29 |
+-----------+-------------+---------+---------------------+---------------------+
```

[stack@instack ~]$ swift list

overcloud

******************************************************************************
Now attempt to redeploy a second time. Success on 09/23/2016
******************************************************************************

[stack@instack ~]$ touch -f /home/stack/tripleo-heat-templates/puppet/post.yaml
[stack@instack ~]$ ./overcloud-deploy.sh
+ source /home/stack/stackrc
++ export NOVA_VERSION=1.1

++ NOVA_VERSION=1.1
++ export OS_AUTH_URL=http://192.0.2.1:5000/v2.0
++ OS_AUTH_URL=http://192.0.2.1:5000/v2.0
++ export COMPUTE_API_VERSION=1.1
++ COMPUTE_API_VERSION=1.1
++ export OS_BAREMETAL_API_VERSION=1.15
++ OS_BAREMETAL_API_VERSION=1.15
++ export OS_NO_CACHE=True
++ OS_NO_CACHE=True
++ export OS_CLOUDNAME=undercloud
++ OS_CLOUDNAME=undercloud
++ export OS_IMAGE_API_VERSION=1
++ OS_IMAGE_API_VERSION=1
+ openstack overcloud deploy --libvirt-type qemu --ntp-server pool.ntp.org --templates /home/stack/tripleo-heat-templates -e /home/stack/tripleo-heat-templates/overcloud-resource-registry-puppet.yaml -e /home/stack/tripleo-heat-templates/environments/network-isolation.yaml -e /home/stack/tripleo-heat-templates/environments/net-single-nic-with-vlans.yaml -e /home/stack/network_env.yaml --control-scale 1 --compute-scale 2

WARNING: openstackclient.common.utils is deprecated and will be removed after Jun 2017. Please use osc_lib.utils
Removing the current plan files
Started Mistral Workflow. Execution ID: 4d744a89-a2e7-43a5-82af-26bab11e6342
Plan updated
Deploying templates in the directory /home/stack/tripleo-heat-templates
Object GET failed: http://192.0.2.1:8080/v1/AUTH_7ea6220c67c84c828f4249b95886259f/overcloud/overcloud-without-mergepy.yaml 404 Not Found  [first 60 chars of response]

Started Mistral Workflow. Execution ID: 807a7047-a1c3-4686-9be7-11d73e72dfb8
2016-09-23 09:15:34Z [overcloud]: CREATE_IN_PROGRESS  Stack CREATE started
2016-09-23 09:15:34Z [HorizonSecret]: CREATE_IN_PROGRESS  state changed
2016-09-23 09:15:34Z [RabbitCookie]: CREATE_IN_PROGRESS  state changed
2016-09-23 09:15:35Z [ServiceNetMap]: CREATE_IN_PROGRESS  state changed
2016-09-23 09:15:35Z [MysqlRootPassword]: CREATE_IN_PROGRESS  state changed
2016-09-23 09:15:35Z [HeatAuthEncryptionKey]: CREATE_IN_PROGRESS  state changed
2016-09-23 09:15:35Z [PcsdPassword]: CREATE_IN_PROGRESS  state changed
2016-09-23 09:15:35Z [Networks]: CREATE_IN_PROGRESS  state changed
2016-09-23 09:15:35Z [ServiceNetMap]: CREATE_COMPLETE  state changed
2016-09-23 09:15:35Z [RabbitCookie]: CREATE_COMPLETE  state changed
2016-09-23 09:15:35Z [HeatAuthEncryptionKey]: CREATE_COMPLETE  state changed
2016-09-23 09:15:35Z [PcsdPassword]: CREATE_COMPLETE  state changed
2016-09-23 09:15:35Z [HorizonSecret]: CREATE_COMPLETE  state changed

. . . . . .

2016-09-23 09:39:50Z [BlockStorageExtraConfigPost]: CREATE_COMPLETE  state changed
2016-09-23 09:39:51Z [CephStorageExtraConfigPost]: CREATE_COMPLETE  state changed
2016-09-23 09:39:51Z [ComputeExtraConfigPost]: CREATE_COMPLETE  state changed
2016-09-23 09:39:51Z [ObjectStorageExtraConfigPost]: CREATE_COMPLETE  state changed
2016-09-23 09:39:51Z [overcloud-AllNodesDeploySteps-5bfecsxdagiz]: CREATE_COMPLETE  Stack CREATE completed successfully
2016-09-23 09:39:51Z [AllNodesDeploySteps]: CREATE_COMPLETE  state changed
2016-09-23 09:39:51Z [overcloud]: CREATE_COMPLETE  Stack CREATE completed successfully

Stack overcloud CREATE_COMPLETE
Overcloud Endpoint: http://10.0.0.12:5000/v2.0
Overcloud Deployed

END UPDATE

UPDATE 09/21/2016

Workaround for 1622720 which allows redeploying a second time

During run time :-
[stack@instack ~]$ mistral environment-list
+-----------+-------------+---------+---------------------+---------------------+
| Name      | Description | Scope   | Created at          | Updated at          |
+-----------+-------------+---------+---------------------+---------------------+
| overcloud | None        | private | 2016-09-21 12:35:43 | 2016-09-21 12:35:51 |
+-----------+-------------+---------+---------------------+---------------------+

[stack@instack ~]$ swift list
ov-a2o6ekfrck5-0-zesuo2wtu2ed-Controller-ushkojdgxsim
ov-yfn5tgwipf-0-jebdxn5jfduz-NovaCompute-4hjdhzij3czv
ov-yfn5tgwipf-1-vypbavbviwxv-NovaCompute-luo274m3kmn2
overcloud

Here is a snapshot which is evidence of the bug

[stack@instack ~]$ mistral environment-delete overcloud
Request to delete environment overcloud has been accepted.
[stack@instack ~]$ swift delete --all

$ touch -f /home/stack/tripleo-heat-templates/puppet/post.yaml
$ overcloud-deploy.sh
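
The workaround steps above can be gathered into one small script. This is a hedged sketch consolidating the sequence shown (Mistral environment delete, Swift container cleanup, touching post.yaml, redeploy); the `DRY_RUN` guard is my own addition so the sequence can be inspected before it touches a live undercloud:

```shell
#!/bin/sh
# Workaround sketch for LP#1622720: clean up the leftover Mistral environment
# and Swift objects so that overcloud-deploy.sh can be run a second time.
# DRY_RUN=1 (the default here) only prints the commands.
DRY_RUN=${DRY_RUN:-1}

run() {
    if [ "$DRY_RUN" -eq 1 ]; then
        echo "WOULD RUN: $*"
    else
        "$@"
    fi
}

# Undercloud credentials are needed only for a real run
if [ "$DRY_RUN" -ne 1 ]; then
    . /home/stack/stackrc
fi

run mistral environment-delete overcloud
run swift delete --all
run touch -f /home/stack/tripleo-heat-templates/puppet/post.yaml
run ./overcloud-deploy.sh
```

Run it once with `DRY_RUN=1` to review the plan, then `DRY_RUN=0 ./cleanup-redeploy.sh` on the undercloud.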

See following bugs at Launchpad :-

END UPDATE

After the Launchpad bug "introspection hangs due to broken ipxe config" was
finally resolved on 09/01/2016, the approach suggested in
TripleO manual deployment of ‘master’ branch by Carlo Camacho
has been retested. Things appear to have changed in the meantime; following below is the way the post mentioned above worked for me right now on a 32 GB VIRTHOST (i7 4790)

*****************************************
Tune stack environment on VIRTHOST
*****************************************

# echo "stack:stack" | chpasswd
# echo "stack ALL=(root) NOPASSWD:ALL" | sudo tee -a /etc/sudoers.d/stack
# chmod 0440 /etc/sudoers.d/stack
# su - stack

***************************
Tune stack ENV
**************************

export NODE_DIST=centos7
export NODE_CPU=2
export NODE_MEM=7550
export NODE_COUNT=2
export UNDERCLOUD_NODE_CPU=2
export UNDERCLOUD_NODE_MEM=9000
export FS_TYPE=ext4
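
With these values the virtual environment needs roughly NODE_COUNT × NODE_MEM + UNDERCLOUD_NODE_MEM megabytes of RAM. A quick sanity check that the budget fits the 32 GB VIRTHOST (variable names mirror the exports above):

```shell
# Memory budget for the instack-virt-setup environment (values from the exports above)
NODE_MEM=7550            # MB per overcloud node
NODE_COUNT=2             # number of overcloud nodes
UNDERCLOUD_NODE_MEM=9000 # MB for the undercloud (instack) VM

TOTAL_MB=$(( NODE_COUNT * NODE_MEM + UNDERCLOUD_NODE_MEM ))
echo "VMs need ${TOTAL_MB} MB"   # 24100 MB, leaving ~8 GB for the VIRTHOST itself
```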

****************************************************************

Re-login to stack (highlight long line and copy if needed)

****************************************************************
```
$ sudo yum -y install epel-release
$ sudo yum -y install yum-plugin-priorities
$ sudo curl -o /etc/yum.repos.d/delorean.repo http://buildlogs.centos.org/centos/7/cloud/x86_64/rdo-trunk-master-tripleo/delorean.repo
$ sudo curl -o /etc/yum.repos.d/delorean-deps.repo http://trunk.rdoproject.org/centos7/delorean-deps.repo
$ sudo yum install -y instack-undercloud
$ instack-virt-setup
```
*********************

On instack VM

*********************

Create swap file per http://www.anstack.com/blog/2016/07/04/manually-installing-tripleo-recipe.html  :-

#Add a 4GB swap file to the Undercloud
sudo dd if=/dev/zero of=/swapfile bs=1024 count=4194304
sudo mkswap /swapfile
#Turn ON the swap file
sudo chmod 600 /swapfile
sudo swapon /swapfile
#Enable it on start
sudo echo "/swapfile   swap   swap    defaults        0 0" >> /etc/fstab
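
The `dd` parameters above yield a 4 GiB swap file (bs=1024 bytes × count=4194304 blocks); as an aside, `sudo echo ... >> /etc/fstab` only works when the calling shell is already root, because the redirection is performed by the shell, not by sudo, so `echo ... | sudo tee -a /etc/fstab` is the safer form. A quick check of the size arithmetic:

```shell
# Size of the swap file created by the dd command above: bs * count
BS=1024
COUNT=4194304
BYTES=$(( BS * COUNT ))
GIB=$(( BYTES / 1024 / 1024 / 1024 ))
echo "swapfile: ${BYTES} bytes (${GIB} GiB)"
```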

***************************
Restart instack VM
***************************
Next
su - stack
sudo yum -y install yum-plugin-priorities

*************************************
Update .bashrc under ~stack/
*************************************
```
export USE_DELOREAN_TRUNK=1
export DELOREAN_TRUNK_REPO="http://buildlogs.centos.org/centos/7/cloud/x86_64/rdo-trunk-master-tripleo/"
export DELOREAN_REPO_FILE="delorean.repo"
export FS_TYPE=ext4
```
************************************

************************************

$ ./tripleo-ci/scripts/tripleo.sh --repo-setup
$ ./tripleo-ci/scripts/tripleo.sh --undercloud
$ source stackrc
$ ./tripleo-ci/scripts/tripleo.sh --overcloud-images
$ ./tripleo-ci/scripts/tripleo.sh --register-nodes
$ ./tripleo-ci/scripts/tripleo.sh --introspect-nodes

************************************************

Passing the step affected by the bug mentioned above

************************************************

$ ./tripleo-ci/scripts/tripleo.sh --overcloud-deploy

Issue at start up of Overcloud deployment

###########################################################################################
tripleo.sh — Overcloud create started.
###########################################################################################
Rebuilding openstack-tripleo-common-5.0.1-0.20160917031337.15c97e6.el7.centos.src.rpm and reinstalling the new rpm did not work for me.
###########################################################################################
```
WARNING: openstackclient.common.utils is deprecated and will be removed after Jun 2017. Please use osc_lib.utils
WARNING: openstackclient.common.exceptions is deprecated and will be removed after Jun 2017. Please use osc_lib.exceptions
Creating Swift container to store the plan
Creating plan from template files in: /usr/share/openstack-tripleo-heat-templates/
Plan created
Deploying templates in the directory /usr/share/openstack-tripleo-heat-templates
Object GET failed: http://192.0.2.1:8080/v1/AUTH_b4438648a72446eca04d2d216261c373/overcloud/overcloud-without-mergepy.yaml 404 Not Found  [first 60 chars of response]
```

Finally the overcloud gets deployed

****************************************************************************************

On the instack VM, verified https://bugs.launchpad.net/tripleo/+bug/1604770 #9
****************************************************************************************

[stack@instack ~]$ sudo su -
Last login: Thu Sep 15 16:19:07 UTC 2016 from 192.168.122.1 on pts/1
[root@instack ~]# rpm -qa \*ipxe\*
ipxe-roms-qemu-20160127-1.git6366fa7a.el7.noarch
ipxe-bootimgs-20160127-1.git6366fa7a.el7.noarch

[stack@instack ~]$ openstack stack list

WARNING: openstackclient.common.utils is deprecated and will be removed after Jun 2017. Please use osc_lib.utils

+---------------------+------------+-----------------+----------------------+--------------+
| ID                  | Stack Name | Stack Status    | Creation Time        | Updated Time |
+---------------------+------------+-----------------+----------------------+--------------+
| 7657df62-da09-4c0f- | overcloud  | CREATE_COMPLETE | 2016-09-15T14:48:49Z | None         |
| bbdb-b9c95bdad537   |            |                 |                      |              |
+---------------------+------------+-----------------+----------------------+--------------+

[stack@instack ~]$ nova list

+--------------------------------------+-------------------------+--------+------------+-------------+---------------------+
| ID                                   | Name                    | Status | Task State | Power State | Networks            |
+--------------------------------------+-------------------------+--------+------------+-------------+---------------------+
| 400e1499-5e02-4c92-a41b-814918f0edc3 | overcloud-controller-0  | ACTIVE | -          | Running     | ctlplane=192.0.2.15 |
| 58f3591f-c72f-4d97-9278-a33b3f631248 | overcloud-novacompute-0 | ACTIVE | -          | Running     | ctlplane=192.0.2.6  |
+--------------------------------------+-------------------------+--------+------------+-------------+---------------------+

Managing the overcloud and fixes required in it

********************************************************************
Fix IP on Compute node && Open 6080 on Controller
********************************************************************

On Compute :-

[vnc]
vncserver_listen=0.0.0.0
keymap=en-us
enabled=True
novncproxy_base_url=http://192.0.2.15:6080/vnc_auto.html <===

On Controller

-A INPUT -p tcp -m multiport --dports 6080 -m comment --comment "novncproxy" -m state --state NEW -j ACCEPT
Save /etc/sysconfig/iptables

#service iptables restart

[root@overcloud-controller-0 ~(keystone_admin)]# netstat -antp | grep 6080

tcp        0      0 192.0.2.15:6080         0.0.0.0:*               LISTEN      8397/python2
tcp        1      0 192.0.2.8:56080         192.0.2.8:8080          CLOSE_WAIT  11606/gnocchi-metri
tcp        0      0 192.0.2.15:6080         192.0.2.1:47598         ESTABLISHED 28260/python2
tcp        0      0 192.0.2.15:6000         192.0.2.15:36080        TIME_WAIT   -

[root@overcloud-controller-0 ~(keystone_admin)]# ps -ef | grep 8397

nova      8397      1  0 15:06 ?        00:00:05 /usr/bin/python2 /usr/bin/nova-novncproxy --web /usr/share/novnc/
nova      28260  8397  3 17:37 ?        00:00:56 /usr/bin/python2 /usr/bin/nova-novncproxy --web /usr/share/novnc/
root      31149 23941  0 18:06 pts/0    00:00:00 grep --color=auto 8397

**********************************
Create flavors as follows
**********************************

[root@overcloud-controller-0 ~]# nova flavor-create "m2.small" 2 1000 20 1

+----+----------+-----------+------+-----------+------+-------+-------------+-----------+
| ID | Name     | Memory_MB | Disk | Ephemeral | Swap | VCPUs | RXTX_Factor | Is_Public |
+----+----------+-----------+------+-----------+------+-------+-------------+-----------+
| 2  | m2.small | 1000      | 20   | 0         |      | 1     | 1.0         | True      |
+----+----------+-----------+------+-----------+------+-------+-------------+-----------+
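
For reference, the positional arguments of `nova flavor-create` are name, ID, RAM in MB, root disk in GB and vCPU count. A sketch making that explicit with named variables; the command is only assembled and echoed here, since executing it needs a live overcloud (the values mirror the m2.small flavor above):

```shell
# nova flavor-create <name> <id> <ram_mb> <disk_gb> <vcpus>
NAME="m2.small"
ID=2
RAM_MB=1000    # shows up as Memory_MB in flavor-list
DISK_GB=20
VCPUS=1

CMD="nova flavor-create ${NAME} ${ID} ${RAM_MB} ${DISK_GB} ${VCPUS}"
echo "$CMD"
```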

[root@overcloud-controller-0 ~]# nova flavor-list

+--------------------------------------+---------------------+-----------+------+-----------+------+-------+-------------+-----------+
| ID                                   | Name                | Memory_MB | Disk | Ephemeral | Swap | VCPUs | RXTX_Factor | Is_Public |
+--------------------------------------+---------------------+-----------+------+-----------+------+-------+-------------+-----------+
| 1                                    | 500MB Tiny Instance | 500       | 1    | 0         |      | 1     | 1.0         | True      |
| 2                                    | m2.small            | 1000      | 20   | 0         |      | 1     | 1.0         | True      |
+--------------------------------------+---------------------+-----------+------+-----------+------+-------+-------------+-----------+

[root@overcloud-controller-0 ~]# nova flavor-list

+----+---------------------+-----------+------+-----------+------+-------+-------------+-----------+
| ID | Name                | Memory_MB | Disk | Ephemeral | Swap | VCPUs | RXTX_Factor | Is_Public |
+----+---------------------+-----------+------+-----------+------+-------+-------------+-----------+
| 1  | 500MB Tiny Instance | 500       | 1    | 0         |      | 1     | 1.0         | True      |
| 2  | m2.small            | 1000      | 20   | 0         |      | 1     | 1.0         | True      |
+----+---------------------+-----------+------+-----------+------+-------+-------------+-----------+

[root@overcloud-controller-0 ~]# glance image-list

+--------------------------------------+---------------+
| ID                                   | Name          |
+--------------------------------------+---------------+
| c9faf86d-4a06-401a-839c-c5bd48ff704a | CirrOS34Cloud |
| 4bf6f43d-8cba-43d7-9e34-347cff2d4769 | UbuntuCloud   |
| 81e031b0-11b7-440b-946f-b8f9e3a83c95 | VF24Cloud     |
+--------------------------------------+---------------+

[root@overcloud-controller-0 ~]# neutron net-list

+--------------------------------------+--------------+----------------------------------------+
| id                                   | name         | subnets                                |
+--------------------------------------+--------------+----------------------------------------+
| 2d0ccb5f-0cc8-4710-819d-7c148137aea2 | public       | 795e0fea-0550-44e8-abf3-afd316cd7843   |
|                                      |              | 192.0.2.0/24                           |
| e2a9edb9-8e01-4e99-83b2-6c6e705967fe | demo_network | 56b70753-e776-4ce8-9b28-650431b43a63   |
|                                      |              | 50.0.0.0/24                            |
+--------------------------------------+--------------+----------------------------------------+

[root@overcloud-controller-0 ~]# nova boot --flavor 2 --key-name oskey09152016 \
--image 81e031b0-11b7-440b-946f-b8f9e3a83c95 \
--nic net-id=e2a9edb9-8e01-4e99-83b2-6c6e705967fe  VF24Devs05

+--------------------------------------+--------------------------------------------------+
| Property                             | Value                                            |
+--------------------------------------+--------------------------------------------------+
| OS-DCF:diskConfig                    | MANUAL                                           |
| OS-EXT-AZ:availability_zone          |                                                  |
| OS-EXT-SRV-ATTR:host                 | -                                                |
| OS-EXT-SRV-ATTR:hostname             | vf24devs05                                       |
| OS-EXT-SRV-ATTR:hypervisor_hostname  | -                                                |
| OS-EXT-SRV-ATTR:instance_name        |                                                  |
| OS-EXT-SRV-ATTR:kernel_id            |                                                  |
| OS-EXT-SRV-ATTR:launch_index         | 0                                                |
| OS-EXT-SRV-ATTR:ramdisk_id           |                                                  |
| OS-EXT-SRV-ATTR:reservation_id       | r-psorddod                                       |
| OS-EXT-SRV-ATTR:root_device_name     | -                                                |
| OS-EXT-SRV-ATTR:user_data            | -                                                |
| OS-EXT-STS:power_state               | 0                                                |
| OS-EXT-STS:vm_state                  | building                                         |
| OS-SRV-USG:launched_at               | -                                                |
| OS-SRV-USG:terminated_at             | -                                                |
| accessIPv4                           |                                                  |
| accessIPv6                           |                                                  |
| config_drive                         |                                                  |
| created                              | 2016-09-15T12:01:34Z                             |
| description                          | -                                                |
| flavor                               | m2.small (2)                                     |
| hostId                               |                                                  |
| host_status                          |                                                  |
| id                                   | 212e06de-e971-428b-9e94-79dc8d91b6db             |
| image                                | VF24Cloud (81e031b0-11b7-440b-946f-b8f9e3a83c95) |
| key_name                             | oskey09152016                                    |
| locked                               | False                                            |
| name                                 | VF24Devs05                                       |
| os-extended-volumes:volumes_attached | []                                               |
| progress                             | 0                                                |
| security_groups                      | default                                          |
| status                               | BUILD                                            |
| tags                                 | []                                               |
| tenant_id                            | a1c9c1c1a1134384b4a496d585981aff                 |
| updated                              | 2016-09-15T12:01:34Z                             |
| user_id                              | e2383104829c45e1a3d70e11cc87d399                 |
+--------------------------------------+--------------------------------------------------+

[root@overcloud-controller-0 ~]# nova list

+--------------------------------------+-------------+--------+------------+-------------+-------------------------------------+
| ID                                   | Name        | Status | Task State | Power State | Networks                            |
+--------------------------------------+-------------+--------+------------+-------------+-------------------------------------+
| c7cea368-9602-421d-beb3-c0ed37379b57 | CirrOSDevs1 | ACTIVE | -          | Running     | demo_network=50.0.0.17, 192.0.2.104 |
| 212e06de-e971-428b-9e94-79dc8d91b6db | VF24Devs05  | BUILD  | spawning   | NOSTATE     | demo_network=50.0.0.15              |
+--------------------------------------+-------------+--------+------------+-------------+-------------------------------------+

[root@overcloud-controller-0 ~]# nova list

+--------------------------------------+-------------+--------+------------+-------------+-------------------------------------+
| ID                                   | Name        | Status | Task State | Power State | Networks                            |
+--------------------------------------+-------------+--------+------------+-------------+-------------------------------------+
| c7cea368-9602-421d-beb3-c0ed37379b57 | CirrOSDevs1 | ACTIVE | -          | Running     | demo_network=50.0.0.17, 192.0.2.104 |
| 212e06de-e971-428b-9e94-79dc8d91b6db | VF24Devs05  | ACTIVE | -          | Running     | demo_network=50.0.0.15              |
+--------------------------------------+-------------+--------+------------+-------------+-------------------------------------+

Another option activate vlan10 following
http://bderzhavets.blogspot.com/2016/07/stable-mitaka-ha-instack-virt-setup.html
run following deployment with network isolation activated :-

#!/bin/bash -x
source /home/stack/stackrc
openstack overcloud deploy \
--control-scale 1 --compute-scale 1 \
--libvirt-type qemu \
--ntp-server pool.ntp.org \
--templates /home/stack/tripleo-heat-templates \
-e /home/stack/tripleo-heat-templates/overcloud-resource-registry-puppet.yaml \
-e /home/stack/tripleo-heat-templates/environments/network-isolation.yaml \
-e /home/stack/tripleo-heat-templates/environments/net-single-nic-with-vlans.yaml \
-e $HOME/network_env.yaml
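A deployment like the one above fails late if one of the files passed via `-e` is missing, so a small pre-flight check can save a long wait. A minimal sketch (the `check_env_files` helper is my own, not part of TripleO):

```shell
# Hypothetical pre-flight helper (not part of TripleO): verify that every
# environment file to be passed via -e exists before starting the deploy.
check_env_files() {
  for f in "$@"; do
    if [ ! -f "$f" ]; then
      echo "missing: $f" >&2
      return 1
    fi
  done
  return 0
}

# Example (same paths as the deploy script above):
# check_env_files \
#   /home/stack/tripleo-heat-templates/environments/network-isolation.yaml \
#   "$HOME/network_env.yaml" || exit 1
```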

The presence of overcloud-resource-registry-puppet.yaml appears to be what lets the overcloud deployment succeed even though overcloud-without-mergepy.yaml was not found at its usual location

August 16, 2016

sshuttle may be installed on Fedora 24 via a straightforward `dnf -y install sshuttle`.
[Fedora 24 Update: sshuttle-0.78.0-2.fc24].
https://lists.fedoraproject.org/pipermail/package-announce/2016-April/182490.html
So, when F24 has been set up as the workstation for a TripleO QuickStart deployment to VIRTHOST, there is no need to install the FoxyProxy add-on and tune it in Firefox, or to connect from the ansible workstation to the undercloud via `$ ssh -F ~/.quickstart/ssh.config.ansible undercloud -D 9090`

What is sshuttle? It's a Python app that uses SSH to create a quick and dirty VPN between your Linux, BSD, or Mac OS X machine and a remote system that has SSH access and Python. Licensed under the GPLv2, sshuttle is a transparent proxy server that lets users fake a VPN with minimal hassle.

========================================
First install and start sshuttle on Fedora 24 :-
========================================

[boris@fedora24wks ~]$ sudo dnf -y install sshuttle
[root@fedora24wks ~]# rpm -qa \*sshuttle\*
sshuttle-0.78.0-2.fc24.noarch

========================================================
Now start sshuttle via ssh.config.ansible, where 10.0.0.0/24 is the external network of the overcloud already set up on VIRTHOST
========================================================

[boris@fedora24wks ~]$ sshuttle -e "ssh -F $HOME/.quickstart/ssh.config.ansible" -r undercloud -v 10.0.0.0/24 &

[3] 16385

[boris@fedora24wks ~]$ Starting sshuttle proxy.
firewall manager: Starting firewall with Python version 3.5.1
firewall manager: ready method name nat.
IPv6 enabled: False
UDP enabled: False
DNS enabled: False
TCP redirector listening on ('127.0.0.1', 12299).
Starting client with Python version 3.5.1
c : connecting to server…
Warning: Permanently added '192.168.1.74' (ECDSA) to the list of known hosts.
Warning: Permanently added 'undercloud' (ECDSA) to the list of known hosts.
Starting server with Python version 2.7.5
s: latency control setting = True
s: available routes:
s: 2/10.0.0.0/24
s: 2/192.0.2.0/24
s: 2/192.168.23.0/24
s: 2/192.168.122.0/24
c : Connected.
firewall manager: setting up.
>> iptables -t nat -N sshuttle-12299
>> iptables -t nat -F sshuttle-12299
>> iptables -t nat -I OUTPUT 1 -j sshuttle-12299
>> iptables -t nat -I PREROUTING 1 -j sshuttle-12299
>> iptables -t nat -A sshuttle-12299 -j REDIRECT --dest 10.0.0.0/24 -p tcp --to-ports 12299 -m ttl ! --ttl 42
>> iptables -t nat -A sshuttle-12299 -j RETURN --dest 127.0.0.1/8 -p tcp
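If sshuttle exits uncleanly, the NAT chain above may be left behind. The corresponding teardown can be sketched as follows (chain name `sshuttle-12299` is taken from the log above; with `DRY_RUN=1`, the default here, the commands are only printed, so this is a reviewable sketch rather than something to run as root blindly):

```shell
# Sketch of cleaning up a leftover sshuttle NAT chain.  DRY_RUN=1 (the
# default) prints each iptables command instead of executing it.
DRY_RUN=${DRY_RUN:-1}
run() {
  if [ "$DRY_RUN" = 1 ]; then echo "$@"; else "$@"; fi
}

teardown_sshuttle_chain() {  # usage: teardown_sshuttle_chain <chain>
  chain=$1
  run iptables -t nat -D OUTPUT -j "$chain"       # unhook from OUTPUT
  run iptables -t nat -D PREROUTING -j "$chain"   # unhook from PREROUTING
  run iptables -t nat -F "$chain"                 # flush the chain
  run iptables -t nat -X "$chain"                 # delete the chain
}

teardown_sshuttle_chain sshuttle-12299
```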
c : Accept TCP: 192.168.1.13:53068 -> 10.0.0.4:80.
c : warning: closed channel 1 got cmd=TCP_STOP_SENDING len=0
c : Accept TCP: 192.168.1.13:53072 -> 10.0.0.4:80.
s: SW’unknown’:Mux#1: deleting (3 remain)
s: SW#6:10.0.0.4:80: deleting (2 remain)
c : warning: closed channel 2 got cmd=TCP_STOP_SENDING len=0
c : Accept TCP: 192.168.1.13:53074 -> 10.0.0.4:80.
s: SW’unknown’:Mux#2: deleting (3 remain)
s: SW#7:10.0.0.4:80: deleting (2 remain)
c : Accept TCP: 192.168.1.13:58210 -> 10.0.0.4:6080.
c : Accept TCP: 192.168.1.13:58212 -> 10.0.0.4:6080.
c : SW’unknown’:Mux#2: deleting (9 remain)
c : SW#11:192.168.1.13:53072: deleting (8 remain)
c : SW’unknown’:Mux#1: deleting (7 remain)
c : SW#9:192.168.1.13:53068: deleting (6 remain)
c : Accept TCP: 192.168.1.13:58214 -> 10.0.0.4:6080.
c : Accept TCP: 192.168.1.13:58216 -> 10.0.0.4:6080.
c : warning: closed channel 4 got cmd=TCP_STOP_SENDING len=0
s: warning: closed channel 4 got cmd=TCP_STOP_SENDING len=0

This creates a transparent proxy server on your local machine for all IP addresses that match 10.0.0.0/24. Any TCP session you initiate to one of the proxied IP addresses will be captured by sshuttle and sent over an ssh session to the remote copy of sshuttle, which will then regenerate the connection on that end, and funnel the data back and forth through ssh. There is no need to install sshuttle on the remote server; the remote server just needs to have python available. sshuttle will automatically upload and run its source code to the remote python.
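Only destinations inside the given subnet (10.0.0.0/24 here) are captured; everything else goes out directly. For illustration only (this is not sshuttle's own code), the CIDR membership test behind that decision can be sketched in shell:

```shell
# Illustrative only: decide whether an IPv4 address falls inside a CIDR,
# i.e. the kind of test that determines whether a connection is proxied.
ip_to_int() (
  IFS=.
  set -- $1
  echo $(( ($1 << 24) + ($2 << 16) + ($3 << 8) + $4 ))
)

in_cidr() {  # usage: in_cidr <ip> <network/bits>
  ip=$(ip_to_int "$1")
  net=$(ip_to_int "${2%/*}")
  bits=${2#*/}
  mask=$(( (0xFFFFFFFF << (32 - bits)) & 0xFFFFFFFF ))
  [ $(( ip & mask )) -eq $(( net & mask )) ]
}

in_cidr 10.0.0.4 10.0.0.0/24 && echo proxied || echo direct   # prints: proxied
```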

So, disable or remove the FoxyProxy add-on from Firefox (if it has been set up) and interrupt the connection from the workstation to the undercloud via `ssh -F ~/.quickstart/ssh.config.ansible undercloud -D 9090`. Restart Firefox and point the browser to http://10.0.0.4/dashboard

TripleO QuickStart HA Setup && Keeping undercloud persistent between cold reboots

June 25, 2016

This post follows up http://lxer.com/module/newswire/view/230814/index.html and might work as a time saver, unless the status of undercloud.qcow2 per http://artifacts.ci.centos.org/artifacts/rdo/images/mitaka/delorean/stable/ requires a fresh installation from scratch.
So, we intend to survive a VIRTHOST cold reboot (downtime), keep the previous version of the undercloud VM and be able to bring it up without rebuilding via quickstart.sh, then resume by logging into the undercloud and immediately running the overcloud deployment. Proceed as follows :-

1. System shutdown

Cleanly delete the overcloud stack :-
[stack@undercloud ~]$ openstack stack delete overcloud
2. Login into VIRTHOST as stack and gracefully shutdown the undercloud VM
[stack@ServerCentOS72 ~]$ virsh shutdown undercloud

**************************************
Shutdown and bring up VIRTHOST
**************************************
Login as root to VIRTHOST :-
[boris@ServerCentOS72 ~]$ sudo su -
Last login: Fri Jun 24 16:47:25 MSK 2016 on pts/0

********************************************************************************
This is the core step: do not create /run/user/1001/libvirt as root with
appropriate permissions; just set the correct permissions on /run/user.
This will allow "stack" to issue `virsh list --all` and to create
/run/user/1001/libvirt himself. The rest works fine for me.
********************************************************************************

[root@ServerCentOS72 ~]# chown -R stack /run/user
[root@ServerCentOS72 ~]# chgrp -R stack /run/user
[root@ServerCentOS72 ~]# ls -ld  /run/user
drwxr-xr-x. 3 stack stack 60 Jun 24 20:01 /run/user

[root@ServerCentOS72 ~]# su - stack
Last login: Fri Jun 24 16:48:09 MSK 2016 on pts/0
[stack@ServerCentOS72 ~]$ virsh list --all

Id    Name                           State
----------------------------------------------------
 -     compute_0                      shut off
 -     compute_1                      shut off
 -     control_0                      shut off
 -     control_1                      shut off
 -     control_2                      shut off
 -     undercloud                     shut off

**********************
Make sure :-
**********************
[stack@ServerCentOS72 ~]$ ls -ld /run/user/1001/libvirt
drwx------. 6 stack stack 160 Jun 24 21:38 /run/user/1001/libvirt

[stack@ServerCentOS72 ~]$ virsh start undercloud
Domain undercloud started

[stack@ServerCentOS72 ~]$ virsh list --all
 Id    Name                           State
---------------------------------------------------------------
 2     undercloud                     running
 -     compute_0                      shut off
 -     compute_1                      shut off
 -     control_0                      shut off
 -     control_1                      shut off
 -     control_2                      shut off

Wait about 5 min and access the undercloud from workstation by:-

[boris@fedora22wks tripleo-quickstart]$ ssh -F /home/boris/.quickstart/ssh.config.ansible undercloud
Warning: Permanently added '192.168.1.75' (ECDSA) to the list of known hosts.
Warning: Permanently added 'undercloud' (ECDSA) to the list of known hosts.
Last login: Fri Jun 24 15:34:40 2016 from gateway
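Instead of waiting a fixed interval for the undercloud to come up, it can be polled until ssh actually answers. A generic sketch (the `wait_for` helper, timeout and interval are my own choices):

```shell
# Hypothetical helper: retry a command once per second until it succeeds
# or the timeout (in seconds) expires.
wait_for() {  # usage: wait_for <timeout-seconds> <command...>
  t=$1; shift
  while [ "$t" -gt 0 ]; do
    if "$@" >/dev/null 2>&1; then return 0; fi
    sleep 1
    t=$((t - 1))
  done
  return 1
}

# Example: poll the undercloud instead of sleeping 5 minutes:
# wait_for 300 ssh -F /home/boris/.quickstart/ssh.config.ansible undercloud true
```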

[stack@undercloud ~]$ ls -l
total 1640244

-rw-rw-r--. 1 stack stack   13287936 Jun 24 13:10 cirros.img
-rw-rw-r--. 1 stack stack    3740163 Jun 24 13:10 cirros.initramfs
-rw-rw-r--. 1 stack stack    4979632 Jun 24 13:10 cirros.kernel
-rw-rw-r--. 1  1001  1001      21769 Jun 24 11:56 instackenv.json
-rw-r--r--. 1 root  root   385824684 Jun 24 03:28 ironic-python-agent.initramfs
-rwxr-xr-x. 1 root  root     5158704 Jun 24 03:28 ironic-python-agent.kernel
-rwxr-xr-x. 1 stack stack        487 Jun 24 12:17 network-environment.yaml
-rwxr-xr-x. 1 stack stack        792 Jun 24 12:17 overcloud-deploy-post.sh
-rwxr-xr-x. 1 stack stack       2284 Jun 24 12:17 overcloud-deploy.sh
-rw-rw-r--. 1 stack stack       4324 Jun 24 13:50 overcloud-env.json
-rw-r--r--. 1 root  root    36478203 Jun 24 03:28 overcloud-full.initrd
-rw-r--r--. 1 root  root  1224070144 Jun 24 03:29 overcloud-full.qcow2
-rwxr-xr-x. 1 root  root     5158704 Jun 24 03:29 overcloud-full.vmlinuz
-rw-rw-r--. 1 stack stack        389 Jun 24 14:28 overcloudrc
-rwxr-xr-x. 1 stack stack       3374 Jun 24 12:17 overcloud-validate.sh
-rwxr-xr-x. 1 stack stack        284 Jun 24 12:17 run-tempest.sh
-rw-r--r--. 1 stack stack        161 Jun 24 12:17 skipfile
-rw-------. 1 stack stack        287 Jun 24 12:16 stackrc
-rw-rw-r--. 1 stack stack        232 Jun 24 14:28 tempest-deployer-input.conf
drwxrwxr-x. 9 stack stack       4096 Jun 24 15:23 tripleo-ci
-rw-rw-r--. 1 stack stack       1123 Jun 24 14:28 tripleo-overcloud-passwords
-rw-------. 1 stack stack       6559 Jun 24 11:59 undercloud.conf
-rw-rw-r--. 1 stack stack     782405 Jun 24 12:16 undercloud_install.log
-rwxr-xr-x. 1 stack stack         83 Jun 24 12:00 undercloud-install.sh
-rw-rw-r--. 1 stack stack       1579 Jun 24 12:00 undercloud-passwords.conf
-rw-rw-r--. 1 stack stack       7699 Jun 24 12:17 undercloud_post_install.log
-rwxr-xr-x. 1 stack stack       2780 Jun 24 12:00 undercloud-post-install.sh

[stack@undercloud ~]$ ./overcloud-deploy.sh

Fourth redeployment based on the same undercloud VM. The starting point of
the ctlplane DHCP pool obviously increases with each redeployment.

`Libvirt's pool && volumes configuration built by QuickStart`

***************************************************************************
A bit different way to manage: login as stack and invoke virt-manager via
`virt-manager --connect qemu:///session` once /run/user already has the
correct permissions.
***************************************************************************

$ sudo su -
# chown -R stack /run/user
# chgrp -R stack /run/user
^D

[stack@ServerCentOS72 ~]$ virsh list --all
 Id    Name                           State
----------------------------------------------------
 -     compute_0                      shut off
 -     compute_1                      shut off
 -     control_0                      shut off
 -     control_1                      shut off
 -     control_2                      shut off
 -     undercloud                     shut off

[stack@ServerCentOS72 ~]$ virt-manager --connect qemu:///session
[stack@ServerCentOS72 ~]$ virsh list --all
 Id    Name                           State
----------------------------------------------------
 2     undercloud                     running
 -     compute_0                      shut off
 -     compute_1                      shut off
 -     control_0                      shut off
 -     control_1                      shut off
 -     control_2                      shut off

From workstation connect to undercloud:

[boris@fedora22wks tripleo-quickstart]$ ssh -F /home/boris/.quickstart/ssh.config.ansible undercloud
[stack@undercloud ~]$ ./overcloud-deploy.sh

In several minutes you will see

[stack@undercloud ~]$ nova list
+--------------------------------------+-------------------------+--------+------------+-------------+---------------------+
| ID                                   | Name                    | Status | Task State | Power State | Networks            |
+--------------------------------------+-------------------------+--------+------------+-------------+---------------------+
| 40754e8a-461e-4328-b0c4-6740c71e9a0d | overcloud-controller-0  | ACTIVE | -          | Running     | ctlplane=192.0.2.27 |
| df272524-a0bd-4ed7-b95c-92ac779c0b96 | overcloud-controller-1  | ACTIVE | -          | Running     | ctlplane=192.0.2.26 |
| 22802ff4-c472-4500-94d7-415c429073ab | overcloud-controller-2  | ACTIVE | -          | Running     | ctlplane=192.0.2.29 |
| e79a8967-5c81-4ce1-9037-4e07b298d779 | overcloud-novacompute-0 | ACTIVE | -          | Running     | ctlplane=192.0.2.25 |
| 27a7c6ac-a480-4945-b4d5-72e32b3c1886 | overcloud-novacompute-1 | ACTIVE | -          | Running     | ctlplane=192.0.2.28 |
+--------------------------------------+-------------------------+--------+------------+-------------+---------------------+

[stack@undercloud ~]$ ssh heat-admin@192.0.2.27
Last login: Sat Jun 25 09:35:35 2016 from 192.0.2.1
[heat-admin@overcloud-controller-0 ~]$ sudo su -
Last login: Sat Jun 25 09:54:06 UTC 2016 on pts/0

[root@overcloud-controller-0 ~]# . keystonerc_admin
[root@overcloud-controller-0 ~(keystone_admin)]# pcs status
Cluster name: tripleo_cluster
Last updated: Sat Jun 25 10:04:32 2016        Last change: Sat Jun 25 09:21:21 2016 by root via cibadmin on overcloud-controller-0
Stack: corosync
Current DC: overcloud-controller-2 (version 1.1.13-10.el7_2.2-44eb2dd) - partition with quorum
3 nodes and 127 resources configured

Online: [ overcloud-controller-0 overcloud-controller-1 overcloud-controller-2 ]

Full list of resources:

 ip-172.16.2.5    (ocf::heartbeat:IPaddr2):    Started overcloud-controller-0
 ip-172.16.3.4    (ocf::heartbeat:IPaddr2):    Started overcloud-controller-1
 Clone Set: haproxy-clone [haproxy]
     Started: [ overcloud-controller-0 overcloud-controller-1 overcloud-controller-2 ]
 Master/Slave Set: galera-master [galera]
     Masters: [ overcloud-controller-0 overcloud-controller-1 overcloud-controller-2 ]
 Clone Set: memcached-clone [memcached]
     Started: [ overcloud-controller-0 overcloud-controller-1 overcloud-controller-2 ]
 ip-192.0.2.24    (ocf::heartbeat:IPaddr2):    Started overcloud-controller-2
 ip-10.0.0.4    (ocf::heartbeat:IPaddr2):    Started overcloud-controller-0
 ip-172.16.2.4    (ocf::heartbeat:IPaddr2):    Started overcloud-controller-1
 ip-172.16.1.4    (ocf::heartbeat:IPaddr2):    Started overcloud-controller-2
 Clone Set: rabbitmq-clone [rabbitmq]
     Started: [ overcloud-controller-0 overcloud-controller-1 overcloud-controller-2 ]
 Clone Set: openstack-core-clone [openstack-core]
     Started: [ overcloud-controller-0 overcloud-controller-1 overcloud-controller-2 ]
 Master/Slave Set: redis-master [redis]
     Masters: [ overcloud-controller-1 ]
     Slaves: [ overcloud-controller-0 overcloud-controller-2 ]
 Clone Set: mongod-clone [mongod]
     Started: [ overcloud-controller-0 overcloud-controller-1 overcloud-controller-2 ]
 Clone Set: openstack-aodh-evaluator-clone [openstack-aodh-evaluator]
     Stopped: [ overcloud-controller-0 overcloud-controller-1 overcloud-controller-2 ]
 Clone Set: openstack-nova-scheduler-clone [openstack-nova-scheduler]
     Started: [ overcloud-controller-0 overcloud-controller-1 overcloud-controller-2 ]
 Clone Set: neutron-l3-agent-clone [neutron-l3-agent]
     Started: [ overcloud-controller-0 overcloud-controller-1 overcloud-controller-2 ]
 Clone Set: neutron-netns-cleanup-clone [neutron-netns-cleanup]
     Started: [ overcloud-controller-0 overcloud-controller-1 overcloud-controller-2 ]
 Clone Set: neutron-ovs-cleanup-clone [neutron-ovs-cleanup]
     Started: [ overcloud-controller-0 overcloud-controller-1 overcloud-controller-2 ]
 openstack-cinder-volume    (systemd:openstack-cinder-volume):    Started overcloud-controller-0
 Clone Set: openstack-heat-engine-clone [openstack-heat-engine]
     Started: [ overcloud-controller-0 overcloud-controller-1 overcloud-controller-2 ]
 Clone Set: openstack-ceilometer-api-clone [openstack-ceilometer-api]
     Stopped: [ overcloud-controller-0 overcloud-controller-1 overcloud-controller-2 ]
 Clone Set: openstack-aodh-listener-clone [openstack-aodh-listener]
     Stopped: [ overcloud-controller-0 overcloud-controller-1 overcloud-controller-2 ]
 Clone Set: neutron-metadata-agent-clone [neutron-metadata-agent]
     Started: [ overcloud-controller-0 overcloud-controller-1 overcloud-controller-2 ]
 Clone Set: openstack-gnocchi-metricd-clone [openstack-gnocchi-metricd]
     Started: [ overcloud-controller-0 overcloud-controller-1 overcloud-controller-2 ]
 Clone Set: openstack-aodh-notifier-clone [openstack-aodh-notifier]
     Stopped: [ overcloud-controller-0 overcloud-controller-1 overcloud-controller-2 ]
 Clone Set: openstack-heat-api-clone [openstack-heat-api]
     Started: [ overcloud-controller-0 overcloud-controller-1 overcloud-controller-2 ]
 Clone Set: openstack-ceilometer-collector-clone [openstack-ceilometer-collector]
     Stopped: [ overcloud-controller-0 overcloud-controller-1 overcloud-controller-2 ]
 Clone Set: openstack-glance-api-clone [openstack-glance-api]
     Started: [ overcloud-controller-0 overcloud-controller-1 overcloud-controller-2 ]
 Clone Set: openstack-cinder-scheduler-clone [openstack-cinder-scheduler]
     Started: [ overcloud-controller-0 overcloud-controller-1 overcloud-controller-2 ]
 Clone Set: openstack-nova-api-clone [openstack-nova-api]
     Started: [ overcloud-controller-0 overcloud-controller-1 overcloud-controller-2 ]
 Clone Set: openstack-nova-consoleauth-clone [openstack-nova-consoleauth]
     Started: [ overcloud-controller-0 overcloud-controller-1 overcloud-controller-2 ]
 Clone Set: openstack-sahara-api-clone [openstack-sahara-api]
     Started: [ overcloud-controller-0 overcloud-controller-1 overcloud-controller-2 ]
 Clone Set: openstack-heat-api-cloudwatch-clone [openstack-heat-api-cloudwatch]
     Started: [ overcloud-controller-0 overcloud-controller-1 overcloud-controller-2 ]
 Clone Set: openstack-sahara-engine-clone [openstack-sahara-engine]
     Started: [ overcloud-controller-0 overcloud-controller-1 overcloud-controller-2 ]
 Clone Set: openstack-glance-registry-clone [openstack-glance-registry]
     Started: [ overcloud-controller-0 overcloud-controller-1 overcloud-controller-2 ]
 Clone Set: openstack-gnocchi-statsd-clone [openstack-gnocchi-statsd]
     Stopped: [ overcloud-controller-0 overcloud-controller-1 overcloud-controller-2 ]
 Clone Set: openstack-ceilometer-notification-clone [openstack-ceilometer-notification]
     Started: [ overcloud-controller-0 overcloud-controller-1 overcloud-controller-2 ]
 Clone Set: openstack-cinder-api-clone [openstack-cinder-api]
     Started: [ overcloud-controller-0 overcloud-controller-1 overcloud-controller-2 ]
 Clone Set: neutron-dhcp-agent-clone [neutron-dhcp-agent]
     Started: [ overcloud-controller-0 overcloud-controller-1 overcloud-controller-2 ]
 Clone Set: neutron-openvswitch-agent-clone [neutron-openvswitch-agent]
     Started: [ overcloud-controller-0 overcloud-controller-1 overcloud-controller-2 ]
 Clone Set: openstack-nova-novncproxy-clone [openstack-nova-novncproxy]
     Started: [ overcloud-controller-0 overcloud-controller-1 overcloud-controller-2 ]
 Clone Set: delay-clone [delay]
     Stopped: [ overcloud-controller-0 overcloud-controller-1 overcloud-controller-2 ]
 Clone Set: neutron-server-clone [neutron-server]
     Started: [ overcloud-controller-0 overcloud-controller-1 overcloud-controller-2 ]
 Clone Set: openstack-ceilometer-central-clone [openstack-ceilometer-central]
     Stopped: [ overcloud-controller-0 overcloud-controller-1 overcloud-controller-2 ]
 Clone Set: httpd-clone [httpd]
     Started: [ overcloud-controller-0 overcloud-controller-1 overcloud-controller-2 ]
 Clone Set: openstack-heat-api-cfn-clone [openstack-heat-api-cfn]
     Started: [ overcloud-controller-0 overcloud-controller-1 overcloud-controller-2 ]
 Clone Set: openstack-nova-conductor-clone [openstack-nova-conductor]
     Started: [ overcloud-controller-0 overcloud-controller-1 overcloud-controller-2 ]

Failed Actions:
* openstack-aodh-evaluator_monitor_60000 on overcloud-controller-1 'not running' (7): call=92, status=complete, exitreason='none',
    last-rc-change='Sat Jun 25 09:16:45 2016', queued=0ms, exec=0ms
* openstack-gnocchi-metricd_monitor_60000 on overcloud-controller-1 'not running' (7): call=355, status=complete, exitreason='none',
    last-rc-change='Sat Jun 25 10:00:10 2016', queued=0ms, exec=0ms
* openstack-gnocchi-statsd_start_0 on overcloud-controller-1 'not running' (7): call=313, status=complete, exitreason='none',
    last-rc-change='Sat Jun 25 09:20:51 2016', queued=0ms, exec=2101ms
* openstack-ceilometer-central_start_0 on overcloud-controller-1 'not running' (7): call=328, status=complete, exitreason='none',
    last-rc-change='Sat Jun 25 09:23:05 2016', queued=0ms, exec=2121ms
* openstack-aodh-evaluator_monitor_60000 on overcloud-controller-0 'not running' (7): call=97, status=complete, exitreason='none',
    last-rc-change='Sat Jun 25 09:16:43 2016', queued=0ms, exec=0ms
* openstack-gnocchi-metricd_monitor_60000 on overcloud-controller-0 'not running' (7): call=365, status=complete, exitreason='none',
    last-rc-change='Sat Jun 25 10:00:12 2016', queued=0ms, exec=0ms
* openstack-gnocchi-statsd_start_0 on overcloud-controller-0 'not running' (7): call=324, status=complete, exitreason='none',
    last-rc-change='Sat Jun 25 09:22:32 2016', queued=0ms, exec=2237ms
* openstack-ceilometer-central_start_0 on overcloud-controller-0 'not running' (7): call=342, status=complete, exitreason='none',
    last-rc-change='Sat Jun 25 09:23:32 2016', queued=0ms, exec=2200ms
* openstack-aodh-evaluator_monitor_60000 on overcloud-controller-2 'not running' (7): call=94, status=complete, exitreason='none',
    last-rc-change='Sat Jun 25 09:16:47 2016', queued=0ms, exec=0ms
* openstack-gnocchi-metricd_monitor_60000 on overcloud-controller-2 'not running' (7): call=353, status=complete, exitreason='none',
    last-rc-change='Sat Jun 25 10:00:08 2016', queued=0ms, exec=0ms
* openstack-gnocchi-statsd_start_0 on overcloud-controller-2 'not running' (7): call=318, status=complete, exitreason='none',
    last-rc-change='Sat Jun 25 09:22:39 2016', queued=0ms, exec=2113ms
* openstack-ceilometer-central_start_0 on overcloud-controller-2 'not running' (7): call=322, status=complete, exitreason='none',
    last-rc-change='Sat Jun 25 09:22:48 2016', queued=0ms, exec=2123ms

PCSD Status:
  overcloud-controller-0: Online
  overcloud-controller-1: Online
  overcloud-controller-2: Online

Daemon Status:
  corosync: active/enabled
  pacemaker: active/enabled
  pcsd: active/enabled

RDO TripleO QuickStart HA Setup on Intel Core i7-4790 Desktop (work in progress)

June 18, 2016

This post follows up https://www.linux.com/blog/rdo-triple0-quickstart-ha-setup-intel-core-i7-4790-desktop
In the meantime undercloud-install and undercloud-post-install (openstack undercloud install, openstack overcloud image upload) are supposed to be performed during the original `bash quickstart.sh --config /path-to/ha.yml $VIRTHOST` run. Neutron networks deployment on the undercloud and the HA server's configuration have been significantly rebuilt since 06/03/2016. I believe the design below is close to the one proposed in https://remote-lab.net/rdo-manager-ha-openstack-deployment
However, an attempt to reproduce http://docs.openstack.org/developer/tripleo-docs/installation/installation.html
results in hanging on `openstack undercloud install` when it attempts to start openstack-nova-compute on the undercloud. Nova-compute.log reports a failure to connect to 127.0.0.1:5672. Verification via `netstat -antp | grep 5672` reports port 5672 bound only to 192.0.2.1 (the ctlplane IP address).
Quoting (the complaints are not mine) :-

*****************************
Start on workstation :-
*****************************

$ git clone https://github.com/openstack/tripleo-quickstart
$ cd tripleo-quickstart
$ sudo bash quickstart.sh --install-deps
$ sudo yum -y install redhat-rpm-config
$ export VIRTHOST=192.168.1.75 # put your own IP here
$ ssh-keygen
$ ssh-copy-id root@$VIRTHOST
$ ssh root@$VIRTHOST uname -a # should succeed without a password prompt
######################
# Template code
######################
compute_memory: 6144
compute_vcpu: 1
undercloud_memory: 8192

# Giving the undercloud additional CPUs can greatly improve heat's
# performance (and result in a shorter deploy time).
undercloud_vcpu: 4

# Create three controller nodes and two compute nodes.
overcloud_nodes:
  - name: control_0
    flavor: control
  - name: control_1
    flavor: control
  - name: control_2
    flavor: control
  - name: compute_0
    flavor: compute
  - name: compute_1
    flavor: compute

# We don't need introspection in a virtual environment (because we are
# creating all the "hardware" we really know the necessary
# information).
introspect: false

# Tell tripleo about our environment.
network_isolation: true
extra_args: >-
  --control-scale 3 --compute-scale 2 --neutron-network-type vxlan
  --neutron-tunnel-types vxlan
  -e /usr/share/openstack-tripleo-heat-templates/environments/puppet-pacemaker.yaml
  --ntp-server pool.ntp.org
deploy_timeout: 75
tempest: false
pingtest: true

***********************************************
Then run under tripleo-quickstart
***********************************************

$ bash quickstart.sh --config ./config/general_config/ha.yml $VIRTHOST

During this run the most important is to reach this point on VIRTHOST

`[root@ServerCentOS72 ~]# cd /var/cache/tripleo-quickstart/images`

[root@ServerCentOS72 images]# ls -l
total 2638232
-rw-rw-r--. 1 stack stack 2701548544 Jun 17 19:25 83e62624dd7bd637dada343bbf4fe8f1.qcow2
lrwxrwxrwx. 1 stack stack         75 Jun 17 19:25 latest-undercloud.qcow2 -> /var/cache/tripleo-quickstart/images/83e62624dd7bd637dada343bbf4fe8f1.qcow2
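The cached image's basename looks like its own md5 checksum (an assumption from the listing above, not documented QuickStart behavior), so a cache entry can be sanity-checked by recomputing the sum:

```shell
# Assumption: a cache entry such as 83e62624dd7bd637dada343bbf4fe8f1.qcow2
# is named after its own md5 sum; recompute the sum and compare it to the
# filename.
check_cached_image() {  # usage: check_cached_image <path/to/<md5>.qcow2>
  f=$1
  name=$(basename "$f" .qcow2)
  sum=$(md5sum "$f" 2>/dev/null | awk '{print $1}')
  [ "$name" = "$sum" ]
}

# Example:
# check_cached_image /var/cache/tripleo-quickstart/images/83e62624dd7bd637dada343bbf4fe8f1.qcow2 \
#   && echo "cache entry intact"
```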

Saturday 18 June 2016  12:07:05 +0300 (0:00:00.124)       0:26:21.276

===============================================================================
tripleo/undercloud : Install the undercloud -------------------------- 1155.95s
setup/undercloud : Get undercloud vm ip address ------------------------ 81.26s
setup/undercloud : Resize undercloud image (call virt-resize) ---------- 76.39s
tripleo/undercloud : Prepare the undercloud for deploy ----------------- 70.15s
setup/undercloud : Upload undercloud volume to storage pool ------------ 53.20s
setup/undercloud : Copy instackenv.json to appliance ------------------- 35.25s
setup/undercloud : Get qcow2 image from cache -------------------------- 32.77s
setup/undercloud : Inject undercloud ssh public key to appliance -------- 7.07s
setup ------------------------------------------------------------------- 6.68s
None --------------------------------------------------------------------------
setup/undercloud : Perform selinux relabel on undercloud image ---------- 3.47s
environment/teardown : Check if libvirt is available -------------------- 1.99s
setup ------------------------------------------------------------------- 1.92s
/home/boris/.quickstart/playbooks/provision.yml:29 ----------------------------
setup ------------------------------------------------------------------- 1.90s
None --------------------------------------------------------------------------
setup ------------------------------------------------------------------- 1.81s
None --------------------------------------------------------------------------
parts/libvirt : Install packages for libvirt ---------------------------- 1.78s
setup/overcloud : Create overcloud vm storage --------------------------- 1.57s
setup/overcloud : Define overcloud vms ---------------------------------- 1.48s
provision/teardown : Remove non-root user account ----------------------- 1.41s
provision/teardown : Wait for processes to exit ------------------------- 1.41s
environment/teardown : Stop libvirt networks ---------------------------- 1.35s

+ set +x

##################################
Virtual Environment Setup Complete
##################################

Access the undercloud by:
ssh -F /home/boris/.quickstart/ssh.config.ansible undercloud
There are scripts in the home directory to continue the deploy:
1. overcloud-deploy.sh will deploy the overcloud

Detailed syntax of `openstack overcloud deploy --templates ...` is
captured by the snapshot below; compare with https://remote-lab.net/rdo-manager-ha-openstack-deployment

$ openstack overcloud deploy --control-scale 3 --compute-scale 2 \
--libvirt-type qemu --ntp-server pool.ntp.org --templates ~/the-cloud/ \
-e ~/the-cloud/environments/puppet-pacemaker.yaml \
-e ~/the-cloud/environments/network-isolation.yaml \
-e ~/the-cloud/environments/net-single-nic-with-vlans.yaml \
-e ~/the-cloud/environments/network-environment.yaml

2.   overcloud-deploy-post.sh will do any post-deploy configuration
3.   overcloud-validate.sh will run post-deploy validation

Alternatively, you can ignore these scripts and follow the upstream docs,
starting from the overcloud deploy section:

http://ow.ly/1Vc1301iBlb

Then run the 3 scripts mentioned above

[stack@undercloud ~]$ . stackrc
[stack@undercloud ~]$ heat stack-list
+--------------------------------------+------------+-----------------+---------------------+--------------+
| id                                   | stack_name | stack_status    | creation_time       | updated_time |
+--------------------------------------+------------+-----------------+---------------------+--------------+
| 356243b1-a071-45c8-8083-85b9a12532c6 | overcloud  | CREATE_COMPLETE | 2016-06-18T09:09:40 | None         |
+--------------------------------------+------------+-----------------+---------------------+--------------+

[stack@undercloud ~]$ nova list
+--------------------------------------+-------------------------+--------+------------+-------------+---------------------+
| ID                                   | Name                    | Status | Task State | Power State | Networks            |
+--------------------------------------+-------------------------+--------+------------+-------------+---------------------+
| dbb233ab-9108-4a22-b0dd-44c6ef9a481a | overcloud-controller-0  | ACTIVE | -          | Running     | ctlplane=192.0.2.11 |
| 1a91083e-e1ba-43c3-8ad2-78500f6b3ecb | overcloud-controller-1  | ACTIVE | -          | Running     | ctlplane=192.0.2.7  |
| 0b3f6ec8-0a13-4f40-b9e3-4557f1b8c7a3 | overcloud-controller-2  | ACTIVE | -          | Running     | ctlplane=192.0.2.9  |
| 97a8a546-72a0-4431-8065-c1f81103ee25 | overcloud-novacompute-0 | ACTIVE | -          | Running     | ctlplane=192.0.2.10 |
| e87a79db-75f8-437f-8ed7-f29aacfe7339 | overcloud-novacompute-1 | ACTIVE | -          | Running     | ctlplane=192.0.2.8  |
+--------------------------------------+-------------------------+--------+------------+-------------+---------------------+

[stack@undercloud ~]$ neutron net-list
+--------------------------------------+--------------+----------------------------------------+
| id                                   | name         | subnets                                |
+--------------------------------------+--------------+----------------------------------------+
| cde382ae-a7fa-4ebb-bbdc-9e2af9c0df83 | external     | 42fac214-7177-4b4f-8778-105015ed30da   |
|                                      |              | 10.0.0.0/24                            |
| 5fc97bca-fa67-4ede-b4d3-8234c0ace5e5 | storage_mgmt | 719f9a19-2f1d-4eed-914a-430468086f10   |
|                                      |              | 172.16.3.0/24                          |
| 4236d358-b4cd-4fb9-a337-f8a421bb13cd | tenant       | d6f1e772-c0a1-4869-a9bc-b551faf5be8e   |
|                                      |              | 172.16.0.0/24                          |
| a4155b70-a4d8-41bf-bbe6-a5f4e248c5ad | ctlplane     | 199a8e99-d9c7-43f2-8ccd-6a59b8424362   |
|                                      |              | 192.0.2.0/24                           |
| fae53fb0-c5da-427f-b473-bfaa0ab21877 | internal_api | 5f2ff369-1000-4361-8131-b0ae69821b9f   |
|                                      |              | 172.16.2.0/24                          |
| 41862220-b9e6-4000-8341-9fbdb34b47f5 | storage      | d0cf1cac-f841-41dd-923d-47d164c07d0f   |
|                                      |              | 172.16.1.0/24                          |
+--------------------------------------+--------------+----------------------------------------+

[stack@undercloud ~]$ cat overcloudrc
export OS_NO_CACHE=True
export OS_CLOUDNAME=overcloud
export OS_AUTH_URL=http://10.0.0.4:5000/v2.0
export NOVA_VERSION=1.1
export COMPUTE_API_VERSION=1.1
export no_proxy=,10.0.0.4,192.0.2.6
export PYTHONWARNINGS="ignore:Certificate has no, ignore:A true SSLContext object is not available"


The authenticity of host '192.0.2.11 (192.0.2.11)' can't be established.
ECDSA key fingerprint is 74:99:da:b1:c8:ac:58:e6:65:c1:51:45:64:e4:e9:ed.
Are you sure you want to continue connecting (yes/no)? yes
Warning: Permanently added '192.0.2.11' (ECDSA) to the list of known hosts.
Last login: Sat Jun 18 09:52:37 2016 from 192.0.2.1
Cluster name: tripleo_cluster
Last updated: Sat Jun 18 10:01:58 2016        Last change: Sat Jun 18 09:49:22 2016 by root via cibadmin on overcloud-controller-0
Stack: corosync
Current DC: overcloud-controller-1 (version 1.1.13-10.el7_2.2-44eb2dd) - partition with quorum

3 nodes and 127 resources configured

Online: [ overcloud-controller-0 overcloud-controller-1 overcloud-controller-2 ]
Full list of resources:
```
 ip-192.0.2.6    (ocf::heartbeat:IPaddr2):    Started overcloud-controller-0
 ip-172.16.2.5    (ocf::heartbeat:IPaddr2):    Started overcloud-controller-1
 ip-172.16.3.4    (ocf::heartbeat:IPaddr2):    Started overcloud-controller-2
 Clone Set: haproxy-clone [haproxy]
     Started: [ overcloud-controller-0 overcloud-controller-1 overcloud-controller-2 ]
 Master/Slave Set: galera-master [galera]
     Masters: [ overcloud-controller-0 overcloud-controller-1 overcloud-controller-2 ]
 Clone Set: memcached-clone [memcached]
     Started: [ overcloud-controller-0 overcloud-controller-1 overcloud-controller-2 ]
 ip-10.0.0.4    (ocf::heartbeat:IPaddr2):    Started overcloud-controller-0
 ip-172.16.2.4    (ocf::heartbeat:IPaddr2):    Started overcloud-controller-1
 ip-172.16.1.4    (ocf::heartbeat:IPaddr2):    Started overcloud-controller-2
 Clone Set: rabbitmq-clone [rabbitmq]
     Started: [ overcloud-controller-0 overcloud-controller-1 overcloud-controller-2 ]
 Clone Set: openstack-core-clone [openstack-core]
     Started: [ overcloud-controller-0 overcloud-controller-1 overcloud-controller-2 ]
 Master/Slave Set: redis-master [redis]
     Masters: [ overcloud-controller-1 ]
     Slaves: [ overcloud-controller-0 overcloud-controller-2 ]
 Clone Set: mongod-clone [mongod]
     Started: [ overcloud-controller-0 overcloud-controller-1 overcloud-controller-2 ]
 Clone Set: openstack-aodh-evaluator-clone [openstack-aodh-evaluator]
     Stopped: [ overcloud-controller-0 overcloud-controller-1 overcloud-controller-2 ]
 Clone Set: openstack-nova-scheduler-clone [openstack-nova-scheduler]
     Started: [ overcloud-controller-0 overcloud-controller-1 overcloud-controller-2 ]
 Clone Set: neutron-l3-agent-clone [neutron-l3-agent]
     Started: [ overcloud-controller-0 overcloud-controller-1 overcloud-controller-2 ]
 Clone Set: neutron-netns-cleanup-clone [neutron-netns-cleanup]
     Started: [ overcloud-controller-0 overcloud-controller-1 overcloud-controller-2 ]
 Clone Set: neutron-ovs-cleanup-clone [neutron-ovs-cleanup]
     Started: [ overcloud-controller-0 overcloud-controller-1 overcloud-controller-2 ]
 openstack-cinder-volume    (systemd:openstack-cinder-volume):    Started overcloud-controller-0
 Clone Set: openstack-heat-engine-clone [openstack-heat-engine]
     Started: [ overcloud-controller-0 overcloud-controller-1 overcloud-controller-2 ]
 Clone Set: openstack-ceilometer-api-clone [openstack-ceilometer-api]
     Stopped: [ overcloud-controller-0 overcloud-controller-1 overcloud-controller-2 ]
 Clone Set: openstack-aodh-listener-clone [openstack-aodh-listener]
     Stopped: [ overcloud-controller-0 overcloud-controller-1 overcloud-controller-2 ]
 Clone Set: neutron-metadata-agent-clone [neutron-metadata-agent]
     Started: [ overcloud-controller-0 overcloud-controller-1 overcloud-controller-2 ]
 Clone Set: openstack-gnocchi-metricd-clone [openstack-gnocchi-metricd]
     Started: [ overcloud-controller-0 overcloud-controller-1 overcloud-controller-2 ]
 Clone Set: openstack-aodh-notifier-clone [openstack-aodh-notifier]
     Stopped: [ overcloud-controller-0 overcloud-controller-1 overcloud-controller-2 ]
 Clone Set: openstack-heat-api-clone [openstack-heat-api]
     Started: [ overcloud-controller-0 overcloud-controller-1 overcloud-controller-2 ]
 Clone Set: openstack-ceilometer-collector-clone [openstack-ceilometer-collector]
     Stopped: [ overcloud-controller-0 overcloud-controller-1 overcloud-controller-2 ]
 Clone Set: openstack-glance-api-clone [openstack-glance-api]
     Started: [ overcloud-controller-0 overcloud-controller-1 overcloud-controller-2 ]
 Clone Set: openstack-cinder-scheduler-clone [openstack-cinder-scheduler]
     Started: [ overcloud-controller-0 overcloud-controller-1 overcloud-controller-2 ]
 Clone Set: openstack-nova-api-clone [openstack-nova-api]
     Started: [ overcloud-controller-0 overcloud-controller-1 overcloud-controller-2 ]
 Clone Set: openstack-nova-consoleauth-clone [openstack-nova-consoleauth]
     Started: [ overcloud-controller-0 overcloud-controller-1 overcloud-controller-2 ]
 Clone Set: openstack-sahara-api-clone [openstack-sahara-api]
     Started: [ overcloud-controller-0 overcloud-controller-1 overcloud-controller-2 ]
 Clone Set: openstack-heat-api-cloudwatch-clone [openstack-heat-api-cloudwatch]
     Started: [ overcloud-controller-0 overcloud-controller-1 overcloud-controller-2 ]
 Clone Set: openstack-sahara-engine-clone [openstack-sahara-engine]
     Started: [ overcloud-controller-0 overcloud-controller-1 overcloud-controller-2 ]
 Clone Set: openstack-glance-registry-clone [openstack-glance-registry]
     Started: [ overcloud-controller-0 overcloud-controller-1 overcloud-controller-2 ]
 Clone Set: openstack-gnocchi-statsd-clone [openstack-gnocchi-statsd]
     Stopped: [ overcloud-controller-0 overcloud-controller-1 overcloud-controller-2 ]
 Clone Set: openstack-ceilometer-notification-clone [openstack-ceilometer-notification]
     Started: [ overcloud-controller-0 overcloud-controller-1 overcloud-controller-2 ]
 Clone Set: openstack-cinder-api-clone [openstack-cinder-api]
     Started: [ overcloud-controller-0 overcloud-controller-1 overcloud-controller-2 ]
 Clone Set: neutron-dhcp-agent-clone [neutron-dhcp-agent]
     Started: [ overcloud-controller-0 overcloud-controller-1 overcloud-controller-2 ]
 Clone Set: neutron-openvswitch-agent-clone [neutron-openvswitch-agent]
     Started: [ overcloud-controller-0 overcloud-controller-1 overcloud-controller-2 ]
 Clone Set: openstack-nova-novncproxy-clone [openstack-nova-novncproxy]
     Started: [ overcloud-controller-0 overcloud-controller-1 overcloud-controller-2 ]
 Clone Set: delay-clone [delay]
     Stopped: [ overcloud-controller-0 overcloud-controller-1 overcloud-controller-2 ]
 Clone Set: neutron-server-clone [neutron-server]
     Started: [ overcloud-controller-0 overcloud-controller-1 overcloud-controller-2 ]
 Clone Set: openstack-ceilometer-central-clone [openstack-ceilometer-central]
     Stopped: [ overcloud-controller-0 overcloud-controller-1 overcloud-controller-2 ]
 Clone Set: httpd-clone [httpd]
     Started: [ overcloud-controller-0 overcloud-controller-1 overcloud-controller-2 ]
 Clone Set: openstack-heat-api-cfn-clone [openstack-heat-api-cfn]
     Started: [ overcloud-controller-0 overcloud-controller-1 overcloud-controller-2 ]
 Clone Set: openstack-nova-conductor-clone [openstack-nova-conductor]
     Started: [ overcloud-controller-0 overcloud-controller-1 overcloud-controller-2 ]
```
Failed Actions:
```
* openstack-aodh-evaluator_monitor_60000 on overcloud-controller-1 'not running' (7): call=95, status=complete, exitreason='none', last-rc-change='Sat Jun 18 09:44:43 2016', queued=0ms, exec=0ms
* openstack-gnocchi-metricd_monitor_60000 on overcloud-controller-1 'not running' (7): call=331, status=complete, exitreason='none', last-rc-change='Sat Jun 18 09:56:44 2016', queued=0ms, exec=0ms
* openstack-gnocchi-statsd_start_0 on overcloud-controller-1 'not running' (7): call=335, status=complete, exitreason='none', last-rc-change='Sat Jun 18 09:50:53 2016', queued=0ms, exec=2099ms
* openstack-ceilometer-central_start_0 on overcloud-controller-1 'not running' (7): call=339, status=complete, exitreason='none', last-rc-change='Sat Jun 18 09:51:17 2016', queued=0ms, exec=2117ms
* openstack-aodh-evaluator_monitor_60000 on overcloud-controller-0 'not running' (7): call=96, status=complete, exitreason='none', last-rc-change='Sat Jun 18 09:44:40 2016', queued=0ms, exec=0ms
* openstack-gnocchi-metricd_monitor_60000 on overcloud-controller-0 'not running' (7): call=332, status=complete, exitreason='none', last-rc-change='Sat Jun 18 09:56:42 2016', queued=0ms, exec=0ms
* openstack-gnocchi-statsd_start_0 on overcloud-controller-0 'not running' (7): call=339, status=complete, exitreason='none', last-rc-change='Sat Jun 18 09:51:13 2016', queued=0ms, exec=2145ms
* openstack-ceilometer-central_start_0 on overcloud-controller-0 'not running' (7): call=341, status=complete, exitreason='none', last-rc-change='Sat Jun 18 09:51:28 2016', queued=0ms, exec=2147ms
* openstack-aodh-evaluator_start_0 on overcloud-controller-2 'not running' (7): call=368, status=complete, exitreason='none', last-rc-change='Sat Jun 18 09:53:18 2016', queued=0ms, exec=2107ms
* openstack-gnocchi-metricd_monitor_60000 on overcloud-controller-2 'not running' (7): call=321, status=complete, exitreason='none', last-rc-change='Sat Jun 18 09:56:46 2016', queued=0ms, exec=0ms
* openstack-gnocchi-statsd_start_0 on overcloud-controller-2 'not running' (7): call=326, status=complete, exitreason='none', last-rc-change='Sat Jun 18 09:51:06 2016', queued=0ms, exec=2185ms
* openstack-ceilometer-central_start_0 on overcloud-controller-2 'not running' (7): call=378, status=complete, exitreason='none', last-rc-change='Sat Jun 18 09:54:14 2016', queued=1ms, exec=2116ms
```
PCSD Status:
overcloud-controller-0: Online
overcloud-controller-1: Online
overcloud-controller-2: Online

Daemon Status:
corosync: active/enabled
pacemaker: active/enabled
pcsd: active/enabled

```
8fea5ee4-62cf-4767-96c8-d9867cab9972
    Bridge br-tun
        fail_mode: secure
        Port br-tun
            Interface br-tun
                type: internal
        Port "vxlan-ac100004"
            Interface "vxlan-ac100004"
                type: vxlan
                options: {df_default="true", in_key=flow, local_ip="172.16.0.6", out_key=flow, remote_ip="172.16.0.4"}
        Port "vxlan-ac100005"
            Interface "vxlan-ac100005"
                type: vxlan
                options: {df_default="true", in_key=flow, local_ip="172.16.0.6", out_key=flow, remote_ip="172.16.0.5"}
        Port "vxlan-ac100008"
            Interface "vxlan-ac100008"
                type: vxlan
                options: {df_default="true", in_key=flow, local_ip="172.16.0.6", out_key=flow, remote_ip="172.16.0.8"}
        Port patch-int
            Interface patch-int
                type: patch
                options: {peer=patch-tun}
        Port "vxlan-ac100007"
            Interface "vxlan-ac100007"
                type: vxlan
                options: {df_default="true", in_key=flow, local_ip="172.16.0.6", out_key=flow, remote_ip="172.16.0.7"}
    Bridge br-int
        fail_mode: secure
        Port br-int
            Interface br-int
                type: internal
        Port patch-tun
            Interface patch-tun
                type: patch
                options: {peer=patch-int}
        Port int-br-ex
            Interface int-br-ex
                type: patch
                options: {peer=phy-br-ex}
    Bridge br-ex
        Port br-ex
            Interface br-ex
                type: internal
        Port "vlan20"
            tag: 20
            Interface "vlan20"
                type: internal
        Port "eth0"
            Interface "eth0"
        Port phy-br-ex
            Interface phy-br-ex
                type: patch
                options: {peer=int-br-ex}
        Port "vlan40"
            tag: 40
            Interface "vlan40"
                type: internal
        Port "vlan50"
            tag: 50
            Interface "vlan50"
                type: internal
        Port "vlan10"
            tag: 10
            Interface "vlan10"
                type: internal
        Port "vlan30"
            tag: 30
            Interface "vlan30"
                type: internal
    ovs_version: "2.5.0"
```

br-ex: flags=4163  mtu 1500
inet6 fe80::250:dcff:fecf:b7d5  prefixlen 64  scopeid 0x20
ether 00:50:dc:cf:b7:d5  txqueuelen 0  (Ethernet)
RX packets 15254  bytes 29305270 (27.9 MiB)
RX errors 0  dropped 0  overruns 0  frame 0
TX packets 15111  bytes 2037368 (1.9 MiB)
TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0

eth0: flags=4163  mtu 1500
inet6 fe80::250:dcff:fecf:b7d5  prefixlen 64  scopeid 0x20
ether 00:50:dc:cf:b7:d5  txqueuelen 1000  (Ethernet)
RX packets 554865  bytes 314056269 (299.5 MiB)
RX errors 0  dropped 0  overruns 0  frame 0
TX packets 537763  bytes 196316938 (187.2 MiB)
TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0

lo: flags=73  mtu 65536
inet6 ::1  prefixlen 128  scopeid 0x10
loop  txqueuelen 0  (Local Loopback)
RX packets 128951  bytes 42842317 (40.8 MiB)
RX errors 0  dropped 0  overruns 0  frame 0
TX packets 128951  bytes 42842317 (40.8 MiB)
TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0

vlan10: flags=4163  mtu 1500
inet6 fe80::2cf7:9cff:fe98:df2e  prefixlen 64  scopeid 0x20
ether 2e:f7:9c:98:df:2e  txqueuelen 0  (Ethernet)
RX packets 1563  bytes 22172141 (21.1 MiB)
RX errors 0  dropped 0  overruns 0  frame 0
TX packets 935  bytes 339459 (331.5 KiB)
TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0

vlan20: flags=4163  mtu 1500
inet6 fe80::9c4a:96ff:fe42:f562  prefixlen 64  scopeid 0x20
ether 9e:4a:96:42:f5:62  txqueuelen 0  (Ethernet)
RX packets 515281  bytes 202417994 (193.0 MiB)
RX errors 0  dropped 0  overruns 0  frame 0
TX packets 498334  bytes 112312907 (107.1 MiB)
TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0

vlan30: flags=4163  mtu 1500
inet6 fe80::8cbe:80ff:fe80:7945  prefixlen 64  scopeid 0x20
ether 8e:be:80:80:79:45  txqueuelen 0  (Ethernet)
RX packets 20275  bytes 45196003 (43.1 MiB)
RX errors 0  dropped 0  overruns 0  frame 0
TX packets 20405  bytes 52618634 (50.1 MiB)
TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0

vlan40: flags=4163  mtu 1500
inet6 fe80::8c06:98ff:fe7a:5b7  prefixlen 64  scopeid 0x20
ether 8e:06:98:7a:05:b7  txqueuelen 0  (Ethernet)
RX packets 2299  bytes 12722091 (12.1 MiB)
RX errors 0  dropped 0  overruns 0  frame 0
TX packets 2557  bytes 26854977 (25.6 MiB)
TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0

vlan50: flags=4163  mtu 1500
inet6 fe80::6454:dff:fe41:90e9  prefixlen 64  scopeid 0x20
ether 66:54:0d:41:90:e9  txqueuelen 0  (Ethernet)
RX packets 107  bytes 9834 (9.6 KiB)
RX errors 0  dropped 0  overruns 0  frame 0
TX packets 121  bytes 12394 (12.1 KiB)
TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0

```
Kernel IP routing table
Destination     Gateway         Genmask         Flags Metric Ref    Use Iface
0.0.0.0         10.0.0.1        0.0.0.0         UG    0      0        0 vlan10
10.0.0.0        0.0.0.0         255.255.255.0   U     0      0        0 vlan10
169.254.169.254 192.0.2.1       255.255.255.255 UGH   0      0        0 br-ex
172.16.0.0      0.0.0.0         255.255.255.0   U     0      0        0 vlan50
172.16.1.0      0.0.0.0         255.255.255.0   U     0      0        0 vlan30
172.16.2.0      0.0.0.0         255.255.255.0   U     0      0        0 vlan20
172.16.3.0      0.0.0.0         255.255.255.0   U     0      0        0 vlan40
192.0.2.0       0.0.0.0         255.255.255.0   U     0      0        0 br-ex
```

[root@overcloud-controller-0 ~]# cat /etc/os-net-config/config.json | jq '.[]'
[
{
{
}
],
"type": "ovs_bridge",
"use_dhcp": false,
"routes": [
{
"next_hop": "192.0.2.1",
}
],
"members": [
{
"primary": true,
"name": "nic1",
"type": "interface"
},
{
"vlan_id": 10,
{
}
],
"type": "vlan",
"routes": [
{
"next_hop": "10.0.0.1",
"default": true
}
]
},
{
"vlan_id": 20,
{
}
],
"type": "vlan"
},
{
"vlan_id": 30,
{
}
],
"type": "vlan"
},
{
"vlan_id": 40,
{
}
],
"type": "vlan"
},
{
"vlan_id": 50,
{
}
],
"type": "vlan"
}
],
"name": "br-ex",
"dns_servers": [
"8.8.8.8",
"8.8.4.4"
]
}
]

```************************ On undercloud ************************ ```
```
[stack@undercloud ~]\$ sudo su -
Last login: Sat Jun 18 10:47:31 UTC 2016 on pts/1
[root@undercloud ~]# ovs-vsctl show
7fb4d9b7-4704-410f-845f-6f3f0a1b65cd
    Bridge br-ctlplane
        Port "vlan10"
            tag: 10
            Interface "vlan10"
                type: internal
        Port br-ctlplane
            Interface br-ctlplane
                type: internal
        Port phy-br-ctlplane
            Interface phy-br-ctlplane
                type: patch
                options: {peer=int-br-ctlplane}
        Port "eth1"
            Interface "eth1"
    Bridge br-int
        fail_mode: secure
        Port br-int
            Interface br-int
                type: internal
        Port "tap41a7c72c-39"
            tag: 1
            Interface "tap41a7c72c-39"
                type: internal
        Port int-br-ctlplane
            Interface int-br-ctlplane
                type: patch
                options: {peer=phy-br-ctlplane}
    ovs_version: "2.5.0"
```

```
[root@undercloud ~]# route -n
Kernel IP routing table
Destination     Gateway         Genmask         Flags Metric Ref    Use Iface
0.0.0.0         192.168.23.1    0.0.0.0         UG    0      0        0 eth0
10.0.0.0        0.0.0.0         255.255.255.0   U     0      0        0 vlan10
192.0.2.0       0.0.0.0         255.255.255.0   U     0      0        0 br-ctlplane
192.168.23.0    0.0.0.0         255.255.255.0   U     0      0        0 eth0
192.168.122.0   0.0.0.0         255.255.255.0   U     0      0        0 virbr0
```

```
[root@undercloud ~]# ifconfig
br-ctlplane: flags=4163  mtu 1500
        inet 192.0.2.1  netmask 255.255.255.0  broadcast 192.0.2.255
        inet6 fe80::2ad:c4ff:fe6f:778a  prefixlen 64  scopeid 0x20
        ether 00:ad:c4:6f:77:8a  txqueuelen 0  (Ethernet)
        RX packets 4743446  bytes 382457275 (364.7 MiB)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 6573214  bytes 31299066406 (29.1 GiB)
        TX errors 0  dropped 0  overruns 0  carrier 0  collisions 0
```

eth0: flags=4163 mtu 1500
inet6 fe80::2ad:c4ff:fe6f:7788 prefixlen 64 scopeid 0x20
RX packets 402911 bytes 1166354846 (1.0 GiB)
RX errors 0 dropped 0 overruns 0 frame 0
TX packets 286351 bytes 63608008 (60.6 MiB)
TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0

eth1: flags=4163 mtu 1500
inet6 fe80::2ad:c4ff:fe6f:778a prefixlen 64 scopeid 0x20
RX packets 4793675 bytes 390579748 (372.4 MiB)
RX errors 0 dropped 0 overruns 0 frame 0
TX packets 6627325 bytes 32167819071 (29.9 GiB)
TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0

lo: flags=73 mtu 65536
inet6 ::1 prefixlen 128 scopeid 0x10
loop txqueuelen 0 (Local Loopback)
RX packets 5342779 bytes 31375282714 (29.2 GiB)
RX errors 0 dropped 0 overruns 0 frame 0
TX packets 5342779 bytes 31375282714 (29.2 GiB)
TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0

virbr0: flags=4099 mtu 1500
ether 52:54:00:b7:65:c0 txqueuelen 0 (Ethernet)
RX packets 0 bytes 0 (0.0 B)
RX errors 0 dropped 0 overruns 0 frame 0
TX packets 0 bytes 0 (0.0 B)
TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0

vlan10: flags=4163 mtu 1500
inet6 fe80::c4d1:81ff:fec1:6006 prefixlen 64 scopeid 0x20
ether c6:d1:81:c1:60:06 txqueuelen 0 (Ethernet)
RX packets 49362 bytes 7857042 (7.4 MiB)
RX errors 0 dropped 0 overruns 0 frame 0
TX packets 52980 bytes 868430005 (828.1 MiB)
TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0

Set up VM to connect Tripleo QuickStart Overcloud via Virt-manager GUI

May 29, 2016

Set up the Gnome Desktop and virt tools on the virtualization server (VIRTHOST) and make a remote connection to virt-manager running on VIRTHOST (192.168.1.75). Then create a VM via virt-manager from a standard CentOS 7.2 ISO image. I am aware of the post "Connecting another vm to your tripleo-quickstart deployment" at oddbit.com:
http://blog.oddbit.com/2016/05/19/connecting-another-vm-to-your-tripleo-qu/
and proceed this way deliberately. I am simply wondering whether results similar to those obtained by LarsKS (via in-depth knowledge of the virsh CLI and libvirt features) can be achieved with the intuitively much more approachable virt-manager GUI. I realize that the approach suggested below loses some of the speed and flexibility of the one mentioned above.

Proceed with VM setup via the remote virt-manager GUI: attach the "external" and "overcloud" networks to the VM and assign static IPs to eth0 and eth1, which belong to the corresponding networks.
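For readers who prefer the CLI, the same two network attachments can be sketched with virsh instead of the GUI (a hypothetical illustration, not the exact commands used here; the domain name RemoteConsole is the VM created above):

```
# Attach the libvirt networks "external" and "overcloud" to the VM,
# the CLI equivalent of adding two NICs in the virt-manager GUI.
virsh attach-interface --domain RemoteConsole --type network \
      --source external  --model virtio --config
virsh attach-interface --domain RemoteConsole --type network \
      --source overcloud --model virtio --config
```

With --config the interfaces persist in the domain XML and appear after the next VM start.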

[root@ServerCentOS72 ~]# virsh net-list

 Name                 State      Autostart     Persistent
----------------------------------------------------------
 default              active     yes           yes
 external             active     yes           yes
 overcloud            active     yes           yes

Looks good; start the install.

Installation completed. The following step is verification of the ability to connect
to the overcloud from VIRTHOST. Check the static IPs on the RemoteConsole and connect
to the controller's dashboard.

Now connect to VMs running in overcloud

Switching eth1 to DHCP mode on RemoteConsole (following the post at oddbit.com)

[root@ServerCentOS72 ~]# virsh dumpxml RemoteConsole | xmllint --xpath '//interface' -
<interface type="network">
<source network="overcloud" bridge="brovc"/>
<target dev="vnet1"/>
<model type="virtio"/>
<alias name="net1"/>
<address type="pci" domain="0x0000" bus="0x00" slot="0x04" function="0x0"/>
</interface>

Creating a port on ctlplane (on the undercloud VM)
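This step follows the oddbit.com recipe: create a neutron port on the undercloud's ctlplane network bound to the VM's MAC address, so the undercloud's DHCP will answer that interface's requests. A sketch (the MAC placeholder must be replaced with RemoteConsole's eth1 hardware address):

```
# On the undercloud, as the stack user:
source ~/stackrc
# Create a port on ctlplane tied to the VM's eth1 MAC address;
# neutron will then serve a DHCP lease for that interface.
neutron port-create ctlplane --mac-address <MAC-of-RemoteConsole-eth1>
```

The IP shown in the port-create output is the one eth1 will receive once switched to DHCP.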

On RemoteConsole switch eth1 to DHCP mode via NetworkManager GUI
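If the NetworkManager GUI is unavailable, the same switch can be sketched with nmcli (assuming the connection profile is named eth1):

```
# Change eth1 from manual addressing to DHCP and re-activate it
nmcli connection modify eth1 ipv4.method auto
nmcli connection down eth1 && nmcli connection up eth1
```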

We are all set

RDO Triple0 QuickStart && First impressions

May 27, 2016

I believe the post below will shed some more light on the TripleO QuickStart
procedure suggested on the RDO QuickStart page (32 GB of memory is a must; even a minimal-configuration run requires about 23 GB of RAM). I followed tips from "Deploying OpenStack on just one hosted server".
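For completeness, the QuickStart procedure referred to above boils down to running the quickstart script from a workstation against VIRTHOST (a sketch of the steps from the RDO QuickStart page; the VIRTHOST IP matches the one used later in this post):

```
# Download and run tripleo-quickstart against the virtualization host;
# it provisions the undercloud VM and the overcloud nodes there.
wget https://raw.githubusercontent.com/openstack/tripleo-quickstart/master/quickstart.sh
bash quickstart.sh --tags all 192.168.1.75
```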

Overcloud deployed.

************************************************************************
First of all, take a look at the interfaces and routing tables on the undercloud VM
************************************************************************

[root@undercloud ~]# ifconfig

br-ctlplane: flags=4163  mtu 1500

inet6 fe80::285:8cff:feee:4c12  prefixlen 64  scopeid 0x20
ether 00:85:8c:ee:4c:12  txqueuelen 0  (Ethernet)
RX packets 5458173  bytes 430801023 (410.8 MiB)
RX errors 0  dropped 0  overruns 0  frame 0
TX packets 8562456  bytes 31493865046 (29.3 GiB)
TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0

eth0: flags=4163  mtu 1500
inet6 fe80::285:8cff:feee:4c10  prefixlen 64  scopeid 0x20
ether 00:85:8c:ee:4c:10  txqueuelen 1000  (Ethernet)
RX packets 4550861  bytes 7090076105 (6.6 GiB)
RX errors 0  dropped 0  overruns 0  frame 0
TX packets 1597556  bytes 760511620 (725.2 MiB)
TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0

eth1: flags=4163  mtu 1500
inet6 fe80::285:8cff:feee:4c12  prefixlen 64  scopeid 0x20
ether 00:85:8c:ee:4c:12  txqueuelen 1000  (Ethernet)
RX packets 5459780  bytes 430920997 (410.9 MiB)
RX errors 0  dropped 0  overruns 0  frame 0
TX packets 8564443  bytes 31494029129 (29.3 GiB)
TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0

lo: flags=73  mtu 65536
inet6 ::1  prefixlen 128  scopeid 0x10
loop  txqueuelen 0  (Local Loopback)
RX packets 4361647  bytes 24858373851 (23.1 GiB)
RX errors 0  dropped 0  overruns 0  frame 0
TX packets 4361647  bytes 24858373851 (23.1 GiB)
TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0

virbr0: flags=4099  mtu 1500
ether 52:54:00:39:0a:ae  txqueuelen 0  (Ethernet)
RX packets 0  bytes 0 (0.0 B)
RX errors 0  dropped 0  overruns 0  frame 0
TX packets 0  bytes 0 (0.0 B)
TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0

vlan10: flags=4163  mtu 1500
inet6 fe80::804e:69ff:fe19:844b  prefixlen 64  scopeid 0x20
ether 82:4e:69:19:84:4b  txqueuelen 0  (Ethernet)
RX packets 0  bytes 0 (0.0 B)
RX errors 0  dropped 0  overruns 0  frame 0
TX packets 12  bytes 816 (816.0 B)
TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0

[root@undercloud ~]# ip route
default via 192.168.23.1 dev eth0
10.0.0.0/24 dev vlan10  proto kernel  scope link  src 10.0.0.1
192.0.2.0/24 dev br-ctlplane  proto kernel  scope link  src 192.0.2.1
192.168.23.0/24 dev eth0  proto kernel  scope link  src 192.168.23.28
192.168.122.0/24 dev virbr0  proto kernel  scope link  src 192.168.122.1

[root@undercloud ~]# ovs-vsctl show
83b044ee-44ac-4575-88b3-4951a6e9847f
Bridge br-int
fail_mode: secure
Port "tap41a7c72c-39"
tag: 1
Interface "tap41a7c72c-39"
type: internal
Port int-br-ctlplane
Interface int-br-ctlplane
type: patch
options: {peer=phy-br-ctlplane}
Port br-int
Interface br-int
type: internal
Bridge br-ctlplane
Port "vlan10"
tag: 10
Interface "vlan10"
type: internal
Port phy-br-ctlplane
Interface phy-br-ctlplane
type: patch
options: {peer=int-br-ctlplane}
Port "eth1"
Interface "eth1"
Port br-ctlplane
Interface br-ctlplane
type: internal
ovs_version: "2.5.0"

*********************************************************
Here are admin credentials for overcloud controller
*********************************************************

[stack@undercloud ~]\$ cat overcloudrc
export OS_NO_CACHE=True
export OS_CLOUDNAME=overcloud
export OS_AUTH_URL=http://192.0.2.10:5000/v2.0
export NOVA_VERSION=1.1
export COMPUTE_API_VERSION=1.1
export no_proxy=,192.0.2.10,192.0.2.10
export PYTHONWARNINGS="ignore:Certificate has no, ignore:A true SSLContext object is not available"

*******************************
At the same time on VIRTHOST
*******************************

[root@ServerCentOS72 ~]# virsh net-list

 Name                 State      Autostart     Persistent
----------------------------------------------------------
 default              active     yes           yes
 external             active     yes           yes
 overcloud            active     yes           yes

[root@ServerCentOS72 ~]# virsh net-dumpxml external

<network>
<name>external</name>
<uuid>d585615b-c1c5-4e30-bf2d-ea247591c2b0</uuid>
<forward mode='nat'>
<nat>
<port start='1024' end='65535'/>
</nat>
</forward>
<bridge name='brext' stp='off' delay='0'/>
<ip address='192.168.23.1' netmask='255.255.255.0'>
<dhcp>
<range start='192.168.23.10' end='192.168.23.50'/>
</dhcp>
</ip>
</network>

[root@ServerCentOS72 ~]# su – stack

Last login: Thu May 26 18:01:31 MSK 2016 on :0

[stack@ServerCentOS72 ~]\$ virsh list
 Id    Name                           State
----------------------------------------------------
 2     undercloud                     running
 11    compute_0                      running
 12    control_0                      running

*************************************************************************
Source stackrc and run openstack-status on undercloud
Overcloud deployment is already done on undercloud VM
*************************************************************************

[root@undercloud ~]# . stackrc
[root@undercloud ~]# openstack-status

== Nova services ==

openstack-nova-api:                     active
openstack-nova-compute:                 active
openstack-nova-network:                 inactive  (disabled on boot)
openstack-nova-scheduler:               active
openstack-nova-cert:                    active
openstack-nova-conductor:               active
openstack-nova-console:                 inactive  (disabled on boot)
openstack-nova-consoleauth:             inactive  (disabled on boot)
openstack-nova-xvpvncproxy:             inactive  (disabled on boot)

== Glance services ==

openstack-glance-api:                   active
openstack-glance-registry:              active
== Keystone service ==

openstack-keystone:                     inactive  (disabled on boot)

== Horizon service ==
openstack-dashboard:                    404
== neutron services ==

neutron-server:                         active
neutron-dhcp-agent:                     active
neutron-l3-agent:                       inactive  (disabled on boot)
neutron-lbaas-agent:                    inactive  (disabled on boot)
neutron-openvswitch-agent:              active
neutron-metering-agent:                 inactive  (disabled on boot)

== Swift services ==

openstack-swift-proxy:                  active
openstack-swift-account:                active
openstack-swift-container:              active
openstack-swift-object:                 active

== Cinder services ==

openstack-cinder-api:                   inactive  (disabled on boot)
openstack-cinder-scheduler:             inactive  (disabled on boot)
openstack-cinder-volume:                inactive  (disabled on boot)
openstack-cinder-backup:                inactive  (disabled on boot)

== Ceilometer services ==

openstack-ceilometer-api:               active
openstack-ceilometer-central:           active
openstack-ceilometer-compute:           inactive  (disabled on boot)
openstack-ceilometer-collector:         active

== Heat services ==
openstack-heat-api:                     active
openstack-heat-api-cfn:                 active
openstack-heat-api-cloudwatch:          inactive  (disabled on boot)
openstack-heat-engine:                  active

== Sahara services ==

openstack-sahara-all:                   inactive  (disabled on boot)

== Ironic services ==

openstack-ironic-api:                   active
openstack-ironic-conductor:             active

== Support services ==

mysqld:                                 inactive  (disabled on boot)
libvirtd:                               active
openvswitch:                            active
dbus:                                   active
rabbitmq-server:                        active
memcached:                              active

== Keystone users ==

+----------------------------------+------------------+---------+-----------------------------------+
|                id                |       name       | enabled |               email               |
+----------------------------------+------------------+---------+-----------------------------------+
| c1668084d057422ab21c9180424b3e4a |      admin       |   True  |           root@localhost          |
| db938fe459c94cd09fe227a118f8be0f |       aodh       |   True  |           aodh@localhost          |
| 001a56a0872048a592db95dc9885292d |    ceilometer    |   True  |        ceilometer@localhost       |
| e038f5b685b84e6aa601b37312d84a56 |      glance      |   True  |          glance@localhost         |
| d7ddbfd73b814c13926c1ecd5ebe1bb2 |       heat       |   True  |           heat@localhost          |
| dc784308498d40568b649fbf12eaeb51 |      ironic      |   True  |          ironic@localhost         |
| 0c1f829c533240cdbec944236048ee1a | ironic-inspector |   True  | baremetal-introspection@localhost |
| ddbcb1dd885845c698f8d65f6f9ff44f |     neutron      |   True  |         neutron@localhost         |
| 987bd356963e4a5cbf2bd50c50919f9b |       nova       |   True  |           nova@localhost          |
| a5c862796ef24615afc2881e1a59f9d5 |      swift       |   True  |          swift@localhost          |
+----------------------------------+------------------+---------+-----------------------------------+

== Glance images ==

+--------------------------------------+------------------------+-------------+------------------+------------+--------+
| ID                                   | Name                   | Disk Format | Container Format | Size       | Status |
+--------------------------------------+------------------------+-------------+------------------+------------+--------+
| c734ff64-7723-43ee-a5d2-d662e1e206eb | bm-deploy-kernel       | aki         | aki              | 5157360    | active |
| f80e32c4-cfce-4dcc-993a-939800440fbf | bm-deploy-ramdisk      | ari         | ari              | 380554146  | active |
| 8616adc8-7136-4536-8562-5ed9cf129ed2 | overcloud-full         | qcow2       | bare             | 1175351296 | active |
| 73f5bfc7-99c2-46dc-8507-e5978ec61b84 | overcloud-full-initrd  | ari         | ari              | 36444678   | active |
| 0d30aa5d-869c-4716-bdd4-87685e4790ca | overcloud-full-vmlinuz | aki         | aki              | 5157360    | active |
+--------------------------------------+------------------------+-------------+------------------+------------+--------+

== Nova managed services ==

+----+----------------+------------+----------+---------+-------+----------------------------+-----------------+
| Id | Binary         | Host       | Zone     | Status  | State | Updated_at                 | Disabled Reason |
+----+----------------+------------+----------+---------+-------+----------------------------+-----------------+
| 1  | nova-cert      | undercloud | internal | enabled | up    | 2016-05-26T18:41:57.000000 | -               |
| 7  | nova-scheduler | undercloud | internal | enabled | up    | 2016-05-26T18:41:55.000000 | -               |
| 8  | nova-conductor | undercloud | internal | enabled | up    | 2016-05-26T18:41:56.000000 | -               |
| 10 | nova-compute   | undercloud | nova     | enabled | up    | 2016-05-26T18:41:54.000000 | -               |
+----+----------------+------------+----------+---------+-------+----------------------------+-----------------+

== Nova networks ==
+--------------------------------------+----------+------+
| ID                                   | Label    | Cidr |
+--------------------------------------+----------+------+
| c27b8d62-f838-4c7e-8828-64ae1503f4c4 | ctlplane | -    |
+--------------------------------------+----------+------+

== Nova instance flavors ==

+————————————–+—————+———–+——+———–+——+——-+————-+———–+
| ID                                   | Name          | Memory_MB | Disk | Ephemeral | Swap | VCPUs | RXTX_Factor | Is_Public |
+————————————–+—————+———–+——+———–+——+——-+————-+———–+
| 1320d766-7051-4639-9554-a42e7c7fd958 | control       | 4096      | 40   | 0         |      | 1     | 1.0         | True      |
| 1b0ad845-6273-437f-8573-e4922a256ec7 | block-storage | 4096      | 40   | 0         |      | 1     | 1.0         | True      |
| 27a0e9ee-c909-4d7d-8e86-1eb2e61fb1cb | oooq_control  | 8192      | 49   | 0         |      | 1     | 1.0         | True      |
| 40057aa6-5e8b-4d4b-85d4-f21418d01b5d | baremetal     | 4096      | 40   | 0         |      | 1     | 1.0         | True      |
| 5750def3-dc08-43dd-b194-02d4ea73b8d7 | compute       | 4096      | 40   | 0         |      | 1     | 1.0         | True      |
| 769969da-f429-4f5f-84c9-6456f39539f8 | ceph-storage  | 4096      | 40   | 0         |      | 1     | 1.0         | True      |
| 9c1622bc-ee0f-4dfa-a988-1e89cad47015 | oooq_compute  | 8192      | 49   | 0         |      | 1     | 1.0         | True      |
| a2e5a055-3334-4080-86f9-4887931aee22 | swift-storage | 4096      | 40   | 0         |      | 1     | 1.0         | True      |
| b05b3c15-7928-4f59-9f8d-7d3947e19bee | oooq_ceph     | 8192      | 49   | 0         |      | 1     | 1.0         | True      |
+————————————–+—————+———–+——+———–+——+——-+————-+———–+

== Nova instances ==

+————————————–+————————-+———————————-+——–+————+————-+———————+
| ID                                   | Name                    | Tenant ID                        | Status | Task State | Power State | Networks            |
+————————————–+————————-+———————————-+——–+————+————-+———————+
| 88f841ac-1ca0-4339-ba8a-c2895c0dc57c | overcloud-controller-0  | ccf0e5fdbebb4335ad7875ec821af91d | ACTIVE | –          | Running     | ctlplane=192.0.2.13 |
| f12a1086-7e23-4acb-80a7-8b2efe1e4ef2 | overcloud-novacompute-0 | ccf0e5fdbebb4335ad7875ec821af91d | ACTIVE | –          | Running     | ctlplane=192.0.2.12 |
+————————————–+————————-+———————————-+——–+————+————-+———————+

******************************************************
Neutron reports on undercloud VM
******************************************************

[root@undercloud ~]# neutron net-list

+————————————–+———-+——————————————+
| id                                   | name     | subnets                                  |
+————————————–+———-+——————————————+
| c27b8d62-f838-4c7e-8828-64ae1503f4c4 | ctlplane | 631022c3-cfc5-4353-b038-1592cceea57e     |
|                                      |          | 192.0.2.0/24                             |
+————————————–+———-+——————————————+

[root@undercloud ~]# neutron net-show ctlplane

+—————————+————————————–+
| Field                     | Value                                |
+—————————+————————————–+
| availability_zone_hints   |                                      |
| availability_zones        | nova                                 |
| created_at                | 2016-05-26T11:32:18                  |
| description               |                                      |
| id                        | c27b8d62-f838-4c7e-8828-64ae1503f4c4 |
| mtu                       | 1500                                 |
| name                      | ctlplane                             |
| provider:network_type     | flat                                 |
| provider:physical_network | ctlplane                             |
| provider:segmentation_id  |                                      |
| router:external           | False                                |
| shared                    | False                                |
| status                    | ACTIVE                               |
| subnets                   | 631022c3-cfc5-4353-b038-1592cceea57e |
| tags                      |                                      |
| updated_at                | 2016-05-26T11:32:18                  |
+—————————+————————————–+

[root@undercloud ~]# neutron subnet-list

+————————————+——+————–+————————————+
| id                                 | name | cidr         | allocation_pools                   |
+————————————+——+————–+————————————+
| 631022c3-cfc5-4353-b038-1592cceea5 |      | 192.0.2.0/24 | {"start": "192.0.2.5", "end":      |
| 7e                                 |      |              | "192.0.2.30"}                      |
+————————————+——+————–+————————————+

[root@undercloud ~]# neutron subnet-show 631022c3-cfc5-4353-b038-1592cceea57e

+——————-+—————————————————————+
| Field             | Value                                                         |
+——————-+—————————————————————+
| allocation_pools  | {"start": "192.0.2.5", "end": "192.0.2.30"}                   |
| cidr              | 192.0.2.0/24                                                  |
| created_at        | 2016-05-26T11:32:18                                           |
| description       |                                                               |
| dns_nameservers   |                                                               |
| enable_dhcp       | True                                                          |
| gateway_ip        | 192.0.2.1                                                     |
| host_routes       | {"destination": "169.254.169.254/32", "nexthop": "192.0.2.1"} |
| id                | 631022c3-cfc5-4353-b038-1592cceea57e                          |
| ip_version        | 4                                                             |
| ipv6_ra_mode      |                                                               |
| name              |                                                               |
| network_id        | c27b8d62-f838-4c7e-8828-64ae1503f4c4                          |
| subnetpool_id     |                                                               |
| updated_at        | 2016-05-26T11:32:18                                           |
+——————-+—————————————————————+

**********************************************
When overcloud deployment is done
**********************************************

[stack@undercloud ~]$ heat stack-list

+————————————–+————+—————–+———————+————–+
| id                                   | stack_name | stack_status    | creation_time       | updated_time |
+————————————–+————+—————–+———————+————–+
| 7002392b-cd2d-439f-b3cd-024979f153a5 | overcloud  | CREATE_COMPLETE | 2016-05-26T13:35:17 | None         |
+————————————–+————+—————–+———————+————–+

[stack@undercloud ~]$ nova list

+————————————–+————————-+——–+————+————-+———————+
| ID                                   | Name                    | Status | Task State | Power State | Networks            |
+————————————–+————————-+——–+————+————-+———————+
| 88f841ac-1ca0-4339-ba8a-c2895c0dc57c | overcloud-controller-0  | ACTIVE | –          | Running     | ctlplane=192.0.2.13 |
| f12a1086-7e23-4acb-80a7-8b2efe1e4ef2 | overcloud-novacompute-0 | ACTIVE | –          | Running     | ctlplane=192.0.2.12 |
+————————————–+————————-+——–+————+————-+———————+

*******************************************
*******************************************

Last login: Thu May 26 16:52:28 2016 from gateway
Last login: Thu May 26 15:42:23 UTC 2016 on pts/0

[root@overcloud-controller-0 ~]# ls

[root@overcloud-controller-0 ~]# ifconfig

br-ex: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1500
inet6 fe80::2f7:7fff:fe1a:ca59  prefixlen 64  scopeid 0x20<link>
ether 00:f7:7f:1a:ca:59  txqueuelen 0  (Ethernet)
RX packets 689651  bytes 1362839189 (1.2 GiB)
RX errors 0  dropped 0  overruns 0  frame 0
TX packets 2346450  bytes 3243444405 (3.0 GiB)
TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0

eth0: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1500
inet6 fe80::2f7:7fff:fe1a:ca59  prefixlen 64  scopeid 0x20<link>
ether 00:f7:7f:1a:ca:59  txqueuelen 1000  (Ethernet)
RX packets 2783352  bytes 4201989574 (3.9 GiB)
RX errors 0  dropped 0  overruns 0  frame 0
TX packets 2876264  bytes 3280863833 (3.0 GiB)
TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0

lo: flags=73<UP,LOOPBACK,RUNNING>  mtu 65536
inet6 ::1  prefixlen 128  scopeid 0x10<host>
loop  txqueuelen 0  (Local Loopback)
RX packets 2962545  bytes 8418607495 (7.8 GiB)
RX errors 0  dropped 0  overruns 0  frame 0
TX packets 2962545  bytes 8418607495 (7.8 GiB)
TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0

[root@overcloud-controller-0 ~]# ovs-vsctl show
cc8be4fb-f96f-4679-b85d-d0afc7dd7f72
Bridge br-int
fail_mode: secure
Port "tapb86d48f2-45"
tag: 2
Interface "tapb86d48f2-45"
type: internal
Port "tapa4fa2a9d-a4"
tag: 3
Interface "tapa4fa2a9d-a4"
type: internal
Port "qr-eb92ffa9-da"
tag: 2
Interface "qr-eb92ffa9-da"
type: internal
Port "qr-e8146f9f-51"
tag: 3
Interface "qr-e8146f9f-51"
type: internal
Port br-int
Interface br-int
type: internal
Port patch-tun
Interface patch-tun
type: patch
options: {peer=patch-int}
Port int-br-ex
Interface int-br-ex
type: patch
options: {peer=phy-br-ex}
Bridge br-tun
fail_mode: secure
Port "vxlan-c000020c"
Interface "vxlan-c000020c"
type: vxlan
options: {df_default="true", in_key=flow, local_ip="192.0.2.13", out_key=flow, remote_ip="192.0.2.12"}
Port patch-int
Interface patch-int
type: patch
options: {peer=patch-tun}
Port br-tun
Interface br-tun
type: internal
Bridge br-ex
Port phy-br-ex
Interface phy-br-ex
type: patch
options: {peer=int-br-ex}
Port "qg-df23145d-8f"
Interface "qg-df23145d-8f"
type: internal
Port "qg-53315134-1d"
Interface "qg-53315134-1d"
type: internal
Port br-ex
Interface br-ex
type: internal
Port "eth0"
Interface "eth0"
ovs_version: "2.5.0"

***************************************************
Routing table on overcloud controller
***************************************************

[root@overcloud-controller-0 ~]# ip route
default via 192.0.2.1 dev br-ex  proto static
169.254.169.254 via 192.0.2.1 dev br-ex  proto static
192.0.2.0/24 dev br-ex  proto kernel  scope link  src 192.0.2.13

Network topology

[root@overcloud-controller-0 ~]# neutron net-list

+————————————–+————–+—————————————-+
| id                                   | name         | subnets                                |
+————————————–+————–+—————————————-+
| 1dad601c-c865-41d8-94cb-efc634c1fc83 | public       | 12787d8b-1b72-402d-9b93-2821f0a18b7b   |
|                                      |              | 192.0.2.0/24                           |
| 0086836e-2dc3-4d40-a2e2-21f222b159f4 | demo_network | dcc40bfc-9293-47bb-8788-d4b5f090d076   |
|                                      |              | 90.0.0.0/24                            |
| 59168b6e-adca-4ec6-982a-f94a0eb770c8 | private      | ede9bbc2-5099-4d9f-91af-2fd4387d52be   |
|                                      |              | 50.0.0.0/24                            |
+————————————–+————–+—————————————-+

[root@overcloud-controller-0 ~]# nova service-list

+—-+——————+————————————-+———-+———+——-+—————————-+—————–+
| Id | Binary           | Host                                | Zone     | Status  | State | Updated_at                 | Disabled Reason |
+—-+——————+————————————-+———-+———+——-+—————————-+—————–+
| 1  | nova-cert        | overcloud-controller-0              | internal | enabled | up    | 2016-05-26T17:09:20.000000 | –               |
| 2  | nova-consoleauth | overcloud-controller-0              | internal | enabled | up    | 2016-05-26T17:09:20.000000 | –               |
| 5  | nova-scheduler   | overcloud-controller-0              | internal | enabled | up    | 2016-05-26T17:09:22.000000 | –               |
| 6  | nova-conductor   | overcloud-controller-0              | internal | enabled | up    | 2016-05-26T17:09:24.000000 | –               |
| 7  | nova-compute     | overcloud-novacompute-0.localdomain | nova     | enabled | up    | 2016-05-26T17:09:19.000000 | –               |
+—-+——————+————————————-+———-+———+——-+—————————-+—————–+

Running VMs

*************************************************************************
Verification of outbound connectivity. Connecting from the undercloud VM
to VMs running in the overcloud via floating IPs belonging to 192.0.2.0/24
*************************************************************************

********************************************************
``ip netns`` on overcloud controller
********************************************************

Even a minimal configuration won't work with 16 GB of RAM.
Server memory allocation for the minimal virtual environment:

Backport upstream commits to stable RDO Mitaka release && Deployments with Keystone API V3

May 23, 2016

The post below is written with the intent of avoiding the wait until a "koji" build appears in the updates repo of the stable RDO Mitaka release, which might take a couple of months or so. It actually doesn't require knowing how to write a proper RH source rpm file. It just takes picking up the raw content of the git commits from the upstream git repo, converting them into patches, and rebuilding the required src.rpm(s) with the needed patch(es). There is also the not commonly known command `rpm -qf`, which is very useful when you need to detect which rpm installed a particular file, e.g. to know which src.rpm should be downloaded for a git commit referencing, say, "cinder.rb":

[root@ServerCentOS72 /]# find . -name cinder.rb -print
find: ‘./run/user/1000/gvfs’: Permission denied
./usr/share/openstack-puppet/modules/cinder/lib/puppet/provider/cinder.rb

[root@ServerCentOS72 /]# rpm -qf /usr/share/openstack-puppet/modules/cinder/lib/puppet/provider/cinder.rb
openstack-puppet-modules-8.0.4-2.el7.centos.noarch

*******************************
*******************************

1. https://cbs.centos.org/koji/buildinfo?buildID=10895
openstack-packstack-8.0.0-1.el7.src.rpm

2. https://cbs.centos.org/koji/buildinfo?buildID=10859
openstack-puppet-modules-8.0.4-1.el7.src.rpm

total 3116
-rw-rw-r--. 1 boris boris  170107 May 21 21:26 openstack-packstack-8.0.0-1.el7.src.rpm
-rw-rw-r--. 1 boris boris 3015046 May 21 18:33 openstack-puppet-modules-8.0.4-1.el7.src.rpm

****************
Then run :-
****************

$ rpm -iv openstack-packstack-8.0.0-1.el7.src.rpm
$ rpm -iv openstack-puppet-modules-8.0.4-1.el7.src.rpm
$ cd ../rpmbuild

In the folder ~boris/rpmbuild/SOURCES
create two patch files :-
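One way to produce such patch files is `git format-patch` against the upstream repo. Below is a self-contained sketch on a throwaway repo; the file name, commit message, and resulting patch name are illustrative only, not the actual upstream commits:

```shell
# Throwaway repo standing in for the upstream checkout; in practice you
# would clone the upstream repo and point format-patch at the wanted commit.
tmp=$(mktemp -d) && cd "$tmp"
git init -q repo && cd repo
git -c user.email=a@b -c user.name=a commit -q --allow-empty -m "base"
echo 'fixed provider' > cinder.rb
git add cinder.rb
git -c user.email=a@b -c user.name=a commit -q -m "Fix cinder provider"
# Emits 0001-Fix-cinder-provider.patch, ready to drop into ~/rpmbuild/SOURCES
git format-patch -1 HEAD
```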

********************************************************************
In the second patch file, insert "cinder" into the path to the *.rb files
********************************************************************

diff --git a/cinder/lib/puppet/provider/cinder_type/openstack.rb b/cinder/lib/puppet/provider/cinder_type/openstack.rb
index feaea49..9aa31c5 100644
--- a/cinder/lib/puppet/provider/cinder_type/openstack.rb
+++ b/cinder/lib/puppet/provider/cinder_type/openstack.rb
@@ -32,6 +32,10 @@ class Puppet::Provider::Cinder < Puppet::Provider::Openstack

. . . . .

diff --git a/cinder/lib/puppet/provider/cinder_type/openstack.rb b/cinder/lib/puppet/provider/cinder_type/openstack.rb
index feaea49..9aa31c5 100644
--- a/cinder/lib/puppet/provider/cinder_type/openstack.rb
+++ b/cinder/lib/puppet/provider/cinder_type/openstack.rb
@@ -7,7 +7,7 @@ Puppet::Type.type(:cinder_type).provide(

. . . . . .

diff --git a/cinder/spec/unit/provider/cinder_spec.rb b/cinder/spec/unit/provider/cinder_spec.rb
index cfc8850..246ae58 100644
--- a/cinder/spec/unit/provider/cinder_spec.rb
+++ b/cinder/spec/unit/provider/cinder_spec.rb
@@ -24,10 +24,12 @@ describe Puppet::Provider::Cinder do

Finally, the SOURCES folder would look like this :-

**********************
Next step is :-
**********************

$ cd ../SPECS

and update the *.spec files so that the patches placed into the SOURCES folder
are applied to the corresponding *.tar.gz archives before the build phase itself.
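For reference, the kind of lines assumed to be added to each spec file; the patch file name here is hypothetical:

```
Patch0:         0001-Fix-cinder-provider.patch

%prep
%setup -n packstack-%{upstream_version}
%patch0 -p1
```

The `Patch0:` tag declares the file from SOURCES, and `%patch0 -p1` applies it right after `%setup` unpacks the tarball.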

*****************************************
First openstack-packstack.spec :-
*****************************************

Name:           openstack-packstack
Version:        8.0.0
Release:        2%{?milestone}%{?dist}   <== increase 1 to 2
Summary:        Openstack Install Utility
Group:          Applications/System
URL:            https://github.com/openstack/packstack
Source0:        http://tarballs.openstack.org/packstack/packstack-%{upstream_version}.tar.gz

. . . . . .

%prep
%setup -n packstack-%{upstream_version}
:wq

*****************************************
Second openstack-puppet-modules.spec
*****************************************

Name:           openstack-puppet-modules
Epoch:          1
Version:        8.0.4
Release:        2%{?milestone}%{?dist}  <===  increase 1 to 2
Summary:        Puppet modules used to deploy OpenStack
License:        ASL 2.0 and GPLv2 and GPLv3
URL:         https://github.com/redhat-openstack
Source0:    https://github.com/redhat-openstack/%{name}/archive/%{upstream_version}.tar.gz

. . . . .

%prep
%setup -q -n %{name}-%{?upstream_version}
:wq

******************************************
Attempt rpmbuild for each spec file
******************************************

$ rpmbuild -bb openstack-packstack.spec
$ rpmbuild -bb openstack-puppet-modules.spec

If a particular build is missing some packages, it will report their names on the screen.
These packages can usually be installed via yum; otherwise you have a problem
with the local build.
If each build's output finishes with a message like

```
Wrote: /home/boris/rpmbuild/RPMS/noarch/openstack-puppet-modules-8.0.4-2.el7.centos.noarch.rpm
Executing(%clean): /bin/sh -e /var/tmp/rpm-tmp.wX6p3q
+ cd /home/boris/rpmbuild/BUILD
+ cd openstack-puppet-modules-8.0.4
+ /usr/bin/rm -rf /home/boris/rpmbuild/BUILDROOT/openstack-puppet-modules-8.0.4-2.el7.centos.x86_64
+ exit 0
```

then everything went fine. In this particular case the results are written
to ../RPMS/noarch

Then

$ cd ../RPMS/noarch

and create installation script

[boris@ServerCentOS72 SPECS]$ cd ../RPMS/noarch

[boris@ServerCentOS72 noarch]$ ls -l
total 3428
-rwxrwxr-x. 1 boris boris     239 May 21 21:40 install
-rw-rw-r--. 1 boris boris  247312 May 21 21:34 openstack-packstack-8.0.0-2.el7.centos.noarch.rpm
-rw-rw-r--. 1 boris boris   17376 May 21 21:34 openstack-packstack-doc-8.0.0-2.el7.centos.noarch.rpm
-rw-rw-r--. 1 boris boris   16792 May 21 21:34 openstack-packstack-puppet-8.0.0-2.el7.centos.noarch.rpm
-rw-rw-r--. 1 boris boris 3212844 May 21 21:38 openstack-puppet-modules-8.0.4-2.el7.centos.noarch.rpm

[boris@ServerCentOS72 noarch]$ cat install

sudo yum install openstack-packstack-8.0.0-2.el7.centos.noarch.rpm \
openstack-packstack-doc-8.0.0-2.el7.centos.noarch.rpm \
openstack-packstack-puppet-8.0.0-2.el7.centos.noarch.rpm \
openstack-puppet-modules-8.0.4-2.el7.centos.noarch.rpm

****************************
Run install :-
****************************

[boris@ServerCentOS72 noarch]$ ./install

Due to the increased release (1=>2), the old rpms will be replaced by the ones just built

[root@ServerCentOS72 ~]# rpm -qa  \*openstack-packstack\*
openstack-packstack-doc-8.0.0-2.el7.centos.noarch
openstack-packstack-puppet-8.0.0-2.el7.centos.noarch
openstack-packstack-8.0.0-2.el7.centos.noarch

[root@ServerCentOS72 ~]# rpm -qa \*openstack-puppet-modules\*
openstack-puppet-modules-8.0.4-2.el7.centos.noarch

****************************************************************
****************************************************************
# Identity service API version string. ['v2.0', 'v3']
CONFIG_KEYSTONE_API_VERSION=v3
won't cause the cinder puppet module to crash the packstack run, regardless of the kind of deployment

Creating functional ssh key-pair on RDO Mitaka via Chrome Advanced REST Client

May 2, 2016

The problem here is that the REST API POST request creating an ssh keypair for access to nova servers doesn't write the rsa private key to disk; it only uploads the public one to nova. Closing the Chrome client therefore means losing the rsa private key. To prevent this, save response-export.json as shown below. Working via the CLI (invoking curl) allows uploading the rsa public key to Nova and writing the rsa private key to a file :-

```#!/bin/bash -x
curl -g -i -X POST \
http://192.169.142.127:8774/v2/052b16e56537467d8161266b52a43b54/os-keypairs \
-H "User-Agent: python-novaclient" \
-H "Content-Type: application/json" -H "Accept: application/json" \
-H "X-Auth-Token: 2ae281359a8f4b249d5e8cf36c4233c0" -d  \
'{"keypair": {"name": "oskey2"}}' |  tail -1 >output.json ;
echo "Generating rsa private key for server access as file";
echo "-----BEGIN RSA PRIVATE KEY-----" >  oskey2.pem ;
sed 's/\\n/\
/g' <  output.json | grep -v "keypair" | grep -v "user_id" >>oskey2.pem ;
chmod 600 oskey2.pem
```
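A quick sanity check on any key written this way: `ssh-keygen -y` re-derives the public key from a private key file and exits non-zero on a malformed PEM. The demo below is self-contained (it generates a throwaway key rather than assuming oskey2.pem exists); run the same `-y` check against oskey2.pem to confirm the sed reconstruction worked.

```shell
# Generate a throwaway RSA key, then verify it the way you would verify
# oskey2.pem: ssh-keygen -y fails loudly if the PEM file is broken.
key=$(mktemp -u)
ssh-keygen -q -t rsa -b 2048 -N '' -f "$key"
ssh-keygen -y -f "$key"      # prints "ssh-rsa AAAA..." when the PEM is valid
rm -f "$key" "$key.pub"
```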

To start (in a keystone api v3 environment), obtain the project's scoped token via the request

[root@ip-192-169-142-127 ~(keystone_admin)]# curl -i -H "Content-Type: application/json" -d '
{ "auth": {
    "identity": {
      "methods": ["password"],
      "password": {
        "user": {
          "name": "admin",
          "domain": { "id": "default" },
          "password": "7049f834927e4468"
        }
      }
    },
    "scope": {
      "project": {
        "name": "demo",
        "domain": { "id": "default" }
      }
    }
  }
}'  http://192.169.142.127:5000/v3/auth/tokens ; echo

HTTP/1.1 201 Created
Date: Mon, 02 May 2016 10:41:25 GMT
Server: Apache/2.4.6 (CentOS)
X-Subject-Token: xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx  <= token value
Vary: X-Auth-Token
x-openstack-request-id: req-bed4f407-8cbd-4d43-acd5-7450d028bc45
Content-Length: 5791
Connection: close

Content-Type: application/json
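Note that the scoped token arrives in the X-Subject-Token response header, not in the JSON body. A small awk filter pulls it out; the sample headers and token value below are made up for illustration — in practice pipe the output of the `curl -i` call above through the same filter.

```shell
# Sample response headers inlined for illustration only.
printf 'HTTP/1.1 201 Created\nX-Subject-Token: abc123token\nVary: X-Auth-Token\n' \
  | awk '/^X-Subject-Token:/ {print $2}'
# prints: abc123token
```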

*******************************************************************************
Then run the script extracting the rsa private key from response-export.json
*******************************************************************************

#!/bin/bash -x
echo "Generating private key for server access"
echo "-----BEGIN RSA PRIVATE KEY-----" > $1.pem
sed 's/\\n/\
/g' <  response-export.json | grep -v "keypair" | grep -v "user_id" >> $1.pem
chmod 600 $1.pem

like :-

# ./filter.sh oskeymitakaV3

***********************************
Shell command [ 1 ]  :-
***********************************

sed 's/\\n/\
/g' <  response-export.json

will replace each literal '\n' sequence in response-export.json with a real newline.
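The same substitution can be exercised on an inline sample (the printf line and sample text are illustrative only):

```shell
# The JSON body stores the key as one line containing literal "\n"
# two-character sequences; the backslash-newline inside the sed script is
# how a newline is written in a POSIX sed replacement.
printf '%s' 'AAAA\nBBBB\nCCCC' | sed 's/\\n/\
/g'
# prints AAAA, BBBB and CCCC on three separate lines
```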

Now login to dashboard and verify that rsa public key gets uploaded

Relaunch Chrome Advanced Rest Client and launch server with
"key_name" : "oskeymitakaV3"

******************************************************************************
Login to server using rsa private key  oskeymitakaV3.pem
******************************************************************************

[boris@fedora23wks json]$ ssh -i oskeymitakaV3.pem ubuntu@192.169.142.169

The authenticity of host '192.169.142.169 (192.169.142.169)' can't be established.
ECDSA key fingerprint is SHA256:khfhZEHHwz7T18oIlKMCKWKY9b6ctsS8XMW5ZpVlRa8.
ECDSA key fingerprint is MD5:25:98:50:9f:b3:37:f3:a1:ed:95:5d:44:f4:03:13:14.
Are you sure you want to continue connecting (yes/no)? yes
Warning: Permanently added '192.169.142.169' (ECDSA) to the list of known hosts.
Welcome to Ubuntu 16.04 LTS (GNU/Linux 4.4.0-21-generic x86_64)
* Documentation:  https://help.ubuntu.com/
Get cloud support with Ubuntu Advantage Cloud Guest:
0 packages can be updated.
The programs included with the Ubuntu system are free software;
the exact distribution terms for each program are described in the

Ubuntu comes with ABSOLUTELY NO WARRANTY, to the extent permitted by
applicable law.
To run a command as administrator (user “root”), use “sudo “.
See “man sudo_root” for details.
ubuntu@ubuntuxenialdevs:~$

Creating Servers via REST API on RDO Mitaka && Keystone API V3

April 29, 2016

As usual, an ssh keypair for a particular tenant is supposed to be created by sourcing that tenant's credentials, and afterwards it works for that tenant. For some reason, upgrading the keystone api version to v3 breaks this schema with regard to REST API POST requests issued for server creation. I am not sure whether what follows below is a workaround or it is supposed to work this way.

[root@ip-192-169-142-127 ~(keystone_admin)]# openstack project list| \
grep demo > list2

| 052b16e56537467d8161266b52a43b54 | demo |

--project \
052b16e56537467d8161266b52a43b54 \
--user b6f2f511caa44f4e94ce5b2a5809dc50 \
f40413a0de92494680ed8b812f2bf266

*********************************************************************
Run to obtain token scoped “demo”
*********************************************************************

# curl -i -H "Content-Type: application/json" -d '
{ "auth": {
    "identity": {
      "methods": ["password"],
      "password": {
        "user": {
          "name": "admin",
          "domain": { "id": "default" },
          "password": "7049f834927e4468"
        }
      }
    },
    "scope": {
      "project": {
        "name": "demo",
        "domain": { "id": "default" }
      }
    }
  }
}' http://192.169.142.127:5000/v3/auth/tokens ; echo

Created ssh keypair “oskeydemoV3” sourcing keystonerc_admin

***************************************************************************************
Submit "oskeydemoV3" as the value for key_name in the Chrome REST Client environment && issue the POST request to create the server; "key_name" will be accepted (vs the case when the ssh keypair was created by tenant demo)
*************************************************************************************

AIO RDO Liberty && several external networks VLAN provider setup

April 28, 2016

The post below addresses the case when an AIO RDO Liberty node has to have external networks of VLAN type with predefined vlan tags. A straightforward packstack --allinone install doesn't achieve the desired network configuration; an external network provider of vlan type appears to be required. In this particular case, the office networks 10.10.10.0/24 (vlan tag 157), 10.10.57.0/24 (vlan tag 172), and 10.10.32.0/24 (vlan tag 200) already exist when the RDO install is running. If demo_provision was "y", then delete router1 and the created external network of VXLAN type.

I got back to this writing due to recent post

First

***********************************************************
Update /etc/neutron/plugins/ml2/ml2_conf.ini
***********************************************************

[root@ip-192-169-142-52 ml2(keystone_demo)]# cat ml2_conf.ini
[ml2]
type_drivers = flat,vlan,vxlan
tenant_network_types = vlan,vxlan
mechanism_drivers =openvswitch
path_mtu = 0
[ml2_type_flat]
[ml2_type_vlan]
network_vlan_ranges = vlan157:157:157,vlan172:172:172,vlan200:200:200
[ml2_type_gre]
[ml2_type_vxlan]
vni_ranges =10:100
vxlan_group =224.0.0.1
[ml2_type_geneve]
[securitygroup]
enable_security_group = True
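Not shown in the post: for the three physical network labels above to be usable, the OVS agent also needs `bridge_mappings` entries tying each label to a bridge. The bridge names in the fragment below are assumptions for illustration, one possible layout only (typically set in /etc/neutron/plugins/ml2/openvswitch_agent.ini):

```
[ovs]
bridge_mappings = vlan157:br-vlan157,vlan172:br-vlan172,vlan200:br-vlan200
```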

**************
Then
**************

# openstack-service restart neutron

***************************************************
Invoke external network provider
***************************************************

[root@ip-192-169-142-52 ~(keystone_admin]# neutron net-create vlan157 --shared --provider:network_type vlan --provider:segmentation_id 157 --provider:physical_network vlan157 --router:external

[root@ip-192-169-142-52 ~(keystone_admin]# neutron subnet-create --name sub-vlan157 --gateway 10.10.10.1 --allocation-pool start=10.10.10.100,end=10.10.10.200 vlan157 10.10.10.0/24

***********************************************
Create second external network
***********************************************

[root@ip-192-169-142-52 ~(keystone_admin]# neutron net-create vlan172 --shared --provider:network_type vlan --provider:segmentation_id 172 --provider:physical_network vlan172 --router:external

[root@ip-192-169-142-52 ~(keystone_admin]# neutron subnet-create --name sub-vlan172 --gateway 10.10.57.1 --allocation-pool start=10.10.57.100,end=10.10.57.200 vlan172 10.10.57.0/24

***********************************************
Create third external network
***********************************************

[root@ip-192-169-142-52 ~(keystone_admin]# neutron net-create vlan200 --shared --provider:network_type vlan --provider:segmentation_id 200 --provider:physical_network vlan200 --router:external

[root@ip-192-169-142-52 ~(keystone_admin]# neutron subnet-create --name sub-vlan200 --gateway 10.10.32.1 --allocation-pool start=10.10.32.100,end=10.10.32.200 vlan200 10.10.32.0/24

***********************************************************************
No need to update the sub-net (vs [ 1 ]). No switch to "enable_isolated_metadata=True".
The Neutron L3 agent configuration results in attaching qg-<port-id> interfaces to br-int.
***********************************************************************

+—————————+————————————–+
| Field                     | Value                                |
+—————————+————————————–+
| id                        | b41e4d36-9a63-4631-abb0-6436f2f50e2e |
| mtu                       | 0                                    |
| name                      | vlan157                              |
| provider:network_type     | vlan                                 |
| provider:physical_network | vlan157                              |
| provider:segmentation_id  | 157                                  |
| router:external           | True                                 |
| shared                    | True                                 |
| status                    | ACTIVE                               |
| subnets                   | bb753fc3-f257-4ce5-aa7c-56648648056b |
+—————————+————————————–+

+——————-+——————————————————————+
| Field             | Value                                                            |
+——————-+——————————————————————+
| allocation_pools  | {"start": "10.10.10.100", "end": "10.10.10.200"}                 |
| cidr              | 10.10.10.0/24                                                    |
| dns_nameservers   |                                                                  |
| enable_dhcp       | True                                                             |
| gateway_ip        | 10.10.10.1                                                       |
| host_routes       | {"destination": "169.254.169.254/32", "nexthop": "10.10.10.151"} |
| id                | bb753fc3-f257-4ce5-aa7c-56648648056b                             |
| ip_version        | 4                                                                |
| ipv6_ra_mode      |                                                                  |
| name              | sub-vlan157                                                      |
| network_id        | b41e4d36-9a63-4631-abb0-6436f2f50e2e                             |
| subnetpool_id     |                                                                  |
+——————-+——————————————————————+

+—————————+————————————–+
| Field                     | Value                                |
+—————————+————————————–+
| mtu                       | 0                                    |
| name                      | vlan172                              |
| provider:network_type     | vlan                                 |
| provider:physical_network | vlan172                              |
| provider:segmentation_id  | 172                                  |
| router:external           | True                                 |
| shared                    | True                                 |
| status                    | ACTIVE                               |
| subnets                   | 21419f2f-212b-409a-8021-2b4a2ba6532f |
+—————————+————————————–+

+——————-+——————————————————————+
| Field             | Value                                                            |
+——————-+——————————————————————+
| allocation_pools  | {"start": "10.10.57.100", "end": "10.10.57.200"}                 |
| cidr              | 10.10.57.0/24                                                    |
| dns_nameservers   |                                                                  |
| enable_dhcp       | True                                                             |
| gateway_ip        | 10.10.57.1                                                       |
| host_routes       | {"destination": "169.254.169.254/32", "nexthop": "10.10.57.151"} |
| id                | 21419f2f-212b-409a-8021-2b4a2ba6532f                             |
| ip_version        | 4                                                                |
| ipv6_ra_mode      |                                                                  |
| name              | sub-vlan172                                                      |
| subnetpool_id     |                                                                  |
+——————-+——————————————————————+

+—————————+————————————–+
| Field                     | Value                                |
+—————————+————————————–+
| id                        | 3dc90ff7-b1df-4079-aca1-cceedb23f440 |
| mtu                       | 0                                    |
| name                      | vlan200                              |
| provider:network_type     | vlan                                 |
| provider:physical_network | vlan200                              |
| provider:segmentation_id  | 200                                  |
| router:external           | True                                 |
| shared                    | True                                 |
| status                    | ACTIVE                               |
| subnets                   | 60181211-ea36-4e4e-8781-f13f743baa19 |
+—————————+————————————–+

+——————-+————————————————–+
| Field             | Value                                            |
+——————-+————————————————–+
| allocation_pools  | {“start”: “10.10.32.100”, “end”: “10.10.32.200”} |
| cidr              | 10.10.32.0/24                                    |
| dns_nameservers   |                                                  |
| enable_dhcp       | True                                             |
| gateway_ip        | 10.10.32.1                                       |
| host_routes       |                                                  |
| id                | 60181211-ea36-4e4e-8781-f13f743baa19             |
| ip_version        | 4                                                |
| ipv6_ra_mode      |                                                  |
| name              | sub-vlan200                                      |
| network_id        | 3dc90ff7-b1df-4079-aca1-cceedb23f440             |
| subnetpool_id     |                                                  |
+——————-+————————————————–+

**************
Next Step
**************

# modprobe 8021q

******************************
Update l3_agent.ini file
******************************
external_network_bridge =
gateway_external_network_id =

**********************************************
/etc/neutron/plugins/ml2/openvswitch_agent.ini
**********************************************

bridge_mappings = vlan157:br-vlan,vlan172:br-vlan2,vlan200:br-vlan3
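Each bridge_mappings entry pairs a provider physical_network name with the OVS bridge that carries it. As a rough illustration only (this is not Neutron's actual code), the value above parses like this:

```python
# Parse an ML2/OVS bridge_mappings value of the form
# "physnet1:bridge1,physnet2:bridge2" into a dict.
def parse_bridge_mappings(value):
    mappings = {}
    for entry in value.split(","):
        physnet, bridge = entry.strip().split(":")
        mappings[physnet] = bridge
    return mappings

mappings = parse_bridge_mappings(
    "vlan157:br-vlan,vlan172:br-vlan2,vlan200:br-vlan3")
print(mappings["vlan200"])  # -> br-vlan3
```

So traffic for provider network vlan200 leaves the node through br-vlan3, which matches the ifcfg files below.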

*************************************
Update Neutron Configuration
*************************************

# openstack-service restart neutron

*******************************************
Set up config persistent between reboots
*******************************************

/etc/sysconfig/network-scripts/ifcfg-eth1

DEVICE="eth1"
ONBOOT=yes
OVS_BRIDGE=br-vlan
TYPE=OVSPort
DEVICETYPE="ovs"

/etc/sysconfig/network-scripts/ifcfg-br-vlan

DEVICE=br-vlan
BOOTPROTO=none
ONBOOT=yes
TYPE=OVSBridge
DEVICETYPE="ovs"

/etc/sysconfig/network-scripts/ifcfg-br-vlan.157

BOOTPROTO="none"
DEVICE="br-vlan.157"
ONBOOT="yes"
PREFIX="24"
GATEWAY="10.10.10.1"
DNS1="83.221.202.254"
VLAN=yes
NOZEROCONF=yes
USERCTL=no

/etc/sysconfig/network-scripts/ifcfg-eth2

DEVICE="eth2"
ONBOOT=yes
OVS_BRIDGE=br-vlan2
TYPE=OVSPort
DEVICETYPE="ovs"

/etc/sysconfig/network-scripts/ifcfg-br-vlan2

DEVICE=br-vlan2
BOOTPROTO=none
ONBOOT=yes
TYPE=OVSBridge
DEVICETYPE="ovs"

/etc/sysconfig/network-scripts/ifcfg-br-vlan2.172

BOOTPROTO="none"
DEVICE="br-vlan2.172"
ONBOOT="yes"
PREFIX="24"
GATEWAY="10.10.57.1"
DNS1="83.221.202.254"
VLAN=yes
NOZEROCONF=yes

/etc/sysconfig/network-scripts/ifcfg-br-vlan3

DEVICE=br-vlan3
BOOTPROTO=none
ONBOOT=yes
TYPE=OVSBridge
DEVICETYPE="ovs"

/etc/sysconfig/network-scripts/ifcfg-br-vlan3.200

BOOTPROTO="none"
DEVICE="br-vlan3.200"
ONBOOT="yes"
PREFIX="24"
GATEWAY="10.10.32.1"
DNS1="83.221.202.254"
VLAN=yes
NOZEROCONF=yes
USERCTL=no

/etc/sysconfig/network-scripts/ifcfg-eth3

DEVICE="eth3"
ONBOOT=yes
OVS_BRIDGE=br-vlan3
TYPE=OVSPort
DEVICETYPE="ovs"

********************************************
Routing table on AIO RDO Liberty Node
********************************************

default via 10.10.10.1 dev br-vlan.157
10.10.10.0/24 dev br-vlan.157  proto kernel  scope link  src 10.10.10.150
10.10.32.0/24 dev br-vlan3.200  proto kernel  scope link  src 10.10.32.150
10.10.57.0/24 dev br-vlan2.172  proto kernel  scope link  src 10.10.57.150
169.254.0.0/16 dev eth0  scope link  metric 1002
169.254.0.0/16 dev eth1  scope link  metric 1003
169.254.0.0/16 dev eth2  scope link  metric 1004
169.254.0.0/16 dev eth3  scope link  metric 1005
169.254.0.0/16 dev br-vlan3  scope link  metric 1008
169.254.0.0/16 dev br-vlan2  scope link  metric 1009
169.254.0.0/16 dev br-vlan  scope link  metric 1011
192.169.142.0/24 dev eth0  proto kernel  scope link  src 192.169.142.52

****************************************************************************
Notice that both qrouter namespaces are attached to br-int.
No switch to "enable_isolated_metadata=True", in contrast with [ 1 ]
*****************************************************************************

[root@ip-192-169-142-52 ~(keystone_admin)]# neutron net-list | grep vlan
| 3dc90ff7-b1df-4079-aca1-cceedb23f440 | vlan200   | 60181211-ea36-4e4e-8781-f13f743baa19 10.10.32.0/24 |
| 235c8173-d3f8-407e-ad6a-c1d3d423c763 | vlan172   | c7588239-4941-419b-8d27-ccd970acc4ce 10.10.57.0/24 |
| b41e4d36-9a63-4631-abb0-6436f2f50e2e | vlan157   | bb753fc3-f257-4ce5-aa7c-56648648056b 10.10.10.0/24 |

40286423-e174-4714-9c82-32d026ef47ca
Bridge br-vlan
Port “eth1”
Interface “eth1”
Port br-vlan
Interface br-vlan
type: internal
Port phy-br-vlan
Interface phy-br-vlan
type: patch
options: {peer=int-br-vlan}
Bridge br-ex
Port br-ex
Interface br-ex
type: internal
Bridge br-tun
fail_mode: secure
Port br-tun
Interface br-tun
type: internal
Port patch-int
Interface patch-int
type: patch
options: {peer=patch-tun}
Bridge “br-vlan2”
Port “phy-br-vlan2”
Interface “phy-br-vlan2”
type: patch
options: {peer=”int-br-vlan2″}
Port “eth2”
Interface “eth2”
Port “br-vlan2”
Interface “br-vlan2”
type: internal
Bridge “br-vlan3”
Port “br-vlan3”
Interface “br-vlan3”
type: internal
Port “phy-br-vlan3”
Interface “phy-br-vlan3”
type: patch
options: {peer=”int-br-vlan3″}
Port “eth3”
Interface “eth3”
Bridge br-int
fail_mode: secure
Port “qr-4e77c7a3-b5”
tag: 3
Interface “qr-4e77c7a3-b5”
type: internal
Port “int-br-vlan3”
Interface “int-br-vlan3″
type: patch
options: {peer=”phy-br-vlan3”}
Port “tap8e684c78-a3”
tag: 2
Interface “tap8e684c78-a3”
type: internal
Port patch-tun
Interface patch-tun
type: patch
options: {peer=patch-int}
Port “qvoe2761636-b5”
tag: 4
Interface “qvoe2761636-b5”
tag: 1
type: internal
Port “qg-02f7ff0d-6d”
tag: 2
Interface “qg-02f7ff0d-6d”
type: internal
Port “qg-943f7831-46”
tag: 1
Interface “qg-943f7831-46”
type: internal
Port “tap4ef27b41-be”
tag: 5
Interface “tap4ef27b41-be”
type: internal
Port “qr-f0fd3793-4e”
tag: 8
Interface “qr-f0fd3793-4e”
type: internal
Port “tapb1435e62-8b”
tag: 7
Interface “tapb1435e62-8b”
type: internal
Port “qvo1bb76476-05”
tag: 3
Interface “qvo1bb76476-05”
Port “qvocf68fcd8-68”
tag: 8
Interface “qvocf68fcd8-68”
Port “qvo8605f075-25”
tag: 4
Interface “qvo8605f075-25”
Port “qg-08ccc224-1e”
tag: 7
Interface “qg-08ccc224-1e”
type: internal
Port “tapbb485628-0b”
tag: 4
Interface “tapbb485628-0b”
type: internal
Port “int-br-vlan2”
Interface “int-br-vlan2″
type: patch
options: {peer=”phy-br-vlan2”}
Port “tapee030534-da”
tag: 8
Interface “tapee030534-da”
type: internal
Port “qr-4d679697-39”
tag: 4
Interface “qr-4d679697-39”
type: internal
Port br-int
Interface br-int
type: internal
Port “tap9b38c69e-46”
tag: 6
Interface “tap9b38c69e-46”
type: internal
Port “tapc166022a-54”
tag: 3
Interface “tapc166022a-54”
type: internal
Port “qvo66d8f235-d4”
tag: 8
Interface “qvo66d8f235-d4”
Port int-br-vlan
Interface int-br-vlan
type: patch
options: {peer=phy-br-vlan}
ovs_version: “2.4.0”

qdhcp-e826aa22-dee0-478d-8bd7-721336e3824a
qrouter-a2f4c7e8-9b63-4ed3-8d9a-faa6158d253b
qdhcp-3dc90ff7-b1df-4079-aca1-cceedb23f440
qdhcp-4481aee1-ef86-4997-bf52-e435aafb9c20
qdhcp-eda69965-c6ee-42be-944f-2d61498e4bea
qdhcp-6768214b-b71c-4178-a0fc-774b2a5d59ef
qdhcp-b41e4d36-9a63-4631-abb0-6436f2f50e2e
qdhcp-03812cc9-69c5-492a-9995-985bf6e1ff13
qdhcp-d958a059-f7bd-4f9f-93a3-3499d20a1fe2
qrouter-c1900dab-447f-4f87-80e7-b4c8ca087d28
qrouter-71237c84-59ca-45dc-a6ec-23eb94c4249d

********************************************************************************
Verify what is running in the corresponding qrouter namespaces (Neutron L3 configuration)
********************************************************************************

[root@ip-192-169-142-52 ~(keystone_admin)]# ip netns exec qrouter-a2f4c7e8-9b63-4ed3-8d9a-faa6158d253b netstat -antp

Active Internet connections (servers and established)

tcp        0      0 0.0.0.0:9697            0.0.0.0:*               LISTEN      12548/python2
[root@ip-192-169-142-52 ~(keystone_admin)]# ps aux | grep 12548

root     32665  0.0  0.0 112644   960 pts/8    S+   19:29   0:00 grep –color=auto 12548

******************************************************************************
OVS flow verification on br-vlan3 and br-vlan2. On each external VLAN network
(vlan172, vlan200) two VMs are pinging each other
******************************************************************************

[root@ip-192-169-142-52 ~(keystone_admin)]# ovs-ofctl dump-flows br-vlan3 | grep NORMAL
cookie=0x0, duration=3554.739s, table=0, n_packets=33, n_bytes=2074, idle_age=2137, priority=4,in_port=2,dl_vlan=7 actions=mod_vlan_vid:200,NORMAL
cookie=0x0, duration=4204.459s, table=0, n_packets=2102, n_bytes=109304, idle_age=1, priority=0 actions=NORMAL

[root@ip-192-169-142-52 ~(keystone_admin)]# ovs-ofctl dump-flows br-vlan3 | grep NORMAL
cookie=0x0, duration=3557.643s, table=0, n_packets=33, n_bytes=2074, idle_age=2140, priority=4,in_port=2,dl_vlan=7 actions=mod_vlan_vid:200,NORMAL
cookie=0x0, duration=4207.363s, table=0, n_packets=2103, n_bytes=109356, idle_age=2, priority=0 actions=NORMAL

[root@ip-192-169-142-52 ~(keystone_admin)]# ovs-ofctl dump-flows br-vlan3 | grep NORMAL
cookie=0x0, duration=3568.225s, table=0, n_packets=33, n_bytes=2074, idle_age=2151, priority=4,in_port=2,dl_vlan=7 actions=mod_vlan_vid:200,NORMAL
cookie=0x0, duration=4217.945s, table=0, n_packets=2109, n_bytes=109668, idle_age=0, priority=0 actions=NORMAL

[root@ip-192-169-142-52 ~(keystone_admin)]# ovs-ofctl dump-flows br-vlan2 | grep NORMAL
cookie=0x0, duration=4140.528s, table=0, n_packets=11, n_bytes=642, idle_age=695, priority=4,in_port=2,dl_vlan=1 actions=mod_vlan_vid:172,NORMAL
cookie=0x0, duration=4225.918s, table=0, n_packets=2113, n_bytes=109876, idle_age=1, priority=0 actions=NORMAL

[root@ip-192-169-142-52 ~(keystone_admin)]# ovs-ofctl dump-flows br-vlan2 | grep NORMAL
cookie=0x0, duration=4143.600s, table=0, n_packets=11, n_bytes=642, idle_age=698, priority=4,in_port=2,dl_vlan=1 actions=mod_vlan_vid:172,NORMAL
cookie=0x0, duration=4228.990s, table=0, n_packets=2115, n_bytes=109980, idle_age=0, priority=0 actions=NORMAL

[root@ip-192-169-142-52 ~(keystone_admin)]# ovs-ofctl dump-flows br-vlan2 | grep NORMAL
cookie=0x0, duration=4145.912s, table=0, n_packets=11, n_bytes=642, idle_age=700, priority=4,in_port=2,dl_vlan=1 actions=mod_vlan_vid:172,NORMAL
cookie=0x0, duration=4231.302s, table=0, n_packets=2116, n_bytes=110032, idle_age=0, priority=0 actions=NORMAL
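The priority=4 flows above rewrite the local VLAN tag assigned on br-int into the provider segmentation_id before frames leave the node (tag 7 -> 200 on br-vlan3, tag 1 -> 172 on br-vlan2). A throwaway Python sketch (a hypothetical helper, not part of any OVS tooling) that pulls that mapping out of a dump-flows line:

```python
import re

# Extract (local_vlan, provider_vlan) from an ovs-ofctl dump-flows line
# containing "dl_vlan=<local> actions=mod_vlan_vid:<provider>,NORMAL".
def vlan_rewrite(flow_line):
    m = re.search(r"dl_vlan=(\d+) actions=mod_vlan_vid:(\d+)", flow_line)
    return (int(m.group(1)), int(m.group(2))) if m else None

line = ("cookie=0x0, duration=3554.739s, table=0, n_packets=33, "
        "n_bytes=2074, idle_age=2137, priority=4,in_port=2,dl_vlan=7 "
        "actions=mod_vlan_vid:200,NORMAL")
print(vlan_rewrite(line))  # -> (7, 200)
```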

********************************************************************************
Next question: how does the local vlan tag 7 get generated?
Run the following commands :-
********************************************************************************

+—————————+————————————–+
| Field                     | Value                                |
+—————————+————————————–+
| id                        | 3dc90ff7-b1df-4079-aca1-cceedb23f440 |
| mtu                       | 0                                    |
| name                      | vlan200                              |
| provider:network_type     | vlan                                 |
| provider:physical_network | vlan200                              |
| provider:segmentation_id  | 200                                  |
| router:external           | True                                 |
| shared                    | True                                 |
| status                    | ACTIVE                               |
| subnets                   | 60181211-ea36-4e4e-8781-f13f743baa19 |
+—————————+————————————–+

[root@ip-192-169-142-52 ~(keystone_admin)]# ip netns | grep 3dc90ff7-b1df-4079-aca1-cceedb23f440
qdhcp-3dc90ff7-b1df-4079-aca1-cceedb23f440

[root@ip-192-169-142-52 ~(keystone_admin)]# ip netns exec qdhcp-3dc90ff7-b1df-4079-aca1-cceedb23f440 ifconfig

lo: flags=73<UP,LOOPBACK,RUNNING>  mtu 65536
inet6 ::1  prefixlen 128  scopeid 0x10<host>
loop  txqueuelen 0  (Local Loopback)
RX packets 0  bytes 0 (0.0 B)
RX errors 0  dropped 0  overruns 0  frame 0
TX packets 0  bytes 0 (0.0 B)
TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0
inet6 fe80::f816:3eff:fee3:19f2  prefixlen 64  scopeid 0x20<link>
ether fa:16:3e:e3:19:f2  txqueuelen 0  (Ethernet)
RX packets 27  bytes 1526 (1.4 KiB)
RX errors 0  dropped 0  overruns 0  frame 0
TX packets 8  bytes 648 (648.0 B)
TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0

[root@ip-192-169-142-52 ~(keystone_admin)]# ip netns exec qdhcp-3dc90ff7-b1df-4079-aca1-cceedb23f440 route -n

Kernel IP routing table
Destination     Gateway         Genmask         Flags Metric Ref    Use Iface
0.0.0.0         10.10.32.1      0.0.0.0         UG    0      0        0 tapb1435e62-8b
10.10.32.0      0.0.0.0         255.255.255.0   U     0      0        0 tapb1435e62-8b

[root@ip-192-169-142-52 ~(keystone_admin)]# ovs-vsctl show | grep b1435e62-8b

Port “tapb1435e62-8b”
Interface “tapb1435e62-8b”

**************************************************************************
Actually, the directives mentioned in  [ 1 ]
**************************************************************************

# neutron subnet-create --name vlan100 --gateway 192.168.0.1 --allocation-pool \
start=192.168.0.150,end=192.168.0.200 --enable-dhcp \
--dns-nameserver 192.168.0.1 vlan100 192.168.0.0/24
# neutron subnet-update --host-route destination=169.254.169.254/32,nexthop=192.168.0.151 vlan100

along with the switch to "enable_isolated_metadata=True", target launching VMs into the external_fixed_ip pool in qdhcp-3dc90ff7-b1df-4079-aca1-cceedb23f440 without creating a Neutron router, splitting tenants by VLAN tag IDs. I might be missing something, but [ 1 ] configures a system where each vlan(XXX) external network would belong to only one tenant, presumably identified by tag (XXX).

Unless RBAC policies are created to control who has access to the provider network.

That is not what I intend to do. The Neutron work flow on br-int won't touch the mentioned qdhcp namespace at all. Any external vlan(XXX) network might be used by several tenants, each one having its own VXLAN subnet (identified in the system by VXLAN ID) and its own Neutron router(XXX) to the external network vlan(XXX). The AIO RDO setup is just a sample; I am talking about the Network Node in a multi-node RDO Liberty deployment.

*********************************************
Fragment from `ovs-vsctl show`
*********************************************
Port “tapb1435e62-8b”
tag: 7
Interface “tapb1435e62-8b”

*************************************************************************
The next appearance of vlan tag 7, as expected, is qg-08ccc224-1e,
the outgoing interface of the qrouter-a2f4c7e8-9b63-4ed3-8d9a-faa6158d253b
namespace.
*************************************************************************

[root@ip-192-169-142-52 ~(keystone_admin)]# ip netns exec qrouter-a2f4c7e8-9b63-4ed3-8d9a-faa6158d253b ifconfig

lo: flags=73<UP,LOOPBACK,RUNNING>  mtu 65536
inet6 ::1  prefixlen 128  scopeid 0x10<host>
loop  txqueuelen 0  (Local Loopback)
RX packets 0  bytes 0 (0.0 B)
RX errors 0  dropped 0  overruns 0  frame 0
TX packets 0  bytes 0 (0.0 B)
TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0
inet6 fe80::f816:3eff:fed4:e7d  prefixlen 64  scopeid 0x20<link>
ether fa:16:3e:d4:0e:7d  txqueuelen 0  (Ethernet)
RX packets 0  bytes 0 (0.0 B)
RX errors 0  dropped 0  overruns 0  frame 0
TX packets 28  bytes 1704 (1.6 KiB)
TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0

inet6 fe80::f816:3eff:fea9:5422  prefixlen 64  scopeid 0x20<link>
ether fa:16:3e:a9:54:22  txqueuelen 0  (Ethernet)
RX packets 68948  bytes 7192868 (6.8 MiB)
RX errors 0  dropped 0  overruns 0  frame 0
TX packets 68859  bytes 7185051 (6.8 MiB)
TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0

[root@ip-192-169-142-52 ~(keystone_admin)]# ip netns exec qrouter-a2f4c7e8-9b63-4ed3-8d9a-faa6158d253b route -n

Kernel IP routing table
Destination     Gateway         Genmask         Flags Metric Ref    Use Iface
0.0.0.0         10.10.32.1      0.0.0.0         UG    0      0        0 qg-08ccc224-1e
10.10.32.0      0.0.0.0         255.255.255.0   U     0      0        0 qg-08ccc224-1e
30.0.0.0        0.0.0.0         255.255.255.0   U     0      0        0 qr-f0fd3793-4e

*******************************************************************************************************
Now verify the Neutron router connecting the qrouter namespace (having the interface with tag 7) and the qdhcp namespace created to launch the instances.
Router RoutesDSA has been created with an external gateway to vlan200 and an internal interface to subnet private07 (30.0.0.0/24), which has DHCP enabled and a DNS server defined.
vlan157 and vlan172 are configured as external networks for their corresponding routers as well.
*******************************************************************************************************

[root@ip-192-169-142-52 ~(keystone_admin)]# neutron router-list | grep RoutesDSA

| a2f4c7e8-9b63-4ed3-8d9a-faa6158d253b | RoutesDSA  | {“network_id”: “3dc90ff7-b1df-4079-aca1-cceedb23f440“, “enable_snat”: true, “external_fixed_ips”: [{“subnet_id”: “60181211-ea36-4e4e-8781-f13f743baa19”, “ip_address”: “10.10.32.101”}]} | False       | False |

[root@ip-192-169-142-52 ~(keystone_admin)]# ip netns | grep a2f4c7e8-9b63-4ed3-8d9a-faa6158d253b
qrouter-a2f4c7e8-9b63-4ed3-8d9a-faa6158d253b

[root@ip-192-169-142-52 ~(keystone_admin)]# ip netns | grep 3dc90ff7-b1df-4079-aca1-cceedb23f440
qdhcp-3dc90ff7-b1df-4079-aca1-cceedb23f440

[root@ip-192-169-142-52 ~(keystone_admin)]# ip netns exec qdhcp-3dc90ff7-b1df-4079-aca1-cceedb23f440 ifconfig
lo: flags=73<UP,LOOPBACK,RUNNING>  mtu 65536
inet6 ::1  prefixlen 128  scopeid 0x10<host>
loop  txqueuelen 0  (Local Loopback)
RX packets 0  bytes 0 (0.0 B)
RX errors 0  dropped 0  overruns 0  frame 0
TX packets 0  bytes 0 (0.0 B)
TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0
inet6 fe80::f816:3eff:fee3:19f2  prefixlen 64  scopeid 0x20<link>
ether fa:16:3e:e3:19:f2  txqueuelen 0  (Ethernet)
RX packets 27  bytes 1526 (1.4 KiB)
RX errors 0  dropped 0  overruns 0  frame 0
TX packets 8  bytes 648 (648.0 B)
TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0

**************************
Finally run:-
**************************

+————————————–+——+——————-+————————————————————————————-+
| id                                   | name | mac_address       | fixed_ips                                                                           |
+————————————–+——+——————-+————————————————————————————-+
| 08ccc224-1e23-491a-8eec-c4db0ec00f02 |      | fa:16:3e:d4:0e:7d | {“subnet_id”: “60181211-ea36-4e4e-8781-f13f743baa19“, “ip_address”: “10.10.32.101”} |
| f0fd3793-4e5a-467a-bd3c-e87bc9063d26 |      | fa:16:3e:a9:54:22 | {“subnet_id”: “0c962484-3e48-4d86-a17f-16b0b1e5fc4d“, “ip_address”: “30.0.0.1”}     |
+————————————–+——+——————-+————————————————————————————-+

[root@ip-192-169-142-52 ~(keystone_admin)]# neutron subnet-list | grep 0c962484-3e48-4d86-a17f-16b0b1e5fc4d
| 0c962484-3e48-4d86-a17f-16b0b1e5fc4d |               | 30.0.0.0/24   | {“start”: “30.0.0.2”, “end”: “30.0.0.254”}       |
[root@ip-192-169-142-52 ~(keystone_admin)]# neutron subnet-list | grep 60181211-ea36-4e4e-8781-f13f743baa19
| 60181211-ea36-4e4e-8781-f13f743baa19 | sub-vlan200   | 10.10.32.0/24 | {“start”: “10.10.32.100”, “end”: “10.10.32.200”} |

************************************
OVS Flows at br-vlan3
************************************

[root@ip-192-169-142-52 ~(keystone_admin)]# ovs-ofctl dump-flows br-vlan3 | grep NORMAL

cookie=0x0, duration=15793.182s, table=0, n_packets=33, n_bytes=2074, idle_age=14376, priority=4,in_port=2,dl_vlan=7 actions=mod_vlan_vid:200,NORMAL
cookie=0x0, duration=16442.902s, table=0, n_packets=8221, n_bytes=427492, idle_age=1, priority=0 actions=NORMAL

[root@ip-192-169-142-52 ~(keystone_admin)]# ovs-ofctl dump-flows br-vlan3 | grep NORMAL
cookie=0x0, duration=15796.300s, table=0, n_packets=33, n_bytes=2074, idle_age=14379, priority=4,in_port=2,dl_vlan=7 actions=mod_vlan_vid:200,NORMAL
cookie=0x0, duration=16446.020s, table=0, n_packets=8223, n_bytes=427596, idle_age=0, priority=0 actions=NORMAL

************************************************************
OVS flow for the {phy-br-vlan3,int-br-vlan3} veth pair
************************************************************

[root@ip-192-169-142-52 ~(keystone_admin)]# ovs-ofctl show br-vlan3 | grep phy-br-vlan3

[root@ip-192-169-142-52 ~(keystone_admin)]# ovs-ofctl show br-int | grep int-br-vlan3

[root@ip-192-169-142-52 ~(keystone_admin)]# ovs-ofctl dump-ports br-vlan3 2
port  2: rx pkts=6977, bytes=304270, drop=0, errs=0, frame=0, over=0, crc=0
tx pkts=55, bytes=7037, drop=0, errs=0, coll=0

[root@ip-192-169-142-52 ~(keystone_admin)]# ovs-ofctl dump-ports br-vlan3 2

port  2: rx pkts=6979, bytes=304354, drop=0, errs=0, frame=0, over=0, crc=0
tx pkts=55, bytes=7037, drop=0, errs=0, coll=0

[root@ip-192-169-142-52 ~(keystone_admin)]# ovs-ofctl dump-ports br-vlan3 2
port  2: rx pkts=6981, bytes=304438, drop=0, errs=0, frame=0, over=0, crc=0
tx pkts=55, bytes=7037, drop=0, errs=0, coll=0
[root@ip-192-169-142-52 ~(keystone_admin)]# ovs-ofctl dump-ports br-int 19
port 19: rx pkts=55, bytes=7037, drop=0, errs=0, frame=0, over=0, crc=0
tx pkts=6991, bytes=304858, drop=0, errs=0, coll=0

[root@ip-192-169-142-52 ~(keystone_admin)]# ovs-ofctl dump-ports br-int 19
port 19: rx pkts=55, bytes=7037, drop=0, errs=0, frame=0, over=0, crc=0
tx pkts=6994, bytes=304984, drop=0, errs=0, coll=0

[root@ip-192-169-142-52 ~(keystone_admin)]# ovs-ofctl dump-ports br-int 19
port 19: rx pkts=55, bytes=7037, drop=0, errs=0, frame=0, over=0, crc=0
tx pkts=7450, bytes=324136, drop=0, errs=0, coll=0
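The rx counter on br-vlan3 port 2 tracks the tx counter on br-int port 19 (and vice versa) across the {phy-br-vlan3,int-br-vlan3} pair. A quick Python sketch of reading those counters (a hypothetical helper; sample stanzas copied from the listings above):

```python
import re

# Pull (rx_pkts, tx_pkts) out of an "ovs-ofctl dump-ports" stanza.
def port_counters(text):
    rx = int(re.search(r"rx pkts=(\d+)", text).group(1))
    tx = int(re.search(r"tx pkts=(\d+)", text).group(1))
    return rx, tx

br_vlan3_p2 = ("port  2: rx pkts=6977, bytes=304270, drop=0, errs=0\n"
               "tx pkts=55, bytes=7037, drop=0, errs=0, coll=0")
br_int_p19 = ("port 19: rx pkts=55, bytes=7037, drop=0, errs=0\n"
              "tx pkts=6991, bytes=304858, drop=0, errs=0, coll=0")

rx2, tx2 = port_counters(br_vlan3_p2)
rx19, tx19 = port_counters(br_int_p19)
print(rx2, tx19)  # close but not equal: the two samples were taken moments apart
print(tx2, rx19)  # exact match (55, 55) in the samples above
```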

****************************************************************
Another OVS flow on test br-int for vlan157
****************************************************************

[root@ip-192-169-142-52 ~(keystone_admin)]# ip netns exec qdhcp-4481aee1-ef86-4997-bf52-e435aafb9c20 ssh -i oskeyvls.pem cirros@10.10.10.101

$ ping -c 5 10.10.10.108

PING 10.10.10.108 (10.10.10.108): 56 data bytes
64 bytes from 10.10.10.108: seq=0 ttl=63 time=0.706 ms
64 bytes from 10.10.10.108: seq=1 ttl=63 time=0.772 ms
64 bytes from 10.10.10.108: seq=2 ttl=63 time=0.734 ms
64 bytes from 10.10.10.108: seq=3 ttl=63 time=0.740 ms
64 bytes from 10.10.10.108: seq=4 ttl=63 time=0.785 ms

— 10.10.10.108 ping statistics —

5 packets transmitted, 5 packets received, 0% packet loss

round-trip min/avg/max = 0.706/0.747/0.785 ms

******************************************************************************
Testing VM1<=>VM2 via floating IPs on external vlan net 10.10.10.0/24
*******************************************************************************

+————————————–+————–+———————————-+——–+————+————-+———————————+
| ID                                   | Name         | Tenant ID                        | Status | Task State | Power State | Networks                        |
+————————————–+————–+———————————-+——–+————+————-+———————————+
| a3d5ecf6-0fdb-4aa3-815f-171871eccb77 | CirrOSDevs01 | f16de8f8497d4f92961018ed836dee19 | ACTIVE | –          | Running     | private=40.0.0.17, 10.10.10.101 |
| 1b65f5db-d7d5-4e92-9a7c-60e7866ff8e5 | CirrOSDevs02 | f16de8f8497d4f92961018ed836dee19 | ACTIVE | –          | Running     | private=40.0.0.18, 10.10.10.110 |
| 46b7dad1-3a7d-4d94-8407-a654cca42750 | VF23Devs01   | f16de8f8497d4f92961018ed836dee19 | ACTIVE | –          | Running     | private=40.0.0.19, 10.10.10.111 |
+————————————–+————–+———————————-+——–+————+————-+———————————+

qdhcp-4481aee1-ef86-4997-bf52-e435aafb9c20
qdhcp-b41e4d36-9a63-4631-abb0-6436f2f50e2e
qrouter-c1900dab-447f-4f87-80e7-b4c8ca087d28

[root@ip-192-169-142-52 ~(keystone_admin)]# ip netns exec qdhcp-4481aee1-ef86-4997-bf52-e435aafb9c20 ssh cirros@10.10.10.110

The authenticity of host ‘10.10.10.110 (10.10.10.110)’ can’t be established.
RSA key fingerprint is b8:d3:ec:10:70:a7:da:d4:50:13:a8:2d:01:ba:e4:83.
Are you sure you want to continue connecting (yes/no)? yes
Warning: Permanently added ‘10.10.10.110’ (RSA) to the list of known hosts.

$ ifconfig

UP BROADCAST RUNNING MULTICAST  MTU:1400  Metric:1
RX packets:367 errors:0 dropped:0 overruns:0 frame:0
TX packets:291 errors:0 dropped:0 overruns:0 carrier:0
collisions:0 txqueuelen:1000
RX bytes:36442 (35.5 KiB)  TX bytes:32019 (31.2 KiB)

UP LOOPBACK RUNNING  MTU:16436  Metric:1
RX packets:0 errors:0 dropped:0 overruns:0 frame:0
TX packets:0 errors:0 dropped:0 overruns:0 carrier:0
collisions:0 txqueuelen:0
RX bytes:0 (0.0 B)  TX bytes:0 (0.0 B)

$ curl http://169.254.169.254/latest/meta-data/public-ipv4
10.10.10.110$

$ ssh fedora@10.10.10.111
Host ‘10.10.10.111’ is not in the trusted hosts file.
(fingerprint md5 23:c0:fb:fd:74:80:2f:12:d3:09:2f:9e:dd:19:f1:74)
Do you want to continue connecting? (y/n) y
Last login: Sun Dec 13 15:52:43 2015 from 10.10.10.101
[fedora@vf23devs01 ~]$ ifconfig
inet6 fe80::f816:3eff:fea4:1a52  prefixlen 64  scopeid 0x20<link>
ether fa:16:3e:a4:1a:52  txqueuelen 1000  (Ethernet)
RX packets 283  bytes 30213 (29.5 KiB)
RX errors 0  dropped 0  overruns 0  frame 0
TX packets 303  bytes 35022 (34.2 KiB)
TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0
lo: flags=73<UP,LOOPBACK,RUNNING>  mtu 65536
inet6 ::1  prefixlen 128  scopeid 0x10<host>
loop  txqueuelen 0  (Local Loopback)
RX packets 0  bytes 0 (0.0 B)
RX errors 0  dropped 0  overruns 0  frame 0
TX packets 0  bytes 0 (0.0 B)
TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0

[fedora@vf23devs01 ~]$ curl http://169.254.169.254/latest/meta-data/public-ipv4
10.10.10.111[fedora@vf23devs01 ~]$
[fedora@vf23devs01 ~]$ curl http://169.254.169.254/latest/meta-data/instance-id
i-00000009[fedora@vf23devs01 ~]$

[fedora@vf23devs01 ~]$

Creating Servers via REST API on RDO Mitaka via Chrome Advanced REST Client

April 21, 2016

In the post below we demonstrate Chrome Advanced REST Client successfully issuing REST API POST requests to create RDO Mitaka servers (VMs), as well as getting information about servers via GET requests. All required HTTP headers are configured in the GUI environment, as well as the request body field for server creation.

Version of the Keystone API installed: v2.0

Following [ 1 ], to authenticate access to OpenStack services, you first of all issue an authentication request for a token. If the request succeeds, the server returns an authentication token.

Source keystonerc_demo on the Controller or on the Compute node, it doesn't
matter. Then run this cURL command to request a token (substitute the demo
user's actual password):

curl -s -X POST http://192.169.142.54:5000/v2.0/tokens \
-H "Content-Type: application/json" \
-d '{"auth": {"tenantName": "demo", "passwordCredentials": {"username": "demo", "password": "<demo password>"}}}' \
| python -m json.tool

to get the authentication token, and scroll down to the bottom :-

"token": {
    "audit_ids": [
        "ce1JojlRSiO6TmMTDW3QNQ"
    ],
    "expires": "2016-04-21T18:26:28Z",
    "id": "0cfb3ec7a10c4f549a3dc138cf8a270a",   <== X-Auth-Token
    "issued_at": "2016-04-21T17:26:28.246724Z",
    "tenant": {
        "description": "default tenant",
        "enabled": true,
        "id": "1578b57cfd8d43278098c5266f64e49f",   <=== Demo tenant's id
        "name": "demo"
    }
},
"user": {
    "id": "8e1e992eee474c3ab7a08ffde678e35b",
    "name": "demo",
    "roles": [
        {
            "name": "heat_stack_owner"
        },
        {
            "name": "_member_"
        }
    ]
}
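Pulling the token id and tenant id out of a response like the one above takes only a few lines of Python (a sketch assuming the Keystone v2.0 response shape shown here, not an official client; the response is abbreviated to the fields used):

```python
import json

# Keystone v2.0 token response (abbreviated).
response = json.loads("""
{
  "access": {
    "token": {
      "expires": "2016-04-21T18:26:28Z",
      "id": "0cfb3ec7a10c4f549a3dc138cf8a270a",
      "tenant": {"id": "1578b57cfd8d43278098c5266f64e49f", "name": "demo"}
    }
  }
}
""")

token = response["access"]["token"]
auth_token = token["id"]           # goes into the X-Auth-Token header
tenant_id = token["tenant"]["id"]  # goes into the /v2/<tenant_id>/servers URL
print(auth_token, tenant_id)
```

These two values are exactly what the curl POST requests below plug into the X-Auth-Token header and the request URL.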

********************************************************************************************
Original request to obtain token might be issued via Chrome Advanced REST Client as well
********************************************************************************************

Scrolling down shows the token being returned and the demo tenant's id

Required output

{
  "access": {
    "token": {
      "issued_at": "2016-04-21T21:56:52.668252Z",
      "expires": "2016-04-21T22:56:52Z",
      "id": "dd119ea14e97416b834ca72aab7f8b5a",
      "tenant": {
        "description": "default tenant",
        "enabled": true,
        "id": "1578b57cfd8d43278098c5266f64e49f",
        "name": "demo"
      }
    }
  }
}

*****************************************************************************
Next create an ssh keypair via the CLI or the dashboard for the particular tenant :-
*****************************************************************************
chmod 600 *.pem

******************************************************************************************
Below are a couple of sample REST API POST requests starting servers, as they are usually issued and described.
******************************************************************************************

curl -g -i -X POST http://192.169.142.54:8774/v2/1578b57cfd8d43278098c5266f64e49f/servers -H "User-Agent: python-novaclient" -H "Content-Type: application/json" -H "Accept: application/json" -H "X-Auth-Token: 0cfb3ec7a10c4f549a3dc138cf8a270a" -d '{"server": {"name": "CirrOSDevs03", "key_name" : "oskeymitaka0417", "imageRef": "2e148cd0-7dac-49a7-8a79-2efddbd83852", "flavorRef": "1", "max_count": 1, "min_count": 1, "networks": [{"uuid": "e7c90970-c304-4f51-9d65-4be42318487c"}], "security_groups": [{"name": "default"}]}}'

curl -g -i -X POST http://192.169.142.54:8774/v2/1578b57cfd8d43278098c5266f64e49f/servers -H "User-Agent: python-novaclient" -H "Content-Type: application/json" -H "Accept: application/json" -H "X-Auth-Token: 0cfb3ec7a10c4f549a3dc138cf8a270a" -d '{"server": {"name": "VF23Devs03", "key_name" : "oskeymitaka0417", "imageRef": "5b00b1a8-30d1-4e9d-bf7d-5f1abed5173b", "flavorRef": "2", "max_count": 1, "min_count": 1, "networks": [{"uuid": "e7c90970-c304-4f51-9d65-4be42318487c"}], "security_groups": [{"name": "default"}]}}'
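The JSON body of those POST requests can be assembled programmatically instead of being pasted inline; a minimal sketch (a hypothetical helper, with names and ids taken from the first request above):

```python
import json

# Build the body for POST /v2/<tenant_id>/servers (Nova v2 API).
def server_create_body(name, key_name, image_ref, flavor_ref, net_uuid):
    return {"server": {
        "name": name,
        "key_name": key_name,
        "imageRef": image_ref,
        "flavorRef": flavor_ref,
        "max_count": 1,
        "min_count": 1,
        "networks": [{"uuid": net_uuid}],
        "security_groups": [{"name": "default"}],
    }}

body = server_create_body("CirrOSDevs03", "oskeymitaka0417",
                          "2e148cd0-7dac-49a7-8a79-2efddbd83852", "1",
                          "e7c90970-c304-4f51-9d65-4be42318487c")
print(json.dumps(body, indent=2))
```

The same dict, serialized with json.dumps, is what goes into the Raw Payload area of Chrome Advanced REST Client below.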

**********************************************************************************
We are going to issue the same REST API POST requests creating servers
via Chrome Advanced REST Client
**********************************************************************************

[root@ip-192-169-142-54 ~(keystone_demo)]# glance image-list

+————————————–+———————–+
| ID                                   | Name                  |
+————————————–+———————–+
| 28b590fa-05c8-4706-893a-54efc4ca8cd6 | cirros                |
| 9c78c3da-b25b-4b26-9d24-514185e99c00 | Ubuntu1510Cloud-image |
| a050a122-a1dc-40d0-883f-25617e452d90 | VF23Cloud-image       |
+————————————–+———————–+

[root@ip-192-169-142-54 ~(keystone_demo)]# neutron net-list
+————————————–+————–+—————————————-+
| id                                   | name         | subnets                                |
+————————————–+————–+—————————————-+
| 43daa7c3-4e04-4661-8e78-6634b06d63f3 | public       | 71e0197b-fe9a-4643-b25f-65424d169492   |
|                                      |              | 192.169.142.0/24                       |
| 292a2f21-70af-48ef-b100-c0639a8ffb22 | demo_network | d7aa6f0f-33ba-430d-a409-bd673bed7060   |
|                                      |              | 50.0.0.0/24                            |
+————————————–+————–+—————————————-+

First, the required headers were created in the corresponding fields, and the
following fragment was placed in the Raw Payload area of the Chrome client

{"server":
{"name": "VF23Devs03",
"key_name" : "oskeymitaka0420",
"imageRef" : "a050a122-a1dc-40d0-883f-25617e452d90",
"flavorRef": "2",
"max_count": 1,
"min_count": 1,
"networks": [{"uuid": "292a2f21-70af-48ef-b100-c0639a8ffb22"}],
"security_groups": [{"name": "default"}]
}
}
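The same POST can also be issued from Python instead of curl or the Chrome client. Below is a minimal stdlib-only sketch; the endpoint, token, and UUIDs are the ones from this walkthrough and must be replaced with values from your own cloud (the token in particular expires quickly), so the actual network call is left in a function that is not invoked here.

```python
import json
import urllib.request

# Values from this walkthrough; substitute your own endpoint and a fresh token.
NOVA_URL = "http://192.169.142.54:8774/v2/1578b57cfd8d43278098c5266f64e49f"
TOKEN = "0cfb3ec7a10c4f549a3dc138cf8a270a"

def build_server_payload(name, key_name, image_ref, flavor_ref, net_uuid):
    """Assemble the JSON body expected by the Nova v2 POST /servers call."""
    return {
        "server": {
            "name": name,
            "key_name": key_name,
            "imageRef": image_ref,
            "flavorRef": flavor_ref,
            "max_count": 1,
            "min_count": 1,
            "networks": [{"uuid": net_uuid}],
            "security_groups": [{"name": "default"}],
        }
    }

def create_server(payload):
    """POST the payload to Nova; requires a reachable endpoint and valid token."""
    req = urllib.request.Request(
        NOVA_URL + "/servers",
        data=json.dumps(payload).encode("utf-8"),
        headers={
            "User-Agent": "python-novaclient",
            "Content-Type": "application/json",
            "Accept": "application/json",
            "X-Auth-Token": TOKEN,
        },
        method="POST",
    )
    return urllib.request.urlopen(req)  # not invoked here; needs the live cluster

payload = build_server_payload(
    "VF23Devs03", "oskeymitaka0420",
    "a050a122-a1dc-40d0-883f-25617e452d90", "2",
    "292a2f21-70af-48ef-b100-c0639a8ffb22")
```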

Launching Fedora 23 Server :-

Next, an Ubuntu 15.10 server (VM) will be created by changing the image ID in the Advanced REST Client GUI.

Make sure that servers have been created and are currently up and running

***************************************************************************************
Now launch the Chrome REST Client again to verify the servers via a GET request
***************************************************************************************

Neutron work flow for Docker Hypervisor running on DVR Cluster RDO Mitaka in appropriate amount of details && HA support for Glance storage used to load nova-docker instances

April 6, 2016

Why does DVR come into play?

This refreshes in memory a similar problem with the Nova-Docker driver (Kilo), with which I had the same kind of trouble (VXLAN connection Controller <==> Compute) on F22 (OVS 2.4.0), while the same driver worked fine on CentOS 7.1 (OVS 2.3.1). My guess is that the Nova-Docker driver has a problem with OVS 2.4.0, no matter which of the stable/kilo, stable/liberty, or stable/mitaka branches is checked out for the driver build.

Note that the issue is related specifically to the ML2&OVS&VXLAN setup; an RDO Mitaka ML2&OVS&VLAN deployment works with Nova-Docker (stable/mitaka) with no problems.

I have not run ovs-ofctl dump-flows on the br-tun bridges and so on, because even having proved the malfunction I cannot file it to Bugzilla. The Nova-Docker driver is not packaged for RDO, so it is upstream code, and upstream won't consider an issue that involves building the driver from source on RDO Mitaka (RC1).

Thus, as a quick and efficient workaround, I suggest a DVR deployment. It results in South-North traffic being forwarded directly from the host running the Docker hypervisor to the Internet and vice versa, due to the basic "fg" functionality (the outgoing interface of the fip-namespace, residing on the Compute node whose L3 agent runs in "dvr" agent_mode).

**************************
Procedure in details
**************************

First install the repositories for RDO Mitaka (the most recent build that passed CI):-
# yum -y install yum-plugin-priorities
# cd /etc/yum.repos.d

# yum -y install openstack-packstack (Controller only)

Now proceed as follows :-

Before the DVR setup, switch the Glance back end to Swift (Swift is configured in the answer file as follows):

CONFIG_SWIFT_STORAGES=/dev/vdb1,/dev/vdc1,/dev/vdd1
CONFIG_SWIFT_STORAGE_ZONES=3
CONFIG_SWIFT_STORAGE_REPLICAS=3
CONFIG_SWIFT_STORAGE_FSTYPE=xfs
CONFIG_SWIFT_HASH=a55607bff10c4210
CONFIG_SWIFT_STORAGE_SIZE=10G

Upon setup completion, on the storage node :-

[root@ip-192-169-142-127 ~]# df -h
Filesystem               Size  Used Avail Use% Mounted on
/dev/mapper/centos-root   45G  5.3G   40G  12% /
devtmpfs                 2.8G     0  2.8G   0% /dev
tmpfs                    2.8G  204K  2.8G   1% /dev/shm
tmpfs                    2.8G   25M  2.8G   1% /run
tmpfs                    2.8G     0  2.8G   0% /sys/fs/cgroup
/dev/vdc1                 10G  2.5G  7.5G  25% /srv/node/vdc1
/dev/vdb1                 10G  2.5G  7.5G  25% /srv/node/vdb1
/dev/vdd1                 10G  2.5G  7.5G  25% /srv/node/vdd1

/dev/vda1                497M  211M  286M  43% /boot
tmpfs                    567M  4.0K  567M   1% /run/user/42
tmpfs                    567M  8.0K  567M   1% /run/user/1000

****************************
Update  glance-api.conf
****************************

[glance_store]
stores = swift
default_store = swift
swift_store_user = services:glance
swift_store_key = f6a9398960534797

swift_store_create_container_on_put = True
os_region_name=RegionOne

# openstack-service restart glance

The value f6a9398960534797 corresponds to CONFIG_GLANCE_KS_PW in the answer file, i.e. the Keystone glance password used for authentication.
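The [glance_store] edits above can also be applied programmatically. Below is a small sketch using Python's configparser; it writes to a throwaway temporary file so the snippet is self-contained, whereas on a real node the path would be /etc/glance/glance-api.conf (back it up first; note that configparser may reject some real-world conf files with duplicate options).

```python
import configparser
import tempfile

# Settings from the [glance_store] section above.
GLANCE_STORE = {
    "stores": "swift",
    "default_store": "swift",
    "swift_store_user": "services:glance",
    "swift_store_key": "f6a9398960534797",   # == CONFIG_GLANCE_KS_PW
    "swift_store_create_container_on_put": "True",
    "os_region_name": "RegionOne",
}

def apply_glance_store(path, settings=GLANCE_STORE):
    """Merge the glance_store settings into an INI file, keeping other sections."""
    cfg = configparser.ConfigParser()
    cfg.read(path)                     # existing sections are preserved
    if not cfg.has_section("glance_store"):
        cfg.add_section("glance_store")
    for key, value in settings.items():
        cfg.set("glance_store", key, value)
    with open(path, "w") as f:
        cfg.write(f)
    return cfg

# Demonstration against a throwaway file (a real run would target glance-api.conf):
demo = tempfile.NamedTemporaryFile(mode="w", suffix=".conf", delete=False)
demo.write("[DEFAULT]\ndebug = False\n")
demo.close()
cfg = apply_glance_store(demo.name)
```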

2. Convert the cluster to DVR as advised in "RDO Liberty DVR Neutron workflow on CentOS 7.2"
http://dbaxps.blogspot.com/2015/10/rdo-liberty-rc-dvr-deployment.html
Just one note for RDO Mitaka: on each compute node, first create br-ex and add port eth0.

Then configure

***********************************************************
On Controller (X=2) and Computes X=(3,4) update :-
***********************************************************

# cat ifcfg-br-ex
DEVICE="br-ex"
BOOTPROTO="static"
DNS1="83.221.202.254"
GATEWAY="192.169.142.1"
NM_CONTROLLED="no"
DEFROUTE="yes"
IPV4_FAILURE_FATAL="yes"
IPV6INIT=no
ONBOOT="yes"
TYPE="OVSIntPort"
OVS_BRIDGE=br-ex
DEVICETYPE="ovs"

# cat ifcfg-eth0
DEVICE="eth0"
ONBOOT="yes"
TYPE="OVSPort"
DEVICETYPE="ovs"
OVS_BRIDGE=br-ex
NM_CONTROLLED=no
IPV6INIT=no

***************************
Then run script
***************************

#!/bin/bash -x
chkconfig network on
systemctl stop NetworkManager
systemctl disable NetworkManager
service network restart

Reboot node.

**********************************************
Nova-Docker Setup on each Compute
**********************************************

# curl -sSL https://get.docker.com/ | sh
# usermod -aG docker nova      ( this alone does not seem to set 660 on docker.sock )
# systemctl start docker
# systemctl enable docker
# chmod 666  /var/run/docker.sock (add to /etc/rc.d/rc.local)
# easy_install pip
# git clone -b stable/mitaka   https://github.com/openstack/nova-docker

*******************
Driver build
*******************

# cd nova-docker
# pip install -r requirements.txt
# python setup.py install

********************************************
Switch nova-compute to DockerDriver
********************************************
vi /etc/nova/nova.conf

[DEFAULT]
compute_driver = novadocker.virt.docker.DockerDriver

******************************************************************
Next on Controller/Network Node and each Compute Node
******************************************************************

mkdir /etc/nova/rootwrap.d
vi /etc/nova/rootwrap.d/docker.filters

[Filters]
# nova/virt/docker/driver.py: 'ln', '-sf', '/var/run/netns/.*'
ln: CommandFilter, /bin/ln, root

**********************************************************
Nova Compute Service restart on Compute Nodes
**********************************************************
# systemctl restart openstack-nova-compute

***********************************************
Glance API Service restart on Controller
**********************************************
vi /etc/glance/glance-api.conf

container_formats=ami,ari,aki,bare,ovf,ova,docker
# systemctl restart openstack-glance-api

**************************************************
Network flow on Compute in a bit more details
**************************************************

When a floating IP gets assigned to a VM, here is what actually happens ( [1] ) :-

The same explanation may be found in [4], only not laid out step by step; in particular, it contains a detailed description of the reverse network flow and the ARP proxy functionality.

1. The fip- namespace is created on the local compute node
(if it does not already exist)
2. A new port rfp- gets created on the qrouter- namespace
(if it does not already exist)
3. The rfp- port on the qrouter namespace is assigned the associated floating IP address
4. The fpr- port on the fip namespace gets created and linked via a point-to-point network to the rfp- port of the qrouter namespace
5. The fg- port on the fip namespace is assigned an additional address
from the public network range to set up the ARP proxy point
6. The fg- port is configured as a Proxy ARP

*********************
Flow itself  ( [1] ):
*********************

1. The VM, initiating transmission, sends a packet via the default gateway,
and br-int forwards the traffic to the local DVR gateway port (qr-).
2. DVR routes the packet using its routing table to the rfp- port.
3. The NAT rule is applied to the packet, replacing the source IP of the VM with
the assigned floating IP, and it is then sent through the rfp- port,
which connects to the fip namespace via the point-to-point network
169.254.31.28/31
4. The packet is received on the fpr- port in the fip namespace
and then routed outside through the fg- port
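The rfp-/fpr- pair above shares the /31 point-to-point subnet 169.254.31.28/31; a /31 holds exactly two addresses (RFC 3021), one for each end of the link. A quick check with Python's ipaddress module:

```python
import ipaddress

# The link-local point-to-point subnet quoted in step 3 of the flow.
link = ipaddress.ip_network("169.254.31.28/31")

# A /31 contains exactly two addresses, one per endpoint of the rfp-/fpr- pair
# (which namespace gets which address is assigned by the L3 agent).
end_a, end_b = link[0], link[1]
print(end_a, end_b)   # 169.254.31.28 169.254.31.29
```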

[root@ip-192-169-142-137 ~(keystone_demo)]# nova list

+————————————–+—————-+——–+————+————-+—————————————–+
| ID                                   | Name           | Status | Task State | Power State | Networks                                |
+————————————–+—————-+——–+————+————-+—————————————–+
| 957814c1-834e-47e5-9236-ef228455fe36 | UbuntuDevs01   | ACTIVE | –          | Running     | demo_network=50.0.0.12, 192.169.142.151 |
| 65dd55b9-23ea-4e5b-aeed-4db259436df2 | derbyGlassfish | ACTIVE | –          | Running     | demo_network=50.0.0.13, 192.169.142.153 |
| f9311d57-4352-48a6-a042-b36393e0af7a | fedora22docker | ACTIVE | –          | Running     | demo_network=50.0.0.14, 192.169.142.154 |
+————————————–+—————-+——–+————+————-+—————————————–+

[root@ip-192-169-142-137 ~(keystone_demo)]# docker ps

CONTAINER ID        IMAGE                      COMMAND                  CREATED             STATUS              PORTS               NAMES

336679f5bf7a        kumarpraveen/fedora-sshd   “/usr/bin/supervisord”   About an hour ago   Up About an hour                        nova-f9311d57-4352-48a6-a042-b36393e0af7a
8bb2ce01e671        derby/docker-glassfish41   “/sbin/my_init”          2 hours ago         Up 2 hours                              nova-65dd55b9-23ea-4e5b-aeed-4db259436df2
fe5eb55a4c9d        rastasheep/ubuntu-sshd     “/usr/sbin/sshd -D”      3 hours ago         Up 3 hours                              nova-957814c1-834e-47e5-9236-ef228455fe36

[root@ip-192-169-142-137 ~(keystone_demo)]# nova show f9311d57-4352-48a6-a042-b36393e0af7a | grep image
| image                                | kumarpraveen/fedora-sshd (93345f0b-fcbd-41e4-b335-a4ecb8b59e73) |
[root@ip-192-169-142-137 ~(keystone_demo)]# nova show 65dd55b9-23ea-4e5b-aeed-4db259436df2 | grep image
| image                                | derby/docker-glassfish41 (9f2cd9bc-7840-47c1-81e8-3bc0f76426ec) |
[root@ip-192-169-142-137 ~(keystone_demo)]# nova show 957814c1-834e-47e5-9236-ef228455fe36 | grep image
| image                                | rastasheep/ubuntu-sshd (29c057f1-3c7d-43e3-80e6-dc8fef1ea035) |
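As the listings above show, each container's NAME in `docker ps` is simply the string "nova-" followed by the Nova instance UUID, so correlating the two listings is a plain string operation. A small sketch, using the UUIDs from this transcript:

```python
# Container names from the `docker ps` output above.
container_names = [
    "nova-f9311d57-4352-48a6-a042-b36393e0af7a",
    "nova-65dd55b9-23ea-4e5b-aeed-4db259436df2",
    "nova-957814c1-834e-47e5-9236-ef228455fe36",
]

def nova_id(container_name):
    """Strip the 'nova-' prefix to recover the Nova instance UUID."""
    prefix = "nova-"
    if not container_name.startswith(prefix):
        raise ValueError("not a nova-docker container: %s" % container_name)
    return container_name[len(prefix):]

ids = [nova_id(n) for n in container_names]
print(ids[0])   # f9311d57-4352-48a6-a042-b36393e0af7a
```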

[root@ip-192-169-142-137 ~(keystone_demo)]# . keystonerc_glance
[root@ip-192-169-142-137 ~(keystone_glance)]# glance image-list

+————————————–+————————–+
| ID                                   | Name                     |

+————————————–+————————–+
| 27551b28-6df7-4b0e-a0c8-322b416092c1 | cirros                   |
| 9f2cd9bc-7840-47c1-81e8-3bc0f76426ec | derby/docker-glassfish41 |
| 93345f0b-fcbd-41e4-b335-a4ecb8b59e73 | kumarpraveen/fedora-sshd |
| 29c057f1-3c7d-43e3-80e6-dc8fef1ea035 | rastasheep/ubuntu-sshd   |
+————————————–+————————–+

[root@ip-192-169-142-137 ~(keystone_glance)]# swift list glance

29c057f1-3c7d-43e3-80e6-dc8fef1ea035
29c057f1-3c7d-43e3-80e6-dc8fef1ea035-00001
29c057f1-3c7d-43e3-80e6-dc8fef1ea035-00002

93345f0b-fcbd-41e4-b335-a4ecb8b59e73
93345f0b-fcbd-41e4-b335-a4ecb8b59e73-00001
93345f0b-fcbd-41e4-b335-a4ecb8b59e73-00002
93345f0b-fcbd-41e4-b335-a4ecb8b59e73-00003
93345f0b-fcbd-41e4-b335-a4ecb8b59e73-00004
93345f0b-fcbd-41e4-b335-a4ecb8b59e73-00005

9f2cd9bc-7840-47c1-81e8-3bc0f76426ec
9f2cd9bc-7840-47c1-81e8-3bc0f76426ec-00001
9f2cd9bc-7840-47c1-81e8-3bc0f76426ec-00002
9f2cd9bc-7840-47c1-81e8-3bc0f76426ec-00003
9f2cd9bc-7840-47c1-81e8-3bc0f76426ec-00004
9f2cd9bc-7840-47c1-81e8-3bc0f76426ec-00005
9f2cd9bc-7840-47c1-81e8-3bc0f76426ec-00006

Setting up Nova-Docker on Multi Node DVR Cluster RDO Mitaka

April 1, 2016

UPDATE 04/03/2016
In the meantime, better to use the repositories for RC1
rather than Delorean trunks.
END UPDATE

DVR && Nova-Docker Driver (stable/mitaka) tested fine on RDO Mitaka (build 20160329), with none of the issues described in the previous post for RDO Liberty.
So, create a DVR deployment with a Controller/Network node + N Compute nodes. Switch to the Docker hypervisor on each Compute node and make the required updates to glance and the filters file on the Controller. You are all set. Nova-Docker instances' FIPs are available from outside via the Neutron distributed router (DNAT), using the "fg" interface (fip-namespace) residing on the same host as the Docker hypervisor. South-North traffic does not involve VXLAN tunneling on DVR systems.

Why does DVR come into play?

This refreshes in memory a similar problem with the Nova-Docker driver (Kilo),
with which I had the same kind of trouble (VXLAN connection Controller <==> Compute)
on F22 (OVS 2.4.0), while the same driver worked fine on CentOS 7.1 (OVS 2.3.1).
My guess is that the Nova-Docker driver has a problem with OVS 2.4.0,
no matter which of the stable/kilo, stable/liberty, or stable/mitaka branches
is checked out for the driver build.

I have not run ovs-ofctl dump-flows on the br-tun bridges and so on,
because even having proved the malfunction I cannot file it to Bugzilla.
The Nova-Docker driver is not packaged for RDO, so it is upstream code,
and upstream won't consider an issue that involves building the driver
from source on RDO Mitaka (RC1).

Thus, as a quick and efficient workaround, I suggest a DVR deployment,
killing two birds with one stone. It results in South-North traffic
being forwarded directly from the host running the Docker hypervisor to the Internet
and vice versa, due to the basic "fg" functionality (the outgoing interface of
the fip-namespace, residing on the Compute node whose L3 agent runs in "dvr"
agent_mode).

**************************
Procedure in details
**************************

First install the repositories for RDO Mitaka (the most recent build that passed CI):-
# yum -y install yum-plugin-priorities
# cd /etc/yum.repos.d

# yum -y install openstack-packstack (Controller only)

Now proceed as follows :-

1. Here is the answer file to deploy the pre-DVR cluster.
2. Convert the cluster to DVR as advised in "RDO Liberty DVR Neutron workflow on CentOS 7.2" :-

http://dbaxps.blogspot.com/2015/10/rdo-liberty-rc-dvr-deployment.html

Just one note for RDO Mitaka: on each compute node, first create br-ex and add port eth0.

Then configure

*********************************
Compute nodes X=(3,4)
*********************************

# cat ifcfg-br-ex
DEVICE="br-ex"
BOOTPROTO="static"
DNS1="83.221.202.254"
GATEWAY="192.169.142.1"
NM_CONTROLLED="no"
DEFROUTE="yes"
IPV4_FAILURE_FATAL="yes"
IPV6INIT=no
ONBOOT="yes"
TYPE="OVSIntPort"
OVS_BRIDGE=br-ex
DEVICETYPE="ovs"

# cat ifcfg-eth0
DEVICE="eth0"
ONBOOT="yes"
TYPE="OVSPort"
DEVICETYPE="ovs"
OVS_BRIDGE=br-ex
NM_CONTROLLED=no
IPV6INIT=no

***************************
Then run script
***************************

#!/bin/bash -x
chkconfig network on
systemctl stop NetworkManager
systemctl disable NetworkManager
service network restart

Reboot node.

**********************************************
Nova-Docker Setup on each Compute
**********************************************

# curl -sSL https://get.docker.com/ | sh
# usermod -aG docker nova      ( this alone does not seem to set 660 on docker.sock )
# systemctl start docker
# systemctl enable docker
# chmod 666  /var/run/docker.sock (add to /etc/rc.d/rc.local)
# easy_install pip
# git clone -b stable/mitaka   https://github.com/openstack/nova-docker

*******************
Driver build
*******************

# cd nova-docker
# pip install -r requirements.txt
# python setup.py install

********************************************
Switch nova-compute to DockerDriver
********************************************

vi /etc/nova/nova.conf

[DEFAULT]
compute_driver = novadocker.virt.docker.DockerDriver

******************************************************************
Next on Controller/Network Node and each Compute Node
******************************************************************

mkdir /etc/nova/rootwrap.d
vi /etc/nova/rootwrap.d/docker.filters

[Filters]
# nova/virt/docker/driver.py: 'ln', '-sf', '/var/run/netns/.*'
ln: CommandFilter, /bin/ln, root

**********************************************************
Nova Compute Service restart on Compute Nodes
**********************************************************
# systemctl restart openstack-nova-compute
***********************************************
Glance API Service restart on Controller
**********************************************
vi /etc/glance/glance-api.conf
container_formats=ami,ari,aki,bare,ovf,ova,docker

# systemctl restart openstack-glance-api

**************************************************************************************
Build on Compute GlassFish 4.1 docker image per
http://bderzhavets.blogspot.com/2015/01/hacking-dockers-phusionbaseimage-to.html  and upload to glance :-
**************************************************************************************

REPOSITORY                 TAG                 IMAGE ID            CREATED              SIZE
derby/docker-glassfish41   latest              3a6b84ec9206        About a minute ago   1.155 GB
rastasheep/ubuntu-sshd     latest              70e0ac74c691        2 days ago           251.6 MB
phusion/baseimage          latest              772dd063a060        3 months ago         305.1 MB
tutum/tomcat               latest              2edd730bbedd        7 months ago         539.9 MB
larsks/thttpd              latest              a31ab5050b67        15 months ago        1.058 MB

[root@ip-192-169-142-137 ~(keystone_admin)]# docker save derby/docker-glassfish41 | openstack image create derby/docker-glassfish41 --public --container-format docker --disk-format raw

+——————+——————————————————+
| Field            | Value                                                |
+——————+——————————————————+
| checksum         | 9bea6dd0bcd8d0d7da2d82579c0e658a                     |
| container_format | docker                                               |
| created_at       | 2016-04-01T14:29:20Z                                 |
| disk_format      | raw                                                  |
| file             | /v2/images/acf03d15-b7c5-4364-b00f-603b6a5d9af2/file |
| id               | acf03d15-b7c5-4364-b00f-603b6a5d9af2                 |
| min_disk         | 0                                                    |
| min_ram          | 0                                                    |
| name             | derby/docker-glassfish41                             |
| owner            | 31b24d4b1574424abe53b9a5affc70c8                     |
| protected        | False                                                |
| schema           | /v2/schemas/image                                    |
| size             | 1175020032                                           |
| status           | active                                               |
| tags             |                                                      |
| updated_at       | 2016-04-01T14:30:13Z                                 |
| virtual_size     | None                                                 |
| visibility       | public                                               |
+——————+——————————————————+
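The checksum field reported above is, in the Image API of this era, the MD5 hex digest of the uploaded bits, so a locally exported image can be verified against it. A sketch below; since the image here was piped straight from `docker save`, the file path in the usage comment is hypothetical.

```python
import hashlib

def image_md5(path, chunk_size=1 << 20):
    """Return the MD5 hex digest of a file, read in chunks to bound memory use."""
    digest = hashlib.md5()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            digest.update(chunk)
    return digest.hexdigest()

# Usage against a locally saved image tarball (path hypothetical):
# image_md5("derby-docker-glassfish41.tar") should match the glance
# checksum field, e.g. "9bea6dd0bcd8d0d7da2d82579c0e658a" above.
```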

CONTAINER ID        IMAGE                      COMMAND               CREATED             STATUS              PORTS               NAMES

8f551d35f2d7        derby/docker-glassfish41   “/sbin/my_init”       39 seconds ago      Up 31 seconds                           nova-faba725e-e031-4edb-bf2c-41c6dfc188c1
dee4425261e8        tutum/tomcat               “/run.sh”             About an hour ago   Up About an hour                        nova-13450558-12d7-414c-bcd2-d746495d7a57
41d2ebc54d75        rastasheep/ubuntu-sshd     “/usr/sbin/sshd -D”   2 hours ago         Up About an hour                        nova-04ddea42-10a3-4a08-9f00-df60b5890ee9

*** Running /etc/my_init.d/00_regen_ssh_host_keys.sh…
No SSH host key available. Generating one…
*** Running /etc/my_init.d/01_sshd_start.sh…
Creating SSH2 RSA key; this may take some time …
Creating SSH2 DSA key; this may take some time …
Creating SSH2 ECDSA key; this may take some time …
Creating SSH2 ED25519 key; this may take some time …
invoke-rc.d: policy-rc.d denied execution of restart.
SSH KEYS regenerated by Boris just in case !
SSHD started !

*** Running /etc/my_init.d/database.sh…
Derby database started !
*** Running /etc/my_init.d/run.sh…

Bad Network Configuration.  DNS can not resolve the hostname:
java.net.UnknownHostException: instance-00000006: instance-00000006: unknown error

Waiting for domain1 to start ……
Successfully started the domain : domain1
domain  Location: /opt/glassfish4/glassfish/domains/domain1
Log File: /opt/glassfish4/glassfish/domains/domain1/logs/server.log
Command start-domain executed successfully.

A fairly elaborate docker image, built by a "docker expert" such as myself 😉,
gets launched, and the nova-docker instance seems to run
several daemons at a time properly (sshd enabled).

Last login: Fri Apr  1 15:33:06 2016 from 192.169.142.1
root@instance-00000006:~# ps -ef

UID        PID  PPID  C STIME TTY          TIME CMD
root         1     0  0 14:32 ?        00:00:00 /usr/bin/python3 -u /sbin/my_init
root       100     1  0 14:33 ?        00:00:00 /bin/bash /etc/my_init.d/run.sh
root       103     1  0 14:33 ?        00:00:00 /usr/sbin/sshd
root       170     1  0 14:33 ?        00:00:03 /opt/jdk1.8.0_25/bin/java -Djava.library.path=/op
root       427   100  0 14:33 ?        00:00:02 java -jar /opt/glassfish4/bin/../glassfish/lib/cl
root       444   427  2 14:33 ?        00:01:23 /opt/jdk1.8.0_25/bin/java -cp /opt/glassfish4/gla

root      1078     0  0 15:32 ?        00:00:00 bash
root      1110   103  0 15:33 ?        00:00:00 sshd: root@pts/0
root      1112  1110  0 15:33 pts/0    00:00:00 -bash
root      1123  1112  0 15:33 pts/0    00:00:00 ps -ef

Glassfish is running indeed

Setup Docker Hypervisor on Multi Node DVR Cluster RDO Mitaka

March 31, 2016

UPDATE 04/01/2016

DVR && Nova-Docker Driver (stable/mitaka) tested fine on RDO Mitaka (build 20160329), with none of the issues described in the link for RDO Liberty. So, create a DVR deployment with a Controller/Network node + N Compute nodes. Switch to the Docker hypervisor on each Compute node and make the required updates to glance and the filters file on the Controller. You are all set. Nova-Docker instances' FIPs are available from outside via the Neutron distributed router (DNAT), using the "fg" interface (fip-namespace) residing on the same host as the Docker hypervisor. South-North traffic does not involve VXLAN tunneling on DVR systems.

END UPDATE

Perform a two-node cluster deployment: Controller + Network&Compute (ML2&OVS&VXLAN). Another configuration available via packstack is Controller+Storage+Compute&Network.
The deployment schema below starts all four Neutron agents on the Compute node (which is supposed to run Nova-Docker instances). Thus routing via a VXLAN tunnel is excluded. Nova-Docker instances are routed to the Internet and vice versa via the local neutron router (DNAT/SNAT) residing on the same host where the Docker hypervisor is running.

For a multi-node solution, testing DVR with the Nova-Docker driver is required.

For now this is tested only on an RDO Liberty DVR system :-
An RDO Liberty DVR cluster was switched to Nova-Docker (stable/liberty) successfully. Containers (instances) may be launched on Compute nodes and are available via their FIPs, due to neutron (DNAT) routing via the "fg" interface of the corresponding fip-namespace. Snapshots here.

The question will be closed if I can get the same results on RDO Mitaka, which would solve the problem of a multi-node Docker hypervisor deployment across Compute nodes: no VXLAN tunnels for South-North traffic, supported by the metadata, L3, and openvswitch neutron agents, with a single DHCP agent providing private IPs and residing on the Controller/Network node.
SELinux should be set to permissive mode after the RDO deployment.

First install the repositories for RDO Mitaka (the most recent build that passed CI):-
# yum -y install yum-plugin-priorities
# cd /etc/yum.repos.d

# yum -y install openstack-packstack (Controller only)

********************************************

Answer file for RDO Mitaka deployment

********************************************

[general]

CONFIG_SSH_KEY=/root/.ssh/id_rsa.pub

CONFIG_SERVICE_WORKERS=%{::processorcount}

CONFIG_GLANCE_INSTALL=y

CONFIG_CINDER_INSTALL=y

CONFIG_MANILA_INSTALL=n

CONFIG_NOVA_INSTALL=y

CONFIG_NEUTRON_INSTALL=y

CONFIG_HORIZON_INSTALL=y

CONFIG_SWIFT_INSTALL=y

CONFIG_CEILOMETER_INSTALL=y

CONFIG_AODH_INSTALL=y

CONFIG_GNOCCHI_INSTALL=y

CONFIG_SAHARA_INSTALL=n

CONFIG_HEAT_INSTALL=n

CONFIG_TROVE_INSTALL=n

CONFIG_IRONIC_INSTALL=n

CONFIG_CLIENT_INSTALL=y

CONFIG_NTP_SERVERS=

CONFIG_NAGIOS_INSTALL=y

EXCLUDE_SERVERS=

CONFIG_DEBUG_MODE=n

CONFIG_CONTROLLER_HOST=192.169.142.127
CONFIG_COMPUTE_HOSTS=192.169.142.137
CONFIG_NETWORK_HOSTS=192.169.142.137

CONFIG_VMWARE_BACKEND=n

CONFIG_UNSUPPORTED=n

CONFIG_USE_SUBNETS=n

CONFIG_VCENTER_HOST=

CONFIG_VCENTER_USER=

CONFIG_VCENTER_CLUSTER_NAMES=

CONFIG_STORAGE_HOST=192.169.142.127

CONFIG_SAHARA_HOST=192.169.142.127

CONFIG_USE_EPEL=y

CONFIG_REPO=

CONFIG_ENABLE_RDO_TESTING=n

CONFIG_RH_USER=

CONFIG_SATELLITE_URL=

CONFIG_RH_SAT6_SERVER=

CONFIG_RH_PW=

CONFIG_RH_OPTIONAL=y

CONFIG_RH_PROXY=

CONFIG_RH_SAT6_ORG=

CONFIG_RH_SAT6_KEY=

CONFIG_RH_PROXY_PORT=

CONFIG_RH_PROXY_USER=

CONFIG_RH_PROXY_PW=

CONFIG_SATELLITE_USER=

CONFIG_SATELLITE_PW=

CONFIG_SATELLITE_AKEY=

CONFIG_SATELLITE_CACERT=

CONFIG_SATELLITE_PROFILE=

CONFIG_SATELLITE_FLAGS=

CONFIG_SATELLITE_PROXY=

CONFIG_SATELLITE_PROXY_USER=

CONFIG_SATELLITE_PROXY_PW=

CONFIG_SSL_CACERT_FILE=/etc/pki/tls/certs/selfcert.crt

CONFIG_SSL_CACERT_KEY_FILE=/etc/pki/tls/private/selfkey.key

CONFIG_SSL_CERT_DIR=~/packstackca/

CONFIG_SSL_CACERT_SELFSIGN=y

CONFIG_SELFSIGN_CACERT_SUBJECT_C=–

CONFIG_SELFSIGN_CACERT_SUBJECT_ST=State

CONFIG_SELFSIGN_CACERT_SUBJECT_L=City

CONFIG_SELFSIGN_CACERT_SUBJECT_O=openstack

CONFIG_SELFSIGN_CACERT_SUBJECT_OU=packstack

CONFIG_SELFSIGN_CACERT_SUBJECT_CN=ip-192-169-142-127.ip.secureserver.net

CONFIG_AMQP_BACKEND=rabbitmq

CONFIG_AMQP_HOST=192.169.142.127

CONFIG_AMQP_ENABLE_SSL=n

CONFIG_AMQP_ENABLE_AUTH=n

CONFIG_AMQP_NSS_CERTDB_PW=PW_PLACEHOLDER

CONFIG_AMQP_AUTH_USER=amqp_user

CONFIG_KEYSTONE_DB_PW=abcae16b785245c3

CONFIG_KEYSTONE_DB_PURGE_ENABLE=True

CONFIG_KEYSTONE_REGION=RegionOne

CONFIG_KEYSTONE_DEMO_PW=bf737b785cfa4398

CONFIG_KEYSTONE_API_VERSION=v2.0

CONFIG_KEYSTONE_TOKEN_FORMAT=UUID

CONFIG_KEYSTONE_SERVICE_NAME=httpd

CONFIG_KEYSTONE_IDENTITY_BACKEND=sql

CONFIG_KEYSTONE_LDAP_URL=ldap://12.0.0.127

CONFIG_KEYSTONE_LDAP_USER_DN=

CONFIG_KEYSTONE_LDAP_SUFFIX=

CONFIG_KEYSTONE_LDAP_QUERY_SCOPE=one

CONFIG_KEYSTONE_LDAP_PAGE_SIZE=-1

CONFIG_KEYSTONE_LDAP_USER_SUBTREE=

CONFIG_KEYSTONE_LDAP_USER_FILTER=

CONFIG_KEYSTONE_LDAP_USER_OBJECTCLASS=

CONFIG_KEYSTONE_LDAP_USER_ID_ATTRIBUTE=

CONFIG_KEYSTONE_LDAP_USER_NAME_ATTRIBUTE=

CONFIG_KEYSTONE_LDAP_USER_MAIL_ATTRIBUTE=

CONFIG_KEYSTONE_LDAP_USER_ENABLED_ATTRIBUTE=

CONFIG_KEYSTONE_LDAP_USER_ENABLED_DEFAULT=TRUE

CONFIG_KEYSTONE_LDAP_USER_ENABLED_INVERT=n

CONFIG_KEYSTONE_LDAP_USER_ATTRIBUTE_IGNORE=

CONFIG_KEYSTONE_LDAP_USER_DEFAULT_PROJECT_ID_ATTRIBUTE=

CONFIG_KEYSTONE_LDAP_USER_ALLOW_CREATE=n

CONFIG_KEYSTONE_LDAP_USER_ALLOW_UPDATE=n

CONFIG_KEYSTONE_LDAP_USER_ALLOW_DELETE=n

CONFIG_KEYSTONE_LDAP_USER_PASS_ATTRIBUTE=

CONFIG_KEYSTONE_LDAP_USER_ENABLED_EMULATION_DN=

CONFIG_KEYSTONE_LDAP_GROUP_SUBTREE=

CONFIG_KEYSTONE_LDAP_GROUP_FILTER=

CONFIG_KEYSTONE_LDAP_GROUP_OBJECTCLASS=

CONFIG_KEYSTONE_LDAP_GROUP_ID_ATTRIBUTE=

CONFIG_KEYSTONE_LDAP_GROUP_NAME_ATTRIBUTE=

CONFIG_KEYSTONE_LDAP_GROUP_MEMBER_ATTRIBUTE=

CONFIG_KEYSTONE_LDAP_GROUP_DESC_ATTRIBUTE=

CONFIG_KEYSTONE_LDAP_GROUP_ATTRIBUTE_IGNORE=

CONFIG_KEYSTONE_LDAP_GROUP_ALLOW_CREATE=n

CONFIG_KEYSTONE_LDAP_GROUP_ALLOW_UPDATE=n

CONFIG_KEYSTONE_LDAP_GROUP_ALLOW_DELETE=n

CONFIG_KEYSTONE_LDAP_USE_TLS=n

CONFIG_KEYSTONE_LDAP_TLS_CACERTDIR=

CONFIG_KEYSTONE_LDAP_TLS_CACERTFILE=

CONFIG_KEYSTONE_LDAP_TLS_REQ_CERT=demand

CONFIG_GLANCE_DB_PW=41264fc52ffd4fe8

CONFIG_GLANCE_KS_PW=f6a9398960534797

CONFIG_GLANCE_BACKEND=file

CONFIG_CINDER_DB_PW=5ac08c6d09ba4b69

CONFIG_CINDER_DB_PURGE_ENABLE=True

CONFIG_CINDER_KS_PW=c8cb1ecb8c2b4f6f

CONFIG_CINDER_BACKEND=lvm

CONFIG_CINDER_VOLUMES_CREATE=y

CONFIG_CINDER_VOLUMES_SIZE=2G

CONFIG_CINDER_GLUSTER_MOUNTS=

CONFIG_CINDER_NFS_MOUNTS=

CONFIG_CINDER_NETAPP_HOSTNAME=

CONFIG_CINDER_NETAPP_SERVER_PORT=80

CONFIG_CINDER_NETAPP_STORAGE_FAMILY=ontap_cluster

CONFIG_CINDER_NETAPP_TRANSPORT_TYPE=http

CONFIG_CINDER_NETAPP_STORAGE_PROTOCOL=nfs

CONFIG_CINDER_NETAPP_SIZE_MULTIPLIER=1.0

CONFIG_CINDER_NETAPP_EXPIRY_THRES_MINUTES=720

CONFIG_CINDER_NETAPP_THRES_AVL_SIZE_PERC_START=20

CONFIG_CINDER_NETAPP_THRES_AVL_SIZE_PERC_STOP=60

CONFIG_CINDER_NETAPP_NFS_SHARES=

CONFIG_CINDER_NETAPP_NFS_SHARES_CONFIG=/etc/cinder/shares.conf

CONFIG_CINDER_NETAPP_VOLUME_LIST=

CONFIG_CINDER_NETAPP_VFILER=

CONFIG_CINDER_NETAPP_PARTNER_BACKEND_NAME=

CONFIG_CINDER_NETAPP_VSERVER=

CONFIG_CINDER_NETAPP_CONTROLLER_IPS=

CONFIG_CINDER_NETAPP_ESERIES_HOST_TYPE=linux_dm_mp

CONFIG_CINDER_NETAPP_WEBSERVICE_PATH=/devmgr/v2

CONFIG_CINDER_NETAPP_STORAGE_POOLS=

CONFIG_IRONIC_DB_PW=PW_PLACEHOLDER

CONFIG_IRONIC_KS_PW=PW_PLACEHOLDER

CONFIG_NOVA_DB_PURGE_ENABLE=True

CONFIG_NOVA_DB_PW=1e1b5aeeeaf342a8

CONFIG_NOVA_KS_PW=d9583177a2444f06

CONFIG_NOVA_SCHED_CPU_ALLOC_RATIO=16.0

CONFIG_NOVA_SCHED_RAM_ALLOC_RATIO=1.5

CONFIG_NOVA_COMPUTE_MIGRATE_PROTOCOL=tcp

CONFIG_NOVA_COMPUTE_MANAGER=nova.compute.manager.ComputeManager

CONFIG_VNC_SSL_CERT=

CONFIG_VNC_SSL_KEY=

CONFIG_NOVA_PCI_ALIAS=

CONFIG_NOVA_PCI_PASSTHROUGH_WHITELIST=

CONFIG_NOVA_COMPUTE_PRIVIF=

CONFIG_NOVA_NETWORK_MANAGER=nova.network.manager.FlatDHCPManager

CONFIG_NOVA_NETWORK_PUBIF=eth0

CONFIG_NOVA_NETWORK_PRIVIF=

CONFIG_NOVA_NETWORK_FIXEDRANGE=192.168.32.0/22

CONFIG_NOVA_NETWORK_FLOATRANGE=10.3.4.0/22

CONFIG_NOVA_NETWORK_AUTOASSIGNFLOATINGIP=n

CONFIG_NOVA_NETWORK_VLAN_START=100

CONFIG_NOVA_NETWORK_NUMBER=1

CONFIG_NOVA_NETWORK_SIZE=255

CONFIG_NEUTRON_KS_PW=808e36e154bd4cee

CONFIG_NEUTRON_DB_PW=0e2b927a21b44737

CONFIG_NEUTRON_L3_EXT_BRIDGE=br-ex

CONFIG_LBAAS_INSTALL=n

CONFIG_NEUTRON_METERING_AGENT_INSTALL=n

CONFIG_NEUTRON_FWAAS=n

CONFIG_NEUTRON_VPNAAS=n

CONFIG_NEUTRON_ML2_TYPE_DRIVERS=vxlan
CONFIG_NEUTRON_ML2_TENANT_NETWORK_TYPES=vxlan
CONFIG_NEUTRON_ML2_MECHANISM_DRIVERS=openvswitch
CONFIG_NEUTRON_ML2_FLAT_NETWORKS=*
CONFIG_NEUTRON_ML2_VLAN_RANGES=
CONFIG_NEUTRON_ML2_TUNNEL_ID_RANGES=1001:2000
CONFIG_NEUTRON_ML2_VXLAN_GROUP=239.1.1.2
CONFIG_NEUTRON_ML2_VNI_RANGES=1001:2000
CONFIG_NEUTRON_L2_AGENT=openvswitch

CONFIG_NEUTRON_ML2_SUPPORTED_PCI_VENDOR_DEVS=['15b3:1004', '8086:10ca']

CONFIG_NEUTRON_ML2_SRIOV_AGENT_REQUIRED=n

CONFIG_NEUTRON_ML2_SRIOV_INTERFACE_MAPPINGS=

CONFIG_NEUTRON_LB_INTERFACE_MAPPINGS=

CONFIG_NEUTRON_OVS_BRIDGE_MAPPINGS=physnet1:br-ex

CONFIG_NEUTRON_OVS_BRIDGE_IFACES=

CONFIG_NEUTRON_OVS_TUNNEL_IF=eth1

CONFIG_NEUTRON_OVS_TUNNEL_SUBNETS=

CONFIG_NEUTRON_OVS_VXLAN_UDP_PORT=4789

CONFIG_MANILA_DB_PW=PW_PLACEHOLDER

CONFIG_MANILA_KS_PW=PW_PLACEHOLDER

CONFIG_MANILA_BACKEND=generic

CONFIG_MANILA_NETAPP_DRV_HANDLES_SHARE_SERVERS=false

CONFIG_MANILA_NETAPP_TRANSPORT_TYPE=https

CONFIG_MANILA_NETAPP_SERVER_HOSTNAME=

CONFIG_MANILA_NETAPP_STORAGE_FAMILY=ontap_cluster

CONFIG_MANILA_NETAPP_SERVER_PORT=443

CONFIG_MANILA_NETAPP_AGGREGATE_NAME_SEARCH_PATTERN=(.*)

CONFIG_MANILA_NETAPP_ROOT_VOLUME_AGGREGATE=

CONFIG_MANILA_NETAPP_ROOT_VOLUME_NAME=root

CONFIG_MANILA_NETAPP_VSERVER=

CONFIG_MANILA_GENERIC_DRV_HANDLES_SHARE_SERVERS=true

CONFIG_MANILA_GENERIC_VOLUME_NAME_TEMPLATE=manila-share-%s

CONFIG_MANILA_GENERIC_SHARE_MOUNT_PATH=/shares

CONFIG_MANILA_SERVICE_IMAGE_LOCATION=https://www.dropbox.com/s/vi5oeh10q1qkckh/ubuntu_1204_nfs_cifs.qcow2

CONFIG_MANILA_SERVICE_INSTANCE_USER=ubuntu

CONFIG_MANILA_NETWORK_TYPE=neutron

CONFIG_MANILA_NETWORK_STANDALONE_GATEWAY=

CONFIG_MANILA_NETWORK_STANDALONE_SEG_ID=

CONFIG_MANILA_NETWORK_STANDALONE_IP_RANGE=

CONFIG_MANILA_NETWORK_STANDALONE_IP_VERSION=4

CONFIG_MANILA_GLUSTERFS_SERVERS=

CONFIG_MANILA_GLUSTERFS_NATIVE_PATH_TO_PRIVATE_KEY=

CONFIG_MANILA_GLUSTERFS_VOLUME_PATTERN=

CONFIG_MANILA_GLUSTERFS_TARGET=

CONFIG_MANILA_GLUSTERFS_MOUNT_POINT_BASE=

CONFIG_MANILA_GLUSTERFS_NFS_SERVER_TYPE=gluster

CONFIG_MANILA_GLUSTERFS_PATH_TO_PRIVATE_KEY=

CONFIG_MANILA_GLUSTERFS_GANESHA_SERVER_IP=

CONFIG_HORIZON_SSL=n

CONFIG_HORIZON_SSL_CERT=

CONFIG_HORIZON_SSL_KEY=

CONFIG_HORIZON_SSL_CACERT=

CONFIG_SWIFT_KS_PW=30911de72a15427e

CONFIG_SWIFT_STORAGES=

CONFIG_SWIFT_STORAGE_ZONES=1

CONFIG_SWIFT_STORAGE_REPLICAS=1

CONFIG_SWIFT_STORAGE_FSTYPE=ext4

CONFIG_SWIFT_HASH=a55607bff10c4210

CONFIG_SWIFT_STORAGE_SIZE=2G

CONFIG_HEAT_DB_PW=PW_PLACEHOLDER

CONFIG_HEAT_AUTH_ENC_KEY=0ef4161f3bb24230

CONFIG_HEAT_KS_PW=PW_PLACEHOLDER

CONFIG_HEAT_CLOUDWATCH_INSTALL=n

CONFIG_HEAT_CFN_INSTALL=n

CONFIG_HEAT_DOMAIN=heat

CONFIG_PROVISION_DEMO=n

CONFIG_PROVISION_TEMPEST=n

CONFIG_PROVISION_DEMO_FLOATRANGE=172.24.4.224/28

CONFIG_PROVISION_IMAGE_NAME=cirros

CONFIG_PROVISION_IMAGE_FORMAT=qcow2

CONFIG_PROVISION_IMAGE_SSH_USER=cirros

CONFIG_TEMPEST_HOST=

CONFIG_PROVISION_TEMPEST_USER=

CONFIG_PROVISION_TEMPEST_USER_PW=PW_PLACEHOLDER

CONFIG_PROVISION_TEMPEST_FLOATRANGE=172.24.4.224/28

CONFIG_PROVISION_TEMPEST_REPO_URI=https://github.com/openstack/tempest.git

CONFIG_PROVISION_TEMPEST_REPO_REVISION=master

CONFIG_RUN_TEMPEST=n

CONFIG_RUN_TEMPEST_TESTS=smoke

CONFIG_PROVISION_OVS_BRIDGE=n

CONFIG_CEILOMETER_SECRET=19ae0e7430174349

CONFIG_CEILOMETER_KS_PW=337b08d4b3a44753

CONFIG_CEILOMETER_SERVICE_NAME=httpd

CONFIG_CEILOMETER_COORDINATION_BACKEND=redis

CONFIG_MONGODB_HOST=192.169.142.127

CONFIG_REDIS_MASTER_HOST=192.169.142.127

CONFIG_REDIS_PORT=6379

CONFIG_REDIS_HA=n

CONFIG_REDIS_SLAVE_HOSTS=

CONFIG_REDIS_SENTINEL_HOSTS=

CONFIG_REDIS_SENTINEL_CONTACT_HOST=

CONFIG_REDIS_SENTINEL_PORT=26379

CONFIG_REDIS_SENTINEL_QUORUM=2

CONFIG_REDIS_MASTER_NAME=mymaster

CONFIG_AODH_KS_PW=acdd500a5fed4700

CONFIG_GNOCCHI_DB_PW=cf11b5d6205f40e7

CONFIG_GNOCCHI_KS_PW=36eba4690b224044

CONFIG_TROVE_DB_PW=PW_PLACEHOLDER

CONFIG_TROVE_KS_PW=PW_PLACEHOLDER

CONFIG_TROVE_NOVA_USER=trove

CONFIG_TROVE_NOVA_TENANT=services

CONFIG_TROVE_NOVA_PW=PW_PLACEHOLDER

CONFIG_SAHARA_DB_PW=PW_PLACEHOLDER

CONFIG_SAHARA_KS_PW=PW_PLACEHOLDER

CONFIG_NAGIOS_PW=02f168ee8edd44e4

**********************************************************************

Upon completion connect to external network on Compute Node :-

**********************************************************************

DEVICE="br-ex"
BOOTPROTO="static"
DNS1="83.221.202.254"
GATEWAY="172.124.4.1"
NM_CONTROLLED="no"
TYPE="OVSIntPort"
OVS_BRIDGE=br-ex
DEVICETYPE="ovs"
DEFROUTE="yes"
IPV4_FAILURE_FATAL="yes"
IPV6INIT=no

DEVICE="eth2"
ONBOOT="yes"
TYPE="OVSPort"
DEVICETYPE="ovs"
OVS_BRIDGE=br-ex
NM_CONTROLLED=no
IPV6INIT=no

#!/bin/bash -x
chkconfig network on
systemctl stop NetworkManager
systemctl disable NetworkManager
service network restart

**********************************************
Verification Compute node status
**********************************************

== Nova services ==
openstack-nova-api:                     inactive  (disabled on boot)
openstack-nova-compute:                 active
openstack-nova-network:                 inactive  (disabled on boot)
openstack-nova-scheduler:               inactive  (disabled on boot)
== neutron services ==
neutron-server:                         inactive  (disabled on boot)
neutron-dhcp-agent:                     active
neutron-l3-agent:                       active
neutron-openvswitch-agent:              active

== ceilometer services ==
openstack-ceilometer-api:               inactive  (disabled on boot)
openstack-ceilometer-central:           inactive  (disabled on boot)
openstack-ceilometer-compute:           active
openstack-ceilometer-collector:         inactive  (disabled on boot)
== Support services ==
openvswitch:                            active
dbus:                                   active
Warning novarc not sourced

13.0.0-0.20160329105656.7662fb9.el7.centos

Also install  python-openstackclient on Compute

******************************************
Verification status on Controller
******************************************

== Nova services ==
openstack-nova-api:                     active
openstack-nova-compute:                 inactive  (disabled on boot)
openstack-nova-network:                 inactive  (disabled on boot)
openstack-nova-scheduler:               active
openstack-nova-cert:                    active
openstack-nova-conductor:               active
openstack-nova-console:                 inactive  (disabled on boot)
openstack-nova-consoleauth:             active
openstack-nova-xvpvncproxy:             inactive  (disabled on boot)
== Glance services ==
openstack-glance-api:                   active
openstack-glance-registry:              active
== Keystone service ==
openstack-keystone:                     inactive  (disabled on boot)
== Horizon service ==
openstack-dashboard:                    active
== neutron services ==
neutron-server:                        active
neutron-dhcp-agent:                     inactive  (disabled on boot)
neutron-l3-agent:                       inactive  (disabled on boot)
== Swift services ==
openstack-swift-proxy:                  active
openstack-swift-account:                active
openstack-swift-container:              active
openstack-swift-object:                 active
== Cinder services ==
openstack-cinder-api:                   active
openstack-cinder-scheduler:             active
openstack-cinder-volume:                active
openstack-cinder-backup:                active
== Ceilometer services ==
openstack-ceilometer-api:               inactive  (disabled on boot)
openstack-ceilometer-central:           active
openstack-ceilometer-compute:           inactive  (disabled on boot)
openstack-ceilometer-collector:         active
== Support services ==
mysqld:                                 inactive  (disabled on boot)
dbus:                                   active
target:                                 active
rabbitmq-server:                        active
memcached:                              active

== Keystone users ==

+----------------------------------+------------+---------+----------------------+
|                id                |    name    | enabled |        email         |
+----------------------------------+------------+---------+----------------------+
| f7dbea6e5b704c7d8e77e88c1ce1fce8 |   admin    |   True  |    root@localhost    |
| baf4ee3fe0e749f982747ffe68e0e562 |    aodh    |   True  |    aodh@localhost    |
| 770d5c0974fb49998440b1080e5939a0 |   boris    |   True  |                      |
| f88d8e83df0f43a991cb7ff063a2439f | ceilometer |   True  | ceilometer@localhost |
| e7a92f59f081403abd9c0f92c4f8d8d0 |   cinder   |   True  |   cinder@localhost   |
| 58e531b5eba74db2b4559aaa16561900 |   glance   |   True  |   glance@localhost   |
| d215d99466aa481f847df2a909c139f7 |  gnocchi   |   True  |  gnocchi@localhost   |
| 5d3433f7d54d40d8b9eeb576582cc672 |  neutron   |   True  |  neutron@localhost   |
| 3a50997aa6fc4c129dff624ed9745b94 |    nova    |   True  |    nova@localhost    |
| ef1a323f98cb43c789e4f84860afea35 |   swift    |   True  |   swift@localhost    |
+----------------------------------+------------+---------+----------------------+

== Glance images ==

+--------------------------------------+--------------------------+
| ID                                   | Name                     |
+--------------------------------------+--------------------------+
| cbf88266-0b49-4bc2-9527-cc9c9da0c1eb | derby/docker-glassfish41 |
| 5d0a97c3-c717-46ac-a30f-86208ea0d31d | larsks/thttpd            |
| 80eb0d7d-17ae-49c7-997f-38d8a3aeeabd | rastasheep/ubuntu-sshd   |
+--------------------------------------+--------------------------+

== Nova managed services ==

+----+------------------+----------------------------------------+----------+---------+-------+----------------------------+-----------------+
| Id | Binary           | Host                                   | Zone     | Status  | State | Updated_at                 | Disabled Reason |
+----+------------------+----------------------------------------+----------+---------+-------+----------------------------+-----------------+
| 5  | nova-cert        | ip-192-169-142-127.ip.secureserver.net | internal | enabled | up    | 2016-03-31T09:59:53.000000 | -               |
| 6  | nova-consoleauth | ip-192-169-142-127.ip.secureserver.net | internal | enabled | up    | 2016-03-31T09:59:52.000000 | -               |
| 7  | nova-scheduler   | ip-192-169-142-127.ip.secureserver.net | internal | enabled | up    | 2016-03-31T09:59:52.000000 | -               |
| 8  | nova-conductor   | ip-192-169-142-127.ip.secureserver.net | internal | enabled | up    | 2016-03-31T09:59:54.000000 | -               |
| 10 | nova-compute     | ip-192-169-142-137.ip.secureserver.net | nova     | enabled | up    | 2016-03-31T09:59:55.000000 | -               |
+----+------------------+----------------------------------------+----------+---------+-------+----------------------------+-----------------+

== Nova networks ==

+--------------------------------------+--------------+------+
| ID                                   | Label        | Cidr |
+--------------------------------------+--------------+------+
| 47798c88-29e5-4dee-8206-d0f9b7e19130 | public       | -    |
| 8f849505-0550-4f6c-8c73-6b8c9ec56789 | private      | -    |
| bcfcf3c3-c651-4ae7-b7ee-fdafae04a2a9 | demo_network | -    |
+--------------------------------------+--------------+------+

== Nova instance flavors ==

+----+-----------+-----------+------+-----------+------+-------+-------------+-----------+
| ID | Name      | Memory_MB | Disk | Ephemeral | Swap | VCPUs | RXTX_Factor | Is_Public |
+----+-----------+-----------+------+-----------+------+-------+-------------+-----------+
| 1  | m1.tiny   | 512       | 1    | 0         |      | 1     | 1.0         | True      |
| 2  | m1.small  | 2048      | 20   | 0         |      | 1     | 1.0         | True      |
| 3  | m1.medium | 4096      | 40   | 0         |      | 2     | 1.0         | True      |
| 4  | m1.large  | 8192      | 80   | 0         |      | 4     | 1.0         | True      |
| 5  | m1.xlarge | 16384     | 160  | 0         |      | 8     | 1.0         | True      |
+----+-----------+-----------+------+-----------+------+-------+-------------+-----------+

== Nova instances ==

+--------------------------------------+------------------+----------------------------------+--------+------------+-------------+---------------------------------------+
| ID                                   | Name             | Tenant ID                        | Status | Task State | Power State | Networks                              |
+--------------------------------------+------------------+----------------------------------+--------+------------+-------------+---------------------------------------+
| c8284258-f9c0-4b81-8cd0-db6e7cbf8d48 | UbuntuRastasheep | 32df2fd0c85745c9901b2247ec4905bc | ACTIVE | -          | Running     | demo_network=90.0.0.15, 172.124.4.154 |
| 50f22f8a-e6ff-4b8b-8c15-f3b9bbd1aad2 | derbyGlassfish   | 32df2fd0c85745c9901b2247ec4905bc | ACTIVE | -          | Running     | demo_network=90.0.0.16, 172.124.4.155 |
| 03664d5e-f3c5-4ebb-9109-e96189150626 | testLars         | 32df2fd0c85745c9901b2247ec4905bc | ACTIVE | -          | Running     | demo_network=90.0.0.14, 172.124.4.153 |
+--------------------------------------+------------------+----------------------------------+--------+------------+-------------+---------------------------------------+

*********************************
Nova-Docker Setup on Compute
*********************************

# curl -sSL https://get.docker.com/ | sh
# usermod -aG docker nova      ( does not seem to help set 660 on docker.sock )
# systemctl start docker
# systemctl enable docker
# chmod 666  /var/run/docker.sock (add to /etc/rc.d/rc.local)
# easy_install pip
# git clone -b stable/mitaka   https://github.com/openstack/nova-docker

*******************
Driver build
*******************

# cd nova-docker
# pip install -r requirements.txt
# python setup.py install

********************************************
Switch nova-compute to DockerDriver
********************************************
vi /etc/nova/nova.conf
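The edit itself is not shown above; per the nova-docker driver's documentation it amounts to pointing compute_driver at the Docker driver. A sketch of the relevant fragment (not a verbatim copy of the author's file):

```ini
[DEFAULT]
# Replace the default libvirt driver with the nova-docker virt driver
compute_driver = novadocker.virt.docker.DockerDriver
```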

***********************************
Next one on Controller
***********************************

mkdir /etc/nova/rootwrap.d
vi /etc/nova/rootwrap.d/docker.filters

[Filters]
# nova/virt/docker/driver.py: 'ln', '-sf', '/var/run/netns/.*'
ln: CommandFilter, /bin/ln, root

****************************************************
Nova Compute Service restart on Compute
****************************************************

# systemctl restart openstack-nova-compute

****************************************
Glance API Service restart on Controller
****************************************

vi /etc/glance/glance-api.conf
container_formats=ami,ari,aki,bare,ovf,ova,docker
# systemctl restart openstack-glance-api

Build on Compute GlassFish 4.1 docker image per
http://bderzhavets.blogspot.com/2015/01/hacking-dockers-phusionbaseimage-to.html  and upload to glance :-

REPOSITORY                 TAG                 IMAGE ID            CREATED             SIZE

derby/docker-glassfish41   latest              615ce2c6a21f        29 minutes ago      1.155 GB
rastasheep/ubuntu-sshd     latest              70e0ac74c691        32 hours ago        251.6 MB
phusion/baseimage          latest              772dd063a060        3 months ago        305.1 MB
larsks/thttpd              latest              a31ab5050b67        15 months ago       1.058 MB

[root@ip-192-169-142-137 ~(keystone_admin)]# docker save derby/docker-glassfish41 | openstack image create  derby/docker-glassfish41  --public --container-format docker --disk-format raw

+------------------+------------------------------------------------------+
| Field            | Value                                                |
+------------------+------------------------------------------------------+
| checksum         | dca755d516e35d947ae87ff8bef8fa8f                     |
| container_format | docker                                               |
| created_at       | 2016-03-31T09:32:53Z                                 |
| disk_format      | raw                                                  |
| file             | /v2/images/cbf88266-0b49-4bc2-9527-cc9c9da0c1eb/file |
| id               | cbf88266-0b49-4bc2-9527-cc9c9da0c1eb                 |
| min_disk         | 0                                                    |
| min_ram          | 0                                                    |
| name             | derby/docker-glassfish41                             |
| owner            | 677c4fec97d14b8db0639086f5d59f7d                     |
| protected        | False                                                |
| schema           | /v2/schemas/image                                    |
| size             | 1175030784                                           |
| status           | active                                               |
| tags             |                                                      |
| updated_at       | 2016-03-31T09:33:58Z                                 |
| virtual_size     | None                                                 |
| visibility       | public                                               |
+------------------+------------------------------------------------------+

Now launch the derbyGlassfish instance via the dashboard and assign a floating IP

CONTAINER ID        IMAGE                      COMMAND                  CREATED             STATUS              PORTS               NAMES

70ac259e9176        derby/docker-glassfish41   “/sbin/my_init”          3 minutes ago       Up 3 minutes                            nova-50f22f8a-e6ff-4b8b-8c15-f3b9bbd1aad2
a0826911eabe        rastasheep/ubuntu-sshd     “/usr/sbin/sshd -D”      About an hour ago   Up About an hour                        nova-c8284258-f9c0-4b81-8cd0-db6e7cbf8d48
7923487076d5        larsks/thttpd              “/thttpd -D -l /dev/s”   About an hour ago   Up About an hour                        nova-03664d5e-f3c5-4ebb-9109-e96189150626

Storage Node (LVMiSCSI) deployment for RDO Kilo on CentOS 7.2

January 4, 2016

The RDO deployment below, done via a straightforward RDO Kilo packstack run, demonstrates that the Storage Node may work as a traditional iSCSI Target Server, with each Compute Node acting as an iSCSI initiator client. This functionality is provided by tuning the Cinder && Glance services running on the Storage Node.
What follows is a 3-node test deployment, Controller/Network && Compute && Storage, on RDO Kilo (CentOS 7.2), performed on a Fedora 23 host with the KVM/Libvirt hypervisor (32 GB RAM, Intel Core i7-4790 Haswell CPU, ASUS Z97-P). Three VMs (4 GB RAM, 4 VCPUS) have been set up: the Controller/Network VM with two VNICs (external/management subnet, VTEPs' subnet), the Compute Node VM with two VNICs (management, VTEPs' subnet), and the Storage Node VM with one VNIC (management).

Setup :-

192.169.142.127 – Controller/Network Node
192.169.142.137 – Compute Node
192.169.142.157 – Storage Node (LVMiSCSI)

Deployment could be done via answer-file from https://www.linux.com/community/blogs/133-general-linux/864102-storage-node-lvmiscsi-deployment-for-rdo-liberty-on-centos-71

Notice that the Glance, Cinder and Swift services are not running on the Controller. Connection to http://StorageNode-IP:8776/v1/xxxxxx/types will succeed as soon as the dependencies introduced by https://review.openstack.org/192883 are satisfied on the Storage Node; otherwise it could be done only via a second run of the RDO Kilo installer, with this port (cinder-api) already responding on the Controller, which had previously been set up as the first storage node. Thanks to Javier Pena, who did this troubleshooting in https://bugzilla.redhat.com/show_bug.cgi?id=1234038. The issue has been fixed in the RDO Liberty release.

Storage Node

Compute Node
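The per-session report below was presumably collected on the Compute Node with the open-iscsi admin utility, something like:

```
# iscsiadm -m session -P 3
```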

iSCSI Transport Class version 2.0-870
version 6.2.0.873-30
Target: iqn.2010-10.org.openstack:volume-3ab60233-5110-4915-9998-7cec7d3ac919 (non-flash)
Current Portal: 192.169.142.157:3260,1
Persistent Portal: 192.169.142.157:3260,1
**********
Interface:
**********
Iface Name: default
Iface Transport: tcp
Iface Initiatorname: iqn.1994-05.com.redhat:576fc73f30e9
Iface Netdev: <empty>
SID: 1
iSCSI Connection State: LOGGED IN
iSCSI Session State: LOGGED_IN
Internal iscsid Session State: NO CHANGE
*********
Timeouts:
*********
Recovery Timeout: 120
Target Reset Timeout: 30
LUN Reset Timeout: 30
Abort Timeout: 15
*****
CHAP:
*****
************************
Negotiated iSCSI params:
************************
MaxRecvDataSegmentLength: 262144
MaxXmitDataSegmentLength: 262144
FirstBurstLength: 65536
MaxBurstLength: 262144
ImmediateData: Yes
InitialR2T: Yes
MaxOutstandingR2T: 1
************************
Attached SCSI devices:
************************
Host Number: 2 State: running
scsi2 Channel 00 Id 0 Lun: 0
Attached scsi disk sda State: running
Target: iqn.2010-10.org.openstack:volume-2087aa9a-7984-4f4e-b00d-e461fcd02099 (non-flash)
Current Portal: 192.169.142.157:3260,1
Persistent Portal: 192.169.142.157:3260,1
**********
Interface:
**********
Iface Name: default
Iface Transport: tcp
Iface Initiatorname: iqn.1994-05.com.redhat:576fc73f30e9
Iface Netdev: <empty>
SID: 2
iSCSI Connection State: LOGGED IN
iSCSI Session State: LOGGED_IN
Internal iscsid Session State: NO CHANGE
*********
Timeouts:
*********
Recovery Timeout: 120
Target Reset Timeout: 30
LUN Reset Timeout: 30
Abort Timeout: 15
*****
CHAP:
*****
************************
Negotiated iSCSI params:
************************
MaxRecvDataSegmentLength: 262144
MaxXmitDataSegmentLength: 262144
FirstBurstLength: 65536
MaxBurstLength: 262144
ImmediateData: Yes
InitialR2T: Yes
MaxOutstandingR2T: 1
************************
Attached SCSI devices:
************************
Host Number: 3 State: running
scsi3 Channel 00 Id 0 Lun: 0
Attached scsi disk sdb State: running

Attempt to set up HAProxy/Keepalived 3 Node Controller on RDO Liberty per Javier Pena

November 18, 2015

URGENT UPDATE 11/18/2015
It looks like a work in progress.
END UPDATE

Actually, the setup below closely follows https://github.com/beekhof/osp-ha-deploy/blob/master/HA-keepalived.md

As far as I know, Cisco's schema has been implemented:
Keepalived, HAProxy, Galera for MySQL, manual install, at least 3 controller nodes. I have just highlighted several steps which, I believe, allowed me to bring this work to success. Javier uses a flat external network provider for the Controllers cluster, disabling NetworkManager && enabling the network service from the very start; there is one step which I was unable to skip. It is disabling the IPs of the eth0 interfaces && restarting the network service right before running `ovs-vsctl add-port br-eth0 eth0`, per the Neutron building instructions of the mentioned "Howto", which seems to be one of the best I have ever seen.

I guess that, due to this sequence of steps, even on a three-node Controller cluster that has already been built and seems to run OK, the external network is still pingable:

However, had I disabled the eth0 IPs from the start, I would have lost connectivity right away when switching from NetworkManager to the network service. In general, the external network is supposed to be pingable from the qrouter namespace due to the Neutron router's DNAT/SNAT iptables forwarding, but not from the Controller. I am also aware that when an Ethernet interface becomes an OVS port of an OVS bridge, its IP is supposed to be suppressed. When an external network provider is not used, br-ex gets any available IP on the external network. Using an external network provider changes the situation. Details may be seen here:

https://www.linux.com/community/blogs/133-general-linux/858156-multiple-external-networks-with-a-single-l3-agent-testing-on-rdo-liberty-per-lars-kellogg-stedman

NetworkManager.service – Network Manager

network.service – LSB: Bring up/down networking
Active: active (exited) since Wed 2015-11-18 08:36:53 MSK; 2h 10min ago
Process: 708 ExecStart=/etc/rc.d/init.d/network start (code=exited, status=0/SUCCESS)

Nov 18 08:36:47 hacontroller1.example.com network[708]: Bringing up loopback interface:  [  OK  ]
Nov 18 08:36:51 hacontroller1.example.com network[708]: Bringing up interface eth0:  [  OK  ]
Nov 18 08:36:53 hacontroller1.example.com network[708]: Bringing up interface eth1:  [  OK  ]
Nov 18 08:36:53 hacontroller1.example.com systemd[1]: Started LSB: Bring up/down networking.

inet6 fe80::5054:ff:fe6d:926a  prefixlen 64  scopeid 0x20<link>
ether 52:54:00:6d:92:6a  txqueuelen 1000  (Ethernet)
RX packets 5036  bytes 730778 (713.6 KiB)
RX errors 0  dropped 12  overruns 0  frame 0
TX packets 15715  bytes 930045 (908.2 KiB)
TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0

inet6 fe80::5054:ff:fe5e:9644  prefixlen 64  scopeid 0x20<link>
ether 52:54:00:5e:96:44  txqueuelen 1000  (Ethernet)
RX packets 1828396  bytes 283908183 (270.7 MiB)
RX errors 0  dropped 13  overruns 0  frame 0
TX packets 1839312  bytes 282429736 (269.3 MiB)
TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0

lo: flags=73<UP,LOOPBACK,RUNNING>  mtu 65536
inet6 ::1  prefixlen 128  scopeid 0x10<host>
loop  txqueuelen 0  (Local Loopback)
RX packets 869067  bytes 69567890 (66.3 MiB)
RX errors 0  dropped 0  overruns 0  frame 0
TX packets 869067  bytes 69567890 (66.3 MiB)
TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0

[root@hacontroller1 ~(keystone_admin)]# ping -c 3  10.10.10.1
PING 10.10.10.1 (10.10.10.1) 56(84) bytes of data.
64 bytes from 10.10.10.1: icmp_seq=1 ttl=64 time=2.04 ms
64 bytes from 10.10.10.1: icmp_seq=2 ttl=64 time=0.103 ms
64 bytes from 10.10.10.1: icmp_seq=3 ttl=64 time=0.118 ms

— 10.10.10.1 ping statistics —

3 packets transmitted, 3 received, 0% packet loss, time 2001ms

rtt min/avg/max/mdev = 0.103/0.754/2.043/0.911 ms

Both the mgmt and external networks are emulated by corresponding Libvirt networks
on the F23 virtualization server. Four VMs in total have been set up, 3 of them for Controller nodes and one for Compute (4 VCPUS, 4 GB RAM).

[root@fedora23wks ~]# cat openstackvms.xml  (for the eth1's)

<network>
<name>openstackvms</name>
<uuid>d0e9964a-f91a-40c0-b769-a609aee41bf2</uuid>
<forward mode='nat'>
<nat>
<port start='1024' end='65535'/>
</nat>
</forward>
<bridge name='virbr1' stp='on' delay='0'/>
<ip address='192.169.142.1' netmask='255.255.255.0'>
<dhcp>
<range start='192.169.142.2' end='192.169.142.254'/>
</dhcp>
</ip>
</network>

[root@fedora23wks ~]# cat public.xml ( for external network provider )

<network>
<name>public</name>
<uuid>d0e9965b-f92c-40c1-b749-b609aed42cf2</uuid>
<forward mode='nat'>
<nat>
<port start='1024' end='65535'/>
</nat>
</forward>
<bridge name='virbr2' stp='on' delay='0'/>
<ip address='10.10.10.1' netmask='255.255.255.0'>
<dhcp>
<range start='10.10.10.2' end='10.10.10.254'/>
</dhcp>
</ip>
</network>

Only one file is a bit different on the Controller nodes: l3_agent.ini

[root@hacontroller1 neutron(keystone_demo)]# cat l3_agent.ini | grep -v ^# | grep -v ^\$

[DEFAULT]
interface_driver = neutron.agent.linux.interface.OVSInterfaceDriver
handle_internal_only_routers = True
send_arp_for_ha = 3
external_network_bridge =
gateway_external_network_id =
[AGENT]

*************************************************************************************
Due to the "UPDATE" posted at the top of this blog entry, in the meantime
a perfect solution is provided by the commit done on 11/14/2015,
right after a discussion on the RDO mailing list.
*************************************************************************************

One more step which I did (not sure it really has to be done
at this point in time): the IPs on the eth0 interfaces were disabled
just before step 3

1. Updated ifcfg-eth0 files on all Controllers
2. `service network restart` on all Controllers
3. `ovs-vsctl add-port br-eth0 eth0`on all Controllers
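A hedged sketch of steps 1-3 above, assuming the interface is eth0, the bridge is br-eth0, and the ifcfg file uses the usual CentOS 7 keys (adjust per controller; the exact ifcfg edits are not shown in the post):

```shell
# Step 1: drop the IP configuration from ifcfg-eth0 (assumed key names)
sed -i '/^IPADDR/d;/^PREFIX/d;/^NETMASK/d;/^GATEWAY/d' \
    /etc/sysconfig/network-scripts/ifcfg-eth0
# Step 2: restart legacy networking so eth0 comes up without an address
service network restart
# Step 3: enslave eth0 to the OVS bridge used by the flat external provider
ovs-vsctl add-port br-eth0 eth0
```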

*****************************************************************************************
Targeting just a POC (to get floating IPs accessible from the Fedora 23 virtualization host) resulted in the following Controllers cluster setup:
*****************************************************************************************

I installed only

**************************
UPDATE to official docs
**************************
export OS_REGION_NAME=regionOne
export OS_AUTH_URL=http://controller-vip.example.com:35357/v2.0/
export OS_SERVICE_ENDPOINT=http://controller-vip.example.com:35357/v2.0
export OS_SERVICE_TOKEN=2fbe298b385e132da335

Because Galera synchronous multi-master replication is running between the Controllers, commands like:

# su keystone -s /bin/sh -c "keystone-manage db_sync"
# su glance -s /bin/sh -c "glance-manage db_sync"
# su nova -s /bin/sh -c "nova-manage db sync"

are supposed to be run just once, from Controller node 1 for instance.

************************
Compute Node setup:-
*************************

Compute setup

**********************
On all nodes
**********************

[root@hacontroller1 neutron(keystone_demo)]# cat /etc/hosts
192.169.142.220 controller-vip.example.com controller-vip
192.169.142.221 hacontroller1.example.com hacontroller1
192.169.142.222 hacontroller2.example.com hacontroller2
192.169.142.223 hacontroller3.example.com hacontroller3
192.169.142.224 compute.example.con compute
127.0.0.1   localhost localhost.localdomain localhost4 localhost4.localdomain4
::1         localhost localhost.localdomain localhost6 localhost6.localdomain6

[root@hacontroller1 ~(keystone_admin)]# cat /etc/neutron/neutron.conf | grep -v ^\$| grep -v ^#

[DEFAULT]
bind_host = 192.169.142.22(X)
auth_strategy = keystone
notify_nova_on_port_status_changes = True
notify_nova_on_port_data_changes = True
core_plugin = neutron.plugins.ml2.plugin.Ml2Plugin
service_plugins = router,lbaas
router_scheduler_driver = neutron.scheduler.l3_agent_scheduler.ChanceScheduler
dhcp_agents_per_network = 2
api_workers = 2
rpc_workers = 2
l3_ha = True
min_l3_agents_per_router = 2
max_l3_agents_per_router = 2

[matchmaker_redis]
[matchmaker_ring]
[quotas]
[agent]
[keystone_authtoken]
auth_uri = http://controller-vip.example.com:5000/
identity_uri = http://127.0.0.1:5000
auth_url = http://controller-vip.example.com:35357/
project_name = services
[database]
connection = mysql://neutron:neutrontest@controller-vip.example.com:3306/neutron
max_retries = -1
[nova]
nova_region_name = regionOne
project_domain_id = default
project_name = services
user_domain_id = default
auth_url = http://controller-vip.example.com:35357/
[oslo_concurrency]
[oslo_policy]
[oslo_messaging_amqp]
[oslo_messaging_qpid]
[oslo_messaging_rabbit]
rabbit_hosts = hacontroller1,hacontroller2,hacontroller3
rabbit_ha_queues = true
[qos]

[root@hacontroller1 haproxy(keystone_demo)]# cat haproxy.cfg
global
daemon
stats socket /var/lib/haproxy/stats
defaults
mode tcp
maxconn 10000
timeout connect 5s
timeout client 30s
timeout server 30s
listen monitor
bind 192.169.142.220:9300
mode http
monitor-uri /status
stats enable
stats realm Haproxy\ Statistics
stats auth root:redhat
stats refresh 5s
frontend vip-db
bind 192.169.142.220:3306
timeout client 90m
default_backend db-vms-galera
backend db-vms-galera
option httpchk
stick-table type ip size 1000
stick on dst
timeout server 90m
server rhos8-node1 192.169.142.221:3306 check inter 1s port 9200 backup on-marked-down shutdown-sessions
server rhos8-node2 192.169.142.222:3306 check inter 1s port 9200 backup on-marked-down shutdown-sessions
server rhos8-node3 192.169.142.223:3306 check inter 1s port 9200 backup on-marked-down shutdown-sessions
# Note the RabbitMQ entry is only needed for CloudForms compatibility
# and should be removed in the future
frontend vip-rabbitmq
option clitcpka
bind 192.169.142.220:5672
timeout client 900m
default_backend rabbitmq-vms
backend rabbitmq-vms
option srvtcpka
balance roundrobin
timeout server 900m
server rhos8-node1 192.169.142.221:5672 check inter 1s
server rhos8-node2 192.169.142.222:5672 check inter 1s
server rhos8-node3 192.169.142.223:5672 check inter 1s
frontend vip-keystone-admin
bind 192.169.142.220:35357
timeout client 600s
default_backend keystone-admin-vms
backend keystone-admin-vms
balance roundrobin
timeout server 600s
server rhos8-node1 192.169.142.221:35357 check inter 1s on-marked-down shutdown-sessions
server rhos8-node2 192.169.142.222:35357 check inter 1s on-marked-down shutdown-sessions
server rhos8-node3 192.169.142.223:35357 check inter 1s on-marked-down shutdown-sessions
frontend vip-keystone-public
bind 192.169.142.220:5000
default_backend keystone-public-vms
timeout client 600s
backend keystone-public-vms
balance roundrobin
timeout server 600s
server rhos8-node1 192.169.142.221:5000 check inter 1s on-marked-down shutdown-sessions
server rhos8-node2 192.169.142.222:5000 check inter 1s on-marked-down shutdown-sessions
server rhos8-node3 192.169.142.223:5000 check inter 1s on-marked-down shutdown-sessions
frontend vip-glance-api
bind 192.169.142.220:9191
default_backend glance-api-vms
backend glance-api-vms
balance roundrobin
server rhos8-node1 192.169.142.221:9191 check inter 1s
server rhos8-node2 192.169.142.222:9191 check inter 1s
server rhos8-node3 192.169.142.223:9191 check inter 1s
frontend vip-glance-registry
bind 192.169.142.220:9292
default_backend glance-registry-vms
backend glance-registry-vms
balance roundrobin
server rhos8-node1 192.169.142.221:9292 check inter 1s
server rhos8-node2 192.169.142.222:9292 check inter 1s
server rhos8-node3 192.169.142.223:9292 check inter 1s
frontend vip-cinder
bind 192.169.142.220:8776
default_backend cinder-vms
backend cinder-vms
balance roundrobin
server rhos8-node1 192.169.142.221:8776 check inter 1s
server rhos8-node2 192.169.142.222:8776 check inter 1s
server rhos8-node3 192.169.142.223:8776 check inter 1s
frontend vip-swift
bind 192.169.142.220:8080
default_backend swift-vms
backend swift-vms
balance roundrobin
server rhos8-node1 192.169.142.221:8080 check inter 1s
server rhos8-node2 192.169.142.222:8080 check inter 1s
server rhos8-node3 192.169.142.223:8080 check inter 1s
frontend vip-neutron
bind 192.169.142.220:9696
default_backend neutron-vms
backend neutron-vms
balance roundrobin
server rhos8-node1 192.169.142.221:9696 check inter 1s
server rhos8-node2 192.169.142.222:9696 check inter 1s
server rhos8-node3 192.169.142.223:9696 check inter 1s
frontend vip-nova-vnc-novncproxy
bind 192.169.142.220:6080
default_backend nova-vnc-novncproxy-vms
backend nova-vnc-novncproxy-vms
balance roundrobin
timeout tunnel 1h
server rhos8-node1 192.169.142.221:6080 check inter 1s
server rhos8-node2 192.169.142.222:6080 check inter 1s
server rhos8-node3 192.169.142.223:6080 check inter 1s
frontend vip-nova-metadata
bind 192.169.142.220:8775
default_backend nova-metadata-vms
backend nova-metadata-vms
balance roundrobin
server rhos8-node1 192.169.142.221:8775 check inter 1s
server rhos8-node2 192.169.142.222:8775 check inter 1s
server rhos8-node3 192.169.142.223:8775 check inter 1s
frontend vip-nova-api
bind 192.169.142.220:8774
default_backend nova-api-vms
backend nova-api-vms
balance roundrobin
server rhos8-node1 192.169.142.221:8774 check inter 1s
server rhos8-node2 192.169.142.222:8774 check inter 1s
server rhos8-node3 192.169.142.223:8774 check inter 1s
frontend vip-horizon
bind 192.169.142.220:80
timeout client 180s
default_backend horizon-vms
backend horizon-vms
balance roundrobin
timeout server 180s
mode http
server rhos8-node1 192.169.142.221:80 check inter 1s cookie rhos8-horizon1 on-marked-down shutdown-sessions
server rhos8-node2 192.169.142.222:80 check inter 1s cookie rhos8-horizon2 on-marked-down shutdown-sessions
server rhos8-node3 192.169.142.223:80 check inter 1s cookie rhos8-horizon3 on-marked-down shutdown-sessions
frontend vip-heat-cfn
bind 192.169.142.220:8000
default_backend heat-cfn-vms
backend heat-cfn-vms
balance roundrobin
server rhos8-node1 192.169.142.221:8000 check inter 1s
server rhos8-node2 192.169.142.222:8000 check inter 1s
server rhos8-node3 192.169.142.223:8000 check inter 1s
frontend vip-heat-cloudw
bind 192.169.142.220:8003
default_backend heat-cloudw-vms
backend heat-cloudw-vms
balance roundrobin
server rhos8-node1 192.169.142.221:8003 check inter 1s
server rhos8-node2 192.169.142.222:8003 check inter 1s
server rhos8-node3 192.169.142.223:8003 check inter 1s
frontend vip-heat-srv
bind 192.169.142.220:8004
default_backend heat-srv-vms
backend heat-srv-vms
balance roundrobin
server rhos8-node1 192.169.142.221:8004 check inter 1s
server rhos8-node2 192.169.142.222:8004 check inter 1s
server rhos8-node3 192.169.142.223:8004 check inter 1s
frontend vip-ceilometer
bind 192.169.142.220:8777
timeout client 90s
default_backend ceilometer-vms
backend ceilometer-vms
balance roundrobin
timeout server 90s
server rhos8-node1 192.169.142.221:8777 check inter 1s
server rhos8-node2 192.169.142.222:8777 check inter 1s
server rhos8-node3 192.169.142.223:8777 check inter 1s
frontend vip-sahara
bind 192.169.142.220:8386
default_backend sahara-vms
backend sahara-vms
balance roundrobin
server rhos8-node1 192.169.142.221:8386 check inter 1s
server rhos8-node2 192.169.142.222:8386 check inter 1s
server rhos8-node3 192.169.142.223:8386 check inter 1s
frontend vip-trove
bind 192.169.142.220:8779
default_backend trove-vms
backend trove-vms
balance roundrobin
server rhos8-node1 192.169.142.221:8779 check inter 1s
server rhos8-node2 192.169.142.222:8779 check inter 1s
server rhos8-node3 192.169.142.223:8779 check inter 1s

[root@hacontroller1 ~(keystone_demo)]# cat /etc/my.cnf.d/galera.cnf
[mysqld]
skip-name-resolve=1
binlog_format=ROW
default-storage-engine=innodb
innodb_autoinc_lock_mode=2
innodb_locks_unsafe_for_binlog=1
max_connections=8192
query_cache_size=0
query_cache_type=0
wsrep_provider=/usr/lib64/galera/libgalera_smm.so
wsrep_cluster_name="galera_cluster"
wsrep_certify_nonPK=1
wsrep_max_ws_rows=131072
wsrep_max_ws_size=1073741824
wsrep_debug=0
wsrep_convert_LOCK_to_trx=0
wsrep_retry_autocommit=1
wsrep_auto_increment_control=1
wsrep_drupal_282555_workaround=0
wsrep_notify_cmd=
wsrep_sst_method=rsync
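After bootstrapping the cluster, a quick sanity check is to read `wsrep_cluster_size` from the wsrep status. A minimal sketch, assuming a 3-node cluster and the `mysql` client available on the controller; the `cluster_size` helper and the sample line are illustrative:

```shell
#!/bin/sh
# Illustrative helper: pull wsrep_cluster_size out of `SHOW STATUS` output.
cluster_size() {
  printf '%s\n' "$1" | awk '/wsrep_cluster_size/ {print $2}'
}

# Sample line in the format printed by `mysql -N -e "SHOW STATUS LIKE 'wsrep_cluster_size'"`.
sample='wsrep_cluster_size	3'
echo "cluster size: $(cluster_size "$sample")"
# On a live controller: cluster_size "$(mysql -N -e "SHOW STATUS LIKE 'wsrep_cluster_size'")"
```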

[root@hacontroller1 ~(keystone_demo)]# cat /etc/keepalived/keepalived.conf
vrrp_script chk_haproxy {
script "/usr/bin/killall -0 haproxy"
interval 2
}
vrrp_instance VI_PUBLIC {
interface eth1
state BACKUP
virtual_router_id 52
priority 101
virtual_ipaddress {
192.169.142.220 dev eth1
}
track_script {
chk_haproxy
}
# Avoid failback
nopreempt
}
vrrp_sync_group VG1
group {
VI_PUBLIC
}
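With this configuration the VIP 192.169.142.220 floats between the controllers, so it is handy to check which node currently holds it. A minimal sketch; the `holds_vip` helper and the sample `ip` output line are illustrative, the live invocation is in the trailing comment:

```shell
#!/bin/sh
# Illustrative check: does this node hold the keepalived VIP right now?
holds_vip() {
  # $1 = output of `ip -o -4 addr show eth1`, $2 = VIP address
  printf '%s\n' "$1" | grep -q "inet $2"
}

sample='2: eth1    inet 192.169.142.220/32 scope global eth1'
if holds_vip "$sample" 192.169.142.220; then echo MASTER; else echo BACKUP; fi
# Live: holds_vip "$(ip -o -4 addr show eth1)" 192.169.142.220
```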

*************************************************************************
The most difficult  procedure is re-syncing Galera Mariadb cluster
*************************************************************************

https://github.com/beekhof/osp-ha-deploy/blob/master/keepalived/galera-bootstrap.md

Nova services start without waiting for the Galera databases to get in sync.
Even after the sync is done, systemctl reports the services as up and running regardless.
A database reconnect via `openstack-service restart nova` is therefore required on every Controller. The most likely reason for VMs failing to access the Nova metadata server at boot is a failure to start the neutron-l3-agent service on a Controller: by the classical design, VMs access metadata via neutron-ns-metadata-proxy running in the qrouter namespace. The neutron-l3-agents usually start with no problems; sometimes they just need to be restarted.
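The per-controller refresh described above can be scripted. A dry-run sketch, assuming the hacontroller1..3 hostnames of this deployment; `refresh_node` only echoes the ssh commands (drop the echo to actually run them):

```shell
#!/bin/sh
# Dry-run sketch of the post-resync refresh on every controller.
refresh_node() {
  echo "ssh $1 \"openstack-service restart nova\""
  echo "ssh $1 \"systemctl restart neutron-l3-agent\""
}

for node in hacontroller1 hacontroller2 hacontroller3; do
  refresh_node "$node"
done
```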

Runtime snapshots. Keepalived status on Controller nodes

HA Neutron router belonging to tenant demo created via Neutron CLI

***********************************************************************

At this point hacontroller1 goes down. On hacontroller2 run :-

***********************************************************************

+--------------------------------------+---------------------------+----------------+-------+----------+
| id                                   | host                      | admin_state_up | alive | ha_state |
+--------------------------------------+---------------------------+----------------+-------+----------+
| a03409d2-fbe9-492c-a954-e1bdf7627491 | hacontroller2.example.com | True           | 🙂   | active   |
| 0d6e658a-e796-4cff-962f-06e455fce02f | hacontroller1.example.com | True           | xxx   | active   |
+--------------------------------------+---------------------------+----------------+-------+----------+

***********************************************************************

At this point hacontroller2 goes down. hacontroller1 goes up :-

***********************************************************************

Nova Services status on all Controllers

Neutron Services status on all Controllers

Compute Node status

******************************************************************************
Cloud VM (L3) at runtime . Accessibility from F23 Virtualization Host,
running HA 3  Nodes Controller and Compute Node VMs (L2)
******************************************************************************

[root@fedora23wks ~]# ping  10.10.10.103

PING 10.10.10.103 (10.10.10.103) 56(84) bytes of data.
64 bytes from 10.10.10.103: icmp_seq=1 ttl=63 time=1.14 ms
64 bytes from 10.10.10.103: icmp_seq=2 ttl=63 time=0.813 ms
64 bytes from 10.10.10.103: icmp_seq=3 ttl=63 time=0.636 ms
64 bytes from 10.10.10.103: icmp_seq=4 ttl=63 time=0.778 ms
64 bytes from 10.10.10.103: icmp_seq=5 ttl=63 time=0.493 ms
^C

— 10.10.10.103 ping statistics —

5 packets transmitted, 5 received, 0% packet loss, time 4001ms

rtt min/avg/max/mdev = 0.493/0.773/1.146/0.218 ms

[root@fedora23wks ~]# ssh -i oskey1.priv fedora@10.10.10.103
Last login: Tue Nov 17 09:02:30 2015
[fedora@vf23dev ~]\$ uname -a
Linux vf23dev.novalocal 4.2.5-300.fc23.x86_64 #1 SMP Tue Oct 27 04:29:56 UTC 2015 x86_64 x86_64 x86_64 GNU/Linux

********************************************************************************
Verifying neutron workflow on the 3 node controller that has been built via the patch:-
********************************************************************************

n_tables:254, n_buffers:256
capabilities: FLOW_STATS TABLE_STATS PORT_STATS QUEUE_STATS ARP_MATCH_IP
actions: output enqueue set_vlan_vid set_vlan_pcp strip_vlan mod_dl_src mod_dl_dst mod_nw_src mod_nw_dst mod_nw_tos mod_tp_src mod_tp_dst
config:     0
state:      0
speed: 0 Mbps now, 0 Mbps max
config:     0
state:      0
speed: 0 Mbps now, 0 Mbps max
config:     0
state:      0
speed: 0 Mbps now, 0 Mbps max

cookie=0x0, duration=15577.057s, table=0, n_packets=50441, n_bytes=3262529, idle_age=2, priority=4,in_port=2,dl_vlan=3 actions=strip_vlan,NORMAL
cookie=0x0, duration=15765.938s, table=0, n_packets=31225, n_bytes=1751795, idle_age=0, priority=2,in_port=2 actions=drop
cookie=0x0, duration=15765.974s, table=0, n_packets=39982, n_bytes=42838752, idle_age=1, priority=0 actions=NORMAL

Check `ovs-vsctl show`

Bridge br-int
fail_mode: secure
Port “tapc8488877-45”
tag: 4
Interface “tapc8488877-45”
type: internal
Port br-int
Interface br-int
type: internal
Port patch-tun
Interface patch-tun
type: patch
options: {peer=patch-int}
Port “tap14aa6eeb-70”
tag: 2
Interface “tap14aa6eeb-70”
type: internal
Port “qr-8f5b3f4a-45”
tag: 2
Interface “qr-8f5b3f4a-45”
type: internal
Port "int-br-eth0"
Interface "int-br-eth0"
type: patch
options: {peer="phy-br-eth0"}
Port “qg-34893aa0-17”
tag: 3

[root@hacontroller2 ~(keystone_demo)]# ovs-ofctl show  br-eth0
n_tables:254, n_buffers:256
capabilities: FLOW_STATS TABLE_STATS PORT_STATS QUEUE_STATS ARP_MATCH_IP
actions: output enqueue set_vlan_vid set_vlan_pcp strip_vlan mod_dl_src mod_dl_dst mod_nw_src mod_nw_dst mod_nw_tos mod_tp_src mod_tp_dst
config:     0
state:      0
speed: 0 Mbps now, 0 Mbps max
config:     0
state:      0
speed: 0 Mbps now, 0 Mbps max
config:     0
state:      0
speed: 0 Mbps now, 0 Mbps max

[root@hacontroller2 ~(keystone_demo)]# ovs-ofctl dump-flows  br-eth0
cookie=0x0, duration=15810.746s, table=0, n_packets=0, n_bytes=0, idle_age=15810, priority=4,in_port=2,dl_vlan=2 actions=strip_vlan,NORMAL
cookie=0x0, duration=16105.662s, table=0, n_packets=31849, n_bytes=1786827, idle_age=0, priority=2,in_port=2 actions=drop
cookie=0x0, duration=16105.696s, table=0, n_packets=39762, n_bytes=2100763, idle_age=0, priority=0 actions=NORMAL

Check `ovs-vsctl show`
Bridge br-int
fail_mode: secure
Port “qg-34893aa0-17”
tag: 2
Interface “qg-34893aa0-17”
type: internal

RDO Liberty Set up for three Nodes (Controller+Network+Compute) ML2&OVS&VXLAN on CentOS 7.1

October 22, 2015

In addition to the comprehensive OpenStack services, libraries and clients, this release also provides Packstack, a simple installer for proof-of-concept installations as small as a single all-in-one box, and RDO Manager, an OpenStack deployment and management tool for production environments based on the OpenStack TripleO project.

In the post below I intend to test packstack on Liberty for a classic three node deployment. If packstack succeeds, post-installation actions like VRRP or DVR setups might be committed as well. One of the real problems for packstack is HA Controller setup; here RDO Manager is supposed to gain a significant advantage, replacing a lot of manual configuration with a comprehensive CLI.

Below is a brief instruction for a three node deployment test (Controller && Network && Compute) of RDO Liberty, performed on a Fedora 22 host with KVM/Libvirt hypervisor (16 GB RAM, Intel Core i7-4790 Haswell CPU, ASUS Z97-P). Three VMs (4 GB RAM, 4 VCPUS) have been set up: the Controller VM with one VNIC (management subnet), the Network Node VM with three VNICs (management, vteps and external subnets), and the Compute Node VM with two VNICs (management and vteps subnets).

SELINUX stays in enforcing mode.

I avoid using the default libvirt subnet 192.168.122.0/24 for any purpose related to the VMs serving as RDO Liberty nodes; for some reason it causes network congestion when forwarding packets to the Internet and back.

Three Libvirt networks created

# cat openstackvms.xml
<network>
<name>openstackvms</name>
<uuid>d0e9964a-f91a-40c0-b769-a609aee41bf2</uuid>
<forward mode='nat'>
<nat>
<port start='1024' end='65535'/>
</nat>
</forward>
<bridge name='virbr1' stp='on' delay='0' />
<ip address='192.169.142.1' netmask='255.255.255.0'>
<dhcp>
<range start='192.169.142.2' end='192.169.142.254' />
</dhcp>
</ip>
</network>

[root@vfedora22wks ~]# cat public.xml
<network>
<name>public</name>
<uuid>d1e9965b-f92c-40c1-b749-b609aed42cf2</uuid>
<forward mode='nat'>
<nat>
<port start='1024' end='65535'/>
</nat>
</forward>
<bridge name='virbr2' stp='on' delay='0' />
<ip address='172.24.4.225' netmask='255.255.255.240'>
<dhcp>
<range start='172.24.4.226' end='172.24.4.238' />
</dhcp>
</ip>
</network>

[root@vfedora22wks ~]# cat vteps.xml
<network>
<name>vteps</name>
<uuid>d2e9965b-f92c-40c1-b749-b609aed42cf2</uuid>
<forward mode='nat'>
<nat>
<port start='1024' end='65535'/>
</nat>
</forward>
<bridge name='virbr3' stp='on' delay='0' />
<ip address='10.0.0.1' netmask='255.255.255.0'>
<dhcp>
<range start='10.0.0.1' end='10.0.0.254' />
</dhcp>
</ip>
</network>
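The three networks can then be defined and started in one pass. A sketch: `VIRSH` defaults to a dry run that just prints the commands; set `VIRSH=virsh` on the virtualization host (with the XML files in the current directory) to execute them for real:

```shell
#!/bin/sh
# Define, start and autostart all three libvirt networks.
# VIRSH="echo virsh" makes this a dry run that only prints the commands.
VIRSH="${VIRSH:-echo virsh}"
for net in openstackvms public vteps; do
  $VIRSH net-define "${net}.xml"
  $VIRSH net-start "$net"
  $VIRSH net-autostart "$net"
done
```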

# virsh net-list
Name                  State     Autostart    Persistent
--------------------------------------------------------
default               active    yes          yes
openstackvms          active    yes          yes
public                active    yes          yes
vteps                 active    yes          yes

*********************************************************************************
1. The first Libvirt subnet "openstackvms" serves as the management network.
All three VMs are attached to this subnet.
**********************************************************************************
2. The second Libvirt subnet "public" simulates the external network; the Network Node is attached to it. Later on, the "eth2" interface (which belongs to "public") is converted into an OVS port of br-ex on the Network Node. Via bridge virbr2 (172.24.4.225) this subnet gives VMs running on the Compute Node access to the Internet, since it matches the external network 172.24.4.224/28 created by the packstack installation.
***********************************************************************************
3. The third Libvirt subnet "vteps" simulates the VTEP endpoints. The Network and Compute Node VMs are attached to this subnet.
***********************************************************************************

*********************
*********************

[general]
CONFIG_SSH_KEY=/root/.ssh/id_rsa.pub
CONFIG_GLANCE_INSTALL=y
CONFIG_CINDER_INSTALL=y
CONFIG_NOVA_INSTALL=y
CONFIG_NEUTRON_INSTALL=y
CONFIG_HORIZON_INSTALL=y
CONFIG_SWIFT_INSTALL=y
CONFIG_CEILOMETER_INSTALL=y
CONFIG_HEAT_INSTALL=n
CONFIG_CLIENT_INSTALL=y
CONFIG_NTP_SERVERS=
CONFIG_NAGIOS_INSTALL=y
EXCLUDE_SERVERS=
CONFIG_DEBUG_MODE=n
CONFIG_CONTROLLER_HOST=192.169.142.127
CONFIG_COMPUTE_HOSTS=192.169.142.137
# In case of two Compute nodes
# CONFIG_COMPUTE_HOSTS=192.169.142.137,192.169.142.157
CONFIG_NETWORK_HOSTS=192.169.142.147
CONFIG_VMWARE_BACKEND=n
CONFIG_UNSUPPORTED=n
CONFIG_VCENTER_HOST=
CONFIG_VCENTER_USER=
CONFIG_VCENTER_CLUSTER_NAME=
CONFIG_STORAGE_HOST=192.169.142.127
CONFIG_USE_EPEL=y
CONFIG_REPO=
CONFIG_RH_USER=
CONFIG_SATELLITE_URL=
CONFIG_RH_PW=
CONFIG_RH_OPTIONAL=y
CONFIG_RH_PROXY=
CONFIG_RH_PROXY_PORT=
CONFIG_RH_PROXY_USER=
CONFIG_RH_PROXY_PW=
CONFIG_SATELLITE_USER=
CONFIG_SATELLITE_PW=
CONFIG_SATELLITE_AKEY=
CONFIG_SATELLITE_CACERT=
CONFIG_SATELLITE_PROFILE=
CONFIG_SATELLITE_FLAGS=
CONFIG_SATELLITE_PROXY=
CONFIG_SATELLITE_PROXY_USER=
CONFIG_SATELLITE_PROXY_PW=
CONFIG_AMQP_BACKEND=rabbitmq
CONFIG_AMQP_HOST=192.169.142.127
CONFIG_AMQP_ENABLE_SSL=n
CONFIG_AMQP_ENABLE_AUTH=n
CONFIG_AMQP_NSS_CERTDB_PW=PW_PLACEHOLDER
CONFIG_AMQP_SSL_PORT=5671
CONFIG_AMQP_SSL_CERT_FILE=/etc/pki/tls/certs/amqp_selfcert.pem
CONFIG_AMQP_SSL_KEY_FILE=/etc/pki/tls/private/amqp_selfkey.pem
CONFIG_AMQP_SSL_SELF_SIGNED=y
CONFIG_AMQP_AUTH_USER=amqp_user
CONFIG_KEYSTONE_DB_PW=abcae16b785245c3
CONFIG_KEYSTONE_REGION=RegionOne
CONFIG_KEYSTONE_DEMO_PW=bf737b785cfa4398
CONFIG_KEYSTONE_TOKEN_FORMAT=UUID
CONFIG_KEYSTONE_SERVICE_NAME=httpd
CONFIG_GLANCE_DB_PW=41264fc52ffd4fe8
CONFIG_GLANCE_KS_PW=f6a9398960534797
CONFIG_GLANCE_BACKEND=file
CONFIG_CINDER_DB_PW=5ac08c6d09ba4b69
CONFIG_CINDER_KS_PW=c8cb1ecb8c2b4f6f
CONFIG_CINDER_BACKEND=lvm
CONFIG_CINDER_VOLUMES_CREATE=y
CONFIG_CINDER_VOLUMES_SIZE=10G
CONFIG_CINDER_GLUSTER_MOUNTS=
CONFIG_CINDER_NFS_MOUNTS=
CONFIG_CINDER_NETAPP_HOSTNAME=
CONFIG_CINDER_NETAPP_SERVER_PORT=80
CONFIG_CINDER_NETAPP_STORAGE_FAMILY=ontap_cluster
CONFIG_CINDER_NETAPP_TRANSPORT_TYPE=http
CONFIG_CINDER_NETAPP_STORAGE_PROTOCOL=nfs
CONFIG_CINDER_NETAPP_SIZE_MULTIPLIER=1.0
CONFIG_CINDER_NETAPP_EXPIRY_THRES_MINUTES=720
CONFIG_CINDER_NETAPP_THRES_AVL_SIZE_PERC_START=20
CONFIG_CINDER_NETAPP_THRES_AVL_SIZE_PERC_STOP=60
CONFIG_CINDER_NETAPP_NFS_SHARES_CONFIG=
CONFIG_CINDER_NETAPP_VOLUME_LIST=
CONFIG_CINDER_NETAPP_VFILER=
CONFIG_CINDER_NETAPP_VSERVER=
CONFIG_CINDER_NETAPP_CONTROLLER_IPS=
CONFIG_CINDER_NETAPP_WEBSERVICE_PATH=/devmgr/v2
CONFIG_CINDER_NETAPP_STORAGE_POOLS=
CONFIG_NOVA_DB_PW=1e1b5aeeeaf342a8
CONFIG_NOVA_KS_PW=d9583177a2444f06
CONFIG_NOVA_SCHED_CPU_ALLOC_RATIO=16.0
CONFIG_NOVA_SCHED_RAM_ALLOC_RATIO=1.5
CONFIG_NOVA_COMPUTE_MIGRATE_PROTOCOL=tcp
CONFIG_NOVA_COMPUTE_PRIVIF=eth1
CONFIG_NOVA_NETWORK_MANAGER=nova.network.manager.FlatDHCPManager
CONFIG_NOVA_NETWORK_PUBIF=eth0
CONFIG_NOVA_NETWORK_PRIVIF=eth1
CONFIG_NOVA_NETWORK_FIXEDRANGE=192.168.32.0/22
CONFIG_NOVA_NETWORK_FLOATRANGE=10.3.4.0/22
CONFIG_NOVA_NETWORK_DEFAULTFLOATINGPOOL=nova
CONFIG_NOVA_NETWORK_AUTOASSIGNFLOATINGIP=n
CONFIG_NOVA_NETWORK_VLAN_START=100
CONFIG_NOVA_NETWORK_NUMBER=1
CONFIG_NOVA_NETWORK_SIZE=255
CONFIG_NEUTRON_KS_PW=808e36e154bd4cee
CONFIG_NEUTRON_DB_PW=0e2b927a21b44737
CONFIG_NEUTRON_L3_EXT_BRIDGE=br-ex
CONFIG_NEUTRON_L2_PLUGIN=ml2
CONFIG_LBAAS_INSTALL=n
CONFIG_NEUTRON_METERING_AGENT_INSTALL=n
CONFIG_NEUTRON_FWAAS=n
CONFIG_NEUTRON_ML2_TYPE_DRIVERS=vxlan
CONFIG_NEUTRON_ML2_TENANT_NETWORK_TYPES=vxlan
CONFIG_NEUTRON_ML2_MECHANISM_DRIVERS=openvswitch
CONFIG_NEUTRON_ML2_FLAT_NETWORKS=*
CONFIG_NEUTRON_ML2_VLAN_RANGES=
CONFIG_NEUTRON_ML2_TUNNEL_ID_RANGES=1001:2000
CONFIG_NEUTRON_ML2_VXLAN_GROUP=239.1.1.2
CONFIG_NEUTRON_ML2_VNI_RANGES=1001:2000
CONFIG_NEUTRON_L2_AGENT=openvswitch
CONFIG_NEUTRON_LB_TENANT_NETWORK_TYPE=local
CONFIG_NEUTRON_LB_VLAN_RANGES=
CONFIG_NEUTRON_LB_INTERFACE_MAPPINGS=
CONFIG_NEUTRON_OVS_TENANT_NETWORK_TYPE=vxlan
CONFIG_NEUTRON_OVS_VLAN_RANGES=
CONFIG_NEUTRON_OVS_BRIDGE_MAPPINGS=physnet1:br-ex
CONFIG_NEUTRON_OVS_BRIDGE_IFACES=
CONFIG_NEUTRON_OVS_TUNNEL_RANGES=1001:2000
# This is VXLAN tunnel endpoint interface
# It should be assigned IP from vteps network
# before running packstack
CONFIG_NEUTRON_OVS_TUNNEL_IF=eth1
CONFIG_NEUTRON_OVS_VXLAN_UDP_PORT=4789
CONFIG_HORIZON_SSL=n
CONFIG_SSL_CERT=
CONFIG_SSL_KEY=
CONFIG_SSL_CACHAIN=
CONFIG_SWIFT_KS_PW=8f75bfd461234c30
CONFIG_SWIFT_STORAGES=
CONFIG_SWIFT_STORAGE_ZONES=1
CONFIG_SWIFT_STORAGE_REPLICAS=1
CONFIG_SWIFT_STORAGE_FSTYPE=ext4
CONFIG_SWIFT_HASH=a60aacbedde7429a
CONFIG_SWIFT_STORAGE_SIZE=2G
CONFIG_PROVISION_DEMO=y
CONFIG_PROVISION_TEMPEST=n
CONFIG_PROVISION_TEMPEST_USER=
CONFIG_PROVISION_TEMPEST_USER_PW=44faa4ebc3da4459
CONFIG_PROVISION_DEMO_FLOATRANGE=172.24.4.224/28
CONFIG_PROVISION_TEMPEST_REPO_URI=https://github.com/openstack/tempest.git
CONFIG_PROVISION_TEMPEST_REPO_REVISION=master
CONFIG_PROVISION_ALL_IN_ONE_OVS_BRIDGE=n
CONFIG_HEAT_DB_PW=PW_PLACEHOLDER
CONFIG_HEAT_AUTH_ENC_KEY=fc3fb7fee61e46b0
CONFIG_HEAT_KS_PW=PW_PLACEHOLDER
CONFIG_HEAT_CLOUDWATCH_INSTALL=n
CONFIG_HEAT_USING_TRUSTS=y
CONFIG_HEAT_CFN_INSTALL=n
CONFIG_HEAT_DOMAIN=heat
CONFIG_CEILOMETER_SECRET=19ae0e7430174349
CONFIG_CEILOMETER_KS_PW=337b08d4b3a44753
CONFIG_MONGODB_HOST=192.169.142.127
CONFIG_NAGIOS_PW=02f168ee8edd44e4
**************************************
At this point run on Controller:-
**************************************
Keep SELINUX=enforcing ( RDO Liberty is supposed to handle this)
# yum -y  install centos-release-openstack-liberty
# yum -y  install openstack-packstack
**********************************************************************************
Upon packstack completion, create the following files on the Network Node,
designed to match the external network created by the installer
**********************************************************************************

[root@ip-192-169-142-147 network-scripts]# cat ifcfg-br-ex
DEVICE="br-ex"
BOOTPROTO="static"
DNS1="83.221.202.254"
GATEWAY="172.24.4.225"
NM_CONTROLLED="no"
TYPE="OVSIntPort"
OVS_BRIDGE=br-ex
DEVICETYPE="ovs"
DEFROUTE="yes"
IPV4_FAILURE_FATAL="yes"
IPV6INIT=no

[root@ip-192-169-142-147 network-scripts]# cat ifcfg-eth2
DEVICE="eth2"
ONBOOT="yes"
TYPE="OVSPort"
DEVICETYPE="ovs"
OVS_BRIDGE=br-ex
NM_CONTROLLED=no
IPV6INIT=no

*************************************************
Next steps to be performed on the Network Node :-
*************************************************

# chkconfig network on
# systemctl stop NetworkManager
# systemctl disable NetworkManager
# service network restart

The OVS port should be eth2 (the third Ethernet interface on the Network Node).
Libvirt bridge virbr2 stands in for what in a real deployment would be your router
to the External network. OVS bridge br-ex should have an IP belonging to the External network.
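Once the network restart completes, it is worth confirming that eth2 really ended up as an OVS port of br-ex. A minimal sketch; the `is_port_of` helper and the sample port list are illustrative, the live command is in the trailing comment:

```shell
#!/bin/sh
# Illustrative check: is a given interface attached as an OVS port of a bridge?
is_port_of() {
  # $1 = output of `ovs-vsctl list-ports <bridge>`, $2 = port name
  printf '%s\n' "$1" | grep -qx "$2"
}

# Illustrative sample of `ovs-vsctl list-ports br-ex` output
sample='eth2
phy-br-ex
qg-1deeaf96-e8'
is_port_of "$sample" eth2 && echo 'eth2 is attached to br-ex'
# Live: is_port_of "$(ovs-vsctl list-ports br-ex)" eth2
```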

*******************
On Controller :-
*******************

[root@ip-192-169-142-127 ~(keystone_admin)]# netstat -lntp |  grep 35357
tcp6       0      0 :::35357                :::*                    LISTEN      7047/httpd

[root@ip-192-169-142-127 ~(keystone_admin)]# ps -ef | grep 7047
root      7047     1  0 11:22 ?        00:00:00 /usr/sbin/httpd -DFOREGROUND
keystone  7089  7047  0 11:22 ?        00:00:07 keystone-admin  -DFOREGROUND
keystone  7090  7047  0 11:22 ?        00:00:02 keystone-main   -DFOREGROUND
apache    7092  7047  0 11:22 ?        00:00:04 /usr/sbin/httpd -DFOREGROUND
apache    7093  7047  0 11:22 ?        00:00:04 /usr/sbin/httpd -DFOREGROUND
apache    7094  7047  0 11:22 ?        00:00:03 /usr/sbin/httpd -DFOREGROUND
apache    7095  7047  0 11:22 ?        00:00:00 /usr/sbin/httpd -DFOREGROUND
apache    7096  7047  0 11:22 ?        00:00:00 /usr/sbin/httpd -DFOREGROUND
apache    7097  7047  0 11:22 ?        00:00:00 /usr/sbin/httpd -DFOREGROUND
apache    7098  7047  0 11:22 ?        00:00:00 /usr/sbin/httpd -DFOREGROUND
apache    7099  7047  0 11:22 ?        00:00:00 /usr/sbin/httpd -DFOREGROUND
apache    7100  7047  0 11:22 ?        00:00:00 /usr/sbin/httpd -DFOREGROUND
apache    7101  7047  0 11:22 ?        00:00:00 /usr/sbin/httpd -DFOREGROUND
apache    7102  7047  0 11:22 ?        00:00:00 /usr/sbin/httpd -DFOREGROUND
root     28963 17739  0 12:51 pts/1    00:00:00 grep –color=auto 7047
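The two listings above can be combined into one check that the :35357 listener belongs to httpd. A sketch; the `listener_proc` helper is illustrative, and the sample line is copied from the session above:

```shell
#!/bin/sh
# Illustrative helper: find the owning process of the :35357 listener
# in `netstat -lntp` output.
listener_proc() {
  printf '%s\n' "$1" | awk '$4 ~ /:35357$/ {print $NF}'
}

sample='tcp6       0      0 :::35357                :::*                    LISTEN      7047/httpd'
echo "keystone admin API is served by: $(listener_proc "$sample")"
# Live: listener_proc "$(netstat -lntp)"
```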

********************
On Network Node
********************

+--------------------------------------+--------------------+----------------------------------------+-------+----------------+---------------------------+
| id                                   | agent_type         | host                                   | alive | admin_state_up | binary                    |
+--------------------------------------+--------------------+----------------------------------------+-------+----------------+---------------------------+
| 217fb0f5-8dd1-4361-aae7-cc9a7d18d6e4 | Open vSwitch agent | ip-192-169-142-147.ip.secureserver.net | 🙂   | True           | neutron-openvswitch-agent |
| 5dabfc17-db64-470c-9f01-8d2297d155f3 | L3 agent           | ip-192-169-142-147.ip.secureserver.net | 🙂   | True           | neutron-l3-agent          |
| 5e3c6e2f-3f6d-4ede-b058-bc1b317d4ee1 | Open vSwitch agent | ip-192-169-142-137.ip.secureserver.net | 🙂   | True           | neutron-openvswitch-agent |
| f0f02931-e7e6-4b01-8b87-46224cb71e6d | DHCP agent         | ip-192-169-142-147.ip.secureserver.net | 🙂   | True           | neutron-dhcp-agent        |
| f16a5d9d-55e6-47c3-b509-ca445d05d34d | Metadata agent     | ip-192-169-142-147.ip.secureserver.net | 🙂   | True           | neutron-metadata-agent    |
+--------------------------------------+--------------------+----------------------------------------+-------+----------------+---------------------------+

9221d1c1-008a-464a-ac26-1e0340407714
Bridge br-tun
fail_mode: secure
Port br-tun
Interface br-tun
type: internal
Port "vxlan-0a000089"
Interface "vxlan-0a000089"
type: vxlan
options: {df_default="true", in_key=flow, local_ip="10.0.0.147", out_key=flow, remote_ip="10.0.0.137"}
Port patch-int
Interface patch-int
type: patch
options: {peer=patch-tun}
Bridge br-ex
Port phy-br-ex
Interface phy-br-ex
type: patch
options: {peer=int-br-ex}
Port "eth2"
Interface "eth2"
Port "qg-1deeaf96-e8"
Interface "qg-1deeaf96-e8"
type: internal
Port br-ex
Interface br-ex
type: internal
Bridge br-int
fail_mode: secure
Port “qr-1909e3bb-fd”
tag: 2
Interface “qr-1909e3bb-fd”
type: internal
tag: 2
type: internal
Port patch-tun
Interface patch-tun
type: patch
options: {peer=patch-int}
Port br-int
Interface br-int
type: internal
Port int-br-ex
Interface int-br-ex
type: patch
options: {peer=phy-br-ex}
ovs_version: “2.4.0”

[root@ip-192-169-142-147 ~(keystone_admin)]# dmesg | grep promisc
[    2.233302] device ovs-system entered promiscuous mode
[    2.273206] device br-int entered promiscuous mode
[    2.274981] device qr-838ad1f3-7d entered promiscuous mode
[    2.276333] device tap0f21eab4-db entered promiscuous mode
[    2.312740] device br-tun entered promiscuous mode
[    2.314509] device qg-2b712b60-d0 entered promiscuous mode
[    2.315921] device br-ex entered promiscuous mode
[    2.316022] device eth2 entered promiscuous mode
[   10.704329] device qr-838ad1f3-7d left promiscuous mode
[   10.729045] device tap0f21eab4-db left promiscuous mode
[   10.761844] device qg-2b712b60-d0 left promiscuous mode
[  224.746399] device eth2 left promiscuous mode
[  232.173791] device eth2 entered promiscuous mode
[  232.978909] device tap0f21eab4-db entered promiscuous mode
[  233.690854] device qr-838ad1f3-7d entered promiscuous mode
[  233.895213] device qg-2b712b60-d0 entered promiscuous mode
[ 1253.611501] device qr-838ad1f3-7d left promiscuous mode
[ 1254.017129] device qg-2b712b60-d0 left promiscuous mode
[ 1404.697825] device tapfdf24cad-f8 entered promiscuous mode
[ 1421.812107] device qr-1909e3bb-fd entered promiscuous mode
[ 1422.045593] device qg-1deeaf96-e8 entered promiscuous mode
[ 6111.042488] device tap0f21eab4-db left promiscuous mode

[root@ip-192-169-142-147 ~(keystone_admin)]# ip netns exec qrouter-dd26c4ed-f757-416d-a772-64b503ffc497 ip route
default via 172.24.4.225 dev qg-1deeaf96-e8
50.0.0.0/24 dev qr-1909e3bb-fd  proto kernel  scope link  src 50.0.0.1
172.24.4.224/28 dev qg-1deeaf96-e8  proto kernel  scope link  src 172.24.4.227

[root@ip-192-169-142-147 ~(keystone_admin)]# ip netns exec qrouter-dd26c4ed-f757-416d-a772-64b503ffc497 ifconfig
lo: flags=73<UP,LOOPBACK,RUNNING>  mtu 65536
inet6 ::1  prefixlen 128  scopeid 0x10<host>
loop  txqueuelen 0  (Local Loopback)
RX packets 0  bytes 0 (0.0 B)
RX errors 0  dropped 0  overruns 0  frame 0
TX packets 0  bytes 0 (0.0 B)
TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0

inet6 fe80::f816:3eff:fe93:12de  prefixlen 64  scopeid 0x20<link>
ether fa:16:3e:93:12:de  txqueuelen 0  (Ethernet)
RX packets 864432  bytes 1185656986 (1.1 GiB)
RX errors 0  dropped 0  overruns 0  frame 0
TX packets 382639  bytes 29347929 (27.9 MiB)
TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0

inet6 fe80::f816:3eff:feae:d1e0  prefixlen 64  scopeid 0x20<link>
ether fa:16:3e:ae:d1:e0  txqueuelen 0  (Ethernet)
RX packets 382969  bytes 29386380 (28.0 MiB)
RX errors 0  dropped 0  overruns 0  frame 0
TX packets 864601  bytes 1185686714 (1.1 GiB)
TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0

lo: flags=73<UP,LOOPBACK,RUNNING>  mtu 65536
inet6 ::1  prefixlen 128  scopeid 0x10<host>
loop  txqueuelen 0  (Local Loopback)
RX packets 0  bytes 0 (0.0 B)
RX errors 0  dropped 0  overruns 0  frame 0
TX packets 0  bytes 0 (0.0 B)
TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0

inet6 fe80::f816:3eff:fe98:c66  prefixlen 64  scopeid 0x20<link>
ether fa:16:3e:98:0c:66  txqueuelen 0  (Ethernet)
RX packets 63  bytes 6445 (6.2 KiB)
RX errors 0  dropped 0  overruns 0  frame 0
TX packets 14  bytes 2508 (2.4 KiB)
TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0

[root@ip-192-169-142-147 ~(keystone_admin)]# ip netns exec qrouter-dd26c4ed-f757-416d-a772-64b503ffc497 ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN
inet 127.0.0.1/8 scope host lo
valid_lft forever preferred_lft forever
inet6 ::1/128 scope host
valid_lft forever preferred_lft forever

16: qr-1909e3bb-fd: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UNKNOWN
inet 50.0.0.1/24 brd 50.0.0.255 scope global qr-1909e3bb-fd
valid_lft forever preferred_lft forever
valid_lft forever preferred_lft forever

17: qg-1deeaf96-e8: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UNKNOWN
inet 172.24.4.227/28 brd 172.24.4.239 scope global qg-1deeaf96-e8
valid_lft forever preferred_lft forever
inet 172.24.4.229/32 brd 172.24.4.229 scope global qg-1deeaf96-e8
valid_lft forever preferred_lft forever
inet 172.24.4.230/32 brd 172.24.4.230 scope global qg-1deeaf96-e8
valid_lft forever preferred_lft forever
valid_lft forever preferred_lft forever

RDO Kilo DVR Deployment (Controller/Network)+Compute+Compute (ML2&OVS&VXLAN) on CentOS 7.1

September 30, 2015

1. Neutron DVR implements the fip-namespace on every Compute Node where VMs are running. Thus VMs with Floating IPs can forward traffic to the External Network without routing it via the Network Node (North-South routing).
2. Neutron DVR implements the L3 routers across the Compute Nodes, so that tenants' intra-VM communication occurs without involving the Network Node (East-West routing).
3. Neutron Distributed Virtual Router provides the legacy SNAT behavior for the default SNAT of all private VMs. The SNAT service is not distributed; it is centralized, and a dedicated service node hosts it.
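A quick way to confirm the DVR behavior on a Compute Node is to look for fip- namespaces in `ip netns`. A sketch; `count_ns` is an illustrative helper, the fip- UUID in the sample is made up, and the qrouter one is taken from the output earlier in this post:

```shell
#!/bin/sh
# Illustrative: count namespaces with a given prefix in `ip netns` output.
count_ns() {
  printf '%s\n' "$1" | grep -c "^$2"
}

sample='fip-a29d6a3e-0000-0000-0000-000000000000
qrouter-dd26c4ed-f757-416d-a772-64b503ffc497'
echo "fip namespaces: $(count_ns "$sample" fip-)"
# Live, on a Compute node: count_ns "$(ip netns)" fip-
```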

Setup configuration

– Controller node: Nova, Keystone, Cinder, Glance,

Neutron (using Open vSwitch plugin && VXLAN )

– (2x) Compute node: Nova (nova-compute),

Three CentOS 7.1 VMs (4 GB RAM, 4 VCPU, 2 VNICs) have been built for testing on a Fedora 22 KVM hypervisor. Two libvirt sub-nets were used: "openstackvms", emulating the External && Mgmt networks 192.169.142.0/24 with gateway virbr1 (192.169.142.1), and "vteps" 10.0.0.0/24, supporting the two VXLAN tunnels between the Controller and the Compute Nodes.

# cat openstackvms.xml
<network>
<name>openstackvms</name>
<uuid>d0e9964a-f91a-40c0-b769-a609aee41bf2</uuid>
<forward mode='nat'>
<nat>
<port start='1024' end='65535'/>
</nat>
</forward>
<bridge name='virbr1' stp='on' delay='0' />
<ip address='192.169.142.1' netmask='255.255.255.0'>
<dhcp>
<range start='192.169.142.2' end='192.169.142.254' />
</dhcp>
</ip>
</network>

# cat vteps.xml

<network>
<name>vteps</name>
<uuid>d0e9965b-f92c-40c1-b749-b609aed42cf2</uuid>
<forward mode='nat'>
<nat>
<port start='1024' end='65535'/>
</nat>
</forward>
<bridge name='virbr2' stp='on' delay='0' />
<ip address='10.0.0.1' netmask='255.255.255.0'>
<dhcp>
<range start='10.0.0.1' end='10.0.0.254' />
</dhcp>
</ip>
</network>

# virsh net-define openstackvms.xml
# virsh net-start openstackvms
# virsh net-autostart openstackvms

The second libvirt sub-net may be defined and started the same way.

ip-192-169-142-127.ip.secureserver.net – Controller/Network Node
ip-192-169-142-137.ip.secureserver.net – Compute Node
ip-192-169-142-147.ip.secureserver.net – Compute Node

[general]
CONFIG_SSH_KEY=/root/.ssh/id_rsa.pub
CONFIG_GLANCE_INSTALL=y
CONFIG_CINDER_INSTALL=y
CONFIG_NOVA_INSTALL=y
CONFIG_NEUTRON_INSTALL=y
CONFIG_HORIZON_INSTALL=y
CONFIG_SWIFT_INSTALL=y
CONFIG_CEILOMETER_INSTALL=y
CONFIG_HEAT_INSTALL=n
CONFIG_CLIENT_INSTALL=y
CONFIG_NTP_SERVERS=
CONFIG_NAGIOS_INSTALL=y
EXCLUDE_SERVERS=
CONFIG_DEBUG_MODE=n
CONFIG_CONTROLLER_HOST=192.169.142.127
CONFIG_COMPUTE_HOSTS=192.169.142.137,192.169.142.147
CONFIG_NETWORK_HOSTS=192.169.142.127
CONFIG_VMWARE_BACKEND=n
CONFIG_UNSUPPORTED=n
CONFIG_VCENTER_HOST=
CONFIG_VCENTER_USER=
CONFIG_VCENTER_CLUSTER_NAME=
CONFIG_STORAGE_HOST=192.169.142.127
CONFIG_USE_EPEL=y
CONFIG_REPO=
CONFIG_RH_USER=
CONFIG_SATELLITE_URL=
CONFIG_RH_PW=
CONFIG_RH_OPTIONAL=y
CONFIG_RH_PROXY=
CONFIG_RH_PROXY_PORT=
CONFIG_RH_PROXY_USER=
CONFIG_RH_PROXY_PW=
CONFIG_SATELLITE_USER=
CONFIG_SATELLITE_PW=
CONFIG_SATELLITE_AKEY=
CONFIG_SATELLITE_CACERT=
CONFIG_SATELLITE_PROFILE=
CONFIG_SATELLITE_FLAGS=
CONFIG_SATELLITE_PROXY=
CONFIG_SATELLITE_PROXY_USER=
CONFIG_SATELLITE_PROXY_PW=
CONFIG_AMQP_BACKEND=rabbitmq
CONFIG_AMQP_HOST=192.169.142.127
CONFIG_AMQP_ENABLE_SSL=n
CONFIG_AMQP_ENABLE_AUTH=n
CONFIG_AMQP_NSS_CERTDB_PW=PW_PLACEHOLDER
CONFIG_AMQP_SSL_PORT=5671
CONFIG_AMQP_SSL_CERT_FILE=/etc/pki/tls/certs/amqp_selfcert.pem
CONFIG_AMQP_SSL_KEY_FILE=/etc/pki/tls/private/amqp_selfkey.pem
CONFIG_AMQP_SSL_SELF_SIGNED=y
CONFIG_AMQP_AUTH_USER=amqp_user
CONFIG_KEYSTONE_DB_PW=abcae16b785245c3
CONFIG_KEYSTONE_REGION=RegionOne
CONFIG_KEYSTONE_DEMO_PW=bf737b785cfa4398
CONFIG_KEYSTONE_TOKEN_FORMAT=UUID
CONFIG_KEYSTONE_SERVICE_NAME=httpd
CONFIG_GLANCE_DB_PW=41264fc52ffd4fe8
CONFIG_GLANCE_KS_PW=f6a9398960534797
CONFIG_GLANCE_BACKEND=file
CONFIG_CINDER_DB_PW=5ac08c6d09ba4b69
CONFIG_CINDER_KS_PW=c8cb1ecb8c2b4f6f
CONFIG_CINDER_BACKEND=lvm
CONFIG_CINDER_VOLUMES_CREATE=y
CONFIG_CINDER_VOLUMES_SIZE=5G
CONFIG_CINDER_GLUSTER_MOUNTS=
CONFIG_CINDER_NFS_MOUNTS=
CONFIG_CINDER_NETAPP_HOSTNAME=
CONFIG_CINDER_NETAPP_SERVER_PORT=80
CONFIG_CINDER_NETAPP_STORAGE_FAMILY=ontap_cluster
CONFIG_CINDER_NETAPP_TRANSPORT_TYPE=http
CONFIG_CINDER_NETAPP_STORAGE_PROTOCOL=nfs
CONFIG_CINDER_NETAPP_SIZE_MULTIPLIER=1.0
CONFIG_CINDER_NETAPP_EXPIRY_THRES_MINUTES=720
CONFIG_CINDER_NETAPP_THRES_AVL_SIZE_PERC_START=20
CONFIG_CINDER_NETAPP_THRES_AVL_SIZE_PERC_STOP=60
CONFIG_CINDER_NETAPP_NFS_SHARES_CONFIG=
CONFIG_CINDER_NETAPP_VOLUME_LIST=
CONFIG_CINDER_NETAPP_VFILER=
CONFIG_CINDER_NETAPP_VSERVER=
CONFIG_CINDER_NETAPP_CONTROLLER_IPS=
CONFIG_CINDER_NETAPP_WEBSERVICE_PATH=/devmgr/v2
CONFIG_CINDER_NETAPP_STORAGE_POOLS=
CONFIG_NOVA_DB_PW=1e1b5aeeeaf342a8
CONFIG_NOVA_KS_PW=d9583177a2444f06
CONFIG_NOVA_SCHED_CPU_ALLOC_RATIO=16.0
CONFIG_NOVA_SCHED_RAM_ALLOC_RATIO=1.5
CONFIG_NOVA_COMPUTE_MIGRATE_PROTOCOL=tcp
CONFIG_NOVA_COMPUTE_PRIVIF=eth1
CONFIG_NOVA_NETWORK_MANAGER=nova.network.manager.FlatDHCPManager
CONFIG_NOVA_NETWORK_PUBIF=eth0
CONFIG_NOVA_NETWORK_PRIVIF=eth1
CONFIG_NOVA_NETWORK_FIXEDRANGE=192.168.32.0/22
CONFIG_NOVA_NETWORK_FLOATRANGE=10.3.4.0/22
CONFIG_NOVA_NETWORK_DEFAULTFLOATINGPOOL=nova
CONFIG_NOVA_NETWORK_AUTOASSIGNFLOATINGIP=n
CONFIG_NOVA_NETWORK_VLAN_START=100
CONFIG_NOVA_NETWORK_NUMBER=1
CONFIG_NOVA_NETWORK_SIZE=255
CONFIG_NEUTRON_KS_PW=808e36e154bd4cee
CONFIG_NEUTRON_DB_PW=0e2b927a21b44737
CONFIG_NEUTRON_L3_EXT_BRIDGE=br-ex
CONFIG_NEUTRON_L2_PLUGIN=ml2
CONFIG_LBAAS_INSTALL=n
CONFIG_NEUTRON_METERING_AGENT_INSTALL=n
CONFIG_NEUTRON_FWAAS=n
CONFIG_NEUTRON_ML2_TYPE_DRIVERS=vxlan
CONFIG_NEUTRON_ML2_TENANT_NETWORK_TYPES=vxlan
CONFIG_NEUTRON_ML2_MECHANISM_DRIVERS=openvswitch
CONFIG_NEUTRON_ML2_FLAT_NETWORKS=*
CONFIG_NEUTRON_ML2_VLAN_RANGES=
CONFIG_NEUTRON_ML2_TUNNEL_ID_RANGES=1001:2000
CONFIG_NEUTRON_ML2_VXLAN_GROUP=239.1.1.2
CONFIG_NEUTRON_ML2_VNI_RANGES=1001:2000
CONFIG_NEUTRON_L2_AGENT=openvswitch
CONFIG_NEUTRON_LB_TENANT_NETWORK_TYPE=local
CONFIG_NEUTRON_LB_VLAN_RANGES=
CONFIG_NEUTRON_LB_INTERFACE_MAPPINGS=
CONFIG_NEUTRON_OVS_TENANT_NETWORK_TYPE=vxlan
CONFIG_NEUTRON_OVS_VLAN_RANGES=
CONFIG_NEUTRON_OVS_BRIDGE_MAPPINGS=physnet1:br-ex
CONFIG_NEUTRON_OVS_BRIDGE_IFACES=
CONFIG_NEUTRON_OVS_TUNNEL_RANGES=1001:2000
CONFIG_NEUTRON_OVS_TUNNEL_IF=eth1
CONFIG_NEUTRON_OVS_VXLAN_UDP_PORT=4789
CONFIG_HORIZON_SSL=n
CONFIG_SSL_CERT=
CONFIG_SSL_KEY=
CONFIG_SSL_CACHAIN=
CONFIG_SWIFT_KS_PW=8f75bfd461234c30
CONFIG_SWIFT_STORAGES=
CONFIG_SWIFT_STORAGE_ZONES=1
CONFIG_SWIFT_STORAGE_REPLICAS=1
CONFIG_SWIFT_STORAGE_FSTYPE=ext4
CONFIG_SWIFT_HASH=a60aacbedde7429a
CONFIG_SWIFT_STORAGE_SIZE=2G
CONFIG_PROVISION_DEMO=y
CONFIG_PROVISION_TEMPEST=n
CONFIG_PROVISION_TEMPEST_USER=
CONFIG_PROVISION_TEMPEST_USER_PW=44faa4ebc3da4459
CONFIG_PROVISION_DEMO_FLOATRANGE=172.24.4.224/28
CONFIG_PROVISION_TEMPEST_REPO_URI=https://github.com/openstack/tempest.git
CONFIG_PROVISION_TEMPEST_REPO_REVISION=master
CONFIG_PROVISION_ALL_IN_ONE_OVS_BRIDGE=n
CONFIG_HEAT_DB_PW=PW_PLACEHOLDER
CONFIG_HEAT_AUTH_ENC_KEY=fc3fb7fee61e46b0
CONFIG_HEAT_KS_PW=PW_PLACEHOLDER
CONFIG_HEAT_CLOUDWATCH_INSTALL=n
CONFIG_HEAT_USING_TRUSTS=y
CONFIG_HEAT_CFN_INSTALL=n
CONFIG_HEAT_DOMAIN=heat
CONFIG_CEILOMETER_SECRET=19ae0e7430174349
CONFIG_CEILOMETER_KS_PW=337b08d4b3a44753
CONFIG_MONGODB_HOST=192.169.142.127
CONFIG_NAGIOS_PW=02f168ee8edd44e4
********************************************************
On Controller (X=2) and Computes X=(3,4) update :-
********************************************************

# cat ifcfg-br-ex
DEVICE="br-ex"
BOOTPROTO="static"
DNS1="83.221.202.254"
GATEWAY="192.169.142.1"
NM_CONTROLLED="no"
DEFROUTE="yes"
IPV4_FAILURE_FATAL="yes"
IPV6INIT=no
ONBOOT="yes"
TYPE="OVSIntPort"
OVS_BRIDGE=br-ex
DEVICETYPE="ovs"

# cat ifcfg-eth0
DEVICE="eth0"
ONBOOT="yes"
TYPE="OVSPort"
DEVICETYPE="ovs"
OVS_BRIDGE=br-ex
NM_CONTROLLED=no
IPV6INIT=no
***********
Then
***********

# chkconfig network on
# systemctl stop NetworkManager
# systemctl disable NetworkManager
# service network restart

Reboot

*****************************************
On Controller update neutron.conf
*****************************************

router_distributed = True
dvr_base_mac = fa:16:3f:00:00:00
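The two options above can be dropped into the [DEFAULT] section with sed; a minimal sketch, applied here to a temp copy (/tmp/neutron_dvr.conf is hypothetical) — on the Controller the real file is /etc/neutron/neutron.conf, and neutron-server must be restarted afterwards:

```shell
# Create a minimal neutron.conf stand-in so the sed edit can be demonstrated
cat > /tmp/neutron_dvr.conf <<'EOF'
[DEFAULT]
core_plugin = ml2
EOF
# Append the DVR options right after the [DEFAULT] section header
sed -i -e '/^\[DEFAULT\]/a router_distributed = True' \
       -e '/^\[DEFAULT\]/a dvr_base_mac = fa:16:3f:00:00:00' /tmp/neutron_dvr.conf
grep -E 'router_distributed|dvr_base_mac' /tmp/neutron_dvr.conf
```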

*****************
On Controller
*****************

[root@ip-192-169-142-127 neutron(keystone_admin)]# cat l3_agent.ini | grep -v ^#| grep -v ^\$

[DEFAULT]
debug = False
interface_driver =neutron.agent.linux.interface.OVSInterfaceDriver
use_namespaces = True
handle_internal_only_routers = True
external_network_bridge = br-ex
send_arp_for_ha = 3
periodic_interval = 40
periodic_fuzzy_delay = 5
router_delete_namespaces = False
agent_mode = dvr_snat
allow_automatic_l3agent_failover=False
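A quick way to confirm the agent mode before restarting the agent is to pull agent_mode out of the ini with sed; shown here against a temp copy (the /tmp path is hypothetical) — point it at /etc/neutron/l3_agent.ini on a real node:

```shell
# Minimal stand-in for l3_agent.ini
cat > /tmp/l3_agent_check.ini <<'EOF'
[DEFAULT]
agent_mode = dvr_snat
external_network_bridge = br-ex
EOF
# Print the value of agent_mode (expected: dvr_snat)
sed -n 's/^agent_mode *= *//p' /tmp/l3_agent_check.ini
```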

*********************************
On each Compute Node
*********************************

[root@ip-192-169-142-147 neutron]# cat l3_agent.ini | grep -v ^#| grep -v ^\$

[DEFAULT]
interface_driver = neutron.agent.linux.interface.OVSInterfaceDriver
agent_mode = dvr

*******************
On each node
*******************

[root@ip-192-169-142-147 neutron]# cat metadata_agent.ini | grep -v ^#| grep -v ^\$

[DEFAULT]
debug = False
auth_url = http://192.169.142.127:35357/v2.0
auth_region = RegionOne
auth_insecure = False
cache_url = memory://?default_ttl=5

[root@ip-192-169-142-147 neutron]# cat ml2_conf.ini | grep -v ^#| grep -v ^\$

[ml2]
type_drivers = vxlan
tenant_network_types = vxlan
mechanism_drivers =openvswitch,l2population
[ml2_type_flat]
[ml2_type_vlan]
[ml2_type_gre]
[ml2_type_vxlan]
vni_ranges =1001:2000
vxlan_group =239.1.1.2
[securitygroup]
enable_security_group = True
# On Compute nodes
[agent]
l2_population = True

The last entry for [agent] is important for DVR configuration on Kilo ( vs Juno )

[root@ip-192-169-142-147 openvswitch]# cat ovs_neutron_plugin.ini | grep -v ^#| grep -v ^\$

[ovs]
enable_tunneling = True
integration_bridge = br-int
tunnel_bridge = br-tun
local_ip =10.0.0.147
bridge_mappings =physnet1:br-ex
[agent]
polling_interval = 2
tunnel_types =vxlan
vxlan_udp_port =4789
l2population = True
enable_distributed_routing = True
arp_responder = True
[securitygroup]
firewall_driver = neutron.agent.linux.iptables_firewall.OVSHybridIptablesFirewallDriver

*********************************************************************
On each Compute node neutron-l3-agent and neutron-metadata-agent are
supposed to be started.
*********************************************************************

# yum install openstack-neutron-ml2
# systemctl start neutron-l3-agent
# systemctl enable neutron-l3-agent

+--------------------------------------+----------------------------------------+----------------+-------+----------+
| id                                   | host                                   | admin_state_up | alive | ha_state |
+--------------------------------------+----------------------------------------+----------------+-------+----------+
| 50388b16-4461-441c-83a4-f7e7084ec415 | ip-192-169-142-127.ip.secureserver.net | True           | :-)   |          |
| 7e89d4a7-7ebf-4a7a-9589-f6694e3637d4 | ip-192-169-142-137.ip.secureserver.net | True           | :-)   |          |
| d18cdf01-6814-489d-bef2-5207c1aac0eb | ip-192-169-142-147.ip.secureserver.net | True           | :-)   |          |
+--------------------------------------+----------------------------------------+----------------+-------+----------+
+---------------------+-------------------------------------------------------------------------------+
| Field               | Value                                                                         |
+---------------------+-------------------------------------------------------------------------------+
| agent_type          | L3 agent                                                                      |
| alive               | True                                                                          |
| binary              | neutron-l3-agent                                                              |
| configurations      | {                                                                             |
|                     |      "router_id": "",                                                         |
|                     |      "agent_mode": "dvr",                                                     |
|                     |      "gateway_external_network_id": "",                                       |
|                     |      "handle_internal_only_routers": true,                                    |
|                     |      "use_namespaces": true,                                                  |
|                     |      "routers": 1,                                                            |
|                     |      "interfaces": 1,                                                         |
|                     |      "floating_ips": 1,                                                       |
|                     |      "interface_driver": "neutron.agent.linux.interface.OVSInterfaceDriver",  |
|                     |      "external_network_bridge": "br-ex",                                      |
|                     |      "ex_gw_ports": 1                                                         |
|                     | }                                                                             |
| created_at          | 2015-09-29 07:40:37                                                           |
| description         |                                                                               |
| heartbeat_timestamp | 2015-09-30 09:58:24                                                           |
| host                | ip-192-169-142-137.ip.secureserver.net                                        |
| id                  | 7e89d4a7-7ebf-4a7a-9589-f6694e3637d4                                          |
| started_at          | 2015-09-30 08:08:53                                                           |
| topic               | l3_agent                                                                      |
+---------------------+-------------------------------------------------------------------------------+

CPU Pinning and NUMA Topology on RDO Kilo on Fedora Server 22

August 1, 2015
Posting below follows up http://redhatstackblog.redhat.com/2015/05/05/cpu-pinning-and-numa-topology-awareness-in-openstack-compute/
on RDO Kilo installed on Fedora 22. After the upgrade to the upstream version of openstack-puppet-modules-2015.1.9, the procedure of the RDO Kilo install on F22 changed significantly. Details follow below :-
*****************************************************************************************
RDO Kilo set up on Fedora ( openstack-puppet-modules-2015.1.9-4.fc23.noarch)
*****************************************************************************************
# dnf install -y https://rdoproject.org/repos/rdo-release.rpm
# dnf install -y openstack-packstack
Generate the answer file, then update it :-
set CONFIG_KEYSTONE_SERVICE_NAME=httpd
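The answer-file edit can be sketched with sed; applied here to a minimal temp copy (/tmp/answer-file-aio.txt is hypothetical) so the edit itself can be shown — on a real host run it against the answer file packstack generated:

```shell
# Minimal stand-in for a generated packstack answer file
cat > /tmp/answer-file-aio.txt <<'EOF'
CONFIG_KEYSTONE_SERVICE_NAME=keystone
EOF
# Flip the Keystone service to run under httpd
sed -i 's/^CONFIG_KEYSTONE_SERVICE_NAME=.*/CONFIG_KEYSTONE_SERVICE_NAME=httpd/' /tmp/answer-file-aio.txt
grep CONFIG_KEYSTONE_SERVICE_NAME /tmp/answer-file-aio.txt
```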
****************************************************************************
I also commented out the second line in /etc/httpd/conf.d/mod_dnssd.conf
****************************************************************************
You might be hit by bug https://bugzilla.redhat.com/show_bug.cgi?id=1249482
As a pre-install step, apply patch https://review.openstack.org/#/c/209032/
to fix neutron_api.pp. The puppet templates are located in
/usr/lib/python2.7/site-packages/packstack/puppet/templates.
You might also be hit by https://bugzilla.redhat.com/show_bug.cgi?id=1234042
****************
Then run :-
****************

The final target is to reproduce the mentioned article on an i7 4790 Haswell CPU box and launch a nova instance with CPU pinning.

Linux fedora22server.localdomain 4.1.3-200.fc22.x86_64 #1 SMP Wed Jul 22 19:51:58 UTC 2015 x86_64 x86_64 x86_64 GNU/Linux

qemu-system-x86-2.3.0-6.fc22.x86_64
qemu-img-2.3.0-6.fc22.x86_64
qemu-guest-agent-2.3.0-6.fc22.x86_64
qemu-kvm-2.3.0-6.fc22.x86_64
ipxe-roms-qemu-20150407-1.gitdc795b9f.fc22.noarch
qemu-common-2.3.0-6.fc22.x86_64
libvirt-daemon-driver-qemu-1.2.13.1-2.fc22.x86_64

available: 1 nodes (0)
node 0 cpus: 0 1 2 3 4 5 6 7
node 0 size: 15991 MB
node 0 free: 4399 MB
node distances:
node   0
  0:  10

<capabilities>
<host>
<cpu>
<arch>x86_64</arch>
<model>Haswell-noTSX</model>
<vendor>Intel</vendor>
<feature name='invtsc'/>
<feature name='abm'/>
<feature name='pdpe1gb'/>
<feature name='rdrand'/>
<feature name='f16c'/>
<feature name='osxsave'/>
<feature name='pdcm'/>
<feature name='xtpr'/>
<feature name='tm2'/>
<feature name='est'/>
<feature name='smx'/>
<feature name='vmx'/>
<feature name='ds_cpl'/>
<feature name='monitor'/>
<feature name='dtes64'/>
<feature name='pbe'/>
<feature name='tm'/>
<feature name='ht'/>
<feature name='ss'/>
<feature name='acpi'/>
<feature name='ds'/>
<feature name='vme'/>
<pages unit='KiB' size='4'/>
<pages unit='KiB' size='2048'/>
</cpu>
<power_management>
<suspend_mem/>
<suspend_disk/>
<suspend_hybrid/>
</power_management>
<migration_features>
<live/>
<uri_transports>
<uri_transport>tcp</uri_transport>
<uri_transport>rdma</uri_transport>
</uri_transports>
</migration_features>
<topology>
<cells num='1'>
<cell id='0'>
<memory unit='KiB'>16374824</memory>
<pages unit='KiB' size='4'>4093706</pages>
<pages unit='KiB' size='2048'>0</pages>
<distances>
<sibling id='0' value='10'/>
</distances>
<cpus num='8'>
<cpu id='0' socket_id='0' core_id='0' siblings='0,4'/>
<cpu id='1' socket_id='0' core_id='1' siblings='1,5'/>
<cpu id='2' socket_id='0' core_id='2' siblings='2,6'/>
<cpu id='3' socket_id='0' core_id='3' siblings='3,7'/>
<cpu id='4' socket_id='0' core_id='0' siblings='0,4'/>
<cpu id='5' socket_id='0' core_id='1' siblings='1,5'/>
<cpu id='6' socket_id='0' core_id='2' siblings='2,6'/>
<cpu id='7' socket_id='0' core_id='3' siblings='3,7'/>
</cpus>
</cell>
</cells>
</topology>

On each Compute node where pinning of virtual machines is to be permitted, open the /etc/nova/nova.conf file and make the following modifications:

Set the vcpu_pin_set value to a list or range of logical CPU cores to reserve for virtual machine processes. OpenStack Compute will ensure guest virtual machine instances are pinned to these virtual CPU cores.
vcpu_pin_set=2,3,6,7
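A small POSIX-shell sketch of how a vcpu_pin_set-style list ("2,3,6-7") expands into explicit logical cores; expand_cpuset is a hypothetical helper written only to illustrate what the option means, not Nova's own parser:

```shell
# Expand "2,3,6-7" into "2 3 6 7": split on commas, unfold a-b ranges with seq
expand_cpuset() {
  out=""
  for part in $(printf '%s' "$1" | tr ',' ' '); do
    case "$part" in
      *-*) out="$out $(seq "${part%-*}" "${part#*-}")" ;;
      *)   out="$out $part" ;;
    esac
  done
  echo $out
}
expand_cpuset "2,3,6-7" > /tmp/cpuset_expanded
cat /tmp/cpuset_expanded   # 2 3 6 7
```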

Set reserved_host_memory_mb to reserve RAM for host processes. For the purposes of testing, the default of 512 MB was used:
reserved_host_memory_mb=512

# systemctl restart openstack-nova-compute.service

************************************
SCHEDULER CONFIGURATION
************************************

Update /etc/nova/nova.conf

scheduler_default_filters=RetryFilter,AvailabilityZoneFilter,RamFilter,ComputeFilter,ComputeCapabilitiesFilter,ImagePropertiesFilter,CoreFilter,NUMATopologyFilter,AggregateInstanceExtraSpecsFilter

# systemctl restart openstack-nova-scheduler.service

At this point, if creating a guest, you may see changes appear in the generated XML, pinning the guest vCPU(s) to the cores listed in vcpu_pin_set:

<vcpu placement='static' cpuset='2-3,6-7'>1</vcpu>

Add to the end of the vmlinuz grub2 command line:
isolcpus=2,3,6,7
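The same isolcpus argument can be appended to GRUB_CMDLINE_LINUX; a hedged sketch against a temp copy (/tmp/grub_test is hypothetical) — on the real host edit /etc/default/grub and regenerate grub.cfg with grub2-mkconfig:

```shell
# Minimal stand-in for /etc/default/grub
cat > /tmp/grub_test <<'EOF'
GRUB_CMDLINE_LINUX="rd.lvm.lv=fedora/root rhgb quiet"
EOF
# Append isolcpus inside the quoted kernel command line
sed -i 's/^\(GRUB_CMDLINE_LINUX=".*\)"$/\1 isolcpus=2,3,6,7"/' /tmp/grub_test
cat /tmp/grub_test
```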

***************
REBOOT
***************

+----+-------------+-------------------+-------+----------+
| Id | Name        | Availability Zone | Hosts | Metadata |
+----+-------------+-------------------+-------+----------+
| 1  | performance | -                 |       |          |
+----+-------------+-------------------+-------+----------+

Metadata has been successfully updated for aggregate 1.

+----+-------------+-------------------+-------+---------------+
| Id | Name        | Availability Zone | Hosts | Metadata      |
+----+-------------+-------------------+-------+---------------+
| 1  | performance | -                 |       | 'pinned=true' |
+----+-------------+-------------------+-------+---------------+

[root@fedora22server ~(keystone_admin)]# nova flavor-create m1.small.performance 6 4096 20 4
+----+----------------------+-----------+------+-----------+------+-------+-------------+-----------+
| ID | Name                 | Memory_MB | Disk | Ephemeral | Swap | VCPUs | RXTX_Factor | Is_Public |
+----+----------------------+-----------+------+-----------+------+-------+-------------+-----------+
| 6  | m1.small.performance | 4096      | 20   | 0         |      | 4     | 1.0         | True      |
+----+----------------------+-----------+------+-----------+------+-------+-------------+-----------+

[root@fedora22server ~(keystone_admin)]# nova flavor-key 6 set hw:cpu_policy=dedicated
[root@fedora22server ~(keystone_admin)]# nova flavor-key 6 set aggregate_instance_extra_specs:pinned=true
fedora22server.localdomain

Host fedora22server.localdomain has been successfully added for aggregate 1
+----+-------------+-------------------+------------------------------+---------------+
| Id | Name        | Availability Zone | Hosts                        | Metadata      |
+----+-------------+-------------------+------------------------------+---------------+
| 1  | performance | -                 | 'fedora22server.localdomain' | 'pinned=true' |
+----+-------------+-------------------+------------------------------+---------------+

[root@fedora22server ~(keystone_demo)]# glance image-list
+--------------------------------------+---------------------------------+-------------+------------------+-------------+--------+
| ID                                   | Name                            | Disk Format | Container Format | Size        | Status |
+--------------------------------------+---------------------------------+-------------+------------------+-------------+--------+
| bf6f5272-ae26-49ae-b0f9-3c4fcba350f6 | CentOS71Image                   | qcow2       | bare             | 1004994560  | active |
| 05ac955e-3503-4bcf-8413-6a1b3c98aefa | cirros                          | qcow2       | bare             | 13200896    | active |
| 7b2085b8-4fe7-4d32-a5c9-5eadaf8bfc52 | VF22Image                       | qcow2       | bare             | 228599296   | active |
| c695e7fa-a69f-4220-abd8-2269b75af827 | Windows Server 2012 R2 Std Eval | qcow2       | bare             | 17182752768 | active |
+--------------------------------------+---------------------------------+-------------+------------------+-------------+--------+

[root@fedora22server ~(keystone_demo)]#neutron net-list

+--------------------------------------+----------+-----------------------------------------------------+
| id                                   | name     | subnets                                             |
+--------------------------------------+----------+-----------------------------------------------------+
| 0daa3a02-c598-4c46-b1ac-368da5542927 | public   | 8303b2f3-2de2-44c2-bd5e-fc0966daec53 192.168.1.0/24 |
| c85a4215-1558-4a95-886d-a2f75500e052 | demo_net | 0cab6cbc-dd80-42c6-8512-74d7b2cbf730 50.0.0.0/24    |
+--------------------------------------+----------+-----------------------------------------------------+

*************************************************************************
At this point attempt to launch F22 Cloud instance with created flavor
m1.small.performance
*************************************************************************

[root@fedora22server ~(keystone_demo)]# nova boot --image 7b2085b8-4fe7-4d32-a5c9-5eadaf8bfc52 --key-name oskeydev --flavor m1.small.performance --nic net-id=c85a4215-1558-4a95-886d-a2f75500e052 vf22-instance

+--------------------------------------+--------------------------------------------------+
| Property                             | Value                                            |
+--------------------------------------+--------------------------------------------------+
| OS-DCF:diskConfig                    | MANUAL                                           |
| OS-EXT-AZ:availability_zone          | nova                                             |
| OS-EXT-STS:power_state               | 0                                                |
| OS-EXT-STS:vm_state                  | building                                         |
| OS-SRV-USG:launched_at               | -                                                |
| OS-SRV-USG:terminated_at             | -                                                |
| accessIPv4                           |                                                  |
| accessIPv6                           |                                                  |
| config_drive                         |                                                  |
| created                              | 2015-07-31T08:03:49Z                             |
| flavor                               | m1.small.performance (6)                         |
| hostId                               |                                                  |
| id                                   | 4b99f3cf-3126-48f3-9e00-94787f040e43             |
| image                                | VF22Image (7b2085b8-4fe7-4d32-a5c9-5eadaf8bfc52) |
| key_name                             | oskeydev                                         |
| name                                 | vf22-instance                                    |
| os-extended-volumes:volumes_attached | []                                               |
| progress                             | 0                                                |
| security_groups                      | default                                          |
| status                               | BUILD                                            |
| tenant_id                            | 14f736e6952644b584b2006353ca51be                 |
| updated                              | 2015-07-31T08:03:50Z                             |
| user_id                              | 4ece2385b17a4490b6fc5a01ff53350c                 |
+--------------------------------------+--------------------------------------------------+

[root@fedora22server ~(keystone_demo)]#nova list

+--------------------------------------+---------------+---------+------------+-------------+-----------------------------------+
| ID                                   | Name          | Status  | Task State | Power State | Networks                          |
+--------------------------------------+---------------+---------+------------+-------------+-----------------------------------+
| 93906a61-ec0b-481d-b964-2bb99d095646 | CentOS71RLX   | SHUTOFF | -          | Shutdown    | demo_net=50.0.0.21, 192.168.1.159 |
| ac7e9be5-d2dc-4ec0-b0a1-4096b552e578 | VF22Devpin    | ACTIVE  | -          | Running     | demo_net=50.0.0.22                |
| b93c9526-ded5-4b7a-ae3a-106b34317744 | VF22Devs      | SHUTOFF | -          | Shutdown    | demo_net=50.0.0.19, 192.168.1.157 |
| bef20a1e-3faa-4726-a301-73ca49666fa6 | WinSrv2012    | SHUTOFF | -          | Shutdown    | demo_net=50.0.0.16                |
| 4b99f3cf-3126-48f3-9e00-94787f040e43 | vf22-instance | ACTIVE  | -          | Running     | demo_net=50.0.0.23, 192.168.1.160 |
+--------------------------------------+---------------+---------+------------+-------------+-----------------------------------+

[root@fedora22server ~(keystone_demo)]#virsh list

 Id    Name                State
----------------------------------------------------
 2     instance-0000000c   running
 3     instance-0000000d   running

Regarding a detailed explanation of the highlighted blocks below, keep in mind that pinning is done to logical CPU cores (not physical ones, due to a 4-core CPU with HT enabled). Multiple NUMA cells are also absent, due to the limitations of the i7 47XX Haswell CPU architecture.

[root@fedora22server ~(keystone_demo)]#virsh dumpxml instance-0000000d > vf22-instance.xml
<domain type='kvm' id='3'>
<name>instance-0000000d</name>
<uuid>4b99f3cf-3126-48f3-9e00-94787f040e43</uuid>
<nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.0">
<nova:package version="2015.1.0-3.fc23"/>
<nova:name>vf22-instance</nova:name>
<nova:creationTime>2015-07-31 08:03:54</nova:creationTime>
<nova:flavor name="m1.small.performance">
<nova:memory>4096</nova:memory>
<nova:disk>20</nova:disk>
<nova:swap>0</nova:swap>
<nova:ephemeral>0</nova:ephemeral>
<nova:vcpus>4</nova:vcpus>
</nova:flavor>
<nova:owner>
<nova:user uuid="4ece2385b17a4490b6fc5a01ff53350c">demo</nova:user>
<nova:project uuid="14f736e6952644b584b2006353ca51be">demo</nova:project>
</nova:owner>
</nova:instance>
<memory unit='KiB'>4194304</memory>
<currentMemory unit='KiB'>4194304</currentMemory>
<vcpu placement='static'>4</vcpu>
<cputune>
<shares>4096</shares>
<vcpupin vcpu='0' cpuset='2'/>
<vcpupin vcpu='1' cpuset='6'/>
<vcpupin vcpu='2' cpuset='3'/>
<vcpupin vcpu='3' cpuset='7'/>
<emulatorpin cpuset='2-3,6-7'/>
</cputune>
<numatune>
<memory mode='strict' nodeset='0'/>
<memnode cellid='0' mode='strict' nodeset='0'/>
</numatune>
<resource>
<partition>/machine</partition>
</resource>
<sysinfo type='smbios'>
<system>
<entry name='manufacturer'>Fedora Project</entry>
<entry name='product'>OpenStack Nova</entry>
<entry name='version'>2015.1.0-3.fc23</entry>
<entry name='serial'>f1b336b1-6abf-4180-865a-b6be5670352e</entry>
<entry name='uuid'>4b99f3cf-3126-48f3-9e00-94787f040e43</entry>
</system>
</sysinfo>
<os>
<type arch='x86_64' machine='pc-i440fx-2.3'>hvm</type>
<boot dev='hd'/>
<smbios mode='sysinfo'/>
</os>
<features>
<acpi/>
<apic/>
</features>
<cpu mode='host-model'>
<model fallback='allow'/>
<numa>
<cell id='0' cpus='0-3' memory='4194304' unit='KiB'/>
</numa>
</cpu>
<clock offset='utc'>
<timer name='pit' tickpolicy='delay'/>
<timer name='rtc' tickpolicy='catchup'/>
<timer name='hpet' present='no'/>
</clock>
<on_poweroff>destroy</on_poweroff>
<on_reboot>restart</on_reboot>
<on_crash>destroy</on_crash>
<devices>
<emulator>/usr/bin/qemu-kvm</emulator>
<disk type='file' device='disk'>
<driver name='qemu' type='qcow2' cache='none'/>
<source file='/var/lib/nova/instances/4b99f3cf-3126-48f3-9e00-94787f040e43/disk'/>
<backingStore type='file' index='1'>
<format type='raw'/>
<source file='/var/lib/nova/instances/_base/6c60a5ed1b3037bbdb2bed198dac944f4c0d09cb'/>
<backingStore/>
</backingStore>
<target dev='vda' bus='virtio'/>
<alias name='virtio-disk0'/>
<address type='pci' domain='0x0000' bus='0x00' slot='0x06' function='0x0'/>
</disk>
<controller type='usb' index='0'>
<alias name='usb0'/>
<address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x2'/>
</controller>
<controller type='pci' index='0' model='pci-root'>
<alias name='pci.0'/>
</controller>
<controller type='virtio-serial' index='0'>
<alias name='virtio-serial0'/>
<address type='pci' domain='0x0000' bus='0x00' slot='0x05' function='0x0'/>
</controller>
<interface type='bridge'>
<source bridge='qbr567b21fe-52'/>
<target dev='tap567b21fe-52'/>
<model type='virtio'/>
<alias name='net0'/>
<address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x0'/>
</interface>
<serial type='file'>
<source path='/var/lib/nova/instances/4b99f3cf-3126-48f3-9e00-94787f040e43/console.log'/>
<target port='0'/>
<alias name='serial0'/>
</serial>
<serial type='pty'>
<source path='/dev/pts/2'/>
<target port='1'/>
<alias name='serial1'/>
</serial>
<console type='file'>
<source path='/var/lib/nova/instances/4b99f3cf-3126-48f3-9e00-94787f040e43/console.log'/>
<target type='serial' port='0'/>
<alias name='serial0'/>
</console>
<channel type='spicevmc'>
<target type='virtio' name='com.redhat.spice.0' state='disconnected'/>
<alias name='channel0'/>
</channel>
<input type='mouse' bus='ps2'/>
<input type='keyboard' bus='ps2'/>
<graphics type='spice' port='5901' autoport='yes' listen='0.0.0.0' keymap='en-us'>
</graphics>
<sound model='ich6'>
<alias name='sound0'/>
<address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x0'/>
</sound>
<video>
<model type='qxl' ram='65536' vram='65536' vgamem='16384' heads='1'/>
<alias name='video0'/>
<address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x0'/>
</video>
<memballoon model='virtio'>
<alias name='balloon0'/>
<address type='pci' domain='0x0000' bus='0x00' slot='0x07' function='0x0'/>
<stats period='10'/>
</memballoon>
</devices>
<seclabel type='dynamic' model='selinux' relabel='yes'>
<label>system_u:system_r:svirt_t:s0:c359,c706</label>
<imagelabel>system_u:object_r:svirt_image_t:s0:c359,c706</imagelabel>
</seclabel>
</domain>

Switching to Dashboard Spice Console in RDO Kilo on Fedora 22

July 3, 2015

*************************
UPDATE 06/27/2015
*************************
# dnf install -y https://rdoproject.org/repos/rdo-release.rpm
# dnf  install -y openstack-packstack
# dnf install fedora-repos-rawhide
# dnf --enablerepo=rawhide update openstack-packstack
Fedora – Rawhide – Developmental packages for the next Fedora re 1.7 MB/s |  45 MB     00:27
Last metadata expiration check performed 0:00:39 ago on Sat Jun 27 13:23:03 2015.
Dependencies resolved.
==============================================================
Package                       Arch      Version                                Repository  Size
==============================================================
openstack-packstack           noarch    2015.1-0.7.dev1577.gc9f8c3c.fc23       rawhide    233 k
openstack-packstack-puppet    noarch    2015.1-0.7.dev1577.gc9f8c3c.fc23       rawhide     233 k
Transaction Summary
==============================================================
.  .  .  .  .
# dnf install python3-pyOpenSSL.noarch
At this point run :-
and set
CONFIG_KEYSTONE_SERVICE_NAME=httpd
I also commented out the second line in /etc/httpd/conf.d/mod_dnssd.conf
Then run `packstack --answer-file=./answer-file-aio.txt`; however, you will still need to pre-patch provision_demo.pp at the moment
( see the third patch at http://textuploader.com/yn0v ), the rest should work fine.

Upon completion you may try to follow :-
https://www.rdoproject.org/Neutron_with_existing_external_network

I didn't test it on Fedora 22; I just created external and private networks of VXLAN type and configured :-

DEVICE="br-ex"
BOOTPROTO="static"
DNS1="8.8.8.8"
GATEWAY="192.168.1.1"
NM_CONTROLLED="no"
TYPE="OVSIntPort"
OVS_BRIDGE=br-ex
DEVICETYPE="ovs"
DEFROUTE="yes"
IPV4_FAILURE_FATAL="yes"
IPV6INIT=no

DEVICE="enp2s0"
ONBOOT="yes"
TYPE="OVSPort"
DEVICETYPE="ovs"
OVS_BRIDGE=br-ex
NM_CONTROLLED=no
IPV6INIT=no

When configuration above is done :-

# chkconfig network on
# systemctl stop NetworkManager
# systemctl disable NetworkManager
# reboot

*************************
UPDATE 06/26/2015
*************************

To install RDO Kilo on Fedora 22 :-
after `dnf -y install openstack-packstack `
# cd /usr/lib/python2.7/site-packages/packstack/puppet/templates
Then apply the following 3 patches

************************
UPDATE 05/19/2015
************************
MATE Desktop supports sound (via the patch mentioned below) on RDO Kilo cloud instances F22, F21, F20. The RDO Kilo AIO install was performed on bare metal.
Also, a Windows Server 2012 (evaluation version) cloud VM provides pretty stable video/sound ( http://www.cloudbase.it/windows-cloud-images/ ).

************************
UPDATE 05/14/2015
************************
I've got sound working on a CentOS 7 VM (connection to the console via virt-manager) with a slightly updated patch of Y. Kawada, self.type set to "ich6". RDO Kilo was installed on a bare-metal AIO testing host, Fedora 22. The same results have been obtained for RDO Kilo on CentOS 7.1. However, a connection to the spice console with the cut-and-paste and sound features enabled may be obtained via spicy (remote connection).

Generated libvirt.xml

<domain type="kvm">
<uuid>455877f2-7070-48a7-bb24-e0702be2fbc5</uuid>
<name>instance-00000003</name>
<memory>2097152</memory>
<vcpu cpuset="0-7">1</vcpu>
<nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.0">
<nova:package version="2015.1.0-3.el7"/>
<nova:name>CentOS7RSX05</nova:name>
<nova:creationTime>2015-06-14 18:42:11</nova:creationTime>
<nova:flavor name="m1.small">
<nova:memory>2048</nova:memory>
<nova:disk>20</nova:disk>
<nova:swap>0</nova:swap>
<nova:ephemeral>0</nova:ephemeral>
<nova:vcpus>1</nova:vcpus>
</nova:flavor>
<nova:owner>
<nova:user uuid="da79d2c66db747eab942bdbe20bb3f44">demo</nova:user>
<nova:project uuid="8c9defac20a74633af4bb4773e45f11e">demo</nova:project>
</nova:owner>
<nova:root type="image" uuid="4a2d708c-7624-439f-9e7e-6e133062e23a"/>
</nova:instance>
<sysinfo type="smbios">
<system>
<entry name="manufacturer">Fedora Project</entry>
<entry name="product">OpenStack Nova</entry>
<entry name="version">2015.1.0-3.el7</entry>
<entry name="serial">b3fae7c3-10bd-455b-88b7-95e586342203</entry>
<entry name="uuid">455877f2-7070-48a7-bb24-e0702be2fbc5</entry>
</system>
</sysinfo>
<os>
<type>hvm</type>
<boot dev="hd"/>
<smbios mode="sysinfo"/>
</os>
<features>
<acpi/>
<apic/>
</features>
<cputune>
<shares>1024</shares>
</cputune>
<clock offset="utc">
<timer name="pit" tickpolicy="delay"/>
<timer name="rtc" tickpolicy="catchup"/>
<timer name="hpet" present="no"/>
</clock>
<cpu mode="host-model" match="exact">
</cpu>
<devices>
<disk type="file" device="disk">
<driver name="qemu" type="qcow2" cache="none"/>
<source file="/var/lib/nova/instances/455877f2-7070-48a7-bb24-e0702be2fbc5/disk"/>
<target bus="virtio" dev="vda"/>
</disk>
<interface type="bridge">
<model type="virtio"/>
<source bridge="qbr8ce9ae7b-f0"/>
<target dev="tap8ce9ae7b-f0"/>
</interface>
<serial type="file">
<source path="/var/lib/nova/instances/455877f2-7070-48a7-bb24-e0702be2fbc5/console.log"/>
</serial>
<serial type="pty"/>
<channel type="spicevmc">
<target type="virtio" name="com.redhat.spice.0"/>
</channel>
<graphics type="spice" autoport="yes" keymap="en-us" listen="0.0.0.0"/>
<video>
<model type="qxl"/>
</video>
<sound model="ich6"/>
<memballoon model="virtio">
<stats period="10"/>
</memballoon>
</devices>
</domain>

*****************
END UPDATE
*****************
The post follows up http://lxer.com/module/newswire/view/214893/index.html
The most recent `yum update` on F22 significantly improved network performance on cloud VMs (L2). Watching movies running on a cloud F22 VM (with MATE Desktop installed and functioning pretty smoothly) without sound refreshes old spice memories; view https://bugzilla.redhat.com/show_bug.cgi?format=multiple&id=913607
# dnf -y install spice-html5 ( installed on Controller && Compute )
# dnf -y install  openstack-nova-spicehtml5proxy (Compute Node)
# rpm -qa | grep openstack-nova-spicehtml5proxy
openstack-nova-spicehtml5proxy-2015.1.0-3.fc23.noarch

***********************************************************************
Update /etc/nova/nova.conf on Controller && Compute Node as follows :-
***********************************************************************

[DEFAULT]
. . . . .
web=/usr/share/spice-html5
. . . . . .
spicehtml5proxy_host=0.0.0.0  (only Compute)
spicehtml5proxy_port=6082     (only Compute)
. . . . . . .
# Disable VNC
vnc_enabled=false
. . . . . . .
[spice]

# Compute Node Management IP 192.169.142.137
html5proxy_base_url=http://192.169.142.137:6082/spice_auto.html
server_listen=0.0.0.0 ( only  Compute )
enabled=true
agent_enabled=true
keymap=en-us

:wq
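A section-aware check of the [spice] block can be done with plain awk (so it does not match keys of the same name under [DEFAULT]); run here against a minimal temp copy (/tmp/nova_spice.conf is hypothetical) — on a node the target is /etc/nova/nova.conf:

```shell
# Minimal stand-in for nova.conf with both sections present
cat > /tmp/nova_spice.conf <<'EOF'
[DEFAULT]
vnc_enabled=false
[spice]
html5proxy_base_url=http://192.169.142.137:6082/spice_auto.html
enabled=true
agent_enabled=true
EOF
# Print "enabled" only while inside the [spice] section (expected: true)
awk -F'=' '/^\[/{s=0} /^\[spice\]/{s=1;next} s && $1=="enabled"{print $2}' /tmp/nova_spice.conf
```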

# service httpd restart ( on Controller )
Next actions to be performed on Compute Node

# service openstack-nova-compute restart
# service openstack-nova-spicehtml5proxy start
# systemctl enable openstack-nova-spicehtml5proxy

On Controller

+--------------------------------------+-----------+----------------------------------+---------+------------+-------------+----------------------------------+
| ID                                   | Name      | Tenant ID                        | Status  | Task State | Power State | Networks                         |
+--------------------------------------+-----------+----------------------------------+---------+------------+-------------+----------------------------------+
| 6c8ef008-e8e0-4f1c-af17-b5f846f8b2d9 | CirrOSDev | 7e5a0f3ec3fe45dc83ae0947ef52adc3 | SHUTOFF | -          | Shutdown    | demo_net=50.0.0.11, 172.24.4.228 |
| cfd735ea-d9a8-4c4e-9a77-03035f01d443 | VF22DEVS  | 7e5a0f3ec3fe45dc83ae0947ef52adc3 | ACTIVE  | -          | Running     | demo_net=50.0.0.14, 172.24.4.231 |
+--------------------------------------+-----------+----------------------------------+---------+------------+-------------+----------------------------------+
[root@ip-192-169-142-127 ~(keystone_admin)]# nova get-spice-console cfd735ea-d9a8-4c4e-9a77-03035f01d443  spice-html5
+-------------+----------------------------------------------------------------------------------------+
| Type        | Url                                                                                    |
+-------------+----------------------------------------------------------------------------------------+
+-------------+----------------------------------------------------------------------------------------+

Session running by virt-manager on Virtualization Host ( F22 )

Connection to Compute Node 192.169.142.137 has been activated

Once again about pros/cons of Systemd and Upstart

May 16, 2015

Arguments in favor of Upstart:

1. Upstart is simpler to port to systems other than Linux, while systemd is rigidly tied to Linux kernel capabilities. Adapting Upstart to work on Debian GNU/kFreeBSD and Debian GNU/Hurd looks like quite a realistic task, which cannot be said of systemd.

2. Upstart is more familiar to the Debian developers, many of whom also participate in the development of Ubuntu. Two members of the Debian Technical Committee (Steve Langasek and Colin Watson) are part of the Upstart development team.

3. Upstart is simpler and more lightweight than systemd; as a result, less code means fewer mistakes. Upstart is also better suited for integration with the code of system daemons. The policy of systemd comes down to daemon authors having to adapt themselves to upstream (providing a compatible analog at the level of the external interface when replacing a systemd component), instead of upstream providing comfortable means for daemon developers.

4. Upstart is simpler in terms of maintenance and packaging, and the Upstart developer community is more open to collaboration. In the case of systemd it is necessary to take the systemd methods for granted and follow them, for example, to support a separate "/usr" partition or
to use only absolute paths for startup. The shortcomings of Upstart belong to the category of fixable problems; in its current state Upstart is already completely ready for use in Debian 8.0 (Jessie).

5. Upstart has a more familiar model of defining a service configuration, unlike systemd, where settings in /etc override the basic settings of units defined in the /lib hierarchy. Using Upstart would maintain a healthy spirit of competition, which would promote the development of different approaches and keep developers in good shape.

1. Without essential processing of architecture of Upstart won’t be able to catch up with systemd on functionality (for example, the turned model of start of dependences (instead of start of all demanded dependences at start of the set service,start of service in Upstart is carried out at receipt of an event about availability for service of dependences);

2. Use of ptrace disturbs application of upstart-works for such daemons as avahi, apache and postfix;possibility of activation of service only upon the appeal to a socket, but not on indirect signs,such as dependence on activation of other socket; lack of reliable tracking of conditions of the carried-out processes.

3. Systemd contains rather self-sufficient set of components that allows to concentrate attention on elimination of problems,but not completion of a configuration with Upstart to the opportunities which are already present at Systemd. For example, in Upstart are absent:- support of the detailed status and maintaining the log of work of daemons,multiple activation through sockets,activation through sockets for IPv6 and UDP,flexible mechanism of restriction of resources.

4. Using systemd will make it possible to bring the control facilities of various distributions closer together and unify them. Systemd has already been adopted by RHEL 7.x, CentOS 7.x, Fedora, openSUSE, Sabayon, Mandriva and Arch Linux;

5. systemd has a more active, larger and more versatile community of developers, including engineers of the SUSE and Red Hat companies. Using Upstart makes a distribution dependent on Canonical, without whose support Upstart would be left without developers and doomed to stagnation. Participation in Upstart development requires signing an agreement on transfer of property rights to Canonical. Not without reason did Red Hat decide to replace Upstart with systemd, and the Debian project has already been compelled to migrate to systemd. Implementing some boot capabilities in Upstart requires fragments of shell scripts, which makes the initialization process less reliable and more labor-consuming to debug.

6. Support of systemd is implemented in GNOME and KDE, which more and more actively use systemd capabilities (for example, means for managing user sessions and starting each application in a separate cgroup). GNOME continues to be positioned as the main desktop environment of Debian, while relations between the Ubuntu/Upstart and GNOME projects have had an obviously tense character.

References

http://www.opennet.ru/opennews/art.shtml?num=38762

RDO Kilo Three Node Setup for Controller+Network+Compute (ML2&OVS&VXLAN) on CentOS 7.1

May 9, 2015

Following below is a brief instruction for a traditional three node deployment test Controller && Network && Compute for the oncoming RDO Kilo, performed on a Fedora 21 host with KVM/Libvirt Hypervisor (16 GB RAM, Intel Core i7-4771 Haswell CPU, ASUS Z97-P). Three VMs (4 GB RAM, 2 VCPUS) have been set up: the Controller VM with one VNIC (management subnet), the Network Node VM with three VNICS (management, vtep's and external subnets), and the Compute Node VM with two VNICS (management and vtep's subnets).

SELINUX stays in enforcing mode.

Three Libvirt networks created

# cat openstackvms.xml
<network>
<name>openstackvms</name>
<uuid>d0e9964a-f91a-40c0-b769-a609aee41bf2</uuid>
<forward mode='nat'>
<nat>
<port start='1024' end='65535'/>
</nat>
</forward>
<bridge name='virbr2' stp='on' delay='0' />
<ip address='192.169.142.1' netmask='255.255.255.0'>
<dhcp>
<range start='192.169.142.2' end='192.169.142.254' />
</dhcp>
</ip>
</network>

[root@junoJVC01 ~]# cat public.xml

<network>
<name>public</name>
<uuid>d0e9965b-f92c-40c1-b749-b609aed42cf2</uuid>
<forward mode='nat'>
<nat>
<port start='1024' end='65535'/>
</nat>
</forward>
<bridge name='virbr3' stp='on' delay='0' />
<ip address='172.24.4.225' netmask='255.255.255.240'>
<dhcp>
<range start='172.24.4.226' end='172.24.4.238' />
</dhcp>
</ip>
</network>

[root@junoJVC01 ~]# cat vteps.xml

<network>
<name>vteps</name>
<uuid>d0e9965b-f92c-40c1-b749-b609aed42cf2</uuid>
<forward mode='nat'>
<nat>
<port start='1024' end='65535'/>
</nat>
</forward>
<bridge name='virbr4' stp='on' delay='0' />
<ip address='10.0.0.1' netmask='255.255.255.0'>
<dhcp>
<range start='10.0.0.1' end='10.0.0.254' />
</dhcp>
</ip>
</network>
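With the three XML files in place, each network has to be defined, started and marked for autostart. A minimal sketch that only builds and prints the required virsh commands (a dry run; the XML files are assumed to sit in the current directory, and the printed lines would be run as root on the KVM host):

```shell
#!/bin/sh
# Dry run: build and print the virsh commands that define, start and
# autostart the three Libvirt networks above. Nothing is executed here.
CMDS=""
for net in openstackvms public vteps; do
  CMDS="${CMDS}virsh net-define ${net}.xml
virsh net-start ${net}
virsh net-autostart ${net}
"
done
echo "$CMDS"
```

Running the printed commands should produce the `virsh net-list` output shown below.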

[root@junoJVC01 ~]# virsh net-list

Name                 State      Autostart     Persistent

————————————————————————–

default               active        yes           yes
openstackvms    active        yes           yes
public                active        yes           yes
vteps                 active         yes          yes

*********************************************************************************
1. First Libvirt subnet “openstackvms”  serves as management network.
All 3 VM are attached to this subnet
**********************************************************************************
2. Second Libvirt subnet "public" serves for simulation of the external network. The Network Node is attached to "public"; later on the "eth3" interface (which belongs to "public") is supposed to be converted into an OVS port of br-ex on the Network Node. This Libvirt subnet, via interface virbr3 (172.24.4.225), provides the VMs running on the Compute Node with access to the Internet, since it matches the external network 172.24.4.224/28 created by the packstack installation.
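The match between the Libvirt "public" subnet and packstack's external network is simple arithmetic: a /28 holds 16 addresses, the first usable one (172.24.4.225) goes to virbr3 as the gateway, and the remaining ones serve as floating IPs. A quick sketch of that calculation:

```shell
#!/bin/sh
# Sketch: compute the usable host range of 172.24.4.224/28.
# .225 is taken by virbr3 (gateway); .226-.238 remain for floating IPs.
base=224
size=$((1 << (32 - 28)))        # 16 addresses in a /28
first=$((base + 1))             # network address + 1
last=$((base + size - 2))       # broadcast address - 1
echo "usable: 172.24.4.${first} - 172.24.4.${last}"
```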

***********************************************************************************
3. Third Libvirt subnet “vteps” serves  for VTEPs endpoint simulation. Network and Compute Node VMs are attached to this subnet.
***********************************************************************************
Start testing following the RH instructions
per https://www.rdoproject.org/RDO_test_day_Kilo#How_To_Test

# yum install http://rdoproject.org/repos/openstack-kilo/rdo-testing-kilo.rpm
# yum install -y openstack-packstack
*******************************************************
Install rdo-testing-kilo.rpm on all three nodes due to
*******************************************************

https://bugzilla.redhat.com/show_bug.cgi?id=1218750

Keep SELINUX=enforcing.
Package openstack-selinux-0.6.31-1.el7.noarch will be installed by the prescript
puppet on all nodes of the deployment.

*********************
Packstack answer file
*********************

[general]
CONFIG_SSH_KEY=/root/.ssh/id_rsa.pub
CONFIG_GLANCE_INSTALL=y
CONFIG_CINDER_INSTALL=y
CONFIG_NOVA_INSTALL=y
CONFIG_NEUTRON_INSTALL=y
CONFIG_HORIZON_INSTALL=y
CONFIG_SWIFT_INSTALL=y
CONFIG_CEILOMETER_INSTALL=y
CONFIG_HEAT_INSTALL=n
CONFIG_CLIENT_INSTALL=y
CONFIG_NTP_SERVERS=
CONFIG_NAGIOS_INSTALL=y
EXCLUDE_SERVERS=
CONFIG_DEBUG_MODE=n
CONFIG_CONTROLLER_HOST=192.169.142.127
CONFIG_COMPUTE_HOSTS=192.169.142.137
CONFIG_NETWORK_HOSTS=192.169.142.147
CONFIG_VMWARE_BACKEND=n
CONFIG_UNSUPPORTED=n
CONFIG_VCENTER_HOST=
CONFIG_VCENTER_USER=
CONFIG_VCENTER_CLUSTER_NAME=
CONFIG_STORAGE_HOST=192.169.142.127
CONFIG_USE_EPEL=y
CONFIG_REPO=
CONFIG_RH_USER=
CONFIG_SATELLITE_URL=
CONFIG_RH_PW=
CONFIG_RH_OPTIONAL=y
CONFIG_RH_PROXY=
CONFIG_RH_PROXY_PORT=
CONFIG_RH_PROXY_USER=
CONFIG_RH_PROXY_PW=
CONFIG_SATELLITE_USER=
CONFIG_SATELLITE_PW=
CONFIG_SATELLITE_AKEY=
CONFIG_SATELLITE_CACERT=
CONFIG_SATELLITE_PROFILE=
CONFIG_SATELLITE_FLAGS=
CONFIG_SATELLITE_PROXY=
CONFIG_SATELLITE_PROXY_USER=
CONFIG_SATELLITE_PROXY_PW=
CONFIG_AMQP_BACKEND=rabbitmq
CONFIG_AMQP_HOST=192.169.142.127
CONFIG_AMQP_ENABLE_SSL=n
CONFIG_AMQP_ENABLE_AUTH=n
CONFIG_AMQP_NSS_CERTDB_PW=PW_PLACEHOLDER
CONFIG_AMQP_SSL_PORT=5671
CONFIG_AMQP_SSL_CERT_FILE=/etc/pki/tls/certs/amqp_selfcert.pem
CONFIG_AMQP_SSL_KEY_FILE=/etc/pki/tls/private/amqp_selfkey.pem
CONFIG_AMQP_SSL_SELF_SIGNED=y
CONFIG_AMQP_AUTH_USER=amqp_user
CONFIG_KEYSTONE_DB_PW=abcae16b785245c3
CONFIG_KEYSTONE_REGION=RegionOne
CONFIG_KEYSTONE_DEMO_PW=bf737b785cfa4398
CONFIG_KEYSTONE_TOKEN_FORMAT=UUID
CONFIG_KEYSTONE_SERVICE_NAME=keystone
CONFIG_GLANCE_DB_PW=41264fc52ffd4fe8
CONFIG_GLANCE_KS_PW=f6a9398960534797
CONFIG_GLANCE_BACKEND=file
CONFIG_CINDER_DB_PW=5ac08c6d09ba4b69
CONFIG_CINDER_KS_PW=c8cb1ecb8c2b4f6f
CONFIG_CINDER_BACKEND=lvm
CONFIG_CINDER_VOLUMES_CREATE=y
CONFIG_CINDER_VOLUMES_SIZE=10G
CONFIG_CINDER_GLUSTER_MOUNTS=
CONFIG_CINDER_NFS_MOUNTS=
CONFIG_CINDER_NETAPP_HOSTNAME=
CONFIG_CINDER_NETAPP_SERVER_PORT=80
CONFIG_CINDER_NETAPP_STORAGE_FAMILY=ontap_cluster
CONFIG_CINDER_NETAPP_TRANSPORT_TYPE=http
CONFIG_CINDER_NETAPP_STORAGE_PROTOCOL=nfs
CONFIG_CINDER_NETAPP_SIZE_MULTIPLIER=1.0
CONFIG_CINDER_NETAPP_EXPIRY_THRES_MINUTES=720
CONFIG_CINDER_NETAPP_THRES_AVL_SIZE_PERC_START=20
CONFIG_CINDER_NETAPP_THRES_AVL_SIZE_PERC_STOP=60
CONFIG_CINDER_NETAPP_NFS_SHARES_CONFIG=
CONFIG_CINDER_NETAPP_VOLUME_LIST=
CONFIG_CINDER_NETAPP_VFILER=
CONFIG_CINDER_NETAPP_VSERVER=
CONFIG_CINDER_NETAPP_CONTROLLER_IPS=
CONFIG_CINDER_NETAPP_WEBSERVICE_PATH=/devmgr/v2
CONFIG_CINDER_NETAPP_STORAGE_POOLS=
CONFIG_NOVA_DB_PW=1e1b5aeeeaf342a8
CONFIG_NOVA_KS_PW=d9583177a2444f06
CONFIG_NOVA_SCHED_CPU_ALLOC_RATIO=16.0
CONFIG_NOVA_SCHED_RAM_ALLOC_RATIO=1.5
CONFIG_NOVA_COMPUTE_MIGRATE_PROTOCOL=tcp
CONFIG_NOVA_COMPUTE_PRIVIF=eth1
CONFIG_NOVA_NETWORK_MANAGER=nova.network.manager.FlatDHCPManager
CONFIG_NOVA_NETWORK_PUBIF=eth0
CONFIG_NOVA_NETWORK_PRIVIF=eth1
CONFIG_NOVA_NETWORK_FIXEDRANGE=192.168.32.0/22
CONFIG_NOVA_NETWORK_FLOATRANGE=10.3.4.0/22
CONFIG_NOVA_NETWORK_DEFAULTFLOATINGPOOL=nova
CONFIG_NOVA_NETWORK_AUTOASSIGNFLOATINGIP=n
CONFIG_NOVA_NETWORK_VLAN_START=100
CONFIG_NOVA_NETWORK_NUMBER=1
CONFIG_NOVA_NETWORK_SIZE=255
CONFIG_NEUTRON_KS_PW=808e36e154bd4cee
CONFIG_NEUTRON_DB_PW=0e2b927a21b44737
CONFIG_NEUTRON_L3_EXT_BRIDGE=br-ex
CONFIG_NEUTRON_L2_PLUGIN=ml2
CONFIG_LBAAS_INSTALL=n
CONFIG_NEUTRON_METERING_AGENT_INSTALL=n
CONFIG_NEUTRON_FWAAS=n
CONFIG_NEUTRON_ML2_TYPE_DRIVERS=vxlan
CONFIG_NEUTRON_ML2_TENANT_NETWORK_TYPES=vxlan
CONFIG_NEUTRON_ML2_MECHANISM_DRIVERS=openvswitch
CONFIG_NEUTRON_ML2_FLAT_NETWORKS=*
CONFIG_NEUTRON_ML2_VLAN_RANGES=
CONFIG_NEUTRON_ML2_TUNNEL_ID_RANGES=1001:2000
CONFIG_NEUTRON_ML2_VXLAN_GROUP=239.1.1.2
CONFIG_NEUTRON_ML2_VNI_RANGES=1001:2000
CONFIG_NEUTRON_L2_AGENT=openvswitch
CONFIG_NEUTRON_LB_TENANT_NETWORK_TYPE=local
CONFIG_NEUTRON_LB_VLAN_RANGES=
CONFIG_NEUTRON_LB_INTERFACE_MAPPINGS=
CONFIG_NEUTRON_OVS_TENANT_NETWORK_TYPE=vxlan
CONFIG_NEUTRON_OVS_VLAN_RANGES=
CONFIG_NEUTRON_OVS_BRIDGE_MAPPINGS=physnet1:br-ex
CONFIG_NEUTRON_OVS_BRIDGE_IFACES=
CONFIG_NEUTRON_OVS_TUNNEL_RANGES=1001:2000
CONFIG_NEUTRON_OVS_TUNNEL_IF=eth1
CONFIG_NEUTRON_OVS_VXLAN_UDP_PORT=4789
CONFIG_HORIZON_SSL=n
CONFIG_SSL_CERT=
CONFIG_SSL_KEY=
CONFIG_SSL_CACHAIN=
CONFIG_SWIFT_KS_PW=8f75bfd461234c30
CONFIG_SWIFT_STORAGES=
CONFIG_SWIFT_STORAGE_ZONES=1
CONFIG_SWIFT_STORAGE_REPLICAS=1
CONFIG_SWIFT_STORAGE_FSTYPE=ext4
CONFIG_SWIFT_HASH=a60aacbedde7429a
CONFIG_SWIFT_STORAGE_SIZE=2G
CONFIG_PROVISION_DEMO=y
CONFIG_PROVISION_TEMPEST=n
CONFIG_PROVISION_TEMPEST_USER=
CONFIG_PROVISION_TEMPEST_USER_PW=44faa4ebc3da4459
CONFIG_PROVISION_DEMO_FLOATRANGE=172.24.4.224/28
CONFIG_PROVISION_TEMPEST_REPO_URI=https://github.com/openstack/tempest.git
CONFIG_PROVISION_TEMPEST_REPO_REVISION=master
CONFIG_PROVISION_ALL_IN_ONE_OVS_BRIDGE=n
CONFIG_HEAT_DB_PW=PW_PLACEHOLDER
CONFIG_HEAT_AUTH_ENC_KEY=fc3fb7fee61e46b0
CONFIG_HEAT_KS_PW=PW_PLACEHOLDER
CONFIG_HEAT_CLOUDWATCH_INSTALL=n
CONFIG_HEAT_USING_TRUSTS=y
CONFIG_HEAT_CFN_INSTALL=n
CONFIG_HEAT_DOMAIN=heat
CONFIG_CEILOMETER_SECRET=19ae0e7430174349
CONFIG_CEILOMETER_KS_PW=337b08d4b3a44753
CONFIG_MONGODB_HOST=192.169.142.127
CONFIG_NAGIOS_PW=02f168ee8edd44e4
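An answer file like the one above is normally produced with `packstack --gen-answer-file=<file>` and then edited before running `packstack --answer-file=<file>`. The host entries of this deployment can be patched in with sed; a sketch, demonstrated here against a throwaway stub file rather than a real generated answer file:

```shell
#!/bin/sh
# Sketch: set the controller/compute/network IPs of this deployment in a
# packstack answer file via sed. Demonstrated on a throwaway stub copy.
F=/tmp/answers-demo.txt
printf 'CONFIG_CONTROLLER_HOST=\nCONFIG_COMPUTE_HOSTS=\nCONFIG_NETWORK_HOSTS=\nCONFIG_STORAGE_HOST=\n' > "$F"
sed -i 's/^CONFIG_CONTROLLER_HOST=.*/CONFIG_CONTROLLER_HOST=192.169.142.127/' "$F"
sed -i 's/^CONFIG_COMPUTE_HOSTS=.*/CONFIG_COMPUTE_HOSTS=192.169.142.137/' "$F"
sed -i 's/^CONFIG_NETWORK_HOSTS=.*/CONFIG_NETWORK_HOSTS=192.169.142.147/' "$F"
sed -i 's/^CONFIG_STORAGE_HOST=.*/CONFIG_STORAGE_HOST=192.169.142.127/' "$F"
cat "$F"
```

Against the real answer file the same sed lines would be followed by `packstack --answer-file=<file>` run from the Controller.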

**********************************************************************************
Upon packstack completion, create the following files on the Network Node,
designed to match the external network created by the installer
**********************************************************************************

[root@ip-192-169-142-147 network-scripts]# cat ifcfg-br-ex

DEVICE="br-ex"
BOOTPROTO="static"
DNS1="83.221.202.254"
GATEWAY="172.24.4.225"
NM_CONTROLLED="no"
TYPE="OVSIntPort"
OVS_BRIDGE=br-ex
DEVICETYPE="ovs"
DEFROUTE="yes"
IPV4_FAILURE_FATAL="yes"
IPV6INIT=no

[root@ip-192-169-142-147 network-scripts]# cat ifcfg-eth3

DEVICE="eth3"
ONBOOT="yes"
TYPE="OVSPort"
DEVICETYPE="ovs"
OVS_BRIDGE=br-ex
NM_CONTROLLED=no
IPV6INIT=no
*************************************************
Next steps to be performed on the Network Node :-
*************************************************

# chkconfig network on
# systemctl stop NetworkManager
# systemctl disable NetworkManager
# service network restart
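After the restart, `ovs-vsctl show` on the Network Node should list eth3 under Bridge br-ex. A small awk check of that condition, demonstrated against an inline sample of the output (on a live node, pipe the real `ovs-vsctl show` into the same awk program instead):

```shell
#!/bin/sh
# Verify that interface eth3 sits under "Bridge br-ex" in ovs-vsctl show
# style output. The sample below is inline; replace it with real output.
sample='Bridge br-ex
    Port "eth3"
        Interface "eth3"
Bridge br-tun
    Port br-tun'
echo "$sample" | awk '
  /^Bridge /         { br = $2 }
  /Interface "eth3"/ { if (br == "br-ex") found = 1 }
  END { if (found) print "eth3 attached to br-ex"; else exit 1 }'
```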

[root@ip-192-169-142-147 ~]# ovs-vsctl show

Bridge br-ex
Port phy-br-ex
Interface phy-br-ex
type: patch
options: {peer=int-br-ex}
Port "eth3"
Interface "eth3"

Port br-ex
Interface br-ex
type: internal
Port "eth2"
Interface "eth2"
Port "qg-d433fa46-e2"
Interface "qg-d433fa46-e2"
type: internal
Bridge br-tun
fail_mode: secure
Port "vxlan-0a000089"
Interface "vxlan-0a000089"
type: vxlan
options: {df_default="true", in_key=flow, local_ip="10.0.0.147", out_key=flow, remote_ip="10.0.0.137"}
Port br-tun
Interface br-tun
type: internal
Port patch-int
Interface patch-int
type: patch
options: {peer=patch-tun}
Bridge br-int
fail_mode: secure
Port int-br-ex
Interface int-br-ex
type: patch
options: {peer=phy-br-ex}
Port br-int
Interface br-int
type: internal
Port "tap70da94fb-c1"
tag: 1
Interface "tap70da94fb-c1"
type: internal
Port patch-tun
Interface patch-tun
type: patch
options: {peer=patch-int}
Port "qr-0737c492-f6"
tag: 1
Interface "qr-0737c492-f6"
type: internal
ovs_version: "2.3.1"
**********************************************************
Following below is the Network Node status verification
**********************************************************

== neutron services ==

neutron-server:                           inactive  (disabled on boot)
neutron-dhcp-agent:                    active
neutron-l3-agent:                         active
neutron-openvswitch-agent:         active
== Support services ==
libvirtd:                               active
openvswitch:                       active
dbus:                                   active

+————————————–+———-+——————————————————+
| id                                   | name     | subnets                                              |
+————————————–+———-+——————————————————+
| 7ecdfc27-57cf-410d-9a76-8e9eb76582cb | public   | 5fc0118a-f710-448d-af67-17dbfe01d5fc 172.24.4.224/28 |
| 98dd1928-96e8-47fb-a2fe-49292ae092ba | demo_net | ba2cded7-5546-4a64-aa49-7ef4d077dee3 50.0.0.0/24     |
+————————————–+———-+——————————————————+

+————————————–+————+——————————————————————————————————————————————————————————————+————-+——-+
| id                                   | name       | external_gateway_info                                                                                                                                                                   | distributed | ha    |

+————————————–+————+——————————————————————————————————————————————————————————————+————-+——-+

| d63ca3f3-5b71-4540-bb5c-01b44ce3081b | RouterDemo | {“network_id”: “7ecdfc27-57cf-410d-9a76-8e9eb76582cb”, “enable_snat”: true, “external_fixed_ips”: [{“subnet_id”: “5fc0118a-f710-448d-af67-17dbfe01d5fc”, “ip_address”: “172.24.4.229”}]} | False       | False |

+————————————–+————+——————————————————————————————————————————————————————————————+————-+——-+

+————————————–+——+——————-+————————————————————————————-+
| id                                   | name | mac_address       | fixed_ips                                                                           |
+————————————–+——+——————-+————————————————————————————-+
| d433fa46-e203-4fdd-b3f7-dcbc884e9f1e |      | fa:16:3e:02:ef:51 | {“subnet_id”: “5fc0118a-f710-448d-af67-17dbfe01d5fc”, “ip_address”: “172.24.4.229”} |
+————————————–+——+——————-+————————————————————————————-+

| status                | ACTIVE                                                                          |

[root@ip-192-169-142-147 ~(keystone_admin)]# dmesg | grep promisc
[   14.174240] device ovs-system entered promiscuous mode
[   14.184284] device br-ex entered promiscuous mode
[   14.200068] device eth2 entered promiscuous mode
[   14.200253] device eth3 entered promiscuous mode
[   14.207443] device br-int entered promiscuous mode
[   14.209360] device br-tun entered promiscuous mode
[   27.311116] device virbr0-nic entered promiscuous mode
[  142.406262] device tap70da94fb-c1 entered promiscuous mode
[  144.045031] device qr-0737c492-f6 entered promiscuous mode
[  144.792618] device qg-d433fa46-e2 entered promiscuous mode

**************************************************************
Compute Node Status
**************************************************************

[root@ip-192-169-142-137 ~]#  dmesg | grep promisc
[    9.683238] device ovs-system entered promiscuous mode
[    9.699664] device br-ex entered promiscuous mode
[    9.735288] device br-int entered promiscuous mode
[    9.748086] device br-tun entered promiscuous mode
[  137.203583] device qvbe7160159-fd entered promiscuous mode
[  137.288235] device qvoe7160159-fd entered promiscuous mode
[  137.715508] device qvbe90ef79b-80 entered promiscuous mode
[  137.796083] device qvoe90ef79b-80 entered promiscuous mode
[  605.884770] device tape90ef79b-80 entered promiscuous mode
[  767.083214] device qvbbf1c441c-ad entered promiscuous mode
[  767.184783] device qvobf1c441c-ad entered promiscuous mode
[  767.446575] device tapbf1c441c-ad entered promiscuous mode
[  973.679071] device qvb3c3e98d7-2d entered promiscuous mode
[  973.775480] device qvo3c3e98d7-2d entered promiscuous mode
[  973.997621] device tap3c3e98d7-2d entered promiscuous mode
[ 1863.868574] device tapbf1c441c-ad left promiscuous mode
[ 1889.386251] device tape90ef79b-80 left promiscuous mode
[ 2256.698108] device tap3c3e98d7-2d left promiscuous mode
[ 2336.931559] device qvb6597428d-5b entered promiscuous mode
[ 2337.021941] device qvo6597428d-5b entered promiscuous mode
[ 2337.283293] device tap6597428d-5b entered promiscuous mode
[ 4092.577561] device tap6597428d-5b left promiscuous mode
[ 4099.798474] device tap6597428d-5b entered promiscuous mode
[ 5098.563689] device tape90ef79b-80 entered promiscuous mode

[root@ip-192-169-142-137 ~]# ovs-vsctl show
a0cb406e-b028-4b09-8849-e6e2869ab051
Bridge br-tun
fail_mode: secure
Port "vxlan-0a000093"
Interface "vxlan-0a000093"
type: vxlan
options: {df_default="true", in_key=flow, local_ip="10.0.0.137", out_key=flow, remote_ip="10.0.0.147"}
Port patch-int
Interface patch-int
type: patch
options: {peer=patch-tun}
Port br-tun
Interface br-tun
type: internal
Bridge br-int
fail_mode: secure
Port "qvoe90ef79b-80"
tag: 1
Interface "qvoe90ef79b-80"
Port br-int
Interface br-int
type: internal
tag: 1
Port patch-tun
Interface patch-tun
type: patch
options: {peer=patch-int}
Port int-br-ex
Interface int-br-ex
type: patch
options: {peer=phy-br-ex}
Port "qvo6597428d-5b"
tag: 1
Interface "qvo6597428d-5b"
Bridge br-ex
Port br-ex
Interface br-ex
type: internal
Port phy-br-ex
Interface phy-br-ex
type: patch
options: {peer=int-br-ex}
ovs_version: "2.3.1"

[root@ip-192-169-142-137 ~]# brctl show

bridge name    bridge id        STP enabled    interfaces
qbr6597428d-5b       8000.1a483dd02cee    no        qvb6597428d-5b
tap6597428d-5b
qbre90ef79b-80        8000.16342824f4ba    no        qvbe90ef79b-80
tape90ef79b-80
**************************************************
Controller Node status verification
**************************************************

== Nova services ==

openstack-nova-api:                     active
openstack-nova-cert:                    active
openstack-nova-compute:             inactive  (disabled on boot)
openstack-nova-network:              inactive  (disabled on boot)
openstack-nova-scheduler:           active
openstack-nova-conductor:           active
== Glance services ==
openstack-glance-api:                   active
openstack-glance-registry:            active
== Keystone service ==
openstack-keystone:                     active
== Horizon service ==
openstack-dashboard:                    active
== neutron services ==
neutron-server:                         active
neutron-dhcp-agent:                  inactive  (disabled on boot)
neutron-l3-agent:                       inactive  (disabled on boot)
== Swift services ==
openstack-swift-proxy:                 active
openstack-swift-account:              active
openstack-swift-container:            active
openstack-swift-object:                 active
== Cinder services ==
openstack-cinder-api:                      active
openstack-cinder-scheduler:            active
openstack-cinder-volume:                active
openstack-cinder-backup:                active
== Ceilometer services ==
openstack-ceilometer-api:                 active
openstack-ceilometer-central:           active
openstack-ceilometer-compute:         inactive  (disabled on boot)
openstack-ceilometer-collector:         active
openstack-ceilometer-alarm-notifier:    active
openstack-ceilometer-alarm-evaluator:   active
== Support services ==
mysqld:                                    inactive  (disabled on boot)
libvirtd:                                    active
dbus:                                        active
target:                                      active
rabbitmq-server:                       active
memcached:                             active
== Keystone users ==
/usr/lib/python2.7/site-packages/keystoneclient/shell.py:65: DeprecationWarning: The keystone CLI is deprecated in favor of python-openstackclient. For a Python library, continue using python-keystoneclient.

'python-keystoneclient.', DeprecationWarning)

+———————————-+————+———+———————-+
|                id                |    name    | enabled |        email         |
+———————————-+————+———+———————-+
| 4e1008fd31944fecbb18cdc215af23ec |   admin    |   True  |    root@localhost    |
| 621b84dd4b904760b8aa0cc7b897c95c | ceilometer |   True  | ceilometer@localhost |
| 4d6cdea3b7bc49948890457808c0f6f8 |   cinder   |   True  |   cinder@localhost   |
| 8393bb4de49a44b798af8b118b9f0eb6 |    demo    |   True  |                      |
| f9be6eaa789e4b3c8771372fffb00230 |   glance   |   True  |   glance@localhost   |
| a518b95a92044ad9a4b04f0be90e385f |  neutron   |   True  |  neutron@localhost   |
| 40dddef540fb4fa5a69fb7baa03de657 |    nova    |   True  |    nova@localhost    |
| 5fbb2b97ab9d4192a3f38f090e54ffb1 |   swift    |   True  |   swift@localhost    |
+———————————-+————+———+———————-+
== Glance images ==
+————————————–+————–+————-+——————+———–+——–+
| ID                                   | Name         | Disk Format | Container Format | Size      | Status |
+————————————–+————–+————-+——————+———–+——–+
| 1b4a6b08-d63c-4d8d-91da-16f6ba177009 | cirros       | qcow2       | bare             | 13200896  | active |
| cb05124d-0d30-43a7-a033-0b7ff0ea1d47 | Fedor21image | qcow2       | bare             | 158443520 | active |
+————————————–+————–+————-+——————+———–+——–+
== Nova managed services ==
+—-+——————+—————————————-+———-+———+——-+—————————-+—————–+

| Id | Binary           | Host                                   | Zone     | Status  | State | Updated_at                 | Disabled Reason |

+—-+——————+—————————————-+———-+———+——-+—————————-+—————–+
| 1  | nova-consoleauth | ip-192-169-142-127.ip.secureserver.net | internal | enabled | up    | 2015-05-09T14:14:16.000000 | –               |
| 2  | nova-scheduler   | ip-192-169-142-127.ip.secureserver.net | internal | enabled | up    | 2015-05-09T14:14:17.000000 | –               |
| 3  | nova-conductor   | ip-192-169-142-127.ip.secureserver.net | internal | enabled | up    | 2015-05-09T14:14:16.000000 | –               |
| 4  | nova-cert        | ip-192-169-142-127.ip.secureserver.net | internal | enabled | up    | 2015-05-09T14:14:17.000000 | –               |
| 5  | nova-compute     | ip-192-169-142-137.ip.secureserver.net | nova     | enabled | up    | 2015-05-09T14:14:21.000000 | –               |

+—-+——————+—————————————-+———-+———+——-+—————————-+—————–+
== Nova networks ==
+————————————–+———-+——+
| ID                                   | Label    | Cidr |

+————————————–+———-+——+
| 7ecdfc27-57cf-410d-9a76-8e9eb76582cb | public   | –    |
| 98dd1928-96e8-47fb-a2fe-49292ae092ba | demo_net | –    |
+————————————–+———-+——+
== Nova instance flavors ==
+—-+———–+———–+——+———–+——+——-+————-+———–+

| ID | Name      | Memory_MB | Disk | Ephemeral | Swap | VCPUs | RXTX_Factor | Is_Public |

+—-+———–+———–+——+———–+——+——-+————-+———–+
| 1  | m1.tiny   | 512       | 1    | 0         |      | 1     | 1.0         | True      |
| 2  | m1.small  | 2048      | 20   | 0         |      | 1     | 1.0         | True      |
| 3  | m1.medium | 4096      | 40   | 0         |      | 2     | 1.0         | True      |
| 4  | m1.large  | 8192      | 80   | 0         |      | 4     | 1.0         | True      |
| 5  | m1.xlarge | 16384     | 160  | 0         |      | 8     | 1.0         | True      |
+—-+———–+———–+——+———–+——+——-+————-+———–+
== Nova instances ==
+—-+——+——–+————+————-+———-+

| ID | Name | Status | Task State | Power State | Networks |

+—-+——+——–+————+————-+———-+
+—-+——+——–+————+————-+———-+

+—-+—————————————-+——-+———+
| ID | Hypervisor hostname                    | State | Status  |
+—-+—————————————-+——-+———+
| 1  | ip-192-169-142-137.ip.secureserver.net | up    | enabled |
+—-+—————————————-+——-+———+

+————————————–+——————–+—————————————-+——-+—————-+—————————+
| id                                   | agent_type         | host                                   | alive | admin_state_up | binary                    |
+————————————–+——————–+—————————————-+——-+—————-+—————————+

| 22af7b3b-232f-4642-9418-d1c8021c7eb5 | Open vSwitch agent | ip-192-169-142-147.ip.secureserver.net | 🙂   | True           | neutron-openvswitch-agent |
| 34e1078c-c75b-4d14-b813-b273ea8f7b86 | L3 agent           | ip-192-169-142-147.ip.secureserver.net | 🙂   | True           | neutron-l3-agent          |
| 5d652094-6711-409d-8546-e29c09e03d5a | Metadata agent     | ip-192-169-142-147.ip.secureserver.net | 🙂   | True           | neutron-metadata-agent    |
| 8a8ad680-1071-4c7f-8787-ba4ef0a7dfb7 | DHCP agent         | ip-192-169-142-147.ip.secureserver.net | 🙂   | True           | neutron-dhcp-agent        |
| d81e97af-c210-4855-af06-fb1d139e2e10 | Open vSwitch agent | ip-192-169-142-137.ip.secureserver.net | 🙂   | True           | neutron-openvswitch-agent |
+————————————–+——————–+—————————————-+——-+—————-+—————————+

+—-+——————+—————————————-+———-+———+——-+—————————-+—————–+

| Id | Binary           | Host                                   | Zone     | Status  | State | Updated_at                 | Disabled Reason |

+—-+——————+—————————————-+———-+———+——-+—————————-+—————–+
| 1  | nova-consoleauth | ip-192-169-142-127.ip.secureserver.net | internal | enabled | up    | 2015-05-09T14:15:16.000000 | –               |
| 2  | nova-scheduler   | ip-192-169-142-127.ip.secureserver.net | internal | enabled | up    | 2015-05-09T14:15:17.000000 | –               |
| 3  | nova-conductor   | ip-192-169-142-127.ip.secureserver.net | internal | enabled | up    | 2015-05-09T14:15:16.000000 | –               |
| 4  | nova-cert        | ip-192-169-142-127.ip.secureserver.net | internal | enabled | up    | 2015-05-09T14:15:17.000000 | –               |
| 5  | nova-compute     | ip-192-169-142-137.ip.secureserver.net | nova     | enabled | up    | 2015-05-09T14:15:21.000000 | –               |
+—-+——————+—————————————-+———-+———+——-+—————————-+—————–+

Nova libvirt-xen driver fails to schedule instance under Xen 4.4.1 Hypervisor with libxl toolstack

April 13, 2015

UPDATE as of 16/04/2015
For now http://www.slideshare.net/xen_com_mgr/openstack-xenfinal
is supposed to work only with nova networking, per Anthony PERARD;
Neutron appears to be an issue.
Please view the details of the troubleshooting and diagnostics obtained (thanks to Ian Campbell) at
http://lists.xen.org/archives/html/xen-devel/2015-04/msg01856.html
END UPDATE

This post is written in regard to two publications from February 2015.
First:   http://wiki.xen.org/wiki/OpenStack_via_DevStack
Second : http://www.slideshare.net/xen_com_mgr/openstack-xenfinal

Both of them are devoted to the same problem with the nova libvirt-xen driver. The second one states that everything is supposed to be fine as soon as some mysterious patch merges into mainline libvirt. Neither works for me: both generate errors in libxl-driver.log even with libvirt 1.2.14 (the most recent version as of the time of writing).

For a better understanding of the problem raised, view also https://ask.openstack.org/en/question/64942/nova-libvirt-xen-driver-and-patch-feb-2015-in-upstream-libvirt/

I've followed the more accurately written second one :-

On Ubuntu 14.04.2

# apt-get update
# apt-get install xen-hypervisor-4.4-amd64
# sudo reboot

$ git clone https://git.openstack.org/openstack-dev/devstack

Created local.conf under devstack folder as follows :-

[[local|localrc]]
HOST_IP=192.168.1.57
SERVICE_TOKEN=a682f596-76f3-11e3-b3b2-e716f9080d50

FLOATING_RANGE=192.168.10.0/24
FLAT_INTERFACE=eth0
Q_FLOATING_ALLOCATION_POOL=start=192.168.10.150,end=192.168.10.254
PUBLIC_NETWORK_GATEWAY=192.168.10.15

# Useful logging options for debugging:
DEST=/opt/stack
LOGFILE=$DEST/logs/stack.sh.log
SCREEN_LOGDIR=$DEST/logs/screen

# The default fixed range (10.0.0.0/24) conflicted with an address
# range I was using locally.
FIXED_RANGE=10.254.1.0/24
NETWORK_GATEWAY=10.254.1.1

# Services
disable_service n-net
enable_service n-cauth
enable_service q-svc
enable_service q-agt
enable_service q-dhcp
enable_service q-l3
enable_service q-meta
enable_service horizon
disable_service tempest

# This is a Xen Project host:
LIBVIRT_TYPE=xen

Ran ./stack.sh and successfully completed installation; libvirt versions 1.2.2, 1.2.9 and 1.2.14 have been tested. The first one is the default on Trusty; 1.2.9 && 1.2.14 have been built and installed after stack.sh completion. For every version of libvirt tested, a new hardware instance of Ubuntu 14.04.2 has been created.

Manual libvirt upgrade was done via :-

# apt-get build-dep libvirt
# tar xvzf libvirt-1.2.14.tar.gz -C /usr/src
# cd /usr/src/libvirt-1.2.14
# ./configure --prefix=/usr/
# make
# make install
# service libvirt-bin restart

root@ubuntu-system:~# virsh --connect xen:///
Welcome to virsh, the virtualization interactive terminal.

Type: 'help' for help with commands
'quit' to quit

virsh # version
Compiled against library: libvirt 1.2.14
Using library: libvirt 1.2.14
Using API: Xen 1.2.14
Running hypervisor: Xen 4.4.0

Per page 19 of the second post the xen.gz command line was tuned.

ubuntu@ubuntu-system:~/devstack$ nova image-meta cirros-0.3.2-x86_64-uec set vm_mode=HVM
ubuntu@ubuntu-system:~/devstack$ nova image-meta cirros-0.3.2-x86_64-uec delete vm_mode

An attempt to launch an instance (nova-compute is up) fails with the error "No available host found" in n-sch.log on the Nova side.

The libxl-driver.log reports :-

root@ubuntu-system:/var/log/libvirt/libxl# ls -l
total 32
-rw-r--r-- 1 root root 30700 Apr 12 03:47 libxl-driver.log

**************************************************************************************

libxl: debug: libxl_dm.c:1320:libxl__spawn_local_dm: Spawning device-model /usr/bin/qemu-system-i386 with arguments:
libxl: debug: libxl_dm.c:1322:libxl__spawn_local_dm: /usr/bin/qemu-system-i386
libxl: debug: libxl_dm.c:1322:libxl__spawn_local_dm: -xen-domid
libxl: debug: libxl_dm.c:1322:libxl__spawn_local_dm: 2
libxl: debug: libxl_dm.c:1322:libxl__spawn_local_dm: -chardev
libxl: debug: libxl_dm.c:1322:libxl__spawn_local_dm: socket,id=libxl-cmd,path=/var/run/xen/qmp-libxl-2,server,nowait
libxl: debug: libxl_dm.c:1322:libxl__spawn_local_dm: -mon
libxl: debug: libxl_dm.c:1322:libxl__spawn_local_dm: chardev=libxl-cmd,mode=control
libxl: debug: libxl_dm.c:1322:libxl__spawn_local_dm: -nodefaults
libxl: debug: libxl_dm.c:1322:libxl__spawn_local_dm: -xen-attach
libxl: debug: libxl_dm.c:1322:libxl__spawn_local_dm: -name
libxl: debug: libxl_dm.c:1322:libxl__spawn_local_dm: instance-00000002
libxl: debug: libxl_dm.c:1322:libxl__spawn_local_dm: -vnc
libxl: debug: libxl_dm.c:1322:libxl__spawn_local_dm: 127.0.0.1:1
libxl: debug: libxl_dm.c:1322:libxl__spawn_local_dm: -display
libxl: debug: libxl_dm.c:1322:libxl__spawn_local_dm: none
libxl: debug: libxl_dm.c:1322:libxl__spawn_local_dm: -k
libxl: debug: libxl_dm.c:1322:libxl__spawn_local_dm: en-us
libxl: debug: libxl_dm.c:1322:libxl__spawn_local_dm: -machine
libxl: debug: libxl_dm.c:1322:libxl__spawn_local_dm: xenpv
libxl: debug: libxl_dm.c:1322:libxl__spawn_local_dm: -m
libxl: debug: libxl_dm.c:1322:libxl__spawn_local_dm: 513
libxl: debug: libxl_event.c:570:libxl__ev_xswatch_register: watch w=0x7f36cc001990 wpath=/local/domain/0/device-model/2/state token=3/3: register slotnum=3
libxl: debug: libxl_create.c:1356:do_domain_create: ao 0x7f36cc0012e0: inprogress: poller=0x7f36d8013130, flags=i
libxl: debug: libxl_event.c:514:watchfd_callback: watch w=0x7f36cc001990 wpath=/local/domain/0/device-model/2/state token=3/3: event epath=/local/domain/0/device-model/2/state
libxl: debug: libxl_event.c:514:watchfd_callback: watch w=0x7f36cc001990 wpath=/local/domain/0/device-model/2/state token=3/3: event epath=/local/domain/0/device-model/2/state
libxl: debug: libxl_event.c:606:libxl__ev_xswatch_deregister: watch w=0x7f36cc001990 wpath=/local/domain/0/device-model/2/state token=3/3: deregister slotnum=3
libxl: debug: libxl_event.c:618:libxl__ev_xswatch_deregister: watch w=0x7f36cc001990: deregister unregistered
libxl: debug: libxl_qmp.c:696:libxl__qmp_initialize: connected to /var/run/xen/qmp-libxl-2
libxl: debug: libxl_qmp.c:296:qmp_handle_response: message type: qmp
libxl: debug: libxl_qmp.c:546:qmp_send_prepare: next qmp command: '{
"execute": "qmp_capabilities",
"id": 1
}

libxl: debug: libxl_qmp.c:296:qmp_handle_response: message type: return
libxl: debug: libxl_qmp.c:546:qmp_send_prepare: next qmp command: '{
"execute": "query-chardev",
"id": 2
}

libxl: debug: libxl_qmp.c:296:qmp_handle_response: message type: return
libxl: debug: libxl_qmp.c:546:qmp_send_prepare: next qmp command: '{
"execute": "query-vnc",
"id": 3
}

libxl: debug: libxl_qmp.c:296:qmp_handle_response: message type: return
libxl: debug: libxl_event.c:570:libxl__ev_xswatch_register: watch w=0x7f36f284b3e8 wpath=/local/domain/0/backend/vif/2/0/state token=3/4: register slotnum=3
libxl: debug: libxl_event.c:514:watchfd_callback: watch w=0x7f36f284b3e8 wpath=/local/domain/0/backend/vif/2/0/state token=3/4: event epath=/local/domain/0/backend/vif/2/0/state
libxl: debug: libxl_event.c:657:devstate_watch_callback: backend /local/domain/0/backend/vif/2/0/state wanted state 2 still waiting state 1
libxl: debug: libxl_event.c:514:watchfd_callback: watch w=0x7f36f284b3e8 wpath=/local/domain/0/backend/vif/2/0/state token=3/4: event epath=/local/domain/0/backend/vif/2/0/state
libxl: debug: libxl_event.c:653:devstate_watch_callback: backend /local/domain/0/backend/vif/2/0/state wanted state 2 ok
libxl: debug: libxl_event.c:606:libxl__ev_xswatch_deregister: watch w=0x7f36f284b3e8 wpath=/local/domain/0/backend/vif/2/0/state token=3/4: deregister slotnum=3
libxl: debug: libxl_event.c:618:libxl__ev_xswatch_deregister: watch w=0x7f36f284b3e8: deregister unregistered
libxl: debug: libxl_device.c:1023:device_hotplug: calling hotplug script: /etc/xen/scripts/vif-bridge online
libxl: debug: libxl_event.c:618:libxl__ev_xswatch_deregister: watch w=0x7f36f284b470: deregister unregistered
libxl: error: libxl_exec.c:118:libxl_report_child_exitstatus: /etc/xen/scripts/vif-bridge online [-1] exited with error status 1
libxl: error: libxl_device.c:1085:device_hotplug_child_death_cb: script: ip link set vif2.0 name tap5600079c-9e failed
libxl: debug: libxl_event.c:618:libxl__ev_xswatch_deregister: watch w=0x7f36f284b470: deregister unregistered
libxl: error: libxl_create.c:1226:domcreate_attach_vtpms: unable to add nic devices

libxl: debug: libxl_dm.c:1495:kill_device_model: Device Model signaled

Setup the most recent Nova Docker Driver via Devstack on Fedora 21

March 23, 2015

*********************************************************************************
UPDATE as 03/26/2015
To make the devstack configuration persistent between reboots on Fedora 21, i.e. restartable via ./rejoin-stack.sh, the following services must be enabled :-
*********************************************************************************
systemctl enable rabbitmq-server
systemctl enable openvswitch
systemctl enable httpd
systemctl enable mysqld

File /etc/rc.d/rc.local should contain ( in my case ) :-

ip addr flush dev br-ex ;
ip link set br-ex up ;
route add -net 10.254.1.0/24 gw 192.168.10.15 ;

The system is supposed to be shut down via :-
$ sudo ./unstack.sh
********************************************************************************

This post follows up http://blog.oddbit.com/2015/02/06/installing-nova-docker-on-fedora-21/ ; however, RDO Juno is not pre-installed here. The Nova-Docker driver is built first from the top commit of https://git.openstack.org/cgit/stackforge/nova-docker/ , and the next step is :-

$ git clone https://git.openstack.org/openstack-dev/devstack
$ cd devstack

Create local.conf under devstack following either of the two links provided
and run ./stack.sh to perform an AIO OpenStack installation, like it does
on Ubuntu 14.04. All steps preventing stack.sh from crashing on F21 are described right below.

# yum -y install git docker-io fedora-repos-rawhide
# yum --enablerepo=rawhide install python-six python-pip python-pbr systemd
# reboot
# yum -y install gcc python-devel   # required for the driver build

$ git clone http://github.com/stackforge/nova-docker.git
$ cd nova-docker
$ sudo pip install .

python-six gets downgraded to 1.2 during the driver's build; to raise it back to version 1.9 run :-

# yum --enablerepo=rawhide reinstall python-six

Run devstack with Lars's local.conf,
or view http://bderzhavets.blogspot.com/2015/02/set-up-nova-docker-driver-on-ubuntu.html for another version of local.conf
*****************************************************************************
My version of local.conf, which allows you to define the floating pool as you need, is a bit more flexible than the original
*****************************************************************************
[[local|localrc]]
HOST_IP=192.168.1.57
FLOATING_RANGE=192.168.10.0/24
FLAT_INTERFACE=eth0
Q_FLOATING_ALLOCATION_POOL=start=192.168.10.150,end=192.168.10.254
PUBLIC_NETWORK_GATEWAY=192.168.10.15

DEST=$HOME/stack
SERVICE_DIR=$DEST/status
DATA_DIR=$DEST/data
LOGFILE=$DEST/logs/stack.sh.log
LOGDIR=$DEST/logs

# The default fixed range (10.0.0.0/24) conflicted with an address
# range I was using locally.
FIXED_RANGE=10.254.1.0/24
NETWORK_GATEWAY=10.254.1.1

# Services

disable_service n-net
enable_service q-svc
enable_service q-agt
enable_service q-dhcp
enable_service q-l3
enable_service q-meta
enable_service horizon
disable_service tempest
# Introduce glance to docker images

[[post-config|$GLANCE_API_CONF]]
[DEFAULT]
container_formats=ami,ari,aki,bare,ovf,ova,docker

# Configure nova to use the nova-docker driver
[[post-config|$NOVA_CONF]]
[DEFAULT]
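The [DEFAULT] block above is empty in this copy of the post. Per the nova-docker documentation, the setting that normally goes here selects the driver; treat the following as an assumption, since the original line did not survive:

```ini
# assumed from the nova-docker wiki; not present in this copy of the post
compute_driver = novadocker.virt.docker.DockerDriver
```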

**************************************************************************************
After stack.sh completes, disable firewalld, because devstack does not interact with Fedora's firewalld when bringing up the OpenStack daemons, which require the corresponding ports to be opened.
***************************************************************************************

#  systemctl stop firewalld
#  systemctl disable firewalld

$ cd dev*
$ . openrc demo
$ neutron security-group-rule-create --protocol icmp \
--direction ingress --remote-ip-prefix 0.0.0.0/0 default
$ neutron security-group-rule-create --protocol tcp \
--port-range-min 22 --port-range-max 22 \
--direction ingress --remote-ip-prefix 0.0.0.0/0 default
$ neutron security-group-rule-create --protocol tcp \
--port-range-min 80 --port-range-max 80 \
--direction ingress --remote-ip-prefix 0.0.0.0/0 default

$ docker pull rastasheep/ubuntu-sshd:14.04
$ docker save rastasheep/ubuntu-sshd:14.04 | glance image-create --is-public=True --container-format=docker --disk-format=raw --name rastasheep/ubuntu-sshd:14.04

Launch new instance via uploaded image :-

$ . openrc demo
$ nova boot --image "rastasheep/ubuntu-sshd:14.04" --flavor m1.tiny \
--nic net-id=private-net-id UbuntuDocker

To provide internet access for the launched nova-docker instance run :-
# iptables -t nat -A POSTROUTING -o eth0 -j MASQUERADE
Horizon is unavailable, even though it is installed.

Set up Two Node RDO Juno ML2&OVS&VXLAN Cluster running Docker Hypervisor on Compute Node (CentOS 7, kernel 3.10.0-123.20.1.el7.x86_64)

February 6, 2015

It's quite obvious that for real applications it is important to get a successful Nova-Docker driver setup on Compute Nodes. It's nice when everything works on an AIO Juno host or on the Controller, but only as a demonstration. Perhaps I did something wrong, or perhaps it is for some other reason, but kernel version 3.10.0-123.20.1.el7.x86_64 seems to be the first that brings success on RDO Juno Compute nodes.

"Set up Nova-Docker on Controller && Network Node"

***************************************************
Set up  Nova-Docker Driver on Compute Node
***************************************************

# yum install python-pbr
# yum install docker-io -y
# git clone https://github.com/stackforge/nova-docker
# cd nova-docker
# git checkout stable/juno
# python setup.py install
# systemctl start docker
# systemctl enable docker
# chmod 660  /var/run/docker.sock
#  mkdir /etc/nova/rootwrap.d

************************************************
Create the docker.filters file:
************************************************

vi /etc/nova/rootwrap.d/docker.filters

Insert Lines

# nova-rootwrap command filters for setting up network in the docker driver
# This file should be owned by (and only-writeable by) the root user
[Filters]
# nova/virt/docker/driver.py: 'ln', '-sf', '/var/run/netns/.*'
ln: CommandFilter, /bin/ln, root

*****************************************
Update glance-api.conf
*****************************************
container_formats=ami,ari,aki,bare,ovf,ova,docker
:wq

******************************
Update nova.conf
******************************

vi /etc/nova/nova.conf
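The actual nova.conf edit is not shown in this copy of the post. Per the nova-docker stable/juno documentation, the usual change is the following (an assumption, not taken from the original post; verify against your deployment):

```ini
# assumed typical nova-docker setting in /etc/nova/nova.conf
[DEFAULT]
compute_driver = novadocker.virt.docker.DockerDriver
```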

************************
Restart Services
************************

`usermod -G docker nova`

systemctl restart openstack-nova-compute (on Compute)
systemctl status openstack-nova-compute
systemctl restart openstack-glance-api (on Controller && Network)

At this point `scp  /root/keystonerc_admin compute:/root`  from Controller to Compute Node

*************************************************************
Test the Nova-Docker driver installation on the Compute Node (RDO Juno, CentOS 7, kernel 3.10.0-123.20.1.el7.x86_64)
**************************************************************

*******************************************

Setup Ubuntu 14.04 with SSH access

*******************************************

First on Compute node

# docker pull rastasheep/ubuntu-sshd:14.04

# docker save rastasheep/ubuntu-sshd:14.04 | glance image-create --is-public=True --container-format=docker --disk-format=raw --name rastasheep/ubuntu-sshd:14.04

Second on Controller node launch Nova-Docker container , running on Compute via dashboard and assign floating IP address

*********************************************
Verify `docker ps ` on Compute Node
*********************************************

[root@juno1dev ~]# ssh 192.168.1.137

Last login: Fri Feb  6 15:38:49 2015 from juno1dev.localdomain

[root@juno2dev ~]# docker ps

CONTAINER ID        IMAGE                          COMMAND               CREATED             STATUS              PORTS               NAMES

ef23d030e35a        rastasheep/ubuntu-sshd:14.04   "/usr/sbin/sshd -D"   7 hours ago         Up 6 minutes                            nova-211bcb54-35ba-4f0a-a150-7e73546d8f46

[root@juno2dev ~]# ip netns

ef23d030e35af63c17698d1f4c6f7d8023c29455e9dff0288ce224657828993a
ca9aa6cb527f2302985817d3410a99c6f406f4820ed6d3f62485781d50f16590
fea73a69337334b36625e78f9a124e19bf956c73b34453f1994575b667e7401b
58834d3bbea1bffa368724527199d73d0d6fde74fa5d24de9cca41c29f978e31
********************************
On Controller run :-
********************************

[root@juno1dev ~]# ssh root@192.168.1.173
Last login: Fri Feb  6 12:11:19 2015 from 192.168.1.127

root@instance-0000002b:~# apt-get update

Ign http://archive.ubuntu.com trusty InRelease
Ign http://archive.ubuntu.com trusty-security InRelease
Hit http://archive.ubuntu.com trusty Release.gpg
Get:1 http://archive.ubuntu.com trusty-updates Release.gpg [933 B]
Get:2 http://archive.ubuntu.com trusty-security Release.gpg [933 B]
Hit http://archive.ubuntu.com trusty Release
Get:3 http://archive.ubuntu.com trusty-updates Release [62.0 kB]
Get:4 http://archive.ubuntu.com trusty-security Release [62.0 kB]
Hit http://archive.ubuntu.com trusty/main Sources
Hit http://archive.ubuntu.com trusty/restricted Sources
Hit http://archive.ubuntu.com trusty/universe Sources
Hit http://archive.ubuntu.com trusty/main amd64 Packages
Hit http://archive.ubuntu.com trusty/restricted amd64 Packages
Hit http://archive.ubuntu.com trusty/universe amd64 Packages
Get:5 http://archive.ubuntu.com trusty-updates/main Sources [208 kB]
Get:6 http://archive.ubuntu.com trusty-updates/restricted Sources [1874 B]
Get:7 http://archive.ubuntu.com trusty-updates/universe Sources [124 kB]
Get:8 http://archive.ubuntu.com trusty-updates/main amd64 Packages [524 kB]
Get:9 http://archive.ubuntu.com trusty-updates/restricted amd64 Packages [14.8 kB]
Get:10 http://archive.ubuntu.com trusty-updates/universe amd64 Packages [318 kB]
Get:11 http://archive.ubuntu.com trusty-security/main Sources [79.8 kB]
Get:12 http://archive.ubuntu.com trusty-security/restricted Sources [1874 B]
Get:13 http://archive.ubuntu.com trusty-security/universe Sources [19.1 kB]
Get:14 http://archive.ubuntu.com trusty-security/main amd64 Packages [251 kB]
Get:15 http://archive.ubuntu.com trusty-security/restricted amd64 Packages [14.8 kB]
Get:16 http://archive.ubuntu.com trusty-security/universe amd64 Packages [110 kB]
Fetched 1793 kB in 9s (199 kB/s)

If network operations like `apt-get install … ` afterwards run with no problems,

the Nova-Docker driver is installed and works on the Compute Node.

**************************************************************************************
Finally, I've also set up openstack-nova-compute on the Controller, to run several instances with the Qemu/Libvirt driver :-
**************************************************************************************

Set up Nova-Docker on OpenStack RDO Juno on top of Fedora 21

January 11, 2015
****************************************************************************************
UPDATE as of 01/31/2015 to get Docker && Nova-Docker working on Fedora 21
****************************************************************************************
Per https://github.com/docker/docker/issues/10280
First, the packages needed for rpmbuild :-

$ sudo yum install audit-libs-devel autoconf  automake cryptsetup-devel \
dbus-devel docbook-style-xsl elfutils-devel  \
glib2-devel  gnutls-devel  gobject-introspection-devel \
gperf     gtk-doc intltool kmod-devel libacl-devel \
libblkid-devel     libcap-devel libcurl-devel libgcrypt-devel \
libidn-devel libmicrohttpd-devel libmount-devel libseccomp-devel \
libselinux-devel libtool pam-devel python3-devel python3-lxml \
qrencode-devel  python2-devel  xz-devel

Second:-
$ cd rpmbuild/SPEC
$ rpmbuild -bb systemd.spec
$ cd ../RPMS/x86_64
Third:-

$ sudo yum install libgudev1-218-3.fc21.x86_64.rpm \
libgudev1-devel-218-3.fc21.x86_64.rpm \
systemd-218-3.fc21.x86_64.rpm \
systemd-compat-libs-218-3.fc21.x86_64.rpm \
systemd-debuginfo-218-3.fc21.x86_64.rpm \
systemd-devel-218-3.fc21.x86_64.rpm \
systemd-journal-gateway-218-3.fc21.x86_64.rpm \
systemd-libs-218-3.fc21.x86_64.rpm \
systemd-python-218-3.fc21.x86_64.rpm \
systemd-python3-218-3.fc21.x86_64.rpm

****************************************************************************************

Recently Filip Krikava made a fork on GitHub and created a Juno branch.

The master branch https://github.com/stackforge/nova-docker.git targets the latest Nova (the Kilo release); the forked branch is supposed to work for Juno, reasonably including commits after "Merge oslo.i18n". The post below tests the Juno branch https://github.com/fikovnik/nova-docker.git

Quote ([2]) :-

The Docker driver is a hypervisor driver for Openstack Nova Compute. It was introduced with the Havana release, but lives out-of-tree for Icehouse and Juno. Being out-of-tree has allowed the driver to reach maturity and feature-parity faster than would be possible should it have remained in-tree. It is expected the driver will return to mainline Nova in the Kilo release.

Install the packages required for the nova-docker driver per https://wiki.openstack.org/wiki/Docker

***************************
Initial docker setup
***************************

```
# yum install docker-io -y
# yum install -y python-pip git
# git clone https://github.com/fikovnik/nova-docker.git
# cd nova-docker
# git branch -v -a
```

```
master                1ed1820 A note no firewall drivers.
remotes/origin/HEAD   -> origin/master
remotes/origin/juno   1a08ea5 Fix the problem when an image is not located in the local docker image registry.
remotes/origin/master 1ed1820 A note no firewall drivers.

# git checkout -b juno origin/juno
# python setup.py install
# systemctl start docker
# systemctl enable docker
# chmod 660 /var/run/docker.sock
# pip install pbr
# mkdir /etc/nova/rootwrap.d
```

******************************
Update nova.conf
******************************

vi /etc/nova/nova.conf

************************************************
Next, create the docker.filters file:
************************************************

vi /etc/nova/rootwrap.d/docker.filters

Insert Lines
# nova-rootwrap command filters for setting up network in the docker driver
# This file should be owned by (and only-writeable by) the root user
[Filters]
# nova/virt/docker/driver.py: 'ln', '-sf', '/var/run/netns/.*'
ln: CommandFilter, /bin/ln, root

*****************************************
Update glance-api.conf
*****************************************

container_formats=ami,ari,aki,bare,ovf,ova,docker

:wq

************************
Restart Services
************************

`usermod -G docker nova`

systemctl restart openstack-nova-compute
systemctl status openstack-nova-compute
systemctl restart openstack-glance-api

*******************************************************************************
Verification that the nova-docker driver has been built on Fedora 21

*******************************************************************************
The build below extends phusion/baseimage to start several daemons at a time when a nova-docker container is launched. It has been tested on Nova-Docker RDO Juno on top of CentOS 7 (view Set up GlassFish 4.1 Nova-Docker Container via phusion/baseimage on RDO Juno). Here it is reproduced on Nova-Docker RDO Juno on top of Fedora 21, coming after a `packstack --allinone` Juno installation on Fedora 21, which ran pretty smoothly.

FROM phusion/baseimage

MAINTAINER Boris Derzhavets

RUN apt-get update
RUN echo 'root:root' | chpasswd
RUN sed -ri 's/UsePAM yes/#UsePAM yes/g' /etc/ssh/sshd_config
##################################################
# Hack to avoid having to start the SSH session externally inside the container;
# otherwise sshd won't start when the docker container loads
##################################################
RUN echo "/usr/sbin/sshd > log &" >>  /etc/my_init.d/00_regen_ssh_host_keys.sh

RUN apt-get update && apt-get install -y wget
RUN cp  jdk-8u25-linux-x64.tar.gz /opt
RUN cd /opt; tar -zxvf jdk-8u25-linux-x64.tar.gz
ENV PATH /opt/jdk1.8.0_25/bin:$PATH

RUN apt-get update &&  \
apt-get install -y wget unzip pwgen expect net-tools vim &&  \
unzip glassfish-4.1.zip -d /opt &&  \
rm glassfish-4.1.zip &&  \
apt-get clean &&  \
rm -rf /var/lib/apt/lists/*
ENV PATH /opt/glassfish4/bin:$PATH

RUN chmod +x /*.sh /etc/my_init.d/*.sh

# 4848 (administration), 8080 (HTTP listener), 8181 (HTTPS listener), 9009 (JPDA debug port)

EXPOSE 22  4848 8080 8181 9009

CMD ["/sbin/my_init"]

***************************************************************
Another option is not to touch 00_regen_ssh_host_keys.sh at all
***************************************************************
# RUN echo "/usr/sbin/sshd > log &" >>  /etc/my_init.d/00_regen_ssh_host_keys.sh

***************************************************************
Create script 01_sshd_start.sh in the build folder
***************************************************************

#!/bin/bash
/usr/sbin/sshd > log &
and insert in Dockerfile:-

********************************************************************************
I had to update the database.sh script as follows to make the nova-docker container
start on RDO Juno on top of Fedora 21 ( view http://lxer.com/module/newswire/view/209277/index.html ).
********************************************************************************

# cat database.sh

#!/bin/bash
set -e
asadmin start-database --dbhost 127.0.0.1 --terse=true > log &

The important change is binding dbhost to 127.0.0.1, which is not required for loading the docker container directly. The Nova-Docker driver ( http://www.linux.com/community/blogs/133-general-linux/799569-running-nova-docker-on-openstack-rdo-juno-centos-7 ) seems to be more picky about the --dbhost key value of the Derby database.

*********************
Build image
*********************

[root@junolxc docker-glassfish41]# ls -l

total 44
-rw-r--r--. 1 root root   473 Jan  7 00:27 circle.yml
-rw-r--r--. 1 root root    44 Jan  7 00:27 database.sh
-rw-r--r--. 1 root root  1287 Jan  7 00:27 Dockerfile
-rw-r--r--. 1 root root   167 Jan  7 00:27 enable_secure_admin.sh
-rw-r--r--. 1 root root 11323 Jan  7 00:27 LICENSE
-rw-r--r--. 1 root root  2123 Jan  7 00:27 README.md
-rw-r--r--. 1 root root   354 Jan  7 00:27 run.sh
[root@junolxc docker-glassfish41]# docker build -t derby/docker-glassfish41 .

******************************************
RDO (AIO install)  Juno status on Fedora 21
*******************************************

== Nova services ==
openstack-nova-api:                     active
openstack-nova-cert:                    active
openstack-nova-compute:                 active
openstack-nova-network:                 inactive  (disabled on boot)
openstack-nova-scheduler:               active
openstack-nova-conductor:               active
== Glance services ==
openstack-glance-api:                   active
openstack-glance-registry:              active
== Keystone service ==
openstack-keystone:                     active
== Horizon service ==
openstack-dashboard:                    active
== neutron services ==
neutron-server:                         active
neutron-dhcp-agent:                     active
neutron-l3-agent:                       active
neutron-lbaas-agent:                    inactive  (disabled on boot)
neutron-openvswitch-agent:              active
== Swift services ==
openstack-swift-proxy:                  active
openstack-swift-account:                active
openstack-swift-container:              active
openstack-swift-object:                 active
== Cinder services ==
openstack-cinder-api:                   active
openstack-cinder-scheduler:             active
openstack-cinder-volume:                active
openstack-cinder-backup:                active
== Ceilometer services ==
openstack-ceilometer-api:               active
openstack-ceilometer-central:           active
openstack-ceilometer-compute:           active
openstack-ceilometer-collector:         active
openstack-ceilometer-alarm-notifier:    active
openstack-ceilometer-alarm-evaluator:   active
== Support services ==
libvirtd:                               active
openvswitch:                            active
dbus:                                   active
target:                                 inactive  (disabled on boot)
rabbitmq-server:                        active
memcached:                              active
== Keystone users ==
+----------------------------------+------------+---------+----------------------+
|                id                |    name    | enabled |        email         |
+----------------------------------+------------+---------+----------------------+
| edfb1cd3c4d54401ac810b14e8d953f2 |   admin    |   True  |    root@localhost    |
| 783df7494254423aaed3bfe0cc2262af | ceilometer |   True  | ceilometer@localhost |
| 955e7619fc6749f68843030d9da6cef3 |   cinder   |   True  |   cinder@localhost   |
| 1ed0f9f7705341b79f58190ea31160fc |    demo    |   True  |                      |
| b7dec54d6b984c16afca2935cc09c478 |  neutron   |   True  |  neutron@localhost   |
| c35cad56c0e548aaa6907e0da3eca569 |    nova    |   True  |    nova@localhost    |
| a959def1f10e48d6959a70bc930e8522 |   swift    |   True  |   swift@localhost    |
+----------------------------------+------------+---------+----------------------+
== Glance images ==
+--------------------------------------+---------------------------------+-------------+------------------+------------+--------+
| ID                                   | Name                            | Disk Format | Container Format | Size       | Status |
+--------------------------------------+---------------------------------+-------------+------------------+------------+--------+
| 08b235e5-7f2b-4bc4-959e-582482037019 | cirros                          | qcow2       | bare             | 13200896   | active |
| fcb9a93a-6a28-413f-853b-4ad362aed0c5 | derby/docker-glassfish41:latest | raw         | docker           | 1112110592 | active |
| 032952ba-5bb3-41cc-9a2a-d4c76d197571 | dba07/docker-glassfish41:latest | raw         | docker           | 1112110592 | active |
| ce0adab4-3f09-45cc-81fa-cd8cc6acc7c1 | rastasheep/ubuntu-sshd:14.04    | raw         | docker           | 263785472  | active |
| 230040b3-c5d1-4bf0-b5e4-9f112fd71c70 | Ubuntu14.04-011014              | qcow2       | bare             | 256311808  | active |
+--------------------------------------+---------------------------------+-------------+------------------+------------+--------+
== Nova managed services ==
+----+------------------+----------------------+----------+---------+-------+----------------------------+-----------------+
| Id | Binary           | Host                 | Zone     | Status  | State | Updated_at                 | Disabled Reason |
+----+------------------+----------------------+----------+---------+-------+----------------------------+-----------------+
| 1  | nova-consoleauth | fedora21.localdomain | internal | enabled | up    | 2015-01-11T09:45:21.000000 | -               |
| 2  | nova-scheduler   | fedora21.localdomain | internal | enabled | up    | 2015-01-11T09:45:22.000000 | -               |
| 3  | nova-conductor   | fedora21.localdomain | internal | enabled | up    | 2015-01-11T09:45:22.000000 | -               |
| 5  | nova-compute     | fedora21.localdomain | nova     | enabled | up    | 2015-01-11T09:45:20.000000 | -               |
| 6  | nova-cert        | fedora21.localdomain | internal | enabled | up    | 2015-01-11T09:45:29.000000 | -               |
+----+------------------+----------------------+----------+---------+-------+----------------------------+-----------------+
== Nova networks ==
+--------------------------------------+--------------+------+
| ID                                   | Label        | Cidr |
+--------------------------------------+--------------+------+
| 046e1e6f-b09c-4daf-9732-3ed0b6e5fdf8 | public       | -    |
| 76709a1a-61e7-4488-9ecf-96dbd88d4fb6 | private      | -    |
| 7b2c1d87-cea1-40aa-a1d7-dbac3cc99798 | demo_network | -    |
+--------------------------------------+--------------+------+
== Nova instance flavors ==
+----+-----------+-----------+------+-----------+------+-------+-------------+-----------+
| ID | Name      | Memory_MB | Disk | Ephemeral | Swap | VCPUs | RXTX_Factor | Is_Public |
+----+-----------+-----------+------+-----------+------+-------+-------------+-----------+
| 1  | m1.tiny   | 512       | 1    | 0         |      | 1     | 1.0         | True      |
| 2  | m1.small  | 2048      | 20   | 0         |      | 1     | 1.0         | True      |
| 3  | m1.medium | 4096      | 40   | 0         |      | 2     | 1.0         | True      |
| 4  | m1.large  | 8192      | 80   | 0         |      | 4     | 1.0         | True      |
| 5  | m1.xlarge | 16384     | 160  | 0         |      | 8     | 1.0         | True      |
+----+-----------+-----------+------+-----------+------+-------+-------------+-----------+
== Nova instances ==
+----+------+--------+------------+-------------+----------+
| ID | Name | Status | Task State | Power State | Networks |
+----+------+--------+------------+-------------+----------+
+----+------+--------+------------+-------------+----------+

*************************
*************************

# docker save derby/docker-glassfish41:latest | glance image-create --is-public=True --container-format=docker --disk-format=raw --name derby/docker-glassfish41:latest

**********************
Launch instance
**********************
# .  keystonerc_demo

# nova boot --image "derby/docker-glassfish41:latest" --flavor m1.small --key-name oskey57 --nic net-id=demo_network-id DerbyGlassfish41

Set up GlassFish 4.1 Nova-Docker Container via docker’s phusion/baseimage on RDO Juno

January 9, 2015

The problem here is that phusion/baseimage, per https://github.com/phusion/baseimage-docker , should provide ssh access to the container; however, it doesn't. When working with a plain docker container there is an easy workaround suggested by Mykola Gurov in http://stackoverflow.com/questions/27816298/cannot-get-ssh-access-to-glassfish-4-1-docker-container :-
# docker exec container-id /usr/sbin/sshd -D
*******************************************************************************
To bring sshd back to life, create script 01_sshd_start.sh in the build folder
*******************************************************************************
#!/bin/bash

if [[ ! -e /etc/ssh/ssh_host_rsa_key ]]; then
echo "No SSH host key available. Generating one..."
export LC_ALL=C
export DEBIAN_FRONTEND=noninteractive
dpkg-reconfigure openssh-server
echo "SSH KEYS regenerated by Boris just in case !"
fi

/usr/sbin/sshd > log &
echo "SSHD started !"

and insert in Dockerfile:-
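The Dockerfile lines themselves did not survive in this copy of the post; presumably something along these lines, following phusion/baseimage's my_init.d convention (the exact lines are an assumption, only the script filename is taken from the text above):

```dockerfile
# hypothetical reconstruction: install the startup script for my_init
ADD 01_sshd_start.sh /etc/my_init.d/01_sshd_start.sh
RUN chmod +x /etc/my_init.d/01_sshd_start.sh
```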

Below is the Dockerfile used to build the image for the GlassFish 4.1 nova-docker container. It extends phusion/baseimage and starts three daemons at a time when the nova-docker instance is launched; the image is prepared for use by the Nova-Docker driver on Juno.

FROM phusion/baseimage
MAINTAINER Boris Derzhavets

RUN apt-get update
RUN echo 'root:root' | chpasswd
RUN sed -ri 's/UsePAM yes/#UsePAM yes/g' /etc/ssh/sshd_config

RUN apt-get update && apt-get install -y wget
RUN cp jdk-8u25-linux-x64.tar.gz /opt
RUN cd /opt; tar -zxvf jdk-8u25-linux-x64.tar.gz
ENV PATH /opt/jdk1.8.0_25/bin:$PATH
RUN apt-get update && \
apt-get install -y wget unzip pwgen expect net-tools vim && \
unzip glassfish-4.1.zip -d /opt && \
rm glassfish-4.1.zip && \
apt-get clean && \
rm -rf /var/lib/apt/lists/*

ENV PATH /opt/glassfish4/bin:$PATH

RUN chmod +x /*.sh /etc/my_init.d/*.sh

# 4848 (administration), 8080 (HTTP listener), 8181 (HTTPS listener), 9009 (JPDA debug port)

EXPOSE 22 4848 8080 8181 9009

CMD ["/sbin/my_init"]

********************************************************************************
I had to update the database.sh script as follows to make the nova-docker container
start on RDO Juno
********************************************************************************
# cat database.sh

#!/bin/bash
set -e
asadmin start-database --dbhost 127.0.0.1 --terse=true > log &

The important change is binding dbhost to 127.0.0.1, which is not required for loading the docker container directly. The Nova-Docker driver ( http://www.linux.com/community/blogs/133-general-linux/799569-running-nova-docker-on-openstack-rdo-juno-centos-7 ) seems to be more picky about the --dbhost key value of the Derby database.

*********************
Build image
*********************
[root@junolxc docker-glassfish41]# ls -l
total 44
-rw-r--r--. 1 root root 473 Jan 7 00:27 circle.yml
-rw-r--r--. 1 root root 44 Jan 7 00:27 database.sh
-rw-r--r--. 1 root root 1287 Jan 7 00:27 Dockerfile
-rw-r--r--. 1 root root 167 Jan 7 00:27 enable_secure_admin.sh
-rw-r--r--. 1 root root 11323 Jan 7 00:27 LICENSE
-rw-r--r--. 1 root root 2123 Jan 7 00:27 README.md
-rw-r--r--. 1 root root 354 Jan 7 00:27 run.sh

[root@junolxc docker-glassfish41]# docker build -t boris/docker-glassfish41 .

*************************
*************************
# docker save boris/docker-glassfish41:latest | glance image-create --is-public=True --container-format=docker --disk-format=raw --name boris/docker-glassfish41:latest

**********************
Launch instance
**********************
# . keystonerc_demo
# nova boot --image "boris/docker-glassfish41:latest" --flavor m1.small --key-name osxkey --nic net-id=demo_network-id OracleGlassfish41

Last login: Fri Jan 9 10:09:50 2015 from 192.168.1.57

root@instance-00000045:~# ps -ef

UID PID PPID C STIME TTY TIME CMD
root 1 0 0 10:15 ? 00:00:00 /usr/bin/python3 -u /sbin/my_init
root 12 1 0 10:15 ? 00:00:00 /usr/sbin/sshd

root 137 1 0 10:15 ? 00:00:00 /bin/bash /etc/my_init.d/run.sh
root 358 137 0 10:15 ? 00:00:05 java -jar /opt/glassfish4/bin/../glassfish/lib/client/appserver-cli.jar start-domain --debug=false -w

root 1186 12 0 14:02 ? 00:00:00 sshd: root@pts/0
root 1188 1186 0 14:02 pts/0 00:00:00 -bash
root 1226 1188 0 15:45 pts/0 00:00:00 ps -ef

Original idea of using ./run.sh script is coming from
https://registry.hub.docker.com/u/bonelli/glassfish-4.1/

*** Running /etc/my_init.d/00_regen_ssh_host_keys.sh…
No SSH host key available. Generating one…
Creating SSH2 RSA key; this may take some time …
Creating SSH2 DSA key; this may take some time …
Creating SSH2 ECDSA key; this may take some time …
Creating SSH2 ED25519 key; this may take some time …
invoke-rc.d: policy-rc.d denied execution of restart.

*** Running /etc/my_init.d/database.sh…
Starting database in Network Server mode on host 127.0.0.1 and port 1527.
--------- Derby Network Server Information --------
Version: CSS10100/10.10.2.0 - (1582446) Build: 1582446 DRDA Product Id: CSS10100
-- listing properties --
derby.drda.traceDirectory=/opt/glassfish4/glassfish/databases
derby.drda.sslMode=off
derby.drda.keepAlive=true
derby.drda.portNumber=1527
derby.drda.logConnections=false
derby.drda.timeSlice=0
derby.drda.startNetworkServer=false
derby.drda.host=127.0.0.1
derby.drda.traceAll=false
—————— Java Information ——————
Java Version: 1.8.0_25
Java Vendor: Oracle Corporation
Java home: /opt/jdk1.8.0_25/jre
OS name: Linux
OS architecture: amd64
OS version: 3.10.0-123.el7.x86_64
Java user name: root
Java user home: /root
Java user dir: /
java.specification.name: Java Platform API Specification
java.specification.version: 1.8
java.runtime.version: 1.8.0_25-b17
——— Derby Information ——–
——————————————————
—————– Locale Information —————–

Current Locale : [English/United States [en_US]]
Found support for locale: [cs]
version: 10.10.2.0 – (1582446)
Found support for locale: [de_DE]
version: 10.10.2.0 – (1582446)
Found support for locale: [es]
version: 10.10.2.0 – (1582446)
Found support for locale: [fr]
version: 10.10.2.0 – (1582446)
Found support for locale: [hu]
version: 10.10.2.0 – (1582446)
Found support for locale: [it]
version: 10.10.2.0 – (1582446)
Found support for locale: [ja_JP]
version: 10.10.2.0 – (1582446)
Found support for locale: [ko_KR]
version: 10.10.2.0 – (1582446)
Found support for locale: [pl]
version: 10.10.2.0 – (1582446)
Found support for locale: [pt_BR]
version: 10.10.2.0 – (1582446)
Found support for locale: [ru]
version: 10.10.2.0 – (1582446)
Found support for locale: [zh_CN]
version: 10.10.2.0 – (1582446)
Found support for locale: [zh_TW]
version: 10.10.2.0 – (1582446)
——————————————————
——————————————————

Starting database in the background.

Log redirected to /opt/glassfish4/glassfish/databases/derby.log.
Command start-database executed successfully.
*** Running /etc/my_init.d/run.sh…
Bad Network Configuration. DNS can not resolve the hostname:
java.net.UnknownHostException: instance-00000045: instance-00000045: unknown error

Waiting for domain1 to start …….
Successfully started the domain : domain1
domain Location: /opt/glassfish4/glassfish/domains/domain1
Log File: /opt/glassfish4/glassfish/domains/domain1/logs/server.log
Command start-domain executed successfully.
You must restart all running servers for the change in secure admin to take effect.
=> Done!
========================================================================
You can now connect to this Glassfish server using:
========================================================================
=> Restarting Glassfish server
Waiting for the domain to stop .
Command stop-domain executed successfully.
=> Starting and running Glassfish server
=> Debug mode is set to: false

Running Nova-Docker on OpenStack Juno (CentOS 7)

December 16, 2014

Recently Filip Krikava made a fork on GitHub and created a Juno branch using the latest commit "Fix the problem when an image is not located in the local docker image registry" ( https://github.com/fikovnik/nova-docker/commit/016cc98e2f8950ae3bf5e27912be20c52fc9e40e )
The master branch https://github.com/stackforge/nova-docker.git is targeting the latest Nova (Kilo release); the forked branch is supposed to work for Juno, reasonably including commits after "Merge oslo.i18n". The post below is supposed to test the Juno branch https://github.com/fikovnik/nova-docker.git

Quote ([2]) :-

The Docker driver is a hypervisor driver for Openstack Nova Compute. It was introduced with the Havana release, but lives out-of-tree for Icehouse and Juno. Being out-of-tree has allowed the driver to reach maturity and feature-parity faster than would be possible should it have remained in-tree. It is expected the driver will return to mainline Nova in the Kilo release.

This post in general follows up ([2]) with detailed instructions for installing the nova-docker driver on RDO Juno (CentOS 7) ([3]).

Install the packages required for the nova-docker driver per https://wiki.openstack.org/wiki/Docker

***************************

Initial docker setup

***************************

```
# yum install docker-io -y
# yum install -y python-pip git
# git clone https://github.com/fikovnik/nova-docker.git
# cd nova-docker
# git branch -v -a
```

```
  master                1ed1820 A note no firewall drivers.
  remotes/origin/HEAD   -> origin/master
  remotes/origin/juno   1a08ea5 Fix the problem when an image is not located in the local docker image registry.
  remotes/origin/master 1ed1820 A note no firewall drivers.
# git checkout -b juno origin/juno
# python setup.py install
# systemctl start docker
# systemctl enable docker
# chmod 660 /var/run/docker.sock
# pip install pbr
# mkdir /etc/nova/rootwrap.d
```

******************************

Update nova.conf

******************************

vi /etc/nova/nova.conf
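The nova.conf edits themselves are not reproduced in this post. Per the nova-docker instructions on the OpenStack wiki, the key change is pointing compute_driver at the Docker driver; a minimal sketch (verify the class path against the branch you checked out):

```
[DEFAULT]
compute_driver = novadocker.virt.docker.DockerDriver
```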

************************************************

Next, create the docker.filters file:

************************************************

vi /etc/nova/rootwrap.d/docker.filters

Insert Lines

# nova-rootwrap command filters for setting up network in the docker driver

# This file should be owned by (and only-writeable by) the root user

[Filters]

# nova/virt/docker/driver.py: ‘ln’, ‘-sf’, ‘/var/run/netns/.*’

ln: CommandFilter, /bin/ln, root

*****************************************

Update glance-api.conf

*****************************************

container_formats=ami,ari,aki,bare,ovf,ova,docker

:wq

************************

Restart Services

************************

`usermod -G docker nova`

systemctl restart openstack-nova-compute

systemctl status openstack-nova-compute

systemctl restart openstack-glance-api

******************************

Verification of docker install

******************************

[root@juno ~]# docker run -i -t fedora /bin/bash

Unable to find image 'fedora' locally

fedora:latest: The image you are pulling has been verified

00a0c78eeb6d: Pull complete

2f6ab0c1646e: Pull complete

bash-4.3# cat /etc/issue

Fedora release 21 (Twenty One)

Kernel \r on an \m (\l)

[root@juno ~]# docker ps -a

CONTAINER ID        IMAGE                    COMMAND             CREATED             STATUS                        PORTS               NAMES
738e54f9efd4        fedora:latest            "/bin/bash"         3 minutes ago       Exited (127) 25 seconds ago                       stoic_lumiere
14fd0cbba76d        ubuntu:latest            "/bin/bash"         3 minutes ago       Exited (0) 3 minutes ago                          prickly_hypatia
ef1a726d1cd4        fedora:latest            "/bin/bash"         5 minutes ago       Exited (0) 3 minutes ago                          drunk_shockley
0a2da90a269f        ubuntu:latest            "/bin/bash"         11 hours ago        Exited (0) 11 hours ago                           thirsty_kowalevski
5a3288ce0e8e        ubuntu:latest            "/bin/bash"         11 hours ago        Exited (0) 11 hours ago                           happy_leakey
21e84951eabd        tutum/wordpress:latest   "/run.sh"           16 hours ago        Up About an hour                                  nova-bf5f7eb9-900d-48bf-a230-275d65813b0f

*******************

Setup WordPress

*******************

`# docker pull tutum/wordpress`

`# . keystonerc_admin`

`# docker save tutum/wordpress | glance image-create --is-public=True --container-format=docker --disk-format=raw --name tutum/wordpress`

```
[root@juno ~(keystone_admin)]# glance image-list
+--------------------------------------+-----------------+-------------+------------------+-----------+--------+
| ID                                   | Name            | Disk Format | Container Format | Size      | Status |
+--------------------------------------+-----------------+-------------+------------------+-----------+--------+
| c6d01e60-56c2-443f-bf87-15a0372bc2d9 | cirros          | qcow2       | bare             | 13200896  | active |
| 9d59e7ad-35b4-4c3f-9103-68f85916f36e | tutum/wordpress | raw         | docker           | 517639680 | active |
+--------------------------------------+-----------------+-------------+------------------+-----------+--------+
```

********************

Start container

********************

$ . keystonerc_demo

[root@juno ~(keystone_demo)]# neutron net-list

+————————————–+————–+——————————————————-+

| id                                   | name         | subnets                                               |

+————————————–+————–+——————————————————-+

| ccfc4bb1-696d-4381-91d7-28ce7c9cb009 | private      | 6c0a34ab-e3f1-458c-b24a-96f5a2149878 10.0.0.0/24      |

| 32c14896-8d47-4a56-b3c6-0dd823f03089 | public       | b1799aef-3f69-429c-9881-f81c74d83060 192.169.142.0/24 |

| a65bff8f-e397-491b-aa97-955864bec2f9 | demo_private | 69012862-f72e-4cd2-a4fc-4106d431cf2f 70.0.0.0/24      |

+————————————–+————–+——————————————————-+

$ nova boot --image "tutum/wordpress" --flavor m1.tiny --key-name osxkey --nic net-id=a65bff8f-e397-491b-aa97-955864bec2f9 WordPress

[root@juno ~(keystone_demo)]# nova list

+--------------------------------------+-----------+--------+------------+-------------+-----------------------------------------+
| ID                                   | Name      | Status | Task State | Power State | Networks                                |
+--------------------------------------+-----------+--------+------------+-------------+-----------------------------------------+
| bf5f7eb9-900d-48bf-a230-275d65813b0f | WordPress | ACTIVE | -          | Running     | demo_private=70.0.0.16, 192.169.142.153 |
+--------------------------------------+-----------+--------+------------+-------------+-----------------------------------------+

[root@juno ~(keystone_demo)]# docker ps -a

CONTAINER ID        IMAGE                    COMMAND             CREATED             STATUS              PORTS               NAMES
21e84951eabd        tutum/wordpress:latest   "/run.sh"           About an hour ago   Up 11 minutes                           nova-bf5f7eb9-900d-48bf-a230-275d65813b0f
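Note the NAMES column: the Nova-Docker driver names each container after the Nova instance UUID with a nova- prefix, which makes it easy to jump from `nova list` output to docker commands. A minimal sketch of that convention, read off the listings above rather than from the driver source:

```shell
# Build the docker container name for a Nova instance,
# following the nova-<uuid> pattern visible in `docker ps` above.
instance_uuid="bf5f7eb9-900d-48bf-a230-275d65813b0f"   # ID from `nova list`
container_name="nova-${instance_uuid}"
echo "${container_name}"
```

With that name you can address the instance's container directly, e.g. `docker logs nova-<uuid>`.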

**************************

Starting WordPress

**************************

Immediately after the VM starts (on the non-default Libvirt subnet 192.169.142.0/24), the WordPress instance's status is SHUTOFF, so we start WordPress (browser launched to the Juno VM 192.169.142.45 from the KVM hypervisor server):-

Browser launched to the WordPress container 192.169.142.153 from the KVM hypervisor server

**********************************************************************************

The floating IP assigned to the WordPress container has been used to launch the browser:-

**********************************************************************************

*******************************************************************************************

Another sample demonstrating nova-docker container functionality: browser launched to the WordPress nova-docker container (192.169.142.155) from the KVM hypervisor server hosting Libvirt's subnet (192.169.142.0/24)

*******************************************************************************************

*****************

MySQL Setup

*****************

# docker pull tutum/mysql

*****************************

Creating Glance Image

*****************************

# docker save tutum/mysql:latest | glance image-create --is-public=True --container-format=docker --disk-format=raw --name tutum/mysql:latest

****************************************

Starting Nova-Docker container

****************************************

# .   keystonerc_demo

# nova boot --image "tutum/mysql:latest" --flavor m1.tiny --key-name osxkey --nic net-id=5fcd01ac-bc8e-450d-be67-f0c274edd041 mysql

[root@ip-192-169-142-45 ~(keystone_demo)]# nova list

+————————————–+—————+——–+————+————-+—————————————–+

| ID                                   | Name          | Status | Task State | Power State | Networks                                |

+————————————–+—————+——–+————+————-+—————————————–+

| 3dbf981f-f28c-4abe-8fd1-09b8b8cad930 | WordPress     | ACTIVE | –          | Running     | demo_network=70.0.0.16, 192.169.142.153 |

| 39eef361-1329-44d9-b05a-f6b4b8693aa3 | mysql         | ACTIVE | –          | Running     | demo_network=70.0.0.19, 192.169.142.155 |

| 626bd8e0-cf1a-4891-aafc-620c464e8a94 | tutum/hipache | ACTIVE | –          | Running     | demo_network=70.0.0.18, 192.169.142.154 |

+————————————–+—————+——–+————+————-+—————————————–+

[root@ip-192-169-142-45 ~(keystone_demo)]# docker ps -a

CONTAINER ID        IMAGE                          COMMAND               CREATED             STATUS                         PORTS               NAMES
3da1e94892aa        tutum/mysql:latest             "/run.sh"             25 seconds ago      Up 23 seconds                                      nova-39eef361-1329-44d9-b05a-f6b4b8693aa3
77538873a273        tutum/hipache:latest           "/run.sh"             30 minutes ago                                                         condescending_leakey
844c75ca5a0e        tutum/hipache:latest           "/run.sh"             31 minutes ago                                                         condescending_turing
f477605840d0        tutum/hipache:latest           "/run.sh"             42 minutes ago      Up 31 minutes                                      nova-626bd8e0-cf1a-4891-aafc-620c464e8a94
3e2fe064d822        rastasheep/ubuntu-sshd:14.04   "/usr/sbin/sshd -D"   About an hour ago   Exited (0) About an hour ago                       test_sshd
8e79f9d8e357        fedora:latest                  "/bin/bash"           About an hour ago   Exited (0) About an hour ago                       evil_colden
9531ab33db8d        ubuntu:latest                  "/bin/bash"           About an hour ago   Exited (0) About an hour ago                       angry_bardeen

[root@ip-192-169-142-45 ~(keystone_demo)]# docker logs 3da1e94892aa

=> An empty or uninitialized MySQL volume is detected in /var/lib/mysql
=> Installing MySQL …
=> Done!
=> Waiting for confirmation of MySQL service startup, trying 0/13 …
=> Done!

========================================================================

You can now connect to this MySQL Server using:

MySQL user 'root' has no password but only allows local connections
========================================================================
141218 20:45:31 mysqld_safe Can't log to error log and syslog at the same time.
Remove all --log-error configuration options for --syslog to take effect.

141218 20:45:31 mysqld_safe Logging to '/var/log/mysql/error.log'.
141218 20:45:31 mysqld_safe Starting mysqld daemon with databases from /var/lib/mysql

[root@ip-192-169-142-45 ~(keystone_demo)]# mysql -uadmin -pfXs5UarEYaow -h 192.169.142.155  -P 3306
Welcome to the MariaDB monitor.  Commands end with ; or \g.
Your MySQL connection id is 1
Server version: 5.5.40-0ubuntu0.14.04.1 (Ubuntu)

Copyright (c) 2000, 2014, Oracle, Monty Program Ab and others.

Type 'help;' or '\h' for help. Type '\c' to clear the current input statement.

MySQL [(none)]> show databases ;
+——————–+
| Database           |
+——————–+
| information_schema |
| mysql              |
| performance_schema |
+——————–+
3 rows in set (0.01 sec)

MySQL [(none)]>

*******************************************

Setup Ubuntu 14.04 with SSH access

*******************************************

# docker pull rastasheep/ubuntu-sshd:14.04

# docker save rastasheep/ubuntu-sshd:14.04 | glance image-create --is-public=True --container-format=docker --disk-format=raw --name rastasheep/ubuntu-sshd:14.04

# . keystonerc_demo

# nova boot --image "rastasheep/ubuntu-sshd:14.04" --flavor m1.tiny --key-name osxkey --nic net-id=5fcd01ac-bc8e-450d-be67-f0c274edd041 ubuntuTrusty

***********************************************************

Login to dashboard && assign floating IP via dashboard:-

***********************************************************

[root@ip-192-169-142-45 ~(keystone_demo)]# nova list

+————————————–+————–+———+————+————-+—————————————–+

| ID                                   | Name         | Status  | Task State | Power State | Networks                                |

+————————————–+————–+———+————+————-+—————————————–+

| 3dbf981f-f28c-4abe-8fd1-09b8b8cad930 | WordPress    | SHUTOFF | –          | Shutdown    | demo_network=70.0.0.16, 192.169.142.153 |

| 7bbf887f-167c-461e-9ee0-dd4d43605c9e | lamp         | ACTIVE  | –          | Running     | demo_network=70.0.0.20, 192.169.142.156 |

| 39eef361-1329-44d9-b05a-f6b4b8693aa3 | mysql        | SHUTOFF | –          | Shutdown    | demo_network=70.0.0.19, 192.169.142.155 |

| f21dc265-958e-4ed0-9251-31c4bbab35f4 | ubuntuTrusty | ACTIVE  | –          | Running     | demo_network=70.0.0.21, 192.169.142.157 |

+————————————–+————–+———+————+————-+—————————————–+

[root@ip-192-169-142-45 ~(keystone_demo)]# ssh root@192.169.142.157

Last login: Fri Dec 19 09:19:40 2014 from ip-192-169-142-45.ip.secureserver.net

root@instance-0000000d:~# cat /etc/issue

Ubuntu 14.04.1 LTS \n \l

root@instance-0000000d:~# ifconfig

UP LOOPBACK RUNNING  MTU:65536  Metric:1

RX packets:0 errors:0 dropped:0 overruns:0 frame:0

TX packets:0 errors:0 dropped:0 overruns:0 carrier:0

collisions:0 txqueuelen:0

RX bytes:0 (0.0 B)  TX bytes:0 (0.0 B)

UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1

RX packets:2574 errors:0 dropped:0 overruns:0 frame:0

TX packets:1653 errors:0 dropped:0 overruns:0 carrier:0

collisions:0 txqueuelen:1000

RX bytes:2257920 (2.2 MB)  TX bytes:255582 (255.5 KB)

root@instance-0000000d:~# df -h

Filesystem                                                                                         Size  Used Avail Use% Mounted on

/dev/mapper/docker-253:1-4600578-76893e146987bf4b58b42ff6ed80892df938ffba108f22c7a4591b18990e0438  9.8G  302M  9.0G   4% /

tmpfs                                                                                              1.9G     0  1.9G   0% /dev

shm                                                                                                 64M     0   64M   0% /dev/shm

/dev/mapper/centos-root                                                                             36G  9.8G   26G  28% /etc/hosts

tmpfs                                                                                              1.9G     0  1.9G   0% /run/secrets

tmpfs                                                                                              1.9G     0  1.9G   0% /proc/kcore

References

LVMiSCSI cinder backend for RDO Juno on CentOS 7

November 9, 2014

The current post follows up http://lxer.com/module/newswire/view/207415/index.html . RDO Juno has been installed on Controller and Compute nodes via packstack as described in the link @lxer.com. The iSCSI initiator implementation on CentOS 7 differs significantly from CentOS 6.5 and is based on the CLI utility targetcli and the service target. With Enterprise Linux 7, both Red Hat and CentOS, there is a big change in the management of iSCSI targets: the software runs as part of the standard systemd structure. Consequently there are significant changes in the multi-backend cinder architecture of RDO Juno running on CentOS 7 or Fedora 21 utilizing LVM-based iSCSI targets.

Create the following entries in /etc/cinder/cinder.conf on the Controller (which, in the case of a two node cluster, works as the Storage node as well).

#######################

enabled_backends=lvm51,lvm52

#######################

[lvm51]

volume_group=cinder-volumes51

volume_driver=cinder.volume.drivers.lvm.LVMISCSIDriver

volume_backend_name=LVM_iSCSI51

[lvm52]

volume_group=cinder-volumes52

volume_driver=cinder.volume.drivers.lvm.LVMISCSIDriver

volume_backend_name=LVM_iSCSI52

VGs cinder-volumes52 and cinder-volumes51 were created on /dev/sda6 and /dev/sdb1 respectively

# pvcreate /dev/sda6

# vgcreate cinder-volumes52  /dev/sda6
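The post shows these commands only for cinder-volumes52; the second VG follows the same pattern (assuming /dev/sdb1 is the intended physical volume, as stated above):

```
# pvcreate /dev/sdb1
# vgcreate cinder-volumes51 /dev/sdb1
```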

Then issue :-

+--------------------------------------+------+
|                  ID                  | Name |
+--------------------------------------+------+
| 64414f3a-7770-4958-b422-8db0c3e2f433 | lvms |
+--------------------------------------+------+

[root@juno1 ~(keystone_admin)]# cinder type-create lvmz
+--------------------------------------+------+
|                  ID                  | Name |
+--------------------------------------+------+
| 29917269-d73f-4c28-b295-59bfbda5d044 | lvmz |
+--------------------------------------+------+

+--------------------------------------+------+
|                  ID                  | Name |
+--------------------------------------+------+
| 29917269-d73f-4c28-b295-59bfbda5d044 | lvmz |
| 64414f3a-7770-4958-b422-8db0c3e2f433 | lvms |
+--------------------------------------+------+

[root@juno1 ~(keystone_admin)]# cinder type-key lvmz set volume_backend_name=LVM_iSCSI51

[root@juno1 ~(keystone_admin)]# cinder type-key lvms set volume_backend_name=LVM_iSCSI52

Then enable and start service target:-

Redirecting to /bin/systemctl status  target.service

target.service – Restore LIO kernel target configuration

Active: active (exited) since Wed 2014-11-05 13:23:09 MSK; 44min ago

Process: 1611 ExecStart=/usr/bin/targetctl restore (code=exited, status=0/SUCCESS)

Main PID: 1611 (code=exited, status=0/SUCCESS)

CGroup: /system.slice/target.service

Nov 05 13:23:07 juno1.localdomain systemd[1]: Starting Restore LIO kernel target configuration…

Nov 05 13:23:09 juno1.localdomain systemd[1]: Started Restore LIO kernel target configuration.

Now all changes made by creating cinder volumes of types lvms and lvmz (via the dashboard's volume-create dropdown of volume types, or via the cinder CLI) will be persistent in the targetcli> ls output between reboots

[root@juno1 ~(keystone_boris)]# cinder list

+————————————–+——–+——————+——+————-+———-+————————————–+

|                  ID                  | Status |   Display Name   | Size | Volume Type | Bootable |             Attached to              |

+————————————–+——–+——————+——+————-+———-+————————————–+

| 3a4f6878-530a-4a28-87bb-92ee256f63ea | in-use | UbuntuUTLV510851 |  5   |     lvmz    |   true   | efb1762e-6782-4895-bf2b-564f14105b5b |

| 51528876-405d-4a15-abc2-61ad72fc7d7e | in-use |   CentOS7LVG51   |  10  |     lvmz    |   true   | ba3e87fa-ee81-42fc-baed-c59ca6c8a100 |

| ca0694ae-7e8d-4c84-aad8-3f178416dec6 | in-use |  VF20LVG520711   |  7   |     lvms    |   true   | 51a20959-0a0c-4ef6-81ec-2edeab6e3588 |

| dc9e31f0-b27f-4400-a666-688365126f67 | in-use | UbuntuUTLV520711 |  7   |     lvms    |   true   | 1fe7d2c3-58ae-4ee8-8f5f-baf334195a59 |

+————————————–+——–+——————+——+————-+———-+————————————–+

Compare the 'green' highlighted volume IDs with the targetcli> ls output.

The next snapshot demonstrates lvms && lvmz volumes attached to the corresponding nova instances utilizing the LVMiSCSI cinder backend.

On Compute Node iscsiadm output will look as follows :-

[root@juno2 ~]# iscsiadm -m discovery -t sendtargets -p 192.168.1.127

192.168.1.127:3260,1 iqn.2010-10.org.openstack:volume-3a4f6878-530a-4a28-87bb-92ee256f63ea

192.168.1.127:3260,1 iqn.2010-10.org.openstack:volume-dc9e31f0-b27f-4400-a666-688365126f67
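The discovered target names follow cinder's iSCSI naming scheme: a target prefix followed by volume-<id>. A small sketch of that mapping (iqn.2010-10.org.openstack is cinder's default iscsi_target_prefix, as seen in the output above; adjust if your cinder.conf overrides it):

```shell
# Expected iSCSI target IQN for a cinder volume id, matching
# the iscsiadm discovery output above.
volume_id="3a4f6878-530a-4a28-87bb-92ee256f63ea"   # ID from `cinder list`
iqn="iqn.2010-10.org.openstack:volume-${volume_id}"
echo "${iqn}"
```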

References

RDO Juno Set up Two Real Node (Controller+Compute) Gluster 3.5.2 Cluster ML2&OVS&VXLAN on CentOS 7

November 3, 2014

The post below follows up http://cloudssky.com/en/blog/RDO-OpenStack-Juno-ML2-VXLAN-2-Node-Deployment-On-CentOS-7-With-Packstack/ ; however, the answer file provided here allows creating the Controller && Compute Node in a single run. Based on the RDO Juno release as of 10/27/2014, it doesn't require creating the OVS bridge br-ex and OVS port enp2s0 on the Compute Node. It also doesn't install the nova-compute service on the Controller. The Gluster 3.5.2 setup is also performed in a way which differs from the similar procedure on the IceHouse && Havana RDO releases. Two boxes have been set up, each one having 2 NICs (enp2s0, enp5s1), for the Controller && Compute Node setup. Before running `packstack --answer-file=TwoNodeVXLAN.txt`, SELINUX was set to permissive on both nodes. Both enp5s1 interfaces are assigned IPs and set up to support the VXLAN tunnel (192.168.0.127, 192.168.0.137). The services firewalld and NetworkManager are disabled; the IPv4 firewall with iptables and the service network are enabled and running. Packstack is bound to the public IP of interface enp2s0, 192.169.1.127; the Compute Node is 192.169.1.137 (view the answer file).

I also have to note that, in regard to LVMiSCSI cinder backend support on CentOS 7, the post http://theurbanpenguin.com/wp/?p=3403 is misleading. The name of the service making changes done in targetcli persistent between reboots is "target", not "targetd".

To set up the iSCSI initiator on CentOS 7 (activate LIO kernel support) you have to issue:
# systemctl enable target
# systemctl start target
# systemctl status target -l
target.service – Restore LIO kernel target configuration
Active: active (exited) since Sat 2014-11-08 14:45:06 MSK; 3h 26min ago
Process: 1661 ExecStart=/usr/bin/targetctl restore (code=exited, status=0/SUCCESS)
Main PID: 1661 (code=exited, status=0/SUCCESS)
CGroup: /system.slice/target.service

Nov 01 14:45:06 juno1.localdomain systemd[1]: Started Restore LIO kernel target configuration.

Setup configuration

– Controller node: Nova, Keystone, Cinder, Glance, Neutron (using Open vSwitch plugin && VXLAN)
– Compute node: Nova (nova-compute), Neutron (openvswitch-agent)

juno1.localdomain   –  Controller (192.168.1.127)

juno2.localdomain   –  Compute   (192.168.1.137)

[general]

CONFIG_SSH_KEY=/root/.ssh/id_rsa.pub

CONFIG_GLANCE_INSTALL=y

CONFIG_CINDER_INSTALL=y

CONFIG_NOVA_INSTALL=y

CONFIG_NEUTRON_INSTALL=y

CONFIG_HORIZON_INSTALL=y

CONFIG_SWIFT_INSTALL=y

CONFIG_CEILOMETER_INSTALL=y

CONFIG_HEAT_INSTALL=n

CONFIG_CLIENT_INSTALL=y

CONFIG_NTP_SERVERS=

CONFIG_NAGIOS_INSTALL=y

EXCLUDE_SERVERS=

CONFIG_DEBUG_MODE=n

CONFIG_CONTROLLER_HOST=192.168.1.127

CONFIG_COMPUTE_HOSTS=192.168.1.137

CONFIG_NETWORK_HOSTS=192.168.1.127

CONFIG_VMWARE_BACKEND=n

CONFIG_UNSUPPORTED=n

CONFIG_VCENTER_HOST=

CONFIG_VCENTER_USER=

CONFIG_VCENTER_CLUSTER_NAME=

CONFIG_STORAGE_HOST=192.168.1.127

CONFIG_USE_EPEL=y

CONFIG_REPO=

CONFIG_RH_USER=

CONFIG_SATELLITE_URL=

CONFIG_RH_PW=

CONFIG_RH_OPTIONAL=y

CONFIG_RH_PROXY=

CONFIG_RH_PROXY_PORT=

CONFIG_RH_PROXY_USER=

CONFIG_RH_PROXY_PW=

CONFIG_SATELLITE_USER=

CONFIG_SATELLITE_PW=

CONFIG_SATELLITE_AKEY=

CONFIG_SATELLITE_CACERT=

CONFIG_SATELLITE_PROFILE=

CONFIG_SATELLITE_FLAGS=

CONFIG_SATELLITE_PROXY=

CONFIG_SATELLITE_PROXY_USER=

CONFIG_SATELLITE_PROXY_PW=

CONFIG_AMQP_BACKEND=rabbitmq

CONFIG_AMQP_HOST=192.168.1.127

CONFIG_AMQP_ENABLE_SSL=n

CONFIG_AMQP_ENABLE_AUTH=n

CONFIG_AMQP_NSS_CERTDB_PW=PW_PLACEHOLDER

CONFIG_AMQP_SSL_PORT=5671

CONFIG_AMQP_SSL_CERT_FILE=/etc/pki/tls/certs/amqp_selfcert.pem

CONFIG_AMQP_SSL_KEY_FILE=/etc/pki/tls/private/amqp_selfkey.pem

CONFIG_AMQP_SSL_SELF_SIGNED=y

CONFIG_AMQP_AUTH_USER=amqp_user

CONFIG_KEYSTONE_DB_PW=abcae16b785245c3

CONFIG_KEYSTONE_REGION=RegionOne

CONFIG_KEYSTONE_DEMO_PW=bf737b785cfa4398

CONFIG_KEYSTONE_TOKEN_FORMAT=UUID

CONFIG_KEYSTONE_SERVICE_NAME=keystone

CONFIG_GLANCE_DB_PW=41264fc52ffd4fe8

CONFIG_GLANCE_KS_PW=f6a9398960534797

CONFIG_GLANCE_BACKEND=file

CONFIG_CINDER_DB_PW=5ac08c6d09ba4b69

CONFIG_CINDER_KS_PW=c8cb1ecb8c2b4f6f

CONFIG_CINDER_BACKEND=lvm

CONFIG_CINDER_VOLUMES_CREATE=y

CONFIG_CINDER_VOLUMES_SIZE=20G

CONFIG_CINDER_GLUSTER_MOUNTS=

CONFIG_CINDER_NFS_MOUNTS=

CONFIG_CINDER_NETAPP_HOSTNAME=

CONFIG_CINDER_NETAPP_SERVER_PORT=80

CONFIG_CINDER_NETAPP_STORAGE_FAMILY=ontap_cluster

CONFIG_CINDER_NETAPP_TRANSPORT_TYPE=http

CONFIG_CINDER_NETAPP_STORAGE_PROTOCOL=nfs

CONFIG_CINDER_NETAPP_SIZE_MULTIPLIER=1.0

CONFIG_CINDER_NETAPP_EXPIRY_THRES_MINUTES=720

CONFIG_CINDER_NETAPP_THRES_AVL_SIZE_PERC_START=20

CONFIG_CINDER_NETAPP_THRES_AVL_SIZE_PERC_STOP=60

CONFIG_CINDER_NETAPP_NFS_SHARES_CONFIG=

CONFIG_CINDER_NETAPP_VOLUME_LIST=

CONFIG_CINDER_NETAPP_VFILER=

CONFIG_CINDER_NETAPP_VSERVER=

CONFIG_CINDER_NETAPP_CONTROLLER_IPS=

CONFIG_CINDER_NETAPP_WEBSERVICE_PATH=/devmgr/v2

CONFIG_CINDER_NETAPP_STORAGE_POOLS=

CONFIG_NOVA_DB_PW=1e1b5aeeeaf342a8

CONFIG_NOVA_KS_PW=d9583177a2444f06

CONFIG_NOVA_SCHED_CPU_ALLOC_RATIO=16.0

CONFIG_NOVA_SCHED_RAM_ALLOC_RATIO=1.5

CONFIG_NOVA_COMPUTE_MIGRATE_PROTOCOL=tcp

CONFIG_NOVA_COMPUTE_PRIVIF=enp5s1
CONFIG_NOVA_NETWORK_MANAGER=nova.network.manager.FlatDHCPManager
CONFIG_NOVA_NETWORK_PUBIF=enp2s0
CONFIG_NOVA_NETWORK_PRIVIF=enp5s1
CONFIG_NOVA_NETWORK_FIXEDRANGE=192.168.32.0/22
CONFIG_NOVA_NETWORK_FLOATRANGE=10.3.4.0/22
CONFIG_NOVA_NETWORK_DEFAULTFLOATINGPOOL=nova
CONFIG_NOVA_NETWORK_AUTOASSIGNFLOATINGIP=n
CONFIG_NOVA_NETWORK_VLAN_START=100
CONFIG_NOVA_NETWORK_NUMBER=1
CONFIG_NOVA_NETWORK_SIZE=255
CONFIG_NEUTRON_KS_PW=808e36e154bd4cee
CONFIG_NEUTRON_DB_PW=0e2b927a21b44737
CONFIG_NEUTRON_L3_EXT_BRIDGE=br-ex
CONFIG_NEUTRON_L2_PLUGIN=ml2
CONFIG_LBAAS_INSTALL=n
CONFIG_NEUTRON_METERING_AGENT_INSTALL=n
CONFIG_NEUTRON_FWAAS=n
CONFIG_NEUTRON_ML2_TYPE_DRIVERS=vxlan
CONFIG_NEUTRON_ML2_TENANT_NETWORK_TYPES=vxlan
CONFIG_NEUTRON_ML2_MECHANISM_DRIVERS=openvswitch
CONFIG_NEUTRON_ML2_FLAT_NETWORKS=*
CONFIG_NEUTRON_ML2_VLAN_RANGES=
CONFIG_NEUTRON_ML2_TUNNEL_ID_RANGES=1001:2000
CONFIG_NEUTRON_ML2_VXLAN_GROUP=239.1.1.2
CONFIG_NEUTRON_ML2_VNI_RANGES=1001:2000
CONFIG_NEUTRON_L2_AGENT=openvswitch
CONFIG_NEUTRON_LB_TENANT_NETWORK_TYPE=local
CONFIG_NEUTRON_LB_VLAN_RANGES=
CONFIG_NEUTRON_LB_INTERFACE_MAPPINGS=
CONFIG_NEUTRON_OVS_TENANT_NETWORK_TYPE=vxlan
CONFIG_NEUTRON_OVS_VLAN_RANGES=
CONFIG_NEUTRON_OVS_BRIDGE_MAPPINGS=physnet1:br-ex
CONFIG_NEUTRON_OVS_BRIDGE_IFACES=
CONFIG_NEUTRON_OVS_TUNNEL_RANGES=1001:2000
CONFIG_NEUTRON_OVS_TUNNEL_IF=enp5s1
CONFIG_NEUTRON_OVS_VXLAN_UDP_PORT=4789

CONFIG_HORIZON_SSL=n

CONFIG_SSL_CERT=

CONFIG_SSL_KEY=

CONFIG_SSL_CACHAIN=

CONFIG_SWIFT_KS_PW=8f75bfd461234c30

CONFIG_SWIFT_STORAGES=

CONFIG_SWIFT_STORAGE_ZONES=1

CONFIG_SWIFT_STORAGE_REPLICAS=1

CONFIG_SWIFT_STORAGE_FSTYPE=ext4

CONFIG_SWIFT_HASH=a60aacbedde7429a

CONFIG_SWIFT_STORAGE_SIZE=2G

CONFIG_PROVISION_DEMO=y

CONFIG_PROVISION_TEMPEST=n

CONFIG_PROVISION_TEMPEST_USER=

CONFIG_PROVISION_TEMPEST_USER_PW=44faa4ebc3da4459

CONFIG_PROVISION_DEMO_FLOATRANGE=172.24.4.224/28

CONFIG_PROVISION_TEMPEST_REPO_URI=https://github.com/openstack/tempest.git

CONFIG_PROVISION_TEMPEST_REPO_REVISION=master

CONFIG_PROVISION_ALL_IN_ONE_OVS_BRIDGE=n

CONFIG_HEAT_DB_PW=PW_PLACEHOLDER

CONFIG_HEAT_AUTH_ENC_KEY=fc3fb7fee61e46b0

CONFIG_HEAT_KS_PW=PW_PLACEHOLDER

CONFIG_HEAT_CLOUDWATCH_INSTALL=n

CONFIG_HEAT_USING_TRUSTS=y

CONFIG_HEAT_CFN_INSTALL=n

CONFIG_HEAT_DOMAIN=heat

CONFIG_CEILOMETER_SECRET=19ae0e7430174349

CONFIG_CEILOMETER_KS_PW=337b08d4b3a44753

CONFIG_MONGODB_HOST=192.168.1.127

CONFIG_NAGIOS_PW=02f168ee8edd44e4

DEVICE="br-ex"
BOOTPROTO="static"
DNS1="83.221.202.254"
GATEWAY="192.168.1.1"
NM_CONTROLLED="no"
DEFROUTE="yes"
IPV4_FAILURE_FATAL="yes"
IPV6INIT=no
ONBOOT="yes"
TYPE="OVSIntPort"
OVS_BRIDGE=br-ex
DEVICETYPE="ovs"

DEVICE="enp2s0"
ONBOOT="yes"
TYPE="OVSPort"
DEVICETYPE="ovs"
OVS_BRIDGE=br-ex
NM_CONTROLLED=no
IPV6INIT=no

Setup Gluster Backend for cinder in Juno

*************************************************************************

Updates  /etc/cinder/cinder.conf to activate Gluster 3.5.2 backend

*************************************************************************

Gluster 3.5.2 cluster installed per  http://bderzhavets.blogspot.com/2014/08/setup-gluster-352-on-two-node.html

enabled_backends=gluster,lvm52

[gluster]

volume_driver = cinder.volume.drivers.glusterfs.GlusterfsDriver

glusterfs_shares_config = /etc/cinder/shares.conf

glusterfs_mount_point_base = /var/lib/cinder/volumes

volume_backend_name=GLUSTER

[lvm52]

volume_group=cinder-volumes52

volume_driver=cinder.volume.drivers.lvm.LVMISCSIDriver

volume_backend_name=LVM_iSCSI52
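The glusterfs_shares_config file referenced in the [gluster] section lists one Gluster share per line. Given the share mounted later in this post (192.168.1.127:/cinder-volumes57), /etc/cinder/shares.conf would contain the single line:

```
192.168.1.127:/cinder-volumes57
```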

+————————————–+——+

|                  ID                  | Name |

+————————————–+——+

| 64414f3a-7770-4958-b422-8db0c3e2f433 | lvms  |

+————————————–+——+


+————————————–+———+

|                  ID                  |   Name  |

+————————————–+———+

| 29917269-d73f-4c28-b295-59bfbda5d044 | gluster |

+————————————–+———+

+————————————–+———+

|                  ID                  |   Name  |

+————————————–+———+

| 29917269-d73f-4c28-b295-59bfbda5d044 | gluster |

| 64414f3a-7770-4958-b422-8db0c3e2f433 |   lvms  |

+————————————–+———+

[root@juno1 ~(keystone_admin)]# cinder type-key lvms set volume_backend_name=LVM_iSCSI

[root@juno1 ~(keystone_admin)]# cinder type-key gluster  set volume_backend_name=GLUSTER

Next step is cinder services restart :-

[root@juno1 ~(keystone_demo)]# for i in api scheduler volume ; do service openstack-cinder-${i} restart ; done

Filesystem                       Size  Used Avail Use% Mounted on

/dev/mapper/centos01-root00      147G   17G  130G  12% /

devtmpfs                         3.9G     0  3.9G   0% /dev

tmpfs                            3.9G   96K  3.9G   1% /dev/shm

tmpfs                            3.9G  9.1M  3.9G   1% /run

tmpfs                            3.9G     0  3.9G   0% /sys/fs/cgroup

/dev/loop0                       1.9G  6.0M  1.7G   1% /srv/node/swift_loopback

/dev/sda3                        477M  146M  302M  33% /boot

/dev/mapper/centos01-data5        98G  1.4G   97G   2% /data5

192.168.1.127:/cinder-volumes57   98G  1.4G   97G   2% /var/lib/cinder/volumes/8478b56ad61cf67ab9839fb0a5296965

tmpfs                            3.9G  9.1M  3.9G   1% /run/netns

[root@juno1 ~(keystone_demo)]# gluster volume info

Volume Name: cinder-volumes57

Type: Replicate

Volume ID: c1f2e1d2-0b11-426e-af3d-7af0d1d24d5e

Status: Started

Number of Bricks: 1 x 2 = 2

Transport-type: tcp

Bricks:

Brick1: juno1.localdomain:/data5/data-volumes

Brick2: juno2.localdomain:/data5/data-volumes

Options Reconfigured:

auth.allow: 192.168.1.*

[root@juno1 ~(keystone_demo)]# gluster volume status

Status of volume: cinder-volumes57

Gluster process                                    Port     Online    Pid
------------------------------------------------------------------------------
Brick juno1.localdomain:/data5/data-volumes        49152    Y         3806
Brick juno2.localdomain:/data5/data-volumes        49152    Y         3047
NFS Server on localhost                            2049     Y         4146
Self-heal Daemon on localhost                      N/A      Y         4141
NFS Server on juno2.localdomain                    2049     Y         3881
Self-heal Daemon on juno2.localdomain              N/A      Y         3877
------------------------------------------------------------------------------

**********************************************

Creating cinder volume of gluster type:-

**********************************************

[root@juno1 ~(keystone_demo)]# cinder create --volume_type gluster --image-id d83a6fec-ce82-411c-aa11-04cbb34bf2a2 --display_name UbuntuGLS1029 5

[root@juno1 ~(keystone_demo)]# cinder list

+--------------------------------------+--------+---------------+------+-------------+----------+--------------------------------------+
|                  ID                  | Status |  Display Name | Size | Volume Type | Bootable |             Attached to              |
+--------------------------------------+--------+---------------+------+-------------+----------+--------------------------------------+
| ca7ac946-3c4e-4544-ba3a-8cd085d5882b | in-use | UbuntuGLS1029 |  5   |   gluster   |   true   | cdb57658-795a-4a6e-82c9-67bf24acd498 |
+--------------------------------------+--------+---------------+------+-------------+----------+--------------------------------------+
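Since the volume above is bootable, an instance can be started from it directly, with no image supplied. A hedged sketch with the Juno-era python-novaclient (the `--boot-volume` shortcut; flavor and instance name are illustrative):

```shell
# Boot an instance from the existing bootable cinder volume; nova attaches it as the root disk
nova boot --flavor m1.small \
  --boot-volume ca7ac946-3c4e-4544-ba3a-8cd085d5882b \
  UbuntuGLS01
```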

[root@juno1 ~(keystone_demo)]# nova list

+--------------------------------------+-------------+-----------+------------+-------------+-----------------------------------+
| ID                                   | Name        | Status    | Task State | Power State | Networks                          |
+--------------------------------------+-------------+-----------+------------+-------------+-----------------------------------+
| 5c366eb9-8830-4432-b9bb-06239ae83d8a | CentOS7RS01 | SUSPENDED | -          | Shutdown    | demo_net=40.0.0.25, 192.168.1.161 |
| cdb57658-795a-4a6e-82c9-67bf24acd498 | UbuntuGLS01 | ACTIVE    | -          | Shutdown    | demo_net=40.0.0.22, 192.168.1.157 |
| 39d5312c-e661-4f9f-82ab-db528a7cdc9a | UbuntuRXS52 | ACTIVE    | -          | Running     | demo_net=40.0.0.32, 192.168.1.165 |
| 16911bfa-cf8b-44b7-b46e-8a54c9b3db69 | VF20GLR01   | ACTIVE    | -          | Running     | demo_net=40.0.0.23, 192.168.1.159 |
+--------------------------------------+-------------+-----------+------------+-------------+-----------------------------------+

Get detailed information about server-id :-

[root@juno1 ~(keystone_demo)]# nova show 16911bfa-cf8b-44b7-b46e-8a54c9b3db69

+--------------------------------------+----------------------------------------------------------+
| Property                             | Value                                                    |
+--------------------------------------+----------------------------------------------------------+
| OS-DCF:diskConfig                    | AUTO                                                     |
| OS-EXT-AZ:availability_zone          | nova                                                     |
| OS-EXT-STS:power_state               | 1                                                        |
| OS-EXT-STS:vm_state                  | active                                                   |
| OS-SRV-USG:launched_at               | 2014-11-01T22:20:12.000000                               |
| OS-SRV-USG:terminated_at             | -                                                        |
| accessIPv4                           |                                                          |
| accessIPv6                           |                                                          |
| config_drive                         |                                                          |
| created                              | 2014-11-01T22:20:04Z                                     |
| demo_net network                     | 40.0.0.23, 192.168.1.159                                 |
| flavor                               | m1.small (2)                                             |
| id                                   | 16911bfa-cf8b-44b7-b46e-8a54c9b3db69                     |
| image                                | Attempt to boot from volume - no image supplied          |
| key_name                             | oskey45                                                  |
| name                                 | VF20GLR01                                                |
| os-extended-volumes:volumes_attached | [{"id": "6ff40c2b-c363-42da-8988-5425eca0eea3"}]         |
| progress                             | 0                                                        |
| security_groups                      | default                                                  |
| status                               | ACTIVE                                                   |
| tenant_id                            | b302ecfaf76740189fca446e2e4a9a6e                         |
| updated                              | 2014-11-03T09:29:25Z                                     |
+--------------------------------------+----------------------------------------------------------+

[root@juno1 ~(keystone_demo)]# cinder show 6ff40c2b-c363-42da-8988-5425eca0eea3 | grep volume_type

volume_type | gluster

*******************************

Gluster cinder-volumes list :-

*******************************

[root@juno1 data-volumes(keystone_demo)]# cinder list

+--------------------------------------+--------+---------------+------+-------------+----------+--------------------------------------+
|                  ID                  | Status |  Display Name | Size | Volume Type | Bootable |             Attached to              |
+--------------------------------------+--------+---------------+------+-------------+----------+--------------------------------------+
| 6ff40c2b-c363-42da-8988-5425eca0eea3 | in-use |  VF20VLG0211  |  7   |   gluster   |   true   | 16911bfa-cf8b-44b7-b46e-8a54c9b3db69 |
| 8ade9f17-163d-48ca-bea5-bc9c6ea99b17 | in-use |  UbuntuLVS52  |  5   |     lvms    |   true   | 39d5312c-e661-4f9f-82ab-db528a7cdc9a |
| ca7ac946-3c4e-4544-ba3a-8cd085d5882b | in-use | UbuntuGLS1029 |  5   |   gluster   |   true   | cdb57658-795a-4a6e-82c9-67bf24acd498 |
| d8f77604-f984-4e98-81cc-971003d3fb54 | in-use |   CentOS7VLG  |  10  |   gluster   |   true   | 5c366eb9-8830-4432-b9bb-06239ae83d8a |
+--------------------------------------+--------+---------------+------+-------------+----------+--------------------------------------+

[root@juno1 data-volumes(keystone_demo)]# ls -la

total 7219560

drwxrwxr-x.   3 root cinder        4096 Nov  3 19:29 .

drwxr-xr-x.   3 root root            25 Nov  1 19:17 ..

drw-------. 252 root root          4096 Nov  3 19:21 .glusterfs

-rw-rw-rw-.   2 qemu qemu    7516192768 Nov  3 19:06 volume-6ff40c2b-c363-42da-8988-5425eca0eea3

-rw-rw-rw-.   2 qemu qemu    5368709120 Nov  3 19:21 volume-ca7ac946-3c4e-4544-ba3a-8cd085d5882b

-rw-rw-rw-.   2 root root   10737418240 Nov  2 10:57 volume-d8f77604-f984-4e98-81cc-971003d3fb54


RDO Setup Two Real Node (Controller+Compute) IceHouse Neutron ML2&OVS&VXLAN Cluster on CentOS 7

September 5, 2014

As of 07/28/2014 the bug https://ask.openstack.org/en/question/35705/attempt-of-rdo-aio-install-icehouse-on-centos-7/ is still pending, and the workaround suggested there should be applied during a two node RDO packstack installation.

Successful implementation of a Neutron ML2&OVS&VXLAN multi node setup requires a correct version of plugin.ini -> /etc/neutron/plugins/ml2/ml2_conf.ini, which appears to be generated with errors by packstack.

Two boxes have been set up, each one having 2 NICs (enp2s0, enp5s1), for the Controller && Compute Nodes setup. Before running `packstack --answer-file=TwoNodeVXLAN.txt`, SELINUX was set to permissive on both nodes. Both enp5s1 interfaces were assigned IPs supporting the VXLAN tunnel (192.168.0.127, 192.168.0.137). Services firewalld and NetworkManager are disabled; the IPv4 firewall with iptables and the network service are enabled and running. Packstack is bound to the public IP of interface enp2s0, 192.168.1.127; the Compute Node is 192.168.1.137 (view answer-file).
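The node preparation described above can be sketched as a short script to run on both nodes before packstack (service names assumed for CentOS 7 with the iptables-services package installed):

```shell
# SELinux to permissive for the installation
setenforce 0
sed -i 's/^SELINUX=.*/SELINUX=permissive/' /etc/selinux/config

# Replace firewalld/NetworkManager with iptables + the classic network service
systemctl stop firewalld NetworkManager
systemctl disable firewalld NetworkManager
systemctl enable network iptables
systemctl start network iptables
```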

Setup configuration

- Controller node: Nova, Keystone, Cinder, Glance, Neutron (using Open vSwitch plugin && VXLAN)
- Compute node: Nova (nova-compute), Neutron (openvswitch-agent)

icehouse1.localdomain   –  Controller (192.168.1.127)

icehouse2.localdomain   –  Compute   (192.168.1.137)

[general]

CONFIG_SSH_KEY=/root/.ssh/id_rsa.pub

CONFIG_MYSQL_INSTALL=y

CONFIG_GLANCE_INSTALL=y

CONFIG_CINDER_INSTALL=y

CONFIG_NOVA_INSTALL=y

CONFIG_NEUTRON_INSTALL=y

CONFIG_HORIZON_INSTALL=y

CONFIG_SWIFT_INSTALL=n

CONFIG_CEILOMETER_INSTALL=y

CONFIG_HEAT_INSTALL=n

CONFIG_CLIENT_INSTALL=y

CONFIG_NTP_SERVERS=

CONFIG_NAGIOS_INSTALL=y

EXCLUDE_SERVERS=

CONFIG_DEBUG_MODE=n

CONFIG_VMWARE_BACKEND=n

CONFIG_MYSQL_HOST=192.168.1.127

CONFIG_MYSQL_USER=root

CONFIG_MYSQL_PW=a7f0349d1f7a4ab0

CONFIG_AMQP_SERVER=rabbitmq

CONFIG_AMQP_HOST=192.168.1.127

CONFIG_AMQP_ENABLE_SSL=n

CONFIG_AMQP_ENABLE_AUTH=n

CONFIG_AMQP_NSS_CERTDB_PW=0915db728b00409caf4b6e433b756308

CONFIG_AMQP_SSL_PORT=5671

CONFIG_AMQP_SSL_CERT_FILE=/etc/pki/tls/certs/amqp_selfcert.pem

CONFIG_AMQP_SSL_KEY_FILE=/etc/pki/tls/private/amqp_selfkey.pem

CONFIG_AMQP_SSL_SELF_SIGNED=y

CONFIG_AMQP_AUTH_USER=amqp_user

CONFIG_KEYSTONE_HOST=192.168.1.127

CONFIG_KEYSTONE_DB_PW=32419736ee454c2c

CONFIG_KEYSTONE_DEMO_PW=56eb6360019e45bf

CONFIG_KEYSTONE_TOKEN_FORMAT=PKI

CONFIG_GLANCE_HOST=192.168.1.127

CONFIG_GLANCE_DB_PW=e51feef536104b49

CONFIG_GLANCE_KS_PW=2458775cd64848cb

CONFIG_CINDER_HOST=192.168.1.127

CONFIG_CINDER_DB_PW=bcf3b09c9c4144e2

CONFIG_CINDER_KS_PW=888c59cc113e4489

CONFIG_CINDER_BACKEND=lvm

CONFIG_CINDER_VOLUMES_CREATE=y

CONFIG_CINDER_VOLUMES_SIZE=15G

CONFIG_CINDER_GLUSTER_MOUNTS=

CONFIG_CINDER_NFS_MOUNTS=

CONFIG_VCENTER_HOST=192.168.1.127

CONFIG_VCENTER_USER=

CONFIG_NOVA_API_HOST=192.168.1.127

CONFIG_NOVA_CERT_HOST=192.168.1.127

CONFIG_NOVA_VNCPROXY_HOST=192.168.1.127

CONFIG_NOVA_COMPUTE_HOSTS=192.168.1.137

CONFIG_NOVA_CONDUCTOR_HOST=192.168.1.127

CONFIG_NOVA_DB_PW=8cc18e22eaeb4c4d

CONFIG_NOVA_KS_PW=aaf8cf4c60224150

CONFIG_NOVA_SCHED_HOST=192.168.1.127

CONFIG_NOVA_SCHED_CPU_ALLOC_RATIO=16.0

CONFIG_NOVA_SCHED_RAM_ALLOC_RATIO=1.5

CONFIG_NOVA_COMPUTE_PRIVIF=enp5s1

CONFIG_NOVA_NETWORK_HOSTS=192.168.1.127

CONFIG_NOVA_NETWORK_MANAGER=nova.network.manager.FlatDHCPManager

CONFIG_NOVA_NETWORK_PUBIF=enp2s0

CONFIG_NOVA_NETWORK_PRIVIF=enp5s1

CONFIG_NOVA_NETWORK_FIXEDRANGE=192.168.32.0/22

CONFIG_NOVA_NETWORK_FLOATRANGE=10.3.4.0/22

CONFIG_NOVA_NETWORK_DEFAULTFLOATINGPOOL=nova

CONFIG_NOVA_NETWORK_AUTOASSIGNFLOATINGIP=n

CONFIG_NOVA_NETWORK_VLAN_START=100

CONFIG_NOVA_NETWORK_NUMBER=1

CONFIG_NOVA_NETWORK_SIZE=255

CONFIG_VCENTER_HOST=192.168.1.127

CONFIG_VCENTER_USER=

CONFIG_VCENTER_CLUSTER_NAME=

CONFIG_NEUTRON_SERVER_HOST=192.168.1.127

CONFIG_NEUTRON_KS_PW=5f11f559abc94440

CONFIG_NEUTRON_DB_PW=0302dcfeb69e439f

CONFIG_NEUTRON_L3_HOSTS=192.168.1.127

CONFIG_NEUTRON_L3_EXT_BRIDGE=br-ex

CONFIG_NEUTRON_DHCP_HOSTS=192.168.1.127

CONFIG_NEUTRON_LBAAS_HOSTS=

CONFIG_NEUTRON_L2_PLUGIN=ml2

############################################

CONFIG_NEUTRON_ML2_TYPE_DRIVERS=vxlan

CONFIG_NEUTRON_ML2_TENANT_NETWORK_TYPES=vxlan

############################################

CONFIG_NEUTRON_ML2_MECHANISM_DRIVERS=openvswitch

CONFIG_NEUTRON_ML2_FLAT_NETWORKS=*

CONFIG_NEUTRON_ML2_VLAN_RANGES=

CONFIG_NEUTRON_ML2_TUNNEL_ID_RANGES=1001:2000

CONFIG_NEUTRON_ML2_VXLAN_GROUP=239.1.1.2

CONFIG_NEUTRON_ML2_VNI_RANGES=1001:2000

CONFIG_NEUTRON_L2_AGENT=openvswitch

CONFIG_NEUTRON_LB_TENANT_NETWORK_TYPE=local

CONFIG_NEUTRON_LB_VLAN_RANGES=

CONFIG_NEUTRON_LB_INTERFACE_MAPPINGS=

#########################################

CONFIG_NEUTRON_OVS_TENANT_NETWORK_TYPE=vxlan

CONFIG_NEUTRON_OVS_VLAN_RANGES=

CONFIG_NEUTRON_OVS_BRIDGE_MAPPINGS=physnet1:br-ex

CONFIG_NEUTRON_OVS_BRIDGE_IFACES=

CONFIG_NEUTRON_OVS_TUNNEL_RANGES=1001:2000

CONFIG_NEUTRON_OVS_TUNNEL_IF=enp5s1

########################################

CONFIG_NEUTRON_OVS_VXLAN_UDP_PORT=4789

CONFIG_OSCLIENT_HOST=192.168.1.127

CONFIG_HORIZON_HOST=192.168.1.127

CONFIG_HORIZON_SSL=n

CONFIG_SSL_CERT=

CONFIG_SSL_KEY=

CONFIG_SWIFT_PROXY_HOSTS=192.168.1.127

CONFIG_SWIFT_KS_PW=63d3108083ac495b

CONFIG_SWIFT_STORAGE_HOSTS=192.168.1.127

CONFIG_SWIFT_STORAGE_ZONES=1

CONFIG_SWIFT_STORAGE_REPLICAS=1

CONFIG_SWIFT_STORAGE_FSTYPE=ext4

CONFIG_SWIFT_HASH=ebf91dbf930c49ca

CONFIG_SWIFT_STORAGE_SIZE=2G

CONFIG_PROVISION_DEMO=y

CONFIG_PROVISION_TEMPEST=n

CONFIG_PROVISION_DEMO_FLOATRANGE=172.24.4.224/28

CONFIG_PROVISION_TEMPEST_REPO_URI=https://github.com/openstack/tempest.git

CONFIG_PROVISION_TEMPEST_REPO_REVISION=master

CONFIG_PROVISION_ALL_IN_ONE_OVS_BRIDGE=n

CONFIG_HEAT_HOST=192.168.1.127

CONFIG_HEAT_DB_PW=f0be2b0fa2044183

CONFIG_HEAT_AUTH_ENC_KEY=29419b1f4e574e5e

CONFIG_HEAT_KS_PW=d5c39c630c364c5b

CONFIG_HEAT_CLOUDWATCH_INSTALL=n

CONFIG_HEAT_CFN_INSTALL=n

CONFIG_HEAT_CLOUDWATCH_HOST=192.168.1.127

CONFIG_HEAT_CFN_HOST=192.168.1.127

CONFIG_CEILOMETER_HOST=192.168.1.127

CONFIG_CEILOMETER_SECRET=d1ed1459830e4288

CONFIG_CEILOMETER_KS_PW=84f18f2e478f4230

CONFIG_MONGODB_HOST=192.168.1.127

CONFIG_NAGIOS_HOST=192.168.1.127

CONFIG_NAGIOS_PW=e2d02c03b5664ffe

CONFIG_USE_EPEL=y

CONFIG_REPO=

CONFIG_RH_USER=

CONFIG_RH_PW=

CONFIG_RH_BETA_REPO=n

CONFIG_SATELLITE_URL=

CONFIG_SATELLITE_USER=

CONFIG_SATELLITE_PW=

CONFIG_SATELLITE_AKEY=

CONFIG_SATELLITE_CACERT=

CONFIG_SATELLITE_PROFILE=

CONFIG_SATELLITE_FLAGS=

CONFIG_SATELLITE_PROXY=

CONFIG_SATELLITE_PROXY_USER=

CONFIG_SATELLITE_PROXY_PW=

[ml2]

type_drivers = vxlan

tenant_network_types = vxlan

mechanism_drivers =openvswitch

[ml2_type_flat]

[ml2_type_vlan]

[ml2_type_gre]

[ml2_type_vxlan]

vni_ranges =1001:2000

vxlan_group =239.1.1.2

[OVS]

local_ip=192.168.0.127

enable_tunneling=True

integration_bridge=br-int

tunnel_bridge=br-tun

[securitygroup]

enable_security_group = True

firewall_driver=neutron.agent.linux.iptables_firewall.OVSHybridIptablesFirewallDriver

[agent]

polling_interval=2

total 64
-rw-r--r--. 1 root root      193 Jul 29 16:15 api-paste.ini
-rw-r-----. 1 root neutron  3853 Jul 29 16:14 dhcp_agent.ini
-rw-r-----. 1 root neutron   208 Jul 29 16:15 fwaas_driver.ini
-rw-r-----. 1 root neutron  3431 Jul 29 16:14 l3_agent.ini
-rw-r-----. 1 root neutron  1400 Jun  8 01:38 lbaas_agent.ini
-rw-r-----. 1 root neutron  1481 Jul 29 16:15 metadata_agent.ini
-rw-r-----. 1 root neutron 19150 Jul 29 16:15 neutron.conf
lrwxrwxrwx. 1 root root       37 Jul 29 16:14 plugin.ini -> /etc/neutron/plugins/ml2/ml2_conf.ini
-rw-r--r--. 1 root root      452 Jul 29 17:11 plugin.out
drwxr-xr-x. 4 root root       34 Jul 29 16:14 plugins
-rw-r-----. 1 root neutron  6148 Jun  8 01:38 policy.json
-rw-r--r--. 1 root root       78 Jul  2 15:11 release
-rw-r--r--. 1 root root     1216 Jun  8 01:38 rootwrap.conf

On Controller

[root@icehouse1 ~]# ovs-vsctl show

2742fa6e-78bf-440e-a2c1-cb48242ea565
    Bridge br-ex
        Port phy-br-ex
            Interface phy-br-ex
        Port "qg-76f29fee-9c"
            Interface "qg-76f29fee-9c"
                type: internal
        Port br-ex
            Interface br-ex
                type: internal
        Port "enp2s0"
            Interface "enp2s0"
    Bridge br-tun
        Port "vxlan-c0a80089"
            Interface "vxlan-c0a80089"
                type: vxlan
                options: {in_key=flow, local_ip="192.168.0.127", out_key=flow, remote_ip="192.168.0.137"}
        Port patch-int
            Interface patch-int
                type: patch
                options: {peer=patch-tun}
        Port br-tun
            Interface br-tun
                type: internal
    Bridge br-int
            tag: 1
                type: internal
        Port patch-tun
            Interface patch-tun
                type: patch
                options: {peer=patch-int}
        Port "tapff8659ee-8d"
            tag: 1
            Interface "tapff8659ee-8d"
                type: internal
        Port br-int
            Interface br-int
                type: internal
        Port int-br-ex
            Interface int-br-ex
    ovs_version: "2.0.0"
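Note the VXLAN port naming: the agent appears to name each tunnel port after the remote VTEP IP rendered as hex octets, so `vxlan-c0a80089` on the controller points at 192.168.0.137 and `vxlan-c0a8007f` on the compute node points back at 192.168.0.127. A quick sketch of the encoding:

```shell
# Hex-encode a remote VTEP IP the way the OVS agent names its VXLAN ports
remote_ip="192.168.0.137"
printf 'vxlan-%02x%02x%02x%02x\n' $(echo "$remote_ip" | tr '.' ' ')
# -> vxlan-c0a80089
```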

On Compute

[root@icehouse2 ~]# ovs-vsctl show

642d8c9f-116e-4b44-842a-e975e506fe24
    Bridge br-ex
        Port phy-br-ex
            Interface phy-br-ex
        Port br-ex
            Interface br-ex
                type: internal
    Bridge br-tun
        Port br-tun
            Interface br-tun
                type: internal
        Port patch-int
            Interface patch-int
                type: patch
                options: {peer=patch-tun}
        Port "vxlan-c0a8007f"
            Interface "vxlan-c0a8007f"
                type: vxlan
                options: {in_key=flow, local_ip="192.168.0.137", out_key=flow, remote_ip="192.168.0.127"}
    Bridge br-int
        Port patch-tun
            Interface patch-tun
                type: patch
                options: {peer=patch-int}
        Port int-br-ex
            Interface int-br-ex
        Port "qvodc2c598a-b3"
            tag: 1
            Interface "qvodc2c598a-b3"
        Port br-int
            Interface br-int
                type: internal
        Port "qvo25cbd1fa-96"
            tag: 1
            Interface "qvo25cbd1fa-96"
    ovs_version: "2.0.0"

RDO Setup Two Real Node (Controller+Compute) IceHouse Neutron ML2&OVS&VXLAN Cluster on CentOS 7

July 29, 2014

As of 07/28/2014 the bug https://ask.openstack.org/en/question/35705/attempt-of-rdo-aio-install-icehouse-on-centos-7/ is still pending, and the workaround suggested there should be applied during a two node RDO packstack installation.
Successful implementation of a Neutron ML2&OVS&VXLAN multi node setup requires a correct version of plugin.ini -> /etc/neutron/plugins/ml2/ml2_conf.ini, which appears to be generated with errors by packstack.

Two boxes have been set up, each one having 2 NICs (enp2s0, enp5s1), for the Controller && Compute Nodes setup. Before running `packstack --answer-file=TwoNodeVXLAN.txt`, SELINUX was set to permissive on both nodes. Both enp5s1 interfaces were assigned IPs and set to promiscuous mode (192.168.0.127, 192.168.0.137). Services firewalld and NetworkManager are disabled; the IPv4 firewall with iptables and the network service are enabled and running. Packstack is bound to the public IP of interface enp2s0, 192.168.1.127; the Compute Node is 192.168.1.137 (view answer-file).

Setup configuration

- Controller node: Nova, Keystone, Cinder, Glance, Neutron (using Open vSwitch plugin && VXLAN)
– Compute node: Nova (nova-compute), Neutron (openvswitch-agent)

icehouse1.localdomain   –  Controller (192.168.1.127)
icehouse2.localdomain   –  Compute   (192.168.1.137)

[general]
CONFIG_SSH_KEY=/root/.ssh/id_rsa.pub
CONFIG_MYSQL_INSTALL=y
CONFIG_GLANCE_INSTALL=y
CONFIG_CINDER_INSTALL=y
CONFIG_NOVA_INSTALL=y
CONFIG_NEUTRON_INSTALL=y
CONFIG_HORIZON_INSTALL=y
CONFIG_SWIFT_INSTALL=n
CONFIG_CEILOMETER_INSTALL=y
CONFIG_HEAT_INSTALL=n
CONFIG_CLIENT_INSTALL=y
CONFIG_NTP_SERVERS=
CONFIG_NAGIOS_INSTALL=y
EXCLUDE_SERVERS=
CONFIG_DEBUG_MODE=n
CONFIG_VMWARE_BACKEND=n
CONFIG_MYSQL_HOST=192.168.1.127
CONFIG_MYSQL_USER=root
CONFIG_MYSQL_PW=a7f0349d1f7a4ab0
CONFIG_AMQP_SERVER=rabbitmq
CONFIG_AMQP_HOST=192.168.1.127
CONFIG_AMQP_ENABLE_SSL=n
CONFIG_AMQP_ENABLE_AUTH=n
CONFIG_AMQP_NSS_CERTDB_PW=0915db728b00409caf4b6e433b756308
CONFIG_AMQP_SSL_PORT=5671
CONFIG_AMQP_SSL_CERT_FILE=/etc/pki/tls/certs/amqp_selfcert.pem
CONFIG_AMQP_SSL_KEY_FILE=/etc/pki/tls/private/amqp_selfkey.pem
CONFIG_AMQP_SSL_SELF_SIGNED=y
CONFIG_AMQP_AUTH_USER=amqp_user
CONFIG_KEYSTONE_HOST=192.168.1.127
CONFIG_KEYSTONE_DB_PW=32419736ee454c2c
CONFIG_KEYSTONE_DEMO_PW=56eb6360019e45bf
CONFIG_KEYSTONE_TOKEN_FORMAT=PKI
CONFIG_GLANCE_HOST=192.168.1.127
CONFIG_GLANCE_DB_PW=e51feef536104b49
CONFIG_GLANCE_KS_PW=2458775cd64848cb
CONFIG_CINDER_HOST=192.168.1.127
CONFIG_CINDER_DB_PW=bcf3b09c9c4144e2
CONFIG_CINDER_KS_PW=888c59cc113e4489
CONFIG_CINDER_BACKEND=lvm
CONFIG_CINDER_VOLUMES_CREATE=y
CONFIG_CINDER_VOLUMES_SIZE=15G
CONFIG_CINDER_GLUSTER_MOUNTS=
CONFIG_CINDER_NFS_MOUNTS=
CONFIG_VCENTER_HOST=192.168.1.127
CONFIG_VCENTER_USER=
CONFIG_NOVA_API_HOST=192.168.1.127
CONFIG_NOVA_CERT_HOST=192.168.1.127
CONFIG_NOVA_VNCPROXY_HOST=192.168.1.127
CONFIG_NOVA_COMPUTE_HOSTS=192.168.1.137
CONFIG_NOVA_CONDUCTOR_HOST=192.168.1.127
CONFIG_NOVA_DB_PW=8cc18e22eaeb4c4d
CONFIG_NOVA_KS_PW=aaf8cf4c60224150
CONFIG_NOVA_SCHED_HOST=192.168.1.127
CONFIG_NOVA_SCHED_CPU_ALLOC_RATIO=16.0
CONFIG_NOVA_SCHED_RAM_ALLOC_RATIO=1.5
CONFIG_NOVA_COMPUTE_PRIVIF=p4p1
CONFIG_NOVA_NETWORK_HOSTS=192.168.1.127
CONFIG_NOVA_NETWORK_MANAGER=nova.network.manager.FlatDHCPManager
CONFIG_NOVA_NETWORK_PUBIF=enp2s0
CONFIG_NOVA_NETWORK_PRIVIF=enp5s1
CONFIG_NOVA_NETWORK_FIXEDRANGE=192.168.32.0/22
CONFIG_NOVA_NETWORK_FLOATRANGE=10.3.4.0/22
CONFIG_NOVA_NETWORK_DEFAULTFLOATINGPOOL=nova
CONFIG_NOVA_NETWORK_AUTOASSIGNFLOATINGIP=n
CONFIG_NOVA_NETWORK_VLAN_START=100
CONFIG_NOVA_NETWORK_NUMBER=1
CONFIG_NOVA_NETWORK_SIZE=255
CONFIG_VCENTER_HOST=192.168.1.127
CONFIG_VCENTER_USER=
CONFIG_VCENTER_CLUSTER_NAME=
CONFIG_NEUTRON_SERVER_HOST=192.168.1.127
CONFIG_NEUTRON_KS_PW=5f11f559abc94440
CONFIG_NEUTRON_DB_PW=0302dcfeb69e439f
CONFIG_NEUTRON_L3_HOSTS=192.168.1.127
CONFIG_NEUTRON_L3_EXT_BRIDGE=br-ex
CONFIG_NEUTRON_DHCP_HOSTS=192.168.1.127
CONFIG_NEUTRON_LBAAS_HOSTS=
CONFIG_NEUTRON_L2_PLUGIN=ml2
############################################
CONFIG_NEUTRON_ML2_TYPE_DRIVERS=vxlan
CONFIG_NEUTRON_ML2_TENANT_NETWORK_TYPES=vxlan
############################################
CONFIG_NEUTRON_ML2_MECHANISM_DRIVERS=openvswitch
CONFIG_NEUTRON_ML2_FLAT_NETWORKS=*
CONFIG_NEUTRON_ML2_VLAN_RANGES=
CONFIG_NEUTRON_ML2_TUNNEL_ID_RANGES=1001:2000
CONFIG_NEUTRON_ML2_VXLAN_GROUP=239.1.1.2
CONFIG_NEUTRON_ML2_VNI_RANGES=1001:2000
CONFIG_NEUTRON_L2_AGENT=openvswitch
CONFIG_NEUTRON_LB_TENANT_NETWORK_TYPE=local
CONFIG_NEUTRON_LB_VLAN_RANGES=
CONFIG_NEUTRON_LB_INTERFACE_MAPPINGS=
#########################################
CONFIG_NEUTRON_OVS_TENANT_NETWORK_TYPE=vxlan
CONFIG_NEUTRON_OVS_VLAN_RANGES=
CONFIG_NEUTRON_OVS_BRIDGE_MAPPINGS=physnet1:br-ex
CONFIG_NEUTRON_OVS_BRIDGE_IFACES=
CONFIG_NEUTRON_OVS_TUNNEL_RANGES=1001:2000
CONFIG_NEUTRON_OVS_TUNNEL_IF=enp5s1
########################################
CONFIG_NEUTRON_OVS_VXLAN_UDP_PORT=4789
CONFIG_OSCLIENT_HOST=192.168.1.127
CONFIG_HORIZON_HOST=192.168.1.127
CONFIG_HORIZON_SSL=n
CONFIG_SSL_CERT=
CONFIG_SSL_KEY=
CONFIG_SWIFT_PROXY_HOSTS=192.168.1.127
CONFIG_SWIFT_KS_PW=63d3108083ac495b
CONFIG_SWIFT_STORAGE_HOSTS=192.168.1.127
CONFIG_SWIFT_STORAGE_ZONES=1
CONFIG_SWIFT_STORAGE_REPLICAS=1
CONFIG_SWIFT_STORAGE_FSTYPE=ext4
CONFIG_SWIFT_HASH=ebf91dbf930c49ca
CONFIG_SWIFT_STORAGE_SIZE=2G
CONFIG_PROVISION_DEMO=y
CONFIG_PROVISION_TEMPEST=n
CONFIG_PROVISION_DEMO_FLOATRANGE=172.24.4.224/28
CONFIG_PROVISION_TEMPEST_REPO_URI=https://github.com/openstack/tempest.git
CONFIG_PROVISION_TEMPEST_REPO_REVISION=master
CONFIG_PROVISION_ALL_IN_ONE_OVS_BRIDGE=n
CONFIG_HEAT_HOST=192.168.1.127
CONFIG_HEAT_DB_PW=f0be2b0fa2044183
CONFIG_HEAT_AUTH_ENC_KEY=29419b1f4e574e5e
CONFIG_HEAT_KS_PW=d5c39c630c364c5b
CONFIG_HEAT_CLOUDWATCH_INSTALL=n
CONFIG_HEAT_CFN_INSTALL=n
CONFIG_HEAT_CLOUDWATCH_HOST=192.168.1.127
CONFIG_HEAT_CFN_HOST=192.168.1.127
CONFIG_CEILOMETER_HOST=192.168.1.127
CONFIG_CEILOMETER_SECRET=d1ed1459830e4288
CONFIG_CEILOMETER_KS_PW=84f18f2e478f4230
CONFIG_MONGODB_HOST=192.168.1.127
CONFIG_NAGIOS_HOST=192.168.1.127
CONFIG_NAGIOS_PW=e2d02c03b5664ffe
CONFIG_USE_EPEL=y
CONFIG_REPO=
CONFIG_RH_USER=
CONFIG_RH_PW=
CONFIG_RH_BETA_REPO=n
CONFIG_SATELLITE_URL=
CONFIG_SATELLITE_USER=
CONFIG_SATELLITE_PW=
CONFIG_SATELLITE_AKEY=
CONFIG_SATELLITE_CACERT=
CONFIG_SATELLITE_PROFILE=
CONFIG_SATELLITE_FLAGS=
CONFIG_SATELLITE_PROXY=
CONFIG_SATELLITE_PROXY_USER=
CONFIG_SATELLITE_PROXY_PW=

[ml2]
type_drivers = vxlan
tenant_network_types = vxlan
mechanism_drivers =openvswitch
[ml2_type_flat]
[ml2_type_vlan]
[ml2_type_gre]
[ml2_type_vxlan]
vni_ranges =1001:2000
vxlan_group =239.1.1.2
[OVS]
local_ip=192.168.1.127
enable_tunneling=True
integration_bridge=br-int
tunnel_bridge=br-tun
[securitygroup]
enable_security_group = True
firewall_driver=neutron.agent.linux.iptables_firewall.OVSHybridIptablesFirewallDriver
[agent]
polling_interval=2

total 64
-rw-r--r--. 1 root root      193 Jul 29 16:15 api-paste.ini
-rw-r-----. 1 root neutron  3853 Jul 29 16:14 dhcp_agent.ini
-rw-r-----. 1 root neutron   208 Jul 29 16:15 fwaas_driver.ini
-rw-r-----. 1 root neutron  3431 Jul 29 16:14 l3_agent.ini
-rw-r-----. 1 root neutron  1400 Jun  8 01:38 lbaas_agent.ini
-rw-r-----. 1 root neutron  1481 Jul 29 16:15 metadata_agent.ini
-rw-r-----. 1 root neutron 19150 Jul 29 16:15 neutron.conf
lrwxrwxrwx. 1 root root       37 Jul 29 16:14 plugin.ini -> /etc/neutron/plugins/ml2/ml2_conf.ini
-rw-r--r--. 1 root root      452 Jul 29 17:11 plugin.out
drwxr-xr-x. 4 root root       34 Jul 29 16:14 plugins
-rw-r-----. 1 root neutron  6148 Jun  8 01:38 policy.json
-rw-r--r--. 1 root root       78 Jul  2 15:11 release
-rw-r--r--. 1 root root     1216 Jun  8 01:38 rootwrap.conf

On Controller