Neutron workflow for the Docker hypervisor on an RDO Mitaka DVR cluster, in an appropriate amount of detail, plus HA support for the Glance storage used to load nova-docker instances

Why does DVR come into the picture?

This brings back to mind a similar problem with the Nova-Docker driver (Kilo), where I hit the same kind of issue (VXLAN connection Controller <==> Compute) on F22 (OVS 2.4.0) while the same driver worked fine on CentOS 7.1 (OVS 2.3.1). My guess is that the Nova-Docker driver has a problem with OVS 2.4.0, regardless of whether the stable/kilo, stable/liberty, or stable/mitaka branch is checked out for the driver build.

Note that the issue is specific to the ML2&OVS&VXLAN setup; an RDO Mitaka ML2&OVS&VLAN deployment works with Nova-Docker (stable/mitaka) with no problems.

I have not run ovs-ofctl dump-flows on the br-tun bridges and so on, because even with the malfunction demonstrated I cannot file it in BZ: the Nova-Docker driver is not packaged for RDO, so it is upstream code, and upstream won't consider an issue that involves building the driver from source on RDO Mitaka (RC1).

Thus, as a quick and efficient workaround, I suggest a DVR deployment. North-south traffic then gets forwarded straight from the host running the Docker hypervisor to the Internet and vice versa, thanks to the basic "fg" functionality (the outgoing interface of the fip namespace, residing on the Compute node whose L3 agent runs in "dvr" agent_mode).

**************************
Procedure in details
**************************

First, install the repositories for RDO Mitaka (the most recent build that passed CI) :-
# yum -y install yum-plugin-priorities
# cd /etc/yum.repos.d
# curl -O https://trunk.rdoproject.org/centos7-mitaka/delorean-deps.repo
# curl -O https://trunk.rdoproject.org/centos7-mitaka/current-passed-ci/delorean.repo
# yum -y install openstack-packstack (Controller only)

Now proceed as follows :-

1. Here is the answer file used to deploy the pre-DVR cluster.
2. See the pre-deployment actions to be undertaken on the Controller/Storage node.

Before the DVR setup, switch the Glance back end to Swift (Swift is configured in the answer file as follows):

CONFIG_SWIFT_STORAGES=/dev/vdb1,/dev/vdc1,/dev/vdd1
CONFIG_SWIFT_STORAGE_ZONES=3
CONFIG_SWIFT_STORAGE_REPLICAS=3
CONFIG_SWIFT_STORAGE_FSTYPE=xfs
CONFIG_SWIFT_HASH=a55607bff10c4210
CONFIG_SWIFT_STORAGE_SIZE=10G

Upon setup completion, on the storage node :-

[root@ip-192-169-142-127 ~]# df -h
Filesystem               Size  Used Avail Use% Mounted on
/dev/mapper/centos-root   45G  5.3G   40G  12% /
devtmpfs                 2.8G     0  2.8G   0% /dev
tmpfs                    2.8G  204K  2.8G   1% /dev/shm
tmpfs                    2.8G   25M  2.8G   1% /run
tmpfs                    2.8G     0  2.8G   0% /sys/fs/cgroup
/dev/vdc1                 10G  2.5G  7.5G  25% /srv/node/vdc1
/dev/vdb1                 10G  2.5G  7.5G  25% /srv/node/vdb1
/dev/vdd1                 10G  2.5G  7.5G  25% /srv/node/vdd1

/dev/vda1                497M  211M  286M  43% /boot
tmpfs                    567M  4.0K  567M   1% /run/user/42
tmpfs                    567M  8.0K  567M   1% /run/user/1000

****************************
Update  glance-api.conf
****************************

[glance_store]
stores = swift
default_store = swift
swift_store_auth_address = http://192.169.142.127:5000/v2.0/
swift_store_user = services:glance
swift_store_key = f6a9398960534797 

swift_store_create_container_on_put = True
os_region_name=RegionOne

# openstack-service restart glance

# keystone user-role-add --tenant_id=$UUID_SERVICES_TENANT \
  --user=$UUID_GLANCE_USER --role=$UUID_ResellerAdmin_ROLE

The value f6a9398960534797 corresponds to CONFIG_GLANCE_KS_PW in the answer file, i.e. the Keystone password of the glance user, used for authentication.
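The three UUIDs referenced above can be looked up beforehand, for instance as follows (assuming the keystone v2.0 CLI is still available, as in the role-add command above; otherwise the corresponding openstack client commands apply):

# . keystonerc_admin
# keystone tenant-list | grep services
# keystone user-list | grep glance
# keystone role-list | grep ResellerAdmin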

2. Convert the cluster to DVR as advised in "RDO Liberty DVR Neutron workflow on CentOS 7.2":
http://dbaxps.blogspot.com/2015/10/rdo-liberty-rc-dvr-deployment.html
Just one note for RDO Mitaka: on each compute node run

# ovs-vsctl add-br br-ex
# ovs-vsctl add-port br-ex eth0

Then configure the external bridge and interface files.

***********************************************************
On the Controller (X=2) and the Computes (X=3,4) update :-
***********************************************************

# cat ifcfg-br-ex
DEVICE="br-ex"
BOOTPROTO="static"
IPADDR="192.169.142.1(X)7"
NETMASK="255.255.255.0"
DNS1="83.221.202.254"
BROADCAST="192.169.142.255"
GATEWAY="192.169.142.1"
NM_CONTROLLED="no"
DEFROUTE="yes"
IPV4_FAILURE_FATAL="yes"
IPV6INIT=no
ONBOOT="yes"
TYPE="OVSIntPort"
OVS_BRIDGE=br-ex
DEVICETYPE="ovs"

# cat ifcfg-eth0
DEVICE="eth0"
ONBOOT="yes"
TYPE="OVSPort"
DEVICETYPE="ovs"
OVS_BRIDGE=br-ex
NM_CONTROLLED=no
IPV6INIT=no

***************************
Then run the following script
***************************

#!/bin/bash -x
chkconfig network on
systemctl stop NetworkManager
systemctl disable NetworkManager
service network restart

Reboot the node.

**********************************************
Nova-Docker Setup on each Compute
**********************************************

# curl -sSL https://get.docker.com/ | sh
# usermod -aG docker nova      ( adding nova to the docker group alone does not seem to be enough, even with 660 on docker.sock )
# systemctl start docker
# systemctl enable docker
# chmod 666  /var/run/docker.sock (add to /etc/rc.d/rc.local)
# easy_install pip
# git clone -b stable/mitaka   https://github.com/openstack/nova-docker
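Instead of the chmod in rc.local above, the permission fix can also be re-applied automatically on every Docker start via a small systemd drop-in (just a sketch; the drop-in file name is arbitrary):

# mkdir -p /etc/systemd/system/docker.service.d
# cat > /etc/systemd/system/docker.service.d/docker-sock.conf << EOF
[Service]
ExecStartPost=/usr/bin/chmod 666 /var/run/docker.sock
EOF
# systemctl daemon-reload
# systemctl restart docker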

*******************
Driver build
*******************

# cd nova-docker
# pip install -r requirements.txt
# python setup.py install

********************************************
Switch nova-compute to DockerDriver
********************************************
vi /etc/nova/nova.conf
compute_driver=novadocker.virt.docker.DockerDriver

******************************************************************
Next on Controller/Network Node and each Compute Node
******************************************************************

mkdir /etc/nova/rootwrap.d
vi /etc/nova/rootwrap.d/docker.filters

[Filters]
# nova/virt/docker/driver.py: 'ln', '-sf', '/var/run/netns/.*'
ln: CommandFilter, /bin/ln, root

**********************************************************
Nova Compute Service restart on Compute Nodes
**********************************************************
# systemctl restart openstack-nova-compute

***********************************************
Glance API Service restart on Controller
**********************************************
vi /etc/glance/glance-api.conf

container_formats=ami,ari,aki,bare,ovf,ova,docker
# systemctl restart openstack-glance-api
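With docker added to container_formats, Docker images can then be pushed into Glance (and hence onto the Swift back end) following the usual nova-docker recipe, e.g. for one of the images used below (a sketch; depending on the glance client version the v1 image API may be required, OS_IMAGE_API_VERSION=1, and --is-public=True may be added):

# docker pull rastasheep/ubuntu-sshd
# . keystonerc_demo
# docker save rastasheep/ubuntu-sshd | glance image-create --container-format=docker --disk-format=raw --name rastasheep/ubuntu-sshd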

**************************************************
Network flow on the Compute node in a bit more detail
**************************************************

When a floating IP gets assigned to a VM, what actually happens is the following ( [1] ) :-

The same explanation may be found in [4], only not laid out in a step-by-step manner; in particular, it contains a detailed description of the reverse network flow and of the ARP proxy functionality.

1. The fip- namespace is created on the local compute node
(if it does not already exist)
2. A new port rfp- gets created on the qrouter- namespace
(if it does not already exist)
3. The rfp port on the qrouter namespace is assigned the associated floating IP address
4. The fpr port on the fip namespace gets created and linked via a point-to-point network to the rfp port of the qrouter namespace
5. The fip namespace gateway port fg- is assigned an additional address
from the public network range to set up the ARP proxy point
6. The fg- port is configured as a proxy ARP
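The namespaces and ports described above can be inspected directly on the Compute node once a floating IP is in use, for instance (the IDs in the namespace names are placeholders):

# ip netns list
# ip netns exec fip-<external-net-id> ip addr show | grep -E "fg-|fpr-"
# ip netns exec qrouter-<router-id> ip addr show | grep rfp-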

*********************
Flow itself  ( [1] ):
*********************

1. The VM initiating the transmission sends a packet via its default gateway,
and br-int forwards the traffic to the local DVR gateway port (qr-).
2. DVR routes the packet using its routing table to the rfp- port.
3. A NAT rule is applied to the packet, replacing the source IP of the VM
with the assigned floating IP; the packet is then sent through the rfp- port,
which connects to the fip namespace via the point-to-point network
169.254.31.28/31.
4. The packet is received on the fpr- port in the fip namespace
and then routed outside through the fg- port.
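The NAT rule and the point-to-point link from step 3 can be checked inside the qrouter namespace (the router ID is a placeholder, the floating IP is taken from the listings below; adjust to your environment):

# ip netns exec qrouter-<router-id> ip rule
# ip netns exec qrouter-<router-id> iptables -t nat -S | grep 192.169.142.151
# ip netns exec qrouter-<router-id> ip addr show | grep 169.254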


[root@ip-192-169-142-137 ~(keystone_demo)]# nova list

+--------------------------------------+----------------+--------+------------+-------------+-----------------------------------------+
| ID                                   | Name           | Status | Task State | Power State | Networks                                |
+--------------------------------------+----------------+--------+------------+-------------+-----------------------------------------+
| 957814c1-834e-47e5-9236-ef228455fe36 | UbuntuDevs01   | ACTIVE | -          | Running     | demo_network=50.0.0.12, 192.169.142.151 |
| 65dd55b9-23ea-4e5b-aeed-4db259436df2 | derbyGlassfish | ACTIVE | -          | Running     | demo_network=50.0.0.13, 192.169.142.153 |
| f9311d57-4352-48a6-a042-b36393e0af7a | fedora22docker | ACTIVE | -          | Running     | demo_network=50.0.0.14, 192.169.142.154 |
+--------------------------------------+----------------+--------+------------+-------------+-----------------------------------------+

[root@ip-192-169-142-137 ~(keystone_demo)]# docker ps

CONTAINER ID        IMAGE                      COMMAND                  CREATED             STATUS              PORTS               NAMES
336679f5bf7a        kumarpraveen/fedora-sshd   "/usr/bin/supervisord"   About an hour ago   Up About an hour                        nova-f9311d57-4352-48a6-a042-b36393e0af7a
8bb2ce01e671        derby/docker-glassfish41   "/sbin/my_init"          2 hours ago         Up 2 hours                              nova-65dd55b9-23ea-4e5b-aeed-4db259436df2
fe5eb55a4c9d        rastasheep/ubuntu-sshd     "/usr/sbin/sshd -D"      3 hours ago         Up 3 hours                              nova-957814c1-834e-47e5-9236-ef228455fe36

[root@ip-192-169-142-137 ~(keystone_demo)]# nova show f9311d57-4352-48a6-a042-b36393e0af7a | grep image
| image                                | kumarpraveen/fedora-sshd (93345f0b-fcbd-41e4-b335-a4ecb8b59e73) |
[root@ip-192-169-142-137 ~(keystone_demo)]# nova show 65dd55b9-23ea-4e5b-aeed-4db259436df2 | grep image
| image                                | derby/docker-glassfish41 (9f2cd9bc-7840-47c1-81e8-3bc0f76426ec) |
[root@ip-192-169-142-137 ~(keystone_demo)]# nova show 957814c1-834e-47e5-9236-ef228455fe36 | grep image
| image                                | rastasheep/ubuntu-sshd (29c057f1-3c7d-43e3-80e6-dc8fef1ea035) |

[root@ip-192-169-142-137 ~(keystone_demo)]# . keystonerc_glance
[root@ip-192-169-142-137 ~(keystone_glance)]# glance image-list

+--------------------------------------+--------------------------+
| ID                                   | Name                     |
+--------------------------------------+--------------------------+
| 27551b28-6df7-4b0e-a0c8-322b416092c1 | cirros                   |
| 9f2cd9bc-7840-47c1-81e8-3bc0f76426ec | derby/docker-glassfish41 |
| 93345f0b-fcbd-41e4-b335-a4ecb8b59e73 | kumarpraveen/fedora-sshd |
| 29c057f1-3c7d-43e3-80e6-dc8fef1ea035 | rastasheep/ubuntu-sshd   |
+--------------------------------------+--------------------------+

[root@ip-192-169-142-137 ~(keystone_glance)]# swift list glance

29c057f1-3c7d-43e3-80e6-dc8fef1ea035
29c057f1-3c7d-43e3-80e6-dc8fef1ea035-00001
29c057f1-3c7d-43e3-80e6-dc8fef1ea035-00002

93345f0b-fcbd-41e4-b335-a4ecb8b59e73
93345f0b-fcbd-41e4-b335-a4ecb8b59e73-00001
93345f0b-fcbd-41e4-b335-a4ecb8b59e73-00002
93345f0b-fcbd-41e4-b335-a4ecb8b59e73-00003
93345f0b-fcbd-41e4-b335-a4ecb8b59e73-00004
93345f0b-fcbd-41e4-b335-a4ecb8b59e73-00005

9f2cd9bc-7840-47c1-81e8-3bc0f76426ec
9f2cd9bc-7840-47c1-81e8-3bc0f76426ec-00001
9f2cd9bc-7840-47c1-81e8-3bc0f76426ec-00002
9f2cd9bc-7840-47c1-81e8-3bc0f76426ec-00003
9f2cd9bc-7840-47c1-81e8-3bc0f76426ec-00004
9f2cd9bc-7840-47c1-81e8-3bc0f76426ec-00005
9f2cd9bc-7840-47c1-81e8-3bc0f76426ec-00006
