Neutron workflow for the Docker hypervisor running on a DVR cluster (RDO Mitaka), in an appropriate amount of detail && HA support for the Glance storage used to load nova-docker instances

April 6, 2016

Why does DVR come into play?

This refreshes in memory a similar problem with the Nova-Docker driver (Kilo), with which I had the same kind of problems (VXLAN connection Controller <==> Compute) on F22 (OVS 2.4.0), while the same driver worked fine on CentOS 7.1 (OVS 2.3.1). My guess is that the Nova-Docker driver has a problem with OVS 2.4.0, no matter which of the stable/kilo, stable/liberty, or stable/mitaka branches is checked out for the driver build.

Note that the issue is specific to the ML2&OVS&VXLAN setup; an RDO Mitaka ML2&OVS&VLAN deployment works with Nova-Docker (stable/mitaka) with no problems.

I have not run ovs-ofctl dump-flows on the br-tun bridges and so on, because even having proved the malfunction I could not file it to BZ. The Nova-Docker driver is not packaged for RDO, so it is upstream stuff, and upstream won't consider an issue that involves building the driver from source on RDO Mitaka (RC1).

Thus, as a quick and efficient workaround, I suggest a DVR deployment. It causes south-north traffic to be forwarded straight from the host running the Docker hypervisor to the Internet and vice versa, thanks to the basic "fg" functionality (the outgoing interface of the fip namespace, residing on the Compute node whose L3 agent runs in "dvr" agent_mode).

**************************
Procedure in details
**************************

First install the repositories for RDO Mitaka (the most recent build that passed CI):-
# yum -y install yum-plugin-priorities
# cd /etc/yum.repos.d
# curl -O https://trunk.rdoproject.org/centos7-mitaka/delorean-deps.repo
# curl -O https://trunk.rdoproject.org/centos7-mitaka/current-passed-ci/delorean.repo
# yum -y install openstack-packstack (Controller only)

Now proceed as follows :-

1. Here is the answer file to deploy the pre-DVR cluster
2. See the pre-deployment actions to be undertaken on the Controller/Storage node

Before the DVR setup, make Swift the Glance back end ( Swift is configured in the answer file as follows )

CONFIG_SWIFT_STORAGES=/dev/vdb1,/dev/vdc1,/dev/vdd1
CONFIG_SWIFT_STORAGE_ZONES=3
CONFIG_SWIFT_STORAGE_REPLICAS=3
CONFIG_SWIFT_STORAGE_FSTYPE=xfs
CONFIG_SWIFT_HASH=a55607bff10c4210
CONFIG_SWIFT_STORAGE_SIZE=10G
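With three devices, three zones, and three replicas, every object is written to each zone, so the usable capacity is the raw capacity divided by the replica count. A quick sanity check (a sketch; the numbers come from the answer-file settings above):

```python
# Swift capacity sketch: 3 devices of 10G each, replica count 3.
# Each object lands once per device, so unique data is raw / replicas.
devices, size_gb, replicas = 3, 10, 3
raw_gb = devices * size_gb        # 30G raw across vdb1/vdc1/vdd1
usable_gb = raw_gb // replicas    # ~10G of unique image data
print(raw_gb, usable_gb)
```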

Upon setup completion, on the storage node :-

[root@ip-192-169-142-127 ~]# df -h
Filesystem               Size  Used Avail Use% Mounted on
/dev/mapper/centos-root   45G  5.3G   40G  12% /
devtmpfs                 2.8G     0  2.8G   0% /dev
tmpfs                    2.8G  204K  2.8G   1% /dev/shm
tmpfs                    2.8G   25M  2.8G   1% /run
tmpfs                    2.8G     0  2.8G   0% /sys/fs/cgroup
/dev/vdc1                 10G  2.5G  7.5G  25% /srv/node/vdc1
/dev/vdb1                 10G  2.5G  7.5G  25% /srv/node/vdb1
/dev/vdd1                 10G  2.5G  7.5G  25% /srv/node/vdd1

/dev/vda1                497M  211M  286M  43% /boot
tmpfs                    567M  4.0K  567M   1% /run/user/42
tmpfs                    567M  8.0K  567M   1% /run/user/1000

****************************
Update  glance-api.conf
****************************

[glance_store]
stores = swift
default_store = swift
swift_store_auth_address = http://192.169.142.127:5000/v2.0/
swift_store_user = services:glance
swift_store_key = f6a9398960534797 

swift_store_create_container_on_put = True
os_region_name=RegionOne

# openstack-service restart glance

# keystone user-role-add --tenant_id=$UUID_SERVICES_TENANT \
--user=$UUID_GLANCE_USER --role=$UUID_ResellerAdmin_ROLE

The value f6a9398960534797 corresponds to CONFIG_GLANCE_KS_PW in the answer file, i.e. the Keystone glance password used for authentication.

2. Convert the cluster to DVR as advised in "RDO Liberty DVR Neutron workflow on CentOS 7.2":
http://dbaxps.blogspot.com/2015/10/rdo-liberty-rc-dvr-deployment.html
Just one note: on RDO Mitaka, run on each compute node

# ovs-vsctl add-br br-ex
# ovs-vsctl add-port br-ex eth0

Then configure

***********************************************************
On the Controller (X=2) and the Computes (X=3,4) update :-
***********************************************************

# cat ifcfg-br-ex
DEVICE="br-ex"
BOOTPROTO="static"
IPADDR="192.169.142.1(X)7"
NETMASK="255.255.255.0"
DNS1="83.221.202.254"
BROADCAST="192.169.142.255"
GATEWAY="192.169.142.1"
NM_CONTROLLED="no"
DEFROUTE="yes"
IPV4_FAILURE_FATAL="yes"
IPV6INIT=no
ONBOOT="yes"
TYPE="OVSIntPort"
OVS_BRIDGE=br-ex

DEVICETYPE="ovs"

# cat ifcfg-eth0
DEVICE="eth0"
ONBOOT="yes"
TYPE="OVSPort"
DEVICETYPE="ovs"
OVS_BRIDGE=br-ex
NM_CONTROLLED=no
IPV6INIT=no
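The (X) placeholder in IPADDR above follows a simple convention. A tiny hypothetical helper mapping the node index to its br-ex address (the X values are the ones this cluster uses):

```python
# Node index X (2 = Controller; 3, 4 = Computes) maps to 192.169.142.1(X)7,
# matching the ifcfg-br-ex files above.
def br_ex_ip(x: int) -> str:
    return "192.169.142.1{}7".format(x)

print(br_ex_ip(2))  # Controller -> 192.169.142.127
print(br_ex_ip(3))  # first Compute -> 192.169.142.137
```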

***************************
Then run script
***************************

#!/bin/bash -x
chkconfig network on
systemctl stop NetworkManager
systemctl disable NetworkManager
service network restart

Reboot node.

**********************************************
Nova-Docker Setup on each Compute
**********************************************

# curl -sSL https://get.docker.com/ | sh
# usermod -aG docker nova      ( this alone seems not to be enough to get 660 working on docker.sock )
# systemctl start docker
# systemctl enable docker
# chmod 666  /var/run/docker.sock (add to /etc/rc.d/rc.local)
# easy_install pip
# git clone -b stable/mitaka   https://github.com/openstack/nova-docker
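The chmod 666 above makes the Docker control socket world-readable and world-writable, which is why it is only a workaround (anyone on the host can then drive the Docker daemon). How that mode renders for a socket:

```python
import stat

# 0o666 on a unix socket shows up as srw-rw-rw- in ls -l output:
# owner, group, and others all get read/write on /var/run/docker.sock.
mode = stat.S_IFSOCK | 0o666
print(stat.filemode(mode))  # srw-rw-rw-
```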

*******************
Driver build
*******************

# cd nova-docker
# pip install -r requirements.txt
# python setup.py install

********************************************
Switch nova-compute to DockerDriver
********************************************
vi /etc/nova/nova.conf
compute_driver=novadocker.virt.docker.DockerDriver

******************************************************************
Next on Controller/Network Node and each Compute Node
******************************************************************

mkdir /etc/nova/rootwrap.d
vi /etc/nova/rootwrap.d/docker.filters

[Filters]
# nova/virt/docker/driver.py: 'ln', '-sf', '/var/run/netns/.*'
ln: CommandFilter, /bin/ln, root
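A CommandFilter entry matches only on the executable path and the run-as user; the arguments (here, whatever `ln -sf /var/run/netns/...` invocation the driver issues) are not inspected. A simplified model of that matching logic (a sketch, not the actual oslo.rootwrap code):

```python
# Simplified model of the rootwrap entry:
#   ln: CommandFilter, /bin/ln, root
# It permits any command whose executable is /bin/ln (run as root);
# the argument list itself is unrestricted.
def command_filter_matches(filter_exec, cmd):
    return bool(cmd) and cmd[0] == filter_exec

print(command_filter_matches("/bin/ln", ["/bin/ln", "-sf", "/var/run/netns/x", "y"]))  # True
print(command_filter_matches("/bin/ln", ["/bin/rm", "-rf", "/tmp/x"]))                 # False
```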

**********************************************************
Nova Compute Service restart on Compute Nodes
**********************************************************
# systemctl restart openstack-nova-compute

***********************************************
Glance API Service restart on Controller
**********************************************
vi /etc/glance/glance-api.conf

container_formats=ami,ari,aki,bare,ovf,ova,docker
# systemctl restart openstack-glance-api

**************************************************
Network flow on Compute in a bit more details
**************************************************

When a floating IP gets assigned to a VM, here is what actually happens ([1]) :-

The same explanation may be found in [4], though not in a step-by-step manner; in particular, it contains a detailed description of the reverse network flow and of the proxy-ARP functionality.

1. The fip- namespace is created on the local compute node
(if it does not already exist)
2. A new port rfp- gets created on the qrouter- namespace
(if it does not already exist)
3. The rfp- port on the qrouter namespace is assigned the associated floating IP address
4. The fpr- port on the fip namespace gets created and linked via a point-to-point network to the rfp- port of the qrouter namespace
5. The fip namespace gateway port fg- is assigned an additional address
from the public network range to set up the ARP proxy point
6. The fg- port is configured as an ARP proxy

*********************
Flow itself  ( [1] ):
*********************

1. The VM, initiating transmission, sends a packet via the default gateway,
and br-int forwards the traffic to the local DVR gateway port (qr-).
2. DVR routes the packet using its routing table to the rfp- port.
3. A NAT rule is applied to the packet, replacing the VM's source IP with
the assigned floating IP, and it is then sent through the rfp- port,
which connects to the fip namespace via the point-to-point network
169.254.31.28/31.
4. The packet is received on the fpr- port in the fip namespace
and then routed outside through the fg- port.
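The rfp-/fpr- pair is wired as a /31, so that subnet holds exactly two addresses, one per namespace end. Illustrated with Python's ipaddress module, using the 169.254.31.28/31 link from step 3:

```python
import ipaddress

# A /31 point-to-point link has exactly two addresses: one for the
# qrouter-side rfp- port and one for the fip-side fpr- port.
link = ipaddress.ip_network("169.254.31.28/31")
ends = [str(a) for a in link]
print(ends)  # ['169.254.31.28', '169.254.31.29']
```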


[root@ip-192-169-142-137 ~(keystone_demo)]# nova list

+--------------------------------------+----------------+--------+------------+-------------+-----------------------------------------+
| ID                                   | Name           | Status | Task State | Power State | Networks                                |
+--------------------------------------+----------------+--------+------------+-------------+-----------------------------------------+
| 957814c1-834e-47e5-9236-ef228455fe36 | UbuntuDevs01   | ACTIVE | -          | Running     | demo_network=50.0.0.12, 192.169.142.151 |
| 65dd55b9-23ea-4e5b-aeed-4db259436df2 | derbyGlassfish | ACTIVE | -          | Running     | demo_network=50.0.0.13, 192.169.142.153 |
| f9311d57-4352-48a6-a042-b36393e0af7a | fedora22docker | ACTIVE | -          | Running     | demo_network=50.0.0.14, 192.169.142.154 |
+--------------------------------------+----------------+--------+------------+-------------+-----------------------------------------+

[root@ip-192-169-142-137 ~(keystone_demo)]# docker ps

CONTAINER ID        IMAGE                      COMMAND                  CREATED             STATUS              PORTS               NAMES

336679f5bf7a        kumarpraveen/fedora-sshd   “/usr/bin/supervisord”   About an hour ago   Up About an hour                        nova-f9311d57-4352-48a6-a042-b36393e0af7a
8bb2ce01e671        derby/docker-glassfish41   “/sbin/my_init”          2 hours ago         Up 2 hours                              nova-65dd55b9-23ea-4e5b-aeed-4db259436df2
fe5eb55a4c9d        rastasheep/ubuntu-sshd     “/usr/sbin/sshd -D”      3 hours ago         Up 3 hours                              nova-957814c1-834e-47e5-9236-ef228455fe36

[root@ip-192-169-142-137 ~(keystone_demo)]# nova show f9311d57-4352-48a6-a042-b36393e0af7a | grep image
| image                                | kumarpraveen/fedora-sshd (93345f0b-fcbd-41e4-b335-a4ecb8b59e73) |
[root@ip-192-169-142-137 ~(keystone_demo)]# nova show 65dd55b9-23ea-4e5b-aeed-4db259436df2 | grep image
| image                                | derby/docker-glassfish41 (9f2cd9bc-7840-47c1-81e8-3bc0f76426ec) |
[root@ip-192-169-142-137 ~(keystone_demo)]# nova show 957814c1-834e-47e5-9236-ef228455fe36 | grep image
| image                                | rastasheep/ubuntu-sshd (29c057f1-3c7d-43e3-80e6-dc8fef1ea035) |

[root@ip-192-169-142-137 ~(keystone_demo)]# . keystonerc_glance
[root@ip-192-169-142-137 ~(keystone_glance)]# glance image-list

+--------------------------------------+--------------------------+
| ID                                   | Name                     |
+--------------------------------------+--------------------------+
| 27551b28-6df7-4b0e-a0c8-322b416092c1 | cirros                   |
| 9f2cd9bc-7840-47c1-81e8-3bc0f76426ec | derby/docker-glassfish41 |
| 93345f0b-fcbd-41e4-b335-a4ecb8b59e73 | kumarpraveen/fedora-sshd |
| 29c057f1-3c7d-43e3-80e6-dc8fef1ea035 | rastasheep/ubuntu-sshd   |
+--------------------------------------+--------------------------+

[root@ip-192-169-142-137 ~(keystone_glance)]# swift list glance

29c057f1-3c7d-43e3-80e6-dc8fef1ea035
29c057f1-3c7d-43e3-80e6-dc8fef1ea035-00001
29c057f1-3c7d-43e3-80e6-dc8fef1ea035-00002

93345f0b-fcbd-41e4-b335-a4ecb8b59e73
93345f0b-fcbd-41e4-b335-a4ecb8b59e73-00001
93345f0b-fcbd-41e4-b335-a4ecb8b59e73-00002
93345f0b-fcbd-41e4-b335-a4ecb8b59e73-00003
93345f0b-fcbd-41e4-b335-a4ecb8b59e73-00004
93345f0b-fcbd-41e4-b335-a4ecb8b59e73-00005

9f2cd9bc-7840-47c1-81e8-3bc0f76426ec
9f2cd9bc-7840-47c1-81e8-3bc0f76426ec-00001
9f2cd9bc-7840-47c1-81e8-3bc0f76426ec-00002
9f2cd9bc-7840-47c1-81e8-3bc0f76426ec-00003
9f2cd9bc-7840-47c1-81e8-3bc0f76426ec-00004
9f2cd9bc-7840-47c1-81e8-3bc0f76426ec-00005
9f2cd9bc-7840-47c1-81e8-3bc0f76426ec-00006
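Each image is stored in Swift as numbered segment objects plus a manifest object. The six segments of the GlassFish image are consistent with Glance's chunked upload, assuming the common 200 MB chunk size (swift_store_large_object_chunk_size) and the 1175020032-byte size Glance reports for this image:

```python
import math

chunk_bytes = 200 * 1024 * 1024  # assumed swift_store_large_object_chunk_size
image_bytes = 1175020032         # size glance reports for derby/docker-glassfish41
segments = math.ceil(image_bytes / chunk_bytes)
print(segments)  # 6 -> matches the -00001 .. -00006 objects listed above
```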


Setting up Nova-Docker on Multi Node DVR Cluster RDO Mitaka

April 1, 2016

UPDATE 04/03/2016
   In the meantime, better to use the repositories for RC1
   rather than the Delorean trunks
END UPDATE

DVR && Nova-Docker Driver (stable/mitaka) tested fine on RDO Mitaka (build 20160329), with none of the issues described in the previous post for RDO Liberty.
So, create a DVR deployment with Controller/Network + N Compute nodes. Switch to the Docker hypervisor on each Compute node and make the required updates to glance and to the filters file on the Controller. You are all set. The Nova-Docker instances' FIPs are available from outside via the Neutron distributed router (DNAT), using the "fg" interface (fip namespace) residing on the same host as the Docker hypervisor. South-north traffic does not involve VXLAN tunneling on DVR systems.

Why does DVR come into play?

This refreshes in memory a similar problem with the Nova-Docker driver (Kilo),
with which I had the same kind of problems (VXLAN connection Controller <==> Compute)
on F22 (OVS 2.4.0), while the same driver worked fine on CentOS 7.1 (OVS 2.3.1).
My guess is that the Nova-Docker driver has a problem with OVS 2.4.0,
no matter which of the stable/kilo, stable/liberty, or stable/mitaka branches
is checked out for the driver build.

I have not run ovs-ofctl dump-flows on the br-tun bridges and so on,
because even having proved the malfunction I could not file it to BZ.
The Nova-Docker driver is not packaged for RDO, so it is upstream stuff,
and upstream won't consider an issue that involves building the driver
from source on RDO Mitaka (RC1).

Thus, as a quick and efficient workaround, I suggest a DVR deployment,
killing two birds with one stone. It causes south-north traffic
to be forwarded straight from the host running the Docker hypervisor to the Internet
and vice versa, thanks to the basic "fg" functionality (the outgoing interface of
the fip namespace, residing on the Compute node whose L3 agent runs in "dvr"
agent_mode).


**************************
Procedure in details
**************************

First install the repositories for RDO Mitaka (the most recent build that passed CI):-
# yum -y install yum-plugin-priorities
# cd /etc/yum.repos.d
# curl -O https://trunk.rdoproject.org/centos7-mitaka/delorean-deps.repo
# curl -O https://trunk.rdoproject.org/centos7-mitaka/current-passed-ci/delorean.repo
# yum -y install openstack-packstack (Controller only)

Now proceed as follows :-

1. Here is the answer file to deploy the pre-DVR cluster
2. Convert the cluster to DVR as advised in "RDO Liberty DVR Neutron workflow on CentOS 7.2" :-

http://dbaxps.blogspot.com/2015/10/rdo-liberty-rc-dvr-deployment.html

Just one note: on RDO Mitaka, on each compute node, first create br-ex and add port eth0

# ovs-vsctl add-br br-ex
# ovs-vsctl add-port br-ex eth0

Then configure

*********************************
Compute nodes X=(3,4)
*********************************

# cat ifcfg-br-ex
DEVICE="br-ex"
BOOTPROTO="static"
IPADDR="192.169.142.1(X)7"
NETMASK="255.255.255.0"
DNS1="83.221.202.254"
BROADCAST="192.169.142.255"
GATEWAY="192.169.142.1"
NM_CONTROLLED="no"
DEFROUTE="yes"
IPV4_FAILURE_FATAL="yes"
IPV6INIT=no
ONBOOT="yes"
TYPE="OVSIntPort"
OVS_BRIDGE=br-ex

DEVICETYPE="ovs"

# cat ifcfg-eth0

DEVICE="eth0"
ONBOOT="yes"
TYPE="OVSPort"
DEVICETYPE="ovs"
OVS_BRIDGE=br-ex
NM_CONTROLLED=no
IPV6INIT=no

***************************
Then run script
***************************

#!/bin/bash -x
chkconfig network on
systemctl stop NetworkManager
systemctl disable NetworkManager
service network restart

Reboot node.

**********************************************
Nova-Docker Setup on each Compute
**********************************************

# curl -sSL https://get.docker.com/ | sh
# usermod -aG docker nova      ( this alone seems not to be enough to get 660 working on docker.sock )
# systemctl start docker
# systemctl enable docker
# chmod 666  /var/run/docker.sock (add to /etc/rc.d/rc.local)
# easy_install pip
# git clone -b stable/mitaka   https://github.com/openstack/nova-docker

*******************
Driver build
*******************

# cd nova-docker
# pip install -r requirements.txt
# python setup.py install

********************************************
Switch nova-compute to DockerDriver
********************************************

vi /etc/nova/nova.conf
compute_driver=novadocker.virt.docker.DockerDriver

******************************************************************
Next on Controller/Network Node and each Compute Node
******************************************************************

mkdir /etc/nova/rootwrap.d
vi /etc/nova/rootwrap.d/docker.filters

[Filters]
# nova/virt/docker/driver.py: 'ln', '-sf', '/var/run/netns/.*'
ln: CommandFilter, /bin/ln, root

**********************************************************
Nova Compute Service restart on Compute Nodes
**********************************************************
# systemctl restart openstack-nova-compute
***********************************************
Glance API Service restart on Controller
**********************************************
vi /etc/glance/glance-api.conf
container_formats=ami,ari,aki,bare,ovf,ova,docker

# systemctl restart openstack-glance-api


**************************************************************************************
Build the GlassFish 4.1 docker image on the Compute node per
http://bderzhavets.blogspot.com/2015/01/hacking-dockers-phusionbaseimage-to.html  and upload it to glance :-
**************************************************************************************

[root@ip-192-169-142-137 ~(keystone_admin)]# docker images

REPOSITORY                 TAG                 IMAGE ID            CREATED              SIZE
derby/docker-glassfish41   latest              3a6b84ec9206        About a minute ago   1.155 GB
rastasheep/ubuntu-sshd     latest              70e0ac74c691        2 days ago           251.6 MB
phusion/baseimage          latest              772dd063a060        3 months ago         305.1 MB
tutum/tomcat               latest              2edd730bbedd        7 months ago         539.9 MB
larsks/thttpd              latest              a31ab5050b67        15 months ago        1.058 MB

[root@ip-192-169-142-137 ~(keystone_admin)]# docker save derby/docker-glassfish41 | openstack image create derby/docker-glassfish41 --public --container-format docker --disk-format raw

+------------------+------------------------------------------------------+
| Field            | Value                                                |
+------------------+------------------------------------------------------+
| checksum         | 9bea6dd0bcd8d0d7da2d82579c0e658a                     |
| container_format | docker                                               |
| created_at       | 2016-04-01T14:29:20Z                                 |
| disk_format      | raw                                                  |
| file             | /v2/images/acf03d15-b7c5-4364-b00f-603b6a5d9af2/file |
| id               | acf03d15-b7c5-4364-b00f-603b6a5d9af2                 |
| min_disk         | 0                                                    |
| min_ram          | 0                                                    |
| name             | derby/docker-glassfish41                             |
| owner            | 31b24d4b1574424abe53b9a5affc70c8                     |
| protected        | False                                                |
| schema           | /v2/schemas/image                                    |
| size             | 1175020032                                           |
| status           | active                                               |
| tags             |                                                      |
| updated_at       | 2016-04-01T14:30:13Z                                 |
| virtual_size     | None                                                 |
| visibility       | public                                               |
+------------------+------------------------------------------------------+

[root@ip-192-169-142-137 ~(keystone_admin)]# docker ps

CONTAINER ID        IMAGE                      COMMAND               CREATED             STATUS              PORTS               NAMES

8f551d35f2d7        derby/docker-glassfish41   “/sbin/my_init”       39 seconds ago      Up 31 seconds                           nova-faba725e-e031-4edb-bf2c-41c6dfc188c1
dee4425261e8        tutum/tomcat               “/run.sh”             About an hour ago   Up About an hour                        nova-13450558-12d7-414c-bcd2-d746495d7a57
41d2ebc54d75        rastasheep/ubuntu-sshd     “/usr/sbin/sshd -D”   2 hours ago         Up About an hour                        nova-04ddea42-10a3-4a08-9f00-df60b5890ee9

[root@ip-192-169-142-137 ~(keystone_admin)]# docker logs 8f551d35f2d7

*** Running /etc/my_init.d/00_regen_ssh_host_keys.sh…
No SSH host key available. Generating one…
*** Running /etc/my_init.d/01_sshd_start.sh…
Creating SSH2 RSA key; this may take some time …
Creating SSH2 DSA key; this may take some time …
Creating SSH2 ECDSA key; this may take some time …
Creating SSH2 ED25519 key; this may take some time …
invoke-rc.d: policy-rc.d denied execution of restart.
SSH KEYS regenerated by Boris just in case !
SSHD started !

*** Running /etc/my_init.d/database.sh…
Derby database started !
*** Running /etc/my_init.d/run.sh…

Bad Network Configuration.  DNS can not resolve the hostname:
java.net.UnknownHostException: instance-00000006: instance-00000006: unknown error

Waiting for domain1 to start ……
Successfully started the domain : domain1
domain  Location: /opt/glassfish4/glassfish/domains/domain1
Log File: /opt/glassfish4/glassfish/domains/domain1/logs/server.log
Admin Port: 4848
Command start-domain executed successfully.
=> Modifying password of admin to random in Glassfish
spawn asadmin --user admin change-admin-password
Enter the admin password>
Enter the new admin password>
Enter the new admin password again>
Command change-admin-password executed successfully.

A fairly hard docker image, built by a "docker expert" such as myself 😉,
gets launched, and the nova-docker instance seems to properly run
several daemons at a time ( sshd enabled )
[boris@fedora23wks Downloads]$ ssh root@192.169.142.156

root@192.169.142.156’s password:
Last login: Fri Apr  1 15:33:06 2016 from 192.169.142.1
root@instance-00000006:~# ps -ef

UID        PID  PPID  C STIME TTY          TIME CMD
root         1     0  0 14:32 ?        00:00:00 /usr/bin/python3 -u /sbin/my_init
root       100     1  0 14:33 ?        00:00:00 /bin/bash /etc/my_init.d/run.sh
root       103     1  0 14:33 ?        00:00:00 /usr/sbin/sshd
root       170     1  0 14:33 ?        00:00:03 /opt/jdk1.8.0_25/bin/java -Djava.library.path=/op
root       427   100  0 14:33 ?        00:00:02 java -jar /opt/glassfish4/bin/../glassfish/lib/cl
root       444   427  2 14:33 ?        00:01:23 /opt/jdk1.8.0_25/bin/java -cp /opt/glassfish4/gla

root      1078     0  0 15:32 ?        00:00:00 bash
root      1110   103  0 15:33 ?        00:00:00 sshd: root@pts/0
root      1112  1110  0 15:33 pts/0    00:00:00 -bash
root      1123  1112  0 15:33 pts/0    00:00:00 ps -ef

GlassFish is indeed running.


Setup the most recent Nova Docker Driver via Devstack on Fedora 21

March 23, 2015

*********************************************************************************
UPDATE as 03/26/2015
To make the devstack configuration persistent between reboots on Fedora 21, i.e. restartable via ./rejoin-stack.sh, the following services must be enabled :-
*********************************************************************************
systemctl enable rabbitmq-server
systemctl enable openvswitch
systemctl enable httpd
systemctl enable mariadb
systemctl enable mysqld

File /etc/rc.d/rc.local should contain ( in my case ) :-

ip addr flush dev br-ex ;
ip addr add 192.168.10.15/24 dev br-ex ;
ip link set br-ex up ;
route add -net 10.254.1.0/24 gw 192.168.10.15 ;

System is supposed to be shutdown via :-
$sudo ./unstack.sh
********************************************************************************

This post follows up http://blog.oddbit.com/2015/02/06/installing-nova-docker-on-fedora-21/; however, RDO Juno is not pre-installed, and the Nova-Docker driver is built first from the top commit of https://git.openstack.org/cgit/stackforge/nova-docker/. The next step is :-

$ git clone https://git.openstack.org/openstack-dev/devstack
$ cd devstack

Create local.conf under devstack following either of the two links provided,
and run ./stack.sh to perform an AIO OpenStack installation, as it does
on Ubuntu 14.04. All steps preventing stack.sh from crashing on F21 are described right below.

# yum -y install git docker-io fedora-repos-rawhide
# yum --enablerepo=rawhide install python-six  python-pip python-pbr systemd
# reboot
# yum -y install gcc python-devel ( required for driver build )

$ git clone http://github.com/stackforge/nova-docker.git
$ cd nova-docker
$ sudo pip install .

To raise python-six back to version 1.9 (it was dropped to 1.2 during the driver's build):

yum --enablerepo=rawhide reinstall python-six

Run devstack with Lars’s local.conf
per http://blog.oddbit.com/2015/02/11/installing-novadocker-with-devstack/
or view  http://bderzhavets.blogspot.com/2015/02/set-up-nova-docker-driver-on-ubuntu.html   for another version of local.conf
*****************************************************************************
My version of local.conf, which allows defining the floating pool as you need, is a bit more flexible than the original
*****************************************************************************
[[local|localrc]]
HOST_IP=192.168.1.57
ADMIN_PASSWORD=secret
MYSQL_PASSWORD=secret
RABBIT_PASSWORD=secret
SERVICE_PASSWORD=secret
FLOATING_RANGE=192.168.10.0/24
FLAT_INTERFACE=eth0
Q_FLOATING_ALLOCATION_POOL=start=192.168.10.150,end=192.168.10.254
PUBLIC_NETWORK_GATEWAY=192.168.10.15

SERVICE_TOKEN=super-secret-admin-token
VIRT_DRIVER=novadocker.virt.docker.DockerDriver

DEST=$HOME/stack
SERVICE_DIR=$DEST/status
DATA_DIR=$DEST/data
LOGFILE=$DEST/logs/stack.sh.log
LOGDIR=$DEST/logs

# The default fixed range (10.0.0.0/24) conflicted with an address
# range I was using locally.
FIXED_RANGE=10.254.1.0/24
NETWORK_GATEWAY=10.254.1.1

# Services

disable_service n-net
enable_service q-svc
enable_service q-agt
enable_service q-dhcp
enable_service q-l3
enable_service q-meta
enable_service horizon
disable_service tempest
# Introduce glance to docker images

[[post-config|$GLANCE_API_CONF]]
[DEFAULT]
container_formats=ami,ari,aki,bare,ovf,ova,docker

# Configure nova to use the nova-docker driver
[[post-config|$NOVA_CONF]]
[DEFAULT]
compute_driver=novadocker.virt.docker.DockerDriver
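A quick sanity check of the addressing in this local.conf (a sketch; values copied from the config above): the floating allocation pool must lie inside FLOATING_RANGE, and FIXED_RANGE must not overlap it, which is exactly why the default 10.0.0.0/24 fixed range was changed.

```python
import ipaddress

floating = ipaddress.ip_network("192.168.10.0/24")    # FLOATING_RANGE
fixed = ipaddress.ip_network("10.254.1.0/24")         # FIXED_RANGE
pool_start = ipaddress.ip_address("192.168.10.150")   # Q_FLOATING_ALLOCATION_POOL
pool_end = ipaddress.ip_address("192.168.10.254")

print(pool_start in floating and pool_end in floating)  # True
print(floating.overlaps(fixed))                         # False
```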

**************************************************************************************
After stack.sh completes, disable firewalld, because devstack does not interact with Fedora's firewalld when bringing up OpenStack daemons that require the corresponding ports to be opened.
***************************************************************************************

#  systemctl stop firewalld
#  systemctl disable firewalld

$ cd dev*
$ . openrc demo
$ neutron security-group-rule-create --protocol icmp \
--direction ingress --remote-ip-prefix 0.0.0.0/0 default
$ neutron security-group-rule-create --protocol tcp \
--port-range-min 22 --port-range-max 22 \
--direction ingress --remote-ip-prefix 0.0.0.0/0 default
$ neutron security-group-rule-create --protocol tcp \
--port-range-min 80 --port-range-max 80 \
--direction ingress --remote-ip-prefix 0.0.0.0/0 default

Uploading docker image to glance

$ . openrc admin
$  docker pull rastasheep/ubuntu-sshd:14.04
$  docker save rastasheep/ubuntu-sshd:14.04 | glance image-create --is-public=True   --container-format=docker --disk-format=raw --name rastasheep/ubuntu-sshd:14.04

Launch new instance via uploaded image :-

$ . openrc demo
$  nova boot --image "rastasheep/ubuntu-sshd:14.04" --flavor m1.tiny \
--nic net-id=private-net-id UbuntuDocker

To provide internet access for launched nova-docker instance run :-
# iptables -t nat -A POSTROUTING -o eth0 -j MASQUERADE
Horizon is unavailable, regardless of being installed.


Set up GlassFish 4.1 Nova-Docker Container via docker’s phusion/baseimage on RDO Juno

January 9, 2015

The problem here is that phusion/baseimage, per https://github.com/phusion/baseimage-docker, should provide ssh access to the container; however, it doesn't. When working with a plain docker container, there is an easy workaround suggested by Mykola Gurov in http://stackoverflow.com/questions/27816298/cannot-get-ssh-access-to-glassfish-4-1-docker-container
# docker exec <container-id> /usr/sbin/sshd -D
*******************************************************************************
To bring sshd back to life, create the script 01_sshd_start.sh in the build folder
*******************************************************************************
#!/bin/bash

if [[ ! -e /etc/ssh/ssh_host_rsa_key ]]; then
echo "No SSH host key available. Generating one..."
export LC_ALL=C
export DEBIAN_FRONTEND=noninteractive
dpkg-reconfigure openssh-server
echo "SSH KEYS regenerated by Boris just in case !"
fi

/usr/sbin/sshd > log &
echo "SSHD started !"

and insert in Dockerfile:-

ADD 01_sshd_start.sh /etc/my_init.d/ 

Following below is the Dockerfile used to build the image for the GlassFish 4.1 nova-docker container. It extends phusion/baseimage and starts three daemons at a time when the nova-docker instance is launched from the image prepared for the Nova-Docker driver on Juno.

FROM phusion/baseimage
MAINTAINER Boris Derzhavets

RUN apt-get update
RUN echo 'root:root' |chpasswd
RUN sed -ri 's/^PermitRootLogin\s+.*/PermitRootLogin yes/' /etc/ssh/sshd_config
RUN sed -ri 's/UsePAM yes/#UsePAM yes/g' /etc/ssh/sshd_config

RUN apt-get update && apt-get install -y wget
RUN wget --no-check-certificate --no-cookies --header "Cookie: oraclelicense=accept-securebackup-cookie" http://download.oracle.com/otn-pub/java/jdk/8u25-b17/jdk-8u25-linux-x64.tar.gz
RUN cp jdk-8u25-linux-x64.tar.gz /opt
RUN cd /opt; tar -zxvf jdk-8u25-linux-x64.tar.gz
ENV PATH /opt/jdk1.8.0_25/bin:$PATH
RUN apt-get update && \

apt-get install -y wget unzip pwgen expect net-tools vim && \
wget http://download.java.net/glassfish/4.1/release/glassfish-4.1.zip && \
unzip glassfish-4.1.zip -d /opt && \
rm glassfish-4.1.zip && \
apt-get clean && \
rm -rf /var/lib/apt/lists/*

ENV PATH /opt/glassfish4/bin:$PATH

ADD 01_sshd_start.sh /etc/my_init.d/
ADD run.sh /etc/my_init.d/
ADD database.sh /etc/my_init.d/
ADD change_admin_password.sh /change_admin_password.sh
ADD change_admin_password_func.sh /change_admin_password_func.sh
ADD enable_secure_admin.sh /enable_secure_admin.sh
RUN chmod +x /*.sh /etc/my_init.d/*.sh

# 4848 (administration), 8080 (HTTP listener), 8181 (HTTPS listener), 9009 (JPDA debug port)

EXPOSE 22 4848 8080 8181 9009

CMD ["/sbin/my_init"]

********************************************************************************
I had to update the database.sh script as follows to make the nova-docker container
start on RDO Juno
********************************************************************************
# cat database.sh

#!/bin/bash
set -e
asadmin start-database --dbhost 127.0.0.1 --terse=true > log &

The important change is binding dbhost to 127.0.0.1, which is not required when running a plain docker container. The Nova-Docker driver ( http://www.linux.com/community/blogs/133-general-linux/799569-running-nova-docker-on-openstack-rdo-juno-centos-7 ) seems to be pickier about the --dbhost value of the Derby database.

*********************
Build image
*********************
[root@junolxc docker-glassfish41]# ls -l
total 44
-rw-r--r--. 1 root root 217 Jan 7 00:27 change_admin_password_func.sh
-rw-r--r--. 1 root root 833 Jan 7 00:27 change_admin_password.sh
-rw-r--r--. 1 root root 473 Jan 7 00:27 circle.yml
-rw-r--r--. 1 root root 44 Jan 7 00:27 database.sh
-rw-r--r--. 1 root root 1287 Jan 7 00:27 Dockerfile
-rw-r--r--. 1 root root 167 Jan 7 00:27 enable_secure_admin.sh
-rw-r--r--. 1 root root 11323 Jan 7 00:27 LICENSE
-rw-r--r--. 1 root root 2123 Jan 7 00:27 README.md
-rw-r--r--. 1 root root 354 Jan 7 00:27 run.sh

[root@junolxc docker-glassfish41]# docker build -t boris/docker-glassfish41 .

*************************
Upload image to glance
*************************
# . keystonerc_admin
# docker save boris/docker-glassfish41:latest | glance image-create --is-public=True --container-format=docker --disk-format=raw --name boris/docker-glassfish41:latest

**********************
Launch instance
**********************
# . keystonerc_demo
# nova boot --image "boris/docker-glassfish41:latest" --flavor m1.small --key-name osxkey --nic net-id=demo_network-id OracleGlassfish41
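`demo_network-id` above stands for the demo network's UUID. One way to resolve it is to parse `neutron net-list`; the sketch below runs the same awk filter against illustrative sample output (the UUID is made up), since the real command needs a live deployment:

```shell
# Extract a network's UUID from "neutron net-list"-style table output.
# In a real session:
#   NET_ID=$(neutron net-list | awk -F'|' '$3 ~ /demo_network/ {gsub(/ /,"",$2); print $2}')
sample='+--------------------------------------+--------------+
| id                                   | name         |
+--------------------------------------+--------------+
| 5f1b2a33-0000-4000-8000-2b7a0f1c9d11 | demo_network |
+--------------------------------------+--------------+'
NET_ID=$(printf '%s\n' "$sample" | awk -F'|' '$3 ~ /demo_network/ {gsub(/ /,"",$2); print $2}')
echo "$NET_ID"
```

The boot command then becomes `nova boot ... --nic net-id=$NET_ID OracleGlassfish41`.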

[root@junodocker (keystone_admin)]# ssh root@192.168.1.175
root@192.168.1.175's password:
Last login: Fri Jan 9 10:09:50 2015 from 192.168.1.57

root@instance-00000045:~# ps -ef

UID PID PPID C STIME TTY TIME CMD
root 1 0 0 10:15 ? 00:00:00 /usr/bin/python3 -u /sbin/my_init
root 12 1 0 10:15 ? 00:00:00 /usr/sbin/sshd

root 46 1 0 10:15 ? 00:00:08 /opt/jdk1.8.0_25/bin/java -Djava.library.path=/opt/glassfish4/glassfish/lib -cp /opt/glassfish4/glassfish/lib/asadmin/cli-optional.jar:/opt/glassfish4/javadb/lib/derby.jar:/opt/glassfish4/javadb/lib/derbytools.jar:/opt/glassfish4/javadb/lib/derbynet.jar:/opt/glassfish4/javadb/lib/derbyclient.jar com.sun.enterprise.admin.cli.optional.DerbyControl start 127.0.0.1 1527 true /opt/glassfish4/glassfish/databases

root 137 1 0 10:15 ? 00:00:00 /bin/bash /etc/my_init.d/run.sh
root 358 137 0 10:15 ? 00:00:05 java -jar /opt/glassfish4/bin/../glassfish/lib/client/appserver-cli.jar start-domain --debug=false -w

root 375 358 0 10:15 ? 00:02:59 /opt/jdk1.8.0_25/bin/java -cp /opt/glassfish4/glassfish/modules/glassfish.jar -XX:+UnlockDiagnosticVMOptions -XX:NewRatio=2 -XX:MaxPermSize=192m -Xmx512m -client -javaagent:/opt/glassfish4/glassfish/lib/monitor/flashlight-agent.jar -Djavax.xml.accessExternalSchema=all -Djavax.net.ssl.trustStore=/opt/glassfish4/glassfish/domains/domain1/config/cacerts.jks -Djdk.corba.allowOutputStreamSubclass=true -Dfelix.fileinstall.dir=/opt/glassfish4/glassfish/modules/autostart/ -Dorg.glassfish.additionalOSGiBundlesToStart=org.apache.felix.shell,org.apache.felix.gogo.runtime,org.apache.felix.gogo.shell,org.apache.felix.gogo.command,org.apache.felix.shell.remote,org.apache.felix.fileinstall -Dcom.sun.aas.installRoot=/opt/glassfish4/glassfish -Dfelix.fileinstall.poll=5000 -Djava.endorsed.dirs=/opt/glassfish4/glassfish/modules/endorsed:/opt/glassfish4/glassfish/lib/endorsed -Djava.security.policy=/opt/glassfish4/glassfish/domains/domain1/config/server.policy -Dosgi.shell.telnet.maxconn=1 -Dfelix.fileinstall.bundles.startTransient=true -Dcom.sun.enterprise.config.config_environment_factory_class=com.sun.enterprise.config.serverbeans.AppserverConfigEnvironmentFactory -Dfelix.fileinstall.log.level=2 -Djavax.net.ssl.keyStore=/opt/glassfish4/glassfish/domains/domain1/config/keystore.jks -Djava.security.auth.login.config=/opt/glassfish4/glassfish/domains/domain1/config/login.conf -Dfelix.fileinstall.disableConfigSave=false -Dfelix.fileinstall.bundles.new.start=true -Dcom.sun.aas.instanceRoot=/opt/glassfish4/glassfish/domains/domain1 -Dosgi.shell.telnet.port=6666 -Dgosh.args=--nointeractive -Dcom.sun.enterprise.security.httpsOutboundKeyAlias=s1as -Dosgi.shell.telnet.ip=127.0.0.1 -DANTLR_USE_DIRECT_CLASS_LOADING=true -Djava.awt.headless=true -Dcom.ctc.wstx.returnNullForDefaultNamespace=true -Djava.ext.dirs=/opt/jdk1.8.0_25/lib/ext:/opt/jdk1.8.0_25/jre/lib/ext:/opt/glassfish4/glassfish/domains/domain1/lib/ext -Djdbc.drivers=org.apache.derby.jdbc.ClientDriver
-Djava.library.path=/opt/glassfish4/glassfish/lib:/usr/java/packages/lib/amd64:/usr/lib64:/lib64:/lib:/usr/lib com.sun.enterprise.glassfish.bootstrap.ASMain -upgrade false -domaindir /opt/glassfish4/glassfish/domains/domain1 -read-stdin true -asadmin-args --host,,,localhost,,,--port,,,4848,,,--secure=false,,,--terse=false,,,--echo=false,,,--interactive=false,,,start-domain,,,--verbose=false,,,--watchdog=true,,,--debug=false,,,--domaindir,,,/opt/glassfish4/glassfish/domains,,,domain1 -domainname domain1 -instancename server -type DAS -verbose false -asadmin-classpath /opt/glassfish4/glassfish/lib/client/appserver-cli.jar -debug false -asadmin-classname com.sun.enterprise.admin.cli.AdminMain

root 1186 12 0 14:02 ? 00:00:00 sshd: root@pts/0
root 1188 1186 0 14:02 pts/0 00:00:00 -bash
root 1226 1188 0 15:45 pts/0 00:00:00 ps -ef

[Screenshot from 2015-01-09 09_44_16]

[Screenshot from 2015-01-09 10_02_57]

The original idea of using the ./run.sh script comes from
https://registry.hub.docker.com/u/bonelli/glassfish-4.1/

[root@junodocker ~(keystone_admin)]# docker logs 65a3f4cf1994

*** Running /etc/my_init.d/00_regen_ssh_host_keys.sh…
No SSH host key available. Generating one…
Creating SSH2 RSA key; this may take some time …
Creating SSH2 DSA key; this may take some time …
Creating SSH2 ECDSA key; this may take some time …
Creating SSH2 ED25519 key; this may take some time …
invoke-rc.d: policy-rc.d denied execution of restart.

*** Running /etc/my_init.d/database.sh…
Starting database in Network Server mode on host 127.0.0.1 and port 1527.
--------- Derby Network Server Information --------
Version: CSS10100/10.10.2.0 - (1582446) Build: 1582446 DRDA Product Id: CSS10100
--- listing properties ---
derby.drda.traceDirectory=/opt/glassfish4/glassfish/databases
derby.drda.maxThreads=0
derby.drda.sslMode=off
derby.drda.keepAlive=true
derby.drda.minThreads=0
derby.drda.portNumber=1527
derby.drda.logConnections=false
derby.drda.timeSlice=0
derby.drda.startNetworkServer=false
derby.drda.host=127.0.0.1
derby.drda.traceAll=false
------------------ Java Information ------------------
Java Version: 1.8.0_25
Java Vendor: Oracle Corporation
Java home: /opt/jdk1.8.0_25/jre
Java classpath: /opt/glassfish4/glassfish/lib/asadmin/cli-optional.jar:/opt/glassfish4/javadb/lib/derby.jar:/opt/glassfish4/javadb/lib/derbytools.jar:/opt/glassfish4/javadb/lib/derbynet.jar:/opt/glassfish4/javadb/lib/derbyclient.jar
OS name: Linux
OS architecture: amd64
OS version: 3.10.0-123.el7.x86_64
Java user name: root
Java user home: /root
Java user dir: /
java.specification.name: Java Platform API Specification
java.specification.version: 1.8
java.runtime.version: 1.8.0_25-b17
--------- Derby Information --------
[/opt/glassfish4/javadb/lib/derby.jar] 10.10.2.0 - (1582446)
[/opt/glassfish4/javadb/lib/derbytools.jar] 10.10.2.0 - (1582446)
[/opt/glassfish4/javadb/lib/derbynet.jar] 10.10.2.0 - (1582446)
[/opt/glassfish4/javadb/lib/derbyclient.jar] 10.10.2.0 - (1582446)
------------------------------------------------------
----------------- Locale Information -----------------

Current Locale : [English/United States [en_US]]
Found support for locale: [cs]
version: 10.10.2.0 - (1582446)
Found support for locale: [de_DE]
version: 10.10.2.0 - (1582446)
Found support for locale: [es]
version: 10.10.2.0 - (1582446)
Found support for locale: [fr]
version: 10.10.2.0 - (1582446)
Found support for locale: [hu]
version: 10.10.2.0 - (1582446)
Found support for locale: [it]
version: 10.10.2.0 - (1582446)
Found support for locale: [ja_JP]
version: 10.10.2.0 - (1582446)
Found support for locale: [ko_KR]
version: 10.10.2.0 - (1582446)
Found support for locale: [pl]
version: 10.10.2.0 - (1582446)
Found support for locale: [pt_BR]
version: 10.10.2.0 - (1582446)
Found support for locale: [ru]
version: 10.10.2.0 - (1582446)
Found support for locale: [zh_CN]
version: 10.10.2.0 - (1582446)
Found support for locale: [zh_TW]
version: 10.10.2.0 - (1582446)
------------------------------------------------------
------------------------------------------------------

Starting database in the background.

Log redirected to /opt/glassfish4/glassfish/databases/derby.log.
Command start-database executed successfully.
*** Running /etc/my_init.d/run.sh…
Bad Network Configuration. DNS can not resolve the hostname:
java.net.UnknownHostException: instance-00000045: instance-00000045: unknown error

Waiting for domain1 to start …….
Successfully started the domain : domain1
domain Location: /opt/glassfish4/glassfish/domains/domain1
Log File: /opt/glassfish4/glassfish/domains/domain1/logs/server.log
Admin Port: 4848
Command start-domain executed successfully.
=> Modifying password of admin to random in Glassfish
spawn asadmin --user admin change-admin-password
Enter the admin password>
Enter the new admin password>
Enter the new admin password again>
Command change-admin-password executed successfully.
=> Enabling secure admin login
spawn asadmin enable-secure-admin
Enter admin user name> admin
Enter admin password for user "admin">
You must restart all running servers for the change in secure admin to take effect.
Command enable-secure-admin executed successfully.
=> Done!
========================================================================
You can now connect to this Glassfish server using:
admin:fCZNVP80JiyI
Please remember to change the above password as soon as possible!
========================================================================
=> Restarting Glassfish server
Waiting for the domain to stop .
Command stop-domain executed successfully.
=> Starting and running Glassfish server
=> Debug mode is set to: false