Creating Servers via REST API on RDO Mitaka && Keystone API V3

April 29, 2016

Normally, an ssh keypair for a particular tenant is created after sourcing that tenant's credentials, and it then works for that tenant. For some reason, upgrading the Keystone API to v3 breaks this scheme for REST API POST requests issued for server creation. I am not sure whether what follows below is a workaround or whether it is supposed to work this way.

Assign the admin role to user admin on project demo via the openstack client

[root@ip-192-169-142-127 ~(keystone_admin)]# openstack project list| \
grep demo > list2

[root@ip-192-169-142-127 ~(keystone_admin)]# openstack user list| \
grep admin >> list2

[root@ip-192-169-142-127 ~(keystone_admin)]# openstack role list|\
grep admin >> list2

[root@ip-192-169-142-127 ~(keystone_admin)]# cat list2
| 052b16e56537467d8161266b52a43b54 | demo |
| b6f2f511caa44f4e94ce5b2a5809dc50 | admin |
| f40413a0de92494680ed8b812f2bf266 | admin |

[root@ip-192-169-142-127 ~(keystone_admin)]# openstack role add \
--project \
052b16e56537467d8161266b52a43b54 \
--user b6f2f511caa44f4e94ce5b2a5809dc50 \
f40413a0de92494680ed8b812f2bf266
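
To double-check that the assignment took effect, the openstack client can list role assignments (a quick sanity check; assumes a Mitaka-era python-openstackclient):

# openstack role assignment list --user admin --project demo

A single row linking the three IDs gathered above is expected.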

*********************************************************************
Run the following to obtain a token scoped to project “demo”
*********************************************************************

# . keystonerc_admin
# curl -i -H "Content-Type: application/json" -d \
' { "auth":
{ "identity":
{ "methods": ["password"], "password":
{ "user":
{ "name": "admin", "domain":
{ "id": "default" }, "password": "7049f834927e4468" }
}
},
"scope":
{ "project":
{ "name": "demo", "domain":
{ "id": "default" }
}
}
}
}' http://192.169.142.127:5000/v3/auth/tokens ; echo
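
Note that Keystone v3 returns the token value in the X-Subject-Token response header rather than in the JSON body (as v2.0 did). A minimal sketch capturing it into a shell variable, assuming the JSON body shown above has been saved to a file token-request.json:

# TOKEN=$(curl -si -H "Content-Type: application/json" -d @token-request.json \
http://192.169.142.127:5000/v3/auth/tokens | awk '/X-Subject-Token/ {print $2}' | tr -d '\r')
# echo $TOKEN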

Screenshot from 2016-04-28 19-47-00

Created ssh keypair “oskeydemoV3” after sourcing keystonerc_admin

Screenshot from 2016-04-28 19-50-02

Admin Console shows

Screenshot from 2016-04-28 20-28-57

***************************************************************************************
Submit “oskeydemoV3” as the value of key_name in the Chrome REST Client environment && issue a POST request to create the server; “key_name” will be accepted (unlike the case when the ssh keypair was created by tenant demo). A cURL equivalent is sketched right after this banner.
*************************************************************************************
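
A cURL equivalent of that Chrome REST Client request might look as follows; this is only a sketch, and the tenant id, token, image and network UUIDs are placeholders to be substituted with real values:

curl -g -i -X POST http://192.169.142.127:8774/v2/<demo-tenant-id>/servers \
-H "Content-Type: application/json" -H "Accept: application/json" \
-H "X-Auth-Token: <token-scoped-to-demo>" \
-d '{"server": {"name": "CirrOSDemoV3", "key_name": "oskeydemoV3", "imageRef": "<image-uuid>", "flavorRef": "1", "max_count": 1, "min_count": 1, "networks": [{"uuid": "<network-uuid>"}], "security_groups": [{"name": "default"}]}}'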

Screenshot from 2016-04-28 19-52-24

Now log into the dashboard as demo

Screenshot from 2016-04-28 19-56-25

Verify that the created keypair “oskeydemoV3” allows logging into the server

Screenshot from 2016-04-28 19-58-56


AIO RDO Liberty && several external networks VLAN provider setup

April 28, 2016

The post below addresses the case when an AIO RDO Liberty node has to have external networks of VLAN type with predefined VLAN tags. A straightforward packstack --allinone install doesn't allow achieving the desired network configuration; an external network provider of VLAN type appears to be required. In this particular case, the office networks 10.10.10.0/24 (VLAN tag 157), 10.10.57.0/24 (VLAN tag 172) and 10.10.32.0/24 (VLAN tag 200) already exist when the RDO install is running. If demo_provision was “y”, then delete router1 and the created external network of VXLAN type.

I got back to this writing due to a recent post:
https://ask.openstack.org/en/question/91611/how-to-configure-multiple-external-networks-in-rdo-libertymitaka/
The answer provided there contains several misleading steps in the configuration of VLAN-enabled bridges.

First

***********************************************************
Update /etc/neutron/plugins/ml2/ml2_conf.ini
***********************************************************

[root@ip-192-169-142-52 ml2(keystone_demo)]# cat ml2_conf.ini
[ml2]
type_drivers = flat,vlan,vxlan
tenant_network_types = vlan,vxlan
mechanism_drivers =openvswitch
path_mtu = 0
[ml2_type_flat]
[ml2_type_vlan]
network_vlan_ranges = vlan157:157:157,vlan172:172:172,vlan200:200:200
[ml2_type_gre]
[ml2_type_vxlan]
vni_ranges =10:100
vxlan_group =224.0.0.1
[ml2_type_geneve]
[securitygroup]
enable_security_group = True

**************
Then
**************

# openstack-service restart neutron

***************************************************
Invoke external network provider
***************************************************

[root@ip-192-169-142-52 ~(keystone_admin)]# neutron net-create vlan157 --shared --provider:network_type vlan --provider:segmentation_id 157 --provider:physical_network vlan157 --router:external

[root@ip-192-169-142-52 ~(keystone_admin)]# neutron subnet-create --name sub-vlan157 --gateway 10.10.10.1 --allocation-pool start=10.10.10.100,end=10.10.10.200 vlan157 10.10.10.0/24

***********************************************
Create second external network
***********************************************

[root@ip-192-169-142-52 ~(keystone_admin)]# neutron net-create vlan172 --shared --provider:network_type vlan --provider:segmentation_id 172 --provider:physical_network vlan172  --router:external


[root@ip-192-169-142-52 ~(keystone_admin)]# neutron subnet-create --name sub-vlan172 --gateway 10.10.57.1 --allocation-pool start=10.10.57.100,end=10.10.57.200 vlan172 10.10.57.0/24

***********************************************
Create third external network
***********************************************

[root@ip-192-169-142-52 ~(keystone_admin)]# neutron net-create vlan200 --shared --provider:network_type vlan --provider:segmentation_id 200 --provider:physical_network vlan200  --router:external

[root@ip-192-169-142-52 ~(keystone_admin)]# neutron subnet-create --name sub-vlan200 --gateway 10.10.32.1 --allocation-pool start=10.10.32.100,end=10.10.32.200 vlan200 10.10.32.0/24

***********************************************************************
No need to update the subnet (vs [ 1 ]). No switch to "enable_isolated_metadata=True".
The Neutron L3 agent configuration results in qg-<port-id> interfaces being attached to br-int
***********************************************************************

[root@ip-192-169-142-52 ~(keystone_admin)]# neutron net-show vlan157

+—————————+————————————–+
| Field                     | Value                                |
+—————————+————————————–+
| admin_state_up            | True                                 |
| id                        | b41e4d36-9a63-4631-abb0-6436f2f50e2e |
| mtu                       | 0                                    |
| name                      | vlan157                              |
| provider:network_type     | vlan                                 |
| provider:physical_network | vlan157                              |
| provider:segmentation_id  | 157                                  |
| router:external           | True                                 |
| shared                    | True                                 |
| status                    | ACTIVE                               |
| subnets                   | bb753fc3-f257-4ce5-aa7c-56648648056b |
| tenant_id                 | b18d25d66bbc48b1ad4b855a9c14da70     |
+—————————+————————————–+

[root@ip-192-169-142-52 ~(keystone_admin)]# neutron subnet-show sub-vlan157

+——————-+——————————————————————+
| Field             | Value                                                            |
+——————-+——————————————————————+
| allocation_pools  | {“start”: “10.10.10.100”, “end”: “10.10.10.200”}                 |
| cidr              | 10.10.10.0/24                                                    |
| dns_nameservers   |                                                                  |
| enable_dhcp       | True                                                             |
| gateway_ip        | 10.10.10.1                                                       |
| host_routes       | {“destination”: “169.254.169.254/32”, “nexthop”: “10.10.10.151”} |
| id                | bb753fc3-f257-4ce5-aa7c-56648648056b                             |
| ip_version        | 4                                                                |
| ipv6_address_mode |                                                                  |
| ipv6_ra_mode      |                                                                  |
| name              | sub-vlan157                                                      |
| network_id        | b41e4d36-9a63-4631-abb0-6436f2f50e2e                             |
| subnetpool_id     |                                                                  |
| tenant_id         | b18d25d66bbc48b1ad4b855a9c14da70                                 |
+——————-+——————————————————————+

[root@ip-192-169-142-52 ~(keystone_admin)]# neutron net-show vlan172

+—————————+————————————–+
| Field                     | Value                                |
+—————————+————————————–+
| admin_state_up            | True                                 |
| id                        | 3714adc9-ab17-4f96-9df2-48a6c0b64513 |
| mtu                       | 0                                    |
| name                      | vlan172                              |
| provider:network_type     | vlan                                 |
| provider:physical_network | vlan172                              |
| provider:segmentation_id  | 172                                  |
| router:external           | True                                 |
| shared                    | True                                 |
| status                    | ACTIVE                               |
| subnets                   | 21419f2f-212b-409a-8021-2b4a2ba6532f |
| tenant_id                 | b18d25d66bbc48b1ad4b855a9c14da70     |
+—————————+————————————–+

[root@ip-192-169-142-52 ~(keystone_admin)]# neutron subnet-show sub-vlan172

+——————-+——————————————————————+
| Field             | Value                                                            |
+——————-+——————————————————————+
| allocation_pools  | {“start”: “10.10.57.100”, “end”: “10.10.57.200”}                 |
| cidr              | 10.10.57.0/24                                                    |
| dns_nameservers   |                                                                  |
| enable_dhcp       | True                                                             |
| gateway_ip        | 10.10.57.1                                                       |
| host_routes       | {“destination”: “169.254.169.254/32”, “nexthop”: “10.10.57.151”} |
| id                | 21419f2f-212b-409a-8021-2b4a2ba6532f                             |
| ip_version        | 4                                                                |
| ipv6_address_mode |                                                                  |
| ipv6_ra_mode      |                                                                  |
| name              | sub-vlan172                                                      |
| network_id        | 3714adc9-ab17-4f96-9df2-48a6c0b64513                             |
| subnetpool_id     |                                                                  |
| tenant_id         | b18d25d66bbc48b1ad4b855a9c14da70                                 |
+——————-+——————————————————————+

[root@ip-192-169-142-52 ~(keystone_admin)]# neutron net-show vlan200

+—————————+————————————–+
| Field                     | Value                                |
+—————————+————————————–+
| admin_state_up            | True                                 |
| id                        | 3dc90ff7-b1df-4079-aca1-cceedb23f440 |
| mtu                       | 0                                    |
| name                      | vlan200                              |
| provider:network_type     | vlan                                 |
| provider:physical_network | vlan200                              |
| provider:segmentation_id  | 200                                  |
| router:external           | True                                 |
| shared                    | True                                 |
| status                    | ACTIVE                               |
| subnets                   | 60181211-ea36-4e4e-8781-f13f743baa19 |
| tenant_id                 | b18d25d66bbc48b1ad4b855a9c14da70     |
+—————————+————————————–+

[root@ip-192-169-142-52 ~(keystone_admin)]# neutron subnet-show sub-vlan200

+——————-+————————————————–+
| Field             | Value                                            |
+——————-+————————————————–+
| allocation_pools  | {“start”: “10.10.32.100”, “end”: “10.10.32.200”} |
| cidr              | 10.10.32.0/24                                    |
| dns_nameservers   |                                                  |
| enable_dhcp       | True                                             |
| gateway_ip        | 10.10.32.1                                       |
| host_routes       |                                                  |
| id                | 60181211-ea36-4e4e-8781-f13f743baa19             |
| ip_version        | 4                                                |
| ipv6_address_mode |                                                  |
| ipv6_ra_mode      |                                                  |
| name              | sub-vlan200                                      |
| network_id        | 3dc90ff7-b1df-4079-aca1-cceedb23f440             |
| subnetpool_id     |                                                  |
| tenant_id         | b18d25d66bbc48b1ad4b855a9c14da70                 |
+——————-+————————————————–+

**************
Next Step
**************

# modprobe 8021q
# ovs-vsctl add-br br-vlan
# ovs-vsctl add-port br-vlan eth1
# vconfig add br-vlan 157
# ovs-vsctl add-br br-vlan2
# ovs-vsctl add-port br-vlan2 eth2
# vconfig add br-vlan2 172
# ovs-vsctl add-br br-vlan3
# ovs-vsctl add-port br-vlan3 eth3
# vconfig add br-vlan3  200
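
To confirm that each 802.1q subinterface was created on top of its OVS bridge, `ip -d link` can be queried (a quick check, same interface names as above):

# ip -d link show br-vlan.157    # should report: vlan protocol 802.1Q id 157
# ip -d link show br-vlan2.172   # id 172
# ip -d link show br-vlan3.200   # id 200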

******************************
Update l3_agent.ini file
******************************
external_network_bridge =
gateway_external_network_id =

**********************************************
/etc/neutron/plugins/ml2/openvswitch_agent.ini
**********************************************

bridge_mappings = vlan157:br-vlan,vlan172:br-vlan2,vlan200:br-vlan3

*************************************
Update Neutron Configuration
*************************************

# openstack-service restart neutron

*******************************************
Make the configuration persistent between reboots
*******************************************

/etc/sysconfig/network-scripts/ifcfg-eth1

DEVICE="eth1"
ONBOOT=yes
OVS_BRIDGE=br-vlan
TYPE=OVSPort
DEVICETYPE="ovs"

/etc/sysconfig/network-scripts/ifcfg-br-vlan

DEVICE=br-vlan
BOOTPROTO=none
ONBOOT=yes
TYPE=OVSBridge
DEVICETYPE="ovs"

/etc/sysconfig/network-scripts/ifcfg-br-vlan.157

BOOTPROTO="none"
DEVICE="br-vlan.157"
ONBOOT="yes"
IPADDR="10.10.10.150"
PREFIX="24"
GATEWAY="10.10.10.1"
DNS1="83.221.202.254"
VLAN=yes
NOZEROCONF=yes
USERCTL=no

/etc/sysconfig/network-scripts/ifcfg-eth2

DEVICE="eth2"
ONBOOT=yes
OVS_BRIDGE=br-vlan2
TYPE=OVSPort
DEVICETYPE="ovs"

/etc/sysconfig/network-scripts/ifcfg-br-vlan2

DEVICE=br-vlan2
BOOTPROTO=none
ONBOOT=yes
TYPE=OVSBridge
DEVICETYPE="ovs"

/etc/sysconfig/network-scripts/ifcfg-br-vlan2.172

BOOTPROTO="none"
DEVICE="br-vlan2.172"
ONBOOT="yes"
IPADDR="10.10.57.150"
PREFIX="24"
GATEWAY="10.10.57.1"
DNS1="83.221.202.254"
VLAN=yes
NOZEROCONF=yes

/etc/sysconfig/network-scripts/ifcfg-br-vlan3

DEVICE=br-vlan3
BOOTPROTO=none
ONBOOT=yes
TYPE=OVSBridge
DEVICETYPE="ovs"

/etc/sysconfig/network-scripts/ifcfg-br-vlan3.200

BOOTPROTO="none"
DEVICE="br-vlan3.200"
ONBOOT="yes"
IPADDR="10.10.32.150"
PREFIX="24"
GATEWAY="10.10.32.1"
DNS1="83.221.202.254"
VLAN=yes
NOZEROCONF=yes
USERCTL=no

/etc/sysconfig/network-scripts/ifcfg-eth3

DEVICE="eth3"
ONBOOT=yes
OVS_BRIDGE=br-vlan3
TYPE=OVSPort
DEVICETYPE="ovs"

********************************************
Routing table on AIO RDO Liberty Node
********************************************

[root@ip-192-169-142-52 ~(keystone_admin)]# ip route

default via 10.10.10.1 dev br-vlan.157
10.10.10.0/24 dev br-vlan.157  proto kernel  scope link  src 10.10.10.150
10.10.32.0/24 dev br-vlan3.200  proto kernel  scope link  src 10.10.32.150
10.10.57.0/24 dev br-vlan2.172  proto kernel  scope link  src 10.10.57.150
169.254.0.0/16 dev eth0  scope link  metric 1002
169.254.0.0/16 dev eth1  scope link  metric 1003
169.254.0.0/16 dev eth2  scope link  metric 1004
169.254.0.0/16 dev eth3  scope link  metric 1005
169.254.0.0/16 dev br-vlan3  scope link  metric 1008
169.254.0.0/16 dev br-vlan2  scope link  metric 1009
169.254.0.0/16 dev br-vlan  scope link  metric 1011
192.169.142.0/24 dev eth0  proto kernel  scope link  src 192.169.142.52

****************************************************************************
Notice that the qrouter namespaces are attached to br-int.
No switch to “enable_isolated_metadata=True” (vs [ 1 ])
*****************************************************************************

[root@ip-192-169-142-52 ~(keystone_admin)]# neutron net-list | grep vlan
| 3dc90ff7-b1df-4079-aca1-cceedb23f440 | vlan200   | 60181211-ea36-4e4e-8781-f13f743baa19 10.10.32.0/24 |
| 235c8173-d3f8-407e-ad6a-c1d3d423c763 | vlan172   | c7588239-4941-419b-8d27-ccd970acc4ce 10.10.57.0/24 |
| b41e4d36-9a63-4631-abb0-6436f2f50e2e | vlan157   | bb753fc3-f257-4ce5-aa7c-56648648056b 10.10.10.0/24 |

[root@ip-192-169-142-52 ~(keystone_admin)]# ovs-vsctl show
40286423-e174-4714-9c82-32d026ef47ca
    Bridge br-vlan
        Port "eth1"
            Interface "eth1"
        Port br-vlan
            Interface br-vlan
                type: internal
        Port phy-br-vlan
            Interface phy-br-vlan
                type: patch
                options: {peer=int-br-vlan}
    Bridge br-ex
        Port br-ex
            Interface br-ex
                type: internal
    Bridge br-tun
        fail_mode: secure
        Port br-tun
            Interface br-tun
                type: internal
        Port patch-int
            Interface patch-int
                type: patch
                options: {peer=patch-tun}
    Bridge "br-vlan2"
        Port "phy-br-vlan2"
            Interface "phy-br-vlan2"
                type: patch
                options: {peer="int-br-vlan2"}
        Port "eth2"
            Interface "eth2"
        Port "br-vlan2"
            Interface "br-vlan2"
                type: internal
    Bridge "br-vlan3"
        Port "br-vlan3"
            Interface "br-vlan3"
                type: internal
        Port "phy-br-vlan3"
            Interface "phy-br-vlan3"
                type: patch
                options: {peer="int-br-vlan3"}
        Port "eth3"
            Interface "eth3"
    Bridge br-int
        fail_mode: secure
        Port "qr-4e77c7a3-b5"
            tag: 3
            Interface "qr-4e77c7a3-b5"
                type: internal
        Port "int-br-vlan3"
            Interface "int-br-vlan3"
                type: patch
                options: {peer="phy-br-vlan3"}
        Port "tap8e684c78-a3"
            tag: 2
            Interface "tap8e684c78-a3"
                type: internal
        Port patch-tun
            Interface patch-tun
                type: patch
                options: {peer=patch-int}
        Port "qvoe2761636-b5"
            tag: 4
            Interface "qvoe2761636-b5"
        Port "tap6cd6fadf-31"
            tag: 1
            Interface "tap6cd6fadf-31"
                type: internal
        Port "qg-02f7ff0d-6d"
            tag: 2
            Interface "qg-02f7ff0d-6d"
                type: internal
        Port "qg-943f7831-46"
            tag: 1
            Interface "qg-943f7831-46"
                type: internal
        Port "tap4ef27b41-be"
            tag: 5
            Interface "tap4ef27b41-be"
                type: internal
        Port "qr-f0fd3793-4e"
            tag: 8
            Interface "qr-f0fd3793-4e"
                type: internal
        Port "tapb1435e62-8b"
            tag: 7
            Interface "tapb1435e62-8b"
                type: internal
        Port "qvo1bb76476-05"
            tag: 3
            Interface "qvo1bb76476-05"
        Port "qvocf68fcd8-68"
            tag: 8
            Interface "qvocf68fcd8-68"
        Port "qvo8605f075-25"
            tag: 4
            Interface "qvo8605f075-25"
        Port "qg-08ccc224-1e"
            tag: 7
            Interface "qg-08ccc224-1e"
                type: internal
        Port "tapbb485628-0b"
            tag: 4
            Interface "tapbb485628-0b"
                type: internal
        Port "int-br-vlan2"
            Interface "int-br-vlan2"
                type: patch
                options: {peer="phy-br-vlan2"}
        Port "tapee030534-da"
            tag: 8
            Interface "tapee030534-da"
                type: internal
        Port "qr-4d679697-39"
            tag: 4
            Interface "qr-4d679697-39"
                type: internal
        Port br-int
            Interface br-int
                type: internal
        Port "tap9b38c69e-46"
            tag: 6
            Interface "tap9b38c69e-46"
                type: internal
        Port "tapc166022a-54"
            tag: 3
            Interface "tapc166022a-54"
                type: internal
        Port "qvo66d8f235-d4"
            tag: 8
            Interface "qvo66d8f235-d4"
        Port int-br-vlan
            Interface int-br-vlan
                type: patch
                options: {peer=phy-br-vlan}
    ovs_version: "2.4.0"

[root@ip-192-169-142-52 ~(keystone_admin)]# ip netns
qdhcp-e826aa22-dee0-478d-8bd7-721336e3824a
qrouter-a2f4c7e8-9b63-4ed3-8d9a-faa6158d253b
qdhcp-3dc90ff7-b1df-4079-aca1-cceedb23f440
qdhcp-4481aee1-ef86-4997-bf52-e435aafb9c20
qdhcp-eda69965-c6ee-42be-944f-2d61498e4bea
qdhcp-6768214b-b71c-4178-a0fc-774b2a5d59ef
qdhcp-b41e4d36-9a63-4631-abb0-6436f2f50e2e
qdhcp-03812cc9-69c5-492a-9995-985bf6e1ff13
qdhcp-235c8173-d3f8-407e-ad6a-c1d3d423c763
qdhcp-d958a059-f7bd-4f9f-93a3-3499d20a1fe2
qrouter-c1900dab-447f-4f87-80e7-b4c8ca087d28
qrouter-71237c84-59ca-45dc-a6ec-23eb94c4249d

********************************************************************************
Access to the Nova metadata server is provided via neutron-ns-metadata-proxy
running in the corresponding qrouter namespaces (Neutron L3 configuration)
********************************************************************************

[root@ip-192-169-142-52 ~(keystone_admin)]# ip netns exec qrouter-a2f4c7e8-9b63-4ed3-8d9a-faa6158d253b netstat -antp

Active Internet connections (servers and established)

Proto Recv-Q Send-Q Local Address           Foreign Address         State       PID/Program name
tcp        0      0 0.0.0.0:9697            0.0.0.0:*               LISTEN      12548/python2    
[root@ip-192-169-142-52 ~(keystone_admin)]# ps aux | grep 12548

neutron  12548  0.0  0.4 281028 35992 ?        S    18:34   0:00 /usr/bin/python2 /bin/neutron-ns-metadata-proxy --pid_file=/var/lib/neutron/external/pids/a2f4c7e8-9b63-4ed3-8d9a-faa6158d253b.pid --metadata_proxy_socket=/var/lib/neutron/metadata_proxy --router_id=a2f4c7e8-9b63-4ed3-8d9a-faa6158d253b --state_path=/var/lib/neutron --metadata_port=9697 --metadata_proxy_user=990 --metadata_proxy_group=988 --verbose --log-file=neutron-ns-metadata-proxy-a2f4c7e8-9b63-4ed3-8d9a-faa6158d253b.log --log-dir=/var/log/neutron
root     32665  0.0  0.0 112644   960 pts/8    S+   19:29   0:00 grep --color=auto 12548
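
The redirect that steers instance requests for 169.254.169.254 to this proxy lives in the router namespace's NAT table; a quick check (same namespace as above):

# ip netns exec qrouter-a2f4c7e8-9b63-4ed3-8d9a-faa6158d253b iptables -t nat -S | grep 9697

The rule is expected to be roughly of the form "-A neutron-l3-agent-PREROUTING -d 169.254.169.254/32 -p tcp --dport 80 -j REDIRECT --to-ports 9697".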

******************************************************************************
OVS flow verification on br-vlan3 and br-vlan2. On each of the external networks
vlan172 and vlan200, two VMs are pinging each other
******************************************************************************

[root@ip-192-169-142-52 ~(keystone_admin)]# ovs-ofctl dump-flows br-vlan3 | grep NORMAL
cookie=0x0, duration=3554.739s, table=0, n_packets=33, n_bytes=2074, idle_age=2137, priority=4,in_port=2,dl_vlan=7 actions=mod_vlan_vid:200,NORMAL
cookie=0x0, duration=4204.459s, table=0, n_packets=2102, n_bytes=109304, idle_age=1, priority=0 actions=NORMAL

[root@ip-192-169-142-52 ~(keystone_admin)]# ovs-ofctl dump-flows br-vlan3 | grep NORMAL
cookie=0x0, duration=3557.643s, table=0, n_packets=33, n_bytes=2074, idle_age=2140, priority=4,in_port=2,dl_vlan=7 actions=mod_vlan_vid:200,NORMAL
cookie=0x0, duration=4207.363s, table=0, n_packets=2103, n_bytes=109356, idle_age=2, priority=0 actions=NORMAL

[root@ip-192-169-142-52 ~(keystone_admin)]# ovs-ofctl dump-flows br-vlan3 | grep NORMAL
cookie=0x0, duration=3568.225s, table=0, n_packets=33, n_bytes=2074, idle_age=2151, priority=4,in_port=2,dl_vlan=7 actions=mod_vlan_vid:200,NORMAL
cookie=0x0, duration=4217.945s, table=0, n_packets=2109, n_bytes=109668, idle_age=0, priority=0 actions=NORMAL

[root@ip-192-169-142-52 ~(keystone_admin)]# ovs-ofctl dump-flows br-vlan2 | grep NORMAL
cookie=0x0, duration=4140.528s, table=0, n_packets=11, n_bytes=642, idle_age=695, priority=4,in_port=2,dl_vlan=1 actions=mod_vlan_vid:172,NORMAL
cookie=0x0, duration=4225.918s, table=0, n_packets=2113, n_bytes=109876, idle_age=1, priority=0 actions=NORMAL

[root@ip-192-169-142-52 ~(keystone_admin)]# ovs-ofctl dump-flows br-vlan2 | grep NORMAL
cookie=0x0, duration=4143.600s, table=0, n_packets=11, n_bytes=642, idle_age=698, priority=4,in_port=2,dl_vlan=1 actions=mod_vlan_vid:172,NORMAL
cookie=0x0, duration=4228.990s, table=0, n_packets=2115, n_bytes=109980, idle_age=0, priority=0 actions=NORMAL

[root@ip-192-169-142-52 ~(keystone_admin)]# ovs-ofctl dump-flows br-vlan2 | grep NORMAL
cookie=0x0, duration=4145.912s, table=0, n_packets=11, n_bytes=642, idle_age=700, priority=4,in_port=2,dl_vlan=1 actions=mod_vlan_vid:172,NORMAL
cookie=0x0, duration=4231.302s, table=0, n_packets=2116, n_bytes=110032, idle_age=0, priority=0 actions=NORMAL

********************************************************************************
The next question is how the local VLAN tag 7 gets generated.
Run the following commands :-
********************************************************************************

[root@ip-192-169-142-52 ~(keystone_admin)]# neutron net-show vlan200

+—————————+————————————–+
| Field                     | Value                                |
+—————————+————————————–+
| admin_state_up            | True                                 |
| id                        | 3dc90ff7-b1df-4079-aca1-cceedb23f440 |
| mtu                       | 0                                    |
| name                      | vlan200                              |
| provider:network_type     | vlan                                 |
| provider:physical_network | vlan200                              |
| provider:segmentation_id  | 200                                  |
| router:external           | True                                 |
| shared                    | True                                 |
| status                    | ACTIVE                               |
| subnets                   | 60181211-ea36-4e4e-8781-f13f743baa19 |
| tenant_id                 | b18d25d66bbc48b1ad4b855a9c14da70     |
+—————————+————————————–+

[root@ip-192-169-142-52 ~(keystone_admin)]# ip netns | grep 3dc90ff7-b1df-4079-aca1-cceedb23f440
qdhcp-3dc90ff7-b1df-4079-aca1-cceedb23f440

[root@ip-192-169-142-52 ~(keystone_admin)]# ip netns exec qdhcp-3dc90ff7-b1df-4079-aca1-cceedb23f440 ifconfig

lo: flags=73<UP,LOOPBACK,RUNNING>  mtu 65536
inet 127.0.0.1  netmask 255.0.0.0
inet6 ::1  prefixlen 128  scopeid 0x10<host>
loop  txqueuelen 0  (Local Loopback)
RX packets 0  bytes 0 (0.0 B)
RX errors 0  dropped 0  overruns 0  frame 0
TX packets 0  bytes 0 (0.0 B)
TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0
tapb1435e62-8b: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1500
inet 10.10.32.100  netmask 255.255.255.0  broadcast 10.10.32.255
inet6 fe80::f816:3eff:fee3:19f2  prefixlen 64  scopeid 0x20<link>
ether fa:16:3e:e3:19:f2  txqueuelen 0  (Ethernet)
RX packets 27  bytes 1526 (1.4 KiB)
RX errors 0  dropped 0  overruns 0  frame 0
TX packets 8  bytes 648 (648.0 B)
TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0

[root@ip-192-169-142-52 ~(keystone_admin)]# ip netns exec qdhcp-3dc90ff7-b1df-4079-aca1-cceedb23f440 route -n

Kernel IP routing table
Destination     Gateway         Genmask         Flags Metric Ref    Use Iface
0.0.0.0         10.10.32.1      0.0.0.0         UG    0      0        0 tapb1435e62-8b
10.10.32.0      0.0.0.0         255.255.255.0   U     0      0        0 tapb1435e62-8b

[root@ip-192-169-142-52 ~(keystone_admin)]# ovs-vsctl show | grep b1435e62-8b

Port “tapb1435e62-8b”
Interface “tapb1435e62-8b”

**************************************************************************
Actually, directives mentioned in  [ 1 ]
**************************************************************************

# neutron subnet-create --name vlan100 --gateway 192.168.0.1 --allocation-pool \
start=192.168.0.150,end=192.168.0.200 --enable-dhcp \
--dns-nameserver 192.168.0.1 vlan100 192.168.0.0/24
# neutron subnet-update --host-route destination=169.254.169.254/32,nexthop=192.168.0.151 vlan100

along with the switch to “enable_isolated_metadata=True”, are targeted at launching VMs into the external_fixed_ip pool in qdhcp-3dc90ff7-b1df-4079-aca1-cceedb23f440 without creating a Neutron router, splitting tenants by VLAN tag IDs. I might be missing something, but [ 1 ] configures a system where each vlan(XXX) external network would belong to only one tenant, identified by tag (XXX).

Unless RBAC policies are created to control who has access to the provider network.

That is not what I intend to do. The Neutron workflow on br-int won't touch the mentioned qdhcp namespace at all. Any external vlan(XXX) network might be used by several tenants, each one having its own VXLAN subnet (identified in the system by a VXLAN ID) and its own Neutron router(XXX) to the external network vlan(XXX). The AIO RDO setup is just a sample; I am talking about the Network Node in a multi-node RDO Liberty deployment.

*********************************************
Fragment from `ovs-vsctl show`
*********************************************
Port “tapb1435e62-8b”
tag: 7
Interface “tapb1435e62-8b”

*************************************************************************
The next appearance of VLAN tag 7, as expected, is qg-08ccc224-1e,
the outgoing interface of the qrouter-a2f4c7e8-9b63-4ed3-8d9a-faa6158d253b
namespace.
*************************************************************************

[root@ip-192-169-142-52 ~(keystone_admin)]# ip netns exec qrouter-a2f4c7e8-9b63-4ed3-8d9a-faa6158d253b ifconfig

lo: flags=73<UP,LOOPBACK,RUNNING>  mtu 65536
inet 127.0.0.1  netmask 255.0.0.0
inet6 ::1  prefixlen 128  scopeid 0x10<host>
loop  txqueuelen 0  (Local Loopback)
RX packets 0  bytes 0 (0.0 B)
RX errors 0  dropped 0  overruns 0  frame 0
TX packets 0  bytes 0 (0.0 B)
TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0
qg-08ccc224-1e: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1500
inet 10.10.32.101  netmask 255.255.255.0  broadcast 10.10.32.255
inet6 fe80::f816:3eff:fed4:e7d  prefixlen 64  scopeid 0x20<link>
ether fa:16:3e:d4:0e:7d  txqueuelen 0  (Ethernet)
RX packets 0  bytes 0 (0.0 B)
RX errors 0  dropped 0  overruns 0  frame 0
TX packets 28  bytes 1704 (1.6 KiB)
TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0

qr-f0fd3793-4e: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1500
inet 30.0.0.1  netmask 255.255.255.0  broadcast 30.0.0.255
inet6 fe80::f816:3eff:fea9:5422  prefixlen 64  scopeid 0x20<link>
ether fa:16:3e:a9:54:22  txqueuelen 0  (Ethernet)
RX packets 68948  bytes 7192868 (6.8 MiB)
RX errors 0  dropped 0  overruns 0  frame 0
TX packets 68859  bytes 7185051 (6.8 MiB)
TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0

[root@ip-192-169-142-52 ~(keystone_admin)]# ip netns exec qrouter-a2f4c7e8-9b63-4ed3-8d9a-faa6158d253b route -n

Kernel IP routing table
Destination     Gateway         Genmask         Flags Metric Ref    Use Iface
0.0.0.0         10.10.32.1      0.0.0.0         UG    0      0        0 qg-08ccc224-1e
10.10.32.0      0.0.0.0         255.255.255.0   U     0      0        0 qg-08ccc224-1e
30.0.0.0        0.0.0.0         255.255.255.0   U     0      0        0 qr-f0fd3793-4e

*******************************************************************************************************
Now verify the Neutron router connecting the qrouter namespace (having the interface with tag 7) and the qdhcp namespace, which was created to launch the instances.
RoutesDSA has been created with an external gateway to vlan200 and an internal interface to subnet private07 (30.0.0.0/24), with DHCP enabled and a DNS server defined.
vlan157 and vlan172 are configured as external networks for their corresponding routers as well.
*******************************************************************************************************

[root@ip-192-169-142-52 ~(keystone_admin)]# neutron router-list | grep RoutesDSA

| a2f4c7e8-9b63-4ed3-8d9a-faa6158d253b | RoutesDSA  | {“network_id”: “3dc90ff7-b1df-4079-aca1-cceedb23f440“, “enable_snat”: true, “external_fixed_ips”: [{“subnet_id”: “60181211-ea36-4e4e-8781-f13f743baa19”, “ip_address”: “10.10.32.101”}]} | False       | False |

[root@ip-192-169-142-52 ~(keystone_admin)]# ip netns | grep a2f4c7e8-9b63-4ed3-8d9a-faa6158d253b
qrouter-a2f4c7e8-9b63-4ed3-8d9a-faa6158d253b

[root@ip-192-169-142-52 ~(keystone_admin)]# ip netns | grep 3dc90ff7-b1df-4079-aca1-cceedb23f440
qdhcp-3dc90ff7-b1df-4079-aca1-cceedb23f440

[root@ip-192-169-142-52 ~(keystone_admin)]# ip netns exec qdhcp-3dc90ff7-b1df-4079-aca1-cceedb23f440 ifconfig
lo: flags=73<UP,LOOPBACK,RUNNING>  mtu 65536
inet 127.0.0.1  netmask 255.0.0.0
inet6 ::1  prefixlen 128  scopeid 0x10<host>
loop  txqueuelen 0  (Local Loopback)
RX packets 0  bytes 0 (0.0 B)
RX errors 0  dropped 0  overruns 0  frame 0
TX packets 0  bytes 0 (0.0 B)
TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0
tapb1435e62-8b: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1500
inet 10.10.32.100  netmask 255.255.255.0  broadcast 10.10.32.255
inet6 fe80::f816:3eff:fee3:19f2  prefixlen 64  scopeid 0x20<link>
ether fa:16:3e:e3:19:f2  txqueuelen 0  (Ethernet)
RX packets 27  bytes 1526 (1.4 KiB)
RX errors 0  dropped 0  overruns 0  frame 0
TX packets 8  bytes 648 (648.0 B)
TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0

**************************
Finally run:-
**************************

[root@ip-192-169-142-52 ~(keystone_admin)]# neutron router-port-list RoutesDSA

+————————————–+——+——————-+————————————————————————————-+
| id                                   | name | mac_address       | fixed_ips                                                                           |
+————————————–+——+——————-+————————————————————————————-+
| 08ccc224-1e23-491a-8eec-c4db0ec00f02 |      | fa:16:3e:d4:0e:7d | {“subnet_id”: “60181211-ea36-4e4e-8781-f13f743baa19“, “ip_address”: “10.10.32.101”} |
| f0fd3793-4e5a-467a-bd3c-e87bc9063d26 |      | fa:16:3e:a9:54:22 | {“subnet_id”: “0c962484-3e48-4d86-a17f-16b0b1e5fc4d“, “ip_address”: “30.0.0.1”}     |
+————————————–+——+——————-+————————————————————————————-+

[root@ip-192-169-142-52 ~(keystone_admin)]# neutron subnet-list | grep 0c962484-3e48-4d86-a17f-16b0b1e5fc4d
| 0c962484-3e48-4d86-a17f-16b0b1e5fc4d |               | 30.0.0.0/24   | {“start”: “30.0.0.2”, “end”: “30.0.0.254”}       |
[root@ip-192-169-142-52 ~(keystone_admin)]# neutron subnet-list | grep 60181211-ea36-4e4e-8781-f13f743baa19
| 60181211-ea36-4e4e-8781-f13f743baa19 | sub-vlan200   | 10.10.32.0/24 | {“start”: “10.10.32.100”, “end”: “10.10.32.200”} |

************************************
OVS Flows at br-vlan3
************************************

[root@ip-192-169-142-52 ~(keystone_admin)]# ovs-ofctl dump-flows br-vlan3 | grep NORMAL

cookie=0x0, duration=15793.182s, table=0, n_packets=33, n_bytes=2074, idle_age=14376, priority=4,in_port=2,dl_vlan=7 actions=mod_vlan_vid:200,NORMAL
cookie=0x0, duration=16442.902s, table=0, n_packets=8221, n_bytes=427492, idle_age=1, priority=0 actions=NORMAL

[root@ip-192-169-142-52 ~(keystone_admin)]# ovs-ofctl dump-flows br-vlan3 | grep NORMAL
cookie=0x0, duration=15796.300s, table=0, n_packets=33, n_bytes=2074, idle_age=14379, priority=4,in_port=2,dl_vlan=7 actions=mod_vlan_vid:200,NORMAL
cookie=0x0, duration=16446.020s, table=0, n_packets=8223, n_bytes=427596, idle_age=0, priority=0 actions=NORMAL

************************************************************
OVS flows for the {phy-br-vlan3,int-br-vlan3} patch-port pair
************************************************************

[root@ip-192-169-142-52 ~(keystone_admin)]# ovs-ofctl show br-vlan3 | grep phy-br-vlan3
2(phy-br-vlan3): addr:da:e4:fb:ba:8b:1a

[root@ip-192-169-142-52 ~(keystone_admin)]# ovs-ofctl show br-int | grep int-br-vlan3
19(int-br-vlan3): addr:b2:a9:9e:89:07:1b

[root@ip-192-169-142-52 ~(keystone_admin)]# ovs-ofctl dump-ports br-vlan3 2
OFPST_PORT reply (xid=0x2): 1 ports
port  2: rx pkts=6977, bytes=304270, drop=0, errs=0, frame=0, over=0, crc=0
tx pkts=55, bytes=7037, drop=0, errs=0, coll=0

[root@ip-192-169-142-52 ~(keystone_admin)]# ovs-ofctl dump-ports br-vlan3 2

OFPST_PORT reply (xid=0x2): 1 ports
port  2: rx pkts=6979, bytes=304354, drop=0, errs=0, frame=0, over=0, crc=0
tx pkts=55, bytes=7037, drop=0, errs=0, coll=0

[root@ip-192-169-142-52 ~(keystone_admin)]# ovs-ofctl dump-ports br-vlan3 2
OFPST_PORT reply (xid=0x2): 1 ports
port  2: rx pkts=6981, bytes=304438, drop=0, errs=0, frame=0, over=0, crc=0
tx pkts=55, bytes=7037, drop=0, errs=0, coll=0
[root@ip-192-169-142-52 ~(keystone_admin)]# ovs-ofctl dump-ports br-int 19
OFPST_PORT reply (xid=0x2): 1 ports
port 19: rx pkts=55, bytes=7037, drop=0, errs=0, frame=0, over=0, crc=0
tx pkts=6991, bytes=304858, drop=0, errs=0, coll=0

[root@ip-192-169-142-52 ~(keystone_admin)]# ovs-ofctl dump-ports br-int 19
OFPST_PORT reply (xid=0x2): 1 ports
port 19: rx pkts=55, bytes=7037, drop=0, errs=0, frame=0, over=0, crc=0
tx pkts=6994, bytes=304984, drop=0, errs=0, coll=0

[root@ip-192-169-142-52 ~(keystone_admin)]# ovs-ofctl dump-ports br-int 19
OFPST_PORT reply (xid=0x2): 1 ports
port 19: rx pkts=55, bytes=7037, drop=0, errs=0, frame=0, over=0, crc=0
tx pkts=7450, bytes=324136, drop=0, errs=0, coll=0
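
For completeness, the reverse translation (wire VLAN 200 back to local tag 7) is programmed on br-int against the int-br-vlan3 patch port (OF port 19 above). A hedged check of that flow:

# ovs-ofctl dump-flows br-int | grep "in_port=19"

The matching entry is expected to look roughly like priority=3,in_port=19,dl_vlan=200 actions=mod_vlan_vid:7,NORMAL.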

****************************************************************
Another OVS flow test on br-int, for vlan157
****************************************************************

[root@ip-192-169-142-52 ~(keystone_admin)]# ip netns exec qdhcp-4481aee1-ef86-4997-bf52-e435aafb9c20 ssh -i oskeyvls.pem cirros@10.10.10.101

$ ping -c 5 10.10.10.108

PING 10.10.10.108 (10.10.10.108): 56 data bytes
64 bytes from 10.10.10.108: seq=0 ttl=63 time=0.706 ms
64 bytes from 10.10.10.108: seq=1 ttl=63 time=0.772 ms
64 bytes from 10.10.10.108: seq=2 ttl=63 time=0.734 ms
64 bytes from 10.10.10.108: seq=3 ttl=63 time=0.740 ms
64 bytes from 10.10.10.108: seq=4 ttl=63 time=0.785 ms

--- 10.10.10.108 ping statistics ---

5 packets transmitted, 5 packets received, 0% packet loss

round-trip min/avg/max = 0.706/0.747/0.785 ms

******************************************************************************
Testing VM1<=>VM2 via floating IPs on external vlan net 10.10.10.0/24
*******************************************************************************

[root@ip-192-169-142-52 ~(keystone_admin)]# nova list --all

+————————————–+————–+———————————-+——–+————+————-+———————————+
| ID                                   | Name         | Tenant ID                        | Status | Task State | Power State | Networks                        |
+————————————–+————–+———————————-+——–+————+————-+———————————+
| a3d5ecf6-0fdb-4aa3-815f-171871eccb77 | CirrOSDevs01 | f16de8f8497d4f92961018ed836dee19 | ACTIVE | –          | Running     | private=40.0.0.17, 10.10.10.101 |
| 1b65f5db-d7d5-4e92-9a7c-60e7866ff8e5 | CirrOSDevs02 | f16de8f8497d4f92961018ed836dee19 | ACTIVE | –          | Running     | private=40.0.0.18, 10.10.10.110 |
| 46b7dad1-3a7d-4d94-8407-a654cca42750 | VF23Devs01   | f16de8f8497d4f92961018ed836dee19 | ACTIVE | –          | Running     | private=40.0.0.19, 10.10.10.111 |
+————————————–+————–+———————————-+——–+————+————-+———————————+

[root@ip-192-169-142-52 ~(keystone_admin)]# ip netns

qdhcp-4481aee1-ef86-4997-bf52-e435aafb9c20
qdhcp-b41e4d36-9a63-4631-abb0-6436f2f50e2e
qrouter-c1900dab-447f-4f87-80e7-b4c8ca087d28

[root@ip-192-169-142-52 ~(keystone_admin)]# ip netns exec qdhcp-4481aee1-ef86-4997-bf52-e435aafb9c20 ssh cirros@10.10.10.110

The authenticity of host ‘10.10.10.110 (10.10.10.110)’ can’t be established.
RSA key fingerprint is b8:d3:ec:10:70:a7:da:d4:50:13:a8:2d:01:ba:e4:83.
Are you sure you want to continue connecting (yes/no)? yes
Warning: Permanently added ‘10.10.10.110’ (RSA) to the list of known hosts.
cirros@10.10.10.110’s password:

$ ifconfig

eth0      Link encap:Ethernet  HWaddr FA:16:3E:F1:6E:E5
inet addr:40.0.0.18  Bcast:40.0.0.255  Mask:255.255.255.0
inet6 addr: fe80::f816:3eff:fef1:6ee5/64 Scope:Link
UP BROADCAST RUNNING MULTICAST  MTU:1400  Metric:1
RX packets:367 errors:0 dropped:0 overruns:0 frame:0
TX packets:291 errors:0 dropped:0 overruns:0 carrier:0
collisions:0 txqueuelen:1000
RX bytes:36442 (35.5 KiB)  TX bytes:32019 (31.2 KiB)

lo        Link encap:Local Loopback
inet addr:127.0.0.1  Mask:255.0.0.0
inet6 addr: ::1/128 Scope:Host
UP LOOPBACK RUNNING  MTU:16436  Metric:1
RX packets:0 errors:0 dropped:0 overruns:0 frame:0
TX packets:0 errors:0 dropped:0 overruns:0 carrier:0
collisions:0 txqueuelen:0
RX bytes:0 (0.0 B)  TX bytes:0 (0.0 B)

$ curl http://169.254.169.254/latest/meta-data/public-ipv4
10.10.10.110$

$ ssh fedora@10.10.10.111
Host ‘10.10.10.111’ is not in the trusted hosts file.
(fingerprint md5 23:c0:fb:fd:74:80:2f:12:d3:09:2f:9e:dd:19:f1:74)
Do you want to continue connecting? (y/n) y
fedora@10.10.10.111’s password:
Last login: Sun Dec 13 15:52:43 2015 from 10.10.10.101
[fedora@vf23devs01 ~]$ ifconfig
eth0: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1400
inet 40.0.0.19  netmask 255.255.255.0  broadcast 40.0.0.255
inet6 fe80::f816:3eff:fea4:1a52  prefixlen 64  scopeid 0x20<link>
ether fa:16:3e:a4:1a:52  txqueuelen 1000  (Ethernet)
RX packets 283  bytes 30213 (29.5 KiB)
RX errors 0  dropped 0  overruns 0  frame 0
TX packets 303  bytes 35022 (34.2 KiB)
TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0
lo: flags=73<UP,LOOPBACK,RUNNING>  mtu 65536
inet 127.0.0.1  netmask 255.0.0.0
inet6 ::1  prefixlen 128  scopeid 0x10<host>
loop  txqueuelen 0  (Local Loopback)
RX packets 0  bytes 0 (0.0 B)
RX errors 0  dropped 0  overruns 0  frame 0
TX packets 0  bytes 0 (0.0 B)
TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0

[fedora@vf23devs01 ~]$ curl http://169.254.169.254/latest/meta-data/public-ipv4
10.10.10.111[fedora@vf23devs01 ~]$
[fedora@vf23devs01 ~]$ curl http://169.254.169.254/latest/meta-data/instance-id
i-00000009[fedora@vf23devs01 ~]$

[fedora@vf23devs01 ~]$


Creating Servers via REST API on RDO Mitaka via Chrome Advanced REST Client

April 21, 2016

In the post below we demonstrate the Chrome Advanced REST Client successfully issuing REST API POST requests creating RDO Mitaka servers (VMs), as well as getting information about servers via GET requests. All required HTTP headers are configured in the GUI environment, as well as the body request field for server creation.

The version of the Keystone API installed is v2.0.

Following [ 1 ], to authenticate access to OpenStack services you are supposed first of all to issue an authentication request. If the request succeeds, the server returns an authentication token.

Source keystonerc_demo on the Controller or on a Compute node; it doesn't
matter. Then run this cURL command to request a token:

curl -s -X POST http://192.169.142.54:5000/v2.0/tokens \
-H "Content-Type: application/json" \
-d '{"auth": {"tenantName": "'"$OS_TENANT_NAME"'", "passwordCredentials": {"username": "'"$OS_USERNAME"'", "password": "'"$OS_PASSWORD"'"}}}' \
| python -m json.tool

to get the authentication token, and scroll down to the bottom :-

"token": {
    "audit_ids": [
        "ce1JojlRSiO6TmMTDW3QNQ"
    ],
    "expires": "2016-04-21T18:26:28Z",
    "id": "0cfb3ec7a10c4f549a3dc138cf8a270a",   <== X-Auth-Token
    "issued_at": "2016-04-21T17:26:28.246724Z",
    "tenant": {
        "description": "default tenant",
        "enabled": true,
        "id": "1578b57cfd8d43278098c5266f64e49f",   <=== Demo tenant's id
        "name": "demo"
    }
},
"user": {
    "id": "8e1e992eee474c3ab7a08ffde678e35b",
    "name": "demo",
    "roles": [
        {
            "name": "heat_stack_owner"
        },
        {
            "name": "_member_"
        }
    ],
    "roles_links": [],
    "username": "demo"
}
}
}
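
The same output can be mined by a script; a small sketch capturing the token id and tenant id into shell variables (same endpoint and credentials as above, python2 assumed present as on stock CentOS 7):

AUTH=$(curl -s -X POST http://192.169.142.54:5000/v2.0/tokens \
-H "Content-Type: application/json" \
-d '{"auth": {"tenantName": "'"$OS_TENANT_NAME"'", "passwordCredentials": {"username": "'"$OS_USERNAME"'", "password": "'"$OS_PASSWORD"'"}}}')
TOKEN=$(echo "$AUTH" | python -c 'import sys,json; print json.load(sys.stdin)["access"]["token"]["id"]')
TENANT=$(echo "$AUTH" | python -c 'import sys,json; print json.load(sys.stdin)["access"]["token"]["tenant"]["id"]')
echo $TOKEN $TENANT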

********************************************************************************************
The original request to obtain a token might be issued via the Chrome Advanced REST Client as well
********************************************************************************************

Scrolling down shows the token that was returned and demo tenant's id

Required output

{
  "access": {
    "token": {
      "issued_at": "2016-04-21T21:56:52.668252Z",
      "expires": "2016-04-21T22:56:52Z",
      "id": "dd119ea14e97416b834ca72aab7f8b5a",
      "tenant": {
        "description": "default tenant",
        "enabled": true,
        "id": "1578b57cfd8d43278098c5266f64e49f",
        "name": "demo"
      }

*****************************************************************************
Next, create an ssh keypair via CLI or dashboard for the particular tenant :-
*****************************************************************************
nova keypair-add oskeymitaka0417 > oskeymitaka0417.pem
chmod 600 *.pem

******************************************************************************************
Below are a couple of sample REST API POST requests starting servers, as they are usually issued and described.
******************************************************************************************

curl -g -i -X POST http://192.169.142.54:8774/v2/1578b57cfd8d43278098c5266f64e49f/servers -H "User-Agent: python-novaclient" -H "Content-Type: application/json" -H "Accept: application/json" -H "X-Auth-Token: 0cfb3ec7a10c4f549a3dc138cf8a270a" -d '{"server": {"name": "CirrOSDevs03", "key_name" : "oskeymitaka0417", "imageRef": "2e148cd0-7dac-49a7-8a79-2efddbd83852", "flavorRef": "1", "max_count": 1, "min_count": 1, "networks": [{"uuid": "e7c90970-c304-4f51-9d65-4be42318487c"}], "security_groups": [{"name": "default"}]}}'

curl -g -i -X POST http://192.169.142.54:8774/v2/1578b57cfd8d43278098c5266f64e49f/servers -H "User-Agent: python-novaclient" -H "Content-Type: application/json" -H "Accept: application/json" -H "X-Auth-Token: 0cfb3ec7a10c4f549a3dc138cf8a270a" -d '{"server": {"name": "VF23Devs03", "key_name" : "oskeymitaka0417", "imageRef": "5b00b1a8-30d1-4e9d-bf7d-5f1abed5173b", "flavorRef": "2", "max_count": 1, "min_count": 1, "networks": [{"uuid": "e7c90970-c304-4f51-9d65-4be42318487c"}], "security_groups": [{"name": "default"}]}}'

**********************************************************************************
We are going to initiate the REST API POST requests creating servers
via the Chrome Advanced REST Client
**********************************************************************************

[root@ip-192-169-142-54 ~(keystone_demo)]# glance image-list

+————————————–+———————–+
| ID                                   | Name                  |
+————————————–+———————–+
| 28b590fa-05c8-4706-893a-54efc4ca8cd6 | cirros                |
| 9c78c3da-b25b-4b26-9d24-514185e99c00 | Ubuntu1510Cloud-image |
| a050a122-a1dc-40d0-883f-25617e452d90 | VF23Cloud-image       |
+————————————–+———————–+

[root@ip-192-169-142-54 ~(keystone_demo)]# neutron net-list
+————————————–+————–+—————————————-+
| id                                   | name         | subnets                                |
+————————————–+————–+—————————————-+
| 43daa7c3-4e04-4661-8e78-6634b06d63f3 | public       | 71e0197b-fe9a-4643-b25f-65424d169492   |
|                                      |              | 192.169.142.0/24                       |
| 292a2f21-70af-48ef-b100-c0639a8ffb22 | demo_network | d7aa6f0f-33ba-430d-a409-bd673bed7060   |
|                                      |              | 50.0.0.0/24                            |
+————————————–+————–+—————————————-+

First, the required headers were created in the corresponding fields, and the
following fragment was placed in the Raw Payload area of the Chrome client

{"server":
{"name": "VF23Devs03",
"key_name" : "oskeymitaka0420",
"imageRef" : "a050a122-a1dc-40d0-883f-25617e452d90",
"flavorRef": "2",
"max_count": 1,
"min_count": 1,
"networks": [{"uuid": "292a2f21-70af-48ef-b100-c0639a8ffb22"}],
"security_groups": [{"name": "default"}]
}
}

Launching Fedora 23 Server :-

Next, an Ubuntu 15.10 server (VM) will be created by changing the image id in the Advanced REST Client GUI environment

Make sure that servers have been created and are currently up and running

***************************************************************************************
Now launch the Chrome REST Client again for server verification via a GET request
***************************************************************************************
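
The CLI equivalent of that GET request is a single cURL call against the servers/detail endpoint, reusing the tenant id and token obtained above (a sketch; re-issue the token request first if the token has expired):

curl -g -s -X GET http://192.169.142.54:8774/v2/1578b57cfd8d43278098c5266f64e49f/servers/detail \
-H "Accept: application/json" -H "X-Auth-Token: 0cfb3ec7a10c4f549a3dc138cf8a270a" \
| python -m json.tool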


Neutron workflow for Docker Hypervisor running on DVR Cluster RDO Mitaka in an appropriate amount of detail && HA support for Glance storage used to load nova-docker instances

April 6, 2016

Why does DVR come into play?

This refreshes in memory a similar problem with the Nova-Docker driver (Kilo), with which I had the same kind of problems (VXLAN connection Controller <==> Compute) on F22 (OVS 2.4.0), while the same driver worked fine on CentOS 7.1 (OVS 2.3.1). My guess is that the Nova-Docker driver has a problem with OVS 2.4.0, no matter which of the stable/kilo, stable/liberty, or stable/mitaka branches is checked out for the driver build.

I have to note that the issue is related specifically to the ML2&OVS&VXLAN setup; an RDO Mitaka deployment with ML2&OVS&VLAN works with Nova-Docker (stable/mitaka) with no problems.

I have not run ovs-ofctl dump-flows at the br-tun bridges and so on, because even having proved the malfunction I cannot file it to BZ. The Nova-Docker driver is not packaged for RDO, so it is upstream stuff, and upstream won't consider an issue which involves building the driver from source on RDO Mitaka (RC1).

Thus, as a quick and efficient workaround, I suggest a DVR deployment setup. It results in south-north traffic being forwarded right away from the host running the Docker hypervisor to the Internet and vice versa, due to the basic "fg" functionality (the outgoing interface of the fip namespace, residing on the Compute node having the L3 agent running in "dvr" agent_mode).

**************************
Procedure in details
**************************

First install the repositories for RDO Mitaka (the most recent build that passed CI):
# yum -y install yum-plugin-priorities
# cd /etc/yum.repos.d
# curl -O https://trunk.rdoproject.org/centos7-mitaka/delorean-deps.repo
# curl -O https://trunk.rdoproject.org/centos7-mitaka/current-passed-ci/delorean.repo
# yum -y install openstack-packstack (Controller only)

Now proceed as follows :-

1. Here is the answer file to deploy the pre-DVR cluster
2. See the pre-deployment actions to be undertaken on the Controller/Storage node

Before the DVR setup, switch the Glance back end to Swift (Swift is configured in the answer file as follows)

CONFIG_SWIFT_STORAGES=/dev/vdb1,/dev/vdc1,/dev/vdd1
CONFIG_SWIFT_STORAGE_ZONES=3
CONFIG_SWIFT_STORAGE_REPLICAS=3
CONFIG_SWIFT_STORAGE_FSTYPE=xfs
CONFIG_SWIFT_HASH=a55607bff10c4210
CONFIG_SWIFT_STORAGE_SIZE=10G

Upon setup completion, on the storage node :-

[root@ip-192-169-142-127 ~]# df -h
Filesystem               Size  Used Avail Use% Mounted on
/dev/mapper/centos-root   45G  5.3G   40G  12% /
devtmpfs                 2.8G     0  2.8G   0% /dev
tmpfs                    2.8G  204K  2.8G   1% /dev/shm
tmpfs                    2.8G   25M  2.8G   1% /run
tmpfs                    2.8G     0  2.8G   0% /sys/fs/cgroup
/dev/vdc1                 10G  2.5G  7.5G  25% /srv/node/vdc1
/dev/vdb1                 10G  2.5G  7.5G  25% /srv/node/vdb1
/dev/vdd1                 10G  2.5G  7.5G  25% /srv/node/vdd1

/dev/vda1                497M  211M  286M  43% /boot
tmpfs                    567M  4.0K  567M   1% /run/user/42
tmpfs                    567M  8.0K  567M   1% /run/user/1000

****************************
Update  glance-api.conf
****************************

[glance_store]
stores = swift
default_store = swift
swift_store_auth_address = http://192.169.142.127:5000/v2.0/
swift_store_user = services:glance
swift_store_key = f6a9398960534797 

swift_store_create_container_on_put = True
os_region_name=RegionOne

# openstack-service restart glance

# keystone user-role-add --tenant_id=$UUID_SERVICES_TENANT \
--user=$UUID_GLANCE_USER --role=$UUID_ResellerAdmin_ROLE

The value f6a9398960534797 corresponds to CONFIG_GLANCE_KS_PW in the answer file, i.e. the Keystone glance password used for authentication
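
If the three UUIDs are not at hand, they can be resolved with the keystone v2 CLI before running user-role-add; a sketch (tenant/user/role names as in a stock packstack install):

UUID_SERVICES_TENANT=$(keystone tenant-list | awk '/ services / {print $2}')
UUID_GLANCE_USER=$(keystone user-list | awk '/ glance / {print $2}')
UUID_ResellerAdmin_ROLE=$(keystone role-list | awk '/ ResellerAdmin / {print $2}')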

2. Convert the cluster to DVR as advised in “RDO Liberty DVR Neutron workflow on CentOS 7.2”:
http://dbaxps.blogspot.com/2015/10/rdo-liberty-rc-dvr-deployment.html
Just one note: on RDO Mitaka, run on each compute node

# ovs-vsctl add-br br-ex
# ovs-vsctl add-port br-ex eth0

Then configure

***********************************************************
On Controller (X=2) and Computes X=(3,4) update :-
***********************************************************

# cat ifcfg-br-ex
DEVICE="br-ex"
BOOTPROTO="static"
IPADDR="192.169.142.1(X)7"
NETMASK="255.255.255.0"
DNS1="83.221.202.254"
BROADCAST="192.169.142.255"
GATEWAY="192.169.142.1"
NM_CONTROLLED="no"
DEFROUTE="yes"
IPV4_FAILURE_FATAL="yes"
IPV6INIT=no
ONBOOT="yes"
TYPE="OVSIntPort"
OVS_BRIDGE=br-ex
DEVICETYPE="ovs"

# cat ifcfg-eth0
DEVICE="eth0"
ONBOOT="yes"
TYPE="OVSPort"
DEVICETYPE="ovs"
OVS_BRIDGE=br-ex
NM_CONTROLLED=no
IPV6INIT=no

***************************
Then run script
***************************

#!/bin/bash -x
chkconfig network on
systemctl stop NetworkManager
systemctl disable NetworkManager
service network restart

Reboot node.

**********************************************
Nova-Docker Setup on each Compute
**********************************************

# curl -sSL https://get.docker.com/ | sh
# usermod -aG docker nova      ( this does not seem to help set 660 on docker.sock )
# systemctl start docker
# systemctl enable docker
# chmod 666  /var/run/docker.sock (add to /etc/rc.d/rc.local)
# easy_install pip
# git clone -b stable/mitaka   https://github.com/openstack/nova-docker

*******************
Driver build
*******************

# cd nova-docker
# pip install -r requirements.txt
# python setup.py install

********************************************
Switch nova-compute to DockerDriver
********************************************
vi /etc/nova/nova.conf
compute_driver=novadocker.virt.docker.DockerDriver

******************************************************************
Next on Controller/Network Node and each Compute Node
******************************************************************

mkdir /etc/nova/rootwrap.d
vi /etc/nova/rootwrap.d/docker.filters

[Filters]
# nova/virt/docker/driver.py: ‘ln’, ‘-sf’, ‘/var/run/netns/.*’
ln: CommandFilter, /bin/ln, root

**********************************************************
Nova Compute Service restart on Compute Nodes
**********************************************************
# systemctl restart openstack-nova-compute
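
A quick, hypothetical sanity check that the switch took effect; the hypervisor type reported by Nova should now read "docker":

# nova hypervisor-list
# nova hypervisor-show <hypervisor-id> | grep hypervisor_type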

***********************************************
Glance API Service restart on Controller
**********************************************
vi /etc/glance/glance-api.conf

container_formats=ami,ari,aki,bare,ovf,ova,docker
# systemctl restart openstack-glance-api

**************************************************
Network flow on Compute in a bit more details
**************************************************

When a floating IP gets assigned to a VM, what actually happens is the following ( [1] ) :-

The same explanation may be found in ( [4] ), only not in a step-by-step manner; in particular, it contains a detailed description of the reverse network flow and of the ARP proxy functionality.

1. The fip- namespace is created on the local compute node
(if it does not already exist)
2. A new port rfp- gets created on the qrouter- namespace
(if it does not already exist)
3. The rfp- port on the qrouter namespace is assigned the associated floating IP address
4. The fpr- port on the fip namespace gets created and linked via a point-to-point network to the rfp- port of the qrouter namespace
5. The fip namespace gateway port fg- is assigned an additional address
from the public network range to set up the ARP proxy point
6. The fg- port is configured as a Proxy ARP

*********************
Flow itself  ( [1] ):
*********************

1. The VM, initiating transmission, sends a packet via the default gateway,
and br-int forwards the traffic to the local DVR gateway port (qr-).
2. DVR routes the packet using the routing table to the rfp- port.
3. A NAT rule is applied to the packet, replacing the source IP of the VM
with the assigned floating IP, and then it gets sent through the rfp- port,
which connects to the fip namespace via the point-to-point network
169.254.31.28/31.
4. The packet is received on the fpr- port in the fip namespace
and then routed outside through the fg- port.
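
Most of the objects named in these steps can be inspected directly on the Compute node; a hedged sketch (the namespace IDs are placeholders, to be taken from `ip netns` on the host):

# ip netns exec qrouter-<router-id> ip addr show | grep rfp-        # rfp- leg carrying the floating IP
# ip netns exec fip-<ext-net-id> ip addr show | grep -E "fpr-|fg-"  # fpr- peer and the fg- Proxy ARP port
# ip netns exec qrouter-<router-id> iptables -t nat -S | grep <floating-ip>   # 1:1 NAT rules for the VM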

Screenshot from 2016-04-06 22-17-32

[root@ip-192-169-142-137 ~(keystone_demo)]# nova list

+————————————–+—————-+——–+————+————-+—————————————–+
| ID                                   | Name           | Status | Task State | Power State | Networks                                |
+————————————–+—————-+——–+————+————-+—————————————–+
| 957814c1-834e-47e5-9236-ef228455fe36 | UbuntuDevs01   | ACTIVE | –          | Running     | demo_network=50.0.0.12, 192.169.142.151 |
| 65dd55b9-23ea-4e5b-aeed-4db259436df2 | derbyGlassfish | ACTIVE | –          | Running     | demo_network=50.0.0.13, 192.169.142.153 |
| f9311d57-4352-48a6-a042-b36393e0af7a | fedora22docker | ACTIVE | –          | Running     | demo_network=50.0.0.14, 192.169.142.154 |
+————————————–+—————-+——–+————+————-+—————————————–+

[root@ip-192-169-142-137 ~(keystone_demo)]# docker ps

CONTAINER ID        IMAGE                      COMMAND                  CREATED             STATUS              PORTS               NAMES

336679f5bf7a        kumarpraveen/fedora-sshd   “/usr/bin/supervisord”   About an hour ago   Up About an hour                        nova-f9311d57-4352-48a6-a042-b36393e0af7a
8bb2ce01e671        derby/docker-glassfish41   “/sbin/my_init”          2 hours ago         Up 2 hours                              nova-65dd55b9-23ea-4e5b-aeed-4db259436df2
fe5eb55a4c9d        rastasheep/ubuntu-sshd     “/usr/sbin/sshd -D”      3 hours ago         Up 3 hours                              nova-957814c1-834e-47e5-9236-ef228455fe36

[root@ip-192-169-142-137 ~(keystone_demo)]# nova show f9311d57-4352-48a6-a042-b36393e0af7a | grep image
| image                                | kumarpraveen/fedora-sshd (93345f0b-fcbd-41e4-b335-a4ecb8b59e73) |
[root@ip-192-169-142-137 ~(keystone_demo)]# nova show 65dd55b9-23ea-4e5b-aeed-4db259436df2 | grep image
| image                                | derby/docker-glassfish41 (9f2cd9bc-7840-47c1-81e8-3bc0f76426ec) |
[root@ip-192-169-142-137 ~(keystone_demo)]# nova show 957814c1-834e-47e5-9236-ef228455fe36 | grep image
| image                                | rastasheep/ubuntu-sshd (29c057f1-3c7d-43e3-80e6-dc8fef1ea035) |

[root@ip-192-169-142-137 ~(keystone_demo)]# . keystonerc_glance
[root@ip-192-169-142-137 ~(keystone_glance)]# glance image-list

+--------------------------------------+--------------------------+
| ID                                   | Name                     |
+--------------------------------------+--------------------------+
| 27551b28-6df7-4b0e-a0c8-322b416092c1 | cirros                   |
| 9f2cd9bc-7840-47c1-81e8-3bc0f76426ec | derby/docker-glassfish41 |
| 93345f0b-fcbd-41e4-b335-a4ecb8b59e73 | kumarpraveen/fedora-sshd |
| 29c057f1-3c7d-43e3-80e6-dc8fef1ea035 | rastasheep/ubuntu-sshd   |
+--------------------------------------+--------------------------+

[root@ip-192-169-142-137 ~(keystone_glance)]# swift list glance

29c057f1-3c7d-43e3-80e6-dc8fef1ea035
29c057f1-3c7d-43e3-80e6-dc8fef1ea035-00001
29c057f1-3c7d-43e3-80e6-dc8fef1ea035-00002

93345f0b-fcbd-41e4-b335-a4ecb8b59e73
93345f0b-fcbd-41e4-b335-a4ecb8b59e73-00001
93345f0b-fcbd-41e4-b335-a4ecb8b59e73-00002
93345f0b-fcbd-41e4-b335-a4ecb8b59e73-00003
93345f0b-fcbd-41e4-b335-a4ecb8b59e73-00004
93345f0b-fcbd-41e4-b335-a4ecb8b59e73-00005

9f2cd9bc-7840-47c1-81e8-3bc0f76426ec
9f2cd9bc-7840-47c1-81e8-3bc0f76426ec-00001
9f2cd9bc-7840-47c1-81e8-3bc0f76426ec-00002
9f2cd9bc-7840-47c1-81e8-3bc0f76426ec-00003
9f2cd9bc-7840-47c1-81e8-3bc0f76426ec-00004
9f2cd9bc-7840-47c1-81e8-3bc0f76426ec-00005
9f2cd9bc-7840-47c1-81e8-3bc0f76426ec-00006
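
The chunked object names above indicate glance is using the swift backend; assuming the usual glance_store option names, this can be confirmed on the Controller:

# grep -E '^(stores|default_store|swift_store_container)' /etc/glance/glance-api.conf
default_store=swift
swift_store_container=glance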

Screenshot from 2016-04-06 18-08-30

Screenshot from 2016-04-06 18-08-46

Screenshot from 2016-04-06 18-09-28


Setting up Nova-Docker on Multi Node DVR Cluster RDO Mitaka

April 1, 2016

UPDATE 04/03/2016
   In the meantime, it is better to use the repositories for RC1
   rather than the Delorean trunks.
END UPDATE

DVR && Nova-Docker Driver (stable/mitaka) tested fine on RDO Mitaka (build 20160329), with none of the issues described in the previous post for RDO Liberty.
So, create a DVR deployment with Controller/Network + N(*)Compute Nodes. Switch to the Docker hypervisor on each Compute Node and make the required updates to glance and the filters file on the Controller. You are all set. The FIPs of Nova-Docker instances are available from outside via the Neutron Distributed Router (DNAT), using the "fg" interface (fip-namespace) residing on the same host as the Docker hypervisor. South-North traffic does not involve VXLAN tunneling on DVR systems.

Why does DVR come into the picture?

This refreshes in memory a similar problem with the Nova-Docker Driver (Kilo),
with which I had the same kind of problems (VXLAN connection Controller <==> Compute)
on F22 (OVS 2.4.0), while the same driver worked fine on CentOS 7.1 (OVS 2.3.1).
My guess is that the Nova-Docker driver has a problem with OVS 2.4.0,
no matter which of the stable/kilo, stable/liberty, stable/mitaka branches
has been checked out for the driver build.

I have not run ovs-ofctl dump-flows on the br-tun bridges and so on,
because even having proved the malfunction I cannot file it to BZ.
The Nova-Docker Driver is not packaged for RDO, so it is upstream stuff,
and upstream won't consider an issue which involves building the driver from source
on RDO Mitaka (RC1).

Thus, as a quick and efficient workaround, I suggest a DVR deployment setup,
to kill two birds with one stone. It will cause South-North traffic
to be forwarded right away from the host running the Docker hypervisor to the Internet,
and vice versa, due to the basic "fg" functionality (the outgoing interface of
the fip-namespace, residing on the Compute node that has the L3 agent running in "dvr"
agent_mode).
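
To double-check that a node really runs in the expected DVR mode (standard RDO file locations assumed):

# grep ^agent_mode /etc/neutron/l3_agent.ini
agent_mode=dvr        # dvr_snat on the Controller/Network node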

dvr273

**************************
Procedure in details
**************************

First, install the repositories for RDO Mitaka (the most recent build that passed CI):-
# yum -y install yum-plugin-priorities
# cd /etc/yum.repos.d
# curl -O https://trunk.rdoproject.org/centos7-mitaka/delorean-deps.repo
# curl -O https://trunk.rdoproject.org/centos7-mitaka/current-passed-ci/delorean.repo
# yum -y install openstack-packstack (Controller only)
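
A quick check that the Delorean repositories are actually enabled before proceeding:

# yum repolist enabled | grep -i delorean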

Now proceed as follows :-

1. Here is the Answer file to deploy a pre-DVR Cluster.
2. Convert the cluster to DVR as advised in "RDO Liberty DVR Neutron workflow on CentOS 7.2" :-

http://dbaxps.blogspot.com/2015/10/rdo-liberty-rc-dvr-deployment.html

Just one note for RDO Mitaka: on each compute node, first create br-ex and add port eth0

# ovs-vsctl add-br br-ex
# ovs-vsctl add-port br-ex eth0
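
Verify the bridge before touching the ifcfg files; eth0 must show up as a port of br-ex:

# ovs-vsctl list-ports br-ex
eth0
# ovs-vsctl show | grep -A 3 br-ex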

Then configure

*********************************
Compute nodes X=(3,4)
*********************************

# cat ifcfg-br-ex
DEVICE="br-ex"
BOOTPROTO="static"
IPADDR="192.169.142.1(X)7"
NETMASK="255.255.255.0"
DNS1="83.221.202.254"
BROADCAST="192.169.142.255"
GATEWAY="192.169.142.1"
NM_CONTROLLED="no"
DEFROUTE="yes"
IPV4_FAILURE_FATAL="yes"
IPV6INIT=no
ONBOOT="yes"
TYPE="OVSIntPort"
OVS_BRIDGE=br-ex
DEVICETYPE="ovs"

# cat ifcfg-eth0

DEVICE="eth0"
ONBOOT="yes"
TYPE="OVSPort"
DEVICETYPE="ovs"
OVS_BRIDGE=br-ex
NM_CONTROLLED=no
IPV6INIT=no

***************************
Then run script
***************************

#!/bin/bash -x
chkconfig network on
systemctl stop NetworkManager
systemctl disable NetworkManager
service network restart

Reboot node.
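
After the reboot, the node address should sit on br-ex rather than on eth0; a quick verification:

# ip addr show br-ex | grep 'inet '      # expect 192.169.142.1(X)7 here
# ip addr show eth0 | grep 'inet '       # should return nothing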

**********************************************
Nova-Docker Setup on each Compute
**********************************************

# curl -sSL https://get.docker.com/ | sh
# usermod -aG docker nova      ( this does not seem to help with setting 660 on docker.sock )
# systemctl start docker
# systemctl enable docker
# chmod 666  /var/run/docker.sock (add to /etc/rc.d/rc.local)
# easy_install pip
# git clone -b stable/mitaka   https://github.com/openstack/nova-docker
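
Before building the driver, it is worth checking that the nova user can actually reach the docker socket (this is what the chmod 666 above works around); a minimal test:

# su -s /bin/bash nova -c 'docker ps'    # must not fail with "permission denied"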

*******************
Driver build
*******************

# cd nova-docker
# pip install -r requirements.txt
# python setup.py install
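
A quick import test confirms the driver landed in nova's python path; the module layout matches the compute_driver value set in the next step:

# python -c 'from novadocker.virt.docker import driver; print(driver.DockerDriver)'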

********************************************
Switch nova-compute to DockerDriver
********************************************

vi /etc/nova/nova.conf
compute_driver=novadocker.virt.docker.DockerDriver

******************************************************************
Next on Controller/Network Node and each Compute Node
******************************************************************

mkdir /etc/nova/rootwrap.d
vi /etc/nova/rootwrap.d/docker.filters

[Filters]
# nova/virt/docker/driver.py: 'ln', '-sf', '/var/run/netns/.*'
ln: CommandFilter, /bin/ln, root

**********************************************************
Nova Compute Service restart on Compute Nodes
**********************************************************
# systemctl restart openstack-nova-compute
***********************************************
Glance API Service restart on Controller
**********************************************
vi /etc/glance/glance-api.conf
container_formats=ami,ari,aki,bare,ovf,ova,docker

# systemctl restart openstack-glance-api
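
Once the services are back, the hypervisor type reported by nova should switch to docker; a hedged check (the hypervisor ID is a placeholder):

# . keystonerc_admin
# nova hypervisor-list
# nova hypervisor-show <hypervisor-id> | grep hypervisor_type    # expect "docker"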

Screenshot from 2016-04-03 12-22-34

Screenshot from 2016-04-03 12-57-09

Screenshot from 2016-04-03 12-32-41

Screenshot from 2016-04-03 14-39-11

**************************************************************************************
Build on Compute GlassFish 4.1 docker image per
http://bderzhavets.blogspot.com/2015/01/hacking-dockers-phusionbaseimage-to.html  and upload to glance :-
**************************************************************************************

[root@ip-192-169-142-137 ~(keystone_admin)]# docker images

REPOSITORY                 TAG                 IMAGE ID            CREATED              SIZE
derby/docker-glassfish41   latest              3a6b84ec9206        About a minute ago   1.155 GB
rastasheep/ubuntu-sshd     latest              70e0ac74c691        2 days ago           251.6 MB
phusion/baseimage          latest              772dd063a060        3 months ago         305.1 MB
tutum/tomcat               latest              2edd730bbedd        7 months ago         539.9 MB
larsks/thttpd              latest              a31ab5050b67        15 months ago        1.058 MB

[root@ip-192-169-142-137 ~(keystone_admin)]# docker save derby/docker-glassfish41 | openstack image create derby/docker-glassfish41 --public --container-format docker --disk-format raw

+------------------+------------------------------------------------------+
| Field            | Value                                                |
+------------------+------------------------------------------------------+
| checksum         | 9bea6dd0bcd8d0d7da2d82579c0e658a                     |
| container_format | docker                                               |
| created_at       | 2016-04-01T14:29:20Z                                 |
| disk_format      | raw                                                  |
| file             | /v2/images/acf03d15-b7c5-4364-b00f-603b6a5d9af2/file |
| id               | acf03d15-b7c5-4364-b00f-603b6a5d9af2                 |
| min_disk         | 0                                                    |
| min_ram          | 0                                                    |
| name             | derby/docker-glassfish41                             |
| owner            | 31b24d4b1574424abe53b9a5affc70c8                     |
| protected        | False                                                |
| schema           | /v2/schemas/image                                    |
| size             | 1175020032                                           |
| status           | active                                               |
| tags             |                                                      |
| updated_at       | 2016-04-01T14:30:13Z                                 |
| virtual_size     | None                                                 |
| visibility       | public                                               |
+------------------+------------------------------------------------------+
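
With the image uploaded, a container is launched like any other instance; a minimal sketch, where the flavor and the net-id are placeholders for this environment:

# nova boot --image derby/docker-glassfish41 --flavor m1.small \
    --nic net-id=<demo_network-id> derbyGlassfish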

[root@ip-192-169-142-137 ~(keystone_admin)]# docker ps

CONTAINER ID        IMAGE                      COMMAND               CREATED             STATUS              PORTS               NAMES

8f551d35f2d7        derby/docker-glassfish41   “/sbin/my_init”       39 seconds ago      Up 31 seconds                           nova-faba725e-e031-4edb-bf2c-41c6dfc188c1
dee4425261e8        tutum/tomcat               “/run.sh”             About an hour ago   Up About an hour                        nova-13450558-12d7-414c-bcd2-d746495d7a57
41d2ebc54d75        rastasheep/ubuntu-sshd     “/usr/sbin/sshd -D”   2 hours ago         Up About an hour                        nova-04ddea42-10a3-4a08-9f00-df60b5890ee9

[root@ip-192-169-142-137 ~(keystone_admin)]# docker logs 8f551d35f2d7

*** Running /etc/my_init.d/00_regen_ssh_host_keys.sh…
No SSH host key available. Generating one…
*** Running /etc/my_init.d/01_sshd_start.sh…
Creating SSH2 RSA key; this may take some time …
Creating SSH2 DSA key; this may take some time …
Creating SSH2 ECDSA key; this may take some time …
Creating SSH2 ED25519 key; this may take some time …
invoke-rc.d: policy-rc.d denied execution of restart.
SSH KEYS regenerated by Boris just in case !
SSHD started !

*** Running /etc/my_init.d/database.sh…
Derby database started !
*** Running /etc/my_init.d/run.sh…

Bad Network Configuration.  DNS can not resolve the hostname:
java.net.UnknownHostException: instance-00000006: instance-00000006: unknown error

Waiting for domain1 to start ……
Successfully started the domain : domain1
domain  Location: /opt/glassfish4/glassfish/domains/domain1
Log File: /opt/glassfish4/glassfish/domains/domain1/logs/server.log
Admin Port: 4848
Command start-domain executed successfully.
=> Modifying password of admin to random in Glassfish
spawn asadmin --user admin change-admin-password
Enter the admin password>
Enter the new admin password>
Enter the new admin password again>
Command change-admin-password executed successfully.

A fairly hard docker image, built by a "docker expert" such as myself 😉,
gets launched, and the nova-docker instance seems to run properly,
with several daemons at a time (sshd enabled).
[boris@fedora23wks Downloads]$ ssh root@192.169.142.156

root@192.169.142.156's password:
Last login: Fri Apr  1 15:33:06 2016 from 192.169.142.1
root@instance-00000006:~# ps -ef

UID        PID  PPID  C STIME TTY          TIME CMD
root         1     0  0 14:32 ?        00:00:00 /usr/bin/python3 -u /sbin/my_init
root       100     1  0 14:33 ?        00:00:00 /bin/bash /etc/my_init.d/run.sh
root       103     1  0 14:33 ?        00:00:00 /usr/sbin/sshd
root       170     1  0 14:33 ?        00:00:03 /opt/jdk1.8.0_25/bin/java -Djava.library.path=/op
root       427   100  0 14:33 ?        00:00:02 java -jar /opt/glassfish4/bin/../glassfish/lib/cl
root       444   427  2 14:33 ?        00:01:23 /opt/jdk1.8.0_25/bin/java -cp /opt/glassfish4/gla

root      1078     0  0 15:32 ?        00:00:00 bash
root      1110   103  0 15:33 ?        00:00:00 sshd: root@pts/0
root      1112  1110  0 15:33 pts/0    00:00:00 -bash
root      1123  1112  0 15:33 pts/0    00:00:00 ps -ef

GlassFish is indeed running.