AIO RDO Liberty && several external networks VLAN provider setup

April 28, 2016

The post below addresses the case when an AIO RDO Liberty node has to serve several external networks of VLAN type with predefined VLAN tags. A straightforward packstack --allinone install doesn't allow to achieve the desired network configuration; an external network provider of VLAN type appears to be required. In this particular case, the office networks 10.10.10.0/24 (VLAN tag 157), 10.10.57.0/24 (VLAN tag 172) and 10.10.32.0/24 (VLAN tag 200) already exist when the RDO install is running. If demo_provision was "y", first delete router1 and the external network of VXLAN type that it created.
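
A minimal cleanup sketch for the demo provisioning artifacts, assuming the default packstack names router1, public and private_subnet (verify the actual names with neutron router-list and neutron net-list first):

# neutron router-gateway-clear router1
# neutron router-interface-delete router1 private_subnet
# neutron router-delete router1
# neutron net-delete public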

I got back to this writing due to the recent post
https://ask.openstack.org/en/question/91611/how-to-configure-multiple-external-networks-in-rdo-libertymitaka/
The answer provided there contains several misleading steps in the configuration of VLAN-enabled bridges.

First

***********************************************************
Update /etc/neutron/plugins/ml2/ml2_conf.ini
***********************************************************

[root@ip-192-169-142-52 ml2(keystone_demo)]# cat ml2_conf.ini
[ml2]
type_drivers = flat,vlan,vxlan
tenant_network_types = vlan,vxlan
mechanism_drivers =openvswitch
path_mtu = 0
[ml2_type_flat]
[ml2_type_vlan]
network_vlan_ranges = vlan157:157:157,vlan172:172:172,vlan200:200:200
[ml2_type_gre]
[ml2_type_vxlan]
vni_ranges =10:100
vxlan_group =224.0.0.1
[ml2_type_geneve]
[securitygroup]
enable_security_group = True

**************
Then
**************

# openstack-service restart neutron

***************************************************
Invoke external network provider
***************************************************

[root@ip-192-169-142-52 ~(keystone_admin)]# neutron net-create vlan157 --shared --provider:network_type vlan --provider:segmentation_id 157 --provider:physical_network vlan157 --router:external

[root@ip-192-169-142-52 ~(keystone_admin)]# neutron subnet-create --name sub-vlan157 --gateway 10.10.10.1 --allocation-pool start=10.10.10.100,end=10.10.10.200 vlan157 10.10.10.0/24
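
For reference, the same network could be created with the unified openstack client on later releases; a hedged sketch (python-openstackclient flag syntax, not verified on Liberty):

# openstack network create --share --external --provider-network-type vlan --provider-physical-network vlan157 --provider-segment 157 vlan157
# openstack subnet create --network vlan157 --gateway 10.10.10.1 --allocation-pool start=10.10.10.100,end=10.10.10.200 --subnet-range 10.10.10.0/24 sub-vlan157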

***********************************************
Create second external network
***********************************************

[root@ip-192-169-142-52 ~(keystone_admin)]# neutron net-create vlan172 --shared --provider:network_type vlan --provider:segmentation_id 172 --provider:physical_network vlan172 --router:external


[root@ip-192-169-142-52 ~(keystone_admin)]# neutron subnet-create --name sub-vlan172 --gateway 10.10.57.1 --allocation-pool start=10.10.57.100,end=10.10.57.200 vlan172 10.10.57.0/24

***********************************************
Create third external network
***********************************************

[root@ip-192-169-142-52 ~(keystone_admin)]# neutron net-create vlan200 --shared --provider:network_type vlan --provider:segmentation_id 200 --provider:physical_network vlan200 --router:external

[root@ip-192-169-142-52 ~(keystone_admin)]# neutron subnet-create --name sub-vlan200 --gateway 10.10.32.1 --allocation-pool start=10.10.32.100,end=10.10.32.200 vlan200 10.10.32.0/24

***********************************************************************
No need to update the subnets (vs [ 1 ]) and no switch to "enable_isolated_metadata=True".
The Neutron L3 agent configuration results in qg-<port-id> interfaces attached to br-int
***********************************************************************

[root@ip-192-169-142-52 ~(keystone_admin)]# neutron net-show vlan157

+---------------------------+--------------------------------------+
| Field                     | Value                                |
+---------------------------+--------------------------------------+
| admin_state_up            | True                                 |
| id                        | b41e4d36-9a63-4631-abb0-6436f2f50e2e |
| mtu                       | 0                                    |
| name                      | vlan157                              |
| provider:network_type     | vlan                                 |
| provider:physical_network | vlan157                              |
| provider:segmentation_id  | 157                                  |
| router:external           | True                                 |
| shared                    | True                                 |
| status                    | ACTIVE                               |
| subnets                   | bb753fc3-f257-4ce5-aa7c-56648648056b |
| tenant_id                 | b18d25d66bbc48b1ad4b855a9c14da70     |
+---------------------------+--------------------------------------+

[root@ip-192-169-142-52 ~(keystone_admin)]# neutron subnet-show sub-vlan157

+-------------------+------------------------------------------------------------------+
| Field             | Value                                                            |
+-------------------+------------------------------------------------------------------+
| allocation_pools  | {"start": "10.10.10.100", "end": "10.10.10.200"}                 |
| cidr              | 10.10.10.0/24                                                    |
| dns_nameservers   |                                                                  |
| enable_dhcp       | True                                                             |
| gateway_ip        | 10.10.10.1                                                       |
| host_routes       | {"destination": "169.254.169.254/32", "nexthop": "10.10.10.151"} |
| id                | bb753fc3-f257-4ce5-aa7c-56648648056b                             |
| ip_version        | 4                                                                |
| ipv6_address_mode |                                                                  |
| ipv6_ra_mode      |                                                                  |
| name              | sub-vlan157                                                      |
| network_id        | b41e4d36-9a63-4631-abb0-6436f2f50e2e                             |
| subnetpool_id     |                                                                  |
| tenant_id         | b18d25d66bbc48b1ad4b855a9c14da70                                 |
+-------------------+------------------------------------------------------------------+

[root@ip-192-169-142-52 ~(keystone_admin)]# neutron net-show vlan172

+---------------------------+--------------------------------------+
| Field                     | Value                                |
+---------------------------+--------------------------------------+
| admin_state_up            | True                                 |
| id                        | 3714adc9-ab17-4f96-9df2-48a6c0b64513 |
| mtu                       | 0                                    |
| name                      | vlan172                              |
| provider:network_type     | vlan                                 |
| provider:physical_network | vlan172                              |
| provider:segmentation_id  | 172                                  |
| router:external           | True                                 |
| shared                    | True                                 |
| status                    | ACTIVE                               |
| subnets                   | 21419f2f-212b-409a-8021-2b4a2ba6532f |
| tenant_id                 | b18d25d66bbc48b1ad4b855a9c14da70     |
+---------------------------+--------------------------------------+

[root@ip-192-169-142-52 ~(keystone_admin)]# neutron subnet-show sub-vlan172

+-------------------+------------------------------------------------------------------+
| Field             | Value                                                            |
+-------------------+------------------------------------------------------------------+
| allocation_pools  | {"start": "10.10.57.100", "end": "10.10.57.200"}                 |
| cidr              | 10.10.57.0/24                                                    |
| dns_nameservers   |                                                                  |
| enable_dhcp       | True                                                             |
| gateway_ip        | 10.10.57.1                                                       |
| host_routes       | {"destination": "169.254.169.254/32", "nexthop": "10.10.57.151"} |
| id                | 21419f2f-212b-409a-8021-2b4a2ba6532f                             |
| ip_version        | 4                                                                |
| ipv6_address_mode |                                                                  |
| ipv6_ra_mode      |                                                                  |
| name              | sub-vlan172                                                      |
| network_id        | 3714adc9-ab17-4f96-9df2-48a6c0b64513                             |
| subnetpool_id     |                                                                  |
| tenant_id         | b18d25d66bbc48b1ad4b855a9c14da70                                 |
+-------------------+------------------------------------------------------------------+

[root@ip-192-169-142-52 ~(keystone_admin)]# neutron net-show vlan200

+---------------------------+--------------------------------------+
| Field                     | Value                                |
+---------------------------+--------------------------------------+
| admin_state_up            | True                                 |
| id                        | 3dc90ff7-b1df-4079-aca1-cceedb23f440 |
| mtu                       | 0                                    |
| name                      | vlan200                              |
| provider:network_type     | vlan                                 |
| provider:physical_network | vlan200                              |
| provider:segmentation_id  | 200                                  |
| router:external           | True                                 |
| shared                    | True                                 |
| status                    | ACTIVE                               |
| subnets                   | 60181211-ea36-4e4e-8781-f13f743baa19 |
| tenant_id                 | b18d25d66bbc48b1ad4b855a9c14da70     |
+---------------------------+--------------------------------------+

[root@ip-192-169-142-52 ~(keystone_admin)]# neutron subnet-show sub-vlan200

+-------------------+--------------------------------------------------+
| Field             | Value                                            |
+-------------------+--------------------------------------------------+
| allocation_pools  | {"start": "10.10.32.100", "end": "10.10.32.200"} |
| cidr              | 10.10.32.0/24                                    |
| dns_nameservers   |                                                  |
| enable_dhcp       | True                                             |
| gateway_ip        | 10.10.32.1                                       |
| host_routes       |                                                  |
| id                | 60181211-ea36-4e4e-8781-f13f743baa19             |
| ip_version        | 4                                                |
| ipv6_address_mode |                                                  |
| ipv6_ra_mode      |                                                  |
| name              | sub-vlan200                                      |
| network_id        | 3dc90ff7-b1df-4079-aca1-cceedb23f440             |
| subnetpool_id     |                                                  |
| tenant_id         | b18d25d66bbc48b1ad4b855a9c14da70                 |
+-------------------+--------------------------------------------------+

**************
Next Step
**************

# modprobe 8021q
# ovs-vsctl add-br br-vlan
# ovs-vsctl add-port br-vlan eth1
# vconfig add br-vlan 157
# ovs-vsctl add-br br-vlan2
# ovs-vsctl add-port br-vlan2 eth2
# vconfig add br-vlan2 172
# ovs-vsctl add-br br-vlan3
# ovs-vsctl add-port br-vlan3 eth3
# vconfig add br-vlan3  200
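
Note: vconfig is deprecated on recent CentOS/Fedora releases. The equivalent VLAN sub-interface can be created with iproute2; a sketch, shown for tag 157 only:

# ip link add link br-vlan name br-vlan.157 type vlan id 157
# ip link set br-vlan.157 up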

******************************
Update l3_agent.ini file
******************************
external_network_bridge =
gateway_external_network_id =

**********************************************
/etc/neutron/plugins/ml2/openvswitch_agent.ini
**********************************************

bridge_mappings = vlan157:br-vlan,vlan172:br-vlan2,vlan200:br-vlan3

*************************************
Update Neutron Configuration
*************************************

# openstack-service restart neutron
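
After the restart it is worth confirming that the openvswitch agent wired each mapped bridge into br-int; a quick verification sketch:

# ovs-vsctl list-ports br-vlan | grep phy-br-vlan
# ovs-ofctl show br-int | grep int-br-vlan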

*******************************************
Set up persistent config between reboots
*******************************************

/etc/sysconfig/network-scripts/ifcfg-eth1

DEVICE="eth1"
ONBOOT=yes
OVS_BRIDGE=br-vlan
TYPE=OVSPort
DEVICETYPE="ovs"

/etc/sysconfig/network-scripts/ifcfg-br-vlan

DEVICE=br-vlan
BOOTPROTO=none
ONBOOT=yes
TYPE=OVSBridge
DEVICETYPE="ovs"

/etc/sysconfig/network-scripts/ifcfg-br-vlan.157

BOOTPROTO="none"
DEVICE="br-vlan.157"
ONBOOT="yes"
IPADDR="10.10.10.150"
PREFIX="24"
GATEWAY="10.10.10.1"
DNS1="83.221.202.254"
VLAN=yes
NOZEROCONF=yes
USERCTL=no

/etc/sysconfig/network-scripts/ifcfg-eth2

DEVICE="eth2"
ONBOOT=yes
OVS_BRIDGE=br-vlan2
TYPE=OVSPort
DEVICETYPE="ovs"

/etc/sysconfig/network-scripts/ifcfg-br-vlan2

DEVICE=br-vlan2
BOOTPROTO=none
ONBOOT=yes
TYPE=OVSBridge
DEVICETYPE="ovs"

/etc/sysconfig/network-scripts/ifcfg-br-vlan2.172

BOOTPROTO="none"
DEVICE="br-vlan2.172"
ONBOOT="yes"
IPADDR="10.10.57.150"
PREFIX="24"
GATEWAY="10.10.57.1"
DNS1="83.221.202.254"
VLAN=yes
NOZEROCONF=yes

/etc/sysconfig/network-scripts/ifcfg-br-vlan3

DEVICE=br-vlan3
BOOTPROTO=none
ONBOOT=yes
TYPE=OVSBridge
DEVICETYPE="ovs"

/etc/sysconfig/network-scripts/ifcfg-br-vlan3.200

BOOTPROTO="none"
DEVICE="br-vlan3.200"
ONBOOT="yes"
IPADDR="10.10.32.150"
PREFIX="24"
GATEWAY="10.10.32.1"
DNS1="83.221.202.254"
VLAN=yes
NOZEROCONF=yes
USERCTL=no

/etc/sysconfig/network-scripts/ifcfg-eth3

DEVICE="eth3"
ONBOOT=yes
OVS_BRIDGE=br-vlan3
TYPE=OVSPort
DEVICETYPE="ovs"
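
To apply the persistent configuration without a reboot, the legacy network service can be restarted; a sketch, assuming standard CentOS 7 initscripts-based networking:

# systemctl restart network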

********************************************
Routing table on AIO RDO Liberty Node
********************************************

[root@ip-192-169-142-52 ~(keystone_admin)]# ip route

default via 10.10.10.1 dev br-vlan.157
10.10.10.0/24 dev br-vlan.157  proto kernel  scope link  src 10.10.10.150
10.10.32.0/24 dev br-vlan3.200  proto kernel  scope link  src 10.10.32.150
10.10.57.0/24 dev br-vlan2.172  proto kernel  scope link  src 10.10.57.150
169.254.0.0/16 dev eth0  scope link  metric 1002
169.254.0.0/16 dev eth1  scope link  metric 1003
169.254.0.0/16 dev eth2  scope link  metric 1004
169.254.0.0/16 dev eth3  scope link  metric 1005
169.254.0.0/16 dev br-vlan3  scope link  metric 1008
169.254.0.0/16 dev br-vlan2  scope link  metric 1009
169.254.0.0/16 dev br-vlan  scope link  metric 1011
192.169.142.0/24 dev eth0  proto kernel  scope link  src 192.169.142.52

****************************************************************************
Notice that the qrouter namespaces are attached to br-int.
No switch to "enable_isolated_metadata=True" vs [ 1 ]
*****************************************************************************

[root@ip-192-169-142-52 ~(keystone_admin)]# neutron net-list | grep vlan
| 3dc90ff7-b1df-4079-aca1-cceedb23f440 | vlan200   | 60181211-ea36-4e4e-8781-f13f743baa19 10.10.32.0/24 |
| 235c8173-d3f8-407e-ad6a-c1d3d423c763 | vlan172   | c7588239-4941-419b-8d27-ccd970acc4ce 10.10.57.0/24 |
| b41e4d36-9a63-4631-abb0-6436f2f50e2e | vlan157   | bb753fc3-f257-4ce5-aa7c-56648648056b 10.10.10.0/24 |

[root@ip-192-169-142-52 ~(keystone_admin)]# ovs-vsctl show
40286423-e174-4714-9c82-32d026ef47ca
    Bridge br-vlan
        Port "eth1"
            Interface "eth1"
        Port br-vlan
            Interface br-vlan
                type: internal
        Port phy-br-vlan
            Interface phy-br-vlan
                type: patch
                options: {peer=int-br-vlan}
    Bridge br-ex
        Port br-ex
            Interface br-ex
                type: internal
    Bridge br-tun
        fail_mode: secure
        Port br-tun
            Interface br-tun
                type: internal
        Port patch-int
            Interface patch-int
                type: patch
                options: {peer=patch-tun}
    Bridge "br-vlan2"
        Port "phy-br-vlan2"
            Interface "phy-br-vlan2"
                type: patch
                options: {peer="int-br-vlan2"}
        Port "eth2"
            Interface "eth2"
        Port "br-vlan2"
            Interface "br-vlan2"
                type: internal
    Bridge "br-vlan3"
        Port "br-vlan3"
            Interface "br-vlan3"
                type: internal
        Port "phy-br-vlan3"
            Interface "phy-br-vlan3"
                type: patch
                options: {peer="int-br-vlan3"}
        Port "eth3"
            Interface "eth3"
    Bridge br-int
        fail_mode: secure
        Port "qr-4e77c7a3-b5"
            tag: 3
            Interface "qr-4e77c7a3-b5"
                type: internal
        Port "int-br-vlan3"
            Interface "int-br-vlan3"
                type: patch
                options: {peer="phy-br-vlan3"}
        Port "tap8e684c78-a3"
            tag: 2
            Interface "tap8e684c78-a3"
                type: internal
        Port patch-tun
            Interface patch-tun
                type: patch
                options: {peer=patch-int}
        Port "qvoe2761636-b5"
            tag: 4
            Interface "qvoe2761636-b5"
        Port "tap6cd6fadf-31"
            tag: 1
            Interface "tap6cd6fadf-31"
                type: internal
        Port "qg-02f7ff0d-6d"
            tag: 2
            Interface "qg-02f7ff0d-6d"
                type: internal
        Port "qg-943f7831-46"
            tag: 1
            Interface "qg-943f7831-46"
                type: internal
        Port "tap4ef27b41-be"
            tag: 5
            Interface "tap4ef27b41-be"
                type: internal
        Port "qr-f0fd3793-4e"
            tag: 8
            Interface "qr-f0fd3793-4e"
                type: internal
        Port "tapb1435e62-8b"
            tag: 7
            Interface "tapb1435e62-8b"
                type: internal
        Port "qvo1bb76476-05"
            tag: 3
            Interface "qvo1bb76476-05"
        Port "qvocf68fcd8-68"
            tag: 8
            Interface "qvocf68fcd8-68"
        Port "qvo8605f075-25"
            tag: 4
            Interface "qvo8605f075-25"
        Port "qg-08ccc224-1e"
            tag: 7
            Interface "qg-08ccc224-1e"
                type: internal
        Port "tapbb485628-0b"
            tag: 4
            Interface "tapbb485628-0b"
                type: internal
        Port "int-br-vlan2"
            Interface "int-br-vlan2"
                type: patch
                options: {peer="phy-br-vlan2"}
        Port "tapee030534-da"
            tag: 8
            Interface "tapee030534-da"
                type: internal
        Port "qr-4d679697-39"
            tag: 4
            Interface "qr-4d679697-39"
                type: internal
        Port br-int
            Interface br-int
                type: internal
        Port "tap9b38c69e-46"
            tag: 6
            Interface "tap9b38c69e-46"
                type: internal
        Port "tapc166022a-54"
            tag: 3
            Interface "tapc166022a-54"
                type: internal
        Port "qvo66d8f235-d4"
            tag: 8
            Interface "qvo66d8f235-d4"
        Port int-br-vlan
            Interface int-br-vlan
                type: patch
                options: {peer=phy-br-vlan}
    ovs_version: "2.4.0"

[root@ip-192-169-142-52 ~(keystone_admin)]# ip netns
qdhcp-e826aa22-dee0-478d-8bd7-721336e3824a
qrouter-a2f4c7e8-9b63-4ed3-8d9a-faa6158d253b
qdhcp-3dc90ff7-b1df-4079-aca1-cceedb23f440
qdhcp-4481aee1-ef86-4997-bf52-e435aafb9c20
qdhcp-eda69965-c6ee-42be-944f-2d61498e4bea
qdhcp-6768214b-b71c-4178-a0fc-774b2a5d59ef
qdhcp-b41e4d36-9a63-4631-abb0-6436f2f50e2e
qdhcp-03812cc9-69c5-492a-9995-985bf6e1ff13
qdhcp-235c8173-d3f8-407e-ad6a-c1d3d423c763
qdhcp-d958a059-f7bd-4f9f-93a3-3499d20a1fe2
qrouter-c1900dab-447f-4f87-80e7-b4c8ca087d28
qrouter-71237c84-59ca-45dc-a6ec-23eb94c4249d

********************************************************************************
Access to the Nova Metadata Server is provided via neutron-ns-metadata-proxy
running in the corresponding qrouter namespaces (Neutron L3 configuration)
********************************************************************************

[root@ip-192-169-142-52 ~(keystone_admin)]# ip netns exec qrouter-a2f4c7e8-9b63-4ed3-8d9a-faa6158d253b netstat -antp

Active Internet connections (servers and established)

Proto Recv-Q Send-Q Local Address           Foreign Address         State       PID/Program name
tcp        0      0 0.0.0.0:9697            0.0.0.0:*               LISTEN      12548/python2    
[root@ip-192-169-142-52 ~(keystone_admin)]# ps aux | grep 12548

neutron  12548  0.0  0.4 281028 35992 ?        S    18:34   0:00 /usr/bin/python2 /bin/neutron-ns-metadata-proxy --pid_file=/var/lib/neutron/external/pids/a2f4c7e8-9b63-4ed3-8d9a-faa6158d253b.pid --metadata_proxy_socket=/var/lib/neutron/metadata_proxy --router_id=a2f4c7e8-9b63-4ed3-8d9a-faa6158d253b --state_path=/var/lib/neutron --metadata_port=9697 --metadata_proxy_user=990 --metadata_proxy_group=988 --verbose --log-file=neutron-ns-metadata-proxy-a2f4c7e8-9b63-4ed3-8d9a-faa6158d253b.log --log-dir=/var/log/neutron
root     32665  0.0  0.0 112644   960 pts/8    S+   19:29   0:00 grep --color=auto 12548
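
Instances reach the proxy because the L3 agent installs a NAT rule inside the qrouter namespace redirecting 169.254.169.254:80 to port 9697; a verification sketch:

[root@ip-192-169-142-52 ~(keystone_admin)]# ip netns exec qrouter-a2f4c7e8-9b63-4ed3-8d9a-faa6158d253b iptables -t nat -S | grep 9697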

******************************************************************************
OVS flow verification on br-vlan3 and br-vlan2. On each of the external networks
vlan172 and vlan200 two VMs are pinging each other
******************************************************************************

[root@ip-192-169-142-52 ~(keystone_admin)]# ovs-ofctl dump-flows br-vlan3 | grep NORMAL
cookie=0x0, duration=3554.739s, table=0, n_packets=33, n_bytes=2074, idle_age=2137, priority=4,in_port=2,dl_vlan=7 actions=mod_vlan_vid:200,NORMAL
cookie=0x0, duration=4204.459s, table=0, n_packets=2102, n_bytes=109304, idle_age=1, priority=0 actions=NORMAL

[root@ip-192-169-142-52 ~(keystone_admin)]# ovs-ofctl dump-flows br-vlan3 | grep NORMAL
cookie=0x0, duration=3557.643s, table=0, n_packets=33, n_bytes=2074, idle_age=2140, priority=4,in_port=2,dl_vlan=7 actions=mod_vlan_vid:200,NORMAL
cookie=0x0, duration=4207.363s, table=0, n_packets=2103, n_bytes=109356, idle_age=2, priority=0 actions=NORMAL

[root@ip-192-169-142-52 ~(keystone_admin)]# ovs-ofctl dump-flows br-vlan3 | grep NORMAL
cookie=0x0, duration=3568.225s, table=0, n_packets=33, n_bytes=2074, idle_age=2151, priority=4,in_port=2,dl_vlan=7 actions=mod_vlan_vid:200,NORMAL
cookie=0x0, duration=4217.945s, table=0, n_packets=2109, n_bytes=109668, idle_age=0, priority=0 actions=NORMAL

[root@ip-192-169-142-52 ~(keystone_admin)]# ovs-ofctl dump-flows br-vlan2 | grep NORMAL
cookie=0x0, duration=4140.528s, table=0, n_packets=11, n_bytes=642, idle_age=695, priority=4,in_port=2,dl_vlan=1 actions=mod_vlan_vid:172,NORMAL
cookie=0x0, duration=4225.918s, table=0, n_packets=2113, n_bytes=109876, idle_age=1, priority=0 actions=NORMAL

[root@ip-192-169-142-52 ~(keystone_admin)]# ovs-ofctl dump-flows br-vlan2 | grep NORMAL
cookie=0x0, duration=4143.600s, table=0, n_packets=11, n_bytes=642, idle_age=698, priority=4,in_port=2,dl_vlan=1 actions=mod_vlan_vid:172,NORMAL
cookie=0x0, duration=4228.990s, table=0, n_packets=2115, n_bytes=109980, idle_age=0, priority=0 actions=NORMAL

[root@ip-192-169-142-52 ~(keystone_admin)]# ovs-ofctl dump-flows br-vlan2 | grep NORMAL
cookie=0x0, duration=4145.912s, table=0, n_packets=11, n_bytes=642, idle_age=700, priority=4,in_port=2,dl_vlan=1 actions=mod_vlan_vid:172,NORMAL
cookie=0x0, duration=4231.302s, table=0, n_packets=2116, n_bytes=110032, idle_age=0, priority=0 actions=NORMAL

********************************************************************************
The next question is how the local VLAN tag 7 gets generated.
Run the following commands :-
********************************************************************************

[root@ip-192-169-142-52 ~(keystone_admin)]# neutron net-show vlan200

+---------------------------+--------------------------------------+
| Field                     | Value                                |
+---------------------------+--------------------------------------+
| admin_state_up            | True                                 |
| id                        | 3dc90ff7-b1df-4079-aca1-cceedb23f440 |
| mtu                       | 0                                    |
| name                      | vlan200                              |
| provider:network_type     | vlan                                 |
| provider:physical_network | vlan200                              |
| provider:segmentation_id  | 200                                  |
| router:external           | True                                 |
| shared                    | True                                 |
| status                    | ACTIVE                               |
| subnets                   | 60181211-ea36-4e4e-8781-f13f743baa19 |
| tenant_id                 | b18d25d66bbc48b1ad4b855a9c14da70     |
+---------------------------+--------------------------------------+

[root@ip-192-169-142-52 ~(keystone_admin)]# ip netns | grep 3dc90ff7-b1df-4079-aca1-cceedb23f440
qdhcp-3dc90ff7-b1df-4079-aca1-cceedb23f440

[root@ip-192-169-142-52 ~(keystone_admin)]# ip netns exec qdhcp-3dc90ff7-b1df-4079-aca1-cceedb23f440 ifconfig

lo: flags=73<UP,LOOPBACK,RUNNING>  mtu 65536
inet 127.0.0.1  netmask 255.0.0.0
inet6 ::1  prefixlen 128  scopeid 0x10<host>
loop  txqueuelen 0  (Local Loopback)
RX packets 0  bytes 0 (0.0 B)
RX errors 0  dropped 0  overruns 0  frame 0
TX packets 0  bytes 0 (0.0 B)
TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0
tapb1435e62-8b: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1500
inet 10.10.32.100  netmask 255.255.255.0  broadcast 10.10.32.255
inet6 fe80::f816:3eff:fee3:19f2  prefixlen 64  scopeid 0x20<link>
ether fa:16:3e:e3:19:f2  txqueuelen 0  (Ethernet)
RX packets 27  bytes 1526 (1.4 KiB)
RX errors 0  dropped 0  overruns 0  frame 0
TX packets 8  bytes 648 (648.0 B)
TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0

[root@ip-192-169-142-52 ~(keystone_admin)]# ip netns exec qdhcp-3dc90ff7-b1df-4079-aca1-cceedb23f440 route -n

Kernel IP routing table
Destination     Gateway         Genmask         Flags Metric Ref    Use Iface
0.0.0.0         10.10.32.1      0.0.0.0         UG    0      0        0 tapb1435e62-8b
10.10.32.0      0.0.0.0         255.255.255.0   U     0      0        0 tapb1435e62-8b

[root@ip-192-169-142-52 ~(keystone_admin)]# ovs-vsctl show | grep b1435e62-8b

Port "tapb1435e62-8b"
Interface "tapb1435e62-8b"

**************************************************************************
Actually, the directives mentioned in [ 1 ]
**************************************************************************

# neutron subnet-create --name vlan100 --gateway 192.168.0.1 --allocation-pool \
start=192.168.0.150,end=192.168.0.200 --enable-dhcp \
--dns-nameserver 192.168.0.1 vlan100 192.168.0.0/24
# neutron subnet-update --host-route destination=169.254.169.254/32,nexthop=192.168.0.151 vlan100

along with the switch to "enable_isolated_metadata=True", are targeting launching VMs into the external_fixed_ips pool in qdhcp-3dc90ff7-b1df-4079-aca1-cceedb23f440 without creating a Neutron router, splitting tenants by VLAN tag IDs. I might be missing something, but [ 1 ] configures a system where each vlan(XXX) external network would belong to only one tenant, identified by the tag (XXX).

Unless RBAC policies are created to control who has access to the provider network.
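
For instance, on Mitaka and later a network RBAC policy of that kind could be created as follows; a hypothetical sketch (<tenant-id> is a placeholder):

# neutron rbac-create --target-tenant <tenant-id> --action access_as_external --type network vlan200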

That is not what I intend to do. The Neutron workflow on br-int won't touch the mentioned qdhcp namespace at all. Any external vlan(XXX) network might be used by several tenants, each one having its own VXLAN subnet (identified in the system by a VXLAN ID) and its own Neutron router(XXX) to the external network vlan(XXX). The AIO RDO setup is just a sample; I am talking about the Network Node in a multi-node RDO Liberty deployment.

*********************************************
Fragment from `ovs-vsctl show`
*********************************************
Port "tapb1435e62-8b"
    tag: 7
    Interface "tapb1435e62-8b"

*************************************************************************
The next appearance of VLAN tag 7, as expected, is qg-08ccc224-1e,
the outgoing interface of the qrouter-a2f4c7e8-9b63-4ed3-8d9a-faa6158d253b
namespace.
*************************************************************************

[root@ip-192-169-142-52 ~(keystone_admin)]# ip netns exec qrouter-a2f4c7e8-9b63-4ed3-8d9a-faa6158d253b ifconfig

lo: flags=73<UP,LOOPBACK,RUNNING>  mtu 65536
inet 127.0.0.1  netmask 255.0.0.0
inet6 ::1  prefixlen 128  scopeid 0x10<host>
loop  txqueuelen 0  (Local Loopback)
RX packets 0  bytes 0 (0.0 B)
RX errors 0  dropped 0  overruns 0  frame 0
TX packets 0  bytes 0 (0.0 B)
TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0
qg-08ccc224-1e: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1500
inet 10.10.32.101  netmask 255.255.255.0  broadcast 10.10.32.255
inet6 fe80::f816:3eff:fed4:e7d  prefixlen 64  scopeid 0x20<link>
ether fa:16:3e:d4:0e:7d  txqueuelen 0  (Ethernet)
RX packets 0  bytes 0 (0.0 B)
RX errors 0  dropped 0  overruns 0  frame 0
TX packets 28  bytes 1704 (1.6 KiB)
TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0

qr-f0fd3793-4e: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1500
inet 30.0.0.1  netmask 255.255.255.0  broadcast 30.0.0.255
inet6 fe80::f816:3eff:fea9:5422  prefixlen 64  scopeid 0x20<link>
ether fa:16:3e:a9:54:22  txqueuelen 0  (Ethernet)
RX packets 68948  bytes 7192868 (6.8 MiB)
RX errors 0  dropped 0  overruns 0  frame 0
TX packets 68859  bytes 7185051 (6.8 MiB)
TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0

[root@ip-192-169-142-52 ~(keystone_admin)]# ip netns exec qrouter-a2f4c7e8-9b63-4ed3-8d9a-faa6158d253b route -n

Kernel IP routing table
Destination     Gateway         Genmask         Flags Metric Ref    Use Iface
0.0.0.0         10.10.32.1      0.0.0.0         UG    0      0        0 qg-08ccc224-1e
10.10.32.0      0.0.0.0         255.255.255.0   U     0      0        0 qg-08ccc224-1e
30.0.0.0        0.0.0.0         255.255.255.0   U     0      0        0 qr-f0fd3793-4e

*******************************************************************************************************
Now verify the Neutron router connecting the qrouter namespace (which has the interface with tag 7) and the qdhcp namespace that has been created to launch the instances.
Router RoutesDSA has been created with an external gateway to vlan200 and an internal interface to subnet private07 (30.0.0.0/24), with DHCP enabled and a DNS server defined.
vlan157 and vlan172 are configured as external networks for their corresponding routers as well.
*******************************************************************************************************

[root@ip-192-169-142-52 ~(keystone_admin)]# neutron router-list | grep RoutesDSA

| a2f4c7e8-9b63-4ed3-8d9a-faa6158d253b | RoutesDSA  | {"network_id": "3dc90ff7-b1df-4079-aca1-cceedb23f440", "enable_snat": true, "external_fixed_ips": [{"subnet_id": "60181211-ea36-4e4e-8781-f13f743baa19", "ip_address": "10.10.32.101"}]} | False       | False |

[root@ip-192-169-142-52 ~(keystone_admin)]# ip netns | grep a2f4c7e8-9b63-4ed3-8d9a-faa6158d253b
qrouter-a2f4c7e8-9b63-4ed3-8d9a-faa6158d253b

[root@ip-192-169-142-52 ~(keystone_admin)]# ip netns | grep 3dc90ff7-b1df-4079-aca1-cceedb23f440
qdhcp-3dc90ff7-b1df-4079-aca1-cceedb23f440

[root@ip-192-169-142-52 ~(keystone_admin)]# ip netns exec qdhcp-3dc90ff7-b1df-4079-aca1-cceedb23f440 ifconfig
lo: flags=73<UP,LOOPBACK,RUNNING>  mtu 65536
inet 127.0.0.1  netmask 255.0.0.0
inet6 ::1  prefixlen 128  scopeid 0x10<host>
loop  txqueuelen 0  (Local Loopback)
RX packets 0  bytes 0 (0.0 B)
RX errors 0  dropped 0  overruns 0  frame 0
TX packets 0  bytes 0 (0.0 B)
TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0
tapb1435e62-8b: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1500
inet 10.10.32.100  netmask 255.255.255.0  broadcast 10.10.32.255
inet6 fe80::f816:3eff:fee3:19f2  prefixlen 64  scopeid 0x20<link>
ether fa:16:3e:e3:19:f2  txqueuelen 0  (Ethernet)
RX packets 27  bytes 1526 (1.4 KiB)
RX errors 0  dropped 0  overruns 0  frame 0
TX packets 8  bytes 648 (648.0 B)
TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0

**************************
Finally run:-
**************************

[root@ip-192-169-142-52 ~(keystone_admin)]# neutron router-port-list RoutesDSA

+--------------------------------------+------+-------------------+-------------------------------------------------------------------------------------+
| id                                   | name | mac_address       | fixed_ips                                                                           |
+--------------------------------------+------+-------------------+-------------------------------------------------------------------------------------+
| 08ccc224-1e23-491a-8eec-c4db0ec00f02 |      | fa:16:3e:d4:0e:7d | {"subnet_id": "60181211-ea36-4e4e-8781-f13f743baa19", "ip_address": "10.10.32.101"} |
| f0fd3793-4e5a-467a-bd3c-e87bc9063d26 |      | fa:16:3e:a9:54:22 | {"subnet_id": "0c962484-3e48-4d86-a17f-16b0b1e5fc4d", "ip_address": "30.0.0.1"}     |
+--------------------------------------+------+-------------------+-------------------------------------------------------------------------------------+

[root@ip-192-169-142-52 ~(keystone_admin)]# neutron subnet-list | grep 0c962484-3e48-4d86-a17f-16b0b1e5fc4d
| 0c962484-3e48-4d86-a17f-16b0b1e5fc4d |               | 30.0.0.0/24   | {"start": "30.0.0.2", "end": "30.0.0.254"}       |
[root@ip-192-169-142-52 ~(keystone_admin)]# neutron subnet-list | grep 60181211-ea36-4e4e-8781-f13f743baa19
| 60181211-ea36-4e4e-8781-f13f743baa19 | sub-vlan200   | 10.10.32.0/24 | {"start": "10.10.32.100", "end": "10.10.32.200"} |

************************************
OVS Flows at br-vlan3
************************************

[root@ip-192-169-142-52 ~(keystone_admin)]# ovs-ofctl dump-flows br-vlan3 | grep NORMAL

cookie=0x0, duration=15793.182s, table=0, n_packets=33, n_bytes=2074, idle_age=14376, priority=4,in_port=2,dl_vlan=7 actions=mod_vlan_vid:200,NORMAL
cookie=0x0, duration=16442.902s, table=0, n_packets=8221, n_bytes=427492, idle_age=1, priority=0 actions=NORMAL

[root@ip-192-169-142-52 ~(keystone_admin)]# ovs-ofctl dump-flows br-vlan3 | grep NORMAL
cookie=0x0, duration=15796.300s, table=0, n_packets=33, n_bytes=2074, idle_age=14379, priority=4,in_port=2,dl_vlan=7 actions=mod_vlan_vid:200,NORMAL
cookie=0x0, duration=16446.020s, table=0, n_packets=8223, n_bytes=427596, idle_age=0, priority=0 actions=NORMAL

************************************************************
OVS flow for the {phy-br-vlan3,int-br-vlan3} patch port pair
************************************************************

[root@ip-192-169-142-52 ~(keystone_admin)]# ovs-ofctl show br-vlan3 | grep phy-br-vlan3
2(phy-br-vlan3): addr:da:e4:fb:ba:8b:1a

[root@ip-192-169-142-52 ~(keystone_admin)]# ovs-ofctl show br-int | grep int-br-vlan3
19(int-br-vlan3): addr:b2:a9:9e:89:07:1b

[root@ip-192-169-142-52 ~(keystone_admin)]# ovs-ofctl dump-ports br-vlan3 2
OFPST_PORT reply (xid=0x2): 1 ports
port  2: rx pkts=6977, bytes=304270, drop=0, errs=0, frame=0, over=0, crc=0
tx pkts=55, bytes=7037, drop=0, errs=0, coll=0

[root@ip-192-169-142-52 ~(keystone_admin)]# ovs-ofctl dump-ports br-vlan3 2

OFPST_PORT reply (xid=0x2): 1 ports
port  2: rx pkts=6979, bytes=304354, drop=0, errs=0, frame=0, over=0, crc=0
tx pkts=55, bytes=7037, drop=0, errs=0, coll=0

[root@ip-192-169-142-52 ~(keystone_admin)]# ovs-ofctl dump-ports br-vlan3 2
OFPST_PORT reply (xid=0x2): 1 ports
port  2: rx pkts=6981, bytes=304438, drop=0, errs=0, frame=0, over=0, crc=0
tx pkts=55, bytes=7037, drop=0, errs=0, coll=0
[root@ip-192-169-142-52 ~(keystone_admin)]# ovs-ofctl dump-ports br-int 19
OFPST_PORT reply (xid=0x2): 1 ports
port 19: rx pkts=55, bytes=7037, drop=0, errs=0, frame=0, over=0, crc=0
tx pkts=6991, bytes=304858, drop=0, errs=0, coll=0

[root@ip-192-169-142-52 ~(keystone_admin)]# ovs-ofctl dump-ports br-int 19
OFPST_PORT reply (xid=0x2): 1 ports
port 19: rx pkts=55, bytes=7037, drop=0, errs=0, frame=0, over=0, crc=0
tx pkts=6994, bytes=304984, drop=0, errs=0, coll=0

[root@ip-192-169-142-52 ~(keystone_admin)]# ovs-ofctl dump-ports br-int 19
OFPST_PORT reply (xid=0x2): 1 ports
port 19: rx pkts=55, bytes=7037, drop=0, errs=0, frame=0, over=0, crc=0
tx pkts=7450, bytes=324136, drop=0, errs=0, coll=0

****************************************************************
Another OVS flow test on br-int for vlan157
****************************************************************

[root@ip-192-169-142-52 ~(keystone_admin)]# ip netns exec qdhcp-4481aee1-ef86-4997-bf52-e435aafb9c20 ssh -i oskeyvls.pem cirros@10.10.10.101

$ ping -c 5 10.10.10.108

PING 10.10.10.108 (10.10.10.108): 56 data bytes
64 bytes from 10.10.10.108: seq=0 ttl=63 time=0.706 ms
64 bytes from 10.10.10.108: seq=1 ttl=63 time=0.772 ms
64 bytes from 10.10.10.108: seq=2 ttl=63 time=0.734 ms
64 bytes from 10.10.10.108: seq=3 ttl=63 time=0.740 ms
64 bytes from 10.10.10.108: seq=4 ttl=63 time=0.785 ms

--- 10.10.10.108 ping statistics ---
5 packets transmitted, 5 packets received, 0% packet loss
round-trip min/avg/max = 0.706/0.747/0.785 ms

******************************************************************************
Testing VM1<=>VM2 via floating IPs on external vlan net 10.10.10.0/24
*******************************************************************************

[root@ip-192-169-142-52 ~(keystone_admin)]# nova list --all

+--------------------------------------+--------------+----------------------------------+--------+------------+-------------+---------------------------------+
| ID                                   | Name         | Tenant ID                        | Status | Task State | Power State | Networks                        |
+--------------------------------------+--------------+----------------------------------+--------+------------+-------------+---------------------------------+
| a3d5ecf6-0fdb-4aa3-815f-171871eccb77 | CirrOSDevs01 | f16de8f8497d4f92961018ed836dee19 | ACTIVE | -          | Running     | private=40.0.0.17, 10.10.10.101 |
| 1b65f5db-d7d5-4e92-9a7c-60e7866ff8e5 | CirrOSDevs02 | f16de8f8497d4f92961018ed836dee19 | ACTIVE | -          | Running     | private=40.0.0.18, 10.10.10.110 |
| 46b7dad1-3a7d-4d94-8407-a654cca42750 | VF23Devs01   | f16de8f8497d4f92961018ed836dee19 | ACTIVE | -          | Running     | private=40.0.0.19, 10.10.10.111 |
+--------------------------------------+--------------+----------------------------------+--------+------------+-------------+---------------------------------+

[root@ip-192-169-142-52 ~(keystone_admin)]# ip netns

qdhcp-4481aee1-ef86-4997-bf52-e435aafb9c20
qdhcp-b41e4d36-9a63-4631-abb0-6436f2f50e2e
qrouter-c1900dab-447f-4f87-80e7-b4c8ca087d28

[root@ip-192-169-142-52 ~(keystone_admin)]# ip netns exec qdhcp-4481aee1-ef86-4997-bf52-e435aafb9c20 ssh cirros@10.10.10.110

The authenticity of host '10.10.10.110 (10.10.10.110)' can't be established.
RSA key fingerprint is b8:d3:ec:10:70:a7:da:d4:50:13:a8:2d:01:ba:e4:83.
Are you sure you want to continue connecting (yes/no)? yes
Warning: Permanently added '10.10.10.110' (RSA) to the list of known hosts.
cirros@10.10.10.110's password:

$ ifconfig

eth0      Link encap:Ethernet  HWaddr FA:16:3E:F1:6E:E5
inet addr:40.0.0.18  Bcast:40.0.0.255  Mask:255.255.255.0
inet6 addr: fe80::f816:3eff:fef1:6ee5/64 Scope:Link
UP BROADCAST RUNNING MULTICAST  MTU:1400  Metric:1
RX packets:367 errors:0 dropped:0 overruns:0 frame:0
TX packets:291 errors:0 dropped:0 overruns:0 carrier:0
collisions:0 txqueuelen:1000
RX bytes:36442 (35.5 KiB)  TX bytes:32019 (31.2 KiB)

lo        Link encap:Local Loopback
inet addr:127.0.0.1  Mask:255.0.0.0
inet6 addr: ::1/128 Scope:Host
UP LOOPBACK RUNNING  MTU:16436  Metric:1
RX packets:0 errors:0 dropped:0 overruns:0 frame:0
TX packets:0 errors:0 dropped:0 overruns:0 carrier:0
collisions:0 txqueuelen:0
RX bytes:0 (0.0 B)  TX bytes:0 (0.0 B)

$ curl http://169.254.169.254/latest/meta-data/public-ipv4
10.10.10.110$

$ ssh fedora@10.10.10.111
Host '10.10.10.111' is not in the trusted hosts file.
(fingerprint md5 23:c0:fb:fd:74:80:2f:12:d3:09:2f:9e:dd:19:f1:74)
Do you want to continue connecting? (y/n) y
fedora@10.10.10.111's password:
Last login: Sun Dec 13 15:52:43 2015 from 10.10.10.101
[fedora@vf23devs01 ~]$ ifconfig
eth0: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1400
inet 40.0.0.19  netmask 255.255.255.0  broadcast 40.0.0.255
inet6 fe80::f816:3eff:fea4:1a52  prefixlen 64  scopeid 0x20<link>
ether fa:16:3e:a4:1a:52  txqueuelen 1000  (Ethernet)
RX packets 283  bytes 30213 (29.5 KiB)
RX errors 0  dropped 0  overruns 0  frame 0
TX packets 303  bytes 35022 (34.2 KiB)
TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0
lo: flags=73<UP,LOOPBACK,RUNNING>  mtu 65536
inet 127.0.0.1  netmask 255.0.0.0
inet6 ::1  prefixlen 128  scopeid 0x10<host>
loop  txqueuelen 0  (Local Loopback)
RX packets 0  bytes 0 (0.0 B)
RX errors 0  dropped 0  overruns 0  frame 0
TX packets 0  bytes 0 (0.0 B)
TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0

[fedora@vf23devs01 ~]$ curl http://169.254.169.254/latest/meta-data/public-ipv4
10.10.10.111[fedora@vf23devs01 ~]$
[fedora@vf23devs01 ~]$ curl http://169.254.169.254/latest/meta-data/instance-id
i-00000009[fedora@vf23devs01 ~]$

[fedora@vf23devs01 ~]$


Storage Node (LVMiSCSI) deployment for RDO Kilo on CentOS 7.2

January 4, 2016

The RDO deployment below has been done via a straightforward RDO Kilo packstack run and demonstrates that a Storage Node might work as a traditional iSCSI Target Server, with each Compute Node actually being an iSCSI initiator client. This functionality is provided by tuning the Cinder && Glance services running on the Storage Node.
Following below is the setup for a 3-node deployment test (Controller/Network & Compute & Storage) on RDO Kilo (CentOS 7.2), performed on a Fedora 23 host with KVM/Libvirt hypervisor (32 GB RAM, Intel Core i7-4790 Haswell CPU, ASUS Z97-P). Three VMs (4 GB RAM, 4 VCPUS) have been set up: the Controller/Network VM with two VNICs (external/management subnet, VTEP's subnet), the Compute Node VM with two VNICs (management, VTEP's subnet), and the Storage Node VM with one VNIC (management).

Setup :-

192.169.142.127 - Controller/Network Node
192.169.142.137 - Compute Node
192.169.142.157 - Storage Node (LVMiSCSI)
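
The corresponding host split in the packstack answer-file would look like this; a sketch of the key entries, assuming packstack defaults elsewhere:

CONFIG_CONTROLLER_HOST=192.169.142.127
CONFIG_NETWORK_HOSTS=192.169.142.127
CONFIG_COMPUTE_HOSTS=192.169.142.137
CONFIG_STORAGE_HOST=192.169.142.157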

Deployment could be done via answer-file from https://www.linux.com/community/blogs/133-general-linux/864102-storage-node-lvmiscsi-deployment-for-rdo-liberty-on-centos-71

Notice that the Glance, Cinder and Swift services are not running on the Controller. A connection to http://StorageNode-IP:8776/v1/xxxxxx/types will be satisfied as soon as the dependencies introduced by https://review.openstack.org/192883 are satisfied on the Storage Node; otherwise it could be done only via a second run of the RDO Kilo installer, with this port (the cinder-api port) ready to respond on the Controller, which had previously been set up as the first storage node. Thanks to Javier Pena, who did this troubleshooting in https://bugzilla.redhat.com/show_bug.cgi?id=1234038. The issue has been fixed in the RDO Liberty release.
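
On the Storage Node the LVMiSCSI backend boils down to a handful of cinder.conf options; a minimal sketch for Kilo, assuming the default cinder-volumes volume group:

[DEFAULT]
enabled_backends=lvm
[lvm]
volume_driver=cinder.volume.drivers.lvm.LVMISCSIDriver
volume_group=cinder-volumes
iscsi_helper=lioadm
iscsi_ip_address=192.169.142.157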

 

Controller Node screenshot: SantiagoController1

Storage Node screenshots: SantiagoStorage1, SantiagoStorage2, SantiagoStorage3

Compute Node screenshot: SantiagoCompute1

[root@ip-192-169-142-137 ~(keystone_admin)]# iscsiadm -m session -P 3
iSCSI Transport Class version 2.0-870
version 6.2.0.873-30
Target: iqn.2010-10.org.openstack:volume-3ab60233-5110-4915-9998-7cec7d3ac919 (non-flash)
Current Portal: 192.169.142.157:3260,1
Persistent Portal: 192.169.142.157:3260,1
**********
Interface:
**********
Iface Name: default
Iface Transport: tcp
Iface Initiatorname: iqn.1994-05.com.redhat:576fc73f30e9
Iface IPaddress: 192.169.142.137
Iface HWaddress: <empty>
Iface Netdev: <empty>
SID: 1
iSCSI Connection State: LOGGED IN
iSCSI Session State: LOGGED_IN
Internal iscsid Session State: NO CHANGE
*********
Timeouts:
*********
Recovery Timeout: 120
Target Reset Timeout: 30
LUN Reset Timeout: 30
Abort Timeout: 15
*****
CHAP:
*****
username: hBbbvVmompAY6ikd8DJF
password: ********
username_in: <empty>
password_in: ********
************************
Negotiated iSCSI params:
************************
HeaderDigest: None
DataDigest: None
MaxRecvDataSegmentLength: 262144
MaxXmitDataSegmentLength: 262144
FirstBurstLength: 65536
MaxBurstLength: 262144
ImmediateData: Yes
InitialR2T: Yes
MaxOutstandingR2T: 1
************************
Attached SCSI devices:
************************
Host Number: 2 State: running
scsi2 Channel 00 Id 0 Lun: 0
Attached scsi disk sda State: running
Target: iqn.2010-10.org.openstack:volume-2087aa9a-7984-4f4e-b00d-e461fcd02099 (non-flash)
Current Portal: 192.169.142.157:3260,1
Persistent Portal: 192.169.142.157:3260,1
**********
Interface:
**********
Iface Name: default
Iface Transport: tcp
Iface Initiatorname: iqn.1994-05.com.redhat:576fc73f30e9
Iface IPaddress: 192.169.142.137
Iface HWaddress: <empty>
Iface Netdev: <empty>
SID: 2
iSCSI Connection State: LOGGED IN
iSCSI Session State: LOGGED_IN
Internal iscsid Session State: NO CHANGE
*********
Timeouts:
*********
Recovery Timeout: 120
Target Reset Timeout: 30
LUN Reset Timeout: 30
Abort Timeout: 15
*****
CHAP:
*****
username: TB8qiKbMdrWwoLBPdCTs
password: ********
username_in: <empty>
password_in: ********
************************
Negotiated iSCSI params:
************************
HeaderDigest: None
DataDigest: None
MaxRecvDataSegmentLength: 262144
MaxXmitDataSegmentLength: 262144
FirstBurstLength: 65536
MaxBurstLength: 262144
ImmediateData: Yes
InitialR2T: Yes
MaxOutstandingR2T: 1
************************
Attached SCSI devices:
************************
Host Number: 3 State: running
scsi3 Channel 00 Id 0 Lun: 0
Attached scsi disk sdb State: running


Iftop -i eth0 running on Controller vs iftop -i eth0 running on Compute

September 29, 2015

Controller 192.169.142.127
Compute nodes 192.169.142.147,192.169.142.137

Screenshot from 2015-09-29 19-32-55

Screenshot from 2015-09-29 19-33-40

VM vf22devs02 is running on second compute node 192.169.142.137

Screenshot from 2015-09-29 21-44-45


CPU Pinning and NUMA Topology on RDO Kilo on Fedora Server 22

August 1, 2015
The posting below follows up http://redhatstackblog.redhat.com/2015/05/05/cpu-pinning-and-numa-topology-awareness-in-openstack-compute/
on RDO Kilo installed on Fedora 22. After the upgrade to the upstream version of openstack-puppet-modules-2015.1.9, the procedure of RDO Kilo install on F22 changed significantly. Details follow below :-
*****************************************************************************************
RDO Kilo set up on Fedora ( openstack-puppet-modules-2015.1.9-4.fc23.noarch)
*****************************************************************************************
# dnf install -y https://rdoproject.org/repos/rdo-release.rpm
# dnf install -y openstack-packstack
Generate answer-file and make update :-
# packstack --gen-answer-file answer-file-aio.txt
and set CONFIG_KEYSTONE_SERVICE_NAME=httpd
****************************************************************************
I also commented out second line in  /etc/httpd/conf.d/mod_dnssd.conf
****************************************************************************
You might be hit by bug  https://bugzilla.redhat.com/show_bug.cgi?id=1249482
As pre-install step apply patch https://review.openstack.org/#/c/209032/
to fix neutron_api.pp. Location of puppet templates
/usr/lib/python2.7/site-packages/packstack/puppet/templates.
You might be also hit by  https://bugzilla.redhat.com/show_bug.cgi?id=1234042
Workaround is in comments 6,11
****************
Then run :-
****************

# packstack --answer-file=./answer-file-aio.txt

The final target is to reproduce the mentioned article on an i7 4790 Haswell CPU box and launch a nova instance with CPU pinning.

[root@fedora22server ~(keystone_admin)]# uname -a
Linux fedora22server.localdomain 4.1.3-200.fc22.x86_64 #1 SMP Wed Jul 22 19:51:58 UTC 2015 x86_64 x86_64 x86_64 GNU/Linux

[root@fedora22server ~(keystone_admin)]# rpm -qa \*qemu\*
qemu-system-x86-2.3.0-6.fc22.x86_64
qemu-img-2.3.0-6.fc22.x86_64
qemu-guest-agent-2.3.0-6.fc22.x86_64
qemu-kvm-2.3.0-6.fc22.x86_64
ipxe-roms-qemu-20150407-1.gitdc795b9f.fc22.noarch
qemu-common-2.3.0-6.fc22.x86_64
libvirt-daemon-driver-qemu-1.2.13.1-2.fc22.x86_64

[root@fedora22server ~(keystone_admin)]# numactl --hardware
available: 1 nodes (0)
node 0 cpus: 0 1 2 3 4 5 6 7
node 0 size: 15991 MB
node 0 free: 4399 MB
node distances:
node 0
0: 10

[root@fedora22server ~(keystone_admin)]# virsh capabilities

<capabilities>
<host>
<uuid>00fd5d2c-dad7-dd11-ad7e-7824af431b53</uuid>
<cpu>
<arch>x86_64</arch>
<model>Haswell-noTSX</model>
<vendor>Intel</vendor>
<topology sockets='1' cores='4' threads='2'/>
<feature name='invtsc'/>
<feature name='abm'/>
<feature name='pdpe1gb'/>
<feature name='rdrand'/>
<feature name='f16c'/>
<feature name='osxsave'/>
<feature name='pdcm'/>
<feature name='xtpr'/>
<feature name='tm2'/>
<feature name='est'/>
<feature name='smx'/>
<feature name='vmx'/>
<feature name='ds_cpl'/>
<feature name='monitor'/>
<feature name='dtes64'/>
<feature name='pbe'/>
<feature name='tm'/>
<feature name='ht'/>
<feature name='ss'/>
<feature name='acpi'/>
<feature name='ds'/>
<feature name='vme'/>
<pages unit='KiB' size='4'/>
<pages unit='KiB' size='2048'/>
</cpu>
<power_management>
<suspend_mem/>
<suspend_disk/>
<suspend_hybrid/>
</power_management>
<migration_features>
<live/>
<uri_transports>
<uri_transport>tcp</uri_transport>
<uri_transport>rdma</uri_transport>
</uri_transports>
</migration_features>
<topology>
<cells num='1'>
<cell id='0'>
<memory unit='KiB'>16374824</memory>
<pages unit='KiB' size='4'>4093706</pages>
<pages unit='KiB' size='2048'>0</pages>
<distances>
<sibling id='0' value='10'/>
</distances>
<cpus num='8'>
<cpu id='0' socket_id='0' core_id='0' siblings='0,4'/>
<cpu id='1' socket_id='0' core_id='1' siblings='1,5'/>
<cpu id='2' socket_id='0' core_id='2' siblings='2,6'/>
<cpu id='3' socket_id='0' core_id='3' siblings='3,7'/>
<cpu id='4' socket_id='0' core_id='0' siblings='0,4'/>
<cpu id='5' socket_id='0' core_id='1' siblings='1,5'/>
<cpu id='6' socket_id='0' core_id='2' siblings='2,6'/>
<cpu id='7' socket_id='0' core_id='3' siblings='3,7'/>
</cpus>
</cell>
</cells>
</topology>

On each Compute node where pinning of virtual machines is to be permitted, open the /etc/nova/nova.conf file and make the following modifications:

Set the vcpu_pin_set value to a list or range of logical CPU cores to reserve for virtual machine processes. OpenStack Compute will ensure guest virtual machine instances are pinned to these virtual CPU cores.
vcpu_pin_set=2,3,6,7

Set the reserved_host_memory_mb to reserve RAM for host processes. For the purposes of testing the default of 512 MB was used:
reserved_host_memory_mb=512

# systemctl restart openstack-nova-compute.service

************************************
SCHEDULER CONFIGURATION
************************************

Update /etc/nova/nova.conf

scheduler_default_filters=RetryFilter,AvailabilityZoneFilter,RamFilter,
ComputeFilter,ComputeCapabilitiesFilter,ImagePropertiesFilter,CoreFilter,
NUMATopologyFilter,AggregateInstanceExtraSpecsFilter

# systemctl restart openstack-nova-scheduler.service

At this point, when creating a guest, you may see changes appear in the XML, pinning the guest vCPU(s) to the cores listed in vcpu_pin_set:

<vcpu placement='static' cpuset='2-3,6-7'>1</vcpu>

Add to the vmlinuz grub2 command line, at the end:
isolcpus=2,3,6,7
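
One way to make the isolcpus setting persistent; a sketch assuming grub2 on a BIOS-booted Fedora box:

# vi /etc/default/grub     # append isolcpus=2,3,6,7 to GRUB_CMDLINE_LINUX
# grub2-mkconfig -o /boot/grub2/grub.cfg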

***************
REBOOT
***************
[root@fedora22server ~(keystone_admin)]# nova aggregate-create performance

+----+-------------+-------------------+-------+----------+
| Id | Name        | Availability Zone | Hosts | Metadata |
+----+-------------+-------------------+-------+----------+
| 1  | performance | -                 |       |          |
+----+-------------+-------------------+-------+----------+

[root@fedora22server ~(keystone_admin)]# nova aggregate-set-metadata 1 pinned=true
Metadata has been successfully updated for aggregate 1.
+----+-------------+-------------------+-------+---------------+
| Id | Name        | Availability Zone | Hosts | Metadata      |
+----+-------------+-------------------+-------+---------------+
| 1  | performance | -                 |       | 'pinned=true' |
+----+-------------+-------------------+-------+---------------+

[root@fedora22server ~(keystone_admin)]# nova flavor-create m1.small.performance 6 4096 20 4
+----+----------------------+-----------+------+-----------+------+-------+-------------+-----------+
| ID | Name                 | Memory_MB | Disk | Ephemeral | Swap | VCPUs | RXTX_Factor | Is_Public |
+----+----------------------+-----------+------+-----------+------+-------+-------------+-----------+
| 6  | m1.small.performance | 4096      | 20   | 0         |      | 4     | 1.0         | True      |
+----+----------------------+-----------+------+-----------+------+-------+-------------+-----------+

[root@fedora22server ~(keystone_admin)]# nova flavor-key 6 set hw:cpu_policy=dedicated
[root@fedora22server ~(keystone_admin)]# nova flavor-key 6 set aggregate_instance_extra_specs:pinned=true
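
The flavor extra specs can be double-checked before booting; e.g.:

[root@fedora22server ~(keystone_admin)]# nova flavor-show m1.small.performance | grep extra_specs
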
[root@fedora22server ~(keystone_admin)]# hostname
fedora22server.localdomain

[root@fedora22server ~(keystone_admin)]# nova aggregate-add-host 1 fedora22server.localdomain
Host fedora22server.localdomain has been successfully added for aggregate 1
+----+-------------+-------------------+------------------------------+---------------+
| Id | Name        | Availability Zone | Hosts                        | Metadata      |
+----+-------------+-------------------+------------------------------+---------------+
| 1  | performance | -                 | 'fedora22server.localdomain' | 'pinned=true' |
+----+-------------+-------------------+------------------------------+---------------+

[root@fedora22server ~(keystone_admin)]# . keystonerc_demo
[root@fedora22server ~(keystone_demo)]# glance image-list
+--------------------------------------+---------------------------------+-------------+------------------+-------------+--------+
| ID                                   | Name                            | Disk Format | Container Format | Size        | Status |
+--------------------------------------+---------------------------------+-------------+------------------+-------------+--------+
| bf6f5272-ae26-49ae-b0f9-3c4fcba350f6 | CentOS71Image                   | qcow2       | bare             | 1004994560  | active |
| 05ac955e-3503-4bcf-8413-6a1b3c98aefa | cirros                          | qcow2       | bare             | 13200896    | active |
| 7b2085b8-4fe7-4d32-a5c9-5eadaf8bfc52 | VF22Image                       | qcow2       | bare             | 228599296   | active |
| c695e7fa-a69f-4220-abd8-2269b75af827 | Windows Server 2012 R2 Std Eval | qcow2       | bare             | 17182752768 | active |
+--------------------------------------+---------------------------------+-------------+------------------+-------------+--------+

[root@fedora22server ~(keystone_demo)]# neutron net-list

+--------------------------------------+----------+-----------------------------------------------------+
| id                                   | name     | subnets                                             |
+--------------------------------------+----------+-----------------------------------------------------+
| 0daa3a02-c598-4c46-b1ac-368da5542927 | public   | 8303b2f3-2de2-44c2-bd5e-fc0966daec53 192.168.1.0/24 |
| c85a4215-1558-4a95-886d-a2f75500e052 | demo_net | 0cab6cbc-dd80-42c6-8512-74d7b2cbf730 50.0.0.0/24    |
+--------------------------------------+----------+-----------------------------------------------------+

*************************************************************************
At this point attempt to launch F22 Cloud instance with created flavor
m1.small.performance
*************************************************************************

[root@fedora22server ~(keystone_demo)]# nova boot --image 7b2085b8-4fe7-4d32-a5c9-5eadaf8bfc52 --key-name oskeydev --flavor m1.small.performance --nic net-id=c85a4215-1558-4a95-886d-a2f75500e052 vf22-instance

+--------------------------------------+--------------------------------------------------+
| Property                             | Value                                            |
+--------------------------------------+--------------------------------------------------+
| OS-DCF:diskConfig                    | MANUAL                                           |
| OS-EXT-AZ:availability_zone          | nova                                             |
| OS-EXT-STS:power_state               | 0                                                |
| OS-EXT-STS:task_state                | scheduling                                       |
| OS-EXT-STS:vm_state                  | building                                         |
| OS-SRV-USG:launched_at               | -                                                |
| OS-SRV-USG:terminated_at             | -                                                |
| accessIPv4                           |                                                  |
| accessIPv6                           |                                                  |
| adminPass                            | XsGr87ZLGX8P                                     |
| config_drive                         |                                                  |
| created                              | 2015-07-31T08:03:49Z                             |
| flavor                               | m1.small.performance (6)                         |
| hostId                               |                                                  |
| id                                   | 4b99f3cf-3126-48f3-9e00-94787f040e43             |
| image                                | VF22Image (7b2085b8-4fe7-4d32-a5c9-5eadaf8bfc52) |
| key_name                             | oskeydev                                         |
| metadata                             | {}                                               |
| name                                 | vf22-instance                                    |
| os-extended-volumes:volumes_attached | []                                               |
| progress                             | 0                                                |
| security_groups                      | default                                          |
| status                               | BUILD                                            |
| tenant_id                            | 14f736e6952644b584b2006353ca51be                 |
| updated                              | 2015-07-31T08:03:50Z                             |
| user_id                              | 4ece2385b17a4490b6fc5a01ff53350c                 |
+--------------------------------------+--------------------------------------------------+

[root@fedora22server ~(keystone_demo)]# nova list

+--------------------------------------+---------------+---------+------------+-------------+-----------------------------------+
| ID                                   | Name          | Status  | Task State | Power State | Networks                          |
+--------------------------------------+---------------+---------+------------+-------------+-----------------------------------+
| 93906a61-ec0b-481d-b964-2bb99d095646 | CentOS71RLX   | SHUTOFF | -          | Shutdown    | demo_net=50.0.0.21, 192.168.1.159 |
| ac7e9be5-d2dc-4ec0-b0a1-4096b552e578 | VF22Devpin    | ACTIVE  | -          | Running     | demo_net=50.0.0.22                |
| b93c9526-ded5-4b7a-ae3a-106b34317744 | VF22Devs      | SHUTOFF | -          | Shutdown    | demo_net=50.0.0.19, 192.168.1.157 |
| bef20a1e-3faa-4726-a301-73ca49666fa6 | WinSrv2012    | SHUTOFF | -          | Shutdown    | demo_net=50.0.0.16                |
| 4b99f3cf-3126-48f3-9e00-94787f040e43 | vf22-instance | ACTIVE  | -          | Running     | demo_net=50.0.0.23, 192.168.1.160 |
+--------------------------------------+---------------+---------+------------+-------------+-----------------------------------+

[root@fedora22server ~(keystone_demo)]# virsh list

 Id    Name                 State
----------------------------------------
 2     instance-0000000c    running
 3     instance-0000000d    running

Please see http://redhatstackblog.redhat.com/2015/05/05/cpu-pinning-and-numa-topology-awareness-in-openstack-compute/
regarding a detailed explanation of the highlighted blocks, keeping in mind that pinning is done to logical CPU cores (not physical, due to a 4-core CPU with HT enabled). Multiple cells are also absent, due to limitations of the i7 47XX Haswell CPU architecture.

[root@fedora22server ~(keystone_demo)]# virsh dumpxml instance-0000000d > vf22-instance.xml

<domain type='kvm' id='3'>
  <name>instance-0000000d</name>
  <uuid>4b99f3cf-3126-48f3-9e00-94787f040e43</uuid>
  <metadata>
    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.0">
      <nova:package version="2015.1.0-3.fc23"/>
      <nova:name>vf22-instance</nova:name>
      <nova:creationTime>2015-07-31 08:03:54</nova:creationTime>
      <nova:flavor name="m1.small.performance">
        <nova:memory>4096</nova:memory>
        <nova:disk>20</nova:disk>
        <nova:swap>0</nova:swap>
        <nova:ephemeral>0</nova:ephemeral>
        <nova:vcpus>4</nova:vcpus>
      </nova:flavor>
      <nova:owner>
        <nova:user uuid="4ece2385b17a4490b6fc5a01ff53350c">demo</nova:user>
        <nova:project uuid="14f736e6952644b584b2006353ca51be">demo</nova:project>
      </nova:owner>
      <nova:root type="image" uuid="7b2085b8-4fe7-4d32-a5c9-5eadaf8bfc52"/>
    </nova:instance>
  </metadata>
  <memory unit='KiB'>4194304</memory>
  <currentMemory unit='KiB'>4194304</currentMemory>
  <vcpu placement='static'>4</vcpu>
  <cputune>
    <shares>4096</shares>
    <vcpupin vcpu='0' cpuset='2'/>
    <vcpupin vcpu='1' cpuset='6'/>
    <vcpupin vcpu='2' cpuset='3'/>
    <vcpupin vcpu='3' cpuset='7'/>
    <emulatorpin cpuset='2-3,6-7'/>
  </cputune>
  <numatune>
    <memory mode='strict' nodeset='0'/>
    <memnode cellid='0' mode='strict' nodeset='0'/>
  </numatune>
  <resource>
    <partition>/machine</partition>
  </resource>
  <sysinfo type='smbios'>
    <system>
      <entry name='manufacturer'>Fedora Project</entry>
      <entry name='product'>OpenStack Nova</entry>
      <entry name='version'>2015.1.0-3.fc23</entry>
      <entry name='serial'>f1b336b1-6abf-4180-865a-b6be5670352e</entry>
      <entry name='uuid'>4b99f3cf-3126-48f3-9e00-94787f040e43</entry>
    </system>
  </sysinfo>
  <os>
    <type arch='x86_64' machine='pc-i440fx-2.3'>hvm</type>
    <boot dev='hd'/>
    <smbios mode='sysinfo'/>
  </os>
  <features>
    <acpi/>
    <apic/>
  </features>
  <cpu mode='host-model'>
    <model fallback='allow'/>
    <topology sockets='2' cores='1' threads='2'/>
    <numa>
      <cell id='0' cpus='0-3' memory='4194304' unit='KiB'/>
    </numa>
  </cpu>
  <clock offset='utc'>
    <timer name='pit' tickpolicy='delay'/>
    <timer name='rtc' tickpolicy='catchup'/>
    <timer name='hpet' present='no'/>
  </clock>
  <on_poweroff>destroy</on_poweroff>
  <on_reboot>restart</on_reboot>
  <on_crash>destroy</on_crash>
  <devices>
    <emulator>/usr/bin/qemu-kvm</emulator>
    <disk type='file' device='disk'>
      <driver name='qemu' type='qcow2' cache='none'/>
      <source file='/var/lib/nova/instances/4b99f3cf-3126-48f3-9e00-94787f040e43/disk'/>
      <backingStore type='file' index='1'>
        <format type='raw'/>
        <source file='/var/lib/nova/instances/_base/6c60a5ed1b3037bbdb2bed198dac944f4c0d09cb'/>
        <backingStore/>
      </backingStore>
      <target dev='vda' bus='virtio'/>
      <alias name='virtio-disk0'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x06' function='0x0'/>
    </disk>
    <controller type='usb' index='0'>
      <alias name='usb0'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x2'/>
    </controller>
    <controller type='pci' index='0' model='pci-root'>
      <alias name='pci.0'/>
    </controller>
    <controller type='virtio-serial' index='0'>
      <alias name='virtio-serial0'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x05' function='0x0'/>
    </controller>
    <interface type='bridge'>
      <mac address='fa:16:3e:4f:25:03'/>
      <source bridge='qbr567b21fe-52'/>
      <target dev='tap567b21fe-52'/>
      <model type='virtio'/>
      <alias name='net0'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x0'/>
    </interface>
    <serial type='file'>
      <source path='/var/lib/nova/instances/4b99f3cf-3126-48f3-9e00-94787f040e43/console.log'/>
      <target port='0'/>
      <alias name='serial0'/>
    </serial>
    <serial type='pty'>
      <source path='/dev/pts/2'/>
      <target port='1'/>
      <alias name='serial1'/>
    </serial>
    <console type='file'>
      <source path='/var/lib/nova/instances/4b99f3cf-3126-48f3-9e00-94787f040e43/console.log'/>
      <target type='serial' port='0'/>
      <alias name='serial0'/>
    </console>
    <channel type='spicevmc'>
      <target type='virtio' name='com.redhat.spice.0' state='disconnected'/>
      <alias name='channel0'/>
      <address type='virtio-serial' controller='0' bus='0' port='1'/>
    </channel>
    <input type='mouse' bus='ps2'/>
    <input type='keyboard' bus='ps2'/>
    <graphics type='spice' port='5901' autoport='yes' listen='0.0.0.0' keymap='en-us'>
      <listen type='address' address='0.0.0.0'/>
    </graphics>
    <sound model='ich6'>
      <alias name='sound0'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x0'/>
    </sound>
    <video>
      <model type='qxl' ram='65536' vram='65536' vgamem='16384' heads='1'/>
      <alias name='video0'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x0'/>
    </video>
    <memballoon model='virtio'>
      <alias name='balloon0'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x07' function='0x0'/>
      <stats period='10'/>
    </memballoon>
  </devices>
  <seclabel type='dynamic' model='selinux' relabel='yes'>
    <label>system_u:system_r:svirt_t:s0:c359,c706</label>
    <imagelabel>system_u:object_r:svirt_image_t:s0:c359,c706</imagelabel>
  </seclabel>
</domain>
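
The pinning can also be cross-checked from the host side against the cputune block above; when given no cpulist argument, virsh simply queries the per-vCPU and emulator affinity:

# virsh vcpupin instance-0000000d
# virsh emulatorpin instance-0000000d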

Switching to Dashboard Spice Console in RDO Kilo on Fedora 22

July 3, 2015

*************************
UPDATE 06/27/2015
*************************
# dnf install -y https://rdoproject.org/repos/rdo-release.rpm
# dnf install -y openstack-packstack
# dnf install fedora-repos-rawhide
# dnf --enablerepo=rawhide update openstack-packstack
Fedora – Rawhide – Developmental packages for the next Fedora re 1.7 MB/s |  45 MB     00:27
Last metadata expiration check performed 0:00:39 ago on Sat Jun 27 13:23:03 2015.
Dependencies resolved.
==============================================================
Package                       Arch      Version                                Repository  Size
==============================================================
Upgrading:
openstack-packstack           noarch    2015.1-0.7.dev1577.gc9f8c3c.fc23       rawhide    233 k
openstack-packstack-puppet    noarch    2015.1-0.7.dev1577.gc9f8c3c.fc23       rawhide     233 k
Transaction Summary
==============================================================
Upgrade  2 Packages
.  .  .  .  .
# dnf install python3-pyOpenSSL.noarch
At this point run :-
# packstack --gen-answer-file answer-file-aio.txt
and set
CONFIG_KEYSTONE_SERVICE_NAME=httpd
I also commented out the second line in /etc/httpd/conf.d/mod_dnssd.conf.
Then run `packstack --answer-file=./answer-file-aio.txt`; however, you will still need to pre-patch provision_demo.pp at the moment
( see the third patch at http://textuploader.com/yn0v ), the rest should work fine.

Upon completion you may try to follow :-
https://www.rdoproject.org/Neutron_with_existing_external_network

I didn't test that guide on Fedora 22; I just created external and private networks of VXLAN type (see the sketch below) and configured br-ex as follows.
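
A minimal sketch of that step with the Kilo-era neutron CLI (the names public, demo_net and router1 plus the allocation pool are illustrative assumptions; the 192.168.1.0/24 range matches the br-ex configuration below):

# neutron net-create public --router:external
# neutron subnet-create --name public_subnet --disable-dhcp --allocation-pool start=192.168.1.100,end=192.168.1.150 --gateway 192.168.1.1 public 192.168.1.0/24
# neutron net-create demo_net
# neutron subnet-create --name demo_subnet demo_net 50.0.0.0/24
# neutron router-create router1
# neutron router-gateway-set router1 public
# neutron router-interface-add router1 demo_subnet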
 
[root@ServerFedora22 network-scripts(keystone_admin)]# cat ifcfg-br-ex
DEVICE="br-ex"
BOOTPROTO="static"
IPADDR="192.168.1.32"
NETMASK="255.255.255.0"
DNS1="8.8.8.8"
BROADCAST="192.168.1.255"
GATEWAY="192.168.1.1"
NM_CONTROLLED="no"
TYPE="OVSIntPort"
OVS_BRIDGE=br-ex
DEVICETYPE="ovs"
DEFROUTE="yes"
IPV4_FAILURE_FATAL="yes"
IPV6INIT=no

[root@ServerFedora22 network-scripts(keystone_admin)]# cat ifcfg-enp2s0
DEVICE="enp2s0"
ONBOOT="yes"
HWADDR="90:E6:BA:2D:11:EB"
TYPE="OVSPort"
DEVICETYPE="ovs"
OVS_BRIDGE=br-ex
NM_CONTROLLED=no
IPV6INIT=no

When the configuration above is done :-

# chkconfig network on
# systemctl stop NetworkManager
# systemctl disable NetworkManager
# reboot
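
After the reboot it is worth verifying that enp2s0 really became an OVS port of br-ex and that br-ex picked up the 192.168.1.32 address (assuming the device names above):

# ovs-vsctl list-ports br-ex
# ip addr show br-ex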

*************************
UPDATE 06/26/2015
*************************

To install RDO Kilo on Fedora 22 :-
after `dnf -y install openstack-packstack`
# cd /usr/lib/python2.7/site-packages/packstack/puppet/templates
Then apply the following 3 patches
# cd ; packstack --gen-answer-file answer-file-aio.txt
Set "CONFIG_NAGIOS_INSTALL=n" in answer-file-aio.txt
# packstack --answer-file=./answer-file-aio.txt

************************
UPDATE 05/19/2015
************************
MATE Desktop supports sound (via the patch mentioned below) on RDO Kilo cloud instances F22, F21, F20; the RDO Kilo AIO install was performed on bare metal.
Also, a Windows Server 2012 (evaluation version) cloud VM provides pretty stable video/sound ( http://www.cloudbase.it/windows-cloud-images/ ).

************************
UPDATE 05/14/2015
************************
I've got sound working on a CentOS 7 VM (console connection via virt-manager) with a slightly updated patch by Y.Kawada, self.type set to "ich6"; RDO Kilo was installed on a bare-metal AIO testing host running Fedora 22. The same results have been obtained for RDO Kilo on CentOS 7.1. However, a connection to the spice console with cut-and-paste and sound enabled may also be obtained via spicy (remote connection), as sketched below.
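
For example, a remote spicy session could look like this (a sketch: on Fedora the spicy client ships in the spice-gtk-tools package, 192.169.142.137 is the Compute Node management IP used later in this post, and the port must match the graphics element of the running domain, e.g. 5901 above):

# dnf -y install spice-gtk-tools
# spicy -h 192.169.142.137 -p 5901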

Generated libvirt.xml

<domain type="kvm">
  <uuid>455877f2-7070-48a7-bb24-e0702be2fbc5</uuid>
  <name>instance-00000003</name>
  <memory>2097152</memory>
  <vcpu cpuset="0-7">1</vcpu>
  <metadata>
    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.0">
      <nova:package version="2015.1.0-3.el7"/>
      <nova:name>CentOS7RSX05</nova:name>
      <nova:creationTime>2015-06-14 18:42:11</nova:creationTime>
      <nova:flavor name="m1.small">
        <nova:memory>2048</nova:memory>
        <nova:disk>20</nova:disk>
        <nova:swap>0</nova:swap>
        <nova:ephemeral>0</nova:ephemeral>
        <nova:vcpus>1</nova:vcpus>
      </nova:flavor>
      <nova:owner>
        <nova:user uuid="da79d2c66db747eab942bdbe20bb3f44">demo</nova:user>
        <nova:project uuid="8c9defac20a74633af4bb4773e45f11e">demo</nova:project>
      </nova:owner>
      <nova:root type="image" uuid="4a2d708c-7624-439f-9e7e-6e133062e23a"/>
    </nova:instance>
  </metadata>
  <sysinfo type="smbios">
    <system>
      <entry name="manufacturer">Fedora Project</entry>
      <entry name="product">OpenStack Nova</entry>
      <entry name="version">2015.1.0-3.el7</entry>
      <entry name="serial">b3fae7c3-10bd-455b-88b7-95e586342203</entry>
      <entry name="uuid">455877f2-7070-48a7-bb24-e0702be2fbc5</entry>
    </system>
  </sysinfo>
  <os>
    <type>hvm</type>
    <boot dev="hd"/>
    <smbios mode="sysinfo"/>
  </os>
  <features>
    <acpi/>
    <apic/>
  </features>
  <cputune>
    <shares>1024</shares>
  </cputune>
  <clock offset="utc">
    <timer name="pit" tickpolicy="delay"/>
    <timer name="rtc" tickpolicy="catchup"/>
    <timer name="hpet" present="no"/>
  </clock>
  <cpu mode="host-model" match="exact">
    <topology sockets="1" cores="1" threads="1"/>
  </cpu>
  <devices>
    <disk type="file" device="disk">
      <driver name="qemu" type="qcow2" cache="none"/>
      <source file="/var/lib/nova/instances/455877f2-7070-48a7-bb24-e0702be2fbc5/disk"/>
      <target bus="virtio" dev="vda"/>
    </disk>
    <interface type="bridge">
      <mac address="fa:16:3e:87:4b:29"/>
      <model type="virtio"/>
      <source bridge="qbr8ce9ae7b-f0"/>
      <target dev="tap8ce9ae7b-f0"/>
    </interface>
    <serial type="file">
      <source path="/var/lib/nova/instances/455877f2-7070-48a7-bb24-e0702be2fbc5/console.log"/>
    </serial>
    <serial type="pty"/>
    <channel type="spicevmc">
      <target type="virtio" name="com.redhat.spice.0"/>
    </channel>
    <graphics type="spice" autoport="yes" keymap="en-us" listen="0.0.0.0"/>
    <video>
      <model type="qxl"/>
    </video>
    <sound model="ich6"/>
    <memballoon model="virtio">
      <stats period="10"/>
    </memballoon>
  </devices>
</domain>

*****************
END UPDATE
*****************
This post follows up http://lxer.com/module/newswire/view/214893/index.html
The most recent `yum update` on F22 significantly improved network performance on cloud VMs (L2). Watching movies on a cloud F22 VM (with MATE Desktop installed and functioning pretty smoothly) without sound brings back old spice memories; see https://bugzilla.redhat.com/show_bug.cgi?format=multiple&id=913607

# dnf -y install spice-html5 (installed on Controller && Compute)
# dnf -y install openstack-nova-spicehtml5proxy (Compute Node)
# rpm -qa | grep openstack-nova-spicehtml5proxy
openstack-nova-spicehtml5proxy-2015.1.0-3.fc23.noarch

***********************************************************************
Update /etc/nova/nova.conf on Controller &amp;&amp; Compute Node as follows :-
***********************************************************************

[DEFAULT]
. . . . .
web=/usr/share/spice-html5
. . . . . .
spicehtml5proxy_host=0.0.0.0  (only Compute)
spicehtml5proxy_port=6082     (only Compute)
. . . . . . .
# Disable VNC
vnc_enabled=false
. . . . . . .
[spice]

# Compute Node Management IP 192.169.142.137
html5proxy_base_url=http://192.169.142.137:6082/spice_auto.html
server_proxyclient_address=127.0.0.1 ( only  Compute )
server_listen=0.0.0.0 ( only  Compute )
enabled=true
agent_enabled=true
keymap=en-us

:wq
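
If you prefer not to edit nova.conf by hand, the same settings can be applied with openstack-config (a sketch, assuming the openstack-utils package is installed; the Compute-only values such as spicehtml5proxy_host/port, server_listen and server_proxyclient_address follow the same pattern):

# openstack-config --set /etc/nova/nova.conf DEFAULT web /usr/share/spice-html5
# openstack-config --set /etc/nova/nova.conf DEFAULT vnc_enabled false
# openstack-config --set /etc/nova/nova.conf spice html5proxy_base_url http://192.169.142.137:6082/spice_auto.html
# openstack-config --set /etc/nova/nova.conf spice enabled true
# openstack-config --set /etc/nova/nova.conf spice agent_enabled true
# openstack-config --set /etc/nova/nova.conf spice keymap en-us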

# service httpd restart ( on Controller )
The next actions are to be performed on the Compute Node :-

# service openstack-nova-compute restart
# service openstack-nova-spicehtml5proxy start
# systemctl enable openstack-nova-spicehtml5proxy
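
The proxy should now be listening on port 6082 on the Compute Node; a quick sanity check:

# ss -lntp | grep 6082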

On Controller

[root@ip-192-169-142-127 ~(keystone_admin)]# nova list --all-tenants
+--------------------------------------+-----------+----------------------------------+---------+------------+-------------+----------------------------------+
| ID                                   | Name      | Tenant ID                        | Status  | Task State | Power State | Networks                         |
+--------------------------------------+-----------+----------------------------------+---------+------------+-------------+----------------------------------+
| 6c8ef008-e8e0-4f1c-af17-b5f846f8b2d9 | CirrOSDev | 7e5a0f3ec3fe45dc83ae0947ef52adc3 | SHUTOFF | -          | Shutdown    | demo_net=50.0.0.11, 172.24.4.228 |
| cfd735ea-d9a8-4c4e-9a77-03035f01d443 | VF22DEVS  | 7e5a0f3ec3fe45dc83ae0947ef52adc3 | ACTIVE  | -          | Running     | demo_net=50.0.0.14, 172.24.4.231 |
+--------------------------------------+-----------+----------------------------------+---------+------------+-------------+----------------------------------+
[root@ip-192-169-142-127 ~(keystone_admin)]# nova get-spice-console cfd735ea-d9a8-4c4e-9a77-03035f01d443 spice-html5
+-------------+-----------------------------------------------------------------------------------------+
| Type        | Url                                                                                     |
+-------------+-----------------------------------------------------------------------------------------+
| spice-html5 | http://192.169.142.137:6082/spice_auto.html?token=24fb65c7-e7e9-4727-bad3-ba7c2c29f7f4 |
+-------------+-----------------------------------------------------------------------------------------+

A session run via virt-manager on the virtualization host (F22) confirms that the connection to Compute Node 192.169.142.137 has been activated.


Once again about pros/cons of Systemd and Upstart

May 16, 2015

Upstart advantages.

1. Upstart is easier to port to systems other than Linux, while systemd is rigidly tied to Linux kernel capabilities. Adapting Upstart to work on Debian GNU/kFreeBSD and Debian GNU/Hurd looks like quite a realistic task, which cannot be said of systemd.

2. Upstart is more familiar to the Debian developers, many of whom also participate in the development of Ubuntu. Two members of the Debian Technical Committee (Steve Langasek and Colin Watson) are part of the Upstart development team.

3. Upstart is simpler and more lightweight than systemd; as a result, less code means fewer bugs, and Upstart is better suited for integration with the code of system daemons. The policy of systemd amounts to daemon authors having to adapt to upstream (providing a compatible analog at the level of the external interface in order to replace a systemd component), instead of upstream providing convenient means for daemon developers.

4. Upstart is simpler in terms of maintenance and packaging, and the community of Upstart developers is more open to collaboration. In the case of systemd, one has to take the systemd approaches for granted and follow them, for example, supporting a separate "/usr" partition or using only absolute paths for startup. The shortcomings of Upstart fall into the category of fixable problems; in its current state Upstart is already completely ready for use in Debian 8.0 (Jessie).

5. Upstart offers a more familiar model for defining service configuration, unlike systemd, where settings in /etc override the base settings of units defined in the /lib hierarchy. Using Upstart would maintain healthy competition, which would promote the development of different approaches and keep developers in good shape.

Systemd advantages

1. Without a substantial rework of its architecture, Upstart will not be able to catch up with systemd in functionality (for example, systemd's inverted model of dependency startup: instead of starting all required dependencies when a given service starts, a service in Upstart is started upon receipt of an event signalling that its dependencies have become available).

2. The use of ptrace interferes with applying Upstart jobs to daemons such as avahi, apache and postfix; Upstart lacks the ability to activate a service only upon an actual connection to a socket, rather than by indirect signs such as a dependency on the activation of another socket; and it lacks reliable tracking of the state of spawned processes.

3. Systemd contains a fairly self-sufficient set of components, which allows attention to be focused on fixing problems rather than on extending an Upstart configuration to capabilities already present in systemd. For example, Upstart lacks: support for detailed status reporting and logging of daemon operation, multiple socket activation, socket activation for IPv6 and UDP, and a flexible mechanism for resource limits.

4. Using systemd makes it possible to bring different distributions closer together and unify their management tools. Systemd has already been adopted by RHEL 7.X, CentOS 7.X, Fedora, openSUSE, Sabayon, Mandriva and Arch Linux.

5. Systemd has a more active, larger and more versatile community of developers, which includes engineers from the SUSE and Red Hat companies. When using Upstart, a distribution becomes dependent on Canonical, without whose support Upstart would be left without developers and doomed to stagnation. Participation in Upstart development requires signing an agreement transferring property rights to the Canonical company. The Red Hat company decided on the replacement of Upstart by systemd for good reason, and the Debian project has already been compelled to migrate to systemd. Implementing some boot capabilities in Upstart requires fragments of shell scripts, which makes the initialization process less reliable and more labor-intensive to debug.

6. Systemd support is implemented in GNOME and KDE, which make increasingly active use of systemd's capabilities (for example, the means for managing user sessions and starting each application in a separate cgroup). GNOME continues to be positioned as the main desktop environment of Debian, but relations between the Ubuntu/Upstart and GNOME projects have clearly been tense.

References

http://www.opennet.ru/opennews/art.shtml?num=38762


Just to comment

February 19, 2015


