RDO IceHouse Setup Two Node (Controller+Compute) Neutron ML2&OVS&VLAN Cluster on Fedora 20

June 22, 2014

Two KVM guests have been created, each with two virtual NICs (eth0, eth1), for the Controller and Compute node setup. Before running `packstack --answer-file=TwoNodeML2&OVS&VLAN.txt`, SELinux was set to permissive on both nodes. Both eth1 interfaces were assigned IPs from the VLAN libvirt subnet before installation and set to promiscuous mode (192.168.122.127, 192.168.122.137). Packstack binds to the public IP on eth0: 192.169.142.127 on the Controller, 192.169.142.137 on the Compute node.
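The pre-install preparation described above can be sketched as follows. This is a minimal sketch, assuming the interface name eth1 and the use of rc.local for persistence (both are assumptions, adjust for your hosts):

```shell
# Put SELinux into permissive mode now and across reboots
setenforce 0
sed -i 's/^SELINUX=.*/SELINUX=permissive/' /etc/selinux/config

# Enable promiscuous mode on the VLAN/tunnel interface; appending to
# /etc/rc.d/rc.local (a hypothetical persistence choice) makes it
# survive reboots
ip link set dev eth1 promisc on
echo 'ip link set dev eth1 promisc on' >> /etc/rc.d/rc.local
```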

The answer file used by packstack is here: http://textuploader.com/k9xo

[root@ip-192-169-142-127 ~(keystone_admin)]# openstack-status
== Nova services ==
openstack-nova-api:                     active
openstack-nova-cert:                    active
openstack-nova-compute:                 inactive  (disabled on boot)
openstack-nova-network:                 inactive  (disabled on boot)
openstack-nova-scheduler:               active
openstack-nova-volume:                  inactive  (disabled on boot)
openstack-nova-conductor:               active
== Glance services ==
openstack-glance-api:                   active
openstack-glance-registry:              active
== Keystone service ==
openstack-keystone:                     active
== Horizon service ==
openstack-dashboard:                    active
== neutron services ==
neutron-server:                         active
neutron-dhcp-agent:                     active
neutron-l3-agent:                       active
neutron-metadata-agent:                 active
neutron-lbaas-agent:                    inactive  (disabled on boot)
neutron-openvswitch-agent:              active
neutron-linuxbridge-agent:              inactive  (disabled on boot)
neutron-ryu-agent:                      inactive  (disabled on boot)
neutron-nec-agent:                      inactive  (disabled on boot)
neutron-mlnx-agent:                     inactive  (disabled on boot)
== Cinder services ==
openstack-cinder-api:                   active
openstack-cinder-scheduler:             active
openstack-cinder-volume:                active
openstack-cinder-backup:                inactive  (disabled on boot)
== Ceilometer services ==
openstack-ceilometer-api:               failed
openstack-ceilometer-central:           active
openstack-ceilometer-compute:           inactive  (disabled on boot)
openstack-ceilometer-collector:         active
openstack-ceilometer-alarm-notifier:    active
openstack-ceilometer-alarm-evaluator:   active
== Support services ==
openvswitch:                            active
dbus:                                   active
tgtd:                                   active
rabbitmq-server:                        active
memcached:                              active
== Keystone users ==
+----------------------------------+------------+---------+----------------------+
|                id                |    name    | enabled |        email         |
+----------------------------------+------------+---------+----------------------+
| 42ceb5a601b041f0a5669868dd7f7663 |   admin    |   True  |    test@test.com     |
| d602599e69904691a6094d86f07b6121 | ceilometer |   True  | ceilometer@localhost |
| cc11c36f6e9a4bb7b050db7a380a51db |   cinder   |   True  |   cinder@localhost   |
| c3b1e25936a241bfa63c791346f179fc |   glance   |   True  |   glance@localhost   |
| d2bfcd4e6fc44478899b0a2544df0b00 |  neutron   |   True  |  neutron@localhost   |
| 3d572a8e32b94ac09dd3318cd84fd932 |    nova    |   True  |    nova@localhost    |
+----------------------------------+------------+---------+----------------------+
== Glance images ==
+--------------------------------------+-----------------+-------------+------------------+-----------+--------+
| ID                                   | Name            | Disk Format | Container Format | Size      | Status |
+--------------------------------------+-----------------+-------------+------------------+-----------+--------+
| 898a4245-d191-46b8-ac87-e0f1e1873cb1 | CirrOS31        | qcow2       | bare             | 13147648  | active |
| c4647c90-5160-48b1-8b26-dba69381b6fa | Ubuntu 06/18/14 | qcow2       | bare             | 254149120 | active |
+--------------------------------------+-----------------+-------------+------------------+-----------+--------+
== Nova managed services ==
+------------------+----------------------------------------+----------+---------+-------+----------------------------+-----------------+
| Binary           | Host                                   | Zone     | Status  | State | Updated_at                 | Disabled Reason |
+------------------+----------------------------------------+----------+---------+-------+----------------------------+-----------------+
| nova-consoleauth | ip-192-169-142-127.ip.secureserver.net | internal | enabled | up    | 2014-06-22T10:39:20.000000 | -               |
| nova-scheduler   | ip-192-169-142-127.ip.secureserver.net | internal | enabled | up    | 2014-06-22T10:39:21.000000 | -               |
| nova-conductor   | ip-192-169-142-127.ip.secureserver.net | internal | enabled | up    | 2014-06-22T10:39:23.000000 | -               |
| nova-cert        | ip-192-169-142-127.ip.secureserver.net | internal | enabled | up    | 2014-06-22T10:39:20.000000 | -               |
| nova-compute     | ip-192-169-142-137.ip.secureserver.net | nova     | enabled | up    | 2014-06-22T10:39:23.000000 | -               |
+------------------+----------------------------------------+----------+---------+-------+----------------------------+-----------------+
== Nova networks ==
+--------------------------------------+---------+------+
| ID                                   | Label   | Cidr |
+--------------------------------------+---------+------+
| 577b7ba7-adad-4051-a03f-787eb8bd55f6 | public  | -    |
| 70298098-a022-4a6b-841f-cef13524d86f | private | -    |
| 7459c84b-b460-4da2-8f24-e0c840be2637 | int     | -    |
+--------------------------------------+---------+------+
== Nova instance flavors ==
+----+-----------+-----------+------+-----------+------+-------+-------------+-----------+
| ID | Name      | Memory_MB | Disk | Ephemeral | Swap | VCPUs | RXTX_Factor | Is_Public |
+----+-----------+-----------+------+-----------+------+-------+-------------+-----------+
| 1  | m1.tiny   | 512       | 1    | 0         |      | 1     | 1.0         | True      |
| 2  | m1.small  | 2048      | 20   | 0         |      | 1     | 1.0         | True      |
| 3  | m1.medium | 4096      | 40   | 0         |      | 2     | 1.0         | True      |
| 4  | m1.large  | 8192      | 80   | 0         |      | 4     | 1.0         | True      |
| 5  | m1.xlarge | 16384     | 160  | 0         |      | 8     | 1.0         | True      |
+----+-----------+-----------+------+-----------+------+-------+-------------+-----------+
== Nova instances ==
+--------------------------------------+-------------+-----------+------------+-------------+------------------------------------+
| ID                                   | Name        | Status    | Task State | Power State | Networks                           |
+--------------------------------------+-------------+-----------+------------+-------------+------------------------------------+
| 388bbe10-87b2-40e5-a6ee-b87b05116d51 | CirrOS445   | ACTIVE    | -          | Running     | private=30.0.0.14, 192.169.142.155 |
| 4d380c79-3213-45c0-8e4c-cef2dd19836d | UbuntuSRV01 | SUSPENDED | -          | Shutdown    | private=30.0.0.13, 192.169.142.154 |
+--------------------------------------+-------------+-----------+------------+-------------+------------------------------------+

[root@ip-192-169-142-127 ~(keystone_admin)]# nova-manage service list
Binary           Host                                 Zone             Status     State Updated_At
nova-consoleauth ip-192-169-142-127.ip.secureserver.net internal         enabled    :-)   2014-06-22 10:40:00
nova-scheduler   ip-192-169-142-127.ip.secureserver.net internal         enabled    :-)   2014-06-22 10:40:01
nova-conductor   ip-192-169-142-127.ip.secureserver.net internal         enabled    :-)   2014-06-22 10:40:03
nova-cert        ip-192-169-142-127.ip.secureserver.net internal         enabled    :-)   2014-06-22 10:40:00
nova-compute     ip-192-169-142-137.ip.secureserver.net nova             enabled    :-)   2014-06-22 10:40:03

[root@ip-192-169-142-127 ~(keystone_admin)]# neutron agent-list
+--------------------------------------+--------------------+----------------------------------------+-------+----------------+
| id                                   | agent_type         | host                                   | alive | admin_state_up |
+--------------------------------------+--------------------+----------------------------------------+-------+----------------+
| 61160392-4c97-4e8f-a902-1e55867e4425 | DHCP agent         | ip-192-169-142-127.ip.secureserver.net | :-)   | True           |
| 6cd022b9-9eb8-4d1e-9991-01dfe678eba5 | Open vSwitch agent | ip-192-169-142-137.ip.secureserver.net | :-)   | True           |
| 893a1a71-5709-48e9-b1a4-11e02f5eca15 | Metadata agent     | ip-192-169-142-127.ip.secureserver.net | :-)   | True           |
| bb29c2dc-2db6-487c-a262-32cecf85c608 | L3 agent           | ip-192-169-142-127.ip.secureserver.net | :-)   | True           |
| d7456233-53ba-4ae4-8936-3448f6ea9d65 | Open vSwitch agent | ip-192-169-142-127.ip.secureserver.net | :-)   | True           |
+--------------------------------------+--------------------+----------------------------------------+-------+----------------+

[root@ip-192-169-142-127 network-scripts(keystone_admin)]# cat ifcfg-br-ex
DEVICE="br-ex"
BOOTPROTO="static"
IPADDR="192.169.142.127"
NETMASK="255.255.255.0"
DNS1="83.221.202.254"
BROADCAST="192.169.142.255"
GATEWAY="192.169.142.1"
NM_CONTROLLED="no"
DEFROUTE="yes"
IPV4_FAILURE_FATAL="yes"
IPV6INIT=no
ONBOOT="yes"
TYPE="OVSBridge"
DEVICETYPE="ovs"

[root@ip-192-169-142-127 network-scripts(keystone_admin)]# cat ifcfg-eth0
DEVICE="eth0"
# HWADDR=90:E6:BA:2D:11:EB
ONBOOT="yes"
TYPE="OVSPort"
DEVICETYPE="ovs"
OVS_BRIDGE=br-ex
NM_CONTROLLED=no
IPV6INIT=no

[root@ip-192-169-142-127 network-scripts(keystone_admin)]# cat ifcfg-eth1
TYPE=Ethernet
BOOTPROTO=none
DEVICE=eth1
ONBOOT=yes
IPADDR=192.168.122.127
PREFIX=24
# HWADDR=52:54:00:EE:94:93
NM_CONTROLLED=no

[root@ip-192-169-142-127 ~(keystone_admin)]# ovs-vsctl show
86e16ac0-c2e6-4eb4-a311-cee56fe86800
    Bridge br-ex
        Port "eth0"
            Interface "eth0"
        Port "qg-068e0e7a-95"
            Interface "qg-068e0e7a-95"
                type: internal
        Port br-ex
            Interface br-ex
                type: internal
    Bridge "br-eth1"
        Port "eth1"
            Interface "eth1"
        Port "phy-br-eth1"
            Interface "phy-br-eth1"
        Port "br-eth1"
            Interface "br-eth1"
                type: internal
    Bridge br-int
        Port "qr-16b1ea2b-fc"
            tag: 1
            Interface "qr-16b1ea2b-fc"
                type: internal
        Port "qr-2bb007df-e1"
            tag: 2
            Interface "qr-2bb007df-e1"
                type: internal
        Port "tap1c48d234-23"
            tag: 2
            Interface "tap1c48d234-23"
                type: internal
        Port br-int
            Interface br-int
                type: internal
        Port "tap26440f58-b0"
            tag: 1
            Interface "tap26440f58-b0"
                type: internal
        Port "int-br-eth1"
            Interface "int-br-eth1"
    ovs_version: "2.1.2"

[root@ip-192-169-142-127 neutron]# cat plugin.ini
[ml2]
type_drivers = vlan
tenant_network_types = vlan
mechanism_drivers = openvswitch
[ml2_type_vlan]
network_vlan_ranges = physnet1:100:200
[ovs]
network_vlan_ranges = physnet1:100:200
tenant_network_type = vlan
enable_tunneling = False
integration_bridge = br-int
bridge_mappings = physnet1:br-eth1
local_ip = 192.168.122.127
[AGENT]
polling_interval = 2
[SECURITYGROUP]
firewall_driver = neutron.agent.linux.iptables_firewall.OVSHybridIptablesFirewallDriver
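The `network_vlan_ranges = physnet1:100:200` line means tenant networks have VLAN ids allocated from the 100-200 range on physnet1 (which `bridge_mappings` ties to br-eth1). A minimal sketch of that range check in plain shell (the variable names are illustrative, not part of Neutron):

```shell
# Parse a network_vlan_ranges entry (physnet:min:max) and test whether
# a given VLAN id could be allocated from it.
range="physnet1:100:200"
vid=101

physnet=${range%%:*}          # physnet1
rest=${range#*:}              # 100:200
low=${rest%%:*}               # 100
high=${rest##*:}              # 200

if [ "$vid" -ge "$low" ] && [ "$vid" -le "$high" ]; then
    echo "VLAN $vid allocatable on $physnet ($low-$high)"
else
    echo "VLAN $vid outside range $low-$high"
fi
```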

Checksum offloading was disabled on eth1 of the Compute Node:
[root@ip-192-169-142-137 neutron]# /usr/sbin/ethtool --offload eth1 tx off
Actual changes:
tx-checksumming: off
    tx-checksum-ip-generic: off
tcp-segmentation-offload: off
    tx-tcp-segmentation: off [requested on]
    tx-tcp-ecn-segmentation: off [requested on]
    tx-tcp6-segmentation: off [requested on]
udp-fragmentation-offload: off [requested on]
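The `ethtool` change above does not survive a reboot. One way (an assumption here, not from the original setup) to make it persistent is the ETHTOOL_OPTS variable in the interface's ifcfg file, which initscripts pass to ethtool when the interface comes up:

```shell
# /etc/sysconfig/network-scripts/ifcfg-eth1 (fragment)
# Applied by ifup; disables TX checksumming on eth1
ETHTOOL_OPTS="-K eth1 tx off"
```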

 


Two Real Node (Controller+Compute) IceHouse Neutron OVS&GRE Cluster on Fedora 20

June 4, 2014

Two physical boxes have been set up, each with two NICs (p37p1, p4p1), for the Controller and Compute node setup. Before running `packstack --answer-file=TwoRealNodeOVS&GRE.txt`, SELinux was set to permissive on both nodes. Both p4p1 interfaces were assigned IPs and set to promiscuous mode (192.168.0.127, 192.168.0.137). The firewalld and NetworkManager services were disabled; the IPv4 iptables firewall and the network service are enabled and running. Packstack binds to the public IP of interface p37p1, 192.168.1.127 on the Controller; the Compute node is 192.168.1.137 (view the answer file).

Setup configuration

- Controller node: Nova, Keystone, Cinder, Glance, Neutron (Open vSwitch plugin with GRE)
- Compute node: Nova (nova-compute), Neutron (openvswitch-agent)

icehouse1.localdomain  -  Controller (192.168.1.127)

icehouse2.localdomain  -  Compute    (192.168.1.137)

Post-packstack install updates:

1. nova.conf and metadata_agent.ini on the Controller, per

Two Real Node IceHouse Neutron OVS&GRE

These updates enable nova-api to listen on port 9697.

View section -

"Metadata support configured on Controller+NeutronServer Node"

2. File /etc/sysconfig/iptables updated on both nodes with the following lines in the *filter section:

-A INPUT -p gre -j ACCEPT
-A OUTPUT -p gre -j ACCEPT

The iptables service was then restarted.

 ***************************************

 On Controller+NeutronServer

 ***************************************

[root@icehouse1 network-scripts(keystone_admin)]# cat ifcfg-br-ex
DEVICE="br-ex"
BOOTPROTO="static"
IPADDR="192.168.1.127"
NETMASK="255.255.255.0"
DNS1="83.221.202.254"
BROADCAST="192.168.1.255"
GATEWAY="192.168.1.1"
NM_CONTROLLED="no"
DEFROUTE="yes"
IPV4_FAILURE_FATAL="yes"
IPV6INIT=no
ONBOOT="yes"
TYPE="OVSBridge"
DEVICETYPE="ovs"

[root@icehouse1 network-scripts(keystone_admin)]# cat ifcfg-p37p1
DEVICE=p37p1
ONBOOT="yes"
TYPE="OVSPort"
DEVICETYPE="ovs"
OVS_BRIDGE=br-ex
NM_CONTROLLED=no
IPV6INIT=no

[root@icehouse1 network-scripts(keystone_admin)]# cat ifcfg-p4p1
TYPE=Ethernet
BOOTPROTO=none
DEFROUTE=yes
IPV4_FAILURE_FATAL=no
IPV6INIT=yes
IPV6_AUTOCONF=yes
IPV6_DEFROUTE=yes
IPV6_FAILURE_FATAL=no
DEVICE=p4p1
UUID=dbc361f1-805b-4f57-8150-cbc24ab7ee1a
ONBOOT=yes
IPADDR=192.168.0.127
PREFIX=24
# GATEWAY=192.168.0.1
DNS1=83.221.202.254
# HWADDR=00:E0:53:13:17:4C
IPV6_PEERDNS=yes
IPV6_PEERROUTES=yes
NM_CONTROLLED=no

[root@icehouse1 network-scripts(keystone_admin)]# ovs-vsctl show
119e5be5-5ef6-4f39-875c-ab1dfdb18972
    Bridge br-int
        Port "qr-209f67c4-b1"
            tag: 1
            Interface "qr-209f67c4-b1"
                type: internal
        Port br-int
            Interface br-int
                type: internal
        Port patch-tun
            Interface patch-tun
                type: patch
                options: {peer=patch-int}
        Port "tapb5da1c7e-50"
            tag: 1
            Interface "tapb5da1c7e-50"
                type: internal
    Bridge br-ex
        Port "qg-22a1fffe-91"
            Interface "qg-22a1fffe-91"
                type: internal
        Port "p37p1"
            Interface "p37p1"
        Port br-ex
            Interface br-ex
                type: internal
    Bridge br-tun
        Port patch-int
            Interface patch-int
                type: patch
                options: {peer=patch-tun}
        Port br-tun
            Interface br-tun
                type: internal
        Port "gre-1"
            Interface "gre-1"
                type: gre
                options: {in_key=flow, local_ip="192.168.0.127", out_key=flow, remote_ip="192.168.0.137"}
    ovs_version: "2.1.2"

**********************************

On Compute

**********************************

[root@icehouse2 network-scripts]# cat ifcfg-p37p1
TYPE=Ethernet
BOOTPROTO=none
DEFROUTE=yes
IPV4_FAILURE_FATAL=no
IPV6INIT=yes
IPV6_AUTOCONF=yes
IPV6_DEFROUTE=yes
IPV6_FAILURE_FATAL=no
DEVICE=p37p1
UUID=b29ecd0e-7093-4ba9-8a2d-79ac74e93ea5
ONBOOT=yes
IPADDR=192.168.1.137
PREFIX=24
GATEWAY=192.168.1.1
DNS1=83.221.202.254
HWADDR=90:E6:BA:2D:11:EB
IPV6_PEERDNS=yes
IPV6_PEERROUTES=yes
NM_CONTROLLED=no

[root@icehouse2 network-scripts]# cat ifcfg-p4p1
TYPE=Ethernet
BOOTPROTO=none
DEFROUTE=yes
IPV4_FAILURE_FATAL=no
IPV6INIT=yes
IPV6_AUTOCONF=yes
IPV6_DEFROUTE=yes
IPV6_FAILURE_FATAL=no
DEVICE=p4p1
UUID=a57d6dd3-32fe-4a9f-a6d0-614e004bfdf6
ONBOOT=yes
IPADDR=192.168.0.137
PREFIX=24
GATEWAY=192.168.0.1
DNS1=83.221.202.254
HWADDR=00:0C:76:E0:1E:C5
IPV6_PEERDNS=yes
IPV6_PEERROUTES=yes
NM_CONTROLLED=no

[root@icehouse2 network-scripts]# ovs-vsctl show
2dd63952-602e-4370-900f-85d8c984a0cb
    Bridge br-int
        Port "qvo615e1af7-f4"
            tag: 3
            Interface "qvo615e1af7-f4"
        Port "qvoe78bebdb-36"
            tag: 3
            Interface "qvoe78bebdb-36"
        Port br-int
            Interface br-int
                type: internal
        Port "qvo9ccf821f-87"
            tag: 3
            Interface "qvo9ccf821f-87"
        Port patch-tun
            Interface patch-tun
                type: patch
                options: {peer=patch-int}
    Bridge br-tun
        Port patch-int
            Interface patch-int
                type: patch
                options: {peer=patch-tun}
        Port "gre-2"
            Interface "gre-2"
                type: gre
                options: {in_key=flow, local_ip="192.168.0.137", out_key=flow, remote_ip="192.168.0.127"}
        Port br-tun
            Interface br-tun
                type: internal
    ovs_version: "2.1.2"

**************************************************

Update dhcp_agent.ini and create dnsmasq.conf

**************************************************

[root@icehouse1 neutron(keystone_admin)]# cat dhcp_agent.ini

[DEFAULT]
debug = False
resync_interval = 30
interface_driver = neutron.agent.linux.interface.OVSInterfaceDriver
dhcp_driver = neutron.agent.linux.dhcp.Dnsmasq
use_namespaces = True
enable_isolated_metadata = False
enable_metadata_network = False
dhcp_delete_namespaces = False
dnsmasq_config_file = /etc/neutron/dnsmasq.conf
root_helper=sudo neutron-rootwrap /etc/neutron/rootwrap.conf
state_path=/var/lib/neutron

[root@icehouse1 neutron(keystone_admin)]# cat  dnsmasq.conf
log-facility = /var/log/neutron/dnsmasq.log
log-dhcp
# Line added
dhcp-option=26,1454
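DHCP option 26 pushes an MTU of 1454 to instances, leaving headroom for GRE encapsulation so guest traffic is not fragmented on the 1500-byte underlay. A rough budget (this per-header split is an illustration; the key point is staying comfortably under 1500):

```shell
# Outer IPv4 header (20) + GRE header with key (8) + inner Ethernet
# framing (14) + VLAN allowance (4) = 46 bytes of overhead
echo $(( 1500 - 20 - 8 - 14 - 4 ))
```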

**************************************************************************

Metadata support configured on Controller+NeutronServer Node :- 

***************************************************************************

[root@icehouse1 ~(keystone_admin)]# ip netns
qrouter-269dfed8-e314-4a23-b693-b891ba00582e
qdhcp-79eb80f1-d550-4f4c-9670-f8e10b43e7eb

[root@icehouse1 ~(keystone_admin)]# ip netns exec qrouter-269dfed8-e314-4a23-b693-b891ba00582e iptables -S -t nat | grep 169.254

-A neutron-l3-agent-PREROUTING -d 169.254.169.254/32 -p tcp -m tcp --dport 80 -j REDIRECT --to-ports 9697

[root@icehouse1 ~(keystone_admin)]# ip netns exec qrouter-269dfed8-e314-4a23-b693-b891ba00582e netstat -anpt

Active Internet connections (servers and established)
Proto Recv-Q Send-Q Local Address           Foreign Address         State       PID/Program name
tcp        0      0 0.0.0.0:9697            0.0.0.0:*               LISTEN      5212/python

[root@icehouse1 ~(keystone_admin)]# ps -ef | grep 5212


root      5212     1  0 11:40 ?        00:00:00 /usr/bin/python /bin/neutron-ns-metadata-proxy --pid_file=/var/lib/neutron/external/pids/269dfed8-e314-4a23-b693-b891ba00582e.pid --metadata_proxy_socket=/var/lib/neutron/metadata_proxy --router_id=269dfed8-e314-4a23-b693-b891ba00582e --state_path=/var/lib/neutron --metadata_port=9697 --verbose --log-file=neutron-ns-metadata-proxy-269dfed8-e314-4a23-b693-b891ba00582e.log --log-dir=/var/log/neutron
root     21188  4697  0 14:29 pts/0    00:00:00 grep --color=auto 5212

[root@icehouse1 ~(keystone_admin)]# netstat -anpt | grep 9697

tcp        0      0 0.0.0.0:9697            0.0.0.0:*               LISTEN      1228/python       


[root@icehouse1 ~(keystone_admin)]# ps -ef | grep 1228

nova      1228     1  0 11:38 ?          00:00:56 /usr/bin/python /usr/bin/nova-api
nova      3623  1228  0 11:39 ?        00:00:00 /usr/bin/python /usr/bin/nova-api
nova      3626  1228  0 11:39 ?        00:00:00 /usr/bin/python /usr/bin/nova-api
nova      3719  1228  0 11:39 ?        00:00:12 /usr/bin/python /usr/bin/nova-api
nova      3720  1228  0 11:39 ?        00:00:10 /usr/bin/python /usr/bin/nova-api
nova      3775  1228  0 11:39 ?        00:00:01 /usr/bin/python /usr/bin/nova-api
nova      3776  1228  0 11:39 ?        00:00:01 /usr/bin/python /usr/bin/nova-api
root     21230  4697  0 14:29 pts/0    00:00:00 grep --color=auto 1228
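With the REDIRECT rule in place, a guest's request to 169.254.169.254:80 is steered to the namespace metadata proxy on 9697 and from there to nova-api. From inside a running instance the whole chain can be checked with a single request (illustrative; the instance must have connectivity to its Neutron router):

```shell
# Run inside a guest: prints the instance id if the metadata chain
# (169.254.169.254:80 -> neutron-ns-metadata-proxy:9697 -> nova-api) works
curl -s http://169.254.169.254/latest/meta-data/instance-id
```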

[root@icehouse1 ~(keystone_admin)]# nova-manage service list
Binary           Host                                 Zone             Status     State Updated_At
nova-consoleauth icehouse1.localdomain                internal         enabled    :-)   2014-06-03 10:39:08
nova-scheduler   icehouse1.localdomain                internal         enabled    :-)   2014-06-03 10:39:08
nova-conductor   icehouse1.localdomain                internal         enabled    :-)   2014-06-03 10:39:08
nova-cert        icehouse1.localdomain                internal         enabled    :-)   2014-06-03 10:39:08
nova-compute     icehouse2.localdomain                nova             enabled    :-)   2014-06-03 10:39:07

[root@icehouse1 ~(keystone_admin)]# neutron agent-list
+--------------------------------------+--------------------+-----------------------+-------+----------------+
| id                                   | agent_type         | host                  | alive | admin_state_up |
+--------------------------------------+--------------------+-----------------------+-------+----------------+
| 4f37a350-2613-4a2b-95b2-b3bd4ee075a0 | L3 agent           | icehouse1.localdomain | :-)   | True           |
| 5b800eb7-aaf8-476a-8197-d13a0fc931c6 | Metadata agent     | icehouse1.localdomain | :-)   | True           |
| 5ce5e6fe-4d17-4ce0-9e6e-2f3b255ffeb0 | Open vSwitch agent | icehouse1.localdomain | :-)   | True           |
| 7f88512a-c59a-4ea4-8494-02e910cae034 | DHCP agent         | icehouse1.localdomain | :-)   | True           |
| a23e4d51-3cbc-42ee-845a-f5c17dff2370 | Open vSwitch agent | icehouse2.localdomain | :-)   | True           |
+--------------------------------------+--------------------+-----------------------+-------+----------------+



Two Node (Controller+Compute) IceHouse Neutron OVS&GRE Cluster on Fedora 20

June 2, 2014

Two KVM guests have been created, each with two virtual NICs (eth0, eth1), for the Controller and Compute node setup. Before running `packstack --answer-file=twoNode-answer.txt`, SELinux was set to permissive on both nodes. Both eth1 interfaces were assigned IPs from the GRE libvirt subnet before installation and set to promiscuous mode (192.168.122.127, 192.168.122.137). Packstack binds to the public IP on eth0: 192.169.142.127 on the Controller, 192.169.142.137 on the Compute node.

Answer file (Two Node IceHouse Neutron OVS&GRE) and the updated *.ini and *.conf files after packstack setup: http://textuploader.com/0ts8

Two libvirt subnets were created on the F20 KVM server to support the installation:

Public subnet: 192.169.142.0/24

GRE tunnel support subnet: 192.168.122.0/24

1. Create a new libvirt network (other than your default 192.168.x.x) file:

$ cat openstackvms.xml
<network>
 <name>openstackvms</name>
 <uuid>d0e9964a-f91a-40c0-b769-a609aee41bf2</uuid>
 <forward mode='nat'>
 <nat>
 <port start='1024' end='65535'/>
 </nat>
 </forward>
 <bridge name='virbr1' stp='on' delay='0' />
 <mac address='52:54:00:60:f8:6e'/>
 <ip address='192.169.142.1' netmask='255.255.255.0'>
 <dhcp>
 <range start='192.169.142.2' end='192.169.142.254' />
 </dhcp>
 </ip>
 </network>
2. Define the above network:

  $ virsh net-define openstackvms.xml

3. Start the network and enable it for "autostart":

  $ virsh net-start openstackvms
  $ virsh net-autostart openstackvms

4. List your libvirt networks to see if it reflects:

  $ virsh net-list
  Name                 State      Autostart     Persistent
  ----------------------------------------------------------
  default              active     yes           yes
  openstackvms         active     yes           yes

5. Optionally, list your bridge devices:
  $ brctl show
  bridge name     bridge id               STP enabled     interfaces
  virbr0          8000.5254003339b3       yes             virbr0-nic
  virbr1          8000.52540060f86e       yes             virbr1-nic

After packstack, the two node (Controller+Compute) IceHouse OVS&GRE setup looks as follows:

[root@ip-192-169-142-127 ~]# mysql -u root -p
Enter password:
Welcome to the MariaDB monitor.  Commands end with ; or \g.
Your MariaDB connection id is 143
Server version: 5.5.36-MariaDB-wsrep MariaDB Server, wsrep_25.9.r3961

Copyright (c) 2000, 2014, Oracle, Monty Program Ab and others.

Type 'help;' or '\h' for help. Type '\c' to clear the current input statement.

MariaDB [(none)]> show databases ;
+--------------------+
| Database           |
+--------------------+
| information_schema |
| cinder             |
| glance             |
| keystone           |
| mysql              |
| nova               |
| ovs_neutron        |
| performance_schema |
| test               |
+--------------------+
9 rows in set (0.02 sec)

MariaDB [(none)]> use ovs_neutron ;
Reading table information for completion of table and column names
You can turn off this feature to get a quicker startup with -A

Database changed
MariaDB [ovs_neutron]> show tables ;
+---------------------------+
| Tables_in_ovs_neutron     |
+---------------------------+
| agents                    |
| alembic_version           |
| allowedaddresspairs       |
| dnsnameservers            |
| externalnetworks          |
| extradhcpopts             |
| floatingips               |
| ipallocationpools         |
| ipallocations             |
| ipavailabilityranges      |
| networkdhcpagentbindings  |
| networks                  |
| ovs_network_bindings      |
| ovs_tunnel_allocations    |
| ovs_tunnel_endpoints      |
| ovs_vlan_allocations      |
| portbindingports          |
| ports                     |
| quotas                    |
| routerl3agentbindings     |
| routerroutes              |
| routers                   |
| securitygroupportbindings |
| securitygrouprules        |
| securitygroups            |
| servicedefinitions        |
| servicetypes              |
| subnetroutes              |
| subnets                   |
+---------------------------+
29 rows in set (0.00 sec)

MariaDB [ovs_neutron]> select * from networks ;
+----------------------------------+--------------------------------------+---------+--------+----------------+--------+
| tenant_id                        | id                                   | name    | status | admin_state_up | shared |
+----------------------------------+--------------------------------------+---------+--------+----------------+--------+
| 179e44f8f53641da89c3eb5d07405523 | 3854bc88-ae14-47b0-9787-233e54ffe7e5 | private | ACTIVE |              1 |      0 |
| 179e44f8f53641da89c3eb5d07405523 | 6e8dc33d-55d4-47b5-925f-e1fa96128c02 | public  | ACTIVE |              1 |      1 |
+----------------------------------+--------------------------------------+---------+--------+----------------+--------+
2 rows in set (0.00 sec)

MariaDB [ovs_neutron]> select * from routers ;
+----------------------------------+--------------------------------------+---------+--------+----------------+--------------------------------------+-------------+
| tenant_id                        | id                                   | name    | status | admin_state_up | gw_port_id                           | enable_snat |
+----------------------------------+--------------------------------------+---------+--------+----------------+--------------------------------------+-------------+
| 179e44f8f53641da89c3eb5d07405523 | 94829282-e6b2-4364-b640-8c0980218a4f | ROUTER3 | ACTIVE |              1 | df9711e4-d1a2-4255-9321-69105fbd8665 |           1 |
+----------------------------------+--------------------------------------+---------+--------+----------------+--------------------------------------+-------------+
1 row in set (0.00 sec)

MariaDB [ovs_neutron]>

*********************

On Controller :-

********************

[root@ip-192-169-142-127 network-scripts]# cat ifcfg-br-ex
DEVICE="br-ex"
BOOTPROTO="static"
IPADDR="192.169.142.127"
NETMASK="255.255.255.0"
DNS1="83.221.202.254"
BROADCAST="192.169.142.255"
GATEWAY="192.169.142.1"
NM_CONTROLLED="no"
DEFROUTE="yes"
IPV4_FAILURE_FATAL="yes"
IPV6INIT=no
ONBOOT="yes"
TYPE="OVSBridge"
DEVICETYPE="ovs"

[root@ip-192-169-142-127 network-scripts]# cat ifcfg-eth0
DEVICE=eth0
ONBOOT="yes"
TYPE="OVSPort"
DEVICETYPE="ovs"
OVS_BRIDGE=br-ex
NM_CONTROLLED=no
IPV6INIT=no

[root@ip-192-169-142-127 network-scripts]# cat ifcfg-eth1
TYPE=Ethernet
BOOTPROTO=none
DEFROUTE=yes
IPV4_FAILURE_FATAL=no
IPV6INIT=yes
IPV6_AUTOCONF=yes
IPV6_DEFROUTE=yes
IPV6_FAILURE_FATAL=no
DEVICE=eth1
ONBOOT=yes
IPADDR=192.168.122.127
PREFIX=24
GATEWAY=192.168.122.1
DNS1=83.221.202.254
NM_CONTROLLED=no
IPV6_PEERDNS=yes
IPV6_PEERROUTES=yes

*************************************

ovs-vsctl show output on controller

*************************************

[root@ip-192-169-142-127 ~(keystone_admin)]# ovs-vsctl show
dc2c76d6-40e3-496e-bdec-470452758c32
    Bridge br-tun
        Port patch-int
            Interface patch-int
                type: patch
                options: {peer=patch-tun}
        Port "gre-1"
            Interface "gre-1"
                type: gre
                options: {in_key=flow, local_ip="192.168.122.127", out_key=flow, remote_ip="192.168.122.137"}
        Port br-tun
            Interface br-tun
                type: internal
    Bridge br-int
        Port br-int
            Interface br-int
                type: internal
        Port patch-tun
            Interface patch-tun
                type: patch
                options: {peer=patch-int}
        Port "tap7acb7666-aa"
            tag: 1
            Interface "tap7acb7666-aa"
                type: internal
        Port "qr-a26fe722-07"
            tag: 1
            Interface "qr-a26fe722-07"
                type: internal
    Bridge br-ex
        Port "qg-df9711e4-d1"
            Interface "qg-df9711e4-d1"
                type: internal
        Port "eth0"
            Interface "eth0"
        Port br-ex
            Interface br-ex
                type: internal
    ovs_version: "2.1.2"

********************

On Compute:-

********************

[root@ip-192-169-142-137 network-scripts]# cat ifcfg-eth0
TYPE=Ethernet
BOOTPROTO=static
DEFROUTE=yes
IPV4_FAILURE_FATAL=no
IPV6INIT=yes
IPV6_AUTOCONF=yes
IPV6_DEFROUTE=yes
IPV6_FAILURE_FATAL=no
DEVICE=eth0
UUID=f96e561d-d14c-4fb1-9657-0c935f7f5721
ONBOOT=yes
IPADDR=192.169.142.137
PREFIX=24
GATEWAY=192.169.142.1
DNS1=83.221.202.254
HWADDR=52:54:00:67:AC:04
NM_CONTROLLED=no
IPV6_PEERDNS=yes
IPV6_PEERROUTES=yes

[root@ip-192-169-142-137 network-scripts]# cat ifcfg-eth1
TYPE=Ethernet
BOOTPROTO=none
DEFROUTE=yes
IPV4_FAILURE_FATAL=no
IPV6INIT=yes
IPV6_AUTOCONF=yes
IPV6_DEFROUTE=yes
IPV6_FAILURE_FATAL=no
DEVICE=eth1
ONBOOT=yes
IPADDR=192.168.122.137
PREFIX=24
GATEWAY=192.168.122.1
DNS1=83.221.202.254
NM_CONTROLLED=no
IPV6_PEERDNS=yes
IPV6_PEERROUTES=yes

************************************

ovs-vsctl show output on compute

************************************

[root@ip-192-169-142-137 ~]# ovs-vsctl show
1c6671de-fcdf-4a29-9eee-96c949848fff
    Bridge br-tun
        Port "gre-2"
            Interface "gre-2"
                type: gre
                options: {in_key=flow, local_ip="192.168.122.137", out_key=flow, remote_ip="192.168.122.127"}
        Port patch-int
            Interface patch-int
                type: patch
                options: {peer=patch-tun}
        Port br-tun
            Interface br-tun
                type: internal
    Bridge br-int
        Port "qvo87038189-3f"
            tag: 1
            Interface "qvo87038189-3f"
        Port patch-tun
            Interface patch-tun
                type: patch
                options: {peer=patch-int}
        Port br-int
            Interface br-int
                type: internal
    ovs_version: "2.1.2"

[root@ip-192-169-142-137 ~]# brctl show
bridge name         bridge id           STP enabled    interfaces
qbr87038189-3f      8000.2abf9e69f97c   no             qvb87038189-3f
                                                       tap87038189-3f
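The device names above follow the OVS hybrid-plug convention: for each Neutron port, a Linux bridge (qbr) enforces iptables security groups, a veth pair (qvb/qvo) links that bridge to br-int, and the tap device is the instance's vNIC. A sketch of the naming scheme (the port id fragment is taken from the output above):

```shell
# All per-port devices share the same suffix, derived from the
# Neutron port UUID; only the prefix differs by role.
port=87038189-3f

echo "qbr${port}   # Linux bridge enforcing security groups"
echo "qvb${port}   # veth end attached to the qbr bridge"
echo "qvo${port}   # veth end plugged into br-int"
echo "tap${port}   # instance vNIC"
```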

*************************

Metadata verification 

*************************

[root@ip-192-169-142-127 ~(keystone_admin)]# ip netns exec qrouter-94829282-e6b2-4364-b640-8c0980218a4f iptables -S -t nat | grep 169.254
-A neutron-l3-agent-PREROUTING -d 169.254.169.254/32 -p tcp -m tcp --dport 80 -j REDIRECT --to-ports 9697

[root@ip-192-169-142-127 ~(keystone_admin)]# ip netns exec qrouter-94829282-e6b2-4364-b640-8c0980218a4f netstat -anpt
Active Internet connections (servers and established)
Proto Recv-Q Send-Q Local Address           Foreign Address         State       PID/Program name
tcp        0      0 0.0.0.0:9697            0.0.0.0:*               LISTEN      3771/python

[root@ip-192-169-142-127 ~(keystone_admin)]# ps -ef | grep 3771
root      3771     1  0 13:58 ?        00:00:00 /usr/bin/python /bin/neutron-ns-metadata-proxy --pid_file=/var/lib/neutron/external/pids/94829282-e6b2-4364-b640-8c0980218a4f.pid --metadata_proxy_socket=/var/lib/neutron/metadata_proxy --router_id=94829282-e6b2-4364-b640-8c0980218a4f --state_path=/var/lib/neutron --metadata_port=9697 --verbose --log-file=neutron-ns-metadata-proxy-94829282-e6b2-4364-b640-8c0980218a4f.log --log-dir=/var/log/neutron

[root@ip-192-169-142-127 ~(keystone_admin)]# netstat -anpt | grep 9697

tcp        0      0 0.0.0.0:9697            0.0.0.0:*               LISTEN      1024/python

[root@ip-192-169-142-127 ~(keystone_admin)]# ps -ef | grep 1024
nova      1024     1  1 13:58 ?        00:00:05 /usr/bin/python /usr/bin/nova-api
nova      3369  1024  0 13:58 ?        00:00:00 /usr/bin/python /usr/bin/nova-api
nova      3370  1024  0 13:58 ?        00:00:00 /usr/bin/python /usr/bin/nova-api
nova      3397  1024  0 13:58 ?        00:00:02 /usr/bin/python /usr/bin/nova-api
nova      3398  1024  0 13:58 ?        00:00:02 /usr/bin/python /usr/bin/nova-api
nova      3423  1024  0 13:58 ?        00:00:00 /usr/bin/python /usr/bin/nova-api
nova      3424  1024  0 13:58 ?        00:00:00 /usr/bin/python /usr/bin/nova-api
root      4947  4301  0 14:06 pts/0    00:00:00 grep --color=auto 1024