Two Node (Controller+Compute) IceHouse Neutron OVS&GRE Cluster on Fedora 20

June 2, 2014

Two KVM guests were created, each with two virtual NICs (eth0, eth1), to serve as the Controller and Compute nodes. Before running `packstack --answer-file=twoNode-answer.txt`, SELinux was set to permissive on both nodes. Both eth1 interfaces were assigned IPs from the GRE libvirt subnet (192.168.122.127 and 192.168.122.137) and set to promiscuous mode before installation. Packstack binds to the public IPs on eth0: 192.169.142.127 for the Controller and 192.169.142.137 for the Compute node.
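
For reference, a minimal sketch of that pre-install preparation, run on both nodes (interface name per the layout above; `setenforce 0` switches SELinux to permissive for the running system, the `sed` makes it persistent, and the promiscuous-mode setting does not survive a reboot):

# setenforce 0
# sed -i 's/^SELINUX=.*/SELINUX=permissive/' /etc/selinux/config
# ip link set dev eth1 promisc on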

ANSWER FILE for the two-node IceHouse Neutron OVS&GRE setup, plus the updated *.ini and *.conf files after the packstack run: http://textuploader.com/0ts8

Two libvirt subnets were created on the F20 KVM server to support the installation:

  Public subnet:              192.169.142.0/24
  GRE tunnel support subnet:  192.168.122.0/24

1. Create a new libvirt network (other than your default 192.168.x.x) file:

$ cat openstackvms.xml
<network>
  <name>openstackvms</name>
  <uuid>d0e9964a-f91a-40c0-b769-a609aee41bf2</uuid>
  <forward mode='nat'>
    <nat>
      <port start='1024' end='65535'/>
    </nat>
  </forward>
  <bridge name='virbr1' stp='on' delay='0' />
  <mac address='52:54:00:60:f8:6e'/>
  <ip address='192.169.142.1' netmask='255.255.255.0'>
    <dhcp>
      <range start='192.169.142.2' end='192.169.142.254' />
    </dhcp>
  </ip>
</network>
2. Define the above network:

  $ virsh net-define openstackvms.xml

3. Start the network and enable it for "autostart":

  $ virsh net-start openstackvms
  $ virsh net-autostart openstackvms

4. List your libvirt networks to see if it reflects:
  $ virsh net-list
  Name                 State      Autostart     Persistent
  ----------------------------------------------------------
  default              active     yes           yes
  openstackvms         active     yes           yes

5. Optionally, list your bridge devices:
  $ brctl show
  bridge name     bridge id               STP enabled     interfaces
  virbr0          8000.5254003339b3       yes             virbr0-nic
  virbr1          8000.52540060f86e       yes             virbr1-nic
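
6. Optionally, create the guests against both networks in one shot. A hypothetical virt-install invocation (guest name, disk path, ISO location and sizing are illustrative placeholders, not the exact commands used here):

  $ sudo virt-install --name icehouse-controller \
      --ram 4096 --vcpus 2 \
      --disk path=/var/lib/libvirt/images/icehouse-controller.qcow2,size=40 \
      --network network=openstackvms \
      --network network=default \
      --cdrom /var/lib/libvirt/images/Fedora-20-x86_64-DVD.iso \
      --os-variant fedora20

The first NIC (eth0) lands on openstackvms (192.169.142.0/24) and the second (eth1) on default (192.168.122.0/24), matching the addressing above.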

After the packstack two-node (Controller+Compute) IceHouse OVS&GRE setup :-

[root@ip-192-169-142-127 ~]# mysql -u root -p
Enter password:
Welcome to the MariaDB monitor.  Commands end with ; or \g.
Your MariaDB connection id is 143
Server version: 5.5.36-MariaDB-wsrep MariaDB Server, wsrep_25.9.r3961

Copyright (c) 2000, 2014, Oracle, Monty Program Ab and others.

Type 'help;' or '\h' for help. Type '\c' to clear the current input statement.

MariaDB [(none)]> show databases ;
+--------------------+
| Database           |
+--------------------+
| information_schema |
| cinder             |
| glance             |
| keystone           |
| mysql              |
| nova               |
| ovs_neutron        |
| performance_schema |
| test               |
+--------------------+
9 rows in set (0.02 sec)

MariaDB [(none)]> use ovs_neutron ;
Reading table information for completion of table and column names
You can turn off this feature to get a quicker startup with -A

Database changed
MariaDB [ovs_neutron]> show tables ;
+---------------------------+
| Tables_in_ovs_neutron     |
+---------------------------+
| agents                    |
| alembic_version           |
| allowedaddresspairs       |
| dnsnameservers            |
| externalnetworks          |
| extradhcpopts             |
| floatingips               |
| ipallocationpools         |
| ipallocations             |
| ipavailabilityranges      |
| networkdhcpagentbindings  |
| networks                  |
| ovs_network_bindings      |
| ovs_tunnel_allocations    |
| ovs_tunnel_endpoints      |
| ovs_vlan_allocations      |
| portbindingports          |
| ports                     |
| quotas                    |
| routerl3agentbindings     |
| routerroutes              |
| routers                   |
| securitygroupportbindings |
| securitygrouprules        |
| securitygroups            |
| servicedefinitions        |
| servicetypes              |
| subnetroutes              |
| subnets                   |
+---------------------------+
29 rows in set (0.00 sec)

MariaDB [ovs_neutron]> select * from networks ;
+----------------------------------+--------------------------------------+---------+--------+----------------+--------+
| tenant_id                        | id                                   | name    | status | admin_state_up | shared |
+----------------------------------+--------------------------------------+---------+--------+----------------+--------+
| 179e44f8f53641da89c3eb5d07405523 | 3854bc88-ae14-47b0-9787-233e54ffe7e5 | private | ACTIVE |              1 |      0 |
| 179e44f8f53641da89c3eb5d07405523 | 6e8dc33d-55d4-47b5-925f-e1fa96128c02 | public  | ACTIVE |              1 |      1 |
+----------------------------------+--------------------------------------+---------+--------+----------------+--------+
2 rows in set (0.00 sec)

MariaDB [ovs_neutron]> select * from routers ;
+----------------------------------+--------------------------------------+---------+--------+----------------+--------------------------------------+-------------+
| tenant_id                        | id                                   | name    | status | admin_state_up | gw_port_id                           | enable_snat |
+----------------------------------+--------------------------------------+---------+--------+----------------+--------------------------------------+-------------+
| 179e44f8f53641da89c3eb5d07405523 | 94829282-e6b2-4364-b640-8c0980218a4f | ROUTER3 | ACTIVE |              1 | df9711e4-d1a2-4255-9321-69105fbd8665 |           1 |
+----------------------------------+--------------------------------------+---------+--------+----------------+--------------------------------------+-------------+
1 row in set (0.00 sec)

MariaDB [ovs_neutron]>
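
The same objects can be cross-checked through the Neutron CLI instead of querying the database directly (a sketch, assuming the keystonerc_admin file created by packstack):

# source keystonerc_admin
# neutron net-list
# neutron router-list
# neutron router-port-list ROUTER3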

*********************

On Controller :-

********************

[root@ip-192-169-142-127 network-scripts]# cat ifcfg-br-ex
DEVICE="br-ex"
BOOTPROTO="static"
IPADDR="192.169.142.127"
NETMASK="255.255.255.0"
DNS1="83.221.202.254"
BROADCAST="192.169.142.255"
GATEWAY="192.169.142.1"
NM_CONTROLLED="no"
DEFROUTE="yes"
IPV4_FAILURE_FATAL="yes"
IPV6INIT=no
ONBOOT="yes"
TYPE="OVSBridge"
DEVICETYPE="ovs"

[root@ip-192-169-142-127 network-scripts]# cat ifcfg-eth0
DEVICE=eth0
ONBOOT="yes"
TYPE="OVSPort"
DEVICETYPE="ovs"
OVS_BRIDGE=br-ex
NM_CONTROLLED=no
IPV6INIT=no

[root@ip-192-169-142-127 network-scripts]# cat ifcfg-eth1
TYPE=Ethernet
BOOTPROTO=none
DEFROUTE=yes
IPV4_FAILURE_FATAL=no
IPV6INIT=yes
IPV6_AUTOCONF=yes
IPV6_DEFROUTE=yes
IPV6_FAILURE_FATAL=no
DEVICE=eth1
ONBOOT=yes
IPADDR=192.168.122.127
PREFIX=24
GATEWAY=192.168.122.1
DNS1=83.221.202.254
NM_CONTROLLED=no
IPV6_PEERDNS=yes
IPV6_PEERROUTES=yes

*************************************

ovs-vsctl show output on controller

*************************************

[root@ip-192-169-142-127 ~(keystone_admin)]# ovs-vsctl show
dc2c76d6-40e3-496e-bdec-470452758c32
    Bridge br-tun
        Port patch-int
            Interface patch-int
                type: patch
                options: {peer=patch-tun}
        Port "gre-1"
            Interface "gre-1"
                type: gre
                options: {in_key=flow, local_ip="192.168.122.127", out_key=flow, remote_ip="192.168.122.137"}
        Port br-tun
            Interface br-tun
                type: internal
    Bridge br-int
        Port br-int
            Interface br-int
                type: internal
        Port patch-tun
            Interface patch-tun
                type: patch
                options: {peer=patch-int}
        Port "tap7acb7666-aa"
            tag: 1
            Interface "tap7acb7666-aa"
                type: internal
        Port "qr-a26fe722-07"
            tag: 1
            Interface "qr-a26fe722-07"
                type: internal
    Bridge br-ex
        Port "qg-df9711e4-d1"
            Interface "qg-df9711e4-d1"
                type: internal
        Port "eth0"
            Interface "eth0"
        Port br-ex
            Interface br-ex
                type: internal
    ovs_version: "2.1.2"
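
To double-check the GRE tunnel itself, a quick sketch (standard OVS tooling, run on the Controller):

# ping -c 3 192.168.122.137        # GRE endpoints must reach each other over eth1
# ovs-ofctl dump-flows br-tun      # flows installed by the OVS agent on the tunnel bridge
# ovs-vsctl list interface gre-1   # confirms the local_ip/remote_ip options shown above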

********************

On Compute:-

********************

[root@ip-192-169-142-137 network-scripts]# cat ifcfg-eth0
TYPE=Ethernet
BOOTPROTO=static
DEFROUTE=yes
IPV4_FAILURE_FATAL=no
IPV6INIT=yes
IPV6_AUTOCONF=yes
IPV6_DEFROUTE=yes
IPV6_FAILURE_FATAL=no
DEVICE=eth0
UUID=f96e561d-d14c-4fb1-9657-0c935f7f5721
ONBOOT=yes
IPADDR=192.169.142.137
PREFIX=24
GATEWAY=192.169.142.1
DNS1=83.221.202.254
HWADDR=52:54:00:67:AC:04
NM_CONTROLLED=no
IPV6_PEERDNS=yes
IPV6_PEERROUTES=yes

[root@ip-192-169-142-137 network-scripts]# cat ifcfg-eth1
TYPE=Ethernet
BOOTPROTO=none
DEFROUTE=yes
IPV4_FAILURE_FATAL=no
IPV6INIT=yes
IPV6_AUTOCONF=yes
IPV6_DEFROUTE=yes
IPV6_FAILURE_FATAL=no
DEVICE=eth1
ONBOOT=yes
IPADDR=192.168.122.137
PREFIX=24
GATEWAY=192.168.122.1
DNS1=83.221.202.254
NM_CONTROLLED=no
IPV6_PEERDNS=yes
IPV6_PEERROUTES=yes

************************************

ovs-vsctl show output on compute

************************************

[root@ip-192-169-142-137 ~]# ovs-vsctl show
1c6671de-fcdf-4a29-9eee-96c949848fff
    Bridge br-tun
        Port "gre-2"
            Interface "gre-2"
                type: gre
                options: {in_key=flow, local_ip="192.168.122.137", out_key=flow, remote_ip="192.168.122.127"}
        Port patch-int
            Interface patch-int
                type: patch
                options: {peer=patch-tun}
        Port br-tun
            Interface br-tun
                type: internal
    Bridge br-int
        Port "qvo87038189-3f"
            tag: 1
            Interface "qvo87038189-3f"
        Port patch-tun
            Interface patch-tun
                type: patch
                options: {peer=patch-int}
        Port br-int
            Interface br-int
                type: internal
    ovs_version: "2.1.2"

[root@ip-192-169-142-137 ~]# brctl show
bridge name         bridge id           STP enabled     interfaces
qbr87038189-3f      8000.2abf9e69f97c   no              qvb87038189-3f
                                                        tap87038189-3f
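
The qbr/qvb/qvo chain above is the usual Nova/Neutron plumbing: the instance tap device plugs into a Linux bridge (qbr*) where security-group rules are applied, and the qvb/qvo veth pair links that bridge to br-int. A quick sketch to confirm the chain:

# ovs-vsctl list-ports br-int      # should include qvo87038189-3f
# ip link show qvb87038189-3f      # the veth peer attached to qbr87038189-3f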

*************************

Metadata verification 

*************************

[root@ip-192-169-142-127 ~(keystone_admin)]# ip netns exec qrouter-94829282-e6b2-4364-b640-8c0980218a4f iptables -S -t nat | grep 169.254
-A neutron-l3-agent-PREROUTING -d 169.254.169.254/32 -p tcp -m tcp --dport 80 -j REDIRECT --to-ports 9697

[root@ip-192-169-142-127 ~(keystone_admin)]# ip netns exec qrouter-94829282-e6b2-4364-b640-8c0980218a4f netstat -anpt
Active Internet connections (servers and established)
Proto Recv-Q Send-Q Local Address           Foreign Address         State       PID/Program name
tcp        0      0 0.0.0.0:9697            0.0.0.0:*               LISTEN      3771/python

[root@ip-192-169-142-127 ~(keystone_admin)]# ps -ef | grep 3771
root      3771     1  0 13:58 ?        00:00:00 /usr/bin/python /bin/neutron-ns-metadata-proxy --pid_file=/var/lib/neutron/external/pids/94829282-e6b2-4364-b640-8c0980218a4f.pid --metadata_proxy_socket=/var/lib/neutron/metadata_proxy --router_id=94829282-e6b2-4364-b640-8c0980218a4f --state_path=/var/lib/neutron --metadata_port=9697 --verbose --log-file=neutron-ns-metadata-proxy-94829282-e6b2-4364-b640-8c0980218a4f.log --log-dir=/var/log/neutron

[root@ip-192-169-142-127 ~(keystone_admin)]# netstat -anpt | grep 9697

tcp        0      0 0.0.0.0:9697            0.0.0.0:*               LISTEN      1024/python

[root@ip-192-169-142-127 ~(keystone_admin)]# ps -ef | grep 1024
nova      1024     1  1 13:58 ?        00:00:05 /usr/bin/python /usr/bin/nova-api
nova      3369  1024  0 13:58 ?        00:00:00 /usr/bin/python /usr/bin/nova-api
nova      3370  1024  0 13:58 ?        00:00:00 /usr/bin/python /usr/bin/nova-api
nova      3397  1024  0 13:58 ?        00:00:02 /usr/bin/python /usr/bin/nova-api
nova      3398  1024  0 13:58 ?        00:00:02 /usr/bin/python /usr/bin/nova-api
nova      3423  1024  0 13:58 ?        00:00:00 /usr/bin/python /usr/bin/nova-api
nova      3424  1024  0 13:58 ?        00:00:00 /usr/bin/python /usr/bin/nova-api
root      4947  4301  0 14:06 pts/0    00:00:00 grep --color=auto 1024
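
End-to-end, metadata access can also be confirmed from inside a running instance (a sketch; the returned instance-id will differ per instance):

$ curl http://169.254.169.254/latest/meta-data/instance-id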

Two Real Node (Controller+Compute) RDO IceHouse Neutron OVS&VLAN Cluster on Fedora 20 Setup

May 27, 2014

Two physical boxes, each with two NICs (p37p1, p4p1), have been set up as the (Controller+Neutron Server) and Compute nodes.

Setup configuration

- Controller node: Nova, Keystone, Cinder, Glance, Neutron (using the Open vSwitch plugin and VLANs)
- Compute node: Nova (nova-compute), Neutron (openvswitch-agent)

icehouse1.localdomain   -  Controller (192.168.1.127)

icehouse2.localdomain   -  Compute   (192.168.1.137)

Before running `packstack --answer-file=TwoRealNode-answer.txt`, SELinux was set to permissive on both nodes. Interface p4p1 on both nodes was set to promiscuous mode (e.g. HWADDR was commented out).

Specifics of the answer file on real F20 boxes :-

CONFIG_NOVA_COMPUTE_PRIVIF=p4p1
CONFIG_NOVA_NETWORK_PUBIF=p37p1
CONFIG_NOVA_NETWORK_PRIVIF=p4p1
CONFIG_NEUTRON_OVS_TENANT_NETWORK_TYPE=vlan
CONFIG_NEUTRON_OVS_VLAN_RANGES=physnet1:100:200
CONFIG_NEUTRON_OVS_BRIDGE_MAPPINGS=physnet1:br-p4p1
CONFIG_NEUTRON_OVS_BRIDGE_IFACES=br-p4p1:p4p1
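
The last two settings make packstack create the provider bridge br-p4p1 and attach the physical NIC to it; done by hand, that would amount to roughly (a sketch):

# ovs-vsctl add-br br-p4p1
# ovs-vsctl add-port br-p4p1 p4p1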

Post installation steps :-

1. NetworkManager should be disabled on both nodes and the network service enabled (see the sketch after the ifcfg file below).

2. The syntax of the ifcfg-* files for the corresponding OVS ports should follow RHEL 6.5 notation rather than F20's.

3. Special care should be taken to bring up p4p1 (in my case).

4. Post-install reconfiguration of the *.ini and *.conf files: http://textuploader.com/9oec

5. Configuration of the p4p1 interface:

# cat ifcfg-p4p1
TYPE=Ethernet
BOOTPROTO=none
DEVICE=p4p1
ONBOOT=yes
NM_CONTROLLED=no
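
A sketch of steps 1 and 3 as shell commands (assuming F20's stock service units; the promiscuous-mode setting does not persist across reboots):

# systemctl stop NetworkManager && systemctl disable NetworkManager
# systemctl enable network && systemctl start network
# ip link set dev p4p1 promisc on
# ifup p4p1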

Metadata access verification on Controller:-

[root@icehouse1 ~(keystone_admin)]# ip netns
qdhcp-a2bf6363-6447-47f5-a243-b998d206d593
qrouter-2462467b-ea0a-4a40-a093-493572010694

[root@icehouse1 ~(keystone_admin)]# ip netns exec qrouter-2462467b-ea0a-4a40-a093-493572010694 iptables -S -t nat | grep 169
-A neutron-l3-agent-PREROUTING -d 169.254.169.254/32 -p tcp -m tcp --dport 80 -j REDIRECT --to-ports 8775

[root@icehouse1 ~(keystone_admin)]# ip netns exec qrouter-2462467b-ea0a-4a40-a093-493572010694 netstat -anpt
Active Internet connections (servers and established)
Proto Recv-Q Send-Q Local Address           Foreign Address         State       PID/Program name
tcp        0      0 0.0.0.0:8775            0.0.0.0:*               LISTEN      6156/python

[root@icehouse1 ~(keystone_admin)]# ps -ef | grep 6156
root      5691  4082  0 07:58 pts/0    00:00:00 grep --color=auto 6156
root      6156     1  0 06:04 ?        00:00:00 /usr/bin/python /bin/neutron-ns-metadata-proxy --pid_file=/var/lib/neutron/external/pids/2462467b-ea0a-4a40-a093-493572010694.pid --metadata_proxy_socket=/var/lib/neutron/metadata_proxy --router_id=2462467b-ea0a-4a40-a093-493572010694 --state_path=/var/lib/neutron --metadata_port=8775 --verbose --log-file=neutron-ns-metadata-proxy-2462467b-ea0a-4a40-a093-493572010694.log --log-dir=/var/log/neutron

[root@icehouse1 ~(keystone_admin)]# netstat -anpt | grep 8775
tcp        0      0 0.0.0.0:8775            0.0.0.0:*               LISTEN      1224/python

[root@icehouse1 ~(keystone_admin)]# ps -aux | grep 1224
nova      1224  0.7  0.7 337092 65052 ?        Ss   05:59   0:46 /usr/bin/python /usr/bin/nova-api
boris     3789  0.0  0.1 504676 12248 ?        Sl   06:01   0:00 /usr/libexec/tracker-store

Verifying the DHCP leases for the private IPs of currently running instances :-

[root@icehouse1 ~(keystone_admin)]# ip netns exec qdhcp-a2bf6363-6447-47f5-a243-b998d206d593 ifconfig

lo: flags=73<UP,LOOPBACK,RUNNING>  mtu 65536
        inet 127.0.0.1  netmask 255.0.0.0
        inet6 ::1  prefixlen 128  scopeid 0x10<host>
        loop  txqueuelen 0  (Local Loopback)
        RX packets 3  bytes 1728 (1.6 KiB)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 3  bytes 1728 (1.6 KiB)
        TX errors 0  dropped 0  overruns 0  carrier 0  collisions 0

tapa7e1ac48-7b: flags=67<UP,BROADCAST,RUNNING>  mtu 1500
        inet 10.0.0.11  netmask 255.255.255.0  broadcast 10.0.0.255
        inet6 fe80::f816:3eff:fe9d:874d  prefixlen 64  scopeid 0x20<link>
        ether fa:16:3e:9d:87:4d  txqueuelen 0  (Ethernet)
        RX packets 3364  bytes 626074 (611.4 KiB)
        RX errors 0  dropped 35  overruns 0  frame 0
        TX packets 2124  bytes 427060 (417.0 KiB)
        TX errors 0  dropped 0  overruns 0  carrier 0  collisions 0

[root@icehouse1 ~(keystone_admin)]# ip netns exec qdhcp-a2bf6363-6447-47f5-a243-b998d206d593 tcpdump -ln -i tapa7e1ac48-7b
tcpdump: verbose output suppressed, use -v or -vv for full protocol decode
listening on tapa7e1ac48-7b, link-type EN10MB (Ethernet), capture size 65535 bytes
11:07:02.388376 ARP, Request who-has 10.0.0.11 tell 10.0.0.38, length 46
11:07:02.388399 ARP, Reply 10.0.0.11 is-at fa:16:3e:9d:87:4d, length 28
11:07:12.239833 IP 10.0.0.43.bootpc > 10.0.0.11.bootps: BOOTP/DHCP, Request from fa:16:3e:40:da:a1, length 300
11:07:12.240491 IP 10.0.0.11.bootps > 10.0.0.43.bootpc: BOOTP/DHCP, Reply, length 324
11:07:12.313087 ARP, Request who-has 10.0.0.43 (Broadcast) tell 0.0.0.0, length 46
11:07:13.313070 ARP, Request who-has 10.0.0.43 (Broadcast) tell 0.0.0.0, length 46
11:07:15.634980 IP 0.0.0.0.bootpc > 255.255.255.255.bootps: BOOTP/DHCP, Request from fa:16:3e:6b:81:ff, length 280
11:07:15.635595 IP 10.0.0.11.bootps > 10.0.0.31.bootpc: BOOTP/DHCP, Reply, length 324
11:07:15.635954 IP 10.0.0.31 > 10.0.0.11: ICMP 10.0.0.31 udp port bootpc unreachable, length 360
11:07:17.254260 ARP, Request who-has 10.0.0.43 tell 10.0.0.11, length 28
11:07:17.254866 ARP, Reply 10.0.0.43 is-at fa:16:3e:40:da:a1, length 46
11:07:20.644135 ARP, Request who-has 10.0.0.11 tell 10.0.0.31, length 28
11:07:20.644157 ARP, Reply 10.0.0.11 is-at fa:16:3e:9d:87:4d, length 28
11:07:45.972179 IP 10.0.0.38.bootpc > 10.0.0.11.bootps: BOOTP/DHCP, Request from fa:16:3e:9d:67:df, length 300
11:07:45.973023 IP 10.0.0.11.bootps > 10.0.0.38.bootpc: BOOTP/DHCP, Reply, length 324
11:07:50.980701 ARP, Request who-has 10.0.0.11 tell 10.0.0.38, length 46
11:07:50.980725 ARP, Reply 10.0.0.11 is-at fa:16:3e:9d:87:4d, length 28
11:07:55.821920 IP 10.0.0.43.bootpc > 10.0.0.11.bootps: BOOTP/DHCP, Request from fa:16:3e:40:da:a1, length 300
11:07:55.822423 IP 10.0.0.11.bootps > 10.0.0.43.bootpc: BOOTP/DHCP, Reply, length 324
11:07:55.898024 ARP, Request who-has 10.0.0.43 (Broadcast) tell 0.0.0.0, length 46
11:07:56.897994 ARP, Request who-has 10.0.0.43 (Broadcast) tell 0.0.0.0, length 46
11:08:00.823637 ARP, Request who-has 10.0.0.11 tell 10.0.0.43, length 46

******************

On Controller

******************

[root@icehouse1 ~(keystone_admin)]# ovs-vsctl show
a675c73e-c707-4f29-af60-57fb7c3f81c4
    Bridge br-int
        Port "int-br-p4p1"
            Interface "int-br-p4p1"
        Port br-int
            Interface br-int
                type: internal
        Port "qr-bbba6fd3-a3"
            tag: 1
            Interface "qr-bbba6fd3-a3"
                type: internal
        Port "qvo61d82a0f-32"
            tag: 1
            Interface "qvo61d82a0f-32"
        Port "tapa7e1ac48-7b"
            tag: 1
            Interface "tapa7e1ac48-7b"
                type: internal
        Port "qvof8c8a1a2-51"
            tag: 1
            Interface "qvof8c8a1a2-51"
    Bridge br-ex
        Port "p37p1"
            Interface "p37p1"
        Port br-ex
            Interface br-ex
                type: internal
        Port "qg-3787602d-29"
            Interface "qg-3787602d-29"
                type: internal
    Bridge "br-p4p1"
        Port "p4p1"
            Interface "p4p1"
        Port "phy-br-p4p1"
            Interface "phy-br-p4p1"
        Port "br-p4p1"
            Interface "br-p4p1"
                type: internal
    ovs_version: "2.0.1"

****************

On Compute

****************

[root@icehouse2 ]# ovs-vsctl show
bf768fc8-d18b-4762-bdd2-a410fcf88a9b
    Bridge "br-p4p1"
        Port "br-p4p1"
            Interface "br-p4p1"
                type: internal
        Port "phy-br-p4p1"
            Interface "phy-br-p4p1"
        Port "p4p1"
            Interface "p4p1"
    Bridge br-int
        Port br-int
            Interface br-int
                type: internal
        Port "int-br-p4p1"
            Interface "int-br-p4p1"
        Port "qvoe5a82d77-d4"
            tag: 8
            Interface "qvoe5a82d77-d4"
    ovs_version: "2.0.1"

[root@icehouse1 ~(keystone_admin)]# openstack-status
== Nova services ==
openstack-nova-api:                     active
openstack-nova-cert:                    active
openstack-nova-compute:                 active
openstack-nova-network:                 inactive  (disabled on boot)
openstack-nova-scheduler:               active
openstack-nova-volume:                  inactive  (disabled on boot)
openstack-nova-conductor:               active
== Glance services ==
openstack-glance-api:                   active
openstack-glance-registry:              active
== Keystone service ==
openstack-keystone:                     active
== Horizon service ==
openstack-dashboard:                    active
== neutron services ==
neutron-server:                         active
neutron-dhcp-agent:                     active
neutron-l3-agent:                       active
neutron-metadata-agent:                 active
neutron-lbaas-agent:                    inactive  (disabled on boot)
neutron-openvswitch-agent:              active
neutron-linuxbridge-agent:              inactive  (disabled on boot)
neutron-ryu-agent:                      inactive  (disabled on boot)
neutron-nec-agent:                      inactive  (disabled on boot)
neutron-mlnx-agent:                     inactive  (disabled on boot)
== Swift services ==
openstack-swift-proxy:                  active
openstack-swift-account:                active
openstack-swift-container:              active
openstack-swift-object:                 active
== Cinder services ==
openstack-cinder-api:                   active
openstack-cinder-scheduler:             active
openstack-cinder-volume:                active
openstack-cinder-backup:                inactive
== Ceilometer services ==
openstack-ceilometer-api:               active
openstack-ceilometer-central:           active
openstack-ceilometer-compute:           active
openstack-ceilometer-collector:         active
openstack-ceilometer-alarm-notifier:    active
openstack-ceilometer-alarm-evaluator:   active
== Support services ==
libvirtd:                               active
openvswitch:                            active
dbus:                                   active
tgtd:                                   active
rabbitmq-server:                        active
memcached:                              active

== Keystone users ==
+----------------------------------+------------+---------+----------------------+
|                id                |    name    | enabled |        email         |
+----------------------------------+------------+---------+----------------------+
| df9165cd160846b19f73491e0bc041c2 |   admin    |   True  |    test@test.com     |
| bafe2fc4d51a400a99b1b41ef50d4afd | ceilometer |   True  | ceilometer@localhost |
| df59d0782f174a34a3a73215300c64ca |   cinder   |   True  |   cinder@localhost   |
| ca624394c9d941b6ad0a07363ab668b2 |   glance   |   True  |   glance@localhost   |
| fb5125484a1f4b7aaf8503025eb018ba |  neutron   |   True  |  neutron@localhost   |
| 64912bc3726c48db8f003ce79d8fe746 |    nova    |   True  |    nova@localhost    |
| 6d8b48605d3b476097d89486813360c0 |   swift    |   True  |   swift@localhost    |
+----------------------------------+------------+---------+----------------------+

== Glance images ==
+--------------------------------------+-----------------+-------------+------------------+-----------+--------+
| ID                                   | Name            | Disk Format | Container Format | Size      | Status |
+--------------------------------------+-----------------+-------------+------------------+-----------+--------+
| 8593a43a-2449-4b49-918f-9871011249a7 | CirrOS31        | qcow2       | bare             | 13147648  | active |
| 4be72a99-06e0-477d-b446-b597435455a9 | Fedora20image   | qcow2       | bare             | 210829312 | active |
| 28470072-f317-4a72-b3e8-3fffbe6a7661 | UubuntuServer14 | qcow2       | bare             | 253559296 | active |
+--------------------------------------+-----------------+-------------+------------------+-----------+--------+

== Nova managed services ==
+------------------+-----------------------+----------+---------+-------+----------------------------+-----------------+
| Binary           | Host                  | Zone     | Status  | State | Updated_at                 | Disabled Reason |
+------------------+-----------------------+----------+---------+-------+----------------------------+-----------------+
| nova-consoleauth | icehouse1.localdomain | internal | enabled | up    | 2014-05-25T03:03:05.000000 | -               |
| nova-scheduler   | icehouse1.localdomain | internal | enabled | up    | 2014-05-25T03:03:05.000000 | -               |
| nova-conductor   | icehouse1.localdomain | internal | enabled | up    | 2014-05-25T03:03:13.000000 | -               |
| nova-compute     | icehouse1.localdomain | nova     | enabled | up    | 2014-05-25T03:03:10.000000 | -               |
| nova-cert        | icehouse1.localdomain | internal | enabled | up    | 2014-05-25T03:03:05.000000 | -               |
| nova-compute     | icehouse2.localdomain | nova     | enabled | up    | 2014-05-25T03:03:13.000000 | -               |
+------------------+-----------------------+----------+---------+-------+----------------------------+-----------------+

== Nova networks ==
+--------------------------------------+---------+------+
| ID                                   | Label   | Cidr |
+--------------------------------------+---------+------+
| 09e18ced-8c22-4166-a1a1-cbceece46884 | public  | -    |
| a2bf6363-6447-47f5-a243-b998d206d593 | private | -    |
+--------------------------------------+---------+------+

== Nova instance flavors ==
+----+-----------+-----------+------+-----------+------+-------+-------------+-----------+
| ID | Name      | Memory_MB | Disk | Ephemeral | Swap | VCPUs | RXTX_Factor | Is_Public |
+----+-----------+-----------+------+-----------+------+-------+-------------+-----------+
| 1  | m1.tiny   | 512       | 1    | 0         |      | 1     | 1.0         | True      |
| 2  | m1.small  | 2048      | 20   | 0         |      | 1     | 1.0         | True      |
| 3  | m1.medium | 4096      | 40   | 0         |      | 2     | 1.0         | True      |
| 4  | m1.large  | 8192      | 80   | 0         |      | 4     | 1.0         | True      |
| 5  | m1.xlarge | 16384     | 160  | 0         |      | 8     | 1.0         | True      |
+----+-----------+-----------+------+-----------+------+-------+-------------+-----------+

== Nova instances ==
+--------------------------------------+--------------+-----------+------------+-------------+---------------------------------+
| ID                                   | Name         | Status    | Task State | Power State | Networks                        |
+--------------------------------------+--------------+-----------+------------+-------------+---------------------------------+
| b661a130-fdb7-41eb-aba5-588924634c9d | CirrOS302    | ACTIVE    | -          | Running     | private=10.0.0.31, 192.168.1.63 |
| 5d1dbb9d-7bef-4e51-be8d-4270ddd3d4cc | CirrOS351    | ACTIVE    | -          | Running     | private=10.0.0.39, 192.168.1.66 |
| ef73a897-8700-4999-ab25-49f25b896f34 | CirrOS370    | ACTIVE    | -          | Running     | private=10.0.0.40, 192.168.1.69 |
| 02718e21-edb9-4b59-8bb7-21e0290650fd | CirrOS390    | SUSPENDED | -          | Shutdown    | private=10.0.0.41, 192.168.1.67 |
| 6992e37c-48c7-49b6-b6fc-8e35fe240704 | UbuntuSRV350 | SUSPENDED | -          | Shutdown    | private=10.0.0.38, 192.168.1.62 |
| 9953ed52-b666-4fe1-ac35-23621122af5a | VF20RS02     | ACTIVE    | -          | Running     | private=10.0.0.43, 192.168.1.71 |
+--------------------------------------+--------------+-----------+------------+-------------+---------------------------------+

[root@icehouse1 ~(keystone_admin)]# nova-manage service list
Binary           Host                                 Zone             Status     State Updated_At
nova-consoleauth icehouse1.localdomain                internal         enabled    :-)   2014-05-27 10:16:15
nova-scheduler   icehouse1.localdomain                internal         enabled    :-)   2014-05-27 10:16:15
nova-conductor   icehouse1.localdomain                internal         enabled    :-)   2014-05-27 10:16:14
nova-compute     icehouse1.localdomain                nova             enabled    :-)   2014-05-27 10:16:18
nova-cert        icehouse1.localdomain                internal         enabled    :-)   2014-05-27 10:16:15
nova-compute     icehouse2.localdomain                nova             enabled    :-)   2014-05-27 10:16:12

[root@icehouse1 ~(keystone_admin)]# neutron agent-list
+--------------------------------------+--------------------+-----------------------+-------+----------------+
| id                                   | agent_type         | host                  | alive | admin_state_up |
+--------------------------------------+--------------------+-----------------------+-------+----------------+
| 6775fac7-d594-4272-8447-f136b54247e8 | L3 agent           | icehouse1.localdomain | :-)   | True           |
| 77fdc8a9-0d77-4f53-9cdd-1c732f0cfdb1 | Metadata agent     | icehouse1.localdomain | :-)   | True           |
| 8f70b2c4-c65b-4d0b-9808-ba494c764d99 | Open vSwitch agent | icehouse1.localdomain | :-)   | True           |
| a86f1272-2afb-43b5-a7e6-e5fc6df565b5 | Open vSwitch agent | icehouse2.localdomain | :-)   | True           |
| e72bdcd5-3dd1-4994-860f-e21d4a58dd4c | DHCP agent         | icehouse1.localdomain | :-)   | True           |
+--------------------------------------+--------------------+-----------------------+-------+----------------+


Windows 2012 evaluation Server running on Compute Node :-

[screenshot of the Windows 2012 guest omitted]
