Once again on the pros and cons of systemd and Upstart

May 16, 2015

Upstart advantages

1. Upstart is easier to port to systems other than Linux, while systemd is tightly bound to Linux kernel capabilities. Adapting Upstart to run on Debian GNU/kFreeBSD and Debian GNU/Hurd looks like a realistic task, which cannot be said of systemd.

2. Upstart is more familiar to Debian developers, many of whom also take part in Ubuntu development. Two members of the Debian Technical Committee (Steve Langasek and Colin Watson) belong to the group of Upstart developers.

3. Upstart is simpler and more lightweight than systemd; less code means fewer bugs. Upstart is also better suited for integration with the code of system daemons. The systemd policy boils down to daemon authors having to adapt to upstream (a replacement for a systemd component must provide a compatible external interface), instead of upstream providing convenient facilities for daemon developers.

4. Upstart is simpler with respect to maintenance, including package maintenance, and the community of Upstart developers is more open to collaboration. With systemd one has to take the systemd way of doing things for granted and follow it, for example supporting a separate /usr partition or using only absolute paths for startup. The shortcomings of Upstart fall into the category of fixable problems; in its current state Upstart is already fully ready for use in Debian 8.0 (Jessie).

5. Upstart offers a more familiar model for defining service configuration, unlike systemd, where settings under /etc override the base unit settings defined in the /lib hierarchy. Using Upstart would preserve a healthy degree of competition, which encourages different approaches and keeps developers in good shape.
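To make the configuration-model point concrete, here is a minimal Upstart job definition; the file path and daemon name are placeholders for illustration, not taken from the article:

```
# /etc/init/mydaemon.conf -- hypothetical Upstart job (all names are placeholders)
description "example daemon"

# start when the local filesystem and loopback are up; stop on shutdown runlevels
start on (filesystem and net-device-up IFACE=lo)
stop on runlevel [016]

respawn
exec /usr/sbin/mydaemon --foreground
```

A single event-driven stanza file like this is the whole service definition, which is the "familiar model" the Debian discussion refers to.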

Systemd advantages

1. Without substantial architectural rework, Upstart will not be able to catch up with systemd in functionality. One example is Upstart's inverted model of dependency startup: instead of starting all required dependencies when a given service is started, Upstart starts a service upon receipt of an event announcing that its dependencies have become available.

2. The use of ptrace interferes with applying Upstart jobs to daemons such as avahi, apache and postfix. systemd can activate a service only upon an actual request to its socket, rather than by indirect signs such as a dependency on the activation of another socket. Upstart also lacks reliable tracking of the state of running processes.

3. Systemd contains a fairly self-sufficient set of components, which makes it possible to concentrate on fixing problems rather than on extending an Upstart configuration with capabilities already present in systemd. For example, Upstart lacks: support for detailed status reporting and logging of daemon operation, multiple socket activation, socket activation for IPv6 and UDP, and a flexible mechanism for restricting resources.

4. Using systemd makes it possible to bring the management tools of different distributions closer together and unify them. Systemd has already been adopted in RHEL 7.x, CentOS 7.x, Fedora, openSUSE, Sabayon, Mandriva and Arch Linux.

5. systemd has a more active, larger and more diverse community of developers, which includes engineers from SUSE and Red Hat. With Upstart, a distribution becomes dependent on Canonical, without whose support Upstart would be left without developers and doomed to stagnation. Participation in Upstart development requires signing an agreement transferring property rights to Canonical. Red Hat had good reason for deciding to replace Upstart with systemd, and the Debian project has already been compelled to migrate to systemd. Implementing some boot-time features in Upstart requires fragments of shell scripts, which makes the initialization process less reliable and more laborious to debug.

6. systemd support is implemented in GNOME and KDE, which make ever more active use of systemd capabilities (for example, the facilities for managing user sessions and for starting each application in a separate cgroup). GNOME continues to be positioned as the main desktop environment of Debian, while relations between the Ubuntu/Upstart and GNOME projects have been visibly strained.
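As a sketch of the socket-activation and resource-restriction features mentioned in items 2 and 3, here is a hypothetical minimal systemd unit pair; the unit names, port and paths are placeholders, not part of the article:

```
# /etc/systemd/system/mydaemon.socket -- hypothetical example
[Unit]
Description=mydaemon socket

[Socket]
# activation happens on an actual request to the socket; both TCP and UDP
# (ListenDatagram) are supported, which Upstart does not offer
ListenStream=8125
ListenDatagram=8125

[Install]
WantedBy=sockets.target

# /etc/systemd/system/mydaemon.service -- hypothetical example
[Unit]
Description=mydaemon service

[Service]
ExecStart=/usr/sbin/mydaemon
# flexible resource restriction via cgroups
MemoryLimit=512M
CPUShares=512
```

Enabling the .socket unit is enough; the .service is started lazily on the first connection.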

References

http://www.opennet.ru/opennews/art.shtml?num=38762


RDO Kilo Three Node Setup for Controller+Network+Compute (ML2&OVS&VXLAN) on CentOS 7.1

May 9, 2015

Following below is a brief instruction for a traditional three-node deployment test (Controller && Network && Compute) of the upcoming RDO Kilo, performed on a Fedora 21 host with the KVM/Libvirt hypervisor (16 GB RAM, Intel Core i7-4771 Haswell CPU, ASUS Z97-P). Three VMs (4 GB RAM, 2 VCPUs) have been set up: the Controller VM with one VNIC (management subnet), the Network Node VM with three VNICs (management, VTEP and external subnets), and the Compute Node VM with two VNICs (management and VTEP subnets).

SELINUX stays in enforcing mode.

Three Libvirt networks were created

# cat openstackvms.xml
<network>
  <name>openstackvms</name>
  <uuid>d0e9964a-f91a-40c0-b769-a609aee41bf2</uuid>
  <forward mode='nat'>
    <nat>
      <port start='1024' end='65535'/>
    </nat>
  </forward>
  <bridge name='virbr2' stp='on' delay='0' />
  <mac address='52:54:00:60:f8:6d'/>
  <ip address='192.169.142.1' netmask='255.255.255.0'>
    <dhcp>
      <range start='192.169.142.2' end='192.169.142.254' />
    </dhcp>
  </ip>
</network>

[root@junoJVC01 ~]# cat public.xml

<network>
  <name>public</name>
  <uuid>d0e9965b-f92c-40c1-b749-b609aed42cf2</uuid>
  <forward mode='nat'>
    <nat>
      <port start='1024' end='65535'/>
    </nat>
  </forward>
  <bridge name='virbr3' stp='on' delay='0' />
  <mac address='52:54:00:60:f8:6d'/>
  <ip address='172.24.4.225' netmask='255.255.255.240'>
    <dhcp>
      <range start='172.24.4.226' end='172.24.4.238' />
    </dhcp>
  </ip>
</network>

[root@junoJVC01 ~]# cat vteps.xml

<network>
  <name>vteps</name>
  <uuid>d0e9965b-f92c-40c1-b749-b609aed42cf2</uuid>
  <forward mode='nat'>
    <nat>
      <port start='1024' end='65535'/>
    </nat>
  </forward>
  <bridge name='virbr4' stp='on' delay='0' />
  <mac address='52:54:00:60:f8:6d'/>
  <ip address='10.0.0.1' netmask='255.255.255.0'>
    <dhcp>
      <range start='10.0.0.1' end='10.0.0.254' />
    </dhcp>
  </ip>
</network>
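Assuming the three XML files above are in the current directory, the networks can be defined, set to autostart and started with virsh; a sketch to be run as root on the virtualization host (the article shows only the resulting net-list):

```shell
# Define, autostart and start each libvirt network from its XML dump
for net in openstackvms public vteps; do
    virsh net-define "${net}.xml"
    virsh net-autostart "${net}"
    virsh net-start "${net}"
done
```

These commands require a running libvirtd on the host, so no portable assertion is possible here.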

[root@junoJVC01 ~]# virsh net-list

 Name                 State      Autostart     Persistent
----------------------------------------------------------
 default              active     yes           yes
 openstackvms         active     yes           yes
 public               active     yes           yes
 vteps                active     yes           yes

*********************************************************************************
1. The first Libvirt subnet, "openstackvms", serves as the management network.
All three VMs are attached to this subnet.
**********************************************************************************
2. The second Libvirt subnet, "public", simulates the external network. The Network Node is attached to it; later the "eth3" interface (which belongs to "public") is converted into an OVS port of br-ex on the Network Node. Via interface virbr3 (172.24.4.225) this Libvirt subnet gives VMs running on the Compute Node access to the Internet, since it matches the external network 172.24.4.224/28 created by the packstack installation.


***********************************************************************************
3. The third Libvirt subnet, "vteps", serves for VTEP endpoint simulation. The Network and Compute Node VMs are attached to this subnet.
***********************************************************************************
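To verify which libvirt network each VNIC of a guest is attached to, `virsh domiflist` can be used; the VM names below are placeholders for however the three guests were actually named:

```shell
# Show Interface / Type / Source / Model / MAC for every VNIC of each guest
for vm in controller network-node compute-node; do
    echo "== $vm =="
    virsh domiflist "$vm"
done
```

The Source column should show openstackvms, public or vteps per the topology described above.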
Start testing following the RH instructions
Per https://www.rdoproject.org/RDO_test_day_Kilo#How_To_Test

# yum install http://rdoproject.org/repos/openstack-kilo/rdo-testing-kilo.rpm
# yum install -y openstack-packstack
*******************************************************
Install rdo-testing-kilo.rpm on all three nodes due to
*******************************************************

https://bugzilla.redhat.com/show_bug.cgi?id=1218750

# yum install http://rdoproject.org/repos/openstack-kilo/rdo-testing-kilo.rpm

Keep SELINUX=enforcing.
Package openstack-selinux-0.6.31-1.el7.noarch will be installed on all nodes of the
deployment by the prescript puppet.
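The article does not show the packstack invocation itself; assuming the answer file shown below, a typical run from the Controller node would look roughly like this:

```shell
# Optionally generate a fresh template first, then edit the host addresses
# (CONFIG_CONTROLLER_HOST, CONFIG_NETWORK_HOSTS, CONFIG_COMPUTE_HOSTS) and
# the ML2/VXLAN settings, then launch the multi-node deployment
packstack --gen-answer-file=answer-fileRHTest.txt
packstack --answer-file=answer-fileRHTest.txt
```

packstack connects to every node over SSH using CONFIG_SSH_KEY, so this must be run where the key pair lives.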

*********************
Answer-file :-
*********************

[root@ip-192-169-142-127 ~(keystone_admin)]# cat answer-fileRHTest.txt

[general]
CONFIG_SSH_KEY=/root/.ssh/id_rsa.pub
CONFIG_DEFAULT_PASSWORD=
CONFIG_MARIADB_INSTALL=y
CONFIG_GLANCE_INSTALL=y
CONFIG_CINDER_INSTALL=y
CONFIG_NOVA_INSTALL=y
CONFIG_NEUTRON_INSTALL=y
CONFIG_HORIZON_INSTALL=y
CONFIG_SWIFT_INSTALL=y
CONFIG_CEILOMETER_INSTALL=y
CONFIG_HEAT_INSTALL=n
CONFIG_CLIENT_INSTALL=y
CONFIG_NTP_SERVERS=
CONFIG_NAGIOS_INSTALL=y
EXCLUDE_SERVERS=
CONFIG_DEBUG_MODE=n
CONFIG_CONTROLLER_HOST=192.169.142.127
CONFIG_COMPUTE_HOSTS=192.169.142.137
CONFIG_NETWORK_HOSTS=192.169.142.147
CONFIG_VMWARE_BACKEND=n
CONFIG_UNSUPPORTED=n
CONFIG_VCENTER_HOST=
CONFIG_VCENTER_USER=
CONFIG_VCENTER_PASSWORD=
CONFIG_VCENTER_CLUSTER_NAME=
CONFIG_STORAGE_HOST=192.169.142.127
CONFIG_USE_EPEL=y
CONFIG_REPO=
CONFIG_RH_USER=
CONFIG_SATELLITE_URL=
CONFIG_RH_PW=
CONFIG_RH_OPTIONAL=y
CONFIG_RH_PROXY=
CONFIG_RH_PROXY_PORT=
CONFIG_RH_PROXY_USER=
CONFIG_RH_PROXY_PW=
CONFIG_SATELLITE_USER=
CONFIG_SATELLITE_PW=
CONFIG_SATELLITE_AKEY=
CONFIG_SATELLITE_CACERT=
CONFIG_SATELLITE_PROFILE=
CONFIG_SATELLITE_FLAGS=
CONFIG_SATELLITE_PROXY=
CONFIG_SATELLITE_PROXY_USER=
CONFIG_SATELLITE_PROXY_PW=
CONFIG_AMQP_BACKEND=rabbitmq
CONFIG_AMQP_HOST=192.169.142.127
CONFIG_AMQP_ENABLE_SSL=n
CONFIG_AMQP_ENABLE_AUTH=n
CONFIG_AMQP_NSS_CERTDB_PW=PW_PLACEHOLDER
CONFIG_AMQP_SSL_PORT=5671
CONFIG_AMQP_SSL_CERT_FILE=/etc/pki/tls/certs/amqp_selfcert.pem
CONFIG_AMQP_SSL_KEY_FILE=/etc/pki/tls/private/amqp_selfkey.pem
CONFIG_AMQP_SSL_SELF_SIGNED=y
CONFIG_AMQP_AUTH_USER=amqp_user
CONFIG_AMQP_AUTH_PASSWORD=PW_PLACEHOLDER
CONFIG_MARIADB_HOST=192.169.142.127
CONFIG_MARIADB_USER=root
CONFIG_MARIADB_PW=7207ae344ed04957
CONFIG_KEYSTONE_DB_PW=abcae16b785245c3
CONFIG_KEYSTONE_REGION=RegionOne
CONFIG_KEYSTONE_ADMIN_TOKEN=3ad2de159f9649afb0c342ba57e637d9
CONFIG_KEYSTONE_ADMIN_PW=7049f834927e4468
CONFIG_KEYSTONE_DEMO_PW=bf737b785cfa4398
CONFIG_KEYSTONE_TOKEN_FORMAT=UUID
CONFIG_KEYSTONE_SERVICE_NAME=keystone
CONFIG_GLANCE_DB_PW=41264fc52ffd4fe8
CONFIG_GLANCE_KS_PW=f6a9398960534797
CONFIG_GLANCE_BACKEND=file
CONFIG_CINDER_DB_PW=5ac08c6d09ba4b69
CONFIG_CINDER_KS_PW=c8cb1ecb8c2b4f6f
CONFIG_CINDER_BACKEND=lvm
CONFIG_CINDER_VOLUMES_CREATE=y
CONFIG_CINDER_VOLUMES_SIZE=10G
CONFIG_CINDER_GLUSTER_MOUNTS=
CONFIG_CINDER_NFS_MOUNTS=
CONFIG_CINDER_NETAPP_LOGIN=
CONFIG_CINDER_NETAPP_PASSWORD=
CONFIG_CINDER_NETAPP_HOSTNAME=
CONFIG_CINDER_NETAPP_SERVER_PORT=80
CONFIG_CINDER_NETAPP_STORAGE_FAMILY=ontap_cluster
CONFIG_CINDER_NETAPP_TRANSPORT_TYPE=http
CONFIG_CINDER_NETAPP_STORAGE_PROTOCOL=nfs
CONFIG_CINDER_NETAPP_SIZE_MULTIPLIER=1.0
CONFIG_CINDER_NETAPP_EXPIRY_THRES_MINUTES=720
CONFIG_CINDER_NETAPP_THRES_AVL_SIZE_PERC_START=20
CONFIG_CINDER_NETAPP_THRES_AVL_SIZE_PERC_STOP=60
CONFIG_CINDER_NETAPP_NFS_SHARES_CONFIG=
CONFIG_CINDER_NETAPP_VOLUME_LIST=
CONFIG_CINDER_NETAPP_VFILER=
CONFIG_CINDER_NETAPP_VSERVER=
CONFIG_CINDER_NETAPP_CONTROLLER_IPS=
CONFIG_CINDER_NETAPP_SA_PASSWORD=
CONFIG_CINDER_NETAPP_WEBSERVICE_PATH=/devmgr/v2
CONFIG_CINDER_NETAPP_STORAGE_POOLS=
CONFIG_NOVA_DB_PW=1e1b5aeeeaf342a8
CONFIG_NOVA_KS_PW=d9583177a2444f06
CONFIG_NOVA_SCHED_CPU_ALLOC_RATIO=16.0
CONFIG_NOVA_SCHED_RAM_ALLOC_RATIO=1.5
CONFIG_NOVA_COMPUTE_MIGRATE_PROTOCOL=tcp
CONFIG_NOVA_COMPUTE_PRIVIF=eth1
CONFIG_NOVA_NETWORK_MANAGER=nova.network.manager.FlatDHCPManager
CONFIG_NOVA_NETWORK_PUBIF=eth0
CONFIG_NOVA_NETWORK_PRIVIF=eth1
CONFIG_NOVA_NETWORK_FIXEDRANGE=192.168.32.0/22
CONFIG_NOVA_NETWORK_FLOATRANGE=10.3.4.0/22
CONFIG_NOVA_NETWORK_DEFAULTFLOATINGPOOL=nova
CONFIG_NOVA_NETWORK_AUTOASSIGNFLOATINGIP=n
CONFIG_NOVA_NETWORK_VLAN_START=100
CONFIG_NOVA_NETWORK_NUMBER=1
CONFIG_NOVA_NETWORK_SIZE=255
CONFIG_NEUTRON_KS_PW=808e36e154bd4cee
CONFIG_NEUTRON_DB_PW=0e2b927a21b44737
CONFIG_NEUTRON_L3_EXT_BRIDGE=br-ex
CONFIG_NEUTRON_L2_PLUGIN=ml2
CONFIG_NEUTRON_METADATA_PW=a965cd23ed2f4502
CONFIG_LBAAS_INSTALL=n
CONFIG_NEUTRON_METERING_AGENT_INSTALL=n
CONFIG_NEUTRON_FWAAS=n
CONFIG_NEUTRON_ML2_TYPE_DRIVERS=vxlan
CONFIG_NEUTRON_ML2_TENANT_NETWORK_TYPES=vxlan
CONFIG_NEUTRON_ML2_MECHANISM_DRIVERS=openvswitch
CONFIG_NEUTRON_ML2_FLAT_NETWORKS=*
CONFIG_NEUTRON_ML2_VLAN_RANGES=
CONFIG_NEUTRON_ML2_TUNNEL_ID_RANGES=1001:2000
CONFIG_NEUTRON_ML2_VXLAN_GROUP=239.1.1.2
CONFIG_NEUTRON_ML2_VNI_RANGES=1001:2000
CONFIG_NEUTRON_L2_AGENT=openvswitch
CONFIG_NEUTRON_LB_TENANT_NETWORK_TYPE=local
CONFIG_NEUTRON_LB_VLAN_RANGES=
CONFIG_NEUTRON_LB_INTERFACE_MAPPINGS=
CONFIG_NEUTRON_OVS_TENANT_NETWORK_TYPE=vxlan
CONFIG_NEUTRON_OVS_VLAN_RANGES=
CONFIG_NEUTRON_OVS_BRIDGE_MAPPINGS=physnet1:br-ex
CONFIG_NEUTRON_OVS_BRIDGE_IFACES=
CONFIG_NEUTRON_OVS_TUNNEL_RANGES=1001:2000
CONFIG_NEUTRON_OVS_TUNNEL_IF=eth1
CONFIG_NEUTRON_OVS_VXLAN_UDP_PORT=4789
CONFIG_HORIZON_SSL=n
CONFIG_SSL_CERT=
CONFIG_SSL_KEY=
CONFIG_SSL_CACHAIN=
CONFIG_SWIFT_KS_PW=8f75bfd461234c30
CONFIG_SWIFT_STORAGES=
CONFIG_SWIFT_STORAGE_ZONES=1
CONFIG_SWIFT_STORAGE_REPLICAS=1
CONFIG_SWIFT_STORAGE_FSTYPE=ext4
CONFIG_SWIFT_HASH=a60aacbedde7429a
CONFIG_SWIFT_STORAGE_SIZE=2G
CONFIG_PROVISION_DEMO=y
CONFIG_PROVISION_TEMPEST=n
CONFIG_PROVISION_TEMPEST_USER=
CONFIG_PROVISION_TEMPEST_USER_PW=44faa4ebc3da4459
CONFIG_PROVISION_DEMO_FLOATRANGE=172.24.4.224/28
CONFIG_PROVISION_TEMPEST_REPO_URI=https://github.com/openstack/tempest.git
CONFIG_PROVISION_TEMPEST_REPO_REVISION=master
CONFIG_PROVISION_ALL_IN_ONE_OVS_BRIDGE=n
CONFIG_HEAT_DB_PW=PW_PLACEHOLDER
CONFIG_HEAT_AUTH_ENC_KEY=fc3fb7fee61e46b0
CONFIG_HEAT_KS_PW=PW_PLACEHOLDER
CONFIG_HEAT_CLOUDWATCH_INSTALL=n
CONFIG_HEAT_USING_TRUSTS=y
CONFIG_HEAT_CFN_INSTALL=n
CONFIG_HEAT_DOMAIN=heat
CONFIG_HEAT_DOMAIN_ADMIN=heat_admin
CONFIG_HEAT_DOMAIN_PASSWORD=PW_PLACEHOLDER
CONFIG_CEILOMETER_SECRET=19ae0e7430174349
CONFIG_CEILOMETER_KS_PW=337b08d4b3a44753
CONFIG_MONGODB_HOST=192.169.142.127
CONFIG_NAGIOS_PW=02f168ee8edd44e4

**********************************************************************************
Upon packstack completion, create the following files on the Network Node,
designed to match the external network created by the installer
**********************************************************************************

[root@ip-192-169-142-147 network-scripts]# cat ifcfg-br-ex

DEVICE="br-ex"
BOOTPROTO="static"
IPADDR="172.24.4.227"
NETMASK="255.255.255.240"
DNS1="83.221.202.254"
BROADCAST="172.24.4.239"
GATEWAY="172.24.4.225"
NM_CONTROLLED="no"
TYPE="OVSIntPort"
OVS_BRIDGE=br-ex
DEVICETYPE="ovs"
DEFROUTE="yes"
IPV4_FAILURE_FATAL="yes"
IPV6INIT=no

[root@ip-192-169-142-147 network-scripts]# cat ifcfg-eth3

DEVICE="eth3"
# HWADDR=00:22:15:63:E4:E2
ONBOOT="yes"
TYPE="OVSPort"
DEVICETYPE="ovs"
OVS_BRIDGE=br-ex
NM_CONTROLLED=no
IPV6INIT=no
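For reference, the wiring that these two ifcfg files set up at boot corresponds roughly to the following one-off Open vSwitch commands (a sketch; the addresses are taken from ifcfg-br-ex above, and the ifcfg files remain the persistent configuration):

```shell
# Make eth3 an OVS port of br-ex and move the external IP onto the bridge
ovs-vsctl --may-exist add-br br-ex
ovs-vsctl --may-exist add-port br-ex eth3
ip addr flush dev eth3
ip addr add 172.24.4.227/28 dev br-ex
ip link set br-ex up
ip route add default via 172.24.4.225
```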
*************************************************
Next steps to be performed on the Network Node :-
*************************************************

# chkconfig network on
# systemctl stop NetworkManager
# systemctl disable NetworkManager
# service network restart

 


[root@ip-192-169-142-147 ~(keystone_admin)]# ovs-vsctl show

d9a60201-a2c2-4c6a-ad9d-63cc2ae296b3
    Bridge br-ex
        Port phy-br-ex
            Interface phy-br-ex
                type: patch
                options: {peer=int-br-ex}
        Port "eth3"
            Interface "eth3"
        Port br-ex
            Interface br-ex
                type: internal
        Port "eth2"
            Interface "eth2"
        Port "qg-d433fa46-e2"
            Interface "qg-d433fa46-e2"
                type: internal
    Bridge br-tun
        fail_mode: secure
        Port "vxlan-0a000089"
            Interface "vxlan-0a000089"
                type: vxlan
                options: {df_default="true", in_key=flow, local_ip="10.0.0.147", out_key=flow, remote_ip="10.0.0.137"}
        Port br-tun
            Interface br-tun
                type: internal
        Port patch-int
            Interface patch-int
                type: patch
                options: {peer=patch-tun}
    Bridge br-int
        fail_mode: secure
        Port int-br-ex
            Interface int-br-ex
                type: patch
                options: {peer=phy-br-ex}
        Port br-int
            Interface br-int
                type: internal
        Port "tap70da94fb-c1"
            tag: 1
            Interface "tap70da94fb-c1"
                type: internal
        Port patch-tun
            Interface patch-tun
                type: patch
                options: {peer=patch-int}
        Port "qr-0737c492-f6"
            tag: 1
            Interface "qr-0737c492-f6"
                type: internal
    ovs_version: "2.3.1"
**********************************************************
Following below is the Network Node status verification
**********************************************************

[root@ip-192-169-142-147 ~(keystone_admin)]# openstack-status

== neutron services ==

neutron-server:                           inactive  (disabled on boot)
neutron-dhcp-agent:                    active
neutron-l3-agent:                         active
neutron-metadata-agent:              active
neutron-openvswitch-agent:         active
== Support services ==
libvirtd:                               active
openvswitch:                       active
dbus:                                   active
[root@ip-192-169-142-147 ~(keystone_admin)]# neutron net-list

+--------------------------------------+----------+------------------------------------------------------+
| id                                   | name     | subnets                                              |
+--------------------------------------+----------+------------------------------------------------------+
| 7ecdfc27-57cf-410d-9a76-8e9eb76582cb | public   | 5fc0118a-f710-448d-af67-17dbfe01d5fc 172.24.4.224/28 |
| 98dd1928-96e8-47fb-a2fe-49292ae092ba | demo_net | ba2cded7-5546-4a64-aa49-7ef4d077dee3 50.0.0.0/24     |
+--------------------------------------+----------+------------------------------------------------------+

[root@ip-192-169-142-147 ~(keystone_admin)]# neutron router-list

+--------------------------------------+------------+-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+-------------+-------+
| id                                   | name       | external_gateway_info                                                                                                                                                                     | distributed | ha    |
+--------------------------------------+------------+-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+-------------+-------+
| d63ca3f3-5b71-4540-bb5c-01b44ce3081b | RouterDemo | {"network_id": "7ecdfc27-57cf-410d-9a76-8e9eb76582cb", "enable_snat": true, "external_fixed_ips": [{"subnet_id": "5fc0118a-f710-448d-af67-17dbfe01d5fc", "ip_address": "172.24.4.229"}]} | False       | False |
+--------------------------------------+------------+-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+-------------+-------+

[root@ip-192-169-142-147 ~(keystone_admin)]# neutron router-port-list RouterDemo

+--------------------------------------+------+-------------------+-------------------------------------------------------------------------------------+
| id                                   | name | mac_address       | fixed_ips                                                                           |
+--------------------------------------+------+-------------------+-------------------------------------------------------------------------------------+
| 0737c492-f607-4d6a-8e72-ad447453b3c0 |      | fa:16:3e:d7:d0:66 | {"subnet_id": "ba2cded7-5546-4a64-aa49-7ef4d077dee3", "ip_address": "50.0.0.1"}     |
| d433fa46-e203-4fdd-b3f7-dcbc884e9f1e |      | fa:16:3e:02:ef:51 | {"subnet_id": "5fc0118a-f710-448d-af67-17dbfe01d5fc", "ip_address": "172.24.4.229"} |
+--------------------------------------+------+-------------------+-------------------------------------------------------------------------------------+

[root@ip-192-169-142-147 ~(keystone_admin)]# neutron port-show 0737c492-f607-4d6a-8e72-ad447453b3c0 | grep ACTIVE
| status                | ACTIVE                                                                          |

[root@ip-192-169-142-147 ~(keystone_admin)]# dmesg | grep promisc
[   14.174240] device ovs-system entered promiscuous mode
[   14.184284] device br-ex entered promiscuous mode
[   14.200068] device eth2 entered promiscuous mode
[   14.200253] device eth3 entered promiscuous mode
[   14.207443] device br-int entered promiscuous mode
[   14.209360] device br-tun entered promiscuous mode
[   27.311116] device virbr0-nic entered promiscuous mode
[  142.406262] device tap70da94fb-c1 entered promiscuous mode
[  144.045031] device qr-0737c492-f6 entered promiscuous mode
[  144.792618] device qg-d433fa46-e2 entered promiscuous mode

**************************************************************
Compute Node Status
**************************************************************

[root@ip-192-169-142-137 ~]#  dmesg | grep promisc
[    9.683238] device ovs-system entered promiscuous mode
[    9.699664] device br-ex entered promiscuous mode
[    9.735288] device br-int entered promiscuous mode
[    9.748086] device br-tun entered promiscuous mode
[  137.203583] device qvbe7160159-fd entered promiscuous mode
[  137.288235] device qvoe7160159-fd entered promiscuous mode
[  137.715508] device qvbe90ef79b-80 entered promiscuous mode
[  137.796083] device qvoe90ef79b-80 entered promiscuous mode
[  605.884770] device tape90ef79b-80 entered promiscuous mode
[  767.083214] device qvbbf1c441c-ad entered promiscuous mode
[  767.184783] device qvobf1c441c-ad entered promiscuous mode
[  767.446575] device tapbf1c441c-ad entered promiscuous mode
[  973.679071] device qvb3c3e98d7-2d entered promiscuous mode
[  973.775480] device qvo3c3e98d7-2d entered promiscuous mode
[  973.997621] device tap3c3e98d7-2d entered promiscuous mode
[ 1863.868574] device tapbf1c441c-ad left promiscuous mode
[ 1889.386251] device tape90ef79b-80 left promiscuous mode
[ 2256.698108] device tap3c3e98d7-2d left promiscuous mode
[ 2336.931559] device qvb6597428d-5b entered promiscuous mode
[ 2337.021941] device qvo6597428d-5b entered promiscuous mode
[ 2337.283293] device tap6597428d-5b entered promiscuous mode
[ 4092.577561] device tap6597428d-5b left promiscuous mode
[ 4099.798474] device tap6597428d-5b entered promiscuous mode
[ 5098.563689] device tape90ef79b-80 entered promiscuous mode

[root@ip-192-169-142-137 ~]# ovs-vsctl show
a0cb406e-b028-4b09-8849-e6e2869ab051
    Bridge br-tun
        fail_mode: secure
        Port "vxlan-0a000093"
            Interface "vxlan-0a000093"
                type: vxlan
                options: {df_default="true", in_key=flow, local_ip="10.0.0.137", out_key=flow, remote_ip="10.0.0.147"}
        Port patch-int
            Interface patch-int
                type: patch
                options: {peer=patch-tun}
        Port br-tun
            Interface br-tun
                type: internal
    Bridge br-int
        fail_mode: secure
        Port "qvoe90ef79b-80"
            tag: 1
            Interface "qvoe90ef79b-80"
        Port br-int
            Interface br-int
                type: internal
        Port "qvobf1c441c-ad"
            tag: 1
            Interface "qvobf1c441c-ad"
        Port patch-tun
            Interface patch-tun
                type: patch
                options: {peer=patch-int}
        Port int-br-ex
            Interface int-br-ex
                type: patch
                options: {peer=phy-br-ex}
        Port "qvo6597428d-5b"
            tag: 1
            Interface "qvo6597428d-5b"
    Bridge br-ex
        Port br-ex
            Interface br-ex
                type: internal
        Port phy-br-ex
            Interface phy-br-ex
                type: patch
                options: {peer=int-br-ex}
    ovs_version: "2.3.1"

[root@ip-192-169-142-137 ~]# brctl show

bridge name       bridge id           STP enabled   interfaces
qbr6597428d-5b    8000.1a483dd02cee   no            qvb6597428d-5b
                                                    tap6597428d-5b
qbrbf1c441c-ad    8000.ca2f911ff649   no            qvbbf1c441c-ad
qbre90ef79b-80    8000.16342824f4ba   no            qvbe90ef79b-80
                                                    tape90ef79b-80
**************************************************
Controller Node status verification
**************************************************

[root@ip-192-169-142-127 ~(keystone_admin)]# openstack-status

== Nova services ==

openstack-nova-api:                     active
openstack-nova-cert:                    active
openstack-nova-compute:             inactive  (disabled on boot)
openstack-nova-network:              inactive  (disabled on boot)
openstack-nova-scheduler:           active
openstack-nova-conductor:           active
== Glance services ==
openstack-glance-api:                   active
openstack-glance-registry:            active
== Keystone service ==
openstack-keystone:                     active
== Horizon service ==
openstack-dashboard:                    active
== neutron services ==
neutron-server:                         active
neutron-dhcp-agent:                  inactive  (disabled on boot)
neutron-l3-agent:                       inactive  (disabled on boot)
neutron-metadata-agent:            inactive  (disabled on boot)
== Swift services ==
openstack-swift-proxy:                 active
openstack-swift-account:              active
openstack-swift-container:            active
openstack-swift-object:                 active
== Cinder services ==
openstack-cinder-api:                      active
openstack-cinder-scheduler:            active
openstack-cinder-volume:                active
openstack-cinder-backup:                active
== Ceilometer services ==
openstack-ceilometer-api:                 active
openstack-ceilometer-central:           active
openstack-ceilometer-compute:         inactive  (disabled on boot)
openstack-ceilometer-collector:         active
openstack-ceilometer-alarm-notifier:    active
openstack-ceilometer-alarm-evaluator:   active
openstack-ceilometer-notification:      active
== Support services ==
mysqld:                                    inactive  (disabled on boot)
libvirtd:                                    active
dbus:                                        active
target:                                      active
rabbitmq-server:                       active
memcached:                             active
== Keystone users ==
/usr/lib/python2.7/site-packages/keystoneclient/shell.py:65: DeprecationWarning: The keystone CLI is deprecated in favor of python-openstackclient. For a Python library, continue using python-keystoneclient.
  'python-keystoneclient.', DeprecationWarning)

+----------------------------------+------------+---------+----------------------+
|                id                |    name    | enabled |        email         |
+----------------------------------+------------+---------+----------------------+
| 4e1008fd31944fecbb18cdc215af23ec |   admin    |   True  |    root@localhost    |
| 621b84dd4b904760b8aa0cc7b897c95c | ceilometer |   True  | ceilometer@localhost |
| 4d6cdea3b7bc49948890457808c0f6f8 |   cinder   |   True  |   cinder@localhost   |
| 8393bb4de49a44b798af8b118b9f0eb6 |    demo    |   True  |                      |
| f9be6eaa789e4b3c8771372fffb00230 |   glance   |   True  |   glance@localhost   |
| a518b95a92044ad9a4b04f0be90e385f |  neutron   |   True  |  neutron@localhost   |
| 40dddef540fb4fa5a69fb7baa03de657 |    nova    |   True  |    nova@localhost    |
| 5fbb2b97ab9d4192a3f38f090e54ffb1 |   swift    |   True  |   swift@localhost    |
+----------------------------------+------------+---------+----------------------+
== Glance images ==
+--------------------------------------+--------------+-------------+------------------+-----------+--------+
| ID                                   | Name         | Disk Format | Container Format | Size      | Status |
+--------------------------------------+--------------+-------------+------------------+-----------+--------+
| 1b4a6b08-d63c-4d8d-91da-16f6ba177009 | cirros       | qcow2       | bare             | 13200896  | active |
| cb05124d-0d30-43a7-a033-0b7ff0ea1d47 | Fedor21image | qcow2       | bare             | 158443520 | active |
+--------------------------------------+--------------+-------------+------------------+-----------+--------+
== Nova managed services ==
+----+------------------+----------------------------------------+----------+---------+-------+----------------------------+-----------------+
| Id | Binary           | Host                                   | Zone     | Status  | State | Updated_at                 | Disabled Reason |
+----+------------------+----------------------------------------+----------+---------+-------+----------------------------+-----------------+
| 1  | nova-consoleauth | ip-192-169-142-127.ip.secureserver.net | internal | enabled | up    | 2015-05-09T14:14:16.000000 | -               |
| 2  | nova-scheduler   | ip-192-169-142-127.ip.secureserver.net | internal | enabled | up    | 2015-05-09T14:14:17.000000 | -               |
| 3  | nova-conductor   | ip-192-169-142-127.ip.secureserver.net | internal | enabled | up    | 2015-05-09T14:14:16.000000 | -               |
| 4  | nova-cert        | ip-192-169-142-127.ip.secureserver.net | internal | enabled | up    | 2015-05-09T14:14:17.000000 | -               |
| 5  | nova-compute     | ip-192-169-142-137.ip.secureserver.net | nova     | enabled | up    | 2015-05-09T14:14:21.000000 | -               |
+----+------------------+----------------------------------------+----------+---------+-------+----------------------------+-----------------+
== Nova networks ==
+--------------------------------------+----------+------+
| ID                                   | Label    | Cidr |
+--------------------------------------+----------+------+
| 7ecdfc27-57cf-410d-9a76-8e9eb76582cb | public   | -    |
| 98dd1928-96e8-47fb-a2fe-49292ae092ba | demo_net | -    |
+--------------------------------------+----------+------+
== Nova instance flavors ==
+----+-----------+-----------+------+-----------+------+-------+-------------+-----------+
| ID | Name      | Memory_MB | Disk | Ephemeral | Swap | VCPUs | RXTX_Factor | Is_Public |
+----+-----------+-----------+------+-----------+------+-------+-------------+-----------+
| 1  | m1.tiny   | 512       | 1    | 0         |      | 1     | 1.0         | True      |
| 2  | m1.small  | 2048      | 20   | 0         |      | 1     | 1.0         | True      |
| 3  | m1.medium | 4096      | 40   | 0         |      | 2     | 1.0         | True      |
| 4  | m1.large  | 8192      | 80   | 0         |      | 4     | 1.0         | True      |
| 5  | m1.xlarge | 16384     | 160  | 0         |      | 8     | 1.0         | True      |
+----+-----------+-----------+------+-----------+------+-------+-------------+-----------+
== Nova instances ==
+----+------+--------+------------+-------------+----------+
| ID | Name | Status | Task State | Power State | Networks |
+----+------+--------+------------+-------------+----------+
+----+------+--------+------------+-------------+----------+

[root@ip-192-169-142-127 ~(keystone_admin)]# nova hypervisor-list

+----+----------------------------------------+-------+---------+
| ID | Hypervisor hostname                    | State | Status  |
+----+----------------------------------------+-------+---------+
| 1  | ip-192-169-142-137.ip.secureserver.net | up    | enabled |
+----+----------------------------------------+-------+---------+

[root@ip-192-169-142-127 ~(keystone_admin)]# neutron agent-list
+--------------------------------------+--------------------+----------------------------------------+-------+----------------+---------------------------+
| id                                   | agent_type         | host                                   | alive | admin_state_up | binary                    |
+--------------------------------------+--------------------+----------------------------------------+-------+----------------+---------------------------+
| 22af7b3b-232f-4642-9418-d1c8021c7eb5 | Open vSwitch agent | ip-192-169-142-147.ip.secureserver.net | :-)   | True           | neutron-openvswitch-agent |
| 34e1078c-c75b-4d14-b813-b273ea8f7b86 | L3 agent           | ip-192-169-142-147.ip.secureserver.net | :-)   | True           | neutron-l3-agent          |
| 5d652094-6711-409d-8546-e29c09e03d5a | Metadata agent     | ip-192-169-142-147.ip.secureserver.net | :-)   | True           | neutron-metadata-agent    |
| 8a8ad680-1071-4c7f-8787-ba4ef0a7dfb7 | DHCP agent         | ip-192-169-142-147.ip.secureserver.net | :-)   | True           | neutron-dhcp-agent        |
| d81e97af-c210-4855-af06-fb1d139e2e10 | Open vSwitch agent | ip-192-169-142-137.ip.secureserver.net | :-)   | True           | neutron-openvswitch-agent |
+--------------------------------------+--------------------+----------------------------------------+-------+----------------+---------------------------+

[root@ip-192-169-142-127 ~(keystone_admin)]# nova service-list

+----+------------------+----------------------------------------+----------+---------+-------+----------------------------+-----------------+
| Id | Binary           | Host                                   | Zone     | Status  | State | Updated_at                 | Disabled Reason |
+----+------------------+----------------------------------------+----------+---------+-------+----------------------------+-----------------+
| 1  | nova-consoleauth | ip-192-169-142-127.ip.secureserver.net | internal | enabled | up    | 2015-05-09T14:15:16.000000 | -               |
| 2  | nova-scheduler   | ip-192-169-142-127.ip.secureserver.net | internal | enabled | up    | 2015-05-09T14:15:17.000000 | -               |
| 3  | nova-conductor   | ip-192-169-142-127.ip.secureserver.net | internal | enabled | up    | 2015-05-09T14:15:16.000000 | -               |
| 4  | nova-cert        | ip-192-169-142-127.ip.secureserver.net | internal | enabled | up    | 2015-05-09T14:15:17.000000 | -               |
| 5  | nova-compute     | ip-192-169-142-137.ip.secureserver.net | nova     | enabled | up    | 2015-05-09T14:15:21.000000 | -               |
+----+------------------+----------------------------------------+----------+---------+-------+----------------------------+-----------------+