CPU Pinning and NUMA Topology on RDO Kilo on Fedora Server 22

August 1, 2015
The posting below follows up http://redhatstackblog.redhat.com/2015/05/05/cpu-pinning-and-numa-topology-awareness-in-openstack-compute/
on RDO Kilo installed on Fedora 22. After the upgrade to the upstream version of openstack-puppet-modules-2015.1.9, the procedure of the RDO Kilo install on F22 changed significantly. Details follow below :-
*****************************************************************************************
RDO Kilo set up on Fedora ( openstack-puppet-modules-2015.1.9-4.fc23.noarch)
*****************************************************************************************
# dnf install -y https://rdoproject.org/repos/rdo-release.rpm
# dnf install -y openstack-packstack
Generate the answer file and update it :-
# packstack --gen-answer-file answer-file-aio.txt
and set CONFIG_KEYSTONE_SERVICE_NAME=httpd
****************************************************************************
I also commented out the second line in /etc/httpd/conf.d/mod_dnssd.conf
****************************************************************************
You might be hit by bug https://bugzilla.redhat.com/show_bug.cgi?id=1249482
As a pre-install step, apply the patch https://review.openstack.org/#/c/209032/
to fix neutron_api.pp. The puppet templates are located in
/usr/lib/python2.7/site-packages/packstack/puppet/templates.
You might also be hit by https://bugzilla.redhat.com/show_bug.cgi?id=1234042
The workaround is in comments 6 and 11.
****************
Then run :-
****************

# packstack --answer-file=./answer-file-aio.txt

The final target is to reproduce the mentioned article on an i7 4790 Haswell CPU box and launch a nova instance with CPU pinning.

[root@fedora22server ~(keystone_admin)]# uname -a
Linux fedora22server.localdomain 4.1.3-200.fc22.x86_64 #1 SMP Wed Jul 22 19:51:58 UTC 2015 x86_64 x86_64 x86_64 GNU/Linux

[root@fedora22server ~(keystone_admin)]# rpm -qa \*qemu\*
qemu-system-x86-2.3.0-6.fc22.x86_64
qemu-img-2.3.0-6.fc22.x86_64
qemu-guest-agent-2.3.0-6.fc22.x86_64
qemu-kvm-2.3.0-6.fc22.x86_64
ipxe-roms-qemu-20150407-1.gitdc795b9f.fc22.noarch
qemu-common-2.3.0-6.fc22.x86_64
libvirt-daemon-driver-qemu-1.2.13.1-2.fc22.x86_64

[root@fedora22server ~(keystone_admin)]# numactl --hardware
available: 1 nodes (0)
node 0 cpus: 0 1 2 3 4 5 6 7
node 0 size: 15991 MB
node 0 free: 4399 MB
node distances:
node   0
  0:  10

[root@fedora22server ~(keystone_admin)]# virsh capabilities

<capabilities>
<host>
<uuid>00fd5d2c-dad7-dd11-ad7e-7824af431b53</uuid>
<cpu>
<arch>x86_64</arch>
<model>Haswell-noTSX</model>
<vendor>Intel</vendor>
<topology sockets='1' cores='4' threads='2'/>
<feature name='invtsc'/>
<feature name='abm'/>
<feature name='pdpe1gb'/>
<feature name='rdrand'/>
<feature name='f16c'/>
<feature name='osxsave'/>
<feature name='pdcm'/>
<feature name='xtpr'/>
<feature name='tm2'/>
<feature name='est'/>
<feature name='smx'/>
<feature name='vmx'/>
<feature name='ds_cpl'/>
<feature name='monitor'/>
<feature name='dtes64'/>
<feature name='pbe'/>
<feature name='tm'/>
<feature name='ht'/>
<feature name='ss'/>
<feature name='acpi'/>
<feature name='ds'/>
<feature name='vme'/>
<pages unit='KiB' size='4'/>
<pages unit='KiB' size='2048'/>
</cpu>
<power_management>
<suspend_mem/>
<suspend_disk/>
<suspend_hybrid/>
</power_management>
<migration_features>
<live/>
<uri_transports>
<uri_transport>tcp</uri_transport>
<uri_transport>rdma</uri_transport>
</uri_transports>
</migration_features>
<topology>
<cells num='1'>
<cell id='0'>
<memory unit='KiB'>16374824</memory>
<pages unit='KiB' size='4'>4093706</pages>
<pages unit='KiB' size='2048'>0</pages>
<distances>
<sibling id='0' value='10'/>
</distances>
<cpus num='8'>
<cpu id='0' socket_id='0' core_id='0' siblings='0,4'/>
<cpu id='1' socket_id='0' core_id='1' siblings='1,5'/>
<cpu id='2' socket_id='0' core_id='2' siblings='2,6'/>
<cpu id='3' socket_id='0' core_id='3' siblings='3,7'/>
<cpu id='4' socket_id='0' core_id='0' siblings='0,4'/>
<cpu id='5' socket_id='0' core_id='1' siblings='1,5'/>
<cpu id='6' socket_id='0' core_id='2' siblings='2,6'/>
<cpu id='7' socket_id='0' core_id='3' siblings='3,7'/>
</cpus>
</cell>
</cells>
</topology>

On each Compute node where pinning of virtual machines is to be permitted, open the /etc/nova/nova.conf file and make the following modifications:

Set the vcpu_pin_set value to a list or range of logical CPU cores to reserve for virtual machine processes. OpenStack Compute will ensure guest virtual machine instances are pinned to these virtual CPU cores.
vcpu_pin_set=2,3,6,7

Set reserved_host_memory_mb to reserve RAM for host processes. For the purposes of testing, the default of 512 MB was used:
reserved_host_memory_mb=512

# systemctl restart openstack-nova-compute.service

************************************
SCHEDULER CONFIGURATION
************************************

Update /etc/nova/nova.conf

scheduler_default_filters=RetryFilter,AvailabilityZoneFilter,RamFilter,
ComputeFilter,ComputeCapabilitiesFilter,ImagePropertiesFilter,CoreFilter,
NUMATopologyFilter,AggregateInstanceExtraSpecsFilter

# systemctl restart openstack-nova-scheduler.service

At this point, when creating a guest you may see changes appear in the XML, pinning the guest vCPU(s) to the cores listed in vcpu_pin_set:

<vcpu placement='static' cpuset='2-3,6-7'>1</vcpu>
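To double-check the result from the host side, virsh can print the current vCPU-to-pCPU affinity of a running guest (the domain name below is just an example taken from `virsh list`):

# virsh vcpupin instance-0000000d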

Add isolcpus=2,3,6,7 to the end of the kernel (vmlinuz) command line in grub2:
isolcpus=2,3,6,7
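To make the isolation persistent across kernel updates, the parameter can instead be appended to GRUB_CMDLINE_LINUX in /etc/default/grub and the config regenerated; a minimal sketch, assuming a BIOS install with the stock Fedora paths:

# vi /etc/default/grub    (append isolcpus=2,3,6,7 to GRUB_CMDLINE_LINUX)
# grub2-mkconfig -o /boot/grub2/grub.cfg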

***************
REBOOT
***************
[root@fedora22server ~(keystone_admin)]# nova aggregate-create performance

+----+-------------+-------------------+-------+----------+
| Id | Name        | Availability Zone | Hosts | Metadata |
+----+-------------+-------------------+-------+----------+
| 1  | performance | -                 |       |          |
+----+-------------+-------------------+-------+----------+

[root@fedora22server ~(keystone_admin)]# nova aggregate-set-metadata 1 pinned=true
Metadata has been successfully updated for aggregate 1.
+----+-------------+-------------------+-------+---------------+
| Id | Name        | Availability Zone | Hosts | Metadata      |
+----+-------------+-------------------+-------+---------------+
| 1  | performance | -                 |       | 'pinned=true' |
+----+-------------+-------------------+-------+---------------+

[root@fedora22server ~(keystone_admin)]# nova flavor-create m1.small.performance 6 4096 20 4
+----+----------------------+-----------+------+-----------+------+-------+-------------+-----------+
| ID | Name                 | Memory_MB | Disk | Ephemeral | Swap | VCPUs | RXTX_Factor | Is_Public |
+----+----------------------+-----------+------+-----------+------+-------+-------------+-----------+
| 6  | m1.small.performance | 4096      | 20   | 0         |      | 4     | 1.0         | True      |
+----+----------------------+-----------+------+-----------+------+-------+-------------+-----------+

[root@fedora22server ~(keystone_admin)]# nova flavor-key 6 set hw:cpu_policy=dedicated
[root@fedora22server ~(keystone_admin)]# nova flavor-key 6 set aggregate_instance_extra_specs:pinned=true
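To verify that both keys landed on the flavor before booting, its extra specs can be read back (this only prints what the two commands above set):

# nova flavor-show m1.small.performance | grep extra_specs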
[root@fedora22server ~(keystone_admin)]# hostname
fedora22server.localdomain

[root@fedora22server ~(keystone_admin)]# nova aggregate-add-host 1 fedora22server.localdomain
Host fedora22server.localdomain has been successfully added for aggregate 1
+----+-------------+-------------------+------------------------------+---------------+
| Id | Name        | Availability Zone | Hosts                        | Metadata      |
+----+-------------+-------------------+------------------------------+---------------+
| 1  | performance | -                 | 'fedora22server.localdomain' | 'pinned=true' |
+----+-------------+-------------------+------------------------------+---------------+

[root@fedora22server ~(keystone_admin)]# . keystonerc_demo
[root@fedora22server ~(keystone_demo)]# glance image-list
+--------------------------------------+---------------------------------+-------------+------------------+-------------+--------+
| ID                                   | Name                            | Disk Format | Container Format | Size        | Status |
+--------------------------------------+---------------------------------+-------------+------------------+-------------+--------+
| bf6f5272-ae26-49ae-b0f9-3c4fcba350f6 | CentOS71Image                   | qcow2       | bare             | 1004994560  | active |
| 05ac955e-3503-4bcf-8413-6a1b3c98aefa | cirros                          | qcow2       | bare             | 13200896    | active |
| 7b2085b8-4fe7-4d32-a5c9-5eadaf8bfc52 | VF22Image                       | qcow2       | bare             | 228599296   | active |
| c695e7fa-a69f-4220-abd8-2269b75af827 | Windows Server 2012 R2 Std Eval | qcow2       | bare             | 17182752768 | active |
+--------------------------------------+---------------------------------+-------------+------------------+-------------+--------+

[root@fedora22server ~(keystone_demo)]# neutron net-list
+--------------------------------------+----------+-----------------------------------------------------+
| id                                   | name     | subnets                                             |
+--------------------------------------+----------+-----------------------------------------------------+
| 0daa3a02-c598-4c46-b1ac-368da5542927 | public   | 8303b2f3-2de2-44c2-bd5e-fc0966daec53 192.168.1.0/24 |
| c85a4215-1558-4a95-886d-a2f75500e052 | demo_net | 0cab6cbc-dd80-42c6-8512-74d7b2cbf730 50.0.0.0/24    |
+--------------------------------------+----------+-----------------------------------------------------+

*************************************************************************
At this point, attempt to launch an F22 cloud instance with the created flavor
m1.small.performance
*************************************************************************

[root@fedora22server ~(keystone_demo)]# nova boot --image 7b2085b8-4fe7-4d32-a5c9-5eadaf8bfc52 --key-name oskeydev --flavor m1.small.performance --nic net-id=c85a4215-1558-4a95-886d-a2f75500e052 vf22-instance

+--------------------------------------+--------------------------------------------------+
| Property                             | Value                                            |
+--------------------------------------+--------------------------------------------------+
| OS-DCF:diskConfig                    | MANUAL                                           |
| OS-EXT-AZ:availability_zone          | nova                                             |
| OS-EXT-STS:power_state               | 0                                                |
| OS-EXT-STS:task_state                | scheduling                                       |
| OS-EXT-STS:vm_state                  | building                                         |
| OS-SRV-USG:launched_at               | -                                                |
| OS-SRV-USG:terminated_at             | -                                                |
| accessIPv4                           |                                                  |
| accessIPv6                           |                                                  |
| adminPass                            | XsGr87ZLGX8P                                     |
| config_drive                         |                                                  |
| created                              | 2015-07-31T08:03:49Z                             |
| flavor                               | m1.small.performance (6)                         |
| hostId                               |                                                  |
| id                                   | 4b99f3cf-3126-48f3-9e00-94787f040e43             |
| image                                | VF22Image (7b2085b8-4fe7-4d32-a5c9-5eadaf8bfc52) |
| key_name                             | oskeydev                                         |
| metadata                             | {}                                               |
| name                                 | vf22-instance                                    |
| os-extended-volumes:volumes_attached | []                                               |
| progress                             | 0                                                |
| security_groups                      | default                                          |
| status                               | BUILD                                            |
| tenant_id                            | 14f736e6952644b584b2006353ca51be                 |
| updated                              | 2015-07-31T08:03:50Z                             |
| user_id                              | 4ece2385b17a4490b6fc5a01ff53350c                 |
+--------------------------------------+--------------------------------------------------+

[root@fedora22server ~(keystone_demo)]# nova list
+--------------------------------------+---------------+---------+------------+-------------+-----------------------------------+
| ID                                   | Name          | Status  | Task State | Power State | Networks                          |
+--------------------------------------+---------------+---------+------------+-------------+-----------------------------------+
| 93906a61-ec0b-481d-b964-2bb99d095646 | CentOS71RLX   | SHUTOFF | -          | Shutdown    | demo_net=50.0.0.21, 192.168.1.159 |
| ac7e9be5-d2dc-4ec0-b0a1-4096b552e578 | VF22Devpin    | ACTIVE  | -          | Running     | demo_net=50.0.0.22                |
| b93c9526-ded5-4b7a-ae3a-106b34317744 | VF22Devs      | SHUTOFF | -          | Shutdown    | demo_net=50.0.0.19, 192.168.1.157 |
| bef20a1e-3faa-4726-a301-73ca49666fa6 | WinSrv2012    | SHUTOFF | -          | Shutdown    | demo_net=50.0.0.16                |
| 4b99f3cf-3126-48f3-9e00-94787f040e43 | vf22-instance | ACTIVE  | -          | Running     | demo_net=50.0.0.23, 192.168.1.160 |
+--------------------------------------+---------------+---------+------------+-------------+-----------------------------------+

[root@fedora22server ~(keystone_demo)]# virsh list
 Id    Name                State
----------------------------------------------------
 2     instance-0000000c   running
 3     instance-0000000d   running

Please see http://redhatstackblog.redhat.com/2015/05/05/cpu-pinning-and-numa-topology-awareness-in-openstack-compute/
for a detailed explanation of the highlighted blocks, keeping in mind that pinning is done to logical CPU cores (not physical ones, due to the 4-core CPU with HT enabled). Multiple NUMA cells are also absent, due to limitations of the i7 47xx Haswell CPU architecture.

[root@fedora22server ~(keystone_demo)]# virsh dumpxml instance-0000000d > vf22-instance.xml
<domain type='kvm' id='3'>
<name>instance-0000000d</name>
<uuid>4b99f3cf-3126-48f3-9e00-94787f040e43</uuid>
<metadata>
<nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.0">
<nova:package version="2015.1.0-3.fc23"/>
<nova:name>vf22-instance</nova:name>
<nova:creationTime>2015-07-31 08:03:54</nova:creationTime>
<nova:flavor name="m1.small.performance">
<nova:memory>4096</nova:memory>
<nova:disk>20</nova:disk>
<nova:swap>0</nova:swap>
<nova:ephemeral>0</nova:ephemeral>
<nova:vcpus>4</nova:vcpus>
</nova:flavor>
<nova:owner>
<nova:user uuid="4ece2385b17a4490b6fc5a01ff53350c">demo</nova:user>
<nova:project uuid="14f736e6952644b584b2006353ca51be">demo</nova:project>
</nova:owner>
<nova:root type="image" uuid="7b2085b8-4fe7-4d32-a5c9-5eadaf8bfc52"/>
</nova:instance>
</metadata>
<memory unit='KiB'>4194304</memory>
<currentMemory unit='KiB'>4194304</currentMemory>
<vcpu placement='static'>4</vcpu>
<cputune>
<shares>4096</shares>
<vcpupin vcpu='0' cpuset='2'/>
<vcpupin vcpu='1' cpuset='6'/>
<vcpupin vcpu='2' cpuset='3'/>
<vcpupin vcpu='3' cpuset='7'/>
<emulatorpin cpuset='2-3,6-7'/>
</cputune>
<numatune>
<memory mode='strict' nodeset='0'/>
<memnode cellid='0' mode='strict' nodeset='0'/>
</numatune>
<resource>
<partition>/machine</partition>
</resource>
<sysinfo type='smbios'>
<system>
<entry name='manufacturer'>Fedora Project</entry>
<entry name='product'>OpenStack Nova</entry>
<entry name='version'>2015.1.0-3.fc23</entry>
<entry name='serial'>f1b336b1-6abf-4180-865a-b6be5670352e</entry>
<entry name='uuid'>4b99f3cf-3126-48f3-9e00-94787f040e43</entry>
</system>
</sysinfo>
<os>
<type arch='x86_64' machine='pc-i440fx-2.3'>hvm</type>
<boot dev='hd'/>
<smbios mode='sysinfo'/>
</os>
<features>
<acpi/>
<apic/>
</features>
<cpu mode='host-model'>
<model fallback='allow'/>
<topology sockets='2' cores='1' threads='2'/>
<numa>
<cell id='0' cpus='0-3' memory='4194304' unit='KiB'/>
</numa>
</cpu>
<clock offset='utc'>
<timer name='pit' tickpolicy='delay'/>
<timer name='rtc' tickpolicy='catchup'/>
<timer name='hpet' present='no'/>
</clock>
<on_poweroff>destroy</on_poweroff>
<on_reboot>restart</on_reboot>
<on_crash>destroy</on_crash>
<devices>
<emulator>/usr/bin/qemu-kvm</emulator>
<disk type='file' device='disk'>
<driver name='qemu' type='qcow2' cache='none'/>
<source file='/var/lib/nova/instances/4b99f3cf-3126-48f3-9e00-94787f040e43/disk'/>
<backingStore type='file' index='1'>
<format type='raw'/>
<source file='/var/lib/nova/instances/_base/6c60a5ed1b3037bbdb2bed198dac944f4c0d09cb'/>
<backingStore/>
</backingStore>
<target dev='vda' bus='virtio'/>
<alias name='virtio-disk0'/>
<address type='pci' domain='0x0000' bus='0x00' slot='0x06' function='0x0'/>
</disk>
<controller type='usb' index='0'>
<alias name='usb0'/>
<address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x2'/>
</controller>
<controller type='pci' index='0' model='pci-root'>
<alias name='pci.0'/>
</controller>
<controller type='virtio-serial' index='0'>
<alias name='virtio-serial0'/>
<address type='pci' domain='0x0000' bus='0x00' slot='0x05' function='0x0'/>
</controller>
<interface type='bridge'>
<mac address='fa:16:3e:4f:25:03'/>
<source bridge='qbr567b21fe-52'/>
<target dev='tap567b21fe-52'/>
<model type='virtio'/>
<alias name='net0'/>
<address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x0'/>
</interface>
<serial type='file'>
<source path='/var/lib/nova/instances/4b99f3cf-3126-48f3-9e00-94787f040e43/console.log'/>
<target port='0'/>
<alias name='serial0'/>
</serial>
<serial type='pty'>
<source path='/dev/pts/2'/>
<target port='1'/>
<alias name='serial1'/>
</serial>
<console type='file'>
<source path='/var/lib/nova/instances/4b99f3cf-3126-48f3-9e00-94787f040e43/console.log'/>
<target type='serial' port='0'/>
<alias name='serial0'/>
</console>
<channel type='spicevmc'>
<target type='virtio' name='com.redhat.spice.0' state='disconnected'/>
<alias name='channel0'/>
<address type='virtio-serial' controller='0' bus='0' port='1'/>
</channel>
<input type='mouse' bus='ps2'/>
<input type='keyboard' bus='ps2'/>
<graphics type='spice' port='5901' autoport='yes' listen='0.0.0.0' keymap='en-us'>
<listen type='address' address='0.0.0.0'/>
</graphics>
<sound model='ich6'>
<alias name='sound0'/>
<address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x0'/>
</sound>
<video>
<model type='qxl' ram='65536' vram='65536' vgamem='16384' heads='1'/>
<alias name='video0'/>
<address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x0'/>
</video>
<memballoon model='virtio'>
<alias name='balloon0'/>
<address type='pci' domain='0x0000' bus='0x00' slot='0x07' function='0x0'/>
<stats period='10'/>
</memballoon>
</devices>
<seclabel type='dynamic' model='selinux' relabel='yes'>
<label>system_u:system_r:svirt_t:s0:c359,c706</label>
<imagelabel>system_u:object_r:svirt_image_t:s0:c359,c706</imagelabel>
</seclabel>
</domain>


Switching to Dashboard Spice Console in RDO Kilo on Fedora 22

July 3, 2015

*************************
UPDATE 06/27/2015
*************************
# dnf install -y https://rdoproject.org/repos/rdo-release.rpm
# dnf  install -y openstack-packstack  
# dnf install fedora-repos-rawhide
# dnf --enablerepo=rawhide update openstack-packstack
Fedora – Rawhide – Developmental packages for the next Fedora re 1.7 MB/s |  45 MB     00:27
Last metadata expiration check performed 0:00:39 ago on Sat Jun 27 13:23:03 2015.
Dependencies resolved.
==============================================================
Package                       Arch      Version                                Repository  Size
==============================================================
Upgrading:
openstack-packstack           noarch    2015.1-0.7.dev1577.gc9f8c3c.fc23       rawhide    233 k
openstack-packstack-puppet    noarch    2015.1-0.7.dev1577.gc9f8c3c.fc23       rawhide     233 k
Transaction Summary
==============================================================
Upgrade  2 Packages
.  .  .  .  .
# dnf install python3-pyOpenSSL.noarch 
At this point run :-
# packstack --gen-answer-file answer-file-aio.txt
and set
CONFIG_KEYSTONE_SERVICE_NAME=httpd
I also commented out the second line in /etc/httpd/conf.d/mod_dnssd.conf
Then run `packstack --answer-file=./answer-file-aio.txt`; however, you will still need to pre-patch provision_demo.pp at the moment
(see the third patch at http://textuploader.com/yn0v ), the rest should work fine.

Upon completion you may try to follow :-
https://www.rdoproject.org/Neutron_with_existing_external_network

I didn't test it on Fedora 22; I just created external and private networks of VXLAN type and configured the interfaces below.
[root@ServerFedora22 network-scripts(keystone_admin)]# cat ifcfg-br-ex
DEVICE="br-ex"
BOOTPROTO="static"
IPADDR="192.168.1.32"
NETMASK="255.255.255.0"
DNS1="8.8.8.8"
BROADCAST="192.168.1.255"
GATEWAY="192.168.1.1"
NM_CONTROLLED="no"
TYPE="OVSIntPort"
OVS_BRIDGE=br-ex
DEVICETYPE="ovs"
DEFROUTE="yes"
IPV4_FAILURE_FATAL="yes"
IPV6INIT=no

[root@ServerFedora22 network-scripts(keystone_admin)]# cat ifcfg-enp2s0
DEVICE="enp2s0"
ONBOOT="yes"
HWADDR="90:E6:BA:2D:11:EB"
TYPE="OVSPort"
DEVICETYPE="ovs"
OVS_BRIDGE=br-ex
NM_CONTROLLED=no
IPV6INIT=no

When the configuration above is done :-

# chkconfig network on
# systemctl stop NetworkManager
# systemctl disable NetworkManager
# reboot
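For reference, the bridge wiring described by these ifcfg files can also be built by hand with OVS; the initscripts perform roughly the equivalent of the following at boot:

# ovs-vsctl add-br br-ex
# ovs-vsctl add-port br-ex enp2s0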

*************************
UPDATE 06/26/2015
*************************

To install RDO Kilo on Fedora 22 :-
after `dnf -y install openstack-packstack`
# cd /usr/lib/python2.7/site-packages/packstack/puppet/templates
Then apply the following 3 patches
# cd ; packstack --gen-answer-file answer-file-aio.txt
Set "CONFIG_NAGIOS_INSTALL=n" in answer-file-aio.txt
# packstack --answer-file=./answer-file-aio.txt

************************
UPDATE 05/19/2015
************************
MATE Desktop supports sound (via the patch mentioned below) on RDO Kilo cloud instances F22, F21, F20. The RDO Kilo AIO install was performed on bare metal.
Also, a Windows Server 2012 (evaluation version) cloud VM provides pretty stable video/sound ( http://www.cloudbase.it/windows-cloud-images/ ).

************************
UPDATE 05/14/2015
************************
I've got sound working on a CentOS 7 VM (console connection via virt-manager) with a slightly updated patch by Y. Kawada, self.type set to "ich6". RDO Kilo was installed on a bare-metal AIO testing host, Fedora 22. The same results have been obtained for RDO Kilo on CentOS 7.1. However, a spice console connection with cut&&paste and sound enabled may be obtained via spicy (remote connection).

Generated libvirt.xml

<domain type="kvm">
<uuid>455877f2-7070-48a7-bb24-e0702be2fbc5</uuid>
<name>instance-00000003</name>
<memory>2097152</memory>
<vcpu cpuset="0-7">1</vcpu>
<metadata>
<nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.0">
<nova:package version="2015.1.0-3.el7"/>
<nova:name>CentOS7RSX05</nova:name>
<nova:creationTime>2015-06-14 18:42:11</nova:creationTime>
<nova:flavor name="m1.small">
<nova:memory>2048</nova:memory>
<nova:disk>20</nova:disk>
<nova:swap>0</nova:swap>
<nova:ephemeral>0</nova:ephemeral>
<nova:vcpus>1</nova:vcpus>
</nova:flavor>
<nova:owner>
<nova:user uuid="da79d2c66db747eab942bdbe20bb3f44">demo</nova:user>
<nova:project uuid="8c9defac20a74633af4bb4773e45f11e">demo</nova:project>
</nova:owner>
<nova:root type="image" uuid="4a2d708c-7624-439f-9e7e-6e133062e23a"/>
</nova:instance>
</metadata>
<sysinfo type="smbios">
<system>
<entry name="manufacturer">Fedora Project</entry>
<entry name="product">OpenStack Nova</entry>
<entry name="version">2015.1.0-3.el7</entry>
<entry name="serial">b3fae7c3-10bd-455b-88b7-95e586342203</entry>
<entry name="uuid">455877f2-7070-48a7-bb24-e0702be2fbc5</entry>
</system>
</sysinfo>
<os>
<type>hvm</type>
<boot dev="hd"/>
<smbios mode="sysinfo"/>
</os>
<features>
<acpi/>
<apic/>
</features>
<cputune>
<shares>1024</shares>
</cputune>
<clock offset="utc">
<timer name="pit" tickpolicy="delay"/>
<timer name="rtc" tickpolicy="catchup"/>
<timer name="hpet" present="no"/>
</clock>
<cpu mode="host-model" match="exact">
<topology sockets="1" cores="1" threads="1"/>
</cpu>
<devices>
<disk type="file" device="disk">
<driver name="qemu" type="qcow2" cache="none"/>
<source file="/var/lib/nova/instances/455877f2-7070-48a7-bb24-e0702be2fbc5/disk"/>
<target bus="virtio" dev="vda"/>
</disk>
<interface type="bridge">
<mac address="fa:16:3e:87:4b:29"/>
<model type="virtio"/>
<source bridge="qbr8ce9ae7b-f0"/>
<target dev="tap8ce9ae7b-f0"/>
</interface>
<serial type="file">
<source path="/var/lib/nova/instances/455877f2-7070-48a7-bb24-e0702be2fbc5/console.log"/>
</serial>
<serial type="pty"/>
<channel type="spicevmc">
<target type="virtio" name="com.redhat.spice.0"/>
</channel>
<graphics type="spice" autoport="yes" keymap="en-us" listen="0.0.0.0"/>
<video>
<model type="qxl"/>
</video>
<sound model="ich6"/>
<memballoon model="virtio">
<stats period="10"/>
</memballoon>
</devices>
</domain>

*****************
END UPDATE
*****************
The post follows up http://lxer.com/module/newswire/view/214893/index.html
The most recent `yum update` on F22 significantly improved network performance on cloud VMs (L2). Watching movies on a cloud F22 VM (with MATE Desktop installed and functioning pretty smoothly) without sound refreshes spice memories; view https://bugzilla.redhat.com/show_bug.cgi?format=multiple&id=913607
# dnf -y install spice-html5 (installed on Controller && Compute)
# dnf -y install openstack-nova-spicehtml5proxy (Compute Node)
# rpm -qa | grep openstack-nova-spicehtml5proxy
openstack-nova-spicehtml5proxy-2015.1.0-3.fc23.noarch

***********************************************************************
Update /etc/nova/nova.conf on Controller && Compute Node as follows :-
***********************************************************************

[DEFAULT]
. . . . .
web=/usr/share/spice-html5
. . . . . .
spicehtml5proxy_host=0.0.0.0  (only Compute)
spicehtml5proxy_port=6082     (only Compute)
. . . . . . .
# Disable VNC
vnc_enabled=false
. . . . . . .
[spice]

# Compute Node Management IP 192.169.142.137
html5proxy_base_url=http://192.169.142.137:6082/spice_auto.html
server_proxyclient_address=127.0.0.1 ( only  Compute )
server_listen=0.0.0.0 ( only  Compute )
enabled=true
agent_enabled=true
keymap=en-us

:wq

# service httpd restart ( on Controller )
Next actions to be performed on Compute Node

# service openstack-nova-compute restart
# service openstack-nova-spicehtml5proxy start
# systemctl enable openstack-nova-spicehtml5proxy
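If firewalld is active on the Compute Node, the spice-html5 proxy port must also be reachable from the client browser; a hedged extra step (skip it if the firewall is disabled):

# firewall-cmd --permanent --add-port=6082/tcp
# firewall-cmd --reload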

On Controller

[root@ip-192-169-142-127 ~(keystone_admin)]# nova list --all-tenants
+--------------------------------------+-----------+----------------------------------+---------+------------+-------------+----------------------------------+
| ID                                   | Name      | Tenant ID                        | Status  | Task State | Power State | Networks                         |
+--------------------------------------+-----------+----------------------------------+---------+------------+-------------+----------------------------------+
| 6c8ef008-e8e0-4f1c-af17-b5f846f8b2d9 | CirrOSDev | 7e5a0f3ec3fe45dc83ae0947ef52adc3 | SHUTOFF | -          | Shutdown    | demo_net=50.0.0.11, 172.24.4.228 |
| cfd735ea-d9a8-4c4e-9a77-03035f01d443 | VF22DEVS  | 7e5a0f3ec3fe45dc83ae0947ef52adc3 | ACTIVE  | -          | Running     | demo_net=50.0.0.14, 172.24.4.231 |
+--------------------------------------+-----------+----------------------------------+---------+------------+-------------+----------------------------------+

[root@ip-192-169-142-127 ~(keystone_admin)]# nova get-spice-console cfd735ea-d9a8-4c4e-9a77-03035f01d443 spice-html5
+-------------+-----------------------------------------------------------------------------------------+
| Type        | Url                                                                                     |
+-------------+-----------------------------------------------------------------------------------------+
| spice-html5 | http://192.169.142.137:6082/spice_auto.html?token=24fb65c7-e7e9-4727-bad3-ba7c2c29f7f4 |
+-------------+-----------------------------------------------------------------------------------------+

Session running by virt-manager on Virtualization Host ( F22 )

Connection to Compute Node 192.169.142.137 has been activated


Once again about pros/cons of Systemd and Upstart

May 16, 2015

Upstart advantages.

1. Upstart is simpler to port to systems other than Linux, while systemd is very rigidly tied to Linux kernel capabilities. Adapting Upstart to work on Debian GNU/kFreeBSD and Debian GNU/Hurd looks like quite a realistic task, which cannot be said about systemd;

2. Upstart is more familiar to the Debian developers, many of whom also participate in the development of Ubuntu. Two members of the Debian Technical Committee (Steve Langasek and Colin Watson) are part of the group of Upstart developers.

3. Upstart is simpler and more lightweight than systemd; as a result, less code means fewer mistakes, and Upstart is better suited for integration with the code of system daemons. The policy of systemd comes down to daemon authors having to adapt to upstream (providing an analog compatible at the level of the external interface in order to replace a systemd component) instead of upstream providing comfortable means for daemon developers.

4. Upstart is simpler in terms of maintenance and package upkeep, and the community of Upstart developers is more open to collaboration. In the case of systemd it is necessary to take the systemd methods for granted and follow them, for example, to support a separate "/usr" partition or to use only absolute paths for startup. The shortcomings of Upstart belong to the category of fixable problems; in its current state Upstart is already completely ready for use in Debian 8.0 (Jessie).

5. Upstart offers a more familiar model for defining a service configuration, unlike systemd, where settings in /etc override the basic settings of units defined in the /lib hierarchy. Using Upstart would maintain a sound spirit of competition, which would promote the development of various approaches and keep developers in good shape.

Systemd advantages

1. Without an essential reworking of its architecture, Upstart won't be able to catch up with systemd in functionality (for example, the inverted model of starting dependencies: instead of starting all required dependencies when a given service starts, a service in Upstart is started upon receipt of an event signaling the availability of its dependencies);

2. The use of ptrace interferes with applying Upstart jobs to daemons such as avahi, apache and postfix. systemd offers activation of a service only upon access to its socket, rather than by indirect signs such as a dependence on the activation of another socket, while Upstart lacks reliable tracking of the state of running processes.

3. Systemd contains a rather self-sufficient set of components, which allows attention to be concentrated on eliminating problems rather than on extending an Upstart configuration up to the capabilities already present in systemd. For example, Upstart lacks: support for detailed status and logging of daemons' work, multiple socket activation, socket activation for IPv6 and UDP, and a flexible mechanism for limiting resources.

4. Using systemd allows the control facilities of various distributions to be brought together and unified. Systemd has already been adopted in RHEL 7.x, CentOS 7.x, Fedora, openSUSE, Sabayon, Mandriva, Arch Linux, and others.

5. systemd has a more active, large and versatile community of developers, which includes engineers of the SUSE and Red Hat companies. When using Upstart, a distribution becomes dependent on Canonical, without which Upstart would remain without developers and be doomed to stagnation. Participation in Upstart development requires signing an agreement transferring property rights to Canonical. The Red Hat company decided on the replacement of Upstart by systemd with good reason, and the Debian project has already been compelled to migrate to systemd. Realizing some boot capabilities in Upstart requires fragments of shell scripts, which makes the initialization process less reliable and more labor-consuming to debug.

6. Support for systemd is implemented in GNOME and KDE, which more and more actively use systemd's capabilities (for example, means for managing user sessions and starting each application in a separate cgroup). GNOME continues to be positioned as the main desktop environment of Debian, but relations between the Ubuntu/Upstart and GNOME projects have had an obviously tense character.

References

http://www.opennet.ru/opennews/art.shtml?num=38762


Just to comment

February 19, 2015



LVMiSCSI cinder backend for RDO Juno on CentOS 7

November 9, 2014

The current post follows up http://lxer.com/module/newswire/view/207415/index.html. RDO Juno has been installed on Controller and Compute nodes via packstack as described in the link @lxer.com. The iSCSI initiator implementation on CentOS 7 differs significantly from CentOS 6.5 and is based on the CLI utility targetcli and the target service. With Enterprise Linux 7, both Red Hat and CentOS, there is a big change in the management of iSCSI targets: the software runs as part of the standard systemd structure. Consequently there are significant changes in the multi-backend cinder architecture of RDO Juno running on CentOS 7 or Fedora 21 utilizing LVM-based iSCSI targets.

Create the following entries in /etc/cinder/cinder.conf on the Controller (which in the case of a two-node cluster works as a Storage node as well).

#######################
enabled_backends=lvm51,lvm52
#######################

[lvm51]
iscsi_helper=lioadm
volume_group=cinder-volumes51
volume_driver=cinder.volume.drivers.lvm.LVMISCSIDriver
iscsi_ip_address=192.168.1.127
volume_backend_name=LVM_iSCSI51

[lvm52]
iscsi_helper=lioadm
volume_group=cinder-volumes52
volume_driver=cinder.volume.drivers.lvm.LVMISCSIDriver
iscsi_ip_address=192.168.1.127
volume_backend_name=LVM_iSCSI52

 

VGs cinder-volumes52 and cinder-volumes51 were created on /dev/sda6 and /dev/sdb1 correspondingly:

# pvcreate /dev/sda6
# vgcreate cinder-volumes52 /dev/sda6
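and likewise for the second backend's VG:

# pvcreate /dev/sdb1
# vgcreate cinder-volumes51 /dev/sdb1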

Then issue :-

[root@juno1 ~(keystone_admin)]# cinder type-create lvms
+--------------------------------------+------+
|                  ID                  | Name |
+--------------------------------------+------+
| 64414f3a-7770-4958-b422-8db0c3e2f433 | lvms |
+--------------------------------------+------+

[root@juno1 ~(keystone_admin)]# cinder type-create lvmz
+--------------------------------------+------+
|                  ID                  | Name |
+--------------------------------------+------+
| 29917269-d73f-4c28-b295-59bfbda5d044 | lvmz |
+--------------------------------------+------+

[root@juno1 ~(keystone_admin)]# cinder type-list
+--------------------------------------+------+
|                  ID                  | Name |
+--------------------------------------+------+
| 29917269-d73f-4c28-b295-59bfbda5d044 | lvmz |
| 64414f3a-7770-4958-b422-8db0c3e2f433 | lvms |
+--------------------------------------+------+

[root@juno1 ~(keystone_admin)]# cinder type-key lvmz set volume_backend_name=LVM_iSCSI51

[root@juno1 ~(keystone_admin)]# cinder type-key lvms set volume_backend_name=LVM_iSCSI52
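With the type-to-backend mapping in place, a volume can be steered to a particular backend simply by choosing a type; for example (the name and size here are arbitrary):

# cinder create --volume-type lvms --display-name test-lvms 5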

Then enable and start the target service :-

[root@juno1 ~(keystone_admin)]#   service target enable

[root@juno1 ~(keystone_admin)]#   service target start

[root@juno1 ~(keystone_admin)]# service target status
Redirecting to /bin/systemctl status target.service
target.service - Restore LIO kernel target configuration
Loaded: loaded (/usr/lib/systemd/system/target.service; enabled)
Active: active (exited) since Wed 2014-11-05 13:23:09 MSK; 44min ago
Process: 1611 ExecStart=/usr/bin/targetctl restore (code=exited, status=0/SUCCESS)
Main PID: 1611 (code=exited, status=0/SUCCESS)
CGroup: /system.slice/target.service

Nov 05 13:23:07 juno1.localdomain systemd[1]: Starting Restore LIO kernel target configuration...
Nov 05 13:23:09 juno1.localdomain systemd[1]: Started Restore LIO kernel target configuration.

Now all changes made by creating cinder volumes of types lvms and lvmz (via the dashboard's volume create with the volume-type dropdown menu, or via the cinder CLI) will be persistent in the targetcli> ls output between reboots.
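The resulting LIO configuration can be inspected from the shell at any time:

# targetcli ls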

[root@juno1 ~(keystone_boris)]# cinder list
+--------------------------------------+--------+------------------+------+-------------+----------+--------------------------------------+
|                  ID                  | Status |   Display Name   | Size | Volume Type | Bootable |             Attached to              |
+--------------------------------------+--------+------------------+------+-------------+----------+--------------------------------------+
| 3a4f6878-530a-4a28-87bb-92ee256f63ea | in-use | UbuntuUTLV510851 |  5   |     lvmz    |   true   | efb1762e-6782-4895-bf2b-564f14105b5b |
| 51528876-405d-4a15-abc2-61ad72fc7d7e | in-use |   CentOS7LVG51   |  10  |     lvmz    |   true   | ba3e87fa-ee81-42fc-baed-c59ca6c8a100 |
| ca0694ae-7e8d-4c84-aad8-3f178416dec6 | in-use |  VF20LVG520711   |  7   |     lvms    |   true   | 51a20959-0a0c-4ef6-81ec-2edeab6e3588 |
| dc9e31f0-b27f-4400-a666-688365126f67 | in-use | UbuntuUTLV520711 |  7   |     lvms    |   true   | 1fe7d2c3-58ae-4ee8-8f5f-baf334195a59 |
+--------------------------------------+--------+------------------+------+-------------+----------+--------------------------------------+

Compare the 'green'-highlighted volume IDs above with the targetcli> ls output.

The next snapshot demonstrates lvms && lvmz volumes attached to the corresponding nova instances utilizing the LVMiSCSI cinder backend.

On Compute Node iscsiadm output will look as follows :-

[root@juno2 ~]# iscsiadm -m discovery -t sendtargets -p 192.168.1.127
192.168.1.127:3260,1 iqn.2010-10.org.openstack:volume-3a4f6878-530a-4a28-87bb-92ee256f63ea
192.168.1.127:3260,1 iqn.2010-10.org.openstack:volume-ca0694ae-7e8d-4c84-aad8-3f178416dec6
192.168.1.127:3260,1 iqn.2010-10.org.openstack:volume-dc9e31f0-b27f-4400-a666-688365126f67
192.168.1.127:3260,1 iqn.2010-10.org.openstack:volume-51528876-405d-4a15-abc2-61ad72fc7d7e
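Nova logs into these targets automatically when attaching a volume; for debugging, a manual initiator-side login against one of the discovered IQNs would look like this:

# iscsiadm -m node -T iqn.2010-10.org.openstack:volume-3a4f6878-530a-4a28-87bb-92ee256f63ea -p 192.168.1.127 --login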

References

1. https://www.centos.org/forums/viewtopic.php?f=47&t=48591


Two Node (Controller+Compute) IceHouse Neutron OVS&GRE Cluster on Fedora 20

June 2, 2014

Two KVMs have been created, each one having 2 virtual NICs (eth0, eth1), for the Controller && Compute Node setup. Before running `packstack --answer-file=twoNode-answer.txt`, SELINUX was set to permissive on both nodes. Both eth1 interfaces were assigned IPs from the GRE libvirt subnet before installation and set to promiscuous mode (192.168.122.127, 192.168.122.137). Packstack was bound to the public IP on eth0, 192.169.142.127; the Compute Node is 192.169.142.137.

ANSWER FILE for the Two Node IceHouse Neutron OVS&GRE setup and the updated *.ini, *.conf files after packstack setup: http://textuploader.com/0ts8

Two libvirt subnets were created on the F20 KVM Server to support the installation:

Public subnet: 192.169.142.0/24
GRE tunnel support subnet: 192.168.122.0/24

1. Create a new libvirt network (other than your default 192.168.x.x) file:

$ cat openstackvms.xml
<network>
 <name>openstackvms</name>
 <uuid>d0e9964a-f91a-40c0-b769-a609aee41bf2</uuid>
 <forward mode='nat'>
 <nat>
 <port start='1024' end='65535'/>
 </nat>
 </forward>
 <bridge name='virbr1' stp='on' delay='0' />
 <mac address='52:54:00:60:f8:6e'/>
 <ip address='192.169.142.1' netmask='255.255.255.0'>
 <dhcp>
 <range start='192.169.142.2' end='192.169.142.254' />
 </dhcp>
 </ip>
 </network>
 2. Define the above network:
  $ virsh net-define openstackvms.xml
3. Start the network and enable it for "autostart"
 $ virsh net-start openstackvms
 $ virsh net-autostart openstackvms

4. List your libvirt networks to see if it reflects:
  $ virsh net-list
  Name                 State      Autostart     Persistent
  ----------------------------------------------------------
  default              active     yes           yes
  openstackvms         active     yes           yes

5. Optionally, list your bridge devices:
  $ brctl show
  bridge name     bridge id               STP enabled     interfaces
  virbr0          8000.5254003339b3       yes             virbr0-nic
  virbr1          8000.52540060f86e       yes             virbr1-nic

After packstack 2 Node (Controller+Compute) IceHouse OVS&GRE setup :-

[root@ip-192-169-142-127 ~]# mysql -u root -p
Enter password:
Welcome to the MariaDB monitor.  Commands end with ; or \g.
Your MariaDB connection id is 143
Server version: 5.5.36-MariaDB-wsrep MariaDB Server, wsrep_25.9.r3961

Copyright (c) 2000, 2014, Oracle, Monty Program Ab and others.

Type 'help;' or '\h' for help. Type '\c' to clear the current input statement.

MariaDB [(none)]> show databases ;
+--------------------+
| Database           |
+--------------------+
| information_schema |
| cinder             |
| glance             |
| keystone           |
| mysql              |
| nova               |
| ovs_neutron        |
| performance_schema |
| test               |
+--------------------+
9 rows in set (0.02 sec)

MariaDB [(none)]> use ovs_neutron ;
Reading table information for completion of table and column names
You can turn off this feature to get a quicker startup with -A

Database changed
MariaDB [ovs_neutron]> show tables ;
+---------------------------+
| Tables_in_ovs_neutron     |
+---------------------------+
| agents                    |
| alembic_version           |
| allowedaddresspairs       |
| dnsnameservers            |
| externalnetworks          |
| extradhcpopts             |
| floatingips               |
| ipallocationpools         |
| ipallocations             |
| ipavailabilityranges      |
| networkdhcpagentbindings  |
| networks                  |
| ovs_network_bindings      |
| ovs_tunnel_allocations    |
| ovs_tunnel_endpoints      |
| ovs_vlan_allocations      |
| portbindingports          |
| ports                     |
| quotas                    |
| routerl3agentbindings     |
| routerroutes              |
| routers                   |
| securitygroupportbindings |
| securitygrouprules        |
| securitygroups            |
| servicedefinitions        |
| servicetypes              |
| subnetroutes              |
| subnets                   |
+---------------------------+
29 rows in set (0.00 sec)

MariaDB [ovs_neutron]> select * from networks ;
+----------------------------------+--------------------------------------+---------+--------+----------------+--------+
| tenant_id                        | id                                   | name    | status | admin_state_up | shared |
+----------------------------------+--------------------------------------+---------+--------+----------------+--------+
| 179e44f8f53641da89c3eb5d07405523 | 3854bc88-ae14-47b0-9787-233e54ffe7e5 | private | ACTIVE |              1 |      0 |
| 179e44f8f53641da89c3eb5d07405523 | 6e8dc33d-55d4-47b5-925f-e1fa96128c02 | public  | ACTIVE |              1 |      1 |
+----------------------------------+--------------------------------------+---------+--------+----------------+--------+
2 rows in set (0.00 sec)

MariaDB [ovs_neutron]> select * from routers ;
+----------------------------------+--------------------------------------+---------+--------+----------------+--------------------------------------+-------------+
| tenant_id                        | id                                   | name    | status | admin_state_up | gw_port_id                           | enable_snat |
+----------------------------------+--------------------------------------+---------+--------+----------------+--------------------------------------+-------------+
| 179e44f8f53641da89c3eb5d07405523 | 94829282-e6b2-4364-b640-8c0980218a4f | ROUTER3 | ACTIVE |              1 | df9711e4-d1a2-4255-9321-69105fbd8665 |           1 |
+----------------------------------+--------------------------------------+---------+--------+----------------+--------------------------------------+-------------+
1 row in set (0.00 sec)

MariaDB [ovs_neutron]>

*********************

On Controller :-

********************

[root@ip-192-169-142-127 network-scripts]# cat ifcfg-br-ex
DEVICE="br-ex"
BOOTPROTO="static"
IPADDR="192.169.142.127"
NETMASK="255.255.255.0"
DNS1="83.221.202.254"
BROADCAST="192.169.142.255"
GATEWAY="192.169.142.1"
NM_CONTROLLED="no"
DEFROUTE="yes"
IPV4_FAILURE_FATAL="yes"
IPV6INIT=no
ONBOOT="yes"
TYPE="OVSBridge"
DEVICETYPE="ovs"

[root@ip-192-169-142-127 network-scripts]# cat ifcfg-eth0
DEVICE=eth0
ONBOOT="yes"
TYPE="OVSPort"
DEVICETYPE="ovs"
OVS_BRIDGE=br-ex
NM_CONTROLLED=no
IPV6INIT=no

[root@ip-192-169-142-127 network-scripts]# cat ifcfg-eth1
TYPE=Ethernet
BOOTPROTO=none
DEFROUTE=yes
IPV4_FAILURE_FATAL=no
IPV6INIT=yes
IPV6_AUTOCONF=yes
IPV6_DEFROUTE=yes
IPV6_FAILURE_FATAL=no
DEVICE=eth1
ONBOOT=yes
IPADDR=192.168.122.127
PREFIX=24
GATEWAY=192.168.122.1
DNS1=83.221.202.254
NM_CONTROLLED=no
IPV6_PEERDNS=yes
IPV6_PEERROUTES=yes

*************************************

ovs-vsctl show output on controller

*************************************

[root@ip-192-169-142-127 ~(keystone_admin)]# ovs-vsctl show
dc2c76d6-40e3-496e-bdec-470452758c32
    Bridge br-tun
        Port patch-int
            Interface patch-int
                type: patch
                options: {peer=patch-tun}
        Port "gre-1"
            Interface "gre-1"
                type: gre
                options: {in_key=flow, local_ip="192.168.122.127", out_key=flow, remote_ip="192.168.122.137"}
        Port br-tun
            Interface br-tun
                type: internal
    Bridge br-int
        Port br-int
            Interface br-int
                type: internal
        Port patch-tun
            Interface patch-tun
                type: patch
                options: {peer=patch-int}
        Port "tap7acb7666-aa"
            tag: 1
            Interface "tap7acb7666-aa"
                type: internal
        Port "qr-a26fe722-07"
            tag: 1
            Interface "qr-a26fe722-07"
                type: internal
    Bridge br-ex
        Port "qg-df9711e4-d1"
            Interface "qg-df9711e4-d1"
                type: internal
        Port "eth0"
            Interface "eth0"
        Port br-ex
            Interface br-ex
                type: internal
    ovs_version: "2.1.2"

********************

On Compute:-

********************

[root@ip-192-169-142-137 network-scripts]# cat ifcfg-eth0
TYPE=Ethernet
BOOTPROTO=static
DEFROUTE=yes
IPV4_FAILURE_FATAL=no
IPV6INIT=yes
IPV6_AUTOCONF=yes
IPV6_DEFROUTE=yes
IPV6_FAILURE_FATAL=no
DEVICE=eth0
UUID=f96e561d-d14c-4fb1-9657-0c935f7f5721
ONBOOT=yes
IPADDR=192.169.142.137
PREFIX=24
GATEWAY=192.169.142.1
DNS1=83.221.202.254
HWADDR=52:54:00:67:AC:04
NM_CONTROLLED=no
IPV6_PEERDNS=yes
IPV6_PEERROUTES=yes

[root@ip-192-169-142-137 network-scripts]# cat ifcfg-eth1
TYPE=Ethernet
BOOTPROTO=none
DEFROUTE=yes
IPV4_FAILURE_FATAL=no
IPV6INIT=yes
IPV6_AUTOCONF=yes
IPV6_DEFROUTE=yes
IPV6_FAILURE_FATAL=no
DEVICE=eth1
ONBOOT=yes
IPADDR=192.168.122.137
PREFIX=24
GATEWAY=192.168.122.1
DNS1=83.221.202.254
NM_CONTROLLED=no
IPV6_PEERDNS=yes
IPV6_PEERROUTES=yes

************************************

ovs-vsctl show output on compute

************************************

[root@ip-192-169-142-137 ~]# ovs-vsctl show
1c6671de-fcdf-4a29-9eee-96c949848fff
    Bridge br-tun
        Port "gre-2"
            Interface "gre-2"
                type: gre
                options: {in_key=flow, local_ip="192.168.122.137", out_key=flow, remote_ip="192.168.122.127"}
        Port patch-int
            Interface patch-int
                type: patch
                options: {peer=patch-tun}
        Port br-tun
            Interface br-tun
                type: internal
    Bridge br-int
        Port "qvo87038189-3f"
            tag: 1
            Interface "qvo87038189-3f"
        Port patch-tun
            Interface patch-tun
                type: patch
                options: {peer=patch-int}
        Port br-int
            Interface br-int
                type: internal
    ovs_version: "2.1.2"

[root@ip-192-169-142-137 ~]# brctl show
bridge name       bridge id           STP enabled   interfaces
qbr87038189-3f    8000.2abf9e69f97c   no            qvb87038189-3f
                                                    tap87038189-3f

*************************

Metadata verification 

*************************

[root@ip-192-169-142-127 ~(keystone_admin)]# ip netns exec qrouter-94829282-e6b2-4364-b640-8c0980218a4f iptables -S -t nat | grep 169.254
-A neutron-l3-agent-PREROUTING -d 169.254.169.254/32 -p tcp -m tcp --dport 80 -j REDIRECT --to-ports 9697

[root@ip-192-169-142-127 ~(keystone_admin)]# ip netns exec qrouter-94829282-e6b2-4364-b640-8c0980218a4f netstat -anpt
Active Internet connections (servers and established)
Proto Recv-Q Send-Q Local Address           Foreign Address         State       PID/Program name
tcp        0      0 0.0.0.0:9697            0.0.0.0:*               LISTEN      3771/python
[root@ip-192-169-142-127 ~(keystone_admin)]# ps -ef | 3771
bash: 3771: command not found…

[root@ip-192-169-142-127 ~(keystone_admin)]# ps -ef | grep 3771
root      3771     1  0 13:58 ?        00:00:00 /usr/bin/python /bin/neutron-ns-metadata-proxy --pid_file=/var/lib/neutron/external/pids/94829282-e6b2-4364-b640-8c0980218a4f.pid --metadata_proxy_socket=/var/lib/neutron/metadata_proxy --router_id=94829282-e6b2-4364-b640-8c0980218a4f --state_path=/var/lib/neutron --metadata_port=9697 --verbose --log-file=neutron-ns-metadata-proxy-94829282-e6b2-4364-b640-8c0980218a4f.log --log-dir=/var/log/neutron

[root@ip-192-169-142-127 ~(keystone_admin)]# netstat -anpt | grep 9697

tcp        0      0 0.0.0.0:9697            0.0.0.0:*               LISTEN      1024/python

[root@ip-192-169-142-127 ~(keystone_admin)]# ps -ef | grep 1024
nova      1024     1  1 13:58 ?        00:00:05 /usr/bin/python /usr/bin/nova-api
nova      3369  1024  0 13:58 ?        00:00:00 /usr/bin/python /usr/bin/nova-api
nova      3370  1024  0 13:58 ?        00:00:00 /usr/bin/python /usr/bin/nova-api
nova      3397  1024  0 13:58 ?        00:00:02 /usr/bin/python /usr/bin/nova-api
nova      3398  1024  0 13:58 ?        00:00:02 /usr/bin/python /usr/bin/nova-api
nova      3423  1024  0 13:58 ?        00:00:00 /usr/bin/python /usr/bin/nova-api
nova      3424  1024  0 13:58 ?        00:00:00 /usr/bin/python /usr/bin/nova-api
root      4947  4301  0 14:06 pts/0    00:00:00 grep –color=auto 1024
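As an end-to-end check, the metadata service should answer from inside a guest on the well-known link-local address (assuming curl is present in the image):

$ curl http://169.254.169.254/latest/meta-data/instance-id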



Sample of /etc/openstack-dashboard/local_settings

March 14, 2014

[root@dfw02 ~(keystone_admin)]$ cat  /etc/openstack-dashboard/local_settings | grep -v ^# | grep -v ^$
import os
from django.utils.translation import ugettext_lazy as _
from openstack_dashboard import exceptions
DEBUG = False
TEMPLATE_DEBUG = DEBUG
ALLOWED_HOSTS = ['192.168.1.127', 'localhost']
SESSION_ENGINE = "django.contrib.sessions.backends.cached_db"
DATABASES = {
'default': {
'ENGINE': 'django.db.backends.mysql',
'NAME': 'dash',
'USER': 'dash',
'PASSWORD': 'fedora',
'HOST': '192.168.1.127',
'default-character-set': 'utf8'
}
}

HORIZON_CONFIG = {
'dashboards': ('project', 'admin', 'settings',),
'default_dashboard': 'project',
'user_home': 'openstack_dashboard.views.get_user_home',
'ajax_queue_limit': 10,
'auto_fade_alerts': {
'delay': 3000,
'fade_duration': 1500,
'types': ['alert-success', 'alert-info']
},
'help_url': "http://docs.openstack.org",
'exceptions': {'recoverable': exceptions.RECOVERABLE,
'not_found': exceptions.NOT_FOUND,
'unauthorized': exceptions.UNAUTHORIZED},
}

from horizon.utils import secret_key

LOCAL_PATH = '/var/lib/openstack-dashboard'
SECRET_KEY = secret_key.generate_or_read_from_file(os.path.join(LOCAL_PATH, '.secret_key_store'))

CACHES = {
'default': {
'BACKEND' : 'django.core.cache.backends.memcached.MemcachedCache',
'LOCATION' : '127.0.0.1:11211',
}
}

EMAIL_BACKEND = 'django.core.mail.backends.console.EmailBackend'

OPENSTACK_HOST = "192.168.1.127"

OPENSTACK_KEYSTONE_URL = "http://%s:5000/v2.0" % OPENSTACK_HOST

OPENSTACK_KEYSTONE_DEFAULT_ROLE = "Member"
OPENSTACK_KEYSTONE_BACKEND = {
'name': 'native',
'can_edit_user': True,
'can_edit_group': True,
'can_edit_project': True,
'can_edit_domain': True,
'can_edit_role': True
}

OPENSTACK_HYPERVISOR_FEATURES = {
'can_set_mount_point': False,
# NOTE: as of Grizzly this is not yet supported in Nova so enabling this
# setting will not do anything useful
'can_encrypt_volumes': False
}

OPENSTACK_NEUTRON_NETWORK = {
'enable_lb': False,
'enable_firewall': False,
'enable_quotas': True,
'enable_vpn': False,
# The profile_support option is used to detect if an external router can be
# configured via the dashboard. When using specific plugins the
# profile_support can be turned on if needed.
'profile_support': None,
#'profile_support': 'cisco',
}
API_RESULT_LIMIT = 1000
API_RESULT_PAGE_SIZE = 20
TIME_ZONE = "UTC"
POLICY_FILES_PATH = '/etc/openstack-dashboard'
POLICY_FILES = {
'identity': 'keystone_policy.json',
'compute': 'nova_policy.json'
}

LOGGING = {
'version': 1,
# When set to True this will disable all logging except
# for loggers specified in this configuration dictionary. Note that
# if nothing is specified here and disable_existing_loggers is True,
# django.db.backends will still log unless it is disabled explicitly.
'disable_existing_loggers': False,

'handlers': {
'null': {
'level': 'DEBUG',
'class': 'django.utils.log.NullHandler',
},

'console': {
# Set the level to "DEBUG" for verbose output logging.
'level': 'INFO',
'class': 'logging.StreamHandler',
},
},

'loggers': {

# Logging from django.db.backends is VERY verbose, send to null
# by default.
'django.db.backends': {
'handlers': ['null'],
'propagate': False,
},

'requests': {
'handlers': ['null'],
'propagate': False,
},

'horizon': {
'handlers': ['console'],
'level': 'DEBUG',
'propagate': False,
},

'openstack_dashboard': {
'handlers': ['console'],
'level': 'DEBUG',
'propagate': False,
},

'novaclient': {
'handlers': ['console'],
'level': 'DEBUG',
'propagate': False,
},

'cinderclient': {
'handlers': ['console'],
'level': 'DEBUG',
'propagate': False,
},

'keystoneclient': {
'handlers': ['console'],
'level': 'DEBUG',
'propagate': False,
},

'glanceclient': {
'handlers': ['console'],
'level': 'DEBUG',
'propagate': False,
},

'neutronclient': {
'handlers': ['console'],
'level': 'DEBUG',
'propagate': False,
},

'heatclient': {
'handlers': ['console'],
'level': 'DEBUG',
'propagate': False,
},

'ceilometerclient': {
'handlers': ['console'],
'level': 'DEBUG',
'propagate': False,
},

'troveclient': {
'handlers': ['console'],
'level': 'DEBUG',
'propagate': False,
},

'swiftclient': {
'handlers': ['console'],
'level': 'DEBUG',
'propagate': False,
},

'openstack_auth': {
'handlers': ['console'],
'level': 'DEBUG',
'propagate': False,
},

'nose.plugins.manager': {
'handlers': ['console'],
'level': 'DEBUG',
'propagate': False,
},

'django': {
'handlers': ['console'],
'level': 'DEBUG',
'propagate': False,
},

'iso8601': {
'handlers': ['null'],
'propagate': False,
},

}

}

SECURITY_GROUP_RULES = {
'all_tcp': {
'name': 'ALL TCP',
'ip_protocol': 'tcp',
'from_port': '1',
'to_port': '65535',
},

'all_udp': {
'name': 'ALL UDP',
'ip_protocol': 'udp',
'from_port': '1',
'to_port': '65535',
},

'all_icmp': {
'name': 'ALL ICMP',
'ip_protocol': 'icmp',
'from_port': '-1',
'to_port': '-1',
},

'ssh': {
'name': 'SSH',
'ip_protocol': 'tcp',
'from_port': '22',
'to_port': '22',
},

'smtp': {
'name': 'SMTP',
'ip_protocol': 'tcp',
'from_port': '25',
'to_port': '25',
},

'dns': {
'name': 'DNS',
'ip_protocol': 'tcp',
'from_port': '53',
'to_port': '53',
},

'http': {
'name': 'HTTP',
'ip_protocol': 'tcp',
'from_port': '80',
'to_port': '80',
},

'pop3': {
'name': 'POP3',
'ip_protocol': 'tcp',
'from_port': '110',
'to_port': '110',
},

'imap': {
'name': 'IMAP',
'ip_protocol': 'tcp',
'from_port': '143',
'to_port': '143',
},

'ldap': {
'name': 'LDAP',
'ip_protocol': 'tcp',
'from_port': '389',
'to_port': '389',
},

'https': {
'name': 'HTTPS',
'ip_protocol': 'tcp',
'from_port': '443',
'to_port': '443',
},

'smtps': {
'name': 'SMTPS',
'ip_protocol': 'tcp',
'from_port': '465',
'to_port': '465',
},

'imaps': {
'name': 'IMAPS',
'ip_protocol': 'tcp',
'from_port': '993',
'to_port': '993',
},

'pop3s': {
'name': 'POP3S',
'ip_protocol': 'tcp',
'from_port': '995',
'to_port': '995',
},

'ms_sql': {
'name': 'MS SQL',
'ip_protocol': 'tcp',
'from_port': '1433',
'to_port': '1433',
},

'mysql': {
'name': 'MYSQL',
'ip_protocol': 'tcp',
'from_port': '3306',
'to_port': '3306',
},

'rdp': {
'name': 'RDP',
'ip_protocol': 'tcp',
'from_port': '3389',
'to_port': '3389',
},

}

