Switching to Dashboard Spice Console in RDO Kilo on Fedora 22

July 3, 2015

*************************
UPDATE 06/27/2015
*************************
# dnf install -y https://rdoproject.org/repos/rdo-release.rpm
# dnf install -y openstack-packstack
# dnf install fedora-repos-rawhide
# dnf --enablerepo=rawhide update openstack-packstack
Fedora - Rawhide - Developmental packages for the next Fedora re 1.7 MB/s |  45 MB     00:27
Last metadata expiration check performed 0:00:39 ago on Sat Jun 27 13:23:03 2015.
Dependencies resolved.
==============================================================
Package                       Arch      Version                                Repository  Size
==============================================================
Upgrading:
openstack-packstack           noarch    2015.1-0.7.dev1577.gc9f8c3c.fc23       rawhide    233 k
openstack-packstack-puppet    noarch    2015.1-0.7.dev1577.gc9f8c3c.fc23       rawhide     233 k
Transaction Summary
==============================================================
Upgrade  2 Packages
.  .  .  .  .
# dnf install python3-pyOpenSSL.noarch 
At this point run :-
# packstack --gen-answer-file answer-file-aio.txt
and set
CONFIG_KEYSTONE_SERVICE_NAME=httpd
I also commented out the second line in /etc/httpd/conf.d/mod_dnssd.conf.
Then run `packstack --answer-file=./answer-file-aio.txt`; however, you will still need to pre-patch provision_demo.pp at the moment
( see the third patch at http://textuploader.com/yn0v ) , the rest should work fine.
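
Collected as shell commands, the two edits above might look like this (a sketch; paths as in a default install):

# sed -i 's/^CONFIG_KEYSTONE_SERVICE_NAME=.*/CONFIG_KEYSTONE_SERVICE_NAME=httpd/' answer-file-aio.txt
# sed -i '2s/^/#/' /etc/httpd/conf.d/mod_dnssd.conf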

Upon completion you may try to follow :-
https://www.rdoproject.org/Neutron_with_existing_external_network

I didn't test that guide on Fedora 22; I just created external and private networks of VXLAN type and configured the OVS bridge interfaces as follows :-
 
[root@ServerFedora22 network-scripts(keystone_admin)]# cat ifcfg-br-ex
DEVICE="br-ex"
BOOTPROTO="static"
IPADDR="192.168.1.32"
NETMASK="255.255.255.0"
DNS1="8.8.8.8"
BROADCAST="192.168.1.255"
GATEWAY="192.168.1.1"
NM_CONTROLLED="no"
TYPE="OVSIntPort"
OVS_BRIDGE=br-ex
DEVICETYPE="ovs"
DEFROUTE="yes"
IPV4_FAILURE_FATAL="yes"
IPV6INIT=no

[root@ServerFedora22 network-scripts(keystone_admin)]# cat ifcfg-enp2s0
DEVICE="enp2s0"
ONBOOT="yes"
HWADDR="90:E6:BA:2D:11:EB"
TYPE="OVSPort"
DEVICETYPE="ovs"
OVS_BRIDGE=br-ex
NM_CONTROLLED=no
IPV6INIT=no

When the configuration above is done :-

# chkconfig network on
# systemctl stop NetworkManager
# systemctl disable NetworkManager
# reboot
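
After the reboot, the external and private networks may be created along the lines of the guide above (a sketch; the network names, allocation pool and private CIDR are my assumptions for a LAN of 192.168.1.0/24):

# . keystonerc_admin
# neutron net-create public --router:external
# neutron subnet-create public 192.168.1.0/24 --name public_subnet --enable_dhcp=False \
    --allocation-pool start=192.168.1.100,end=192.168.1.200 --gateway 192.168.1.1
# neutron net-create private
# neutron subnet-create private 30.0.0.0/24 --name private_subnet
# neutron router-create router1
# neutron router-gateway-set router1 public
# neutron router-interface-add router1 private_subnet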

*************************
UPDATE 06/26/2015
*************************

To install RDO Kilo on Fedora 22 :-
after `dnf -y install openstack-packstack `
# cd /usr/lib/python2.7/site-packages/packstack/puppet/templates
Then apply the following three patches, as sketched just below
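A sketch of applying them, assuming all three were saved locally (the file names are hypothetical):

# for p in patch1.diff patch2.diff patch3.diff ; do patch -p0 < $p ; done
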
# cd ; packstack --gen-answer-file answer-file-aio.txt
Set "CONFIG_NAGIOS_INSTALL=n" in answer-file-aio.txt
# packstack --answer-file=./answer-file-aio.txt

************************
UPDATE 05/19/2015
************************
MATE Desktop supports sound (via the patch mentioned below) on RDO Kilo cloud instances F22, F21, F20. The RDO Kilo AIO install was performed on bare metal.
A Windows Server 2012 (evaluation version) cloud VM also provides pretty stable "video/sound" ( http://www.cloudbase.it/windows-cloud-images/ ).

************************
UPDATE 05/14/2015
************************
I've got sound working on a CentOS 7 VM (console connection via virt-manager) with a slightly updated patch of Y. Kawada, self.type set to "ich6". RDO Kilo was installed on a bare-metal AIO testing host running Fedora 22. The same results have been obtained for RDO Kilo on CentOS 7.1. A connection to the spice console with cut&paste and sound enabled may also be obtained via spicy (remote connection).

Generated libvirt.xml

<domain type="kvm">
  <uuid>455877f2-7070-48a7-bb24-e0702be2fbc5</uuid>
  <name>instance-00000003</name>
  <memory>2097152</memory>
  <vcpu cpuset="0-7">1</vcpu>
  <metadata>
    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.0">
      <nova:package version="2015.1.0-3.el7"/>
      <nova:name>CentOS7RSX05</nova:name>
      <nova:creationTime>2015-06-14 18:42:11</nova:creationTime>
      <nova:flavor name="m1.small">
        <nova:memory>2048</nova:memory>
        <nova:disk>20</nova:disk>
        <nova:swap>0</nova:swap>
        <nova:ephemeral>0</nova:ephemeral>
        <nova:vcpus>1</nova:vcpus>
      </nova:flavor>
      <nova:owner>
        <nova:user uuid="da79d2c66db747eab942bdbe20bb3f44">demo</nova:user>
        <nova:project uuid="8c9defac20a74633af4bb4773e45f11e">demo</nova:project>
      </nova:owner>
      <nova:root type="image" uuid="4a2d708c-7624-439f-9e7e-6e133062e23a"/>
    </nova:instance>
  </metadata>
  <sysinfo type="smbios">
    <system>
      <entry name="manufacturer">Fedora Project</entry>
      <entry name="product">OpenStack Nova</entry>
      <entry name="version">2015.1.0-3.el7</entry>
      <entry name="serial">b3fae7c3-10bd-455b-88b7-95e586342203</entry>
      <entry name="uuid">455877f2-7070-48a7-bb24-e0702be2fbc5</entry>
    </system>
  </sysinfo>
  <os>
    <type>hvm</type>
    <boot dev="hd"/>
    <smbios mode="sysinfo"/>
  </os>
  <features>
    <acpi/>
    <apic/>
  </features>
  <cputune>
    <shares>1024</shares>
  </cputune>
  <clock offset="utc">
    <timer name="pit" tickpolicy="delay"/>
    <timer name="rtc" tickpolicy="catchup"/>
    <timer name="hpet" present="no"/>
  </clock>
  <cpu mode="host-model" match="exact">
    <topology sockets="1" cores="1" threads="1"/>
  </cpu>
  <devices>
    <disk type="file" device="disk">
      <driver name="qemu" type="qcow2" cache="none"/>
      <source file="/var/lib/nova/instances/455877f2-7070-48a7-bb24-e0702be2fbc5/disk"/>
      <target bus="virtio" dev="vda"/>
    </disk>
    <interface type="bridge">
      <mac address="fa:16:3e:87:4b:29"/>
      <model type="virtio"/>
      <source bridge="qbr8ce9ae7b-f0"/>
      <target dev="tap8ce9ae7b-f0"/>
    </interface>
    <serial type="file">
      <source path="/var/lib/nova/instances/455877f2-7070-48a7-bb24-e0702be2fbc5/console.log"/>
    </serial>
    <serial type="pty"/>
    <channel type="spicevmc">
      <target type="virtio" name="com.redhat.spice.0"/>
    </channel>
    <graphics type="spice" autoport="yes" keymap="en-us" listen="0.0.0.0"/>
    <video>
      <model type="qxl"/>
    </video>
    <sound model="ich6"/>
    <memballoon model="virtio">
      <stats period="10"/>
    </memballoon>
  </devices>
</domain>

*****************
END UPDATE
*****************
This post follows up http://lxer.com/module/newswire/view/214893/index.html
The most recent `yum update` on F22 significantly improved network performance on cloud VMs (L2). Watching movies on a cloud F22 VM (with MATE Desktop installed and functioning pretty smoothly) but without sound refreshes old spice memories; view https://bugzilla.redhat.com/show_bug.cgi?format=multiple&id=913607
# dnf -y install spice-html5 ( installed on Controller && Compute)
# dnf -y install openstack-nova-spicehtml5proxy (Compute Node)
# rpm -qa | grep openstack-nova-spicehtml5proxy
openstack-nova-spicehtml5proxy-2015.1.0-3.fc23.noarch

***********************************************************************
Update /etc/nova/nova.conf on Controller && Compute Node as follows :-
***********************************************************************

[DEFAULT]
. . . . .
web=/usr/share/spice-html5
. . . . . .
spicehtml5proxy_host=0.0.0.0  (only Compute)
spicehtml5proxy_port=6082     (only Compute)
. . . . . . .
# Disable VNC
vnc_enabled=false
. . . . . . .
[spice]

# Compute Node Management IP 192.169.142.137
html5proxy_base_url=http://192.169.142.137:6082/spice_auto.html
server_proxyclient_address=127.0.0.1 ( only  Compute )
server_listen=0.0.0.0 ( only  Compute )
enabled=true
agent_enabled=true
keymap=en-us

:wq

# service httpd restart ( on Controller )
Next actions to be performed on Compute Node

# service openstack-nova-compute restart
# service openstack-nova-spicehtml5proxy start
# systemctl enable openstack-nova-spicehtml5proxy
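
To verify the proxy is actually listening on port 6082 on the Compute Node, a quick check :-

# systemctl status openstack-nova-spicehtml5proxy
# netstat -lntp | grep 6082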

On Controller

[root@ip-192-169-142-127 ~(keystone_admin)]# nova list --all-tenants
+--------------------------------------+-----------+----------------------------------+---------+------------+-------------+----------------------------------+
| ID                                   | Name      | Tenant ID                        | Status  | Task State | Power State | Networks                         |
+--------------------------------------+-----------+----------------------------------+---------+------------+-------------+----------------------------------+
| 6c8ef008-e8e0-4f1c-af17-b5f846f8b2d9 | CirrOSDev | 7e5a0f3ec3fe45dc83ae0947ef52adc3 | SHUTOFF | -          | Shutdown    | demo_net=50.0.0.11, 172.24.4.228 |
| cfd735ea-d9a8-4c4e-9a77-03035f01d443 | VF22DEVS  | 7e5a0f3ec3fe45dc83ae0947ef52adc3 | ACTIVE  | -          | Running     | demo_net=50.0.0.14, 172.24.4.231 |
+--------------------------------------+-----------+----------------------------------+---------+------------+-------------+----------------------------------+

[root@ip-192-169-142-127 ~(keystone_admin)]# nova get-spice-console cfd735ea-d9a8-4c4e-9a77-03035f01d443 spice-html5
+-------------+----------------------------------------------------------------------------------------+
| Type        | Url                                                                                    |
+-------------+----------------------------------------------------------------------------------------+
| spice-html5 | http://192.169.142.137:6082/spice_auto.html?token=24fb65c7-e7e9-4727-bad3-ba7c2c29f7f4 |
+-------------+----------------------------------------------------------------------------------------+

Session running by virt-manager on Virtualization Host ( F22 )

Connection to Compute Node 192.169.142.137 has been activated


Once again about pros/cons of Systemd and Upstart

May 16, 2015

Upstart advantages

1. Upstart is easier to port to systems other than Linux, while systemd is rigidly tied to Linux kernel capabilities. Adapting Upstart to work on Debian GNU/kFreeBSD and Debian GNU/Hurd looks like quite a realistic task, which cannot be said of systemd.

2. Upstart is more familiar to the Debian developers, many of whom also participate in the development of Ubuntu. Two members of the Debian Technical Committee (Steve Langasek and Colin Watson) belong to the group of Upstart developers.

3. Upstart is simpler and more lightweight than systemd; as a result, less code means fewer mistakes. Upstart is better suited for integration with the code of system daemons. The systemd policy comes down to daemon authors having to adapt to upstream (providing a compatible analog of a systemd component at the external-interface level in order to replace it), instead of upstream providing comfortable means for daemon developers.

4. Upstart is simpler in terms of maintenance and packaging, and the community of Upstart developers is more open to collaboration. In the case of systemd one has to take the systemd methods for granted and follow them, for example, supporting a separately mounted "/usr" or using only absolute paths for startup. The shortcomings of Upstart belong to the category of fixable problems; in its current state Upstart is already completely ready for use in Debian 8.0 (Jessie).

5. Upstart has a more familiar model for defining the configuration of services, unlike systemd, where settings in /etc override the base settings of units defined in the /lib hierarchy. Using Upstart would preserve a sound spirit of competition, which promotes the development of different approaches and keeps developers in good shape.

Systemd advantages

1. Without an essential rework of its architecture Upstart won't be able to catch up with systemd in functionality (for example, the inverted model of starting dependencies: instead of starting all required dependencies when a given service starts, a service in Upstart is started upon the arrival of an event signalling that its dependencies are available).

2. The use of ptrace interferes with applying Upstart jobs to such daemons as avahi, apache and postfix. Upstart also lacks the possibility of activating a service only upon an actual connection to a socket rather than by indirect signs, such as a dependency on the activation of another socket, and lacks reliable tracking of the state of running processes.

3. Systemd contains a rather self-sufficient set of components, which allows one to concentrate on eliminating problems instead of extending an Upstart configuration up to the capabilities already present in systemd. For example, Upstart lacks: support for detailed status reporting and logging of daemons' operation, multiple activation through sockets, socket activation for IPv6 and UDP, and a flexible mechanism for limiting resources.

4. Using systemd makes it possible to bring together and unify the management facilities of various distributions. Systemd has already been adopted by RHEL 7.x, CentOS 7.x, Fedora, openSUSE, Sabayon, Mandriva, Arch Linux.

5. systemd has a more active, large and versatile community of developers, which includes engineers of the SUSE and Red Hat companies. When using Upstart a distribution becomes dependent on Canonical, without whose support Upstart would remain without developers and be doomed to stagnation. Participation in the development of Upstart requires signing Canonical's agreement on the transfer of property rights. Red Hat decided on the replacement of Upstart by systemd not without reason, and the Debian project has already been compelled to migrate to systemd. Implementing some boot capabilities in Upstart requires fragments of shell scripts, which makes the initialization process less reliable and more labor-consuming to debug.

6. Support for systemd is implemented in GNOME and KDE, which use the possibilities of systemd more and more actively (for example, the means for managing user sessions and starting each application in a separate cgroup). GNOME continues to be positioned as the main environment of Debian, while the relations between the Ubuntu/Upstart and GNOME projects have had an obviously tense character.

References

http://www.opennet.ru/opennews/art.shtml?num=38762


Just to comment

February 19, 2015

(Three dashboard screenshots taken 2015-02-19.)


LVMiSCSI cinder backend for RDO Juno on CentOS 7

November 9, 2014

This post follows up http://lxer.com/module/newswire/view/207415/index.html . RDO Juno has been installed on Controller and Compute nodes via packstack as described in the link @lxer.com. The iSCSI initiator implementation on CentOS 7 differs significantly from CentOS 6.5 and is based on the CLI utility targetcli and the service target. With Enterprise Linux 7, both Red Hat and CentOS, there is a big change in the management of iSCSI targets: the software runs as part of the standard systemd structure. Consequently there are significant changes in the multi-backend cinder architecture of RDO Juno running on CentOS 7 or Fedora 21 utilizing LVM-based iSCSI targets.

Create the following entries in /etc/cinder/cinder.conf on the Controller (which, in the case of a two-node cluster, works as the Storage node as well).

#######################
enabled_backends=lvm51,lvm52
#######################

[lvm51]
iscsi_helper=lioadm
volume_group=cinder-volumes51
volume_driver=cinder.volume.drivers.lvm.LVMISCSIDriver
iscsi_ip_address=192.168.1.127
volume_backend_name=LVM_iSCSI51

[lvm52]
iscsi_helper=lioadm
volume_group=cinder-volumes52
volume_driver=cinder.volume.drivers.lvm.LVMISCSIDriver
iscsi_ip_address=192.168.1.127
volume_backend_name=LVM_iSCSI52

 

VG cinder-volumes52 and cinder-volumes51 were created on /dev/sda6 and /dev/sdb1 respectively :-

# pvcreate /dev/sda6
# vgcreate cinder-volumes52 /dev/sda6
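
The matching commands for the second volume group :-

# pvcreate /dev/sdb1
# vgcreate cinder-volumes51 /dev/sdb1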

Then issue :-

[root@juno1 ~(keystone_admin)]# cinder type-create lvms
+--------------------------------------+------+
|                  ID                  | Name |
+--------------------------------------+------+
| 64414f3a-7770-4958-b422-8db0c3e2f433 | lvms |
+--------------------------------------+------+

[root@juno1 ~(keystone_admin)]# cinder type-create lvmz
+--------------------------------------+------+
|                  ID                  | Name |
+--------------------------------------+------+
| 29917269-d73f-4c28-b295-59bfbda5d044 | lvmz |
+--------------------------------------+------+

[root@juno1 ~(keystone_admin)]# cinder type-list
+--------------------------------------+------+
|                  ID                  | Name |
+--------------------------------------+------+
| 29917269-d73f-4c28-b295-59bfbda5d044 | lvmz |
| 64414f3a-7770-4958-b422-8db0c3e2f433 | lvms |
+--------------------------------------+------+

[root@juno1 ~(keystone_admin)]# cinder type-key lvmz set volume_backend_name=LVM_iSCSI51
[root@juno1 ~(keystone_admin)]# cinder type-key lvms set volume_backend_name=LVM_iSCSI52
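
With the type keys set, a quick test volume may be created against either backend (a sketch; the name and size are arbitrary):

[root@juno1 ~(keystone_admin)]# cinder create --volume-type lvms --display-name TestLVMS 1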

Then enable and start the target service (under systemd, `service target enable` does not exist) :-

[root@juno1 ~(keystone_admin)]# systemctl enable target
[root@juno1 ~(keystone_admin)]# systemctl start target
[root@juno1 ~(keystone_admin)]# systemctl status target
target.service - Restore LIO kernel target configuration
   Loaded: loaded (/usr/lib/systemd/system/target.service; enabled)
   Active: active (exited) since Wed 2014-11-05 13:23:09 MSK; 44min ago
  Process: 1611 ExecStart=/usr/bin/targetctl restore (code=exited, status=0/SUCCESS)
 Main PID: 1611 (code=exited, status=0/SUCCESS)
   CGroup: /system.slice/target.service

Nov 05 13:23:07 juno1.localdomain systemd[1]: Starting Restore LIO kernel target configuration...
Nov 05 13:23:09 juno1.localdomain systemd[1]: Started Restore LIO kernel target configuration.

Now all changes made by creating cinder volumes of types lvms or lvmz (via
dashboard volume create with a dropdown menu of volume types, or via the cinder CLI)
will be persistent in targetcli> ls output between reboots.

[root@juno1 ~(keystone_boris)]# cinder list
+--------------------------------------+--------+------------------+------+-------------+----------+--------------------------------------+
|                  ID                  | Status |   Display Name   | Size | Volume Type | Bootable |             Attached to              |
+--------------------------------------+--------+------------------+------+-------------+----------+--------------------------------------+
| 3a4f6878-530a-4a28-87bb-92ee256f63ea | in-use | UbuntuUTLV510851 |  5   |     lvmz    |   true   | efb1762e-6782-4895-bf2b-564f14105b5b |
| 51528876-405d-4a15-abc2-61ad72fc7d7e | in-use |   CentOS7LVG51   |  10  |     lvmz    |   true   | ba3e87fa-ee81-42fc-baed-c59ca6c8a100 |
| ca0694ae-7e8d-4c84-aad8-3f178416dec6 | in-use |  VF20LVG520711   |  7   |     lvms    |   true   | 51a20959-0a0c-4ef6-81ec-2edeab6e3588 |
| dc9e31f0-b27f-4400-a666-688365126f67 | in-use | UbuntuUTLV520711 |  7   |     lvms    |   true   | 1fe7d2c3-58ae-4ee8-8f5f-baf334195a59 |
+--------------------------------------+--------+------------------+------+-------------+----------+--------------------------------------+

Compare the volume IDs above with the targetcli> ls output.

The next snapshot demonstrates lvms && lvmz volumes attached to the corresponding
nova instances utilizing the LVMiSCSI cinder backend.

On Compute Node iscsiadm output will look as follows :-

[root@juno2 ~]# iscsiadm -m discovery -t sendtargets -p 192.168.1.127
192.168.1.127:3260,1 iqn.2010-10.org.openstack:volume-3a4f6878-530a-4a28-87bb-92ee256f63ea
192.168.1.127:3260,1 iqn.2010-10.org.openstack:volume-ca0694ae-7e8d-4c84-aad8-3f178416dec6
192.168.1.127:3260,1 iqn.2010-10.org.openstack:volume-dc9e31f0-b27f-4400-a666-688365126f67
192.168.1.127:3260,1 iqn.2010-10.org.openstack:volume-51528876-405d-4a15-abc2-61ad72fc7d7e
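
Nova logs in to these targets automatically on volume attach, but for troubleshooting a manual login may be done as well (a sketch; the IQN is taken from the discovery output above):

[root@juno2 ~]# iscsiadm -m node -T iqn.2010-10.org.openstack:volume-3a4f6878-530a-4a28-87bb-92ee256f63ea -p 192.168.1.127 -l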

References

1. https://www.centos.org/forums/viewtopic.php?f=47&t=48591


Two Node (Controller+Compute) IceHouse Neutron OVS&GRE Cluster on Fedora 20

June 2, 2014

Two KVMs have been created, each one having 2 virtual NICs (eth0, eth1), for the Controller && Compute Node setup. Before running `packstack --answer-file=twoNode-answer.txt` SELINUX was set to permissive on both nodes. Both eth1 interfaces were assigned IPs from the GRE libvirt subnet before installation and set to promiscuous mode (192.168.122.127, 192.168.122.137). Packstack binds to the public IP of eth0, 192.169.142.127; Compute Node: 192.169.142.137
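
Setting eth1 to promiscuous mode, in case it has to be redone by hand on either node (a sketch):

# ip link set eth1 promisc on
# ip link show eth1    # PROMISC should appear in the flags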

ANSWER FILE Two Node IceHouse Neutron OVS&GRE  and  updated *.ini , *.conf files after packstack setup  http://textuploader.com/0ts8

Two libvirt subnets were created on the F20 KVM Server to support the installation :-

Public subnet: 192.169.142.0/24
GRE tunnel support subnet: 192.168.122.0/24

1. Create a new libvirt network (other than your default 192.168.x.x) file:

$ cat openstackvms.xml
<network>
 <name>openstackvms</name>
 <uuid>d0e9964a-f91a-40c0-b769-a609aee41bf2</uuid>
 <forward mode='nat'>
 <nat>
 <port start='1024' end='65535'/>
 </nat>
 </forward>
 <bridge name='virbr1' stp='on' delay='0' />
 <mac address='52:54:00:60:f8:6e'/>
 <ip address='192.169.142.1' netmask='255.255.255.0'>
 <dhcp>
 <range start='192.169.142.2' end='192.169.142.254' />
 </dhcp>
 </ip>
 </network>
 2. Define the above network:
  $ virsh net-define openstackvms.xml
3. Start the network and enable it for "autostart"
 $ virsh net-start openstackvms
 $ virsh net-autostart openstackvms

4. List your libvirt networks to see if it reflects:
  $ virsh net-list
  Name                 State      Autostart     Persistent
  ----------------------------------------------------------
  default              active     yes           yes
  openstackvms         active     yes           yes

5. Optionally, list your bridge devices:
  $ brctl show
  bridge name     bridge id               STP enabled     interfaces
  virbr0          8000.5254003339b3       yes             virbr0-nic
  virbr1          8000.52540060f86e       yes             virbr1-nic

After packstack 2 Node (Controller+Compute) IceHouse OVS&GRE setup :-

[root@ip-192-169-142-127 ~]# mysql -u root -p
Enter password:
Welcome to the MariaDB monitor.  Commands end with ; or \g.
Your MariaDB connection id is 143
Server version: 5.5.36-MariaDB-wsrep MariaDB Server, wsrep_25.9.r3961

Copyright (c) 2000, 2014, Oracle, Monty Program Ab and others.

Type 'help;' or '\h' for help. Type '\c' to clear the current input statement.

MariaDB [(none)]> show databases ;
+--------------------+
| Database           |
+--------------------+
| information_schema |
| cinder             |
| glance             |
| keystone           |
| mysql              |
| nova               |
| ovs_neutron        |
| performance_schema |
| test               |
+--------------------+
9 rows in set (0.02 sec)

MariaDB [(none)]> use ovs_neutron ;
Reading table information for completion of table and column names
You can turn off this feature to get a quicker startup with -A

Database changed
MariaDB [ovs_neutron]> show tables ;
+---------------------------+
| Tables_in_ovs_neutron     |
+---------------------------+
| agents                    |
| alembic_version           |
| allowedaddresspairs       |
| dnsnameservers            |
| externalnetworks          |
| extradhcpopts             |
| floatingips               |
| ipallocationpools         |
| ipallocations             |
| ipavailabilityranges      |
| networkdhcpagentbindings  |
| networks                  |
| ovs_network_bindings      |
| ovs_tunnel_allocations    |
| ovs_tunnel_endpoints      |
| ovs_vlan_allocations      |
| portbindingports          |
| ports                     |
| quotas                    |
| routerl3agentbindings     |
| routerroutes              |
| routers                   |
| securitygroupportbindings |
| securitygrouprules        |
| securitygroups            |
| servicedefinitions        |
| servicetypes              |
| subnetroutes              |
| subnets                   |
+---------------------------+
29 rows in set (0.00 sec)

MariaDB [ovs_neutron]> select * from networks ;
+----------------------------------+--------------------------------------+---------+--------+----------------+--------+
| tenant_id                        | id                                   | name    | status | admin_state_up | shared |
+----------------------------------+--------------------------------------+---------+--------+----------------+--------+
| 179e44f8f53641da89c3eb5d07405523 | 3854bc88-ae14-47b0-9787-233e54ffe7e5 | private | ACTIVE |              1 |      0 |
| 179e44f8f53641da89c3eb5d07405523 | 6e8dc33d-55d4-47b5-925f-e1fa96128c02 | public  | ACTIVE |              1 |      1 |
+----------------------------------+--------------------------------------+---------+--------+----------------+--------+
2 rows in set (0.00 sec)

MariaDB [ovs_neutron]> select * from routers ;
+----------------------------------+--------------------------------------+---------+--------+----------------+--------------------------------------+-------------+
| tenant_id                        | id                                   | name    | status | admin_state_up | gw_port_id                           | enable_snat |
+----------------------------------+--------------------------------------+---------+--------+----------------+--------------------------------------+-------------+
| 179e44f8f53641da89c3eb5d07405523 | 94829282-e6b2-4364-b640-8c0980218a4f | ROUTER3 | ACTIVE |              1 | df9711e4-d1a2-4255-9321-69105fbd8665 |           1 |
+----------------------------------+--------------------------------------+---------+--------+----------------+--------------------------------------+-------------+
1 row in set (0.00 sec)

MariaDB [ovs_neutron]>

*********************

On Controller :-

********************

[root@ip-192-169-142-127 network-scripts]# cat ifcfg-br-ex
DEVICE="br-ex"
BOOTPROTO="static"
IPADDR="192.169.142.127"
NETMASK="255.255.255.0"
DNS1="83.221.202.254"
BROADCAST="192.169.142.255"
GATEWAY="192.169.142.1"
NM_CONTROLLED="no"
DEFROUTE="yes"
IPV4_FAILURE_FATAL="yes"
IPV6INIT=no
ONBOOT="yes"
TYPE="OVSBridge"
DEVICETYPE="ovs"

[root@ip-192-169-142-127 network-scripts]# cat ifcfg-eth0
DEVICE=eth0
ONBOOT="yes"
TYPE="OVSPort"
DEVICETYPE="ovs"
OVS_BRIDGE=br-ex
NM_CONTROLLED=no
IPV6INIT=no

[root@ip-192-169-142-127 network-scripts]# cat ifcfg-eth1
TYPE=Ethernet
BOOTPROTO=none
DEFROUTE=yes
IPV4_FAILURE_FATAL=no
IPV6INIT=yes
IPV6_AUTOCONF=yes
IPV6_DEFROUTE=yes
IPV6_FAILURE_FATAL=no
DEVICE=eth1
ONBOOT=yes
IPADDR=192.168.122.127
PREFIX=24
GATEWAY=192.168.122.1
DNS1=83.221.202.254
NM_CONTROLLED=no
IPV6_PEERDNS=yes
IPV6_PEERROUTES=yes

*************************************

ovs-vsctl show output on controller

*************************************

[root@ip-192-169-142-127 ~(keystone_admin)]# ovs-vsctl show
dc2c76d6-40e3-496e-bdec-470452758c32
    Bridge br-tun
        Port patch-int
            Interface patch-int
                type: patch
                options: {peer=patch-tun}
        Port "gre-1"
            Interface "gre-1"
                type: gre
                options: {in_key=flow, local_ip="192.168.122.127", out_key=flow, remote_ip="192.168.122.137"}
        Port br-tun
            Interface br-tun
                type: internal
    Bridge br-int
        Port br-int
            Interface br-int
                type: internal
        Port patch-tun
            Interface patch-tun
                type: patch
                options: {peer=patch-int}
        Port "tap7acb7666-aa"
            tag: 1
            Interface "tap7acb7666-aa"
                type: internal
        Port "qr-a26fe722-07"
            tag: 1
            Interface "qr-a26fe722-07"
                type: internal
    Bridge br-ex
        Port "qg-df9711e4-d1"
            Interface "qg-df9711e4-d1"
                type: internal
        Port "eth0"
            Interface "eth0"
        Port br-ex
            Interface br-ex
                type: internal
    ovs_version: "2.1.2"

********************

On Compute:-

********************

[root@ip-192-169-142-137 network-scripts]# cat ifcfg-eth0
TYPE=Ethernet
BOOTPROTO=static
DEFROUTE=yes
IPV4_FAILURE_FATAL=no
IPV6INIT=yes
IPV6_AUTOCONF=yes
IPV6_DEFROUTE=yes
IPV6_FAILURE_FATAL=no
DEVICE=eth0
UUID=f96e561d-d14c-4fb1-9657-0c935f7f5721
ONBOOT=yes
IPADDR=192.169.142.137
PREFIX=24
GATEWAY=192.169.142.1
DNS1=83.221.202.254
HWADDR=52:54:00:67:AC:04
NM_CONTROLLED=no
IPV6_PEERDNS=yes
IPV6_PEERROUTES=yes

[root@ip-192-169-142-137 network-scripts]# cat ifcfg-eth1
TYPE=Ethernet
BOOTPROTO=none
DEFROUTE=yes
IPV4_FAILURE_FATAL=no
IPV6INIT=yes
IPV6_AUTOCONF=yes
IPV6_DEFROUTE=yes
IPV6_FAILURE_FATAL=no
DEVICE=eth1
ONBOOT=yes
IPADDR=192.168.122.137
PREFIX=24
GATEWAY=192.168.122.1
DNS1=83.221.202.254
NM_CONTROLLED=no
IPV6_PEERDNS=yes
IPV6_PEERROUTES=yes

************************************

ovs-vsctl show output on compute

************************************

[root@ip-192-169-142-137 ~]# ovs-vsctl show
1c6671de-fcdf-4a29-9eee-96c949848fff
    Bridge br-tun
        Port "gre-2"
            Interface "gre-2"
                type: gre
                options: {in_key=flow, local_ip="192.168.122.137", out_key=flow, remote_ip="192.168.122.127"}
        Port patch-int
            Interface patch-int
                type: patch
                options: {peer=patch-tun}
        Port br-tun
            Interface br-tun
                type: internal
    Bridge br-int
        Port "qvo87038189-3f"
            tag: 1
            Interface "qvo87038189-3f"
        Port patch-tun
            Interface patch-tun
                type: patch
                options: {peer=patch-int}
        Port br-int
            Interface br-int
                type: internal
    ovs_version: "2.1.2"

[root@ip-192-169-142-137 ~]# brctl show
bridge name         bridge id           STP enabled    interfaces
qbr87038189-3f      8000.2abf9e69f97c   no             qvb87038189-3f
                                                       tap87038189-3f

*************************

Metadata verification 

*************************

[root@ip-192-169-142-127 ~(keystone_admin)]# ip netns exec qrouter-94829282-e6b2-4364-b640-8c0980218a4f iptables -S -t nat | grep 169.254
-A neutron-l3-agent-PREROUTING -d 169.254.169.254/32 -p tcp -m tcp --dport 80 -j REDIRECT --to-ports 9697
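
From inside any cloud instance the same chain may be verified end-to-end by querying the metadata service (the REDIRECT rule above forwards the request to the proxy on port 9697):

$ curl http://169.254.169.254/latest/meta-data/instance-id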

[root@ip-192-169-142-127 ~(keystone_admin)]# ip netns exec qrouter-94829282-e6b2-4364-b640-8c0980218a4f netstat -anpt
Active Internet connections (servers and established)
Proto Recv-Q Send-Q Local Address           Foreign Address         State       PID/Program name
tcp        0      0 0.0.0.0:9697            0.0.0.0:*               LISTEN      3771/python

[root@ip-192-169-142-127 ~(keystone_admin)]# ps -ef | grep 3771
root      3771     1  0 13:58 ?        00:00:00 /usr/bin/python /bin/neutron-ns-metadata-proxy --pid_file=/var/lib/neutron/external/pids/94829282-e6b2-4364-b640-8c0980218a4f.pid --metadata_proxy_socket=/var/lib/neutron/metadata_proxy --router_id=94829282-e6b2-4364-b640-8c0980218a4f --state_path=/var/lib/neutron --metadata_port=9697 --verbose --log-file=neutron-ns-metadata-proxy-94829282-e6b2-4364-b640-8c0980218a4f.log --log-dir=/var/log/neutron

[root@ip-192-169-142-127 ~(keystone_admin)]# netstat -anpt | grep 9697

tcp        0      0 0.0.0.0:9697            0.0.0.0:*               LISTEN      1024/python

[root@ip-192-169-142-127 ~(keystone_admin)]# ps -ef | grep 1024
nova      1024     1  1 13:58 ?        00:00:05 /usr/bin/python /usr/bin/nova-api
nova      3369  1024  0 13:58 ?        00:00:00 /usr/bin/python /usr/bin/nova-api
nova      3370  1024  0 13:58 ?        00:00:00 /usr/bin/python /usr/bin/nova-api
nova      3397  1024  0 13:58 ?        00:00:02 /usr/bin/python /usr/bin/nova-api
nova      3398  1024  0 13:58 ?        00:00:02 /usr/bin/python /usr/bin/nova-api
nova      3423  1024  0 13:58 ?        00:00:00 /usr/bin/python /usr/bin/nova-api
nova      3424  1024  0 13:58 ?        00:00:00 /usr/bin/python /usr/bin/nova-api
root      4947  4301  0 14:06 pts/0    00:00:00 grep --color=auto 1024


Sample of /etc/openstack-dashboard/local_settings

March 14, 2014

[root@dfw02 ~(keystone_admin)]$ cat  /etc/openstack-dashboard/local_settings | grep -v ^# | grep -v ^$
import os
from django.utils.translation import ugettext_lazy as _
from openstack_dashboard import exceptions
DEBUG = False
TEMPLATE_DEBUG = DEBUG
ALLOWED_HOSTS = ['192.168.1.127', 'localhost']
SESSION_ENGINE = "django.contrib.sessions.backends.cached_db"
DATABASES = {
    'default': {
        'ENGINE': 'django.db.backends.mysql',
        'NAME': 'dash',
        'USER': 'dash',
        'PASSWORD': 'fedora',
        'HOST': '192.168.1.127',
        'default-character-set': 'utf8'
    }
}

HORIZON_CONFIG = {
    'dashboards': ('project', 'admin', 'settings',),
    'default_dashboard': 'project',
    'user_home': 'openstack_dashboard.views.get_user_home',
    'ajax_queue_limit': 10,
    'auto_fade_alerts': {
        'delay': 3000,
        'fade_duration': 1500,
        'types': ['alert-success', 'alert-info']
    },
    'help_url': "http://docs.openstack.org",
    'exceptions': {'recoverable': exceptions.RECOVERABLE,
                   'not_found': exceptions.NOT_FOUND,
                   'unauthorized': exceptions.UNAUTHORIZED},
}

from horizon.utils import secret_key

LOCAL_PATH = '/var/lib/openstack-dashboard'
SECRET_KEY = secret_key.generate_or_read_from_file(os.path.join(LOCAL_PATH, '.secret_key_store'))

CACHES = {
    'default': {
        'BACKEND': 'django.core.cache.backends.memcached.MemcachedCache',
        'LOCATION': '127.0.0.1:11211',
    }
}

EMAIL_BACKEND = 'django.core.mail.backends.console.EmailBackend'

OPENSTACK_HOST = "192.168.1.127"

OPENSTACK_KEYSTONE_URL = "http://%s:5000/v2.0" % OPENSTACK_HOST

OPENSTACK_KEYSTONE_DEFAULT_ROLE = "Member"
OPENSTACK_KEYSTONE_BACKEND = {
    'name': 'native',
    'can_edit_user': True,
    'can_edit_group': True,
    'can_edit_project': True,
    'can_edit_domain': True,
    'can_edit_role': True
}

OPENSTACK_HYPERVISOR_FEATURES = {
    'can_set_mount_point': False,
    # NOTE: as of Grizzly this is not yet supported in Nova so enabling this
    # setting will not do anything useful
    'can_encrypt_volumes': False
}

OPENSTACK_NEUTRON_NETWORK = {
    'enable_lb': False,
    'enable_firewall': False,
    'enable_quotas': True,
    'enable_vpn': False,
    # The profile_support option is used to detect if an external router can be
    # configured via the dashboard. When using specific plugins the
    # profile_support can be turned on if needed.
    'profile_support': None,
    #'profile_support': 'cisco',
}
API_RESULT_LIMIT = 1000
API_RESULT_PAGE_SIZE = 20
TIME_ZONE = "UTC"
POLICY_FILES_PATH = '/etc/openstack-dashboard'
POLICY_FILES = {
    'identity': 'keystone_policy.json',
    'compute': 'nova_policy.json'
}

LOGGING = {
    'version': 1,
    # When set to True this will disable all logging except
    # for loggers specified in this configuration dictionary. Note that
    # if nothing is specified here and disable_existing_loggers is True,
    # django.db.backends will still log unless it is disabled explicitly.
    'disable_existing_loggers': False,
    'handlers': {
        'null': {
            'level': 'DEBUG',
            'class': 'django.utils.log.NullHandler',
        },
        'console': {
            # Set the level to "DEBUG" for verbose output logging.
            'level': 'INFO',
            'class': 'logging.StreamHandler',
        },
    },
    'loggers': {
        # Logging from django.db.backends is VERY verbose, send to null
        # by default.
        'django.db.backends': {
            'handlers': ['null'],
            'propagate': False,
        },
        'requests': {
            'handlers': ['null'],
            'propagate': False,
        },
        'horizon': {
            'handlers': ['console'],
            'level': 'DEBUG',
            'propagate': False,
        },
        'openstack_dashboard': {
            'handlers': ['console'],
            'level': 'DEBUG',
            'propagate': False,
        },
        'novaclient': {
            'handlers': ['console'],
            'level': 'DEBUG',
            'propagate': False,
        },
        'cinderclient': {
            'handlers': ['console'],
            'level': 'DEBUG',
            'propagate': False,
        },
        'keystoneclient': {
            'handlers': ['console'],
            'level': 'DEBUG',
            'propagate': False,
        },
        'glanceclient': {
            'handlers': ['console'],
            'level': 'DEBUG',
            'propagate': False,
        },
        'neutronclient': {
            'handlers': ['console'],
            'level': 'DEBUG',
            'propagate': False,
        },
        'heatclient': {
            'handlers': ['console'],
            'level': 'DEBUG',
            'propagate': False,
        },
        'ceilometerclient': {
            'handlers': ['console'],
            'level': 'DEBUG',
            'propagate': False,
        },
        'troveclient': {
            'handlers': ['console'],
            'level': 'DEBUG',
            'propagate': False,
        },
        'swiftclient': {
            'handlers': ['console'],
            'level': 'DEBUG',
            'propagate': False,
        },
        'openstack_auth': {
            'handlers': ['console'],
            'level': 'DEBUG',
            'propagate': False,
        },
        'nose.plugins.manager': {
            'handlers': ['console'],
            'level': 'DEBUG',
            'propagate': False,
        },
        'django': {
            'handlers': ['console'],
            'level': 'DEBUG',
            'propagate': False,
        },
        'iso8601': {
            'handlers': ['null'],
            'propagate': False,
        },
    }
}

SECURITY_GROUP_RULES = {
    'all_tcp': {
        'name': 'ALL TCP',
        'ip_protocol': 'tcp',
        'from_port': '1',
        'to_port': '65535',
    },
    'all_udp': {
        'name': 'ALL UDP',
        'ip_protocol': 'udp',
        'from_port': '1',
        'to_port': '65535',
    },
    'all_icmp': {
        'name': 'ALL ICMP',
        'ip_protocol': 'icmp',
        'from_port': '-1',
        'to_port': '-1',
    },
    'ssh': {
        'name': 'SSH',
        'ip_protocol': 'tcp',
        'from_port': '22',
        'to_port': '22',
    },
    'smtp': {
        'name': 'SMTP',
        'ip_protocol': 'tcp',
        'from_port': '25',
        'to_port': '25',
    },
    'dns': {
        'name': 'DNS',
        'ip_protocol': 'tcp',
        'from_port': '53',
        'to_port': '53',
    },
    'http': {
        'name': 'HTTP',
        'ip_protocol': 'tcp',
        'from_port': '80',
        'to_port': '80',
    },
    'pop3': {
        'name': 'POP3',
        'ip_protocol': 'tcp',
        'from_port': '110',
        'to_port': '110',
    },
    'imap': {
        'name': 'IMAP',
        'ip_protocol': 'tcp',
        'from_port': '143',
        'to_port': '143',
    },
    'ldap': {
        'name': 'LDAP',
        'ip_protocol': 'tcp',
        'from_port': '389',
        'to_port': '389',
    },
    'https': {
        'name': 'HTTPS',
        'ip_protocol': 'tcp',
        'from_port': '443',
        'to_port': '443',
    },
    'smtps': {
        'name': 'SMTPS',
        'ip_protocol': 'tcp',
        'from_port': '465',
        'to_port': '465',
    },
    'imaps': {
        'name': 'IMAPS',
        'ip_protocol': 'tcp',
        'from_port': '993',
        'to_port': '993',
    },
    'pop3s': {
        'name': 'POP3S',
        'ip_protocol': 'tcp',
        'from_port': '995',
        'to_port': '995',
    },
    'ms_sql': {
        'name': 'MS SQL',
        'ip_protocol': 'tcp',
        'from_port': '1433',
        'to_port': '1433',
    },
    'mysql': {
        'name': 'MYSQL',
        'ip_protocol': 'tcp',
        'from_port': '3306',
        'to_port': '3306',
    },
    'rdp': {
        'name': 'RDP',
        'ip_protocol': 'tcp',
        'from_port': '3389',
        'to_port': '3389',
    },
}


Surfing the Internet & SSH connection to a cloud instance of Fedora 20 via Neutron GRE

February 4, 2014

When you meet GRE tunnelling for the first time you have to understand that GRE encapsulation requires 24 bytes of overhead, and a lot of problems arise from it; view http://www.cisco.com/en/US/tech/tk827/tk369/technologies_tech_note09186a0080093f1f.shtml

In particular, the Two Node (Controller+Compute) RDO Havana cluster on Fedora 20 hosts that I built per guidelines from http://kashyapc.wordpress.com/2013/11/23/neutron-configs-for-a-two-node-openstack-havana-setup-on-fedora-20/ was a Neutron GRE cluster. Hence, for any instance that has been set up (Fedora or Ubuntu), a problem with network communication arises immediately: apt-get update just refuses to work on an Ubuntu Salamander Server instance (the default MTU value for the Ethernet interface is 1500).

A lightweight X windows environment (fluxbox) has also been set up on the Fedora 20 cloud instance for quick Internet access.

The solution is simple: set the MTU to 1400 on every cloud instance.

Place in /etc/rc.d/rc.local (or /etc/rc.local for Ubuntu Server) :-

#!/bin/sh
ifconfig eth0 mtu 1400 up ;
exit 0
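
Instead of patching every instance, the MTU may also be pushed to all instances via DHCP option 26 (a sketch, assuming the default dnsmasq-based DHCP agent on the Controller):

# echo 'dhcp-option-force=26,1400' > /etc/neutron/dnsmasq-neutron.conf
# openstack-config --set /etc/neutron/dhcp_agent.ini DEFAULT \
    dnsmasq_config_file /etc/neutron/dnsmasq-neutron.conf
# service neutron-dhcp-agent restart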

At least in the meantime I don't see problems with the LAN and routing to the Internet (via a simple D-Link router) on cloud instances F19, F20, Ubuntu 13.10 Server, or on LAN hosts.

For a better understanding of what this is all about, please view http://bderzhavets.blogspot.com/2014/01/setting-up-two-physical-node-openstack.html [1].

Boot the instance via :

[root@dfw02 ~(keystone_admin)]$ nova boot --flavor 2 --user-data=./myfile.txt --block_device_mapping vda=3cb671c2-06d8-4b3a-aca6-476b66fb309a:::0 VMF20RS

where

[root@dfw02 ~(keystone_admin)]$ cinder list
+--------------------------------------+-----------+--------------+------+-------------+----------+--------------------------------------+
|                  ID                  |   Status  | Display Name | Size | Volume Type | Bootable |             Attached to              |
+--------------------------------------+-----------+--------------+------+-------------+----------+--------------------------------------+
| 3cb671c2-06d8-4b3a-aca6-476b66fb309a | available | Fedora20VOL  |  9   |     None    |   true   |                                      |
| 49d5b872-3720-4915-ad1e-ec428e956558 |   in-use  |   VF20VOL    |  9   |     None    |   true   | 0e0b4f69-4cff-4423-ba9d-71c8eb53af16 |
| b4831720-941f-41a7-b747-1810df49b261 |   in-use  | UbuntuSALVG  |  7   |     None    |   true   | 5d750d44-0cad-4a02-8432-0ee10e988b2c |
+--------------------------------------+-----------+--------------+------+-------------+----------+--------------------------------------+

and

[root@dfw02 ~(keystone_admin)]$ cat myfile.txt

#cloud-config
password: mysecret
chpasswd: { expire: False }
ssh_pwauth: True

Then
[root@dfw02 ~(keystone_admin)]$ nova list
+--------------------------------------+---------------+-----------+------------+-------------+-----------------------------+
| ID                                   | Name          | Status    | Task State | Power State | Networks                    |
+--------------------------------------+---------------+-----------+------------+-------------+-----------------------------+
| 964fd0b0-b331-4b0c-a1d5-118bf8a40abf | CentOS6.5     | SUSPENDED | None       | Shutdown    | int=10.0.0.5, 192.168.1.105 |
| 3f2db906-567c-48b0-967e-799b2bffe277 | Cirros312     | SUSPENDED | None       | Shutdown    | int=10.0.0.2, 192.168.1.101 |
| 5d750d44-0cad-4a02-8432-0ee10e988b2c | UbuntuSaucySL | SUSPENDED | None       | Shutdown    | int=10.0.0.8, 192.168.1.112 |
| 0e0b4f69-4cff-4423-ba9d-71c8eb53af16 | VF20KVM       | SUSPENDED | None       | Shutdown    | int=10.0.0.7, 192.168.1.109 |
| 10306d33-9684-4dab-a017-266fb9ab496a | VMF20RS       | ACTIVE    | None       | Running     | int=10.0.0.4                |
+--------------------------------------+---------------+-----------+------------+-------------+-----------------------------+

[root@dfw02 ~(keystone_admin)]$ neutron port-list --device-id 10306d33-9684-4dab-a017-266fb9ab496a
+--------------------------------------+------+-------------------+---------------------------------------------------------------------------------+
| id                                   | name | mac_address       | fixed_ips                                                                       |
+--------------------------------------+------+-------------------+---------------------------------------------------------------------------------+
| fa982101-e2d9-4d21-be9d-7d485c792ce1 |      | fa:16:3e:57:e2:67 | {"subnet_id": "fa930cea-3d51-4cbe-a305-579f12aa53c0", "ip_address": "10.0.0.4"} |
+--------------------------------------+------+-------------------+---------------------------------------------------------------------------------+

[root@dfw02 ~(keystone_admin)]$ neutron floatingip-create ext

Created a new floatingip:
+---------------------+--------------------------------------+
| Field               | Value                                |
+---------------------+--------------------------------------+
| fixed_ip_address    |                                      |
| floating_ip_address | 192.168.1.115                        |
| floating_network_id | 780ce2f3-2e6e-4881-bbac-857813f9a8e0 |
| id                  | d9f1b47d-c4b1-4865-92d2-c1d9964a35fb |
| port_id             |                                      |
| router_id           |                                      |
| tenant_id           | d0a0acfdb62b4cc8a2bfa8d6a08bb62f     |
+---------------------+--------------------------------------+

[root@dfw02 ~(keystone_admin)]$  neutron floatingip-associate d9f1b47d-c4b1-4865-92d2-c1d9964a35fb fa982101-e2d9-4d21-be9d-7d485c792ce1

[root@dfw02 ~(keystone_admin)]$ ping  192.168.1.115

Connect via virt-manager to the Compute node from the Controller and log into the text-mode console as "fedora" with the known password "mysecret". Set the MTU to 1400, create a new sudoer user, then reboot the instance. Now ssh from the Controller works in the traditional way :

[root@dfw02 ~(keystone_admin)]$ nova list | grep VMF20RS
| 10306d33-9684-4dab-a017-266fb9ab496a | VMF20RS       | SUSPENDED | resuming   | Shutdown    | int=10.0.0.4, 192.168.1.115 |

[root@dfw02 ~(keystone_admin)]$ nova list | grep VMF20RS

| 10306d33-9684-4dab-a017-266fb9ab496a | VMF20RS       | ACTIVE    | None       | Running     | int=10.0.0.4, 192.168.1.115 |

[root@dfw02 ~(keystone_admin)]$ ssh root@192.168.1.115

root@192.168.1.115's password:
Last login: Sat Feb  1 12:32:12 2014 from 192.168.1.127
[root@vmf20rs ~]# uname -a
Linux vmf20rs.novalocal 3.12.8-300.fc20.x86_64 #1 SMP Thu Jan 16 01:07:50 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux

[root@vmf20rs ~]# ifconfig
eth0: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1400
        inet 10.0.0.4  netmask 255.255.255.0  broadcast 10.0.0.255
        inet6 fe80::f816:3eff:fe57:e267  prefixlen 64  scopeid 0x20<link>
        ether fa:16:3e:57:e2:67  txqueuelen 1000  (Ethernet)
        RX packets 591788  bytes 770176441 (734.4 MiB)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 196309  bytes 20105918 (19.1 MiB)
        TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0

lo: flags=73<UP,LOOPBACK,RUNNING>  mtu 65536
        inet 127.0.0.1  netmask 255.0.0.0
        inet6 ::1  prefixlen 128  scopeid 0x10<host>
        loop  txqueuelen 0  (Local Loopback)
        RX packets 2  bytes 140 (140.0 B)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 2  bytes 140 (140.0 B)
        TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0

Text-mode Internet works as well, via "links" for instance :-

Set up a lightweight X windows environment on the F20 cloud instance and run the Fedora 20 cloud instance in a Spice session via virt-manager (the Controller connects to the Compute node via virt-manager). Spice console and QXL are specified in virt-manager, then `nova reboot VF20WRT`.

# yum install xorg-x11-server-Xorg xorg-x11-xdm fluxbox \
xorg-x11-drv-ati xorg-x11-drv-evdev xorg-x11-drv-fbdev \
xorg-x11-drv-intel xorg-x11-drv-mga xorg-x11-drv-nouveau \
xorg-x11-drv-openchrome xorg-x11-drv-qxl xorg-x11-drv-synaptics \
xorg-x11-drv-vesa xorg-x11-drv-vmmouse xorg-x11-drv-vmware \
xorg-x11-drv-wacom xorg-x11-font-utils xorg-x11-drv-modesetting \
xorg-x11-glamor xorg-x11-utils xterm

# yum install dejavu-fonts-common \
dejavu-sans-fonts \
dejavu-sans-mono-fonts \
dejavu-serif-fonts

# echo "exec fluxbox" > ~/.xinitrc
# startx

Fedora 20 cloud instance running in Spice Session via virt-manager with QXL 64 MB of VRAM  :-

Shutting down fluxbox :-

Done

Now run `nova suspend VF20WRT`

Connecting to Fedora 20 cloud instance via spicy from Compute node :-

Fluxbox on Ubuntu 13.10 Server Cloud Instance:-

References

1. http://bderzhavets.blogspot.com/2014/01/setting-up-two-physical-node-openstack.html

