Two Node (Controller+Compute) IceHouse Neutron OVS&GRE Cluster on Fedora 20

June 2, 2014

Two KVM guests were created, each with two virtual NICs (eth0, eth1), for the Controller and Compute node setup. Before running `packstack --answer-file=twoNode-answer.txt`, SELinux was set to permissive on both nodes. Both eth1 interfaces were assigned IPs from the GRE libvirt subnet before installation and set to promiscuous mode (192.168.122.127, 192.168.122.137). Packstack binds to the public IPs on eth0: Controller 192.169.142.127, Compute Node 192.169.142.137.
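
Roughly, the preparation looked like this (a minimal sketch, assuming the answer file name above; run the first three commands on both nodes, the last one on the Controller only):

 # Put SELinux into permissive mode for the current boot and persistently
 setenforce 0
 sed -i 's/^SELINUX=.*/SELINUX=permissive/' /etc/selinux/config
 # eth1 carries the GRE tunnel subnet and is set to promiscuous mode
 ip link set eth1 promisc on
 # On the Controller only, kick off the installation
 packstack --answer-file=twoNode-answer.txt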

The answer file for this two-node IceHouse Neutron OVS&GRE setup, along with the updated *.ini and *.conf files after the packstack run, is posted at http://textuploader.com/0ts8

Two libvirt subnets were created on the F20 KVM server to support the installation:

Public subnet:              192.169.142.0/24
GRE tunnel support subnet:  192.168.122.0/24

1. Create a new libvirt network definition file (separate from your default 192.168.x.x network):

$ cat openstackvms.xml
<network>
 <name>openstackvms</name>
 <uuid>d0e9964a-f91a-40c0-b769-a609aee41bf2</uuid>
 <forward mode='nat'>
 <nat>
 <port start='1024' end='65535'/>
 </nat>
 </forward>
 <bridge name='virbr1' stp='on' delay='0' />
 <mac address='52:54:00:60:f8:6e'/>
 <ip address='192.169.142.1' netmask='255.255.255.0'>
 <dhcp>
 <range start='192.169.142.2' end='192.169.142.254' />
 </dhcp>
 </ip>
 </network>
 2. Define the above network:
  $ virsh net-define openstackvms.xml
3. Start the network and enable it for "autostart"
 $ virsh net-start openstackvms
 $ virsh net-autostart openstackvms

4. List your libvirt networks to see if it reflects:
  $ virsh net-list
  Name                 State      Autostart     Persistent
  ----------------------------------------------------------
  default              active     yes           yes
  openstackvms         active     yes           yes

5. Optionally, list your bridge devices:
  $ brctl show
  bridge name     bridge id               STP enabled     interfaces
  virbr0          8000.5254003339b3       yes             virbr0-nic
  virbr1          8000.52540060f86e       yes             virbr1-nic
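
For reference, here is a hedged virt-install sketch of how such a two-NIC guest can be created (the guest name, disk path/size, RAM and install URL are illustrative, not taken from the original setup; the first --network becomes eth0 on the public subnet, the second becomes eth1 on the GRE subnet):

 # virt-install --name icehouse-controller --ram 4096 --vcpus 2 \
     --disk path=/var/lib/libvirt/images/icehouse-controller.qcow2,size=40 \
     --network network=openstackvms,model=virtio \
     --network network=default,model=virtio \
     --location http://dl.fedoraproject.org/pub/fedora/linux/releases/20/Fedora/x86_64/os/ \
     --graphics spice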

After the packstack two-node (Controller+Compute) IceHouse OVS&GRE setup :-

[root@ip-192-169-142-127 ~]# mysql -u root -p
Enter password:
Welcome to the MariaDB monitor.  Commands end with ; or \g.
Your MariaDB connection id is 143
Server version: 5.5.36-MariaDB-wsrep MariaDB Server, wsrep_25.9.r3961

Copyright (c) 2000, 2014, Oracle, Monty Program Ab and others.

Type 'help;' or '\h' for help. Type '\c' to clear the current input statement.

MariaDB [(none)]> show databases ;
+--------------------+
| Database           |
+--------------------+
| information_schema |
| cinder             |
| glance             |
| keystone           |
| mysql              |
| nova               |
| ovs_neutron        |
| performance_schema |
| test               |
+--------------------+
9 rows in set (0.02 sec)

MariaDB [(none)]> use ovs_neutron ;
Reading table information for completion of table and column names
You can turn off this feature to get a quicker startup with -A

Database changed
MariaDB [ovs_neutron]> show tables ;
+---------------------------+
| Tables_in_ovs_neutron     |
+---------------------------+
| agents                    |
| alembic_version           |
| allowedaddresspairs       |
| dnsnameservers            |
| externalnetworks          |
| extradhcpopts             |
| floatingips               |
| ipallocationpools         |
| ipallocations             |
| ipavailabilityranges      |
| networkdhcpagentbindings  |
| networks                  |
| ovs_network_bindings      |
| ovs_tunnel_allocations    |
| ovs_tunnel_endpoints      |
| ovs_vlan_allocations      |
| portbindingports          |
| ports                     |
| quotas                    |
| routerl3agentbindings     |
| routerroutes              |
| routers                   |
| securitygroupportbindings |
| securitygrouprules        |
| securitygroups            |
| servicedefinitions        |
| servicetypes              |
| subnetroutes              |
| subnets                   |
+---------------------------+
29 rows in set (0.00 sec)

MariaDB [ovs_neutron]> select * from networks ;
+----------------------------------+--------------------------------------+---------+--------+----------------+--------+
| tenant_id                        | id                                   | name    | status | admin_state_up | shared |
+----------------------------------+--------------------------------------+---------+--------+----------------+--------+
| 179e44f8f53641da89c3eb5d07405523 | 3854bc88-ae14-47b0-9787-233e54ffe7e5 | private | ACTIVE |              1 |      0 |
| 179e44f8f53641da89c3eb5d07405523 | 6e8dc33d-55d4-47b5-925f-e1fa96128c02 | public  | ACTIVE |              1 |      1 |
+----------------------------------+--------------------------------------+---------+--------+----------------+--------+
2 rows in set (0.00 sec)

MariaDB [ovs_neutron]> select * from routers ;
+----------------------------------+--------------------------------------+---------+--------+----------------+--------------------------------------+-------------+
| tenant_id                        | id                                   | name    | status | admin_state_up | gw_port_id                           | enable_snat |
+----------------------------------+--------------------------------------+---------+--------+----------------+--------------------------------------+-------------+
| 179e44f8f53641da89c3eb5d07405523 | 94829282-e6b2-4364-b640-8c0980218a4f | ROUTER3 | ACTIVE |              1 | df9711e4-d1a2-4255-9321-69105fbd8665 |           1 |
+----------------------------------+--------------------------------------+---------+--------+----------------+--------------------------------------+-------------+
1 row in set (0.00 sec)

MariaDB [ovs_neutron]>
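
Besides inspecting the database, a quick sanity check is to list the Neutron agents from the Controller (a hedged check, not from the original session):

 [root@ip-192-169-142-127 ~(keystone_admin)]# neutron agent-list
 # Expect an Open vSwitch agent on both nodes (ip-192-169-142-127 and ip-192-169-142-137),
 # plus DHCP, L3 and Metadata agents on the Controller, all reported as alive ( :-) ).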

*********************

On Controller :-

********************

[root@ip-192-169-142-127 network-scripts]# cat ifcfg-br-ex
DEVICE="br-ex"
BOOTPROTO="static"
IPADDR="192.169.142.127"
NETMASK="255.255.255.0"
DNS1="83.221.202.254"
BROADCAST="192.169.142.255"
GATEWAY="192.169.142.1"
NM_CONTROLLED="no"
DEFROUTE="yes"
IPV4_FAILURE_FATAL="yes"
IPV6INIT=no
ONBOOT="yes"
TYPE="OVSBridge"
DEVICETYPE="ovs"

[root@ip-192-169-142-127 network-scripts]# cat ifcfg-eth0
DEVICE=eth0
ONBOOT="yes"
TYPE="OVSPort"
DEVICETYPE="ovs"
OVS_BRIDGE=br-ex
NM_CONTROLLED=no
IPV6INIT=no

[root@ip-192-169-142-127 network-scripts]# cat ifcfg-eth1
TYPE=Ethernet
BOOTPROTO=none
DEFROUTE=yes
IPV4_FAILURE_FATAL=no
IPV6INIT=yes
IPV6_AUTOCONF=yes
IPV6_DEFROUTE=yes
IPV6_FAILURE_FATAL=no
DEVICE=eth1
ONBOOT=yes
IPADDR=192.168.122.127
PREFIX=24
GATEWAY=192.168.122.1
DNS1=83.221.202.254
NM_CONTROLLED=no
IPV6_PEERDNS=yes
IPV6_PEERROUTES=yes

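If br-ex ever has to be rebuilt by hand rather than by packstack, a minimal sketch on the Controller looks like this (it simply reproduces what the ifcfg files above describe):

 ovs-vsctl --may-exist add-br br-ex
 ovs-vsctl --may-exist add-port br-ex eth0
 systemctl restart network     # re-reads ifcfg-br-ex and ifcfg-eth0
 ip addr show br-ex            # should now carry 192.169.142.127/24
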
*************************************

ovs-vsctl show output on controller

*************************************

[root@ip-192-169-142-127 ~(keystone_admin)]# ovs-vsctl show
dc2c76d6-40e3-496e-bdec-470452758c32
    Bridge br-tun
        Port patch-int
            Interface patch-int
                type: patch
                options: {peer=patch-tun}
        Port "gre-1"
            Interface "gre-1"
                type: gre
                options: {in_key=flow, local_ip="192.168.122.127", out_key=flow, remote_ip="192.168.122.137"}
        Port br-tun
            Interface br-tun
                type: internal
    Bridge br-int
        Port br-int
            Interface br-int
                type: internal
        Port patch-tun
            Interface patch-tun
                type: patch
                options: {peer=patch-int}
        Port "tap7acb7666-aa"
            tag: 1
            Interface "tap7acb7666-aa"
                type: internal
        Port "qr-a26fe722-07"
            tag: 1
            Interface "qr-a26fe722-07"
                type: internal
    Bridge br-ex
        Port "qg-df9711e4-d1"
            Interface "qg-df9711e4-d1"
                type: internal
        Port "eth0"
            Interface "eth0"
        Port br-ex
            Interface br-ex
                type: internal
    ovs_version: "2.1.2"
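
The GRE port shown above is driven by the OVS plugin settings that packstack writes; roughly the following (a hedged excerpt of the [ovs] section in /etc/neutron/plugin.ini, which on this kind of setup points at ovs_neutron_plugin.ini; on the Compute node local_ip is 192.168.122.137):

 [ovs]
 tenant_network_type = gre
 tunnel_id_ranges = 1:1000
 enable_tunneling = True
 integration_bridge = br-int
 tunnel_bridge = br-tun
 local_ip = 192.168.122.127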

********************

On Compute:-

********************

[root@ip-192-169-142-137 network-scripts]# cat ifcfg-eth0
TYPE=Ethernet
BOOTPROTO=static
DEFROUTE=yes
IPV4_FAILURE_FATAL=no
IPV6INIT=yes
IPV6_AUTOCONF=yes
IPV6_DEFROUTE=yes
IPV6_FAILURE_FATAL=no
DEVICE=eth0
UUID=f96e561d-d14c-4fb1-9657-0c935f7f5721
ONBOOT=yes
IPADDR=192.169.142.137
PREFIX=24
GATEWAY=192.169.142.1
DNS1=83.221.202.254
HWADDR=52:54:00:67:AC:04
NM_CONTROLLED=no
IPV6_PEERDNS=yes
IPV6_PEERROUTES=yes

[root@ip-192-169-142-137 network-scripts]# cat ifcfg-eth1
TYPE=Ethernet
BOOTPROTO=none
DEFROUTE=yes
IPV4_FAILURE_FATAL=no
IPV6INIT=yes
IPV6_AUTOCONF=yes
IPV6_DEFROUTE=yes
IPV6_FAILURE_FATAL=no
DEVICE=eth1
ONBOOT=yes
IPADDR=192.168.122.137
PREFIX=24
GATEWAY=192.168.122.1
DNS1=83.221.202.254
NM_CONTROLLED=no
IPV6_PEERDNS=yes
IPV6_PEERROUTES=yes

************************************

ovs-vsctl show output on compute

************************************

[root@ip-192-169-142-137 ~]# ovs-vsctl show
1c6671de-fcdf-4a29-9eee-96c949848fff
    Bridge br-tun
        Port "gre-2"
            Interface "gre-2"
                type: gre
                options: {in_key=flow, local_ip="192.168.122.137", out_key=flow, remote_ip="192.168.122.127"}
        Port patch-int
            Interface patch-int
                type: patch
                options: {peer=patch-tun}
        Port br-tun
            Interface br-tun
                type: internal
    Bridge br-int
        Port "qvo87038189-3f"
            tag: 1
            Interface "qvo87038189-3f"
        Port patch-tun
            Interface patch-tun
                type: patch
                options: {peer=patch-int}
        Port br-int
            Interface br-int
                type: internal
    ovs_version: "2.1.2"

[root@ip-192-169-142-137 ~]# brctl show
bridge name       bridge id            STP enabled    interfaces
qbr87038189-3f    8000.2abf9e69f97c    no             qvb87038189-3f
                                                      tap87038189-3f

*************************

Metadata verification 

*************************

[root@ip-192-169-142-127 ~(keystone_admin)]# ip netns exec qrouter-94829282-e6b2-4364-b640-8c0980218a4f iptables -S -t nat | grep 169.254
-A neutron-l3-agent-PREROUTING -d 169.254.169.254/32 -p tcp -m tcp --dport 80 -j REDIRECT --to-ports 9697

[root@ip-192-169-142-127 ~(keystone_admin)]# ip netns exec qrouter-94829282-e6b2-4364-b640-8c0980218a4f netstat -anpt
Active Internet connections (servers and established)
Proto Recv-Q Send-Q Local Address           Foreign Address         State       PID/Program name
tcp        0      0 0.0.0.0:9697            0.0.0.0:*               LISTEN      3771/python

[root@ip-192-169-142-127 ~(keystone_admin)]# ps -ef | grep 3771
root      3771     1  0 13:58 ?        00:00:00 /usr/bin/python /bin/neutron-ns-metadata-proxy --pid_file=/var/lib/neutron/external/pids/94829282-e6b2-4364-b640-8c0980218a4f.pid --metadata_proxy_socket=/var/lib/neutron/metadata_proxy --router_id=94829282-e6b2-4364-b640-8c0980218a4f --state_path=/var/lib/neutron --metadata_port=9697 --verbose --log-file=neutron-ns-metadata-proxy-94829282-e6b2-4364-b640-8c0980218a4f.log --log-dir=/var/log/neutron

[root@ip-192-169-142-127 ~(keystone_admin)]# netstat -anpt | grep 9697

tcp        0      0 0.0.0.0:9697            0.0.0.0:*               LISTEN      1024/python

[root@ip-192-169-142-127 ~(keystone_admin)]# ps -ef | grep 1024
nova      1024     1  1 13:58 ?        00:00:05 /usr/bin/python /usr/bin/nova-api
nova      3369  1024  0 13:58 ?        00:00:00 /usr/bin/python /usr/bin/nova-api
nova      3370  1024  0 13:58 ?        00:00:00 /usr/bin/python /usr/bin/nova-api
nova      3397  1024  0 13:58 ?        00:00:02 /usr/bin/python /usr/bin/nova-api
nova      3398  1024  0 13:58 ?        00:00:02 /usr/bin/python /usr/bin/nova-api
nova      3423  1024  0 13:58 ?        00:00:00 /usr/bin/python /usr/bin/nova-api
nova      3424  1024  0 13:58 ?        00:00:00 /usr/bin/python /usr/bin/nova-api
root      4947  4301  0 14:06 pts/0    00:00:00 grep --color=auto 1024
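
From inside any running instance the same chain can be exercised end to end (a hedged check; 169.254.169.254 is the standard metadata address, so a reply proves the redirect, the proxy and nova-api all cooperate):

 $ curl http://169.254.169.254/latest/meta-data/
 $ curl http://169.254.169.254/openstack/latest/meta_data.json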


Sample of /etc/openstack-dashboard/local_settings

March 14, 2014

[root@dfw02 ~(keystone_admin)]$ cat  /etc/openstack-dashboard/local_settings | grep -v ^# | grep -v ^$
import os
from django.utils.translation import ugettext_lazy as _
from openstack_dashboard import exceptions
DEBUG = False
TEMPLATE_DEBUG = DEBUG
ALLOWED_HOSTS = ['192.168.1.127', 'localhost']
SESSION_ENGINE = "django.contrib.sessions.backends.cached_db"
DATABASES = {
    'default': {
        'ENGINE': 'django.db.backends.mysql',
        'NAME': 'dash',
        'USER': 'dash',
        'PASSWORD': 'fedora',
        'HOST': '192.168.1.127',
        'default-character-set': 'utf8'
    }
}

HORIZON_CONFIG = {
    'dashboards': ('project', 'admin', 'settings',),
    'default_dashboard': 'project',
    'user_home': 'openstack_dashboard.views.get_user_home',
    'ajax_queue_limit': 10,
    'auto_fade_alerts': {
        'delay': 3000,
        'fade_duration': 1500,
        'types': ['alert-success', 'alert-info']
    },
    'help_url': "http://docs.openstack.org",
    'exceptions': {'recoverable': exceptions.RECOVERABLE,
                   'not_found': exceptions.NOT_FOUND,
                   'unauthorized': exceptions.UNAUTHORIZED},
}

from horizon.utils import secret_key

LOCAL_PATH = '/var/lib/openstack-dashboard'
SECRET_KEY = secret_key.generate_or_read_from_file(os.path.join(LOCAL_PATH, '.secret_key_store'))

CACHES = {
    'default': {
        'BACKEND': 'django.core.cache.backends.memcached.MemcachedCache',
        'LOCATION': '127.0.0.1:11211',
    }
}

EMAIL_BACKEND = 'django.core.mail.backends.console.EmailBackend'

OPENSTACK_HOST = "192.168.1.127"

OPENSTACK_KEYSTONE_URL = "http://%s:5000/v2.0" % OPENSTACK_HOST

OPENSTACK_KEYSTONE_DEFAULT_ROLE = "Member"
OPENSTACK_KEYSTONE_BACKEND = {
    'name': 'native',
    'can_edit_user': True,
    'can_edit_group': True,
    'can_edit_project': True,
    'can_edit_domain': True,
    'can_edit_role': True
}

OPENSTACK_HYPERVISOR_FEATURES = {
    'can_set_mount_point': False,
    # NOTE: as of Grizzly this is not yet supported in Nova so enabling this
    # setting will not do anything useful
    'can_encrypt_volumes': False
}

OPENSTACK_NEUTRON_NETWORK = {
    'enable_lb': False,
    'enable_firewall': False,
    'enable_quotas': True,
    'enable_vpn': False,
    # The profile_support option is used to detect if an external router can be
    # configured via the dashboard. When using specific plugins the
    # profile_support can be turned on if needed.
    'profile_support': None,
    #'profile_support': 'cisco',
}
API_RESULT_LIMIT = 1000
API_RESULT_PAGE_SIZE = 20
TIME_ZONE = "UTC"
POLICY_FILES_PATH = '/etc/openstack-dashboard'
POLICY_FILES = {
    'identity': 'keystone_policy.json',
    'compute': 'nova_policy.json'
}

LOGGING = {
    'version': 1,
    # When set to True this will disable all logging except
    # for loggers specified in this configuration dictionary. Note that
    # if nothing is specified here and disable_existing_loggers is True,
    # django.db.backends will still log unless it is disabled explicitly.
    'disable_existing_loggers': False,
    'handlers': {
        'null': {
            'level': 'DEBUG',
            'class': 'django.utils.log.NullHandler',
        },
        'console': {
            # Set the level to "DEBUG" for verbose output logging.
            'level': 'INFO',
            'class': 'logging.StreamHandler',
        },
    },
    'loggers': {
        # Logging from django.db.backends is VERY verbose, send to null
        # by default.
        'django.db.backends': {
            'handlers': ['null'],
            'propagate': False,
        },
        'requests': {
            'handlers': ['null'],
            'propagate': False,
        },
        'horizon': {
            'handlers': ['console'],
            'level': 'DEBUG',
            'propagate': False,
        },
        'openstack_dashboard': {
            'handlers': ['console'],
            'level': 'DEBUG',
            'propagate': False,
        },
        'novaclient': {
            'handlers': ['console'],
            'level': 'DEBUG',
            'propagate': False,
        },
        'cinderclient': {
            'handlers': ['console'],
            'level': 'DEBUG',
            'propagate': False,
        },
        'keystoneclient': {
            'handlers': ['console'],
            'level': 'DEBUG',
            'propagate': False,
        },
        'glanceclient': {
            'handlers': ['console'],
            'level': 'DEBUG',
            'propagate': False,
        },
        'neutronclient': {
            'handlers': ['console'],
            'level': 'DEBUG',
            'propagate': False,
        },
        'heatclient': {
            'handlers': ['console'],
            'level': 'DEBUG',
            'propagate': False,
        },
        'ceilometerclient': {
            'handlers': ['console'],
            'level': 'DEBUG',
            'propagate': False,
        },
        'troveclient': {
            'handlers': ['console'],
            'level': 'DEBUG',
            'propagate': False,
        },
        'swiftclient': {
            'handlers': ['console'],
            'level': 'DEBUG',
            'propagate': False,
        },
        'openstack_auth': {
            'handlers': ['console'],
            'level': 'DEBUG',
            'propagate': False,
        },
        'nose.plugins.manager': {
            'handlers': ['console'],
            'level': 'DEBUG',
            'propagate': False,
        },
        'django': {
            'handlers': ['console'],
            'level': 'DEBUG',
            'propagate': False,
        },
        'iso8601': {
            'handlers': ['null'],
            'propagate': False,
        },
    }
}

SECURITY_GROUP_RULES = {
    'all_tcp': {
        'name': 'ALL TCP',
        'ip_protocol': 'tcp',
        'from_port': '1',
        'to_port': '65535',
    },
    'all_udp': {
        'name': 'ALL UDP',
        'ip_protocol': 'udp',
        'from_port': '1',
        'to_port': '65535',
    },
    'all_icmp': {
        'name': 'ALL ICMP',
        'ip_protocol': 'icmp',
        'from_port': '-1',
        'to_port': '-1',
    },
    'ssh': {
        'name': 'SSH',
        'ip_protocol': 'tcp',
        'from_port': '22',
        'to_port': '22',
    },
    'smtp': {
        'name': 'SMTP',
        'ip_protocol': 'tcp',
        'from_port': '25',
        'to_port': '25',
    },
    'dns': {
        'name': 'DNS',
        'ip_protocol': 'tcp',
        'from_port': '53',
        'to_port': '53',
    },
    'http': {
        'name': 'HTTP',
        'ip_protocol': 'tcp',
        'from_port': '80',
        'to_port': '80',
    },
    'pop3': {
        'name': 'POP3',
        'ip_protocol': 'tcp',
        'from_port': '110',
        'to_port': '110',
    },
    'imap': {
        'name': 'IMAP',
        'ip_protocol': 'tcp',
        'from_port': '143',
        'to_port': '143',
    },
    'ldap': {
        'name': 'LDAP',
        'ip_protocol': 'tcp',
        'from_port': '389',
        'to_port': '389',
    },
    'https': {
        'name': 'HTTPS',
        'ip_protocol': 'tcp',
        'from_port': '443',
        'to_port': '443',
    },
    'smtps': {
        'name': 'SMTPS',
        'ip_protocol': 'tcp',
        'from_port': '465',
        'to_port': '465',
    },
    'imaps': {
        'name': 'IMAPS',
        'ip_protocol': 'tcp',
        'from_port': '993',
        'to_port': '993',
    },
    'pop3s': {
        'name': 'POP3S',
        'ip_protocol': 'tcp',
        'from_port': '995',
        'to_port': '995',
    },
    'ms_sql': {
        'name': 'MS SQL',
        'ip_protocol': 'tcp',
        'from_port': '1433',
        'to_port': '1433',
    },
    'mysql': {
        'name': 'MYSQL',
        'ip_protocol': 'tcp',
        'from_port': '3306',
        'to_port': '3306',
    },
    'rdp': {
        'name': 'RDP',
        'ip_protocol': 'tcp',
        'from_port': '3389',
        'to_port': '3389',
    },
}


Surfing the Internet & SSH connection to a cloud instance of Fedora 20 via Neutron GRE

February 4, 2014

When you first meet GRE tunnelling, you have to understand that GRE encapsulation adds 24 bytes of overhead, which gives rise to a number of problems; see http://www.cisco.com/en/US/tech/tk827/tk369/technologies_tech_note09186a0080093f1f.shtml

In particular, the Two Node (Controller+Compute) RDO Havana cluster on Fedora 20 hosts that I built per the guidelines at http://kashyapc.wordpress.com/2013/11/23/neutron-configs-for-a-two-node-openstack-havana-setup-on-fedora-20/ was a Neutron GRE cluster. Hence, for any instance set up on it (Fedora or Ubuntu), network communication problems show up immediately: `apt-get update` simply refuses to work on an Ubuntu Saucy Salamander Server instance (the default MTU for an Ethernet interface is 1500).

A lightweight X Windows environment (fluxbox) has also been set up on the Fedora 20 cloud instance for quick Internet access.

The solution is simply to set the MTU to 1400 on every cloud instance.

Place in /etc/rc.d/rc.local (or /etc/rc.local for Ubuntu Server) :-

#!/bin/sh
# Lower the MTU to leave room for the 24-byte GRE encapsulation overhead
ifconfig eth0 mtu 1400 up
exit 0

At least for the time being I see no problems with the LAN or with routing to the Internet (via a simple D-Link router) on F19, F20 and Ubuntu 13.10 Server cloud instances, nor on the LAN's hosts.
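
The effective MTU is easy to verify from inside an instance with "don't fragment" pings (a hedged check; the gateway address is illustrative). With an MTU of 1400 the largest unfragmented ICMP payload is 1400 minus 28 bytes of IP and ICMP headers, i.e. 1372:

 $ ping -M do -s 1372 -c 3 192.168.1.1    # should succeed
 $ ping -M do -s 1400 -c 3 192.168.1.1    # should fail with "message too long"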

For a better understanding of what this is all about, please view http://bderzhavets.blogspot.com/2014/01/setting-up-two-physical-node-openstack.html [1].

Launch the instance via:

[root@dfw02 ~(keystone_admin)]$ nova boot --flavor 2 --user-data=./myfile.txt --block_device_mapping vda=3cb671c2-06d8-4b3a-aca6-476b66fb309a:::0 VMF20RS

where

[root@dfw02 ~(keystone_admin)]$ cinder list
+--------------------------------------+-----------+--------------+------+-------------+----------+--------------------------------------+
|                  ID                  |   Status  | Display Name | Size | Volume Type | Bootable |             Attached to              |
+--------------------------------------+-----------+--------------+------+-------------+----------+--------------------------------------+
| 3cb671c2-06d8-4b3a-aca6-476b66fb309a | available | Fedora20VOL  |  9   |     None    |   true   |                                      |
| 49d5b872-3720-4915-ad1e-ec428e956558 |   in-use  |   VF20VOL    |  9   |     None    |   true   | 0e0b4f69-4cff-4423-ba9d-71c8eb53af16 |
| b4831720-941f-41a7-b747-1810df49b261 |   in-use  | UbuntuSALVG  |  7   |     None    |   true   | 5d750d44-0cad-4a02-8432-0ee10e988b2c |
+--------------------------------------+-----------+--------------+------+-------------+----------+--------------------------------------+

and

[root@dfw02 ~(keystone_admin)]$ cat myfile.txt

#cloud-config
password: mysecret
chpasswd: { expire: False }
ssh_pwauth: True

Then
[root@dfw02 ~(keystone_admin)]$ nova list

+--------------------------------------+---------------+-----------+------------+-------------+-----------------------------+
| ID                                   | Name          | Status    | Task State | Power State | Networks                    |
+--------------------------------------+---------------+-----------+------------+-------------+-----------------------------+
| 964fd0b0-b331-4b0c-a1d5-118bf8a40abf | CentOS6.5     | SUSPENDED | None       | Shutdown    | int=10.0.0.5, 192.168.1.105 |
| 3f2db906-567c-48b0-967e-799b2bffe277 | Cirros312     | SUSPENDED | None       | Shutdown    | int=10.0.0.2, 192.168.1.101 |
| 5d750d44-0cad-4a02-8432-0ee10e988b2c | UbuntuSaucySL | SUSPENDED | None       | Shutdown    | int=10.0.0.8, 192.168.1.112 |
| 0e0b4f69-4cff-4423-ba9d-71c8eb53af16 | VF20KVM       | SUSPENDED | None       | Shutdown    | int=10.0.0.7, 192.168.1.109 |
| 10306d33-9684-4dab-a017-266fb9ab496a | VMF20RS       | ACTIVE    | None       | Running     | int=10.0.0.4                |
+--------------------------------------+---------------+-----------+------------+-------------+-----------------------------+

[root@dfw02 ~(keystone_admin)]$ neutron port-list --device-id 10306d33-9684-4dab-a017-266fb9ab496a

+--------------------------------------+------+-------------------+----------------------------------------------------------------------------------+
| id                                   | name | mac_address       | fixed_ips                                                                        |
+--------------------------------------+------+-------------------+----------------------------------------------------------------------------------+
| fa982101-e2d9-4d21-be9d-7d485c792ce1 |      | fa:16:3e:57:e2:67 | {"subnet_id": "fa930cea-3d51-4cbe-a305-579f12aa53c0", "ip_address": "10.0.0.4"}  |
+--------------------------------------+------+-------------------+----------------------------------------------------------------------------------+

[root@dfw02 ~(keystone_admin)]$ neutron floatingip-create ext

Created a new floatingip:
+---------------------+--------------------------------------+
| Field               | Value                                |
+---------------------+--------------------------------------+
| fixed_ip_address    |                                      |
| floating_ip_address | 192.168.1.115                        |
| floating_network_id | 780ce2f3-2e6e-4881-bbac-857813f9a8e0 |
| id                  | d9f1b47d-c4b1-4865-92d2-c1d9964a35fb |
| port_id             |                                      |
| router_id           |                                      |
| tenant_id           | d0a0acfdb62b4cc8a2bfa8d6a08bb62f     |
+---------------------+--------------------------------------+

[root@dfw02 ~(keystone_admin)]$  neutron floatingip-associate d9f1b47d-c4b1-4865-92d2-c1d9964a35fb fa982101-e2d9-4d21-be9d-7d485c792ce1

[root@dfw02 ~(keystone_admin)]$ ping  192.168.1.115
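
If the ping gets no reply, the tenant's default security group is the usual suspect; a hedged sketch of opening ICMP and SSH with the nova CLI of that era:

 [root@dfw02 ~(keystone_admin)]$ nova secgroup-add-rule default icmp -1 -1 0.0.0.0/0
 [root@dfw02 ~(keystone_admin)]$ nova secgroup-add-rule default tcp 22 22 0.0.0.0/0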

Connect from the Controller to the Compute node via virt-manager and log into the text-mode console as "fedora" with the known password "mysecret". Set the MTU to 1400, create a new sudoer user, then reboot the instance. Now ssh from the Controller works in the traditional way:

[root@dfw02 ~(keystone_admin)]$ nova list | grep VMF20RS
| 10306d33-9684-4dab-a017-266fb9ab496a | VMF20RS       | SUSPENDED | resuming   | Shutdown    | int=10.0.0.4, 192.168.1.115 |

[root@dfw02 ~(keystone_admin)]$ nova list | grep VMF20RS

| 10306d33-9684-4dab-a017-266fb9ab496a | VMF20RS       | ACTIVE    | None       | Running     | int=10.0.0.4, 192.168.1.115 |

[root@dfw02 ~(keystone_admin)]$ ssh root@192.168.1.115

root@192.168.1.115's password:
Last login: Sat Feb  1 12:32:12 2014 from 192.168.1.127
[root@vmf20rs ~]# uname -a
Linux vmf20rs.novalocal 3.12.8-300.fc20.x86_64 #1 SMP Thu Jan 16 01:07:50 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux

[root@vmf20rs ~]# ifconfig
eth0: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1400
        inet 10.0.0.4  netmask 255.255.255.0  broadcast 10.0.0.255
        inet6 fe80::f816:3eff:fe57:e267  prefixlen 64  scopeid 0x20<link>
        ether fa:16:3e:57:e2:67  txqueuelen 1000  (Ethernet)
        RX packets 591788  bytes 770176441 (734.4 MiB)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 196309  bytes 20105918 (19.1 MiB)
        TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0

lo: flags=73<UP,LOOPBACK,RUNNING>  mtu 65536
        inet 127.0.0.1  netmask 255.0.0.0
        inet6 ::1  prefixlen 128  scopeid 0x10<host>
        loop  txqueuelen 0  (Local Loopback)
        RX packets 2  bytes 140 (140.0 B)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 2  bytes 140 (140.0 B)
        TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0

Text-mode Internet access works as well, for instance via "links" :-

Set up a lightweight X Windows environment on the F20 cloud instance and run the Fedora 20 cloud instance in a Spice session via virt-manager (the Controller connects to the Compute node via virt-manager). A Spice console and QXL video are specified in virt-manager, followed by `nova reboot VF20WRT`.

# yum install xorg-x11-server-Xorg xorg-x11-xdm fluxbox \
xorg-x11-drv-ati xorg-x11-drv-evdev xorg-x11-drv-fbdev \
xorg-x11-drv-intel xorg-x11-drv-mga xorg-x11-drv-nouveau \
xorg-x11-drv-openchrome xorg-x11-drv-qxl xorg-x11-drv-synaptics \
xorg-x11-drv-vesa xorg-x11-drv-vmmouse xorg-x11-drv-vmware \
xorg-x11-drv-wacom xorg-x11-font-utils xorg-x11-drv-modesetting \
xorg-x11-glamor xorg-x11-utils xterm

# yum install dejavu-fonts-common \
dejavu-sans-fonts \
dejavu-sans-mono-fonts \
dejavu-serif-fonts

# echo "exec fluxbox" > ~/.xinitrc
# startx

Fedora 20 cloud instance running in Spice Session via virt-manager with QXL 64 MB of VRAM  :-

Shutting down fluxbox :-

Done

Now run `nova suspend VF20WRT`

Connecting to Fedora 20 cloud instance via spicy from Compute node :-

Fluxbox on Ubuntu 13.10 Server Cloud Instance:-

References

1. http://bderzhavets.blogspot.com/2014/01/setting-up-two-physical-node-openstack.html


Setup Light Weight X Windows environment on Fedora 20 Cloud instance and running F20 cloud instance in Spice session via virt-manager or spicy

February 3, 2014

The following builds a lightweight X Windows environment on a Fedora 20 cloud instance and demonstrates running the same instance in a Spice session via virt-manager (the Controller connects to the Compute node via virt-manager). A Spice console and QXL video are specified in virt-manager, then the instance is rebooted via Nova.

This post follows up [1] http://bderzhavets.blogspot.ru/2014/01/setting-up-two-physical-node-openstack.html, getting things on cloud instances ready to work without an openstack-dashboard setup (the RDO Havana administrative web console).

Needless to say, Spice console behaviour with a running X server is much better than in a VNC session, where one X server effectively runs as a client of another on the Controller node (F20).

The spice-gtk source RPM is installed on both boxes of the cluster and rebuilt :-
$ rpm -iv spice-gtk-0.22-1.fc21.src.rpm
$ cd ~/rpmbuild/SPECS
$ sudo yum install intltool gtk2-devel usbredir-devel libusb1-devel libgudev1-devel pixman-devel openssl-devel  libjpeg-turbo-devel celt051-devel pulseaudio-libs-devel pygtk2-devel python-devel zlib-devel cyrus-sasl-devel libcacard-devel gobject-introspection-devel  dbus-glib-devel libacl-devel polkit-devel gtk-doc vala-tools gtk3-devel spice-protocol

$ rpmbuild -bb ./spice-gtk.spec
$ cd ../RPMS/x86_64

The RPMs that were built are then installed, because spicy is not yet on the system:

[boris@dfw02 x86_64]$  sudo yum install spice-glib-0.22-2.fc20.x86_64.rpm \
spice-glib-devel-0.22-2.fc20.x86_64.rpm \
spice-gtk-0.22-2.fc20.x86_64.rpm \
spice-gtk3-0.22-2.fc20.x86_64.rpm \
spice-gtk3-devel-0.22-2.fc20.x86_64.rpm \
spice-gtk3-vala-0.22-2.fc20.x86_64.rpm \
spice-gtk-debuginfo-0.22-2.fc20.x86_64.rpm \
spice-gtk-devel-0.22-2.fc20.x86_64.rpm  \
spice-gtk-python-0.22-2.fc20.x86_64.rpm \
spice-gtk-tools-0.22-2.fc20.x86_64.rpm

Next we install X Windows on F20 to run fluxbox (by the way, after hours of googling I was unable to find the required set of packages and just picked them up during a KDE environment installation via yum, which I actually don't need at all on a Fedora cloud instance):

# yum install xorg-x11-server-Xorg xorg-x11-xdm fluxbox \
xorg-x11-drv-ati xorg-x11-drv-evdev xorg-x11-drv-fbdev \
xorg-x11-drv-intel xorg-x11-drv-mga xorg-x11-drv-nouveau \
xorg-x11-drv-openchrome xorg-x11-drv-qxl xorg-x11-drv-synaptics \
xorg-x11-drv-vesa xorg-x11-drv-vmmouse xorg-x11-drv-vmware \
xorg-x11-drv-wacom xorg-x11-font-utils xorg-x11-drv-modesetting \
xorg-x11-glamor xorg-x11-utils xterm

Install some fonts :-

# yum install dejavu-fonts-common \
dejavu-sans-fonts \
dejavu-sans-mono-fonts \
dejavu-serif-fonts

We are ready to go :-

# echo "exec fluxbox" > ~/.xinitrc
# startx


Next:  $ yum -y install firefox
Then, via an xterm:
$ /usr/bin/firefox &

Fedora 20 cloud instance running in Spice Session via virt-manager with QXL (64 MB of VRAM)  :-

Connecting via spicy from Compute Node to same F20 instance :-


After port mapping:
# ssh -L 5900:localhost:5900 -N -f -l root 192.168.1.137
spicy may connect from the Controller to the Fedora 20 instance, as sketched below.
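
A hedged sketch (5900 matches the forwarded port in the ssh command above):

 $ spicy -h localhost -p 5900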


Running Internet browser on F19 instance via original router on the LAN

September 15, 2013


Running an F19 instance routed to the original LAN as the external network

September 15, 2013


Dashboard

September 15, 2013

