Setup Horizon Dashboard 2014.1 on F20 Havana Controller (after Firefox upgrade to 29.0-5)

May 3, 2014

It’s hard to know what the right thing is. Once you know, it’s hard not to do it.
                       Harry Fertig (Kingsley). The Confession (1999 film)

The recent Firefox upgrade to 29.0-5 on Fedora 20 causes login to the Dashboard Console to fail on a Havana F20 Controller set up per VNC Console in Dashboard on Two Node Neutron GRE+OVS F20 Cluster.

The procedure below backports the F21 packages python-django-horizon-2014.1-1, python-django-openstack-auth-1.1.5-1, and python-pbr-0.7.0-2 via manual install of the corresponding SRC.RPMs and invoking the rpmbuild utility to produce F20 packages. The hard thing to know is which packages to backport.

I had to perform an AIO RDO IceHouse setup via packstack on a specially created VM and run `rpm -qa | grep django` there to obtain the required list. Officially RDO Havana comes with F20, and it appears that the most recent Firefox upgrade breaks Horizon Dashboard, which is supposed to be maintained as a supported component for F20.

Download from Net :-

[boris@dfw02 Downloads]$ ls -l *.src.rpm

-rw-r--r--. 1 boris boris 4252988 May  3 08:21 python-django-horizon-2014.1-1.fc21.src.rpm

-rw-r--r--. 1 boris boris   47126 May  3 08:37 python-django-openstack-auth-1.1.5-1.fc21.src.rpm

-rw-r--r--. 1 boris boris   83761 May  3 08:48 python-pbr-0.7.0-2.fc21.src.rpm

Install src.rpms and build
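The sources have to be installed into the rpmbuild tree before building. Presumably something like this (a sketch; the yum-builddep call is my addition, install whatever BuildRequires rpmbuild reports missing):

[boris@dfw02 Downloads]$ rpm -ivh python-django-horizon-2014.1-1.fc21.src.rpm
[boris@dfw02 Downloads]$ rpm -ivh python-django-openstack-auth-1.1.5-1.fc21.src.rpm
[boris@dfw02 Downloads]$ rpm -ivh python-pbr-0.7.0-2.fc21.src.rpm
[boris@dfw02 Downloads]$ cd ~/rpmbuild/SPECS
[boris@dfw02 SPECS]$ sudo yum-builddep python-django-horizon.spec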

[boris@dfw02 SPECS]$ rpmbuild -bb python-django-openstack-auth.spec

[boris@dfw02 SPECS]$ rpmbuild -bb python-pbr.spec

Then install the built RPMs as a preventive step before the core package build:

[boris@dfw02 noarch]$ sudo yum install python-django-openstack-auth-1.1.5-1.fc20.noarch.rpm

[boris@dfw02 noarch]$ sudo yum install python-pbr-0.7.0-2.fc20.noarch.rpm

[boris@dfw02 noarch]$ cd -

/home/boris/rpmbuild/SPECS

Now the core build succeeds :-

[boris@dfw02 SPECS]$ rpmbuild -bb python-django-horizon.spec

[boris@dfw02 SPECS]$ cd ../RPMS/n*

[boris@dfw02 noarch]$ ls -l

total 6616

-rw-rw-r--. 1 boris boris 3293068 May  3 09:01 openstack-dashboard-2014.1-1.fc20.noarch.rpm

-rw-rw-r--. 1 boris boris  732020 May  3 09:01 openstack-dashboard-theme-2014.1-1.fc20.noarch.rpm

-rw-rw-r--. 1 boris boris  160868 May  3 08:51 python3-pbr-0.7.0-2.fc20.noarch.rpm

-rw-rw-r--. 1 boris boris  823332 May  3 09:01 python-django-horizon-2014.1-1.fc20.noarch.rpm

-rw-rw-r--. 1 boris boris 1548752 May  3 09:01 python-django-horizon-doc-2014.1-1.fc20.noarch.rpm

-rw-rw-r--. 1 boris boris   43944 May  3 08:39 python-django-openstack-auth-1.1.5-1.fc20.noarch.rpm

-rw-rw-r--. 1 boris boris  158204 May  3 08:51 python-pbr-0.7.0-2.fc20.noarch.rpm

[boris@dfw02 noarch]$ ls *.rpm > inst

[boris@dfw02 noarch]$ vi inst

[boris@dfw02 noarch]$ chmod u+x inst
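After editing, inst presumably reduces to a single yum transaction over the four freshly built packages (a sketch of its likely contents; the original file was edited by hand in vi, so the exact text is my guess based on the transcript below):

sudo yum install openstack-dashboard-2014.1-1.fc20.noarch.rpm \
openstack-dashboard-theme-2014.1-1.fc20.noarch.rpm \
python-django-horizon-2014.1-1.fc20.noarch.rpm \
python-django-horizon-doc-2014.1-1.fc20.noarch.rpm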

[boris@dfw02 noarch]$ ./inst

[sudo] password for boris:

Loaded plugins: langpacks, priorities, refresh-packagekit

Examining openstack-dashboard-2014.1-1.fc20.noarch.rpm: openstack-dashboard-2014.1-1.fc20.noarch

Marking openstack-dashboard-2014.1-1.fc20.noarch.rpm as an update to openstack-dashboard-2013.2.3-1.fc20.noarch

Examining openstack-dashboard-theme-2014.1-1.fc20.noarch.rpm: openstack-dashboard-theme-2014.1-1.fc20.noarch

Marking openstack-dashboard-theme-2014.1-1.fc20.noarch.rpm to be installed

Examining python-django-horizon-2014.1-1.fc20.noarch.rpm: python-django-horizon-2014.1-1.fc20.noarch

Marking python-django-horizon-2014.1-1.fc20.noarch.rpm as an update to python-django-horizon-2013.2.3-1.fc20.noarch

Examining python-django-horizon-doc-2014.1-1.fc20.noarch.rpm: python-django-horizon-doc-2014.1-1.fc20.noarch

Marking python-django-horizon-doc-2014.1-1.fc20.noarch.rpm to be installed

Resolving Dependencies

--> Running transaction check

---> Package openstack-dashboard.noarch 0:2013.2.3-1.fc20 will be updated

---> Package openstack-dashboard.noarch 0:2014.1-1.fc20 will be an update

---> Package openstack-dashboard-theme.noarch 0:2014.1-1.fc20 will be installed

---> Package python-django-horizon.noarch 0:2013.2.3-1.fc20 will be updated

---> Package python-django-horizon.noarch 0:2014.1-1.fc20 will be an update

---> Package python-django-horizon-doc.noarch 0:2014.1-1.fc20 will be installed

--> Finished Dependency Resolution

Dependencies Resolved

=========================================================================================================

Package                   Arch   Version          Repository                                       Size

=========================================================================================================

Installing:

openstack-dashboard-theme noarch 2014.1-1.fc20    /openstack-dashboard-theme-2014.1-1.fc20.noarch 1.5 M

python-django-horizon-doc noarch 2014.1-1.fc20    /python-django-horizon-doc-2014.1-1.fc20.noarch  24 M

Updating:

openstack-dashboard       noarch 2014.1-1.fc20    /openstack-dashboard-2014.1-1.fc20.noarch        14 M

python-django-horizon     noarch 2014.1-1.fc20    /python-django-horizon-2014.1-1.fc20.noarch     3.3 M

Transaction Summary

=========================================================================================================

Install  2 Packages

Upgrade  2 Packages

 

Total size: 42 M

Is this ok [y/d/N]: y

Downloading packages:

Running transaction check

Running transaction test

Transaction test succeeded

Running transaction

Updating   : python-django-horizon-2014.1-1.fc20.noarch                                            1/6

Updating   : openstack-dashboard-2014.1-1.fc20.noarch                                              2/6

warning: /etc/openstack-dashboard/local_settings created as /etc/openstack-dashboard/local_settings.rpmnew

Installing : openstack-dashboard-theme-2014.1-1.fc20.noarch                                        3/6

Installing : python-django-horizon-doc-2014.1-1.fc20.noarch                                        4/6

Cleanup    : openstack-dashboard-2013.2.3-1.fc20.noarch                                            5/6

Cleanup    : python-django-horizon-2013.2.3-1.fc20.noarch                                          6/6

Verifying  : openstack-dashboard-theme-2014.1-1.fc20.noarch                                        1/6

Verifying  : openstack-dashboard-2014.1-1.fc20.noarch                                              2/6

Verifying  : python-django-horizon-doc-2014.1-1.fc20.noarch                                        3/6

Verifying  : python-django-horizon-2014.1-1.fc20.noarch                                            4/6

Verifying  : openstack-dashboard-2013.2.3-1.fc20.noarch                                            5/6

Verifying  : python-django-horizon-2013.2.3-1.fc20.noarch                                          6/6

Installed:

openstack-dashboard-theme.noarch 0:2014.1-1.fc20    python-django-horizon-doc.noarch 0:2014.1-1.fc20

Updated:

openstack-dashboard.noarch 0:2014.1-1.fc20         python-django-horizon.noarch 0:2014.1-1.fc20

Complete!
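Note the local_settings.rpmnew warning above: the upgrade keeps the existing Havana config and drops the new Icehouse template beside it. A reasonable follow-up (my suggestion, not part of the original transcript) is to diff the two files, merge anything new, then restart Apache so the rebuilt Horizon is actually served:

# diff /etc/openstack-dashboard/local_settings /etc/openstack-dashboard/local_settings.rpmnew
# service httpd restart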

[root@dfw02 ~(keystone_admin)]$ rpm -qa | grep django

python-django-horizon-doc-2014.1-1.fc20.noarch

python-django-horizon-2014.1-1.fc20.noarch

python-django-1.6.3-1.fc20.noarch

python-django-nose-1.2-1.fc20.noarch

python-django-bash-completion-1.6.3-1.fc20.noarch

python-django-openstack-auth-1.1.5-1.fc20.noarch

python-django-appconf-0.6-2.fc20.noarch

python-django-compressor-1.3-2.fc20.noarch

Admin's reports regarding Cluster status (screenshots in the original post):

Ubuntu Trusty Server VM running


HowTo access metadata from RDO Havana Instance on Fedora 20

April 5, 2014

Per Direct_access_to_Nova_metadata

In an environment running Neutron, a request from your instance must traverse a number of steps:

1. From the instance to a router,
2. Through a NAT rule in the router namespace,
3. To an instance of the neutron-ns-metadata-proxy,
4. To the actual Nova metadata service

Reproducing Direct_access_to_Nova_metadata I was able to get only the list of available EC2 metadata, but not the values. However, the major concern is getting the values of the metadata described in the post Direct_access_to_Nova_metadata and also at the /openstack location. The latter seem to me no less important than those present in the EC2 list, and they are likewise not provided by it.

The commands run below verify that the Nova & Neutron setup was performed successfully; otherwise, passing the four steps 1,2,3,4 will fail and force you to analyse the corresponding log files (view References). It doesn't matter whether you set up the cloud environment manually or via RDO packstack.

Run on Controller Node :-

[root@dallas1 ~(keystone_admin)]$ ip netns list

qrouter-cb80b040-f13f-4a67-a39e-353b1c873a0d
qdhcp-166d9651-d299-47df-a5a1-b368e87b612f

Check the NAT rules in the cloud controller's router namespace; they should show that port 80 traffic to 169.254.169.254 is redirected to port 8700 on the host:

[root@dallas1 ~(keystone_admin)]$ ip netns exec qrouter-cb80b040-f13f-4a67-a39e-353b1c873a0d iptables -L -t nat | grep 169

REDIRECT   tcp  --  anywhere             169.254.169.254      tcp dpt:http redir ports 8700

Check routing table inside the router namespace:

[root@dallas1 ~(keystone_admin)]$ ip netns exec qrouter-cb80b040-f13f-4a67-a39e-353b1c873a0d ip r

default via 192.168.1.1 dev qg-8fbb6202-3d
10.0.0.0/24 dev qr-2dd1ba70-34  proto kernel  scope link  src 10.0.0.1
192.168.1.0/24 dev qg-8fbb6202-3d  proto kernel  scope link  src 192.168.1.100

[root@dallas1 ~(keystone_admin)]$ ip netns exec qrouter-cb80b040-f13f-4a67-a39e-353b1c873a0d netstat -na

Active Internet connections (servers and established)

Proto Recv-Q Send-Q Local Address           Foreign Address         State
tcp        0      0 0.0.0.0:8700            0.0.0.0:*               LISTEN   
Active UNIX domain sockets (servers and established)
Proto RefCnt Flags       Type       State         I-Node   Path

[root@dallas1 ~(keystone_admin)]$ ip netns exec qdhcp-166d9651-d299-47df-a5a1-b368e87b612f netstat -na

Active Internet connections (servers and established)
Proto Recv-Q Send-Q Local Address           Foreign Address         State
tcp        0      0 10.0.0.3:53             0.0.0.0:*               LISTEN
tcp6       0      0 fe80::f816:3eff:feef:53 :::*                    LISTEN
udp        0      0 10.0.0.3:53             0.0.0.0:*
udp        0      0 0.0.0.0:67              0.0.0.0:*
udp6       0      0 fe80::f816:3eff:feef:53 :::*
Active UNIX domain sockets (servers and established)
Proto RefCnt Flags       Type       State         I-Node   Path

[root@dallas1 ~(keystone_admin)]$ iptables-save | grep 8700

-A INPUT -p tcp -m multiport --dports 8700 -m comment --comment "001 metadata incoming" -j ACCEPT

[root@dallas1 ~(keystone_admin)]$ netstat -lntp | grep 8700

tcp        0      0 0.0.0.0:8700            0.0.0.0:*               LISTEN      2830/python  

[root@dallas1 ~(keystone_admin)]$ ps -ef | grep 2830
nova      2830     1  0 09:41 ?        00:00:57 /usr/bin/python /usr/bin/nova-api --logfile /var/log/nova/api.log
nova      2856  2830  0 09:41 ?        00:00:00 /usr/bin/python /usr/bin/nova-api --logfile /var/log/nova/api.log
nova      2874  2830  0 09:41 ?        00:00:09 /usr/bin/python /usr/bin/nova-api --logfile /var/log/nova/api.log
nova      2875  2830  0 09:41 ?        00:00:01 /usr/bin/python /usr/bin/nova-api --logfile /var/log/nova/api.log

1. At this point you should be able (inside any running Havana instance) to launch your browser to

http://169.254.169.254/openstack/latest (not EC2)

The response will be: meta_data.json password vendor_data.json

If a lightweight X environment is unavailable, then use the text browser "links".

 

 

What is curl: http://curl.haxx.se/docs/faq.html#What_is_cURL

Now you should be able to run on an F20 instance:

[root@vf20rs0404 ~] # curl http://169.254.169.254/openstack/latest/meta_data.json | tee meta_data.json

  % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                 Dload  Upload   Total   Spent    Left  Speed
100  1286  100  1286    0     0   1109      0  0:00:01  0:00:01 --:--:--  1127

. . . . . . . .

"uuid": "10142280-44a2-4830-acce-f12f3849cb32",

"availability_zone": "nova",

"hostname": "vf20rs0404.novalocal",

"launch_index": 0,

"public_keys": {"key2": "ssh-rsa . . . . .  Generated by Nova\n"},

"name": "VF20RS0404"

On another instance (in my case Ubuntu 14.04 )

root@ubuntutrs0407:~# curl http://169.254.169.254/openstack/latest/meta_data.json | tee meta_data.json

  % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                 Dload  Upload   Total   Spent    Left  Speed
100  1292  100  1292    0     0    444      0  0:00:02  0:00:02 --:--:--   446

{"random_seed": "…",

"uuid": "8c79e60c-4f1d-44e5-8446-b42b4d94c4fc",

"availability_zone": "nova",

"hostname": "ubuntutrs0407.novalocal",

"launch_index": 0,

"public_keys": {"key2": "ssh-rsa …. Generated by Nova\n"},

"name": "UbuntuTRS0407"}
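From inside either instance a single field can be pulled out of meta_data.json with python's json module (a sketch; jq would also work but may be absent in a minimal image):

$ curl -s http://169.254.169.254/openstack/latest/meta_data.json | python -c 'import json,sys; print(json.load(sys.stdin)["uuid"])'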

Running VMs on Compute node:-

[root@dallas1 ~(keystone_boris)]$ nova list

+--------------------------------------+---------------+-----------+------------+-------------+-----------------------------+

| ID                                   | Name          | Status    | Task State | Power State | Networks                    |

+--------------------------------------+---------------+-----------+------------+-------------+-----------------------------+

| d0f947b1-ff6a-4ff0-b858-b63a3d07cca3 | UbuntuTRS0405 | SUSPENDED | None       | Shutdown    | int=10.0.0.7, 192.168.1.106 |

| 8c79e60c-4f1d-44e5-8446-b42b4d94c4fc | UbuntuTRS0407 | ACTIVE    | None       | Running     | int=10.0.0.6, 192.168.1.107 |

| 8775924c-dbbd-4fbb-afb8-7e38d9ac7615 | VF20RS037     | SUSPENDED | None       | Shutdown    | int=10.0.0.2, 192.168.1.115 |

| d22a2376-33da-4a0e-a066-d334bd2e511d | VF20RS0402    | SUSPENDED | None       | Shutdown    | int=10.0.0.4, 192.168.1.103 |

| 10142280-44a2-4830-acce-f12f3849cb32 | VF20RS0404    | ACTIVE    | None       | Running     | int=10.0.0.5, 192.168.1.105 |

+--------------------------------------+---------------+-----------+------------+-------------+-----------------------------+

Launching a browser at http://169.254.169.254/openstack/latest/meta_data.json on another Two Node Neutron GRE+OVS F20 Cluster; the output is sent directly to the browser.

2. I have provided some information about the OpenStack metadata API, which is available at /openstack; if you are concerned about the EC2 metadata API, the browser should be launched at http://169.254.169.254/latest/meta-data/

This allows you to get any of the displayed parameters.

For instance (screenshot in the original post) :-

OR via CLI :-

ubuntu@ubuntutrs0407:~$ curl  http://169.254.169.254/latest/meta-data/instance-id

i-000000a4

ubuntu@ubuntutrs0407:~$ curl  http://169.254.169.254/latest/meta-data/public-hostname

ubuntutrs0407.novalocal

ubuntu@ubuntutrs0407:~$ curl  http://169.254.169.254/latest/meta-data/public-ipv4

192.168.1.107
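The whole top-level EC2 list can be walked in one go (a sketch; keys that end in "/" are nested and would need recursion):

$ for key in $(curl -s http://169.254.169.254/latest/meta-data/); do echo -n "$key: "; curl -s http://169.254.169.254/latest/meta-data/$key; echo; done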

To verify the instance-id, launch virt-manager connected to the Compute Node (screenshots in the original post); it shows the same value "000000a4".

Another option in text mode is the "links" browser.

$ ssh -l ubuntu -i key2.pem 192.168.1.109

Inside Ubuntu 14.04 instance  :-

# apt-get -y install links

# links

Press ESC to get to the menu (screenshots in the original post).

References

1. https://ask.openstack.org/en/question/10140/wget-http1692541692542009-04-04meta-datainstance-id-error-404/


Setup Dashboard&VNC console on Two Node Controller&Compute Neutron GRE+OVS+Gluster Fedora 20 Cluster

March 13, 2014

This post follows up Gluster 3.4.2 on Two Node Controller&Compute Neutron GRE+OVS Fedora 20 Cluster; in particular, it can be performed after the Basic Setup to make system management more comfortable than CLI-only.

It's also easy to create an instance via the Dashboard, placing in the post-creation panel a customization script (the analog of --user-data):

#cloud-config
password: mysecret
chpasswd: { expire: False }
ssh_pwauth: True

This is needed to be able to log in as "fedora" and to set MTU=1457 inside the VM (GRE tunneling).
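The MTU change itself is not in the snippet above; one way to fold it into the same cloud-config is a runcmd section (my assumption, the original post does not show this, and the device name eth0 is assumed):

#cloud-config
password: mysecret
chpasswd: { expire: False }
ssh_pwauth: True
runcmd:
 - ip link set dev eth0 mtu 1457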

A key-pair submitted upon creation works like this :

[root@dfw02 Downloads(keystone_boris)]$ ssh -l fedora -i key2.pem  192.168.1.109
Last login: Sat Mar 15 07:47:45 2014

[fedora@vf20rs015 ~]$ uname -a
Linux vf20rs015.novalocal 3.13.6-200.fc20.x86_64 #1 SMP Fri Mar 7 17:02:28 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux

[fedora@vf20rs015 ~]$ ifconfig
eth0: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1457
inet 40.0.0.7  netmask 255.255.255.0  broadcast 40.0.0.255
inet6 fe80::f816:3eff:fe1e:1de6  prefixlen 64  scopeid 0x20<link>
ether fa:16:3e:1e:1d:e6  txqueuelen 1000  (Ethernet)
RX packets 225  bytes 25426 (24.8 KiB)
RX errors 0  dropped 0  overruns 0  frame 0
TX packets 221  bytes 23674 (23.1 KiB)
TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0

lo: flags=73<UP,LOOPBACK,RUNNING>  mtu 65536
inet 127.0.0.1  netmask 255.0.0.0
inet6 ::1  prefixlen 128  scopeid 0x10<host>
loop  txqueuelen 0  (Local Loopback)
RX packets 0  bytes 0 (0.0 B)
RX errors 0  dropped 0  overruns 0  frame 0
TX packets 0  bytes 0 (0.0 B)
TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0

The setup described at the link mentioned above was originally suggested by Kashyap Chamarthy for VMs running on a non-default Libvirt subnet. My contribution was an attempt to reproduce this setup on physical F20 boxes and an arbitrary network not connected to Libvirt: preventive updates to the mysql.user table, which allowed remote connections for nova-compute and neutron-openvswitch-agent from Compute to Controller, and changes to /etc/sysconfig/iptables to enable the Gluster 3.4.2 setup on F20 systems (view http://bderzhavets.blogspot.com/2014/03/setup-gluster-342-on-two-node-neutron.html). I have also fixed a typo in dhcp_agent.ini (the reference to "dnsmasq.conf") and added the line "dhcp-option=26,1454" to dnsmasq.conf. The updated configuration files are critical for launching an instance without a "Customization script" and allow working with the usual ssh keypair; when these updates are done, the instance gets created with MTU 1454. View [2]. This setup is pretty much focused on the ability to transfer neutron metadata from Controller to Compute F20 nodes and is done manually with no answer-files. It stops exactly at the point when `nova boot ..` loads an instance on Compute, which obtains an internal IP via DHCP running on the Controller and may be assigned a floating IP to communicate with the Internet. No attempt to set up the dashboard was made, since the core target was neutron GRE+OVS functionality (just a proof of concept).

Setup

- Controller node: Nova, Keystone, Cinder, Glance, Neutron (using Open vSwitch plugin and GRE tunneling), Dashboard

- Compute node: Nova (nova-compute), Neutron (openvswitch-agent)

dfw02.localdomain   -  Controller (192.168.1.127)
dfw01.localdomain   -  Compute    (192.168.1.137)

1. The first step follows http://docs.openstack.org/havana/install-guide/install/yum/content/install_dashboard.html and http://docs.openstack.org/havana/install-guide/install/yum/content/dashboard-session-database.html. The sequence of actions per the manuals above :-

# yum install memcached python-memcached mod_wsgi openstack-dashboard

Modify the value of CACHES['default']['LOCATION'] in /etc/openstack-dashboard/local_settings to match the one set in /etc/sysconfig/memcached. Open /etc/openstack-dashboard/local_settings and look for this block:

CACHES = {
    'default': {
        'BACKEND': 'django.core.cache.backends.memcached.MemcachedCache',
        'LOCATION': '127.0.0.1:11211'
    }
}

Update the ALLOWED_HOSTS in local_settings.py to include the addresses you wish to access the dashboard from. Edit /etc/openstack-dashboard/local_settings:

ALLOWED_HOSTS = ['Controller-IP', 'my-desktop']

This guide assumes that you are running the Dashboard on the controller node. You can easily run the dashboard on a separate server, by changing the appropriate settings in local_settings.py. Edit /etc/openstack-dashboard/local_settings and change OPENSTACK_HOST to the hostname of your Identity Service:

OPENSTACK_HOST = "Controller-IP"

Start the Apache web server and memcached:

# service httpd restart
# systemctl start memcached
# systemctl enable memcached

To configure the MySQL database, create the dash database:

mysql> CREATE DATABASE dash;

Create a MySQL user for the newly created dash database that has full control of the database. Replace DASH_DBPASS with a password for the new user:

mysql> GRANT ALL ON dash.* TO 'dash'@'%' IDENTIFIED BY 'fedora';

mysql> GRANT ALL ON dash.* TO 'dash'@'localhost' IDENTIFIED BY 'fedora';

In the local_settings file /etc/openstack-dashboard/local_settings

SESSION_ENGINE = 'django.contrib.sessions.backends.db'

DATABASES = {
    'default': {
        # Database configuration here
        'ENGINE': 'django.db.backends.mysql',
        'NAME': 'dash',
        'USER': 'dash',
        'PASSWORD': 'fedora',
        'HOST': 'Controller-IP',
        'default-character-set': 'utf8'
    }
}

After configuring the local_settings as shown, you can run the manage.py syncdb command to populate this newly-created database.

# /usr/share/openstack-dashboard/manage.py syncdb

Attempting to run syncdb you might get an error like 'dash'@'yourhost' is not authorized to do it with password 'YES'. Then (for instance, in my case):

# mysql -u root -p

MariaDB [(none)]> SELECT User, Host, Password FROM mysql.user;

MariaDB [(none)]> insert into mysql.user (User,Host,Password) values ('dash','dallas1.localdomain',' ');

Query OK, 1 row affected, 4 warnings (0.00 sec)

MariaDB [(none)]> UPDATE mysql.user SET Password = PASSWORD('fedora')
    -> WHERE User = 'dash';

Query OK, 1 row affected (0.00 sec) Rows matched: 3  Changed: 1  Warnings: 0

MariaDB [(none)]> SELECT User, Host, Password FROM mysql.user;

.   .   .   .

| dash     | %                   | *C9E492EC67084E4255B200FD34BDF396E3CE1A36 |
| dash     | localhost           | *C9E492EC67084E4255B200FD34BDF396E3CE1A36 |
| dash     | dallas1.localdomain | *C9E492EC67084E4255B200FD34BDF396E3CE1A36 |
+----------+---------------------+-------------------------------------------+

20 rows in set (0.00 sec)

That is exactly the same issue which comes up when starting the openstack-nova-scheduler & openstack-nova-conductor services during basic installation of the Controller on Fedora 20. View Basic setup, in particular :-

Set table mysql.user in proper status

shell> mysql -u root -p
mysql> insert into mysql.user (User,Host,Password) values ('nova','dfw02.localdomain',' ');
mysql> UPDATE mysql.user SET Password = PASSWORD('nova')
    ->    WHERE User = 'nova';
mysql> FLUSH PRIVILEGES;

Start, enable nova-{api,scheduler,conductor} services

  $ for i in start enable status; \
    do systemctl $i openstack-nova-api; done

  $ for i in start enable status; \
    do systemctl $i openstack-nova-scheduler; done

  $ for i in start enable status; \
    do systemctl $i openstack-nova-conductor; done

 # service httpd restart

Finally on Controller (dfw02  – 192.168.1.127)  file /etc/openstack-dashboard/local_settings  looks like http://bderzhavets.wordpress.com/2014/03/14/sample-of-etcopenstack-dashboardlocal_settings/

At this point the dashboard is functional, but instance console output is unavailable via the dashboard. I didn't get any error code, just:

Instance Detail: VF20RS03

Overview / Log / Console

Loading…

2. The second step is skipped in the manual mentioned above, however it is known to experienced persons: https://ask.openstack.org/en/question/520/vnc-console-in-dashboard-fails-to-connect-ot-server-code-1006/

**************************************

Controller  dfw02 – 192.168.1.127

**************************************

# ssh-keygen (Hit Enter to accept all of the defaults)

# ssh-copy-id -i ~/.ssh/id_rsa.pub root@dfw01

[root@dfw02 ~(keystone_boris)]$ ssh -L 5900:127.0.0.1:5900 -N -f -l root 192.168.1.137
[root@dfw02 ~(keystone_boris)]$ ssh -L 5901:127.0.0.1:5901 -N -f -l root 192.168.1.137
[root@dfw02 ~(keystone_boris)]$ ssh -L 5902:127.0.0.1:5902 -N -f -l root 192.168.1.137
[root@dfw02 ~(keystone_boris)]$ ssh -L 5903:127.0.0.1:5903 -N -f -l root 192.168.1.137
[root@dfw02 ~(keystone_boris)]$ ssh -L 5904:127.0.0.1:5904 -N -f -l root 192.168.1.137
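The same five tunnels, one per VNC display 5900-5904, can be opened with a loop (an equivalent sketch):

[root@dfw02 ~(keystone_boris)]$ for port in $(seq 5900 5904); do ssh -L ${port}:127.0.0.1:${port} -N -f -l root 192.168.1.137; done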

Compute's IP is 192.168.1.137

Update /etc/nova/nova.conf:

novncproxy_host=0.0.0.0

novncproxy_port=6080

novncproxy_base_url=http://192.168.1.127:6080/vnc_auto.html
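The same three values may be applied with openstack-config, in the style used elsewhere in this series (a sketch):

# openstack-config --set /etc/nova/nova.conf DEFAULT novncproxy_host 0.0.0.0
# openstack-config --set /etc/nova/nova.conf DEFAULT novncproxy_port 6080
# openstack-config --set /etc/nova/nova.conf DEFAULT novncproxy_base_url http://192.168.1.127:6080/vnc_auto.html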

[root@dfw02 ~(keystone_admin)]$ systemctl enable openstack-nova-consoleauth.service
ln -s '/usr/lib/systemd/system/openstack-nova-consoleauth.service' '/etc/systemd/system/multi-user.target.wants/openstack-nova-consoleauth.service'
[root@dfw02 ~(keystone_admin)]$ systemctl enable openstack-nova-novncproxy.service
ln -s '/usr/lib/systemd/system/openstack-nova-novncproxy.service' '/etc/systemd/system/multi-user.target.wants/openstack-nova-novncproxy.service'

[root@dfw02 ~(keystone_admin)]$ systemctl start openstack-nova-consoleauth.service
[root@dfw02 ~(keystone_admin)]$ systemctl start openstack-nova-novncproxy.service

[root@dfw02 ~(keystone_admin)]$ systemctl status openstack-nova-consoleauth.service

openstack-nova-consoleauth.service – OpenStack Nova VNC console auth Server

Loaded: loaded (/usr/lib/systemd/system/openstack-nova-consoleauth.service; enabled)

Active: active (running) since Thu 2014-03-13 19:14:45 MSK; 20min ago

Main PID: 14679 (nova-consoleaut)

CGroup: /system.slice/openstack-nova-consoleauth.service

└─14679 /usr/bin/python /usr/bin/nova-consoleauth --logfile /var/log/nova/consoleauth.log

Mar 13 19:14:45 dfw02.localdomain systemd[1]: Started OpenStack Nova VNC console auth Server.

[root@dfw02 ~(keystone_admin)]$ systemctl status openstack-nova-novncproxy.service

openstack-nova-novncproxy.service – OpenStack Nova NoVNC Proxy Server

Loaded: loaded (/usr/lib/systemd/system/openstack-nova-novncproxy.service; enabled)

Active: active (running) since Thu 2014-03-13 19:14:58 MSK; 20min ago

Main PID: 14762 (nova-novncproxy)

CGroup: /system.slice/openstack-nova-novncproxy.service

├─14762 /usr/bin/python /usr/bin/nova-novncproxy --web /usr/share/novnc/

└─17166 /usr/bin/python /usr/bin/nova-novncproxy --web /usr/share/novnc/

Mar 13 19:23:54 dfw02.localdomain nova-novncproxy[14762]: 20: 127.0.0.1: Path: '/websockify'

Mar 13 19:23:54 dfw02.localdomain nova-novncproxy[14762]: 20: connecting to: 127.0.0.1:5900

Mar 13 19:23:55 dfw02.localdomain nova-novncproxy[14762]: 19: 127.0.0.1: ignoring empty handshake

Mar 13 19:24:31 dfw02.localdomain nova-novncproxy[14762]: 22: 127.0.0.1: ignoring socket not ready

Mar 13 19:24:32 dfw02.localdomain nova-novncproxy[14762]: 23: 127.0.0.1: Plain non-SSL (ws://) WebSocket connection

Mar 13 19:24:32 dfw02.localdomain nova-novncproxy[14762]: 23: 127.0.0.1: Version hybi-13, base64: 'True'

Mar 13 19:24:32 dfw02.localdomain nova-novncproxy[14762]: 23: 127.0.0.1: Path: '/websockify'

Mar 13 19:24:32 dfw02.localdomain nova-novncproxy[14762]: 23: connecting to: 127.0.0.1:5901

Mar 13 19:24:37 dfw02.localdomain nova-novncproxy[14762]: 26: 127.0.0.1: ignoring empty handshake

Mar 13 19:24:37 dfw02.localdomain nova-novncproxy[14762]: 25: 127.0.0.1: ignoring empty handshake

Hint: Some lines were ellipsized, use -l to show in full.

[root@dfw02 ~(keystone_admin)]$ netstat -lntp | grep 6080

tcp        0      0 0.0.0.0:6080            0.0.0.0:*               LISTEN      14762/python

*********************************

Compute  dfw01 – 192.168.1.137

*********************************

Update  /etc/nova/nova.conf:

vnc_enabled=True

novncproxy_base_url=http://192.168.1.127:6080/vnc_auto.html

vncserver_listen=0.0.0.0

vncserver_proxyclient_address=192.168.1.137

# systemctl restart openstack-nova-compute

Finally :-

[root@dfw02 ~(keystone_admin)]$ systemctl list-units | grep nova

openstack-nova-api.service                      loaded active running   OpenStack Nova API Server
openstack-nova-conductor.service           loaded active running   OpenStack Nova Conductor Server
openstack-nova-consoleauth.service       loaded active running   OpenStack Nova VNC console auth Server
openstack-nova-novncproxy.service         loaded active running   OpenStack Nova NoVNC Proxy Server
openstack-nova-scheduler.service            loaded active running   OpenStack Nova Scheduler Server

[root@dfw02 ~(keystone_admin)]$ nova-manage service list

Binary           Host                                 Zone             Status     State Updated_At

nova-scheduler   dfw02.localdomain                    internal         enabled    :-)   2014-03-13 16:56:54

nova-conductor   dfw02.localdomain                    internal         enabled    :-)   2014-03-13 16:56:54

nova-compute     dfw01.localdomain                     nova             enabled    :-)   2014-03-13 16:56:45

nova-consoleauth dfw02.localdomain                   internal         enabled    :-)   2014-03-13 16:56:47

[root@dfw02 ~(keystone_admin)]$ neutron agent-list

+--------------------------------------+--------------------+-------------------+-------+----------------+

| id                                   | agent_type         | host              | alive | admin_state_up |

+--------------------------------------+--------------------+-------------------+-------+----------------+

| 037b985d-7a7d-455b-8536-76eed40b0722 | L3 agent           | dfw02.localdomain | :-)   | True           |

| 22438ee9-b4ea-4316-9492-eb295288f61a | Open vSwitch agent | dfw02.localdomain | :-)   | True           |

| 76ed02e2-978f-40d0-879e-1a2c6d1f7915 | DHCP agent         | dfw02.localdomain | :-)   | True           |

| 951632a3-9744-4ff4-a835-c9f53957c617 | Open vSwitch agent | dfw01.localdomain | :-)   | True           |

+--------------------------------------+--------------------+-------------------+-------+----------------+

Users console views (screenshots in the original post) :-

Admin Console views (screenshots in the original post) :-

[root@dallas2 ~]# service openstack-nova-compute status -l
Redirecting to /bin/systemctl status  -l openstack-nova-compute.service
openstack-nova-compute.service – OpenStack Nova Compute Server
Loaded: loaded (/usr/lib/systemd/system/openstack-nova-compute.service; enabled)
Active: active (running) since Thu 2014-03-20 16:29:07 MSK; 6h ago
Main PID: 1685 (nova-compute)
CGroup: /system.slice/openstack-nova-compute.service
├─1685 /usr/bin/python /usr/bin/nova-compute --logfile /var/log/nova/compute.log
└─3552 /usr/sbin/glusterfs --volfile-id=cinder-volumes012 --volfile-server=192.168.1.130 /var/lib/nova/mnt/1c9688348ab38662e3ac8fb121077d34

Mar 20 22:20:15 dallas2.localdomain sudo[11210]: nova : TTY=unknown ; PWD=/ ; USER=root ; COMMAND=/bin/nova-rootwrap /etc/nova/rootwrap.conf ip link set qvb372fd13e-d2 up
Mar 20 22:20:15 dallas2.localdomain sudo[11213]: nova : TTY=unknown ; PWD=/ ; USER=root ; COMMAND=/bin/nova-rootwrap /etc/nova/rootwrap.conf ip link set qvb372fd13e-d2 promisc on
Mar 20 22:20:16 dallas2.localdomain sudo[11216]: nova : TTY=unknown ; PWD=/ ; USER=root ; COMMAND=/bin/nova-rootwrap /etc/nova/rootwrap.conf ip link set qvo372fd13e-d2 up
Mar 20 22:20:16 dallas2.localdomain sudo[11219]: nova : TTY=unknown ; PWD=/ ; USER=root ; COMMAND=/bin/nova-rootwrap /etc/nova/rootwrap.conf ip link set qvo372fd13e-d2 promisc on
Mar 20 22:20:16 dallas2.localdomain sudo[11222]: nova : TTY=unknown ; PWD=/ ; USER=root ; COMMAND=/bin/nova-rootwrap /etc/nova/rootwrap.conf ip link set qbr372fd13e-d2 up
Mar 20 22:20:16 dallas2.localdomain sudo[11225]: nova : TTY=unknown ; PWD=/ ; USER=root ; COMMAND=/bin/nova-rootwrap /etc/nova/rootwrap.conf brctl addif qbr372fd13e-d2 qvb372fd13e-d2
Mar 20 22:20:16 dallas2.localdomain sudo[11228]: nova : TTY=unknown ; PWD=/ ; USER=root ; COMMAND=/bin/nova-rootwrap /etc/nova/rootwrap.conf ovs-vsctl -- --may-exist add-port br-int qvo372fd13e-d2 -- set Interface qvo372fd13e-d2 external-ids:iface-id=372fd13e-d283-43ba-9a4e-a1684660f4ce external-ids:iface-status=active external-ids:attached-mac=fa:16:3e:0d:4a:12 external-ids:vm-uuid=9679d849-7e4b-4cb5-b644-43279d53f01b
Mar 20 22:20:16 dallas2.localdomain ovs-vsctl[11230]: ovs|00001|vsctl|INFO|Called as /bin/ovs-vsctl -- --may-exist add-port br-int qvo372fd13e-d2 -- set Interface qvo372fd13e-d2 external-ids:iface-id=372fd13e-d283-43ba-9a4e-a1684660f4ce external-ids:iface-status=active external-ids:attached-mac=fa:16:3e:0d:4a:12 external-ids:vm-uuid=9679d849-7e4b-4cb5-b644-43279d53f01b
Mar 20 22:20:16 dallas2.localdomain sudo[11244]: nova : TTY=unknown ; PWD=/ ; USER=root ; COMMAND=/bin/nova-rootwrap /etc/nova/rootwrap.conf tee /sys/class/net/tap372fd13e-d2/brport/hairpin_mode
Mar 20 22:25:53 dallas2.localdomain nova-compute[1685]: 2014-03-20 22:25:53.102 1685 WARNING nova.compute.manager [-] Found 5 in the database and 2 on the hypervisor.

[root@dallas2 ~]# ovs-vsctl show
3e7422a7-8828-4e7c-b595-8a5b6504bc08
    Bridge br-int
        Port "qvod0e086e7-32"
            tag: 1
            Interface "qvod0e086e7-32"
        Port br-int
            Interface br-int
                type: internal
        Port "qvo372fd13e-d2"
            tag: 1
            Interface "qvo372fd13e-d2"
        Port "qvob49ecf5e-8e"
            tag: 1
            Interface "qvob49ecf5e-8e"
        Port "qvo756757a8-40"
            tag: 1
            Interface "qvo756757a8-40"
        Port patch-tun
            Interface patch-tun
                type: patch
                options: {peer=patch-int}
        Port "qvo4d1f9115-03"
            tag: 1
            Interface "qvo4d1f9115-03"
    Bridge br-tun
        Port "gre-1"
            Interface "gre-1"
                type: gre
                options: {in_key=flow, local_ip="192.168.1.140", out_key=flow, remote_ip="192.168.1.130"}
        Port br-tun
            Interface br-tun
                type: internal
        Port patch-int
            Interface patch-int
                type: patch
                options: {peer=patch-tun}
    ovs_version: "2.0.0"

[root@dallas1 ~(keystone_boris)]$ nova list
+--------------------------------------+--------------+-----------+------------+-------------+-----------------------------+
| ID                                   | Name         | Status    | Task State | Power State | Networks                    |
+--------------------------------------+--------------+-----------+------------+-------------+-----------------------------+
| 690d29ae-4c3c-4b2e-b2df-e4d654668336 | UbuntuSRS007 | SUSPENDED | None       | Shutdown    | int=10.0.0.6, 192.168.1.103 |
| 9c791573-1238-44c4-a103-6873fddc17d1 | UbuntuTS019  | SUSPENDED | None       | Shutdown    | int=10.0.0.5, 192.168.1.107 |
| 70db20be-efa6-4a96-bf39-6250962784a3 | VF20RS015    | SUSPENDED | None       | Shutdown    | int=10.0.0.7, 192.168.1.101 |
| 3c888e6a-dd4f-489a-82bb-1f1f9ce6a696 | VF20RS017    | ACTIVE    | None       | Running     | int=10.0.0.4, 192.168.1.102 |
| 9679d849-7e4b-4cb5-b644-43279d53f01b | VF20RS024    | ACTIVE    | None       | Running     | int=10.0.0.2, 192.168.1.105 |
+--------------------------------------+--------------+-----------+------------+-------------+-----------------------------+
[root@dallas1 ~(keystone_boris)]$ nova show 9679d849-7e4b-4cb5-b644-43279d53f01b
+--------------------------------------+----------------------------------------------------------+
| Property                             | Value                                                    |
+--------------------------------------+----------------------------------------------------------+
| status                               | ACTIVE                                                   |
| updated                              | 2014-03-20T18:20:16Z                                     |
| OS-EXT-STS:task_state                | None                                                     |
| key_name                             | key2                                                     |
| image                                | Attempt to boot from volume – no image supplied          |
| int network                          | 10.0.0.2, 192.168.1.105                                  |
| hostId                               | 8477c225f2a46d84dcd609798bf5ee71cc8d20b44256b3b2a54b723f |
| OS-EXT-STS:vm_state                  | active                                                   |
| OS-SRV-USG:launched_at               | 2014-03-20T18:20:16.000000                               |
| flavor                               | m1.small (2)                                             |
| id                                   | 9679d849-7e4b-4cb5-b644-43279d53f01b                     |
| security_groups                      | [{u'name': u'default'}]                                  |
| OS-SRV-USG:terminated_at             | None                                                     |
| user_id                              | df4a984ce2f24848a6b84aaa99e296f1                         |
| name                                 | VF20RS024                                                |
| created                              | 2014-03-20T18:20:10Z                                     |
| tenant_id                            | e896be65e94a4893b870bc29ba86d7eb                         |
| OS-DCF:diskConfig                    | MANUAL                                                   |
| metadata                             | {}                                                       |
| os-extended-volumes:volumes_attached | [{u'id': u'abc0f5b8-5144-42b7-b49f-a42a20ddd88f'}]       |
| accessIPv4                           |                                                          |
| accessIPv6                           |                                                          |
| progress                             | 0                                                        |
| OS-EXT-STS:power_state               | 1                                                        |
| OS-EXT-AZ:availability_zone          | nova                                                     |
| config_drive                         |                                                          |
+--------------------------------------+----------------------------------------------------------+
[root@dallas1 ~(keystone_boris)]$ ls -l /FDR/Replicate
total 8383848
-rw-rw-rw-. 2 root root 5368709120 Mar 17 21:58 volume-4b807fe8-dcd2-46eb-b7dd-6ab10641c32a
-rw-rw-rw-. 2 root root 5368709120 Mar 20 18:26 volume-4df4fadf-1be9-4a09-b51c-723b8a6b9c23
-rw-rw-rw-. 2 root root 5368709120 Mar 19 13:46 volume-6ccc137a-6361-42ee-8925-57c6a2eeccf4
-rw-rw-rw-. 2 qemu qemu 5368709120 Mar 20 23:18 volume-abc0f5b8-5144-42b7-b49f-a42a20ddd88f
-rw-rw-rw-. 2 qemu qemu 5368709120 Mar 20 23:18 volume-ec9670b8-fa64-46e9-9695-641f51bf1421

[root@dallas1 ~(keystone_boris)]$ ssh 192.168.1.140
Last login: Thu Mar 20 20:15:49 2014
[root@dallas2 ~]# ls -l /FDR/Replicate
total 8383860
-rw-rw-rw-. 2 root root 5368709120 Mar 17 21:58 volume-4b807fe8-dcd2-46eb-b7dd-6ab10641c32a
-rw-rw-rw-. 2 root root 5368709120 Mar 20 18:26 volume-4df4fadf-1be9-4a09-b51c-723b8a6b9c23
-rw-rw-rw-. 2 root root 5368709120 Mar 19 13:46 volume-6ccc137a-6361-42ee-8925-57c6a2eeccf4
-rw-rw-rw-. 2 qemu qemu 5368709120 Mar 20 23:19 volume-abc0f5b8-5144-42b7-b49f-a42a20ddd88f
-rw-rw-rw-. 2 qemu qemu 5368709120 Mar 20 23:19 volume-ec9670b8-fa64-46e9-9695-641f51bf1421


Setup Gluster 3.4.2 on Two Node Controller&Compute Neutron GRE+OVS Fedora 20 Cluster

March 10, 2014

This post is an update for http://bderzhavets.blogspot.com/2014/01/setting-up-two-physical-node-openstack.html. It's focused on the Gluster 3.4.2 implementation, including tuning the /etc/sysconfig/iptables files on the Controller and Compute nodes, copying the ssh key from the master node to compute, step-by-step verification of gluster volume replica 2 functionality, and switching RDO Havana cinder services to work with a gluster volume created to store instances' bootable cinder volumes for a performance improvement. Of course, creating gluster bricks under "/" is not recommended; there should be a separate mount point for an "xfs" filesystem to store the gluster bricks on each node.

The manual RDO Havana setup itself was originally suggested by Kashyap Chamarthy for F20 VMs running on a non-default Libvirt subnet. My contribution was an attempt to reproduce this setup on physical F20 boxes and an arbitrary network not connected to Libvirt: preventive updates to the mysql.user table, which allowed remote connections for nova-compute and neutron-openvswitch-agent from Compute to Controller, and changes to /etc/sysconfig/iptables to enable the Gluster 3.4.2 setup on F20 systems (view http://bderzhavets.blogspot.com/2014/03/setup-gluster-342-on-two-node-neutron.html). I have also fixed a typo in dhcp_agent.ini (the reference to "dnsmasq.conf") and added the line "dhcp-option=26,1454" to dnsmasq.conf. The updated configuration files are critical for launching an instance without a "Customization script" and allow working with the usual ssh keypair; when these updates are done, the instance gets created with MTU 1454. View [2]. The original setup is pretty much focused on the ability to transfer neutron metadata from Controller to Compute F20 nodes and is done manually with no answer-files. It stops exactly at the point when `nova boot ..` loads an instance on Compute, which obtains an internal IP via DHCP running on the Controller and may be assigned a floating IP to communicate with the Internet. No attempt to set up the dashboard was made, since the core target was neutron GRE+OVS functionality (just a proof of concept). Regarding Dashboard Setup & VNC Console, view :-
Setup Dashboard&VNC console on Two Node Controller&Compute Neutron GRE+OVS+Gluster Fedora 20 Cluster

The updated setup procedure itself may be viewed here

Setup 

- Controller node: Nova, Keystone, Cinder, Glance, Neutron (using Open vSwitch plugin and GRE tunneling)

- Compute node: Nova (nova-compute), Neutron (openvswitch-agent)

dallas1.localdomain   –  Controller (192.168.1.130)

dallas2.localdomain   –  Compute   (192.168.1.140)

The first step is tuning /etc/sysconfig/iptables for the IPv4 iptables firewall (service firewalld should be disabled) :-

Update /etc/sysconfig/iptables on both nodes:-

-A INPUT -p tcp -m multiport --dport 24007:24047 -j ACCEPT
-A INPUT -p tcp --dport 111 -j ACCEPT
-A INPUT -p udp --dport 111 -j ACCEPT
-A INPUT -p tcp -m multiport --dport 38465:38485 -j ACCEPT

Comment out the lines below, ignoring the instruction from http://kashyapc.fedorapeople.org/virt/openstack/neutron-configs-GRE-OVS-two-node.txt. This is critical for Gluster functionality: with them active you are stuck working with thin LVM as cinder volumes, you won't even be able to remote mount with the "-t glusterfs" option, and Gluster replication will be dead forever.

# -A FORWARD -j REJECT --reject-with icmp-host-prohibited
# -A INPUT -j REJECT --reject-with icmp-host-prohibited

Restart service iptables on both nodes
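With firewalld disabled, the static ruleset is reloaded via the iptables service (a sketch, assuming the iptables-services package provides the unit):

# service iptables restart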

Second step:-

On dallas1, run the following commands :

# ssh-keygen (Hit Enter to accept all of the defaults)
# ssh-copy-id -i ~/.ssh/id_rsa.pub root@dallas2

On both nodes run :-

# yum  -y install glusterfs glusterfs-server glusterfs-fuse
# service glusterd start

On dallas1

# gluster peer probe dallas2.localdomain

It should return "success".

[root@dallas1 ~(keystone_admin)]$ gluster peer status

Number of Peers: 1
Hostname: dallas2.localdomain
Uuid: b3b1cf43-2fec-4904-82d4-b9be03f77c5f
State: Peer in Cluster (Connected)
On dallas2
[root@dallas2 ~]# gluster peer status
Number of Peers: 1
Hostname: 192.168.1.130
Uuid: a57433dd-4a1a-4442-a5ae-ba2f682e5c79
State: Peer in Cluster (Connected)

*************************************************************************************
On Controller (192.168.1.130)  & Compute nodes (192.168.1.140)
**********************************************************************************

Verify port availability :-

[root@dallas1 ~(keystone_admin)]$ netstat -lntp | grep gluster
tcp    0      0 0.0.0.0:655        0.0.0.0:*    LISTEN      2591/glusterfs
tcp    0      0 0.0.0.0:49152      0.0.0.0:*    LISTEN      2524/glusterfsd
tcp    0      0 0.0.0.0:2049       0.0.0.0:*    LISTEN      2591/glusterfs
tcp    0      0 0.0.0.0:38465      0.0.0.0:*    LISTEN      2591/glusterfs
tcp    0      0 0.0.0.0:38466      0.0.0.0:*    LISTEN      2591/glusterfs
tcp    0      0 0.0.0.0:49155      0.0.0.0:*    LISTEN      2525/glusterfsd
tcp    0      0 0.0.0.0:38468      0.0.0.0:*    LISTEN      2591/glusterfs
tcp    0      0 0.0.0.0:38469      0.0.0.0:*    LISTEN      2591/glusterfs
tcp    0      0 0.0.0.0:24007      0.0.0.0:*    LISTEN      2380/glusterd

************************************

Switching Cinder to Gluster volume

************************************

# gluster volume create cinder-volumes012  replica 2 dallas1.localdomain:/FDR/Replicate   dallas2.localdomain:/FDR/Replicate force
# gluster volume start cinder-volumes012
# gluster volume set cinder-volumes012  auth.allow 192.168.1.*
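The volume info below also shows storage.owner-uid/gid set to 165 (the cinder user), so presumably these were set as well (a sketch, inferred from the "Options Reconfigured" section):

# gluster volume set cinder-volumes012 storage.owner-uid 165
# gluster volume set cinder-volumes012 storage.owner-gid 165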

[root@dallas1 ~(keystone_admin)]$ gluster volume info cinder-volumes012

Volume Name: cinder-volumes012
Type: Replicate
Volume ID: 9ee31c6c-0ae3-4fee-9886-b9cb6a518f48
Status: Started
Number of Bricks: 1 x 2 = 2
Transport-type: tcp
Bricks:
Brick1: dallas1.localdomain:/FDR/Replicate
Brick2: dallas2.localdomain:/FDR/Replicate
Options Reconfigured:
storage.owner-gid: 165
storage.owner-uid: 165
auth.allow: 192.168.1.*

[root@dallas1 ~(keystone_admin)]$ gluster volume status cinder-volumes012

Status of volume: cinder-volumes012
Gluster process                                                    Port    Online    Pid
------------------------------------------------------------------------------
Brick dallas1.localdomain:/FDR/Replicate         49155    Y    2525
Brick dallas2.localdomain:/FDR/Replicate         49152    Y    1615
NFS Server on localhost                                  2049    Y    2591
Self-heal Daemon on localhost                         N/A    Y    2596
NFS Server on dallas2.localdomain                   2049    Y    2202
Self-heal Daemon on dallas2.localdomain          N/A    Y    2197

# openstack-config --set /etc/cinder/cinder.conf DEFAULT volume_driver cinder.volume.drivers.glusterfs.GlusterfsDriver
# openstack-config --set /etc/cinder/cinder.conf DEFAULT glusterfs_shares_config /etc/cinder/shares.conf
# openstack-config --set /etc/cinder/cinder.conf DEFAULT glusterfs_mount_point_base /var/lib/cinder/volumes
# vi /etc/cinder/shares.conf
192.168.1.130:cinder-volumes012
:wq
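Before restarting the cinder services it is worth checking that the share mounts at all (a hedged sanity check, not from the original post; /mnt/gtest is an arbitrary mount point):

# mkdir -p /mnt/gtest && mount -t glusterfs 192.168.1.130:cinder-volumes012 /mnt/gtest && df -h /mnt/gtest && umount /mnt/gtest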

Make sure all thin LVM volumes have been deleted via `cinder list`; if not, then delete them all.

[root@dallas1 ~(keystone_admin)]$ for i in api scheduler volume ; do service openstack-cinder-${i} restart ; done

It should add a row to the `df -h` output :

192.168.1.130:cinder-volumes012  187G   32G  146G  18% /var/lib/cinder/volumes/1c9688348ab38662e3ac8fb121077d34

[root@dallas1 ~(keystone_admin)]$ openstack-status

== Nova services ==

openstack-nova-api:                        active
openstack-nova-cert:                       inactive  (disabled on boot)
openstack-nova-compute:               inactive  (disabled on boot)
openstack-nova-network:                inactive  (disabled on boot)
openstack-nova-scheduler:             active
openstack-nova-volume:                  inactive  (disabled on boot)
openstack-nova-conductor:             active
== Glance services ==
openstack-glance-api:                   active
openstack-glance-registry:           active
== Keystone service ==
openstack-keystone:                     active
== neutron services ==
neutron-server:                         active
neutron-dhcp-agent:                active
neutron-l3-agent:                     active
neutron-metadata-agent:        active
neutron-lbaas-agent:                    inactive  (disabled on boot)
neutron-openvswitch-agent:       active
neutron-linuxbridge-agent:         inactive  (disabled on boot)
neutron-ryu-agent:                      inactive  (disabled on boot)
neutron-nec-agent:                      inactive  (disabled on boot)
neutron-mlnx-agent:                   inactive  (disabled on boot)
== Cinder services ==
openstack-cinder-api:                   active
openstack-cinder-scheduler:        active
openstack-cinder-volume:             active
== Support services ==
mysqld:                                 inactive  (disabled on boot)
libvirtd:                               active
openvswitch:                            active
dbus:                                   active
tgtd:                                   active
qpidd:                                  active
== Keystone users ==
+----------------------------------+---------+---------+-------+
|                id                |   name  | enabled | email |
+----------------------------------+---------+---------+-------+
| 871cf99617ff40e09039185aa7ab11f8 |  admin  |   True  |       |
| df4a984ce2f24848a6b84aaa99e296f1 |  boris  |   True  |       |
| 57fc5466230b497a9f206a20618dbe25 |  cinder |   True  |       |
| cdb2e5af7bae4c5486a1e3e2f42727f0 |  glance |   True  |       |
| adb14139a0874c74b14d61d2d4f22371 | neutron |   True  |       |
| 2485122e3538409c8a6fa2ea4343cedf |   nova  |   True  |       |
+----------------------------------+---------+---------+-------+
== Glance images ==
+--------------------------------------+---------------------+-------------+------------------+-----------+--------+
| ID                                   | Name                | Disk Format | Container Format | Size      | Status |
+--------------------------------------+---------------------+-------------+------------------+-----------+--------+
| 592faef8-308a-4438-867a-17adf685cde4 | CirrOS 31           | qcow2       | bare             | 13147648  | active |
| d0e90250-5814-4685-9b8d-65ec9daa7117 | Fedora 20 x86_64    | qcow2       | bare             | 214106112 | active |
| 3e6eea8e-32e6-4373-9eb1-e04b8a3167f9 | Ubuntu Server 13.10 | qcow2       | bare             | 244777472 | active |
+--------------------------------------+---------------------+-------------+------------------+-----------+--------+
== Nova managed services ==
+----------------+---------------------+----------+---------+-------+----------------------------+-----------------+
| Binary         | Host                | Zone     | Status  | State | Updated_at                 | Disabled Reason |
+----------------+---------------------+----------+---------+-------+----------------------------+-----------------+
| nova-scheduler | dallas1.localdomain | internal | enabled | up    | 2014-03-09T14:19:31.000000 | None            |
| nova-conductor | dallas1.localdomain | internal | enabled | up    | 2014-03-09T14:19:30.000000 | None            |
| nova-compute   | dallas2.localdomain | nova     | enabled | up    | 2014-03-09T14:19:33.000000 | None            |
+----------------+---------------------+----------+---------+-------+----------------------------+-----------------+
== Nova networks ==
+--------------------------------------+-------+------+
| ID                                   | Label | Cidr |
+--------------------------------------+-------+------+
| 0ed406bf-3552-4036-9006-440f3e69618e | ext   | None |
| 166d9651-d299-47df-a5a1-b368e87b612f | int   | None |
+--------------------------------------+-------+------+
== Nova instance flavors ==
+----+-----------+-----------+------+-----------+------+-------+-------------+-----------+
| ID | Name      | Memory_MB | Disk | Ephemeral | Swap | VCPUs | RXTX_Factor | Is_Public |
+----+-----------+-----------+------+-----------+------+-------+-------------+-----------+
| 1  | m1.tiny   | 512       | 1    | 0         |      | 1     | 1.0         | True      |
| 2  | m1.small  | 2048      | 20   | 0         |      | 1     | 1.0         | True      |
| 3  | m1.medium | 4096      | 40   | 0         |      | 2     | 1.0         | True      |
| 4  | m1.large  | 8192      | 80   | 0         |      | 4     | 1.0         | True      |
| 5  | m1.xlarge | 16384     | 160  | 0         |      | 8     | 1.0         | True      |
+----+-----------+-----------+------+-----------+------+-------+-------------+-----------+
== Nova instances ==
+----+------+--------+------------+-------------+----------+
| ID | Name | Status | Task State | Power State | Networks |
+----+------+--------+------------+-------------+----------+
+----+------+--------+------------+-------------+----------+

[root@dallas1 ~(keystone_boris)]$ nova list

+--------------------------------------+-----------+--------+------------+-------------+-----------------------------+
| ID                                   | Name      | Status | Task State | Power State | Networks                    |
+--------------------------------------+-----------+--------+------------+-------------+-----------------------------+
| 9566adec-9406-4c3e-bce5-109ecb8bcf6b | VF20RS007 | ACTIVE | None       | Running     | int=10.0.0.2, 192.168.1.101 |
| 8142ee4c-ef56-4b61-8a0b-ecd82d21484f | VF20RS012 | ACTIVE | None       | Running     | int=10.0.0.4, 192.168.1.102 |
+--------------------------------------+-----------+--------+------------+-------------+-----------------------------+
[root@dallas1 ~(keystone_boris)]$ df -h

Filesystem                       Size  Used Avail Use% Mounted on
/dev/mapper/fedora01-root        187G   32G  146G  18% /
devtmpfs                         3.9G     0  3.9G   0% /dev
tmpfs                            3.9G  184K  3.9G   1% /dev/shm
tmpfs                            3.9G  9.1M  3.9G   1% /run
tmpfs                            3.9G     0  3.9G   0% /sys/fs/cgroup
tmpfs                            3.9G  464K  3.9G   1% /tmp
/dev/sdb5                        477M  122M  327M  28% /boot
tmpfs                            3.9G  9.1M  3.9G   1% /run/netns
192.168.1.130:cinder-volumes012  187G   32G  146G  18% /var/lib/cinder/volumes/1c9688348ab38662e3ac8fb121077d34

(neutron) agent-list

+--------------------------------------+--------------------+---------------------+-------+----------------+
| id                                   | agent_type         | host                | alive | admin_state_up |
+--------------------------------------+--------------------+---------------------+-------+----------------+
| 3ed1cd15-81af-4252-9d6f-e9bb140bf6cf | L3 agent           | dallas1.localdomain | :-)   | True           |
| a088a6df-633c-4959-a316-510c99f3876b | DHCP agent         | dallas1.localdomain | :-)   | True           |
| a3e5200c-b391-4930-b3ee-58c8d1b13c73 | Open vSwitch agent | dallas1.localdomain | :-)   | True           |
| b6da839a-0d93-44ad-9793-6d0919fbb547 | Open vSwitch agent | dallas2.localdomain | :-)   | True           |
+--------------------------------------+--------------------+---------------------+-------+----------------+
If the Controller has been correctly set up :-

[root@dallas1 ~(keystone_admin)]$ netstat -lntp | grep python
tcp    0     0 0.0.0.0:8700      0.0.0.0:*     LISTEN      1160/python
tcp    0     0 0.0.0.0:35357     0.0.0.0:*     LISTEN      1163/python
tcp   0      0 0.0.0.0:9696      0.0.0.0:*      LISTEN      1165/python
tcp   0      0 0.0.0.0:8773      0.0.0.0:*      LISTEN      1160/python
tcp   0      0 0.0.0.0:8774      0.0.0.0:*      LISTEN      1160/python
tcp   0      0 0.0.0.0:9191      0.0.0.0:*      LISTEN      1173/python
tcp   0      0 0.0.0.0:8776      0.0.0.0:*      LISTEN      8169/python
tcp   0      0 0.0.0.0:5000      0.0.0.0:*      LISTEN      1163/python
tcp   0      0 0.0.0.0:9292      0.0.0.0:*      LISTEN      1168/python 

**********************************************
Creating instance utilizing glusterfs volume
**********************************************

[root@dallas1 ~(keystone_boris)]$ glance image-list

+--------------------------------------+---------------------+-------------+------------------+-----------+--------+
| ID                                   | Name                | Disk Format | Container Format | Size      | Status |
+--------------------------------------+---------------------+-------------+------------------+-----------+--------+
| 592faef8-308a-4438-867a-17adf685cde4 | CirrOS 31           | qcow2       | bare             | 13147648  | active |
| d0e90250-5814-4685-9b8d-65ec9daa7117 | Fedora 20 x86_64    | qcow2       | bare             | 214106112 | active |
| 3e6eea8e-32e6-4373-9eb1-e04b8a3167f9 | Ubuntu Server 13.10 | qcow2       | bare             | 244777472 | active |
+--------------------------------------+---------------------+-------------+------------------+-----------+--------+

I have to notice that the schema `cinder create --image-id .. --display_name VOL_NAME SIZE` & `nova boot --flavor 2 --user-data=./myfile.txt --block_device_mapping vda=volume_id:::0 VM_NAME` doesn't work stably for me in the meantime.

As of 03/11 the standard schema via `cinder create --image-id IMAGE_ID --display_name VOL_NAME SIZE` & `nova boot --flavor 2 --user-data=./myfile.txt --block_device_mapping vda=VOLUME_ID:::0 INSTANCE_NAME` started to work fine. However, the schema described below, on the contrary, stopped working on glusterfs-based cinder volumes.
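Spelled out with the image ID used below, that two-step schema is (VF20VOL01 and the VOLUME_ID placeholder are illustrative, not from the original post):

[root@dallas1 ~(keystone_boris)]$ cinder create --image-id d0e90250-5814-4685-9b8d-65ec9daa7117 --display_name VF20VOL01 5
[root@dallas1 ~(keystone_boris)]$ nova boot --flavor 2 --user-data=./myfile.txt --block_device_mapping vda=VOLUME_ID:::0 VF20RS013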

[root@dallas1 ~(keystone_boris)]$ nova boot --flavor 2 --user-data=./myfile.txt --block-device source=image,id=d0e90250-5814-4685-9b8d-65ec9daa7117,dest=volume,size=5,shutdown=preserve,bootindex=0 VF20RS012

+--------------------------------------+-------------------------------------------------+
| Property                             | Value                                           |
+--------------------------------------+-------------------------------------------------+
| status                               | BUILD                                           |
| updated                              | 2014-03-09T12:41:22Z                            |
| OS-EXT-STS:task_state                | scheduling                                      |
| key_name                             | None                                            |
| image                                | Attempt to boot from volume – no image supplied |
| hostId                               |                                                 |
| OS-EXT-STS:vm_state                  | building                                        |
| OS-SRV-USG:launched_at               | None                                            |
| flavor                               | m1.small                                        |
| id                                   | 8142ee4c-ef56-4b61-8a0b-ecd82d21484f            |
| security_groups                      | [{u'name': u'default'}]                         |
| OS-SRV-USG:terminated_at             | None                                            |
| user_id                              | df4a984ce2f24848a6b84aaa99e296f1                |
| name                                 | VF20RS012                                       |
| adminPass                            | eFDhC8ZSCFU2                                    |
| tenant_id                            | e896be65e94a4893b870bc29ba86d7eb                |
| created                              | 2014-03-09T12:41:22Z                            |
| OS-DCF:diskConfig                    | MANUAL                                          |
| metadata                             | {}                                              |
| os-extended-volumes:volumes_attached | []                                              |
| accessIPv4                           |                                                 |
| accessIPv6                           |                                                 |
| progress                             | 0                                               |
| OS-EXT-STS:power_state               | 0                                               |
| OS-EXT-AZ:availability_zone          | nova                                            |
| config_drive                         |                                                 |
+--------------------------------------+-------------------------------------------------+

[root@dallas1 ~(keystone_boris)]$ nova list

+--------------------------------------+-----------+-----------+----------------------+-------------+-----------------------------+
| ID                                   | Name      | Status    | Task State           | Power State | Networks                    |
+————————————–+———–+———–+———————-+————-+—————————–+
| 9566adec-9406-4c3e-bce5-109ecb8bcf6b | VF20RS007 | SUSPENDED | None                 | Shutdown    | int=10.0.0.2, 192.168.1.101 |
| 8142ee4c-ef56-4b61-8a0b-ecd82d21484f | VF20RS012 | BUILD     | block_device_mapping | NOSTATE     |                             |
+————————————–+———–+———–+———————-+————-+—————————–+
WAIT …
[root@dallas1 ~(keystone_boris)]$ nova list
+————————————–+———–+———–+————+————-+—————————–+
| ID                                   | Name      | Status    | Task State | Power State | Networks                    |
+————————————–+———–+———–+————+————-+—————————–+
| 9566adec-9406-4c3e-bce5-109ecb8bcf6b | VF20RS007 | SUSPENDED | None       | Shutdown    | int=10.0.0.2, 192.168.1.101 |
| 8142ee4c-ef56-4b61-8a0b-ecd82d21484f | VF20RS012 | ACTIVE    | None       | Running     | int=10.0.0.4                |
+————————————–+———–+———–+————+————-+—————————–+
[root@dallas1 ~(keystone_boris)]$ neutron floatingip-create ext

Created a new floatingip:

+———————+————————————–+
| Field               | Value                                |
+———————+————————————–+
| fixed_ip_address    |                                      |
| floating_ip_address | 192.168.1.102                        |
| floating_network_id | 0ed406bf-3552-4036-9006-440f3e69618e |
| id                  | 5c74667d-9b22-4092-ae0a-70ff3a06e785 |
| port_id             |                                      |
| router_id           |                                      |
| tenant_id           | e896be65e94a4893b870bc29ba86d7eb     |
+———————+————————————–+

[root@dallas1 ~(keystone_boris)]$ neutron port-list --device-id 8142ee4c-ef56-4b61-8a0b-ecd82d21484f

+————————————–+——+——————-+———————————————————————————+
| id                                   | name | mac_address       | fixed_ips                                                                       |
+————————————–+——+——————-+———————————————————————————+
| dc60b5f4-739e-49bd-a004-3ef806e2b488 |      | fa:16:3e:70:56:cc | {"subnet_id": "2e838119-3e2e-46e8-b7cc-6d00975046f2", "ip_address": "10.0.0.4"} |
+————————————–+——+——————-+———————————————————————————+

[root@dallas1 ~(keystone_boris)]$ neutron floatingip-associate 5c74667d-9b22-4092-ae0a-70ff3a06e785 dc60b5f4-739e-49bd-a004-3ef806e2b488

Associated floatingip 5c74667d-9b22-4092-ae0a-70ff3a06e785

[root@dallas1 ~(keystone_boris)]$ ping 192.168.1.102

PING 192.168.1.102 (192.168.1.102) 56(84) bytes of data.
64 bytes from 192.168.1.102: icmp_seq=1 ttl=63 time=6.23 ms
64 bytes from 192.168.1.102: icmp_seq=2 ttl=63 time=0.702 ms
64 bytes from 192.168.1.102: icmp_seq=3 ttl=63 time=1.07 ms
64 bytes from 192.168.1.102: icmp_seq=4 ttl=63 time=0.693 ms
64 bytes from 192.168.1.102: icmp_seq=5 ttl=63 time=1.80 ms
64 bytes from 192.168.1.102: icmp_seq=6 ttl=63 time=0.750 ms
^C

[root@dallas1 ~(keystone_boris)]$ nova list

+————————————–+———–+——–+————+————-+—————————–+
| ID                                   | Name      | Status | Task State | Power State | Networks                    |
+————————————–+———–+——–+————+————-+—————————–+
| 9566adec-9406-4c3e-bce5-109ecb8bcf6b | VF20RS007 | ACTIVE | None       | Running     | int=10.0.0.2, 192.168.1.101 |
| 8142ee4c-ef56-4b61-8a0b-ecd82d21484f | VF20RS012 | ACTIVE | None       | Running     | int=10.0.0.4, 192.168.1.102 |
+————————————–+———–+——–+————+————-+—————————–+
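The allocate-IP / find-port / associate sequence above is repeated verbatim for every instance in this post; here is a minimal bash sketch of the same three steps (the function name is mine; it assumes the `ext` network and the CLI table layout shown above):

assign_fip() {
    # $1 = nova instance ID
    local fip_id port_id
    # the "| id | <uuid> |" row of floatingip-create holds the floating IP ID in column 4
    fip_id=$(neutron floatingip-create ext | awk '$2 == "id" {print $4}')
    # the single data row of port-list is its 4th line; column 2 is the port ID
    port_id=$(neutron port-list --device-id "$1" | awk 'NR == 4 {print $2}')
    neutron floatingip-associate "$fip_id" "$port_id"
}

Usage: `assign_fip 8142ee4c-ef56-4b61-8a0b-ecd82d21484f` reproduces the three commands above.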

[root@dallas1 ~(keystone_boris)]$ cinder list

+————————————–+——–+————–+——+————-+———-+————————————–+
|                  ID                  | Status | Display Name | Size | Volume Type | Bootable |             Attached to              |
+————————————–+——–+————–+——+————-+———-+————————————–+
| 575be853-b104-458e-bc72-1785ef524416 | in-use |              |  5   |     None    |   true   | 8142ee4c-ef56-4b61-8a0b-ecd82d21484f |
| 9794bd45-8923-4f3e-a48f-fa1d62a964f8 | in-use |              |  5   |     None    |   true   | 9566adec-9406-4c3e-bce5-109ecb8bcf6b |
+————————————–+——–+————–+——+————-+———-+————————————–+

On Compute:-

[root@dallas1 ~]# ssh 192.168.1.140

Last login: Sun Mar  9 16:46:40 2014

[root@dallas2 ~]# df -h

Filesystem                       Size  Used Avail Use% Mounted on
/dev/mapper/fedora01-root        187G   18G  160G  11% /
devtmpfs                         3.9G     0  3.9G   0% /dev
tmpfs                            3.9G  3.1M  3.9G   1% /dev/shm
tmpfs                            3.9G  9.4M  3.9G   1% /run
tmpfs                            3.9G     0  3.9G   0% /sys/fs/cgroup
tmpfs                            3.9G  115M  3.8G   3% /tmp
/dev/sdb5                        477M  122M  327M  28% /boot
192.168.1.130:cinder-volumes012  187G   32G  146G  18% /var/lib/nova/mnt/1c9688348ab38662e3ac8fb121077d34
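The 192.168.1.130:cinder-volumes012 share mounted under /var/lib/nova/mnt/... comes from the GlusterFS cinder backend; a sketch of the Havana-era cinder.conf fragment that typically produces such a mount follows (the share name mirrors the line above; everything else is an assumption about this cluster):

# /etc/cinder/cinder.conf (fragment)
volume_driver = cinder.volume.drivers.glusterfs.GlusterfsDriver
glusterfs_shares_config = /etc/cinder/shares.conf

# /etc/cinder/shares.conf
192.168.1.130:cinder-volumes012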

[root@dallas2 ~]# ps -ef| grep nova

nova      1548     1  0 16:29 ?        00:00:42 /usr/bin/python /usr/bin/nova-compute --logfile /var/log/nova/compute.log

root      3005     1  0 16:34 ?        00:00:38 /usr/sbin/glusterfs --volfile-id=cinder-volumes012 --volfile-server=192.168.1.130 /var/lib/nova/mnt/1c9688348ab38662e3ac8fb121077d34

qemu      4762     1 58 16:42 ?        00:52:17 /usr/bin/qemu-system-x86_64 -name instance-00000061 -S -machine pc-i440fx-1.6,accel=tcg,usb=off -cpu Penryn,+osxsave,+xsave,+pdcm,+xtpr,+tm2,+est,+smx,+vmx,+ds_cpl,+monitor,+dtes64,+pbe,+tm,+ht,+ss,+acpi,+ds,+vme -m 2048 -realtime mlock=off -smp 1,sockets=1,cores=1,threads=1 -uuid 8142ee4c-ef56-4b61-8a0b-ecd82d21484f -smbios type=1,manufacturer=Fedora Project,product=OpenStack Nova,version=2013.2.2-1.fc20,serial=6050001e-8c00-00ac-818a-90e6ba2d11eb,uuid=8142ee4c-ef56-4b61-8a0b-ecd82d21484f -no-user-config -nodefaults -chardev socket,id=charmonitor,path=/var/lib/libvirt/qemu/instance-00000061.monitor,server,nowait -mon chardev=charmonitor,id=monitor,mode=control -rtc base=utc -no-shutdown -device piix3-usb-uhci,id=usb,bus=pci.0,addr=0x1.0x2 -drive file=/var/lib/nova/mnt/1c9688348ab38662e3ac8fb121077d34/volume-575be853-b104-458e-bc72-1785ef524416,if=none,id=drive-virtio-disk0,format=raw,serial=575be853-b104-458e-bc72-1785ef524416,cache=none -device virtio-blk-pci,scsi=off,bus=pci.0,addr=0x4,drive=drive-virtio-disk0,id=virtio-disk0,bootindex=1 -netdev tap,fd=24,id=hostnet0 -device virtio-net-pci,netdev=hostnet0,id=net0,mac=fa:16:3e:70:56:cc,bus=pci.0,addr=0x3 -chardev file,id=charserial0,path=/var/lib/nova/instances/8142ee4c-ef56-4b61-8a0b-ecd82d21484f/console.log -device isa-serial,chardev=charserial0,id=serial0 -chardev pty,id=charserial1 -device isa-serial,chardev=charserial1,id=serial1 -device usb-tablet,id=input0 -vnc 127.0.0.1:0 -k en-us -device cirrus-vga,id=video0,bus=pci.0,addr=0x2 -device virtio-balloon-pci,id=balloon0,bus=pci.0,addr=0x5

qemu      6330     1 44 16:49 ?        00:36:02 /usr/bin/qemu-system-x86_64 -name instance-0000005f -S -machine pc-i440fx-1.6,accel=tcg,usb=off -cpu Penryn,+osxsave,+xsave,+pdcm,+xtpr,+tm2,+est,+smx,+vmx,+ds_cpl,+monitor,+dtes64,+pbe,+tm,+ht,+ss,+acpi,+ds,+vme -m 2048 -realtime mlock=off -smp 1,sockets=1,cores=1,threads=1 -uuid 9566adec-9406-4c3e-bce5-109ecb8bcf6b -smbios type=1,manufacturer=Fedora Project,product=OpenStack Nova,version=2013.2.2-1.fc20,serial=6050001e-8c00-00ac-818a-90e6ba2d11eb,uuid=9566adec-9406-4c3e-bce5-109ecb8bcf6b -no-user-config -nodefaults -chardev socket,id=charmonitor,path=/var/lib/libvirt/qemu/instance-0000005f.monitor,server,nowait -mon chardev=charmonitor,id=monitor,mode=control -rtc base=utc -no-shutdown -device piix3-usb-uhci,id=usb,bus=pci.0,addr=0x1.0x2 -drive file=/var/lib/nova/mnt/1c9688348ab38662e3ac8fb121077d34/volume-9794bd45-8923-4f3e-a48f-fa1d62a964f8,if=none,id=drive-virtio-disk0,format=raw,serial=9794bd45-8923-4f3e-a48f-fa1d62a964f8,cache=none -device virtio-blk-pci,scsi=off,bus=pci.0,addr=0x4,drive=drive-virtio-disk0,id=virtio-disk0,bootindex=1 -netdev tap,fd=27,id=hostnet0 -device virtio-net-pci,netdev=hostnet0,id=net0,mac=fa:16:3e:50:84:72,bus=pci.0,addr=0x3 -chardev file,id=charserial0,path=/var/lib/nova/instances/9566adec-9406-4c3e-bce5-109ecb8bcf6b/console.log -device isa-serial,chardev=charserial0,id=serial0 -chardev pty,id=charserial1 -device isa-serial,chardev=charserial1,id=serial1 -device usb-tablet,id=input0 -vnc 127.0.0.1:1 -k en-us -device cirrus-vga,id=video0,bus=pci.0,addr=0x2 -incoming fd:24 -device virtio-balloon-pci,id=balloon0,bus=pci.0,addr=0x5

root     24713 24622  0 18:11 pts/4    00:00:00 grep --color=auto nova

[root@dallas2 ~]# ps -ef| grep neutron

neutron   1549     1  0 16:29 ?        00:00:53 /usr/bin/python /usr/bin/neutron-openvswitch-agent --config-file /usr/share/neutron/neutron-dist.conf --config-file /etc/neutron/neutron.conf --config-file /etc/neutron/plugins/openvswitch/ovs_neutron_plugin.ini --log-file /var/log/neutron/openvswitch-agent.log

root     24981 24622  0 18:12 pts/4    00:00:00 grep --color=auto neutron

[ Screenshot: top at Compute node (192.168.1.140) ]

[ Screenshot: runtime at Compute node (dallas2, 192.168.1.140) ]

 ******************************************************

Building Ubuntu 14.04 instance via cinder volume

******************************************************

[root@dallas1 ~(keystone_boris)]$ glance image-list
+————————————–+———————+————-+——————+———–+——–+
| ID                                   | Name                | Disk Format | Container Format | Size      | Status |
+————————————–+———————+————-+——————+———–+——–+
| 592faef8-308a-4438-867a-17adf685cde4 | CirrOS 31           | qcow2       | bare             | 13147648  | active |
| d0e90250-5814-4685-9b8d-65ec9daa7117 | Fedora 20 x86_64    | qcow2       | bare             | 214106112 | active |
| c2b7c3ed-e25d-44c4-a5e7-4e013c4a8b00 | Ubuntu 14.04        | qcow2       | bare             | 264176128 | active |
| 3e6eea8e-32e6-4373-9eb1-e04b8a3167f9 | Ubuntu Server 13.10 | qcow2       | bare             | 244777472 | active |
+————————————–+———————+————-+——————+———–+——–+
[root@dallas1 ~(keystone_boris)]$ cinder create --image-id c2b7c3ed-e25d-44c4-a5e7-4e013c4a8b00 --display_name UbuntuTrusty 5
+———————+————————————–+
|       Property      |                Value                 |
+———————+————————————–+
|     attachments     |                  []                  |
|  availability_zone  |                 nova                 |
|       bootable      |                false                 |
|      created_at     |      2014-03-10T06:35:39.873978      |
| display_description |                 None                 |
|     display_name    |             UbuntuTrusty             |
|          id         | 8bcc02a7-b9ba-4cd6-a6b9-0574889bf8d2 |
|       image_id      | c2b7c3ed-e25d-44c4-a5e7-4e013c4a8b00 |
|       metadata      |                  {}                  |
|         size        |                  5                   |
|     snapshot_id     |                 None                 |
|     source_volid    |                 None                 |
|        status       |               creating               |
|     volume_type     |                 None                 |
+———————+————————————–+

[root@dallas1 ~(keystone_boris)]$ cinder list

+————————————–+———–+————–+——+————-+———-+————————————–+
|                  ID                  |   Status  | Display Name | Size | Volume Type | Bootable |             Attached to              |
+————————————–+———–+————–+——+————-+———-+————————————–+
| 56ceaaa8-c0ec-45f3-98a4-555c1231b34e |   in-use  |              |  5   |     None    |   true   | e29606c5-582f-4766-ae1b-52043a698743 |
| 575be853-b104-458e-bc72-1785ef524416 |   in-use  |              |  5   |     None    |   true   | 8142ee4c-ef56-4b61-8a0b-ecd82d21484f |
| 8bcc02a7-b9ba-4cd6-a6b9-0574889bf8d2 | available | UbuntuTrusty |  5   |     None    |   true   |                                      |
| 9794bd45-8923-4f3e-a48f-fa1d62a964f8 |   in-use  |              |  5   |     None    |   true   | 9566adec-9406-4c3e-bce5-109ecb8bcf6b |
+————————————–+———–+————–+——+————-+———-+————————————–+

[root@dallas1 ~(keystone_boris)]$ nova boot --flavor 2 --user-data=./myfile.txt --block_device_mapping vda=8bcc02a7-b9ba-4cd6-a6b9-0574889bf8d2:::0 UbuntuTR01

+————————————–+—————————————————-+
| Property                             | Value                                              |
+————————————–+—————————————————-+
| status                               | BUILD                                              |
| updated                              | 2014-03-10T06:40:14Z                               |
| OS-EXT-STS:task_state                | scheduling                                         |
| key_name                             | None                                               |
| image                                | Attempt to boot from volume – no image supplied    |
| hostId                               |                                                    |
| OS-EXT-STS:vm_state                  | building                                           |
| OS-SRV-USG:launched_at               | None                                               |
| flavor                               | m1.small                                           |
| id                                   | 0859e52d-c07b-4f56-ac79-2b37080d2843               |
| security_groups                      | [{u'name': u'default'}]                            |
| OS-SRV-USG:terminated_at             | None                                               |
| user_id                              | df4a984ce2f24848a6b84aaa99e296f1                   |
| name                                 | UbuntuTR01                                         |
| adminPass                            | L8VuhttJMbJf                                       |
| tenant_id                            | e896be65e94a4893b870bc29ba86d7eb                   |
| created                              | 2014-03-10T06:40:13Z                               |
| OS-DCF:diskConfig                    | MANUAL                                             |
| metadata                             | {}                                                 |
| os-extended-volumes:volumes_attached | [{u'id': u'8bcc02a7-b9ba-4cd6-a6b9-0574889bf8d2'}] |
| accessIPv4                           |                                                    |
| accessIPv6                           |                                                    |
| progress                             | 0                                                  |
| OS-EXT-STS:power_state               | 0                                                  |
| OS-EXT-AZ:availability_zone          | nova                                               |
| config_drive                         |                                                    |
+————————————–+—————————————————-+

[root@dallas1 ~(keystone_boris)]$ nova list

+————————————–+————+———–+————+————-+—————————–+
| ID                                   | Name       | Status    | Task State | Power State | Networks                    |
+————————————–+————+———–+————+————-+—————————–+
| 0859e52d-c07b-4f56-ac79-2b37080d2843 | UbuntuTR01 | ACTIVE    | None       | Running     | int=10.0.0.6                |
| 9566adec-9406-4c3e-bce5-109ecb8bcf6b | VF20RS007  | SUSPENDED | None       | Shutdown    | int=10.0.0.2, 192.168.1.101 |
| 8142ee4c-ef56-4b61-8a0b-ecd82d21484f | VF20RS012  | SUSPENDED | None       | Shutdown    | int=10.0.0.4, 192.168.1.102 |
| e29606c5-582f-4766-ae1b-52043a698743 | VF20RS016  | ACTIVE    | None       | Running     | int=10.0.0.5, 192.168.1.103 |
+————————————–+————+———–+————+————-+—————————–+

[root@dallas1 ~(keystone_boris)]$ neutron floatingip-create ext

Created a new floatingip:

+———————+————————————–+
| Field               | Value                                |
+———————+————————————–+
| fixed_ip_address    |                                      |
| floating_ip_address | 192.168.1.104                        |
| floating_network_id | 0ed406bf-3552-4036-9006-440f3e69618e |
| id                  | 9498ac85-82b0-468a-b526-64a659080ab9 |
| port_id             |                                      |
| router_id           |                                      |
| tenant_id           | e896be65e94a4893b870bc29ba86d7eb     |
+———————+————————————–+

[root@dallas1 ~(keystone_boris)]$ neutron port-list --device-id 0859e52d-c07b-4f56-ac79-2b37080d2843

+————————————–+——+——————-+———————————————————————————+
| id                                   | name | mac_address       | fixed_ips                                                                       |
+————————————–+——+——————-+———————————————————————————+
| 1f02fe57-d844-4fd8-a325-646f27163c8b |      | fa:16:3e:3f:a3:d4 | {"subnet_id": "2e838119-3e2e-46e8-b7cc-6d00975046f2", "ip_address": "10.0.0.6"} |
+————————————–+——+——————-+———————————————————————————+

[root@dallas1 ~(keystone_boris)]$ neutron floatingip-associate  9498ac85-82b0-468a-b526-64a659080ab9 1f02fe57-d844-4fd8-a325-646f27163c8b

Associated floatingip 9498ac85-82b0-468a-b526-64a659080ab9

[root@dallas1 ~(keystone_boris)]$ ping 192.168.1.104

PING 192.168.1.104 (192.168.1.104) 56(84) bytes of data.
64 bytes from 192.168.1.104: icmp_seq=1 ttl=63 time=2.35 ms
64 bytes from 192.168.1.104: icmp_seq=2 ttl=63 time=2.56 ms
64 bytes from 192.168.1.104: icmp_seq=3 ttl=63 time=1.17 ms
64 bytes from 192.168.1.104: icmp_seq=4 ttl=63 time=4.08 ms
64 bytes from 192.168.1.104: icmp_seq=5 ttl=63 time=2.19 ms
^C


Up-to-date procedure for creating cinder ThinLVM-based cloud instances (F20, Ubuntu 13.10) on a Fedora 20 Havana Compute Node.

March 4, 2014

This post follows up on http://bderzhavets.wordpress.com/2014/01/24/setting-up-two-physical-node-openstack-rdo-havana-neutron-gre-on-fedora-20-boxes-with-both-controller-and-compute-nodes-each-one-having-one-ethernet-adapter/

Per my experience, the schema `cinder create --image-id IMAGE_ID --display_name ...` followed by `nova boot --flavor 2 --user-data=./myfile.txt --block_device_mapping vda=VOLUME_ID:::0 <VM_NAME>` doesn't work any longer, giving an error :-

$ tail -f /var/log/nova/compute.log  reports :-

2014-03-03 13:28:43.646 1344 WARNING nova.virt.libvirt.driver [req-1bd6630e-b799-4d78-b702-f06da5f1464b df4a984ce2f24848a6b84aaa99e296f1 e896be65e94a4893b870bc29ba86d7eb] [instance: f621815f-3805-4f52-a878-9040c6a4af53] File injection into a boot from volume instance is not supported

This is followed by a Python stack trace and a Nova exception.

The workaround for this issue follows below. First stop and start the "tgtd" daemon :-

[root@dallas1 ~(keystone_admin)]$ service tgtd stop
Redirecting to /bin/systemctl stop  tgtd.service
[root@dallas1 ~(keystone_admin)]$ service tgtd status
Redirecting to /bin/systemctl status  tgtd.service
tgtd.service – tgtd iSCSI target daemon
Loaded: loaded (/usr/lib/systemd/system/tgtd.service; enabled)
Active: inactive (dead) since Tue 2014-03-04 11:46:18 MSK; 8s ago
Process: 11978 ExecStop=/usr/sbin/tgtadm --op delete --mode system (code=exited, status=0/SUCCESS)
Process: 11974 ExecStop=/usr/sbin/tgt-admin --update ALL -c /dev/null (code=exited, status=0/SUCCESS)
Process: 11972 ExecStop=/usr/sbin/tgtadm --op update --mode sys --name State -v offline (code=exited, status=0/SUCCESS)
Process: 1797 ExecStartPost=/usr/sbin/tgtadm --op update --mode sys --name State -v ready (code=exited, status=0/SUCCESS)
Process: 1791 ExecStartPost=/usr/sbin/tgt-admin -e -c $TGTD_CONFIG (code=exited, status=0/SUCCESS)
Process: 1790 ExecStartPost=/usr/sbin/tgtadm --op update --mode sys --name State -v offline (code=exited, status=0/SUCCESS)
Process: 1173 ExecStartPost=/bin/sleep 5 (code=exited, status=0/SUCCESS)
Process: 1172 ExecStart=/usr/sbin/tgtd -f $TGTD_OPTS (code=exited, status=0/SUCCESS)
Main PID: 1172 (code=exited, status=0/SUCCESS)

Mar 04 11:14:04 dallas1.localdomain tgtd[1172]: tgtd: work_timer_start(146) use timer_fd based scheduler
Mar 04 11:14:04 dallas1.localdomain tgtd[1172]: tgtd: bs_init_signalfd(271) could not open backing-store module direct…store
Mar 04 11:14:04 dallas1.localdomain tgtd[1172]: tgtd: bs_init(390) use signalfd notification
Mar 04 11:14:09 dallas1.localdomain systemd[1]: Started tgtd iSCSI target daemon.
Mar 04 11:26:01 dallas1.localdomain tgtd[1172]: tgtd: device_mgmt(246) sz:69 params:path=/dev/cinder-volumes/volume-a0…2864d
Mar 04 11:26:01 dallas1.localdomain tgtd[1172]: tgtd: bs_thread_open(412) 16
Mar 04 11:33:32 dallas1.localdomain tgtd[1172]: tgtd: device_mgmt(246) sz:69 params:path=/dev/cinder-volumes/volume-01…f2969
Mar 04 11:33:32 dallas1.localdomain tgtd[1172]: tgtd: bs_thread_open(412) 16
Mar 04 11:46:18 dallas1.localdomain systemd[1]: Stopping tgtd iSCSI target daemon…
Mar 04 11:46:18 dallas1.localdomain systemd[1]: Stopped tgtd iSCSI target daemon.
Hint: Some lines were ellipsized, use -l to show in full.

[root@dallas1 ~(keystone_admin)]$ service tgtd start
Redirecting to /bin/systemctl start  tgtd.service
[root@dallas1 ~(keystone_admin)]$ service tgtd status -l
Redirecting to /bin/systemctl status  -l tgtd.service
tgtd.service – tgtd iSCSI target daemon
Loaded: loaded (/usr/lib/systemd/system/tgtd.service; enabled)
Active: active (running) since Tue 2014-03-04 11:46:40 MSK; 4s ago
Process: 11978 ExecStop=/usr/sbin/tgtadm --op delete --mode system (code=exited, status=0/SUCCESS)
Process: 11974 ExecStop=/usr/sbin/tgt-admin --update ALL -c /dev/null (code=exited, status=0/SUCCESS)
Process: 11972 ExecStop=/usr/sbin/tgtadm --op update --mode sys --name State -v offline (code=exited, status=0/SUCCESS)
Process: 12084 ExecStartPost=/usr/sbin/tgtadm --op update --mode sys --name State -v ready (code=exited, status=0/SUCCESS)
Process: 12078 ExecStartPost=/usr/sbin/tgt-admin -e -c $TGTD_CONFIG (code=exited, status=0/SUCCESS)
Process: 12076 ExecStartPost=/usr/sbin/tgtadm --op update --mode sys --name State -v offline (code=exited, status=0/SUCCESS)
Process: 12052 ExecStartPost=/bin/sleep 5 (code=exited, status=0/SUCCESS)
Main PID: 12051 (tgtd)
CGroup: /system.slice/tgtd.service
└─12051 /usr/sbin/tgtd -f

Mar 04 11:46:35 dallas1.localdomain systemd[1]: Starting tgtd iSCSI target daemon…
Mar 04 11:46:35 dallas1.localdomain tgtd[12051]: librdmacm: Warning: couldn't read ABI version.
Mar 04 11:46:35 dallas1.localdomain tgtd[12051]: librdmacm: Warning: assuming: 4
Mar 04 11:46:35 dallas1.localdomain tgtd[12051]: librdmacm: Fatal: unable to get RDMA device list
Mar 04 11:46:35 dallas1.localdomain tgtd[12051]: tgtd: iser_ib_init(3351) Failed to initialize RDMA; load kernel modules?
Mar 04 11:46:35 dallas1.localdomain tgtd[12051]: tgtd: work_timer_start(146) use timer_fd based scheduler
Mar 04 11:46:35 dallas1.localdomain tgtd[12051]: tgtd: bs_init_signalfd(271) could not open backing-store module directory /usr/lib64/tgt/backing-store
Mar 04 11:46:35 dallas1.localdomain tgtd[12051]: tgtd: bs_init(390) use signalfd notification
Mar 04 11:46:40 dallas1.localdomain systemd[1]: Started tgtd iSCSI target daemon.
[root@dallas1 ~(keystone_admin)]$ for i in api scheduler volume ; do service openstack-cinder-${i} restart ;done
Redirecting to /bin/systemctl restart  openstack-cinder-api.service
Redirecting to /bin/systemctl restart  openstack-cinder-scheduler.service
Redirecting to /bin/systemctl restart  openstack-cinder-volume.service
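Since `service` just redirects to systemctl on Fedora 20, the whole workaround condenses to the following snippet (equivalent to the stop/start/restart sequence above):

systemctl restart tgtd.service
for i in api scheduler volume ; do systemctl restart openstack-cinder-${i}.service ; done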
[root@dallas1 ~(keystone_Boris)]$ glance image-list

+————————————–+———————+————-+——————+———–+——–+
| ID                                   | Name                | Disk Format | Container Format | Size      | Status |
+————————————–+———————+————-+——————+———–+——–+
| 592faef8-308a-4438-867a-17adf685cde4 | CirrOS 31           | qcow2       | bare             | 13147648  | active |
| d0e90250-5814-4685-9b8d-65ec9daa7117 | Fedora 20 x86_64    | qcow2       | bare             | 214106112 | active |
| 3e6eea8e-32e6-4373-9eb1-e04b8a3167f9 | Ubuntu Server 13.10 | qcow2       | bare             | 244777472 | active |
+————————————–+———————+————-+——————+———–+——–+

Create a thin LVM-based instance via Nova, with login "fedora"/"mysecret", in one command
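The ./myfile.txt passed via --user-data is never shown in this post; here is a plausible minimal cloud-config sketch for the "fedora"/"mysecret" login (its contents are an assumption):

cat > ./myfile.txt <<'EOF'
#cloud-config
# assumed contents: set password "mysecret" for the image's default user
# ("fedora" on Fedora cloud images) and allow password logins over SSH
password: mysecret
chpasswd: { expire: False }
ssh_pwauth: True
EOF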

[root@dallas1 ~(keystone_boris)]$ nova boot --flavor 2 --user-data=./myfile.txt --block-device source=image,id=d0e90250-5814-4685-9b8d-65ec9daa7117,dest=volume,size=5,shutdown=preserve,bootindex=0 VF20RS01

+————————————–+————————————————-+
| Property                             | Value                                           |
+————————————–+————————————————-+
| status                               | BUILD                                           |
| updated                              | 2014-03-07T05:50:18Z                            |
| OS-EXT-STS:task_state                | scheduling                                      |
| key_name                             | None                                            |
| image                                | Attempt to boot from volume – no image supplied |
| hostId                               |                                                 |
| OS-EXT-STS:vm_state                  | building                                        |
| OS-SRV-USG:launched_at               | None                                            |
| flavor                               | m1.small                                        |
| id                                   | 770e33f7-7aab-49f1-95ca-3cf343f744ef            |
| security_groups                      | [{u'name': u'default'}]                         |
| OS-SRV-USG:terminated_at             | None                                            |
| user_id                              | df4a984ce2f24848a6b84aaa99e296f1                |
| name                                 | VF20RS01                                        |
| adminPass                            | CqjGVUm9bbs9                                    |
| tenant_id                            | e896be65e94a4893b870bc29ba86d7eb                |
| created                              | 2014-03-07T05:50:18Z                            |
| OS-DCF:diskConfig                    | MANUAL                                          |
| metadata                             | {}                                              |
| os-extended-volumes:volumes_attached | []                                              |
| accessIPv4                           |                                                 |
| accessIPv6                           |                                                 |
| progress                             | 0                                               |
| OS-EXT-STS:power_state               | 0                                               |
| OS-EXT-AZ:availability_zone          | nova                                            |
| config_drive                         |                                                 |
+————————————–+————————————————-+

[root@dallas1 ~(keystone_boris)]$ nova list

+————————————–+———-+——–+———————-+————-+———-+
| ID                                   | Name     | Status | Task State           | Power State | Networks |
+————————————–+———-+——–+———————-+————-+———-+
| 770e33f7-7aab-49f1-95ca-3cf343f744ef | VF20RS01 | BUILD  | block_device_mapping | NOSTATE     |          |
+————————————–+———-+——–+———————-+————-+———-+

[root@dallas1 ~(keystone_boris)]$ nova list

+————————————–+———-+——–+————+————-+————–+
| ID                                   | Name     | Status | Task State | Power State | Networks     |
+————————————–+———-+——–+————+————-+————–+
| 770e33f7-7aab-49f1-95ca-3cf343f744ef | VF20RS01 | ACTIVE | None       | Running     | int=10.0.0.2 |
+————————————–+———-+——–+————+————-+————–+

[root@dallas1 ~(keystone_boris)]$ neutron floatingip-create ext

Created a new floatingip:

+———————+————————————–+
| Field               | Value                                |
+———————+————————————–+
| fixed_ip_address    |                                      |
| floating_ip_address | 192.168.1.101                        |
| floating_network_id | 0ed406bf-3552-4036-9006-440f3e69618e |
| id                  | f7d9cd3f-e544-4f23-821d-0307ed4eb852 |
| port_id             |                                      |
| router_id           |                                      |
| tenant_id           | e896be65e94a4893b870bc29ba86d7eb     |
+———————+————————————–+

[root@dallas1 ~(keystone_boris)]$ neutron port-list --device-id 770e33f7-7aab-49f1-95ca-3cf343f744ef

+————————————–+——+——————-+———————————————————————————+
| id                                   | name | mac_address       | fixed_ips                                                                       |
+————————————–+——+——————-+———————————————————————————+
| 8b5f142e-ce99-40e0-bbbe-620b201c0323 |      | fa:16:3e:0d:c4:e6 | {"subnet_id": "2e838119-3e2e-46e8-b7cc-6d00975046f2", "ip_address": "10.0.0.2"} |
+————————————–+——+——————-+———————————————————————————+

[root@dallas1 ~(keystone_boris)]$ neutron floatingip-associate f7d9cd3f-e544-4f23-821d-0307ed4eb852 8b5f142e-ce99-40e0-bbbe-620b201c0323
Associated floatingip f7d9cd3f-e544-4f23-821d-0307ed4eb852

[root@dallas1 ~(keystone_boris)]$ ping 192.168.1.101

PING 192.168.1.101 (192.168.1.101) 56(84) bytes of data.
64 bytes from 192.168.1.101: icmp_seq=1 ttl=63 time=7.75 ms
64 bytes from 192.168.1.101: icmp_seq=2 ttl=63 time=1.06 ms
64 bytes from 192.168.1.101: icmp_seq=3 ttl=63 time=1.27 ms
64 bytes from 192.168.1.101: icmp_seq=4 ttl=63 time=1.43 ms
64 bytes from 192.168.1.101: icmp_seq=5 ttl=63 time=1.80 ms
64 bytes from 192.168.1.101: icmp_seq=6 ttl=63 time=0.916 ms
64 bytes from 192.168.1.101: icmp_seq=7 ttl=63 time=0.919 ms
64 bytes from 192.168.1.101: icmp_seq=8 ttl=63 time=0.930 ms
64 bytes from 192.168.1.101: icmp_seq=9 ttl=63 time=0.977 ms
64 bytes from 192.168.1.101: icmp_seq=10 ttl=63 time=0.690 ms
^C

--- 192.168.1.101 ping statistics ---

10 packets transmitted, 10 received, 0% packet loss, time 9008ms

rtt min/avg/max/mdev = 0.690/1.776/7.753/2.015 ms

[root@dallas1 ~(keystone_boris)]$ glance image-list

+————————————–+———————+————-+——————+———–+——–+
| ID                                   | Name                | Disk Format | Container Format | Size      | Status |
+————————————–+———————+————-+——————+———–+——–+
| 592faef8-308a-4438-867a-17adf685cde4 | CirrOS 31           | qcow2       | bare             | 13147648  | active |
| d0e90250-5814-4685-9b8d-65ec9daa7117 | Fedora 20 x86_64    | qcow2       | bare             | 214106112 | active |
| 3e6eea8e-32e6-4373-9eb1-e04b8a3167f9 | Ubuntu Server 13.10 | qcow2       | bare             | 244777472 | active |
+————————————–+———————+————-+——————+———–+——–+

[root@dallas1 ~(keystone_boris)]$ nova boot --flavor 2 --user-data=./myfile.txt --block-device source=image,id=3e6eea8e-32e6-4373-9eb1-e04b8a3167f9,dest=volume,size=5,shutdown=preserve,bootindex=0 UbuntuRS01

+————————————–+————————————————-+
| Property                             | Value                                           |
+————————————–+————————————————-+
| status                               | BUILD                                           |
| updated                              | 2014-03-07T05:53:44Z                            |
| OS-EXT-STS:task_state                | scheduling                                      |
| key_name                             | None                                            |
| image                                | Attempt to boot from volume – no image supplied |
| hostId                               |                                                 |
| OS-EXT-STS:vm_state                  | building                                        |
| OS-SRV-USG:launched_at               | None                                            |
| flavor                               | m1.small                                        |
| id                                   | bfcb2120-942f-4d3f-a173-93f6076a4be8            |
| security_groups                      | [{u'name': u'default'}]                         |
| OS-SRV-USG:terminated_at             | None                                            |
| user_id                              | df4a984ce2f24848a6b84aaa99e296f1                |
| name                                 | UbuntuRS01                                      |
| adminPass                            | bXND2XTsvuA4                                    |
| tenant_id                            | e896be65e94a4893b870bc29ba86d7eb                |
| created                              | 2014-03-07T05:53:44Z                            |
| OS-DCF:diskConfig                    | MANUAL                                          |
| metadata                             | {}                                              |
| os-extended-volumes:volumes_attached | []                                              |
| accessIPv4                           |                                                 |
| accessIPv6                           |                                                 |
| progress                             | 0                                               |
| OS-EXT-STS:power_state               | 0                                               |
| OS-EXT-AZ:availability_zone          | nova                                            |
| config_drive                         |                                                 |
+————————————–+————————————————-+

[root@dallas1 ~(keystone_boris)]$ nova list

+————————————–+————+——–+———————-+————-+—————————–+
| ID                                   | Name       | Status | Task State           | Power State | Networks                    |
+————————————–+————+——–+———————-+————-+—————————–+
| bfcb2120-942f-4d3f-a173-93f6076a4be8 | UbuntuRS01 | BUILD  | block_device_mapping | NOSTATE     |                             |
| 770e33f7-7aab-49f1-95ca-3cf343f744ef | VF20RS01   | ACTIVE | None                 | Running     | int=10.0.0.2, 192.168.1.101 |
+————————————–+————+——–+———————-+————-+—————————–+

[root@dallas1 ~(keystone_boris)]$ nova list

+————————————–+————+——–+————+————-+—————————–+
| ID                                   | Name       | Status | Task State | Power State | Networks                    |
+————————————–+————+——–+————+————-+—————————–+
| bfcb2120-942f-4d3f-a173-93f6076a4be8 | UbuntuRS01 | ACTIVE | None       | Running     | int=10.0.0.4                |
| 770e33f7-7aab-49f1-95ca-3cf343f744ef | VF20RS01   | ACTIVE | None       | Running     | int=10.0.0.2, 192.168.1.101 |
+————————————–+————+——–+————+————-+—————————–+

[root@dallas1 ~(keystone_boris)]$ neutron floatingip-create ext

Created a new floatingip:

+———————+————————————–+
| Field               | Value                                |
+———————+————————————–+
| fixed_ip_address    |                                      |
| floating_ip_address | 192.168.1.102                        |
| floating_network_id | 0ed406bf-3552-4036-9006-440f3e69618e |
| id                  | b3d3f262-5142-4a99-9b8d-431c231cb1d7 |
| port_id             |                                      |
| router_id           |                                      |
| tenant_id           | e896be65e94a4893b870bc29ba86d7eb     |
+———————+————————————–+

[root@dallas1 ~(keystone_boris)]$ neutron port-list --device-id bfcb2120-942f-4d3f-a173-93f6076a4be8

+————————————–+——+——————-+———————————————————————————+
| id                                   | name | mac_address       | fixed_ips                                                                       |
+————————————–+——+——————-+———————————————————————————+
| c81ca027-8f9b-49c3-af10-adc60f5d4d12 |      | fa:16:3e:ac:86:50 | {"subnet_id": "2e838119-3e2e-46e8-b7cc-6d00975046f2", "ip_address": "10.0.0.4"} |
+————————————–+——+——————-+———————————————————————————+

[root@dallas1 ~(keystone_boris)]$ neutron floatingip-associate b3d3f262-5142-4a99-9b8d-431c231cb1d7 c81ca027-8f9b-49c3-af10-adc60f5d4d12

Associated floatingip b3d3f262-5142-4a99-9b8d-431c231cb1d7

[root@dallas1 ~(keystone_boris)]$ ping 192.168.1.102

PING 192.168.1.102 (192.168.1.102) 56(84) bytes of data.
64 bytes from 192.168.1.102: icmp_seq=1 ttl=63 time=3.84 ms
64 bytes from 192.168.1.102: icmp_seq=2 ttl=63 time=3.06 ms
64 bytes from 192.168.1.102: icmp_seq=3 ttl=63 time=6.58 ms
64 bytes from 192.168.1.102: icmp_seq=4 ttl=63 time=7.98 ms
64 bytes from 192.168.1.102: icmp_seq=5 ttl=63 time=2.09 ms
64 bytes from 192.168.1.102: icmp_seq=6 ttl=63 time=1.06 ms
64 bytes from 192.168.1.102: icmp_seq=7 ttl=63 time=3.55 ms
64 bytes from 192.168.1.102: icmp_seq=8 ttl=63 time=2.01 ms
64 bytes from 192.168.1.102: icmp_seq=9 ttl=63 time=1.05 ms
64 bytes from 192.168.1.102: icmp_seq=10 ttl=63 time=3.45 ms
64 bytes from 192.168.1.102: icmp_seq=11 ttl=63 time=2.31 ms
64 bytes from 192.168.1.102: icmp_seq=12 ttl=63 time=0.977 ms
^C

--- 192.168.1.102 ping statistics ---

12 packets transmitted, 12 received, 0% packet loss, time 11014ms

rtt min/avg/max/mdev = 0.977/3.168/7.985/2.091 ms

[root@dallas1 ~(keystone_boris)]$ nova boot --flavor 2 --user-data=./myfile.txt --block-device source=image,id=d0e90250-5814-4685-9b8d-65ec9daa7117,dest=volume,size=5,shutdown=preserve,bootindex=0 VF20GLX

+————————————–+————————————————-+
| Property                             | Value                                           |
+————————————–+————————————————-+
| status                               | BUILD                                           |
| updated                              | 2014-03-07T05:58:40Z                            |
| OS-EXT-STS:task_state                | scheduling                                      |
| key_name                             | None                                            |
| image                                | Attempt to boot from volume – no image supplied |
| hostId                               |                                                 |
| OS-EXT-STS:vm_state                  | building                                        |
| OS-SRV-USG:launched_at               | None                                            |
| flavor                               | m1.small                                        |
| id                                   | 62ff1641-2c96-470f-9147-9272d68d2e5c            |
| security_groups                      | [{u'name': u'default'}]                         |
| OS-SRV-USG:terminated_at             | None                                            |
| user_id                              | df4a984ce2f24848a6b84aaa99e296f1                |
| name                                 | VF20GLX                                         |
| adminPass                            | E9KXeLp8fWig                                    |
| tenant_id                            | e896be65e94a4893b870bc29ba86d7eb                |
| created                              | 2014-03-07T05:58:40Z                            |
| OS-DCF:diskConfig                    | MANUAL                                          |
| metadata                             | {}                                              |
| os-extended-volumes:volumes_attached | []                                              |
| accessIPv4                           |                                                 |
| accessIPv6                           |                                                 |
| progress                             | 0                                               |
| OS-EXT-STS:power_state               | 0                                               |
| OS-EXT-AZ:availability_zone          | nova                                            |
| config_drive                         |                                                 |
+————————————–+————————————————-+

[root@dallas1 ~(keystone_boris)]$ nova list

+————————————–+————+——–+———————-+————-+—————————–+
| ID                                   | Name       | Status | Task State           | Power State | Networks                    |
+————————————–+————+——–+———————-+————-+—————————–+
| bfcb2120-942f-4d3f-a173-93f6076a4be8 | UbuntuRS01 | ACTIVE | None                 | Running     | int=10.0.0.4, 192.168.1.102 |
| 62ff1641-2c96-470f-9147-9272d68d2e5c | VF20GLX    | BUILD  | block_device_mapping | NOSTATE     |                             |
| 770e33f7-7aab-49f1-95ca-3cf343f744ef | VF20RS01   | ACTIVE | None                 | Running     | int=10.0.0.2, 192.168.1.101 |
+————————————–+————+——–+———————-+————-+—————————–+

[root@dallas1 ~(keystone_boris)]$ nova list

+————————————–+————+——–+———————-+————-+—————————–+
| ID                                   | Name       | Status | Task State           | Power State | Networks                    |
+————————————–+————+——–+———————-+————-+—————————–+
| bfcb2120-942f-4d3f-a173-93f6076a4be8 | UbuntuRS01 | ACTIVE | None                 | Running     | int=10.0.0.4, 192.168.1.102 |
| 62ff1641-2c96-470f-9147-9272d68d2e5c | VF20GLX    | BUILD  | block_device_mapping | NOSTATE     |                             |
| 770e33f7-7aab-49f1-95ca-3cf343f744ef | VF20RS01   | ACTIVE | None                 | Running     | int=10.0.0.2, 192.168.1.101 |
+————————————–+————+——–+———————-+————-+—————————–+


[root@dallas1 ~(keystone_boris)]$ nova list

+————————————–+————+——–+————+————-+—————————–+
| ID                                   | Name       | Status | Task State | Power State | Networks                    |
+————————————–+————+——–+————+————-+—————————–+
| bfcb2120-942f-4d3f-a173-93f6076a4be8 | UbuntuRS01 | ACTIVE | None       | Running     | int=10.0.0.4, 192.168.1.102 |
| 62ff1641-2c96-470f-9147-9272d68d2e5c | VF20GLX    | ACTIVE | None       | Running     | int=10.0.0.5                |
| 770e33f7-7aab-49f1-95ca-3cf343f744ef | VF20RS01   | ACTIVE | None       | Running     | int=10.0.0.2, 192.168.1.101 |
+————————————–+————+——–+————+————-+—————————–+

[root@dallas1 ~(keystone_boris)]$ neutron floatingip-create ext

Created a new floatingip:

+———————+————————————–+
| Field               | Value                                |
+———————+————————————–+
| fixed_ip_address    |                                      |
| floating_ip_address | 192.168.1.103                        |
| floating_network_id | 0ed406bf-3552-4036-9006-440f3e69618e |
| id                  | 3fb87bb2-f485-4f1c-b2b7-7c5d90588d27 |
| port_id             |                                      |
| router_id           |                                      |
| tenant_id           | e896be65e94a4893b870bc29ba86d7eb     |
+———————+————————————–+

[root@dallas1 ~(keystone_boris)]$ neutron port-list --device-id 62ff1641-2c96-470f-9147-9272d68d2e5c

+————————————–+——+——————-+———————————————————————————+
| id                                   | name | mac_address       | fixed_ips                                                                       |
+————————————–+——+——————-+———————————————————————————+
| 0845ad30-4d2c-487d-8847-2b6e3e8b9b9d |      | fa:16:3e:2c:84:62 | {"subnet_id": "2e838119-3e2e-46e8-b7cc-6d00975046f2", "ip_address": "10.0.0.5"} |
+————————————–+——+——————-+———————————————————————————+

[root@dallas1 ~(keystone_boris)]$ neutron floatingip-associate 3fb87bb2-f485-4f1c-b2b7-7c5d90588d27 0845ad30-4d2c-487d-8847-2b6e3e8b9b9d

Associated floatingip 3fb87bb2-f485-4f1c-b2b7-7c5d90588d27

[root@dallas1 ~(keystone_boris)]$ ping 192.168.1.103

PING 192.168.1.103 (192.168.1.103) 56(84) bytes of data.
64 bytes from 192.168.1.103: icmp_seq=1 ttl=63 time=4.08 ms
64 bytes from 192.168.1.103: icmp_seq=2 ttl=63 time=1.59 ms
64 bytes from 192.168.1.103: icmp_seq=3 ttl=63 time=1.22 ms
64 bytes from 192.168.1.103: icmp_seq=4 ttl=63 time=1.49 ms
64 bytes from 192.168.1.103: icmp_seq=5 ttl=63 time=1.11 ms
64 bytes from 192.168.1.103: icmp_seq=6 ttl=63 time=0.980 ms
64 bytes from 192.168.1.103: icmp_seq=7 ttl=63 time=6.71 ms
^C

--- 192.168.1.103 ping statistics ---

7 packets transmitted, 7 received, 0% packet loss, time 6007ms

rtt min/avg/max/mdev = 0.980/2.458/6.711/1.996 ms

[root@dallas1 ~(keystone_boris)]$ nova list
+————————————–+————+——–+————+————-+—————————–+
| ID                                   | Name       | Status | Task State | Power State | Networks                    |
+————————————–+————+——–+————+————-+—————————–+
| bfcb2120-942f-4d3f-a173-93f6076a4be8 | UbuntuRS01 | ACTIVE | None       | Running     | int=10.0.0.4, 192.168.1.102 |
| 62ff1641-2c96-470f-9147-9272d68d2e5c | VF20GLX    | ACTIVE | None       | Running     | int=10.0.0.5, 192.168.1.103 |
| 770e33f7-7aab-49f1-95ca-3cf343f744ef | VF20RS01   | ACTIVE | None       | Running     | int=10.0.0.2, 192.168.1.101 |
+————————————–+————+——–+————+————-+—————————–+

[root@dallas1 ~(keystone_admin)]$  vgdisplay
….

— Volume group —
VG Name               cinder-volumes
System ID
Format                lvm2
Metadata Areas        1
Metadata Sequence No  66
VG Access             read/write
VG Status             resizable
MAX LV                0
Cur LV                3
Open LV               3
Max PV                0
Cur PV                1
Act PV                1
VG Size               20.00 GiB
PE Size               4.00 MiB
Total PE              5119
Alloc PE / Size       3840 / 15.00 GiB
Free  PE / Size       1279 / 5.00 GiB
VG UUID               M11ikP-i6sd-ftwG-3XIH-F9wt-cSHe-m9kCtU


….

Three volumes have been created, each one 5 GB:

[root@dallas1 ~(keystone_admin)]$ losetup -a

/dev/loop0: [64768]:14 (/cinder-volumes)
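losetup shows the cinder-volumes VG sits on the loopback file /cinder-volumes; here is a sketch of how such a 20 GiB loop-backed VG is typically created, matching the vgdisplay output above (the exact commands used on this box are an assumption):

dd if=/dev/zero of=/cinder-volumes bs=1 count=0 seek=20G   # sparse 20 GiB backing file
losetup /dev/loop0 /cinder-volumes
pvcreate /dev/loop0
vgcreate cinder-volumes /dev/loop0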

The same messages appear in the log, but now it works :-

2014-03-03 23:50:19.851 6729 WARNING nova.virt.libvirt.driver [req-98443a14-3c3f-49f5-bf21-c183531a1778 df4a984ce2f24848a6b84aaa99e296f1 e896be65e94a4893b870bc29ba86d7eb] [instance: baffc298-3b45-4e01-8891-1e6510e3dc0e] File injection into a boot from volume instance is not supported

2014-03-03 23:50:21.439 6729 WARNING nova.virt.libvirt.volume [req-98443a14-3c3f-49f5-bf21-c183531a1778 df4a984ce2f24848a6b84aaa99e296f1 e896be65e94a4893b870bc29ba86d7eb] ISCSI volume not yet found at: vda. Will rescan & retry.  Try number: 0

2014-03-03 23:50:21.518 6729 WARNING nova.virt.libvirt.vif [req-98443a14-3c3f-49f5-bf21-c183531a1778 df4a984ce2f24848a6b84aaa99e296f1 e896be65e94a4893b870bc29ba86d7eb] Deprecated: The LibvirtHybridOVSBridgeDriver VIF driver is now deprecated and will be removed in the next release. Please use the LibvirtGenericVIFDriver VIF driver, together with a network plugin that reports the 'vif_type' attribute

2014-03-03 23:52:12.020 6729 WARNING nova.virt.libvirt.driver [req-1ea0e44e-b651-4f79-9d83-1ba872534440 df4a984ce2f24848a6b84aaa99e296f1 e896be65e94a4893b870bc29ba86d7eb] [instance: a64a7a24-ff8a-4d01-aa59-80393a4213df] File injection into a boot from volume instance is not supported

2014-03-03 23:52:13.629 6729 WARNING nova.virt.libvirt.volume [req-1ea0e44e-b651-4f79-9d83-1ba872534440 df4a984ce2f24848a6b84aaa99e296f1 e896be65e94a4893b870bc29ba86d7eb] ISCSI volume not yet found at: vda. Will rescan & retry.  Try number: 0

2014-03-03 23:52:13.709 6729 WARNING nova.virt.libvirt.vif [req-1ea0e44e-b651-4f79-9d83-1ba872534440 df4a984ce2f24848a6b84aaa99e296f1 e896be65e94a4893b870bc29ba86d7eb] Deprecated: The LibvirtHybridOVSBridgeDriver VIF driver is now deprecated and will be removed in the next release. Please use the LibvirtGenericVIFDriver VIF driver, together with a network plugin that reports the 'vif_type' attribute

2014-03-03 23:56:11.127 6729 WARNING nova.compute.manager [-] Found 4 in the database and 1 on the hypervisor.


USB Redirection hack on “Two Node Controller&Compute Neutron GRE+OVS” Fedora 20 Cluster

February 28, 2014
 
I clearly understand that only an incomplete Havana RDO setup allows me to activate spice USB redirection when communicating with cloud instances. There is no dashboard (administrative web console) on this cluster. All information regarding nova instance status and neutron subnets, routers and ports has to be obtained via the CLI, and managing instances, subnets, routers, ports and rules is also done via the CLI, carefully sourcing the "keystonerc_user" file to work in the environment of a particular user of a particular tenant. I also have to mention that to create a new instance I must have no more than four entries in `nova list`; then I can reliably create one more. This has been tested on two "Two Node Neutron GRE+OVS" systems and is related to `nova quota-show` for the tenant (10 instances is the default). Having 3 VMs on Compute, I brought up openstack-nova-compute on the Controller and was able to create 2 more VMs on Compute and 5 VMs on the Controller. View https://ask.openstack.org/en/question/11746/openstack-nova-scheduler-service-cannot-any-longer-connect-to-amqp-server-performing-nova-boot-on-fedora-20/
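For reference, a keystonerc_boris of the usual RDO form looks like this (tenant name, password and endpoint are placeholders, not values from this cluster):

export OS_USERNAME=boris
export OS_TENANT_NAME=<tenant>
export OS_PASSWORD=<password>
export OS_AUTH_URL=http://192.168.1.127:5000/v2.0/
export PS1='[\u@\h \W(keystone_boris)]\$ '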
Manual Setup (view [2] http://bderzhavets.blogspot.com/2014/01/setting-up-two-physical-node-openstack.html )
- Controller node: Nova, Keystone, Cinder, Glance, Neutron (using Open vSwitch plugin and GRE tunneling)
- Compute node: Nova (nova-compute), Neutron (openvswitch-agent)

dfw02.localdomain   -  Controller (192.168.1.127)

dfw01.localdomain   -  Compute   (192.168.1.137)

[root@dfw02 ~(keystone_admin)]$ openstack-status

== Nova services ==

openstack-nova-api:                     active

openstack-nova-cert:                    inactive  (disabled on boot)
openstack-nova-compute:                 inactive  (disabled on boot)
openstack-nova-network:                 inactive  (disabled on boot)
openstack-nova-scheduler:               active
openstack-nova-volume:                  inactive  (disabled on boot)
openstack-nova-conductor:               active
== Glance services ==
openstack-glance-api:                   active
openstack-glance-registry:              active
== Keystone service ==
openstack-keystone:                     active
== neutron services ==
neutron-server:                         active
neutron-dhcp-agent:                     active
neutron-l3-agent:                       active
neutron-metadata-agent:                 active
neutron-lbaas-agent:                    inactive  (disabled on boot)
neutron-openvswitch-agent:              active
neutron-linuxbridge-agent:              inactive  (disabled on boot)
neutron-ryu-agent:                      inactive  (disabled on boot)
neutron-nec-agent:                      inactive  (disabled on boot)
neutron-mlnx-agent:                     inactive  (disabled on boot)
== Cinder services ==
openstack-cinder-api:                   active
openstack-cinder-scheduler:             active
openstack-cinder-volume:                active
== Ceilometer services ==
openstack-ceilometer-api:               inactive  (disabled on boot)
openstack-ceilometer-central:           inactive  (disabled on boot)
openstack-ceilometer-compute:           active
openstack-ceilometer-collector:         inactive  (disabled on boot)
openstack-ceilometer-alarm-notifier:    inactive  (disabled on boot)
openstack-ceilometer-alarm-evaluator:   inactive  (disabled on boot)
== Support services ==
mysqld:                                 inactive  (disabled on boot)
libvirtd:                               active
openvswitch:                            active
dbus:                                   active
tgtd:                                   active
qpidd:                                  active
== Keystone users ==
+———————————-+———+———+——-+
|                id                |   name  | enabled | email |
+———————————-+———+———+——-+
| 970ed56ef7bc41d59c54f5ed8a1690dc |  admin  |   True  |       |
| 162021e787c54cac906ab3296a386006 |  boris  |   True  |       |
| 1beeaa4b20454048bf23f7d63a065137 |  cinder |   True  |       |
| 006c2728df9146bd82fab04232444abf |  glance |   True  |       |
| 5922aa93016344d5a5d49c0a2dab458c | neutron |   True  |       |
| af2f251586564b46a4f60cdf5ff6cf4f |   nova  |   True  |       |
+———————————-+———+———+——-+

== Glance images ==

+————————————–+———————————+————-+——————+————-+——–+
| ID                                   | Name                            | Disk Format | Container Format | Size        | Status |
+————————————–+———————————+————-+——————+————-+——–+
| a6e8ef59-e492-46e2-8147-fd8b1a65ed73 | CentOS 6.5 image                | qcow2       | bare             | 344457216   | active |
| dc992799-7831-4933-b6ee-7b81868f808b | CirrOS31                        | qcow2       | bare             | 13147648    | active |
| 03c9ad20-b0a3-4b71-aa08-2728ecb66210 | Fedora 19 x86_64                | qcow2       | bare             | 237371392   | active |
| de93ee44-4085-4111-b022-a7437da8feac | Fedora 20 image                 | qcow2       | bare             | 214106112   | active |
| e70591fc-6905-4e57-84b7-4ffa7c001864 | Ubuntu Server 13.10             | qcow2       | bare             | 244514816   | active |
| b7d54434-1cc6-4770-82f3-c8619952575c | Ubuntu Trusty Tar 02/23/14      | qcow2       | bare             | 261029888   | active |
| 07071d00-fb85-4b32-a9b4-d515088700d0 | Windows Server 2012 R2 Std Eval | vhd         | bare             | 17182752768 | active |
+————————————–+———————————+————-+——————+————-+——–+

== Nova managed services ==

+—————-+——————-+———-+———+——-+—————————-+—————–+
| Binary         | Host              | Zone     | Status  | State | Updated_at                 | Disabled Reason |
+—————-+——————-+———-+———+——-+—————————-+—————–+
| nova-scheduler | dfw02.localdomain | internal | enabled | up    | 2014-02-28T06:32:03.000000 | None            |
| nova-conductor | dfw02.localdomain | internal | enabled | up    | 2014-02-28T06:32:03.000000 | None            |
| nova-compute   | dfw01.localdomain | nova     | enabled | up    | 2014-02-28T06:31:59.000000 | None            |
+—————-+——————-+———-+———+——-+—————————-+—————–+

== Nova networks ==

+————————————–+——-+——+
| ID                                   | Label | Cidr |
+————————————–+——-+——+
| 1eea88bb-4952-4aa4-9148-18b61c22d5b7 | int   | None |
| 426bb226-0ab9-440d-ba14-05634a17fb2b | int1  | None |
| 780ce2f3-2e6e-4881-bbac-857813f9a8e0 | ext   | None |
+————————————–+——-+——+

== Nova instance flavors ==

+—-+———–+———–+——+———–+——+——-+————-+———–+
| ID | Name      | Memory_MB | Disk | Ephemeral | Swap | VCPUs | RXTX_Factor | Is_Public |
+—-+———–+———–+——+———–+——+——-+————-+———–+
| 1  | m1.tiny   | 512       | 1    | 0         |      | 1     | 1.0         | True      |
| 2  | m1.small  | 2048      | 20   | 0         |      | 1     | 1.0         | True      |
| 3  | m1.medium | 4096      | 40   | 0         |      | 2     | 1.0         | True      |
| 4  | m1.large  | 8192      | 80   | 0         |      | 4     | 1.0         | True      |
| 5  | m1.xlarge | 16384     | 160  | 0         |      | 8     | 1.0         | True      |
+—-+———–+———–+——+———–+——+——-+————-+———–+

== Nova instances ==

+—-+——+——–+————+————-+———-+
| ID | Name | Status | Task State | Power State | Networks |
+—-+——+——–+————+————-+———-+

+—-+——+——–+————+————-+———-+
[root@dfw02 ~(keystone_boris)]$ nova list
+————————————–+———–+———–+————+————-+——————————+
| ID                                   | Name      | Status    | Task State | Power State | Networks                     |
+————————————–+———–+———–+————+————-+——————————+
| 5fcd83c3-1d4e-4b11-bfe5-061a03b73174 | UbuntuRSX | SUSPENDED | None       | Shutdown    | int1=40.0.0.5, 192.168.1.120 |
| 7953950c-112c-4c59-b183-5cbd06eabcf6 | VF19WXL   | SUSPENDED | None       | Shutdown    | int1=40.0.0.6, 192.168.1.121 |
| 784e8afc-d41a-4c2e-902a-8e109a40f7db | VF20GLS   | SUSPENDED | None       | Shutdown    | int1=40.0.0.4, 192.168.1.102 |
| 9b156b85-a6a1-4f15-bffa-6fdb124f8cff | VF20WXL   | SUSPENDED | None       | Shutdown    | int1=40.0.0.2, 192.168.1.101 |
+————————————–+———–+———–+————+————-+——————————+
[root@dfw02 ~(keystone_admin)]$ nova-manage service list

Binary           Host                                 Zone             Status     State Updated_At
nova-scheduler   dfw02.localdomain                    internal         enabled    :-)   2014-02-28 11:47:25
nova-conductor   dfw02.localdomain                    internal         enabled    :-)   2014-02-28 11:47:25
nova-compute     dfw01.localdomain                    nova             enabled    :-)   2014-02-28 11:47:19

[root@dfw02 ~(keystone_admin)]$ neutron agent-list

+————————————–+——————–+——————-+——-+—————-+
| id                                   | agent_type         | host              | alive | admin_state_up |
+————————————–+——————–+——————-+——-+—————-+
| 037b985d-7a7d-455b-8536-76eed40b0722 | L3 agent           | dfw02.localdomain | :-)   | True           |
| 22438ee9-b4ea-4316-9492-eb295288f61a | Open vSwitch agent | dfw02.localdomain | :-)   | True           |
| 76ed02e2-978f-40d0-879e-1a2c6d1f7915 | DHCP agent         | dfw02.localdomain | :-)   | True           |
| 951632a3-9744-4ff4-a835-c9f53957c617 | Open vSwitch agent | dfw01.localdomain | :-)   | True           |
+————————————–+——————–+——————-+——-+—————-+

Create an F20 instance per http://bderzhavets.blogspot.com/2014/01/setting-up-two-physical-node-openstack.html

and run on the newly built instance :-

# yum -y update
# yum -y install spice-vdagent
# reboot

Connect via virt-manager and switch to Properties tab :-

  

1. Switch to Spice Server
2. Switch to Video QXL
3. Add Hardware "Spice agent (spicevmc)"
4. Add Hardware "USB Redirection" (Spice channel)
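For reference, the same devices can also be declared directly in the libvirt domain XML (a sketch; the listen address is an assumption, and instance-000000NN stands for the libvirt name shown by `nova show` / `virsh list`):

# virsh edit instance-000000NN

<graphics type='spice' autoport='yes' listen='127.0.0.1'/>
<video>
  <model type='qxl'/>
</video>
<channel type='spicevmc'>
  <target type='virtio' name='com.redhat.spice.0'/>
</channel>
<redirdev bus='usb' type='spicevmc'/>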
Then :- 

[root@dfw02 ~(keystone_boris)]$  nova reboot VF20GLS 

Plug in USB pen on Controller

[ 6443.772131] usb 1-2.1: USB disconnect, device number 5
[ 6523.996983] usb 1-2.1: new full-speed USB device number 6 using uhci_hcd
[ 6524.278848] usb 1-2.1: New USB device found, idVendor=0951, idProduct=160e
[ 6524.281206] usb 1-2.1: New USB device strings: Mfr=1, Product=2, SerialNumber=3
[ 6524.282055] usb 1-2.1: Product: DataTraveler 2.0
[ 6524.284851] usb 1-2.1: Manufacturer: Kingston
[ 6524.290527] usb 1-2.1: SerialNumber: 000AEB920161SK861E1301F6
[ 6524.369667] usb-storage 1-2.1:1.0: USB Mass Storage device detected
[ 6524.379638] scsi4 : usb-storage 1-2.1:1.0
[ 6525.420794] scsi 4:0:0:0: Direct-Access     Kingston DataTraveler 2.0 1.00 PQ: 0 ANSI: 2
[ 6525.459504] sd 4:0:0:0: Attached scsi generic sg0 type 0
[ 6525.526419] sd 4:0:0:0: [sdb] 7856128 512-byte logical blocks: (4.02 GB/3.74 GiB)
[ 6525.554959] sd 4:0:0:0: [sdb] Write Protect is off
[ 6525.555010] sd 4:0:0:0: [sdb] Mode Sense: 23 00 00 00
[ 6525.571552] sd 4:0:0:0: [sdb] No Caching mode page found
[ 6525.573029] sd 4:0:0:0: [sdb] Assuming drive cache: write through
[ 6525.667624] sd 4:0:0:0: [sdb] No Caching mode page found
[ 6525.669322] sd 4:0:0:0: [sdb] Assuming drive cache: write through
[ 6525.816841]  sdb: sdb1
[ 6525.887493] sd 4:0:0:0: [sdb] No Caching mode page found
[ 6525.889142] sd 4:0:0:0: [sdb] Assuming drive cache: write through
[ 6525.890478] sd 4:0:0:0: [sdb] Attached SCSI removable disk

$ sudo mount /dev/sdb1 /mnt/usbpen

[ 5685.621007] FAT-fs (sdb1): Volume was not properly unmounted. Some data may be corrupt. Please run fsck.

[ 5685.631218] SELinux: initialized (dev sdb1, type vfat), uses genfs_contexts
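Since the kernel warns that the FAT volume was not cleanly unmounted, it may be worth repairing it before writing anything (a sketch, using fsck.vfat from dosfstools against the /dev/sdb1 shown above):

$ sudo umount /mnt/usbpen
$ sudo fsck.vfat -a /dev/sdb1
$ sudo mount /dev/sdb1 /mnt/usbpen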

Set up a light X Window System & Fluxbox on the F20 instance ( [1] ) and make sure it is completely functional and can read and write to the USB pen

Nova status verification

Neutron status verification

On dfw02 (Controller), run the following commands:

ssh-keygen (hit Enter to accept all of the defaults)
ssh-copy-id -i ~/.ssh/id_rsa.pub root@dfw01 (Compute)

Add the following lines to /etc/rc.d/rc.local :-

ssh -L 5900:127.0.0.1:5900 -N -f -l root 192.168.1.137
ssh -L 5901:127.0.0.1:5901 -N -f -l root 192.168.1.137
ssh -L 5902:127.0.0.1:5902 -N -f -l root 192.168.1.137

to make spicy connections to instances running on the Compute node comfortable.
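With the tunnels up, spicy on the Controller reaches a guest's spice display through localhost (a sketch; the port is 5900 plus the display number qemu assigned to the instance):

[boris@dfw02 ~]$ spicy -h 127.0.0.1 -p 5900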

Build fresh spice-gtk packages :-

$ rpm -iv spice-gtk-0.23-1.fc21.src.rpm
$ cd ~/rpmbuild/SPECS
$ sudo yum install intltool gtk2-devel usbredir-devel libusb1-devel libgudev1-devel pixman-devel openssl-devel  libjpeg-turbo-devel celt051-devel pulseaudio-libs-devel pygtk2-devel python-devel zlib-devel cyrus-sasl-devel libcacard-devel gobject-introspection-devel  dbus-glib-devel libacl-devel polkit-devel gtk-doc vala-tools gtk3-devel spice-protocol opus-devel
$ rpmbuild -bb ./spice-gtk.spec
$ cd ../RPMS/x86_64

Install the rpms just built, because spicy is not yet on the system :-

[boris@dfw02 x86_64]$  sudo yum install spice-glib-0.23-1.fc20.x86_64.rpm \
spice-glib-devel-0.23-1.fc20.x86_64.rpm \
spice-gtk-0.23-1.fc20.x86_64.rpm \
spice-gtk3-0.23-1.fc20.x86_64.rpm \
spice-gtk3-devel-0.23-1.fc20.x86_64.rpm \
spice-gtk3-vala-0.23-1.fc20.x86_64.rpm \
spice-gtk-debuginfo-0.23-1.fc20.x86_64.rpm \
spice-gtk-devel-0.23-1.fc20.x86_64.rpm  \
spice-gtk-python-0.23-1.fc20.x86_64.rpm \
spice-gtk-tools-0.23-1.fc20.x86_64.rpm

Verify new spice-gtk install on F20 :-

[boris@dfw02 x86_64]$ rpm -qa | grep spice-
spice-gtk-tools-0.23-1.fc20.x86_64
spice-server-0.12.4-3.fc20.x86_64
spice-glib-devel-0.23-1.fc20.x86_64
spice-gtk3-vala-0.23-1.fc20.x86_64
spice-gtk3-devel-0.23-1.fc20.x86_64
spice-gtk-python-0.23-1.fc20.x86_64
spice-vdagent-0.15.0-1.fc20.x86_64
spice-gtk-devel-0.23-1.fc20.x86_64
spice-gtk-0.23-1.fc20.x86_64
spice-gtk-debuginfo-0.23-1.fc20.x86_64
spice-glib-0.23-1.fc20.x86_64
spice-gtk3-0.23-1.fc20.x86_64
spice-protocol-0.12.6-2.fc20.noarch

Connecting via spice will give a warning :-

just ignore this message.

References

1. http://bderzhavets.blogspot.com/2014/02/setup-light-weight-x-windows_2.html
2. http://bderzhavets.blogspot.com/2014/01/setting-up-two-physical-node-openstack.html


Ongoing problems with “Two Real Controller&Compute Nodes Neutron GRE + OVS” setup on F20 via native Havana Repos

February 16, 2014

**************************************************************
UPDATE on 02/23/2014  
**************************************************************

To create a new instance I must have no more than 4 entries in `nova list`. Then I can sequentially restart the qpidd and openstack-nova-scheduler services (actually, it's not always necessary) and I will be able to create one new instance for sure. This has been tested on two "Two Node Neutron GRE+OVS+Gluster Backend for Cinder" clusters. It is related to `nova quota-show` for the tenant (10 instances is the default). Having 3 VMs on Compute, I brought up openstack-nova-compute on the Controller and was able to create 2 more VMs on Compute and 5 VMs on the Controller. All testing details are here: http://bderzhavets.blogspot.com/2014/02/next-attempt-to-set-up-two-node-neutron.html

Syntax like :

[root@dallas1 ~(keystone_admin)]$  nova quota-defaults
+—————————–+——-+
| Quota                       | Limit |
+—————————–+——-+
| instances                   | 10    |
| cores                       | 20    |
| ram                         | 51200 |
| floating_ips                | 10    |
| fixed_ips                   | -1    |
| metadata_items              | 128   |
| injected_files              | 5     |
| injected_file_content_bytes | 10240 |
| injected_file_path_bytes    | 255   |
| key_pairs                   | 100   |
| security_groups             | 10    |
| security_group_rules        | 20    |
+—————————–+——-+
[root@dallas1 ~(keystone_admin)]$ nova quota-class-update --instances 20 default

[root@dallas1 ~(keystone_admin)]$  nova quota-defaults
+—————————–+——-+
| Quota                       | Limit |
+—————————–+——-+
| instances                   | 20    |
| cores                       | 20    |
| ram                         | 51200 |
| floating_ips                | 10    |
| fixed_ips                   | -1    |
| metadata_items              | 128   |
| injected_files              | 5     |
| injected_file_content_bytes | 10240 |
| injected_file_path_bytes    | 255   |
| key_pairs                   | 100   |
| security_groups             | 10    |
| security_group_rules        | 20    |
+—————————–+——-+

doesn’t work for me
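A per-tenant quota update may be worth trying instead of the class default (a sketch; <tenant_id> is the id shown by `keystone tenant-list`):

[root@dallas1 ~(keystone_admin)]$ nova quota-update --instances 20 <tenant_id>
[root@dallas1 ~(keystone_admin)]$ nova quota-show --tenant <tenant_id>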

********************************************************************

Current status: F20 requires manual intervention in the MariaDB (MySQL) database regarding the presence of the root & nova passwords at the FQDN of the Controller host. I was also never able to start Neutron Server via the account suggested by Kashyap, only as root:password@Controller(FQDN). The Neutron Openvswitch agent and Neutron L3 agent don't start at the point described in the first manual, only once the Neutron Metadata agent is up and running. Notice also that in the meantime the openstack-nova-conductor & openstack-nova-scheduler services won't start unless the mysql.user table contains the nova account password at the Controller's FQDN. All these updates are reflected in the Reference links attached as text docs.
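A minimal sketch of that database intervention, assuming the Controller's FQDN is dallas1.localdomain and NOVA_DBPASS is the password set in nova.conf:

# mysql -u root -p
MariaDB [(none)]> GRANT ALL PRIVILEGES ON nova.* TO 'nova'@'dallas1.localdomain' IDENTIFIED BY 'NOVA_DBPASS';
MariaDB [(none)]> FLUSH PRIVILEGES;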

The instance number on this snapshot is instance-0000004a (hex). This number increases all the time: 0x4a = 74, so this is the 74th instance created, counting from instance-00000001.
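The hex-to-decimal arithmetic is easy to check in the shell:

$ printf "%d\n" 0x4a
74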

Detailed information about instances above:

[root@dfw02 ~(keystone_admin)]$ nova list

+————————————–+————+———–+————+————-+—————————–+
| ID                                   | Name       | Status    | Task State | Power State | Networks                    |
+————————————–+————+———–+————+————-+—————————–+
| e52f8f4d-5d01-4237-a1ed-79ee53ecc88a | UbuntuSX5  | SUSPENDED | None       | Shutdown    | int=10.0.0.2, 192.168.1.114 |
| 6c094d16-fda7-43fa-8f24-22e02e7a2fc6 | UbuntuVLG1 | ACTIVE    | None       | Running     | int=10.0.0.6, 192.168.1.118 |
| 526b803d-ded5-48d8-857a-f622f6082c18 | VF20GLF    | ACTIVE    | None       | Running     | int=10.0.0.5, 192.168.1.119 |
| c3a4c6d4-8618-4c4f-becb-0c53c2b3ad91 | VF20GLX    | ACTIVE    | None       | Running     | int=10.0.0.7, 192.168.1.117 |
| 4b619f27-30ba-4bd0-bd03-019b3c5d4bf7 | VF20SX4    | SUSPENDED | None       | Shutdown    | int=10.0.0.4, 192.168.1.110 |
+————————————–+————+———–+————+————-+—————————–+

[root@dfw02 ~(keystone_admin)]$ nova show 526b803d-ded5-48d8-857a-f622f6082c18
+--------------------------------------+----------------------------------------------------------+
| Property                             | Value                                                    |
+--------------------------------------+----------------------------------------------------------+
| status                               | ACTIVE                                                   |
| updated                              | 2014-02-17T13:10:14Z                                     |
| OS-EXT-STS:task_state                | None                                                     |
| OS-EXT-SRV-ATTR:host                 | dfw01.localdomain                                        |
| key_name                             | None                                                     |
| image                                | Attempt to boot from volume - no image supplied          |
| int network                          | 10.0.0.5, 192.168.1.119                                  |
| hostId                               | b67c11ccfa8ac8c9ed019934fa650d307f91e7fe7bbf8cd6874f3e01 |
| OS-EXT-STS:vm_state                  | active                                                   |
| OS-EXT-SRV-ATTR:instance_name        | instance-0000004a                                        |
| OS-SRV-USG:launched_at               | 2014-02-17T11:08:13.000000                               |
| OS-EXT-SRV-ATTR:hypervisor_hostname  | dfw01.localdomain                                        |
| flavor                               | m1.small (2)                                             |
| id                                   | 526b803d-ded5-48d8-857a-f622f6082c18                     |
| security_groups                      | [{u'name': u'default'}]                                  |
| OS-SRV-USG:terminated_at             | None                                                     |
| user_id                              | 970ed56ef7bc41d59c54f5ed8a1690dc                         |
| name                                 | VF20GLF                                                  |
| created                              | 2014-02-17T11:08:07Z                                     |
| tenant_id                            | d0a0acfdb62b4cc8a2bfa8d6a08bb62f                         |
| OS-DCF:diskConfig                    | MANUAL                                                   |
| metadata                             | {}                                                       |
| os-extended-volumes:volumes_attached | [{u'id': u'296d02ff-6e2a-424a-bd79-e75ed52875fc'}]       |
| accessIPv4                           |                                                          |
| accessIPv6                           |                                                          |
| progress                             | 0                                                        |
| OS-EXT-STS:power_state               | 1                                                        |
| OS-EXT-AZ:availability_zone          | nova                                                     |
| config_drive                         |                                                          |
+--------------------------------------+----------------------------------------------------------+

Instance numbers form an increasing sequence: old ones get removed, new ones get created.

Top at Compute :-

Top at Controller :-

[root@dfw02 ~(keystone_admin)]$ nova-manage service list

Binary           Host                                 Zone             Status     State Updated_At
nova-scheduler   dfw02.localdomain                    internal         enabled    :-)   2014-02-17 15:20:11
nova-conductor   dfw02.localdomain                    internal         enabled    :-)   2014-02-17 15:20:11
nova-compute     dfw01.localdomain                    nova             enabled    :-)   2014-02-17 15:20:12

Watch also carefully the `ovs-vsctl show` outputs on Controller & Compute for the presence of this block. On the controller:

Port "gre-2"
            Interface "gre-2"
                type: gre
                options: {in_key=flow, local_ip="192.168.1.130", out_key=flow, remote_ip="192.168.1.140"}

and this one on the compute:

Port "gre-1"
            Interface "gre-1"
                type: gre
                options: {in_key=flow, local_ip="192.168.1.140", out_key=flow, remote_ip="192.168.1.130"}

Their presence is important for success; they might disappear from the `ovs-vsctl show` report.
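A quick way to confirm the GRE ports are still there on either node (a sketch):

# ovs-vsctl show | grep -A 3 'Port "gre-'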

Initial starting point of testing. Continue per http://bderzhavets.blogspot.com/2014/01/setting-up-two-physical-node-openstack.html

System is functional.
Controller – dallas1.localdomain 192.168.1.130
Compute  –  dallas2.localdomain 192.168.1.140

[root@dallas1 ~(keystone_admin)]$ date
Sat Feb 15 11:05:12 MSK 2014
[root@dallas1 ~(keystone_admin)]$ openstack-status

== Nova services ==
openstack-nova-api:                     active
openstack-nova-cert:                    inactive  (disabled on boot)
openstack-nova-compute:                 inactive  (disabled on boot)
openstack-nova-network:                 inactive  (disabled on boot)
openstack-nova-scheduler:               active
openstack-nova-volume:                  inactive  (disabled on boot)
openstack-nova-conductor:               active
== Glance services ==
openstack-glance-api:                   active
openstack-glance-registry:              active
== Keystone service ==
openstack-keystone:                     active
== neutron services ==
neutron-server:                         active
neutron-dhcp-agent:                     active
neutron-l3-agent:                       active
neutron-metadata-agent:                 active
neutron-lbaas-agent:                    inactive  (disabled on boot)
neutron-openvswitch-agent:              active
neutron-linuxbridge-agent:              inactive  (disabled on boot)
neutron-ryu-agent:                      inactive  (disabled on boot)
neutron-nec-agent:                      inactive  (disabled on boot)
neutron-mlnx-agent:                     inactive  (disabled on boot)
== Cinder services ==
openstack-cinder-api:                   active
openstack-cinder-scheduler:             active
openstack-cinder-volume:                active
== Support services ==
mysqld:                                 inactive  (disabled on boot)
libvirtd:                               active
openvswitch:                            active
dbus:                                   active
tgtd:                                   active
qpidd:                                  active
== Keystone users ==
+———————————-+———+———+——-+
|                id                |   name  | enabled | email |
+———————————-+———+———+——-+
| 974006673310455e8893e692f1d9350b |  admin  |   True  |       |
| fbba3a8646dc44e28e5200381d77493b |  cinder |   True  |       |
| 0214c6ae6ebc4d6ebeb3e68d825a1188 |  glance |   True  |       |
| abb1fa95b0ec448ea8da3cc99d61d301 | kashyap |   True  |       |
| 329b3ca03a894b319420b3a166d461b5 | neutron |   True  |       |
| 89b3f7d54dd04648b0519f8860bd0f2a |   nova  |   True  |       |
+———————————-+———+———+——-+
== Glance images ==
+————————————–+———————+————-+——————+———–+——–+
| ID                                   | Name                | Disk Format | Container Format | Size      | Status |
+————————————–+———————+————-+——————+———–+——–+
| 2cada436-30a7-425b-9e75-ce56764cdd13 | Cirros31            | qcow2       | bare             | 13147648  | active |
| c0b90f9e-fd47-46da-b98b-1144a41a6c08 | Fedora 20 x86_64    | qcow2       | bare             | 214106112 | active |
| 2dcc95ad-ebef-43f1-ae14-8d28e6f8194b | Ubuntu 13.10 Server | qcow2       | bare             | 244711424 | active |
+————————————–+———————+————-+——————+———–+——–+
== Nova managed services ==
+—————-+———————+———-+———+——-+—————————-+—————–+
| Binary         | Host                | Zone     | Status  | State | Updated_at                 | Disabled Reason |
+—————-+———————+———-+———+——-+—————————-+—————–+
| nova-scheduler | dallas1.localdomain | internal | enabled | up    | 2014-02-15T08:14:54.000000 | None            |
| nova-conductor | dallas1.localdomain | internal | enabled | up    | 2014-02-15T08:14:54.000000 | None            |
| nova-compute   | dallas2.localdomain | nova     | enabled | up    | 2014-02-15T08:14:59.000000 | None            |
+—————-+———————+———-+———+——-+—————————-+—————–+
== Nova networks ==
+————————————–+——-+——+
| ID                                   | Label | Cidr |
+————————————–+——-+——+
| 082249a5-08f4-478f-b176-effad0ef6843 | ext   | None |
| cea0463e-1ef2-46ac-a449-d1c265f5ed7c | int   | None |
+————————————–+——-+——+
== Nova instance flavors ==
+—-+———–+———–+——+———–+——+——-+————-+———–+
| ID | Name      | Memory_MB | Disk | Ephemeral | Swap | VCPUs | RXTX_Factor | Is_Public |
+—-+———–+———–+——+———–+——+——-+————-+———–+
| 1  | m1.tiny   | 512       | 1    | 0         |      | 1     | 1.0         | True      |
| 2  | m1.small  | 2048      | 20   | 0         |      | 1     | 1.0         | True      |
| 3  | m1.medium | 4096      | 40   | 0         |      | 2     | 1.0         | True      |
| 4  | m1.large  | 8192      | 80   | 0         |      | 4     | 1.0         | True      |
| 5  | m1.xlarge | 16384     | 160  | 0         |      | 8     | 1.0         | True      |
+—-+———–+———–+——+———–+——+——-+————-+———–+
== Nova instances ==
+————————————–+———–+——–+————+————-+—————————–+
| ID                                   | Name      | Status | Task State | Power State | Networks                    |
+————————————–+———–+——–+————+————-+—————————–+
| 56794a06-3c2b-45d1-a024-aeb94be598dc | Cirrus311 | ACTIVE | None       | Running     | int=10.0.0.2, 192.168.1.101 |
| 6adf0838-bfcf-4980-a0a4-6a541facf9c9 | UbuntuSRV | ACTIVE | None       | Running     | int=10.0.0.5, 192.168.1.103 |
| c0e2bea9-aba8-48b2-adce-ceeee47a607b | VF20WXL   | ACTIVE | None       | Running     | int=10.0.0.4, 192.168.1.102 |
+————————————–+———–+——–+————+————-+—————————–+

Looks good on both Controller and Compute

[root@dallas1 nova]# ovs-vsctl show
2790327e-fde5-4f35-9c99-b1180353b29e
    Bridge br-int
        Port br-int
            Interface br-int
                type: internal
        Port "qr-f38eb3d5-20"
            tag: 1
            Interface "qr-f38eb3d5-20"
                type: internal
        Port patch-tun
            Interface patch-tun
                type: patch
                options: {peer=patch-int}
        Port "tap5d1add26-f3"
            tag: 1
            Interface "tap5d1add26-f3"
                type: internal
    Bridge br-ex
        Port "p37p1"
            Interface "p37p1"
        Port br-ex
            Interface br-ex
                type: internal
        Port "qg-0dea8587-32"
            Interface "qg-0dea8587-32"
                type: internal
    Bridge br-tun
        Port "gre-2"
            Interface "gre-2"
                type: gre
                options: {in_key=flow, local_ip="192.168.1.130", out_key=flow, remote_ip="192.168.1.140"}
        Port br-tun
            Interface br-tun
                type: internal
        Port patch-int
            Interface patch-int
                type: patch
                options: {peer=patch-tun}
    ovs_version: "2.0.0"

[root@dallas2 ~]# ovs-vsctl show
b2e33386-ca7e-46e2-b97e-6bbf511727ac
    Bridge br-int
        Port br-int
            Interface br-int
                type: internal
        Port "qvo30c356f8-c0"
            tag: 1
            Interface "qvo30c356f8-c0"
        Port "qvoa5c6c346-78"
            tag: 1
            Interface "qvoa5c6c346-78"
        Port "qvo56bfcccb-86"
            tag: 1
            Interface "qvo56bfcccb-86"
        Port "qvo051565c4-dd"
            tag: 1
            Interface "qvo051565c4-dd"
        Port patch-tun
            Interface patch-tun
                type: patch
                options: {peer=patch-int}
    Bridge br-tun
        Port patch-int
            Interface patch-int
                type: patch
                options: {peer=patch-tun}
        Port "gre-1"
            Interface "gre-1"
                type: gre
                options: {in_key=flow, local_ip="192.168.1.140", out_key=flow, remote_ip="192.168.1.130"}
        Port br-tun
            Interface br-tun
                type: internal
    ovs_version: "2.0.0"

[root@dallas1 ~(keystone_admin)]$ nova boot --flavor 2 --user-data=./myfile.txt --image 2dcc95ad-ebef-43f1-ae14-8d28e6f8194b UbuntuSRV

+————————————–+————————————–+
| Property                             | Value                                |
+————————————–+————————————–+
| OS-EXT-STS:task_state                | scheduling                           |
| image                                | Ubuntu 13.10 Server                  |
| OS-EXT-STS:vm_state                  | building                             |
| OS-EXT-SRV-ATTR:instance_name        | instance-00000003                    |
| OS-SRV-USG:launched_at               | None                                 |
| flavor                               | m1.small                             |
| id                                   | 6adf0838-bfcf-4980-a0a4-6a541facf9c9 |
| security_groups                      | [{u'name': u'default'}]              |
| user_id                              | 974006673310455e8893e692f1d9350b     |
| OS-DCF:diskConfig                    | MANUAL                               |
| accessIPv4                           |                                      |
| accessIPv6                           |                                      |
| progress                             | 0                                    |
| OS-EXT-STS:power_state               | 0                                    |
| OS-EXT-AZ:availability_zone          | nova                                 |
| config_drive                         |                                      |
| status                               | BUILD                                |
| updated                              | 2014-02-15T07:24:54Z                 |
| hostId                               |                                      |
| OS-EXT-SRV-ATTR:host                 | None                                 |
| OS-SRV-USG:terminated_at             | None                                 |
| key_name                             | None                                 |
| OS-EXT-SRV-ATTR:hypervisor_hostname  | None                                 |
| name                                 | UbuntuSRV                            |
| adminPass                            | T2ArvfucEGqr                         |
| tenant_id                            | b5c0d0d4d31e4f3785362f2716df0b0f     |
| created                              | 2014-02-15T07:24:54Z                 |
| os-extended-volumes:volumes_attached | []                                   |
| metadata                             | {}                                   |
+————————————–+————————————–+

[root@dallas1 ~(keystone_admin)]$ nova list

+————————————–+———–+——–+————+————-+—————————–+
| ID                                   | Name      | Status | Task State | Power State | Networks                    |
+————————————–+———–+——–+————+————-+—————————–+
| 56794a06-3c2b-45d1-a024-aeb94be598dc | Cirrus311 | ACTIVE | None       | Running     | int=10.0.0.2, 192.168.1.101 |
| 6adf0838-bfcf-4980-a0a4-6a541facf9c9 | UbuntuSRV | BUILD  | spawning   | NOSTATE     | int=10.0.0.5                |
| c0e2bea9-aba8-48b2-adce-ceeee47a607b | VF20WXL   | ACTIVE | None       | Running     | int=10.0.0.4, 192.168.1.102 |
+————————————–+———–+——–+————+————-+—————————–+

[root@dallas1 ~(keystone_admin)]$ date

Sat Feb 15 11:25:36 MSK 2014

[root@dallas1 ~(keystone_admin)]$ nova list

+————————————–+———–+——–+————+————-+—————————–+
| ID                                   | Name      | Status | Task State | Power State | Networks                    |
+————————————–+———–+——–+————+————-+—————————–+
| 56794a06-3c2b-45d1-a024-aeb94be598dc | Cirrus311 | ACTIVE | None       | Running     | int=10.0.0.2, 192.168.1.101 |
| 6adf0838-bfcf-4980-a0a4-6a541facf9c9 | UbuntuSRV | ACTIVE | None       | Running     | int=10.0.0.5                |
| c0e2bea9-aba8-48b2-adce-ceeee47a607b | VF20WXL   | ACTIVE | None       | Running     | int=10.0.0.4, 192.168.1.102 |
+————————————–+———–+——–+————+————-+—————————–+

/var/log/nova/scheduler.log (the last message is about 1 hour before the successful `nova boot ..`; F20, Ubuntu 13.10 and CirrOS loaded OK).

I believe I still have a couple of `nova boot ..` attempts left.

Here is /var/log/nova/scheduler.log:-

2014-02-15 09:34:07.612 1111 ERROR nova.openstack.common.rpc.impl_qpid [req-4363af36-3f90-49b9-abaa-835797891946 None None] Unable to connect to AMQP server: [Errno 111] ECONNREFUSED. Sleeping 8 seconds

2014-02-15 09:34:15.617 1111 ERROR nova.openstack.common.rpc.impl_qpid [req-4363af36-3f90-49b9-abaa-835797891946 None None] Unable to connect to AMQP server: [Errno 111] ECONNREFUSED. Sleeping 16 seconds

2014-02-15 09:34:31.628 1111 ERROR nova.openstack.common.rpc.impl_qpid [req-4363af36-3f90-49b9-abaa-835797891946 None None] Unable to connect to AMQP server: [Errno 111] ECONNREFUSED. Sleeping 32 seconds

2014-02-15 09:35:03.630 1111 ERROR nova.openstack.common.rpc.impl_qpid [req-4363af36-3f90-49b9-abaa-835797891946 None None] Unable to connect to AMQP server: [Errno 111] ECONNREFUSED. Sleeping 60 seconds

2014-02-15 09:36:03.663 1111 ERROR nova.openstack.common.rpc.impl_qpid [req-4363af36-3f90-49b9-abaa-835797891946 None None] Unable to connect to AMQP server: [Errno 111] ECONNREFUSED. Sleeping 60 seconds

The last record in log :-

2014-02-15 09:37:03.713 1111 ERROR nova.openstack.common.rpc.impl_qpid [req-4363af36-3f90-49b9-abaa-835797891946 None None] Unable to connect to AMQP server: [Errno 111] ECONNREFUSED. Sleeping 60 seconds

Nothing else, still working
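When such ECONNREFUSED records show up, it may be worth confirming that qpidd is actually up and listening on the standard AMQP port 5672 (a sketch):

# systemctl status qpidd
# ss -lntp | grep 5672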

[root@dallas1 ~(keystone_admin)]$ date

Sat Feb 15 12:44:33 MSK 2014

[root@dallas1 Downloads(keystone_admin)]$ nova image-list

+————————————–+———————+——–+——–+
| ID                                   | Name                | Status | Server |
+————————————–+———————+——–+——–+
| 2cada436-30a7-425b-9e75-ce56764cdd13 | Cirros31            | ACTIVE |        |
| fd1cd492-d7d8-4fc3-961a-0b43f9aa148d | Fedora 20 Image     | ACTIVE |        |
| c0b90f9e-fd47-46da-b98b-1144a41a6c08 | Fedora 20 x86_64    | ACTIVE |        |
| 2dcc95ad-ebef-43f1-ae14-8d28e6f8194b | Ubuntu 13.10 Server | ACTIVE |        |
+————————————–+———————+——–+——–+

[root@dallas1 Downloads(keystone_admin)]$ cd

[root@dallas1 ~(keystone_admin)]$ nova boot --flavor 2 --user-data=./myfile.txt --image fd1cd492-d7d8-4fc3-961a-0b43f9aa148d VF20GLS

+————————————–+————————————–+
| Property                             | Value                                |
+————————————–+————————————–+
| OS-EXT-STS:task_state                | scheduling                           |
| image                                | Fedora 20 Image                      |
| OS-EXT-STS:vm_state                  | building                             |
| OS-EXT-SRV-ATTR:instance_name        | instance-00000004                    |
| OS-SRV-USG:launched_at               | None                                 |
| flavor                               | m1.small                             |
| id                                   | e948e74c-86e5-46e3-9df1-5b7ab890cb8a |
| security_groups                      | [{u'name': u'default'}]              |
| user_id                              | 974006673310455e8893e692f1d9350b     |
| OS-DCF:diskConfig                    | MANUAL                               |
| accessIPv4                           |                                      |
| accessIPv6                           |                                      |
| progress                             | 0                                    |
| OS-EXT-STS:power_state               | 0                                    |
| OS-EXT-AZ:availability_zone          | nova                                 |
| config_drive                         |                                      |
| status                               | BUILD                                |
| updated                              | 2014-02-15T09:04:22Z                 |
| hostId                               |                                      |
| OS-EXT-SRV-ATTR:host                 | None                                 |
| OS-SRV-USG:terminated_at             | None                                 |
| key_name                             | None                                 |
| OS-EXT-SRV-ATTR:hypervisor_hostname  | None                                 |
| name                                 | VF20GLS                              |
| adminPass                            | i5Lb79SybSpV                         |
| tenant_id                            | b5c0d0d4d31e4f3785362f2716df0b0f     |
| created                              | 2014-02-15T09:04:22Z                 |
| os-extended-volumes:volumes_attached | []                                   |
| metadata                             | {}                                   |
+————————————–+————————————–+

[root@dallas1 ~(keystone_admin)]$ nova list

+————————————–+———–+———–+————+————-+—————————–+
| ID                                   | Name      | Status    | Task State | Power State | Networks                    |
+————————————–+———–+———–+————+————-+—————————–+
| 56794a06-3c2b-45d1-a024-aeb94be598dc | Cirrus311 | ACTIVE    | None       | Running     | int=10.0.0.2, 192.168.1.101 |
| 6adf0838-bfcf-4980-a0a4-6a541facf9c9 | UbuntuSRV | ACTIVE    | None       | Running     | int=10.0.0.5, 192.168.1.103 |
| e948e74c-86e5-46e3-9df1-5b7ab890cb8a | VF20GLS   | BUILD     | spawning   | NOSTATE     | int=10.0.0.6                |
| c0e2bea9-aba8-48b2-adce-ceeee47a607b | VF20WXL   | SUSPENDED | None       | Shutdown    | int=10.0.0.4, 192.168.1.102 |
+————————————–+———–+———–+————+————-+—————————–+

[root@dallas1 ~(keystone_admin)]$ nova list

+————————————–+———–+———–+————+————-+—————————–+
| ID                                   | Name      | Status    | Task State | Power State | Networks                    |
+————————————–+———–+———–+————+————-+—————————–+
| 56794a06-3c2b-45d1-a024-aeb94be598dc | Cirrus311 | ACTIVE    | None       | Running     | int=10.0.0.2, 192.168.1.101 |
| 6adf0838-bfcf-4980-a0a4-6a541facf9c9 | UbuntuSRV | ACTIVE    | None       | Running     | int=10.0.0.5, 192.168.1.103 |
| e948e74c-86e5-46e3-9df1-5b7ab890cb8a | VF20GLS   | ACTIVE    | None       | Running     | int=10.0.0.6                |
| c0e2bea9-aba8-48b2-adce-ceeee47a607b | VF20WXL   | SUSPENDED | None       | Shutdown    | int=10.0.0.4, 192.168.1.102 |
+————————————–+———–+———–+————+————-+—————————–+

[root@dallas1 ~(keystone_admin)]$ neutron floatingip-create ext

Created a new floatingip:

+———————+————————————–+
| Field               | Value                                |
+———————+————————————–+
| fixed_ip_address    |                                      |
| floating_ip_address | 192.168.1.104                        |
| floating_network_id | 082249a5-08f4-478f-b176-effad0ef6843 |
| id                  | b582d8f9-8e44-4282-a71c-20f36f2e3d89 |
| port_id             |                                      |
| router_id           |                                      |
| tenant_id           | b5c0d0d4d31e4f3785362f2716df0b0f     |
+———————+————————————–+

[root@dallas1 ~(keystone_admin)]$ neutron port-list --device-id e948e74c-86e5-46e3-9df1-5b7ab890cb8a

+————————————–+——+——————-+———————————————————————————+
| id                                   | name | mac_address       | fixed_ips                                                                       |
+————————————–+——+——————-+———————————————————————————+
| 30c356f8-c0e9-439b-b68e-6c1e950b39ef |      | fa:16:3e:7f:4a:57 | {“subnet_id”: “3d75d529-9a18-46d3-ac08-7cb4c733636c”, “ip_address”: “10.0.0.6”} |
+————————————–+——+——————-+———————————————————————————+

[root@dallas1 ~(keystone_admin)]$ neutron floatingip-associate b582d8f9-8e44-4282-a71c-20f36f2e3d89 30c356f8-c0e9-439b-b68e-6c1e950b39ef

Associated floatingip b582d8f9-8e44-4282-a71c-20f36f2e3d89

[root@dallas1 ~(keystone_admin)]$ ping 192.168.1.104
PING 192.168.1.104 (192.168.1.104) 56(84) bytes of data.
64 bytes from 192.168.1.104: icmp_seq=1 ttl=63 time=3.67 ms
64 bytes from 192.168.1.104: icmp_seq=2 ttl=63 time=0.758 ms
64 bytes from 192.168.1.104: icmp_seq=3 ttl=63 time=0.687 ms
64 bytes from 192.168.1.104: icmp_seq=4 ttl=63 time=0.731 ms
64 bytes from 192.168.1.104: icmp_seq=5 ttl=63 time=0.767 ms
64 bytes from 192.168.1.104: icmp_seq=6 ttl=63 time=0.713 ms
64 bytes from 192.168.1.104: icmp_seq=7 ttl=63 time=0.817 ms
64 bytes from 192.168.1.104: icmp_seq=8 ttl=63 time=0.741 ms
64 bytes from 192.168.1.104: icmp_seq=9 ttl=63 time=0.703 ms
^C

— 192.168.1.104 ping statistics —

9 packets transmitted, 9 received, 0% packet loss, time 8002ms

rtt min/avg/max/mdev = 0.687/1.065/3.674/0.923 ms

[root@dallas1 ~(keystone_admin)]$ date
Sat Feb 15 13:15:13 MSK 2014
 

Check the same log :-

2014-02-15 09:36:03.663 1111 ERROR nova.openstack.common.rpc.impl_qpid [req-4363af36-3f90-49b9-abaa-835797891946 None None] Unable to connect to AMQP server: [Errno 111] ECONNREFUSED. Sleeping 60 seconds

Last record still the same :-

2014-02-15 09:37:03.713 1111 ERROR nova.openstack.common.rpc.impl_qpid [req-4363af36-3f90-49b9-abaa-835797891946 None None] Unable to connect to AMQP server: [Errno 111] ECONNREFUSED. Sleeping 60 seconds



Top at Compute Node :-

[root@dallas2 ~]# virsh list --all

Id    Name                           State

—————————————————-
4     instance-00000001              running
5     instance-00000003              running
9     instance-00000005              running
10    instance-00000002              running
11    instance-00000004              running

Finally, I get ERROR & NOSTATE at 16:28

[root@dallas1 ~(keystone_admin)]$ nova list
+————————————–+———–+———–+————+————-+—————————–+
| ID                                   | Name      | Status    | Task State | Power State | Networks                    |
+————————————–+———–+———–+————+————-+—————————–+
| 56794a06-3c2b-45d1-a024-aeb94be598dc | Cirrus311 | SUSPENDED | None       | Shutdown    | int=10.0.0.2, 192.168.1.101 |
| ee3ff870-91b7-4d14-bb06-e9a6603f0a83 | UbuntuSLM | ERROR     | None       | NOSTATE     |                             |
| 6adf0838-bfcf-4980-a0a4-6a541facf9c9 | UbuntuSRV | SUSPENDED | None       | Shutdown    | int=10.0.0.5, 192.168.1.103 |
| 5e43c64d-41cf-413f-b317-0046b070a7a4 | VF20GLS   | SUSPENDED | None       | Shutdown    | int=10.0.0.7, 192.168.1.105 |
| e948e74c-86e5-46e3-9df1-5b7ab890cb8a | VF20GLS   | SUSPENDED | None       | Shutdown    | int=10.0.0.6, 192.168.1.104 |
| c0e2bea9-aba8-48b2-adce-ceeee47a607b | VF20WXL   | SUSPENDED | None       | Shutdown    | int=10.0.0.4, 192.168.1.102 |
+————————————–+———–+———–+————+————-+—————————–+

[root@dallas1 ~(keystone_admin)]$ date
Sat Feb 15 16:28:35 MSK 2014

I was allowed to create 5 instances. The sixth one goes to ERROR & NOSTATE.

Then bring the number of instances down to no more than four and optionally restart the services :-

# service qpidd restart
# service openstack-nova-scheduler restart

Then you may run   :-

[root@dallas1 ~(keystone_admin)]$ nova boot --flavor 2 --user-data=./myfile.txt --image 14cf6e7b-9aed-40c6-8185-366eb0c4c397 UbuntuSL3

+————————————–+————————————–+
| Property                             | Value                                |
+————————————–+————————————–+
| OS-EXT-STS:task_state                | scheduling                           |
| image                                | Ubuntu Salamander Server             |
| OS-EXT-STS:vm_state                  | building                             |
| OS-EXT-SRV-ATTR:instance_name        | instance-00000009                    |
| OS-SRV-USG:launched_at               | None                                 |
| flavor                               | m1.small                             |
| id                                   | 2712446b-3442-4af2-a330-c9365736ee73 |
| security_groups                      | [{u'name': u'default'}]              |
| user_id                              | 974006673310455e8893e692f1d9350b     |
| OS-DCF:diskConfig                    | MANUAL                               |
| accessIPv4                           |                                      |
| accessIPv6                           |                                      |
| progress                             | 0                                    |
| OS-EXT-STS:power_state               | 0                                    |
| OS-EXT-AZ:availability_zone          | nova                                 |
| config_drive                         |                                      |
| status                               | BUILD                                |
| updated                              | 2014-02-15T12:44:36Z                 |
| hostId                               |                                      |
| OS-EXT-SRV-ATTR:host                 | None                                 |
| OS-SRV-USG:terminated_at             | None                                 |
| key_name                             | None                                 |
| OS-EXT-SRV-ATTR:hypervisor_hostname  | None                                 |
| name                                 | UbuntuSL3                            |
| adminPass                            | zq3n5FCktcYB                         |
| tenant_id                            | b5c0d0d4d31e4f3785362f2716df0b0f     |
| created                              | 2014-02-15T12:44:36Z                 |
| os-extended-volumes:volumes_attached | []                                   |
| metadata                             | {}                                   |
+————————————–+————————————–+

[root@dallas1 ~(keystone_admin)]$ nova list
+————————————–+———–+———–+————+————-+—————————–+
| ID                                   | Name      | Status    | Task State | Power State | Networks                    |
+————————————–+———–+———–+————+————-+—————————–+
| 56794a06-3c2b-45d1-a024-aeb94be598dc | Cirrus311 | SUSPENDED | None       | Shutdown    | int=10.0.0.2, 192.168.1.101 |
| 2712446b-3442-4af2-a330-c9365736ee73 | UbuntuSL3 | BUILD     | spawning   | NOSTATE     | int=10.0.0.6                |
| 6adf0838-bfcf-4980-a0a4-6a541facf9c9 | UbuntuSRV | SUSPENDED | None       | Shutdown    | int=10.0.0.5, 192.168.1.103 |
| 5e43c64d-41cf-413f-b317-0046b070a7a4 | VF20GLS   | ACTIVE    | None       | Running     | int=10.0.0.7, 192.168.1.105 |
| c0e2bea9-aba8-48b2-adce-ceeee47a607b | VF20WXL   | SUSPENDED | None       | Shutdown    | int=10.0.0.4, 192.168.1.102 |
+————————————–+———–+———–+————+————-+—————————–+

[root@dallas1 ~(keystone_admin)]$ nova list
+————————————–+———–+———–+————+————-+—————————–+
| ID                                   | Name      | Status    | Task State | Power State | Networks                    |
+————————————–+———–+———–+————+————-+—————————–+
| 56794a06-3c2b-45d1-a024-aeb94be598dc | Cirrus311 | SUSPENDED | None       | Shutdown    | int=10.0.0.2, 192.168.1.101 |
| 2712446b-3442-4af2-a330-c9365736ee73 | UbuntuSL3 | ACTIVE    | None       | Running     | int=10.0.0.6                |
| 6adf0838-bfcf-4980-a0a4-6a541facf9c9 | UbuntuSRV | SUSPENDED | None       | Shutdown    | int=10.0.0.5, 192.168.1.103 |
| 5e43c64d-41cf-413f-b317-0046b070a7a4 | VF20GLS   | ACTIVE    | None       | Running     | int=10.0.0.7, 192.168.1.105 |
| c0e2bea9-aba8-48b2-adce-ceeee47a607b | VF20WXL   | SUSPENDED | None       | Shutdown    | int=10.0.0.4, 192.168.1.102 |
+————————————–+———–+———–+————+————-+—————————–+

Here is a sample from another cluster :-

First remove one old instance if their number = 5, then run `nova boot` for the new instance; otherwise there is a big chance to get "ERROR & NOSTATE" instead of "BUILD & SPAWNING" status. The log /var/log/nova/scheduler.log will explain the reason for the rejection: the AMQP server cannot be connected to once the instance limit has been exceeded.
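A pre-flight check along those lines (a sketch; it counts the data rows `nova list` returns, matching the statuses seen in this post, and removes one instance if needed):

[root@dfw02 ~(keystone_admin)]$ nova list | grep -cE 'ACTIVE|SUSPENDED|ERROR'
[root@dfw02 ~(keystone_admin)]$ nova delete <name-or-id>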

[root@dfw02 ~(keystone_admin)]$ nova boot --flavor 2 --user-data=./myfile.txt --block_device_mapping vda=4cb4c501-c7b1-4c42-ba26-0141fcde038b:::0 VF20SX4


+————————————–+—————————————————-+
| Property                             | Value                                              |
+————————————–+—————————————————-+
| OS-EXT-STS:task_state                | scheduling                                         |
| image                                | Attempt to boot from volume – no image supplied    |
| OS-EXT-STS:vm_state                  | building                                           |
| OS-EXT-SRV-ATTR:instance_name        | instance-0000003c                                  |
| OS-SRV-USG:launched_at               | None                                               |
| flavor                               | m1.small                                           |
| id                                   | 4b619f27-30ba-4bd0-bd03-019b3c5d4bf7               |
| security_groups                      | [{u'name': u'default'}]                            |
| user_id                              | 970ed56ef7bc41d59c54f5ed8a1690dc                   |
| OS-DCF:diskConfig                    | MANUAL                                             |
| accessIPv4                           |                                                    |
| accessIPv6                           |                                                    |
| progress                             | 0                                                  |
| OS-EXT-STS:power_state               | 0                                                  |
| OS-EXT-AZ:availability_zone          | nova                                               |
| config_drive                         |                                                    |
| status                               | BUILD                                              |
| updated                              | 2014-02-16T06:15:34Z                               |
| hostId                               |                                                    |
| OS-EXT-SRV-ATTR:host                 | None                                               |
| OS-SRV-USG:terminated_at             | None                                               |
| key_name                             | None                                               |
| OS-EXT-SRV-ATTR:hypervisor_hostname  | None                                               |
| name                                 | VF20SX4                                            |
| adminPass                            | C8r6vtF3kHJi                                       |
| tenant_id                            | d0a0acfdb62b4cc8a2bfa8d6a08bb62f                   |
| created                              | 2014-02-16T06:15:33Z                               |
| os-extended-volumes:volumes_attached | [{u'id': u'4cb4c501-c7b1-4c42-ba26-0141fcde038b'}] |
| metadata                             | {}                                                 |
+————————————–+—————————————————-+

[root@dfw02 ~(keystone_admin)]$ nova list

+————————————–+——————+———–+————+————-+—————————–+
| ID                                   | Name             | Status    | Task State | Power State | Networks                    |
+————————————–+——————+———–+————+————-+—————————–+
| 964fd0b0-b331-4b0c-a1d5-118bf8a40abf | CentOS6.5        | SUSPENDED | None       | Shutdown    | int=10.0.0.5, 192.168.1.105 |
| 3f2db906-567c-48b0-967e-799b2bffe277 | Cirros312        | SUSPENDED | None       | Shutdown    | int=10.0.0.2, 192.168.1.101 |
| 95a36074-5145-4959-b3b3-2651f2ac1a9c | UbuntuSalamander | SUSPENDED | None       | Shutdown    | int=10.0.0.8, 192.168.1.104 |
| 4b619f27-30ba-4bd0-bd03-019b3c5d4bf7 | VF20SX4          | ACTIVE    | None       | Running     | int=10.0.0.4                |
| 55f6e0bc-281e-480d-b88f-193207ea4d4a | VF20XWL          | SUSPENDED | None       | Shutdown    | int=10.0.0.7, 192.168.1.108 |
+————————————–+——————+———–+————+————-+—————————–+

[root@dfw02 ~(keystone_admin)]$ nova show 4b619f27-30ba-4bd0-bd03-019b3c5d4bf7

+————————————–+———————————————————-+
| Property                             | Value                                                    |
+————————————–+———————————————————-+
| status                               | ACTIVE                                                   |
| updated                              | 2014-02-16T06:15:39Z                                     |
| OS-EXT-STS:task_state                | None                                                     |
| OS-EXT-SRV-ATTR:host                 | dfw01.localdomain                                        |
| key_name                             | None                                                     |
| image                                | Attempt to boot from volume – no image supplied          |
| int network                          | 10.0.0.4, 192.168.1.110                                  |
| hostId                               | b67c11ccfa8ac8c9ed019934fa650d307f91e7fe7bbf8cd6874f3e01 |
| OS-EXT-STS:vm_state                  | active                                                   |
| OS-EXT-SRV-ATTR:instance_name        | instance-0000003c                                        |
| OS-SRV-USG:launched_at               | 2014-02-16T06:15:39.000000                               |
| OS-EXT-SRV-ATTR:hypervisor_hostname  | dfw01.localdomain                                        |
| flavor                               | m1.small (2)                                             |
| id                                   | 4b619f27-30ba-4bd0-bd03-019b3c5d4bf7                     |
| security_groups                      | [{u'name': u'default'}]                                  |
| OS-SRV-USG:terminated_at             | None                                                     |
| user_id                              | 970ed56ef7bc41d59c54f5ed8a1690dc                         |
| name                                 | VF20SX4                                                  |
| created                              | 2014-02-16T06:15:33Z                                     |
| tenant_id                            | d0a0acfdb62b4cc8a2bfa8d6a08bb62f                         |
| OS-DCF:diskConfig                    | MANUAL                                                   |
| metadata                             | {}                                                       |
| os-extended-volumes:volumes_attached | [{u'id': u'4cb4c501-c7b1-4c42-ba26-0141fcde038b'}]       |
| accessIPv4                           |                                                          |
| accessIPv6                           |                                                          |
| progress                             | 0                                                        |
| OS-EXT-STS:power_state               | 1                                                        |
| OS-EXT-AZ:availability_zone          | nova                                                     |
| config_drive                         |                                                          |
+————————————–+———————————————————-+

Tenants Network testing

[root@dfw02 ~]#  cat  keystonerc_boris
export OS_USERNAME=boris
export OS_TENANT_NAME=ostenant
export OS_PASSWORD=fedora
export OS_AUTH_URL=http://192.168.1.127:35357/v2.0/
export PS1='[\u@\h \W(keystone_boris)]$ '

[root@dfw02 ~]# . keystonerc_boris

[root@dfw02 ~(keystone_boris)]$ neutron net-list
+————————————–+——+—————————————+
| id                                   | name | subnets                               |
+————————————–+——+—————————————+
| 780ce2f3-2e6e-4881-bbac-857813f9a8e0 | ext  | f30e5a16-a055-4388-a6ea-91ee142efc3d  |
+————————————–+——+—————————————+

[root@dfw02 ~(keystone_boris)]$ neutron router-create router2
Created a new router:
+———————–+————————————–+
| Field                 | Value                                |
+———————–+————————————–+
| admin_state_up        | True                                 |
| external_gateway_info |                                      |
| id                    | 86b3008c-297f-4301-9bdc-766b839785f1 |
| name                  | router2                              |
| status                | ACTIVE                               |
| tenant_id             | 4dacfff9e72c4245a48d648ee23468d5     |
+———————–+————————————–+

[root@dfw02 ~(keystone_boris)]$ neutron router-gateway-set router2 ext
Set gateway for router router2

[root@dfw02 ~(keystone_boris)]$  neutron net-create int1
Created a new network:
+—————-+————————————–+
| Field          | Value                                |
+—————-+————————————–+
| admin_state_up | True                                 |
| id             | 426bb226-0ab9-440d-ba14-05634a17fb2b |
| name           | int1                                 |
| shared         | False                                |
| status         | ACTIVE                               |
| subnets        |                                      |
| tenant_id      | 4dacfff9e72c4245a48d648ee23468d5     |
+—————-+————————————–+

[root@dfw02 ~(keystone_boris)]$ neutron subnet-create int1 40.0.0.0/24 --dns_nameservers list=true 83.221.202.254
Created a new subnet:
+——————+——————————————–+
| Field            | Value                                      |
+——————+——————————————–+
| allocation_pools | {“start”: “40.0.0.2”, “end”: “40.0.0.254”} |
| cidr             | 40.0.0.0/24                                |
| dns_nameservers  | 83.221.202.254                             |
| enable_dhcp      | True                                       |
| gateway_ip       | 40.0.0.1                                   |
| host_routes      |                                            |
| id               | 9e0d457b-c4c4-45cf-84e2-4ac7550f3b06       |
| ip_version       | 4                                          |
| name             |                                            |
| network_id       | 426bb226-0ab9-440d-ba14-05634a17fb2b       |
| tenant_id        | 4dacfff9e72c4245a48d648ee23468d5           |
+——————+——————————————–+

[root@dfw02 ~(keystone_boris)]$  neutron router-interface-add router2 9e0d457b-c4c4-45cf-84e2-4ac7550f3b06
Added interface e031db6b-d0cc-4c57-877b-53b1c6946870 to router router2.

[root@dfw02 ~(keystone_boris)]$ neutron subnet-list
+————————————–+——+————-+——————————————–+
| id                                   | name | cidr        | allocation_pools                           |
+————————————–+——+————-+——————————————–+
| 9e0d457b-c4c4-45cf-84e2-4ac7550f3b06 |      | 40.0.0.0/24 | {“start”: “40.0.0.2”, “end”: “40.0.0.254”} |
+————————————–+——+————-+——————————————–+

[root@dfw02 ~(keystone_boris)]$ neutron security-group-rule-create --protocol icmp \
>   --direction ingress --remote-ip-prefix 0.0.0.0/0 default
Created a new security_group_rule:
+——————-+————————————–+
| Field             | Value                                |
+——————-+————————————–+
| direction         | ingress                              |
| ethertype         | IPv4                                 |
| id                | 4a6deddf-9350-4f98-97d7-a54cf6ebaa9a |
| port_range_max    |                                      |
| port_range_min    |                                      |
| protocol          | icmp                                 |
| remote_group_id   |                                      |
| remote_ip_prefix  | 0.0.0.0/0                            |
| security_group_id | 4b72bcaf-b456-4222-afcc-8885326b96b2 |
| tenant_id         | 4dacfff9e72c4245a48d648ee23468d5     |
+——————-+————————————–+

[root@dfw02 ~(keystone_boris)]$ neutron security-group-rule-create --protocol tcp \
>   --port-range-min 22 --port-range-max 22 \
>   --direction ingress --remote-ip-prefix 0.0.0.0/0 default
Created a new security_group_rule:
+——————-+————————————–+
| Field             | Value                                |
+——————-+————————————–+
| direction         | ingress                              |
| ethertype         | IPv4                                 |
| id                | 7a461936-ffbc-4968-975b-3d27ec975e04 |
| port_range_max    | 22                                   |
| port_range_min    | 22                                   |
| protocol          | tcp                                  |
| remote_group_id   |                                      |
| remote_ip_prefix  | 0.0.0.0/0                            |
| security_group_id | 4b72bcaf-b456-4222-afcc-8885326b96b2 |
| tenant_id         | 4dacfff9e72c4245a48d648ee23468d5     |
+——————-+————————————–+

[root@dfw02 ~(keystone_boris)]$ glance image-list
+————————————–+———————+————-+——————+———–+——–+
| ID                                   | Name                | Disk Format | Container Format | Size      | Status |
+————————————–+———————+————-+——————+———–+——–+
| b9782f24-2f37-4f8d-9467-f622507788bf | CentOS6.5 image     | qcow2       | bare             | 344457216 | active |
| dc992799-7831-4933-b6ee-7b81868f808b | CirrOS31            | qcow2       | bare             | 13147648  | active |
| 03c9ad20-b0a3-4b71-aa08-2728ecb66210 | Fedora 19 x86_64    | qcow2       | bare             | 237371392 | active |
| de93ee44-4085-4111-b022-a7437da8feac | Fedora 20 image     | qcow2       | bare             | 214106112 | active |
| e70591fc-6905-4e57-84b7-4ffa7c001864 | Ubuntu Server 13.10 | qcow2       | bare             | 244514816 | active |
| dd8c11d2-f9b0-4b77-9b5e-534c7ac0fd58 | Ubuntu Trusty image | qcow2       | bare             | 246022144 | active |
+————————————–+———————+————-+——————+———–+——–+

[root@dfw02 ~(keystone_boris)]$ cinder create --image-id de93ee44-4085-4111-b022-a7437da8feac --display_name VF20VLG02 7
+———————+————————————–+
|       Property      |                Value                 |
+———————+————————————–+
|     attachments     |                  []                  |
|  availability_zone  |                 nova                 |
|       bootable      |                false                 |
|      created_at     |      2014-02-21T06:36:21.753407      |
| display_description |                 None                 |
|     display_name    |              VF20VLG02               |
|          id         | c3b09e44-1868-43c6-baaa-1ffcb4b80fb1 |
|       image_id      | de93ee44-4085-4111-b022-a7437da8feac |
|       metadata      |                  {}                  |
|         size        |                  7                   |
|     snapshot_id     |                 None                 |
|     source_volid    |                 None                 |
|        status       |               creating               |
|     volume_type     |                 None                 |
+———————+————————————–+

[root@dfw02 ~(keystone_boris)]$ cinder list
+————————————–+————-+————–+——+————-+———-+————-+
|                  ID                  |    Status   | Display Name | Size | Volume Type | Bootable | Attached to |
+————————————–+————-+————–+——+————-+———-+————-+
| c3b09e44-1868-43c6-baaa-1ffcb4b80fb1 | downloading |  VF20VLG02   |  7   |     None    |  false   |             |
+————————————–+————-+————–+——+————-+———-+————-+

[root@dfw02 ~(keystone_boris)]$ cinder list
+————————————–+———–+————–+——+————-+———-+————-+
|                  ID                  |   Status  | Display Name | Size | Volume Type | Bootable | Attached to |
+————————————–+———–+————–+——+————-+———-+————-+
| c3b09e44-1868-43c6-baaa-1ffcb4b80fb1 | available |  VF20VLG02   |  7   |     None    |   true   |             |
+————————————–+———–+————–+——+————-+———-+————-+

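Converting the image into a bootable volume takes a while; instead of re-running `cinder list` by hand, a small polling loop can wait for the volume to become available. A minimal sketch, using the volume ID reported by `cinder create` above:

VOLUME_ID=c3b09e44-1868-43c6-baaa-1ffcb4b80fb1
# "cinder show" prints a property table; loop until its status row
# reads "available" (an error state would make this spin forever,
# so this is a sketch, not production code).
until cinder show $VOLUME_ID | grep -qE 'status.*available' ; do
    sleep 10
done
echo "Volume $VOLUME_ID is available"
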
[root@dfw02 ~(keystone_boris)]$ nova boot --flavor 2 --user-data=./myfile.txt --block_device_mapping vda=c3b09e44-1868-43c6-baaa-1ffcb4b80fb1:::0 VF20XWS
+————————————–+—————————————————-+
| Property                             | Value                                              |
+————————————–+—————————————————-+
| status                               | BUILD                                              |
| updated                              | 2014-02-21T06:49:42Z                               |
| OS-EXT-STS:task_state                | scheduling                                         |
| key_name                             | None                                               |
| image                                | Attempt to boot from volume - no image supplied    |
| hostId                               |                                                    |
| OS-EXT-STS:vm_state                  | building                                           |
| OS-SRV-USG:launched_at               | None                                               |
| flavor                               | m1.small                                           |
| id                                   | c4573327-dd99-4e57-941e-3d35aacb637c               |
| security_groups                      | [{u'name': u'default'}]                            |
| OS-SRV-USG:terminated_at             | None                                               |
| user_id                              | 162021e787c54cac906ab3296a386006                   |
| name                                 | VF20XWS                                            |
| adminPass                            | YkPYdW58gz7K                                       |
| tenant_id                            | 4dacfff9e72c4245a48d648ee23468d5                   |
| created                              | 2014-02-21T06:49:42Z                               |
| OS-DCF:diskConfig                    | MANUAL                                             |
| metadata                             | {}                                                 |
| os-extended-volumes:volumes_attached | [{u'id': u'c3b09e44-1868-43c6-baaa-1ffcb4b80fb1'}] |
| accessIPv4                           |                                                    |
| accessIPv6                           |                                                    |
| progress                             | 0                                                  |
| OS-EXT-STS:power_state               | 0                                                  |
| OS-EXT-AZ:availability_zone          | nova                                               |
| config_drive                         |                                                    |
+————————————–+—————————————————-+

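The `--block_device_mapping` value above follows nova's legacy <device>=<id>:<type>:<size>:<delete-on-terminate> format. Schematically:

# vda          - device name as seen inside the guest
# <volume-id>  - the cinder volume to boot from
# :::          - empty <type> and <size> fields (plain volume, its native size)
# 0            - keep the volume on instance termination (1 would delete it)
--block_device_mapping vda=<volume-id>:::0
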
[root@dfw02 ~(keystone_boris)]$ nova list
+————————————–+———+——–+————+————-+—————+
| ID                                   | Name    | Status | Task State | Power State | Networks      |
+————————————–+———+——–+————+————-+—————+
| c4573327-dd99-4e57-941e-3d35aacb637c | VF20XWS | ACTIVE | None       | Running     | int1=40.0.0.2 |
+————————————–+———+——–+————+————-+—————+

[root@dfw02 ~(keystone_boris)]$ neutron floatingip-create ext
Created a new floatingip:
+———————+————————————–+
| Field               | Value                                |
+———————+————————————–+
| fixed_ip_address    |                                      |
| floating_ip_address | 192.168.1.115                        |
| floating_network_id | 780ce2f3-2e6e-4881-bbac-857813f9a8e0 |
| id                  | 64dd749f-6127-4d0f-ba51-8a9978b8c211 |
| port_id             |                                      |
| router_id           |                                      |
| tenant_id           | 4dacfff9e72c4245a48d648ee23468d5     |
+———————+————————————–+

[root@dfw02 ~(keystone_boris)]$ neutron port-list --device-id c4573327-dd99-4e57-941e-3d35aacb637c
+————————————–+——+——————-+———————————————————————————+
| id                                   | name | mac_address       | fixed_ips                                                                       |
+————————————–+——+——————-+———————————————————————————+
| 2d6c6569-44c3-44b2-8bed-cdc8dde12336 |      | fa:16:3e:10:a0:e3 | {"subnet_id": "9e0d457b-c4c4-45cf-84e2-4ac7550f3b06", "ip_address": "40.0.0.2"} |
+————————————–+——+——————-+———————————————————————————+

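For scripting this step, the port ID can be extracted from the listing rather than copied by hand; a rough sketch (matching on the "fa:16" MAC prefix is just a convenient filter for the single-port case):

# $2 is the id column of the neutron table output.
PORT_ID=$(neutron port-list --device-id c4573327-dd99-4e57-941e-3d35aacb637c | awk '/fa:16/ {print $2}')
echo $PORT_ID
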
[root@dfw02 ~(keystone_boris)]$ neutron floatingip-associate 64dd749f-6127-4d0f-ba51-8a9978b8c211 2d6c6569-44c3-44b2-8bed-cdc8dde12336
Associated floatingip 64dd749f-6127-4d0f-ba51-8a9978b8c211

[root@dfw02 ~(keystone_boris)]$ neutron floatingip-show 64dd749f-6127-4d0f-ba51-8a9978b8c211
+———————+————————————–+
| Field               | Value                                |
+———————+————————————–+
| fixed_ip_address    | 40.0.0.2                             |
| floating_ip_address | 192.168.1.115                        |
| floating_network_id | 780ce2f3-2e6e-4881-bbac-857813f9a8e0 |
| id                  | 64dd749f-6127-4d0f-ba51-8a9978b8c211 |
| port_id             | 2d6c6569-44c3-44b2-8bed-cdc8dde12336 |
| router_id           | 86b3008c-297f-4301-9bdc-766b839785f1 |
| tenant_id           | 4dacfff9e72c4245a48d648ee23468d5     |
+———————+————————————–+

[root@dfw02 ~(keystone_boris)]$ ping 192.168.1.115
PING 192.168.1.115 (192.168.1.115) 56(84) bytes of data.
64 bytes from 192.168.1.115: icmp_seq=1 ttl=63 time=3.80 ms
64 bytes from 192.168.1.115: icmp_seq=2 ttl=63 time=1.13 ms
64 bytes from 192.168.1.115: icmp_seq=3 ttl=63 time=0.954 ms
64 bytes from 192.168.1.115: icmp_seq=4 ttl=63 time=1.01 ms
64 bytes from 192.168.1.115: icmp_seq=5 ttl=63 time=0.999 ms
64 bytes from 192.168.1.115: icmp_seq=6 ttl=63 time=0.809 ms
64 bytes from 192.168.1.115: icmp_seq=7 ttl=63 time=1.02 ms
^C

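Once ICMP replies come back, the tcp/22 rule added earlier can be exercised as well. The login name depends on the image (for Fedora cloud images it is normally "fedora"):

ssh fedora@192.168.1.115
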
The original documents were posted on fedorapeople.org by Kashyap in November 2013.
The attached versions are tuned for new IPs and should no longer contain the typos of the original. They also include the MySQL preventive updates currently required for openstack-nova-compute and neutron-openvswitch-agent to connect remotely to the Controller Node; the MySQL part is mine. All attached *.conf and *.ini files have been updated for my network as well.
I am also quite sure that it makes no difference whether Libvirt's default or non-default networks are used when creating the Controller and Compute nodes as F20 VMs. The same configs allow metadata to be passed from Controller to Compute on real physical boxes, and with GRE tunnelling a single Ethernet controller per box should suffice for a manual RDO Havana setup on Fedora 20.
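
A quick way to confirm that the GRE tunnel itself came up is to inspect br-tun on either node; a "Port gre-..." entry whose options list a remote_ip pointing at the peer's physical interface confirms the endpoint:

ovs-vsctl show | grep -A 3 gre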

References

1. http://textuploader.com/1hin
2. http://textuploader.com/1hey
3. http://kashyapc.fedorapeople.org/virt/openstack/Two-node-Havana-setup.txt
4. http://kashyapc.fedorapeople.org/virt/openstack/neutron-configs-GRE-OVS-two-node.txt
5. http://bderzhavets.blogspot.com/2014/01/setting-up-two-physical-node-openstack.html

