This post is an update of http://bderzhavets.blogspot.com/2014/01/setting-up-two-physical-node-openstack.html . It focuses on a Gluster 3.4.2 implementation, including tuning the /etc/sysconfig/iptables files on the Controller and Compute Nodes, copying the ssh key from the master node to the compute node, step-by-step verification of gluster volume replica 2 functionality, and switching the RDO Havana cinder services to work with a gluster volume created to store instances' bootable cinder volumes for a performance improvement. Of course, creating gluster bricks under “/” is not recommended; there should be a separate mount point with an “xfs” filesystem to store the gluster bricks on each node.
The manual RDO Havana setup itself was originally suggested by Kashyap Chamarthy for F20 VMs running on a non-default Libvirt subnet. My contribution was an attempt to reproduce this setup on physical F20 boxes on an arbitrary network not connected to Libvirt; preventive updates to the mysql.user table, which allowed remote connections for nova-compute and neutron-openvswitch-agent from Compute to Controller; and changes to /etc/sysconfig/iptables to enable the Gluster 3.4.2 setup on F20 systems (view http://bderzhavets.blogspot.com/2014/03/setup-gluster-342-on-two-node-neutron.html ). I have also fixed a typo in dhcp_agent.ini (the reference to “dnsmasq.conf”) and added the line “dhcp-option=26,1454” to dnsmasq.conf. The updated configuration files are critical for launching an instance without a “Customization script” and allow working with a usual ssh keypair. Actually, once the updates are done, an instance gets created with MTU 1454. View [2]. The original setup is pretty much focused on the ability to transfer neutron metadata from Controller to Compute F20 nodes and is done manually with no answer-files. It stops exactly at the point where `nova boot ..` loads an instance on Compute, which obtains an internal IP via DHCP running on the Controller and may be assigned a floating IP to be able to communicate with the Internet. No attempt to set up the dashboard has been made, since the core target was neutron GRE+OVS functionality (just a proof of concept). Regarding Dashboard Setup & VNC Console, view :-
Setup Dashboard&VNC console on Two Node Controller&Compute Neutron GRE+OVS+Gluster Fedora 20 Cluster
The updated setup procedure itself may be viewed here:
– Controller node: Nova, Keystone, Cinder, Glance, Neutron (using Open vSwitch plugin and GRE tunneling )
– Compute node: Nova (nova-compute), Neutron (openvswitch-agent)
dallas1.localdomain – Controller (192.168.1.130)
dallas2.localdomain – Compute (192.168.1.140)
The first step is tuning /etc/sysconfig/iptables for the IPv4 iptables firewall (the firewalld service should be disabled) :-
Update /etc/sysconfig/iptables on both nodes:-
-A INPUT -p tcp -m multiport --dport 24007:24047 -j ACCEPT
-A INPUT -p tcp --dport 111 -j ACCEPT
-A INPUT -p udp --dport 111 -j ACCEPT
-A INPUT -p tcp -m multiport --dport 38465:38485 -j ACCEPT
Comment out the lines below, ignoring the instruction from http://kashyapc.fedorapeople.org/virt/openstack/neutron-configs-GRE-OVS-two-node.txt . This is critical for Gluster functionality: with these rules active you are limited to working with thin LVM as cinder volumes, you won't even be able to mount remotely with the “-t glusterfs” option, and Gluster replication will be dead forever.
# -A FORWARD -j REJECT --reject-with icmp-host-prohibited
# -A INPUT -j REJECT --reject-with icmp-host-prohibited
Then restart the iptables service on both nodes.
Second step:-
On dallas1, run the following commands :
# ssh-keygen (Hit Enter to accept all of the defaults)
# ssh-copy-id -i ~/.ssh/id_rsa.pub root@dallas2
On both nodes run :-
# yum -y install glusterfs glusterfs-server glusterfs-fuse
# service glusterd start
On dallas1
# gluster peer probe dallas2.localdomain
It should return “success”.
[root@dallas1 ~(keystone_admin)]$ gluster peer status
Number of Peers: 1
Hostname: dallas2.localdomain
Uuid: b3b1cf43-2fec-4904-82d4-b9be03f77c5f
State: Peer in Cluster (Connected)
On dallas2
[root@dallas2 ~]# gluster peer status
Number of Peers: 1
Hostname: 192.168.1.130
Uuid: a57433dd-4a1a-4442-a5ae-ba2f682e5c79
State: Peer in Cluster (Connected)
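The peer check above can be scripted. A minimal sketch that parses `gluster peer status` output and confirms every peer is connected; a sample of the output shown above is embedded in a here-doc so the parsing can be tried offline — on a live node replace the here-doc with `status=$(gluster peer status)`:

```shell
#!/bin/sh
# Verify that every Gluster peer is in the "Connected" state.
# Sample output embedded for illustration; substitute the real command
# on a live node: status=$(gluster peer status)
status=$(cat <<'EOF'
Number of Peers: 1

Hostname: dallas2.localdomain
Uuid: b3b1cf43-2fec-4904-82d4-b9be03f77c5f
State: Peer in Cluster (Connected)
EOF
)
# "Number of Peers: N" -> N is the 4th whitespace-separated field
peers=$(printf '%s\n' "$status" | awk '/^Number of Peers:/ {print $4}')
# Count peers reported as connected
connected=$(printf '%s\n' "$status" | grep -c 'State: Peer in Cluster (Connected)')
if [ "$peers" -eq "$connected" ]; then
  echo "OK: all $peers peer(s) connected"
else
  echo "WARNING: $connected of $peers peer(s) connected"
fi
```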
*************************************************************************************
On Controller (192.168.1.130) & Compute nodes (192.168.1.140)
**********************************************************************************
Verify port availability:-
[root@dallas1 ~(keystone_admin)]$ netstat -lntp | grep gluster
tcp 0 0 0.0.0.0:655 0.0.0.0:* LISTEN 2591/glusterfs
tcp 0 0 0.0.0.0:49152 0.0.0.0:* LISTEN 2524/glusterfsd
tcp 0 0 0.0.0.0:2049 0.0.0.0:* LISTEN 2591/glusterfs
tcp 0 0 0.0.0.0:38465 0.0.0.0:* LISTEN 2591/glusterfs
tcp 0 0 0.0.0.0:38466 0.0.0.0:* LISTEN 2591/glusterfs
tcp 0 0 0.0.0.0:49155 0.0.0.0:* LISTEN 2525/glusterfsd
tcp 0 0 0.0.0.0:38468 0.0.0.0:* LISTEN 2591/glusterfs
tcp 0 0 0.0.0.0:38469 0.0.0.0:* LISTEN 2591/glusterfs
tcp 0 0 0.0.0.0:24007 0.0.0.0:* LISTEN 2380/glusterd
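This check can also be scripted. A sketch that looks for a few of the expected Gluster listeners in `netstat -lntp` output; a handful of the sample lines above are embedded for offline illustration — on a live node use `out=$(netstat -lntp)`:

```shell
#!/bin/sh
# Confirm the key Gluster listeners (glusterd 24007, first brick 49152,
# NFS-over-Gluster 38465) are present in `netstat -lntp` output.
# Sample lines embedded; on a live node: out=$(netstat -lntp)
out=$(cat <<'EOF'
tcp 0 0 0.0.0.0:24007 0.0.0.0:* LISTEN 2380/glusterd
tcp 0 0 0.0.0.0:49152 0.0.0.0:* LISTEN 2524/glusterfsd
tcp 0 0 0.0.0.0:38465 0.0.0.0:* LISTEN 2591/glusterfs
EOF
)
missing=0
for port in 24007 49152 38465; do
  if printf '%s\n' "$out" | grep -q ":$port "; then
    echo "port $port: listening"
  else
    echo "port $port: MISSING"
    missing=$((missing + 1))
  fi
done
```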
************************************
Switching Cinder to Gluster volume
************************************
# gluster volume create cinder-volumes012 replica 2 dallas1.localdomain:/FDR/Replicate dallas2.localdomain:/FDR/Replicate force
# gluster volume start cinder-volumes012
# gluster volume set cinder-volumes012 auth.allow 192.168.1.*
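These three commands can be wrapped in a small parameterized script. A minimal sketch using the node, brick, and volume names from this post; it defaults to a dry run that only prints the commands — set DRY_RUN=0 to actually execute them on a node where glusterd is running and the brick paths exist:

```shell
#!/bin/sh
# Create, start, and restrict a replica-2 Gluster volume for cinder.
# DRY_RUN=1 (the default here) prints the commands instead of running them.
DRY_RUN=${DRY_RUN:-1}
VOL=cinder-volumes012
BRICK=/FDR/Replicate
NODE1=dallas1.localdomain
NODE2=dallas2.localdomain

run() {
  if [ "$DRY_RUN" = "1" ]; then echo "+ $*"; else "$@"; fi
}

run gluster volume create "$VOL" replica 2 "$NODE1:$BRICK" "$NODE2:$BRICK" force
run gluster volume start "$VOL"
run gluster volume set "$VOL" auth.allow '192.168.1.*'
```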
[root@dallas1 ~(keystone_admin)]$ gluster volume info cinder-volumes012
Volume Name: cinder-volumes012
Type: Replicate
Volume ID: 9ee31c6c-0ae3-4fee-9886-b9cb6a518f48
Status: Started
Number of Bricks: 1 x 2 = 2
Transport-type: tcp
Bricks:
Brick1: dallas1.localdomain:/FDR/Replicate
Brick2: dallas2.localdomain:/FDR/Replicate
Options Reconfigured:
storage.owner-gid: 165
storage.owner-uid: 165
auth.allow: 192.168.1.*
[root@dallas1 ~(keystone_admin)]$ gluster volume status cinder-volumes012
Status of volume: cinder-volumes012
Gluster process Port Online Pid
——————————————————————————
Brick dallas1.localdomain:/FDR/Replicate 49155 Y 2525
Brick dallas2.localdomain:/FDR/Replicate 49152 Y 1615
NFS Server on localhost 2049 Y 2591
Self-heal Daemon on localhost N/A Y 2596
NFS Server on dallas2.localdomain 2049 Y 2202
Self-heal Daemon on dallas2.localdomain N/A Y 2197
# openstack-config --set /etc/cinder/cinder.conf DEFAULT volume_driver cinder.volume.drivers.glusterfs.GlusterfsDriver
# openstack-config --set /etc/cinder/cinder.conf DEFAULT glusterfs_shares_config /etc/cinder/shares.conf
# openstack-config --set /etc/cinder/cinder.conf DEFAULT glusterfs_mount_point_base /var/lib/cinder/volumes
# vi /etc/cinder/shares.conf
192.168.1.130:cinder-volumes012
:wq
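If the three `openstack-config` commands above succeed, the relevant settings in the [DEFAULT] section of /etc/cinder/cinder.conf should look like the fragment below (a hand-written reference fragment, not a complete file):

```ini
[DEFAULT]
# Gluster-backed cinder volumes
volume_driver = cinder.volume.drivers.glusterfs.GlusterfsDriver
glusterfs_shares_config = /etc/cinder/shares.conf
glusterfs_mount_point_base = /var/lib/cinder/volumes
```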
Make sure all thin LVM volumes have been deleted (check via `cinder list`); if any remain, delete them all.
[root@dallas1 ~(keystone_admin)]$ for i in api scheduler volume ; do service openstack-cinder-${i} restart ; done
It should add a row to the `df -h` output:
192.168.1.130:cinder-volumes012 187G 32G 146G 18% /var/lib/cinder/volumes/1c9688348ab38662e3ac8fb121077d34
[root@dallas1 ~(keystone_admin)]$ openstack-status
== Nova services ==
openstack-nova-api: active
openstack-nova-cert: inactive (disabled on boot)
openstack-nova-compute: inactive (disabled on boot)
openstack-nova-network: inactive (disabled on boot)
openstack-nova-scheduler: active
openstack-nova-volume: inactive (disabled on boot)
openstack-nova-conductor: active
== Glance services ==
openstack-glance-api: active
openstack-glance-registry: active
== Keystone service ==
openstack-keystone: active
== neutron services ==
neutron-server: active
neutron-dhcp-agent: active
neutron-l3-agent: active
neutron-metadata-agent: active
neutron-lbaas-agent: inactive (disabled on boot)
neutron-openvswitch-agent: active
neutron-linuxbridge-agent: inactive (disabled on boot)
neutron-ryu-agent: inactive (disabled on boot)
neutron-nec-agent: inactive (disabled on boot)
neutron-mlnx-agent: inactive (disabled on boot)
== Cinder services ==
openstack-cinder-api: active
openstack-cinder-scheduler: active
openstack-cinder-volume: active
== Support services ==
mysqld: inactive (disabled on boot)
libvirtd: active
openvswitch: active
dbus: active
tgtd: active
qpidd: active
== Keystone users ==
+———————————-+———+———+——-+
| id | name | enabled | email |
+———————————-+———+———+——-+
| 871cf99617ff40e09039185aa7ab11f8 | admin | True | |
| df4a984ce2f24848a6b84aaa99e296f1 | boris | True | |
| 57fc5466230b497a9f206a20618dbe25 | cinder | True | |
| cdb2e5af7bae4c5486a1e3e2f42727f0 | glance | True | |
| adb14139a0874c74b14d61d2d4f22371 | neutron | True | |
| 2485122e3538409c8a6fa2ea4343cedf | nova | True | |
+———————————-+———+———+——-+
== Glance images ==
+————————————–+———————+————-+——————+———–+——–+
| ID | Name | Disk Format | Container Format | Size | Status |
+————————————–+———————+————-+——————+———–+——–+
| 592faef8-308a-4438-867a-17adf685cde4 | CirrOS 31 | qcow2 | bare | 13147648 | active |
| d0e90250-5814-4685-9b8d-65ec9daa7117 | Fedora 20 x86_64 | qcow2 | bare | 214106112 | active |
| 3e6eea8e-32e6-4373-9eb1-e04b8a3167f9 | Ubuntu Server 13.10 | qcow2 | bare | 244777472 | active |
+————————————–+———————+————-+——————+———–+——–+
== Nova managed services ==
+—————-+———————+———-+———+——-+—————————-+—————–+
| Binary | Host | Zone | Status | State | Updated_at | Disabled Reason |
+—————-+———————+———-+———+——-+—————————-+—————–+
| nova-scheduler | dallas1.localdomain | internal | enabled | up | 2014-03-09T14:19:31.000000 | None |
| nova-conductor | dallas1.localdomain | internal | enabled | up | 2014-03-09T14:19:30.000000 | None |
| nova-compute | dallas2.localdomain | nova | enabled | up | 2014-03-09T14:19:33.000000 | None |
+—————-+———————+———-+———+——-+—————————-+—————–+
== Nova networks ==
+————————————–+——-+——+
| ID | Label | Cidr |
+————————————–+——-+——+
| 0ed406bf-3552-4036-9006-440f3e69618e | ext | None |
| 166d9651-d299-47df-a5a1-b368e87b612f | int | None |
+————————————–+——-+——+
== Nova instance flavors ==
+—-+———–+———–+——+———–+——+——-+————-+———–+
| ID | Name | Memory_MB | Disk | Ephemeral | Swap | VCPUs | RXTX_Factor | Is_Public |
+—-+———–+———–+——+———–+——+——-+————-+———–+
| 1 | m1.tiny | 512 | 1 | 0 | | 1 | 1.0 | True |
| 2 | m1.small | 2048 | 20 | 0 | | 1 | 1.0 | True |
| 3 | m1.medium | 4096 | 40 | 0 | | 2 | 1.0 | True |
| 4 | m1.large | 8192 | 80 | 0 | | 4 | 1.0 | True |
| 5 | m1.xlarge | 16384 | 160 | 0 | | 8 | 1.0 | True |
+—-+———–+———–+——+———–+——+——-+————-+———–+
== Nova instances ==
+—-+——+——–+————+————-+———-+
| ID | Name | Status | Task State | Power State | Networks |
+—-+——+——–+————+————-+———-+
+—-+——+——–+————+————-+———-+
[root@dallas1 ~(keystone_boris)]$ nova list
+————————————–+———–+——–+————+————-+—————————–+
| ID | Name | Status | Task State | Power State | Networks |
+————————————–+———–+——–+————+————-+—————————–+
| 9566adec-9406-4c3e-bce5-109ecb8bcf6b | VF20RS007 | ACTIVE | None | Running | int=10.0.0.2, 192.168.1.101 |
| 8142ee4c-ef56-4b61-8a0b-ecd82d21484f | VF20RS012 | ACTIVE | None | Running | int=10.0.0.4, 192.168.1.102 |
+————————————–+———–+——–+————+————-+—————————–+
[root@dallas1 ~(keystone_boris)]$ df -h
Filesystem Size Used Avail Use% Mounted on
/dev/mapper/fedora01-root 187G 32G 146G 18% /
devtmpfs 3.9G 0 3.9G 0% /dev
tmpfs 3.9G 184K 3.9G 1% /dev/shm
tmpfs 3.9G 9.1M 3.9G 1% /run
tmpfs 3.9G 0 3.9G 0% /sys/fs/cgroup
tmpfs 3.9G 464K 3.9G 1% /tmp
/dev/sdb5 477M 122M 327M 28% /boot
tmpfs 3.9G 9.1M 3.9G 1% /run/netns
192.168.1.130:cinder-volumes012 187G 32G 146G 18% /var/lib/cinder/volumes/1c9688348ab38662e3ac8fb121077d34
(neutron) agent-list
+————————————–+——————–+———————+——-+—————-+
| id | agent_type | host | alive | admin_state_up |
+————————————–+——————–+———————+——-+—————-+
| 3ed1cd15-81af-4252-9d6f-e9bb140bf6cf | L3 agent | dallas1.localdomain | :-) | True |
| a088a6df-633c-4959-a316-510c99f3876b | DHCP agent | dallas1.localdomain | :-) | True |
| a3e5200c-b391-4930-b3ee-58c8d1b13c73 | Open vSwitch agent | dallas1.localdomain | :-) | True |
| b6da839a-0d93-44ad-9793-6d0919fbb547 | Open vSwitch agent | dallas2.localdomain | :-) | True |
+————————————–+——————–+———————+——-+—————-+
If the Controller has been set up correctly:-
[root@dallas1 ~(keystone_admin)]$ netstat -lntp | grep python
tcp 0 0 0.0.0.0:8700 0.0.0.0:* LISTEN 1160/python
tcp 0 0 0.0.0.0:35357 0.0.0.0:* LISTEN 1163/python
tcp 0 0 0.0.0.0:9696 0.0.0.0:* LISTEN 1165/python
tcp 0 0 0.0.0.0:8773 0.0.0.0:* LISTEN 1160/python
tcp 0 0 0.0.0.0:8774 0.0.0.0:* LISTEN 1160/python
tcp 0 0 0.0.0.0:9191 0.0.0.0:* LISTEN 1173/python
tcp 0 0 0.0.0.0:8776 0.0.0.0:* LISTEN 8169/python
tcp 0 0 0.0.0.0:5000 0.0.0.0:* LISTEN 1163/python
tcp 0 0 0.0.0.0:9292 0.0.0.0:* LISTEN 1168/python
**********************************************
Creating an instance utilizing a glusterfs volume
**********************************************
[root@dallas1 ~(keystone_boris)]$ glance image-list
+————————————–+———————+————-+——————+———–+——–+
| ID | Name | Disk Format | Container Format | Size | Status |
+————————————–+———————+————-+——————+———–+——–+
| 592faef8-308a-4438-867a-17adf685cde4 | CirrOS 31 | qcow2 | bare | 13147648 | active |
| d0e90250-5814-4685-9b8d-65ec9daa7117 | Fedora 20 x86_64 | qcow2 | bare | 214106112 | active |
| 3e6eea8e-32e6-4373-9eb1-e04b8a3167f9 | Ubuntu Server 13.10 | qcow2 | bare | 244777472 | active |
+————————————–+———————+————-+——————+———–+——–+
Note that the scheme with `cinder create --image-id .. --display_name VOL_NAME SIZE` followed by `nova boot --flavor 2 --user-data=./myfile.txt --block_device_mapping vda=volume_id:::0 VM_NAME` was not working reliably for me at the time of writing.
As of 03/11, the standard scheme via `cinder create --image-id IMAGE_ID --display_name VOL_NAME SIZE` followed by `nova boot --flavor 2 --user-data=./myfile.txt --block_device_mapping vda=VOLUME_ID:::0 INSTANCE_NAME` started to work fine. However, the scheme described below, on the contrary, stopped working on glusterfs-based cinder volumes.
[root@dallas1 ~(keystone_boris)]$ nova boot --flavor 2 --user-data=./myfile.txt --block-device source=image,id=d0e90250-5814-4685-9b8d-65ec9daa7117,dest=volume,size=5,shutdown=preserve,bootindex=0 VF20RS012
+————————————–+————————————————-+
| Property | Value |
+————————————–+————————————————-+
| status | BUILD |
| updated | 2014-03-09T12:41:22Z |
| OS-EXT-STS:task_state | scheduling |
| key_name | None |
| image | Attempt to boot from volume – no image supplied |
| hostId | |
| OS-EXT-STS:vm_state | building |
| OS-SRV-USG:launched_at | None |
| flavor | m1.small |
| id | 8142ee4c-ef56-4b61-8a0b-ecd82d21484f |
| security_groups | [{u’name’: u’default’}] |
| OS-SRV-USG:terminated_at | None |
| user_id | df4a984ce2f24848a6b84aaa99e296f1 |
| name | VF20RS012 |
| adminPass | eFDhC8ZSCFU2 |
| tenant_id | e896be65e94a4893b870bc29ba86d7eb |
| created | 2014-03-09T12:41:22Z |
| OS-DCF:diskConfig | MANUAL |
| metadata | {} |
| os-extended-volumes:volumes_attached | [] |
| accessIPv4 | |
| accessIPv6 | |
| progress | 0 |
| OS-EXT-STS:power_state | 0 |
| OS-EXT-AZ:availability_zone | nova |
| config_drive | |
+————————————–+————————————————-+
[root@dallas1 ~(keystone_boris)]$ nova list
+————————————–+———–+———–+———————-+————-+—————————–+
| ID | Name | Status | Task State | Power State | Networks |
+————————————–+———–+———–+———————-+————-+—————————–+
| 9566adec-9406-4c3e-bce5-109ecb8bcf6b | VF20RS007 | SUSPENDED | None | Shutdown | int=10.0.0.2, 192.168.1.101 |
| 8142ee4c-ef56-4b61-8a0b-ecd82d21484f | VF20RS012 | BUILD | block_device_mapping | NOSTATE | |
+————————————–+———–+———–+———————-+————-+—————————–+
WAIT …
[root@dallas1 ~(keystone_boris)]$ nova list
+————————————–+———–+———–+————+————-+—————————–+
| ID | Name | Status | Task State | Power State | Networks |
+————————————–+———–+———–+————+————-+—————————–+
| 9566adec-9406-4c3e-bce5-109ecb8bcf6b | VF20RS007 | SUSPENDED | None | Shutdown | int=10.0.0.2, 192.168.1.101 |
| 8142ee4c-ef56-4b61-8a0b-ecd82d21484f | VF20RS012 | ACTIVE | None | Running | int=10.0.0.4 |
+————————————–+———–+———–+————+————-+—————————–+
[root@dallas1 ~(keystone_boris)]$ neutron floatingip-create ext
Created a new floatingip:
+———————+————————————–+
| Field | Value |
+———————+————————————–+
| fixed_ip_address | |
| floating_ip_address | 192.168.1.102 |
| floating_network_id | 0ed406bf-3552-4036-9006-440f3e69618e |
| id | 5c74667d-9b22-4092-ae0a-70ff3a06e785 |
| port_id | |
| router_id | |
| tenant_id | e896be65e94a4893b870bc29ba86d7eb |
+———————+————————————–+
[root@dallas1 ~(keystone_boris)]$ neutron port-list --device-id 8142ee4c-ef56-4b61-8a0b-ecd82d21484f
+————————————–+——+——————-+———————————————————————————+
| id | name | mac_address | fixed_ips |
+————————————–+——+——————-+———————————————————————————+
| dc60b5f4-739e-49bd-a004-3ef806e2b488 | | fa:16:3e:70:56:cc | {“subnet_id”: “2e838119-3e2e-46e8-b7cc-6d00975046f2”, “ip_address”: “10.0.0.4”} |
+————————————–+——+——————-+———————————————————————————+
[root@dallas1 ~(keystone_boris)]$ neutron floatingip-associate 5c74667d-9b22-4092-ae0a-70ff3a06e785 dc60b5f4-739e-49bd-a004-3ef806e2b488
Associated floatingip 5c74667d-9b22-4092-ae0a-70ff3a06e785
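The three floating-IP steps above (create the floating IP, look up the instance's port, associate the two) can be glued together with a little table parsing. A sketch assuming the default neutron CLI table format shown in this post; trimmed sample rows are embedded so the parsing can be tried offline — on a live controller replace the here-docs with the real `neutron floatingip-create ext` and `neutron port-list --device-id INSTANCE_ID` output:

```shell
#!/bin/sh
# Extract the floating-IP id and the instance port id from neutron's
# table output, then print the association command.
# Sample rows (from the output above) embedded for offline illustration.
fip_out=$(cat <<'EOF'
| id                  | 5c74667d-9b22-4092-ae0a-70ff3a06e785 |
EOF
)
port_out=$(cat <<'EOF'
| dc60b5f4-739e-49bd-a004-3ef806e2b488 |      | fa:16:3e:70:56:cc |
EOF
)
# Field 3 of the "| id | ... |" row is the floating-IP id
FIP_ID=$(printf '%s\n' "$fip_out" | awk -F'|' '/\| id / {gsub(/ /,"",$3); print $3}')
# Field 2 of the first port row is the port id
PORT_ID=$(printf '%s\n' "$port_out" | awk -F'|' 'NR==1 {gsub(/ /,"",$2); print $2}')
# On a live controller, drop the echo to run the association directly
echo "neutron floatingip-associate $FIP_ID $PORT_ID"
```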
[root@dallas1 ~(keystone_boris)]$ ping 192.168.1.102
PING 192.168.1.102 (192.168.1.102) 56(84) bytes of data.
64 bytes from 192.168.1.102: icmp_seq=1 ttl=63 time=6.23 ms
64 bytes from 192.168.1.102: icmp_seq=2 ttl=63 time=0.702 ms
64 bytes from 192.168.1.102: icmp_seq=3 ttl=63 time=1.07 ms
64 bytes from 192.168.1.102: icmp_seq=4 ttl=63 time=0.693 ms
64 bytes from 192.168.1.102: icmp_seq=5 ttl=63 time=1.80 ms
64 bytes from 192.168.1.102: icmp_seq=6 ttl=63 time=0.750 ms
^C
[root@dallas1 ~(keystone_boris)]$ nova list
+————————————–+———–+——–+————+————-+—————————–+
| ID | Name | Status | Task State | Power State | Networks |
+————————————–+———–+——–+————+————-+—————————–+
| 9566adec-9406-4c3e-bce5-109ecb8bcf6b | VF20RS007 | ACTIVE | None | Running | int=10.0.0.2, 192.168.1.101 |
| 8142ee4c-ef56-4b61-8a0b-ecd82d21484f | VF20RS012 | ACTIVE | None | Running | int=10.0.0.4, 192.168.1.102 |
+————————————–+———–+——–+————+————-+—————————–+
[root@dallas1 ~(keystone_boris)]$ cinder list
+————————————–+——–+————–+——+————-+———-+————————————–+
| ID | Status | Display Name | Size | Volume Type | Bootable | Attached to |
+————————————–+——–+————–+——+————-+———-+————————————–+
| 575be853-b104-458e-bc72-1785ef524416 | in-use | | 5 | None | true | 8142ee4c-ef56-4b61-8a0b-ecd82d21484f |
| 9794bd45-8923-4f3e-a48f-fa1d62a964f8 | in-use | | 5 | None | true | 9566adec-9406-4c3e-bce5-109ecb8bcf6b |
+————————————–+——–+————–+——+————-+———-+————————————–+
On Compute:-
[root@dallas1 ~]# ssh 192.168.1.140
Last login: Sun Mar 9 16:46:40 2014
[root@dallas2 ~]# df -h
Filesystem Size Used Avail Use% Mounted on
/dev/mapper/fedora01-root 187G 18G 160G 11% /
devtmpfs 3.9G 0 3.9G 0% /dev
tmpfs 3.9G 3.1M 3.9G 1% /dev/shm
tmpfs 3.9G 9.4M 3.9G 1% /run
tmpfs 3.9G 0 3.9G 0% /sys/fs/cgroup
tmpfs 3.9G 115M 3.8G 3% /tmp
/dev/sdb5 477M 122M 327M 28% /boot
192.168.1.130:cinder-volumes012 187G 32G 146G 18% /var/lib/nova/mnt/1c9688348ab38662e3ac8fb121077d34
[root@dallas2 ~]# ps -ef| grep nova
nova 1548 1 0 16:29 ? 00:00:42 /usr/bin/python /usr/bin/nova-compute --logfile /var/log/nova/compute.log
root 3005 1 0 16:34 ? 00:00:38 /usr/sbin/glusterfs --volfile-id=cinder-volumes012 --volfile-server=192.168.1.130 /var/lib/nova/mnt/1c9688348ab38662e3ac8fb121077d34
qemu 4762 1 58 16:42 ? 00:52:17 /usr/bin/qemu-system-x86_64 -name instance-00000061 -S -machine pc-i440fx-1.6,accel=tcg,usb=off -cpu Penryn,+osxsave,+xsave,+pdcm,+xtpr,+tm2,+est,+smx,+vmx,+ds_cpl,+monitor,+dtes64,+pbe,+tm,+ht,+ss,+acpi,+ds,+vme -m 2048 -realtime mlock=off -smp 1,sockets=1,cores=1,threads=1 -uuid 8142ee4c-ef56-4b61-8a0b-ecd82d21484f -smbios type=1,manufacturer=Fedora Project,product=OpenStack Nova,version=2013.2.2-1.fc20,serial=6050001e-8c00-00ac-818a-90e6ba2d11eb,uuid=8142ee4c-ef56-4b61-8a0b-ecd82d21484f -no-user-config -nodefaults -chardev socket,id=charmonitor,path=/var/lib/libvirt/qemu/instance-00000061.monitor,server,nowait -mon chardev=charmonitor,id=monitor,mode=control -rtc base=utc -no-shutdown -device piix3-usb-uhci,id=usb,bus=pci.0,addr=0x1.0x2 -drive file=/var/lib/nova/mnt/1c9688348ab38662e3ac8fb121077d34/volume-575be853-b104-458e-bc72-1785ef524416,if=none,id=drive-virtio-disk0,format=raw,serial=575be853-b104-458e-bc72-1785ef524416,cache=none -device virtio-blk-pci,scsi=off,bus=pci.0,addr=0x4,drive=drive-virtio-disk0,id=virtio-disk0,bootindex=1 -netdev tap,fd=24,id=hostnet0 -device virtio-net-pci,netdev=hostnet0,id=net0,mac=fa:16:3e:70:56:cc,bus=pci.0,addr=0x3 -chardev file,id=charserial0,path=/var/lib/nova/instances/8142ee4c-ef56-4b61-8a0b-ecd82d21484f/console.log -device isa-serial,chardev=charserial0,id=serial0 -chardev pty,id=charserial1 -device isa-serial,chardev=charserial1,id=serial1 -device usb-tablet,id=input0 -vnc 127.0.0.1:0 -k en-us -device cirrus-vga,id=video0,bus=pci.0,addr=0x2 -device virtio-balloon-pci,id=balloon0,bus=pci.0,addr=0x5
qemu 6330 1 44 16:49 ? 00:36:02 /usr/bin/qemu-system-x86_64 -name instance-0000005f -S -machine pc-i440fx-1.6,accel=tcg,usb=off -cpu Penryn,+osxsave,+xsave,+pdcm,+xtpr,+tm2,+est,+smx,+vmx,+ds_cpl,+monitor,+dtes64,+pbe,+tm,+ht,+ss,+acpi,+ds,+vme -m 2048 -realtime mlock=off -smp 1,sockets=1,cores=1,threads=1 -uuid 9566adec-9406-4c3e-bce5-109ecb8bcf6b -smbios type=1,manufacturer=Fedora Project,product=OpenStack Nova,version=2013.2.2-1.fc20,serial=6050001e-8c00-00ac-818a-90e6ba2d11eb,uuid=9566adec-9406-4c3e-bce5-109ecb8bcf6b -no-user-config -nodefaults -chardev socket,id=charmonitor,path=/var/lib/libvirt/qemu/instance-0000005f.monitor,server,nowait -mon chardev=charmonitor,id=monitor,mode=control -rtc base=utc -no-shutdown -device piix3-usb-uhci,id=usb,bus=pci.0,addr=0x1.0x2 -drive file=/var/lib/nova/mnt/1c9688348ab38662e3ac8fb121077d34/volume-9794bd45-8923-4f3e-a48f-fa1d62a964f8,if=none,id=drive-virtio-disk0,format=raw,serial=9794bd45-8923-4f3e-a48f-fa1d62a964f8,cache=none -device virtio-blk-pci,scsi=off,bus=pci.0,addr=0x4,drive=drive-virtio-disk0,id=virtio-disk0,bootindex=1 -netdev tap,fd=27,id=hostnet0 -device virtio-net-pci,netdev=hostnet0,id=net0,mac=fa:16:3e:50:84:72,bus=pci.0,addr=0x3 -chardev file,id=charserial0,path=/var/lib/nova/instances/9566adec-9406-4c3e-bce5-109ecb8bcf6b/console.log -device isa-serial,chardev=charserial0,id=serial0 -chardev pty,id=charserial1 -device isa-serial,chardev=charserial1,id=serial1 -device usb-tablet,id=input0 -vnc 127.0.0.1:1 -k en-us -device cirrus-vga,id=video0,bus=pci.0,addr=0x2 -incoming fd:24 -device virtio-balloon-pci,id=balloon0,bus=pci.0,addr=0x5
root 24713 24622 0 18:11 pts/4 00:00:00 grep --color=auto nova
[root@dallas2 ~]# ps -ef| grep neutron
neutron 1549 1 0 16:29 ? 00:00:53 /usr/bin/python /usr/bin/neutron-openvswitch-agent --config-file /usr/share/neutron/neutron-dist.conf --config-file /etc/neutron/neutron.conf --config-file /etc/neutron/plugins/openvswitch/ovs_neutron_plugin.ini --log-file /var/log/neutron/openvswitch-agent.log
root 24981 24622 0 18:12 pts/4 00:00:00 grep --color=auto neutron
Top at the Compute node (192.168.1.140)
Runtime at the Compute node (dallas2, 192.168.1.140)
******************************************************
Building Ubuntu 14.04 instance via cinder volume
******************************************************
[root@dallas1 ~(keystone_boris)]$ glance image-list
+————————————–+———————+————-+——————+———–+——–+
| ID | Name | Disk Format | Container Format | Size | Status |
+————————————–+———————+————-+——————+———–+——–+
| 592faef8-308a-4438-867a-17adf685cde4 | CirrOS 31 | qcow2 | bare | 13147648 | active |
| d0e90250-5814-4685-9b8d-65ec9daa7117 | Fedora 20 x86_64 | qcow2 | bare | 214106112 | active |
| c2b7c3ed-e25d-44c4-a5e7-4e013c4a8b00 | Ubuntu 14.04 | qcow2 | bare | 264176128 | active |
| 3e6eea8e-32e6-4373-9eb1-e04b8a3167f9 | Ubuntu Server 13.10 | qcow2 | bare | 244777472 | active |
+————————————–+———————+————-+——————+———–+——–+
[root@dallas1 ~(keystone_boris)]$ cinder create --image-id c2b7c3ed-e25d-44c4-a5e7-4e013c4a8b00 --display_name UbuntuTrusty 5
+———————+————————————–+
| Property | Value |
+———————+————————————–+
| attachments | [] |
| availability_zone | nova |
| bootable | false |
| created_at | 2014-03-10T06:35:39.873978 |
| display_description | None |
| display_name | UbuntuTrusty |
| id | 8bcc02a7-b9ba-4cd6-a6b9-0574889bf8d2 |
| image_id | c2b7c3ed-e25d-44c4-a5e7-4e013c4a8b00 |
| metadata | {} |
| size | 5 |
| snapshot_id | None |
| source_volid | None |
| status | creating |
| volume_type | None |
+———————+————————————–+
[root@dallas1 ~(keystone_boris)]$ cinder list
+————————————–+———–+————–+——+————-+———-+————————————–+
| ID | Status | Display Name | Size | Volume Type | Bootable | Attached to |
+————————————–+———–+————–+——+————-+———-+————————————–+
| 56ceaaa8-c0ec-45f3-98a4-555c1231b34e | in-use | | 5 | None | true | e29606c5-582f-4766-ae1b-52043a698743 |
| 575be853-b104-458e-bc72-1785ef524416 | in-use | | 5 | None | true | 8142ee4c-ef56-4b61-8a0b-ecd82d21484f |
| 8bcc02a7-b9ba-4cd6-a6b9-0574889bf8d2 | available | UbuntuTrusty | 5 | None | true | |
| 9794bd45-8923-4f3e-a48f-fa1d62a964f8 | in-use | | 5 | None | true | 9566adec-9406-4c3e-bce5-109ecb8bcf6b |
+————————————–+———–+————–+——+————-+———-+————————————–+
[root@dallas1 ~(keystone_boris)]$ nova boot --flavor 2 --user-data=./myfile.txt --block_device_mapping vda=8bcc02a7-b9ba-4cd6-a6b9-0574889bf8d2:::0 UbuntuTR01
+————————————–+—————————————————-+
| Property | Value |
+————————————–+—————————————————-+
| status | BUILD |
| updated | 2014-03-10T06:40:14Z |
| OS-EXT-STS:task_state | scheduling |
| key_name | None |
| image | Attempt to boot from volume – no image supplied |
| hostId | |
| OS-EXT-STS:vm_state | building |
| OS-SRV-USG:launched_at | None |
| flavor | m1.small |
| id | 0859e52d-c07b-4f56-ac79-2b37080d2843 |
| security_groups | [{u’name’: u’default’}] |
| OS-SRV-USG:terminated_at | None |
| user_id | df4a984ce2f24848a6b84aaa99e296f1 |
| name | UbuntuTR01 |
| adminPass | L8VuhttJMbJf |
| tenant_id | e896be65e94a4893b870bc29ba86d7eb |
| created | 2014-03-10T06:40:13Z |
| OS-DCF:diskConfig | MANUAL |
| metadata | {} |
| os-extended-volumes:volumes_attached | [{u’id’: u’8bcc02a7-b9ba-4cd6-a6b9-0574889bf8d2′}] |
| accessIPv4 | |
| accessIPv6 | |
| progress | 0 |
| OS-EXT-STS:power_state | 0 |
| OS-EXT-AZ:availability_zone | nova |
| config_drive | |
+————————————–+—————————————————-+
[root@dallas1 ~(keystone_boris)]$ nova list
+————————————–+————+———–+————+————-+—————————–+
| ID | Name | Status | Task State | Power State | Networks |
+————————————–+————+———–+————+————-+—————————–+
| 0859e52d-c07b-4f56-ac79-2b37080d2843 | UbuntuTR01 | ACTIVE | None | Running | int=10.0.0.6 |
| 9566adec-9406-4c3e-bce5-109ecb8bcf6b | VF20RS007 | SUSPENDED | None | Shutdown | int=10.0.0.2, 192.168.1.101 |
| 8142ee4c-ef56-4b61-8a0b-ecd82d21484f | VF20RS012 | SUSPENDED | None | Shutdown | int=10.0.0.4, 192.168.1.102 |
| e29606c5-582f-4766-ae1b-52043a698743 | VF20RS016 | ACTIVE | None | Running | int=10.0.0.5, 192.168.1.103 |
+————————————–+————+———–+————+————-+—————————–+
[root@dallas1 ~(keystone_boris)]$ neutron floatingip-create ext
Created a new floatingip:
+———————+————————————–+
| Field | Value |
+———————+————————————–+
| fixed_ip_address | |
| floating_ip_address | 192.168.1.104 |
| floating_network_id | 0ed406bf-3552-4036-9006-440f3e69618e |
| id | 9498ac85-82b0-468a-b526-64a659080ab9 |
| port_id | |
| router_id | |
| tenant_id | e896be65e94a4893b870bc29ba86d7eb |
+———————+————————————–+
[root@dallas1 ~(keystone_boris)]$ neutron port-list --device-id 0859e52d-c07b-4f56-ac79-2b37080d2843
+————————————–+——+——————-+———————————————————————————+
| id | name | mac_address | fixed_ips |
+————————————–+——+——————-+———————————————————————————+
| 1f02fe57-d844-4fd8-a325-646f27163c8b | | fa:16:3e:3f:a3:d4 | {“subnet_id”: “2e838119-3e2e-46e8-b7cc-6d00975046f2”, “ip_address”: “10.0.0.6”} |
+————————————–+——+——————-+———————————————————————————+
[root@dallas1 ~(keystone_boris)]$ neutron floatingip-associate 9498ac85-82b0-468a-b526-64a659080ab9 1f02fe57-d844-4fd8-a325-646f27163c8b
Associated floatingip 9498ac85-82b0-468a-b526-64a659080ab9
[root@dallas1 ~(keystone_boris)]$ ping 192.168.1.104
PING 192.168.1.104 (192.168.1.104) 56(84) bytes of data.
64 bytes from 192.168.1.104: icmp_seq=1 ttl=63 time=2.35 ms
64 bytes from 192.168.1.104: icmp_seq=2 ttl=63 time=2.56 ms
64 bytes from 192.168.1.104: icmp_seq=3 ttl=63 time=1.17 ms
64 bytes from 192.168.1.104: icmp_seq=4 ttl=63 time=4.08 ms
64 bytes from 192.168.1.104: icmp_seq=5 ttl=63 time=2.19 ms
^C