RDO Kilo DVR Deployment (Controller/Network)+Compute+Compute (ML2&OVS&VXLAN) on CentOS 7.1

September 30, 2015

Per http://specs.openstack.org/openstack/neutron-specs/specs/juno/neutron-ovs-dvr.html

1. Neutron DVR implements a fip-namespace on every Compute Node where VMs are running. Thus VMs with floating IPs can forward traffic to the External Network without routing it via the Network Node (North-South routing).
2. Neutron DVR implements L3 routers across the Compute Nodes, so that intra-tenant VM-to-VM communication occurs without involving the Network Node (East-West routing).
3. Neutron Distributed Virtual Router keeps the legacy behavior for the default SNAT of all private VMs: the SNAT service is not distributed; it stays centralized on the service node that hosts it.
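
For reference, once `router_distributed = True` is set on the Controller (shown later in this post) tenant routers are created distributed by default, and an admin can also request DVR explicitly. A minimal sketch, using the RouterDemo router that appears later in this post and assuming an external network named `ext` (the network name is an assumption):

# neutron router-create --distributed True RouterDemo
# neutron router-gateway-set RouterDemo ext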

Setup configuration

- Controller node: Nova, Keystone, Cinder, Glance, Neutron (using the Open vSwitch plugin and VXLAN)

- (2x) Compute node: Nova (nova-compute), Neutron (openvswitch-agent, l3-agent, metadata-agent)

Three CentOS 7.1 VMs (4 GB RAM, 4 VCPUs, 2 VNICs) have been built for testing on a Fedora 22 KVM hypervisor. Two libvirt networks were used: "openstackvms", emulating the External and Mgmt networks 192.169.142.0/24 with gateway virbr1 (192.169.142.1), and "vteps" 10.0.0.0/24, supporting the two VXLAN tunnels between the Controller and the Compute Nodes.

# cat openstackvms.xml
<network>
<name>openstackvms</name>
<uuid>d0e9964a-f91a-40c0-b769-a609aee41bf2</uuid>
<forward mode='nat'>
<nat>
<port start='1024' end='65535'/>
</nat>
</forward>
<bridge name='virbr1' stp='on' delay='0' />
<mac address='52:54:00:60:f8:6d'/>
<ip address='192.169.142.1' netmask='255.255.255.0'>
<dhcp>
<range start='192.169.142.2' end='192.169.142.254' />
</dhcp>
</ip>
</network>

# cat vteps.xml

<network>
<name>vteps</name>
<uuid>d0e9965b-f92c-40c1-b749-b609aed42cf2</uuid>
<forward mode='nat'>
<nat>
<port start='1024' end='65535'/>
</nat>
</forward>
<bridge name='virbr2' stp='on' delay='0' />
<mac address='52:54:00:60:f8:6d'/>
<ip address='10.0.0.1' netmask='255.255.255.0'>
<dhcp>
<range start='10.0.0.2' end='10.0.0.254' />
</dhcp>
</ip>
</network>

# virsh net-define openstackvms.xml
# virsh net-start openstackvms
# virsh net-autostart openstackvms

The second libvirt network may be defined and started the same way, as shown below.
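
For completeness, the corresponding commands for the second network:

# virsh net-define vteps.xml
# virsh net-start vteps
# virsh net-autostart vteps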

ip-192-169-142-127.ip.secureserver.net – Controller/Network Node
ip-192-169-142-137.ip.secureserver.net – Compute Node
ip-192-169-142-147.ip.secureserver.net – Compute Node

Answer File :-

[general]
CONFIG_SSH_KEY=/root/.ssh/id_rsa.pub
CONFIG_DEFAULT_PASSWORD=
CONFIG_MARIADB_INSTALL=y
CONFIG_GLANCE_INSTALL=y
CONFIG_CINDER_INSTALL=y
CONFIG_NOVA_INSTALL=y
CONFIG_NEUTRON_INSTALL=y
CONFIG_HORIZON_INSTALL=y
CONFIG_SWIFT_INSTALL=y
CONFIG_CEILOMETER_INSTALL=y
CONFIG_HEAT_INSTALL=n
CONFIG_CLIENT_INSTALL=y
CONFIG_NTP_SERVERS=
CONFIG_NAGIOS_INSTALL=y
EXCLUDE_SERVERS=
CONFIG_DEBUG_MODE=n
CONFIG_CONTROLLER_HOST=192.169.142.127
CONFIG_COMPUTE_HOSTS=192.169.142.137,192.169.142.147
CONFIG_NETWORK_HOSTS=192.169.142.127
CONFIG_VMWARE_BACKEND=n
CONFIG_UNSUPPORTED=n
CONFIG_VCENTER_HOST=
CONFIG_VCENTER_USER=
CONFIG_VCENTER_PASSWORD=
CONFIG_VCENTER_CLUSTER_NAME=
CONFIG_STORAGE_HOST=192.169.142.127
CONFIG_USE_EPEL=y
CONFIG_REPO=
CONFIG_RH_USER=
CONFIG_SATELLITE_URL=
CONFIG_RH_PW=
CONFIG_RH_OPTIONAL=y
CONFIG_RH_PROXY=
CONFIG_RH_PROXY_PORT=
CONFIG_RH_PROXY_USER=
CONFIG_RH_PROXY_PW=
CONFIG_SATELLITE_USER=
CONFIG_SATELLITE_PW=
CONFIG_SATELLITE_AKEY=
CONFIG_SATELLITE_CACERT=
CONFIG_SATELLITE_PROFILE=
CONFIG_SATELLITE_FLAGS=
CONFIG_SATELLITE_PROXY=
CONFIG_SATELLITE_PROXY_USER=
CONFIG_SATELLITE_PROXY_PW=
CONFIG_AMQP_BACKEND=rabbitmq
CONFIG_AMQP_HOST=192.169.142.127
CONFIG_AMQP_ENABLE_SSL=n
CONFIG_AMQP_ENABLE_AUTH=n
CONFIG_AMQP_NSS_CERTDB_PW=PW_PLACEHOLDER
CONFIG_AMQP_SSL_PORT=5671
CONFIG_AMQP_SSL_CERT_FILE=/etc/pki/tls/certs/amqp_selfcert.pem
CONFIG_AMQP_SSL_KEY_FILE=/etc/pki/tls/private/amqp_selfkey.pem
CONFIG_AMQP_SSL_SELF_SIGNED=y
CONFIG_AMQP_AUTH_USER=amqp_user
CONFIG_AMQP_AUTH_PASSWORD=PW_PLACEHOLDER
CONFIG_MARIADB_HOST=192.169.142.127
CONFIG_MARIADB_USER=root
CONFIG_MARIADB_PW=7207ae344ed04957
CONFIG_KEYSTONE_DB_PW=abcae16b785245c3
CONFIG_KEYSTONE_REGION=RegionOne
CONFIG_KEYSTONE_ADMIN_TOKEN=3ad2de159f9649afb0c342ba57e637d9
CONFIG_KEYSTONE_ADMIN_PW=7049f834927e4468
CONFIG_KEYSTONE_DEMO_PW=bf737b785cfa4398
CONFIG_KEYSTONE_TOKEN_FORMAT=UUID
CONFIG_KEYSTONE_SERVICE_NAME=httpd
CONFIG_GLANCE_DB_PW=41264fc52ffd4fe8
CONFIG_GLANCE_KS_PW=f6a9398960534797
CONFIG_GLANCE_BACKEND=file
CONFIG_CINDER_DB_PW=5ac08c6d09ba4b69
CONFIG_CINDER_KS_PW=c8cb1ecb8c2b4f6f
CONFIG_CINDER_BACKEND=lvm
CONFIG_CINDER_VOLUMES_CREATE=y
CONFIG_CINDER_VOLUMES_SIZE=5G
CONFIG_CINDER_GLUSTER_MOUNTS=
CONFIG_CINDER_NFS_MOUNTS=
CONFIG_CINDER_NETAPP_LOGIN=
CONFIG_CINDER_NETAPP_PASSWORD=
CONFIG_CINDER_NETAPP_HOSTNAME=
CONFIG_CINDER_NETAPP_SERVER_PORT=80
CONFIG_CINDER_NETAPP_STORAGE_FAMILY=ontap_cluster
CONFIG_CINDER_NETAPP_TRANSPORT_TYPE=http
CONFIG_CINDER_NETAPP_STORAGE_PROTOCOL=nfs
CONFIG_CINDER_NETAPP_SIZE_MULTIPLIER=1.0
CONFIG_CINDER_NETAPP_EXPIRY_THRES_MINUTES=720
CONFIG_CINDER_NETAPP_THRES_AVL_SIZE_PERC_START=20
CONFIG_CINDER_NETAPP_THRES_AVL_SIZE_PERC_STOP=60
CONFIG_CINDER_NETAPP_NFS_SHARES_CONFIG=
CONFIG_CINDER_NETAPP_VOLUME_LIST=
CONFIG_CINDER_NETAPP_VFILER=
CONFIG_CINDER_NETAPP_VSERVER=
CONFIG_CINDER_NETAPP_CONTROLLER_IPS=
CONFIG_CINDER_NETAPP_SA_PASSWORD=
CONFIG_CINDER_NETAPP_WEBSERVICE_PATH=/devmgr/v2
CONFIG_CINDER_NETAPP_STORAGE_POOLS=
CONFIG_NOVA_DB_PW=1e1b5aeeeaf342a8
CONFIG_NOVA_KS_PW=d9583177a2444f06
CONFIG_NOVA_SCHED_CPU_ALLOC_RATIO=16.0
CONFIG_NOVA_SCHED_RAM_ALLOC_RATIO=1.5
CONFIG_NOVA_COMPUTE_MIGRATE_PROTOCOL=tcp
CONFIG_NOVA_COMPUTE_PRIVIF=eth1
CONFIG_NOVA_NETWORK_MANAGER=nova.network.manager.FlatDHCPManager
CONFIG_NOVA_NETWORK_PUBIF=eth0
CONFIG_NOVA_NETWORK_PRIVIF=eth1
CONFIG_NOVA_NETWORK_FIXEDRANGE=192.168.32.0/22
CONFIG_NOVA_NETWORK_FLOATRANGE=10.3.4.0/22
CONFIG_NOVA_NETWORK_DEFAULTFLOATINGPOOL=nova
CONFIG_NOVA_NETWORK_AUTOASSIGNFLOATINGIP=n
CONFIG_NOVA_NETWORK_VLAN_START=100
CONFIG_NOVA_NETWORK_NUMBER=1
CONFIG_NOVA_NETWORK_SIZE=255
CONFIG_NEUTRON_KS_PW=808e36e154bd4cee
CONFIG_NEUTRON_DB_PW=0e2b927a21b44737
CONFIG_NEUTRON_L3_EXT_BRIDGE=br-ex
CONFIG_NEUTRON_L2_PLUGIN=ml2
CONFIG_NEUTRON_METADATA_PW=a965cd23ed2f4502
CONFIG_LBAAS_INSTALL=n
CONFIG_NEUTRON_METERING_AGENT_INSTALL=n
CONFIG_NEUTRON_FWAAS=n
CONFIG_NEUTRON_ML2_TYPE_DRIVERS=vxlan
CONFIG_NEUTRON_ML2_TENANT_NETWORK_TYPES=vxlan
CONFIG_NEUTRON_ML2_MECHANISM_DRIVERS=openvswitch
CONFIG_NEUTRON_ML2_FLAT_NETWORKS=*
CONFIG_NEUTRON_ML2_VLAN_RANGES=
CONFIG_NEUTRON_ML2_TUNNEL_ID_RANGES=1001:2000
CONFIG_NEUTRON_ML2_VXLAN_GROUP=239.1.1.2
CONFIG_NEUTRON_ML2_VNI_RANGES=1001:2000
CONFIG_NEUTRON_L2_AGENT=openvswitch
CONFIG_NEUTRON_LB_TENANT_NETWORK_TYPE=local
CONFIG_NEUTRON_LB_VLAN_RANGES=
CONFIG_NEUTRON_LB_INTERFACE_MAPPINGS=
CONFIG_NEUTRON_OVS_TENANT_NETWORK_TYPE=vxlan
CONFIG_NEUTRON_OVS_VLAN_RANGES=
CONFIG_NEUTRON_OVS_BRIDGE_MAPPINGS=physnet1:br-ex
CONFIG_NEUTRON_OVS_BRIDGE_IFACES=
CONFIG_NEUTRON_OVS_TUNNEL_RANGES=1001:2000
CONFIG_NEUTRON_OVS_TUNNEL_IF=eth1
CONFIG_NEUTRON_OVS_VXLAN_UDP_PORT=4789
CONFIG_HORIZON_SSL=n
CONFIG_SSL_CERT=
CONFIG_SSL_KEY=
CONFIG_SSL_CACHAIN=
CONFIG_SWIFT_KS_PW=8f75bfd461234c30
CONFIG_SWIFT_STORAGES=
CONFIG_SWIFT_STORAGE_ZONES=1
CONFIG_SWIFT_STORAGE_REPLICAS=1
CONFIG_SWIFT_STORAGE_FSTYPE=ext4
CONFIG_SWIFT_HASH=a60aacbedde7429a
CONFIG_SWIFT_STORAGE_SIZE=2G
CONFIG_PROVISION_DEMO=y
CONFIG_PROVISION_TEMPEST=n
CONFIG_PROVISION_TEMPEST_USER=
CONFIG_PROVISION_TEMPEST_USER_PW=44faa4ebc3da4459
CONFIG_PROVISION_DEMO_FLOATRANGE=172.24.4.224/28
CONFIG_PROVISION_TEMPEST_REPO_URI=https://github.com/openstack/tempest.git
CONFIG_PROVISION_TEMPEST_REPO_REVISION=master
CONFIG_PROVISION_ALL_IN_ONE_OVS_BRIDGE=n
CONFIG_HEAT_DB_PW=PW_PLACEHOLDER
CONFIG_HEAT_AUTH_ENC_KEY=fc3fb7fee61e46b0
CONFIG_HEAT_KS_PW=PW_PLACEHOLDER
CONFIG_HEAT_CLOUDWATCH_INSTALL=n
CONFIG_HEAT_USING_TRUSTS=y
CONFIG_HEAT_CFN_INSTALL=n
CONFIG_HEAT_DOMAIN=heat
CONFIG_HEAT_DOMAIN_ADMIN=heat_admin
CONFIG_HEAT_DOMAIN_PASSWORD=PW_PLACEHOLDER
CONFIG_CEILOMETER_SECRET=19ae0e7430174349
CONFIG_CEILOMETER_KS_PW=337b08d4b3a44753
CONFIG_MONGODB_HOST=192.169.142.127
CONFIG_NAGIOS_PW=02f168ee8edd44e4
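
With the answer file in place, the deployment itself is a single packstack run from the Controller against this file; a sketch (the answer-file name is an assumption):

# packstack --answer-file=./answer-file.txt
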
********************************************************
On the Controller (X=2) and on the Computes (X=3,4) update :-
********************************************************

# cat ifcfg-br-ex
DEVICE="br-ex"
BOOTPROTO="static"
IPADDR="192.169.142.1(X)7"
NETMASK="255.255.255.0"
DNS1="83.221.202.254"
BROADCAST="192.169.142.255"
GATEWAY="192.169.142.1"
NM_CONTROLLED="no"
DEFROUTE="yes"
IPV4_FAILURE_FATAL="yes"
IPV6INIT=no
ONBOOT="yes"
TYPE="OVSIntPort"
OVS_BRIDGE=br-ex
DEVICETYPE="ovs"

# cat ifcfg-eth0
DEVICE="eth0"
ONBOOT="yes"
TYPE="OVSPort"
DEVICETYPE="ovs"
OVS_BRIDGE=br-ex
NM_CONTROLLED=no
IPV6INIT=no
***********
Then
***********

# chkconfig network on
# systemctl stop NetworkManager
# systemctl disable NetworkManager
# service network restart

Reboot

*****************************************
On Controller update neutron.conf
*****************************************

router_distributed = True
dvr_base_mac = fa:16:3f:00:00:00
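
The same two settings may be applied non-interactively with openstack-config (the crudini-style tool shipped with RDO); a sketch:

# openstack-config --set /etc/neutron/neutron.conf DEFAULT router_distributed True
# openstack-config --set /etc/neutron/neutron.conf DEFAULT dvr_base_mac fa:16:3f:00:00:00
# systemctl restart neutron-server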

*****************
On Controller
*****************

[root@ip-192-169-142-127 neutron(keystone_admin)]# cat l3_agent.ini | grep -v ^#| grep -v ^$

[DEFAULT]
debug = False
interface_driver =neutron.agent.linux.interface.OVSInterfaceDriver
use_namespaces = True
handle_internal_only_routers = True
external_network_bridge = br-ex
metadata_port = 9697
send_arp_for_ha = 3
periodic_interval = 40
periodic_fuzzy_delay = 5
enable_metadata_proxy = True
router_delete_namespaces = False
agent_mode = dvr_snat
allow_automatic_l3agent_failover=False

*********************************
On each Compute Node
*********************************

[root@ip-192-169-142-147 neutron]# cat l3_agent.ini | grep -v ^#| grep -v ^$

[DEFAULT]
interface_driver = neutron.agent.linux.interface.OVSInterfaceDriver
agent_mode = dvr

*******************
On each node
*******************

[root@ip-192-169-142-147 neutron]# cat metadata_agent.ini | grep -v ^#| grep -v ^$

[DEFAULT]
debug = False
auth_url = http://192.169.142.127:35357/v2.0
auth_region = RegionOne
auth_insecure = False
admin_tenant_name = services
admin_user = neutron
admin_password = 808e36e154bd4cee
nova_metadata_ip = 192.169.142.127
nova_metadata_port = 8775
metadata_proxy_shared_secret =a965cd23ed2f4502
metadata_workers =4
metadata_backlog = 4096
cache_url = memory://?default_ttl=5

[root@ip-192-169-142-147 neutron]# cat ml2_conf.ini | grep -v ^#| grep -v ^$

[ml2]
type_drivers = vxlan
tenant_network_types = vxlan
mechanism_drivers =openvswitch,l2population
[ml2_type_flat]
[ml2_type_vlan]
[ml2_type_gre]
[ml2_type_vxlan]
vni_ranges =1001:2000
vxlan_group =239.1.1.2
[securitygroup]
enable_security_group = True
# On Compute nodes
[agent]
l2_population = True

The last [agent] entry is important for the DVR configuration on Kilo (vs. Juno).

[root@ip-192-169-142-147 openvswitch]# cat ovs_neutron_plugin.ini | grep -v ^#| grep -v ^$

[ovs]
enable_tunneling = True
integration_bridge = br-int
tunnel_bridge = br-tun
local_ip =10.0.0.147
bridge_mappings =physnet1:br-ex
[agent]
polling_interval = 2
tunnel_types =vxlan
vxlan_udp_port =4789
l2population = True
enable_distributed_routing = True
arp_responder = True
[securitygroup]
firewall_driver = neutron.agent.linux.iptables_firewall.OVSHybridIptablesFirewallDriver
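
After updating ml2_conf.ini and ovs_neutron_plugin.ini on a Compute node, the L2 agent has to be restarted so that l2population, arp_responder and enable_distributed_routing take effect; a minimal sketch, assuming the standard RDO unit name:

# systemctl restart neutron-openvswitch-agent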

*********************************************************************
On each Compute node neutron-l3-agent and neutron-metadata-agent are
supposed to be started.
*********************************************************************

# yum install openstack-neutron-ml2
# systemctl start neutron-l3-agent
# systemctl start neutron-metadata-agent
# systemctl enable neutron-l3-agent
# systemctl enable neutron-metadata-agent
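
Once the agents are up, it is worth checking from the Controller that every node now reports its L3 agent (the Computes in dvr mode, the Controller in dvr_snat mode); a minimal check:

# neutron agent-list | grep 'L3 agent'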

 

DVR01@Kilo

[root@ip-192-169-142-127 ~(keystone_admin)]# neutron l3-agent-list-hosting-router RouterDemo
+--------------------------------------+----------------------------------------+----------------+-------+----------+
| id                                   | host                                   | admin_state_up | alive | ha_state |
+--------------------------------------+----------------------------------------+----------------+-------+----------+
| 50388b16-4461-441c-83a4-f7e7084ec415 | ip-192-169-142-127.ip.secureserver.net | True           | :-)   |          |
| 7e89d4a7-7ebf-4a7a-9589-f6694e3637d4 | ip-192-169-142-137.ip.secureserver.net | True           | :-)   |          |
| d18cdf01-6814-489d-bef2-5207c1aac0eb | ip-192-169-142-147.ip.secureserver.net | True           | :-)   |          |
+--------------------------------------+----------------------------------------+----------------+-------+----------+
[root@ip-192-169-142-127 ~(keystone_admin)]# neutron agent-show 7e89d4a7-7ebf-4a7a-9589-f6694e3637d4
+---------------------+-------------------------------------------------------------------------------+
| Field               | Value                                                                         |
+---------------------+-------------------------------------------------------------------------------+
| admin_state_up      | True                                                                          |
| agent_type          | L3 agent                                                                      |
| alive               | True                                                                          |
| binary              | neutron-l3-agent                                                              |
| configurations      | {                                                                             |
|                     |      "router_id": "",                                                         |
|                     |      "agent_mode": "dvr",                                                     |
|                     |      "gateway_external_network_id": "",                                       |
|                     |      "handle_internal_only_routers": true,                                    |
|                     |      "use_namespaces": true,                                                  |
|                     |      "routers": 1,                                                            |
|                     |      "interfaces": 1,                                                         |
|                     |      "floating_ips": 1,                                                       |
|                     |      "interface_driver": "neutron.agent.linux.interface.OVSInterfaceDriver",  |
|                     |      "external_network_bridge": "br-ex",                                      |
|                     |      "ex_gw_ports": 1                                                         |
|                     | }                                                                             |
| created_at          | 2015-09-29 07:40:37                                                           |
| description         |                                                                               |
| heartbeat_timestamp | 2015-09-30 09:58:24                                                           |
| host                | ip-192-169-142-137.ip.secureserver.net                                        |
| id                  | 7e89d4a7-7ebf-4a7a-9589-f6694e3637d4                                          |
| started_at          | 2015-09-30 08:08:53                                                           |
| topic               | l3_agent                                                                      |
+---------------------+-------------------------------------------------------------------------------+

DVR02@Kilo

[Screenshots: 2015-09-30 13-41-49 and 2015-09-30 13-43-54]

"Setting up Two Physical-Node OpenStack RDO Havana + Gluster Backend for Cinder + Neutron GRE" on Fedora 20 boxes, with both Controller and Compute nodes each having one Ethernet adapter

January 24, 2014

**************************************************************
UPDATE on 02/23/2014  
**************************************************************

To create a new instance I must have no more than 4 entries in `nova list`. Then I can sequentially restart the qpidd and openstack-nova-scheduler services (actually, this is not always necessary) and I will be able to create one new instance for sure. It has been tested on two "Two Node Neutron GRE+OVS+Gluster Backend for Cinder" clusters. It is related to `nova quota-show` for the tenant (10 instances is the default). Having 3 VMs on the Compute node, I brought up openstack-nova-compute on the Controller and was able to create 2 more VMs on the Compute node and 5 VMs on the Controller.
All testing details are here: http://bderzhavets.blogspot.com/2014/02/next-attempt-to-set-up-two-node-neutron.html. Syntax like:

[root@dallas1 ~(keystone_admin)]$ nova quota-defaults
+-----------------------------+-------+
| Quota                       | Limit |
+-----------------------------+-------+
| instances                   | 10    |
| cores                       | 20    |
| ram                         | 51200 |
| floating_ips                | 10    |
| fixed_ips                   | -1    |
| metadata_items              | 128   |
| injected_files              | 5     |
| injected_file_content_bytes | 10240 |
| injected_file_path_bytes    | 255   |
| key_pairs                   | 100   |
| security_groups             | 10    |
| security_group_rules        | 20    |
+-----------------------------+-------+
[root@dallas1 ~(keystone_admin)]$ nova quota-class-update --instances 20 default

[root@dallas1 ~(keystone_admin)]$ nova quota-defaults
+-----------------------------+-------+
| Quota                       | Limit |
+-----------------------------+-------+
| instances                   | 20    |
| cores                       | 20    |
| ram                         | 51200 |
| floating_ips                | 10    |
| fixed_ips                   | -1    |
| metadata_items              | 128   |
| injected_files              | 5     |
| injected_file_content_bytes | 10240 |
| injected_file_path_bytes    | 255   |
| key_pairs                   | 100   |
| security_groups             | 10    |
| security_group_rules        | 20    |
+-----------------------------+-------+

The `nova quota-class-update` above does not work for me.
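
A per-tenant alternative that may work where quota-class-update does not is `nova quota-update`; a sketch (the tenant id is a placeholder to be looked up first):

# keystone tenant-list
# nova quota-update --instances 20 <tenant-id>
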
****************************************************************

1. F19 and F20 have been installed via glusterfs-based volumes and show good performance on the Compute node. Yum works stably on F19 and a bit slower on F20.
2. CentOS 6.5 was installed only via a glance image (cinder shows ERROR status for the volume); network operations are slower than on the Fedoras.
3. Ubuntu 13.10 Server was installed via a glusterfs-based volume and was able to obtain internal and floating IPs. Network speed is close to Fedora 19.
4. Turning on the Gluster backend for Cinder on the F20 Two-Node Neutron GRE Cluster (Controller+Compute) improves performance significantly. Due to a known F20 bug, the glusterfs filesystem was ext4.
5. On any cloud instance the MTU should be set to 1400 for proper communication through the GRE tunnel (a DHCP-based alternative is sketched just after this list).
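
Rather than running ifconfig inside every instance, the DHCP agent can be told to push MTU 1400 to the instances via dnsmasq option 26; a sketch of that approach (file locations per common RDO practice, not from the original setup):

Add to /etc/neutron/dhcp_agent.ini :-
dnsmasq_config_file = /etc/neutron/dnsmasq-neutron.conf

# cat /etc/neutron/dnsmasq-neutron.conf
dhcp-option-force=26,1400

# service neutron-dhcp-agent restart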

The post below follows up the two Fedora 20 VMs setup described in:
http://kashyapc.fedorapeople.org/virt/openstack/Two-node-Havana-setup.txt
http://kashyapc.fedorapeople.org/virt/openstack/neutron-configs-GRE-OVS-two-node.txt
Both cases, default and non-default libvirt networks, have been tested above.
In the meantime I believe that using libvirt networks for creating the Controller and Compute nodes as F20 VMs is not essential. The configs allow metadata to be sent from Controller to Compute on real physical boxes. Just one Ethernet controller per box should be required when GRE tunnelling is used for an RDO Havana manual setup on Fedora 20.
Currently F20 requires manual intervention in the MariaDB (MySQL) database regarding the presence of the root and nova passwords at the FQDN of the Controller host. I was also never able to start neutron-server via the account suggested by Kashyap, only as root:password@Controller (FQDN). The Neutron openvswitch agent and Neutron L3 agent don't start at the point described in the first manual, only when the Neutron metadata agent is up and running. Notice also that the openstack-nova-conductor and openstack-nova-scheduler services won't start unless the mysql.users table holds the nova account password for the Controller's FQDN. All these updates are reflected in the reference links attached as text docs.

In the author's opinion, the manuals mentioned above require some editing as well.

Manual setup for two different physical boxes running Fedora 20 with the most recent `yum -y update`:

- Controller node: Nova, Keystone, Cinder, Glance, Neutron (using the Open vSwitch plugin and GRE tunneling)

- Compute node: Nova (nova-compute), Neutron (openvswitch-agent)

dfw02.localdomain - Controller (192.168.1.127)

dfw01.localdomain - Compute (192.168.1.137)

Two instances are running on the Compute node :-

VF19RS instance has floating IP 192.168.1.102,

CirrOS 3.1 instance has floating IP 192.168.1.101.

Cloud instances running on the Compute node perform commands like nslookup and traceroute. `yum install` and `yum -y update` work on the Fedora 19 instance; however, in the meantime the network on VF19 is stable but relatively slow. Maybe the Realtek 8169 integrated on the board is not good enough for GRE and it is a problem of my hardware (dfw01 is built up with a Q9550, ASUS P5Q3, 8 GB DDR3, SATA 2 Seagate 500 GB). CentOS 6.5 with "RDO Havana+Glusterfs+Neutron VLAN" works on the same box (dual booting with F20) much faster. That is a first impression. I've also changed neutron.conf's connection credentials to mysql to be able to run the neutron-server service. The Neutron L3 agent and Neutron openvswitch agent require some effort to be started on the Controller.

[root@dfw02 ~(keystone_admin)]$ openstack-status

== Nova services ==

openstack-nova-api:                     active
openstack-nova-cert:                    inactive  (disabled on boot)
openstack-nova-compute:                 inactive  (disabled on boot)
openstack-nova-network:                 inactive  (disabled on boot)
openstack-nova-scheduler:               active
openstack-nova-volume:                  inactive  (disabled on boot)
openstack-nova-conductor:               active
== Glance services ==
openstack-glance-api:                   active
openstack-glance-registry:              active
== Keystone service ==
openstack-keystone:                     active
== neutron services ==
neutron-server:                         active
neutron-dhcp-agent:                     active
neutron-l3-agent:                       active
neutron-metadata-agent:                 active
neutron-lbaas-agent:                    inactive  (disabled on boot)
neutron-openvswitch-agent:              active
neutron-linuxbridge-agent:              inactive  (disabled on boot)
neutron-ryu-agent:                      inactive  (disabled on boot)
neutron-nec-agent:                      inactive  (disabled on boot)
neutron-mlnx-agent:                     inactive  (disabled on boot)
== Cinder services ==
openstack-cinder-api:                   active
openstack-cinder-scheduler:             active
openstack-cinder-volume:                active
== Ceilometer services ==
openstack-ceilometer-api:               inactive  (disabled on boot)
openstack-ceilometer-central:           inactive  (disabled on boot)
openstack-ceilometer-compute:           active
openstack-ceilometer-collector:         inactive  (disabled on boot)
openstack-ceilometer-alarm-notifier:    inactive  (disabled on boot)
openstack-ceilometer-alarm-evaluator:   inactive  (disabled on boot)
== Support services ==
mysqld:                                 inactive  (disabled on boot)
libvirtd:                               active
openvswitch:                            active
dbus:                                   active
tgtd:                                   active
qpidd:                                  active
== Keystone users ==
+----------------------------------+---------+---------+-------+
|                id                |   name  | enabled | email |
+----------------------------------+---------+---------+-------+
| 970ed56ef7bc41d59c54f5ed8a1690dc |  admin  |   True  |       |
| 1beeaa4b20454048bf23f7d63a065137 |  cinder |   True  |       |
| 006c2728df9146bd82fab04232444abf |  glance |   True  |       |
| 5922aa93016344d5a5d49c0a2dab458c | neutron |   True  |       |
| af2f251586564b46a4f60cdf5ff6cf4f |   nova  |   True  |       |
+----------------------------------+---------+---------+-------+
== Glance images ==
+--------------------------------------+------------------+-------------+------------------+-----------+--------+
| ID                                   | Name             | Disk Format | Container Format | Size      | Status |
+--------------------------------------+------------------+-------------+------------------+-----------+--------+
| dc992799-7831-4933-b6ee-7b81868f808b | CirrOS31         | qcow2       | bare             | 13147648  | active |
| 03c9ad20-b0a3-4b71-aa08-2728ecb66210 | Fedora 19 x86_64 | qcow2       | bare             | 237371392 | active |
+--------------------------------------+------------------+-------------+------------------+-----------+--------+
== Nova managed services ==
+----------------+-------------------+----------+---------+-------+----------------------------+-----------------+
| Binary         | Host              | Zone     | Status  | State | Updated_at                 | Disabled Reason |
+----------------+-------------------+----------+---------+-------+----------------------------+-----------------+
| nova-scheduler | dfw02.localdomain | internal | enabled | up    | 2014-01-23T22:36:15.000000 | None            |
| nova-conductor | dfw02.localdomain | internal | enabled | up    | 2014-01-23T22:36:11.000000 | None            |
| nova-compute   | dfw01.localdomain | nova     | enabled | up    | 2014-01-23T22:36:10.000000 | None            |
+----------------+-------------------+----------+---------+-------+----------------------------+-----------------+
== Nova networks ==
+--------------------------------------+-------+------+
| ID                                   | Label | Cidr |
+--------------------------------------+-------+------+
| 1eea88bb-4952-4aa4-9148-18b61c22d5b7 | int   | None |
| 780ce2f3-2e6e-4881-bbac-857813f9a8e0 | ext   | None |
+--------------------------------------+-------+------+
== Nova instance flavors ==
+----+-----------+-----------+------+-----------+------+-------+-------------+-----------+
| ID | Name      | Memory_MB | Disk | Ephemeral | Swap | VCPUs | RXTX_Factor | Is_Public |
+----+-----------+-----------+------+-----------+------+-------+-------------+-----------+
| 1  | m1.tiny   | 512       | 1    | 0         |      | 1     | 1.0         | True      |
| 2  | m1.small  | 2048      | 20   | 0         |      | 1     | 1.0         | True      |
| 3  | m1.medium | 4096      | 40   | 0         |      | 2     | 1.0         | True      |
| 4  | m1.large  | 8192      | 80   | 0         |      | 4     | 1.0         | True      |
| 5  | m1.xlarge | 16384     | 160  | 0         |      | 8     | 1.0         | True      |
+----+-----------+-----------+------+-----------+------+-------+-------------+-----------+
== Nova instances ==
+--------------------------------------+-----------+--------+------------+-------------+-----------------------------+
| ID                                   | Name      | Status | Task State | Power State | Networks                    |
+--------------------------------------+-----------+--------+------------+-------------+-----------------------------+
| 3f2db906-567c-48b0-967e-799b2bffe277 | Cirros312 | ACTIVE | None       | Running     | int=10.0.0.2, 192.168.1.101 |
| 82dfc826-46cd-4b4c-a0f6-bac5f7132dec | VF19RS    | ACTIVE | None       | Running     | int=10.0.0.4, 192.168.1.102 |
+--------------------------------------+-----------+--------+------------+-------------+-----------------------------+
[root@dfw02 ~(keystone_admin)]$ nova-manage service list
Binary           Host                Zone       Status     State  Updated_At
nova-scheduler   dfw02.localdomain   internal   enabled    :-)    2014-01-23 22:39:05
nova-conductor   dfw02.localdomain   internal   enabled    :-)    2014-01-23 22:39:11
nova-compute     dfw01.localdomain   nova       enabled    :-)    2014-01-23 22:39:10
[root@dfw02 ~(keystone_admin)]$ ovs-vsctl show
7d78d536-3612-416e-bce6-24605088212f
    Bridge br-int
        Port br-int
            Interface br-int
                type: internal
        Port patch-tun
            Interface patch-tun
                type: patch
                options: {peer=patch-int}
        Port "tapf933e768-42"
            tag: 1
            Interface "tapf933e768-42"
        Port "tap40dd712c-e4"
            tag: 1
            Interface "tap40dd712c-e4"
    Bridge br-ex
        Port "p37p1"
            Interface "p37p1"
        Port br-ex
            Interface br-ex
                type: internal
        Port "tap54e34740-87"
            Interface "tap54e34740-87"
    Bridge br-tun
        Port br-tun
            Interface br-tun
                type: internal
        Port "gre-2"
            Interface "gre-2"
                type: gre
                options: {in_key=flow, local_ip="192.168.1.127", out_key=flow, remote_ip="192.168.1.137"}
        Port patch-int
            Interface patch-int
                type: patch
                options: {peer=patch-tun}
    ovs_version: "2.0.0"

Running instances on dfw01.localdomain :

[root@dfw02 ~(keystone_admin)]$ nova list

+--------------------------------------+-----------+--------+------------+-------------+-----------------------------+
| ID                                   | Name      | Status | Task State | Power State | Networks                    |
+--------------------------------------+-----------+--------+------------+-------------+-----------------------------+
| 3f2db906-567c-48b0-967e-799b2bffe277 | Cirros312 | ACTIVE | None       | Running     | int=10.0.0.2, 192.168.1.101 |
| 82dfc826-46cd-4b4c-a0f6-bac5f7132dec | VF19RS    | ACTIVE | None       | Running     | int=10.0.0.4, 192.168.1.102 |
+--------------------------------------+-----------+--------+------------+-------------+-----------------------------+
[root@dfw02 ~(keystone_admin)]$ nova-manage service list
Binary           Host                Zone       Status     State  Updated_At
nova-scheduler   dfw02.localdomain   internal   enabled    :-)    2014-01-23 22:25:45
nova-conductor   dfw02.localdomain   internal   enabled    :-)    2014-01-23 22:25:41
nova-compute     dfw01.localdomain   nova       enabled    :-)    2014-01-23 22:25:50

The Fedora 19 instance was loaded via:
[root@dfw02 ~(keystone_admin)]$ nova image-list

+--------------------------------------+------------------+--------+--------+
| ID                                   | Name             | Status | Server |
+--------------------------------------+------------------+--------+--------+
| dc992799-7831-4933-b6ee-7b81868f808b | CirrOS31         | ACTIVE |        |
| 03c9ad20-b0a3-4b71-aa08-2728ecb66210 | Fedora 19 x86_64 | ACTIVE |        |
+--------------------------------------+------------------+--------+--------+

[root@dfw02 ~(keystone_admin)]$ nova boot --flavor 2 --user-data=./myfile.txt --image 03c9ad20-b0a3-4b71-aa08-2728ecb66210 VF19RS

where

[root@dfw02 ~(keystone_admin)]$  cat ./myfile.txt
#cloud-config
password: mysecret
chpasswd: { expire: False }
ssh_pwauth: True

Snapshots were taken on the dfw01 host with VNC consoles opened via virt-manager, and on the dfw02 host via a virt-manager connection to dfw01.

Setup Light Weight X Windows environment on Fedora 20 Cloud instance and running F20 cloud instance in Spice session via virt-manager or spicy http://bderzhavets.blogspot.com/2014/02/setup-light-weight-x-windows_2.html

¬†Next we install X windows on F20 to run fluxbox ( by the way after hours of googling I was unable to find requied set of packages and just picked them up during KDE Env installation via yum , which I actually don’t need at all on cloud instance of Fedora )

# yum install xorg-x11-server-Xorg xorg-x11-xdm fluxbox \
xorg-x11-drv-ati xorg-x11-drv-evdev xorg-x11-drv-fbdev \
xorg-x11-drv-intel xorg-x11-drv-mga xorg-x11-drv-nouveau \
xorg-x11-drv-openchrome xorg-x11-drv-qxl xorg-x11-drv-synaptics \
xorg-x11-drv-vesa xorg-x11-drv-vmmouse xorg-x11-drv-vmware \
xorg-x11-drv-wacom xorg-x11-font-utils xorg-x11-drv-modesetting \
xorg-x11-glamor xorg-x11-utils xterm

# yum install feh xcompmgr lxappearance xscreensaver dmenu

For details see http://blog.bodhizazen.net/linux/a-5-minute-guide-to-fluxbox/

# mkdir ~/.fluxbox/backgrounds

Add to the ~/.fluxbox/menu file:

[submenu] (Wallpapers)
[wallpapers] (~/.fluxbox/backgrounds) {feh --bg-scale}
[end]

to be able to set wallpapers.

Install some fonts :-

# yum install dejavu-fonts-common \
dejavu-sans-fonts \
dejavu-sans-mono-fonts \
dejavu-serif-fonts

We are ready to go :-

# echo "exec fluxbox" > ~/.xinitrc
# startx

To be able to surf the internet, set MTU 1400 (only on cloud instances):
# ifconfig eth0 mtu 1400 up
Otherwise it won't be possible, due to the GRE encapsulation.
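
To make the MTU survive reboots on a Fedora cloud instance, it can also be written into the interface config; a minimal sketch:

# echo "MTU=1400" >> /etc/sysconfig/network-scripts/ifcfg-eth0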

[root@dfw02 ~(keystone_admin)]$ nova list | grep LXW
| 492af969-72c0-4235-ac4e-d75d3778fd0a | VF20LXW          | ACTIVE    | None       | Running     | int=10.0.0.4, 192.168.1.106 |
[root@dfw02 ~(keystone_admin)]$ nova show 492af969-72c0-4235-ac4e-d75d3778fd0a
+--------------------------------------+----------------------------------------------------------+
| Property                             | Value                                                    |
+--------------------------------------+----------------------------------------------------------+
| status                               | ACTIVE                                                   |
| updated                              | 2014-02-06T09:38:52Z                                     |
| OS-EXT-STS:task_state                | None                                                     |
| OS-EXT-SRV-ATTR:host                 | dfw01.localdomain                                        |
| key_name                             | None                                                     |
| image                                | Attempt to boot from volume - no image supplied          |
| int network                          | 10.0.0.4, 192.168.1.106                                  |
| hostId                               | b67c11ccfa8ac8c9ed019934fa650d307f91e7fe7bbf8cd6874f3e01 |
| OS-EXT-STS:vm_state                  | active                                                   |
| OS-EXT-SRV-ATTR:instance_name        | instance-00000021                                        |
| OS-SRV-USG:launched_at               | 2014-02-05T17:47:38.000000                               |
| OS-EXT-SRV-ATTR:hypervisor_hostname  | dfw01.localdomain                                        |
| flavor                               | m1.small (2)                                             |
| id                                   | 492af969-72c0-4235-ac4e-d75d3778fd0a                     |
| security_groups                      | [{u'name': u'default'}]                                  |
| OS-SRV-USG:terminated_at             | None                                                     |
| user_id                              | 970ed56ef7bc41d59c54f5ed8a1690dc                         |
| name                                 | VF20LXW                                                  |
| created                              | 2014-02-05T17:47:33Z                                     |
| tenant_id                            | d0a0acfdb62b4cc8a2bfa8d6a08bb62f                         |
| OS-DCF:diskConfig                    | MANUAL                                                   |
| metadata                             | {}                                                       |
| os-extended-volumes:volumes_attached | [{u'id': u'd0c5706d-4193-4925-9140-29dea801b447'}]       |
| accessIPv4                           |                                                          |
| accessIPv6                           |                                                          |
| progress                             | 0                                                        |
| OS-EXT-STS:power_state               | 1                                                        |
| OS-EXT-AZ:availability_zone          | nova                                                     |
| config_drive                         |                                                          |
+--------------------------------------+----------------------------------------------------------+

Switching to a Spice session improves X-Server behaviour on the F20 cloud instance.

# ssh -L 5900:localhost:5900 -N -f 192.168.1.137   (Compute IP address)
# ssh -L 5901:localhost:5901 -N -f 192.168.1.137   (Compute IP address)
# ssh -L 5902:localhost:5902 -N -f 192.168.1.137   (Compute IP address)
# spicy -h localhost -p 590(X)

View also "Surfing Internet & SSH connectoin on (to) cloud instance of Fedora 20 via Neutron GRE": https://bderzhavets.wordpress.com/2014/02/04/surfing-internet-ssh-connectoin-on-to-cloud-instance-of-fedora-20-via-neutron-gre/

The same command, `ifconfig eth0 mtu 1400 up`, will put ssh to work from the Controller and Compute nodes.

[root@dfw02 nova(keystone_admin)]$ nova list

+--------------------------------------+-----------+-----------+------------+-------------+-----------------------------+
| ID                                   | Name      | Status    | Task State | Power State | Networks                    |
+--------------------------------------+-----------+-----------+------------+-------------+-----------------------------+
| 964fd0b0-b331-4b0c-a1d5-118bf8a40abf | CentOS6.5 | SUSPENDED | None       | Shutdown    | int=10.0.0.5, 192.168.1.105 |
| 3f2db906-567c-48b0-967e-799b2bffe277 | Cirros312 | SUSPENDED | None       | Shutdown    | int=10.0.0.2, 192.168.1.101 |
| 14c49bfe-f99c-4f31-918e-dcf0fd42b49d | VF19RST   | ACTIVE    | None       | Running     | int=10.0.0.4, 192.168.1.103 |
| 5aa903c5-624d-4dde-9e3c-49996d4a5edc | VF19VLGL  | SUSPENDED | None       | Shutdown    | int=10.0.0.6, 192.168.1.104 |
| 0e0b4f69-4cff-4423-ba9d-71c8eb53af16 | VF20KVM   | ACTIVE    | None       | Running     | int=10.0.0.7, 192.168.1.109 |
+--------------------------------------+-----------+-----------+------------+-------------+-----------------------------+


[root@dfw02 nova(keystone_admin)]$ ssh fedora@192.168.1.109
fedora@192.168.1.109's password:
Last login: Thu Jan 30 15:54:04 2014 from 192.168.1.127

 
[fedora@vf20kvm ~]$ ifconfig
eth0: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1400
        inet 10.0.0.7  netmask 255.255.255.0  broadcast 10.0.0.255
        inet6 fe80::f816:3eff:fec6:e89a  prefixlen 64  scopeid 0x20<link>
        ether fa:16:3e:c6:e8:9a  txqueuelen 1000  (Ethernet)
        RX packets 630779  bytes 877092770 (836.4 MiB)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 166603  bytes 14706620 (14.0 MiB)
        TX errors 0  dropped 0  overruns 0  carrier 0  collisions 0

lo: flags=73<UP,LOOPBACK,RUNNING>  mtu 65536
        inet 127.0.0.1  netmask 255.0.0.0
        inet6 ::1  prefixlen 128  scopeid 0x10<host>
        loop  txqueuelen 0  (Local Loopback)
        RX packets 2  bytes 140 (140.0 B)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 2  bytes 140 (140.0 B)
        TX errors 0  dropped 0  overruns 0  carrier 0  collisions 0

So, loading the cloud instance via `nova boot --user-data=./myfile.txt ...` gives access to the command line to set the MTU for eth0 to 1400; this makes the instance available for ssh connections from the Controller and Compute nodes and also makes internet surfing possible in text and graphical mode for Fedora 19/20 and Ubuntu 13.10/12.04.

[root@dfw02 ~(keystone_admin)]$ ip netns list

qdhcp-1eea88bb-4952-4aa4-9148-18b61c22d5b7
qrouter-bf360d81-79fb-4636-8241-0a843f228fc8


[root@dfw02 ~(keystone_admin)]$ ip netns exec qrouter-bf360d81-79fb-4636-8241-0a843f228fc8 ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host
       valid_lft forever preferred_lft forever
2: qr-f933e768-42: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP group default qlen 1000
    link/ether fa:16:3e:6a:d3:f0 brd ff:ff:ff:ff:ff:ff
    inet 10.0.0.1/24 brd 10.0.0.255 scope global qr-f933e768-42
       valid_lft forever preferred_lft forever
    inet6 fe80::f816:3eff:fe6a:d3f0/64 scope link
       valid_lft forever preferred_lft forever
3: qg-54e34740-87: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP group default qlen 1000
    link/ether fa:16:3e:00:9a:0d brd ff:ff:ff:ff:ff:ff
    inet 192.168.1.100/24 brd 192.168.1.255 scope global qg-54e34740-87
       valid_lft forever preferred_lft forever
    inet 192.168.1.101/32 brd 192.168.1.101 scope global qg-54e34740-87
       valid_lft forever preferred_lft forever
    inet 192.168.1.102/32 brd 192.168.1.102 scope global qg-54e34740-87
       valid_lft forever preferred_lft forever
    inet6 fe80::f816:3eff:fe00:9a0d/64 scope link
       valid_lft forever preferred_lft forever
[root@dfw02 ~(keystone_admin)]$ ip netns exec qdhcp-1eea88bb-4952-4aa4-9148-18b61c22d5b7 ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host
       valid_lft forever preferred_lft forever
2: ns-40dd712c-e4: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP group default qlen 1000
    link/ether fa:16:3e:93:44:f8 brd ff:ff:ff:ff:ff:ff
    inet 10.0.0.3/24 brd 10.0.0.255 scope global ns-40dd712c-e4
       valid_lft forever preferred_lft forever
    inet6 fe80::f816:3eff:fe93:44f8/64 scope link
       valid_lft forever preferred_lft forever
[root@dfw02 ~(keystone_admin)]$ ip netns exec qrouter-bf360d81-79fb-4636-8241-0a843f228fc8 ip r
default via 192.168.1.1 dev qg-54e34740-87
10.0.0.0/24 dev qr-f933e768-42  proto kernel  scope link  src 10.0.0.1
192.168.1.0/24 dev qg-54e34740-87  proto kernel  scope link  src 192.168.1.100
[root@dfw02 ~(keystone_admin)]$ ip netns exec qrouter-bf360d81-79fb-4636-8241-0a843f228fc8 \
> iptables -L -t nat | grep 169
REDIRECT   tcp  --  anywhere             169.254.169.254      tcp dpt:http redir ports 8700

[root@dfw02 ~(keystone_admin)]$ neutron net-list
+--------------------------------------+------+-----------------------------------------------------+
| id                                   | name | subnets                                             |
+--------------------------------------+------+-----------------------------------------------------+
| 1eea88bb-4952-4aa4-9148-18b61c22d5b7 | int  | fa930cea-3d51-4cbe-a305-579f12aa53c0 10.0.0.0/24    |
| 780ce2f3-2e6e-4881-bbac-857813f9a8e0 | ext  | f30e5a16-a055-4388-a6ea-91ee142efc3d 192.168.1.0/24 |
+--------------------------------------+------+-----------------------------------------------------+
[root@dfw02 ~(keystone_admin)]$ neutron subnet-list
+--------------------------------------+------+----------------+----------------------------------------------------+
| id                                   | name | cidr           | allocation_pools                                   |
+--------------------------------------+------+----------------+----------------------------------------------------+
| fa930cea-3d51-4cbe-a305-579f12aa53c0 |      | 10.0.0.0/24    | {"start": "10.0.0.2", "end": "10.0.0.254"}         |
| f30e5a16-a055-4388-a6ea-91ee142efc3d |      | 192.168.1.0/24 | {"start": "192.168.1.100", "end": "192.168.1.200"} |
+--------------------------------------+------+----------------+----------------------------------------------------+
[root@dfw02 ~(keystone_admin)]$ neutron floatingip-list
+--------------------------------------+------------------+---------------------+--------------------------------------+
| id                                   | fixed_ip_address | floating_ip_address | port_id                              |
+--------------------------------------+------------------+---------------------+--------------------------------------+
| 9d15609c-9465-4254-bdcb-43f072b6c7d4 | 10.0.0.2         | 192.168.1.101       | e4cb68c4-b932-4c83-86cd-72c75289114a |
| af9c6ba6-e0ca-498e-8f67-b9327f75d93f | 10.0.0.4         | 192.168.1.102       | c6914ab9-6c83-4cb6-bf18-f5b0798ec513 |
+--------------------------------------+------------------+---------------------+--------------------------------------+
[root@dfw02 ~(keystone_admin)]$ neutron floatingip-show af9c6ba6-e0ca-498e-8f67-b9327f75d93f
+---------------------+--------------------------------------+
| Field               | Value                                |
+---------------------+--------------------------------------+
| fixed_ip_address    | 10.0.0.4                             |
| floating_ip_address | 192.168.1.102                        |
| floating_network_id | 780ce2f3-2e6e-4881-bbac-857813f9a8e0 |
| id                  | af9c6ba6-e0ca-498e-8f67-b9327f75d93f |
| port_id             | c6914ab9-6c83-4cb6-bf18-f5b0798ec513 |
| router_id           | bf360d81-79fb-4636-8241-0a843f228fc8 |
| tenant_id           | d0a0acfdb62b4cc8a2bfa8d6a08bb62f     |
+---------------------+--------------------------------------+
[root@dfw02 ~(keystone_admin)]$ neutron floatingip-show 9d15609c-9465-4254-bdcb-43f072b6c7d4
+---------------------+--------------------------------------+
| Field               | Value                                |
+---------------------+--------------------------------------+
| fixed_ip_address    | 10.0.0.2                             |
| floating_ip_address | 192.168.1.101                        |
| floating_network_id | 780ce2f3-2e6e-4881-bbac-857813f9a8e0 |
| id                  | 9d15609c-9465-4254-bdcb-43f072b6c7d4 |
| port_id             | e4cb68c4-b932-4c83-86cd-72c75289114a |
| router_id           | bf360d81-79fb-4636-8241-0a843f228fc8 |
| tenant_id           | d0a0acfdb62b4cc8a2bfa8d6a08bb62f     |
+---------------------+--------------------------------------+

*****************************************
Configuring Cinder to Add GlusterFS
*****************************************

# gluster volume create cinder-volumes05 replica 2 dfw02.localdomain:/data1/cinder5 dfw01.localdomain:/data1/cinder5
# gluster volume start cinder-volumes05
# gluster volume set cinder-volumes05 auth.allow 192.168.1.*
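
Before pointing Cinder at the new replicated volume, it can be verified with a quick check:

# gluster volume info cinder-volumes05
# gluster peer status
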
# openstack-config --set /etc/cinder/cinder.conf DEFAULT volume_driver cinder.volume.drivers.glusterfs.GlusterfsDriver
# openstack-config --set /etc/cinder/cinder.conf DEFAULT glusterfs_shares_config /etc/cinder/shares.conf
# openstack-config --set /etc/cinder/cinder.conf DEFAULT glusterfs_mount_point_base /var/lib/cinder/volumes
# vi /etc/cinder/shares.conf

192.168.1.127:cinder-volumes05

:wq

Update /etc/sysconfig/iptables :-

-A INPUT -p tcp -m multiport --dport 24007:24047 -j ACCEPT
-A INPUT -p tcp --dport 111 -j ACCEPT
-A INPUT -p udp --dport 111 -j ACCEPT
-A INPUT -p tcp -m multiport --dport 38465:38485 -j ACCEPT

Comment out:

-A FORWARD -j REJECT --reject-with icmp-host-prohibited
-A INPUT -j REJECT --reject-with icmp-host-prohibited

# service iptables restart

To mount the gluster volume for the cinder backend in the current setup :-
# losetup -fv /cinder-volumes
# cinder delete a94b97f5-120b-40bd-b59e-8962a5cb6296
The above lines deleted testvol1 created by Kashyap.

Skipping this step would cause a failure to restart the openstack-cinder-volume service in a particular situation.

# for i in api scheduler volume ; do service openstack-cinder-${i} restart ; done

Verification of service status :-

[root@dfw02 cinder(keystone_admin)]$ service openstack-cinder-volume status -l
Redirecting to /bin/systemctl status -l openstack-cinder-volume.service
openstack-cinder-volume.service - OpenStack Cinder Volume Server
   Loaded: loaded (/usr/lib/systemd/system/openstack-cinder-volume.service; enabled)
   Active: active (running) since Sat 2014-01-25 07:43:10 MSK; 6s ago
 Main PID: 21727 (cinder-volume)
   CGroup: /system.slice/openstack-cinder-volume.service
           ├─21727 /usr/bin/python /usr/bin/cinder-volume --config-file /usr/share/cinder/cinder-dist.conf --config-file /etc/cinder/cinder.conf --logfile /var/log/cinder/volume.log
           ├─21736 /usr/bin/python /usr/bin/cinder-volume --config-file /usr/share/cinder/cinder-dist.conf --config-file /etc/cinder/cinder.conf --logfile /var/log/cinder/volume.log
           └─21793 /usr/sbin/glusterfs --volfile-id=cinder-volumes05 --volfile-server=192.168.1.127 /var/lib/cinder/volumes/62f75cf6996a8a6bcc0d343be378c10a
Jan 25 07:43:10 dfw02.localdomain systemd[1]: Started OpenStack Cinder Volume Server.
Jan 25 07:43:11 dfw02.localdomain cinder-volume[21727]: 2014-01-25 07:43:11.402 21736 WARNING cinder.volume.manager [req-69c0060b-b5bf-4bce-8a8e-f2218dec7638 None None] Unable to update stats, driver is uninitialized
Jan 25 07:43:11 dfw02.localdomain sudo[21754]: cinder : TTY=unknown ; PWD=/ ; USER=root ; COMMAND=/bin/cinder-rootwrap /etc/cinder/rootwrap.conf mount -t glusterfs 192.168.1.127:cinder-volumes05 /var/lib/cinder/volumes/62f75cf6996a8a6bcc0d343be378c10a
Jan 25 07:43:11 dfw02.localdomain sudo[21803]: cinder : TTY=unknown ; PWD=/ ; USER=root ; COMMAND=/bin/cinder-rootwrap /etc/cinder/rootwrap.conf df --portability --block-size 1 /var/lib/cinder/volumes/62f75cf6996a8a6bcc0d343be378c10a

[root@dfw02 cinder(keystone_admin)]$ df -h
Filesystem                      Size  Used Avail Use% Mounted on
/dev/mapper/fedora00-root        96G  7.4G   84G   9% /
devtmpfs                        3.9G     0  3.9G   0% /dev
tmpfs                           3.9G  152K  3.9G   1% /dev/shm
tmpfs                           3.9G  1.2M  3.9G   1% /run
tmpfs                           3.9G     0  3.9G   0% /sys/fs/cgroup
tmpfs                           3.9G  184K  3.9G   1% /tmp
/dev/sda5                       477M  101M  347M  23% /boot
/dev/mapper/fedora00-data1       77G   53M   73G   1% /data1
tmpfs                           3.9G  1.2M  3.9G   1% /run/netns
192.168.1.127:cinder-volumes05   77G   52M   73G   1% /var/lib/cinder/volumes/62f75cf6996a8a6bcc0d343be378c10a

At runtime on Compute Node :-

[root@dfw01 ~]# df -h
Filesystem                      Size  Used Avail Use% Mounted on
/dev/mapper/fedora-root          96G   54G   38G  59% /
devtmpfs                        3.9G     0  3.9G   0% /dev
tmpfs                           3.9G  484K  3.9G   1% /dev/shm
tmpfs                           3.9G  1.3M  3.9G   1% /run
tmpfs                           3.9G     0  3.9G   0% /sys/fs/cgroup
tmpfs                           3.9G   36K  3.9G   1% /tmp
/dev/sda5                       477M  121M  327M  27% /boot
/dev/mapper/fedora-data1         77G  6.7G   67G  10% /data1
192.168.1.127:cinder-volumes05   77G  6.7G   67G  10% /var/lib/nova/mnt/62f75cf6996a8a6bcc0d343be378c10a

[root@dfw02 ~(keystone_admin)]$ nova image-list
+--------------------------------------+------------------+--------+--------+
| ID                                   | Name             | Status | Server |
+--------------------------------------+------------------+--------+--------+
| dc992799-7831-4933-b6ee-7b81868f808b | CirrOS31         | ACTIVE |        |
| 03c9ad20-b0a3-4b71-aa08-2728ecb66210 | Fedora 19 x86_64 | ACTIVE |        |
+--------------------------------------+------------------+--------+--------+

[root@dfw02 ~(keystone_admin)]$ cinder create --image-id 03c9ad20-b0a3-4b71-aa08-2728ecb66210 \
> --display-name Fedora19VLG 7

+---------------------+--------------------------------------+
|       Property      |                Value                 |
+---------------------+--------------------------------------+
|     attachments     |                  []                  |
|  availability_zone  |                 nova                 |
|       bootable      |                false                 |
|      created_at     |      2014-01-25T03:45:21.124690      |
| display_description |                 None                 |
|     display_name    |             Fedora19VLG              |
|          id         | 5f0f096b-192a-435b-bdbc-5063ed5c6366 |
|       image_id      | 03c9ad20-b0a3-4b71-aa08-2728ecb66210 |
|       metadata      |                  {}                  |
|         size        |                  7                   |
|     snapshot_id     |                 None                 |
|     source_volid    |                 None                 |
|        status       |               creating               |
|     volume_type     |                 None                 |
+---------------------+--------------------------------------+

[root@dfw02 cinder5(keystone_admin)]$ cinder list

+--------------------------------------+-----------+--------------+------+-------------+----------+-------------+
|                  ID                  |   Status  | Display Name | Size | Volume Type | Bootable | Attached to |
+--------------------------------------+-----------+--------------+------+-------------+----------+-------------+
| 5f0f096b-192a-435b-bdbc-5063ed5c6366 | available | Fedora19VLG  |  7   |     None    |   true   |             |
+--------------------------------------+-----------+--------------+------+-------------+----------+-------------+

**********************************************************************************
UPDATE on 03/09/2014. In the meantime I am able to load an instance via a glusterfs cinder volume only via the command :-
**********************************************************************************
[root@dallas1 ~(keystone_boris)]$ nova boot --flavor 2 --user-data=./myfile.txt --block-device source=image,id=d0e90250-5814-4685-9b8d-65ec9daa7117,dest=volume,size=5,shutdown=preserve,bootindex=0 VF20RS012

***********************************************************************************
Update on 03/11/2014.
***********************************************************************************
The standard schema via `cinder create --image-id IMAGE_ID --display_name VOL_NAME SIZE` && `nova boot --flavor 2 --user-data=./myfile.txt --block_device_mapping vda=VOLUME_ID:::0 INSTANCE_NAME` started to work fine. The schema described in the previous update of 03/09/14, on the contrary, stopped working smoothly on glusterfs-based cinder volumes.
However, even when ending up with "Error" status, it creates a glusterfs cinder volume (with a system_id) which is quite healthy and may be utilized for building a new instance of F20 or Ubuntu 14.04 (whatever the original image was) via the CLI or Dashboard. It looks like a kind of bug in Nova and Neutron interprocess communication, I would say in synchronization at boot up.
Please view:

"Provide an API for external services to send defined events to the compute service for synchronization. This includes immediate needs for nova-neutron interaction around boot timing and network info updates"
https://blueprints.launchpad.net/nova/+spec/admin-event-callback-api
and the bug report:
https://bugs.launchpad.net/nova/+bug/1280357

Loading an instance via the created volume on Glusterfs:

[root@dfw02 ~(keystone_admin)]$ nova boot --flavor 2 --user-data=./myfile.txt --block_device_mapping vda=5f0f096b-192a-435b-bdbc-5063ed5c6366:::0 VF19VLGL

+--------------------------------------+----------------------------------------------------+
| Property                             | Value                                              |
+--------------------------------------+----------------------------------------------------+
| OS-EXT-STS:task_state                | scheduling                                         |
| image                                | Attempt to boot from volume - no image supplied    |
| OS-EXT-STS:vm_state                  | building                                           |
| OS-EXT-SRV-ATTR:instance_name        | instance-00000005                                  |
| OS-SRV-USG:launched_at               | None                                               |
| flavor                               | m1.small                                           |
| id                                   | 5aa903c5-624d-4dde-9e3c-49996d4a5edc               |
| security_groups                      | [{u'name': u'default'}]                            |
| user_id                              | 970ed56ef7bc41d59c54f5ed8a1690dc                   |
| OS-DCF:diskConfig                    | MANUAL                                             |
| accessIPv4                           |                                                    |
| accessIPv6                           |                                                    |
| progress                             | 0                                                  |
| OS-EXT-STS:power_state               | 0                                                  |
| OS-EXT-AZ:availability_zone          | nova                                               |
| config_drive                         |                                                    |
| status                               | BUILD                                              |
| updated                              | 2014-01-25T03:59:12Z                               |
| hostId                               |                                                    |
| OS-EXT-SRV-ATTR:host                 | None                                               |
| OS-SRV-USG:terminated_at             | None                                               |
| key_name                             | None                                               |
| OS-EXT-SRV-ATTR:hypervisor_hostname  | None                                               |
| name                                 | VF19VLGL                                           |
| adminPass                            | Aq4LBKP9rBGF                                       |
| tenant_id                            | d0a0acfdb62b4cc8a2bfa8d6a08bb62f                   |
| created                              | 2014-01-25T03:59:12Z                               |
| os-extended-volumes:volumes_attached | [{u'id': u'5f0f096b-192a-435b-bdbc-5063ed5c6366'}] |
| metadata                             | {}                                                 |
+--------------------------------------+----------------------------------------------------+

In just a second the new instance will be booted via the created volume on Glusterfs (Fedora 20: Qemu 1.6, Libvirt 1.1.3).

[root@dfw02 ~(keystone_admin)]$ nova list

+--------------------------------------+-----------+-----------+------------+-------------+-----------------------------+
| ID                                   | Name      | Status    | Task State | Power State | Networks                    |
+--------------------------------------+-----------+-----------+------------+-------------+-----------------------------+
| 3f2db906-567c-48b0-967e-799b2bffe277 | Cirros312 | SUSPENDED | None       | Shutdown    | int=10.0.0.2, 192.168.1.101 |
| aaeada4a-6a83-4cbc-ac8b-96a8b1fa81ad | VF19GL    | SUSPENDED | None       | Shutdown    | int=10.0.0.5, 192.168.1.103 |
| 5aa903c5-624d-4dde-9e3c-49996d4a5edc | VF19VLGL  | ACTIVE    | None       | Running     | int=10.0.0.6                |
+--------------------------------------+-----------+-----------+------------+-------------+-----------------------------+

[root@dfw02 ~(keystone_admin)]$ neutron port-list --device-id 5aa903c5-624d-4dde-9e3c-49996d4a5edc

+--------------------------------------+------+-------------------+---------------------------------------------------------------------------------+
| id                                   | name | mac_address       | fixed_ips                                                                       |
+--------------------------------------+------+-------------------+---------------------------------------------------------------------------------+
| 7196be1f-9216-4bfd-ac8b-9903780936d9 |      | fa:16:3e:4b:97:90 | {"subnet_id": "fa930cea-3d51-4cbe-a305-579f12aa53c0", "ip_address": "10.0.0.6"} |
+--------------------------------------+------+-------------------+---------------------------------------------------------------------------------+

[root@dfw02 ~(keystone_admin)]$ neutron floatingip-list

+--------------------------------------+------------------+---------------------+--------------------------------------+
| id                                   | fixed_ip_address | floating_ip_address | port_id                              |
+--------------------------------------+------------------+---------------------+--------------------------------------+
| 04ccafab-1878-44f6-b5ab-a1e2ea1faa97 | 10.0.0.5         | 192.168.1.103       | 1d10dc02-c0f2-4225-ae61-db281f3af69c |
| 9d15609c-9465-4254-bdcb-43f072b6c7d4 | 10.0.0.2         | 192.168.1.101       | e4cb68c4-b932-4c83-86cd-72c75289114a |
| af9c6ba6-e0ca-498e-8f67-b9327f75d93f | 10.0.0.4         | 192.168.1.102       | c6914ab9-6c83-4cb6-bf18-f5b0798ec513 |
| c8bb83a0-73cd-4cc1-9dc1-5f1f6c74e86e |                  | 192.168.1.104       |                                      |
+--------------------------------------+------------------+---------------------+--------------------------------------+

[root@dfw02 ~(keystone_admin)]$ neutron floatingip-associate c8bb83a0-73cd-4cc1-9dc1-5f1f6c74e86e 7196be1f-9216-4bfd-ac8b-9903780936d9
Associated floatingip c8bb83a0-73cd-4cc1-9dc1-5f1f6c74e86e

[root@dfw02 ~(keystone_admin)]$ neutron floatingip-show c8bb83a0-73cd-4cc1-9dc1-5f1f6c74e86e

+---------------------+--------------------------------------+
| Field               | Value                                |
+---------------------+--------------------------------------+
| fixed_ip_address    | 10.0.0.6                             |
| floating_ip_address | 192.168.1.104                        |
| floating_network_id | 780ce2f3-2e6e-4881-bbac-857813f9a8e0 |
| id                  | c8bb83a0-73cd-4cc1-9dc1-5f1f6c74e86e |
| port_id             | 7196be1f-9216-4bfd-ac8b-9903780936d9 |
| router_id           | bf360d81-79fb-4636-8241-0a843f228fc8 |
| tenant_id           | d0a0acfdb62b4cc8a2bfa8d6a08bb62f     |
+---------------------+--------------------------------------+

[root@dfw02 ~(keystone_admin)]$ ping 192.168.1.104

PING 192.168.1.104 (192.168.1.104) 56(84) bytes of data.

64 bytes from 192.168.1.104: icmp_seq=1 ttl=63 time=4.19 ms
64 bytes from 192.168.1.104: icmp_seq=2 ttl=63 time=1.32 ms
64 bytes from 192.168.1.104: icmp_seq=3 ttl=63 time=1.06 ms
64 bytes from 192.168.1.104: icmp_seq=4 ttl=63 time=1.11 ms
64 bytes from 192.168.1.104: icmp_seq=5 ttl=63 time=1.13 ms
64 bytes from 192.168.1.104: icmp_seq=6 ttl=63 time=1.02 ms
64 bytes from 192.168.1.104: icmp_seq=7 ttl=63 time=1.05 ms
64 bytes from 192.168.1.104: icmp_seq=8 ttl=63 time=1.08 ms
64 bytes from 192.168.1.104: icmp_seq=9 ttl=63 time=0.974 ms
64 bytes from 192.168.1.104: icmp_seq=10 ttl=63 time=1.03 ms

The I/O speed improvement is noticeable on boot up and on disk operations like this.

The CentOS 6.5 instance was able to start its own X server in a VNC session from F20, in other words to be a client of the F20 host's X server (?).

Setting up Ubuntu 13.10 cloud instance

 [root@dfw02 ~(keystone_admin)]$ nova list | grep UbuntuSalamander

| 812d369d-e351-469e-8820-a2d0d8740716 | UbuntuSalamander | ACTIVE    | None       | Running     | int=10.0.0.8, 192.168.1.110 |

 [root@dfw02 ~(keystone_admin)]$ nova show 812d369d-e351-469e-8820-a2d0d8740716

+————————————–+———————————————————-+
| Property                             | Value                                                    |
+————————————–+———————————————————-+
| status                               | ACTIVE                                                   |
| updated                              | 2014-01-31T04:46:30Z                                     |
| OS-EXT-STS:task_state                | None                                                     |
| OS-EXT-SRV-ATTR:host                 | dfw01.localdomain                                        |
| key_name                             | None                                                     |
| image                                | Attempt to boot from volume - no image supplied          |
| int network                          | 10.0.0.8, 192.168.1.110                                  |
| hostId                               | b67c11ccfa8ac8c9ed019934fa650d307f91e7fe7bbf8cd6874f3e01 |
| OS-EXT-STS:vm_state                  | active                                                   |
| OS-EXT-SRV-ATTR:instance_name        | instance-00000016                                        |
| OS-SRV-USG:launched_at               | 2014-01-31T04:46:30.000000                               |
| OS-EXT-SRV-ATTR:hypervisor_hostname  | dfw01.localdomain                                        |
| flavor                               | m1.small (2)                                             |
| id                                   | 812d369d-e351-469e-8820-a2d0d8740716                     |
| security_groups                      | [{u'name': u'default'}]                                  |
| OS-SRV-USG:terminated_at             | None                                                     |
| user_id                              | 970ed56ef7bc41d59c54f5ed8a1690dc                         |
| name                                 | UbuntuSalamander                                         |
| created                              | 2014-01-31T04:46:25Z                                     |
| tenant_id                            | d0a0acfdb62b4cc8a2bfa8d6a08bb62f                         |
| OS-DCF:diskConfig                    | MANUAL                                                   |
| metadata                             | {}                                                       |
| os-extended-volumes:volumes_attached | [{u'id': u'34bdf9d9-5bcc-4b62-8140-919c00fe07df'}]       |
| accessIPv4                           |                                                          |
| accessIPv6                           |                                                          |
| progress                             | 0                                                        |
| OS-EXT-STS:power_state               | 1                                                        |
| OS-EXT-AZ:availability_zone          | nova                                                     |
| config_drive                         |                                                          |
+————————————–+———————————————————-+

[root@dfw02 ~(keystone_admin)]$ ssh ubuntu@192.168.1.110
ubuntu@192.168.1.110's password:


Welcome to Ubuntu 13.10 (GNU/Linux 3.11.0-15-generic x86_64)
* Documentation:  https://help.ubuntu.com/
System information as of Fri Jan 31 05:13:19 UTC 2014

System load:  0.08              Processes:           73
Usage of /:   11.4% of 6.86GB   Users logged in:     1
Memory usage: 3%                IP address for eth0: 10.0.0.8
Swap usage:   0%
Graph this data and manage this system at:
https://landscape.canonical.com/
Get cloud support with Ubuntu Advantage Cloud Guest:
http://www.ubuntu.com/business/services/cloud
Last login: Fri Jan 31 05:13:25 2014 from 192.168.1.127

ubuntu@ubuntusalamander:~$ ifconfig
eth0      Link encap:Ethernet  HWaddr fa:16:3e:1e:16:35
inet addr:10.0.0.8  Bcast:10.0.0.255  Mask:255.255.255.0
inet6 addr: fe80::f816:3eff:fe1e:1635/64 Scope:Link
UP BROADCAST RUNNING MULTICAST  MTU:1400  Metric:1
RX packets:854 errors:0 dropped:0 overruns:0 frame:0
TX packets:788 errors:0 dropped:0 overruns:0 carrier:0
collisions:0 txqueuelen:1000
RX bytes:85929 (85.9 KB)  TX bytes:81060 (81.0 KB)

lo        Link encap:Local Loopback
inet addr:127.0.0.1  Mask:255.0.0.0
inet6 addr: ::1/128 Scope:Host
UP LOOPBACK RUNNING  MTU:65536  Metric:1
RX packets:0 errors:0 dropped:0 overruns:0 frame:0
TX packets:0 errors:0 dropped:0 overruns:0 carrier:0
collisions:0 txqueuelen:0
RX bytes:0 (0.0 B)  TX bytes:0 (0.0 B)

Setting up a lightweight X environment on the Ubuntu instance:-

$ sudo  apt-get install xorg openbox
Reboot
$ startx
Right mouse click on desktop opens X-terminal
$ sudo apt-get install firefox
$ /usr/bin/firefox

Testing a tenant's ability to create networks, routers and instances

[root@dfw02 ~]#  cat  keystonerc_boris
export OS_USERNAME=boris
export OS_TENANT_NAME=ostenant
export OS_PASSWORD=fedora
export OS_AUTH_URL=http://192.168.1.127:35357/v2.0/
export PS1='[\u@\h \W(keystone_boris)]$ '

[root@dfw02 ~]# . keystonerc_boris

[root@dfw02 ~(keystone_boris)]$ neutron net-list

+--------------------------------------+------+---------------------------------------+
| id                                   | name | subnets                               |
+--------------------------------------+------+---------------------------------------+
| 780ce2f3-2e6e-4881-bbac-857813f9a8e0 | ext  | f30e5a16-a055-4388-a6ea-91ee142efc3d  |
+--------------------------------------+------+---------------------------------------+

[root@dfw02 ~(keystone_boris)]$ neutron router-create router2

Created a new router:

+———————–+————————————–+
| Field                 | Value                                |
+———————–+————————————–+
| admin_state_up        | True                                 |
| external_gateway_info |                                      |
| id                    | 86b3008c-297f-4301-9bdc-766b839785f1 |
| name                  | router2                              |
| status                | ACTIVE                               |
| tenant_id             | 4dacfff9e72c4245a48d648ee23468d5     |
+———————–+————————————–+

[root@dfw02 ~(keystone_boris)]$ neutron router-gateway-set router2 ext

Set gateway for router router2

[root@dfw02 ~(keystone_boris)]$  neutron net-create int1

Created a new network:

+—————-+————————————–+
| Field          | Value                                |
+—————-+————————————–+
| admin_state_up | True                                 |
| id             | 426bb226-0ab9-440d-ba14-05634a17fb2b |
| name           | int1                                 |
| shared         | False                                |
| status         | ACTIVE                               |
| subnets        |                                      |
| tenant_id      | 4dacfff9e72c4245a48d648ee23468d5     |
+—————-+————————————–+

[root@dfw02 ~(keystone_boris)]$ neutron subnet-create int1 40.0.0.0/24 --dns_nameservers list=true 83.221.202.254

Created a new subnet:

+——————+——————————————–+
| Field            | Value                                      |
+——————+——————————————–+
| allocation_pools | {"start": "40.0.0.2", "end": "40.0.0.254"} |
| cidr             | 40.0.0.0/24                                |
| dns_nameservers  | 83.221.202.254                             |
| enable_dhcp      | True                                       |
| gateway_ip       | 40.0.0.1                                   |
| host_routes      |                                            |
| id               | 9e0d457b-c4c4-45cf-84e2-4ac7550f3b06       |
| ip_version       | 4                                          |
| name             |                                            |
| network_id       | 426bb226-0ab9-440d-ba14-05634a17fb2b       |
| tenant_id        | 4dacfff9e72c4245a48d648ee23468d5           |
+——————+——————————————–+

[root@dfw02 ~(keystone_boris)]$  neutron router-interface-add router2 9e0d457b-c4c4-45cf-84e2-4ac7550f3b06

Added interface e031db6b-d0cc-4c57-877b-53b1c6946870 to router router2.

[root@dfw02 ~(keystone_boris)]$ neutron subnet-list

+————————————–+——+————-+——————————————–+
| id                                   | name | cidr        | allocation_pools                           |
+————————————–+——+————-+——————————————–+
| 9e0d457b-c4c4-45cf-84e2-4ac7550f3b06 |      | 40.0.0.0/24 | {"start": "40.0.0.2", "end": "40.0.0.254"} |
+————————————–+——+————-+——————————————–+
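
For repeat runs the same net/subnet/router sequence can be scripted, so the subnet ID does not have to be copied by hand. A sketch (mine, not from the original post); int2 and 50.0.0.0/24 are placeholders for a second tenant network:

# NET_ID=$(neutron net-create int2 | awk '/ id /{print $4}')
# SUBNET_ID=$(neutron subnet-create int2 50.0.0.0/24 --dns_nameservers list=true 83.221.202.254 | awk '/ id /{print $4}')
# neutron router-interface-add router2 "$SUBNET_ID"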

[root@dfw02 ~(keystone_boris)]$ neutron security-group-rule-create --protocol icmp \
>   --direction ingress --remote-ip-prefix 0.0.0.0/0 default

Created a new security_group_rule:

+——————-+————————————–+
| Field             | Value                                |
+——————-+————————————–+
| direction         | ingress                              |
| ethertype         | IPv4                                 |
| id                | 4a6deddf-9350-4f98-97d7-a54cf6ebaa9a |
| port_range_max    |                                      |
| port_range_min    |                                      |
| protocol          | icmp                                 |
| remote_group_id   |                                      |
| remote_ip_prefix  | 0.0.0.0/0                            |
| security_group_id | 4b72bcaf-b456-4222-afcc-8885326b96b2 |
| tenant_id         | 4dacfff9e72c4245a48d648ee23468d5     |
+——————-+————————————–+

[root@dfw02 ~(keystone_boris)]$ neutron security-group-rule-create --protocol tcp \
>   --port-range-min 22 --port-range-max 22 \
>   --direction ingress --remote-ip-prefix 0.0.0.0/0 default

Created a new security_group_rule:

+——————-+————————————–+
| Field             | Value                                |
+——————-+————————————–+
| direction         | ingress                              |
| ethertype         | IPv4                                 |
| id                | 7a461936-ffbc-4968-975b-3d27ec975e04 |
| port_range_max    | 22                                   |
| port_range_min    | 22                                   |
| protocol          | tcp                                  |
| remote_group_id   |                                      |
| remote_ip_prefix  | 0.0.0.0/0                            |
| security_group_id | 4b72bcaf-b456-4222-afcc-8885326b96b2 |
| tenant_id         | 4dacfff9e72c4245a48d648ee23468d5     |
+——————-+————————————–+

[root@dfw02 ~(keystone_boris)]$ glance image-list

+————————————–+———————+————-+——————+———–+——–+
| ID                                   | Name                | Disk Format | Container Format | Size      | Status |
+————————————–+———————+————-+——————+———–+——–+
| b9782f24-2f37-4f8d-9467-f622507788bf | CentOS6.5 image     | qcow2       | bare             | 344457216 | active |
| dc992799-7831-4933-b6ee-7b81868f808b | CirrOS31            | qcow2       | bare             | 13147648  | active |
| 03c9ad20-b0a3-4b71-aa08-2728ecb66210 | Fedora 19 x86_64    | qcow2       | bare             | 237371392 | active |
| de93ee44-4085-4111-b022-a7437da8feac | Fedora 20 image     | qcow2       | bare             | 214106112 | active |
| e70591fc-6905-4e57-84b7-4ffa7c001864 | Ubuntu Server 13.10 | qcow2       | bare             | 244514816 | active |
| dd8c11d2-f9b0-4b77-9b5e-534c7ac0fd58 | Ubuntu Trusty image | qcow2       | bare             | 246022144 | active |
+————————————–+———————+————-+——————+———–+——–+

[root@dfw02 ~(keystone_boris)]$ cinder create --image-id de93ee44-4085-4111-b022-a7437da8feac --display_name VF20VLG02 7

+———————+————————————–+
|       Property      |                Value                 |
+———————+————————————–+
|     attachments     |                  []                  |
|  availability_zone  |                 nova                 |
|       bootable      |                false                 |
|      created_at     |      2014-02-21T06:36:21.753407      |
| display_description |                 None                 |
|     display_name    |              VF20VLG02               |
|          id         | c3b09e44-1868-43c6-baaa-1ffcb4b80fb1 |
|       image_id      | de93ee44-4085-4111-b022-a7437da8feac |
|       metadata      |                  {}                  |
|         size        |                  7                   |
|     snapshot_id     |                 None                 |
|     source_volid    |                 None                 |
|        status       |               creating               |
|     volume_type     |                 None                 |
+———————+————————————–+

[root@dfw02 ~(keystone_boris)]$ cinder list

+————————————–+————-+————–+——+————-+———-+————-+
|                  ID                  |    Status   | Display Name | Size | Volume Type | Bootable | Attached to |
+————————————–+————-+————–+——+————-+———-+————-+
| c3b09e44-1868-43c6-baaa-1ffcb4b80fb1 | downloading |  VF20VLG02   |  7   |     None    |  false   |             |
+————————————–+————-+————–+——+————-+———-+————-+

[root@dfw02 ~(keystone_boris)]$ cinder list

+————————————–+———–+————–+——+————-+———-+————-+
|                  ID                  |   Status  | Display Name | Size | Volume Type | Bootable | Attached to |
+————————————–+———–+————–+——+————-+———-+————-+
| c3b09e44-1868-43c6-baaa-1ffcb4b80fb1 | available |  VF20VLG02   |  7   |     None    |   true   |             |
+————————————–+———–+————–+——+————-+———-+————-+
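
The two cinder list calls above simply poll until the image copy finishes; a small wait loop (my sketch) does the same thing unattended:

# VOL_ID=c3b09e44-1868-43c6-baaa-1ffcb4b80fb1
# until cinder list | grep "$VOL_ID" | grep -q available ; do
#     sleep 15
# done
# echo "volume $VOL_ID is available"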

[root@dfw02 ~(keystone_boris)]$ nova boot --flavor 2 --user-data=./myfile.txt --block_device_mapping vda=c3b09e44-1868-43c6-baaa-1ffcb4b80fb1:::0 VF20XWS

+————————————–+—————————————————-+
| Property                             | Value                                              |
+————————————–+—————————————————-+
| status                               | BUILD                                              |
| updated                              | 2014-02-21T06:49:42Z                               |
| OS-EXT-STS:task_state                | scheduling                                         |
| key_name                             | None                                               |
| image                                | Attempt to boot from volume - no image supplied    |
| hostId                               |                                                    |
| OS-EXT-STS:vm_state                  | building                                           |
| OS-SRV-USG:launched_at               | None                                               |
| flavor                               | m1.small                                           |
| id                                   | c4573327-dd99-4e57-941e-3d35aacb637c               |
| security_groups                      | [{u'name': u'default'}]                            |
| OS-SRV-USG:terminated_at             | None                                               |
| user_id                              | 162021e787c54cac906ab3296a386006                   |
| name                                 | VF20XWS                                            |
| adminPass                            | YkPYdW58gz7K                                       |
| tenant_id                            | 4dacfff9e72c4245a48d648ee23468d5                   |
| created                              | 2014-02-21T06:49:42Z                               |
| OS-DCF:diskConfig                    | MANUAL                                             |
| metadata                             | {}                                                 |
| os-extended-volumes:volumes_attached | [{u'id': u'c3b09e44-1868-43c6-baaa-1ffcb4b80fb1'}] |
| accessIPv4                           |                                                    |
| accessIPv6                           |                                                    |
| progress                             | 0                                                  |
| OS-EXT-STS:power_state               | 0                                                  |
| OS-EXT-AZ:availability_zone          | nova                                               |
| config_drive                         |                                                    |
+————————————–+—————————————————-+

[root@dfw02 ~(keystone_boris)]$ nova list

+————————————–+———+——–+————+————-+—————+
| ID                                   | Name    | Status | Task State | Power State | Networks      |
+————————————–+———+——–+————+————-+—————+
| c4573327-dd99-4e57-941e-3d35aacb637c | VF20XWS | ACTIVE | None       | Running     | int1=40.0.0.2 |
+————————————–+———+——–+————+————-+—————+

[root@dfw02 ~(keystone_boris)]$ neutron floatingip-create ext

Created a new floatingip:

+———————+————————————–+
| Field               | Value                                |
+———————+————————————–+
| fixed_ip_address    |                                      |
| floating_ip_address | 192.168.1.115                        |
| floating_network_id | 780ce2f3-2e6e-4881-bbac-857813f9a8e0 |
| id                  | 64dd749f-6127-4d0f-ba51-8a9978b8c211 |
| port_id             |                                      |
| router_id           |                                      |
| tenant_id           | 4dacfff9e72c4245a48d648ee23468d5     |
+———————+————————————–+

[root@dfw02 ~(keystone_boris)]$ neutron port-list --device-id c4573327-dd99-4e57-941e-3d35aacb637c

+————————————–+——+——————-+———————————————————————————+
| id                                   | name | mac_address       | fixed_ips                                                                       |
+————————————–+——+——————-+———————————————————————————+
| 2d6c6569-44c3-44b2-8bed-cdc8dde12336 |      | fa:16:3e:10:a0:e3 | {"subnet_id": "9e0d457b-c4c4-45cf-84e2-4ac7550f3b06", "ip_address": "40.0.0.2"} |
+————————————–+——+——————-+———————————————————————————+

[root@dfw02 ~(keystone_boris)]$ neutron floatingip-associate 64dd749f-6127-4d0f-ba51-8a9978b8c211 2d6c6569-44c3-44b2-8bed-cdc8dde12336

Associated floatingip 64dd749f-6127-4d0f-ba51-8a9978b8c211

[root@dfw02 ~(keystone_boris)]$ neutron floatingip-show 64dd749f-6127-4d0f-ba51-8a9978b8c211

+———————+————————————–+
| Field               | Value                                |
+———————+————————————–+
| fixed_ip_address    | 40.0.0.2                             |
| floating_ip_address | 192.168.1.115                        |
| floating_network_id | 780ce2f3-2e6e-4881-bbac-857813f9a8e0 |
| id                  | 64dd749f-6127-4d0f-ba51-8a9978b8c211 |
| port_id             | 2d6c6569-44c3-44b2-8bed-cdc8dde12336 |
| router_id           | 86b3008c-297f-4301-9bdc-766b839785f1 |
| tenant_id           | 4dacfff9e72c4245a48d648ee23468d5     |
+———————+————————————–+

[root@dfw02 ~(keystone_boris)]$ ping 192.168.1.115

PING 192.168.1.115 (192.168.1.115) 56(84) bytes of data.
64 bytes from 192.168.1.115: icmp_seq=1 ttl=63 time=3.80 ms
64 bytes from 192.168.1.115: icmp_seq=2 ttl=63 time=1.13 ms
64 bytes from 192.168.1.115: icmp_seq=3 ttl=63 time=0.954 ms
64 bytes from 192.168.1.115: icmp_seq=4 ttl=63 time=1.01 ms
64 bytes from 192.168.1.115: icmp_seq=5 ttl=63 time=0.999 ms
64 bytes from 192.168.1.115: icmp_seq=6 ttl=63 time=0.809 ms
64 bytes from 192.168.1.115: icmp_seq=7 ttl=63 time=1.02 ms
^C

The original text of the documents was posted on fedoraproject.org by Kashyap.
   The attached ones are tuned for the new IPs and should no longer contain the typos of the original version. They also contain the MySQL preventive updates currently required for the openstack-nova-compute & neutron-openvswitch-agent remote connection to the Controller Node to succeed. The MySQL stuff is mine. All attached *.conf & *.ini files have been updated for my network as well.
   In the meantime I am quite sure that using libvirt's default and non-default networks for creating the Controller and Compute nodes as F20 VMs is not important. The configs allow metadata to be sent from Controller to Compute on real
physical boxes. Just one Ethernet controller per box should be required when using GRE tunnelling in an RDO Havana on Fedora 20 manual setup.
 

References

  1. http://textuploader.com/1hin
  2. http://textuploader.com/1hey
  3. http://kashyapc.fedorapeople.org/virt/openstack/neutron-configs-GRE-OVS-two-node.txt
 4. http://bderzhavets.blogspot.com/2014/01/setting-up-two-physical-node-openstack.html


"Setting up Multi-Node OpenStack RDO Havana + Gluster Backend + Neutron VLAN" on CentOS 6.5 with both Controller and Compute nodes each one having two Ethernet adapters per Andrew Lau

December 28, 2013

Why CentOS 6.5? It has the library libgfapi http://www.gluster.org/2012/11/integration-with-kvmqemu/ back-ported, which allows native Qemu to work directly with glusterfs 3.4.1 volumes https://bugzilla.redhat.com/show_bug.cgi?id=848070. View also http://rhn.redhat.com/errata/RHEA-2013-1859.html, in particular bug 956919 - Develop native qemu-gluster driver for Cinder. The general concept may be seen here: http://www.mirantis.com/blog/openstack-havana-glusterfs-and-what-improved-support-really-means. I am very thankful to Andrew Lau for his sample answer-file for setups of the kind "Controller + Compute Node + Compute Node ...". His "Howto" [1] is perfect, even though, having a box with 3 Ethernet adapters, I was unable to reproduce his setup exactly. Later I realised that I just hadn't fixed the epel-*.repo files, and decided to switch to another setup: baseurl should be uncommented, mirror-list on the contrary commented out. I believe it's a very personal issue. For some reason I had to install EPEL manually on CentOS 6.5: packstack failed on internet-enabled boxes, and the epel-*.repo files also required manual intervention to make packstack finally happy.

Differences :-

1. The RDO Controller and Compute nodes setup based on Andrew Lau's multi-node.packstack [1] is a bit different from the original:

no gluster volumes for cinder, nova, glance created before the RDO packstack install, and no dedicated network like 172.16.0.0 for gluster cluster management;

just the original network 192.168.1.0/24, with internet alive, is used in the RDO setup (the answer-file, pretty close to Andrew's, is attached).

2. Set up LBaaS :-

Edit /etc/neutron/neutron.conf and add the following in the default section:

[DEFAULT]
service_plugins = neutron.services.loadbalancer.plugin.LoadBalancerPlugin

Already there

Then edit the /etc/openstack-dashboard/local_settings file and search for enable_lb and set it to true:

OPENSTACK_NEUTRON_NETWORK = {
    'enable_lb': True
}

Done

# vi /etc/neutron/lbaas_agent.ini - already done, no changes needed

device_driver=neutron.services.loadbalancer.drivers.haproxy.namespace_driver.HaproxyNSDriver
interface_driver=neutron.agent.linux.interface.OVSInterfaceDriver
user_group=haproxy

Comment out the line in the service_providers section:
service_provider=LOADBALANCER:Haproxy:neutron.services.loadbalancer.drivers.haproxy.plugin_driver.HaproxyOnHostPluginDriver:default

Nothing to remove

service neutron-lbaas-agent start - already running, restarted
chkconfig neutron-lbaas-agent on - skipped
service neutron-server restart - done
service httpd restart - done

All done.
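
A quick way to verify the agent end-to-end is to build a trivial pool and VIP with the Havana LBaaS v1 CLI. A sketch (mine, not from [1]); SUBNET_ID and the member address are placeholders for your tenant subnet and a VM running a web server:

# neutron lb-pool-create --lb-method ROUND_ROBIN --name test-pool --protocol HTTP --subnet-id $SUBNET_ID
# neutron lb-member-create --address 10.0.0.4 --protocol-port 80 test-pool
# neutron lb-vip-create --name test-vip --protocol-port 80 --protocol HTTP --subnet-id $SUBNET_ID test-pool
# neutron lb-pool-list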

Haproxy is supposed to manage a landscape with several controllers: one of them is considered the frontend and the rest are backend servers, providing HA for the openstack services running on the controllers. It is a separate host. View :-

http://openstack.redhat.com/Load_Balance_OpenStack_API#HAProxy

In the current Controller+Compute setup there is no need for Haproxy; otherwise a third host would be needed to load balance openstack-nova-compute.

So the "yum install haproxy" in the LBaaS section of [1] is hard to understand.

3. At the end of the RDO install the br-ex bridge and OVS port eth0 have been created.

4. Gluster volumes for Nova, Glance and Cinder backup have been created after the RDO install; Havana was tuned for the cinder-volumes gluster backend after the RDO installation.

5. HA is implemented via keepalived per [1] after the RDO install, due to the interface changing to "br-ex" on the Master.

Initial repositories set up per [1]

# yum install -y  http://rdo.fedorapeople.org/openstack-havana/rdo-release-havana.rpm
# cd /etc/yum.repos.d/
# wget http://download.gluster.org/pub/gluster/glusterfs/LATEST/RHEL/glusterfs-epel.repo
# yum install -y openstack-packstack python-netaddr
# yum install -y glusterfs glusterfs-fuse glusterfs-server

In case packstack fails to install EPEL :-

[root@hv02 ~]# wget http://dl.fedoraproject.org/pub/epel/6/x86_64/epel-release-6-8.noarch.rpm
[root@hv02 ~]# wget http://rpms.famillecollet.com/enterprise/remi-release-6.rpm
[root@hv02 ~]# rpm -Uvh remi-release-6*.rpm epel-release-6*.rpm

[root@hv02 ~]# ls -1 /etc/yum.repos.d/epel* /etc/yum.repos.d/remi.repo
/etc/yum.repos.d/epel.repo
/etc/yum.repos.d/epel-testing.repo
/etc/yum.repos.d/remi.repo

In case of a further packstack failure to resolve dependencies :-
Also update the epel*.repo files: uncomment baseurl and comment out mirrorlist (a one-liner for this is sketched below).
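
A sed one-liner for that repo edit (my sketch):

# sed -i -e 's/^#baseurl/baseurl/' -e 's/^mirrorlist/#mirrorlist/' /etc/yum.repos.d/epel*.repo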

System core setup

РController node: Nova, Keystone, Cinder, Glance, Neutron  (hv02)
РCompute node: Nova (nova-compute), Neutron (openvswitch-agent)  (hv01)

Service NetworkManager disabled, service network enabled, system rebooted before RDO installation

[root@hv02 ~]# packstack --answer-file=multi-node.packstack
Welcome to Installer setup utility
Packstack changed given value  to required value /root/.ssh/id_rsa.pub
Installing:
Clean Up...                                          [ DONE ]
Setting up ssh keys...                               [ DONE ]
Discovering hosts' details...                        [ DONE ]
Adding pre install manifest entries...               [ DONE ]
Installing time synchronization via NTP...           [ DONE ]
Adding MySQL manifest entries...                     [ DONE ]
Adding QPID manifest entries...                      [ DONE ]
Adding Keystone manifest entries...                  [ DONE ]
Adding Glance Keystone manifest entries...           [ DONE ]
Adding Glance manifest entries...                    [ DONE ]
Installing dependencies for Cinder...                [ DONE ]
Adding Cinder Keystone manifest entries...           [ DONE ]
Adding Cinder manifest entries...                    [ DONE ]
Adding Nova API manifest entries...                  [ DONE ]
Adding Nova Keystone manifest entries...             [ DONE ]
Adding Nova Cert manifest entries...                 [ DONE ]
Adding Nova Conductor manifest entries...            [ DONE ]
Adding Nova Compute manifest entries...              [ DONE ]
Adding Nova Scheduler manifest entries...            [ DONE ]
Adding Nova VNC Proxy manifest entries...            [ DONE ]
Adding Nova Common manifest entries...               [ DONE ]
Adding Openstack Network-related Nova manifest entries...[ DONE ]
Adding Neutron API manifest entries...               [ DONE ]
Adding Neutron Keystone manifest entries...          [ DONE ]
Adding Neutron L3 manifest entries...                [ DONE ]
Adding Neutron L2 Agent manifest entries...          [ DONE ]
Adding Neutron DHCP Agent manifest entries...        [ DONE ]
Adding Neutron LBaaS Agent manifest entries...       [ DONE ]
Adding Neutron Metadata Agent manifest entries...    [ DONE ]
Adding OpenStack Client manifest entries...          [ DONE ]
Adding Horizon manifest entries...                   [ DONE ]
Adding Heat manifest entries...                      [ DONE ]
Adding Heat Keystone manifest entries...             [ DONE ]
Adding Ceilometer manifest entries...                [ DONE ]
Adding Ceilometer Keystone manifest entries...       [ DONE ]
Adding post install manifest entries...              [ DONE ]
Preparing servers...                                 [ DONE ]
Installing Dependencies...                           [ DONE ]
Copying Puppet modules and manifests...              [ DONE ]
Applying Puppet manifests...
Applying 192.168.1.127_prescript.pp
Applying 192.168.1.137_prescript.pp
192.168.1.127_prescript.pp :               [ DONE ]
192.168.1.137_prescript.pp :               [ DONE ]
Applying 192.168.1.127_ntpd.pp
Applying 192.168.1.137_ntpd.pp
192.168.1.127_ntpd.pp :                         [ DONE ]
192.168.1.137_ntpd.pp :                         [ DONE ]
Applying 192.168.1.137_mysql.pp
Applying 192.168.1.137_qpid.pp
192.168.1.137_mysql.pp :                       [ DONE ]
192.168.1.137_qpid.pp :                         [ DONE ]
Applying 192.168.1.137_keystone.pp
Applying 192.168.1.137_glance.pp
Applying 192.168.1.137_cinder.pp
192.168.1.137_keystone.pp :                 [ DONE ]
192.168.1.137_glance.pp :                     [ DONE ]
192.168.1.137_cinder.pp :                     [ DONE ]
Applying 192.168.1.137_api_nova.pp
192.168.1.137_api_nova.pp :                 [ DONE ]
Applying 192.168.1.137_nova.pp
Applying 192.168.1.127_nova.pp
192.168.1.137_nova.pp :                         [ DONE ]
192.168.1.127_nova.pp :                         [ DONE ]
Applying 192.168.1.127_neutron.pp
Applying 192.168.1.137_neutron.pp
192.168.1.127_neutron.pp :                   [ DONE ]
192.168.1.137_neutron.pp :                   [ DONE ]
Applying 192.168.1.137_osclient.pp
Applying 192.168.1.137_horizon.pp
Applying 192.168.1.137_heat.pp
Applying 192.168.1.137_ceilometer.pp
192.168.1.137_osclient.pp :                 [ DONE ]
192.168.1.137_horizon.pp :                   [ DONE ]
192.168.1.137_heat.pp :                         [ DONE ]
192.168.1.137_ceilometer.pp :             [ DONE ]
Applying 192.168.1.127_postscript.pp
Applying 192.168.1.137_postscript.pp
192.168.1.127_postscript.pp :             [ DONE ]
192.168.1.137_postscript.pp :             [ DONE ]
[ DONE ]
Finalizing...                                        [ DONE ]

**** Installation completed successfully ******

Additional information:
* File /root/keystonerc_admin has been created on OpenStack client host 192.168.1.137. To use the command line tools you need to source the file.
* NOTE : A certificate was generated to be used for ssl, You should change the ssl certificate configured in /etc/httpd/conf.d/ssl.conf on 192.168.1.137 to use a CA signed cert.
* To access the OpenStack Dashboard browse to https://192.168.1.137/dashboard.
Please, find your login credentials stored in the keystonerc_admin in your home directory.
* The installation log file is available at: /var/tmp/packstack/20131226-230226-PzmL7R/openstack-setup.log
* The generated manifests are available at: /var/tmp/packstack/20131226-230226-PzmL7R/manifests

Services on Controller Node :-

Services on Compute Node :-

Post install configuration

On Controller :

[root@hv02 network-scripts(keystone_admin)]# cat ifcfg-br-ex

DEVICE="br-ex"
BOOTPROTO="static"
IPADDR="192.168.1.137"
NETMASK="255.255.255.0"
DNS1="83.221.202.254"
BROADCAST="192.168.1.255"
GATEWAY="192.168.1.1"
NM_CONTROLLED="no"
DEFROUTE="yes"
IPV4_FAILURE_FATAL="yes"
IPV6INIT=no
ONBOOT="yes"
TYPE="OVSBridge"
DEVICETYPE="ovs"

[root@hv02 network-scripts(keystone_admin)]# cat ifcfg-eth0

NAME="eth0"
HWADDR=90:E6:BA:2D:11:EB
ONBOOT="yes"
TYPE="OVSPort"
DEVICETYPE="ovs"
OVS_BRIDGE=br-ex
NM_CONTROLLED=no
IPV6INIT=no
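
These two files are the persistent form of wiring eth0 into br-ex. The one-off equivalent (my sketch; run it from the console, since eth0 loses its IP the moment it becomes an OVS port) would be:

# ovs-vsctl add-br br-ex
# ovs-vsctl add-port br-ex eth0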

Pre install configuration

[root@hv02 network-scripts(keystone_admin)]# cat ifcfg-eth1

DEVICE=eth1
HWADDR=00:0C:76:E0:1E:C5
TYPE=Ethernet
ONBOOT=yes
NM_CONTROLLED=no
BOOTPROTO=none

Post install configuration

[root@hv02 ~(keystone_admin)]# ovs-vsctl show
e059cd59-21c8-48f8-ad7c-b9e1de9a986b
    Bridge br-int
        Port "int-br-eth1"
            Interface "int-br-eth1"
        Port br-int
            Interface br-int
                type: internal
        Port "qvo5252ab82-49"
            tag: 1
            Interface "qvo5252ab82-49"
        Port "tape1849acb-66"
            tag: 1
            Interface "tape1849acb-66"
                type: internal
        Port "qr-9017c241-f3"
            tag: 1
            Interface "qr-9017c241-f3"
                type: internal
    Bridge br-ex
        Port br-ex
            Interface br-ex
                type: internal
        Port "eth0"
            Interface "eth0"
        Port "qg-14fcad42-83"
            Interface "qg-14fcad42-83"
                type: internal
    Bridge "br-eth1"
        Port "br-eth1"
            Interface "br-eth1"
                type: internal
        Port "eth1"
            Interface "eth1"
        Port "phy-br-eth1"
            Interface "phy-br-eth1"
    ovs_version: "1.11.0"

On Compute node :-

[root@hv01 network-scripts]# cat ifcfg-eth0

DEVICE=eth0
TYPE=Ethernet
UUID=e25e1975-50db-4421-ae39-676708d480db
ONBOOT=yes
NM_CONTROLLED=no
BOOTPROTO=none
IPADDR=192.168.1.127
PREFIX=24
GATEWAY=192.168.1.1
DNS1=83.221.202.254
DEFROUTE=yes
IPV4_FAILURE_FATAL=yes
IPV6INIT=no
NAME="System eth0"
HWADDR=00:22:15:63:E4:E2
[root@hv01 network-scripts]# cat ifcfg-eth1

DEVICE=eth1
HWADDR=00:22:15:63:F9:9F
TYPE=Ethernet
ONBOOT=yes
NM_CONTROLLED=no
BOOTPROTO=none

Glusterfs replicated volumes were created after reboot for glance, nova and cinder-volumes.

At this point implement HA via keepalived with  /etc/keepalived/keepalived.conf  on hv02

vrrp_instance VI_1 {
    interface br-ex
    state MASTER
    virtual_router_id 10
    priority 100   # master 100
    virtual_ipaddress {
        192.168.1.134
    }
}

and another one on hv01

vrrp_instance VI_1 {
    interface eth0
    state BACKUP
    virtual_router_id 10
    priority 99 # master 100
    virtual_ipaddress {
        192.168.1.134
    }
}

I just follow [1], but the interface for MASTER is "br-ex".

Enable the "keepalived" service and reboot both boxes.
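
To check which box currently holds the VIP (a sketch, mine):

[root@hv02 ~]# ip addr show br-ex | grep 192.168.1.134
[root@hv01 ~]# ip addr show eth0 | grep 192.168.1.134   # should be empty while hv02 is MASTER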

Tuning glance and nova per [1]  http://www.andrewklau.com/getting-started-with-multi-node-openstack-rdo-havana-gluster-backend-neutron/

Just in case, I reproduce the instructions from [1] :-

# mkdir -p /mnt/gluster/{glance,nova} # On Controller
# mkdir -p /mnt/gluster/nova          # On Compute
# mount -t glusterfs 192.168.1.134:/nova2 /mnt/gluster/nova/
# mount -t glusterfs 192.168.1.134:/glance2 /mnt/gluster/glance/

Update /etc/glance/glance-api.conf  
    filesystem_store_datadir = /mnt/gluster/glance/images

# mkdir -p /mnt/gluster/glance/images
# chown -R glance:glance /mnt/gluster/glance/
# service openstack-glance-api restart

For all Compute Nodes (you may have more than one, and the Controller as well if it runs openstack-nova-compute):

# mkdir /mnt/gluster/nova/instance/
# chown -R nova:nova /mnt/gluster/nova/instance/

Update /etc/nova/nova.conf
  instances_path = /mnt/gluster/nova/instance

# service openstack-nova-compute restart

Quoting ends
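
The same glance and nova edits can be applied non-interactively with openstack-config, the tool used below for cinder (a sketch, mine; paths as above):

# openstack-config --set /etc/glance/glance-api.conf DEFAULT filesystem_store_datadir /mnt/gluster/glance/images
# openstack-config --set /etc/nova/nova.conf DEFAULT instances_path /mnt/gluster/nova/instance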

Post installation creating cinder-volumes :-

Configuring Cinder to Add GlusterFS

# gluster volume create cinder-volumes02  replica 2 hv01.localdomain:/data2/cinder hv02.localdomain:/data2/cinder

# gluster volume start cinder-volumes02

# gluster volume set cinder-volumes02  auth.allow 192.168.1.*

# openstack-config --set /etc/cinder/cinder.conf DEFAULT volume_driver cinder.volume.drivers.glusterfs.GlusterfsDriver

# openstack-config --set /etc/cinder/cinder.conf DEFAULT glusterfs_shares_config /etc/cinder/shares.conf

# openstack-config --set /etc/cinder/cinder.conf DEFAULT glusterfs_mount_point_base /var/lib/cinder/volumes

 # vi /etc/cinder/shares.conf

    192.168.1.134:cinder-volumes02

:wq

Update /etc/sysconfig/iptables (if it hasn't been done earlier) :-

-A INPUT -p tcp -m multiport --dport 24007:24047 -j ACCEPT

-A INPUT -p tcp --dport 111 -j ACCEPT

-A INPUT -p udp --dport 111 -j ACCEPT

-A INPUT -p tcp -m multiport --dport 38465:38485 -j ACCEPT

Comment Out

-A FORWARD -j REJECT --reject-with icmp-host-prohibited

-A INPUT -j REJECT --reject-with icmp-host-prohibited

# service iptables restart

Restarting the openstack-cinder services mounts the glusterfs volume :-

 # for i in api scheduler volume ; do service openstack-cinder-${i} restart ;done

At this point the RDO packstack run has completed and the post-configuration tuning is done.

On Controller :-

[root@hv02 ~(keystone_admin)]# df -h
Filesystem                       Size  Used Avail Use% Mounted on
/dev/mapper/vg_hv02-LogVol00     154G   16G  131G  11% /
tmpfs                            3.9G  232K  3.9G   1% /dev/shm
/dev/sdb1                        485M   70M  390M  16% /boot
/dev/mapper/vg_havana-lv_havana   98G  2.8G   95G   3% /data2
192.168.1.134:/glance2            98G  2.9G   95G   3% /mnt/gluster/glance2
192.168.1.134:/nova2              98G  2.9G   95G   3% /mnt/gluster/nova2
192.168.1.134:/cinder-volumes02   98G  2.9G   95G   3% /var/lib/cinder/volumes/77b8406d9f60712274c66a84844feb8a
192.168.1.134:/cinder-volumes02   98G  2.9G   95G   3% /var/lib/nova/mnt/77b8406d9f60712274c66a84844feb8a

[root@hv02 ~(keystone_admin)]# cat /etc/fstab

#
# /etc/fstab
# Created by anaconda on Sat Dec 28 10:47:59 2013
#
# Accessible filesystems, by reference, are maintained under '/dev/disk'
# See man pages fstab(5), findfs(8), mount(8) and/or blkid(8) for more info
#
/dev/mapper/vg_hv02-LogVol00 /                       ext4    defaults        1 1
UUID=0a7bffa6-d133-4cd6-bdaf-06a00af0b340 /boot    ext4    defaults  1 2

/dev/mapper/vg_hv02-LogVol01 swap                    swap    defaults        0 0
tmpfs                   /dev/shm               tmpfs   defaults        0 0
devpts                  /dev/pts                devpts  gid=5,mode=620  0 0
sysfs                   /sys                    sysfs   defaults        0 0
proc                    /proc                   proc    defaults        0 0
/dev/mapper/vg_havana-lv_havana    /data2  xfs     defaults        1 2
192.168.1.134:/glance2  /mnt/gluster/glance2  glusterfs defaults,_netdev 0 0
192.168.1.134:/nova2    /mnt/gluster/nova2     glusterfs defaults,_netdev

[root@hv02 ~(keystone_admin)]# gluster volume info nova2
Volume Name: nova2
Type: Replicate
Volume ID: 3a04a896-8080-4172-b3fb-c89c028c6944
Status: Started
Number of Bricks: 1 x 2 = 2
Transport-type: tcp
Bricks:
Brick1: hv01.localdomain:/data2/nova
Brick2: hv02.localdomain:/data2/nova
Options Reconfigured:
auth.allow: 192.168.1.*

[root@hv02 ~(keystone_admin)]# gluster volume info glance2
Volume Name: glance2
Type: Replicate
Volume ID: c7b31eaa-6dea-49c2-9d09-ec4dcd65c560
Status: Started
Number of Bricks: 1 x 2 = 2
Transport-type: tcp
Bricks:
Brick1: hv01.localdomain:/data2/glance
Brick2: hv02.localdomain:/data2/glance
Options Reconfigured:
auth.allow: 192.168.1.*

[root@hv02 ~(keystone_admin)]# gluster volume info cinder-volumes02
Volume Name: cinder-volumes02
Type: Replicate
Volume ID: 639e6afa-dc29-4fd7-8d3c-95f655383d1c
Status: Started
Number of Bricks: 1 x 2 = 2
Transport-type: tcp
Bricks:
Brick1: hv01.localdomain:/data2/cinder
Brick2: hv02.localdomain:/data2/cinder
Options Reconfigured:
auth.allow: 192.168.1.*

On Compute :-


[root@hv02 ~(keystone_admin)]# ssh hv01
Last login: Mon Dec 30 11:09:16 2013 from hv02

[root@hv01 ~]# df -h
Filesystem Size Used Avail Use% Mounted on
/dev/mapper/vg_hv01-LogVol00 154G 4.5G 142G 4% /
tmpfs 3.9G 84K 3.9G 1% /dev/shm
/dev/sdb1 485M 70M 390M 16% /boot
/dev/mapper/vg_havana-lv_havana 98G 3.1G 95G 4% /data2
192.168.1.134:/nova2 98G 3.1G 95G 4% /mnt/gluster/nova2
192.168.1.134:/cinder-volumes02 98G 3.1G 95G 4% /var/lib/nova/mnt/77b8406d9f60712274c66a84844feb8a

[root@hv01 ~]# cat /etc/fstab

#
# /etc/fstab
# Created by anaconda on Sat Dec 28 10:14:16 2013
#
# Accessible filesystems, by reference, are maintained under '/dev/disk'
# See man pages fstab(5), findfs(8), mount(8) and/or blkid(8) for more info
#
/dev/mapper/vg_hv01-LogVol00 /                       ext4    defaults        1 1
UUID=21afa600-9b18-4aea-bfb7-16b73eaee3de /boot                   ext4    defaults        1 2
/dev/mapper/vg_hv01-LogVol01       swap            swap    defaults        0 0
tmpfs                   /dev/shm             tmpfs   defaults        0 0
devpts                  /dev/pts               devpts  gid=5,mode=620  0 0
sysfs                   /sys                      sysfs   defaults        0 0
proc                    /proc                    proc    defaults        0 0
/dev/mapper/vg_havana-lv_havana    /data2  xfs     defaults        1 2
192.168.1.134:/nova2   /mnt/gluster/nova2  glusterfs defaults,_netdev 0 0

On Controller :-

[root@hv02 ~(keystone_admin)]# openstack-status

== Nova services ==

openstack-nova-api:                     active
openstack-nova-cert:                    active
openstack-nova-compute:                 active
openstack-nova-network:                 dead      (disabled on boot)
openstack-nova-scheduler:               active
openstack-nova-conductor:               active

== Glance services ==

openstack-glance-api:                   active
openstack-glance-registry:              active

== Keystone service ==

openstack-keystone:                     active

== Horizon service ==

openstack-dashboard:                    000
== neutron services ==
neutron-server:                         active
neutron-dhcp-agent:                     active
neutron-l3-agent:                       active
neutron-metadata-agent:                 active
neutron-lbaas-agent:                    active
neutron-openvswitch-agent:              active

== Cinder services ==

openstack-cinder-api:                   active
openstack-cinder-scheduler:             active
openstack-cinder-volume:                active

== Ceilometer services ==

openstack-ceilometer-api:               active
openstack-ceilometer-central:           active
openstack-ceilometer-compute:           active
openstack-ceilometer-collector:         active
openstack-ceilometer-alarm-notifier:    active
openstack-ceilometer-alarm-evaluator:   active

== Heat services ==

openstack-heat-api:                     active
openstack-heat-api-cfn:                 dead      (disabled on boot)
openstack-heat-api-cloudwatch:          dead      (disabled on boot)
openstack-heat-engine:                  active

== Support services ==

mysqld:                                 active
libvirtd:                               active
openvswitch:                            active
messagebus:                             active
tgtd:                                   active
qpidd:                                  active
memcached:                              active

== Keystone users ==

+———————————-+————+———+———————-+
|                id                |    name    | enabled |        email         |
+———————————-+————+———+———————-+
| 0b6cc1c84d194a4fbf6be1cd3343167e |   admin    |   True  |    test@test.com     |
| 1415f2952fc34b419abc8a0d75130e30 | ceilometer |   True  | ceilometer@localhost |
| d77e11979821441da8157103011cae5a |   cinder   |   True  |   cinder@localhost   |
| 2860d02458904f9aa0f89afed6bcc423 |   glance   |   True  |   glance@localhost   |
| 78a8beeeb277493e96feae3127ea0607 |    heat    |   True  |    heat@localhost    |
| 002a2b8fcbfb47a1a588e74e51cb1f3a |  neutron   |   True  |  neutron@localhost   |
| 1b558e148aff4f618120f0f7f547f064 |    nova    |   True  |    nova@localhost    |
+———————————-+————+———+———————-+

== Glance images ==

+————————————–+—————–+————-+——————+———–+——–+
| ID                                   | Name            | Disk Format | Container Format | Size      | Status |
+————————————–+—————–+————-+——————+———–+——–+
| 02ef79b4-081b-4966-8b11-10492449fba5 | f19image        | qcow2       | bare             | 237371392 | active |
| 6eb9e748-5786-4072-b2cf-4c2a91da2bf3 | Ubuntu1310image | qcow2       | bare             | 243728384 | active |
+————————————–+—————–+————-+——————+———–+——–+

== Nova managed services ==

+——————+——————+———-+———+——-+—————————-+—————–+
| Binary           | Host             | Zone     | Status  | State | Updated_at                 | Disabled Reason |
+——————+——————+———-+———+——-+—————————-+—————–+
| nova-consoleauth | hv02.localdomain | internal | enabled | up    | 2013-12-28T11:06:32.000000 | None            |
| nova-scheduler   | hv02.localdomain | internal | enabled | up    | 2013-12-28T11:06:32.000000 | None            |
| nova-conductor   | hv02.localdomain | internal | enabled | up    | 2013-12-28T11:06:35.000000 | None            |
| nova-cert        | hv02.localdomain | internal | enabled | up    | 2013-12-28T11:06:32.000000 | None            |
| nova-compute     | hv02.localdomain | nova     | enabled | up    | 2013-12-28T11:06:33.000000 | None            |
| nova-compute     | hv01.localdomain | nova     | enabled | up    | 2013-12-28T11:06:32.000000 | None            |

+——————+——————+———-+———+——-+—————————-+—————–+

== Nova networks ==

+————————————–+———+——+
| ID                                   | Label   | Cidr |
+————————————–+———+——+
| 56456fcb-8696-4e63-894e-635681c911e4 | private | None |
| d4e83ac8-c257-4fee-a551-5d711087c238 | public  | None |
+————————————–+———+——+

== Nova instance flavors ==

+—-+———–+———–+——+———–+——+——-+————-+———–+
| ID | Name      | Memory_MB | Disk | Ephemeral | Swap | VCPUs | RXTX_Factor | Is_Public |
+—-+———–+———–+——+———–+——+——-+————-+———–+
| 1  | m1.tiny   | 512       | 1    | 0         |      | 1     | 1.0         | True      |
| 2  | m1.small  | 2048      | 20   | 0         |      | 1     | 1.0         | True      |
| 3  | m1.medium | 4096      | 40   | 0         |      | 2     | 1.0         | True      |
| 4  | m1.large  | 8192      | 80   | 0         |      | 4     | 1.0         | True      |
| 5  | m1.xlarge | 16384     | 160  | 0         |      | 8     | 1.0         | True      |
+—-+———–+———–+——+———–+——+——-+————-+———–+

== Nova instances ==

+————————————–+——————+——–+————+————-+——————————–+
| ID                                   | Name             | Status | Task State | Power State | Networks                       |
+————————————–+——————+——–+————+————-+——————————–+
| 7a9da01f-499c-4d27-9b7a-1b1307b767a8 | UbuntuSalamander | ACTIVE | None       | Running     | private=10.0.0.4, 192.168.1.60 |
| 4db2876c-cedd-4d2b-853c-e156bcb20592 | VF19RS1          | ACTIVE | None       | Running     | private=10.0.0.2, 192.168.1.59 |
+————————————–+——————+——–+————+————-+——————————–|

Detailed info about both instances

 [root@hv02 ~(keystone_admin)]# nova show 7a9da01f-499c-4d27-9b7a-1b1307b767a8

+————————————–+———————————————————-+
| Property                             | Value                                                    |
+————————————–+———————————————————-+
| status                               | ACTIVE                                                   |
| updated                              | 2013-12-28T10:43:53Z                                     |
| OS-EXT-STS:task_state                | None                                                     |
| OS-EXT-SRV-ATTR:host                 | hv02.localdomain                                         |
| key_name                             | key2                                                     |
| image                                | Attempt to boot from volume - no image supplied          |
| private network                      | 10.0.0.4, 192.168.1.60                                   |
| hostId                               | 2d47a35fc92addd418ba8dd8df73233732a0e880b2e4e1ffac907091 |
| OS-EXT-STS:vm_state                  | active                                                   |
| OS-EXT-SRV-ATTR:instance_name        | instance-00000002                                        |
| OS-SRV-USG:launched_at               | 2013-12-28T10:43:53.000000                               |
| OS-EXT-SRV-ATTR:hypervisor_hostname  | hv02.localdomain                                         |
| flavor                               | m1.small (2)                                             |
| id                                   | 7a9da01f-499c-4d27-9b7a-1b1307b767a8                     |
| security_groups                      | [{u'name': u'default'}]                                  |
| OS-SRV-USG:terminated_at             | None                                                     |
| user_id                              | 0b6cc1c84d194a4fbf6be1cd3343167e                         |
| name                                 | UbuntuSalamander                                         |
| created                              | 2013-12-28T10:43:40Z                                     |
| tenant_id                            | dc2ec9f2a8404c22b46566f567bebc49                         |
| OS-DCF:diskConfig                    | MANUAL                                                   |
| metadata                             | {}                                                       |
| os-extended-volumes:volumes_attached | [{u'id': u'eaf06b2e-23d0-4a65-bbba-6d464f6c0441'}]       |
| accessIPv4                           |                                                          |
| accessIPv6                           |                                                          |
| progress                             | 0                                                        |
| OS-EXT-STS:power_state               | 1                                                        |
| OS-EXT-AZ:availability_zone          | nova                                                     |
| config_drive                         |                                                          |
+————————————–+———————————————————-+

[root@hv02 ~(keystone_admin)]# nova show 4db2876c-cedd-4d2b-853c-e156bcb20592

+————————————–+———————————————————-+
| Property                             | Value                                                    |
+————————————–+———————————————————-+
| status                               | ACTIVE                                                   |
| updated                              | 2013-12-28T10:20:31Z                                     |
| OS-EXT-STS:task_state                | None                                                     |
| OS-EXT-SRV-ATTR:host                 | hv01.localdomain                                         |
| key_name                             | key2                                                     |
| image                                | Attempt to boot from volume - no image supplied          |
| private network                      | 10.0.0.2, 192.168.1.59                                   |
| hostId                               | fc6ed5fd7d8a2f3c510671ff8485af9e340d4244246eb0aff55f1a0d |
| OS-EXT-STS:vm_state                  | active                                                   |
| OS-EXT-SRV-ATTR:instance_name        | instance-00000001                                        |
| OS-SRV-USG:launched_at               | 2013-12-28T10:20:31.000000                               |
| OS-EXT-SRV-ATTR:hypervisor_hostname  | hv01.localdomain                                         |
| flavor                               | m1.small (2)                                             |
| id                                   | 4db2876c-cedd-4d2b-853c-e156bcb20592                     |
| security_groups                      | [{u'name': u'default'}]                                  |
| OS-SRV-USG:terminated_at             | None                                                     |
| user_id                              | 0b6cc1c84d194a4fbf6be1cd3343167e                         |
| name                                 | VF19RS1                                                  |
| created                              | 2013-12-28T10:20:22Z                                     |
| tenant_id                            | dc2ec9f2a8404c22b46566f567bebc49                         |
| OS-DCF:diskConfig                    | MANUAL                                                   |
| metadata                             | {}                                                       |
| os-extended-volumes:volumes_attached | [{u'id': u'c1ebdd6c-2be0-451e-b3ba-b93cbc5b506b'}]       |
| accessIPv4                           |                                                          |
| accessIPv6                           |                                                          |
| progress                             | 0                                                        |
| OS-EXT-STS:power_state               | 1                                                        |
| OS-EXT-AZ:availability_zone          | nova                                                     |
| config_drive                         |                                                          |
+————————————–+———————————————————-+

  Testing Windows 2012 Server evaluation cloud instance :-

[root@hv02 Downloads(keystone_admin)]# gunzip -cd windows_server_2012_r2_standard_eval_kvm_20131117.qcow2.gz | glance image-create --property hypervisor_type=kvm --name "Windows Server 2012 R2 Std Eval" --container-format bare --disk-format vhd
+—————————-+————————————–+
| Property                   | Value                                |
+—————————-+————————————–+
| Property 'hypervisor_type' | kvm                                  |
| checksum                   | 83c08f00b784e551a79ac73348b47360     |
| container_format           | bare                                 |
| created_at                 | 2014-01-09T13:27:24                  |
| deleted                    | False                                |
| deleted_at                 | None                                 |
| disk_format                | vhd                                  |
| id                         | d55b81c5-2370-4d3e-8cb1-323e7a8fa9da |
| is_public                  | False                                |
| min_disk                   | 0                                    |
| min_ram                    | 0                                    |
| name                       | Windows Server 2012 R2 Std Eval      |
| owner                      | dc2ec9f2a8404c22b46566f567bebc49     |
| protected                  | False                                |
| size                       | 17182752768                          |
| status                     | active                               |
| updated_at                 | 2014-01-09T13:52:18                  |
+—————————-+————————————–+

[root@hv02 Downloads(keystone_admin)]# nova image-list
+————————————–+———————————+——–+——–+
| ID                                   | Name                            | Status | Server |
+————————————–+———————————+——–+——–+
| 6bb391f6-f330-406a-95eb-a12fd3db93d5 | UbuntuSalamanderImage           | ACTIVE |        |
| d55b81c5-2370-4d3e-8cb1-323e7a8fa9da | Windows Server 2012 R2 Std Eval | ACTIVE |        |
| c8265abc-5499-414d-94c3-0376cd652281 | fedora19image                   | ACTIVE |        |
| 545aa5a8-b3b8-4fbd-9c86-c523d7790b49 | fedora20image                   | ACTIVE |        |
+————————————–+———————————+——–+——–+

[root@hv02 Downloads(keystone_admin)]# cinder create --image-id d55b81c5-2370-4d3e-8cb1-323e7a8fa9da --display_name Windows2012LVG 20
+———————+————————————–+
|       Property      |                Value                 |
+———————+————————————–+
|     attachments     |                  []                  |
|  availability_zone  |                 nova                 |
|       bootable      |                false                 |
|      created_at     |      2014-01-09T13:58:49.761145      |
| display_description |                 None                 |
|     display_name    |            Windows2012LVG            |
|          id         | fb78c942-1cf7-4f8c-b264-1a3997d03eef |
|       image_id      | d55b81c5-2370-4d3e-8cb1-323e7a8fa9da |
|       metadata      |                  {}                  |
|         size        |                  20                  |
|     snapshot_id     |                 None                 |
|     source_volid    |                 None                 |
|        status       |               creating               |
|     volume_type     |                 None                 |
+———————+————————————–+

[root@hv02 f6b9512bf949cf4bedd1cd604742797e(keystone_admin)]# ls -lah
total 8.5G
drwxr-xr-x. 3 root   root    173 Jan  9 17:58 .
drwxr-xr-x. 6 cinder cinder 4.0K Jan  8 14:12 ..
-rw-rw-rw-. 1 root   root    12G Jan  9 14:56 volume-1ef5e77f-3ac2-42ab-97e6-ebb04a872461
-rw-rw-rw-. 1 root   root    10G Jan  8 22:52 volume-42671dcc-3295-4d9c-a040-6ff031277b73
-rw-rw-rw-. 1 root   root    20G Jan  9 17:58 volume-fb78c942-1cf7-4f8c-b264-1a3997d03eef
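
Note that ls reports a total of only 8.5G although the three volume files have apparent sizes of 12G, 10G and 20G: the backing files are sparse, so only blocks actually written consume space. The gap between apparent and allocated size can be seen with du:

# du -h --apparent-size volume-fb78c942-1cf7-4f8c-b264-1a3997d03eef
# du -h volume-fb78c942-1cf7-4f8c-b264-1a3997d03eef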

[root@hv02 f6b9512bf949cf4bedd1cd604742797e(keystone_admin)]# cinder list
+--------------------------------------+-------------+---------------------+------+-------------+----------+--------------------------------------+
|                  ID                  |    Status   |     Display Name    | Size | Volume Type | Bootable |             Attached to              |
+--------------------------------------+-------------+---------------------+------+-------------+----------+--------------------------------------+
| 1ef5e77f-3ac2-42ab-97e6-ebb04a872461 |    in-use   |       VF19VLG2      |  12  | performance |   true   | 6b40285c-ce03-4194-b247-013c6f11ff42 |
| 42671dcc-3295-4d9c-a040-6ff031277b73 |    in-use   | UbuntuSalamanderVLG |  10  | performance |   true   | ebd3063e-00c7-4ea8-aed4-63919ebddb42 |
| fb78c942-1cf7-4f8c-b264-1a3997d03eef | downloading |    Windows2012LVG   |  20  |     None    |  false   |                                      |
+--------------------------------------+-------------+---------------------+------+-------------+----------+--------------------------------------+
[root@hv02 f6b9512bf949cf4bedd1cd604742797e(keystone_admin)]# cinder list
+--------------------------------------+--------+---------------------+------+-------------+----------+--------------------------------------+
|                  ID                  | Status |     Display Name    | Size | Volume Type | Bootable |             Attached to              |
+--------------------------------------+--------+---------------------+------+-------------+----------+--------------------------------------+
| 1ef5e77f-3ac2-42ab-97e6-ebb04a872461 | in-use |       VF19VLG2      |  12  | performance |   true   | 6b40285c-ce03-4194-b247-013c6f11ff42 |
| 42671dcc-3295-4d9c-a040-6ff031277b73 | in-use | UbuntuSalamanderVLG |  10  | performance |   true   | ebd3063e-00c7-4ea8-aed4-63919ebddb42 |
| fb78c942-1cf7-4f8c-b264-1a3997d03eef | in-use |    Windows2012LVG   |  20  |     None    |   true   | 2950e393-eb37-4991-9e16-fa7ca24b678a |
+--------------------------------------+--------+---------------------+------+-------------+----------+--------------------------------------+

[root@hv02 f6b9512bf949cf4bedd1cd604742797e(keystone_admin)]# nova list

+--------------------------------------+------------------+-----------+------------+-------------+--------------------------------+
| ID                                   | Name             | Status    | Task State | Power State | Networks                       |
+--------------------------------------+------------------+-----------+------------+-------------+--------------------------------+
| ebd3063e-00c7-4ea8-aed4-63919ebddb42 | UbuntuSalamander | SUSPENDED | None       | Shutdown    | private=10.0.0.4, 192.168.1.60 |
| 6b40285c-ce03-4194-b247-013c6f11ff42 | VF19RS2          | SUSPENDED | None       | Shutdown    | private=10.0.0.2, 192.168.1.59 |
| 2950e393-eb37-4991-9e16-fa7ca24b678a | Win2012SRV       | ACTIVE    | None       | Running     | private=10.0.0.5, 192.168.1.61 |
+--------------------------------------+------------------+-----------+------------+-------------+--------------------------------+

[root@hv02 f6b9512bf949cf4bedd1cd604742797e(keystone_admin)]# nova show  2950e393-eb37-4991-9e16-fa7ca24b678a

+--------------------------------------+----------------------------------------------------------+
| Property                             | Value                                                    |
+--------------------------------------+----------------------------------------------------------+
| status                               | ACTIVE                                                   |
| updated                              | 2014-01-09T19:37:09Z                                     |
| OS-EXT-STS:task_state                | None                                                     |
| OS-EXT-SRV-ATTR:host                 | hv01.localdomain                                         |
| key_name                             | key2                                                     |
| image                                | Attempt to boot from volume - no image supplied          |
| private network                      | 10.0.0.5, 192.168.1.61                                   |
| hostId                               | fc6ed5fd7d8a2f3c510671ff8485af9e340d4244246eb0aff55f1a0d |
| OS-EXT-STS:vm_state                  | active                                                   |
| OS-EXT-SRV-ATTR:instance_name        | instance-00000013                                        |
| OS-SRV-USG:launched_at               | 2014-01-09T14:26:34.000000                               |
| OS-EXT-SRV-ATTR:hypervisor_hostname  | hv01.localdomain                                         |
| flavor                               | m1.small (2)                                             |
| id                                   | 2950e393-eb37-4991-9e16-fa7ca24b678a                     |
| security_groups                      | [{u'name': u'default'}]                                  |
| OS-SRV-USG:terminated_at             | None                                                     |
| user_id                              | 0b6cc1c84d194a4fbf6be1cd3343167e                         |
| name                                 | Win2012SRV                                               |
| created                              | 2014-01-09T14:26:24Z                                     |
| tenant_id                            | dc2ec9f2a8404c22b46566f567bebc49                         |
| OS-DCF:diskConfig                    | MANUAL                                                   |
| metadata                             | {}                                                       |
| os-extended-volumes:volumes_attached | [{u'id': u'fb78c942-1cf7-4f8c-b264-1a3997d03eef'}]       |
| accessIPv4                           |                                                          |
| accessIPv6                           |                                                          |
| progress                             | 0                                                        |
| OS-EXT-STS:power_state               | 1                                                        |
| OS-EXT-AZ:availability_zone          | nova                                                     |
| config_drive                         |                                                          |
+--------------------------------------+----------------------------------------------------------+
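
Note the image field: the guest was booted from the Cinder volume, not from a Glance image. With the flavor (m1.small, id 2) and keypair (key2) shown above, the boot command would have looked roughly like the sketch below (reconstructed from the nova show output, not copied from the session log):

# nova boot --flavor 2 --key-name key2 \
      --block-device-mapping vda=fb78c942-1cf7-4f8c-b264-1a3997d03eef:::0 \
      Win2012SRV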

System info :-

REFERENCES.

1. http://www.andrewklau.com/getting-started-with-multi-node-openstack-rdo-havana-gluster-backend-neutron/
2. http://openstack.redhat.com/forum/discussion/607/havana-mutlinode-with-neutron

Answer file :

[general]

# Path to a Public key to install on servers. If a usable key has not
# been installed on the remote servers the user will be prompted for a
# password and this key will be installed so the password will not be
# required again
CONFIG_SSH_KEY=

# Set to 'y' if you would like Packstack to install MySQL
CONFIG_MYSQL_INSTALL=y

# Set to 'y' if you would like Packstack to install OpenStack Image
# Service (Glance)
CONFIG_GLANCE_INSTALL=y

# Set to 'y' if you would like Packstack to install OpenStack Block
# Storage (Cinder)
CONFIG_CINDER_INSTALL=y

# Set to 'y' if you would like Packstack to install OpenStack Compute
# (Nova)
CONFIG_NOVA_INSTALL=y

# Set to 'y' if you would like Packstack to install OpenStack
# Networking (Neutron)
CONFIG_NEUTRON_INSTALL=y

# Set to 'y' if you would like Packstack to install OpenStack
# Dashboard (Horizon)
CONFIG_HORIZON_INSTALL=y

# Set to 'y' if you would like Packstack to install OpenStack Object
# Storage (Swift)
CONFIG_SWIFT_INSTALL=n

# Set to 'y' if you would like Packstack to install OpenStack
# Metering (Ceilometer)
CONFIG_CEILOMETER_INSTALL=y

# Set to 'y' if you would like Packstack to install Heat
CONFIG_HEAT_INSTALL=y

# Set to 'y' if you would like Packstack to install the OpenStack
# Client packages. An admin "rc" file will also be installed
CONFIG_CLIENT_INSTALL=y

# Comma separated list of NTP servers. Leave plain if Packstack
# should not install ntpd on instances.
CONFIG_NTP_SERVERS=0.au.pool.ntp.org,1.au.pool.ntp.org,2.au.pool.ntp.org,3.au.pool.ntp.org

# Set to 'y' if you would like Packstack to install Nagios to monitor
# openstack hosts
CONFIG_NAGIOS_INSTALL=n

# Comma separated list of servers to be excluded from installation in
# case you are running Packstack the second time with the same answer
# file and don't want Packstack to touch these servers. Leave plain if
# you don't need to exclude any server.
EXCLUDE_SERVERS=
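
EXCLUDE_SERVERS matters when re-running Packstack against this same answer file; for instance, a later run that must leave the two hosts already deployed untouched could set:

EXCLUDE_SERVERS=192.168.1.137,192.168.1.127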

# The IP address of the server on which to install MySQL
CONFIG_MYSQL_HOST=192.168.1.137

# Username for the MySQL admin user
CONFIG_MYSQL_USER=root

# Password for the MySQL admin user
CONFIG_MYSQL_PW=1279e9bb292c48e5

# The IP address of the server on which to install the QPID service
CONFIG_QPID_HOST=192.168.1.137
CONFIG_QPID_ENABLE_SSL=n
CONFIG_QPID_ENABLE_AUTH=n

CONFIG_NEUTRON_LBAAS_HOSTS=192.168.1.137,192.168.1.127

CONFIG_RH_USER=n
CONFIG_RH_PW=n
CONFIG_RH_BETA_REPO=n
CONFIG_SATELLITE_URL=n
CONFIG_SATELLITE_USER=n
CONFIG_SATELLITE_PW=n
CONFIG_SATELLITE_AKEY=n
CONFIG_SATELLITE_CACERT=n
CONFIG_SATELLITE_PROFILE=n
CONFIG_SATELLITE_FLAGS=novirtinfo
CONFIG_SATELLITE_PROXY=n
CONFIG_SATELLITE_PROXY_USER=n
CONFIG_SATELLITE_PROXY_PW=n

# The IP address of the server on which to install Keystone
CONFIG_KEYSTONE_HOST=192.168.1.137

# The password to use for the Keystone to access DB
CONFIG_KEYSTONE_DB_PW=6cde8da7a3ca4bc0

# The token to use for the Keystone service api
CONFIG_KEYSTONE_ADMIN_TOKEN=c9a7f68c19e448b48c9f520df5771851

# The password to use for the Keystone admin user
CONFIG_KEYSTONE_ADMIN_PW=6fa29c9cb0264385

# The password to use for the Keystone demo user
CONFIG_KEYSTONE_DEMO_PW=6dc04587dd234ac9

# Keystone token format. Use either UUID or PKI
CONFIG_KEYSTONE_TOKEN_FORMAT=PKI

# The IP address of the server on which to install Glance
CONFIG_GLANCE_HOST=192.168.1.137

# The password to use for the Glance to access DB
CONFIG_GLANCE_DB_PW=1c135a665b70481d

# The password to use for the Glance to authenticate with Keystone
CONFIG_GLANCE_KS_PW=9c32f5a3bfb54966

# The IP address of the server on which to install Cinder
CONFIG_CINDER_HOST=192.168.1.137

# The password to use for the Cinder to access DB
CONFIG_CINDER_DB_PW=d9e997c7f6ec4f3b

# The password to use for the Cinder to authenticate with Keystone
CONFIG_CINDER_KS_PW=ae0e15732c104989

# The Cinder backend to use, valid options are: lvm, gluster, nfs
CONFIG_CINDER_BACKEND=lvm

# Create Cinder's volumes group. This should only be done for testing
# on a proof-of-concept installation of Cinder. This will create a
# file-backed volume group and is not suitable for production usage.
CONFIG_CINDER_VOLUMES_CREATE=y

# Cinder's volumes group size. Note that actual volume size will be
# extended with 3% more space for VG metadata.
CONFIG_CINDER_VOLUMES_SIZE=20G

# A single or comma separated list of gluster volume shares to mount,
# eg: ip-address:/vol-name
# CONFIG_CINDER_GLUSTER_MOUNTS=192.168.1.137:/CINDER-VOLUMES

# A single or comma separated list of NFS exports to mount, eg: ip-
# address:/export-name
CONFIG_CINDER_NFS_MOUNTS=
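
With CONFIG_CINDER_VOLUMES_CREATE=y, Packstack builds a loopback-file-backed volume group named cinder-volumes on the Cinder host; after the run it can be sanity-checked like this:

# vgdisplay cinder-volumes
# losetup -a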

# The IP address of the server on which to install the Nova API
# service
CONFIG_NOVA_API_HOST=192.168.1.137

# The IP address of the server on which to install the Nova Cert
# service
CONFIG_NOVA_CERT_HOST=192.168.1.137

# The IP address of the server on which to install the Nova VNC proxy
CONFIG_NOVA_VNCPROXY_HOST=192.168.1.137

# A comma separated list of IP addresses on which to install the Nova
# Compute services
CONFIG_NOVA_COMPUTE_HOSTS=192.168.1.137,192.168.1.127

# The IP address of the server on which to install the Nova Conductor
# service
CONFIG_NOVA_CONDUCTOR_HOST=192.168.1.137

# The password to use for the Nova to access DB
CONFIG_NOVA_DB_PW=34bf4442200c4c93

# The password to use for the Nova to authenticate with Keystone
CONFIG_NOVA_KS_PW=beaf384bc2b941ca

# The IP address of the server on which to install the Nova Scheduler
# service
CONFIG_NOVA_SCHED_HOST=192.168.1.137

# The overcommitment ratio for virtual to physical CPUs. Set to 1.0
# to disable CPU overcommitment
CONFIG_NOVA_SCHED_CPU_ALLOC_RATIO=32.0

# The overcommitment ratio for virtual to physical RAM. Set to 1.0 to
# disable RAM overcommitment
CONFIG_NOVA_SCHED_RAM_ALLOC_RATIO=3.0
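
A quick worked example of what these ratios mean for scheduling: on a compute node with, say, 4 physical cores and 8 GB RAM (hypothetical figures), CPU_ALLOC_RATIO=32.0 lets the scheduler place up to 4 x 32 = 128 vCPUs worth of instances there, and RAM_ALLOC_RATIO=3.0 allows up to 8 x 3 = 24 GB of instance RAM to be committed.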

# Private interface for Flat DHCP on the Nova compute servers
CONFIG_NOVA_COMPUTE_PRIVIF=eth1

# The list of IP addresses of the server on which to install the Nova
# Network service
CONFIG_NOVA_NETWORK_HOSTS=192.168.1.137

# Nova network manager
CONFIG_NOVA_NETWORK_MANAGER=nova.network.manager.FlatDHCPManager

# Public interface on the Nova network server
CONFIG_NOVA_NETWORK_PUBIF=eth0

# Private interface for network manager on the Nova network server
CONFIG_NOVA_NETWORK_PRIVIF=eth1

# IP Range for network manager
CONFIG_NOVA_NETWORK_FIXEDRANGE=192.168.32.0/22

# IP Range for Floating IPs
CONFIG_NOVA_NETWORK_FLOATRANGE=10.3.4.0/22

# Name of the default floating pool to which the specified floating
# ranges are added to
CONFIG_NOVA_NETWORK_DEFAULTFLOATINGPOOL=nova

# Automatically assign a floating IP to new instances
CONFIG_NOVA_NETWORK_AUTOASSIGNFLOATINGIP=n

# First VLAN for private networks
CONFIG_NOVA_NETWORK_VLAN_START=100

# Number of networks to support
CONFIG_NOVA_NETWORK_NUMBER=1

# Number of addresses in each private subnet
CONFIG_NOVA_NETWORK_SIZE=255

# The IP addresses of the server on which to install the Neutron
# server
CONFIG_NEUTRON_SERVER_HOST=192.168.1.137

# The password to use for Neutron to authenticate with Keystone
CONFIG_NEUTRON_KS_PW=53d71f31745b431e

# The password to use for Neutron to access DB
CONFIG_NEUTRON_DB_PW=ab7d7088075b4727

# A comma separated list of IP addresses on which to install Neutron
# L3 agent
CONFIG_NEUTRON_L3_HOSTS=192.168.1.137

# The name of the bridge that the Neutron L3 agent will use for
# external traffic, or 'provider' if using provider networks
CONFIG_NEUTRON_L3_EXT_BRIDGE=br-ex

# A comma separated list of IP addresses on which to install Neutron
# DHCP agent
CONFIG_NEUTRON_DHCP_HOSTS=192.168.1.137

# The name of the L2 plugin to be used with Neutron
CONFIG_NEUTRON_L2_PLUGIN=openvswitch

# A comma separated list of IP addresses on which to install Neutron
# metadata agent
CONFIG_NEUTRON_METADATA_HOSTS=192.168.1.137

# The password (shared secret) to use for the Neutron metadata agent
CONFIG_NEUTRON_METADATA_PW=d7ae6de0e6ef4d5e

# The type of network to allocate for tenant networks
CONFIG_NEUTRON_LB_TENANT_NETWORK_TYPE=local

# A comma separated list of VLAN ranges for the Neutron linuxbridge
# plugin
CONFIG_NEUTRON_LB_VLAN_RANGES=

# A comma separated list of interface mappings for the Neutron
# linuxbridge plugin
CONFIG_NEUTRON_LB_INTERFACE_MAPPINGS=

# Type of network to allocate for tenant networks
CONFIG_NEUTRON_OVS_TENANT_NETWORK_TYPE=vlan

# A comma separated list of VLAN ranges for the Neutron openvswitch
# plugin
CONFIG_NEUTRON_OVS_VLAN_RANGES=physnet1:10:20

# A comma separated list of bridge mappings for the Neutron
# openvswitch plugin
CONFIG_NEUTRON_OVS_BRIDGE_MAPPINGS=physnet1:br-eth1

# A comma separated list of colon-separated OVS bridge:interface
# pairs. The interface will be added to the associated bridge.
CONFIG_NEUTRON_OVS_BRIDGE_IFACES=br-eth1:eth1

# A comma separated list of tunnel ranges for the Neutron openvswitch
# plugin
CONFIG_NEUTRON_OVS_TUNNEL_RANGES=

# Override the IP used for GRE tunnels on this hypervisor to the IP
# found on the specified interface (defaults to the HOST IP)
CONFIG_NEUTRON_OVS_TUNNEL_IF=
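
Once the openvswitch plugin is configured this way, the bridge wiring can be verified on each node; br-eth1 should list eth1 as one of its ports:

# ovs-vsctl show
# ovs-vsctl list-ports br-eth1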

# The IP address of the server on which to install the OpenStack
# client packages. An admin "rc" file will also be installed
CONFIG_OSCLIENT_HOST=192.168.1.137

# The IP address of the server on which to install Horizon
CONFIG_HORIZON_HOST=192.168.1.137

# To set up Horizon communication over https set this to "y"
CONFIG_HORIZON_SSL=y

# PEM encoded certificate to be used for ssl on the https server,
# leave blank if one should be generated, this certificate should not
# require a passphrase
CONFIG_SSL_CERT=

# Keyfile corresponding to the certificate if one was entered
CONFIG_SSL_KEY=
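
Since CONFIG_SSL_CERT is left blank, a self-signed certificate is generated for Horizon and browsers will warn about it on first connect. The generated certificate can be inspected with openssl; the path below is an assumption and may differ by release:

# openssl x509 -in /etc/pki/tls/certs/ssl.crt -noout -subject -dates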

# The IP address on which to install the Swift proxy service
# (currently only single proxy is supported)
CONFIG_SWIFT_PROXY_HOSTS=192.168.1.137

# The password to use for the Swift to authenticate with Keystone
CONFIG_SWIFT_KS_PW=311d3891e9e140b9

# A comma separated list of IP addresses on which to install the
# Swift Storage services, each entry should take the format
# ip-address[/dev], for example 127.0.0.1/vdb will install /dev/vdb
# on 127.0.0.1 as a swift storage device (packstack does not create the
# filesystem, you must do this first), if /dev is omitted Packstack
# will create a loopback device for a test setup
CONFIG_SWIFT_STORAGE_HOSTS=192.168.1.137
# Number of swift storage zones, this number MUST be no bigger than
# the number of storage devices configured
CONFIG_SWIFT_STORAGE_ZONES=1
# Number of swift storage replicas, this number MUST be no bigger
# than the number of storage zones configured
CONFIG_SWIFT_STORAGE_REPLICAS=1
# FileSystem type for storage nodes
CONFIG_SWIFT_STORAGE_FSTYPE=ext4
# Whether to provision for demo usage and testing
CONFIG_PROVISION_DEMO=n
# The CIDR network address for the floating IP subnet
CONFIG_PROVISION_DEMO_FLOATRANGE=172.24.4.224/28
# Whether to configure tempest for testing
CONFIG_PROVISION_TEMPEST=n
# The uri of the tempest git repository to use
CONFIG_PROVISION_TEMPEST_REPO_URI=https://github.com/openstack/tempest.git
# The revision of the tempest git repository to use
CONFIG_PROVISION_TEMPEST_REPO_REVISION=master
# Whether to configure the ovs external bridge in an all-in-one
# deployment
CONFIG_PROVISION_ALL_IN_ONE_OVS_BRIDGE=n
# The IP address of the server on which to install Heat service
CONFIG_HEAT_HOST=192.168.1.137
# The password used by Heat user to authenticate against MySQL
CONFIG_HEAT_DB_PW=0f593f0e8ac94b20
# The password to use for the Heat to authenticate with Keystone
CONFIG_HEAT_KS_PW=22a4dee89e0e490b
# Set to 'y' if you would like Packstack to install Heat CloudWatch
# API
CONFIG_HEAT_CLOUDWATCH_INSTALL=n
# Set to 'y' if you would like Packstack to install Heat
# CloudFormation API
CONFIG_HEAT_CFN_INSTALL=n
# The IP address of the server on which to install Heat CloudWatch
# API service
CONFIG_HEAT_CLOUDWATCH_HOST=192.168.1.137
# The IP address of the server on which to install Heat
# CloudFormation API service
CONFIG_HEAT_CFN_HOST=192.168.1.137
# The IP address of the server on which to install Ceilometer
CONFIG_CEILOMETER_HOST=192.168.1.137
# Secret key for signing metering messages.
CONFIG_CEILOMETER_SECRET=70ca460aa5354ef8
# The password to use for Ceilometer to authenticate with Keystone
CONFIG_CEILOMETER_KS_PW=72858e26b4cd40c2
# To subscribe each server to EPEL enter "y"
CONFIG_USE_EPEL=y
# A comma separated list of URLs to any additional yum repositories
# to install
CONFIG_REPO=
# The IP address of the server on which to install the Nagios server
CONFIG_NAGIOS_HOST=192.168.1.137
# The password of the nagiosadmin user on the Nagios server
CONFIG_NAGIOS_PW=c3832621eebd4d48
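
For reference, an answer file like this is generated and then applied with Packstack as follows (answers.txt is just a placeholder name):

# packstack --gen-answer-file=answers.txt
# packstack --answer-file=answers.txt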