OpenStack RDO Installation on CentOS 6.5 - Part 2

This is the second part of an article that covers the installation and configuration of an OpenStack (Icehouse) implementation atop VMware ESXi 5.1 in the lab.  The contents of the article are based on a very helpful video and slideshow created by Lars Kellogg-Stedman.  The technology behind the automation of the installation is Red Hat's RDO.  Part 1 covered the lab preparation, prerequisites and the execution of the main install control engine, PackStack.  This part covers the final configuration of the installed platform.

OpenStack Architecture

As a refresher, here is a diagram of the architecture employed, as recommended in Lars' content but altered to reflect external connectivity.

OpenStack Installation Architecture

As mentioned above, these components have now been installed and configured.  Iptables firewall rules have been inserted, MySQL users and databases have been created, and the services/agents are now running.  The rest of this article is dedicated to the post-installation configuration steps required before the cloud can be used.  These tasks will be completed in a terminal window.

Before completing the installation, take some time to view the tables below.  They will help you understand where to look for configuration files and logs for each of the OpenStack-related services on each host in this cloud.

Controller

Component Location | Configuration File Location | Log File Location
/usr/bin/nova-api |  | /var/log/nova/api.log
/usr/bin/nova-cert |  | /var/log/nova/cert.log
/usr/bin/nova-consoleauth |  | /var/log/nova/consoleauth.log
/usr/bin/nova-novncproxy |  |
/usr/bin/nova-scheduler |  | /var/log/nova/scheduler.log
/usr/bin/nova-conductor |  | /var/log/nova/conductor.log
/usr/sbin/httpd | /etc/httpd/conf.d/*, /etc/openstack-dashboard/local_settings | /var/log/httpd, /var/log/horizon/horizon.log
/usr/sbin/rabbitmq-server | /etc/rabbitmq/rabbitmq | /var/log/rabbitmq/rabbit@controller.log, /var/log/rabbitmq/rabbit@controller-sasl.log
/usr/libexec/mysqld | /etc/my.cnf | /var/lib/mysql/controller.cloud.local.err
/usr/bin/keystone-all | /etc/keystone/keystone.conf | /var/log/keystone/keystone.log

Network

Component Location | Configuration File Location | Log File Location
/usr/bin/neutron-dhcp-agent | /etc/neutron/dhcp_agent.ini | /var/log/neutron/dhcp-agent.log
/usr/bin/neutron-l3-agent | /etc/neutron/l3_agent.ini, /etc/neutron/fwaas_driver.ini | /var/log/neutron/l3-agent.log
/usr/bin/neutron-lbaas-agent | /etc/neutron/lbaas_agent.ini | /var/log/neutron/lbaas-agent.log
/usr/bin/neutron-metadata-agent | /etc/neutron/metadata_agent.ini | /var/log/neutron/metadata-agent.log
/usr/bin/neutron-openvswitch-agent |  | /var/log/neutron/openvswitch-agent.log
/usr/bin/neutron-server | /etc/neutron/plugin.ini | /var/log/neutron/server.log
/usr/bin/neutron-rootwrap | /etc/neutron/rootwrap.conf |

Note: Other configuration files that contain general Neutron settings shared across all agents are:

/etc/neutron/neutron.conf

/usr/share/neutron/neutron-dist.conf

Compute

Component Location | Configuration File Location | Log File Location
/usr/bin/nova-compute |  | /var/log/nova/compute.log
/usr/bin/neutron-openvswitch-agent | /etc/neutron/plugins/openvswitch/ovs_neutron_plugin.ini, /etc/neutron/neutron.conf, /usr/share/neutron/neutron-dist.conf | /var/log/neutron/openvswitch-agent.log
/usr/bin/neutron-rootwrap | /etc/neutron/rootwrap.conf |
/usr/sbin/tuned | /etc/tuned.conf |
libvirt | /etc/libvirt/libvirtd.conf, /etc/libvirt/libvirt.conf, /etc/libvirt/lxc.conf, /etc/libvirt/qemu.conf | /var/log/libvirt/libvirt.log
/usr/sbin/dnsmasq |  |
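
Tip: a quick way to confirm that the components listed above are actually running is the openstack-status helper, provided by the openstack-utils package that RDO normally pulls in (if it is missing, this check can simply be skipped), combined with a scan of the relevant log directory:

openstack-status                     # summarises the state of the OpenStack services on this host
grep -i error /var/log/nova/*.log    # example: scan the nova logs on the controller for errors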

Next Steps

Check Access to Horizon Dashboard

  • Connect to the Controller node
  • Open /etc/openstack-dashboard/local_settings with your favourite editor and check the following line.  It should look approximately like this:
ALLOWED_HOSTS = ['192.168.10.3', 'controller.cloud.local', 'localhost', ]
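
To confirm the setting from the shell (and to pick up any edit you make), the following is sufficient; restarting httpd is only needed if the file was changed:

grep ALLOWED_HOSTS /etc/openstack-dashboard/local_settings
service httpd restart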

Retrieve The Admin Password

The password for the OpenStack 'admin' user is located in the file /root/keystonerc_admin.  This account should be used when performing tasks that require a high privilege level inside OpenStack, whether via the UI or the command line.  You should now be able to log into the Horizon web UI: browse to http://192.168.10.3/dashboard and enter the admin credentials.  The screen should look something like this:

Screenshots: OpenStack Horizon login page, default entry screen and agent information.
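
If you prefer the command line to the UI, the generated admin password can be read straight out of the credentials file:

grep OS_PASSWORD /root/keystonerc_admin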

Provision An Image That Can Be Used To Create VMs

Execute the following (commands are copied and pasted from the slideshow).

. /root/keystonerc_admin
glance image-create --copy-from http://download.cirros-cloud.net/0.3.1/cirros-0.3.1-x86_64-disk.img --is-public true --container-format bare --disk-format qcow2 --name cirros

The resulting output should look something like this:

+------------------+--------------------------------------+
| Property         | Value                                |
+------------------+--------------------------------------+
| checksum         | None                                 |
| container_format | bare                                 |
| created_at       | 2014-05-12T14:21:00                  |
| deleted          | False                                |
| deleted_at       | None                                 |
| disk_format      | qcow2                                |
| id               | 831b0697-25b7-4f49-aa51-6e95c0154177 |
| is_public        | True                                 |
| min_disk         | 0                                    |
| min_ram          | 0                                    |
| name             | cirros                               |
| owner            | 11fe5869f6194999b6bb4bab1e89fb85     |
| protected        | False                                |
| size             | 13147648                             |
| status           | queued                               |
| updated_at       | 2014-05-12T14:21:00                  |
| virtual_size     | None                                 |
+------------------+--------------------------------------+

CirrOS is a minimal Linux distribution designed for testing in clouds.  Its Launchpad page is at https://launchpad.net/cirros.

  • Confirm it is ready for use
glance image-list

Output should resemble this; note the 'active' status:

+--------------------------------------+--------+-------------+------------------+----------+--------+
| ID                                   | Name   | Disk Format | Container Format | Size     | Status |
+--------------------------------------+--------+-------------+------------------+----------+--------+
| 831b0697-25b7-4f49-aa51-6e95c0154177 | cirros | qcow2       | bare             | 13147648 | active |
+--------------------------------------+--------+-------------+------------------+----------+--------+
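
If the status still shows 'queued' or 'saving', Glance has not finished copying the image from download.cirros-cloud.net; a simple poll (purely a convenience, not part of Lars' procedure) will show when it flips to 'active':

watch -n 10 glance image-list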

Create External Network

Execute the following commands (still as the OpenStack admin user):

neutron net-create external01 --router:external=True
+---------------------------+--------------------------------------+
| Field                     | Value                                |
+---------------------------+--------------------------------------+
| admin_state_up            | True                                 |
| id                        | d64356d8-87c8-46fd-b818-f5bda42bc3ae |
| name                      | external01                           |
| provider:network_type     | gre                                  |
| provider:physical_network |                                      |
| provider:segmentation_id  | 1000                                 |
| router:external           | True                                 |
| shared                    | False                                |
| status                    | ACTIVE                               |
| subnets                   |                                      |
| tenant_id                 | 11fe5869f6194999b6bb4bab1e89fb85     |
+---------------------------+--------------------------------------+

Now add the IP details to the network created in the previous step:

neutron subnet-create --name external01-subnet01 --disable-dhcp --allocation-pool start=10.20.0.100,end=10.20.0.199 external01 10.20.0.0/24


+------------------+------------------------------------------------+
| Field            | Value                                          |
+------------------+------------------------------------------------+
| allocation_pools | {"start": "10.20.0.100", "end": "10.20.0.199"} |
| cidr             | 10.20.0.0/24                                   |
| dns_nameservers  |                                                |
| enable_dhcp      | False                                          |
| gateway_ip       | 10.20.0.1                                      |
| host_routes      |                                                |
| id               | dbc80e1c-3709-4d4d-bcaf-74857ea01a98           |
| ip_version       | 4                                              |
| name             | external01-subnet01                            |
| network_id       | d64356d8-87c8-46fd-b818-f5bda42bc3ae           |
| tenant_id        | 11fe5869f6194999b6bb4bab1e89fb85               |
+------------------+------------------------------------------------+

Note that the allocation pool has been specifically set for demonstration purposes only. 
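
The new network and subnet can be re-examined at any point with the standard 'show' commands:

neutron net-show external01
neutron subnet-show external01-subnet01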

Create a VM Flavour for Testing

Now that an OS image has been downloaded, a new VM hardware profile (a 'flavour') should be created.

Execute the following command:

nova flavor-create m1.nano auto 128 1 1

The resultant output should be as follows:

+--------------------------------------+---------+-----------+------+-----------+------+-------+-------------+-----------+
| ID                                   | Name    | Memory_MB | Disk | Ephemeral | Swap | VCPUs | RXTX_Factor | Is_Public |
+--------------------------------------+---------+-----------+------+-----------+------+-------+-------------+-----------+
| ff5a8ab0-562b-40d6-ab36-41edd8f1bce5 | m1.nano | 128       | 1    | 0         |      | 1     | 1.0         | True      |
+--------------------------------------+---------+-----------+------+-----------+------+-------+-------------+-----------+
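
For reference, the positional arguments to 'nova flavor-create' are name, ID ('auto' generates the UUID shown above), RAM in MB, root disk in GB and vCPU count, so m1.nano is a 128 MB, 1 GB disk, single-vCPU flavour.  It can be confirmed with:

nova flavor-list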

Create a Tenant

A tenant is essentially a logical container for user accounts.  It most often represents an actual company or entity.  If a billing system were attached to this cloud, billing information would be associated with the tenant account.  User accounts reside within tenants.

Enter the following commands:

keystone tenant-create --name demo
keystone user-create --name demo --tenant demo --pass demo

Check the full user list (remember you are doing this as the OpenStack 'admin' user):

[root@controller ~(keystone_admin)]#  keystone user-list
+----------------------------------+---------+---------+-------------------+
|                id                |   name  | enabled |       email       |
+----------------------------------+---------+---------+-------------------+
| ba22a2488d0c4f5a901f21d3bfe56964 |  admin  |   True  |   test@test.com   |
| 459ac78652204150a35ed8fb7caa9fa0 |  cinder |   True  |  cinder@localhost |
| 9d53920018684658a2de376f9e672b6d |   demo  |   True  |                   |
| 703d60868be940c7b712c711a9b592ba |  glance |   True  |  glance@localhost |
| 448d7c237cfe46bb807209f801c1d521 | neutron |   True  | neutron@localhost |
| c30fbd1c6bbe43d2be78ecc905ae6256 |   nova  |   True  |   nova@localhost  |
+----------------------------------+---------+---------+-------------------+

On the controller, create a credentials file /root/keystonerc_demo with the following contents:

export OS_USERNAME=demo
export OS_TENANT_NAME=demo
export OS_PASSWORD=demo
export OS_AUTH_URL=http://192.168.10.3:35357/v2.0/
export PS1='[\u@\h \W(keystone_demo)]\$ '
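
As a quick sanity check of the new credentials (any error here usually points at a typo in the password or auth URL), source the file and request a token:

. /root/keystonerc_demo
keystone token-get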

Actions as The Demo User

The following actions will now be performed as the demo user, mimicking web UI activity.

  • First, switch to the demo user by sourcing its credentials file
. /root/keystonerc_demo

Create a private network for the user's virtual machines

The private network is exactly that: only virtual machines created within this tenant and attached to that specific private network can 'see' each other.  Broadcast traffic from this tenant will not appear on any ports belonging to other tenants' networks.  Traffic between the tenant's virtual machines on this network travels via GRE tunnels set up between the hypervisor hosts and the network node (where the Neutron agents are running).
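
If you would like to see that tunnel mesh for yourself, Open vSwitch can display it directly on the compute or network node (run as root; the exact gre-* port names are generated by the OVS agent, so yours will differ):

ovs-vsctl show                 # look for the gre-* ports on the br-tun bridge
ovs-ofctl dump-flows br-tun    # optionally, the flows that steer tenant traffic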

  • Create the private network
neutron net-create private01

+----------------+--------------------------------------+
| Field          | Value                                |
+----------------+--------------------------------------+
| admin_state_up | True                                 |
| id             | 6a799969-832f-4287-98c4-eeb7f28cc974 |
| name           | private01                            |
| shared         | False                                |
| status         | ACTIVE                               |
| subnets        |                                      |
| tenant_id      | 64756631c45749f980c243b71ba8ff42     |
+----------------+--------------------------------------+
  • Assign IP information to the network
neutron subnet-create --name private01-subnet01 --dns-nameserver 8.8.8.8 --gateway 10.0.0.1 private01 10.0.0.0/24
+------------------+--------------------------------------------+
| Field            | Value                                      |
+------------------+--------------------------------------------+
| allocation_pools | {"start": "10.0.0.2", "end": "10.0.0.254"} |
| cidr             | 10.0.0.0/24                                |
| dns_nameservers  | 8.8.8.8                                    |
| enable_dhcp      | True                                       |
| gateway_ip       | 10.0.0.1                                   |
| host_routes      |                                            |
| id               | bd1ae33a-4de9-49c2-afa0-cd55bf30335d       |
| ip_version       | 4                                          |
| name             | private01-subnet01                         |
| network_id       | 6a799969-832f-4287-98c4-eeb7f28cc974       |
| tenant_id        | 64756631c45749f980c243b71ba8ff42           |
+------------------+--------------------------------------------+

Create a Router and Connect To Networks

neutron router-create external-router

+-----------------------+--------------------------------------+
| Field                 | Value                                |
+-----------------------+--------------------------------------+
| admin_state_up        | True                                 |
| external_gateway_info |                                      |
| id                    | 61ed3222-c996-455c-99f3-3c7b6b29d29e |
| name                  | external-router                      |
| status                | ACTIVE                               |
| tenant_id             | 64756631c45749f980c243b71ba8ff42     |
+-----------------------+--------------------------------------+
  • Set the external network for the router
neutron router-gateway-set external-router external01
  • Add an interface to the router on the previously created internal subnet
neutron router-interface-add external-router private01-subnet01
Added interface c453fda8-f03e-4bce-afac-194d825d7a37 to router external-router
  • Now, log into the Horizon dashboard as the demo user and click on Network Topology.  You should see something like this:

Network Topology As Seen By User
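
The same topology can be confirmed from the command line; the router's attached ports (its gateway on external01 and its interface on private01-subnet01) are listed with:

neutron router-port-list external-router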

Create an Authentication Key Pair

  • Execute the following commands as the user:
ssh-keygen -t rsa -b 2048 -N '' -f id_rsa_demo
nova keypair-add --pub-key id_rsa_demo.pub demo
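
Confirm the key has been registered (the fingerprint shown should match that of id_rsa_demo.pub):

nova keypair-list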

Create a VM

  • List available networks
neutron net-list

+--------------------------------------+------------+--------------------------------------------------+
| id                                   | name       | subnets                                          |
+--------------------------------------+------------+--------------------------------------------------+
| 6a799969-832f-4287-98c4-eeb7f28cc974 | private01  | bd1ae33a-4de9-49c2-afa0-cd55bf30335d 10.0.0.0/24 |
| d64356d8-87c8-46fd-b818-f5bda42bc3ae | external01 | dbc80e1c-3709-4d4d-bcaf-74857ea01a98             |
+--------------------------------------+------------+--------------------------------------------------+
  • Extract the network id from the above table and use it in the VM creation command:
nova boot --poll --flavor m1.nano --image cirros --nic net-id=6a799969-832f-4287-98c4-eeb7f28cc974 --key-name demo test0
+--------------------------------------+------------------------------------------------+
| Property                             | Value                                          |
+--------------------------------------+------------------------------------------------+
| OS-DCF:diskConfig                    | MANUAL                                         |
| OS-EXT-AZ:availability_zone          | nova                                           |
| OS-EXT-STS:power_state               | 0                                              |
| OS-EXT-STS:task_state                | scheduling                                     |
| OS-EXT-STS:vm_state                  | building                                       |
| OS-SRV-USG:launched_at               | -                                              |
| OS-SRV-USG:terminated_at             | -                                              |
| accessIPv4                           |                                                |
| accessIPv6                           |                                                |
| adminPass                            | KvwA2pKFycM9                                   |
| config_drive                         |                                                |
| created                              | 2014-05-15T14:48:58Z                           |
| flavor                               | m1.nano (ff5a8ab0-562b-40d6-ab36-41edd8f1bce5) |
| hostId                               |                                                |
| id                                   | 751a9894-08ea-4628-b1cf-780e572d21b7           |
| image                                | cirros (831b0697-25b7-4f49-aa51-6e95c0154177)  |
| key_name                             | demo                                           |
| metadata                             | {}                                             |
| name                                 | test0                                          |
| os-extended-volumes:volumes_attached | []                                             |
| progress                             | 0                                              |
| security_groups                      | default                                        |
| status                               | BUILD                                          |
| tenant_id                            | 64756631c45749f980c243b71ba8ff42               |
| updated                              | 2014-05-15T14:48:58Z                           |
| user_id                              | 9d53920018684658a2de376f9e672b6d               |
+--------------------------------------+------------------------------------------------+

You should also see a "Server building ... 10% complete" message on screen (due to the use of '--poll' on the command line).
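
Once the build completes, the instance state and its private IP address can also be checked from the command line; the console log is handy if the instance fails to boot:

nova list
nova console-log test0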

  • Once that is complete, open the Horizon dashboard and inspect the VM

CirrOS Instance

Note the username and password from the boot screen.

Connecting to the Outside World

  • Execute 'ifconfig' in the VM console - the IP address for eth0 will likely be 10.0.0.2.
  • Ping the default gateway 10.0.0.1 - you should get a reply.  That interface sits on the 'Network' host (where the L3 agent is running), so you can infer that the GRE tunnel is in place over eth1.  The curious can use Wireshark to confirm this.
  • Ping 10.20.0.100 - you should get a reply.  This is the northbound interface of the external router created earlier.
  • Attempt to ping 10.20.0.1 - this will fail because there is currently no connection between the OpenStack external network and the physical public network.
  • Shut down the Network host
  • Add a third Ethernet card and place it on your 'Public network'
  • Restart the system and log into the Network host
  • ifconfig -a should reveal an 'eth2' like this:
eth2      Link encap:Ethernet  HWaddr 00:0C:29:41:E9:9F
          BROADCAST MULTICAST  MTU:1500  Metric:1
          RX packets:0 errors:0 dropped:0 overruns:0 frame:0
          TX packets:0 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:1000
          RX bytes:0 (0.0 b)  TX bytes:0 (0.0 b)

We now need to connect the external router to this network via a port.  This process is not specifically documented in Lars' presentation but the OpenStack documentation has this covered.

  • Create the /etc/sysconfig/network-scripts/ifcfg-eth2 file (retain YOUR HW address)
DEVICE=eth2
HWADDR=00:0C:29:41:E9:9F
TYPE=OVSPort
DEVICETYPE=ovs
OVS_BRIDGE=br-ex
ONBOOT=yes
  • Create /etc/sysconfig/network-scripts/ifcfg-br-ex
DEVICE=br-ex
DEVICETYPE=ovs
TYPE=OVSBridge 
ONBOOT=yes
  • Start eth2 device with ifup eth2
  • You may need to cycle the neutron services:
service neutron-l3-agent stop
service neutron-openvswitch-agent stop
service neutron-openvswitch-agent start
service neutron-l3-agent start
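
Before testing connectivity, it is worth confirming that eth2 really has ended up as a port on the br-ex bridge; the output should include eth2 alongside a qg-* port created by the L3 agent:

ovs-vsctl list-ports br-ex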

You should now be able to ping:

  • Your local VM interface (10.0.0.2)
  • South side of virtual router on your private network (10.0.0.1)
  • North side of virtual router (10.20.0.100)
  • South side of public network router (10.20.0.1)
  • South side of the home network Internet gateway (192.168.1.1)
  • Google DNS (8.8.8.8)

Cloud Complete.  What's Next?

So, the RDO OpenStack cloud is now complete.  It can be used as a proof of concept for an OpenStack-based cloud.  In follow-up articles I'll look more closely at the various OpenStack components, such as Cinder, Glance and Neutron.