Apache Cloudstack 4.3 Installation - Part 3

In Part 1 we installed Cloudstack to our pre-prepared infrastructure.

In Part 2 we configured a first zone in the cloud and added a single XenServer Hypervisor to a cluster.

In this part we are going to:

  • Upload an ISO image
  • Create a virtual machine
  • Configure firewall and port forwarding in order to allow external access
  • Snapshot a virtual machine
  • Add another hypervisor to the cluster
  • Create a virtual machine on a second hypervisor
  • Attempt to communicate between the two VMs via the tenant specific VLAN

Upload an ISO image

  • Navigate to 'Templates' from the LHS menu
  • At the top of the main window, select 'ISOs' from the dropdown
  • Click the 'Register ISO' button
  • Fill out the modal form
  • Monitor the progress of the upload
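For those who prefer the command line, the ISO can also be registered through the Cloudstack API. The sketch below uses the CloudMonkey CLI and assumes it has already been configured with the management server's API endpoint and keys; the URL and UUIDs are placeholders, to be looked up first with 'list zones' and 'list ostypes'. Whether it is done from the UI or the API, the upload exercises the same machinery.

    # Hypothetical CloudMonkey equivalent of the 'Register ISO' form.
    # The URL and UUIDs below are placeholders, not values from this install.
    cloudmonkey register iso \
      name="CentOS-6.5-minimal" \
      displaytext="CentOS 6.5 x86_64 minimal" \
      url="http://mirror.example.com/CentOS-6.5-x86_64-minimal.iso" \
      zoneid=<zone-uuid> ostypeid=<ostype-uuid>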

This is a really good initial test for the SSVM, the Secondary Storage Virtual Machine.  When we enabled the zone, two system VMs were created from the previously downloaded system VM template.  You can see these running by clicking the 'Infrastructure' item in the LHS menu and then clicking the 'View All' button in the System VMs box in the main window.

Essentially what happens is this: when you complete the ISO upload form and press OK, Cloudstack instructs the SSVM to go out to the provided URL, pull the ISO down and store it on the secondary storage in the zone.  This requires that the SSVM can correctly reach the Internet and has no difficulties in reaching the NFS secondary storage server and mounting the NFS path.

Sometimes, when the network has not been set up correctly during the creation of the zone, the SSVM will not be able to perform this task.
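If an upload does hang, the SSVM can be inspected directly. The sketch below follows the approach documented for Cloudstack system VMs (SSH from the hypervisor over the link-local address on port 3922 using the host's cloud key, then run the bundled check script); the exact address and paths should be verified in your environment.

    # From the XenServer host running the SSVM (the link-local address is
    # shown in the UI under Infrastructure -> System VMs):
    ssh -i /root/.ssh/id_rsa.cloud -p 3922 root@169.254.x.x

    # On the SSVM, run the bundled health check. It tests DNS resolution,
    # Internet reachability and the NFS secondary storage mount:
    /usr/local/cloud/systemvm/ssvm-check.sh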

Create a Virtual Machine

  • Select 'Instances' from the LHS menu
  • Click 'Add instance'
  • Work our way through the modal forms (7 steps).  There are not many options within our default installation.  The most important for now is simply to use our ISO and create a 'Minimal Instance'.  The screenshots show the required options.
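As an aside, the same instance could be deployed from the command line instead of the wizard. A sketch using CloudMonkey, with every UUID a placeholder to be looked up first ('list zones', 'list serviceofferings', 'list diskofferings', 'list isos'):

    # Hypothetical CloudMonkey equivalent of the 'Add instance' wizard
    # when booting from an ISO. All UUIDs are placeholders.
    cloudmonkey deploy virtualmachine \
      zoneid=<zone-uuid> \
      serviceofferingid=<service-offering-uuid> \
      templateid=<iso-uuid> \
      diskofferingid=<disk-offering-uuid> \
      hypervisor=XenServer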

The next step here is to enable access from the Internet, because at the moment it is not possible to connect directly to this virtual machine from outside the cloud.

Configure A CentOS 6.5 Minimal Build VM For External Access

OK, a cake has been baked - a CentOS 6.5 64-bit (minimal build) server VM has been built.  In this case, the only additional step we needed to take after the build had finished was to go into /etc/sysconfig/network-scripts, change the 'ONBOOT' parameter from 'no' to 'yes' in the interface configuration file and restart the interface.  At this point we have a functioning CentOS 6.5 server, but some basic network diagnostic tests will reveal that some extra configuration is required.
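Assuming a single interface named eth0 (as it was here), the ONBOOT change amounts to the following two commands inside the VM:

    # Enable eth0 at boot and bring it up now (run inside the new VM).
    sed -i 's/^ONBOOT=no/ONBOOT=yes/' /etc/sysconfig/network-scripts/ifcfg-eth0
    ifdown eth0 2>/dev/null; ifup eth0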

By default, newly created servers do not have access to the Internet, nor can they be directly accessed from the Internet.  If you try to ping, for example, 8.8.8.8, you will find that it fails.  Egress rules need to be set up via the 'Network' LHS menu item.  We click the blue 'Add' button and add a suitable ICMP rule (if we want to ping devices external to the cloud network):

ICMP Egress Rule
Source CIDR: 10.1.1.0/24
Protocol: ICMP
ICMP Type: <leave blank>
ICMP Code: <leave blank>

From now on, we should be able to ping any external IP address.
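A quick test from inside the VM confirms the rule is in place, and for completeness the equivalent API call is sketched too (the network UUID is a placeholder from 'list networks'):

    # From inside the VM - outbound ICMP should now succeed:
    ping -c 3 8.8.8.8

    # Sketch of the same egress rule via CloudMonkey (UUID is a placeholder):
    cloudmonkey create egressfirewallrule \
      networkid=<network-uuid> protocol=icmp cidrlist=10.1.1.0/24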

Now, let's look at how to allow inbound access, as it is slightly more 'complicated'.  We still navigate to the 'Network' menu item and to our specific guest network, but after that we need to click 'View IP Addresses' and then click the IP address of our virtual router (which at this stage is the only IP address in the list).  Click 'Configuration' and we will see a screen just like the one below:

The important information has been highlighted to indicate where we are in the UI and what we need to do next.  Essentially, for traffic that originates outside the cloud to reach the VM, we need to open a port in the firewall and then decide where that particular port is forwarded.  Let's imagine for just a second that we believe in 'security via obscurity'.  We want to enable SSH access to our new VM, but want to make it accessible over port 22222 instead of the usual port.  In our example here, we are going to do this without changing any configuration on the VM.

Firewall Rule

The first thing we need to do is open port 22222 on the virtual router firewall.  

  • Click the 'View All' button inside the Firewall box on the diagram
  • Enter the details below into the UI:
Source CIDR: 0.0.0.0/0
Protocol: TCP
Start Port: 22222
End Port: 22222
  • Click the blue 'Add' button

Forwarding Rule

  • Click the <IP address>[Source NAT] link in the breadcrumb menu
  • Click 'View All' in the Port Forwarding box
  • Add a rule as follows:
Private Port: 22 (fill in the top box)
Public Port: 22222 (fill in the top box)
  • Click the blue 'Add' button.  A new modal window appears listing all of the available VMs
  • Click the RHS 'Select' button
  • Select the IP address from the LHS dropdown of available addresses
  • Click 'Apply'.  A new rule appears in the Port Forwarding list
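For reference, both rules can also be created against the API. A sketch using CloudMonkey, where the UUIDs are placeholders that would come from 'list publicipaddresses' and 'list virtualmachines':

    # 1. Open port 22222 on the virtual router's public IP:
    cloudmonkey create firewallrule ipaddressid=<public-ip-uuid> \
      protocol=tcp startport=22222 endport=22222 cidrlist=0.0.0.0/0

    # 2. Forward public port 22222 to port 22 on the VM:
    cloudmonkey create portforwardingrule ipaddressid=<public-ip-uuid> \
      protocol=tcp publicport=22222 privateport=22 virtualmachineid=<vm-uuid>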

It is now possible to SSH over port 22222 to the virtual router's 'public' IP address and connect to the CentOS 6.5 VM.
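For example, from any machine that can reach the public address (shown here as a placeholder):

    # SSH to the VM through the virtual router's public IP on the
    # forwarded port:
    ssh -p 22222 root@<public-IP>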

Take a Snapshot of a Virtual Machine

This is a very quick exercise but is a good test of many aspects of network and storage functionality.

  • Click on 'Instances' in the LHS menu
  • Click on the relevant instance
  • Click the 'camera' icon in the row of small icons in the 'details' tab
  • Await completion and confirm success
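The same snapshot can be taken through the API. A sketch, assuming the instance UUID has been retrieved with 'list virtualmachines':

    # Take a VM snapshot via CloudMonkey (UUID is a placeholder):
    cloudmonkey create vmsnapshot virtualmachineid=<vm-uuid>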

Add A Second Hypervisor To The Cluster

  • Mirror the build of the first XenServer VM:
    • Select 'Custom Virtual machine'
    • 'Other 64bit Linux'
    • Single processor, single core
    • 6GB of RAM
    • 2 Ethernet cards, E1000.  The first on 'Cloud-Management', the second on 'Cloud-Public'
    • LSI Logic Parallel SCSI controller
    • 40GB new virtual disk

As per the diagram in Part 1, we configure the new XenServer as follows:

IP Address: 192.168.4.11
Netmask: 255.255.255.0
Gateway: 192.168.4.1
DNS Server: 192.168.4.3
NTP Server: 192.168.4.3
Hostname: xs02.cloud.local
  • Repeat the labeling of the network interfaces:
    • Eth0 - 'Cloud-Management'
    • Eth1 - 'Cloud-Public'
  • Ensure that we can ping the XenServer hostname from the management server
  • From within the Cloudstack management console, add the new hypervisor to the existing cluster ('Infrastructure' -> 'Hosts' -> 'View All' -> 'Add Host')
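Both of those last two steps can also be done from the management server's shell. A sketch, with the XenServer root password and the zone/pod/cluster UUIDs left as placeholders:

    # From the management server - check name resolution and reachability:
    ping -c 3 xs02.cloud.local

    # Sketch of adding the host via CloudMonkey instead of the UI:
    cloudmonkey add host url=http://xs02.cloud.local \
      username=root password=<xenserver-root-password> hypervisor=XenServer \
      zoneid=<zone-uuid> podid=<pod-uuid> clusterid=<cluster-uuid>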

Note: It can take a few seconds for a host to be added to a cluster.  We may observe a red 'Alert' state in the UI before it changes to a green 'Up' state.

Create a VM on Second Hypervisor

  • Here we repeat the process of creating a CentOS 6.5 64-bit minimal server under the same tenant account.  We do not need to enable ingress/egress rules at this stage.  The specification is:
    • 1GB RAM
    • 1 GHz vCPU
    • 20GB disk
  • Log in and configure eth0 (as we did for the first VM)

Inter VM Communications via the Tenant Specific VLAN

Now that the second VM build has completed, we can check which host it is located on and whether the two machines can communicate with one another.

From the 'Instances' page, click on each VM name and scroll down the 'Details' list to find out which host the VM sits on.  In our case, the machines are located on different hosts.

Log in to each VM via the console proxy icon and attempt to ping the other machine.  This should be possible.
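For example, from the first VM (the target address is a placeholder on the 10.1.1.0/24 tenant network; the real address is shown in the second VM's 'Details' tab and in the output of ifconfig):

    # From VM 1, ping VM 2 on its tenant-network address:
    ping -c 3 10.1.1.x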

At this point it is useful to view what is going on via the XenCenter UI tool, which can provide more information than the Cloudstack UI.  From the client machine that has direct access to 192.168.4.0/24, we connect to the cluster (which just means connecting to one of the servers in the cluster).  Here is an image that shows the VLAN used for this tenant-specific, inter-host communication.

  • Click on the 'Networking' tab of each hypervisor.  The tenant-specific VLAN is visible on both hypervisors:

Note the highlighted VLAN 141 (the VLAN number is appended to the end of the long network ID).  This VLAN number has been randomly assigned from the zone VLAN pool that we configured when we added the zone.  Also note that the VLAN is available on NIC1, which is our 'Cloud-Public' interface.  This is functioning exactly as we anticipated.
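The same VLAN can be confirmed from the XenServer command line. A sketch, run on either hypervisor (the tag 141 is specific to this deployment and will differ in yours):

    # List the PIFs tagged with the tenant VLAN on this host:
    xe pif-list VLAN=141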

This concludes Part 3 of this Apache Cloudstack installation series.  I'll start another series shortly on configuration.  Apache Cloudstack has many useful features that I will endeavour to cover in the coming weeks:

  • Advanced networking functionality such as Virtual Private Clouds (VPC), VPN and GRE tunneling

  • Virtual router redundancy and load balancing
  • VM high availability
  • API access to manage virtual machines
  • Add S3 or SWIFT API-compliant object store for secondary storage
  • Define project space and users within tenant accounts to allow grouping of related work
  • Cloud usage/metering
  • Proxied access to VM console from the Internet