The Cloud Foundry ecosystem has been blowing my mind for a long time, and I think it has genuinely disrupted IT by letting us focus on applications as the essential unit of a business process. There is no need to worry about all those painful things like scalability, multi-tenancy and application health. Cloud Foundry will do that nasty job for us, and much more. It can be considered an operating system for the cloud.

While I was investigating Cloud Foundry, I also discovered its agnostic nature, which enables it to be deployed easily on AWS, vSphere or OpenStack. That is how I got motivated to acquire one of those cheap Dell rack servers on eBay and start experimenting. I opted for XenServer 6.2 as the hypervisor. Unfortunately, the documentation about setting up the OpenStack compute node on XenServer is rather incomplete, outdated and very hard to follow if you are doing it for the first time. So, let's see how to proceed step by step and prepare our OpenStack environment for a Cloud Foundry deployment. I assume you have already successfully installed and configured the controller node.

Installing paravirtualized XenServer domain

The OpenStack compute node needs a paravirtualized virtual machine running on each XenServer instance. A paravirtualized VM basically runs a modified kernel so it can talk directly to the hypervisor API. If CentOS is your distribution of choice, then the easiest way to set up a PV virtual machine is by using this kickstart file.

Let's first create the VM. Please note that we have to use the Red Hat 6 template, even though we are going to install the CentOS 7 distribution. On XenServer 6.5 this is not necessary.

TEMPLATE_UUID=$(xe template-list | grep -B1 'name-label.*Red Hat.* 6.*64-bit' | awk -F: '/uuid/{print $2}'| tr -d " ")
VMUUID=$(xe vm-install new-name-label="compute" template=${TEMPLATE_UUID})
xe vm-param-set uuid=$VMUUID other-config:install-repository=http://mirror.centos.org/centos/7/os/x86_64
xe vm-param-set uuid=$VMUUID PV-args="ks=https://gist.githubusercontent.com/bhnedo/4648499f5680207e86ec/raw/4239fd8d0e10f7f2759d600b28b52f1744d9b5ad/kickstart-centos-minimal.cfg ksdevice=eth0"

Find out the network UUID for the bridge that has access to the Internet. Note that a Xen bridge is created for every physical network adapter on your machine. Get a list of the XenServer networks and store the UUID of the appropriate bridge (in most cases it will be xenbr0).

xe network-list
NETUUID=$(xe network-list bridge=xenbr0 --minimal)

Create a virtual network interface (VIF) and attach it to the virtual machine and network. Start the VM and watch the installation progress from XenCenter.

xe vif-create vm-uuid=$VMUUID network-uuid=$NETUUID mac=random device=0
xe vm-start uuid=$VMUUID

When the installation process is done, export the VM so we have a base image to reuse for the storage node.

xe vm-export uuid=$VMUUID filename=openstack-juno-centos7.xva

Notice: PyGrub doesn't support the GRUB 2 boot loader. You will need to apply the following patch in order to boot the VM properly. This issue has been fixed in the XenServer 6.5 release.

Installing and configuring compute service

Once you have a running PV guest, the next step is to install the OpenStack plugins for the XenServer Dom0. These let the compute node communicate with the Xen XAPI in order to provision virtual machines, set up networking, storage, etc. Download the latest OpenStack Juno branch, unzip it and copy the contents of the plugins/xenserver/xenapi/etc/xapi.d/plugins directory to /etc/xapi.d/plugins. Also make sure the copied files are executable.

cd /tmp
wget https://github.com/openstack/nova/archive/stable/juno.zip
unzip juno.zip
cp /tmp/nova-stable-juno/plugins/xenserver/xenapi/etc/xapi.d/plugins/* /etc/xapi.d/plugins
chmod a+x /etc/xapi.d/plugins/*

Log into your newly installed compute node (the default password for the root user is changeit) and run these commands to enable the OpenStack Juno repository and upgrade the packages on your host.

yum install http://rdo.fedorapeople.org/openstack-juno/rdo-release-juno.rpm
yum upgrade
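
To see whether the upgrade actually pulled in a newer kernel, you can compare the running kernel with the newest installed one (just a convenience check):

uname -r
rpm -q --last kernel | head -1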

If the kernel was upgraded, you will probably need to reboot the machine in order to activate the new kernel. Now install the required packages for the compute hypervisor components and nova-network legacy networking.

yum install openstack-nova-compute sysfsutils
yum install openstack-nova-network openstack-nova-api

The XenAPI Python package is also required, so install it using the pip package manager.

easy_install pip
pip install xenapi
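
As a quick sanity check (assuming the package exposes the XenAPI module that nova's Xen driver imports), verify it can be loaded:

python -c "import XenAPI; print(XenAPI.__file__)"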

I preferred not to set up another network node for Neutron, even though legacy networking is deprecated in favor of that component. If you need advanced features like VLANs, virtual routing, switching, tenant isolation and so on, follow these docs on how to add Neutron networking.

Now we need to edit the /etc/nova/nova.conf configuration file.

  1. Message broker settings

    Configure RabbitMQ messaging system in the [DEFAULT] section:

     [DEFAULT]
     rpc_backend = rabbit
     rabbit_host = controller
     rabbit_userid = RABBIT_USER
     rabbit_password = RABBIT_PASSWORD
     			
  2. Keystone authentication

    Modify [DEFAULT] and [keystone_authtoken] sections to configure authentication service access:

     [DEFAULT]
     auth_strategy = keystone
    
     [keystone_authtoken]
     auth_uri = http://controller:5000/v2.0
     identity_uri = http://controller:35357
     admin_tenant_name = service
     admin_user = nova
     admin_password = NOVA_PASSWORD
    		  
  3. Network configuration

    Before proceeding with network parameters, you will need to create a second VIF and attach it to the compute VM.

     $ xe vif-create vm-uuid=$VMUUID network-uuid=$NETUUID mac=random device=1
     $ xe vm-start uuid=$VMUUID
    		

    This network interface will be connected to the Linux bridge and will at the same time act as the default gateway for all VM instances spawned inside OpenStack. Traffic forwarding between tenants is done at the L2 level through this bridge. You should end up with the following interfaces, with xenbr0 up, after creating the network in OpenStack (the nova network-create command for this is shown right after this list).

     $ ifconfig
     eth0: flags=4163<UP,BROADCAST,RUNNING,MULTICAST> mtu 1500
            inet 192.168.1.106  netmask 255.255.255.0  broadcast 192.168.1.255
            inet6 fe80::90b3:8fff:fe2c:1d09  prefixlen 64  scopeid 0x20
            ether 92:b3:8f:2c:1d:09  txqueuelen 1000  (Ethernet)
            RX packets 3016  bytes 1189159 (1.1 MiB)
            RX errors 0  dropped 0  overruns 0  frame 0
            TX packets 2812  bytes 636656 (621.7 KiB)
            TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0
    
     eth1: flags=4163<UP,BROADCAST,RUNNING,MULTICAST> mtu 1500
            inet6 fe80::44ab:daff:fe21:46d4  prefixlen 64  scopeid 0x20
            ether 46:ab:da:21:46:d4  txqueuelen 1000  (Ethernet)
            RX packets 611  bytes 111213 (108.6 KiB)
            RX errors 0  dropped 0  overruns 0  frame 0
            TX packets 38  bytes 4943 (4.8 KiB)
            TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0
    
     xenbr0: flags=4163<UP,BROADCAST,RUNNING,MULTICAST> mtu 1500
            inet 192.168.1.50  netmask 255.255.255.0  broadcast 192.168.1.255
            inet6 fe80::4034:39ff:fecd:b9b3  prefixlen 64  scopeid 0x20
            ether 46:ab:da:21:46:d4  txqueuelen 0  (Ethernet)
            RX packets 89  bytes 11222 (10.9 KiB)
            RX errors 0  dropped 0  overruns 0  frame 0
            TX packets 28  bytes 3967 (3.8 KiB)
            TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0
    
     $ brctl show
     bridge name     bridge id               STP enabled                  interfaces
     xenbr0          8000.46abda2146d4          no                         eth1
    		

    In the [DEFAULT] section you will need to set these properties:

     [DEFAULT]
     network_api_class = nova.network.api.API
     security_group_api = nova
     network_manager = nova.network.manager.FlatDHCPManager
     allow_same_net_traffic = True
     multi_host = True
     send_arp_for_ha = True
     share_dhcp_address = True
     force_dhcp_release = True
     flat_network_bridge = xenbr0
     flat_interface = eth1
     public_interface = eth0
    
     my_ip = MANAGEMENT_INTERFACE_IP
     firewall_driver = nova.virt.xenapi.firewall.Dom0IptablesFirewallDriver
    			
  4. Hypervisor settings

    Enable the Xen compute driver in the [DEFAULT] section, and set the XAPI endpoint and credentials in the [xenserver] section:

     [DEFAULT]
     compute_driver = xenapi.XenAPIDriver
    
     [xenserver]
     connection_url = http://XENSERVER_MANAGEMENT_IP
     connection_username = XENSERVER_USERNAME
     connection_password = XENSERVER_PASSWORD
     		
  5. Image service and VNC access

    We are almost done. In the [glance] section configure the location of the Image Service. In the [DEFAULT] section enable remote console access. When deploying OpenStack services for the first time, it's a good idea to enable verbose logging too.

     [glance]
     host = controller
    
     [DEFAULT]
     vnc_enabled = True
     vncserver_listen = 0.0.0.0
     vncserver_proxyclient_address = MANAGEMENT_INTERFACE_IP
     novncproxy_base_url = http://controller:6080/vnc_auto.html
    
     verbose = true
     		
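
The flat network referenced above still has to be created once after the services are running. As a rough sketch (run with admin credentials, for example on the controller node; the network name and fixed IP range are only placeholders to adapt to your environment):

nova network-create vmnet --bridge xenbr0 --multi-host T --fixed-range-v4 203.0.113.0/24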

Start the Compute, Network and Metadata API services and configure them to start automatically at boot time.

systemctl enable openstack-nova-compute.service openstack-nova-network.service openstack-nova-metadata-api.service
systemctl start openstack-nova-compute.service openstack-nova-network.service openstack-nova-metadata-api.service

Make sure the nova-compute and nova-network services are up and running by executing this command on the controller node:

 nova service-list
+----+------------------+---------+----------+---------+-------+----------------------------+
| Id | Binary           | Host    | Zone     | Status  | State | Updated_at                 |
+----+------------------+---------+----------+---------+-------+----------------------------+
| 1  | nova-consoleauth | hydra   | internal | enabled | up    | 2015-01-31T17:57:06.000000 |
| 2  | nova-cert        | hydra   | internal | enabled | up    | 2015-01-31T17:57:06.000000 |
| 3  | nova-scheduler   | hydra   | internal | enabled | up    | 2015-01-31T17:57:06.000000 |
| 4  | nova-conductor   | hydra   | internal | enabled | up    | 2015-01-31T17:57:06.000000 |
| 5  | nova-compute     | compute | nova     | enabled | up    | 2015-01-31T17:57:07.000000 |
| 6  | nova-network     | compute | internal | enabled | up    | 2015-01-31T17:57:00.000000 |
+----+------------------+---------+----------+---------+-------+----------------------------+

Installing and configuring storage node

We can start by creating the storage node VM from the base image we exported earlier. Run these commands in the XenServer console:

SRUUID=$(xe sr-list name-label="Local storage" --minimal)
xe vm-import filename=openstack-juno-centos7.xva force=true sr-uuid=$SRUUID preserve=true

You will need to create and attach the VDI where the cinder volumes will be stored. Get the UUID of your newly imported VM, and then run the commands below.
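
Tip: xe vm-import prints the UUID of the imported VM, so you can also capture it directly at import time instead of looking it up afterwards:

VMUUID=$(xe vm-import filename=openstack-juno-centos7.xva force=true sr-uuid=$SRUUID preserve=true)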

VDIUUID=$(xe vdi-create sr-uuid=$SRUUID name-label="cinder" type=user virtual-size=250GiB)
VBDUUID=$(xe vbd-create vm-uuid=$VMUUID vdi-uuid=$VDIUUID device=1)
xe vbd-plug uuid=$VBDUUID

Install the required dependencies and start the LVM metadata service.

yum install lvm2
systemctl enable lvm2-lvmetad.service
systemctl start lvm2-lvmetad.service

Partition the disk in order to create the LVM physical volume and the volume group labeled cinder-volumes. Replace /dev/xvdb1 with your partition.
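
If the attached VDI is still unpartitioned, a single partition spanning the whole disk is enough. A minimal sketch using parted, assuming the new disk shows up as /dev/xvdb inside the guest:

parted -s /dev/xvdb mklabel msdos
parted -s /dev/xvdb mkpart primary 1MiB 100%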

pvcreate /dev/xvdb1
vgcreate cinder-volumes /dev/xvdb1

It is also necessary to tell LVM which block storage devices should be scanned. Edit the /etc/lvm/lvm.conf file and modify the filter setting so it accepts the devices backing the created volume group.

devices {
  ...
  filter = [ "a/xvda/", "a/xvdb/", "r/.*/"]
  ...
}
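
To confirm that the filter still lets LVM see the new volume group, rescan and list it:

pvscan
vgs cinder-volumes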

We are now ready to install and configure Block Storage components and dependencies. I wasn't able to get iSCSI LUNs to work using targetcli, probably because XenServer relies on SCSI initiator utilities. The solution was to use scsi-target-utils instead.

yum install scsi-target-utils
yum install openstack-cinder python-oslo-db MySQL-python

Edit the /etc/cinder/cinder.conf configuration file.

  1. Message broker settings

    Configure RabbitMQ messaging system in the [DEFAULT] section:

     [DEFAULT]
     rpc_backend = rabbit
     rabbit_host = controller
     rabbit_userid = RABBIT_USER
     rabbit_password = RABBIT_PASSWORD
     			
  2. Keystone authentication

    Modify [DEFAULT] and [keystone_authtoken] sections to configure authentication service access:

     [DEFAULT]
     auth_strategy = keystone
    
     [keystone_authtoken]
     auth_uri = http://controller:5000/v2.0
     identity_uri = http://controller:35357
     admin_tenant_name = service
     admin_user = cinder
     admin_password = CINDER_PASSWORD
    		  
  3. Database connection

    In the [database] section change the MySQL connection string:

     [database]
     connection = mysql://cinder:CINDER_DB_PASSWORD@controller/cinder
     		
  4. Image service and management IP address

    In the [DEFAULT] section configure the location of the Image Service. Set the management interface address to match your storage node IP, and enable verbose logging.

     [DEFAULT]
     glance_host = controller
     my_ip = MANAGEMENT_INTERFACE_IP
    
     verbose = true
     		
  5. Target administration service

    In the [DEFAULT] section, configure Cinder to use the tgtadm service for iSCSI storage management:

     [DEFAULT]
     iscsi_helper = tgtadm
    				

    Edit /etc/tgt/targets.conf to include the cinder volumes. This holds information about volume locations, CHAP credentials, IQNs, etc.

     include /etc/cinder/volumes/*
    		

Start the Block Storage and target services and configure them to start automatically at boot time.

systemctl enable openstack-cinder-volume.service tgtd.service
systemctl start openstack-cinder-volume.service tgtd.service
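
Once a volume has been created and attached through Cinder, you can check that tgtd actually exports the corresponding iSCSI target (a quick verification using the scsi-target-utils tooling):

tgtadm --lld iscsi --mode target --op show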

Run this command on the controller node to ensure the Storage service is up and running.

cinder service-list
+------------------+--------+------+---------+-------+----------------------------+
|      Binary      |  Host  | Zone |  Status | State |         Updated_at         |
+------------------+--------+------+---------+-------+----------------------------+
| cinder-scheduler | hydra  | nova | enabled |   up  | 2015-01-31T17:57:44.000000 |
|  cinder-volume   | cinder | nova | enabled |   up  | 2015-01-31T17:57:55.000000 |
+------------------+--------+------+---------+-------+----------------------------+

Tip: If you are able to attach cinder volumes from OpenStack, but the file system creation takes too long or gets stuck, try disabling checksum offload on your storage node's VIF. Use ethtool -K vifz.0 tx off, where z is the domain identifier of the storage VM.
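
The domain identifier can be read from Dom0 with xe, so the tip above can be scripted roughly like this (assuming VMUUID holds the storage VM's UUID):

DOMID=$(xe vm-param-get uuid=$VMUUID param-name=dom-id)
ethtool -K vif${DOMID}.0 tx off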

Validate the OpenStack instance

You should go through these steps to validate your OpenStack environment. In the second part we will see how to deploy Cloud Foundry using BOSH and push our first application.