RDO OpenStack Test Drive

My friend Francesco Vollero is working on the Red Hat OpenStack installer and he started hunting me down for a quick test of his code. As most of my friends have now been hired into Red Hat engineering, I was curious to see how my previous employer is doing on OpenStack.

Let's explain what RDO is

Officially the RDO acronym doesn't mean anything; I believe it is sort of "ReD hat Openstack". RDO is a "community" effort to bring OpenStack to Fedora, CentOS and RHEL. Well, it is much more than a community effort: it is basically a beta test of the next Red Hat "enterprise" flavor of OpenStack. Red Hat has a strong commitment to OpenStack and has hired a large number of people. What I learned from the tests:

  • Ubuntu is still ahead of Red Hat in field experience, as most of the development is done on Ubuntu, but they're doing a great job and quickly catching up.
  • Although using Python to generate Ruby sounds crazy, PackStack does the job quite nicely and you can have a nice playground in basically no time. The answer file is a great idea when you have to replicate environments, say in professional services.
  • Also having a monitoring suite (Nagios) deployed with the infrastructure is a nice plus.
  • GlusterFS is not that bad, and I decided to adopt it as a redundant NFS system at home and on my colocated servers.
  • Open vSwitch is something I had always wished to explore and, once understood, along with IP namespaces and IP VPNs it opens up a world of new possibilities, even when OpenStack is not involved.

Going back to my RDO experience: it all started as a small test on a Saturday evening, but it slowly became much more than a quick test. I ended up simulating three scenarios:

  1. Everything in one box, the simplest installation
  2. Two servers, one holding all the packages plus an extra compute node
  3. Two servers, GlusterFS Backend and VLAN-based Neutron

The boxes are HP N40L microservers "over-clocked" to 16 GB of RAM each, with two 1 TB disks and two NICs. The N40L's processor is an AMD that sits somewhere between an Atom and an i3. The switch is an old Cisco Catalyst 10/100, but with VLANs and fully manageable. I knew from the beginning that performance wasn't the key requirement, but that's what I can afford right now.

What comes with RDO

RDO is a yum repository hosted on the Fedora project and works with Fedora and RHEL 6 or 7.
The installer and the only required package is PackStack (openstack-packstack). PackStack is a command-line Python utility that uses Puppet modules to deploy the various parts of OpenStack on multiple pre-installed servers automatically. Basically it contains a Puppet master and some Puppet modules; the Python code then generates the temporary Puppet manifests that create or modify the infrastructure. At the end of the installation, PackStack writes an "answer file" that can be modified and re-applied to reconfigure the infrastructure.
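
For repeatable runs, the answer-file workflow looks roughly like this (a minimal sketch; the options are the standard packstack ones):

    # generate an answer file with defaults, tweak it, then apply it
    packstack --gen-answer-file=answers.txt
    vi answers.txt
    packstack --answer-file=answers.txt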

The funny thing is that PackStack is a Red Hat utility and is maintained on … Launchpad 🙂
https://launchpad.net/packstack

What does PackStack install? Well, a very large bunch of things:

  • MySQL
  • Glance
  • Cinder (supported backends: lvm, gluster, nfs)
  • Nova
  • Horizon
  • Neutron (defaults to Open vSwitch; VLAN ranges and tunnels supported)
  • Swift
  • Keystone
  • Ceilometer
  • Heat (CloudWatch APIs available)
  • Nagios
  • Qpid
  • The Tempest test suite is also available

Test 1: everything in one box

That's a massive amount of software to fit in a single box. As I expected a lot of re-installations, I decided to set up a PXE server and a kickstart file to automate the OpenStack base install. The kickstart partitions the disks and installs a bare-minimum CentOS 6.5 with the EPEL and RDO repositories, plus the openstack-packstack package. For OpenStack itself, I decided to try the latest Icehouse release. I planned a short run on Saturday, before going out to the theater with my wife, just to understand what was missing in the plan.
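
The interesting bits of the kickstart look roughly like this (a sketch: the mirror and release-RPM URLs are illustrative and should be checked against the current RDO instructions):

    url --url=http://mirror.centos.org/centos/6.5/os/x86_64/
    clearpart --all --initlabel
    autopart
    %post
    # EPEL and RDO repositories, then the installer package
    yum -y install http://dl.fedoraproject.org/pub/epel/6/x86_64/epel-release-6-8.noarch.rpm
    yum -y install http://rdo.fedorapeople.org/rdo-release.rpm
    yum -y install openstack-packstack
    %end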

I ran the kickstart, installed CentOS 6.5, then ran packstack --allinone and went for a shower. As the option name suggests, it installs everything in a single box. Well, by the time I finished my shower, the system was already installed. After a reboot (just out of laziness) I had a full OpenStack up and running. Of course performance was very poor, but hey, it was all in a single box with all the software installed, and the box is really low-end.
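
For reference, this is all it takes (a minimal sketch; packstack drops the admin credentials in /root/keystonerc_admin, and the service listing is just one quick way to verify the result):

    packstack --allinone
    # once it finishes, check that the services are up
    source /root/keystonerc_admin
    nova service-list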

Test 2: two servers, one holding all the packages plus an extra compute node

Over the next few days I decided to try again, but with two boxes and less software. As I mentioned, packstack creates an answer file, so I edited it to remove Ceilometer, Heat, Nagios and Tempest, and I added another node to CONFIG_NOVA_COMPUTE_HOSTS to have two compute nodes.
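
The relevant answer-file changes look something like this (a sketch: the keys follow the packstack answer-file naming, the IPs are made-up examples):

    CONFIG_CEILOMETER_INSTALL=n
    CONFIG_HEAT_INSTALL=n
    CONFIG_NAGIOS_INSTALL=n
    CONFIG_PROVISION_TEMPEST=n
    # two compute nodes instead of one
    CONFIG_NOVA_COMPUTE_HOSTS=192.168.0.10,192.168.0.11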

Since I wanted a clean environment, I PXE'd the nodes again and started from scratch; note, however, that packstack is able to modify an existing installation. That was basically it, and it worked quite smoothly.

Test 3: two servers, GlusterFS Backend and VLAN-based Neutron

That was too easy for me, so I decided to add GlusterFS and Neutron, handling the Cisco VLANs with Open vSwitch. Preparing GlusterFS wasn't that complicated: I created a replica across the two 1 TB disks, one per box, split into three different volumes for Nova, Cinder and Glance. In this way I created disk redundancy for the OpenStack infrastructure. I discovered that GlusterFS can natively export NFS without any specific option, but I decided to go for the "native" FUSE GlusterFS mount. As I needed to specify the IP address of a member node, I decided to set up a virtual IP among the nodes. I should probably have used Pacemaker with a floating IP, but I opted for the "quicker" VRRP with keepalived.
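
On the Gluster side, this boils down to a few commands plus a tiny keepalived configuration (a sketch: host names, brick paths and the VIP are invented for illustration):

    # replicated volumes across the two boxes
    gluster peer probe node2
    gluster volume create nova   replica 2 node1:/bricks/nova   node2:/bricks/nova
    gluster volume create cinder replica 2 node1:/bricks/cinder node2:/bricks/cinder
    gluster volume create glance replica 2 node1:/bricks/glance node2:/bricks/glance
    gluster volume start nova; gluster volume start cinder; gluster volume start glance

    # /etc/keepalived/keepalived.conf: VRRP floating IP for the Gluster endpoint
    vrrp_instance GLUSTER {
        state MASTER           # BACKUP with a lower priority on the other node
        interface eth0
        virtual_router_id 51
        priority 100
        virtual_ipaddress {
            192.168.0.100/24
        }
    }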

To apply this to OpenStack, I modified the packstack answer file to point at the right backend and mounts. This only covers Cinder, though, so I had to modify Glance and Nova manually in their INI files.
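
Roughly, the Cinder part in the answer file and the manual mounts for Glance and Nova look like this (a sketch: the keys are the packstack Gluster ones, while the VIP and volume names come from my example above):

    # answer file: Gluster-backed Cinder
    CONFIG_CINDER_BACKEND=gluster
    CONFIG_CINDER_GLUSTER_MOUNTS=192.168.0.100:/cinder

    # Glance and Nova: mount the Gluster volumes where the services keep their data
    mount -t glusterfs 192.168.0.100:/glance /var/lib/glance/images
    mount -t glusterfs 192.168.0.100:/nova   /var/lib/nova/instances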

The Neutron part was probably the hardest for me, as it required many more skills. Since I come from a networking background, I decided to take a much deeper dive into Open vSwitch, covering the new IP namespaces and its filtering capabilities. Another part that needed some investigation was how to chain bridges, external and internal ports. Once I understood the logic behind it, and set up the basic port information in the standard CentOS network configuration, it was just a matter of modifying the Neutron section in packstack's answer file.
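
For the record, the VLAN-related knobs in the answer file look roughly like this (a sketch: the keys follow the Icehouse-era packstack naming, and the ranges and interfaces are examples matching my Catalyst setup):

    CONFIG_NEUTRON_ML2_TYPE_DRIVERS=vlan
    CONFIG_NEUTRON_ML2_TENANT_NETWORK_TYPES=vlan
    CONFIG_NEUTRON_ML2_VLAN_RANGES=physnet1:100:199
    CONFIG_NEUTRON_OVS_BRIDGE_MAPPINGS=physnet1:br-eth1
    CONFIG_NEUTRON_OVS_BRIDGE_IFACES=br-eth1:eth1

    # sanity check of the resulting Open vSwitch topology
    ovs-vsctl show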

Next steps

The next step is to start playing around with two topics:

  • Keystone, to start integrating SecurePass as an authentication scheme
  • Open vSwitch, to find out the best practices for configuring networking in an enterprise environment