I am pleased to announce the 11th release of OpenStack, codenamed Kilo. This release is a turning point for the open source project, with contributions from nearly 1,500 developers and 169 organizations worldwide.
Nova – Kilo offers a new API, v2.1. Microversions will provide reliable, strongly validated API definitions for the future, which means it will be easier to write long-lived applications against compute functionality. Very important for operators are live upgrades when a database schema change is required, in addition to better support for changing the resources of a running VM.
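As a sketch, a client opts into a Nova microversion per request with a header; the endpoint and token below are placeholders for a real deployment:

```shell
# Request the servers list, pinning the Nova API microversion.
# "controller" and $TOKEN are placeholders for your deployment.
curl -s http://controller:8774/v2.1/servers \
  -H "X-Auth-Token: $TOKEN" \
  -H "X-OpenStack-Nova-API-Version: 2.1"
```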
Cinder – Major updates to backend storage drivers, which now number 70 supported storage systems. Also, users can now attach a volume to multiple compute instances: this opens up options for using shared filesystems or for migrating traditional HA environments, say those based on Corosync or Veritas Cluster.
Swift – Big news is erasure coding, providing efficient and cost-effective storage. Kilo also offers improvements to global cluster replication and storage policy metrics.
Neutron – Load-balancing-as-a-service is even more mature. This release puts great focus on NFV support, e.g. port security for Open vSwitch, VLAN transparency and MTU API extensions.
Keystone – now offers identity federation to support hybrid workloads in multi-cloud environments.
Kilo delivers the first full release of the Ironic bare-metal provisioning project, with support for existing VM workloads and for the adoption of emerging technologies like Linux containers, platform-as-a-service and NFV. Ironic is already used in production environments, including Rackspace Private Cloud and HP Helion.
Learn more about OpenStack
Are you still confused about what OpenStack is and what advantages it can bring to your company? Download my OpenStack ebook and donate to help children in Africa.
My friend Francesco Vollero is working on the Red Hat OpenStack installer and he started chasing me for a quick test of his code. As most of my friends have now been hired by Red Hat engineering, I was curious to see how my previous employer is doing on OpenStack.
Officially the RDO acronym doesn't mean anything; I believe it is sort of "Red Hat OpenStack". RDO is a "community" effort to bring OpenStack to Fedora, CentOS and RHEL. Well, it is much more than a community effort: it is basically a beta test of the next Red Hat "enterprise" flavor of OpenStack. Red Hat has a strong commitment to OpenStack and has hired a large number of people for it. What I learned from the tests:
Ubuntu is still ahead of Red Hat in field experience, as most OpenStack development is done on Ubuntu, but they're doing a great job and quickly catching up.
Although using Python to generate Ruby sounds crazy, PackStack does the job quite nicely and you can have a nice playground in basically no time. The answer file is a great idea when you have to replicate environments, say for professional services.
Having a monitoring suite (Nagios) deployed along with the infrastructure is a nice plus.
GlusterFS is not that bad, and I decided to go for it as a redundant NFS system at home and on my colocated servers.
Open vSwitch is something I always wished to explore and, once understood, along with IP namespaces and IP VPNs it opens up a world of new possibilities, even when OpenStack is not involved.
Going back to my RDO experience: it all started as a small test on a Saturday evening, but it slowly became much more than a quick test. I ended up simulating three scenarios:
Everything in one box, the simplest installation
Two servers: one running all the services, plus one extra compute node
Two servers, GlusterFS Backend and VLAN-based Neutron
The boxes are HP N40L microservers "overclocked" to 16GB of RAM each, with two 1TB disks and two NICs. The N40L's processor is an AMD chip that sits somewhere between an Atom and an i3. The switch is an old Cisco Catalyst 10/100, but with VLANs and fully manageable. I knew from the beginning that performance wasn't the key requirement, but that's what I can afford right now.
What comes with RDO
RDO is a yum repository hosted on the Fedora project and works with Fedora and RHEL 6 or 7.
The installer and required package is PackStack (openstack-packstack). PackStack is a command-line Python utility that uses Puppet modules to deploy the various parts of OpenStack on multiple pre-installed servers automatically. Basically it contains a Puppet master and some Puppet modules; the Python code then generates temporary Puppet manifests that create or modify the infrastructure. At the end of the installation, PackStack creates an "answer file" that can be modified and re-applied to reconfigure the infrastructure.
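As a minimal sketch of that workflow (nothing is deployed until the file is applied):

```shell
# Generate an answer file without touching any host
packstack --gen-answer-file=answers.txt

# Edit the options, then apply; re-running with the same file
# reconfigures the same hosts
packstack --answer-file=answers.txt
```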
What does PackStack install? Well, a very large bunch of things:
Cinder (supported backends: LVM, Gluster, NFS)
Neutron (defaults to Open vSwitch; VLAN ranges and tunnels supported)
Heat (CloudWatch APIs available)
The Tempest test suite is also available
Test 1: everything in one box
That's a massive amount of software to fit in a single box. As I expected to need a lot of re-installations, I decided to set up a PXE server and a kickstart file for the OpenStack automation. The kickstart partitions the disks and installs a bare-minimum CentOS 6.5 with the EPEL and RDO repositories, plus the openstack-packstack package. For OpenStack, I decided to try the latest Icehouse release. I planned a short run on Saturday, before going out to the theater with my wife, just to understand what was missing in the plan.
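The repository part of such a kickstart %post section could look like this (the release-RPM URL is the one RDO used at the time; check the current one for your distribution):

```shell
# Enable EPEL and the RDO repository, then pull in the installer
yum install -y epel-release
yum install -y https://rdo.fedorapeople.org/rdo-release.rpm
yum install -y openstack-packstack
```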
I ran the kickstart, installed CentOS 6.5, then ran packstack --allinone and went for a shower. As the option name suggests, it installs everything in a single box. Well, by the time I finished my shower, the system was already installed. After a reboot (just out of laziness) I had a full OpenStack up and running. Of course the performance was very poor, but hey, it was all in a single box with all the software installed, and the box is really low-end.
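For the record, the whole test boils down to one command plus the generated credentials file:

```shell
# Deploy every OpenStack service on the local host
packstack --allinone

# PackStack drops the admin credentials in the home directory
source ~/keystonerc_admin
nova list
```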
Test 2: two servers, one running all the services, plus one extra compute node
Over the next few days I decided to try again, but with two boxes and less software. As I mentioned, PackStack had created an answer file, so I edited it and removed Ceilometer, Heat, Nagios and Tempest, plus I added another node to CONFIG_NOVA_COMPUTE_HOSTS to have two compute nodes.
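Those edits are easy to script. A sketch against a stripped-down stand-in for the answer file (the two keys are real PackStack options; the IPs are made up):

```shell
# Tiny stand-in for the real answer file, only the keys we touch
cat > answers.txt <<'EOF'
CONFIG_NAGIOS_INSTALL=y
CONFIG_NOVA_COMPUTE_HOSTS=192.168.0.10
EOF

# Drop Nagios and declare a second compute node
sed -i 's/^CONFIG_NAGIOS_INSTALL=.*/CONFIG_NAGIOS_INSTALL=n/' answers.txt
sed -i 's/^CONFIG_NOVA_COMPUTE_HOSTS=.*/CONFIG_NOVA_COMPUTE_HOSTS=192.168.0.10,192.168.0.11/' answers.txt

cat answers.txt
```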
Just because I wanted a clean environment, I PXE'd the nodes again and redid everything from scratch. Note that PackStack is able to modify an existing installation. That was basically it, and it worked quite smoothly.
Test 3: two servers, GlusterFS Backend and VLAN-based Neutron
It was all too easy, so I decided to add GlusterFS and to have Neutron handle the Cisco VLANs using Open vSwitch. Preparing GlusterFS wasn't that complicated: I created a replica across the two 1TB disks of each box, split into three different volumes for Nova, Cinder and Glance. This way I created disk redundancy for the OpenStack infrastructure. I discovered that GlusterFS can natively export over NFS without any specific option, but I decided to go for the "native" FUSE+GlusterFS mount. As I needed to specify the IP address of a cluster member, I decided to set up a virtual IP among the nodes. I should have used Pacemaker with a floating IP, but I opted for a quicker VRRP setup with keepalived.
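The Gluster side, sketched with illustrative host names and brick paths (the VIP in the mount is the keepalived-managed address):

```shell
# Join the second node and create one replicated volume per service
gluster peer probe node2
for vol in nova cinder glance; do
    gluster volume create $vol replica 2 \
        node1:/bricks/$vol node2:/bricks/$vol
    gluster volume start $vol
done

# Mount through the native FUSE client against the virtual IP
mount -t glusterfs 192.168.0.100:/nova /var/lib/nova/instances
```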
To apply this to OpenStack, I modified the PackStack answer file to accommodate the right backend and mounts. As the answer file only covers Cinder, I had to modify Glance and Nova manually in their INI files.
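A sketch of those manual edits using openstack-config from openstack-utils (the mount points are illustrative, and the option names are the ones I'd expect on an Icehouse install; verify against your config files):

```shell
# Point the Glance image store at the Gluster mount
openstack-config --set /etc/glance/glance-api.conf DEFAULT \
    filesystem_store_datadir /mnt/gluster/glance

# Put Nova's instance disks on the shared mount as well
openstack-config --set /etc/nova/nova.conf DEFAULT \
    instances_path /mnt/gluster/nova

service openstack-glance-api restart
service openstack-nova-compute restart
```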
The Neutron part was probably the hardest for me, as it required many more skills. Coming from the networking side, I decided to take a much deeper dive into Open vSwitch, covering the new IP namespaces and filtering capabilities. Another part that needed some investigation was how to chain bridges and external and internal ports. Once I understood the logic behind it and set up the basic port information in the standard CentOS network configuration, it was just a matter of modifying the Neutron section in PackStack's answer file.
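The bridge-chaining part can be sketched like this (br-ex and eth1 follow the usual naming convention; your interface names will differ):

```shell
# External bridge carrying the physical uplink; Neutron wires
# the integration bridge (br-int) to it with patch ports
ovs-vsctl add-br br-ex
ovs-vsctl add-port br-ex eth1

# Inspect what Neutron built: bridges, patch ports, namespaces
ovs-vsctl show
ip netns list
```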
Next steps are to start playing around with two topics:
Keystone, integrating SecurePass as an authentication schema
Open vSwitch, finding out best practices for configuring networking in an enterprise environment
I am glad to celebrate today more than ten thousand views of my publication “Comparing IaaS: VMware vs OpenStack vs Google’s Ganeti”. It's an astonishing result and I can't thank enough all the readers and fans who have shared it on Twitter and Facebook.
One of the frequently asked questions was:
why not compare with OpenNebula? What are the differences?
Comparing it to something that I had heard of but never tested sounded unprofessional. So, to celebrate my ten thousand visits, I decided to set up a full OpenNebula architecture.
The testing environment has two HP DL380s, plus one Dell R210 as the management node. Every machine runs CentOS 6.5. I decided to make things slightly more complicated by using GlusterFS as distributed storage inside the compute nodes themselves, to leverage the nodes' internal disks. These are the same nodes I used for testing Ganeti.
First of all, let me tell you that what I had heard is confirmed: OpenNebula is a great project. It's a "mini-OpenStack" able to handle a lot of the requirements of those ISPs and private datacenters that want to adopt a cloud environment.
Comparing IaaS (including OpenNebula)
So, what is my opinion after these tests? It doesn't change much, after all…
OpenStack is becoming a buzzword: basically every vendor is jumping in, and there was a need to clarify some details. OpenStack targets large installations, which basically means large ISPs or very large corporations with multiple datacenters.
VMware has the advantage that ESXi fits even a single server, yet can scale up to 32 hosts. For those IT managers who need certified software and support, and who still have enough budget, VMware is a good solution for their enterprise.
OpenNebula shares OpenStack's philosophy. It requires a lot less hardware than OpenStack, but takes the same approach to the dynamic lifecycle of VMs.
One of the requirements, especially for ISPs, is migration from an existing virtualization or VPS solution. Here comes the issue when embracing a cloud infrastructure, be it OpenStack or OpenNebula: while the cloud uses virtualization, the management of the virtual machines is very different.
In a cloud solution, the administrators need to set up images (or templates) that will be the base for virtual machines. If you want to migrate an existing VM to the cloud, you first need to convert it into a template, then instantiate a virtual machine from that template.
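With the OpenStack CLI tools of the time, migrating a single existing KVM guest would look roughly like this (file names and flavor are made up for illustration):

```shell
# Convert the legacy disk, upload it as an image, boot from it
qemu-img convert -O qcow2 legacy-vm.img legacy-vm.qcow2
glance image-create --name legacy-vm --disk-format qcow2 \
    --container-format bare --file legacy-vm.qcow2
nova boot --image legacy-vm --flavor m1.small legacy-vm-01
```

Multiply these three steps by thousands of heterogeneous VMs and the scale of the migration problem becomes obvious.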
While this can be easy enough for a few virtual machines, with extremely large service providers we are talking about thousands of images and VMs (1&1 and Deutsche Telekom, just to name two). This is not a process that can easily be automated with a one-size-fits-all solution.
Ganeti has a different approach: while still being a virtualization solution, it offers some flexibility typical of cloud infrastructures, such as fast deployment of virtual machines and private networks for customers. That's why Ganeti was chosen for our SecureData.
Back from the Cisco Live event in Milan, where I had the honor to speak on behalf of Canonical about the general availability of the Cisco Nexus 1000v on Ubuntu OpenStack.
The Nexus 1000v virtual switch, previously available on VMware only, is now available on an open source platform. While rumors have circulated for a while about being able to run the 1000v under Linux KVM, the big news here is the virtual ethernet module (the equivalent of the switching engine), which integrates into Neutron, OpenStack's Software Defined Networking (SDN) layer. The Nexus 1000v retains a traditional Cisco supervisor module, so network engineers can use their familiar IOS interface to configure complex OpenStack SDN scenarios.
The other big news is that Canonical wrote a Juju charm for deploying the Cisco N1000v. Juju is Ubuntu's orchestration tool, able to deploy both the infrastructure and the applications. The usual way we (as Canonical) do the installation is by combining the power of Juju and MAAS (Metal-as-a-Service) to deploy the full OpenStack installation, including the storage and networking components.
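I can't publish the exact invocation here, but the Juju side of such a deployment follows the usual pattern (the n1000v charm name below is hypothetical; the others are standard OpenStack charms):

```shell
# Bring up a Juju environment on MAAS-provisioned metal,
# then deploy services and relate them to each other
juju bootstrap
juju deploy nova-compute
juju deploy neutron-api
juju deploy cisco-n1000v   # hypothetical charm name
juju add-relation neutron-api cisco-n1000v
```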
This is a long-awaited feature for Internet Service Providers, as they mostly have Cisco infrastructure at their core, and they can't wait to see how it works in the field.
In the meantime I had the chance to walk around the event. The first interesting thing to notice is how big brands have become the new fair organizers. At least in Italy, trade fairs are becoming less and less popular, and big brands alone cannot draw a crowd to a generic event. It's far more convenient for them to run an event themselves and group commercial partners together for sponsorship.
It's actually great to see how open source applications are now embedded in new devices, proof of how successful collaborative programming is at the enterprise level. I'm a spare-time, long-time contributor to some open source tools, and it's great to see how the value of sharing knowledge has increased across generations of programmers, and also how strongly this value is perceived by top management.
Last but not least, it's amazing that big brands are leaving the lock-in concept behind for a more open and interoperable approach. Since the beginning I decided that SecurePass had to be a sustainable, future-proof identity management solution that should have nothing to do with the traditional solutions on the market.
It's time to sit down and think about the past year. 2013 was definitely one of the busiest of my career: I've never traveled so much, mostly among telcos, ISPs and the biggest companies of Germany, the UK and Switzerland. But it was also one of the worst years for security, especially when it comes to passwords.
Crackers were able to get the encrypted passwords of approximately 38 million active Adobe users. Evernote had a security breach with information stolen from its user base, forcing it to reset all passwords. And more than 2 million accounts on popular sites such as Google, Yahoo, Twitter, Facebook and LinkedIn were compromised after malware captured login credentials from users worldwide. And these are just some highlights of the year in the consumer space.
Just imagine what happened, or could potentially happen, in a corporate environment, and how many trade secrets, inventions and pieces of personal confidential information are at risk. Passwords are definitively over and cannot be considered a secure method to protect information in a cloud world. That's why I consider 2014 the year of Cloud IAM (Identity & Access Management).
What am I doing to help?
When designing OpenStack architectures for Canonical, I am very conscious of implementing security as it should be. Most of the world's biggest hosting and housing providers have issues with misuse of their infrastructure. The biggest issue is that they cannot control and enforce security in their guests, and Gigabits, or even Terabits, are wasted on botnets and coordinated attacks.
I am driving SecurePass to handle groups and access policies for web-based applications, as well as over RADIUS and LDAP. Moreover, during 2014 we will release a beta of the public APIs with the same security and segregation as the existing protocols. Through the APIs, customers and partners can build lots of new applications, provisioning tools and more.
IBM Labs, with my cooperation, created a SecurePass plugin for WebSphere applications. With this partnership, I helped protect two of the largest financial companies in Europe, helping them reduce costs while increasing protection and confidence in their extranets and in applications accessed by third parties. Public references will be published in 2014 by both IBM and GARL.
I am cooperating with Google's engineering team to enhance Ganeti, Google's virtualization platform that is used to manage Google's internal corporate network. GARL's SecureData is the result of our cooperation, bringing the reliability of Ganeti together with the protection of SecurePass to help companies reduce the costs of their VMware installations. SecureData is available on Debian, Ubuntu, CentOS and Red Hat Enterprise Linux (RHEL). In early 2014 it will be installed in the labs of a popular Italian telco.
GARL traditionally offered Vulnerability Assessments and Penetration Tests. These audits usually target banks and ISPs, but there are specific cases in which even medium-sized companies need a security audit (e.g. healthcare, factories, …). GARL introduced EasyAudit to its offering, an "audit package" in cooperation with ISGroup, headed by the well-known and respected Francesco Ongaro, that mixes security with affordability. Francesco and I were the auditors who acted on behalf of Symantec when that well-known firm used to deliver VA and penetration tests in Europe, so who better than us to deliver these services?
As always, I'm trying to write papers to help people understand how important security and quality are during a project. Most of the time it's not a waste of time: it can take less than you expect (or less than other companies are trying to sell you), and in the long run you will save time, money and… headaches!
Let me publicly thank my wife Maria: she supports me in my decisions and understands the massive amount of traveling I am doing. A big thank you goes to Donatella, my right-hand woman and invaluable assistant, as well as to all my staff at GARL.