News

My wife wants the receipt: an analysis of cloud adoption in Europe

My personal goals for 2018 are simplification and the reduction of everyday annoyances. One of them is the proliferation of receipts, which multiply like gremlins: the "collection" of receipts at home had reached an unacceptable level.

A few days ago I installed the app of a well-known supermarket on my wife's phone, since it offers virtual receipts. At checkout, the supermarket immediately generates a PDF, which can be viewed both through the app and on the website.

Even though the app is very easy to use, after a few shopping trips done on her own my wife got angry: "how does this thing work" and "I can't see how many points I have, or whether they got something wrong", she said. Even though a quick look at the app would have been enough, she practically forced me to disable the virtual receipt feature: the habit of the physical receipt won.

You may be wondering: nice story, but what does it have to do with the cloud?

Well, across my many consulting engagements I keep finding that, with certain kinds of customers, the habit of having "something physical" just won't die.

In 2017 I did a big piece of work, together with my team, to move a small London investment bank entirely onto Amazon Web Services. Since they had no internal IT staff, only people handling desktop support, my idea was to remove any on-site hardware that wasn't strictly necessary to keep the desktops themselves running. If something breaks, somebody has to fix it, right? And if there's nobody around, who replaces (for example) a failed disk?

The management was actually very keen on avoiding operational "headaches", so once past the gauntlet of legal & compliance, we proceeded slowly with the migration, taking care that "nothing broke".

Now, a little more than a year later and with the migration complete, the customer has asked to go back. Not because of technical problems, nor performance issues: with a fast, redundant line just a few hops away from AWS, it felt only marginally slower than local servers.

So what's the problem? The fear of no longer having the data "in the closet", and of losing control, triggered a psychological mechanism in the CEO that led him to decide to go back, despite a higher TCO and having to manage possible faults himself. Keep in mind that I'm talking about a bank in the City of London, not "Uncle Tony's" garage.

What did this story teach me?

It taught me that technology puts an endless array of tools and possibilities at our disposal, but some mindsets are really hard to uproot.

The more customers I visit across Europe, the more I witness a real paradox. With the advent of fibre and high-speed radio links, the European SMEs that would benefit the most from the cloud are the ones most resistant to change. Conversely, the large companies that could achieve economies of scale by adopting a private cloud, while also keeping greater control over data security, turn instead to the public cloud (AWS, Azure, Google Compute Engine) because it means fewer "headaches" in hardware life-cycle management and internal processes.

So what can we consultants do?

My experience as a Linux enthusiast taught me that religious wars are useless and that, after all, it's the customer who pays. Our role is therefore to advise the customer as best we can for what they want to achieve.

While we wait for some technologies to be better "digested", I've found that a winning strategy with customers who want on-premise hardware is to offer cloud services for the web front-end (for image reasons), and above all to offer fast, low-cost disaster recovery.

On the other side, to those who have everything in the cloud we can propose a small internal environment to underpin the infrastructure, for example a private cloud based on OpenStack with just 3 nodes, an object store for backups, or a Kubernetes/Docker cluster, keeping ourselves ready to "scale" with automation should we ever have to power up the in-house systems in an emergency.

Spectre, Meltdown and the real dangers of the Cloud

[Written in collaboration with Luca Perencin]

The Spectre and Meltdown vulnerabilities, discovered by Google, have reignited the debate for and against public cloud environments.

There is no doubt that an isolated, well-patched system offers more security than a cloud environment. It is also true that the big cloud companies (Amazon, Microsoft) can usually offer better security than private environments for services exposed on the Internet. Patch management (even customizable), 24/7 monitoring and other security measures tend to provide better protection than many financial institutions achieve, even though security is a priority there.

In my consulting experience, the security problem is often not the security of the platform, but how these platforms are used. Poor knowledge of the platform, or simply laziness, often leads to a data breach. And then those very same people point the finger at the cloud. With the GDPR around the corner, and heavy fines with it, this is a real risk for companies.

When you embrace a cloud environment, whether public or private, it is wise to rely on a serious, recognized professional who knows the platform in depth.

So is the cloud free of dangers? Free of vulnerabilities, perhaps, but the problem lies elsewhere. I fully agree with the words of Tim Berners-Lee, the creator of the World Wide Web: in an interview with the Guardian, the father of the web raised the alarm about how dangerous the centralization of the Internet into a few multinationals is.

Thinking about it, the protocols that underpin today's Internet, created in the days of ARPANET (a military-funded network that was later demilitarized), had a precise intent: to create a decentralized network, whose peering protocols allowed it to reshape itself even in the event of nuclear attacks.

The strange twist of fate is that while the transport network itself was designed not to depend on a central point, allowing peering relationships with other points, content and infrastructure are centralizing into a few multinationals, such as Amazon Web Services (AWS), Microsoft Azure and Google, to name a few famous ones.

Over time, the available services have become ever easier to use, allowing you, for example, to open a blog on WordPress.com with a few simple clicks, instead of manually installing and configuring the various components, on top of the burden of constantly watching out for updates and external threats.

If, on the one hand, it is beyond question that cloud platforms are a fast and convenient way to run even our IT infrastructure, on the other hand this centralization is frightening, not only for privacy and cost reasons, but above all for our very independence.

It is obvious that platforms like AWS and Azure offer attractive services at appealing prices, especially for small and medium businesses, and certainly no on-premise software will manage to offer the same level of functionality and innovation.

It is important to avoid certain products that tie us hand and foot to the vendor, and to use open standards and open protocols for our data. This way, even if we initially choose AWS or Azure, we can decide at any moment to bring technologies and data back in-house, or to change provider.

The same concept and the same problems have already been faced in a similar field, apparently unrelated but actually offering many points of contact: the management of online content. Starting with the first institutional and showcase sites, and then blogs and informative sites, the answer at some point seemed to be putting all the content on social networks. Time has shown that this move lowered the value of the content, with the consequent loss of identity and, at times, of ownership. We are now seeing a return to direct content management, with social networks used as a communication channel.

In the same way, we cannot think of relying entirely on external providers, however qualified; we should instead think of solutions that integrate the two worlds, keeping control of our data.

We must be prepared to "come back home", and this can be achieved by creating an embryo of services on which to rest the infrastructure, for example with open infrastructures such as OpenStack and Kubernetes/Docker.

Facing a digital transformation with awareness, and relying on professionals, helps to use the underlying platform in an agnostic way, to avoid vendor lock-in, and to bring data back in-house should the need arise.

Is OpenStack really for you? The aftermath of a failed attempt

This post comes after a failed attempt to help a small Swiss ISP build its cloud offering. The market for selling Internet access is shrinking towards the big players, so the owner believed that over the next 5-10 years his business would focus on reselling Virtual Private Servers (VPS), and thought that OpenStack could help him with this new business.

I've probably seen more failures than happy endings, but most of the time the failure is not due to OpenStack at all. And this was (unfortunately) the case here as well.

You probably know how much I love OpenStack and that I've been a strong supporter since my former boss Mark Shuttleworth put me on the project while I was at Canonical (Ubuntu) in early 2011. But let's face it: OpenStack is not for everybody. And it's not a matter of the size of the business, nor of the money you put into the project, but rather of the mindset with which you embrace OpenStack.

Two years ago, when I published my book "OpenStack Explained", I wrote that "the reality is that OpenStack is just a technology and it enables you to do more if you embrace its philosophy. This requires a company to change deeply the way IT is conceived".

Even though I'm an experienced consultant, my biggest mistake was not analyzing the company in depth before starting the project: I believed their claims of having "long experience with Linux", of having "tried Ceph deeply" and of being "masters of networking". It turned out none of that was true.

So here are a few suggestions based on what went wrong in this project:

  • Real savings come from automation and freedom from vendor lock-in. If you are looking for "something like VMware, but cheaper", my suggestion is to either reconsider VMware or turn to other virtualization projects such as Proxmox or oVirt / Red Hat Enterprise Virtualization (RHEV). The real advantage of OpenStack is the extreme automation of your infrastructure and the freedom from any hardware/software vendor.
  • OpenStack requires care and attention. Don't think of OpenStack as a "point and click" solution: it's definitely not. The project is meant to be a full stack for building a cloud, like Amazon Web Services (AWS), on your premises, so you need to accept its complexity. Live (at the moment) with the fact that you need to upgrade every six months and that you require enterprise-level monitoring and operations.
  • Invest in people with good Linux skills. I can't stress this enough. You can't get by with somebody who "has installed Ubuntu" or some other distribution and pretends to be a Linux "superhero". You really need to know the Linux system at its roots, including the storage and network subsystems. Basically, you need to find a real geek.
  • You need a dedicated team. This is somewhat linked to the points above, but managers and company owners often think they can "survive" with the existing staff. An OpenStack project requires people with focus: especially at the beginning, you will need a lot of parameter tuning according to your needs. Three people on the team is the bare minimum, but consider that a mid-sized ISP usually has around 10 people to cope with shifts and holidays.
  • Invest in tested/certified hardware. Hardware incompatibilities can be a nightmare: during this experience I had a lot of hardware issues, such as the CPU freezing due to an incompatibility with the motherboard, or NVMe faults due to a cheap PCI adapter. I wasted a lot of days (and nights) demonstrating that it was a hardware issue. If you need to save money, buy less performant but reliable, rock-solid hardware.
  • Get the right storage for you. OpenStack can use a variety of block storage backends for the virtual machines. If you are attracted to Ceph because of the savings, be aware that, according to my calculations, you need a few terabytes before it becomes cheap for you.
  • Ceph is like a ship: the bigger, the better. A cruise ship is far more stable than a dinghy, because its size keeps the vessel steady even when a thunderstorm hits it. Ceph works on exactly the same principle. If the cluster is small, with just a bunch of disks, you don't get much performance and it is prone to losing quorum or, worse, data. Bigger clusters deliver far more performance and stability, and recover better from any error that may occur.
  • Do an extensive PoC/test phase. Proofs of Concept and test phases should be taken very seriously: use this phase to get acquainted with the technology and go for a deep dive with an experienced consultant. Try to understand during the test phase whether you and, above all, your team are ready for OpenStack. The longer this phase, the fewer surprises you will get in the pre-production and production stages.
  • If you're going public, invest in network protection solutions. If you're an ISP, you and your users will likely be the target of DDoS attacks. Use appropriate techniques to protect your infrastructure. I know it's basic stuff, but not everybody gets it.

Unfortunately, there's no happy ending to this story. Everything that could go wrong, went wrong. To summarize, the cluster suffered multiple hardware failures, partly due to an unplanned relocation of the equipment. It got worse when I discovered that nobody inside the company had sufficient Linux knowledge, not even for basic troubleshooting.

The situation went against every OpenStack best practice, so I (sadly) told the owner that I couldn't be of any further help until they reshaped the company, and I suggested either going back to VMware or investigating other "point-and-click" solutions.

After six months, the OpenStack cluster was decommissioned and the hardware reassigned to other customers.

OpenStack is a fantastic framework for building your own cloud services and is used in production by a lot of customers. Now, if you're thinking of having OpenStack on your premises, the question I have is: is OpenStack really for you?

OpenStack Swift reports

It's no secret that I love OpenStack Swift. While it's not always a two-way relationship, I use Swift as much as I can: mostly for long-term backups, serving static websites and even streaming.

While the functionality is awesome, it's also important to get accounting/usage information out of the platform. Out of the box, Swift does not even allow an administrator to access accounting information for a given account. The "standard" approach is to use the Telemetry feature of OpenStack (aka Ceilometer), but I'm not a fan of that project either. In my opinion, Telemetry "pumps" out so much data that in most cases it is way too much, and I believe a simpler approach is preferable.

To create a report of Swift usage, we need to use Swift's Reseller Admin concept to query account statistics from a single admin-level user. The reseller role (named "ResellerAdmin" by default) can operate on any Swift account.

While "getting the concept" is a bit tricky (and undocumented as well), the truth is that it is quite straightforward to enable. Create a "ResellerAdmin" role in OpenStack with the command openstack role create ResellerAdmin and grant the role to the user that needs to access the containers, e.g. the admin user, as shown below.
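
For example, assuming the admin user lives in the admin project (adjust the names to your deployment), the two commands would be:

$ openstack role create ResellerAdmin
$ openstack role add --project admin --user admin ResellerAdmin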

Edit the Swift proxy-server.conf (the filter:keystone section) and make sure the reseller_admin_role setting shown below is present:

[filter:keystone]
use = egg:swift#keystoneauth
operator_roles = admin, SwiftOperator
reseller_admin_role = ResellerAdmin
reseller_prefix = AUTH_
is_admin = true
cache = swift.cache
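
Then restart the Swift proxy so that the new configuration is picked up. The service name depends on your distribution; on a systemd-based RDO install, for example, it would be:

$ systemctl restart openstack-swift-proxy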

Now the admin user can enumerate the projects and get statistics for all the projects and containers. It's then easy enough to cycle through all the projects and get the bytes used, as shown below:

$ swift stat --os-project-name myproject
      Account: AUTH_c9f567ce0c7f484e918ac8fc798f988f
      Containers: 4
      Objects: 325   
      Bytes: 101947377850 
      Containers in policy "policy-0": 4
      Objects in policy "policy-0": 325
      Bytes in policy "policy-0": 101947377850
      X-Account-Project-Domain-Id: default
      X-Timestamp: 1487950953.36228
      X-Trans-Id: tx49e7b3d4e1a24f529fbc6-00594fb813
      Content-Type: text/plain; charset=utf-8
      Accept-Ranges: bytes
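
Building on that, a minimal report loop could look like the sketch below. It assumes the admin credentials are already exported in the environment and that both the openstack and swift command line clients are installed:

# cycle through all projects and print the bytes used by each account
for project in $(openstack project list -f value -c Name); do
    bytes=$(swift stat --os-project-name "$project" | awk '/^ *Bytes:/ {print $2; exit}')
    echo "$project: $bytes bytes"
done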

swift-backup

Today I've released an internal tool we use at GARL. Its name is swift-backup and it is a simple program written in Go to back up a file.

The aim is to have a single multi-platform binary, with no dependencies, that is able to back up a single file to OpenStack Swift. We use this tool to schedule backups of database dumps and other locally-created backups, such as tar.gz archives, from geographically dispersed resources.

You can find the source code here: https://github.com/gpaterno/swift-backup
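
To give an idea of how we use it, here is the kind of nightly job we schedule. This is just a sketch with made-up database names and paths, and it uses the standard swift client in place of swift-backup (whose exact options are documented in the repository above):

# dump a database and upload the archive to a Swift container named "backups"
pg_dump mydb | gzip > /var/backups/mydb.sql.gz
swift upload backups /var/backups/mydb.sql.gz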

Racing with OpenStack

My talk at both OpenStack Days UK and OpenStack Days Italy will be titled Racing with OpenStack.

I've been using OpenStack in several telcos and some banks, but there are more creative ways of using OpenStack as well. The talk shows how OpenStack supported a car racing event: in detail, how the infrastructure was prepared to manage cameras, live streaming of the event and so on, with details of the OpenStack architecture.

“lectio magistralis” in London

On Thursday I had the chance to give a "lectio magistralis" in front of members of different funds in London. They were interested in my opinion on the IT market, not only in the short term but also in the long term (5 to 10 years out). The reason is that these funds want to offer long-term investments to their private customers and banks.

The conference was a wonderful experience, with a lot of intriguing questions from the funds' researchers. It was my very first time speaking to a totally different audience rather than a purely IT crowd. Having said that, I've realized how important this really was.

The fund community perceives me as someone who is reliable, understands the IT market, and has a clear vision of the short- and long-term perspectives. Speaking in London of all places, an influential city with worldwide impact, made it even more significant.

Will this be a new path in my career? I don't know. What I know for sure is that technology is my absolute passion and nobody can drag me away from the "keyboard". However, I feel I'm ready to play even bigger roles in the market.

I want to say thank you to everybody; it was my pleasure to take part in this event and speak in front of such a committed audience.

Wrap-up OpenStack Ops in Milan

It was very exciting for me to see the OpenStack Ops Midcycle in Milan (Italy), my hometown. I was also very pleased to see a lot of familiar faces from the community, and Switzerland was well represented with Switch, ZHAW, the University of Zurich, CSCS... and myself.


I personally followed mostly the tracks about containers: Kolla is becoming increasingly popular among consultants as a way to deploy OpenStack repeatably. However, the real advantage of Kolla, in my humble opinion, is upgrades: containers allow you to upgrade and roll back the cluster in a short period of time, which is essential when you need to operate a production cloud.

The overall feeling I shared with my "colleagues" in the community is that OpenStack has definitely improved its stability. I started working on OpenStack back in 2010, in the days of the Diablo release, and it was a pain to run at the beginning. While most production clusters are on the Liberty or Mitaka release, no one is applying special patches to the code, as we used to do at the outset.

There are still issues with the non-core projects and the features that are less used and tested, but I can definitely say that, if you make appropriate decisions and stick to the stable components, OpenStack can be safely run in production. We'll see how it goes with the Ocata release, as Nova introduces Cells v2.

I need to thank both Saverio Proto from Switch and the Enter team for bringing such a fantastic event to Milan.

Back on track for 2017

As you've probably noticed, I haven't been that active on my social media channels. As you might know, 2016 was a very busy and productive business year for me. I spent most of my time in London and Milan, focusing on some exciting projects. For instance, my work for eBay© was to make their application more cloud-aware in order to speed up their releases. I also assisted a bank in central London in integrating the cloud into their business routine. In Milan, I concentrated on the kick-off at Saipem/ENI© with OpenStack and multi-cloud. I am thankful for these work opportunities and I am looking forward to getting involved in new ones.

Having said that, now is the right time to tell you about my resolutions for 2017:

  • Spend more time in Zurich while keeping London as a main landing point.
  • Get on board 2-3 long-term projects that will provide recurring revenues.
  • Keep working on Long-Term Support (LTS) releases concept for OpenStack.
  • Have more time for family and hobbies (such as improving my flying skills).

The key word for the 2017 adventure is "simplification". This is why my next blog post will explain how the cloud can be a key component of simplification in IT.

Stay tuned!

SecurePass 0.4.6, focus on OpenStack & cloud

I'm proud to announce SecurePass "Dreamliner" 0.4.6. This release has a special focus on OpenStack and cloud deployments in general, as we introduced password-only authentication for service users. Our new command line tool reflects these changes.

We also stabilized the driver for OpenStack Keystone, which is now easier to install and configure. Multiple SecurePass domains are supported as Keystone domains, as we enabled domain-based configuration. The Keystone driver is freely available on our GitHub page.

Last but not least, our Linux NSS module now supports POSIX groups. Packages for RedHat/CentOS and SUSE/openSUSE are available in the repository. Ubuntu and Debian users should receive updates via backports.

In detail, the new features are:

  • Added user modify
  • Added password-only authentication
  • Added show group information
  • Fixed user modify: name and surname were being forced to lowercase