News

An era has ended: SecurePass shutdown

GARL announced that SecurePass would cease its official activities in August 2017. As of today, I have shut down all of SecurePass's virtual machines.

I am a bit sad, but there are choices you have to make, and sentiment is sometimes far removed from business.

This definitely marks the end of an era, but a new one is already taking shape.

Project “simplification” for 2018

Since the beginning of 2018, I have been working on an “internal” project whose ultimate goal is to simplify my life. 2017 was definitely a stunning year, with a lot of great projects and great results. I believe it will be difficult to ever achieve the same again. Great results, however, also come with great sacrifices: it was all about work, and there was little space for my own life. “All work and no play makes Jack a dull boy”, as the proverb says, so I believe I deserve a little relief from the pressure.

My New Year's resolution was to simplify my life and achieve a better work/life balance. This “simple” resolution turned out to be more complex and harder than I thought. Since January, I have worked really hard to reduce the number of hassles as much as I can. This is the main reason why you haven't seen me around much and I haven't been very involved with social media, events, and so on.

At the end of June, I can say I'm on the right track, but there is still a lot to come. Stand by for some great announcements 🙂

Alicloud & RedHat Linux 7.4 BYOS


Alibaba Cloud (Alicloud or Aliyun) is a promising Chinese cloud provider that is becoming popular in the Asia-Pacific region. If you want to release services in China and comply with Chinese privacy law, all your data needs to stay in China. For this reason, Alicloud can be a handy way to start your journey in the country.

Most businesses want to have the same certified workloads in China as well, and those are mostly based on RedHat Enterprise Linux (RHEL). Alicloud is a RedHat Certified Cloud Provider and offers RHEL images in their marketplace, but these images include a RedHat subscription. What if you have an Enterprise agreement and you want to use a Bring Your Own Subscription (BYOS) method?

Here are some handy tricks to bring RHEL 7.4 BYOS into Alicloud and start serving your customers in China.

Alicloud supports importing images in RAW and VHD format, which helps us a lot. If you have an active RedHat subscription, you should download the RHEL 7.4 KVM guest image (see image below). This image is compatible with the Alicloud virtualization system; Alicloud is also compatible with cloud-init to customize the virtual machine at boot time. The direct link to the download page is here: https://access.redhat.com/downloads/content/69/ver=/rhel---7/7.4/x86_64/product-software

[Screenshot: the RHEL 7.4 KVM Guest Image on the RedHat download page]

The next step would be converting the QCOW2 image into RAW format. However, the conversion will expand the 500MB QCOW2 image into a 10GB RAW file. Uploading such a big file would be problematic if you are not in China and have to traverse the Great Firewall.

As such, we will upload the QCOW2 image to Alicloud Object Storage Service (OSS) and convert it using a temporary virtual machine in China. Create a bucket through the console and upload the image. Should you need a GUI to perform the upload, an official client named “OSS Browser” is available here: https://github.com/aliyun/oss-browser/blob/master/all-releases.md

I also strongly recommend downloading ossutil64, a CLI-based tool for OSS, so that you can upload the converted image from the temporary Linux instance. The tool is available here: https://www.alibabacloud.com/help/doc-detail/50452.htm

Create a small Linux instance with the distro of your choice (I recommend CentOS) in your Chinese region (in my case, Beijing), and ensure it has sufficient disk space. Once the instance is reachable, log in and download the QCOW2 from the bucket using curl and the object URL. Then convert it using the qemu-img tool:

qemu-img convert -f qcow2 -O raw rhel-server-7.4-x86_64-kvm.qcow2 rhel-server-7.4-x86_64-kvm.img
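
For completeness, the download step mentioned just above could look like this; the object URL is only a placeholder, so replace it with the public URL of your own object:

# hypothetical object URL - replace with the URL of the QCOW2 in your bucket
curl -o rhel-server-7.4-x86_64-kvm.qcow2 "https://my-bucket.oss-cn-beijing.aliyuncs.com/rhel-server-7.4-x86_64-kvm.qcow2"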

Once converted, use ossutil64 to upload the RAW image to your previously created bucket.
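
As a rough sketch, with the endpoint, access keys, and bucket name below being placeholders for your own values, the upload with ossutil64 could look like this:

# configure ossutil64 with your region endpoint and access keys (placeholders)
./ossutil64 config -e oss-cn-beijing.aliyuncs.com -i <AccessKeyID> -k <AccessKeySecret>
# copy the converted RAW image into the bucket created earlier
./ossutil64 cp rhel-server-7.4-x86_64-kvm.img oss://my-bucket/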

[Screenshot: the uploaded RAW image in the OSS bucket]

If you click on the file, you can see its public URL in the preview. Copy the file URL, as we will feed it into the image importer.

[Screenshot: the object detail view showing the file URL]

Go back to Elastic Compute Service (ECS), select Image in the menu on the left, and start the import through the “Import Image” functionality. In the OSS Object Address field, insert the URL you copied before. Use Linux as the operating system and RedHat as the system platform. Be sure to specify RAW as the image format.

[Screenshots: the Import Image dialog]

The Alicloud image service will (slowly) import the image. If everything is successful, you should see your image listed, similar to the screenshot below:

[Screenshot: the imported custom image in the ECS image list]

You can now start a virtual machine from your newly created image and register your RedHat subscription with subscription-manager 🙂
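
As a minimal sketch, with placeholder credentials, the registration on the new instance could look like this:

# register against the RedHat customer portal and attach an available subscription (placeholder credentials)
subscription-manager register --username <rhn-username> --password <rhn-password> --auto-attach
# verify the subscription status
subscription-manager status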

Outside of “The Net”


I'd like to share something that happened to a friend of mine a couple of days ago. He runs a small cloud provider and acts as an outsourcer for selected customers. A very big firm in his country decided to move its brand-new website to one of his datacenters.

He runs two datacenters for disaster recovery and business continuity. Each datacenter has its own provider-independent IPs, a different ASN, and different upstream providers.

What happened is that, once he moved the new website, Google delisted it from its search engine. There was absolutely no trace of this company when searching, except for its famous products on the Amazon marketplace. Needless to say, the customer's marketing department and the developers blamed my friend.

The initial investigation showed that Google had failed to retrieve the robots.txt file it needs to index the website, so it decided to delist it. Funnily enough, other search engines (e.g. Bing and Qwant) were able to retrieve the very same file. In the access logs and tcpdump captures, there was no sign of the Google crawler.

During a test, he was able to “restore” the situation by moving the complex website, with its e-commerce platform, to the other datacenter. A deeper investigation revealed that, for some unknown reason, Google seemed to have blocked the IPs of the first ASN, while other search engines and the rest of the world were able to access the website. When he contacted the Google NOC, they said that the Google search engine and Webmaster Tools are unsupported, so basically my friend was on his own. For equally unknown reasons, after a couple of weeks the ASN IPs of the datacenter were reachable again.

This reminds me of my previous posts in which I discussed how the Internet was designed to be as independent as possible from any central point, while information is now more and more centralized in the hands of a few companies. Of course, there is no malicious intent on Google's part to block my friend's IPs, but it turns out that each of these companies has the potential power to decide whether you can run your business or not.

The same thing could potentially happen on a public cloud: what if Amazon decides to shut down your machines (and it has the right to do so!)?

I'm not against any cloud provider, and we need to thank AWS and Azure for bringing such inspiring innovation to the world of IT. But, as I stated in previous posts, we need to be ready to bring our business back on-premise if we are forced to do so.

Just a few hints:

  1. Create your local micro-cloud on-premise, say with OpenStack and Kubernetes, so that you can start and scale up quickly (a minimal bootstrap sketch follows this list)
  2. Use open data and open standards, and avoid the layered products offered by the cloud provider: they will lock you in.
  3. Automate deployments as much as you can, so that they are reproducible and can be run on-premise
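
As a minimal sketch of hint 1, assuming a fresh Linux host where kubeadm, kubelet and a container runtime are already installed, a tiny on-premise Kubernetes micro-cloud can be bootstrapped like this:

# initialise a single control-plane node (pod CIDR chosen to match the flannel default)
kubeadm init --pod-network-cidr=10.244.0.0/16
# join additional worker nodes using the token printed by kubeadm init (placeholders)
kubeadm join <control-plane-ip>:6443 --token <token> --discovery-token-ca-cert-hash sha256:<hash>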

The idea I'm currently advocating is to apply the Raiffeisen model to IT, to foster a complementary alternative to public clouds and big outsourcers, so that heterogeneous enterprises in a local territory can team up to create a small micro-cloud and save money.

My wife wants the receipt: an analysis of cloud adoption in Europe

My personal goals for 2018 are simplification and the reduction of everyday “annoyances”. One of them is the proliferation of receipts, which multiply like gremlins: the “collection” of receipts at home had reached an unacceptable level.

A few days ago I installed a well-known supermarket's app on my wife's phone, since it offers the option of virtual receipts. When you do your shopping, the supermarket in question immediately creates a PDF, which can be viewed both through the app and through the website.

Even though the app is very simple to use, after a few shopping trips on her own my wife got angry: “how do you use this thing” and “I can't see how many points I have or whether they made a mistake”, she said. Even though she only had to look at the app, she practically forced me to disable the virtual receipt feature: the habit of the physical receipt won.

You may wonder: nice story, but what does it have to do with the cloud?

Well, in my many consulting engagements I see that, with some types of customers, the habit of having “something physical” just doesn't die.

In 2017 I did a big piece of work, together with my team, to move a small London merchant bank entirely onto Amazon Web Services. Since they had no internal IT staff, only people handling desktop support, my idea was to eliminate any on-site hardware that was not strictly necessary to keep the desktops running. If something breaks, somebody has to fix it, right? And if there is nobody, who replaces (for example) a failed disk?

Management was actually very much in favour of not having management “headaches”, so once we got past the legal & compliance gauntlet, we proceeded slowly with the migration, taking care that “nothing broke”.

Now, a little more than a year later and with the migration completed, the customer has asked to move back. Not because of technical problems, nor because of performance problems: with a fast, redundant line just a few hops away from AWS, it only felt slightly slower than the local servers.

So what is the problem? The fear of no longer having the data “in the closet” and of losing control triggered a psychological mechanism in the CEO that led him to decide to move back, even with a higher TCO and having to deal with possible hardware faults themselves. Let me point out that I'm talking about a bank in the City of London, not “Uncle Tony's” garage.

What did this story teach me?

It taught me that technology gives us an endless range of tools and possibilities, but some mindsets are really hard to eradicate.

The more I visit customers across Europe, the more I witness a real paradox. With the advent of fibre and high-speed radio links, the European SMEs that would benefit the most from the cloud are the ones most resistant to change. Conversely, the large companies that could achieve economies of scale by adopting a private cloud, as well as gain more control over data security, turn instead to the public cloud (AWS, Azure, Google Compute Engine) because it means “fewer headaches” in hardware life-cycle management and in their internal processes.

So what can we consultants do?

My experience as a Linux enthusiast has taught me that religious wars are useless and, after all, it is the customer who pays. Our role is therefore to advise the customer as well as we can, according to what they want to do.

While we wait for some technologies to be better “digested”, I have seen that a winning strategy for those who want their hardware on-premise is to offer cloud services for the web front-end (for image reasons), and above all to offer the possibility of fast, low-cost disaster recovery.

On the other hand, to those who have everything in the cloud we can propose building a small internal environment on which to rest the infrastructure, for example a private cloud based on OpenStack with just 3 nodes, an object store for backups, or a Kubernetes/Docker setup, keeping ourselves ready to “scale” through automation when, “in an emergency”, we have to switch the systems back on at home.

Spectre, Meltdown and the real dangers of the Cloud

[Written in collaboration with Luca Perencin]

The case of the Spectre and Meltdown vulnerabilities, discovered by Google, has reignited the debate for and against public cloud environments.

There is no doubt that an isolated, well-patched system offers more security than a cloud environment. It is also true that the big cloud companies (Amazon, Microsoft) can usually offer more security than private environments for services exposed on the Internet. Patch management (even customizable), 24/7 monitoring and other security measures tend to offer better protection than many financial institutions have, and there security is a priority.

In my experience as a consultant, the security problem is often not so much the security of the platform as how these platforms are used. Poor knowledge of the platform, or simply laziness, often leads to a data breach. And then those same people point the finger at the cloud. With the GDPR around the corner, and heavy fines, this is a serious risk for companies.

When you embrace a cloud environment, whether public or private, it is wise to rely on a serious, recognized professional who knows the platform in depth.

So is the cloud free of dangers? Perhaps free of vulnerabilities, yes, but the problem lies elsewhere. I fully agree with the words of Tim Berners-Lee, the creator of the World Wide Web: in an interview with the Guardian, the father of the web raised the alarm about how dangerous the centralization of the Internet into a few multinationals is.

Thinking about it, the protocols that underpin today's Internet, created in the days of DARPANET (military) and, later, ARPANET (demilitarized), had a precise intent: to create a decentralized network whose peering protocols allowed the network to reconfigure itself even in the event of nuclear attacks.

The strange twist of fate is that the transport network itself was designed not to depend on any central point, allowing peering relationships with other points, while content and infrastructure are being centralized into a few multinationals, such as Amazon Web Services (AWS), Microsoft Azure and Google, to name a few famous ones.

Over time, the services available have become easier and easier to use, making it possible, for example, to open a blog on WordPress.com with a few simple clicks instead of manually installing and configuring the various components, on top of the commitment of constantly keeping an eye on updates and external threats.

If, on the one hand, it is beyond question that relying on cloud platforms, even for our IT infrastructure, is convenient and fast, on the other hand this centralization is frightening, not only because of privacy and costs, but above all because of our very independence.

It is obvious that platforms such as AWS and Azure offer interesting services at attractive prices, especially for small and medium-sized businesses, and certainly no on-premise software will be able to offer the same level of functionality and innovation.

It is important to avoid certain products that tie us hand and foot to the vendor, and to use open standards and open protocols for our data. This way, even if we initially want to use AWS or Azure, we can decide at any time to bring technologies and data back in-house, or to change provider.

The same concept and the same problems have already been faced in a similar field, apparently unrelated but actually with many points of contact: the management of online content. Starting from the first corporate and showcase websites, and then moving on to blogs and informational sites, the solution was then thought to be putting all the content on social networks. Time has shown that this move lowered the value of the content, with the consequent loss of identity and, at times, of ownership. We are now seeing a return to direct content management, with social networks used as a communication channel.

In the same way, we cannot think of relying entirely on external providers, however qualified; we should think of solutions that integrate the two worlds while keeping control of our data.

We must be prepared to “come back home”, and this can be achieved by creating an embryo of services on which to rest the infrastructure, for example with open infrastructure such as OpenStack and Kubernetes/Docker.

Approaching a digital transformation with awareness, and relying on professionals, helps you use the underlying platform in an agnostic way, avoid vendor lock-in, and bring your data back in-house should the need arise.

Is OpenStack really for you? An aftermath of a failed attempt

This post comes after a failed attempt to help a small Swiss ISP build their cloud offering. The market for selling Internet access is shrinking down to the big players, so the owner believed that over the next 5-10 years his business would focus on reselling Virtual Private Servers (VPS), and he thought OpenStack could help him with this new business.

I've probably seen more failures than happy endings; however, most of the time the failure is not due to OpenStack at all. And (unfortunately) this was the case here as well.

You probably know how much I love OpenStack and that I have been a strong supporter since my former boss Mark Shuttleworth put me on the project when I was at Canonical (Ubuntu) in early 2011. But let's face it: OpenStack is not for everybody. And it's not a matter of the size of the business, nor of the money you put into the project, but rather of the mindset with which you embrace OpenStack.

Two years ago, when I published my book “OpenStack Explained”, I wrote that “the reality is that OpenStack is just a technology and it enables you to do more if you embrace its philosophy. This requires a company to change deeply in the way IT is conceived”.

Even though I'm an experienced consultant, my biggest mistake was not analyzing the company in depth before starting the project: I believed their claims of having “long experience with Linux”, having “tried Ceph deeply” and being “masters of networking”. It turned out that none of this was true.

So I will write a few suggestions based on what went wrong in this project:

  • Real savings come from automation and the absence of vendor lock-in. If you are looking for “something like VMware, but cheaper”, my suggestion is to either reconsider VMware or go for other virtualization projects like Proxmox or oVirt / RedHat Enterprise Virtualization (RHEV). The real advantage of OpenStack is the extreme automation of your infrastructure and the freedom from any hardware/software vendor.
  • OpenStack requires care and attention. Don't think of OpenStack as a “point and click” solution: it is definitely not. The project is meant to be a full stack for building a cloud, like Amazon Web Services (AWS) on your premises, so you need to accept its complexity. Live (at the moment) with the fact that you need to upgrade every six months and that you require enterprise-level monitoring and operations.
  • Invest in people with good Linux skills. I can't stress this enough. You can't just rely on somebody who “has installed Ubuntu” or other distributions and pretends to be a Linux “super-hero”. You really need to know the Linux system at its roots, including the storage and network subsystems. Basically, you need to find a real geek.
  • You need to have a dedicated team. This is somewhat linked to the points above, but managers or company owners most of the time think that they can “survive” with the existing people. An OpenStack project requires people with focus. Especially at the beginning, you will need to do a lot of parameter tuning according to your needs. Three people on the team is the bare minimum, but consider that a mid-sized ISP usually has around 10 team members to cope with shifts and holidays.
  • Invest in tested/certified hardware. Hardware incompatibilities can be a nightmare: during this experience I had a lot of hardware issues, like the CPU freezing due to an incompatibility with the motherboard, or NVMe faults due to a cheap PCI adapter. I wasted a lot of days (and nights) demonstrating that these were hardware issues. If you need to save money, get hardware with lower performance, but reliable and rock-solid.
  • Get the right storage for you. OpenStack can use a variety of block storage backends for the virtual machines. If you are attracted to Ceph because of the savings, you need to know that, according to my calculations, you need a few terabytes before it gets cheap for you.
  • Ceph is like a ship: the bigger, the better. A cruise ship is far more stable than a dinghy, because its larger size keeps the vessel steady even when a storm hits it. Ceph follows exactly the same concept. If the cluster is small and has just a bunch of disks, you don't get much performance and it's prone to losing quorum or, worse, data. Bigger clusters deliver far more performance and stability, and can recover better from any error that may occur.
  • Do an extensive PoC/test phase. Proofs of concept and test phases should be taken very seriously: use this phase to get acquainted with the technology and go for a deep dive with an experienced consultant. Try to understand during the test phase whether you and, above all, your team are ready for OpenStack. The longer this phase is, the fewer surprises you will get in the pre-production and production stages.
  • If you're going public, invest in network protection. If you're an ISP, you and your users will likely be a target of DDoS attacks. Use the appropriate techniques to protect your infrastructure. I know it's basic stuff, but not everybody gets it.

Unfortunately, there's no happy ending to this story. Everything that could go wrong, went wrong. To summarize: the cluster had multiple hardware failures, partly due to an unplanned relocation of the equipment, and it got worse when I discovered that nobody inside the company had sufficient Linux knowledge even to do some basic troubleshooting.

The situation went against all OpenStack best practices, so I (sadly) told the owner that I couldn't be of any help any longer until they reshaped the company, and I suggested either going back to VMware or investigating other “point-and-click” solutions.

After six months, the OpenStack cluster was decommissioned and the hardware reassigned to other customers.

OpenStack is a fantastic framework for building your own cloud services and is used by a lot of customers in production. Now, if you're thinking of running OpenStack on your premises, the question I have for you is: is OpenStack really for you?