
NETWAYS Blog

Evolution of a Microservice-Infrastructure by Jan Martens | OSDC 2019

This entry is part 2 of 6 in the series OSDC 2019 | Recap

 

At the Open Source Data Center Conference (OSDC) 2019 in Berlin, Jan Martens invited the audience to travel with him in his talk "Evolution of a Microservice-Infrastructure". Did you miss his talk? We have got something for you: watch the video of Jan's presentation and read a summary below.

The former OSDC will be held for the first time in 2020 under the new name stackconf. With the changes in modern IT in recent years, the focus of the conference has increasingly shifted from a mainly static infrastructure approach to a broader spectrum that includes agile methods, continuous integration, containers, hybrid and cloud solutions. The name change takes this development into account and opens the conference up to further innovations.

Due to concerns around the coronavirus (COVID-19), the decision was made to hold stackconf 2020 as an online conference. The online event will now take place from June 16 to 18, 2020. Join us, live online! Save your ticket now at: stackconf.eu/ticket/


 

Evolution of a Microservice-Infrastructure

Jan Martens signed up with a talk titled "Evolution of a Microservice Infrastructure", and why should I summarize his talk when he has already done so perfectly himself: "This talk is about our journey from Nginx & Docker Swarm to Traefik & Nomad."

But before we get more in depth with this talk, there is one more thing to know about it. It is more or less a sequel to "From Monolith to Microservices" by Paul Puschmann, a colleague of Jan Martens, but it is not absolutely necessary to watch both talks or to watch them in order.

 

During the talk, Jan answers a bunch of questions regarding their environment, like: "How do we do deployments? How do we do request routing? What problems did we encounter during our infrastructural growth, and how did we address them?"

After giving some quick insight into the scale he has to deal with, namely 345,000 employees and 15,000 shops, he goes on to the history of their infrastructure.

Jan works at REWE Digital, which is responsible for the infrastructure around services like grocery delivery. They started off by taking over an existing monolithic infrastructure. Not very attractive, huh? They were confronted with the question "How can we scale this delivery service?", and the solution they came up with was a microservice environment. Important to point out here is the use of Docker/Swarm for the deployment of microservices.

Let's skip ahead a bit and take a look at the state of REWE Digital in 2018. Their custom Docker environment consists of Docker, Consul, the Elastic Stack, nginx, dnsmasq and Debian.

Jan explains his infrastructure in more and more detail and how the different applications work with each other, but let's just say: everything was fine and peaceful until the environment grew to a certain size. At that point, problems with nginx started to surface, like requests that never reached their destination or keepalive connections that dropped after a short time. The reason? consul-template would reload all nginx instances at the same time. The solution? They looked for a different reverse proxy that is able to reload its configuration dynamically, and in the best case can even be configured dynamically.
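To make the difference concrete, here is a minimal, hypothetical sketch in Go (not REWE Digital's actual code) of what "dynamically reloadable" routing can look like: the proxy periodically pulls the healthy instances of a service from Consul's HTTP health API and swaps its backend list atomically, so routing updates never require a process reload and existing connections are not dropped. The Consul address and the service name "delivery-service" are made-up placeholders.

```go
package main

import (
	"encoding/json"
	"fmt"
	"log"
	"net/http"
	"net/http/httputil"
	"net/url"
	"sync/atomic"
	"time"
)

// consulEntry mirrors just the fields we need from
// GET /v1/health/service/<name>?passing on the Consul HTTP API.
type consulEntry struct {
	Service struct {
		Address string
		Port    int
	}
}

// backends holds the current list of healthy upstreams ([]*url.URL).
var backends atomic.Value

// refreshBackends polls Consul and atomically swaps the backend list,
// so the proxy never has to be reloaded or restarted.
func refreshBackends(consulAddr, service string) {
	for {
		resp, err := http.Get(fmt.Sprintf("%s/v1/health/service/%s?passing", consulAddr, service))
		if err == nil {
			var entries []consulEntry
			if json.NewDecoder(resp.Body).Decode(&entries) == nil {
				urls := make([]*url.URL, 0, len(entries))
				for _, e := range entries {
					u, perr := url.Parse(fmt.Sprintf("http://%s:%d", e.Service.Address, e.Service.Port))
					if perr == nil {
						urls = append(urls, u)
					}
				}
				backends.Store(urls) // atomic swap, no reload, keepalive connections stay up
			}
			resp.Body.Close()
		}
		time.Sleep(10 * time.Second)
	}
}

func main() {
	backends.Store([]*url.URL{})
	// Hypothetical Consul address and service name.
	go refreshBackends("http://127.0.0.1:8500", "delivery-service")

	var rr uint64
	proxy := &httputil.ReverseProxy{
		Director: func(r *http.Request) {
			targets := backends.Load().([]*url.URL)
			if len(targets) == 0 {
				return // nothing healthy registered yet
			}
			t := targets[atomic.AddUint64(&rr, 1)%uint64(len(targets))] // simple round robin
			r.URL.Scheme = t.Scheme
			r.URL.Host = t.Host
		},
	}
	log.Fatal(http.ListenAndServe(":8080", proxy))
}
```

Tools like Fabio and Traefik provide essentially this behaviour out of the box by watching Consul directly, instead of regenerating static config files and reloading the proxy.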

The three candidates deemed fit for the job were Envoy, Fabio and Traefik, but I have already spoiled their decision: it's Traefik. The points Jan mentioned that made them decide on Traefik were that it is dynamically configurable and able to reload its configuration live. That's obviously not all: lots of metrics, a web UI that Jan deemed nice, and a single Go binary might have made the difference.

Jan drops a few words on how the migration was done and then spends some time talking about the benefits of Traefik. The most important benefit for us to know is that the issues that existed with nginx are gone now.

Now that the proxy had been changed, there were also changes coming for Swarm itself. The problems Jan addresses are poor container spread, no self-healing, and more. You should be able to see where this is going. The candidates besides Docker Swarm were Rancher, Kubernetes and Nomad, and this decision was spoiled by me as well.

The reasons to use Nomad in this infrastructure might be pretty obvious, but I will list them anyway. Firstly, seamless Consul integration; both are made by HashiCorp, who would have guessed. Nomad is able to self-heal and comes as a single Go binary, just like Traefik. Jan also claims it has a nice web UI; we have to take his word on that one.

Jan goes into the benefits of using Nomad, just like he went into the benefits of Traefik, and shows how their work processes have changed along with their environment.

This post doesn't give enough credit to how much information Jan shared during his talk; maybe roughly twenty percent of it is covered here. You should definitely check out the full video to catch all the deeper, more insightful topics about the infrastructure and how the applications work with each other.

Alexander Stoll
Junior Consultant

Alexander is an organizational talent and has also recently become a trainee in Professional Services. When he is not at NETWAYS, his week looks like this: sports on Monday, Tuesday and Wednesday, pen and paper on Thursday, and a weekend without plans. He is happy to skip the sports part every now and then.

Docker – a first impression!

In the last few months of my apprenticeship I have already learned a lot about different software, structures and programming languages. For a few weeks now I have been working with Docker in the Managed Services department, and I am really fascinated by what Docker can do and how it works.

What exactly is Docker?

Docker is a containerization technology that enables the creation and operation of Linux containers, which should not be confused with virtual machines.

Docker simplifies the deployment of applications because containers that include all the necessary packages can easily be transported and installed as files. Containers ensure the separation and management of the resources used on a machine. According to the developers, this covers code, runtime, system tools and system libraries, in short everything that can be installed on a machine.
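To make this a little more concrete: the containers on a host are managed by the Docker daemon, which can also be talked to programmatically. The following is a minimal sketch in Go, assuming the Docker Engine Go SDK (github.com/docker/docker/client); the exact location of the options type varies slightly between SDK versions.

```go
package main

import (
	"context"
	"fmt"
	"log"

	"github.com/docker/docker/api/types"
	"github.com/docker/docker/client"
)

func main() {
	ctx := context.Background()

	// Connect to the local Docker daemon using the usual environment
	// settings (DOCKER_HOST etc.) and negotiate a compatible API version.
	cli, err := client.NewClientWithOpts(client.FromEnv, client.WithAPIVersionNegotiation())
	if err != nil {
		log.Fatal(err)
	}
	defer cli.Close()

	// Ask the daemon for all containers, running or stopped.
	containers, err := cli.ContainerList(ctx, types.ContainerListOptions{All: true})
	if err != nil {
		log.Fatal(err)
	}

	// Print a short overview: ID, image and current status of each container.
	for _, c := range containers {
		fmt.Printf("%-15s %-30s %s\n", c.ID[:12], c.Image, c.Status)
	}
}
```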

What advantages does Docker bring?

One advantage is that these containers can be created, deployed, copied and moved between environments very flexibly. Even running them in the cloud is possible, which opens up many new possibilities. Running different containers also brings some security advantages: if you run applications in separate containers, each container only has access to the ports and files that the other container explicitly exposes.

In addition, containers offer a higher degree of control over which data and software are installed in a container. Malware running in one container does not affect other containers.

What is the difference between Docker containers and virtual machines?

Virtual machines contain a large amount of information, from the simulated hardware and the operating system down to the installed programs. The consequence: they consume a lot of disk space and resources.

The big and beautiful difference to conventional virtual machines is that containers do not include and boot an operating system, but only hold the data the application really needs. If, for example, we used to run a database and a web server separately from each other, we needed two VMs. With Docker, only two containers are needed and no VMs have to be booted; compared to virtual machines, the containers consume hardly any resources.

Virtual machines still have their place, though: for example, when several machines with different operating systems or hardware specifications have to be simulated on one host. A Docker container contains no operating system of its own and no simulated hardware; it uses the host's system, so all containers share the operating system and the hardware.

Conclusion on Docker:

I consider the use of Docker worthwhile. It brings many advantages and makes it possible to set up a software environment much faster, without consuming a lot of disk space and resources.
