Evolution of a Microservice-Infrastructure by Jan Martens | OSDC 2019

This entry is part 1 of 6 in the series OSDC 2019 | Recap

 

At the Open Source Data Center Conference (OSDC) 2019 in Berlin, Jan Martens invited the audience to travel with him in his talk “Evolution of a Microservice-Infrastructure”. Did you miss his talk? We've got something for you: see the video of Jan's presentation and read a summary (below).

The former OSDC will be held for the first time in 2020 under the new name stackconf. With the changes in modern IT in recent years, the focus of the conference has increasingly shifted from a mainly static infrastructure approach to a broader spectrum that includes agile methods, continuous integration, containers, hybrid and cloud solutions. This development is reflected in the new name of the conference and in opening up the range of topics to further innovations.

Due to concerns around the coronavirus (COVID-19), the decision was made to hold stackconf 2020 as an online conference. The online event will now take place from June 16 to 18, 2020. Join us, live online! Save your ticket now at: stackconf.eu/ticket/


 

Evolution of a Microservice-Infrastructure

Jan Martens signed up with a talk titled “Evolution of a Microservice Infrastructure”, and why should I summarize his talk when he has already done so perfectly himself: “This talk is about our journey from Nginx & Docker Swarm to Traefik & Nomad.”

But before we go into more depth on this talk, there is one more thing to know about it. It is more or less a sequel to “From Monolith to Microservices” by Paul Puschmann, a colleague of Jan Martens, but it's not strictly necessary to watch both, or to watch them in order.

 

During the talk Jan answers a bunch of questions about their environment, such as: “How do we do deployments? How do we do request routing? What problems did we encounter as our infrastructure grew, and how did we address them?”

After giving some quick insight into the scale he has to deal with (345,000 employees and 15,000 shops), he moves on to the history of their infrastructure.

Jan works at REWE Digital, which is responsible for the infrastructure behind services such as grocery delivery. They started off by taking over an existing monolithic infrastructure, not very attractive, huh? Faced with the question “How can we scale this delivery service?”, the solution they came up with was a microservice environment. Important to point out here is the use of Docker Swarm for deploying those microservices.

Let's skip ahead a bit and take a look at the state of REWE Digital in 2018. Their custom Docker environment consists of: Docker, Consul, the Elastic Stack, nginx, dnsmasq and Debian.

Jan explains his infrastructure in more and more detail and how the different applications work with each other, but let's just say: everything was fine and peaceful until the environment grew to a certain size. At that point problems with nginx started to surface, like requests that never reached their destination or keepalive connections that dropped after a short time. The reason? consul-template would reload all nginx instances at the same time. The solution? They looked for a different reverse proxy that can reload its configuration dynamically, ideally one that can even be configured dynamically.
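To make that difference concrete, here is a minimal sketch of what such a dynamic setup can look like. It is not taken from Jan's talk: it assumes Traefik v2 with its YAML static configuration and the Consul Catalog provider, where services registered in Consul show up as routes without any proxy reload; the Consul address is a placeholder.

# traefik.yml – hypothetical static configuration, not from the talk
entryPoints:
  web:
    address: ":80"

providers:
  # Traefik watches the Consul catalog and updates its routing table live,
  # so no instance-wide reload is needed when services come and go.
  consulCatalog:
    endpoint:
      address: "consul.service.consul:8500"
    exposedByDefault: false   # only route services that opt in via tags

api:
  dashboard: true             # the web UI Jan mentions

metrics:
  prometheus: {}              # expose metrics for scraping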

The three candidates deemed fit for the job were Envoy, Fabio and Traefik, but I have already spoiled their decision: it's Traefik. The points Jan mentioned that made them decide on Traefik were that it is dynamically configurable and can reload its configuration live. That's obviously not all: lots of metrics, a web UI that Jan deemed quite nice, and a single Go binary may have made the difference.

Jan drops a few words on how the migration was done and then spends some time on the benefits of Traefik; the most important one for us to know is that the issues they had with nginx are gone now.

Now that the reverse proxy had been replaced, changes were also coming for Swarm itself. The problems Jan addresses are poor container spread, no self-healing, and more. You can probably see where this is going. The candidates besides Docker Swarm were Rancher, Kubernetes and Nomad, and yes, I have spoiled this decision as well.

The reasons to use Nomad in this infrastructure might be pretty obvious, but I will list them anyway. Firstly, seamless Consul integration; both are made by HashiCorp, who would have guessed. Nomad is able to self-heal and comes as a single Go binary, just like Traefik. Jan also claims it has a nice web UI; we will have to take his word on that one.

Jan goes into the benefits of using Nomad, just like he went into the benefits of Traefik, and shows how their work processes have changed along with the environment.

This post doesn't do justice to how much information Jan shared during his talk; maybe roughly twenty percent of it is covered here. You should definitely check out the full video to catch all the deeper, more insightful details about the infrastructure and how the applications work with each other.

Alexander Stoll
Junior Consultant

Alexander is a born organizer and recently started an apprenticeship in Professional Services. When he is not at NETWAYS, his weekly routine looks like this: sports on Monday, Tuesday and Wednesday, pen and paper on Thursday, and a weekend without plans. He is happy to skip the sports part every now and then.

5 Steps to a DevOps Transformation by Dan Barker | OSDC 2019

This entry is part 2 of 6 in the series OSDC 2019 | Recap

 

“It's not what we believe, it's what we do that defines our culture” was on his first slide. At the Open Source Data Center Conference (OSDC) 2019 Dan Barker presented “5 Steps to a DevOps Transformation”. Those who missed the talk back then now get the chance to see the video of Dan's presentation and read a summary (below).



5 Steps to a DevOps Transformation

In order to be successful in the new digital economy, it is essential to continuously improve the quality, speed and efficiency of your own organization.

“In this session, we’ll walk through the five steps to transformational change that I’ve found to be important. These are really applicable to any continuously improving organization or any large amount of change in a system. Establish the vision. Create shared experiences. Educate, educate, educate. Find evangelists; Get feedback. I’ll elaborate on each item with methods I’ve used in real transformations at multiple companies. I’ll also describe how these all tie into the DevOps culture, which is really the transformation that’s occurring within the company.”

DevOps professionals primarily work in the tech and software world, creating new technology products, software, and other user services. They play a key role in developing new ideas for products and services and in managing the process of turning those ideas into reality.

Establish the vision

“A strong team can take any crazy vision and turn it into reality” – John Carmack

The vision creates empowerment

  • But I'm not a leader!!!
  • Bold
  • Inspiring
  • Actionable

  • Pathological – Power oriented
  • Bureaucratic – Rule oriented
  • Generative – Performance oriented

If your company values increased productivity, profitability, and market share then DevOps is essential. Even if your goals are non-financial, DevOps will enhance your ability to achieve those goals. The State of DevOps report soundly backs up these claims. More importantly, if your competition has already implemented DevOps and you haven’t, you are already behind. That’s how Walmart feels now that Amazon has built the world’s most efficient shopping platform.

Bad vision → bad outcomes

  • Biased for failure
  • No vision
  • IT-focused
  • Lack of clarity – JFK Moonrace
  • Not actionable

Find evangelists

“It is not about whether you call yourself a leader or not. It is about what you have to show to people as a leader. Leadership is contagious, you carry it and share it” – Israelmore Ayivor

The control mechanisms that are currently in place to manage your people and projects may not be suited for the DevOps world. You have to be willing to look at items that prevent agility, scalability, and responsiveness and change them. DevOps will provide agility, scalability, and responsiveness, so anything that hinders that process needs to be aligned with the new model.

You can't do it alone

  • Use anyone willing to help
  • Nurture this team
  • This team is a bellwether
  • Publicly praise team members

When your organization moves towards developing a DevOps culture, it signals to everyone who participates in the production and release of software that they have an equal stake in the success of the company. It's an all-for-one, one-for-all mentality that breaks down the communication barriers between teams and makes everyone accountable. Once DevOps roles and responsibilities are implemented, positive changes will occur, and everyone wins.

Create shared experiences

“Words are symbols for shared memories. If I use a word, then you should have some experience of what the word stands for. If not, the word means nothing to you.” – Jorge Luis Borges

Bringing people together by sharing

  • Two levels
    • Leadership
    • Organization
  • Equally important

Leadership teams need landmarks

  • Shared information model
  • Reference point
  • Provides inspiration
  • Repeat

To start down your path to DevOps success you need to build a proper DevOps organization that includes all the right team members. The size of your organization plays a big role in how granular you can be with your teams, but size doesn't really matter if you properly define the roles and responsibilities across the organization. The important thing is to make a commitment to the process and get started.

The core responsibility that needs to exist is the person who owns the entire DevOps process. This would usually be someone in a senior position. They are the keeper of the process and procedures and the guarantor of the delivery of DevOps value. I like to think of this person as the DevOps evangelist. Aside from this leader, you would need to establish, at a minimum, the following roles: Code Release Manager, Automation Expert, Quality Assurance, Software Developer/Tester, and Security Engineer.

Don't leave everyone else behind

  • Shared information model
  • Provides motivation
  • Leaders should be leading
  • How?

Educate,…

“An investment in knowledge pays the best interest” – Benjamin Franklin

Learn something new to build something new

  • Knowledge changes outcomes
  • Make it a priority
  • Make it available
  • Monitor it

Measure what matters

  • Accelerate by Dr. Forsgren
  • Westrum Culture Survey
  • User Surveys
  • 1:1 Feedback
  • CultureAmp

Everyone in the company is sailing on the same ship. If the tide goes up, so does the ship and everyone on it. If the tide goes down, so does the ship, and no single person on board is to blame.

Everyone learns differently

  • Online training
  • In-person classes
  • Newsletters
  • Conferences
  • Hackathons

Get feedback

“True intuitive expertise is learned from prolonged experience with good feedback on mistakes” – Daniel Kahneman


Aleksander Arsenovic
Junior Consultant

Aleksander is doing an apprenticeship as an IT specialist for system integration in our Professional Services department. When he is not at NETWAYS, he tinkers with his desktop PC and overclocks his hardware. He is always up for a good conversation.

Tick Tock: What the heck is time-series data? by Tanay Pant | OSDC 2019

This entry is part 5 of 6 in the series OSDC 2019 | Recap

 

The rise of IoT and smart infrastructure has led to the generation of massive amounts of complex data. In his talk at the Open Source Data Center Conference (OSDC) 2019, Tanay Pant tackled the question: Tick Tock: What the heck is time-series data? See the video of Tanay's presentation and read a summary (below).



Tick Tock: What the heck is time-series data?

Today we are going to talk about what time-series data is, how its workload differs from other kinds of data, and the different use cases where time series come up frequently. Then we'll talk about how CrateDB helps to work with machine data.

What are time series?

To answer this question, picture a sensor that sends measurements over a period of time. When we want to read or display this data, time becomes one of the axes. Compared to other workloads, this data is not written to the database as updates to existing rows; new time-series records are appended as inserts, and that is the primary write pattern. A time-series database basically introduces efficiencies through this temporal treatment, which lets us intuitively monitor all aspects of our operation over time.

Now that we have a view on time series, let's take a step back and look at different use cases and at the way the data is generated. You can categorize them in two different ways. The first one is IT monitoring, which can be described as the traditional use of time-series databases. Looking at its properties, there are typically tens or hundreds of metrics or sensors, with complex data and queries over data sets that are often several gigabytes in size. InfluxDB is a good example in this category.

The second is industrial sensor data, an emerging sector that has not been talked about much yet. Here, too, there are hundreds or thousands of sensors or metrics, and the real-time queries are under pressure because they must be able to access all those gigabytes of data. CrateDB is a good example in this category.

Let's start with the core technology and see what exactly CrateDB is and how it differs from other databases in this segment. CrateDB is a distributed SQL database that is well suited to handling industrial sensor data, due to its ease of use and its ability to handle many different kinds of data from thousands of different sensors. CrateDB supports distributed SQL with full-text search and coordinates the different nodes of a database cluster seamlessly with one another; write and query operations are automatically distributed across the nodes of the cluster. CrateDB also uses columnar, in-memory caches for time-series SQL performance, whereas purely in-memory time-series databases normally require all data to fit in main memory, which limits the amount of data that can be managed at any given time.

One way to get time-series performance without restricting the data volume is to keep partially filled in-memory caches at each node, so that the caches tell the query engine whether there are any matching records on a node and where those records are. Distributed query processing also contributes to fast performance, together with a query planner that makes wise decisions about which nodes are best suited for execution. On top of that, CrateDB offers machine-data functions and a cloud-native design that makes it run seamlessly in the cloud. Finally, we look at a few advantages of CrateDB: installation is simple, and you can start an instance with a single line on the terminal or in Docker. It has a distributed query engine that supports full-text queries, it copes well with economical hardware and instances, and the architecture is easy to scale.
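To illustrate the “single line on the terminal or in Docker” claim, here is a hedged sketch, not taken from the talk, of how a local CrateDB instance can be started with Docker Compose; the rough terminal equivalent would be docker run -p 4200:4200 crate.

# docker-compose.yml – hypothetical single-node CrateDB instance for local testing
services:
  cratedb:
    image: crate            # official CrateDB image on Docker Hub
    ports:
      - "4200:4200"         # HTTP API and admin UI
      - "5432:5432"         # PostgreSQL wire protocol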

Saeid Hassan-Abadi
Junior Consultant

Saeid started his apprenticeship as an IT specialist for system integration in September 2019. Born in Iran, he studied industrial engineering in his home country. He loves working with computers and enjoys picking up new knowledge. His hobbies are listening to music, doing sports and spending time with his friends.

Fast log management for your infrastructure by Nicolas Frankel | OSDC 2019

This entry is part 3 of 6 in the series OSDC 2019 | Recap

 

Nicolas Frankel is a Developer Advocate with 15+ years experience consulting for many different customers, in a wide range of contexts. “Fast log management for your infrastructure” was his topic at the Open Source Data Center Conference (OSDC) 2019 in Berlin. Those who missed the talk back then now have the opportunity to see the video of Nicolas’ presentation and read a summary (below).


We are proud to announce that Nicolas Frankel is in our speaker lineup this year, too. We are looking forward to his talk: “Real Continuous Deployment of JVM applications”.



Fast log management for your infrastructure

“Fast log management for your infrastructure”, well, that is one way to get OSDC visitors excited. Nicolas Frankel signed up with that one, and he did not disappoint. The issues he was tackling are issues produced by optimization. That being said: do you think about the logs when it comes to migrating your application to reactive microservices?

Before we get to all that, Nicolas takes a little detour through programming logic and how logging works, and he also points out some misconceptions about how things are done and how they work. Like, for example, his so-called “[…] root of all evil”:

LOGGER.debug("Cart price is now {}", cart.getPrice());

He poses the question: who believes that if the log level is set above debug, this statement will simply be ignored? That's what you would expect, however it is not the case: even when debug logging is disabled, the arguments are still evaluated, so cart.getPrice() is called for nothing. In a small demo section he gives further insight into the topic from the perspective of a software developer.

He ends his demo explanation with the statement that, from the developer's point of view, one should keep in mind that logging is ultimately physical I/O. Directly afterwards he notes that developers do not like to think about dealing with the physical world, and he then goes on about the respective storage options and the write times of SSDs, HDDs or NFS, which should be taken into account.

Having tackled some issues already, Nicolas keeps switching back and forth between the perspective of a software developer and that of an operator. He puts a lot of emphasis on these changes of perspective to make sure that everyone involved starts to understand where the issue lies, and whether there is an issue at all.

Take, for example, the writing process and the opening and closing of streams for single log statements. It would be great if the stream could stay open continuously and log statements could be written until the stream is finally closed. But in most cases, and by default, logging is blocking. Most frameworks allow asynchronous logging, yet there is no universal right or wrong here, and it doesn't have to be a software development mistake or bad infrastructure either.

He dives deeper into asynchronous logging, because if you want to use it, you have to understand it: queue sizes, discarding thresholds, the difference between blocking and dropping messages, everything. Nicolas also covers some logging basics, like metadata and which of it is especially important; the most essential being timestamp, log level, line number and more. You may ask: why? Because some metadata is more expensive to obtain than other.
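As a concrete illustration of those knobs (not taken from the talk), here is a sketch of an asynchronous appender using Log4j 2's optional YAML configuration format; Logback exposes the same ideas as queueSize, discardingThreshold and neverBlock in its XML configuration. The appender and attribute names are real Log4j 2 ones, but treat the exact layout as an assumption and check it against the documentation.

# log4j2.yaml – hypothetical async logging setup (needs the Jackson YAML dataformat on the classpath)
Configuration:
  status: warn
  appenders:
    File:
      name: FILE
      fileName: /var/log/app/app.log
      PatternLayout:
        Pattern: "%d %p %c{1.} [%t] %m%n"
    Async:
      name: ASYNC
      bufferSize: 4096      # the queue size: how many events may pile up
      blocking: false       # when the queue is full, drop to the error appender instead of blocking the application thread
      AppenderRef:
        ref: FILE
  Loggers:
    Root:
      level: info
      AppenderRef:
        ref: ASYNC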

After some more detours through log aggregation and common pitfalls, like searching in logs or mandatory metadata, we get to a well-known application stack in the world of logging: the Elastic Stack.

He explains the basic architecture of the Elastic Stack and how its applications work with each other; Filebeat and Logstash in particular take the spotlight during this part. Step by step he works his way through an abstraction of the path a log takes from Filebeat to Logstash until you end up with the kind of JSON document you are familiar with. Then he tackles common misunderstandings like “Why do I need Logstash at all?”, before he goes on to how logging is done at Exoscale.
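For readers who have never seen that first hop, here is a minimal, hypothetical filebeat.yml that ships log files to a Logstash instance; the host name and paths are placeholders, not taken from the talk. Logstash would then parse the lines into the JSON documents mentioned above.

# filebeat.yml – hypothetical example of shipping logs to Logstash
filebeat.inputs:
  - type: log                              # tail plain log files
    paths:
      - /var/log/app/*.log

output.logstash:
  hosts: ["logstash.example.com:5044"]     # Logstash beats input, default port 5044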

They are using syslog-ng instead of Filebeat, basically just because Filebeat was not ready for production when they started. Then comes a regular Logstash, and before the data reaches Elasticsearch there is a Kafka cluster in between. The reason they use Kafka is that it acts as a decentralized buffer: with Logstash pulling data out of it, there is a lower risk of dropping data than when buffering towards Elasticsearch directly, because there are not multiple nodes writing at once.

At the end Nicolas summarizes his talk with six short statements, or maybe even lessons, for log management. If you want to learn about them from Nicolas himself, head over to the video above, or experience him live.

Alexander Stoll
Junior Consultant

Alexander is a born organizer and recently started an apprenticeship in Professional Services. When he is not at NETWAYS, his weekly routine looks like this: sports on Monday, Tuesday and Wednesday, pen and paper on Thursday, and a weekend without plans. He is happy to skip the sports part every now and then.

Monthly Snap March 2020

As most of you have already noticed, the NETWAYS family is officially working from home.
This is a very strange experience for most of us, but we are adapting to the new situation and are grateful to be able to work from home. As Leonie wrote in her blog NETWAYS @home, we have a great Internal Support department that helps with any issue we may have, and of course Bernd sends us regular updates.
In case you missed it: our NWS team offers a #stayathome special: Rocket.Chat, Jitsi and Nextcloud free for 30 days! Check it out! These tools are very helpful for teamwork, telephone conferences and more, and they help us stay in touch and cooperate despite being apart. It might be just the thing for your team!
And although business might not be quite as usual, we are still there for our customers! You can write to us, phone us, order shop products and more.

So, let us see what else NETWAYS wrote about in March!

 

NEWS from our Shop

STARFACE im Home-Office! Read Natalie's blog on how to use Starface while working from home and learn that Starface UCC premium licences are free until the end of May! Further, Natalie provided information on the HW group STE2: Netzwerk-Thermometer Set zum kleinen Preis. Just the thing for your server room. Also read her second blog on the subject: HW group STE2: Netzwerk-Thermometer Set – Teil 2. And Nicole shared the advantages of the STE and STE PoE for those of us who don't need the extra features of the STE2.

DevOpsDays Berlin

Julia informed us of DevOpsDays Berlin: Call for Papers open. Get involved! There are three different formats for talks. Read about them and send us your proposal for a talk in Berlin in October!

 

Icinga for Windows

Despite the home-office situation the webinars are still taking place! Christian gave us an overview of topics and dates in the new webinar series Icinga for Windows – Webinar Kalender. Which leads us to Alexander's blog on his part in creating Icinga for Windows with PowerShell, Was war, ist und wird sein – ein Azubi & PowerShell. He is truly impressed with Christian's work and has learnt a lot in the process.

 

Techie topics

Artur wrote about his first experiences with Docker in his blog Docker – ein erster Eindruck! In Better Late than never – Graphite-Web-Installation unter Debian 10 – Part 1, David kept his promise to his blog readers and shared a thorough how-to. Why should you test tmux? Christoph gave us some reasons in tmux – terminal multiplexer. And Daniel's blog Jitsi Best Practice und Skalierung helps with installing and using Jitsi. Julia announced that NWS now hosts Kubernetes in NETWAYS Managed Kubernetes. Off into the world of containers! The next blog is hilarious and cool at the same time: read Tobias' Icinga Web Themes coming soon – Bayerisch, Fränkisch, Österreichisch!

 

#lifeatnetways

In our blog series NETWAYS stellt sich vor, new colleagues share a bit about themselves. Read about our apprentices Natalie and Nathaniel.

Catharina Celikel
Office Manager

Catharina has been supporting our Finance & Administration department since March 2016. Born in Norway, she is a trained foreign-language correspondent for English. As Office Manager she not only takes care of the day-to-day business but also handles a large part of our translations. Privately, the self-confessed bookworm is happiest out and about on her bike.

Ansible – should I use omit filter?

When we talk about Ansible, we increasingly talk about AWX or Tower. This tool comes in handy when you work with Ansible in an environment shared with colleagues or multiple teams.
In AWX we can reuse the playbooks we have developed and share them with our colleagues through a GUI platform.

Often we need a bit of understanding of how a playbook is designed or whether a variable needs to be defined for a particular play. This can be much trickier when sharing templates with people who are unaware of your work.

This is where the omit filter can be used. The easiest way to explain it: if the variable has no content or isn't defined, the parameter is omitted.

The following example is an extract from the documentation:


- name: touch files with an optional mode
  file:
    dest: "{{ item.path }}"
    state: touch
    mode: "{{ item.mode | default(omit) }}"
  loop:
    - path: /tmp/foo
    - path: /tmp/bar
    - path: /tmp/baz
      mode: "0444"

In AWX we can create surveys; these are great for asking a few questions and providing a guide on how to use the underlying play. But often we need to choose between two variables to decide which of two actions should happen, depending on which variable is used. If we leave one of the two empty, Ansible will still see the empty one as defined, just with “None” (Python's null) as its content.

With the omit filter we can remove the parameter from the play, so if the parameter is empty it won’t be used.

The following code uses the icinga2_downtimes module, which can create downtimes for hosts or hostgroups, but the two parameters cannot be used at the same time. In this case I can show the variables for hostnames and hostgroups in the web interface. The user fills in one of the variables, the other one is removed via omit, and therefore no errors occur.


- name: schedule downtimes
  icinga2_downtimes:
    host: https://icingaweb2.localdomain
    username: icinga_downtime
    password: "{{ icinga_downtime_password }}"
    hostnames: "{{ icinga2_downtimes_hostnames | default(omit) }}"
    hostgroups: "{{ icinga2_downtimes_hostgroups | default(omit) }}"
    all_services: "{{ icinga2_downtimes_allservices | default(False) }}"

The variables are then shown on the template in the AWX GUI.

This filter can be used in various other locations to provide optional parameters to your users.

If you want to learn more about Ansible, check out our Ansible trainings or read more in our blog posts.

Thilo Wening
Consultant

Thilo started at NETWAYS with an apprenticeship as an IT specialist focusing on system administration and, having passed his exams, now actively supports his colleagues in Consulting. In his free time he heads for the vertical and builds his muscles bouldering. Like a true pro he prefers to do that out in nature, of course, and only goes to the climbing gym in exceptional cases.