
NETWAYS Blog

Foreman Birthday Event 2024 – Recap

On Monday, 15.07.2024, I met up early with my colleagues Lennart and Matthias to travel to Garching for the Foreman Birthday Event. This year it was again hosted by ATIX in a conference room next to their office, so thanks to all of them for having us there, especially to Bernhard! And also thanks to the community members who joined us! It was a nice mix of the German community and some international guests from Red Hat, mostly familiar faces but also some new ones. It is great to see the community still growing even after 15 years, and just as great to meet all the people who have become friends during this time.

The morning

After some time to mingle we entered the room, and Bernhard gave the official welcome before handing over to his colleagues for the first talk.

The first talk, “Automated Provisioning with SecureBoot and Foreman” by Jan Löser and Markus Reisner, was a great one, starting with some insights into SecureBoot in general. They then showed the current state of SecureBoot handling in Foreman and why the implementation limits real-world scenarios. ATIX is working on a complete overhaul of the feature, and thanks to the community’s input it is taking shape. The new feature will be backwards compatible, so no changes are needed if you do not want to use it. This is great, especially as the first step will require some manual work to set everything up. But the plans for an automated setup are there, and knowing the guys, it will not take too long to get that implemented, too. With this in place, using SecureBoot with Foreman will be a much better experience!

As the second speaker, Martin Alfke told us the story of “How the Foreman Community enables new contributors”. While he has been working with Foreman for a long time at different customers, he had never joined the community in a deeper sense. After they developed the Hiera Data Manager, customers asked for an integration with Foreman, and with the help of the community they delivered! So now you can debug the Hiera data of your Puppet environment not only much more easily, but also directly from the Foreman UI.
On the day of the event, Martin got help with the last missing piece, the Foreman installer, and was encouraged to join the official Professional Services, starting with a blog post.

After a short break, Evgeni Golov introduced the audience to the “Foreman build test environment: migration from Jenkins to GitHub Actions”. He showed where the project stood before the migration started, the current state, and what else is planned. He also encouraged the plugin developers in the room to make use of it. I think other projects can learn from it as well, so take a look at his slides.

The afternoon

ATIX invited us to have lunch in the restaurant of the office complex, which offered a nice selection of good food. I used the time for a nice mixture of private and business conversation with Martin and some of the ATIX folks. It is always great to realize how well we work together in the community, even though we are, in a way, competitors.

Then the stage was mine to ask the question “Foreman – a complete lifecycle management tool for desktops?”. A bit out of my comfort zone, it was not a deeply technical talk with many demos, but a lighter one sharing my experience based on some use cases. Starting with a simple virtual desktop, over our training setup, to the proof of concept we did for a customer project, I looked into the challenges we ran into and how we solved them where we already did. I came to the conclusion that Foreman is ready to manage Linux desktops with only a little work, but Linux desktops are not yet ready to be automated without putting in some more work.

The fifth talk, “Foreman documentation: Helping users figure things out since May 15, 2019” by Maximilian Kolb and Aneta Šteflová Petrová, walked us through the progress the Foreman documentation has made since Red Hat started upstreaming its documentation. The anecdotes and examples given by the two technical writers showed their great commitment to the project. One reason they pointed out for the project being seen as a real success story is how closely the developers and technical writers of the different companies involved work together.
Afterwards we had a short discussion on how to improve further and how to reach the goal of making the new documentation the default one.

And last but not least, Ian Ballou joined us remotely from the US to show the “New Feature: Pushing Containers Into Katello”. It was great having him give this demo even without being on-site. He really dived deep into this new feature and answered all the questions the audience had afterwards. So if you previously needed a separate container registry to push your images to and then sync them into Katello, this will make your life easier by allowing a direct push to Katello’s registry.

Cake time

After the talks and a short feedback round, some unfortunately had to leave, but for the rest of us it was cake time. Bernhard hesitated to cut the cake, so this job became my honor. After a quick count I had to cut it into 20 pieces, which immediately started the discussion on how to do this best, with everyone offering suggestions. With that problem solved, we split into groups: some still discussing topics from the talks, some talking about other aspects of the project or IT in general, and others just having a friendly chat. About an hour later it was time for us to leave to catch our train back to Nuremberg, but we were not the last ones.

Me cutting the cake

It was again a great event, so thanks to everyone who made it possible. I have already heard feedback such as having an introduction round, more hands-on demos, more time for open discussion, and a more hybrid format. I am not sure which of these we can implement next year, but I want to make it a similar success again in 2025!

Other celebrations

But we were not the only ones celebrating Foreman’s 15th birthday. Christian Stankowic made a special on the podcast FOCUS ON: LINUX consisting of two episodes. Episode 110, “15 Jahre Foreman” (in German), has Christian, Evgeni, Bernhard and me talking about the project and trying to include every possible pun. Episode 111 is an interview with Ohad Levy about the project’s history, along with his answers to some questions.

P.S.: As a side note for Ian: I promised to eat a slice of the birthday cake in your name, so the second piece was also delicious and replaced my dinner! 😉

Dirk Götz
Principal Consultant

Dirk is a Red Hat specialist and works at NETWAYS in consulting for Icinga, Puppet, Ansible, Foreman and other systems management solutions. He previously worked as a senior administrator at a statutory pension insurance institution, where he was also responsible for training the apprentices, as he is now at NETWAYS.

stackconf 2024 | Highlights from Day 2

Yesterday’s evening event was full of great food, drinks, and conversation. I have to admit the gambling had no appeal to me, but many seemed to enjoy it, as most of the socializing happened around the roulette table. Of course, the attendees could not play for real money, but the one who won the most chips will be handed a big Lego kit today as a prize. So while most were gambling, smaller groups gathered to talk. I had a great conversation with Sebastian from our Marketing team, who attended his first NETWAYS conference, actually his first tech conference, and he seemed to really enjoy it.

 

The Morning

Today we started directly with the talks, and the first one I attended was “How to hack and defend (your) open source” by Roman Zhukov. Using numbers from surveys and scientific articles, he illustrated how important Open Source is for our industry, but also how vulnerable it is. A nice, compact resource he pointed to is Endor Labs’ Top 10 Risks for Open Source, but he included much more. So if the topic is of interest to you, make sure to grab his slides, bring some time (and perhaps coffee) with you, and follow all the links included. His first recommendation for defending was the OpenSSF Scorecard, but he also recommended other important best practices and tools which should help verify whether a project follows a secure development lifecycle.

The second talk of the day was one I was very interested in, so an additional, more detailed blog post is planned for the future. “Confidential Containers – Sensitive Data and Privacy in Cloud Native Environments” by Magnus Kulke was another one about security, and I am very happy about the increased awareness and coverage of this topic. He introduced the topic nicely and slowly to get everyone in the audience engaged. Then he focused on the key concepts, trust, integrity and remote attestation, and their implications for a cloud environment and confidential computing. After showing this using virtual machines as an example, he covered why this does not port over to containers so easily. But he also showed the initial ideas for how the CoCo project wants to solve this, as cloud native is an interesting platform for it.
Unfortunately, because of technical difficulties he could not run the demo. But he made the best of it and talked us through what would have been shown and the takeaways.

Next, Daniel Hiller told us to “Squash the Flakes! – How to Minimize the Impact of Flaky Tests”. He nicely introduced the topic and its importance to an audience that was not completely aware of it. Flakes are failing tests in your CI/CD pipeline that are mostly false positives; they slow down development, waste resources and have other negative impacts on exactly the things CI/CD is supposed to solve. Lost trust in testing is also a serious issue. To minimize the negative impact, he recommended putting a flaky test into quarantine as early as possible, but only for as long as it takes to fix it. There were also some tool recommendations, among them ci-health, ci-search and testgrid. The talk concluded with a nice summary, the main sources of flakiness Daniel has experienced, the key takeaways, and future plans based on them.
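To give an idea of what quarantining can look like in practice, here is a minimal sketch using pytest. It assumes the pytest-rerunfailures plugin and a custom quarantine marker registered in pytest.ini; the test names and URL are made up for illustration and are not from the talk.

```python
# Minimal sketch: retrying and quarantining flaky tests with pytest.
# Assumes the pytest-rerunfailures plugin is installed and a custom
# "quarantine" marker is registered in pytest.ini; names are illustrative.
import pytest
import requests


# Retry a known-flaky test a few times before reporting a failure,
# so a single false positive does not block the whole pipeline.
@pytest.mark.flaky(reruns=3, reruns_delay=2)
def test_health_endpoint_eventually_responds():
    response = requests.get("http://localhost:8080/health", timeout=5)
    assert response.status_code == 200


# Alternatively, move the test into quarantine and exclude it from the
# blocking run with `pytest -m "not quarantine"` until it is fixed.
@pytest.mark.quarantine
def test_order_processing_under_load():
    ...
```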

“The challenges of Platform teams” by Marco Pierobon was the last one before lunch. The first challenge to tackle is DevEx (Developer Experience), which covers not only simplicity of use but also a lack of feedback. The second one is the business side, which lacks insights and struggles with missing trust and with assumptions. The third is technology, and the fourth is the platform teams themselves, who need to handle a rapidly changing landscape that can lack maturity and carries the risk of vendor lock-in. For all of these he tried to provide tips and tricks to overcome them. For example, establishing a developer portal empowers developers to do what they need themselves and allows for easy, continuous feedback. A roadmap can improve the business experience and align expectations. Evaluating techniques and practicing product lifecycle management can help tackle the technological challenges. And vendor-agnostic technologies help avoid vendor lock-in. Investing in the right tools and technologies and cultivating relationships will help the platform teams. From there he went further into the details, which made for a nice deep dive.

 

The Ignites

After grabbing some food (ok, some may say too much food), the attendees were back for the ignites. But before they started, the winner of the evening event’s gambling was announced and the Lego DeLorean was handed over.

“Distributed Tracing using OpenTelemetry and Jaeger” by AJ Jester was the opening one. The ignite gave a comprehensive introduction to implementing traces with OpenTelemetry and then analyzing them with Jaeger.
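As a rough idea of what that looks like in code, here is a minimal Python sketch. It assumes the opentelemetry-sdk and OTLP exporter packages are installed and a Jaeger instance accepts OTLP on its default gRPC port; the service and span names are made up for illustration.

```python
# Minimal sketch: emitting traces with OpenTelemetry and sending them to
# Jaeger via OTLP. Assumes opentelemetry-sdk and
# opentelemetry-exporter-otlp-proto-grpc are installed and Jaeger listens
# for OTLP on localhost:4317; names are illustrative.
from opentelemetry import trace
from opentelemetry.sdk.resources import Resource
from opentelemetry.sdk.trace import TracerProvider
from opentelemetry.sdk.trace.export import BatchSpanProcessor
from opentelemetry.exporter.otlp.proto.grpc.trace_exporter import OTLPSpanExporter

provider = TracerProvider(resource=Resource.create({"service.name": "checkout"}))
provider.add_span_processor(
    BatchSpanProcessor(OTLPSpanExporter(endpoint="http://localhost:4317", insecure=True))
)
trace.set_tracer_provider(provider)

tracer = trace.get_tracer(__name__)

# Each request becomes a trace; nested spans show where the time is spent,
# and Jaeger can then visualize and analyze them.
with tracer.start_as_current_span("handle-order") as span:
    span.set_attribute("order.id", "1234")
    with tracer.start_as_current_span("charge-card"):
        pass
```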

The second one was “Swiss knife for Go debugging with VSCode” by Ivan Pesenti. From debugging in general, over the Go debugger Delve, to the detailed integration in VSCode, all in 5 minutes! Well done, Ivan, well done!

Roman Zhukov finished the ignites with “Security of Open-Source AI: is there any difference?”. He gave a good sneak peek into the topic, but I think it would be worth covering in a full talk.

 

The Afternoon

“Orchestrating Resilient Data: Harnessing the Strength of Kubernetes with Operators” by Gregor Bauer opened the afternoon sessions. After a short introduction to Kubernetes and the challenges databases face in this world designed to be stateless, he dived deep into the extensions that allow databases to run nicely on the platform.
As an example he of course used the NoSQL database Couchbase, as he works for them. It was interesting to hear where we all use tools that rely on this database, as he showed with all the customers he came in touch with while travelling to stackconf. Couchbase was designed to be cloud native, and from all the details and the demo shown it seems to do a good job.

To stick with the topic of databases, I stayed in the room for “From a database in container to DBaaS on Kubernetes” by Peter Zaitsev. He tackled the same topic differently, showing why Docker is not a good solution for databases and why Kubernetes was not designed for stateful workloads. But even when taking a different route, the goal and the solution were the same: in his case, Helm charts for the day-1 operations and Operators for the day-2 tasks, for example those provided by Percona for MySQL, PostgreSQL and MongoDB.
He also criticized the current state in which major cloud providers, database vendors and multi-database vendors offer proprietary solutions while Open Source ones are missing. And license changes have made the situation worse! Then there is the “Hotel California” compatibility, a term he used as a persiflage of the Open Source compatibility promised by many vendors, which lets you migrate from Open Source to their solution but makes it hard to check out again. It is good to hear that Percona’s vision is to change this.

stackconf concluded for me with Philip Miglinci and “Rethinking Package Management in Kubernetes with Helm and Glasskube”. He briefly introduced the topic of package management before showing the different solutions and their advantages and disadvantages. With Glasskube he wants to solve the disadvantages of the other solutions.
With his demo he bravely trusted the Wi-Fi and went from a freshly started Minikube, over bootstrapping Glasskube, to installing some packages both in a graphical way and via the CLI. What we saw looks very promising, so I recommend having a look at it even if it is still in beta.

 

Safe Travels

So thank you to all the speakers and sponsors, our Event team, but also all the attendees who made the conference a huge success. Safe travels to all of them, and see you at the next conference. To all the other readers, I hope you enjoyed my conference coverage, and perhaps I sparked your interest in joining a conference in the future. I am very happy that conferences are establishing themselves again after the pandemic.

 


stackconf 2024 | Highlights from Day 1

When our Event team asked for someone to join stackconf this year, I happily volunteered, and so it is me again writing our conference blog. I think the last time I joined our conference in Berlin it was still its predecessor, so this is my first stackconf. But if you know our Event team, you can expect a well-organized conference and great talks. And the talks were what made me excited already in advance, as they are not on my typical topics, so I can widen my horizon.

 

The Morning

The conference started as always with a warm welcome from Bernd and some minutes to grab a coffee and find the talk you wanted to hear. The first talk I was interested in was “Why is there no new Release? Nobody pays for the basics :-(” by Schlomo Schapiro, whom I have known as a great speaker and nice guy for many years. His talk was not only about ReaR (Relax and Recover), but also about the recovery of Linux servers in general and its importance, especially of having a plan and a workflow for the worst case. ReaR is more a backup and recovery automation and workflow improvement than a backup software, and from my experience using it I can agree with this. If you have not done so yet, have a look at the tool, and once the talk is online, watch it for some nice insights.

The second one was “Buzzing across the eBPF Landscape and into the Hive” by Bill Mulligan. He told us that eBPF makes the Linux kernel programmable in a secure and efficient way. Why this is needed he showed in a nice pair of comic strips, depicting the journey from an application developer needing a new kernel feature to that feature landing in distribution kernels, which eBPF can shorten from several years to a much smaller time span.
With the basics explained, he jumped to the benefits for the Kubernetes world and then to real-world examples. One example was Cilium, which is becoming the networking and network security solution for Kubernetes. Another example was observability with Hubble and Tetragon, which are based on Cilium. But he did not only tell us about use cases, he also outlined what is not a use case: for example, a customer-facing interface will never care about eBPF.
Finally, he gave a wider overview of related projects, even mentioning eBPF on Windows, and talked more about the community.
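For a taste of what “making the kernel programmable” means in practice, here is a minimal sketch based on the classic BCC hello-world example, not something shown in the talk. It assumes the bcc Python bindings and root privileges, and the exact syscall symbol can differ between kernel versions.

```python
# Minimal sketch of loading an eBPF program from Python with BCC.
# Assumes the bcc package is installed and the script runs as root;
# based on the classic BCC hello-world example.
from bcc import BPF

# A tiny eBPF program, compiled and loaded into the running kernel.
# BCC auto-attaches functions named kprobe__<kernel function> as kprobes,
# here on the clone syscall (the exact symbol can vary by kernel version).
program = r"""
int kprobe__sys_clone(void *ctx) {
    bpf_trace_printk("process cloned\n");
    return 0;
}
"""

b = BPF(text=program)
b.trace_print()  # stream the kernel-side trace output to the terminal
```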

AI is everywhere nowadays, and so are talks about it. But I think “Generative AI Security — A Practical Guide to Securing Your AI Application” by Manuel Heinkel and Puria Izady was the first one I have seen focusing on security. Manuel introduced the topic by explaining how AWS defines responsible AI and that security is one of the required values. The insights he gave into the scope, lifecycle, data flow, and top security risks of generative AI were very interesting. While waiting for the recording to become available, you can already take a look at his recommendations, the OWASP Top 10 for Large Language Model Applications and MITRE ATLAS.
Puria jumped in for the more complex examples of vulnerabilities and mitigations. One library he introduced to the audience to help you choose an LLM for your own use case was fmeval. It is of course AWS-focused, but it can be extended to other LLMs, which sounds interesting.
Overall an amazing talk which I could only cover in a very basic way because of the many details mentioned, but one I can really recommend watching from the conference archive.

The last one before lunch was Vishwa Krishnakumar with “Scaling Up, Not Out: Managing Enterprise Demands in a Growing SaaS Startup”. The talk was different in that he shared his experience as the founder of a successful company. He took us from day 0, where you make the first decisions, to product-market fit, where you are an established company. He focused on engineering and the events that can make you stumble. Furthermore, he gave tips on which things you should tackle from day 1, as doing so will pave your way. This included when to say no to specific features, something not everyone has learned!

 

The Ignite Talks

Having the ignites after lunch is always great. The short talks offer the chance to include different topics, but the first one, “Practical AI with Machine Learning for Observability” by Costa Tsaousis, added great detail to two topics covered by other talks: AI and observability. Automatically calculated anomaly rates are a great thing if you analyze data. So now I have an even harder time deciding between talks tomorrow, as it was a good preview of his full talk on Netdata.

Next was Natalie Serebryakova with “Is Rust good for Kubernetes?”. She compared Go, which is used for most parts of the Kubernetes ecosystem, with Rust and made good points for using the latter.

The last one for today was “The DevOps Driving School: What comes after DevOps?” by Schlomo Schapiro because, as he explained, every conference needs a DevOps talk. He made a point for learning DevOps in production, as it should be normal for everyone, and also covered some capabilities you need to build to get to this state.

 

The Afternoon

Dotan Horovits I have already known for several years, as he is also a frequent speaker at Cfgmgmtcamp. He always gives great talks on topics that matter, so my expectations were high for “Metadata: The Secret Sauce for Full Observability”. His definition of metadata is data about data, which gives your data (in his case telemetry data) context. He also made a great point about structured log events, which come with metadata included and provide consistency. Those can then be enhanced with custom metadata to create meaningful events.
The next kind of telemetry data he focused on were traces, which show the flow of a request and really require context, followed by metrics, which are by default just numbers without much context, so this kind of data really needs enrichment. Having metadata on all these different kinds of telemetry data enables correlation, and that is what we need to break down data silos. OpenTelemetry is a project that can help here through standardization.
So it was a good look into a topic which, judging by the show of hands, is not yet solved in many environments, and the audience seemed to enjoy his talk, too.
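To make the difference concrete, here is a minimal Python sketch of a structured log event that already carries metadata instead of a free-form text line; the field names and values are purely illustrative and not from the talk.

```python
# Minimal sketch: a structured log event with metadata instead of a plain
# text line. Field names and values are purely illustrative.
import json
import logging
import time

logging.basicConfig(level=logging.INFO, format="%(message)s")
logger = logging.getLogger("checkout")


def log_event(message, **metadata):
    # Emit the event as JSON so downstream tooling can parse it and
    # correlate it with traces and metrics via shared metadata fields.
    event = {"timestamp": time.time(), "message": message, **metadata}
    logger.info(json.dumps(event))


log_event(
    "payment authorized",
    service="checkout",
    environment="production",
    trace_id="4bf92f3577b34da6",   # same id as the active trace span
    customer_region="eu-central",
)
```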

Our own Daniel Bodky gave a talk I did not want to miss, “Towards Standardized Platforms: How the CNOE Project Can Help”. After a few questions to get to know his audience, Daniel started by defining a platform, and with it what his talk was about. Afterwards he drew parallels between DevOps and Platform Engineering, and I liked his message of tearing down walls between teams. But his talk was not only about culture but also about tooling, which is where the CNOE project comes in.
In detail, he went into idpbuilder, a tool which combines many components from the Kubernetes ecosystem to spin up a complete internal developer platform. It easily created a nice-looking, feature-rich platform. And making a completely from-scratch, Kubernetes-based setup look easy is a job well done! Based on the existing examples, Daniel added a custom one which he had created in about half an hour, making it look even easier.
This showcase can really help convince managers that having your own platform is not as complicated and cost-intensive as they may think, and that it can improve productivity in your engineering team.

I could not decide between the next two talks, so I simply stayed in the room to see Alayshia Knighten talk about “Unleashing Potential Across Teams: The Power of Infrastructure as Code”. First she asked why we should care about IaC and answered the question by showing the benefits for different teams. Then she showed a demo case as a practical example of the power of IaC.
If I had not already been convinced, I would be now after her talk.

And last but not least, I listened to “Insights into Managed Service Provision: A STACKIT Retrospective” by Patrick Koss. Patrick started his talk with the motivation for providing a managed service and followed with how the decision for Kubernetes was made. After this he went into details about, for example, Kubernetes operators. Hearing which challenges they faced and how they solved them was quite interesting, especially as I would call STACKIT a German success story in a market where it often feels like US companies dominate.

 

The Evening

But stackconf is not only about the great talks, it is also about socializing. While this happens during breaks, lunch and even earlier over breakfast, it reaches its peak at the evening event. So I will leave you now and head to Umspannwerk Ost to enjoy the evening, but I will be back tomorrow with the recap of day 2! And of course, I will make sure to include a paragraph about the evening event!

 


Foreman Birthday Event 2024 – Save the Date

I want to spread the word that on 15.07.2024 we will have our first on-site Foreman Birthday Event again after many years of online conferencing. This year’s host is ATIX, so it will take place in their conference room at Parkring 4, 85748 Garching, near Munich, Germany.

 

What is the Foreman Birthday Event?

In case you missed our last events: the Foreman Birthday Event is a small conference with a mix of talks and networking. The event was born out of the idea of celebrating the birthday of the Foreman project, started by Greg Sutcliffe when he was the project’s community manager. In the first years we had multiple events, but the only one still taking place is the one organized by ATIX and NETWAYS.

 

How to Participate?

Talks have not been announced yet, as the Call for Papers is still open. So everyone can still propose an interesting talk focusing on the use and development of Foreman, Katello, or Pulp. As I am not hosting this year, I already sent one in, and you can also suggest your own at cfp@atix.de. As a speaker you will be invited by ATIX to a dinner the evening before the main event. So plan your trip to perhaps combine some sightseeing in Munich with the event!

If you want to visit the event just for the talks and networking, please register in the meetup group, as space and cake are limited.

Still undecided? Then take a look at the recaps of the last events in 2023 and 2021, and you will find some great talks which will hopefully convince you.

 

How to stay up to date?

Save the date (15.07.2024 10:00-16:00) and location (Parkring 4, 85748 Garching), and spread the word further! Updates from ATIX will be added to the Foreman Community, so follow the thread there and stay in touch!


End of Life of CentOS Linux 7 – What does that mean for me?

Some admins have probably had 30 June 2024 marked in their calendar for a long time, because that is when CentOS Linux 7 reaches its “End of Life”. But users of Red Hat Enterprise Linux 7 should also give it some thought, because on that day it leaves the “Maintenance Support 2 Phase” as well and moves into the “Extended Life Cycle”.

What does that mean?

“End of Life” is relatively easy to explain: from this date on, support for the product is discontinued and there are no more updates. In particular, the lack of security updates clearly speaks against continuing to run it.

The “Extended Life Cycle” is a bit more complicated. On the one hand, Red Hat also stops providing updates here, but continues to support existing systems and offers migration assistance. On the other hand, Red Hat offers a support add-on that can be purchased, which then provides access to critical and important security updates and selected bug fixes, along with other support terms. So anyone who depends on it for whatever reason can ensure safe continued operation for a not exactly small amount of money and thereby postpone the migration further. For most users, however, it effectively amounts to the same as “End of Life”.

What should I do now?

The usual way forward with software is an upgrade to the next version of the operating system. In this case, however, it gets significantly more complicated, because with the discontinuation of CentOS Linux there is no longer a clear way forward, but rather a really tricky fork in the road!

If you continue to put your trust in the CentOS project, the direct successor is CentOS Stream 8. However, it is no longer a downstream of Red Hat Enterprise Linux, and it is no longer supported for the complete support lifecycle; instead it reaches EoL at the end of the “Full Support” phase, which corresponds to the release of minor version 10 and therefore 5 years instead of 10! That puts the EoL of CentOS Stream 8 at 31 May 2024, i.e. even before that of CentOS Linux 7. So this path should be taken quickly and then continued directly to CentOS Stream 9.

Some of you should also consider whether your own requirements justify the switch to enterprise support. Red Hat not only supports this switch, it also provides the corresponding tools. This allows the upgrade to be postponed if needed, although after the switch I would actually recommend taking the step to RHEL 8 right away, or even considering going directly to version 9.

Another alternative with direct enterprise support but also unlimited free availability is Oracle Linux. Regarding migration, pretty much the same applies here. But with this switch I recommend looking closely into the peculiarities of Oracle Linux. I have had mixed experiences in the past. On the one hand, we have customers who run entirely on Oracle Linux, some with quite large environments, and do very well with it. On the other hand, I have had problems with additional repositories and the software they contain, because Oracle sometimes sets different compiler flags and also rebuilds EPEL (Extra Packages for Enterprise Linux), but not in its full scope.

Then there are the forks that came into being because of the discontinuation of CentOS Linux. In my view, they all still have to prove how they cope with Red Hat’s changed handling of the sources and what the whole thing will look like after the EoL of CentOS Stream. Currently I see AlmaLinux doing a lot of things right, both technically and in other areas. Rocky Linux also appears very solid. All the others I have more or less lost sight of, as they have neither become relevant in our customer base nor has anything sparked my personal interest. The nice thing here is that tools exist both for migrating within version 7 and directly from 7 to 8.

A complete change of distribution is something I will leave out of this consideration, as it requires significantly more planning. That effort can be managed too, but it is not really a solution for a time frame of about 3 months.

How do I proceed?

Once the decision on the target has been made, the next question is usually whether to upgrade the system at all or rather rebuild it from scratch. Here I have actually become a proponent of in-place upgrades over time, which is also thanks to the tool Leapp, or ELevate respectively. I already covered this tool, and how I use it sensibly even in larger environments, a while ago in my article “Leap(p) to Red Hat Enterprise Linux 9”.

What I only touched on back then, using Foreman as an example, is how to handle additional software. Foreman was a good example for the article because extra migrations were built for Leapp. For other software this is not the case. So we have to assume two scenarios here: on the one hand, software that was installed as packages from an additional repository, and on the other hand, software that was installed manually. Software in containers or similar can be left out, as it is independent of the operating system.

Software from additional repositories should not be a problem with Leapp/ELevate, as long as a repository for the newer version is available. In these cases, the additional repository only has to be made available during the upgrade. Of course there are stumbling blocks here too; one example would be Icinga, where from EL8 on only subscription-based repositories are available. So here as well, some time should be invested in planning and testing before touching production systems.

For manually installed software, recompiling is very likely needed after the operating system upgrade, so that the software runs with the newer versions of the system libraries. In some cases an update of the software itself will be necessary if compatibility is otherwise no longer given. This also applies in many cases to software that does not need compiling, since Perl, Python, Ruby and similar runtime environments change as well. Since one advantage of the in-place upgrade is lost here and a manual step remains anyway, a parallel rebuild becomes rather tempting. What makes more sense in the end usually depends on the amount of runtime data.

Is that everything?

In many environments people tend to forget where else the operating system plays a role, and the migration is important there too. On the one hand, it makes sense to look at the base of containers or images, for example with Vagrant, and switch to newer versions there. On the other hand, we have CI/CD pipelines: here, compatibility can often be ensured in advance by extending the test matrix, and once the upgrade is complete, the old systems can be removed and the restrictions tied to them loosened, which will surely please some developers.

Conclusion

As hopefully everyone can see, there is work ahead for all of us, and not just in the actual doing. But it can also be seen as an opportunity for innovation. For example, a veritable spring cleaning has set in at the Foreman project, since with the end of support for EL7 and the resulting need for EL9 support the whole technology stack can be modernized. If I have startled anyone now, or if anyone needs help with this topic in general, our consultants are happy to support you with the planning, no matter whether it is about the right approach or the use of the tools. Through an outsourcing contingent, our colleagues are also happy to help with the actual upgrade. And if you need an Icinga subscription, feel free to contact our sales team.
