Foreman’s 10th birthday – The party was a blast

Birthday Logo

I can still remember when Greg had the idea of celebrating Foreman’s birthday four years ago and I volunteered to organize the German party. After two editions, and with Foreman being covered at the Open Source Camp last year, I asked for others to run the party. And since ATIX was doing a great job, I asked them to team up on this. So we have grown the annual birthday party into a great community event.

This year was different from the previous ones because we had so much support from Red Hat. The new community managers showed up to introduce themselves, accompanied by Greg, who had stepped down earlier this year. A group of product managers and consultants made the last stop of their European tour. A technical writer came over to discuss the future of the documentation. And with Evgeni and Ewoud we had some recurring attendees who would give talks later. ATIX also arrived with a bus full of people. Monika represented iRonin, a company doing custom development on Foreman that I hope to team up with in the future, and Timo, who develops on Foreman for dmTech, brought a colleague. So users were slightly under-represented, and the prepared demos were mostly used to share knowledge; probably because of the heat, many discussions took place instead of hacking. But I think every one of the about thirty attendees made good use of the first session.

Birthday Party, Demo

The session ended when I brought in the cake, and thanks to our events team the cake was as tasty as it was good-looking. A nice touch by Ohad was to insist that he could not blow out the candles alone, as he could not have built Foreman without the community.

Birthday Cake, Helmets

After the cake break we started with the talks. The first one was by the community team, giving us a recap of Foreman’s history, data from the community survey and other insights like a first look at the future documentation. To me this is really the next step: Red Hat is also moving their Satellite documentation upstream, adding use-case-driven documentation to the manual, which is far more technical. In the second talk Quirin showcased the current state of the Debian support, which will be fully functional once errata support is added, but he already promised some usability and documentation improvements afterwards. The third speakers were Dana and Rich, who showed Red Hat’s roadmap of features to add to Foreman so they can be pulled into Satellite afterwards. The roadmap will be presented in a community demo and uploaded to the community forum. Having the product managers easily available also allowed the audience to ask any question, and I was excited to hear that for almost all topics brought up there is already ongoing work in the background. For example, I asked about making subscription management usable for other vendors as well, and Rich told me he is part of a newly founded team which is evaluating exactly this.

Because of the heat we added a small ice cream break before starting the next talk, and since Lennart was ill, Ohad entered the stage to show his work on containerizing Foreman. He explained that he started it mainly for testing, but the interest showed him that expanding it to fully run Foreman and even Katello on Kubernetes could be a way forward. Evgeni gave a shortened version of the talk on writing Ansible modules for Foreman and Katello that he had created for Froscon. It was a very technical one, showing how much work is necessary to build a good base so that later work becomes much easier. From this perspective I can really recommend this talk to all Froscon attendees. Last but not least, Ewoud looked into the project’s social aspects, a nice mixture of official history and personal moments. He also showed off the different swag the project has created, ending with a t-shirt signed by as many team and community members as possible while traveling from Czechia to the US and back, as a suitable gift for Greg, because “Once a foreman, always a foreman”. 😉

For dinner we had pizza and beer, but after a short while we moved to the air-conditioned hotel bar to finish the evening. I heard people enjoyed the conversations until two o’clock in the morning, even though the bar had closed an hour earlier. 😀

I would say the party was a blast, and I am already looking forward to next year, when ATIX will be the host again. But until then there are several other Foreman-related events: the Open Source Automation Day on October 15th & 16th, 2019 in Munich, including workshops the day before and a Foreman hackday the day after, organized by ATIX, and the Open Source Camp on November 7th, 2019 in Nuremberg, right after OSMC, by NETWAYS.

Dirk Götz
Senior Consultant

Dirk is a Red Hat specialist and works at NETWAYS as a consultant for Icinga, Puppet, Ansible, Foreman and other systems management solutions. He was previously employed as a senior administrator at a statutory pension insurance provider, where he was also responsible for training the apprentices, as he is now at NETWAYS.

Automated updates with Foreman Distributed Lock Manager

Foreman Logo

Who doesn’t know the situation: ideally, all the annoying, recurring work should be automated so there is more time for fun new projects. After backups, there is hardly any topic that earns you as little glory as updates, right? So, a clear case for automation! Or maybe not, because too much can go wrong? Well, I cannot make that decision for you. But at least for one common source of errors I can offer a solution: the simultaneous update of a whole cluster, which then leads to an outage of the supposedly highly available service after all.

Before I get to the solution I am proposing, I want to briefly explain where the inspiration came from: Foreman DLM (Distributed Lock Manager) was heavily inspired by the update mechanism of CoreOS. There, CoreOS systems form a cluster, and a policy defines how many of them may perform an update at the same time. As soon as a new update is available, a system starts the download and writes a lock to a central store. After a successful update, this lock is released again. If, however, another system requests a lock to update itself while the maximum number of concurrent locks is already held by other systems, no update is performed at that time; it is requested again later. This ensures that the container platform always runs with enough resources. CoreOS adds further mechanisms on top, such as a simple rollback to the state before the update and different channels for testing the software, which are not as easily available on Linux in general. But providing a locking mechanism should be feasible, dmTech thought. That the choice fell on developing it as a Foreman plugin is easy to explain, as Foreman serves as their central administration tool.
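Stripped of persistence and networking, the CoreOS-style policy described above boils down to a counting semaphore per cluster. A minimal sketch in Python (the class name and the in-memory store are my own simplification for illustration, not the actual CoreOS or Foreman DLM implementation):

```python
class UpdateLockPolicy:
    """Toy model of the CoreOS-style update policy: at most
    `max_concurrent` hosts may hold the update lock at once."""

    def __init__(self, max_concurrent=1):
        self.max_concurrent = max_concurrent
        self.holders = set()  # hosts currently updating

    def acquire(self, host):
        """Try to take the lock; returns False if the limit is
        reached, in which case the host should retry later."""
        if host in self.holders:
            return True  # this host already holds the lock
        if len(self.holders) >= self.max_concurrent:
            return False
        self.holders.add(host)
        return True

    def release(self, host):
        """Release the lock after a successful update."""
        self.holders.discard(host)
```

With `max_concurrent=1`, a second host asking for the lock while the first is still updating simply gets `False` and retries later, which is exactly the behavior that keeps a cluster from taking down all its nodes at once.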

So what does the solution look like? Installing the plugin gives Foreman a new API endpoint through which locks can be checked, acquired and released again. The Puppet certificates (or, in the case of Katello, those of the subscription-manager) are used for authentication; the different HTTP methods stand for checking (GET), acquiring (PUT) or releasing (DELETE) the lock, and the response consists of an HTTP status code and a JSON body. The status code 200 OK for successful actions, 412 Precondition Failed when acquiring or releasing the lock is not possible, and the body can then be evaluated in your own update script. A simple example can be found directly in the source repository. A somewhat more extensive script, basically a small framework, was developed in Python by a user and also made freely available.
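An update script built on top of this could look roughly like the following Python sketch. The host name, certificate paths and the exact lock endpoint path are assumptions on my part (check the plugin’s documentation for the real URL); only the meaning of the 200/412 status codes is taken from the description above:

```python
import ssl
import urllib.request
from urllib.error import HTTPError

# Hypothetical values -- adjust to your environment.
FOREMAN = "https://foreman.example.com"
LOCK_URL = FOREMAN + "/api/dlmlocks/default/lock"  # assumed endpoint path

def dlm_request(method,
                cert="/etc/puppetlabs/puppet/ssl/certs/host.pem",
                key="/etc/puppetlabs/puppet/ssl/private_keys/host.pem"):
    """Send GET (check), PUT (acquire) or DELETE (release) to the DLM
    endpoint, authenticated with the host's Puppet client certificate.
    Returns the HTTP status code."""
    ctx = ssl.create_default_context()
    ctx.load_cert_chain(certfile=cert, keyfile=key)
    req = urllib.request.Request(LOCK_URL, method=method)
    try:
        with urllib.request.urlopen(req, context=ctx) as resp:
            return resp.status
    except HTTPError as err:
        return err.code  # e.g. 412 Precondition Failed

def may_update(status):
    """200 OK: lock acquired, run the update. 412: another host holds
    the lock, skip this run and retry later."""
    if status == 200:
        return True
    if status == 412:
        return False
    raise RuntimeError("unexpected DLM status: %d" % status)
```

A cron-driven wrapper would then call `dlm_request("PUT")`, run the package update only if `may_update()` returns `True`, and finish with `dlm_request("DELETE")` so the next host in the cluster can take its turn.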


Managing SUSE Linux packages with Katello

Katello Logo

Katello extends Foreman with content management, or, since I am primarily concerned with Linux packages, I prefer the term software management. Using lifecycle environments and content views, snapshots of the repositories are created and presented to the different stages one after another, so that the updates which actually reach production are the ones that were tested beforehand. But I already wrote about that a while ago. Quite a bit has evolved since then; in particular, support for Debian has been added. I would like to report on that, however, once errata management support for Debian is ready as well.

Instead, I want to look at the support for SUSE. It was developed by ATIX and released as the Foreman plugin “ForemanSccManager”. Anyone who knows Katello’s “Red Hat” support will recognize the functionality quite quickly. The plugin adds a new menu entry that allows you to enter accounts for accessing the SUSE Customer Center and to simply select the associated software products for synchronization. I find this particularly helpful because SUSE authenticates not only with user name and password but also with a token in the URL, which unfortunately makes manual handling more difficult.

If you would like to see a few screenshots, I refer you to the Orcharhino documentation (a product based on Katello), as the plugin has been in production use at ATIX and with their Orcharhino customers for a while now. So if you depend on SUSE and are still looking for a software management solution, Katello together with ForemanSccManager gives you a more modern platform than Spacewalk or the SUSE Manager based on it. And if you already use Katello and run SUSE, I can only recommend switching your workflow over to the plugin.


OSMC 2018 – Day 2

The evening event again offered great food, drinks and conversation, and while it ended in the early morning for some people, the rooms were full of attendees again for the first talk. It was a hard choice between probably equally great talks, but in the end I chose Rodrigue Chakode with “Make IT monitoring ready for cloud-native systems”. A long-term contributor to several open source monitoring solutions, he used his experience to develop Realopinsight, a tool that brings existing monitoring tools together and extends them for monitoring cloud-native application platforms. In his live demo he showed the web interface and the Icinga 2, Zabbix and Kubernetes integration, including aggregation of the severity of a specific service across the different solutions.
OSMC 2018
“Scoring a Technical Cyber Defense Exercise with Nagios and Selenium” by Mauno Pihelgas was a quite uncommon case study. Locked Shields is the biggest cyber defense exercise, involving 22 teams defending systems provided by vendors against hundreds of attacks. Mauno is responsible for the availability scoring system, which gives the defending teams bonus points for the availability of their systems, but of course also makes those systems targets for attacks which, if successful, cause a loss of points. The data collected by Nagios and Selenium is then forwarded to Kafka and Elasticsearch to provide abuse control and overall scoring. To give you some numbers: over the two days of the exercise about 34 million checks were executed and logged.
Susanne Greiner’s talk “Mit KI zu mehr Automatisierung bei der Fehleranalyse” was about using artificial intelligence for automated failure analysis. Her talk started with anomaly detection and forecasting, went on to user experience and ended with machine learning and deep learning. It is always great to see what experts can do with data: running anomaly detection and forecasting on the data, adding labels for user experience and feeding them to the AI can increase troubleshooting capabilities. And better troubleshooting will of course result in better availability and user experience, which is perhaps the main goal of all IT.
At the evening event there was again some gambling, and after lunch the guys who had managed to win the most chips received some real prizes.
OSMC 2018 Gambling Winners
While some were still enjoying the event massage, Carsten Köbke started the afternoon sessions with the best talk title: “Katzeninhalt mit ein wenig Einhornmagie” (Cat content with a little bit of unicorn magic). As the author of the Icinga Web 2 module for Grafana and of several themes for Icinga Web 2, he demonstrated and explained his work to the audience. It is very nice to see performance data, with annotations extracted from the Icinga database, nicely presented in Grafana. The themes part of the talk was based on the idea that everyone can do this and that monitoring can be fun.
Thomas and Daniel teamed up to focus on log management and to help people choose their tool wisely in their talk “Fokus Log-Management: Wähle dein Werkzeug weise”. They compared the Elastic Stack and Graylog in multiple categories, highlighting advantages and disadvantages and which tool fits best for which user group.
“Eliminating Alerts or ‘Operation Forest’” by Rihards Olups was a great talk on how he tried to reduce alerts to get better acceptance and handling of the remaining ones, getting problems solved instead of ignored. The ‘Operation Forest’ in the talk’s title is his metaphor for their infrastructure, and alerts are trash he does not want in his forest: trash attracts trash, just as alerts attract alerts, because when their number grows they tend to be ignored, more problems become critical and cause even more alerts. This is not a problem of the tool used for monitoring and alerting, and he had not only nice hints on changing culture but also technical ones, like focusing on one monitoring solution, knowing and using all its features, or making problems more visible, for example by putting them into the message of the day. For those facing the same problems in their environment, he wrote a shitlist: check off the problems you have, and the number of checked items indicates how shitty your environment is. I recommend having a look at this list.
Last but not least, Nicolai Buchwitz talked about the “Visualization of your distributed infrastructure”, and with his Map module for Icinga Web 2 he provides a very powerful tool to do exactly that. All the new features of the latest 1.1.0 release make it even more useful, and the outlook on future extensions looks promising. Nicolai concluded with a nice live demo showing all this functionality.
So it was again a great conference; thanks to all speakers, attendees and sponsors for making this possible. To everyone not staying for the hackathon or the Open Source Camp I wish “Safe travels”. Slides, videos and pictures will be online in the near future. I hope to see you at next year’s OSMC on November 4th–7th!


OSMC 2018 – Day 1

It is always the same: winter is coming, and it brings people to Nuremberg for OSMC. Our open source monitoring conference still grows every year, and after giving three parallel tracks a try last year, we changed the format again to also include shorter talks and to always have three tracks. It also gets more international, and the topics get more diverse, covering all the different monitoring solutions, with speakers (and attendees) from all over the world. Like every year, the 13th conference started with a day of workshops enabling those interested to get hands-on with Prometheus, Ansible and Graylog, plus practical examples of using the Puppet modules for Icinga 2. Also this year, two days of great talks will be followed by a day of hacking, and the second edition of the Open Source Camp takes place, this time focusing on Puppet.
OSMC 2018
And another tradition is Bernd starting the conference with a warm welcome before the first talk. Afterwards Michael Medin talked about his journey in monitoring and being a speaker at OSMC for the eleventh time in “10 years of OSMC: Why does my monitoring still look the same?”. It was a very entertaining talk comparing innovation in general with the innovation happening in monitoring. He showed that monitoring solutions have changed to reflect the change in culture but still rely on the same mechanisms, and explained the resulting problems we probably all know, like finding the correct metrics and interpreting them.
The second talk I attended was “Scaling Icinga2 with many heterogeneous projects – and still preserving configurability” by Max Rosin. He started with the technical debt to solve and the requirements to fulfill when migrating from Icinga 1 to Icinga 2, like check latency or 100% automation of the configuration. Their highly available production environment has had no outage since going live in January, thanks to the infrastructure design and to testing updates and configuration changes in a staging setup, which is pretty awesome. The scripting framework they created for the migration will be released on GitHub. But this was not all they coded to customize their environment: they also added some very helpful extensions for the operations team to Icinga Web 2, which will be available on GitHub at some point in the future, after the company-specific parts have been separated from those ready for upstream.
For the third session I chose Matthias Gallinger with “Netzwerkmonitoring mit Prometheus” (Network monitoring with Prometheus). In his case study he showed the migration from Cacti to Prometheus and Grafana done at an international company based in Switzerland. The most important part here is the SNMP exporter for Prometheus, including a generator for its configuration. Everything required is part of the labs edition of the Open Monitoring Distribution (OMD).
After lunch, Serhat Can started with “Building a healthy on-call culture”. He presented and explained his list of rules that should create such a culture: be transparent – share responsibilities – be prepared – build resilient and sustainable systems – create actionable alerts – learn from your experiences. To sum up, he told everyone to care about the on-call people, resulting in a good on-call service and user experience, which will prevent a loss of users and money.
David Kaltschmidt, Director of UX at Grafana Labs, gave an update on what’s new and upcoming in Grafana, focusing on the logging feature, in “Logging is coming to Grafana”. The new menu entry Explore allows easy querying of Prometheus metrics, with functions for rate calculation or averages just one click away, and it works the same for log entries via a new type of datasource. This feature should be very useful in a Kubernetes environment for doing some distributed tracing. If you are interested in it, it should be available as a beta in December.
“Distributed Tracing FAQ” was also the title of Gianluca Arbezzano’s talk. I can really recommend it for its good explanation of why and how to trace requests through the increasingly complex, distributed services of today. If you are more interested in tool links: he recommends Opentracing as the library, Zipkin as the frontend and of course InfluxDB as the backend.
This year Bernd’s talk about the “Current State of Icinga” was crowded and interesting as always. I will skip the organizational things, like the interest in the project growing according to website views, customers talking about their usage, and partners, camps and meetups all over the world. On the technical side, Icinga 2 had a release bringing more stabilization, improved syntax highlighting and, as a new feature, namespacing. The coming Director release brings support for multiple instances helping with staging, health checks and a configuration basket allowing you to easily export and import configuration. A new Icinga Web 2 module, X509, helps managing your certificate infrastructure and will be available next week on GitHub. The one for VMware vSphere (sponsored by dmTECH) is already released and was shown in a demo by Tom, who developed it. Icinga DB will replace the IDO as a backend, moving volatile data to Redis while data to be kept is stored in MySQL or PostgreSQL, and there will also be a new monitoring module for Icinga Web 2 to make use of it, all hopefully available in two weeks.
This year’s OSMC provided something special as the last talk of the first day: an authors’ panel including Marianne Spiller (Smart Home mit openHAB 2), Jan Piet Mens (Alternative DNS Servers – Choice and deployment, and optional SQL/LDAP back-ends), and Thomas Widhalm and Lennart Betz (Icinga 2 – Ein praktischer Einstieg ins Monitoring), moderated by Bernd and answering questions from the audience.
If you want more details or pictures, have a look at Twitter. There will also be a post by Julia giving a more personal view on the conference based on interviews with some attendees, and one by me covering the talks of the second day, but for now I am heading to the evening event.


A trainer’s perspective – Professional Services – 2018

Training
Since our apprentices have a whole blog series in which they share their experiences from time to time, I thought I would turn the tables. In the following lines the interested reader gets my personal view on why we train apprentices, what my goal for them is, how I try to reach it over the three years and, of course, briefly what I therefore look for when selecting apprentices.
Why do we train apprentices?
It is now about three years ago that Bernd brought up the company’s age structure in my annual review: we are a young company and want to stay one. This led to a calculation of how many apprentices at which average age we need to keep the average age stable, assuming we also want to keep offering all aging employees a perspective. A second factor was that especially in Professional Services (our consulting and support department) it is difficult to keep up with the growth in orders in terms of staff, since we need people with broad basic knowledge, some specialist knowledge, the soft skills to let them loose on customers, a willingness to travel and, to be honest, without utopian salary expectations. These two factors sparked the discussion of whether we could also train apprentices in Professional Services. After some conditions that were important to me had been agreed, such as a rotation through the departments and sufficient time for mentoring and training the apprentices, Professional Services joined the other departments, which have been training for much longer, with the 2017 training year.
What is the goal?
My declared goal is to enable the apprentices, after three years, to make a well-founded decision about their further career path and to have taught them the fundamentals for whichever path they choose. The options open to them are, in my eyes, junior consultant in Professional Services, another position at NETWAYS without travel, a move to another company or even a move to a different field of work. My declared dream outcome, of course, would be the junior consultant!
How do I want to reach the goal?
In the first year I consider IT fundamentals and soft skills most important, and they should be taught right from the start. To teach the IT fundamentals we rely on a mixture of training sessions given by experienced colleagues, projects in which the apprentices work on topics independently, and practical work at Managed Services.
The training sessions start right in the first full week with Linux basics, followed later in the first year by SQL basics, network basics, and DNS & DHCP; further into the apprenticeship, Linux packaging, virtualization and system security are planned. As their first project, all previous years had to set up a LAMP stack together, where each apprentice has to implement, document and present a subtask, but in the end a common result has to be achieved. Further projects usually come from current requirements or evaluations and are not trivial tasks like creating users. For example, the apprentices have tested Portainer, or taken over hardware maintenance for a colleague and tested the result afterwards. At Managed Services the apprentices are then confronted with real-world tasks and get to wrestle with the API of our CMDB, for example.
For me, one of the most important soft skills is self-management, so the apprentices learn to handle the same working time rules as everyone else, record the hours they work, maintain their tickets and everything else needed to keep things running smoothly. Communication and demeanor are of course also important, as are presentations and documentation appropriate to the audience. The department rotation certainly helps here too: the apprentices also take a turn at phone duty, learn the basics of accounting or help organize and supervise a training course.
In the second and third year we build on these foundations: in addition to the internal sessions, our official training courses are attended, internal projects become more demanding with more requirements for documentation and presentation, and technical discussions after the presentation are added. One apprentice also always supports the colleagues in support and provides operational assistance for customers. The department rotation continues: Sales gets technical support for webinars and pre-sales appointments, and our systems integrators also get to know our developers and development in open source projects. It is particularly important to me that the apprentices learn customer contact by accompanying experienced consultants and taking on their own tasks there, or by taking on the role of co-trainer and individual topic blocks in training courses.
As a reward for good performance there are conference attendances or even handling a customer project completely on their own, although this is of course always done remotely so that an experienced colleague can assist, as with the internal projects.
Planning all of this, finding projects that take the individual strengths and weaknesses of the apprentices into account, and mentoring the apprentices naturally takes a lot of time, which I and the supporting colleagues have thankfully been given. For all of this to work, Professional Services depends not only on the cooperation of everyone on the team but also on the support of the other departments and the understanding of our customers. A heartfelt thank you for that at this point.
Selecting the apprentices
Anyone who now thinks that, to reach this goal, we expect specific prior knowledge from our applicants will be surprised: while I consider such knowledge a bonus, I care about quite different things. My first contact with an applicant is when application documents are forwarded to me and I am asked for my opinion. So I first look at the cover letter to see the motivation for the career choice and the path taken so far. It is a bad sign if that is not comprehensible and the applicant also did not bother to use a spell checker or a proofreader. A CV then simply has to be coherent; grades are interesting, but the picture that emerges from the remarks in the school reports is far more interesting.
Whoever manages to convince with that has cleared the first hurdle and gets an invitation to a job interview. Here the demeanor simply has to convince, and technical understanding, interest and of course motivation have to be recognizable. That sounds simple, but anyone not involved in such an application process themselves will hardly believe how many fail at simple things like punctuality, or that “I like tinkering with computers” is not the best motivation. An applicant who will not yet be of legal age by the second year has to be a bit more convincing, since the organizational effort for the travel involved in consulting is considerably higher. But it would no more be a knockout criterion than a higher age would be for, say, someone who dropped out of university or is doing a second apprenticeship.
Closing words
I hope every reader has gained some understanding of and insight into the apprenticeship at NETWAYS. One customer or another may now no longer be surprised when asked whether the consultant can be accompanied by an apprentice. Other training companies are welcome to take inspiration, and I am generally always interested in an exchange of experiences. And above all, I am happy if someone now thinks that we could be the right training company for them and wants to apply.


Contributing as a Non-Developer

Tux

Quite often I get asked how someone can get involved with an open source project with no, or at least minimal, development skills. I have some experience with this topic, as I am pretty involved in some projects, especially Icinga and Foreman, and mostly not because of my development skills, which do exist but lack exercise. So here comes my non-exhaustive list of roles a non-developer can take on in a project.
Ambassador
A good point to start is talking about the project. I know this sounds very simple, and it is indeed. Spread the word about what you like about the project and how you use it; by doing so you help to increase the user base, and with an increasing user base there will also be an increase in developers. But where to start? The internet gives you many opportunities with forums, blogs and other platforms, or, if you prefer offline communication, you can go to local computer clubs, join a user group meeting, help at a conference booth or even send in a paper for a conference. And let the project know about it so they can promote your talk.
Tester
Another option for getting involved quite easily is bug reporting. If you start using some software more and more, the time will come when you find a bug, so simply report it. Try to find a way to reproduce it, add relevant details, configs and logs, and explain why you think it is a bug or a missing feature, so developers can hopefully fix it. When you like the result and want to get more involved, start testing optional workflows, plugins or similar edge cases you expect to be less well tested because they have a smaller user base. Another option is testing release candidates or even nightly builds so bugs can perhaps be fixed before they hit the masses. If provided, take part in test days for new features or versions and test as many scenarios as you can, or even go a step further and help organize a test day. With minimal development know-how you can even dig into the code and help with fixing bugs, so this is also a good path if you want to improve that knowledge.
Community Support
If the user base of a project grows, the number of questions asked every day grows too. This can easily reach a point where the developers have to spend more time answering questions than coding, or have to ignore questions; both can cause a project to fail. Experienced users sharing their knowledge with newcomers can be a big help. So find out how a project wants such questions to be handled and answer them. This can vary from mailing lists, IRC and community forums or panels to flagging issues as questions, or, if no separate platform is provided, the community may meet on Server Fault, Stack Overflow or another common site shared by many projects. Also very important is helping to route communication into the right channel, so help new users to file a good bug report and tell people not to spam the issue tracker with questions better handled on the community platform.
Documentarist
Projects lacking good documentation are very common, and even if the documentation is good, it can still be improved by adding examples and howtos. Pull requests improving documentation are likely to be accepted, or if the project uses a wiki, access is usually granted. But if you feel more comfortable creating your own source of documentation, write howtos in your private blog or even create video tutorials on YouTube. Many projects will link to them, at least in community channels, if you let them know about it.
Translator
Growing the user base by making the software available to more people in additional languages is always a good thing. But perhaps everyone knows at least one project where selecting a language other than English feels like machine translation or a mix of languages, or where you would simply have chosen different words. Even with some basic knowledge of the software and a feeling for your native language you can help improve the translation. In most cases you do not need special knowledge of tools or translation frameworks, as projects try to keep barriers low for translators.
Infrastructure Operator
When projects grow they need more and more infrastructure for hosting their website, documentation, community platform, CI/CD and build pipeline, repositories and so on. Helping to manage this infrastructure is a really good way for an ops person to get involved. Perhaps it is also possible to donate computing resources (dedicated hardware or a hosted virtual machine), which will often be honored by the project by adding you to the list of sponsors, giving you some good publicity.
Specialist
Nowadays a project is not only about the software itself. Making installation easier by providing packages and support for configuration management, or making it more secure by providing security analyses, hardening guides or policies, is something system administrators are often more capable of than developers. From my experience, spec files for RPM packaging, Puppet modules or Ansible roles to support automation, or an SELinux policy for securing the installation are happily accepted as contributions.
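To give an idea of how small such a contribution can start, here is a minimal RPM spec file skeleton; the package name, URL and file list are placeholders, and a real spec for a given project would of course need its actual build steps and dependencies:

```
Name:           example-tool
Version:        1.0.0
Release:        1%{?dist}
Summary:        Example packaging skeleton
License:        GPLv3+
URL:            https://example.com/example-tool
Source0:        %{name}-%{version}.tar.gz

%description
Minimal skeleton showing the basic structure of a spec file.

%prep
%autosetup

%build
%configure
%make_build

%install
%make_install

%files
%{_bindir}/example-tool
```

Even a skeleton like this lowers the barrier for the project to offer native packages.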
Like I said, this list is probably not complete, and you can also mix roles very well: start with talking about the project and filing bugs of your own, and perhaps end up as a community supporter who is well known for a blog providing in-depth guides. But it should at least give you some ideas of how you can get involved in open source projects without adding developer to your job description. And if you are using the project in your company's environment, ask your manager if you can get some time assigned for supporting the project; in many cases you will.


Open Source Camp Issue #1 – Foreman & Graylog

Right after OSDC we helped to organize the Open Source Camp, a brand new series of events which gives Open Source projects a platform for presenting to the community. The event started with a small introduction of the projects covered in the first issue, Foreman and Graylog. For the Foreman part, Sebastian Gräßl, a long-term developer, gave a short overview of Foreman and its community so that people attending for Graylog would know what the other talks are about. Lennart Koopmann, who founded Graylog, did the same for the other half, including the upcoming version 3 and all its new features.
Tanya Tereshchenko, one of the Pulp developers, started the sessions with “Manage Your Packages & Create Reproducible Environments using Pulp”, giving an update on Pulp 3. To illustrate the workflows covered by Pulp she used the Ansible plugin, which allows you to mirror Ansible Galaxy locally and stage the content. Of course Pulp also allows you to add your own content to your local version of the Galaxy and serve it to your systems. The other plugins for which a beta version is already available for Pulp 3 are python, to mirror PyPI, and file, for content of any kind, but more are in different stages of development.
“An Introduction to Graylog for Security Use Cases” by Lennart Koopmann was about bringing the idea of threat hunting to Graylog with a plugin providing lookup tables and a processing pipeline. In his demo he showed all of this based on event logs collected by their honeypot domain controller, and I can really recommend the insights you can get with it. I still remember how much work it was getting such things up and running 10 years ago at my former employer with tools like Rsyslog, and I am very happy that tools like Graylog nowadays provide this out of the box.
From Sweden came Alexander Olofsson and Magnus Svensson to talk about “Orchestrating Windows deployment with Foreman and WDS”. Being Linux administrators, they wanted to give their Windows colleagues a similar experience on a shared infrastructure, and they shared their journey to reach this goal. They have created a small Foreman plugin for WDS integration into the provisioning process, which has been released in its first version. While it was a rather short presentation, it started a very interesting discussion, as the audience also consisted mostly of Linux administrators, but nearly everyone had to deal with Windows in one way or another, too.
My colleague Daniel Neuberger introduced Graylog with “Catch your information right! Three ways of filling your Graylog with life.” His talk covered topics from Graylog's architecture to what types of logs exist and how you can get at least the common ones into Graylog. Some very helpful tips from practical experience spiced up the talk, like never ever running Graylog as root just to be able to receive syslog traffic on port 514; if the client cannot change the port, your iptables rules can do so. Another one showed a fallback configuration for Rsyslog using the execOnlyWhenPreviousIsSuspended action. And like me, Daniel prefers to not only talk about things but also show them live in a demo, something I recommend to anyone giving a talk, as the audience will always honor it, but keep in mind to always have a fallback.
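The Rsyslog fallback tip can be sketched roughly like this (a hedged example, not Daniel's exact config; hostnames and the port are placeholders, and on the Graylog side a rule such as `iptables -t nat -A PREROUTING -p udp --dport 514 -j REDIRECT --to-ports 1514` would redirect the privileged port to the unprivileged input):

```
# /etc/rsyslog.d/graylog.conf - forward to the primary Graylog input,
# and only use the secondary while the primary is suspended
action(type="omfwd" target="graylog1.example.com" port="1514" protocol="tcp")
action(type="omfwd" target="graylog2.example.com" port="1514" protocol="tcp"
       action.execOnlyWhenPreviousIsSuspended="on")
```

The second action stays idle as long as the first one is healthy, which is exactly the failover behavior the talk described.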
Timo Goebel started the afternoon sessions with “Foreman: Unboxing”, and like in a traditional unboxing he showed all the plugins Filiadata has added to their highly customized Foreman installation. This covered integration of Omaha (the update management of CoreOS), a rescue mode for systems, VMware status checking, distributed lock management to help with automatic updates in cluster setups, the Spacewalk integration they use for systems managed by SUSE Manager, host expiration which helps to keep your environment tidy, monitoring integration, and the one he is currently working on, which provides cloud-init templates when cloning virtual machines from templates in VMware.
Jan Doberstein did exactly what you would expect from a talk called “Graylog Processing Pipelines Deep Dive”. Having been a support engineer at Graylog for several years now, his advice comes from experience in many different customer environments, and while statements like “keep it simple and stupid” are made often, they stay true but also unheard by many. Those pipelines are really powerful, especially when done well, and even more so when they can be included and shared via content packs with version 3.
Matthias Dellweg, one of those guys from ATIX who brought Debian support to Pulp and Katello, talked about errata support for it in his talk “Errare Humanum Est”. He started by explaining the state of errata in the RPM world and the differences in the DEB world. Afterwards he showed the state of their proof of concept, which looks like a big improvement, bringing DEB support in Katello to the same level as RPM.
“How to manage Windows Eventlogs” was brought to the audience by Rico Spiesberger, with support from Daniel. The diversity of their environment brought some challenges which they wanted to solve by monitoring the logs for events that history had proved to be problematic. Collecting the events from over 120 Active Directory servers in over 40 countries now generates over 46 billion documents in Graylog a day and gives a good idea of what is going on. No such big numbers, but even more detailed dashboards, were created for the certificate authority. Expect all their work to be available as a content pack once Graylog 3 makes it possible to export them.
Last but not least, Ewoud Kohl van Wijngaarden told us the story of how software goes “From git repo to package” in the Foreman project. Seeing all the work needed to cover different operating systems and software versions for Foreman and the big number of plugins, or even more for Katello and all its dependencies, is great and explains why things sometimes take longer but always show high quality.
I think it was a really great event, which not only I enjoyed, judging from the feedback I got. What I really like about the format is that the talks dive deeper into the projects than at most other events, and I am looking forward to the next issue. Thanks to all the speakers and attendees, and safe travels home to everyone.


The Future of Open Source Data Center Solutions – OSDC 2018 – Day 2

The evening event was a great success. While some enjoyed the great view from the Puro Skybar, others liked the food and drinks even more, and I for one preferred the networking. I joined some very interesting discussions about very specific information technology tools, work-life balance, differences between countries and cultures and so on. So thanks to everyone, from our events team to the attendees, for a great evening.
But even a great evening and a short night did not keep me and many others from joining Walter Gildersleeve for the first talk, “Puppet and the Road to Pervasive Automation”. He introduced the new tools from Puppet that improve the configuration management experience, like Puppet Discovery, Pipelines and Tasks. What I liked about his Tasks demos was that he showed what the Enterprise version can do, of course, but also what the Open Source version is capable of. Pipelines is Puppet's CI/CD solution, which can be used as SaaS or on premise, and I have to admit it looks very nice and informative. If you want to give it a try, you can sign up for a free account and test it with a limited number of nodes.
Second today was Matt Jarvis with his talk “From batch to pipelines – why Apache Mesos and DC/OS are a solution for emerging patterns in data processing”. Like several others, he started with the history from mainframes via hardware partitioning and virtualization to microservices running in containers. After this introduction he dug deeper into container orchestration and the changes in modern application design that add complexity, which they wanted to solve with Mesos. Matt then gave a really good overview of different aspects of the Mesos ecosystem and DC/OS. As this is quite a complex topic, a list of everything covered would be quite exhaustive, but to mention some examples, he covered service discovery and load balancing.
Michael Ströder, whom I know as a great specialist for secure authentication from working with him at a customer in the past, introduced “Æ-DIR – Authorized Entities Directory” to the crowd. You could already see his experience when he talked about the goals and paradigms applied during development, which resulted in the two-tier architecture of Æ-DIR, consisting of a writable provider and readable consumers with access separated based on roles. Installation is quite easy with the provided Ansible role and results in a very secure setup, which I really like for a central service like authentication. The customer scenarios shown, using features like SSH proxy authorization and two-factor authentication with a YubiKey, make Æ-DIR sound like a really production-ready solution. If you want to have a look at it without installing it, a demo is provided on the project's webpage.
The first talk after the lunch break was “Git Things Done With GitLab!” by my colleagues Gabriel Hartmann and Nicole Lang, about GitLab and why NETWAYS chose it for inclusion in our web services. Nicole gave a very good explanation of the basic functions, which Gabriel showed live in a demo, followed by a cherry-pick of nice features provided by GitLab. These features, like the issue tracker and CI/CD, were also shown live. I was really excited by the beta of Auto DevOps, which allows you to get CI/CD up and running very easily.
Thomas Fricke’s talk “Three Years Running Containers with Kubernetes in Production” was a very good talk about the things you should know before moving containers and container orchestration into production. But while it was an interesting talk, I had to prepare for my own, because I was giving the last talk of the day, “Katello: Adding content management to Foreman”, which consisted primarily of demos showing all the basic parts.
It was a great conference again this year; I really want to thank all the speakers, attendees and sponsors who made this possible. I am looking forward to more interesting and even more technical talks at the Open Source Camp tomorrow, but wish safe travels to all those leaving today and hope to see you next year on May 14-15.


The Future of Open Source Data Center Solutions – OSDC 2018 – Day 1

For the fourth time OSDC started in Berlin, with a warm welcome from Bernd and a fully packed room with approximately 140 attendees. This year we made a small change to the schedule by doing away with the workshop day and having an additional, smaller conference afterwards. The Open Source Camp will be about Foreman and Graylog, but more on this on Thursday.
The first talk was by Mitchell Hashimoto on “Extending Terraform for Anything as Code”, who started by showing how automation evolved in information technology and explained why it is so important before diving into Terraform. Terraform provides a declarative language to automate everything that provides an API, with a plan command to show the required changes before you apply them. While this is quite easy to understand for something like infrastructure, Mitchell showed how the number of possibilities grew with Software-as-a-Service and now everything having an API. One example was how HashiCorp manages employees and their permissions with Terraform. After the examples of how you can use existing providers, he gave an introduction to extending Terraform with custom providers.
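That declarative plan/apply workflow can be illustrated with a minimal sketch; the AWS provider and all IDs here are purely illustrative placeholders, not anything from the talk:

```hcl
# main.tf - declare the desired state; "terraform plan" shows the diff,
# "terraform apply" makes the real infrastructure match it
provider "aws" {
  region = "eu-central-1"
}

resource "aws_instance" "web" {
  ami           = "ami-0123456789abcdef0" # placeholder AMI id
  instance_type = "t2.micro"

  tags = {
    Name = "osdc-demo"
  }
}
```

The point of the talk was that the same pattern works for anything with an API, not just compute instances.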
Second was “Hardware-level data-center monitoring with Prometheus”, presented by Conrad Hoffmann, who gave us a look inside the data center of SoundCloud and their monitoring infrastructure before Prometheus, which looked like a zoo. Afterwards he highlighted the key features for which they moved to Prometheus, with Grafana for displaying the collected data. In his section about exporters he went into detail about which exporter replaced which tools from the former zoo and gave some tips from practical experience. And last but not least, he summarized the migration and why it was worth doing, as it gave them a more consistent monitoring solution.
Martin Schurz and Sebastian Gumprich teamed up to talk about “Spicing up VMWare with Ansible and InSpec”. They started by looking back at the old days, when they had only special servers and later on manually managed virtual machines, how this slowly improved by using management tools from VMware, and how it looks now with their current mantra “manual work is a bug!”. They showed example playbooks for provisioning the complete stack from virtual switch to virtual machine, hardening it according to their requirements, and managing the components afterwards. Last but not least for the Ansible part, they described how they implemented the Python code for an Ansible module that moves virtual machines between datastores and hosts. For testing all this automation they use InSpec, and the management, which requires some tracking of the environment, was solved using Ansible-CMDB.
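A provisioning playbook of the kind they showed might look roughly like this; this is a hedged sketch using the stock vmware_guest module, and all hostnames, credentials and object names are placeholders, not their actual setup:

```yaml
# provision_vm.yml - clone a VM from a template on vCenter
- hosts: localhost
  gather_facts: false
  tasks:
    - name: Provision virtual machine from template
      vmware_guest:
        hostname: vcenter.example.com      # placeholder vCenter
        username: "{{ vcenter_user }}"
        password: "{{ vcenter_pass }}"
        validate_certs: false
        datacenter: DC1
        folder: /DC1/vm
        name: app01
        template: rhel7-template           # placeholder template name
        state: poweredon
```

Chaining tasks like this for switches, VMs and hardening is what turns "manual work is a bug!" from a mantra into a workflow.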
After the lunch break I visited the talk about “OPNsense: the ‘open’ firewall for your datacenter”, given by Thomas Niedermeier. OPNsense is a HardenedBSD-based open source firewall including a nice configuration web interface, Spamhaus blocklists, an intrusion prevention system and many more features. I think with all these features OPNsense does not have to shy away from comparison with commercial firewalls, and if enterprise-grade support is required, partners like Thomas-Krenn are available, too.
Martin Alfke asked the question “Ops hates containers. Why?”, which he had come across in a customer meeting. Based on this experience he started to demystify containers in a very entertaining and memorable way. He focused on giving ops some tips and ideas about what you should learn before even thinking about having containers in production or implementing your own container management platform. As we record the talks, I really recommend you have a look at the video when the recordings are up in a few days.
Anton Babenko, in his talk “Lifecycle of a resource. Codifying infrastructure with Terraform for the future”, started where Mitchell's talk ended and dived really deep into module design and development for Terraform. Even though I am not very familiar with Terraform, he convinced me that it is possible to write well-designed code for it and that it is fun to experiment with and improve your own modules. Furthermore, he gave tips for handling the next Terraform release and for testing code during refactoring, which are probably very useful for module authors.
“The Computer Science behind a modern distributed data store” by Max Neunhöffer did a very good job of explaining the theory behind cluster election and consensus. The second topic covered was sorting of data and how modern technology has changed how we have to look at sorting algorithms. Log-structured merge trees, the third topic of the talk, are a great way to improve write performance, and with some additional tricks also read performance, and are used by many database solutions. The fourth section was about hybrid logical clocks, which solve the problem of differing system clocks. Last but not least, Max talked about distributed ACID transactions (Atomic, Consistent, Isolated, Durable), which are important to keep data consistent but are much harder to achieve in distributed systems. It was really a great talk; while covering only theoretical computer science, Max made at least the basics very easy to understand and presented them in a way that gets people interested in those topics.
After this first day full of great talks, we will have the evening event in a sky bar with a good view of Berlin, more food, drinks and conversations. This networking is perhaps one of the most interesting parts of conferences. I will be back with a short review of the evening event and day 2 tomorrow evening. If you want more details and a more live experience, follow #osdc on Twitter.
