Open Source Camp Issue #1 – Foreman & Graylog

Right after OSDC we helped to organize the Open Source Camp, a brand-new series of events giving Open Source projects a platform to present themselves to the community. The event started with a short introduction of the projects covered in the first issue, Foreman and Graylog. For the Foreman part, Sebastian Gräßl, a long-term developer, gave a short overview of Foreman and its community, so that people attending for Graylog would also know what the other talks were about. Lennart Koopmann, who founded Graylog, did the same for the other half, including the upcoming version 3 and all its new features.
Tanya Tereshchenko, one of the Pulp developers, started the sessions with “Manage Your Packages & Create Reproducible Environments using Pulp”, giving an update on Pulp 3. To illustrate the workflows covered by Pulp she used the Ansible plugin, which allows you to mirror Ansible Galaxy locally and stage the content. Of course Pulp also lets you add your own content to your local version of the Galaxy and serve it to your systems. The other plugins for which a beta version is already available for Pulp 3 are python, for mirroring PyPI, and file, for content of any kind, but more are in different stages of development.
“An Introduction to Graylog for Security Use Cases” by Lennart Koopmann was about bringing the idea of threat hunting to Graylog with a plugin providing lookup tables and processing pipelines. In his demo he showed all of this based on event logs collected by their honeypot domain controller, and I can really recommend the insights you can get with it. I still remember how much work it was to get such things up and running ten years ago at my former employer with tools like rsyslog, and I am very happy that tools like Graylog nowadays provide this out of the box.
From Sweden came Alexander Olofsson and Magnus Svensson to talk about “Orchestrating Windows deployment with Foreman and WDS”. Being Linux administrators, they wanted to give their Windows colleagues a similar experience on a shared infrastructure, and they shared their journey towards this goal. They have created a small Foreman plugin for WDS integration into the provisioning process, which has been released in a first version. While it was a rather short presentation, it started a very interesting discussion, as the audience also consisted mostly of Linux administrators, but nearly everyone had to deal with Windows in one way or another, too.
My colleague Daniel Neuberger gave an introduction to Graylog with “Catch your information right! Three ways of filling your Graylog with life”. His talk covered Graylog's architecture, the types of logs that exist, and how to get at least the most common ones into Graylog. Some very helpful tips from practical experience spiced up the talk, like never running Graylog as root just to be able to receive syslog traffic on port 514; if the client cannot change the destination port, your iptables rules can redirect it to an unprivileged one. Another tip showed a fallback configuration for rsyslog using the execOnlyWhenPreviousIsSuspended action. And like me, Daniel prefers not only to talk about things but also to show them live in a demo, something I recommend to anyone giving a talk, as the audience will always honor it, but keep in mind to always have a fallback.
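Such a fallback chain can be sketched in rsyslog's RainerScript syntax; note that the target hosts and the port below are assumptions for illustration, not values from the talk:

```
# forward everything to the primary Graylog input ...
*.* action(type="omfwd" target="graylog1.example.com" port="1514" protocol="tcp")
# ... and only if that action is suspended, to the secondary one
*.* action(type="omfwd" target="graylog2.example.com" port="1514" protocol="tcp"
           action.execOnlyWhenPreviousIsSuspended="on")
```

The second action stays idle as long as the first target is reachable, so messages are not duplicated.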
Timo Goebel started the afternoon sessions with “Foreman: Unboxing”, and like in a traditional unboxing he showed all the plugins Filiadata has added to their highly customized Foreman installation. This covered integration of Omaha (the update management of CoreOS), a rescue mode for systems, VMware status checking, distributed lock management to help with automatic updates in cluster setups, the Spacewalk integration they use for systems managed by SUSE Manager, host expiration which helps to keep your environment tidy, monitoring integration, and the one he is currently working on, which provides cloud-init templates when cloning virtual machines from templates in VMware.
Jan Doberstein did exactly what you would expect from a talk called “Graylog Processing Pipelines Deep Dive”. Having been a support engineer at Graylog for several years now, his advice comes from experience in many different customer environments, and while statements like “keep it simple and stupid” are made often, they stay true but also go unheard by many. These pipelines are really powerful, especially when done well, even more so now that they can be included and shared via content packs with version 3.
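For readers who have not seen one, a minimal sketch of a pipeline rule in Graylog's rule language follows; the field name and the Windows event ID are assumptions chosen for illustration:

```
rule "flag failed Windows logons"
when
    has_field("event_id") && to_long($message.event_id) == 4625
then
    set_field("failed_logon", true);
end
```

Rules like this are attached to pipeline stages and run per message, which is also where the “keep it simple” advice pays off.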
Matthias Dellweg, one of those guys from ATIX who brought Debian support to Pulp and Katello, talked about errata support for it in his talk “Errare Humanum Est”. He started by explaining the state of errata in the RPM world and the differences in the DEB world. Afterwards he showed the state of their proof of concept, which looks like a big improvement, bringing DEB support in Katello to the same level as RPM.
“How to manage Windows Eventlogs” was brought to the audience by Rico Spiesberger, with support from Daniel. The diversity of their environment brought some challenges, which they wanted to solve by monitoring the logs for events that history had proven problematic. Collecting the events from over 120 Active Directory servers in over 40 countries now generates over 46 billion documents a day in Graylog and gives a good idea of what is going on. No such big numbers, but even more detailed dashboards, were created for the certificate authority. Expect all their work to be available as a content pack once Graylog 3 makes it possible to export them.
Last but not least, Ewoud Kohl van Wijngaarden told us the story of how software goes “From git repo to package” in the Foreman project. Seeing all the work needed to cover different operating systems and software versions for Foreman and its big number of plugins, or even more for Katello and all its dependencies, is impressive and explains why some things take longer, but always arrive in high quality.
I think it was a really great event, and judging from the feedback I got, not only I enjoyed it. What I really like about the format is that the talks dive deeper into the projects than most other events can, and I am looking forward to the next issue. Thanks to all the speakers and attendees, and safe travels home to everyone.

Dirk Götz
Principal Consultant

Dirk is a Red Hat specialist and works at NETWAYS in consulting for Icinga, Puppet, Ansible, Foreman and other systems management solutions. He previously worked as a senior administrator at a statutory pension insurance provider, where he was also responsible for training the apprentices, as he is now at NETWAYS.

The Future of Open Source Data Center Solutions – OSDC 2018 – Day 2

The evening event was a great success. While some enjoyed the great view from the Puro Skybar, others liked the food and drinks even more, and I for one preferred the networking. I joined some very interesting discussions about very specific information technology tools, work-life balance, differences between countries and cultures, and so on. So thanks to everyone, starting with our event team through to the attendees, for a great evening.
But even a great evening and a short night did not keep me and many others from joining Walter Gildersleeve for the first talk, “Puppet and the Road to Pervasive Automation”. He introduced the new tools from Puppet that improve the configuration management experience, like Puppet Discovery, Pipelines and Tasks. What I liked about his Tasks demos was that he showed not only what the Enterprise version can do, but also what the Open Source version is capable of. Pipelines is Puppet's CI/CD solution, which can be used as SaaS or on premises, and I have to admit it looks very nice and informative. If you want to give it a try, you can sign up for a free account and test it with a limited number of nodes.
The second talk today was Matt Jarvis with “From batch to pipelines – why Apache Mesos and DC/OS are a solution for emerging patterns in data processing”. Like several others he started with the history from mainframes via hardware partitioning and virtualization to microservices running in containers. After this introduction he dug deeper into container orchestration and the changes in modern application design which add the complexity they wanted to solve with Mesos. Matt then gave a really good overview of different aspects of the Mesos ecosystem and DC/OS. As this is quite a complex topic, a list of everything covered would be exhaustive, but to mention some of it, he covered service discovery and load balancing, for example.
Michael Ströder, whom I know as a great specialist for secure authentication from working with him at a customer in the past, introduced “Æ-DIR — Authorized Entities Directory” to the crowd. You could already see his experience when he talked about the goals and paradigms applied during development, which resulted in the two-tier architecture of Æ-DIR, consisting of a writable provider and readable consumers with access separated by roles. Installation is quite easy with the provided Ansible role and results in a very secure setup, which I really like for a central service like authentication. The customer scenarios shown, using features like SSH proxy authorization and two-factor authentication with a YubiKey, make Æ-DIR sound like a genuinely production-ready solution. If you want to have a look without installing it, a demo is provided on the project's webpage.
The first talk after lunch was “Git Things Done With GitLab!” by my colleagues Gabriel Hartmann and Nicole Lang, about GitLab and why NETWAYS chose it for inclusion in our web services. Nicole gave a very good explanation of the basic functions, which Gabriel showed live in a demo, followed by a cherry-pick of nice features provided by GitLab. These features, like the issue tracker and CI/CD, were also shown live. I was really excited by the beta of Auto DevOps, which lets you get CI/CD up and running very easily.
Thomas Fricke's talk “Three Years Running Containers with Kubernetes in Production” was a very good talk about the things you should know before moving containers and container orchestration into production. But while it was an interesting talk, I had to prepare for my own, as I was giving the last talk of the day on “Katello: Adding content management to Foreman”, which consisted primarily of demos showing all the basic parts.
It was a great conference again this year, and I really want to thank all the speakers, attendees and sponsors who made it possible. I am looking forward to more interesting and even more technical talks at the Open Source Camp tomorrow, but wish safe travels to all those leaving today and hope to see you next year on May 14-15.


The Future of Open Source Data Center Solutions – OSDC 2018 – Day 1

Now for the fourth time, OSDC started in Berlin with a warm welcome from Bernd and a fully packed room with approximately 140 attendees. This year we made a small change to the schedule by doing away with the workshop day and adding a smaller conference afterwards: the Open Source Camp, which will be on Foreman and Graylog, but more on this on Thursday.
The first talk was Mitchell Hashimoto with “Extending Terraform for Anything as Code”, who started by showing how automation evolved in information technology and explained why it is so important before diving into Terraform. Terraform provides a declarative language to automate everything that provides an API, with a plan command to preview the required changes before you apply them. While this is quite easy to understand for something like infrastructure, Mitchell showed how the number of possibilities grew with Software-as-a-Service and now everything having an API. One example was how HashiCorp handles employees and their permissions with Terraform. After the examples of how you can use existing providers, he gave an introduction to extending Terraform with custom providers.
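To give an impression of this declarative plan/apply workflow for a SaaS case, here is a minimal sketch using the GitHub provider; the username and role are made-up values, not from the talk:

```hcl
# declare the desired state: user "jdoe" is a member of the organization;
# `terraform plan` shows the diff, `terraform apply` makes it so
resource "github_membership" "jdoe" {
  username = "jdoe"
  role     = "member"
}
```

The same resource model that describes a virtual machine thus also describes an org membership, which is exactly the "anything as code" point of the talk.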
The second was “Hardware-level data-center monitoring with Prometheus”, presented by Conrad Hoffmann, who gave us a look inside the data center of SoundCloud and their monitoring infrastructure before Prometheus, which looked like a zoo. Afterwards he highlighted the key features that made them move to Prometheus, with Grafana displaying the collected data. In his section about exporters he detailed which exporter replaced which tool from the former zoo and gave some tips from practical experience. And last but not least, he summarized the migration and why it was worth doing, as it gave them a more consistent monitoring solution.
Martin Schurz and Sebastian Gumprich teamed up to talk about “Spicing up VMWare with Ansible and InSpec”. They started by looking back to the old days when they had only special servers and, later on, manually managed virtual machines, how this slowly improved by using management tools from VMware, and how it looks now with their current mantra “manual work is a bug!”. They showed example playbooks for provisioning the complete stack from virtual switch to virtual machine, hardening it according to their requirements, and managing the components afterwards. Last but not least for the Ansible part, they described how they implemented the Python code for an Ansible module that moves virtual machines between datastores and hosts. For testing all this automation they use InSpec, and the need to keep track of the environment was solved using ansible-cmdb.
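A provisioning task of the kind they showed could be sketched with the stock vmware_guest module; the vCenter host, credential variables, template and VM names below are assumptions for illustration, not from the talk:

```yaml
- name: Clone a virtual machine from a template (sketch)
  hosts: localhost
  gather_facts: false
  tasks:
    - name: Ensure web01 exists and is powered on
      vmware_guest:
        hostname: vcenter.example.com
        username: "{{ vcenter_user }}"
        password: "{{ vcenter_pass }}"
        validate_certs: false
        datacenter: DC1
        name: web01
        template: rhel7-template
        state: poweredon
```

Running the same play again changes nothing if the VM already matches the declared state, which is what makes "manual work is a bug!" workable in practice.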
After the lunch break I visited the talk “OPNsense: the ‘open’ firewall for your datacenter”, given by Thomas Niedermeier. OPNsense is a HardenedBSD-based Open Source firewall including a nice configuration web interface, Spamhaus blocklists, an intrusion prevention system and many more features. I think with all these features OPNsense does not have to shy away from comparison with commercial firewalls, and if enterprise-grade support is required, partners like Thomas Krenn are available, too.
Martin Alfke asked the question “Ops hates containers. Why?”, which he had come across in a customer meeting. Based on this experience he started to demystify containers in a very entertaining and memorable way. He focused on giving ops some tips and ideas about what you should learn before even thinking about having containers in production or implementing your own container management platform. As we do recordings, I really recommend you have a look at the video of the talk when the recordings are up in a few days.
Anton Babenko, in his talk “Lifecycle of a resource. Codifying infrastructure with Terraform for the future”, started where Mitchell's talk ended and dived really deep into module design and development for Terraform. Even to someone like me who is not very familiar with Terraform, he made a convincing case that it is possible to write well-designed code for it and that it is fun to experiment with and improve your own modules. Furthermore, he gave tips for handling the next Terraform release and for testing code during refactoring, which are probably very useful for module authors.
“The Computer Science behind a modern distributed data store” by Max Neunhöffer did a very good job of explaining the theory behind cluster leader election and consensus. The second topic covered was sorting of data and how modern technology has changed the way we have to look at sorting algorithms. Log-structured merge trees, the third topic of the talk, are a great way to improve write performance, and with some additional tricks also read performance, and are used by many database solutions. The fourth section was about hybrid logical clocks, which solve the problem of differing system clocks. Last but not least, Max talked about distributed ACID transactions (Atomic, Consistent, Isolated, Durable), which are important to keep data consistent but are much harder to achieve in distributed systems. It was really a great talk: while covering only theoretical computer science, Max made at least the basics very easy to understand and presented them in a way that gets people interested in these topics.
After this first day full of great talks, we will have the evening event in a sky bar with a good view of Berlin, more food, drinks and conversations. This networking is perhaps one of the most interesting parts of conferences. I will be back with a short review of the evening event and of day 2 tomorrow evening. If you want more details and a more live experience, follow #osdc on Twitter.


Systemd Unit Files and Multi-Instance Setups

A while ago I already wrote a short explanation of systemd unit files; this time I want to look at systemd's multi-instance feature, since it apparently is not known to everyone either.
As an example I will use Graphite, or more precisely its Python implementation carbon-cache. It does not scale automatically but requires you to start additional instances of the service on other ports. The configuration on the carbon-cache side is quite simple: you add a new section to an ini file containing the values to override, and the section name determines the name of the instance. For instance b, this looks like the following.

[cache:b]
LINE_RECEIVER_PORT = 2013
UDP_RECEIVER_PORT = 2013
PICKLE_RECEIVER_PORT = 2014
CACHE_QUERY_PORT = 7102

With SysV init you would have had to copy and adapt the start script for every instance. Since the shipped unit file unfortunately does not use the multi-instance feature, I have to do this once as well, but at least only once. It makes sense to change the unit's name so it does not conflict with the existing one; if you prefer, you can also “override” the original by using the same name. For the multi-instance feature, all you need is an @ at the end of the unit name. If you then put the placeholder %i in the appropriate places, the multi-instance setup is done and you can start a service as name@instance.service. In my example this is carbon-cache-instance@b.service, with the following unit file at /etc/systemd/system/carbon-cache-instance@.service.

[Unit]
Description=Graphite Carbon Cache Instance %i
After=network.target
[Service]
Type=forking
StandardOutput=syslog
StandardError=syslog
ExecStart=/usr/bin/carbon-cache --config=/etc/carbon/carbon.conf --pidfile=/var/run/carbon-cache-%i.pid --logdir=/var/log/carbon/ --instance=%i start
ExecReload=/bin/kill -USR1 $MAINPID
PIDFile=/var/run/carbon-cache-%i.pid
[Install]
WantedBy=multi-user.target
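Usage then looks like this; the systemctl calls are commented out because they require root and the template unit above to be installed:

```shell
instance="b"
unit="carbon-cache-instance@${instance}.service"
echo "$unit"   # → carbon-cache-instance@b.service
# systemctl daemon-reload            # pick up the new template unit
# systemctl enable --now "$unit"     # start instance b and enable it at boot
# systemctl status "$unit"           # %i inside the unit expands to "b"
```

Every further instance only needs another name after the @; no additional unit files are required.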

I hope this short explanation helps someone, and I would be happy to see more services that are designed for multiple instances shipped with an appropriate unit file in the future.


Configuration Management Camp Ghent 2018 – Recap

For the third time in a row I attended the Configuration Management Camp in Ghent. While I was the only one from NETWAYS last year, this time Lennart, Thilo, Blerim and Bernd joined me. Lennart had already attended the PGDay and FOSDEM, which took place the weekend before, so if you want to spend some days in Belgium and/or at Open Source conferences, the start of February is always a good time for it.
I really like the Configuration Management Camp, as it is very well organized, especially for a free event, and I always come back with much new knowledge and many ideas. The speaker list of the main track reads like a who's who of configuration management, and the community rooms have a big number of experts diving deep into one solution. This year there were 10 community rooms, including favorites like Ansible, Foreman and Puppet, but also new ones like Mgmt.
My day typically started with the main track, and when the community rooms opened I joined the Foreman room, while Lennart and Thilo could be found in the Puppet and Ansible rooms and Blerim and Bernd manned the Icinga booth. For the first time I gave a talk at the event, and not only one but two. My first one was “Foreman from a consultant's perspective”, where I tried to show what configuration management projects look like and how we solve them using Foreman, but also to show limitations; it got very positive feedback. The second one demonstrated the Foreman monitoring integration. In the other talks I learned about the Foreman Datacenter plugin, which is a great way to document your environment and will very likely find its way into our training material, Foreman Maintain, which will make upgrading even easier, the improvements in the Ansible integration, and Debian support in Katello.
But the conference is not only worth it because of the talks but also because of the community. I had very interesting conversations with Foreman developers, long-term contributors and beginners, but also with many other people I had got to know at other conferences and met here again. And sometimes this is the best way to get things done. For example, I talked with Daniel Lobato about a pull request for Ansible I was waiting to get merged; he then talked to a colleague, and now I can call myself an Ansible contributor. Or we talked about a missing hover effect in Foreman's tables, and some minutes later Ohad Levy had created the pull request and Timo Goebel had reviewed and merged it, while Marek Hulán created pull requests for the plugins requiring adjustments. And there was plenty of time for these conversations, with the speakers' dinner on Sunday, the evening event on Monday, the Foreman community dinner on Tuesday, and a comic-themed bar afterwards.
After the two days of conference there were now a total of 8 fringe events with beginner sessions and hacking space, which I can really recommend if you want to improve your knowledge of and/or involvement in a project. While the others left, I stayed one more day, as I had managed to arrange a day of Icinga 2 consulting at a customer in Ghent before I also started my way home with a trunk full of waffles and Kriek.


Monitoring – it’s all about integration and automation – OSMC 2017 Hackathon

OSMC 2017
Also this year we organized a hackathon as a follow-up and managed to get about 50 people to work on actual coding. We started again with a small round of introductions, so everyone had the chance to find people with the same interests or the knowledge they needed. Afterwards people started to hack on Icinga 2, Icinga Web 2, different modules, OpenNMS, Zabbix, Mgmt, NSClient++, Docker containers, Ansible and Puppet code, or simply helped others with configuration and other tasks to solve in their environments.
Here is a list of some things developed or at least designed today:
* Tom accepted and improved some of my pull requests, so the Director got more property modifiers
* He was also working on improving notifications to allow managing them via a custom attribute of hosts and services
* Markus was improving the Icinga packaging, resulting in new package releases for SLES and support for Fedora 27
* Bodo was trying to move the Ruby library for Icinga 2 to a 1.0.0 release and got valuable input from Gunnar on displaying API coverage
* Thomas improved his diagnostics script for Icinga 2 to help with troubleshooting
* Nicola was working on a graphical picker for the geolocation in the Director for his awesome Map module, while getting several other ideas and requests
* David started a single sign-on module for Icinga Web 2
* Mgmt got some improvements by Julien, Toshaan and James
* Michael was working on the Elastic integration and a web-based installer for NSClient++
* Gunnar and Michael discussed so many features that they actually did not find time for hacking, but keep your eyes open for Elastic 6 support and datatypes for arguments
* Steffen, Blerim and Michael discussed how to fix a problem with running two Icingabeat instances, which can now probably be solved
* Stephan finally solved the management issue of red alerts in Icinga Web 2 😉


Furthermore, an impressive amount of knowledge was transferred, user questions got answered and problems got solved. One thing I am really happy about is seeing a user apply the URL-encode property modifier, only minutes after it had been accepted by Tom, to create host groups including membership assignment from PuppetDB. But I want to end this blog post with one really cool thing Dave from the Australian Icinga partner Sol1 showed us. This map displays all pubs in Australia, because it monitors satellite receivers to visualize any large outages for Sky Racing Australia.
Map of Australian Pubs by Sol1
So have a nice weekend and keep on hacking.


Monitoring – it’s all about integration and automation – OSMC 2017 Day 2

The second day started with “Monitoring – dos and don'ts”, presented by Markus Thiel. The room was already full for the first talk, which was not expected with people moving from the evening event to the late lounge and then, at 5 o'clock in the morning, to the hotel. The event was great, with good food, drinks and chat, but Julia has already written about that, so I will focus on the talks. Markus nicely showed “don'ts” I also recognize from my daily work as a consultant and gave tips on how to avoid them. He got deep into the details, so I cannot repeat everything, but to summarize: the biggest problem is always communication between people or systems, as you perhaps already knew from your daily business.
The second talk I attended was Bodo Schulz talking about automated and distributed monitoring of a continuous integration platform. He created his own service discovery named Brain, which discovers services and puts them into Redis, which is then read by Icinga 2 and Grafana to create their configuration. Pinky is his simple visualization stack consisting of containers. Both are integrated into the platform: one Brain for every pipeline, one Pinky for every team. (If you did not get the reference, watch the intro on YouTube.) His workarounds for features he missed were also quite interesting, like implementing his own certificate signing service for Icinga 2 or displaying license data in Grafana. And of course he had a live demo to show all this fancy stuff, which was great to see.
Tom gave the third talk of the day, about automated monitoring in heterogeneous environments, showing real-life scenarios using the Director's capabilities. He started with the basics, explaining how import, synchronization and jobs work, followed by importing from an old Icinga environment utilizing SQL and the IDO database. In the typical scenario of importing from a CMDB, Tom showed common problems like bad quality of the input data and how to work around them with the Director to get good-quality output. Another scenario explained how to get data from Active Directory for the Windows part of your environment. For VMware users he showed the already released vSphere module and also the prototype of the vSphereDB module, which adds some more visualization, and for AWS users the corresponding module. The last scenario showed how to import Excel files using the Fileshipper. And of course he explained how easy it is to create your own import source.
Right after the excellent lunch and the even better event massage, Marianne Spiller's talk “Ich sehe was, was du nicht siehst (… und das ist CRITICAL!)” (in English: “I spy with my little eye something CRITICAL!”) focused on how to get a good monitoring environment with high user acceptance up and running. Being realistic and showing everyone their benefits were the best tips she gave, but even she could not provide the one solution that fits all. For more of her tips, ranging from technical to organizational, I can recommend her blog.
Lennart and Janina Tritschler were talking about distributed Icinga 2 environments automated by Puppet. I was really happy to see the talk, because Janina adopted Icinga 2 after a fundamentals training I gave about a year ago. They started with a basic introduction to distributed monitoring with Icinga 2 as master, satellite and agent, and to configuration management with Puppet, including exported resources. Afterwards they dived deeper into the Puppet module for Icinga 2 and how to use it for installation and configuration of the environment. Their demos included several virtual machines to show how easily this can be done.
In the last break, the winner of the gambling at the evening event received his prize, a retro game console.
Last but not least, I decided on Kevin Honka's talk “Icinga 2 + Director, flexible Thresholds with Ansible” instead of Thomas talking about troubleshooting Icinga 2. But I am sure his talk was great, as troubleshooting is his daily business as our lead support engineer. Kevin was unhappy with the static thresholds configured in their monitoring environment, so he started to develop a Python script to include in his Ansible workflow, which modifies thresholds using the Director API. On his roadmap are extracting an Icinga 2 Python library usable by others, utilizing this library in a real Ansible module, and extending the functionality.
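Scripted threshold changes of this kind go through the Director's REST API; as a rough sketch, the snippet below only builds the JSON payload, while the curl calls stay commented out, and the URL, credentials and the custom variable name are assumptions for illustration, not Kevin's actual code:

```shell
# custom variable carrying the threshold; "load_warning" is a made-up name
payload='{"vars":{"load_warning":4}}'
echo "$payload"   # → {"vars":{"load_warning":4}}
# modify the host object in the Director ...
# curl -s -u icinga:secret -H 'Accept: application/json' \
#      -X POST 'https://icinga.example.com/icingaweb2/director/host?name=web01' \
#      -d "$payload"
# ... and deploy the pending configuration afterwards
# curl -s -u icinga:secret -H 'Accept: application/json' \
#      -X POST 'https://icinga.example.com/icingaweb2/director/config/deploy'
```

Wrapped in an Ansible task, this is essentially what turns static thresholds into data your playbooks control.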
Thanks to all speakers, attendees and sponsors leaving today for the great conference, safe travels, and see you next year on November 5th-8th for the next OSMC. And of course a nice dinner and happy hacking to all those staying for the hackathon tomorrow; I will keep our readers informed about the crazy things we manage to build.


Monitoring – it’s all about integration and automation – OSMC 2017 Day 1

Also for the 12th OSMC, we started on Tuesday with a couple of workshops on Icinga, Ansible, graphing and Elastic, which were popular as always, followed by meet and greet at the evening dinner. But the real start was, as always, a warm welcome from Bernd, introducing all the small changes we made this year, like having so many great talks that we ran three in parallel on the first day. Also, for the first time we had more English talks than German ones, and we are getting more international from year to year, which is also the reason why I am blogging in English.
The first talk of the day I attended was James Shubin talking about "Next Generation Config Mgmt: Monitoring", as he is a great entertainer and mgmt is really a great tool. Mgmt is primarily a configuration management solution, but James managed in his demos to build a bridge to monitoring, as mgmt is event-driven and very fast. For example, he showed mgmt recreating deleted files faster than a user could notice they were gone. Another demo of mgmt's reactivity visualized the noise in the room; perhaps not the most practical one, but it showed what you can do with flexible inputs and outputs. In his hysteresis demo he showed mgmt monitoring the system load and scaling the number of virtual machines up and down depending on it. James is, as always, looking for people to join the project and help hacking, so have a look at mgmt (or the recording of one of his talks) and perhaps join what could really be the next generation of configuration management.
The second one was Alba Ferri Fitó talking about how the community helped her do monitoring at Vodafone, in her talk "With a little help from…the community". She showed several use cases: for VMware monitoring she moved from passively collecting SNMP traps to proactively monitoring the infrastructure with check_vmware_esx. She also helped to integrate monitoring into the provisioning process with vRealize using the Icinga 2 API, created a corporate theme to improve acceptance, implemented log monitoring using the sticky option of check_logfiles, wrote her own scripts to monitor things she was told could only be monitored by SCOM, and used expect for things that only offer an interactive "API". It was a great talk, sharing knowledge and crediting the community for all the code and help.
Carsten Köbke and our Michael were telling "Ops and dev stories: Integrate everything into your monitoring stack". Carsten, as the developer of the Icinga Web 2 module for Grafana, started the talk with his motivation behind the module and the experience gained by developing it. Afterwards Michael showed more integrations, like the Map module placing hosts on an OpenStreetMap, dashboards, ticket systems, and log and event management solutions like Graylog and Elastic, including the Icingabeat and a very early prototype (created the day before) of a module for Graylog.
After lunch, which was great as always, I attended "Icinga 2 Multi Zone HA Setup using Ansible" by Toshaan Bharvani. He is a self-employed consultant with a history in monitoring: he started with Nagios, used Icinga and Shinken for a while, and now utilizes Icinga 2 to monitor his customers' environments. His Ansible playbooks and roles were a good practical example of how to get such a distributed setup up and running, and he managed to explain it in a way that even people not using Ansible at all could understand.
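Toshaan's playbooks weren't published as part of the talk, but the zone hierarchy such automation typically renders boils down to a zones.conf along these lines (all hostnames are placeholders). Two endpoints in one zone form the HA pair, and child zones point to their parent:

```
// zones.conf, deployed to every node by the automation
object Endpoint "master1.example.com" { }
object Endpoint "master2.example.com" { }

// Two endpoints in one zone make this zone highly available
object Zone "master" {
  endpoints = [ "master1.example.com", "master2.example.com" ]
}

object Endpoint "satellite1.example.com" { }

object Zone "satellite" {
  endpoints = [ "satellite1.example.com" ]
  parent = "master"
}

// Global zone for templates and commands shared with all nodes
object Zone "global-templates" {
  global = true
}
```

The value of doing this with Ansible is that the endpoint and zone lists can be generated from the inventory instead of being maintained by hand on every node.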
Afterwards Tobias Kempf, the monitoring admin, and Michael Kraus, the consultant supporting him, talked about a highly automated monitoring setup for Europe's biggest logistics company. They used OMD to build a multilevel distributed monitoring environment with centralized configuration managed through a custom web interface, coshsh as the configuration generator together with git, load distribution with mod_gearman, and patch management with Ansible.
As the last talk, like every year, Bernd (representing the Icinga team) showed the "Current State of Icinga". Bernd shortly introduced the project and team members before showing some case studies, like Icinga being deployed on the International Space Station. He also promoted the Icinga Camps and our effort to help people run more Icinga meetups. Afterwards he dove into technical topics like the new incarnation of Icinga Exchange, including full GitHub sync, and the documentation and package repositories, including download numbers which peaked at a crazy 50,000 downloads just for CentOS in one day. Diving even deeper into Icinga itself, he showed the new CA proxy feature allowing multilevel certificate signing and automatic renewal, which was sponsored by Volkswagen, like some other features, too. Some explanation of the project's efforts on configuration management and which API to use in an Icinga 2 environment for different use cases followed, before hitting the topic of logging. Here the Icinga project now provides output for Logstash and Elasticsearch in Icinga 2, the Icingabeat, a Logstash output which can create monitoring objects in Icinga 2 on the fly and, last but not least, the Elasticsearch module for Icinga Web 2. In his demos he also showed the improved Icinga Web 2, which adds even more eye candy. Speaking of eye candy, the latest version of the Graphite module, which will be released soon, also looks quite nice. Another pending release is the Icinga Graphite installer, which uses Ansible and packaging to provide an easy way to set up Graphite. So keep an eye on the release blog posts in the coming weeks.
It is nice to see topics shift through the years. While automation and integration were quite present in recent years, this year they were the main focus of many talks. This nicely fits my opinion that, as a software developer, you should care about APIs to allow easy integration, and as an administrator you should provide a single interface, which I sometimes call a "single point of administration".
Colleagues have collected some pictures for you; if you want to see more, follow us or #osmc on Twitter. Enjoy these while I enjoy the evening event, and I will be back tomorrow to keep you updated on the talks of the second day.


Custom Datatypes in Puppet

Puppet Logo
One of the biggest changes in Puppet 4 was the typing of variables, so that a data type can be specified for each parameter directly when parameterizing a class or defined resource. Since the value is then automatically checked against this type, the validation functions from stdlib become unnecessary. Besides simple data types like Integer and String, more complex constraints can be specified. The simplest example would be restricting the length of a string by specifying a minimum and maximum length, like String[1,10] for a string of at least one and at most ten characters. It gets a bit more complex with a list of values like Enum['running','stopped'], with Optional[String] if a value may also be undefined, or with Pattern[] to match against a regular expression; but even this reaches its limits at some point, and you end up with the abstract data types.
The most abstract data type is Struct, which allows you to define a completely custom structure as a hash. And this is where a new feature of Puppet 4.8 comes into play, which allows you to create custom data types as aliases. These live in the types subfolder of a module and, as usual, have to follow a naming convention so that autoloading works. A very good example can be found in the systemd module by Camptocamp, which Ewoud Kohl van Wijngaarden kindly pointed out to me.
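To illustrate the pattern, here is a minimal sketch with a hypothetical profile module (module, alias and parameter names are made up for this example, not taken from the Camptocamp module):

```puppet
# modules/profile/types/ensure.pp
# Autoloaded as Profile::Ensure because of file name and path.
type Profile::Ensure = Enum['running', 'stopped']

# modules/profile/types/listen.pp
# A Struct alias describing a listener; address may be left out.
type Profile::Listen = Struct[{
  address => Optional[String[1]],
  port    => Integer[1, 65535],
}]

# Usage in a class: parameters are validated automatically
# against the aliases, no stdlib validate_* functions needed.
class profile::app (
  Profile::Ensure $ensure = 'running',
  Profile::Listen $listen = { 'port' => 8080 },
) {
  # ...
}
```

The advantage over writing the Struct inline is that the alias documents the intent, can be reused across classes, and produces a readable name in type-mismatch error messages.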


Foreman training now also with Ansible and monitoring integration

English version below
Foreman logo
It has now been almost one and a half years since I was allowed to publish the training materials, which were created in cooperation with the Foreman project and are listed there as official training. Since then a lot has happened in the project itself, but especially around the plugins. I was also able to collect feedback in official and in-house trainings as well as workshops. To account for this, I try to update and extend the training regularly.
Foreman training presentation
Since this update is a bigger one, I thought I would summarize it in a blog post. This time I dared to base the training on the release candidate of 1.16 instead of an already established release, as it brings support for Puppet 5. Even though I am not thrilled about the higher resource requirements of Puppet 5, which place significantly higher demands on our training notebooks, it was time to account for this development, so from now on Puppet 5 is the default in the trainings. Speaking of configuration management, I can also present the first big addition: the Ansible integration is now part of the training. This is owed to the interest that has shown itself in requests across all areas at NETWAYS, among colleagues, on the Foreman mailing lists and at the Foreman birthday party.
The second big addition is the monitoring integration, of which I am personally very proud. Quite some time went into preparing the exercise alone, to ensure the best possible training experience for the attendees. The changes in the OpenSCAP plugin were addressed with an optional exercise: optional so that it does not eat up time if attendees have no need for it, but especially with tailoring files an OpenSCAP policy can be adapted to your own needs very easily. In the previous update I had already added the "Expire Hosts" plugin, as I had identified corresponding requirements in many customer environments. Unfortunately, I had to remove the ABRT plugin, at least temporarily, as it first needs to be updated to be compatible with Foreman and the smart proxy again.
