Since our apprentices have a whole blog series in which they regularly share their experiences, I thought I would turn the tables for once. In the following lines, interested readers will find my personal view on why we train apprentices, what my goal for them is, how I try to achieve it over the three years, and, of course, what I therefore look for when selecting apprentices.
Why do we train apprentices?
About three years ago, Bernd brought up the company's age structure in my annual review: we are a young company and want to stay that way. This led to a calculation of how many apprentices at which average age we would need to keep the average down, under the assumption that we also want to keep offering all our aging employees a perspective. A second factor was that Professional Services (our consulting and support department) in particular finds it difficult to keep pace with the growth in orders, since it needs people with a broad base of knowledge, some specialist knowledge, the soft skills to be let loose on customers, a willingness to travel and, to be honest, no utopian salary expectations. These two factors sparked the discussion of whether we could also train apprentices in Professional Services. After some conditions that were important to me had been agreed, such as a rotation through the departments and sufficient time for mentoring and training the apprentices, Professional Services joined the other departments, which had already been training apprentices for much longer, with the 2017 intake.
What is the goal?
My declared goal is to enable the apprentices, after three years, to make a well-founded decision about their further career, and to have taught them the fundamentals for whichever path they choose. The options open to them are, in my view, Junior Consultant in Professional Services, another position at NETWAYS without travel, a move to another company, or even a change to a completely different field. My declared dream outcome would of course be the Junior Consultant!
How do I want to achieve this goal?
In the first year, I consider IT fundamentals and soft skills the most important, and both are taught from day one. To teach the IT fundamentals we rely on a mix of training sessions held by experienced colleagues, projects in which the apprentices work through topics independently, and practical work at Managed Services.
The training sessions start right in the first full week with Linux basics, followed later in the first year by SQL basics, networking basics, and DNS & DHCP; further into the apprenticeship, Linux packaging, virtualization and system security are planned. As their first project, every intake so far has had to set up a LAMP stack together: each apprentice implements, documents and presents a subtask, but in the end a joint result has to be achieved as well. Further projects usually arise from current requirements or evaluation setups and are not trivial chores like creating user accounts. For example, the apprentices have tested Portainer, and have taken over the hardware maintenance from a colleague and tested it afterwards. At Managed Services the apprentices are then confronted with real-world tasks and get to wrestle with, for example, the API of our CMDB.
For me, self-management is among the most important soft skills, so the apprentices learn to handle the same working-time rules as everyone else, to record the hours they have worked, to keep their tickets up to date, and whatever else is needed to keep things running smoothly. Communication and demeanor are of course equally important, as are presentation and documentation tailored to the audience. The department rotation certainly helps here, too: the apprentices take a turn on phone duty, are taught the basics of accounting, or help organize and run a training course.
In the second and third years we build on these foundations: in addition to the internal training sessions, the apprentices attend our official training courses, the internal projects become more demanding, more requirements for documentation and presentation have to be met, and technical discussions follow each presentation. In addition, one apprentice always supports the colleagues in Support and provides operational support for customers. The department rotation continues: Sales receives technical support for webinars and pre-sales appointments, and our systems-integration apprentices get to know our developers and development in Open Source projects. It is particularly important to me that the apprentices learn how to deal with customers, by accompanying experienced consultants and taking on their own tasks there, or by acting as co-trainer in training courses and covering individual topic blocks.
As a reward for good performance there are conference attendances, or even handling a customer project completely independently, although the latter is always done remotely so that an experienced colleague can assist, as with the internal projects.
Planning all of this, finding projects that take the individual strengths and weaknesses of the apprentices into account, and mentoring the apprentices naturally takes a lot of time, which I and the supporting colleagues have thankfully been given. For all of this to work, Professional Services depends not only on the cooperation of everyone on the team but also on the support of the other departments and the understanding of our customers. A heartfelt thank-you for that at this point.
Selecting the apprentices
Anyone who now thinks that, to reach this goal, we expect applicants to bring specific prior knowledge will be surprised: I regard that as a bonus, but entirely different things matter to me. My first contact with an applicant is when their application documents are forwarded to me and I am asked for my opinion. So the first thing I look at is the cover letter, to understand the motivation for the career choice and the path taken so far. It is a bad sign if these cannot be followed and the applicant has not even bothered to run a spell checker or find a proofreader. A CV then simply has to be coherent; grades are interesting, but far more interesting is the picture that emerges from the remarks in the school reports.
Whoever manages to convince at this stage has cleared the first hurdle and is invited to an interview. Here the candidate's demeanor simply has to convince, and technical understanding, interest and, of course, motivation have to be discernible. This sounds simple, but anyone not involved in such a hiring process will hardly believe how many fail at simple things like punctuality, or that “I like tinkering with computers” is not the best motivation. An applicant who will not yet be of age by the second year has to be a little more convincing, since the organizational overhead for the travel involved in consulting is considerably higher. But it is no more an exclusion criterion than a higher age would be, for example for someone who dropped out of university or is doing a second apprenticeship.
I hope every reader has gained some understanding of and insight into the apprenticeship at NETWAYS. Some customers may now no longer be surprised when asked whether the consultant can be accompanied by an apprentice. Other training companies are welcome to take away some ideas, and I am generally always interested in exchanging experiences. Above all, I will be delighted if someone now thinks we could be the right training company for them and wants to apply.
Quite often I get asked how someone can get involved with an Open Source project with no, or at most minimal, development skills. I have some experience with this topic, as I am quite involved in several projects, especially Icinga and Foreman, and mostly not because of my development skills, which do exist but could use more exercise. So here is my non-exhaustive list of roles a non-developer can take on in a project.
A good point to start is talking about the project. I know this sounds very simple, and it is indeed. Spread the word about what you like about the project and how you use it; by doing so you help increase the user base, and with a growing user base there will also be an increase in developers. But where to start? The internet gives you many opportunities with forums, blogs and other platforms, or, if you prefer offline communication, you can go to local computer clubs, join a user group meeting, help at a conference booth or even send in a paper for a conference. And let the project know about it so they can promote your talk.
Another option for getting involved quite easily is bug reporting. If you use some software more and more, the time will come when you find a bug, so simply report it. Try to find a way to reproduce it; add relevant details, configs and logs; explain why you think it is a bug or a missing feature, and the developers can hopefully fix it. If you like the result and want to get more involved, start testing optional workflows, plugins or similar edge cases that you expect to be less well tested because they have a smaller user base. Another option is testing release candidates or even nightly builds, so bugs can perhaps be fixed before they hit the masses. If they are offered, take part in test days for new features or versions and test as many scenarios as you can, or go a step further and help organize a test day. With minimal development know-how you can even dig into the code and help fix bugs, so this is also a good path if you want to improve that knowledge.
As the user base of a project grows, the number of questions asked every day grows with it. This can easily reach a point where the developers either have to spend more time answering questions than coding, or have to ignore questions; both can cause a project to fail. Experienced users sharing their knowledge with newbies can be a big help. So find out how a project wants such questions to be handled, and answer them. This can vary from mailing lists, IRC and community forums or panels to flagging issues as questions; if no separate platform is provided, the community may meet on Server Fault, Stack Overflow or another common site shared by many projects. Also very important is helping to route communication into the right channel: help new users file a good bug report, and tell people not to spam the issue tracker with questions better handled on the community platform.
Projects lacking good documentation are very common, and even where the documentation is good it can still be improved by adding examples and how-tos. Pull requests improving documentation are likely to be accepted, and if the project uses a wiki, access is usually granted. But if you feel more comfortable creating your own source of documentation, write how-tos on your private blog or even create video tutorials on YouTube. Many projects will link to them, at least in community channels, if you let them know.
Growing the user base by making the software available to more people through additional languages is always a good thing. But probably everyone knows at least one project where selecting a language other than English feels like automatic translation or a mix of languages, or where you would simply have chosen different words. Even with some basic knowledge of the software and a feeling for your native language, you can help improve the translation. In most cases you do not need special knowledge of tools or translation frameworks, as projects try to keep the barriers low for translators.
When projects grow, they need more and more infrastructure for hosting their website, documentation, community platform, CI/CD and build pipeline, repositories and so on. Helping to manage this infrastructure is a really good way for an ops person to get involved. Perhaps it is also possible to donate computing resources (dedicated hardware or a hosted virtual machine), which the project will often honor by adding you to its list of sponsors, giving you some good publicity.
Nowadays a project is not only about the software itself. Making installation easier by providing packages and support for configuration management, or making it more secure by providing security analyses, hardening guides or policies, is something system administrators are often more capable of than developers. From my experience, providing spec files for RPM packaging, Puppet modules or Ansible roles to support automation, or an SELinux policy for securing the installation, is happily accepted as a contribution.
As I said, this list is probably not complete, and you can also mix roles very well, for example starting by talking about the project and filing bugs of your own and perhaps ending up as a community supporter well known for a blog providing in-depth guides. But it should at least give you some ideas of how to get involved in Open Source projects without adding developer to your job description. And if you are using the project in your company's environment, ask your manager whether you can get some time assigned for supporting the project; in many cases you will.
Right after OSDC we helped organize the Open Source Camp, a brand-new series of events giving Open Source projects a platform to present themselves to the community. The event started with a short introduction to the projects covered in the first edition, Foreman and Graylog. For the Foreman part, Sebastian Gräßl, a long-term developer, gave a short overview of Foreman and its community, so that people attending for Graylog would also know what the other talks were about. Lennart Koopmann, who founded Graylog, did the same for the other half, including the upcoming version 3 and all its new features.
Tanya Tereshchenko, one of the Pulp developers, started the sessions with “Manage Your Packages & Create Reproducible Environments using Pulp”, giving an update on Pulp 3. To illustrate the workflows covered by Pulp she used the Ansible plugin, which will allow you to mirror Ansible Galaxy locally and stage the content. Of course, Pulp also allows you to add your own content to your local version of the Galaxy and serve it to your systems. The other plugins for which a beta version is already available for Pulp 3 are python, to mirror PyPI, and file, for content of any kind, but more are in various stages of development.
“An Introduction to Graylog for Security Use Cases” by Lennart Koopmann was about bringing the idea of threat hunting to Graylog with a plugin providing lookup tables and a processing pipeline. In his demo he showed all of this based on event logs collected by their honeypot domain controller, and I can really recommend the insights you can get with it. I still remember how much work it was to get such things up and running ten years ago at my former employer with tools like rsyslog, and I am very happy to have tools like Graylog nowadays that provide this out of the box.
Alexander Olofsson and Magnus Svensson came from Sweden to talk about “Orchestrating Windows deployment with Foreman and WDS”. Being Linux administrators, they wanted to give their Windows colleagues a similar experience on a shared infrastructure, and they shared their journey towards this goal. They have created a small Foreman plugin for WDS integration into the provisioning process, which has been released in its first version. Although it was a rather short presentation, it started a very interesting discussion, as the audience also consisted mostly of Linux administrators, yet nearly everyone had to deal with Windows in one way or another, too.
My colleague Daniel Neuberger introduced Graylog with “Catch your information right! Three ways of filling your Graylog with life.” His talk covered Graylog's architecture, what types of logs exist and how you can get at least the common ones into Graylog. Some very helpful tips from practical experience spiced up the talk, such as: never ever run Graylog as root just to be able to receive syslog traffic on port 514; if the client cannot change the port, your iptables rules can do so. Another tip showed a fallback configuration for rsyslog using the execOnlyWhenPreviousIsSuspended action. And like me, Daniel prefers not only to talk about things but also to show them live in a demo, something I recommend to anyone giving a talk, as the audience will always honor it, but keep in mind to always have a fallback.
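The port-redirect trick mentioned above can be sketched as a firewall rule; note that 1514 here is an assumed unprivileged port for the Graylog syslog input, so adjust it to whatever your input actually listens on:

```shell
# Redirect the privileged syslog port 514 to an unprivileged Graylog
# input on 1514, so Graylog itself never needs to run as root.
iptables -t nat -A PREROUTING -p udp --dport 514 -j REDIRECT --to-ports 1514
iptables -t nat -A PREROUTING -p tcp --dport 514 -j REDIRECT --to-ports 1514
```

This only handles traffic arriving from other hosts; for clients logging locally, an OUTPUT-chain rule or changing the client configuration is still needed.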
Timo Goebel started the afternoon sessions with “Foreman: Unboxing”, and like in a traditional unboxing he showed all the plugins Filiadata has added to their highly customized Foreman installation. This covered the integration of Omaha (the update management of CoreOS), a rescue mode for systems, VMware status checking, distributed lock management to help with automatic updates in cluster setups, the Spacewalk integration they use for systems managed by SUSE Manager, host expiration, which helps to keep your environment tidy, monitoring integration, and the one he is currently working on, which provides cloud-init templates when cloning virtual machines from templates in VMware.
Jan Doberstein did exactly what you would expect from a talk called “Graylog Processing Pipelines Deep Dive”. Having been a support engineer at Graylog for several years now, his advice comes from experience in many different customer environments, and while statements like “keep it simple and stupid” are made often, they remain true but also unheard by many. Those pipelines are really powerful, especially when done well, and even more so now that they can be included and shared via content packs with version 3.
Matthias Dellweg, one of those guys from ATIX who brought Debian support to Pulp and Katello, talked about errata support for it in his talk “Errare Humanum Est”. He started by explaining the state of errata in the RPM world and the differences in the DEB world. Afterwards he showed the state of their proof of concept, which looks like a big improvement, bringing DEB support in Katello to the same level as RPM.
“How to manage Windows Eventlogs” was brought to the audience by Rico Spiesberger, with support from Daniel. The diversity of their environment posed some challenges, which they wanted to solve by monitoring the logs for events that history had proven to be problematic. Collecting the events from over 120 Active Directory servers in over 40 countries now generates over 46 billion documents a day in Graylog, and a good idea of what is going on. No such big numbers, but even more detailed dashboards, were created for the certificate authority. Expect all their work to be available as a content pack once Graylog 3 makes it possible to export them.
Last but not least, Ewoud Kohl van Wijngaarden told us the story of software going “From git repo to package” in the Foreman project. Seeing all the work needed to cover different operating systems and software versions for Foreman and its large number of plugins, and even more for Katello and all its dependencies, is impressive and explains why some things take longer, but always arrive at high quality.
I think it was a really great event which, judging from the feedback I got, not only I enjoyed. What I really like about the format is that the talks dive deeper into the projects than most other events can, and I am looking forward to the next edition. Thanks to all the speakers and attendees, and safe travels home to everyone.
The evening event was a great success. While some enjoyed the great view from the Puro Sky Bar and others liked the food and drinks even more, I for one preferred the networking. I joined some very interesting discussions about very specific IT tools, work-life balance, differences between countries and cultures, and so on. So thanks to everyone, from our events team through to the attendees, for a great evening.
But even a great evening and a short night did not keep me and many others from joining Walter Gildersleeve for the first talk, “Puppet and the Road to Pervasive Automation”. He introduced Puppet's new tools for improving the configuration management experience, such as Puppet Discovery, Pipelines and Tasks. What I liked about his Tasks demos was that he showed not only what the Enterprise version can do, but also what the Open Source version is capable of. Pipelines is Puppet's CI/CD solution, which can be used as SaaS or on premises, and I for one have to admit it looks very nice and informative. If you want to give it a try, you can sign up for a free account and test it with a limited number of nodes.
The second talk of the day was Matt Jarvis with “From batch to pipelines – why Apache Mesos and DC/OS are a solution for emerging patterns in data processing”. Like several others, he started with the history from mainframes via hardware partitioning and virtualization to microservices running in containers. After this introduction he dug deeper into container orchestration and the changes in modern application design that add the complexity Mesos is meant to tame. Matt then gave a really good overview of different aspects of the Mesos ecosystem and DC/OS. As this is quite a complex topic, a list of everything he covered would be quite exhaustive, but to mention a few, he covered service discovery and load balancing, for example.
Michael Ströder, whom I know as a great specialist for secure authentication from working with him at a customer in the past, introduced “Æ-DIR — Authorized Entities Directory” to the crowd. You could see his experience when he talked about the goals and paradigms applied during development, which resulted in the two-tier architecture of Æ-DIR, consisting of a writable provider and readable consumers with access separated based on roles. Installation is quite easy with the provided Ansible role and results in a very secure setup, which I really like for a central service like authentication. The customer scenarios he showed, using features like SSH proxy authorization and two-factor authentication with a YubiKey, make Æ-DIR sound like a truly production-ready solution. If you want to have a look at it without installing it, a demo is provided on the project's web page.
The first talk after lunch was “Git Things Done With GitLab!” by my colleagues Gabriel Hartmann and Nicole Lang, about GitLab and why NETWAYS chose it for inclusion in our web services. Nicole gave a very good explanation of the basic functions, which Gabriel showed live in a demo, followed by a cherry-pick of nice features provided by GitLab. These features, like the issue tracker and CI/CD, were also shown live. I was really excited by the beta of Auto DevOps, which lets you get CI/CD up and running very easily.
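To give a flavor of how little it takes to get GitLab CI/CD started, a minimal .gitlab-ci.yml might look like the following; the stage names, image and commands are illustrative assumptions on my side, not taken from the talk:

```yaml
# Minimal GitLab CI/CD pipeline: two stages, run on every push
stages:
  - test
  - build

test:
  stage: test
  image: python:3-slim          # any image your project needs
  script:
    - python -m unittest discover

build:
  stage: build
  script:
    - mkdir -p dist
    - echo "build output" > dist/info.txt
  artifacts:
    paths:
      - dist/                   # kept by GitLab and downloadable from the UI
```

Committing such a file to the repository is all that is needed; GitLab picks it up automatically and runs the pipeline on the next push.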
Thomas Fricke’s talk “Three Years Running Containers with Kubernetes in Production” was a very good talk about the things you should know before moving containers and container orchestration into production. But interesting as it was, I had to prepare for my own, because I was giving the last talk of the day, “Katello: Adding content management to Foreman”, which consisted primarily of demos showing all the basic parts.
It was a great conference again this year; I really want to thank all the speakers, attendees and sponsors who made it possible. I am looking forward to more interesting and even more technical talks at the Open Source Camp tomorrow, but wish safe travels to all those leaving today and hope to see you next year on May 14-15.
Now, for the fourth time, OSDC started in Berlin with a warm welcome from Bernd and a fully packed room with approximately 140 attendees. This year we made a small change to the schedule by doing away with the workshop day and adding a smaller conference afterwards: the Open Source Camp, which will be about Foreman and Graylog, but more on that on Thursday.
The first talk was Mitchell Hashimoto with “Extending Terraform for Anything as Code”, who started by showing how automation has evolved in information technology and explained why it is so important, before diving into Terraform. Terraform provides a declarative language to automate everything that provides an API, a plan command to see the required changes, and an apply command to then carry them out. While this is quite easy to grasp for something like infrastructure, Mitchell showed how the number of possibilities has grown with Software-as-a-Service and with everything now having an API. One example was how HashiCorp manages employees and their permissions with Terraform. After the examples of how you can use existing providers, he gave an introduction to extending Terraform with custom providers.
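The declarative plan/apply workflow can be sketched with a minimal configuration; the provider and resource below are hypothetical examples of "permissions as code" in my own wording, not material from the talk:

```hcl
# Declare the desired state; `terraform plan` shows the diff between this
# declaration and reality, `terraform apply` makes reality match it.
terraform {
  required_providers {
    github = {
      source = "integrations/github"
    }
  }
}

# Hypothetical example: manage an organization membership as code
resource "github_membership" "alice" {
  username = "alice"
  role     = "member"
}
```

Removing the resource block and running plan/apply again would likewise remove the membership, which is exactly what makes such configurations auditable and reviewable like any other code.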
The second talk was “Hardware-level data-center monitoring with Prometheus”, presented by Conrad Hoffmann, who gave us a look inside Soundcloud's data center and their monitoring infrastructure before Prometheus, which looked like a zoo. Afterwards he highlighted the key features that made them move to Prometheus, with Grafana for displaying the collected data. In his section about exporters he detailed which exporter replaced which tool from the former zoo and gave some tips from practical experience. And last but not least he summarized the migration and why it was worth doing, as it gave them a more consistent monitoring solution.
Martin Schurz and Sebastian Gumprich teamed up to talk about “Spicing up VMWare with Ansible and InSpec”. They started by looking back at the old days, when they had only special-purpose servers and, later on, manually managed virtual machines, at how this slowly improved with VMware's management tools, and at how it looks now with their current mantra: “manual work is a bug!”. They showed example playbooks for provisioning the complete stack from virtual switch to virtual machine, hardening it according to their requirements, and managing the components afterwards. As the last part of the Ansible section, they described how they implemented the Python code for an Ansible module that moves virtual machines between datastores and hosts. For testing all this automation they use InSpec, and management's requirement for some tracking of the environment was met using ansible-cmdb.
After the lunch break I visited the talk on “OPNsense: the ‘open’ firewall for your datacenter”, given by Thomas Niedermeier. OPNsense is a HardenedBSD-based Open Source firewall including a nice configuration web interface, Spamhaus blocklists, an intrusion prevention system and many more features. I think that with all these features OPNsense does not have to shy away from comparison with commercial firewalls, and if enterprise-grade support is required, partners like Thomas-Krenn are available, too.
Martin Alfke asked the question “Ops hates containers. Why?”, which he had come across in a customer meeting. Based on this experience, he set out to demystify containers in a very entertaining and memorable way. He focused on giving ops people some tips and ideas about what to learn before even thinking about running containers in production or implementing their own container management platform. As we record the talks, I really recommend having a look at the video when the recordings are up in a few days.
Anton Babenko, in his talk “Lifecycle of a resource. Codifying infrastructure with Terraform for the future”, started where Mitchell's talk had ended and dived really deep into module design and development for Terraform. Not being very familiar with Terraform myself, I was at least convinced that it is possible to write well-designed code for it and that experimenting with and improving your own modules can be fun. Furthermore, he gave tips on handling the next Terraform release and on testing code during refactoring, which are probably very useful for module authors.
“The Computer Science behind a modern distributed data store” by Max Neunhöffer did a very good job of explaining the theory used in cluster leader election and consensus. The second topic covered was the sorting of data and how modern hardware has changed the way we have to look at sorting algorithms. Log-structured merge trees, the third topic of the talk, are a great way to improve write performance, and, with some additional tricks applied, also read performance; they are used by many database solutions. The fourth section was about hybrid logical clocks, which solve the problem of differing system clocks. Last but not least, Max talked about distributed ACID transactions (Atomic, Consistent, Isolated, Durable), which are important to keep data consistent but are much harder to achieve in distributed systems. It was a great talk: although it covered only theoretical computer science, Max made at least the basics very easy to understand and presented them in a way that gets people interested in these topics.
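Hybrid logical clocks are compact enough to sketch in a few lines. The following is my own minimal illustration of the common (physical, logical) pair formulation, an assumption on my side and not code from the talk or from any particular database:

```python
import time

class HybridLogicalClock:
    """Minimal sketch of a hybrid logical clock (HLC): timestamps are
    (physical, logical) pairs that respect causality even when the
    physical clocks of different nodes disagree."""

    def __init__(self, now=time.time):
        self.now = now  # physical clock source, injectable for testing
        self.l = 0.0    # largest physical timestamp observed so far
        self.c = 0      # logical counter ordering events within one timestamp

    def tick(self):
        """Local or send event: follow the physical clock if it advanced,
        otherwise bump the logical counter."""
        pt = self.now()
        if pt > self.l:
            self.l, self.c = pt, 0
        else:
            self.c += 1
        return (self.l, self.c)

    def update(self, remote):
        """Receive event: merge a remote (l, c) timestamp so the result is
        larger than both our previous state and the remote timestamp."""
        rl, rc = remote
        pt = self.now()
        if pt > self.l and pt > rl:
            self.l, self.c = pt, 0
        elif rl > self.l:
            self.l, self.c = rl, rc + 1
        elif rl == self.l:
            self.c = max(self.c, rc) + 1
        else:
            self.c += 1
        return (self.l, self.c)

# Even with a frozen physical clock, causal order is preserved:
clock = HybridLogicalClock(now=lambda: 100.0)
a = clock.tick()              # local event
b = clock.update((105.0, 3))  # message from a node whose clock runs ahead
assert a < b                  # tuples compare lexicographically
```

Because timestamps stay close to physical time but never move backwards, they can be used to order events across nodes without requiring perfectly synchronized system clocks.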
After this first day full of great talks, we will have the evening event in a sky bar with a good view of Berlin, more food, drinks and conversations. This networking is perhaps one of the most interesting parts of conferences. I will be back with a short review of the evening event and of day 2 tomorrow evening. If you want more details and a more live experience, follow #osdc on Twitter.
A while ago I wrote a short explanation of systemd unit files; this time I want to look at systemd's multi-instance feature, since it too is apparently not known to everyone.
My example this time is Graphite, or, more precisely, its Python implementation carbon-cache. It does not scale automatically but requires you to start further instances of the service on other ports. The configuration on the carbon-cache side is quite simple: in an ini file you just add a new section with the values to be overridden, and the section name determines the name of the instance. For instance b this looks, for example, like the following.
```ini
[cache:b]
LINE_RECEIVER_PORT = 2013
UDP_RECEIVER_PORT = 2013
PICKLE_RECEIVER_PORT = 2014
CACHE_QUERY_PORT = 7102
```
With SysV init you would have had to copy and adapt the start script for every instance. Since the shipped unit file unfortunately does not make use of the multi-instance feature, I have to do this once as well, but at least only once. It makes sense to change the name here to avoid a conflict with the existing unit; if you prefer, you can also “override” the existing one by reusing the same name. For the multi-instance feature, all that is required is an @ at the end of the unit name, before the .service suffix. Add the placeholder %i at the appropriate places and the multi-instance setup is already complete; a service can then be started as servicename@instance. In my example this would be carbon-cache-instance@b, with the following unit file under /etc/systemd/system/carbon-cache-instance@.service.
```ini
[Unit]
Description=Graphite Carbon Cache Instance %i
After=network.target

[Service]
Type=forking
StandardOutput=syslog
StandardError=syslog
ExecStart=/usr/bin/carbon-cache --config=/etc/carbon/carbon.conf --pidfile=/var/run/carbon-cache-%i.pid --logdir=/var/log/carbon/ --instance=%i start
ExecReload=/bin/kill -USR1 $MAINPID
PIDFile=/var/run/carbon-cache-%i.pid

[Install]
WantedBy=multi-user.target
```
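With the template unit in place, instances are managed like any other service; a sketch of the commands, assuming the unit name used in my example:

```shell
# Make systemd pick up the new template unit
systemctl daemon-reload
# Start and enable instance "b"; %i expands to "b" inside the unit file
systemctl start carbon-cache-instance@b.service
systemctl enable carbon-cache-instance@b.service
```

Each further instance only needs its own section in carbon.conf and another start/enable with a different instance name after the @.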
I hope this short explanation helps someone, and I would be pleased to see more services that are designed for instantiation ship with an appropriate unit file in the future.
For the third time in a row I attended the Configuration Management Camp in Ghent. While I was the only one from NETWAYS last year, this time Lennart, Thilo, Blerim and Bernd joined me. Lennart had already attended the PGDay and FOSDEM, which took place on the weekend before, so if you want to spend some days in Belgium and/or at Open Source conferences, the beginning of February is always a good time.
I really like the Configuration Management Camp as it is very well organized, especially for a free event, and I always come back with much new knowledge and many ideas. The speaker list of the main track reads like a who's who of configuration management, and the community rooms feature a large number of experts diving deep into a particular solution. This year there were 10 community rooms, including favorites like Ansible, Foreman and Puppet but also new ones like Mgmt.
My day typically started with the main track, and when the community rooms opened I joined the Foreman room while Lennart and Thilo could be found in the Puppet and Ansible rooms and Blerim and Bernd manned the Icinga booth. For the first time I gave a talk at the event, and not only one but two. My first one was Foreman from a consultant's perspective, where I tried to show what configuration management projects look like and how we solve them using Foreman, but also their limitations; it got very positive feedback. The second one demonstrated the Foreman monitoring integration. In the other talks I learned about the Foreman Datacenter plugin, which is a great way to document your environment and will very likely find its way into our training material, Foreman Maintain, which will make upgrading even easier, the improvements in the Ansible integration, and Debian support in Katello.
But the conference is not only worth it because of the talks but also because of the community. I had very interesting conversations with Foreman developers, long-term contributors and beginners, but also with many other people I got to know at other conferences and met here again. And sometimes this is the best way to get things done. For example, I talked with Daniel Lobato about a pull request for Ansible I was waiting to get merged; he afterwards talked to a colleague, and now I can call myself an Ansible contributor. We also talked about a missing hover effect in Foreman's tables, and some minutes later Ohad Levy had created the pull request, Timo Goebel had reviewed and merged it, and Marek Hulán had created pull requests for the plugins requiring adjustments. And there was plenty of time for these conversations, with the speakers' dinner on Sunday, the evening event on Monday and the Foreman community dinner on Tuesday, or in a comic-themed bar afterwards.
After the two days of the conference there were now a total of 8 fringe events with beginner sessions and hacking spaces, which I can really recommend if you want to improve your knowledge of and/or involvement in a project. While the others left, I stayed one more day as I had managed to arrange a day of Icinga 2 consulting at a customer in Ghent before I also started my way home with a trunk full of waffles and Kriek.
This year we again organized a hackathon as a follow-up and managed to get about 50 people to work on actual coding. We started again with a small round of introductions so everyone had the chance to find people with the same interests or the knowledge they needed. Afterwards people started to hack on Icinga 2, Icinga Web 2, different modules, OpenNMS, Zabbix, Mgmt, NSClient++, Docker containers, Ansible and Puppet code, or simply helped others with configuration and other tasks to solve in their environments.
Here is a list of some things developed or at least designed today:
* Tom accepted and improved some of my pull requests, so the Director got more property modifiers
* He also was working on improving notifications to allow managing them via a custom attribute of hosts and services
* Markus was improving Icinga packaging resulting in new package releases for SLES and support for Fedora 27
* Bodo was trying to move the Ruby library for Icinga 2 to a 1.0.0 release and got valuable input from Gunnar on displaying API coverage
* Thomas improved his diagnostics script for Icinga 2 to help with troubleshooting
* Nicola was working on a graphical picker for the geolocation in the Director for his awesome map module while getting several other ideas and requests
* David started a Single Sign On module for Icinga Web 2
* Mgmt got some improvements by Julien, Toshaan and James
* Michael was working on the Elastic integration and a web-based installer for NSClient++
* Gunnar and Michael discussed so many features that they actually did not find time for hacking, but keep your eyes open for Elastic 6 support and datatypes for arguments
* Steffen, Blerim and Michael discussed how to fix a problem with running two Icingabeat instances, which now can probably be solved
* Stephan finally solved the management issue of red alerts in Icinga Web 2 😉
Furthermore, an impressive amount of knowledge was transferred, user questions got answered and problems got solved. One thing I was really happy to see was a user using the URL encode property modifier, only minutes after it had been accepted by Tom, to create host groups including membership assignment from PuppetDB. But I want to end this blogpost with one really cool thing Dave from the Australian Icinga partner Sol1 showed us: a map displaying all pubs in Australia, because it monitors satellite receivers to visualize any large outages for Sky Racing Australia.
So have a nice weekend and keep on hacking.
The second day started with "Monitoring – dos and don'ts" presented by Markus Thiel. The room was already full for the first talk, which was not expected given that people moved from the evening event to the late lounge and then, at 5 o'clock in the morning, to the hotel. The event was great, with good food, drinks and chat. But Julia already wrote about that, so I will focus on the talks. Markus' talk nicely showed "don'ts" I also recognize from my daily work as a consultant and gave tips on how to avoid them. He went deep into the details, so I cannot repeat everything, but to summarize: the biggest problem is always the communication between people or systems; perhaps you already knew this from your daily business.
The second talk I attended was Bodo Schulz talking about automated and distributed monitoring of a continuous integration platform. He created his own service discovery named Brain, which discovers services and puts them into Redis, which is then read by Icinga 2 and Grafana for creating configuration. Pinky is his simple visualisation stack consisting of containers. Both of them are integrated into the platform: one Brain for every pipeline, one Pinky for every team. If you did not get the reference, watch the "Pinky and the Brain" intro on YouTube. His workarounds for features he missed were also quite interesting, like implementing his own certificate signing service for Icinga 2 or displaying license data in Grafana. And of course he had a live demo to show all this fancy stuff, which was great to see.
Tom gave the third talk of the day about automated monitoring in heterogeneous environments, showing real-life scenarios using the Director's capabilities. He started with the basics, explaining how import, synchronization and jobs work, and followed up by importing from an old Icinga environment utilizing SQL and the IDO database. In the typical scenario of importing from a CMDB, Tom showed common problems like bad quality of the input data and how to work around them with the Director to get good quality output. Another scenario explained how to get data from Active Directory for the Windows part of your environment. For VMware users he showed the already released vSphere module as well as the prototype of the vSphereDB module, which adds some more visualization, and for AWS users the corresponding module. The last one showed how to import Excel files using the Fileshipper. And of course he explained how easy it is to create your own import source.
Right after the excellent lunch and the even better event massage, Marianne Spiller's talk "Ich sehe was, was du nicht siehst (… und das ist CRITICAL!)" (in English "I spy with my little eye something CRITICAL!") focused on how to get a good monitoring environment with high user acceptance up and running. Being realistic and showing everyone their benefits were the best tips she gave, but even she could not provide the one solution that fits all. For more of her tips, ranging from technical to organizational, I can recommend her blog.
Lennart and Janina Tritschler were talking about distributed Icinga 2 environments automated with Puppet. I was really happy to see this talk because Janina adopted Icinga 2 after a fundamentals training I gave about a year ago. They started with a basic introduction to distributed monitoring with Icinga 2 as master, satellite and agent, and to configuration management with Puppet including exported resources. Afterwards they dived deeper into the Puppet module for Icinga 2 and how to use it for installation and configuration of the environment. In their demos they included several virtual machines to show how easily this can be done.
In the last break the winner of the gambling at the evening event got his prize, a retro game console.
Last but not least I decided for Kevin Honka's talk "Icinga 2 + Director, flexible Thresholds with Ansible" in favor of Thomas talking about troubleshooting Icinga 2. But I am sure his talk was great, as troubleshooting is his daily business as our Lead Support Engineer. Kevin was unhappy with the static thresholds configured in their monitoring environment, so he started to develop a Python script, included in his Ansible workflow, which modifies thresholds using the Director API. On his roadmap is extending it by creating an Icinga 2 Python library usable by others, utilizing this library in a real Ansible module and extending the functionality.
Thanks to all speakers, attendees and sponsors leaving today for the great conference, safe travels and see you next year on November 5th – 8th for the next OSMC. And of course a nice dinner and happy hacking to all staying for the hackathon tomorrow; I will keep our readers informed on the crazy things we manage to build.
Also for the 12th OSMC we started on Tuesday with a couple of workshops on Icinga, Ansible, graphing and Elastic, which were popular as always, followed by a meet and greet at the evening dinner. But the real start was, as always, a warm welcome from Bernd introducing all the small changes we have this year, like having so many great talks that we run three in parallel on the first day. We also have, for the first time, more English talks than German ones, and are getting more international from year to year, which is also the reason for me blogging in English.
The first talk of the day I attended was James Shubin talking about "Next Generation Config Mgmt: Monitoring", as he is a great entertainer and mgmt is really a great tool. Mgmt is primarily a configuration management solution, but James managed in his demos to build a bridge to monitoring, as mgmt is event driven and very fast. For example, he showed mgmt recreating deleted files faster than a user could notice they were gone. Another demo of mgmt's reactivity was visualizing the noise in the room; perhaps not the most practical one, but it showed what you can do with flexible inputs and outputs. In his hysteresis demo he showed mgmt monitoring the system load and scaling the number of virtual machines up and down depending on it. James is, as always, looking for people to join the project and help hacking, so have a look at mgmt (or the recording of one of his talks) and perhaps join what could really be the next generation of configuration management.
The second one was Alba Ferri Fitó talking about the community helping her do monitoring at Vodafone in her talk "With a little help from…the community". She showed several use cases, e.g. VMware monitoring, which she changed from passive collection of SNMP traps to proactively monitoring the infrastructure with check_vmware_esx. She also helped to integrate monitoring into the provisioning process with vRealise using the Icinga 2 API, did a corporate theme to get better acceptance, implemented log monitoring using the sticky option of check_logfiles, created her own scripts to monitor things she was told could only be monitored by SCOM, and used expect for things only having an interactive "API". It was a great talk, sharing knowledge and crediting the community for all the code and help.
Carsten Köbke and our Michael were presenting "Ops and dev stories: Integrate everything into your monitoring stack". Carsten, as the developer of the Icinga Web 2 module for Grafana, started the talk with his motivation behind the module and the experience gained by developing it. Afterwards Michael showed more integrations, like the Map module placing hosts on an OpenStreetMap, dashboards, ticket systems, and log and event management solutions like Graylog and Elastic, including the Icingabeat and a very early prototype (created the day before) of a module for Graylog.
After lunch, which was great as always, I attended "Icinga 2 Multi Zone HA Setup using Ansible" by Toshaan Bharvani. He is a self-employed consultant with a history in monitoring, starting with Nagios, using Icinga and Shinken for a while and now utilizing Icinga 2 to monitor his customers' environments. His Ansible playbooks and roles gave a good practical example of how to get such a distributed setup up and running, and he also managed to explain it in a way that even people not using Ansible at all could understand.
Afterwards Tobias Kempf as the monitoring admin and Michael Kraus as the consultant supporting him talked about a highly automated monitoring environment for Europe's biggest logistics company. They used OMD to build a multilevel distributed monitoring environment with centralized configuration managed via a custom web interface, coshsh as configuration generator together with git, load distribution with mod_gearman and patch management with Ansible.
As the last talk, like every year, Bernd (representing the Icinga team) showed the "Current State of Icinga". He shortly introduced the project and the team members before showing some case studies, like Icinga being deployed on the International Space Station. He also promoted the Icinga Camps and our effort to help people run more Icinga meetups. Afterwards he started to dive into the technical stuff, like the new incarnation of Icinga Exchange including full GitHub sync, and the documentation and package repositories including download numbers, which reached a crazy 50000 downloads just for CentOS on one day. Diving even deeper into Icinga itself, he showed the new CA proxy feature allowing multilevel certificate signing and automatic renewal, which was sponsored by Volkswagen like some other features, too. Some explanation of the project's efforts on configuration management and of which API to use in the Icinga 2 environment for different use cases followed, before he hit the topic of logging. Here the Icinga project now provides output to Logstash and Elasticsearch in Icinga 2, the Icingabeat, a Logstash output which can create monitoring objects in Icinga 2 on the fly and, last but not least, the Elasticsearch module for Icinga Web 2. In his demos he also showed the new improved Icinga Web 2, which adds even more eye candy. Speaking of eye candy, the latest version of the Graphite module, which will be released soon, looks quite nice as well. Another pending release is the Icinga Graphite installer, using Ansible and packaging to provide an easy way to set up Graphite. So keep an eye on release blog posts in the coming weeks.
It is nice to see topics shift through the years. While automation and integration were quite present in the last years, this year they were the main focus of many talks. This nicely fits my opinion that, as a software developer, you should care about APIs to allow easy integration, and that, as an administrator, you should provide a single interface, which I sometimes call a "single point of administration".
Colleagues have collected some pictures for you; if you want to see more, follow us or #osmc on Twitter. Enjoy these while I enjoy the evening event, and I will be back tomorrow to keep you updated on the talks of the second day.