
NETWAYS Blog

Plugins, Providers, PowerShell

A lot is happening in the Icinga Windows cosmos these days, above all thanks to the magic word PowerShell. Framework, plugins, kickstart and service: you hear these terms quite often, and new releases are out or on the way. But how do I write my own checks or plugins? Are there rules I should stick to? And can I get involved in the project myself?

Plugins and Providers

When the project was started, it was already intended that plugins should be individually adaptable. To make that possible, it was important to collect and return the data used for the checks in a normalized form, and this is where the providers come into play.

What is a provider?

A provider collects data from the system and makes it available for use in plugins. Let's take a look at this in the icinga-powershell-plugins repository:
└── provider
├── bios
├── cpu
├── directory
├── disks
├── enums
├── eventlog
├── memory
├── process
├── services
├── updates
├── users
└── windows

Inside these directories there are usually two files. In the case of CPU these would be Icinga_ProviderCpu.psm1 and Show-IcingaCpuData.psm1. Corresponding files exist in all the other directories as well.

With the command of the same name, Show-IcingaCpuData, you can now query all generic data on the CPU topic, while the file Icinga_ProviderCpu.psm1 contains various functions for requesting specific data, for example Get-IcingaCPUArchitecture or Get-IcingaCPUs.
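
If you want to try this yourself, here is a minimal sketch, assuming the framework and the plugins module are installed and that your version still exposes these commands:

# Load the Icinga PowerShell Framework into the current session
Use-Icinga;

# Generic provider data on the CPU topic
Show-IcingaCpuData;

# Specific provider functions; the JSON conversion matches the example below
Get-IcingaCPUArchitecture;
Get-IcingaCPUs | ConvertTo-Json -Depth 5;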

In the case of Get-IcingaCPUs, the output, or rather the collected data, could look as follows, shortened and converted to JSON:
{
    "specs": {
        "L3CacheSize": 0,
        "CurrentClockSpeed": 2100,
        "VoltageCaps": 0,
        "L2CacheSpeed": null,
        "ThreadCount": 1,
        "CurrentVoltage": null,
        "LoadPercentage": 7,
        "L2CacheSize": null
    },
    "errors": {
        "ErrorDescription": null,
        "ErrorCleared": null,
        "ConfigManagerErrorCode": {
            "value": "This device is working properly.",
            "raw": 0
        },
        "LastErrorCode": null
    },
    "metadata": {
        "ProcessorType": {
            "value": "Central Processor",
            "raw": 3
        },
        ...
        "SocketDesignation": "CPU 1",
        "ProcessorID": "078BFBFF000306A9",
        "NumberOfCores": 1,
        "SerialNumber": "",
        "Architecture": {
            "value": "x64",
            "raw": 9
        },
        "Availability": {},
        "AddressWidth": 64,
        "Status": "OK",
        "StatusInfo": {
            "value": "Enabled",
            "raw": 3
        }
    }
}

We can now build on this data. Whether the data is directly useful, or sufficient for our use case, is an individual matter.

Before I write a plugin!

Before you get to the point of writing a plugin, using vast amounts of cmdlets and normalizing the data yourself, you should get a clear picture of which providers already exist and check first whether the data is not already available in one form or another.

If there is something I don't like about the Invoke-IcingaCheckCPUs check, it makes sense to at least start by looking at the provider of that check plugin and see whether the information the check receives can be used differently.
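
A quick way to get that overview is to list the provider functions your loaded modules bring along; a small sketch, relying on the Get-Icinga* naming seen above:

# List all loaded Get-Icinga* functions and scan them for your topic
Get-Command -Name 'Get-Icinga*' -CommandType Function |
    Sort-Object Name |
    Select-Object -ExpandProperty Name;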

If that is not the case, you should write your own provider, or extend an existing one, and maybe surprise us with a pull request.

But where does it all go?

So that nothing is lost on the next update and no conflicts arise, the Icinga PowerShell Framework contains a custom directory, which in turn is split into /lib and /plugins. The plugins directory is, logically, where you put the plugins you create for yourself that are not, or not yet, part of the repository.

The same naturally applies to the library directory. If you want to reinvent the wheel and build a custom module for Test-Numeric, or the output format of Convert-Bytes bothers you, your tools are safe here; a tiny hypothetical sketch follows below. And of course we are happy about a pull request there as well.
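
For example, such a custom library module could look like this (name and output format are purely hypothetical, kept safe from updates inside custom/lib):

# custom/lib/ConvertTo-MyBytesFormat.psm1 (hypothetical sketch)
function ConvertTo-MyBytesFormat()
{
    param ([long]$Bytes);

    # Always render the value as MiB with two decimals
    return ([string]::Format('{0:0.00} MiB', $Bytes / 1MB));
}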

Let's keep hold of this: if you build a plugin, you should build your own PowerShell module, orient yourself as far as possible on the existing plugins, and pay attention to where things belong. A simplified sketch of such a module follows below.
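
To make this concrete, here is a heavily simplified sketch of what a custom check plugin might look like, modeled loosely on the existing plugins. The helpers New-IcingaCheck and New-IcingaCheckresult and the threshold methods reflect the framework at the time of writing, while Get-MyCpuLoad stands in for whatever provider function you use; the Developer Guide has the authoritative API:

# custom/plugins/Invoke-IcingaCheckMyCpuLoad.psm1 (sketch, names partly hypothetical)
function Invoke-IcingaCheckMyCpuLoad()
{
    param (
        $Warning  = $null,
        $Critical = $null
    );

    # Re-use provider data instead of querying the system in the plugin itself
    $Load  = Get-MyCpuLoad;
    $Check = New-IcingaCheck -Name 'CPU Load' -Value $Load -Unit '%';

    # Attach warning/critical thresholds to the check object
    $Check.WarnOutOfRange($Warning).CritOutOfRange($Critical) | Out-Null;

    return (New-IcingaCheckresult -Check $Check -Compile);
}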

Conclusion

If you want to do it right, start slowly and, above all, build modularly: first the provider, then the actual check plugin. The bigger the provider, the easier it is to build the check plugins, because once the information is already available, the check logic itself is the least of your problems.
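
As a sketch of that provider-first idea: a provider only gathers raw data and returns it in a normalized structure, similar to the specs/errors/metadata layout shown above. The function and field names here are hypothetical:

# Hypothetical provider: gather BIOS data once, return it normalized
function Get-MyBiosData()
{
    $Bios = Get-CimInstance -ClassName 'Win32_BIOS';

    return @{
        'metadata' = @{
            'Manufacturer' = $Bios.Manufacturer;
            'Version'      = $Bios.SMBIOSBIOSVersion;
        };
        'specs' = @{
            'SerialNumber' = $Bios.SerialNumber;
        };
    };
}

Any check plugin can now consume this structure without caring how the data was collected.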

If all that is still not enough for you, you can also drop by our YouTube channel, where, as part of our webinars, we have covered Custom Plugin Development and Custom Daemon Development, among other things.

In any case, stay tuned for what happens next with the project, and for even more on writing plugins, have a look at the Developer Guide.

Alexander Stoll
Consultant

Alex completed his training as an IT specialist for system integration at NETWAYS Professional Services and now works in consulting. Every now and then he also contributes to programming projects. He spends a lot of his private time on information technology as well, but beyond that there is still plenty of room for football evenings, DIY projects and the occasional book.

Evolution of a Microservice-Infrastructure by Jan Martens | OSDC 2019

This entry is part 2 of 6 in the series OSDC 2019 | Recap

 

At the Open Source Data Center Conference (OSDC) 2019 in Berlin, Jan Martens invited the audience to travel with him in his talk "Evolution of a Microservice-Infrastructure". Did you miss his talk? We've got something for you: watch the video of Jan's presentation and read a summary (below).

The former OSDC will be held for the first time in 2020 under the new name stackconf. With the changes in modern IT in recent years, the focus of the conference has increasingly shifted from a mainly static infrastructure approach to a broader spectrum that includes agile methods, continuous integration, containers, and hybrid and cloud solutions. This development is reflected in the new name of the conference and in the opening of the topic area for further innovations.

Due to concerns around the coronavirus (COVID-19), the decision was made to hold stackconf 2020 as an online conference. The online event will now take place from June 16 to 18, 2020. Join us, live online! Save your ticket now at: stackconf.eu/ticket/


 

Evolution of a Microservice-Infrastructure

Jan Martens signed up with a talk titled "Evolution of a Microservice Infrastructure", and why should I summarize his talk when he has already done that perfectly himself: "This talk is about our journey from Nginx & Docker Swarm to Traefik & Nomad."

But before we get more in depth with this talk, there is one more thing to know about it: it is more or less a sequel to "From Monolith to Microservices" by Paul Puschmann, a colleague of Jan Martens, but it's not absolutely necessary to watch them both or in order.

 

So there will be a bunch of questions answered by Jan during the talk regarding their environment, like: "How do we do deployments? How do we do request routing? What problems did we encounter during our infrastructural growth, and how did we address them?"

After giving some quick insight into the scale he has to deal with, that being 345,000 employees and 15,000 shops, he goes on to the history of their infrastructure.

Jan works at REWE Digital, which is responsible for the infrastructure around services like grocery delivery. They started off with the takeover of an existing monolithic infrastructure; not very attractive, huh? They confronted themselves with the question "How can we scale this delivery service?", and the solution they came up with was a microservice environment. Important to point out here is the use of Docker/Swarm for the deployment of microservices.

Let's skip ahead a bit and take a look at the state of REWE Digital in 2018. Their custom Docker environment consists of Docker, Consul, Elastic Stack, nginx, dnsmasq and Debian.

Jan explains his infrastructure in more and more detail and how the different applications work with each other, but let's just say: everything was fine and peaceful until the environment grew to a certain size. At that point, problems with nginx started to surface, like requests that never reached their destination or keepalive connections that dropped after a short time. The reason? consul-template would reload all nginx instances at the same time. The solution? They looked for a different reverse proxy that can reload its configuration dynamically, and in the best case can even be configured dynamically.

The three candidates deemed fit for the job were Envoy, Fabio and Traefik, but I have already spoiled their decision: it's Traefik. The points Jan mentioned that made them decide on Traefik were that it is dynamically configurable and can reload its configuration live. That's obviously not all: lots of metrics, a web UI that Jan deemed nice, and a single Go binary might have made the difference.

Jan drops a few words on how the migration was done and then spends some time on the benefits of Traefik. The most important benefit for us to know: the issues that existed with nginx are gone now.

Now that the proxy side of the environment had changed, changes were also coming for Swarm. The problems Jan addresses are a poor container spread, no self-healing, and more; you should be able to see where this is going. The candidates besides Docker Swarm were Rancher, Kubernetes and Nomad, and well, this one was spoiled by me as well.

The reasons to use Nomad in this infrastructure might be pretty obvious, but I will list them anyway. Firstly, seamless Consul integration; both are by HashiCorp, who would have guessed. Nomad is able to self-heal and comes as a single Go binary, just like Traefik. Jan also claims it has a nice web UI; we have to take his word on that one.

Jan goes into the benefits of using Nomad, just like he went into the benefits of Traefik, and shows how their work processes have changed with the change of their environment.

This post doesn't give enough credit to how much information Jan shared during his talk; maybe roughly twenty percent of it is covered here. You should definitely check out the full video to catch all the deeper, more insightful topics about the infrastructure and how the applications work with each other.


Fast log management for your infrastructure by Nicolas Frankel | OSDC 2019

This entry is part 4 of 6 in the series OSDC 2019 | Recap

 

Nicolas Frankel is a Developer Advocate with 15+ years of experience consulting for many different customers in a wide range of contexts. "Fast log management for your infrastructure" was his topic at the Open Source Data Center Conference (OSDC) 2019 in Berlin. Those who missed the talk back then now have the opportunity to see the video of Nicolas' presentation and read a summary (below).

The former OSDC will be held for the first time in 2020 under the new name stackconf. With the changes in modern IT in recent years, the focus of the conference has increasingly shifted from a mainly static infrastructure approach to a broader spectrum that includes agile methods, continuous integration, containers, and hybrid and cloud solutions. This development is reflected in the new name of the conference and in the opening of the topic area for further innovations.

We are proud to announce that Nicolas Frankel is in our speaker lineup this year, too. We are looking forward to his talk: “Real Continuous Deployment of JVM applications”.

Due to concerns around the coronavirus (COVID-19), the decision was made to hold stackconf 2020 as an online conference. The online event will now take place June 16 – 18, 2020. Be there, live online! Save your ticket now at: stackconf.eu/ticket/


Fast log management for your infrastructure

"Fast log management for your infrastructure": well, that is one way to get OSDC visitors excited. Nicolas Frankel signed up with that one, and he did not disappoint. The issues he was tackling are issues produced by optimization. That being said: do you think about the logs when it comes to migrating your application to reactive microservices?

Before we get to all that, Nicolas had to take a little detour through programming logic and how logging works, and he also pointed out some misconceptions about how things are done and how they work, for example his so-called "[…] root of all evil":

LOGGER.debug("Cart price is now {}", cart.getPrice());

He asks the question: who believes that, if the log level is set above debug, the statement will simply be ignored? That is what you would expect, however it is not the case: the arguments, such as the cart.getPrice() call, are still evaluated before the logger decides whether anything gets written. In a small demo section he gives further insight on the topic from the perspective of a software developer.
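
To make that point concrete, here is a minimal sketch assuming SLF4J as the logging facade; the Cart class is only a hypothetical stand-in:

import org.slf4j.Logger;
import org.slf4j.LoggerFactory;

public class CartLoggingDemo {

    private static final Logger LOGGER = LoggerFactory.getLogger(CartLoggingDemo.class);

    static class Cart {
        double getPrice() { return 42.0; } // possibly expensive in real life
    }

    public static void main(String[] args) {
        Cart cart = new Cart();

        // Even with the level above DEBUG, cart.getPrice() is evaluated:
        // Java computes argument expressions before the method call happens.
        LOGGER.debug("Cart price is now {}", cart.getPrice());

        // Guarding the statement skips the evaluation entirely.
        if (LOGGER.isDebugEnabled()) {
            LOGGER.debug("Cart price is now {}", cart.getPrice());
        }
    }
}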

The statement he ends his demo explanation on is that, from the developer's point of view, one should only do physical logging. Directly afterwards he notes that developers do not like to think that they are dealing with the physical world, and then he goes on about the respective storage options and their write times, for SSDs, HDDs or an NFS, which should be taken into account.

Having tackled some issues already, Nicolas keeps switching back and forth between the perspectives of a software developer and of an operator. He puts a lot of emphasis on these perspective changes to make sure that everyone involved starts to understand where the issue lies, and whether there is an issue at all.

Take, for example, the writing process and the opening and closing of streams for single log statements: it would be great if the stream could stay open and log statements could be written continuously until the stream can be closed. But arguably, and in most cases by default, logging is blocking. While most frameworks allow asynchronous logging, there is no right or wrong here, and it doesn't have to be a software development mistake or bad infrastructure either.

He dives deeper into asynchronous logging, because if you want to use it, you have to understand it: from queue sizes to discarding thresholds, the difference between blocking and dropping messages, everything. Nicolas also covers some logging basics, like metadata and which of it is especially important: the most essential metadata being timestamp, log level, line number and more. You may ask why? Because some metadata is more expensive to get than other; the line number, for example, requires inspecting the call stack at runtime.

After some more detours through log aggregation and common pitfalls, with searching in logs or mandatory metadata, we get to a well-known application stack in the world of logging: the Elastic Stack.

He explains the basic architecture of the Elastic Stack and how the applications work with each other; Filebeat and Logstash in particular take the spotlight during this part. Step by step he works his way through an abstraction of the path a log takes from Filebeat to Logstash, until you end up with a JSON document you are familiar with. Then he tackles common misunderstandings like "Why do I need Logstash at all?", before he goes on to how logging is done at Exoscale.

They are using syslog-ng instead of Filebeat, basically just because Filebeat was not ready for production when they started. Then comes a regular Logstash, and before the data reaches Elasticsearch there is a Kafka in between. The reason they are using Kafka is that Kafka is a distributed data store: by using Logstash to pull data out of it, instead of buffering towards Elasticsearch, there is a lower risk of dropping data, because there are not multiple nodes writing at once.

Nicolas summarizes his talk at the end with six short statements, or maybe even lessons, for log management. If you want, head over to the video above to learn about them from Nicolas himself, or experience him live to learn from him.


What was, is and will be – an apprentice & PowerShell

We apprentices at NETWAYS let it shine through often enough what is going on in our training anyway, but mostly only for projects that are already finished. To keep that tradition up, here is a bit of what was, what is, and what will come.

What was?

In short? PowerShell. And lately that word gets thrown around quite a lot in the NETWAYS and Icinga context. At some point last year, Christian made the decision to redesign the Windows module of that time into a fully functional framework based on PowerShell. Christian, like all colleagues, had the opportunity to bring an apprentice on board, and from here it should be no big surprise where this story leads: said apprentice was me.

I had no clue whatsoever about PowerShell or the Windows world beyond the user side. But PowerShell is no rocket science either, and the logical constructs you can use in other languages are not something you unlearn or lose here.

The whole thing started in July 2019, at least for me, and the goal was to be the OSMC of the same year. It is simply a whole different motivation to know that you are working on something that will later be presented on a big stage, where you will see what your own work has contributed.

Up to that point I was mostly involved in plugin development and the corresponding providers. One or two tools in the framework also come from my pen. But when you see what Christian has accomplished over the past year, you can only be envious. Of course, not everything since my last blog post was PowerShell: I also attended trainings, co-led my first trainings as a co-trainer, was on my first consulting trip, and more. But the PowerShell project was especially close to my heart.

What is?

Right now? First of all, a Foreman training. In the past I have been confronted with Foreman a couple of times already, be it a bit of community work translating strings, or my last blog post, in which I gave a small insight into the OSCAMP on Foreman that took place last year right after the OSMC. After that? Next, a rotation through the support department, even though it won't be a long one, because vocational school comes up in between, complete with the intermediate exam. But it will work out somehow.

What will be?

What? A whole paragraph without PowerShell. So yes, I will keep doing PowerShell. In that regard you can soon look forward to a certificate plugin with its own provider. And apart from that? The second joint apprentice project week is coming up, but you will surely get another blog post about that as well. At this point it should also be mentioned that said Christian held his first webinar on Icinga for Windows this Wednesday. Don't worry, the whole thing will be available on YouTube later, and two more webinars are still to come: Icinga for Windows – Custom Plugin Development on April 1, 2020 at 10:30, and Icinga for Windows – Custom Daemon Development on May 6, 2020 at the same time. You can find an overview of the webinars here. Be prepared: quite a bit more is going to happen in the Windows monitoring landscape.

 


Open Source Camp on Foreman

Like every year, there was an Open Source Camp following the OSMC, and as usual we helped organize it. Just in case you aren't aware of what an Open Source Camp is, here is the gist of it: it's meant to be an offer for Open Source projects to present themselves more in depth to the community. This year's Open Source Camp is about that one special yellow helmet we all know and love: Foreman.

Ondřej Ezr started us off with Ansible automation for Foreman (hosts). There are probably more than enough people using only Puppet in their Foreman environment. An alternative, or a complement, to that would be the plugin foreman_ansible. Ansible and Puppet aren't necessarily better or worse; they are different, and both have their advantages and disadvantages. By going through some basic steps, like role assignment, host creation and so on, he showed how one can do all that, but with Ansible. You can easily and dynamically assign roles and installations through Ansible to your Foreman hosts, and to make it even more specific you can set custom variables within the Ansible plugin, like foreman_repository_version, for it to use. You could invoke a job, like an Ansible playbook, which overwrites the previously set variables, or make your installation more customizable from the get-go. Installing from git, running a playbook through SSH and more was covered during his talk. The plugin would not be a good alternative, or even viable, if it did not hold up against the standards Puppet sets as a competitor. While Ansible doesn't offer an inherent solution for recurring runs, like every hour, the plugin does.

Next up was Bernhard Suttner, who wanted to give us a taste of a salted Foreman. Initially he explained what all that salt is about: SaltStack, an open source project written in Python, can be used as a configuration management tool for Foreman. Salt excels at orchestrating cloud environments and network use cases, but then we got to the Foreman relation. Running a Salt and Foreman environment means running an environment of managed hosts, which are Salt minions, and a foreman_smart_proxy, which will also be the Salt master. He showed us what Salt looks like in Foreman and gave us some insight into how it works. Even more important: from now on there are people dedicated to the project, and some day the plugin might be as good as the Puppet or Ansible plugin. Salt is great and especially effective in terms of scalability. It's pretty straightforward to use, and the initial setup is not that hard. We are excited for what is to come.

Provisioning on Azure Cloud through Foreman by Aditi Puntambekar followed. Aditi made sure everyone was familiar with the extent of Foreman's capabilities in terms of provisioning. This was especially important because Foreman's capabilities differ from the usual when it comes to cloud provisioning. After a quick trip through the configuration of compute resources and image-based provisioning templates, we went onward to the Azure Resource Manager. She explained how the Azure Resource Manager essentially works, but what is interesting to us is foreman_azure_rm. And foreman_azure_rm does what you expect it to do: it adds the Microsoft Azure Resource Manager as a compute resource for Foreman. In her demo she showed us how to use said resource and more.

Martin Bačovský talked about CLI tools for Foreman. He started off with the Foreman API. Of course the Foreman API is fast and comes with a wide range of tools and libs. Just like Martin said in his talk: if you are interested in the Foreman API, check out the documentation, it's very good. Also interesting in the realm of APIs was his next tool, apipie/apipy, which you are probably aware of if you lean more towards the Python side of things. Up there with the most well-known tools is his next one, Hammer CLI, a command-line tool for Foreman. After sharing his experience with these rather popular tools, he introduced us to Foreman's integration of GraphQL, basically a query language that seems promising so far. Martin especially focused on the flexibility of its queries and its introspection; one still has to see where the project goes. There were many more tools he told us about, to name just a few: Report Templates, Foreman Ansible Modules and foreman_maintain. If you are interested in one of these tools in particular, check out the video of the talk, which will soon be available on our YouTube channel.

 

Give your Foreman a greater toolbox with plugins, by our very own Dirk Götz. Like he said himself: I will start off with existing toolbox things, and at the end I will show you how to create these things yourself. And that he did. This talk was very demo-heavy, so everything he explained was plain and simple, because you were able to see it as he did it. At the very top of his agenda was job invocation / remote execution. Not that exciting, you think? Well, more interesting is the best-practice advice he threw in along the way, like that there is no issue with the configured user, because its password is not saved as plain text in the database. Then the development part was up: he showed a couple of jobs that he wrote himself, the easiest of which, serving as an example, is a simple ping check. He pointed out important thoughts to keep in mind while writing jobs, like default values. Before his talk came to a close, he talked a bit about the web console, which has been introduced but is not yet well known. The web console is pretty much an integration of Cockpit. A well-experienced user in the Linux world won't be that excited about this, but a less experienced user will love it.

The next talk would not have happened if Dirk hadn't spontaneously offered to step in, so we got another thirty minutes of Dirk Götz, and I won't complain. Katello: Adding content management to Foreman was the title, and people were keen to hear about just that. What is Katello? Dirk described it as a defined set of Foreman plugins, but not just that: it enriches your content management as well as your subscription management. Wait, content management? Why do I need that? Configuration management should be enough! Not necessarily, depending on your environment. Let's just pick up the points Dirk made in favor of content management: for local content it ensures availability; for staging, it allows testing updates and makes builds reproducible. So content management should be seen as an addition to config management. He also talked about content views and how they are used for versioning, while being held by lifecycles. Integration into orchestration, done via SSH or Ansible, was also a rather big point during his talk. Dirk designs his talks in a way that makes summarizing them impossible, because he covers way too much. Let's just say: not announced, but very appreciated, and most definitely worth checking out on our NETWAYS YouTube channel.

It was my second Open Source Camp, and if you ask me, this kind of exchange is what you want to see in the open source community. There was variety, and judging by the crowd's reactions I was not the only one enjoying these talks. Thanks to all the speakers and attendees, and safe travels home to everyone. Until the next Open Source Camp, hope to see you there!
