Hooray, hooray … the organic milk man is here!

A while ago we discussed the cheap-milk dilemma at Aldi, Rewe and the like – first on Twitter, then also offline in a casual round. That very little of the money ends up with our farmers is beyond question. And so is the fact that we didn't think this was okay. Since we at NETWAYS like to tackle current topics and problems, we asked ourselves where we could become more sustainable ourselves. In doing so we noticed that the pallets of milk for our coffee and the myMuesli supply were also based on this cheap-milk scheme. UHT milk, low fat, but still … “we should simply buy organic milk, end of story.” was a spontaneous idea from Bernd and Vanessa.
Then the matter rested for a while. One fine day at the office, the small milk cartons were suddenly gone. Instead there was organic milk in larger containers, fresh, with cream on top. A whole different world that tastes good – and, with the feeling of sustainability, tastes even better 🙂

If you're now wondering – “that's plastic, not glass”: yep, we clean the bottles ourselves and return them with our weekly delivery. We'll need bigger fridges now, but we'll manage that too 😉
I've caught myself no longer reaching for my “usual” quick milk at the supermarket (which also carries the 61-cent stamp if you look closely), but selectively picking organic milk instead. It costs a euro, but the Julius Meinl coffee tastes noticeably better with it. The good things in life simply have to be enjoyed sustainably ❤️

Michael Friedrich
Senior Developer

Michael has been an Icinga developer for many years and took the plunge into the NETWAYS adventure at the end of 2012. He moved from Vienna to Nuremberg, bringing along a fondness for importing Austrian treats – more than one colleague has despaired over the addictive Dragee-Keksi and the Linzer Torte. Or simply over the Austrian dialect, which he likes to practise with Thomas in the office ("Jo eh."). When Michael isn't helping out in the community, he is working on his next LEGO project or enjoying...

Continuous Integration with Golang and GitLab

These days you not only develop code and collaborate with other developers in your Git branches and forks – continuous integration also helps with “instant” compile and runtime tests. Which Git commit caused a failure? Are there any performance changes in the recent code history?
Today I want to share our latest insights into Golang and into building our code inside GitLab on each push and merge request. Go has extensive built-in support for unit tests and for analysing them with coverage reports. This ensures that code isolated behind test interfaces runs rock-solid, and lets you focus on the more important tasks and features.
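As a tiny illustration of what such a test looks like (the package and function names are made up for this example), a unit test lives right next to the code it covers:

// mathutil.go
package mathutil

// Add returns the sum of a and b.
func Add(a, b int) int {
        return a + b
}

// mathutil_test.go
package mathutil

import "testing"

// TestAdd checks Add() against a known result and fails the test otherwise.
func TestAdd(t *testing.T) {
        if got := Add(2, 3); got != 5 {
                t.Errorf("Add(2, 3) = %d, want 5", got)
        }
}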
In order to abstract the Go build and install calls, we add a Makefile for convenience. This also allows developers to manually use it, if they want.

$ vim Makefile
.PHONY: all test coverage
all: get build install
get:
        go get ./...
build:
        go build ./...
install:
        go install ./...
test:
        go test ./... -v -coverprofile .coverage.txt
        go tool cover -func .coverage.txt
coverage: test
        go tool cover -html=.coverage.txt

The “coverage” target runs the tests, stores the coverage results in “.coverage.txt” and then converts them into an HTML report which you can open in a browser.
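For local development this looks like the following (assuming Go and make are installed):

$ make test        # runs the unit tests and prints the per-function coverage summary
$ make coverage    # additionally renders .coverage.txt as an HTML report and opens it in your browser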
Inside GitLab we need a CI runner configured for our project and job pipeline. This is already available in the NWS app. If you want to learn more about GitLab CI/CD, containers, jobs and pipelines, I invite you to join me for the official GitLab training sessions 🙂
The CI configuration file is stored in “.gitlab-ci.yml” inside the Git repository and uses the Golang image for Debian Stretch in this example.

$ vim .gitlab-ci.yml
image: golang:1.10-stretch
stages:
  - build
before_script:
  - apt-get update && apt-get install -y make curl
  - curl https://raw.githubusercontent.com/golang/dep/master/install.sh | sh
  - mkdir -p /go/src/git.icinga.com/icingadb
  - ln -s /builds/icingadb/icingadb /go/src/git.icinga.com/icingadb/icingadb
  - cd /go/src/git.icinga.com/icingadb/icingadb
  - dep init && dep ensure
build:
  stage: build
  script:
    - make test

The “before_script” tasks can be moved into your own Docker image, which can also be made available in the GitLab container registry. At some later point, once the dependencies are pinned, we'll remove the “dep init” step and just use the locked versions from inside our Git repository.
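Such a custom image could be built from a Dockerfile roughly like this (a sketch derived from the “before_script” above, not our actual image):

FROM golang:1.10-stretch

# pre-install the build dependencies so the CI job can skip its before_script
RUN apt-get update && apt-get install -y make curl \
 && curl https://raw.githubusercontent.com/golang/dep/master/install.sh | sh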
The test coverage should also be highlighted in the job output. It can be extracted from the shell output of the tests with a regular expression. Navigate into “Settings > CI/CD > General pipeline settings > Test coverage parsing” and add the following regex:

^total:\t+\(statements\)\t+(\d+\.\d+)%
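For reference, this regex matches the summary line which “go tool cover -func” prints at the end of the job output; the columns are tab-separated and look roughly like this (the percentage is just an example):

total:          (statements)    74.5%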

Each pushed branch and merge request is then built automatically and also shows the test coverage. There's still room for improvement here – over the next months we'll add more unit and runtime tests.


Releasing our Git and GitLab training as Open Source

Development is super fast these days, and there are many tools and integrations to make it more comfortable. Git is one of the version control systems, next to the well-known SVN and CVS. Git got a real push from GitHub and, most recently, GitLab, which provide entire collaboration suites. Nowadays you not only commit code revisions/diffs, you can also test and deploy changes immediately to see whether your code works on the defined platforms or breaks. Issue and project management is put on top and makes it very easy to collaborate.
Continuous integration (CI) also raises code quality: add code coverage reports, unit test results, end-to-end tests, and even build distribution packages with tests included on many platforms. On top of CI, continuous deployment (CD) adds the icing on the cake. Once the CI tests and package builds are fine, add a new job to your pipeline for automated deployments. This could, for example, push updated RPM packages to your repository and immediately roll out the bugfix release in production on all your client hosts with Puppet or Ansible.
You can do all of this with the ease of GitHub or your own hosted GitLab instance (try it out in NWS right now!). Don’t forget about the basics for development and devops workflows:

  • Untracked files, staging area, … what’s within the .git directory?
  • What is a “good commit”?
  • I want to create a patch for an open source project – what's a “pull request”?
  • What's the workflow with branches?
  • A developer asked me to “rebase your branch against master” and “squash the commits” … what's that? (See the sketch below.)
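To answer that last question, here is a minimal sketch of the usual rebase-and-squash workflow (branch names are just examples):

$ git checkout my-feature
$ git fetch origin
$ git rebase origin/master       # replay your commits on top of the current master
$ git rebase -i origin/master    # interactively mark commits as "squash" to combine them
$ git push --force-with-lease    # update the already published branch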

Our Git training has been renewed into a two-day hands-on session on Git and GitLab. Many of us at NETWAYS use Git and GitLab on a daily basis – knowledge which we share and improve upon. The training starts with the Git basics, diving into good commits, branching, remote repositories and even more. Day 1 also provides your own NWS-hosted GitLab instance.
On day 2 you'll learn about development workflows with branches and real-life use cases, continue with CI/CD and build your own job pipeline, and explore GitLab even further. We'll also discuss integrations into modern development tools (Visual Studio, JetBrains, etc.) and have time to share experiences from daily work. I've been working with Git since the beginning of Icinga more than nine years ago.
We have open-sourced our GitLab training material. We truly believe in Open Source and want to make it easier to develop and contribute to your favourite OSS project, like Icinga.
You are welcome to use our training material for your own studies, especially if you are an open source developer who’s been learning to use Git, GitLab and GitHub. For offline convenience, the handouts, exercises and solutions are provided as PDF too.
Many of the mentioned practical examples and experiences are only available in our two day training sessions at NETWAYS so please consider getting a ticket. There’s also time for your own experience and ideas – the previous training sessions have shown that you can always learn something new about Git. You can see that in the Git repository and the newer Git commits, where this feedback was added to the training material ❤️
See you soon at the famous NETWAYS Kesselhaus for a deep-dive into Git and GitLab!
Please note that the training material is licensed under Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International.


Modern open source community platforms with Discourse

Investing into open source communities is key here at NETWAYS. We do a lot of things in the open, encourage users with open source trainings, and are part of many communities with help and code, be it Icinga, Puppet, Elastic, Graylog, etc.
Open source with additional business services – the way we love and do it – only works if the community is strong and pushes your project to the next level. Then it is totally okay to say “I don't have the time to investigate your problem right now, how about some remote support from professionals?”. Still, this requires a civil discussion platform where such conversations can evolve.
One key factor of an open source community is to encourage users to learn from you. Show them your appreciation and they will like it and start helping others as you do. Be a role model and help others on a technical level, that’s my definition of a community manager. Add ideas and propose changes and new things. Invest time and make things easier for your community.
Over the last couple of days I've been building a new platform for monitoring-portal.org based on Discourse. The old platform based on Woltlab was old-fashioned, hard to maintain, and it wasn't easy to help everyone. It was also closed source with an extra license, so feature requests were hard for an open source guy like me.
Discourse, on the other hand, is 100% open source, has ~24k GitHub stars and a helpful community. It was created by the inventors of Stack Overflow as a conversation platform for the next decade. It is fast, modern, beautiful, and both easy to install and easy to use.
 

Setup as Container

Discourse only supports running inside Docker. The simplest approach is to build everything into one container, but you can split this up too. Since I am just a beginner, I decided to go for the simple all-in-one solution. Last week I was already using 1.9.0beta17, which ran really stable. Today they released 1.9.0, and I'll share some of the fancy things below 🙂
Start on a fresh VM where no applications are listening on port 80/443. You’ll also need to have a mail server around which accepts mails via SMTP. Docker must be installed in the latest version from the Docker repos, don’t use what the distribution provides (Ubuntu 14.04 LTS here).

mkdir /var/discourse
git clone https://github.com/discourse/discourse_docker.git /var/discourse
cd /var/discourse
./discourse-setup

The setup wizard asks several questions to configure the basic setup. I chose monitoring-portal.org as the hostname and provided my SMTP server host and credentials. I also set my personal mail address as contact. Once everything succeeds, the configuration is stored in /var/discourse/containers/app.yml.
 

Nginx Proxy

My requirement was to not only serve Discourse at /, but also to have redirects for other web applications (the old Woltlab-based forum, for example). Furthermore, I wanted to configure the SSL certificates in a central place. Therefore I followed the instructions to connect Nginx to Discourse via a unix socket.

apt-get install nginx
rm /etc/nginx/sites-enabled/default
vim /etc/nginx/sites-available/proxy.conf
server {
    listen 443 ssl;  listen [::]:443 ssl;
    server_name fqdn.com;
    ssl on;
    ssl_certificate      /etc/nginx/ssl/fqdn.com-bundle.crt;
    ssl_certificate_key  /etc/nginx/ssl/fqdn.com.key;
    ssl_protocols TLSv1.2 TLSv1.1 TLSv1;
    ssl_ciphers ECDHE-RSA-AES128-GCM-SHA256:ECDHE-ECDSA-AES128-GCM-SHA256:ECDHE-RSA-AES256-GCM-SHA384:ECDHE-ECDSA-AES256-GCM-SHA384:DHE-RSA-AES128-GCM-SHA256:DHE-DSS-AES128-GCM-SHA256:kEDH+AESGCM:ECDHE-RSA-AES128-SHA256:ECDHE-ECDSA-AES128-SHA256:ECDHE-RSA-AES128-SHA:ECDHE-ECDSA-AES128-SHA:ECDHE-RSA-AES256-SHA384:ECDHE-ECDSA-AES256-SHA384:ECDHE-RSA-AES256-SHA:ECDHE-ECDSA-AES256-SHA:DHE-RSA-AES128-SHA256:DHE-RSA-AES128-SHA:DHE-DSS-AES128-SHA256:DHE-RSA-AES256-SHA256:DHE-DSS-AES256-SHA:DHE-RSA-AES256-SHA:ECDHE-RSA-DES-CBC3-SHA:ECDHE-ECDSA-DES-CBC3-SHA:AES128-GCM-SHA256:AES256-GCM-SHA384:AES128-SHA256:AES256-SHA256:AES128-SHA:AES256-SHA:AES:CAMELLIA:DES-CBC3-SHA:!aNULL:!eNULL:!EXPORT:!DES:!RC4:!MD5:!PSK:!aECDH:!EDH-DSS-DES-CBC3-SHA:!EDH-RSA-DES-CBC3-SHA:!KRB5-DES-CBC3-SHA;
    ssl_prefer_server_ciphers on;
    # openssl dhparam -out /etc/nginx/ssl/dhparam.pem 4096
    ssl_dhparam /etc/nginx/ssl/dhparam.pem;
    # OCSP Stapling ---
    # fetch OCSP records from URL in ssl_certificate and cache them
    ssl_stapling on;
    ssl_stapling_verify on;
    location / {
        error_page 502 =502 /errorpages/discourse_offline.html;
        proxy_intercept_errors on;
        # Requires containers/app.yml to use websockets
        proxy_pass http://unix:/var/discourse/shared/standalone/nginx.http.sock:;
        proxy_set_header Host $http_host;
        proxy_http_version 1.1;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Proto https;
    }
}
ln -s /etc/nginx/sites-available/proxy.conf /etc/nginx/sites-enabled/proxy.conf
service nginx restart

Another bonus of such a proxy is to have a maintenance page without an ugly gateway error.
The full configuration can be found here.
 

Plugins

Installation is a breeze – just add the installation calls into the app.yml file and rebuild the container.

# egrep -v "^$|#" /var/discourse/containers/app.yml
templates:
  - "templates/postgres.template.yml"
  - "templates/redis.template.yml"
  - "templates/web.template.yml"
  - "templates/web.ratelimited.template.yml"
expose:
params:
  db_default_text_search_config: "pg_catalog.english"
env:
  LANG: en_US.UTF-8
  DISCOURSE_HOSTNAME: fqdn.com
  DISCOURSE_DEVELOPER_EMAILS: 'contact@fqdn.com'
  DISCOURSE_SMTP_ADDRESS: smtp.fqdn.com
  DISCOURSE_SMTP_PORT: 587
  DISCOURSE_SMTP_USER_NAME: xxx
  DISCOURSE_SMTP_PASSWORD: xxx
volumes:
  - volume:
      host: /var/discourse/shared/standalone
      guest: /shared
  - volume:
      host: /var/discourse/shared/standalone/log/var-log
      guest: /var/log
hooks:
  after_code:
    - exec:
        cd: $home/plugins
        cmd:
          - git clone https://github.com/discourse/docker_manager.git
          - git clone https://github.com/discourse/discourse-akismet.git
          - git clone https://github.com/discourse/discourse-solved.git
run:
  - exec: echo "Beginning of custom commands"
  - exec: echo "End of custom commands"
./launcher rebuild app

Akismet checks for spam posts, as you may know it from WordPress. We've learned that spammers easily crack reCAPTCHA, and the only reliable way is to filter the actual posts.
The second useful plugin is for accepting an answer in a topic, marking it as solved. This is really useful if your platform is primarily used for Q&A topics.
 

Getting Started

Once everything is up and running, navigate to your domain in your browser. The simple setup wizard greets you with some basic questions. Proceed as you like, and then you are ready to build the platform for your own needs.
The admin interface has lots of options. Don’t fear it – many of the default settings are from best practices, and you can always restore them if you made a mistake. There’s also a filter to only list overridden options 🙂

Categories and Tags

Some organisation and structure is needed. The old-fashioned way of choosing a sub-forum and adding a topic there is gone. Still, Discourse lets you require users to pick a category. Think of monitoring – a question on the Icinga Director should be highlighted in a specific category so that others can catch up.
By the way – you can subscribe to notifications for a specific category. This helps to keep track of only the Icinga-related categories, for example.
In addition to that, tags help to refine the topics and make them easier to search for.

Communication matters

There are so many goodies. First off, you can start a new topic right from the start page. An overlay editor opens, and its content is preserved across sessions (!). Start typing Markdown and see it rendered live on the right side.
You can upload images or paste a URL. Discourse will trigger a job to download the image later and use a local cache, which avoids broken images in the future. If you paste a web link, Discourse tries to render a preview in a “onebox”. This also works for GitHub URLs, including a code preview.
Add emotions to your discussion, appreciate posts by others and like them, enjoy the conversation and share it online. You can even save your draft and continue editing it in a later session, e.g. after going home.

 

Tutorials, Trust Level and Rewards

Once you register a new account (you can add OAuth logins via Twitter, GitHub, etc.!), a learning bot greets you. This interactive tutorial helps you learn the basics – likes, quotes, URLs, uploads – and rewards you with a nice certificate in the end.
New users don't start with full permissions – they need to earn trust first. As they keep engaging with the community, their trust level is raised. The idea behind this is not to have moderators and admins regulate the conversation, but to let experienced members do it – a sort of self-healing if something goes wrong.
Users who really engage and help are able to earn so-called badges. These nifty rewards are highlighted on their profile page, e.g. for likes, number of replies, shared topics and even accepted solutions to questions. A pure motivational plaything built into this nice piece of open source software.

 

Wiki and Solved Topics

You can turn topics into wiki entries. Everyone can edit them – this way you combine the ease of writing in Markdown with a full-blown documentation wiki.
Accepting a reply as the solution marks a topic as “solved”. This is incredibly helpful for others who hit the same problem.

 

Development

As an administrator, you’ll get automated page profiling for free. This includes explained SQL queries, measured page load time, and even flame graphs.
If you ever need to reschedule a job, e.g. for daily badge creation, admins can access the Sidekiq web UI which really is just awesome.
Plugin development also seems easy if you know Ruby and Ember.js. There are many official plugins around which are tested before each release.

Discourse also has a rich REST API – there's even a monitoring endpoint.
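Two hedged examples (forum.example.com is a placeholder, and older Discourse versions accept the API key as query parameters as shown here – check the API docs for your version):

# health/monitoring endpoint, handy for an Icinga check – returns "ok"
$ curl https://forum.example.com/srv/status

# most pages are also available as JSON when authenticated via the API
$ curl "https://forum.example.com/latest.json?api_key=<key>&api_username=<user>"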
 

Maintenance

You can create backups on-demand in addition to regular intervals. You can even restore an old backup directly from the UI.

 

Conclusion

Discourse is used by many communities all over the world – Graylog, Elastic, GitLab, Docker, Grafana, … have chosen to use the power of a great discussion platform. So does monitoring-portal.org as a #monitoringlove community. A huge thank you to the Discourse team, your software is pure magic and just awesome 🙂
My journey in building a new community forum from scratch in just 5 days can be read here 🙂
monitoring-portal.org running Discourse is fully hosted at NETWAYS, including SSL certificates, Puppet deployment and Icinga for monitoring. Everything I need to build an awesome community platform. You too?
 


Replace spaces with tabs in Visual Studio 2017

Visual Studio has several source code editing settings. It defaults to 4 spaces and no tabs, which is slightly different from what we use in Icinga 2 – our code style focuses on tabs.
Editing the Icinga 2 source code on Windows with Visual Studio requires adjusting the editor settings. Navigate into Tools > Options > Text Editor > C# and C++ and adjust the settings to “Keep tabs”.

I forgot to apply these settings for C# too, and ended up with half of the Icinga 2 setup wizard code using 4 spaces instead of tabs. Luckily I found this blog post, which sheds some light on the issue in its comments.
Hit Ctrl+H to open the replace search window. Tick the icon to use regular expressions and search for “((\t)*)([ ]{4})”. Add “\t” as replacement text.

Happy coding for Icinga 2 v2.8 – ready for OSMC 🙂


Secure Elasticsearch and Kibana with an Nginx HTTP Proxy

Elasticsearch provides a great HTTP API where applications can write to and read from in high performance environments. One of our customers sponsored a feature for Icinga 2 which writes events and performance data metrics to Elasticsearch. This will hit v2.8 later this year.
We’re also concerned about security, and have been looking into security mechanisms such as basic auth or TLS. Unfortunately this isn’t included in the Open Source stack.
 

Why should you care about securing Elasticsearch and Kibana?

Modern infrastructure deployments commonly require Elasticsearch listening on an external interface and answering HTTP requests. Earlier this year we’ve learned about ransomware attacks on MongoDB and Elasticsearch too.
During development I found this API call – it simply wipes everything inside the database:

[root@icinga2-elastic ~]# curl -X DELETE http://localhost:9200/_all
{"acknowledged":true}

I don’t want any user to run this command from anywhere. You can disable this by setting “action.destructive_requires_name” to “true” inside the configuration, but it is not the default.
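For reference, a minimal sketch of that setting in elasticsearch.yml:

# /etc/elasticsearch/elasticsearch.yml
# refuse wildcard and _all deletes – indices must be addressed by name
action.destructive_requires_name: true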
Similarly, you can read and write anything without any access rules in place, whether you query Elasticsearch directly or go through Kibana.
 

Firewall and Network Segments

A possible solution is to limit the network transport to only allowed hosts via firewall rules and so on. If your Elasticsearch cluster is running on a dedicated VLAN, you would need to allow the Icinga 2 monitoring host to write data into http://elasticsearch.yourdomain:9200 – anyone on that machine could still read/write without any security mechanism in place.
 

HTTP Proxy with Nginx

Start with the plain proxy pass configuration and forward http://localhost:9200 to your external interface (192.168.33.7 in this example).

# MANAGED BY PUPPET
server {
  listen       192.168.33.7:9200;
  server_name  elasticsearch.vagrant-demo.icinga.com;
  index  index.html index.htm index.php;
  access_log            /var/log/nginx/ssl-elasticsearch.vagrant-demo.icinga.com.access.log combined;
  error_log             /var/log/nginx/ssl-elasticsearch.vagrant-demo.icinga.com.error.log;
  location / {
    proxy_pass            http://localhost:9200;
    proxy_read_timeout    90;
    proxy_connect_timeout 90;
    proxy_set_header      Host $host;
    proxy_set_header      X-Real-IP $remote_addr;
    proxy_set_header      X-Forwarded-For $proxy_add_x_forwarded_for;
    proxy_set_header      Proxy "";
  }
}

Restart Nginx and test the connection from the external interface.

# systemctl restart nginx
# curl -v http://192.168.33.7:9200

Once this is working, proceed with adding basic auth and TLS.
 

HTTP Proxy with Basic Auth

This restricts access to authenticated users only. It's best to manage the basic auth users file with Puppet, Ansible, etc. – similar to how you manage your Nginx configuration. Our consultants use that method on a regular basis and provided me with some examples for Nginx. You could do the same with Apache – I'd say that is a matter of taste and performance.
Generate a username/password combination e.g. using the htpasswd CLI command.

htpasswd -c /etc/nginx/elasticsearch.passwd icinga

Specify the basic auth message and the file which contains the basic auth users.

    auth_basic                "Elasticsearch auth";
    auth_basic_user_file      "/etc/nginx/elasticsearch.passwd";

Restart Nginx and connect to the external interface.

# systemctl restart nginx
# curl -v -u icinga:icinga http://192.168.33.7:9200

 

HTTP Proxy with TLS

The Elasticsearch HTTP API does not support TLS out of the box. You need to enforce HTTPS via the HTTP proxy, enable SSL and set up the required certificates.
Restrict the listen directive to SSL only – that way plain HTTP won't work.

  listen       192.168.33.7:9200 ssl;

Enable SSL, specify the certificate paths on disk, use TLSv1 and above and optionally secure the used ciphers.

  ssl on;
  ssl_certificate           /etc/nginx/certs/icinga2-elastic.crt;
  ssl_certificate_key       /etc/nginx/certs/icinga2-elastic.key;
  ssl_session_cache         shared:SSL:10m;
  ssl_session_timeout       5m;
  ssl_protocols             TLSv1 TLSv1.1 TLSv1.2;
  ssl_ciphers               ECDHE-ECDSA-CHACHA20-POLY1305:ECDHE-RSA-CHACHA20-POLY1305:ECDHE-ECDSA-AES128-GCM-SHA256:ECDHE-RSA-AES128-GCM-SHA256:ECDHE-ECDSA-AES256-GCM-SHA384:ECDHE-RSA-AES256-GCM-SHA384:DHE-RSA-AES128-GCM-SHA256:DHE-RSA-AES256-GCM-SHA384:ECDHE-ECDSA-AES128-SHA256:ECDHE-RSA-AES128-SHA256:ECDHE-ECDSA-AES128-SHA:ECDHE-RSA-AES256-SHA384:ECDHE-RSA-AES128-SHA:ECDHE-ECDSA-AES256-SHA384:ECDHE-ECDSA-AES256-SHA:ECDHE-RSA-AES256-SHA:DHE-RSA-AES128-SHA256:DHE-RSA-AES128-SHA:DHE-RSA-AES256-SHA256:DHE-RSA-AES256-SHA:ECDHE-ECDSA-DES-CBC3-SHA:ECDHE-RSA-DES-CBC3-SHA:EDH-RSA-DES-CBC3-SHA:AES128-GCM-SHA256:AES256-GCM-SHA384:AES128-SHA256:AES256-SHA256:AES128-SHA:AES256-SHA:DES-CBC3-SHA:!DSS;
  ssl_prefer_server_ciphers on;
  ssl_trusted_certificate   /etc/nginx/certs/ca.crt;

Restart Nginx and connect to the external interface on https. Note: Host verification is disabled in this example.

# systemctl restart nginx
# curl -v -k -u icinga:icinga https://192.168.33.7:9200

 

Combine HTTP Proxy, TLS and Basic Auth

A complete configuration example could look like this:

vim /etc/nginx/sites-enabled/elasticsearch.vagrant-demo.icinga.com.conf
# MANAGED BY PUPPET
server {
  listen       192.168.33.7:9200 ssl;
  server_name  elasticsearch.vagrant-demo.icinga.com;
  ssl on;
  ssl_certificate           /etc/nginx/certs/icinga2-elastic.crt;
  ssl_certificate_key       /etc/nginx/certs/icinga2-elastic.key;
  ssl_session_cache         shared:SSL:10m;
  ssl_session_timeout       5m;
  ssl_protocols             TLSv1 TLSv1.1 TLSv1.2;
  ssl_ciphers               ECDHE-ECDSA-CHACHA20-POLY1305:ECDHE-RSA-CHACHA20-POLY1305:ECDHE-ECDSA-AES128-GCM-SHA256:ECDHE-RSA-AES128-GCM-SHA256:ECDHE-ECDSA-AES256-GCM-SHA384:ECDHE-RSA-AES256-GCM-SHA384:DHE-RSA-AES128-GCM-SHA256:DHE-RSA-AES256-GCM-SHA384:ECDHE-ECDSA-AES128-SHA256:ECDHE-RSA-AES128-SHA256:ECDHE-ECDSA-AES128-SHA:ECDHE-RSA-AES256-SHA384:ECDHE-RSA-AES128-SHA:ECDHE-ECDSA-AES256-SHA384:ECDHE-ECDSA-AES256-SHA:ECDHE-RSA-AES256-SHA:DHE-RSA-AES128-SHA256:DHE-RSA-AES128-SHA:DHE-RSA-AES256-SHA256:DHE-RSA-AES256-SHA:ECDHE-ECDSA-DES-CBC3-SHA:ECDHE-RSA-DES-CBC3-SHA:EDH-RSA-DES-CBC3-SHA:AES128-GCM-SHA256:AES256-GCM-SHA384:AES128-SHA256:AES256-SHA256:AES128-SHA:AES256-SHA:DES-CBC3-SHA:!DSS;
  ssl_prefer_server_ciphers on;
  ssl_trusted_certificate   /etc/nginx/certs/ca.crt;
  auth_basic                "Elasticsearch auth";
  auth_basic_user_file      "/etc/nginx/elasticsearch.passwd";
  index  index.html index.htm index.php;
  access_log            /var/log/nginx/ssl-elasticsearch.vagrant-demo.icinga.com.access.log combined;
  error_log             /var/log/nginx/ssl-elasticsearch.vagrant-demo.icinga.com.error.log;
  location / {
    proxy_pass            http://localhost:9200;
    proxy_read_timeout    90;
    proxy_connect_timeout 90;
    proxy_set_header      Host $host;
    proxy_set_header      X-Real-IP $remote_addr;
    proxy_set_header      X-Forwarded-For $proxy_add_x_forwarded_for;
    proxy_set_header      Proxy "";
  }
}

The following example query does not verify the offered host certificate. In case you configure the ElasticWriter feature in Icinga 2 v2.8, you’ll find the options to specify certificates for TLS handshake verification.

$ curl -v -k -u icinga:icinga https://192.168.33.7:9200
* Rebuilt URL to: https://192.168.33.7:9200/
*   Trying 192.168.33.7...
* TCP_NODELAY set
* Connected to 192.168.33.7 (192.168.33.7) port 9200 (#0)
* WARNING: disabling hostname validation also disables SNI.
* TLS 1.2 connection using TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256
* Server certificate: icinga2-elastic
* Server auth using Basic with user 'icinga'
> GET / HTTP/1.1
> Host: 192.168.33.7:9200
> Authorization: Basic aWNpbmdhOmljaW5nYQ==
> User-Agent: curl/7.54.0
> Accept: */*
>
< HTTP/1.1 200 OK
< Server: nginx/1.12.1
< Date: Tue, 12 Sep 2017 13:52:31 GMT
< Content-Type: application/json; charset=UTF-8
< Content-Length: 340
< Connection: keep-alive
<
{
  "name" : "icinga2-elastic-elastic-es",
  "cluster_name" : "elastic",
  "cluster_uuid" : "axUBwxpFSpeFBmVRD6tTiw",
  "version" : {
    "number" : "5.5.2",
    "build_hash" : "b2f0c09",
    "build_date" : "2017-08-14T12:33:14.154Z",
    "build_snapshot" : false,
    "lucene_version" : "6.6.0"
  },
  "tagline" : "You Know, for Search"
}
* Connection #0 to host 192.168.33.7 left intact
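For the Icinga 2 side, a hedged sketch of what the writer configuration could look like once v2.8 is out – the object type and attribute names here are my assumptions based on the upcoming documentation, so double-check them against the release:

object ElasticsearchWriter "elasticsearch" {
  host = "192.168.33.7"
  port = 9200
  enable_tls = true
  ca_path = "/var/lib/icinga2/certs/ca.crt" // example path
  username = "icinga"
  password = "icinga"
}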

 

Conclusion

Secure data transfer from your monitoring instances to Elasticsearch is mandatory. Basic access control via basic auth should also be implemented. All of this is possible with the help of a dedicated HTTP proxy host. Fine-granular access control for specific HTTP requests is available in the commercial Shield package or its variants. While securing Elasticsearch, also look into Kibana, which runs on port 5601.
Since we've used the Icinga Vagrant boxes as a development playground, I've added Nginx as an HTTP proxy inside the icinga2x-elastic box. This provisions the required basic auth and TLS settings and lets you write data to https://192.168.33.7:9200 (icinga/icinga). The same applies to Kibana. The examples above can be replayed there too.
If you want to learn more on this topic, make sure to join our Elastic Stack training sessions or kindly invite one of our consultants for a hands-on workshop. Hint: There is an Elastic Stack workshop at OSMC too 🙂


Request Tracker 4.4 Security Update and UTF8 issues with Perl's DBD::Mysql 4.042

Last week, Best Practical announced that there are critical security fixes available for Request Tracker. We’ve therefore immediately pulled the patches from 4.4.2rc2 into our test stages and rolled them out in production.
 

Update == Broken Umlauts

One thing we noticed too late: German umlauts were broken on new ticket creation. Text was simply cut off, which rendered subjects and ticket bodies fairly unreadable.

Encoding issues are not nice and are hard to track down. We rolled back the security fix upgrade and hoped that would simply resolve the issue. It did not – even our production version 4.4.1 now ran into this error.
Our first idea was that our Docker image build somehow changed the locale, but that would at least have rendered “strange” text rather than cutting it off entirely. The same goes for the Apache webserver encoding. We then started comparing the database schema, but it had not been touched in this regard either.
 

DBD::Mysql UTF8 Encoding Changes

During our research we learned that there is a patch available which avoids version 4.042 of Perl's DBD::mysql. The description says something about changed behaviour with utf8 encoding. Moving from RT to DBD::mysql's changelog, there is a clear indication that they've fixed a long-standing bug with utf8 encoding – but that probably renders all the existing workarounds in RT and other applications unusable.

2016-12-12 Patrick Galbraith, Michiel Beijen, DBI/DBD community (4.041_1)
* Unicode fixes: when using mysql_enable_utf8 or mysql_enable_utf8mb4,
previous versions of DBD::mysql did not properly encode input statements
to UTF-8 and retrieved columns were always UTF-8 decoded regardless of the
column charset.
Fix by Pali Rohár.
Reported and feedback on fix by Marc Lehmann
(https://rt.cpan.org/Public/Bug/Display.html?id=87428)
Also, the UTF-8 flag was not set for decoded data:
(https://rt.cpan.org/Public/Bug/Display.html?id=53130)

 

Solution

Our build system for the RT container pulls in all required Perl dependencies via a cpanfile configuration. Instead of always pulling the latest version of DBD::mysql, we've now pinned it to the last known working version, 4.041.

 # MySQL
-requires 'DBD::mysql', '2.1018';
+# Avoid bug with utf8 encoding: https://issues.bestpractical.com/Ticket/Display.html?id=32670
+requires 'DBD::mysql', '== 4.041';

Voilà – the NETWAYS and Icinga RT production instances are fixed.

 

Conclusion

RT 4.4.2 will fix this by explicitly excluding the broken DBD::mysql version in its dependency checks, but older installations built from source may still suffer from the problem. Keep that in mind when updating your production instances. Hopefully a proper fix can be found to allow a smooth upgrade to newer library dependencies.
If you need assistance with building your own RT setup, or are having trouble fixing this exact issue, just contact us 🙂


Documentation matters or why OSMC made me write NSClient++ API docs

Last year I learned about the new HTTP API in NSClient++. I stumbled over it in a blog post with some details, but there were no docs. My instant thought: “software without docs is crap, throw it away”. But I am a developer myself, and I know how hard it is to put your code and features into the right shape. And I am all for open source, so why not read the source code and reverse-engineer the features?
I found out many things and decided to write a blog post about it. I just had no time, and the old NSClient++ documentation was written in AsciiDoc – a format which isn't easy to write and requires a whole lot of knowledge to format and generate readable previews. I skipped it, but I still eagerly wanted API documentation – not only for myself when I want to look into NSClient++ API queries, but for anyone out there.
 

Join the conversation

I met Michael at OSMC 2016 again, and we somehow managed to talk about NSClient++. “The development environment is tricky to set up,” I told him, “and I struggle with writing documentation patches in AsciiDoc.” We especially talked about the API parts of the documentation I wanted to contribute.
So we made a deal: if I wrote documentation for the NSClient++ API, Michael would go for Markdown as the documentation format and convert everything else. Michael was way too fast in December and greeted me with shiny Markdown. I wasn't ready for it, and time went by. Lately I have been reviewing Jean's check_nscp_api plugin to finally release it with Icinga 2 v2.7, and so I looked into NSClient++, its API and my long-standing TODO again.
Documentation provides facts, and ideally you can walk through it from top to bottom – and an API provides so many things to tell. The NSClient++ API has a bonus: it renders a web interface in your browser too. While thinking about the documentation structure, I could play around with the queries, the online configuration and whatnot.
 

Write and test documentation

Markdown is a markup language. You’ll not only put static text into it, but also use certain patterns and structures to render it in a beautiful representation, just like HTML.
A common approach to rendering Markdown can be seen on GitHub, who enriched the original specification and created “GitHub Flavored Markdown”. You can easily edit the documentation online on GitHub and render a preview. Once the work is done, you send a pull request in a couple of clicks. Very easy 🙂

If you are planning to write a massive amount of documentation with many images added, a local checkout of the git repository and your favourite editor comes in handy. vim handles markdown syntax highlighting already. If you have seen GitHub’s Atom editor, you probably know it has many plugins and features. One of them is to highlight Markdown syntax and to provide a live preview. If you want to do it in your browser, switch to dillinger.io.

NSClient++ uses MkDocs for rendering and building its docs. You can start an MkDocs instance locally and test your edits “live”, just as you would see them on https://docs.nsclient.org.
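A minimal local preview could look like this (assuming MkDocs is installed via pip):

$ pip install mkdocs
$ mkdocs serve     # builds the docs and serves a live-reloading preview on http://127.0.0.1:8000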

Since you always need someone to guide you, the first PR I sent over to Michael was exactly that: highlighting MkDocs inside the README.md 🙂
 

Already have a solution in mind?

Open the documentation and enhance it. Even fixing a typo helps the developers and community members. Don't fall into blaming the developer – that just makes everyone feel bad. And don't just wait until someone else fixes it; not many people love writing documentation.
I've kept writing blog posts and wiki articles as my own documentation for things I found over the years. At some point I went with GitHub and Markdown and decided to push my knowledge upstream. Every time I visit the Grafana module for Icinga Web 2, for example, I can use the docs to copy-paste configs. Guess what my first contribution to this community project was? 🙂
I gave my word to Michael, and you’ll see how easy it is to write docs for NSClient++ – PR #4.

 

Lessons learned

Documentation is different from writing a book or an article in a magazine. Take the Icinga 2 book as an example: It provides many hints, hides unnecessary facts and pushes you into a story about a company and which services to monitor. This is more focussed on the problem the reader will be solving. That does not mean that pure documentation can’t be easy to read, but still it requires more attention and your desire to try things.
You can extend the knowledge gained from reading documentation by attending training sessions or workshops. It's a good feeling when you discuss the things you've just learned, perhaps over a G&T in the evening. A special flow – which I cannot really describe – happens during OSMC workshops and hackathons or at an Icinga Camp near you. I always have the feeling that I've solved so many questions in so little time. That's more than I could ever put into a documentation chapter or into a thread over at monitoring-portal.org 🙂
Still, I love to write and enhance documentation. It is the initial kickoff for any howto, training or workshop that might follow. We make our lives so much easier when things are not asked five times but are instead visible and available as a URL somewhere. I'd like to encourage everyone out there to feel the same about documentation.
 

Conclusion

Ever thought “ah, that is missing” and then forgot to tell anyone? Practice a little and get used to GitHub documentation edits, Atom, vim, MkDocs. Adopt that into your workflow and give something back to your open source project heroes. Marianne is already doing great work on the Icinga 2 documentation! Once your patch gets merged, that's pure energy, I tell you 🙂
I'm looking forward to meeting Michael at OSMC 2017 again, and we will have a G&T together for sure. Oh, Michael, btw – you still need to submit to the Call for Papers. Could it be something about the API, with a link to the newly written docs in the slides, please? 🙂
PS: Ask my colleagues here at NETWAYS about the customer documentation I keep writing. It simply avoids having to explain every little detail in mails, tickets and whatnot. Reduce the stress level and make everyone happy with awesome documentation. That's my spirit 🙂


Flexible and easy log management with Graylog

This week we had the pleasure to welcome Jan Doberstein from Graylog. On Monday our consulting team and myself attended a Graylog workshop held by Jan. Since many of us are already familiar with log management (e.g. Elastic Stack), we’ve skipped the basics and got a deep-dive into the Graylog stack.
You’ll need Elasticsearch, MongoDB and Graylog Server running on your instance and then you are good to go. MongoDB is mainly used for caching and sessions but also as user storage e.g. for dashboards and more. Graylog Server provides a REST API and web interface.
 

Configuration and Inputs

Once you have everything up and running, open your browser and log into Graylog. The default entry page greets you with additional tips and tricks. Graylog is all about usability – you are advised to create inputs to receive data from remote hosts. Everything can be configured via the web interface or the REST API. Jan also told us that some of the more advanced settings are only available via the REST API.

 
If you need more input plugins, you can search the marketplace and install the required one – or create your own. By default Graylog supports GELF, Beats, Syslog, Kafka, AMQP and HTTP.
One thing I also learned during our workshop: Graylog supports Elastic Beats as input as well. This opens up even more possibilities for integrating existing setups with Icingabeat, Filebeat, Winlogbeat and more.
 

Authentication

Graylog supports “internal auth” (manual user creation), sessions/tokens and also LDAP/AD. You can configure and test that via the web interface. One thing to note: The LDAP library doesn’t support nested groups for now. You can create and assign specific roles with restrictions. Even multiple providers and their order can be specified.

 

Streams and Alerts

Incoming messages can be routed into so-called “streams”. You can inspect an existing message and create a rule set based on these details. That way you can for example route your Icinga 2 notification events into Graylog and correlate events in defined streams.

Alerts can be defined based on existing streams. The idea is to check for a specific message count and apply threshold rules. Alerts can also be reset after a defined grace period. If you dig deeper, you'll also recognise the alert notifications, which can be email or HTTP. We also discussed an alert handler which connects to the Icinga 2 API, similar to the Logstash Icinga output. Keep your fingers crossed.

 

Dashboards

You can add stream message counters, histograms and more to your own dashboards. Refresh settings and fullscreen mode are available too. You can export and share these dashboards. If you are looking for automated deployments, those dashboards can be imported via the REST API too.

 

Roadmap

Graylog 2.3 is currently in its alpha stages and will be released in summer 2017. We also learned that it will introduce Elasticsearch 5 as the backend. This enables Graylog to use the HTTP API instead of “simulating” a cluster node, as it does at the moment. The upcoming release also adds support for lookup tables.
 

Try it

I've been fixing a bug inside the Icinga 2 GelfWriter feature lately and was looking for a quick test environment. It turns out the Graylog project offers Docker Compose scripts to bring up a fully running instance. I slightly modified the docker-compose.yml to expose the default GELF TCP input port 12201 on localhost.
vim docker-compose.yml
version: '2'
services:
  mongo:
    image: "mongo:3"
  elasticsearch:
    image: "elasticsearch:2"
    command: "elasticsearch -Des.cluster.name='graylog'"
  graylog:
    image: graylog2/server:2.2.1-1
    environment:
      GRAYLOG_PASSWORD_SECRET: somepasswordpepper
      GRAYLOG_ROOT_PASSWORD_SHA2: 8c6976e5b5410415bde908bd4dee15dfb167a9c873fc4bb8a81f6f2ab448a918
      GRAYLOG_WEB_ENDPOINT_URI: http://127.0.0.1:9000/api
    depends_on:
      - mongo
      - elasticsearch
    ports:
      - "9000:9000"
      - "12201:12201"


docker-compose up

 
Navigate to http://localhost:9000/system/inputs (admin/admin) and add additional inputs, like “Gelf TCP”.
I just enabled the “gelf” feature in Icinga 2 and pointed it to port 12201. As you can see, data is already coming in. All the screenshots above were taken from that demo too 😉
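For reference, the Icinga 2 side is just the “gelf” feature pointed at the exposed port – a minimal sketch (host and port as used in this demo):

# icinga2 feature enable gelf
object GelfWriter "gelf" {
  host = "127.0.0.1"
  port = 12201
}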

 

More?

Jan continued the week with our official two-day Graylog training. Personally, I am really happy to welcome Graylog to our technology stack. I talked about Graylog with Bernd Ahlers at OSMC 2014 and am now even more excited about the newest additions in v2.x. Hopefully Jan joins us for OSMC 2017 – the Call for Papers is already open 🙂
My colleagues are already building Graylog clusters and more integrations. Get in touch if you need help with integrating Graylog into your infrastructure stack 🙂


OSDC 2017: Community connects

After a fully packed and entertaining first day at OSDC, we really enjoyed the evening event at Umspannwerk Ost. Warm weather, tasty food and lots of interesting discussions, just relaxing a bit and preparing for day 2 🙂
 

Warming up

Grabbed a coffee and started with Julien Pivotto on automating Jenkins. Continuous integration matters these days, and there's not only Jenkins but also GitLab CI and more. Julien told us why automation for Jenkins is needed. Likewise, “XML everywhere” makes configuration a tad hard. The same goes for plugins – you literally can't run Jenkins without them. Julien also told us “don't edit XML”, but to go for Groovy and the Jenkins /script API endpoint, for example. The Jenkins Pipeline plugin even allows using YAML as configuration files. In terms of managing the daemon, I learned about “init.groovy.d” to manage and fire additional Groovy scripts. You can use the Job DSL Groovy plugin to define jobs in a declarative manner.
Julien’s talk really was an impressive deep dive leading to Jenkins running Docker and more production hints. After all an amazing presentation, like James said 🙂


I decided to stay in MOA5 for the upcoming talks and will happily await the conference archive once videos are uploaded in the next couple of days.
Casey Calendrello from CoreOS led us into the evolution of the container network interface. I'm still a beginner with containers, Kubernetes and how networks are managed with them, so I learned quite a lot. CNI originates from rkt and is now built as a separate project and library for Go-based software. Casey provided an impressive introduction and deep dive on how to connect your containers to the network – bridged, NAT, overlay networks and their pros and cons. CNI also provides many plugins to create and manage specific interfaces on your machine. It's magic, and the many mentioned tool names certainly mean I need to look them up and start playing to fully understand the capabilities 😉


Yesterday Seth Vargo from HashiCorp had 164 slides and promised to have just 18 today, with lunch coming up soon. Haha, no – it was live demo time with modern secrets management using Vault. We also learned that Vault was developed and run internally at HashiCorp for over a year, and it received a security review by the NCC Group before actually being released as open source. Generally speaking it is “just” an encrypted key-value store for secrets. Seth told us “our” story – create a database password once, write it down and never change it for years. And since the process of asking the DBA for access is so complicated, you just save the plain-text password somewhere in your home directory 😉
Live demo time – status checks and working with key creation. Managing PostgreSQL users and credentials with Vault – wow, that simple? That's now on my TODO list to play with too. Seth also released the magic Vault demo as open source on GitHub right afterwards – awesome!


 

Enjoying the afternoon

We had a tasty lunch and were glad to see Felix Frank following up with “Is that an Ansible? Stop holding it like a Puppet!” – a hilarious talk title already. He provided an overview of the different architectures and naming schemas, the community modules (Puppet Forge, Ansible Galaxy) and also compared the configuration syntax (hash-like DSL vs. YAML). Both tools have their advantages, but you certainly shouldn't enforce one's model onto the other.


Phew, I learned so many things today already. Unfortunately I missed Sebastian giving an introduction to our very own NETWAYS Web Services platform managed with Mesos and Marathon (I rest assured it was just awesome).
After a short coffee break we continued making decisions – previously Puppet vs. Ansible, now VMware vs. Rudder, location-wise. I decided to listen to Dr. Udo Seidel diving into “VMware's (Open Source) way of Container”. VMware is traditionally not very open-source friendly, but things are changing. Most likely you've heard about Photon OS serving as a minimal container host. It was an interesting talk about the possibilities with VMware, but still, I left it with a “yet another platform” feeling.
The last talk of a day full of learning was about containerised DBs by Claus Matzinger from Crate.io. CrateDB provides a shared-nothing architecture and includes partitioning, auto-sharding and replication. It even supports structured and unstructured data plus an SQL interface. Sounds promising after all.
Dirk talked about Foreman as a lifecycle management tool in MOA4 – too bad I missed it.


 

Conclusion

The coffee breaks and lunch brought up so many interesting discussions. The food was really tasty and I'm sure everyone had a great time – so did I. My personal highlights this year: follow up on Seth's talk and try Consul and Vault, do a deep dive into mgmt and tell James about it, and learn more about Ansible and put it into context with Puppet, like Felix showed in his talk. As always, I'm in love with Elastic Beats and will follow closely how log management evolves, also on the Graylog side of life (2.3 is coming soon, Jan and Bernd promised).
Many thanks to our sponsor Thomas Krenn AG for being with us for so long. And also for the tasty Linzer Torte – feels like home 🙂


Thanks for a great conference, safe travels home and see you all next year!
Save the date for OSDC 2018: 12. – 14.6.2018!
 
