Request Tracker 4.4 Security Update and UTF8 issues with Perl's DBD::Mysql 4.042

Last week, Best Practical announced that there are critical security fixes available for Request Tracker. We’ve therefore immediately pulled the patches from 4.4.2rc2 into our test stages and rolled them out in production.
 

Update == Broken Umlauts

One thing we noticed too late: German umlauts were broken on new ticket creation. Text was simply cut off, rendering subjects and ticket bodies fairly unreadable.

Encoding issues are never nice and are hard to track down. We rolled back the security-fix upgrade and hoped it would simply fix the issue. It did not; even our production version 4.4.1 now ran into this error.
Our first idea was that our Docker image build somehow changes the locale, but that would at least have rendered “strange” text rather than cutting it off entirely. The same goes for the Apache webserver encoding. We then started comparing the database schema, but it had not been touched in this regard either.
 

DBD::Mysql UTF8 Encoding Changes

During our research we learned that there is a patch available which avoids Perl's DBD::mysql version 4.042. The description mentions changed behaviour with UTF-8 encoding. Moving from RT to DBD::mysql's changelog, there is a clear indication that they fixed a long-standing bug with UTF-8 encoding – which probably renders all the existing workarounds in RT and other applications unusable.

2016-12-12 Patrick Galbraith, Michiel Beijen, DBI/DBD community (4.041_1)
* Unicode fixes: when using mysql_enable_utf8 or mysql_enable_utf8mb4,
previous versions of DBD::mysql did not properly encode input statements
to UTF-8 and retrieved columns were always UTF-8 decoded regardless of the
column charset.
Fix by Pali Rohár.
Reported and feedback on fix by Marc Lehmann
(https://rt.cpan.org/Public/Bug/Display.html?id=87428)
Also, the UTF-8 flag was not set for decoded data:
(https://rt.cpan.org/Public/Bug/Display.html?id=53130)

 

Solution

Our build system for the RT container pulls in all required Perl dependencies via a cpanfile configuration. Instead of always pulling the latest version of DBD::mysql, we've now pinned it to the last known working version, 4.041.

 # MySQL
-requires 'DBD::mysql', '2.1018';
+# Avoid bug with utf8 encoding: https://issues.bestpractical.com/Ticket/Display.html?id=32670
+requires 'DBD::mysql', '== 4.041';
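To verify the pin actually takes effect inside the built image, you can print the installed module version – a quick sketch using Perl's standard $VERSION variable:

# Should print 4.041 after a rebuild
perl -MDBD::mysql -e 'print "$DBD::mysql::VERSION\n"'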

Voilà, NETWAYS and Icinga RT production instances fixed.

 

Conclusion

RT 4.4.2 will fix this by explicitly excluding the broken DBD::mysql version in its dependency checks, but older installations built locally from source may still suffer from this problem. Keep that in mind when updating your production instances. Hopefully a proper fix can be found to allow a smooth upgrade to newer library dependencies.
If you need assistance with building your own RT setup, or are having trouble fixing this exact issue, just contact us 🙂

Michael Friedrich
Senior Developer

Michael has been an Icinga developer for many years and ventured into the NETWAYS adventure at the end of 2012. He moved from Vienna to Nuremberg, with a fondness for importing Austrian delicacies – more than one colleague has despaired over the addictive Dragee-Keksi and the Linzer Torte. Or simply over the Austrian dialect, which he likes to practise with Thomas in the office (“Jo eh.”). When Michael isn't helping out in the community, he is working on his next LEGO project or enjoying...

Documentation matters or why OSMC made me write NSClient++ API docs

Last year I learned about the new HTTP API in NSClient++. I stumbled over it in a blog post with some details, but there were no docs. My instant thought: “software without docs is crap, throw it away”. I am a developer myself, and I know how hard it is to put your code and features into the right shape. And I am all for open source, so why not read the source code and reverse engineer the features?
I found out many things and decided to write a blog post about them. I just had no time, and the old NSClient++ documentation was written in AsciiDoc – a format which isn't easy to write and requires a whole lot of knowledge to format and generate readable previews. I skipped it, but I eagerly wanted to have API documentation – not only for myself when looking into NSClient++ API queries, but for anyone out there.
 

Join the conversation

I met Michael at OSMC 2016 again, and we somehow managed to talk about NSClient++. “The development environment is tricky to set up”, I told him, “and I struggle with writing documentation patches in AsciiDoc.” We especially talked about the API parts of the documentation I wanted to contribute.
So we made a deal: if I wrote documentation for the NSClient++ API, Michael would go for Markdown as the documentation format and convert everything else. Michael was way too fast in December and greeted me with shiny Markdown. I wasn't ready for it, and time went by. Lately I have been reviewing Jean's check_nscp_api plugin to finally release it in Icinga 2 v2.7, and so I looked into NSClient++, its API and my long-standing TODO again.
Documentation provides facts, and ideally you can walk through it from top to bottom – and an API provides so many things to tell. The NSClient++ API has a bonus: it renders a web interface in your browser too. While thinking about the documentation structure, I could play around with the queries, the online configuration and more.
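If you want to poke at the API while reading, here is a quick sketch – the query endpoint and default port 8443 are assumptions based on my reading of the docs, so adjust them to your local configuration:

# Query a single check via the NSClient++ HTTP API (self-signed certificate, hence -k)
curl -k -u admin https://localhost:8443/query/check_cpu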
 

Write and test documentation

Markdown is a markup language. You not only put static text into it, but also use certain patterns and structures which are rendered into a beautiful representation, just like HTML.
A common approach to rendering Markdown can be seen on GitHub, who enriched the original specification and created “GitHub Flavored Markdown”. You can easily edit documentation online on GitHub and render a preview. Once the work is done, you send a pull request in a couple of clicks. Very easy 🙂


If you are planning to write a massive amount of documentation with many images added, a local checkout of the Git repository and your favourite editor come in handy. vim handles Markdown syntax highlighting already. If you have seen GitHub's Atom editor, you probably know it has many plugins and features – one of them highlights Markdown syntax and provides a live preview. If you want to do it in your browser, switch to dillinger.io.

NSClient++ uses MkDocs for rendering and building the docs. You can start an MkDocs instance locally and test your edits “live”, as you would see them on https://docs.nsclient.org.
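If you want to try that yourself, a minimal sketch – assuming a checkout of the NSClient++ repository containing the mkdocs.yml:

pip install mkdocs
# Serve the docs with live reload, preview at http://127.0.0.1:8000
mkdocs serve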

Since you always need someone who guides you, the first PR I sent over to Michael was exactly that – highlighting MkDocs inside the README.md 🙂
 

Already have a solution in mind?

Open the documentation and enhance it. Even fixing a typo helps the developers and community members. Don't resort to blaming the developer – that just creates bad feelings. And don't just wait until someone else fixes it; not many people love writing documentation.
I kept writing blog posts and wiki articles as my own documentation for things I found over the years. At some point I settled on GitHub and Markdown and decided to push my knowledge upstream. Every time I visit the Grafana module for Icinga Web 2, for example, I can use the docs to copy-paste configs. Guess what my first contribution to this community project was? 🙂
I gave my word to Michael, and you’ll see how easy it is to write docs for NSClient++ – PR #4.


 

Lessons learned

Documentation is different from writing a book or an article in a magazine. Take the Icinga 2 book as an example: it provides many hints, hides unnecessary facts and pulls you into a story about a company and the services it needs to monitor. It focuses on the problem the reader is solving. That does not mean pure documentation can't be easy to read, but it requires more attention and a desire to try things out.
You can extend the knowledge gained from reading documentation by moving into training sessions or workshops. It's a good feeling to discuss the things you've just learned, or to chat over a G&T in the evening. A special flow – which I cannot really describe – happens during OSMC workshops and hackathons, or at an Icinga Camp near you. I always have the feeling that I've solved so many questions in so little time. That's more than I could ever write into a documentation chapter or a thread over at monitoring-portal.org 🙂
Still, I love writing and enhancing documentation. It is the initial kickoff for any howto, training or workshop which might follow. We make our lives so much easier when things are not asked five times but are visible and available at a URL somewhere. I'd like to encourage everyone out there to feel the same about documentation.
 

Conclusion

Ever thought “ah, that is missing” and then forgot to tell anyone? Practice a little and get used to GitHub documentation edits, Atom, vim, MkDocs. Adopt that into your workflow and give something back to your open source project heroes. Marianne is already doing great with the Icinga 2 documentation! Once your patch gets merged, that's pure energy, I tell you 🙂
I'm looking forward to meeting Michael at OSMC 2017 again, and we will have a G&T together for sure. Oh, Michael, by the way – you still need to submit to the Call for Papers. Could it be something about the API, with a link to the newly written docs in the slides, please? 🙂
PS: Ask my colleagues here at NETWAYS about the customer documentation I keep writing. It simply avoids explaining every little detail in mails, tickets and whatnot. Reduce the stress level and make everyone happy with awesome documentation. That's my spirit 🙂


Flexible and easy log management with Graylog

This week we had the pleasure of welcoming Jan Doberstein from Graylog. On Monday our consulting team and I attended a Graylog workshop held by Jan. Since many of us are already familiar with log management (e.g. Elastic Stack), we skipped the basics and got a deep dive into the Graylog stack.
You'll need Elasticsearch, MongoDB and Graylog Server running on your instance, and then you are good to go. MongoDB is mainly used for caching and sessions, but also as storage for user data such as dashboards and more. Graylog Server provides a REST API and a web interface.
 

Configuration and Inputs

Once you have everything up and running, open your browser and log into Graylog. The default entry page greets you with additional tips and tricks. Graylog is all about usability – you are advised to create inputs to send in data from remote sources. Everything can be configured via the web interface or the REST API. Jan also told us that some more advanced settings are only available via the REST API.
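As a small sketch of that REST API, listing the configured inputs works with a simple authenticated GET – assuming the default admin/admin credentials of a demo setup:

# List all configured inputs as JSON
curl -s -u admin:admin -H 'Accept: application/json' http://127.0.0.1:9000/api/system/inputs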


 
If you need more input plugins, you can search the marketplace and install the required one – or create your own. Out of the box, Graylog supports GELF, Beats, Syslog, Kafka, AMQP and HTTP.
One thing I also learned during our workshop: Graylog supports Elastic Beats as input too. This opens up even more possibilities for integrating existing setups with Icingabeat, Filebeat, Winlogbeat and more.
 

Authentication

Graylog supports “internal auth” (manual user creation), sessions/tokens and also LDAP/AD. You can configure and test that via the web interface. One thing to note: The LDAP library doesn’t support nested groups for now. You can create and assign specific roles with restrictions. Even multiple providers and their order can be specified.


 

Streams and Alerts

Incoming messages can be routed into so-called “streams”. You can inspect an existing message and create a rule set based on these details. That way you can for example route your Icinga 2 notification events into Graylog and correlate events in defined streams.


Alerts can be defined based on existing streams. The idea is to check for a specific message count and apply threshold rules. Alerts can also be reset after a defined grace period. If you dig deeper, you'll also find the alert notifications, which can be email or HTTP. We also discussed an alert handler which connects to the Icinga 2 API, similar to the Logstash Icinga output. Keep your fingers crossed.

 

Dashboards

You can add stream message counters, histograms and more to your own dashboards. Refresh settings and fullscreen mode are available too. You can export and share these dashboards. If you are looking for automated deployments, those dashboards can be imported via the REST API too.


 

Roadmap

Graylog 2.3 is currently in alpha and will be released in summer 2017. We also learned that it will introduce Elasticsearch 5 as backend. This enables Graylog to use the Elasticsearch HTTP API instead of “simulating” a cluster node, as it does at the moment. The upcoming release also adds support for lookup tables.
 

Try it

I've been fixing a bug in the Icinga 2 GelfWriter feature lately and was looking for a quick test environment. It turns out the Graylog project offers Docker Compose scripts to bring up a fully running instance. I've slightly modified the docker-compose.yml to expose the default GELF TCP input port 12201 on localhost.
vim docker-compose.yml

version: '2'
services:
  mongo:
    image: "mongo:3"
  elasticsearch:
    image: "elasticsearch:2"
    command: "elasticsearch -Des.cluster.name='graylog'"
  graylog:
    image: graylog2/server:2.2.1-1
    environment:
      GRAYLOG_PASSWORD_SECRET: somepasswordpepper
      GRAYLOG_ROOT_PASSWORD_SHA2: 8c6976e5b5410415bde908bd4dee15dfb167a9c873fc4bb8a81f6f2ab448a918
      GRAYLOG_WEB_ENDPOINT_URI: http://127.0.0.1:9000/api
    depends_on:
      - mongo
      - elasticsearch
    ports:
      - "9000:9000"
      - "12201:12201"

docker-compose up

 
Navigate to http://localhost:9000/system/inputs (admin/admin) and add additional inputs, like “GELF TCP”.
I just enabled the “gelf” feature in Icinga 2 and pointed it to port 12201. As you can see, there's some data flowing in. All screenshots above were taken from that demo too 😉
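If you want to test the input without Icinga 2 first, you can push a GELF message manually – a sketch using netcat, relying on GELF TCP frames being NUL-terminated JSON:

# Send a single GELF TCP test message to the exposed input
printf '%s\0' '{"version":"1.1","host":"demo","short_message":"Hello Graylog"}' | nc -w 1 127.0.0.1 12201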


 

More?

Jan continued the week with our official two-day Graylog training. Personally, I am really happy to welcome Graylog to our technology stack. I talked about Graylog with Bernd Ahlers at OSMC 2014 and am now even more excited about the newest additions in v2.x. Hopefully Jan will join us for OSMC 2017 – the Call for Papers is already open 🙂
My colleagues are already building Graylog clusters and more integrations. Get in touch if you need help with integrating Graylog into your infrastructure stack 🙂


OSDC 2017: Community connects

After a fully packed and entertaining first day at OSDC, we really enjoyed the evening event at Umspannwerk Ost. Warm weather, tasty food and lots of interesting discussions, just relaxing a bit and preparing for day 2 🙂
 

Warming up

Grabbed a coffee and started with Julien Pivotto on Automating Jenkins. Continuous integration matters these days, and there's not only Jenkins but also GitLab CI and more. Julien told us why automation for Jenkins is needed. Likewise, “XML everywhere” makes configuration a tad hard. The same goes for plugins – you literally can't run Jenkins without them. Julien also told us “don't edit XML”, but to go, for example, for Groovy and the Jenkins /script API endpoint. The Jenkins Pipeline plugin even allows using YAML as configuration files. In terms of managing the daemon, I learned about “init.groovy.d” to manage and fire additional Groovy scripts. You can use the Job DSL Groovy plugin to define jobs in a declarative manner.
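As a sketch of that /script endpoint: its text variant can be driven with curl – the user and token here are placeholders, and newer Jenkins versions may additionally require a CSRF crumb:

# Run a Groovy one-liner on the Jenkins master and print all installed plugin names
curl -u admin:APITOKEN --data-urlencode "script=println(Jenkins.instance.pluginManager.plugins*.shortName)" http://localhost:8080/scriptText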
Julien’s talk really was an impressive deep dive leading to Jenkins running Docker and more production hints. After all an amazing presentation, like James said 🙂


I decided to stay in MOA5 for the upcoming talks and will happily await the conference archive once videos are uploaded in the next couple of days.
Casey Calendrello from CoreOS led us into the evolution of the container network interface. I'm still a beginner with containers, Kubernetes and how their networks are managed, so I learned quite a lot. CNI originates from rkt and is now built as a separate project and library for Go-based software. Casey provided an impressive introduction and deep dive into how to connect your containers to the network – bridged, NAT, overlay networks and their pros and cons. CNI also provides many plugins to create and manage specific interfaces on your machine. It's magic, and the many mentioned tool names certainly mean I need to look them up and start playing to fully understand the capabilities 😉


Yesterday Seth Vargo from HashiCorp had 164 slides and promised to have just 18 today, so we'd move to lunch soon. Haha, no – it was live demo time for modern secrets management with Vault. We also learned that Vault was developed and run internally at HashiCorp for over a year, and it received a security review by the NCC Group before actually being released as open source. Generally speaking, it is “just” an encrypted key-value store for secrets. Seth told us “our” story – create a database password once, write it down and never change it for years. And the process of asking the DBA for access is so complicated that you just save the plain-text password somewhere in your home directory 😉
Live demo time – status checks and key creation. Managing PostgreSQL users and credentials with Vault – wow, that simple? That's now on the TODO list to play with too. Seth also released the magic Vault demo as open source on GitHub right afterwards, awesome!
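For reference, the basic flow looks roughly like this – a sketch against a throwaway dev server, with the secret path made up for illustration:

# Start an in-memory dev server (unsealed automatically, do not use in production)
vault server -dev
# In a second shell:
export VAULT_ADDR='http://127.0.0.1:8200'
vault status
# Write and read back a secret in the generic key-value backend
vault write secret/osdc/db password=s3cr3t
vault read secret/osdc/db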


 

Enjoying the afternoon

We had a tasty lunch and were glad to see Felix Frank following up with “Is that an Ansible? Stop holding it like a Puppet!” – a hilarious talk title already. He provided an overview of the different architectures and naming schemes, community modules (Puppet Forge, Ansible Galaxy) and also compared the configuration syntax (hash-like DSL vs. YAML). Both tools have their advantages, but you certainly shouldn't force one's model onto the other.


Phew, I learned so many things today already. I unfortunately missed Sebastian's introduction to our very own NETWAYS Web Services platform, managed with Mesos and Marathon (rest assured, it was just awesome).
After a short coffee break we continued to make decisions – previously Puppet vs. Ansible, now VMware vs. Rudder, location-wise. I decided to listen to Dr. Udo Seidel diving into “VMware's (Open Source) way of Container”. VMware is traditionally not very open source friendly, but things are changing. Most likely you've heard about Photon OS serving as a minimal container host. It was an interesting talk about the possibilities with VMware, but still, I left with a “yet another platform” feeling.
The last talk of a day full of new things was about containerised databases, by Claus Matzinger from Crate.io. CrateDB provides a shared-nothing architecture and includes partitioning, auto-sharding and replication. It even supports structured and unstructured data plus an SQL interface. Sounds promising after all.
Dirk talked about Foreman as a lifecycle management tool in MOA4 – too bad I missed it.


 

Conclusion

Coffee breaks and lunch unveiled so many interesting discussions. The food was really tasty, and I'm sure everyone had a great time – so did I. My personal highlights this year: follow up on Seth's talk and try Consul and Vault, do a deep dive into mgmt and tell James about it, and learn more about Ansible and put it into context with Puppet, as Felix showed in his talk. As always, I'm in love with Elastic Beats and will follow closely how log management evolves, also on the Graylog side of life (2.3 is coming soon, Jan and Bernd promised).
Many thanks to our sponsor Thomas Krenn AG for staying with us for so long. And also for the tasty Linzer Torte – feels like home 🙂


Thanks for a great conference, safe travels home and see you all next year!
Save the date for OSDC 2018: 12. – 14.6.2018!
 


Manage Elasticsearch, Kibana & icingabeat with Puppet

A while back I looked into the Elastic Stack 5 beta release and the Beats integration. Time flies and the stable release is here. Blerim announced the icingabeat 1.0.0 release last week, so I was looking into a smooth integration into a Vagrant box.
The provisioner uses Puppet – so which Puppet modules could be used here? I'm always looking for actively maintained modules, ideally by upstream projects which share the joy of automated setups of their tools with their community.
 

Elastic and Kibana Puppet Module

The Elastic developers started writing their own Puppet module for Kibana 5.x. Previously I used community modules such as puppet-kibana4 or puppet-kibana5. puppet-elasticsearch has supported 5.x for a while already; that works like a charm.
A simple example for a low-memory Elasticsearch setup:

class { 'elasticsearch':
  manage_repo  => true,
  repo_version => '5.x',
  java_install => false,
  jvm_options => [
    '-Xms256m',
    '-Xmx256m'
  ],
  require => Class['java']
} ->
elasticsearch::instance { 'elastic-es':
  config => {
    'cluster.name' => 'elastic',
    'network.host' => '127.0.0.1'
  }
}

Note: jvm_options was released in 5.0.0.
 

Default index pattern

If you do not define any default index pattern, Kibana greets you with a fancy message to do so (and I have to admit – I still need an Elastic Stack training to fully understand why). I wanted to automate that, so that Vagrant box users don't need to care about it. There are several threads around which describe the deployment by sending a PUT request to the Elasticsearch backend.
I'm using a small script here which waits until Elasticsearch is started. If you don't do that, the REST API calls will fail later on. This is also required for specifying index patterns and importing dashboards. A Puppet exec timeout won't do the trick here, because we have to loop and check repeatedly whether the service is available.

$ cat /usr/local/bin/kibana-setup
#!/bin/bash
ES_URL="http://localhost:9200"
TIMEOUT=300
START=$(date +%s)
echo -e "Restart elasticsearch instance"
systemctl restart elasticsearch-elastic-es
echo -e "Checking whether Elasticsearch is listening at $ES_URL"
until $(curl --output /dev/null --silent $ES_URL); do
  NOW=$(date +%s)
  REAL_TIMEOUT=$(( START + TIMEOUT ))
  if [[ "$NOW" -gt "$REAL_TIMEOUT" ]]; then
    echo "Cannot reach Elasticsearch at $ES_URL. Timeout reached."
    exit 1
  fi
  printf '.'
  sleep 1
done

If you want to specify the default index pattern, e.g. for an installed Filebeat service, you could use something like this:

ES_INDEX_URL="$ES_URL/.kibana/index-pattern/filebeat"
ES_DEFAULT_INDEX_URL="$ES_URL/.kibana/config/5.2.2" #hardcoded program version
curl -XPUT $ES_INDEX_URL -d '{ "title":"filebeat", "timeFieldName":"@timestamp" }'
curl -XPUT $ES_DEFAULT_INDEX_URL -d '{ "defaultIndex":"filebeat" }'

One problem arises: the configuration is stored per Kibana program version. There is no symlink like “latest”; you need to specify “.kibana/config/5.2.2” to make it work. This means either pinning the Kibana version or finding a programmatic way to determine the installed version.
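One possible way to determine the version programmatically, assuming an RPM-based system like the Vagrant box used here:

# Prints e.g. 5.2.2 – usable for building the .kibana/config/<version> URL
rpm -q --queryformat '%{VERSION}' kibana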
 

Kibana Version in Puppet

Fortunately the Puppet module allows specifying a version. It turns out the class validation didn't support the package revision known from rpm (“5.2.2-1”). Open source is awesome – you patch it, add tests, and after review your contributed patch gets merged upstream.
Example for Kibana with a pinned package version:

$kibanaVersion = '5.2.2'
class { 'kibana':
  ensure => "$kibanaVersion-1",
  config => {
    'server.port' => 5601,
    'server.host' => '0.0.0.0',
    'kibana.index' => '.kibana',
    'kibana.defaultAppId' => 'discover',
    'logging.silent'               => false,
    'logging.quiet'                => false,
    'logging.verbose'              => false,
    'logging.events'               => "{ log: ['info', 'warning', 'error', 'fatal'], response: '*', error: '*' }",
    'elasticsearch.requestTimeout' => 500000,
  },
  require => Class['java']
}
->
file { 'kibana-setup':
  name => '/usr/local/bin/kibana-setup',
  owner => root,
  group => root,
  mode => '0755',
  source => "puppet:////vagrant/files/usr/local/bin/kibana-setup",
}
->
exec { 'finish-kibana-setup':
  path => '/bin:/usr/bin:/sbin:/usr/sbin',
  command => "/usr/local/bin/kibana-setup",
  timeout => 3600
}

 

Icingabeat integration

That's fairly easy: just install the RPM and put a slightly modified icingabeat.yml in place.

$icingabeatVersion = '1.0.0'
yum::install { 'icingabeat':
  ensure => present,
  source => "https://github.com/Icinga/icingabeat/releases/download/v$icingabeatVersion/icingabeat-$icingabeatVersion-x86_64.rpm"
}->
file { '/etc/icingabeat/icingabeat.yml':
  source    => 'puppet:////vagrant/files/etc/icingabeat/icingabeat.yml',
}->
service { 'icingabeat':
  ensure => running
}

I've also found the puppet-archive module from Vox Pupuli, which allows downloading and extracting the required Kibana dashboards for icingabeat. The import then requires Elasticsearch to be up and running (hence the reference to the Kibana setup script again). You'll notice the pinned Kibana version being referenced for setting the default index pattern via an exec resource.

# https://github.com/icinga/icingabeat#dashboards
archive { '/tmp/icingabeat-dashboards.zip':
  ensure => present,
  extract => true,
  extract_path => '/tmp',
  source => "https://github.com/Icinga/icingabeat/releases/download/v$icingabeatVersion/icingabeat-dashboards-$icingabeatVersion.zip",
  checksum => 'b6de2adf2f10b77bd4e7f9a7fab36b44ed92fa03',
  checksum_type => 'sha1',
  creates => "/tmp/icingabeat-dashboards-$icingabeatVersion",
  cleanup => true,
  require => Package['unzip']
}->
exec { 'icingabeat-kibana-dashboards':
  path => '/bin:/usr/bin:/sbin:/usr/sbin',
  command => "/usr/share/icingabeat/scripts/import_dashboards -dir /tmp/icingabeat-dashboards-$icingabeatVersion -es http://127.0.0.1:9200",
  require => Exec['finish-kibana-setup']
}->
exec { 'icingabeat-kibana-default-index-pattern':
  path => '/bin:/usr/bin:/sbin:/usr/sbin',
  command => "curl -XPUT http://127.0.0.1:9200/.kibana/config/$kibanaVersion -d '{ \"defaultIndex\":\"icingabeat-*\" }'",
}

The Puppet code is a first working draft; I plan to refactor the upstream code. Look how fancy 🙂

Conclusion

Managing your Elastic Stack setup with Puppet has really become a breeze. I haven't tackled the many plugins and dashboards available; there's so much more to explore. Once you've integrated icingabeat into your devops stack, how would you build your own dashboards to correlate operations maintenance with Icinga alerts? 🙂
If you’re interested in learning more about Elastic and the awesome Beats integrations, make sure to join OSDC 2017. Monica Sarbu joins us with her talk on “Collecting the right data to monitor your infrastructure”.
PS: Test-drive the integration, finished today 🙂


 
 


Speed up your work with Alfred 3 Workflows

Make things easy at work. Save time for the important bits. That's something everyone talks about, but there's no real “that's mine” guide out there. A while ago I decided to try something new – I switched my entire workplace from Linux to macOS. Working with colleagues using their tools on a daily basis made me ask several times: how does that work? How could I use that workflow? Can you show me how the trackpad works? Swiping back in browser history with two fingers?
Turns out there are a couple of tools which help me save time day by day. Some of them are not part of macOS themselves and thus require you to buy them. If a tool really saves my life (and time) I am willing to do so.
One of these tools is Alfred. Generally speaking, Alfred can be used to search your Mac, quickly open files and applications, and trigger actions via hotkeys – things you can achieve with Apple's Spotlight already. The coolest thing is the additional Alfred Powerpack, which allows you to run shell commands on macOS. You can extend the functionality by adding custom Alfred workflows. That way you can manage your chat clients, music apps, search the mail application and so on.
Locking your screen requires a somewhat quirky key combination or moving the cursor to hot corners, neither of which I like, so I just type “lock” into the Alfred command popup. These days locking my screen is just “Cmd + Space, l, Enter” because Alfred remembers the history. Another tip from Bernd: type “empty” to automatically empty your trash bin.
We're using Jabber at work, and I'm a heavy Adium user. When I want to chat with a co-worker, I just open Alfred and start typing “im <name>” and a new Adium chat tab opens – all thanks to this workflow. If you're sending an email across the ocean, what time is it over there? Use this workflow to avoid googling. Another cool tip: if you are, for instance, converting inches to the metric system, a workflow called units comes in handy.

Yet another workflow is for GitHub repositories, which really helps after the migration of all Icinga repositories. Just type “gh icinga2” and the corresponding GitHub repository opens in your browser. You can search for repos or users, or open a new issue in a specific repository.

Alfred saves me a lot of time already. Keep focus and develop faster 🙂


It's magic: git squash

A common technique for feature development with Git is to start with a new feature branch. You continue to add and commit changes into that branch, and at some point you want to merge the many work-in-progress commits into one. Git terminology calls that “squashing”.
Another common best practice is the Git forking model you are probably familiar with from GitHub. You fork the repository, start adding a new Icinga 2 ITL CheckCommand definition, and later add the missing documentation bits. Then you send a pull request to the upstream repository. Developers will review your changes and probably ask you to rebase and squash your commits into one.
Let’s just try that.

michi@mbmif ~/coding/icinga/icinga2 (master) $ git checkout -b feature/itl-checkcommand-13079
<change, add, commit>
michi@mbmif ~/coding/icinga/icinga2 (feature/itl-checkcommand-13079) $ git l
* ed6a68f- (HEAD -> feature/itl-checkcommand-13079) Docs: Fix phrasing (5 minutes ago)
* ce90e16- Add documentation for logfiles (5 minutes ago)
* 5a31d9e- Add CheckCommand "logfiles" (6 minutes ago)
* dc29924- (origin/master, origin/HEAD, master) Deprecate the client 'bottom up' mode w/ node update-config (2 hours ago)
...

Now I want to merge that feature branch back to the master. Our development guidelines require me to squash the three commits first.
Generally speaking, you can use the interactive “git rebase -i” command here. It requires an additional parameter specifying how many commits to edit, counting back from HEAD. In my case I want to edit the last three commits: HEAD~3.
The interactive mode will open the configured editor, vim in my case.

michi@mbmif ~/coding/icinga/icinga2 (feature/itl-checkcommand-13079) $ git rebase -i HEAD~3
pick 5a31d9e Add CheckCommand "logfiles"
pick ce90e16 Add documentation for logfiles
pick ed6a68f Docs: Fix phrasing
# Rebase dc29924..ed6a68f onto dc29924 (3 commands)
#
# Commands:
# p, pick = use commit
# r, reword = use commit, but edit the commit message
# e, edit = use commit, but stop for amending
# s, squash = use commit, but meld into previous commit
# f, fixup = like "squash", but discard this commit's log message
# x, exec = run command (the rest of the line) using shell
# d, drop = remove commit
#
# These lines can be re-ordered; they are executed from top to bottom.
#
# If you remove a line here THAT COMMIT WILL BE LOST.
#
# However, if you remove everything, the rebase will be aborted.
#
# Note that empty commits are commented out

Git informs us about the possible options here. I could just use “edit”, which iterates over the commits and stops at each one for amending the commit message and/or the changes themselves.
I want to squash the commits. This requires the base commit (usually the first one) to stay on “pick” while the others are changed to “squash”.

pick 5a31d9e Add CheckCommand "logfiles"
squash ce90e16 Add documentation for logfiles
squash ed6a68f Docs: Fix phrasing

This will squash the commits one by one into the base commit we picked. Save the changes and continue.

# This is a combination of 3 commits.
# This is the 1st commit message:
Add CheckCommand "logfiles"
# This is the commit message #2:
Add documentation for logfiles
# This is the commit message #3:
Docs: Fix phrasing
# Please enter the commit message for your changes. Lines starting
# with '#' will be ignored, and an empty message aborts the commit.
#
# Date:      Wed Nov 23 17:19:55 2016 +0100
#
# interactive rebase in progress; onto dc29924
# Last commands done (3 commands done):
#    squash ce90e16 Add documentation for logfiles
#    squash ed6a68f Docs: Fix phrasing
# No commands remaining.
# You are currently editing a commit while rebasing branch 'feature/itl-checkcommand-13079' on 'dc29924'.
#
# Changes to be committed:
#       modified:   doc/10-icinga-template-library.md
#       modified:   itl/plugins-contrib.d/logmanagement.conf

Edit the commit message (comments starting with “#” will be ignored) and make it a good one (also something you learn in the training session ;)).

Add CheckCommand "logfiles"
refs #13079

Save and continue.

[detached HEAD f1445eb] Add CheckCommand "logfiles"
 Date: Wed Nov 23 17:19:55 2016 +0100
 3 files changed, 8 insertions(+)
Successfully rebased and updated refs/heads/feature/itl-checkcommand-13079.

Verify the changed commit history.

michi@mbmif ~/coding/icinga/icinga2 (feature/itl-checkcommand-13079) $ git l
* f1445eb- (HEAD -> feature/itl-checkcommand-13079) Add CheckCommand "logfiles" (3 minutes ago)
* dc29924- (origin/master, origin/HEAD, master) Deprecate the client 'bottom up' mode w/ node update-config (2 hours ago)

When you are updating your remote repository you’ll need to override the remote history with the local history. Be careful when using “-f”!

michi@mbmif ~/coding/icinga/icinga2 (feature/itl-checkcommand-13079) $ git push -f origin feature/itl-checkcommand-13079
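If you are worried about overwriting commits someone else pushed in the meantime, there is a safer variant:

# Refuses the push if the remote branch moved since your last fetch
michi@mbmif ~/coding/icinga/icinga2 (feature/itl-checkcommand-13079) $ git push --force-with-lease origin feature/itl-checkcommand-13079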

Voilà – our Git commit history is now rebased and squashed.
This example works in a similar fashion with a forked repository on GitHub; you'll then need to update your PR accordingly.
Hint: I'm using Git shell integration as well as Git aliases here (“git l”). We'll dive into these topics in our Git training too 🙂

$ git config --list | grep 'alias.l'
alias.l=log --graph --pretty=format:'%Cred%h%Creset-%C(yellow)%d%Creset %s %Cgreen(%cr) %C(bold blue)<%an>%Creset' --abbrev-commit --date=relative --

NETWAYS on tour: PuppetConf in San Diego

Heading to the US and PuppetConf for the 5th year – this time, the NETWAYS folks moved to sunny San Diego. We had our annual Icinga Camp on Tuesday in the same venue as PuppetConf, the beautiful Town & Country Resort.
Bernd organised the entire trip – from the flights to our lovely Airbnb, even going shopping (aka raiding) the local Walmart Neighborhood Market and cooking a meal for the hungry crowd. Last but not least, anyone flying over to the US was offered the possibility of extending the trip with their own vacation plans. Bernd, Julian, Lennart and Florian arrived earlier, and the other folks (Tom, Eric, Dirk, Blerim, Michael) joined them on Monday afternoon.

Guess what happened after getting into the rental cars? A first visit to the In-N-Out Burger right around the corner from LAX. This round was on Bernd too – thanks a lot, man! We agreed to visit the Airbnb first, then look for some food: spaghetti and tasty salad, accompanied by the obligatory G&T. Some of us were just chilling after an 11-hour flight, others were still preparing their Icinga Camp talks.
The next day we arrived at the PuppetConf venue – their branding was nearly everywhere, although the room for Icinga Camp was a tad hard to find. Cosy warm and sunny weather and a full day of #monitoringlove. I'll continue with the details over at the Icinga blog soon.
Wednesday was sort of a free day for those attending PuppetConf. I went to SeaWorld with Lennart and Dirk, the others headed to the beach. Unfortunately Bernd had to leave for Nuremberg, so I was lucky to take his place at PuppetConf. I've been learning and improving my Puppet skills quite a lot in the past year, and also helped with our newly designed open source Puppet training sessions. Glad I could seize this opportunity.
Therefore I'd like to share my experience from this year, also compared to previous events. To start with, PuppetConf again offered a delicious breakfast on the first day. Bonus: the weather was hot and sunny – what a beautiful start to the day. The session rooms were loosely connected inside the building but still sometimes confusing. Everyone was friendly, and in comparison to last year you were not “marketing-scanned” by Puppet folks everywhere.
Nigel Kersten kicked off PuppetConf and announced next year's date and location: San Francisco, 10–12 October 2017. Then Luke Kanies, creator and former CEO of Puppet, started with his keynote – from container numbers (starting and stopping 1.58 × 10^10 containers over 3 years is a hell of a number) to the open source commitment we heard about from Puppet CEO Sanjay Mirchandani. In case you haven't been following closely, Puppet Enterprise has gained certain exclusive features not available in the community version. When it comes to server metrics being PE-only, I can imagine that community members are not amused (I wasn't). Time for changes.
I decided to join the GitHub talk, since it is always interesting to get insights into their operations management. Nearly everything is managed with Hubot; GitHub really is living the #chatops dream. Kevin Paulisse also talked about Puppet as culture and dealing with new contributors, and then announced a new open source tool: octocatalog-diff. It allows comparing Puppet catalogs to avoid deploying unwanted changes to production.
Everyone was really crazy about using Puppet to create and update Docker images. And so was David Lutterkort, unpacking the challenges of containers and their management with Kubernetes in a fascinating talk. Note: that specific nginx demo was used everywhere 😉
“Making Puppet clean up its own mess” sounded provocative, so we went for it. You might be asking – which mess? Take, for example, changed configuration file locations generating collisions. “Write code to clean up code”, or “use Ansible to clean up after Puppet”? Sounded funny, but it can still be a sensible approach instead of waiting for the Puppet agent's 30-minute run interval.
On Thursday evening the social event ran from 6 to 8 pm. Hey, a NETWAYS party hasn't even started at that time 😀 And Jenny is waiting later at night – OSMC is near! From what the others told me, the location and food were great. Blerim and I decided to raid Walmart – again – and have a barbecue together with the other folks not attending the conference. Florian took care of grilling tasty meat, and of course the drinks later on when the PuppetConf party people joined us again.
You would guess no one attends the keynotes on the second day. We made it, and listened to interesting insights into Microsoft's plans for Azure with containers, Nano Server 2016 setup plans and of course PowerShell deployment strategies. Lots of things happening here, definitely worth keeping an eye on.
During the keynotes the internet broke. OK, actually Twitter was down, so I couldn't tweet about #puppetconf. The DNS DDoS even affected Puppet itself – the livestream and the Forge were unreachable. Back in Germany everyone was asleep, but I guess some participants had to deal with their notification email stream rather than listening to sessions 😉
Martin Alfke gave a training session… ehm… talk about moving exec into types and providers. Everyone in the audience could follow along and left the session both entertained and well trained. Since Blerim and I were pretty much into containers, their management and monitoring, we went to see Gareth Rushgrove doing lots of demos showcasing Puppet and Docker. Did I mention the nginx demo already? 😉
We were a bit undecided where to head for the last talk, but then we saw Ben's session “How you actually get hacked”, which differed in topic from the usual Puppet suspects. Oh boy, such sarcasm combined with actual security matters. Ben, if you really lost your job, join NETWAYS. It will be fun 🙂
There were a couple of sessions about the transition from Puppet 3 to 4, though it did not really feel like people are aware that Puppet 3 reaches EOL at the end of 2016. Just recently an interesting discussion started on Twitter.
Compared to last year, PuppetConf – including the session topics, venue and friendliness – nailed it. I'm looking forward to San Francisco next year, probably the best location to attract even more IT people. In case you missed PuppetConf this year, the event archive including video recordings is already online.
We left San Diego on Friday evening, spending two more days in lovely Los Angeles. Lennart, Dirk and I went to LEGOLAND California, while colleagues enjoyed Venice Beach. Then we got our rental car for our road trip to the Grand Canyon and more. But that's a different story… join the NETWAYS tour!
Enjoy some pictures we’ve taken during our NETWAYS “school trip” 🙂


 
 
 
 


Diving into Elastic Stack 5.0.0-beta1 and Elastic Beats

I'm always trying to look into new devops tools and how they best fit with Icinga 2 as a monitoring solution. An often-requested feature is an integration of Elastic Stack and Elastic Beats with Icinga 2: gathering metrics and events, correlating them with additional input sources when analysing a larger outage, and much more.
Last week the first 5.0.0 beta1 release hit my channels and I thought I'd give it a try. The installation is pretty straightforward using packages. Note: this is my first time installing Elastic Stack – so far I only know it from colleagues' hero stories and from the OSDC talk by Monica Sarbu and earlier conferences.


Sharing Git patches in a different way

Sometimes it is necessary to share Git patches locally. Either you cannot push them to a public feature or master branch, or you simply don't have an internet connection – for example while working in a customer VPN, reproducing issues locally.
You could hope for XMPP transferring patch files, or any other local transport. But if you don't want to add, commit, format-patch and send a patch file, you'll obviously go for a plain-text copy-paste of the ‘git diff’ output. Editors and messengers tend to destroy the tab indentation. Now what?
base64 encoding to the rescue.

michi@mbmif ~/coding/icinga/icinga2 (master) $ git diff | base64

Send the blob to your colleague, who can easily apply it. Example on macOS:

michi@mbmif ~/coding/icinga/icinga2 (master) $ base64 -D | patch -p1

Or if written to a temporary file:

michi@mbmif ~/coding/icinga/icinga2 (master) $ base64 -D < patchfile | patch -p1
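On macOS you can even skip the temporary file and go through the clipboard – a small sketch using pbcopy/pbpaste:

michi@mbmif ~/coding/icinga/icinga2 (master) $ git diff | base64 | pbcopy
michi@mbmif ~/coding/icinga/icinga2 (master) $ pbpaste | base64 -D | patch -p1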

More about working with Git and applying patches can be learned in our Git training sessions. There's also a Git workshop during this year's OSMC 🙂
