Let's Encrypt HTTP Proxy: Another approach

Yes, we all know that delivering microservices with Docker is all the rage: it is easy to use, and there is a Docker image available for almost every application you need. The next step to master is how to expose these services to the public, and this seems to be where most people struggle. Browsing the web (mostly GitHub) for HTTP proxies, you'll find an incredible number of images that people have built to fit their own environments, especially when it comes to implementing a Let's Encrypt / SNI encryption service. It raises the same questions every time: Is this really the right way to do it? Wrap custom APIs around conventional products like web servers and proxies, inject megabytes of JSON (or YAML or TOML) through environment variables, and build scripts to convert all of this into the product-specific configuration language? My bad conscience knocked on the door every time I did it.
Some weeks ago, I stumbled upon Træfik, which is obviously not a new Tool album but an HTTP proxy server that has everything a highly dynamic Docker platform needs to expose its services, with Let's Encrypt quietly included. Such a thing doesn't exist, you say?
A brief summary:

Træfik is a single-binary daemon, written in Go, lightweight, and usable in virtually any modern environment. Configuration is done by choosing one of the backends you already have. This could be an orchestrator like Swarm or Kubernetes, but you can also use a more "open" approach such as etcd, REST APIs, or the file backend (backends can be mixed, of course). For example, if you are using plain Docker or Docker Compose, Træfik uses Docker object labels to configure services. A simple configuration looks like this:

[docker]
endpoint = "unix:///var/run/docker.sock"
# endpoint = "tcp://127.0.0.1:2375"
domain = "docker.localhost"
watch = true

Træfik constantly watches for changes in your running Docker containers and automatically adds backends to its configuration. The containers themselves only need labels like this (configured via Docker Compose in this example):

whoami:
  image: emilevauge/whoami # A container that exposes an API to show its IP address
  labels:
    - "traefik.frontend.rule=Host:whoami.docker.localhost"

The point is that you can configure everything you'll need, including things that are often pretty complex in conventional products: multiple domains, headers for APIs, redirects, permissions, containers that expose multiple ports and interfaces, and so on.
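As a hedged sketch of what such labels can look like on a plain docker run (Træfik 1.x label syntax; the traefik.port and traefik.frontend.entryPoints labels are assumptions based on that version's documentation, and the host names are placeholders):

# Hypothetical example: serve one container under two domains, on a
# specific internal port, and only via the HTTPS entrypoint.
docker run -d \
  --label "traefik.enable=true" \
  --label "traefik.port=8080" \
  --label "traefik.frontend.rule=Host:api.docker.localhost,api2.docker.localhost" \
  --label "traefik.frontend.entryPoints=https" \
  emilevauge/whoami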
Whether your configuration succeeds can be validated in the web frontend that is included in Træfik.

However, the best thing, which everybody has been waiting for, is the seamless Let's Encrypt integration. It can be achieved with this snippet:

[acme]
email = "test@traefik.io"
storage = "acme.json"
entryPoint = "https"
[acme.httpChallenge]
entryPoint = "http"
# [[acme.domains]]
# main = "local1.com"
# sans = ["test1.local1.com", "test2.local1.com"]
# [[acme.domains]]
# main = "local2.com"
# [[acme.domains]]
# main = "*.local3.com"
# sans = ["local3.com", "test1.test1.local3.com"]

Træfik will create the certificates automatically. Of course, you get a lot of conveniences here too, such as wildcard certificates with DNS verification through the different DNS API providers.
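A minimal sketch of such a DNS-based setup, assuming Træfik 1.7's [acme.dnsChallenge] section (the Cloudflare provider name and the CF_* environment variables follow the Træfik/lego documentation, but treat them as assumptions):

# Hypothetical: switch ACME validation to the DNS challenge so that
# wildcard certificates can be issued without exposing port 80.
cat >> traefik.toml <<'EOF'
[acme.dnsChallenge]
  provider = "cloudflare"
  delayBeforeCheck = 0
EOF

# Credentials for the DNS provider are picked up from the environment:
export CF_API_EMAIL="you@example.com"
export CF_API_KEY="xxx"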
To conclude the statements above: there is a thing that meets the demands for the simple, API-driven reverse proxy we need in a dockerized world. Just give it a try and see how easy microservices, especially TLS-encrypted ones, can be.

Marius Hein
Head of Development

Marius Hein has been with NETWAYS since 2003. He completed his apprenticeship as an IT specialist here, then worked as an Application Developer and is now Head of Software Development. He is also a member of the Icinga team, where he is responsible for Icinga Web.

Obstacles when setting up Mesos/Marathon

Sebastian already mentioned Mesos some time ago; now it's time to take a more practical look at this framework.
We're currently running our NWS Platform on Mesos/Marathon and are quite happy with it. Sebastian's talk at last year's OSDC can give you a deeper insight into our setup. We started migrating our internal CoreOS/etcd/fleetctl setup to Mesos with Docker and were also able to provide some of our customers with a new setup.
Before I describe some of the snares I ran into during the migration, let's have a quick overview of how Mesos works. We will look at Zookeeper, Mesos, Marathon and Docker.
Zookeeper acts as a centralized key-value store for the Mesos cluster and as such has to be installed on both the Mesos masters and slaves.
Mesos is a distributed systems kernel and runs on the Mesos masters and slaves. The masters distribute jobs and workload to the slaves and therefore need to know about the slaves' available resources, e.g. RAM and CPU.
Marathon is used for orchestrating Docker containers and can access the information provided by Mesos.
Docker is one way to run containerized applications and is used in our setup.
As we can see, several programs run simultaneously, which creates the need for seamless integration. In practice, you describe your workload as a Marathon app definition, as sketched below.
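A hedged sketch of how a container is handed to Marathon (the /v2/apps REST endpoint and the JSON layout follow the Marathon documentation; the host name and app id are placeholders):

# Hypothetical Marathon app definition: one nginx container with
# 0.5 CPUs and 256 MB RAM, scheduled onto the Mesos slaves.
cat > nginx-app.json <<'EOF'
{
  "id": "/nginx-demo",
  "cpus": 0.5,
  "mem": 256,
  "instances": 1,
  "container": {
    "type": "DOCKER",
    "docker": { "image": "nginx:latest", "network": "BRIDGE" }
  }
}
EOF

curl -X POST -H "Content-Type: application/json" \
  -d @nginx-app.json http://marathon.example.com:8080/v2/apps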
What are obstacles you might run into when setting up your own cluster?
1. Connectivity:
When you set up e.g. different VMs to run your cluster, please make sure they are connected to each other. What might look simple can become frustrating when the Zookeeper nodes can't find each other due to "wrong" /etc/hosts settings, such as
127.0.1.1  localhost
This should be altered to
127.0.1.1 $hostname, e.g. mesos-slave1
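A quick sanity check after the change (a small sketch; mesos-slave1 is the example host name from above):

# The host name should now resolve via /etc/hosts:
getent hosts "$(hostname)"
# expected output: 127.0.1.1       mesos-slave1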
2. Configuration
Whenever you make changes to your configuration, they have to be propagated through your complete cluster. Sometimes this doesn't even need a service restart; sometimes you may need to reboot. In desperate times you might want to purge packages and reinstall them. In the end it will work, and you will happily run into the next obstacle.

3. Bugs
While Marathon provides you with an easy-to-use web UI to interact with your containers, it has one great flaw in the current version: you might or might not be able to make live changes to your configured containers. Because the behaviour is so random, you could be tempted to search for issues in your own setup. Worry not, the "solution" may simply be to use an older version of Marathon; version 1.4.8 may help.

Have fun setting up your own cluster and avoiding annoying obstacles!
Edit 20180131 TA: fixed minor typo

Tim Albert
System Engineer

Tim comes from a small town between Nuremberg and Ansbach, on the picturesque B14. He studied teaching in Erlangen and information management in Koblenz; his work as a student employee at IDS Scheer strongly influenced his switch from teaching to IT. Alongside his studies, Tim also worked in user support for a manufacturer's customer service company. Blerim and Sebastian brought him into our Managed Services team at the beginning of 2016, where he now focuses in particular on...

Modern open source community platforms with Discourse

Investing into open source communities is key here at NETWAYS. We do a lot of things in the open, encourage users with open source trainings, and are part of many communities with help and code, be it Icinga, Puppet, Elastic, Graylog, etc.
Open source with additional business services, as we love and do it, only works if the community is strong and pushes your project to the next level. Then it is totally OK to say "I don't have the time to investigate your problem now, how about some remote support from professionals?". Still, this requires a civil discussion platform where such conversations can evolve.
One key factor of an open source community is to encourage users to learn from you. Show them your appreciation and they will like it and start helping others as you do. Be a role model and help others on a technical level, that’s my definition of a community manager. Add ideas and propose changes and new things. Invest time and make things easier for your community.
I’ve been building a new platform for monitoring-portal.org based on Discourse in the last couple of days. The old platform based on Woltlab was old-fashioned, hard to maintain, and it wasn’t easy to help everyone. It also was closed source with an extra license, so feature requests were hard for an open source guy like me.
Discourse, on the other hand, is 100% open source, has ~24k GitHub stars and a helpful community. It has been created by the inventors of Stack Overflow to build a conversation platform for the next decade. It is fast, modern, beautiful, and both easy to install and use.
 

Setup as Container

Discourse only supports running inside Docker. The simplest approach is to build everything into one container, but this can be split up too. Since I am just a beginner, I decided to go for the simple all-in-one solution. Last week I was already using 1.9.0.beta17, which ran really stably. Today they released 1.9.0, so I'll already share some of the fancy things below 🙂
Start on a fresh VM where no applications are listening on ports 80/443. You'll also need a mail server around which accepts mail via SMTP. Docker must be installed in the latest version from the Docker repos; don't use what the distribution provides (Ubuntu 14.04 LTS here).
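A hedged shortcut for that, using Docker's official convenience script (fine for a fresh VM; for production you may prefer adding the repository manually):

# Install the latest Docker CE straight from Docker's own repositories:
curl -fsSL https://get.docker.com | sh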

mkdir /var/discourse
git clone https://github.com/discourse/discourse_docker.git /var/discourse
cd /var/discourse
./discourse-setup

The setup wizard asks several questions to configure the basic setup. I chose monitoring-portal.org as the hostname and provided my SMTP server host and credentials. I also set my personal mail address as the contact. Once everything succeeds, the configuration is stored in /var/discourse/containers/app.yml.
 

Nginx Proxy

My requirement was to not only serve Discourse at /, but also to have redirects for other web applications (the old Woltlab-based forum, for example). Furthermore, I want to configure the SSL certificates in a central place. Therefore I followed the instructions for connecting Discourse to a unix socket via Nginx.

apt-get install nginx
rm /etc/nginx/sites-enabled/default
vim /etc/nginx/sites-available/proxy.conf
server {
    listen 443 ssl;
    listen [::]:443 ssl;
    server_name fqdn.com;
    ssl on;
    ssl_certificate      /etc/nginx/ssl/fqdn.com-bundle.crt;
    ssl_certificate_key  /etc/nginx/ssl/fqdn.com.key;
    ssl_protocols TLSv1.2 TLSv1.1 TLSv1;
    ssl_ciphers ECDHE-RSA-AES128-GCM-SHA256:ECDHE-ECDSA-AES128-GCM-SHA256:ECDHE-RSA-AES256-GCM-SHA384:ECDHE-ECDSA-AES256-GCM-SHA384:DHE-RSA-AES128-GCM-SHA256:DHE-DSS-AES128-GCM-SHA256:kEDH+AESGCM:ECDHE-RSA-AES128-SHA256:ECDHE-ECDSA-AES128-SHA256:ECDHE-RSA-AES128-SHA:ECDHE-ECDSA-AES128-SHA:ECDHE-RSA-AES256-SHA384:ECDHE-ECDSA-AES256-SHA384:ECDHE-RSA-AES256-SHA:ECDHE-ECDSA-AES256-SHA:DHE-RSA-AES128-SHA256:DHE-RSA-AES128-SHA:DHE-DSS-AES128-SHA256:DHE-RSA-AES256-SHA256:DHE-DSS-AES256-SHA:DHE-RSA-AES256-SHA:ECDHE-RSA-DES-CBC3-SHA:ECDHE-ECDSA-DES-CBC3-SHA:AES128-GCM-SHA256:AES256-GCM-SHA384:AES128-SHA256:AES256-SHA256:AES128-SHA:AES256-SHA:AES:CAMELLIA:DES-CBC3-SHA:!aNULL:!eNULL:!EXPORT:!DES:!RC4:!MD5:!PSK:!aECDH:!EDH-DSS-DES-CBC3-SHA:!EDH-RSA-DES-CBC3-SHA:!KRB5-DES-CBC3-SHA;
    ssl_prefer_server_ciphers on;
    # openssl dhparam -out /etc/nginx/ssl/dhparam.pem 4096
    ssl_dhparam /etc/nginx/ssl/dhparam.pem;
    # OCSP Stapling ---
    # fetch OCSP records from URL in ssl_certificate and cache them
    ssl_stapling on;
    ssl_stapling_verify on;
    location / {
        error_page 502 =502 /errorpages/discourse_offline.html;
        proxy_intercept_errors on;
        # Requires containers/app.yml to use websockets
        proxy_pass http://unix:/var/discourse/shared/standalone/nginx.http.sock:;
        proxy_set_header Host $http_host;
        proxy_http_version 1.1;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Proto https;
    }
}
ln -s /etc/nginx/sites-available/proxy.conf /etc/nginx/sites-enabled/proxy.conf
service nginx restart

Another bonus of such a proxy is having a maintenance page instead of an ugly gateway error.
The full configuration can be found here.
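The error_page directive above references /errorpages/discourse_offline.html. A minimal sketch of how that page could be provided (the directory, file content and extra location block are my assumptions, not part of the original setup):

# Hypothetical: a static offline page nginx can serve while the
# Discourse container is down or being rebuilt.
mkdir -p /var/www/errorpages
cat > /var/www/errorpages/discourse_offline.html <<'EOF'
<h1>Maintenance in progress - we will be back soon.</h1>
EOF
# Then, inside the server {} block of proxy.conf:
#   location /errorpages/ { alias /var/www/errorpages/; internal; }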
 

Plugins

Installation is a breeze – just add the installation calls into the app.yml file and rebuild the container.

# egrep -v "^$|#" /var/discourse/containers/app.yml
templates:
  - "templates/postgres.template.yml"
  - "templates/redis.template.yml"
  - "templates/web.template.yml"
  - "templates/web.ratelimited.template.yml"
expose:
params:
  db_default_text_search_config: "pg_catalog.english"
env:
  LANG: en_US.UTF-8
  DISCOURSE_HOSTNAME: fqdn.com
  DISCOURSE_DEVELOPER_EMAILS: 'contact@fqdn.com'
  DISCOURSE_SMTP_ADDRESS: smtp.fqdn.com
  DISCOURSE_SMTP_PORT: 587
  DISCOURSE_SMTP_USER_NAME: xxx
  DISCOURSE_SMTP_PASSWORD: xxx
volumes:
  - volume:
      host: /var/discourse/shared/standalone
      guest: /shared
  - volume:
      host: /var/discourse/shared/standalone/log/var-log
      guest: /var/log
hooks:
  after_code:
    - exec:
        cd: $home/plugins
        cmd:
          - git clone https://github.com/discourse/docker_manager.git
          - git clone https://github.com/discourse/discourse-akismet.git
          - git clone https://github.com/discourse/discourse-solved.git
run:
  - exec: echo "Beginning of custom commands"
  - exec: echo "End of custom commands"
./launcher rebuild app

Akismet checks posts for spam, as you know it from WordPress. We've learned that spammers easily crack reCAPTCHA; the only reliable way is filtering the actual posts.
The second useful plugin is for accepting an answer in a topic, marking it as solved. This is really useful if your platform is primarily used for Q&A topics.
 

Getting Started

Once everything is up and running, navigate to your domain in your browser. The simple setup wizard greets you with some basic questions. Proceed as you like, and then you are ready to build the platform for your own needs.
The admin interface has lots of options. Don’t fear it – many of the default settings are from best practices, and you can always restore them if you made a mistake. There’s also a filter to only list overridden options 🙂

Categories and Tags

Some organisation and structure is needed. The old-fashioned way of choosing a sub-forum and adding a topic there is gone. Still, Discourse lets you require a category from users. Think of monitoring: a question about the Icinga Director should be highlighted in a specific category to allow others to catch up.
By the way, you can subscribe to notifications for a specific category. This helps you keep track of only the Icinga-related categories, for example.
In addition to that, tags help to refine topics and make them easier to search for.

Communication matters

There are so many goodies. First off, you can start a new topic right from the start page. An overlay editor which preserves your draft across sessions (!) is there for you. Start typing Markdown, and see it rendered live in the preview on the right side.
You can upload images or paste a URL. Discourse will trigger a job to download it later and use a local cache; this avoids broken images in the future. If you paste a web link, Discourse tries to render a preview in a "onebox". This also renders GitHub URLs with a code preview.
Add emotions to your discussion, appreciate posts by others and like them, enjoy the conversation and share it online. You can even save your draft and edit it across different sessions, e.g. after going home.

 

Tutorials, Trust Level and Rewards

Once you register a new account (OAuth logins via Twitter, GitHub, etc. can be added!), a learning bot greets you. This interactive tutorial helps you learn the basics with likes, quotes, URLs and uploads, and rewards you with a nice certificate at the end.
New users don't start with full permissions; they need to earn trust first. As they engage with the community, their trust level is raised. The idea behind this is not to have moderators and admins regulating the conversation, but to let experienced members do it. A sort of self-healing if something goes wrong.
Users who really engage and help are able to earn so-called badges. These nifty rewards are highlighted on their profile page, e.g. for likes, number of replies, shared topics, even accepted solutions for questions. A pure motivational plaything built into this nice piece of open source software.

 

Wiki and Solved Topics

You can change topics into wiki entries. Everyone can edit them; this way you combine the ease of writing things in Markdown with a full-blown documentation wiki.
Accepting a reply as the solution marks a topic as "solved". This is incredibly helpful for others who have the same problem.

 

Development

As an administrator, you’ll get automated page profiling for free. This includes explained SQL queries, measured page load time, and even flame graphs.
If you ever need to reschedule a job, e.g. for daily badge creation, admins can access the Sidekiq web UI which really is just awesome.
Plugin development also seems easy if you know Ruby and Ember.js. There are many official plugins around which are tested before each release.

Discourse also has a rich REST API, and even a monitoring endpoint.
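A hedged example of both (the /srv/status health endpoint and the .json variants of the regular routes exist in current Discourse versions; the host name is a placeholder):

# Basic health check - returns "ok" while the application is alive:
curl -s https://fqdn.com/srv/status

# Most regular pages are also available as JSON, e.g. the latest topics:
curl -s https://fqdn.com/latest.json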
 

Maintenance

You can create backups on-demand in addition to regular intervals. You can even restore an old backup directly from the UI.

 

Conclusion

Discourse is used by many communities all over the world – Graylog, Elastic, Gitlab, Docker, Grafana, … have chosen to use the power of a great discussion platform. So does monitoring-portal.org as a #monitoringlove community. A huge thank you to the Discourse team, your software is pure magic and just awesome 🙂
My journey in building a new community forum from scratch in just 5 days can be read here 🙂
monitoring-portal.org running Discourse is fully hosted at NETWAYS, including SSL certificates, Puppet deployment and Icinga for monitoring. Everything I need to build an awesome community platform. You too?
 

Michael Friedrich
Senior Developer

Michael has been an Icinga developer for many years and took on the NETWAYS adventure at the end of 2012. He moved from Vienna to Nuremberg with a fondness for importing Austrian delicacies; some colleagues despair over the addictive Dragee-Keksi and the Linzer Torte. Or simply over the Austrian dialect, which he likes to practise with Thomas in the office ("Jo eh."). When Michael is not helping out in the community, he is working on his next LEGO project or enjoying...

German OpenStack Days 2017 – Programme

In six weeks the time has come: the German OpenStack Days (DOST) in Munich enter their third round!
The finishing touches are in full swing, and of course the programme is already set! What awaits you?
Every year the conference attracts around 300 enthusiastic OpenStack sympathisers, who spend two days catching up on the latest news about the enterprise use of OpenStack and take the opportunity to pick up valuable tips from professionals.
The final conference programme covers a broad range of topics. On the one hand there will be talks by representatives of leading international companies; on the other hand there are numerous expert talks by OpenStack experts on case studies and best practices. On the first conference day there will also be workshops on "Configuration management with Puppet", "Ceph storage cluster", "Administration of OpenStack" and "Docker – the other kind of virtualisation". Information about the talks and speakers can be found on the conference website.
In addition to the talks, everyone has the opportunity to visit the sponsor exhibition in the foyer of the venue hotel, where visitors can get in touch directly with speakers and sponsors. Interesting discussions, up-to-date know-how and great networking opportunities await you. The 3rd German OpenStack Days are supported by Noris Network, Fujitsu, Rackspace, Mirantis, SUSE, Nokia, Canonical, Cumulus, Netzlink, Juniper, Telekom, VMware, Mellanox, Cisco and NetApp.
At this point we would like to thank, in addition to the sponsors, our great media partners, who also play a large part in the success of the OpenStack Days. This year our loudest hymns of praise go out to the German Linux-Magazin and IT-Administrator. We have enjoyed working with you throughout and would be delighted to have you on board again next year.

 

NETWAYS on tour: PuppetConf in San Diego

Heading to the US and PuppetConf for the 5th year, the NETWAYS folks moved to sunny San Diego this time. We had the annual Icinga Camp on Tuesday in the same venue as PuppetConf, the beautiful Town & Country Resort.
Bernd organised the entire trip: from the flights to our lovely Airbnb, even going shopping (aka raiding) at the local Walmart Neighborhood Market and cooking a meal for the hungry crowd. Last but not least, anyone flying over to the US was offered the possibility to extend the trip with his or her vacation plans. Bernd, Julian, Lennart and Florian arrived earlier, and the other folks (Tom, Eric, Dirk, Blerim, Michael) joined them on Monday afternoon.

Guess what happened after getting into the rental cars? The first visit to the In-N-Out Burger right around the corner at LAX. This round was on Bernd too, thanks a lot man! We agreed on visiting the Airbnb first, then looking for some food: spaghetti and a tasty salad, accompanied by the obligatory G&T. Some of us were just chilling after an 11-hour flight, others were still preparing their Icinga Camp talks.
The next day we arrived at the PuppetConf venue; their brand was nearly everywhere, although the room for Icinga Camp was a tad hard to find. Cosy, warm and sunny weather and a full day of #monitoringlove. I'll continue with the details over at the Icinga blog soon.
Wednesday was sort of a free day for those attending PuppetConf. I went to SeaWorld with Lennart and Dirk; the others headed for the beach. Unfortunately Bernd had to leave for Nuremberg, so I luckily got to attend PuppetConf in his place. I've been learning and improving my Puppet skills quite a lot in the past year and also helped with our newly designed Puppet open source training sessions. Glad I could take this opportunity.
Therefore I'd like to share my experience with this year's event, also compared to previous ones. To start with, PuppetConf again offered a delicious breakfast on the first day. Bonus: the weather was hot and sunny, what a beautiful start to the day. The session rooms were loosely connected inside the building but still sometimes confusing. Everyone was friendly, and in comparison to last year you were not "marketing-scanned" by Puppet folks everywhere.
Nigel Kersten kicked off PuppetConf and also announced next year's date in San Francisco: October 10-12, 2017. Then the former CEO and creator of Puppet, Luke Kanies, started with his keynote, ranging from container numbers (starting/stopping 1.58 * 10^10 containers over 3 years is a hell of a number) to the open source commitment voiced by Puppet CEO Sanjay Mirchandani. In case you haven't been following closely: Puppet Enterprise now gets certain exclusive features that are not available in the community version. When it comes to server metrics being PE-only, I can imagine that community members are not amused (I wasn't). Time for changes.
I decided to join the GitHub talk since it is always interesting to get insights into their operations management. Nearly everything is managed with Hubot; GitHub really is living the #chatops dream. Kevin Paulisse also talked about Puppet as culture and dealing with new contributors, and then announced a new open source tool: octocatalog-diff. It allows you to compare Puppet catalogs and avoid deploying unwanted changes to production.
Everyone was really crazy about using Puppet to create and update Docker images. And so David Lutterkort untangled the challenges of containers and their management with Kubernetes in a fascinating talk. Note: that specific nginx demo was used everywhere 😉
"Making Puppet clean up its own mess" sounded provocative, so we went for it. You might be asking: which mess? Take, for example, changed configuration file locations generating collisions. "Write code to clean up code" or "use Ansible to clean up after Puppet"? It sounded funny, but it can still be a common approach instead of waiting for the Puppet agent's 30-minute update interval.
On Thursday evening the social event took place from 6 to 8 pm. Hey, a NETWAYS party hasn't even started by that time 😀 And Jenny is waiting later at night, OSMC is near! From what the others told me, the location and food were great. Blerim and I decided to raid Walmart, again, and have a barbecue together with the other folks not attending the conference. Florian took care of grilling tasty meat, and of course of the drinks later on, when the PuppetConf party people joined us again.
You would guess that no one attends the keynotes on the second day. We made it, and listened to interesting insights into Microsoft's plans for Azure with containers, the Nano Server 2016 setup plans and of course PowerShell deployment strategies. Lots of things happening here, definitely worth watching.
During the keynotes the internet broke. OK, actually Twitter was down, so I couldn't tweet about #puppetconf. The DNS DDoS even affected Puppet itself: the livestream and the Forge were unreachable. Back in Germany everyone was sleeping, but I guess some participants had to deal with their notification email stream rather than listening to the sessions 😉
Martin Alfke gave a training session… ehm… talk about moving exec into types and providers, and everyone in the audience could follow and left the session both entertained and well trained. Since Blerim and I were pretty much into containers, management and also monitoring, we went to see Gareth Rushgrove doing lots of demos showcasing Puppet and Docker. Did I mention the nginx demo already? 😉
We were a bit undecided where to head for the last talk, but then we saw Ben's session "How you actually get hacked", differing from the usual Puppet topics. Oh boy, such sarcasm combined with actual security matters. Ben, if you ever really lose your job, join NETWAYS. It will be fun 🙂
There were a couple of sessions about the transition from Puppet 3 to 4, though it did not really feel as if people were aware that Puppet 3 will reach EOL by the end of 2016. Most recently, an interesting discussion started on Twitter.
Compared to last year, PuppetConf nailed it this year, including the session topics, venue and friendliness. I'm looking forward to San Francisco next year, probably the best location to attract even more IT people. In case you've missed PuppetConf this year: their event archive including video recordings is already online.
We left San Diego on Friday evening, spending two more days in lovely Los Angeles. Lennart, Dirk and I went to LEGOLAND California; colleagues enjoyed Venice Beach. And then we got our rental car for our road trip to the Grand Canyon and more. But that's a different story… join the NETWAYS tour!
Enjoy some pictures we’ve taken during our NETWAYS “school trip” 🙂

 
 
 
 

Michael Friedrich
Senior Developer

Michael has been an Icinga developer for many years and took on the NETWAYS adventure at the end of 2012. He moved from Vienna to Nuremberg with a fondness for importing Austrian delicacies; some colleagues despair over the addictive Dragee-Keksi and the Linzer Torte. Or simply over the Austrian dialect, which he likes to practise with Thomas in the office ("Jo eh."). When Michael is not helping out in the community, he is working on his next LEGO project or enjoying...

Is it webinar time again?

Even in the heat of summer there is, of course, continued demand for various IT services such as outsourcing, hosting, monitoring, configuration management and, above all, environmental monitoring.
In the coming months we want to address exactly these topics in our webinars and, where possible, illustrate them with live demos. The next dates are already set:

Title – Date – Registration
NETWAYS Cloud: the road to your own VM – 15 July 2016, 10:30 – Register
Foreman: permissions – 28 July 2016, 10:30 – Register
Environmental monitoring in the data centre – 4 August 2016, 10:30 – Register
Setting up SMS alerting – 25 August 2016, 10:30 – Register
Foreman: Docker integration – 5 October 2016, 10:30 – Register

We record all of our webinars so they can be watched again later. They are then stored in our webinar archive.

Christian Stein
Lead Senior Account Manager

Christian originally comes from the recruitment consulting industry, where he was always specialised in the IT sector. At NETWAYS he works as a Senior Sales Engineer and advises our customers on monitoring during the sales phase. Together with Georg, he also "had his way with" our hardware shop in mid-2012.

Monthly Snap May: Docker, Graphite, Opennebula and Beijing

May started with Simon's blog post on monitoring custom applications.
Blerim gave us an insight into Graphite – the history of a data point. Kai, one of our trainees, explained what he has learned at NETWAYS.
Tobias explained debugging with Docker, and Michael told us something about Docker on OSX.
Kay explained how to get started with the OpenNebula API.
Finally, Christoph told us about his consulting journey to Beijing.

Vanessa Erk
Head of Finance & Administration

Vanessa is our Head of Finance and, together with her team, is responsible for money, controlling and HR administration. Outside the office she is into sports, mainly yoga. After passing the official yoga teacher training (RYT 200) with flying colours, she immediately added a further course to deepen her knowledge. Her diligence will in future benefit quite a few ageing NETWAYS colleagues.

And again: deleting images in the Docker registry

Deleting images from a Docker registry is a topic that has already consumed a lot of time. In an earlier blog post we presented a variant that worked with a separate script. Since that no longer works with newer Docker versions, we looked for an alternative and re-tested the mechanism shipped with Docker itself which, to our surprise, has finally been implemented properly.
The procedure is to delete an image via the API, which dissolves the dependencies of the individual layers. To get rid of those layers, a garbage collector has to be run that collects and removes them. The API call looks like this:
curl -k -X DELETE https://localhost:5000/v2/$IMAGENAME$/manifests/sha256:$BLOBSUM$
The blob sum required for this can be taken from a push/pull of the image in question. Tags such as "latest" are not accepted in this call. The garbage collector should run in a separate Docker container after the registry itself has been stopped. We have implemented this process with a script: images can be marked for deletion manually during the day, and the layers are then cleaned up by a cron job at a fixed time.
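A hedged way to look up that blob sum without pushing or pulling, via the registry v2 API (the Docker-Content-Digest response header carries the manifest digest when the v2 manifest media type is requested; image name and tag are placeholders):

# Hypothetical: read the manifest digest for an image tag via a HEAD request.
IMAGE=myimage
TAG=latest
curl -k -s -I \
  -H "Accept: application/vnd.docker.distribution.manifest.v2+json" \
  "https://localhost:5000/v2/${IMAGE}/manifests/${TAG}" \
  | grep -i docker-content-digest
# Docker-Content-Digest: sha256:...

The cleanup script itself: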
echo "###################################################" >> /var/log/docker-registry-cleanup.log
/usr/local/sbin/icingaweb2-downtime.rb -s -H docker-registry.netways.de -S 'docker registry' -t 15 -c Cleanup && echo "$(date) [OKAY] Downtime was set." >> /var/log/docker-registry-cleanup.log
/etc/init.d/docker-registry stop && echo " $(date) [STOP] Stopping Docker-Registry ..." >> /var/log/docker-registry-cleanup.log
touch /var/log/docker-registry-cleanup.log
sleep 30
pgrep -f "/etc/docker/registry/config.yml"
echo $i
if [ "$i" = "0" ]
then
echo "$(date) [FAIL] Docker-Registry is still running. Aborting Cleanup" >> /var/log/docker-registry-cleanup.log
else
echo "$(date) [OKAY] Docker-Registry is not running. Starting Cleanup" >> /var/log/docker-registry-cleanup.log
docker run --rm -v /storage:/var/lib/registry -v /etc/docker/config.yml:/etc/docker/registry/config.yml registry:2 garbage-collect /etc/docker/registry/config.yml
/etc/init.d/docker-registry start && echo "$(date) [START] Starting the Docker-Registry ..."
pgrep -f "/etc/docker/registry/config.yml"
i=$?
if [ $i=0 ]
then
echo "$(date) [OKAY] Docker-Registry is running after cleanup" >> /var/log/docker-registry-cleanup.log
else
echo "$(date) [FAIL] Docker-Registry is not running after cleanup" >> /var/log/docker-registry-cleanup.log
fi
fi

The script also sets a 15-minute downtime for the registry container in Icinga 2 and writes its output to a log.
###################################################
Fri May 27 02:00:03 CEST 2016 [OKAY] Downtime was set.
Fri May 27 02:00:07 CEST 2016 [STOP] Stopping Docker-Registry ...
Fri May 27 02:00:37 CEST 2016 [OKAY] Docker-Registry is not running. Starting Cleanup
Fri May 27 02:01:10 CEST 2016 [OKAY] Docker-Registry is running after cleanup
###################################################
Sat May 28 02:00:03 CEST 2016 [OKAY] Downtime was set.
Sat May 28 02:00:07 CEST 2016 [STOP] Stopping Docker-Registry ...
Sat May 28 02:00:37 CEST 2016 [OKAY] Docker-Registry is not running. Starting Cleanup
Sat May 28 02:01:13 CEST 2016 [OKAY] Docker-Registry is running after cleanup
###################################################
Sun May 29 02:00:03 CEST 2016 [OKAY] Downtime was set.
Sun May 29 02:00:06 CEST 2016 [STOP] Stopping Docker-Registry ...
Sun May 29 02:00:36 CEST 2016 [OKAY] Docker-Registry is not running. Starting Cleanup
Sun May 29 02:01:13 CEST 2016 [OKAY] Docker-Registry is running after cleanup
###################################################
Mon May 30 02:00:02 CEST 2016 [OKAY] Downtime was set.
Mon May 30 02:00:03 CEST 2016 [STOP] Stopping Docker-Registry ...
Mon May 30 02:00:33 CEST 2016 [OKAY] Docker-Registry is not running. Starting Cleanup
Mon May 30 02:00:41 CEST 2016 [OKAY] Docker-Registry is running after cleanup

We have been testing this procedure for a few weeks now and are very happy with the result.
Do you need support for your Docker project? Then we recommend our Docker hosting offering.

Marius Gebert
Systems Engineer

Marius has been with NETWAYS since 2013. He completed his apprenticeship as an IT specialist for system integration in 2016 and now works in the Web Services team, where he and his colleagues take care of the NWS platform and everything related to it. In 2017 Marius passed the trainer examination and now looks after the training of our young colleagues in his department. Marius likes to spend his free time in the fresh air and is up for any kind of fun...

Docker on OSX

Running Docker on OSX can be made possible using different methods, two of which are described below.

Docker containers require kernel features which are only available in modern Linux kernels. In order to run Docker on OSX for example, one needs a virtual machine with a smallish Linux running in it.

Docker for Mac Beta

Docker for Mac uses xhyve, a lightweight OS X virtualization solution built on top of Hypervisor.framework. This requires OS X 10.10 Yosemite or higher. The VM is provisioned with Alpine Linux running the Docker engine.
The Docker API is exposed at /var/run/docker.sock, which the docker and docker-compose CLI commands communicate with directly. This is one of the benefits compared to Docker Machine: you do not need to manage your Docker VM or set specific environment variables before using it. Docker for Mac is furthermore installed as a native OSX application and only provides symlinks to /usr/local/bin/{docker,docker-compose}.
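A quick, hedged way to verify that the socket works (curl's --unix-socket option requires curl 7.40 or newer; /version is part of the Docker Engine API):

# Talk to the Docker engine directly through the unix socket:
curl --unix-socket /var/run/docker.sock http://localhost/version

# Or simply use the CLI, which picks up the socket by default:
docker version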
After the app is installed, I only had to manually add the bash-completion provided by Homebrew.

cd /usr/local/etc/bash_completion.d
ln -s /Applications/Docker.app/Contents/Resources/etc/docker.bash-completion
ln -s /Applications/Docker.app/Contents/Resources/etc/docker-machine.bash-completion
ln -s /Applications/Docker.app/Contents/Resources/etc/docker-compose.bash-completion

I was granted a beta access key for Docker for Mac today 🙂 Even though this is still beta, it already feels much more integrated into my test and development workflow than using Docker Machine. Awesome job! 🙂

 

Docker Machine

Docker Machine uses VirtualBox as its VM provider. In order to avoid manual interaction in each terminal I open, I've added an alias to my bashrc file.

vim $HOME/.bashrc
alias enable_docker=". '/Applications/Docker/Docker Quickstart Terminal.app/Contents/Resources/Scripts/start.sh'"

This script doesn't do much except start the VM using the VirtualBox CLI tools and then source the exported variables into your current shell environment. That way the docker client is able to communicate with the docker daemon running inside the VM.

Parallels instead of VirtualBox

While VirtualBox works fine, there are significant performance improvements when using Parallels on OSX. Furthermore, it is reasonable to use only one application for firing up virtual machines (the Icinga Vagrant boxes also provide support for Parallels as a Vagrant provider).
I was therefore looking for a native Parallels driver for Docker. Following this issue shed some light on the history of Docker Machine drivers and their support as plugins. Parallels doesn't seem to be officially supported by Docker themselves, according to their documentation, though there is an official driver plugin from Parallels which works for the Pro and Business subscription editions only. The main reason seems to be the limited CLI features of the Standard edition.

Requirements for Parallels

The main requirement is at least Docker 1.9.1, which provides Docker Toolbox 0.5.1+.

Installation

I'm using Homebrew; the manual installation steps are described in the documentation. Brew tries to pull in docker-machine as well. If you're using the version from docker.com, you can safely ignore the linking error.

brew install docker-machine-parallels

Create a docker machine

Use the driver "parallels" and add the name "docker-parallels". This will create a new Parallels VM with a 20 GB HDD and 1 GB RAM by default. In case you want to disable sharing the /Users mounts, add --parallels-no-share.

docker-machine create --driver=parallels docker-parallels

Add the environment variables to your shell and run docker, pulling the latest Fedora container.

eval $(docker-machine env docker-parallels)
docker run -ti fedora:latest bash

Automate it

I've partially modified the Docker Toolbox script in order to support Parallels.

wget https://raw.githubusercontent.com/dnsmichi/docker-tools/master/toolbox/scripts/osx/start.sh -O /usr/local/bin/enable_docker
chmod +x /usr/local/bin/enable_docker
enable_docker

 

Conclusion

While the Docker Machine integration leaves room for improvement, the Parallels driver works like a charm. Though I have to admit: while I was looking into the Parallels integration, Docker announced Docker for Mac, and I was eagerly waiting for it.
Both methods work, but the Docker for Mac application, natively integrated into OSX, is pretty slick. I like it a lot!
If you are looking for more about Docker and its many possibilities, follow our blog closely and visit the Docker training sessions 🙂

Michael Friedrich
Senior Developer

Michael has been an Icinga developer for many years and took on the NETWAYS adventure at the end of 2012. He moved from Vienna to Nuremberg with a fondness for importing Austrian delicacies; some colleagues despair over the addictive Dragee-Keksi and the Linzer Torte. Or simply over the Austrian dialect, which he likes to practise with Thomas in the office ("Jo eh."). When Michael is not helping out in the community, he is working on his next LEGO project or enjoying...

Debugging with Docker

We all know Docker as a lightweight solution for shipping applications in containers. If you are a bit creative, you can "abuse" Docker for much more. For example, Docker is very well suited for debugging applications.
Now you may ask: "What's wrong with him? For debugging I need a console in 90% of all cases." But why not!? It may be against the idea of Docker, but of course you can also run a small debugging container with SSH.
 
 
Here is a short example in the form of a Dockerfile:
FROM debian:8.4
MAINTAINER $your_name $your_email


# install needed packages
RUN apt-get update && apt-get install -y openssh-server rsync rsnapshot vim git sudo ntpdate ethtool screen dnsutils shorewall curl unzip telnet net-tools ntp ntpdate


# prepare root account and login
RUN mkdir /var/run/sshd

# SSH login fix. Otherwise user is kicked off after login
RUN sed 's@session\s*required\s*pam_loginuid.so@session optional pam_loginuid.so@g' -i /etc/pam.d/sshd
ENV NOTVISIBLE "in users profile"
RUN echo "export VISIBLE=now" >> /etc/profile


# prepare user
RUN groupadd -g 10000 $your_name
RUN useradd -g 10000 -u 10000 -s /bin/bash -m $your_name
RUN mkdir /home/$your_name/.ssh && chmod 750 /home/$your_name/.ssh && chown $your_name. /home/$your_name/.ssh
RUN echo "<$your_ssh_key>" > /home/$your_name/.ssh/authorized_keys && chmod 600 /home/$your_name/.ssh/authorized_keys && chown $your_name. /home/$your_name/.ssh/authorized_keys
RUN echo "$your_name ALL=NOPASSWD: ALL" > /etc/sudoers.d/$your_name && chmod 640 /etc/sudoers.d/$your_name


# map ssh port and run ssh
EXPOSE 22
CMD ["/usr/sbin/sshd", "-D"]

If you now replace every $your_name with your user name (unfortunately, variables do not work in Docker's RUN environment), you get a current Debian 8.4 with SSH access. This Dockerfile with SSH can then easily be extended with, for example, Icinga 2 packages. Taking it a bit further, you could also add different operating system versions or a choice between the Icinga 2 stable and snapshot packages.
All in all you get a very lightweight container that makes debugging possible, is provisioned very quickly, and with the appropriate storage config can even carry application-specific configurations and files along. A usage sketch follows below.
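A hedged usage sketch (image name, container name and host port are placeholders):

# Build the image from the Dockerfile above and run the container,
# mapping the container's SSH port to 2222 on the host:
docker build -t debug-ssh .
docker run -d --name debugbox -p 2222:22 debug-ssh

# Log in with the SSH key baked into the image:
ssh -p 2222 your_name@localhost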

Tobias Redel
Head of Professional Services

After his apprenticeship as an IT specialist at Deutsche Telekom, Tobias worked at T-Systems. He has been with NETWAYS since August 2008, where he supports our customers in the consulting team in matters of open source, monitoring and systems management. Secretly, however, he leads a double life as a travel hacker, is working on his third million euros (nothing came of the first two) and is trying to seize world domination.