
NETWAYS Blog

Let's Encrypt HTTP Proxy: Another approach

Yes, we all know that providing microservices with Docker is a very wicked thing: easy to use, and of course there is a Docker image available for almost every application you need. The next step to master is exposing these services to the public, and it seems that this is where most people struggle. Browsing the web (mostly GitHub) for HTTP proxies, you'll find an incredible number of images that people have built to fit their environments, especially when it comes to implementing a Let's Encrypt / SNI encryption service. It raises the same questions every time: Is this really the right way to do it? Wrap custom APIs around conventional products like web servers and proxies, inject megabytes of JSON (or YAML or TOML) through environment variables, and build scripts to convert all this into the product-specific configuration language? My bad conscience tapped on my shoulder every time I did.
Some weeks ago I stumbled upon Træfik, which is obviously not a new Tool album but an HTTP proxy server that has everything a highly dynamic Docker platform needs to expose its services, including a silent Let's Encrypt integration. Such a thing doesn't exist, you say?
A brief summary:

Træfik is a single-binary daemon, written in Go, lightweight and usable in virtually any modern environment. Configuration is done by choosing the backend you have. This could be an orchestrator like Swarm or Kubernetes, but you can also use a more "open" approach like etcd, REST APIs or a file backend (backends can be mixed, of course). For example, if you are using plain Docker or Docker Compose, Træfik uses Docker object labels to configure services. A simple configuration looks like this:

[docker]
endpoint = "unix:///var/run/docker.sock"
# endpoint = "tcp://127.0.0.1:2375"
domain = "docker.localhost"
watch = true

Træfik constantly watches for changes in your running Docker containers and automatically adds backends to its configuration. The containers themselves only need labels like this (configured via Docker Compose in this example):

whoami:
  image: emilevauge/whoami # A container that exposes an API to show its IP address
  labels:
    - "traefik.frontend.rule=Host:whoami.docker.localhost"
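In practice a few more labels are usually needed. The following Compose sketch (service name, image and port are made up for illustration) uses the common Træfik 1.x labels for explicitly enabling a container and pinning the backend port:

```yaml
api:
  image: my/api:latest   # hypothetical service
  labels:
    - "traefik.enable=true"   # explicitly enable this container (useful when exposedByDefault is off)
    - "traefik.port=8080"     # the container port Træfik proxies to
    - "traefik.frontend.rule=Host:api.docker.localhost"
```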

The neat thing is that you can configure everything you'll need, even things that are often pretty complex in conventional products: multiple domains, headers for APIs, redirects, permissions, containers that expose multiple ports and interfaces, and so on.
Whether your configuration succeeded can be verified in the web frontend that is included in Træfik.

However, the best part, which everybody has been waiting for, is the seamless Let's Encrypt integration, which can be achieved with this snippet:

[acme]
email = "test@traefik.io"
storage = "acme.json"
entryPoint = "https"
[acme.httpChallenge]
entryPoint = "http"
# [[acme.domains]]
# main = "local1.com"
# sans = ["test1.local1.com", "test2.local1.com"]
# [[acme.domains]]
# main = "local2.com"
# [[acme.domains]]
# main = "*.local3.com"
# sans = ["local3.com", "test1.test1.local3.com"]

Træfik will create the certificates automatically. Of course, there are a lot of conveniences here too, like wildcard certificates with DNS verification through different DNS API providers, and the like.
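The DNS-based verification is configured in a similar way. A sketch for Træfik 1.5+ (provider name and domain are placeholders; each provider expects its API credentials via environment variables):

```toml
[acme]
email = "test@traefik.io"
storage = "acme.json"
entryPoint = "https"
[acme.dnsChallenge]
# e.g. CLOUDFLARE_EMAIL / CLOUDFLARE_API_KEY set in the environment
provider = "cloudflare"
[[acme.domains]]
main = "*.local3.com"
```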
To conclude the statements above: there is a tool that meets the demand for a simple, API-driven reverse proxy in a dockerized world. Just give it a try and see how easy microservices – especially TLS-encrypted ones – can be.

Marius Hein
Head of IT Service Management

Marius Hein has been with NETWAYS since 2003. He completed his apprenticeship as an IT specialist here and worked in software development for many years. Today he is in charge of the internal IT and, as head of ITSM, responsible for the technical intersection of the departments of the NETWAYS group. When he is not drilling IPv6 IPsec tunnels, he sits at home behind his drum kit, driving his neighbours crazy.

Obstacles when setting up Mesos/Marathon

Sebastian already mentioned Mesos some time ago; now it's time to have a more practical look at this framework.
We're currently running our NWS platform on Mesos/Marathon and are quite happy with it. Sebastian's talk at last year's OSDC can give you a deeper insight into our setup. We started migrating our internal coreos/etcd/fleetctl setup to Mesos with Docker and could also provide some of our customers with a new setup.
Before I give you a short description of the snares I ran into during the migration, let's have a quick overview of how Mesos works. We will have a look at Zookeeper, Mesos, Marathon and Docker.
Zookeeper acts as a centralized key-value store for the Mesos cluster and as such has to be installed on both the Mesos masters and slaves.
Mesos is a distributed systems kernel and runs on the Mesos masters and slaves. The masters distribute jobs and workload to the slaves and therefore need to know about their available resources, e.g. RAM and CPU.
Marathon is used for the orchestration of Docker containers and accesses the information provided by Mesos.
Docker is one way to run containerized applications and is used in our setup.
As we can see, there are several programs running simultaneously, which creates the need for seamless integration.
What are obstacles you might run into when setting up your own cluster?
1. Connectivity:
When you set up e.g. different VMs to run your cluster, please make sure they are connected to each other. What looks simple can become frustrating when the Zookeeper nodes can't find each other due to "wrong" /etc/hosts settings, such as
127.0.1.1  localhost
This should be altered to
127.0.1.1 $hostname, e.g. mesos-slave1
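Those hostnames are exactly what the Zookeeper ensemble relies on. A minimal three-node configuration might look like the following sketch (hostnames assumed; in addition, each node needs its own id written to /var/lib/zookeeper/myid):

```
# /etc/zookeeper/conf/zoo.cfg - identical on all three nodes
tickTime=2000
initLimit=10
syncLimit=5
dataDir=/var/lib/zookeeper
clientPort=2181
server.1=mesos-master1:2888:3888
server.2=mesos-slave1:2888:3888
server.3=mesos-slave2:2888:3888
```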
2. Configuration
Whenever you make changes to your configuration, they have to be communicated throughout your complete cluster. Sometimes it doesn't even need a service restart. Sometimes you may need to reboot. In desperate times you might want to purge packages and reinstall them. In the end it will work and you will be happy.



3. Bugs
While Marathon provides you with an easy-to-use web UI to interact with your containers, the current version has one great flaw: you might or might not be able to make live changes to your configured containers. As the behaviour is so random, you may be tempted to search for issues in your own setup. Worry not, the "solution" may simply be using an older version of Marathon.
Version 1.4.8 may help.
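On a Debian/Ubuntu setup, one way to stick to that version is an apt pin. This is only a sketch and assumes Marathon was installed as the marathon package from the Mesosphere repository:

```
# /etc/apt/preferences.d/marathon
Package: marathon
Pin: version 1.4.8*
Pin-Priority: 1001
```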



Have fun setting up your own cluster and avoiding annoying obstacles!
Edit 20180131 TA: fixed minor typo

Tim Albert
Senior Systems Engineer

Tim comes from a small village between Nuremberg and Ansbach, on the picturesque B14. He studied teaching in Erlangen and information management in Koblenz. He has been with us since the beginning of 2016, first in the Managed Services team, where he took care of infrastructure topics and internal support, before becoming – together with Marius – a founding member of the ITSM department in 2019. In his free time Tim volunteers with the fire brigade – as a machine operator and respiratory protection equipment carrier –, performs rustic farces in an amateur theatre and is an absolute all-round handyman. From raising walls up to KNX wiring, he is at any time...

Modern open source community platforms with Discourse

Investing in open source communities is key here at NETWAYS. We do a lot of things in the open, encourage users with open source trainings, and are part of many communities, contributing help and code, be it Icinga, Puppet, Elastic, Graylog, etc.
Open source with additional business services, the way we love and do it, only works if the community is strong and pushes your project to the next level. Then it is totally ok to say "I don't have the time to investigate your problem right now, how about some remote support from professionals?". Still, this requires a civil discussion platform where such conversations can evolve.
One key factor of an open source community is encouraging users to learn from you. Show them your appreciation and they will like it and start helping others as you do. Be a role model and help others on a technical level – that's my definition of a community manager. Add ideas and propose changes and new things. Invest time and make things easier for your community.
I've been building a new platform for monitoring-portal.org based on Discourse over the last couple of days. The old platform, based on Woltlab, was old-fashioned, hard to maintain, and it wasn't easy to help everyone. It was also closed source with an extra license, so feature requests were hard for an open source guy like me.
Discourse, on the other hand, is 100% open source, has ~24k GitHub stars and a helpful community. It has been created by the inventors of StackOverflow to build a conversation platform for the next decade. It is fast, modern, beautiful, and both easy to install and use.
 

Setup as Container

Discourse only supports running inside Docker. The simplest approach is to build everything into one container, but you can split this up too. Since I am just a beginner, I decided to go for the simple all-in-one solution. Last week I was already using 1.9.0beta17, which ran really stable. Today they released 1.9.0, so I'll share some of the fancy things below already 🙂
Start on a fresh VM where no applications are listening on ports 80/443. You'll also need a mail server around which accepts mail via SMTP. Docker must be installed in the latest version from the Docker repos – don't use what the distribution provides (Ubuntu 14.04 LTS here).

mkdir /var/discourse
git clone https://github.com/discourse/discourse_docker.git /var/discourse
cd /var/discourse
./discourse-setup

The setup wizard asks several questions to configure the basic setup. I've chosen monitoring-portal.org as the hostname and provided my SMTP server host and credentials. I've also set my personal mail address as contact. Once everything succeeds, the configuration is stored in /var/discourse/containers/app.yml.
 

Nginx Proxy

My requirement was not only to serve Discourse at /, but also to have redirects for other web applications (the old Woltlab-based forum, for example). Furthermore, I wanted to configure the SSL certificates in a central place. Therefore I followed the instructions to connect Discourse to a unix socket via Nginx.

apt-get install nginx
rm /etc/nginx/sites-enabled/default
vim /etc/nginx/sites-available/proxy.conf
server {
    listen 443 ssl;  listen [::]:443 ssl;
    server_name fqdn.com;
    ssl on;
    ssl_certificate      /etc/nginx/ssl/fqdn.com-bundle.crt;
    ssl_certificate_key  /etc/nginx/ssl/fqdn.com.key;
    ssl_protocols TLSv1.2 TLSv1.1 TLSv1;
    ssl_ciphers ECDHE-RSA-AES128-GCM-SHA256:ECDHE-ECDSA-AES128-GCM-SHA256:ECDHE-RSA-AES256-GCM-SHA384:ECDHE-ECDSA-AES256-GCM-SHA384:DHE-RSA-AES128-GCM-SHA256:DHE-DSS-AES128-GCM-SHA256:kEDH+AESGCM:ECDHE-RSA-AES128-SHA256:ECDHE-ECDSA-AES128-SHA256:ECDHE-RSA-AES128-SHA:ECDHE-ECDSA-AES128-SHA:ECDHE-RSA-AES256-SHA384:ECDHE-ECDSA-AES256-SHA384:ECDHE-RSA-AES256-SHA:ECDHE-ECDSA-AES256-SHA:DHE-RSA-AES128-SHA256:DHE-RSA-AES128-SHA:DHE-DSS-AES128-SHA256:DHE-RSA-AES256-SHA256:DHE-DSS-AES256-SHA:DHE-RSA-AES256-SHA:ECDHE-RSA-DES-CBC3-SHA:ECDHE-ECDSA-DES-CBC3-SHA:AES128-GCM-SHA256:AES256-GCM-SHA384:AES128-SHA256:AES256-SHA256:AES128-SHA:AES256-SHA:AES:CAMELLIA:DES-CBC3-SHA:!aNULL:!eNULL:!EXPORT:!DES:!RC4:!MD5:!PSK:!aECDH:!EDH-DSS-DES-CBC3-SHA:!EDH-RSA-DES-CBC3-SHA:!KRB5-DES-CBC3-SHA;
    ssl_prefer_server_ciphers on;
    # openssl dhparam -out /etc/nginx/ssl/dhparam.pem 4096
    ssl_dhparam /etc/nginx/ssl/dhparam.pem;
    # OCSP Stapling ---
    # fetch OCSP records from URL in ssl_certificate and cache them
    ssl_stapling on;
    ssl_stapling_verify on;
    location / {
        error_page 502 =502 /errorpages/discourse_offline.html;
        proxy_intercept_errors on;
        # Requires containers/app.yml to use websockets
        proxy_pass http://unix:/var/discourse/shared/standalone/nginx.http.sock:;
        proxy_set_header Host $http_host;
        proxy_http_version 1.1;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Proto https;
    }
}
ln -s /etc/nginx/sites-available/proxy.conf /etc/nginx/sites-enabled/proxy.conf
service nginx restart

Another bonus of such a proxy is having a maintenance page instead of an ugly gateway error.
The full configuration can be found here.
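The unix socket referenced in the proxy configuration has to be enabled on the Discourse side as well. In the discourse_docker repository this is done by adding the socketed web template to containers/app.yml and no longer publishing any ports – roughly like this (template names as shipped with discourse_docker; verify against the version you use):

```yaml
templates:
  - "templates/postgres.template.yml"
  - "templates/redis.template.yml"
  - "templates/web.template.yml"
  - "templates/web.ratelimited.template.yml"
  - "templates/web.socketed.template.yml"  # serve via unix socket for the Nginx proxy
# the expose: section stays empty - Nginx talks to
# /var/discourse/shared/standalone/nginx.http.sock instead
```

followed by ./launcher rebuild app to apply the change.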
 

Plugins

Installation is a breeze – just add the installation calls to the app.yml file and rebuild the container.

# egrep -v "^$|#" /var/discourse/containers/app.yml
templates:
  - "templates/postgres.template.yml"
  - "templates/redis.template.yml"
  - "templates/web.template.yml"
  - "templates/web.ratelimited.template.yml"
expose:
params:
  db_default_text_search_config: "pg_catalog.english"
env:
  LANG: en_US.UTF-8
  DISCOURSE_HOSTNAME: fqdn.com
  DISCOURSE_DEVELOPER_EMAILS: 'contact@fqdn.com'
  DISCOURSE_SMTP_ADDRESS: smtp.fqdn.com
  DISCOURSE_SMTP_PORT: 587
  DISCOURSE_SMTP_USER_NAME: xxx
  DISCOURSE_SMTP_PASSWORD: xxx
volumes:
  - volume:
      host: /var/discourse/shared/standalone
      guest: /shared
  - volume:
      host: /var/discourse/shared/standalone/log/var-log
      guest: /var/log
hooks:
  after_code:
    - exec:
        cd: $home/plugins
        cmd:
          - git clone https://github.com/discourse/docker_manager.git
          - git clone https://github.com/discourse/discourse-akismet.git
          - git clone https://github.com/discourse/discourse-solved.git
run:
  - exec: echo "Beginning of custom commands"
  - exec: echo "End of custom commands"
./launcher rebuild app

Akismet checks posts for spam, as you know it from WordPress. We've learned that spammers easily crack reCaptcha, and the only reliable way is filtering the actual posts.
The second useful plugin allows accepting an answer in a topic, marking it as solved. This is really useful if your platform is primarily used for Q&A topics.
 

Getting Started

Once everything is up and running, navigate to your domain in your browser. The simple setup wizard greets you with some basic questions. Proceed as you like, and then you are ready to build the platform for your own needs.
The admin interface has lots of options. Don't fear it – many of the default settings follow best practices, and you can always restore them if you make a mistake. There's also a filter to list only the overridden options 🙂

Categories and Tags

Some organisation and structure is needed. The old-fashioned way of choosing a sub-forum and adding a topic there is gone. Still, Discourse lets you require users to choose a category. Think of monitoring – a question on the Icinga Director should be highlighted in a specific category to allow others to catch up.
By the way – you can subscribe to notifications for a specific category. This helps to keep track of only the Icinga-related categories, for example.
In addition to that, tags help to refine the topics and make them easier to search for.

Communication matters

There are so many goodies. First off, you can start a new topic right from the start page. An overlay page which saves the session (!) is there for you to edit. Start typing Markdown and see it rendered live in a preview on the right side.
You can upload images or paste a URL. Discourse will trigger a job to download it later and use a local cache; this avoids broken images in the future. If you paste a web link, Discourse tries to render a preview in a "onebox". This also renders GitHub URLs with a code preview.
Add emotions to your discussion, appreciate posts by others and like them, enjoy the conversation and share it online. You can even save your draft and edit it across different sessions, e.g. after going home.


 

Tutorials, Trust Level and Rewards

Once you register a new account (you can add OAuth apps from Twitter, GitHub, etc.!), a learning bot greets you. This interactive tutorial helps you learn the basics with likes, quotes, URLs and uploads, and rewards you with a nice certificate in the end.
New users don't start with full permissions; they need to earn trust. As they engage with the community, their trust level is raised. The idea behind this is not to have moderators and admins regulating the conversation, but to let experienced members do it. Sort of self-healing if something goes wrong.
Users who really engage and help are able to earn so-called badges. These nifty rewards are highlighted on their profile page, e.g. for likes, number of replies, shared topics, even accepted solutions to questions. A pure motivational plaything built into this nice piece of open source software.


 

Wiki and Solved Topics

You can turn topics into wiki entries. Everyone can edit them; this way you combine the ease of writing things in Markdown with a full-blown documentation wiki.
Accepting a reply as the solution marks a topic as "solved". This is incredibly helpful for others who have the same problem.


 

Development

As an administrator, you’ll get automated page profiling for free. This includes explained SQL queries, measured page load time, and even flame graphs.
If you ever need to reschedule a job, e.g. for daily badge creation, admins can access the Sidekiq web UI which really is just awesome.
Plugin development also seems easy if you know Ruby and EmberJS. There are many official plugins around which are tested before each release.


Discourse also has a rich REST API – there's even a monitoring endpoint.
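A quick way to try it: the built-in status route answers with a plain "ok" when the application is healthy, which makes it a convenient check for Icinga and friends (the hostname below is a placeholder for your own):

```shell
# hypothetical hostname - replace with your Discourse domain
curl -fs https://fqdn.com/srv/status
```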
 

Maintenance

You can create backups on demand in addition to regular intervals, and you can even restore an old backup directly from the UI.

 

Conclusion

Discourse is used by many communities all over the world – Graylog, Elastic, GitLab, Docker, Grafana and more have chosen the power of a great discussion platform. So does monitoring-portal.org as a #monitoringlove community. A huge thank you to the Discourse team, your software is pure magic and just awesome 🙂
My journey of building a new community forum from scratch in just 5 days can be read here 🙂
monitoring-portal.org running Discourse is fully hosted at NETWAYS, including SSL certificates, Puppet deployment and Icinga for monitoring – everything I need to build an awesome community platform. You too?
 

German OpenStack Days (DOST) 2017 – Programme

In six weeks the time has come: the German OpenStack Days (DOST) in Munich go into their third round!
The finishing touches are in full swing, and of course the programme is already fixed! What can you expect?
Every year the conference attracts around 300 enthusiastic OpenStack sympathisers, who spend two days catching up on the latest news about the enterprise use of OpenStack and take the opportunity to get valuable tips from professionals.
The final conference programme covers a broad range of topics. On the one hand there will be talks by representatives of leading international companies, on the other hand there are numerous expert talks on case studies and best practices. On the first conference day there will also be workshops on "Configuration Management with Puppet", "Ceph Storage Cluster", "Administering OpenStack" and "Docker – the Other Kind of Virtualisation". Information about the talks and speakers can be found on the conference website.
In addition to the talks, everyone has the opportunity to visit the sponsor exhibition in the foyer of the venue hotel, where visitors can get in touch directly with the speakers and sponsors. Interesting discussions, up-to-date know-how and great networking opportunities await you. The 3rd German OpenStack Days are supported by Noris Network, Fujitsu, Rackspace, Mirantis, SUSE, Nokia, Canonical, Cumulus, Netzlink, Juniper, Telekom, VMware, Mellanox, Cisco and NetApp.
At this point, besides the sponsors, we would also like to thank our great media partners, who have a large share in the success of the OpenStack Days. This year our highest hymns of thanks go out to the German Linux Magazin and IT-Administrator. We have enjoyed working with you throughout and would be happy to have you on board again next year.


 

NETWAYS on tour: PuppetConf in San Diego

Heading to the US and PuppetConf for the 5th year – this time, the NETWAYS folks moved to sunny San Diego. We had the annual Icinga Camp on Tuesday in the same venue as PuppetConf, the beautiful Town & Country Resort.
Bernd organised the entire trip – from flights to our lovely AirBnB, even going shopping (aka raiding) the local Walmart Neighbourhood market, cooking a meal for the hungry crowd. Last but not least – anyone flying over the US was offered the possibility to extend the trip with his/her vacation plans. Bernd, Julian, Lennart and Florian arrived earlier and so the other folks (Tom, Eric, Dirk, Blerim, Michael) joined them on Monday afternoon.

Guess what happened after getting into the rental cars? The first visit to the In-N-Out Burger right around the corner at LAX. This round was on Bernd too, thanks a lot man! We agreed on visiting the AirBnB first, then looking for some food: spaghetti and tasty salad, accompanied by the obligatory G&T. Some of us were just chilling after an 11-hour flight, others were still preparing their Icinga Camp talks.
The next day we arrived at the PuppetConf venue – their brand was nearly everywhere, although the room for Icinga Camp was a tad hard to find. Cosy warm and sunny weather and a full day of #monitoringlove. I'll continue with details over at the Icinga blog soon.
Wednesday was sort of a free day for those attending PuppetConf. I went to SeaWorld with Lennart and Dirk; the others hit the beach. Unfortunately Bernd had to leave for Nuremberg, so I was surprised to luckily attend PuppetConf in his place. I've been learning and improving my Puppet skills quite a lot over the past year and also helped with our newly designed Puppet open source training sessions. Glad I could seize this opportunity.
Therefore I'd like to share my experience of this year, also compared to previous events. To start with, PuppetConf again offered a delicious breakfast on the first day. Bonus: the weather was hot and sunny, what a beautiful start to the day. The rooms for the sessions were loosely connected inside the building but still sometimes confusing. Everyone was friendly, and in comparison to last year you were not "marketing-scanned" by Puppet folks everywhere.
Nigel Kersten kicked off PuppetConf and announced next year's date in San Francisco, October 10–12, 2017. Then the former CEO and creator of Puppet, Luke Kanies, started with his keynote, ranging from container numbers (starting/stopping 1.58 * 10^10 containers over 3 years is a hell of a number) to the open source commitment heard from Puppet CEO Sanjay Mirchandani. In case you haven't been following closely, Puppet Enterprise got certain exclusive features not available in the community version. When it comes to server metrics being PE-only, I could imagine that community members are not amused (I wasn't). Time for changes.
I decided to join the GitHub talk, since it is always interesting to get insights into their operations management. Nearly everything is managed with HuBot; GitHub really is living the #chatops dream. Kevin Paulisse also talked about Puppet as culture and dealing with new contributors, and then announced a new open source tool: octocatalog-diff. It allows you to compare Puppet catalogs and avoid deploying unwanted changes to production.
Everyone was really crazy about using Puppet to create and update Docker images. And so David Lutterkort turned the challenges of containers and their management with Kubernetes into a fascinating talk. Note: that specific nginx demo was used everywhere 😉
"Making Puppet clean up its own mess" sounded provoking, so we went for it. You might be asking – which mess? Take, for example, changed configuration file locations generating collisions. "Write code to clean up code" or "use Ansible to clean up after Puppet?" Sounded funny, but it still can be a common approach instead of waiting for the Puppet agent's 30-minute update interval.
On Thursday evening the social event took place from 6 to 8 pm. Hey, a NETWAYS party hasn't even started at that time 😀 And Jenny is waiting later at night, OSMC is near! From what the others told me, the location and food were great. Blerim and I decided to raid Walmart – again – and have a barbecue together with the other folks not attending the conference. Florian took care of grilling tasty meat, and of course of the drinks later on when the PuppetConf party people joined us again.
You would guess that no one attends the keynotes on the second day. We made it, and listened to interesting insights into Microsoft's plans for Azure with containers, Nano Server 2016 setup plans and of course PowerShell deployment strategies. Lots of things happening here, definitely worth keeping an eye on.
During the keynotes the internet broke. Ok, actually Twitter was down, so I couldn't tweet about #puppetconf. The DNS DDoS even affected Puppet itself – livestream and forge were unreachable. Back in Germany everyone was sleeping, but I guess some participants had to deal with their notification email stream rather than listening to sessions 😉
Martin Alfke gave a training session…ehm…talk about moving exec into types and providers, and everyone in the audience could follow and left the session both entertained and well trained. Since Blerim and I were pretty much into containers, management and also monitoring, we went to see Gareth Rushgrove doing lots of demos showcasing Puppet and Docker. Did I mention the nginx demo already? 😉
We were a bit undecided where to head for the last talk, but then we saw Ben's session "How you actually get hacked", differing from the usual Puppet suspects in topic. Oh boy, such sarcasm combined with actual matters of security. Ben, if you ever really lose your job, join NETWAYS. It will be fun 🙂
There were a couple of sessions about the transition from Puppet 3 to 4, though it did not really feel like people are aware that Puppet 3 will reach EOL by the end of 2016. Most recently an interesting discussion started on Twitter.
Compared to last year, PuppetConf – including the session topics, venue and friendliness – nailed it this year. I'm looking forward to San Francisco next year, probably the best location to attract even more IT people. In case you've missed PuppetConf this year, their event archive including video recordings is already online.
We left San Diego on Friday evening, spending two more days in lovely Los Angeles. Lennart, Dirk and I went to LEGOLAND California, the other colleagues enjoyed Venice Beach. And then we got our rental car for our road trip to the Grand Canyon and more. But that's a different story … join the NETWAYS tour!
Enjoy some pictures we’ve taken during our NETWAYS „school trip“ 🙂