Ansible – should I use the omit filter?

When we talk about Ansible, we increasingly talk about AWX or Tower. This tool comes in handy when you work with Ansible in an environment shared with colleagues or multiple teams.
In AWX we can reuse the playbooks we developed and share them with our colleagues via a GUI platform.

Often we need a bit of understanding of how a playbook is designed or whether a variable needs to be defined for a particular play. This gets much trickier when sharing templates with people who are unfamiliar with your work.

This is where the omit filter comes in. The easiest way to explain it: if the variable has no content or isn’t defined, the parameter is omitted.

The following example is an extract from the documentation:


- name: touch files with an optional mode
  file:
    dest: "{{ item.path }}"
    state: touch
    mode: "{{ item.mode | default(omit) }}"
  loop:
    - path: /tmp/foo
    - path: /tmp/bar
    - path: /tmp/baz
      mode: "0444"

In AWX we can create surveys; these are great for asking a few questions and guiding users on how to use the underlying play. But often we need to choose between two variables to decide which of two actions should happen, determined by the variable in use. If we leave one of the two empty, Ansible sees it as defined, but with “None” (Python null) as its content.

With the omit filter we can remove the parameter from the play, so if the parameter is empty it won’t be used.

The following code shows the usage of the icinga2_downtimes module, which can create downtimes for hosts or hostgroups, but both parameters cannot be used at the same time. In this case I can expose the variables for hostnames and hostgroups in the web interface. The user fills in one of the two variables, the other is removed by the filter, and no errors occur.


- name: schedule downtimes
  icinga2_downtimes:
    host: https://icingaweb2.localdomain
    username: icinga_downtime
    password: "{{ icinga_downtime_password }}"
    hostnames: "{{ icinga2_downtimes_hostnames | default(omit) }}"
    hostgroups: "{{ icinga2_downtimes_hostgroups | default(omit) }}"
    all_services: "{{ icinga2_downtimes_allservices | default(False) }}"

The variables shown in the AWX GUI on the template.

This filter can be used in various other locations to provide optional parameters to your users.
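To illustrate, here is a minimal sketch (not from the original post) that applies the same pattern to the user module; the variables app_user_groups and app_user_shell are made-up names for optional survey answers:

- name: create application user with optional settings
  user:
    name: appuser
    state: present
    # both parameters are only passed if the (hypothetical) variables were actually set
    groups: "{{ app_user_groups | default(omit) }}"
    shell: "{{ app_user_shell | default(omit) }}"

If a user leaves both optional fields empty, the task simply creates the user with the module defaults.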

If you want to learn more about Ansible, check out our Ansible trainings or read more on our blog.

Thilo Wening
Consultant

Thilo started at NETWAYS with an apprenticeship as an IT specialist focused on system administration and, having successfully passed his exams, now actively supports his colleagues in Consulting. In his free time he heads for the vertical and steels his muscles while bouldering. Being a true pro, he naturally prefers doing that outdoors and only resorts to the climbing gym in exceptional cases.

Config Management Camp Ghent 2020 – Recap


It seems like Config Management Camp at the beginning of February in Ghent has become a fixed date for me. I attended for the fifth year in a row, gave a talk in each of the last three and joined the Foreman Construction Day on the day after just as often. So why am I still attending, while some people may already be telling you that the time of configuration management is over in favor of containers and Kubernetes? While I cannot totally agree or disagree with this thesis, my schedule is still full of Foreman, Puppet and Ansible, so it makes sense to keep myself updated. Furthermore, the event offers networking opportunities like few others, with the speakers' dinner, the community event (also known as the beer event) and the Foreman community dinner. And last but not least, it is always interesting to hear what the big names think about the future of configuration management and how to adapt to a world of containers and Kubernetes, which was a big part of the talks in the main track.

But to get everything in the correct order, let me start on Sunday morning, when Blerim, Aleksander and I set off to meet Bernd and Markus, who had attended FOSDEM beforehand, at our Airbnb before going to the speakers' dinner. I have to admit I really like Ghent's old city, so I was happy that the same restaurant right in the middle of Ghent was chosen for dinner as last year. And also like last year, I joined the Foreman table to meet old and new friends for a few hours of small talk mixed with technical discussions.

The first conference day started as always with main tracks only, and I can really recommend Ryn Daniels' talk Untitled Config Game. After lunch I joined the Foreman community room to get the latest news from the community and on the 2.0 release, from Tomer Brisker and Ewoud Kohl van Wijngaarden respectively. The talks about Katello and about how to create an API and CLI for a compute resource were also quite interesting, but my favorite was Marek Hulan, who had initially chosen a very similar title for his talk about Foreman's new Reporting Engine and showed some interesting examples as well as the future templates documentation, which will be rendered automatically in a similar fashion to the API documentation that is always available at /apidoc on a Foreman installation. Last but hopefully not least was my talk about existing solutions that get data from Foreman into central systems, like Elasticsearch for logs and the Supervisor Authority Plugin, which enables Elastic APM to show performance bottlenecks, stack traces and some metrics and is perhaps the most promising solution for me. As I was the only thing standing between the audience and the beer, I was quite happy to finish my talk in time and get some more Kriek afterwards.

Day two also started with some great talks: John Willis telling us he has got 99 (or perhaps even far more) problems and a bash DSL ain't one of them, and Bernd explaining how convenience is killing open standards. The first was really great in showing how configuration management has evolved compared to the container world, which follows the same evolutionary process. The second was not only related to configuration management but to IT as a whole, including clouds and much more (and Bernd was fully aware of the discrepancy of giving such a talk on a MacBook). This day I visited more of the different tracks to hear about the migration from Pulp 2 to 3 behind Katello, testing of Ansible roles with Molecule (including some chemistry lessons), Ansible modules for Pulp, how Foreman handles Secure Boot, and last but not least to get an update on Mgmt Config. After the talks we joined the Foreman community dinner, which was located in a separate room of the same venue we visited last year, allowing even more discussions without fear of disturbing others.

The Foreman Construction Day, like many similar community events, is a fringe event at the same location that allows hacking together on some features, and I was happy to turn the beginner session I had already given in previous years into an official workshop. It was based on our official training, focusing on installation and provisioning, including hints and answering questions. After lunch I joined the hacking session for some time before buying some Kriek and waffles and traveling home.

Dirk Götz
Principal Consultant

Dirk is a Red Hat specialist and works at NETWAYS in Consulting for Icinga, Puppet, Ansible, Foreman and other systems management solutions. He was previously employed as a senior administrator at a statutory pension insurance institution, where he was also responsible for training the apprentices, as he now is at NETWAYS.

OpenStack made easy – Autoscaling on Demand

This entry is part 5 of 5 in the series OpenStack made easy

Depending on the nature of your production, it can be worthwhile for some server operators to create virtual machines automatically via script for a certain period of time and, once the work is done, to delete them again just as automatically; for example, when a computing job would take longer on your own hardware than is acceptable. Our cloud is happy to take care of this for you, even when it comes to resources other than processors.

In this example I will walk through the first steps of this scenario, showing how to talk to the API of our OpenStack platform using the Linux CLI.
For this you need an OpenStack client on the host that will do the scaling. On Ubuntu, for example, that would be the package python-openstackclient.
Next, you need the project-specific “OpenStack RC File v3” from the OpenStack WebUI. After logging into the project, this file can be downloaded via the drop-down menu with your own project ID at the top right.

Source the file so that the client knows which project it should talk to the API for; this requires entering your password:
source XXXX-openstack-XXXXX-openrc.sh

To be able to set the options passed when launching a new instance, you now need to look up the possible values (UUIDs, except for the key pair), decide on the right ones and note them down:

  • Source, the installation image to use:
        openstack image list
  • Flavor, i.e. the dimensions the VM to be built should have:
        openstack flavor list
  • Networks; here I recommend the project's own, externally secured subnet:
        openstack network list
  • Security groups; here at least the default security group is recommended, so that at minimum all VMs within the project can talk to each other without restrictions:
        openstack security group list
  • Key pair, for connecting via SSH:
        openstack keypair list

Then the instance can be launched. If more than one value is to be passed per option, repeat the option with one value each; the instance or server name comes last:
    openstack server create --image $imID --flavor $flID --network $nID --security-group $sgID --key-name $Name $Servername
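The $imID, $flID, $nID, $sgID, $Name and $Servername placeholders above are plain shell variables. As a minimal sketch (the image, flavor and network names below are made-up examples; use whatever the list commands showed in your project), they could be populated like this:

    imID=$(openstack image list -f value -c ID -c Name | awk '/Ubuntu 20.04/ {print $1}')
    flID=$(openstack flavor list -f value -c ID -c Name | awk '/s1.small/ {print $1}')
    nID=$(openstack network list -f value -c ID -c Name | awk '/my-project-net/ {print $1}')
    sgID=$(openstack security group list -f value -c ID -c Name | awk '/default/ {print $1}')
    Name=my-keypair        # key pairs are referenced by name, not by UUID
    Servername=worker-01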

There we go, the VM is up and ready to contribute to the daily business.
If you would like more than one machine, e.g. three, additionally pass these options before the server name:
    --min 3 --max 3
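Combined with the create command from above (same placeholder variables), the full call could look like this:

    openstack server create --image $imID --flavor $flID --network $nID --security-group $sgID --key-name $Name --min 3 --max 3 $Servername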

To go easy on the wallet, the servers may of course be deleted again once the work is done:
    openstack server list
    openstack server delete $deID

Automatically, i.e. without looking up the instance ID by hand, this also works like this (with Instanzname being the name of your instance):
    deID=`openstack server list | grep Instanzname | cut -d ' ' -f 2` ; openstack server delete $deID

As mentioned, it makes sense to wrap the create, computing and delete commands into a script; a sketch of what that could look like follows below. If your own Bash skills are not quite up to it, feel free to contact our MyEngineers. Putting a load balancer in between is no problem here either.
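The following is only a rough sketch of such a wrapper, under the assumption that the computing job is started via SSH and that the ID variables are populated as shown above; the job command and all names are placeholders:

    #!/usr/bin/env bash
    # Sketch: create a worker VM, run one computing job on it, delete it again.
    # Assumes imID, flID, nID, sgID and Name are already set as shown earlier.
    set -euo pipefail

    SERVER=batch-worker-$(date +%s)

    # 1. Create the instance and wait until it is active.
    openstack server create --image "$imID" --flavor "$flID" \
      --network "$nID" --security-group "$sgID" --key-name "$Name" \
      --wait "$SERVER"

    # 2. Run the actual computing job, e.g. via SSH (floating IP handling omitted here).
    IP=$(openstack server show "$SERVER" -f value -c addresses | grep -oE '([0-9]{1,3}\.){3}[0-9]{1,3}' | head -n1)
    ssh -o StrictHostKeyChecking=no "ubuntu@$IP" '/opt/jobs/run-computing-job.sh'

    # 3. Delete the instance again once the job is done.
    openstack server delete --wait "$SERVER"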

Martin Scholz
Systems Engineer

Martin recently switched from the social sector to IT and works in Managed Services support. Conveniently, it helps him that he already turned to Linux as a user quite some time ago. Privately he is a self-confessed couch potato, unless he once again feels compelled to go on a marathon hike. No feline or canine passer-by is safe from his attempts at making contact.

A journey with Vault – Part 1

This entry is part 1 of 1 in the series A Journey with Vault

Hello fellow blog readers!

Today I would like to take you on a journey with Vault by HashiCorp.

First of all, what is Vault? Bluntly put, Vault is a password store. Some of you will probably think of projects like KeePass or Enpass now. That is the right direction. However, everyone knows the main problem of the solutions mentioned above only too well.

Team capability.

One project handles it, others only partially or perhaps not at all. You could well describe the situation as frustrating. The fact is, in any case, that Vault is a solution that really has what it takes to be a team-capable password store. As with everything in this world, things unfortunately have their price. You are blessed with team capability, but Satan indirectly punishes us with the complexity of the whole construct, which is why I am wrapping the Vault adventure into a small series. Enough words for the introduction, let's get started with the new adventure starring: Vault.

Part one is dedicated to the basic setup of Vault.

As always, I am using a fully up-to-date, home-built CentOS 7 box provisioned with Vagrant, with VirtualBox as the provider.

The journey begins with downloading a ZIP archive that contains the Vault binary. We simply place it under /tmp and extract it straight to /usr/local/bin:

wget https://releases.hashicorp.com/vault/1.3.0/vault_1.3.0_linux_amd64.zip -P /tmp
unzip /tmp/vault_1.3.0_linux_amd64.zip -d /usr/local/bin
chown root. /usr/local/bin/vault

So that Vault can be invoked directly, we still have to add /usr/local/bin to our PATH variable. I put this in my ~/.bash_profile:

PATH=$PATH:$HOME/bin:/usr/local/bin

If everything was done correctly, we can now set up autocompletion and then restart the shell:

vault -autocomplete-install
complete -C /usr/local/bin/vault vault
exec $SHELL

To round things off, we will run Vault as a daemon.

First we have to allow Vault to execute mlock syscalls without requiring root:

setcap cap_ipc_lock=+ep /usr/local/bin/vault

Then we create an unprivileged system user that the Vault daemon will run as later:

useradd --system --home /etc/vault.d --shell /bin/false vault

Now comes the systemd unit:

touch /etc/systemd/system/vault.service

… with the following content:

[Unit]
Description="HashiCorp Vault - A tool for managing secrets"
Documentation=https://www.vaultproject.io/docs/
Requires=network-online.target
After=network-online.target
ConditionFileNotEmpty=/etc/vault.d/vault.hcl
StartLimitIntervalSec=60
StartLimitBurst=3

[Service]
User=vault
Group=vault
ProtectSystem=full
ProtectHome=read-only
PrivateTmp=yes
PrivateDevices=yes
SecureBits=keep-caps
AmbientCapabilities=CAP_IPC_LOCK
Capabilities=CAP_IPC_LOCK+ep
CapabilityBoundingSet=CAP_SYSLOG CAP_IPC_LOCK
NoNewPrivileges=yes
ExecStart=/usr/local/bin/vault server -config=/etc/vault.d/vault.hcl
ExecReload=/bin/kill --signal HUP $MAINPID
KillMode=process
KillSignal=SIGINT
Restart=on-failure
RestartSec=5
TimeoutStopSec=30
LimitNOFILE=65536
LimitMEMLOCK=infinity

[Install]
WantedBy=multi-user.target

Before we can start the daemon, we need to create a few directories and a configuration file:
mkdir -pv /etc/vault.d/
mkdir -pv /usr/local/share/vault/data/
chown -R vault. /usr/local/share/vault/

touch /etc/vault.d/vault.hcl

My configuration file should be regarded as an example. It contains the bare minimum needed to start the Vault server at all. It should be adapted to your own scenario accordingly and should absolutely be equipped with certificates!

storage "file" {
path = "/usr/local/share/vault/data"
}

ui = true

listener "tcp" {
address = "172.28.128.25:8200"
tls_disable = "true"
}

api_addr = "http://172.28.128.25:8200"
cluster_addr = "http://172.28.128.25:8201"

Reload systemd and start the Vault daemon:

systemctl daemon-reload
systemctl start vault

If everything has worked out, we should now find the Vault UI at the server's address on port 8200.
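As a quick sanity check from the CLI as well (not part of the original walkthrough), you can point the Vault client at the listener address from the example configuration above:

export VAULT_ADDR='http://172.28.128.25:8200'
vault status    # a brand-new server should report itself as not yet initialized and sealed

With that confirmed, you can continue exploring in the UI.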

That's it for the first part of the series. In the second part we will take a closer look at how Vault is structured and turn to the SSH integration. Vault offers many integration options with which, in the end, the authentication of all kinds of services can be managed centrally via Vault. Until then, as always, have fun tinkering!

Photo from: https://devopscube.com/setup-hashicorp-vault-beginners-guide/

Max Deparade
Consultant

Max has been a consultant at NETWAYS since January and actively supports our Professional Services team. Before that, he successfully completed his apprenticeship as an IT specialist for system integration at the city administration of Regensburg. Afterwards the native Swabian, who also spent part of his childhood in the Upper Palatinate, worked for half a year at a managed hosting provider in Regensburg before ending up at NETWAYS. In his free time Max enjoys peace and quiet above all, when...

Open Source Camp on Foreman

Like every year, there was an Open Source Camp following the OSMC, and as usual we helped organize it. Just in case you aren't aware of what an Open Source Camp is, here is the gist of it: it's meant to be an offer for open source projects to present themselves in more depth to the community. This year the Open Source Camp was about that one special yellow helmet we all know and love, Foreman.

Ondřej Ezr started us off with Ansible automation for Foreman (hosts). There are probably more than enough people using only Puppet in their Foreman environment. An alternative or complement to that would be the plugin foreman_ansible. Ansible and Puppet aren't necessarily better or worse than each other; they are different, and both have their advantages and disadvantages. By going through some basic steps, like role assignment, host creation and so on, he showed how one can do all of that, but with Ansible. You can easily assign roles and installations dynamically through Ansible to your Foreman hosts, and to make it even more specific you can set custom variables within the Ansible plugin for it to use, like foreman_repository_version. You could invoke a job, such as an Ansible playbook, that overrides the previously set variables or makes your installation more customizable from the get-go. Installing from Git, running a playbook via SSH and more were covered during his talk. The plugin would not be a viable alternative if it did not hold up against the standards that Puppet sets as a competitor. And while Ansible doesn't offer an inherent solution for recurring runs, say every hour, the plugin does.

Next up was Bernhard Suttner, who wanted to give us a taste of Salted Foreman. Initially he explained what all that salt was about. SaltStack, an open source project written in Python, can be used as a configuration management tool for Foreman. Salt excels at orchestrating cloud environments and network use cases, but then we got to the Foreman relation. Running a Salt and Foreman environment means running an environment of managed hosts, which are Salt minions, and a foreman_smart_proxy, which also acts as the Salt master. He showed us what Salt in Foreman looks like and gave us some insight into how it works, but even more importantly, from now on there are people dedicated to the project, and some day the plugin might be as good as the Puppet or Ansible plugin. Salt is great and especially effective in terms of scalability. It's pretty straightforward to use and the initial setup is not that hard. We are excited about what is to come.

Provisioning on Azure Cloud through Foreman by Aditi Puntambekar followed that one. Aditi made sure everyone was familiar with the extent of Foreman's capabilities in terms of provisioning. This was especially important because Foreman's capabilities differ from the usual when it comes to cloud provisioning. After a quick trip through the configuration of compute resources and image-based provisioning templates, we went onward to the Azure Resource Manager. She explained how the Azure Resource Manager essentially works, but what is interesting to us is foreman_azure_rm. And foreman_azure_rm does what you expect it to do: it adds the Microsoft Azure Resource Manager as a compute resource for Foreman. In her demo she showed us how to use said resource and more.

Martin Bačovský talked about CLI tools for Foreman. He started off with the Foreman API. Of course the Foreman API is fast and comes with a wide range of tools and libraries. Just like Martin said in his talk: if you are interested in the Foreman API, check out the documentation, it's very good. Also interesting in the realm of APIs was his next tool, apipie/apipy, which you are probably aware of if you are more on the Python side of things. Up there with the most well-known tools was Martin's next one, Hammer CLI, a command-line tool for Foreman. After sharing his experience with these rather popular tools, he introduced us to Foreman's integration of GraphQL. It's basically a query language that seems promising so far. Martin especially focused on the flexibility of queries and the introspection it offers, yet one has to see where the project goes. There were many more tools he told us a lot about; to name just a few, Report Templates, Foreman Ansible Modules and foreman_maintain. If you are interested in one of these tools in particular, check out the video of the talk, which will be available soon on our YouTube channel.

 

Give your Foreman a greater toolbox with plugins, by our very own Dirk Götz. Like he said himself: I will start off with existing toolbox things, and at the end I will show you how to create these things yourself. And that he did. This talk was very demo-heavy, so everything he explained was plain and simple, because you were able to see it as he did it. At the very top of his agenda was job invocation / remote execution. Not that exciting, you think? Well, more interesting was the best-practice advice he threw in along the way, for example that there is no issue with the configured user because its password is not saved as plain text in the database. Then the development part was up. He showed a couple of jobs he had written himself; the easiest, which served as an example, is a simple ping check. He pointed out important things to keep in mind while writing jobs, like default values. Before his talk came to a close, he talked a bit about the web console, which has been introduced recently and is not yet well known. The web console is pretty much an integration of Cockpit. A well-experienced user in the Linux world won't be that excited about it, but a less experienced user will love it.

The next talk would not have happened if Dirk hadn't spontaneously offered to step in. So we got another thirty minutes of Dirk Götz, and I won't complain. Katello: Adding content management to Foreman was the title, and people were keen to hear about just that. What is Katello? Dirk described it as a defined set of Foreman plugins, but not just that: it adds content management as well as subscription management. Wait… content management? Why do I need that? Configuration management should be enough! Not necessarily, depending on your environment. Let's just pick up the points Dirk made in favor of content management. For local content it ensures availability. For staging, it allows testing updates and makes builds reproducible. So content management should be seen as an addition to config management. He also talked about content views, how they are used for versioning and how they are promoted through lifecycles. Integration into orchestration, which is done via SSH or Ansible, was also a rather big point during his talk. Dirk designs his talks in a way that makes summarizing them impossible, because he covers way too much. Let's just say: not announced, but very much appreciated and most definitely worth checking out on our NETWAYS YouTube channel.

It was my second Open Source Camp and if you ask me this kind of exchange is what one wants to see in the open source community. There was variety and judging by the crowd reactions I was not the only one enjoying these talks. Thanks to all the speakers and attendees, safe travels home to everyone. Until the next Open Source Camp, hope to see you there!

Alexander Stoll
Junior Consultant

Alexander has a talent for organizing and recently became an apprentice in Professional Services. When he is not at NETWAYS, his weekly routine looks like this: sports on Monday, Tuesday and Wednesday, pen and paper on Thursday, and a weekend without plans. He is also happy to skip the sports part every now and then.

Give your Foreman a greater toolbox

Like every foreman, our well-beloved lifecycle management is only as good as its tools, says Dirk Götz, Foreman expert at NETWAYS. At OSCamp, Dirk will showcase some plugins and explain their use cases before giving some hints on plugin development.

DevOps with Foreman

Ondřej Ezr, Satellite Software Engineer at Red Hat, loves investing time in DevOps so much that it basically became his main job, he says. He will show how to get the most value out of using Ansible from Foreman – both when keeping hosts in a predefined state and when working in a remote execution fashion.

Better with Salt

Everything is better with salt – even Foreman. Bernhard Suttner, head of development at ATIX AG, who maintains the foreman_salt plugin, will demonstrate the use of Salt in Foreman. New features, such as Salt Variables and the Remote Execution Salt Provider, will be part of his talk.

With these and many other talks at OSCamp, get to know how to best equip your Foreman according to your individual needs.

Tickets at https://opensourcecamp.de/.

Julia Hornung
Marketing Manager

Julia has been a member of the NETWAYS family since June 2018. Before her time in our marketing team, she worked as a journalist and in the independent theater scene. Her passion is good storytelling, clear language and polished texts. Privately she devotes herself to climbing and her training as a yoga teacher.