
NETWAYS Blog

Automate Icinga for Windows with Ansible

This article covers how to automate the monitoring of your Windows infrastructure with Ansible and Icinga for Windows. For that, I developed a new Ansible role, which you can find here: https://github.com/DanOPT/ansible-role-ifw

The role allows you to manage your infrastructure dynamically with an Inventory and a group_vars file. It is also possible to define PKI tickets in the Inventory, and support for the Self-Service API is coming soon. The parent and the zone of each host are defined by the group name and its associated group variables.

The following topics will be covered:

  • How to organize your Windows infrastructure with an Inventory and group_vars file dynamically
  • Setup with On-Demand CSR Signing on the master
  • Setup with PKI Tickets for the client (agent or satellite) generated on the master
  • Coming soon

Prerequisites

This guide will not cover how to configure your Ansible host for WinRM connections. There are already plenty of blog posts about that topic, and the Ansible documentation covers it in detail (https://docs.ansible.com/ansible/latest/user_guide/windows.html).

What we will need:

  • Icinga2 master instance
    You will need an Icinga2 master instance. I will use Ubuntu 20.04 and the Icinga Installer (https://github.com/NETWAYS/icinga-installer) to deploy the instance.
  • Windows host
    For that, I will use a Windows Server 2012.
  • Ansible host
    Ansible host with remote access to the Windows hosts.

How to deploy your Icinga2 master instance

Configure the NETWAYS extras repository:

wget -O - https://packages.netways.de/netways-repo.asc | sudo apt-key add -
echo "deb https://packages.netways.de/extras/ubuntu focal main" | sudo tee /etc/apt/sources.list.d/netways-extras-release.list

The Icinga Installer requires the Puppet repository, so we also need to configure it with the following commands:

wget -O - https://apt.puppetlabs.com/DEB-GPG-KEY-puppet-20250406 | sudo apt-key add -
echo "deb http://apt.puppetlabs.com focal puppet7" | sudo tee /etc/apt/sources.list.d/puppet7.list

Install Icinga2 master instance (including IcingaWeb2 and Director):

sudo apt update
sudo apt install icinga-installer
sudo icinga-installer -S server-ido-mysql

Do not forget to write down your username and password.

How to organize the Windows hosts of your infrastructure with an Inventory and group_vars file dynamically

For the organization of your Windows hosts, you will need an Inventory file and a group_vars/<zone-name> file. In the group variables file, we will define the parent node name(s) as well as the parent address(es). In the Inventory it is possible to define a PKI ticket as a host variable.

For a simple setup with a master zone and a satellite zone, the Inventory file would look like this:

[master]
windows-2012 ansible_host=10.77.14.229

[satellite]
windows-2012-2 ansible_host=10.77.14.230

The group name will be used to define the zone name of the parent. The parent node name and address will be defined in group_vars/master and group_vars/satellite. Here is an example of the master file:

ifw_parent_nodes:
  - 'i2-master'
  #- 'i2-master-2'
ifw_parent_address:
  - '10.77.14.171'
  #- '10.77.14.172'

The variables always have to be a list, even if only one master needs to be specified.
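
For hosts below the satellite zone, a corresponding group_vars/satellite file follows the same pattern; the endpoint name and address here are only placeholders for your own satellite:

ifw_parent_nodes:
  - 'i2-satellite'
ifw_parent_address:
  - '10.77.14.180'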

Setup with On-Demand CSR Signing on the master

For simplicity, I will only use one host in my Inventory and the group_vars/master file I already described:

[master]
windows-2012 ansible_host=10.77.14.229

The simplest way to connect agents is to sign their certificates on the master. To achieve this, the agent has to connect first; after that, we can sign the request. The role already provides all required variables, so we just need to run the Playbook:

---
- hosts: all
  roles:
    - ansible-role-ifw

Run the playbook with the following command:

$ ansible-playbook playbook.yml -i hosts
PLAY [all] ****************************************************************************************************************************************************************************************************************************************************************

TASK [Gathering Facts] ****************************************************************************************************************************************************************************************************************************************************
ok: [windows-2012]

TASK [ansible-role-ifw : Create icinga-install.ps1 using Jinja2] **********************************************************************************************************************************************************************************************************
ok: [windows-2012]

TASK [ansible-role-ifw : Execute icinga-install.ps1] **********************************************************************************************************************************************************************************************************************
skipping: [windows-2012]

PLAY RECAP ****************************************************************************************************************************************************************************************************************************************************************
windows-2012 : ok=2 changed=0 unreachable=0 failed=0 skipped=1 rescued=0 ignored=0

After that, we can verify if a request has been made on the master with this command:

$ icinga2 ca list
Fingerprint                                                      | Timestamp                | Signed | Subject
-----------------------------------------------------------------|--------------------------|--------|--------
*****************************************************************| Nov  2 04:57:32 2022 GMT |        | CN = windows-2012
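
The pending request can then be signed on the master by passing the fingerprint shown in the list (masked with asterisks above) to icinga2 ca sign:

icinga2 ca sign <fingerprint>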

Setup with PKI Tickets for the client (agent or satellite) generated on the master

It is also possible to define a PKI ticket as a host variable in the Inventory (replace <pki-ticket>):

[master]
windows-2012 ansible_host=10.77.14.229 pki_ticket=<pki-ticket>
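
If you do not have a ticket yet, one can be generated on the master with the icinga2 pki ticket command (assuming the TicketSalt constant is configured there); the CN is simply the host name used in the Inventory:

icinga2 pki ticket --cn windows-2012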

After that, we need to set the variable 'ifw_certificate_creation' to 1 in the Playbook:

---
- hosts: all
  vars:
    ifw_certificate_creation: 1
  roles:
    - ansible-role-ifw

Just run the Playbook and the agent should be connected to your master.

 

Coming soon

  • Self-Service API for Director
  • JEA Profile
  • Local ca.crt
  • Custom repository
Daniel Patrick
Senior Systems Engineer

Daniel has been interested in Linux distributions, programming and technology all his life. He has gathered between four and five years of professional experience with Linux and has been working for NETWAYS since August 2022. At his previous employer he was jointly responsible for leading and organizing a team of six, was regarded as a knowledge carrier and one of the customers' first points of contact for technical problems. After work he usually does not stop working, but constantly tinkers with his machines. In his free time he enjoys watching good films and takes care of his two beloved cats.

Ansible Continuous Deployment without AWX/Tower/AAP

Why Ansible?

Ansible is a configuration management tool to automate tasks in your IT infrastructure. It offers a rather low barrier of entry, when compared to other tools. A local Ansible installation (i.e. on your machine) with SSH access to the infrastructure you want to manage is sufficient for getting started. Meaning, it requires no substantial additions to existing infrastructure (e.g. management servers or agents to install). Ansible also ships with an extensive standard library and has a large selection of modules to extend functionality.

Why Continuous Deployment?

Once a simple Ansible setup is up & running and things start to scale to more contributors, servers or services, it is usually necessary to automate the integration of code changes. By creating one or more central Ansible repositories, we create a single source of truth for our infrastructure. We shift to continuous integration, start testing and verifying changes to the code base.

The next logical step is then to automate the deployment of this single source of truth, to make sure changes are applied in a timely and consistent manner. Infrastructure code that is not deployed regularly becomes riskier to deploy with each passing day: errors are best discovered promptly, while they can still be traced back to recent code changes, and we all know that people make undocumented hand-crafted changes that are then overwritten and everything goes up in flames. Thus we want shorter, more frequent cycles and consistent deployments to avoid our infrastructure code becoming stale and risky.

Why not AWX/Tower/AAP?

AWX (aka. Tower, now Ansible Automation Platform) aims to provide a continuous deployment experience for Ansible. Quote:

Ansible Tower (formerly ‘AWX’) is a web-based solution that makes Ansible even more easy to use for IT teams of all kinds.

It offers a wide array of features for all your ‘Ansible at scale’ needs; however, it comes with some strings attached. Namely, it involves management overhead for smaller environments, as it introduces yet another tool to install, learn, update and manage throughout its life cycle. Not only that, but from version 18.0 onward the preferred way to install AWX is the AWX (Kubernetes) Operator. Meaning, preferably, we would need a Kubernetes instance lying around somewhere. Of course, there is always the option to use “unorchestrated” Containers as an alternative, but that comes with its own obstacles.

Installation and management aside, there is also Red Hat’s upstream first approach to consider. Meaning, AWX is the upstream project of Ansible Tower and thus it might not be as ‘stable’. Furthermore, Red Hat does not recommend AWX for production environments. Quote:

The AWX team currently plans to release new builds approximately every 2 weeks. The AWX team will flag certain builds as “stable” at their discretion. Note that the term “stable” does not imply fitness for production usage or any kind of warranty whatsoever.

Obviously, there are alternatives to AWX/Ansible Tower. Rundeck allows for predefined workflows; these jobs can be triggered from a web GUI, API, CLI, or on a schedule, and it works not just with Ansible. Semaphore offers a simple UI for Ansible to manage projects (environments, inventories, repositories, etc.) and includes an API for launching tasks. Puppet aficionados may already know Foreman, which is a great and battle-tested tool for provisioning machines. You can use the “Foreman Remote Execution” to run your Playbooks and use Ansible callbacks to register new machines in Foreman. Here are some recommended videos on this topic:

– FOSDEM 2020, Foreman meets Ansible: https://www.youtube.com/watch?v=PQYCiJlnpHM
– OSCamp 2019, Ansible automation for Foreman (hosts): https://www.youtube.com/watch?v=Lt0MksAIYuQ

That being said, the premise was to avoid substantially extending any existing infrastructure. All of the mentioned tools need at least an external database service (e.g. MariaDB, MySQL or PostgreSQL). With that in mind, this article will now describe alternative solutions for continuous deployment without AWX/Ansible Tower. It will show examples using GitLab CI; however, the presented solutions should be adaptable to other CI/CD systems.

Ansible Continuous Deployment via the Pipeline

For this article, we will assume a central Ansible Repository on an existing GitLab Server with some GitLab CI Pipeline already in place. Meaning, we might also have some experience with CI jobs in Containers.

Many (if not all) CI/CD solutions feature isolated jobs within Containers, which enables us to quickly spin up predefined execution environments for these jobs (e.g. pre-installed with various tools for testing). Furthermore, it is possible to use specific machines for specific jobs, or to place certain machines in different network zones (e.g. a node that triggers something in the production environment could be isolated from the rest).

Given this setup we will now explore two scenarios for Ansible Continuous Deployment via pipeline jobs. One based on SSH and the other based on HTTP (Webhooks).

The example Ansible repository follows a standard pattern and is safely stored in a git repository:

git clone git@git.my-example-company.com:ansible/ansible-configuration.git
cd ansible-configuration/

ls -l
ansible.cfg
playbooks/
roles/
inventory/
collections/
site.yaml
requirements.yml

SSH

Since the basis for all Ansible deployments is SSH we will leverage this protocol to deploy our code. Fundamentally, there are two options to achieve this:

– Connect from a pipeline job to a central machine with Ansible already installed, download the code changes there and trigger a playbook
– Run an Ansible playbook directly in a pipeline job (i.e. a Container)

For this example we will generate a specific SSH Keypair that is then used in the pipeline. The public key needs to be added to the `authorized_keys` of any machines we want to connect to. Secrets such as the SSH private key can be managed directly in GitLab (CI Variables) or be stored in an external secret management tool (e.g. Hashicorp Vault). Don’t hardcode secrets in the Ansible code base or CI configuration.

# -t keytype (preferably use ed25519 whenever possible)
# -f output file
# -N passphrase
# -C comment

ssh-keygen -t ed25519 -f ansible-deployment -N '' -C 'Ansible-Deployment-Key'

Option A: via an Ansible machine

In this scenario, we connect from a CI job in the pipeline to a machine with Ansible already installed. On this machine we will clone the Ansible configuration and trigger a playbook. This article will refer to this machine as ‘central Ansible node’, obviously a more complex infrastructure might need more of these machines (i.e. per network zone).

First, we need to copy the previously generated SSH key onto the central Ansible node, so that we can connect to it from the GitLab CI job. Second, we require a working Ansible setup on this node. Please note that the detailed installation process will not be explained in this article, since the focus lies on the CI/CD part. We assume that this node has a dedicated user for Ansible that is able to successfully run the Ansible code.

# Copy public key for deployment on the central Ansible node
scp ansible-deployment.pub ansible@central-ansible-node.local:
ssh ansible@central-ansible-node.local

# Authorize the public key for outside connections
cat 'ansible-deployment.pub' >> ~/.ssh/authorized_keys

# Install Ansible
pip3 install --user ansible # or ansible==version
# Further setup like inventory creation or dependency installation happens here...

At this point, we assume we can connect to our infrastructure and run Ansible playbooks at our leisure. Next, we will create a GitLab CI job which does the following:

  • Retrieve the previously generated SSH private key from our secrets, so that we can connect to the central Ansible node
  • Connect to the central Ansible node and clone the repository there. We will use GitLab's CI job token for this
  • Create a temporary directory to isolate each pipeline job
  • Run a playbook via SSH on the central Ansible node
---
stages:
  - deploy

variables:
  CENTRAL_ANSIBLE_NODE: central-ansible-node.local
  # Or you can provide a ssh_known_hosts file
  ANSIBLE_HOST_KEY_CHECKING: "False"

deploy-ansible:
  image: docker.io/busybox:latest
  stage: deploy
  before_script:
    - mkdir -p ~/.ssh
    # SSH_KNOWN_HOSTS is a CI variable to make sure we connect to the correct node
    - echo "$SSH_KNOWN_HOSTS" > ~/.ssh/known_hosts
    # The SSH private key is a CI variable
    - echo "$SSH_PRIVATE_KEY" > id_ed25519
    - chmod 400 id_ed25519
  script:
    - TMPDIR=$(ssh -i id_ed25519 $CENTRAL_ANSIBLE_NODE "mktemp -d")
    - ssh -i id_ed25519 $CENTRAL_ANSIBLE_NODE "git clone https://gitlab-ci-token:${CI_JOB_TOKEN}@git.my-example-company.com/ansible/ansible-configuration.git $TMPDIR"
    - ssh -i id_ed25519 $CENTRAL_ANSIBLE_NODE "ansible-playbook $TMPDIR/site.yaml"

This basic example can be extended in many ways. Some of the benefits of this approach are that it is rather easy to set up, since it mirrors the local execution workflow, and that the deployment can be debugged and triggered directly on the central Ansible node. Furthermore, GitLab can trigger jobs on a schedule, and CI variables could be used to control which Ansible playbook is executed or which hosts and tags are included.
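
As a sketch of how such controls might look, the script of the job above could pass them straight through to ansible-playbook; ANSIBLE_LIMIT and ANSIBLE_TAGS are hypothetical CI variables, not part of the example above or of GitLab itself:

  script:
    - TMPDIR=$(ssh -i id_ed25519 $CENTRAL_ANSIBLE_NODE "mktemp -d")
    - ssh -i id_ed25519 $CENTRAL_ANSIBLE_NODE "git clone https://gitlab-ci-token:${CI_JOB_TOKEN}@git.my-example-company.com/ansible/ansible-configuration.git $TMPDIR"
    # ANSIBLE_LIMIT and ANSIBLE_TAGS would be set per pipeline run or schedule
    - ssh -i id_ed25519 $CENTRAL_ANSIBLE_NODE "ansible-playbook $TMPDIR/site.yaml --limit $ANSIBLE_LIMIT --tags $ANSIBLE_TAGS"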

However, we now have a central Ansible node to manage and we might need several in different network zones. Additionally the `mktemp` solution is a bit hacky and might need a garbage collection job (e.g. `tmpreaper`). The next solution will alleviate some of these issues.

Option B: directly via a Pipeline

In this scenario, Ansible is executed directly in the CI pipeline job (i.e. a Container). It is recommended to use a custom pre-built Ansible Container Image to make the jobs faster and more consistent. This Image may contain a specific Ansible version and further tools required for the given code. The Image can be stored in the GitLab Container Registry. Building and storing Container Images is outside the scope of this article. Here is a small example of how it might look:

cat Dockerfile.ansible.example

FROM docker.io/python:3-alpine
RUN pip install --no-cache-dir ansible
# ... Install further tools or infrastructure specifics here
# The image will be stored at registry.my-example-company.com/ansible/ansible-configuration/runner:latest

cat .gitlab-ci.yml

---
stages:
  - deploy

variables:
  # Or you can provide a ssh_known_hosts file
  ANSIBLE_HOST_KEY_CHECKING: "False"

deploy-ansible:
  image: registry.my-example-company.com/ansible/ansible-configuration/runner:latest
  stage: deploy
  before_script:
    # The SSH private key is a CI variable
    - echo "$SSH_PRIVATE_KEY" > id_ed25519
    - chmod 400 id_ed25519
  script:
    - ansible-playbook --private-key id_ed25519 site.yaml

This removes the need for a central Ansible node and the need for external garbage collection, since these CI jobs are ephemeral by default. That being said, if we have a more complex network setup we might need runners in these zones and a way to control which job is executed where.
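
In GitLab this is usually handled with runner tags: runners registered in a given network zone carry a tag and jobs request that tag. A minimal sketch, where the dmz-runner tag and the dmz host group are made up for this illustration:

deploy-ansible-dmz:
  image: registry.my-example-company.com/ansible/ansible-configuration/runner:latest
  stage: deploy
  # Only runners registered with the dmz-runner tag (e.g. inside the DMZ) pick up this job
  tags:
    - dmz-runner
  before_script:
    - echo "$SSH_PRIVATE_KEY" > id_ed25519
    - chmod 400 id_ed25519
  script:
    - ansible-playbook --private-key id_ed25519 --limit dmz site.yaml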

HTTP (Webhooks)

In this scenario, we set up another central Ansible node that will run the playbooks; however, there won't be an SSH connection from the CI job. Instead, we will trigger a webhook on this central Ansible node. While this scenario is more complex, it offers some benefits compared to the previously discussed options.

Since there are several ways to implement incoming webhooks, we will not look at a specific implementation but discuss the concept. Interestingly enough, a webhook-based feature provided by Ansible is currently in developer preview: Event-Based-Ansible provides a webhook service that can trigger Playbooks.

In this example, we have a service providing webhooks running on central-ansible-node.local on port 8080. This service is configured to run Ansible with various options which we can pass via an HTTP POST request. This request will contain data that controls the Ansible playbook run.

cat trigger-site-yaml.json

{
  "token": "$WEBHOOK_TOKEN",
  "playbook": "site.yaml",
  "limit": "staging"
}

cat .gitlab-ci.yml

---
stages:
  - deploy

variables:
  CENTRAL_ANSIBLE_NODE: central-ansible-node.local:8080

deploy-ansible:
  image: docker.io/alpine:latest
  stage: deploy
  before_script:
    - apk add curl gettext
  script:
    # Replace the $WEBHOOK_TOKEN placeholder in the file with the real value from the CI variables
    - envsubst < trigger-site-yaml.json > trigger-site-yaml.run.json
    - curl -X POST -H "Content-Type:application/json" -d @trigger-site-yaml.run.json $CENTRAL_ANSIBLE_NODE

From a security standpoint, we remove the need for reachable SSH ports; the central Ansible node now just accepts HTTP (or specific HTTP methods) secured by tokens. Furthermore, there is now a layer between our CI jobs and the Ansible playbooks which can be used to validate requests.

That being said, this extra layer could also be seen as a hurdle that might break. And besides the central Ansible node, we now need to manage a service that provides these webhooks. However, in the future Event-Based-Ansible might alleviate some of these issues.

Conclusions

Deploying Ansible is quite flexible due to its simple operational model based on SSH. As we have seen, there are some low-effort alternatives to AWX/Tower that can be applied in various use cases. However, at some point there is a maintainability tradeoff. Meaning, even though AWX/Tower might not appear as stable or is sometimes tricky to operate, once an environment is large enough it might be a better option than custom creations. Probably not a satisfying conclusion for an article named “without AWX/Tower”, I agree.

Foreman presents an interesting alternative due to the myriad of other features you get with an installation. Finally, Event-Based-Ansible could be a very promising webhook-based solution when it comes to automated deployments. Starting simple and then pivoting to a more complex system is always an option.

Markus Opolka
Consultant

After his apprenticeship as an IT specialist, Markus worked as a systems administrator for several years and completed a master's degree in linguistics at FAU alongside it. He has been working at NETWAYS as a consultant since 2022, taking care of containers, Kubernetes, Puppet and Ansible. In his private life you will find him on his bike, on the sofa or on GitHub.

Leap(p) to Red Hat Enterprise Linux 9

I have to apologize right away for the pun in the title, but it suggested itself as soon as I had decided on the topic, because I want to take a fresh look at Leapp, which can be used to perform upgrades of Red Hat Enterprise Linux (RHEL). As usual, this look is meant to be a bit more detailed, so the blog post will first take up the question of “Replace or In-Place”, which is about as old as “chicken or egg”. Next comes the status of the project, before I explain its usage on the command line. Then a small detour via Foreman upgrades, before things get graphical with the Cockpit plugin, and at the end the Foreman plugin and the Ansible role make the whole topic suitable for large-scale use.

Replace or In-Place?

Presumably every user has at some point faced the question of whether an upgrade is worthwhile or whether it is better to reinstall the system directly with the latest version of the operating system. The former naturally takes less effort: you read through the release notes, watch out for special instructions, make another up-to-date backup (everyone does that, right?), start the upgrade, reboot, let the follow-up tasks run, reboot again and enjoy a nice, shiny new system. At least until you notice that some screws are still loose, the rubbish on the back seat has not disappeared, and the engine is still leaking and badly tuned, to use a car analogy. Or, in our case, perhaps one setting or another with a negative impact is kept, a software solution is not replaced by a more modern one, and everything we once tried out and never used again still takes up disk space.
That is usually the main argument of those who favour reinstallation over upgrades. Another one, at least in the past, was that there was no simple, suitable tool and the procedure was not supported. RHEL updates used to require going through the installation images so that changes to settings and software were possible, which meant the effort was practically the same as a fresh installation, except for restoring data. The alternative route via the package manager often involved manual rework and was absolutely not covered by support. This is where Leapp comes in, aiming for a middle ground between simplicity for the user and the ability of the distribution to make adjustments.

The Project

Red Hat started the Leapp project in 2017. It consists of a framework and so-called actors, which do the actual work. It was first used for the upgrade from RHEL 7 to 8. The project gained further attention under the name ELevate, a fork of the actors by the AlmaLinux distributors. That fork contains actors to upgrade CentOS Linux 7 to AlmaLinux OS 8, CentOS Stream 8, EuroLinux 8, Oracle Linux 8 or Rocky Linux 8. Even if the intention was surely an easy switch from CentOS to AlmaLinux, the open-source spirit has been implemented here in a very commendable way with the additional support for the other distributions, especially after CentOS itself had ruled out supporting Leapp. For an upgrade from RHEL 8 to 9 the Leapp project itself now also provides actors, which is why I am looking at the project again, since exactly this upgrade is coming up for one of our customers.
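
As a rough sketch of what the command-line usage looks like on a registered RHEL 8 system (the exact steps, repositories and inhibitor handling are covered in the full post and the Red Hat documentation):

# Install the upgrade tooling
dnf install leapp-upgrade

# Analyse the system and review the generated report before continuing
leapp preupgrade
less /var/log/leapp/leapp-report.txt

# Perform the upgrade and reboot into the upgrade environment
leapp upgrade
reboot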

Dirk Götz
Principal Consultant

Dirk is a Red Hat specialist and works at NETWAYS as a consultant for Icinga, Puppet, Ansible, Foreman and other systems management solutions. He previously worked as a senior administrator at a statutory pension insurance institution, where he was also responsible for training the apprentices, just as he is now at NETWAYS.

Ansible – Testing roles with Molecule

Ansible is a widely used and powerful open-source configuration and deployment management tool. It can be used for simple repetitive daily tasks or complex application deployments; therefore, Ansible is able to cover almost any situation.

If it is used in complex or heterogeneous environments, it is necessary to test the code to reduce the time spent fixing code in production. To test Ansible code, it is suggested to use Molecule.

Molecule is a useful tool to run automated tests on Ansible roles or collections. It helps with unit tests to ensure the code works properly on different systems. Whether you use the role internally or provide it to the public, it is useful to test the many ways your role can be used. In addition, Molecule is easily integrated into well-known CI/CD tools, like GitHub Actions or GitLab CI/CD.

In this short introduction I'll try to get your first Molecule tests configured and running!

Please make sure you have installed Molecule beforehand. On most distributions it's easily installed via pip.
The fastest and most common way to test roles would be in containers. Due to a version problem with systemd, it is currently not possible to start services via systemd in containers. For this reason you can start with a Vagrant instance and later migrate to Docker or Podman.


pip install molecule molecule-vagrant

If you have a role you can change into the role directory and create a default scenario.


cd ~/Documents/netways/git/thilo.my_config/
molecule init scenario -r thilo.my_config default
INFO     Initializing new scenario default...
INFO     Initialized scenario in /Users/thilo/Documents/netways/git/thilo.my_config/molecule/default successfully.

All scenarios are listed below the molecule folder. Edit default/molecule.yml to add the Vagrant options.

Add a dependency file with your collections (an example is shown below the molecule.yml), since with newer Ansible versions only the core modules are available. If needed, you can add sudo privileges to your tests.

molecule/default/molecule.yml


---
dependency:
  name: galaxy
  options:
    requirements-file: collections.yml
driver:
  name: vagrant
platforms:
  - name: instance
    box: bento/centos-7
provisioner:
  name: ansible
verifier:
  name: testinfra
  options:
    sudo: true
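
The referenced collections.yml is a regular ansible-galaxy requirements file. A minimal sketch, listing whatever collections your role needs (community.general here is only an example):

molecule/default/collections.yml


---
collections:
  - name: community.general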

The converge.yml is basically the playbook to run on your instance. In the playbook you define which variables should be used or if some pre-tasks should be run.

molecule/default/converge.yml


---
- name: Converge
  hosts: all
  become: true
  tasks:
    - name: "Include thilo.my_config"
      include_role:
        name: "thilo.my_config"

Now you can run your playbook with Molecule. If you just want to deploy your instance and keep it, use converge. Otherwise you can use test; the instance will then be created, tested and destroyed afterwards.


python3 -m molecule converge -s default
or 
python3 -m molecule test -s default

Finally, we can define some tests; the right tool for this is Testinfra. Testinfra provides different modules to gather information and check whether it has specific attributes.

Your scenario creates a tests folder with the following file: molecule/default/tests/test_default.py

In this example I’ll test the resources my role should create.


"""Role testing files using testinfra."""


def test_user(host):
    """Validate created user"""
    u = host.user("thilo")

    assert u.exists

def test_authorized_keys(host):
    """Validate pub key deployment"""
    f = host.file("/home/thilo/.ssh/authorized_keys")

    assert f.exists
    assert f.content_string == "ssh-rsa AAAA[...] \n"

And if we already converged our instance, we can verify these definitions against our deployment.


python3 -m molecule verify
INFO     default scenario test matrix: verify
INFO     Performing prerun with role_name_check=0...
[...]
INFO     Running default > verify
INFO     Executing Testinfra tests found in /Users/thilo/Documents/netways/git/thilo.my_config/molecule/default/tests/...
============================= test session starts ==============================
platform darwin -- Python 3.9.12, pytest-6.2.5, py-1.11.0, pluggy-0.13.1
rootdir: /
plugins: testinfra-6.4.0
collected 2 items

molecule/default/tests/test_default.py ..                                [100%]

============================== 2 passed in 1.79s ===============================
INFO     Verifier completed successfully.

With those easy steps you can test your roles for any scenario, and your deployments can run without any hassle, or at least you will be more relaxed during them 😉

Check out our Blog for more awesome posts and if you need help with Ansible send us a message or sign up for one of our trainings!

Thilo Wening
Senior Consultant

Thilo started at NETWAYS with an apprenticeship as an IT specialist focusing on systems administration and, having successfully passed his exams, now actively supports his colleagues in consulting. In his free time he is athletically active on vertical terrain and builds his muscles while bouldering. As a true pro, he naturally prefers to do this outdoors and only goes to the climbing gym in exceptional cases.

Ansible – How to create reusable tasks

Ansible is known for its simplicity, lightweight footprint and flexibility to configure nearly any device in your infrastructure. Therefore it is used in large-scale environments shared between teams or departments. Often tasks could be reused in multiple playbooks, for example to combine update routines, set downtimes via an API or update data in the central asset management.

To use external tasks in Ansible we use the include_tasks module. This module dynamically includes the tasks from the given file. When used in a specific play, we would assign play-specific variables to avoid confusion. For example:


vim tasks/get_ldap_user.yml

- name: get user from ldap
  register: users
  community.general.ldap_search:
    bind_pw: "{{ myplay_ad_bind_pw }}"
    bind_dn: "{{ myplay_ad_bind_dn }}"
    server_uri: "{{ myplay_ad_server }}"
    dn: "{{ myplay_ad_user_dn }}"
    filter: "(&(ObjectClass=user)(objectCategory=person)(mail={{ myplay_usermail }}))"
    scope: children
    attrs:
      - cn
      - mail
      - memberOf
      - distinguishedName

This task might be reused in another playbook to reduce the amount of code, or run again with other conditions or values. In that case the play-specific variables would have to be overwritten, or, in another playbook, they would simply be named wrong.

To solve this problem, change the variables to unused, generic variables and assign your own variables in the include_tasks statement.


vim tasks/get_ldap_user.yml

- name: get user from ldap
  register: users
  community.general.ldap_search:
    bind_pw: "{{ _ad_bind_pw }}"
    bind_dn: "{{ _ad_bind_dn }}"
    server_uri: "{{ _ad_server }}"
    dn: "{{ _ad_user_dn }}"
    filter: "(&(ObjectClass=user)(objectCategory=person)(mail={{ _ad_usermail }}))"
    scope: children
    attrs:
      - cn
      - mail
      - memberOf
      - distinguishedName

The vars parameter of include_tasks passes your own variables to the included tasks.


vim plays/user_management.yml
[...]
- name: check if user exists in ldap
  include_tasks:
    file: tasks/get_ldap_user.yml
  vars: 
    _ad_bind_pw: "{{ play_ad_pw }}"
    _ad_bind_dn: "{{ play_ad_user }}"
    _ad_server: "{{ play_ad_server }}"
    _ad_user_dn: "OU=users,DC=example,DC=de"
    _ad_usermail: "{{ play_usermail }}"

This can easily be combined with loops to enhance the reusability of your tasks even more! Check out this blog post about looping over multiple tasks: Ansible – Loop over multiple tasks
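
A hedged sketch of such a combination, looping the include over a list of mail addresses (the play_usermails list and the other play_* variables are made up for this example):

- name: check multiple users in ldap
  include_tasks:
    file: tasks/get_ldap_user.yml
  loop: "{{ play_usermails }}"
  vars:
    _ad_bind_pw: "{{ play_ad_pw }}"
    _ad_bind_dn: "{{ play_ad_user }}"
    _ad_server: "{{ play_ad_server }}"
    _ad_user_dn: "OU=users,DC=example,DC=de"
    _ad_usermail: "{{ item }}"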

Check out our Blog for more awesome posts and if you need help with Ansible send us a message or sign up for one of our trainings!

Thilo Wening
Senior Consultant

Thilo started at NETWAYS with an apprenticeship as an IT specialist focusing on systems administration and, having successfully passed his exams, now actively supports his colleagues in consulting. In his free time he is athletically active on vertical terrain and builds his muscles while bouldering. As a true pro, he naturally prefers to do this outdoors and only goes to the climbing gym in exceptional cases.