
NETWAYS Blog

Foreman Birthday Event 2024 – Save the Date

I want to spread the word: after many years of online conferencing, our first on-site Foreman Birthday Event will take place again on 15.07.2024. This year’s host is ATIX, so it will be held in their conference room at Parkring 4, 85748 Garching, near Munich, Germany.

 

What is the Foreman Birthday Event?

In case you missed our last events: the Foreman Birthday Event is a small conference with a mix of talks and networking. The event was born out of Greg Sutcliffe’s idea to celebrate the birthday of the Foreman Project when he was the project’s community manager. In the first years there were multiple events, but the only one still taking place is the one organized by ATIX and NETWAYS.

 

How to Participate?

Talks have not been announced yet, as the Call for Papers is still open. So everyone can still propose an interesting talk focusing on the use and development of Foreman, Katello, or Pulp. As I am not hosting this year, I have already sent one in myself, and you can suggest your own at cfp@atix.de. As a speaker, you will be invited by ATIX to dinner the evening before the main event. So plan your trip to perhaps combine some sightseeing in Munich with the event!

If you want to visit the event just for the talks and networking, please register in the meetup group, as space and cake are limited.

Still undecided? Then take a look at the recap of the last events in 2023 and 2021, and you will find some great talks which will hopefully convince you.

 

How to stay up to date?

Save the date (15.07.2024 10:00-16:00) and location (Parkring 4, 85748 Garching), and spread the word further! Updates from ATIX will be added to the Foreman Community, so follow the thread there and stay in touch!

Dirk Götz
Principal Consultant

Dirk is a Red Hat specialist and works at NETWAYS in consulting for Icinga, Puppet, Ansible, Foreman and other systems management solutions. He previously worked as a senior administrator at a statutory pension insurance provider, where he was also responsible for training the apprentices, as he now is at NETWAYS.

Officially Opening the Call for Papers for OSMC 2024!

We are pleased to announce that you can now submit your ideas for presentations at OSMC 2024! The event will take place over three days, from November 19 to 21, in Nuremberg. Now is your chance to share your knowledge with our monitoring community! Check out this blog post for all the details you need to know to become a speaker.

 

Let’s Talk About…

…open source monitoring solutions! That’s the main topic of our conference. But there is much more you can talk about. Here are some ideas to inspire you.

We’re looking for talks that take an in-depth perspective on technical topics. Whether it’s about new topics, open source projects, technical background or the latest developments, we want to hear about it! We also look forward to presentations on new features, tutorials, real-life stories, best practices and what’s next in the world of monitoring.

Looking for more ideas? Just check out the presentations from past OSMC events.

 

Presentation Formats

How much do you want to say? How long do you want to speak? At OSMC, you can choose from three different presentation formats: Ignite Talk, 30-minute talk or 45-minute talk.

Choose the type that suits you best!

 

Submission Deadline

We’re accepting your presentation idea until August 15. Don’t wait until later – submit your talk now!

If you have any questions about the Open Source Monitoring Conference, you can contact our events team at any time.

Katja Kotschenreuther
Manager Marketing

Katja has been part of the marketing team since October 2020. As Manager Marketing she takes care of the marketing for the conferences stackconf and OSMC, the DevOpsDays Berlin, the Open Source Camps, as well as our trainings. In her free time she loves travelling, crafting, baking, and in summer she also tends to her far too large vegetable garden.

The Countdown for DevOpsDays Berlin 2024 is on!

It’s just one week until DevOpsDays Berlin starts. The organizing team’s preparations are in full swing, and they can’t wait to finally welcome you there. Let’s build up the excitement together!

 

A Fantastic Program

The two-day schedule provides a combination of curated talks and self-organized conversations called Open Spaces. The agenda covers topics like “Product Management in DevOps”, “SRE Challenges and Highlights in Shifting from Monolith to Microservices at adidas e-commerce”, “Mentoring and Coaching Junior Engineers – Insights from a Career Changer”, “Lessons from Failing GitOps in Delivering Self-Service Kubernetes Onboarding”, and many more.

The Open Spaces provide a platform for discussing anything that interests you. Whether you want to delve into a topic you’re eager to learn about or share your expertise with others, the possibilities are endless. From technical subjects to cultural insights to casual board game sessions for networking, there’s something for everyone.

 

Networking Opportunities

Besides the Open Spaces, there are many more opportunities for networking with like-minded DevOps professionals. Whether during the coffee breaks, over lunch, or at the evening event on the first conference day – use these chances! If you want to meet new people, get career advice, or grow your professional connections, DevOpsDays is the right spot. You never know, you might find your next great idea or job opportunity just by talking to someone there.

 

Get Ready!

Make sure you have your ticket handy at the check-in counter. Follow DevOpsDays Berlin on Twitter and LinkedIn to stay in the loop with real-time updates. Engage on social media by using the hashtag #devopsdaysberlin and share your personal conference highlights! Note down your hardest questions and be sure to get answers from the expert speakers.

In case you haven’t saved your seat yet, you can still grab one of the last available tickets. Make sure to join DevOpsDays Berlin on May 7 & 8!

The organizing team is absolutely looking forward to meeting you there!

Katja Kotschenreuther
Manager Marketing


Spoiler Alert – The stackconf 2024 Program is Set!

The wait is over! We’re pleased and excited to finally announce the agenda for this year’s stackconf. The event, focused on cloud native & infrastructure solutions, takes place from June 18 to 19 in Berlin, Germany.

 

Highlights from the Schedule

Here’s a glimpse into a few of our keynote speakers, but there are plenty more on our website. Be sure to have a look and bookmark your favourites!

 

BILL MULLIGAN, Isovalent

Buzzing Across The eBPF Landscape And Into The Hive

Bill’s talk on the rising eBPF technology buzz covers its applications and guidance for beginners and experts. He shares his journey into eBPF, its benefits like efficient networking and real-time security, and highlights various applications. Attendees learn about the eBPF landscape, new tools, and how eBPF addresses networking, observability, and security challenges.


 

ANAÏS URLICHS, Aqua Security

Looking into the Closet from Code to Cloud with Bills of Material

Anaïs’ talk covers various Bills of Material (BOM) types, generated from Code to Cloud resources using tools like Trivy, Syft, and Microsoft SBOM. Attendees compare BOM outputs for security and quality using sbom-comparator by Lockheed Martin, with a live demo showcasing their benefits for security scans and reducing vulnerability scan noise.


 

ALEX PSHE, JetBrains

Step-by-step algorithm for building CI/CD as an automated quality control system

In Alex’s talk, delve into the tester’s CI/CD perspective, leveraging automatic metric control for decisions. Learn about constructing CI/CD pipelines based on test metrics, the fail-first approach, and utilizing various pipeline types for testing. Key criteria for effective pipelines, including quality gates and automated control systems, are emphasized.


MAGNUS KULKE, Microsoft

Confidential Containers – Sensitive Data and Privacy in Cloud Native Environments

Magnus’ talk introduces Confidential Container technology for processing sensitive data in cloud-native environments. He discusses its implementation in Linux and hardware and evaluates the progress of the “Confidential Containers” project. Finally, a practical demonstration of confidential container deployment in Kubernetes is showcased.


 

ALAYSHIA KNIGHTEN, Pulumi

Unleashing Potential Across Teams: The Power of Infrastructure as Code

Alayshia’s talk showcases how Infrastructure as Code (IaC) revolutionizes managing diverse infrastructures, offering ease and agility. Attendees learn practical strategies for implementation, fostering collaboration and boosting productivity across technical backgrounds.


 

Save your Ticket!

Don’t miss out on grabbing one of our available tickets before they’re gone! If you have any friends or colleagues who might be interested too, come together and benefit from our team discount. We’re looking forward to meeting you!

Katja Kotschenreuther
Manager Marketing


OSMC 2023 | Experiments with OpenSearch and AI

Last year’s Open Source Monitoring Conference (OSMC) was a great experience. It was a pleasure to meet attendees from around the world and participate in interesting talks about the current and future state of the monitoring field.

Personally, this was my first time attending OSMC, and I was impressed by the organization, the diverse range of talks covering various aspects of monitoring, and the number of attendees that made this year’s event so special.

If you were unable to attend the congress, we are covering some of the talks presented by the numerous specialists.
This blog post is dedicated to this year’s Gold Sponsor Eliatra and their wonderful speakers Leanne Lacey-Byrne and Jochen Kressin.

Could we enhance accessibility to technology by utilising large language models?

This question may arise when considering the implementation of artificial intelligence in a search engine such as OpenSearch, which handles large data structures and complex operational middleware.

This idea can also be seen as the starting point for Eliatra’s experiments and their findings, which are the focus of this talk.

 

Working with OpenSearch Queries

OpenSearch deals with large amounts of data, so it is important to retrieve data efficiently and reproducibly.
To meet this need, OpenSearch provides a DSL which enables users to create advanced filters to define how data is retrieved.

In practice, such queries can become very long, which increases the complexity of working with them.

What if there were a way to generate such queries by simply providing the data schema to an LLM (large language model), together with a precise description of which data to query? This would greatly reduce the human workload and would definitely be less time-consuming.
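To give a sense of what such a hand-written query looks like, here is a minimal sketch of a filtered search in the query DSL, expressed as a Python dict. The index and field names are made up for illustration, not taken from the talk:

```python
# A minimal OpenSearch DSL query: find error-level log entries from the
# last hour for a given service (field names are illustrative only).
query = {
    "query": {
        "bool": {
            "filter": [
                {"term": {"service.name": "checkout"}},
                {"term": {"log.level": "error"}},
                {"range": {"@timestamp": {"gte": "now-1h"}}},
            ]
        }
    },
    "size": 20,
    "sort": [{"@timestamp": {"order": "desc"}}],
}

# With the official Python client this would be sent roughly like:
# from opensearchpy import OpenSearch
# client = OpenSearch(hosts=[{"host": "localhost", "port": 9200}])
# response = client.search(index="logs-*", body=query)
```

Even this small example shows how quickly the nesting grows; real-world queries with aggregations and multiple clauses become much harder to write by hand.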

 

Can ChatGPT be the Solution?

As a proof of concept, Leanne decided to test ChatGPT’s effectiveness in real-world scenarios, using ChatGPT together with Elasticsearch instead of OpenSearch, since more information about the former was available during ChatGPT’s training.

The data used for the tests were the Kibana sample data sets.

Leanne’s approach was to give the LLM a general data mapping, similar to the one returned by the API provided by Elasticsearch, and then ask it a humanised question about which data it should return. Keeping that in mind, this proof of concept will be considered a success if the answers returned consist of valid search queries with a low failure rate.
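This prompting approach can be sketched as a simple prompt builder: hand the model the index mapping plus a natural-language question and ask it to answer with a query. The mapping and question below are invented for illustration, not taken from the talk:

```python
# Sketch of the prompting approach: the index mapping and a humanised
# question go into one prompt, and the model is asked to reply with a
# query. Mapping and question are illustrative only.
import json

def build_prompt(mapping: dict, question: str) -> str:
    return (
        "You are given this Elasticsearch index mapping:\n"
        + json.dumps(mapping, indent=2)
        + "\n\nAnswer with a valid Elasticsearch query DSL (JSON only) for:\n"
        + question
    )

mapping = {
    "properties": {
        "customer_name": {"type": "keyword"},
        "order_total": {"type": "float"},
    }
}
prompt = build_prompt(mapping, "Which orders have a total above 100?")
```

The returned text would then be parsed as JSON and executed against the cluster; a response that fails to parse or is rejected by the search API counts as a failure for the proof of concept.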

 

Performance Analysis

Elasticsearch Queries generated by ChatGPT (Result Overview)

Source: slideshare.net (slide 14)

As we can see, the generated queries achieved only 33% overall correctness. Even that level was only reached by feeding the LLM a number of sample mappings together with queries written manually for them.

This accuracy could be improved further by providing more information about the mapping structures and by submitting a larger number of sample mappings and queries to the ChatGPT instance.
However, that would mean much more effort in compiling and providing the sample datasets, and prompts that deviate from the provided samples would still have a high chance of failure.

 

Vector Search: An Abstract Approach

Is there a better solution to this problem? Jochen presents another approach that falls under the category of semantic search.
Large language models can handle various inputs, and the type of input used can significantly impact the results produced by such a model.
With this in mind, we can transform our input information into vectors using transformers.
These transformers are trained models that process specific types of input, for example video, audio, or text.
They generate n-dimensional vectors that can be stored in a vector database.
Illustration of how vector transformers are used

Source: slideshare.net (slide 20)

When searching a vector-based database, one frequently used algorithm for generating result sets is the k-NN index
(k-nearest-neighbour index). This algorithm compares stored vectors for similarity and provides an approximation of their relevance to other vectors.
For instance, pictures of cats can be transformed into a vector database. The transformer translates the input into a numeric, vectorized format.
The vector database compares the transformed input to the stored vectors using the K-NN algorithm and returns the most fitting vectors for the input.
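The idea behind such a k-NN lookup can be sketched in a few lines of plain Python using cosine similarity. Real vector databases use approximate index structures (such as HNSW) to make this scale, but the principle is the same; the vectors and identifiers below are toy values:

```python
# Toy k-NN lookup over stored vectors using cosine similarity.
import math

def cosine(a, b):
    # Cosine similarity: dot product divided by the product of norms.
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    return dot / (na * nb)

def knn(query, stored, k=2):
    # stored: list of (id, vector); return the ids of the k most
    # similar vectors to the query vector.
    ranked = sorted(stored, key=lambda item: cosine(query, item[1]), reverse=True)
    return [item_id for item_id, _ in ranked[:k]]

stored = [
    ("cat_1", [0.9, 0.1, 0.0]),
    ("cat_2", [0.8, 0.2, 0.1]),
    ("car_1", [0.0, 0.1, 0.9]),
]
print(knn([1.0, 0.0, 0.0], stored, k=2))  # → ['cat_1', 'cat_2']
```

The query vector here is closest to the two “cat” embeddings, mirroring the example above: a transformed picture of a cat retrieves the stored cat vectors first.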

 

Are Vectors the Jack of all Trades?

There are some drawbacks to the aforementioned approach. Firstly, the quality of the output heavily depends on the suitability between the transformer and the inputs provided.
Additionally, this method requires significantly more processing power to perform these tasks, which in a dense and highly populated environment could be the bottleneck of such an approach.
It is also difficult to optimize and refine existing models when they only output abstract vectors and otherwise behave as black boxes.
What if we could combine the benefits of both approaches, using lexical and vectorized search?

 

Retrieval Augmented Generation (RAG)

Retrieval Augmented Generation (RAG) was first mentioned in a 2020 paper by Meta. The paper explains how LLMs can be combined with external data sources to improve search results.
This overcomes the problem of stagnating, frozen models that plain LLM approaches suffer from. Typically, a model is pre-trained with a specific set of data.
However, the information provided by this training data can quickly become obsolete and there may be a need to use a model that also incorporates current developments, the latest technology and currently available information.
Augmented generation involves executing a prompt against an information database, which can be of any type (such as the vector database used in the examples above).
The result set is combined with contextual information, for example the latest data available on the Internet or some other external source, like a flight plan database.
This combined set could then be used as a prompt for another large language model, which would produce the final result against the initial prompt.
In conclusion, multiple LLMs could be chained together, each playing to its own strengths and given access to current data sources, which in turn could generate more accurate and up-to-date answers to user prompts.
Overview of the RAG (Retrieval Augmented Generation)

Source: slideshare.net (slide 36)
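The RAG flow described above can be sketched with a toy retriever and a stand-in for the generator model. In a real system the retriever would query a vector database and the generator would be an actual LLM; all names and documents below are invented for illustration:

```python
# Toy RAG pipeline: retrieve context for a prompt, then combine prompt
# and context into the input for a generator model.
def retrieve(prompt: str, documents: list[str], k: int = 2) -> list[str]:
    # Toy retriever: rank documents by word overlap with the prompt.
    # (A real retriever would use vector similarity, as shown above.)
    words = set(prompt.lower().split())
    ranked = sorted(
        documents,
        key=lambda d: len(words & set(d.lower().split())),
        reverse=True,
    )
    return ranked[:k]

def generate(prompt: str, context: list[str]) -> str:
    # Stand-in for the second LLM call: a real system would send this
    # augmented prompt to a model; here we just build the prompt shape.
    return "Context:\n" + "\n".join(context) + "\n\nQuestion: " + prompt

docs = [
    "Flight LH123 departs Munich at 10:00.",
    "The cafeteria opens at 08:00.",
    "Flight LH456 departs Berlin at 14:00.",
]
question = "When does flight LH123 depart?"
answer_prompt = generate(question, retrieve(question, docs))
```

The flight-plan documents echo the external-source example from the talk: the retriever pulls in the current flight data, and the generator answers from that context rather than from stale training data.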

Noé Costa
Developer

Noé emigrated from Switzerland to Germany and has been supporting the Icinga team as a developer in the area of web development since October 2023. He contributes to the further development of Icinga’s web modules and is very interested in the field of monitoring and its future. Outside of work he likes to cook, spends time with his partner, expands his knowledge in various fields, and now and then plays computer games with acquaintances from all over the world.