
Results for "stackconf"

stackconf 2022 | DevOps or DevX – Lessons We Learned Shifting Left the Wrong Way

stackconf 2022 was a full success! On July 19 and 20, our conference took place in Berlin, and we very much enjoyed the event, which was held on site for the very first time! stackconf was all about open source infrastructure solutions in the spectrum of continuous integration, container, hybrid and cloud technologies. We're still excited about our expert speaker sessions. In the following, we give you a deeper insight into one of our talks.

We kicked off the lecture program with a talk by Hannah Foxwell on "DevOps or DevX – Lessons We Learned Shifting Left The Wrong Way". Here is what I learned from Hannah:

Once Upon A Time

DevOps is such a common term now that it has almost lost its original meaning. Once upon a time there were two teams, Devs and Ops, with different missions and goals – rapid development vs. a stable user experience. Changes were handed over just like that, and great effort was needed to get even the smallest features into production for the customer in a stable way. For sure: this needed to change!

Some people felt the problem was the Ops team, and so NoOps became a thing. This misconception came from thinking that the Ops team didn't care about users because it didn't want to release new features fast enough. As a result, more and more typical Ops tasks like backup, monitoring or cost management were handed over to developers. At a certain point, these additional tasks became too much for the dev team, and some developers were unhappy with them as well.

Focus on Team Health

According to a report by Haystack Analytics, 83% of all developers suffer from burnout, mostly triggered by the demands of having to learn and consider more and more technologies and areas.
This is where HumanOps comes back into play, putting the focus on the health of the team.

Just like the old way of splitting everything into silos, the NoOps approach was the wrong way to go. Instead, it's important to use mixed teams with a product-owner mentality for the different layers. Each team is responsible for delivering the best possible experience for its users.

Hannah also touched on how important the right level of site reliability is and how it can impact the team. With a 99% reliability target over 28 days, you have an error budget of roughly 400 minutes – enough time for manual intervention. The higher the reliability target, the smaller the budget and the more stress the team is under, until at some point only automated intervention can stay within the budget; no human can react fast enough.
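To make the numbers tangible, here is a minimal sketch in Go (assuming the same 28-day window Hannah used) that computes the error budget for a few reliability targets; the 99% case works out to the roughly 400 minutes mentioned above.

package main

import (
	"fmt"
	"time"
)

func main() {
	// Error budget over a 28-day window: the time the service may be
	// unavailable while still meeting the availability target.
	window := 28 * 24 * time.Hour

	for _, target := range []float64{0.99, 0.999, 0.9999} {
		budget := time.Duration(float64(window) * (1 - target))
		fmt.Printf("%.2f%% over 28 days -> %v of allowed downtime\n",
			target*100, budget.Round(time.Minute))
	}
}

Running it shows how quickly the budget shrinks: about 400 minutes at 99%, only about 40 minutes at 99.9%, and about 4 minutes at 99.99%.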

On Site Reliability

But you also have to ask whether the user actually needs this level of reliability. Many users don't even notice a short disruption, and if they do, some aren't even bothered by it – contrast this with the cost and effort of the measures required. Depending on the level of site reliability needed, the measures range from relying on user reports to active monitoring to automatic rollbacks.

You also have to decide how to allocate this downtime at each level – the closer you are to the physical hardware, the lower the downtime needs to be.
Site reliability, however, should not be a single team's responsibility; this is where all teams need to work together.

Finally, Hannah explained the security aspects that need to be considered with software. Vulnerabilities like Log4Shell can be avoided with the right security mindset. An open culture is important here, one in which you can also discuss and criticize your own concept.

When creating the security concept, you should also consider the people who implement the measures as well as how to automate them. Some security aspects should also not be handled by individual teams alone, but across teams. This way you can avoid shifting everything too far left onto the dev team without falling back into isolated silos, as long as you keep a user-centric focus and pay attention to the people in the process.

That was just a short summary of Hannah Foxwell’s talk at stackconf 2022. You can watch her full talk on our YouTube Channel.
I’m already looking forward to the talks at the next stackconf and the opportunity to share thoughts and experiences with a wide variety of cool people there.

Take a look at our conference website to learn more about stackconf, check out the archives and register for our newsletter to stay tuned!

Michael Kübler
Systems Engineer

Michael worked in the hospitality industry for many years before completing his retraining as an IT specialist at NETWAYS in 2022. Since then, he has been supporting our customers with their projects as a MyEngineer and is always on the lookout for smaller projects to realize on the side. In his free time, he enjoys camping and cycling. He also enjoys a relaxed evening at home with a book and a whisky.

stackconf 2022 | Network Service Mesh

Wow, it was an outstanding stackconf 2022! Many thanks to all the organizers, speakers and participants who made it possible throughout three days full of Open Source Infrastructure Love. I am truly blessed to have been a part of this amazing event!

For all of you who would like to catch up on the specialist lectures of this special event, all the talks are available on YouTube. My blog post today is all about Ricardo Castro and his talk "Network Service Mesh". Due to personal circumstances, he was unfortunately prevented from being present on site and sharing his expertise with us in person. Nevertheless, he was able to record his talk in advance and send it to us. So I am going to recap his talk for you in this blog and give some insights into my findings.

Why Network Service Mesh?

Microservice architectures build applications as a collection of loosely coupled services. In this type of architecture, the goal is for the services to be fine-grained. The main objective for teams is that the services are independent of other services. By building loosely coupled services, many kinds of dependencies and their associated complexity are eliminated. Reliably managing network connectivity in distributed systems, however, brings its own set of challenges. Things like service discovery, load balancing, fault tolerance, metrics collection, and security are some examples of issues that arise with distributed systems.

In short, how do you facilitate workload collaboration to create an application on the fly that communicates regardless of where those workloads are running? Ricardo is going to answer this and plenty of other questions for us below, so prick up your ears!

Traditional Service Meshes

A service mesh is the connective tissue between all of your services. It adds capabilities such as traffic control, service discovery, load balancing, resilience, observability, security, etc. A service mesh allows applications to offload these capabilities from application-level libraries. A service mesh typically consists of two components: the data plane and the control plane. The data plane is made up of the business services alongside the proxies, which are responsible for all the traffic. The control plane comprises a set of services providing the administrative functions necessary to control the service mesh. There are many service mesh implementations – such as Linkerd or Kuma – that can help you in many ways.

Service mesh connectivity works well within a runtime domain, but workloads often need to interoperate with other services outside the runtime domain to provide full functionality. But what are runtime and connectivity domains?

A runtime domain is essentially a compute domain that runs workloads. Traditionally, there is exactly one connectivity domain per runtime domain, because connectivity domains are coupled to runtime domains. This means that only workloads within the runtime domain can be part of the connectivity domain. To truly take advantage of this approach, microservice components must be as independent as possible. Application service meshes work well within the connectivity domain and target Layer 7 protocols such as HTTPS. They can help cloud-native workloads achieve loose coupling through functionality such as service discovery and fault tolerance. Ricardo explored some of the use cases where traditional service meshes fall short and where Network Service Mesh can mitigate these challenges.

Where Do L7 Service Meshes Fall Short?

The first example is the Multi/Hybrid Cloud scenario. Here you have multiple distinct and independent cloud clusters that can be public, private or on-premises. The workloads in each of these clusters must be able to communicate with each other. In short, you have a number of workloads running in different connectivity domains that need to be connected together.

Another example is the Multi-Corp/Extra-Net scenario. In this scenario, workloads from different clusters also need to interconnect independent of the connectivity domains they are running in. The main difference between this and the hybrid cloud scenario is that you now have different domains of administrative control. This means that different organizations are managing their own runtime and connectivity domains. The workloads in each of these domains have to work with each other in some fashion.

Service mesh balkanization is another example of how traditional Layer 7 service meshes fall short. Traditional service meshes work well within the same Kubernetes cluster, but when local service meshes need to collaborate, they are typically connected via gateways – usually static Layer 7 routes that are very difficult to scale and maintain. Traditional service meshes also cannot guarantee reliable networking in such a scenario, because they assume there is a reliable Layer 3 underneath them, which is usually the case within the same cluster but cannot be guaranteed beyond it. Ricardo went on to examine how Network Service Mesh tackles these challenges.

Network Service Mesh (NSM)

Before we dig in further, here's an excellent description, as Ricardo gave it, of what Network Service Mesh actually is.

Network Service Mesh is a hybrid/multi-cloud IP service mesh that enables Layer 3 (L3) zero trust, pure network service connectivity, security and observability. It works with your existing K8s Container Network Interface (CNI), provides per-workload granularity, and requires no changes to K8s and no changes to your workloads. Network Service Mesh tackles all of the above-mentioned challenges by enabling individual workloads to connect securely, regardless of where they are running. It provides connectivity between clusters without requiring intervention in your runtime domain. Before we dive into the key concepts of Network Service Mesh, here are some use cases where NSM can be very useful. Basically, these are the same examples where traditional service meshes fall short:

  • A common Layer 3 domain that allows Databases (DB) running in multiple clusters, clouds, etc. to communicate with each other, such as DB replication.
  • A single Layer 7 service mesh such as Kuma, Linkerd, etc., that connects workloads running in multiple clusters, clouds, etc.
  • A single workload connecting to multiple Layer 7 service meshes.
  • Workloads from multiple organizations connecting to a single collaborative service mesh that is accessible across organizations.

NSM Key Concepts

In Network Service Mesh, a network service is a collection of functions that are applied to the network traffic. These functions provide connectivity, security and observability. A network service client is a workload that requests a connection to a network service by specifying the name of that network service. Clients are independently authenticated by their SPIFFE ID and must be authorized to connect to the requested network service. In addition to the network service name, a client may declare a set of key-value pairs called labels. These labels can be used by the network service to select the proper endpoint, or by the endpoint itself to influence the way it provides service to the client.

A network service endpoint is the entry point to the network service for the client. The network service is identified by its name and carries a payload. Network services are registered in the network service registry. In short, an endpoint can be a Pod running on the same or a different K8s cluster, an aspect of the physical network, or anything else to which packets can be delivered for processing. A client and an endpoint are connected by a virtual wire (vWire). A vWire acts as a virtual connectivity link between a client and an endpoint. A client may also request the same network service multiple times, so multiple vWires can lead to the requested endpoint.
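To make these concepts a bit more tangible, here is a small illustrative sketch in Go. The types, names and the selection logic are invented for this example and are not the real Network Service Mesh API; they only model the idea that a client requests a service by name, carries labels, and gets a vWire to a matching endpoint.

package main

import "fmt"

// Hypothetical, simplified types to illustrate the NSM concepts from the talk.

// An endpoint implements a network service and advertises destination labels.
type Endpoint struct {
	Name   string
	Labels map[string]string
}

// A network service is requested by name and backed by one or more endpoints.
type NetworkService struct {
	Name      string
	Endpoints []Endpoint
}

// A client asks for a network service by name and may attach labels
// that influence which endpoint is selected.
type Client struct {
	ID     string // e.g. a SPIFFE ID used for authentication
	Labels map[string]string
}

// selectEndpoint picks the first endpoint whose labels match all of the
// client's labels (a stand-in for the real selection logic).
func selectEndpoint(ns NetworkService, c Client) *Endpoint {
	for i, ep := range ns.Endpoints {
		match := true
		for k, v := range c.Labels {
			if ep.Labels[k] != v {
				match = false
				break
			}
		}
		if match {
			return &ns.Endpoints[i]
		}
	}
	return nil
}

func main() {
	ns := NetworkService{
		Name: "secure-intranet",
		Endpoints: []Endpoint{
			{Name: "gateway-eu", Labels: map[string]string{"region": "eu"}},
			{Name: "gateway-us", Labels: map[string]string{"region": "us"}},
		},
	}
	client := Client{ID: "spiffe://example.org/workload-a", Labels: map[string]string{"region": "eu"}}

	if ep := selectEndpoint(ns, client); ep != nil {
		// In NSM, a vWire would now be established between client and endpoint.
		fmt.Printf("client %s gets a vWire to endpoint %s\n", client.ID, ep.Name)
	}
}

In the real mesh this selection is driven by the registry and its match rules, which are described in the next sections.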

Network Service Mesh API

The Network Service Mesh API includes the Request, Close and Monitor functionality. A client can send a Request gRPC call to the NSM to establish a vWire between itself and the network service, and a Close gRPC call to tear that vWire down again. A vWire between a client and a network service always has a limited expiration time, so the client usually sends a new Request to refresh its vWire. A client can also send a Monitor gRPC call to the NSM to obtain information about the status of a vWire it holds to a network service. If a vWire exceeds its expiration time without being refreshed, the NSM cleans that vWire up.
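This Request / refresh / Close lifecycle can be sketched roughly like this; the types and method names below are again invented for illustration and do not reflect the actual NSM gRPC API, they only show the pattern of an expiring vWire that is kept alive by repeated requests.

package main

import (
	"fmt"
	"time"
)

// VWire models a connection with a limited lifetime.
type VWire struct {
	NetworkService string
	ExpiresAt      time.Time
}

// NSM stands in for the mesh the client talks to.
type NSM struct{}

// Request establishes (or refreshes) a vWire with a limited time to live.
func (NSM) Request(service string, ttl time.Duration) VWire {
	return VWire{NetworkService: service, ExpiresAt: time.Now().Add(ttl)}
}

// Close tears the vWire down explicitly.
func (NSM) Close(v VWire) {
	fmt.Printf("vWire to %s closed\n", v.NetworkService)
}

func main() {
	var nsm NSM
	ttl := 100 * time.Millisecond

	vw := nsm.Request("secure-intranet", ttl)

	// The client re-sends Request before the expiration time to keep the
	// vWire alive; if it stops doing so, the NSM cleans the vWire up.
	// (A Monitor call would let the client observe the vWire's state.)
	for i := 0; i < 3; i++ {
		time.Sleep(ttl / 2)
		vw = nsm.Request(vw.NetworkService, ttl)
		fmt.Println("vWire refreshed, new expiry:", vw.ExpiresAt.Format(time.RFC3339))
	}

	nsm.Close(vw)
}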

Network Service Mesh Registry

Like any other mesh, Network Service Mesh has a registry in which network services and network service endpoints are registered. A network service endpoint provides one or more network services: it registers a list of network services by name, together with the destination labels it advertises for each network service. Optionally, a network service can specify a list of matches that map the source labels of the requesting client to the destination labels advertised by the endpoints when they register. You can find a detailed description in the documentation or on YouTube, where Ricardo explains it very well in person.

Well, how do you put it all together?

At a high level, a network service endpoint registers one or more network services with the NSM, by name and with the destination labels it advertises for each network service. A client can send a request to the network service to establish vWire connectivity between the client and the network service. It can also close the connection, and since the vWire has an expiration time, a client can monitor its vWire state. The interesting thing about NSM is that it allows workloads to request services regardless of where they are running. It is up to the NSM to create the necessary vWires and enable communication between the workloads in a secure manner.

Conclusion

In general, Ricardo's talk covers the issues that Network Service Mesh addresses and its key concepts, but there is much more to talk about, as he told us. He recommends looking at floating interdomain scenarios, where a client requests a network service from any domain in the network service registry, regardless of where it is running. It can also be valuable to take a look at some of the advanced features, as he encouraged us to do. The Network Service Mesh match process, which selects candidate endpoints for specific network services, can be used to implement a variety of advanced features such as Composition, Selective Composition, Topologically Aware Endpoint Selection, and Topologically Aware Scale from Zero. Another important note Ricardo shared with us is that Network Service Mesh has been a CNCF project since 2019 and is currently at the Sandbox maturity level. If you don't know what the CNCF is, it's definitely worth taking a look at the referenced documentation.

It's a very interesting and informative topic and I learned a lot from Ricardo. I had never dealt with it before, but after watching this great talk it is much easier to follow. I hope I was able to reflect what Ricardo was talking about throughout the text. As I mentioned at the beginning, the recorded video of this lecture and all the other stackconf talks are available on YouTube! Enjoy!

Take a look at our conference website to learn more about stackconf, check out the archives and register for our newsletter to stay tuned!

Yonas Habteab
Developer

Yonas successfully completed his training as an IT specialist for application development at NETWAYS in 2022. He enjoys constantly expanding his programming knowledge and is passionate about his work on Icinga. When he's not programming, he likes to play football with his friends, stroll around town, or enjoy quiet evenings in front of the TV.

stackconf 2022 | A big "Thank You" to our Sponsors & Partners!

This year's Open Source Infrastructure Conference was an absolutely successful onsite event, and today we take the opportunity to thank all of our contributors for their great support!


Sponsors & Partners

We're happy to shout out a big thank you to all of our sponsors and media partners!

It was a great pleasure to have you on board and we hope you have enjoyed your sponsorship benefits!


Take a glance back!

For all of you who either couldn't join stackconf 2022 or want to revisit the lectures as a follow-up, we provide all speaker talks including slides and videos, as well as lots of awesome photographs, in our archives. So take the chance and go for a walk down memory lane. Enjoy lots of interesting and inspiring talks around open source infrastructure solutions. There's something for everyone!


Stay tuned!

Are you already excited about taking part in stackconf 2023? Well, we have good news for you! The conference will take place in September next year – so stay tuned!

Katja Kotschenreuther
Marketing Manager

Katja has been part of the marketing team since October 2020. As Online Marketing Manager she mainly takes care of promoting our conferences and trainings, in addition to optimizing our websites and social media campaigns. In her free time, she is always on the lookout for new geocaches, loves to travel the world, cuddles every baby animal that crosses her path, and regularly visits her Lower Bavarian hometown of Passau.

stackconf 2022 | Archives now available!

It’s a wrap…

…and it was a blast! stackconf 2022 was the very first onsite edition of the event and a full success. We're happy about the large number of speakers and participants who made stackconf an unforgettable conference in Berlin this summer.


Review stackconf in…

…videos

We have good news for you! Our archives of stackconf 2022 are now online! We provide all speaker presentations including videos and slides. Whether you’re interested in topics like Kubernetes, DevOps, Cloud Solutions or Observability, there’s something for everyone! So take a glance back and watch the videos at your leisure.


…photos

Of course there’s a top selection of this year’s photographs. You should definitely check them out. Who knows, maybe you’re in a picture.


Stay tuned!

The Open Source Infrastructure Conference will take place again next year. The final date is not yet set, but you will be informed as soon as it is fixed. So stay tuned!
If you don't want to miss any updates or news on stackconf, subscribe to the stackconf newsletter. We will keep you posted!

It has been a great pleasure!

Katja Kotschenreuther
Marketing Manager

Katja has been part of the marketing team since October 2020. As Online Marketing Manager she mainly takes care of promoting our conferences and trainings, in addition to optimizing our websites and social media campaigns. In her free time, she is always on the lookout for new geocaches, loves to travel the world, cuddles every baby animal that crosses her path, and regularly visits her Lower Bavarian hometown of Passau.

stackconf 2022 | Behind the Scenes – Part 2/2

It was a blast! stackconf 2022 was a full success and we're more than happy about this outstanding onsite event last week. Let's take a glance back and see what was going on behind the scenes during our two-day conference.


Our Crew

Teamwork makes the dream work! And it couldn't have been better. The marketing ladies – Jessica and I – managed the social media accounts during the conference to keep people up to date with what was going on at stackconf.

Lukas and Markus are our main organizers. They always had an eye on everything: the check-in counter, the well-being of the participants, and the smooth program flow. Thanks for your amazing work before and during the conference!


If you would like to take a look at what was happening, you're invited to follow our Twitter and Instagram accounts.


In the Mood for Food

As we all know, you will never go hungry at an event organized by NETWAYS. In between the speaker talks, everything took place in the break room, where attendees took the opportunity to engage with each other. Donuts, muffins, cake, and puff pastry sweetened the already excellent atmosphere. I already miss the food…


Roll it!

Justin was one of our camera operators at this year's stackconf. It was his first NETWAYS conference and he got a trusted task right away: recording our speaker talks. Thanks to him and our other hardworking camera operators Yonas, Sukwinder and Martin, we have lots of video material for our archives. Stay tuned for it.


Cuddle Time!

Let's face it! Which of us wouldn't like to have such a giant plush penguin? During the speaker talks I took the opportunity to grab the fluffy penguin. He belonged to one of our sponsors, B1 Systems, and beautified our networking room.

PS: I already miss him…


We ♡ Berlin

Besides lots of fun, exchange and inspiration at the conference itself, stackconf also provides the opportunity to take a walk in Germany’s capital city Berlin. Sightseeing is simply part of the experience when you’re already in Berlin. No matter if during the day or in the evening, Berlin is just always magical!


Thanks to everyone who made stackconf an unforgettable event this summer! We’re happy about lots of positive feedback from our attendees and speakers.

In a nutshell: it was a blast!

Katja Kotschenreuther
Marketing Manager

Katja has been part of the marketing team since October 2020. As Online Marketing Manager she mainly takes care of promoting our conferences and trainings, in addition to optimizing our websites and social media campaigns. In her free time, she is always on the lookout for new geocaches, loves to travel the world, cuddles every baby animal that crosses her path, and regularly visits her Lower Bavarian hometown of Passau.