
The Walrus Has Landed – Structured Logging in Go

Walrus pup at Kamogawa Seaworld, Japan. Photo by Max Smith.


Logs let us understand which combination of events over time led to a specific state; we can analyze them further to monitor trends, or have them print enough information to narrow a problem down to a single line of code. They are the written history of the IT world. Did I lay it on too thick? Probably. But the bottom line is: logs can be very useful. Before you can read logs, however, someone has to write the log messages, assign severities and decide which information is worth printing and which is not.
Structured logging is an attempt to make logging easier for the programmer, more readable for the user and better formatted for log processors. The concept has been around for a while, but it only ‘got cool’ recently with the advent of large-scale log processors like Elasticsearch or Graylog. Let’s go by example. The following are two log messages, the first being good old lines of text and the second structured logging:

[2018-07-31T16:25:07+02:00] - Information/Api: New connection ("GET") from "192.1.1.8" to endpoint "/v1/get-the-cookie" with token "aa76dfdf1023dfa567".
time="2018-07-31T16:25:07+02:00" level=Info msg="New connection" context=Api request_type=GET remote=192.1.1.8 endpoint=/v1/get-the-cookie authtoken="aa76dfdf1023dfa567"

It’s easy to see how the structured message is easier to comprehend for a log processor. And while in this case you can use grep on both the same way to get a quick overview, the key=value scheme of the second line will always keep that advantage, while plain string logs tend to get convoluted – we have all had to chain multiple instances of `grep` and `grep -v` to get rid of false positives at some point, no?
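With structured output, such a filter stays a single readable pipe. For example (the log file name here is just a placeholder), pulling all warnings from the Api context:

$ grep 'context=Api' app.log | grep 'level=Warn'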
Now let’s look at the code, since I promised it would be easier for developers as well. In this case I’ll be using logrus, a structured logging library for Go. Here is how these two lines could look in code:

log.Information("Api", "New connection (\"%s\") from \"%s\" to endpoint \"%s\" with token \"%s\"", conn.requestType, conn.remote, conn.path, conn.token)

Simple enough. Timestamps are automatic, the severity is part of the function call and the context is one of the parameters. Now the same with logrus:

log.WithFields(log.Fields{
	"context":      "Api",
	"request_type": conn.requestType,
	"remote":       conn.remote,
	"endpoint":     conn.path,
	"authtoken":    conn.token,
}).Info("New connection")

This looks like a lot more to write, and less simple at that! Was I lying? Of course not. The big advantage of logrus is temporary loggers with default fields:

apiLogger := log.WithFields(log.Fields{
	"context":      "Api",
	"request_type": conn.requestType,
	"remote":       conn.remote,
	"endpoint":     conn.path,
	"authtoken":    conn.token,
})
apiLogger.Info("New connection")

Any future logging within the Api context can use this `apiLogger`, letting us shape the log messages with exactly the information we need, without too much copy and paste or having to think about the type of object we want to log:

apiLogger.WithField("submitted_data", conn.data).Debug("Submitted Data")
apiLogger.WithFields(log.Fields{
	"error":    "Too much data",
	"error_id": 5,
}).Warn("Connectivity issue")
apiLogger.Info("Connection terminated")
…which outputs:
time="2018-07-31T16:25:07+02:00" level=Info msg="New connection" context=Api request_type=POST remote=192.1.1.8 endpoint=/v1/give-me-cookies authtoken=aa76dfdf1023dfa567
time="2018-07-31T16:25:09+02:00" level=Debug msg="Submitted Data" context=Api request_type=POST remote=192.1.1.8 endpoint=/v1/give-me-cookies authtoken=aa76dfdf1023dfa567 submitted_data="{some json blob}"
time="2018-07-31T16:25:17+02:00" level=Warn msg="Connectivity issue" context=Api request_type=POST remote=192.1.1.8 endpoint=/v1/give-me-cookies authtoken=aa76dfdf1023dfa567 error="Too much data" error_id=5
time="2018-07-31T16:25:24+02:00" level=Info msg="Connection terminated" context=Api request_type=POST remote=192.1.1.8 endpoint=/v1/give-me-cookies authtoken=aa76dfdf1023dfa567
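By the way, for the log processors mentioned at the beginning – Elasticsearch or Graylog – logrus can switch the whole output to JSON with a single formatter call. A minimal, self-contained sketch (not taken from our actual code; the field values are stand-ins):

package main

import (
	log "github.com/sirupsen/logrus"
)

func main() {
	// One line turns the key=value text output into JSON.
	log.SetFormatter(&log.JSONFormatter{})
	// Make sure Debug messages like "Submitted Data" above are not filtered out.
	log.SetLevel(log.DebugLevel)

	apiLogger := log.WithFields(log.Fields{
		"context": "Api",
		"remote":  "192.1.1.8",
	})
	apiLogger.Info("New connection")
	// {"context":"Api","level":"info","msg":"New connection","remote":"192.1.1.8","time":"..."}
}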

It’s easy to see how writing these log messages the usual way would be quite tiresome and prone to dropping information, be it through oversight or misjudged importance. Stay tuned for future updates on our journey into the world of Golang, and hopefully we’ll see you at OSMC.

Continuous Integration with Golang and GitLab

These days you don’t just develop code and collaborate with other developers in your Git branches and forks. On top of that, continuous integration provides “instant” compile and runtime tests: which Git commit caused a failure, and are there any performance changes in the recent code history?
Today I want to dive into our latest insights into Golang and into building our code inside GitLab on each push and merge request. Golang has extensive support for unit tests and for analyzing them with coverage tests. This ensures that code isolated behind test interfaces runs rock-solid, so you can focus on the more important tasks and features.
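As a quick, made-up example of the kind of test the toolchain picks up: a file “sum_test.go” exercising a hypothetical Add(a, b int) int function in the same package.

package sum

import "testing"

// TestAdd exercises the (hypothetical) Add function
// with a small table of cases.
func TestAdd(t *testing.T) {
	cases := []struct{ a, b, want int }{
		{1, 2, 3},
		{-1, 1, 0},
	}
	for _, c := range cases {
		if got := Add(c.a, c.b); got != c.want {
			t.Errorf("Add(%d, %d) = %d, want %d", c.a, c.b, got, c.want)
		}
	}
}

“go test ./...” finds and runs such tests automatically, and the “-coverprofile” flag used below records which statements the cases actually touched.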
In order to abstract the Go build and install calls, we add a Makefile for convenience. This also allows developers to use it manually if they want.

$ vim Makefile
.PHONY: all test coverage
all: get build install
get:
	go get ./...
build:
	go build ./...
install:
	go install ./...
test:
	go test ./... -v -coverprofile .coverage.txt
	go tool cover -func .coverage.txt
coverage: test
	go tool cover -html=.coverage.txt

The “test” target runs the tests and stores the coverage results in “.coverage.txt”. The “coverage” target then converts this into HTML, which can be opened in a browser, for example.
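Run manually, the output contains the “total” line that we’ll later have GitLab parse (package path and percentages here are purely illustrative):

$ make coverage
go test ./... -v -coverprofile .coverage.txt
...
PASS
go tool cover -func .coverage.txt
git.icinga.com/icingadb/icingadb/sum.go:4:	Add		100.0%
total:			(statements)	100.0%
go tool cover -html=.coverage.txt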
Inside GitLab we need a CI runner configured for our project and job pipeline. This is already available in the NWS app. If you want to learn more about GitLab CI/CD, containers, jobs and pipelines, I invite you to join me for the official GitLab training sessions 🙂
The CI configuration file is stored in “.gitlab-ci.yml” inside the Git repository and uses the Golang image for Debian Stretch in this example.

$ vim .gitlab-ci.yml
image: golang:1.10-stretch
stages:
  - build
before_script:
  - apt-get update && apt-get install -y make curl
  - curl https://raw.githubusercontent.com/golang/dep/master/install.sh | sh
  - mkdir -p /go/src/git.icinga.com/icingadb
  - ln -s /builds/icingadb/icingadb /go/src/git.icinga.com/icingadb/icingadb
  - cd /go/src/git.icinga.com/icingadb/icingadb
  - dep init && dep ensure
build:
  stage: build
  script:
    - make test

The “before_script” tasks can also be moved into your own Docker image, which can be made available in the GitLab container registry too. At some later point, when the dependencies are fixed, we’ll remove the “dep init” step and just use the locked versions from inside our Git repository.
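Such an image could start from a Dockerfile along these lines – a sketch, not our actual image:

# Bake the "before_script" dependencies into a reusable image.
FROM golang:1.10-stretch
RUN apt-get update && apt-get install -y make curl \
    && curl https://raw.githubusercontent.com/golang/dep/master/install.sh | sh

The “.gitlab-ci.yml” would then simply reference this image, and the “before_script” shrinks to the project-specific steps.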
The test coverage should also be highlighted in the job output. It can be extracted with a regular expression applied to the shell output of the tests. Navigate to “Settings > CI/CD > General pipeline settings > Test coverage parsing” and add the following regex:

^total:\t+\(statements\)\t+(\d+\.\d+)%

Each pushed branch and merge request will then automatically be built and will also show the test coverage. There’s still room for improvement here – over the next months we’ll make sure to add more unit and runtime tests 🙂