
Tag: logging

How to create a Logging Dashboard with Kibana

In this tutorial, I’ll show you how to create a dashboard for your application’s structured logs in Kibana.

This is a follow-up to this article, which covers how to instrument your Go application with structured logging for use by Kibana.

We’ll use Kibana v7.6, but any recent version should work. Your UI might just look a bit different, so you may have to adjust some steps.

Let’s jump straight in! We’ll stick to simple panels, which suit most of the use cases you’ll need.

The Starting Point

First, make sure you have docker and docker-compose installed.

For docker, follow the installation instructions for your platform here.
For docker-compose, follow the installation instructions here.

Afterwards, download this exercise’s repository on the kibana-dashboard-tutorial branch:

git clone --branch kibana-dashboard-tutorial https://github.com/preslavmihaylov/tutorials

What you already have is a Go application that randomly produces logs for the endpoints /payments/execute, /payments/list and /payments/authhold.

Starting from here, you will build a dashboard for effectively monitoring & analysing incoming traffic.
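To make this concrete, here is the shape of a structured log line such an application might emit. The field values below are made up for illustration; the field names are the ones we will work with in Kibana:

```json
{
  "@timestamp": "2020-04-01T10:00:00Z",
  "endpoint": "/payments/execute",
  "method": "POST",
  "countryISO2": "BG",
  "userID": "user-42",
  "paymentMethod": "paypal",
  "userType": "business",
  "error": "",
  "msg": "payment executed"
}
```

Because each piece of domain information lives in its own field, Kibana can filter and aggregate on it directly instead of grep-ing freestyle text.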

In the following section, I will walk you through how to achieve this step by step.

The Walkthrough

Boot up all containers

Invoke this command to bring up all containers:

docker-compose up

For all the following steps, make sure you never execute docker-compose down.

That command not only stops all containers but deletes them (along with their state) as well. If you need to stop the containers temporarily, use docker-compose stop instead.

If you run into issues booting up Elasticsearch, here are references to some common problems & how to handle them:

elasticsearch exit code 78
elasticsearch exit code 137

Setup your application’s baseline saved search

Go to http://localhost:5601/app/kibana#/home and open the Discover tab.

First, expand the menu in the bottom-left so the tab names are visible, as the icons alone are unfamiliar at first:

the expand button

Then, find the Discover tab in the top-left:

Click discover in the Kibana dashboard

Set the index pattern if you haven’t already:

Create index pattern

In the next step, choose @timestamp as the time filter field:

Configure index pattern settings

Open the Discover tab again. This is what the initial view should look like:

The discover tab

Currently, Kibana shows all log lines from our application without any field filtering. We can exclude all the fields we don’t need:

Exclude unneeded fields

Next, select only the fields which make sense for our application.

Those are endpoint, method, countryISO2, userID, paymentMethod, userType, error and msg.

For each of them, click Add to add them to the selected fields in the visualised log:

Filtering the fields you need

In the end, this is what your Selected fields should be:

The final selected fields

If you did these steps properly, this is an example of what your logs should look like:

Discover tab after filtering

We will use this saved search as a baseline view for our dashboard. 

Click Save at the top-left to later reuse this view for our dashboard:

Saving your current view

This is one of the most useful views as it allows one to inspect the details of the events which happen in your application.

Setting up your initial Kibana dashboard

Open the Dashboard tab to create your first dashboard:

Your initial Kibana dashboard

Follow the instructions on-screen:

Creating your first dashboard message

After this, you should see an empty dashboard which doesn’t show anything:

Initial view of the empty Kibana dashboard

Click the Add button at the top-left to add a new visualisation:

Add a new element

You should see the name of the saved search you created in the previous step. Choose that:

Select a panel to add to

Next, resize it as you see fit and you should see the first panel in your dashboard:

Adding the saved search

This view is great for inspecting the details of all requests & using it for additional filtering on a given property. 

However, we will need to add some more visualisations to be able to see aggregations of the data in our dashboard.

Save your new dashboard with a descriptive name and proceed to the next section:

Saving the Kibana dashboard

Add a data table to monitor requests per endpoint

In this step, we will create a data table which will show how many requests we get per endpoint.

Click the Edit button at the top-left to edit the contents of the dashboard and select Add to create a new view. 

In the panel which pops up, create a new visualisation from the bottom-left:

Create a new visualization

Next, create a new data table:

Picking a visualization template

On the next screen, select your saved search’s name as the data source to use.

Afterwards, you should see a screen which simply shows the total count of requests in a table. This is because the table currently aggregates all requests & shows the total count for them.

We will aggregate the data in the table by the endpoint key. This will show the count of requests per endpoint.

To do this, add a new Bucket which splits rows:

Split rows setting

As the aggregation, use Terms. It simply groups the data by the unique values of the key we choose:

Aggregating by Terms

Aggregate by the endpoint key and use the following configuration:

The aggregation settings

The Group other values in separate bucket option collects all data that doesn’t fit within the table’s row limit (we set it to 5) into a general “Other” bucket.

For example, if we had 7 different endpoints and five of them received 80% of the traffic, the remaining 20% of the values would be aggregated into a row called “Other”.
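Under the hood, a data table like this corresponds to an Elasticsearch terms aggregation. A rough sketch of the query Kibana issues is shown below; the index pattern and the endpoint.keyword field name are assumptions based on Elasticsearch’s default keyword sub-field mapping:

```json
{
  "size": 0,
  "aggs": {
    "calls_by_endpoint": {
      "terms": {
        "field": "endpoint.keyword",
        "size": 5
      }
    }
  }
}
```

The response contains one bucket per endpoint with its document count, plus a sum_other_doc_count value, which is what feeds the “Other” row.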

Now, click the Play button at the top of the window to see how your table looks. You should see the count of requests per endpoint:

Initial view after aggregation

This view now looks quite useful. With it, we can track how much traffic we get per endpoint. Click the Save button at the top-left, give the new view a title (e.g. Calls by Endpoints) and rearrange your dashboard to look aesthetically pleasing:

Adding the new visualization

Let’s now do the same for the rest of our significant fields.

Add a data table for all significant fields

In this step, add a new table, following the same steps as the previous one, for method, countryISO2, userType and paymentMethod.

There is no need to include a view for userID, as that field’s cardinality is typically far too high to make for a sensible dashboard.

However, you can usually get that information by applying custom filters on some of the log lines in the saved search view.

After you go through this, here is what the dashboard should look like:

Adding all data tables

Now, you can see aggregations for all significant data points you are interested in. 

If you want to further inspect, e.g., which payment methods are used when calling the /payments/execute endpoint, you can “zoom in” by filtering on that field. 

Try it out and explore for yourself:

Filtering by a given field
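You can also zoom in via the search bar at the top using KQL (Kibana Query Language). A few example queries, with hypothetical field values:

```
endpoint : "/payments/execute" and paymentMethod : "paypal"
method : "POST" and not countryISO2 : "US"
error : *
```

The last query uses the `field : *` form, which matches documents where the field exists, so it is a quick way to surface only the log lines that carry an error.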

Here is a challenge for you: zoom in on the errors and identify which endpoint has the most of them.

The scope of this exercise is creating the dashboard, but don’t worry. You will get to play with it in the following exercise.

Our dashboard already looks quite informative. 

But there is one final touch it needs.

Add a histogram for tracking success/error rate

In this step, we will diverge from the good old data table and add a histogram to track the success-to-error ratio over time.

The rest of the views in the dashboard are typically used to find the root cause of an issue. 

The view we’ll create now will enable us to quickly identify if there is an issue.

Create a new visualisation, but this time, select a Vertical Bar:

Adding a histogram

In the beginning, you will simply see a single bar with the total count of requests in the given time period. 

For starters, add an X-Axis, which will create time intervals for each of the bars in the histogram. This will let us see the trend of events over time.

Choosing the X-Axis

This time, the selected aggregation is Date histogram on the @timestamp field:

X-Axis settings
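For reference, the Date histogram bucket corresponds to an Elasticsearch date_histogram aggregation, roughly like the sketch below. The 30s interval is an assumption; Kibana normally picks the interval automatically based on the selected time range:

```json
{
  "size": 0,
  "aggs": {
    "events_over_time": {
      "date_histogram": {
        "field": "@timestamp",
        "fixed_interval": "30s"
      }
    }
  }
}
```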

Click the Play button and see how the view changes to reflect the count of events over time:

Initial histogram

This is cool, but it would be more useful if we could actually see how many of those requests are successes and how many are errors.

To do this, let’s customize the existing Y-Axis to only aggregate successful requests.

First, change the Y-Axis aggregation from Count to Sum bucket.

Then, use Filters as the Bucket aggregation. This will allow us to use KQL (Kibana Query Language) to make any custom filter we like.

Add a filter which selects all messages that are not empty strings:

Setting up the Y-Axis

Next, add a custom label “Success” for this Y-Axis at the bottom of the panel.

Click the Play button to see the results. You should now see the number of successful requests every 30s. The default color, however, is a bit off, so change it at the top-right to green:

Choosing a histogram color

Now, it looks a lot better:

Result after changing color

We aren’t done yet. 

It’s now time to stack the errors on this Y-Axis as well.

Collapse the current Y-Axis settings and add a new one:

Adding the Y-Axis metric

The setup for this Y-Axis is simpler. 

Choose Sum bucket as the aggregation again and Terms as the bucket aggregation. Choose errors.keyword as the field to use. 

Keep the rest of the settings as-is.

Y-Axis metric setup

Add a custom label “Errors”, change the color to red and here’s what you get:

Success to error ratio

This is the success-to-error ratio, aggregated per 30s.

This view will now enable you to see the % of errors across all endpoints at a single glance.

You could also zoom in on the data again and see the % of errors per endpoint/payment method/userType/etc.

Save this view, add it to your dashboard, arrange it nicely at the top and gaze at your beautiful application dashboard:

Final dashboard

Finale

Congratulations. 👏👏👏

You’ve successfully completed the tutorial. 

You should now have decent practice using Kibana to create basic dashboards for your application.

The final step of this exercise is to add such a dashboard for your real production application, buy a 146-inch monitor, attach it to the wall in your office, open your app’s dashboard, enter full-screen and enable Auto-refresh.

After you do this, all your colleagues from the other teams will stare at it enviously.

But unless you know how to effectively use Kibana for debugging real production issues, that’s all the value you’ll get from it – a shiny dashboard to show off in the office.

In the following tutorial, you’ll learn how to capitalize on this dashboard and use it to quickly discover the root cause of production issues.

How to Use Structured Logs in your Go Application

The Elastic stack (also referred to as ELK) can bring a lot of value to your production services. But it brings far less value if you don’t use structured logs in your services.

In one of my latest posts, I wrote about what ELK is and why you should care. I also wrote a tutorial about how to integrate ELK with your Go app.

In this article, I will walk you through how to integrate structured logging in your Go services. We will use a sample HTTP service with a few basic endpoints and we’ll use the zap library to emit logs on error/success, which would also include some domain-specific info.

Continue reading

Getting The Most Out of Your Logs with ELK

When you start developing your application, you typically instrument it with some logging to be able to debug problems later.

Some skip it in the development phase, but once the application hits production, then it is crucial to have some logging.

After all, once users complain that something isn’t working, how would you be able to find the root-cause?

And although logging proves to be useful, many companies don’t really capitalise on its potential as they’re still clinging to the classic way of writing freestyle logs and grep-ing them on their prod machines afterwards.

However, there is so much more potential that logging holds for monitoring our production systems. In this article, I will show you how to get the maximum value from your logs using the ELK stack.

Continue reading

© 2020 Preslav Mihaylov

