Application monitoring in OpenShift 4.3

As enterprises embark on their container journey and onboard applications onto the OpenShift Container Platform, application monitoring becomes critical for anticipating problems and discovering bottlenecks in production. It is also one of the biggest challenges faced by almost all organizations that are migrating, or have already migrated, their workloads to OpenShift.

The growing adoption of microservices architecture makes monitoring more complex, since a large number of distributed applications communicate with one another. What used to be a function call within a monolithic application is now a network call from one microservice to another. Running multiple instances of these microservices as containers adds yet another layer of complexity.

Starting with OpenShift 4.3, you can use the platform’s monitoring capabilities for your application workloads running on OpenShift. This keeps application monitoring centralized: you don’t need to manage an additional monitoring solution, as the platform now provides these capabilities.

OpenShift 4.3 gives you the flexibility to extend these application metrics beyond cluster administrators. This means that an ordinary user or developer can set up metrics collection for their applications. See setting up metrics collection for more details.

Let’s take a look at how you can monitor your application in OpenShift 4.3 using the platform’s capabilities by following these five steps.

Prerequisites:

  1. An OpenShift 4.3 cluster is up and running
  2. You have cluster administrator privileges
  3. The oc client is installed

 

Step 1: Enable application monitoring in OpenShift 4.3
Log in as a cluster administrator.

Create the cluster-monitoring-config configmap if one doesn’t exist already. See configuring the monitoring stack for more details:

oc -n openshift-monitoring create configmap cluster-monitoring-config

Edit the configmap to add a config.yaml key and set the techPreviewUserWorkload setting to true:

oc -n openshift-monitoring edit configmap cluster-monitoring-config

 

This is how the configmap should look:

apiVersion: v1
kind: ConfigMap
metadata:
  name: cluster-monitoring-config
  namespace: openshift-monitoring
data:
  config.yaml: |
    techPreviewUserWorkload:
      enabled: true

 

Verify by checking whether the prometheus-user-workload pods are created and are in the Running state:

$ oc -n openshift-user-workload-monitoring get pod
NAME                                   READY   STATUS    RESTARTS   AGE
prometheus-operator-684fcd47b6-bdmpc   1/1     Running   0          144m
prometheus-user-workload-0             5/5     Running   1          144m
prometheus-user-workload-1             5/5     Running   1          144m

This confirms that OpenShift monitoring is now enabled to monitor application workloads.

 

Step 2: Deploy a Quarkus microservice with a MicroProfile metrics endpoint

In this example, I am going to use a Quarkus microservice to demonstrate the application monitoring capabilities of OpenShift 4.3. Let’s use a simple Quarkus microservice that exposes MicroProfile metrics on its /metrics endpoint. We will configure OpenShift monitoring to scrape this endpoint in the next steps. If you are interested in the application code, visit the GitHub repository.

Let’s create the OpenShift objects for the Quarkus application using the oc apply command. We will create the following objects:

  • ImageStream
  • BuildConfig
  • Deployment
  • Service
  • Route
oc apply -f https://raw.githubusercontent.com/nmalvankar/quarkus-quickstarts/master/microprofile-metrics-quickstart/.openshift/templates/quarkus-application.yaml
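For illustration, the Service object created by that template might look roughly like the following sketch. The labels, port name, and port number here are assumptions for illustration, not the actual template contents; see the file in the repository for the real definition:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: quarkus-quickstart          # assumed name
  namespace: quarkus
  labels:
    app: quarkus-quickstart         # label a ServiceMonitor can select on
spec:
  selector:
    app: quarkus-quickstart
  ports:
    - name: web                     # a named port lets a ServiceMonitor reference it
      port: 8080
      targetPort: 8080
```

The named port matters: ServiceMonitor endpoints reference Service ports by name rather than by number.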

Let’s start the Quarkus application build. This uses Source-to-Image (s2i) to build a Quarkus application image, quarkus-quickstart, which triggers a new deployment and creates an application pod.

oc start-build quarkus-quickstart

 

Verify that the application pod is up and running:

$ oc get pods -n quarkus
NAME                          READY   STATUS      RESTARTS   AGE
quarkus-quickstart-1-build    0/1     Completed   0          57m
quarkus-quickstart-1-cr7cq    1/1     Running     0          14m
quarkus-quickstart-1-deploy   0/1     Completed   0          15m

Once the application pod is up and running, you should be able to access the application metrics at /metrics. The URL should look like this: http://<hostname_of_the_route>/metrics
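The endpoint serves metrics in Prometheus exposition format. A response might look roughly like this (the values are illustrative, not actual output):

```
# TYPE base_cpu_availableProcessors gauge
base_cpu_availableProcessors 4.0
# TYPE vendor_cpu_processCpuTime_seconds gauge
vendor_cpu_processCpuTime_seconds 4.85
```

This is the same vendor_cpu_processCpuTime_seconds metric we will alert on in Step 4.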

 

Step 3: Set up a ServiceMonitor/PodMonitor so OpenShift Monitoring scrapes the application metrics

To use the metrics exposed by the Quarkus microservice, let’s configure OpenShift Monitoring to scrape the /metrics endpoint. This can be achieved with either a ServiceMonitor, a custom resource (defined by a CRD) that specifies how a service should be monitored, or a PodMonitor, which specifies how a pod should be monitored. The former requires a Service object; the latter does not, allowing Prometheus to scrape the metrics endpoint exposed by a pod directly.

In this case, let’s use a ServiceMonitor to monitor the Quarkus microservice:

oc apply -f https://raw.githubusercontent.com/nmalvankar/quarkus-quickstarts/master/microprofile-metrics-quickstart/.openshift/templates/quarkus-service-monitor.yaml
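The applied ServiceMonitor might look roughly like the following sketch. The selector label and port name are assumptions for illustration; refer to the file in the repository for the actual definition:

```yaml
apiVersion: monitoring.coreos.com/v1
kind: ServiceMonitor
metadata:
  name: prometheus-quarkus-monitor
  namespace: quarkus
spec:
  selector:
    matchLabels:
      app: quarkus-quickstart       # assumed label on the Service
  endpoints:
    - port: web                     # assumed named port on the Service
      path: /metrics
```

Prometheus discovers every Service matching the selector and scrapes the given path on the named port.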

Verify that the ServiceMonitor is created:

$ oc get ServiceMonitor -n quarkus
NAME                         AGE
prometheus-quarkus-monitor   5m

 

Step 4: Set up alerts for the Quarkus service

Now, let’s create an alerting rule that fires alerts based on the value of a service metric. To demonstrate a simple alert, let’s create a rule that fires when the value of the metric vendor_cpu_processCpuTime_seconds is greater than 8 seconds:

oc apply -f https://raw.githubusercontent.com/nmalvankar/quarkus-quickstarts/master/microprofile-metrics-quickstart/.openshift/templates/quarkus-alerting-rule.yaml
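The PrometheusRule might look roughly like the following sketch. The alert name, group name, and labels are assumptions for illustration; the metric and threshold match the rule described above. See the file in the repository for the actual definition:

```yaml
apiVersion: monitoring.coreos.com/v1
kind: PrometheusRule
metadata:
  name: quarkus-alert
  namespace: quarkus
spec:
  groups:
    - name: quarkus-quickstart.rules   # assumed group name
      rules:
        - alert: HighProcessCpuTime    # assumed alert name
          expr: vendor_cpu_processCpuTime_seconds > 8
          labels:
            severity: warning          # assumed
          annotations:
            message: Process CPU time for the Quarkus service exceeded 8 seconds.
```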

Verify that the PrometheusRule is created:

$ oc get PrometheusRule -n quarkus
NAME            AGE
quarkus-alert   9m14s

 

Step 5: Use OpenShift Monitoring to access the metrics of the Quarkus microservice

Log in to the OpenShift web console as a cluster administrator and verify that OpenShift Monitoring is able to scrape the application metrics, as shown in the screenshot below.

[Screenshot: OpenShift Monitoring metrics view]

 

Check the alerts using the Alertmanager UI. Verify that an alert is visible for the Quarkus application once the value of the metric vendor_cpu_processCpuTime_seconds exceeds 8 seconds. You can also modify the alerting rule to use any other metric.

[Screenshot: Alertmanager UI showing the Quarkus alert]

 

Note: Application monitoring is currently a Technology Preview feature in OpenShift 4.3 and is not recommended for production use.

 

In these five simple steps, you can monitor your application workloads on OpenShift 4.3 without installing any additional software. OpenShift 4.3 also allows you to expose custom application metrics for autoscaling, giving you much-needed flexibility to autoscale an application pod based on custom application metrics in addition to CPU and memory usage. OpenShift 4.3 provides many other exciting new features and enhancements; see the release notes for more details.

 

