Integration of Prometheus with Cortex

Anshul Kichara - Jul 3 - Dev Community

In a previous post, we talked about how Prometheus has become a go-to option for teams that want to implement metrics-based monitoring and alerting. Prometheus is quite easy to implement and manage, but once you have a large infrastructure to monitor, or your infrastructure has started to grow, you need to scale the monitoring solution as well.

A few days back we were in a similar situation: one of our clients' infrastructure was growing, and they needed a resilient, scalable, and reliable monitoring system. Since they were already using Prometheus, we explored our options and came across an interesting project called "Cortex".

What is Cortex?

As we discussed in our previous blog, Prometheus has some scalability limitations. Cortex is a project originally created by Weaveworks to overcome those limitations. You can think of it as a souped-up version of Prometheus with a lot of additional features:-

Horizontal Scaling- Cortex follows a microservices model, which means it can be deployed across multiple clusters, and multiple Prometheus servers can send data to a single Cortex endpoint. This model enables global aggregation of metrics.
Highly Available- Each Cortex component can be scaled individually, which provides high availability across the services.
Multi Tenant- If multiple Prometheus servers are sending data to Cortex, it keeps each tenant's data isolated from the others (see the example after this list).
Long Term Storage- This is one of the key features that comes natively with Cortex. It supports multiple storage backends to retain data for long-term analytics, for example S3, GCS, MinIO, Cassandra, and Bigtable.
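To make the multi-tenancy point concrete, Cortex identifies tenants by the X-Scope-OrgID HTTP header on both writes and reads. As a rough sketch, a query scoped to a hypothetical tenant called team-a could look like the command below; the hostname is a placeholder for however your Cortex query endpoint is exposed, and the path can vary by version:

$ curl -H 'X-Scope-OrgID: team-a' 'http://cortex-nginx.cortex.svc.cluster.local/prometheus/api/v1/query?query=up'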
If we talk about the architecture of Cortex, it is built from a set of single-purpose microservices (distributor, ingester, querier, store gateway, and so on); the architecture diagram is available in the original post.

Installation

Cortex can be easily installed using the Helm package manager in Kubernetes. We will use the standard Helm chart created by the Cortex team, but before that we have to install Consul inside the cluster as the key-value store (Cortex uses it for its hash ring).

$ helm repo add hashicorp https://helm.releases.hashicorp.com
$ helm search repo hashicorp/consul
$ helm install consul hashicorp/consul --set global.name=consul --namespace cortex

Verify that the Consul pods are running using kubectl.
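For example, since we installed Consul into the cortex namespace above, a quick check could be:

$ kubectl get pods --namespace cortex

The consul-server pods should show up in the Running state.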
Now that we have the datastore in place, we need to configure the storage gateway to connect to a remote storage backend. We evaluated multiple storage solutions and decided to go ahead with an S3 bucket in AWS. A few points on how we decided that S3 was the right fit:-

We were already using AWS for a few services.
Our Kubernetes cluster was running inside our local datacenter, and Prometheus was configured at the same location; we already had an existing bridge to AWS via Direct Connect, so network bandwidth was no longer a concern.
So we customized the default values file of Cortex according to our use case; you can find the values file here. A trimmed sketch of the relevant sections is shown below.
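As a rough illustration (not our exact file), the parts of a Cortex values file that point the ring at Consul and the blocks storage at S3 could look like this; the bucket name, region, and Consul hostname are placeholders, and the exact keys can vary between Cortex and chart versions:

config:
  auth_enabled: false                      # single-tenant setup; enable for multi-tenancy
  ingester:
    lifecycler:
      ring:
        kvstore:
          store: consul                    # use the Consul we installed above as the ring KV store
          consul:
            host: consul-server.cortex.svc.cluster.local:8500
  blocks_storage:
    backend: s3
    s3:
      bucket_name: example-cortex-metrics  # placeholder bucket name
      endpoint: s3.us-east-1.amazonaws.com
      region: us-east-1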

$ helm repo add cortex-helm https://cortexproject.github.io/cortex-helm-chart
$ helm install cortex --namespace cortex -f my-cortex-values.yaml cortex-helm/cortex

With this, we are pretty much done with the Cortex setup, and now it's time to configure Prometheus to connect with Cortex.
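On the Prometheus side, the connection is made through the remote_write feature. A minimal sketch is shown below; the service hostname, namespace, and tenant ID are placeholders (the chart usually exposes the push endpoint through its bundled nginx or the distributor), so adjust the URL to your deployment:

remote_write:
  - url: http://cortex-nginx.cortex.svc.cluster.local/api/v1/push   # Cortex push endpoint; hostname is a placeholder
    headers:
      X-Scope-OrgID: team-a                                         # tenant ID, needed when auth_enabled is true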

You can check out more info in the full post: Integration of Prometheus with Cortex.
