
Integration of Prometheus with Cortex

Integrating Prometheus with Cortex is a robust solution for scalable, long-term metrics storage and querying. Cortex enhances Prometheus by enabling high-availability setups and horizontal scalability. Here's a step-by-step guide to help you with the integration:

As we discussed previously, Prometheus has become a go-to option for teams that want to implement metrics-based monitoring and alerting. Installing and managing Prometheus is quite easy, but once you have a large infrastructure to monitor, or your infrastructure has started to grow, you need to scale the monitoring solution as well.

A while back we were in a similar situation: one of our clients' infrastructure was growing, and they needed a resilient, scalable, and reliable monitoring system. Since they were already using Prometheus, we explored our options and came across an interesting project called "Cortex".


What is Cortex?

As we discussed in our previous blog, Prometheus has some scalability limitations. Cortex is a project originally created by Weaveworks to overcome those limitations. You can think of it as a beefed-up Prometheus with a lot of additional features, such as:

  • Horizontal Scaling: Cortex follows a microservices model, which means it can be deployed across multiple clusters, and multiple Prometheus servers can send data to a single Cortex endpoint. This model enables global aggregation of metrics.
  • High Availability: each Cortex component can be scaled and replicated individually, which provides high availability across the services.
  • Multi-Tenancy: when multiple Prometheus servers send data to Cortex, it provides a layer of isolation between each tenant's data.
  • Long-Term Storage: one of the key features of Cortex, and it comes natively built in. Cortex supports multiple storage backends to keep data for long-term analytics purposes, for example: S3, GCS, MinIO, Cassandra, and Bigtable.

Architecturally, Cortex is composed of several independently deployable microservices (distributor, ingester, querier, store-gateway, and so on), each of which can be scaled on its own.


Installation

Cortex can be installed easily using the Helm package manager on Kubernetes. We will use the standard Helm chart created by the Cortex team, but first we have to install Consul inside the cluster as the key-value store.

 $ helm repo add hashicorp https://helm.releases.hashicorp.com
 $ helm search repo hashicorp/consul
 $ helm install consul hashicorp/consul --set global.name=consul --namespace cortex

Verify the Consul pods using kubectl.
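For example, assuming the HashiCorp chart's default `app=consul` label and the `cortex` namespace used above:

```shell
# List the Consul pods deployed by the Helm chart and wait until
# the server pods report Running / Ready.
kubectl get pods --namespace cortex -l app=consul
```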

Now that we have the datastore in place, we need to configure storage so Cortex can connect to a remote storage backend. We evaluated multiple storage solutions and decided to go ahead with an S3 bucket in AWS. A few points on how we decided that S3 was the right fit:

  • We were already using AWS for a few services.
  • Our Kubernetes cluster was running inside a local datacenter, and Prometheus was also configured at the same location, so we already had a built-in bridge using AWS Direct Connect. Network bandwidth was therefore no longer a concern.

We customized the default values file of the Cortex chart according to our use case; you can find the values file here
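As a rough illustration, an S3-backed customization might look like the excerpt below. The bucket name, endpoint, and region are placeholders, and the exact key names depend on the cortex-helm chart and Cortex version, so treat this as a sketch rather than a drop-in config:

```yaml
# Hypothetical excerpt from my-cortex-values.yaml -- verify key names
# against the chart's default values for your version.
config:
  blocks_storage:
    backend: s3
    s3:
      bucket_name: my-cortex-metrics            # placeholder bucket
      endpoint: s3.us-east-1.amazonaws.com      # placeholder endpoint
      region: us-east-1                         # placeholder region
```

Credentials are best injected via a Kubernetes secret or an IAM role rather than hard-coded in the values file.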

$ helm repo add cortex-helm https://cortexproject.github.io/cortex-helm-chart
$ helm install cortex --namespace cortex -f my-cortex-values.yaml cortex-helm/cortex

With that, we are pretty much done with the Cortex setup, and now it's time to configure Prometheus to connect to Cortex.
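Prometheus ships samples to Cortex through its remote_write API, which Cortex receives on its push endpoint. A minimal example, assuming `cortex.example.com` is a placeholder hostname where the Cortex distributor (or its NGINX frontend) is exposed:

```yaml
# prometheus.yml (excerpt) -- hostname and tenant ID are placeholders
remote_write:
  - url: http://cortex.example.com/api/v1/push
    headers:
      X-Scope-OrgID: tenant-1   # identifies this Prometheus as a tenant in Cortex
```

The `X-Scope-OrgID` header is what gives you the multi-tenancy described earlier: each Prometheus server writing with a different tenant ID gets its data kept isolated inside Cortex.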

