Prometheus
Host metrics are metrics collected from the operating system of the host where your applications run, such as CPU, memory, disk, and network usage. Understanding host metrics is crucial because it helps you identify potential problems or bottlenecks that could affect the overall performance of your applications.
In this tutorial, we will show you how to collect host metrics, send them to GreptimeDB, and visualize them.
Create Service
To experience the full power of GreptimeCloud, you need to create a service, which contains a database with authentication. Open the GreptimeCloud console, sign up, and log in. Then click the New Service button and configure the following:
- Service Name: A name that describes your service.
- Description: Additional information about your service.
- Region: Select the region where the database is located.
- Plan: Select the pricing plan you want to use.
Now create the service, and we are ready to write some metrics to it.
Write data
Prerequisites
This tutorial requires Docker and Docker Compose to run the example containers.
Example
We will use node exporter to monitor the host system and send metrics to GreptimeDB via Prometheus.
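Node exporter serves its metrics as plain text in the Prometheus exposition format, which Prometheus scrapes on each interval. As an illustration (the sample lines and values below are made up, and real label values may contain spaces that this simple split does not handle), here is roughly how those lines parse into samples:

```python
# A made-up snippet of node exporter output in the exposition format.
SAMPLE = """\
# HELP node_cpu_seconds_total Seconds the CPUs spent in each mode.
# TYPE node_cpu_seconds_total counter
node_cpu_seconds_total{cpu="0",mode="idle"} 12345.67
node_memory_MemAvailable_bytes 8.123e+09
"""

def parse_metrics(text):
    """Return {metric_with_labels: float_value}, skipping comment lines."""
    samples = {}
    for line in text.splitlines():
        line = line.strip()
        if not line or line.startswith("#"):
            continue  # HELP/TYPE lines are metadata, not samples
        name, _, value = line.rpartition(" ")
        samples[name] = float(value)
    return samples

print(parse_metrics(SAMPLE))
```

Prometheus attaches the `job` and `instance` labels during the scrape, then forwards the samples to GreptimeDB through `remote_write`.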
To begin, create a new directory named `quick-start-prometheus` to host our project. Then create a Docker Compose file named `compose.yml` and add the following:
```yaml
services:
  prometheus:
    image: prom/prometheus:latest
    container_name: prometheus
    depends_on:
      - node_exporter
    ports:
      - 9090:9090
    volumes:
      - ./prometheus-greptimedb.yml:/etc/prometheus/prometheus.yml:ro
  node_exporter:
    image: quay.io/prometheus/node-exporter:latest
    container_name: node_exporter
    ports:
      - 9100:9100
    command:
      - '--path.rootfs=/'
```
The configuration file above will start a Prometheus server and a node exporter. Next, create a new file named `prometheus-greptimedb.yml` and add the following:
```yaml
# my global config
global:
  scrape_interval: 15s     # Set the scrape interval to every 15 seconds. Default is every 1 minute.
  evaluation_interval: 15s # Evaluate rules every 15 seconds. The default is every 1 minute.
  # scrape_timeout is set to the global default (10s).

# A scrape configuration containing exactly one endpoint to scrape:
# here it's the node exporter.
scrape_configs:
  # The job name is added as a label `job=<job_name>` to any timeseries scraped from this config.
  - job_name: 'node'
    static_configs:
      - targets: ['node_exporter:9100']

remote_write:
  - url: https://<host>/v1/prometheus/write?db=<dbname>
    basic_auth:
      username: <username>
      password: <password>
```
The configuration file above configures Prometheus to scrape metrics from the node exporter and send them to GreptimeDB. For the values of `<host>`, `<dbname>`, `<username>`, and `<password>`, please refer to the Prometheus documentation in GreptimeDB or GreptimeCloud.
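To see what the `remote_write` settings amount to on the wire, here is a small sketch of how the write URL and the `basic_auth` header are assembled. The host, database name, and credentials below are hypothetical placeholders; substitute the real values from your GreptimeCloud service page.

```python
import base64
from urllib.parse import urlencode

# Hypothetical values -- replace with the ones shown on your service page.
HOST = "example.greptime.cloud"
DBNAME = "public"
USERNAME = "alice"
PASSWORD = "secret"

def remote_write_url(host, dbname):
    """Build the remote-write endpoint URL, matching the config's url field."""
    return f"https://{host}/v1/prometheus/write?" + urlencode({"db": dbname})

def basic_auth_header(username, password):
    """basic_auth in the config becomes this standard HTTP Authorization header."""
    token = base64.b64encode(f"{username}:{password}".encode()).decode()
    return f"Basic {token}"

print(remote_write_url(HOST, DBNAME))
print(basic_auth_header(USERNAME, PASSWORD))
```

Prometheus performs this encoding itself; the sketch only shows what ends up in each request so you can debug authentication failures with a plain HTTP client.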
Finally, start the containers:

```shell
docker-compose up
```

The connection information can be found on the service page of the GreptimeCloud console.
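Once data is flowing, you can verify it with an instant query. GreptimeDB exposes a Prometheus-compatible read API; the `/v1/prometheus/api/v1/query` path used below is an assumption based on the remote-write path above, so confirm it against the GreptimeDB documentation for your service. The host is a hypothetical placeholder.

```python
from urllib.parse import urlencode

def instant_query_url(host, dbname, promql):
    """Build an instant-query URL against the Prometheus-compatible read API.

    The /v1/prometheus/api/v1/query path is an assumption; check the
    GreptimeDB docs for the endpoint your service exposes.
    """
    qs = urlencode({"db": dbname, "query": promql})
    return f"https://{host}/v1/prometheus/api/v1/query?{qs}"

# Hypothetical host -- use the one from your service page. Fetching this URL
# with your basic-auth credentials should return the scraped node metrics.
print(instant_query_url("example.greptime.cloud", "public", "up"))
```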
Visualize Data
Visualizing data in panels and monitoring metrics is an important part of a developer's daily work. From the GreptimeCloud console, click Open Prometheus Workbench, then click + New Ruleset and Add Group. Name the group `host-monitor` and add panels.
To add panels for all the tables you're interested in, select a table and click Add Panel, one table at a time. Once you've added all the necessary panels, click the Save button. You can then view the panels in your daily work to monitor the metrics. Additionally, you can set up alert rules for the panels to be notified when a metric exceeds its threshold.