Using PostgreSQL, Prometheus & Grafana for Storing, Analyzing and Visualizing Metrics


Using PostgreSQL, Prometheus & Grafana for Storing, Analyzing and Visualizing Metrics
Erik Nordström, PhD
Core Database Engineer
hello@timescale.com · github.com/timescale

Why PostgreSQL?
- Reliable and familiar (ACID, tooling)
- SQL: powerful query language
- JOINs: combine time-series with other data (see the sketch below)
- Simplify your stack: avoid data silos
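As a quick illustration of the JOIN point above (a sketch only: it uses the conditions hypertable created later in the talk plus a hypothetical devices metadata table that is not part of the deck):

-- Hypothetical relational metadata table
CREATE TABLE devices (device text PRIMARY KEY, location text, owner text);

-- Join time-series readings with relational metadata in one query
SELECT c.device, d.location, avg(c.temp) AS avg_temp
FROM conditions c
JOIN devices d ON d.device = c.device
WHERE c.time > NOW() - interval '1 day'
GROUP BY c.device, d.location;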

TimescaleDB: PostgreSQL for time-series data

Common Complaints
- Hard or impossible to scale
- Need to define schema
- SQL too complex or has poor support for querying time-series
- Vacuuming on DELETE
- No Grafana support

TimescaleDB + Prometheus
- Scales for time-series workloads
- Automatic schema creation
- Advanced analytics with full SQL support and time-oriented features
- No vacuuming: drop old data with drop_chunks() instead of DELETE (see the retention sketch below)
- Grafana support via Prometheus or PostgreSQL data sources (since v4.6)
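A minimal retention sketch for the drop_chunks() point above, assuming the conditions hypertable used later in the talk; the exact argument order of drop_chunks() has changed between TimescaleDB versions, so check the docs for the version you run:

-- Drop every chunk of 'conditions' that holds only data older than 30 days.
-- Dropping whole chunk tables avoids the vacuum cost of row-by-row DELETEs.
SELECT drop_chunks(interval '30 days', 'conditions');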

How it works

Collecting Metrics with Prometheus
Prometheus → remote storage API → TimescaleDB / PostgreSQL adapter → pg_prometheus


Time-space partitioning (for both scaling up & out)
[Diagram: chunks (sub-tables) laid out along a time axis (older →) and a space dimension]

The Hypertable Abstraction
- Hypertable: triggers, constraints, indexes, UPSERTs, table mgmt
- Chunks (underlying sub-tables)
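As a small sketch of what the abstraction buys you (assuming the conditions hypertable created in the "Easy to Get Started" slide below): DDL is issued once against the hypertable, and TimescaleDB applies it to every chunk.

-- One statement on the hypertable; the index is created on each existing and future chunk
CREATE INDEX ON conditions (device, time DESC);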

Automatic Space-time Partitioning

Easy to Get Started

CREATE TABLE conditions (
  time      timestamptz,
  temp      float,
  humidity  float,
  device    text
);

SELECT create_hypertable('conditions', 'time', 'device', 4,
  chunk_time_interval => interval '1 week');

INSERT INTO conditions
  VALUES ('2017-10-03 10:23:54+01', 73.4, 40.7, 'sensor3');

SELECT * FROM conditions;

          time          | temp | humidity | device
------------------------+------+----------+---------
 2017-10-03 11:23:54+02 | 73.4 |     40.7 | sensor3
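To show the time-oriented features mentioned earlier, here is a short aggregation over the same table using TimescaleDB's time_bucket() (a sketch, not from the slides):

-- Average temperature per device in 15-minute buckets over the last day
SELECT time_bucket('15 minutes', time) AS bucket,
       device,
       avg(temp) AS avg_temp
FROM conditions
WHERE time > NOW() - interval '1 day'
GROUP BY bucket, device
ORDER BY bucket;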

Repartitioning is Simple

-- Set new chunk time interval
SELECT set_chunk_time_interval('conditions', interval '24 hours');

-- Set new number of space partitions
SELECT set_number_partitions('conditions', 6);

PG10 requires a lot of manual work

CREATE TABLE conditions (
  time      timestamptz,
  temp      float,
  humidity  float,
  device    text
) PARTITION BY RANGE (device);

CREATE TABLE conditions_p1 PARTITION OF conditions
  FOR VALUES FROM (MINVALUE) TO ('g')
  PARTITION BY RANGE (time);
CREATE TABLE conditions_p2 PARTITION OF conditions
  FOR VALUES FROM ('g') TO ('n')
  PARTITION BY RANGE (time);
CREATE TABLE conditions_p3 PARTITION OF conditions
  FOR VALUES FROM ('n') TO ('t')
  PARTITION BY RANGE (time);
CREATE TABLE conditions_p4 PARTITION OF conditions
  FOR VALUES FROM ('t') TO (MAXVALUE)
  PARTITION BY RANGE (time);

-- Create time partitions for the first week in each device partition
CREATE TABLE conditions_p1_y2017m10w01 PARTITION OF conditions_p1
  FOR VALUES FROM ('2017-10-01') TO ('2017-10-07');
CREATE TABLE conditions_p2_y2017m10w01 PARTITION OF conditions_p2
  FOR VALUES FROM ('2017-10-01') TO ('2017-10-07');
CREATE TABLE conditions_p3_y2017m10w01 PARTITION OF conditions_p3
  FOR VALUES FROM ('2017-10-01') TO ('2017-10-07');
CREATE TABLE conditions_p4_y2017m10w01 PARTITION OF conditions_p4
  FOR VALUES FROM ('2017-10-01') TO ('2017-10-07');

-- Create time-device index on each leaf partition
CREATE INDEX ON conditions_p1_y2017m10w01 (time);
CREATE INDEX ON conditions_p2_y2017m10w01 (time);
CREATE INDEX ON conditions_p3_y2017m10w01 (time);
CREATE INDEX ON conditions_p4_y2017m10w01 (time);

INSERT INTO conditions
  VALUES ('2017-10-03 10:23:54+01', 73.4, 40.7, 'sensor3');

[Chart: PostgreSQL INSERT performance, ~144K metrics/s (14.4K inserts/s)]
Postgres 9.6.2 on Azure standard DS4 v2 (8 cores), SSD (premium LRS storage).
Each row has 12 columns (1 timestamp, 1 indexed host ID, 10 metrics).

[Chart: TimescaleDB INSERT performance, ~1.11M metrics/s, roughly 20x PostgreSQL]
Postgres 9.6.2 on Azure standard DS4 v2 (8 cores), SSD (premium LRS storage).
Each row has 12 columns (1 timestamp, 1 indexed host ID, 10 metrics).

[Chart: TimescaleDB vs. PG10, insert performance as # partitions increases (batch size 1 row)]

[Chart from the "timescaledb-vs-…" blog post (slug ending 6a696248104e): speedups vs. vanilla PostgreSQL, including simple column queries ~10,000x and DELETEs ~2,000x]

How data is stored

pg_prometheus: Prometheus Data Model in PostgreSQL

New data type prom_sample: time, name, value, labels

CREATE TABLE metrics (sample prom_sample);

INSERT INTO metrics
  VALUES ('cpu_usage{service="nginx",host="machine1"} 34.6 1494595898000');

Scrape metrics with curl:

curl http://myservice/metrics | grep -v "^#" | psql -c "COPY metrics FROM STDIN"

Querying Raw Samples

SELECT * FROM metrics;

sample
------------------------------------------------------------------
cpu_usage{service="nginx",host="machine1"} 34.600000 1494595898000

SELECT prom_time(sample) AS time,
       prom_name(sample) AS name,
       prom_value(sample) AS value,
       prom_labels(sample) AS labels
FROM metrics;

          time          |   name    | value |                  labels
------------------------+-----------+-------+------------------------------------------
 2017-05-12 15:31:38+02 | cpu_usage |  34.6 | {"host": "machine1", "service": "nginx"}

Normalized Data Storage

SELECT create_prometheus_table('metrics');

- Normalizes data: values table + labels table (jsonb)
- Sets up proper indexes
- Convenience view for inserts and querying; columns: sample, time, name, value, labels
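Roughly, the normalized layout looks like the sketch below; table and column names here are illustrative rather than the exact DDL that create_prometheus_table() generates:

-- Illustrative sketch of the normalized schema
CREATE TABLE metrics_labels (
    id          serial PRIMARY KEY,
    metric_name text   NOT NULL,
    labels      jsonb  NOT NULL,
    UNIQUE (metric_name, labels)
);

CREATE TABLE metrics_values (
    time      timestamptz NOT NULL,
    value     float8      NOT NULL,
    labels_id integer     REFERENCES metrics_labels (id)
);

-- A 'metrics' view joins the two so that inserts and queries of prom_sample still work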

Easily Query the View

SELECT sample
FROM metrics
WHERE time > NOW() - interval '10 min' AND
      name = 'cpu_usage' AND
      labels @> '{"service": "nginx"}';
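Because the view exposes time, name, value and labels as ordinary columns, plain SQL aggregation works as well; a sketch (time_bucket() assumes the values table is a TimescaleDB hypertable):

-- Per-host average cpu_usage in 1-minute buckets over the last hour
SELECT time_bucket('1 minute', time) AS bucket,
       labels->>'host' AS host,
       avg(value) AS avg_cpu
FROM metrics
WHERE name = 'cpu_usage'
  AND time > NOW() - interval '1 hour'
GROUP BY bucket, host
ORDER BY bucket, host;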

slack-login.timescale.com
Open-source: /timescale/pg_prometheus

