From 538589b99b62cbd6250cca52ec5ec35b55071a3a Mon Sep 17 00:00:00 2001
From: Alexander Zobnin
Date: Sun, 22 Oct 2017 11:06:48 +0300
Subject: [PATCH] docs: update db connection docs

---
 .../img/installation-postgres_ds_config.png   |  3 ++
 .../sources/installation/configuration-sql.md | 14 +++++-
 docs/sources/installation/configuration.md    |  2 +-
 .../sources/reference/direct-db-connection.md | 48 +++++++++++++++----
 4 files changed, 56 insertions(+), 11 deletions(-)
 create mode 100644 docs/sources/img/installation-postgres_ds_config.png

diff --git a/docs/sources/img/installation-postgres_ds_config.png b/docs/sources/img/installation-postgres_ds_config.png
new file mode 100644
index 0000000..6c2f313
--- /dev/null
+++ b/docs/sources/img/installation-postgres_ds_config.png
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:d7dae33170bf09b353f886bc1a8cbc4307d80858c514199f08f1b9e01bc01e81
+size 260406
diff --git a/docs/sources/installation/configuration-sql.md b/docs/sources/installation/configuration-sql.md
index a1cf37a..68eab39 100644
--- a/docs/sources/installation/configuration-sql.md
+++ b/docs/sources/installation/configuration-sql.md
@@ -1,5 +1,7 @@
 # SQL Data Source Configuration
 
+## MySQL
+
 In order to use the _Direct DB Connection_ feature, you should configure a SQL data source first.
 
 ![Configure MySQL data source](../img/installation-mysql_ds_config.png)
@@ -7,7 +9,7 @@ In order to use _Direct DB Connection_ feature you should configure SQL data sou
 Select the _MySQL_ data source type and provide your database host address and port (3306 is the default for MySQL). Fill in the
 database name (usually `zabbix`) and specify credentials.
 
-## Security notes
+### Security notes
 
 As you can see in the _User Permission_ note, Grafana doesn't restrict queries to the database in any way, so you should be
 careful and create a special user with limited access to the Zabbix database.
 The Grafana-Zabbix plugin uses only `SELECT` queries to
Grafana-Zabbix plugin uses only `SELECT` queries to @@ -20,3 +22,13 @@ Also, all queries are invoked by grafana-server, so you can restrict connection ```sql GRANT SELECT ON zabbix.* TO 'grafana'@'grafana-host' identified by 'password'; ``` + +## PostgreSQL + +Select _PostgreSQL_ data source type and provide your database host address and port (5432 is default). Fill +database name (usually, `zabbix`) and specify credentials. + +![Configure PostgreSQL data source](../img/installation-postgres_ds_config.png) +### Security notes + +Make sure you use read-only user for Zabbix database. diff --git a/docs/sources/installation/configuration.md b/docs/sources/installation/configuration.md index 9d7135e..a9ba14a 100644 --- a/docs/sources/installation/configuration.md +++ b/docs/sources/installation/configuration.md @@ -65,7 +65,7 @@ Read [how to configure](/installation/configuration-sql) SQL data source in Graf #### Supported databases -Now only **MySQL** is supported by Grafana. +**MySQL** and **PostgreSQL** are supported by Grafana. ### Alerting diff --git a/docs/sources/reference/direct-db-connection.md b/docs/sources/reference/direct-db-connection.md index 0e2d941..5658f02 100644 --- a/docs/sources/reference/direct-db-connection.md +++ b/docs/sources/reference/direct-db-connection.md @@ -22,12 +22,26 @@ and `trend.get` API calls. 
 Below is an example query for getting history in the Grafana-Zabbix plugin:
 
+**MySQL**:
 ```sql
 SELECT itemid AS metric, clock AS time_sec, {aggFunc}(value) AS value
-  FROM {historyTable}
-  WHERE itemid IN ({itemids})
-    AND clock > {timeFrom} AND clock < {timeTill}
-  GROUP BY time_sec DIV {intervalSec}, metric
+FROM {historyTable}
+WHERE itemid IN ({itemids})
+  AND clock > {timeFrom} AND clock < {timeTill}
+GROUP BY time_sec DIV {intervalSec}, metric
+ORDER BY time_sec ASC
+```
+
+**PostgreSQL**:
+```sql
+SELECT to_char(itemid, 'FM99999999999999999999') AS metric,
+  clock / {intervalSec} * {intervalSec} AS time,
+  {aggFunc}(value) AS value
+FROM {historyTable}
+WHERE itemid IN ({itemids})
+  AND clock > {timeFrom} AND clock < {timeTill}
+GROUP BY 1, 2
+ORDER BY time ASC
 ```
 
 where `{aggFunc}` is one of the `[AVG, MIN, MAX, SUM, COUNT]` aggregation functions, `{historyTable}` is a history table,
@@ -36,18 +50,34 @@ where `{aggFunc}` is one of `[AVG, MIN, MAX, SUM, COUNT]` aggregation functions,
 When getting trends, the plugin additionally queries a particular value column (`value_avg`, `value_min` or
 `value_max`), which depends on the `consolidateBy` function value:
 
+**MySQL**:
 ```sql
 SELECT itemid AS metric, clock AS time_sec, {aggFunc}({valueColumn}) AS value
-  FROM {trendsTable}
-  WHERE itemid IN ({itemids})
-    AND clock > {timeFrom} AND clock < {timeTill}
-  GROUP BY time_sec DIV {intervalSec}, metric
+FROM {trendsTable}
+WHERE itemid IN ({itemids})
+  AND clock > {timeFrom} AND clock < {timeTill}
+GROUP BY time_sec DIV {intervalSec}, metric
+ORDER BY time_sec ASC
 ```
+
+**PostgreSQL**:
+```sql
+SELECT to_char(itemid, 'FM99999999999999999999') AS metric,
+  clock / {intervalSec} * {intervalSec} AS time,
+  {aggFunc}({valueColumn}) AS value
+FROM {trendsTable}
+WHERE itemid IN ({itemids})
+  AND clock > {timeFrom} AND clock < {timeTill}
+GROUP BY 1, 2
+ORDER BY time ASC
+```
+
+**Note**: these queries may change in the future, so check the sources for the actual query structure.
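+For illustration only, here is a hypothetical instance of the MySQL history query above with the placeholders filled
+in (`{aggFunc}` = `AVG`, `{historyTable}` = `history`, a 60-second interval; the item ids and timestamps are made-up
+values, not something the plugin generates):
+
+```sql
+SELECT itemid AS metric, clock AS time_sec, AVG(value) AS value
+FROM history
+WHERE itemid IN (23296, 23297)
+  AND clock > 1508613600 AND clock < 1508617200
+GROUP BY time_sec DIV 60, metric
+ORDER BY time_sec ASC
+```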
+
 As you can see, the Grafana-Zabbix plugin uses aggregation by a given time interval. This interval is provided by
 Grafana and depends on the panel width in pixels. Thus, Grafana displays the data in the proper resolution.
 
 ## Functions usage with Direct DB Connection
 
 There's only one function that directly affects the backend data. This function is `consolidateBy`. Other functions
 work on the client side and transform data that comes from the backend. So you should clearly understand that this is
 pre-aggregated data (by AVG, MAX, MIN, etc.).
-For example, say you want to group values by 1 hour interval and `max` function. If you just apply `groupBy(10m, max)` function, your result will be wrong, because you would transform data aggregated by default `AVG` function. You should use `consolidateBy(max)` coupled with `groupBy(10m, max)` in order to get a precise result.
\ No newline at end of file
+For example, say you want to group values into 1-hour intervals with the `max` function. If you just apply the `groupBy(1h, max)` function, your result will be wrong, because you would transform data aggregated by the default `AVG` function. You should use `consolidateBy(max)` coupled with `groupBy(1h, max)` in order to get a precise result.
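+
+To make the `consolidateBy` effect concrete, here is a sketch of the kind of trends query the backend would run for
+`consolidateBy(max)`, where `{valueColumn}` resolves to `value_max` (the table name, item id, and timestamps are
+illustrative, not plugin output):
+
+```sql
+SELECT itemid AS metric, clock AS time_sec, MAX(value_max) AS value
+FROM trends
+WHERE itemid IN (23296)
+  AND clock > 1508613600 AND clock < 1508617200
+GROUP BY time_sec DIV 3600, metric
+ORDER BY time_sec ASC
+```
+
+Any client-side function applied afterwards, such as `groupBy`, then operates on these per-interval maximums rather
+than on averages.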