Update direct-db-connection.md
Please look over my changes before merging. I want to make sure I tweaked the wording to read a little better, but haven't changed the actual meaning of the sentences. Thanks!

# Direct DB Connection

Since version 4.3, Grafana can use MySQL as a native data source. The Grafana-Zabbix plugin can use this data source for querying data directly from a Zabbix database.

One of the most resource-intensive queries for the Zabbix API is the history query. For long time intervals, `history.get` returns a huge amount of data. In order to display it, the plugin should adjust the time series resolution by using [consolidateBy](/reference/functions/#consolidateby). Ultimately, Grafana displays this reduced time series, but that data should be loaded and processed on the client side first. Direct DB Connection solves these two problems by moving consolidation to the server side. Thus, the client gets a ready-to-use dataset which is much smaller. The data loads faster and the client doesn't spend time processing it.

Also, many users see better performance from direct database queries versus API calls. This could be the result of several factors, such as the additional PHP layer and additional SQL queries (user permission checks).

## Data Flow

This chart illustrates how the plugin uses both the Zabbix API and the MySQL data source for querying different types of data from Zabbix. The MySQL data source is used only for pulling history and trend data instead of `history.get` and `trend.get` API calls.

## Query structure

Below is an example query for getting history in the Grafana-Zabbix plugin:

```sql
SELECT itemid AS metric, clock AS time_sec, {aggFunc}(value) as value
FROM {historyTable}
-- WHERE clause (item ids and time range) omitted in this snippet
GROUP BY time_sec DIV {intervalSec}, metric
```

where `{aggFunc}` is one of the `[AVG, MIN, MAX, SUM, COUNT]` aggregation functions, `{historyTable}` is a history table, and `{intervalSec}` is the consolidation interval in seconds.
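
For illustration, a rendered query might look like the following sketch. The concrete values here (the `history` table, the item ids, the time range and the 300-second interval) are hypothetical examples, not output captured from the plugin:

```sql
-- Hypothetical example: 5-minute averages for two items over a fixed time range
SELECT itemid AS metric, clock AS time_sec, AVG(value) as value
FROM history
WHERE itemid IN (23296, 23297)
  AND clock > 1540000000 AND clock < 1540100000
GROUP BY time_sec DIV 300, metric
```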

When getting trends, the plugin additionally queries a particular value column (`value_avg`, `value_min` or `value_max`), which depends on the `consolidateBy` function value:

```sql
SELECT itemid AS metric, clock AS time_sec, {aggFunc}({valueColumn}) as value
-- FROM and WHERE clauses omitted in this snippet
GROUP BY time_sec DIV {intervalSec}, metric
```
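
Again, purely for illustration (hypothetical table name, item ids, time range and interval): assuming `consolidateBy(max)` is set, the plugin would read the `value_max` column and aggregate it with `MAX`, so a rendered trend query could look roughly like this:

```sql
-- Hypothetical example: hourly maximums taken from the value_max trend column
SELECT itemid AS metric, clock AS time_sec, MAX(value_max) as value
FROM trends
WHERE itemid IN (23296, 23297)
  AND clock > 1540000000 AND clock < 1540100000
GROUP BY time_sec DIV 3600, metric
```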

As you can see, the Grafana-Zabbix plugin uses aggregation by a given time interval. This interval is provided by Grafana and depends on the panel width in pixels. Thus, Grafana displays the data in the proper resolution.

## Functions usage with Direct DB Connection

There's only one function that changes what data comes from the backend: `consolidateBy`. Other functions still work on the client side and transform the data that comes from the backend, so keep in mind that you are working with data that has already been pre-aggregated (by AVG, MAX, MIN, etc.).

For example, let's say you want to group values into 10 minute intervals using the `max` function. If you just apply the `groupBy(10m, max)` function, the result will be incorrect, because you would be transforming data that was already aggregated by the default `AVG` function. You should use `consolidateBy(max)` coupled with `groupBy(10m, max)` in order to get the correct result.
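
To see why, here is a minimal SQL sketch of the difference. It is hypothetical and simplified: it assumes a Zabbix-style `history` table and uses a single 10-minute bucketing step to stand in for both the server-side consolidation and the client-side `groupBy`:

```sql
-- Max of per-bucket averages: roughly what you get with the default
-- consolidateBy(avg) followed by a client-side groupBy(10m, max).
SELECT MAX(avg_value)
FROM (
  SELECT clock DIV 600 AS bucket, AVG(value) AS avg_value
  FROM history
  WHERE itemid = 23296            -- hypothetical item id
  GROUP BY bucket
) AS averaged;

-- Max of per-bucket maximums: roughly what consolidateBy(max) plus
-- groupBy(10m, max) gives you, i.e. the true maximum.
SELECT MAX(max_value)
FROM (
  SELECT clock DIV 600 AS bucket, MAX(value) AS max_value
  FROM history
  WHERE itemid = 23296            -- hypothetical item id
  GROUP BY bucket
) AS grouped;
```

The first query can only underestimate peaks, which is why the two approaches usually disagree on spiky data.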