Merge pull request #1 from bulletfactory/patch-1
Update direct-db-connection.md

# Direct DB Connection

Since version 4.3, Grafana can use MySQL as a native data source. The Grafana-Zabbix plugin can use this data source for querying data directly from a Zabbix database.

One of the most resource-intensive queries for the Zabbix API is the history query. For long time intervals, `history.get` returns a huge amount of data. In order to display it, the plugin should adjust the time series resolution by using the [consolidateBy](/reference/functions/#consolidateby) function. Ultimately, Grafana displays this reduced time series, but that data must be loaded and processed on the client side first. Direct DB Connection solves these two problems by moving consolidation to the server side. Thus, the client gets a ready-to-use dataset which is much smaller: the data loads faster, and the client doesn't spend time processing it.

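To make the server-side consolidation concrete, here is a minimal sketch using SQLite (the plugin itself targets MySQL; the table below is a simplified stand-in for Zabbix's `history` table, filled with made-up data):

```python
import sqlite3

# In-memory stand-in for a Zabbix history table: (itemid, clock, value).
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE history (itemid INTEGER, clock INTEGER, value REAL)")

# One item, one sample per second for an hour -> 3600 raw rows.
conn.executemany(
    "INSERT INTO history VALUES (?, ?, ?)",
    [(10001, t, float(t % 60)) for t in range(3600)],
)

# Consolidate on the server into 60-second buckets, as the plugin's query does.
# SQLite's integer `/` plays the role of MySQL's DIV here.
interval_sec = 60
rows = conn.execute(
    """SELECT itemid AS metric, clock AS time_sec, AVG(value) AS value
       FROM history
       GROUP BY clock / ?, itemid""",
    (interval_sec,),
).fetchall()

print(len(rows))  # 3600 raw points reduced to 60 consolidated points
```

The client receives 60 pre-aggregated rows instead of 3600 raw ones, which is exactly the size reduction described above.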
Also, many users see better performance from direct database queries versus API calls. This could be due to several factors, such as the additional PHP layer and the additional SQL queries (user permission checks).

## Data Flow

This chart illustrates how the plugin uses both the Zabbix API and the MySQL data source for querying different types of data from Zabbix. The MySQL data source is used only for pulling history and trend data, in place of the `history.get` and `trend.get` API calls.

## Query structure

Below is an example of the query the Grafana-Zabbix plugin uses for getting history:

```sql
SELECT itemid AS metric, clock AS time_sec, {aggFunc}(value) as value
...
GROUP BY time_sec DIV {intervalSec}, metric
```

where `{aggFunc}` is one of the `[AVG, MIN, MAX, SUM, COUNT]` aggregation functions, `{historyTable}` is a history table, and `{intervalSec}` is the consolidation interval in seconds.

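The `time_sec DIV {intervalSec}` grouping works because integer division maps every timestamp inside the same interval to the same bucket number. A quick illustration with arbitrary timestamps:

```python
interval_sec = 300  # a 5-minute consolidation interval

# Unix timestamps: the first three fall inside one 300-second bucket,
# the last one starts the next bucket.
clocks = [1700000100, 1700000200, 1700000399, 1700000400]

buckets = [clock // interval_sec for clock in clocks]
print(buckets)  # [5666667, 5666667, 5666667, 5666668]
```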
When getting trends, the plugin additionally queries a particular value column (`value_avg`, `value_min` or `value_max`), which depends on the `consolidateBy` function value:

```sql
SELECT itemid AS metric, clock AS time_sec, {aggFunc}({valueColumn}) as value
...
GROUP BY time_sec DIV {intervalSec}, metric
```

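A sketch of how the value column could be selected from the `consolidateBy` value (the helper name and the fallback to `value_avg` are assumptions for illustration, not the plugin's actual code):

```python
# Zabbix trend tables store value_avg, value_min and value_max per interval.
TREND_COLUMNS = {
    "avg": "value_avg",
    "min": "value_min",
    "max": "value_max",
}

def trend_value_column(consolidate_by: str) -> str:
    # Hypothetical fallback: use the average column when no dedicated
    # trend column exists for the requested consolidation function.
    return TREND_COLUMNS.get(consolidate_by, "value_avg")

print(trend_value_column("max"))  # value_max
```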
As you can see, the plugin uses aggregation over a given time interval. This interval is provided by Grafana and depends on the panel width in pixels. Thus, Grafana displays the data at the proper resolution.

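As a rough sketch of that relationship (the exact formula Grafana uses is not shown in this document; one consolidated point per pixel is an assumption for illustration):

```python
def consolidation_interval_sec(time_from: int, time_till: int,
                               panel_width_px: int) -> int:
    # Aim for roughly one consolidated point per horizontal pixel.
    span_sec = time_till - time_from
    return max(1, span_sec // panel_width_px)

# A 24-hour range on a 1280px-wide panel -> about one point per minute.
print(consolidation_interval_sec(0, 86400, 1280))  # 67
```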
## Functions usage with Direct DB Connection

There's only one function affecting the backend: `consolidateBy`, which changes what data comes from the backend. Other functions still work on the client side and transform the data that comes from the backend. So you should clearly understand that this data is pre-aggregated (by AVG, MAX, MIN, etc).

For example, say you want to group values into 10 minute intervals using the `max` function. If you just apply the `groupBy(10m, max)` function, your result will be wrong, because you would be transforming data aggregated by the default `AVG` function. You should use `consolidateBy(max)` coupled with `groupBy(10m, max)` in order to get a precise result.
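The difference is easy to show numerically; a minimal sketch with two made-up consolidation buckets:

```python
# Raw values for one period, split into two consolidation buckets.
bucket_1 = [1.0, 2.0, 9.0]   # true max: 9.0
bucket_2 = [2.0, 3.0, 4.0]   # true max: 4.0

avg = lambda xs: sum(xs) / len(xs)

# Default consolidation (AVG): the backend returns bucket averages.
consolidated_avg = [avg(bucket_1), avg(bucket_2)]   # [4.0, 3.0]
print(max(consolidated_avg))   # 4.0 -- wrong: the 9.0 spike is gone

# consolidateBy(max): the backend returns bucket maximums instead.
consolidated_max = [max(bucket_1), max(bucket_2)]   # [9.0, 4.0]
print(max(consolidated_max))   # 9.0 -- correct
```

Taking the maximum of averaged buckets hides the spike; only `consolidateBy(max)` preserves it for the client-side `groupBy`.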