setup docs hierarchy

This commit is contained in:
Sriramajeyam Sugumaran
2023-12-20 11:32:04 +00:00
parent a6b91bbd86
commit ef24d1fcb4
14 changed files with 13 additions and 13 deletions


@@ -0,0 +1,25 @@
---
title: Alerting
menuTitle: Alerting
description: Alerting
aliases:
keywords:
- data source
- zabbix
labels:
  products:
    - oss
    - grafana cloud
weight: 520
---
## Alerting overview
The Grafana-Zabbix plugin introduced [alerting](https://grafana.com/docs/grafana/latest/alerting/) support in the 4.0 release. This work is still in progress, so the current alerting support has some limitations:
- Only the `Metrics` query mode is supported.
- Queries with data processing functions are not supported.
## Creating alerts
To create an alert, open the panel query editor and switch to the `Alert` tab. Click the `Create Alert` button, configure the alert, and save the dashboard. Refer to the [Grafana documentation](https://grafana.com/docs/grafana/latest/alerting/create-alerts/) for more details about alert configuration.


@@ -0,0 +1,122 @@
---
title: Direct DB Connection
menuTitle: Direct DB Connection
description: Direct DB Connection
aliases:
keywords:
- data source
- zabbix
labels:
  products:
    - oss
    - grafana cloud
weight: 530
---
Since version 4.3, Grafana can use MySQL as a native data source. The idea of Direct DB Connection is that the Grafana-Zabbix plugin can use this data source to query data directly from a Zabbix database.
One of the most resource-intensive queries for the Zabbix API is the history query. For long time intervals, `history.get`
returns a huge amount of data. In order to display it, the plugin has to adjust the time series resolution
by using [consolidateBy](../functions/#consolidateby). Ultimately, Grafana displays this reduced
time series, but the data has to be loaded and processed on the client side first. Direct DB Connection solves these two problems by moving consolidation to the server side. The client thus gets a 'ready-to-use' dataset which is much smaller, so the data loads faster and the client doesn't spend time processing it.
Also, many users see better performance from direct database queries than from API calls. This can have several causes,
such as the additional PHP layer and additional SQL queries (user permission checks).
The Direct DB Connection feature uses the database transparently for querying historical data. The Grafana-Zabbix plugin currently supports several databases for history queries: MySQL, PostgreSQL, and InfluxDB. Regardless of the database type, the idea and data flow remain the same.
## Data Flow
This chart illustrates how the plugin uses both the Zabbix API and the MySQL data source for querying different types
of data from Zabbix. The MySQL data source is used only for pulling history and trend data, instead of the `history.get`
and `trend.get` API calls.
[![Direct DB Connection](https://raw.githubusercontent.com/grafana/alexanderzobnin-zabbix-app/main/docs/images/reference-direct-db-connection.svg)](https://raw.githubusercontent.com/grafana/alexanderzobnin-zabbix-app/main/docs/images/reference-direct-db-connection.svg)
## Query structure
Below is an example query for getting history in the Grafana-Zabbix Plugin:
**MySQL**:
```sql
SELECT itemid AS metric, clock AS time_sec, {aggFunc}(value) as value
FROM {historyTable}
WHERE itemid IN ({itemids})
AND clock > {timeFrom} AND clock < {timeTill}
GROUP BY time_sec DIV {intervalSec}, metric
ORDER BY time_sec ASC
```
**PostgreSQL**:
```sql
SELECT to_char(itemid, 'FM99999999999999999999') AS metric,
clock / {intervalSec} * {intervalSec} AS time,
{aggFunc}(value) AS value
FROM {historyTable}
WHERE itemid IN ({itemids})
AND clock > {timeFrom} AND clock < {timeTill}
GROUP BY 1, 2
ORDER BY time ASC
```
where `{aggFunc}` is one of the `AVG`, `MIN`, `MAX`, `SUM`, or `COUNT` aggregation functions, `{historyTable}` is the history table,
and `{intervalSec}` is the consolidation interval in seconds.
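To make the substitution concrete, here is a minimal sketch of how such a query could be assembled; `buildHistoryQuery` and its parameters are hypothetical names for illustration, not the plugin's actual query builder:
```typescript
// Hypothetical helper showing how the placeholders above might be substituted;
// not the plugin's actual query builder.
type AggFunc = 'AVG' | 'MIN' | 'MAX' | 'SUM' | 'COUNT';

function buildHistoryQuery(
  aggFunc: AggFunc,
  historyTable: string,
  itemids: number[],
  timeFrom: number,   // Unix timestamp in seconds
  timeTill: number,   // Unix timestamp in seconds
  intervalSec: number // consolidation interval in seconds
): string {
  return `SELECT itemid AS metric, clock AS time_sec, ${aggFunc}(value) AS value
FROM ${historyTable}
WHERE itemid IN (${itemids.join(', ')})
  AND clock > ${timeFrom} AND clock < ${timeTill}
GROUP BY time_sec DIV ${intervalSec}, metric
ORDER BY time_sec ASC`;
}

// One hour of history for two items, averaged into 60-second buckets.
console.log(buildHistoryQuery('AVG', 'history', [10073, 10074], 1540000000, 1540003600, 60));
```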
When getting trends, the plugin additionally queries a particular value column (`value_avg`, `value_min`, or `value_max`), which
depends on the `consolidateBy` function value:
**MySQL**:
```sql
SELECT itemid AS metric, clock AS time_sec, {aggFunc}({valueColumn}) as value
FROM {trendsTable}
WHERE itemid IN ({itemids})
AND clock > {timeFrom} AND clock < {timeTill}
GROUP BY time_sec DIV {intervalSec}, metric
ORDER BY time_sec ASC
```
**PostgreSQL**:
```sql
SELECT to_char(itemid, 'FM99999999999999999999') AS metric,
clock / {intervalSec} * {intervalSec} AS time,
{aggFunc}({valueColumn}) AS value
FROM {trendsTable}
WHERE itemid IN ({itemids})
AND clock > {timeFrom} AND clock < {timeTill}
GROUP BY 1, 2
ORDER BY time ASC
```
**Note**: these queries may change in the future, so check the sources for the actual query structure.
As you can see, the Grafana-Zabbix plugin aggregates by a given time interval. This interval is provided by Grafana and depends on the panel width in pixels, so Grafana displays the data at the proper resolution.
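As a rough sketch of that relationship (the function and numbers below are illustrative assumptions, not the plugin's internals):
```typescript
// Rough illustration: the finest useful resolution is about one point per pixel,
// so the consolidation interval grows with the time range and shrinks with panel width.
function estimateIntervalSec(rangeSec: number, panelWidthPx: number): number {
  return Math.max(1, Math.ceil(rangeSec / panelWidthPx));
}

console.log(estimateIntervalSec(24 * 3600, 1200)); // 72 seconds per point for a 24h range
```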
## InfluxDB
Zabbix supports loadable modules, which makes it possible to write history data to an external database. There's a [module](https://github.com/i-ky/effluence) for InfluxDB written by [Gleb Ivanovsky](https://github.com/i-ky) which can export history to InfluxDB in real time.
### InfluxDB retention policy
In order to keep the database size under control, you should use the InfluxDB retention policy mechanism. It's possible to create a retention policy for long-term data and write aggregated data in the same manner as Zabbix does (trends). This retention policy can then be used in the plugin for getting data after a certain period (the [Retention Policy](../../configuration/#direct-db-connection) option in the data source config). Read more about how to configure a retention policy for use with the plugin in the effluence module [docs](https://github.com/i-ky/effluence#database-sizing).
#### InfluxDB Query
Ultimately, the plugin generates an InfluxDB query similar to this:
```sql
SELECT MEAN("value")
FROM "history"
WHERE ("itemid" = '10073' OR "itemid" = '10074')
AND "time" >= 1540000000000s AND "time" <= 1540000000060s
GROUP BY time(10s), "itemid" fill(none)
```
## Functions usage with Direct DB Connection
There's only one function that directly affects the backend data: `consolidateBy`. Other functions work on the client side and transform data that comes from the backend, so keep in mind that they operate on pre-aggregated data (by AVG, MAX, MIN, etc.).
For example, say you want to group values by a 1-hour interval using the `max` function. If you just apply `groupBy(1h, max)`, your result will be wrong, because you would be transforming data already aggregated by the default `AVG` function. You should use `consolidateBy(max)` coupled with `groupBy(1h, max)` in order to get a precise result.
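Here is a small worked illustration of why this matters, with made-up numbers:
```typescript
// Two backend consolidation buckets of raw history values.
const buckets = [
  [1, 9, 2], // true max is 9, average is 4
  [3, 3, 3], // true max is 3, average is 3
];
const avg = (xs: number[]) => xs.reduce((a, b) => a + b, 0) / xs.length;

// Default behavior: the backend averages each bucket, then groupBy(..., max)
// runs client-side over the averaged points -- the peak of 9 is lost.
const maxOfAverages = Math.max(...buckets.map(avg)); // 4

// With consolidateBy(max) the backend keeps each bucket's maximum,
// so the client-side max sees the real peak.
const maxOfMaxes = Math.max(...buckets.map(b => Math.max(...b))); // 9

console.log(maxOfAverages, maxOfMaxes); // 4 9
```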


@@ -0,0 +1,430 @@
---
title: Functions reference
menuTitle: Functions reference
description: Functions reference
aliases:
keywords:
- data source
- zabbix
labels:
  products:
    - oss
    - grafana cloud
weight: 510
---
## Functions Variables
There are some built-in template variables available for use in functions:
- `$__range_ms` - panel time range in ms
- `$__range_s` - panel time range in seconds
- `$__range` - panel time range, string representation (`30s`, `1m`, `1h`)
- `$__range_series` - invokes the function over all series values
Examples:
```sh
groupBy($__range, avg)
percentile($__range_series, 95) - 95th percentile over all values
```
---
## Transform
### _groupBy_
```sh
groupBy(interval, function)
```
Takes each timeseries and consolidates the points that fall within the given _interval_ into one point, using _function_, which can be one of: _avg_, _min_, _max_, _median_.
Examples:
```sh
groupBy(10m, avg)
groupBy(1h, median)
```
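A minimal sketch of this kind of consolidation, with illustrative types and names rather than the plugin's actual implementation:
```typescript
// Illustrative consolidation: bucket points by interval, reduce each bucket with fn.
type Point = [value: number, tsMs: number];

function groupByInterval(points: Point[], intervalMs: number, fn: (xs: number[]) => number): Point[] {
  const buckets = new Map<number, number[]>();
  for (const [value, ts] of points) {
    const bucketTs = Math.floor(ts / intervalMs) * intervalMs;
    const xs = buckets.get(bucketTs) ?? [];
    xs.push(value);
    buckets.set(bucketTs, xs);
  }
  return [...buckets.entries()].map(([ts, xs]): Point => [fn(xs), ts]);
}

const avg = (xs: number[]) => xs.reduce((a, b) => a + b, 0) / xs.length;
// groupBy(10m, avg): the first two points share one 10-minute bucket.
console.log(groupByInterval([[2, 0], [4, 300_000], [6, 600_000]], 600_000, avg));
// -> [[3, 0], [6, 600000]]
```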
---
### _scale_
```sh
scale(factor)
```
Takes a timeseries and multiplies each point by the given _factor_.
Examples:
```sh
scale(100)
scale(0.01)
```
---
### _delta_
```sh
delta()
```
Converts absolute values to deltas. This function just calculates the difference between consecutive values. For a per-second
calculation, use `rate()`.
---
### _rate_
```sh
rate()
```
Calculates the per-second rate of increase of the time series. It is resistant to counter resets and suitable for converting
growing counters into a per-second rate.
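A minimal sketch of a reset-resistant per-second rate (illustrative, not the plugin's source):
```typescript
// Illustrative reset-resistant rate: when a counter drops, treat the new
// reading as growth since the reset rather than producing a negative delta.
type Point = [value: number, tsMs: number];

function ratePerSecond(points: Point[]): Point[] {
  const out: Point[] = [];
  for (let i = 1; i < points.length; i++) {
    const [v, t] = points[i];
    const [prevV, prevT] = points[i - 1];
    const delta = v >= prevV ? v - prevV : v; // counter reset: assume restart from 0
    out.push([delta / ((t - prevT) / 1000), t]);
  }
  return out;
}

// A counter that resets between the 2nd and 3rd samples (one sample per 10s).
console.log(ratePerSecond([[100, 0], [200, 10_000], [50, 20_000]]));
// -> [[10, 10000], [5, 20000]]
```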
---
### _movingAverage_
```sh
movingAverage(windowSize)
```
Graphs the moving average of a metric over a fixed number of past points, specified by the `windowSize` parameter.
Examples:
```sh
movingAverage(60)
calculates the moving average over 60 points (if the metric has 1-second resolution, this matches a 1-minute window)
```
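A minimal sketch of the idea (illustrative, not the plugin's source):
```typescript
// Illustrative sliding-window average over up to the last windowSize points.
function movingAverage(values: number[], windowSize: number): number[] {
  return values.map((_, i) => {
    const window = values.slice(Math.max(0, i - windowSize + 1), i + 1);
    return window.reduce((a, b) => a + b, 0) / window.length;
  });
}

console.log(movingAverage([1, 2, 3, 4, 5], 3)); // [1, 1.5, 2, 3, 4]
```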
---
### _exponentialMovingAverage_
```sh
exponentialMovingAverage(windowSize)
```
Takes a series of values and a window size and produces an exponential moving average using the following formula:
`ema(current) = constant * (Current Value) + (1 - constant) * ema(previous)`
The constant is calculated as:
`constant = 2 / (windowSize + 1)`
If windowSize < 1 (0.1, for instance), the constant isn't calculated but is taken directly from windowSize
(constant = windowSize).
It's a bit tricky to graph the EMA from the first point of a series (rather than from the Nth point, where N = windowSize). To do that, the plugin would have to fetch the previous N points first and calculate a simple moving average over them. To avoid this, the plugin uses a shortcut: it assumes the previous N points have the same average value as the first N (windowSize) points. Keep this fact in mind and don't rely on the first N points of the interval.
Examples:
```sh
exponentialMovingAverage(60)
calculates the exponential moving average over 60 points (if the metric has 1-second resolution, this matches a 1-minute window)
```
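A minimal sketch of the formula and the seeding shortcut described above (illustrative, not the plugin's source):
```typescript
// Illustrative EMA: windowSize < 1 is used directly as the constant, and the
// value before the first point is seeded with the average of the first window,
// matching the shortcut described above.
function exponentialMovingAverage(values: number[], windowSize: number): number[] {
  const c = windowSize < 1 ? windowSize : 2 / (windowSize + 1);
  const n = Math.min(Math.max(1, Math.floor(windowSize)), values.length);
  let ema = values.slice(0, n).reduce((a, b) => a + b, 0) / n;
  return values.map(v => (ema = c * v + (1 - c) * ema));
}

console.log(exponentialMovingAverage([10, 20, 30, 40], 3));
// -> [15, 17.5, 23.75, 31.875], seeded from avg(10, 20, 30) = 20
```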
---
### _percentile_
```sh
percentile(interval, N)
```
Takes a series of values and consolidates the points that fall within the given _interval_ into one point as the Nth percentile.
Examples:
```sh
percentile(1h, 99)
percentile($__range_series, 95) - 95th percentile over all series values
```
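A minimal nearest-rank sketch of the consolidation step (the plugin's exact percentile method may differ):
```typescript
// Illustrative nearest-rank Nth percentile of a bucket of values.
function nthPercentile(values: number[], n: number): number {
  const sorted = [...values].sort((a, b) => a - b);
  const rank = Math.ceil((n / 100) * sorted.length);
  return sorted[Math.max(0, rank - 1)];
}

console.log(nthPercentile([15, 20, 35, 40, 50], 30)); // 20
```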
---
### _removeAboveValue_
```sh
removeAboveValue(N)
```
Replaces series values with `null` if value > N
Examples:
```sh
removeAboveValue(1)
```
---
### _removeBelowValue_
```sh
removeBelowValue(N)
```
Replaces series values with `null` if value < N
---
### _transformNull_
```sh
transformNull(N)
```
Replaces `null` values with N
---
## Aggregate
### _aggregateBy_
```sh
aggregateBy(interval, function)
```
Takes all timeseries and consolidates all their points that fall within the given _interval_ into one point, using _function_, which can be one of: _avg_, _min_, _max_, _median_.
Examples:
```sh
aggregateBy(10m, avg)
aggregateBy(1h, median)
```
---
### _sumSeries_
```sh
sumSeries()
```
This will add metrics together and return the sum at each datapoint. This method requires interpolation of each timeseries, so it may cause high CPU load. Try combining it with the _groupBy()_ function to reduce the load.
---
### _percentileAgg_
```sh
percentileAgg(interval, N)
```
Takes all timeseries and consolidates all their points that fall within the given _interval_ into one point as the Nth percentile.
Examples:
```sh
percentileAgg(1h, 99)
percentileAgg($__range_series, 95) - 95th percentile over all values
```
---
### _average_
```sh
average(interval)
```
**Deprecated**, use `aggregateBy(interval, avg)` instead.
---
### _min_
```sh
min(interval)
```
**Deprecated**, use `aggregateBy(interval, min)` instead.
---
### _max_
```sh
max(interval)
```
**Deprecated**, use `aggregateBy(interval, max)` instead.
---
## Filter
### _top_
```sh
top(N, value)
```
Returns the top N series, sorted by _value_, which can be one of: _avg_, _min_, _max_, _median_.
Examples:
```sh
top(10, avg)
top(5, max)
```
---
### _bottom_
```sh
bottom(N, value)
```
Returns the bottom N series, sorted by _value_, which can be one of: _avg_, _min_, _max_, _median_.
Examples:
```sh
bottom(5, avg)
```
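Both filters follow the same ranking pattern; a minimal sketch with illustrative names and types:
```typescript
// Illustrative top-N filter: rank series by an aggregate of their values.
type Series = { name: string; values: number[] };
const avg = (xs: number[]) => xs.reduce((a, b) => a + b, 0) / xs.length;

function topN(series: Series[], n: number, rankBy: (xs: number[]) => number): Series[] {
  return [...series].sort((a, b) => rankBy(b.values) - rankBy(a.values)).slice(0, n);
}
// bottom(N, ...) is the same with the sort order reversed.

const input: Series[] = [
  { name: 'backend01', values: [1, 2, 3] },
  { name: 'backend02', values: [7, 8, 9] },
  { name: 'backend03', values: [4, 5, 6] },
];
console.log(topN(input, 2, avg).map(s => s.name)); // ["backend02", "backend03"]
```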
---
## Trends
### _trendValue_
```sh
trendValue(valueType)
```
Specifies the type of trend value returned by Zabbix when trends are used (avg, min, or max).
---
## Time
### _timeShift_
```sh
timeShift(interval)
```
Draws the selected metrics shifted in time. If no sign is given, a minus sign (-) is implied, which shifts the metric back in time. If a plus sign (+) is given, the metric is shifted forward in time.
Examples:
```sh
timeShift(24h) - shifts the metric back 24 hours
timeShift(-24h) - the same result as timeShift(24h)
timeShift(+1d) - shifts the metric forward 1 day
```
---
## Alias
The following template variables are available for use in the `setAlias()` and `replaceAlias()` functions:
- `$__zbx_item`, `$__zbx_item_name` - item name
- `$__zbx_item_key` - item key
- `$__zbx_host_name` - visible name of the host
- `$__zbx_host` - technical name of the host
Examples:
```sh
setAlias($__zbx_host_name: $__zbx_item) -> backend01: CPU user time
setAlias(Item key: $__zbx_item_key) -> Item key: system.cpu.load[percpu,avg1]
setAlias($__zbx_host_name) -> backend01
```
### _setAlias_
```sh
setAlias(alias)
```
Returns given alias instead of the metric name.
Examples:
```sh
setAlias(load)
```
---
### _setAliasByRegex_
```sh
setAliasByRegex(regex)
```
Returns the part of the metric name matched by _regex_.
Examples:
```sh
setAliasByRegex(Zabbix busy [a-zA-Z]+)
```
---
### _replaceAlias_
```sh
replaceAlias(pattern, newAlias)
```
Replaces the metric name using _pattern_, which can be a regex or a plain string. If a regex is used, the following special replacement patterns are supported:
| Pattern | Inserts |
| ------- | ------------------------------------------------------------------------------------------------------------------------------------------------ |
| $$ | Inserts a "$". |
| $& | Inserts the matched substring. |
| $` | Inserts the portion of the string that precedes the matched substring. |
| $' | Inserts the portion of the string that follows the matched substring. |
| $n | Where n is a non-negative integer less than 100, inserts the nth parenthesized submatch string, provided the first argument was a RegExp object. |
For more details, see the [String.prototype.replace()](https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Global_Objects/String/replace) documentation.
Examples:
```sh
CPU system time
replaceAlias(/CPU (.*) time/, $1) -> system
backend01: CPU system time
replaceAlias(/CPU (.*) time/, $1) -> backend01: system
backend01: CPU system time
replaceAlias(/.*CPU (.*) time/, $1) -> system
backend01: CPU system time
replaceAlias(/(.*): CPU (.*) time/, $1 - $2) -> backend01 - system
```
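Since these patterns follow JavaScript's `String.prototype.replace()` semantics, the same behavior can be reproduced directly (an illustration, not plugin code):
```typescript
// The plugin's replacement patterns behave like JavaScript's String.replace().
const alias = 'backend01: CPU system time';
console.log(alias.replace(/CPU (.*) time/, '$1'));            // "backend01: system"
console.log(alias.replace(/(.*): CPU (.*) time/, '$1 - $2')); // "backend01 - system"
```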
---
## Special
### _consolidateBy_
```sh
consolidateBy(consolidationFunc)
```
When a graph is drawn and the width of the graph in pixels is smaller than the number of datapoints to be graphed, the plugin consolidates the values to prevent line overlap. The `consolidateBy()` function changes the consolidation function from the default of average to one of `sum`, `min`, `max`, or `count`.
Valid function names are `sum`, `avg`, `min`, `max`, and `count`.
---