The following guides walk through both common and complex queries and use cases for Flux.
Use the InfluxDB CLI, API, and the Chronograf Data Explorer to execute Flux queries.
Use the filter() function to query data based on fields, tags, or any other column value. filter() performs operations similar to the SELECT statement and the WHERE clause in InfluxQL and other SQL-like query languages.
from(bucket: "db/rp")
    |> range(start: -1h)
    |> filter(fn: (r) =>
        r._measurement == "example-measurement" and
        r._field == "example-field" and
        r.tag == "example-tag"
    )
Use the group() function to group data with common values in specific columns.
data
|> group(columns: ["host"], mode: "by")
Input:

| _time                | host  | _value |
| -------------------- | ----- | ------ |
| 2020-01-01T00:01:00Z | host1 | 1.0    |
| 2020-01-01T00:01:00Z | host2 | 2.0    |
| 2020-01-01T00:02:00Z | host1 | 1.0    |
| 2020-01-01T00:02:00Z | host2 | 3.0    |
Output:

| _time                | host  | _value |
| -------------------- | ----- | ------ |
| 2020-01-01T00:01:00Z | host1 | 1.0    |
| 2020-01-01T00:02:00Z | host1 | 1.0    |

| _time                | host  | _value |
| -------------------- | ----- | ------ |
| 2020-01-01T00:01:00Z | host2 | 2.0    |
| 2020-01-01T00:02:00Z | host2 | 3.0    |
Use the sort() function to order records within each table by specific columns and the limit() function to limit the number of records in output tables to a fixed number, n.
data
|> sort(columns: ["host", "_value"])
|> limit(n: 4)
Input:

| _time                | host | _value |
| -------------------- | ---- | ------ |
| 2020-01-01T00:01:00Z | A    | 1.0    |
| 2020-01-01T00:02:00Z | B    | 1.2    |
| 2020-01-01T00:03:00Z | A    | 1.8    |
| 2020-01-01T00:04:00Z | B    | 0.9    |
| 2020-01-01T00:05:00Z | B    | 1.4    |
| 2020-01-01T00:06:00Z | B    | 2.0    |
Output:

| _time                | host | _value |
| -------------------- | ---- | ------ |
| 2020-01-01T00:03:00Z | A    | 1.8    |
| 2020-01-01T00:01:00Z | A    | 1.0    |
| 2020-01-01T00:06:00Z | B    | 2.0    |
| 2020-01-01T00:05:00Z | B    | 1.4    |
This guide walks through windowing and aggregating data with Flux and outlines how it shapes your data in the process.
data
|> aggregateWindow(every: 20m, fn: mean)
Input:

| _time                | _value |
| -------------------- | ------ |
| 2020-01-01T00:00:00Z | 250    |
| 2020-01-01T00:04:00Z | 160    |
| 2020-01-01T00:12:00Z | 150    |
| 2020-01-01T00:19:00Z | 220    |
| 2020-01-01T00:32:00Z | 200    |
| 2020-01-01T00:51:00Z | 290    |
| 2020-01-01T01:00:00Z | 340    |
Output:

| _time                | _value |
| -------------------- | ------ |
| 2020-01-01T00:20:00Z | 195    |
| 2020-01-01T00:40:00Z | 200    |
| 2020-01-01T01:00:00Z | 290    |
| 2020-01-01T01:20:00Z | 340    |
Use the map() function to remap column values and apply mathematical operations.
data
|> map(fn: (r) => ({ r with _value: r._value * r._value }))
Input:

| _time                | _value |
| -------------------- | ------ |
| 2020-01-01T00:01:00Z | 2      |
| 2020-01-01T00:02:00Z | 4      |
| 2020-01-01T00:03:00Z | 3      |
| 2020-01-01T00:04:00Z | 5      |
Output:

| _time                | _value |
| -------------------- | ------ |
| 2020-01-01T00:01:00Z | 4      |
| 2020-01-01T00:02:00Z | 16     |
| 2020-01-01T00:03:00Z | 9      |
| 2020-01-01T00:04:00Z | 25     |
Use pivot() or join() and the map() function to align operand values into rows and calculate a percentage.
data
    |> pivot(rowKey: ["_time"], columnKey: ["_field"], valueColumn: "_value")
    |> map(fn: (r) => ({
        _time: r._time,
        _field: "used_percent",
        _value: float(v: r.used) / float(v: r.total) * 100.0
    }))
Input:

| _time                | _field | _value |
| -------------------- | ------ | ------ |
| 2020-01-01T00:00:00Z | used   | 2.5    |
| 2020-01-01T00:00:10Z | used   | 3.1    |
| 2020-01-01T00:00:20Z | used   | 4.2    |

| _time                | _field | _value |
| -------------------- | ------ | ------ |
| 2020-01-01T00:00:00Z | total  | 8.0    |
| 2020-01-01T00:00:10Z | total  | 8.0    |
| 2020-01-01T00:00:20Z | total  | 8.0    |
Output:

| _time                | _field       | _value |
| -------------------- | ------------ | ------ |
| 2020-01-01T00:00:00Z | used_percent | 31.25  |
| 2020-01-01T00:00:10Z | used_percent | 38.75  |
| 2020-01-01T00:00:20Z | used_percent | 52.50  |
Use the increase() function to track increases across multiple columns in a table. This function is especially useful when tracking changes in counter values that wrap over time or periodically reset.
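The input and output below correspond to calling increase() with its default options:

```flux
data
    |> increase()
```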
Input:

| _time                | _value |
| -------------------- | ------ |
| 2020-01-01T00:01:00Z | 1      |
| 2020-01-01T00:02:00Z | 2      |
| 2020-01-01T00:03:00Z | 8      |
| 2020-01-01T00:04:00Z | 10     |
| 2020-01-01T00:05:00Z | 0      |
| 2020-01-01T00:06:00Z | 4      |
Output:

| _time                | _value |
| -------------------- | ------ |
| 2020-01-01T00:02:00Z | 1      |
| 2020-01-01T00:03:00Z | 7      |
| 2020-01-01T00:04:00Z | 9      |
| 2020-01-01T00:05:00Z | 9      |
| 2020-01-01T00:06:00Z | 13     |
Use the movingAverage() or timedMovingAverage() functions to return the moving average of data.
data
    |> movingAverage(n: 3)
Input:

| _time                | _value |
| -------------------- | ------ |
| 2020-01-01T00:01:00Z | 1.0    |
| 2020-01-01T00:02:00Z | 1.2    |
| 2020-01-01T00:03:00Z | 1.8    |
| 2020-01-01T00:04:00Z | 0.9    |
| 2020-01-01T00:05:00Z | 1.4    |
| 2020-01-01T00:06:00Z | 2.0    |
Output:

| _time                | _value |
| -------------------- | ------ |
| 2020-01-01T00:03:00Z | 1.33   |
| 2020-01-01T00:04:00Z | 1.30   |
| 2020-01-01T00:05:00Z | 1.36   |
| 2020-01-01T00:06:00Z | 1.43   |
data
|> timedMovingAverage(every: 2m, period: 4m)
Input:

| _time                | _value |
| -------------------- | ------ |
| 2020-01-01T00:01:00Z | 1.0    |
| 2020-01-01T00:02:00Z | 1.2    |
| 2020-01-01T00:03:00Z | 1.8    |
| 2020-01-01T00:04:00Z | 0.9    |
| 2020-01-01T00:05:00Z | 1.4    |
| 2020-01-01T00:06:00Z | 2.0    |
Output:

| _time                | _value |
| -------------------- | ------ |
| 2020-01-01T00:02:00Z | 1.000  |
| 2020-01-01T00:04:00Z | 1.333  |
| 2020-01-01T00:06:00Z | 1.325  |
| 2020-01-01T00:06:00Z | 1.150  |
Use the derivative() function to calculate the rate of change between subsequent values, or the aggregate.rate() function to calculate the average rate of change per window of time. If the time between points varies, these functions normalize points to a common time interval, making values easily comparable.
data
|> derivative(unit: 1m, nonNegative: true)
Input:

| _time                | _value |
| -------------------- | ------ |
| 2020-01-01T00:00:00Z | 250    |
| 2020-01-01T00:04:00Z | 160    |
| 2020-01-01T00:12:00Z | 150    |
| 2020-01-01T00:19:00Z | 220    |
| 2020-01-01T00:32:00Z | 200    |
| 2020-01-01T00:51:00Z | 290    |
| 2020-01-01T01:00:00Z | 340    |
Output:

| _time                | _value |
| -------------------- | ------ |
| 2020-01-01T00:04:00Z |        |
| 2020-01-01T00:12:00Z |        |
| 2020-01-01T00:19:00Z | 10.0   |
| 2020-01-01T00:32:00Z |        |
| 2020-01-01T00:51:00Z | 4.74   |
| 2020-01-01T01:00:00Z | 5.56   |
import "experimental/aggregate"
data
|> aggregate.rate(every: 20m, unit: 1m)
Input:

| _time                | _value |
| -------------------- | ------ |
| 2020-01-01T00:00:00Z | 250    |
| 2020-01-01T00:04:00Z | 160    |
| 2020-01-01T00:12:00Z | 150    |
| 2020-01-01T00:19:00Z | 220    |
| 2020-01-01T00:32:00Z | 200    |
| 2020-01-01T00:51:00Z | 290    |
| 2020-01-01T01:00:00Z | 340    |
Output:

| _time                | _value |
| -------------------- | ------ |
| 2020-01-01T00:20:00Z |        |
| 2020-01-01T00:40:00Z | 10.0   |
| 2020-01-01T01:00:00Z | 4.74   |
| 2020-01-01T01:20:00Z | 5.56   |
Use the histogram() function to create cumulative histograms with Flux.
data
    |> histogram(
        column: "_value",
        upperBoundColumn: "le",
        countColumn: "_value",
        bins: [100.0, 200.0, 300.0, 400.0],
    )
Input:

| _time                | _value |
| -------------------- | ------ |
| 2020-01-01T00:00:00Z | 250.0  |
| 2020-01-01T00:01:00Z | 160.0  |
| 2020-01-01T00:02:00Z | 150.0  |
| 2020-01-01T00:03:00Z | 220.0  |
| 2020-01-01T00:04:00Z | 200.0  |
| 2020-01-01T00:05:00Z | 290.0  |
| 2020-01-01T01:00:00Z | 340.0  |
Output:
le |
_value |
100.0 |
0.0 |
200.0 |
3.0 |
300.0 |
6.0 |
400.0 |
7.0 |
Use the fill() function to replace null values.
data
|> fill(usePrevious: true)
Input:

| _time                | _value |
| -------------------- | ------ |
| 2020-01-01T00:01:00Z | null   |
| 2020-01-01T00:02:00Z | 0.8    |
| 2020-01-01T00:03:00Z | null   |
| 2020-01-01T00:04:00Z | null   |
| 2020-01-01T00:05:00Z | 1.4    |
Output:

| _time                | _value |
| -------------------- | ------ |
| 2020-01-01T00:01:00Z | null   |
| 2020-01-01T00:02:00Z | 0.8    |
| 2020-01-01T00:03:00Z | 0.8    |
| 2020-01-01T00:04:00Z | 0.8    |
| 2020-01-01T00:05:00Z | 1.4    |
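fill() can also replace null values with a fixed value. The zero-filled output in the second example below corresponds to:

```flux
data
    |> fill(value: 0.0)
```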
Input:

| _time                | _value |
| -------------------- | ------ |
| 2020-01-01T00:01:00Z | null   |
| 2020-01-01T00:02:00Z | 0.8    |
| 2020-01-01T00:03:00Z | null   |
| 2020-01-01T00:04:00Z | null   |
| 2020-01-01T00:05:00Z | 1.4    |
Output:

| _time                | _value |
| -------------------- | ------ |
| 2020-01-01T00:01:00Z | 0.0    |
| 2020-01-01T00:02:00Z | 0.8    |
| 2020-01-01T00:03:00Z | 0.0    |
| 2020-01-01T00:04:00Z | 0.0    |
| 2020-01-01T00:05:00Z | 1.4    |
Use the median() function to return a value representing the 0.5 quantile (50th percentile), or median, of input data.
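Applied to a data stream like the input below, a minimal call uses the function's defaults:

```flux
data
    |> median()
```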
Input:

| _time                | _value |
| -------------------- | ------ |
| 2020-01-01T00:01:00Z | 1.0    |
| 2020-01-01T00:02:00Z | 1.0    |
| 2020-01-01T00:03:00Z | 2.0    |
| 2020-01-01T00:04:00Z | 3.0    |
Use the quantile() function to return all values within the q quantile or percentile of input data.
data
|> quantile(q: 0.99, method: "estimate_tdigest")
Input:

| _time                | _value |
| -------------------- | ------ |
| 2020-01-01T00:01:00Z | 1.0    |
| 2020-01-01T00:02:00Z | 1.0    |
| 2020-01-01T00:03:00Z | 2.0    |
| 2020-01-01T00:04:00Z | 3.0    |
This guide walks through joining data with Flux and outlines how it shapes your data in the process.
t1 = from(bucket: "example-bucket")
    |> range(start: 2020-01-01T00:00:00Z)
    |> filter(fn: (r) => r.m == "foo")

t2 = from(bucket: "example-bucket")
    |> range(start: 2020-01-01T00:00:00Z)
    |> filter(fn: (r) => r.m == "bar")

join(tables: {t1: t1, t2: t2}, on: ["_time"])
Input:

t1

| _time                | _value |
| -------------------- | ------ |
| 2020-01-01T00:01:00Z | 1      |
| 2020-01-01T00:02:00Z | 2      |
| 2020-01-01T00:03:00Z | 1      |
| 2020-01-01T00:04:00Z | 3      |

t2

| _time                | _value |
| -------------------- | ------ |
| 2020-01-01T00:01:00Z | 5      |
| 2020-01-01T00:02:00Z | 2      |
| 2020-01-01T00:03:00Z | 3      |
| 2020-01-01T00:04:00Z | 4      |
Output:

| _time                | _value_t1 | _value_t2 |
| -------------------- | --------- | --------- |
| 2020-01-01T00:01:00Z | 1         | 5         |
| 2020-01-01T00:02:00Z | 2         | 2         |
| 2020-01-01T00:03:00Z | 1         | 3         |
| 2020-01-01T00:04:00Z | 3         | 4         |
Use the cumulativeSum() function to calculate a running total of values.
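The input and output below correspond to calling cumulativeSum() with its default columns:

```flux
data
    |> cumulativeSum()
```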
Input:

| _time                | _value |
| -------------------- | ------ |
| 2020-01-01T00:01:00Z | 1      |
| 2020-01-01T00:02:00Z | 2      |
| 2020-01-01T00:03:00Z | 1      |
| 2020-01-01T00:04:00Z | 3      |
Output:

| _time                | _value |
| -------------------- | ------ |
| 2020-01-01T00:01:00Z | 1      |
| 2020-01-01T00:02:00Z | 3      |
| 2020-01-01T00:03:00Z | 4      |
| 2020-01-01T00:04:00Z | 7      |
Use the first() or last() functions to return the first or last point in an input table.
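The first input/output pair below corresponds to first():

```flux
data
    |> first()
```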
Input:

| _time                | _value |
| -------------------- | ------ |
| 2020-01-01T00:01:00Z | 1.0    |
| 2020-01-01T00:02:00Z | 1.0    |
| 2020-01-01T00:03:00Z | 2.0    |
| 2020-01-01T00:04:00Z | 3.0    |
Output:

| _time                | _value |
| -------------------- | ------ |
| 2020-01-01T00:01:00Z | 1.0    |
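The second input/output pair corresponds to last():

```flux
data
    |> last()
```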
Input:

| _time                | _value |
| -------------------- | ------ |
| 2020-01-01T00:01:00Z | 1.0    |
| 2020-01-01T00:02:00Z | 1.0    |
| 2020-01-01T00:03:00Z | 2.0    |
| 2020-01-01T00:04:00Z | 3.0    |
Output:

| _time                | _value |
| -------------------- | ------ |
| 2020-01-01T00:04:00Z | 3.0    |
Use the Flux exists operator to check if a record contains a key or if that key's value is null.
Filter null values
data
|> filter(fn: (r) => exists r._value)
Use Flux stream and table functions to extract scalar values from Flux query output. This lets you, for example, dynamically set variables using query results.
scalarValue = {
    _record =
        data
            |> tableFind(fn: (key) => true)
            |> getRecord(idx: 0)

    return _record._value
}
Use Flux to process and manipulate timestamps.
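For example, the date package truncates timestamps to a unit of time (a minimal sketch; the timestamp is illustrative):

```flux
import "date"

// Truncate a timestamp to the start of its hour
date.truncate(t: 2020-01-01T12:34:56Z, unit: 1h)

// Returns 2020-01-01T12:00:00Z
```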
Flux provides several functions to help monitor states and state changes in your data.
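For example, stateCount() counts the number of consecutive records that match a predicate (a minimal sketch; the threshold 100 is illustrative):

```flux
data
    |> stateCount(fn: (r) => r._value > 100)
```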
The Flux sql package provides functions for working with SQL data sources. Use sql.from() to query SQL databases like PostgreSQL and MySQL.
import "sql"
sql.from(
    driverName: "postgres",
    dataSourceName: "postgresql://user:password@localhost",
    query: "SELECT * FROM example_table"
)
This guide describes how to use Flux conditional expressions, such as if, then, and else, to query and transform data. Flux evaluates conditions in order and stops evaluating once a condition matches.
if color == "green" then "008000" else "ffffff"
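Conditional expressions are often used inside map() to derive a new column; the level column name and the threshold below are illustrative:

```flux
data
    |> map(fn: (r) => ({r with level: if r._value > 100 then "high" else "normal"}))
```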
This guide walks through using regular expressions in evaluation logic in Flux functions.
data
|> filter(fn: (r) => r.tag =~ /^foo[1-3]/)
Input:

| _time                | tag  | _value |
| -------------------- | ---- | ------ |
| 2020-01-01T00:01:00Z | foo1 | 1.0    |
| 2020-01-01T00:02:00Z | foo5 | 1.2    |
| 2020-01-01T00:03:00Z | bar3 | 1.8    |
| 2020-01-01T00:04:00Z | foo3 | 0.9    |
| 2020-01-01T00:05:00Z | foo2 | 1.4    |
| 2020-01-01T00:06:00Z | bar1 | 2.0    |
Output:

| _time                | tag  | _value |
| -------------------- | ---- | ------ |
| 2020-01-01T00:01:00Z | foo1 | 1.0    |
| 2020-01-01T00:04:00Z | foo3 | 0.9    |
| 2020-01-01T00:05:00Z | foo2 | 1.4    |
Use the Flux Geo package to filter geo-temporal data and group by geographic location or track.
import "experimental/geo"
sampleGeoData
|> geo.filterRows(region: {lat: 30.04, lon: 31.23, radius: 200.0})
|> geo.groupByArea(newColumn: "geoArea", level: 5)
Optimize your Flux queries to reduce their memory and compute (CPU) requirements.
This guide walks through using Flux queries in Chronograf dashboard cells, what template variables are available, and how to use them.