A rollup job is a periodic task that summarizes data from indices specified by an index pattern and rolls it into a new index. In the following example, we index sensor documents with different timestamps into date-based indices, and then create a rollup job that periodically rolls up the data from these indices on a cron schedule.
PUT /sensor-2018-01-01/_doc/1
{
   "timestamp": 1516729294000,
   "temperature": 200,
   "voltage": 5.2,
   "node": "a"
}
On running the above code, we get the following result −
{ "_index" : "sensor", "_type" : "_doc", "_id" : "1", "_version" : 1, "result" : "created", "_shards" : { "total" : 2, "successful" : 1, "failed" : 0 }, "_seq_no" : 0, "_primary_term" : 1 }
Now, add a second document; further documents can be indexed in the same way.
PUT /sensor-2018-01-01/_doc/2
{
   "timestamp": 1413729294000,
   "temperature": 201,
   "voltage": 5.9,
   "node": "a"
}
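Documents may also be spread across several date-based indices, as long as each index name matches the sensor-* pattern. The request below only sketches that layout; the index name sensor-2018-01-02, the document ID, and the field values are purely illustrative.

PUT /sensor-2018-01-02/_doc/3
{
   "timestamp": 1516815694000,
   "temperature": 202,
   "voltage": 5.5,
   "node": "b"
}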
Next, create a rollup job named sensor that periodically summarizes the data from all indices matching sensor-* into the sensor_rollup index −

PUT _rollup/job/sensor
{
   "index_pattern": "sensor-*",
   "rollup_index": "sensor_rollup",
   "cron": "*/30 * * * * ?",
   "page_size": 1000,
   "groups": {
      "date_histogram": {
         "field": "timestamp",
         "interval": "60m"
      },
      "terms": {
         "fields": ["node"]
      }
   },
   "metrics": [
      {
         "field": "temperature",
         "metrics": ["min", "max", "sum"]
      },
      {
         "field": "voltage",
         "metrics": ["avg"]
      }
   ]
}
The cron parameter controls when and how often the job activates. When a rollup job’s cron schedule triggers, it begins rolling up from the point where it left off after the last activation.
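Note that a rollup job is created in a stopped state and does not process any data until it is started. The two requests below use the rollup job APIs to start the sensor job from this example and to inspect its configuration, status, and indexing stats; the exact fields returned in the status response can vary between Elasticsearch versions.

POST _rollup/job/sensor/_start

GET _rollup/job/sensor

Once the job is started, each cron trigger rolls up any new documents from the matching indices into sensor_rollup.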
After the job has run and processed some data, we can search the rolled-up data with the _rollup_search endpoint using the Query DSL.
GET /sensor_rollup/_rollup_search
{
   "size": 0,
   "aggregations": {
      "max_temperature": {
         "max": {
            "field": "temperature"
         }
      }
   }
}
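The _rollup_search endpoint can also be given a comma-separated list of live indices and the rollup index, in which case Elasticsearch combines recent raw documents with the rolled-up history in a single set of aggregation results. A sketch of such a combined query, reusing the index names from this example, looks like the following −

GET /sensor-2018-01-01,sensor_rollup/_rollup_search
{
   "size": 0,
   "aggregations": {
      "max_temperature": {
         "max": {
            "field": "temperature"
         }
      }
   }
}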