influxdb-srelay

A service to route, filter, modify and replicate metrics on the fly from InfluxDB or Prometheus sources to InfluxDB databases.

Install from precompiled packages

  • Debian: deb package and its signature
  • RedHat: rpm package and its signature
  • Docker:

docker run -d --name=influxdb-srelay_instance00 -p 9096:9096 -v /mylocal/conf:/etc/influxdb-srelay/conf -v /mylocal/log:/var/log/influxdb-srelay/ tonimoreno/influxdb-srelay

Download all binary and code releases from the project's GitHub releases page.


  1. Overview
  2. Description
  3. Requirements
  4. Setup
  5. Usage
  6. Limitations
  7. Development
  8. Miscellaneous

Overview

Maintained fork of influxdb-relay, originally developed by InfluxData, with additions from https://github.com/veepee-moc. Refactored to add advanced data routing, transformation and filtering.

Be careful: the configuration file syntax has changed deeply from both the original InfluxData version and the veepee-moc fork.

Description

This project adds an advanced routing layer on top of HTTP-based InfluxDB queries.

Tested on

  • Go 1.7.4 to 1.12
  • InfluxDB 1.5 to 1.7 (2.x not yet supported)

Other versions will probably work but are untested.

Setup

From Sources

git clone  https://github.com/toni-moreno/influxdb-srelay
cd influxdb-srelay
go run build.go build
./bin/influxdb-srelay -config ./examples/sample.influxdb-srelay.conf -logs ./log
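
Once the relay is started, you can check that the HTTP listener responds; this assumes the listener port 9096 used throughout the examples in this README:

curl -I http://localhost:9096/ping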

Docker

To build your own image you need docker-ce > 17.05 (or the equivalent EE version):

git clone  https://github.com/toni-moreno/influxdb-srelay
cd influxdb-srelay
make -f Makefile.docker

docker run \
       --volume /path/to/influxdb-srelay.conf:/etc/influxdb-srelay/influxdb-srelay.conf \
       --rm \
       tonimoreno/influxdb-srelay:latest

or pull our prebuilt image:

docker pull tonimoreno/influxdb-srelay:latest
docker run \
       --volume /path/to/influxdb-srelay.conf:/etc/influxdb-srelay/influxdb-srelay.conf \
       --rm \
       tonimoreno/influxdb-srelay:latest

Usage

Architecture

(architecture diagram)

Architecture description (a minimal request example follows this list):

  • DB Backend: a reference to an individual OSS InfluxDB instance.

  • InfluxCluster: a set of DB backends working together; this is the unit that HTTP requests are routed to.

  • HTTP Input: an HTTP listener waiting for HTTP connections. Each listener serves the user-configured HTTP endpoints as well as the administrative endpoints /health, /ping, /status and /admin.

  • HTTP Endpoint: a user-configured endpoint (typically /query or /write for InfluxDB 1.X data) used to send ("WR") or retrieve ("RD") data. Each endpoint has an associated source format (Influx Line Protocol, Influx Query Language, etc). When a request matches the endpoint, its routes are checked in sequential order until one route matches.

  • HTTP Route: an HTTP route defines how to handle an incoming HTTP request that matched the endpoint. The route is only applied if all of its route filters match, and the request is then handled by each route rule in sequential order. If the route filters do not all match, the request is evaluated against the next route.

  • Route Filter: a condition over incoming HTTP or data parameters; when the condition is true, the route is matched.

  • Route Rule: a rule defines how to handle the incoming query or data. Rules can route to InfluxClusters as many times as needed, or filter and/or transform data. Several rule types are defined, each with its own behaviour; see the configuration parameter descriptions in the example config.
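
As a minimal illustration of this flow (a sketch, not taken from the example configs): assuming a listener on port 9096 with a /write endpoint whose matching route ends at an InfluxCluster, a client writes Influx Line Protocol through the relay exactly as it would against InfluxDB itself; the relay applies the matching route filters and rules before forwarding the points to the cluster backends.

# Hypothetical write through the relay; database and measurement names are examples only
curl -i -XPOST "http://localhost:9096/write?db=telegraf" --data-binary 'mem,host=host01 used_percent=23.4'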

Configuration Examples

You can find some example configurations in the examples folder.

A simple Read-Write HA load balancer

k8s multitenant

The following config lets users separate metrics labeled with distinct k8s namespaces into different databases named after each namespace (the databases must already be created in the backends).

Complex combination

HTTP Listener Administrative endpoints

/ping

This endpoint is an InfluxDB-compatible /ping endpoint that adds extra headers.

curl -I  http://localhost:9096/ping
HTTP/1.1 200 OK
Content-Length: 0
X-Influx-Srelay-Version: 0.2.0
X-Influxdb-Version: Influx-Smart-Relay
Date: Wed, 22 May 2019 07:27:03 GMT

/ping/<clusterid>

#curl -I  http://localhost:9096/ping/mycluster
HTTP/1.1 204 No Content
X-Influx-Srelay-Version: 0.2.0
X-Influxdb-Version: Influx-Smart-Relay
Date: Wed, 22 May 2019 07:31:39 GMT

/health

This endpoint provides a quick way to check whether the listener is working. Use it as a health check for external load balancers.

# curl -I  http://localhost:9096/health
HTTP/1.1 200 OK
Content-Length: 4
Content-Type: application/json
Date: Wed, 22 May 2019 07:23:03 GMT

/health/<clusterid>

This endpoint provides a quick way to check the state of all the backends of the selected cluster, identified by its clusterid. It returns a JSON object with the status of all backends defined on the cluster.
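
For example, with a hypothetical cluster id mycluster (as in the /ping example above):

curl -s http://localhost:9096/health/mycluster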

{
  "status": "problem",
  "problem": {
    "local-influxdb01": "KO. Get http://influxdb/ping: dial tcp: lookup influxdb on 0.0.0.0:8086: no such host"
  },
  "healthy": {
    "local-influxdb02": "OK. Time taken 3ms"
  }
}

If the relay encounters an error while checking a backend, this backend will be reported with the associated error in the problem object. The backends which the relay was able to communicate with will be reported in the healthy object.

The status field summarizes the general state of the backends. The defined states are listed below, followed by a minimal check sketch:

  • healthy: all backends are OK
  • problem: some backends, but not all of them, return errors
  • critical: all backends return an error
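
A minimal check sketch built on these states (hypothetical cluster id mycluster; adapt the exit codes to your monitoring system):

# Exit 0 only when all backends are OK
STATUS=$(curl -s http://localhost:9096/health/mycluster | grep -o '"status": *"[a-z]*"')
case "$STATUS" in
  *healthy*)  exit 0 ;;   # all backends are OK
  *problem*)  exit 1 ;;   # some, but not all, backends return errors
  *)          exit 2 ;;   # critical (all backends failing) or relay unreachable
esac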

/admin/<clusterid>

Whereas data manipulation relies on the /write endpoint, some other features such as database or user management are based on the /query endpoint. As InfluxDB SRelay does not send back a response body to the client(s), we are not able to forward all of the features this endpoint provides. Still, we decided to expose it through the /admin/<clusterid> endpoint.

Its usage is the same as the standard InfluxDB /query endpoint:

curl -i  -XPOST http://localhost:9096/admin/cluster_caas --data-urlencode "q=CREATE DATABASE test_4"
HTTP/1.1 200 OK
Content-Length: 580
Content-Type: application/json
Date: Wed, 22 May 2019 14:43:17 GMT

{
	"Msg": "Cluster cluster_caas : Admin Action  q=CREATE%20DATABASE%20test_4: OK",
	"Responses": [
		{
			"Body": "{\"results\":[{}]}",
			"Serverid": "influxdb01",
			"Clusterid": "cluster_caas",
			"Location": "http://127.0.0.1:8086/",
			"ContentType": "application/json",
			"ContentEncoding": "",
			"StatusCode": 200
		},
		{
			"Body": "{\"results\":[{\"statement_id\":0}]}\n",
			"Serverid": "influxdb02",
			"Clusterid": "cluster_caas",
			"Location": "http://127.0.0.1:8087/",
			"ContentType": "application/json",
			"ContentEncoding": "",
			"StatusCode": 200
		}
	]
}
The output is a JSON object with a Msg field and a Responses array containing the response from each backend in the cluster.

/status/<clusterid> (work in progress)

This endpoint provides a quick way to get InfluxCluster backend data and statistics (work in progress).

access.log

2019-05-29 22:07:12 INF  bk_duration_ms=20.070535 duration_ms=64.696947 latency_ms=44.685575 method=POST referer= returnsize=0 source=127.0.0.1:53112 status=204 trace-route="http:example-http-influxdb2> rt:telegraf> decode:ILP> rule:rename_mem_measurement_to_mem_2> rule:to_namespace_metrics> rule:route_all_data_to_cluster_linux> " url=/write?db=telegraf user=admin user-agent=telegraf write-points=26 write-size=5570

Fields

  • Time: the time at which the request was answered (and this log entry written)
  • INF: no special meaning
  • source: the IP:PORT source address
  • method: the HTTP method used (GET/POST/HEAD/...)
  • user: the username that made the request
  • url: the requested URL
  • referer: the address of the web page (i.e. the URI or IRI) that linked to the resource being requested
  • user-agent: the client software originating the request
  • write-size: the body size of the request
  • trace-route: a string chaining the IDs of the processors that handled the request
  • write-points: when data has been decoded (depends on the selected route), how many points were requested to be written
  • status: the HTTP response status generated by the request
  • returnsize: the body size of the response
  • duration_ms: complete duration, from the first incoming byte until the outgoing response has been built and sent (see the sketch after this list)
  • bk_duration_ms: time taken by the backends to process and answer the forwarded requests
  • latency_ms: time taken by the smart relay itself to do its routing/transformation work
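
For instance, a quick way to spot slow requests is to pull out the duration_ms field (a sketch; it assumes the access.log lives in the directory passed with -logs, e.g. ./log/access.log):

# List the slowest requests (duration_ms > 100) from the access log
awk '{
  for (i = 1; i <= NF; i++)
    if ($i ~ /^duration_ms=/) {
      split($i, kv, "=")
      if (kv[2] + 0 > 100) print kv[2], $0
    }
}' ./log/access.log | sort -rn | head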

Limitations

So far, this is compatible with Debian, RedHat, and other derivatives.

Development

Please read CONTRIBUTING.md carefully before making a merge request.

git clone git@github.com:toni-moreno/influxdb-srelay
go run build.go build

Miscellaneous
