Functions - Kav91/nri-flex-concept GitHub Wiki

Flex provides many useful functions to help you manipulate and tidy up your data output; they can also be used in various combinations.

The functions below are available under each defined api:

  • start_key Start processing data at a different point in your payload
  • cache Reuse output from something previously run eg. a command or url
  • store_lookups Store some values to be used as a lookup in a subsequent request
  • store_variables Store variables to reuse in any subsequent url, cmd, jmx query etc.
  • strip_keys Strip keys from a payload
  • lazy_flatten Lazy flatten a payload
  • sample_keys Create samples out of nested objects
  • replace_keys Replace entire keys or parts of a key using regex
  • rename_samples Using regex to find a key, rename the sample if found
  • rename_keys Rename keys using a contains match
  • remove_keys Remove keys entirely
  • keep_keys Remove all keys and keep particular keys
  • to_lower Converts the key to all lower case
  • snake_to_camel Converts snake_case to camelCase, eg. super_hero -> superHero
  • perc_to_decimal Converts percentages to decimals
  • sub_parse Sub parse a nested value
  • metric_parser For setting metrics to be RATEs or DELTAs
  • value_parser Use regex to find a key, and use regex again to pluck a value
  • value_transformer Use regex to find a key, and then transform the value
  • pluck_numbers Pluck all numbers out, if not found leave value as is
  • math Perform math operations with or without the current data set
  • timestamp Apply timestamps anywhere in configs, and add or subtract to timestamps as well
  • split_objects Split an object that has nested objects
  • lookup_file Supply a json file containing an array of objects, to dynamically substitute into config(s)
---
name: myFlexIntegration
apis: 
  - event_type: alertSample
    url: https://someapi.com/stats
    start_key: # <- functions such as this are applied here, under each api
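Three of the listed functions (to_lower, snake_to_camel, perc_to_decimal) have no dedicated section below; they are simple toggles set at the same level. A minimal sketch, assuming to_lower takes the same boolean form as the other two (the url and event_type are placeholders):

```yaml
name: myFlexIntegration
apis: 
  - event_type: alertSample
    url: https://someapi.com/stats
    to_lower: true        # converts keys to all lower case, eg. SuperHero -> superhero (assumed boolean form)
    snake_to_camel: true  # super_hero -> superHero
    perc_to_decimal: true # converts percentage values to decimals
```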

start_key

Can be used to process data from a different point in your payload.

eg.

You could start processing the payload below from leaderInfo:
start_key:
 - leaderInfo

or

start processing the payload from a more deeply nested point:
start_key:
 - leaderInfo
 - abc

{
    "id": "eca0338f4ea31566",
    "leaderInfo": {
        "leader": "8a69d5f6b7814500",
        "startTime": "2014-10-24T13:15:51.186620747-07:00",
        "uptime": "10m59.322358947s",
        "abc":{
            "def":123,
            "hij":234
        }
    },
    "name": "node3"
}
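Given the payload above, the two start_key settings would, illustratively, narrow processing to these portions:

```
start_key: [leaderInfo] ->
{ "leader": "8a69d5f6b7814500", "startTime": "...", "uptime": "...", "abc": { "def": 123, "hij": 234 } }

start_key: [leaderInfo, abc] ->
{ "def": 123, "hij": 234 }
```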

cache

Allows you to reuse the output of a previously run command or HTTP response, referenced by the name or URL used.

name: elasticsearchFlex
global:
    base_url: http://localhost:9200/
    # user: elastic
    # pass: elastic
    headers:
      accept: application/json
apis: 
  ### here we are calling the _stats endpoint, but splitting the stats sample to several samples to avoid overloading a single sample
  ### since Flex has inbuilt caching we can retrieve URL data previously fetched and split the sample out
  - event_type: elasticsearchTotalSample
    url: _stats
    start_key:
      - _all
      - total
  - event_type: elasticsearchPrimarySample
    cache: _stats ### we can use previously cached calls saved in memory
    start_key:
      - _all
      - total

With commands

---
name: commandCache
apis: 
  - name: getHost
    commands: 
      - run: echo "zHost:$(hostname)"
        split_by: ":"
  - name: abc
    cache: getHost

From URL output to commands

### see the full example in the repo: examples/flexConfigs/nginx-opensource-stub-example.yml

name: nginxFlex
apis:
  - name: nginxStub
    url: http://127.0.0.1/nginx_status
  - name: nginx
    merge: NginxSample
    commands:
      - cache: http://127.0.0.1/nginx_status
        split_by: ": "
        line_end: 1

store_lookups

A previous API call may return attributes that you would like to use in a subsequent call elsewhere. Flex allows you to store these attributes and use them in a later call or command.

apis: 
  - event_type: rabbitVHostSample
    url: api/vhosts
    snake_to_camel: true
    store_lookups: ### we store all vhost "name" attributes found into the lookupStore to be used later
      vhosts: name 
  - event_type: rabbitAliveTestSample         ### as we use the special lookup key, we fetch the vhosts found, and execute the next request(s)
    url: api/aliveness-test/${lookup:vhosts}  ### if multiple vhosts were stored, this will issue multiple requests
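For illustration, if the first call had stored the vhost names vhost1 and vhost2 (hypothetical values), the ${lookup:vhosts} substitution would expand the second api into one request per stored value:

```
api/aliveness-test/vhost1
api/aliveness-test/vhost2
```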

strip_keys

Strip keys can remove entire keys and objects from processing.

Remove the entire "incidents" object

apis: 
  - event_type: incidentSample
    url: incidents
    strip_keys:
      - incidents

Remove nested key(s) under the "incidents" object

We remove the "transitions" and "pagedPolicies" objects/keys under the "incidents" object.

apis: 
  - event_type: incidentSample
    url: incidents
    strip_keys:
      - incidents>transitions
      - incidents>pagedPolicies

lazy_flatten

Performs a lazy flatten, i.e. flattens a payload without any smarts. This is useful, for example, when you have an array of contact numbers or some sort of dimensional data.

eg.

receive:
{ contacts:[ {name:batman,number:911}, {name:robin,number:000} ] }
after lazy_flatten:
contacts.0.name = batman
contacts.0.number = 911
contacts.1.name = robin
contacts.1.number = 000

apis: 
  - event_type: incidentSample
    url: incidents
    lazy_flatten:
      - numbers
      - incidents>pagedUsers # we can also do it another level nested within
      - incidents>pagedTeams

sample_keys

You may receive payloads that are not arrays, but larger objects split by key. You can easily target a nested key and split those objects out into samples.

eg.

receive:
{
    "followers": {
        "6e3bd23ae5f1eae0": {
            "counts": {
                "fail": 0,
                "success": 745
            },
            "latency": {
                "average": 0.017039507382550306,
                "current": 0.000138,
                "maximum": 1.007649,
                "minimum": 0,
                "standardDeviation": 0.05289178277920594
            }
        },
        "a8266ecf031671f3": {
            "counts": {
                "fail": 0,
                "success": 735
            },
            "latency": {
                "average": 0.012124141496598642,
                "current": 0.000559,
                "maximum": 0.791547,
                "minimum": 0,
                "standardDeviation": 0.04187900156583733
            }
        }
    },
    "leader": "924e2e83e93f2560"
}

config:
name: etcdFlex
global:
    base_url: http://127.0.0.1:2379/v2/
    headers:
      accept: application/json
apis: 
  - event_type: etcdLeaderSample
    url: stats/leader
    sample_keys:
      followerSample: followers>follower.id # there are objects within objects distinguished by key, we can create samples like so

replace_keys

Uses regex to find keys to replace.

Replace the "os" key in the payload with "operatingSystem":

apis: 
  - name: redis
    commands: 
      - run: (printf "info\r\n"; sleep 1) | nc -q0 127.0.0.1 6379 # or even ### run: "redis-cli -h 127.0.0.1 -p 6379 info" ### (depends on redis-cli)
        split_by: ":"
    replace_keys:
      os: operatingSystem # replaces os > operatingSystem

rename_samples

Uses regex to find a key; if found, the sample is renamed. In the example below, if a "db" key is found, the sample is renamed to redisDbSample; if "cmd" is found, it is renamed to redisCmdSample.

---
name: redis
apis: 
  - name: redis
    url: http://127.0.0.1:8887/metrics
    prometheus: 
      enable: true
    rename_samples:
      db: redisDbSample
      cmd: redisCmdSample

rename_keys

Renames any part of a key, using regex to match and the supplied text to replace. eg. you could rename "superhero" to "hero" by:

  • renaming "super" to nothing "", or
  • renaming "superhero" to "hero"

example:
  - event_type: elasticsearchNodeSample
    url: _nodes/stats
    sample_keys:
      nodes: nodes>node.id # there are objects within objects distinguished by key, we can create samples like so
    rename_keys: # <- here
      _nodes: parentNodes # rename _nodes to parentNodes
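The superhero example could be written either way; a sketch (illustrative only, separate from the elasticsearch config above):

```yaml
# option 1: rename part of the key
rename_keys:
  super: ""          # superhero -> hero

# option 2: match the whole key
# rename_keys:
#   superhero: hero  # superhero -> hero
```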

remove_keys

Uses regex to remove any key(s) (attributes) from your data.

apis: 
  - name: redis
    commands: 
      - run: (printf "info\r\n"; sleep 1) | nc -q0 127.0.0.1 6379 # or even ### run: "redis-cli -h 127.0.0.1 -p 6379 info" ### (depends on redis-cli)
        split_by: ":"
    remove_keys: # remove any keys that contain any of the following strings
      - human # <- here

keep_keys

Delete all other keys, and keep only the ones you define using regex.

apis: 
  - name: tomcatThreads
    event_type: tomcatThreadSample
    ### note "keep_keys" will do the inverse, if you want all metrics remove the keep keys blocks completely
    ### otherwise tailor specific keys you would like to keep, this uses regex for filtering
    ### this is useful for keeping key metrics
    keep_keys: ###
      - bean
      - maxThreads
      - connectionCount

sub_parse

Splits nested values out of a single line. eg. db0:keys=1,expires=0,avg_ttl=0 becomes db0.keys = 1, db0.expires = 0, db0.avg_ttl = 0

apis: 
  - name: redis
    commands: 
      - run: (printf "info\r\n"; sleep 1) | nc -q0 127.0.0.1 6379 # remove -q0 if testing on mac
        split_by: ":"
    sub_parse:
      - type: prefix
        key: db
        split_by:
          - ","
          - "="

metric_parser

Rates and deltas can be set with the metric_parser.

By default it expects an exact match to avoid any conflicts; however, if auto_set is set to true, matching is performed using regex (this may change to default to regex).

Flex will automatically attempt to set a namespace as required for your attributes, or you can namespace based on existing attributes or a custom attribute.

eg. the metric_parser options further below set metrics to either RATE or DELTA

name: redisFlex
apis: 
  - name: redis
    commands: 
      - run: (printf "info\r\n"; sleep 1) | nc 127.0.0.1 6379 # or even ### run: "redis-cli -h 127.0.0.1 -p 6379 info" ### (depends on redis-cli)
        split_by: ":"
    remove_keys: # remove any keys that contain any of the following strings
      - human
    snake_to_camel: true 
    perc_to_decimal: true
    sub_parse:
      - type: prefix
        key: db
        split_by:
          - ","
          - "="
    custom_attributes:
      myCustomAttr: theValue
    metric_parser:
      metrics:
        totalNetInputBytes: RATE
        rate$: RATE
      namespace: # you can create a namespace with a custom attribute, or chain together existing attributes, else it will default 
        # custom_attr: "mySpecialRedisServer" 
        existing_attr:
          - redisVersion
          - tcpPort
      auto_set: true ### switches metric parser to use regex rather than direct match

value_parser

Finds keys using regex, then uses regex again to pluck the first matching value out.

eg. find "time" and pluck the first found integer out.

name: squidFlex
apis: 
  - name: squidMgrUtilization
    commands: 
      - run: squidclient -v mgr:utilization
        split_by: " = "
        line_limit: 88 # stop processing at this line as we only care about last 5 minute metrics
    pluck_numbers: true # find any numbers within a string and pluck them out
    value_parser:
      time: "[0-9]+" # use regex to find any time values, and pluck the first found integer out with the value_parser

value_transformer

eg. transform the value of a key called "key", where the value is "world", into "hello-world":

key: world -> key: hello-world

---
name: dummy
apis: 
  - event_type: myEventSample
    url: http://127.0.0.1:8887/test
    value_transformer:
      key: hello-${value} 
## value is already world, so we can substitute it back in, which would now equal "key": "hello-world"

pluck_numbers

Any values that contain numbers are automatically plucked out. If no number is found, the value is left as is.

name: squidFlex
apis: 
  - name: squidMgrUtilization
    commands: 
      - run: squidclient -v mgr:utilization
        split_by: " = "
        line_limit: 88 # stop processing at this line as we only care about last 5 minute metrics
    pluck_numbers: true # find any numbers within a string and pluck them out
    value_parser:
      time: "[0-9]+" # use regex to find any time values, and pluck the first found integer out with the value_parser

store_variables

Store variables from any execution point, and reuse in any subsequent url, command, query etc.

---
name: dummyFlex
apis: 
  - name: todo
    url: https://jsonplaceholder.typicode.com/todos/2
    store_variables:
      storedId: userId ### store the userId from this response into storedId
  - name: user
    url: https://jsonplaceholder.typicode.com/users/${var:storedId}  ### query the user route with the previously stored userId which is storedId

math

Perform math operations with or without the current data set.

name: nginxFlex
apis:
  - name: nginxStub
    url: http://127.0.0.1/nginx_status
    rename_keys:
      "Active connections": "net.connectionsActive"
    pluck_numbers: true
    math: ### <- here, the key will become the new metric key, the attributes wrapped in ${attribute} can be from any existing attributes, or you can use your own numbers too
      net.connectionsDroppedPerSecond: ${net.connectionsAcceptedPerSecond} - ${net.handledPerSecond}
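Illustratively, with hypothetical values for the two source attributes, the new metric would be computed as:

```
net.connectionsAcceptedPerSecond = 10
net.handledPerSecond             = 8
net.connectionsDroppedPerSecond  = 10 - 8 = 2
```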

timestamp

Apply timestamps anywhere in your config, and add to or subtract from them. eg. any of the below can be used wherever needed.

"${timestamp:ms}" current timestamp in milliseconds
"${timestamp:ms-5000}" subtract 5000 from current timestamp in milliseconds
"${timestamp:ms+10000}" add 10000 to current timestamp in milliseconds

Supports:

  • "ms" - milliseconds
  • "s" - seconds
  • "ns" - nanoseconds
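For example, the substitutions could be used to build a time-bounded query; the endpoint and parameter names below are illustrative, not from the source:

```yaml
apis: 
  - event_type: metricSample
    url: https://someapi.com/metrics?from=${timestamp:ms-300000}&to=${timestamp:ms} ### from 5 minutes ago to now
```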

split_objects

Split an object that has nested objects. eg. you receive a payload that looks like the below:

{
    "hg.nginx.org": {
        "processing": 0,
        "requests": 204,
        "responses": {
            "1xx": 0,
            "2xx": 191,
            "3xx": 12,
            "4xx": 1,
            "5xx": 0,
            "total": 204
        },
        "discarded": 0,
        "received": 45310,
        "sent": 2913986
    },
    "trac.nginx.org": {
        "processing": 0,
        "requests": 278,
        "responses": {
            "1xx": 0,
            "2xx": 185,
            "3xx": 84,
            "4xx": 2,
            "5xx": 6,
            "total": 277
        },
        "discarded": 1,
        "received": 65422,
        "sent": 2825682
    }
}

The following config can split these neatly for you.

name: splitObjectExample
apis:
  - event_type: NginxEndpointSample
    url: http://demo.nginx.com/api/3/http/server_zones
    split_objects: true

Output:

{
  "name": "com.newrelic.nri-flex",
  "protocol_version": "2",
  "integration_version": "0.6.0-pre",
  "data": [
    {
      "metrics": [
        {
          "discarded": 0,
          "event_type": "NginxEndpointSample",
          "integration_name": "com.newrelic.nri-flex",
          "integration_version": "0.6.0-pre",
          "processing": 0,
          "received": 54808,
          "requests": 250,
          "responses.1xx": 0,
          "responses.2xx": 236,
          "responses.3xx": 13,
          "responses.4xx": 1,
          "responses.5xx": 0,
          "responses.total": 250,
          "sent": 3357038,
          "split.id": "hg.nginx.org"
        },
        {
          "discarded": 1,
          "event_type": "NginxEndpointSample",
          "integration_name": "com.newrelic.nri-flex",
          "integration_version": "0.6.0-pre",
          "processing": 0,
          "received": 71475,
          "requests": 324,
          "responses.1xx": 0,
          "responses.2xx": 213,
          "responses.3xx": 99,
          "responses.4xx": 2,
          "responses.5xx": 9,
          "responses.total": 323,
          "sent": 3292360,
          "split.id": "trac.nginx.org"
        },
        {
          "event_type": "flexStatusSample",
          "flex.ConfigsProcessed": 1,
          "flex.EventCount": 2,
          "flex.EventDropCount": 0,
          "flex.NginxEndpointSample": 2
        }
      ],
      "inventory": {},
      "events": []
    }
  ]
}

lookup_file

Supply a JSON file containing an array of objects to dynamically substitute into config(s). This will generate a separate config dynamically for each object within the array, and substitute the variables in with the below helper substitutions.

eg. ${lf:addr} expects an "addr" attribute to exist in each object

config example:

name: portTestWithLookup
lookup_file: testLookup.json ### location of lookup file
apis: 
  - name: portTest
    timeout: 1000 ### default 1000 ms, increase if you'd like
    commands:
      - dial: ${lf:addr}
    custom_attributes: ### add some additional attributes if you'd like
      name: ${lf:name}
      abc: ${lf:abc}

testLookup.json could be, for example:

 [
     {
         "name":"google.com",
         "addr":"google.com:80",
         "abc":"def"
     },
     {
         "name":"yahoo",
         "addr":"yahoo.com:80",
         "abc":"zyx"
     },
     {
         "name":"redis",
         "addr":"localhost:6379",
         "abc":"efg"
     }
 ]