Kibana Elastic

How to copy a Dashboard from the default space to another

Dashboards cannot be shared across spaces in Kibana, so they need to be copied. The best way to do this is via the API:

  • Find out the dashboard ID (a UUID) from the URL
  • Export the dashboard as JSON using the API
    curl -u USER:PASSWORD "https://KIBANAURL/api/kibana/dashboards/export?dashboard=DASHBOARDUUID" > export.json
  • Import the dashboard into another space using the API
    curl -H "Content-Type: application/json" -H "kbn-xsrf: true" -u USER:PASSWORD "https://KIBANAURL/s/SPACE/api/kibana/dashboards/import" --data-binary @export.json
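
For illustration, the same round trip with hypothetical values (user, host, target space and dashboard UUID are placeholders):

curl -u admin:secret "https://kibana.example.com/api/kibana/dashboards/export?dashboard=7adfa750-4c81-11e8-b3d7-01146121b73d" > export.json
curl -H "Content-Type: application/json" -H "kbn-xsrf: true" -u admin:secret "https://kibana.example.com/s/monitoring/api/kibana/dashboards/import" --data-binary @export.json

Note that this legacy dashboards API was deprecated in Kibana 7.15 and removed in 8.x in favor of the saved objects export/import API.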

How to reduce index sizes by deleting unneeded logs

Delete all documents from an index that match a query:
POST /indexname/_delete_by_query?wait_for_completion=false
{
  "query": {
    "match": {
      "field": "value"
    }
  }
}

This creates a task and returns its ID; the task's progress can be checked with GET /_tasks/TASKID
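
If the deletion runs too long or was started by mistake, the task can also be cancelled (TASKID is a placeholder):

POST /_tasks/TASKID/_cancel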

After the deletion the index size stays the same, because the documents are only marked as deleted.
Force merge the index to expunge the deleted documents and merge it down to 1 segment (or n segments); the merge itself temporarily needs additional disk space:
POST /indexname/_forcemerge?max_num_segments=1
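
To verify the effect, the deleted-document count and store size can be compared before and after the merge (a sketch; indexname is a placeholder):

GET /_cat/indices/indexname?v&h=index,docs.count,docs.deleted,store.size
GET /_cat/segments/indexname?v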

How to move indices from Instance A to Instance B

Install elasticdump
npm install elasticdump
Run these commands with the full URLs and basic auth user/password:

  • Copy the mappings
    elasticdump --input="http://user:pass@Source:port/Sourceindex" --output="http://user:pass@Destination:port/Destinationindex" --type=mapping
  • Copy the data
    elasticdump --input="http://user:pass@Source:port/Sourceindex" --output="http://user:pass@Destination:port/Destinationindex" --type=data
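
If the destination index also needs the source's settings (number of shards, analyzers, etc.), elasticdump can copy those as well before the mappings and data (a sketch with the same placeholder URLs):

elasticdump --input="http://user:pass@Source:port/Sourceindex" --output="http://user:pass@Destination:port/Destinationindex" --type=settings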

How to implement a hot-warm-cold tier architecture

Tier assignment is defined in an Index Lifecycle Management (ILM) policy.

hot tier (= content tier)

Primary storage, where data is written to the indices.

warm tier

After 60 days, indices are moved to the warm tier.

cold tier

After 396 days, indices are moved to the cold tier.
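
A minimal sketch of a matching ILM policy (the policy name logs-tiered is a placeholder; ILM implicitly adds the migrate action that moves indices to the warm and cold data tiers):

PUT _ilm/policy/logs-tiered
{
  "policy": {
    "phases": {
      "hot": {
        "min_age": "0ms",
        "actions": {}
      },
      "warm": {
        "min_age": "60d",
        "actions": {}
      },
      "cold": {
        "min_age": "396d",
        "actions": {}
      }
    }
  }
}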

To check which tiers an index prefers, look at its settings:

GET /indexname/_settings

The response contains the tier preference, e.g. "_tier_preference" : "data_cold,data_warm,data_hot"; allocation prefers the first tier in that list that has nodes available.
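
To see which lifecycle phase an index is currently in, use the ILM explain API:

GET /indexname/_ilm/explain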

How to back up indices to Amazon S3
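
A minimal sketch using the snapshot APIs, assuming the repository-s3 plugin/module is available and AWS credentials are configured in the Elasticsearch keystore (bucket, repository and snapshot names are placeholders):

PUT _snapshot/s3_backup
{
  "type": "s3",
  "settings": {
    "bucket": "my-backup-bucket"
  }
}

PUT _snapshot/s3_backup/snapshot-1?wait_for_completion=true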

How to search with on-the-fly generated fields

In the Kibana menu, open Dev Tools (Console).

You can define custom runtime fields that can be displayed or used in queries. This is done via the runtime_mappings functionality, supplying a script that generates the field. The default scripting language is called "Painless": essentially Java with some extra syntactic sugar such as inline regexes, automatic imports and dynamic variable typing.

When generating a field, we first have to define its type (here: long). We then compute the value and return it via the emit() function.

In the example use case, we have a URI string that contains two dates defining a date range. We need to extract the two dates and calculate how many days lie between them.

e.g. /flat/ParkingStation/occupied/2022-01-01T12:30:23.000Z/2023-01-01

We then use the generated field to filter for documents whose date range is 350 days or greater (for the example URI above, the value would be 365).

The generated field is also returned in the response via the "fields" option.

GET filebeat-*/_search
{
  "fields": [
    "json.uri",
    "json.user_roles_opendata",
    "json.user_agent",
    "historyRangeDays"
  ],
  "_source": false,
  "runtime_mappings": {
    "historyRangeDays" :{
      "type": "long",
      "script" : {
        "source" : """
          String uri = doc['json.uri'].value;
          // match the URI regex pattern so that we get the last two parts of the path
          def m = /.*\/(.*)\/(.*)$/.matcher(uri);
          if(m.find()){
            // Parse the two dates, substring so that we only have the date part, and convert to epoch days
            def from = LocalDate.parse(m.group(1).substring(0,10)).toEpochDay();
            def to = LocalDate.parse(m.group(2).substring(0,10)).toEpochDay();
            // return the difference between the two days as the value for the field
            emit(to-from);
          } else {
            emit(0);
          }
        """
      }
    }
  },
  "query": {
    "bool": {
      "must": [
        {"match": {
          "json.data_fetcher.command" : "fetchMeasurementHistory"
        }},
        {"range": {
          "historyRangeDays": {
            "gte": 350
          }
        }}
      ]
    }
  }
}