Splunk. Configuration steps
- Install the agent: see How to install
- Make sure that the Anodot agent port is open and reachable from the Splunk server host.
```yaml
version: '3.1'
services:
  dc:
    image: anodot/streamsets:latest
    container_name: anodot-sdc
    restart: on-failure
    ports:
      - "AGENT_PORT:AGENT_PORT"
```
Example:
```yaml
version: '3.1'
services:
  dc:
    image: anodot/streamsets:latest
    container_name: anodot-sdc
    restart: on-failure
    ports:
      - "38080:38080"
```
The TCP server on the agent will listen on port 38080. Use this port number when configuring the agent source and message forwarding.
- Now let's test the connection between the Splunk server and the agent. You can download our test client here: Anodot TCP client command-line application
- Download and start the Anodot TCP client on your Splunk server:
```
./anodot_tcp_client_darwin_amd64.uu --host 76.49.127.23 --port 38080
Connecting to 76.49.127.23:38080...
message sent
message sent
```
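If the test client is not available for your platform, a plain nc check from the Splunk server can serve the same purpose. A minimal sketch, assuming nc is installed on the Splunk host and the agent is reachable at the address used above:

```bash
# Send a single test line to the agent's TCP port
echo "Anodot TCP test message" | nc 76.49.127.23 38080
```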
- On the agent side, use the nc tool to start a simple TCP server that prints incoming messages to the console:
```
docker exec -it anodot-sdc bash -c "nc -l -p 38080 -v"
listening on [::]:38080 ...
connect to [::ffff:172.24.0.3]:38080 from ip-172-24-0-1.ec2.internal:40160 ([::ffff:172.24.0.1]:40160)
2020.03.17 11:40:59 Anodot TCP test message
2020.03.17 11:41:01 Anodot TCP test message
2020.03.17 11:41:03 Anodot TCP test message
2020.03.17 11:41:05 Anodot TCP test message
```
- Once the connection from the Splunk server to the Anodot agent is established successfully, proceed with the Splunk configuration.
Splunk forwarders can forward raw data to the Anodot system over a plain TCP socket. Because they are forwarding to a non-Splunk system, they can send only raw data.
By editing outputs.conf, props.conf, and transforms.conf, you can configure a heavy forwarder to route data conditionally to the Anodot system, in the same way that it routes data conditionally to other Splunk instances. You can filter the data by host, source, or source type. You can also use regular expressions to further qualify the data. More details on Splunk fields can be found here.
Example of how to forward data based on source:
- Navigate to $SPLUNK_HOME/etc/system/local
- Edit (or create if missing) the props.conf file:
```
[source::<source_name>]
TRANSFORMS-anodot-log = anodot-log
```
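Filtering by host or source type works the same way; only the props.conf stanza header changes. A hedged sketch with hypothetical stanza names (not part of the original example):

```
# Route events from a specific host
[host::<host_name>]
TRANSFORMS-anodot-log = anodot-log

# Route events of a specific source type
[<sourcetype_name>]
TRANSFORMS-anodot-log = anodot-log
```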
- Edit (or create if missing) the transforms.conf file:
```
[anodot-log]
REGEX = .
DEST_KEY = _TCP_ROUTING
FORMAT = Anodot
```
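The REGEX = . above matches every event, so everything from the matching source is forwarded. To forward only a subset of events you can narrow the expression. A hedged sketch (the pattern is only an illustration, not part of the original configuration):

```
[anodot-log]
# Forward only events whose raw text contains "ERROR"
REGEX = ERROR
DEST_KEY = _TCP_ROUTING
FORMAT = Anodot
```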
- Edit (or create if missing) the outputs.conf file:
```
[tcpout]
defaultGroup = nothing
indexAndForward = 1

[tcpout:Anodot]
server = <AGENT_HOST:PORT>
sendCookedData = false
disabled = false
blockOnCloning = false
# seconds to wait for the Agent to accept data when the output queue is full, before dropping events
dropEventsOnQueueFull = 3
```
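If you run more than one agent instance, the same target group can list several servers and the forwarder will distribute data between them. A hedged sketch with hypothetical hostnames:

```
[tcpout:Anodot]
server = agent-host-1:38080, agent-host-2:38080
sendCookedData = false
```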
- Reload the Splunk configuration by clicking "Refresh" on this page: http://<splunk_host>:8000/en-US/debug/refresh
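If you prefer the command line, restarting Splunk also picks up the edited .conf files. A minimal sketch, assuming a standard installation path:

```bash
# Restart Splunk to reload outputs.conf, props.conf, and transforms.conf
$SPLUNK_HOME/bin/splunk restart
```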
More examples of how to configure forwarders can be found here: https://docs.splunk.com/Documentation/Splunk/8.0.2/Forwarding/Forwarddatatothird-partysystemsd
Once forwarding is configured, you can use the nc tool to check that it works:
```
docker exec -it anodot-sdc bash -c "nc -l -p 38080 -v"
listening on [::]:38080 ...
connect to [::ffff:172.24.0.3]:38080 from ip-172-24-0-1.ec2.internal:38080 ([::ffff:172.24.0.1]:40182)
231.163.50.25 - moen1386 [17/Mar/2020:11:41:28 +0000] "DELETE /monetize HTTP/2.0" 203 26860
184.198.213.168 - oreilly7555 [17/Mar/2020:11:41:29 +0000] "DELETE /enhance/collaborative/b2b HTTP/1.1" 504 16904
212.114.206.80 - olson8154 [17/Mar/2020:11:41:30 +0000] "PATCH /platforms/killer/integrated/clicks-and-mortar HTTP/1.1" 400 1867
```
After configuring Splunk, you can configure the pipeline config spec. Supported data formats: json, csv, avro, log. See Data formats config examples.