How to forward events from logstash to Splunk

Eduardo Patrocinio
3 min read · Jun 2, 2018

In my last blog, I showed how to install Splunk in a Kubernetes environment, particularly IBM Cloud Private. That was simple, but not very useful, as Splunk was not receiving any events from Kubernetes.

Now, I am going to show how to send the Kubernetes log information to Splunk.

First, how?

Filebeat, logstash, Elasticsearch: you name it, they are all part of the flow that takes the Kubernetes logs and persists them to a database.

So we have many points of integration between these components and Splunk. I decided to use logstash as a way to forward the logs (events) to Splunk.

The solution is straightforward, but I first need to cover one aspect of Splunk in the next section.

Enabling HTTP Event Collector in Splunk

Before we configure logstash, we need to enable Splunk to receive HTTP Events.

Follow this page to enable HTTP Event Collector (HEC) and generate a token:
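Once you have a token, you can verify that HEC is accepting events before touching logstash. Here is a quick sketch with curl; the hostname, port, and token are placeholders for your own environment:

```shell
# Send a test event to the HTTP Event Collector.
# Replace splunkenterprise:8088 and <your token> with your own values.
curl -k http://splunkenterprise:8088/services/collector/event/1.0 \
  -H "Authorization: Splunk <your token>" \
  -d '{"event": "hello from curl"}'
```

A working HEC endpoint answers with a JSON body such as {"text":"Success","code":0}, and the event shows up in the Splunk search UI.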

Configure logstash to send events to Splunk

Run the following command to retrieve the logstash pipeline configuration:

kubectl get cm logstash-pipeline -n kube-system -o yaml > logstash-pipeline.yaml

Now open the file logstash-pipeline.yaml with your favorite editor, because we need to update it with the Splunk token.

Search for the following section in the YAML file:

output {
  elasticsearch {
    index => "logstash-%{+YYYY.MM.dd}"
    hosts => "elasticsearch:9200"
  }
}

Add the following http output inside that output section, replacing <your token> with the Splunk token generated in the previous step:

http {
  http_method => "post"
  url => "http://splunkenterprise:8088/services/collector/event/1.0"
  headers => ["Authorization", "Splunk <your token>"]
  mapping => {
    "event" => "%{log}"
  }
}
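For reference, the resulting output section should look roughly like this after the edit. This is a sketch: the existing elasticsearch output is kept as-is, the http output is simply added next to it, and the %{log} field assumes your filter stage puts the container log line in a field named log:

```
output {
  elasticsearch {
    index => "logstash-%{+YYYY.MM.dd}"
    hosts => "elasticsearch:9200"
  }
  http {
    http_method => "post"
    url => "http://splunkenterprise:8088/services/collector/event/1.0"
    headers => ["Authorization", "Splunk <your token>"]
    mapping => {
      "event" => "%{log}"
    }
  }
}
```

With both outputs in place, every event keeps flowing to Elasticsearch while a copy is posted to Splunk.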

Deploy the new configuration

We are now ready to deploy the logstash pipeline configuration.

Run the following command to replace the current configuration:

kubectl replace cm logstash-pipeline -f logstash-pipeline.yaml

Then we need to recycle the logstash Pod. Run the following command to find the existing Pod:

kubectl get po -n kube-system | grep logstash

You will see output like this; note the Pod name:

patro:tmp edu$ kubectl get po -n kube-system | grep logstash
logstash-5c8c4954d9-gzkdt 1/1 Running 0 2h

Now delete the Pod:

kubectl delete po -n kube-system <pod-id>

Kubernetes will start a new Pod with the refreshed configuration (from the ConfigMap). You can see the output by running the following command:

kubectl logs -f $(kubectl get po -n kube-system | grep logstash | awk '{print $1}')

Conclusion

In a few steps, we were able to configure Splunk to receive the log events from logstash.

Now, if you go to the Splunk UI, you will see all the Kubernetes log events.
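As a quick check, you can search for the forwarded events in the Splunk search bar. The index and source below are assumptions; HEC events land in whatever index and sourcetype your token was configured with:

```
index=main source="http:*" | head 20
```

If nothing comes back, double-check the token, the HEC index settings, and the logstash Pod logs from the previous step.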

Bring your plan to the IBM Garage.
Are you ready to learn more about working with the IBM Garage? We’re here to help. Contact us today to schedule time to speak with a Garage expert about your next big idea. Learn about our IBM Garage Method, the design, development and startup communities we work in, and the deep expertise and capabilities we bring to the table.

Schedule a no-charge visit with the IBM Garage.
