Splunk Enterprise

How to troubleshoot a scheduler that constantly has a high number of concurrent searches running?

joshiro
Communicator

We are running an SHC on Splunk Enterprise (on-prem) 9.0.1 and noticed that the concurrent search count on one of the nodes is roughly three times higher than on the rest, even though scheduler delegation shows searches being delegated evenly across the nodes.

Most of the scheduled searches come from an app that runs dbx queries to keep some lookups updated. These are scheduled to run only a few times a week, but they appear to be running constantly in the scheduler.
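
For context, this is roughly the scheduler.log search we use to check how often those saved searches actually fire (assuming the default scheduler sourcetype and field names in _internal; adjust if your environment differs):

index=_internal sourcetype=scheduler status=*
| stats count avg(run_time) as avg_run_time_sec by savedsearch_name, status, host
| sort - count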

These concurrent searches run constantly even after a restart of the node.

It doesn't happen on a standalone instance with the same apps, so we think it is a clustering issue.

How can we troubleshoot/debug this behaviour?
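
For what it's worth, this is the kind of introspection search we have been using to compare concurrency per member (the field names assume the default splunk_resource_usage sourcetype in _introspection, where host corresponds to the SHC member):

index=_introspection sourcetype=splunk_resource_usage component=PerProcess data.search_props.sid=*
| bin _time span=1m
| stats dc(data.search_props.sid) as concurrent_searches by _time host
| timechart span=1m max(concurrent_searches) as concurrent_searches by host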

joshiro
Communicator

The search concurrency count on each SH node appears constant even though we are not running real-time searches.
It seems that the scheduler in the SH cluster has some stuck processes that keep running even after a restart.

Any ideas on how to clean up stuck processes in the scheduler?
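
In case it helps, this is how we are listing long-running jobs on the affected member (search/jobs REST endpoint, run locally on that member; the one-hour threshold is arbitrary):

| rest /services/search/jobs splunk_server=local
| search dispatchState=RUNNING
| where runDuration > 3600
| table sid, label, title, author, runDuration, isRealTimeSearch
| sort - runDuration

The stuck SIDs can then be finalized or cancelled from Activity > Jobs, but we would rather understand why they get stuck in the first place.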
