
Dedicated Splunk Job Scheduler

mark
Path Finder

Hi All,

I would like to have a dedicated job scheduling host for Splunk.
I.e., a host that routinely processes all scheduled searches in the background (something an end user doesn't see), leaving the visible search heads free to respond to ad hoc requests.

So I've created a new Splunk instance, added it to the existing search head pool, and then disabled scheduled searching on the end-user-facing search heads via default-mode.conf:
[pipeline:scheduler]
disabled = true
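
For reference, a minimal sketch of how that stanza might be placed on each end-user-facing search head (the path below is the usual location for local config and is an assumption, not necessarily what was used here; Splunk needs a restart afterwards):

# $SPLUNK_HOME/etc/system/local/default-mode.conf
[pipeline:scheduler]
disabled = true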

This works great - only my dedicated scheduler runs jobs now. But since the other search heads have the scheduler role disabled, they display 'NONE' as the next scheduled time for a job, even though another host will pick up the scheduled search...

So I guess this isn't the right way to do this; I haven't found any documentation on this either.

What is the best way to set up a dedicated job scheduler (to take load off the existing search heads)?

Thanks,
Mark


kallu
Communicator

So everything is working as it should: scheduled searches are all run by the dedicated search head, and the other search heads don't run any. If you log in to the search head that IS running the scheduler, you should also see the correct scheduled times for your searches. I think this is a feature, not a bug, in your config, and everything seems to be the way you wanted. After all, those normal search heads are not going to run any of the scheduled searches, so in that sense NONE is the correct answer for them.


aaronkorn
Splunk Employee

Is it possible to run the saved search on a separate job server and return the results to a dashboard on the search head?
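
One possible approach (just a sketch, assuming the job server stays in the same search head pool so the job artifacts are shared, and using a placeholder owner/app/search name) is to pull the scheduled job's latest results into a dashboard search with loadjob:

| loadjob savedsearch="nobody:search:my_scheduled_search"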


rmorlen
Splunk Employee

More recently I took the jobs server out of the pool (disabled pooling) and copied all of the apps and user directories locally (etc/users and etc/apps). I then created a cron job to rsync pooledlocation/etc/users and pooledlocation/etc/apps from the shared location to the local location. I also do an rsync from the local etc/users back to pooledlocation/etc/users, so that when a search is updated it gets reflected in the pooled area (sketched below). I still have some troubleshooting to do with this, but basically it took I/O load off of the pooled location. We have 20,000-30,000 scheduled searches per day.
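
A rough sketch of what those cron entries might look like (the paths, schedule, and rsync flags are placeholders, not the exact setup described above):

# crontab on the job server
# pull shared apps/users down to the local instance
*/5 * * * * rsync -a /pooledlocation/etc/apps/  /opt/splunk/etc/apps/
*/5 * * * * rsync -a /pooledlocation/etc/users/ /opt/splunk/etc/users/
# push local user changes back so edits show up in the pooled area
*/5 * * * * rsync -a /opt/splunk/etc/users/ /pooledlocation/etc/users/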


rmorlen
Splunk Employee

We do this the same way; yes, that is how it works for us. We have people create and save their own search, then submit a request to have it scheduled. It would be nice to have a kind of calendar view so that we could visually "see" when searches are scheduled. You are taking some load off the search heads, but since the jobs server is still hitting the indexers, that load still lands on the indexers.


kallu
Communicator

Why do you need to know whether a search is scheduled or not?


mark
Path Finder

Thanks for your response. The partitioning of roles is how I want it to be.

The problem is that each app (let's take the deployment monitor as an example) has lots and lots of searches. It's impossible to know which of these searches are scheduled (as they all have 'NONE' displayed). Only by editing a search can one identify that it is scheduled. This doesn't make much sense to me.

Of course the scheduled times are displayed correctly on the hosts with the scheduler role enabled. But since that host is a dedicated job scheduler, I don't intend for anyone to log into it or even know it's there.
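
A possible workaround for at least listing which searches are scheduled, without opening each one, is the saved searches REST endpoint (just a sketch; the fields are those exposed by that endpoint, and it would need to run somewhere the schedule information is visible):

| rest /servicesNS/-/-/saved/searches
| search is_scheduled=1
| table title eai:acl.app cron_schedule next_scheduled_time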
