I'd like a system with multiple dedicated search heads, but for various reasons (avoiding added complexity, dependencies on NFS, pooling across a WAN, etc.) I would prefer to avoid setting up search head pooling between them. One of the search heads will function as the primary, with the other as a backup in case the primary fails.
I need to replicate the $SPLUNK_HOME/etc/users and $SPLUNK_HOME/etc/apps directories between the search heads, and I have seen mention of others using rsync for this. But how do you deal with scheduled searches in this situation?
Users are free to set up their own scheduled searches, alerts, and even summary indexing, and I do not want those running in duplicate on both search heads.
Splunk version 4.3.1.
asked 14 Mar '12, 04:43
We have gone through this same issue. We went with pooling. The apps aren't really the issue, since they probably don't change very often; the user information does change. (We have over 2000 users and get about 400 unique user logins per day, so this was an issue for us.)
Scheduled searches can be a real headache if you don't get them under control. We set up a jobs server (basically a search head that users don't access directly). We disabled scheduled searches on the search heads that users do access, so that scheduled work doesn't affect interactive search performance; all scheduled searches run on the jobs server. We also turned off the ability for users to schedule searches themselves and require them to come to us to get searches scheduled. A better option is for them to create a Splunk app that contains their scheduled searches; then we can use the deployment server to push out any updates they might have.
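For reference, a sketch of the two configuration knobs involved, as documented for Splunk 4.x (verify the stanza names against your version, and substitute your own role names):

```
# $SPLUNK_HOME/etc/system/local/default-mode.conf on each user-facing
# search head -- disables the search scheduler, so scheduled searches
# run only on the jobs server:
[pipeline:scheduler]
disabled = true

# $SPLUNK_HOME/etc/system/local/authorize.conf -- a role without the
# schedule_search capability cannot schedule searches:
[role_user]
schedule_search = disabled
```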
answered 01 Jun '12, 07:18