If certain indexes go down and stop reporting over a 24-hour to 7-day period, how do you run a search to easily identify which ones have gone down?
Currently I run two separate searches filtered by 24 hours / 7 days: "| tstats dc(host) where index="name" by index | fields dc(host)". This lists all of the indexes currently reporting in, and then I have to search through the data manually to find the result. I would like to optimise this by combining everything into one search so I can see these results in a single query.
By "indexes" do you really mean "indexers"? I ask because a process (indexer) is much more likely to go down than a file (index).
Finding something that is not there is not Splunk's strong suit. See this blog entry for a good write-up on it.
https://www.duanewaddle.com/proving-a-negative/
Hi richgalloway,
Sorry, I meant indexer! Apologies for my lack of detail; I'm fairly new to this and just wanted to pick some people's brains. Basically, I do health checks in the morning and run the search query above to determine if all the indexers are up and running. If some indexers are down, I have to run the query twice, filtered by time (i.e. 24 hours, 7 days), then compare the two to find the indexers which are not reporting in within the set timeframe. I was just wondering if it's possible to pipe (|) an additional query to highlight these IPs/hosts within one search, if that makes sense.
Use the Monitoring Console. It will tell you which indexers are down and a lot more.
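That said, if you still want a single ad-hoc search for your morning health check, a common pattern is to compare each host's most recent event time against a threshold. Here is a rough sketch (the index name and the 24-hour threshold are placeholders you would adjust); it lists hosts that have reported within the last 7 days but have been silent for more than 24 hours:

```
| tstats latest(_time) as last_seen where index="name" earliest=-7d by host
| eval hours_since = round((now() - last_seen) / 3600, 1)
| where hours_since > 24
| sort - hours_since
```

Because the baseline is "seen in the last 7 days", a host that has been dead longer than 7 days will drop off this list entirely, which is the "proving a negative" problem from the blog post above. A lookup file of expected hosts joined against the results is the usual way around that.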