Your easiest solution might also be the simplest one: on each indexer, change the setting in deploymentclient.conf that controls how often it checks the deployment server for updates to something much longer, like 15 or 30 minutes. (This assumes you can tolerate several minutes of lag before they pick up changes.) Then restart the two indexers roughly half that interval apart. When each one comes back up, its timer for the next check-and-update restarts, so they stay offset.
(Sorry I can't tell you exactly which setting; the Splunk docs are down at the moment, but it's pretty obvious when you see it.)
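From memory (please verify against the docs once they're back up), I believe the setting is `phoneHomeIntervalInSecs`, so the sketch would look something like this on each indexer:

```ini
# etc/system/local/deploymentclient.conf (or an app-level copy)
[deployment-client]
# How often (in seconds) this client phones home to the deployment
# server for updates. 1800 = 30 minutes; default is much shorter.
phoneHomeIntervalInSecs = 1800

[target-broker:deploymentServer]
# Placeholder hostname -- use your actual deployment server here.
targetUri = deploymentserver.example.com:8089
```

A restart (or `splunk reload deploy-server` on the DS side plus a client restart) is needed for the new interval to take effect.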
If you do that, each should then, more or less, check for changes and restart at slightly different times. If one restarts faster than the other they'll "drift" fairly quickly, but this might work for you.
Personally, I would not be concerned about this problem at all; in fact, I'd try to make them both go down at the same time. The point being that I'd rather they have a short period of no data than a longer period of wrong data because only half of it is there. And... why not cluster? The only real overhead is the Cluster Master (which can be a smallish VM), and even if you leave RF/SF=1 so you don't replicate data, you get this rolling restart for free... But that's another topic. 🙂
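If you do go the cluster route, a minimal sketch of the server.conf stanzas involved might look like this (hostnames and the key are placeholders; newer Splunk versions prefer `mode = manager` / `manager_uri`, so check the docs for your version):

```ini
# On the Cluster Master (a small VM is fine) -- server.conf
[clustering]
mode = master
# RF/SF = 1 means no data replication, as discussed above.
replication_factor = 1
search_factor = 1
pass4SymmKey = <your_shared_secret>

# On each indexer (cluster peer) -- server.conf
[clustering]
mode = slave
# Placeholder URI -- point this at your actual Cluster Master.
master_uri = https://cm.example.com:8089
pass4SymmKey = <your_shared_secret>

[replication_port://9887]
```

With that in place, `splunk rolling-restart cluster-peers` from the CM restarts the indexers one at a time for you.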