Is there a recommended or optimum total index size that a single indexer can handle?
I have not analysed this rigorously, but indexers holding around 500 GB of indexed data restart noticeably faster than those holding 2 TB. I am wondering if anyone has run into problems because of this, or if others follow rules of thumb like "whenever our indexers manage more than X GB, we add another indexer".
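As a rough way to measure how much indexed data each indexer currently holds, a `dbinspect` search along these lines can be used (a sketch only; the `index=*` filter and rounding are illustrative, and field availability may vary by Splunk version):

```
| dbinspect index=*
| stats sum(sizeOnDiskMB) AS totalMB by splunk_server
| eval totalGB = round(totalMB/1024, 1)
```

Running this periodically would let you track per-indexer size against whatever threshold you settle on.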
asked 16 Apr '12, 22:58
Have you checked out Splunk's doc on Capacity Planning? (docs here) It covers the best practices for scaling Splunk to meet your needs.
Also, if you have not already done so, I would check Splunk's docs on Retirement and Archiving (docs here), as this may help with performance on larger indexers where you do not need as much data to remain "fresh" (searchable) in the warm and cold buckets.
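For reference, retirement is controlled per index in `indexes.conf` via age and size limits. A minimal sketch, assuming a hypothetical index named `my_index` (the values below are illustrative, not recommendations):

```
# indexes.conf (illustrative values, not recommendations)
[my_index]
# roll buckets to frozen (deleted by default) after ~90 days
frozenTimePeriodInSecs = 7776000
# cap this index's total size at ~500 GB
maxTotalDataSizeMB = 512000
# optionally archive frozen buckets instead of deleting them
# coldToFrozenDir = /archive/my_index
```

Tightening these limits keeps less data live on the indexer, which should help with restart times like the ones you describe.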
Apologies if I have missed the point of your question, but generally I would expect an indexer with more "active" data to take longer to restart than an instance with less.
answered 17 Apr '12, 00:53