All the documentation says indexer planning for Enterprise Security should be done at 100 GB/day per indexer. By that calculation, a large project ingesting 5 TB/day (~100K EPS of log data) needs a minimum of 50 indexers, which requires a lot of servers.
My question is: what is the relationship between disk IOPS and indexer capacity? We know more IOPS means more indexing power.
i.e., if we extrapolate linearly, does 100 GB/day at 800 IOPS imply 1 TB/day at 8,000 IOPS?
If we use a disk system with 40K IOPS, how can we estimate the indexer count needed to support 30 concurrent searches?
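To make the arithmetic behind the question concrete, here is a small sketch. The 100 GB/day-per-indexer reference figure is from the sizing documentation quoted above; the linear-in-IOPS extrapolation is purely the assumption being asked about, not a Splunk-validated scaling model:

```python
# Naive indexer sizing arithmetic.
# GB_PER_INDEXER_PER_DAY comes from the ES sizing guidance (100 GB/day);
# REFERENCE_IOPS (800) is the commonly cited reference-storage figure.
# The linear IOPS scaling below is an ASSUMPTION for illustration only.

GB_PER_INDEXER_PER_DAY = 100
REFERENCE_IOPS = 800

def indexers_by_volume(daily_gb):
    """Minimum indexer count if each handles the reference 100 GB/day."""
    return -(-daily_gb // GB_PER_INDEXER_PER_DAY)  # ceiling division

def capacity_if_linear_in_iops(iops):
    """HYPOTHETICAL per-indexer GB/day if capacity scaled linearly with IOPS."""
    return GB_PER_INDEXER_PER_DAY * iops / REFERENCE_IOPS

print(indexers_by_volume(5000))          # 5 TB/day at reference sizing -> 50
print(capacity_if_linear_in_iops(8000))  # linear assumption -> 1000.0 GB/day
```

As the answer below notes, real sizing is not linear in IOPS alone: search concurrency and CPU cores constrain per-indexer capacity well before a 40K-IOPS disk system does.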
Thank you,
Indexer scaling is about balancing sufficient CPU cores to handle search requests and data parsing with plentiful IOPS to service I/O requests in a timely manner. I suggest reaching out to your Splunk Sales Engineer and arranging a chat with a PS Architect resource to help evaluate your requirements.
Current ES scale-testing notes focus on data volume per day, per datamodel, per indexer. So depending on the data sources and use cases, maintaining low datamodel latency on an indexer at a given daily data volume might require more IOPS, more CPU cores, and some configuration tuning.
I could not see answers to the questions above.