Hi @shakti, there's a delay between the event timestamp and the indexing timestamp, probably caused by a data volume that's too high. This could be caused by a queue issue on the Forwarder, by network latency, or by a resource problem (usually storage performance) on your Indexers. You can check the queues using a search like the following:

index=_internal source=*metrics.log sourcetype=splunkd group=queue
| eval name=case(name=="aggqueue","2 - Aggregation Queue",
name=="indexqueue", "4 - Indexing Queue",
name=="parsingqueue", "1 - Parsing Queue",
name=="typingqueue", "3 - Typing Queue",
name=="splunktcpin", "0 - TCP In Queue",
name=="tcpin_cooked_pqueue", "0 - TCP In Queue")
| eval max=if(isnotnull(max_size_kb),max_size_kb,max_size)
| eval curr=if(isnotnull(current_size_kb),current_size_kb,current_size)
| eval fill_perc=round((curr/max)*100,2)
| bin _time span=1m
| stats median(fill_perc) AS "fill_percentage" perc90(fill_perc) AS "90_perc" max(max) AS max max(curr) AS curr by host, _time, name
| where fill_percentage>70
| sort -_time

About resources: did you check the IOPS of your storage? Do your Indexers have the correct number of CPUs? And finally, does your network have sufficient bandwidth to support your data volume?
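To quantify the delay itself, you can compare _time with _indextime; this is only a minimal sketch, the time range and the grouping fields are examples to adapt to your environment:

index=* earliest=-4h
| eval lag_sec=_indextime-_time
| stats avg(lag_sec) AS avg_lag perc90(lag_sec) AS p90_lag max(lag_sec) AS max_lag by host, sourcetype
| sort -avg_lag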
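To see whether volume spikes line up with the moments when the queues fill, you can also check the indexing thruput from the same metrics.log (per_host_thruput is one of the standard thruput groups; per_index_thruput and per_sourcetype_thruput work the same way):

index=_internal source=*metrics.log sourcetype=splunkd group=per_host_thruput
| timechart span=1m sum(kb) AS kb_per_minute by series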
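About the IOPS: if introspection data is enabled on your Indexers (it is by default), you can get a rough view of the storage load; I'm assuming here the data.reads_ps and data.writes_ps fields that the Monitoring Console panels use, so verify them on your version:

index=_introspection sourcetype=splunk_resource_usage component=IOStats
| eval iops='data.reads_ps'+'data.writes_ps'
| timechart span=1m median(iops) AS median_iops by host

Ciao. Giuseppe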