shakti

Hello,

 

The Forwarder ingestion latency health indicator is showing red on my Search Head:

  • Root Cause(s):
    • Indicator 'ingestion_latency_gap_multiplier' exceeded configured value. The observed value is 5474815. Message from 452CE67F-3C57-403C-B7B1-E34754172C83:10.250.2.7:3535

Can anyone please provide any suggestions?

 


gcusello

Hi @shakti,

There's a delay between the event timestamp and the indexing timestamp, probably caused by a data volume that is too high for your infrastructure.

This could be caused by a queue issue on the Forwarder, by network latency, or by a resource problem (usually storage performance) on your Indexers.

You can check the queues using a search like the following:

index=_internal source=*metrics.log sourcetype=splunkd group=queue
| eval name=case(name=="aggqueue", "2 - Aggregation Queue",
    name=="indexqueue", "4 - Indexing Queue",
    name=="parsingqueue", "1 - Parsing Queue",
    name=="typingqueue", "3 - Typing Queue",
    name=="splunktcpin", "0 - TCP In Queue",
    name=="tcpin_cooked_pqueue", "0 - TCP In Queue")
| eval max=if(isnotnull(max_size_kb), max_size_kb, max_size)
| eval curr=if(isnotnull(current_size_kb), current_size_kb, current_size)
| eval fill_perc=round((curr/max)*100, 2)
| bin _time span=1m
| stats median(fill_perc) AS fill_percentage perc90(fill_perc) AS "90_perc" max(max) AS max max(curr) AS curr by host, _time, name
| where fill_percentage>70
| sort -_time
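
In addition to the queue fill levels, you can measure the ingestion lag itself by comparing each event's timestamp with its index time. This is only a sketch: replace index=* with one of your busiest indexes if the search is too heavy, and adapt the time range to your environment.

index=* earliest=-60m
| eval lag_sec=_indextime-_time
| stats median(lag_sec) AS median_lag_sec perc95(lag_sec) AS p95_lag_sec max(lag_sec) AS max_lag_sec by host
| sort -max_lag_sec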

About resources: did you check the IOPS of your storage?

Do your Indexers have the correct number of CPUs?

Finally, does your network have sufficient bandwidth to support your data volume?
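
To see the data volume that each host is actually sending (and so the bandwidth and indexing capacity you need), you can use the thruput metrics in _internal. Again, just a sketch, adapt the time range:

index=_internal source=*metrics.log* sourcetype=splunkd group=per_host_thruput
| stats sum(kb) AS total_kb avg(eps) AS avg_eps by series
| sort -total_kb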

Ciao.

Giuseppe


shakti

Also, if I may ask, what would be a good number of I/O operations (IOPS) for Splunk?

 


gcusello

Hi @shakti,

Surely the number of CPUs is one of the root causes of your issue, because Splunk requires at least 12 CPUs for Indexers and 16 if you also have ES.
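
As a quick check, you can see how many cores Splunk detects on each instance with a REST search from the Search Head (just a sketch; it only covers the instances reachable as search peers):

| rest /services/server/info
| table splunk_server numberOfCores numberOfVirtualCores physicalMemoryMB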

Anyway, check the IOPS (using a tool like Bonnie++ or FIO), because storage performance is the usual major issue behind queue problems.
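
You can also cross-check this from the Splunk side: when storage can't keep up, the Indexing Queue is usually the one that blocks. A count of blocked-queue events per queue (only a sketch, adjust the time range) shows where the pressure is:

index=_internal source=*metrics.log* sourcetype=splunkd group=queue blocked=true
| stats count by host, name
| sort -count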

Ciao.

Giuseppe


shakti

@gcusello Thank you for your reply.

The IOPS of the indexers and search heads is between 50 and 300... I guess that's pretty low. Do you have any suggestions on how to improve it?


gcusello

Hi @shakti,

Use faster disks (possibly SSDs) if you're using a physical server, or dedicated storage resources if you're using a virtual server.

Ciao.

Giuseppe


shakti

@gcusello I appreciate your reply.

We have an indexer clustering environment. However, both the indexers and the search head are using only 4 physical CPU cores. Do you think that could cause this problem?
