Getting Data In

What is the best way to monitor large log files?

gaurav_maniar
Builder

Hi Team,

What is the best way to monitor large rolling log files?

As of now, I have the following configuration to monitor the files (there are 180+ log files):

[monitor:///apps/folders/.../xxx.out]
index=app_server

At the end of the month, the log files are deleted and new log files are created by the application.

The issue is that the log files are 20 GB+ in size by the end of the month.

Recently, when we migrated the server, we started getting the following error for some of the log files:

12-02-2020 19:03:58.335 +0530 ERROR TailReader - File will not be read, is too small to match seekptr checksum (file=/xxx/xxx/xxx/xxx.out). Last time we saw this initcrc, filename was different. You may wish to use larger initCrcLen for this sourcetype, or a CRC salt on this source. Consult the documentation or file a support case online at http://www.splunk.com/page/submit_issue for more info.
WARN  TailReader - Enqueuing a very large file=<hadoop large file> in the batch reader, with bytes_to_read=4981188783, reading of other large files could be delayed
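
If I understand correctly, the WARN is expected for files this size: anything larger than min_batch_size_bytes (set under [inputproc] in limits.conf, 20 MB by default) is handed to the batch reader instead of the tailing processor, so one ~5 GB file can delay other large files in that queue. For illustration, this is the setting I believe is involved, shown at its default value (my assumption; we have not changed it):

# limits.conf - relevant setting as I understand it, at its default
[inputproc]
# Files at or above this size are read by the batch reader instead of
# the tailing processor, one large file at a time (20971520 = 20 MB).
min_batch_size_bytes = 20971520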

I tried the "crcSalt = <SOURCE>" option as well, but there is no difference.
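
For reference, the error message also suggests a larger initCrcLen for the sourcetype. A sketch of the stanza combining both options (crcSalt = <SOURCE> is what I actually tried; the initCrcLen value of 1024 is only an illustrative guess, the default being 256 bytes):

[monitor:///apps/folders/.../xxx.out]
index=app_server
# Mix the full source path into the initial checksum so files with
# identical leading bytes are tracked as different files (what I tried):
crcSalt = <SOURCE>
# Checksum more of the file head than the default 256 bytes, as the
# TailReader error suggests; 1024 here is just an example value:
initCrcLen = 1024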

Please suggest what configuration I should use to monitor the log files in this case.

Thanks.
