Getting Data In

What is the best way to monitor large log files?

gaurav_maniar
Builder

Hi Team,

What is the best way to monitor large rolling log files?

As of now, I have the following configuration to monitor the files (there are 180+ log files):

[monitor:///apps/folders/.../xxx.out]
index=app_server

At the end of the month, the log files are deleted and new log files are created by the application. The issue is that the files grow to 20 GB+ in size by the end of the month.

Recently, when we migrated the server, we started getting the following errors for some of the log files:

12-02-2020 19:03:58.335 +0530 ERROR TailReader - File will not be read, is too small to match seekptr checksum (file=/xxx/xxx/xxx/xxx.out). Last time we saw this initcrc, filename was different. You may wish to use larger initCrcLen for this sourcetype, or a CRC salt on this source. Consult the documentation or file a support case online at http://www.splunk.com/page/submit_issue for more info.
WARN  TailReader - Enqueuing a very large file=<hadoop large file> in the batch reader, with bytes_to_read=4981188783, reading of other large files could be delayed
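
As I understand the error, Splunk identifies a monitored file by a CRC of its first initCrcLen bytes (256 by default), so logs that begin with the same header bytes can collide. The error's first suggestion would look roughly like this in inputs.conf (just a sketch; the 1024 value is an illustrative guess, not something I have tested yet):

[monitor:///apps/folders/.../xxx.out]
index=app_server
# hash the first 1 KB of each file instead of the default 256 bytes
initCrcLen = 1024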

 

I tried the "crcSalt = <SOURCE>" option as well, but it made no difference.
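
For completeness, the stanza with the salt looked roughly like this (same monitor path as above):

[monitor:///apps/folders/.../xxx.out]
index=app_server
# the literal string <SOURCE> salts the checksum with the file's full path
crcSalt = <SOURCE>

As I read the docs, that salt only distinguishes files whose full paths differ, so it may not help when a rotated file reappears under the same name.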

Please suggest what configuration I should use to monitor the log files in this case.

Thanks.
