I'd like to index a directory of 50,000 gzip files. The files range in size from 1 KB to 5 MB. Can Splunk monitor these files without first unpacking the gzips?
The good news is "YES, Splunk can index gzip files as is!" The bad news is, Splunk will monitor these files one at a time, instead of in parallel. Because it is not possible to predict the uncompressed size of a gzip file, Splunk processes these files in sequence for better control of disk allocation. With respect to performance, this is not ideal for handling 50k files, so please consider uncompressing them before having Splunk monitor them, to take advantage of Splunk's multi-threaded file monitoring capabilities.
Here are more details about how that works:
http://docs.splunk.com/Documentation/Splunk/latest/Data/Monitorfilesanddirectories#How_Splunk_Enterp...
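For what it's worth, a minimal monitor stanza for a directory of gzip files might look like the sketch below. The path /data/archive and the sourcetype name are placeholders I'm assuming for illustration, not anything from the original question:

[monitor:///data/archive]
whitelist = \.gz$
sourcetype = archived_logs
disabled = false

Splunk decompresses each matching .gz file on the fly as it indexes it, but, as noted above, it works through them sequentially rather than in parallel.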
I think indexing the compressed files directly is actually a bit slower than uncompressing them first (I'm not sure of the details), but it's not far off. Mostly, decompressing that much data with the zlib algorithm just takes a lot of CPU.
So, what if you don't want Splunk to read a compressed file at all? Can you use a file extension that will prevent Splunk from attempting to index the file, e.g. for rotated and compressed log files?
You can blacklist compressed files in inputs.conf so that they will be ignored:
[monitor:///var/log]
blacklist=(tgz$|zip$)
This will ignore all files in /var/log whose names end with "tgz" or "zip".
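If you also want Splunk to skip plain .gz files, as in the original question about rotated logs, you could extend the blacklist regex along these lines (a sketch, not part of the answer above):

[monitor:///var/log]
blacklist = \.(gz|tgz|zip)$

Anchoring on the dot means a file such as access_log.1.gz is skipped, while a file whose name merely happens to end in the letters "gz" is still monitored.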