Hi all,
so we installed the OPSEC app yesterday (took about two hours) and now we get logs - a LOT of logs.
Our firewall team expected around 25 GB per day; at 8 a.m. we were already at around 40 GB.
Now I'm wondering whether the OPSEC app configuration is somehow incorrect and data is being indexed multiple times, or something like that. I ran a few very short tests to find duplicates (just searched for the raw text of an event) but could not find any.
Are there any known configuration mistakes that could cause this much data to be indexed? During configuration we had to create/delete/re-create several connections and trust relationships, but now we only have the connections we actually need and want.
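A quick way to sanity-check for exact duplicates outside of search is to export a sample of raw events to a text file and count identical lines. This is just a minimal sketch (the sample events below are made up for illustration, not from your actual fw.log):

```python
from collections import Counter

def find_duplicates(lines):
    """Return raw event lines that appear more than once, with their counts."""
    counts = Counter(line.strip() for line in lines if line.strip())
    return {event: n for event, n in counts.items() if n > 1}

# Hypothetical sample events; in practice read these from an exported file.
events = [
    "accept src=10.0.0.1 dst=10.0.0.2 service=https",
    "drop src=10.0.0.3 dst=10.0.0.4 service=ssh",
    "accept src=10.0.0.1 dst=10.0.0.2 service=https",
]
print(find_duplicates(events))
# {'accept src=10.0.0.1 dst=10.0.0.2 service=https': 2}
```

Note this only catches byte-identical duplicates; events re-indexed with a different timestamp field would not show up.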
What worries me a little is the following configuration in the opsec-log-status.conf file. I see multiple stanzas for the same data source (clm-data-1), but only for the last one is the "last_rec_pos" value increased automatically.
[1466632741@clm-data-1]
fileid = 1466632741
filename = fw.log
last_rec_pos = 23801843
[1466644651@clm-data-1]
fileid = 1466644651
filename = fw.log
last_rec_pos = 23725430
[1466655679@clm-data-1]
fileid = 1466655679
filename = fw.log
last_rec_pos = 23407048
[1466661952@clm-data-1]
fileid = 1466661952
filename = fw.log
last_rec_pos = 1710000
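For what it's worth, the stanzas above follow a fileid@source naming pattern, so you can parse the file and see which stanza per data source has the highest fileid (presumably the active read pointer; whether the app ever re-reads the older stanzas is something to verify, not something this sketch proves). A minimal sketch using the stanzas quoted above:

```python
import configparser

# Two of the stanzas from the opsec-log-status.conf snippet above.
CONF = """
[1466632741@clm-data-1]
fileid = 1466632741
filename = fw.log
last_rec_pos = 23801843

[1466661952@clm-data-1]
fileid = 1466661952
filename = fw.log
last_rec_pos = 1710000
"""

def active_stanzas(text):
    """For each data source (the part after '@'), keep the stanza
    with the highest fileid and report its last_rec_pos."""
    cp = configparser.ConfigParser()
    cp.read_string(text)
    latest = {}
    for section in cp.sections():
        fileid, source = section.split("@", 1)
        if source not in latest or int(fileid) > int(latest[source][0]):
            latest[source] = (fileid, cp[section]["last_rec_pos"])
    return latest

print(active_stanzas(CONF))
# {'clm-data-1': ('1466661952', '1710000')}
```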
Or are there any options to reduce the amount of data indexed by the OPSEC app? We could deactivate VPN and audit logging, but that's only around 1% of all logs.
Thank you !
Are you using Splunk_TA_opseclea_linux22 (aka version 3.1) or Splunk_TA_checkpoint-opseclea (aka version 4.0)?
If you're using 3.1, are you passing unique values for configentity? E.g. audit, vpn, ips, etc.?
You might also be pulling historical data as well as current. If that's the case, the data volume should settle down after your initial import.