We have created a number of indexes for our different systems. At some point, one of these systems will freak out and produce 10-15 GB of data in a matter of minutes. We are still working on fixing the freak-out issue itself.
Is disabling that index when it hits 2G in a day the best way to prevent a license violation?
The only other option I have thought of is to send that forwarder's data to the null-queue. Is that a better option?
Either approach is desirable for a couple of reasons: 1) it avoids filling up the Splunk indexer's drive, and 2) it avoids license violations.
asked 11 Jan '11, 15:46
If the behavior you describe above is indeed rare and you didn't have to worry about filling up the disk, then I would just let it violate the license limit, because you are allowed a few violations within any rolling 30-day period with only a warning message. Even if search is eventually disabled due to multiple violations, indexing will never stop, and you can always request a reset license from Splunk Support, which you can use to reset the violation count back to zero.
However, if filling up the disk is a concern (and you are not able to allocate or add more disk capacity to absorb the overage), then one thing you can try is creating an alert that triggers a script when you start getting close to your licensed limit, say at 1.8 GB or more. That script could copy a nullQueue config onto the Splunk forwarders and restart them, or it could stop the forwarders altogether until you resolve the cause of the overage or the "freak out" period is over.
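As a rough sketch of what that nullQueue config could look like (assuming the flood comes from a known sourcetype; `noisy_app` here is a placeholder, and index-time TRANSFORMS only take effect on a full Splunk instance such as a heavy forwarder or the indexer, not a universal forwarder):

```
# props.conf -- apply the null-routing transform to the noisy sourcetype
# ("noisy_app" is a placeholder; substitute your sourcetype or a source:: stanza)
[noisy_app]
TRANSFORMS-setnull = setnull

# transforms.conf -- route every matching event to the nullQueue (i.e. discard it)
[setnull]
REGEX = .
DEST_KEY = queue
FORMAT = nullQueue
```

Discarded events never reach an index, so they count against neither your disk nor your license.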
BTW, if you search your _internal index for the metrics events for each day, like this:
index=_internal metrics group="per_index_thruput" series="main" earliest=-1d@d | eval totalGB=kb/1024/1024 | timechart span=1d sum(totalGB) AS total | where total > 1.8
you can run this search every five minutes or so to trigger your alert (and run your script) once the index exceeds 1.8 GB for the day. Also, if you have more than one index, you can create a separate alert for each one by setting series="your_index_name" in the sample search above.
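The triggered script mentioned above could be as simple as the following shell sketch (the hostnames and the Splunk install path are hypothetical, and this assumes the forwarders are reachable over SSH; the ssh line is left commented so a dry run only prints what it would do):

```shell
#!/bin/sh
# stop_forwarders.sh -- emergency-stop the noisy forwarders when the alert fires.
# HOSTS and SPLUNK_BIN are placeholders; adjust them for your environment.
HOSTS="fwd01 fwd02"
SPLUNK_BIN=/opt/splunk/bin/splunk

stop_forwarders() {
    for host in $HOSTS; do
        # Print the action first so a dry run shows what would be executed.
        echo "stopping splunk on $host"
        # ssh "$host" "$SPLUNK_BIN" stop   # uncomment to actually stop the forwarder
    done
}

stop_forwarders
```

Stopping the forwarders outright is cruder than the nullQueue approach, but it is simpler to reverse: start them again once the overage is resolved.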
answered 11 Jan '11, 17:02
The search string doesn't work for me; instead I get:
Specified field(s) missing from results: 'totalGB'
answered 21 Feb '11, 01:34