I'm currently experiencing this:
1) Run a query that returns a large number of events (say, 1mil)
2) Save the job
3) Load the events using | loadjob events=t
Step 3 only loads approximately 150k events, whereas if I click the link in the job manager directly it returns the full result set.
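For concreteness, the steps above look something like this (the search id `1270742400.123` is a made-up placeholder -- you'd use the sid shown in the job manager):

```
index=main earliest=-7d

| loadjob 1270742400.123 events=true
```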
Is this normal? If this is a bug, was this fixed in 4.1?
asked 08 Apr '10, 15:00
For any given search, Splunk will only retain a limited number of raw events, i.e., actual event data pulled out of the index by the search.
To illustrate the necessity for this, consider this simple example:
If your index contains 2 billion events that match the search, storing all 2 billion events every time you run that search would exhaust your storage in no time. Generally speaking, the 2 billion row data set is not what you're after -- it's the summarized or transformed version that is of interest.
Note that the limitation described here does not mean that Splunk cannot handle lots of events. The search language will process all events asked of it, but will abide by these practical safety controls and not cache all of the raw data.
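To put that in practice: if you end a search with a transforming command, the job stores the (small) summarized result table rather than the raw events, and loadjob will retrieve it in full. A sketch, with an illustrative index and field names:

```
index=web status=500
| stats count by host
```

Retrieving this job later with `| loadjob <sid>` returns the per-host counts, not the raw matching events.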
For perspective, search for a word like
answered 08 Apr '10, 19:13