Hello,
How can I solve the warning "Events might not be returned in sub-second order due to search memory limits" without increasing the value of the limits.conf setting [search] max_rawsize_perchunk?
I get this message from a scheduled query that moves more than 150k rows into a summary index.
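For reference, this is the stanza I am trying to avoid changing (the value shown is the default as I understand it, roughly 100 MB per chunk):

# limits.conf
[search]
# Maximum uncompressed size, in bytes, of a chunk of raw data in the search pipeline.
max_rawsize_perchunk = 100000000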
I appreciate your help. Thank you
Can you split your query into a set of smaller queries that index those rows into a summary index?
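As a rough sketch, assuming your source table has a numeric key column (I'm calling the table my_table and the column id here, both hypothetical, and omitting the dbxquery connection argument), you could schedule two reports into the same summary index, each collecting one half of the rows:

| dbxquery query="SELECT * FROM my_table WHERE id <= 75000"

and, as a second scheduled report:

| dbxquery query="SELECT * FROM my_table WHERE id > 75000"

As long as the two WHERE clauses don't overlap, the union of the two result sets is the same set of rows with no duplicates and nothing missing.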
How do I split my query from dbxquery (e.g. 200k rows) and push it into a summary index at the same time?
| dbxquery query="SELECT * FROM Table_Test"
The scheduled report for summary indexing will append something like this:
summaryindex spool=t uselb=t addtime=t index="summary" file="test_file" name="test" marker="hostname=\"https://testcom/\",report=\"test\""
Technically, I don't really need _time because the data is static, but it needs to be refreshed every day.
Thanks
Do you actually care what order the data is returned in? You are simply adding it to the summary index. The _time written to the summary will be whatever you want it to be, so just ignore the message; I don't believe it will affect the data in the summary.
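If you want to control it explicitly, you can set _time yourself before the summaryindex clause is appended. A minimal sketch, using your dbxquery from above and snapping to the start of the day the report runs (relative_time and now() are standard SPL eval functions):

| dbxquery query="SELECT * FROM Table_Test"
| eval _time=relative_time(now(), "@d")

Each daily run then stamps all the rows with that day's date, and that _time is what should end up on the summarized events, which sounds like what you want for static data refreshed every day.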
Hi,
Thanks for your help.
No, I do not care about the order. I am afraid that if I split the data and re-combine it, I will end up with duplicate or missing rows, as the data doesn't have a unique identifier.
Also, I don't know how to split the data while keeping the same _time. Please help answer this. Thanks
Sorry, I don't understand what you mean by splitting your data. What is being split with the dbxquery?
Hi,
What I meant by splitting the data is splitting the number of rows. So if my query returns 200k rows, splitting it in two gives 100k rows each. Or, as @marnall suggested, splitting it into a set of smaller queries; I'm not sure that is possible, since I have a large query involving multiple DBs.
I don't know how to do this in a scheduled report and write the results into the same summary index with the same _time. Please suggest.
Thanks
I don't see why it needs to be split. The events not coming back in sub-second order does not matter to you, so why not just add the 200k rows in one go? Is that causing a problem?
Hi
So far I haven't seen any missing data, so it's not causing a problem, apart from the warning message itself.
I am not sure why Splunk has to throw it at all.
Thank you for your help.