
Python SDK dbxquery results limited to 100k rows using jobs.export- Do I need to paginate streaming results?

joecav

When I run a dbxquery through jobs.export, the results are limited to 100k rows. Do I need to paginate streaming results?

Here's my code:

import splunklib.client as client
import splunklib.results as results

# Placeholder connection details; the real service object is created elsewhere.
service = client.connect(host='localhost', port=8089,
                         username='admin', password='...')

data = {
    'adhoc_search_level': 'fast',
    'search_mode': 'normal',
    'preview': False,
    'max_count': 500000,
    'output_mode': 'json',
    'auto_cancel': 300,
    'count': 0
}

# jobs.export streams results back directly instead of creating a search job.
job = service.jobs.export(<dbxquery>, **data)
reader = results.JSONResultsReader(job)
# Keep only result rows; skip any diagnostic Message objects in the stream.
lst = [result for result in reader if isinstance(result, dict)]

This runs correctly, except that the results always stop at 100k rows; there should be over 200k.
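
For reference, here's the paginated fallback I'm considering: run the search as a normal blocking job, then page through the finished results with count/offset. This is just a sketch; the 50,000 page size and the connection details are guesses on my part, not values from my actual setup.

import splunklib.client as client
import splunklib.results as results

service = client.connect(host='localhost', port=8089,
                         username='admin', password='...')  # placeholder connection

# Run the search as a blocking job so the full result set exists before reading.
job = service.jobs.create(<dbxquery>, exec_mode='blocking',
                          adhoc_search_level='fast', max_count=500000)

page_size = 50000  # guessed page size; a single results() call may be capped server-side
total = int(job['resultCount'])
rows = []
for offset in range(0, total, page_size):
    page = job.results(output_mode='json', count=page_size, offset=offset)
    rows.extend(r for r in results.JSONResultsReader(page) if isinstance(r, dict))

job.cancel()  # clean up the finished job

If jobs.export is supposed to stream the full result set, I'd rather avoid this extra round-tripping.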
