I'm working on a custom search command which will take the results of a search and create an XML output file. As a very simplified example, the search might look like this:
Within my search command, I read the results and aggregate everything into Python dicts (e.g. source[type]['total'] += 1, source[type][value] += 1, etc.), and then attempt to write the results to a randomly named output file, where the XML would look something like:
However, multiple output files are created, with the results spread among them. I suspect this is a consequence of map/reduce; if so, that would make sense, and it is actually rather cool to see in action.
Is my analysis correct? If so, what is the best practice for merging these results into a single, highly structured output file where order matters?
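For reference, here is a stripped-down sketch of the aggregation and write step (the field names, helper names, and XML layout here are placeholders, not my real schema):

    # Simplified sketch of the aggregation and write step; "sourcetype" and
    # "value" are placeholder field names, and the XML layout is illustrative.
    import os
    import uuid
    import xml.etree.ElementTree as ET
    from collections import defaultdict

    def aggregate(results):
        # counts[sourcetype]["total"] and counts[sourcetype][value], as described above
        counts = defaultdict(lambda: defaultdict(int))
        for r in results:
            stype = r.get("sourcetype", "unknown")
            value = r.get("value", "unknown")
            counts[stype]["total"] += 1
            counts[stype][value] += 1
        return counts

    def write_xml(counts, out_dir="."):
        root = ET.Element("results")
        for stype, vals in counts.items():
            node = ET.SubElement(root, "source", type=stype)
            for key, count in sorted(vals.items()):
                ET.SubElement(node, "count", name=key).text = str(count)
        # randomly named output file, as described above
        path = os.path.join(out_dir, "output_%s.xml" % uuid.uuid4().hex)
        ET.ElementTree(root).write(path, encoding="utf-8")
        return path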
asked 10 Apr '11, 16:25
I would suggest that it might be easier to get what you want by calling the Splunk API:
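For example, something along these lines (a rough sketch only: it assumes a default local install on the management port, the standard search/jobs/export REST endpoint, and placeholder credentials; adjust host, auth, and the search string for your environment):

    # Sketch: run the search through the REST API so the complete result set
    # comes back through a single call, giving one place to aggregate and
    # write the XML file. Host, port, and credentials are assumptions.
    import requests

    def run_search(search, host="https://localhost:8089",
                   user="admin", password="changeme"):
        resp = requests.post(
            host + "/services/search/jobs/export",
            auth=(user, password),
            data={"search": search, "output_mode": "csv"},
            verify=False,
            stream=True,
        )
        resp.raise_for_status()
        return resp.iter_lines()

    # Usage sketch: every result row arrives here, so aggregation and the
    # single XML write can both happen in this one process.
    # for line in run_search("search index=main | stats count by sourcetype"):
    #     ...

That way you run the search from your own script and receive all of the results in one place, rather than having your logic invoked piecemeal inside the search pipeline.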
There is also generally no need for you to worry about map-reduce. Splunk will take care of that. (It's possible to write map-reduceable search commands if you specify them as streaming in commands.conf, but you don't need to do that here.)
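Roughly, a commands.conf stanza like the following is where that is declared (the command and script names here are placeholders); streaming indicates whether Splunk may run the command in a distributed, map-style fashion:

    # Sketch of a commands.conf stanza; names are placeholders.
    [makexml]
    filename = makexml.py
    streaming = false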
answered 11 Apr '11, 04:41