Splunk Search

search - generate a (time, causes, count) columns table

splunk_zen
Builder

How can I correctly build a search that returns a (time, causes, count) columns table from the following input data example?

   EXECUTION_DATETIME,SERVICE_ACCEPTED_INVOCATIONS,TIMEOUT,ORAEXCEPTIONS,DECODE_ERRORS,STATMENT_PROCESSING_ERRORS,HIT_MAX_REQ_LIMIT,SDF_STATE
20120704 16:00:00,3003896,0,1,1,0,,0
20120704 15:00:00,2968442,4,3,0,0,,0
20120704 14:00:00,2947280,330,0,1,0,,0
20120704 13:00:00,2919975,0,0,0,0,,0

My current search,

source=*platform/term_causes.csv | bucket _time span=1d | stats 
dc(TIMEOUT) dc(ORAEXCEPTIONS) dc(DECODE_ERRORS) dc(STATMENT_PROCESSING_ERRORS) dc(HIT_MAX_REQ_LIMIT) dc(SDF_STATE) by _time

is flawed: the simple XML dashboard generates several incoherently time-sorted bubbles because the search result does not respect this condition:

"A single series structure that contains 3 columns. The first column (column 0) contains the values to be plotted on the x-axis. The second column (column 1) contains the values to be plotted on the y-axis. And the third column (column 2) contains the values to be plotted on the z-axis."

So, if I understood correctly, my goal would be to generate a search (probably joined with a subsearch?) that would return the following structure:

20120704 16:00:00 EXECUTION_DATETIME 0
20120704 16:00:00 TIMEOUT 1
20120704 16:00:00 ORAEXCEPTIONS 1
20120704 16:00:00 DECODE_ERRORS 0
20120704 16:00:00 STATMENT_PROCESSING_ERRORS 0
20120704 16:00:00 HIT_MAX_REQ_LIMIT
20120704 16:00:00 SDF_STATE 0
...
20120704 14:00:00 EXECUTION_DATETIME 0
20120704 14:00:00 TIMEOUT 330
20120704 14:00:00 ORAEXCEPTIONS 0
20120704 14:00:00 DECODE_ERRORS 1
20120704 14:00:00 STATMENT_PROCESSING_ERRORS 0
20120704 14:00:00 HIT_MAX_REQ_LIMIT
20120704 14:00:00 SDF_STATE 0

ak
Path Finder

Disclaimer: the search is not pretty, and could probably use some performance tweaking.

source="testlog.log"

| eval category="SERVICE_ACCEPTED_INVOCATIONS" | stats dc(SERVICE_ACCEPTED_INVOCATIONS) as count by category, _time | table _time category count

| append [search * | eval category="TIMEOUT" | stats dc(TIMEOUT) as count by category, _time | table _time category count]

| append [search * | eval category="ORAEXCEPTIONS" | stats dc(ORAEXCEPTIONS) as count by category, _time | table _time category count]

| append [search * | eval category="DECODE_ERRORS" | stats dc(DECODE_ERRORS) as count by category, _time | table _time category count]

| append [search * | eval category="STATMENT_PROCESSING_ERRORS" | stats dc(STATMENT_PROCESSING_ERRORS) as count by category, _time | table _time category count]

| append [search * | eval category="HIT_MAX_REQ_LIMIT" | stats dc(HIT_MAX_REQ_LIMIT) as count by category, _time | table _time category count]

| append [search * | eval category="SDF_STATE" | stats dc(SDF_STATE) as count by category, _time | table _time category count]

I haven't tried this out with the bubble chart, but it will get you the table in the format you wanted:


_time                   category            count
7/4/12 1:00:00.291 PM   HIT_MAX_REQ_LIMIT   0
7/4/12 2:00:00.294 PM   HIT_MAX_REQ_LIMIT   0
7/4/12 3:00:00.296 PM   HIT_MAX_REQ_LIMIT   0
7/4/12 4:00:00.300 PM   HIT_MAX_REQ_LIMIT   0
7/4/12 1:00:00.291 PM   SDF_STATE           1
7/4/12 2:00:00.294 PM   SDF_STATE           1
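
A single-pass variant might also be worth trying, since each append above re-runs a search over the same data. This is an untested sketch that assumes untable can unpivot the stats columns straight into (time, category, count) rows; the "as" renames matter because untable turns the column names into the category values:

source="testlog.log"
| stats dc(SERVICE_ACCEPTED_INVOCATIONS) as SERVICE_ACCEPTED_INVOCATIONS dc(TIMEOUT) as TIMEOUT dc(ORAEXCEPTIONS) as ORAEXCEPTIONS dc(DECODE_ERRORS) as DECODE_ERRORS dc(STATMENT_PROCESSING_ERRORS) as STATMENT_PROCESSING_ERRORS dc(HIT_MAX_REQ_LIMIT) as HIT_MAX_REQ_LIMIT dc(SDF_STATE) as SDF_STATE by _time
| untable _time category count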

splunk_zen
Builder

I had already added it to the main search.

Considering all the subsequent "| append [search * | eval" subsearches are built on top of it, their data is already filtered by that timespan, right?


ak
Path Finder

One thing you could try is adding | bucket _time span=YOURSPAN |, which I don't have in the search above. Add it to all the subsearches as well.
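
For example, the TIMEOUT subsearch would then become:

| append [search * | bucket _time span=YOURSPAN | eval category="TIMEOUT" | stats dc(TIMEOUT) as count by category, _time | table _time category count]

and likewise for the initial search and the remaining appends.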


splunk_zen
Builder

If I restrict it to the last 24h, the table is returned correctly, but the bubble chart returns:

"Encountered an error while reading file '/opt/splunk/splunk/var/run/splunk/dispatchtmp/subsearch_1341857990.1072_1341857990.1/statstmp_1341858018.7_0.csv.gz'"


splunk_zen
Builder

Thanks, I do appreciate the effort. The search for the last 7 days times out:
"The job appears to have expired or has been canceled. Splunk could not retrieve data for this search."


splunk_zen
Builder

Sure, hope it becomes clearer now.


ak
Path Finder

What exactly should the causes field contain? It's one field, but you are generating multiple fields in your search: dc(TIMEOUT) dc(ORAEXCEPTIONS) dc(DECODE_ERRORS) dc(STATMENT_PROCESSING_ERRORS) dc(HIT_MAX_REQ_LIMIT) dc(SDF_STATE).

It would help if you pointed out which field corresponds to which axis, and how that field is defined:

x axis: time (EXECUTION_DATETIME)

y axis: ?

z axis: count(?)

Your time axis (x) is off because the chart is reading the time values as series values, not as the x axis.
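
Put differently, the chart expects exactly three columns in this order:

_time (x)   causes (y)   count (z)

whereas one wide stats row per _time makes the chart treat the time values as series values instead.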


splunk_zen
Builder

Ayn, not yet; I don't know how to build the final search
that returns the aforementioned structure.

ak, my goal is to count the different contributions from each root cause and generate a bubble for each of them per day;
that's why I'm using
| bucket _time span=1d


ak
Path Finder

Why are you using the dc function? Don't you just want the actual value in there? For example, the value 330 is the actual value, not the distinct count.
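
For example (untested), swapping dc for sum in the TIMEOUT subsearch would carry the actual values through, and with a daily bucket sum would add up the hourly values for each day:

| append [search * | eval category="TIMEOUT" | stats sum(TIMEOUT) as count by category, _time | table _time category count]

dc(TIMEOUT) only counts how many distinct TIMEOUT values fall into each bucket, so a value like 330 contributes 1 to the count rather than 330.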


Ayn
Legend

Ok, so did you resolve your issue?


splunk_zen
Builder

ak, the y axis should be each of the cause fields,
TIMEOUT ORAEXCEPTIONS DECODE_ERRORS STATMENT_PROCESSING_ERRORS HIT_MAX_REQ_LIMIT SDF_STATE
whereas the z axis should correspond to the count of each of them,
dc(TIMEOUT) dc(ORAEXCEPTIONS) dc(DECODE_ERRORS) dc(STATMENT_PROCESSING_ERRORS) dc(HIT_MAX_REQ_LIMIT) dc(SDF_STATE)


Ayn
Legend

Could you provide an example of the desired result, not just your current result and a statement that it's wrong?


splunk_zen
Builder

Was my question clear enough?
Would further clarification help someone answer it?
