Yes.
I expected the first line of the data to be recognized as header fields.
The data is coming in and is visible, but the fields aren't being extracted as you expect.
No.
The intermediate output file (csvh-xxxxxxxxxx.dbmonevt) has its sourcetype set to dbmon-spool.
But in the end the sourcetype is set to the my_sourcetype that I specified.
Is the sourcetype being set to dbmon-spool-1?
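One way to confirm which sourcetype the events actually received is to search over the spool files and group by sourcetype. A minimal sketch — your_index is a placeholder for whichever index this input writes to:

index=your_index source=*.dbmonevt | stats count by sourcetype

If the count shows up under a learned sourcetype (e.g. one with a -1 suffix) rather than the one you configured, that would explain why extractions tied to the configured sourcetype don't apply.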
The settings of the DB Connect app are as follows.
inputs.conf
[batch://$SPLUNK_HOME\var\spool\dbmon\*.dbmonevt]
...
sourcetype = dbmon:spool
[dbmon-dump://dbname/tablename]
...
query = select * from table
...
sourcetype = my_sourcetype
props.conf
[source::...csvh_*.dbmonevt]
...
CHECK_FOR_HEADER = true
HEADER_MODE = firstline
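As an aside, if the automatic header detection never takes effect, an explicit search-time extraction is a possible workaround. This is only a sketch, assuming the final sourcetype is my_sourcetype and comma-delimited data; the transform name and the column names (col1, col2, col3) are placeholders, not values from the actual table:

props.conf
[my_sourcetype]
REPORT-csvfields = my_sourcetype-csvfields

transforms.conf
[my_sourcetype-csvfields]
DELIMS = ","
FIELDS = "col1", "col2", "col3"

This bypasses CHECK_FOR_HEADER entirely, at the cost of hard-coding the column names.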
1. Execute the SQL query and export the spool output file. (This file's sourcetype is dbmon-spool.)
2. Import the spool output file. (The spool file's sourcetype is set to dbmon-spool.)
3. The spool file's first line is the Splunk header. (SPLUNK host=xxxxx source=xxxx sourcetype=xxxx)
4. props.conf sets HEADER_MODE = firstline.
The first line is treated as the Splunk header. (This contains the original host, source, sourcetype, etc.)
5. props.conf also sets CHECK_FOR_HEADER = true.
The second line is treated as the CSV header.
6. Following the CHECK_FOR_HEADER setting, field information is added to $SPLUNK_HOME/etc/apps/learned/local/props.conf.
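For reference, when CHECK_FOR_HEADER succeeds it typically records a new learned sourcetype together with an AutoHeader transform, roughly like the following. The stanza suffix (-1) and field names here are illustrative examples, not taken from an actual system:

props.conf (in $SPLUNK_HOME/etc/apps/learned/local/)
[my_sourcetype-1]
REPORT-AutoHeader = AutoHeader-1

transforms.conf (same directory)
[AutoHeader-1]
DELIMS = ","
FIELDS = "field1", "field2"

Note that these search-time extractions apply only to events indexed with the learned sourcetype (my_sourcetype-1 in this sketch), not to the original my_sourcetype.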
I expected that when the data is indexed, fields would be assigned according to the field information that was added automatically.
However, although the first line of the CSV data is extracted as AutoHeader, the indexed data is identified only as a plain record, and no field has been assigned to each value.
Since AutoHeader is extracted, I think the CHECK_FOR_HEADER setting itself is working correctly.
Am I wrong, or what is going on?