@vikramyadav wrote: Can you check the internal logs of Splunk and provide some more details about the error? Also, can you mention which Python script you are getting the error from?

Had a similar issue. For whatever reason ours would only throw on the SHC members. I would go into the README directory in the app and comment out the entirety of inputs.conf.spec. Once that was done, all alerting stopped.

INFO SpecFiles - Found external scheme definition for stanza="splunk_ta_aws_sqs://" from spec file="/opt/splunk/etc/apps/Splunk_TA_aws/README/inputs.conf.spec" with parameters="placeholder"
ERROR ModularInputs - No script to handle scheme "splunk_ta_aws_sqs" was found. This modular input will be disabled.
ERROR ModularInputs - Unable to initialize modular input "splunk_ta_aws_sqs" defined inside the app "Splunk_TA_aws": Unable to locate suitable script for introspection.

These would also fire on standalone search heads, but only at startup. With the SHC members it was continuous, every few minutes. Commenting out the inputs.conf.spec stopped the issue. Since the TA on the search head wasn't there to ingest data, we had no concerns commenting it out.
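The workaround described above amounts to commenting out the modular input's scheme definition in the spec file. A minimal sketch of what that looks like (the exact parameters under the stanza vary by TA version, so they are elided here):

```conf
# /opt/splunk/etc/apps/Splunk_TA_aws/README/inputs.conf.spec
# Commented out on SHC members only. The search tier has no modular
# input scripts for this TA, so the scheme definition just triggers
# "Unable to locate suitable script for introspection" errors.
#
# [splunk_ta_aws_sqs://<name>]
# (parameter definitions commented out along with the stanza header)
```

Only do this where the TA genuinely isn't ingesting data; on HFs or indexers that run the input, the spec file must stay intact.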
Just ran into this as well. Having to use a saved-search ref instead of |savedsearch means I'll be pulling in quite a bit more data and then using the dashboard panel's inputs to filter it.
ref -> can't pass variables/tokens but can run as owner of saved search
|savedsearch -> can't run as owner of saved search when passing variables/tokens
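The trade-off above can be seen side by side in Simple XML. A minimal sketch, assuming a saved search named MySavedSearch and a token named user_filter (both hypothetical):

```xml
<dashboard>
  <fieldset>
    <input type="text" token="user_filter"></input>
  </fieldset>
  <row>
    <panel>
      <table>
        <!-- ref: honors the saved search's own run-as/permission settings
             (can run as owner), but dashboard tokens cannot be injected -->
        <search ref="MySavedSearch"></search>
      </table>
    </panel>
    <panel>
      <table>
        <search>
          <!-- |savedsearch: tokens can be passed as key=value arguments
               (substituted into $user$ placeholders in the saved search),
               but it runs as the user viewing the dashboard, not the owner -->
          <query>| savedsearch MySavedSearch user="$user_filter$"</query>
        </search>
      </table>
    </panel>
  </row>
</dashboard>
```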
If it's scheduled, it must run as the owner. You could modify the metadata files to shift ownership, or remove ownership entirely (nobody).
The run-as-owner versus run-as-user distinction referenced above is, I believe, about dashboards and how their searches execute. You can build the dashboard to run the search inline, in which case it runs as the user accessing the dashboard. Or you can set up the dashboard to reference the saved search, which can run as that search's owner.
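Shifting or removing ownership via the metadata files looks roughly like this. A sketch, assuming an app-level local.meta and a saved search named MySavedSearch (hypothetical):

```conf
# $SPLUNK_HOME/etc/apps/<app>/metadata/local.meta
# Reassign the scheduled search to "nobody" so it isn't tied to a
# specific user's account (e.g. after that user leaves).
[savedsearches/MySavedSearch]
owner = nobody
access = read : [ * ], write : [ admin ]
```

A restart or debug/refresh may be needed before the ownership change takes effect.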
DATETIME_CONFIG = NONE
https://docs.splunk.com/Documentation/Splunk/latest/Admin/Propsconf
Under the DATETIME_CONFIG setting:
* For file-based inputs (monitor, batch) the time chosen is the
modification timestamp on the file being read.
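In props.conf that looks like the following. A minimal sketch; the sourcetype name is hypothetical:

```conf
# props.conf
[my_monitored_files]
# Skip timestamp extraction entirely. For file-based inputs (monitor,
# batch) the event time then falls back to the file's modification time.
DATETIME_CONFIG = NONE
```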
Do the hosts the error is thrown on actually have the index created? I see the same error for heavy forwarders configured with the server role of indexer in the Monitoring Console. They get hit by the search via REST even though they don't have the index created or any knowledge of it (we don't keep the indexes updated on the HFs since we don't do any searching there).
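To see which search peers actually know about the index, a quick check from the search head can help. A sketch; "my_index" is a hypothetical index name:

```
| rest /services/data/indexes/my_index splunk_server=*
| table splunk_server, title, totalEventCount
```

Hosts missing from the output don't have the index defined, which lines up with where the error is thrown.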
If your search is looking strictly for events, be sure to add events=true to the end. It defaults to false. I noticed that my search was completing under Activity -> Jobs but returned no events to my dashboard, compared to running the same search on its own outside the dashboard. Using events=true fixed that.
| loadjob savedsearch="MyUser:MyApp:MySavedSearch" artifact_offset=0 events=true
I believe you will likely want to configure "register_forwarder_address" and "register_search_address" as well. I've heard of situations where a cluster attempted to use the interface identified by "register_replication_address" to also perform searches and receive ingest. If you configure one of these settings, it is probably best to configure the other two as well.
register_forwarder_address =
* Only valid for mode=slave
* This is the address on which a slave will be available for accepting
data from forwarders. This is useful in the cases where a splunk host
machine has multiple interfaces and only one of them can be reached by
another splunkd instance.
register_search_address =
* Only valid for mode=slave
* This is the address on which a slave will be available as search head.
This is useful in the cases where a splunk host machine has multiple
interfaces and only one of them can be reached by another splunkd
instance.
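Put together in server.conf on a peer, that looks roughly like this. A sketch; the addresses are hypothetical and assume a dedicated replication NIC on 10.0.1.x:

```conf
# server.conf on a cluster peer (mode=slave)
[clustering]
mode = slave
# Replication traffic pinned to the dedicated interface
register_replication_address = 10.0.1.15
# Forwarder ingest and search access advertised on the general interface
register_forwarder_address = 10.0.0.15
register_search_address = 10.0.0.15
```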