Hello, I am in the process of setting up a new TCP input to pull DNS/DHCP logs from a vendor product. The product has Splunk built in and can act as a UF. I've set up a standard TCP connection, but when I look at the logs on my search head they all show as "splunk-cooked-data" (example below).
--splunk-cooked-mode-v3--\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00
This was my config:
[tcp://9082]
index=dns
sourcetype=dns:query
disabled=0
I then tried this config based on reading some Splunk docs:
[splunktcp://9082]
index=dns
sourcetype=dns:query
disabled=0
But the above failed as well.
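For completeness, here is my understanding of how the two sides would need to line up (hostnames below are placeholders, not our real ones): the HF listens with a splunktcp stanza, and the vendor's built-in UF points its outputs at that port.

```ini
# HF inputs.conf -- receives cooked (Splunk-to-Splunk) data from the vendor UF.
# Note: for cooked data, index/sourcetype normally travel with the events
# from the sender, so overrides here may not take effect.
[splunktcp://9082]
disabled = 0

# Vendor UF outputs.conf -- hf.example.com is a placeholder hostname
[tcpout:hf_group]
server = hf.example.com:9082
```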
Any idea what I am doing wrong here?
I think you are experiencing the same problem as observed here -
https://answers.splunk.com/answers/178485/why-am-i-getting-cooked-mode-v3-data-testing-tcp-i.html
Please validate your settings against the accepted answer in the above post.
Otherwise, we might want to look at errors/warnings from the _internal index specific to the ports you are using.
Please mark as accepted answer if this solves your problem.
Unless I am missing something, I looked at the linked discussion and have everything configured the same way as far as I can tell.
This setup is like so:
HF inputs.conf
[splunktcp://9082]
index=dns
sourcetype=dns:query
HF outputs.conf:
[tcpout:indexer_group]
server= indexer1:9997, indexer2:9997, indexer3:9997
tcpout includes this indexer group
I've looked through the Splunk logs around this port and I don't see anything but standard connections, all info messages.
Do I need to set something else up on the indexers to catch this? My understanding is that my first input pulls the cooked logs from the vendor's UF and sends them to the HF. As for the output group, this is the standard set of servers we use to send to all of our indexers (default).
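In case it helps, this is the receiving configuration I'd expect each indexer to already have; a minimal sketch, assuming the standard port 9997 referenced in my outputs.conf:

```ini
# inputs.conf on each indexer -- opens the Splunk-to-Splunk receiving
# port that the HF's tcpout group targets
[splunktcp://9997]
disabled = 0
```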
Am I right in understanding that your forwarding goes like:
Vendor product (with Splunk UF) ---> Splunk HF ----> Splunk Indexer
If yes, I do not see the point of placing an HF in between. You can configure outputs from the vendor product's Splunk UF directly to the indexers.
Additional hops can definitely cause cooked-data messages.
Yes, you are correct. The reason I want to point at our HF is that we have 50+ indexers; the owner of this vendor product would need to point at all of them and set up all of the inputs that we, as the Splunk admins, would normally control.
For example, I have no control over the inputs, including index name, sourcetype, host, etc., and as I stated previously, I would need to provide all of my indexers to the vendor admin to configure. I was hoping that by sending this data to an HF I could control all of the above and get the load balancing that is native to Splunk.
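To illustrate the load-balancing point: with the HF in the middle, a single outputs.conf on the HF sprays events across all indexers, so the vendor admin only ever needs one destination. A minimal sketch, with placeholder hostnames:

```ini
# outputs.conf on the HF -- Splunk's built-in auto load balancing
# rotates through this server list (hostnames are placeholders)
[tcpout:indexer_group]
server = indexer1:9997, indexer2:9997, indexer3:9997
autoLBFrequency = 30
```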
If this is not possible please let me know
Two things I'd suggest you double-check:
1. Check that your HF isn't indexing the data as well, because that will definitely mark the data as cooked. It is less likely, but definitely worth checking.
2. If possible, configure one of your universal forwarders to send the data directly to your indexers. This will confirm whether the additional hop at the HF is the cause.
Thanks for your suggestions. After more testing with my customer and looking at the traffic on the wire, we could see there was an issue with our firewall. We made some adjustments, and the data is now coming in in the correct format.