Hi,
I'm having an issue with timestamping on one unstructured sourcetype (the others, json and access_log, are fine).
My deployment looks like UF -> HF -> Splunk Cloud.
For some reason, data from the mentioned sourcetype is delayed by 1 hour. I mean, I have to increase the search time range to >60m to see the latest data. Below is the output of a query comparing index time and _time.
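For context, a query along these lines (a sketch, not necessarily the exact one I ran; index and sourcetype names match my setup) shows the gap between event time and index time:

```spl
index=discol2 sourcetype=main_demo
| eval lag_sec = _indextime - _time
| eval indextime = strftime(_indextime, "%Y-%m-%d %H:%M:%S")
| table _time indextime lag_sec
```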
I tried changing the timestamp extraction in the sourcetype configuration in the cloud, but it didn't help.
I came up with the idea of using an INGEST_EVAL expression in a transforms.conf stanza to update the _time field at ingest time, after it has been parsed out of the actual event (+3600s):
# transforms.conf
[time-offset]
INGEST_EVAL = _time:=_time+3600

# props.conf
[main_demo]
TRANSFORMS-timeoffset = time-offset
I suppose there is no transforms.conf equivalent in the Splunk GUI (props.conf can be configured in the Source types GUI section). Do I need to contact Splunk Support to perform this kind of change on the cloud indexer?
Or is there another way to align _time with the real time?
All help would be appreciated,
regards,
Szymon
As your logs use a different TZ than your host has, you must tell Splunk this when you are ingesting the logs. Splunk expects them to be the same unless the event contains TZ information (your events don't have that). See https://docs.splunk.com/Documentation/Splunk/9.1.0/Data/Applytimezoneoffsetstotimestamps to fix the situation.
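As a sketch of what that doc describes (the stanza name is a placeholder, match it to your sourcetype), the TZ is declared in props.conf on the first full Splunk Enterprise instance that parses the data:

```
# props.conf — placed where the data is parsed (the HF in this deployment)
[your_sourcetype]
TZ = UTC
```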
Hi
as you have an HF before Splunk Cloud, you must add all props.conf & transforms.conf settings to the HF, not to Splunk Cloud, as it's always the first full Splunk Enterprise instance that applies those to events!
Are you sure this is a 1h time shift and not just indexing latency? Actually, a 1h time shift could indicate that something is weird with your TZ settings (e.g. summer-time information is missing)!
Can you show a log event on the UF side, and what is the TZ on that host?
r. Ismo
Hi Ismo,
Below is the TZ set on the UF server. Interestingly, I set the same time zone on my local test server, and after changing the timestamp settings to "current time" the logs display on time. However, that is a simple UF -> Cloud setup, as opposed to the production one (through an HF).
timedatectl
Local time: Fri 2023-07-21 13:27:32 BST
Universal time: Fri 2023-07-21 12:27:32 UTC
RTC time: Fri 2023-07-21 12:27:32
Time zone: Europe/London (BST, +0100)
NTP enabled: yes
NTP synchronized: yes
RTC in local TZ: no
DST active: yes
Last DST change: DST began at
Sun 2023-03-26 00:59:59 GMT
Sun 2023-03-26 02:00:00 BST
Next DST change: DST ends (the clock jumps one hour backwards) at
Sun 2023-10-29 01:59:59 BST
Sun 2023-10-29 01:00:00 GMT
Not sure how to grab a log event on the UF (this sounds like something from the Windows world).
regards,
Sz
Would this be enough?
[SSL]
_rcvbuf = 1572864
allowSslRenegotiation = true
certLogMaxCacheEntries = 10000
certLogRepeatFrequency = 1d
cipherSuite = ECDHE-ECDSA-AES256-GCM-SHA384:ECDHE-RSA-AES256-GCM-SHA384:ECDHE-ECDSA-AES128-GCM-SHA256:ECDHE-RSA-AES128-GCM-SHA256:ECDHE-ECDSA-AES256-SHA384:ECDHE-RSA-AES256-SHA384:ECDHE-ECDSA-AES128-SHA256:ECDHE-RSA-AES128-SHA256
ecdhCurves = prime256v1, secp384r1, secp521r1
host = $decideOnStartup
index = default
logCertificateData = true
sslQuietShutdown = false
sslVersions = tls1.2
[batch:///opt/splunkforwarder/var/run/splunk/search_telemetry/*search_telemetry.json]
_rcvbuf = 1572864
crcSalt = <SOURCE>
host = $decideOnStartup
index = _introspection
log_on_completion = 0
move_policy = sinkhole
sourcetype = search_telemetry
[batch:///opt/splunkforwarder/var/spool/splunk]
_rcvbuf = 1572864
crcSalt = <SOURCE>
host = $decideOnStartup
index = default
move_policy = sinkhole
[batch:///opt/splunkforwarder/var/spool/splunk/...stash_hec]
_rcvbuf = 1572864
crcSalt = <SOURCE>
host = $decideOnStartup
index = default
move_policy = sinkhole
sourcetype = stash_hec
[batch:///opt/splunkforwarder/var/spool/splunk/...stash_new]
_rcvbuf = 1572864
crcSalt = <SOURCE>
host = $decideOnStartup
index = default
move_policy = sinkhole
queue = stashparsing
sourcetype = stash_new
time_before_close = 0
[batch:///opt/splunkforwarder/var/spool/splunk/tracker.log*]
_rcvbuf = 1572864
host = $decideOnStartup
index = _internal
move_policy = sinkhole
sourcetype = splunkd_latency_tracker
[blacklist:/opt/splunkforwarder/etc/auth]
_rcvbuf = 1572864
host = $decideOnStartup
index = default
[blacklist:/opt/splunkforwarder/etc/passwd]
_rcvbuf = 1572864
host = $decideOnStartup
index = default
[fschange:/opt/splunkforwarder/etc]
_rcvbuf = 1572864
delayInMills = 100
disabled = false
filesPerDelay = 10
followLinks = false
fullEvent = false
hashMaxSize = -1
host = $decideOnStartup
index = default
pollPeriod = 600
recurse = true
sendEventMaxSize = -1
signedaudit = true
[http]
_rcvbuf = 1572864
ackIdleCleanup = true
allowSslCompression = true
allowSslRenegotiation = true
dedicatedIoThreads = 2
disabled = 1
enableSSL = 1
host = $decideOnStartup
index = default
maxSockets = 0
maxThreads = 0
port = 8088
sslVersions = *,-ssl2
useDeploymentServer = 0
[monitor:///opt/splunkforwarder/etc/splunk.version]
_TCP_ROUTING = *
_rcvbuf = 1572864
host = $decideOnStartup
index = _internal
sourcetype = splunk_version
[monitor:///opt/splunkforwarder/var/log/splunk]
_rcvbuf = 1572864
host = $decideOnStartup
index = _internal
[monitor:///opt/splunkforwarder/var/log/splunk/configuration_change.log]
_rcvbuf = 1572864
host = $decideOnStartup
index = _configtracker
[monitor:///opt/splunkforwarder/var/log/splunk/license_usage_summary.log]
_rcvbuf = 1572864
host = $decideOnStartup
index = _telemetry
[monitor:///opt/splunkforwarder/var/log/splunk/metrics.log]
_TCP_ROUTING = *
_rcvbuf = 1572864
host = $decideOnStartup
index = _internal
[monitor:///opt/splunkforwarder/var/log/splunk/splunk_instrumentation_cloud.log*]
_rcvbuf = 1572864
host = $decideOnStartup
index = _telemetry
sourcetype = splunk_cloud_telemetry
[monitor:///opt/splunkforwarder/var/log/splunk/splunkd.log]
_TCP_ROUTING = *
_rcvbuf = 1572864
host = $decideOnStartup
index = _internal
[monitor:///opt/splunkforwarder/var/log/watchdog/watchdog.log*]
_rcvbuf = 1572864
host = $decideOnStartup
index = _internal
[monitor:///opt/videoipath/logs/backend/main.log]
_rcvbuf = 1572864
disabled = false
host = $decideOnStartup
index = discol2
sourcetype = main_demo
[script]
_rcvbuf = 1572864
host = $decideOnStartup
index = default
interval = 60.0
start_by_shell = true
[splunktcp]
_rcvbuf = 1572864
acceptFrom = *
connection_host = ip
host = $decideOnStartup
index = default
route = has_key:tautology:parsingQueue;absent_key:tautology:parsingQueue
[tcp]
_rcvbuf = 1572864
acceptFrom = *
connection_host = dns
host = $decideOnStartup
index = default
[udp]
_rcvbuf = 1572864
connection_host = ip
host = $decideOnStartup
index = default
You should use the </> code block when you are adding conf, SPL, or something similar to your post. That way e.g. double underscores are not rendered as italics, and it's easier to read!
You can find a log sample for that sourcetype via
[monitor:///opt/videoipath/logs/backend/main.log]
The monitor stanza tells which file is continuously ingested into Splunk. You should paste a couple of events from it here (scramble the content if needed) to help us check and fix your issue.
[root@xxx ~]# clock
Fri 21 Jul 2023 16:15:09 BST -0.286435 seconds
[root@xxx ~]# tail -n 10 /opt/videoipath/logs/backend/main.log
2023-07-21 15:15:12,326 backend_2023.2.5: INFO content
2023-07-21 15:16:48,011 backend_2023.2.5: INFO content
And below is how the logs are visible in the GUI.
For some unknown reason your application is not aware of the correct time! Have you restarted it since summer time started? From time to time I have seen apps that cannot handle this automatically without a restart. Anyhow, you should report this to whoever is responsible for the app and ask for a fix.
Do you mean the "Search & Reporting" app? The only thing I can do is restart the Splunk Cloud instance. Is that right?
No, I mean the source app whose logs you are collecting. Based on your screenshots, the time was wrong in your log file.
My app always uses UTC for its timestamps. However, I have always managed to use index time to display events correctly (ignoring the app timestamp), but not in this case. As I said before, the logs display correctly from my lab system (the same app, the same timestamp settings). Weird.
I finally managed to make it work by manually assigning a TZ to the sourcetype in props.conf on the HF.
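For anyone finding this later, the fix was along these lines (a sketch based on what was described above: the app writes timestamps in UTC while the host runs Europe/London):

```
# props.conf on the HF (the first full Splunk Enterprise instance in the path)
[main_demo]
TZ = UTC
```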
Thank you @isoutamo !