Hi Splunkers
Currently, I have 8 indexers and about 100 indexes! Here is a sample of my indexes.conf:
# volumes
[volume:HOT]
path = /Splunk-Storage/HOT
maxVolumeDataSizeMB = 2650000
[volume:COLD]
path = /Splunk-Storage/COLD
maxVolumeDataSizeMB = 27500000
### indexes ###
[testindex1]
repFactor = auto
homePath = volume:HOT/testindex1
coldPath = volume:COLD/testindex1
thawedPath = /Splunk-Storage/COLD/testindex1/thaweddb
summaryHomePath = /Splunk-Storage/HOT/testindex1/summary
frozenTimePeriodInSecs = 47520000
[testindex2]
repFactor = auto
homePath = volume:HOT/testindex2
coldPath = volume:COLD/testindex2
thawedPath = /Splunk-Storage/COLD/testindex2/thaweddb
summaryHomePath = /Splunk-Storage/HOT/testindex2/summary
frozenTimePeriodInSecs = 47520000
I don't restrict my indexes by size, only by time (frozenTimePeriodInSecs = 47520000, i.e. 550 days). The current median age of all data is about 180 days.
Here is the relevant part of my fstab:
/dev/mapper/mpatha-part1 /Splunk-Storage/HOT xfs defaults 0 0
/dev/mapper/mpathb-part1 /Splunk-Storage/COLD xfs defaults 0 0
Now, for compliance reasons, I want to separate two of my indexes to preserve them for a longer duration (at least two years).
I have considered two possible methods to accomplish this:
1.
a. Create a different path and volume.
b. Stop all indexers.
c. Move the two indexes to the new path.
d. Start all indexers.
If I'm correct, the issue is that I can't move just these two indexes, because I didn't mount separate paths in the OS. I would therefore have to move all the other indexes to another path as well. Essentially, this means creating two new paths and volumes in both the OS and indexes.conf.
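As a rough sketch, the first method would add a new volume to indexes.conf along these lines. The volume name COMPLIANCE, its mount path, and the size limit are hypothetical placeholders of mine, and the mount itself would also need a new fstab entry:

```ini
# Hypothetical new volume on a dedicated mount, e.g. an fstab entry like:
# /dev/mapper/mpathc-part1 /Splunk-Storage/COMPLIANCE xfs defaults 0 0
[volume:COMPLIANCE]
path = /Splunk-Storage/COMPLIANCE
maxVolumeDataSizeMB = 5000000

# The two long-retention indexes would then point at it:
[testindex1]
repFactor = auto
homePath = volume:COMPLIANCE/testindex1
coldPath = volume:COMPLIANCE/testindex1_cold
thawedPath = /Splunk-Storage/COMPLIANCE/testindex1/thaweddb
frozenTimePeriodInSecs = 63072000  # 730 days * 86400 seconds
```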
2.
a. Decrease the frozenTimePeriod for all indexes except the two to, for example, 150 days.
b. Wait for Splunk to free up some disk space.
c. Increase the frozenTimePeriod for those two indexes to, for example, 730 days.
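In indexes.conf terms, steps (a) and (c) would amount to something like the following. The day-to-second conversions are mine (86400 seconds per day), and the stanza name testindex3 is a placeholder for any of the other indexes:

```ini
# Step a: all indexes except the two compliance ones
[testindex3]
frozenTimePeriodInSecs = 12960000  # 150 days * 86400

# Step c: the two compliance indexes
[testindex1]
frozenTimePeriodInSecs = 63072000  # 730 days * 86400
```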
The second solution may seem more straightforward, but I'm uncertain whether it is a best practice or even a good idea at all.
Could you please guide me on how to implement the first solution with minimal downtime?
Thank you in advance for your assistance!
I was able to fix my issue with symbolic links, thanks to the following topic.
Here are the steps I took.
First, I created two directories on each volume:
mkdir /Splunk-Storage/HOT/HOT1
mkdir /Splunk-Storage/HOT/HOT2
mkdir /Splunk-Storage/COLD/COLD1
mkdir /Splunk-Storage/COLD/COLD2
I stopped Splunk on one indexer, then moved the indexes into the appropriate directories:
mv /Splunk-Storage/HOT/testindex1 /Splunk-Storage/HOT/HOT1/testindex1
mv /Splunk-Storage/COLD/testindex1 /Splunk-Storage/COLD/COLD1/testindex1
mv /Splunk-Storage/HOT/testindex2 /Splunk-Storage/HOT/HOT2/testindex2
mv /Splunk-Storage/COLD/testindex2 /Splunk-Storage/COLD/COLD2/testindex2
Since the moves stayed on the same file system, they completed almost instantly. Then I created symbolic links:
ln -s /Splunk-Storage/HOT/HOT1/testindex1 /Splunk-Storage/HOT/testindex1
ln -s /Splunk-Storage/COLD/COLD1/testindex1 /Splunk-Storage/COLD/testindex1
ln -s /Splunk-Storage/HOT/HOT2/testindex2 /Splunk-Storage/HOT/testindex2
ln -s /Splunk-Storage/COLD/COLD2/testindex2 /Splunk-Storage/COLD/testindex2
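The move-and-symlink pattern above can be sketched in a self-contained way. This uses a throwaway temporary directory instead of the real /Splunk-Storage paths, so it can be tried safely anywhere:

```shell
#!/bin/sh
# Demonstrate the move-and-symlink trick in a throwaway directory.
set -e
BASE=$(mktemp -d)

# Simulate the old layout: the index lives directly under HOT.
mkdir -p "$BASE/HOT/testindex1/db"
echo "bucket data" > "$BASE/HOT/testindex1/db/data"

# Create the new parent directory and move the index under it.
mkdir -p "$BASE/HOT/HOT1"
mv "$BASE/HOT/testindex1" "$BASE/HOT/HOT1/testindex1"

# Symlink the old path to the new location, so anything still
# configured with the old path keeps working.
ln -s "$BASE/HOT/HOT1/testindex1" "$BASE/HOT/testindex1"

# The data is now reachable through both paths.
cat "$BASE/HOT/testindex1/db/data"        # via the symlink
cat "$BASE/HOT/HOT1/testindex1/db/data"   # via the real path

rm -rf "$BASE"
```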
Then I started Splunk.
At this point, Splunk was unaware of any change on the underlying file system, yet it continued to function, with the actual data now residing in the new paths.
After repeating this process on all indexers, I modified the indexes.conf on the cluster manager (CM) and pushed the changes.
After verifying that everything was correct, I removed the symbolic links.
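The pushed indexes.conf points the moved indexes at their new directories. The exact file isn't shown above, so treat this as a sketch of what such stanzas could look like, with volume names matching the directories created earlier:

```ini
[volume:HOT1]
path = /Splunk-Storage/HOT/HOT1

[volume:COLD1]
path = /Splunk-Storage/COLD/COLD1

[testindex1]
repFactor = auto
homePath = volume:HOT1/testindex1
coldPath = volume:COLD1/testindex1
thawedPath = /Splunk-Storage/COLD/COLD1/testindex1/thaweddb
summaryHomePath = /Splunk-Storage/HOT/HOT1/testindex1/summary
frozenTimePeriodInSecs = 63072000  # 730 days * 86400, for compliance retention
```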
Moving indexes on a working cluster is a tricky thing to do since:
1) You have to physically move the data
2) You have to push the indexes.conf from the CM
3) The indexes.conf has to be consistent across the whole cluster.
So the operation is very tricky, and I'd do a lot of testing before attempting it in a production environment.
You could get away with taking the whole cluster down, moving the data around physically and deploying "fixed" indexes.conf both on the CM and on each individual indexer. But again - testing, testing, testing. There are many things that could go wrong here.
Generally, the best practice would be to leave those indexes alone and not move them around: if there is a _new_ requirement, just create a new index on a new storage unit, set the proper size/age constraints, and stick with it.
Yes, that is correct. I had forgotten that I must manage the new indexes.conf and push it from the CM; I have faced many issues when the indexes.conf on the indexers doesn't match the one on the CM.
I'd also be very grateful if @woodcock could offer some advice.