Splunk Enterprise

Shut down a Splunk cluster completely for a planned outage

cmeo-bcit
New Member

Good morning,

I am currently instructing the Cluster Admin course, and a student has asked a question which, to my great surprise, doesn't seem to be covered anywhere.

They have an indexer cluster and SHC on a single site, and they want to shut down everything for a planned power outage in their data centre.

 

What is the correct sequence and commands for doing this?

My own guesses are:

Shut down everything that is sending data to Splunk first.

Place the indexer cluster in maintenance mode

Shut down the deployment server if in use.

Shut down the SHC deployer (splunk stop)

Shut down the SHC members (splunk stop?)

Shut down the indexer members (? not sure which variant of the commands to use here)

Shut down the cluster master last.

Restart would be in the reverse order.

Correct or not?

Thank you,

Charles


deepakc
Contributor

The main idea is to have a graceful shutdown/start and run the necessary commands for the clusters. I had a look at your high-level steps and they look OK; I did something similar a while ago in terms of stopping a Splunk cluster environment and bringing it back up.

Shutting down the data forwarding tier first is a good idea; otherwise the data has nowhere to go and will be lost.

Place the CM in maintenance mode
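For example, on the CM (assuming a default /opt/splunk install path), something like:

/opt/splunk/bin/splunk enable maintenance-mode
/opt/splunk/bin/splunk show maintenance-mode    # confirm it is enabled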

Shut down the Deployment Server / HFs if in use as well.

Shut down the SHC: take note of the SHC captain, stop the SHC members and the captain last, and make sure they are down.
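To note the captain, a quick check from any SHC member should do it (same assumed path as above):

/opt/splunk/bin/splunk show shcluster-status    # output lists the current captain and member states
/opt/splunk/bin/splunk stop                     # then run on each member, captain last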

Shut down the Deployer.

As the CM should be in maintenance mode, shutting down the indexers with the normal stop command should be fine (/opt/splunk/bin/splunk stop), one at a time, and make sure they are down.
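For example, on each indexer in turn:

/opt/splunk/bin/splunk stop
/opt/splunk/bin/splunk status    # confirm splunkd is no longer running before moving to the next peer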

Shut down the CM.

On the reverse, make sure the CM is up and it's still in maintenance mode, bring all the indexers up, and when they are all up disable maintenance mode. Check the status using the MC: the replication factor and search factor should show a green status, so you may have to wait a bit.
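Roughly, on the CM (same assumed path):

/opt/splunk/bin/splunk start
/opt/splunk/bin/splunk show maintenance-mode    # should still be enabled
# ...start each indexer, then once they have all rejoined:
/opt/splunk/bin/splunk show cluster-status      # wait until RF and SF are met
/opt/splunk/bin/splunk disable maintenance-mode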

Bring the Deployer back up.

Then bring the SHs up one by one: ensure the captain is up first, then the other SHC members, and check that the others can communicate with it, using the SHC cluster commands to check status.
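Once the members are back, the same status command can be run from any member to confirm a captain has been elected and everyone shows as up:

/opt/splunk/bin/splunk show shcluster-status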

Bring back the Deployment Server / HFs.

Bring back the data forwarding tier.

Use the MC to check overall health.

I would document all the steps and commands clearly, so you have a process to follow and checkpoint against, rather than working in an ad-hoc manner, given the many moving parts.
