HDFS capacity N/A

jackshine
Explorer

I installed Splunk to monitor and analyze Hadoop jobs. After I installed Splunk core and the Splunk Hadoop app on the JobTracker, and a forwarder with the TA on the other nodes, HDFS capacity and slot capacity show N/A in the Utilization section. Does anyone have an idea of possible causes?

Thank you

1 Solution

pierre4splunk
Splunk Employee

Have you enabled collection for Hadoop Metrics?

Each Hadoop daemon exposes rich runtime metrics that are useful both for monitoring and for ad-hoc exploration of cluster activity, job performance, and historical workload. The HDFS capacity and slot capacity gauges in the Utilization section depend on NameNode and JobTracker metrics, respectively.

The simplest way to collect Hadoop metrics is to:

  1. configure your Hadoop daemon(s) to dump Hadoop metrics to a named log file
  2. configure your Splunk forwarders to monitor the resulting output files

For more info about #1:
http://docs.splunk.com/Documentation/HadoopOps/latest/HadoopOps/SplunkTAforHadoopOps#Hadoop_metrics
http://docs.splunk.com/Documentation/HadoopOps/latest/HadoopOps/SetupyourClouderaplatform
http://docs.splunk.com/Documentation/HadoopOps/latest/HadoopOps/SetupyourHortonworksplatform
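
For example, a minimal hadoop-metrics.properties sketch using the FileContext class (the file paths and the 10-second period are illustrative; adjust them for your deployment):

    # Dump NameNode/DataNode (dfs context) metrics to a local file every 10 seconds
    dfs.class=org.apache.hadoop.metrics.file.FileContext
    dfs.period=10
    dfs.fileName=/var/log/hadoop/dfsmetrics.log

    # Dump JobTracker/TaskTracker (mapred context) metrics the same way
    mapred.class=org.apache.hadoop.metrics.file.FileContext
    mapred.period=10
    mapred.fileName=/var/log/hadoop/mrmetrics.log

Restart the daemons after editing the file so the new metrics contexts take effect.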

For #2, refer to the example inputs.conf stanzas for metrics:
http://docs.splunk.com/Documentation/HadoopOps/latest/HadoopOps/SampleHadoopinputs.conffiles
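
For instance, a minimal inputs.conf sketch for the forwarders, assuming the metrics file paths above (the sourcetype and index values here are placeholders; use the ones from the sample files linked above):

    # Monitor the NameNode metrics dump (path, sourcetype, and index are assumptions)
    [monitor:///var/log/hadoop/dfsmetrics.log]
    sourcetype = hadoop_metrics
    index = hadoopmon
    disabled = false

    # Monitor the JobTracker metrics dump
    [monitor:///var/log/hadoop/mrmetrics.log]
    sourcetype = hadoop_metrics
    index = hadoopmon
    disabled = false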

For more about Hadoop Metrics: http://blog.cloudera.com/blog/2009/03/hadoop-metrics/

jackshine
Explorer

The slots gauge is also working now. It turned out that N/A was shown because the 500 MB daily indexing limit had already been exceeded yesterday.
Thank you


jackshine
Explorer

You are right, I forgot to modify the hadoop-metrics configuration file and inputs.conf. After I did that, the HDFS capacity gauge shows data. However, slot capacity still shows N/A.
