Observability: Application Performance Monitoring (APM): Session 2 - 4/10/24

Published on 01-12-2024 04:26 PM by Splunk Employee | Updated on 04-17-2024 02:31 PM

Register here. This thread is for the Community Office Hours session on Observability: Application Performance Monitoring (APM) on Wed, April 10, 2024 at 1pm PT / 4pm ET. 

 

This is your opportunity to ask questions about your current Observability APM challenge or use case, including:

  • Sending traces to APM
  • Tracking service performance with dashboards
  • Setting up deployment environments
  • AutoDetect detectors
  • Enabling Database Query Performance
  • Setting up business workflows
  • Implementing high-value features (Tag Spotlight, Trace View, Service Map)
  • Anything else you'd like to learn!

 

Please submit your questions at registration or as comments below. You can also head to the #office-hours channel in the user Slack to ask questions (request access here).

 

Pre-submitted questions will be prioritized. After that, we will open the floor up to live Q&A with meeting participants.

 

Look forward to connecting!



ArifV
Splunk Employee

Here are a few questions from the session (get the full Q&A deck and live recording in the #office-hours Slack channel):

Q1: How do I configure the YAML for APM, including any performance best practices?

  • OpenTelemetry Collector configuration is stored in a YAML file and specifies the behavior of receivers, processors, exporters, and other components. The Splunk Distribution of the OpenTelemetry Collector ships with a default configuration that can be modified as needed for your environment. It can be helpful to visualize collector configuration with a tool such as OTelBin; a minimal sketch of the file layout follows below.
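
For illustration, here is a minimal sketch of what the main sections of a collector configuration file can look like. The endpoint and the memory limits are placeholders, not the shipped Splunk defaults, so adjust them to your environment:

    # Illustrative sketch only, not the shipped default configuration.
    # Receivers define how telemetry enters the collector.
    receivers:
      otlp:
        protocols:
          grpc:
          http:

    # Processors transform or batch data in flight; memory_limiter and
    # batch are common performance best practices.
    processors:
      memory_limiter:
        check_interval: 2s
        limit_mib: 512
      batch:

    # Exporters define where telemetry is sent (endpoint is a placeholder).
    exporters:
      otlphttp:
        endpoint: https://ingest.<realm>.signalfx.com/v2/trace/otlp

    # Pipelines wire receivers, processors, and exporters together.
    service:
      pipelines:
        traces:
          receivers: [otlp]
          processors: [memory_limiter, batch]
          exporters: [otlphttp]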

Documentation:

Q2: Can we generate a component call flow?

Splunk automatically generates a service map showing the dependencies and call flow among your instrumented and inferred services in APM. The map is dynamically updated based on your selections in the time range, environment, workflow, service, and tag filters. You can use the service map to identify dependencies, performance bottlenecks, and error propagation.

Documentation: 

View dependencies among your services in the service map

Q3: Can you cover AlwaysOn Profiling for .NET?

To activate AlwaysOn Profiling, set the SPLUNK_PROFILER_ENABLED environment variable to true.

To activate memory profiling, set the SPLUNK_PROFILER_MEMORY_ENABLED environment variable to true after activating AlwaysOn Profiling.
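
As a minimal sketch (assuming a Linux shell; on Windows you would set the same variables in the service or application environment instead), the two variables above can be set before starting the instrumented .NET process:

    # Turn on AlwaysOn Profiling (CPU call stacks) for the .NET instrumentation.
    export SPLUNK_PROFILER_ENABLED=true

    # Additionally turn on memory (allocation) profiling.
    export SPLUNK_PROFILER_MEMORY_ENABLED=true

    # Then start the instrumented application as usual, for example:
    # dotnet ./MyApp.dll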

Documentation: 

Activate AlwaysOn Profiling (.NET)

Q4: How can we get SQS calls to integrate into a trace in APM?

For some languages this is included with auto-instrumentation. For example, the OpenTelemetry Java instrumentation includes out-of-the-box support for the AWS SDK, which automatically instruments calls to SQS (see the sketch below). You can search the OpenTelemetry registry for details on SQS support in other languages.
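
As a rough sketch (assuming a Java service that already uses the AWS SDK; the jar path, service name, and environment below are placeholders), attaching the Splunk OpenTelemetry Java agent at startup is typically all that is needed for SQS calls to appear as spans in the trace:

    # Placeholders: adjust the service name, environment, and jar paths.
    export OTEL_SERVICE_NAME=order-service
    export OTEL_RESOURCE_ATTRIBUTES=deployment.environment=prod

    # The agent auto-instruments supported libraries, including the AWS SDK
    # (and therefore SQS send/receive calls), with no code changes.
    java -javaagent:./splunk-otel-javaagent.jar -jar ./my-app.jar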

Documentation: 

Q6: Can you touch on API performance and availability?

  • Create availability SLOs in o11y Cloud 

Documentation: 

Live Questions:

Q1: How do I know which attributes to use for breaking down the service map? 

Blog: Up Your Observability Game with Attributes

Docs: Instrument your application code to add tags to spans

Docs: Filter spans within a particular trace
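
As a hedged illustration of the "add tags to spans" approach (OpenTelemetry Java API; the attribute names below are made up and should be replaced with low-cardinality dimensions that matter to your services), you can set attributes on the current span and then use them in Tag Spotlight or as a breakdown in the service map:

    import io.opentelemetry.api.trace.Span;

    public class CheckoutHandler {

        // Inside a request handler: attach business context to the span
        // created by auto-instrumentation. The attribute names here are
        // illustrative, not a Splunk convention.
        public void handleCheckout(String tenantId) {
            Span current = Span.current();
            current.setAttribute("tenant.id", tenantId);
            current.setAttribute("customer.tier", "premium");
        }
    }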

 

Q2: Do we need to set both profiling variables for auto-instrumentation as well?

Yes. To enable AlwaysOn Profiling with zero-config auto-instrumentation, you can add the following parameters to enable CPU and memory profiling:

--enable-profiler --enable-profiler-memory
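
For context, a hypothetical Linux installer invocation could look like the following; the script path, realm, and token are placeholders, and you should confirm the exact flags against the linked documentation:

    # Hypothetical example: collector installer script with zero-config
    # auto-instrumentation plus both profiler flags.
    sudo sh /tmp/splunk-otel-collector.sh --realm <realm> -- <access_token> \
      --with-instrumentation \
      --enable-profiler --enable-profiler-memory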

Docs:  Activate AlwaysOn Profiling 

Q3: Can I create custom variables and filters for APM dashboards?

Documentation: Customize Dashboard Filters