Splunk Threat Research Team: Generative AI - 3/13/24

Published on ‎01-09-2024 02:57 PM by Splunk Employee | Updated on ‎03-22-2024 01:35 PM

Register here. This thread is for the Community Office Hours session with the Splunk Threat Research Team on Generative AI on Wed, Mar 13, 2024 at 1pm PT / 4pm ET. 

 

This is your opportunity to ask questions related to your specific Generative AI challenge or use case, including:

  • Understanding generative AI technologies and techniques
  • The application of AI techniques in cybersecurity
  • How to use Large Language Models (LLMs), Generative Adversarial Networks (GANs), Diffusion Models, and Autoencoders
  • The particular strengths of different generative AI techniques
  • Real-world security scenarios that these techniques can support
  • Practical tips for implementing these techniques to enhance threat detection
  • Anything else you'd like to learn!

 

Please submit your questions at registration or as comments below. You can also head to the #office-hours user Slack channel to ask questions (request access here).

 

Pre-submitted questions will be prioritized. After that, we will open the floor up to live Q&A with meeting participants.

 

Look forward to connecting!



adepp
Splunk Employee

Hi everyone!

Don't forget to submit your questions at registration or post a comment here for any topics you'd like to see discussed in the Community Office Hours session. You can also head to the #office-hours user Slack channel to ask questions and join the discussion (request access here).

loriexi
Splunk Employee

Here are a few questions from the session (get the full Q&A deck and live recording in the #office-hours Slack channel):

Q1: What are some of Splunk’s best practices for artificial intelligence?

- Splunk’s trustworthy AI principles:

    • Accountability - Shaped by humans
    • Transparency
    • Privacy - Data trust
    • Fairness - Institutionally unbiased
    • Resilience - Built to withstand

- Before starting a new AI / ML project, make sure to assess:

    • Objectives
    • Risks
    • Methodology
    • Foundations

Q2: What are some issues to keep in mind associated with security and Generative AI?

- Preserving privacy

    • Mitigating risk when learning from sensitive data
    • Maintaining control over data

- Adversaries’ use of GenAI

    • We can train GenAI to help detect threats; adversaries can train it to help evade detection

Q3:  How is Splunk using AI / ML in its products? What are some common use cases for Splunk ES and SOAR?

- Incorporating AI / ML into Splunk products:

    • AI / ML capabilities built into security and observability solutions
    • Apps that support advanced and custom AI use cases
    • Assistive workflow

- Example use cases:

    • Simplifying workflows
    • ML-powered detections
    • Assisted response actions
    • Event correlation and alert noise reduction
    • Predictive analytics

- Built on solid foundations such as

    • Good quality data
    • Risk Based Alerting (RBA)
    • Incident Response Processes/Automation

Q4: How can Gen AI be used to support threat and anomaly detection? What are some examples?

- Sample use cases:

    • Tackling phishing with autoencoder-based deep neural network architectures
    • Using GANs (a class of neural networks) for private synthetic data
    • Detecting spurious domain names with LLMs

- Guidance & Examples
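As a toy illustration of the language-model idea behind spurious-domain detection (a simplified stand-in, not the team's actual implementation), a character-bigram model trained on known-good domain names can flag high-"perplexity" strings such as DGA output. The domain lists and functions below are illustrative assumptions:

```python
import math
from collections import defaultdict

def train_bigram_model(domains):
    """Count character-bigram frequencies over known-good domain names."""
    counts = defaultdict(lambda: defaultdict(int))
    for d in domains:
        chars = "^" + d.lower() + "$"  # add start/end markers
        for a, b in zip(chars, chars[1:]):
            counts[a][b] += 1
    # Convert to log-probabilities with add-one smoothing over a small alphabet
    alphabet = set("abcdefghijklmnopqrstuvwxyz0123456789-.$")
    model = {}
    for a, nexts in counts.items():
        total = sum(nexts.values()) + len(alphabet)
        model[a] = {b: math.log((nexts.get(b, 0) + 1) / total) for b in alphabet}
    return model, math.log(1 / len(alphabet))  # fallback for unseen contexts

def score(model, fallback, domain):
    """Average negative log-likelihood per bigram; higher = more random-looking."""
    chars = "^" + domain.lower() + "$"
    pairs = list(zip(chars, chars[1:]))
    nll = 0.0
    for a, b in pairs:
        nll -= model.get(a, {}).get(b, fallback)
    return nll / len(pairs)

# Tiny illustrative training set of benign domains
benign = ["google.com", "splunk.com", "github.com", "microsoft.com",
          "wikipedia.org", "amazon.com", "openai.com", "python.org"]
model, fallback = train_bigram_model(benign)

print(score(model, fallback, "splunk.com"))       # scores low: resembles training data
print(score(model, fallback, "xq7zk2vbn9d.com"))  # scores high: DGA-like randomness
```

In practice you would replace this bigram model with a proper character-level neural language model and a much larger benign corpus; the point is only that domains with improbable character sequences score measurably worse than natural ones.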

Other Questions (check the #office-hours Slack channel for responses):

  • How can I train models to support fraud detection using Splunk and Enterprise Security?
  • Can you give an overview of fraud identification using ML commands in Splunk?
  • How can we deal with voice deepfakes in user calls for verification and access?
  • Can we get a Splunk Cloud-compatible version of the DGA app from 2020?
  • Setting up ML datasets, akin to the DGA app, for botnet DDNS detection related to IOCs.
  • Using Splunk AI Assistant

Live Questions:

  • On the Splunk Threat Research Team, are you mostly concerned with building GenAI into Splunk to improve threat detection, or are you also considering GenAI as the threat itself (or something that can be monitored in Splunk where we may have users using GenAI inside the business)?
  • I'd like a similar "Clippy" assistant for Splunk that supports dashboarding (suggestions, chat questions, etc.), configuration management, troubleshooting, and general environment evaluation/optimization suggestions. Currently I'm using GitHub Copilot for these tasks, but that's external and requires manual application of suggestions. I hope Splunk AI Assistant goes in this direction.
  • How does Splunk use generative AI? Or how would one use generative AI to query the data in Splunk?
  • How does Splunk approach the privacy and sensitivity of data when it comes to training your models?
  • I currently use Splunk Enterprise for logs; where exactly do Enterprise Security and SOAR fit in? Are they separate solutions, or just new apps on top of Splunk Enterprise?
  • Is there going to be a way to add your own models? DSDL (Data Science and Deep Learning) was a good app, but built more like a project. Will this be part of that project or separate? The use-case apps are good, but we need a way to load and test the models. I'm thinking of something like SageMaker from AWS.
  • Are these going to be available on-prem and in the cloud? I see all the AI alerts in Splunk Observability. Are those open source, or can we see what they're looking for?
  • How about an AI app to dynamically tag ES data model tags, akin to Splunk Essentials for Predictive Maintenance | Splunkbase, but geared more toward the Open Cybersecurity Schema Framework (OCSF), like OCSF Datamodels Security Add On | Splunkbase?
  • Anything about Retrieval-Augmented Generation, aka RAG?