Dojo AI for SOC investigations guide: cut alert fatigue and speed triage with agentic automation
This Dojo AI for SOC investigations guide helps teams cut alert time with Sumo Logic’s new agentic tools. The SOC Analyst Agent scores severity and builds context. The Knowledge Agent answers “how-to” questions in chat. An MCP server connects your own copilots. Used together, they help you triage faster and act with confidence.
Security teams face a flood of alerts, many tools, and pressure to move fast. Sumo Logic’s Dojo AI adds agent-powered help to shrink the gap from alert to action. The new SOC Analyst Agent speeds triage. The Knowledge Agent explains tasks in plain language. An MCP server lets you plug in your own copilots and models while keeping scale and security.
What’s new in Sumo Logic’s Dojo AI
SOC Analyst Agent (beta)
Applies agent reasoning to each alert
Suggests severity and likely impact
Pulls related activity to build a clean timeline
Presents clear context so analysts can decide faster
Knowledge Agent (available now)
Answers “how do I” questions in natural language
Works through Mobot, the chat interface
Returns citable steps from docs and product knowledge
Reduces ramp time for new analysts
MCP server (beta/prototype)
Implements the Model Context Protocol to connect external AI (request shapes sketched after this list)
Integrates customer-owned copilots and models
Keeps consistency, scale, and security within Dojo
Enables “bring your own AI” without tool sprawl
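To make the protocol concrete, here is a minimal sketch of the JSON-RPC 2.0 message shapes that the open Model Context Protocol defines for discovering and calling tools. The tool name and arguments below are hypothetical placeholders; this article does not document Sumo Logic’s actual MCP tool catalog.

import json

# A connected copilot first asks the MCP server which tools it exposes...
list_tools_request = {
    "jsonrpc": "2.0",
    "id": 1,
    "method": "tools/list",
}

# ...then calls one tool by name with structured arguments.
call_tool_request = {
    "jsonrpc": "2.0",
    "id": 2,
    "method": "tools/call",
    "params": {
        "name": "search_security_logs",  # hypothetical tool name
        "arguments": {"indicator": "198.51.100.7", "window": "24h"},
    },
}

print(json.dumps(call_tool_request, indent=2))

Keeping external AI behind this narrow, discoverable interface is what lets you bring your own copilots without tool sprawl.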
Dojo AI for SOC investigations guide: core workflow
1) Intake and triage
Alerts arrive in Dojo from logs and signals
SOC Analyst Agent proposes severity and risk
It enriches the alert with related events, users, hosts, and timestamps
Analyst confirms or adjusts the verdict
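As a rough illustration of the intake step above, here is a minimal sketch of an enriched alert carrying an agent-proposed verdict that an analyst confirms or overrides. The field names are illustrative assumptions, not Sumo Logic’s schema.

from dataclasses import dataclass, field

@dataclass
class TriageVerdict:
    severity: str                      # e.g. "low" | "medium" | "high" | "critical"
    likely_impact: str
    confidence: float                  # 0.0 to 1.0
    related_events: list = field(default_factory=list)   # timeline entries

@dataclass
class Alert:
    alert_id: str
    source: str
    proposed: TriageVerdict | None = None        # filled in by the agent
    confirmed_severity: str | None = None        # filled in by the analyst

    def confirm(self, analyst_severity: str | None = None) -> None:
        # Accept the agent's proposal unless the analyst supplies an override.
        self.confirmed_severity = analyst_severity or (
            self.proposed.severity if self.proposed else "unclassified"
        )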
2) Scope and investigate
Use the agent’s context to trace lateral movement
Query logs for matching indicators across environments (an indicator sweep is sketched after this step)
Call out to connected copilots via the MCP server for extra checks
Flag duplicates and suppress noisy alerts
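For the log-query and de-duplication items above, a sweep like the one below checks one indicator across several sources and suppresses repeats. The search_logs helper is a hypothetical stand-in for your own log search tooling, not a Sumo Logic API call.

from collections import Counter

def search_logs(source: str, indicator: str) -> list[dict]:
    # Placeholder: replace with a query against your log platform.
    return []

def sweep_indicator(indicator: str, sources: list[str]) -> dict:
    hits, seen = [], Counter()
    for source in sources:
        for event in search_logs(source, indicator):
            key = (event.get("host"), event.get("user"), event.get("rule"))
            seen[key] += 1
            if seen[key] == 1:           # keep the first occurrence, count the rest as duplicates
                hits.append(event)
    return {
        "indicator": indicator,
        "unique_hits": hits,
        "duplicates": sum(seen.values()) - len(hits),
    }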
3) Decide and act
Pick the right playbook based on verified severity (a selection sketch follows this step)
Automate safe actions where allowed (isolate, reset, block)
Log the decision path for audit and review
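Here is a minimal sketch of the decide-and-act step, assuming a simple severity-to-playbook map and an approval set that gates which actions run automatically. The playbook names and actions are examples, not product features.

PLAYBOOKS = {
    "critical": {"playbook": "isolate-host", "auto_actions": ["isolate"]},
    "high": {"playbook": "credential-reset", "auto_actions": ["reset"]},
    "medium": {"playbook": "block-indicator", "auto_actions": []},   # human approval required
    "low": {"playbook": "ticket-only", "auto_actions": []},
}

def decide(severity: str, approved_actions: set[str]) -> dict:
    entry = PLAYBOOKS.get(severity, PLAYBOOKS["low"])
    automated = [a for a in entry["auto_actions"] if a in approved_actions]
    decision = {"playbook": entry["playbook"], "automated": automated}
    print(f"audit: severity={severity} decision={decision}")   # record the decision path for review
    return decision

For example, decide("high", {"reset"}) selects the credential-reset playbook and allows the reset action, while decide("medium", {"reset"}) selects a playbook but automates nothing.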
4) Learn and improve
Ask the Knowledge Agent for faster steps or commands
Add missed context and update playbooks
Feed lessons back into the agents’ prompts and rules
How to cut alert time in practice
Use this Dojo AI for SOC investigations guide to structure your rollout and get quick wins.
Prep your data and rules
Map top alert types and known noisy sources
Define severity criteria and accepted auto-actions
Normalize metadata (users, assets, tags) for clean joins
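For the metadata normalization item above, a sketch like this flattens user and asset fields into stable join keys. The input field names are assumptions about your own telemetry, not a fixed schema.

def normalize_event(event: dict) -> dict:
    user = (event.get("user") or event.get("userName") or "").strip().lower()
    host = (event.get("host") or event.get("hostname") or "").strip().lower()
    return {
        **event,
        "user": user.split("@")[0],              # drop the email domain for a stable join key
        "host": host.split(".")[0],              # drop the DNS suffix
        "tags": sorted(set(event.get("tags", []))),   # de-duplicate tags
    }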
Wire up your AI ecosystem
Connect your copilots and models to the MCP server
Set strict scopes: what each tool can read and do
Use read-only first, then enable actions with approvals
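A minimal sketch of a per-tool scope policy for that read-only-first approach: each connected copilot gets explicit read and action grants, and anything not listed is denied. The tool names and keys are illustrative.

SCOPE_POLICY = {
    "copilot-threat-intel": {
        "read": ["alerts", "enrichment"],
        "actions": [],                           # read-only first
    },
    "copilot-responder": {
        "read": ["alerts", "assets"],
        "actions": ["create_ticket"],            # low-risk action, enabled with approvals
        "requires_approval": True,
    },
}

def is_allowed(tool: str, capability: str, kind: str = "actions") -> bool:
    # Default deny: unknown tools and unlisted capabilities are blocked.
    return capability in SCOPE_POLICY.get(tool, {}).get(kind, [])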
Standardize playbooks
Write short, testable steps for common incidents
Mark which steps agents can suggest vs. execute (see the playbook sketch after this list)
Store playbooks where Knowledge Agent can cite them
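Here is a hypothetical shape for a short, testable playbook whose steps mark what an agent may suggest versus execute. The structure and step wording are assumptions, not a Sumo Logic playbook format.

PHISHING_PLAYBOOK = [
    {"step": "Pull sender reputation and URL verdicts", "agent": "execute"},
    {"step": "Quarantine matching messages", "agent": "suggest"},        # a human runs this
    {"step": "Reset credentials for confirmed clicks", "agent": "suggest"},
    {"step": "Close duplicates and update the ticket", "agent": "execute"},
]

def executable_steps(playbook: list[dict]) -> list[str]:
    # Steps the agent may run on its own; everything else is suggestion-only.
    return [s["step"] for s in playbook if s["agent"] == "execute"]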
Pilot, then expand
Start with one use case (for example, phishing or impossible travel)
Shadow-run the SOC Analyst Agent for two weeks
Measure gains, tune prompts, and reduce manual hops
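One way to score a shadow run is to measure how often the agent’s proposed severity matched the analyst’s final call. The record shape below is an assumption about how you export that comparison.

def shadow_agreement(records: list[dict]) -> float:
    # records: [{"agent_severity": "high", "analyst_severity": "high"}, ...]
    if not records:
        return 0.0
    matches = sum(1 for r in records if r["agent_severity"] == r["analyst_severity"])
    return matches / len(records)

# Four shadow-run records, three agreements: prints 0.75
print(shadow_agreement([
    {"agent_severity": "high", "analyst_severity": "high"},
    {"agent_severity": "medium", "analyst_severity": "low"},
    {"agent_severity": "low", "analyst_severity": "low"},
    {"agent_severity": "critical", "analyst_severity": "critical"},
]))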
Metrics and KPIs to track
MTTA: cut time-to-acknowledge by using verdict suggestions
MTTR: shorten time-to-remediate with auto-enrichment
False-positive rate: reduce it through de-duplication and better context
Analyst throughput: more cases closed per shift
Time-to-competency: new analysts reach full speed faster
Tie these to a baseline so you can prove the value of this Dojo AI for SOC investigations guide within 30 to 90 days; a simple MTTA and MTTR calculation is sketched below.
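As a worked example of the first two KPIs, the sketch below computes MTTA and MTTR in minutes from case timestamps and expresses the improvement against a baseline. The timestamp field names are assumptions about your case data.

from datetime import datetime
from statistics import mean

def mtta_mttr(cases: list[dict]) -> tuple[float, float]:
    # Each case carries created_at, acked_at, and resolved_at as ISO-8601 strings.
    parse = datetime.fromisoformat
    mtta = mean((parse(c["acked_at"]) - parse(c["created_at"])).total_seconds() for c in cases)
    mttr = mean((parse(c["resolved_at"]) - parse(c["created_at"])).total_seconds() for c in cases)
    return mtta / 60, mttr / 60          # minutes

def improvement(baseline_minutes: float, current_minutes: float) -> float:
    # Percentage reduction versus the pre-rollout baseline.
    return 100 * (baseline_minutes - current_minutes) / baseline_minutes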
Why it beats old-school workflows
Fewer tool switches: context lives with the alert
Less guesswork: AI proposes a clear next step
Safer actions: guardrails keep changes controlled
Better knowledge flow: answers arrive inside the case
Risk management and guardrails
Set human-in-the-loop for medium and high-risk actions
Log all agent suggestions and executions for audit
Segment data by sensitivity; avoid over-broad model access
Use allowlists for external AI calls via MCP (a guardrail sketch follows this list)
Run chaos tests to spot bad prompts and edge cases
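Two of those guardrails, the MCP allowlist and the human-in-the-loop gate, can be sketched in a few lines. Every tool and action name here is an example, and both suggestions and executions are written to an audit log.

import logging

logging.basicConfig(level=logging.INFO)
audit = logging.getLogger("agent-audit")

ALLOWED_MCP_TOOLS = {"search_security_logs", "lookup_threat_intel"}   # example allowlist
RISK_REQUIRING_APPROVAL = {"medium", "high", "critical"}

def call_external_tool(tool: str) -> None:
    if tool not in ALLOWED_MCP_TOOLS:
        audit.warning("blocked external call to %s", tool)
        raise PermissionError(f"{tool} is not on the MCP allowlist")
    audit.info("allowed external call to %s", tool)

def execute_action(action: str, risk: str, approved_by: str | None = None) -> None:
    if risk in RISK_REQUIRING_APPROVAL and not approved_by:
        audit.info("action %s (risk=%s) queued for human approval", action, risk)
        return
    audit.info("executing %s (risk=%s, approved_by=%s)", action, risk, approved_by)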
30-60-90 day rollout plan
Days 1–30: Foundation
Connect data sources and enable the SOC Analyst Agent in shadow mode
Onboard the Knowledge Agent; seed it with your docs
Define approval policies for actions
Days 31–60: First automation
Turn on MCP server for limited copilots
Automate low-risk steps (enrichment, ticket updates)
Measure MTTA/MTTR; tune severity prompts
Days 61–90: Scale and govern
Expand to more alert types
Enable guided actions for medium-risk cases
Publish monthly metrics and lessons learned
The bottom line: modern SOCs need speed, context, and safe automation. Sumo Logic’s agents bring those pieces together. Use this Dojo AI for SOC investigations guide to link triage, knowledge, and your own copilots, so you cut alert time and focus on real threats.
Source: https://siliconangle.com/2025/12/01/sumo-logic-expands-dojo-ai-new-agentic-tools-modern-security-operations/
FAQ
Q: What is the Dojo AI for SOC investigations guide and how does it help SOC teams?
A: Dojo AI is Sumo Logic’s artificial intelligence platform for security operations, combining agentic AI, log intelligence and secure model integration. The Dojo AI for SOC investigations guide explains how the SOC Analyst Agent, Knowledge Agent and MCP server work together to speed triage, provide citable knowledge and connect customer-owned copilots to cut alert time.
Q: How does the SOC Analyst Agent speed triage and investigations?
A: The SOC Analyst Agent applies agentic AI reasoning to each alert, suggests severity and likely impact, pulls related activity to build a clear timeline and presents context so analysts can decide faster. It is launching in beta and delivers verdicts and enrichment to reduce alert fatigue and accelerate investigation steps.
Q: What does the Knowledge Agent do and how do analysts access it?
A: The Knowledge Agent answers “how-do-I” questions in natural language through Mobot, returning straightforward, citable steps drawn from documentation and product knowledge. It is available now within the Sumo Logic platform and helps reduce friction and accelerate onboarding for new analysts.
Q: What is the MCP server and how does it enable integration with external AI?
A: The MCP server implements the Model Context Protocol to connect customer-owned copilots, models and third-party AI into Dojo while maintaining scale, consistency and security. It lets organizations bring their own AI without creating tool sprawl and is currently available as a beta/prototype to select customers.
Q: How should SOC teams structure the triage and investigation workflow using this guide?
A: The guide lays out a four-step core workflow: intake and triage (alerts arrive, the SOC Analyst Agent proposes severity and enriches events), scope and investigate (trace lateral movement, query logs and call connected copilots), decide and act (pick a playbook and automate safe actions like isolate, reset or block), and learn and improve (ask the Knowledge Agent and update playbooks). Teams should start with the agent’s suggested context, log decision paths for audit and iterate on playbooks.
Q: What practical steps does the guide recommend to cut alert time quickly?
A: Prep data and rules by mapping top alert types and noisy sources, define severity criteria and accepted auto-actions, and normalize metadata for clean joins; wire up the AI ecosystem by connecting copilots to the MCP server with strict scopes and read-only first; and standardize short, testable playbooks stored where the Knowledge Agent can cite them. Then pilot with a single use case, shadow-run the SOC Analyst Agent for two weeks, measure gains and tune prompts before expanding.
Q: Which metrics and KPIs should teams track to prove value?
A: Track MTTA (time-to-acknowledge), MTTR (time-to-remediate), false-positive rate, analyst throughput and time-to-competency and tie them to a baseline so improvements are measurable. The guide recommends proving value within a 30–90 day window by measuring those indicators.
Q: How does the guide recommend managing risk and governance when using agentic tools?
A: Set human-in-the-loop for medium and high-risk actions, log all agent suggestions and executions for audit, segment sensitive data to limit model access and use allowlists for external AI calls via the MCP server. It also advises running chaos tests, defining approval policies and staging actions (read-only first) to surface bad prompts and edge cases before enabling full automation.