Microsoft on Tuesday unveiled the expansion of its Sentinel Security Incidents and Event Management solution (SIEM) as a unified agentic platform with the general availability of the Sentinel data lake.

In addition, the tech giant said it's also releasing a public preview of Sentinel Graph and Sentinel Model Context Protocol (MCP) server to turn telemetry into a security graph and allow AI agents access an organization's security context in a standardized manner.

"With graph-based context, semantic access, and agentic orchestration, Sentinel gives defenders a single platform to ingest signals, correlate across domains, and empower AI agents built in Security Copilot, VS Code using GitHub Copilot, or other developer platforms," Vasu Jakkal, corporate vice president at Microsoft Security, said in a post shared with The Hacker News.

DFIR Retainer Services

Microsoft released Sentinel data lake in public preview earlier this July as a purpose-built, cloud-native tool to ingest, manage, and analyze security data to provide better visibility and advanced analytics.

With the data lake, the idea is to lay the foundation for an agentic defense by bringing data from diverse sources and enabling artificial intelligence (AI) models like Security Copilot to have the full context necessary to detect subtle patterns, correlate signals, and surface high-fidelity alerts.

The shift, Redmond added, allows security teams to uncover attacker behavior, retroactively hunt over historical data, and trigger detections automatically based on the latest tradecraft.

"Sentinel ingests signals, either structured or semi-structured, and builds a rich, contextual understanding of your digital estate through vectorized security data and graph-based relationships," Jakkal said.

"By integrating these insights with Defender and Purview, Sentinel brings graph-powered context to the tools security teams already use, helping defenders trace attack paths, understand impact, and prioritize response -- all within familiar workflows."

Microsoft further noted that Sentinel organizes and enriches security data so as to detect issues faster and better respond to events at scale, shifting cybersecurity from "reactive to predictive."

In addition, the company said users can build Security Copilot agents in a Sentinel MCP server-enabled coding platform, such as VS Code, using GitHub Copilot, that are tailored to their organizational workflows.

The Windows maker has also emphasized the need for securing AI platforms and implementing guardrails to detect (cross-)prompt injection attacks, stating it intends to roll out new enhancements to Azure AI Foundry that incorporate more protection for AI agents against such risks.

Microsoft told The Hacker News that it enforced security and compliance across its data lake by using Azure and Entra RBAC for least-privilege access, storing data in the same region as the connected workspace to meet data residency requirements, and encrypting all data at rest using Microsoft-managed keys by default, with options to use customer-managed keys (CMK). It also said each tenant receives a logically isolated data lake instance.

CIS Build Kits

To counter the risks associated with prompt injection attacks, the company said it's securing the platform through a multi-pronged approach that involves threat protection for AI services in Microsoft Defender for Cloud, Spotlighting in Azure AI Content Safety, and Azure AI Red Teaming Agent.

These features are engineered to provide advanced security for custom AI applications built with Azure AI Foundry models, real-time detection and blocking of prompt injection attacks, and automated adversarial testing of AI systems to identify vulnerabilities in content safety.

"The [threat protection] feature proactively detects and responds to threats, including prompt injection attacks by generating actionable alerts," Microsoft said. "It helps organizations safeguard their AI services from malicious inputs that could compromise model behavior or data integrity."

How AI Red Teaming works

Spotlighting, on the other hand, enhances protection against indirect attacks by tagging the input documents with special formatting to indicate lower trust in the model. "This capability ensures that Azure AI Foundry models do not act on hidden or malicious prompts embedded in user inputs," the company added. "By continuously monitoring and filtering unsafe content, Spotlighting helps maintain the integrity and trustworthiness of AI-generated outputs."

Lastly, the third layer of defense comes in the form of an AI Red Teaming Agent, which Microsoft released in preview back in April 2025 as a tool to help users proactively find safety risks associated with generative AI systems during design and development. It integrates the company's open-source Python Risk Identification Tool (PyRIT) with Azure AI Foundry's built-in Risk and Safety Evaluations to assess safety issues.

"Users can simulate prompt injection scenarios and other attack strategies to evaluate how well their models detect and respond to threats," Microsoft said. "This tool supports proactive risk assessment and strengthens the resilience of AI applications against evolving adversarial techniques."

(The story was updated after publication to include additional insights from Microsoft related to prompt injection attacks.)

Found this article interesting? Follow us on Google News, Twitter and LinkedIn to read more exclusive content we post.