For the last year, “AI in Intune” mostly meant a chat box where you could ask for a KQL query or a summary of a device. It was helpful, but it still required the engineer to do the heavy lifting: taking the AI’s suggestion, verifying it, and manually applying it to a policy.
The introduction of Security Copilot Agents changes this. Instead of a general-purpose assistant, Microsoft is deploying specialized agents designed for specific, high-friction workflows. These agents don’t just chat; they aggregate signals from across the Microsoft stack to perform complex analysis on things like PowerShell risk, compliance baselines, and CVE priorities.
For desktop engineers, this is a shift toward autonomous governance. You aren’t just using AI to write a script; you’re using it to audit the risk of that script before it ever hits a production device.
What are Security Copilot Agents?
Most practitioners are used to the Copilot Chat experience: you enter a prompt, and the LLM returns a response based on the current page context. Agents are different. They are specialized AI identities wrapped around specific business logic and data sources.
While a standard Copilot prompt might tell you what a setting does, an Agent can look at a Multi-Admin Approval request, cross-reference it with Defender vulnerability data and Entra identity risk, and then tell you why a specific PowerShell script is dangerous for your environment.
These agents live in the “Agents” node of the Intune admin center. They operate as a layer between your raw data (Intune, Defender, Entra) and your final administrative action.
The Change Review Agent: Auditing PowerShell Risk
One of the biggest headaches in enterprise endpoint management is Multi-Admin Approval. When a technician submits a PowerShell script for deployment, a senior engineer usually has to review the code. In large environments, this becomes a bottleneck.
The Change Review Agent automates the first pass of this audit. When a request is submitted, the agent evaluates the script by aggregating signals from:
- Microsoft Defender Vulnerability Management: Checking if the script interacts with known vulnerable components.
- Microsoft Entra ID: Assessing the identity risk of the requester.
- Microsoft Intune: Analyzing the historical context of similar requests in your tenant.
Instead of just saying “this looks okay,” the agent provides risk-based recommendations. It identifies exactly which part of the script is high-risk and provides the context needed to approve or deny the request in seconds rather than hours.
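To make the "first pass" concrete, here is a deliberately simplified local analogue of that audit step: a static scan that flags which lines of a submitted PowerShell script match known-risky patterns. The patterns and reasons are illustrative assumptions; the real agent goes much further by correlating Defender vulnerability data and Entra identity risk.

```python
import re

# Hypothetical first-pass scan: flag risky PowerShell constructs by pattern.
# The pattern list is an illustrative assumption, not the agent's actual logic.
RISKY_PATTERNS = {
    r"Invoke-Expression|\biex\b": "dynamic code execution",
    r"DownloadString|Invoke-WebRequest": "remote payload download",
    r"Set-MpPreference\s+-DisableRealtimeMonitoring": "Defender tampering",
    r"-EncodedCommand": "obfuscated command line",
}

def first_pass_review(script: str) -> list[tuple[int, str]]:
    """Return (line_number, reason) for each line matching a risky pattern."""
    findings = []
    for lineno, line in enumerate(script.splitlines(), start=1):
        for pattern, reason in RISKY_PATTERNS.items():
            if re.search(pattern, line, re.IGNORECASE):
                findings.append((lineno, reason))
    return findings

script = "Write-Host 'setup'\nInvoke-Expression (New-Object Net.WebClient).DownloadString($url)"
for lineno, reason in first_pass_review(script):
    print(f"line {lineno}: {reason}")
```

The point is the output shape: not "this looks okay," but "line 2 is high-risk, and here is why," which is what lets an approver act in seconds.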
The Policy Configuration Agent: From PDFs to Policies
Most security baselines start as a PDF or a spreadsheet. Whether it’s a NIST guideline, a CIS benchmark, or a custom internal security document, the process of translating those requirements into Intune Settings Catalog entries is tedious and prone to human error.
The Policy Configuration Agent eliminates the manual translation phase. The workflow is straightforward:
- Upload: You upload the compliance document (e.g., a STIG or NIST PDF).
- Identify: The agent scans the text and maps the requirements to actual Intune settings.
- Refine: The agent presents a list of suggested settings. You can then remove exceptions or tweak values to fit your specific environment.
- Deploy: The suggestions are converted into a baseline policy you can assign to devices.
This removes the “lost in translation” risk where a security requirement is misinterpreted during the manual configuration of a CSP or ADMX-backed setting.
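As a mental model for the "Identify" step, the agent is doing something like keyword-and-context mapping from free-text requirements to Settings Catalog entries. The toy sketch below shows that shape; the setting names and keyword table are invented placeholders, not real Settings Catalog identifiers, and the real agent's mapping is far more sophisticated.

```python
# Toy illustration of the "Identify" step: map free-text compliance
# requirements to (setting, value) pairs by keyword. All setting names
# here are invented placeholders, not actual Settings Catalog entries.
KEYWORD_TO_SETTING = {
    "bitlocker": ("Require BitLocker device encryption", "Enabled"),
    "firewall": ("Enable Microsoft Defender Firewall", "Enabled"),
    "smbv1": ("Disable SMBv1 client", "Disabled"),
}

def map_requirements(requirements: list[str]) -> list[tuple[str, str, str]]:
    """Return (requirement, suggested_setting, suggested_value) triples."""
    mapped = []
    for req in requirements:
        normalized = req.lower().replace(" ", "")
        for keyword, (setting, value) in KEYWORD_TO_SETTING.items():
            if keyword in normalized:
                mapped.append((req, setting, value))
    return mapped

suggestions = map_requirements([
    "All fixed disks must use BitLocker encryption.",
    "The SMB v1 protocol must be disabled on all clients.",
])
for req, setting, value in suggestions:
    print(f"{setting} = {value}  (from: {req})")
```

Even in this toy version you can see where hallucinated mappings creep in: "bitlocker" appearing in a requirement does not guarantee the suggested setting is the right one, which is exactly why the "Refine" step exists.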
The Vulnerability Remediation Agent: Prioritizing CVEs
Managing vulnerabilities is usually a game of “whack-a-mole.” You get a massive list of CVEs from Defender, and you have to figure out which ones actually matter and how to fix them using Intune.
The Vulnerability Remediation Agent (currently in limited public preview) changes the approach from a list of bugs to a list of solutions. It analyzes Defender Vulnerability Management data to identify the most critical CVEs and provides:
- Impact Analysis: A summarized view of how a specific vulnerability affects your specific device fleet.
- Step-by-Step Guidance: Exact instructions on how to use Intune to remediate the threat (e.g., which specific setting to toggle or which update to push).
- Remediation Tracking: You can mark suggestions as “applied,” creating a historical record of how risks were mitigated over time.
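The core of that impact analysis is moving from raw severity to severity-in-your-fleet. A minimal sketch of that idea, assuming invented sample data: rank CVEs by CVSS score weighted by how many of your managed devices are actually exposed, rather than by CVSS alone.

```python
from dataclasses import dataclass

# Illustrative prioritization only: the real agent folds in Defender
# Vulnerability Management signals; here we rank by CVSS weighted by
# fleet exposure. All CVE IDs and counts below are invented samples.
@dataclass
class Cve:
    id: str
    cvss: float           # base severity, 0.0-10.0
    exposed_devices: int  # devices in the fleet still vulnerable

def prioritize(cves: list[Cve]) -> list[Cve]:
    """Highest (severity x exposure) first."""
    return sorted(cves, key=lambda c: c.cvss * c.exposed_devices, reverse=True)

fleet = [
    Cve("CVE-2024-0001", cvss=9.8, exposed_devices=12),
    Cve("CVE-2024-0002", cvss=7.5, exposed_devices=400),
    Cve("CVE-2024-0003", cvss=5.0, exposed_devices=30),
]
for cve in prioritize(fleet):
    print(cve.id, cve.cvss * cve.exposed_devices)
```

Note how the "medium" CVE hitting 400 devices outranks the critical one hitting 12; that reordering is the whole value of fleet-aware prioritization over a flat CVSS-sorted list.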
Critical Warning: The Device Offboarding Agent Sunset
If you are currently using the Device Offboarding Agent to identify stale devices, you need to act now.
Microsoft has announced that the Device Offboarding Agent will be retired on June 1, 2026.
Starting April 30, 2026, new instances of the agent can no longer be set up. If it is already configured, it will keep working until June 1, 2026, and after that it is gone. Desktop engineers should immediately pivot back to standard device lifecycle management and remediation scripts for stale device cleanup. Do not build any new workflows around this agent.
Practical Implementation Workflow
To get these agents running, you cannot simply toggle a switch in Intune. There is a specific dependency chain:
- Licensing: Ensure you have Intune Plan 1 and a Security Copilot license with sufficient Security Compute Units (SCUs).
- Plugin Activation: Navigate to the Security Copilot portal → Sources. Enable the Microsoft Intune, Microsoft Entra, and Microsoft Defender XDR plugins. Without these, the agents have no data to analyze.
- Access: Assign the “Intune Administrator” role in Entra ID or the appropriate Copilot role in the Security Copilot portal.
- Execution: Go to the Intune admin center → Agents. Select the agent (e.g., Policy Configuration) and follow the setup wizard.
Limitations and Caveats
While powerful, these agents aren’t a “set and forget” solution.
First, they are exclusively supported on the Public Cloud. If you are operating in a Government Community Cloud (GCC) or other sovereign cloud, these agents are currently unavailable.
Second, the Policy Configuration Agent can still “hallucinate” mappings. It might map a general security requirement to a setting that is close but not exact. Every AI-generated policy must be validated in a test group before production deployment.
Finally, the cost is tied to SCUs. Heavy use of the Change Review Agent in a high-churn environment can burn through your compute units faster than expected.
Conclusion
The transition from AI chat to AI agents represents the professionalization of AI in endpoint management. We are moving away from “prompt engineering” and toward “governance orchestration.” By automating the risk review of scripts and the translation of compliance documents, desktop engineers can stop acting as human translators and start acting as architects.