March 27, 2026 Mid-Level (3-5 years) Deep Dive

Stop manually chasing missing patches across hundreds of endpoints. Here is how AI-powered PowerShell pipelines can automate your Windows patch compliance workflow.

How to Automate Windows Patch Compliance with AI (and Stop Chasing Missing KBs)

You know the drill. Patch Tuesday hits, and for the next week you are staring at WSUS reports, cross-referencing spreadsheets, and chasing down the 47 machines that did not pick up KB5039212 for reasons nobody can explain. It is tedious, error-prone, and it eats hours that could go toward actual engineering work.

What if you could hand most of that to an AI-powered pipeline? Not a magic black box. A set of scripts and prompts that pull compliance data, analyze gaps, and draft remediation plans while you focus on the exceptions that actually need a human brain. That is what this article walks through.

Why Traditional Patch Compliance Is Broken

Most enterprise patch workflows follow the same pattern: deploy patches through WSUS, Intune, or SCCM, wait a few days, pull a compliance report, then spend hours figuring out why certain machines are non-compliant.

The problem is not the tooling. WSUS and Intune both generate decent reports. The problem is what happens after the report lands on your desk.

You are manually scanning hundreds of rows looking for patterns. Is it a specific hardware model that keeps failing? A particular OU that is misconfigured? A VPN split-tunnel issue preventing machines from reaching the update server? Finding those patterns in raw compliance data takes experience and time. AI is good at both.

The AI-Powered Patch Compliance Pipeline

Here is the setup I have been running for the past few months. It is not complicated, but it changed how I spend my time after Patch Tuesday.

The pipeline has three stages: collect, analyze, act.

Stage 1: Collect Compliance Data

First, pull your patch status into a format an LLM can work with. If you are using Intune, the Microsoft Graph API makes this straightforward:

# Pull device compliance data from Intune via Graph API
# (assumes an existing Connect-MgGraph session with DeviceManagementManagedDevices.Read.All)
$uri = "https://graph.microsoft.com/v1.0/deviceManagement/managedDevices"
$devices = @()
do {
    $page = Invoke-MgGraphRequest -Uri $uri -Method GET
    $devices += $page.value
    $uri = $page.'@odata.nextLink'   # follow paging until every device is collected
} while ($uri)

$complianceReport = foreach ($device in $devices) {
    [PSCustomObject]@{
        Id              = $device.id   # needed later to target per-device remediation calls
        DeviceName      = $device.deviceName
        OS              = $device.operatingSystem
        OSVersion       = $device.osVersion
        ComplianceState = $device.complianceState
        LastSync        = $device.lastSyncDateTime
        UserPrincipal   = $device.userPrincipalName
        Model           = $device.model
        Manufacturer    = $device.manufacturer
    }
}

$complianceReport | Export-Csv -Path ".\PatchCompliance.csv" -NoTypeInformation

For WSUS environments, you can pull similar data from the WSUS API or directly from the SQL database. The point is: get it into CSV or JSON. Something structured that you can feed into the next stage.
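For the WSUS path, here is a minimal sketch using the UpdateServices module that ships with the WSUS console. The function name is mine, and the exact properties exposed on the computer-target objects can vary by WSUS version, so treat this as a starting point rather than a drop-in:

```powershell
# Sketch: export WSUS client inventory to CSV via the UpdateServices module.
# Run on the WSUS server itself; Get-WsusComputer connects to the local
# server by default.
function Export-WsusCompliance {
    param([string]$Path = ".\PatchCompliance.csv")

    $report = foreach ($computer in Get-WsusComputer) {
        [PSCustomObject]@{
            DeviceName   = $computer.FullDomainName
            OS           = $computer.OSDescription
            LastSync     = $computer.LastSyncTime
            Model        = $computer.Model
            Manufacturer = $computer.Make
        }
    }
    $report | Export-Csv -Path $Path -NoTypeInformation
}

# Export-WsusCompliance -Path ".\PatchCompliance.csv"
```

The column names intentionally mirror the Intune export above, so the analysis stage works identically regardless of which source produced the CSV.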

Stage 2: AI-Powered Analysis

This is where things get interesting. Instead of eyeballing a spreadsheet, you feed the compliance data to an LLM and ask it to find what matters.

# Read compliance data and send to Claude API for analysis
$complianceData = Get-Content ".\PatchCompliance.csv" -Raw
$prompt = @"
You are an endpoint compliance analyst. Here is a CSV of device patch compliance data from our Intune environment.

Analyze this data and provide:
1. How many devices are non-compliant and the percentage of the total fleet
2. Patterns in non-compliance (specific models, OS versions, user groups, or last-sync gaps)
3. Devices that have not synced in over 7 days (these are likely offline or decommissioned)
4. A prioritized remediation plan starting with the highest-impact fixes

Be specific. Use device names and model numbers from the data.

$complianceData
"@

$body = @{
    model = "claude-sonnet-4-6"
    max_tokens = 4096
    messages = @(@{ role = "user"; content = $prompt })
} | ConvertTo-Json -Depth 5

$response = Invoke-RestMethod -Uri "https://api.anthropic.com/v1/messages" `
    -Method POST `
    -Headers @{
        "x-api-key" = $env:ANTHROPIC_API_KEY
        "anthropic-version" = "2023-06-01"
        "Content-Type" = "application/json"
    } `
    -Body $body

$analysis = $response.content[0].text
$analysis | Out-File ".\ComplianceAnalysis.txt"
Write-Host $analysis

What comes back is a structured breakdown of your compliance gaps, sorted by impact. When I first ran this on a fleet of 1,200 machines, Claude identified that 80% of our non-compliant devices were a single Dell Latitude model running a specific BIOS version that was causing Windows Update to hang. That would have taken me hours to spot manually. The LLM found it in seconds because it could process every row at once.

Stage 3: Automated Remediation

Once you know what is wrong, you can script the fixes. AI helps here too. Feed the analysis back into your LLM and ask it to generate targeted remediation scripts.

For example, if the analysis shows a group of machines have not synced in 14+ days:

# Force sync on stale devices identified by the AI analysis
# (relies on the Intune device Id being present in the exported CSV)
$staleDevices = Import-Csv ".\PatchCompliance.csv" |
    Where-Object {
        $_.ComplianceState -eq "nonCompliant" -and
        $_.LastSync -and
        ([datetime]$_.LastSync -lt (Get-Date).AddDays(-14))
    }

foreach ($device in $staleDevices) {
    $syncUri = "https://graph.microsoft.com/v1.0/deviceManagement/managedDevices/$($device.Id)/syncDevice"
    try {
        Invoke-MgGraphRequest -Uri $syncUri -Method POST
        Write-Host "Sync triggered for $($device.DeviceName)" -ForegroundColor Green
    }
    catch {
        Write-Warning "Failed to sync $($device.DeviceName): $_"
    }
}

For machines failing due to specific error codes, you can have the LLM generate a remediation script tailored to each error pattern. This turns a generic non-compliant status into an actionable, device-specific fix.
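One way to set that up is to cluster devices by error code first, so each cluster becomes a single targeted prompt instead of one prompt per machine. A sketch with inline sample data; the `LastErrorCode` column is an assumption here, since the Stage 1 export above does not include it, so you would add it from whatever reporting source surfaces update error codes in your environment:

```powershell
# Sketch: cluster non-compliant devices by Windows Update error code so each
# cluster can get its own targeted remediation prompt. LastErrorCode is an
# assumed column -- the basic Stage 1 export doesn't carry it.
function Group-ByErrorCode {
    param([object[]]$Devices)
    $Devices |
        Where-Object { $_.ComplianceState -eq 'nonCompliant' -and $_.LastErrorCode } |
        Group-Object LastErrorCode |
        Sort-Object Count -Descending
}

$errorData = @"
DeviceName,ComplianceState,LastErrorCode
PC-101,nonCompliant,0x80070002
PC-102,nonCompliant,0x80070002
PC-103,nonCompliant,0x8024402C
"@ | ConvertFrom-Csv

foreach ($cluster in Group-ByErrorCode $errorData) {
    # Each cluster becomes one LLM prompt: "devices X, Y fail with code N; draft a fix"
    $names = $cluster.Group.DeviceName -join ', '
    Write-Host "Error $($cluster.Name) on $($cluster.Count) device(s): $names"
}
```

Sorting by cluster size first means the generated fixes start with the change that repairs the most machines.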

What AI Catches That You Might Miss

I have been running this pipeline monthly, and there are a few things the LLM consistently catches that I used to overlook.

Time-based patterns. Devices that go non-compliant every other month usually have a scheduled task or group policy conflict. The LLM notices the cycle because it looks at sync history across months, not just the current snapshot.

Correlated failures. When 15 devices in the same subnet all fail at the same time, that is probably a network issue, not a patching issue. The LLM groups devices by subnet, physical location, and OU automatically when you include that data in the CSV.

Ghost machines. Devices that show up in compliance reports but have not synced in 60+ days are probably decommissioned or lost. The LLM flags these so you can clean up your inventory instead of wasting cycles trying to patch a machine that no longer exists.
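The correlated-failures check is cheap enough to pre-compute locally before the LLM ever sees the data. A sketch, assuming an `IPAddress` column in your export (the basic Stage 1 listing does not include one, so add it or join it in from your inventory source):

```powershell
# Sketch: flag subnets with clustered failures -- 3+ non-compliant devices
# in one /24 usually points at network, not patching. Assumes an IPAddress
# column; the /24 is derived by dropping the last octet.
function Group-BySubnet {
    param([object[]]$Devices)
    $Devices |
        Where-Object { $_.ComplianceState -eq 'nonCompliant' -and $_.IPAddress } |
        Group-Object { ($_.IPAddress -split '\.')[0..2] -join '.' } |
        Where-Object { $_.Count -ge 3 }
}

$netData = @"
DeviceName,ComplianceState,IPAddress
PC-01,nonCompliant,10.1.5.20
PC-02,nonCompliant,10.1.5.21
PC-03,nonCompliant,10.1.5.22
PC-04,nonCompliant,10.9.8.7
"@ | ConvertFrom-Csv

foreach ($cluster in Group-BySubnet $netData) {
    Write-Host "Subnet $($cluster.Name).0/24: $($cluster.Count) non-compliant devices"
}
```

Including this pre-computed grouping in the prompt also saves the LLM from re-deriving subnet membership row by row.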

Setting Up the Recurring Pipeline

The real payoff comes when you automate the whole thing. I run this as a scheduled PowerShell task that fires every Wednesday after Patch Tuesday:

# Scheduled wrapper - runs weekly, outputs report + remediation plan
$timestamp = Get-Date -Format "yyyy-MM-dd"
$outputDir = "C:\PatchReports\$timestamp"
New-Item -ItemType Directory -Path $outputDir -Force | Out-Null

# Step 1: Pull compliance data
# (Graph API call from Stage 1, output to $outputDir)

# Step 2: Run AI analysis
# (Claude API call from Stage 2, output to $outputDir)

# Step 3: Email the report
# (Send-MailMessage is deprecated but still functional; swap in Graph
#  sendMail or your relay of choice if policy requires)
$emailParams = @{
    From       = "patchbot@yourdomain.com"
    To         = "endpoint-team@yourdomain.com"
    Subject    = "Patch Compliance Report - $timestamp"
    Body       = Get-Content "$outputDir\ComplianceAnalysis.txt" -Raw
    SmtpServer = "smtp.yourdomain.com"
}
Send-MailMessage @emailParams

The team gets a plain-English compliance report every Wednesday morning. No manual work. The report includes the analysis, the list of problem devices, and a recommended remediation sequence. If something needs human attention, it is called out at the top.
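Since the trigger is really "the Wednesday after Patch Tuesday" rather than just "every Wednesday", it can help to compute that date instead of hard-coding it. A small helper (pure date math; the function name is mine):

```powershell
# Patch Tuesday is the second Tuesday of the month; the report run is the
# day after. Pure date math, so it works on any host.
function Get-PatchReportDate {
    param([int]$Year, [int]$Month)
    $d = Get-Date -Year $Year -Month $Month -Day 1
    while ($d.DayOfWeek -ne 'Tuesday') { $d = $d.AddDays(1) }   # first Tuesday
    $patchTuesday = $d.AddDays(7)                               # second Tuesday
    return $patchTuesday.AddDays(1).Date                        # Wednesday after
}
```

You can use this either to register the scheduled task for the exact date, or keep the simple weekly trigger and have the wrapper exit early on weeks that are not the one after Patch Tuesday.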

Cost and Performance Reality Check

Running compliance data through an LLM API is not free, but it is cheap. A typical compliance CSV for 1,000 devices runs about 15,000 tokens. At Claude's current pricing, that is a few cents per analysis run. Even if you run it weekly, you are spending less per month than a single hour of an engineer's time.
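If you want a sanity check on token count before sending, the common rule of thumb of roughly four characters per token is easy to script. This is a heuristic, not the real tokenizer, so treat the result as an order-of-magnitude estimate:

```powershell
# Rough token estimate: ~4 characters per token is a common heuristic for
# English text and CSV data; real tokenizers will differ somewhat.
function Get-TokenEstimate {
    param([string]$Text)
    [math]::Ceiling($Text.Length / 4)
}

# e.g. Get-TokenEstimate (Get-Content ".\PatchCompliance.csv" -Raw)
```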

The bigger question is accuracy. In my testing, the LLM's pattern detection is solid for identifying clusters of non-compliant devices and spotting anomalies. It is not infallible with error code interpretation, so I always spot-check the remediation suggestions before running them against production. Think of it as a very fast first-pass analyst, not a replacement for your judgment.

Where This Is Heading

Right now I am building a version that feeds compliance data into a local LLM running on our management server, so nothing leaves the network. If your security team pushes back on sending device data to an external API (and they might), that is a reasonable path. Local models like Llama or Mistral are good enough for this kind of structured data analysis.
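For that local-model path, the Stage 2 analysis call swaps out cleanly. Here is a sketch against Ollama's REST API; the port is Ollama's default and the model name is illustrative, so match whatever your management server actually runs:

```powershell
# Sketch: same analysis step, pointed at a local Ollama endpoint so the
# compliance data never leaves the network. Model name and port are
# illustrative defaults.
function Invoke-LocalAnalysis {
    param([string]$Prompt, [string]$Model = "llama3")
    $body = @{ model = $Model; prompt = $Prompt; stream = $false } | ConvertTo-Json
    $result = Invoke-RestMethod -Uri "http://localhost:11434/api/generate" `
        -Method POST -ContentType "application/json" -Body $body
    return $result.response   # Ollama returns the completion in .response
}
```

The prompt from Stage 2 works unchanged; only the transport and the model behind it differ.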

I am also experimenting with having the LLM compare this month's compliance report against last month's, so it can tell me whether we are trending better or worse overall and flag any new problem patterns that did not exist before.
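The raw month-over-month diff does not need the LLM at all; computing it locally and handing both summaries plus the delta to the model keeps the prompt small and the trend question explicit. A sketch, with function and property names of my own choosing that mirror the Stage 1 export:

```powershell
# Sketch: compute the month-over-month compliance delta locally, then let
# the LLM reason about trends instead of re-deriving them from raw rows.
function Compare-ComplianceMonths {
    param([object[]]$Previous, [object[]]$Current)
    $prevBad = @(($Previous | Where-Object { $_.ComplianceState -eq 'nonCompliant' }).DeviceName)
    $currBad = @(($Current  | Where-Object { $_.ComplianceState -eq 'nonCompliant' }).DeviceName)
    [PSCustomObject]@{
        NewlyNonCompliant = @($currBad | Where-Object { $_ -notin $prevBad })
        Remediated        = @($prevBad | Where-Object { $_ -notin $currBad })
        StillBroken       = @($currBad | Where-Object { $_ -in $prevBad })
    }
}

$lastMonth = @"
DeviceName,ComplianceState
PC-A,nonCompliant
PC-B,nonCompliant
PC-C,compliant
"@ | ConvertFrom-Csv

$thisMonth = @"
DeviceName,ComplianceState
PC-A,compliant
PC-B,nonCompliant
PC-C,nonCompliant
"@ | ConvertFrom-Csv

$delta = Compare-ComplianceMonths -Previous $lastMonth -Current $thisMonth
```

The `StillBroken` bucket is the one worth watching: devices that survive two remediation cycles usually need a human.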

If you are still manually reviewing patch compliance spreadsheets, give this pipeline a shot. Start with Stage 2 alone: export your compliance data and paste it into Claude or ChatGPT. See what it finds. I would bet it catches something you have been missing.

For a deeper look at AI-assisted PowerShell scripting for endpoint management or how to set up automated compliance auditing with LLMs, check out those guides on the blog.

Frequently Asked Questions

Is it safe to send device compliance data to an external AI API?

It depends on your organization's data classification policies. Compliance CSVs typically contain device names, OS versions, and user principal names. That is low-sensitivity in most environments, but check with your security team. If it is a concern, strip UPNs before sending or run a local model instead.
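Stripping the UPN column is a one-liner against the Stage 1 export; the output file name here is just an example:

```powershell
# Sketch: scrub the user column before the CSV leaves the network
function Remove-UserColumn {
    param([string]$InPath, [string]$OutPath)
    Import-Csv $InPath |
        Select-Object -Property * -ExcludeProperty UserPrincipal |
        Export-Csv -Path $OutPath -NoTypeInformation
}

# Remove-UserColumn -InPath ".\PatchCompliance.csv" -OutPath ".\PatchCompliance-scrubbed.csv"
```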

Does this work with WSUS or just Intune?

Both. The collection step differs (WSUS uses its own API or SQL queries instead of Microsoft Graph), but the analysis and remediation stages are identical. The LLM does not care where the CSV came from.

How accurate is the AI analysis compared to manual review?

For pattern detection and clustering, it is faster and more thorough than a human scanning rows. It processes every row at once, so it catches correlations you would miss when scrolling. For interpreting specific Windows Update error codes, it is about 85-90% accurate in my experience. Always verify remediation scripts before running them.

Can I use this with Configuration Manager (SCCM) instead of Intune?

Yes. SCCM compliance reporting can export to CSV or you can query the SCCM SQL database directly. Swap out the Graph API calls in Stage 1 with your SCCM data source and the rest of the pipeline works the same.

What LLM works best for this kind of analysis?

Claude and GPT-4 both handle structured data analysis well. For compliance data specifically, I have had the best results with Claude because it handles large CSV inputs without losing context on the details. For local deployment, Llama 3 70B is a solid option.


Want more on building AI-powered IT workflows? Follow me on LinkedIn or subscribe to the newsletter for weekly breakdowns of how AI is changing endpoint engineering.
