Microsoft expands Copilot bug bounty targets, adds payouts for even moderate messes

Said bugs 'can have significant implications' – glad to hear that from Redmond


Microsoft is concerned enough about security in its consumer Copilot products that it has lifted bug bounty payments for moderate-severity vulnerabilities from nothing to a maximum of $5,000, and expanded the range of flaws it will pay researchers to find and report.

The payouts for less severe vulns were introduced because the software giant thinks "even moderate vulnerabilities can have significant implications for the security and reliability of our Copilot consumer products," explained Microsoft bounty team members Lynn Miyashita and Madeline Eckert earlier this month.

Under the Copilot Bounty Program, researchers who identify and disclose previously unknown vulnerabilities can earn between $250 and $30,000. As is typical in bug bounty programs, higher payouts are reserved for those who report the most serious vulnerabilities, such as code injection or model manipulation.

Microsoft classifies security flaws into four severity levels - Critical, Important, Moderate, and Low - based on the Microsoft Vulnerability Severity Classification for AI Systems and the Microsoft Vulnerability Severity Classification for Online Services.

Redmond also recently expanded the Copilot (AI) Bounty Program to cover 14 types of vulnerability, up from three – an understandable decision given its push to embed generative AI assistants across its product portfolio.

The three old-school vulns are inference manipulation, model manipulation and inferential information disclosure.

The new vuln types Microsoft wants bug hunters to find are deserialization of untrusted data, code injection, authentication issues, SQL or command injection, server-side request forgery, improper access control, cross-site scripting, cross-site request forgery, web security misconfiguration, cross-origin access issues, and improper input validation.

Crucially, Microsoft has also asked AI bug hunters to start probing a broader set of Copilot services.

Redmond launched its first AI bug bounty program in October 2023 for Bing's AI-powered features, then extended it to Copilot in April 2024.

In addition to the revised bug bounty rewards and targets, Microsoft last year announced new training for "aspiring AI professionals" under its Zero Day Quest, which now includes workshops, access to Microsoft AI engineers, and research and development tools.

Microsoft's latest security efforts come as it and almost every other major tech company races to pack generative AI into their products, sometimes without fully addressing – or understanding – the security and privacy risks involved, as was the case with Windows Recall.

Time and again, researchers have found ways to jailbreak the large language models (LLMs) that underpin services like Copilot, raising worries that criminals could use AI to develop weapons or carry out cyberattacks.

It’s also possible to manipulate LLMs by intentionally introducing misleading data into their training datasets. These data poisoning attacks can cause models to generate incorrect or harmful outputs, with potentially serious real-world consequences if the models are used in fields such as healthcare.
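Data poisoning is easy to demonstrate even on a toy scale. The sketch below is purely illustrative – the dataset, labels, and word-counting "model" are invented for this example and bear no relation to how Copilot is trained – but it shows the core idea: a handful of mislabeled records slipped into the training set is enough to flip a classifier's verdict.

```python
from collections import Counter

def train(dataset):
    """Count word/label co-occurrences -- a toy stand-in for model training."""
    counts = {}
    for text, label in dataset:
        for word in text.split():
            counts.setdefault(word, Counter())[label] += 1
    return counts

def classify(model, text):
    """Pick the label with the most supporting word/label counts."""
    tally = Counter()
    for word in text.split():
        tally.update(model.get(word, Counter()))
    return tally.most_common(1)[0][0] if tally else "unknown"

clean = [
    ("aspirin treats headache", "safe"),
    ("aspirin relieves pain", "safe"),
    ("bleach is toxic", "harmful"),
]
model = train(clean)
print(classify(model, "aspirin for headache"))  # -> safe

# An attacker injects a few mislabeled records into the training data...
poison = [("aspirin is toxic", "harmful")] * 5
model = train(clean + poison)
print(classify(model, "aspirin for headache"))  # -> harmful
```

Real attacks are subtler – poisoned samples are crafted to survive data cleaning – but the failure mode is the same: the model faithfully learns whatever the training data says, true or not.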

It's doubtful that the software vendors will slow the introduction of AI into their products. Maybe bigger bug bounties will motivate an army of hunters to find the worst flaws before miscreants exploit these weaknesses. ®
