Microsoft unleashes autonomous Copilot AI agents in public preview

They can learn, adapt, and make decisions – but don't worry, they're not coming for your job


Ignite Microsoft has fresh tools out designed to help businesses build software agents powered by foundation models – overenthusiastically referred to as artificial intelligence, or AI.

"Our vision is to empower every employee with a personal assistant via Copilot that allows them to tackle work's biggest pain points like meeting overload, overflowing inbox, designing sales decks in less time and with more business impact," comms chief Frank Shaw told media in a briefing.

Copilot, in case you've missed it, is the term Microsoft has adopted to refer to its AI productivity software. Based on OpenAI's GPT-4 series of large language models, it provides a chatbot interface through which users can enter an input prompt (text, audio, or an image, depending on context) and receive some response.

Microsoft 365 Copilot lets users tell Word to draft or edit text on whatever topic the chatbot can satisfactorily reproduce from its undisclosed training data, ask PowerPoint to create slides, or direct Excel to build a data visualization for a data set – among other things.

The next step for the technology is AI agents – chatbots that perform a series of linked tasks based on instructions. Microsoft teased its autonomous agent capabilities last month at an AI event in the UK, and they have graduated to public preview.

Agents represent a modern take on macros and workflow automation. The hope is that they can parrot human digital output well enough to pass inspection.

"Whether it's sourcing sales leads, handling customer service inquiries, or tracking project deadlines, agents can do a lot of the heavy lifting so people can focus on more strategic tasks," explained Shaw.

What separates agents from models, according to Microsoft Source writer Susanna Ray, is memory, entitlements, and tools. Memory lets agents perform a series of tasks that build on one another. Entitlements ensure the code has permission to access data and take action. And tools refer to the necessary glue code or applications.
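That three-part recipe — memory, entitlements, tools — can be pictured as a simple loop. The sketch below is purely illustrative (the `Agent` and `Tool` names, the permission strings, and the step format are all invented for this example, not part of any Microsoft API): each task result is appended to memory so later steps can build on it, and every tool call is gated on an entitlement check.

```python
from dataclasses import dataclass, field

# Hypothetical sketch only -- not Microsoft's Agent SDK.

@dataclass
class Tool:
    name: str
    required_permission: str   # entitlement needed to invoke this tool
    fn: callable               # the glue code the tool wraps

@dataclass
class Agent:
    permissions: set                             # entitlements granted to the agent
    tools: dict = field(default_factory=dict)    # tools the agent may invoke
    memory: list = field(default_factory=list)   # results later steps build on

    def register(self, tool: Tool):
        self.tools[tool.name] = tool

    def run(self, steps):
        for tool_name, arg in steps:
            tool = self.tools[tool_name]
            # Entitlement check: refuse actions the agent lacks permission for.
            if tool.required_permission not in self.permissions:
                self.memory.append((tool_name, "denied"))
                continue
            # Memory: prior results are passed in so tasks can chain.
            result = tool.fn(arg, self.memory)
            self.memory.append((tool_name, result))
        return self.memory

# Usage: an agent entitled to read the CRM but not to send mail.
agent = Agent(permissions={"crm.read"})
agent.register(Tool("find_leads", "crm.read", lambda q, m: f"leads for {q}"))
agent.register(Tool("send_mail", "mail.send", lambda q, m: "sent"))
out = agent.run([("find_leads", "gaming"), ("send_mail", "gaming")])
# find_leads succeeds; send_mail is denied for lack of the mail.send entitlement
```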

To realize its vision of software-directed action, Microsoft is introducing Microsoft 365 Copilot agents with predefined roles.

And if these templates fail to satisfy, Microsoft Copilot Studio provides a way to customize AI agent behavior.

"Agents built in Copilot Studio can operate independently, dynamically planning and learning from processes, adapting to changing conditions, and making decisions without the need for constant human intervention," explained Charles Lamanna, corporate VP of business and industry for Copilot, in a blog post. "These autonomous agents can be triggered by data changes, events, and other background tasks – and not just through chat!"

Copilot Studio includes templates for common agent scenarios that can serve as the basis for a customized version. It is also gaining support for voice-enabled agents, image uploading (for analysis by GPT-4o), and knowledge tuning – the ability to add new sources of knowledge to help agents respond to questions.

Developers interested in creating agents can use the Agent SDK to access services from Azure AI, Semantic Kernel, and Copilot Studio. There's also an Azure AI Foundry integration that connects Copilot Studio to services like Azure AI Search and the Azure AI model catalog.

Separately, there's a public preview of agent builder in Power Apps – Microsoft's low-code service for business apps.

Sarah Bird, chief product officer for Responsible AI, noted in a blog post that extra safety considerations arise with autonomous agents – something said of robot agents too – and that Microsoft is focused on ensuring that they behave. She argued that the standard Responsible AI practices – including the Copilot Control System, intended to allow IT departments to manage data access for Copilot and AI agents – can mitigate risks.

Microsoft's post on the subject observes that many agents include a human-in-the-loop check to make sure autonomous decision-making doesn't go off the rails. Nothing demonstrates confidence in automation more than a manual approval bottleneck.
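In practice, such a check is usually just a gate in front of high-risk actions: anything below a risk threshold runs autonomously, anything above it waits for a human to sign off. A minimal sketch, with the function names, risk scores, and threshold all invented for illustration:

```python
# Hypothetical human-in-the-loop gate -- not from any Microsoft product.
RISK_THRESHOLD = 0.5

def execute(action, risk, approver):
    """Run low-risk actions directly; route risky ones to a human approver."""
    if risk < RISK_THRESHOLD:
        return f"auto-executed: {action}"
    if approver(action):                  # human reviews the proposed action
        return f"approved and executed: {action}"
    return f"blocked by reviewer: {action}"

# A reviewer who only signs off on one specific refund.
reviewer = lambda action: action == "refund $50 to customer"

low = execute("send status email", 0.1, reviewer)        # runs autonomously
ok = execute("refund $50 to customer", 0.9, reviewer)    # human approves
bad = execute("refund $5000 to customer", 0.9, reviewer) # human blocks
```

The design choice here is that the gate sits outside the model: the agent proposes, but the executing code decides whether a human must be consulted, which is exactly the bottleneck the article is winking at.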

Those looking to get a sense of AI agents in the field may wish to consider the Hiring Assistant launched by Microsoft's LinkedIn subsidiary. The bot was deployed to lighten the burden on human resources professionals who have a hard time dealing with the deluge of job applications and other clerical chores – a situation exacerbated by automation and AI. ®
