Sage Copilot grounded briefly to fix AI misbehavior

'Minor issue' with showing accounting customers 'unrelated business information' required repairs


Sage Group plc has confirmed it temporarily suspended Sage Copilot, the AI assistant for the UK-based business software maker's accounting tools, this month after the bot blurted customer information to other users.

A source familiar with the developer told The Register late last week: "A customer found when they asked [Sage Copilot] to show a list of recent invoices, the AI pulled data from other customer accounts including their own."

"It was reported via their support lines, verified, and the decision was made to pull access to the AI," the source continued.

The biz described the blunder as a "minor issue" to The Register, and denied the machine-learning system had leaked GDPR-sensitive data as some had feared.

"After discovering a minor issue involving a small amount of customers with Sage Copilot in Sage Accounting, we briefly paused Sage Copilot," a company spokesperson said. "The issue showed unrelated business information to a very small number of customers. At no point were any invoices exposed. The fix is fully implemented and Sage Copilot is performing as expected."

Sage's spokesperson indicated Sage Copilot was taken offline for a few hours last Monday while the company investigated the issue and implemented a fix.

Have you experienced an AI system going off the rails? Tell us about it in confidence.

Unveiled in February 2024, Sage Copilot is described by its creators as "a trusted team member, handling administrative and repetitive tasks in ‘real-time’, while recommending ways for customers to create more time and space to focus on growing and scaling their businesses."

The chatbot is intended to automate workflows, catch errors, and generate suggested actions relevant to business accounting.

"Sage Copilot’s accuracy, security and trust have been prioritized every step of the way, combined with expert support, robust encryption, access controls, and compliance with data protection regulations," the biz says on its website.

Sage Copilot is presently available by invitation as an early access product, and is being used by a small number of customers; the company's spokesperson could not provide a specific figure.

AI models make cybersecurity more difficult, according to Microsoft. And they generally come with warnings that their output needs to be verified since they're often wrong. Nonetheless, companies insist on deploying AI services, occasionally to their chagrin.

Apple this week suspended Apple Intelligence's news summarization capability following concerns that the service's AI summaries were inaccurate.

Last year, Air Canada had to pay a traveler hundreds of dollars after its chatbot misinformed the passenger about the airline's bereavement rate discount. And McDonald's ended its Automated Order Taker pilot after customers complained the AI got orders wrong.

In 2023, a GM chatbot used by a Watsonville, California, auto dealership got talked into agreeing to sell a 2024 Chevy Tahoe for $1 through some clever prompt engineering. Two years earlier, Zillow took a $304 million inventory write-down and cut 2,000 jobs after the real estate firm's bet on AI-based property valuations sank its home-buying business.

AI models have invented software package names, cited non-existent court cases, and accused people of crimes they haven't committed.

Meanwhile, several large makers of AI models including Anthropic, Meta, Microsoft, OpenAI, and Nvidia face copyright lawsuits over the data used to make these models. ®
