Microsoft’s Copilot rollout is facing sharper criticism

Mozilla has openly criticized Microsoft’s handling of Copilot, arguing that the company pushed AI into its products too aggressively and without proper user consent. The criticism lands at a moment when Microsoft is already under pressure from complaints about how widely Copilot has been inserted across Windows apps.

The core complaint is simple: users were not clearly asked whether they wanted these AI features in the first place. Mozilla’s Linda Griffin said rolling back these forced AI integrations is the right move, but also described it as only the latest example of Microsoft going too far without user consent.

That matters because the issue here is not just AI itself. It’s the way AI was added. The criticism centers on choice, control, and whether people should have to accept new AI behavior in tools they were already using.

Why Microsoft’s Copilot expansion triggered backlash

Copilot was pushed into multiple apps

The criticism comes after Microsoft spent a long stretch embedding Copilot into a growing list of apps. The pattern described is one many Windows users would recognize: Copilot appearing in Notepad, in Widgets, and in the Snipping Tool.

That broad push created the sense that AI was being added everywhere whether users asked for it or not. And once that starts happening across multiple everyday tools, frustration builds fast.

User complaints appear to have forced a response

Repeated backlash from users appears to have already had an effect: Microsoft has decided to scale back AI features in a selection of its own apps.

That shift is important because it suggests the complaints were strong enough to force a change in direction. Mozilla’s view is that the rollback is overdue, not generous. In that framing, Microsoft is not leading with user choice now so much as responding after pushing too far.

Mozilla’s criticism is unusually direct. Griffin described Microsoft’s Copilot expansion as forceful and said it happened with no prompt and no consent. The point is not subtle: users were not asked whether they wanted their apps outfitted with AI features.

She also questioned Microsoft’s motives. When Microsoft says it now wants to be intentional about Copilot, Griffin argues that this amounts to admitting it made repeated choices that served the business over the customer.

That is a strong accusation, and it reframes the whole Copilot debate. Instead of asking whether AI tools are useful in theory, the argument becomes whether Microsoft respected the user while deploying them.

Mozilla calls Microsoft’s design tactics deceptive

A broader pattern beyond Copilot

Mozilla’s criticism does not stop with Copilot. Griffin argued that placing AI inside Microsoft’s apps fits into a broader pattern of deceptive design.

That claim is backed, in Mozilla’s telling, by research it commissioned. The research says Microsoft uses design and distribution tactics to override user choice. Several examples are highlighted in that argument:

  • The Windows search bar opening Edge instead of the user’s chosen browser
  • The lack of a device migration system in Windows
  • A convoluted process for changing the default browser

Taken together, these examples support Mozilla’s wider case that Copilot is not an isolated issue. It is presented as part of a familiar approach in which defaults, friction, and product design are used to steer people where Microsoft wants them to go.

The real issue is user choice

Here’s what this comes down to: Mozilla is saying that software companies should not quietly decide that AI belongs in your workflow. If a feature changes how you use your browser or apps, the decision should be yours.

That’s the line Mozilla keeps drawing. Not whether AI can exist. Whether users stay in control.

Mozilla’s AI approach is built around a kill switch

Firefox offers a single control to disable AI

Mozilla draws a sharp contrast with its own approach. Firefox’s built-in browser AI can be turned off with a single kill switch, a control that was added after vocal feedback from users.

That detail matters because it shows Mozilla presenting opt-out control not as an afterthought, but as part of the product design. In Griffin’s words, users should decide whether AI is part of their browsing experience at all. Not Big Tech. Not Mozilla. You.

That line captures Mozilla’s whole position. AI should be optional. And the option should be obvious.

Preferences should not reset after updates

Mozilla also emphasizes something users often hate: settings that quietly change after an upgrade. Griffin said user preferences persist across browser updates, meaning AI tools do not silently switch themselves back on after a major release.

No reinstalling. No opting out all over again after every update.

That’s a small detail on paper, but honestly, it’s one of the most practical points in the whole debate. Control is not real if it disappears the next time the software updates.

Microsoft’s Copilot retreat looks like a reaction, not a reset

Microsoft has faced heavy criticism over the aggressive way Copilot was rolled out across its apps, and the current rollback suggests that pressure has started to bite. The change in direction may reduce some of the anger, but Mozilla’s response makes clear that the larger complaint has not gone away.

From this view, scaling back a few features does not erase the bigger issue. The bigger issue is that users felt AI was being imposed on them. And once that trust is damaged, reversing course becomes harder than simply removing a feature.

The criticism also lands because it speaks to something broader than one company or one product. Users are increasingly sensitive to software changes that feel forced, especially when they affect core tools they rely on every day.

The fight over Copilot is really a fight over defaults. Who gets to decide when AI shows up in the products people use every day? The company building the product, or the person sitting in front of the screen?

Mozilla’s answer is direct: the user should decide. That means clear consent, easy controls, and settings that stay where the user put them. Microsoft, by contrast, is being criticized for pushing AI first and adjusting only after backlash.

And that’s why this debate has real weight. It’s not just about one assistant or one browser maker taking a shot at a rival. It’s about whether AI becomes another layer users must work around, or a tool they can choose on their own terms.