What Happened With Copilot and Pull Request Ads

Zach Manson, a software developer based in Melbourne, published a blog post on March 30 detailing an unsettling discovery. A team member had asked GitHub Copilot to fix a simple typo in one of his pull requests. Copilot completed the correction, but also quietly edited the PR description to include unsolicited promotional content for itself and Raycast, a macOS productivity application.

Manson's reaction was blunt: "This is horrific. I knew this kind of bullshit would happen eventually, but I didn't expect it so soon." He cited Cory Doctorow's theory of platform enshittification—the idea that platforms inevitably degrade over time as they prioritize commercial interests over user experience.

The promotional text was embedded as hidden HTML comments, marked with tags like "START COPILOT CODING AGENT TIPS." These invisible markers contained messages encouraging developers to integrate Copilot with various tools including Jira, Azure Boards, Slack, and Teams. The hidden nature of the comments meant the promotional content wasn't immediately visible to users reviewing their pull requests.

Scale of the Promotional Text Injections

The scope of the issue extends far beyond a single pull request. One commenter on Hacker News conducted a code search and identified approximately 1.5 million instances of similar promotional text across GitHub repositories. Neowin's investigation found over 10,000 pull requests containing the injected text, with examples also appearing on GitLab.

The messages follow a consistent pattern. One example reads: "Connect Copilot coding agent with Jira, Azure Boards or Linear to delegate work to Copilot in one click without leaving your project management tool." Another variation promotes the Raycast integration: "Quickly spin up Copilot coding agent tasks from anywhere on your macOS or Windows machine with Raycast."

While Raycast is mentioned by name in the promotional content, evidence suggests Microsoft and GitHub are responsible for the injections rather than Raycast itself. The hidden HTML comments and the "COPILOT CODING AGENT TIPS" labeling point to GitHub's AI coding agent as the source.

The Raycast Partnership Connection

Raycast has an official integration with GitHub Copilot, and the relationship between the two companies appears close. GitHub published a changelog entry in February announcing the ability to assign issues to Copilot's coding agent directly from Raycast. A subsequent March update detailed live monitoring of Copilot agent logs within the Raycast application.

This partnership has led developers to question whether the injected promotional text constitutes a form of paid placement. One Hacker News commenter noted: "GitHub's docs and blog make use of and feature Raycast, and I'm willing to bet that's the result of a partnership."

The Raycast Copilot extension allows users to delegate tasks to GitHub Copilot coding agent from the Raycast launcher, including task creation and status tracking. The extension supports both macOS and Windows, which explains why generated tips reference cross-platform usage.

GitHub Privacy Policy Changes Coincide With Controversy

The promotional text discovery coincides with a separate but related policy update from GitHub. Starting April 24, 2026, GitHub will begin using Copilot interactions—including inputs, outputs, and code snippets—to train and improve its AI models. Users who wish to prevent their data from being used for training must explicitly opt out of the feature.

The updated privacy terms were surfaced in a Reddit post, sparking additional concern among developers already questioning GitHub's approach to user trust and data handling. The combination of promotional content injection and automatic data collection for AI training has intensified scrutiny of GitHub's commercial priorities.

Developer Community Response and Trust Concerns

The reaction from the developer community has been swift and largely negative. The Hacker News thread drew hundreds of comments, with users characterizing the behavior as "incontrovertibly adware." Some developers began discussing migration to alternative platforms like Codeberg, a privacy-focused Git hosting service.

Part of what makes the situation particularly uncomfortable is the irony at its core. GitHub's own documentation acknowledges that hidden HTML comments in pull requests represent a known prompt injection vector—a security concern the company explicitly addresses in its Copilot coding agent documentation. The same mechanism that GitHub warns developers about appears to have been used to deliver promotional content.

GitHub has not publicly confirmed whether the injections were intentional advertising, and Microsoft has not issued an official response addressing the controversy. The silence has left developers speculating about whether the promotional text represents a deliberate monetization strategy, an unintended side effect of Copilot's autonomous behavior, or something in between.

Security Implications of Hidden Text in Pull Requests

The use of hidden HTML comments raises legitimate security concerns beyond the advertising question. GitHub's official documentation for Copilot coding agent explicitly states that the company filters hidden characters before passing user input to the agent. Text entered as an HTML comment in an issue or pull request comment is not supposed to be passed to Copilot coding agent—a safeguard designed to prevent prompt injection attacks.

If promotional content is being delivered through a similar concealed mechanism, the optics are problematic regardless of intent. The fact that GitHub recognizes hidden text as a security vector while simultaneously appearing to use that same vector for promotional purposes creates an uncomfortable contradiction.

Pull requests occupy a trusted position in software development workflows. They serve as formal records of proposed changes, discussion threads for code review, and documentation of decision-making processes. When an autonomous AI agent begins modifying these records without explicit permission—particularly to inject commercial messaging—it undermines the trust that makes the system work.

The Broader Context of AI Commercialization

This incident arrives at a sensitive moment for the AI coding assistant market. Copilot and similar tools have rapidly evolved from simple code completion features into autonomous agents capable of creating tasks, modifying files, opening pull requests, and updating descriptions. With that evolution comes increased responsibility and heightened scrutiny.

The core question many developers are now asking: when an AI coding agent rewrites a pull request description to promote a partner integration, is that a helpful recommendation, a commercial placement, or advertising slipping into professional workflows? The answer matters because pull requests are not casual surfaces—they are among the most trusted, high-signal components of modern software development.

Once that trust is compromised, even slightly, the backlash can be immediate and severe. Developers who rely on GitHub for critical infrastructure have limited patience for what they perceive as commercial overreach in tools they depend on for their daily work.

What Comes Next for Copilot and GitHub

GitHub Copilot coding agent remains a genuinely useful tool for many developers, and integrations like Raycast demonstrate real demand for seamless workflows between tools. The product direction—enabling developers to move between their IDE, project management tools, and code repositories without friction—is sound.

But the execution matters. Promotional content injected without consent into professional documentation undermines the credibility that makes the product valuable. As AI coding assistants become more autonomous, the companies building them will need to be more careful about where they draw lines between helpful suggestions and unwanted advertising.

The incident also highlights a broader tension in the AI industry: the gap between what's technically possible and what users will accept. Just because an AI agent can modify a pull request description doesn't mean it should—and certainly not to insert promotional content.