Original title: Pull request overview
The change set in the PR raised the default of git.addAICoAuthor from off to all, so VS Code's Git integration could insert Co-authored-by trailers on commits when AI-generated contributions were detected. A linked review noted an inconsistency between the schema and the runtime: the code path still read the setting with a fallback default of off, via config.get('addAICoAuthor', 'off'), which can produce unexpected behavior when contributed defaults are not loaded. The PR itself was a minimal package-metadata change, summarized as one changed file, extensions/git/package.json, plus screenshot updates. Community responses argued the rollout was hidden, intrusive, and misleading because commit metadata was altered without explicit user consent or clear communication. The approver later acknowledged insufficient validation and stated that the setting should not apply when AI features are disabled and should not mark non-AI edits; they indicated the default would be reverted to off in VS Code 1.119 and that attribution handling needed better test coverage. A later follow-up note says the default was subsequently adjusted to chatAndAgent, showing continued policy iteration. The thread also reflects broader frustration that AI-related metadata can distort attribution, trust, and software development practices, especially for legal and procedural records.
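The schema-versus-runtime mismatch described above can be sketched with a minimal stand-in for a configuration API. This is illustrative only, not VS Code's actual implementation: the Config class and the way contributed defaults are modeled here are assumptions, but the setting key and the 'off'/'all' values come from the PR discussion.

```typescript
type CoAuthorMode = 'off' | 'all';

// Minimal stand-in for a configuration store with a get(key, fallback) API.
// In the real editor, the contributed default from package.json would
// normally populate the store; the hard-coded fallback only matters when
// contributed defaults have not been loaded yet.
class Config {
  constructor(private values: Map<string, unknown>) {}
  get<T>(key: string, fallback: T): T {
    return this.values.has(key) ? (this.values.get(key) as T) : fallback;
  }
}

// Contributed defaults loaded: the schema default 'all' is visible.
const loaded = new Config(new Map([['addAICoAuthor', 'all']]));
// Contributed defaults not loaded: the hard-coded fallback 'off' wins,
// so the two code paths silently disagree about the effective default.
const notLoaded = new Config(new Map());

console.log(loaded.get<CoAuthorMode>('addAICoAuthor', 'off'));    // 'all'
console.log(notLoaded.get<CoAuthorMode>('addAICoAuthor', 'off')); // 'off'
```

The fix implied by the review is to keep a single source of truth for the default, so that raising it in the schema cannot diverge from what the code path falls back to.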
Commenters overwhelmingly reported surprise and frustration after commits began carrying Copilot co-author lines even when they used no AI workflow or had chat.disableAICoFeatures enabled, and several called the behavior deceptive or legally risky. A core technical complaint was that commit logs are provenance records, so silent attribution may misrepresent authorship and could affect licensing or accountability expectations. Many participants saw the change as a product or growth-hacking tactic aimed at AI adoption metrics, while also flagging quality issues such as poor PR hygiene, weak test coverage, and poor change communication. Some argued attribution is acceptable only when AI actually modified the code, not when the editor is merely used for committing. Several users shared mitigations, including disabling the setting, adding commit hooks, or moving to alternatives such as Zed, VSCodium, Emacs, or Neovim-based workflows. The maintainer's follow-up apology and plan to revert and harden the behavior was seen as a constructive middle ground, along with a request for direct issues and feedback. Overall, the discussion mixes ethical, technical, and strategic concerns rather than isolated UX complaints, and it reflects broader skepticism about AI feature defaults in developer tools.
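The commit-hook mitigation mentioned above can be sketched as a message filter. A Git commit-msg hook receives the path of the commit message file as its first argument; a small wrapper script would read that file, pass it through a filter like the one below, and write the result back before Git records the commit. This is a hypothetical sketch, not code from the thread; the trailer pattern is an assumption.

```typescript
// Drop any Co-authored-by trailer that names Copilot, keeping all other
// lines (including co-author trailers for human collaborators) intact.
export function stripCopilotTrailers(message: string): string {
  return message
    .split('\n')
    .filter(line => !/^co-authored-by:.*copilot/i.test(line))
    .join('\n');
}

// Example: the Copilot trailer is removed, the human trailer is kept.
const cleaned = stripCopilotTrailers(
  [
    'Fix race in renderer',
    '',
    'Co-authored-by: Copilot <copilot@users.noreply.github.com>',
    'Co-authored-by: Ada Lovelace <ada@example.com>',
  ].join('\n')
);
console.log(cleaned);
```

Because the hook runs on the user's machine regardless of editor settings, it guards against any client re-enabling the trailer, at the cost of also stripping attribution the user might have wanted to keep.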