AI News Digest — March 7, 2026
Highlights
- Anthropic’s Claude AI uncovers over 100 security vulnerabilities in Firefox: Claude found more than 100 bugs in Firefox — including flaws that had survived decades of conventional testing — marking a milestone in AI-assisted vulnerability discovery.
- Trump administration drafts AI contract rules requiring “all lawful use” licensing: New US procurement guidelines would require AI vendors to grant the government irrevocable licenses and would prohibit ideologically biased outputs — a requirement critics note mirrors China’s approach to AI governance.
- Despite Pentagon ban, Google, AWS, and Microsoft stick with Anthropic’s AI models: All three cloud giants are maintaining commercial partnerships with Anthropic, signaling confidence in the company despite its exclusion from Defense Department contracts.
- Claude Code’s $200 subscription may cost Anthropic up to $5,000 in compute per user: An internal Cursor analysis cited by Forbes reveals a 25x compute subsidy, raising questions about long-term pricing once AI coding tools become indispensable.
News
AI Security
- Anthropic’s Claude AI uncovers over 100 security vulnerabilities in Firefox (The Decoder) Claude identified over 100 bugs in Firefox, including vulnerabilities that had persisted through decades of human and automated testing — a concrete demonstration of AI’s growing role as a large-scale vulnerability-discovery tool.
USA
- Trump administration drafts AI contract rules requiring companies to license systems for “all lawful use” (The Decoder) Draft US procurement guidelines would grant the federal government an irrevocable license to use any contracted AI system for any lawful purpose while prohibiting politically skewed outputs — a combination critics say echoes authoritarian AI mandates.
- Despite Pentagon ban, Google, AWS, and Microsoft stick with Anthropic’s AI models (The Decoder) Amid Anthropic’s exclusion from Defense Department contracts, its three primary cloud partners are holding firm on commercial and enterprise partnerships, drawing a clear line between military and civilian AI markets.
- OpenAI and Oracle stop expanding their flagship data center in Texas over power supply delays (The Decoder) The Stargate expansion is paused due to power infrastructure bottlenecks; OpenAI is redirecting investment toward Nvidia’s next-generation Vera Rubin chips at new locations.
- Anthropic’s Claude Code subscription may consume up to $5,000 in compute per month while charging the user just $200 (The Decoder) Cursor’s internal estimates suggest Anthropic is absorbing a massive compute subsidy on Claude Code, raising the prospect of sharp price increases once these tools become entrenched.
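The headline "25x" subsidy is straightforward arithmetic on the two reported figures; a quick sanity check (numbers per user per month, as cited from Cursor's analysis):

```python
# Reported figures: $200/month subscription vs. up to $5,000/month in compute.
subscription_usd = 200
compute_cost_usd = 5_000  # upper-bound estimate attributed to Cursor's analysis

subsidy_ratio = compute_cost_usd / subscription_usd
print(subsidy_ratio)  # 25.0
```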
- Anthropic turns Claude Code into a background worker with local scheduled tasks (The Decoder) Claude Code Desktop can now run recurring autonomous tasks — such as scanning error logs and opening pull requests for fixable bugs — on a schedule without user intervention.
- Anthropic’s new marketplace lets enterprise customers spend their existing AI budget on third-party tools (The Decoder) The Anthropic Marketplace allows corporate customers to purchase third-party Claude-powered applications directly within their existing AI spend, expanding the ecosystem beyond direct API contracts.
- OpenAI offers open-source maintainers six months of free ChatGPT Pro and Codex access (The Decoder) OpenAI is courting open-source developers with a free tier covering ChatGPT Pro, Codex, and security tooling for qualifying project maintainers — following a similar move by Anthropic.
- Grammarly’s ‘expert review’ is just missing the actual experts (TechCrunch AI) Grammarly’s new feature claims to surface insights from renowned writers and thinkers but relies on AI outputs rather than genuine expert input, drawing scrutiny over misleading product framing.
- ByteDance’s open-weight Helios model brings minute-long AI video generation close to real time (The Decoder) Helios, a 14B-parameter open-weight model, hits 19.5 FPS on a single GPU for minute-long clips — a significant efficiency leap in video generation; code and weights are publicly available.
Europe
- When language models hallucinate, they leave “spilled energy” in their own math (The Decoder) Researchers at Sapienza University of Rome discovered measurable computational signatures in LLM activations during hallucinations, enabling a training-free detection method that outperforms prior approaches and generalizes across model families.
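The paper's actual statistic is not reproduced here; purely as a loose illustration of the general idea — training-free detection by treating anomalous activation "energy" as an outlier against a baseline — a toy sketch (all function names, shapes, and thresholds are hypothetical, not the researchers' method):

```python
import numpy as np

def energy(activations):
    """Mean squared L2 norm of hidden states across token positions.

    `activations` is a (tokens, hidden_dim) array; this is a stand-in
    "energy" statistic, not the quantity defined in the paper.
    """
    return float(np.mean(np.sum(np.square(activations), axis=-1)))

def flag_hallucination(activations, baseline_mean, baseline_std, z_threshold=3.0):
    """Flag an output whose activation energy is a z-score outlier
    relative to a baseline of known-grounded generations."""
    z = (energy(activations) - baseline_mean) / baseline_std
    return abs(z) > z_threshold
```

No model retraining is involved: the baseline mean and standard deviation are estimated once from activations of ordinary outputs, and new generations are checked against them.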
Key Themes
- Anthropic ecosystem expansion: Claude Code gained autonomous scheduling, an enterprise marketplace launched, and cloud partners held firm despite Pentagon friction — a dense cluster of moves in a single day.
- AI compute economics: The 25x gap between Claude Code’s subscription price and its actual compute cost is now public, foreshadowing repricing across AI coding tools industry-wide.
- AI-powered security research: Claude’s Firefox audit and Europe’s hallucination-detection research both highlight AI being turned inward — on itself and on software infrastructure — to find failure modes at scale.
- US government AI policy: Draft procurement rules signal an assertive federal posture on licensing and ideological control of AI systems.
- Open-source AI momentum: ByteDance released open weights for Helios, and both OpenAI and Anthropic launched programs to subsidize open-source developers — reflecting strategic competition for the open-source developer base.
For detailed summaries of selected research papers, see papers.md.