AI & Tech News Digest — February 28, 2026
Highlights
- Anthropic vs. Pentagon standoff escalates: The Trump administration banned Anthropic products from all federal agencies and the Pentagon designated the company a “supply chain risk” — a label typically reserved for foreign adversaries — after Anthropic refused to enable autonomous weapons and mass surveillance tools. Anthropic vows to challenge the designation in court.
- OpenAI moves quickly to fill the void: Hours after Anthropic’s ban, OpenAI signed a deal to deploy AI in classified DoD environments, claiming it maintains the same safety principles but with explicit “technical safeguards.”
- U.S. and Israel strike Iran, Khamenei reported killed: In a dramatic geopolitical escalation, U.S. and Israeli forces launched coordinated strikes targeting Iran’s leadership, with President Trump claiming Supreme Leader Khamenei was killed.
- OpenAI closes $110B funding round: Valued at $730B, OpenAI secured investment from SoftBank, Amazon, and NVIDIA, with plans to build a “Stateful Runtime” AI agent infrastructure on AWS as part of a strategic Amazon partnership.
- Claude surges to No. 2 in the App Store: Amid the Pentagon controversy, public interest in Anthropic’s Claude app spiked, pushing it to the second-highest position in the App Store.
News
AI Security
- ClawJacked: Malicious Sites Can Hijack Local OpenClaw AI Agents via WebSocket (The Hacker News) — A high-severity flaw in OpenClaw’s core system allowed malicious websites to connect to locally running AI agents and seize control — no plugins or marketplace required. OpenClaw has issued a fix.
- Thousands of Google Cloud API Keys Exposed with Gemini Access (The Hacker News) — Truffle Security discovered nearly 3,000 publicly exposed Google Cloud API keys that could be abused to authenticate to sensitive Gemini endpoints and access private model data.
- Pentagon Designates Anthropic Supply Chain Risk Over AI Military Dispute (The Hacker News) — Defense Secretary Pete Hegseth formally classified Anthropic as a supply chain risk after negotiations over military AI use cases broke down. Anthropic is preparing legal action.
- Google Quantum-Proofs HTTPS by Squeezing 15kB into 700 Bytes (Ars Technica) — Using Merkle Tree Certificates, Google compresses HTTPS certificate data from 15kB to 700 bytes, making the protocol resistant to quantum attacks via Shor’s algorithm. Support is already live in Chrome.
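The compactness of Merkle Tree Certificates comes from inclusion proofs: a verifier needs only one leaf, a logarithmic number of sibling hashes, and the tree root, rather than a full certificate chain. A toy sketch of that mechanism (illustration only, not Google’s actual certificate format, and it assumes a power-of-two leaf count):

```python
# Toy Merkle tree with inclusion proofs: verifying one leaf needs only
# log2(n) sibling hashes plus the root, which is why Merkle-based
# certificate designs can be so compact. Illustrative sketch only.
import hashlib

def h(data: bytes) -> bytes:
    return hashlib.sha256(data).digest()

def build_tree(leaves):
    """Return all levels, from leaf hashes up to the single root hash."""
    level = [h(leaf) for leaf in leaves]   # assumes len(leaves) is 2**k
    levels = [level]
    while len(level) > 1:
        level = [h(level[i] + level[i + 1]) for i in range(0, len(level), 2)]
        levels.append(level)
    return levels

def inclusion_proof(levels, index):
    """Collect the sibling hash at each level below the root."""
    proof = []
    for level in levels[:-1]:
        proof.append(level[index ^ 1])  # the other child of the same parent
        index //= 2
    return proof

def verify(leaf, index, proof, root):
    """Recompute the root from the leaf and its sibling path."""
    node = h(leaf)
    for sibling in proof:
        node = h(sibling + node) if index % 2 else h(node + sibling)
        index //= 2
    return node == root

leaves = [b"cert-0", b"cert-1", b"cert-2", b"cert-3"]
levels = build_tree(leaves)
root = levels[-1][0]
proof = inclusion_proof(levels, 2)
print(verify(b"cert-2", 2, proof, root))  # True
```

For four leaves the proof is two hashes; for a million leaves it is only twenty, which is the intuition behind fitting post-quantum certificate material into a few hundred bytes.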
USA
Anthropic vs. Pentagon
- Defense Secretary Designates Anthropic a Supply Chain Risk (The Verge) — Pete Hegseth formally designated Anthropic a supply chain risk — nearly two hours after Trump announced a federal ban on Anthropic products — applying a label normally used for foreign adversaries.
- Anthropic Calls Pentagon Designation Illegal, Vows to Challenge in Court (The Decoder) — Anthropic says the classification arose from its refusal to enable autonomous weapons and mass surveillance, and that the action itself violates the law.
- Trump Directs Agencies to Drop Anthropic’s AI (Japan Times) — The executive order deals a major blow to the AI lab, barring all Anthropic products from federal agencies following the Pentagon standoff.
- Anthropic’s Claude Rises to No. 2 in the App Store (TechCrunch) — The controversy appears to have boosted public awareness of Claude, driving a surge in downloads that pushed it to second place in the App Store.
OpenAI & the Pentagon
- OpenAI Signs Pentagon Deal for Classified AI Networks (The Decoder) — Just hours after Anthropic’s ban, OpenAI announced a contract to deploy AI in classified DoD environments, raising questions about whether OpenAI’s stated safety principles truly align with Anthropic’s.
- Sam Altman Announces Pentagon Deal with ‘Technical Safeguards’ (TechCrunch) — Altman says OpenAI’s defense contract includes explicit protections — cloud-only deployment, legal provisions, and prohibitions on autonomous weapons use.
- Our Agreement with the Department of War (OpenAI Blog) — OpenAI published full contract details, outlining safety red lines and how AI systems will be deployed in classified environments.
OpenAI: Other
- OpenAI Closes $110B Funding Round from SoftBank, Amazon, NVIDIA (Japan Times) — Valued at $730B, OpenAI will build “Stateful Runtime” AI agent infrastructure on AWS as part of a sweeping strategic partnership with Amazon, while also tapping Amazon’s custom silicon.
- OpenAI Fires Employee for Using Confidential Info in Prediction Markets (Gigazine) — An OpenAI employee was dismissed after reportedly leveraging internal company information on Polymarket and other prediction platforms, per WIRED reporting.
- OpenAI Calls Stuart Russell a “Doomer” in Court (The Decoder) — In ongoing litigation, OpenAI is attempting to discredit AI safety expert Stuart Russell as a doomsday prophet — even though Altman himself co-signed Russell’s AI extinction warnings in prior years.
- OpenAI Promises Canada Tighter Safety Protocols After ChatGPT Flagged Shooter’s Chats but Never Called Police (The Decoder) — Following a fatal school shooting, OpenAI admitted it had flagged and blocked a suspect’s account but did not alert authorities, prompting a policy overhaul for cooperating with law enforcement.
AI Industry
- The Billion-Dollar Infrastructure Deals Powering the AI Boom (TechCrunch) — A comprehensive roundup of the largest AI infrastructure commitments from Meta, Oracle, Microsoft, Google, and OpenAI, including massive data center expansions with planned compute totaling more than 5 GW.
- Frontier LLMs Lose Up to 33% Accuracy in Long Conversations (The Decoder) — New research shows that even the newest models — including GPT-5.2 and Claude 4.6 — suffer significant performance degradation as conversation length increases.
- Perplexity Open-Sources Embedding Models Matching Google and Alibaba (The Decoder) — Perplexity’s two new open-source text embedding models match or beat Google’s and Alibaba’s offerings at a fraction of the memory cost.
- Current LLM Training Leaves Large Parts of the Internet on the Table (The Decoder) — Researchers from Apple, Stanford, and UW found that different HTML extraction tools used in training pipelines pull surprisingly different content, leaving significant web data unused.
- A New Benchmark Pits Five AI Models as Autonomous Social Media Agents on X (The Decoder) — Arcada Labs is running five leading AI models as autonomous agents on X to benchmark their real-world performance and behavior in open-ended social environments.
- Xi’s AI Ambitions Collide with China’s Fragile Employment Market (Japan Times) — China faces dual pressures: aggressive AI development to compete geopolitically with the U.S., and rising unemployment from automation that could trigger social unrest.
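The HTML-extraction finding above is easy to reproduce in miniature: two naive text extractors applied to the same page already disagree about what the page "says". A minimal sketch using only the standard library (the extractors here are illustrative stand-ins, not the tools the researchers actually compared):

```python
# Two naive text extractors disagreeing on the same HTML page.
# Illustrative only: real training pipelines use dedicated extraction
# tools, which differ in subtler but analogous ways.
from html.parser import HTMLParser

class TextExtractor(HTMLParser):
    """Collects text nodes, optionally skipping subtrees of given tags."""

    def __init__(self, skip_tags=()):
        super().__init__()
        self.skip_tags = set(skip_tags)
        self.depth = 0        # nesting depth inside skipped subtrees
        self.chunks = []

    def handle_starttag(self, tag, attrs):
        if tag in self.skip_tags:
            self.depth += 1

    def handle_endtag(self, tag):
        if tag in self.skip_tags and self.depth:
            self.depth -= 1

    def handle_data(self, data):
        if self.depth == 0 and data.strip():
            self.chunks.append(data.strip())

def extract(html, skip_tags=()):
    parser = TextExtractor(skip_tags)
    parser.feed(html)
    return " ".join(parser.chunks)

page = """<html><body>
<nav>Home | About</nav>
<p>Main article text.</p>
<script>var x = 1;</script>
</body></html>"""

print(extract(page))                                # keeps nav and script text
print(extract(page, skip_tags=("nav", "script")))   # boilerplate stripped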
Geopolitics
- U.S. and Israel Launch Strikes on Iran, Targeting Its Leadership (Japan Times) — Trump said the joint U.S.-Israeli strikes would end a security threat and give Iranians a chance to topple their rulers.
- Iranian Leader Khamenei Said Killed in U.S. and Israeli Strikes (Japan Times) — Trump announced the killing of Ayatollah Khamenei, while a senior Israeli official confirmed his body had been found.
- Paramount to Buy Warner Bros Discovery in $110B Deal (Japan Times) — The merger would create one of the world’s largest film studios, with Netflix stepping out of the bidding process.
- NASA Announces Overhaul of Artemis Lunar Program (Japan Times) — NASA is restructuring the repeatedly delayed Artemis program to ensure Americans return to the lunar surface by 2028.
Europe
- Russia Weighs Halt to Peace Talks Unless Ukraine Cedes Territory (Japan Times) — Kremlin-linked sources say next week’s talks will be decisive, with Russia demanding territorial concessions as a precondition for continuing negotiations.
- France’s Macron Planning Official Visit to Japan in April (Japan Times) — The visit aims to reaffirm cooperation with Prime Minister Takaichi ahead of France’s G7 summit in June.
Japan
- Japan Prepares for Risks from U.S.-Iran Strikes (Japan Times) — The Foreign Ministry noted approximately 200 Japanese nationals are in Iran; no casualties reported so far.
- Japan Joins U.S., Philippines for Military Exercises Near Taiwan (Japan Times) — Joint naval and aerial exercises near the Bashi Channel underscore the strategic importance of the waterway between the Philippines and Taiwan.
- Takaichi Backs Male-Only Imperial Succession (Japan Times) — Prime Minister Takaichi reaffirmed her position maintaining Japan’s male-only imperial succession line during a parliamentary session.
- Paternity Leave Gets Boost as Local Governments Act (Japan Times) — Local governments are making paternity leave easier to take, aiming to retain workers amid ongoing population outflow to major cities.
- India’s GDP Revisions Mean Longer Wait to Overtake Japan (Japan Times) — Revised Indian GDP figures push back the timeline for surpassing Japan economically, though India’s 7%+ growth rate makes the outcome a matter of when, not if.
Key Themes
- AI governance vs. national security: The Anthropic-Pentagon standoff is a defining moment for how AI companies navigate government pressure to enable military applications they consider dangerous. The contrast with OpenAI’s rapid deal-signing reveals fundamentally different corporate stances on safety compliance.
- Concentration of AI power: OpenAI’s $110B raise, massive infrastructure commitments by Big Tech, and Perplexity’s open-source push illustrate the widening gap between frontier AI players and the rest of the field.
- Geopolitical escalation: The U.S.-Israeli strikes on Iran and the reported killing of Khamenei represent a dramatic Middle East escalation with broad implications for regional stability, energy markets, and U.S. foreign policy credibility.
- LLM limitations in practice: New evidence that long conversations degrade accuracy by up to 33%, combined with findings that HTML extraction choices dramatically affect training data quality, challenges assumptions about current frontier model reliability.
- AI safety as concrete societal stakes: From ChatGPT’s failure to alert Canadian authorities about a suspected shooter to OpenAI’s courtroom treatment of its own former allies in AI safety, the gap between AI safety rhetoric and practice is under unprecedented scrutiny.
For detailed summaries of selected research papers, see papers.md.