Stanford Study Finds Wide Gaps in Privacy Policies for Large Language Models

A new study by Stanford researchers analyzed privacy disclosures from Amazon, Anthropic, Google, Meta, Microsoft, and OpenAI, finding that policies on chat data use vary widely and are often difficult for users to interpret.

The research highlights inconsistent practices around data usage, retention, and transparency, with many platforms using chat data to improve services or train models depending on account type, while enterprise/API products often exclude training by default. Fragmented disclosures and unclear retention limits create privacy risks and impede informed consent. The study underscores the urgent need for clearer policies and stronger governance as AI tools become increasingly integrated into daily life.

NIST Launches AI Agent Standards Initiative to Establish Identity, Security, and Interoperability

The era of autonomous AI agents is here, but legal and security frameworks lag behind. The National Institute of Standards and Technology (NIST) has launched the AI Agent Standards Initiative through its Center for AI Standards and Innovation (CAISI) to address this gap, focusing on international leadership, open-source protocols, and secure digital identities for AI agents.

The initiative directly intersects with Anthropic’s Responsible Scaling Policy (RSP v3.0), which acknowledges the risks of agent autonomy and misuse. As AI agents gain economic capabilities — from coding and managing emails to shopping — NIST emphasizes the need for verifiable identities and security standards to prevent fragmented ecosystems, industrial-scale attacks, and unsafe deployments. Public input is currently being solicited on AI agent security and identity frameworks, highlighting a pivotal moment for defining accountability for digital agents.

Anthropic Expands Cowork: Department-Specific AI Agents Move Deeper into the Enterprise

Anthropic has unveiled a major upgrade to its Cowork agent platform, introducing department-specific AI agents, private agent stores, and deeper integrations with enterprise tools including Google Workspace, DocuSign, FactSet, and Harvey. Partner plugins from Slack, Salesforce, S&P Global, and London Stock Exchange Group further embed AI into existing workflows.

With prebuilt agents spanning ten departments and new capabilities allowing Claude to move between Excel and PowerPoint autonomously, Cowork signals a strategic push into the operational core of the enterprise – accelerating the race with OpenAI to define the AI-native workplace.

FDM-1: The Model That Learns to Use Computers by Watching Humans

Standard Intelligence has unveiled FDM-1, a new “computer-action” model trained on 11 million hours of screen recordings to learn workflows directly from observation. By reverse-engineering user intent from video frames, the system can perform complex CAD modelling, debug software, and even control a real car – all with minimal task-specific training. With the ability to process nearly two hours of continuous screen activity in a single context window, FDM-1 signals a shift from language-trained AI to action-trained AI, potentially redefining how autonomous software systems learn and operate.

London Named OpenAI’s Largest Research Hub Outside the US as UK Doubles Down on AI Leadership and Safety

OpenAI has announced that London will become its largest research hub outside the United States, citing Britain’s strong technology ecosystem, leading universities and scientific institutions. The expansion is paired with deeper collaboration between OpenAI, Microsoft and the UK AI Security Institute, including £5.6 million in new alignment funding. The move reinforces the UK’s ambition to position London as a global centre for frontier AI research while embedding safety and trust at the core of advanced system development.