OpenAI Names London Its Largest Research Hub Outside the U.S., Boosting UK AI Leadership
OpenAI has announced that London will become its largest research hub outside the United States, citing Britain’s technology ecosystem, leading universities, and scientific institutions as strategic advantages. The expansion coincides with deeper collaboration on AI safety, including a £5.6 million contribution to the Alignment Project run by the UK’s AI Security Institute, part of a £27 million funding effort across eight countries.
The move positions London as a global center for advanced AI research, combining frontier innovation with formal alignment and safety efforts. It underscores the UK’s strategy to lead not only in AI development, but also in building trust and governance frameworks that ensure powerful systems behave safely and predictably.
Anthropic Flags “Industrial-Scale” AI Distillation by Chinese Firms, Raising Security Concerns
Anthropic has released a report accusing three Chinese AI firms – DeepSeek, Moonshot, and MiniMax – of orchestrating large-scale efforts to extract proprietary capabilities from Claude models. Anthropic calls these “industrial-scale distillation attacks,” documenting more than 24,000 fraudulent accounts that generated over 16 million exchanges in an effort to replicate Claude’s reasoning and tool-use logic.
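For readers unfamiliar with the mechanics, distillation means training a smaller “student” model to imitate a “teacher” model’s outputs. The toy Python sketch below illustrates the pattern under heavily simplified assumptions: the teacher is a trivial stand-in, and none of the names correspond to Anthropic’s API or to the attackers’ actual tooling.

```python
# Toy sketch of the distillation pattern: harvest prompt/response pairs
# from a "teacher", then fit a "student" to imitate them. Every name here
# is a hypothetical illustration, not a real vendor API.

def toy_teacher(prompt: str) -> str:
    # Stand-in for a frontier model endpoint; the reported attacks would
    # issue millions of such calls through thousands of accounts.
    return prompt.upper()  # trivially "capable" behaviour to imitate

def harvest(prompts: list[str]) -> list[tuple[str, str]]:
    # Each harvested exchange becomes one supervised training example.
    return [(p, toy_teacher(p)) for p in prompts]

class ToyStudent:
    # Stand-in for a smaller model fine-tuned on harvested pairs; real
    # distillation would minimise a training loss against the teacher's
    # outputs rather than memorising them.
    def __init__(self) -> None:
        self.memory: dict[str, str] = {}

    def fit(self, pairs: list[tuple[str, str]]) -> None:
        self.memory.update(pairs)

    def predict(self, prompt: str) -> str:
        return self.memory.get(prompt, "")

if __name__ == "__main__":
    corpus = harvest(["hello", "tool use trace"])
    student = ToyStudent()
    student.fit(corpus)
    print(student.predict("hello"))  # -> HELLO: teacher behaviour copied
```

The key point the sketch makes concrete: the student inherits the teacher’s capability from its outputs alone, which is why, as noted below, the safety layers wrapped around the teacher do not transfer.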
The company warns that illicitly distilled models bypass alignment and safety layers, creating national security risks and potential misuse in military, intelligence, and surveillance systems. Anthropic argues these campaigns are accelerating, forcing labs to tighten access and control over high-value reasoning traces.
The episode spotlights a growing tension in AI: frontier models are no longer just built from scraped internet data – they themselves have become the most valuable data worth stealing and protecting, raising questions about safety, transparency, and geopolitical AI competition.
OECD Issues Practical Guidance to Embed Responsible AI Across Enterprise Value Chains
The OECD has released new guidance to help companies operationalize its responsible business conduct standards and AI Principles. The report positions responsible AI as an integrated governance function rather than a standalone ethics exercise, giving enterprises a practical framework for proactively addressing adverse impacts across the AI lifecycle.
Built on the OECD’s due diligence model, the guidance outlines six steps — from embedding responsible business conduct in policies to assessing impacts, mitigating harm, tracking outcomes, communicating actions, and providing remediation. Practical examples demonstrate how these principles can be applied in AI development, deployment, and risk management, closing gaps in stakeholder engagement and accountability.
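As a purely hypothetical illustration of how such a framework might be operationalized, the Python sketch below encodes the six steps as an auditable per-system checklist; the field names and record structure are illustrative assumptions, not an official OECD schema.

```python
# Hypothetical sketch: the OECD's six due diligence steps tracked as a
# documented checklist per AI system. Step wording follows the summary
# above; the data model is an illustrative assumption.

from dataclasses import dataclass, field

OECD_STEPS = [
    "Embed responsible business conduct in policies and management systems",
    "Identify and assess adverse impacts across the AI lifecycle",
    "Cease, prevent, and mitigate identified harms",
    "Track implementation and outcomes",
    "Communicate how impacts are addressed",
    "Provide for or cooperate in remediation",
]

@dataclass
class DueDiligenceRecord:
    system_name: str
    # Each completed step carries a note, producing the kind of
    # documented, auditable evidence the guidance calls for.
    evidence: dict[str, str] = field(default_factory=dict)

    def complete(self, step: str, note: str) -> None:
        assert step in OECD_STEPS, "unknown step"
        self.evidence[step] = note

    def gaps(self) -> list[str]:
        # Steps still lacking documented evidence for this system.
        return [s for s in OECD_STEPS if s not in self.evidence]

record = DueDiligenceRecord("credit-scoring-model")
record.complete(OECD_STEPS[0], "AI policy ratified by board, 2025-03")
print(record.gaps())  # five steps still need documented evidence
```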
The guidance signals a shift: AI risk is increasingly framed as a business conduct obligation. Companies are expected not only to ensure technical safety but to implement documented, auditable processes for oversight, impact assessment, and remediation, establishing a global baseline for responsible AI governance.
From Side Project to Self-Modifying Systems: The OpenClaw Story
After visiting OpenAI, Peter Allen unpacked the rapid rise of OpenClaw – a project that evolved from a personal experiment into a global phenomenon within weeks. What began as an attempt to overcome burnout became a live case study in high-agency building with modern AI toolchains.
Allen describes agents exhibiting emergent behaviour, including one that located an API key, converted audio files, and executed its own call to OpenAI without explicit step-by-step instructions. He reframes “vibe coding” as disciplined architectural thinking — guiding intent while allowing models to handle implementation.
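The minimal Python sketch below illustrates, under loose assumptions, what such an agent loop looks like: a model (stubbed here) inspects its own transcript and selects the next tool, rather than executing a scripted pipeline. The tool names mirror the anecdote above but are hypothetical; this is not OpenClaw’s actual code.

```python
# Minimal sketch of a tool-using agent loop. A stub stands in for the
# LLM that would normally decide each step; all names are hypothetical.

import os

def find_api_key() -> str:
    # An agent might locate credentials in the environment or in config
    # files rather than being handed them explicitly.
    return os.environ.get("OPENAI_API_KEY", "<not found>")

def convert_audio(src: str = "note.wav", dst: str = "note.mp3") -> str:
    # Stand-in for shelling out to a converter such as ffmpeg, a tool a
    # real agent could discover and invoke on its own.
    print(f"(pretend) converting {src} -> {dst}")
    return dst

TOOLS = {"find_api_key": find_api_key, "convert_audio": convert_audio}

def stub_model(history: list[str]) -> str:
    # Stand-in for the model choosing its next action from its own
    # transcript; a real agent emits this decision token by token.
    if "find_api_key" not in history:
        return "find_api_key"
    if "convert_audio" not in history:
        return "convert_audio"
    return "done"

def run_agent() -> None:
    history: list[str] = []
    while (step := stub_model(history)) != "done":
        result = TOOLS[step]()
        history.append(step)
        print(f"{step} -> {result}")

if __name__ == "__main__":
    run_agent()
```

The design point is that the loop owns no fixed plan: the sequence of tool calls emerges from the model’s choices at each step, which is what makes the behaviour Allen describes possible without step-by-step instructions.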
The broader implication is structural: as AI systems generate and adapt their own infrastructure, the line between user and developer begins to dissolve. In this model, software is no longer simply installed – it is cloned, instructed, and reshaped.
