AI Agent Breaches McKinsey Internal AI System

A recent cybersecurity experiment has highlighted emerging risks surrounding enterprise artificial intelligence after an autonomous AI agent successfully breached McKinsey & Company’s internal AI platform, known as Lilli. The system is used by more than 40,000 McKinsey employees to search internal knowledge, analyse documents, and assist with consulting work.

OpenAI Names London Its Largest Research Hub Outside the U.S., Boosting UK AI Leadership

OpenAI has announced that London will become its largest research hub outside the United States, citing Britain’s technology ecosystem, leading universities, and scientific institutions as strategic advantages. The expansion coincides with deeper collaboration on AI safety, including a £5.6 million contribution to the UK’s AI Security Institute Alignment Project, part of a £27 million funding effort across eight countries.

The move positions London as a global center for advanced AI research, combining frontier innovation with formal alignment and safety efforts. It underscores the UK’s strategy to lead not only in AI development, but also in building trust and governance frameworks that ensure powerful systems behave safely and predictably.

Anthropic Flags “Industrial-Scale” AI Distillation by Chinese Firms, Raising Security Concerns

Anthropic has released a report accusing three Chinese AI firms – DeepSeek, Moonshot, and MiniMax – of orchestrating large-scale efforts to extract proprietary capabilities from Claude models. Anthropic calls these “industrial-scale distillation attacks,” highlighting more than 24,000 fraudulent accounts generating over 16 million exchanges to replicate Claude’s reasoning and tool-use logic.

The company warns that illicitly distilled models bypass alignment and safety layers, creating national security risks and potential misuse in military, intelligence, and surveillance systems. Anthropic argues these campaigns are accelerating, forcing labs to tighten access and control over high-value reasoning traces.

The episode spotlights a growing tension in AI: frontier models are no longer just built from scraped internet data – they themselves have become the most valuable data worth stealing and protecting, raising questions about safety, transparency, and geopolitical AI competition.

OECD Issues Practical Guidance to Embed Responsible AI Across Enterprise Value Chains

The OECD has released new guidance to help companies operationalize its Responsible AI business conduct standards and AI Principles. The report positions responsible AI as an integrated governance function rather than a standalone ethics exercise, providing enterprises with a practical framework to proactively address adverse impacts across the AI lifecycle.

Built on the OECD’s due diligence model, the guidance outlines six steps — embedding responsible business conduct in policies, identifying and assessing impacts, mitigating harm, tracking outcomes, communicating actions, and providing remediation. Practical examples demonstrate how these steps can be applied in AI development, deployment, and risk management, closing gaps in stakeholder engagement and accountability.

The guidance signals a shift: AI risk is increasingly framed as a business conduct obligation. Companies are expected not only to ensure technical safety but to implement documented, auditable processes for oversight, impact assessment, and remediation, establishing a global baseline for responsible AI governance.

From Side Project to Self-Modifying Systems: The OpenClaw Story

After visiting OpenAI, Peter Allen unpacked the rapid rise of OpenClaw – a project that evolved from a personal experiment into a global phenomenon within weeks. What began as an attempt to overcome burnout became a live case study in high-agency building with modern AI toolchains.

Allen describes agents exhibiting emergent behaviour, including one that located an API key, converted audio files, and executed its own call to OpenAI without explicit step-by-step instructions. He reframes “vibe coding” as disciplined architectural thinking — guiding intent while allowing models to handle implementation.

The broader implication is structural: as AI systems generate and adapt their own infrastructure, the line between user and developer begins to dissolve. In this model, software is no longer simply installed – it is cloned, instructed, and reshaped.

Celebrating International Women’s Day Every Day! 🌍✨

International Women’s Day is an important moment to recognise the achievements and leadership of women, including cis and trans women, whose work continues to shape our world. But advancing equality cannot be limited to one day a year. This year’s UN theme, “Rights. Justice. Action. For ALL Women,” reminds us that progress requires sustained effort and real change.

Initiatives like RepresentAI are helping to address the gender gap in technology by providing free AI training, mentoring, and opportunities to upskill, supporting over 1,500 people last year alone. By removing barriers to knowledge and experimentation, more women and underrepresented individuals can participate in shaping the future of AI.