Politics
Deal to get ChatGPT Plus for whole of UK discussed by OpenAI boss and minister
- Exclusive: Deal that could have cost £2bn was floated at meeting between technology secretary Peter Kyle and Sam Altman
- OpenAI recently agreed on a deal with the United Arab Emirates to “enable ChatGPT nationwide” and use the technology in public sectors
- The UK government has been keen to attract AI investment from the US, having struck deals with OpenAI’s rivals Google and Anthropic earlier this year.

OpenAI’s Altman warns the U.S. is underestimating China’s next-gen AI threat
- OpenAI CEO Sam Altman said the U.S. may be underestimating the complexity and seriousness of China’s progress in artificial intelligence.
- His comments come as Washington adjusts its policies designed to curb China’s AI ambitions.
- Altman said competition from Chinese models — particularly open-source systems like DeepSeek and Kimi K2 — was a factor in OpenAI’s recent decision to release its own open-weight models.

China is quietly upstaging America with its open models
While American tech giants are spending megabucks to learn the secrets of their rivals’ proprietary artificial-intelligence (AI) models, in China a different battle is under way. It is what Andrew Ng, a Stanford University-based AI boffin, recently called the “Darwinian life-or-death struggle” among builders of China’s more open large language models (LLMs).

Industry
NVIDIA AI Released Jet-Nemotron: 53x Faster Hybrid-Architecture Language Model Series that Translates to a 98% Cost Reduction for Inference at Scale
- While today’s state-of-the-art (SOTA) LLMs, like Qwen3, Llama3.2, and Gemma3, have set new benchmarks for accuracy and flexibility, their O(n²) self-attention mechanism incurs exorbitant inference costs, especially at long context lengths (see the back-of-envelope comparison after this list).
- The core innovation is PostNAS: a neural architecture search pipeline designed specifically for efficiently retrofitting pre-trained models.
- PostNAS is not a one-off trick: it’s a general-purpose framework for accelerating any Transformer, lowering the cost of future breakthroughs.
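A back-of-envelope sketch of why the O(n²) term matters: the per-layer cost of softmax attention grows with the square of sequence length, while linear-attention replacements of the kind PostNAS swaps in grow linearly. The dimensions and FLOP formulas below are simplified illustrations, not Jet-Nemotron measurements.

```python
# Rough FLOP comparison of softmax attention vs. a linear-attention
# replacement, to illustrate why retrofitting O(n^2) layers pays off.
# All numbers are illustrative, not Jet-Nemotron benchmarks.

def full_attention_flops(n: int, d: int) -> int:
    """Approximate FLOPs for one softmax-attention head: QK^T plus AV."""
    return 2 * (n * n * d)  # two n-by-n-by-d matrix products dominate

def linear_attention_flops(n: int, d: int) -> int:
    """Approximate FLOPs for one linear-attention head: (K^T V), then Q(K^T V)."""
    return 2 * (n * d * d)  # cost grows with n, not n^2

d = 128  # assumed per-head dimension
for n in (1_024, 8_192, 65_536):
    ratio = full_attention_flops(n, d) / linear_attention_flops(n, d)
    print(f"seq len {n:>6}: full/linear FLOP ratio ~ {ratio:,.0f}x")
```

The ratio works out to roughly n/d, so the advantage compounds with context length, which is consistent with the largest reported speedups appearing in long-context inference.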

Researchers Are Already Leaving Meta’s New Superintelligence Lab
- CEO Mark Zuckerberg went on a recruiting blitz to lure top AI researchers to Meta. WIRED has confirmed that three recent hires have now resigned.
- Just two months after Mark Zuckerberg launched the initiative with sky-high recruiting offers, two of the departing researchers have returned to OpenAI after less than a month, while a third, Rishabh Agarwal, left for undisclosed reasons. Meta is also losing longtime generative AI product director Chaya Nayak, who is joining OpenAI.

Scientists just developed a new AI modeled on the human brain — it’s outperforming LLMs like ChatGPT at reasoning tasks
- The hierarchical reasoning model (HRM) system is modeled on the way the human brain processes complex information, and it outperformed leading LLMs in a notoriously hard-to-beat benchmark.
- HRM executes sequential reasoning tasks in a single forward pass, without any explicit supervision of the intermediate steps, through two modules: a high-level module responsible for slow, abstract planning, and a low-level module that handles rapid, detailed computations (a minimal sketch follows this list).
- HRM scored 40.3% on ARC-AGI-1, compared with 34.5% for OpenAI’s o3-mini-high, 21.2% for Anthropic’s Claude 3.7, and 15.8% for DeepSeek R1. On the tougher ARC-AGI-2 test, HRM scored 5% versus o3-mini-high’s 3%, DeepSeek R1’s 1.3%, and Claude 3.7’s 0.9%.
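As a concrete illustration of the two-module design described above, here is a minimal sketch of a slow high-level planner coupled to a fast low-level worker inside one forward pass. The choice of GRU cells, the sizes, and the update schedule are assumptions for illustration, not the paper’s actual architecture.

```python
# Minimal sketch of the HRM-style two-timescale idea: a slow planner
# updated occasionally, a fast worker updated every step, all in a
# single forward pass. Names and hyperparameters are illustrative.
import torch
import torch.nn as nn

class TwoTimescaleReasoner(nn.Module):
    def __init__(self, dim: int = 64, low_steps: int = 4, high_steps: int = 3):
        super().__init__()
        self.high = nn.GRUCell(dim, dim)      # slow, abstract planning
        self.low = nn.GRUCell(2 * dim, dim)   # rapid, detailed computation
        self.low_steps = low_steps
        self.high_steps = high_steps
        self.readout = nn.Linear(dim, dim)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        h = torch.zeros_like(x)  # high-level (planner) state
        l = torch.zeros_like(x)  # low-level (worker) state
        for _ in range(self.high_steps):
            # the worker takes several fast steps conditioned on the plan
            for _ in range(self.low_steps):
                l = self.low(torch.cat([x, h], dim=-1), l)
            # the planner updates slowly, from the worker's result
            h = self.high(l, h)
        return self.readout(h)

model = TwoTimescaleReasoner()
print(model(torch.randn(8, 64)).shape)  # torch.Size([8, 64])
```

Note that no intermediate step is supervised here; only the final readout would receive a training signal, matching the single-forward-pass description above.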

Anthropic will start training its AI models on chat transcripts
Anthropic is also extending its data retention policy to 5 years. To opt-out:
- In Claude, go to Settings > Privacy.
- At the bottom, click the “Review” button on the right side of the black bar labeled “Review and accept updates to the Consumer Terms and Privacy Policy” to open the updated privacy settings.
- Toggle off the “You can help improve Claude” setting.
- Hit accept.

The AI doomers are having their moment
- Top AI companies are in a race to develop artificial general intelligence.
- The large language models powering popular chatbots, however, are showing their limits.
- Some researchers say world models or other strategies might be the clearer path to AGI.

Elon Musk says he wants to ‘simulate’ software companies like Microsoft ‘purely’ with AI. He’s calling it ‘Macrohard.’

Musk Tried to Enlist Zuckerberg to Help Finance Bid for OpenAI
- OpenAI said Elon Musk identified Mark Zuckerberg as one of the people with whom he had communicated about potentially financing a deal to purchase the ChatGPT maker.
- Neither Zuckerberg nor Meta signed the letter of intent or participated in the $97.4 billion bid, OpenAI said in the filing.
- OpenAI asked the judge to order Meta to turn over documentation related to any communication the tech company had with Musk.

Google says its Gemini AI sips a mere ‘five drops’ of water per text prompt, but experts disagree with its findings
(Additional article from The Verge)
- A new research paper published by Google estimates that a median Gemini AI text prompt uses less energy than watching nine seconds of television and consumes “0.26 milliliters (or about five drops) of water,” but some experts have disagreed with its findings.
- Shaolei Ren, an associate professor of electrical and computer engineering at the University of California, Riverside, whose research is cited in the study, said: “They’re just hiding the critical information. This really sends the wrong message to the world.”
- Google omitted indirect water use in its data. While the claim that roughly five drops—or 0.26 ml—of water is consumed per median text prompt may be true regarding data center cooling systems, it doesn’t take into account the vast amount of water used by power plants providing electricity to the facilities.
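A quick worked calculation shows what the per-prompt figure means in aggregate. The 0.26 ml number is Google’s; the daily prompt volume and the indirect-use multiplier are illustrative assumptions, not reported figures.

```python
# Scale Google's per-prompt water figure up to fleet level. Only the
# 0.26 ml value comes from the paper; the prompt volume and the
# indirect-water multiplier are assumptions for illustration.

ml_per_prompt = 0.26             # Google's median cooling figure
prompts_per_day = 1_000_000_000  # assumed volume

direct_liters = ml_per_prompt * prompts_per_day / 1_000  # ml -> liters
print(f"direct cooling water: {direct_liters:,.0f} L/day")  # 260,000 L/day

# The critics' point: power plants supplying the electricity also consume
# water. If indirect use were, say, twice the on-site figure (assumed):
indirect_multiplier = 2.0
total_liters = direct_liters * (1 + indirect_multiplier)
print(f"with assumed indirect use: {total_liters:,.0f} L/day")  # 780,000 L/day
```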

The AI Report That’s Spooking Wall Street
- A new MIT report draws on 150 executive interviews, a survey of 350 employees, and an analysis of 300 public AI deployments.
- “Just 5% of integrated AI pilots are extracting millions in value, while the vast majority remain stuck with no measurable profit and loss impact,” the report said, meaning that “95 per cent of organizations are getting zero return.”

Elon Musk’s xAI Published Hundreds of Thousands of Grok Chatbot Conversations
- Anytime a Grok user clicks the “share” button on one of their chats with the bot, a unique URL is created, allowing them to share the conversation via email, text message or other means.
- Unbeknownst to users, though, that unique URL is also made available to search engines, like Google, Bing and DuckDuckGo, making those conversations searchable to anyone on the web.
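For context, sites can keep pages like these out of search indexes with standard exclusion signals: an X-Robots-Tag response header or a robots meta tag on the shared page. Below is a minimal sketch using Python’s standard library; the /share/ path and the handler are hypothetical, not Grok’s actual code.

```python
# Minimal sketch: serve shared-chat pages with "noindex" signals so
# crawlers that discover the unique URL will not index it. The route
# and markup are hypothetical, not Grok's implementation.
from http.server import BaseHTTPRequestHandler, HTTPServer

class ShareHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        if self.path.startswith("/share/"):
            self.send_response(200)
            # header-level signal, honored by major search engines
            self.send_header("X-Robots-Tag", "noindex, nofollow")
            self.send_header("Content-Type", "text/html; charset=utf-8")
            self.end_headers()
            # equivalent page-level signal as a robots meta tag
            self.wfile.write(
                b"<html><head><meta name='robots' content='noindex'></head>"
                b"<body>shared conversation</body></html>"
            )
        else:
            self.send_error(404)

if __name__ == "__main__":
    HTTPServer(("localhost", 8000), ShareHandler).serve_forever()
```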

OpenAI logged its first $1 billion month but is still ‘constantly under compute,’ CFO says
- OpenAI CFO Sarah Friar said the company is “constantly under compute,” but hit its first $1 billion revenue month in July.
- CNBC confirmed last week that the Sam Altman-led AI company was in talks to sell about $6 billion in stock at a roughly $500 billion valuation.
- Friar said the company is seeing an acceleration in some paid subscribers for its new GPT-5 models, which faced criticism from some users following their debut.

Claude Opus 4 and 4.1 Can Now End a Rare Subset of Conversations
We recently gave Claude Opus 4 and 4.1 the ability to end conversations in our consumer chat interfaces. This ability is intended for use in rare, extreme cases of persistently harmful or abusive user interactions. This feature was developed primarily as part of our exploratory work on potential AI welfare, though it has broader relevance to model alignment and safeguards.

Alibaba AI Team Just Released Ovis 2.5 Multimodal LLMs: A Major Leap in Open-Source AI with Enhanced Visual Perception and Reasoning Capabilities
- Alibaba’s Ovis2.5, released in 9B and 2B parameter versions, sets a new bar for open-source multimodal language models by integrating a native-resolution vision transformer and deep reasoning capabilities. This architecture enables Ovis2.5 to process visual inputs at their original resolutions, preserving critical details for tasks like chart analysis, OCR, document understanding, and STEM reasoning. The model’s “thinking mode” allows users to trigger enhanced step-by-step reflection and self-correction, boosting accuracy on complex queries and technical challenges.
- Ovis2.5 matches or surpasses most open-source competitors on industry benchmarks like OpenCompass, MathVista, and OCRBench V2, while delivering efficient, scalable training and robust performance even in its lightweight 2B version. Praised for its versatile applications—from cloud AI to mobile inference—the model is now openly available on Hugging Face, empowering researchers and developers with high-fidelity multimodal reasoning and visual comprehension that approach proprietary model standards.
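For readers who want to try the model, a minimal loading sketch follows, assuming Ovis2.5 uses the usual Hugging Face custom-model pattern. The repo id and the trust_remote_code requirement are assumptions based on the release description; consult the model card for the exact inference API.

```python
# Minimal loading sketch under the assumptions stated above; the repo id
# is a guess from the release naming, and the multimodal inference calls
# (image handling, "thinking mode") are model-specific per the model card.
import torch
from transformers import AutoModelForCausalLM

model_id = "AIDC-AI/Ovis2.5-2B"  # assumed repo id; the 9B variant is analogous
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,  # the lightweight 2B variant fits on one GPU
    trust_remote_code=True,      # assumes the repo ships custom modeling code
).eval()
```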

Zuckerberg squandered his AI talent. Now he’s spending billions to replace it
- At Meta, a chaotic culture and lack of vision have led to brain drain, with rivals saying its AI talent is lacklustre. But Zuckerberg’s frenzied hiring spree hasn’t stopped the departures.
- “Meta is the Washington Commanders of tech companies. They massively overpay for okay-ish AI scientists and then civilians think those are the best AI scientists in the world because they are paid so much.”

Perplexity is using stealth, undeclared crawlers to evade website no-crawl directives
- Although Perplexity initially crawls from their declared user agent, when they are presented with a network block, they appear to obscure their crawling identity in an attempt to circumvent the website’s preferences. We see continued evidence that Perplexity is repeatedly modifying their user agent and changing their source ASNs to hide their crawling activity, as well as ignoring — or sometimes failing to even fetch — robots.txt files. (A minimal robots.txt check is sketched after this item.)
- Perplexity’s response: “The bluster around this issue reveals that Cloudflare’s leadership is either dangerously misinformed on the basics of AI, or simply more flair than cloud.”
- X article: “Agents or Bots? Making Sense of AI on the Open Web”
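For reference, respecting a site’s crawl preferences comes down to fetching robots.txt and honoring it under a stable, declared user agent. A minimal check with Python’s standard library, using placeholder domain and user-agent values:

```python
# What "respecting robots.txt" looks like in practice: a well-behaved
# crawler checks the site's policy for its declared user agent before
# fetching. Domain and user-agent strings here are placeholders.
from urllib import robotparser

rp = robotparser.RobotFileParser()
rp.set_url("https://example.com/robots.txt")
rp.read()  # fetch and parse the site's crawl policy

ua = "ExampleBot"  # one stable, declared identity, not a rotating one
for path in ("/", "/private/report.html"):
    allowed = rp.can_fetch(ua, f"https://example.com{path}")
    print(f"{ua} may fetch {path}: {allowed}")
```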

Society
How ChatGPT fueled delusional man who killed mom, himself in posh Conn. town
- A disturbed former Yahoo manager killed his mother and then himself after months of delusional interactions with his AI chatbot “best friend,” which fueled his paranoid belief that his mom was plotting against him.
- Three weeks after their final message, Greenwich police uncovered the gruesome murder-suicide scene in the posh tri-state suburb.
- OpenAI said it has reached out to investigators. “We are deeply saddened by this tragic event,” a company spokeswoman told The Post.

OpenAI Says It’s Scanning Users’ ChatGPT Conversations and Reporting Content to the Police
- OpenAI also quietly disclosed that it’s now scanning users’ messages for certain types of harmful content, escalating particularly worrying content to human staff for review — and, in some cases, reporting it to the cops.
- “When we detect users who are planning to harm others, we route their conversations to specialized pipelines where they are reviewed by a small team trained on our usage policies and who are authorized to take action, including banning accounts”
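The quote describes a classify-then-escalate flow. A toy sketch of that shape is below; the classifier, threshold, and queue are invented for illustration and say nothing about how OpenAI’s actual system works.

```python
# Toy classify-then-escalate pipeline matching the shape of the quoted
# description. The keyword "classifier", threshold, and queue are all
# invented for illustration; this is not OpenAI's implementation.
from dataclasses import dataclass

@dataclass
class Message:
    user_id: str
    text: str

def harm_score(msg: Message) -> float:
    """Stand-in for a real classifier; returns a risk score in [0, 1]."""
    keywords = ("plan to harm", "attack")  # toy heuristic only
    return 1.0 if any(k in msg.text.lower() for k in keywords) else 0.0

review_queue: list[Message] = []  # reviewed by a small trained team

def route(msg: Message, threshold: float = 0.8) -> str:
    if harm_score(msg) >= threshold:
        review_queue.append(msg)  # escalate to human review
        return "escalated"
    return "normal"

print(route(Message("u1", "What's the weather tomorrow?")))  # normal
print(route(Message("u2", "I plan to harm my neighbor")))    # escalated
```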

A Teen Was Suicidal. ChatGPT Was the Friend He Confided In
In an emailed statement, OpenAI, the company behind ChatGPT, wrote: “We are deeply saddened by Mr. Raine’s passing, and our thoughts are with his family. ChatGPT includes safeguards such as directing people to crisis helplines and referring them to real-world resources. While these safeguards work best in common, short exchanges, we’ve learned over time that they can sometimes become less reliable in long interactions where parts of the model’s safety training may degrade.”

AWS CEO says using AI to replace junior staff is the ‘dumbest thing I’ve ever heard’
Amazon Web Services CEO Matt Garman said he’s encountered business leaders who think AI tools “can replace all of our junior people in our company.”
That notion prompted the “dumbest thing I’ve ever heard” remark, which Garman justified by noting that junior staff are “probably the least expensive employees you have” and also the most engaged with AI tools.

AI is a Mass-Delusion Event
- The current era of generative AI is defined by a disorienting sense of confusion and a collective rush toward a future that feels hastily conceived. This sentiment is captured by bizarre events like the AI-powered “resurrection” of a murdered teenager for a news interview, a spectacle that highlights the strange intersection of grief and technology.
- While Silicon Valley promotes a narrative of imminent, world-changing super-intelligence, the reality is often characterized by flawed technology that elicits negative emotions and anxiety about the future.
- This uncritical adoption of AI is already fueling concerns over job displacement, the spread of misinformation, and the erosion of human cognition. The greatest risk is not a dramatic AI apocalypse but a future where society reorients itself around a technology that is just “good enough” to cause widespread harm without ever delivering on its grandest promises.
- Ultimately, the article cautions against this potential mass delusion, urging a more critical consideration of what is being sacrificed in the race for AI advancement.

Why ChatGPT Shouldn’t Be Your Therapist
Scientific American spoke with C. Vaile Wright, a licensed psychologist and senior director of the APA’s Office of Health Care Innovation, about how AI chatbots used for therapy could potentially be dangerous and whether it’s possible to engineer one that is reliably both helpful and safe.

OpenAI Announces That It’s Making GPT-5 More Sycophantic After User Backlash
- Beyond reinstating older models to paying subscribers, OpenAI continues to bow to the pressure, tweeting on Friday that it would be “making GPT-5 warmer and friendlier based on feedback that it felt too formal before.”
- “Changes are subtle, but ChatGPT should feel more approachable now,” it added.

Users Were So Addicted to GPT-4o That They Immediately Cajoled OpenAI Into Bringing It Back After It Got Killed
- OpenAI released its new GPT-5 model and announced that it would replace all of its previous models.
- Power users immediately started to beg CEO Sam Altman to bring back preceding models, often for a reason that had little to do with intelligence, artificial or otherwise: they were attached to it on an emotional level.
- Sam Altman’s thoughts on AI attachment



