OpenAI’s plans to bring its $500bn (£378bn) data centre programme to Britain have stalled, prompting fresh scrutiny of Sir Keir Starmer’s artificial intelligence (AI) push.

The ChatGPT developer announced in September that it would bring its flagship Stargate scheme to Britain by teaming up with UK data centre giant Nscale.

At the time, it unveiled plans to potentially house around 8,000 Nvidia AI processors at a data centre in Cobalt Park, Tyneside, during the first three months of 2026.

However, the project has yet to go live.

It is not clear why the data centre has been delayed, but commercial negotiations remain live. OpenAI declined to give an updated timeline for the facility.

Sam Altman, OpenAI’s chief executive, first revealed plans for its $500bn Stargate data centre investment programme in January 2025 during a White House press conference alongside Donald Trump.

This was followed by a pledge to expand the programme to build data centres around the world, including the UK.

In a government press release, Mr Altman said Stargate UK was part of a “shared vision that with the right infrastructure in place, AI can expand opportunity for people and businesses across the UK”.

This was championed by the Government, which has put AI at the heart of its growth push.

Meanwhile, OpenAI hired George Osborne, the former Conservative chancellor, to lead its international expansion.

However, in the US, talks over OpenAI’s data centre programme have progressed slowly with investors, including key backer SoftBank.

A plan to expand the capacity of a key site in Texas, which is still in development with US data giant Oracle, was also dropped earlier this year, according to Bloomberg.

While tech giants have revealed plans to spend hundreds of billions of dollars this year on data centres to meet demand for AI apps, many of their projects are facing delays.

Up to 50pc of large data centres have fallen behind schedule, according to an analysis by Sightline Climate, held up by planning problems or energy constraints.

Last week, The Telegraph reported that Nscale, a $15bn data centre business which features the former UK deputy prime minister Sir Nick Clegg on its board, had been forced to push back its timelines for another project in Loughton, Essex.

Tom Hegarty, a spokesman for the campaign group Foxglove, which has lobbied against a surge in data centre developments over their climate impact, said: “Sam Altman’s flagship Stargate UK project exists as little more than a colourful eight-month-old press release.”

A government spokesman said: “Our focus is on creating the right conditions for investment in the UK’s AI and data centre infrastructure, and we are working with OpenAI and other leading AI companies to strengthen UK compute capacity.”

OpenAI and Nscale declined to comment.
 
Microsoft is making sure its Copilot Terms of Use no longer steal the show.

The tech giant said it will update its user agreement after viral posts pointed out that its terms describe Copilot as being "for entertainment purposes only," a far cry from how Microsoft has sold its AI tool.

"The 'entertainment purposes' phrasing is legacy language from when Copilot originally launched as a search companion service in Bing," a Microsoft spokesperson said in a statement, first published by PCMag. "As the product has evolved, that language is no longer reflective of how Copilot is used today and will be altered with our next update."

In recent days, users on X have highlighted the terms, suggesting that Microsoft isn't confident in its flagship AI tool.

"Copilot is for entertainment purposes only," the terms read. "It can make mistakes, and it may not work as intended. Don't rely on Copilot for important advice. Use Copilot at your own risk."

Microsoft says Copilot isn't just 'for entertainment purposes' after its terms of service language goes viral
 
Take-Two has hardened its stance against AI use in video games, as the parent company of Rockstar Games lays off its entire internal AI team.

As the rest of the world embraces AI with apparently no thought to its wider financial, societal or ethical impact – or trivial things like whether it works reliably or not – the video game world has, in general, been surprisingly sceptical.

Some publishers have been as mindlessly enthusiastic (you can guess who before even clicking that link) as you’d expect, but others have been more cautious. In some cases, such as with Nintendo, it’s been a general opposition to the entire concept, while others seem more worried about fan backlashes – as happened with the recent unveiling of Nvidia’s much derided DLSS 5 technology.

GTA 6 publisher lays off its entire AI team after seven years
 
While browsing our website a few weeks ago, I stumbled upon “How and When the Memory Chip Shortage Will End” by Senior Editor Samuel K. Moore. His analysis focuses on the current DRAM shortage caused by AI hyperscalers’ ravenous appetite for memory, a major constraint on the speed at which large language models run. Moore provides a clear explanation of the shortage, particularly for high bandwidth memory (HBM).

As we and the rest of the tech media have documented, AI is a resource hog. AI electricity consumption could account for up to 12 percent of all U.S. power by 2028. Generative AI queries consumed 15 terawatt-hours in 2025 and are projected to consume 347 TWh by 2030. Water consumption for cooling AI data centers is predicted to double or even quadruple by 2028 compared to 2023.

But Moore’s reporting shines a light on an obscure corner of the AI boom. HBM is a particular type of memory product tailor-made to serve AI processors. Makers of those processors, notably Nvidia and AMD, are demanding more and more memory for each of their chips, driven by the needs and wants of firms like Google, Microsoft, OpenAI, and Anthropic, which are underwriting an unprecedented buildout of data centers. And some of these facilities are colossal: You can read about the engineering challenges of building Meta’s mind-boggling 5-gigawatt Hyperion site in Louisiana, in “What Will It Take to Build the World’s Largest Data Center?”

We realized that Moore’s HBM story was both important and unique, and so we decided to include it in this issue, with some updates since the original was published on 10 February. We paired it with a recent story by Contributing Editor Matthew S. Smith exploring how the memory-chip shortage is driving up the price of low-cost computers like the Raspberry Pi. The result is “AI Is a Memory Hog.”

The big question now is, When will the shortage end? Price pressure caused by AI hyperscaler demand on all kinds of consumer electronics is being masked by stubborn inflation combined with a perpetually shifting tariff regime, at least here in the United States. So I asked Moore what indicators he’s looking for that would signal an easing of the memory shortage.

“On the supply side, I’d say that if any of the big three HBM companies—Micron, Samsung, and SK Hynix—say that they are adjusting the schedule of the arrival of new production, that’d be an important signal,” Moore told me. “On the demand side, it will be interesting to see how tech companies adapt up and down the supply chain. Data centers might steer toward hardware that sacrifices some performance for less memory. Startups developing all sorts of products might pivot toward creative redesigns that use less memory. Constraints like shortages can lead to interesting technology solutions, so I’m looking forward to covering those.”

To be sure you don’t miss any of Moore’s analysis of this topic and to stay current on the entire spectrum of technology development, sign up for our weekly newsletter, Tech Alert.
 
Google Photos has started rolling out a new AI-powered editing feature called “AI Enhance” to Android users globally. The feature appears as a new button inside the photo editor.

The rollout is being carried out in phases, which means not all users may see the feature immediately on their devices.

Google Photos rolls out AI Enhance button globally for Android users

Google has launched a new tool aimed at people who write or transcribe. Known as Google AI Edge Eloquent, the app is free to download and works entirely offline, without any internet connectivity. It can automatically filter out filler words like 'ah' and 'um' to clean up the text, and it also offers multiple options for transforming the text.

The application is based on Google's Gemma-based automatic speech recognition (ASR) models. Users can see a live translation of their speech as they dictate, and the app filters and polishes the text whenever the user pauses. There are also options to transform the text into different formats, such as 'Key Points,' 'Formal,' 'Short,' and 'Long.'
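Filtering filler words out of a transcript is conceptually simple, even if Google's actual pipeline is unknown. The toy sketch below (the filler list and clean-up rules are my assumptions, not Google's) shows the basic idea: strip standalone disfluencies, then tidy the spacing and punctuation left behind.

```python
import re

# Assumed filler list for illustration; the app's actual set is not public.
FILLERS = ("ah", "um", "uh", "er", "hmm")

def strip_fillers(text: str) -> str:
    """Remove standalone filler words, then tidy leftover spacing/punctuation."""
    pattern = r"\b(?:%s)\b[,.]?" % "|".join(FILLERS)
    cleaned = re.sub(pattern, "", text, flags=re.IGNORECASE)
    cleaned = re.sub(r"\s{2,}", " ", cleaned)          # collapse doubled spaces
    cleaned = re.sub(r"\s+([,.!?])", r"\1", cleaned)   # no space before punctuation
    return cleaned.strip()
```

The word-boundary anchors (`\b`) matter: they stop the filter from mangling words like "here" or "humming" that merely contain a filler as a substring.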

Google brings AI dictation app that can polish your speech for free and without internet
 
Four-legged robot dogs powered by AI have been patrolling streets, apartment complexes, parking lots and construction sites in Atlanta in a move to bolster security. However, the addition of this automated security has raised concerns about law enforcement's growing use of private technology.

A video of one such surveillance robot dog walking in Atlanta has gone viral on social media, leaving onlookers in awe as they drive past it.

When someone honks, the robot stops immediately and looks at the person recording it, producing a noise that resembles a dog's bark. The robot dog then waves, something the car's occupants find adorable.

Robot police dogs powered by AI take over Atlanta's streets
 
Anthropic’s latest AI model could let hackers carry out attacks faster than ever. It wants companies to put up defenses first


Further to the above post.

Regards,

The security landscape is never going to be the same. AI labs are a strategic asset at this point.

-------

One funny thing I've been hearing is that people are using openclaw to hack their own Internet of Things devices (generally known for poor security) into a cohesive smart home.
 
Researchers led by Swedish scientist Almira Osmanovic Thunström created a completely fake eye condition called 'Bixonimania' and published papers describing it. The condition was attributed to a fictional author whose name translates to "The Lying Loser", a clue that the illness itself wasn’t real.

Despite this, popular AI chatbots, including Microsoft Copilot and Google Gemini, later picked it up and treated it as a genuine medical condition.

When questioned about Bixonimania, the chatbots discussed it as though it were a real illness, even speculating about potential causes like excessive blue light exposure.

Researchers made up a fake eye disease, and AI chatbots believed it
 
AI is getting quite good at mimicking human behaviour, then. Started April 1 1983, with quite a lot of people falling for it over the years:
Is there a direct connection between the link (I first found out about this spoof when our daughter [who majored in Ocean Sciences] related it to us when she was in High School [shout-out to the California Education System!]) and AI, or are you just using it as a parable? Honest question.
 
Is there a direct connection between the link [...] and AI
I was not thinking of such a link, I have forgotten where and when I picked up about DHMO. Sometime when I was reading biology?
AI implementations are now mimicking effects of some human traits that humans would do well without. Gullibility, jumping to conclusions, intellectual laziness.
If this replication was intentional - kudos for it being faithful to the original.
If unintentional - cue anguished cries from AI investors.

Should I be worried, or should I be ROFL? Both? <reaches for whisky bottle>
 
Ever since AI began advancing rapidly, data exposures and leaks have become everyday news, and the risks are real. Now, a newly surfaced cybersecurity report has flagged a potential data exposure risk tied to Google's Gemini integration in Android apps, including some big names such as the OYO Hotel Booking App, Google Pay for Business (50M+ installs), Taobao (50M+ installs), the apna Job Search App (50M+ installs) and ELSA Speak: AI English Learning (10M+ installs).

According to findings by CloudSEK, a commonly used Google API key, which was previously considered safe for client-side use, can gain elevated privileges once the Gemini API is enabled, potentially allowing unauthorised access to sensitive data and services.
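The finding described by CloudSEK boils down to a scoping problem: an API key shipped inside an app binary, intended only for benign client-side services, also becomes usable against the Gemini API once that API is enabled on the backing project. As a rough illustration of how such a key could be checked (this is not CloudSEK's tooling; the helper names are assumptions, though the Gemini model-listing endpoint is public), one could probe whether an extracted key is accepted:

```python
import urllib.error
import urllib.request

# Public ListModels endpoint of the Generative Language (Gemini) API.
GEMINI_MODELS_ENDPOINT = "https://generativelanguage.googleapis.com/v1beta/models"

def gemini_probe_url(api_key: str) -> str:
    """Build the ListModels URL; it returns HTTP 200 only for keys that
    are permitted to call the Gemini API on their project."""
    return f"{GEMINI_MODELS_ENDPOINT}?key={api_key}"

def key_can_use_gemini(api_key: str, timeout: float = 10.0) -> bool:
    """Hypothetical helper: probe an extracted key (makes a network call)."""
    try:
        with urllib.request.urlopen(gemini_probe_url(api_key), timeout=timeout) as resp:
            return resp.status == 200
    except urllib.error.HTTPError:
        return False  # 400/403 etc.: key rejected or Gemini API not enabled
```

This is also why the usual mitigation is to restrict such keys (by API, package name and signing certificate) or to proxy Gemini calls through a backend rather than embedding a broadly scoped key in the client.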

Gemini integration bug may put millions of Android users’ data at risk: All you should know
 
Tesco is trialling an innovative AI assistant aimed at transforming how customers plan meals and manage their grocery shopping.

The supermarket has initially granted early access to approximately 280,000 employees, who will test the new tool, which is linked to their Clubcard purchase data.

Over the coming weeks, these colleagues will evaluate the assistant, providing crucial feedback and even suggesting names, before its anticipated launch to all customers later this year.

This internal testing phase is vital for refining the technology.

The AI assistant aims to simplify meal planning by offering a "natural, two-way dialogue to offer inspiration in the form of personalised recipe ideas”, while also considering individual dietary preferences.

Supermarket giant launches AI assistant trial
 
Came across this on the socials. The notable obsequiousness of LLMs isn't emergent, I think, but part of an attempted business strategy. AI toots many a horn for different reasons and sometimes the results have a distinct odor; unless the references to "atmosphere" and "bedroom/DIY texture" aren't as unironic as I take them to be.

chat_fart.jpg
 
For composites

Accuracy
View: https://m.youtube.com/shorts/ZLp_60vJsMQ


Risk

East is East and West is West

AI moods

Meatwear

Social use

Fakebuster

Would you just look at what that bot is wearing?
 
Texas Sees Power Demand Quadrupling by 2032 on Data Center Boom


Not sure how that will go. In the '70s, Texas decided not to be on the national grid; the reasoning was to avoid federal oversight and regulations.

After 4 Years And Billions Of Dollars, The Texas Grid Is Not Fixed


Regards,
 
A review...

Grayson Perry Has Seen the Future (Channel 4), and he doesn't like it. After watching this programme, neither do I. AI is going to transform humanity, and it's in the hands of people who seem oddly blasé about the consequences.

In this first episode of a new series about AI technology, Perry went to San Francisco. There he met Mustafa Suleyman, the British chief executive of Microsoft AI. Perry didn't go into his background, but Suleyman, the son of a taxi driver and a nurse, grew up in a rough part of Islington before going to Oxford and moving to California. He co-founded DeepMind and is extremely rich.

AI is coming for your job, your partner and (apparently) even your god
 
For a company that has long positioned itself as a privacy-first alternative in the AI race, Anthropic has taken a step that is likely to test that reputation.

The firm has begun rolling out identity verification requirements for its chatbot Claude, asking some users to provide a government-issued photo ID and, in certain cases, a live selfie to access parts of the platform.

The change, introduced quietly via an update to its help centre this week, applies only to select scenarios for now. Anthropic says users may encounter verification prompts when accessing “certain capabilities”, during routine platform integrity checks, or as part of broader safety and compliance measures. It has not specified which features are affected or what triggers the checks.

Anthropic introduces identity checks that could require Claude users to submit ID and live selfie for access
 
Allbirds isn't only pivoting to AI — it's pivoting away from its core environmental principles.

Once dubbed the favorite shoemaker of Silicon Valley, Allbirds announced Wednesday it was transitioning from footwear to AI compute infrastructure and becoming NewBird AI.

The company said it planned to buy GPUs, or powerful chips, and become a GPU-as-a-service company. It's also selling off its footwear assets and its original name, meaning Allbirds-branded shoes could continue to be made under new ownership.

The company partially answered questions about what that might mean for Allbirds' status as a company committed to ESG — environmental, social, and governance principles — in a filing with the Securities and Exchange Commission.

Allbirds is ditching years of clean and green street cred
 
