The ABC fact-checked an AI scam. Here's what we found
As artificial intelligence becomes more prevalent in modern society, scammers are using the technology to produce highly convincing scams that leave even the experts second-guessing.

An older story, but the audio clip is definitely worth listening to...
ChatGPT unexpectedly began speaking in a user’s cloned voice during testing
AI Snake Oil: What Artificial Intelligence Can Do, What It Can't, and How to Tell the Difference
On April 17, 2025, the MIT Shaping the Future of Work Initiative and the MIT Schwarzman College of Computing welcomed Arvind Narayanan, Professor of Computer Science at Princeton University, to discuss his latest book, "AI Snake Oil: What Artificial Intelligence Can Do, What It Can’t, and How to Tell the Difference," co-authored with Sayash Kapoor.
“Our research indicates significant complexity required for deployments of Nvidia systems in comparison to traditional data centers—cooling, configuration and orchestration challenges throughout the supply chain,” Goldberg wrote in a research report.
The analyst noted that all of Nvidia’s largest customers are also looking to design their own chips.
If you're a community college student in California, there's a chance that at least one of your fellow students is actually an AI bot robbing taxpayers. Recent data from the California Community Colleges Chancellor's Office suggest that these bots have stolen more than $10 million in federal financial aid and upward of $3 million in state aid between March 2023 and March 2024.
Artificial intelligence models have long struggled with hallucinations, a conveniently elegant term the industry uses to denote fabrications that large language models often serve up as fact.
And judging by the trajectory of the latest "reasoning" models, which the likes of Google and OpenAI have designed to "think" through a problem before answering, the problem is getting worse — not better.
As the New York Times reports, as AI models become more powerful, they're also becoming more prone to hallucinating, not less. It's an inconvenient truth as users continue to flock to AI chatbots like OpenAI's ChatGPT for a growing array of tasks. When chatbots spew out dubious claims, all those users risk embarrassing themselves, or worse.
Worst of all, AI companies are struggling to nail down why exactly chatbots are generating more errors than before — a struggle that highlights the head-scratching fact that even AI's creators don't quite understand how the tech actually works.
OpenAI, the parent of artificial intelligence service ChatGPT, has announced a new governance plan after a bitter power struggle over the business.
Boss Sam Altman said OpenAI would remain under the control of its non-profit board, while its for-profit arm becomes what is known in the US as a public benefit corporation.
Mr Altman had put forward a similar plan in December - but without clarifying the control of the non-profit.
The update follows widespread scrutiny of the startup, which began as a non-profit and faced criticism, including from co-founder Elon Musk, that its quest for growth is pushing it to stray from its original mission of creating technology for the benefit of humanity.