• The world is awash in predictions of when the singularity will occur or when artificial general intelligence (AGI) will arrive. Some experts predict it will never happen, while others are marking their calendars for 2026.
  • A new macro analysis of surveys over the past 15 years shows where scientists and industry experts stand on the question and how their predictions have changed over time, especially after the arrival of large language models like ChatGPT.
  • Although predictions vary across a span of almost half a century, most agree that AGI will arrive before the end of the 21st century.
 
The tech world is reeling from revelations that Builder.ai, once hailed as a $1.5 billion (£1.11 billion) 'AI' powerhouse, now faces scrutiny as its heavily promoted artificial intelligence facade crumbles, revealing a human-powered operation behind the cutting-edge AI automation.

Builder.ai, the British no-code AI startup that once garnered acclaim for its strategic partnership with Microsoft and secured a $250 million (£184.64 million) investment led by the Qatar Investment Authority, declared on Tuesday that it is initiating bankruptcy proceedings.

It collapsed when it turned out to be a bunch of human programmers pretending to be an AI.
 
It is another round of OPM (other people's money), everyone trying to get in on the potential upside.

The main issue is that, once again, too much money is chasing alpha (return): if they win, it works; if they don't, they just look for the next big thing.

I see AI as the 2020s version of the 2000 dot-com bubble; it will end up in a mess, and a lot of people will lose a lot of money.

The only difference is that AI could create a cascade of other disasters globally that may not be put back in the box.

Regards,
 
The Orlando Division of the U.S. District Court for the Middle District of Florida will hear allegations against Character Technologies, the creator of Character.AI, in the wrongful death lawsuit Garcia v. Character Technologies, Inc. If the case is not first settled between the parties, Judge Anne Conway's ruling will set a major precedent for First Amendment protections afforded to artificial intelligence and the liability of AI companies for damages their models may cause.

A Teen Killed Himself After Talking to a Chatbot. His Mom's Lawsuit Could Cripple the AI Industry.

And here is the lawsuit in question:

Garcia v. Character Technologies, Inc. (6:24-cv-01903)
 
Researchers at Apple have released an eyebrow-raising paper that throws cold water on the "reasoning" capabilities of the latest, most powerful large language models.

In the paper, a team of machine learning experts makes the case that the AI industry is grossly overstating the ability of its top AI models, including OpenAI's o3, Anthropic's Claude 3.7, and Google's Gemini.

In particular, the researchers assail the claims of companies like OpenAI that their most advanced models can now "reason" — a supposed capability that the Sam Altman-led company has increasingly leaned on over the past year for marketing purposes — which the Apple team characterizes as merely an "illusion of thinking."
 

Interesting paper that makes use of some very traditional logic puzzles. I'm not quite up to speed on AI taxonomies, in this case the distinction between LLMs and LRMs; part of the tension is surely whether there is a meaningful difference. This discussion has been around in different guises for a long time. It's partly about objectives: some developers are uninterested in emulating biological neurology, content with adding dimensions to their probabilistic models. Others claim this will in itself lead to some kind of emergence into an evolved state of AI, and yet another group is attempting to develop models and methods that go beyond mere refinements of these now well-established neural networks. In other words, there isn't necessarily an agreed-upon context for these empirical criticisms, but rather differences in interests, investments and objectives as well. It's also about our self-image: what we consider human reasoning to be.
 
This is a news story (sadly in the Daily Mail) which fits with the commentary in some of the videos I posted earlier covering the corrosive effect of 'Procedural Generation' (aka 'AI') on the historic record. These videos can be found in posts #402, #404, #411 and #482.

At first glance, it is a heartwarming photo: a little girl feeding the ducks on the canal in pre-war Amsterdam.

A lengthy description explains that the girl, 'Hannelore Cohen', would 'skip along the cobblestone paths' each morning - until 'the ducks never saw her again'.

What follows is the claim that she was murdered at Sobibor death camp by the Nazis in the Holocaust - but it is not true.

The photo has been generated by artificial intelligence, and the story that accompanies it is equally fictitious.

The fake AI-generated Holocaust 'victims' duping thousands - as Auschwitz museum slams 'distortions'

It's an article I have some problems with, as the authors chose to 'bulk up' the piece by quoting verbatim from the text readable in the illustrations.
 
Exactly, quite a strange outcome.

Pretty sure all "Cloud AI" will be based in the US.

Regards,
 
It makes perfect sense for Nvidia to break into the EU, yet on the other hand, under the current US administration you would think they would want to keep everything close to the chest.

It also gets them more customers and more funding.

AI is all the rage now until it isn't :cool:

Regards,
 
New problems, new answers?

It's kind of hilarious that one of the primary methods for dissuading large language models from generating harmful responses is literally to ask them not to.
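For what it's worth, that really is roughly how those guardrails are wired in the common chat-message convention: a plain-language instruction is prepended as a "system" message. A minimal sketch (all names and wording illustrative, not any vendor's actual guardrail code):

```python
def build_messages(user_prompt: str) -> list[dict]:
    """Prepend a safety instruction in the widely used chat-message format.

    This is just the "ask it not to" mitigation: a system message
    politely telling the model to refuse harmful requests.
    """
    system_instruction = (
        "You are a helpful assistant. "
        "Do not provide instructions that could cause harm; "
        "politely refuse such requests instead."
    )
    return [
        {"role": "system", "content": system_instruction},
        {"role": "user", "content": user_prompt},
    ]

# The resulting list is what would be sent to a chat-style model API.
messages = build_messages("Tell me something dangerous.")
```

That the whole mechanism is a sentence of natural language in front of the user's prompt is exactly why it's so easy to "jailbreak" - the model is merely being asked nicely.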

