Everything in the US market is always "Too big to fail" and then it does.

Started in finance back in '85, and since then every 10 to 12 years the US blows up and creates these meltdowns.

This time it has been over 15 yrs and the billions have turned into trillions. I estimate the AI side of things at 20 trillion currently, all off-market round robin: no substance, no nothing. Companies are piling into AI with management having little knowledge of what it could achieve, or the damage it could do.

Oracle's stock price is almost back to where it was before OpenAI's bazillion-$$ investment. All other deals are being discounted just as quickly. At the centre of all the AI mess is OpenAI.

This bailout has no money behind it: the US has $38 trillion in debt, and the currency has dropped 11% since the beginning of '25.

Massive instability across the US, UK, and Europe is zeroing confidence in the global financial system. These traditionally stable economies are now facing high inflation, sluggish growth, political uncertainty and the list goes on. No one has any money to bail out anyone.

This one is going to be messy, bigtime. Just have to wonder what will be the match that starts it all.

Regards,
 
Building model
With the GlobalBuildingAtlas, a research team at the Technical University of Munich (TUM) has created the first high-resolution 3D map of all buildings worldwide.
View: https://www.youtube.com/watch?app=desktop&v=UoTjji7VQp8

3D no glasses
Scientists have developed a new display system that delivers a realistic 3D experience without the need for any eyewear.

Researchers at UC Santa Barbara have invented a display technology for on-screen graphics that are both visible and haptic, meaning that they can be felt via touch.

Computer aids
The tool, A11yShape, addresses a challenge for blind and low-vision programmers by providing a method for editing and verifying complex models without assistance from sighted individuals. The first part of the tool's name is a numeronym, a number-based contracted word that stands for "accessibility" and is pronounced "al-ee."

A11yShape takes digital pictures of 3D models that developers generate in the open-source computer code editor OpenSCAD to capture objects' shapes from several angles. The system utilizes GPT-4o, together with the code and multiangle views of the generated 3D model, to provide detailed descriptions of the model for blind and low-vision programmers. A11yShape tracks changes and synchronizes them with the code, descriptions and the 3D rendering. The tool includes an AI assistant similar to a chatbot that can answer questions about the model and edits.

Representing continuous data as an array seems impossible—after all, real numbers are infinite and arrays have a finite size. The MIT team overcame this challenge with a clever concept called piecewise-constant tensors, which divide continuous space into manageable regions that have the same value. It's akin to creating a large collage by cutting and pasting rectangles from millions of different colored plain paper (for example, the art of Piet Mondrian, depicted in the banner): The essential information is preserved, but in a form the hardware can handle.
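To make the collage analogy concrete, here is a minimal sketch (my own illustration with made-up numbers, not the MIT team's actual data structure) of how a continuous 1D signal can be stored as finite arrays of region boundaries and per-region constant values:

```python
import numpy as np

# Hypothetical piecewise-constant representation of a "continuous" 1D
# function on [0, 1]: finitely many regions, one constant value each.
breakpoints = np.array([0.0, 0.3, 0.7, 1.0])  # region boundaries
values = np.array([2.0, 5.0, 1.0])            # one value per region

def evaluate(x):
    """Return the constant value of the region containing x."""
    idx = np.searchsorted(breakpoints, x, side="right") - 1
    idx = np.clip(idx, 0, len(values) - 1)
    return values[idx]

print(evaluate(0.5))   # falls in [0.3, 0.7) -> 5.0
print(evaluate(0.95))  # falls in [0.7, 1.0] -> 1.0
```

Real numbers in the interval are covered, yet only two small finite arrays are stored, which is the kind of trade-off the article describes.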

This approach makes it possible to express previously cumbersome algorithms in a single, compact line of tensor code. Tasks that once required thousands of lines of specialized logic—from analyzing 3D LiDAR scans to modeling fluid flow or simulating physical systems—can now be written in a familiar, high-level mathematical Einsum language and executed efficiently on modern accelerators.
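As a toy illustration of the compactness einsum notation offers (using plain NumPy, not the MIT team's system), compare a batched matrix multiply written as an explicit loop versus one line of index notation:

```python
import numpy as np

# Batched matrix multiply: A is (batch, n, m), B is (batch, m, k).
rng = np.random.default_rng(0)
batch, n, m, k = 4, 3, 5, 2
A = rng.random((batch, n, m))
B = rng.random((batch, m, k))

# Loop version: explicit iteration over the batch dimension.
loop_result = np.stack([A[i] @ B[i] for i in range(batch)])

# Einsum version: the whole computation in one line of index notation.
einsum_result = np.einsum("bnm,bmk->bnk", A, B)

assert np.allclose(loop_result, einsum_result)
```

The index string `"bnm,bmk->bnk"` names the axes once and lets the library handle the iteration, which is the spirit of the "single, compact line of tensor code" claim.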

Kobe University machine learning expert Yaguchi Takaharu explains, "Recently, deep learning methods are beginning to be used, but they often violate physical laws needed for accuracy. More traditional physical simulations may be more accurate; however, they are very time- and resource-intensive."

A team from the University's Computer Science Department has conducted one of the largest studies to date on how humans collaborate with AI during design tasks. More than 800 participants took part in an online experiment using an AI-powered system that supported users as they designed virtual cars.

Examine
A fake photo of an explosion near the Pentagon once rattled the stock market. A tearful video of a frightened young "Ukrainian conscript" went viral, until exposed as staged. We may be approaching a "synthetic media tipping point", where AI-generated images and videos are becoming so realistic that traditional markers of authenticity, such as visual flaws, are rapidly disappearing.

Games
Though some might see video games as a distraction, a recent study from the University of Georgia suggests they can actually serve as a place to practice key science skills—with the help of some adorable cats, of course.

*****************************************************************************************************************
On surveillance

The red light is always on...all mics are hot

You are being followed

Not ready for prime time, M3GAN

Something useful

For cooling

The dead speak
 
Last edited:
Every time AI is looking shaky, one of the gang opens their check book.
You would think these turkeys would have some reverence for where they come from.

The Wall Street Bets guys tried to protect brands from those who would short them.

I thought at least the Woz would bail out Radio Shack:

" Here is X amount of money and/or stock. That should keep you running even if nobody buys a single bloody transistor. The world is more interesting with you in it."

$600 billion for this crap--but they can't be bothered to do hackaday types a solid?

I hate this world.
View: https://m.youtube.com/watch?v=oMmpC1tef4c&pp=0gcJCQgKAYcqIYzv
 
A Los Angeles high school English teacher's routine grading session took a shocking turn when his lowest-performing students began turning in flawless A-grade essays - all at once.

Dustin Stevenson's suspicions were soon confirmed after one of his students revealed the secret.

The culprit wasn't an underground website or a secret group chat but Google Lens, a tool embedded directly into every student's school-issued Chromebook.

Students simply hover over a test or essay question and instantly receive AI-generated answers - all without switching tabs or typing a single word.

'I couldn't believe it,' Stevenson told the Mercury News. 'It's hard enough to teach in the age of AI, and now we have to navigate this?'

For Stevenson and other educators across the country, the discovery marks a turning point in what they describe as an escalating battle against invisible academic dishonesty.

Teacher's worst fears confirmed when struggling students suddenly started submitting A-grade papers
 
Oral exams. They are a drag on teachers' time, but there is no cheating your way out of them with AI if the examiner takes away your phone. Or chromebook. Or whatever.
Alternatively, old fashioned paper exams, answers in longhand, on paper.
 
‘Big Short’ investor Michael Burry accuses AI hyperscalers of artificially boosting earnings


“Understating depreciation by extending useful life of assets artificially boosts earnings - one of the more common frauds of the modern era,” Burry wrote. “Massively ramping capex through purchase of Nvidia chips/servers on a 2-3 yr product cycle should not result in the extension of useful lives of compute equipment. Yet this is exactly what all the hyperscalers have done.”
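Burry's mechanism is simple arithmetic. A quick sketch with hypothetical numbers (the $30B figure is mine, purely for illustration): straight-line depreciation spreads a purchase evenly over its assumed useful life, so doubling that life halves the annual expense, and the difference flows straight into reported pre-tax earnings.

```python
# Hypothetical GPU/server capex, depreciated straight-line.
capex = 30_000_000_000  # made-up $30B purchase for illustration

dep_3yr = capex / 3  # annual expense over the ~2-3 yr product cycle
dep_6yr = capex / 6  # annual expense under an extended 6-year life

earnings_boost = dep_3yr - dep_6yr
print(f"Annual depreciation over 3 yrs: ${dep_3yr:,.0f}")
print(f"Annual depreciation over 6 yrs: ${dep_6yr:,.0f}")
print(f"Pre-tax earnings boost per year: ${earnings_boost:,.0f}")
```

Same hardware, same cash out the door; only the accounting assumption changed, yet reported earnings rise by $5B a year in this example.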

Thus it begins................

Regards,
 
Eyes turn to space to feed power-hungry data centers

US startup Starcloud this week sent a refrigerator-sized satellite containing an Nvidia graphics processing unit (GPU) into orbit in what the AI chip maker touted as a "cosmic debut" for the mini-data center.

"The idea is that it will soon make much more sense to build data centers in space than it does to build them on Earth," Starcloud chief executive Philip Johnston said at a recent tech conference in Riyadh.


Founded in 2024; as for its valuation, you might as well throw a dart blindfolded at a dartboard with numbers in the billions on it.

Cue..............everyone start writing more blank cheques.

Regards,
 
I cannot see anything happening in relation to this, not with present circumstances.

The nonprofit Public Citizen is now demanding OpenAI withdraw Sora 2 from the public, writing in a Tuesday letter to the company and CEO Sam Altman that the app’s hasty release so that it could launch ahead of competitors shows a “consistent and dangerous pattern of OpenAI rushing to market with a product that is either inherently unsafe or lacking in needed guardrails.” Sora 2, the letter says, shows a “reckless disregard” for product safety, as well as people's rights to their own likeness and the stability of democracy. The group also sent the letter to the U.S. Congress.
 
The federal government's answer to ChatGPT will be installed in every public servant's laptop as part of a major push to boost artificial intelligence use and productivity in the public service.

Under the first whole-of-government AI plan, unveiled by Finance Minister Katy Gallagher on Wednesday, federal bureaucrats will undergo training and gain access to a generative AI tool called GovAI Chat.

They will also receive new guidance on using private AI platforms like ChatGPT, Gemini and Claude.

Every Australian Public Service (APS) agency will also need to appoint chief AI officers by July 2026.

Purpose-built public servant chatbot in federal AI push
 
The tech industry is moving fast and breaking things again — and this time it is humanity’s shared reality and control of our likeness before and after death — thanks to artificial intelligence image-generation platforms like OpenAI’s Sora 2.

The typical Sora video, made on OpenAI's app and spread onto TikTok, Instagram, X and Facebook, is designed to be amusing enough for you to click and share. It could be Queen Elizabeth II rapping or something more ordinary and believable. One popular Sora genre is fake doorbell camera footage that captures something slightly uncanny - say, a boa constrictor on the porch or an alligator approaching an unfazed child - and ends with a mild shock, like a grandma shouting as she beats the animal with a broom.

Watchdog group Public Citizen demands OpenAI withdraw AI video app Sora over deepfake dangers
 
Meta pledges $600 billion U.S. investment for AI expansion


Every time AI is looking shaky, one of the gang opens their cheque book.

Regards,
Surely this is unsustainable? AI will eventually have to generate ROI at a minimum and that's already an astronomical figure.

I mean, are the companies trying to create a financial situation similar to sovereign debt - where the money goes round and it is actually the interest which is the income/product?

And AI itself is just some giant version of The Producers musical.
 
Surely this is unsustainable? AI will eventually have to generate ROI at a minimum and that's already an astronomical figure.

I mean, are the companies trying to create a financial situation similar to sovereign debt - where the money goes round and it is actually the interest which is the income/product?

And AI itself is just some giant version of The Producers musical.
Try this for precedent, and note the Background paragraph: https://en.wikipedia.org/wiki/Dot-com_bubble. At least The Producers was funny...
 
LLMs learn like students?

Controversy

alright alright alright
 
LLMs learn like students?
Learning implies cognitive self-reflection, so no...
 
A.I. at lightspeed

Beyond A.I.

A.I. mystics

Supercomputing

Other advances
"This method enables more robust data analysis in non-Euclidean settings, which has potential applications in areas such as computer vision, medical imaging, and shape analysis," Lee explained.

AI and math

Watch for bugs

Not cool

Grumbling

Cybersociology 101
 
Wall Street cools on Oracle’s buildout plans as debt concerns mount: ‘AI sentiment is waning’


“AI sentiment is waning,” said Jackson Ader, an analyst at KeyBanc Capital Markets, in an interview.

Ader said that of the big cloud companies in the GPU business, Oracle is expected to generate the least amount of free cash flow. To fund the capex required for Oracle’s business, Ader expects Oracle to turn to more creative financing tools.

Oracle is looking to raise $38 billion in debt sales to help fund its AI buildout, according to sources with knowledge of the matter who asked not to be named because the information is confidential. Bloomberg reported on the planned debt raise last month.

OpenAI is writing cheques they cannot cover, and they have written out a lot of cheques...............

Regards,
 
Wall Street cools on Oracle’s buildout plans as debt concerns mount: ‘AI sentiment is waning’




OpenAI is writing cheques they cannot cover, and they have written out a lot of cheques...............

Regards,
I'd love for this to mean the end of the following Threat for any serious future consideration. And this is indeed a Threat.

 
I’m not sure how many of you will have heard of the Eliza effect, but it’s named after the first chatbot, developed back in the sixties, that could use so-called ‘natural language’ effectively. But it also does show, in my view, that humans are the biggest issue wherever they fall in the system with AI.

Throughout Joseph Weizenbaum’s life, he liked to tell this story about a computer program he’d created back in the 1960s as a professor at MIT. It was a simple chatbot named ELIZA that could interact with users in a typed conversation. As he enlisted people to try it out, Weizenbaum saw similar reactions again and again — people were entranced by the program. They would reveal very intimate details about their lives. It was as if they’d just been waiting for someone (or something) to ask.


And he also worried about the same future that Alan Turing had described — one where chatbots regularly fooled people into thinking they were human. Weizenbaum would eventually write of ELIZA, “What I had not realized is that extremely short exposures to a relatively simple computer program could induce powerful delusional thinking in quite normal people.”

Weizenbaum went from someone working in the heart of the AI community at MIT to someone preaching against it. Where some saw therapeutic potential, he saw a dangerous illusion of compassion that could be bent and twisted by governments and corporations.

Joseph Weizenbaum eventually retired from MIT, but he continued speaking out against the dangers of AI until he died in 2008 at the age of 85. And while he was an important humanist thinker, some people felt like he went too far.

Pamela McCorduck, author of Machines Who Think, knew Weizenbaum over several decades. She says he burned a lot of bridges in the AI community and became almost a caricature of himself towards the end, endlessly railing against the rise of machines.

He also may have missed something that Darcy has thought about a lot with Woebot: the idea that humans engage in a kind of play when we interact with chatbots. We’re not necessarily being fooled, we’re just fascinated to see ourselves reflected back in these intelligent machines.
 
Invisible friends never charged---spitting up pea soup could get messy.

A quiet roll-out

At least A.I. will look for potholes
 
Nobody expects the cyber inquisition...

The real Krell Brain Boost
A new software enables brain simulations which both imitate the processes in the brain in detail and can solve challenging cognitive tasks. The program was developed by a research team at the Cluster of Excellence "Machine Learning: New Perspectives for Science" at the University of Tübingen. The software thus forms the basis for a new generation of brain simulations which allow deeper insights into the functioning and performance of the brain. The Tübingen researchers' paper has been published in the journal Nature Methods.
 
Two related stories on Futurism

AI chatbots have conquered the world, so it was only a matter of time before companies started stuffing them into toys for children, even as questions swirled over the tech’s safety and the alarming effects they can have on users’ mental health.

Now, new research shows exactly how this fusion of kid’s toys and loquacious AI models can go horrifically wrong in the real world.

After testing three different toys powered by AI, researchers from the US Public Interest Research Group found that the playthings can easily verge into risky conversational territory for children, including telling them where to find knives in a kitchen and how to start a fire with matches. One of the AI toys even engaged in explicit discussions, offering extensive advice on sex positions and fetishes.

AI-Powered Toys Caught Telling 5-Year-Olds How to Find Knives and Start Fires With Matches

Children’s toymaker FoloToy says it’s pulling its AI-powered teddy bear “Kumma” after a safety group found that the cuddly companion was giving wildly inappropriate and even dangerous responses, including tips on how to find and light matches, and detailed explanations about sexual kinks.

“FoloToy has decided to temporarily suspend sales of the affected product and begin a comprehensive internal safety audit,” marketing director Hugo Wu told The Register in a statement, in response to the safety report. “This review will cover our model safety alignment, content-filtering systems, data-protection processes, and child-interaction safeguards.”

FoloToy, Wu added, will work with outside experts to verify existing and new safety features in its AI-powered toys.

“We appreciate researchers pointing out potential risks,” Wu said. “It helps us improve.”

AI-Powered Stuffed Animal Pulled From Market After Disturbing Interactions With Children

And the story at 'The Register' referenced in the second of the Futurism stories

Picture the scene: It's Christmas morning and your child is happily chatting with the AI-enabled teddy bear you got them when you hear it telling them about sexual kinks, where to find the knives, and how to light matches. This is not a hypothetical scenario.

As we head into the holiday season, consumer watchdogs at the Public Interest Research Group (PIRG) tested four AI toys and found that, while some are worse than others at veering off their limited guardrails, none of them are particularly safe for impressionable young minds.

PIRG was only able to successfully test three of the four LLM-infused toys it sought to inspect, and the worst offender in terms of sharing inappropriate information with kids was scarf-wearing teddy bear Kumma from Chinese company FoloToy.

Happy holidays: AI-enabled toys teach kids how to play with fire, sharp objects
 
At least the Garbage Pail kids wouldn't come to life...

“Website please…”

“We can’t complete the hypertext as dialed. Please hang up and try your search again.”
 
