Artificial intelligence is killing pupils' ability to think critically and stopping them from learning skills such as spelling, research has found.

A survey of 9,000 teachers found 66 per cent of those in secondary schools thought critical thinking had declined because of AI tools such as ChatGPT.

The problem is even affecting younger children, with 28 per cent of those teaching in primary schools saying the same.

The poll, by the National Education Union (NEU), comes amid a debate over how much pupils should be allowed to use AI in their schoolwork.

One respondent told researchers: 'Children no longer feel the need to spell as voice-to-text replaces knowledge.'

AI is killing pupils' critical thinking - and many now can't be bothered to learn how to spell
 
The federal court of Australia has warned the legal profession about the dangers of using generative artificial intelligence in legal proceedings, issuing new rules for its use, with potential financial or legal consequences if AI errors frustrate court cases.

Amid an explosion in court filings in Australia and across the globe found to have included false citations generated by AI, the federal court on Thursday issued a new practice note on how the technology can be used in court cases.

The chief justice of the federal court, Debra Mortimer, said the presentation of false or inaccurate information to the court is “unacceptable”.

“It is inconsistent with the responsibility on all persons to not mislead the court or other parties,” she said in the note.

Australian federal court warns lawyers over 'unacceptable' use of AI
 
A neuroscience professor claims to have developed an AI algorithm that endows humans with “perfect and infinite memory”.

Gabriel Kreiman, who researches artificial intelligence and neuroscience at Harvard Medical School, launched a startup last month in the hope of commercialising technology that he says will transform people’s cognitive capabilities.

He describes his work as a “fight against oblivion”, allowing memories to be stored indefinitely.

The idea is to use something called “large memory models” – a play on the large language model (LLM) coinage used for AI tools like ChatGPT – in order to retrieve data from a person’s digital life.

AI startup offers humans 'perfect and infinite memory'
 
Have you ever wondered how deeply AI has become part of our lives? People tell their deepest thoughts to chatbots like Claude, ChatGPT, Gemini, and more. But your conversations might not be private, so you should think twice before you speak. In light of recent incidents, lawyers in the US are urging caution: they believe your chatbot might not keep your secrets as well as you think.

Think twice before sharing secrets with ChatGPT, Claude: Here's why
 
It seems almost all AI legislation includes age verification. Of course, this isn't really about protecting children; it's about conclusively tying every online interaction to a specific user (effectively an online ID).

First, there's the administration's AI policy, which includes preemption of state regulation of AI in addition to the age-verification requirement (Senators Cruz and Thune are planning to draft legislation codifying Trump's policy).

Then there's the TRUMP AMERICA AI Act proposed by Senator Blackburn, which includes everything in Trump's policy (preemption of state AI regulation, age verification) as well as the abolition of Section 230.

Lastly, there's H.R. 7218 which includes... you guessed it: Age Verification.
 
OpenAI’s chief financial officer, Sarah Friar, is leveraging the company’s own ChatGPT chatbot for both personal and professional tasks, from generating a tilapia recipe for dinner to streamlining her workday by summarising emails and Slack messages. This dual application underscores a strategic pivot for OpenAI, with executives, including Ms Friar, increasingly focusing on business-oriented AI products as a route to profitability, moving away from certain consumer offerings.

The company is poised to unveil a new artificial intelligence model specifically designed for "high-value professional work". This launch comes amid intensifying competition with rival firm Anthropic, as both vie to attract corporate clients to integrate AI assistants into their operations.

ChatGPT maker OpenAI shifts its focus to business users amid pressure
 
California-based Google has rolled out a new update to Gemini that can now offer more personalised images. The feature is powered by the platform's popular Nano Banana AI model, which helps users generate visuals based on their choices and preferences without typing detailed, repetitive prompts. In simple words, you will no longer be required to explain your preferences to Gemini multiple times: it can now understand your interests and work accordingly. For example, a basic prompt to 'design my home' may now include elements like a swimming pool or a gym, based on your preferences.

How Gemini's Nano Banana will use other Google apps for context

The feature works by analysing data from connected Google apps like Google Photos and Gmail. This will help the AI understand you on a more personal level, including your activities, interests, and even labels like 'Family.' For instance, you can ask Gemini to generate an image of your family doing your favourite activity. It may draw on available context, such as a recent email with your family about an outing, and produce results based on that.

Google Gemini now creates images based on your likes without detailed prompts
 
Good morning. Not every CEO will have a book written about them. But if they do, what should they try to get out of it? For Demis Hassabis that moment has arrived with the publication of The Infinity Machine, the new biography written by Sebastian Mallaby (author of More Money Than God on hedge funds and The Man Who Knew, the biography of Alan Greenspan).

Hassabis, co-founder of DeepMind and Isomorphic Labs, knows that the book changes his relationship with the public. “I am a pretty private guy,” he said at a launch event in London this week. The 1,000-seater venue was sold out, filled with a mix of young people keen to know about the future of work and older generations concerned that artificial intelligence will upend the world as we know it.

I was there, alongside the academics and senior technology executives, to listen to one of the few Tech Gods to work outside the hothouse of the U.S. and, more specifically, Silicon Valley.

Forget the chatbot wars. Google DeepMind CEO Demis Hassabis is thinking about something far bigger
 
Artificial intelligence-enabled robot companions, personal behaviour monitors and pain management apps could soon be used more widely within the Australian aged care and retirement sector, industry stakeholders say.

Australia is on the verge of an AI boom in the sector that could improve the quality of life of older Australians and combat loneliness, according to some experts.

But others, including the office of Australia's eSafety Commissioner Julie Inman Grant, have voiced concerns about the ethics of unregulated use of AI and the potential for the technology to cause negative behavioural changes for some people.

While some South-East Asian nations and the United States have embraced the technology in aged care, the experimental use of AI-based devices in Australia is in its infancy.

The federal government is still exploring how it can be used safely and effectively to improve care for older people.

Australia on the verge of an aged care AI boom but experts warn of high risks
 
 
Amazon to invest up to another $25 billion in Anthropic as part of AI infrastructure deal
Bezos is just hoping their Mythos A.I. will hack SpaceX servers so he can learn how to build an upper stage that actually works.

Unknown to him, Elon actually keeps all of his rocket secrets written down in an old recipe book at his Mom's.

Boeing's also ahead of Amazon--but instead of working with Anthropic as I suggested, they just hired a washed up noir detective to follow Grimes around.


Can someone please tell me that Susan Dell isn't a skinwalker?
 
NEW YORK, April 21 (Reuters) - Meta is installing new tracking software on U.S.-based employees’ computers to capture mouse movements, clicks and keystrokes for use in training its artificial intelligence models, part of a broad initiative to build AI agents that can perform work tasks autonomously, the company told staffers in internal memos seen by Reuters.

The tool, called Model Capability Initiative (MCI), will run on work-related apps and websites and will also take occasional snapshots of the content on employees’ screens, according to one of the memos, posted by a staff AI research scientist on Tuesday in a channel for the company's model-building Meta SuperIntelligence Labs team.

The purpose, according to the memo, was to improve the company's AI models in areas where they struggle to replicate how humans interact with computers, like choosing from dropdown menus and using keyboard shortcuts.

"This is where all Meta employees can help our models get better simply by doing their daily work," it said.

The Facebook and Instagram owner has been moving aggressively to integrate AI into its workflows and reshape its workforce around the technology, arguing it will make the company operate more efficiently.

Exclusive: Meta to start capturing employee mouse movements, keystrokes for AI training data
 
A Labour council will become the first to deploy AI-assisted CCTV cameras to spy on the public and spot 'suspicious' behaviour.

Hammersmith and Fulham Council, in west London, plans to modify 500 street cameras with AI that can automatically flag 'aggressive' behaviour or 'suspicious shopping' patterns.

But there are concerns innocent people going about their lives could be wrongly reported to the authorities.

It is feared that 'hugs, back slaps and high fives' might be mistaken for aggression, and shoppers shaking or rattling items, or holding clothes up to the light, could trigger an alert about a potential thief.

Labour council to use AI CCTV to spy on public and flag any 'suspicious behaviour'
 
