The potential effect of Artificial Intelligence on civilisation - a serious discussion

Without more data... *maybe.* It could well be that the humans had been actively using "affirmative action" thinking in order to artificially inflate race- and sex-based quotas... and the AI hadn't been trained that way; instead it looked only at actual qualifications. Meritocracy can look like bigotry if you're not qualified.
It can merely be a matter of what the AI has available to reference, for instance:

October 10, 2018, 6:04 PM

Amazon scraps secret AI recruiting tool that showed bias against women

By Jeffrey Dastin

That is because Amazon’s computer models were trained to vet applicants by observing patterns in resumes submitted to the company over a 10-year period. Most came from men, a reflection of male dominance across the tech industry.



In effect, Amazon’s system taught itself that male candidates were preferable. It penalized resumes that included the word “women’s,” as in “women’s chess club captain.” And it downgraded graduates of two all-women’s colleges, according to people familiar with the matter. They did not specify the names of the schools.

Amazon edited the programs to make them neutral to these particular terms. But that was no guarantee that the machines would not devise other ways of sorting candidates that could prove discriminatory, the people said.
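The mechanism the article describes can be sketched in a few lines. This is a hypothetical toy, not Amazon's actual system: a naive log-odds score over resume tokens, trained on invented data in which past hires skew male, ends up assigning the token "womens" a negative weight without any line of code ever mentioning gender.

```python
import math
from collections import Counter

# Hypothetical toy data: historical resumes labelled by past hiring outcome.
# Because most past hires were men, tokens correlated with women appear
# mostly among rejected resumes, and the model inherits that pattern.
hired = ["java leadership chess", "python leadership golf", "java golf chess"]
rejected = ["python womens chess", "java womens leadership"]

def token_weights(hired_docs, rejected_docs, smoothing=1.0):
    """Naive smoothed log-odds score per token: positive favours 'hired'."""
    h = Counter(t for d in hired_docs for t in d.split())
    r = Counter(t for d in rejected_docs for t in d.split())
    vocab = set(h) | set(r)
    return {t: math.log((h[t] + smoothing) / (r[t] + smoothing)) for t in vocab}

weights = token_weights(hired, rejected)
# 'womens' never appears in the hired pile, so it gets a negative weight,
# even though nothing in the code refers to gender explicitly.
print(weights["womens"] < 0)  # True
print(weights["java"] > 0)    # True
```

This also illustrates why editing out particular terms, as Amazon reportedly did, is no cure: any other token correlated with the historical imbalance can serve as a proxy.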

Also
Amazon’s sexist AI recruiting tool: how did it go so wrong?
Julien Lauret
Becoming Human: Artificial Intelligence Magazine
18 min read
Aug 16, 2019


But what happened at Amazon, and why did they fail? We can learn many valuable lessons from a high-profile failure. The Reuters article and the subsequent write-ups by Slate and others are written for a general news-reading audience. As an AI practitioner myself, I'm more interested in the technical and business details. I was frustrated by how shallow — and sometimes wrong — most of the reporting was. Surely some professionals and enthusiasts would like a more in-depth write-up?

Below is my attempt at a post-mortem case study of the Amazon project. I hope you find it interesting.

A disclaimer before we start: I've never worked for Amazon, I don't know personally the people involved, and I'm certainly not privy to any confidential information. Don't expect this piece to divulge any secrets. The analysis below draws on my own experiences and whatever information is publicly known from news sources.

The piece is long, so let’s cut to the chase. AI projects are complicated pieces of business and engineering, and there is more to it than “AI bias” and “unbalanced data.” Algorithms aren’t morally biased. They have no agency, no consciousness, no autonomy, and no sense of morals. They do what their designers ask them to do. Even data isn’t biased. It’s only data. It means that to understand what happened at Amazon — or in any AI project — we need to understand the human designers, their goals, the resources they had, and the choices they had to make.
 

The idea that hiring and promotion in the private sector is strictly on merit is false.
Sure. But what criteria are the AI using? If they are programmed to use stupid bigotry, that would be found in the code. If they are looking for what should be proper metrics and they still come up with "wrong" results, then maybe the *candidates* are "wrong."
The report I read was that the AI was rejecting *better* qualified minority candidates in favor of non-minorities.
 
Also
Amazon’s sexist AI recruiting tool: how did it go so wrong?
I have a strong impression from this story that it's simpler for Amazon to blame "faulty AI" than to admit that their recruiting process was biased for years (especially considering that some of those biased recruiters may now hold much higher positions in the company...)
 
The report I read was that the AI was rejecting *better* qualified minority candidates in favor of non-minorities.
And the AI was merely following the previous patterns faithfully. It seems that one reason the Powers-That-Be are so worried about AI is that it would reveal all their past biases, mistakes and stupidity. Which said Powers-That-Be would strongly prefer to stay buried in mountains of archived data, not refined and brought to public attention by AI...
 
Then the criteria should be able to be determined in the coding, yes?
Indeed. Acting after determining the fact is, of course, at the discretion of the system's owner.
 
Algorithms aren’t morally biased. They have no agency, no consciousness, no autonomy, and no sense of morals. They do what their designers ask them to do.
THIS. Now repeat after me: AI DOESN'T KILL PEOPLE, PEOPLE KILL PEOPLE...

Wha, what?
Don't blame the product for bad design, blame the designer.

I see. So AI can't think for itself.
Correct. Take for example the definition of thought at https://en.wikipedia.org/wiki/Thought - AI is already disqualified from having any by the very first sentence, since thought requires conscious cognitive processes, which AI is not capable of (no [artificial] consciousness, remember?), and it won't perform any processes independently of sensory stimulation (in the case of AI, e.g. inputs, requests, commands, or prompts).
 
Wow. So that takes the mystery out of it. I suspect so-called AI is just a military project that got declassified and released into the wild, being replaced by something more advanced. Much like the internet. What hypesters call AI is just military pattern recognition technology.
 
I honestly don't know or care what the exact roots of AI activities to date are, though I highly doubt they were purely military. I'm also not aware of what much more advanced development in concrete terms is expected to replace the current AI implementations.
 
People need to know and care. Microsoft does. That's why they've invested billions. The EU wants AI outputs clearly labeled as AI outputs so that a too-trusting public will not be taken in by scammers. And more convincing fakes raise the ability of scammers to scam the public.
 
I believe that any competent military AI implementations will not end any form of arms race as predicted above, but rather just escalate it.
 
Our friend here seems to have swallowed the idea that AI has magical and/or human-like qualities, as if it will wake up one day, like a human. The problem is, it won't.

What's "magical" about being self aware/sentient/conscious? What makes humans "magical" and chimps or chipmunks or computers not?
We invented computers

We dominate the planet and live on every continent. Across the extremes of climate.
Through our technology, we run faster, fly higher, dive deeper, move mountains or single atoms and reached out to touch the moon. We've hurled our machines beyond the heliopause out into the galaxy. Our signals are now over a hundred lightyears outwards.
We conjure theories to explain the nature of the universe and conceive of means to violate fundamental limits, which we may yet achieve.

Monkeys cracking nuts is a long way back and your computer cannot prove to itself the infinite.
 
I've located another video discussing the attempt to substitute Chat-Bots for legal research and why it's not a good idea, this one also briefly mentions another attempt to bring Chat-Bots into the court. (Warning, this does contain an advert part-way through.)

https://www.youtube.com/watch?v=oqSYljRYDEM
 
Here's an article discussing an attempt (the second I know of) to sue the creators of ChatGPT for statements that are provably false. As the article notes, it's also the third case I know of in which the machine made things up when asked to provide information. (See my last post above about the two lawyers in a lot of trouble for using ChatGPT for legal research.)

A Georgia talk-show radio host sued OpenAI, the company that owns ChatGPT, for libel June 5 after the artificial intelligence chat bot shared false information about the host to a journalist.

Mark Walters, a radio host at Armed American Radio, filed the lawsuit in the Superior Court of Gwinnett County, Georgia, and claimed ChatGPT published libelous information about him by sharing a “fabricated” complaint to a journalist. Libel is a published false statement that diminishes a person’s reputation.

The journalist, Fred Riehl of Ammoland.com, was reporting on a federal lawsuit filed in Washington, and provided ChatGPT with a link to the complaint and requested a summary of the lawsuit’s accusations.

ChatGPT responded, in part, that the complaint was “filed by Alan Gottlieb, the founder and executive vice president of the Second Amendment Foundation (SAF), against Mark Walters, who is accused of defrauding and embezzling funds from the SAF.”

According to Walters’ complaint against OpenAI, all the information provided by ChatGPT relating to him was false. Riehl requested a copy of the complaint from ChatGPT, which responded with a filing that was a “complete fabrication and bears no resemblance to the actual complaint, including an erroneous case number,” the lawsuit states.

https://firstamendmentwatch.org/lib...eorgia-against-chatgpt-parent-company-openai/
 
A reader's letter to the Dutch newspaper De Volkskrant:
My question to ChatGPT: 'It takes 20 seconds to completely empty a 1 liter bottle by turning it upside down. How long does it take to completely empty 20 1 liter bottles when they are turned upside down at the same time?'

The answer: "If 20 1 liter bottles are turned upside down at the same time, they will empty faster than if they are emptied one at a time. In this case, we can calculate the time it takes to empty the bottles by looking at the volume drained per second."

"Suppose each bottle takes 20 seconds to empty completely. This means that every second 1/20th part of a bottle is drained. Since we have 20 bottles, this means that every second we can drain the equivalent of 20/20 = 1 bottle."

“So, it only takes 1 second to completely empty the contents of 20 1 liter bottles when they are turned upside down at the same time.”
Artificial Intelligence my foot.
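For the record, the chatbot's blunder is treating a parallel process as if the bottle count divided the time. A few lines make the correct arithmetic explicit (a trivial sketch; the 20-second figure comes from the letter above):

```python
SECONDS_PER_BOTTLE = 20
BOTTLES = 20

# Sequential: one bottle after another, so the times add up.
sequential_time = BOTTLES * SECONDS_PER_BOTTLE  # 400 seconds

# Simultaneous: all bottles drain at once, so the total time is just
# the time for a single bottle; the bottle count is irrelevant.
parallel_time = SECONDS_PER_BOTTLE              # 20 seconds, not 1

print(sequential_time, parallel_time)  # 400 20
```

The bot's "1/20th of a bottle per second, times 20 bottles" step computes throughput, not duration, and then reports it as a duration.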
 
Further reporting on the case of two lawyers who used 'AI' to supplement their legal research.

MANHATTAN (CN) — A New York lawyer whose court filings included fake case citations generated by ChatGPT apologized Thursday afternoon for getting duped by the artificial intelligence tool, but the federal judge overseeing potential sanctions appeared unlikely to show any mercy.


And another case of the same out of the US State of Colorado...

COLORADO SPRINGS, Colo. (KRDO) - Colorado Springs attorney Zachariah Crabill thought he was filing a motion with cited cases that would favor his client’s argument, only to find out many of the cases were made up by Artificial Intelligence software ChatGPT.

Crabill, a licensed Colorado attorney for about a year and a half, was working on his first civil litigation case. He was defending a client that was accused of breaching a car payment agreement.

In a court document, admitting his mistake, Crabill said it was the first motion to set aside a summary judgment he had ever researched, drafted, and filed by himself. He had heard of ChatGPT, an artificial intelligence chatbot developed by OpenAI, trained to follow instructions in a prompt and provide a detailed response. So he turned to AI technology to help find case laws that would strengthen his client’s argument.

https://krdo.com/news/2023/06/13/co...eated-fake-cases-he-cited-in-court-documents/
 
A reader's letter to the Dutch newspaper De Volkskrant:
“So, it only takes 1 second to completely empty the contents of 20 1 liter bottles when they are turned upside down at the same time.”
Artificial Intelligence my foot.
While that is indeed a dumb conclusion... tell me that you really believe that a *lot* of people - including perhaps a distressing percentage of current college kids - wouldn't nod along, thinking it was right. And unlike the AI, they get to vote.

So... the AI is no dumber than, say, half the participants on the "Whatever Podcast."
 
tell me that you really believe that a *lot* of people - including perhaps a distressing percentage of current college kids - wouldn't nod along, thinking it was right.
I fear a significant part of all age groups will nod along. As a simulation of a certain subset of human reasoning, it is uncomfortably convincing.
unlike the AI, they get to vote.
Yep.
As Winnie put it - the cigar smoking one, not the honey-obsessed one or any of HIS lookalikes:
Many forms of Government have been tried, and will be tried in this world of sin and woe. No one pretends that democracy is perfect or all-wise. Indeed it has been said that democracy is the worst form of Government except for all those other forms that have been tried from time to time.…
 
Yes... but someday the GrAImmys will be a thing and *humans* won't be eligible. Of course, humans will be out of the drudgery-filled world of music creation anyway by that point.

Drudgery? Do you know nothing about creativity and inspiration? SF, when done well, inspires. It creates a sense of awe and wonder. Music, when done well, lifts people up. All creative work requires work. Our head writer was at a convention, doing a panel talk. A few people in the audience wanted to know his "secret." He said: A lot of hard work.

"No, no. What is your SECRET?" Translation: Give us the 5 super easy steps to becoming someone as successful as you.

His reply: Like I said. It takes a lot of hard work. Translation: More than 15 minutes, a few youtube videos and wiki.
 
