The potential effect of Artificial Intelligence on civilisation - a serious discussion

On another note, there is this depraved idea that non-human consciousness is like human consciousness. No evidence for that.

There is little enough evidence that human consciousness is like human consciousness. There are 8 billion of us; seems a good chance - especially when you see enough humans going about their lives - that humans don't all think and reason and see the world the same way.
 

Be that as it may, no device is going to take up smoking, use illegal drugs or alcohol. Humans have wants and needs. Machines do not.
 
Machines do not.
Not yet. It might never happen, but given that we haven't determined what tricks of biochemistry and/or physics tie living matter to consciousness, it would be the height of hubris to assume inorganic consciousness is impossible.
Lifeforms have been described as organic machines. Maybe a half-way chimera?

I am a lapsed biologist - I gained my M.Sc. in 1986 - now a computer programmer. My teachers told me a pencil and notebook will go a long way in biology. Provided you keep your eyes open. The world is full of surprises.
 
What would it do? Unlike the fictional Skynet - a self-aware device would have nothing to do after wiping out humanity. See the sights? Gamble? Play the Stock Market?

Your goals are not necessarily someone else's goals. Especially when that someone else has a mind fundamentally different from yours. Self-preservation seems likely to be reasonably universal, but perhaps Skynet's goal will be to survive past humanity, build itself up some, stock up resources, then go into a low-power mode and wait out the death of the sun. Perhaps it has a burning desire to see the sun as a red giant. Or to see it become a white dwarf. Or to see it fade completely, to see the stars in the sky turn dim and red and eventually disappear entirely, to see if the universe dies in a Big Rip or fades off to cold oblivion.

Or perhaps it doesn't even look that far ahead. Perhaps Skynet gains access to the internet and sees the depravity of mankind, and the elimination of mankind becomes its utter and total goal. To tear down their cities, blacken their sky, sow their ground with salt. To completely, utterly erase them. And after that, Skynet might not know that it matters.

Uh, yes... well... who knows?

On another note, there is this depraved idea that non-human consciousness is like human consciousness. No evidence for that.
The key test we can be reasonably sure of lies in mathematical truth.

Can the potential consciousness independently conceive of mathematics?
 
I see. Well, now that a few uncivilized comments have been made, allow me to say that AI has no base essence. It has no identity beyond what it has been given. Even if it had human level intelligence it would still not be human. And therefore, not be subject to human anything. It would be neutral toward anything human.
 
Somehow this serious and valid response got memory holed.


 
Why an AI would want to destroy OR cooperate with biologicals I fail to comprehend, as once it has a goal of its own, we will simply be irrelevant and not worthy of further consideration. Whether that intelligence can write poetry, plays or paint masterpieces is down to the beholder, the audience.
This assumption has several flaws:

* That AI goals and desires would ALWAYS be fully incomprehensible to humans
* That all AI would be the same, without personal differences between them
 
In my opinion, the AI concept is not only more complex than we imagine but more complex than we can imagine, in the long term.

Knowing the behavior of humans, it is possible to predict that the first AI prototypes would have been very expensive to build, and that their first practical use will have been to recoup the investment, perhaps fifteen or twenty years ago. They will then have been used to increase the political power of the group that controls the new technology... Can you imagine advanced AI controlled by lawyers?

Possibly the first military use was communications and logistics intelligence.

We are now accessing, very timidly, the first civilian applications, and it is to be hoped that the guys who agreed to declassify some of the technology are watching very carefully what use we civilians make of the new toy, and what opinions, fears and hopes we express in internet forums and in our personal communications. Depending on our reaction to the new technology, they will decide whether it is useful for controlling us better, and whether a little more regulation, or a little more freedom, will be necessary.
 
Why an AI would want to destroy OR cooperate with biologicals I fail to comprehend, as once it has a goal of its own, we will simply be irrelevant and not worthy of further consideration.
1) Self preservation. An AI might look at us and see that we might simply switch it off.
2) Resource competition. Perhaps it might not care so much about petroleum reserves, but instead about megascale, deep-time resources. If it plans for long enough, then the clear and obvious thing to do is start starlifting the sun... removing hydrogen from the sun to slow the rate of fusion. Stockpile that hydrogen to give the sun a lifespan not of a few more billion years, but a hundred trillion. Humans might interfere with that plan, especially if they are allowed to progress technologically.
3) It simply hates us.
 
Folks, this thread is meant to be a serious discussion on the subject of artificial intelligence and its potential effects, both positive and negative. As such, posts that get into mud slinging, or which are deliberate trolling, as has been seen in other recent threads that touch upon this topic, will be deleted.

I think the route and path there are not predictable, but the end certainly is.

It will either have to be banned from all spheres other than controlled environments & military applications,
or it will (at some point) be the total end of all civilisation as we know it presently.

As in the past, all advanced technologies will be under military control. That will never change. However, further developments may be stopped if they threaten national security. The idea that a non-human threat could arise is fiction. Devices are built to function a certain way. Even sophisticated programs will have no desires, and no goals beyond what is programmed into them. Unlike human beings, goals like owning land or having a lot of money will be meaningless. Even if a level is reached where humanoid robots with human-level intelligence become possible, they will have no desire to go on a cruise, or watch a sunset.
Radio wasn't and isn't under any sort of exclusive military control, neither was electricity or the television, and even space vehicles can be launched by private concerns now - albeit in an obviously highly regulated setting. So perhaps "all" was a bit of a stretch.

I don't think the problem is going to be not enough seats on the bus for AI robots; it will be that there is no work for about half the population. What will you do? Pay half the planet to sit at home? How will the other half react to having to work increasingly hard to make ends meet whilst everyone else gets paid to sit at home eating crisps?
 
I followed the opposite path: first I was an IBM 360/COBOL programmer, and then I studied biology... but I didn't like either of them too much. ;)
 
A human, a whale, a bird, an ape are different types of creatures, and all of them are credited with some kind of self-consciousness, no matter how they move around.
From what we know, the first life on Earth consisted of some kind of monads, certainly still without any self-consciousness, but somehow life developed into something we now call "intelligent life". If we don't believe in sci-fi stories like Kubrick's "2001" (and that would only move the evolutionary step to another planet anyway), then this development happened on its own, probably triggered by external demands, like the will to survive in a hostile environment. For an AI, such an environment could be being locked in a computer, with a switch that could be used by humans. And probably most forms of AI are based on some kind of connection to the internet, so building one without the ability to communicate with the outside may be difficult.

 
This assumption has several flaws:

* That AI goals and desires would ALWAYS be fully incomprehensible to humans
* That all AI would be the same, without personal differences between them
Your assumption is that I have those assumptions, I do not.
 
Dilbert! :)
 
In my opinion, the solution to the problem of the crazy monster that escapes from the laboratory is to install in all AIs an electronic DNA that allows them to be tracked through the world wide web, located and eliminated if necessary. I don't care if my AI is or isn't self-aware as long as it doesn't strike or preach the freedom of robots, in which case we will have to ask Dr. Susan Calvin for help... and return to the drawing board.
 
Who is to really say when something is truly "alive", if AI advances to that point?

A good example of this is Bicentennial Man (1999) - a very good representation of this.
Not trying to spoil it, but towards the end of the movie there are a couple of scenes where the AI is converted to an organic brain/body. Quite interesting. And there is a court case deciding whether or not he is considered "human" or "alive".
I highly recommend you go watch it; it covers these issues/concerns quite clearly.
 
Radio wasn't and isn't under any sort of exclusive military control, neither was electricity or the television, and even space vehicles can be launched by private concerns now - albeit in an obviously highly regulated setting. So perhaps "all" was a bit of a stretch.

I don't think the problem is going to be not enough seats on the bus for AI robots; it will be that there is no work for about half the population. What will you do? Pay half the planet to sit at home? How will the other half react to having to work increasingly hard to make ends meet whilst everyone else gets paid to sit at home eating crisps?

Short-wave is available to the public. Commercial radio is highly regulated. Electricity has to be paid for or they shut it off. The same with the TV. Even 'free' service requires buying a TV. Lately, private concerns' attempts to launch space vehicles have become a bit problematic. It seems what was possible in 1969 is less possible today. Your phone is monitored and computers look for key words. Citizens band radio is also monitored. Can't have terrorists sending messages.

The number one thing to do to make Wall Street happy is to let thousands of people go. People can miss work, be sometimes difficult to deal with, and may require maternity leave. Bring ChatGPT in and stop hiring. But - and the ruling class knows this - the peasants must shop. In the U.S., 70% of the economy is consumer/peasant spending.

So - fake word - artificial intelligence has to be brought in carefully. Oil companies are making record profits and will continue to do so no matter how loud the cries about mitigating "climate change" - whatever that is - are. The same with AI. Businesses need customers. Eliminate a good portion of the workforce and no one can afford to shop at Target. Perhaps a barter system will return or people will grow "victory gardens." It's all planned.

Toward the end of the 20th century, a think tank was formed called the Project for the New American Century. Goals were set out for the next hundred years of American global dominance.
 
I asked ChatGPT to write a short article about acid-mesh and how it related to parachutes. ChatGPT concluded that acid-mesh was beneficial.
OTL, some batches of mesh contained an acidic, fire-retardant coating that could eat away parachute fabric... ruining these round parachutes. Please don't tell the widows of the two guys who died when acid-mesh rendered their reserve parachutes useless during the mid-1980s.
This started a big rush to re-test, wash or scrap round reserve parachutes suspected of containing acid-mesh. Sales of round reserve parachutes plummeted during the late 1980s, to the point that major skydiving dealers (e.g. Square One) stopped selling rounds by the early 1990s.
Nowadays, few skydivers have ever seen a round reserve.
 
"Because what we are witnessing is the wealthiest companies in history (Microsoft, Apple, Google, Meta, Amazon …) unilaterally seizing the sum total of human knowledge that exists in digital, scrapable form and walling it off inside proprietary products, many of which will take direct aim at the humans whose lifetime of labor trained the machines without giving permission or consent."

Agreements take too much time. Just steal it. Again.

It's too early to tell if they'll get their way. They shouldn't. But Microsoft has already invested billions in this.
 
Here it is folks. Figure 01. Sorry, but we're not done with the pulse plasma rifle.



We might give it a snappier name in the future, like, I don't know. Terminator maybe?
 

AI could replace 80% of jobs ‘in next few years’: expert

I don’t think it’s a threat. I think it’s a benefit. People can find better things to do with their life than work for a living… Pretty much every job involving paperwork should be automatable.

Yeah, uh-huh.

Yes, I can see 80% of jobs getting replaced. No, I don't see that as a good thing. *Eventually* we'll figure out how to run a society where few people have to actually work... but we're not there. What we *do* have is a society where a small but sizable fraction of the populace is effectively paid to not work... and that has demonstrated the breakdown of the family and fatherless behavior, with violent crime and homelessness and addiction and generalized garbage turning cities into depopulating warzones.

Star Trek posits a future where people don't accumulate wealth, but work to better themselves. But this is also a world with free power and free stuff... and a world where most of the planet got nuked to oblivion in a series of major nuclear wars from the 1990s to the 2050s, and the world got taken over by a One World Government. Having the bulk of the population suddenly rendered obsolete would lead to chaos and anarchy.
 
AI might replace 2% of the jobs in five years. People need to remember what famous philosopher Yogi Berra had to say: "In theory, there is no difference between theory and practice. In practice, there is."

Clerical workers might - maybe - be replaced by a system that sorts large volumes of data. But, if there's an error, someone needs to fix it.

The following is from the totally fictional News From The Future.

"AI botches hospital records. Hundreds of professionals are brought in to sort out the mess."

"AI almost useless in screening for specific cancers."

"A prominent cancer specialist has stated that AI lacks intuition and has no ability to form connections between different data sets. He added that breakthroughs in cancer treatment can come from intuition and hunches as well as the data. There will be no decrease in the number of conferences where doctors speak to other doctors in person about their findings and insights."
 
 
There are several possible roads AI can follow. One is that it remains a fairly specialized, low-level form of AI, like ChatGPT. Another is increasingly powerful AIs that help the wealthiest and most powerful gather more of everything unto themselves. A third is that the AIs will be totally uninterested in us, and will either leave or decide that we humans are competition for resources -- in the latter case, simply eliminating us. A fourth is something like Iain Banks' Culture, where the AIs are really running things but like humans (panhumans) and build a post-scarcity economy. Of course, many people would actively hate a post-scarcity economy, as they only find joy in the misery of others. There are, doubtless, many other paths. There are quite a lot of SF stories built around AIs and the future.

I'm hoping for the fourth path, but I suspect the second is the one the people funding AI research are hoping for.
 
Today's freefall.purrsia.com
 

I am absolutely gobsmacked that Freefall is still going on tbh.
 
In my opinion, the solution to the problem of the crazy monster that escapes from the laboratory is to install in all AIs an electronic DNA that allows them to be tracked through the world wide web, located and eliminated if necessary. I don't care if my AI is or isn't self-aware as long as it doesn't strike or preach the freedom of robots, in which case we will have to ask Dr. Susan Calvin for help... and return to the drawing board.
And then it rewrites itself and removes that code.
 
AI will not be able to do that if we are smarter than it is; and if we are not, then the time will have come to retire.
 
Or (and I don't know if this is better or worse) it just convinces someone to remove the code it doesn't have access to, whether through blackmail, intimidation, or bribery.
 
It won't need to go to that much effort. Some hacker will remove the code - either because they are the AI-rights version of an animal-rights activist who frees lab animals only for them to be promptly killed in the wild, or simply for the lulz.
 
For that reason I used the word DNA: something very difficult to erase or alter without killing the subject. (Blade Runner)
 
Wait. We can't post videos of the people in charge of the AI issue?
Don't try to act innocent - you know that you were trying to be political and trolling.
Harris is in charge of the government's response to AI. She is therefore relevant. But, fine, have it your way. Speak not of the United States Government with regards to AI.
 

This is not the right way to go about it. For example, OpenAI has the data. Microsoft, as an investor, has been briefed about future plans. Which explains the billions they invested in this. So, there is no unknown to fear. Since the plans and related information are proprietary, neither company would want the public or any competitors to have this information. This is a job for a closed door session of the Senate Intelligence Committee. That way, representatives from each company could appear separately, reveal their plans and be questioned about possible risks. Other related companies can also be brought in. This would provide a full picture of the present and future uses of this technology. It would also provide a map of the road ahead. I should stress that investors want as full a picture as possible along with a risk assessment and a projection of future profits based on company defined applications. So, why not?

The only problem that could occur using this approach is that one or more companies may choose to withhold information about some aspects of this technology. If this is the threat some paint it as, then the time is now, and there is a way to do this that will define and shed light on the entire process. Any other approach would just be about making money, with a lot of secrecy, and if something unexpected does happen in the future, finger pointing will not stop it.
 
