The potential effect of Artificial Intelligence on civilisation - a serious discussion

A "Vatican-approved ethics guide"? What would that contain? No LGBTQ? Protection for church-approved child molestation? Protection for church-approved theft of children? Denying women the right to abortion because the church does not like it? What on earth makes the church believe it is so moral, so ethical and so damn right that it can interfere in the lives of non-believers?

Little wonder Putin promotes his Christianity.
 
In today's world there are women, atheists, Jews and homosexuals willing to risk their freedom and even their lives to allow a Catholic or a Muslim to freely practice their religion because that is the value system that has proven to work best.

No one wants to live in a world with an infant mortality rate of fifty percent and a life expectancy of 37 years; a world in which sea bathing, anaesthesia, contraceptives, potatoes, tomatoes and cats are forbidden for religious reasons; where justice admits as evidence a confession made under torture, on an accusation of witchcraft based on having a spot on the skin; where people are born and die for generations within a few metres of the city walls; where paedophilia, racism and slavery are tolerated; where doctors and scientists are burned alive; and where population adjustments consist of organizing a children's crusade.

That world existed and dominated most of the known world for a thousand years. Part of it still exists, playing with its swords, its dungeons and its dragons, in reality and in consciousness, waiting for a comet, a virus or a dictator to restore its power.

And from time to time they provoke a little to see if the opposition weakens.
 
Question.
What did you value when you developed the AI?
Results?
Like what? Coming up with something you didn't?
 
Folks, a warning to stay on topic.
Agreed, but there will be a need for something like the three laws of robotics in the AI mix, and asking "Who is going to be most influential in this process?" is important. We just have to know that certain institutions will push for involvement, if not the lead.
 
When the police chief realizes that he cannot stop the anger of the peasants, he places himself at the head of the demonstration. Cui prodest? (Who benefits?)
 

What seems to be emerging is that "human curation", for want of a better term, is needed at this stage.

In the linked article Microsoft blames "human error" for an AI-generated "listicle" recommending the Ottawa Food Bank as a place for tourists to visit. I seriously doubt any human read the list.

 
To believe AI is merely something "written by humans" is missing the point of AI. At some stage AI will be writing the code for successive generations of AI; then there will be a change, and there is no way to know for certain which way TRUE AI will go, as opposed to AI coded by humans. Until AI code is written by AI, it cannot by definition be true AI.
AI was created by humans for humans, and personally I think humans won't let AI replace them to the point of writing code for AI. I think humans (especially the press) will push for policies on AI the way they did with nuclear weapons back then.
 
The term AI is in fact disingenuous. Something working from a series of algorithms written by biological entities cannot, by definition, be artificial.
 
Whatever the moniker it wears, fear of Skynet will confine actual AI to hypotheticals and whimsy. AI is just marketing speak for "it's stupid and costs too much, so give us your money, sheeple."
 
Recent articles
https://www.channelnewsasia.com/world/fight-over-dangerous-ideology-shaping-ai-debate-3727481

OK, not entirely serious. Sorry. Still, it raises an interesting question. As a tangent in his SF novel The Omega Expedition, Brian Stableford notes that human senses are limited, and wonders what would happen if you had direct machine-brain interfaces and the imagery transmitted was at higher resolution than we see. No heads exploded, as it's fairly hard SF and the brain is remarkably flexible, but he didn't go into much depth of speculation.

[attached image: SEI_195727177.jpg]
 
The image is humorous but makes a serious point. All rational people know that it's nuts for the government to forgive student loans en masse, especially for those who got degrees they *knew* would not lead to careers paying enough to repay the loans. But now AI is going to make a *lot* of fields non-lucrative, and it would be not just unwise but unethical to go into debt studying for a field that simply won't have any use for you.

 
I have had limited experience with AI on the internet. It seems that most systems in common use have pre-programmed guidelines that prevent discussion of some topics, or biases that slant the discussion toward the programmers' desired outcome. Since people with their own concepts of correctness set up the guidelines for AI, we are not really seeing the full potential of the systems. It would be interesting to see what an AI system would generate if it had no programmer-built constraints.
 

A broad article with many fun facts. Major points:

...Open Minds Institute (OMI) in Kyiv, describes the work his research outfit did by generating these assessments with artificial intelligence (AI). Algorithms sifted through oceans of Russian social-media content and socioeconomic data on things ranging from alcohol consumption and population movements to online searches and consumer behaviour. The AI correlated any changes with the evolving sentiments of Russian “loyalists” and liberals over the potential plight of their country’s soldiers.

...drone designers commonly query ChatGPT as a "start point" for engineering ideas, like novel techniques for reducing vulnerability to Russian jamming. Another military use for AI, says the colonel, who requested anonymity, is to identify targets.

As soldiers and military bloggers have wisely become more careful with their posts, simple searches for any clues about the location of forces have become less fruitful. By ingesting reams of images and text, however, AI models can find potential clues, stitch them together and then surmise the likely location of a weapons system or a troop formation.

...uses the model to map areas where Russian forces are likely to be low on morale and supplies, which could make them a softer target. The AI finds clues in pictures, including those from drone footage, and from soldiers bellyaching on social media.

The use of AI helps Ukraine's spycatchers identify people … "prone to betrayal".

...Palantir’s software and Delta, battlefield software that supports the Ukrainian army’s manoeuvre decisions. COTA’s [Operations for Threats Assessment] “bigger picture” output provides senior officials with guidance on sensitive matters, including mobilisation policy.

Ukraine’s AI effort benefits from its society’s broad willingness to contribute data for the war effort. Citizens upload geotagged photos potentially relevant for the country’s defence into a government app called Diia (Ukrainian for “action”).

Ukraine’s biggest successes came early in the war, when decentralised networks of small units were encouraged to improvise. Today, Ukraine’s AI “constructor process”, he argues, is centralising decision-making, snuffing out creative sparks “at the edges”. His assessment is open to debate. But it underscores the importance of human judgment in how any technology is used.
 
