AI art and creative content creation

Before you mock AI art generators for "what the frak sort of guns are *those*?" keep in mind that actual humans produced this:

[Attached image: Rambo-Movie-Poster-diamond-painting.jpg]
 
Yeah, what is it about fingers that escapes the AIs??
 
Yeah, what is it about fingers that escapes the AIs??
I honestly don't get that. You'd think that, compared to faces, hands would be *easy.* You don't need to make a wide diversity of hands, and the basic model is pretty simple. A hundred "hand models" would seem utterly adequate. A dozen would probably do. Faces, on the other hand, need to be designed in their millions, and are very quickly picked up by humans as *wrong* when they're off by even a little bit.
 
Thread reopened after clean-up. I removed a bunch of off-topic posts and some just plain bad behaviour. Please try to keep on topic and act like adults.
 
No one really knows what AI is.

As a science fiction writer, I've long wondered what AI really is and how it will be used in the future: a more efficient secretary or Dr. Mengele?

The most obvious answers that may justify any investment are always money and power. And then, more money and more power.

Well, we have already been seeing how the referendums and elections of the last ten years have produced rather unexpected results, political leaders have become less and less credible characters, and the world economy is behaving illogically, in the classical sense.

Once humans had rendered themselves unnecessary for agriculture, war and factories, I was afraid they would become unnecessary altogether. When COVID began to kill old people, I believed that the guys running everything had decided to make a population adjustment ... But I was wrong.

Actually, the figures indicate that the world's situation has improved in many respects: there is less poverty, less disease and there are fewer wars.

Apparently they only need us to vote, yet preserving the democratic system must be so important to them that they will let us live a few more decades, despite the enormous expense of the health and pension systems for the Western world's baby boomers. I don't know why they do it. It's not cost-effective, but it's what they're doing. :confused:
 
No one really knows what AI is.
As a science fiction writer, I've long wondered what AI really is ..:confused:
*NOBODY* knows what AI is. But for all the screaming and hair-pulling about how AI doesn't actually have consciousness... nobody can explain what exactly that means, what exactly that is, and how humans have it while machines or animals don't. It's like claiming that C-3PO doesn't have a soul, but Luke Skywalker does... without actually proving that the soul exists in the first place.

So, AI probably won't be human minds in metal boxes. They will be different from us in most ways. And we need not understand exactly what those differences are to understand what AI can do... and whether those capabilities might or might not be good things.

View: https://www.youtube.com/watch?v=xSJmhUMSsMY
 
AI are learning to code. This means:
1) The job of "coder" might well soon be in trouble, so the default fallback option for journalists replaced by AI will itself be replaced by AI
2) That those building these AI have either never watched or read any science fiction, or they watched it and nodded along and said "that seems like a good idea."

View: https://twitter.com/lemonodor/status/1628270074074398720

Where are your sources for this? Plagiarism is a real problem. Google got sued by writers, artists and photographers for lifting text and images for their Google Books project. This is more sophisticated, but it is the same thing. First, AI is fiction. So-called AI art programs have massive databases and programs to characterize art, break it into pieces, and mix and match. No > intelligence < of any kind is involved. Writing has been broken down into easy-to-use, mix-and-match pieces, again by using highly sophisticated programs.

Did OpenAI create art to mix and match? Of course not. They lifted it, WITHOUT PERMISSION, from the internet. To make billions of dollars. The same with text.
 
AI are learning to code. This means:
1) The job of "coder" might well soon be in trouble, so the default fallback option for journalists replaced by AI will itself be replaced by AI
2) That those building these AI have either never watched or read any science fiction, or they watched it and nodded along and said "that seems like a good idea."

View: https://twitter.com/lemonodor/status/1628270074074398720

Where are your sources for this?

Right there. You can read it right there.

Plagiarism is a real problem.
Always has been. Not sure what your argument is.

First, AI is fiction.
Then by all means ignore it. Your future is secure. Don't waste even a second planning for possible futures. It's fine. You're fine. Everything's fine.

No > intelligence < of any kind is involved.
Just like Hollywood.

Did OpenAI create art to mix and match? Of course not. They lifted it, WITHOUT PERMISSION, from the internet. To make billions of dollars.
Just like Hollywood.
 
Sci-fi publisher Clarkesworld halts pitches amid deluge of AI-generated stories


"Clarkesworld, which has published writers including Jeff VanderMeer, Yoon Ha Lee and Catherynne Valente, is one of the few paying publishers to accept open submissions for short stories from new writers.

But that promise brought it to the attention of influencers promoting “get rich quick” schemes using AI, according to founding editor Neil Clarke.

In a typical month, the magazine would normally receive 10 or so such submissions that were deemed to have plagiarised other authors, he wrote in a blogpost. But since the release of ChatGPT last year pushed AI language models into the mainstream, the rate of rejections has rocketed.

In January, Clarke said, the publisher rejected 100 submissions, banning their “authors” from submitting again. In February to date, he has banned more than 500.

“I’ve reached out to several editors and the situation I’m experiencing is by no means unique,” he wrote. “It does appear to be hitting higher-profile ‘always open’ markets much harder than those with limited submission windows or lower pay rates.

“It’s clear that business as usual won’t be sustainable and I worry that this path will lead to an increased number of barriers for new and international authors. Short fiction needs these people.

“It’s not just going to go away on its own and I don’t have a solution.”

Closing submissions is a drastic move. Until a solution is identified, the magazine is not considering stories from authors.

“We will reopen, but have not set a date,” Clarke said on social media. “Detectors are unreliable. Pay-to-submit sacrifices too many [legitimate] authors. Print submissions are not viable for us."
 
AI are learning to code. This means:
1) The job of "coder" might well soon be in trouble, so the default fallback option for journalists replaced by AI will itself be replaced by AI
2) That those building these AI have either never watched or read any science fiction, or they watched it and nodded along and said "that seems like a good idea."

View: https://twitter.com/lemonodor/status/1628270074074398720

Where are your sources for this?

Right there. You can read it right there.

Plagiarism is a real problem.
Always has been. Not sure what your argument is.

First, AI is fiction.
Then by all means ignore it. Your future is secure. Don't waste even a second planning for possible futures. It's fine. You're fine. Everything's fine.

No > intelligence < of any kind is involved.
Just like Hollywood.

Did OpenAI create art to mix and match? Of course not. They lifted it, WITHOUT PERMISSION, from the internet. To make billions of dollars.
Just like Hollywood.

Hey. Are you using your classic "conversations I've had in bars" template? Hollywood is very concerned about rights and licensing. Billions of dollars are involved. But I suppose you've never heard of rights clearances. Example: A TV production company was going to feature a few books produced by my company in an upcoming TV show. They contacted us to get OUR PERMISSION. WHEN WE SAID YES, THEY SENT OVER A DOCUMENT FOR SIGNING. That's how it's done.

Say you just produced a toy based on an animated TV show and want 5 seconds from that show to use in a TV commercial. I know who the people are that must be contacted. You pay money for this. And you pay more if you want to use a different clip later on. And your TV commercial has to be cleared - that is, approved - by the rights holder.

So enjoy living in La La Land.
 
Hey. Are you using your classic "conversations I've had in bars" template? Hollywood is very concerned about rights and licensing.

Sure. *Their* rights and licensing. But "appropriating" other people's stuff? Perfectly fine if they can get away with it. This is not Hollywood specific; everyone does it. That's how humans do *everything.* But Hollywood is unique in their skill at it, their blatantness and their hypocrisy.

Say you just produced a toy based on an animated TV show and want 5 seconds from that show to use in a TV commercial.

Or say you saw an animated TV show and you want to produce a toy *kinda* based on that show. Just go ahead and half-ass it. Chances are you'll get away with it, especially if yer furrin.

[Attached images of knock-offs: Robert_Cop.jpg, knock-off-landscape.jpg, nintchdbpict000279257749.jpg]


Humans do this nonsense all the time, and get away with it often enough, profitably enough, to keep doing it. So... does that mean humans have no intelligence, since they operate the same way as current AI?
 
Closing submissions is a drastic move. Until a solution is identified, the magazine is not considering stories from authors.
Seems to me a valid solution would be to include a nominal "submission fee." If you want to spam a publisher with a thousand AI-written screeds, paying a couple bucks ($2? $5? $10?) each for the privilege might be a good idea. If nothing else, the funds could be used by the publisher to buy an AI-seeking AI that auto-reviews each submission before a human is even bothered.
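If it helps make the idea concrete, here is a rough, purely hypothetical sketch of what that gatekeeping could look like - the fee field, the phrase list, the scoring weights and the `triage` helper below are all invented for illustration, not a description of any real detector or any publisher's actual workflow:

```python
# Hypothetical pre-screening sketch for an "always open" submission queue.
# Everything here (fee field, phrase list, weights, threshold) is an invented
# illustration, not a description of any real magazine's workflow or detector.

from dataclasses import dataclass

# Stock phrases that low-effort generated submissions tend to reuse verbatim.
SUSPECT_PHRASES = (
    "as an ai language model",
    "in conclusion, the protagonist",
    "tapestry of emotions",
)


@dataclass
class Submission:
    author_email: str
    fee_paid: bool   # the nominal $2-$10 deposit proposed above
    text: str


def suspicion_score(text: str) -> float:
    """Crude heuristic in [0, 1]; higher means 'check before a human reads it'."""
    lowered = text.lower()
    hits = sum(phrase in lowered for phrase in SUSPECT_PHRASES)
    # Short "stories" padded with repeated sentences are another cheap tell.
    sentences = [s.strip() for s in lowered.split(".") if s.strip()]
    repetition = 1.0 - len(set(sentences)) / len(sentences) if sentences else 1.0
    return min(1.0, 0.4 * hits + 0.6 * repetition)


def triage(sub: Submission, threshold: float = 0.5) -> str:
    """Return 'reject', 'flag' or 'human' - only 'human' reaches an editor."""
    if not sub.fee_paid:
        return "reject"   # no deposit, no slot in the slush pile
    if suspicion_score(sub.text) >= threshold:
        return "flag"     # hold for a spot check rather than auto-banning
    return "human"


if __name__ == "__main__":
    demo = Submission(
        "writer@example.com",
        True,
        "As an AI language model, I crafted this tale. The ship fell. The ship fell.",
    )
    print(triage(demo))   # -> flag
```

The point of the sketch isn't that a phrase list would catch much on its own - determined spammers adapt - but that a fee gate plus an automated first pass keeps the bulk of the flood away from human editors, while anything merely "flagged" still gets a human spot check rather than an automatic ban.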
 
Hey. Are you using your classic "conversations I've had in bars" template? Hollywood is very concerned about rights and licensing.

Sure. *Their* rights and licensing. But "appropriating" other people's stuff? Perfectly fine if they can get away with it. This is not Hollywood specific; everyone does it. That's how humans do *everything.* But Hollywood is unique in their skill at it, their blatantness and their hypocrisy.

Say you just produced a toy based on an animated TV show and want 5 seconds from that show to use in a TV commercial.

Or say you saw an animated TV show and you want to produce a toy *kinda* based on that show. Just go ahead and half-ass it. Chances are you'll get away with it, especially if yer furrin.

[Attached images of knock-offs: Robert_Cop.jpg, knock-off-landscape.jpg, nintchdbpict000279257749.jpg]


Humans do this nonsense all the time, and get away with it often enough, profitably enough, to keep doing it. So... does that mean humans have no intelligence, since they operate the same way as current AI?

cheers,
Robin.
[Attached image: sub-buzz-4051-1663272136-2.jpg]
 
cheers,
Robin.
In lieu of filling the thread with photos of laughable (often *hilariously* laughable) cheap knockoffs, here's a link:

35 Of The Strangest Knock-Off Designs That Were Rightfully Shamed By This Twitter Account

And again, why this is relevant: complaints that AI don't have "intelligence" because all they do is copy/paste existing products run headlong into the fact that that's how *humans* do things.

Look at the current Box Office King: "Avatar." It's Dances With Smurfs 2. I doubt there's a single identifiable design element or plot point that can't be linked to an earlier design or plot. But it was written and created by humans, not AI. Sure, we steal stuff better than AI does, but human evolution and development is essentially static. AI get better and more powerful by the day.
 
No one really knows what AI is.
As a science fiction writer, I've long wondered what AI really is ..:confused:
*NOBODY* knows what AI is. But for all the screaming and hair-pulling about how AI doesn't actually have consciousness... nobody can explain what exactly that means, what exactly that is, and how humans have it while machines or animals don't. It's like claiming that C-3PO doesn't have a soul, but Luke Skywalker does... without actually proving that the soul exists in the first place.

So, AI probably won't be human minds in metal boxes. They will be different from us in most ways. And we need not understand exactly what those differences are to understand what AI can do... and whether those capabilities might or might not be good things.

View: https://www.youtube.com/watch?v=xSJmhUMSsMY

Throughout its existence, humanity has always been an endangered species.

1.2 million years ago, our ancestors numbered only between 18,000 and 26,000 individuals.

195,000 years ago an ice age reduced the population to only a few thousand; by the most extreme estimates, to fewer than a thousand.

70,000 years ago the Toba eruption nearly finished us off, but about 10,000 people managed to survive.

In October 1962 we again came within a whisker of extinction... but again we survived.

Perhaps the explanation is that we are able to adapt, not physically but through technology.

Nature tries to kill us by every means available, but never quite succeeds, and if one day we achieve some kind of immortality the game will be over.

But we must regret the thousand years of scientific and technological development lost during the Middle Ages; we are still in danger if the next asteroid gets lucky.
 
No one really knows what AI is.
As a science fiction writer, I've long wondered what AI really is ..:confused:
*NOBODY* knows what AI is. But for all the screaming and hair-pulling about how AI doesn't actually have consciousness... nobody can explain what exactly that means, what exactly that is, and how humans have it while machines or animals don't. It's like claiming that C-3PO doesn't have a soul, but Luke Skywalker does... without actually proving that the soul exists in the first place.

So, AI probably won't be human minds in metal boxes. They will be different from us in most ways. And we need not understand exactly what those differences are to understand what AI can do... and whether those capabilities might or might not be good things.

View: https://www.youtube.com/watch?v=xSJmhUMSsMY

Throughout its existence, humanity has always been an endangered species.

1.2 million years ago, our ancestors numbered only between 18,000 and 26,000 individuals.

195,000 years ago an ice age reduced the population to only a few thousand; by the most extreme estimates, to fewer than a thousand.

70,000 years ago the Toba eruption nearly finished us off, but about 10,000 people managed to survive.

In October 1962 we again came within a whisker of extinction... but again we survived.

Perhaps the explanation is that we are able to adapt, not physically but through technology.

Nature tries to kill us by every means available, but never quite succeeds, and if one day we achieve some kind of immortality the game will be over.

But we must regret the thousand years of scientific and technological development lost during the Middle Ages; we are still in danger if the next asteroid gets lucky.
According to Darwinism, throughout their very existence, *every* *single* species of life on this planet (and potentially, or even likely, on other celestial bodies across the universe as well, though we may sadly never know) has been or is being continuously endangered by evolutionary pressure - I won't even start to bore you with concrete examples (well, OK: trilobites, dodos - you get the point). Logical reasoning leads one to the assumption that at the very origin of our particular species there had to be, at the very minimum, one female and one male specimen, and, perhaps at least in part due to the utter lack of high-def online streaming entertainment services at the time, things quite literally evolved from there. So if you can scale up from a mere two individuals to 8 billion+ herd members easy peasy, your potential extinction events quoted above scare me on an existential level about as much as a Freddy Krueger movie. That doesn't mean that I wouldn't take extremely serious exception to being nuked by "Ras" Putin, but I can assure you that the end of us as a species would be just about the farthest thing from my mind at that point.
 
So if you can scale up from a mere two individuals to 8 billion+ herd members easy peasy, your potential extinction events quoted above scare me on an existential level about as much as a Freddy Krueger movie.
Humanity going extinct in, say, the next century is a low order of probability event. Modern civilization collapsing into darkness, though, is quite feasible. Things get bad enough to knock civilization backwards - global nuclear war, a *real* pandemic, Carrington Event, etc. - that's not at all unreasonable. And unfortunately, that might be unrecoverable. All the easily scraped-up/pumped up coal and oil are gone. Much of our knowledge is on easily EMP-erased electronic systems (that can't be read without electricity anyway). A *lot* of people who really should know better are *already* violently opposed to modern science, western civilization, objectivism, etc.

Humanity *could* be knocked back to a pre-industrial level from which we might never rise again. And that's close enough to "extinct" as makes little difference, as our future horizons would be reduced to a narrow squint.
 
It's hard to decide whether this technology, in this form, is going to lead anywhere.
One error in Google's AI wiped billions off its market value, and Bing's is already a laughing stock, barely credible as intelligent in any form. ChatGPT is riding high with merry quippers, plagiarising teens and get-rich-quick wannabe authors, and the AI art programmes with their wannabe artists and NFT creators, but there will come a crunch point. Is this going to be another Google Glass or Metaverse? Lots of techy talk, but in the end a product nobody really wants and embarrassing share-value losses?

AI has important uses: if this money were put into AI medical systems to aid diagnosis or analysis, into scientific analysis, robot surgeons or manufacturing tools, then it would be making progress.

But ChatGPT and Midjourney aren't taking us anywhere productive. They're a fancy entertainment set-up holding up a mirror to teenage levels of mentality and banality. Microsoft and Google think that once it's behind a paywall it will ultimately generate megabucks, but who is really going to keep forking out money to write gun-crime poems once the novelty has worn off? All the paywall will do is drive the scammers - those who try to pass off AI art and writing as their own and sell it for profit or commission. It will make current pay-for-essays plagiarism look like small fry (publishers can cut off submissions and put 'chokepoints' in place to check material, but schools, colleges and universities can't cut off coursework entirely to avoid being swamped).

Sure, a "scary AI" could be around the corner, but it will only be scary because it reflects an image of the human psyche right back at us that we can't easily bat away or pretend doesn't exist. I'm not against AI, but I am against AI developed for get-rich-quick schemes of no practical benefit.
 
It's hard to decide whether this technology, in this form, is going to lead anywhere.
One error in Google's AI wiped billions off its market value, and Bing's is already a laughing stock, barely credible as intelligent in any form. ChatGPT is riding high with merry quippers, plagiarising teens and get-rich-quick wannabe authors, and the AI art programmes with their wannabe artists and NFT creators, but there will come a crunch point. Is this going to be another Google Glass or Metaverse? Lots of techy talk, but in the end a product nobody really wants and embarrassing share-value losses?

AI has important uses: if this money were put into AI medical systems to aid diagnosis or analysis, into scientific analysis, robot surgeons or manufacturing tools, then it would be making progress.

But ChatGPT and Midjourney aren't taking us anywhere productive. They're a fancy entertainment set-up holding up a mirror to teenage levels of mentality and banality. Microsoft and Google think that once it's behind a paywall it will ultimately generate megabucks, but who is really going to keep forking out money to write gun-crime poems once the novelty has worn off? All the paywall will do is drive the scammers - those who try to pass off AI art and writing as their own and sell it for profit or commission. It will make current pay-for-essays plagiarism look like small fry (publishers can cut off submissions and put 'chokepoints' in place to check material, but schools, colleges and universities can't cut off coursework entirely to avoid being swamped).

Sure, a "scary AI" could be around the corner, but it will only be scary because it reflects an image of the human psyche right back at us that we can't easily bat away or pretend doesn't exist. I'm not against AI, but I am against AI developed for get-rich-quick schemes of no practical benefit.
That's the most calibrated response I ever saw on AI.
 
So if you can scale up from a mere two individuals to 8 billion+ herd members easy peasy, your potential extinction events quoted above scare me on an existential level about as much as a Freddy Krueger movie.
Humanity going extinct in, say, the next century is a low order of probability event. Modern civilization collapsing into darkness, though, is quite feasible. Things get bad enough to knock civilization backwards - global nuclear war, a *real* pandemic, Carrington Event, etc. - that's not at all unreasonable. And unfortunately, that might be unrecoverable. All the easily scraped-up/pumped up coal and oil are gone. Much of our knowledge is on easily EMP-erased electronic systems (that can't be read without electricity anyway). A *lot* of people who really should know better are *already* violently opposed to modern science, western civilization, objectivism, etc.

Humanity *could* be knocked back to a pre-industrial level from which we might never rise again. And that's close enough to "extinct" as makes little difference, as our future horizons would be reduced to a narrow squint.
I certainly agree that that's entirely possible, and you haven't even included a "When Worlds Collide" scenario. Though, per Justo's post above, it looks like we already dodged a few bullets in the past, staged a comeback, and recovered fairly nicely from those, thank you. But I am also dead certain that ultimately, at some point in the future, we will be dead and gone for good, better, or worse - I just hope I won't live to witness it.
 
It's hard to decide whether this technology, in this form, is going to lead anywhere.
One error in Google's AI wiped billions off its market value, and Bing's is already a laughing stock, barely credible as intelligent in any form. ChatGPT is riding high with merry quippers, plagiarising teens and get-rich-quick wannabe authors, and the AI art programmes with their wannabe artists and NFT creators, but there will come a crunch point. Is this going to be another Google Glass or Metaverse? Lots of techy talk, but in the end a product nobody really wants and embarrassing share-value losses?

AI has important uses: if this money were put into AI medical systems to aid diagnosis or analysis, into scientific analysis, robot surgeons or manufacturing tools, then it would be making progress.

But ChatGPT and Midjourney aren't taking us anywhere productive. They're a fancy entertainment set-up holding up a mirror to teenage levels of mentality and banality. Microsoft and Google think that once it's behind a paywall it will ultimately generate megabucks, but who is really going to keep forking out money to write gun-crime poems once the novelty has worn off? All the paywall will do is drive the scammers - those who try to pass off AI art and writing as their own and sell it for profit or commission. It will make current pay-for-essays plagiarism look like small fry (publishers can cut off submissions and put 'chokepoints' in place to check material, but schools, colleges and universities can't cut off coursework entirely to avoid being swamped).

Sure, a "scary AI" could be around the corner, but it will only be scary because it reflects an image of the human psyche right back at us that we can't easily bat away or pretend doesn't exist. I'm not against AI, but I am against AI developed for get-rich-quick schemes of no practical benefit.

Speaking as a publisher, that is precisely what is happening. There will be no deluge where I work. New manuscripts will have to be guaranteed by their authors. Statements like "No, I did not use ChatGPT to write this for me" will be enforced. The same for fake art: "No, I did not use Midjourney to create this."

"Scary AI" is 100% fake. Those with more money than they know what to do with enjoy issuing such statements. They enjoy frightening the peasants. The TV news has been doing it for years.

Schools are helpless? No, of course not. I would have students write something in class, and supervise their work. And those who think they can get away with it will have no actual, functional skills when they graduate.

The idea that you can get something for nothing is what got the cryptocurrency industry in trouble. They are selling nothing in exchange for real, actual money. That's all this is. Some kids sitting at home and using - fake word - AI to "create" something that they can't create themselves.
 
Going forward, all exam rooms of educational institutions should be TEMPEST-proof. For my current job, in addition to interviews, I had to write a short essay, in one hour, on a topic given to me immediately before the test, on a clean laptop with no internet connection.
 
Going forward, all exam rooms of educational institutions should be TEMPEST-proof. For my current job, in addition to interviews, I had to write a short essay, in one hour, on a topic given to me immediately before the test, on a clean laptop with no internet connection.
What's wrong with pen and paper? Let's see AI handle *that.*

*NOTE: Not valid in timelines where people have invisible implanted AI feeding them answers...
 
Going forward, all exam rooms of educational institutions should be TEMPEST-proof. For my current job, in addition to interviews, I had to write a short essay, in one hour, on a topic given to me immediately before the test, on a clean laptop with no internet connection.

Bravo Martin. No one is helpless. For every measure there is a countermeasure.
 
Going forward, all exam rooms of educational institutions should be TEMPEST-proof. For my current job, in addition to interviews, I had to write a short essay, in one hour, on a topic given to me immediately before the test, on a clean laptop with no internet connection.
What's wrong with pen and paper? Let's see AI handle *that.*

*NOTE: Not valid in timelines where people have invisible implanted AI feeding them answers...
Since my penmanship is pretty lousy (I'm left handed, but in the bad old days I was forced to learn to write with the right), using a laptop was definitely preferable and advantageous for all parties involved, especially given the far superior ability to edit text as you go :).

As for AI implants, we'll cross that bridge when we get there, but CT scans and MRIs come readily to mind as potential countermeasures.
 
Going forward, all exam rooms of educational institutions should be TEMPEST-proof. For my current job, in addition to interviews, I had to write a short essay, in one hour, on a topic given to me immediately before the test, on a clean laptop with no internet connection.
What's wrong with pen and paper? Let's see AI handle *that.*

*NOTE: Not valid in timelines where people have invisible implanted AI feeding them answers...
Since my penmanship is pretty lousy (I'm left handed, but in the bad old days I was forced to learn to write with the right), using a laptop was definitely preferable and advantageous for all parties involved, especially given the far superior ability to edit text as you go :).

As for AI implants, we'll cross that bridge when we get there, but CT scans and MRIs come readily to mind as potential countermeasures.

Yes, indeed.
 
I wish for an AI that writes apps, code or programs for me, or that can at least make me as smart as one of the senior members here.

Of course the first one is likely, and the second one is going to make me sound like Dagoth Ur at best!
 
I wish for an AI that writes apps, code or programs for me, or that can at least make me as smart as one of the senior members here.

Of course the first one is likely, and the second one is going to make me sound like Dagoth Ur at best!
Using AI in any way, shape, or form will not *make* you smart. At the very best it might make you *look* smart, because you cannot outsource intelligence. What might actually make you smart, or rather informed (or, to use a more accurate term, educated on the issues debated in this venue), is gaining as much knowledge and understanding of the topics discussed in this forum as you can before asking any questions.
 
I wish for an AI that writes apps, code or programs for me, or that can at least make me as smart as one of the senior members here.

Of course the first one is likely, and the second one is going to make me sound like Dagoth Ur at best!
Using AI in any way, shape, or form will not *make* you smart. At the very best it might make you *look* smart, because you cannot outsource intelligence. What might actually make you smart, or rather informed (or, to use a more accurate term, educated on the issues debated in this venue), is gaining as much knowledge and understanding of the topics discussed in this forum as you can before asking any questions.

Amazing. AI will cook your meals. AI will make humans obsolete... Oh brother...

I can't wait until the U.S. military announces a fully functioning Terminator with AI. They're still working on the pulse rifle...
 