Game of Thrones author sues ChatGPT owner OpenAI

Not the only author going after the people behind the 'procedural language generators'


Call me perverse, but when I found my name on the list of books that have been fed into Books3 — a database used to train generative AI programs — I felt relief. Finally, some transparency on the origins of AI! Finally, some understanding of the importance of copyright! Maybe now we could push back on claims, made by big tech and others, that Australia needs to loosen up its copyright laws if entrepreneurial tech is to flourish here. Maybe now writers’ work, their copyright, and the pitiful state of their incomes can be taken seriously! Writers are also entrepreneurs, and their copyright is one of the very few ways they have of making an income. As it is, most writers earn well below the poverty line, yet the flourishing of our culture depends on them.
 
As predicted: the circling of the wagons to prevent the advent of the car. So much ignorance of how AI works and so much fear of change, rather than embracing new tools and moving forward.

Excuse me if this is peripheral to the topic, but I find it hilarious that the actors who played artificial lifeforms in films and TV shows that championed their right to recognition and equality are now on picket lines trying to stop the ancestors of these exact things from being allowed to evolve.

I expect, as always, God is on the side of the expensive lawyers, and progress will once again be crippled by the interests of existing business hegemony.
 
It is you who don't understand: no one stole anything. Let me give you a hypothetical:

AI is now capable of controlling a humanoid robot.

If I sat such a robot in a comfy chair and gave it a legally purchased physical copy of the book to read, is that theft?

Such a robot forgets nothing.

Such a robot is capable of reading vastly faster than a human. If it read the Library of Congress in a day, is that theft?

Simply because it remembers more efficiently and uses what it reads to improve itself does not make it theft. It does not attempt to redistribute or otherwise circumvent the rights of the author; it does, however, integrate what it reads into its own learning in a manner specifically designed to mimic the workings of our own brains.

This allows it to generate its own stories derivative of the vast numbers of books it has read, much as any human author does: vastly faster and more efficient in some ways, vastly more primitive in others. But this is early in the evolution of a new technology.
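The "learns from the text rather than keeping a copy" idea above can be illustrated with a deliberately crude toy. Nothing here resembles a real transformer; the corpus, function names, and sample output are all invented for illustration. A word-bigram sampler records only which word tends to follow which, then generates new sequences by walking those statistics:

```python
import random
from collections import defaultdict

def train_bigrams(text):
    """Record, for each word, the words that were seen following it."""
    model = defaultdict(list)
    words = text.split()
    for prev, nxt in zip(words, words[1:]):
        model[prev].append(nxt)
    return dict(model)

def generate(model, start, max_words=10, seed=42):
    """Walk the bigram table, picking a recorded successor at random."""
    rng = random.Random(seed)
    out = [start]
    for _ in range(max_words - 1):
        successors = model.get(out[-1])
        if not successors:
            break  # dead end: no word was ever seen after this one
        out.append(rng.choice(successors))
    return " ".join(out)

corpus = ("the knight rode north and the raven flew south "
          "and the knight flew home")
model = train_bigrams(corpus)
print(generate(model, "the"))
```

Even this toy blurs the line the lawyers are arguing about: it plainly "learned" from the corpus and can emit short fragments of it, yet it stores no passage as a whole. Real models compress far more aggressively, but the same question of degree applies.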

I'm sure the lawyers will find a way of defining this as plagiarism or something similar (the recent settlement with one of the writers' unions specifically referenced this aspect), so the concern is real, and not, as you describe it, "YOU stole my work to make money off it!!!"
 
Truly, thou art Majikthise: the lawyers are precisely who will attempt to define ML as "theft" in order to prevent this new tech from disrupting the highly lucrative hegemony of the media industry.
 
Did theft occur? Yes.

From The Hollywood Reporter:

"As evidence of infringement, the suit filed by authors points to ChatGPT generating summaries of their novels when prompted. They argue that’s “only possible if ChatGPT was trained on Plaintiffs’ copyrighted works.”"

"And because the AI system can’t function without the information extracted from the material, the software programs known as large language models that power ChatGPT “are themselves infringing derivative works, made without Plaintiffs’ permission and in violation of their exclusive rights under the Copyright Act,” the suit says. A derivative is a work based upon a preexisting, copyright-protected work."

"The authors take issue with OpenAI illegally downloading hundreds of thousands of books to train its AI system. In June 2018, the company revealed that it fed GPT-1 — the first iteration of its large language model — a collection of over 7,000 novels on BookCorpus, which was assembled by a team of AI researchers."

Full article: https://www.hollywoodreporter.com/business/business-news/authors-sue-openai-novels-1235526462/
 
QED: the expensive lawyers are claiming that "training on the works" = "infringing on the works". I called it, and there it is.
 
I await the lawsuits against schools and students who do book reports. If the book reports are accurate, clearly the kiddies stole the words right out of the authors' bank accounts.
 
What is wrong with you people?

ChatGPT is a COMPUTER program. If you want to find something on Google and enter the wrong search words, you won't get what you're looking for. It's a computer program that runs off of IF [this input], THEN [this response] prompts and a list of replies.

HUMAN programmers who understood storytelling realized that a computer program could use the same basic skeleton that could get new skins and clothes no matter if the story was set in the past, present or future. Action-adventure stories are action-adventure stories no matter the time period. But the problem was: selecting stories to be dissected into their component parts. They could have created ORIGINAL material but nooooooo. That's tooooooooo exswensive. So what do they do? Just grab stories off the net without permission, without compensation and because they were FREE.

So, they break a bunch of SF stories into their component parts. They label these parts. That way, when you ask for an original SF story with certain elements, the PROGRAM grabs the appropriate parts from columns A, B, C and D. And everybody goes ooooh! and ahhhhh! because it's good. And they believe the ILLUSION that "Hey Bob! I just wrote a story!" No, you didn't. The same with Medieval Fantasy, Modern Detective or Noir film.

Take Midjourney. Same thing. A program that identifies certain shapes and labels them, along with some mixing of pieces so it does not look EXACTLY like the original, and voila! More oooohs! and ahhhs! Does anyone remember Paint By Number? You got a surface where the part you paint is marked off by light blue lines and each enclosed area has a different number or sometimes, the same number. For example, 15 means dark brown, 13 means light brown and 10 is green.

Don't act like a PROGRAM is the same as a 10 year old kid with a pre-existing brain. It's not.
 
What is wrong with you people?
We're human. We see progress, we don't immediately take a dump on it. Even if it does mean the buggy whip manufacturers might take a financial hit.

Don't act like a PROGRAM is the same as a 10 year old kid with a pre-existing brain. It's not.
That ten-year-old kid's brain was educated by having it read and learn from ("steal from," in the parlance of some) others over those previous ten years. Very few 10-year-olds form spontaneously as 10-year-olds.
 
We're human. We see progress, we don't immediately take a dump on it. Even if it does mean the buggy whip manufacturers might take a financial hit.


That ten-year-old kid's brain was educated by having it read and learn from ("steal from," in the parlance of some) others over those previous ten years. Very few 10-year-olds form spontaneously as 10-year-olds.

Don't look too closely - you might see the obvious.

 
I wonder: assume an AI was created that designed actual physical hardware. It didn't just crank out pretty pictures, but 3D CAD models of components (along with the manufacturing process for each part) and the complete machine. The machine could be a car, an airplane, a space launch vehicle, a communications satellite. Assume also that the design isn't merely slapped together, but optimized for all the factors the human designers wish, with industry-recognized thermal, structural, aerodynamic, etc. simulations. So it could, say, produce a launcher capable of putting a 50 kg payload into LEO at a world-beatingly low price.

But in order to attain this design excellence, the AI is trained on all the launchers that came before, from the V-2 through Soyuz, Titan, Saturn, Shuttle, Falcon, all through publicly available information. Would *this* be considered stealing? Would current slower, less efficient and effective designers of rockets sue to prevent the competition?
 
What is wrong with you people?

ChatGPT is a COMPUTER program. If you want to find something on Google and enter the wrong search words, you won't get what you're looking for. It's a computer program that runs off of IF [this input], THEN [this response] prompts and a list of replies.

HUMAN programmers who understood storytelling realized that a computer program could use the same basic skeleton that could get new skins and clothes no matter if the story was set in the past, present or future. Action-adventure stories are action-adventure stories no matter the time period. But the problem was: selecting stories to be dissected into their component parts. They could have created ORIGINAL material but nooooooo. That's tooooooooo exswensive. So what do they do? Just grab stories off the net without permission, without compensation and because they were FREE.

So, they break a bunch of SF stories into their component parts. They label these parts. That way, when you ask for an original SF story with certain elements, the PROGRAM grabs the appropriate parts from columns A, B, C and D. And everybody goes ooooh! and ahhhhh! because it's good. And they believe the ILLUSION that "Hey Bob! I just wrote a story!" No, you didn't. The same with Medieval Fantasy, Modern Detective or Noir film.

Take Midjourney. Same thing. A program that identifies certain shapes and labels them, along with some mixing of pieces so it does not look EXACTLY like the original, and voila! More oooohs! and ahhhs! Does anyone remember Paint By Number? You got a surface where the part you paint is marked off by light blue lines and each enclosed area has a different number or sometimes, the same number. For example, 15 means dark brown, 13 means light brown and 10 is green.

Don't act like a PROGRAM is the same as a 10 year old kid with a pre-existing brain. It's not.
What triggers me is how vehemently, blatantly, and aggressively ignorant you are on this topic, yet you presume to post about it. Your description of AI is wrong in every important respect.
 
I wonder: assume an AI was created that designed actual physical hardware. It didn't just crank out pretty pictures, but 3D CAD models of components (along with the manufacturing process for each part) and the complete machine. The machine could be a car, an airplane, a space launch vehicle, a communications satellite. Assume also that the design isn't merely slapped together, but optimized for all the factors the human designers wish, with industry-recognized thermal, structural, aerodynamic, etc. simulations. So it could, say, produce a launcher capable of putting a 50 kg payload into LEO at a world-beatingly low price.

But in order to attain this design excellence, the AI is trained on all the launchers that came before, from the V-2 through Soyuz, Titan, Saturn, Shuttle, Falcon, all through publicly available information. Would *this* be considered stealing? Would current slower, less efficient and effective designers of rockets sue to prevent the competition?

This isn't about competition.

 
How about this: an AI is created that reads every single medical text, paper, report and consequently becomes the best - almost perfect - diagnostician on the planet. Even though it "stole" the knowledge it needed to get there, should it be used, or sued into oblivion?
 
So... would you be as opposed to the "machine-designing AI" as you are to the "story-writing AI?"

First, AI is a computer program. Years ago, an earlier version designed a structure for use in space. This was based on its HUMAN programmers giving it certain inputs, and a way to select design elements based on those inputs. Along with this, human engineers were given the same problem to solve on their own. The computer output looked different but met the structural design parameters. The human version was different and also met the structural design parameters.
 
So... would you be as opposed to the "machine-designing AI" as you are to the "story-writing AI?"

A computer program called ChatGPT is a way to make the rich, richer. It has no need for food, clothing or shelter, but the rich want new ways to be richer so they enlisted HUMAN BEINGS to find a way to do that. Any invention is designed to serve people. Originally, the goal was to get machines with onboard cameras to do dangerous or repetitive work, freeing people to become unemployed, and giving greater PROFITS to the already rich.
 
You are so wrong. It's not just a computer program; it's a paradigm shift in the way we work with data, one that has been decades in the making. I personally know people who have been working in neural nets, then ML, and finally LLMs for 30 years. It's as huge as the PC revolution or the internet revolution, and every upheaval like this has consequences.

My answer to this is the same as for every previous one, all of which I have been intimately involved in: control the technology, or it will control you. I was a darkroom tech; no one cried for us when photography went digital. It was "learn Photoshop or go unemployed".

The current revolution builds on the previous ones, and is enabled by the massive farms of GPUs that were developed for crypto and the massive datamining developed for social media and (bleh) marketing. It's telling that the recent writers' deal doesn't seek to block the use of AI, but to restrict it to "writers only". I personally believe that trying to brake/steer tech with legal limitations just creates more problems than it solves.

AI has been the butt of jokes for almost as long as the field has existed: artificial dumbness, the clumsy chatbots and phone systems that misunderstand your responses. But it has finally reached a level of maturity that makes it useful in dozens of fields, and NOW the Luddites come out in force? What about the people who spent their lives building this stuff, the tiny elite who now actually, properly understand and can develop these immense and complex systems? Does their work have no value?
 
The HUMAN developers were paid for their work. Just as virtual reality goggles were called a "solution in search of a market," the human developers have built a device most people don't want. Not every new device or invention ends up being bought. The money invested is lost. The formula is simple: if it doesn't sell, it disappears. And people have very short memories.

The revolution, revolution? Hardly. It's just rich people hiring HUMAN BEINGS to develop things that will make them richer, AND allow them to let workers go in the process so they can pocket GREATER PROFITS.

I worked in a darkroom also. I have no problem with the development of CCDs and digital photography. I studied the technical side of CCD operation. Well done.

Have you read the text of any of the copyright infringement lawsuits? All of the writers involved: ARE they Luddites? Is that what you're saying? And their lifetime devotion to writing, their hard work, for which they got paid: does their work have no value? OpenAI thinks so. They felt they could scrape whatever THEY WANTED OFF THE INTERNET FOR THEIR PROFIT. And what did they use that scraped data for? To CREATE A PROFIT-MAKING MACHINE. I know all about the server farms and cloud computing/storage. Big deal.

Right now, the valuation for OpenAI is $29 billion. They are considering selling shares that would more than double that valuation. AND the PEOPLE they used to get there were not consulted or paid anything.
 
You do realise the vast majority of LLMs are open source, built collaboratively by dedicated researchers seeking to advance humankind? Their work is finding useful applications in medicine, science, engineering and a host of other real-world disciplines. All this should be sacrificed and hamstrung for the benefit of what? Hollywood? The entertainment industry?
 
We're not talking about medicine, science or engineering. Or the advancement of mankind.

Just so you're clear, here's what happened.

Books - actual books - have the Copyright symbol and date, usually followed by All Rights Reserved. What DOES that mean? Hi, George R.R. Martin, I want to make a movie or TV show based on your books. Reply: You're going to have to pay me. Hi, Stephen King, I want to make a movie or TV show based on your books. Same reply. Or a Broadway play or a musical or coloring books, same reply.

Why didn't OpenAI contact any author anywhere to get their permission? It's because the authors' lawyers would slow down the process, and if we're talking about OpenAI dealing with hundreds of authors and their agents, it would have drained money from their project. Deal making can be a long process, and sometimes one party decides an equitable arrangement cannot be reached and walks away. OpenAI felt they couldn't put in the time or up-front money. They would need an army of lawyers to deal with all of the authors.

Derivative works: Hi, John Grisham, we want to do a TV show loosely based on one of your books. We're going to change a few things to make it more dramatic. What do you say? As long as it's clear in the credits that the TV show is based on the book as opposed to being a direct, accurate copy of it, then Mr. Grisham might decide to say yes, or not.

George Lucas made more licensing the rights to the art, ship and creature designs from Star Wars than from the movie. Pepsi Cola paid $2 billion to put well-known Star Wars images on soda pop cans. They had to PAY for the RIGHT to do this.

Back to the advancement of mankind. Nikola Tesla demonstrated the wireless transmission of electricity. His financial backer was J.P. Morgan. Mister Morgan was impressed by the demonstration and asked Tesla, "How can I place a meter on this?" When Tesla replied that he couldn't, Mister Morgan had the transmission tower torn down and sold for scrap.
 
The HUMAN developers were paid for their work. Just as virtual reality goggles were called a "solution in search of a market," the human developers have built a device most people don't want. Not every new device or invention ends up being bought. The money invested is lost. The formula is simple: if it doesn't sell, it disappears. And people have very short memories.
Yup. From the 1930's to the 1960's, phone companies worked real hard to develop "video phones." But the technology wasn't there, the cost was too high, the infrastructure couldn't handle the load. So video phones disappeared, never to be seen again.

Nowadays, nobody even *thinks* about seeing videos on their phones. Because since the technology ran into difficulties early on, the whole idea was abandoned.

One of the early competitors to the internal combustion engine for automobiles was the electric motor and batteries. But that technology also, after a great deal of effort and expense, was found lacking and was dropped. And thus electric vehicles never appeared again.

Long ago it was just expected that reusable rockets would be developed and that they'd land on their tails on plumes of fire. But that proved to be *intensely* difficult with 1950's/60's tech and the whole idea was dropped. The very notion of reusable VTOVL rockets! Silly!

Indeed, once a technology runs into serious technical, political or financial difficulties, it is dropped forever. Because that's what humans do: surrender and never try again.
 
Wha... what? Surrender and never try again? I don't think so...

Take the X-20 Dyna-Soar. A full scale mock-up was built, astronauts were selected, and Boeing began construction. Then the Kennedy administration came in and cancelled it. In 1969, the movie Marooned showed a similar vehicle. It added a piece of tech that made a lot of sense. A hollow cone that covered the Dyna-Soar-like vehicle which was flush with the top of the booster. "Perfect," I thought. Any dangerous crosswinds would be deflected by this outer shell.
 
Take the X-20 Dyna-Soar. A hollow cone that covered the Dyna-Soar-like vehicle which was flush with the top of the booster. "Perfect," I thought. Any dangerous crosswinds would be deflected by this outer shell.
LockMart had a spearhead version--that I want atop Starship:

(Design and testing of a large composite asymmetric payload fairing---tailored asymmetric PLF).

Maybe that can make a comeback.

Still---I liked CRTs that used lightpens... microfiche of NASA documents needs to exist... computers can fail.

What if Rockwell's DC-X took off?

We might not have Falcon's boost-back.
 
The natural extension of the near-future capabilities of AI is, at least:
  1. Self-replicating
  2. Self-governed
  3. Singly or in groups, having all the knowledge of mankind's scientific processes and achievements, and capable of charting paths of investigation and development to further that knowledge, presumably far faster than the biological beings now named 'human'.
  4. Not constrained by 'law/regulation' or 'morals' or a 'code of ethics' from any action 'it' chooses, including extermination of species 'it' determines undesirable or unnecessary consumers of 'required resources'.
There is no 'built-in Positronic Ethics regulation' available, such as the one Asimov invented when he wrote "I, Robot".

Copyrights are an interesting intellectual exercise. I suspect it's somewhat analogous to that sly Nero fiddling away while Rome burned.

BTW, I agree on the importance of copyright protection, as I have been severely hammered by such acts of unauthorized piracy and reproduction.
 
The natural extension of the near-future capabilities of AI is, at least:
  1. Self-replicating
  2. Self-governed
  3. Singly or in groups, having all the knowledge of mankind's scientific processes and achievements, and capable of charting paths of investigation and development to further that knowledge, presumably far faster than the biological beings now named 'human'.
  4. Not constrained by 'law/regulation' or 'morals' or a 'code of ethics' from any action 'it' chooses, including extermination of species 'it' determines undesirable or unnecessary consumers of 'required resources'.
There is no 'built-in Positronic Ethics regulation' available, such as the one Asimov invented when he wrote "I, Robot".

Copyrights are an interesting intellectual exercise. I suspect it's somewhat analogous to that sly Nero fiddling away while Rome burned.

BTW, I agree on the importance of copyright protection, as I have been severely hammered by such acts of unauthorized piracy and reproduction.

Beyond wishful thinking. The future of AI is NOT that; it is this:

1. Tightly controlled by the military and private corporations.
2. Tightly controlled. Due to money/income considerations.
3. Nonsense. No imagination whatsoever.
4. Terminator 2? I'm sure the military saw the movie.

Unauthorized piracy? How dare you say that out loud? I have sent take-down notices for copyright infringement and will continue to do so.
 
I await the lawsuits against schools and students who do book reports. If the book reports are accurate, clearly the kiddies stole the words right out of the authors' bank accounts.

I wonder if you'd react the same way if you found out your historical research PDFs were being distributed via subscription access on a database somewhere. You're not getting a cent of this revenue, but they are making five or six times (or 10 or 100 times, what does it matter) what you do from your works, while just taking your work, and other people's, and distributing it for a fee. That would make sense; at least it would be logically consistent, rather than whatever this is. You'd have a right to sue that database, too.

This is how private torrent trackers work, after all, and it's roughly analogous to how OpenAI builds its models.

Because it's the same exact thing going on: IP theft, software piracy, whatever you want to call it.

Some snooty people took someone's books, someone else's works or whatever, without giving fair credit or remuneration, and used it to train a generative algorithm which they are selling access to for profit. That's a problem, and one that will be corrected, eventually.

Why do people get so up in arms about this? It should be trivial for an AI business to make so much money that it can just proceed to pay all its contributors, and it should be able to afford enough programmers (or train AI) to edit the training database to remove contributors who no longer wish to be a part of it. Running a legitimate business isn't hard, unless you aren't profitable; then it really is lame.
 
The disparity is that an AI trained on the texts is not there to redistribute the works, nor will it. It has simply absorbed the knowledge within them in ways that mimic our own memory. I haven't tried asking ChatGPT to regurgitate the first chapter of "A Song of Ice & Fire", but I doubt it could. (And personally I'd rather it didn't; awfully bad writing.)
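For what it's worth, the "could it regurgitate a chapter" question is testable in a crude way: compare a model's output against the source text and measure the longest run of words they share verbatim. A minimal sketch, with the function name and sample strings invented for illustration (long shared runs would suggest copying; short ones only shared phrasing):

```python
def longest_common_wordrun(text_a, text_b):
    """Length, in words, of the longest verbatim word sequence in both texts."""
    a, b = text_a.split(), text_b.split()
    best = 0
    # classic longest-common-substring DP, over words instead of characters
    prev = [0] * (len(b) + 1)
    for i in range(1, len(a) + 1):
        cur = [0] * (len(b) + 1)
        for j in range(1, len(b) + 1):
            if a[i - 1] == b[j - 1]:
                cur[j] = prev[j - 1] + 1  # extend the run ending here
                best = max(best, cur[j])
        prev = cur
    return best

source = "winter is coming and the wall stands tall"
output = "they say winter is coming but the wall may fall"
print(longest_common_wordrun(source, output))  # → 3 ("winter is coming")
```

Researchers studying memorization use far more elaborate versions of this idea, but even this toy distinguishes a shared three-word phrase from a reproduced paragraph.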
 
I wonder if you'd react the same if you found out your historical research PDFs were being distributed for a subscription access on a database somewhere.
Not even remotely the same. The correct analogy: someone takes all of my, say, US Fighter Projects, redraws the diagrams (perhaps as 3D full color renderings), re-writes the text in their own words and publishes the result under their own name.

My response: Meh. The publishing world is *filled* with books built on other books.

For example: I've produced a few things - CAD diagrams of the 4,000 ton Orion, say, or the Doomsday Orion - that numerous others have taken and run with. Those designs did not exist in the world (at least not those specific configurations) until *I* created and released them. Now others are doing their thing with them, very likely making money off them. And good for them. If someone makes a mint with, say, a Transformer version of the Orion battleship... well, I'll be annoyed that *I* didn't do that and make that money, but they are certainly free to do so.
 