Please ban AI generated posts

Elysium

Recently there has been an unfortunate increase in long-winded and somewhat generically written posts that I'm sure were written, wholly or in part, by AI.

I've seen forum arguments where one side clearly decided to rely on ChatGPT, drowning the other side of the argument in walls of text, or, even more 'hilariously', where two people were furiously pasting each other's replies into their AI of choice, and the results were not something that I would like to read or engage with. Often the posters did not fully understand the points the AI was making on their behalf, or were not even aware of them.

Besides, ChatGPT's (and other LLMs') information is unverifiable, unsourced, often self-contradictory, and has a tendency to agree with the prompter, which is great for generating text that supports the prompter's premise, and much less great at allowing critical reflection on one's statements.
If I want to learn about a particular topic from ChatGPT, I will ask ChatGPT, not come here to post. The fact that this forum has become a long-enduring place with a lot of information on hard-to-find topics, and a good place for discussion, stems from the fact that said discussion has managed to remain on-point, respectful, and high-effort, and that people making such posts were not discouraged from posting. I hope this remains the case in the future.

I think this should be an absolute policy. While doing AI research is fine, as is using an LLM as a sounding board for your ideas, please take the time to assemble your findings into a coherent whole that is relevant to the thread, and type it out yourself.

Turning three sentences of prompts into three paragraphs does not enhance your argument (it may not even reproduce it); it just makes reading it waste more time.
 
I'm not sure if that could be handled. The forum has seen the proportion of speculative posts increase over those made with purely informative intent, which was the original idea.
For obvious reasons some of the most prolific posters are now speculation specialists only. I'm losing interest in those threads and I only keep an eye on them for moderating purposes. It's really hard to have some control over 50-100 new posts a day dealing with guessing/what if/I think/I'd do it that way/what about...
If it's so difficult to ask people to make moderate use of guessing, or to keep the technical threads free of news and speculative posts, how would we be able to succeed against the coming AI-slop era?

A call for everyone's contribution to keeping the forum tidy!
 
A call for everyone's contribution to keeping the forum tidy!
I have, unfortunately, no hope that people will be collaborative in that sense.

I'm not sure if that could be handled. The forum has seen the proportion of speculative posts increase over those made with purely informative intent, which was the original idea.
For obvious reasons some of the most prolific posters are now speculation specialists only.
Can I suggest a possible solution?

Remove the reaction score.

Most of the people that engage in this behavior do so for the gratification of seeing their posts liked by other users.

Once that has been done, the three-strikes rule ought to really be applied, and people should start to see their posting privileges revoked (either temporarily or not), and not just in extreme cases.
Repeatedly posting speculative posts or repeatedly posting in the wrong sections should be grounds for banning as well.
 
Too right, CiTrus90. Remove the likes from them, and if it is found out that a poster is using AI, then ban them completely, just as you would a human poster who gets out of line; the three-strikes rule should be issued as a warning to them.
 
Too right, CiTrus90. Remove the likes from them, and if it is found out that a poster is using AI, then ban them completely, just as you would a human poster who gets out of line; the three-strikes rule should be issued as a warning to them.
Nope.

Everyone's.

I think the reaction score, as a whole, has made this site worse. Like every other site that uses it.
It's a system mostly used for agreeing with someone's opinion (or speculation?) and rarely for its intended function of rewarding meaningful contributions to the site.

It's a dopamine trap, because it encourages people to post anything that can get them likes.

Removing that would solve a lot of the trouble we see daily, in my opinion.
 
It's a dopamine trap, because it encourages people to post anything that can get them likes.

Removing that would solve a lot of the trouble we see daily, in my opinion.
I believe you overestimate the effect of the like button in that case, because the desired feedback can take other forms too, meaning that people who are thirsty for attention will ultimately get their kick one way or another.

But I generally believe that most aren't motivated by likes or dopamine. I believe most are just interested in seeing their bias confirmed/agreed with, or more generally in controlling/fabricating a certain appearance of information that they deem suitable for their personal views.

Just my two cents.
 
Recently there has been an unfortunate increase in long-winded and somewhat generically written posts that I'm sure were written, wholly or in part, by AI.
How do you tell? As pointed out in another thread this week, a whole bunch of classic texts, written before AI, are reported as AI-written by the various AI detectors out there (which are probably themselves relying on AI for the alleged detection). Some of us are naturally long-winded, others may lack the time to write more concisely (cf Blaise Pascal).

If there's no way of telling, there's no way of policing it.

If the issue is the content, not the length, then that's a different problem entirely, and there is always going to be part of the audience here who are less widely read/more credulous.
 
I have a concern. The act of banning something itself could be abused. There is a possibility that attacks may arise in which someone arbitrarily accuses another person of “being an AI” or “using AI.” However, it is impossible to determine whether such accusations are actually true.

For example, I personally use AI only as a basis for translation in order to compensate for the mistake I made in giving up studying English. Because of that, my posts might contain phrasing that resembles AI-generated language.

Of course, an organization cannot grow unless it accepts a certain level of risk.
 
For example, I personally use AI only as a basis for translation in order to compensate for the mistake I made in giving up studying English. Because of that, my posts might contain phrasing that resembles AI-generated language.

I was just reading an interview with the author of an acclaimed novel in which he repeatedly starts his points with the normally rarely seen "May I say". It sounds odd, and not typical of normal written language, but it's perfectly valid and clearly a personal tic - provided you have the background context to recognise that. If you only saw the answers, a lot of people might claim it was clearly an AI.
 
I was just reading an interview with the author of an acclaimed novel in which he repeatedly starts his points with the normally rarely seen "May I say". It sounds odd, and not typical of normal written language, but it's perfectly valid and clearly a personal tic - provided you have the background context to recognise that. If you only saw the answers, a lot of people might claim it was clearly an AI.
Or, case in point, me saying "I believe" multiple times in reply #8 here. Quite frankly (also a favorite), as a non-native English speaker you will develop certain patterns, and in that context you're at an inherent disadvantage. I doubt there are lots of AI posts here; a lot were probably formatted with the help of AI, or translated. But fully fledged, continuous posts? Ehhhh.

Honestly, it didn't cross my mind at all until I saw this thread.
 
Too Right CiTrus90, remove the likes from them if it is found out that the poster is AI then ban them completely just as you would a human poster if they get out of line, the three strikes rule should be issued as a warning to them.
It may be impossible to completely eliminate the role of human judgment in determining correctness, but how about a system in which the moderators decide collectively whether a post was generated by AI? The important point is that the decision should be made by having many people evaluate it, and by reaching a conclusion based on multiple opinions.


It is undeniably true that no one here wants this site to be flooded with AI-generated spam, so it is likely that some kind of progress on this issue will eventually be necessary.
 
My opinion -

LLMs are trained on data they found on the internet (or stolen ebooks, a topic for another day). If you use an LLM AI to make a post on an on-topic subject for this forum, there's a pretty good chance one of the major sources of said post will be existing posts on the forum. So, it's not really a great idea to use AI for basic research on forum topics, or you risk eating your own tail like Ouroboros.

Additionally, I am not really very interested in posts reposting ChatGPT or Gemini or Grok's take on things.

Using ChatGPT to explain to yourself how something works is one thing, but parroting that answer on the forum is another. I am using Copilot at work to help me with some coding in Java (a language I am unfamiliar with) and SOAP XML API requests, but I'm not going to start posting what it gave me onto coding forums as my own work.

That said, there's already a tool for this - the Report Post function.

Any posts deemed to be of low value / "AI Slop" can be removed. Any user posting too much in this category will get a post ban.
 
I have a concern. The act of banning something itself could be abused. There is a possibility that attacks may arise in which someone arbitrarily accuses another person of “being an AI” or “using AI.” However, it is impossible to determine whether such accusations are actually true.

This is a great point and one I was going to make earlier today but you beat me to it.

I've seen this abused, and I've seen communities adopt policies that encourage abuse.

As an example, the website Bandcamp.com is a storefront for artists/performers to sell music. Bandcamp recently adopted a "zero tolerance" policy for AI-generated music. Any content *suspected* of being AI content is removed and the artist/performer is banned. There is no appeal process. Anyone can send them an email claiming something is AI and it will be removed, regardless of the quality of the music or whether it is or is not AI generated.

In other online communities I've seen accusations made about how something was AI generated causing an onslaught of torches and pitchforks - again, regardless of whether it actually was AI generated or the quality of the content.

And that's the real issue here. Quality. There have been many, many low quality or low effort posts that often drown out or bury high quality posts.

Just try and read the F/A-XX thread (still a strike fighter!)

That's the problem. It's not AI specifically, it's the post quality - and AI-generated posts may be part of that, but AI shouldn't be singled out as the problem here. Ultimately a human is still in control, pushing the buttons to make things happen. A low quality post, whether generated by a person using AI or not, is ultimately the responsibility of the person posting. If they are posting with low standards and low effort, that's the problem no matter what tools they are using.

That said, I don't think that moderators should be the arbiters of quality (other than in extreme cases, or off-topic posts, etc.). I don't see that as their job.

It's the SPF members that should be "moderating" low quality/low effort. If there are poor quality posts or discussions, bad information being taken as truth, etc. the best remedy is to present high quality posts.

... And ignore posters who habitually post low quality, low effort content. Don't read it, don't engage with it.
 
As an example, the website Bandcamp.com is a storefront for artists/performers to sell music. Bandcamp recently adopted a "zero tolerance" policy for AI-generated music. Any content *suspected* of being AI content is removed and the artist/performer is banned. There is no appeal process. Anyone can send them an email claiming something is AI and it will be removed, regardless of the quality of the music or whether it is or is not AI generated.

In other online communities I've seen accusations made about how something was AI generated causing an onslaught of torches and pitchforks - again, regardless of whether it actually was AI generated or the quality of the content.
I'm enough of a troll in personality to now want to go through and report EVERYTHING on Bandcamp as AI, just to destroy the site.
 
Is a post relevant and materially contributory? If so, then I don't care if it was written by hurling scrabble tiles at a sticky wall.

If it isn't, then can it, if necessary.

By that standard, people who abuse AI in their composition will self-select.

This whole debate will be anachronistic and irrelevant much more quickly than we think anyway. In very short order you genuinely won't be able to tell. Doesn't matter if that's a good thing or not. Be wary of the follow-on consequences of legislating in a fluid and rapidly changing set of conditions. When you're riding a tiger, spurs may not be the preferred method of control input.
 

“Hence the cold shiver: if an author is determined to use AI, then cover their tracks, there’s very little we can do.”

Prof Patrick Juola, a US computer scientist known for his work on authorship attribution, agreed. “I don’t want to call AI detection tools a scam, but it’s a technology that simply doesn’t work.”

He likened the failure to antibiotic resistance: “AI is a learning system continually upgraded by its manufacturers. If there was a detection technology that worked, then people would simply build better AI tools to fool it.”


Meanwhile audience policing as a method of AI detection just doesn't work (that Bandcamp approach is awful and probably inviting legal action). One Facebook site that keeps showing up in my mentions is Cancelled Aircraft, and a good 90% of the illos claimed as AI by the audience are actually from Tony Buttler's Secret Projects books and similar. The lack of imagination (and the number of people who think they're aerodynamics experts while actual designers were incompetent) is rather depressing.
 
How do you tell? As pointed out in another thread this week, a whole bunch of classic texts, written before AI, are reported as AI-written by the various AI detectors out there (which are probably themselves relying on AI for the alleged detection). Some of us are naturally long-winded, others may lack the time to write more concisely (cf Blaise Pascal).

If there's no way of telling, there's no way of policing it.

If the issue is the content, not the length, then that's a different problem entirely, and there is always going to be part of the audience here who are less widely read/more credulous.
Yeah this is another major problem.

There are lots of telltale signs of AI use, but machine-based detection is notoriously unreliable, and a lot of human-based detection by laypeople can be just as unreliable. Many commonly used telltale signs are not entirely reliable, and many commonly believed "signs" are ridiculously non-specific, to the point of uselessness.

The best set of formal written criteria for detecting LLM use I've seen so far is Wikipedia's Signs of AI Writing advice page, but analyzing writing against that set of criteria is a very involved and time-intensive task.

It's doable on Wikipedia for enforcing their anti-LLM policies, but Wikipedia editing is a very time-intensive task in general, with a wide base of extremely dedicated volunteer contributors enforcing the rules, so the overhead of applying those criteria in cases of suspected LLM misuse isn't as noticeable there as it would be elsewhere, given how much overhead is already involved in many types of Wikipedia editing activities.

I'm not sure how well that approach would scale on a forum like Secret Projects, where the moderator base is much smaller, there's no guarantee that the same criteria will be applied, and the culture is extremely different from Wikipedia's hyper-bureaucratic (yet also maximally transparent) approach.
 
How do you tell? As pointed out in another thread this week, a whole bunch of classic texts, written before AI, are reported as AI-written by the various AI detectors out there (which are probably themselves relying on AI for the alleged detection). Some of us are naturally long-winded, others may lack the time to write more concisely (cf Blaise Pascal).

If there's no way of telling, there's no way of policing it.

If the issue is the content, not the length, then that's a different problem entirely, and there is always going to be part of the audience here who are less widely read/more credulous.
I would go with the standard practice of innocent until proven guilty. First the rule should be posted that using AI to write your posts, wholly or in part, is not allowed, so everyone understands the rule. Then if someone still manages to write something indistinguishable from actual human writing, something well written, coherent and relevant, then nobody should care whether it's AI or not; my purpose here is not to run witch hunts, but to enforce quality. If somebody writes in a suspiciously ChatGPT-like impeccable corporate style, while not making much sense, then they shall be politely reminded that using AI to write is forbidden, and it will be treated as circumstantial evidence - I'm sure mods are used to telling the difference between people having a bad day and actual bad eggs.

However, if someone suddenly starts writing these nonsensical listicles that seemingly go on and on, we go Butlerian Jihad on his ass.

I didn't make my original post because of some sneaking suspicion; there were multiple occurrences that were jarring and obvious enough to cross a certain pain-tolerance threshold for me.

That these detectors are not accurate does not mean that AI writing can't be detected. I'm not sure how these detectors are trained, but the problem is probably their quality. I mean, the average human is far less likely than ChatGPT to write like Mary Shelley (especially if taken out of context), but that does not mean that the difference between ChatGPT's output and a thoughtful human's is undetectable; if that were the case, humanity would be in trouble. It just means that these detectors are not very good.
 
I would go with the standard practice of innocent until proven guilty.
The devil is in the details. Let's break this down.

the rule should be posted that using AI to write your posts, wholly or in part, is not allowed,
This rules out anyone using automated translation tools such as Google Translate, which isn't the problem you're targeting. Our non-English as a First Language membership is a tremendous asset.

It also rules out anyone quoting an AI result to show how poor it is, which I've done at least twice in the last week, and which again isn't the problem you're targeting.

Writing rules is difficult.

my purpose here is not to run witch hunts, but to enforce quality.
If you want to enforce quality, then AI isn't the problem, there's plenty of low quality posts that never touch on AI.

And why enforce 'quality'? What is 'quality' and who decides? Or is it simply some measure of elitism?

If somebody writes in a suspiciously ChatGPT-like impeccable corporate style, while not making much sense, then they shall be politely reminded that using AI to write is forbidden, and it will be treated as circumstantial evidence
So guilty until proven innocent, despite what you said initially. Some of us spend our days writing in impeccable corporate style, and it isn't always easy to turn that off. I've no doubt I had days when I sounded like a corporate rulebook while I was busy re-writing the QA manual for BAE Systems NA.

Equally there are several members in good standing here whose posts regularly strike me as nonsensical, but who've been writing that way for a decade or more, so definitely aren't using AIs.

if someone suddenly starts writing these nonsensical listicles that seemingly go on and on
Is that someone who uses an AI, or someone who just doesn't think the same way you do? I know for a fact that if I don't edit myself I can get sucked into a whirlpool of nested exceptions in trying to reach down to the precise point I'm after.

we go Butlerian Jihad on his ass.
You're really not convincing me you want a measured response and innocent until proven guilty.

That these detectors are not accurate does not mean that AI writing can't be detected

“I don’t want to call AI detection tools a scam, but it’s a technology that simply doesn’t work.”
 
So guilty until proven innocent, despite what you said initially. Some of us spend our days writing in impeccable corporate style, and it isn't always easy to turn that off. I've no doubt I had days when I sounded like a corporate rulebook while I was busy re-writing the QA manual for BAE Systems NA.
I have to agree with you here. I've been accused of using AI several times on several forums in the past--and much of this is because my writing style has been trained by decades of describing complex topics to dumb people.

Just because you don't understand what someone has written doesn't mean it's AI... they may just know more than you.

(You may note that I also have a lifelong love of em dashes, another thing that has definitely resulted in AI accusations being hurled at me.)
 
Sure, but generative renditions of aircraft and other secret projects being discussed should be handled on a strict case-by-case basis, or limited to the User Artworks section in a best-case scenario.
I think everyone can agree with your opinion.
 
The Internet and publishers are flooding the media with crap like this.
One more reason to ban AI-generated posts on this forum.
[attached image]
 
Well said, Michel Van. Anyone posting AI stuff like that should be warned and then given a thread ban. I am so fed up with the amount of AI-generated rubbish on the internet these days. :mad:
 
Qa'plaH! We don't need more of whatever that heresy, blasphemy, and abomination unto both machine and man is supposed to be.
 
:rolleyes: It's such a shame that they had to cut off the legs of the astronauts and tie them into their chairs like that. That turned out to be a huge unforeseen problem when they actually landed on the moon and realized they couldn't leave the module or do anything of note. Really makes you wonder about the whole space race don't it?
 
Well said, Michel Van. Anyone posting AI stuff like that should be warned and then given a thread ban.
I have to point out that members of this forum use AI for artistic User Artwork that is good.
We do not have the right to ban those members, because they use AI tools rightfully.

Take this excellent example of AI artwork by our member @Dilandu
[attached image]
 
:rolleyes: It's such a shame that they had to cut off the legs of the astronauts and tie them into their chairs like that. That turned out to be a huge unforeseen problem when they actually landed on the moon and realized they couldn't leave the module or do anything of note. Really makes you wonder about the whole space race don't it?
Astronauts Huey, Dewey and Louie and Mission Commander Lowell would beg to differ with you. :oops:
 
Astronauts Huey, Dewey and Louie and Mission Commander Lowell would beg to differ with you.
That's a reference to the sci-fi movie "Silent Running" and the crew of the USS Valley Forge.
A movie made completely analog, without AI, a long time ago.
[attached image: still from Silent Running]
 
I have to point out that members of this forum use AI for artistic User Artwork that is good.
We do not have the right to ban those members, because they use AI tools rightfully.

May I propose an "AI code" for the Secret Projects rules?

* AI art is generally forbidden from being posted in discussion threads;

* Users who want to post AI art are permitted to create a personal "gallery" thread in the "User artworks and model" section and post their works there;

* As an exception, AI-generated images may be permitted in discussion threads, but only if they serve a technical function - for example, if AI was used to enhance a blurry image, clean up a poorly scanned blueprint, or illustrate a concept that requires visualization. Moderators could remove any such image if they decide that it does not serve a practical purpose;

* AI art posted anywhere on the forum must be clearly labelled as such, to avoid confusion;
 
May I propose an "AI code" for the Secret Projects rules?

* AI art is generally forbidden from being posted in discussion threads;

* Users who want to post AI art are permitted to create a personal "gallery" thread in the "User artworks and model" section and post their works there;

* As an exception, AI-generated images may be permitted in discussion threads, but only if they serve a technical function - for example, if AI was used to enhance a blurry image, clean up a poorly scanned blueprint, or illustrate a concept that requires visualization. Moderators could remove any such image if they decide that it does not serve a practical purpose;

* AI art posted anywhere on the forum must be clearly labelled as such, to avoid confusion;
Yes!
That's the right way.
We have a lot of material such as discoloured photos, scratched microfilms, or scanned photocopies, where AI clean-up and colour correction is helpful.
Or visualising an illustration as a 3D object (best in the "User artworks and model" section).
Also, give moderators the authorisation to remove AI images if they don't serve a purpose in the discussion!
 
May I propose an "AI code" for the Secret Projects rules?

* As an exception, AI-generated images may be permitted in discussion threads, but only if they serve a technical function - for example, if AI was used to enhance a blurry image, clean up a poorly scanned blueprint, or illustrate a concept that requires visualization.
IMHO there should be an additional rule on using AI to illustrate concepts... the AI image should use real artwork or a known design as its starting point. This limits the AI function to modifications of prior content.

Why? Because AI can be quite useful at quickly modifying images, for example to illustrate design mods (e.g. adding weapons under an aircraft, tweaking a wing or tail shape, etc.). But it's very bad at sketching out images completely from scratch.
 
If AI is used correctly then I do not see why it could not be used on the forum.
 
