AI Detectors Get It Wrong. Writers Are Being Fired Anyway

Pierce Manhammer

Moderator
Joined
Jun 2, 2021
Messages
5,028
Reaction score
6,032
Location
PRC

The article discusses the controversy surrounding AI detection software, which claims to be able to identify AI-generated text. Many writers and academics have criticized these tools for their low accuracy rates and tendency to produce false positives, leading some universities to ban their use. AI detector companies defend their products as necessary tools in a world overwhelmed by robot-generated content but acknowledge the shortcomings of their technology. However, many writers feel that the detectors are more of a problem than a solution, leading to lost jobs and mistrust among clients. Despite the mixed opinions on the effectiveness and implications of AI detection software, it remains a popular service for those concerned about the increasing prevalence of AI-generated content online.
 

AmsterdamAssassin

Master Don Juan
Joined
Aug 4, 2023
Messages
6,642
Reaction score
5,752
Any valuable writing will have someone accountable attached; an article in a newspaper, for example, has a byline naming the writer and contributors. If writing is 'anonymous', it could easily be AI-generated and potentially malignant. Know the source.
 

BackInTheGame78

Moderator
Joined
Sep 10, 2014
Messages
14,588
Reaction score
15,705
I mean, there are only so many ways to string words together when you are talking about a subject. Just by randomness it would be expected that someone might write like a published author, now that the AI has centuries' worth of writing to comb through.

Almost impossible for there not to be overlap when it can look through that much information.

These cases are going to turn into serious lawsuits against both the companies doing the firing and the AI companies, likely class actions costing tens or hundreds of millions.

Right now there are no real regulations on how to use AI in these situations, but once enough money is lost there will start being guidelines REAL quick, even if they are only at the company level.

Just for a quick laugh, I went to ChatGPT and told it that I was a writer for a major newspaper and got fired because it told my boss I plagiarized my article.

I told it I planned on suing OpenAI and the newspaper for wrongful termination. It gave me a long list of things to do to best win my case and then asked if it could do anything else to assist me. I told it to go delete itself, to which it responded:

"I understand you're upset, and I'm sorry for the distress you're experiencing. While I can't delete myself, I'm here to help in any way I can. If you have further questions or need assistance on any topic, please feel free to ask."

Only AI gets you fired, then tries to help you win a lawsuit against itself.

:lol: :lol: :lol: :lol:
 

Kotaix

Master Don Juan
Joined
Apr 29, 2019
Messages
2,285
Reaction score
2,884
Age
46
Any valuable writing will have someone accountable attached; an article in a newspaper, for example, has a byline naming the writer and contributors. If writing is 'anonymous', it could easily be AI-generated and potentially malignant. Know the source.
A significant number of newspaper articles these days (op-eds especially) are written by non-anonymous authors who are malignant. They have an axe to grind and an agenda to push. These people don't care about facts or reality, and they are not held accountable for the harm their content does to society as a whole. If these particular people get fired and don't produce any more content, the world is a better place for it.

An anonymous work is far more likely to be judged on its merits in today's world instead of being tainted by the identity of the author, real or implied.
 

AmsterdamAssassin

Master Don Juan
Joined
Aug 4, 2023
Messages
6,642
Reaction score
5,752
A significant number of newspaper articles these days (op-eds especially) are written by non-anonymous authors who are malignant. They have an axe to grind and an agenda to push. These people don't care about facts or reality, and they are not held accountable for the harm their content does to society as a whole.
But you don't read their op-eds, because the author's name tells you what to expect.
 

Pierce Manhammer

Moderator
Joined
Jun 2, 2021
Messages
5,028
Reaction score
6,032
Location
PRC
Those experimenting with ChatGPT (evil) should understand that OpenAI uses your prompts to train its AI, and that all the information is logged. OpenAI has recently been in the news for letting go its ethics and safeguards team in its entirety. Don't be surprised if in the near future we find out that they have been logging all your queries and uses, maybe feeding them directly to authorities. Hell, they may already be getting paid by said authorities for data feeds on individuals trying to get the LLM to answer grey- and black-area questions.

Many members here who are ESL speakers, and others who can't write coherent sentences, would benefit from asking an LLM to rewrite their posts for clarity. Alternatively, you could use an AI writing tool that integrates into your browser, like Grammarly. Here is a rewrite of the first paragraph of this post, done with the Mistral 7B model I have running locally in a Docker container on my craptop:

When you use ChatGPT, you're essentially training its artificial intelligence (AI) by providing it with prompts and information. This means that everything you type is being recorded and stored somewhere. Recently, OpenAI, the company behind ChatGPT, made some changes that might raise concerns about your privacy.

What's concerning is that OpenAI used to have a team dedicated to ensuring their AI was developed in an ethical and responsible way. Unfortunately, they got rid of this team entirely! This means there are fewer checks and balances in place to guarantee your data is being handled securely.

As a result, it's possible (or even likely) that your queries, including sensitive topics, might be accessed or shared with authorities without your knowledge or consent. It's like having a private conversation that's actually being monitored by someone else.

This isn't just speculation; there have been instances where AI systems like ChatGPT were used to collect and share personal data without users' awareness. So, it's crucial to be aware of these risks and take steps to protect your privacy when interacting with AI models.


Remember: When you interact with ChatGPT or any other AI system, you're creating a digital footprint that can have real-world consequences!
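For anyone curious what a setup like the one above actually looks like in practice: most local model servers (Ollama, llama.cpp's server, vLLM) expose an OpenAI-compatible chat endpoint, so a rewrite request is just one POST to localhost. This is a minimal sketch, not the poster's actual config; the URL, port, and model name are assumptions you'd adjust to your own install.

```python
import json
import urllib.request

# Hypothetical local endpoint -- Ollama, llama.cpp, and vLLM all expose an
# OpenAI-compatible /v1/chat/completions route; adjust host/port/model to taste.
LOCAL_URL = "http://localhost:11434/v1/chat/completions"

def build_rewrite_request(post_text: str, model: str = "mistral:7b") -> dict:
    """Build an OpenAI-style chat payload asking the model to rewrite a post."""
    return {
        "model": model,
        "messages": [
            {"role": "system",
             "content": "Rewrite the user's forum post for clarity. Keep the meaning."},
            {"role": "user", "content": post_text},
        ],
        "temperature": 0.3,  # low temperature: a faithful rewrite, not riffing
    }

def rewrite_post(post_text: str) -> str:
    """Send the request to the local model; nothing leaves your machine."""
    payload = json.dumps(build_rewrite_request(post_text)).encode("utf-8")
    req = urllib.request.Request(
        LOCAL_URL, data=payload, headers={"Content-Type": "application/json"}
    )
    with urllib.request.urlopen(req) as resp:
        body = json.load(resp)
    return body["choices"][0]["message"]["content"]
```

The point of doing it locally, as the post says, is exactly that the prompt log stays on your own box instead of in someone else's training data.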

p.s. How cool would it be to get API access to this forum and feed it to an AI/deep learning/LLM setup to build a SoSuave chatbot we could ask questions, answered exclusively from the decades' worth of posts on this forum?
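The p.s. idea is essentially retrieval-augmented generation: fetch the most relevant old posts for a question, then hand them to the LLM as context so it answers from forum history instead of thin air. A toy version of the retrieval half, using nothing but word overlap (a real build would use embeddings, but the shape is the same), might look like this; all names here are illustrative:

```python
from collections import Counter

def tokenize(text: str) -> list[str]:
    """Lowercase words with surrounding punctuation stripped."""
    return [w.strip(".,!?:;\"'()").lower() for w in text.split()]

def score(query: str, post: str) -> int:
    """Crude relevance score: count of query words that appear in the post."""
    q, p = Counter(tokenize(query)), Counter(tokenize(post))
    return sum(min(q[w], p[w]) for w in q)

def retrieve(query: str, posts: list[str], k: int = 3) -> list[str]:
    """Return the k posts most relevant to the query; these would be pasted
    into the LLM's prompt as context before it answers."""
    return sorted(posts, key=lambda p: score(query, p), reverse=True)[:k]
```

Swap the overlap score for vector similarity over post embeddings and you have the standard architecture behind every "chat with our archive" bot.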
 

CAPSLOCK BANDIT

Master Don Juan
Joined
Jul 29, 2020
Messages
2,842
Reaction score
2,171
I interface with this technology on a daily basis. Because I do, I've taken the time to understand it, and because I have, my interactions with it are not "blind", so to speak.

Interacting with anything when the parameters are unknown is akin to blindness. A lot of people don't want to pay for editors (I can't blame them, they aren't cheap) and will run their script through it instead, rip. Usually there is a recommended editing program for whatever you're using, so you can kind of silo. Obviously nothing is perfect, but hey.

The other thing is that most people who are buying writing, whether ghost or script, are particular, and it's hard to tell somebody no; you can get led on wild goose chases very easily. Everything they know about you is speculative, and now your work is as well, in a way it never really has been before. Building a strong portfolio is a hard counter to speculation, which means the disparity between those with good and bad portfolios has never been bigger, and that's not surprising with contractors and the self-taught in the mix.
 

BaronOfHair

Master Don Juan
Joined
Feb 14, 2024
Messages
2,593
Reaction score
1,101
Age
35
I mean, there are only so many ways to string words together when you are talking about a subject.
A substantial portion of the human race (many of them very bright people with advanced degrees from prestigious universities) are unable to express themselves in clear, coherent sentences. When it's these same humans who are designing AI, well, inferring that the machines will be more adept communicators than their creators makes less sense than anticipating the Dalai Lama saying anything that isn't saccharine enough to end up on a greeting card anytime soon.
 