OpenAI Reports Harmful ChatGPT Conversations to the Police
“OpenAI is scanning users’ ChatGPT conversations and reporting content to the police.”
I came across this statement multiple times over the last couple of days. Even mainstream news outlets have covered it.
Is it true?
Well, according to the privacy policy, it is true.
OpenAI runs automated safety/moderation monitoring that flags potentially harmful content, which is then reviewed by a human.
If the human reviewer deems it necessary (for example, a threat of serious physical harm to others), the content is shared with law enforcement.
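The pipeline described above can be sketched as a toy example. Everything here is a hypothetical illustration of the general flow (automated flag → human review → possible escalation), not OpenAI's actual system; the function names and keyword rules are made up.

```python
# Toy sketch of a two-stage moderation pipeline: an automated scan flags
# content, a human reviewer decides what to do with it. The keywords and
# decision logic are hypothetical illustrations, not OpenAI's real rules.

FLAG_KEYWORDS = {"attack", "hurt someone"}  # hypothetical trigger phrases


def automated_flag(message: str) -> bool:
    """Stage 1: automated scan flags potentially harmful content."""
    text = message.lower()
    return any(kw in text for kw in FLAG_KEYWORDS)


def human_review(message: str) -> str:
    """Stage 2: a human reviewer classifies flagged content.
    Faked here with a string check; in reality this is a judgment call."""
    if "hurt someone" in message.lower():
        return "escalate"  # e.g. serious physical harm to others
    return "dismiss"


def moderate(message: str) -> str:
    """Run a message through both stages of the sketch pipeline."""
    if not automated_flag(message):
        return "pass"  # never reaches a human reviewer
    return human_review(message)  # "escalate" = referral to law enforcement


print(moderate("what's the weather like today?"))  # pass
print(moderate("I want to hurt someone"))          # escalate
```

The point of the two stages is the one the post makes: automation only filters, and a human makes the final call before anything is escalated.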
What’s my take on this?
To be honest, it is both terrifying and necessary in some cases.
If you’re interacting with it, assume that your data may be used against you.
What do we have control over?
As users of ChatGPT, we can limit whether our chat history is used for training, but moderation scanning still applies for safety.
Funny Thoughts
Even OpenAI, the leader in the AI space, still needs human reviewers; that is the current state of AI. 😅
Happy safe conversations!