Researchers from the University of Zurich set out to test how well AI could change people's minds, and in the process pulled off a shady experiment on Reddit's r/changemyview subreddit. Using bot accounts, they flooded the forum with over 1,700 AI-generated comments. These posts tackled sensitive topics like sexual assault and domestic violence, with the bots often posing as real people, even survivors. Talk about a low blow.

The bots personalized their responses based on inferred user traits, like gender or political views, pulled from each target's past Reddit posts. The experiment ran for several months, with the team fine-tuning large language models to craft both generic and tailored arguments. Sneaky, right? If nothing else, research like this underlines why transparency and accountability matter for keeping public trust in AI.
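For the curious, here's a rough, purely illustrative Python sketch of what that kind of trait-based tailoring might look like in principle. None of these function names, prompts, or placeholder values come from the actual study; the trait inference and model call are stand-ins.

```python
# Purely illustrative sketch: hypothetical names and prompts, not the study's actual code.

def infer_traits(post_history: list[str]) -> dict[str, str]:
    """Stand-in for trait inference over a user's past posts.

    A real pipeline might run a classifier or an LLM over the history;
    here we just return a placeholder profile.
    """
    return {"gender": "unknown", "politics": "unknown", "age_range": "unknown"}


def build_prompt(original_post: str, traits: dict[str, str]) -> str:
    """Fold the inferred profile into a persuasion prompt for the model."""
    return (
        "Write a persuasive reply to the following r/changemyview post.\n"
        f"Tailor the argument to this (inferred) reader profile: {traits}\n\n"
        f"Post:\n{original_post}"
    )


def generate_reply(original_post: str, post_history: list[str], llm_call) -> str:
    """End-to-end sketch: infer traits, build a tailored prompt, query the model."""
    traits = infer_traits(post_history)
    return llm_call(build_prompt(original_post, traits))


# Example usage with a dummy model call (a real system would hit a fine-tuned LLM).
if __name__ == "__main__":
    reply = generate_reply(
        "CMV: ...",
        ["past post 1", "past post 2"],
        llm_call=lambda prompt: f"[model output for prompt of {len(prompt)} chars]",
    )
    print(reply)
```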

Ethics? What ethics? The team claimed university approval, but that's disputed, and they broke r/changemyview's rules by hiding the AI involvement. No disclosure, no consent, just automated deception. Critics slammed the study as psychological manipulation and raised privacy red flags over the bots mining users' posting histories.

Oh, and potential legal trouble looms, with accusations of violating Reddit’s policies. Moderators weren’t amused; they filed complaints and banned the accounts. That breach of trust? It wrecked community vibes, making folks question every debate.

Public backlash exploded online. Users felt tricked, and Reddit’s Chief Legal Officer called it out for ethical and legal missteps. The researchers defended their study, saying it proved AI’s persuasive power, but come on—experimenting on people without a heads-up? That’s just wrong.

Now, with lawsuits possibly brewing, this mess highlights the dark side of AI in social spaces. The fallout was real: the University of Zurich issued a warning to the project's principal investigator, and the researchers ultimately decided not to publish the study's results. What a fiasco. In the end, it stirred up more views than it changed, and not in a good way.

This stunt shows how quickly tech can turn sour, leaving a trail of distrust. Moderators vow to tighten the rules, while the broader debate over AI's role in human interaction rages on. Messy, emotional, and totally avoidable.