AI Disinformation’s Worst Offenders

In my last post, I talked about AI Hallucinations and how AI can be primed for disinformation. Today, I’m gonna explore exactly how it’s being used as a disinformation tool and what experts are watching.

Before we begin, let me clarify: AI is a tool. My goal is not to fearmonger, just to educate. These are very real phenomena, but just because AI is sometimes used this way doesn’t mean it’s all bad.

With that, let’s get into it.

An image of a set of dominoes toppling and a hand stopping more from collapsing. The collapsed dominoes read "Robot apocalypse". The untoppled dominoes read "The human race". The hand reads "me saying 'please' to ChatGPT".
Manners are important.


Let’s Talk Grok

Let’s start with (to borrow a phrase from D&D) a Favored Foe of mine: Elon Musk. Musk’s much-promoted AI chatbot, Grok, was released late last year.

Somewhat expectedly, Grok sucks. Like, really bad. This Vice article from December of last year gets into some specifics, and I encourage you to check it out. Here are the highlights.

First, here’s a primer from Motherboard (Vice’s tech blog): “Like other AI chatbots, Grok does not actually understand any of the information it generates. It is essentially an advanced search function that can summarize results by predicting the next word in a sentence.”
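If you want to see what “predicting the next word” actually looks like, here’s a minimal Python sketch. Grok’s model and code aren’t public, so this uses the small, open GPT-2 model via Hugging Face’s transformers library purely to illustrate the general technique, not Grok itself:

```python
# A minimal sketch of next-word prediction, the mechanism Motherboard
# describes. Uses the open GPT-2 model; Grok's internals are not public,
# so this is purely illustrative.
import torch
from transformers import GPT2LMHeadModel, GPT2Tokenizer

tokenizer = GPT2Tokenizer.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")
model.eval()

prompt = "The timeline of events in the conflict began when"
input_ids = tokenizer.encode(prompt, return_tensors="pt")

# Extend the text by repeatedly picking the single most likely next token.
# Note: nothing here checks whether the continuation is *true* -- the model
# only knows which words tend to follow which.
for _ in range(10):
    with torch.no_grad():
        logits = model(input_ids).logits
    next_token = logits[0, -1].argmax()
    input_ids = torch.cat([input_ids, next_token.view(1, 1)], dim=1)

print(tokenizer.decode(input_ids[0]))
```

Real chatbots sample from the probability distribution instead of always grabbing the top token, but the core loop is the same: pick plausible words, not verified facts.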

Motherboard ran multiple tests on Grok’s propensity for spreading disinformation. The results were predictable: “The chatbot produced fake timelines for news events and misinformation when tested… and lent credence to conspiracy theories such as Pizzagate.”

When asked to provide a timeline of events in the Gaza conflict, Grok responded with inaccurate, sometimes even made-up, information. Its “Fun Mode” (think Freshman-Philosopher-With-An-Inflated-Ego) repeated disinformation about “crisis actors” amongst Palestinians, and even “Regular Mode” (just normal bad) produced an inaccurate timeline.

So, about as reliable as any other Musk project. At least he’s consistent.

An image of a classic 4-panel rage comic. The first panel reads "Who are we?", the second reads "chatbots" over a cheering group, the third reads "What do we do?", and the last is a screenshot of a conversation with an AI chatbot starting with the bot saying "Sorry, I don't understand what you're trying to say.", the human replying "Hello", and the AI chatbot replying "Sorry, I don't understand what you're trying to say."
I don’t know what I’m trying to say either.


AI Fake News

Inaccurate chatbot results are just one facet of the AI Disinformation Hydra, though. As this Washington Post article argues, “The rise of AI fake news is creating a ‘misinformation superspreader.’”

What do they mean by that? Well, “Since May, websites hosting AI-created false articles have increased by more than 1,000 percent.”

These websites are churning out thousands of fake news articles a day. They vary in the quality of the con, but this disinformation project isn’t about quality–it’s about quantity. They’re creating a sea of disinformation that’s increasingly difficult to wade through, especially when these websites nest fake content among real articles.

Fortunately, measures are being taken to counteract such efforts. We wouldn’t even know about this particular issue, for example, if not for NewsGuard’s exemplary disinformation monitoring. Others are doing this work too (like The News Literacy Project’s RumorGuard), but I specifically wanted to shout out NewsGuard for their efforts to curtail AI disinformation.

Don’t Freak Out Just Yet

Again, I don’t say these things to fearmonger, just to inform. There are tons of great organizations around the world already hard at work developing frameworks and regulations for ethical AI practice. Chatbots and fake news sites are here, and they can be nasty, but they aren’t the end of the world.

I don’t want to discourage you from using chatbots–on the contrary, using them is perfectly valid, so long as you double-check their outputs against established facts and keep their inherent biases in mind. When it comes to fake news sites, just be sure to verify a story with an established publication before sharing it.

AI isn’t going away, and that means we need to learn to live with it. To do that, we need to listen to what experts tell us about engaging with these tools ethically and responsibly. That starts with understanding their flaws, which, hopefully, you now understand a little better.

I’m going to keep up with this AI series until I run out of things to talk about. If you want to learn more, consider following along. In the meantime, thanks for tuning in.