What is AI?

Artificial ‘intelligence’? Not exactly.

AI, as we understand it today, is not really “intelligence” but sophisticated automation. Usually when we talk about AI, at least in the counter-disinformation field, we’re referring to either generative AI or Large Language Models (LLMs), though we tend to use the two pretty much interchangeably. I’ll get to that in a second.

Distinguishing Generative AI

According to Aporia, “Generative AI, in its simplest definition, refers to a subset of AI that focuses on producing new and diverse outputs rather than simply responding to inputs.” Its function is to “autonomously create new, original content such as text, code, and music.” 

There are three subsets of generative AI, and LLMs are one of them. Their function is to generate meaningful text outputs based on user queries, or inputs.
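To make “generate text outputs based on user queries” concrete, here is a deliberately oversimplified sketch. Real LLMs are neural networks trained on vast corpora, not lookup tables; this toy bigram model (the corpus and function names are invented for illustration) only shows the core idea of predicting each next word from patterns seen in training text.

```python
from collections import Counter, defaultdict

# Toy illustration only: predict the next word from counts of which
# word followed which in a (tiny) training text.
corpus = "the cat sat on the mat and the cat sat".split()

# Count which word follows each word in the training text.
next_words = defaultdict(Counter)
for current, following in zip(corpus, corpus[1:]):
    next_words[current][following] += 1

def generate(prompt_word, length=4):
    """Extend a one-word 'prompt' by repeatedly picking the word
    most often seen following the previous one in training."""
    words = [prompt_word]
    for _ in range(length):
        options = next_words[words[-1]]
        if not options:  # nothing ever followed this word
            break
        words.append(options.most_common(1)[0][0])
    return " ".join(words)

print(generate("the"))  # continues the prompt word by word
```

Notice that nothing happens until a human supplies a prompt, and the output is entirely a product of the training data: both points the rest of this post leans on.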

A picture of Robert Downey Jr. holding a hand to his chest in relief with the text "when it's been three months and ChatGPT hasn't replaced your job."
Is it good or bad that ChatGPT is the face of AI?

The most fervent champions of AI point to generative AI to show how close AI has gotten to intelligence, but that’s putting the cart before the horse. These models still rely upon an enormous database of information (fed to them by humans, which introduces its own set of inherent biases) and need at least one human input (no matter how simple) in order to produce an output. They can’t, say, predict your needs and produce an output specifically tailored to them until you expressly tell them what those needs are. And even then, they can get things pretty wrong, but I’ll touch on that in another piece on what are called AI “hallucinations”.

I bring this up not to deride generative AI or its champions, but because it communicates a fundamental truth: “AI” is guided by humans. It is simply a tool, wielded by humans to achieve an output more quickly than if they did it all manually.

A meme of three panels from the movie I, Robot. The first panel is Will Smith saying "why are you biased and wrong so often?" The second is a robot saying "because you built and trained me." The last panel is Will Smith holding his mouth.
What is AI if not what we make it?


What is Intent in AI?

What matters is the intent.

AI is a tool. It is not fundamentally good or bad. It can be used for either end, but how it is wielded depends on the intent of the user (and on the robustness of the model, which is its own problem that we’ll get to in a later post). The user’s input in part defines the output, at least in the sense that it guides the model toward what they want out of it.

That might seem obvious, but I think it’s important to acknowledge because it helps us understand both how AI is being used and what it is capable of. We need that baseline if we wish to identify how to counteract nefarious actors, which is exactly what needs doing if we are to prevent AI from being used to spread disinformation.

I don’t want this blog to get away from me, so for now I’ll leave it at that and elaborate on how AI is used in future posts. I’m going to talk about its role in spreading disinformation, intentionally or otherwise, and examine what can be done to use it ethically and regulate it effectively. Be sure to follow along for more.