Welcome(?) Sora AI
Sora is Here
There’s another new player in the AI landscape: Sora.
OpenAI’s new generative AI project (technically a “diffusion transformer”) allows for video generation from user prompts. Think ChatGPT, but for videos.

Sora and Bias
With it come familiar pitfalls: chiefly, the bias inherent in how these models are built, and the lack of transparency about how they are trained. As Vox puts it, “Since we don’t know what data Sora was trained on, and the tool is not available for the public to test, it’s hard to speak in much detail about how biases might be reflected in the videos it creates.” Sam Altman is optimistic, but he’s the freakin’ CEO of the company, so he’s biased (pun intended).
Sora’s Potential Disinformation
Sora’s videos are, admittedly, impressive. That’s not a good thing, at least not while there’s no regulation in place.
Consider the implications for the proliferation of disinformation. The internet is already rife with deepfakes, altered images, fake articles, and even fake publications, and much of the general public has proven susceptible to these lies. ABC’s article, “OpenAI video-generator Sora risks fueling propaganda and bias, experts say,” is just one example of the arguments experts have made and the alarms that have been sounded.
Still Being Developed
Now, Sora is still in the testing phase, so it’s not yet in the hands of those who would seek to do harm with it. Its creators also insist that they’ve set up safeguards to prevent the generation of, say, sexually explicit content.
I’d like to believe them, except that I’ve already seen a screenshot of an ad for Sora advertising “AI generated XXXX content” alongside a generated image of a child. I will not be sharing that, for obvious reasons.
The State of Play
Some regulations have already been put in place, and plenty of legislation is up for discussion around the world. The problem, though, is that technology will always outpace such legislation.
The bottom line here is simply that we must remain educated. It’s important that we know what tools are out there, what they’re capable of, and what their output looks like.

Not sure where to start? Check out my AI blog series. My most recent post covered ethical AI usage and went into more detail about what the average person can do, as well as what regulatory bodies are capable of. I wrote another on the most problematic actors in AI and yet another on AI hallucinations, both of which should help you build the tools to recognize AI disinformation.
Facing the AI horizon may be daunting, but the truth is that we’re not alone. There are myriad organizations that are doing wonderful work in this space. We need to support them by educating ourselves.