What do you think of when you hear the term deepfake? Maybe it’s a comically edited video of someone’s head on a rooster’s body? Maybe it’s a poorly generated video of a jaguar eating cake? Or maybe it’s a video digitally altered to display explicit sexual content.
Deepfakes, in layman’s terms, are images and videos that have either been edited or generated using artificial intelligence. From humorous to salacious, the content produced varies widely depending on the prompt the user inputs.
Over time, the quality of deepfakes has improved. In 2023, prompting Will Smith to eat a meatball resulted in a horror show of misplaced eyes and clumps of yellow goo. Now, Will Smith can eat a meatball with ease and sip orange juice afterward.
These videos are getting freakishly accurate, making it harder for audiences to tell what’s real and what’s not. This ultimately leads to the misrepresentation of individuals and the spread of false information. This needs to stop.
It’s not all fun and games anymore.
Deepfakes have been around for a while. One of the first examples can be traced back to late 2017, when celebrities’ faces were manipulated into pornographic videos. This grossly misrepresents people, depicting them in acts they never consented to.
Moreover, AI-generated videos have not merely been used in attempts to disseminate false information; they have already succeeded in doing so.
In 2020, a short film, “In Event of Moon Disaster,” went viral, depicting President Richard Nixon delivering the contingency speech prepared in case the Apollo 11 moon landing failed. Even though the video clearly stated it was a deepfake, it still inflated some viewers’ belief that the moon landing was faked.
More recently, in January 2024, an AI-generated robocall mimicking President Joe Biden’s voice urged New Hampshire voters not to vote in the state’s primary election. This could have resulted in lower voter turnout.
While not everybody abhors deepfakes, as seen when President Donald Trump reposted an AI-generated video depicting former President Barack Obama being arrested, the consequences far outweigh anyone’s enjoyment.
Furthermore, deepfake replication is no longer limited to celebrities and politicians.
AI models typically need large amounts of data to train on, so celebrities and politicians have been the main victims of deepfakes; their frequent appearances in the media provide an abundance of public footage.
Now, however, AI has advanced to the point where only a handful of data points are needed to generate believable content. Sora, OpenAI’s video generation platform with a social media twist, requires only that users say three numbers and rotate their head in order to insert them into any video imaginable.
This means that now, the average Joe can be affected too.
While some of these platforms have guardrails to prevent misrepresentation and misinformation, they are not foolproof. Moreover, companies seem to be embracing AI more than ever. Social media apps like TikTok and Instagram even encourage using AI for content creation.
As videos like these circulate on the web, viewers increasingly ignore the AI-generated label TikTok implemented, never checking whether the content they are consuming is real or accurate. Rather, they are looking for a laugh as they doomscroll.
What seems harmless now can quickly spiral, as the ability to make deepfakes of anybody is now at everyone’s fingertips.
Now people are left with the question: What safeguards are there to protect us?
While social media companies have community guidelines and reporting methods, they often fall short: removals take time, and only content that explicitly violates the rules ever comes down.
In addition, the U.S. government cannot be relied upon to create any long-lasting change that wraps the challenges surrounding this complicated technology into a neat little bow. The fight over TikTok and the ignorant pestering on display during the TikTok CEO’s congressional testimony proved as much. The whole fiasco exposed lawmakers’ gross misunderstanding of technology and their eagerness to push their own political agendas.
Instead, people need to stop creating and watching these AI-generated videos. As long as these videos receive attention, they will continue to circulate, damaging people’s reputations and adding to the flood of misinformation already in existence.
Ultimately, a full boycott of deepfakes needs to happen, or else nothing will change.
Gabriela Gomez is a biomedical sciences senior and opinion writer for The Battalion.
