
Deepfakes Are Becoming a Menace. Here’s Why It Concerns You

As we often find ourselves playing catch-up to address the consequences of technologies that have already been unleashed, we must confront the damaging impact of deepfakes.

  • Johnson Opeisa
  • 14th August 2024

Earlier in May, stunning pictures of American celebrities Rihanna and Katy Perry at the 2024 Met Gala were making the rounds on social media, particularly on X (formerly Twitter). The pictures went viral, as expected, since they appeared to show contenders for the event's best-dressed stars. But no such contest was ever in play because, in reality, Rihanna and Perry weren't dressed for the occasion, let alone present at the Metropolitan Museum of Art in New York, where the event took place.


The media professionals scrambling for the best shots on mounted cameras and mobiles, the flower-carpeted staircase, Perry's three-dimensional floral appliqué dress, the well-dressed gents and gorgeous ladies in the background: every detail was a convincing creation of artificial intelligence, one that could have completely fooled everyone (barring prior knowledge of the stars' absence) had it not been for Community Notes.


In case you’re unfamiliar with the term, Community Notes is a feature on X that allows users (read: vetted contributors) to collaboratively add context or corrections to posts, ensuring that misleading or false information is flagged. Thanks to this, the truth behind the trendsetting post was exposed, even though neither the author nor X has deleted the tweet.


This incident was neither the first nor the last of its kind, and it was relatively harmless compared to malignant deepfakes like those involving South Africa's Leanne Manas, whose face was used to promote products she had no affiliation with, or the recent pornographic deepfakes of Taylor Swift and Megan Thee Stallion.


As we often find ourselves playing catch-up to address the consequences of technologies that have already been unleashed, we must confront the damaging impact of deepfakes before it grows any worse.


The Magic and Mystery of AI-Driven Deepfakes


AI-generated images, as you might already know, are hyper-realistic products of tools like Runway ML and Artbreeder, which were originally meant to aid in creative projects and artistic exploration. 


These images or visuals, as the case may be, are often referred to as "deepfakes" when they involve the realistic alteration or creation of images or videos depicting people doing or saying things they never did.


At the heart of this digital Frankenstein's monster is a technology known as Generative Adversarial Networks (GANs). In simple terms, a GAN consists of two neural networks: one that creates images and another that evaluates them. The first network tries to generate an image that looks real, while the second assesses whether the image is genuine or fake. Through this adversarial back-and-forth, GANs become incredibly good at producing images that are nearly indistinguishable from actual photos.
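The adversarial loop described above can be sketched in miniature. The toy example below is purely illustrative, not a real deepfake pipeline: instead of images, the "generator" is a two-parameter affine map over one-dimensional numbers, and the "discriminator" is a simple logistic classifier. All names and numbers are invented for the sketch, but the principle is the same one GANs use: the generator learns to mimic the real data distribution solely by trying to fool its adversary.

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

# "Real" data: samples from N(4, 1.25) -- the distribution to imitate.
def sample_real(n):
    return rng.normal(4.0, 1.25, n)

# Generator: g(z) = a*z + b applied to noise z ~ N(0, 1).
# Discriminator: D(x) = sigmoid(w*x + c), scoring how "real" x looks.
a, b = 1.0, 0.0   # generator parameters
w, c = 0.1, 0.0   # discriminator parameters
lr = 0.01

for step in range(5000):
    # --- discriminator update: push D(real) toward 1, D(fake) toward 0 ---
    x_real = sample_real(32)
    z = rng.normal(0.0, 1.0, 32)
    x_fake = a * z + b
    d_real = sigmoid(w * x_real + c)
    d_fake = sigmoid(w * x_fake + c)
    grad_w = np.mean(-(1 - d_real) * x_real + d_fake * x_fake)
    grad_c = np.mean(-(1 - d_real) + d_fake)
    w -= lr * grad_w
    c -= lr * grad_c

    # --- generator update: push D(fake) toward 1, i.e. fool the critic ---
    z = rng.normal(0.0, 1.0, 32)
    x_fake = a * z + b
    d_fake = sigmoid(w * x_fake + c)
    upstream = -(1 - d_fake) * w   # gradient of -log D(fake) w.r.t. x_fake
    a -= lr * np.mean(upstream * z)
    b -= lr * np.mean(upstream)

# After training, generated samples cluster around the real mean.
samples = a * rng.normal(0.0, 1.0, 1000) + b
print(float(np.mean(samples)))  # drifts toward the real mean of 4
```

Real GANs replace these one-parameter toys with deep convolutional networks operating on millions of pixels, but the tug-of-war is identical: each side's improvement forces the other to improve, until the forgeries are hard to tell apart from the genuine article.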


While this technology has tremendous potential for creative and commercial uses—such as generating lifelike characters for video games, creating stunning visuals for marketing, or even assisting in medical imaging—its dark side is becoming increasingly insidious.


Though the practical examples shared above were celebrity-centred, the potential harm of deepfakes cuts much wider. They can be used to manipulate public opinion, create misleading news, or damage reputations, with implications reaching beyond the entertainment industry. The misuse of such technology poses serious risks to individuals and societies at large. 


It's a well-established fact that seeing is no longer believing. And it doesn't matter whether deepfakes can be easily recognised; the crux is that things shouldn't have gotten to this point. The US Congress has already called for new laws to criminalise the creation of deepfake images, a move that African countries should be eyeing.


Countries like South Africa have little to no legislative stance to combat deepfakes, as highlighted by South African legal consultant Layckan Van Gensen in her LLD thesis. Specific legislation directly addressing the protection of image rights is similarly absent in Nigeria.


Damage-control measures like temporarily blocking a victim's name in search engines or on social media and scrubbing the visuals off the internet can only do so much. The legal terrain has few clear landmarks: it neither comprehensively addresses the issue nor fully protects victims. Image legislation should clearly define an individual's image, specify when an infringement has occurred, and provide the image-rights holder with legal remedies for unauthorised use.
