

What Data Protection Day Reminds Us About Our Digital Vulnerabilities

Data Protection Day 2025 reminds us of data breaches, AI exploitation, global privacy gaps, and more.

  • Johnson Opeisa
  • 28th January 2025

Today, January 28, is Data Protection Day globally. It’s a day dedicated to raising awareness and fostering much-needed discussions on promoting privacy best practices and safeguarding personal information in this digital age. This year, the theme revolves around “taking control of your data” in an era where data breaches, theft, and misuse are increasingly frequent, sometimes even orchestrated by corporate organisations.


The imperative for this cannot be overstated, particularly in regions with largely ineffective policies like Nigeria. Though the President Bola Tinubu-led administration established an improved framework for data protection, the Nigeria Data Protection Act (NDPA), in 2023, data privacy in the country remains an issue. For context, 2024 witnessed a series of high-profile data breaches, including the compromise of the National Identity Database. Reports indicated that unlicensed bodies, XpressVerify.com and AnyVerify, were selling citizens’ data for as little as ₦200. Equally alarming, Paradigm Initiative’s Gbenga Sesan revealed on News Central in June 2024 that he had procured Minister Bosun Tijani’s National Identification Number (NIN) slip for just ₦100 during his analysis of the database breach.


Stricter enforcement of the NDPA could significantly reduce, if not totally eliminate, such incidents. However, an increasingly pressing concern as Nigeria leans further into digital transformation is its vulnerability to foreign data processors and controllers, particularly social media platforms and generative AI. Late on Monday, Meta announced an update that would allow its AI chatbot to leverage users’ account data across its social media platforms to provide personalised recommendations.


This is nothing new. The resourcefulness of generative AI comes with a troubling trade-off — users’ data is often exploited to train models, ostensibly to make them sound, look, and function more like humans. This goes beyond personal information; the optimisation of most generative AI relies on practically everything you’ve authored online, whether with your consent or through obscure and convoluted permissions.


Meta has long used public posts on Facebook and Instagram to train its AI systems, as confirmed by its director of product management, Mike Clark, in September 2023. Similarly, X’s relatively new chatbot, Grok, was trained by default using users’ tweets. The same applies to ChatGPT models, which utilise data “from various sources, including publicly available data from social media,” as revealed in a May 2024 blog post.


While this might seem harmless on the surface, instances of this data being sold for undisclosed purposes, or of copyright violations against writers, artists, musicians, and other creatives, represent direct threats to livelihoods. Thankfully, increasing investigations and lawsuits, particularly from Europe and North America, have forced these companies to allow users to opt out of having their data or content used. Although little can be done about already scraped data, you now have the option to safeguard your digital footprint by modifying your settings on most of these platforms.

