20 September 2023
A new watchdog isn’t the secret to regulating AI – education is

In May, a letter signed by global tech leaders called for a halt to the development of advanced AI models, highlighting the “profound risk to society and humanity” AI could pose. In the wake of this news, British Prime Minister Rishi Sunak announced hopes of launching a world-leading, dedicated AI watchdog to regulate the sector.

While many have voiced their support for such a body, there is a simple truth that too many are missing: a government-led watchdog is not the only path forward – nor might it be the most effective.

Instead, an education-led approach should pave the way to a secure, harmonious AI ecosystem, where consumers are safeguarded and businesses have room to develop and innovate with trailblazing AI tools.

Data in AI – understanding its privacy and value

At its core, AI is all about data. Vast data sets are its lifeblood, and the creators of such information? All of us. With every click, we leave behind a trail of data capable of revealing who we are and what we look at online. Naturally, if used irresponsibly or nefariously, this can have damaging consequences, as seen in previous incidents of gross data privacy infringement and abuse such as the Cambridge Analytica scandal.

But without data, AI is nothing – so that is where regulatory focus must be placed, and indeed already has been. In recent years, the government has recognised the need for enhanced data protection, with existing policies like the UK GDPR and the Data Protection Act 2018 now firmly in place to safeguard consumers.

However, what is often problematically overlooked is not the protection of this data, but its value. All the protection in the world means nothing if data is not viewed as a material asset, and its exploitation without consent as theft.

While businesses have been profiteering from consumer data for decades, its creators have been left out in the cold. Data is no different from physical possessions, so why should companies be allowed to exploit it free of charge? Naturally, there is a risk that widespread AI usage would exacerbate this issue, given its reliance on harvesting large data sets.

This has been allowed to go on for two reasons. Firstly, legislation protecting consumers’ monetary rights to their data has not been introduced to prevent it – and must be, as a logical priority. After all, would the government allow businesses to take from our wallets, or steal from our homes?

But secondly, and most importantly, people have not been taught that their data is an asset, and to treat it as such. Worryingly, ZIPZERO’s latest research revealed that less than a third (32%) of people have a good understanding of how much money companies make from their personal data, while just 44% understand how businesses use it.

As we enter a new data-driven age of AI-powered living, keeping the technology under control means addressing this issue – namely, by educating people and ensuring transparency, thereby enabling them to exercise a degree of control over their ‘property’.

Embracing a transparent, education-led approach

Primarily, the issue with placing total control of AI regulation in the hands of a watchdog is that it disempowers businesses and people alike. Turning the UK into a pro-business, pro-innovation AI zone, as the government wishes, cannot be achieved with sector experts pressed under the thumb of a regulatory body that does not yet fully understand AI’s implications.

Indeed, AI is progressing at lightning speed – far faster than government policy can keep pace. Developing a regulatory body with the necessary expertise and dynamism to keep up with ever-evolving AI technologies could be near impossible. By the time such a watchdog is established, its regulations might already be outdated, rendering it ineffective against emerging threats. For that reason, it is not the solution we currently need.

Such progressive technology necessitates a more progressive approach from the government, one standing on the cutting edge of research and innovation: education.

Providing individuals with up-to-date, comprehensive education on AI, and being fully transparent about how their data is used, keeps them vigilant and informed. The burden of data protection is thereby shared between users, businesses and AI providers. This collaborative approach encourages a sense of collective responsibility for keeping AI secure and safeguarding data, creating a self-balancing ecosystem.

What’s more, education is future-proof, fostering a culture of responsible data usage in AI that extends beyond immediate regulations. As individuals become more conscious of their data’s value, they are likely to hold businesses accountable by demanding higher privacy standards, pushing organisations to adopt ethical AI practices – just as increased environmental awareness has done, for instance.

Finally, education empowers people to take control of how their data is monetised. By understanding the value of their personal information and the potential risks associated with its misuse, people can make informed decisions about sharing data and engaging with AI technologies.

Looking forward

The age of AI is still in its infancy, and there is much we have yet to learn. A watchdog won’t give Britain what it needs to become a flourishing AI ecosystem in which businesses and people alike are primed for innovation. All parties must be empowered and encouraged to make informed, transparent decisions about AI and data – and that can only come through a commitment to education.

Marcin Walaszczyk is the founder, COO and CTO of ZIPZERO.