Decoding #deeptech
Read time: 02'23''
25 April 2024
Meet OmniIndex, a Web3 Data Platform committed to protecting your data

AI doesn’t want your job; it wants something far more valuable: your digital soul

Big tech is selling your private information to finance its extraordinarily expensive servers and train its AI. Simon Bain, AI expert and CEO at OmniIndex, believes that technology companies aren’t in fact looking to replace people, as is often reported, but are instead exploiting the AI hype to harvest more private data on us than ever and sell it to the highest bidder.

“AI companies are currently accessing and collecting more of our sensitive data than ever before. Whether sanctioned or not, AI tools are creeping into our lives and being used more each day, with more of our data being both used and collected.”

OpenAI’s CTO Mira Murati recently revealed in an interview with the Wall Street Journal that the company is unsure where the data used to train its AI tools like ChatGPT and Sora comes from. Murati was unable to answer when pressed to reveal whether private user data from social media platforms like YouTube, Instagram or Facebook had been used to power the responses given by its AI models.

“If you are unable to tell people what information is being used to train your AI, then you should not be allowed to work in AI. It’s that simple.”

“It is clear that we cannot rely on governments and regulatory bodies to protect us, as they are either too slow, too lazy, too incompetent, or all three. AI developers have already admitted to using copyrighted material, have stated they won’t stop doing so, and have gotten away with profiting from it by claiming ‘fair use’. Now they’re after our private information, thoughts and preferences, which they can both use to train their models and sell to others.

“Why? Because Large Language Model AIs are expensive! OpenAI’s ChatGPT reportedly costs over $700,000 a day to run, with massive amounts of computing power required to keep its servers online and even more money needed to train each new model.”

According to Bain, however, the risk to our private information is not limited to this deliberate use.

“The worst part of it all is that the AI companies accessing and keeping our data are built on technologies that aren’t able to keep it safe from outsider threats and attacks. As such, it’s more important than ever for us to protect our data ourselves and to demand that it is adequately protected by the tools and services we use every day. Because if they don’t sell it, they lose it.

“Fortunately, there is some hope for the future because of growing consumer demand for data transparency and the accessibility of new technologies that can protect us. We’re all becoming far more informed about where our sensitive data is shared, and so AI companies will no doubt need to reconsider what technology they are using to keep that data secure.

“For instance, AI models that incorporate technologies such as fully homomorphic encryption, which allows computation on data while it remains encrypted, and web3 can ensure that a user’s data is never decrypted and therefore never becomes available for sale. This means that even if AI companies do manage to access your data, they’ll be unable to read it or use it to their advantage.

“If these organisations are going to be playing fast and loose with every last detail on us, we must reconsider who we share our data with and keep it out of the hands of those who are ‘unsure’ if they are profiting from it or not.”
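
Bain’s mention of fully homomorphic encryption points at a concrete mechanism: a server can compute on ciphertexts without ever seeing the plaintext. As a rough illustration, here is a minimal Python sketch of the simpler Paillier scheme, which is additively homomorphic only; fully homomorphic schemes such as BFV or CKKS extend the same idea to arbitrary computation. The tiny key sizes and values below are our own, chosen for readability, and do not reflect OmniIndex’s actual implementation.

```python
# Toy Paillier cryptosystem: additively homomorphic encryption.
# Demo only: real deployments use 2048-bit+ primes, not these.
import random
from math import gcd

p, q = 293, 433
n = p * q
n2 = n * n
g = n + 1                                      # standard generator choice
lam = (p - 1) * (q - 1) // gcd(p - 1, q - 1)   # lcm(p-1, q-1), private key
mu = pow(lam, -1, n)                           # valid because g = n + 1

def encrypt(m: int) -> int:
    """Encrypt m under the public key (n, g)."""
    r = random.randrange(1, n)
    while gcd(r, n) != 1:
        r = random.randrange(1, n)
    return (pow(g, m, n2) * pow(r, n, n2)) % n2

def decrypt(c: int) -> int:
    """Decrypt c with the private key (lam, mu)."""
    return (pow(c, lam, n2) - 1) // n * mu % n

# The "server" adds two values it can never read:
c1, c2 = encrypt(17), encrypt(25)
c_sum = (c1 * c2) % n2   # multiplying ciphertexts adds the plaintexts
assert decrypt(c_sum) == 17 + 25
print("decrypted sum:", decrypt(c_sum))
```

In a fully homomorphic setting, the same property extends to multiplication and hence to running model inference over encrypted inputs, which is the guarantee Bain describes: the data is processed but never exposed.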