Unsupervised learning frameworks are already in real-world use, at least in a limited sense.
Imagine a world in which the news started showing video evidence of events that never actually happened. What if I told you this is already possible? To understand why, it’s enough to get familiar with a technology called “deepfakes.”
First, it’s important to review what Generative Adversarial Networks are. A Generative Adversarial Network (GAN) is an AI system composed of two neural networks that repeatedly face off to improve each other. More specifically, picture a system in which one network generates an output and the other repeatedly challenges it until the first network improves its initial output.
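To make that back-and-forth concrete, here is a minimal sketch of the adversarial loop in plain Python. It is a toy, not an image GAN: the “real” data is just numbers near 5.0, the discriminator is a one-variable logistic classifier, and the generator is a single parameter that training pulls toward the real distribution. All of the names and values below are invented for illustration.

```python
import math
import random

random.seed(0)

REAL_MEAN = 5.0  # the "real" data distribution: values near 5


def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))


# Discriminator: D(x) = sigmoid(w*x + b), its output is P(x is real)
w, b = 0.1, 0.0
# Generator: produces g + noise; adversarial training should pull g toward REAL_MEAN
g = 0.0
lr_d, lr_g = 0.05, 0.05

for step in range(3000):
    real = random.gauss(REAL_MEAN, 0.5)
    fake = g + random.gauss(0.0, 0.5)

    # Discriminator step: gradient ascent on log D(real) + log(1 - D(fake))
    d_real = sigmoid(w * real + b)
    d_fake = sigmoid(w * fake + b)
    w += lr_d * ((1 - d_real) * real - d_fake * fake)
    b += lr_d * ((1 - d_real) - d_fake)

    # Generator step: gradient descent on -log D(fake),
    # i.e. move g so the discriminator scores fakes as real
    d_fake = sigmoid(w * fake + b)
    g += lr_g * (1 - d_fake) * w

print(f"generator parameter after training: {g:.2f} (real mean is {REAL_MEAN})")
```

Each side only improves because the other keeps pushing back: the discriminator sharpens its boundary, and the generator drifts toward whatever the discriminator can no longer reject.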
With this structure, GANs have already created impressive works of art, and even musical compositions, as evidenced by efforts like Google’s Magenta project. Their output has not been all positive, however. They’ve also developed the ability to create progressively more believable fake content, to the point that the mainstream media has begun to question how we will tell fact from fiction in the future.
Is such a suggestion, however, rational? Is there a way that already exists that can easily combat this trend?
Before we answer this, it’s essential to define “deepfakes.”
Deepfakes are created by GANs, which function as detailed above, with one key difference. With deepfakes, one deep neural network makes fake images or videos over and over until the other network can no longer reliably distinguish them from real ones. As with any GAN, the more data each side takes in, the more believable the resulting content becomes, even though it is not real.
If you know anything about institutional involvement in cryptocurrencies, then you likely know of Anthony Pompliano and his “Off the Chain” podcast. On a recent episode, Max Mersch of Fabric Ventures mentioned that the blockchain can halt deepfakes in their tracks.
To understand how this is possible, it is important to understand how public and private keys work. In a nutshell, a public key is a unique identifier that anyone can use to encrypt messages to you or to verify your digital signatures, while the matching private key is what decrypts those messages or creates those signatures.
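As a toy illustration of that key relationship, here is textbook RSA signing in Python with deliberately tiny primes — never use key sizes like this in practice. The messages and key values are made up for the example; real systems use vetted cryptography libraries and keys thousands of bits long.

```python
import hashlib

# Toy textbook-RSA key pair (tiny primes, insecure, for illustration only)
p, q = 61, 53
n = p * q   # 3233, the public modulus
e = 17      # public exponent  -> public key is (n, e)
d = 2753    # private exponent -> private key is (n, d); d*e ≡ 1 mod (p-1)(q-1)


def digest(message: str) -> int:
    # Hash the message, reduced mod n so it fits in our tiny key space
    return int.from_bytes(hashlib.sha256(message.encode()).digest(), "big") % n


def sign(message: str) -> int:
    # Only the holder of the private exponent d can compute this
    return pow(digest(message), d, n)


def verify(message: str, signature: int) -> bool:
    # Anyone holding the public key (n, e) can check the signature
    return pow(signature, e, n) == digest(message)


sig = sign("original video file bytes")
print(verify("original video file bytes", sig))   # True: signature matches
print(verify("tampered video file bytes", sig))   # False: content was altered
```

The asymmetry is the whole point: verification needs only the public key, so anyone can confirm who signed a piece of content without ever seeing the private key.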
As any crypto enthusiast knows by now, blockchain is immutable in how it stores data. People tend to trust a blockchain’s records because they are generated by community consensus, rather than by a third party that does not directly represent the parties involved. Considering this, now imagine if every piece of content were signed with its creator’s private key. If this were done to every piece of content on the Internet, then the only people who could prove ownership of it would be its true owners.
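Here is a hedged sketch of what such a registry might look like: a tiny append-only, hash-chained ledger in Python that stores a hash of each piece of content plus a placeholder creator signature. Everything below — the class name, fields, and sample content — is invented for illustration; a real blockchain adds consensus, genuine signature verification, and replication across many nodes.

```python
import hashlib
import json


def sha256_hex(data: str) -> str:
    return hashlib.sha256(data.encode()).hexdigest()


class Ledger:
    """Append-only chain: each record stores the hash of the previous record."""

    def __init__(self):
        self.records = []

    def register(self, content: bytes, creator_signature: str):
        prev_hash = self.records[-1]["record_hash"] if self.records else "0" * 64
        record = {
            "content_hash": hashlib.sha256(content).hexdigest(),
            "signature": creator_signature,  # placeholder; a real system verifies this
            "prev_hash": prev_hash,
        }
        # The record's own hash covers its body, chaining it to everything before it
        record["record_hash"] = sha256_hex(json.dumps(record, sort_keys=True))
        self.records.append(record)

    def is_intact(self) -> bool:
        prev = "0" * 64
        for r in self.records:
            body = {k: r[k] for k in ("content_hash", "signature", "prev_hash")}
            if r["prev_hash"] != prev:
                return False
            if r["record_hash"] != sha256_hex(json.dumps(body, sort_keys=True)):
                return False
            prev = r["record_hash"]
        return True


ledger = Ledger()
ledger.register(b"frame data of an original video", "sig-by-creator-key")
ledger.register(b"another piece of content", "sig-by-other-key")
print(ledger.is_intact())                        # True: chain is consistent
ledger.records[0]["content_hash"] = "0" * 64     # rewrite history...
print(ledger.is_intact())                        # False: tampering is detected
```

Because every record commits to the hash of the one before it, quietly editing old history breaks every hash that follows — which is exactly the immutability property the proposal leans on.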
The key problem with such a proposal is simple. It would likely be too difficult to tag every piece of content on the Internet in this fashion. Perhaps, therefore, we should start anew and leave out the pieces, pictures, and videos that have already been posted? In a basic sense, even this would require connecting the blockchain to just about every aspect of the Internet’s infrastructure.
So, it would seem that we need to go back to the drawing board. While Mersch’s idea works with blockchain platforms, it would likely not work with most of the legacy internet. What if, however, AI and ML were thrown into the mix alongside, or instead of, his idea?
No, I’m not talking about GANs versus GANs.
According to one particular blog, the likely answer to combating the rise of deepfakes actually begins with understanding their principal weakness.
As a technology, deepfakes are like any other deep learning network in that they rely on large sets of data to function well. Because of this, average people appear safe from deepfake attacks for now, simply because there aren’t enough images or videos of them online for the networks to train on. Famous people, on the other hand, do not have the same luxury.
Still, this barrier will probably not exist forever. Deepfakes are already affecting how we view the news, for example. Run a quick Google search on the subject and you’ll see just how widespread the fear around this technology already is.
To truly stop them, we will need to develop new laws, together with educational programs that evolve as the technology evolves. Following this, it will also be important to encourage the founding of companies like Chainalysis, but for the AI industry. In other words, we will need tools that can monitor online content at scale and learn to detect what is real and what is not.
It is at this point, therefore, that we return to how AI could be used to stop itself, in a sense. Perhaps, protecting ourselves against deepfakes will be as simple as creating deep neural nets that have the specific purpose of detecting them and monitoring the sources that they come from. In the end, it’s reasonable to expect that all of this will take a considerable amount of time.
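As a rough sketch of that idea, here is a toy “detector” in plain Python: a logistic-regression classifier trained on synthetic, made-up feature scores standing in for real and fake clips. A real deepfake detector would be a deep network extracting features from actual video frames; everything below is an assumption-laden illustration of the supervised-detection principle only.

```python
import math
import random

random.seed(42)


def sigmoid(z):
    z = max(-60.0, min(60.0, z))  # clamp to avoid math.exp overflow
    return 1.0 / (1.0 + math.exp(-z))


# Invented feature scores (imagine e.g. a "blending artifact" score and a
# "lighting inconsistency" score). Real clips cluster low, fakes cluster high.
data = []
for _ in range(100):
    data.append(([random.gauss(-2, 1), random.gauss(-2, 1)], 0))  # label 0: real
    data.append(([random.gauss(2, 1), random.gauss(2, 1)], 1))    # label 1: fake

# Train a logistic-regression "detector" with plain stochastic gradient descent
w, b, lr = [0.0, 0.0], 0.0, 0.1
for epoch in range(300):
    for x, y in data:
        p = sigmoid(w[0] * x[0] + w[1] * x[1] + b)
        err = p - y
        w[0] -= lr * err * x[0]
        w[1] -= lr * err * x[1]
        b -= lr * err

correct = sum(
    (sigmoid(w[0] * x[0] + w[1] * x[1] + b) > 0.5) == (y == 1) for x, y in data
)
accuracy = correct / len(data)
print(f"training accuracy: {accuracy:.2f}")
```

The same arms-race logic from the GAN section applies here, just inverted: the detector only stays useful as long as it keeps retraining on the newest generation of fakes.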
Perhaps most importantly, fake news will never be non-existent. It can, however, be brought into the light. If you’re interested in digging deeper into any of these topics, check out our list of suggested resources below.
Resources:
https://bdtechtalks.com/2018/04/16/artificial-intelligence-deepfakes-blockchain/
https://futurism.com/the-byte/deepfakes-illegal-china
https://www.csoonline.com/article/3293002/deepfake-videos-how-and-why-they-work.html
https://blog.wetrust.io/why-do-i-need-a-public-and-private-key-on-the-blockchain-c2ea74a69e76
https://medium.com/coinmonks/blockchain-public-private-key-cryptography-in-a-nutshell-b7776e475e7c
https://www.linkedin.com/in/merschmax/?originalSubdomain=uk
https://magenta.tensorflow.org
https://erlc.com/resource-library/articles/what-is-a-deepfake