By Robert Enderle
We are surrounded by fake news, whether it comes from someone trying to discredit someone else, trick us into doing something not in our best interests, or convince us that a viable medical treatment is bad or a non-viable one is good. Particularly with video and the emergence of deepfakes, we may increasingly see people we trust misrepresented, apparently giving us bad advice or selling us a false narrative. Worse, even if we later learn that a video is false, the impression it leaves may cause us to wrongly place trust in, or withdraw it from, a person or institution.
Intel has created an AI technology named FakeCatcher that, with a claimed 96% accuracy rate, can identify these fake videos and give us critical context about the content we are consuming so we are not as easily tricked. However, technology in this class has one big problem, and addressing that problem won't be easy.
This week, let's talk about Intel's FakeCatcher and the problem that surrounds this entire class of AI-based security tools.
Let's start with the problem. AI tools that catch other AIs doing bad things are critical to our safety. However, offense is always far easier than defense. The issue with tools in this class is that they are generally trained on examples of existing criminal work. That limits them to attacks that have already appeared in the wild, while attackers are free to innovate and create new methods to execute their crimes. Increasingly, AIs can even be used to suggest methods that, based on what is known about the mitigation tools, those tools won't initially be able to identify.
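To make the training-data limitation concrete, here is a minimal sketch, not Intel's FakeCatcher or any real detector, using synthetic data. The "generators" are hypothetical stand-ins for different deepfake-creation methods: a classifier trained only on fakes from generators it has already seen tends to misfire on fakes from a new one.

```python
# A hedged illustration, not a real deepfake detector: synthetic feature
# vectors stand in for per-generator "artifact signatures".
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)

def make_samples(n, center, label):
    """Feature vectors clustered around a per-generator artifact signature."""
    X = rng.normal(loc=center, scale=0.5, size=(n, 8))
    y = np.full(n, label)
    return X, y

# Real videos plus fakes from two known (hypothetical) generators: the training set.
X_real, y_real = make_samples(500, center=0.0, label=0)
X_fake_a, y_fake_a = make_samples(250, center=2.0, label=1)
X_fake_b, y_fake_b = make_samples(250, center=-2.0, label=1)

X_train = np.vstack([X_real, X_fake_a, X_fake_b])
y_train = np.concatenate([y_real, y_fake_a, y_fake_b])

clf = RandomForestClassifier(random_state=0).fit(X_train, y_train)

# A novel generator leaves different artifacts that sit close to the "real" cluster,
# a region the detector never saw labeled as fake during training.
X_fake_new, y_fake_new = make_samples(250, center=0.3, label=1)

print("accuracy on fakes from known generators:",
      clf.score(np.vstack([X_fake_a, X_fake_b]),
                np.concatenate([y_fake_a, y_fake_b])))
print("accuracy on fakes from a novel generator:",
      clf.score(X_fake_new, y_fake_new))
```

Running this, the detector scores near-perfectly on the generators it was trained against and poorly on the novel one, which is the asymmetry the column describes: the defender is anchored to what has already been seen, while the attacker is free to move somewhere new.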
Whether we are talking about a physical or a digital attack, the attacker always has the advantage because they get unlimited time to study the entity they want to trick, whereas the victim must protect against an overwhelming number of potential exploits and attack vectors. An attacker may spend years of preparation focused largely on figuring out how to get past security, while the defense may have only microseconds to stop the attack.