For years there has been growing concern that many forms of machine learning are easier to deceive than they should be. There is good reason for that concern; for background on why, see the paper recommended to me by my friend Lewis Shepherd: "Deep Neural Networks are Easily Fooled."
Many of us have also raised concerns about the lack of security frameworks around artificial intelligence: the current approach to fielding AI is to create capabilities, test them for functionality, and deploy them, with no security framework involved. These observations make it important to discuss ways to optimize the security of AI along with the overall functionality of our systems. Machine learning is becoming ubiquitous, so we already need ways to improve its ability to perform in the presence of adversaries who would seek to deceive models. This is a topic worth discussing and understanding.
In discussions on this topic with Frank Chen of a16z, I was very happy to learn that some of the greatest minds in machine learning have been examining this issue. In fact, peer-reviewed research has been published on the topic, and many interesting projects are well underway on methods to address some of these problems.
Perhaps the most exciting domain of research in this area was kicked off by a 2014 research paper titled Generative Adversarial Nets. It describes ways to use unsupervised machine learning to help systems improve, including improving in environments that include deception.
This paper, by Ian Goodfellow and his colleagues at the University of Montreal, describes a Generative Adversarial Net (GAN) as a pair of neural network models pitted against each other: a generative model that produces forgeries meant to pass as real data, and a discriminative model that acts as an expert detective, trying to tell the forgeries apart from genuine examples.
Goodfellow et al. used the metaphor of a counterfeiter seeking to generate fake currency and a detective seeking to tell the difference between real and fake. In their words:
The generative model can be thought of as analogous to a team of counterfeiters, trying to produce fake currency and use it without detection, while the discriminative model is analogous to the police, trying to detect the counterfeit currency. Competition in this game drives both teams to improve their methods until the counterfeits are indistinguishable from the genuine articles.
So, in Goodfellow's framework, both the generative model and the discriminative model are trained to get better over time, eventually reaching the point where the detective can no longer tell real currency from counterfeit. This adversarial pressure can be used to continuously improve models.
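To make the counterfeiter-versus-police dynamic concrete, here is a minimal sketch of the idea in plain NumPy. It is a toy, not Goodfellow's implementation: the "real data" are hypothetical samples from a normal distribution with mean 4, the generator is just a learnable shift G(z) = theta + z, and the discriminator is a simple logistic model D(x) = sigmoid(w*x + b). The two are trained in alternation, each improving against the other, until the generator's output distribution drifts toward the real one.

```python
import numpy as np

rng = np.random.default_rng(0)
sigmoid = lambda t: 1.0 / (1.0 + np.exp(-t))

# Toy, illustrative setup (these names and values are assumptions, not from the paper):
theta = 0.0      # generator parameter: the "counterfeiter" produces theta + noise
w, b = 0.0, 0.0  # discriminator parameters: the "police"
lr = 0.05

for step in range(2000):
    real = rng.normal(4.0, 1.0, size=32)   # genuine articles: samples from N(4, 1)
    fake = theta + rng.normal(0.0, 1.0, size=32)  # counterfeits from the generator

    # Discriminator step: gradient ascent on log D(real) + log(1 - D(fake)),
    # i.e. learn to score real data high and fakes low.
    s_r = sigmoid(w * real + b)
    s_f = sigmoid(w * fake + b)
    w += lr * (np.mean((1 - s_r) * real) - np.mean(s_f * fake))
    b += lr * (np.mean(1 - s_r) - np.mean(s_f))

    # Generator step: gradient ascent on log D(fake) (the non-saturating loss),
    # i.e. shift theta so the counterfeits fool the current discriminator.
    s_f = sigmoid(w * fake + b)
    theta += lr * np.mean((1 - s_f) * w)

print(f"generator mean after training: {theta:.2f} (real mean is 4.0)")
```

The competition is visible in the updates: the discriminator's gradient pushes it to separate the two distributions, and the generator's gradient (which flows through the discriminator's weight w) pushes theta toward whatever region the discriminator currently scores as "real." At equilibrium the generator's samples sit on top of the real data and the discriminator can no longer tell them apart.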
Where might this research lead? This particular framework is applicable to the field of deep learning, which seeks to discover rich, hierarchical models that represent probability distributions over the kinds of data used in artificial intelligence applications. It is particularly relevant to applications involving natural images, audio waveforms containing speech, and symbolic data. But this research is still early, and it is perfectly appropriate to speculate on future use cases of this and related work.
For example, consider algorithms that automatically detect changes in satellite imagery and then describe those changes. Was there more or less vegetation in the image? Was the water level higher or lower than in the previous image? Was there more or less ice or snow? Were there more vehicles, and what types were they? Algorithms for these problems have existed for years, and despite many breakthroughs there is a huge need for improvement, especially in cases where humans seek to deceive and shape the results. GANs may be key to breakthroughs in how these images are processed.
Another potential area is computer security. AI, especially machine learning, is being applied to security solutions at the endpoint, on the network, and in the data center across many use cases. It is also making its way into commodity consumer cybersecurity products. The bad news is that adversaries are discovering AI and machine learning too; the cat-and-mouse game of cyber attacker versus cyber defender continues. How might GANs help defenders in this domain? One day soon, the AI in commercial cybersecurity offerings may come with GANs embedded to continuously challenge the system's results and improve defenses against increasingly smart adversaries.
These are just a few examples. There are so many others. GANs will one day be throughout our systems and always on, always seeking to deceive the good AI, and always making AI better.
Latest posts by Bob Gourley