Many feel Artificial Intelligence (AI) is the panacea for cybersecurity. Once we get AI, we will defeat the cybercriminals!
Not so fast. While AI can certainly enhance cybersecurity if appropriately applied, it is not as foolproof as one might think.
Jason Matheny, founding director of Georgetown’s Center for Security and Emerging Technology (CSET) and currently a member of the National Security Commission on Artificial Intelligence, said ‘AI systems are not being developed with a healthy focus on evolving threats, despite increased funding by the Pentagon and the private sector. A lot of the techniques that are used today were built without intelligent adversaries in mind. They were sort of innocent.’
There are three types of known threats to AI:
- Adversarial Examples – By exploiting how an AI system processes its input data, an attacker can craft small, often imperceptible perturbations that trick the model into seeing something that is not there (see the sketch after this list).
- Trojans – In a trojan attack, an adversary introduces a change into the system’s training data or environment that causes the model to learn the wrong lesson. This type of attack is usually carried out during development or through the supply chain.
- Model Inversion – With model inversion, adversaries reverse-engineer the machine learning model to recover information about the data used to train it.
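To make the first threat concrete, below is a minimal sketch of an adversarial example attack using the Fast Gradient Sign Method (FGSM). It assumes a PyTorch image classifier; the tiny untrained model and random "image" are illustrative placeholders, not part of any real system, and a production attack would target a trained model with a carefully chosen epsilon.

```python
import torch
import torch.nn as nn


def fgsm_adversarial_example(model, image, label, epsilon=0.03):
    """Craft an adversarial image with the Fast Gradient Sign Method (FGSM).

    A perturbation of size `epsilon` (often invisible to a human) is added in
    the direction that increases the model's loss, so the classifier can be
    tricked into "seeing" something that is not there.
    """
    model.eval()
    image = image.clone().detach().requires_grad_(True)

    # Forward pass and loss with respect to the true label.
    output = model(image)
    loss = nn.functional.cross_entropy(output, label)

    # Backward pass: gradient of the loss with respect to the input pixels.
    model.zero_grad()
    loss.backward()

    # Single step in the direction of the gradient's sign, then keep pixels valid.
    perturbed = image + epsilon * image.grad.sign()
    return perturbed.clamp(0.0, 1.0).detach()


if __name__ == "__main__":
    # Placeholder stand-in classifier: a tiny CNN for 3x32x32 images.
    model = nn.Sequential(
        nn.Conv2d(3, 8, kernel_size=3, padding=1),
        nn.ReLU(),
        nn.Flatten(),
        nn.Linear(8 * 32 * 32, 10),
    )
    image = torch.rand(1, 3, 32, 32)   # placeholder "clean" image
    label = torch.tensor([3])          # placeholder true class

    adversarial = fgsm_adversarial_example(model, image, label)
    print("clean prediction:", model(image).argmax(dim=1).item())
    print("adversarial prediction:", model(adversarial).argmax(dim=1).item())
```

Against a trained classifier, a perturbation this small typically leaves the image looking unchanged to a person while flipping the model’s prediction, which is exactly why these attacks are so hard to spot in deployed systems.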
Despite these three known vulnerabilities, less than one percent of AI research and development funding is going toward AI security. This means that, as with other legacy IT, security will have to be retrofitted into AI models after they are developed, an approach that has repeatedly proven complicated and flawed.
AI and Machine Learning will enhance security, but we must remember that intelligent humans with malicious intent are always lurking. We can never get complacent and assume a tool alone will protect our assets.