
Microsoft’s New Framework Can Protect ML Models

Adversarial attacks have long been a significant threat to organizations' security. The Adversarial ML Threat Matrix, a joint project by Microsoft, MITRE, IBM, NVIDIA, and Bosch, addresses precisely this problem. The MITRE Corporation is an American not-for-profit organization that manages federally funded research and development centers.

The principal aim of this new open framework is to help security analysts detect and respond to adversarial attacks on machine learning models. Machine learning models are known to be especially vulnerable to certain types of adversarial attacks, and the framework is expected to make coping with these attacks easier.

The weaknesses of AI models

AI-based tools are being applied to a wide variety of problems, and new applications emerge regularly. But most of these AI-based systems, deep learning models in particular, are extremely vulnerable to adversarial attacks, as mentioned above.

Time and time again, researchers have shown how adversarially chosen inputs can fool a model or, worse, make it produce an attacker-preferred output. Threat actors can exploit this loophole to deceive the underlying model, power malware, or trick the model with a poisoned dataset.
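
As a concrete illustration of such an evasion attack, the sketch below crafts an adversarially chosen input with the Fast Gradient Sign Method (FGSM). The tiny PyTorch classifier, input shape, and epsilon value are illustrative assumptions, not part of any framework discussed here.

```python
# Minimal FGSM evasion-attack sketch, assuming a small PyTorch classifier.
import torch
import torch.nn as nn

# Hypothetical tiny classifier standing in for a deployed model.
model = nn.Sequential(nn.Flatten(), nn.Linear(28 * 28, 10))
model.eval()

def fgsm_perturb(x, true_label, epsilon=0.1):
    """Craft an adversarial example with the Fast Gradient Sign Method."""
    x = x.clone().detach().requires_grad_(True)
    loss = nn.functional.cross_entropy(model(x), true_label)
    loss.backward()
    # Step in the direction that increases the loss, then clamp to a valid range.
    return (x + epsilon * x.grad.sign()).clamp(0, 1).detach()

x = torch.rand(1, 1, 28, 28)   # stand-in "clean" input
y = torch.tensor([3])          # its true label
x_adv = fgsm_perturb(x, y)
print(model(x).argmax(1), model(x_adv).argmax(1))  # predictions may now differ
```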

Adversarial attacks can cause AI applications to make incorrect predictions and take irrational decisions, posing serious threats to their safety and stability. Not so long ago, ESET, a Slovak internet security company, analyzed a piece of malware called 'Emotet'. This email-based malware is widely used to run botnet-driven spam campaigns and even ransomware attacks. ESET discovered that it uses machine learning internally.

Model-inversion attacks have been studied extensively by researchers. In a model-inversion attack, access to a model is abused to infer information about its training dataset. Gartner stated in its 'Top 10 Strategic Technology Trends' report that by 2022 about 30% of AI cyberattacks are expected to exploit such weaknesses.
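
To make the idea concrete, the following sketch shows the core of a model-inversion attack under the simplest assumptions: gradient access to a (here hypothetical) victim classifier, and gradient ascent on an input until the model is highly confident in a chosen class, which can reveal a rough approximation of that class's training data.

```python
# Minimal model-inversion sketch; the victim model is a hypothetical stand-in.
import torch
import torch.nn as nn

victim = nn.Sequential(nn.Flatten(), nn.Linear(28 * 28, 10))
victim.eval()

def invert_class(target_class, steps=200, lr=0.1):
    """Optimize an input so the victim model strongly predicts target_class."""
    x = torch.zeros(1, 1, 28, 28, requires_grad=True)
    opt = torch.optim.Adam([x], lr=lr)
    for _ in range(steps):
        opt.zero_grad()
        # Maximize the target-class logit (i.e. minimize its negative).
        loss = -victim(x)[0, target_class]
        loss.backward()
        opt.step()
    return x.detach().clamp(0, 1)

reconstruction = invert_class(target_class=7)  # rough proxy for class-7 training data
```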

Adversarial ML Threat Matrix

Microsoft’s Adversarial ML Threat Matrix aims to ease the job of security analysts. The partners hope it will reduce the probability of data being weaponized, and they expect it to be an effective tool in the fight against intelligent threat actors.

Organizations can use the matrix to check the resilience of their models by simulating realistic attacks with an extensive list of tactics and techniques: gaining initial access to the environment, executing unsafe ML models, contaminating training data, and every other nasty thing a threat actor might attempt.
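
For instance, the data-contamination tactic can be approximated by a simple label-flipping experiment. The dataset, model, and flip rates below are illustrative assumptions used only to show how poisoning degrades a model, not a technique prescribed by the matrix.

```python
# Minimal training-data poisoning sketch via label flipping (scikit-learn).
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=2000, n_features=20, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

def accuracy_after_poisoning(flip_fraction):
    """Flip a fraction of training labels, retrain, and report clean test accuracy."""
    y_poisoned = y_tr.copy()
    n_flip = int(flip_fraction * len(y_poisoned))
    idx = np.random.default_rng(0).choice(len(y_poisoned), n_flip, replace=False)
    y_poisoned[idx] = 1 - y_poisoned[idx]  # attacker-chosen label flips
    model = LogisticRegression(max_iter=1000).fit(X_tr, y_poisoned)
    return model.score(X_te, y_te)

for frac in (0.0, 0.2, 0.4):
    print(f"flip {frac:.0%}: test accuracy {accuracy_after_poisoning(frac):.3f}")
```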

As Microsoft states, the goal of the Adversarial ML Threat Matrix is to help security analysts detect weaknesses in their models by positioning attacks on ML systems within a familiar structure. The matrix deliberately resembles ATT&CK, another popular MITRE framework, so that analysts do not need to unlearn everything and start afresh.

This development is a major step towards securing AI and ML tools against weaknesses like data poisoning and model evasion attacks. It is not the only effort, though -- many others are already working towards the same goal. Researchers from Johns Hopkins University, for example, developed TrojAI, a framework to help protect against Trojan attacks.
