Researchers have developed a security scheme for accelerators that run power-hungry AI models. It defends against the two most common types of attacks.
Health-monitoring apps allow individuals to control chronic illnesses or keep up with fitness objectives using just a smartphone.
Yet these apps can run slowly and drain the battery, because the large machine-learning models they rely on must shuttle data back and forth between the smartphone and a central memory server.
Engineers usually make processes faster by using special hardware that cuts down on data movement.
Although these machine-learning accelerators make computing more efficient, they are vulnerable to hackers who might steal confidential information.
To lessen this risk, researchers from MIT and the MIT-IBM Watson AI Lab developed a machine-learning accelerator that can withstand the two most frequent attacks.
Their device is designed to protect users’ health records, financial details, or other private data, while allowing large AI models to operate effectively on devices.
The team made several improvements that boost security with only a minor slowdown in the device. Additionally, these security enhancements do not affect the accuracy of the computations.
This machine-learning accelerator could be especially useful for intensive AI tasks such as augmented and virtual reality or self-driving cars.
“It is important to design with security in mind from the ground up. If you are trying to add even a minimal amount of security after a system has been designed, it is prohibitively expensive. We were able to effectively balance a lot of these tradeoffs during the design phase,” says Ashok.
The researchers focused on a kind of machine-learning accelerator known as digital in-memory compute (IMC). A digital IMC chip carries out calculations within a device’s memory, where parts of a machine-learning model are kept after being transferred from a central server.
The full model is too large to fit on the device, but by dividing it into sections and reusing those sections as often as possible, IMC chips reduce the amount of data that must be transferred back and forth.
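The tile-and-reuse idea can be illustrated with a short sketch. This is a simplified illustration, not the chip's actual dataflow: the model's weight matrix is split into small tiles (hypothetical sizes), each tile is "loaded" once, and the computation reuses it before moving on, mimicking how an IMC chip avoids re-transferring model sections.

```python
# Minimal sketch of tile-based reuse (hypothetical sizes, plain Python):
# split a weight matrix into row-tiles that would fit in on-chip memory,
# then compute a matrix-vector product one tile at a time.

def split_into_tiles(weights, tile_rows):
    """Divide a weight matrix (list of rows) into tiles of at most tile_rows rows."""
    return [weights[i:i + tile_rows] for i in range(0, len(weights), tile_rows)]

def matvec_tiled(weights, x, tile_rows):
    """Multiply weights @ x tile by tile; each tile is 'transferred' once and reused."""
    result = []
    for tile in split_into_tiles(weights, tile_rows):
        # In hardware, this tile now sits in on-chip memory and is reused
        # for every computation that needs it before the next transfer.
        for row in tile:
            result.append(sum(w * xi for w, xi in zip(row, x)))
    return result

weights = [[1, 2], [3, 4], [5, 6], [7, 8]]
x = [1, 1]
print(matvec_tiled(weights, x, tile_rows=2))  # → [3, 7, 11, 15]
```

Here only one tile needs to be resident at a time, which is why the full model never has to fit on the device at once.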
Safety checks
To evaluate their chip, the researchers acted as hackers and attempted to access secret data through side-channel and bus-probing attacks.
Despite trying millions of times, they were unable to retrieve any real data or access parts of the model or dataset, and the encryption held up. By comparison, it took only about 5,000 samples to extract information from an unprotected chip.
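Why do a few thousand samples suffice against an unprotected chip? In a differential power-analysis-style attack, each measurement carries a tiny secret-dependent signal buried in noise, and averaging enough samples makes that signal stand out. The following toy simulation illustrates this statistical principle only; the leak size and threshold are invented for illustration, and this is not the researchers' actual experiment.

```python
# Toy illustration of side-channel leakage (hypothetical numbers, not the
# researchers' setup): each power sample leaks a tiny secret-dependent offset
# under heavy noise. Averaging many samples recovers the secret bit.
import random

random.seed(0)

SECRET_BIT = 1   # the value an attacker wants to recover
LEAK = 0.2       # hypothetical: small extra power draw when the bit is 1
NOISE_STD = 1.0  # measurement noise dwarfs the leak in any single sample

def noisy_power_sample():
    """One simulated power measurement: leak plus Gaussian noise."""
    return SECRET_BIT * LEAK + random.gauss(0, NOISE_STD)

def guess_bit(n_samples):
    """Average n_samples traces and threshold halfway between the two cases."""
    avg = sum(noisy_power_sample() for _ in range(n_samples)) / n_samples
    return 1 if avg > LEAK / 2 else 0

print(guess_bit(5000))  # with ~5,000 samples, the noise averages away
```

Countermeasures such as masking or randomization break the fixed correlation between the secret and the measurement, which is what forces the sample count from thousands into the millions and beyond.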
However, adding security did reduce the accelerator's energy efficiency, and it required a larger chip area, which could increase fabrication costs.