This is straight out of a sci-fi novel. Researchers created a proof-of-concept technique that let them hide malware inside an AI’s neurons to avoid detection.
According to the paper, in this approach the malware is “disassembled” when embedded into the network’s neurons, and assembled into functioning malware by a malicious receiver program that can also be used to download the poisoned model via an update. The malware can still be stopped if the target device verifies the model before launching it, according to the paper. It can also be detected using “traditional methods” like static and dynamic analysis.
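The paper doesn’t ship code, but the general idea is steganography over model weights: payload bytes are spread across the parameters so the model still looks and behaves normally, and a receiver that knows the layout reassembles them. Here is a minimal sketch of that idea, assuming float32 weights in a numpy array and a simple least-significant-byte encoding; the function names and the encoding are illustrative assumptions, not the researchers’ exact scheme, and the hash check at the end stands in for the “verify the model before launching it” defence the paper mentions.

```python
# Sketch: hide bytes in the low-order byte of each float32 weight (little-endian),
# which barely perturbs the values, then read them back out on the receiver side.
import hashlib
import numpy as np

def embed_payload(weights: np.ndarray, payload: bytes) -> np.ndarray:
    """Spread `payload` across the least-significant byte of each float32 weight."""
    flat = weights.astype(np.float32).ravel()          # work on a copy
    if len(payload) > flat.size:
        raise ValueError("payload too large for this tensor")
    raw = flat.view(np.uint8).reshape(-1, 4)           # 4 bytes per float32
    raw[: len(payload), 0] = np.frombuffer(payload, dtype=np.uint8)
    return flat.reshape(weights.shape)

def extract_payload(weights: np.ndarray, length: int) -> bytes:
    """Receiver side: recover the hidden bytes from the weights."""
    raw = weights.astype(np.float32).ravel().view(np.uint8).reshape(-1, 4)
    return raw[:length, 0].tobytes()

def verify_model(weights: np.ndarray, known_good_sha256: str) -> bool:
    """Integrity check: reject a model whose hash doesn't match a trusted value."""
    return hashlib.sha256(weights.tobytes()).hexdigest() == known_good_sha256

# The perturbation to each weight is tiny, but the bytes round-trip exactly.
w = np.random.randn(1024).astype(np.float32)
secret = b"not-actually-malware"
w2 = embed_payload(w, secret)
assert extract_payload(w2, len(secret)) == secret
print("max weight change:", np.abs(w2 - w).max())
```

The point of the demo is that the altered model is nearly indistinguishable from the original by inspection of its weights, which is why the paper’s recommended defence is verifying the model’s integrity before loading it rather than trying to spot the payload inside it.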
Check It Out: Researchers Hid Malware Inside an AI’s Brain
Andrew:
This is yet one more reason why Apple’s whole-widget, walled garden approach is the most user-secure model on the market.
Given the feasibility of this type of attack, and the plausibility of its going undetected and being activated through a malicious ‘update’, one wonders what other highly sophisticated, state-sponsored threats might already exist in the wild. The risk that third-party components might house a ‘backdoor’, or a neural engine shipped with pre-installed but dormant malware, may be exactly why Apple makes it so difficult to use such components on its devices.
It’s not that protecting profit margins isn’t a factor in Apple’s locking down its devices; rather, its security analysis likely concludes that this is an unacceptable level of risk, not only to the individual but to the interconnected ecosystem. Both things can be true, and yet one of them be the more important.