Researchers have demonstrated how a projected gradient descent (PGD) attack can fool medical imaging systems into seeing things that are not there. A PGD attack subtly perturbs the pixels of an image to trick an image recognition model into falsely identifying something in it, in this case in medical scans. The researchers successfully fooled all three systems they tested, covering a retina scan, an X-ray, and a dermatological scan for cancerous moles, regardless of their level of access to the system itself. Take a look over at The Register for more information on this specific attack as well as the general vulnerability of image recognition software.
"Medical AI systems are particularly vulnerable to attacks and have been overlooked in security research, a new study suggests."
Here is some more Tech News from around the web:
- Qual-gone: 1,200+ axed from Snapdragon, Centriq giant Qualcomm @ The Register
- TSMC cuts revenue outlook for 2018 @ DigiTimes
- Google launches Chat, its Apple iMessage killer, as Allo development is stalled @ The Inquirer
- Unlock & Talk: Open Source Bootloader & Modem @ Hack a Day
- Android Go review—Google’s scattershot attempt at a low-end Android OS @ Ars Technica
- Apple not planning to merge iOS and macOS anytime soon, says Tim Cook @ The Inquirer
Under what circumstances can such an attack happen? I mean, if you have the level of access that allows you to modify a medical image, you could do pretty much anything to it; there’s no particular reason for an attack that fools AI only.