Sponge Examples, Like Creutzfeldt-Jakob Disease For AIs

Source: The Register

Worse Than Digital Brain Freeze

A coalition of researchers from the University of Cambridge, the University of Toronto, and the Vector Institute in Canada has discovered a rather worrying vulnerability in computer-vision and natural-language-processing models which bears a superficial resemblance to a DoS attack.  A successful attack can slow a system's processing of input by several orders of magnitude, which delays output and increases power consumption; in a real-time system, it could render the image-recognition software in an autonomous vehicle useless, since the vehicle can no longer process input quickly enough to correct course.

The Register offers an example of this attack, which is somewhat hard to follow if you don’t consider how machine-learning algorithms work.  Feed a language model the word “explsinable” instead of “explainable”, and the difference between how hardware and wetware process language becomes obvious.  Instead of recognizing the word in its entirety and spotting the obvious spelling mistake, the model breaks the word up into smaller pieces and associates them to come up with a meaning or answer.  In the example, it attempts to match the three tokens ‘expl’, ‘sin’, and ‘able’ against known patterns to determine the meaning of the word.  While it will eventually use other associations to arrive at the same definition as the known word “explainable”, it will take significantly longer to process.
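The splitting described above can be illustrated with a minimal greedy longest-match subword tokenizer. Note that the vocabulary below is invented purely for this sketch and is not any real model's vocabulary; the point is only that the misspelled word falls off the one-token fast path and costs several tokens instead:

```python
# Toy vocabulary for illustration only -- not a real model's vocabulary.
VOCAB = {"explainable", "expl", "ain", "sin", "able"}

def tokenize(word: str) -> list[str]:
    """Split `word` into the longest vocabulary entries, left to right."""
    tokens = []
    i = 0
    while i < len(word):
        for j in range(len(word), i, -1):   # try the longest match first
            if word[i:j] in VOCAB:
                tokens.append(word[i:j])
                i = j
                break
        else:
            tokens.append(word[i])          # unknown character: emit as-is
            i += 1
    return tokens

print(tokenize("explainable"))   # ['explainable'] -- one token
print(tokenize("explsinable"))   # ['expl', 'sin', 'able'] -- three tokens
```

The correctly spelled word matches the vocabulary in one step, while the sponge-like misspelling forces three lookups; in a real model, each extra token means extra computation.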

This style of attack has been successfully executed on an Intel Xeon E5-2620 v4 CPU, a GeForce GTX 1080 Ti GPU, and an ASIC simulator; Google’s custom TPU is also likely affected, though the researchers did not have an opportunity to test an attack against it.  It is somewhat amusing that these attacks are bred rather than designed: the researchers generate a population of candidate inputs, pick the ones that took the longest to process, spawn a new generation from those, and repeat until they get a really nasty one.
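That generate-and-select loop can be sketched as a toy evolutionary search. Everything here is illustrative: the vocabulary is made up, and token count stands in as the fitness score, whereas the real attack measures actual latency and energy consumption on hardware:

```python
import random

random.seed(0)  # fixed seed so the sketch is reproducible

# Toy vocabulary and greedy tokenizer standing in for a model's input
# pipeline; more tokens means more work for the model downstream.
VOCAB = {"explainable", "expl", "ain", "sin", "able"}

def cost(text: str) -> int:
    """Number of greedy longest-match tokens (unknown chars count as one)."""
    n, i = 0, 0
    while i < len(text):
        for j in range(len(text), i, -1):
            if text[i:j] in VOCAB:
                i = j
                break
        else:
            i += 1
        n += 1
    return n

ALPHABET = "abcdefghijklmnopqrstuvwxyz"

def mutate(text: str) -> str:
    """Substitute one random character."""
    i = random.randrange(len(text))
    return text[:i] + random.choice(ALPHABET) + text[i + 1:]

def evolve(seed: str, generations: int = 200, pop: int = 20) -> str:
    """Breed inputs, keeping the most expensive-to-process each round."""
    population = [seed]
    for _ in range(generations):
        children = [mutate(random.choice(population)) for _ in range(pop)]
        population = sorted(population + children, key=cost, reverse=True)[:pop]
    return population[0]

sponge = evolve("explainable")
print(cost("explainable"), cost(sponge))  # the evolved input costs more
```

The seed word tokenizes in a single step, but after a few hundred generations of mutate-and-select the survivors are strings that maximize tokenizer work, mirroring how the researchers evolve ever-slower inputs against real models.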

No such attacks have been recognized in the real world yet, but the team is now devoting its efforts to hunting for any current or future ones.

As The Register puts it: “A novel adversarial attack that can jam machine-learning systems with dodgy inputs to increase processing time and cause mischief or even physical harm has been mooted.”


About The Author

Jeremy Hellstrom

Call it K7M.com, AMDMB.com, or PC Perspective, Jeremy has been hanging out and then working with the gang here for years. Apart from the front page you might find him on the BOINC Forums or possibly the Fraggin' Frogs if he has the time.

1 Comment

  1. Anomalous

    I like this “attack” – it cuts through all the marketing hype (AI, machine learning, etc.) to remind us that in the end, computers are dumb, no matter how fast they are.


