Deep Learning can hallucinate

This is very interesting: a way to trip up computer vision systems by placing a strange-looking sticker in the scene, fooling image classifiers into mislabeling what they see.


As the researchers write, the sticker “allows attackers to create a physical-world attack without prior knowledge of the lighting conditions, camera angle, type of classifier being attacked, or even the other items within the scene.” So, after such an image is generated, it could be “distributed across the Internet for other attackers to print out and use.” (via The Verge)
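To make the idea concrete, here is a minimal sketch of the digital half of the attack: pasting a pre-made patch into an image before it is fed to a classifier. The `apply_patch` function and the toy arrays are my own illustration, not code from the paper; a real attack would use an optimized patch and a physical printout, but the overlay step looks roughly like this.

```python
import numpy as np

def apply_patch(image, patch, top, left):
    """Overlay an adversarial patch onto an image at (top, left).

    image: (H, W, 3) float array in [0, 1]
    patch: (h, w, 3) float array in [0, 1] -- the pre-optimized sticker
    Returns a new image; the original is left untouched.
    """
    patched = image.copy()
    h, w = patch.shape[:2]
    patched[top:top + h, left:left + w] = patch
    return patched

# Toy example: a 64x64 black "scene" and an 8x8 white "sticker"
scene = np.zeros((64, 64, 3))
sticker = np.ones((8, 8, 3))
out = apply_patch(scene, sticker, top=10, left=20)
```

The point of the paper is that the patch itself is optimized to dominate the classifier's output regardless of where in the scene it lands, which is why a simple paste like this is enough to flip the prediction.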


Neural Market Trends is the online home of Thomas Ott.