Recent research by Professor of Computer Science Vitaly Shmatikov and Postdoc Reza Shokri was featured in WIRED. The team, which included researchers from the University of Texas at Austin, trained artificial intelligence to recognize faces that have been blurred.
Researchers at the University of Texas at Austin and Cornell Tech say that they’ve trained a piece of software that can undermine the privacy benefits of standard content-masking techniques like blurring and pixelation by learning to read or see what’s meant to be hidden in images—anything from a blurred house number to a pixelated human face in the background of a photo. And they didn’t even need to painstakingly develop extensive new image uncloaking methodologies to do it. Instead, the team found that mainstream machine learning methods—the process of “training” a computer with a set of example data rather than programming it—lend themselves readily to this type of attack.
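The core idea is that an attacker never needs to reverse the obfuscation: a model trained directly on obfuscated examples can learn to identify what is hidden. The sketch below is a hypothetical illustration of that principle (not the authors' actual method), using synthetic 32×32 "images" of two made-up identities, a standard block-averaging pixelation, and a simple nearest-centroid classifier trained on the pixelated versions.

```python
import numpy as np

def pixelate(img, block=8):
    """Standard mosaic effect: replace each block x block tile
    with its average value."""
    out = img.astype(float).copy()
    h, w = out.shape
    for i in range(0, h, block):
        for j in range(0, w, block):
            out[i:i + block, j:j + block] = out[i:i + block, j:j + block].mean()
    return out

# Hypothetical toy data: two "identities" drawn as distinct patterns
# plus noise, standing in for face photos.
rng = np.random.default_rng(0)

def sample(identity, n):
    base = np.zeros((32, 32))
    if identity == 0:
        base[8:24, 8:24] = 1.0                      # bright square
    else:
        base[np.arange(32), np.arange(32)] = 1.0    # diagonal stripe
    return [base + 0.3 * rng.standard_normal((32, 32)) for _ in range(n)]

# Key step: train in the *obfuscated* domain -- one centroid per
# identity, computed from pixelated training examples.
centroids = {k: np.mean([pixelate(x) for x in sample(k, 20)], axis=0)
             for k in (0, 1)}

def predict(img):
    p = pixelate(img)
    return min(centroids, key=lambda k: np.linalg.norm(p - centroids[k]))

# Pixelation does not stop identification of fresh samples.
acc = np.mean([predict(x) == k for k in (0, 1) for x in sample(k, 10)])
```

On this toy data the classifier identifies pixelated test images with near-perfect accuracy, which is exactly the weakness the researchers demonstrate at scale: obfuscation that defeats the human eye still leaves machine-learnable structure behind.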
“The techniques we’re using in this paper are very standard in image recognition, which is a disturbing thought,” says Vitaly Shmatikov, one of the authors from Cornell Tech.
Read the full article on WIRED.