Obfuscation is the deliberate addition of ambiguous, confusing, or misleading information to interfere with surveillance and data collection (p.1)
Obfuscation, at its most abstract, is the production of noise modeled on an existing signal in order to make a collection of data more ambiguous, confusing, harder to exploit, more difficult to act on, and therefore less valuable (p.46)
I began this project with my classmates Julia Rich and Idit Barak, inspired by obfuscation work and related projects we discussed in class (see definition above; e.g., CV Dazzle, Zach Blas, Ani Liu, NIR LED glasses, Kathleen McDermott).
We asked ourselves:
Can we build on this work in the form of a personal kit?
Can we use the computer against itself (in form and function)?
Our goal was to create a kit of silicone face adhesives that would make the wearer anonymous to facial recognition software and that could be carried in a small purse or bag. Design was important to us: we wanted the form to look angular, similar to the patterns of triangles that facial recognition software uses to break faces down into sections (using the computer against itself).
First, we researched different facial recognition systems to better understand how they work. You can read my summaries of these different systems here. Most now use machine learning, which makes “fooling” them more complicated. We considered using machine learning to figure out what colors, patterns, and shapes would be most effective, but decided to start with an analogue approach.
Based on an example our professor had shown us in class, I created a simple web interface using Amazon Rekognition to test our different methods. We tried the following:
Making angular shapes out of plain white paper to change the “shape” of our faces
Coloring the paper close to our skin tone, to see if the computer would read it as part of the face rather than as something in front of it
Applying silicone matched to our skin tone to change our face structure more realistically
Trying images and patterns drawn from published research that uses machine learning to fool object recognition systems
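To give a sense of the kind of check the test interface runs, here is a minimal sketch of a Rekognition face-detection call in Python. This is a hypothetical illustration, not the project's actual code: the function names, the image path, and the use of boto3 with configured AWS credentials are all assumptions.

```python
def count_faces(response):
    """Count faces in a Rekognition DetectFaces response dict."""
    return len(response.get("FaceDetails", []))

def detect_faces(image_path, region="us-east-1"):
    """Send a local image to Amazon Rekognition and return the raw response.

    Requires boto3 and AWS credentials; imported here so the parsing
    helper above works without boto3 installed.
    """
    import boto3

    client = boto3.client("rekognition", region_name=region)
    with open(image_path, "rb") as f:
        image_bytes = f.read()
    return client.detect_faces(
        Image={"Bytes": image_bytes},
        Attributes=["DEFAULT"],
    )

if __name__ == "__main__":
    # "test_photo.jpg" is a placeholder path for a photo of an obfuscation attempt.
    response = detect_faces("test_photo.jpg")
    print(f"Faces detected: {count_faces(response)}")
    for face in response["FaceDetails"]:
        print(f"  confidence: {face['Confidence']:.1f}%")
```

An obfuscation attempt "succeeds" in this framing when the face count drops to zero, or when the reported confidence falls sharply compared to an unmodified photo.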
None of this worked very well. We had a few successes, but on the whole we decided we needed a different approach. Based on this initial research, our next step will be to try machine learning, similar to the approach taken by the recently published research cited above. Another route could be to crowdsource ideas by making the webpage public and letting people try different tactics to fool it. We are planning to continue the project using either or both of these methods - hopefully we’ll have an update soon!
If you have an NYU email you can see the final presentation here.
(redacted) code on github here.