For my midterm, I am planning to build on my project from last week and turn it into a user-generated, user-defined computer vision dataset. It will be made of two parts:
Part 1 - Create a website that lets people draw how they want the computer to see them and label themselves
Part 2 - Create an output website where people can either enter a word and see the corresponding person drawn, OR draw a person and have the system predict what kind of person they are drawing
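To make Part 1 concrete for myself: each submission could be stored as a small JSON record pairing the drawing's strokes with the person's label. This is only a sketch of a possible schema; the `makeRecord` helper and its field names are my own assumptions, not a final design.

```javascript
// Hypothetical record format for one submission: strokes are arrays of
// [x, y] points, and label is the user's self-description.
function makeRecord(label, strokes) {
  if (typeof label !== "string" || label.trim().length === 0) {
    throw new Error("label must be a non-empty string");
  }
  return {
    label: label.trim().toLowerCase(), // normalize so "Artist" and "artist" match
    strokes: strokes,                  // e.g. [[[0,0],[10,5]], [[3,3],[8,8]]]
    strokeCount: strokes.length,
  };
}

const record = makeRecord("Artist", [[[0, 0], [10, 5]]]);
console.log(JSON.stringify(record));
// prints {"label":"artist","strokes":[[[0,0],[10,5]]],"strokeCount":1}
```

Keeping the label normalized at save time should make it easier to group drawings by label later.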
Motivating Questions
Can we force technology to see us how we see ourselves? How we want to be seen?
Can this be used to train technology to see us differently?
Can we create new infrastructure for big data?
Can we create new modes of production for big data?
What type of augmentation does this give to our technology systems?
Can we make new biometric diagrams?
What would digital self portraits look like?
In what ways is obfuscation part of or not part of this process?
So far, I have used the following references in thinking about this work:
Caroline Sinders - Feminist Data Set
Zach Blas - Face Cages and Facial Weaponization Suite
CV Dazzle - tips
ML5 sketch https://ml5js.org/docs/sketchrnn-example
Through this project, I hope I’ll learn:
Server management →
How to create and store a database on a server
How to pull information from a server
How to use machine learning tools such as ML5
ML5 example: Train sketch RNN on drawings?
The code for Quick, Draw!, which is used in the ML5 example
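While I learn the server side, the "database" could start as an in-memory store with save and query operations; a real version would persist on a server and sit behind HTTP endpoints (e.g., Express routes). The function names here (`saveDrawing`, `getByLabel`) are illustrative assumptions, not a committed API.

```javascript
// Minimal in-memory stand-in for the drawing database. A real deployment
// would persist records server-side (file, SQLite, MongoDB, etc.) and
// expose saving/pulling through HTTP endpoints.
const drawings = [];

function saveDrawing(label, strokes) {
  const record = { id: drawings.length, label, strokes };
  drawings.push(record);
  return record.id; // id lets the client refer back to its submission
}

function getByLabel(label) {
  return drawings.filter((d) => d.label === label);
}

saveDrawing("artist", [[[0, 0], [5, 5]]]);
saveDrawing("teacher", [[[1, 1], [2, 2]]]);
console.log(getByLabel("artist").length); // prints 1
```

The same two operations (save a drawing, pull drawings by label) map directly onto what Parts 1 and 2 each need from the server.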
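If I do train sketch-RNN on the collected drawings, I'll need the data in the offset-based "stroke-3" format ([dx, dy, penLifted]) rather than absolute pixel coordinates. Below is my rough reading of that conversion; I'd want to verify it against the Quick, Draw! / sketch-rnn code before trusting it for training.

```javascript
// Convert absolute [x, y] points (grouped by stroke) into stroke-3 offsets:
// each row is [dx, dy, penLifted], with penLifted = 1 at the end of a stroke.
// This is my understanding of the format, to be checked against the real code.
function toStroke3(strokes) {
  const out = [];
  let prev = [0, 0];
  for (const stroke of strokes) {
    stroke.forEach(([x, y], i) => {
      const penLifted = i === stroke.length - 1 ? 1 : 0; // pen lifts at stroke end
      out.push([x - prev[0], y - prev[1], penLifted]);
      prev = [x, y];
    });
  }
  return out;
}

console.log(toStroke3([[[0, 0], [10, 0]], [[10, 10]]]));
// prints [ [ 0, 0, 0 ], [ 10, 0, 1 ], [ 0, 10, 1 ] ]
```

Storing absolute points (as in Part 1) and converting to offsets only at training time keeps the raw data reusable for other experiments.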
Other considerations for this first version:
Should there be a time constraint?
A constraint on the total pixel length of the drawing?
Constrain labels to only one or two?
Which order makes the most sense/works best: draw then label, or label then draw?
These are questions our professor Shawn asked me during office hours, and I'd like to think more about them!