User-Generated Computer Vision Dataset
For this midterm, I accomplished part 1 of my project proposal for the user-generated computer vision dataset: building an app where people can draw themselves for the computer and label their drawing.
When the server is running, you can see the webpage here: http://68.183.140.103/ and be the watcher here: http://68.183.140.103/watcher.html
Design
To the user, the interaction isn’t much different, because I was focused on getting the backend in place. When the user loads the page, they are prompted to trace their face and then choose an adjective to describe themselves:
Technical
The code can be found on GitHub here.
I am using Node.js with Express, WebSockets to communicate between the server and the pages, and NeDB to save the adjective and mouse movements in the database.
Getting this functioning, really understanding every step of it, and being able to control what I wanted was a big challenge for me, since this is my first time doing this. I combined examples from Shawn, Dan Shiffman, and the internet! And got some very helpful help from Rushali and Shawn - thank you!
Next
Now that I have the data, my next step is to complete part 2: create an output website or app based on sketchRNN that makes predictions from these drawings. I also still need to consider some of the questions I posed in my previous midterm proposal (conceptual, technical, and practical).