board2board - collaborative whiteboard using object detection



Our app board2board allows multiple users to collaboratively edit a board using a colored object. Previous projects in this area have forced users to draw with a trackpad or switch to a tablet. Our app differs from previous work by using a computer's webcam to detect the motion of a colored object, letting users communicate with one another remotely while retaining the dexterity of drawing by hand.

Features


Since users’ background environments may vary greatly, we let users choose which color to detect. We currently support blue and green, which we have found to deliver the best results, but we plan to extend detection to arbitrary colors.
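In the app itself, OpenCV.js thresholds each webcam frame against an HSV range for the chosen color. As a self-contained sketch of that per-pixel test (the hue ranges and saturation/value cutoffs below are our illustrative guesses, not the app's exact tuning):

```javascript
// Convert an RGB pixel (0-255 channels) to HSV (h in degrees, s and v in [0, 1]).
function rgbToHsv(r, g, b) {
  r /= 255; g /= 255; b /= 255;
  const max = Math.max(r, g, b), min = Math.min(r, g, b);
  const v = max, d = max - min;
  const s = max === 0 ? 0 : d / max;
  let h = 0;
  if (d !== 0) {
    if (max === r) h = ((g - b) / d) % 6;
    else if (max === g) h = (b - r) / d + 2;
    else h = (r - g) / d + 4;
    h *= 60;
    if (h < 0) h += 360;
  }
  return { h, s, v };
}

// Hypothetical hue ranges (degrees) for the two supported marker colors.
const COLOR_RANGES = {
  green: [90, 150],
  blue: [210, 270],
};

// Decide whether a pixel matches the user's chosen color. Washed-out or
// dark pixels are rejected so background noise is less likely to match.
function isTrackedColor(r, g, b, color) {
  const { h, s, v } = rgbToHsv(r, g, b);
  if (s < 0.4 || v < 0.3) return false;
  const [lo, hi] = COLOR_RANGES[color];
  return h >= lo && h <= hi;
}
```

In practice OpenCV.js does this in bulk with `cv.cvtColor` and `cv.inRange` over the whole frame, which is why supporting more colors mostly means adding more ranges.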
We allow users to draw disconnected lines by having the spacebar toggle whether the marker is drawing or not.



We also allow users to clear the board by pressing the “c” key.
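The two keyboard controls above boil down to a small piece of state. A minimal sketch, assuming a `penDown` flag and an `allPoints` list (the real app wires this into React state via a `keydown` listener):

```javascript
// Hypothetical drawing state: whether the marker is currently down,
// and every point drawn so far.
const state = { penDown: true, allPoints: [] };

// Handle a keypress: spacebar toggles drawing, "c" clears the board.
function handleKey(key, state) {
  if (key === ' ') {
    state.penDown = !state.penDown; // lift or lower the virtual marker
  } else if (key === 'c') {
    state.allPoints = []; // wipe the board
  }
  return state;
}

// In the browser this would be attached roughly as:
// document.addEventListener('keydown', (e) => handleKey(e.key, state));
```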


Users can collaboratively edit the board by visiting the same link. In the future we plan to support private rooms so that multiple groups of people can edit separate boards.

Tools Used

We used the OpenCV.js library to implement object detection; Material-UI and React.js for the front end; and Express, Socket.IO, and Node.js for the backend.

Core Challenge


The most challenging part of the project was updating and sending the drawing data (the points each user drew) between multiple clients. We used Socket.IO to handle communication between the clients and the server. The general idea is that the server holds a dictionary mapping Socket.IO’s uniquely generated socket IDs to the points drawn by the user with that socket ID.
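The server-side dictionary can be sketched as a tiny store keyed by socket ID. This is an illustrative plain-JS version of the logic, not the app's exact code; the commented Socket.IO wiring shows where each operation would be called:

```javascript
// A store mapping socket IDs to the points drawn by that client.
function createStore() {
  const pointsBySocket = {};
  return {
    // Called on io.on('connection', ...): register a new client.
    connect(id) { pointsBySocket[id] = []; },
    // Called when a client emits newly drawn points.
    addPoints(id, newPoints) { pointsBySocket[id].push(...newPoints); },
    // Called on socket.on('disconnect', ...).
    disconnect(id) { delete pointsBySocket[id]; },
    // The full board, sent to newly connected clients.
    board() { return pointsBySocket; },
  };
}

// Hypothetical Socket.IO wiring (event name 'draw' is an assumption):
// io.on('connection', (socket) => {
//   store.connect(socket.id);
//   socket.emit('init', store.board());
//   socket.on('draw', (pts) => {
//     store.addPoints(socket.id, pts);
//     socket.broadcast.emit('draw', { id: socket.id, points: pts });
//   });
//   socket.on('disconnect', () => store.disconnect(socket.id));
// });
```

Keying on `socket.id` is what keeps one user's strokes from being merged into another's.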

We originally planned to have the server store a single list of all points drawn, but realized that we needed to know who drew each point so that multiple online users would not interfere with one another’s drawings. Each client keeps two data structures for the drawing data: allPoints stores all the points drawn across all frames, and currPoints stores the points drawn in the current frame. As soon as new points are detected in a frame, they are emitted to the server, which adds them to its own store of all the points.
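The per-frame handoff between currPoints, allPoints, and the server can be sketched as one function. The function name and the `'draw'` event name are our illustrative assumptions:

```javascript
// At the end of each detected frame: fold the new points into the local
// history, send only the new points to the server, and reset for the
// next frame. `emit` stands in for socket.emit.
function finishFrame(allPoints, currPoints, emit) {
  if (currPoints.length === 0) return; // nothing drawn this frame
  allPoints.push(...currPoints);
  emit('draw', currPoints); // send only the delta, not the whole board
  currPoints.length = 0;    // clear in place for the next frame
}
```

Emitting only the current frame's points keeps each message small, which matters when detection runs on every webcam frame.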

Future Work


While we are very excited about our project, we realize that many potential features remain to be added.
Most notable would be a URL-sharing feature, which would allow multiple groups of people to work on different whiteboards.
Another potential feature would be a database so that users could save their drawings, share them, and come back to them at a later date.
We are also considering adding gesture recognition to trigger actions (e.g., a fist to clear the board).

Acknowledgements

We would like to acknowledge 1991viet for the inspiration for our project. The repo we consulted is linked here.
