Helping Hand

Helping Hand is a standalone gesture recognition project that connects to a 3D-printed hand. A camera detects whether the user's hand is open or closed, and the 3D-printed hand mimics the same position. This project was built by a team of EE-Emerge students at UC Davis.
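
In outline, the system runs a simple loop: grab a camera frame, classify the hand as open or closed, and command the printed hand to match. The sketch below shows what that loop could look like in Python with OpenCV; classify_frame and set_hand_position are hypothetical placeholders standing in for the project's actual model and servo code, which are not shown here.

    # Minimal sketch of the capture -> classify -> actuate loop.
    # classify_frame and set_hand_position are hypothetical placeholders.
    import cv2

    def classify_frame(frame):
        # Stub: the real project runs an on-device model here and
        # returns "open" or "closed" for the hand in the frame.
        return "open"

    def set_hand_position(gesture):
        # Stub: the real project drives the 3D-printed hand's servos
        # to the matching open or closed position.
        pass

    def main():
        camera = cv2.VideoCapture(0)            # default camera
        try:
            while True:
                ok, frame = camera.read()       # capture one frame
                if not ok:
                    break
                gesture = classify_frame(frame) # "open" or "closed"
                set_hand_position(gesture)      # mirror the user's gesture
        finally:
            camera.release()

    if __name__ == "__main__":
        main()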

Mission Statement:

As supply chains grow more complex and the pressure to meet consumer demand increases, automating tasks has never been more important. Helping Hand approaches automation differently by allowing a task to be automated without sacrificing the specific wants and needs of the user. Helping Hand is a robotic hand that uses computer vision and edge machine learning to mimic the gestures of its user. By copying the user's motions, Helping Hand lets a person avoid unsafe work environments while keeping full control over the task. Because all decisions are made on the device itself, Helping Hand is a standalone, portable device that does not require Wi-Fi or cloud computing. This was an important design decision, as it gives the device more versatility than other devices currently available. With this constraint in mind, the Helping Hand team identified and implemented several different approaches to computer vision before settling on a method, assessing each approach's efficiency, accuracy, and effectiveness under different environmental conditions. This ultimately led the team to use a convolutional neural network (CNN), the software responsible for Helping Hand's object detection and movement.
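
For illustration, the sketch below shows what a small binary CNN for open-versus-closed classification could look like in Keras; the framework, input shape, and layer sizes are assumptions made for this example and are not taken from the project itself.

    # Minimal sketch of a binary open/closed hand classifier, assuming a
    # Keras/TensorFlow workflow; the architecture shown is illustrative only.
    import tensorflow as tf
    from tensorflow.keras import layers, models

    def build_model(input_shape=(96, 96, 1)):
        model = models.Sequential([
            layers.Input(shape=input_shape),
            layers.Conv2D(16, 3, activation="relu"),  # local edge/shape features
            layers.MaxPooling2D(),
            layers.Conv2D(32, 3, activation="relu"),
            layers.MaxPooling2D(),
            layers.Flatten(),
            layers.Dense(32, activation="relu"),
            layers.Dense(1, activation="sigmoid"),    # probability the hand is open
        ])
        model.compile(optimizer="adam",
                      loss="binary_crossentropy",
                      metrics=["accuracy"])
        return model

Keeping a network this small is what makes on-device (edge) inference practical without relying on cloud computing.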