Abstract
The instability of surface electromyography (sEMG) signals places a significant burden on users of traditional prosthetic hands. This research aims to realize six common prosthetic grasping actions by using computer vision to recognize key characteristics of the objects to be grasped. Prosthesis wearers need only wear two-channel sEMG sensors and perform five easily recognizable gestures as control commands, enabling functions such as confirming the system's judgment, correcting inappropriate grasping actions, and rapidly recognizing grasping actions. In addition, haptic feedback is added to the prosthetic fingers so that the hand can adapt to objects of different sizes and shapes. These methods demonstrate the potential to improve the stability and flexibility of prosthetic hand control, effectively reducing the difficulty of grasping everyday objects with a prosthetic hand.