Abstract
Studies on sign language recognition can be classified into vision-based methods using a camera and motion-based methods using sensor gloves. In vision-based methods, a camera is placed in front of the signer to capture sign language from an anterior view. This setup is difficult to use in daily conversations because of its installation requirements. In this study, a sign language recognition system that can be used anywhere and at any time is proposed. In the proposed system, the signer wears a camera. In this paper, sign language image features extracted from first-person view images and the installation position of the wearable camera are examined. The effectiveness of the proposed system is evaluated through experiments.