Touch interfaces have become widespread. They offer an easy-to-use, intuitive user interface; however, the small number of input elements remains an operability issue. In this study, we propose a method that identifies which finger performed an operation, using information from the touch screen and values from various sensors while the user is seated. This allows a touch interface to increase its number of input elements. We conducted experiments in which 19 participants provided operation data for each finger. The results confirmed that when the machine learning model was trained only on other users' operation data, our method classified four classes (the left hand's thumb, the left hand's other fingers, the right hand's thumb, and the right hand's other fingers) with an F-measure of about 60%. Moreover, when trained only on the user's own operation data, it classified the four classes with an F-measure of about 80% to 90%. In this paper, we discuss these results and some characteristics of the classification.
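To make the two evaluation settings concrete, the sketch below contrasts them under stated assumptions: a user-independent split (training only on other users' operation data) and a user-dependent split (training only on the target user's own data), each scored with a macro-averaged F-measure over the four classes. The feature extraction, the RandomForestClassifier choice, and all names are illustrative assumptions, not the paper's exact pipeline.

```python
# Minimal sketch of the two evaluation settings described above.
# Assumptions (not from the paper): scikit-learn, a random forest, and
# synthetic per-touch feature vectors; the real features come from the
# touch screen and the sensors used while the user is seated.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import f1_score
from sklearn.model_selection import train_test_split

CLASSES = ["left_thumb", "left_other", "right_thumb", "right_other"]
N_USERS, N_TOUCHES, N_FEATURES = 19, 200, 8
rng = np.random.default_rng(0)

# Synthetic stand-in data: 19 users x 200 touches x 8 features.
X = rng.normal(size=(N_USERS, N_TOUCHES, N_FEATURES))
y = rng.integers(0, len(CLASSES), size=(N_USERS, N_TOUCHES))

def user_independent_f1(target: int) -> float:
    """Train only on the other 18 users' data, test on the target user."""
    train_users = [u for u in range(N_USERS) if u != target]
    X_tr = X[train_users].reshape(-1, N_FEATURES)
    y_tr = y[train_users].reshape(-1)
    clf = RandomForestClassifier(random_state=0).fit(X_tr, y_tr)
    return f1_score(y[target], clf.predict(X[target]), average="macro")

def user_dependent_f1(target: int) -> float:
    """Train and test only on the target user's own data."""
    X_tr, X_te, y_tr, y_te = train_test_split(
        X[target], y[target], test_size=0.3, random_state=0)
    clf = RandomForestClassifier(random_state=0).fit(X_tr, y_tr)
    return f1_score(y_te, clf.predict(X_te), average="macro")

print("user-independent F-measure:", user_independent_f1(0))
print("user-dependent F-measure:  ", user_dependent_f1(0))
```

On real operation data, the user-dependent split would correspond to the 80% to 90% F-measure reported above and the user-independent split to the roughly 60% F-measure; on this synthetic data the scores are meaningless and serve only to show the protocol.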