Touchscreens are increasingly used as the human interface of devices, ranging from personal devices to public ones such as ATMs and vending machines. However, touchscreen machines are challenging for visually impaired people because they offer few physical cues and little somatosensory feedback. Nevertheless, thanks to accessibility functions (e.g., VoiceOver), the number of visually impaired users of touchscreen devices has been increasing. In a survey of visually impaired users, difficulties with operation methods were reported. To clarify the operation strategies used with touchscreen devices under VoiceOver, we conducted an experiment with visually impaired users. The experiment involved a menu selection task with three screen patterns, namely, grid, list, and random, modeled on menu screens in everyday use. The participants (i.e., visually impaired people) searched for and selected a target menu item that had been announced to them before each task. The experimental results are as follows. In terms of finger behavior, the participants tended to flick repeatedly at the same position on the screen. They selected the correct answer in most tasks; however, in terms of operation time, items at the top of the menu screen took longer to find, suggesting that participants may have unintentionally skipped past them. In this experiment, the participants knew in advance what to search for. Nevertheless, when visually impaired users use a website or application for the first time, they may fail to obtain the necessary information from it. A discrepancy therefore exists between the input operations users intend (flicking, double tapping, and tapping) and the operations the device actually accepts, when users rely on somatosensory feedback as their only cue. To address these problems, information about the operations a user should perform on the screen, or cues indicating content continuity, should be provided.