Abstract
Recently we developed a non-invasive brain-computer interface (BCI) that allows a user to control a humanoid robot in a complex scenario involving locomotion and object handling. The main challenge lies in the limitations of such an interface: a small set of output commands, low bandwidth, and possible interpretation errors. We therefore developed several techniques that overcome these limitations by taking advantage of the robot's embedded autonomy and putting it to use within the user interface. In this paper we present two of these techniques. The first shows how the robot's vision system can be used to let the user grasp objects, a task that is otherwise extremely difficult with a non-invasive BCI. The second illustrates how the vision system can also ease the steering of the robot. Together, these techniques illustrate our approach to developing better solutions for humanoid control through a brain-computer interface.