2020, Volume 58, Annual Issue, Abstract, Page 485
Precise control of surgical equipment based on visual feedback from laparoscopic images is important for tool maneuvers during automated or laparoscopic surgery. However, operation accuracy is limited because it is difficult to estimate a surgical tool's 3D coordinates from 2D images. In recent years, deep learning methods that estimate 3D coordinates from tool joint positions detected in images have been proposed, but such systems are suboptimal because the limited accuracy of this intermediate information constrains the final estimate. The goal of this research is to estimate the tool's 3D coordinates directly from simulated monocular laparoscopic images using a deep learning algorithm. Using 3D-rendered images of surgical tools, we trained a deep regression neural network to estimate forceps position and orientation. The results demonstrate the feasibility of estimating the 3D coordinates of surgical tools from monocular images with a deep learning approach.
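The abstract does not specify how position and orientation are parameterized or supervised; as one plausible sketch (not the authors' method), a regression network could output a 7-vector of 3D position plus a unit quaternion, trained with a combined loss. The function name `pose_loss`, the quaternion parameterization, and the weight `beta` are all assumptions for illustration:

```python
import numpy as np

def pose_loss(pred, target, beta=1.0):
    """Illustrative combined position/orientation regression loss.

    pred, target: arrays of shape (7,) = [x, y, z, qw, qx, qy, qz].
    Position error is Euclidean distance; orientation error is the
    angular distance between unit quaternions, weighted by beta.
    """
    pos_err = np.linalg.norm(pred[:3] - target[:3])
    # normalize so raw network outputs are valid unit quaternions
    q_pred = pred[3:] / np.linalg.norm(pred[3:])
    q_tgt = target[3:] / np.linalg.norm(target[3:])
    # |dot| handles the quaternion double cover (q and -q are the same rotation)
    dot = np.clip(abs(np.dot(q_pred, q_tgt)), 0.0, 1.0)
    ang_err = 2.0 * np.arccos(dot)
    return pos_err + beta * ang_err

# identical poses give zero loss
p = np.array([10.0, 5.0, 80.0, 1.0, 0.0, 0.0, 0.0])
print(pose_loss(p, p))  # 0.0
```

Regressing a quaternion rather than Euler angles avoids gimbal-lock discontinuities, which is one common choice in monocular pose estimation; the actual target representation used in the paper may differ.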