2024, Vol. 144, No. 6, pp. 560-567
In multi-task learning, the goal is to improve the generalization performance of a model by exploiting information shared across tasks. In this paper, we propose a neural network that simultaneously learns depth estimation and semantic segmentation of the environment from images captured by an omnidirectional camera. The proposed network is built by modifying the UniFuse network, which was originally developed for depth estimation from omnidirectional images, so that it learns depth estimation and semantic segmentation simultaneously by exploiting features shared between the two tasks. In the experiments, the proposed method was evaluated on the well-known Stanford 2D3D Dataset. A single network did not achieve high accuracy on both tasks at once. However, when either of the two tasks was prioritized during training, the synergistic effect of the shared feature maps improved accuracy, yielding better results than a single-task network. This demonstrates the effectiveness of simultaneously learning semantic segmentation and depth estimation from omnidirectional images.
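The task prioritization described above can be illustrated as a weighted combination of the two per-task losses; this is a minimal sketch under the assumption that prioritization is realized as a fixed loss weight (the function name, signature, and weight value are illustrative, not taken from the paper).

```python
def multitask_loss(depth_loss, seg_loss, depth_weight=0.8):
    """Combine depth-estimation and semantic-segmentation losses.

    depth_weight > 0.5 prioritizes the depth task; (1 - depth_weight)
    is applied to the segmentation task. Both inputs are assumed to be
    scalar loss values already computed for the shared-feature network.
    """
    return depth_weight * depth_loss + (1.0 - depth_weight) * seg_loss

# Example: with depth prioritized (weight 0.8), a depth loss of 2.0 and
# a segmentation loss of 4.0 combine to 0.8*2.0 + 0.2*4.0 = 2.4.
print(multitask_loss(2.0, 4.0, depth_weight=0.8))
```

In practice the weight would be tuned per task; the paper's finding is that some such prioritization is needed for the shared network to outperform single-task baselines.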