Abstract
The majority of existing algorithms for shape-based 3D model retrieval presume a specific class of shapes (e.g., rigid CAD models of mechanical parts) represented by using a limited set of shape representations (e.g., singly connected, closed meshes). Recently, however, a need has arisen for more versatile algorithms capable of handling a wider class of shapes (e.g., articulated models) represented by using diverse shape representations. We have previously proposed an appearance-based algorithm for 3D model retrieval that is invariant to articulation (global deformation) and is able to handle diverse shape representations. The algorithm extracts local image descriptors at interest points of 2D depth images rendered from multiple viewpoints around a 3D model. It achieved good retrieval accuracy for articulated but geometrically simple 3D shapes. However, its retrieval accuracy was not satisfactory for some other classes of shapes, e.g., complex and rigid models. In this paper, we propose a 3D model retrieval algorithm that can handle this wider class of shapes. The proposed algorithm builds on our previous work, but employs randomly and densely sampled local visual features as well as a global visual feature. Distances among 3D models are computed by using distance metric learning. Experimental evaluations using multiple standard benchmarks as well as international 3D model retrieval contests have shown that the proposed algorithm outperforms many existing methods.
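The dense-sampling and bag-of-features pipeline summarized above can be illustrated with a minimal sketch. This is not the authors' implementation: the flattened-patch descriptors stand in for SIFT-like local features, the random codebook stands in for one learned by clustering, and plain L1 distance stands in for the learned distance metric; all names, sizes, and data here are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def dense_local_descriptors(depth_image, n_samples=256, patch=8):
    """Randomly and densely sample patch positions on a rendered depth
    image; each flattened patch serves as a toy local visual feature
    (a stand-in for a SIFT-like descriptor)."""
    h, w = depth_image.shape
    ys = rng.integers(0, h - patch, n_samples)
    xs = rng.integers(0, w - patch, n_samples)
    return np.stack([depth_image[y:y + patch, x:x + patch].ravel()
                     for y, x in zip(ys, xs)])

def bag_of_features(descriptors, codebook):
    """Vector-quantize each descriptor to its nearest codeword and
    return a normalized word histogram describing the whole model."""
    d2 = ((descriptors[:, None, :] - codebook[None, :, :]) ** 2).sum(-1)
    words = d2.argmin(axis=1)
    hist = np.bincount(words, minlength=len(codebook)).astype(float)
    return hist / hist.sum()

# Toy stand-ins for depth images rendered from viewpoints around
# two 3D models; a real pipeline would render many views per model.
depth_a = rng.random((64, 64))
depth_b = rng.random((64, 64))
codebook = rng.normal(size=(32, 64))  # illustrative, not a learned codebook

hist_a = bag_of_features(dense_local_descriptors(depth_a), codebook)
hist_b = bag_of_features(dense_local_descriptors(depth_b), codebook)
distance = np.abs(hist_a - hist_b).sum()  # L1 in place of a learned metric
```

In the actual method the per-view histograms would be aggregated over all viewpoints, and the final inter-model distance would come from a learned metric rather than the fixed L1 distance used here.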