This work proposes a novel framework for realizing inspection automation through a human-in-the-loop machine-learning paradigm. Central to the robotic inspection system is a mixed-reality device, which interfaces human inspectors with civil structures and robotic vehicles. We describe conceptually how to collect level-of-detail operational data to train a robotic vehicle, which shadows the inspection process in real time and learns to perform inspection automatically. Among the different types of operational data, 3D models are essential for navigation, exploration, and damage mapping in complex structural spaces. We focus on assessing the 3D models produced by the online visual simultaneous localization and mapping (SLAM) process. A key finding is that 3D model generation can be staged, with the resulting partial models registered into a complete model. The quality of these 3D models is comparable to that of an offline image-based generation process, namely Structure-from-Motion (SfM), even when the operation is conducted with quick motions. We remark that such online 3D model generation can be comprehensively exploited in training robots for real-time structural exploration, damage detection, quantification, and, ultimately, integration with building information modeling.
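To make the staged-and-registered idea concrete, the sketch below aligns two partial point-cloud models with a rigid transform estimated by the Kabsch algorithm. This is a hypothetical illustration, not the paper's implementation: it assumes known point correspondences between the two stages, whereas a practical SLAM or SfM pipeline would first establish correspondences (e.g., via feature matching or ICP).

```python
import numpy as np

def register_rigid(source, target):
    """Estimate the rotation R and translation t that align the source
    point set to the target (Kabsch algorithm, closed-form via SVD).
    Assumes source[i] corresponds to target[i]."""
    # Center both point sets on their centroids.
    cs, ct = source.mean(axis=0), target.mean(axis=0)
    # Cross-covariance of the centered sets.
    H = (source - cs).T @ (target - ct)
    U, _, Vt = np.linalg.svd(H)
    # Correct for a possible reflection so that det(R) = +1.
    d = np.sign(np.linalg.det(Vt.T @ U.T))
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    t = ct - R @ cs
    return R, t

# Synthetic check: register one "stage" of a model against a
# rotated-and-translated copy of itself.
rng = np.random.default_rng(0)
stage_a = rng.standard_normal((200, 3))
theta = 0.3
R_true = np.array([[np.cos(theta), -np.sin(theta), 0.0],
                   [np.sin(theta),  np.cos(theta), 0.0],
                   [0.0,            0.0,           1.0]])
t_true = np.array([0.5, -1.0, 2.0])
stage_b = stage_a @ R_true.T + t_true

R, t = register_rigid(stage_a, stage_b)
```

In an actual pipeline, each quick-motion pass would yield one partial model, and successive partial models would be registered in this manner (typically with a correspondence-free method such as ICP) to accumulate a complete model of the structure.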