Journal of Environmental Engineering (Transactions of AIJ)
Online ISSN : 1881-817X
Print ISSN : 1348-0685
ISSN-L : 1348-0685
A REMOTE SHARING METHOD USING MIXED REALITY FOR 3D PHYSICAL OBJECTS THAT ENABLES HIGH-SPEED POINT CLOUD SEGMENTATION AND RECEIVER'S OBJECT MANIPULATION
Daichi ISHIKAWA, Tomohiro FUKUDA, Nobuyoshi YABUKI

2020 Volume 85 Issue 778 Pages 1017-1026

Abstract

 In the architectural field, building consensus among stakeholders is indispensable. Remote meetings are useful for consensus building because, unlike face-to-face meetings that require participants to gather from a distance, they incur no travel costs. However, conventional remote meetings, such as video conferences using displays, lack a sense of presence. Moreover, it is difficult to intuitively share three-dimensional (3D) information, such as the spatial position and shape of design objects, that is required in the architectural field.

 To share 3D information, 3D physical building models or 3D virtual models are created. In remote meetings, users can see 3D virtual models made with 3D modeling software, such as 3D computer-aided design (CAD) or Building Information Modeling (BIM), as stereoscopic images. However, because these 3D virtual models have to be made in advance, they cannot represent shapes that change in real time the way 3D physical objects do.

 Point clouds are clusters of points, each of which generally has a position as XYZ coordinates with RGB values. They can represent 3D surfaces in the real world. By using an RGB-D camera, which captures RGB images and depth data and can therefore produce point cloud data in real time, a 3D virtual model of the surface of a 3D physical object can be made without needing to create a 3D model in advance.
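 As an illustrative sketch of this idea (not the paper's implementation), the following shows how a depth image and RGB image from an RGB-D camera can be back-projected into an XYZ-RGB point cloud using the standard pinhole camera model; the function name and the intrinsics parameters (fx, fy, cx, cy) are assumptions for illustration:

```python
import numpy as np

def depth_to_point_cloud(depth, rgb, fx, fy, cx, cy):
    """Back-project a depth image into an XYZ-RGB point cloud.

    depth : (H, W) array of depths in metres (0 = no measurement)
    rgb   : (H, W, 3) array of colour values
    fx, fy, cx, cy : pinhole intrinsics of the RGB-D camera (assumed known)
    Returns an (N, 6) array of [x, y, z, r, g, b] rows.
    """
    h, w = depth.shape
    u, v = np.meshgrid(np.arange(w), np.arange(h))
    valid = depth > 0                     # drop pixels with no depth reading
    z = depth[valid]
    x = (u[valid] - cx) * z / fx          # pinhole model: x = (u - cx) * z / fx
    y = (v[valid] - cy) * z / fy
    return np.column_stack([x, y, z, rgb[valid]])
```

 Each row of the result is one point with its position and colour, the representation the abstract describes.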

 Interactivity helps users understand the characteristics of the targeted 3D object. High-speed point cloud segmentation is necessary to classify, in real time, the unordered point cloud data captured by an RGB-D camera.

 In this research, we use the Euclidean Cluster Extraction method for interactive remote sharing of 3D physical objects and provide users with the ability to manipulate 3D point cloud objects in Mixed Reality (MR). We constructed a prototype system using an RGB-D camera to capture point cloud data and an MR head-mounted display (MR-HMD). By using the Euclidean Cluster Extraction method together with Random Sample Consensus (RANSAC), point cloud data is classified into individual clusters, which enables an MR-HMD user in a remote area to see 3D point cloud objects and manipulate them with a hand gesture called the "pinch gesture".
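 The segmentation pipeline named here, removing the dominant plane (e.g. the desk) with RANSAC and then grouping the remaining points by Euclidean Cluster Extraction, can be sketched in a minimal, library-free form. This is an illustration of the general technique, not the authors' implementation (which in practice would use an optimized library such as PCL); the function names and thresholds are assumptions:

```python
import numpy as np
from scipy.spatial import cKDTree

def ransac_plane(points, dist_thresh=0.01, iters=100, rng=None):
    """Fit the dominant plane (e.g. the desk) with RANSAC; return inlier mask."""
    rng = np.random.default_rng(0) if rng is None else rng
    best_mask = np.zeros(len(points), dtype=bool)
    for _ in range(iters):
        p = points[rng.choice(len(points), 3, replace=False)]
        n = np.cross(p[1] - p[0], p[2] - p[0])
        norm = np.linalg.norm(n)
        if norm < 1e-9:                     # skip degenerate (collinear) samples
            continue
        n /= norm
        d = np.abs((points - p[0]) @ n)     # point-to-plane distances
        mask = d < dist_thresh
        if mask.sum() > best_mask.sum():
            best_mask = mask
    return best_mask

def euclidean_clusters(points, tol=0.05, min_size=10):
    """Euclidean Cluster Extraction: points within `tol` of each other
    (transitively) share a cluster; clusters below min_size become noise (-1)."""
    tree = cKDTree(points)
    labels = -np.ones(len(points), dtype=int)
    cluster = 0
    for seed in range(len(points)):
        if labels[seed] != -1:
            continue
        frontier = [seed]                   # grow the cluster from this seed
        labels[seed] = cluster
        while frontier:
            idx = frontier.pop()
            for nb in tree.query_ball_point(points[idx], tol):
                if labels[nb] == -1:
                    labels[nb] = cluster
                    frontier.append(nb)
        cluster += 1
    sizes = np.bincount(labels)
    labels[sizes[labels] < min_size] = -1   # discard tiny clusters as noise
    return labels
```

 After plane removal, each remaining label corresponds to one physical object on the desk, which is what allows the MR-HMD user to select and manipulate objects individually.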

 Through experiments, we evaluated the accuracy of the high-speed segmentation results and the user's operability of the point cloud objects, including their visibility. When two objects are on the desk, segmentation accuracy depends on the relation between the two objects and the RGB-D camera, including its angle. Also, the number of points transferred to the MR-HMD can affect the user's operability.

 In future work, the proposed system should be improved with a higher-accuracy segmentation method that does not depend on distance, extended to capture point cloud data with multiple RGB-D cameras and align it to cover a wider surface of point cloud objects, and adapted for use over a Wide Area Network (WAN) as well as the internet.

© 2020 Architectural Institute of Japan