This paper introduces a flexible multi-sensor fusion SLAM framework that integrates data from LiDAR, cameras, IMUs, and wheel odometry. The framework is adaptable and does not rely on a specific sensor configuration, automatically executing optimal tightly-coupled sensor fusion when the camera, IMU, and wheel odometry are available. A key component, the LiDAR-Visual Temporal Alignment (LVTA) method, is proposed to synchronize asynchronous LiDAR and camera data. With synchronized sensor inputs, joint optimization is employed to deliver high-accuracy SLAM results. Experimental results demonstrate that our SLAM framework achieves state-of-the-art accuracy and operates in real time on low-cost embedded systems.
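The abstract does not specify how LVTA performs the alignment, but the general idea of synchronizing asynchronous sensor streams can be sketched as follows: resample one stream's measurements onto the other's timestamps. This is a minimal illustrative sketch, not the paper's method; the function name and the use of linear interpolation are assumptions.

```python
import numpy as np

def align_to_lidar_timestamps(lidar_ts, cam_ts, cam_vals):
    """Illustrative sketch of temporal alignment (not the paper's LVTA):
    linearly interpolate an asynchronous scalar camera measurement
    stream onto the LiDAR timestamps so both streams share a common
    time base before joint optimization."""
    # cam_ts must be increasing; each LiDAR timestamp gets the value
    # interpolated between the two bracketing camera measurements.
    return np.interp(lidar_ts, cam_ts, cam_vals)

# LiDAR fires at 0.0, 0.5, 1.0 s; camera only at 0.0 and 1.0 s.
lidar_ts = np.array([0.0, 0.5, 1.0])
cam_ts = np.array([0.0, 1.0])
cam_vals = np.array([0.0, 2.0])
print(align_to_lidar_timestamps(lidar_ts, cam_ts, cam_vals))  # [0. 1. 2.]
```

In practice, pose-like quantities would need interpolation on the rotation manifold (e.g. SLERP for quaternions) rather than component-wise linear interpolation.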
In this study, we develop a human-dual-robot collaborative manipulation system for deformable objects using skeleton tracking. Human-robot collaboration leverages the complementary strengths of humans and robots to perform tasks that neither can accomplish alone. Although many studies have addressed human-robot collaborative manipulation, most have been limited to rigid objects, relatively small items, or lightweight materials. In contrast, studies on collaborative manipulation of deformable objects by humans and dual robots remain relatively sparse due to the challenges of modeling deformable materials and coordinating multiple robots. To address this, we use RGB-D cameras mounted on the tool center points (TCPs) of two robot arms to track human skeletal feature points, enabling model-free collaborative manipulation of a large deformable object by a human and two robots. An optimization problem is solved based on the human feature point positions observed by each RGB-D camera to determine the relative positioning between the two robot arms. Subsequently, using this relative positioning, a compensation value for the human feature position is computed, which is then used to determine the target velocities of the robot arms for cooperative manipulation. We validate the proposed system through real-world experiments using two robot arms and two RGB-D cameras, demonstrating its effectiveness.
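The compensation-and-velocity step described above can be sketched in a simplified form: express the camera-observed feature point in a common frame using the estimated relative pose between the arms, then command a proportional velocity toward it. This is an assumed illustrative scheme; the function, the proportional law, and all parameter names are not from the paper.

```python
import numpy as np

def robot_target_velocity(p_feature_cam, p_feature_ref, R_rel, t_rel, gain=0.5):
    """Illustrative sketch (not the paper's exact formulation):
    - (R_rel, t_rel): relative pose between the two arms, assumed to
      come from the optimization over observed feature points;
    - compensate the feature position observed by this arm's RGB-D
      camera into the common frame;
    - return a simple proportional velocity command toward it."""
    p_compensated = R_rel @ p_feature_cam + t_rel   # feature in common frame
    return gain * (p_compensated - p_feature_ref)   # proportional velocity

# With identity relative pose, the command points straight at the feature.
v = robot_target_velocity(np.array([1.0, 0.0, 0.0]), np.zeros(3),
                          np.eye(3), np.zeros(3), gain=0.5)
print(v)  # [0.5 0.  0. ]
```

A real controller would additionally map this Cartesian velocity through the arm's Jacobian to joint velocities and saturate the command for safety.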
Recently, many studies have examined communicating policies to citizens as narratives in order to gain their acceptance and empathy. However, when narratives are used in the policy field, the truth of their contents cannot be judged if the process of their creation is unknown, and there is a risk that they may be misused to steer people toward a particular ideology. In response to this issue, basing the content of narratives on the output of social simulations makes it possible to confirm the consistency and safety of the stories' content, and to use them as policy communication that increases acceptance and empathy. Although narrative generation based on social simulations is beginning to be studied, no methodology for it has been established, nor have its effects on the acceptance of and empathy toward communication been verified. Therefore, this study aims to establish a verifiable methodology for narrative generation based on social simulation and to verify its effectiveness.
This paper proposes a new controllability measure for linear systems involving multiple control equilibria (CEs), which are states x for which some input makes dx/dt = 0. We consider the transition between control equilibria (TCE) as the target class of control. Although the controllability Gramian and the size of reachable sets are commonly used to assess system performance, they include non-CE information and thus yield evaluations misaligned with the efficiency of TCE. To solve this problem, we propose an appropriate measure of the size of the reachable control equilibrium set (RCES) and use it to quantitatively evaluate the cost of TCE. The proposed measure can be computed easily. Examples demonstrate that the proposed measure identifies systems requiring less energy for TCEs more effectively than traditional measures.
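The defining condition of a control equilibrium can be made concrete: for dx/dt = Ax + Bu, a state x is a CE exactly when Bu = -Ax is solvable for u. The following sketch tests this condition numerically; the function name and the double-integrator example are illustrative choices, not from the paper.

```python
import numpy as np

def is_control_equilibrium(A, B, x, tol=1e-9):
    """A state x is a control equilibrium (CE) if some input u gives
    dx/dt = A x + B u = 0, i.e. if B u = -A x has a solution.
    We solve the least-squares problem and check the residual."""
    u, _, _, _ = np.linalg.lstsq(B, -A @ x, rcond=None)
    return float(np.linalg.norm(A @ x + B @ u)) < tol

# Double integrator: dx/dt = [[0,1],[0,0]] x + [[0],[1]] u.
A = np.array([[0.0, 1.0], [0.0, 0.0]])
B = np.array([[0.0], [1.0]])
print(is_control_equilibrium(A, B, np.array([1.0, 0.0])))  # True: any position, zero velocity
print(is_control_equilibrium(A, B, np.array([0.0, 1.0])))  # False: nonzero velocity cannot be held
```

For the double integrator, the CE set is the whole position axis (velocity zero), which illustrates why a measure restricted to CEs can differ markedly from one over the full reachable set.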