International Journal of Automotive Engineering
Online ISSN : 2185-0992
Print ISSN : 2185-0984
ISSN-L : 2185-0992
Current issue
  • Kazuma Kashiwabara, Keisuke Kazama, Yoshitaka Marumo
    2025 Volume 16 Issue 2 Pages 33-39
    Published: 2025
    Released on J-STAGE: April 30, 2025
    JOURNAL OPEN ACCESS
  Safety assurance tests reflecting actual accidents or near misses are necessary to ensure the safety of automated driving systems. In such tests, conducted in a simulated environment, vehicle trajectories and road environments reproduced from near-miss data recorded by a drive recorder are used. In this study, a method is proposed to reconstruct the ego vehicle's trajectory and the road environment at the time of travel from dashcam video recorded by the drive recorder. The motion of the ego vehicle is obtained by image registration, using weights calculated from the semantic information produced by semantic segmentation. Based on the ego vehicle's inter-frame motion estimated by image registration, the ego vehicle's trajectory and a map of the driving environment are reconstructed. Validation of the proposed method on several dashcam videos suggests that the driving environment can be reconstructed by image registration using semantic information.
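The weighted image-registration idea in the abstract above can be sketched as follows. This is a minimal illustration, not the paper's method: it assumes integer-pixel translation only and a precomputed per-pixel weight map standing in for semantic weights (in the paper these would down-weight dynamic objects found by segmentation); `weighted_shift_registration` is a hypothetical helper.

```python
import numpy as np

def weighted_shift_registration(prev, curr, weights, max_shift=5):
    """Estimate the integer-pixel shift aligning `curr` to `prev` by
    minimizing a weighted sum of squared differences over candidate
    shifts. `weights` (same shape as the frames) emulates semantic
    weighting: low weight for unreliable pixels, high for static scene."""
    best, best_cost = (0, 0), np.inf
    for dy in range(-max_shift, max_shift + 1):
        for dx in range(-max_shift, max_shift + 1):
            shifted = np.roll(np.roll(curr, dy, axis=0), dx, axis=1)
            cost = np.sum(weights * (shifted - prev) ** 2)
            if cost < best_cost:
                best_cost, best = cost, (dy, dx)
    return best

# Synthetic check: a textured frame shifted by a known amount.
rng = np.random.default_rng(0)
frame = rng.random((64, 64))
moved = np.roll(np.roll(frame, -2, axis=0), 3, axis=1)
w = np.ones_like(frame)  # uniform weights stand in for a semantic mask
dy, dx = weighted_shift_registration(frame, moved, w)
```

Accumulating such per-frame motions over the video would yield the ego trajectory; the paper additionally reconstructs a map of the driving environment from the registered frames.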
  • Feng Ao, Masahiro Yoshida
    2025 Volume 16 Issue 2 Pages 40-45
    Published: 2025
    Released on J-STAGE: April 30, 2025
    JOURNAL OPEN ACCESS
    In response to the poor performance of visual SLAM in low-light scenes and its demanding real-time computation requirements, we propose a 2D-3D fused neural network that enforces temporal consistency at minimal computational cost when enhancing a sequence of low-light images, and we design two new loss functions that guide the network to focus on enhancing corners and edges. We then integrated our method into VINS-Mono and compared it with several low-light enhancement methods. The results show that our method achieves approximately 30% higher positioning accuracy than previous research, while also having the shortest inference time.
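To illustrate the kind of edge-focused loss the abstract above mentions, here is a minimal sketch assuming a Sobel-gradient penalty: the enhanced image is compared to a reference in gradient-magnitude space, so edge structure dominates the loss. This is an illustrative stand-in, not the paper's two actual loss functions; `edge_loss` and `sobel_grad_mag` are hypothetical helpers.

```python
import numpy as np

def sobel_grad_mag(img):
    """Gradient magnitude via 3x3 Sobel filters (valid region only)."""
    kx = np.array([[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]], dtype=float)
    ky = kx.T
    h, w = img.shape
    gx = np.zeros((h - 2, w - 2))
    gy = np.zeros((h - 2, w - 2))
    for i in range(h - 2):
        for j in range(w - 2):
            patch = img[i:i + 3, j:j + 3]
            gx[i, j] = np.sum(kx * patch)
            gy[i, j] = np.sum(ky * patch)
    return np.hypot(gx, gy)

def edge_loss(enhanced, reference):
    """MSE between gradient magnitudes: penalizes an enhanced image
    for losing edge structure present in the reference."""
    return np.mean((sobel_grad_mag(enhanced) - sobel_grad_mag(reference)) ** 2)

# A sharp step edge vs. a softened copy of it.
img = np.zeros((16, 16)); img[:, 8:] = 1.0
blurred = (img + np.roll(img, 1, axis=1)) / 2
same = edge_loss(img, img)          # identical images: zero loss
degraded = edge_loss(blurred, img)  # softened edge: positive loss
```

In training, such a term would be added to the usual photometric losses so the network preserves the corners and edges that feature-based SLAM front ends like VINS-Mono rely on.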