We investigated topographic changes from land to the shallow seabed caused by the 2024 Noto Peninsula earthquake using airborne LiDAR bathymetry (ALB). We analyzed 0.5 m resolution DEMs acquired before the earthquake (September-October 2022) and after it (April-May 2024). The results indicate a maximum uplift of 5.2 m and a maximum horizontal displacement of 2.5 m, and the area that newly emerged above sea level due to the uplift is estimated at 4.56 km².
We propose a GCP-free photography workflow for post-wildfire surveys in mountainous regions, where the implementation of ground control points (GCPs) is not feasible for rapid surveys. The workflow integrates three sets of imagery: close-range in-forest images acquired using an AI-assisted UAV (Skydio 2+), forest-edge and open-area images obtained with a UAV equipped with network RTK GNSS (Mavic 3E), and bridging images collected to ensure overlap between the in-forest and open-area scenes. These three datasets are processed within a single Structure-from-Motion (SfM) and Multi-View Stereo (MVS) pipeline to propagate georeferencing to the in-forest imagery. Georeferenced imaging of post-wildfire conditions facilitates subsequent AI-based analyses and provides a contemporaneous record of site conditions. We estimate that the proposed workflow has the potential to reduce on-site fieldwork time from 2 hours and 31 minutes to 56 minutes, achieving a 62.9% reduction.
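As a sanity check on the reported time saving, the figures quoted above (2 h 31 min with GCPs versus 56 min with the proposed workflow) reproduce the 62.9% reduction; the helper function name here is illustrative:

```python
# Verify the fieldwork-time reduction reported for the GCP-free workflow.
# The minute values are those quoted in the abstract; the function name
# is an illustrative assumption, not part of the authors' method.

def reduction_percent(baseline_min: int, proposed_min: int) -> float:
    """Percentage reduction of proposed relative to baseline fieldwork time."""
    return (baseline_min - proposed_min) / baseline_min * 100

baseline = 2 * 60 + 31   # conventional survey with GCPs: 2 h 31 min = 151 min
proposed = 56            # proposed GCP-free workflow: 56 min

print(f"{reduction_percent(baseline, proposed):.1f}%")  # → 62.9%
```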
This report introduces “Bois,” a cloud-based disaster information platform supporting corporate BCP. Bois automates the GIS-based collection and assessment of disaster data, reducing manual processing costs and enabling rapid 24/7 decision-making. We highlight three core functions: integrated hazard assessment, automated impact prediction, and situational awareness via surveys. Case studies in the retail and manufacturing sectors demonstrate the platform's practical efficacy. Finally, the paper discusses social contributions using aerial photography and future global expansion.
In recent years, Japan has experienced an increasing number of severe natural disasters, highlighting the need for rapid acquisition and dissemination of disaster-related information. The photogrammetry industry plays a critical role by providing various types of data, including oblique imagery, orthophotos, airborne LiDAR data, and microtopographic visualization maps. These datasets are essential for understanding damage conditions and supporting recovery planning. However, despite their importance, secondary use of such data remains limited due to licensing restrictions and challenges in data distribution.
This paper examines the current status of open access to photogrammetric data collected during disasters, focusing on case studies from the Geospatial Information Center. It also identifies key issues related to licensing, data delivery, operational decision-making, and sustainability, and discusses future directions for improving data sharing frameworks to enable timely and effective utilization.
In marathon competitions, the times at which runners wearing RFID tags pass designated points are measured, and this information is used during marathon broadcasts to provide viewers with commentary on the race progress, as well as lap times and estimated finishing times. However, it is difficult to measure each runner's performance information, such as speed, pitch, and stride length, so this information has not yet been provided. In collaboration with Kansai Television Co., Ltd., the authors have developed deep-learning technology that automatically extracts athletes from live television footage and estimates their running motion, i.e., their pitch and stride length. In this paper, we report the detection results obtained from the analysis of the OSAKA Women's Marathon held in 2024 and 2025, and discuss methods for dealing with occlusion between runners.
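The two quantities estimated from the footage relate to running speed by the simple identity speed = pitch × stride (pitch in steps per second, stride in metres per step), so a stride estimate can be cross-checked against speed derived from RFID split times. A minimal sketch with illustrative numbers, not values measured from the broadcasts:

```python
# Relation between running speed, pitch (cadence), and stride length.
# The numeric values below are illustrative, not measured data.

def speed_mps(pitch_steps_per_s: float, stride_m: float) -> float:
    """Running speed in m/s from pitch (steps/s) and stride length (m/step)."""
    return pitch_steps_per_s * stride_m

def stride_from_speed(speed: float, pitch: float) -> float:
    """Recover stride length when speed (e.g. from RFID splits) and pitch are known."""
    return speed / pitch

v = speed_mps(3.2, 1.6)               # 3.2 steps/s at 1.6 m per step
print(f"{v:.2f}")                     # → 5.12 (m/s, roughly elite marathon pace)
print(f"{stride_from_speed(v, 3.2):.2f}")  # → 1.60
```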
The decline in cow conception rates in the Japanese cattle industry, from 68.7% in 1989 to 52.0% in 2022, poses a critical challenge to productivity. Major factors contributing to this decline include missed estrus detection due to labor shortages and the obscuration of estrus behaviors by environmental stressors such as heat stress. Conventional object detection methods are limited to capturing broad behavioral categories, such as mounting or physical contact, and do not account for subtle behavioral nuances, including mounting direction or vulva sniffing. To address these challenges, this study proposes a non-invasive method for fine-grained behavioral analysis based on fixed surveillance cameras and AI-driven pose estimation. By integrating YOLOv9 and DeepLabCut, we conducted detailed behavioral detection of mounting direction and specific contact points in Tosa Akaushi. The results indicated that all types of mounting behavior showed significant increases only on the day of estrus, suggesting that mounting behavior alone is insufficient for detecting potential signs of proestrus. In contrast, nose and forehead contact behaviors exhibited increasing trends on Days -3 and -1 relative to baseline levels, suggesting that these behaviors may serve as potential indicators of the proestrus phase. These findings suggest that detailed behavior quantification based on pose estimation may improve the identification of both proestrus and the day of estrus, thereby contributing to reproductive management for timely insemination.
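The contact-point detection described above can be sketched generically: given 2-D keypoints from a pose estimator (e.g. DeepLabCut output), a nose-to-body contact is labeled by the nearest target keypoint within a pixel threshold. The keypoint names, coordinates, and 40 px threshold below are illustrative assumptions, not the study's actual configuration:

```python
import math

# Illustrative sketch: classify which body part of cow B the nose of cow A
# is contacting, from 2-D keypoint coordinates in pixels. Keypoint names,
# coordinates, and the 40 px threshold are assumptions for this example.

def classify_contact(nose_a, keypoints_b, threshold_px=40.0):
    """Return the label of cow B's keypoint nearest to cow A's nose,
    if it lies within threshold_px; otherwise return None (no contact)."""
    best_label, best_dist = None, float("inf")
    for label, (x, y) in keypoints_b.items():
        d = math.hypot(nose_a[0] - x, nose_a[1] - y)
        if d < best_dist:
            best_label, best_dist = label, d
    return best_label if best_dist <= threshold_px else None

cow_b = {"forehead": (120.0, 80.0), "vulva": (430.0, 210.0), "back": (280.0, 120.0)}
print(classify_contact((440.0, 200.0), cow_b))  # → vulva (nose ~14 px away)
print(classify_contact((10.0, 10.0), cow_b))    # → None (no keypoint within 40 px)
```

In practice the per-frame labels would be aggregated over time (e.g. daily counts per behavior) to produce the trends relative to the day of estrus reported above.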