Intelligence, Informatics and Infrastructure
Online ISSN : 2758-5816
Current issue
Displaying 1-12 of 12 articles from this issue
  • Yusuke FUJITA, Iori SUGIHORI, Kenta UCHIDA
    2025 Volume 6 Issue 3 Pages 1-14
    Published: 2025
    Released on J-STAGE: November 11, 2025
    JOURNAL FREE ACCESS FULL-TEXT HTML

    Accurate crack detection using deep convolutional neural networks (DCNNs) is critical for infrastructure inspection. However, the requirement for fine-grained annotations remains a major bottleneck. To mitigate this, a previous study proposed a multi-stage Multiple-Instance Learning (MIL) framework that uses region-level labels and model-generated pseudo-labels to reduce annotation costs while maintaining performance. This paper introduces two key extensions to enhance that framework. First, an adaptive thresholding mechanism derives bag-specific thresholds from the likelihood distribution of negative instances, explicitly filtering unreliable positive instances without parametric assumptions. Second, a multi-scale overlapping tiling strategy increases the ratio of crack-containing instances within each positive bag, improving MIL training efficiency and robustness under weak supervision. Experiments on the Concrete Crack Segmentation Dataset demonstrate that the proposed method outperforms both the baseline MIL and fully supervised models under equivalent labeling budgets. The enhanced model improves the F1-score by 2.5 points over the baseline and also reduces false positives by approximately one-third through two-stage inference. Importantly, these improvements are achieved without any pixel-level or subregion-level manual labeling. These results highlight the proposed framework’s scalability, robustness, and practical suitability for annotation-efficient crack detection in civil infrastructure.
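
The adaptive thresholding idea above can be sketched non-parametrically: take a high quantile of the likelihoods the model assigns to a bag's negative instances as that bag's threshold, and discard lower-scoring positive instances. A minimal sketch under assumed inputs (likelihood arrays; the 0.95 quantile is an illustrative choice, not the paper's value):

```python
import numpy as np

def adaptive_bag_threshold(neg_likelihoods, quantile=0.95):
    """Bag-specific threshold: a high quantile of the likelihoods the
    model assigns to negative instances, with no parametric assumption
    about their distribution (0.95 is an illustrative choice)."""
    return float(np.quantile(neg_likelihoods, quantile))

def filter_positive_instances(pos_likelihoods, neg_likelihoods):
    """Keep only instances from a positive bag whose likelihood exceeds
    the threshold estimated from the bag's negative instances."""
    t = adaptive_bag_threshold(neg_likelihoods)
    return pos_likelihoods[pos_likelihoods > t]

neg = np.array([0.05, 0.10, 0.20, 0.15, 0.08])   # hypothetical scores
pos = np.array([0.12, 0.85, 0.40, 0.91])
kept = filter_positive_instances(pos, neg)       # drops the weakest instance
```

Because the threshold is a quantile of each bag's own negative scores, it adapts per bag without assuming any likelihood distribution.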

  • Kenichi KUSUNOKI, Toru ADACHI, Naoki ISHITSU, Tomofumi KITAMURA, Ken-I ...
    2025 Volume 6 Issue 3 Pages 15-31
    Published: 2025
    Released on J-STAGE: November 11, 2025
    JOURNAL FREE ACCESS FULL-TEXT HTML

    This paper presents findings from a project advancing disaster-prevention digital transformation (DX) by operationalizing a real-time wind-gust information system across industries. It shows how an established detection technology can be adapted for cross-sector use through needs analysis and field deployment.

    The aim of this study was to develop a system that not only meets the disaster prevention needs of individual users but also addresses the specific requirements of diverse industry sectors. While earlier research 1) demonstrated the system’s effectiveness for individual users, the unique needs and challenges faced by industries such as railways, roads, power utilities, telecommunications, and construction had not been fully explored. This research investigates these aspects through an industry-specific needs assessment and field experiments. The needs assessment identified concrete scenarios and requirements for each industry, while field experiments validated the system’s performance under real-world winter conditions along the Sea of Japan coast. The core tornado-vortex detector is the CNN introduced in KEA24; the novelty of this work lies in societal implementation rather than algorithmic innovation.

    The developed system integrates real-time wind gust detection from Doppler radar analysis with GNSS-based location information to deliver personalized alert information. This approach transcends the limitations of traditional broad-area meteorological forecasts by enabling disaster prevention measures tailored to individual circumstances. As demonstrated in the field experiments, the system addresses industry-specific requirements such as route and site management functionalities and compatibility with existing operational systems.

    Key challenges identified for broader societal implementation include expanding the detection area through the use of public radar networks, improving user interface design to enhance accessibility, and increasing prediction accuracy. By using DX technologies, this research aims to establish an advanced disaster prevention information system, contributing to greater societal resilience against localized meteorological risks.

    The paper also outlines a cost-benefit evaluation framework to guide future deployments.

  • Nut SOVANNETH, Kotaro SASAI, Felix OBUNGUTA, Kiyoyuki KAITO
    2025 Volume 6 Issue 3 Pages 32-46
    Published: 2025
    Released on J-STAGE: November 11, 2025
    JOURNAL FREE ACCESS FULL-TEXT HTML

    Pavement structural condition assessment is crucial for effective maintenance management, including quality control and maintenance planning. To support consistent evaluation, road agencies require reliable threshold criteria. Unlike traditional methods that primarily rely on numerical analysis and distress severity correlations, this study defines threshold values for structural indices derived from the Falling Weight Deflectometer (FWD) data using a deterioration modeling approach. A stochastic framework based on the Markov hazard model is employed to predict surface roughness deterioration by incorporating structural indices. This approach accounts for uncertainty and variability in pavement performance, accommodating diverse pavement characteristics and influencing factors. The key structural indices include maximum deflection, effective structural number, subgrade resilient modulus, and deflection bowl parameters (base layer index, middle layer index, and lower layer index). The analysis uses two sets of roughness inspection data from flexible pavements, specifically asphalt concrete (AC) and double bituminous surface treatment (DBST) pavements, linked to structural indices obtained from FWD measurements. Structural thresholds are determined at set failure probabilities, based on estimated deterioration rates and pavement lifespans. The benchmark criteria are categorized into three levels: Sound (0-50% failure probability), Warning (50-75%), and Severe (75-100%) for each pavement type. The findings indicate that AC pavements require more stringent thresholds than DBST pavements due to their higher structural standards and longer design life. 
By adopting uniform standards for interpreting FWD data, road agencies can more effectively evaluate pavement structural conditions, identify structural deficiencies, and optimize maintenance strategies to ensure compliance with performance standards without relying on mathematical models that require advanced technical knowledge.
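
The failure-probability banding described above can be illustrated with the simplest Markov-style survival model, an exponential hazard; the hazard rates and horizon below are hypothetical, and the study's estimated deterioration rates would replace them:

```python
import math

def failure_probability(hazard_rate, years):
    """Exponential (time-homogeneous Markov) hazard: probability that a
    pavement section has deteriorated past the threshold condition
    within `years`."""
    return 1.0 - math.exp(-hazard_rate * years)

def classify(hazard_rate, years):
    """Map failure probability to the three benchmark levels:
    Sound (0-50%), Warning (50-75%), Severe (75-100%)."""
    p = failure_probability(hazard_rate, years)
    if p < 0.50:
        return "Sound"
    if p < 0.75:
        return "Warning"
    return "Severe"
```

For example, a section with a hypothetical hazard rate of 0.05/year is still "Sound" at 10 years (p ≈ 0.39), while one at 0.2/year is "Severe" (p ≈ 0.86).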

  • Shijun PAN, Shujia QIN
    2025 Volume 6 Issue 3 Pages 47-54
    Published: 2025
    Released on J-STAGE: November 11, 2025
    JOURNAL FREE ACCESS FULL-TEXT HTML

    Coal and gangue are two key materials in coal mining, and separating them is an essential step in coal mining engineering. Because of transportation and personnel costs, mining engineers aim to separate coal and gangue as thoroughly as possible to reduce expenses. The traditional coal mining industry is high-risk and labor-intensive, and the number of young professional mining engineers continues to decline. These challenges are driving governments to shift the industry from human-centered operation (locating by eye, picking by hand) to robot-centered automation (locating by sensor/camera, picking by robot arm). To date, several computer vision algorithms (e.g., YOLO, R-CNN) have been trained on open-source public datasets and, combined with robot arms, applied in practical coal mining projects. Some problems in these projects remain unsolved; in particular, open-source public datasets cannot cover all practical conditions. An approach for increasing the diversity of existing datasets is therefore needed. To address this issue, this research explores the feasibility of applying Generative Artificial Intelligence (AI) as a supplement to open-source public datasets (i.e., generating gangue images). Generative AI-based dataset creation follows two main patterns: txt2img and img2img. With the assistance of multimodal models, the feasibility of lithology-aware prompt engineering for generating gangue images has been demonstrated, and the authors categorize and analyze the lithology-aware prompts.

  • Ruben VARGAS, Katsuya IKENO, Hideki NAITO, Tomoyuki KIMOTO
    2025 Volume 6 Issue 3 Pages 55-66
    Published: 2025
    Released on J-STAGE: November 11, 2025
    JOURNAL FREE ACCESS FULL-TEXT HTML

    A machine learning-based anomaly detection method for concrete structures, using localized vibration measurements and transfer learning, has been previously proposed by the authors. The method detects anomalies based on changes in the frequency response function obtained from localized vibration measurements. An autoencoder model is first trained using sound-state data from an arbitrary structure to create a general-purpose model, which is then customized for a target structure using transfer learning with a small amount of sound-state data from that target. This paper investigates the influence of the arbitrary structure’s structural type on the model’s performance and provides an on-site validation. To this end, three general-purpose autoencoder models were developed using data from full-scale beam and slab specimens: a “beam-trained model”, a “slab-trained model”, and a “mixed-data model”. The validity of these models was confirmed by using them to identify load-induced damage in the specimens. The models were then evaluated through an on-site inspection of a concrete pier’s main beam and deck slab. Results indicate that while transfer learning provides a robust baseline for anomaly detection even with mismatched structural types, accuracy is significantly improved when the training data’s source structural type matches the target structure. Furthermore, a model trained on a mixed-data set exhibits superior generalization and robustness, suggesting that for practical field applications, the development of a versatile model would be beneficial.
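
The sound-state-based anomaly decision can be sketched as a reconstruction-error threshold: estimate the threshold from the errors the customized autoencoder makes on healthy data, then flag larger errors. The mean + 3·std rule below is an illustrative convention, not the paper's criterion:

```python
import numpy as np

def sound_state_threshold(recon_errors, k=3.0):
    """Anomaly threshold from the reconstruction errors the customized
    autoencoder produces on sound-state (healthy) measurements:
    mean + k * std. k=3 is an illustrative convention."""
    e = np.asarray(recon_errors, dtype=float)
    return float(e.mean() + k * e.std())

def is_anomalous(recon_error, threshold):
    """A new measurement is flagged when its reconstruction error
    exceeds the sound-state threshold."""
    return recon_error > threshold
```

With hypothetical sound-state errors of [0.9, 1.0, 1.1, 1.0], the threshold lands just above the healthy range, so only clearly larger errors are flagged.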

  • Sushama DE SILVA, Taro UCHIMURA
    2025 Volume 6 Issue 3 Pages 67-82
    Published: 2025
    Released on J-STAGE: November 11, 2025
    JOURNAL FREE ACCESS FULL-TEXT HTML

    Landslides are a major hazard to human activities, often causing severe losses to lives, infrastructure, and the environment. Implementing effective countermeasures requires not only accurate hazard prediction but also clear identification of landslide types. However, across many countries in Asia, where about three-quarters of fatal landslides occur, national-scale inventories often lack explicit type classification, constraining the design of type-appropriate measures. Moreover, much of South Asia lies within seismically active belts where earthquakes are important triggers of slope failure. To address these gaps, this study develops a model to identify cliff-type landslides from inventories where type is unspecified, in earthquake-prone regions. The model was constructed using Forest-based and Boosted Classification and Regression tools in ArcGIS Pro. A dataset of 535 cliff-type incidents and 535 randomly generated non-cliff points was used for training, considering 25 conditioning factors. Trained in Wakayama Prefecture, the model achieved strong predictive performance, with a mean accuracy of 0.85, sensitivity for cliff-type landslides of 0.88, an MCC of 0.71, and an F1 score of 0.85. Variable importance analysis indicated that distance from buildings, rainfall, distance from streams and roads, slope, DEM, soil thickness, and earthquake distance were the most influential, while geology ranked only 14th, and soil type, TPI, and STI were least significant. Validation in Mie Prefecture showed that 66% of recorded cliff-type landslides matched predicted areas. To test transferability, the model was also applied to the Kandy District in Sri Lanka, a non-seismic context. To preserve model structure, the earthquake-distance factor was simulated as twice the maximum distance observed in the Wakayama dataset. Validation against the available inventory showed a 73% match, indicating robustness under different geological and environmental settings.
Overall, this study enhances disaster inventories through type-specific classification, supporting effective hazard zonation and countermeasure planning in earthquake-affected regions, and contributing to sustainable disaster risk reduction and resilient infrastructure development.

  • Jun SONODA, Fuma MASUDA
    2025 Volume 6 Issue 3 Pages 83-89
    Published: 2025
    Released on J-STAGE: November 11, 2025
    JOURNAL FREE ACCESS FULL-TEXT HTML

    We present the development of an autonomous GPR robot capable of centimeter-level navigation using the Centimeter-Level Augmentation Service (CLAS) provided by Japan’s Quasi-Zenith Satellite System (QZSS, MICHIBIKI). Field and laboratory evaluations demonstrate its travel accuracy and buried-pipe detection capability, with localization errors of approximately 10 cm.

  • Zhenyu YANG, Hideomi GOKON
    2025 Volume 6 Issue 3 Pages 90-102
    Published: 2025
    Released on J-STAGE: November 11, 2025
    JOURNAL FREE ACCESS FULL-TEXT HTML

    Early identification of areas susceptible to traffic congestion caused by snow disasters is critical for formulating effective emergency response strategies. This study focuses on the 2018 heavy snowstorm in Fukui Prefecture, Japan, and integrates GPS data with multi-source remote sensing datasets, including the digital elevation model (DEM), land use data, nighttime light imagery, Normalized Difference Vegetation Index (NDVI), urban area information, and other relevant spatial indicators. A spatial machine learning model based on the Random Forest algorithm was developed to identify congestion segments. The results show that: (1) During the 2018 Fukui snow disaster, severe traffic congestion was primarily concentrated in the cities of Sabae, Fukui, and Awara. Initial congestion points were frequently located on intercity roads near administrative boundaries, especially in areas classified as “field” or “forest” in land use. (2) A pilot model trained on data from 10 northern cities in Fukui achieved an accuracy of 94.59%, confirming the feasibility of the method. (3) Feature importance analysis identified the most influential factors as: Snow Depth > Nighttime Light Difference > Elevation > Slope Angle > Urban Area > NDVI > Population Change > Forest > Field > Low-rise Buildings (Sparse) > Low-rise Buildings (Dense).
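
As a hedged illustration of one preprocessing step such a pipeline implies, GPS speed traces can be turned into binary congestion labels per road segment before training the Random Forest; the speed threshold and fraction below are assumptions, not values from the study:

```python
def label_congestion(speeds_kmh, threshold_kmh=10.0, min_fraction=0.5):
    """Label a road segment as congested when at least `min_fraction` of
    its GPS speed observations fall below `threshold_kmh`. Both values
    are illustrative assumptions, not the study's parameters."""
    slow = sum(1 for v in speeds_kmh if v < threshold_kmh)
    return slow / len(speeds_kmh) >= min_fraction
```

Segments labeled this way, joined with the raster predictors (snow depth, nighttime light difference, elevation, and so on), would form the training table for the classifier.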

  • Takumi MURAI, Kenta ITAKURA, Riku MIYAKAWA, Sota KUDO, Kosuke MURAISHI ...
    2025 Volume 6 Issue 3 Pages 103-119
    Published: 2025
    Released on J-STAGE: November 11, 2025
    JOURNAL FREE ACCESS FULL-TEXT HTML

    Since the quality and waste rate of tofu depend on the coagulation state, viscosity prediction technology during the soymilk coagulation process is required. In this study, focusing on changes in light scattering during the coagulation process, laser scattering images of tofu surfaces were acquired under different coagulation conditions, and image features were obtained using pre-trained convolutional neural networks and a Vision Transformer as image feature extractors. Furthermore, Long Short-Term Memory (LSTM) and Transformer were introduced to perform time-series analysis for tofu coagulation prediction. As a result of model construction, the combination of ResNet-18 and LSTM achieved an overall RMSE of 2.68 mPa·s, demonstrating the effectiveness of tofu coagulation prediction using laser scattering images. This study represents an English translation and development of the content previously reported in Intelligence, Informatics and Infrastructure: Japanese edition, titled “Development of a method for predicting viscosity during the coagulation process of soymilk using laser scattering images and deep learning.”
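
Before a sequence model such as an LSTM can be trained, the per-frame image features must be windowed into fixed-length sequences paired with the viscosity at each window's end; a minimal sketch (the array shapes and window length are assumptions, not the paper's settings):

```python
import numpy as np

def make_sequences(features, viscosity, window=8):
    """Slice per-frame feature vectors (e.g., CNN embeddings of laser
    scattering images) into overlapping windows so a sequence model can
    map image-feature history to the viscosity at each window's end."""
    X, y = [], []
    for t in range(window, len(features) + 1):
        X.append(features[t - window:t])
        y.append(viscosity[t - 1])
    return np.stack(X), np.asarray(y)

# 20 frames of hypothetical 512-dim features and matching viscosities
feats = np.random.rand(20, 512)
visc = np.linspace(1.0, 20.0, 20)
X, y = make_sequences(feats, visc)   # X: (13, 8, 512), y: (13,)
```

Each row of X is then one training sequence for the time-series model, with y as the regression target.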

  • Elfrido Elias TITA, Gakuho WATANABE, Takeshi KITAHARA
    2025 Volume 6 Issue 3 Pages 120-127
    Published: 2025
    Released on J-STAGE: November 11, 2025
    JOURNAL FREE ACCESS FULL-TEXT HTML

    This study investigates thermal deformation behavior and anomaly detection in a continuous curved viaduct equipped with multi-point fixed metal bearings using GNSS displacement monitoring. Temperature fluctuations can cause significant deformations, potentially leading to structural anomalies or failures. A GNSS-based monitoring system was implemented to track displacements over time under varying thermal conditions. The study systematically analyzes thermal deformation patterns and identifies deviations that may indicate risks to structural integrity. Long-term GNSS data processing reveals insights into interactions with thermal loads and structural responses. The findings demonstrate the effectiveness of GNSS in real-time deformation tracking, enhancing accuracy in monitoring complex viaduct structures. This research contributes to early anomaly detection and optimized maintenance strategies, improving safety and ensuring the longevity of critical infrastructure. The proposed methodology offers a reliable approach to managing the structural health of curved viaducts, supporting sustainable infrastructure maintenance and long-term operational stability.
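
One simple way to separate normal thermal response from anomalies, sketched here as an assumption rather than the paper's method, is to fit the baseline displacement-temperature relation and flag epochs whose residuals exceed a tolerance:

```python
import numpy as np

def fit_thermal_response(temps, disps):
    """Least-squares fit of displacement vs. temperature over a baseline
    (healthy) period; the slope approximates the structure's thermal
    response."""
    a, b = np.polyfit(temps, disps, 1)
    return a, b

def anomaly_flags(temps, disps, a, b, tol_mm=5.0):
    """Flag epochs whose measured displacement departs from the fitted
    thermal response by more than tol_mm (an illustrative tolerance)."""
    residuals = np.asarray(disps) - (a * np.asarray(temps) + b)
    return np.abs(residuals) > tol_mm
```

A displacement far off the fitted line at a given temperature then stands out as a candidate anomaly rather than ordinary thermal expansion.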

  • Avzalshoev ZAFAR, Pang-jo CHUN
    2025 Volume 6 Issue 3 Pages 128-136
    Published: 2025
    Released on J-STAGE: November 11, 2025
    JOURNAL FREE ACCESS FULL-TEXT HTML

    Dynamic-zone fire models, such as CFAST, remain underutilised outside specialist communities due to the requirement for complex, solver-specific input syntax. This study introduces the first end-to-end pipeline that transforms natural-language fire scenarios into executable CFAST simulations on a single workstation. In the current study, we fine-tune a 7-billion-parameter Mistral model using a rank-16 LoRA adapter on 15,133 prompt-response pairs: 133 hand-curated decks from validation experiments and 15,000 synthetic variations spanning room counts (1-20), vent configurations, and heat-release-rate (HRR) profiles. Training requires only six GPU hours on an RTX 3090, preserving base weights as frozen 4-bit quantised parameters for efficient laptop-class inference. A blind evaluation of 1,513 unseen scenarios reveals that unconstrained decoding achieves merely 4.9% JSON validity and 0% end-to-end pass rates, with 95% of failures attributable to formatting errors. Implementing schema-guided decoding increases JSON validity to 88%, and a subsequent reinforcement learning-based rejection sampling loop further enhances the end-to-end success rate for executable decks to 63%, without requiring model retraining. Qualitative Smokeview analysis confirms that syntax-correct outputs reproduce vent logic and HRR growth curves indistinguishable from hand-authored baselines. The pipeline generates single-room inputs in approximately 15 seconds, outperforming GPT-4o (approximately 44 seconds). These findings demonstrate that critical performance gains derive not from larger models or datasets, but from integrating domain constraints during generation. With schema enforcement, the system eliminates ≈95% of manual input effort for single-compartment studies, establishing a foundation for practical, natural-language-to-simulation tools in fire safety engineering.
Future enhancements will prioritise physics-aware reinforcement learning and native constrained decoding to bridge remaining gaps.
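
The schema-guided validity check and the rejection-sampling loop can be sketched as follows; the JSON keys are hypothetical placeholders, not CFAST's actual input schema, and the paper's reinforcement-learning-based loop is more involved than this accept/reject skeleton:

```python
import json

# Hypothetical required fields for a downstream deck converter,
# NOT CFAST's real input schema.
REQUIRED_KEYS = {"rooms", "vents", "hrr_profile"}

def is_valid_deck(text):
    """Accept model output only if it parses as JSON and carries the
    keys the converter would need."""
    try:
        obj = json.loads(text)
    except json.JSONDecodeError:
        return False
    return isinstance(obj, dict) and REQUIRED_KEYS <= obj.keys()

def rejection_sample(generate, max_tries=5):
    """Re-query the generator until an output passes validation;
    return None if no valid deck appears within max_tries."""
    for _ in range(max_tries):
        out = generate()
        if is_valid_deck(out):
            return out
    return None
```

The abstract's headline numbers follow this pattern: most unconstrained failures are formatting errors, so enforcing structure at decode time (rather than retraining) recovers most of the validity.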

  • Avzalshoev ZAFAR, Pang-jo CHUN
    2025 Volume 6 Issue 3 Pages 137-149
    Published: 2025
    Released on J-STAGE: November 11, 2025
    JOURNAL FREE ACCESS FULL-TEXT HTML

    This study addresses a critical challenge in large-scale shaking table tests: measuring surface displacement when only discontinuous images are available (e.g., before-and-after images), a setting in which traditional continuous imaging techniques such as DIC are unusable. This study presents a deep learning framework that demonstrates exceptional robustness in the primary task of marker detection under adverse conditions, such as surface cracking and shadows, where conventional methods like template matching completely fail. A YOLOv5 model, trained on a diverse synthetic dataset generated via a GAN to avoid data leakage, successfully identified markers in noisy, real-world experimental images. However, the secondary task of regressing high-precision displacement vectors from these detections proved challenging, with quantitative evaluation revealing high errors (RMSE 35–62 mm) and a lack of linear correlation with ground truth (R² < 0). Therefore, this study presents a case study that, despite current limitations in displacement quantification accuracy, establishes a foundational pipeline for a new generation of ’smart’ geotechnical data analysis. Its primary contribution lies in proving the viability of deep learning for robust feature detection in data-limited, discontinuous-imaging environments, and the potential for hybrid systems that can provide scalable and accessible displacement analysis.
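
Once a marker has been matched between the before and after images, converting its pixel displacement to physical units is the straightforward final step; a minimal sketch with a hypothetical image scale factor:

```python
def displacement_mm(before_px, after_px, mm_per_px):
    """Euclidean surface displacement from a marker's detected center
    in the before and after images; mm_per_px is the (assumed known)
    image scale from camera calibration."""
    dx = (after_px[0] - before_px[0]) * mm_per_px
    dy = (after_px[1] - before_px[1]) * mm_per_px
    return (dx ** 2 + dy ** 2) ** 0.5
```

The hard part reported in the abstract is upstream of this arithmetic: detecting and matching the markers reliably enough that the pixel coordinates themselves are trustworthy.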
