Intelligence, Informatics and Infrastructure
Online ISSN: 2758-5816
Deep learning-based detection of roadside vegetation: how vegetation indices boost performance
Yoshiyuki Yamamoto

2024 Volume 5 Issue 2 Pages 45-56

Abstract

This study investigates the enhancement of deep learning-based roadside vegetation detection using vegetation indices. Vegetation detection is crucial for maintaining clear sightlines for autonomous driving and as an indicator of potential drainage issues and road deterioration. A Faster R-CNN architecture was employed to analyze images from vehicle-mounted cameras, with three separate input types evaluated: standard RGB images, Excess Green Index (ExG) images, and Color Index of Vegetation Extraction (CIVE) images. In addition, an integrated approach was developed that combined these input types. The results demonstrate that the integrated approach consistently outperformed individual input-based detections, achieving the highest Average Precision (AP) in both validation and test datasets. CIVE-based detection showed the highest overall performance among single input types, particularly in the test dataset. Vegetation indices generally improved detection accuracy compared to the standard RGB input, especially for challenging scenarios. However, all input types struggled with small object detection, indicating an area for future improvement. The study also revealed varying levels of detection consistency, with RGB-based detection showing the highest consistency across data sets. These findings contribute to the advancement of roadside vegetation detection techniques and suggest potential applications in comprehensive road condition assessment, automated maintenance planning, and early detection of drainage problems, complementing existing crack and pothole detection methods.

1. INTRODUCTION

The maintenance of optimal road conditions is becoming increasingly critical as the era of advanced driver assistance systems and autonomous vehicles approaches. Well-maintained roads are essential not only for current traffic safety and efficiency but also for the successful implementation and operation of these future transportation systems. As highlighted by Farah et al.1), the infrastructure requirements for automated and connected driving are more demanding than those for conventional vehicles, with a particular emphasis on consistent and predictable road environments.

One crucial aspect of road maintenance is the management of roadside vegetation. Excess vegetation growth along roadsides can obscure important visual cues and road markings that human drivers and automated systems rely on for navigation and decision making2). In addition, uncontrolled vegetation can cause various issues, such as reduced visibility, encroachment on the roadway, and potential damage to road infrastructure.

However, the significance of vegetation detection extends beyond visibility concerns. The presence of vegetation, particularly in unexpected areas, can serve as an important indicator of underlying road condition problems. Areas where vegetation thrives often correspond to locations where soil has accumulated, which can indicate poor drainage. Such areas are prone to water retention, which can lead to a cascade of road deterioration problems including soil erosion, sediment build-up in drainage systems, and even the formation of cracks and potholes.

Traditional methods of road condition assessment, including visual inspections and manual surveys, are time-consuming and often subjective. As road networks expand and maintenance demands increase, there is a growing need for more efficient and automated assessment techniques3). Although numerous studies have focused on deep learning-based detection of cracks and potholes, the detection of vegetation as an early indicator of potential road problems has remained relatively unexplored.

This study proposes a novel approach to assess road conditions by detecting the presence of vegetation using video footage captured by forward-facing vehicle-mounted cameras during regular driving. By incorporating vegetation detection into road assessment systems, the aim is to complement existing methods for crack and pothole detection, creating a more comprehensive and proactive approach to road maintenance.

This study leverages deep learning techniques, specifically the Faster R-CNN object detection algorithm4), to develop an automated system that can accurately detect roadside vegetation from video footage. While previous studies5) have focused on detecting general roadside vegetation, and others6) have proposed a digital twin-based approach for controlling overgrown roadside vegetation, our approach specifically targets the detection of small-scale vegetation overgrowth that poses immediate risks to road safety. This approach balances detection accuracy and computational efficiency, addressing current road maintenance needs and contributing to preparing infrastructure for future transportation technologies.

The proposed method involves training the Faster R-CNN architecture on a dataset of images extracted from forward-facing GoPro camera footage captured by a vehicle. The study explores different input types, including standard RGB images and images processed with vegetation indices. To enhance the accuracy and robustness of the vegetation detection, the research investigates the effects of incorporating vegetation indices into the learning process. These indices, such as the Excess Green Index (ExG) and Color Index of Vegetation Extraction (CIVE), utilize the spectral properties of vegetation and have the potential to improve the reliability of vegetation detection7).

Furthermore, this vegetation detection method could potentially contribute to the advancement of AI-based landscape evaluation techniques. In a previous study by Yamamoto8), deep learning models trained on emotional images were applied to evaluate landscape quality. The vegetation detection method proposed in this study could be integrated into such landscape evaluation systems to assess the impact of vegetation on overall landscape quality. By quantifying the presence and characteristics of roadside vegetation, it may be possible to establish correlations between vegetation features and emotional responses to landscapes, thereby enhancing the comprehensiveness of AI-based landscape evaluations.

The results of this research have practical implications for the planning of road maintenance and rehabilitation. By accurately identifying areas of excessive roadside vegetation, our method can help prioritize maintenance efforts, optimize the allocation of resources, and potentially identify areas prone to drainage issues and early-stage road deterioration. This proactive approach to road maintenance can enhance road safety, improve the longevity of road infrastructure, and contribute to more efficient and effective pavement management strategies.

In summary, this study aims to develop an advanced deep learning-based system for detecting roadside vegetation using video footage from forward-facing vehicle-mounted cameras. The research explores innovative approaches to enhance detection accuracy and robustness, including the integration of spectral information through multi-modal integration techniques. Specifically, separate Faster R-CNN models are trained using different input types: standard RGB images, ExG images, and CIVE images. The outputs from these individual models are then integrated to produce a final, more robust detection result. This multi-modal approach leverages the strengths of each input type, potentially improving overall detection performance across various environmental conditions and scales.

This research not only aims to advance roadside vegetation detection techniques but also to provide valuable insights for future applications in comprehensive road condition assessment and automated maintenance systems. By improving the accuracy and efficiency of vegetation detection through the use of different input types and their integration, it ultimately contributes to the enhancement of road safety, the optimization of maintenance resources, and the preparation of our infrastructure for future transportation technologies. Moreover, this approach has potential applications in Green Infrastructure planning and assessment, offering a tool for quantifying urban vegetation and supporting sustainable urban development strategies9). The outcomes of this study have the potential to significantly impact how road maintenance and environmental monitoring are approached in the context of evolving transportation systems, offering a more holistic and predictive approach to road infrastructure management.

2. SELECTION OF VEGETATION INDICES FOR MULTI-MODAL INPUT

This study selected two vegetation indices to create additional input modalities for roadside vegetation detection: Excess Green Index (ExG) and Color Index of Vegetation Extraction (CIVE). These indices have been extensively analyzed in the review paper by Hamuda et al.10) and have shown high performance in vegetation extraction.

(1) Excess green index (ExG)

ExG was proposed by Woebbecke et al.11) and is defined by the following equation:

ExG = 2g - r - b    (1)

where r, g, and b are the normalized Red, Green, and Blue values, respectively. ExG is characterized by its high ability to separate plants from soil and its low sensitivity to changes in lighting conditions. Meyer and Camargo-Neto12) confirmed that ExG demonstrates high performance in vegetation extraction under various background conditions.
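For illustration, a minimal NumPy sketch of equation (1) is given below; the function name and the handling of all-black pixels are assumptions made for this example rather than part of the original formulation.

```python
import numpy as np

def excess_green(rgb: np.ndarray) -> np.ndarray:
    """Compute ExG = 2g - r - b from an H x W x 3 RGB image (equation (1)).

    r, g, and b are the normalized (chromatic) coordinates, i.e. each
    channel divided by the per-pixel sum R + G + B.
    """
    rgb = rgb.astype(np.float64)
    total = rgb.sum(axis=2, keepdims=True)
    total[total == 0] = 1.0  # assumption: avoid division by zero on black pixels
    r, g, b = np.moveaxis(rgb / total, 2, 0)
    return 2.0 * g - r - b   # single-channel ExG map
```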

(2) Color index of vegetation extraction (CIVE)

CIVE was proposed by Kataoka et al.13) and is defined by the following equation:

CIVE = 0.441R - 0.811G + 0.385B + 18.78745    (2)

where R, G, and B are the original Red, Green, and Blue values, respectively. CIVE is characterized by its short computation time and high adaptability to vegetation extraction in outdoor environments. Guijarro et al.14) reported that CIVE shows superior performance compared to other vegetation indices.
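A corresponding sketch of equation (2), operating directly on the original channel values, might look as follows (function name assumed for illustration):

```python
import numpy as np

def cive(rgb: np.ndarray) -> np.ndarray:
    """Compute CIVE = 0.441R - 0.811G + 0.385B + 18.78745 (equation (2))."""
    rgb = rgb.astype(np.float64)
    R, G, B = np.moveaxis(rgb, 2, 0)
    return 0.441 * R - 0.811 * G + 0.385 * B + 18.78745
```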

(3) Reasons for selection as additional input modalities

The main reasons for selecting ExG and CIVE to create additional input modalities are as follows:

1. High Performance: Both indices have shown high accuracy in vegetation extraction across various studies, potentially offering complementary information to standard RGB images for vegetation detection.

2. Computational Simplicity: Both can be easily calculated from images captured by RGB cameras, allowing for efficient preprocessing of input data.

3. Diverse Representations: ExG uses normalized RGB values, while CIVE uses original RGB values, providing different representations of the image data that may highlight various aspects of vegetation.

4. Adaptability to Outdoor Environments: Both indices are suitable for use in outdoor environments with changing lighting conditions and complex backgrounds, which is crucial for roadside vegetation detection.

These characteristics make ExG and CIVE suitable for creating additional input modalities in this study, which focuses on roadside vegetation detection. By using these vegetation indices alongside standard RGB images, the study aims to explore how a multi-modal input approach affects the performance of the Faster R-CNN architecture in detecting roadside vegetation. The next section details the methodology for vegetation detection using these different input modalities and their integration.

3. METHODOLOGY

(1) Data Collection

The proposed method relies on video footage captured by forward-facing GoPro cameras mounted on vehicles during regular driving. The cameras are positioned to capture a clear view of the roadside environment, including the presence of vegetation. The video footage is collected from various road networks, covering a diverse range of drainage conditions and vegetation types. The collected footage is then processed to extract individual frames for further analysis.

To ensure a comprehensive dataset, footage was collected encompassing diverse road environments, as illustrated in Fig.1 through Fig.4. These images represent a variety of scenarios including residential areas, urban and suburban major roads, and mountain roads. For each of these four road environments, approximately 10 km of footage was collected along one or a few representative routes, providing a broad sample of roadside conditions. This diverse range of environments ensures that the model is trained on a wide spectrum of roadside vegetation contexts, enhancing its ability to generalize across different road types and locations.

(2) Dataset preparation

The extracted frames from the video footage are manually annotated to create a dataset for training, validation, and testing the vegetation detection model. The annotations involve drawing bounding boxes around the regions of interest (ROIs) containing roadside vegetation, with the following criteria:

1. Only vegetation on the roadway itself is considered for detection.

2. The road area is defined as extending to the curb where present, or to the drainage ditch where no curb exists.

3. Contiguous vegetation areas are enclosed in a single bounding box.

4. There are no size restrictions; any vegetation visible on the road surface is annotated.

5. Vegetation in bicycle lanes, pedestrian paths, sidewalks, planting strips, or other areas outside the defined road area is not included as detection targets.

6. When drawing bounding boxes, care was taken to minimize the inclusion of large background areas. The boxes were drawn such that vegetation exists along the diagonal of the box, ensuring that the upper, lower, left, and right sides of the box are not predominantly occupied by background.

For the annotation process, LabelMe15), an open-source graphical image annotation tool, was utilized. This tool allowed for efficient and accurate bounding box creation.

The dataset is divided into training, validation, and testing subsets, with a ratio of approximately 80%, 10%, and 10%, respectively. The training set is used to train the Faster R-CNN model, the validation set is used for tuning hyperparameters and monitoring the model’s performance during training, while the testing set is used to evaluate the model’s final performance. Furthermore, comparing the model’s performance on both validation and test sets allows for the assessment of potential overfitting issues. This comparison provides insights into the model’s generalization capabilities and helps ensure the reliability of the results. The detailed results of this comparison and analysis are presented in the Results section.

Table 1 shows the specific number of images in each subset of the dataset.

Furthermore, to understand the distribution of object sizes in the dataset, the annotated vegetation regions were categorized into small, medium, and large objects, following the size thresholds defined in the COCO dataset16). Specifically, after resizing the images to match the input dimensions of our Faster R-CNN implementation in Detectron2, objects with an area smaller than 32² pixels are considered small, those between 32² and 96² pixels are medium, and those larger than 96² pixels are large. Table 2 presents the number of objects in each size category for the train, validation, and test sets.
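This COCO-style size bucketing can be expressed compactly; the following sketch assumes annotations are available as axis-aligned box widths and heights in pixels of the resized images.

```python
def size_category(box_width: float, box_height: float) -> str:
    """Assign a COCO-style size category based on box area in pixels."""
    area = box_width * box_height
    if area < 32 ** 2:
        return "small"
    elif area < 96 ** 2:
        return "medium"
    return "large"
```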

This detailed breakdown of the dataset composition provides important context for interpreting the model’s performance, particularly with respect to objects of different sizes.

(3) Faster R-CNN architecture

Object detection is a crucial task in computer vision, with several state-of-the-art algorithms available, such as Faster R-CNN4), YOLO17), EfficientDet18), and SSD19). For this study, Faster R-CNN was chosen due to its high accuracy and ability to handle objects of various sizes, which is particularly important for detecting roadside vegetation that may appear at different scales in the images. Although semantic segmentation could provide more detailed vegetation detection at the pixel level, for road management purposes such fine-grained information is not necessary. Furthermore, semantic segmentation requires more extensive training data preparation and higher computational costs. Therefore, object detection was chosen as it provides sufficient information for roadside vegetation management goals while being more efficient in terms of data preparation and processing time.

Faster R-CNN consists of two main components: a Region Proposal Network (RPN) and a classification network. The RPN generates a set of candidate object bounding boxes, which are then refined and classified by the classification network. This study selected Faster R-CNN over one-stage detectors like YOLO or EfficientDet because the task focuses on detecting vegetation specifically on the road surface, rather than in the entire image. The RPN is particularly effective in this context, as it can efficiently propose regions of interest within the limited area of the road surface. While YOLO and SSD offer faster inference times, Faster R-CNN generally provides superior detection accuracy, especially for small objects20). This accuracy is crucial for our application, where precise detection of vegetation, including small plants, is essential for comprehensive road maintenance assessment. In this research, the Faster R-CNN architecture is adapted to detect roadside vegetation from different input types, leveraging its strengths in accuracy and adaptability to enhance vegetation detection capabilities.

(4) Incorporating vegetation indices as additional input types

To explore the potential benefits of different input representations, this study investigates the effects of incorporating vegetation indices into the detection process. Specifically, the Excess Green Index (ExG) and the Color Index of Vegetation Extraction (CIVE) are utilized to create additional input types.

The ExG and CIVE indices are computed using the coefficients for the R, G, and B channels as defined in equations (1) and (2). Each index is calculated separately for each channel and not combined into a single channel. After the computation, a global min-max stretching is applied to each index to standardize the dynamic range of the images. This ensures that all images are on the same scale. The processed indices are then converted into three-channel images to match the input format requirements of the Faster R-CNN architecture and to ensure fair comparisons with the RGB images. Examples of images processed with ExG and CIVE indices are shown in Fig.5 and Fig.6, respectively. While these processed images may not visually reveal obvious additional information compared to the RGB image, they are expected to offer potential advantages such as enhanced vegetation contrast, possible reduction in sensitivity to lighting variations, and provision of complementary data to RGB.
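As a minimal sketch of this preprocessing step, the snippet below applies dataset-wide (global) min-max stretching to a computed index map and expands it to a three-channel image; replicating a single-channel index to three channels is one plausible reading of the description above, and the exact per-channel handling in the authors' pipeline may differ.

```python
import numpy as np

def index_to_three_channel(index_map: np.ndarray,
                           global_min: float,
                           global_max: float) -> np.ndarray:
    """Apply global min-max stretching to an index map and expand it to a
    three-channel uint8 image matching the RGB input format.

    global_min and global_max are computed once over the whole dataset so
    that every image is mapped to the same dynamic range.
    """
    stretched = (index_map - global_min) / (global_max - global_min + 1e-8)
    stretched = np.clip(stretched, 0.0, 1.0) * 255.0
    return np.repeat(stretched[..., np.newaxis], 3, axis=2).astype(np.uint8)
```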

The performance of the Faster R-CNN architecture is evaluated separately with RGB, ExG, and CIVE images. Each training process uses only one type of input (either RGB, ExG, or CIVE) to determine the effectiveness of each input representation in vegetation detection.

(5) Training and optimization

This study employs the Faster R-CNN implementation from Detectron2, a state-of-the-art object detection framework developed by Facebook AI Research. The model architecture uses a ResNeXt-101-32x8d-FPN backbone, which is initialized with weights pretrained on ImageNet, leveraging transfer learning to benefit from features learned on a large-scale dataset. The network is fine-tuned for our specific task of vegetation detection, with the final classification layer adapted to 2 classes (vegetation and background). Training is performed using stochastic gradient descent (SGD) with momentum, optimizing a multi-task loss function that combines the objectness and box-regression losses of the Region Proposal Network (RPN) with the classification and box-regression losses of the detection head.

Input images undergo normalization using per-channel standard deviation division, with values [57.375, 57.120, 58.395]. Data augmentation techniques, including horizontal flipping and multi-scale training (640-800 px), are applied to enhance model generalization.

Key training parameters and settings are summarized in Table 3.
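For reference, a minimal Detectron2 configuration sketch consistent with the setup described above is shown below. The model zoo config name, weight path, and dataset names are assumptions based on the standard Detectron2 layout, and the solver values (learning rate, batch size, iteration count) are those listed in Table 3 and are not reproduced here.

```python
from detectron2 import model_zoo
from detectron2.config import get_cfg
from detectron2.engine import DefaultTrainer

cfg = get_cfg()
# Faster R-CNN with a ResNeXt-101-32x8d-FPN backbone (model zoo base config)
cfg.merge_from_file(model_zoo.get_config_file(
    "COCO-Detection/faster_rcnn_X_101_32x8d_FPN_3x.yaml"))

# ImageNet-pretrained backbone weights for transfer learning
cfg.MODEL.WEIGHTS = "detectron2://ImageNetPretrained/FAIR/X-101-32x8d.pkl"

# One foreground class (vegetation); the background class is handled implicitly
cfg.MODEL.ROI_HEADS.NUM_CLASSES = 1

# Per-channel standard deviations used for input normalization
cfg.MODEL.PIXEL_STD = [57.375, 57.120, 58.395]

# Multi-scale training between 640 and 800 px; horizontal flipping is enabled
# by default in the Detectron2 training pipeline
cfg.INPUT.MIN_SIZE_TRAIN = (640, 672, 704, 736, 768, 800)

# Hypothetical dataset names, registered beforehand (e.g. register_coco_instances)
cfg.DATASETS.TRAIN = ("vegetation_train",)
cfg.DATASETS.TEST = ("vegetation_val",)

# SGD with momentum is Detectron2's default solver; the learning rate schedule,
# batch size, and iteration count follow Table 3 (not reproduced here).

trainer = DefaultTrainer(cfg)
trainer.resume_or_load(resume=False)
trainer.train()
```

Under this sketch, training a separate model for each input type (RGB, ExG, or CIVE) amounts to registering datasets that point to the corresponding processed images and rerunning the same configuration.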

The model is evaluated on the validation set periodically, and the best-performing model based on the validation performance is selected for final testing.

(6) Evaluation

The trained Faster R-CNN architecture is evaluated on the testing dataset for each input type to assess its performance in detecting roadside vegetation. Following the evaluation protocol proposed in the COCO dataset16), 22), the primary evaluation metric used is Average Precision (AP), which provides a comprehensive measure of the detection accuracy across various Intersection over Union (IoU) thresholds. Specifically, AP is calculated for IoU thresholds from 0.5 to 0.95 (denoted as AP), at 0.5 IoU (AP50), and at 0.75 IoU (AP75). Additionally, AP is computed for objects of different sizes: small (APs), medium (APm), and large (APl), using the same size categorization as defined in Table 2, which presents the distribution of object sizes in our dataset.

These metrics collectively measure the ability to correctly identify vegetation regions while minimizing false positives and false negatives, and they provide insights into the model’s performance across different scales and detection confidence levels.
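A minimal evaluation sketch using Detectron2's COCO-style evaluator is given below; it continues the configuration sketch from the training subsection, and the dataset name and checkpoint path are assumptions for illustration.

```python
from detectron2.data import build_detection_test_loader
from detectron2.engine import DefaultPredictor
from detectron2.evaluation import COCOEvaluator, inference_on_dataset

# Load the best checkpoint selected on the validation set (path assumed)
cfg.MODEL.WEIGHTS = "output/model_final.pth"
predictor = DefaultPredictor(cfg)

# "vegetation_test" is a hypothetical registered dataset name
evaluator = COCOEvaluator("vegetation_test", output_dir="./eval")
test_loader = build_detection_test_loader(cfg, "vegetation_test")
metrics = inference_on_dataset(predictor.model, test_loader, evaluator)

# metrics["bbox"] reports AP (IoU=0.50:0.95), AP50, AP75, APs, APm, and APl
print(metrics["bbox"])
```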

Furthermore, to assess the training process and potential overfitting issues, the training progress is analyzed by tracking both the loss and Average Precision (AP) over training iterations. This analysis is performed for each input type (RGB, ExG, and CIVE) and is visualized in Fig.7, Fig.8, and Fig.9 respectively. These learning curves provide valuable insights into the model’s convergence, stability, and generalization capabilities for each input type.

The detection performance is also visually assessed by comparing the predicted bounding boxes with the ground truth annotations, allowing for a qualitative evaluation of the detection results in various road scenarios.

This comprehensive evaluation approach, combining quantitative metrics, learning curve analysis, and visual assessment, enables a thorough understanding of the model’s performance and behavior across different input types and throughout the training process.

(7) Integrated multi-modal approach

To further improve the detection performance, an integrated multi-modal approach was developed that combines the results from the RGB, ExG, and CIVE-based Faster R-CNN detections. Fig.10 illustrates the flow of this integrated multi-modal approach.

The integration process follows these steps (a minimal code sketch is given after the list):

1. Collect detection results from all three input types (RGB, ExG, and CIVE).

2. Merge all detection results into a single pool of predictions.

3. Apply Non-Maximum Suppression (NMS)23) to the merged results to eliminate redundant detections. An IoU (Intersection over Union) threshold of 0.5 is used for this process.

4. The remaining detections after NMS form the final integrated predictions.
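A minimal sketch of steps 2-4, assuming each model's per-image output is available as (boxes, scores) tensors in (x1, y1, x2, y2) format, could use torchvision's NMS implementation:

```python
import torch
from torchvision.ops import nms

def integrate_detections(per_input_results, iou_threshold=0.5):
    """Merge per-image detections from the RGB, ExG, and CIVE models
    and suppress redundant boxes with NMS.

    per_input_results: iterable of (boxes, scores) pairs, one per input type,
    where boxes is an (N, 4) tensor and scores is an (N,) tensor.
    """
    boxes = torch.cat([b for b, _ in per_input_results], dim=0)
    scores = torch.cat([s for _, s in per_input_results], dim=0)
    keep = nms(boxes, scores, iou_threshold)  # indices of boxes that survive
    return boxes[keep], scores[keep]
```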

This integrated approach leverages the strengths of each input type, potentially improving overall detection accuracy and robustness. To quantify the contribution of each input type to the integrated results, a leave-one-out strategy is employed:

1. Create a new integration excluding one input type at a time.

2. Evaluate the performance of this reduced integration.

3. Calculate the difference in Average Precision (AP) between the full integration and the reduced integration.

4. This difference represents the contribution of the excluded input type to the full integration.

This process allows for the assessment of not only the overall performance of the integrated approach but also the individual importance of each input type within the multi-modal framework. The results of this analysis provide information on the complementary nature of the different input types (RGB, ExG, and CIVE) in the detection of vegetation at the roadside.
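The contribution analysis itself amounts to bookkeeping over repeated evaluations; a minimal sketch is given below, where evaluate_ap is a placeholder for the full merge-NMS-evaluate pipeline described above.

```python
INPUT_TYPES = ("rgb", "exg", "cive")

def leave_one_out_contributions(evaluate_ap) -> dict:
    """evaluate_ap(included_types) is assumed to integrate the detections of
    the listed input types and return the resulting AP on a given dataset."""
    full_ap = evaluate_ap(INPUT_TYPES)
    return {
        excluded: full_ap - evaluate_ap(tuple(t for t in INPUT_TYPES if t != excluded))
        for excluded in INPUT_TYPES
    }
```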

4. RESULTS

The performance of the Faster R-CNN architecture using RGB, ExG, CIVE inputs, and their integration for roadside vegetation detection is presented in Table 4 and Table 5 for the validation and test datasets, respectively.

The integrated approach consistently outperforms individual input types in both datasets, achieving the highest AP (IoU=0.50:0.95) of 16.22% and 22.02% in the validation and test datasets, respectively. Among single input types, CIVE-based detection shows the highest overall performance, particularly in the test dataset.

Table 6 presents the absolute performance difference between the test and validation datasets, providing insights into detection consistency and potential overfitting issues.

The training progress for the RGB, ExG, and CIVE-based detections in the validation set are shown in Fig.7, Fig.8, and Fig.9, respectively. These curves, illustrating the loss and Average Precision (AP) over training iterations, provide insights into the learning process and potential overfitting issues, particularly for the ExG-based detection.

To visually demonstrate the performance of each input type and the integrated approach, detection results on specific sample images from both validation and test sets are presented. These images correspond to the scenes previously introduced in Fig.3 and Fig.1. Fig.11, Fig.12, and Fig.13 show the detection results using RGB, ExG, and CIVE inputs respectively on the validation set image from Fig.3. Similarly, Fig.14, Fig.15, and Fig.16 present the results on the test set image from Fig.1. These examples, while not exhaustive, provide insights into the detection performance across different input types and dataset splits. One particular concern was that in areas with abundant vegetation outside the road, the system might incorrectly identify these non-road vegetation areas as targets. However, our tests, as demonstrated in these figures, showed that such misidentifications were not observed. This suggests that our model successfully focuses on road vegetation even in environments with rich off-road plant life.

The effectiveness of the integrated approach is illustrated in Fig.17 and Fig.18 for the validation and test sets, respectively. These figures combine the detection results of the RGB, ExG and CIVE inputs, demonstrating how the integration of multiple input types can lead to more accurate and comprehensive vegetation detection. In both cases, the integrated approach shows improved detection of vegetation, particularly in areas where individual input types might have missed or incorrectly identified vegetation.

For instance, in the validation set image (Fig.17), the integrated approach successfully detects vegetation areas that were missed or only partially detected by individual input types. Similarly, in the test set image (Fig.18), the integrated approach provides a more complete detection of roadside vegetation, effectively combining the strengths of each input type.

These visual results corroborate the quantitative improvements shown in Table 4 and Table 5, highlighting the benefits of the integrated multi-modal approach in roadside vegetation detection.

5. DISCUSSIONS

Based on a comprehensive analysis of the results presented in Table 4, Table 5, and Table 6, as well as Fig.7, Fig.8, and Fig.9, the following insights can be drawn:

1. Integrated Approach Performance: The integrated approach consistently outperformed individual input types, demonstrating the value of combining multiple input representations in vegetation detection tasks.

2. Individual Input Type Characteristics: CIVE-based detection showed the highest overall performance among single input types, particularly in the test dataset (Table 5, AP 19.96%). However, RGB-based detection demonstrated the most consistent performance across validation and test sets, suggesting better generalization.

3. Vegetation Indices Impact: The use of vegetation indices (ExG and CIVE) showed mixed results compared to standard RGB input. In Table 5, CIVE achieved the highest AP (19.96%) and AP50 (52.16%) among single input types, outperforming RGB (18.25% and 45.28% respectively). However, ExG performed similarly to RGB in AP (18.17%) and only slightly better in AP50 (45.70%). Notably, RGB outperformed both vegetation indices in AP75 (13.27%) and APs (26.62%), indicating its strength in more precise detections and small object detection.

4. Small Object Detection Variability: The performance on small object detection (APs) showed significant variability across datasets and input types. In Table 4, APs consistently outperformed APm and APl for all input types. However, in Table 5, APs scores were generally lower than APm, but often higher than APl. This inconsistency highlights the challenge of generalizing small object detection capabilities. Future research should address the data imbalance evident in Table 2 by augmenting the dataset with more small vegetation samples. Additionally, optimizing the anchor boxes of Faster R-CNN for small objects could potentially improve detection performance.

5. Detection Consistency and Overfitting: The RGB-based detection showed the highest consistency, while ExG and CIVE-based detections exhibited larger variations between validation and test sets, suggesting potential overfitting issues. This is particularly evident in Table 6, where RGB shows the smallest AP difference (2.88%) compared to ExG (4.92%) and CIVE (5.11%).

6. ExG-based Detection Behavior: The ExG-based detection’s learning curve (Fig.8) showed a continuous increase in Average Precision for small objects (APs) during training, despite lower overall AP scores. This trend suggests potential overfitting to small objects in the training set.

7. Visual Characteristics: As shown in Fig.1 to Fig.6, each input type emphasizes different visual features. RGB images retain rich color and texture information, while ExG and CIVE highlight vegetation but may lose some background details. This visual difference is reflected in the detection results (Fig.11 to Fig.18).

8. Information Content and Separability: The performance differences among input types can be attributed to their varying information content and ease of background separation. RGB’s rich information allows for more generalizable learning, while ExG and CIVE’s emphasis on vegetation separation may lead to overfitting, especially for small objects.

9. Weather Impact and Input Types: While weather conditions can affect color information and model accuracy, vegetation indices like ExG and CIVE may be more robust to illumination changes than raw RGB data. ExG normalizes RGB values, potentially reducing brightness impact, while CIVE enhances vegetation-background contrast. However, the specific effects of weather on different input types and overall detection performance require further investigation in future studies.

These observations provide valuable information for future research directions and practical applications in the detection of vegetation on roads. The inconsistent performance across input types, metrics, and datasets highlights a key challenge in this field. Future work should prioritize improving consistency, particularly in small object detection and across different environmental conditions. Developing more robust integration methods that maintain the strengths of individual input types while mitigating their weaknesses could lead to more reliable roadside vegetation detection systems.

6. CONCLUSIONS

This study demonstrated the effectiveness of using vegetation indices (ExG and CIVE) combined with RGB images in an integrated multi-modal approach for improving roadside vegetation detection. Key findings include:

1. The integrated approach consistently outperformed individual input types, highlighting the benefits of combining multiple input representations.

2. Vegetation indices enhanced detection accuracy, particularly in challenging scenarios, but also showed a tendency for overfitting, especially with small objects.

3. RGB-based detection demonstrated the most consistent performance across datasets, suggesting better generalization due to its rich information content.

4. Small object detection performance varied significantly across datasets and input types, indicating a complex challenge that requires further investigation.

5. The study revealed varying levels of detection consistency and potential overfitting issues, particularly in vegetation index-based detections, as evidenced by the performance differences between validation and test sets.

6. Visual analysis of different input types provided insights into their strengths and weaknesses, highlighting the need for careful consideration of input representation in model design.

These findings contribute to the advancement of roadside vegetation detection techniques. Future work should focus on improving small object detection consistency, enhancing generalization across different input types, and optimizing the integration approach. Developing strategies to combat overfitting in vegetation index-based detections is also crucial. Real-world testing and long-term performance evaluations are essential next steps for practical implementation in road maintenance and autonomous driving applications.

Acknowledgments

This work was supported by JSPS KAKENHI Grant Number JP24K07717 and the SETO Consortium of Universities.

References
 
© 2024 Japan Society of Civil Engineers