Abstract
Apron operations must ensure both high utilization of the available capacity and safe aircraft handling,
even under degraded environmental conditions such as low visibility. An appropriate sensor environment
could support controllers, with deep learning models ensuring that observed objects are classified
correctly. The fundamental challenge is that such models require large amounts of training data.
We therefore developed a virtual airport to generate the required training and validation data, including
ground truth, on demand and for any operational scenario. We apply our concept of a virtual airport
and sensor environment to Singapore Changi Airport, implementing a synthetic LiDAR sensor. By
combining different data sources with our own models, a multitude of 3D scenes can be generated that
correspond to the real operational environment. From these scenes, point clouds are extracted according
to the specifications of the LiDAR sensor; each point is already labeled by the underlying scene model and
serves as input to PointNet++ for segmentation and classification. We show that training a classifier
on synthetic input data is a promising approach that covers relevant aspects of the real system
and can therefore readily be applied in (augmented) tower environments.
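To illustrate the data-generation step described above, the following minimal sketch ray-casts a synthetic LiDAR scan over a toy labeled scene. The sensor parameters (32 channels, 0.2° horizontal resolution, ±15° vertical field of view) and the scene contents (a flat apron surface and a proxy sphere standing in for an aircraft fuselage) are illustrative assumptions, not the configuration used in this work.

```python
# Hypothetical sketch: ray-cast a synthetic LiDAR scan over a labeled scene.
# Sensor specs and scene objects are illustrative assumptions only.
import numpy as np

def lidar_rays(channels=32, h_res_deg=0.2, v_fov=(-15.0, 15.0)):
    """Unit direction vectors for one full 360-degree sweep."""
    az = np.deg2rad(np.arange(0.0, 360.0, h_res_deg))
    el = np.deg2rad(np.linspace(v_fov[0], v_fov[1], channels))
    az, el = np.meshgrid(az, el)
    return np.stack([np.cos(el) * np.cos(az),
                     np.cos(el) * np.sin(az),
                     np.sin(el)], axis=-1).reshape(-1, 3)

def intersect_sphere(origin, dirs, center, radius):
    """Ray-sphere intersection; returns hit distance or inf per ray."""
    oc = origin - center
    b = dirs @ oc
    c = oc @ oc - radius ** 2
    disc = b ** 2 - c
    t = -b - np.sqrt(np.maximum(disc, 0.0))
    return np.where((disc >= 0.0) & (t > 0.0), t, np.inf)

def intersect_ground(origin, dirs, z=0.0):
    """Ray-plane intersection with the apron surface z = const."""
    with np.errstate(divide="ignore"):
        t = (z - origin[2]) / dirs[:, 2]
    return np.where(t > 0.0, t, np.inf)

def scan(origin=np.array([0.0, 0.0, 2.5]), max_range=120.0):
    """One scan: nearest hit per ray, labeled by the scene model."""
    dirs = lidar_rays()
    # Labeled scene: 0 = apron ground, 1 = fuselage (proxy sphere).
    t_ground = intersect_ground(origin, dirs)
    t_fuselage = intersect_sphere(origin, dirs,
                                  np.array([40.0, 0.0, 3.0]), 3.0)
    t = np.minimum(t_ground, t_fuselage)
    hit = t < max_range
    points = origin + dirs[hit] * t[hit, None]
    labels = np.where(t_fuselage[hit] < t_ground[hit], 1, 0)
    return points, labels

if __name__ == "__main__":
    pts, lbl = scan()
    print(pts.shape, np.bincount(lbl))  # labeled point cloud per class
```

Because every object in the simulated scene already carries a class, the per-point labels of the extracted cloud come for free; this is the property that lets such point clouds serve directly as supervised training input for PointNet++.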