This study uses an unmanned aerial vehicle (UAV)-mounted green Light Detection and Ranging (LiDAR) system to verify the learnability of deep learning models for land cover classification, evaluated by the mean and the absolute difference of label-based F1 scores, overall accuracy, and macro F1 under cross-seasonal mutual prediction. The image fusion method in this study superimposes visualized LiDAR data onto aerial photographs to provide new features. LiDAR data are visualized using a high-contrast color scale (the default), an averaged high-contrast color scale (the same colors as the default but different split points), and a high-contrast gray scale (different colors from the default but the same split points). These settings are designed to compare how the split points and the point-associated colors of a color scale affect the learnability of cross-seasonal data. Notably, the results show that relying solely on aerial photographs is insufficient compared with the image fusion method, especially when training and predicting on cross-seasonal data. The comparison of label-based F1 scores demonstrates the essential contribution of LiDAR to land cover classification of vegetation labels.
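The fusion described above can be sketched as follows: map each LiDAR height value to a color via fixed split points, then alpha-blend the result onto the aerial photograph. This is a minimal illustration only; the split points, palette, and blending weight below are hypothetical placeholders, not the values used in the study.

```python
import numpy as np

# Hypothetical split points (m) and palette for a "high-contrast" color scale.
# The study's actual split points and colors are not specified here.
SPLITS = [2.0, 5.0, 10.0]                            # interval breakpoints
COLORS = np.array([[0, 0, 255],                      # blue   : < 2 m
                   [0, 255, 0],                      # green  : 2-5 m
                   [255, 255, 0],                    # yellow : 5-10 m
                   [255, 0, 0]], dtype=np.uint8)     # red    : >= 10 m

def colorize(height_map: np.ndarray) -> np.ndarray:
    """Assign each pixel the color of the interval its height falls in."""
    bins = np.digitize(height_map, SPLITS)           # index 0..len(COLORS)-1
    return COLORS[bins]

def fuse(aerial_rgb: np.ndarray, lidar_rgb: np.ndarray, alpha: float = 0.5) -> np.ndarray:
    """Alpha-blend the visualized LiDAR layer onto the aerial photograph."""
    blended = alpha * lidar_rgb + (1.0 - alpha) * aerial_rgb
    return blended.astype(np.uint8)

heights = np.array([[1.0, 3.0], [7.0, 12.0]])        # toy 2x2 height map
photo = np.full((2, 2, 3), 128, dtype=np.uint8)      # gray aerial image stub
fused = fuse(photo, colorize(heights))
```

Changing `SPLITS` while keeping `COLORS` fixed corresponds to the "same colors, different split points" variant, and replacing `COLORS` with gray levels while keeping `SPLITS` corresponds to the gray-scale variant.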