The Horticulture Journal
Online ISSN : 2189-0110
Print ISSN : 2189-0102
ISSN-L : 2189-0102
ORIGINAL ARTICLES
Prediction of Lettuce Harvest Date and Evaluation of Data for Yield Estimation Using Artificial Intelligence Analysis of Aerial Drone Images
Shinichi Nakano, Nobutada Fujii, Ryohei Koyama, Yuichi Uno

2025 Volume 94 Issue 4 Pages 472-482

Abstract

Predicting the harvest date and yield of vegetables contributes substantially to stable shipments; however, practical applications for lettuce remain limited. This study aimed to improve the accuracy of lettuce growth sensing and harvest date prediction by using artificial intelligence to analyze aerial photographs captured by drones. In addition, the relationship between yield and area data from aerial images is discussed with regard to the final goal of yield estimation. For harvest date prediction, a convolutional neural network trained on images of lettuce at different leaf ages was used to construct a leaf age estimation system, which was applied to a leaf age-increasing model for lettuce created from two years of data. Six production plots in two regions were targeted for practical predictions. Images were obtained by aerial drone photography from 14 to 21 days after planting and input into the system to estimate leaf age. The day when leaf age reached 40 was predicted as the harvest date, using forecast and normal daily mean temperature values obtained from mesh agrometeorological data. The mean error between the predicted and actual harvest dates was 2.35 days, within the target value of 3.5 days, indicating that harvest dates were predicted with high accuracy. Toward yield estimation, the relationship between yield and cultivated area was surveyed using drone aerial images of plots with fertilizer-dependent growth. The results showed that vegetation coverage was more strongly correlated with yield than the vertically projected plant area was. The correlation between vegetation coverage and yield was higher during the pre-heading period (leaf age 16.2) than during the early heading period (leaf age 25.1). Therefore, we suggest that vegetation coverage in the pre-heading period could be used as an explanatory variable for yield estimation.
In conclusion, this study enabled the prediction of harvest dates during growth at a practical level and provided basic insights for yield estimation.

Introduction

Predicting the vegetable harvest date and estimating yield contribute substantially to stable shipments. However, practical application of forecasting technology in open-field crops such as lettuce has been difficult to date due to its weather-dependent characteristics.

Previous studies have attempted to predict harvest date and yield using growth models and weather data; however, several challenges remain. The first is the lack of technology for sensing lettuce growth in open fields. To date, research has relied largely on destructive sampling surveys, which require dry and fresh weight data and demand considerable time and effort (Okada and Sasaki, 2016; Takada, 2022). As a non-destructive alternative, remote sensing has been used to estimate spring wheat yield (Doraiswamy et al., 2003). However, that study used satellite imagery, which generally has low resolution and cannot acquire images on cloudy days. One way to overcome this issue is smart farming technology, which has been developed to sense the growth of vast areas of crops in real time using drones and artificial intelligence (AI) (Wang et al., 2023). Remote sensing with drones typically uses cameras equipped with infrared sensors; however, these are prohibitively large and expensive (Petropoulou et al., 2023). In addition, such technology cannot be applied directly to open-field lettuce cultivation because it was developed within the limited confines of a tightly controlled greenhouse environment. The second issue is the validity of harvest date prediction and yield estimation techniques. In particular, because open fields differ from artificially controlled greenhouses, growth delays caused by events such as rainstorms may not be reflected in harvest date predictions; this is likely because the harvest date has often been predicted from the planting date alone (Mokhtar et al., 2022; Wurr et al., 1992). To address this, Okada and Sasaki (2016) attempted to predict the harvest date of cabbage based on head dry weight during growth. However, their method is error-prone because the use of dry weight requires consideration of moisture content.
Furthermore, this is impractical because dry weight measurements are time-consuming. Another important issue is that many studies have not evaluated the prediction results from a practical standpoint.

To address these issues, this study proposes an improved growth monitoring and harvest date prediction technique for lettuce, and the results of a preliminary survey for yield estimation are reported. Specifically, the following two approaches were used. First, to improve growth observation techniques for open-field lettuce, a drone was used to conduct non-destructive, comprehensive surveys. This eliminates data loss owing to cloudy weather and enables high-throughput growth sensing. The method monitors crops using a drone equipped with a small visible-light camera, which can be introduced more easily and cheaply than an infrared camera. Furthermore, assuming less-than-ideal growing conditions in an open field, we assessed oligotrophic and eutrophic conditions by designating treatment zones with different fertilizer concentrations. Second, to develop a harvest date prediction technology, we propose a new method that predicts the harvest date using images of growing lettuce obtained by drone photography. As mentioned earlier, predicting the harvest date from the planting date or lettuce weight is problematic. As an alternative, head hardness is a more appropriate indicator (Gil et al., 2012). Because it is difficult to estimate head hardness from image information, the present study focused on the number of inner leaves, which is highly correlated with the hardness of the lettuce head (Jenni and Bourgeois, 2008). Extensive research using convolutional neural networks (CNN) to count leaves has targeted many types of rosette plants (Aich and Stavness, 2017). However, these approaches were inefficient because they required many training images and a trained segmentation step to binarize the plant and background areas in RGB images.
In this study, we aimed to improve the accuracy of leaf age estimation using fewer training images by creating a CNN specialized for lettuce. The leaf age-increasing model (Kanda et al., 2000), a proven model for predicting rice ripening, was combined with the CNN to enhance the accuracy of harvest date prediction based on leaf age. Furthermore, to assess the practicality of this technique, we set an acceptable margin of error of 3.5 days or less, given that sales planning is generally performed weekly. To establish a potential yield estimation technique in the future, we also obtained vegetation coverage and vertically projected areas from drone images and investigated the relationship between these two factors and yield.

Based on the above, the method proposed in this study is expected to facilitate harvest date prediction from the mid-growth stage and provide basic knowledge for yield estimation. This will lead to more accurate adjustments to fluctuations in the growth rate caused by stress and other factors. Widespread use of this forecasting method will allow producers to provide harvest information to consumers before shipment. This valuable information will enable production areas to achieve favorable sales by adjusting shipments. The demand side can use this information to coordinate cargo collection in other production areas. The methodology proposed in this study will contribute to the optimization of food supply chains, including production sites.

Materials and Methods

Leaf age estimation using convolutional neural network

The analysis was conducted in the following order: leaf age estimation using a CNN, prediction of harvest date using the leaf age-increasing model, and the relationship between vegetation coverage and yield. Five lettuce varieties, ‘Elegant’, ‘Raptor’, ‘Verde 7’, ‘Vivre’, and ‘Revolution’, were cultivated according to regional cultivation standards from October to February in a research field at the Awaji Agricultural Technology Institute (34.31°N, 134.80°E) in Minami-Awaji City, Hyogo, Japan. Briefly, seeds were sown in cell trays (15 mL × 200 cells) filled with compost (N:P2O5:K2O = 150:1,500:150 mg·L−1) (Yosaku N-15; J-Cam Agri, Tokyo, Japan) on September 10, 2020. Seedlings were grown in a greenhouse and fertilized with liquid fertilizer (N:P2O5:K2O = 125:62.5:100 mg·L−1) (Kumiai Liquid Fertilizer #10; Katakura & Co-op Agri Corporation, Tokyo, Japan) per cell tray on the 14th and 21st days after sowing. On October 12, 2020, 32-day-old seedlings were planted in two rows on a ridge covered with a black mulch film but without a polytunnel. The planting density was 5,900 plantlets/10 a, with a spacing of 130 cm between rows and 26 cm between plants. Slow-release fertilizer (N:P2O5:K2O = 25.2:12.6:14 kg·10 a−1) (Super IBS890; J-Cam Agri, Tokyo) was applied once before planting as the base fertilizer without additional fertilization. Siliceous dolomite (100 kg·10 a−1) was used to neutralize the soil pH.

Vertical images were obtained using a digital camera (FinePix XP140; FUJIFILM, Japan) positioned 1 m above the lettuce head, and leaf age was measured for each plantlet. Lettuce images were cropped to 30 × 26 cm (row and plant spacing) and classified into 12 groups of leaf ages from 5–16 (Fig. 1). Lettuce leaf age was defined as the total number of outer and head leaves, excluding the cotyledons. The minimum discrimination criteria were a leaf width of 0.5 cm for the outer leaves and a leaf weight of 1 g for the head leaves. A CNN requires a large number of images to achieve good discrimination accuracy. Therefore, data augmentation was performed using left-right flips, cutout, scale augmentation, image rotation, brightness changes, and contrast changes. A total of 300 training images for leaf ages 5–11 (before heading) and 103 images for leaf ages 12–16 (beginning of heading) were prepared for each variety. The number of training images was augmented to 64 times the original number, resulting in 167,360 training images. Images of the lettuce were captured at an altitude of 10 m using a camera (Zenmuse X5; DJI, China) mounted on a drone (Inspire 2; DJI). From the images, areas showing lettuce were cut into 30 cm × 26 cm sections, and the leaf age was linked to each plant to create 796 evaluation images. All training and evaluation images were resized to 90 × 78 pixels for input into the CNN.
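As an illustration, the augmentation operations named above can be sketched with plain numpy. This is a minimal version: the cutout patch location and the brightness/contrast factors are arbitrary choices, and scale augmentation is omitted.

```python
import numpy as np

def augment(img):
    """Minimal illustrative versions of the augmentations named in the
    text (left-right flip, rotation, brightness change, contrast change,
    cutout). img is an (H, W, 3) float array in [0, 1]."""
    out = [img]
    out.append(np.fliplr(img))                               # left-right flip
    out.append(np.rot90(img))                                # rotation
    out.append(np.clip(img * 1.2, 0.0, 1.0))                 # brightness change
    out.append(np.clip((img - 0.5) * 1.5 + 0.5, 0.0, 1.0))   # contrast change
    cut = img.copy()                                         # cutout: zero a patch
    cut[10:20, 10:20, :] = 0.0
    out.append(cut)
    return out

views = augment(np.full((30, 30, 3), 0.5))
```

Applying such operations repeatedly with varied parameters is how one original image can yield the 64 augmented variants described in the text.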

Fig. 1

Appearance of lettuce at each leaf age. Lettuce was grown using conventional methods with black vinyl mulch. Each plant was photographed at the leaf age stage shown in the upper-left corner of each image. Scale bar, 5 cm.

The CNN consisted of two convolutional layers, two pooling layers, and three fully connected layers (Fig. 2). In the convolutional layers, features are extracted by convolving a filter with the input. If there are N inputs (hereafter referred to as channels), an L × L 2D filter is convolved with each H × W input channel, and the results are summed across the N channels to obtain an H × W output. The output x_{ij}^{k} for channel k is calculated using equation (1).

Fig. 2

Network structure diagram of the transfer-trained model VGG16. VGG16 is composed of 16 layers: 13 convolutional layers and three fully connected layers. The network was pre-trained on the large ImageNet dataset, comprising images from 1,000 categories not including lettuce, and the resulting weights were used. The input image resolution was at least 50 × 50 pixels.

  
x_{ij}^{k} = \sum_{p=1}^{L} \sum_{q=1}^{L} y_{i+p-1,\, j+q-1}\, h_{pq}^{k} + b_{k}  (when i and j start at 1)
x_{ij}^{k} = \sum_{p=1}^{L} \sum_{q=1}^{L} y_{i+p,\, j+q}\, h_{pq}^{k} + b_{k}  (when i and j start at 0)
(b_k is the bias common to all output units (i, j) for each channel k.)  (1)
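A minimal numpy sketch of one output channel of this convolution, assuming indices starting at 1 in equation (1). Only the 'valid' output region is computed here for simplicity; the H × W output size in the text implies padding, which is omitted.

```python
import numpy as np

def conv_channel(y, h, b):
    """One output channel of equation (1): correlate an L x L filter h
    with each of the N input channels of y, sum across channels, and
    add the per-channel bias b."""
    N, H, W = y.shape          # N channels of H x W inputs
    L = h.shape[1]             # filter size
    out = np.zeros((H - L + 1, W - L + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            # sum over p, q (filter window) and across all N channels
            out[i, j] = np.sum(y[:, i:i + L, j:j + L] * h) + b
    return out
```

With M such filters, stacking the M outputs gives the H × W × M result described below.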

By preparing M types of such filters and calculating each independently, the output of size H × W × M, x_{ij}^{k} with (i, j, k) ∈ [1, H] × [1, W] × [1, M], is obtained. The input image was an RGB image; therefore, the first input layer had N = 3, H = 90, and W = 78, giving an input size of 90 × 78 × 3. The ReLU function in equation (2) was used as the activation function after the outputs of the convolutional and fully connected layers. In this function, if x is greater than or equal to 0, x is the output; if x is less than 0, the output is 0.

  
f(x) = \max(x, 0)  (2)

Next, the pooling layer discards unnecessary information while retaining the features extracted by the convolutional layer and the information necessary for discrimination. Each unit (i, j) in the pooling layer aggregates the outputs y_{pq} of the units (p, q) ∈ P_{ij} in a small region P_{ij} of the input layer (the output layer of the previous convolutional layer) to produce a single output. We used max pooling, as shown in equation (3).

  
\tilde{y}_{ij}^{k} = \max_{(p, q) \in P_{ij}} y_{pq}^{k}  (3)
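Equation (3), with non-overlapping s × s regions P_ij as used here (2 × 2 in this study), can be sketched as:

```python
import numpy as np

def max_pool(y, s=2):
    # Max pooling (equation (3)): each output unit takes the maximum over
    # a non-overlapping s x s region P_ij of the input channel y.
    H, W = y.shape
    out = np.zeros((H // s, W // s))
    for i in range(H // s):
        for j in range(W // s):
            out[i, j] = y[i * s:(i + 1) * s, j * s:(j + 1) * s].max()
    return out
```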

Finally, the fully connected layer combined the information extracted by the convolutional and pooling layers. The jth unit in any fully connected layer receives the value calculated using equation (4).

  
x_{j} = b_{j} + \sum_{i=1}^{m} y_{i} w_{ij}  (4)

Here, y_i (i = 1, …, m) is the output of the input layer, w_{ij} is the weight, and b_j is the bias. In a CNN that performs classification, as in this study, the final output layer is assigned the same number of units as the 12 classification groups of leaf ages 5–16 (Fig. 1). The output is calculated using the softmax function expressed by equations (5) and (6).

  
p_{j} = \dfrac{e^{x_{j}}}{\sum_{k=1}^{n} e^{x_{k}}}  (5)

  
\sum_{j=1}^{n} p_{j} = 1  (6)

At the time of discrimination, the index j* = \arg\max_{j} p_{j} of the unit with the maximum p_j is output as the predicted group.
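A minimal numpy sketch of the softmax output (equations (5) and (6)) and the arg max prediction; the example scores are hypothetical final-layer values, not outputs of the trained network.

```python
import numpy as np

def softmax(x):
    # Equations (5) and (6): exponentiate and normalize so the outputs sum to 1
    e = np.exp(x - x.max())     # subtract the max for numerical stability
    return e / e.sum()

scores = np.array([1.0, 3.0, 0.5])   # hypothetical final-layer outputs
p = softmax(scores)
pred = int(np.argmax(p))             # arg max_j p_j: predicted group index
```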

In a CNN, learning is performed using training images and assessed using evaluation images. The learning algorithm used was Adam (Kingma, 2014). In this method, batch processing was performed by inputting and processing multiple images simultaneously to reduce the processing time per image; the number of images input at once is the batch size. One learning step runs from inputting a batch of images to completing the corresponding weight update, and the number of passes over the training data is the number of epochs. The computational experiments (learning) in this study were performed under the following conditions: number of groups, 12; number of training images, 167,360; number of evaluation images, 510; batch size, 128; number of epochs, 20; number of trials, 30; learning rate, 0.001.
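As a worked check of these settings, assuming one weight update per batch and one epoch per full pass over the training set (an interpretation of the terms above, not a detail given in the text):

```python
import math

# Training configuration reported in the text
n_train, batch_size, epochs = 167_360, 128, 20

# One weight update per batch; an epoch is one pass over the training set
batches_per_epoch = math.ceil(n_train / batch_size)
total_updates = batches_per_epoch * epochs
```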

We replaced the fully connected layers of a pretrained network (VGG16) (Jia et al., 2022). The numbers of units in the fully connected layers were 512, 256, and 12 (the number of groups), from the layer closest to the input (Fig. 2). The filter size of all convolutional layers was 3 × 3, and the size of the small region P_{ij} in all pooling layers was 2 × 2. Fine-tuning was then performed on the transfer-trained model (VGG16) (Jia et al., 2022). In addition, we created a CNN that automatically determines the number of neuron units in the fully connected layers and the batch size using the hyperparameter optimization framework “Optuna” (Akiba et al., 2019). Machine learning on the training images was performed using the created CNN, and the agreement between the leaf age estimated from the evaluation images and the actual measured value was assessed.
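The replaced fully connected head (512, 256, and 12 units) can be sketched in plain numpy; the weights below are random placeholders, not the trained VGG16 parameters, and in practice the 512-unit input would come from the flattened convolutional features.

```python
import numpy as np

rng = np.random.default_rng(0)

# Replaced fully connected head: 512 -> 256 -> 12 units (Fig. 2),
# with random placeholder weights standing in for trained parameters
W1, b1 = 0.01 * rng.normal(size=(512, 256)), np.zeros(256)
W2, b2 = 0.01 * rng.normal(size=(256, 12)), np.zeros(12)

def head(features):
    h = np.maximum(features @ W1 + b1, 0.0)   # ReLU, equation (2)
    logits = h @ W2 + b2
    e = np.exp(logits - logits.max())
    return e / e.sum()                        # softmax over the 12 leaf-age groups

p = head(rng.normal(size=512))
```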

Prediction of lettuce harvest date

We applied the leaf age estimation system established in the previous section to predict the harvest date on actual production farms and evaluated its accuracy. Six production farms located in Minami-Awaji City, the same area as the research field, were evaluated; all cultivated the variety ‘Elegant’. Fourteen to 21 days after planting, lettuce images were captured using a camera mounted on a drone at an altitude of 10 m. Images of eight lettuce plants were cut individually to 30 cm × 26 cm. The leaf age of each plant was estimated using the CNN, and the average leaf age of the eight plants was used. Forecast and normal temperature values were obtained from the mesh agrometeorological data of the National Agriculture and Food Research Organization (Ohno et al., 2016). The harvest date was predicted from the leaf age estimated from the aerial photographs, using forecast daily mean temperatures up to 25 days ahead and climatological standard normals from 26 days onward. The actual harvest date was determined by farm practice, based on the size and firmness of the lettuce head when pressed down by hand.

The estimated leaf age from the captured images was input into the leaf age-increasing model used in this study. The model predicted the harvest date as the day when the leaf age reached 40. The rationale for defining the harvest date as a leaf age of 40 was based on the results of a previous study by Ogura (2019). The leaf age estimation system was designed as follows.

A leaf age-increasing model for ‘Elegant’ was constructed from sampling data in 2015 and 2016 on true leaf age, number of heading leaves, and accumulated temperature (Nakano et al., 2021). Leaf age can be estimated using a linear regression equation with accumulated temperature as the explanatory variable (Okada and Sugahara, 2019). However, the relationship before and after heading differed (Fig. S2A). Therefore, we obtained two linear regression equations before (7) and after heading (8) for the leaf age-increasing model as follows:

  
(Before heading)  La = 0.02At + 4.33  (R² = 0.99)  (7)

  
(After heading)  La = 0.05At − 5.79  (R² = 0.94)  (La is leaf age and At is accumulated temperature)  (8)

At this point, to predict the harvest date using the leaf age-increasing model, it is necessary to know whether the prediction date is before or after heading. To determine the date, the leaf age obtained from the aerial image was compared to the leaf age at the start of heading, which was determined using the following procedure. Because a linear relationship was found between leaf age and number of heading leaves in the sampling data from the variety ‘Elegant’ (Fig. S2B), leaf age at the start of heading was estimated using a linear regression equation (9).

  
La = 1.05Nhl + 16.73  (R² = 0.98)  (La: leaf age; Nhl: number of heading leaves)  (9)

The leaf age (La) at the beginning of heading was 16.73, where Nhl = 0 in equation (9).
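A sketch of how the leaf age-increasing model could be stepped forward to a leaf age of 40. This assumes the plant is already past heading, so equation (8) applies throughout, and the flat 15°C daily forecast is purely illustrative; it is not the authors' exact implementation.

```python
def days_to_harvest(la_now, daily_temps):
    """Walk forecast daily mean temperatures forward until the modeled
    leaf age reaches 40 (the harvest criterion). Inverting equation (8),
    La = 0.05 At - 5.79, gives the accumulated temperature implied by
    the current leaf age."""
    at = (la_now + 5.79) / 0.05        # accumulated temperature now
    for day, t in enumerate(daily_temps, start=1):
        at += t
        if 0.05 * at - 5.79 >= 40.0:   # leaf age 40 = predicted harvest
            return day
    return None                        # not reached within the forecast window

# Illustrative: current leaf age 25, constant 15 C daily mean forecast
days = days_to_harvest(25.0, [15.0] * 60)
```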

The relationship between vegetation coverage and yield

The lettuce variety ‘Elegant’ was grown in two replicates in 2018 and 2019 in a research field at the Awaji Agricultural Technology Center as described earlier. In October of each year, 25-day-old seedlings were planted in the field in four plots: no-fertilizer (control) (N: 0 kg·10 a−1), half fertilizer (N: 12.6 kg·10 a−1), standard fertilizer (N: 25.2 kg·10 a−1), and double fertilizer (N: 50.4 kg·10 a−1) to obtain a range of lettuce growth and yield values. Each plot measured 42.9 m2, and slow-release fertilizer was applied as the base fertilizer.

Drone photography was conducted in November 2018, 27 days after planting, when the leaf age was 16.2 in the standard fertilizer plot, and in November 2019, 35 days after planting, when the leaf age was 25.1 and the lettuce was considered to be in the early heading stage; in both cases, the plants were photographed from an altitude of 15 m by a drone-mounted camera. JPEG images were processed using the SfM software Pix4Dmapper (Pix4D S.A., Switzerland), and orthomosaic images were created with the standard settings of “3D Maps”, the option template for Pix4Dmapper. Orthomosaic images were cut out for each test plot and quantified using open-source GIS software (QGIS, ver. 2.14.22); the vegetation coverage was calculated as the ratio of the lettuce-covered area to the ridge area in each test plot (Fig. 3). The green area in each image was extracted and measured as the vertically projected lettuce area per plot (vertical projection area).
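A minimal sketch of the coverage calculation; the green-dominant pixel rule here is an assumed stand-in for the GIS extraction actually used.

```python
import numpy as np

def vegetation_coverage(img, ridge_mask):
    """Coverage (%) = vertically projected lettuce area / ridge area.
    img: (H, W, 3) uint8 orthomosaic clip; ridge_mask: (H, W) bool mask
    of the ridge surface. The green-pixel criterion is an assumption."""
    r = img[..., 0].astype(int)
    g = img[..., 1].astype(int)
    b = img[..., 2].astype(int)
    green = (g > r) & (g > b)       # crude "green pixel" rule
    plant = green & ridge_mask      # lettuce pixels within the ridge
    return 100.0 * plant.sum() / ridge_mask.sum()
```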

Fig. 3

Creation of orthomosaic image from the lettuce field. Original photograph captured by the drone (A). The ridge area for each test plot is indicated by yellow lines (B). The ridge area was removed from the orthomosaic images and quantified using GIS software (QGIS) (C). The green area in the image is extracted and measured in the vertical direction of the projected lettuce area (D). The vegetation coverage (%) was determined by dividing the vertically projected lettuce area by the ridge surface area.

The plants were harvested when head tightness (calculated as head weight/head volume) reached 0.3 or more, as per standard conditions. In December, the yield of all plants was surveyed; the total fresh weight, head weight, head height, head diameter, number of outer leaves, and number of inner leaves (fresh leaves over 1 g) were measured for six plants per replicate, with three replicates in each of the four treatments. All statistical analyses were performed using Microsoft Excel (Microsoft Corporation, Tokyo, Japan) with the Excel Statistics add-in software ver. 7.08 (ESUMI, Tokyo, Japan).

Results

Leaf age estimation using a convolutional neural network

Table 1 presents the average of 30 trials of leaf age estimation from the evaluation images using the CNN, with 20 training epochs per trial. Three discrimination rates were computed: the discrimination rate on the training images, the discrimination rate on the evaluation images, and the discrimination rate on the evaluation images counting predictions within ± 1 leaf age as correct. The mean absolute counting error for leaf age was 0.68. The ± 1 leaf age tolerance was adopted because of unavoidable measurement error: leaves with a width of 0.5 cm or more were counted as one leaf, so an artificial error can occur when measuring leaf age (Fig. S1). When the training images themselves were used for evaluation, the leaf age estimation accuracy was 87.5%, whereas with evaluation images acquired by drone, separate from the training images, it dropped substantially to 41.8% (Table 2). When the acceptable error range was set to ± 1 leaf age, the accuracy increased to 91.4%, similar to the rate obtained with the training images.
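The discrimination rates and the mean absolute counting error can be computed as follows; the label lists here are hypothetical, not the study's data.

```python
def discrimination_rates(true_ages, pred_ages):
    # Exact-match rate, rate counting +/- 1 leaf age as correct,
    # and mean absolute counting error
    n = len(true_ages)
    exact = sum(t == p for t, p in zip(true_ages, pred_ages)) / n
    within1 = sum(abs(t - p) <= 1 for t, p in zip(true_ages, pred_ages)) / n
    mae = sum(abs(t - p) for t, p in zip(true_ages, pred_ages)) / n
    return exact, within1, mae

# Hypothetical measured vs. predicted leaf ages
exact, within1, mae = discrimination_rates([5, 6, 7, 8], [5, 7, 9, 8])
```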

Table 1

Discriminant results of 30 repeated trials of lettuce leaf age estimation from images by CNN.

Table 2

Comparison of correct answer percentage in estimation of lettuce leaf age from images by CNN.

Prediction of lettuce harvest date

Leaf age was estimated using the CNN, as described in the previous section, and the harvest date was predicted using the leaf age-increasing model (Fig. S2B). The model fits were R2 = 0.99 (before heading) and R2 = 0.94 (after heading). The total number of leaves at harvest, obtained by adding the numbers of outer and head leaves, did not match the leaf age of 40 defined as the optimum harvest date (Table 3). This was because approximately 10 outer leaves had fallen off by the harvest date. The root mean square error (RMSE) between the predicted and actual harvest dates was 2.35 days (Fig. 4). This value was smaller than the acceptable target of 3.5 days, which is half the general sales timeframe of one week. This suggests that the leaf age of lettuce can be estimated from drone photography by applying a CNN and that the leaf age-increasing model can be used to predict the harvest date with high accuracy from the mid-growth stage.

Table 3

The effect of different amounts of fertilizer on lettuce growth.

Fig. 4

Relationship between the actual and predicted harvest dates using the leaf age-increasing model of lettuce. Leaf age was estimated from images using a CNN. The slope of the regression line was set to 1.0. RMSE, root mean square error.

Relationship between vegetation coverage and yield

Figure 5 shows the correlation of yield, total above-ground weight, and head diameter with vegetation coverage (A, C, E) or vertical projection area (B, D, F) before the heading period (leaf age 16.2) in the 2018 trial, while Figure 6 shows the same correlation at the beginning of the heading period (leaf age 25.1) in the 2019 trial. All test plots, except those with missing plants, were disease-free, and the percentage of marketable heads was 100%. In both the before and early heading periods, there were significant differences between the no-fertilizer treatment and the other treatments for all growth parameters. The half-fertilizer treatment showed mostly smaller values than those of the standard and double fertilizer treatments, particularly for total fresh weight and head height. These results suggested that lettuce grew in a dose-dependent manner in response to the amount of fertilizer applied (Table 3).

Fig. 5

Correlation between lettuce yield (A, B), total above-ground weight (C, D), head diameter (E, F), and vegetation coverage (A, C, E) or vertical projection area (B, D, F) from orthomosaic images during the before heading period (leaf age 16.2). The vegetation coverage and vertical projection area of the plants were measured using aerial images captured in 2018. An average of six plants in three replicates from the four treatments was plotted (12 plots in total). The details of the fertilizer application treatments are described in the Legend of Table 3.

Fig. 6

Correlation between lettuce yield (A, B), total above-ground weight (C, D), head diameter (E, F), and vegetation coverage (A, C, E) or vertical projection area (B, D, F) from orthomosaic images during the early heading period (leaf age 25.1). The vegetation coverage and vertical projection area of the plants were measured using aerial images captured in 2019. An average of six plants in three replicates from the four treatments was plotted (12 plots in total). Details of the fertilizer application treatments are described in the Legend of Table 3.

At 27 days after planting, before heading, the vegetation coverage in each test plot varied from 35.9% to 64.1% (Fig. 5) and increased with the amount of nitrogen fertilizer applied. A high coefficient of determination (R2 = 0.91) was observed for the correlation between yield and vegetation coverage before the heading period (Fig. 5A). Conversely, in the early heading period, R2 decreased to 0.75 (Fig. 6A). These results indicate that yield can be estimated more precisely before the heading period than at the beginning of the heading period using vegetation coverage as an explanatory variable. When the explanatory variable was the vertical projection area, the coefficient of determination for yield was 0.63 before the heading period (Fig. 5B) and 0.61 during the early heading period (Fig. 6B), lower than when vegetation coverage was used. A positive correlation was observed between vegetation coverage before heading and both total above-ground weight (Fig. 5C) and head diameter at harvest (Fig. 5E). Similarly, the vertical projection area before heading was positively correlated with total above-ground weight (Fig. 5D) and head diameter at harvest (Fig. 5F). During the before-heading period, the coefficients of determination were 0.90 and 0.93 for vegetation coverage vs. total above-ground weight and average head diameter, respectively, whereas they were 0.60 and 0.68 for the vertical projection area (Fig. 5C, D, E, and F). In the early heading period, the coefficients of determination for vegetation coverage vs. total above-ground weight and mean head diameter were 0.77 and 0.82, respectively, whereas those for the vertical projection area were 0.61 and 0.67, respectively (Fig. 6C, D, E, and F).
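The coefficients of determination reported in this section correspond to simple linear regressions, which can be computed as shown below; the data here are illustrative, not the study's measurements.

```python
import numpy as np

def r_squared(x, y):
    # Coefficient of determination for a simple linear regression of y on x
    slope, intercept = np.polyfit(x, y, 1)
    residuals = y - (slope * x + intercept)
    ss_res = np.sum(residuals ** 2)
    ss_tot = np.sum((y - np.mean(y)) ** 2)
    return 1.0 - ss_res / ss_tot

# Illustrative: a perfectly linear coverage-yield relationship gives R^2 = 1
x = np.array([1.0, 2.0, 3.0, 4.0])
r2 = r_squared(x, 2.0 * x + 1.0)
```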

Discussion

Leaf age estimation using CNN

Several studies using deep learning and CNNs in agriculture have addressed crop classification, disease detection, and other applications (Kamilaris and Prenafeta-Boldú, 2018). However, few studies have estimated growth stages or performed growth prediction. Regarding the use of CNNs for lettuce, detection of tip-burn incidence in an artificial plant factory has been reported (Shimamura, 2019), but not continuous growth prediction in an open field, where growth is greatly affected by unstable weather conditions. This study successfully estimated the leaf age of field-grown lettuce, achieving the desired discrimination rate of more than 90% (Table 2). In the leaf age-increasing model created in this experiment, the maximum error, rounded to the nearest day, was ± 2 days (Fig. 4). This was within the acceptable range of ± 3.5 days determined from interviews with marketing personnel and is therefore considered sufficient for practical use. Another study using a CNN reported a mean absolute counting error of 1.62 for rosette-leaf plants other than lettuce (Aich and Stavness, 2017). In this study, the value was 0.68, suggesting that the accuracy of leaf age estimation improved as a result of using machine learning specialized for lettuce (Table 1).

Prediction of lettuce harvest date

For greenhouse lettuce, artificial neural networks (ANN) have been used to predict the harvest date with a growth model based on biological weight (Lin, 2001). In open-field cultivation, however, development is slow and growth is difficult to control (Santos et al., 2009), which explains the lack of reports on harvest date prediction. Nevertheless, a regression equation estimating leaf number from accumulated temperature was highly accurate for onions (Usuki et al., 2019). The lettuce leaf age-increasing model used in the present study to predict the harvest date showed R2 values of more than 0.9 both before and after heading, suggesting that the model is very robust (Fig. S2). When lettuce growth was compared by varying the amount of fertilizer applied, the total number of leaves in the no-fertilizer treatment (None) was lower than in the fertilizer treatments (Standard, Double, and Half) (Table 3). This was presumably because of the lower number of inner leaves and higher number of outer leaves, owing to delayed or slowed heading in the non-fertilized plot; defoliation was considered another factor, because approximately 10 of the outer leaves that developed early in the growth period had fallen off by harvest. Even so, compared with the fertilizer treatments (Standard, Double, and Half), the number of leaves remained similar although the above-ground weight was lower (Table 3). These results indicate that leaf age does not change significantly unless extreme growth failure occurs, and that leaf age can therefore be used to accurately predict the harvest date.

A method for predicting lettuce harvest dates based on growth models and weather data has been reported previously (Okada and Sugahara, 2019). In the present study, the accuracy of harvest date prediction from the mid-growth stage was improved by closely linking the leaf age-increasing model with the growth-sensing information obtained by leaf age estimation. For greenhouse lettuce, the prediction accuracy of a CNN ranged from 0.89 to 0.91 for R2 and 19.9–26.0% for normalized root mean square error (NRMSE) (Zhang et al., 2020). The accuracy of the method used in the present study was higher than these results: R2 = 0.99 (before heading), R2 = 0.94 (after heading), and RMSE = 2.35 days (Figs. 4 and S2). Drone photography during the middle of the growing season is therefore effective for estimating harvest dates for lettuce grown in open fields.

Relationship between vegetation coverage and yield

In the 2018 fertilization trial, the growth parameters at lettuce harvest differed significantly between all treatment pairs except between the standard and double-dose fertilizer treatments, indicating that lettuce yield varied greatly. The correlation coefficient and coefficient of determination between vegetation coverage and yield just before heading (leaf age 16.2) were 0.95 and 0.91, respectively (Fig. 5A). Both values exceeded 0.9, indicating that yield could be estimated with high accuracy from vegetation coverage immediately before heading. Yang et al. (2008) estimated cabbage yield using the vegetation area measured from aerial photographs, and Lin (2001) developed a growth model to estimate the fresh weight of lettuce from vegetation coverage measured with a fixed camera in a greenhouse. These studies support our strategy of estimating yield from vegetation coverage. The coefficient of determination (R2) for lettuce yield estimation by another method ranged from 0.75 to 0.92 (Kizil et al., 2012), whereas the present study achieved 0.91 (Fig. 5A), at the upper end of that range. The difference may be because Kizil et al. (2012) used spectral vegetation indices, such as the normalized difference vegetation index, the simple ratio, and chlorophyll indices, rather than the vegetation coverage used in this study.

In general, yield depends strongly on the dry matter production rate, the leaf area per unit ground area (leaf area index, LAI), and the duration of the growing period. This information cannot be obtained directly from aerial drone images; it must instead be estimated from overhead measures such as vegetation coverage or vertical projection area. Indeed, van Holsteijn (1980) reported a high correlation between vegetation coverage and the dry matter production rate of lettuce. In this study, regression analysis of vegetation coverage against yield likewise showed a high correlation before the heading period, with a coefficient of determination of 0.91 (Fig. 5A), whereas the coefficient of determination between the vertical projection area and yield was low, at 0.63 (Fig. 5B). One reason may be that the plot area was delimited more precisely when determining vegetation coverage than when measuring the vertical projection area. When estimating yield from vegetation coverage, the coefficient of determination depended on the leaf age at which coverage was measured: it was higher immediately before heading (leaf age 16.2, 2018) than at the beginning of heading (leaf age 25.1, 2019). This was because vegetation coverage plateaued after lettuce heading began (Fig. 6A). The optimum growth stage for aerial yield estimation was therefore just before heading, at approximately 16 leaves (Fig. 5).
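The regression step underlying these comparisons reduces to computing the Pearson correlation between plot-level vegetation coverage and yield, and squaring it for the coefficient of determination of a simple linear fit. A minimal sketch follows; the coverage and yield figures are illustrative values, not the trial data.

```python
# Sketch of the coverage-vs-yield regression: Pearson correlation r and
# coefficient of determination r^2. Plot data below are illustrative only.
import math

def pearson_r(x, y):
    """Pearson product-moment correlation coefficient."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

coverage = [0.22, 0.35, 0.41, 0.48, 0.55]   # vegetation coverage before heading
yield_kg = [1.1, 1.6, 1.9, 2.3, 2.6]        # top weight per plot (illustrative)
r = pearson_r(coverage, yield_kg)
r2 = r ** 2   # coefficient of determination of the simple linear fit
```

For a simple (one-predictor) linear regression, this r-squared equals the R2 reported for the fitted line, so no explicit fit is needed to obtain it.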

These results suggest that vegetation coverage before the start of heading can be used as an explanatory variable for yield estimation. In the future, a practical yield estimation system can be developed by applying a leaf age-increasing model based on vegetation coverage.

Acknowledgements

We thank Takafumi Kawai and Hitomi Tokuhisa of the Hyogo Prefectural Technology Center for Agriculture, Forestry, and Fisheries, Awaji Agricultural Technology Institute for growing the plants and supporting our experiments, Dr. Takeshi Kanto of the same affiliation for fruitful discussions and English language editing, and Shunsuke Iitsuka and Rintaro Ito of the Graduate School of System Informatics, Kobe University, for help with programming and image analysis.

Literature Cited
 
© 2025 The Japanese Society for Horticultural Science (JSHS)

This is an open-access article distributed under the terms of the Creative Commons Attribution Non-Commercial (BY-NC) License.
https://creativecommons.org/licenses/by-nc/4.0/