The Horticulture Journal
Online ISSN : 2189-0110
Print ISSN : 2189-0102
ISSN-L : 2189-0102
SPECIAL ISSUE: ORIGINAL ARTICLES
Flower Longevity Quantification in Greenhouses Using Deep Learning Models for Computer Vision
Motoyuki Ishimori
Supplementary material

2025 Volume 94 Issue 1 Pages 24-32

Abstract

Flower longevity is essential to maintain the quality and value of ornamental crops. The distribution of high-quality ornamental flowers has been established by developing various preservatives, especially for cut flowers. However, genetic improvement of flower longevity has not progressed, except in some ornamental crops. This is partly due to a lack of analytical methods to evaluate flower longevity in large-scale trials. This study showed that high-throughput image analysis using deep learning models for computer vision, a field that has progressed rapidly, can quantify the flower longevity of multiple lines in greenhouses. Seven lines of Portulaca umbraticola, a one-day-flowering crop for summer gardening, were photographed at regular intervals from the top of the greenhouse. Using trained object detection models, including the YOLO series, multiple flowers and pots were accurately detected in each image. The opening degree of each flower was calculated by trained image classification models, such as DenseNet and VGG. Although the images generally contained many flowers, multiple object tracking models, such as ByteTrack and StrongSORT, enabled us to track each flower and estimate its longevity. Additionally, video classification sorted the tracked flowers into the seven P. umbraticola lines. The flower longevity of each line could be quantified; it varied from day to day and also within the same line. Therefore, applying the proposed method using deep learning models could dramatically accelerate research and breeding of flowers with increased longevity. The applicability of this method to various ornamental crops and traits is also discussed.

Introduction

Although the precise breeding goal varies among ornamental flowers, ornamental traits are always essential. Major traits, including flower color and double flowers, have been genetically improved; such ornamental traits are often qualitatively inherited. Evaluating these traits only once during the flowering period is almost always sufficient.

Alternatively, evaluating flower longevity, which is the duration of ornamental value, requires measurement over time. Studies have long demonstrated that environmental effects like temperature can cause quantitative variations in flower longevity, especially in cut flowers (Çelikel and Reid, 2002; Maxie et al., 1973). These difficulties limit flower longevity quantification experiments that require many genotypes, except for morning glory (Ipomoea nil), a one-day flower studied in a controlled chamber (Shinozaki et al., 2011).

Currently, improving flower longevity depends heavily on the use of preservatives. However, breeding to enhance flower longevity has also progressed in several ornamental flowers. Breeding for flower longevity in carnation is the best example. Through crossbreeding, carnations that have maintained their ornamental value for more than 30 days have been produced (Onozaki et al., 2018). Recently, dahlia longevity has also been improved by crossbreeding, even though it was originally an extremely short-lived flower (Onozaki and Fujimoto, 2023). Although improving flower longevity was possible through the artificial modification of ethylene biosynthesis and signaling (Shibuya, 2018), this method is not very popular due to the limited utilization of transgenic crops.

Video recording (or interval shooting) is often utilized to evaluate flower longevity (Onozaki et al., 2004). Unfortunately, classical digital image analysis has not been practical for images from greenhouses or fields because the uniformity of shooting environments is crucial. High-throughput image analysis methods that can be applied to large breeding trials are needed to improve the efficiency of breeding for flower longevity in various ornamental flowers.

In this study, an efficient procedure to quantify flower longevity from interval images taken in the greenhouse is demonstrated. The main purpose was to establish a procedure for analyzing the flower longevity of many potted plants, which is challenging to measure in greenhouses. Herein, sufficiently analyzing longevity by combining existing deep learning models is shown to be feasible without developing a new model. The focus was on a procedure that can be applied to flower longevity analysis of many potted plants rather than developing a specialized model that can only be optimized for flowers of a specific plant species.

In several stages of the image analysis, deep learning models were applied, including object detection (OD) and image classification (IC) for computer vision, which have dramatically improved in recent years (Chai et al., 2021). OD models are used to identify the types of various objects in images. For example, applying the YOLO v3 model to apple fruit harvesting machines has already been investigated (Kuznetsova et al., 2020). IC models classify the image itself (e.g., whether it is an apple tree image or an orange fruit image) and could classify 102 species of flower images with high accuracy (Wu et al., 2018). Predicting continuous values annotated on images (e.g., weight and length of fruit in an image) is also possible by modifying the IC model into a regression model. Furthermore, multiple object tracking (MOT) models were used to identify the same flower from consecutive images. MOT is commonly used with OD models, and the latest MOT models estimated the number and location of apple fruits without duplication from videos taken from a machine running in the field (Gené-Mola et al., 2023).

Portulaca umbraticola, a popular summer potted flower, was used as the experimental material to test the analysis procedure for flower longevity proposed in this study. Although the flowering characteristics of P. umbraticola in artificially controlled environments have been studied (Maguvu et al., 2016, 2018), the focus in this study was on its flower longevity in a greenhouse setting under natural daylight and with uncontrolled temperature.

Materials and Methods

Plant materials

Seven P. umbraticola lines (#1, #8, #10, #34, #75, #78, and #79) maintained in our laboratory by cutting propagation were used for the analysis. These lines were grown as potted plants (in 15 cm plastic pots) in a vinyl greenhouse at the University of Tokyo (Bunkyo-ku, Tokyo, Japan). Small granular soil mixed with peat-based soil (Metro-Mix 360; Sun Gro, Agawam, MA, USA) was used for cultivation. The experiment was conducted under natural daylight, and the temperature was controlled only by automatic ventilation through a skylight. Plants were irrigated as needed during the experimental period.

Interval shooting

Interval shooting of experimental plants was conducted continuously throughout the experiment. The camera function of Android OS tablet devices (Tab M8; Lenovo, China) was used for photography. Since no camera application suitable for long-term interval shooting was available for Android OS, an application, “InterValCaM”, was programmed and developed for this study. Android Studio 4.2 (powered by the IntelliJ® Platform) was used to develop InterValCaM, which runs on Android 10 (kernel 4.9). InterValCaM can be provided to academic researchers by the author upon request. It allows interval shooting at specified times. Images taken at five-minute intervals were used to estimate flower longevity, although more images were acquired at shorter intervals to select the most suitable one. The interval shooting period was from June 5 to 20, 2021; however, data for June 9 are missing. During the experiment, interval shooting was continued under sunlight without supplementary light or shading. Only images from 5:00 to 19:00 (840 min), when flowers could be identified under natural day length, were used. Therefore, flower longevity was estimated daily based on 168 images taken at five-minute intervals. The tablet device was wired and constantly charged from a power source inside the greenhouse. The single camera of the tablet device captured 26 pots within an image. The pots were placed roughly 10 cm apart from each other. The 26 pots included plants from lines other than the seven listed above; these were excluded from the analysis due to low flower counts. Finally, 19 pots were analyzed (an average of > 2 pots per line). One pot contained 2–3 plants of the same line, but individual plants could not be distinguished.

The camera was adjusted to be visually horizontal to a rectangular plant table. Since the camera remained fixed, it photographed the target plant body continuously in the same position and orientation. The distance from the camera to the plant table directly below was ~170 cm. The height of the pot was ~13 cm, and since P. umbraticola is a creeping (short stature) plant, the distance from the camera to flowers was estimated to be roughly ~150 cm. The original image size was 2,448 × 3,264 pixels. At the position of the analyzed objects (i.e., flowers and pots) in images, this size roughly corresponded to 0.5 mm/pixel on average.

Procedure for flower longevity quantification

Detecting flowers and estimating their longevity from interval shooting images comprises four steps (Fig. 1): (1) flowers and pots are detected in each image (OD); (2) the opening degree of each flower is estimated (IC); in P. umbraticola, the flower opening degree can be used as an indicator of flower longevity, which is classified into six stages from bud before anthesis (stage 1) to fully open flower (stage 6) (Maguvu et al., 2018); (3) each flower is tracked in a series of interval images (i.e., video) (MOT); and (4) the classification of each flower line is performed (video classification, VC). Known models based on deep learning for computer vision were used in the above steps, except for some models in the third step (Table S1). These image analyses were implemented with the Python3 language.
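The four-step procedure can be sketched as a single processing loop over the interval images. The sketch below is a hypothetical skeleton: the four callables (`detect`, `estimate_degree`, `update_tracks`, `classify_line`) stand in for the trained OD, IC, MOT, and VC models described in the text, and all names are illustrative.

```python
from dataclasses import dataclass, field

@dataclass
class Track:
    """One tracked flower: its estimated opening degree per frame and line."""
    flower_id: int
    degrees: list = field(default_factory=list)
    line: str = ""

def quantify_longevity(frames, detect, estimate_degree, update_tracks, classify_line):
    """Hypothetical four-step pipeline: OD (step 1), IC (step 2),
    MOT (step 3), and VC (step 4) are passed in as callables."""
    tracks = {}
    for frame in frames:
        boxes = detect(frame)                                # step 1: detect flowers/pots
        for flower_id, box in update_tracks(frame, boxes):   # step 3: track identities
            track = tracks.setdefault(flower_id, Track(flower_id))
            track.degrees.append(estimate_degree(frame, box))  # step 2: opening degree
    for track in tracks.values():
        track.line = classify_line(track)                    # step 4: line classification
    return tracks
```

In this layout, the per-flower opening-degree series accumulated in each `Track` is exactly the input needed for the longevity calculation described in the Materials and Methods.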

Fig. 1

Procedure to quantify P. umbraticola flower longevity in the greenhouse using deep learning models for computer vision. In this study, different deep learning models were used in four steps (flower detection from images, opening degree estimation, tracking, and classification). In the first step, OD models (e.g., YOLO) trained for P. umbraticola flowers were used. Next, IC models modified to estimate the opening degree of flowers were applied to each detected flower image. In the third step, moving and changing flowers in a series of interval images were tracked and identified from other flowers. Finally, the VC model was used to classify flowers in the correct line.

Training flower detection models

The interval shooting images obtained in the experiment were used to train deep learning models. Images of flowers at various degrees of opening were included for training to detect flowers from the bud stage to wilting after full flowering. In addition to P. umbraticola, images of lisianthus (Eustoma grandiflorum) and petunia (Petunia × hybrida) flowers acquired at similar shooting intervals were included in the training images for future research (data not shown). Two annotation tools for OD, LabelImg and imglab, were used depending on the timing of the annotation tasks. LabelImg was published by Tzutalin in 2015 and is now maintained as a part of the Label Studio community (https://github.com/HumanSignal/labelImg?tab=readme-ov-file). imglab is provided as a component of the dlib C++ library (https://github.com/davisking/dlib/tree/master/tools/imglab). Images for the final flower longevity estimation were taken at five-minute intervals, but training data (including test images) were selected from images taken at 11-minute intervals; this longer interval was chosen because P. umbraticola flower opening progresses rapidly, within 10–20 min. Additionally, some images were divided into 12 equal parts before annotation, both to reduce the annotation cost caused by the large number of flowers per image and to prevent overfitting to the conditions under which the original images were taken. In total, 659 images were annotated for training the OD models. The average number of flowers per image was 15.3 (minimum, 0; maximum, 223; standard deviation, 21.1).

Since data augmentation is generally performed before training deep learning models, various transformations were applied to these annotated images. These included higher and lower contrasts, gamma correction, moving average and Gaussian filters, and pepper- and salt-like noise. Furthermore, parts of the original and transformed images were randomly cut and pasted onto backgrounds with random colors to imitate a greenhouse background, whose colors change with sunlight. These transformations did not involve any aspect ratio conversion, and the flowers remained recognizable to a human observer. Since some transformation methods could not be successfully applied to some images, an equal number of transformed images could not be generated for every annotated image, resulting in 29,658 final images (~45 per original image on average). These images were divided into train:validation:test subsets at a 4:1:1 ratio (19,804, 4,950, and 4,905 images, respectively); the split was made at the level of the 659 original annotated images so that no two subsets contained images derived from the same original, which would otherwise cause overfitting. For training, the training subset was further divided into small groups (minibatches) to improve efficiency through parallelization and to avoid local optima. The minibatch size (i.e., the number of images per group) depends on the memory consumption of each model; here, it was set to a maximum of 16 images. One pass through all minibatches is called an epoch; typically, several to dozens of epochs are run before learning is complete. The number of epochs was adjusted for each model according to the learning rate. Multiple models were compared using their statistics on the test set.
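Two of the listed augmentations (gamma correction and salt-and-pepper noise) can be sketched as follows, assuming images are held as H × W × 3 uint8 NumPy arrays; the exact parameters used in the study are not reported, so the values here are illustrative.

```python
import numpy as np

def gamma_correct(img, gamma):
    """Gamma-correct a uint8 image via a 256-entry lookup table
    (gamma < 1 brightens, gamma > 1 darkens)."""
    lut = np.round(255.0 * (np.arange(256) / 255.0) ** gamma).astype(np.uint8)
    return lut[img]

def salt_and_pepper(img, amount, rng):
    """Set a random fraction (`amount`) of pixels in an H x W x 3 uint8
    image to pure black or pure white."""
    out = img.copy()
    mask = rng.random(out.shape[:2]) < amount
    out[mask] = rng.choice(np.array([0, 255], dtype=np.uint8),
                           size=int(mask.sum()))[:, None]
    return out
```

Filters such as moving average and Gaussian blur, and the cut-and-paste background replacement described above, would be applied in the same array-in, array-out style.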

Metrics from a single test set are insufficient to judge a model’s general usefulness; therefore, the performance of YOLO v8 was repeatedly examined using cross-validation (CV). While maintaining the train:validation:test ratio at 4:1:1, a sixfold CV was used so that every original image was included in the test set exactly once. The number of epochs in each fold depended on the learning rate, so the folds were not necessarily equal in this respect. The CV was replicated five times; for each replicate, the images were randomly re-divided into the six train/validation/test subsets. The details of the OD model implementation are described in Table S1.
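A sixfold split keyed to the original annotated images (so that augmented copies of one image never straddle subsets, and each image is tested exactly once) can be sketched as below; this is a generic grouped split, not the study's actual code, and the function name is illustrative.

```python
import random

def sixfold_splits(image_ids, seed=0):
    """Yield six (train, validation, test) splits of original-image IDs
    with a 4:1:1 ratio. Every ID appears in the test subset exactly once
    across the six folds; augmented copies inherit the subset of their
    original image, preventing leakage between subsets."""
    ids = list(image_ids)
    random.Random(seed).shuffle(ids)
    folds = [ids[i::6] for i in range(6)]
    for k in range(6):
        test = folds[k]
        validation = folds[(k + 1) % 6]
        train = [i for j in range(6)
                 if j not in (k, (k + 1) % 6)
                 for i in folds[j]]
        yield train, validation, test
```

Repeating this with five different seeds reproduces the five-replicate design described in the text.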

Training models for estimating flower opening degree

Labelcls, a graphical user interface tool that supports IC annotation (https://github.com/pei223/labelcls), was used to annotate 8,353 original images of P. umbraticola flowers. Each image included one P. umbraticola flower. The criterion for flower opening degree in P. umbraticola was based on the six stages suggested by Maguvu et al. (2018). Figure S4 presents examples in addition to the top-right image in Figure 1. The original images were divided into train, validation, and test subsets of 3,962, 1,213, and 3,178 images, respectively. The train and validation images were transformed for data augmentation using the same methods as for the OD images (e.g., contrast adjustment and filter treatment). After data augmentation, the training and validation subsets contained 35,658 and 10,917 images, respectively. The minibatch size was 12 and the number of epochs was fixed at 10 for all IC models. The test set of 3,178 original images without data augmentation was used for model comparison without CV.

The IC models used in this study have weights pretrained for ImageNet-1K classification. The output layer of each model was modified to a single value so that these classification models could predict the flower opening degree as a continuous value. The learning rate was adjusted per layer for fine-tuning. For all IC models, momentum SGD was used as the optimizer and mean squared error was used as the loss function.

Other deep learning models

A video classification model was also trained to classify tracked flowers into the seven P. umbraticola lines. PyTorchVideo was used for the implementation, with SlowFast_R50 pretrained on Kinetics-400 as the base model. Two hundred and eighty-one videos, each tracking a single flower, were manually annotated with the line from which the flower was derived. The annotated videos also included some plant lines other than the seven used in this study. Random parts of the same videos were reused for training to reduce the imbalance in the number of classes (here, plant lines). This strategy resulted in 320 videos for training and 160 videos for validation. Data augmentation was then performed on each video: the videos were rotated to fixed angles in addition to undergoing some of the conversion methods used for the OD images. After data augmentation, the final numbers of videos were 8,640 for training and 4,320 for validation. The training generally followed the manual provided by PyTorchVideo, with a minibatch size of four. Additionally, random treatments, such as size transformations and cropping, were applied during training to mitigate overfitting. The judgment of the trained VC model was consulted when it was difficult to determine from which of the adjacent plant lines a flower originated.

All MOT models used in this study were implemented with the BoxMOT package (https://pypi.org/project/boxmot/), and the original MOT models were used without additional training. Since preparing multiple annotated videos for MOT performance evaluation is challenging, a single annotated video was created from a portion of the June 5 image data (84 frames, 7 h). The video contained 58 unique flowers and was used to evaluate MOT model performance.

Flower longevity was calculated as the cumulative time during which the opening degree of each traceable flower exceeded four. The final classification of flowers into the seven lines was determined by considering both the judgment of the VC model and the location of the plant pots. P. umbraticola is a creeping, elongating plant with numerous branches and multiple flowers at the tips. Each plant can produce many flowers per day, and most flowers are located on or near its own pot; however, some flowers attached to the tips of elongated branches bloom closer to adjacent pots. The OD model provides the locations of pots and flowers and classifies each flower into the line corresponding to the nearest pot (Classification A). Separately, the video of each tracked flower is classified into the seven lines by the VC model (Classification B). If Classifications A and B do not match, Classification B is adopted if it corresponds to a neighboring line; otherwise, the flower is excluded from the analysis.
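The longevity rule and the reconciliation of Classifications A and B can be written as two small functions; the function names and the representation of neighboring lines are illustrative, not the study's actual code.

```python
def flower_longevity_h(degrees, interval_min=5, threshold=4):
    """Cumulative open time in hours: count the frames whose estimated
    opening degree exceeds the threshold and multiply by the shooting
    interval (five minutes in this study)."""
    return sum(1 for d in degrees if d > threshold) * interval_min / 60

def assign_line(line_a, line_b, neighbors):
    """Reconcile Classification A (line of the nearest pot) with
    Classification B (VC model): keep A when they agree, adopt B when B is
    a neighboring line, otherwise exclude the flower (return None)."""
    if line_b == line_a:
        return line_a
    if line_b in neighbors.get(line_a, ()):
        return line_b
    return None
```

For example, a flower tracked as open (degree > 4) in 24 of the 168 daily frames would be scored as 2 h of longevity.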

Statistical analysis

The estimated flower longevity was validated by ANOVA, considering the effects of line, day, and their interaction as explanatory variables. ANOVA was performed using the R language (version 4.1). The function HSD.test included in the agricolae package was used for the multiple comparison of lines by Tukey’s test (de Mendiburu, 2023).

The metrics for OD models, Precision, Recall, and Average Precision (AP), were calculated using Object Detection-Metrics, a Python package (Padilla et al., 2021). The metrics for MOT models, MOT Accuracy (MOTA), MOT Precision (MOTP), and Identification F1 Score (IDF1), were calculated using the py-motmetrics of a Python package (https://github.com/cheind/py-motmetrics).

Results

Detection of flowers and pots from single images

The trained OD models were used to detect flowers, including P. umbraticola, and plant pots in each image (Fig. S1). The differences in flower detection accuracy among the OD models were examined using three standard metrics (Precision, Recall, and AP) with thresholds on Intersection over Union (IoU), a measure of the overlap between a detection and the ground truth (Fig. S2). Precision was particularly high for the YOLO models, exceeding 0.9 for all models except Faster-RCNN when IoU ≥ 0.5. When IoU ≥ 0.75, YOLO v8 had the highest Precision (0.89) (Fig. 2). Recall was extremely low for SSD, but YOLO v6 performed particularly well. AP was high for YOLO v6 and v8, with YOLO v8 performing best on the single test set at IoU ≥ 0.75.
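IoU, the matching criterion behind these metrics, is the ratio of the overlap area to the union area of a predicted and a ground-truth box. A minimal implementation for axis-aligned (x1, y1, x2, y2) boxes:

```python
def iou(a, b):
    """Intersection over Union of two axis-aligned boxes (x1, y1, x2, y2)."""
    ix = max(0, min(a[2], b[2]) - max(a[0], b[0]))  # intersection width
    iy = max(0, min(a[3], b[3]) - max(a[1], b[1]))  # intersection height
    inter = ix * iy
    union = ((a[2] - a[0]) * (a[3] - a[1])
             + (b[2] - b[0]) * (b[3] - b[1]) - inter)
    return inter / union if union else 0.0
```

A detection counts as a true positive at, say, IoU ≥ 0.5 only if its IoU with an unmatched ground-truth box reaches that threshold; raising the threshold to 0.75 demands tighter localization, which is why the metrics drop between the two settings.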

Fig. 2

Precision, Recall, and AP of OD models for flowers, including P. umbraticola. Upper panels indicate the metrics based on the threshold with IoU ≥ 0.5 and lower panels with IoU ≥ 0.75.

A CV of YOLO v8 was also performed to evaluate the generality of model performance. For IoU ≥ 0.5, Precision and Recall were 0.95–0.97 and 0.73–0.77, respectively (Table S2). When IoU ≥ 0.75, Precision and Recall were 0.85–0.89 and 0.66–0.69, respectively. For AP, the values were 0.72–0.76 for IoU ≥ 0.5 and 0.64–0.66 for IoU ≥ 0.75.

When multiple potted plants are used in an experiment, pots can be detected as a substitute for location information on the plant body. Pots were much more detectable than flowers as long as they were not completely blocked by plant bodies (Fig. S3).

These results indicate the potential for deep learning models to detect P. umbraticola flowers accurately from greenhouse images.

Estimation of flower opening degree in P. umbraticola

In P. umbraticola, the flower opening degree can be used as a measure of flower longevity (Maguvu et al., 2018). The opening degree of flowers was estimated by modifying the models for IC to regression models. The opening degree of P. umbraticola flowers estimated by the IC models is a continuous value (Fig. S4). Although Maguvu et al. (2018) classified the flower opening degree into six stages (e.g., a flower with clearly seen filaments is stage five while a fully open flower is stage six), intermediate flowers can be calculated using regression models (e.g., the value can be 5.65, as in the top-right one in Fig. 1).

Multiple deep-learning-based IC models were compared (Table S1). Most IC models displayed comparable metrics (RMSE = 0.513–0.604, r = 0.835–0.871) (Fig. S5). The flower opening degree estimated by densenet201 (RMSE = 0.518, r = 0.866) was close to the manually annotated true value (Fig. 3). However, the predictions for opening degrees 5 and 6 seemed slightly low; this discrepancy is discussed later. Thus, the flower opening degree of P. umbraticola can be accurately calculated with trained IC models.

Fig. 3

Prediction of P. umbraticola flower opening degree using densenet201. Flower images for the test were manually annotated based on the criterion of Maguvu et al. (2018) and are displayed as the true values of six categories. Predictions from the IC model were obtained as continuous values.

Tracking each flower through time series images

Flowering time was calculated by tracking each flower through the time series images (i.e., video) while maintaining its identity. Each flower and pot was assigned a unique ID across multiple images (Fig. S6). Combinations of the six OD models and four MOT models were compared based on three general indices of MOT accuracy (MOTA, MOTP, and IDF1) (Fig. S7). MOTA reflects the ratio of misses, false positives, and mismatches to the number of ground-truth objects, while MOTP indicates the distance between the true and estimated object positions over all frames (Bernardin and Stiefelhagen, 2008). IDF1, in contrast, is related to how long objects are tracked (Ristani et al., 2016). The differences between the OD models were generally consistent with their flower detection accuracy, with YOLO v6 and v8 performing better on all three measures (Fig. 4). Differences between the MOT models were observed but, with some exceptions, were not very large. Although MOTA and MOTP have been popular metrics, IDF1 was proposed to measure how long a model correctly tracks objects (Ristani et al., 2016), which may often be the most crucial criterion for tracking tasks. Therefore, the combination of a better OD model with a compatible MOT model that performs well, especially on IDF1, can be used to track P. umbraticola flowers for accurate flower longevity quantification. In this study, the combination of YOLO v8 and ByteTrack was used to estimate flower longevity in the following analyses.
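The MOTA definition of Bernardin and Stiefelhagen (2008) combines the three error types against the total number of ground-truth objects summed over frames, and can be transcribed directly (aggregate error counts are assumed to have been tallied beforehand, e.g., by py-motmetrics):

```python
def mota(misses, false_positives, mismatches, num_gt):
    """Multiple Object Tracking Accuracy: 1 minus the ratio of all tracking
    errors (misses, false positives, and identity mismatches) to the total
    number of ground-truth objects summed over frames. It can be negative
    when errors outnumber objects."""
    return 1.0 - (misses + false_positives + mismatches) / num_gt
```

For instance, 5 misses, 3 false positives, and 2 identity switches against 100 ground-truth objects give a MOTA of 0.9.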

Fig. 4

Comparison of MOT performance among combinations of OD and tracking models. MOTA: multiple object tracking accuracy, MOTP: multiple object tracking precision, IDF1: identification F1 score. Higher MOTA and IDF1 values indicate better performance, whereas lower MOTP values indicate better performance.

Quantification of P. umbraticola flower longevity

Plant pots were densely arranged to simulate actual large-scale greenhouse cultivation. Each flower’s line was identified by considering the pot locations and the video classification model. Two P. umbraticola lines with similar flower colors could be classified correctly even when they were close to each other (Fig. S8). Tracked flowers were classified into the seven P. umbraticola lines, and flower longevity could be evaluated by monitoring the change in flower opening degree.

The flower longevity of P. umbraticola in the greenhouse under natural daylight varied greatly depending on the day of measurement (Fig. S9). Typical examples are presented in Figure 5. Flower longevity varied from day to day across the seven lines, with average differences of more than 2 h. This result suggests that environmental effects, such as differences in temperature and hours of sunlight, can strongly affect P. umbraticola flower longevity in greenhouses.

Fig. 5

Differences in P. umbraticola flower longevity influenced by the day. The mean of flower longevity for each day (h) is indicated after the date. The number of flowers in each line (n) is presented for each day.

The possibility of quantifying flower longevity in each line was validated by ANOVA. The effect of the line on P. umbraticola flower longevity in the greenhouse was highly significant (Table 1). The effect of line accounted for 23.5% of the variance component of flower longevity. The effect of day and the interaction between day and line were also significant. Thus, the genetic evaluation of flower longevity is feasible even though environmental effects and measurement errors are large.

Table 1

ANOVA of P. umbraticola flower longevity in a greenhouse.

Furthermore, the onset time of flowering also varied with the day, suggesting that full opening of flowers may occur after cumulative solar radiation reaches 1 MJ·m−2 (Fig. 6). For example, full opening was completed by 8 am on days when cumulative solar radiation reached the threshold by 7 am (June 5, 8, 10–13, 15, and 18). Conversely, a delay in reaching the radiation threshold appeared to delay full flower opening (e.g., June 6, 14, 16, 19, and 20). However, the relationship between the time of full flower opening and flower longevity was not always obvious.

Fig. 6

Opening time of P. umbraticola flowers in the greenhouse. The time connected by the two circles is the average flowering time of the day. The number on the far right represents the average flower longevity (h) of all flowers on that day. Diamonds indicate hours when cumulative solar radiation exceeded 1 MJ·m−2 each day (hourly). Cumulative solar radiation is based on data from the Japan Meteorological Agency in Chiyoda Ward (https://www.data.jma.go.jp/gmd/risk/obsdl/index.php).

The estimated mean flower longevity was above 8 h for #8, and that of #10 was close to 8 h (Fig. 7). In contrast, #1 had a flower longevity of less than 6 h, while the other four lines were around 7 h. Statistically, the seven lines were classified into six groups in terms of flower longevity. These results suggest that P. umbraticola flower longevity in the greenhouse varied by more than 2 h among the lines, and they support the usefulness of deep learning models for computer vision in analyzing time series image data.

Fig. 7

Estimated values of flower longevity for each P. umbraticola line. Columns and error bars represent the mean and SE, respectively. The letters (a–f) indicate significant differences at the 5% level by Tukey’s test.

Discussion

Deep learning models were used for several tasks to quantify differences in flower longevity among P. umbraticola lines in a greenhouse. The lines that exhibited short and long flower longevity in this study (#1 and #10, corresponding to “Single Red” and “Sanchuraka Cherry Red”, respectively) also displayed large differences in a growth chamber test (Maguvu et al., 2016). While a growth chamber, with its stable environment, is necessary to evaluate flower longevity accurately for a limited number of lines, breeding trials that evaluate a large number of genetic resources at once are also anticipated. The quantification methods presented in this study may be particularly useful for primary selection for flower longevity in larger spaces and under conditions influenced by environmental effects.

Flower longevity is strongly influenced by environmental factors (Reid and Jiang, 2012). Portulaca umbraticola is generally grown as a potted plant and is often placed outdoors in summer because it prefers high temperatures and abundant sunlight. Since daily environmental conditions (e.g., temperature and sunlight) vary greatly outdoors, the flower longevity of P. umbraticola also depends on the conditions of each day. Although this study also suggests a day effect on flower longevity and full opening time (Fig. 6), it was possible to quantify genetic differences in flower longevity among lines even under varying environmental conditions (Fig. 7). Identifying the specific environmental variables (e.g., temperature and sunlight) that affect P. umbraticola flower longevity would be valuable, but doing so statistically is difficult due to the strong collinearity among environmental variables in the greenhouse. The influence of environmental factors on flower longevity remains to be tested in a closed environment chamber.

Flower opening degree predictions in this study were also accurate, but opening degrees 5 and 6 were slightly underestimated (Fig. 3). Similarly, degree 2 may have been slightly overestimated. In this study, the training data were categorical, but the predictions were made using regression models, so the predicted values shrank toward the overall average at the upper and lower ends of the flower opening degree. Flower longevity was analyzed here based on opening degree 4, so this shrinkage is unlikely to have had a major effect. However, if an extreme value is used as the standard for flower opening (e.g., degree 5.9), flower longevity may be greatly underestimated; in that case, using classification models may be a better option.

The applicability of these methods to breeding for flower longevity in other ornamental flowers will depend largely on vegetative and flowering characteristics. P. umbraticola is a creeping plant that tends to produce apical flowers facing upward, making it suitable for conditions in which interval shooting is conducted from the top of the greenhouse. Additionally, since it is a one-day flower, each measurement period is relatively short. For taller plants or those that flower downward, adjusting the photography method may be necessary. For example, lisianthus (E. grandiflorum) is a tall plant with a blooming period of two weeks or longer (Halevy and Kofranek, 1984; Shimizu and Ichimura, 2002). The applicability of the procedure established in this study to other ornamental flowers is unknown and should be verified as soon as possible. In carnations, the flower longevity of intact flowers and cut flowers may differ significantly (Kondo et al., 2020; Wu et al., 1991). Although the analysis procedure in this study is intended for large-scale evaluation of flower longevity for potted plants in greenhouses, the method itself can be applied to evaluate intact and cut flowers in growth chambers.

Various deep learning models were used for computer vision in this study. Deep learning models can perform tasks with high accuracy that classical image analysis methods cannot accomplish (Kamilaris and Prenafeta-Boldú, 2018). The use of deep learning in horticultural science is also rapidly developing and improving in various applications (Yang and Xu, 2021). For example, procedures using OD alone have been replaced by a combination with MOT for fruit detection in the field (Koirala et al., 2019; Zhang et al., 2022). Additionally, aerial photography using an unmanned aerial vehicle has become common for acquiring images of large farm fields, and using various deep learning models is essential to analyze the vast number of images obtained (Apolo-Apolo et al., 2020). The application of deep learning will continue to advance to achieve various objectives in floricultural research.

One drawback of deep learning models is their need to be trained for each application. In future research, considering ways to extend the flower longevity analysis procedure presented in this study to various ornamental flowers will be necessary. By including other flowers in the training image data [e.g., using an open dataset from flower images of Nilsback and Zisserman (2008)], the model can analyze an even wider variety of flowers. Planning and preparing reusable deep learning models may be essential in terms of improving the efficiency of breeding for flower longevity.

These methods are not limited to flower longevity quantification (Arya et al., 2022). In principle, the number of flowers can be counted simultaneously. However, selecting models based on the appropriate metrics for each application remains necessary; if a more accurate count of flowers is needed, an OD model with superior Recall must be selected so that all flowers are detected. Estimating the number of floral organs (e.g., petals) may also be possible, although high-resolution images and better photography methods would be needed. Since instance segmentation based on another type of deep learning model can identify individual plants and quantify the area they occupy in an image (Champ et al., 2020), quantifying the growth of crops over time from interval shooting images is also likely possible. Deep learning models are expected to improve the efficiency of analysis in various horticultural studies, not only of the flower longevity of ornamental flowers.

Acknowledgements

I would like to express my deep appreciation to Professor Emeritus Michio Shibata (The University of Tokyo), who collected and maintained the P. umbraticola lines used in this study for many years.

Literature Cited
 
© 2025 The Japanese Society for Horticultural Science (JSHS)

This is an open-access article distributed under the terms of the Creative Commons Attribution Non-Commercial (BY-NC) License.
https://creativecommons.org/licenses/by-nc/4.0/