A model for predicting the vanadium content in blast furnace (BF) molten iron was developed to address the delayed detection of molten iron quality during the smelting process of a vanadium and titanium BF. First, based on the whole-process data platform of BF ironmaking, a standardized data warehouse of BF smelting was established, and the variables related to the vanadium content in molten iron were selected for the model. Clean data were obtained by processing the original data. Afterward, feature extraction was achieved by feature construction and PCA dimensionality reduction, and the final input feature variables were determined using a combination of multiple feature selection algorithms and production process experience. Finally, the CatBoost model was selected for prediction. The results show that CatBoost achieved better results than the XGBoost and long short-term memory (LSTM) models, with all indicators higher than those of the two comparison models. The R2 of CatBoost reached 0.773, and the hit ratio for prediction errors within ±0.020% reached 89.65%, which met the actual production requirement of a vanadium and titanium commercial BF in China.
In recent years, with the implementation of Germany’s Industry 4.0, the U.S. Industrial Internet, and China’s “Made in China 2025” plan, the application of industrial big data has become the focus of manufacturing transformation and upgrading.1,2,3,4) In the information era, advanced big data mining technology provides new intelligent support for industrial manufacturing and greatly promotes the structural adjustment and upgrading of industrial enterprises. In the iron and steel industry, the new technological revolution, especially the digital technology revolution, has become the trend of modern ironmaking development.5,6) Because BF production runs continuously over long periods, a large volume of smelting process data is generated. If these data are effectively collected and analyzed, the production rules of the ironmaking process can be extracted, making it possible to predict and guide production effectively and to make the BF ironmaking process more accurate and intelligent.7,8) However, owing to the unstable internal and external environment of the BF and the limitations of current detection technology, the iron quality indexes cannot be measured stably and accurately online, while offline testing of the elements in the iron takes too long and lags seriously behind production.9) Therefore, establishing a prediction model of iron quality is of great significance for actual production.
For a BF smelting vanadium titanomagnetite, the traditional molten iron quality indexes should be considered,10,11) such as hot metal temperature (HMT), carbon content ([C]), silicon content ([Si]), phosphorus content ([P]), and sulfur content ([S]). Moreover, the vanadium content ([V]), which carries obvious economic value, should be monitored and measured according to the actual production situation. Researchers have done considerable work on the prediction of molten iron quality; existing iron quality prediction models include mechanistic models, inference models, and data-driven intelligent models. Among them, the data-driven modeling approach has been widely used to solve the modeling problem of iron quality.12,13,14,15,16,17) However, relatively little research has addressed vanadium prediction in vanadium and titanium BF molten iron.18)
In this work, a model for predicting the vanadium content of BF molten iron based on the CatBoost algorithm was developed. Based on the standardized data warehouse of BF smelting, clean data were obtained by processing the original data. Features were extracted from the original variables by feature construction and PCA dimensionality reduction. The final input features of the model were selected by combining production process knowledge with various feature selection methods, and model building was completed through a parameter tuning process. The selected model could accurately predict the vanadium content of BF molten iron in the next hour and capture its trend, indicating that it can assist in guiding BF production in a timely manner.
The data platform provides strong support for the standardized data warehouse of BF ironmaking. The big data platform architecture of BF ironmaking is shown in Fig. 1. With big data, artificial intelligence, and cloud computing at its core, the data platform provides complete life-cycle services for data informatization and intelligence, including essential services such as data transmission, storage, processing, and scheduling, business data modeling, data interaction analysis, and intelligent application.
Fig. 1. Big data platform architecture of BF ironmaking. (Online version in color.)
Based on the dimensions of the collected BF data, process experience, and objectives of the analysis, the data were divided into themes and content settings according to the BF production process. The structure and content of each data table related to the BF ironmaking process were designed to establish a highly efficient and generalized data warehouse. The data warehouse operation architecture of the whole BF ironmaking process is shown in Fig. 2. The theme and content of the data warehouse for the whole process of BF ironmaking are shown in Fig. 3.
Fig. 2. Data warehouse operation architecture for the whole process of BF ironmaking. (Online version in color.)
Fig. 3. Theme and content of the data warehouse for the whole process of BF ironmaking. (Online version in color.)
The operational data store (ODS) stores the massive underlying source data streams by module. The data warehouse detail (DWD) layer cleans, desensitizes, transforms, and integrates the source data to form complete, clean, and consistent data. The data mart (DM) layer, comprising the data warehouse base (DWB) and data warehouse service (DWS), divides and arranges the data by theme according to the production process and business needs. The parameter data store (PDS) stores the key process indexes of production, from which index data can easily be extracted for efficient analysis and application. The application data store (ADS) further extracts and transforms key production parameter data for model applications and business needs, realizing reliable storage, efficient circulation, and full utilization.
The BF ironmaking process reduces iron ore or other iron-bearing material into liquid pig iron at high temperature using reducing agents such as coke and coal. The raw and fuel materials and flux charged at the BF top descend as the reaction proceeds; during the descent they meet the rising gas stream and undergo heat transfer, reduction, melting, and carburization reactions, eventually forming pig iron. The slag falls onto the iron surface in the hearth and is discharged together with the iron from the taphole. Part of the gas produced inside the BF participates in the reduction reactions, while the unused part is discharged from the BF top and de-dusted as top gas. The purpose of BF ironmaking is to obtain iron; gas and slag arise as by-products. The whole process must maintain a stable BF operating condition to smelt qualified iron. A schematic diagram of the BF ironmaking process is shown in Fig. 4.
Fig. 4. Schematic diagram of BF ironmaking. (Online version in color.)
The BF ironmaking system has a long process flow and a complex internal reaction mechanism, characterized by large hysteresis, strong coupling, and nonlinearity. First, the BF ironmaking process is invisible: numerous physicochemical reactions occur inside the BF, which distinguishes it from other complex but observable production systems. Second, each stage of BF ironmaking production affects and constrains the others, and there is also coupling among the hundreds of parameters involved in a single stage. A BF smelting vanadium titanomagnetite as the primary raw material is even more complex than an ordinary BF.
3.2. Data Pre-processing

3.2.1. Data Selection

The data used in this paper came from a BF smelting vanadium and titanium ore at an iron-smelting plant and represent real-world data from an actual BF production process. A total of 535 data parameters related to BF production were selected from the data warehouse of the BF. The collected data are described by production process and data type in Table 1.
Table 1. Description of the collected data.

| Type of data | Data content and number of fields |
|---|---|
| Non-real-time discrete data | Physical and chemical performance of raw and fuel material (60), detection data of slag and iron (21), manually entered production data (13) |
| Real-time continuous data | Operating parameters (69), temperature of the hearth and bottom of the BF (156), temperature of BF staves (150), pressure parameters (22), flow parameters (44) |
Owing to uncontrollable factors such as equipment anomalies, signal interference, and manual entry errors, the data contained null values, outliers, and other problems. In addition, the long-flow characteristics of the BF production process introduce a time-lag problem into the data. Data from non-normal production periods (BF blowing-down operations) were also unusable. Therefore, it was necessary to process the raw data.
1) Null value processing. The causes of null values differ between the data types of the BF production process. Non-real-time discrete data mainly comprise quality test data and manually entered data, whose null values arise chiefly from omitted entries and inconsistent recording frequencies. Because this type of data is highly correlated with the records at adjacent times, null values were handled by filling in the value of the previous moment.
Null values of real-time continuous data were generated by abnormal monitoring equipment or signal interference. The Lagrangian interpolation method was used to backfill the null values to maintain the continuity of the data.
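As a minimal sketch of the two filling strategies (assuming each signal is held in a pandas Series; function names and the neighbour count k are illustrative, not from the source):

```python
import pandas as pd
from scipy.interpolate import lagrange

def fill_discrete_nulls(series: pd.Series) -> pd.Series:
    # Discrete quality/entry data: fill each null with the previous record.
    return series.ffill()

def fill_continuous_nulls(series: pd.Series, k: int = 3) -> pd.Series:
    # Continuous sensor data: Lagrange interpolation over the k nearest
    # valid points on each side of a gap (positions used as x for stability).
    s = series.reset_index(drop=True)
    valid = s.dropna()
    for i in s[s.isna()].index:
        left = valid[valid.index < i].tail(k)
        right = valid[valid.index > i].head(k)
        pts = pd.concat([left, right])
        poly = lagrange(pts.index.to_numpy(dtype=float), pts.to_numpy(dtype=float))
        s[i] = float(poly(i))
    return pd.Series(s.to_numpy(), index=series.index)
```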
2) Outlier data processing. Outlier data are the extreme outliers generated under normal production conditions. The most obvious sign of outlier data is the appearance of points lying far outside the relatively stable region of the data set.
Outliers in real-time continuous data are mainly caused by signal interference and degraded sensitivity of the monitoring equipment. A sliding-window feature method was used to rapidly extract the outliers and identify abnormal data,19) and the linear interpolation method was then used to fill in the removed outliers.20)
Outliers in non-real-time discrete data are mainly caused by human input errors and abnormal device storage. Their discrimination was divided into two steps. First, a threshold range was set manually for each parameter according to the actual production situation, and data exceeding the threshold were treated as outliers. Then, a box plot was applied to the data within the threshold range: data outside the two extreme outlier limits (below Q1 − 3IQR or above Q3 + 3IQR) were classed as outlier data according to the data distribution,21) and the outlier data were replaced with the correct value of the previous moment.
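The two outlier rules might be sketched as follows (the window length, multiplier k, and per-parameter threshold range lo/hi are illustrative; the source specifies only the 3×IQR extreme limits):

```python
import pandas as pd

def repair_continuous_outliers(series: pd.Series, window: int = 24,
                               k: float = 3.0) -> pd.Series:
    # Sliding window: flag points deviating more than k rolling standard
    # deviations from the rolling median, then fill by linear interpolation.
    med = series.rolling(window, center=True, min_periods=1).median()
    std = series.rolling(window, center=True, min_periods=1).std().fillna(0.0)
    mask = (series - med).abs() > k * std
    return series.mask(mask).interpolate(method="linear")

def repair_discrete_outliers(series: pd.Series, lo: float, hi: float) -> pd.Series:
    # Step 1: manual threshold range; step 2: extreme limits of the box plot.
    q1, q3 = series.quantile(0.25), series.quantile(0.75)
    iqr = q3 - q1
    mask = ((series < lo) | (series > hi)
            | (series < q1 - 3 * iqr) | (series > q3 + 3 * iqr))
    # Replace flagged values with the previous correct value.
    return series.mask(mask).ffill()
```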
3) Data frequency processing. The business requirements of the BF production processes differ, and so do their recording frequencies: the raw fuel and charging systems record per batch, BF operation per second, and the slag-iron system per heat. Therefore, the data of the different production processes had to be unified to a common frequency. The overall frequency of the data was adjusted to one hour by correlating and integrating the data of the different production processes.
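A possible hourly-unification step (assuming each subsystem table carries a datetime index; the per-subsystem aggregation rules are plausible choices, not specified by the source):

```python
import pandas as pd

def unify_to_hourly(batch_df: pd.DataFrame, second_df: pd.DataFrame,
                    heat_df: pd.DataFrame) -> pd.DataFrame:
    # Batch-frequency charging data: carry the latest batch record per hour.
    batch_h = batch_df.resample("1H").last().ffill()
    # Second-frequency operating data: average within each hour.
    second_h = second_df.resample("1H").mean()
    # Heat-frequency slag/iron assays: carry forward until the next tap.
    heat_h = heat_df.resample("1H").last().ffill()
    return batch_h.join([second_h, heat_h], how="inner")
```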
4) Data delay correspondence. The BF ironmaking process has a lag: when an independent variable changes, the dependent variable does not respond immediately, but only after some time. This phenomenon is called the time delay of the BF. In actual BF production, the time-lag problem cannot be ignored, so the lag time between each variable and the target variable had to be determined. The specific method was to calculate the correlation coefficient between each related variable and the target variable at different time lags and to adopt, for that variable, the lag giving the maximum correlation coefficient.
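A direct rendering of this lag scan (assuming hourly-aligned pandas Series; max_lag is illustrative):

```python
import pandas as pd

def best_time_lag(x: pd.Series, y: pd.Series, max_lag: int = 8) -> int:
    # Shift the candidate variable x by 0..max_lag hours and keep the lag
    # whose absolute correlation with the target y is largest.
    corrs = {lag: y.corr(x.shift(lag)) for lag in range(max_lag + 1)}
    return max(corrs, key=lambda lag: abs(corrs[lag]))
```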
5) BF blowing-down operation data processing. The normal production process of a BF cannot avoid regular shutdown maintenance operations. Production stands still during blowing-down operations, so the corresponding data are unusable and were eliminated according to the production records.
3.3. Feature Engineering

3.3.1. Analysis of the Target Characteristic Variable

According to the BF ironmaking process and data patterns, the target characteristic variable, the vanadium content of iron (GL04_Iron_V), is not only influenced by the process parameters but also, as time-series data, strongly autocorrelated. Therefore, the autocorrelation function (ACF)22) was used to analyze the target variable. The ACF measures how the correlation between data points of the same series changes with the time interval between them; it is calculated as shown in Eq. (1), where τ is the lag period, x_t is the value of the series at time t, μ_x is the mean value of the series, and N is the number of samples. The ACF values of GL04_Iron_V are shown in Fig. 5.
\[ \rho(\tau)=\frac{\sum_{t=1}^{N-\tau}\left(x_{t}-\mu_{x}\right)\left(x_{t+\tau}-\mu_{x}\right)}{\sum_{t=1}^{N}\left(x_{t}-\mu_{x}\right)^{2}} \tag{1} \]
Fig. 5. Autocorrelation function values of GL04_Iron_V. (Online version in color.)
As shown in Fig. 5, the ACF values of GL04_Iron_V gradually decreased from 1 as the lag period increased. With the threshold level taken as 0.6, the autocorrelation window extends up to 4 hours. The vanadium contents of the previous 4 hours, GL04_Iron_V1, GL04_Iron_V2, GL04_Iron_V3, and GL04_Iron_V4, were therefore selected as new feature variables and added to the training model.
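A sketch of Eq. (1) and of the resulting lag-feature construction (column names follow the text; the DataFrame layout is assumed):

```python
import numpy as np
import pandas as pd

def acf(x, max_tau: int) -> np.ndarray:
    # Autocorrelation function of Eq. (1).
    x = np.asarray(x, dtype=float)
    mu, n = x.mean(), len(x)
    denom = ((x - mu) ** 2).sum()
    return np.array([((x[:n - tau] - mu) * (x[tau:] - mu)).sum() / denom
                     for tau in range(max_tau + 1)])

def add_lag_features(df: pd.DataFrame, col: str = "GL04_Iron_V",
                     n_lags: int = 4) -> pd.DataFrame:
    # ACF > 0.6 up to a 4-hour lag, so the previous four hourly values
    # become GL04_Iron_V1 ... GL04_Iron_V4.
    for k in range(1, n_lags + 1):
        df[f"{col}{k}"] = df[col].shift(k)
    return df
```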
3.3.2. Principal Component Analysis (PCA) Dimensionality Reduction

Principal component analysis (PCA) is one of the most widely used algorithms for data dimensionality reduction.23) Given a target proportion of the information contained in the original data, PCA applies a linear transformation that maps the original data onto a set of d linearly independent component vectors W* = (w1, w2, ..., wd), thereby reducing high-dimensional data to its main feature components.
The BF hearth temperature and cooling part data are similar in type, closely related, and obviously autocorrelated, so PCA was selected for their dimensionality reduction. The hearth temperature comprises the temperature measurement points at different heights and directions of the BF hearth, generating 156 dimensions of data. The cooling part contains the temperature points at different heights and directions of the BF, as well as the flow and temperature of the cooling water in different directions, generating 172 dimensions of data. The result of dimensionality reduction is shown in Fig. 6.
Fig. 6. Principal component dimension reduction based on PCA. (Online version in color.)
As shown in Fig. 6, with the eigenvalue threshold set to 1, six independent characteristic variables (FAC1_1–FAC1_6) were extracted from the 156-dimensional hearth data, with a cumulative variance contribution of 95.457%. For the cooling part, 25 independent feature variables (FAC2_1–FAC2_25) were extracted from the 172-dimensional data, with a cumulative variance contribution of 84.456%. Together, the two parts covered roughly 90% of the original information, meeting the data mining requirements.
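An eigenvalue-thresholded PCA of this kind might be sketched as below (standardizing before PCA is an assumption; the prefix naming mimics the FAC identifiers):

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.preprocessing import StandardScaler

def pca_by_eigenvalue(X: np.ndarray, prefix: str, threshold: float = 1.0):
    # Keep the components whose eigenvalue exceeds the threshold (here 1,
    # as in Fig. 6), then report the cumulative variance contribution.
    Z = StandardScaler().fit_transform(X)
    pca = PCA().fit(Z)
    d = int((pca.explained_variance_ > threshold).sum())
    scores = PCA(n_components=d).fit_transform(Z)
    names = [f"{prefix}_{i + 1}" for i in range(d)]   # e.g. FAC1_1 ... FAC1_6
    print(f"{d} components, cumulative variance "
          f"{pca.explained_variance_ratio_[:d].sum():.3%}")
    return scores, names
```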
3.3.3. Feature Variable Selection Based on Multiple Feature Selection Methods

Feature selection aims to reduce the number of features, eliminate redundancy among them, avoid model overfitting, and thereby improve the model’s generalization ability. In this work, multiple feature selection algorithms were combined with ironmaking process experience to fully explore the potential patterns in the data and comprehensively select the final feature variables.
1) Spearman correlation and maximum information coefficient (MIC) analysis. The Spearman correlation coefficient, often called the Spearman rank correlation coefficient, is computed from the rank positions of the original data and adapts well to the nonlinear data of the BF. The MIC24) is generally used to determine the degree of association between two variables: the larger the MIC value, the closer the relationship. The variables with Spearman correlation coefficients and MIC values greater than 0.2 are shown in Figs. 7 and 8, respectively (a screening sketch follows the figures).
Fig. 7. Target parameters with a Spearman correlation coefficient greater than 0.2. (Online version in color.)
Fig. 8. Characteristic variables with an MIC value greater than 0.2. (Online version in color.)
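The union-style screening described above could be sketched as follows (the minepy package is assumed available for MIC; its MINE parameters are the library defaults):

```python
import pandas as pd
from scipy.stats import spearmanr
from minepy import MINE   # assumed available for MIC computation

def screen_by_spearman_mic(df: pd.DataFrame, target: str,
                           cutoff: float = 0.2) -> list:
    # Keep the union of variables passing either filter, matching the
    # combination of the two methods described in the text.
    keep = set()
    mine = MINE(alpha=0.6, c=15)   # library default parameters
    y = df[target].to_numpy()
    for col in df.columns.drop(target):
        rho, _ = spearmanr(df[col], y)
        mine.compute_score(df[col].to_numpy(), y)
        if abs(rho) > cutoff or mine.mic() > cutoff:
            keep.add(col)
    return sorted(keep)
```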
2) Process experience screening of redundant characteristic variables. To cover as many characteristic variables reflecting the associations in the data as possible, the characteristic variables selected by the Spearman and MIC methods were combined, giving a total of 89 characteristic variables. However, their redundancy was serious: many characteristic variables were correlated with one another, for example, where some variables were derived from others through computational formulas. Therefore, the feature variables with redundant relationships were identified and eliminated using ironmaking process experience. After screening, 53 characteristic variables were finally determined; their number and content are shown in Table 2.
Table 2. Characteristic variables retained after process experience screening.

| Production process | Feature names | Number of features |
|---|---|---|
| Raw and fuel material | GL04_MixOre_Al2O3, GL04_MixOre_CaO, GL04_MixOre_FeO, GL04_MixOre_MgO, GL04_MixOre_R, GL04_MixOre_SiO2, GL04_MixOre_TFe, GL04_MixOre_V2O5, GL04_Pellet_Compressive, GL04_Pellet_Drum, GL04_Pellet_H2O, GL04_Pellet_P, GL04_Pellet_TiO2, GL04_Proportion_Pellet, GL04_Proportion_Sinter, GL04_Sinter_TiO2, GL04_Coke_H2O | 17 |
| BF operation | GL04_XBYLPJ, GL04_XBYCZB, GL04_XBYCPJ, GL04_TQXZS, GL04_SBYLPJ, GL04_SBYCZB, GL04_SBYCPJ, GL04_RFYL02, GL04_QLYC, GL04_N2ZXFX, GL04_LQSPSZGLL, GL04_LQSPSWD, GL04_LQSGSZGLL, GL04_LQSGSWD, GL04_LFYL, GL04_LDYL, GL04_LDLQSYL, GL04_GLRSZLL, GL04_FZWD, GL04_FYYL, GL04_DWPJ, GL04_DTHY, GL04_COZXFX, GL04_Charging_Tironhour, GL04_32595LSJYPJ, FAC4_1, FAC21_2, FAC2_1, FAC19_2, FAC14_2, FAC1_2, FAC1_1 | 32 |
| Slag and iron | GL04_Iron_V1, GL04_Iron_V2, GL04_Iron_V3, GL04_Iron_V4 | 4 |
3) Random forest recursive elimination to select feature variables. Recursive elimination is a greedy algorithm for finding an optimal feature subset. The model is built repeatedly, the feature with the highest score is selected, and the process is repeated for the remaining features; after all features have been traversed, they are sorted by score to determine the number of features to retain.25) In this study, the random forest model served as the base model, and the final feature variables were obtained by cross-validated recursive elimination over the process-screened feature variables (a sketch of this step is given after Table 3). The result of the feature variable selection of the random forest model is shown in Fig. 9.
Fig. 9. Feature variable selection of the random forest model. (Online version in color.)
As shown in Fig. 9, the cross-validation score rose rapidly as the number of features grew from 2, and once the number of features reached 20, the score increased slowly and gradually stabilized. The score peaked when the number of feature variables reached 34, after which it decreased slightly and leveled off. Therefore, the final number of features used was 34. The resulting feature variable classification is shown in Table 3.
Table 3. Classification of the 34 finally selected feature variables.

| Production process | Feature names | Feature description | Number of features |
|---|---|---|---|
| Raw and fuel material | GL04_MixOre_FeO, GL04_MixOre_MgO, GL04_MixOre_V2O5, GL04_Pellet_Compressive, GL04_Coke_H2O | Raw and fuel material composition testing information | 5 |
| BF operation | FAC1_1, FAC1_2, FAC14_2, FAC19_2, FAC2_1, FAC21_2, FAC4_1 | PCA components of the hearth and cooling parts | 7 |
| | GL04_Charging_Tironhour | BF charging information | 1 |
| | GL04_DWPJ, GL04_FZWD, GL04_LQSGSWD, GL04_LQSPSWD | Temperature monitoring information, including BF top, top valve seat, and cooling water | 4 |
| | GL04_COZXFX, GL04_N2ZXFX | BF top gas detection information | 2 |
| | GL04_32595LSJYPJ, GL04_FYYL, GL04_LDYL, GL04_LFYL, GL04_QLYC, GL04_SBYCZB, GL04_SBYLPJ, GL04_TQXZS, GL04_XBYCPJ, GL04_XBYLPJ | Pressure detection information, including top, air supply, and furnace body pressure | 10 |
| | GL04_GLRSZLL | Flow monitoring information (cooling water flow) | 1 |
| Slag and iron | GL04_Iron_V1, GL04_Iron_V2, GL04_Iron_V3, GL04_Iron_V4 | Values of GL04_Iron_V for the previous four hours | 4 |
The finally selected feature variables (Table 3) show that the feature selection process eliminated the feature redundancy and reduced the number of feature variables from 535 to 34. The final set selected by combining multiple feature selection methods not only contained the feature variables already followed in daily BF production but also revealed new feature variables, for example, from the raw and fuel material and the slag and iron processes. These new feature variables were approved by the site operators and enriched the set of variables followed in BF production; by watching and adjusting them, the operators achieved control of the vanadium content in the molten iron. This indicates that the feature variable selection method was scientific and effective for the production situation.
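The recursive elimination step of item 3) might be sketched as below (whether the original work used a time-aware split is not stated; TimeSeriesSplit is an assumption consistent with the time-ordered data):

```python
from sklearn.ensemble import RandomForestRegressor
from sklearn.feature_selection import RFECV
from sklearn.model_selection import TimeSeriesSplit

def rf_recursive_elimination(X, y):
    # X: the 53 process-screened features (DataFrame); y: GL04_Iron_V.
    selector = RFECV(
        estimator=RandomForestRegressor(n_estimators=200, random_state=0),
        step=1,                          # drop one feature per round
        cv=TimeSeriesSplit(n_splits=5),  # respects the time order of BF data
        scoring="r2",
        min_features_to_select=2,
    )
    selector.fit(X, y)
    return X.columns[selector.support_]  # 34 features at the score peak
```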
4) Category feature extraction under process conditions. BF production and operation experience is summed up in the rule of thumb “seven points for raw materials, three points for operation”; that is, the raw and fuel material conditions of the BF largely determine what the production operation can achieve. The vanadium content in the molten iron of the BF shows both “correlation” and “randomness”: the “correlation” is determined by the vanadium content of the raw material charged into the furnace, while the BF production process causes the “randomness.” Different raw material conditions (the vanadium contents of the various raw materials) determine the level of vanadium in the molten iron, and under the same raw material conditions, different production operations lead to changes in the vanadium content of the molten iron. Therefore, categorizing and extracting the key feature variables should help to improve the model prediction. The relationship between V2O5 in the raw material and vanadium in the molten iron is shown in Fig. 10, and the distribution relationship between vanadium in the molten iron and vanadium in the raw materials is shown in Fig. 11.
Fig. 10. Relationship between V2O5 in raw material and vanadium in molten iron. (Online version in color.)
Fig. 11. Distribution relationship between vanadium in molten iron and vanadium in raw materials. (Online version in color.)
As shown in Fig. 10, V2O5 in the feedstock (GL04_MixOre_V2O5) had a strong correlation with the vanadium content in the molten iron (GL04_Iron_V) and greatly influenced it, so GL04_MixOre_V2O5 was taken as the characteristic categorical variable. First, its distribution range and initial thresholds were determined from the distribution of GL04_MixOre_V2O5 in Fig. 11. Then, the resulting thresholds were corrected with expert production experience. Finally, GL04_MixOre_V2O5 was divided into four categories: normal-I type (I), normal-II type (II), high (III), and low (IV). The classification result of the characteristic categorical variable is shown in Table 4.
Table 4. Classification of the characteristic categorical variable.

| Classification label | GL04_MixOre_V2O5 |
|---|---|
| All | 0.26 ≤ V2O5 < 0.41 |
| Normal-I type (I) | 0.30 ≤ V2O5 < 0.34 |
| Normal-II type (II) | 0.34 ≤ V2O5 < 0.38 |
| High (III) | 0.38 ≤ V2O5 < 0.41 |
| Low (IV) | 0.26 ≤ V2O5 < 0.30 |
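The binning of Table 4 maps directly onto pandas intervals (the class column name is hypothetical; bin edges come from the table, with right edges exclusive):

```python
import pandas as pd

# Bin edges from Table 4; right edges are exclusive, as in the table.
V2O5_BINS = [0.26, 0.30, 0.34, 0.38, 0.41]
V2O5_LABELS = ["Low (IV)", "Normal-I (I)", "Normal-II (II)", "High (III)"]

def label_v2o5_class(df: pd.DataFrame) -> pd.DataFrame:
    df["V2O5_class"] = pd.cut(df["GL04_MixOre_V2O5"], bins=V2O5_BINS,
                              labels=V2O5_LABELS, right=False)
    return df
```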
CatBoost26) is a gradient boosting decision tree (GBDT) framework that uses symmetric oblivious trees as base learners and offers fewer parameters, native support for categorical variables, and high accuracy. The main pain point it addresses is the efficient and reasonable processing of categorical features; indeed, the name CatBoost combines “Categorical” and “Boosting.” In addition, CatBoost solves the problems of gradient bias and prediction shift, thereby reducing overfitting and further improving the accuracy and generalization ability of the algorithm. Compared with XGBoost27) and LightGBM,28) the innovations of CatBoost include the following (see the sketch after this list):
(1) Innovative algorithms are embedded to automatically process category features into numerical features. Statistical analysis is performed on the categorical features: the frequency of each category is calculated, and hyperparameters (priors) are added to generate new numerical features.
(2) CatBoost uses combined category features to exploit the connection between features, which significantly enriches the feature dimension.
(3) The prediction shift problem is solved by the ordered boosting method, which avoids bias in the gradient estimation.
(4) A fully symmetric tree is used as the base model.
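A usage-level sketch of the native categorical handling (variable names are hypothetical; only the cat_features mechanism from point (1) is shown):

```python
from catboost import CatBoostRegressor, Pool

def train_catboost(X_train, y_train, X_valid, y_valid):
    # Declare the categorical column so CatBoost handles it natively.
    cat_features = ["V2O5_class"]
    train_pool = Pool(X_train, y_train, cat_features=cat_features)
    valid_pool = Pool(X_valid, y_valid, cat_features=cat_features)
    model = CatBoostRegressor(loss_function="RMSE", verbose=50)
    model.fit(train_pool, eval_set=valid_pool)
    return model
```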
4.2. Model Tuning and Prediction

BF ironmaking is a typical continuous production process, and its data have obvious time-series characteristics. Therefore, the actual production data collected from the BF of the ironmaking plant were divided in time order, with a training-to-test ratio of 9:1. The optimal parameters were determined by tuning the critical parameters of each model; the tuning process and results for CatBoost are shown in Fig. 12 and Table 5, and the tuning results for XGBoost and LSTM are shown in Tables 6 and 7.
Fig. 12. CatBoost model tuning parameters. (Online version in color.)
Table 5. Tuning results of the CatBoost model.

| Model | od_type | depth | learning_rate | iterations | border_count | l2_leaf_reg | bagging_temperature | loss_function |
|---|---|---|---|---|---|---|---|---|
| CatBoost | Iter | 6 | 0.06 | 150 | 275 | 2 | 1 | RMSE |

Table 6. Tuning results of the XGBoost model.

| Model | booster | max_depth | learning_rate | n_estimators | min_child_weight | gamma | reg_alpha | reg_lambda |
|---|---|---|---|---|---|---|---|---|
| XGBoost | gbtree | 9 | 0.1 | 40 | 2 | 0.1 | 0 | 1 |

Table 7. Tuning results of the LSTM model.

| Model | loss | optimizer | learning_rate | batch_size | dropout | hidden_size | memory_cells | time_step |
|---|---|---|---|---|---|---|---|---|
| LSTM | mse | adam | 0.001 | 80 | 0.1 | 2 | 64 | 4 |
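The time-ordered 9:1 split and the Table 5 parameters translate directly into code (a sketch; the split helper is illustrative):

```python
from catboost import CatBoostRegressor

def split_by_time(X, y, test_ratio: float = 0.1):
    # Time-ordered 9:1 split: the most recent 10% of records form the test set.
    cut = int(len(X) * (1 - test_ratio))
    return X[:cut], X[cut:], y[:cut], y[cut:]

# Hyperparameters taken verbatim from Table 5.
tuned_catboost = CatBoostRegressor(
    od_type="Iter",
    depth=6,
    learning_rate=0.06,
    iterations=150,
    border_count=275,
    l2_leaf_reg=2,
    bagging_temperature=1,
    loss_function="RMSE",
    verbose=False,
)
```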
After the tuning process, the optimal parameters of each model were determined and the corresponding best results obtained. The XGBoost and LSTM models were selected for comparison. The prediction and evaluation results are shown in Table 8, and the prediction result of the CatBoost model is shown in Fig. 13.
Table 8. Prediction and evaluation results of the models (hit ratios of the prediction error within each band, %).

| Features | Model | R2 | RMSE | ±0.015% | ±0.020% | ±0.025% |
|---|---|---|---|---|---|---|
| No feature selection (535) | CatBoost | 0.607 | 0.0146 | 64.89 | 87.02 | 90.58 |
| | XGBoost | 0.578 | 0.0151 | 62.08 | 84.22 | 89.82 |
| | LSTM | 0.531 | 0.0160 | 61.32 | 73.79 | 84.22 |
| Feature selection (34) | CatBoost | 0.773 | 0.0130 | 80.92 | 89.65 | 94.00 |
| | XGBoost | 0.746 | 0.0139 | 77.93 | 86.10 | 91.83 |
| | LSTM | 0.720 | 0.0144 | 76.83 | 85.28 | 89.64 |
Fig. 13. Result of CatBoost model prediction. (Online version in color.)
As shown in Table 8 and Fig. 13, the prediction indexes of the models built on the combined feature selection were higher than those of the models without feature selection, showing that feature selection was beneficial and effectively improved the prediction performance. CatBoost outperformed the XGBoost and LSTM models on all indicators: with feature selection, its R2 reached 0.773, and its hit ratios for prediction errors within ±0.015%, ±0.020%, and ±0.025% reached 80.92%, 89.65%, and 94.00%, respectively, exceeding the production requirement that the hit ratio for errors within ±0.020% be no less than 85%.29)
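The evaluation indexes of Table 8 can be reproduced as below (a sketch; the tolerance bands follow the table, with vanadium content expressed in mass%):

```python
import numpy as np
from sklearn.metrics import mean_squared_error, r2_score

def evaluate(y_true, y_pred) -> dict:
    # Hit ratio: share of predictions whose absolute error falls within
    # each tolerance band (vanadium content in mass%, so 0.020 = 0.020%).
    err = np.abs(np.asarray(y_pred) - np.asarray(y_true))
    return {
        "R2": r2_score(y_true, y_pred),
        "RMSE": mean_squared_error(y_true, y_pred) ** 0.5,
        "within ±0.015%": 100 * (err <= 0.015).mean(),
        "within ±0.020%": 100 * (err <= 0.020).mean(),
        "within ±0.025%": 100 * (err <= 0.025).mean(),
    }
```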
Based on the big data platform and whole process data warehouse of BF ironmaking, the prediction model of vanadium content in the molten iron was established using a large amount of historical data, and the following conclusions were obtained:
(1) Based on the data warehouse of the whole BF ironmaking process, the original variables related to the target parameter (GL04_Iron_V) were selected. Further processing handled null values, outlier data, data delay correspondence, and abnormal production data (blowing-down operation data), yielding the cleaned model data.
(2) Feature extraction was realized by feature construction and PCA dimensionality reduction. Using various feature selection methods such as Spearman, MIC, and random forest recursive elimination, combined with the production process theory, the final feature variables were selected comprehensively. The number of final feature variables was determined to be 34.
(3) The CatBoost model was selected for prediction, and model development was completed by setting the categorical feature variable and optimizing the model parameters. The results show that the R2 of CatBoost reached 0.773 and that the hit ratio for errors within ±0.020% reached 89.65%, meeting the actual production requirement that this hit ratio be no less than 85%. Compared with the XGBoost and LSTM models, all the indexes of CatBoost were higher, indicating a better model.
The authors gratefully acknowledge financial support from the National Natural Science Foundation of China (52004096), the Hebei Higher Education Fundamental Research Funds Research Project (JQN2020032), and the Hebei Postgraduate Innovation Fund Project (CXZZBS2019143, CXZZBS2021094).