ISIJ International
Online ISSN : 1347-5460
Print ISSN : 0915-1559
ISSN-L : 0915-1559
Regular Article
A New AdaBoost.IR Soft Sensor Method for Robust Operation Optimization of Ladle Furnace Refining
Hui-Xin Tian, Yu-Dong Liu, Kun Li, Ran-Ran Yang, Bo Meng

2017 Volume 57 Issue 5 Pages 841-850

Abstract

LF (Ladle Furnace) refining plays an important role in the secondary metallurgy process. The traditional LF refining operation relies on the workers' experience, which is disadvantageous for stable production, high-quality products and energy saving. A new robust operation optimization method for molten steel temperature based on an AdaBoost.IR soft sensor is proposed for the LF refining process. Firstly, an intelligent sub model based on a BP (Back Propagation) neural network is established by analyzing the energy changes during the whole LF refining process. Then AdaBoost.IR is designed for the characteristics of industrial data, which makes it suitable for industrial soft sensor modeling. Using AdaBoost.IR, an ensemble soft sensor model is established to realize online real-time measurement of the molten steel temperature with better accuracy. Secondly, the robust operation optimization model is described by analyzing the LF refining process on the basis of the above AdaBoost.IR soft sensor, and HPSO-GA is used to solve for the optimal operation of the robust optimization model. The new robust operation optimization of temperature based on AdaBoost.IR is applied to a 300 t LF of the Baosteel Company. The experimental results demonstrate that the soft sensor can predict the temperature more accurately and that the end temperature of LF refining becomes more stable after robust optimization.

1. Introduction

The steel industry faces stiff competition in the global market. Steel companies have to focus on how to reduce cost and energy consumption while producing high-quality products that satisfy various customer demands.1) In essence, they must ensure stable continuous production and efficient operation. The secondary metallurgy process is very important for improving product quality. Ladle furnace (LF) steel refining plays a substantial role in the secondary metallurgy process, whose objective is to produce qualified steel grades.2) Another important purpose of the LF is to ensure the molten steel temperature required for continuous casting. Therefore, it is necessary to optimize the LF refining process for stable continuous production and high-quality products. In order to optimize the operations that determine the molten steel temperature, the temperature should be measured in real time. However, in the real LF refining process the temperature is generally measured no more than three times, using expendable thermocouples. Therefore, the development of a soft sensor model based on historical production data to predict the molten steel temperature in the LF is of particular concern. On the basis of the real-time predicted temperature, the best operation parameters of LF refining should be calculated by building an optimization model and solving for its optimal solution. Because the LF refining process is complex and changeable, the optimal operation parameters need to withstand production fluctuations. Therefore, a robust operation optimization model should be established to obtain a robust optimal solution.

Generally, the soft sensor methods for predicting the molten steel temperature in the LF can be divided into two kinds: mechanistic methods and intelligent methods. The mechanistic method is mainly based on thermodynamics and conservation of energy.2) In early studies, mechanistic models were developed in a variety of different ways.3,4,5) However, these models cannot be used for on-line prediction because their parameters are hard to obtain, which is attributed to the harsh operating environment of ladle metallurgy, especially the high temperatures and the corrosive slag associated with the process. With the fast development of artificial intelligence, more and more intelligent algorithms are used to establish prediction models for industrial production processes.6,7) Almost all intelligent models are designed to extract information from historical production data, so the intelligent method can overcome the parameter limitations of mechanistic models. However, in practical industrial applications, intelligent models based on a single algorithm are unstable because of the complexity of the industrial process and of the industrial production data. Ensemble techniques that combine predictors are an efficient strategy for achieving high prediction performance, especially in fields where developing a powerful single predictor requires considerable effort, and they have received much attention from researchers.8) In the recent decade the ensemble technique has been extended to regression problems. The regression ensemble strategy can overcome the above limitations and improve generalization ability and prediction accuracy. However, most researchers of regression ensembles focus on ensemble theory; research on how to fit complex industrial data and satisfy the needs of complex industrial processes is scarce.

Besides, robust operation optimization is also the key to realizing a stable molten steel temperature. The robust optimization model needs to be built using the temperature predicted by the soft sensor model. Regarding robust optimization problems in industrial production processes, most research is concentrated on the petrochemical and chemical industries. Shimoyama et al. proposed a linear combination of the expectation and variance of the original fitness function as the objective function of the robust optimization problem.9) Jin considered both performance and robustness indices in robust optimization problems,10) where the original fitness function serves as the performance index and the robustness index is represented by the expected value or variance of the fitness function. Vallerio et al. used the sigma point method to solve robust multi-objective dynamic optimization problems of chemical processes.11) Zhang et al. proposed an adjustable robust optimization approach for problems in continuous industrial processes.12) However, robust optimization in LF refining has rarely been studied. It is necessary to study the robust optimization problem of molten steel temperature in the LF considering the characteristics of the LF refining process and the requirements on the end molten steel temperature.

Against this background, the LF refining process is analyzed to find the main factors that affect the molten steel temperature. According to these main factors, a soft sensor model based on the BP network is established. For accurate and efficient prediction of the molten steel temperature, we propose a new AdaBoost strategy for industrial soft sensors (AdaBoost.IR), which fits industrial process data well. AdaBoost.IR aggregates several soft sensors based on single BP networks to overcome the noise of industrial process data and to obtain useful information, which is helpful in optimizing operations during LF refining. Then a robust operation optimization model of molten steel temperature in the LF refining process, obtained by analyzing the LF refining process and considering the fluctuation of production operations, is established on the basis of the AdaBoost.IR ensemble soft sensor of temperature. At last, in order to obtain a good robust optimal solution, a hybrid algorithm based on particle swarm optimization and genetic algorithm (HPSO-GA) is used. The new robust operation optimization method for molten steel temperature can optimize and guide the operation of LF refining to control the end temperature. At the same time, stable LF refining production is ensured effectively, the LF refining quality is improved, and energy saving and consumption reduction are promoted.

The article is organized as follows. In Sect. 2, an intelligent temperature prediction model based on a BP neural network is established by analyzing the LF refining process. In Sect. 3, aiming at the characteristics of industrial process data, a new ensemble method, AdaBoost.IR, is proposed to build the ensemble soft sensor model of molten steel temperature for improving the prediction performance. In Sect. 4, the robust operation optimization model of molten steel temperature in the LF refining process is described in detail, and the HPSO-GA solving algorithm is used to find the robust optimal solution. In Sect. 5, the proposed AdaBoost.IR soft sensor and robust operation optimization methods are tested on a 300 t LF of Baosteel. Finally, conclusions are given in Sect. 6.

2. Intelligent Temperature Prediction Model Based on BP Neural Network

At present, the ladle furnace is used extensively in the iron and steel industry. The main purpose of ladle furnace treatment is to produce qualified steel with the desired temperature and chemical composition for the specified steel grade when the ladle is taken over by downstream secondary metallurgy units or a continuous caster. The LF refining process is shown in Fig. 1. The refining production process includes steps 1 to 7. In the practical LF refining process, the molten steel temperature is measured by expendable thermocouples only two or three times in most cases; that is, there is no online measurement instrument for obtaining the molten steel temperature. If the results of sampling and temperature measurement in step 6 do not meet the requirements of the qualified steel grade, steps 5 and 6 are carried out again. In order to avoid this unnecessary operation and consumption, a molten steel temperature soft sensor model is established in the following paragraphs to realize online real-time measurement.

Fig. 1.

Schematic diagram for LF refining process.

2.1. Analysis of Main Factors

In order to establish the intelligent prediction model of temperature, the whole LF refining process is considered as an energy conservation system. Traditionally, the LF metallurgical process runs from ladle entry to ladle exit (Fig. 1). The whole LF refining process is an energy conservation system whose input energy is equal to its output energy. However, in the practical refining process the temperature is not always measured at the beginning of the process; the measurement may be done before or after the power is turned on, and the energy change from ladle entry to the first temperature measurement is neglected. Therefore, in order to ensure the energy balance of the whole system, we choose the time of the last temperature measurement before power supply as the start time of the energy conservation system, and this temperature is the initial temperature. Similarly, the ending time of the energy conservation system is the time of the end temperature measurement. The energy conservation system of the LF refining process is shown in Fig. 1. The energy changes in this system are considered in order to find the correlative factors that affect the molten steel temperature. By analyzing the thermodynamics and the conservation of energy during the LF metallurgical process, the energy gain of the LF is mainly due to the electric arc, and the energy loss mainly includes the following three parts. The first part is the heat exchange between the ladle furnace and its surroundings, including the ladle refractory wall and the top surface; this loss is relatively stable and increases with time, so it may be reflected by the refining time. The second part is the energy change due to additions, which includes the sum of heat exchanges and chemical reaction heats; in this part, the energy change caused by various metal alloys can be calculated from the parameters in Table 1 for a 300 t ladle furnace, and the slag addition also affects the energy change. The third part is the energy loss caused by argon purging. To sum up, the main factors that affect the molten steel temperature are as follows: the refining power consumption, the initial temperature, the heat effects of metal alloy additions, the adding amount of slag, the volume of argon purging, the weight of molten steel and the refining time.

Table 1. Temperature drop coefficients of metal alloys in a 300 t LF.
Addition   Temperature effect parameter ×10−2 (°C/kg)   Addition   Temperature effect parameter ×10−2 (°C/kg)
HcFeCr     −0.95                                         LcFeCr     −0.65
HcFeMn     −0.9                                          LcFeMn     −0.75
FeMo       −0.75                                         FeSi       +0.9
Al         +5.0                                          C          −2.5
FeNb       −0.35                                         Ni         −0.5
CaSi       −1.05                                         FeTi       −0.4
HCZ1       −1.0                                          FeAl       +1.0
LCCR       −0.65                                         Al-D       −0.5

2.2. Soft Sensor Model of Molten Steel Temperature Based on BP Network

For the purpose of building the intelligent sub soft sensor model of molten steel temperature, the BP (back propagation) neural network is applied. The BP network is the most widely used neural network paradigm and has been applied successfully in many studies across a broad range of areas. BP networks are multi-layered feedforward neural networks that are trained using the error back-propagation procedure, a supervised mode of training. The architecture of the BP network consists of an input layer, a hidden layer and an output layer. From the above analysis, the seven main factors that affect the temperature are taken as the inputs of the neural network, while the molten steel temperature is the single output of the soft sensor model. For better performance and accuracy of the prediction model, several BP networks will be aggregated in the next section.
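As a concrete illustration, the following Python sketch shows how such a sub soft sensor could be set up, using scikit-learn's MLPRegressor as a stand-in for the BP network with the 7 input factors, 15 hidden nodes and single temperature output reported later in Section 5.1; the feature names, activation and solver choices are illustrative assumptions, not the authors' implementation.

# Minimal sketch of the sub soft sensor: a single-hidden-layer BP-style network
# trained on the seven factors identified in Section 2.1.
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline

FEATURES = [
    "power_consumption", "initial_temperature", "alloy_heat_effect",
    "slag_amount", "argon_volume", "steel_weight", "refining_time",
]  # hypothetical column names for the seven main factors

def build_bp_soft_sensor(hidden_nodes: int = 15, seed: int = 0):
    """7-input, 15-hidden-node, 1-output network (Section 5.1 settings)."""
    return make_pipeline(
        StandardScaler(),
        MLPRegressor(hidden_layer_sizes=(hidden_nodes,),
                     activation="tanh",   # sigmoid-type activation, as in classic BP
                     solver="lbfgs",      # stable on small industrial data sets
                     max_iter=2000,
                     random_state=seed),
    )

# Usage (X: n x 7 matrix of the factors above, y: measured temperatures in degrees C):
# model = build_bp_soft_sensor()
# model.fit(X_train, y_train)
# t_pred = model.predict(X_test)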

3. Ensemble Temperature Prediction Model Based on AdaBoost.IR

Boosting is one of the most efficient ensemble strategies. It was first used for classification problems, and many researchers have proved that boosting can solve classification problems effectively; boosting algorithms are used widely in the field of classification. However, research on boosting algorithms for regression problems is much scarcer than for classification problems. Schapire and Freund proposed the AdaBoost.R algorithm to generate regression models.13) Drucker developed the AdaBoost.R2 algorithm, an ad hoc modification of AdaBoost.R,14) and first conducted experiments on regression problems with good results. However, in the traditional AdaBoost.R2 the average loss must not exceed 0.5; otherwise AdaBoost.R2 terminates, so its application to industrial prediction modeling is highly limited. Furthermore, the training samples in an industrial process inevitably contain data with large fluctuations, but AdaBoost.R2 is sensitive to noise because the weight update is proportional to the average loss function; it cannot meet the needs of industrial production. Aiming at these shortcomings of the loss function in AdaBoost.R2, Solomatine and Shrestha proposed the AdaBoost.RT algorithm to solve regression problems more efficiently.15) In AdaBoost.RT, a so-called absolute relative error threshold is introduced to divide the training examples into two classes (poorly and well-predicted examples), and the harder examples are given more chances to be trained. The threshold is therefore very important for prediction accuracy, and how to set the threshold of AdaBoost.RT becomes a new problem that users have to face. Tian et al. proposed an adaptive threshold method to improve AdaBoost.RT,6) but new parameters still have to be chosen before using AdaBoost.RT. In a word, suitable parameters are the important guarantee of good performance of AdaBoost.R, but both the traditional AdaBoost.R and the improved versions face the problem of setting parameters. This makes the AdaBoost.R methods hard to use in practical industrial production. Therefore, we must try to get rid of the blindness of setting parameters without mechanism analysis in industrial production.

Aiming at the above shortcomings of the traditional AdaBoost.R algorithms, an AdaBoost.IR algorithm with easier operation is proposed for industrial soft sensor regression problems. A slack variable δ is introduced in the new algorithm to distinguish between well-predicted and poorly predicted samples. The slack variable δ can be designed according to the allowable maximum absolute error (AE) of the outputs in the industrial application (here, the LF refining process), which is easy for industrial users through simple mechanism analysis. So it can overcome the blindness of setting parameters in other AdaBoost.R methods, and the operation of AdaBoost.IR becomes easier than that of the traditional AdaBoost.R for users in practical industries. In the new AdaBoost.IR, the weights of the data are changed according to the value of the absolute relative error (ARE). The weight of a poorly predicted sample is increased greatly, but if the poorly predicted sample is one with larger noise, its weight is increased only slowly; if the noise of a sample is large enough (ARE > 1), its weight is decreased. That is, the new AdaBoost.IR pays more attention to those poorly predicted samples whose information is hard to obtain by previous training, while the other poorly predicted samples with large noise are gradually ignored in later training iterations. Therefore, the new AdaBoost.IR strategy is more suitable for the needs of industrial production, and it provides a simpler way for industrial users to establish an accurate prediction model using the AdaBoost theory. The details of the new AdaBoost.IR are described as follows:

Input:

1) Sequence of m examples S = [(a1,b1),...,(am,bm)], where the output b ∈ R.

2) Design a sub prediction learning machine.

3) Integer T specifying number of iterations (machines).

4) The slack variable δ.

Initialize:

1) t=1.

2) D1(i)=1/m, ∀i=1,2,...,m. Here D(i) is the weight of (ai,bi).

For: while t ≤ T

1) Choose k data (train_data) from S according to Dt(i), k ≤ m.

2) Train the learning machine and obtain a prediction model: gt(a)→b.

3) Calculate the absolute error of every data (ai,bi):   

AEt(i) = | gt(ai) − bi |   (1)

4) Calculate the absolute relative error of every data (ai,bi):   

AREt(i) = | ( gt(ai) − bi ) / bi |   (2)

5) Calculate the root mean square error of (ai,bi) according to the prediction model:   

RMSEt = sqrt( (1/k) Σ_{i=1}^{k} ( gt(ai) − bi )² )   (3)

6) Update the weight:   

Dt+1(i) = ( Dt(i) / Zi ) ×  AREt(i)        if AEt(i) ≤ δ
                            1              if (ai, bi) is not chosen
                            1 / AREt(i)    if AEt(i) > δ        (4)

Here the Zi is a normalization factor.

7) set t=t+1.

Output: Output the final hypothesis:   

gfin(a) = [ Σt ( 1 / RMSEt ) gt(a) ] / [ Σt ( 1 / RMSEt ) ]   (5)

In the beginning, the m examples S = [(a1,b1),...,(am,bm)] are supplied as the data set. The molten steel soft sensor model based on the BP network in Section 2 is used as the sub prediction learning machine. The parameter T, the number of iterations of the algorithm, is selected. The slack variable δ is selected according to the characteristics of the output of the prediction model in the practical industrial process. Initially, D1(i) = 1/m is assigned to each sample (ai,bi) of S, where i = 1,2,...,m.

In each iteration t = 1,2,...,T, the training data train_data_t = [(a1,b1),(a2,b2),...,(ak,bk)], k ≤ m, are selected randomly from the data set according to Dt(i), i = 1,2,...,m. The train_data_t are used to train the BP prediction learning machine, and a soft sensor model gt(a)→b is obtained. Then the absolute error (Eq. (1)), the absolute relative error (Eq. (2)) and the RMSE (Eq. (3)) of every training datum can be calculated, and the weight Dt+1(i) is updated according to Eq. (4).

There are two relationships between the slack variable δ and the k samples that have been chosen for training: 1) When AEt(i) ≤ δ, the sample has been trained well, and we call it a well-predicted example. In this case AREt(i) lies in (0, 1), so the weight of sample i is multiplied by AREt(i) to decrease it. 2) When AEt(i) > δ, the sample has not been trained well in this iteration, and we call it a poorly predicted example. If AREt(i) ≤ 1, then the larger AREt(i) is, the smaller 1/AREt(i) becomes, and the weight receives only a small increase. In the other case, AREt(i) > 1, the sample has large noise and is treated as an outlier, and its weight is decreased so that it is trained less often. In a word, in the next iteration the weights of poorly predicted samples (excluding outliers) are increased for more training, and the interference of outliers is avoided efficiently.

After T hypotheses have been generated, one for each train_data set, the RMSE is used to reflect the performance of each sub training machine (BP network), and the final hypothesis gfin is obtained by the weighted combination of all the composite hypotheses as in Eq. (5).
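To make the procedure above concrete, the following Python sketch implements Eqs. (1)-(5) around a user-supplied sub learning machine (for example the build_bp_soft_sensor sketch of Section 2.2); the sampling details and numerical safeguards are our own assumptions rather than the authors' code.

# A sketch of AdaBoost.IR following Eqs. (1)-(5). `make_learner` returns a fresh
# regressor with fit/predict methods; delta is the allowable maximum absolute
# error (8 degrees C for the LF case in Section 5.1).
import numpy as np

def adaboost_ir_fit(X, y, make_learner, T=20, k=None, delta=8.0, seed=0):
    rng = np.random.default_rng(seed)
    m = len(y)
    k = k or m                          # number of samples drawn per iteration, k <= m
    D = np.full(m, 1.0 / m)             # initial weights D1(i) = 1/m
    models, rmses = [], []
    for t in range(T):
        idx = rng.choice(m, size=k, replace=True, p=D)    # draw train_data by D_t
        model = make_learner()
        model.fit(X[idx], y[idx])
        pred = model.predict(X)
        ae = np.abs(pred - y)                             # Eq. (1)
        are = np.abs((pred - y) / y)                      # Eq. (2)
        rmse = np.sqrt(np.mean((pred[idx] - y[idx])**2))  # Eq. (3), over chosen data
        # Eq. (4): shrink well-predicted samples, grow hard (non-outlier) ones,
        # leave unchosen samples unchanged before renormalization
        factor = np.ones(m)
        chosen = np.zeros(m, dtype=bool)
        chosen[idx] = True
        well = chosen & (ae <= delta)
        poor = chosen & (ae > delta)
        factor[well] = are[well]
        factor[poor] = 1.0 / np.maximum(are[poor], 1e-12)
        D = D * factor
        D /= D.sum()                                      # normalization factor Z
        models.append(model)
        rmses.append(rmse)
    return models, np.array(rmses)

def adaboost_ir_predict(models, rmses, X):
    """Eq. (5): combine sub models weighted by 1/RMSE_t."""
    w = 1.0 / np.maximum(rmses, 1e-12)
    preds = np.stack([m.predict(X) for m in models])
    return (w[:, None] * preds).sum(axis=0) / w.sum()

# Usage: models, rmses = adaboost_ir_fit(X_train, y_train, build_bp_soft_sensor, T=20, delta=8.0)
#        t_pred = adaboost_ir_predict(models, rmses, X_test)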

4. Robust Operation Optimization of Molten Steel Temperature in LF

In the practical LF refining process, the operation is controlled based on the operators' experience, which leads to low production efficiency and high cost, and workers need many years to accumulate such operating experience. So it is urgent to establish an optimal operation model and solve it to obtain the best operating parameters; with these parameters, stable production can be ensured. In addition, owing to the constantly changing refining process and environment, and to engineering restrictions, it is hard to design a suitable optimal operation model. That is, we unavoidably have to face uncontrollable errors or noise during the LF refining process. In this case, small changes of the design parameters can lead to a failure of the objective function, and the globally optimal solution becomes meaningless because it is sensitive to small changes. Therefore, it is necessary to consider a robust optimization solution for the LF refining process.

4.1. Robust Optimization Model

Considering the stability of production, the optimization model is described, based on the above temperature soft sensor, as follows:

F(X) = min | ŷ − y |
subject to  x_{i-min} ≤ x_i ≤ x_{i-max},   i = 1, 2, ..., n        (6)

where F(X) is the objective function to be minimized, ŷ = f̂(X), X = [x1, x2, ..., x7]^T, f̂(X) is the molten steel temperature predicted by the AdaBoost.IR soft sensor, y is the objective (target) molten steel temperature, x1, x2, ... are the operation parameters, and x_{i-min} and x_{i-max} are the lower and upper bounds of x_i.

On the other hand, lower energy consumption is also an important objective of the LF refining process for steel companies, so the lowest refining power consumption is added as an objective to the optimization model of molten steel temperature. The objectives of production stability and lower energy consumption are sometimes mutually restrictive. For this kind of problem, an approximate Pareto front would be obtained by a multi-objective optimization algorithm, giving many non-inferior solutions, and a suitable solution would have to be chosen by operators with rich experience, which leads to fluctuations in production when operators change. To overcome this shortcoming, we describe the multi-objective optimization model as follows:

F(X) = min{ α | ŷ − y | + (1 − α) x1 }
subject to  x_{i-min} ≤ x_i ≤ x_{i-max},   i = 1, 2, ..., 7        (7)

where 0 < α < 1.

Additionally, the LF refining production process includes many physical and chemical reactions in the ladle, and the operation parameters move up or down during refining. Here we further consider the fluctuations of the controllable parameters identified in Section 2. For a given solution (i.e. a set of control variables), 10 samples with fluctuations are generated randomly. Then the objective function value of the solution is calculated for each sample, and finally the average and variance of these 10 objective function values are used as the evaluation of this solution. The robust objective optimization model is described as:

F(X) = min{ α [ (1 − β) fAE(X + σ) + β fV(X + σ) ] + (1 − α) x1 }
subject to  x_{i-min} ≤ x_i ≤ x_{i-max},   i = 1, 2, ..., 7        (8)

where F(X) is the robust objective function to be minimized, fAE and fV are respectively the average error and the variance obtained for a perturbed parameter set X, the fluctuation σ of each parameter is assumed to be within 5%, and β is an adaptive coefficient that changes linearly as the iterations increase. Here the Monte Carlo method is used to evaluate the performance of a solution.

Through the above analysis in Section 2 of the main factors that affect the molten steel temperature, the factors can be classified into controllable and non-controllable variables, as described in detail in Table 2. x5, x6 and x7 are regarded as non-controllable variables, while x1,...,x4 are controllable during the whole LF refining process. Their fluctuation scopes are also shown in Table 2.

Table 2. The list of operation parameters.
xi   Operation parameter                      Controllable or non-controllable   Fluctuation scope
x1   refining power consumption               controllable                       0–220
x2   refining time                            controllable                       35–900
x3   adding amount of slag                    controllable                       −7000–10000
x4   volume of argon purging                  controllable                       0–33000
x5   heat effects of metal alloy additions    non-controllable                   –
x6   initial temperature                      non-controllable                   –
x7   weight of molten steel                   non-controllable                   –
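For illustration, the robust objective of Eq. (8) with the Monte Carlo evaluation described above could be coded as in the following sketch; the predictor handle, the target temperature, the weights α and β, and the scaling of the power term x1 are placeholders, since the paper does not give these implementation details.

# Sketch of the robust objective of Eq. (8): for a candidate operation vector
# X = [x1,...,x7], ten fluctuated copies of the controllable parameters x1-x4
# (within +/-5%, Table 2) are generated, the predicted end temperature is
# compared with the target, and the mean error and variance are combined with
# the (scaled) power-consumption term x1.
import numpy as np

CONTROLLABLE = [0, 1, 2, 3]              # indices of x1..x4 (Table 2)

def robust_objective(X, predict_temperature, y_target,
                     alpha=0.5, beta=0.5, sigma=0.05,
                     n_samples=10, x1_scale=220.0, seed=0):
    rng = np.random.default_rng(seed)
    errors = []
    for _ in range(n_samples):
        Xs = np.array(X, dtype=float)
        # perturb only the controllable parameters, within +/- sigma (5%)
        noise = rng.uniform(-sigma, sigma, size=len(CONTROLLABLE))
        Xs[CONTROLLABLE] *= (1.0 + noise)
        errors.append(abs(predict_temperature(Xs) - y_target))
    f_ae = np.mean(errors)               # average-error term fAE
    f_v = np.var(errors)                 # variance term fV
    power_term = X[0] / x1_scale         # x1 scaled to [0, 1] (an assumption)
    return alpha * ((1 - beta) * f_ae + beta * f_v) + (1 - alpha) * power_term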

4.2. HPSO-GA for Robust Optimal Solution

After building the above robust multi-objective optimization model, its robust optimal solution should be solved. The Genetic Algorithm (GA) is one of the well-known evolutionary algorithms17) and is used widely to find optimal solutions in various fields. In GA, a population of potential solutions to an optimization problem is maintained, and new solutions are generated by a variety of genetic operators, including recombination, selection and mutation, during each iteration. GA has many good characteristics, such as high parallelism, strong randomness and adaptive ability, but it also has some shortcomings: the speed of convergence is influenced by the initial values, and the global optimization performance is not good enough, so the global optimal solution may not be found, especially for multi-objective optimization problems. PSO (Particle Swarm Optimization), by contrast, has fast convergence, good global search performance and good anti-interference performance, so it is very suitable to combine PSO with GA to overcome the shortcomings of the traditional GA. The core idea of PSO is that if a particle discovers a promising new solution, all the other particles move closer to it, and the region is then explored more thoroughly. Each particle in the swarm is updated using Eqs. (9) and (10). Here the swarm consists of n particles, and r1, r2 are random numbers in the range (0,1):

vid^{k+1} = ω vid^{k} + c1 r1 ( pid^{k} − xid^{k} ) + c2 r2 ( pgd^{k} − xid^{k} )   (9)

for all d ∈ {1, 2, ..., n}, where vid is the velocity of the dth dimension of the ith particle, and c1 and c2 denote the acceleration coefficients. pid represents the best position found so far by the ith particle, and pgd represents the best position found so far by the whole swarm. The new position of a particle is calculated using

xid^{k+1} = xid^{k} + vid^{k+1}   (10)
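As a minimal illustration, Eqs. (9) and (10) translate directly into the following update step; the values of ω, c1 and c2 shown here are illustrative, not the paper's settings.

# One velocity/position update per Eqs. (9)-(10) for a swarm stored as NumPy
# arrays (rows = particles, columns = dimensions).
import numpy as np

def pso_step(x, v, p_best, g_best, omega=0.7, c1=2.0, c2=2.0, rng=None):
    rng = rng or np.random.default_rng()
    r1 = rng.random(x.shape)
    r2 = rng.random(x.shape)
    v_new = omega * v + c1 * r1 * (p_best - x) + c2 * r2 * (g_best - x)  # Eq. (9)
    x_new = x + v_new                                                    # Eq. (10)
    return x_new, v_new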

The new hybrid algorithm combining PSO with GA fuses their advantages and has fast learning speed and good self-adaptive ability for the robust optimization problem. The details of HPSO-GA are described as follows:

Initialize:

1) The number of individuals: pop_size.

2) The number of individuals retained after evolution in PSO:M.

3) PSO weight coefficients: c1 and c2.

4) Crossover and mutation probability of GA: pc, pm.

5) Iterations of the hybrid algorithm: max_gen.

For: while j ≤ max_gen

1) Evaluate the desired robust optimization fitness function according to Eq. (8) for every individual.

2) Rank the individuals in the population from largest to smallest according to their fitness values, and set the new population pnew to empty.

3) Choose the first 40% of individuals from the original population and save them as population p1; save the remaining individuals as population p2.

4) Apply crossover operator using Eq. (11).   

yi = xi^1 + pc ( xi^1 − xi^2 ),   i = 1, 2, ..., n/2   (11)

where yi is the position of the new offspring, xi^1 and xi^2 are individuals selected randomly from p1 and p2 respectively, pc is the crossover probability, and n is the size of the population.

5) Obtain a mutated solution according to Eq. (12):

yi = x_{i-min} + pm ( xi − x_{i-min} ),   i = 1, 2, ..., 0.1n   (12)

6) Generate the new population pnew: the first 0.4n individuals are selected from p1, the next 0.5n individuals are obtained by the crossover operator, and the rest are generated by mutation.

7) Set p = pnew.

8) Calculate the fitness values of the individuals according to Eq. (13). Find the best value of the group, pGbest(j), and the best value of each individual, pIbest(j).

g(x) = min{ α [ (1 − β) fAE(X + σ) + β fV(X + σ) ] + (1 − α) x1 + d }   (13)

where d is defined as d = sqrt( Σ_{i=1}^{D} ( xi − xo )² ), xi is the position of the ith individual, xo is the center value of xi, and D is the number of dimensions.

9) Update the velocity and position of the individuals according to Eqs. (9) and (10) respectively.

10) If there is no improvement, jump out.

Output: output the best solution.

As the foregoing steps show, PSO is used to initialize the population and calculate the objective function values according to Eq. (8). Then the GA strategy is used to obtain a new population with certain probabilities; the genetic principle includes selection, crossover and mutation, and the objective function values of the new population are calculated. However, there is no fixed rule for the crossover and mutation operations of the genetic algorithm, so the particles may move far away from the optimal solution even though they remain within the required range. Therefore, the fitness value is calculated by Eq. (13), where the Euclidean distance between two points is used to guide the search for the optimal solution. Finally, whether the iterations should be stopped is determined according to the termination conditions: the maximum number of function evaluations, or no improvement in the objective function for a successive number of generations.
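The following compact Python sketch outlines this hybrid loop under our own assumptions: the split into 40% elites, 50% crossover offspring (Eq. (11)) and 10% mutants (Eq. (12)) follows steps 2)-6), while the Eq. (13) distance penalty and the no-improvement stopping test are omitted for brevity, and all parameter values are illustrative rather than the paper's tuned settings.

# Compact HPSO-GA sketch: GA-style regeneration of the population followed by a
# PSO velocity/position update toward the personal and global bests.
import numpy as np

def hpso_ga(objective, lb, ub, pop_size=100, max_gen=200,
            pc=0.8, pm=0.1, omega=0.7, c1=2.0, c2=2.0, seed=0):
    rng = np.random.default_rng(seed)
    lb, ub = np.asarray(lb, float), np.asarray(ub, float)
    dim = len(lb)
    x = rng.uniform(lb, ub, size=(pop_size, dim))
    v = np.zeros_like(x)
    p_best = x.copy()
    p_best_f = np.array([objective(xi) for xi in x])
    g_best = p_best[p_best_f.argmin()].copy()
    g_best_f = p_best_f.min()
    for gen in range(max_gen):
        f = np.array([objective(xi) for xi in x])
        order = np.argsort(f)                         # minimization: best first
        n1 = int(0.4 * pop_size)
        p1, p2 = x[order[:n1]], x[order[n1:]]
        # crossover, Eq. (11): pair random parents drawn from p1 and p2
        n_cross = pop_size // 2
        i1 = rng.integers(0, len(p1), n_cross)
        i2 = rng.integers(0, len(p2), n_cross)
        offspring = p1[i1] + pc * (p1[i1] - p2[i2])
        # mutation, Eq. (12): pull a few individuals toward the lower bound
        n_mut = pop_size - n1 - n_cross
        mut_idx = rng.integers(0, pop_size, n_mut)
        mutants = lb + pm * (x[mut_idx] - lb)
        x = np.clip(np.vstack([p1, offspring, mutants]), lb, ub)
        # PSO update, Eqs. (9)-(10), guided by the best solutions found so far
        f = np.array([objective(xi) for xi in x])
        improved = f < p_best_f
        p_best[improved], p_best_f[improved] = x[improved], f[improved]
        if f.min() < g_best_f:
            g_best, g_best_f = x[f.argmin()].copy(), f.min()
        r1, r2 = rng.random(x.shape), rng.random(x.shape)
        v = omega * v + c1 * r1 * (p_best - x) + c2 * r2 * (g_best - x)
        x = np.clip(x + v, lb, ub)
    return g_best, g_best_f

# Usage sketch (7-element bounds from Table 2; 1600 degrees C is an illustrative target):
# best_x, best_f = hpso_ga(lambda X: robust_objective(X, predictor, 1600.0), lb, ub)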

5. Experiments

5.1. Molten Steel Temperature Soft Sensor Model Based on AdaBoost.IR

Three hundred production data records from a 300 t LF of the Baosteel Company are used to test the new molten steel temperature soft sensor model based on AdaBoost.IR. Fifty records are randomly selected from this data set as testing data, and the others are used as training data. A BP model with a three-layer network structure is used as the sub learning machine. There are 7 input nodes, corresponding to the factors that affect the molten steel temperature, and the output node is the temperature. The number of hidden nodes is 15 according to experiments and experience. The slack variable δ is designed according to the allowed maximum error of temperature measurement, which is 8°C in the real LF refining process, so the slack variable δ is set to 8 in AdaBoost.IR.

Firstly, the performance trend of AdaBoost.IR is studied when the parameter T changes. Experiments are performed with different values of T to analyze the effect on the performance of AdaBoost.IR, and the differences among AdaBoost.R2, AdaBoost.RT and AdaBoost.IR are also compared. Figure 2 shows the performance of the three AdaBoost.R-type methods with different values of T. In AdaBoost.IR, the RMSE on the testing data becomes smaller as T increases. When T increases to 15, the change of RMSE becomes smaller and the testing error is smaller too, and this trend is maintained until T = 23. When T increases beyond 23, the testing error becomes larger again. The trend of performance with increasing T in AdaBoost.RT is similar to that in AdaBoost.IR, but in AdaBoost.R2 the performance fluctuates over a larger range with T, so it is hard to obtain a suitable T. According to the relationship between the RMSE and the iteration number T, we can conclude that when the value of T is between 15 and 23, AdaBoost.IR has the best and most stable performance.

Fig. 2.

The RMSE of AdaBoost.IR, AdaBoost.RT and AdaBoost.R2 with different value of T. (Online version in color.)

Next, further tests of soft sensor performance are carried out among the soft sensor models based on the BP network, AdaBoost.R2, AdaBoost.RT and AdaBoost.IR. The structure and parameters of the BP network are the same as those of the sub learning machine in AdaBoost.IR, AdaBoost.RT and AdaBoost.R2, and the number of machines T is 20. The prediction results of the different soft sensor models are shown in Figs. 3, 4, 5 and 6 separately. From the performance of the BP model in Fig. 3 we can see that better prediction accuracy is obtained for data with smaller noise, but the performance for data with large noise is not good. In Fig. 4, the prediction accuracy of the AdaBoost.R2 soft sensor model is greatly improved compared with the BP soft sensor model; some data with large noise can be predicted with high accuracy, but the generalization ability is not satisfactory. As shown in Fig. 5, the prediction of molten steel temperature by AdaBoost.RT has higher accuracy than the former methods; however, the experiment needs much more time to adjust the parameters, and according to the test process the soft sensor model cannot be built when the threshold is more than 0.1. Figure 6 shows the prediction of molten steel temperature by AdaBoost.IR. The experiment proves that the AdaBoost.IR method can increase the prediction accuracy of the soft sensor model and fit the noisy industrial data; besides, the operation of the algorithm is easy and suitable for industrial production. Figure 7 compares the absolute errors of the above models. The prediction accuracy of AdaBoost.IR is the best among these soft sensor models, and especially when the industrial data have high fluctuation, AdaBoost.IR still presents the best performance.

Fig. 3.

The temperature prediction results of soft sensor based on BP network. (Online version in color.)

Fig. 4.

The temperature prediction results of soft sensor based on AdaBoost.R2. (Online version in color.)

Fig. 5.

The temperature prediction results of soft sensor based on AdaBoost.RT. (Online version in color.)

Fig. 6.

The temperature prediction results of soft sensor based on AdaBoost.IR. (Online version in color.)

Fig. 7.

The absolute errors of different temperature soft sensor models. (Online version in color.)

To test the performance of the different soft sensor regression models, five evaluation indicators are used: RMSE, MRE, MAXE, MINE and accuracy. They are calculated by Eqs. (14), (15), (16), (17) and (18):

RMSE = sqrt( (1/k) Σ_{i=1}^{k} ( f(xi) − yi )² )   (14)

MRE = max_{i=1,...,k} | ( f(xi) − yi ) / yi |   (15)

MAXE = max_{i=1,...,k} ( f(xi) − yi )   (16)

MINE = min_{i=1,...,k} ( f(xi) − yi )   (17)

accuracy = ( Na / Nw ) × 100%   (18)

where k is the number of samples, Na is the number of furnaces with absolute error < 5°C, Nw is the total number of tests, f(xi) is the prediction of the soft sensor, and yi is the real temperature. Table 3 shows the performance comparison among the soft sensor models based on BP, AdaBoost.R2, AdaBoost.RT and AdaBoost.IR on the testing data. According to the comparison of the five evaluation indicators, the error of the AdaBoost.IR model is limited to the smallest range. That is, the new AdaBoost.IR soft sensor can overcome the sensitivity to noisy industrial data and improve the prediction accuracy; it also has good generalization ability, and its prediction results can meet the demands of industrial production.

Table 3. The comparison of different modeling methods by five evaluation indicators with testing data.
Modeling method     RMSE      MRE      MAXE      MINE       accuracy
BP neural network   12.2589   0.0253   40.0745   −35.4715   71.42%
AdaBoost.R2         97.2340   0.0158   25.3445   −13.3232   88.10%
AdaBoost.RT         6.7358    0.0124   16.8642   −19.8627   90.48%
AdaBoost.IR         5.5106    0.0097   15.575    −10.6655   92.85%
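For reference, the five indicators of Eqs. (14)-(18) can be computed as in the sketch below for a vector of predictions and measured temperatures; the 5°C accuracy band follows the definition of Na above.

# Evaluation indicators of Eqs. (14)-(18) for a soft sensor on testing data.
import numpy as np

def evaluate_soft_sensor(pred, y, band=5.0):
    err = pred - y
    return {
        "RMSE": float(np.sqrt(np.mean(err**2))),               # Eq. (14)
        "MRE": float(np.max(np.abs(err / y))),                  # Eq. (15)
        "MAXE": float(np.max(err)),                             # Eq. (16)
        "MINE": float(np.min(err)),                             # Eq. (17)
        "accuracy": float(np.mean(np.abs(err) < band)) * 100,   # Eq. (18), in %
    }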

5.2. Solving Robust Optimization of Molten Steel Temperature by HPSO-GA

After the successful temperature prediction by the ensemble soft sensor, the LF refining operation can be optimized using the hybrid of particle swarm optimization and genetic algorithm (HPSO-GA). In this section, the performance of HPSO-GA for finding robust optimization solutions is tested first, and then HPSO-GA is used to find the robust optimal solution for the operation parameters of the LF refining production process.

Firstly, the performance of HPSO-GA for robust optimization is tested. We first describe the benchmark test problems and the parameter settings of HPSO-GA used in the experiments; then the efficiency of the main components of HPSO-GA is compared with other evolutionary algorithms on the benchmark test problems. In the experiments, two benchmark robustness test problems (RTP1 and RTP2) are selected. The definitions of the problems are given in Eqs. (19) and (20):

f(x) = { −(x + 1)² + 1,        −2 ≤ x < 0
       { 2.6 − 8 | x − 1 |,    0 ≤ x < 2        (19)

f(x) = { −0.5·exp( −0.5 (x − 0.4)² / 0.05² ),   0 ≤ x < 0.4696
       { −0.6·exp( −0.5 (x − 0.5)² / 0.02² ),   0.4696 ≤ x < 0.5304
       { −0.5·exp( −0.5 (x − 0.6)² / 0.05² ),   0.5304 ≤ x ≤ 1        (20)
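For reproducing such tests, the two benchmark problems can be written as plain Python functions as below, following our reconstruction of the piecewise definitions in Eqs. (19) and (20).

# The two benchmark robustness test problems used for Figs. 8 and 9.
import numpy as np

def rtp1(x):
    """Eq. (19): broad peak near x = -1 and a sharp (non-robust) peak at x = 1, on [-2, 2)."""
    x = np.asarray(x, dtype=float)
    return np.where(x < 0.0, -(x + 1.0)**2 + 1.0, 2.6 - 8.0 * np.abs(x - 1.0))

def rtp2(x):
    """Eq. (20): a narrow deep valley at x = 0.5 between two broad valleys, on [0, 1]."""
    x = np.asarray(x, dtype=float)
    broad_l = -0.5 * np.exp(-0.5 * (x - 0.4)**2 / 0.05**2)
    narrow  = -0.6 * np.exp(-0.5 * (x - 0.5)**2 / 0.02**2)
    broad_r = -0.5 * np.exp(-0.5 * (x - 0.6)**2 / 0.05**2)
    return np.where(x < 0.4696, broad_l, np.where(x < 0.5304, narrow, broad_r))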

In HPSO-GA, the population size is 100, and the ranges of each dimension used in the initial population generation are set according to their scopes. The parameters used in the crossover and mutation operators are set to their suggested values, according to Eqs. (11) and (12). In the experiment, a total of 100 independent runs are performed for each problem to collect the statistical performance of each algorithm. The optimization results for RTP1 and the comparison between PSO (a, c) and HPSO-GA (b, d) are given in Fig. 8. The comparison of general optimization (a, c) and robust optimization (b, d) for RTP2 is given in Fig. 9. We can see that HPSO-GA searches for the optimal solution more quickly than PSO and finds a more accurate result. The difference between general optimization and robust optimization is also illustrated: the characteristic of robust optimization is that the variables in the objective function carry disturbances. The experimental results demonstrate that the robust optimization formulation helps to find a robust optimal solution that is more stable and more effective than the general optimal solution.

Fig. 8.

The comparison of PSO and HPSO-GA for RTP1.

Fig. 9.

The comparison of general optimization and robust optimization for RTP2.

Secondly, the proposed HPSO-GA is used to solve for the robust optimal solution of the LF refining process operation. The robust operation optimization model has been designed in Section 4.1 as Eq. (8). The process of robust operation optimization of molten steel temperature based on the AdaBoost.IR soft sensor is shown in Fig. 10. The molten steel temperature and the corresponding operation parameters are sampled for 50 furnaces during LF refining production. The end temperatures before optimization, after general optimization and after robust optimization are compared, and the results are shown in Fig. 11. Figure 12 shows in detail the temperature absolute relative error (ARE) before and after optimization. The absolute relative error after optimization is smaller than before optimization, and the absolute relative error after robust optimization is less than 0.005. Obviously, the end temperature is more stable when the robust operation parameters obtained by the above robust optimization method are used. That is to say, the optimal solution of the operation parameters can guide LF refining efficiently. The stable temperature ensures the quality of the LF products and meets the demands of continuous casting; furthermore, it helps enterprises improve production efficiency, save energy and reduce cost.

Fig. 10.

The process of robust operation optimization of molten steel temperature.

Fig. 11.

The comparison of temperature before optimization, after general optimization and after robust optimization. (Online version in color.)

Fig. 12.

The absolute relative error of temperature before optimization, after general optimization and after robust optimization. (Online version in color.)

6. Conclusion

For more efficient LF refining production, a robust operation optimization method is proposed to obtain the optimal operation parameters. An AdaBoost.IR algorithm is proposed to establish an accurate soft sensor model of the molten steel temperature. On this basis, the robust operation optimization model is described by considering the fluctuations during the LF refining process and the demands of production, and HPSO-GA is used to solve the robust optimization model. Real production data are used to test the performance of the above methods. The experimental results show that the new method can ensure stable production, improve product quality and save energy. In future research, the operation optimization of alloy addition will be considered further.

Acknowledgement

This research is supported by the National Natural Science Foundation of China (Grant Nos. 61403277 and 71602143).

References
 
© 2017 by The Iron and Steel Institute of Japan