ISIJ International
Online ISSN : 1347-5460
Print ISSN : 0915-1559
ISSN-L : 0915-1559
Regular Article
Quantum Optimization with Lagrangian Decomposition for Multiple-process Scheduling in Steel Manufacturing
Kouki Yonaga, Masamichi Miyama, Masayuki Ohzeki, Koji Hirano, Hirokazu Kobayashi, Tetsuaki Kurokawa

2022, Vol. 62, No. 9, pp. 1874-1880

Abstract

Steel manufacturing involves multiple processes, and optimizing the production schedule is essential for improving the quality, cost, and delivery of products. However, the corresponding optimization problems are usually NP-hard and generally intractable even for modern high-performance computers. Recent advances in quantum computers, such as those developed by D-Wave Systems, have opened new possibilities for handling this type of optimization problem. Nevertheless, a major obstacle to the application of currently available quantum computers is the relatively small number of quantum bits, which limits the feasible problem size. To overcome this obstacle, we propose an algorithm for solving optimization problems based on Lagrangian decomposition and coordination. D-Wave quantum computers can explore diverse solutions simultaneously, and we leverage this unique characteristic to obtain the upper bound for Lagrangian decomposition and coordination. We apply the proposed algorithm to a simplified problem of two-process production scheduling. By decomposing the problem into two stages for the processes, the number of steel products that can be handled increases from five to eight (60% increase). The optimal solutions are reached even for the extended case of the eight products. The proposed algorithm is a promising technique for enabling the application of quantum computers to real-world problems by thoroughly exploiting quantum bits.

1. Introduction

Steel is a fundamental material supporting the development of our society. In steel manufacturing, raw materials, mainly iron ore and coal, are transformed into various types of steel products through a series of processes, including smelting, refining, continuous casting, rolling, annealing, and surface treatment.

For steel companies that produce various types of products, optimizing the production planning and scheduling of multiple processes is challenging. Across the multiple production processes of steel, the unit volume per batch gradually decreases. For example, the typical batch volume upstream, at steelmaking, is approximately 300 tons, whereas the volume downstream, at surface treatment, is approximately 10 tons. Therefore, a production unit in an upstream process contains different products or semi-products that diverge across downstream processes, and the grouping and sequencing of products or semi-products should be jointly optimized across all process layers. Such optimization problems must be solved under several constraints. For instance, two products or semi-products requiring very different processes or different specifications, such as chemical components, widths, and thicknesses, cannot be grouped. On the other hand, grouping products with similar processing conditions is favorable for quality, productivity, and cost. Nevertheless, excessive grouping may lead to stock accumulation and delays, because grouping implies waiting for other products that undergo similar processes before proceeding.

The tradeoff between grouping and due times should be optimized, but even a single process is difficult to optimize. When multiple processes are considered, both the scale and the complexity of the problem increase, as the grouping compatibility between any two products differs depending on the process. Such complexity hinders the development of an efficient algorithm for real production scheduling. A solution was proposed in previous work,1) where scheduling is solved using a two-step algorithm. First, the grouping size (lot size) is dynamically decided using Lagrangian decomposition and coordination (LDC). Then, a feasible schedule is generated heuristically considering the applicable constraints.

Recently, quantum computers have emerged to solve challenging combinatorial optimization problems. Their use of quantum superposition allows them to search in parallel over numerous candidate solutions, potentially yielding better solutions in much shorter times than classical computers.

In particular, the quantum computers developed by D-Wave Systems are promising for quantum optimization because they offer thousands of quantum bits (qubits). A D-Wave quantum computer can solve quadratic unconstrained binary optimization (QUBO) problems, and several combinatorial optimization problems have already been addressed using these machines.2,3,4,5,6)

Despite the promise of quantum computers, their application in industry is limited by the current inability to handle large-scale problems, owing to the scarcity of qubits, the limited connectivity between qubits, and other factors. The connectivity is especially problematic when a problem requires strong correlations between one qubit and many others. To represent this type of correlation, many redundant physical qubits must be assigned to encode a single logical variable carrying the numerous connections. Scale limitations can be alleviated by adopting a hybrid approach that combines quantum annealers with classical computers.7,8) Therefore, we apply LDC to a production scheduling problem comprising two processes in a steel factory. Specifically, the optimization problem over the two processes is decomposed into one optimization problem per process, and the constraints coupling the problems are handled using LDC. We verify that LDC determines the optimal solutions in all the experiments. For certain problems, LDC can reduce the required number of qubits compared with conventional algorithms.

The remainder of this paper is organized as follows. In Sec. 2, we define the job scheduling problem for two processes and present the QUBO formulation required to use the D-Wave quantum computers. Section 3 details the proposed algorithm, with a focus on the hybrid LDC method. Experimental results are reported in Sec. 4. Finally, we discuss the findings in Sec. 5 and draw conclusions in Sec. 6.

2. Optimization Problem: Job Scheduling for Two Processes

We deal with a simplified job shop scheduling problem for two processes that represents the features of a variety of processes in steel manufacturing. The unit of a job corresponds to a steel product (e.g., coil, plate). All products are sequentially processed in the first and second processes. A processing group and a due time are assigned to each job. The optimization attempts to find a schedule in which products finish close to their due times while minimizing the number of processing-group changes during a production series, thereby reducing costs.

2.1. Problem Definitions

We consider the following definitions in the optimization problem:

1) N products are finished through sequential manufacturing processes 1 and 2.

2) The time required to process one product is the same for all products and for both processes, and the time axis is normalized by this processing time. The problem then reduces to distributing N products into timeslots dedicated to processes 1 and 2, as shown in Fig. 1.

3) The transfer time from process 1 to process 2 is neglected.

4) If a product undergoes process 1 at timeslot s, it undergoes process 2 at timeslot s or later.

5) Considering manufacturing efficiency, N products are distributed into N consecutive timeslots for both processes 1 and 2.

6) Without loss of generality, process 1 handles products from the first timeslot, whereas process 2 may start later. Therefore, we prepare (N + e) timeslots for process 2, where e depends on the due times, as shown in Fig. 1.

7) For every product, the due time and the production groups in processes 1 and 2 are defined. A cost is incurred when two adjacent products belong to different production groups. In real production, this cost corresponds to changes in the process conditions.

8) The formulated problem aims to find the optimal schedule considering the tradeoff between due-time deviations and group mismatches (a minimal data representation of such an instance is sketched after Fig. 1).

Fig. 1.

Simplified job scheduling for two processes. The valid timeslots are highlighted in gray.
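As a concrete illustration of these definitions, an instance is fully specified by its due times and its per-process production groups. The sketch below encodes the five-product case of Table 1 in plain Python; the names due_time, group, and mismatch are our own and not part of the original formulation.

    # Five-product instance from Table 1 (list positions 0-4 correspond to products 1-5).
    N = 5
    due_time = [5, 1, 4, 2, 3]          # d_i
    group = {1: [2, 3, 4, 2, 4],        # production group in process 1
             2: [4, 6, 6, 8, 1]}        # production group in process 2

    def mismatch(p, i, j):
        # L_{p,i,j}: 1 if products i and j belong to different groups
        # in process p, 0 if they belong to the same group.
        return 0 if group[p][i] == group[p][j] else 1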

2.2. Problem Formulation

Multiple-process scheduling can be defined as an integer programming problem (see the Appendix). Unfortunately, the existing quantum annealers handle only binary variables. In addition, we cannot use many timeslots because the number of variables available on the current D-Wave machines is limited. Thus, we formulate the scheduling problem so as to use the currently available quantum annealers efficiently. We detail the corresponding model below.

2.2.1. Parameters and Notation

The notation and parameters adopted in this study are detailed as follows:

p: Index of process, 1 ≤ p ≤ 2

i, j: Indices of products, 1 ≤ i, j ≤ N

s: Index of timeslots, 1 ≤ s ≤ N

tp,i: Time when product i undergoes process p

Δ: Gap between start times of processes 1 and 2

di: Due time of product i

Lp,i,j: Cost of group mismatch in process p; Lp,i,j = 0 if products i and j belong to the same group, and Lp,i,j = 1 if the products belong to different groups

wg, we, wl: Coefficients of costs

ρ, ρ′: Coefficients of penalty terms

2.2.2. Decision Variables

  

x_{p,i,s} = \begin{cases} 1, & \text{if product } i \text{ undergoes process } p \text{ at timeslot } s, \\ 0, & \text{otherwise.} \end{cases}

2.2.3. Constraints

We define the following two constraints:   

\sum_{s=1}^{N} x_{p,i,s} = 1 \quad \forall i,\ \forall p, \qquad (1)

\sum_{i=1}^{N} x_{p,i,s} = 1 \quad \forall s,\ \forall p. \qquad (2)

Constraint (1) ensures that every product undergoes each process exactly once. Constraint (2) guarantees that only one product can be assigned to a timeslot. In addition, we introduce the following constraint for the times t_{p,i}:

t_{1,i} \le t_{2,i} \quad \forall i. \qquad (3)

Constraint (3) ensures that a product undergoes process 2 only after it has undergone process 1. Let us now define t_{p,i}. When all the binary variables satisfy constraint (1), t_{1,i} and t_{2,i} can be written as

t_{1,i} = \sum_{s=1}^{N} s\, x_{1,i,s} \quad \forall i, \qquad (4a)

t_{2,i} = \sum_{s=1}^{N} (s + \Delta)\, x_{2,i,s} \quad \forall i. \qquad (4b)

Figure 2 illustrates our formulation and an example of the solution for N = 3 and Δ = 1. The gray areas correspond to x_{p,i,s} = 1. In the original definition of the multiple-process scheduling problem, process 2 has N + e timeslots. In our formulation, the timeslots for p = 2 range from 1 to N. Thus, our formulation reduces the number of variables. However, the gap Δ between the start times of processes 1 and 2 must then be chosen before solving the scheduling problem, as explained in Sec. 4.
Fig. 2.

Diagram of proposed formulation and example of solution for N = 3 and Δ = 1.
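As a small check of this encoding, constraints (1)-(3) and the times of Eqs. (4a) and (4b) can be evaluated with a short NumPy sketch, assuming each x_p is stored as an N x N binary array whose rows are indexed by product and columns by timeslot; the helper names are ours.

    import numpy as np

    def processing_times(x, Delta):
        # Eqs. (4a) and (4b): t_{1,i} = sum_s s*x_{1,i,s} and
        # t_{2,i} = sum_s (s + Delta)*x_{2,i,s}, with timeslots s = 1..N.
        N = x[1].shape[0]
        s = np.arange(1, N + 1)
        return x[1] @ s, x[2] @ (s + Delta)

    def is_feasible(x, Delta):
        for p in (1, 2):
            if not (x[p].sum(axis=1) == 1).all():   # constraint (1)
                return False
            if not (x[p].sum(axis=0) == 1).all():   # constraint (2)
                return False
        t1, t2 = processing_times(x, Delta)
        return bool((t1 <= t2).all())               # constraint (3)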

2.2.4. Costs

We define two types of costs: due time and group mismatch. The costs of missing due times are defined as follows:   

\mathrm{Cost}_e(x_2) = \sum_{i=1}^{N} \sum_{s=1}^{d_i-\Delta-1} \bigl(d_i - (s+\Delta)\bigr)\, x_{2,i,s}, \qquad (5)

\mathrm{Cost}_l(x_2) = \sum_{i=1}^{N} \sum_{s=d_i-\Delta+1}^{N} \bigl((s+\Delta) - d_i\bigr)\, x_{2,i,s}, \qquad (6)

where x_p denotes the binary variables for the p-th process, and Cost_e(x_2) (Cost_l(x_2)) corresponds to the cost when products are processed earlier (later) than their due times. The total cost of group mismatches over both processes is defined as

\mathrm{Cost}_g(x) = \sum_{p=1}^{2} \mathrm{Cost}_{g,p}(x_p), \qquad (7)

where

\mathrm{Cost}_{g,p}(x_p) = \sum_{i,j=1}^{N} \sum_{s=1}^{N-1} L_{p,i,j}\, x_{p,i,s}\, x_{p,j,s+1}.

Thus, we define the costs for processes 1 and 2 as

\mathrm{Cost}_1(x_1) = w_g\, \mathrm{Cost}_{g,1}(x_1),

\mathrm{Cost}_2(x_2) = w_g\, \mathrm{Cost}_{g,2}(x_2) + w_e\, \mathrm{Cost}_e(x_2) + w_l\, \mathrm{Cost}_l(x_2).

The total cost is given by

\mathrm{Cost}(x) = \mathrm{Cost}_1(x_1) + \mathrm{Cost}_2(x_2). \qquad (8)
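As a minimal sketch under the same assumptions as the previous snippets (x stored as two N x N binary arrays, and the mismatch helper from the instance sketch), the total cost of Eq. (8) can be evaluated as follows; the 0-indexed loops translate timeslot s in the text to column s - 1.

    def total_cost(x, Delta, due_time, w_g, w_e, w_l):
        # Eqs. (5)-(8): due-time costs on process 2 and group-mismatch
        # costs on both processes.
        N = x[1].shape[0]
        cost_e = cost_l = cost_g = 0.0
        for i in range(N):
            for s in range(1, N + 1):
                if x[2][i, s - 1]:
                    finish = s + Delta
                    cost_e += max(due_time[i] - finish, 0)   # Eq. (5)
                    cost_l += max(finish - due_time[i], 0)   # Eq. (6)
        for p in (1, 2):                                     # Eq. (7)
            for s in range(N - 1):
                for i in range(N):
                    for j in range(N):
                        cost_g += mismatch(p, i, j) * x[p][i, s] * x[p][j, s + 1]
        return w_g * cost_g + w_e * cost_e + w_l * cost_l    # Eq. (8)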

2.2.5. QUBO Formulation

To use the current D-Wave machines, we express constraints (1), (2), and (3) in a quadratic form over x. Equality constraints (1) and (2) can be represented using penalty terms as9)   

\sum_{s=1}^{N} x_{p,i,s} = 1 \;\; \forall i,\ \forall p \;\;\Rightarrow\;\; \sum_{p=1}^{2} \sum_{i=1}^{N} \Bigl( \sum_{s=1}^{N} x_{p,i,s} - 1 \Bigr)^{2}, \qquad (9)

\sum_{i=1}^{N} x_{p,i,s} = 1 \;\; \forall s,\ \forall p \;\;\Rightarrow\;\; \sum_{p=1}^{2} \sum_{s=1}^{N} \Bigl( \sum_{i=1}^{N} x_{p,i,s} - 1 \Bigr)^{2}. \qquad (10)

We express the penalty terms for the p-th process as

\mathrm{Penalty}_p(x_p) = \sum_{t=1}^{N} \Bigl( \sum_{i=1}^{N} x_{p,i,t} - 1 \Bigr)^{2} + \sum_{i=1}^{N} \Bigl( \sum_{t=1}^{N} x_{p,i,t} - 1 \Bigr)^{2}. \qquad (11)

Inequality constraints require different techniques. A simple approach is to replace constraint (3) with the following penalty term:

\sum_{i=1}^{N} \sum_{s,s'=1}^{N} \Theta\bigl(s - (s' + \Delta)\bigr)\, x_{1,i,s}\, x_{2,i,s'}, \qquad (12)

where Θ(z) is the Heaviside step function, with Θ(z) = 1 if z > 0 and Θ(z) = 0 otherwise. From the definitions in Eqs. (4a) and (4b), s − (s′ + Δ) corresponds to t_{1,i} − t_{2,i}. Thus, term (12) can be interpreted as a penalty imposed when a product undergoes process 2 before process 1.

Adding the penalty terms (11) and (12) to the cost in Eq. (8), we obtain the following QUBO model:

E(x) = \mathrm{Cost}(x) + \rho \sum_{p=1}^{2} \mathrm{Penalty}_p(x_p) + \rho' \sum_{i=1}^{N} \sum_{s,s'=1}^{N} \Theta\bigl(s - (s' + \Delta)\bigr)\, x_{1,i,s}\, x_{2,i,s'}. \qquad (13)
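The assembly of Eq. (13) into a QUBO matrix can be sketched as below, reusing the mismatch helper and the NumPy import from the earlier snippets; each binary x_{p,i,s} is mapped to one index of an upper-triangular matrix Q, and the squared penalties of Eqs. (9)-(11) are expanded with their constant terms dropped. The function and argument names are ours.

    def build_qubo(N, Delta, due_time, w_g, w_e, w_l, rho, rho_prime):
        # Map x_{p,i,s} (p in {1, 2}; i, s 0-indexed) to a flat index.
        def idx(p, i, s):
            return (p - 1) * N * N + i * N + s

        n = 2 * N * N
        Q = np.zeros((n, n))

        def add(a, b, v):                           # keep Q upper-triangular
            Q[min(a, b), max(a, b)] += v

        # Linear due-time costs, Eqs. (5) and (6), on the diagonal.
        for i in range(N):
            for s in range(N):
                finish = (s + 1) + Delta
                a = idx(2, i, s)
                add(a, a, w_e * max(due_time[i] - finish, 0)
                         + w_l * max(finish - due_time[i], 0))

        # Quadratic group-mismatch costs, Eq. (7).
        for p in (1, 2):
            for s in range(N - 1):
                for i in range(N):
                    for j in range(N):
                        add(idx(p, i, s), idx(p, j, s + 1), w_g * mismatch(p, i, j))

        # Penalties of Eqs. (9)-(11): (sum z - 1)^2 = -sum z + 2 sum_{a<b} z_a z_b + 1.
        for p in (1, 2):
            for i in range(N):                      # one timeslot per product
                for s in range(N):
                    add(idx(p, i, s), idx(p, i, s), -rho)
                    for s2 in range(s + 1, N):
                        add(idx(p, i, s), idx(p, i, s2), 2 * rho)
            for s in range(N):                      # one product per timeslot
                for i in range(N):
                    add(idx(p, i, s), idx(p, i, s), -rho)
                    for i2 in range(i + 1, N):
                        add(idx(p, i, s), idx(p, i2, s), 2 * rho)

        # Inequality penalty, Eq. (12): penalize s > s' + Delta for each product.
        for i in range(N):
            for s in range(N):
                for s2 in range(N):
                    if (s + 1) > (s2 + 1) + Delta:
                        add(idx(1, i, s), idx(2, i, s2), rho_prime)
        return Q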

3. Methods

3.1. Quantum Annealing

Quantum annealing (QA) is a metaheuristic approach for solving optimization problems.10) A QA system consists of target and driver parts. The target part corresponds to the QUBO model to be optimized, while the driver part introduces quantum fluctuations into the system and creates a superposition state that contains all possible solutions. At the start of QA, the driver part is dominant and the system explores diverse solutions. The influence of the driver part is then gradually reduced, and the system evolves toward a solution. If QA is run slowly enough and the system suffers no external interference, the optimal solution can be reached. In fact, QA has outperformed classical simulated annealing under ideal conditions.11,12) Unfortunately, the QA implementation in an actual quantum device is affected by random noise, and ideal QA is difficult to realize. Therefore, current quantum annealers act as samplers that stochastically generate diverse approximate solutions.13)

The D-Wave 2000Q quantum annealer has approximately 2000 physical qubits implemented in a chimera graph. D-Wave 2000Q can solve a QUBO model defined as   

E_{\mathrm{QUBO}} = x^{T} Q x,

where x is a vector of binary variables and Q is a real-valued matrix. However, the connectivity between qubits on the chimera graph is limited. Consequently, D-Wave 2000Q cannot represent an arbitrary QUBO problem directly, and embedding and unembedding become necessary.14,15) The embedding technique maps the QUBO problem onto the D-Wave hardware graph. After QA, unembedding transforms the annealing results back into the original QUBO representation. Thus, embedding and unembedding allow various QUBO problems to be solved on D-Wave 2000Q. However, as embedding requires many physical qubits, the problem size that can be computed decreases dramatically. For example, when the QUBO problem is given by a fully connected graph, D-Wave 2000Q can only compute up to 64 variables. Furthermore, a previous study reported that the success probability of the embedding algorithm decreases dramatically as the problem size increases.15)
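Assuming access to a D-Wave solver through the Ocean SDK, and that the calls below match the installed version, a QUBO matrix such as Q from the earlier sketch can be submitted roughly as follows; a classical sampler that accepts the same dictionary format can be substituted when no hardware is available.

    # Hedged sketch: requires the D-Wave Ocean SDK and solver access.
    from dwave.system import DWaveSampler, EmbeddingComposite

    # Ocean samplers take QUBOs as a {(row, col): coefficient} dictionary.
    Q_dict = {(a, b): Q[a, b] for a in range(Q.shape[0])
              for b in range(a, Q.shape[1]) if Q[a, b] != 0.0}

    sampler = EmbeddingComposite(DWaveSampler())
    sampleset = sampler.sample_qubo(Q_dict, num_reads=1000,
                                    annealing_time=20)   # 20 microseconds
    best = sampleset.first.sample                        # lowest-energy sample found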

3.2. Lagrangian Decomposition and Coordination

LDC is a relaxation and decomposition method for solving constrained optimization problems.16,17) Specifically, LDC generates a relaxed problem by incorporating some of the constraints into Cost(x). In addition, the accuracy of an LDC solution can be assessed through the duality gap between its upper and lower bounds. LDC has been applied to various production scheduling problems.1,18,19,20) We use LDC for the scheduling problem addressed in this study.

We apply LDC to inequality constraint (3). With Lagrange multipliers λ_i, the QUBO formulation of the relaxed problem is given by

E_{\mathrm{LDC}}(x) = \mathrm{Cost}(x) + \rho \sum_{p=1}^{2} \mathrm{Penalty}_p(x_p) + \sum_{i=1}^{N} \lambda_i (t_{1,i} - t_{2,i}). \qquad (14)

Considering the definitions in Eqs. (4a), (4b), and (8), we can separate E_{LDC}(x) into the following two QUBO models:

E_1(x_1) = \mathrm{Cost}_1(x_1) + \rho\, \mathrm{Penalty}_1(x_1) + \sum_{i=1}^{N} \lambda_i t_{1,i}, \qquad (15a)

E_2(x_2) = \mathrm{Cost}_2(x_2) + \rho\, \mathrm{Penalty}_2(x_2) - \sum_{i=1}^{N} \lambda_i t_{2,i}. \qquad (15b)

As t_{p,i} is linear in x_p, E_1(x_1) and E_2(x_2) remain QUBO formulations. Thus, LDC allows E_1(x_1) and E_2(x_2) to be solved separately as

x_p^{*} = \mathop{\mathrm{argmin}}_{x_p} E_p(x_p). \qquad (16)

To obtain a feasible solution, appropriate Lagrange multipliers must be found. To this end, we use the subgradient method:

\lambda_i^{\mathrm{new}} = \lambda_i + \alpha \max(t_{1,i}^{*} - t_{2,i}^{*}, 0), \qquad (17)

where the parameter α > 0 determines the step size. In addition, t_{1,i}^{*} (t_{2,i}^{*}) is obtained by substituting x_1^{*} (x_2^{*}) into Eq. (4a) [Eq. (4b)]. As the convergence criterion, we use the following duality gap:

\mathrm{duality\ gap} = \frac{\bigl| \mathrm{Cost}(x^{*}) - \mathrm{Cost}(x^{\mathrm{feas}}) \bigr|}{\bigl| \mathrm{Cost}(x^{*}) \bigr|},

where Cost(x*) is the lower bound, x* denotes {x_1^*, x_2^*} obtained from Eq. (16), and the upper bound Cost(x^feas) is defined by a feasible solution x^feas, which is often computed using a heuristic algorithm.

LDC provides several advantages. The original QUBO model E(x) involves 2N^2 variables, whereas each LDC subproblem requires only N^2 variables because the total QUBO model is divided into E_1(x_1) and E_2(x_2). Let us consider a more general case with N_p processes. Even in this case, each subproblem in the LDC method involves only N^2 variables, whereas the original QUBO model would require N_p N^2 variables. Thus, LDC can considerably reduce the number of variables required to solve the optimization problem. In addition, LDC can be run in parallel: by solving E_1(x_1) and E_2(x_2) in parallel, we can search for a feasible solution efficiently.
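A minimal sketch of these updates, assuming the per-process QUBO matrices Q1 and Q2 already encode Cost_p + ρ Penalty_p (built analogously to the earlier sketch but restricted to a single process, with flat index i*N + s); because the Lagrangian terms of Eqs. (15a) and (15b) are linear in x, they only modify the diagonal. The function names are ours.

    def add_lagrangian_terms(Q1, Q2, lam, N, Delta):
        # Eqs. (15a)/(15b): add +lambda_i * t_{1,i} to E_1 and
        # -lambda_i * t_{2,i} to E_2 (both linear, hence diagonal).
        for i in range(N):
            for s in range(N):
                Q1[i * N + s, i * N + s] += lam[i] * (s + 1)
                Q2[i * N + s, i * N + s] -= lam[i] * ((s + 1) + Delta)
        return Q1, Q2

    def update_multipliers(lam, t1, t2, alpha=0.01):
        # Eq. (17): subgradient step; only violated products (t1 > t2) push lambda up.
        return lam + alpha * np.maximum(np.asarray(t1) - np.asarray(t2), 0.0)

    def duality_gap(cost_lower, cost_feasible):
        return abs(cost_lower - cost_feasible) / abs(cost_lower)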

3.3. Proposed Algorithm

We propose a hybrid LDC algorithm that combines LDC and QA. The main step of our algorithm uses D-Wave machines to determine the lower and upper bounds. As mentioned above, D-Wave machines are samplers that generate diverse solutions stochastically. Therefore, we input the QUBO model E_p(x_p) into the D-Wave machines to obtain samples {x_p^ν}, where ν is the index of each sample. From the samples obtained by QA, we define the lowest-cost solution for the p-th process as

x_p^{\mathrm{cost}} = \mathop{\mathrm{argmin}}_{x_p \in \{x_p^{\nu}\}} E_p(x_p). \qquad (18)

The lowest cost, Cost_1(x_1^cost) + Cost_2(x_2^cost), corresponds to the lower bound. In LDC, the combination of x_1^cost and x_2^cost may not be feasible initially. However, a combination of samples in {x_1^ν} and {x_2^ν} can be feasible. We define the feasible solution x^feas as

x^{\mathrm{feas}} = \mathop{\mathrm{argmin}}_{x_1 \in \{x_1^{\nu}\},\ x_2 \in \{x_2^{\nu}\}} \bigl[ E_1(x_1) + E_2(x_2) \bigr] \quad \text{s.t.} \quad t_{1,i} \le t_{2,i} \;\; \forall i. \qquad (19)
Thus, we can obtain the lower and upper bounds from sampling on the D-Wave machines. The proposed hybrid LDC algorithm is detailed as follows:

1) Initialize the Lagrange multipliers as λ_i = 0 ∀i

2) Perform embedding

3) Initialize counter t = 1

4) Compute QUBO matrices

5) Input the QUBO matrices and an embedding result into D-Wave 2000Q to obtain samples { x 1 ν } and { x 2 ν }

6) Compute the lowest-cost solutions x_1^cost and x_2^cost

7) Compute feasible solution xfeas and the upper bound

8) Check convergence: when one of the following criteria is satisfied, terminate the iterative process:

 i) t > 20

 ii) duality gap < 0.01

 iii) upper bound does not improve in 10 consecutive steps.

9) Update the multipliers as λ_i ← λ_i + α max(t_{1,i}^* − t_{2,i}^*, 0) ∀i

10) Update the counter as t = t + 1

11) Iterate steps 4–10 until convergence

To obtain accurate results, low-cost solutions should be sampled. We use two techniques:14) 1) the energy-minimization method for unembedding, which maps each group of physical qubits back to a logical variable by minimizing a local QUBO model; and 2) greedy steepest descent (GSD), which refines the samples obtained from QA on the D-Wave machines by locally minimizing the QUBO model. A schematic outline of the full iteration is sketched below.
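Putting the steps together, the iteration above can be outlined as follows, reusing the helpers from the earlier sketches (is_feasible, processing_times, update_multipliers, duality_gap). The sampler, the embedding step, GSD, and the energy-minimization unembedding are abstracted behind a user-supplied sample_qubo(Q) function, so this is a schematic outline under those assumptions rather than the authors' implementation.

    def qubo_energy(Q, x):
        v = x.reshape(-1).astype(float)
        return float(v @ Q @ v)

    def hybrid_ldc(build_process_qubos, sample_qubo, cost, N, Delta,
                   max_iters=20, gap_tol=0.01, alpha=0.01):
        # build_process_qubos(lam) -> (Q1, Q2) with the Lagrangian terms included;
        # sample_qubo(Q)           -> list of N x N binary sample matrices;
        # cost(x1, x2)             -> total production cost, Eq. (8).
        lam = np.zeros(N)                                       # step 1
        best_upper, best_feas, stale = np.inf, None, 0
        for t in range(1, max_iters + 1):                       # steps 3, 10, 11
            Q1, Q2 = build_process_qubos(lam)                   # step 4
            S1, S2 = sample_qubo(Q1), sample_qubo(Q2)           # step 5
            x1 = min(S1, key=lambda x: qubo_energy(Q1, x))      # step 6, Eq. (18)
            x2 = min(S2, key=lambda x: qubo_energy(Q2, x))
            lower = cost(x1, x2)                                # lower bound
            pairs = [(a, b) for a in S1 for b in S2             # step 7, Eq. (19)
                     if is_feasible({1: a, 2: b}, Delta)]
            if pairs:
                a, b = min(pairs, key=lambda ab:
                           qubo_energy(Q1, ab[0]) + qubo_energy(Q2, ab[1]))
                upper = cost(a, b)                              # upper bound
                if upper < best_upper:
                    best_upper, best_feas, stale = upper, (a, b), 0
                else:
                    stale += 1
            # Step 8: stop on a small duality gap or a stalled upper bound.
            if best_feas is not None and duality_gap(lower, best_upper) < gap_tol:
                break
            if stale >= 10:
                break
            t1, t2 = processing_times({1: x1, 2: x2}, Delta)
            lam = update_multipliers(lam, t1, t2, alpha)        # step 9
        return best_feas, best_upper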

4. Experiments and Results

4.1. Experimental Setup

We tested the performance of the proposed hybrid algorithm on artificial setups. Table 1 lists the setups of the five- and eight-product problems, whose product groups and due times were generated randomly. For comparison, we evaluated the performance of solving the original QUBO model, E(x), directly on D-Wave 2000Q. We also compared the results with those obtained by solving the integer linear programming formulation (see the Appendix) with the CPLEX software.21) We set the annealing time to 20 μs and generated 1000 samples on D-Wave 2000Q. The coefficients of the penalty terms were set to ρ = ρ′ = 5 × max(wg, we, wl), and the cost coefficients were set as we = 3, wd = 1, and wg ∈ {4, 10, 100}. The step-size parameter α in the subgradient method was set to 0.01.

Table 1. Experimental setup for the five- and eight-product problems.

Five-product problem:
Product    1  2  3  4  5
Due time   5  1  4  2  3
Group 1    2  3  4  2  4
Group 2    4  6  6  8  1

Eight-product problem:
Product    1  2  3  4  5  6  7  8
Due time   7  1  5  8  6  2  3  4
Group 1    3  2  4  3  4  1  3  3
Group 2    7  2  7  8  3  2  3  7

4.2. Dependence on Gap Time Δ between Processes

We use parameter Δ to reduce the number of variables by representing the gap between the start times of processes 1 and 2, which must be determined before scheduling. Table 2 details the Δ-dependence of each cost in the five-product problem in terms of Cost*, Cost_e*, Cost_l*, and Cost_g*, which indicate the costs of the best feasible solution obtained by the hybrid LDC algorithm. The values highlighted in boldface in the Cost* rows are consistent with the optimal solutions obtained by CPLEX. Table 2 shows that Cost* is minimized at Δ = 1 for every wg. This is because the due times start from t = 1 in our problem. When Δ shifts away from 1, the value of Cost_e* + Cost_l* increases. Thus, the hybrid LDC algorithm can achieve the optimal solution for an appropriate value of Δ.

Table 2. Δ-dependence of costs.

            Δ = 0   Δ = 1   Δ = 2   Δ = 3   Δ = 4   Δ = 5
wg = 4
  Cost*        29      24      35      50      65      80
  Cost_e*       5       0       0       0       0       0
  Cost_l*       0       0       5      10      15      20
  Cost_g*       6       6       5       5       5       5
wg = 10
  Cost*        59      58      65      80      95     110
  Cost_e*       6       2       0       0       0       0
  Cost_l*       1       2       5      10      15      20
  Cost_g*       5       5       5       5       5       5
wg = 100
  Cost*       509     508     515     530     545     566
  Cost_e*       6       2       0       0       0       0
  Cost_l*       1       2       5       5      15      20
  Cost_g*       5       5       5      10       5       5

4.3. Accuracy and Computation Time

To investigate the typical performance of the hybrid LDC, we performed five independent trials. To evaluate the algorithm performance, we used the average relative error and computation times. The relative error is defined as   

\mathrm{error} = \frac{\bigl| \mathrm{Cost}(x^{\mathrm{opt}}) - \mathrm{Cost}(x^{\mathrm{feas}}) \bigr|}{\bigl| \mathrm{Cost}(x^{\mathrm{opt}}) \bigr|},
where xopt is the optimal solution obtained by CPLEX. In addition, we measured three computation times: tcloud, tunemb, and tLDC. Total cloud time tcloud involves internet latency, QA, and other processes of D-Wave 2000Q. Time tunemb is related to unembedding in the LDC iterative process. The total computation time in the hybrid LDC algorithm is denoted by tLDC and involves tcloud, tunemb, and the computation times for GSD and other processes.

Tables 3 and 4 list the experimental results for the five- and eight-product problems with Δ = 1, respectively. For comparison, we show the average relative error, tcloud, and tunemb when solving E(x) directly on D-Wave 2000Q. Table 3 shows that the average errors obtained by the hybrid LDC algorithm are zero for every wg. Hence, the hybrid LDC achieves optimal solutions in all five trials. D-Wave 2000Q also finds the optimal solution in all experiments except for wg = 4. In addition, tcloud and tunemb of D-Wave 2000Q are shorter than those of the proposed hybrid LDC algorithm, which requires multiple sampling operations on D-Wave 2000Q to update the multipliers. Table 4 shows that the hybrid LDC algorithm achieves the optimal solution in all trials. Unfortunately, the direct implementation on D-Wave 2000Q cannot solve the eight-product problem because the embedding fails. In the eight-product problem, the number of variables is 128 because the QUBO model E(x) requires 2N^2 variables. As reported in previous work,15) the success probability of embedding decreases as the problem size increases. Consequently, for the eight-product problem, the embedding fails to map E(x) onto the hardware graph of D-Wave 2000Q. In the hybrid LDC algorithm, the QUBO model is separated into E_1(x_1) and E_2(x_2). Therefore, the number of variables is 64 even for the eight-product problem because E_1(x_1) and E_2(x_2) require N^2 variables each. Owing to this reduction, the hybrid LDC algorithm can solve larger problems than the direct implementation on D-Wave 2000Q.

Table 3. Average relative error and computation times (in seconds) in the five-product problem.

                        wg = 4                         wg = 10                        wg = 100
               Error  tcloud  tunemb  tLDC    Error  tcloud  tunemb  tLDC    Error  tcloud  tunemb  tLDC
Hybrid LDC      0.0   17.05    4.38  25.45     0.0   18.32    4.17  26.45     0.0   16.89    4.17  25.07
D-Wave 2000Q    0.07   1.74    0.49    -       0.0    2.32    0.49    -       0.0    2.24    0.49    -

Table 4. Average relative error and computation times (in seconds) in the eight-product problem.

                        wg = 4                         wg = 10                        wg = 100
               Error  tcloud  tunemb  tLDC    Error  tcloud  tunemb  tLDC    Error  tcloud  tunemb  tLDC
Hybrid LDC      0.0   25.60   13.71  50.90     0.0   47.67   27.07  97.65     0.0   24.79   13.97  50.23
D-Wave 2000Q     -      -       -      -        -      -       -      -        -      -       -      -

5. Discussion

The proposed hybrid LDC algorithm successfully optimizes scheduling problems with up to eight products. However, much larger scheduling problems must be handled for practical applications. In 2020, the next-generation D-Wave Advantage quantum computer was released.22) It can handle up to 118 variables for a QUBO problem expressed as a fully connected graph. Moreover, a quantum-classical hybrid solver, Leap Hybrid, was launched in 2020,23) and it can solve problems with up to 20000 variables on a fully connected graph. Thus, with the development of improved hardware and software, the proposed hybrid LDC algorithm will be able to solve more practical scheduling problems in the steel industry.

Here, we discuss the computation times of our algorithm. In our experiments, most of the hybrid LDC time tLDC corresponds to the total cloud time tcloud. Although QA itself is performed in 20 μs, tcloud includes the processing time for generating 1000 samples, the time spent waiting for queued results, and communication delays. Moreover, the hybrid LDC algorithm requires multiple rounds of D-Wave sampling. As a result, the total cloud time amounts to a few tens of seconds. In addition, unembedding has a considerable impact on tLDC as the problem size increases. This is because we use the energy-minimization method for unembedding to obtain high-quality solutions; this method performs the unembedding while optimizing a local QUBO model, which increases the computation time. However, because the current D-Wave machines are implemented on a sparse graph, embedding and unembedding are essential. When hardware with a denser connectivity graph is realized in the future, our approach will be greatly accelerated.

6. Conclusion

A novel hybrid LDC algorithm is proposed to increase the feasible size of combinatorial optimization problems solved on quantum computers. The ability of D-Wave quantum computers to generate diverse solutions in a single run is exploited, for the first time, to calculate the upper bound of the total cost during each LDC update. We validated the effectiveness of the proposed algorithm on the scheduling of two processes typically found in steel manufacturing. Optimization was successful even for eight products, a problem that is currently intractable without decomposition. The proposed algorithm is applicable to other types of combinatorial optimization problems and will likely contribute to scaling up various problems toward real-world applications.

References
Appendix

As a comparison with the results obtained from the D-Wave quantum computer, the same two-process scheduling problems were solved using an integer linear programming formulation implemented with the commercial CPLEX software. This appendix presents the corresponding formulation. The parameters and notation are the same as in Sec. 2 unless otherwise specified.

In addition to the decision variables x_{p,i,s} for real products (p = 1, 2; 1 ≤ i ≤ N), we prepare variables x_{2,i,s} for dummy products (N + 1 ≤ i ≤ N + e) in process 2 to facilitate the formulation. The timeslot index s thus ranges over 1 ≤ s ≤ N for p = 1 and 1 ≤ s ≤ N + e for p = 2. Another variable, y, which represents the consecutive processing of two products, is introduced to keep the formulation linear, unlike the quadratic QUBO model.

y_{p,i_1,i_2,s} = \begin{cases} 1, & \text{if products } i_1 \text{ and } i_2 \text{ undergo process } p \text{ at timeslots } s \text{ and } (s+1), \text{ respectively}, \\ 0, & \text{otherwise.} \end{cases} \qquad (A1)

The constraints for x are analogous to those in Eqs. (1) and (2):

\sum_{i=1}^{N} x_{1,i,s} = 1 \quad \forall s \in \{1, 2, \ldots, N\}, \qquad (A2)

\sum_{s=1}^{N} x_{1,i,s} = 1 \quad \forall i \in \{1, 2, \ldots, N\}, \qquad (A3)

\sum_{i=1}^{N+e} x_{2,i,s} = 1 \quad \forall s \in \{1, 2, \ldots, N+e\}, \qquad (A4)

\sum_{s=1}^{N+e} x_{2,i,s} = 1 \quad \forall i \in \{1, 2, \ldots, N+e\}. \qquad (A5)

From the definition of y, the following constraints are derived:   

\sum_{i_2=1}^{N} y_{1,i_1,i_2,s} = x_{1,i_1,s} \quad \forall i_1 \in \{1, \ldots, N\},\ \forall s \in \{1, \ldots, N-1\}, \qquad (A6)

\sum_{i_1=1}^{N} y_{1,i_1,i_2,s-1} = x_{1,i_2,s} \quad \forall i_2 \in \{1, \ldots, N\},\ \forall s \in \{2, \ldots, N\}, \qquad (A7)

\sum_{i_2=1}^{N+e} y_{2,i_1,i_2,s} = x_{2,i_1,s} \quad \forall i_1 \in \{1, \ldots, N+e\},\ \forall s \in \{1, \ldots, N+e-1\}, \qquad (A8)

\sum_{i_1=1}^{N+e} y_{2,i_1,i_2,s-1} = x_{2,i_2,s} \quad \forall i_2 \in \{1, \ldots, N+e\},\ \forall s \in \{2, \ldots, N+e\}. \qquad (A9)

The time to process a product can be expressed as   

t_{p,i} = \sum_{s} s\, x_{p,i,s} \quad \forall i \in \{1, 2, \ldots, N\}, \qquad (A10)

where the sum runs over all timeslots of process p (1 ≤ s ≤ N for p = 1 and 1 ≤ s ≤ N + e for p = 2).

The constraint for the processing order is the same as in Eq. (3):   

t_{1,i} \le t_{2,i} \quad \forall i \in \{1, 2, \ldots, N\}. \qquad (A11)

Finally, we set the constraint that a dummy product does not stay between two real products. This constraint can be represented by the number of transitions from a dummy product to a real product being at most one:   

\sum_{s=1}^{N+e-1} \sum_{i_1=N+1}^{N+e} \sum_{i_2=1}^{N} y_{2,i_1,i_2,s} + \sum_{i=1}^{N} x_{2,i,1} = 1. \qquad (A12)

The second term on the left-hand side represents the case in which the sequence starts with a real product; in this case, the sequence contains no transition from a dummy product to a real product.

The total cost is the sum of the three costs for late delivery, early delivery, and production group mismatch:   

\mathrm{Cost}(x) = w_d \sum_{i=1}^{N} \sum_{s=d_i+1}^{N+e} (s - d_i)\, x_{2,i,s} + w_e \sum_{i=1}^{N} \sum_{s=1}^{d_i-1} (d_i - s)\, x_{2,i,s} + w_g \sum_{s=1}^{N-1} \sum_{(i_1,i_2) \in LC_1} y_{1,i_1,i_2,s} + w_g \sum_{s=1}^{N+e-1} \sum_{(i_1,i_2) \in LC_2} y_{2,i_1,i_2,s}, \qquad (A13)
where LCp (p = 1,2) are sets of pairs of two products i1 and i2 (i1, i2 ∈{1, 2, …, N}) that belong to different production groups.
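For reference, this ILP can be written down with an open-source modeler such as PuLP; the sketch below is our own reading of Eqs. (A1)-(A13), not the authors' CPLEX implementation (PuLP can pass the model to CPLEX when an installation is available). The helper in_different_groups(p, i1, i2) is assumed to report whether the pair of real, 0-indexed products (i1, i2) belongs to LC_p.

    import pulp

    def build_ilp(N, e, due_time, in_different_groups, w_d, w_e, w_g):
        T1, T2 = range(1, N + 1), range(1, N + e + 1)     # timeslots
        P1, P2 = range(N), range(N + e)                   # products (P2 incl. dummies)
        prob = pulp.LpProblem("two_process_scheduling", pulp.LpMinimize)
        x1 = pulp.LpVariable.dicts("x1", (P1, T1), cat="Binary")
        x2 = pulp.LpVariable.dicts("x2", (P2, T2), cat="Binary")
        y1 = pulp.LpVariable.dicts("y1", (P1, P1, T1), cat="Binary")
        y2 = pulp.LpVariable.dicts("y2", (P2, P2, T2), cat="Binary")

        # Assignment constraints (A2)-(A5).
        for s in T1:
            prob += pulp.lpSum(x1[i][s] for i in P1) == 1
        for i in P1:
            prob += pulp.lpSum(x1[i][s] for s in T1) == 1
        for s in T2:
            prob += pulp.lpSum(x2[i][s] for i in P2) == 1
        for i in P2:
            prob += pulp.lpSum(x2[i][s] for s in T2) == 1

        # Linking y to consecutive assignments, (A6)-(A9).
        for i1 in P1:
            for s in range(1, N):
                prob += pulp.lpSum(y1[i1][i2][s] for i2 in P1) == x1[i1][s]
        for i2 in P1:
            for s in range(2, N + 1):
                prob += pulp.lpSum(y1[i1][i2][s - 1] for i1 in P1) == x1[i2][s]
        for i1 in P2:
            for s in range(1, N + e):
                prob += pulp.lpSum(y2[i1][i2][s] for i2 in P2) == x2[i1][s]
        for i2 in P2:
            for s in range(2, N + e + 1):
                prob += pulp.lpSum(y2[i1][i2][s - 1] for i1 in P2) == x2[i2][s]

        # Processing order (A10)-(A11) and dummy-product rule (A12).
        for i in P1:
            t1 = pulp.lpSum(s * x1[i][s] for s in T1)
            t2 = pulp.lpSum(s * x2[i][s] for s in T2)
            prob += t1 <= t2
        prob += (pulp.lpSum(y2[i1][i2][s] for s in range(1, N + e)
                            for i1 in range(N, N + e) for i2 in P1)
                 + pulp.lpSum(x2[i][1] for i in P1)) == 1

        # Objective (A13): late delivery, early delivery, and group mismatch.
        prob += (w_d * pulp.lpSum((s - due_time[i]) * x2[i][s]
                                  for i in P1 for s in T2 if s > due_time[i])
                 + w_e * pulp.lpSum((due_time[i] - s) * x2[i][s]
                                    for i in P1 for s in T2 if s < due_time[i])
                 + w_g * pulp.lpSum(y1[i1][i2][s] for s in range(1, N)
                                    for i1 in P1 for i2 in P1
                                    if in_different_groups(1, i1, i2))
                 + w_g * pulp.lpSum(y2[i1][i2][s] for s in range(1, N + e)
                                    for i1 in P1 for i2 in P1
                                    if in_different_groups(2, i1, i2)))
        return prob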

 
© 2022 The Iron and Steel Institute of Japan.

This is an open access article under the terms of the Creative Commons Attribution-NonCommercial-NoDerivs license.
https://creativecommons.org/licenses/by-nc-nd/4.0/