ISIJ International
Instrumentation, Control and System Engineering
A Novel Genetic Simulated Annealing Algorithm for No-wait Hybrid Flowshop Problem with Unrelated Parallel Machines
Hua Xuan, Qianqian Zheng, Bing Li, Xueyuan Wang

2021 Volume 61 Issue 1 Pages 258-268

Abstract

This paper studies the problem of scheduling N jobs in a hybrid flowshop with unrelated parallel machines at each stage. Motivated by practical applications, no-wait constraints and the objective of minimizing the total flowtime are included in the scheduling problem. A mathematical model is constructed and a novel genetic simulated annealing algorithm, called GSAA, is developed to solve this problem. In the algorithm, a modified NEH algorithm is first proposed to obtain the initial population. A two-dimensional matrix encoding scheme for scheduling solutions is designed, and an insertion-translation approach is employed for decoding in order to meet the no-wait constraints. Afterwards, to avoid premature convergence of the GA and enhance its search ability, an adaptive adjustment strategy is imposed on the crossover and mutation operators. In addition, an SA procedure is applied to some of the better individuals among the GA solutions for re-optimization, using five neighborhood search structures: job based exchange, gene based exchange, gene based insertion, job based mutation, and gene based mutation. Finally, simulation experiments on small-medium and large scale instances are carried out. Computational results show that the presented algorithm performs much more effectively than several heuristic algorithms reported in the literature.

1. Introduction

The hybrid flowshop problem (HFP) plays an important role in modern manufacturing systems. A hybrid flowshop includes multiple production stages and two or more parallel machines in at least one processing stage.1) Due to technology constraints or a lack of intermediate buffers in practical applications, once a job starts processing its first-stage operation, it cannot be interrupted until all of its operations are completed. Such a case often occurs in steel production and in chemical and petrochemical production.2) For example, in the steel industry, the heated metal must go through a series of operations such as melting, casting and rolling without any interruption. Waiting on or between machines causes a temperature drop, which leads to reheating costs or affects the composition of the steel. Moreover, differences in the wear and service life of the machines lead to different processing times on the parallel machines for the same operation. Therefore, a multi-stage no-wait hybrid flowshop problem with unrelated parallel machines at each stage (MNWHFP-UPM) is considered. Since the two-stage HFP with two parallel machines at only one stage and the no-wait flowshop problem have both been proved to be NP-hard,3,4) the more complex MNWHFP-UPM is also NP-hard.

HFP has received great attention in recent years, and much of the work concentrates on developing heuristics such as genetic algorithms (GA) to solve it. In terms of the identical parallel machine environment, to solve the two-stage no-wait HFP, Wang and Liu5) suggested a GA including the upper bound value obtained from constructive heuristics to minimize makespan; Rahmanidoust and Zheng6) presented a non-dominated sorting genetic algorithm, a multi-objective imperialist competitive algorithm and a Pareto archive evolutionary strategy to solve the bi-objective problem of minimizing makespan and maximum tardiness. For multi-stage HFP, Mousavi and Zandieh7) considered setup times and proposed a hybrid algorithm based on simulated annealing (SA), GA and local search to minimize the two goals of makespan and total tardiness; Hekmatfar et al.8) considered a reentrant HFP with setup times and developed a hybrid GA based on a random key GA to minimize makespan. In terms of the unrelated parallel machine environment, Zabihzadeh et al.9) considered robot transportation, release times and blocking, and proposed an ant colony optimization algorithm with double pheromone and a GA to minimize makespan; Tao et al.10) proposed a hybrid algorithm based on GA and SA to solve the problem with makespan minimization and blocking. Other heuristic or approximation methods have also been designed for the multi-stage HFP. Li and Pan11) combined the artificial bee colony algorithm with tabu search to minimize makespan under the consideration of limited buffers and identical machines. For the multi-stage HFP with unrelated parallel machines, Sun et al.12) and Asadi-Gangraj13) respectively presented a hybrid estimation of distribution algorithm combined with teaching-learning-based optimization and a Lagrangian relaxation algorithm to minimize makespan.

The no-wait hybrid flowshop is a generalization of the no-wait flowshop, and the no-wait flowshop problem has been studied extensively. Allahverdi and Allahverdi14) solved the two-machine problem with uncertain setup times using an algorithm that minimizes maximum lateness. With the minimum makespan objective for the m-machine no-wait flowshop, Xu et al.15) suggested a new co-evolutionary immune algorithm; Riahi and Kazemi16) developed a hybrid meta-heuristic algorithm based on ant colony optimization and SA; Elyasi17) proposed a hybrid GA. Under a makespan constraint, Allahverdi et al.18) investigated an algorithm combining SA with an insertion algorithm to minimize total tardiness; Allahverdi and Aydilek19) applied an insertion algorithm, two genetic algorithms, five SA algorithms and a differential evolution algorithm to minimize total completion time. Nagano et al.20) designed a new constructive heuristic to minimize the total flowtime with sequence-dependent setup times. Shao et al.21) presented a Pareto-based estimation of distribution algorithm to solve a distributed problem with sequence-dependent setup times so as to minimize makespan and total weighted tardiness. Considering both wait and no-wait constraints, Wang et al.22) and Cheng et al.23) presented iterative greedy algorithms to minimize makespan.

There are also some studies on the NWHFP related to our problem. For the two-stage NWHFP, Wang et al.24) presented a branch and bound algorithm for the makespan problem with a single machine at stage one and multiple identical parallel machines at stage two. Wang and Liu5) used a GA for the same problem as in Wang et al.24) Zhong and Shi25) considered inter-stage flexibility and presented two polynomial approximation algorithms for the same problem as in Wang et al.24) and a polynomial approximation algorithm for the problem with multiple machines at stage one and one machine at stage two. Li et al.26) developed the least deviation algorithm for makespan minimization with a single machine at either stage. Rahmanidoust and Zheng6) presented three multi-objective metaheuristic methods to minimize makespan and maximum tardiness for the case with multiple identical machines at both stages. For the same machine environment as in Rahmanidoust and Zheng6) with setup times, Moradinasab et al.27) applied an adaptive imperialism competition algorithm and a GA to minimize the total completion time; Huang et al.28) studied the total earliness and tardiness problem with time window constraints and presented an improved ant colony optimization algorithm. Considering unrelated parallel machines at each stage, Rabiee et al.29) designed an intelligent hybrid algorithm, in which an imperialist competitive algorithm is combined with SA, variable neighborhood search and GA, to minimize makespan. For the multi-stage NWHFP with sequence-dependent setup times and identical machines, Asefi et al.30) developed a metaheuristic approach to minimize the bi-objective of makespan and mean tardiness, and Jolai et al.2) proposed three metaheuristic algorithms to minimize makespan. For the NWHFP with uniform parallel machines, Ramezani et al.31) proposed a hybrid metaheuristic algorithm to minimize makespan. For the NWHFP with unrelated parallel machines, Rabiee et al.32) investigated a biogeography-based optimization algorithm to minimize mean tardiness.

To the best of our knowledge, the majority of studies have focused on classic hybrid flowshop problems and no-wait flowshop problems. Existing research on HFP with no-wait constraints has mainly discussed the two-stage case. The unrelated attribute of parallel machines is also a significant characteristic of realistic hybrid flowshop production environments, so no-wait constraints and unrelated parallel machines need to be considered simultaneously. Moreover, little work has been carried out on minimizing the total flowtime in the no-wait hybrid flowshop with multiple processing stages and unrelated parallel machines at each stage. In addition, GA-based optimization algorithms for the MNWHFP have not been studied yet. Thus, our study considers the features of no-wait processing and unrelated parallel machines in a multi-stage hybrid flowshop and provides a novel hybrid GA-based algorithm named the genetic simulated annealing algorithm (GSAA) to solve it.

The remainder of this paper is organized as follows. The problem is described and modeled in detail in Section 2. Section 3 elaborates the GSAA designed for the studied problem. The results of the simulation experiments are reported in Section 4. Finally, Section 5 contains the conclusions and future work.

2. Problem Statement and Formulation

The MNWHFP-UPM studied in this paper can be described as follows. There are N jobs to be processed sequentially through W processing stages. Every stage has at least one machine, and at least one stage involves multiple unrelated parallel machines. The processing time of a job varies across the machines of the same stage. Once a job enters the production system, it must be processed until completion without interruption, either on or between machines, at all stages. Each machine can process at most one job at a time, and each job can be handled on at most one machine at a time. All jobs are available at time zero. Transportation times between stages are negligible, and setup times are considered part of the processing time.

When a job enters the production system, the idle machines are selected for completing each operation. Taking the case with N = 5, W = 3 as an example, the processing machine flow can be illustrated in Fig. 1 where S and E indicate the start and end of the whole production system, respectively, and UMjm denotes unrelated machine m for the jth-stage operation.

Fig. 1. Processing machine flow.

The above problem can be represented as HFW(um1, …, umW) | no-wait | ∑CiW, where the vector (um1, …, umW) gives the number of unrelated parallel machines at each stage and no-wait indicates that each job i must be processed continuously from start to end. The objective is to minimize the total flowtime, where CiW denotes the completion time of job i at the last stage W.

2.1. Notations

N: Number of jobs.

W: Number of stages.

Mj: Number of unrelated parallel machines at stage j.

m: Index for machines where m ∈{1, 2,..., Mj}.

i: Index for jobs where i ∈{1, 2,…, N}.

j: Index for stages where j ∈{1, 2,..., W}.

t: Index for time where t ∈{1, 2,..., G}.

PTijm: Processing time of job i at stage j on the mth machine.

BTijm: Beginning time of job i at stage j on the mth machine.

CTijm: Completion time of job i at stage j on the mth machine.

Xijm: A binary variable equal to 1 if job i is processed on the mth machine at stage j, and 0 otherwise.

Yijt: A binary variable equal to 1 if job i at stage j is being processed at time t, and 0 otherwise.

2.2. Modeling

Based on the above problem description and notations, an integer programming (IP) model is established:   

\min E = \sum_{i=1}^{N} C_{iW} \qquad (1)

s.t.

\sum_{m=1}^{M_j} X_{ijm} = 1, \quad \forall i, j \qquad (2)

\sum_{i=1}^{N} Y_{ijt} \le M_j, \quad \forall j, t \qquad (3)

\sum_{t=1}^{G} Y_{ijt} = \sum_{m=1}^{M_j} X_{ijm} PT_{ijm}, \quad \forall i, j \qquad (4)

\sum_{m=1}^{M_j} X_{ijm} (CT_{ijm} - BT_{ijm}) = \sum_{m=1}^{M_j} X_{ijm} PT_{ijm}, \quad \forall i, j \qquad (5)

\sum_{m=1}^{M_j} BT_{ijm} X_{ijm} = \sum_{m'=1}^{M_{j-1}} X_{i,j-1,m'} (BT_{i,j-1,m'} + PT_{i,j-1,m'}), \quad \forall i, \; j \; (j > 1) \qquad (6)

CT_{ijm}, BT_{ijm} \ge 0, \quad \forall i, j, m \qquad (7)

X_{ijm} \in \{0, 1\}, \quad \forall i, j, m \qquad (8)

Y_{ijt} \in \{0, 1\}, \quad \forall i, j, t \qquad (9)

The objective function (1) minimizes the total flowtime ∑iCiW, where CiW = ∑m CTiWm XiWm. Constraints (2) ensure that each job is assigned to exactly one machine at every stage. Constraints (3) represent the machine capacity restriction, i.e., the number of jobs being processed at each stage cannot exceed the number of available machines at that stage during each time period. Constraints (4) indicate the number of time periods occupied by each job on its assigned machine. Constraints (5) make sure that an operation is not interrupted once its processing starts. Constraints (6) ensure that no waiting is allowed between two adjacent stages. Constraints (7)–(9) define the value ranges of the decision variables.

3. Proposed Genetic Simulated Annealing Algorithm

Since the studied MNWHFP-UPM is NP-hard, the optimization algorithm must be computationally efficient. GA is widely used for solving combinatorial optimization problems and has the advantage of short computation time; however, the traditional GA is prone to premature (local) convergence. Meanwhile, SA may accept a solution that is worse than the current solution with a certain probability, which helps the algorithm escape from local optima. Therefore, considering that the NEH algorithm33) can improve the quality of a randomly generated initial population, a novel genetic simulated annealing algorithm (GSAA) is presented based on GA, SA and the NEH algorithm, aiming to minimize the total flowtime.

In the proposed algorithm, a two-dimensional matrix representation based on job ordering and machine assignment is first used for encoding, and the NEH algorithm is modified to define the initial job ordering. An insertion-translation decoding scheme is designed to obtain the schedules. Then, several improvements are made to enhance the performance of the GA, including an SA procedure, five neighborhood search structures and an adaptive adjustment strategy for the genetic parameters. The procedure of the developed GSAA is shown in Fig. 2.

Fig. 2. Flowchart of the GSAA.

The detailed steps are provided as follows.

3.1. Encoding

The proposed GSAA starts with multiple generated solutions organized by a two-dimensional matrix formation.

3.1.1. Multi-stage Production System

The multi-stage production system under our study consists of W stages, and the jth-stage operation is finished by any one of the unrelated parallel machines at that stage. Each machine is coded by a natural-number encoding scheme, as shown in Fig. 3.

Fig. 3. Multi-stage production system.

3.1.2. Chromosome Encoding

Considering the characteristics of the MNWHFP-UPM, an integer scheme is applied to express the machine number to which each operation of a job is assigned. A two-dimensional chromosome matrix Z is then formed as follows, where a row corresponds to a job and a column represents a processing stage. The rows of Z are arranged according to the job ordering, and the element Z[i]j represents the machine allocated at stage j to the job in the i-th position of the job ordering. A set of such chromosome matrices constitutes the population.

Z = \begin{bmatrix} \alpha_{[1]1} & \alpha_{[1]2} & \cdots & \alpha_{[1]W} \\ \alpha_{[2]1} & \alpha_{[2]2} & \cdots & \alpha_{[2]W} \\ \vdots & \vdots & & \vdots \\ \alpha_{[N]1} & \alpha_{[N]2} & \cdots & \alpha_{[N]W} \end{bmatrix}
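For illustration only (this layout is ours, not from the paper's implementation), the chromosome for the 5-job, 3-stage instance used later in Section 3.3 can be stored as a small integer matrix whose rows follow the job ordering and whose entry in row i, column j is the machine assigned at stage j:

```python
import numpy as np

# Two-dimensional chromosome Z for a 5-job, 3-stage instance
# (machine assignments taken from the worked example in Section 3.3):
# row = position in the job ordering, column = stage,
# entry = index of the unrelated parallel machine assigned at that stage.
Z = np.array([
    [2, 1, 1],   # job 1: machine 2 at stage 1, machine 1 at stages 2 and 3
    [1, 2, 2],   # job 2
    [3, 2, 2],   # job 3
    [2, 1, 1],   # job 4
    [1, 1, 1],   # job 5
])
print(Z.shape)  # (N, W) = (5, 3)
```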

3.2. Population Initialization

The initialization generates the matrices of job ordering and machine assignment. As mentioned above, each integer (gene) in Z denotes the machine assigned to one operation of a job. To prevent imbalance in the machine assignment, each gene value is drawn from the uniform distribution over [1, Mj]. The NEH algorithm is widely applied to shop scheduling problems to determine the job processing sequence. Extending the work of Kim and Li,34) a modified NEH algorithm (MNEH) is proposed here to define the initial job ordering so as to adapt to the unrelated parallel machine environment. The detailed steps are listed below, and a compact sketch follows the steps.

Step 1: Calculate the average processing time of job i at stage j, given by the Eq. (10):   

\overline{PT}_{ij} = \frac{\sum_{m=1}^{M_j} PT_{ijm}}{M_j}, \quad \forall i, j \qquad (10)

Step 2: A list is generated where the jobs are sequenced according to a non-increasing order of the total processing time SPTi. SPTi is computed by the following equation:   

SPT_i = \sum_{j=1}^{W} \overline{PT}_{ij}, \quad \forall i \qquad (11)

Step 3: Find the first two jobs from the above list and order them in all possible ways. Select a permutation of the two jobs with the minimum total flowtime as the current partial schedule.

Step 4: Sequentially choose the kth job (k=3, 4, ..., N) from the list and insert it to the possible positions in the partial schedule including the previous k-1 jobs. Take the permutation with the minimum total flowtime as the job sequence of the k jobs;

Step 5: The obtained job sequence is used as the initial job ordering and it decides the row permutation of the two-dimensional chromosome matrix.
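The sketch below mirrors the MNEH steps in Python. It is a schematic reading of the procedure; the evaluate_total_flowtime callback (our placeholder name) is assumed to decode a partial job ordering with the scheme of Section 3.3 and return its total flowtime.

```python
def mneh_order(PT, evaluate_total_flowtime):
    """PT[i][j] is a list of processing times of job i at stage j on each
    unrelated machine; evaluate_total_flowtime(order) is an external decoder
    returning the total flowtime of a (partial) job ordering."""
    N, W = len(PT), len(PT[0])
    # Steps 1-2: average processing time per stage, total SPT_i per job,
    # jobs listed in non-increasing order of SPT_i.
    spt = [sum(sum(PT[i][j]) / len(PT[i][j]) for j in range(W)) for i in range(N)]
    jobs = sorted(range(N), key=lambda i: -spt[i])
    # Step 3: best ordering of the first two jobs.
    a, b = jobs[0], jobs[1]
    order = min([[a, b], [b, a]], key=evaluate_total_flowtime)
    # Step 4: insert each remaining job at the position with minimum total flowtime.
    for k in range(2, N):
        candidates = [order[:p] + [jobs[k]] + order[p:] for p in range(len(order) + 1)]
        order = min(candidates, key=evaluate_total_flowtime)
    # Step 5: the resulting sequence fixes the row permutation of the chromosome.
    return order
```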

3.3. Decoding

Decoding is the inverse process of encoding. After the job ordering and assignment of each operation are obtained through the decoding, a schedule can be generated. To ensure that the solution achieved by chromosome decoding is a feasible schedule, an insertion-translation decoding scheme is developed as follows.

Step 1: Initialize the possible earliest beginning time of job [i] on the first-stage operation. Let BT[i]1m =0 (i=1, 2, ..., N, m=1, 2, ..., M1);

Step 2: Perform Step 3 to Step 6 for each job [i] in the job ordering, for stages j = 1 to W in turn;

Step 3: Get the machine number m from the gene position Z[i]j. Add job [i] to the job set Ω of machine m at stage j and update the number of jobs |Ω| allocated to this machine;

Step 4: Find an idle time period of machine m at stage j into which the processing of job [i] can be inserted, without yet considering the no-wait constraints. Update the starting time BT[i]jm of job [i] on machine m at stage j and calculate the finishing time. If |Ω| = 1, BT[i]jm is calculated by

BT_{[i]jm} = \begin{cases} 0, & j = 1 \\ BT_{[i],j-1,m} + PT_{[i],j-1,m}, & j > 1 \end{cases} \qquad (12)

Otherwise, BT[i]jm is determined by the following Eq. (13), where A represents the maximal completion time over the jobs in Ω − [i]:

BT_{[i]jm} = \begin{cases} A, & j = 1 \\ \max\{BT_{[i],j-1,m} + PT_{[i],j-1,m}, A\}, & j > 1 \end{cases} \qquad (13)

Step 5: Check the beginning and finishing times of all operations of job [i]. If any no-wait constraint is violated, the obtained schedule is infeasible and a translation procedure is performed to convert it into a feasible one;

Step 6: Update the beginning time and finishing time of job [i] at each stage.
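Read this way, the translation of Step 5 is a backward pass over each job: operations are shifted right until every completion time meets the start of the next-stage operation. The sketch below is our simplification (it ignores the machine idle-interval search of Step 4) and reproduces the adjustment applied to job 3 in the example that follows.

```python
def translate_no_wait(BT, CT, PT_assigned):
    """Backward translation for no-wait feasibility.
    BT[i][j], CT[i][j]: tentative beginning/completion times from the forward pass.
    PT_assigned[i][j]: processing time of job i at stage j on its assigned machine."""
    N, W = len(BT), len(BT[0])
    for i in range(N):
        for j in range(W - 2, -1, -1):        # from the last-but-one stage backwards
            if CT[i][j] < BT[i][j + 1]:        # a waiting gap violates the no-wait constraint
                CT[i][j] = BT[i][j + 1]        # shift the operation to the right ...
                BT[i][j] = CT[i][j] - PT_assigned[i][j]
    return BT, CT

# Job 3 of the following example: the forward pass gives BT = [0, 35, 49],
# CT = [20, 49, 54] with assigned processing times [20, 14, 5].
print(translate_no_wait([[0, 35, 49]], [[20, 49, 54]], [[20, 14, 5]]))
# -> ([[15, 35, 49]], [[35, 49, 54]])
```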

The following example will help illustrate the decoding issues. Consider an instance with 5 jobs and 3 stages in Fig. 1. There are three unrelated machines in the first stage and two unrelated machines in the second and third stages. The processing time PTijm and the gene values in Z are provided in Table 1. For example, at stage 1, job 1 is processed on the second machine and at stage 2 and stage 3, it is assigned to the first machine. It is also observed that the jobs are arranged according to the sequence of 1→2→3→4→5.

Table 1. Testing data of the case.
| Job | PTi11 | PTi12 | PTi13 | PTi21 | PTi22 | PTi31 | PTi32 | Z[i]1 | Z[i]2 | Z[i]3 |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| 1 | 17 | 10 | 3 | 17 | 9 | 16 | 15 | 2 | 1 | 1 |
| 2 | 19 | 13 | 2 | 19 | 16 | 8 | 12 | 1 | 2 | 2 |
| 3 | 6 | 11 | 20 | 20 | 14 | 4 | 5 | 3 | 2 | 2 |
| 4 | 20 | 4 | 20 | 2 | 17 | 2 | 6 | 2 | 1 | 1 |
| 5 | 2 | 10 | 17 | 7 | 14 | 2 | 2 | 1 | 1 | 1 |

The specific decoding steps are given as follows.

Job 1: Z1j = [2 1 1]; X111 = X113 = X122 = X132 = 0; X112 = X121 = X131 = 1

BT112 = 0; CT112 = BT112 + ∑mPT11mX11m = 0 + 10 = 10; BT121 = CT112 = 10

CT121 = BT121 + ∑mPT12mX12m = 10 + 17 = 27; BT131 = CT121 = 27; CT131 = BT131 + ∑mPT13mX13m = 27 + 16 = 43

Job 2: Z2j = [1 2 2]; X212 = X213 = X221 = X231 = 0; X211 = X222 = X232 = 1

BT211 = 0; CT211 = BT211 + PT211 = 0 + 19 = 19; BT222 = 19; CT222 = BT222 + PT222 = 19 + 16 = 35

BT232 = 35; CT232 = BT232 + PT232 = 35 + 12 = 47

Job 3: Z3j = [3 2 2]; X311 = X312 = X321 = X331 = 0; X313 = X322 = X332 = 1

BT313 = 0; CT313 = BT313 + PT313 = 0 + 20 = 20; BT322 = max{CT313, CT222} = max{20, 35} = 35

CT322 = BT322 + PT322 = 35 + 14 = 49; BT332 = max{CT322, CT232} = max{49, 47} = 49

CT332 = BT332 + PT332 = 49 + 5 = 54

Update the partial schedule:

BT332 = 49; CT332 = 54; CT322 = BT332 = 49; BT322 = CT322 − ∑mPT32mX32m = 49 − 14 = 35

CT313 = BT322 = 35; BT313 = CT313 − ∑mPT31mX31m = 35 − 20 = 15

Job 4: Z4j = [2 1 1]; X411 = X413 = X422 = X432 = 0; X412 = X421 = X431 = 1

BT412 = max{0, CT112} = max{0, 10} = 10; CT412 = BT412 + PT412 = 10 + 4 = 14

BT421 = max{CT412, CT121} = max{14, 27} = 27; CT421 = BT421 + PT421 = 27 + 2 = 29

BT431 = max{CT421, CT131} = max{29, 43} = 43; CT431 = BT431 + PT431 = 43 + 2 = 45

Update the partial schedule:

BT431 = 43; CT431 = 45; CT421 = BT431 = 43; BT421 = 43 − 2 = 41; CT412 = BT421 = 41; BT412 = 41 − 4 = 37

Job 5: Z5j = [1 1 1]; X512 = X513 = X522 = X532 = 0; X511 = X521 = X531 = 1

BT511 = max{0, CT211} = max{0, 19} = 19; CT511 = BT511 + PT511 = 19 + 2 = 21

BT521 = max{CT511, CT121, CT421} = max{21, 27, 43} = 43; CT521 = BT521 + PT521 = 43 + 7 = 50

BT531 = max{CT521, CT131, CT431} = max{50, 43, 45} = 50; CT531 = BT531 + PT531 = 50 + 2 = 52

Update the partial schedule:

BT531 = 50; CT531 = 52; CT521 = BT531 = 50; BT521 = 50 − 7 = 43; CT511 = BT521 = 43; BT511 = 43 − 2 = 41

Infeasible solution:

[BT_{ijm}]_{5\times 3} = \begin{bmatrix} 0 & 10 & 27 \\ 0 & 19 & 35 \\ 0 & 35 & 49 \\ 10 & 27 & 43 \\ 19 & 43 & 50 \end{bmatrix} \qquad [CT_{ijm}]_{5\times 3} = \begin{bmatrix} 10 & 27 & 43 \\ 19 & 35 & 47 \\ 20 & 49 & 54 \\ 14 & 29 & 45 \\ 21 & 50 & 52 \end{bmatrix}

Feasible solution:

[BT_{ijm}]_{5\times 3} = \begin{bmatrix} 0 & 10 & 27 \\ 0 & 19 & 35 \\ 15 & 35 & 49 \\ 37 & 41 & 43 \\ 41 & 43 & 50 \end{bmatrix} \qquad [CT_{ijm}]_{5\times 3} = \begin{bmatrix} 10 & 27 & 43 \\ 19 & 35 & 47 \\ 35 & 49 & 54 \\ 41 & 43 & 45 \\ 43 & 50 & 52 \end{bmatrix}

The Gantt chart of the obtained solution for the above instance is shown in Fig. 4, where each operation is denoted by a blank rectangle labeled with the job number and a shaded rectangle represents a time period that violates the no-wait constraints. The finishing times of jobs 1–5 at the last stage are 43, 47, 54, 45 and 52, respectively. Therefore, the total flowtime E is 241.
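As a quick consistency check (the array layout is ours), the total flowtime can be read off the last column of the feasible completion-time matrix:

```python
import numpy as np

# Completion times CT[i][j] of the feasible schedule obtained above.
CT = np.array([[10, 27, 43], [19, 35, 47], [35, 49, 54], [41, 43, 45], [43, 50, 52]])
print(CT[:, -1].sum())  # total flowtime E = 43 + 47 + 54 + 45 + 52 = 241
```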

Fig. 4. Violation time based translation process for 5 jobs.

3.4. Genetic Operators

3.4.1. Fitness Evaluation and Selection

Since the goal is to minimize the total flowtime, the reciprocal of the objective function is used as the fitness function. That is, the fitness of the n-th individual is defined by

F_n = \frac{1}{\sum_{m=1}^{M_W} \sum_{i=1}^{N} CT_{iWm} X_{iWm}} \qquad (14)

Based on Eq. (14), the selection probability PRn is provided by   

PR_n = \frac{F_n}{\sum_{n=1}^{D} F_n} \qquad (15)

where D represents the population size. The roulette wheel rule is applied in the selection phase: according to the calculated fitness values, individuals with better fitness are more likely to be selected for reproduction.
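A minimal sketch of the fitness evaluation and roulette wheel selection of Eqs. (14) and (15), assuming each individual is summarized by its total flowtime (the function and variable names are ours):

```python
import numpy as np

def roulette_select(population, total_flowtimes, rng=None):
    """Select one individual with probability PR_n proportional to
    the fitness F_n = 1 / (total flowtime), as in Eqs. (14)-(15)."""
    rng = rng or np.random.default_rng()
    fitness = 1.0 / np.asarray(total_flowtimes, dtype=float)   # Eq. (14)
    probs = fitness / fitness.sum()                             # Eq. (15)
    idx = rng.choice(len(population), p=probs)
    return population[idx]

# Example: three dummy individuals with total flowtimes 241, 260 and 300.
print(roulette_select(["Z1", "Z2", "Z3"], [241, 260, 300]))
```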

3.4.2. Crossover

Crossover is an operation to generate two new chromosomes by recombining the genes of two parent chromosomes, which will be included in the next generation. Two single-point crossover (SPC) rules are designed here: row crossover and column crossover.

(1) Row crossover. Randomly generate a crossover point η (0 < η < N) for two parent chromosomes. The rows before this point are copied from each parent to its offspring, and the remaining rows of each offspring are filled with the rows after η from the other parent. Two new offspring chromosomes are then generated, as shown in Fig. 5.

Fig. 5. Row crossover using SPC rule.

(2) Column crossover. Randomly generate a crossover point θ (0 < θ < W). The columns from 1 to θ of the two parent chromosomes are inherited by their offspring, and the columns from θ + 1 to W of each offspring are taken from the other parent (see Fig. 6).

Fig. 6. Column crossover using SPC rule.

The crossover rate Pcx controls the probability of applying the above crossover operators. An adaptive adjustment strategy of the genetic parameters is applied to determine its value, given by

P_{cx} = \begin{cases} P_{c\max} - (P_{c\max} - P_{c\min}) \left( \dfrac{x}{MI} + \dfrac{F_n}{F_{\max} - F_{avg}} \right), & F_n \ge F_{avg} \\ P_{c\max}, & F_n < F_{avg} \end{cases}

In the above formula, x is the current iteration, MI is the maximum iteration number, Fn is the fitness of the current individual, and Fmax and Favg denote the maximum and average fitness of the current population.

3.4.3. Mutation

Single-point mutation is applied to maintain the diversity of the population. One gene is randomly chosen from the parent chromosome and a new machine number is assigned to this position (see Fig. 7). The mutation probability Pmx is computed by

P_{mx} = \begin{cases} P_{m\max} - (P_{m\max} - P_{m\min}) \left( \dfrac{x}{MI} + \dfrac{F_n}{F_{\max} - F_{avg}} \right), & F_n \ge F_{avg} \\ P_{m\max}, & F_n < F_{avg} \end{cases}

Fig. 7. Gene based single-point mutation.
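Under our reading of the two expressions above (the same form applies to Pcx and Pmx), the adaptive probability can be computed as in the sketch below; the clamp to the configured range is our addition, used only to keep the value inside [pmin, pmax].

```python
def adaptive_prob(p_min, p_max, F_n, F_max, F_avg, x, MI):
    """Adaptive crossover/mutation probability: individuals at or above the
    average fitness get a reduced probability; the rest keep the maximum."""
    if F_n >= F_avg and F_max > F_avg:
        p = p_max - (p_max - p_min) * (x / MI + F_n / (F_max - F_avg))
        return max(p_min, min(p_max, p))   # our clamp to [p_min, p_max]
    return p_max

# Example with the crossover range chosen in Section 4.1: [Pcmin, Pcmax] = [0.4, 0.99].
print(adaptive_prob(0.4, 0.99, F_n=0.9, F_max=1.0, F_avg=0.5, x=10, MI=120))
```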

3.5. SA Using Multiple Neighborhood Search Structures

In order to further expand the search scope and avoid premature convergence of the GA, an SA procedure based on multiple neighborhood search structures (MNSS) is performed to re-optimize the GA solutions.

3.5.1. Multiple Neighborhood Search Structures

Extending the results of Ramezani et al.,31) five neighborhood relations are defined to generate neighborhood solutions of the better solutions in the current generation (a short code sketch is given after the five definitions).

(1) Job based exchange (JE): Randomly select two jobs μ and ν (0 < μ < ν ≤ N) and swap their genes at all stages (see Fig. 8(a)).

Fig. 8. Types of neighborhood search structures.

(2) Gene based exchange (GE): Randomly choose a point γ (0 < γ ≤ W) for two jobs μ and ν (0 < μ < ν ≤ N) and exchange the genes of operations (μ, γ) and (ν, γ) (see Fig. 8(b)).

(3) Gene based insertion (GI): Randomly select a point γ (0 < γ ≤ W) for two jobs μ and ν (0 < μ < ν ≤ N) and insert the gene of operation (ν, γ) immediately after that of operation (μ, γ) (see Fig. 8(c)).

(4) Job based mutation (JM): Randomly select two points σ and γ (0 < σ < γ ≤ W) for a job μ (0 < μ ≤ N), and randomly re-assign the machine numbers of operations (μ, σ) and (μ, γ) (see Fig. 8(d)).

(5) Gene based mutation (GM): Randomly choose a point γ (0 < γ ≤ W) for two jobs μ and ν (0 < μ < ν ≤ N), and randomly re-assign the machine numbers of operations (μ, γ) and (ν, γ) (see Fig. 8(e)).
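A compact sketch of the five moves on a copy of the chromosome matrix (indices drawn uniformly; M[j] below denotes the machine count at stage j; the re-assignment reading of JM and GM is our interpretation of "re-arrange"):

```python
import numpy as np

rng = np.random.default_rng()

def job_exchange(Z):                        # JE: swap the complete rows of two jobs
    mu, nu = rng.choice(Z.shape[0], size=2, replace=False)
    Z = Z.copy(); Z[[mu, nu]] = Z[[nu, mu]]; return Z

def gene_exchange(Z):                       # GE: swap the genes of two jobs at one stage
    mu, nu = rng.choice(Z.shape[0], size=2, replace=False)
    g = rng.integers(Z.shape[1])
    Z = Z.copy(); Z[mu, g], Z[nu, g] = Z[nu, g], Z[mu, g]; return Z

def gene_insertion(Z):                      # GI: move gene (nu, g) to just after (mu, g)
    mu, nu = sorted(rng.choice(Z.shape[0], size=2, replace=False))
    g = rng.integers(Z.shape[1])
    Z = Z.copy()
    col = np.delete(Z[:, g], nu)
    Z[:, g] = np.insert(col, mu + 1, Z[nu, g])
    return Z

def job_mutation(Z, M):                     # JM: re-draw the machines at two stages of one job
    mu = rng.integers(Z.shape[0])
    s, g = sorted(rng.choice(Z.shape[1], size=2, replace=False))
    Z = Z.copy()
    Z[mu, s] = rng.integers(1, M[s] + 1)
    Z[mu, g] = rng.integers(1, M[g] + 1)
    return Z

def gene_mutation(Z, M):                    # GM: re-draw the machines of two jobs at one stage
    mu, nu = rng.choice(Z.shape[0], size=2, replace=False)
    g = rng.integers(Z.shape[1])
    Z = Z.copy()
    Z[mu, g] = rng.integers(1, M[g] + 1)
    Z[nu, g] = rng.integers(1, M[g] + 1)
    return Z
```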

3.5.2. Improvement of GA Solutions Based on SA

To further enhance the quality of the GA solutions, an SA procedure is incorporated into the hybrid algorithm. An array {Zd} is constructed by sorting the individuals obtained by the GA in descending order of fitness. The first 40% of {Zd} are chosen to undergo the SA procedure at the current temperature Tx. The SA procedure is described in Algorithm 1.

Algorithm 1: The SA procedure
1: Input: the current individual Zn; the neighborhood relations {JE, GE, GI, JM, GM}
2: Output: the new individual Zn
3: Set B = Zn, B* = B, pk = 1, L = 5
4: while pk ≤ L do
5:   Randomly choose one of the neighborhood relations to generate a neighbor B′
6:   Calculate the increment ΔE′ = E(B′) − E(B)
7:   if ΔE′ ≤ 0 then
8:     B = B′, B* = B
9:   else if Rand ≤ 0.5·exp(−ΔE′/Tx) then
10:    B = B′
11:  end if
12:  pk = pk + 1
13: end while
14: Zn = B*

In the SA, the temperature is updated by the geometric cooling scheme Tx = q·Tx−1, where the cooling coefficient q is 0.95 and the initial temperature T0 is 10000.
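Putting Algorithm 1 and the cooling rule together, one pass of the re-optimization for a single individual might look as follows (a sketch based on our reconstruction of the pseudocode, including the 0.5 factor; total_flowtime and neighbors are placeholders for the decoder and the five moves above):

```python
import math
import random

def sa_step(Z, total_flowtime, neighbors, T, L=5):
    """One pass of Algorithm 1 at temperature T for one GA individual Z."""
    B = B_star = Z
    E_B = E_star = total_flowtime(Z)
    for _ in range(L):
        B_new = random.choice(neighbors)(B)          # random neighborhood relation
        E_new = total_flowtime(B_new)
        dE = E_new - E_B
        if dE <= 0:                                  # accept improving (or equal) moves
            B, E_B = B_new, E_new
            if E_B < E_star:
                B_star, E_star = B, E_B
        elif random.random() <= 0.5 * math.exp(-dE / T):
            B, E_B = B_new, E_new                    # occasionally accept a worse move
    return B_star

# Across GA generations the temperature follows T_x = q * T_{x-1} with q = 0.95 and T_0 = 10000.
```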

4. Simulation Experiments

In this section, simulation experiments and algorithm comparisons are provided to analyze the performance of the proposed GSAA. The GSAA is coded in MATLAB 2014a and compared with SA, tabu search (TS), the traditional GA (TGA), an improved GA using the NEH rule (NEH-IGA)35) and a hybrid algorithm based on GA and SA (GASA).10) All six algorithms are run on a personal computer with an AMD A8-4500M 1.9 GHz processor and 4.00 GB of RAM.

4.1. Experimental Design

Four sets of experiments are designed: (1) effect analysis of introducing the MNEH algorithm for the initial population; (2) experimental tests for small and medium scale problems; (3) experimental tests for large scale problems; (4) evolution of the objective values with the iteration number for problem instances of different sizes.

Because the parameter settings have a great influence on the algorithm, an empirical estimation method is applied to determine their values. The proposed GSAA mainly involves three key factors: D, [Pcmin, Pcmax] and [Pmmin, Pmmax]. Taking the case with N = 5, W = 3, M1 = 3, M2 = M3 = 2 as an example, simulation experiments are carried out with different combinations of these factors, and each combination is averaged over 10 runs (see Table 2). According to the results, the factors are set as follows: D = 80, [Pcmin, Pcmax] = [0.4, 0.99], [Pmmin, Pmmax] = [0.2, 0.6].

Table 2. Factor evaluation.
| No. | D | [Pcmin, Pcmax] | [Pmmin, Pmmax] | Objective value |
| --- | --- | --- | --- | --- |
| 1 | 70 | [0.4, 0.6] | [0.2, 0.4] | 7670.0 |
| 2 | 70 | [0.4, 0.6] | [0.2, 0.6] | 7122.7 |
| 3 | 70 | [0.4, 0.99] | [0.2, 0.4] | 7713.4 |
| 4 | 70 | [0.4, 0.99] | [0.2, 0.6] | 6922.7 |
| 5 | 80 | [0.4, 0.6] | [0.2, 0.4] | 7252.9 |
| 6 | 80 | [0.4, 0.6] | [0.2, 0.6] | 7408.3 |
| 7 | 80 | [0.4, 0.99] | [0.2, 0.4] | 7189.2 |
| 8 | 80 | [0.4, 0.99] | [0.2, 0.6] | 6512.3 |
| 9 | 90 | [0.4, 0.6] | [0.2, 0.4] | 7588.0 |
| 10 | 90 | [0.4, 0.6] | [0.2, 0.6] | 7104.0 |
| 11 | 90 | [0.4, 0.99] | [0.2, 0.4] | 6880.0 |
| 12 | 90 | [0.4, 0.99] | [0.2, 0.6] | 6811.6 |

In order to compare all algorithms fairly, based on extensive numerical experiments the crossover probability in TGA and GASA is set to 0.8 and the mutation probability to 0.4; the population size of all GA-based algorithms is set to 80. For all algorithms, a maximum iteration number of 120 and a maximum running time of 800 s are used as the stopping criteria. The other experimental parameters are generated as shown in Table 3, where Mj = 3 in A1, Mj ~ U[3, 5] in A2 and Mj = 5 in A3.

Table 3. Parameter settings.
| Parameter | Value |
| --- | --- |
| W | {2, 3, 4} |
| Mj | {A1, A2, A3} |
| PTijm | U[1, 20] |
| N | {10, 30, 60, 80, 100, 120, 150, 200} |

For each combination of N, W and Mj, ten replicates are generated and solved, so a total of 720 problem instances are used in our experiments. Each value in the following tables (except the last row) is the average over the ten instances of the same problem size. The performance of the algorithms is measured by the computational time and the improvement rate (IR) of each heuristic's objective value over the GSAA objective value. The IR value is defined as

IR = \frac{H_r - H_{GSAA}}{H_{GSAA}} \times 100

where Hr and HGSAA are the objective values obtained by a given algorithm and by the GSAA, respectively.
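The IR metric is a simple relative gap; a one-line helper (our naming) for completeness:

```python
def improvement_rate(H_r, H_gsaa):
    """Percentage by which a compared algorithm's objective exceeds the GSAA value."""
    return (H_r - H_gsaa) / H_gsaa * 100

print(improvement_rate(110.0, 100.0))  # 10.0
```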

4.2. Effect Analyses of Initial Population

In order to analyze the influence of introducing the MNEH algorithm for the initial solutions, simulation experiments compare the objective values of the initial populations of two algorithms, GSAA and GSAA without the MNEH algorithm (GSAA-N), for the problems with N = {10, 30, 60, 80, 100, 120, 150, 200}, Mj = 3, W = 3. The results are shown in Table 4, where every value (except the last row) is the average over the ten instances of each problem size.

Table 4. Testing results of initial population.
| N × W × Mj | GSAA Best | GSAA Mean | GSAA-N Best | GSAA-N Mean |
| --- | --- | --- | --- | --- |
| 10 × 3 × 3 | 523 | 748.8 | 466 | 681.7 |
| 30 × 3 × 3 | 3888 | 5109.9 | 3659 | 4732.4 |
| 60 × 3 × 3 | 15852 | 19111.1 | 14645 | 17658.1 |
| 80 × 3 × 3 | 28415 | 33700.5 | 25563 | 30539.1 |
| 100 × 3 × 3 | 44515 | 51187.5 | 40054 | 47102.2 |
| 120 × 3 × 3 | 62928 | 73513.4 | 57947 | 67488.0 |
| 150 × 3 × 3 | 100195 | 114180.5 | 92312 | 104637.0 |
| 200 × 3 × 3 | 179881 | 200076.2 | 161370 | 182115.9 |
| Average | 54524 | 62203.5 | 49502 | 56869.3 |

According to Table 4, for the initial populations achieved by the GSAA and GSAA-N algorithms, the averages of the best objective values are 54524 and 49502, respectively, and the averages of the mean objective values are 62203.5 and 56869.3, respectively. On average, the best and mean values obtained by GSAA are 10.1% and 9.4% better than those from GSAA-N, respectively. Therefore, the introduction of the MNEH algorithm can significantly improve the quality of the initial population.

4.3. Experimental Test for Small and Medium Scale Problems

In order to further verify the effectiveness of the GSAA, the six algorithms above are tested and compared on small and medium sized problems (N = {10, 30, 60, 80}), and the computational results are listed in Table 5. The stopping criterion of SA, TS, TGA, NEH-IGA and GASA is set to the CPU time used by the GSAA under the limit of 120 iterations. Considering the importance of time, the maximum running time of all the algorithms is limited to 800 s.

Table 5. Testing results of small and medium scale problems.
| N × W × Mj | min E: SA | TS | TGA | NEH-IGA | GASA | GSAA | IR: SA | TS | TGA | NEH-IGA | GASA | CPU/s |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| 10 × 2 × A1 | 217 | 283 | 206 | 203 | 203 | 193 | 13.6 | 45.8 | 8.0 | 5.5 | 6.4 | 22.06 |
| 10 × 2 × A2 | 225 | 297 | 210 | 192 | 214 | 181 | 25.1 | 64.9 | 16.5 | 6.5 | 18.8 | 22.48 |
| 10 × 2 × A3 | 158 | 230 | 121 | 120 | 120 | 105 | 51.2 | 122.4 | 14.8 | 14.5 | 13.7 | 21.52 |
| 10 × 3 × A1 | 348 | 453 | 304 | 284 | 300 | 260 | 34.3 | 75.0 | 16.6 | 9.1 | 15.8 | 32.13 |
| 10 × 3 × A2 | 324 | 419 | 258 | 231 | 260 | 227 | 44.8 | 89.5 | 13.1 | 2.1 | 13.7 | 29.73 |
| 10 × 3 × A3 | 265 | 306 | 179 | 171 | 188 | 170 | 55.3 | 80.5 | 5.4 | 0.2 | 10.5 | 28.55 |
| 10 × 4 × A1 | 493 | 639 | 399 | 350 | 403 | 333 | 48.9 | 93.3 | 20.4 | 4.9 | 21.9 | 47.51 |
| 10 × 4 × A2 | 445 | 606 | 321 | 291 | 314 | 288 | 58.0 | 115.3 | 12.2 | −0.1 | 10.6 | 44.61 |
| 10 × 4 × A3 | 381 | 530 | 230 | 227 | 236 | 226 | 69.6 | 136.2 | 1.7 | 0.4 | 4.4 | 45.07 |
| 30 × 2 × A1 | 1841 | 1827 | 1604 | 1423 | 1686 | 1333 | 38.1 | 37.4 | 20.0 | 6.7 | 26.6 | 59.67 |
| 30 × 2 × A2 | 1558 | 1693 | 1302 | 1212 | 1338 | 1109 | 43.1 | 54.0 | 16.4 | 9.7 | 21.8 | 60.17 |
| 30 × 2 × A3 | 1104 | 1305 | 830 | 772 | 832 | 780 | 42.0 | 67.6 | 6.5 | −0.8 | 7.0 | 58.62 |
| 30 × 3 × A1 | 2464 | 2652 | 2069 | 1875 | 2052 | 1754 | 42.1 | 52.7 | 19.1 | 7.6 | −0.8 | 82.60 |
| 30 × 3 × A2 | 2239 | 2286 | 1709 | 1575 | 1726 | 1533 | 47.1 | 50.7 | 12.2 | 2.7 | 13.2 | 82.76 |
| 30 × 3 × A3 | 1749 | 1931 | 1253 | 1209 | 1257 | 1145 | 53.2 | 69.4 | 9.9 | 5.3 | 10.2 | 81.45 |
| 30 × 4 × A1 | 3344 | 3656 | 2681 | 2394 | 2671 | 2302 | 45.5 | 58.8 | 16.7 | 4.0 | 16.2 | 132.05 |
| 30 × 4 × A2 | 3036 | 3235 | 2352 | 2141 | 2418 | 2116 | 43.9 | 54.2 | 11.4 | 1.3 | 14.9 | 139.63 |
| 30 × 4 × A3 | 2418 | 2810 | 1656 | 1696 | 1727 | 1627 | 49.2 | 73.5 | 2.1 | 3.9 | 6.2 | 128.00 |
| 60 × 2 × A1 | 7615 | 6832 | 6623 | 5949 | 6656 | 5514 | 38.2 | 23.9 | 20.3 | 8.0 | 20.9 | 126.03 |
| 60 × 2 × A2 | 6386 | 6235 | 5343 | 4687 | 5467 | 4536 | 41.5 | 38.1 | 17.6 | 3.7 | 20.1 | 117.13 |
| 60 × 2 × A3 | 4780 | 4508 | 3679 | 3460 | 3930 | 3452 | 39.4 | 31.5 | 7.1 | 0.7 | 14.4 | 112.99 |
| 60 × 3 × A1 | 10352 | 10053 | 8589 | 7864 | 8790 | 7502 | 38.1 | 34.0 | 14.6 | 5.0 | 17.3 | 165.55 |
| 60 × 3 × A2 | 8558 | 8238 | 6901 | 6484 | 7011 | 6083 | 40.7 | 36.0 | 13.4 | 6.5 | 15.3 | 168.88 |
| 60 × 3 × A3 | 7100 | 6934 | 5278 | 5189 | 5548 | 5083 | 40.0 | 36.7 | 4.0 | 2.3 | 9.4 | 163.62 |
| 60 × 4 × A1 | 12966 | 12147 | 10389 | 9891 | 10638 | 9653 | 34.4 | 25.9 | 7.7 | 2.5 | 10.3 | 262.61 |
| 60 × 4 × A2 | 10976 | 10637 | 8596 | 8227 | 8806 | 8111 | 35.4 | 31.3 | 5.9 | 1.4 | 8.7 | 259.89 |
| 60 × 4 × A3 | 9010 | 8666 | 6792 | 6783 | 7033 | 6616 | 36.6 | 31.8 | 3.1 | 2.7 | 6.6 | 258.80 |
| 80 × 2 × A1 | 13980 | 12404 | 11916 | 11233 | 12366 | 10454 | 34.3 | 19.3 | 14.5 | 8.0 | 18.8 | 153.60 |
| 80 × 2 × A2 | 11245 | 10452 | 9372 | 8695 | 10003 | 8210 | 38.4 | 28.4 | 14.8 | 6.4 | 22.5 | 152.49 |
| 80 × 2 × A3 | 9047 | 7837 | 7272 | 6574 | 7428 | 6220 | 45.9 | 26.4 | 17.3 | 5.8 | 19.7 | 150.26 |
| 80 × 3 × A1 | 18893 | 16627 | 15400 | 14568 | 15910 | 13918 | 35.8 | 19.4 | 10.6 | 4.7 | 14.3 | 228.28 |
| 80 × 3 × A2 | 16326 | 15582 | 13527 | 12711 | 13634 | 11994 | 36.1 | 30.2 | 12.7 | 6.0 | 13.7 | 220.36 |
| 80 × 3 × A3 | 12741 | 11651 | 9706 | 9819 | 10112 | 9474 | 34.8 | 23.3 | 2.7 | 3.7 | 6.9 | 224.20 |
| 80 × 4 × A1 | 22659 | 21688 | 18518 | 17535 | 18491 | 16750 | 35.4 | 29.3 | 10.7 | 4.8 | 10.6 | 356.51 |
| 80 × 4 × A2 | 19454 | 18015 | 15520 | 14954 | 15799 | 14516 | 34.1 | 23.7 | 6.5 | 3.1 | 8.7 | 303.38 |
| 80 × 4 × A3 | 16314 | 14013 | 12530 | 12049 | 12916 | 11833 | 38.2 | 18.7 | 6.2 | 2.2 | 9.4 | 285.88 |
| Average | 6694 | 6324 | 5378 | 5084 | 5513 | 4877 | 41.2 | 51.4 | 11.5 | 4.5 | 13.3 | 134.14 |

Table 5 shows the average objective values, average improvement rates and average computational times for all algorithms for each problem scale. The following conclusions can be made from this table.

(1) For small and medium scale problems, the GSAA outperforms all the other heuristic algorithms on the whole with respect to solution quality. Over all instances, it improves on SA, TS, TGA, NEH-IGA and GASA by 41.2%, 51.4%, 11.5%, 4.5% and 13.3%, respectively, within an average running time of 134.14 s.

(2) The average objective values of the six algorithms are 6694, 6324, 5378, 5084, 5513 and 4877, respectively. The average objective value of GSAA, 4877, is the smallest, so its solution quality is the best. SA shows the worst performance among all algorithms, with an average objective value of 6694.

(3) Although GSAA performs slightly worse than NEH-IGA and GASA for several small-sized problems such as 10 × 4 × A2, 30 × 2 × A3 and 30 × 3 × A1, it yields better performance as the number of jobs increases, in all of the cases 60 × W × Mj and 80 × W × Mj. This conclusion is further verified in the following numerical experiments on large scale problems.

4.4. Experimental Test for Large Scale Problems

This section tests 36 combinations of difficult problems (N = {100, 120, 150, 200}), and Table 6 reports the results.

Table 6. Testing results of large scale problems.
| N × W × Mj | min E: SA | TS | TGA | NEH-IGA | GASA | GSAA | IR: SA | TS | TGA | NEH-IGA | GASA | CPU/s |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| 100 × 2 × A1 | 22534 | 21233 | 19150 | 17465 | 19149 | 16405 | 38.1 | 30.2 | 17.3 | 6.7 | 17.2 | 191.66 |
| 100 × 2 × A2 | 17418 | 16452 | 14432 | 13683 | 14758 | 12713 | 37.1 | 29.2 | 9.5 | 7.6 | 13.4 | 197.26 |
| 100 × 2 × A3 | 14163 | 13547 | 11727 | 11162 | 11811 | 10509 | 35.1 | 29.2 | 11.7 | 6.3 | 12.6 | 195.03 |
| 100 × 3 × A1 | 29805 | 31293 | 25265 | 23253 | 24827 | 22233 | 34.1 | 41.0 | 13.7 | 4.6 | 11.8 | 323.99 |
| 100 × 3 × A2 | 23713 | 22809 | 19324 | 18851 | 20086 | 17941 | 32.5 | 26.9 | 7.8 | 5.2 | 12.1 | 329.26 |
| 100 × 3 × A3 | 20419 | 19859 | 16268 | 15643 | 16199 | 14972 | 36.5 | 32.8 | 8.7 | 4.4 | 8.3 | 308.73 |
| 100 × 4 × A1 | 36490 | 40430 | 30293 | 28566 | 30912 | 27265 | 34.0 | 48.4 | 11.2 | 4.8 | 13.5 | 413.25 |
| 100 × 4 × A2 | 31132 | 32538 | 25241 | 23949 | 25831 | 23182 | 34.5 | 40.5 | 8.9 | 3.4 | 11.8 | 391.58 |
| 100 × 4 × A3 | 25519 | 24220 | 20216 | 19654 | 19995 | 18846 | 35.6 | 28.8 | 7.4 | 4.5 | 6.3 | 362.79 |
| 120 × 2 × A1 | 32823 | 32212 | 28344 | 24776 | 28570 | 23332 | 40.9 | 38.1 | 21.6 | 6.3 | 22.6 | 238.53 |
| 120 × 2 × A2 | 26093 | 26878 | 22350 | 20525 | 22414 | 19048 | 37.0 | 40.6 | 16.9 | 7.8 | 17.6 | 232.09 |
| 120 × 2 × A3 | 20953 | 21563 | 17049 | 15936 | 17468 | 15195 | 38.6 | 42.9 | 12.8 | 5.4 | 15.4 | 231.03 |
| 120 × 3 × A1 | 43708 | 47451 | 37803 | 34602 | 37364 | 32893 | 33.2 | 44.6 | 15.1 | 5.3 | 13.8 | 338.05 |
| 120 × 3 × A2 | 34954 | 38445 | 28857 | 27836 | 28953 | 26424 | 32.5 | 45.6 | 9.0 | 5.3 | 9.5 | 334.48 |
| 120 × 3 × A3 | 29656 | 30667 | 23826 | 23580 | 23607 | 22279 | 33.3 | 37.9 | 7.0 | 5.9 | 6.1 | 341.46 |
| 120 × 4 × A1 | 53021 | 57264 | 43469 | 42121 | 45460 | 39963 | 32.7 | 43.2 | 8.8 | 5.4 | 13.8 | 466.33 |
| 120 × 4 × A2 | 44548 | 48514 | 36333 | 35511 | 36719 | 33800 | 31.9 | 43.7 | 7.5 | 5.3 | 8.6 | 482.22 |
| 120 × 4 × A3 | 37378 | 40057 | 30156 | 29456 | 30630 | 28028 | 33.5 | 43.2 | 7.7 | 5.1 | 9.4 | 433.89 |
| 150 × 2 × A1 | 52830 | 57272 | 45379 | 40816 | 46770 | 38027 | 38.9 | 50.6 | 19.3 | 7.3 | 23.0 | 293.94 |
| 150 × 2 × A2 | 43983 | 47757 | 37993 | 33619 | 38216 | 32516 | 35.5 | 47.1 | 16.9 | 3.5 | 17.5 | 300.67 |
| 150 × 2 × A3 | 33320 | 36456 | 27963 | 26388 | 28722 | 25046 | 33.0 | 45.6 | 11.7 | 5.4 | 14.7 | 314.90 |
| 150 × 3 × A1 | 69653 | 79696 | 59128 | 56810 | 59797 | 53898 | 29.3 | 47.9 | 9.8 | 5.5 | 11.0 | 495.44 |
| 150 × 3 × A2 | 56171 | 64596 | 47002 | 45398 | 47785 | 42686 | 31.5 | 51.4 | 10.1 | 6.3 | 12.0 | 498.69 |
| 150 × 3 × A3 | 47247 | 54013 | 38752 | 37744 | 39399 | 35586 | 32.9 | 51.8 | 8.9 | 6.1 | 10.8 | 505.83 |
| 150 × 4 × A1 | 84184 | 98310 | 70910 | 67391 | 71937 | 63767 | 32.1 | 54.3 | 11.3 | 5.7 | 12.9 | 582.78 |
| 150 × 4 × A2 | 75577 | 86923 | 62912 | 61021 | 62963 | 57937 | 30.4 | 49.9 | 8.6 | 5.2 | 8.7 | 595.06 |
| 150 × 4 × A3 | 59587 | 68722 | 49182 | 48991 | 49628 | 45922 | 29.9 | 49.7 | 7.2 | 6.7 | 8.1 | 566.77 |
| 200 × 2 × A1 | 96213 | 106299 | 83223 | 76357 | 83646 | 71209 | 35.3 | 49.5 | 17.0 | 7.2 | 17.6 | 396.11 |
| 200 × 2 × A2 | 76435 | 85326 | 65838 | 59407 | 65762 | 56706 | 34.3 | 50.1 | 15.5 | 4.7 | 15.3 | 435.72 |
| 200 × 2 × A3 | 63281 | 70278 | 52495 | 49880 | 53446 | 47256 | 34.1 | 49.0 | 11.3 | 5.7 | 13.4 | 397.78 |
| 200 × 3 × A1 | 128104 | 143267 | 109720 | 103579 | 109181 | 96936 | 32.2 | 47.9 | 13.3 | 7.0 | 12.8 | 609.98 |
| 200 × 3 × A2 | 103978 | 119962 | 88558 | 84639 | 90401 | 80430 | 29.4 | 49.3 | 10.1 | 5.5 | 12.4 | 587.18 |
| 200 × 3 × A3 | 86351 | 100013 | 74562 | 71993 | 74902 | 68030 | 27.1 | 47.1 | 9.7 | 5.9 | 10.2 | 571.02 |
| 200 × 4 × A1 | 157553 | 178936 | 136005 | 128383 | 135221 | 119655 | 31.8 | 49.7 | 13.7 | 7.4 | 13.2 | 783.71 |
| 200 × 4 × A2 | 133890 | 154632 | 114411 | 110711 | 113723 | 103137 | 30.0 | 50.3 | 11.1 | 7.8 | 10.6 | 758.49 |
| 200 × 4 × A3 | 107529 | 125796 | 91674 | 90110 | 92086 | 83418 | 29.0 | 50.9 | 10.0 | 8.1 | 10.4 | 751.54 |
| Average | 56117 | 62324 | 47383 | 44994 | 47731 | 42421 | 33.6 | 43.9 | 11.6 | 5.8 | 12.7 | 423.81 |

(1) Among the six algorithms, the GSAA consistently generates the best results for all 360 problem instances. Within an average running time of 423.81 s, the average objective value of GSAA, 42421, is the smallest, while that of TS, 62324, is the worst.

(2) Compared with SA, TS, TGA, NEH-IGA and GASA, the solution quality of GSAA is improved by 33.6%, 43.9%, 11.6%, 5.8% and 12.7%, respectively.

(3) As the number of jobs increases, the IR values of GSAA relative to SA decrease but remain high, while the IR values relative to the other four algorithms tend to increase. This further demonstrates that the proposed GSAA is much more effective than the compared algorithms, especially for large scale problems.

4.5. Evolution of the GSAA Algorithm

To study how the solution quality varies with the iteration number, the above six algorithms are run under a restriction of 120 iterations. Since TS has the worst performance among all the algorithms, especially for large sized problems, it is not included here. Taking six instances with N = {30, 60, 80, 120, 150, 200}, Mj = 3, W = 3 as examples, the evolution of the objective values is illustrated in Fig. 9. It can be observed from this figure that the proposed GSAA converges faster and avoids premature convergence. These results demonstrate that the proposed GSAA performs better than the other algorithms, namely SA, TGA, NEH-IGA and GASA.

Fig. 9. Evolution of objective values for different scale problems.

5. Conclusions

In this paper, the multi-stage no-wait hybrid flowshop problem with unrelated parallel machines (MNWHFP-UPM) is studied. An integer programming model is first presented with the objective of minimizing the total flowtime. Next, a two-dimensional matrix chromosome encoding scheme and an insertion-translation decoding scheme are designed. Then, a novel genetic simulated annealing algorithm (GSAA) is proposed, in which the MNEH algorithm is introduced to determine the initial job ordering, an adaptive adjustment strategy is applied to set the genetic parameters, and five neighborhood search structures are constructed to expand the search space. Finally, simulation experiments based on 720 randomly generated instances are performed and compared. The testing results demonstrate the effectiveness and efficiency of the GSAA.

Future research could explore the application of the proposed algorithm to the MNWHFP-UPM with other optimization objectives or with realistic assumptions such as release times and transportation times. Another direction is the development of more effective heuristic algorithms for the MNWHFP-UPM.

Acknowledgements

The authors thank the anonymous referees for their valuable suggestions that improved the paper. This research is partly supported by the National Natural Science Foundation of China (Grant Nos. U1804151 and U1604150).

References
 
© 2021 The Iron and Steel Institute of Japan.

This is an open access article under the terms of the Creative Commons Attribution-NonCommercial-NoDerivs license.
https://creativecommons.org/licenses/by-nc-nd/4.0/