ISIJ International
Online ISSN : 1347-5460
Print ISSN : 0915-1559
ISSN-L : 0915-1559
Regular Article
Invariant Feature Extraction Method Based on Smoothed Local Binary Pattern for Strip Steel Surface Defect
Maoxiang Chu Rongfen Gong

2015 Volume 55 Issue 9 Pages 1956-1962

Abstract

An invariant feature extraction method based on the smoothed local binary pattern (SLBP) is proposed for strip steel surface defect images. SLBP, a development of the local binary pattern (LBP), is determined by the signs of differences between weighted grays in a local neighborhood and therefore has a noise smoothing ability. In this paper, invariant features are obtained with a concentric discrete square sampling template (CDSST). Firstly, defect images are resampled on the CDSST by coordinate mapping. Then, features invariant to scale, rotation, illumination and translation are extracted by combining two types of SLBP images with the gray-level co-occurrence matrix. Experimental results show that this novel feature extraction method not only extracts features with scale, rotation, illumination and translation invariance, but also effectively suppresses noise and maintains high classification accuracy.

1. Introduction

Feature extraction is an important step in strip steel surface defect detection because it is the premise of defect classification and recognition. Effective and separable feature descriptions are needed to realize it. Many features have been used to describe steel surface defects, such as geometric features,1) one-dimensional histogram features,1) two-dimensional histogram features2,3) and HU invariant moment features.4) Geometric features describe the shape of a defect. One-dimensional histogram features describe the statistical distribution of gray values in the defect region. Two-dimensional histogram features describe the texture of the defect region using the gray-level co-occurrence matrix (GLCM). HU invariant moment features are also geometric features, but with rotation, translation and scale invariance.

The local binary pattern (LBP)5,6) is an effective operator for describing gray changes in a local neighborhood. It is invariant to rotation and illumination and can describe the texture of a detection region. Though its theory is simple, it is powerful in texture recognition. In recent years, LBP has been widely used in texture analysis,7) face recognition8,9) and image matching.10) However, it still has shortcomings in specific applications, so many scholars have improved it and achieved satisfactory results. Reference 8) adopts a scalable oval as the local neighborhood and outputs the LBP value by comparing neighboring pixels on that oval; this captures anisotropic information and is more general than standard LBP. Reference 9) introduces positive and negative thresholds and outputs 1, −1 or 0 when comparing neighboring pixels; the resulting local ternary pattern is split into two LBPs, which enhances recognition ability. Reference 10) outputs 0 or 1 by comparing pairs of pixels that are symmetric about the central point of the local neighborhood, which reduces the dimension of the histogram from 256 to 16.

Based on the performance of LBP in texture feature extraction, it is used in this paper to realize feature extraction for strip steel surface defects. To achieve a considerable improvement in noise smoothing over LBP, the smoothed LBP (SLBP) is proposed: it first calculates the differences between weighted grays in the local neighborhood, and the SLBP value is then determined by the signs of those differences. In addition, the concentric discrete square sampling template (CDSST) is proposed to give SLBP scale and translation invariance: all defect images are resampled on that template by coordinate mapping. Finally, SLBP and GLCM are combined to realize a texture feature description with scale, rotation, translation and illumination invariance. This paper is structured as follows. Based on LBP, SLBP is proposed in section 2. CDSST is proposed in section 3. In section 4, SLBP, CDSST and GLCM are combined to realize invariant feature extraction. Testing experiments and corresponding results are described in section 5. Conclusions are drawn in section 6.

2. Smoothed Local Binary Pattern

2.1. LBP

Standard LBP was first proposed by Ojala et al. in 1996 and uses a local neighborhood of size 3×3.5) The central pixel of the neighborhood is regarded as the threshold: each of the other pixels outputs 0 if it is smaller than that threshold and 1 otherwise. Each of those 8 binary values is multiplied by its corresponding power of 2, and the sum of the products is the LBP value. The calculation process of standard LBP for a local neighborhood is shown in Fig. 1.

Fig. 1.

The calculation process of standard LBP for local neighborhood.
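As a concrete illustration of this thresholding-and-weighting scheme, a minimal Python sketch of standard LBP might look as follows. The clockwise neighbour ordering starting at the top-left is an assumption; the paper fixes the ordering only graphically in Fig. 1.

```python
def standard_lbp(patch):
    """Standard LBP for a 3x3 neighbourhood (illustrative sketch).
    `patch` is a 3x3 list of gray values; each of the 8 surrounding pixels
    is thresholded against the centre, and the resulting bits are weighted
    by powers of 2. The clockwise-from-top-left ordering is an assumption."""
    center = patch[1][1]
    neighbors = [patch[0][0], patch[0][1], patch[0][2], patch[1][2],
                 patch[2][2], patch[2][1], patch[2][0], patch[1][0]]
    return sum((1 if n >= center else 0) << i for i, n in enumerate(neighbors))

print(standard_lbp([[6, 5, 2], [7, 6, 1], [9, 8, 7]]))
```

Any fixed neighbour ordering works, as long as the same ordering is used for every pixel of the image.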

In 2002, Ojala et al. improved standard LBP based on their previous work.6) They proposed an extended LBP operator and theory based on a local circular neighborhood. The extended LBP is not only invariant to illumination and rotation, but also supports multi-scale and uniform patterns. Suppose the radius of the local circular neighborhood is r and there are p pixels at equal intervals on that circle. Let fi(i = 0,1, ..., p−1) be a pixel on the circle and fc the central pixel; then the extended LBP can be represented as follows.

\[ \mathrm{LBP}_{r,p} = \sum_{i=0}^{p-1} \mathrm{Sign}(f_i - f_c)\times 2^i,\quad \text{where}\quad \mathrm{Sign}(x)=\begin{cases}1 & x\ge 0\\ 0 & x<0\end{cases} \tag{1} \]

Here, the scale of the extended LBP can be changed through r and p. The influence of illumination is avoided because only the sign is considered. To realize rotation invariance, the 256 extended LBP values (for p = 8) are reduced to 36 unique values by circular binary shifting; this shift operation is equivalent to rotation correction. Moreover, Ojala et al. divide extended LBP patterns into uniform and non-uniform ones, and consider that uniform patterns represent over 90 percent of the information, so restricting to uniform patterns reduces the dimension of the extended LBP histogram. It should be mentioned that extended LBP costs more time than standard LBP.
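The circular sampling of Eq. (1) can be sketched as below. This illustrative version rounds the sampling positions to the nearest pixel, whereas the original operator interpolates non-integer positions bilinearly; the function name and argument layout are assumptions for the sketch.

```python
import math

def extended_lbp(img, r, p, yc, xc):
    """Extended LBP_{r,p} (Eq. (1)) at centre (yc, xc), illustrative sketch.
    Samples p points at equal angular intervals on a circle of radius r and
    thresholds each against the centre gray. Nearest-neighbour sampling is
    used here; the original operator uses bilinear interpolation."""
    fc = img[yc][xc]
    code = 0
    for i in range(p):
        ang = 2.0 * math.pi * i / p
        yi = yc + int(round(r * math.sin(ang)))
        xi = xc + int(round(r * math.cos(ang)))
        if img[yi][xi] >= fc:
            code |= 1 << i
    return code

print(extended_lbp([[1, 2, 3], [4, 5, 6], [7, 8, 9]], 1, 4, 1, 1))
```

With r = 1 and p = 8 this sketch degenerates to (a rotation of) the standard 3×3 operator, which is why the extended form is described as a multi-scale generalization.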

2.2. Improvement on LBP

LBP is calculated by thresholding, which makes it sensitive to noise. To make LBP suitable for steel surface defect images containing noise points, SLBP with noise smoothing ability is proposed in this paper. It is built on standard LBP rather than extended LBP because of time cost. Other simple and effective methods are used to give SLBP scale invariance, and rotation invariance is realized by binary shifting of the codes.

An example of a local neighborhood of size 3×3 is shown in Fig. 2(a), where fj(j = 0,1, ..., 8) is the gray value of a pixel in the neighborhood. The output of SLBP depends on differences between weighted grays. The weighted templates are shown in Fig. 2(b), where Si(i = 0,1, ..., 7) is the weighted-gray difference in direction i: the weighted gray of 3 neighboring surrounding pixels minus the weighted gray of the opposite pixels through the center. Take S1 as an example: S1 = (f1 + 2f2 + f3) − (f8 + 2f0 + f4). On one hand, S1 highlights the importance of f0 and f2; on the other hand, weighting the grays of neighboring pixels smoothes noise pixels. The calculation process of SLBP using the weighted templates is shown in Fig. 2(c).

Fig. 2.

Weighted templates and calculation process of SLBP: (a) an example of local neighborhood with size 3×3; (b) weighted templates; (c) calculation process of SLBP.

Two types of binary output are adopted for SLBP: SLBP-1 and SLBP-2. The binary output of SLBP-1 depends on the sign of Si, which represents the local change information between the central pixel and its 8 neighbors. The binary output of SLBP-2 depends on the sign of the difference between neighboring Si, which represents the local change information among the 8 neighbors. Figure 3 shows the calculation process of SLBP-1 and SLBP-2. Both avoid the influence of illumination because they depend only on the signs of differences. The formulas for SLBP-1 and SLBP-2 are as follows.

\[ \mathrm{SLBP\text{-}1} = \sum_{i=0}^{7} \mathrm{Sign}(S_i)\times 2^i, \qquad \mathrm{SLBP\text{-}2} = \sum_{i=0}^{6} \mathrm{Sign}(S_i - S_{i+1})\times 2^i + \mathrm{Sign}(S_7 - S_0)\times 2^7, \qquad \mathrm{Sign}(x)=\begin{cases}1 & x\ge 0\\ 0 & x<0\end{cases} \tag{2} \]
Fig. 3.

Calculation process of SLBP-1 and SLBP-2.
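Under the assumption that f1..f8 run clockwise from the top-left of the neighbourhood (the exact arrangement is defined graphically in Fig. 2), the weighted differences Si and Eq. (2) can be sketched as:

```python
def slbp_codes(patch):
    """SLBP-1/SLBP-2 for one 3x3 neighbourhood (illustrative sketch of Eq. (2)).
    f0 is the centre; f1..f8 are the border pixels, assumed clockwise from the
    top-left. S_i is the weighted-gray difference of Fig. 2, e.g.
    S1 = (f1 + 2*f2 + f3) - (f8 + 2*f0 + f4)."""
    f0 = patch[1][1]
    border = [patch[0][0], patch[0][1], patch[0][2], patch[1][2],
              patch[2][2], patch[2][1], patch[2][0], patch[1][0]]  # f1..f8
    b = lambda k: border[k % 8]            # cyclic access to the border pixels
    sign = lambda x: 1 if x >= 0 else 0
    # S_i: three adjacent border pixels minus the opposite pixels through f0
    S = [(b(i - 1) + 2 * b(i) + b(i + 1)) - (b(i - 2) + 2 * f0 + b(i + 2))
         for i in range(8)]
    slbp1 = sum(sign(S[i]) << i for i in range(8))
    slbp2 = sum(sign(S[i] - S[(i + 1) % 8]) << i for i in range(8))
    return slbp1, slbp2

print(slbp_codes([[9, 8, 7], [2, 5, 6], [3, 4, 5]]))
```

Because every Si averages three border pixels, a single corrupted pixel changes Si only partially, which is the noise smoothing the section describes.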

The binary shift method of reference 11), which realizes rotation invariance and was used for extended LBP in reference 6), is also used in this paper. With this method, the 256 different SLBP values are reduced to 36 unique values by circular binary shifting, and all SLBP values are mapped into [0, 35] by ranking those 36 unique values.
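One common way to implement this rotation correction is to take, for each 8-bit code, the minimum value over all circular bit shifts; the distinct minima are exactly the 36 unique patterns, and ranking them gives the [0, 35] labels. A sketch (the function and variable names are illustrative):

```python
def rotation_min(code, bits=8):
    """Smallest value over all circular bit shifts of `code` -- one way to
    implement the rotation correction that merges the 256 codes into 36
    unique rotation-invariant patterns."""
    best = code
    mask = (1 << bits) - 1
    for _ in range(bits - 1):
        code = ((code >> 1) | ((code & 1) << (bits - 1))) & mask
        best = min(best, code)
    return best

# The 36 unique patterns, ranked so every SLBP value maps into [0, 35].
reps = sorted({rotation_min(c) for c in range(256)})
rank = {r: i for i, r in enumerate(reps)}
print(len(reps))
```

Shifting the bits circularly corresponds to rotating the neighbourhood in 45-degree steps, so codes of the same local structure at different orientations collapse to one label.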

3. Concentric Discrete Square Sampling Template

Samples of the same type and shape of strip steel surface defect differ in scale and position. These differences affect feature extraction and the subsequent defect classification and recognition. Therefore, in this paper, the CDSST is proposed to give the SLBP image scale and translation invariance. This method takes the center of gravity of each defect region as the origin of coordinates and maps every defect region onto the same CDSST. The mapping resamples the defect region on a fixed template, which ensures scale and translation invariance.

Firstly, construct a template consisting of kmax concentric squares with a common central point. The interval between every two neighboring rows or columns of the template is 1, meaning that the template agrees with the original sampling interval. The k-th square has 8k(k = 1,2, ..., kmax) discrete points, also at interval 1. Figure 4 shows a CDSST with kmax = 8. kmax can be set according to the actual situation: the bigger kmax is, the more sampling points there are; the smaller kmax is, the faster the calculation. For strip steel surface defect images, kmax∈[16, 32] is appropriate.

Fig. 4

CDSST with kmax = 8.

In the process of resampling, the center of gravity of the defect region is positioned at the central point of the CDSST. The coordinate of every point on the sampling template is fixed relative to the central point, which ensures the translation invariance of resampling. Moreover, in order to keep scale invariance, the circumscribed square of the defect region is mapped onto the CDSST for resampling. Taking kmax = 8 as an example, Fig. 5 shows the process and the result of resampling on the CDSST. Suppose the set of pixels in the original defect region is ΩD, the coordinate of a pixel is (i, j), and its gray value is fD(i, j). The mapping process can be described as follows.

Fig. 5.

The process of resampling on CDSST: (a) external square and original sampling result for defect image; (b) resampling result on CDSST for defect image; (c) coordinate mapping relationship.

Step 1: calculate the center of gravity (ic, jc) of the defect region and the maximum distance (dc)max with the following formulas.

\[ (i_c, j_c) = \left( \frac{\sum_{(i,j)\in\Omega_D} i}{\sum_{(i,j)\in\Omega_D} 1},\ \frac{\sum_{(i,j)\in\Omega_D} j}{\sum_{(i,j)\in\Omega_D} 1} \right) \tag{3} \]

\[ d_c(i,j) = \sqrt{(i-i_c)^2 + (j-j_c)^2},\quad (i,j)\in\Omega_D \tag{4} \]

\[ (d_c)_{\max} = \max\{\, d_c(i,j) \mid (i,j)\in\Omega_D \,\} \tag{5} \]

Step 2: determine the external square with side length lc = 2×(dc)max centered at the center of gravity (ic, jc). The original sampling coordinate (i0, j0) of the defect image is obtained by taking the center of gravity as the origin, as shown in Fig. 5(a).

Step 3: calculate the ratio of coordinate mapping with the following formula.   

\[ \rho = \frac{l_c}{2 k_{\max}} \tag{6} \]

Step 4: calculate i2 and j2 with the following formula; the corresponding result is shown in Fig. 5(c).

\[ i_2 = i_1 \times \rho, \qquad j_2 = j_1 \times \rho \tag{7} \]

Step 5: round (i2, j2) according to the following formula, where ROU denotes rounding to the nearest integer. A one-to-one mapping between the coordinate (i1, j1) on the CDSST and the original sampling coordinate (i0, j0) can then be built through (i2, j2), as shown in Fig. 5(c).

\[ i_2 = \mathrm{ROU}(i_2), \qquad j_2 = \mathrm{ROU}(j_2) \tag{8} \]

It should be pointed out that this mapping is quite different from zooming a digital image in or out: it needs no image interpolation, only coordinate mapping.
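Steps 1 through 5 can be sketched as follows. The function name and the list-of-coordinates representation of ΩD are illustrative choices, not from the paper:

```python
import math

def resample_on_cdsst(image, region, k_max=8):
    """Resample a defect region on the CDSST (illustrative sketch of steps 1-5).
    `image` is a gray image as a list of rows, `region` a list of (i, j)
    pixel coordinates forming Omega_D; returns a (2*k_max+1)-square grid of
    resampled grays, centred on the region's centre of gravity."""
    n = len(region)
    ic = sum(i for i, _ in region) / n          # step 1: centre of gravity, Eq. (3)
    jc = sum(j for _, j in region) / n
    d_max = max(math.hypot(i - ic, j - jc) for i, j in region)  # Eqs. (4)-(5)
    lc = 2.0 * d_max                            # step 2: external square side
    rho = lc / (2.0 * k_max)                    # step 3: mapping ratio, Eq. (6)
    size = 2 * k_max + 1
    grid = [[0.0] * size for _ in range(size)]
    for i1 in range(-k_max, k_max + 1):         # steps 4-5: scale each template
        for j1 in range(-k_max, k_max + 1):     # coordinate and round, Eqs. (7)-(8)
            i0 = round(ic + i1 * rho)
            j0 = round(jc + j1 * rho)
            if 0 <= i0 < len(image) and 0 <= j0 < len(image[0]):
                grid[i1 + k_max][j1 + k_max] = image[i0][j0]
    return grid
```

Note that the sketch only reads pixels at rounded coordinates, in line with the paper's remark that no interpolation is needed.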

4. Invariant Feature Extraction

To realize texture feature extraction for strip steel surface defects, the CDSST, SLBP and GLCM are combined in this paper. In detail, the CDSST ensures scale and translation invariance; the two types of SLBP images are themselves illumination invariant; binary shifting of SLBP ensures rotation invariance and reduces the dimension of the histogram; and the GLCM describes the statistical information of the two types of SLBP images. Therefore, this novel feature extraction is invariant to scale, rotation, translation and illumination.

Firstly, determine kmax and construct the CDSST. Then map the defect image onto the CDSST for resampling, after which the resampled gray f̄D(i1, j1) ((i1, j1)∈ΩD) of the defect region is obtained. Next, extract SLBP-1 and SLBP-2 according to the SLBP method described in section 2. Suppose the pixel values of SLBP-1 and SLBP-2 are S̄1(i1, j1) and S̄2(i1, j1). Then perform the binary shift on S̄1(i1, j1) and S̄2(i1, j1) to obtain S1(i1, j1) and S2(i1, j1), respectively. The range of both S1(i1, j1) and S2(i1, j1) is [0, 35].

Count the number of coupled points whose value is k on SLBP-1 and l on SLBP-2, where the two points of each couple must occupy the same position on SLBP-1 and SLBP-2 and k, l∈{0, 1, ..., 35}. That count is the element HS(k, l) of the GLCM. The probability p(k, l) corresponding to HS(k, l) is calculated with the following formula.

\[ p(k,l) = \frac{H_S(k,l)}{\sum_{k_1=0}^{35}\sum_{l_1=0}^{35} H_S(k_1,l_1)} \tag{9} \]
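A direct sketch of building HS(k, l) and p(k, l) from two rotation-corrected SLBP images (function name illustrative):

```python
def slbp_cooccurrence(s1, s2, levels=36):
    """Joint histogram of two rotation-corrected SLBP images (sketch of Eq. (9)).
    H[k][l] counts pixel positions where SLBP-1 has value k and SLBP-2 has
    value l at the same position; p is its normalisation by the total count."""
    H = [[0] * levels for _ in range(levels)]
    total = 0
    for row1, row2 in zip(s1, s2):
        for k, l in zip(row1, row2):
            H[k][l] += 1
            total += 1
    p = [[H[k][l] / total for l in range(levels)] for k in range(levels)]
    return H, p
```

Because the coupled points sit at the same position, this matrix captures how the two SLBP descriptions co-vary over the defect region rather than spatial offsets, unlike a conventional one-image GLCM.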

According to p(k, l), statistical texture features1,2,12) of the strip steel surface defect image can be extracted: the means BM1 and BM2, variances BV1² and BV2², skews BS1 and BS2, kurtoses BK1 and BK2, powers BP1 and BP2, and entropies BE1 and BE2 for SLBP-1 and SLBP-2, respectively; and the angular second moment BP², mixed entropy BE², inertia moment BI² and correlation coefficient BC² of the joint distribution. Their calculation formulas are as follows.

\[ B_{M1} = \sum_{k=0}^{35} \left( k \times \sum_{l=0}^{35} p(k,l) \right) \tag{10} \]

\[ B_{V1}^2 = \sum_{k=0}^{35} \left( (k-B_{M1})^2 \times \sum_{l=0}^{35} p(k,l) \right) \tag{11} \]

\[ B_{S1} = \sum_{k=0}^{35} \left( (k-B_{M1})^3 \times \sum_{l=0}^{35} p(k,l) \right) \tag{12} \]

\[ B_{K1} = \sum_{k=0}^{35} \left( (k-B_{M1})^4 \times \sum_{l=0}^{35} p(k,l) \right) \tag{13} \]

\[ B_{P1} = \sum_{k=0}^{35} \left( \sum_{l=0}^{35} p(k,l) \right)^2 \tag{14} \]

\[ B_{E1} = -\sum_{k=0}^{35} \left( \sum_{l=0}^{35} p(k,l) \times \log \sum_{l=0}^{35} p(k,l) \right) \tag{15} \]

\[ B_{M2} = \sum_{l=0}^{35} \left( l \times \sum_{k=0}^{35} p(k,l) \right) \tag{16} \]

\[ B_{V2}^2 = \sum_{l=0}^{35} \left( (l-B_{M2})^2 \times \sum_{k=0}^{35} p(k,l) \right) \tag{17} \]

\[ B_{S2} = \sum_{l=0}^{35} \left( (l-B_{M2})^3 \times \sum_{k=0}^{35} p(k,l) \right) \tag{18} \]

\[ B_{K2} = \sum_{l=0}^{35} \left( (l-B_{M2})^4 \times \sum_{k=0}^{35} p(k,l) \right) \tag{19} \]

\[ B_{P2} = \sum_{l=0}^{35} \left( \sum_{k=0}^{35} p(k,l) \right)^2 \tag{20} \]

\[ B_{E2} = -\sum_{l=0}^{35} \left( \sum_{k=0}^{35} p(k,l) \times \log \sum_{k=0}^{35} p(k,l) \right) \tag{21} \]

\[ B_{P}^{2} = \sum_{k=0}^{35}\sum_{l=0}^{35} p^2(k,l) \tag{22} \]

\[ B_{E}^{2} = -\sum_{k=0}^{35}\sum_{l=0}^{35} \bigl( p(k,l)\times\log p(k,l) \bigr) \tag{23} \]

\[ B_{I}^{2} = \sum_{k=0}^{35}\sum_{l=0}^{35} \bigl( (k-l)^2 \times p(k,l) \bigr) \tag{24} \]

\[ B_{C}^{2} = \frac{\sum_{k=0}^{35}\sum_{l=0}^{35} (k-B_{M1})\times(l-B_{M2})\times p(k,l)}{B_{V1}\, B_{V2}} \tag{25} \]
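A few of the statistics above can be sketched directly from p(k, l); this illustrative helper (its name and output keys are assumptions) computes the two marginal means and standard deviations plus the four joint-distribution features:

```python
import math

def slbp_features(p):
    """Sketch of a subset of the statistics in Eqs. (10)-(25) from p(k, l)."""
    n = len(p)
    pk = [sum(p[k]) for k in range(n)]                        # marginal of SLBP-1
    pl = [sum(p[k][l] for k in range(n)) for l in range(n)]   # marginal of SLBP-2
    BM1 = sum(k * pk[k] for k in range(n))                    # Eq. (10)
    BM2 = sum(l * pl[l] for l in range(n))                    # Eq. (16)
    BV1 = math.sqrt(sum((k - BM1) ** 2 * pk[k] for k in range(n)))  # Eq. (11)
    BV2 = math.sqrt(sum((l - BM2) ** 2 * pl[l] for l in range(n)))  # Eq. (17)
    asm = sum(p[k][l] ** 2 for k in range(n) for l in range(n))     # Eq. (22)
    ent = -sum(p[k][l] * math.log(p[k][l])                          # Eq. (23)
               for k in range(n) for l in range(n) if p[k][l] > 0)
    inertia = sum((k - l) ** 2 * p[k][l]                            # Eq. (24)
                  for k in range(n) for l in range(n))
    corr = sum((k - BM1) * (l - BM2) * p[k][l]                      # Eq. (25)
               for k in range(n) for l in range(n)) / (BV1 * BV2)
    return {"BM1": BM1, "BM2": BM2, "BV1": BV1, "BV2": BV2,
            "asm": asm, "entropy": ent, "inertia": inertia, "corr": corr}
```

The skew and kurtosis of Eqs. (12)-(13) and (18)-(19) follow the same pattern with third and fourth powers of the centred index.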

5. Experiments

To verify the performance of this novel invariant feature extraction method, experiments are done on strip steel surface defects. Four typical defect types are chosen: rust, hole, scratch and scale, shown in Fig. 6. The number of samples for each defect type is shown in Table 1. The experimental samples include original and changed defect images; the changed images differ from the originals in scale, rotation, illumination and translation. During the whole experimental process, the samples are repeatedly divided at random into training and testing sets according to Table 1. To obtain defect regions, the chosen samples are preprocessed and segmented according to the methods of references 13,14,15). The feature extraction experiments below are done on the obtained defect regions. All experiments are performed with Matlab 7.11 on a PC with an Intel P4 processor (3.0 GHz) and 2 GB RAM.

Fig. 6.

Images of four types of defect.

Table 1. Number of training and testing samples for four types of defect (Original + Changed).

| Defect type | Total     | Training  | Testing   |
| Rust        | 390 + 390 | 273 + 273 | 117 + 117 |
| Hole        | 360 + 360 | 252 + 252 | 108 + 108 |
| Scratch     | 360 + 360 | 252 + 252 | 108 + 108 |
| Scale       | 340 + 340 | 238 + 238 | 102 + 102 |

Firstly, scale-invariant feature extraction experiments are done on defect images with random illumination and translation. For every defect type, 3 samples with different scales are chosen from the experimental datasets. The final experimental results for the 4 defect types are shown in Table 2. The extracted features are very close for samples of the same type with different scales, illumination and translation, but quite different across types. These results show that the invariant feature extraction method based on SLBP has a reasonable scale invariance property and that the extracted features are separable between types.

Table 2. Scale invariant features for defect samples with random illumination and translation.

Secondly, rotation-invariant feature extraction experiments are done on defect images with random illumination and translation. Similarly, 3 samples with different rotation angles are chosen from the experimental datasets. The final experimental results for the 4 defect types are shown in Table 3. The extracted features are again very close for samples of the same type with different rotation, illumination and translation, but quite different across types. These results show that this novel feature extraction method based on SLBP has rotation, illumination and translation invariance.

Table 3. Rotation invariant features for defect samples with random illumination and translation.

Then, to verify the noise smoothing ability of SLBP, the original images in the experimental datasets are artificially corrupted. Gaussian white noise with mean m = 0 and 3 different standard deviations δ = 0, 0.05 and 0.1 is added. It must be pointed out that only the original images are corrupted, which fairly tests the smoothing ability of SLBP. Five types of coupled images are used in the feature extraction experiment: the original image coupled with its extended LBP image (O-L); the filtered original image coupled with the filtered extended LBP image (FO-FL); the filtered original image coupled with the SLBP-1 image (FO-S1); the filtered original image coupled with the SLBP-2 image (FO-S2); and the SLBP-1 image coupled with the SLBP-2 image (S1-S2). A standard 3×3 mean filter is used for filtering. Because this experiment tests noise smoothing ability, in fairness, the same invariant features proposed in this paper are extracted from all 5 types of coupled images. Besides, the same TWSVM16) and binary tree17) are combined to form the multi-class classifier used in this experiment, whose parameters are determined according to reference 17). For the 5 types of coupled images, invariant feature extraction and multi-class classification experiments are done with tenfold cross validation. The final results, including testing accuracy and time cost, are shown in Tables 4 and 5. When δ is 0, the classification accuracies of the 5 types of coupled images are nearly the same: O-L and S1-S2 are comparatively high, while FO-FL, FO-S1 and FO-S2 are nearly equal. These results prove that the coupling of SLBP-1 and SLBP-2 proposed in this paper gives high classification accuracy.

When δ is not 0, the classification accuracy of O-L declines sharply while that of FO-FL declines slowly, which shows that LBP is sensitive to noise and that filtering can improve its classification accuracy. The accuracies of FO-S1 and FO-S2 are very close to that of FO-FL, which shows that the noise smoothing ability of SLBP-1 and SLBP-2 is equivalent to filtering. Compared with the other methods, S1-S2 has higher classification accuracy, which declines only slowly on the corrupted datasets. All this proves that the coupling of SLBP-1 and SLBP-2 has better noise smoothing ability. As far as classification speed is concerned, Table 5 shows that the coupling of SLBP-1 and SLBP-2 has an absolute advantage for every type of defect image.

Table 4. Testing results of classification accuracy (%) for 5 types of coupled images with different δ.

| Type    | δ    | O-L        | FO-FL      | FO-S1      | FO-S2      | S1-S2      |
| Rust    | 0    | 93.76±1.21 | 92.39±1.73 | 92.22±1.50 | 93.25±1.24 | 94.02±1.38 |
| Rust    | 0.05 | 87.09±2.55 | 90.26±2.63 | 91.37±2.49 | 90.85±2.26 | 92.82±1.72 |
| Rust    | 0.1  | 82.91±2.83 | 88.55±2.27 | 89.15±2.51 | 88.38±2.24 | 91.20±2.20 |
| Hole    | 0    | 95.19±1.36 | 93.61±1.87 | 94.54±1.34 | 93.43±1.63 | 95.28±1.05 |
| Hole    | 0.05 | 87.87±2.76 | 90.09±2.03 | 92.69±2.43 | 91.94±2.45 | 93.98±1.86 |
| Hole    | 0.1  | 83.88±3.47 | 88.15±2.34 | 90.00±2.55 | 89.26±2.42 | 92.50±2.39 |
| Scratch | 0    | 91.57±2.85 | 90.37±1.86 | 89.81±2.11 | 89.35±2.24 | 91.30±1.99 |
| Scratch | 0.05 | 85.28±2.63 | 86.57±1.77 | 86.30±1.89 | 86.57±2.24 | 88.43±2.08 |
| Scratch | 0.1  | 83.06±3.59 | 85.46±1.66 | 84.91±1.90 | 85.46±1.76 | 87.04±1.85 |
| Scale   | 0    | 85.10±2.00 | 86.27±1.70 | 86.37±2.17 | 86.47±2.05 | 88.14±1.42 |
| Scale   | 0.05 | 80.29±2.30 | 85.59±1.81 | 85.69±1.97 | 85.59±1.86 | 87.06±2.09 |
| Scale   | 0.1  | 75.88±2.71 | 81.57±1.44 | 81.96±2.24 | 81.96±2.56 | 84.31±2.06 |
Table 5. Testing results of classification time (s) for 5 types of coupled images (identical for δ = 0, 0.05 and 0.1).

| Type    | O-L    | FO-FL  | FO-S1  | FO-S2  | S1-S2  |
| Rust    | 1.1705 | 1.3712 | 0.9930 | 1.0577 | 0.8807 |
| Hole    | 1.0679 | 1.2773 | 0.9359 | 0.9955 | 0.8032 |
| Scratch | 1.0683 | 1.2788 | 0.9364 | 0.9960 | 0.8039 |
| Scale   | 1.0116 | 1.2147 | 0.8976 | 0.9541 | 0.7601 |

Finally, to verify the superiority of the invariant feature extraction method based on SLBP, comparative classification experiments are done. The numbers of training and testing samples used are those in Table 1. All datasets are corrupted with Gaussian white noise of mean 0 and standard deviation δ = 0 or δ = 0.05. Classification results obtained with 4 different feature extraction methods are compared. For δ = 0, the 4 methods are: traditional feature extraction from the original image (O-TF), traditional feature extraction from the LBP image (LBP-TF), traditional feature extraction from the SLBP image (SLBP-TF) and invariant feature extraction from the SLBP image (SLBP-IF). For δ = 0.05, the 4 methods are: traditional feature extraction from the filtered original image (FO-TF), traditional feature extraction from the filtered LBP image (FLBP-TF), SLBP-TF and SLBP-IF. The traditional features are 16 common features extracted with the principal component analysis method18) based on references 1,2,3,4). The multi-class classifier used here is the same as in Tables 4 and 5. The final classification results with δ = 0 and δ = 0.05 are shown in Tables 6 and 7, respectively. Table 6 shows that SLBP-IF has the best classification accuracy and SLBP-TF the second best, which demonstrates that the invariant feature extraction method based on SLBP proposed in this paper maintains satisfactory classification accuracy for images with different scale, rotation, illumination and translation. Although the classification accuracy of LBP-TF is acceptable, its speed is obviously slower than that of SLBP-IF. The classification accuracy of the traditional O-TF is very poor for every defect type.

Table 7 similarly shows that SLBP-IF has the advantage in both accuracy and speed for corrupted defect images. Comparing Tables 6 and 7, the classification accuracies of all 4 methods decrease under noise. Taking the scratch defect as an example, the classification accuracy of FO-TF is 4.26 percentage points lower than that of O-TF, and that of FLBP-TF is 3.75 points lower than that of LBP-TF, while between δ = 0 and δ = 0.05 the accuracy of SLBP-TF and SLBP-IF decreases by only 2.27 and 1.84 points, respectively. All these results show that the SLBP proposed in this paper can effectively suppress noise. In summary, the invariant feature based on SLBP can effectively suppress noise and maintain high classification accuracy, especially for samples with different scale, rotation, illumination and translation.

Table 6. Testing results of four classification methods with δ = 0 (each cell: accuracy (%) / time (s)).

| Type    | O-TF                | LBP-TF              | SLBP-TF             | SLBP-IF             |
| Rust    | 87.31±2.32 / 1.4374 | 90.09±1.84 / 2.4061 | 92.69±2.00 / 2.0934 | 93.85±1.86 / 1.7512 |
| Hole    | 88.80±2.33 / 1.3583 | 92.18±1.66 / 2.2278 | 94.77±2.14 / 1.9289 | 95.19±1.91 / 1.5997 |
| Scratch | 81.85±2.59 / 1.4173 | 86.20±2.42 / 2.3545 | 88.10±2.50 / 1.9307 | 89.68±2.21 / 1.6065 |
| Scale   | 76.86±3.49 / 1.3864 | 82.06±2.28 / 2.2451 | 84.90±2.37 / 1.8350 | 87.75±2.02 / 1.5298 |
Table 7. Testing results of four classification methods with δ = 0.05 (each cell: accuracy (%) / time (s)).

| Type    | FO-TF               | FLBP-TF             | SLBP-TF             | SLBP-IF             |
| Rust    | 82.69±2.70 / 1.9055 | 88.21±1.74 / 2.8628 | 91.62±2.32 / 2.0934 | 92.69±2.13 / 1.7512 |
| Hole    | 85.93±2.58 / 1.8008 | 89.77±2.25 / 2.6337 | 93.50±2.43 / 1.9289 | 93.75±1.77 / 1.5997 |
| Scratch | 77.59±3.01 / 1.8673 | 82.45±2.08 / 2.7999 | 85.83±2.03 / 1.9307 | 87.84±1.92 / 1.6065 |
| Scale   | 73.68±2.77 / 1.8205 | 80.00±2.20 / 2.6803 | 83.68±2.26 / 1.8305 | 85.93±2.24 / 1.5298 |

6. Conclusions

An invariant feature extraction method based on SLBP is proposed in this paper and applied to strip steel surface defect recognition. The main work includes four aspects. Firstly, two types of SLBP images are obtained from the signs of differences between weighted grays in the local neighborhood, which overcomes the influence of noise points. Secondly, the CDSST is proposed, on which the image is resampled by coordinate mapping; the resampled defect image has scale and translation invariance. Thirdly, GLCM and binary shifting are used to obtain statistical features with scale, rotation, illumination and translation invariance, and the binary shift of SLBP reduces the dimension of the histogram. Finally, the invariant feature extraction method based on SLBP is used in feature extraction and classification experiments on strip steel surface defect images. The feature extraction experiments show that the novel method is invariant to scale, rotation, illumination and translation, and the classification experiments with the TWSVM classifier show that the invariant features based on SLBP can effectively suppress noise and maintain high classification accuracy.

Acknowledgment

The authors give thanks to sponsor and support from University of Science and Technology Liaoning Foundation (No. 2014QN05).

References
 
© 2015 by The Iron and Steel Institute of Japan

This is an open access article under the terms of the Creative Commons Attribution-NonCommercial-NoDerivs license.
https://creativecommons.org/licenses/by-nc-nd/4.0/