ISIJ International
Online ISSN : 1347-5460
Print ISSN : 0915-1559
ISSN-L : 0915-1559
Instrumentation, Control and System Engineering
Wide-area Operation Monitoring of Conveyors Using a Panoramic Vibration Camera
Kohei Shimasaki, Zulhaj Muhammad Aliansyah, Taku Senoo, Idaku Ishii, Tomohiko Ito

2021, Vol. 61, No. 10, pp. 2587-2596

Abstract

In this study, we propose a novel non-contact vision sensing method for wide-area, real-time monitoring of the operation of conveyors in ironworks by using a panoramic vibration camera. This method captures magnified images, including vibrations, from a single camera with mirror-driven viewpoint switching. The rotation of multiple rollers supporting conveyor belts was detected using a full-pixel vibration spectrum imaging function, which calculates peak frequencies from time-frequency responses. Through experiments in different situations, such as loading and unloading, we evaluated the efficiency of this method, which can monitor the operation of multiple rollers and conveyors when the camera is located 15 m or more away from the conveyors to be monitored.

1. Introduction

Many pieces of equipment and structures, such as belt conveyors in ironworks, were built during the period of high economic growth up to the 1970s. Deterioration accidents occurring more than 40 years after construction have become a serious problem in recent years. In particular, the material handling conveyors, whose total length reaches 100 km or more in an ironworks, require extensive maintenance and inspection. There is a growing demand for efficient structural health monitoring techniques, but current technology has not yet reached widespread realization.1) One of the reasons is that ensuring the safety of wide-area infrastructures in ironworks requires the installation of an extremely large number of sensors on belt conveyors and pipes at high cost.2) In addition, it is difficult to inspect all infrastructures solely by visual equipment diagnosis and maintenance, and the risk of major accidents due to oversights is also a concern.3) In a material handling conveyor composed of rollers and belts, if a roller remains non-rotating owing to some trouble, the friction between the belt moving at high speed and the stationary roller may lead not only to a decrease in productivity, due to a decrease in transportation volume, but also to a serious incident4) such as a fire or explosion caused by cracking/damage of the belt or an overload on the drive system. Various examples of these incidents have been reported.5,6)

Contact sensors using radio frequency identification (RFID) tags,7,8) non-contact sensors such as electromagnetic sensors9,10) and optical fibers,11) and optical non-contact measurements such as laser scanners,12) thermographic cameras,13,14) and video cameras15,16) are widely used for measuring the conveyor operation status. These contact and non-contact sensors, which must be installed on or near the measurement object, can realize high-precision measurements. However, when monitoring a wide-area facility such as a material handling conveyor in an ironworks, not only the installation and measurement time but also the maintenance and management costs, such as sensor replacement, become large bottlenecks that make it difficult to improve efficiency. To measure dynamic structural properties using high-spatial-resolution images, many video analysis methods, such as digital image correlation,17,18) the sampling moiré pattern method,19,20) and optical flow,21,22) have been reported. They perform video analysis at the frame rate of a commercially available video camera, which enables the simultaneous monitoring of multiple objects. However, when monitoring a facility installed over a wide area, the range covered by one pixel increases owing to the limitation of the pixel pitch of the image sensor, and this spatial-resolution problem prevents a fixed single camera from measuring local small displacements. Additionally, most of these methods deal with standard videos at several tens of frames per second (fps) designed based on human vision; it is difficult to correctly measure high-speed vibration phenomena such as roller rotation and belt vibration on conveyors. Especially with high-magnification observation, the apparent speed in the images becomes faster, and the change between frames becomes so large that the accuracy of the frame-to-frame correspondence is lowered in image analyses such as the image correlation method. Therefore, higher-frame-rate video analysis is required to reduce the change between frames for more accurate measurement.

To solve the problems of spatial and temporal resolution of video cameras, a mirror-driven active vision system23) has been proposed; it can actively and rapidly move the viewpoint for high-magnification imaging. Its effectiveness has been demonstrated through vibration measurement of a bridge model24,25) and of the pillars of a belt conveyor in a steelworks26) as a dynamic sensing tool that exploits both the high temporal resolution and the high spatial resolution of mirror-driven high-speed active vision, which enables zoom imaging while maintaining spatial resolution in wide-area measurement with a single camera and realizes imaging and image processing at several hundreds of frames per second. Generally, when installing dozens or hundreds of cameras, camera parameter settings such as focus adjustment and frame rate for each unit are very troublesome. When considering day-by-day spot measurements at different locations, it is necessary to reduce the setting time for the camera arrangement to simplify installation. The mirror-driven active vision system provides immediate and extensive measurements once set up and can start measurements immediately with a unified set of camera parameters even when the measurement location changes. Thus, it has an advantage over the installation of multiple cameras in terms of instantaneous setup. Considering that each pixel of an image sensor is an optical sensor, several studies on vibration imaging based on signal processing of the time-series brightness signals at all pixels independently have recently been reported. Their effectiveness was verified by structural vibration analysis24,27) with vibration distribution measurement at the audio-frequency level, rotation measurement of motors and propellers,28,29,30) and wing-flapping measurement of honeybees.31,32) This approach is expected to become a novel wide-area vibration monitoring technique in which every pixel plays the role of a sensor for time-series signal processing in high-speed vision.

In this study, we developed a panoramic vibration camera that enables long-term vibration analysis based on pixel-level vibration imaging in real time while covering multiple viewpoints over a wide range with high-magnification imaging. We conducted several monitoring experiments on an operating material handling conveyor, analyzing the vibration imaging data to detect the rotation of its rollers in comparison with the material transportation conditions on its belt, and verified its effectiveness as a wide-area spot-measurement tool for conveyors in ironworks, for which installing and setting up dozens or hundreds of cameras is unnecessary for day-by-day spot measurements.

2. Panoramic Vibration Camera

2.1. System Configuration

The panoramic vibration camera used in this study is an extension of the mirror-drive high-speed active vision system12) with a vibration imaging function implemented as real-time high-frame-rate video processing. Figure 1 shows an overview of the panoramic vibration camera. In this system, a galvanomirror that can perform two-degree-of-freedom viewpoint control in the pan and tilt directions is installed in front of a high-speed camera. Magnified images can be captured in the angle range of −20° to 20° by controlling the mirror angle in the pan and tilt directions in the range of −10° to 10°. The high-speed camera is a USB3.1 camera (DFK 37BUX287, Imaging Source) with a color CMOS image sensor; its sensor size, pixel pitch, and resolution are 4.96 × 3.76 mm, 6.90 × 6.90 μm, and 720 × 540 pixels, respectively. It can capture 720 × 540 color images at 540 fps. A 200-mm-focal-length lens was mounted on the camera, giving a field of view of 1.42° × 1.07°. The galvanomirror (6240H, Cambridge Technology) has two mirrors of 17.5 × 12.2 mm on the pan and tilt axes, respectively. It enables switching to an arbitrary viewpoint in the mirror-movable range 200 or more times per second by a voltage command from the control PC when the mirror angle changes by 10° or less from the present angle. Images captured by the high-speed camera are transferred to the control PC via the USB3.1 interface, and the vibration imaging process is executed in real time. In this study, a laptop PC (OMEN15, HP) with a CPU (Intel Core i7-8750H @ 2.20 GHz), 16 GB of memory, and a GPU (NVIDIA GeForce RTX 2070 with Max-Q Design), running Windows 10 Pro (64-bit), was used as the control PC. In parallel with the vibration imaging process, the viewpoint switching control of the galvanomirror was performed via a DA/AD board (PEX-361116, Interface). The DA/AD board is mounted in a PCI Express expansion box (TB31PCIEX16, StarTech.com) and connected to the control PC through the Thunderbolt 3 interface.

Fig. 1.

Overview of panoramic vibration camera.

2.2. Pixel-level Vibration Imaging

To detect the rotation of conveyor rollers in real time, we implemented the pixel-level vibration imaging method19) that extracts vibration distributions by computing a short-time Fourier transform (STFT) of the time-varying brightness signal at every pixel in an image. Figure 2 shows the concept of pixel-level vibration imaging. For an image sequence containing high-speed vibration that is difficult to observe with the naked eye, the power spectrum in the frequency domain is obtained at every pixel by computing the STFT of its brightness signal. After removing low-frequency components such as the background, the peak frequency at which the power spectrum becomes maximal is detected at every pixel. When these values correspond to the rotation speeds of the rollers, the vibration pixels corresponding to the roller rotation are extracted.

Fig. 2.

Concept of pixel-level vibration imaging.

This method can visualize vibrations up to the Nyquist frequency, which is half of the frame rate; when the frame rate is 120 fps, it enables vibration monitoring in the frequency range of 0–60 Hz. Vibration regions can be robustly detected as targets even when the captured images deteriorate owing to a lack of image resolution, defocus blur, and background changes such as fluctuating illumination and occlusion, because the method focuses on pixel-level signal processing of brightness changes without considering appearance shapes. Its real-time implementation enables visualization without limiting the measurement time, even for long-duration high-frame-rate image sequences that are too large to record on computer storage. In this study, the peak frequency images obtained by pixel-level vibration imaging of high-frame-rate images were output at several tens of frames per second, which enables the output data to be recorded in a manner similar to general-purpose standard video recording.
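
As a toy illustration of this per-pixel processing (all values below are made up for the example and NumPy's FFT over one K-frame window stands in for the STFT; this is a sketch, not the authors' implementation), the brightness signal of a single pixel can be transformed to the frequency domain and its dominant component read off after discarding the low-frequency background:

```python
import numpy as np

fps, K = 120.0, 32                            # assumed frame rate and window length
t = np.arange(K) / fps
# Synthetic brightness of one pixel on a roller, flickering at ~16 Hz plus a slow drift.
brightness = 128 + 20 * np.sin(2 * np.pi * 16.0 * t) + 10 * t

power = np.abs(np.fft.rfft(brightness)) ** 2  # power spectrum of the K-frame window
freqs = np.fft.rfftfreq(K, d=1.0 / fps)       # bin frequencies: 0, 3.75, 7.5, ..., 60 Hz
power[freqs < 5.0] = 0.0                      # suppress DC / slow background (assumed cutoff)
print(freqs[np.argmax(power)])                # -> 15.0 Hz (nearest 3.75 Hz bin to 16 Hz)
```

Running the same operation independently at every pixel yields a peak frequency image, as illustrated later in Section 2.3.2.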

In this study, the material handling conveyor to be measured was observed from a panoramic vibration camera installed at a fixed position, and its rollers did not move significantly from their reference positions. Their rotation frequencies were observed at the pixels around the rollers because the periodic changes of the brightness signals depend on the rotation speeds of the rollers. There is a trade-off between the detection frequency resolution and the execution time with respect to the number of frames used in the pixel-level STFT computation. The frequency resolution is 3.75 Hz and 1.875 Hz for STFT computations on a 120 fps video with 32 frames and 64 frames, respectively. The computation time doubles when the number of frames used in the STFT computation is doubled, which may become an obstacle to real-time vibration imaging. To prioritize real-time and long-term monitoring of the conveyor rollers in this study, pixel-level STFT computation with a reduced number of frames was implemented in parallel on a GPU to judge only whether the conveyor rollers rotate or not; small differences in their rotation speeds cannot be detected accurately.

2.3. Implemented Algorithms and their Performances

To realize rotation monitoring of multiple conveyor rollers installed in a wide area, (1) determination of multiple viewpoints with panoramic scanning, (2) vibration imaging with peak frequency images from multiple viewpoints, and (3) detection of rotating rollers in the peak frequency images are implemented on the panoramic vibration camera. The details of the algorithms are as follows:

2.3.1. Determination of Multiple Viewpoints Using Panoramic Scanning

Similar to the vibration measurement of conveyor pillars with the mirror-drive high-speed active vision system,12) a panoramic image for confirming the measurable area is generated by ultrafast image scanning using the galvanomirror as preprocessing, and the pan and tilt angles for observing multiple rollers at different positions are manually determined to monitor their rotation status.

First, the viewpoint direction is switched at time intervals τ by scanning the mirror clockwise while capturing images over I × J viewpoints (I steps in the pan direction and J steps in the tilt direction). A panoramic image P(x′, y′) is synthesized from the captured images P0(x, y, t0 + kτ) (0 ≤ k < IJ). The pan and tilt angles θ at time t0 + (i + jI)τ are θ(t0 + (i + jI)τ) = (iΔθpan, jΔθtilt) (0 ≤ i < I, 0 ≤ j < J), where the step pan and tilt angles are Δθpan and Δθtilt, respectively. The relationship between the pan and tilt angles θ and the coordinates (x′, y′) of the panoramic image is recorded as a look-up table (x′, y′) = fLUT(θ(t0 + kτ)).

Based on the panoramic image P(x′, y′), the user specifies the N coordinate positions (xn′, yn′) (n = 0, …, N − 1) of the points corresponding to the rollers and pillars to be observed, and the pan and tilt angles corresponding to each viewpoint direction in which video is captured are determined from the inverse look-up table θn = fLUT−1(xn′, yn′).
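
As a rough sketch of how such a look-up table might be handled in software, a forward table can map each scanned pan/tilt step to where its sub-image lands in the panorama, and a user-selected point in the panorama can then be resolved back to the nearest scanned mirror angle. The grid values below follow the experiment in Section 3, but the data layout, function names, and nearest-centre inversion are illustrative assumptions rather than the authors' implementation.

```python
import numpy as np

# Illustrative scan grid: step angles of 1.37 deg (pan) and 1.03 deg (tilt) over
# 30 x 39 viewpoints with 720 x 540 sub-images, as in Section 3.
I, J = 30, 39
d_pan, d_tilt = 1.37, 1.03
W, H = 720, 540

steps = [(i, j) for j in range(J) for i in range(I)]
# Forward table f_LUT: scan step (i, j) -> pan/tilt offset and panoramic centre pixel.
angles = np.array([(i * d_pan, j * d_tilt) for i, j in steps])
centres = np.array([(i * W + W / 2, j * H + H / 2) for i, j in steps])

def inverse_lut(x, y):
    """Approximate f_LUT^-1: panoramic pixel (x', y') -> nearest scanned (pan, tilt)."""
    k = np.argmin(np.hypot(centres[:, 0] - x, centres[:, 1] - y))
    return tuple(angles[k])

# Example: a roller selected at panoramic pixel (10500, 5200) maps to mirror angles
# of roughly (19.18, 9.27) degrees measured from the start of the scan.
print(inverse_lut(10500, 5200))
```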

2.3.2. Vibration Imaging with Peak Frequency Images from Multiple Viewpoints

We apply pixel-level vibration imaging20) that calculates STFTs of the brightness signals at all pixels after capturing HFR images at a frame rate fi for a certain period T0 at each viewpoint. As the processed data, peak frequency images are output at a frame rate fo corresponding to a standard video output rate.

Corresponding to the viewpoint directions θn (n = 0, …, N − 1) determined in (1), the pan and tilt angles of the galvanomirror are set to θn(t), where n(t) = ⌊t/T0⌋ mod N, and the HFR images In(x, y, t′) captured while viewpoint n is selected, i.e., during the intervals (iN + n)T0 ≤ t < (iN + n + 1)T0 (i: integer), are collected for a certain period T0 at each viewpoint.

The power spectrum of the brightness signal at each pixel is computed as an STFT over K (= fiT0) frames of the images In(x, y, t′) at all pixels, as follows:

F_n^k(x, y, t′) = STFT( I_n(x, y, t′), …, I_n(x, y, t′ − (K − 1)T_i) )   (k = 0, …, K − 1),   (1)

where T_i = 1/f_i is the frame interval and F_n^k(x, y, t′) represents the power spectral component in the frequency band kfi/K ≤ f < (k + 1)fi/K.

The peak frequency image for the viewpoint direction θn is computed by detecting, at every pixel, the frequency at which the power spectrum is maximal in the frequency range of fcut (= kcut·fi/K) or higher, as follows:

P_n(x, y, t′) = (f_i/K) · argmax_{k ≥ k_cut} F_n^k(x, y, t′).   (2)

These pixel-level STFT computations and peak-frequency image outputs are performed for each switched viewpoint; the viewpoint switching is performed at a rate fo = 1/T0, which is much lower than the capture rate fi of the input images.
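
The loop below sketches how this round-robin viewpoint switching and per-viewpoint peak-frequency imaging could fit together in software. It is a hedged illustration: the function and callable names (monitor_viewpoints, capture_frame, set_mirror), the cutoff f_cut = 5 Hz, and the cycle count are assumptions, and the real system synchronizes mirror motion with the camera exposure and runs the STFTs on a GPU.

```python
import numpy as np

def monitor_viewpoints(viewpoint_angles, capture_frame, set_mirror,
                       f_i=120.0, K=32, f_cut=5.0, cycles=10):
    """Round-robin peak-frequency imaging over N viewpoints (illustrative sketch).

    viewpoint_angles : list of N (pan, tilt) angles chosen from the panorama
    capture_frame    : callable returning one grayscale frame as an (H, W) array
    set_mirror       : callable that drives the galvanomirror to a (pan, tilt) angle
    """
    N = len(viewpoint_angles)
    T0 = K / f_i                     # dwell time per viewpoint; output rate f_o = 1/T0
    freqs = np.fft.rfftfreq(K, d=1.0 / f_i)
    for c in range(cycles):
        n = c % N                    # n(t) = floor(t / T0) mod N
        set_mirror(viewpoint_angles[n])
        frames = np.stack([capture_frame() for _ in range(K)])  # K frames in T0 seconds
        # Eq. (1): per-pixel power spectrum over the K-frame window.
        spec = np.abs(np.fft.rfft(frames, axis=0)) ** 2
        spec[freqs < f_cut] = 0.0    # suppress background below the cutoff
        # Eq. (2): peak frequency at every pixel.
        yield n, freqs[np.argmax(spec, axis=0)]
```

Each yielded peak-frequency image would then be passed to the ROI-based rotation check described in the next subsection for the corresponding viewpoint.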

2.3.3. Detection of Rotating Rollers in the Peak Frequency Images

In the peak frequency image at the frame rate fo for each viewpoint, the existence of a rotating roller in the ROI Rn(x, y) that includes the roller is detected by checking the peak frequency components related to the roller rotation in the ROI. It is assumed that the ROIs Rn(x, y) are determined in advance for the multiple viewpoints selected in (1).

In a peak frequency image, vibration pixels whose peak frequencies lie in a certain frequency band flow ≤ f < fhigh are detected as follows:

V_n(x, y, t′) = 1 if f_low ≤ P_n(x, y, t′) < f_high, and 0 otherwise.   (3)

By counting the number of vibration pixels rn(t′) in the ROI, we determine whether the roller in the viewpoint direction θn is rotating by checking whether rn(t′) is equal to or greater than a threshold θ_n^r (n = 0, …, N − 1), as follows:

R_n(t′) = 1 if r_n(t′) = Σ_{(x, y) ∈ R_n(x, y)} V_n(x, y, t′) ≥ θ_n^r, and 0 otherwise.   (4)

The threshold θ_n^r was adjusted individually for each viewpoint direction θn, considering the different illumination conditions.
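
A minimal sketch of this ROI decision, corresponding to Eqs. (3) and (4), might look as follows. The band limits, ROI coordinates, and threshold are example values in the spirit of Section 3 (8–16 Hz, 30 × 30 ROIs, thresholds of 90 or 200), not the exact implementation.

```python
import numpy as np

def roller_is_rotating(peak_freq_image, roi, f_low=8.0, f_high=16.0, threshold=90):
    """Eqs. (3)-(4): count vibration pixels in the ROI and threshold the count.

    peak_freq_image : (H, W) array of per-pixel peak frequencies in Hz
    roi             : (y0, y1, x0, x1) bounds of the ROI around one roller
    """
    y0, y1, x0, x1 = roi
    patch = peak_freq_image[y0:y1, x0:x1]
    vibration = (patch >= f_low) & (patch < f_high)   # V_n(x, y, t')
    r_n = int(vibration.sum())                        # number of vibration pixels
    return r_n >= threshold, r_n                      # R_n(t') and the raw count

# Example with a synthetic peak-frequency image: a 30 x 30 region around a roller
# showing ~12 Hz stands in for a rotating roller.
img = np.zeros((540, 720))
img[120:150, 200:230] = 12.0
print(roller_is_rotating(img, roi=(120, 150, 200, 230)))   # -> (True, 900)
```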

In this study, these processes were performed by capturing 720 × 540 color images at a frame rate of 120 fps (= fi) with an exposure time of 2 ms and by executing viewpoint switching at intervals of 1/120 s in synchronization with the image capture. The galvanomirror for gaze control enables viewpoint switching within 2 ms, and the viewpoint switching is performed during the 6.3 ms of each frame interval that lies outside the camera exposure.

The step pan and tilt angles in the panoramic scanning were set to Δθpan = 1.37° and Δθtilt = 1.03°, respectively; they were set to be smaller than the camera's field of view of 1.42° × 1.07° so that the scanned images could be obtained without gaps. After the color images were converted into grayscale images, the pixel-level STFTs in the vibration imaging process were calculated over 32 (= K) frames spanning 0.267 s. The viewpoint switching rate was fo = 3.75 fps. With parallel implementation on the GPU board, the pixel-level STFT computation for 720 × 540 images over 32 frames can be performed within 0.267 s; it was accelerated so that the computation finished before the next viewpoint switch. In the peak frequency images output at intervals of T0 = 0.267 s, rotating rollers were detected by counting the number of vibration pixels in the 30 × 30 ROI set in advance for each roller. When switching among N viewpoints, the rotation status of each roller is updated at intervals of T2 = NT0 = 0.267 s × N.
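
For reference, the update cycle follows directly from these parameters; the short calculation below simply restates the numbers in the text, with N = 60 viewpoints taken from the experiment in Section 3.1.

```python
f_i, K = 120.0, 32          # capture frame rate and STFT window length
T0 = K / f_i                # dwell time per viewpoint  -> 0.267 s
f_o = 1.0 / T0              # peak-frequency image rate -> 3.75 fps
N = 60                      # viewpoints used in the experiment of Section 3
T2 = N * T0                 # update cycle per roller   -> 16.0 s
print(T0, f_o, T2)
```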

In the following experiments on long-time conveyor monitoring, the 120 fps image capture and the processing in (1) and (2) were implemented in real time because the limited memory of the PC makes it difficult to record 120 fps images for a long time. The peak frequency images, downconverted to 3.75 fps by the vibration imaging in (2), were recorded over the measurement period, and the detection of rotating rollers in (3) was performed offline on the recorded images.

3. Monitoring Experiment for Rotating Rollers on a Belt Conveyor

3.1. Experimental Environment

An experiment for monitoring conveyor rollers was conducted with the panoramic vibration camera on a material handling conveyor in operation in an ironworks in Japan on December 23, 2019 (sunny, temperature of 11.7°C, and wind speed of 2.0 m/s at 12:00). Figure 3 shows an outline of the experimental environment. On the target belt conveyor, pillars with 17-cm-diameter rollers were built at intervals of 0.8 m, and the 10-mm-thick belt moved on the rollers. The maximum conveying speed was 2.7 m/s, and the maximum rotation speed of the rollers was 8.3 rps. The panoramic vibration camera with a 200-mm-focal-length lens was installed on a tripod 15 m away from the belt conveyor to avoid the effect of vibration generated by the conveyor. At this distance, the image resolution and the field of view were 0.5 mm/pixel and 0.36 × 0.27 m, respectively.

Fig. 3.

Outline of rotation monitoring of conveyor rollers in a steelworks.

Initially, 1170 images covering 30 horizontal × 39 vertical viewpoints were captured and synthesized into a composite panoramic image of 21600 × 21060 pixels, as shown in Fig. 4. Based on the panoramic image, the pan and tilt angles for 30 horizontal × 2 vertical viewpoints were set as the viewpoints of the mirror scanning; the range of 10.8 × 0.54 m that contained 13 rollers on the conveyor was captured for monitoring with vibration imaging. The panoramic vibration camera executed vibration imaging in real time by computing pixel-level STFTs over 32 frames spanning 0.267 s for all the viewpoints. The monitoring experiment was conducted for 70 min from 12:21 to 13:30 to detect rotating rollers in the images of the 60 viewpoints at a cycle time of 16.0 s. In the experiment, the conveyor belt was operated at a constant speed, and the loading status of the materials varied with time.

Fig. 4.

Panoramic images captured by panoramic vibration camera.

3.2. Roller Rotation Analysis Using Offline Images

As a pre-experiment, we verified the temporal changes of the brightness signals at pixels around a rotating conveyor roller offline by capturing 640 × 480 color images at 200 fps, corresponding to a 0.32 × 0.24 m range that contained the roller to be observed. Figure 5 shows (a) the input image, (b) the peak frequency image computed with pixel-level STFTs, (c) the one-second-duration brightness signals at pixels A (30,30), B (188,167), and C (214,361), and (d) their frequency responses over 1 s (200 frames). The image resolution was 0.5 mm/pixel, similar to that in the real-time monitoring experiment in Section 3.3.

Fig. 5.

Vibration detection results when capturing 640 × 480 images around a rotating conveyor-roller at 200 fps: (a) input image, (b) peak frequency image, (c) temporal brightness changes at A (30,30), B (188,167), and C (214,361), and (d) frequency responses at A, B, and C.

No large changes are observed in the brightness signals at pixel A, which corresponds to the background scene, or at pixel C, around the pillar of the conveyor, whereas the signal at pixel B, around the roller, exhibits a remarkable temporal change caused by the roller's rotation. A peak frequency of 15.6 Hz, close to 16.6 Hz, which is twice the rotation speed of 8.3 rps, was detected owing to the surface pattern of the roller. The vibration caused by the roller rotation was transferred to the pillar, and a peak frequency of 9.4 Hz was observed at pixel C, although the amplitude of this vibration was so small that the brightness signal changed little over time. Figure 5(b) shows a color map of the peak frequency in the range of 5–16 Hz, where the frequency corresponding to the rotation speed of the roller (green: around 8 Hz) was observed not only on the roller but also on the entire belt conveyor, while the peak frequency around 16 Hz (red) appears strongly around the roller.

From these results, it can be confirmed that the rotation speed of the roller, and a frequency twice as high, were detected as the peak frequencies at pixels around the rotating roller in the images captured at 200 fps.

3.3. Real-time Monitoring of Rotating Rollers Using a Panoramic Vibration Camera

Figure 6 shows the input images for the viewpoints that contain the rollers on the 13 pillars (pillar 1 to pillar 13) in the mirror-scanning area, taken just after the monitoring experiment started at 12:21 under the conditions described in Section 3.1. For the same viewpoints, Fig. 7 shows the peak frequency images obtained in real time by the panoramic vibration camera when materials were conveyed on the belt at 12:21 in (a) and at 12:40 in (b), and when no material was conveyed at 13:30 in (c). In the peak frequency images, all the rollers except those on pillar 6 rotated when materials were conveyed on the belt, and the peak frequency components corresponding to their rotation speeds were detected. The rollers on pillars 6, 8, 10, and 13 did not rotate when no material was conveyed, and no peak frequency component was noticeably detected around those rollers.

Fig. 6.

Input images with panoramic viewpoint switching at 12:21: (a) selected pillar images in panoramic scanning, and (b) input images for pillars 1–13.

Fig. 7.

Peak frequency images around pillars: (a) 12:21 (with load), (b) 12:40 (with load), and (c) 13:30 (without load).

The 30 × 30 ROI regions around the rollers were manually selected for all the viewpoints, and the number of pixels in each ROI with peak frequency components of 8 to 16 Hz was counted as the number of vibration pixels. Figure 8 shows the number of detected vibration pixels and the resulting rotation status during the 70-min monitoring; the plots indicate the number of vibration pixels in the range of 0 to 900 pixels, and the red/gray lines indicate the rotation status determined by thresholding the number of vibration pixels. The manually selected 30 × 30 ROI regions are shown in Fig. 7(a). In the experiment, 640 × 480 images were recorded at 30 fps by a camera (SCS-color, Occipital Co.) installed obliquely above the belt conveyor, and a spatiotemporal image that indicates the material load status on the belt was computed by integrating its belt cross-sectional images, as shown at the top of Fig. 8. Figure 9 shows an image captured by the SCS-color camera and the intersected line used for the belt cross-sectional images. The threshold in Eq. (4) was set to 90 for pillars 1 to 5 and 200 for the other pillars.

Fig. 8.

Number of detected vibration pixels around conveyor-rollers and their rotation status during 70-minute conveyor monitoring.

Fig. 9.

Monitor image for loaded materials with an intersected line.

From the spatiotemporal image during the 70-min monitoring, it can be observed that materials were conveyed intermittently from 12:21 to 12:24 and from 12:40 to 12:47, and no material was conveyed at other times. For pillars 1–5, 7, 9, and 11, where the rollers always rotated regardless of whether materials were loaded on the belt, a certain number of vibration pixels was always observed in the ROI regions, with slight temporal fluctuation due to changes in sunshine conditions. For pillar 6, where the roller was always stopped, vibration pixels were hardly observed. For pillars 8, 10, and 13, where the rotation status of the rollers varied over time, distinct increases and decreases were observed in the number of vibration pixels, corresponding to the material load status. For pillar 12, where the roller rotated intermittently not only when materials were conveyed but also when no material was conveyed, the material load status on the belt did not always match the rotation status of the roller. From these results, it was verified that rotation monitoring of the conveyor rollers, in quantitative comparison with the material load status on the belt, was realized over a long observation period of 70 min by inspecting the rotation of multiple rollers with our developed panoramic vibration camera.

We can consider the following reasons why a roller stops rotating while materials are conveyed: (1) a non-contact state in which the belt is always separated from the roller, and (2) a contact state in which the roller does not rotate even though it is always in contact with the belt. In the experiment, pillar 6 corresponded to the former case, and deformation of the roller or subsidence of its pillar may have caused this status. The latter is often caused by deterioration of the roller mechanism or its motor drive system over time. It may lead to serious fire incidents in ironworks, and attention must be paid to the maintenance and management of such equipment. In case (2), a non-rotating roller may be erroneously recognized as rotating because it moves slightly with vibration when the belt touches it even though it is not rotating. In this study, a rotating roller was detected by checking the frequency responses of the brightness signals, which depend on the rotation of the scratch patterns and gaps on the roller, so the frequency response of a non-rotating roller touching the belt can be distinguished as a different one. If a roller does not rotate while touching the belt, intermittent vibration may be caused by friction between the belt and the roller. The behavior of a belt touching a non-rotating roller is clearly different from that of a belt in contact with a rotating roller, and detecting this behavior is important because such a condition can trigger a serious incident. To prevent serious incidents on belt conveyors in ironworks, our monitoring technique will be an even more effective maintenance and management tool when combined with monitoring of belt–roller contact conditions.

4. Conclusion

In this study, a panoramic vibration camera was developed for wide-area vibration monitoring using high-speed vision, and a remote, long-term monitoring experiment was conducted to measure the rotation status of multiple rollers of a material handling conveyor operating in an ironworks. Through the monitoring experiment, it was confirmed that our panoramic vibration camera functioned as virtual rotation sensors corresponding to dozens of pillars without directly installing a sensor on each roller, and that the rotation status of multiple rollers, which varied depending on whether materials were being conveyed on the belt, could be measured in real time.

Pixel-level vibration imaging with the panoramic vibration camera can detect vibrating pixels even when the roller region in the image is small, and it is applicable to wide-area monitoring of belt conveyors at distances of several tens of meters or more. Based on these results, we will improve the camera system and image processing algorithm for monitoring the belt and rollers of a material handling conveyor so as to detect non-rotating rollers in contact with the belt by inspecting the correlation between the belt–roller contact status and the rotation status of the rollers. Toward the smart realization of maintenance and management to prevent serious incidents, we plan to deploy our wide-area monitoring technology in ironworks by introducing automatic panoramic viewpoint determination with AI-based landmark recognition.

References
 
© 2021 The Iron and Steel Institute of Japan.

This is an open access article under the terms of the Creative Commons Attribution-NonCommercial-NoDerivs license.
https://creativecommons.org/licenses/by-nc-nd/4.0/