Diagnostic medical imaging has become more sophisticated in recent years owing to technological advances and improved diagnostic techniques. In addition, diagnostic support systems based on deep learning methods, such as convolutional neural networks (CNNs) and fully convolutional networks (FCNs), have been introduced for digitized image diagnosis and analysis. We propose a radiological technical support system based on a pretrained CNN that, once it establishes that re-exposure is required, informs the radiological technologist of the required correction of the beam direction. In this study, a recognition system was developed by combining a pretrained CNN classifier with a segmentation technique (Faster R-CNN). The CNN classifier was applied to identify the positional relationship in knee and ankle X-ray images, and the Faster R-CNN was used to segment the target area in lateral knee joint X-ray images. For the CNN classification, two classes ("pass" and "NG") and five classes ("pass", "adduction", "abduction", "internal rotation", and "external rotation") were defined for the knee joint, and five classes ("pass", "internal rotation", "external rotation", "cranio-caudal", and "caudo-cranial") were defined for the ankle joint. Only the lateral knee joint images were used for the overall performance evaluation of the system, from Faster R-CNN segmentation to CNN classification. For the lateral knee joint images, a ResNet101 model with a support vector machine (SVM-ResNet101), a batch size of 6, and an input image size of 256 yielded high classification performance for both the two- and five-class tasks. Among all investigated CNN models, the SVM-ResNet101 model provided the highest accuracy of 0.9495 across the two- and five-class tasks. In addition, the accuracy of the overall system was 0.7000.
This value was compared with the results of a visual evaluation and found to be equivalent to that of a radiological technologist with more than 10 years of clinical experience. Verification revealed that these results depend on the image contrast.