Hair samples are commonly used as evidence in criminal investigations. Hair collected at a crime scene typically undergoes morphological examination with the naked eye and optical microscopes at a forensic science laboratory, followed by DNA profiling. However, when human and animal hairs are intermixed at a crime scene, the volume of hair evidence can become vast, requiring considerable time and effort from collection through DNA analysis.
In this study, we explored the feasibility of using convolutional neural network (CNN) models to screen for human hairs among samples collected at crime scenes. First, images of the root and shaft sections of cat, dog, and human hairs were captured using a portable digital microscope and a smartphone. Images of the tip sections of human hairs were also taken, yielding an image dataset comprising seven classes.
For these seven image classes, we developed classification models either by training a DenseNet-121 convolutional neural network from scratch or by fine-tuning a DenseNet-121 pre-trained on ImageNet. The optimizer was either SGD or Adam, and data augmentation was either basic (horizontal flipping, rotation, and brightness adjustment) or enhanced (additionally applying blurring, noise, and color distortion).
Training results showed that the model fine-tuned with Adam and basic data augmentation achieved the highest accuracy, 99%. However, Grad-CAM++ revealed that this model sometimes focused on the background rather than the hair in the images.
Conversely, the model fine-tuned with SGD and enhanced data augmentation achieved a lower accuracy of 93.71% but was the most reliable in focusing on the hair itself and the most robust to images featuring only shafts. It also demonstrated high precision and recall for human hair roots. These outcomes suggest that this model has the potential to screen human hairs with high accuracy.