IEICE Transactions on Information and Systems
Online ISSN : 1745-1361
Print ISSN : 0916-8532


HDR-VDA: A Full Stage Data Augmentation Method for HDR Video Reconstruction
Fengshan ZHAO, Qin LIU, Takeshi IKENAGA
Advance online publication

Article ID: 2024PCP0004

Abstract

Mainstream data augmentation techniques built on image-level manipulation operations (e.g., CutMix) compromise the integrity of extracted features, which hinders the application of data augmentation to pixel-level image processing tasks. Moreover, the potential of test-time augmentation in the HDR domain remains largely unexplored. In this paper, a full-stage data augmentation method for HDR video reconstruction, called HDR-VDA, is proposed, targeting synthetic-video-based training datasets in particular. In the training stage, a local area-based mixed data augmentation (LMDA) provides samples covering diverse exposure and color patterns, so the trained model becomes more effective at processing poorly exposed regions, especially areas with rich color details. A motion and ill-exposure guided sample rank and adjustment strategy (MISRA) augments specific training samples and supplies extra compensating information. In the testing stage, an HDR-targeted test-time augmentation method (HTTA) is designed for reconstructed HDR frames. After the shape of each test-time augmented HDR output is restored to match the original inference output, an ill-exposure outlier removal based average ensemble blends all augmented inference outputs into reliable and stable reconstruction results. Experiments demonstrate that HDR-VDA achieves a PSNR-T score of 38.91 dB, outperforming conventional works under the same conditions.
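The test-time augmentation and blending procedure summarized above can be outlined in code. The following is a minimal sketch, not the authors' implementation: the geometric transform set (horizontal/vertical flips), the luminance thresholds used to flag ill-exposed pixels, and the median-distance outlier removal are all illustrative assumptions, and model stands in for any HDR video reconstruction network applied per frame.

    # Minimal sketch of HDR-targeted test-time augmentation with
    # ill-exposure outlier removal and averaging (assumed details).
    import numpy as np

    def htta_ensemble(model, frame, low_thr=0.05, high_thr=0.95):
        """Blend test-time augmented HDR predictions for one frame.

        model    : callable mapping an HxWx3 LDR frame to an HxWx3 HDR frame
        frame    : input frame, float array in [0, 1]
        low_thr,
        high_thr : luminance bounds flagging ill-exposed input pixels (assumed)
        """
        # Forward transforms and their inverses (flips as an example set).
        transforms = [
            (lambda x: x,             lambda y: y),
            (lambda x: x[:, ::-1],    lambda y: y[:, ::-1]),   # horizontal flip
            (lambda x: x[::-1, :],    lambda y: y[::-1, :]),   # vertical flip
            (lambda x: x[::-1, ::-1], lambda y: y[::-1, ::-1]),
        ]

        outputs = []
        for fwd, inv in transforms:
            pred = model(fwd(frame).copy())   # inference on augmented input
            outputs.append(inv(pred))         # restore original orientation
        outputs = np.stack(outputs, axis=0)   # (T, H, W, 3)

        # Flag ill-exposed pixels of the input (under- or over-exposed).
        luma = frame.mean(axis=-1, keepdims=True)
        ill_exposed = (luma < low_thr) | (luma > high_thr)    # (H, W, 1)

        # In ill-exposed regions, drop the prediction farthest from the
        # per-pixel median (a simple outlier-removal stand-in), then average.
        median = np.median(outputs, axis=0, keepdims=True)
        dist = np.abs(outputs - median).mean(axis=-1, keepdims=True)
        worst = dist == dist.max(axis=0, keepdims=True)
        keep = np.where(ill_exposed[None] & worst, 0.0, 1.0)
        blended = (outputs * keep).sum(axis=0) / np.clip(keep.sum(axis=0), 1.0, None)
        return blended

With a trained reconstruction network wrapped as a callable, htta_ensemble(net, ldr_frame) returns the blended HDR estimate for a single frame; the transform set and thresholds would need to match whatever the actual method uses.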

© 2024 The Institute of Electronics, Information and Communication Engineers