Circulation Reports
Online ISSN : 2434-0790
Images in Cardiovascular Medicine
Reconstruction of Apical 2-Chamber View From Apical 4- and Long-Axis Views on Echocardiogram Using Machine Learning ― Pilot Study With Deep Generative Modeling ―
Akinori Higaki, Katsuji Inoue, Masaki Kinoshita, Shuntaro Ikeda, Osamu Yamaguchi

2019 Volume 1 Issue 4 Pages 197-


The apical 2-chamber (A2C) view is one of the pivotal echocardiographic planes for chamber quantification using the biplane method on routine examination, but acquiring anterolateral images with adequate resolution is relatively difficult compared with the apical 4-chamber (A4C) and apical long-axis (ALX) views.

Currently, there is an increasing number of studies on the application of artificial intelligence in the field of echocardiography.1 Zhang et al recently reported a fully automated echocardiography interpretation system based on convolutional neural networks.2 We hypothesized that deep learning methods, given their high capability for feature abstraction, can be used to reconstruct A2C images from the other 2 apical views. In reference to a previously proposed image-to-image regression model,3 we constructed a deep convolutional encoder–decoder network that receives A4C and ALX images as inputs (Figure A). Images were obtained from 210 consecutive patients who underwent echocardiography at Ehime University Hospital, Toon, Japan between 1 and 31 August 2018. All images were converted to grayscale pictures 96×96 pixels in size.
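The preprocessing step described above (grayscale conversion and resizing to 96×96 pixels) can be sketched as follows. This is an illustrative NumPy-only implementation, not the authors' code (which is available at the linked repository); it assumes input frames whose height and width are integer multiples of 96, so that simple block averaging can serve as the downsampling step.

```python
import numpy as np

def to_grayscale(rgb: np.ndarray) -> np.ndarray:
    """Convert an (H, W, 3) RGB frame to (H, W) grayscale using ITU-R BT.601 weights."""
    return rgb @ np.array([0.299, 0.587, 0.114])

def downsample(gray: np.ndarray, size: int = 96) -> np.ndarray:
    """Resize by block averaging; assumes H and W are integer multiples of `size`."""
    h, w = gray.shape
    assert h % size == 0 and w % size == 0, "frame dimensions must divide evenly by target size"
    return gray.reshape(size, h // size, size, w // size).mean(axis=(1, 3))

def preprocess(rgb: np.ndarray) -> np.ndarray:
    """Produce a 96x96 grayscale image scaled to [0, 1], suitable as network input."""
    return downsample(to_grayscale(rgb)) / 255.0
```

In practice, echocardiographic frames would first be cropped to the imaging sector; any standard image library could replace the block-averaging step when dimensions do not divide evenly.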

Figure.

(A) Schematic representation of the encoder-decoder model. Detailed description and the source code are available at the online repository (https://gist.github.com/ahigaki/bffc549fa7b8f69b1454db5cbd0c11a7). A2C, apical 2-chamber; A4C, apical 4-chamber; ALX, apical long-axis. (B) Representative input and output images. Ground truth, reference 2-chamber images.

With this model, we could obtain plausible A2C images (Figure B), which showed the highest similarity to the reference images compared with the other apical views (Supplementary Figure). These findings indicate that deep generative modeling has the potential to complement low-resolution echocardiography images.
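As a hypothetical illustration of how generated and reference images might be compared, the following computes mean squared error and a single-window (global-statistics) variant of the SSIM index between two grayscale images in [0, 1]. The similarity metric actually used for the Supplementary Figure is not specified in this excerpt, so both the choice of metric and the simplified global SSIM are assumptions for illustration; windowed SSIM (e.g., scikit-image's implementation) would be the usual choice in practice.

```python
import numpy as np

def mse(a: np.ndarray, b: np.ndarray) -> float:
    """Mean squared error between two images of identical shape."""
    return float(np.mean((a - b) ** 2))

def global_ssim(a: np.ndarray, b: np.ndarray, data_range: float = 1.0) -> float:
    """Simplified SSIM computed over the whole image (one global window)."""
    c1, c2 = (0.01 * data_range) ** 2, (0.03 * data_range) ** 2
    mu_a, mu_b = a.mean(), b.mean()
    var_a, var_b = a.var(), b.var()
    cov = ((a - mu_a) * (b - mu_b)).mean()
    return float(((2 * mu_a * mu_b + c1) * (2 * cov + c2)) /
                 ((mu_a ** 2 + mu_b ** 2 + c1) * (var_a + var_b + c2)))
```

Identical images yield an MSE of 0 and an SSIM of 1; lower MSE and higher SSIM against the ground-truth A2C frame would indicate a more faithful reconstruction.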

Disclosures

The authors declare no conflicts of interest.

Supplementary Files

Please find supplementary file(s):

http://dx.doi.org/10.1253/circrep.CR-19-0011

References
 
© 2019 THE JAPANESE CIRCULATION SOCIETY

This article is licensed under a Creative Commons [Attribution-NonCommercial-NoDerivatives 4.0 International] license.
https://creativecommons.org/licenses/by-nc-nd/4.0/