EchoNet-Dynamic:<br/>Interpretable AI for beat-to-beat cardiac function assessment
------------------------------------------------------------------------------

EchoNet-Dynamic is an end-to-end, beat-to-beat deep learning model for
  1) semantic segmentation of the left ventricle,
  2) prediction of ejection fraction from the entire video or from subsampled clips, and
  3) assessment of cardiomyopathy with reduced ejection fraction.

For more details, see the accompanying paper,

> [**Video-based AI for beat-to-beat assessment of cardiac function**](https://www.nature.com/articles/s41586-020-2145-8)<br/>
  David Ouyang, Bryan He, Amirata Ghorbani, Neal Yuan, Joseph Ebinger, Curt P. Langlotz, Paul A. Heidenreich, Robert A. Harrington, David H. Liang, Euan A. Ashley, and James Y. Zou. <b>Nature</b>, March 25, 2020. https://doi.org/10.1038/s41586-020-2145-8

Dataset
-------
We share a deidentified set of 10,030 echocardiogram videos that were used for training EchoNet-Dynamic.
Preprocessing of these videos, including deidentification and conversion from DICOM format to AVI format, was performed with OpenCV and pydicom. Additional information is available at https://echonet.github.io/dynamic/. These deidentified videos are shared under a non-commercial data use agreement.

Examples
--------

We show examples of our semantic segmentation for nine distinct patients below.
Three patients have normal cardiac function, three have low ejection fractions, and three have arrhythmia.
No human tracings for these patients were used by EchoNet-Dynamic.

| Normal                                 | Low Ejection Fraction                  | Arrhythmia                             |
| ------                                 | ---------------------                  | ----------                             |
| ![](docs/media/0X10A28877E97DF540.gif) | ![](docs/media/0X129133A90A61A59D.gif) | ![](docs/media/0X132C1E8DBB715D1D.gif) |
| ![](docs/media/0X1167650B8BEFF863.gif) | ![](docs/media/0X13CE2039E2D706A.gif)  | ![](docs/media/0X18BA5512BE5D6FFA.gif) |
| ![](docs/media/0X148FFCBF4D0C398F.gif) | ![](docs/media/0X16FC9AA0AD5D8136.gif) | ![](docs/media/0X1E12EEE43FD913E5.gif) |

Installation
------------

First, clone this repository and enter the directory by running:

    git clone https://github.com/echonet/dynamic.git
    cd dynamic

EchoNet-Dynamic is implemented for Python 3 and depends on the following packages:
  - NumPy
  - PyTorch
  - Torchvision
  - OpenCV
  - skimage
  - sklearn
  - tqdm

EchoNet-Dynamic and its dependencies can be installed by navigating to the cloned directory and running:

    pip install --user .

Usage
-----
### Preprocessing DICOM Videos

The input to EchoNet-Dynamic is an apical-4-chamber view echocardiogram video of any length. The easiest way to run our code is to use videos from our dataset, but we also provide a Jupyter notebook, `ConvertDICOMToAVI.ipynb`, that converts DICOM files to the AVI files used as input to EchoNet-Dynamic. The notebook deidentifies each video by cropping out information outside of the ultrasound sector, resizes the input video, and saves it in AVI format.
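
The crop-and-resize step performed by the notebook can be sketched in miniature as follows. This is an illustrative stand-in using plain Python lists for grayscale frames; the actual notebook operates on pydicom pixel arrays with OpenCV, and the crop coordinates below are hypothetical:

```python
def crop_and_resize(frame, top, left, height, width, out_size):
    """Crop a rectangular region from a frame (a list of pixel rows) and
    resize it to out_size x out_size with nearest-neighbor sampling."""
    cropped = [row[left:left + width] for row in frame[top:top + height]]
    resized = []
    for i in range(out_size):
        src_i = i * height // out_size
        row = [cropped[src_i][j * width // out_size] for j in range(out_size)]
        resized.append(row)
    return resized

# Hypothetical 4x4 "frame"; keep only the central 2x2 region.
frame = [[0, 1, 2, 3],
         [4, 5, 6, 7],
         [8, 9, 10, 11],
         [12, 13, 14, 15]]
small = crop_and_resize(frame, top=1, left=1, height=2, width=2, out_size=2)
```

The same function can also upscale a crop, since the nearest-neighbor index mapping works in either direction.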

### Setting Path to Data

By default, EchoNet-Dynamic assumes that a copy of the data is saved in a folder named `a4c-video-dir/` in this directory.
This path can be changed by creating a configuration file named `echonet.cfg` (an example configuration file is `example.cfg`).
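
For illustration, a minimal `echonet.cfg` pointing the code at the data directory might look like the following; this assumes the `DATA_DIR` key used in `example.cfg`, so check that file for the exact format:

    DATA_DIR = a4c-video-dir/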

### Running Code

EchoNet-Dynamic has three main components: segmenting the left ventricle, predicting ejection fraction from subsampled clips, and assessing cardiomyopathy with beat-by-beat predictions.
Each of these components can be run with reasonable choices of hyperparameters using the scripts below.
We describe our full hyperparameter sweep in the next section.

#### Frame-by-frame Semantic Segmentation of the Left Ventricle

    echonet segmentation --save_video

This creates a directory named `output/segmentation/deeplabv3_resnet50_random/`, which will contain
  - log.csv: training and validation losses
  - best.pt: checkpoint of weights for the model with the lowest validation loss
  - size.csv: estimated size of the left ventricle for each frame and an indicator for the beginning of each beat
  - videos: directory containing videos with segmentation overlay

#### Prediction of Ejection Fraction from Subsampled Clips

    echonet video

This creates a directory named `output/video/r2plus1d_18_32_2_pretrained/`, which will contain
  - log.csv: training and validation losses
  - best.pt: checkpoint of weights for the model with the lowest validation loss
  - test_predictions.csv: ejection fraction predictions for subsampled clips

#### Beat-by-beat Prediction of Ejection Fraction from Full Video and Assessment of Cardiomyopathy

The final beat-by-beat prediction and analysis is performed with `scripts/beat_analysis.R`.
This script combines the segmentation output in `size.csv` with the clip-level ejection fraction predictions in `test_predictions.csv`. The beginning of each systolic phase is detected with the peak detection algorithm from scipy (`scipy.signal.find_peaks`), and a video clip centered on each beat is used for the beat-by-beat prediction.
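
As a rough illustration of the beat-detection step, the sketch below finds local maxima in a synthetic per-frame left-ventricle size trace; the size is largest at end-diastole, which marks the beginning of each systolic phase. This toy function is a simplified stand-in for `scipy.signal.find_peaks`, which the actual script uses, and the size values are made up:

```python
def find_peaks(sizes, min_distance=1):
    """Toy stand-in for scipy.signal.find_peaks: return indices of strict
    local maxima, keeping peaks at least min_distance frames apart."""
    peaks = []
    for i in range(1, len(sizes) - 1):
        if sizes[i - 1] < sizes[i] and sizes[i] > sizes[i + 1]:
            if not peaks or i - peaks[-1] >= min_distance:
                peaks.append(i)
    return peaks

# Synthetic LV size trace covering two beats.
sizes = [40, 55, 70, 60, 45, 35, 50, 68, 58, 44]
beat_starts = find_peaks(sizes, min_distance=3)  # frames of end-diastole
```

Each detected index would then anchor a clip of surrounding frames for one beat-level ejection fraction prediction.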

### Hyperparameter Sweeps

The full set of hyperparameter sweeps from the paper can be run via `run_experiments.sh`.
In particular, we sweep over the weight initialization (pretrained or random), the model (`r2plus1d_18`, `r3d_18`, or `mc3_18`), the length of the video clip (1, 4, 8, 16, 32, 64, or 96 frames), and the sampling period (1, 2, 4, 6, or 8 frames).
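
Conceptually, the sweep enumerates the Cartesian product of these options (`run_experiments.sh` is the authoritative list of runs; this is just a sketch of the grid):

```python
from itertools import product

# Sweep dimensions as described above.
pretrained_options = [True, False]
models = ["r2plus1d_18", "r3d_18", "mc3_18"]
clip_lengths = [1, 4, 8, 16, 32, 64, 96]
periods = [1, 2, 4, 6, 8]

runs = list(product(pretrained_options, models, clip_lengths, periods))
print(len(runs))  # 2 * 3 * 7 * 5 = 210 configurations
```

The output directory names encode one point of this grid; for example, `r2plus1d_18_32_2_pretrained` corresponds to the `r2plus1d_18` model with 32-frame clips, a sampling period of 2, and pretrained weights.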