# [DeepDOF: Deep learning extended depth-of-field microscope for fast and slide-free histology](https://www.pnas.org/content/117/52/33051)
Lingbo Jin<sup>1</sup>, Yubo Tang<sup>1</sup>, Yicheng Wu, Jackson B. Coole, Melody T. Tan, Xuan Zhao, Hawraa Badaoui, Jacob T. Robinson, Michelle D. Williams, Ann M. Gillenwater, Rebecca R. Richards-Kortum, and Ashok Veeraraghavan

<sup>1</sup> equal contribution

Reference GitHub repository for the paper [Deep learning extended depth-of-field microscope](https://www.pnas.org/content/117/52/33051), Proceedings of the National Academy of Sciences 117.52 (2020). If you use our dataset or code, please cite our paper:

    @article{jin2020deep,
      title={Deep learning extended depth-of-field microscope for fast and slide-free histology},
      author={Jin, Lingbo and Tang, Yubo and Wu, Yicheng and Coole, Jackson B and Tan, Melody T and Zhao, Xuan and Badaoui, Hawraa and Robinson, Jacob T and Williams, Michelle D and Gillenwater, Ann M and others},
      journal={Proceedings of the National Academy of Sciences},
      volume={117},
      number={52},
      pages={33051--33060},
      year={2020},
      publisher={National Acad Sciences}
    }

## Dataset
The dataset can be downloaded here: [the training, validation, and testing dataset used in the manuscript](https://zenodo.org/record/3922596).

The dataset contains:
- 600 microscopic fluorescence images of proflavine-stained oral cancer resections (10×/0.25-NA, manual refocusing)
- 600 histopathology images of healthy and cancerous tissue of human brain, lungs, mouth, colon, cervix, and breast from The Cancer Genome Atlas (TCGA) cancer FFPE slides
- 600 images from the INRIA Holidays dataset

In total, the dataset contains 1,800 images (each 1,000 × 1,000 pixels, grayscale).

The 1,800 images were randomly assigned to training, validation, and testing sets containing 1,500, 150, and 150 images, respectively.
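
A minimal sketch of how such a random 1,500/150/150 split could be reproduced in Python is shown below; the folder name, file extension, and fixed seed are assumptions, not part of the released dataset:

    # Randomly assign 1,800 images to training / validation / testing splits.
    import random
    from pathlib import Path

    image_paths = sorted(Path("dataset").glob("*.png"))  # hypothetical folder with 1,800 images
    random.seed(0)               # arbitrary seed, only for reproducibility
    random.shuffle(image_paths)

    train = image_paths[:1500]
    val = image_paths[1500:1650]
    test = image_paths[1650:]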
## Code
### dependencies
Required packages and versions are listed in deepDOF.yml, which can also be used to create a conda environment.
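For example, assuming deepDOF.yml defines an environment name, the environment can be created with `conda env create -f deepDOF.yml` and then activated with `conda activate <env-name>`.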
### training
We use a two-step training process. Step 1 (DeepDOF_step1.py) keeps the optical layer fixed and trains only the U-Net. Step 2 (DeepDOF_step2.py) jointly optimizes both the optical layer and the U-Net. A minimal sketch of this schedule is shown below.
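
The sketch is illustrative only (it is not the released training scripts); the layer names "optical_layer" and "unet", the stand-in Conv2D layers, and the mean-squared-error loss are assumptions:

    # Illustrative two-step schedule: freeze the optical layer first, then train jointly.
    import tensorflow as tf

    def build_model():
        inp = tf.keras.Input(shape=(1000, 1000, 1))
        # Stand-in for the learnable phase-mask / optical layer.
        x = tf.keras.layers.Conv2D(1, 3, padding="same", name="optical_layer")(inp)
        # Stand-in for the U-Net reconstruction network.
        out = tf.keras.layers.Conv2D(1, 3, padding="same", name="unet")(x)
        return tf.keras.Model(inp, out)

    model = build_model()

    # Step 1 (cf. DeepDOF_step1.py): optical layer fixed, only the U-Net is trained.
    model.get_layer("optical_layer").trainable = False
    model.compile(optimizer="adam", loss="mse")
    # model.fit(sensor_images, all_in_focus_targets, epochs=...)

    # Step 2 (cf. DeepDOF_step2.py): optical layer and U-Net are optimized jointly.
    model.get_layer("optical_layer").trainable = True
    model.compile(optimizer="adam", loss="mse")
    # model.fit(sensor_images, all_in_focus_targets, epochs=...)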
### testing
To test the trained network on an image, use test_image_all_720um.py.
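
A minimal sketch of such a test is shown below; it is not the actual test_image_all_720um.py, and the checkpoint and image paths are hypothetical:

    # Apply a trained reconstruction network to a single grayscale capture.
    import numpy as np
    import tensorflow as tf
    from PIL import Image

    # Hypothetical path to a trained model saved in Keras format.
    model = tf.keras.models.load_model("checkpoints/deepdof_unet.h5", compile=False)

    # Hypothetical 1,000 x 1,000 grayscale image captured through the phase mask.
    img = np.asarray(Image.open("example_capture.png").convert("L"), dtype=np.float32) / 255.0
    img = img[None, ..., None]  # add batch and channel dimensions

    # Reconstruct the extended depth-of-field image and save it.
    recon = model.predict(img)[0, ..., 0]
    Image.fromarray(np.clip(recon * 255.0, 0, 255).astype(np.uint8)).save("example_edof.png")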
## Reference
Wu, Yicheng, et al. "PhaseCam3D—learning phase masks for passive single view depth estimation." 2019 IEEE International Conference on Computational Photography (ICCP). IEEE, 2019.