# nnU-Net

In 3D biomedical image segmentation, dataset properties like imaging modality, image sizes, voxel spacings, class
ratios etc. vary drastically.
For example, images in the [Liver and Liver Tumor Segmentation Challenge dataset](https://competitions.codalab.org/competitions/17094)
are computed tomography (CT) scans, about 512x512x512 voxels large, have isotropic voxel spacings and their
intensity values are quantitative (Hounsfield Units).
The [Automated Cardiac Diagnosis Challenge dataset](https://acdc.creatis.insa-lyon.fr/) on the other hand shows cardiac
structures in cine MRI with a typical image shape of 10x320x320 voxels, highly anisotropic voxel spacings and
qualitative intensity values. In addition, the ACDC dataset suffers from slice misalignments and a heterogeneity of
out-of-plane spacings which can cause severe interpolation artifacts if not handled properly.

In current research practice, segmentation pipelines are designed manually and with one specific dataset in mind.
In this process, many pipeline settings depend directly or indirectly on the properties of the dataset
and display a complex co-dependence: image size, for example, affects the patch size, which in
turn affects the required receptive field of the network, a factor that itself influences several other
hyperparameters in the pipeline. As a result, pipelines that were developed on one (type of) dataset are inherently
incompatible with other datasets in the domain.

**nnU-Net is the first segmentation method that is designed to deal with the dataset diversity found in the domain. It
condenses and automates the key decisions for designing a successful segmentation pipeline for any given dataset.**

nnU-Net makes the following contributions to the field:

1. **Standardized baseline:** nnU-Net is the first standardized deep learning benchmark in biomedical segmentation.
Without manual effort, researchers can compare their algorithms against nnU-Net on an arbitrary number of datasets
to provide meaningful evidence for proposed improvements.
2. **Out-of-the-box segmentation method:** nnU-Net is the first plug-and-play tool for state-of-the-art biomedical
segmentation. Inexperienced users can use nnU-Net out of the box for their custom 3D segmentation problem without the
need for manual intervention.
3. **Framework:** nnU-Net is a framework for fast and effective development of segmentation methods. Due to its modular
structure, new architectures and methods can easily be integrated into nnU-Net. Researchers can then benefit from its
generic nature to roll out and evaluate their modifications on an arbitrary number of datasets in a
standardized environment.

For more information about nnU-Net, please read the following paper:

    Isensee, F., Jaeger, P. F., Kohl, S. A., Petersen, J., & Maier-Hein, K. H. (2020). nnU-Net: a self-configuring method
    for deep learning-based biomedical image segmentation. Nature Methods, 1-9.

Please also cite this paper if you are using nnU-Net for your research!

# Table of Contents
- [Installation](#installation)
- [Usage](#usage)
  * [How to run nnU-Net on a new dataset](#how-to-run-nnu-net-on-a-new-dataset)
    + [Dataset conversion](#dataset-conversion)
    + [Experiment planning and preprocessing](#experiment-planning-and-preprocessing)
    + [Model training](#model-training)
      - [2D U-Net](#2d-u-net)
      - [3D full resolution U-Net](#3d-full-resolution-u-net)
      - [3D U-Net cascade](#3d-u-net-cascade)
        * [3D low resolution U-Net](#3d-low-resolution-u-net)
        * [3D full resolution U-Net](#3d-full-resolution-u-net-1)
      - [Multi GPU training](#multi-gpu-training)
    + [Identifying the best U-Net configuration](#identifying-the-best-u-net-configuration)
    + [Run inference](#run-inference)
  * [How to run inference with pretrained models](#how-to-run-inference-with-pretrained-models)
  * [Examples](#examples)
- [Extending/Changing nnU-Net](#extending-or-changing-nnu-net)
- [Information on run time and potential performance bottlenecks.](#information-on-run-time-and-potential-performance-bottlenecks)
- [Common questions and issues](#common-questions-and-issues)

# Installation
nnU-Net has been tested on Linux (Ubuntu 16, 18 and 20; CentOS, RHEL). We do not provide support for other operating
systems.

nnU-Net requires a GPU! For inference, the GPU should have at least 4 GB of VRAM. For training nnU-Net models, the GPU should have at
least 10 GB (popular non-datacenter options are the RTX 2080ti, RTX 3080 or RTX 3090). Due to the use of automated mixed
precision, the fastest training times are achieved with the Volta architecture (Titan V, V100 GPUs) when installing pytorch
the easy way. Since pytorch comes with cuDNN 7.6.5, and tensor core acceleration on Turing GPUs is not supported for 3D
convolutions in this version, you will not get the best training speeds on Turing GPUs. You can remedy that by compiling pytorch from source
(see [here](https://github.com/pytorch/pytorch#from-source)) using cuDNN 8.0.2 or newer. This will unlock Turing GPUs
(RTX 2080ti, RTX 6000) for automated mixed precision training with 3D convolutions and make training blisteringly
fast as well. Note that future versions of pytorch may include cuDNN 8.0.2 or newer by default, in which case
compiling from source will no longer be necessary.
We don't know the speed of Ampere GPUs with vanilla vs self-compiled pytorch yet - this section will be updated as
soon as we know.

For training, we recommend a strong CPU to go along with the GPU. At least 6 CPU cores (12 threads) are recommended. CPU
requirements are mostly related to data augmentation and scale with the number of input channels. They are thus higher
for datasets like BraTS which use 4 image modalities and lower for datasets like LiTS which only uses CT images.

We very strongly recommend you install nnU-Net in a virtual environment.
[Here is a quick how-to for Ubuntu.](https://linoxide.com/linux-how-to/setup-python-virtual-environment-ubuntu/)
If you choose to compile pytorch from source, you will need to use conda instead of pip. In that case, please set the
environment variable OMP_NUM_THREADS=1 (preferably in your .bashrc using `export OMP_NUM_THREADS=1`). This is important!
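
A minimal sketch of such a setup (assuming a plain pip-based environment; use conda instead if you compile pytorch from source, and note that the environment name and location here are arbitrary):

```bash
# create and activate a virtual environment (name and location are placeholders)
python3 -m venv ~/venvs/nnunet
source ~/venvs/nnunet/bin/activate
# required by the authors when compiling pytorch from source; harmless otherwise
export OMP_NUM_THREADS=1
```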

Python 2 is deprecated and not supported. Please make sure you are using Python 3.

1) Install [PyTorch](https://pytorch.org/get-started/locally/). You need at least version 1.6.
2) Install nnU-Net depending on your use case:
    1) For use as **standardized baseline**, **out-of-the-box segmentation algorithm** or for running **inference with pretrained models**:

        ```bash
        pip install nnunet
        ```

    2) For use as integrative **framework** (this will create a copy of the nnU-Net code on your computer so that you can modify it as needed):

        ```bash
        git clone https://github.com/MIC-DKFZ/nnUNet.git
        cd nnUNet
        pip install -e .
        ```

3) nnU-Net needs to know where you intend to save raw data, preprocessed data and trained models. For this you need to
set a few environment variables. Please follow the instructions [here](documentation/setting_up_paths.md). A short
example is given after this list.
4) (OPTIONAL) Install [hiddenlayer](https://github.com/waleedka/hiddenlayer). hiddenlayer enables nnU-Net to generate
plots of the network topologies it creates (see [Model training](#model-training)). To install hiddenlayer,
run the following command:

    ```bash
    pip install --upgrade git+https://github.com/FabianIsensee/hiddenlayer.git@more_plotted_details#egg=hiddenlayer
    ```
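
For step 3, a minimal sketch of what setting these paths could look like (the variable names below are the folder names used throughout this README; the locations are placeholders, and `documentation/setting_up_paths.md` remains the authoritative reference):

```bash
# placeholder locations - pick any folders that suit your system
export nnUNet_raw_data_base="/path/to/nnUNet_raw_data_base"
export nnUNet_preprocessed="/path/to/nnUNet_preprocessed"
export RESULTS_FOLDER="/path/to/nnUNet_trained_models"
```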

Installing nnU-Net will add several new commands to your terminal. These commands are used to run the entire nnU-Net
pipeline. You can execute them from any location on your system. All nnU-Net commands have the prefix `nnUNet_` for
easy identification.

Note that these commands simply execute python scripts. If you installed nnU-Net in a virtual environment, this
environment must be activated when executing the commands.

All nnU-Net commands have a `-h` option which gives information on how to use them.

A typical installation of nnU-Net can be completed in less than 5 minutes. If pytorch needs to be compiled from source
(which is what we currently recommend when using Turing GPUs), this can extend to more than an hour.

# Usage
To familiarize yourself with nnU-Net we recommend you have a look at the [Examples](#examples) before you start with
your own dataset.

## How to run nnU-Net on a new dataset
Given some dataset, nnU-Net fully automatically configures an entire segmentation pipeline that matches its properties.
nnU-Net covers the entire pipeline, from preprocessing to model configuration, model training, postprocessing
all the way to ensembling. After running nnU-Net, the trained model(s) can be applied to the test cases for inference.

### Dataset conversion
nnU-Net expects datasets in a structured format. This format closely (but not entirely) follows the data structure of
the [Medical Segmentation Decathlon](http://medicaldecathlon.com/). Please read
[this](documentation/dataset_conversion.md) for information on how to convert datasets to be compatible with nnU-Net.

### Experiment planning and preprocessing
As a first step, nnU-Net extracts a dataset fingerprint (a set of dataset-specific properties such as
image sizes, voxel spacings, intensity information etc). This information is used to create three U-Net configurations:
a 2D U-Net, a 3D U-Net that operates on full resolution images, as well as a 3D U-Net cascade where the first U-Net
creates a coarse segmentation map on downsampled images which is then refined by the second U-Net.

Provided that the requested raw dataset is located in the correct folder (`nnUNet_raw_data_base/nnUNet_raw_data/TaskXXX_MYTASK`,
also see [here](documentation/dataset_conversion.md)), you can run this step with the following command:

```bash
nnUNet_plan_and_preprocess -t XXX --verify_dataset_integrity
```

`XXX` is the integer identifier associated with your Task name `TaskXXX_MYTASK`. You can pass several task IDs at once.
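
For example, a single call covering three tasks could look like the following sketch (the task IDs here are placeholders):

```bash
nnUNet_plan_and_preprocess -t 1 2 3 --verify_dataset_integrity
```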

Running `nnUNet_plan_and_preprocess` will populate your folder with preprocessed data. You will find the output in
`nnUNet_preprocessed/TaskXXX_MYTASK`. `nnUNet_plan_and_preprocess` creates subfolders with preprocessed data for the 2D
U-Net as well as all applicable 3D U-Nets. It will also create 'plans' files (with the ending .pkl) for the 2D and
3D configurations. These files contain the generated segmentation pipeline configuration and will be read by the
nnUNetTrainer (see below). Note that the preprocessed data folder only contains the training cases.
The test images are not preprocessed (they are not looked at at all!). Their preprocessing happens on the fly during
inference.

`--verify_dataset_integrity` should be run at least the first time the command is run on a given dataset. This will execute some
checks on the dataset to ensure that it is compatible with nnU-Net. If this check has passed once, it can be
omitted in future runs. If you adhere to the dataset conversion guide (see above) then this should pass without issues :-)

Note that `nnUNet_plan_and_preprocess` accepts several additional input arguments. Running `-h` will list all of them
along with a description. If you run out of RAM during preprocessing, you may want to adapt the number of processes
used with the `-tl` and `-tf` options.
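
For instance, a low-memory run could reduce both to a single process; this is only a sketch, so please consult `nnUNet_plan_and_preprocess -h` for the exact meaning of the two flags:

```bash
nnUNet_plan_and_preprocess -t XXX -tl 1 -tf 1
```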

After `nnUNet_plan_and_preprocess` is completed, the U-Net configurations have been created and a preprocessed copy
of the data will be located at `nnUNet_preprocessed/TaskXXX_MYTASK`.

Extraction of the dataset fingerprint can take from a couple of seconds to several minutes depending on the properties
of the segmentation task. Pipeline configuration given the extracted fingerprint is nearly instantaneous (a couple
of seconds). Preprocessing depends on image size and how powerful the CPU is. It can take between seconds and several
tens of minutes.

### Model training
nnU-Net trains all U-Net configurations in a 5-fold cross-validation. This enables nnU-Net to determine the
postprocessing and ensembling (see next step) on the training dataset. By default, all U-Net configurations need to
be run on a given dataset. There are, however, situations in which only some configurations (and maybe even without
running the cross-validation) are desired. See the [FAQ](documentation/common_questions.md) for more information.

Note that not all U-Net configurations are created for all datasets. In datasets with small image sizes, the U-Net
cascade is omitted because the patch size of the full resolution U-Net already covers a large part of the input images.

Training models is done with the `nnUNet_train` command. The general structure of the command is:
```bash
nnUNet_train CONFIGURATION TRAINER_CLASS_NAME TASK_NAME_OR_ID FOLD --npz (additional options)
```

CONFIGURATION is a string that identifies the requested U-Net configuration. TRAINER_CLASS_NAME is the name of the
model trainer. If you implement custom trainers (nnU-Net as a framework) you can specify your custom trainer here.
TASK_NAME_OR_ID specifies which dataset should be trained on and FOLD specifies which fold of the 5-fold cross-validation
is trained.

nnU-Net stores a checkpoint every 50 epochs. If you need to continue a previous training, just add `-c` to the
training command.
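
For example, resuming an interrupted 3D full resolution training of some fold FOLD would look like this (the same command as before, plus `-c`):

```bash
nnUNet_train 3d_fullres nnUNetTrainerV2 TaskXXX_MYTASK FOLD --npz -c
```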

IMPORTANT: `--npz` makes the models save the softmax outputs during the final validation. It should only be used for trainings
where you plan to run `nnUNet_find_best_configuration` afterwards
(this is nnU-Net's automated selection of the best performing (ensemble of) configuration(s), see below). If you are developing new
trainer classes you may not need the softmax predictions and should therefore omit the `--npz` flag. Exported softmax
predictions are very large and can therefore take up a lot of disk space.
If you initially ran without the `--npz` flag but now require the softmax predictions, simply run
```bash
nnUNet_train CONFIGURATION TRAINER_CLASS_NAME TASK_NAME_OR_ID FOLD -val --npz
```
to generate them. This will only rerun the validation, not the training.

See `nnUNet_train -h` for additional options.

#### 2D U-Net
For FOLD in [0, 1, 2, 3, 4], run:
```bash
nnUNet_train 2d nnUNetTrainerV2 TaskXXX_MYTASK FOLD --npz
```
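
Instead of typing this five times, the folds can of course be looped over in the shell; a minimal bash sketch that trains them sequentially:

```bash
# train all five folds of the 2D U-Net one after another
for FOLD in 0 1 2 3 4; do
    nnUNet_train 2d nnUNetTrainerV2 TaskXXX_MYTASK $FOLD --npz
done
```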

#### 3D full resolution U-Net
For FOLD in [0, 1, 2, 3, 4], run:
```bash
nnUNet_train 3d_fullres nnUNetTrainerV2 TaskXXX_MYTASK FOLD --npz
```

#### 3D U-Net cascade
##### 3D low resolution U-Net
For FOLD in [0, 1, 2, 3, 4], run:
```bash
nnUNet_train 3d_lowres nnUNetTrainerV2 TaskXXX_MYTASK FOLD --npz
```

##### 3D full resolution U-Net
For FOLD in [0, 1, 2, 3, 4], run:
```bash
nnUNet_train 3d_cascade_fullres nnUNetTrainerV2CascadeFullRes TaskXXX_MYTASK FOLD --npz
```

Note that the 3D full resolution U-Net of the cascade requires the five folds of the low resolution U-Net to be
completed beforehand!

The trained models will be written to the `RESULTS_FOLDER/nnUNet` folder. Each training obtains an automatically generated
output folder name:

RESULTS_FOLDER/nnUNet/CONFIGURATION/TaskXXX_MYTASKNAME/TRAINER_CLASS_NAME__PLANS_FILE_NAME/FOLD

For Task002_Heart (from the MSD), for example, this looks like this:

    RESULTS_FOLDER/nnUNet/
    ├── 2d
    │   └── Task002_Heart
    │       └── nnUNetTrainerV2__nnUNetPlansv2.1
    │           ├── fold_0
    │           ├── fold_1
    │           ├── fold_2
    │           ├── fold_3
    │           └── fold_4
    ├── 3d_cascade_fullres
    ├── 3d_fullres
    │   └── Task002_Heart
    │       └── nnUNetTrainerV2__nnUNetPlansv2.1
    │           ├── fold_0
    │           │   ├── debug.json
    │           │   ├── model_best.model
    │           │   ├── model_best.model.pkl
    │           │   ├── model_final_checkpoint.model
    │           │   ├── model_final_checkpoint.model.pkl
    │           │   ├── network_architecture.pdf
    │           │   ├── progress.png
    │           │   └── validation_raw
    │           │       ├── la_007.nii.gz
    │           │       ├── la_007.pkl
    │           │       ├── la_016.nii.gz
    │           │       ├── la_016.pkl
    │           │       ├── la_021.nii.gz
    │           │       ├── la_021.pkl
    │           │       ├── la_024.nii.gz
    │           │       ├── la_024.pkl
    │           │       ├── summary.json
    │           │       └── validation_args.json
    │           ├── fold_1
    │           ├── fold_2
    │           ├── fold_3
    │           └── fold_4
    └── 3d_lowres

Note that 3d_lowres and 3d_cascade_fullres are not populated because this dataset did not trigger the cascade. In each
model training output folder (each of the fold_x folders, 10 in total here), the following files will be created (only
shown for one folder above for brevity):
- debug.json: Contains a summary of blueprint and inferred parameters used for training this model. Not easy to read,
but very useful for debugging ;-)
- model_best.model / model_best.model.pkl: checkpoint files of the best model identified during training. Not used right now.
- model_final_checkpoint.model / model_final_checkpoint.model.pkl: checkpoint files of the final model (after training
has ended). This is what is used for both validation and inference.
- network_architecture.pdf (only if hiddenlayer is installed!): a pdf document with a figure of the network architecture in it.
- progress.png: A plot of the training (blue) and validation (red) loss during training. Also shows an approximation of
the evaluation metric (green). This approximation is the average Dice score of the foreground classes. It should,
however, only be taken with a grain of salt because it is computed on randomly drawn patches from the validation
data at the end of each epoch, and the aggregation of TP, FP and FN for the Dice computation treats the patches as if
they all originate from the same volume ('global Dice', sketched after this list; we do not compute a Dice for each validation case and then
average over all cases but pretend that there is only one validation case from which we sample patches). The reason for
this is that the 'global Dice' is easy to compute during training and is still quite useful to evaluate whether a model
is training at all or not. A proper validation is run at the end of the training.
- validation_raw: in this folder are the predicted validation cases after the training has finished. The summary.json
contains the validation metrics (a mean over all cases is provided at the end of the file).
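
For reference, the 'global Dice' mentioned above is simply the standard Dice formula evaluated on pooled counts (a sketch of the computation, not the exact code):

    global Dice = 2 * TP / (2 * TP + FP + FN)

where TP, FP and FN are summed over all patches sampled from the validation data in that epoch.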

During training it is often useful to watch the progress. We therefore recommend that you have a look at the generated
progress.png when running the first training. It will be updated after each epoch.

Training times largely depend on the GPU. The smallest GPU we recommend for training is the Nvidia RTX 2080ti. With
this GPU (and pytorch compiled with cuDNN 8.0.2), all network trainings take less than 2 days.

#### Multi GPU training

**Multi GPU training is experimental and NOT RECOMMENDED!**

nnU-Net supports two different multi-GPU implementations: DataParallel (DP) and Distributed Data Parallel (DDP)
(but currently only on one host!). DDP is faster than DP and should be preferred if possible. However, if you did not
install nnunet as a framework (meaning you used the `pip install nnunet` variant), DDP is not available. It requires a
different way of calling the correct python script (see below) which we cannot support from our terminal commands.

Distributed training currently only works for the basic trainers (2D, 3D full resolution and 3D low resolution) and not
for the second, high resolution U-Net of the cascade. The reason for this is that distributed training requires some
changes to the network and loss function, requiring a new nnU-Net trainer class. This is, as of now, simply not
implemented for the cascade, but may be added in the future.

To run DP training, use the following command:

```bash
CUDA_VISIBLE_DEVICES=0,1,2... nnUNet_train_DP CONFIGURATION nnUNetTrainerV2_DP TASK_NAME_OR_ID FOLD -gpus GPUS --dbs
```

Note that nnUNetTrainerV2 was replaced with nnUNetTrainerV2_DP. Just like before, CONFIGURATION can be 2d, 3d_lowres or
3d_fullres. TASK_NAME_OR_ID refers to the task you would like to train and FOLD is the fold of the cross-validation.
GPUS (integer value) specifies the number of GPUs you wish to train on. To specify which GPUs you want to use, please make use of the
CUDA_VISIBLE_DEVICES environment variable to specify the GPU ids (specify as many as you configure with -gpus GPUS).
`--dbs`, if set, will distribute the batch size across GPUs. So if nnU-Net configures a batch size of 2 and you run on 2 GPUs,
each GPU will run with a batch size of 1. If you omit `--dbs`, each GPU will run with the full batch size (2 for each GPU
in this example, for a total batch size of 4).
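
Putting this together, a concrete two-GPU DP training of fold 0 could look like the following sketch (task, configuration and GPU ids are placeholders to adapt to your setup):

```bash
CUDA_VISIBLE_DEVICES=0,1 nnUNet_train_DP 3d_fullres nnUNetTrainerV2_DP TaskXXX_MYTASK 0 -gpus 2 --dbs
```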

To run DDP training you must have nnU-Net installed as a framework. Your current working directory must be the
nnunet folder (the one that has the dataset_conversion, evaluation, experiment_planning, ... subfolders!). You can then run
the DDP training with the following command:

```bash
CUDA_VISIBLE_DEVICES=0,1,2... python -m torch.distributed.launch --master_port=XXXX --nproc_per_node=Y run/run_training_DDP.py CONFIGURATION nnUNetTrainerV2_DDP TASK_NAME_OR_ID FOLD --dbs
```

XXXX must be an open port for process-to-process communication (something like 4321 will do on most systems). Y is the
number of GPUs you wish to use. Remember that we do not (yet) support distributed training across compute nodes. This
all happens on the same system. Again, you can use CUDA_VISIBLE_DEVICES=0,1,2 to control which GPUs are used.
If you run more than one DDP training on the same system (say you have 4 GPUs and you run two trainings with 2 GPUs each)
you need to specify a different --master_port for each training!
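
As a concrete sketch (placeholders as above; port 4321 is just the example value mentioned in the text), a two-GPU DDP training of fold 0 launched from the nnunet folder could look like this:

```bash
CUDA_VISIBLE_DEVICES=0,1 python -m torch.distributed.launch --master_port=4321 --nproc_per_node=2 run/run_training_DDP.py 3d_fullres nnUNetTrainerV2_DDP TaskXXX_MYTASK 0 --dbs
```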

*IMPORTANT!*
Multi-GPU training results in models that cannot be used for inference easily (as said above, all of this is experimental ;-) ).
After finishing the training of all folds, run `nnUNet_change_trainer_class` on the folder where the trained model is
(see `nnUNet_change_trainer_class -h` for instructions). After that you can run inference.

### Identifying the best U-Net configuration
Once all models are trained, use the following
command to automatically determine what U-Net configuration(s) to use for test set prediction:

```bash
nnUNet_find_best_configuration -m 2d 3d_fullres 3d_lowres 3d_cascade_fullres -t XXX --strict
```

(all 5 folds need to be completed for all specified configurations!)

On datasets for which the cascade was not configured, use `-m 2d 3d_fullres` instead. If you wish to only explore some
subset of the configurations, you can specify that with the `-m` option. We recommend setting the `--strict` flag
(crash if one of the requested configurations is missing). Additional options are available (use `-h` for help).

### Run inference
Remember that the data located in the input folder must adhere to the format specified
[here](documentation/data_format_inference.md).

`nnUNet_find_best_configuration` will print a string to the terminal with the inference commands you need to use.
The easiest way to run inference is to simply use these commands.

If you wish to manually specify the configuration(s) used for inference, use the following commands:

For each of the desired configurations, run:
```bash
nnUNet_predict -i INPUT_FOLDER -o OUTPUT_FOLDER -t TASK_NAME_OR_ID -m CONFIGURATION --save_npz
```

Only specify `--save_npz` if you intend to use ensembling. `--save_npz` will make the command save the softmax
probabilities alongside the predicted segmentation masks, requiring a lot of disk space.

Please select a separate `OUTPUT_FOLDER` for each configuration!

If you wish to run ensembling, you can ensemble the predictions from several configurations with the following command:
```bash
nnUNet_ensemble -f FOLDER1 FOLDER2 ... -o OUTPUT_FOLDER -pp POSTPROCESSING_FILE
```

You can specify an arbitrary number of folders, but remember that each folder needs to contain npz files that were
generated by `nnUNet_predict`. For ensembling you can also specify a file that tells the command how to postprocess.
These files are created when running `nnUNet_find_best_configuration` and are located in the respective trained model
directory (RESULTS_FOLDER/nnUNet/CONFIGURATION/TaskXXX_MYTASK/TRAINER_CLASS_NAME__PLANS_FILE_IDENTIFIER/postprocessing.json or
RESULTS_FOLDER/nnUNet/ensembles/TaskXXX_MYTASK/ensemble_X__Y__Z--X__Y__Z/postprocessing.json). You can also choose to
not provide a file (simply omit -pp) and nnU-Net will not run postprocessing.
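
As an illustration (the folder names below are placeholders and the postprocessing file path follows the ensemble pattern given above), ensembling a 2D and a 3D full resolution prediction could look like:

```bash
nnUNet_ensemble -f OUTPUT_FOLDER_2D OUTPUT_FOLDER_3D_FULLRES -o OUTPUT_FOLDER_ENSEMBLE \
    -pp RESULTS_FOLDER/nnUNet/ensembles/TaskXXX_MYTASK/ensemble_X__Y__Z--X__Y__Z/postprocessing.json
```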

Note that, by default, inference will be done with all available folds. We very strongly recommend you use all 5 folds.
Thus, all 5 folds must have been trained prior to running inference. The list of available folds nnU-Net found will be
printed at the start of the inference.

## How to run inference with pretrained models

Trained models for all challenges we participated in are publicly available. They can be downloaded and installed
directly with nnU-Net. Note that downloading a pretrained model will overwrite other models that were trained with
exactly the same configuration (2d, 3d_fullres, ...), trainer (nnUNetTrainerV2) and plans.

To obtain a list of available models, as well as a short description, run

```bash
nnUNet_print_available_pretrained_models
```

You can then download models by specifying their task name. For the Liver and Liver Tumor Segmentation Challenge,
for example, this would be:

```bash
nnUNet_download_pretrained_model Task029_LiTS
```
After downloading is complete, you can use this model to run [inference](#run-inference). Keep in mind that each of
these models has specific data requirements (Task029_LiTS runs on abdominal CT scans, others require several image
modalities as input in a specific order).
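
Running inference with the downloaded model then follows the same `nnUNet_predict` pattern as above; a sketch (folders are placeholders, and 3d_fullres is just one possible configuration for this task):

```bash
nnUNet_predict -i INPUT_FOLDER -o OUTPUT_FOLDER -t Task029_LiTS -m 3d_fullres
```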

When using the pretrained models you must adhere to the license of the dataset they are trained on! If you run
`nnUNet_download_pretrained_model` you will be shown a link to the license of each dataset.

## Examples

To get you started we compiled two easy-to-follow examples:
- run a training with the 3D full resolution U-Net on the Hippocampus dataset. See [here](documentation/training_example_Hippocampus.md).
- run inference with nnU-Net's pretrained models on the Prostate dataset. See [here](documentation/inference_example_Prostate.md).

Usability not good enough? Let us know!

# Extending or Changing nnU-Net
Please refer to [this](documentation/extending_nnunet.md) guide.

# Information on run time and potential performance bottlenecks.

We have compiled a list of expected epoch times on standardized datasets across many different GPUs. You can use them
to verify that your system is performing as expected. There are also tips on how to identify bottlenecks and what
to do about them.

Click [here](documentation/expected_epoch_times.md).

# Common questions and issues

We have collected solutions to common [questions](documentation/common_questions.md) and
[problems](documentation/common_problems_and_solutions.md). Please consult these documents before you open a new issue.

<img src="HIP_Logo.png" width="512px" />