.. _evaluation:

Evaluation
==========

Slideflow includes several tools for evaluating trained models. In the next sections, we'll review how to evaluate a model on a held-out test set, generate predictions without ground-truth labels, and visualize predictions with heatmaps.

Evaluating a test set
*********************

The :meth:`slideflow.Project.evaluate` method provides an easy interface for evaluating model performance on a held-out test set. Locate the saved model to evaluate (which will be in the project ``models/`` folder). :ref:`As with training <training_with_project>`, the dataset to evaluate can be specified using either the ``filters`` or ``dataset`` arguments. If neither is provided, all slides in the project will be evaluated.

.. code-block:: python

    # Method 1: specifying filters
    P.evaluate(
        model="/path/to/trained_model_epoch1",
        outcomes="tumor_type",
        filters={"dataset": ["test"]}
    )

    # Method 2: specifying a dataset
    dataset = P.dataset(tile_px=299, tile_um='10x')
    test_dataset = dataset.filter({"dataset": ["test"]})
    P.evaluate(
        model="/path/to/trained_model_epoch1",
        outcomes="tumor_type",
        dataset=test_dataset
    )

Results are returned from ``Project.evaluate()`` as a dictionary and saved in the project evaluation directory. Tile-, slide-, and patient-level predictions are also saved in the corresponding project evaluation folder, ``eval/``.
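
The returned metrics can also be captured and inspected programmatically. This is a minimal sketch reusing the ``test_dataset`` defined above; the exact keys in the dictionary depend on the model type and outcomes.

.. code-block:: python

    # Capture evaluation metrics rather than relying only on the saved files.
    # The exact metric keys depend on the model type and outcomes.
    results = P.evaluate(
        model="/path/to/trained_model_epoch1",
        outcomes="tumor_type",
        dataset=test_dataset
    )
    for metric, value in results.items():
        print(f"{metric}: {value}")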
|
|
Generating predictions
**********************

For a dataset
-------------

:meth:`slideflow.Project.predict` provides an interface for generating model predictions on an entire dataset. As above, locate the saved model from which to generate predictions, and specify the dataset with either the ``filters`` or ``dataset`` argument.

.. code-block:: python

    dfs = P.predict(
        model="/path/to/trained_model_epoch1",
        filters={"dataset": ["test"]}
    )
    print(dfs['patient'])

.. rst-class:: sphx-glr-script-out

.. code-block:: none

                                patient  ...  cohort-y_pred1
    0    TCGA-05-4244-01Z-00-DX1...  ...        0.032608
    1    TCGA-05-4245-01Z-00-DX1...  ...        0.216634
    2    TCGA-05-4249-01Z-00-DX1...  ...        0.000858
    3    TCGA-05-4250-01Z-00-DX1...  ...        0.015915
    4    TCGA-05-4382-01Z-00-DX1...  ...        0.020700
    ..                          ...  ...             ...
    936  TCGA-O2-A52S-01Z-00-DX1...  ...        0.983500
    937  TCGA-O2-A52V-01Z-00-DX1...  ...        0.773328
    938  TCGA-O2-A52W-01Z-00-DX1...  ...        0.858558
    939  TCGA-S2-AA1A-01Z-00-DX1...  ...        0.000212
    940  TCGA-XC-AA0X-01Z-00-DX1...  ...        0.632612

Results are returned as a dictionary of pandas DataFrames (with the keys ``'tile'``, ``'slide'``, and ``'patient'`` for each level of prediction) and saved in the project evaluation directory, ``eval/``.
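
Because these are standard pandas DataFrames, they can be filtered or exported like any other DataFrame. For example (the output filename below is illustrative):

.. code-block:: python

    # Inspect available prediction columns and export patient-level
    # predictions to CSV (the filename is illustrative).
    patient_df = dfs['patient']
    print(patient_df.columns.tolist())
    patient_df.to_csv('patient_predictions.csv', index=False)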
|
|
For a single slide
------------------

You can also generate predictions for a single slide with either :func:`slideflow.slide.predict` or :meth:`slideflow.WSI.predict`.

.. code-block:: python

    import slideflow as sf

    slide = '/path/to/slide.svs'
    model = '/path/to/model_epoch1'
    sf.slide.predict(slide, model)

.. rst-class:: sphx-glr-script-out

.. code-block:: none

    array([0.84378019, 0.15622007])

The returned array has the shape ``(num_classes,)``, indicating the whole-slide prediction for each outcome category. If the model was trained with uncertainty quantification, this function will return two arrays; the first with predictions, the second with estimated uncertainty.
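
For a UQ-enabled model, the two returned arrays can be unpacked separately. This is a brief sketch based on the behavior described above:

.. code-block:: python

    # Sketch: unpack predictions and estimated uncertainty from a model
    # trained with uncertainty quantification.
    preds, uncertainty = sf.slide.predict(slide, model)
    print(preds)        # per-class whole-slide predictions
    print(uncertainty)  # estimated uncertainty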
|
|
.. _generate_heatmaps:

Heatmaps
********

For a dataset
-------------

Predictive heatmaps can be created for an entire dataset using :meth:`slideflow.Project.generate_heatmaps`. Heatmaps will be saved and exported in the project directory. See the linked API documentation for arguments and customization.

.. code-block:: python

    P.generate_heatmaps(model="/path/to/trained_model_epoch1")

For a single slide
------------------

:class:`slideflow.Heatmap` provides more granular control for calculating and displaying a heatmap for a given slide. The required arguments are:

- ``slide``: Either a path to a slide, or a :class:`slideflow.WSI` object.
- ``model``: Path to a saved Slideflow model.

Additional keyword arguments can be used to customize and optimize the heatmap. In this example, we'll increase the batch size to 64 and allow multiprocessing by setting ``num_processes`` equal to our CPU core count, 16.

.. code-block:: python

    heatmap = sf.Heatmap(
        slide='/path/to/slide.svs',
        model='/path/to/model',
        batch_size=64,
        num_processes=16
    )

If ``slide`` is a :class:`slideflow.WSI`, the heatmap will be calculated only within non-masked areas and ROIs, if applicable.

.. code-block:: python

    from slideflow.slide import qc

    # Prepare the slide
    wsi = sf.WSI('slide.svs', tile_px=299, tile_um=302, rois='/path')
    wsi.qc([qc.Otsu(), qc.Gaussian()])

    # Generate a heatmap
    heatmap = sf.Heatmap(
        slide=wsi,
        model='/path/to/model',
        batch_size=64,
        num_processes=16
    )

If ``slide`` is a path to a slide, Regions of Interest can be provided through the optional ``roi_dir`` or ``rois`` arguments, as shown in the sketch below.
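
This sketch reuses the batch size and process count from above; the ROI path is a placeholder.

.. code-block:: python

    # Provide ROI annotations alongside a slide path
    # (the ROI path below is illustrative).
    heatmap = sf.Heatmap(
        slide='/path/to/slide.svs',
        model='/path/to/model',
        rois='/path/to/rois',
        batch_size=64,
        num_processes=16
    )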
|
|
Once generated, heatmaps can be rendered and displayed (e.g., in a Jupyter notebook) with :meth:`slideflow.Heatmap.plot`.

.. code-block:: python

    heatmap.plot(class_idx=0, cmap='inferno')

Insets showing zoomed-in portions of the heatmap can be added with :meth:`slideflow.Heatmap.add_inset`:

.. code-block:: python

    heatmap.add_inset(zoom=20, x=(10000, 10500), y=(2500, 3000), loc=1, axes=False)
    heatmap.add_inset(zoom=20, x=(12000, 12500), y=(7500, 8000), loc=3, axes=False)
    heatmap.plot(class_idx=0, mpp=1)

.. image:: heatmap_inset.jpg

|

Save rendered heatmaps for each outcome category with :meth:`slideflow.Heatmap.save`. The spatial map of predictions, as calculated across the input slide, can be accessed through ``Heatmap.predictions``. You can save the numpy array with calculated predictions (and uncertainty, if applicable) as an \*.npz file using :meth:`slideflow.Heatmap.save_npz`.
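
As a brief sketch (the destination paths below are placeholders, and both methods are assumed to accept the save location as their first argument):

.. code-block:: python

    # Save rendered heatmap images for each outcome category
    # (assumed to accept an output directory; path is illustrative).
    heatmap.save('/path/to/output_directory')

    # Inspect the raw spatial prediction array
    print(heatmap.predictions.shape)

    # Save predictions (and uncertainty, if applicable) as *.npz
    # (assumed to accept a destination filename; path is illustrative).
    heatmap.save_npz('/path/to/heatmap.npz')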