Evaluating Pre-trained Models on Task Datasets
###############################################
LAVIS provides pre-trained and finetuned models for off-the-shelf evaluation on task datasets.
Let's now see an example of evaluating a BLIP model on the captioning task, using the MSCOCO dataset.

.. _prep coco:

Preparing Datasets
******************
First, let's download the dataset. LAVIS provides `automatic downloading scripts` to help prepare
most of the public datasets it supports. To download the MSCOCO dataset, simply run:

.. code-block:: bash

    cd lavis/datasets/download_scripts && python download_coco.py

This will put the downloaded dataset at the default cache location ``cache`` used by LAVIS.
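
To double-check that the download succeeded, you can load the dataset through LAVIS's ``load_dataset`` helper. A minimal sketch, assuming ``coco_caption`` as the registered dataset name:

.. code-block:: python

    from lavis.datasets.builders import load_dataset

    # Loads the annotations from the default cache location populated above.
    coco_dataset = load_dataset("coco_caption")

    print(coco_dataset.keys())           # e.g. dict_keys(['train', 'val', 'test'])
    print(len(coco_dataset["train"]))    # number of training samples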

If you want to use a different cache location, you can specify it by updating ``cache_root`` in ``lavis/configs/default.yaml``.
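
For reference, the relevant entry looks roughly like the following; the exact surrounding keys may differ across LAVIS versions:

.. code-block:: yaml

    env:
      # Relative paths are resolved against the library root; use an
      # absolute path to store datasets elsewhere.
      cache_root: "cache"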

If you have a local copy of the dataset, it is recommended to create a symlink from the cache location to the local copy, e.g.

.. code-block:: bash

    ln -s /path/to/local/coco cache/coco

Evaluating Pre-trained Models
******************************

To evaluate a pre-trained model, simply run:

.. code-block:: bash

    bash run_scripts/blip/eval/eval_coco_cap.sh

Or, to evaluate the large variant of the model:

.. code-block:: bash

    bash run_scripts/blip/eval/eval_coco_cap_large.sh
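
Both scripts are thin wrappers around LAVIS's ``evaluate.py`` entry point. As a rough sketch of the kind of command such a script runs (the config path and the GPU count below are assumptions; check the script itself for the exact values):

.. code-block:: bash

    # Launch distributed evaluation with a task-specific config.
    python -m torch.distributed.run --nproc_per_node=8 evaluate.py \
        --cfg-path lavis/projects/blip/eval/caption_coco_eval.yaml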