# ClearML Integration

<img align="center" src="https://github.com/thepycoder/clearml_screenshots/raw/main/logos_dark.png#gh-light-mode-only" alt="Clear|ML"><img align="center" src="https://github.com/thepycoder/clearml_screenshots/raw/main/logos_light.png#gh-dark-mode-only" alt="Clear|ML">
## About ClearML

[ClearML](https://cutt.ly/yolov5-tutorial-clearml) is an [open-source](https://github.com/allegroai/clearml) toolbox designed to save you time ⏱️.

🔨 Track every YOLOv5 training run in the <b>experiment manager</b>

🔧 Version and easily access your custom training data with the integrated ClearML <b>Data Versioning Tool</b>

🔦 <b>Remotely train and monitor</b> your YOLOv5 training runs using ClearML Agent

🔬 Get the very best mAP using ClearML <b>Hyperparameter Optimization</b>

🔭 Turn your newly trained <b>YOLOv5 model into an API</b> with just a few commands using ClearML Serving

<br />

And so much more. It's up to you how many of these tools you want to use: you can stick to the experiment manager, or chain them all together into an impressive pipeline!
<br />
<br />

![ClearML scalars dashboard](https://github.com/thepycoder/clearml_screenshots/raw/main/experiment_manager_with_compare.gif)

<br />
<br />
## 🦾 Setting Things Up

To keep track of your experiments and/or data, ClearML needs to communicate with a server. You have two options to get one:

Either sign up for free to use the [ClearML Hosted Service](https://cutt.ly/yolov5-tutorial-clearml), or set up your own server ([see here](https://clear.ml/docs/latest/docs/deploying_clearml/clearml_server)). Even the server is open-source, so even if you're dealing with sensitive data, you should be good to go!

1. Install the `clearml` python package:

   ```bash
   pip install clearml
   ```

2. Connect the ClearML SDK to the server by [creating credentials](https://app.clear.ml/settings/workspace-configuration) (in the top right, go to Settings -> Workspace -> Create new credentials), then execute the command below and follow the instructions:

   ```bash
   clearml-init
   ```

That's it! You're done 😎

<br />
## 🚀 Training YOLOv5 With ClearML

To enable ClearML experiment tracking, simply install the ClearML pip package.

```bash
pip install "clearml>=1.2.0"
```

This will enable integration with the YOLOv5 training script. Every training run from now on will be captured and stored by the ClearML experiment manager.

If you want to change the `project_name` or `task_name`, use the `--project` and `--name` arguments of the `train.py` script; by default the project will be called `YOLOv5` and the task `Training`.
PLEASE NOTE: ClearML uses `/` as a delimiter for subprojects, so be careful when using `/` in your project name!

```bash
python train.py --img 640 --batch 16 --epochs 3 --data coco128.yaml --weights yolov5s.pt --cache
```
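Because `/` is the subproject delimiter, you may want to sanitize user-supplied names before passing them to `--project` or `--name`. A minimal sketch (the `safe_name` helper is hypothetical, not part of YOLOv5 or ClearML):

```python
def safe_name(name: str) -> str:
    """Replace "/" so ClearML does not split the name into project/subproject."""
    return name.replace("/", "_")

print(safe_name("yolov5/experiments"))  # yolov5_experiments -> stays a single project
```

Conversely, you can use `/` deliberately, e.g. `--project yolov5/ablations`, to group related tasks under one parent project.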

or with custom project and task name:

```bash
python train.py --project my_project --name my_training --img 640 --batch 16 --epochs 3 --data coco128.yaml --weights yolov5s.pt --cache
```

This will capture:

- Source code + uncommitted changes
- Installed packages
- (Hyper)parameters
- Model files (use `--save-period n` to save a checkpoint every n epochs)
- Console output
- Scalars (mAP_0.5, mAP_0.5:0.95, precision, recall, losses, learning rates, ...)
- General info such as machine details, runtime, creation date etc.
- All produced plots such as label correlogram and confusion matrix
- Images with bounding boxes per epoch
- Mosaic per epoch
- Validation images per epoch
- ...

That's a lot, right? 🤯
Now, we can visualize all of this information in the ClearML UI to get an overview of our training progress. Add custom columns to the table view (such as mAP_0.5) so you can easily sort on the best performing model. Or select multiple experiments and directly compare them!

There is even more we can do with all of this information, like hyperparameter optimization and remote execution, so keep reading if you want to see how that works!

<br />

## 🔗 Dataset Version Management

Versioning your data separately from your code is generally a good idea and makes it easy to acquire the latest version too. This repository supports supplying a dataset version ID, and it will make sure to get the data if it's not there yet. Next to that, this workflow also saves the used dataset ID as part of the task parameters, so you will always know for sure which data was used in which experiment!

![ClearML Dataset Interface](https://github.com/thepycoder/clearml_screenshots/raw/main/clearml_data.gif)

### Prepare Your Dataset

The YOLOv5 repository supports a number of different datasets by using yaml files containing their information. By default, datasets are downloaded to the `../datasets` folder relative to the repository root folder. So if you downloaded the `coco128` dataset using the link in the yaml or with the scripts provided by YOLOv5, you get this folder structure:

```
..
|_ yolov5
|_ datasets
    |_ coco128
        |_ images
        |_ labels
        |_ LICENSE
        |_ README.txt
```

But this can be any dataset you wish. Feel free to use your own, as long as you keep to this folder structure.

Next, ⚠️**copy the corresponding yaml file to the root of the dataset folder**⚠️. This yaml file contains the information ClearML will need to properly use the dataset. You can make this yourself too, of course; just follow the structure of the example yamls.

Basically we need the following keys: `path`, `train`, `test`, `val`, `nc`, `names`.
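For example, a minimal yaml with these keys for a hypothetical two-class dataset might look like the sketch below (the paths and class names are placeholders; the example yamls shipped with YOLOv5 follow the same structure):

```yaml
path: ../datasets/my_dataset  # dataset root folder
train: images/train  # train images, relative to path
val: images/val  # validation images, relative to path
test:  # optional test images
nc: 2  # number of classes
names: ['cat', 'dog']  # class names, indexed 0..nc-1
```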

```
..
|_ yolov5
|_ datasets
    |_ coco128
        |_ images
        |_ labels
        |_ coco128.yaml  # <---- HERE!
        |_ LICENSE
        |_ README.txt
```
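If you write the yaml yourself, a quick sanity check on the required keys can save a failed run later. A minimal sketch using PyYAML (already a YOLOv5 dependency); the `check_dataset_yaml` helper is hypothetical, not part of the repository:

```python
import yaml  # PyYAML

REQUIRED_KEYS = {"path", "train", "test", "val", "nc", "names"}

def check_dataset_yaml(text: str) -> set:
    """Return the set of required keys missing from a dataset yaml string."""
    data = yaml.safe_load(text) or {}
    return REQUIRED_KEYS - set(data)

example = """
path: ../datasets/my_dataset
train: images/train
val: images/val
test:
nc: 2
names: ['cat', 'dog']
"""
print(check_dataset_yaml(example))  # set() -> nothing missing
```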

### Upload Your Dataset

To get this dataset into ClearML as a versioned dataset, go to the dataset root folder and run the following command:

```bash
cd coco128
clearml-data sync --project YOLOv5 --name coco128 --folder .
```

The command `clearml-data sync` is actually a shorthand command. You could also run these commands one after the other:

```bash
# Optionally add --parent <parent_dataset_id> if you want to base
# this version on another dataset version, so no duplicate files are uploaded!
clearml-data create --name coco128 --project YOLOv5
clearml-data add --files .
clearml-data close
```

### Run Training Using A ClearML Dataset

Now that you have a ClearML dataset, you can very simply use it to train custom YOLOv5 🚀 models!

```bash
python train.py --img 640 --batch 16 --epochs 3 --data clearml://<your_dataset_id> --weights yolov5s.pt --cache
```

<br />

## 👀 Hyperparameter Optimization

Now that we have our experiments and data versioned, it's time to take a look at what we can build on top!

Using the code information, installed packages and environment details, the experiment itself is now **completely reproducible**. In fact, ClearML allows you to clone an experiment and even change its parameters. We can then just rerun it with these new parameters automatically; this is basically what HPO does!
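Stripped of the ClearML machinery, that clone-change-rerun loop is just a search over hyperparameters. A toy, framework-free illustration (the `train_once` function is a stand-in for a real training run, not the ClearML API):

```python
import random

def train_once(hyp):
    """Stand-in for a full training run: returns a mock mAP for the given hyperparameters."""
    return 1.0 - 10 * abs(hyp["lr0"] - 0.01) - abs(hyp["momentum"] - 0.937)

random.seed(0)
best_map, best_hyp = float("-inf"), None
for _ in range(20):
    # "Clone" the experiment with perturbed hyperparameters (random search)
    hyp = {"lr0": random.uniform(0.001, 0.1), "momentum": random.uniform(0.8, 0.99)}
    current = train_once(hyp)
    if current > best_map:
        best_map, best_hyp = current, hyp

print(best_hyp)
```

Real HPO replaces the random sampler with a smarter strategy (such as Optuna) and each `train_once` call with a cloned, enqueued ClearML task, which is exactly what the included script automates.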

To **run hyperparameter optimization locally**, we've included a pre-made script for you. Just make sure a training task has been run at least once, so it is in the ClearML experiment manager; we will essentially clone it and change its hyperparameters.

You'll need to fill in the ID of this `template task` in the script found at `utils/loggers/clearml/hpo.py` and then just run it :) You can change `task.execute_locally()` to `task.execute()` to put it in a ClearML queue and have a remote agent work on it instead.

```bash
# To use optuna, install it first, otherwise you can change the optimizer to just be RandomSearch
pip install optuna
python utils/loggers/clearml/hpo.py
```

![HPO](https://github.com/thepycoder/clearml_screenshots/raw/main/hpo.png)
## 🤯 Remote Execution (advanced)

Running HPO locally is really handy, but what if we want to run our experiments on a remote machine instead? Maybe you have access to a very powerful GPU machine on-site, or you have some budget to use cloud GPUs.
This is where the ClearML Agent comes into play. Check out what the agent can do here:

- [YouTube video](https://youtu.be/MX3BrXnaULs)
- [Documentation](https://clear.ml/docs/latest/docs/clearml_agent)

In short: every experiment tracked by the experiment manager contains enough information to reproduce it on a different machine (installed packages, uncommitted changes etc.). So a ClearML agent does just that: it listens to a queue for incoming tasks and when it finds one, it recreates the environment and runs it while still reporting scalars, plots etc. to the experiment manager.

You can turn any machine (a cloud VM, a local GPU machine, your own laptop, ...) into a ClearML agent by simply running:

```bash
clearml-agent daemon --queue <queues_to_listen_to> [--docker]
```
### Cloning, Editing And Enqueuing

With our agent running, we can give it some work. Remember from the HPO section that we can clone a task and edit the hyperparameters? We can do that from the interface too!

🪄 Clone the experiment by right-clicking it

🎯 Edit the hyperparameters to what you wish them to be

⏳ Enqueue the task to any of the queues by right-clicking it

![Enqueue a task from the UI](https://github.com/thepycoder/clearml_screenshots/raw/main/enqueue.gif)
### Executing A Task Remotely

Now you can clone a task like we explained above, or simply mark your current script by adding `task.execute_remotely()`, and on execution it will be put into a queue for the agent to start working on!

To run the YOLOv5 training script remotely, all you have to do is add this line to the `train.py` script after the ClearML logger has been instantiated:

```python
# ...
# Loggers
data_dict = None
if RANK in {-1, 0}:
    loggers = Loggers(save_dir, weights, opt, hyp, LOGGER)  # loggers instance
    if loggers.clearml:
        loggers.clearml.task.execute_remotely(queue="my_queue")  # <------ ADD THIS LINE
        # data_dict is either None if the user did not choose a ClearML dataset or is filled in by ClearML
        data_dict = loggers.clearml.data_dict
# ...
```

When running the training script after this change, Python will run the script up until that line, after which it will package the code and send it to the queue instead!
### Autoscaling Workers

ClearML comes with autoscalers too! This tool will automatically spin up new remote machines in the cloud of your choice (AWS, GCP, Azure) and turn them into ClearML agents for you whenever there are experiments detected in the queue. Once the tasks are processed, the autoscaler will automatically shut down the remote machines, and you stop paying!

Check out the autoscalers getting started video below.

[![Watch the video](https://img.youtube.com/vi/j4XVMAaUt3E/0.jpg)](https://youtu.be/j4XVMAaUt3E)