logs_v6.txt

Loaded pretrained weights for efficientnet-b2
---------- Fold: 1 ----------
0:05:06 | Epoch: 1/15 | Loss: 121.9 | Train Acc: 0.961 | Valid Acc: 0.912 | ROC: 0.913
0:05:01 | Epoch: 2/15 | Loss: 94.7 | Train Acc: 0.971 | Valid Acc: 0.938 | ROC: 0.974
0:04:59 | Epoch: 3/15 | Loss: 88.2 | Train Acc: 0.972 | Valid Acc: 0.939 | ROC: 0.968
0:04:57 | Epoch: 4/15 | Loss: 82.25 | Train Acc: 0.974 | Valid Acc: 0.952 | ROC: 0.981
0:04:56 | Epoch: 5/15 | Loss: 78.46 | Train Acc: 0.975 | Valid Acc: 0.924 | ROC: 0.95
0:04:55 | Epoch: 6/15 | Loss: 75.07 | Train Acc: 0.976 | Valid Acc: 0.939 | ROC: 0.975
Epoch 6: reducing learning rate of group 0 to 2.0000e-04.
0:04:55 | Epoch: 7/15 | Loss: 62.55 | Train Acc: 0.98 | Valid Acc: 0.946 | ROC: 0.981
Early stopping (no improvement since 3 models) | Best ROC: 0.9812042692939246
Loaded pretrained weights for efficientnet-b2
---------- Fold: 2 ----------
0:05:00 | Epoch: 1/15 | Loss: 88.39 | Train Acc: 0.971 | Valid Acc: 0.97 | ROC: 0.973
0:04:57 | Epoch: 2/15 | Loss: 81.09 | Train Acc: 0.974 | Valid Acc: 0.966 | ROC: 0.976
0:04:55 | Epoch: 3/15 | Loss: 76.29 | Train Acc: 0.976 | Valid Acc: 0.979 | ROC: 0.981
0:04:54 | Epoch: 4/15 | Loss: 74.83 | Train Acc: 0.976 | Valid Acc: 0.955 | ROC: 0.97
0:04:54 | Epoch: 5/15 | Loss: 69.22 | Train Acc: 0.977 | Valid Acc: 0.976 | ROC: 0.975
Epoch 5: reducing learning rate of group 0 to 2.0000e-04.
0:04:54 | Epoch: 6/15 | Loss: 57.89 | Train Acc: 0.981 | Valid Acc: 0.975 | ROC: 0.979
Early stopping (no improvement since 3 models) | Best ROC: 0.9811217376743738
Loaded pretrained weights for efficientnet-b2
---------- Fold: 3 ----------
0:04:54 | Epoch: 1/15 | Loss: 75.44 | Train Acc: 0.975 | Valid Acc: 0.967 | ROC: 0.982
0:04:54 | Epoch: 2/15 | Loss: 73.25 | Train Acc: 0.977 | Valid Acc: 0.971 | ROC: 0.981
0:04:54 | Epoch: 3/15 | Loss: 69.9 | Train Acc: 0.978 | Valid Acc: 0.978 | ROC: 0.983
0:04:53 | Epoch: 4/15 | Loss: 66.87 | Train Acc: 0.978 | Valid Acc: 0.979 | ROC: 0.984
0:04:52 | Epoch: 5/15 | Loss: 65.13 | Train Acc: 0.978 | Valid Acc: 0.978 | ROC: 0.987
0:04:53 | Epoch: 6/15 | Loss: 63.17 | Train Acc: 0.979 | Valid Acc: 0.979 | ROC: 0.984
0:04:51 | Epoch: 7/15 | Loss: 60.53 | Train Acc: 0.98 | Valid Acc: 0.978 | ROC: 0.984
Epoch 7: reducing learning rate of group 0 to 2.0000e-04.
0:04:53 | Epoch: 8/15 | Loss: 47.86 | Train Acc: 0.983 | Valid Acc: 0.982 | ROC: 0.984
Early stopping (no improvement since 3 models) | Best ROC: 0.9868701392202659
Loaded pretrained weights for efficientnet-b2
---------- Fold: 4 ----------
0:04:55 | Epoch: 1/15 | Loss: 61.75 | Train Acc: 0.98 | Valid Acc: 0.974 | ROC: 0.984
0:04:52 | Epoch: 2/15 | Loss: 59.32 | Train Acc: 0.98 | Valid Acc: 0.974 | ROC: 0.986
0:04:54 | Epoch: 3/15 | Loss: 55.97 | Train Acc: 0.981 | Valid Acc: 0.974 | ROC: 0.983
0:04:55 | Epoch: 4/15 | Loss: 55.27 | Train Acc: 0.981 | Valid Acc: 0.97 | ROC: 0.977
Epoch 4: reducing learning rate of group 0 to 2.0000e-04.
0:04:55 | Epoch: 5/15 | Loss: 41.34 | Train Acc: 0.986 | Valid Acc: 0.976 | ROC: 0.983
Early stopping (no improvement since 3 models) | Best ROC: 0.9862235439534969
Loaded pretrained weights for efficientnet-b2
---------- Fold: 5 ----------
0:04:54 | Epoch: 1/15 | Loss: 63.11 | Train Acc: 0.979 | Valid Acc: 0.979 | ROC: 0.992
0:04:53 | Epoch: 2/15 | Loss: 60.21 | Train Acc: 0.98 | Valid Acc: 0.98 | ROC: 0.989
0:04:55 | Epoch: 3/15 | Loss: 57.91 | Train Acc: 0.98 | Valid Acc: 0.98 | ROC: 0.991
Epoch 3: reducing learning rate of group 0 to 2.0000e-04.
0:04:56 | Epoch: 4/15 | Loss: 45.1 | Train Acc: 0.984 | Valid Acc: 0.981 | ROC: 0.992
0:04:56 | Epoch: 5/15 | Loss: 41.61 | Train Acc: 0.985 | Valid Acc: 0.979 | ROC: 0.991
0:04:56 | Epoch: 6/15 | Loss: 36.48 | Train Acc: 0.988 | Valid Acc: 0.982 | ROC: 0.992
Epoch 6: reducing learning rate of group 0 to 8.0000e-05.
0:04:57 | Epoch: 7/15 | Loss: 31.25 | Train Acc: 0.989 | Valid Acc: 0.982 | ROC: 0.992
Early stopping (no improvement since 3 models) | Best ROC: 0.9923051292463533
Loaded pretrained weights for efficientnet-b2
---------- Fold: 6 ----------
0:04:56 | Epoch: 1/15 | Loss: 62.28 | Train Acc: 0.98 | Valid Acc: 0.986 | ROC: 0.992
0:04:57 | Epoch: 2/15 | Loss: 57.42 | Train Acc: 0.981 | Valid Acc: 0.982 | ROC: 0.989
0:04:57 | Epoch: 3/15 | Loss: 59.15 | Train Acc: 0.979 | Valid Acc: 0.984 | ROC: 0.99
Epoch 3: reducing learning rate of group 0 to 2.0000e-04.
0:04:57 | Epoch: 4/15 | Loss: 44.86 | Train Acc: 0.984 | Valid Acc: 0.986 | ROC: 0.992
0:04:56 | Epoch: 5/15 | Loss: 39.38 | Train Acc: 0.986 | Valid Acc: 0.987 | ROC: 0.991
Epoch 5: reducing learning rate of group 0 to 8.0000e-05.
0:04:56 | Epoch: 6/15 | Loss: 34.3 | Train Acc: 0.988 | Valid Acc: 0.987 | ROC: 0.991
0:04:56 | Epoch: 7/15 | Loss: 30.06 | Train Acc: 0.99 | Valid Acc: 0.985 | ROC: 0.992
0:04:55 | Epoch: 8/15 | Loss: 27.28 | Train Acc: 0.991 | Valid Acc: 0.986 | ROC: 0.992
0:04:55 | Epoch: 9/15 | Loss: 26.81 | Train Acc: 0.99 | Valid Acc: 0.987 | ROC: 0.991
Epoch 9: reducing learning rate of group 0 to 3.2000e-05.
0:04:52 | Epoch: 10/15 | Loss: 24.69 | Train Acc: 0.991 | Valid Acc: 0.987 | ROC: 0.99
Early stopping (no improvement since 3 models) | Best ROC: 0.9922027185133513
Loaded pretrained weights for efficientnet-b2