<div style="padding:10px; margin:0;font-family:newtimeroman;font-size:300%;text-align:center;border-radius: 30px 10px;overflow:hidden;font-weight:700;background-color:#272643; color:white">
    Blood Cell Cancer with PyTorch


<div style="text-align:center;">
    <img src='https://i.postimg.cc/HxJRDmDz/blood-cancers.jpg'>


<div style='border-radius: 10px; box-shadow: 0 2px 4px 0 rgba(0, 0, 0, 0.1); border:2px solid #90e0ef; background-color:#e3f6f5; padding:10px; font-size:130%'>
<p style="font-size:150%; font-weight:bold">About Dataset</p>

<p>The definitive diagnosis of Acute Lymphoblastic Leukemia (ALL), a highly prevalent cancer, requires invasive, expensive, and time-consuming diagnostic tests. Diagnosing ALL from peripheral blood smear (PBS) images plays a vital role in the initial screening of cancer from non-cancer cases. The examination of these PBS images by laboratory staff is prone to diagnostic error, because the non-specific nature of ALL signs and symptoms often leads to misdiagnosis.</p>

<p>The images of this dataset were prepared in the bone marrow laboratory of Taleqani Hospital (Tehran, Iran). The dataset consists of 3242 PBS images from 89 patients suspected of ALL, whose blood samples were prepared and stained by skilled laboratory staff. It is divided into two classes: benign and malignant. The former comprises hematogones, and the latter is the ALL group with three subtypes of malignant lymphoblasts: Early Pre-B, Pre-B, and Pro-B ALL. All images were taken with a Zeiss camera on a microscope at 100x magnification and saved as JPG files. A specialist made the definitive determination of the cell types and subtypes using flow cytometry.</p>

<a id='tbl_content'></a>
<div style="background-color:#eef1fb; padding: 20px 10px 10px 10px; border-radius: 10px; box-shadow: 2px 2px 4px 0 rgba(0, 0, 0, 0.1)">
    <ul>
        <li><a href="#setup" style="font-size:24px; font-family:calibri; font-weight:bold"> Step 1 | Setup </a></li>
            <ul>
                <li><a href="#step11" style="font-size:18px; font-family:calibri"> Step 1.1 | Install required libraries </a></li>
                <li><a href="#step12" style="font-size:18px; font-family:calibri"> Step 1.2 | Import required libraries </a></li>
                <li><a href="#step13" style="font-size:18px; font-family:calibri"> Step 1.3 | Configurations </a></li>
                <li><a href="#step14" style="font-size:18px; font-family:calibri"> Step 1.4 | Device </a></li>
            </ul>
        <li><a href="#data" style="font-size:24px; font-family:calibri; font-weight:bold"> Step 2 | Data </a></li>
            <ul>
                <li><a href="#step21" style="font-size:18px; font-family:calibri"> Step 2.1 | Read Data </a></li>
                <li><a href="#step22" style="font-size:18px; font-family:calibri"> Step 2.2 | Copy images to working dir </a></li>
                <li><a href="#step23" style="font-size:18px; font-family:calibri"> Step 2.3 | Count Plot </a></li>
                <li><a href="#step24" style="font-size:18px; font-family:calibri"> Step 2.4 | Plot Images </a></li>
                <li><a href="#step25" style="font-size:18px; font-family:calibri"> Step 2.5 | Split images into Train-Valid-Test folders </a></li>
            </ul> 
        <li><a href="#aug" style="font-size:24px; font-family:calibri; font-weight:bold"> Step 3 | Data Augmentations </a></li>
            <ul>
                <li><a href="#step31" style="font-size:18px; font-family:calibri"> Step 3.1 | Blur </a></li>
                <li><a href="#step32" style="font-size:18px; font-family:calibri"> Step 3.2 | Noise </a></li>
                <li><a href="#step33" style="font-size:18px; font-family:calibri"> Step 3.3 | Flip </a></li>
                <li><a href="#step34" style="font-size:18px; font-family:calibri"> Step 3.4 | Apply Augmentations </a></li>
            </ul>
        <li><a href="#dataset" style="font-size:24px; font-family:calibri; font-weight:bold"> Step 4 | DataSets and DataLoaders </a></li>
            <ul>
                <li><a href="#step41" style="font-size:18px; font-family:calibri"> Step 4.1 | Create Datasets and DataLoaders </a></li>
                <li><a href="#step42" style="font-size:18px; font-family:calibri"> Step 4.2 | Data Shapes </a></li>
            </ul>
        <li><a href="#free" style="font-size:24px; font-family:calibri; font-weight:bold"> Step 5 | Free up some space in RAM and GPU </a></li>
            <ul>
                <li><a href="#step51" style="font-size:18px; font-family:calibri"> Step 5.1 | RAM </a></li>
                <li><a href="#step52" style="font-size:18px; font-family:calibri"> Step 5.2 | GPU </a></li>
            </ul>
        <li><a href="#model" style="font-size:24px; font-family:calibri; font-weight:bold"> Step 6 | Model </a></li>
            <ul>
                <li><a href="#step61" style="font-size:18px; font-family:calibri"> Step 6.1 | PreTrained Model </a></li>
                <li><a href="#step62" style="font-size:18px; font-family:calibri"> Step 6.2 | Change Last Layer (fc) </a></li>
                <li><a href="#step63" style="font-size:18px; font-family:calibri"> Step 6.3 | Train the Model </a></li>
                <li><a href="#step64" style="font-size:18px; font-family:calibri"> Step 6.4 | Evaluation </a></li>
                <li><a href="#step65" style="font-size:18px; font-family:calibri"> Step 6.5 | Plot The Result </a></li>
                <li><a href="#step66" style="font-size:18px; font-family:calibri"> Step 6.6 | Confusion Matrix </a></li>
            </ul>
        <li><a href="#author" style="font-size:24px; font-family:calibri; font-weight:bold"> Author </a></li>
    </ul>

</div>

# <a id='setup'></a> 
# <span style="background-color:#1d3461;background-size: cover;font-family:tahoma;font-size:180%;text-align:center;border-radius:15px 15px; padding:10px; border:solid 2px #09375b"><span style="color:red"><b> 1 | </b></span><span style="color:#ade8f4"><b> SETUP

###### 🏠 [Table of Contents](#tbl_content)

## <a id='step11'></a>
## <span style="background-color:orange ;background-size: cover;font-family:tahoma;font-size:70%; font-weight: 900; text-align:left;border-radius:25px 25px; padding:10px; border:solid 2px #09375b"><span style="color:navy">1.1 | Install required libraries

<div style="background-color:#fbf8cc; padding: 10px 10px 10px 10px; border-radius: 10px; box-shadow: 2px 2px 4px 0 rgba(0, 0, 0, 0.1);border:0px solid #0A2342; text-align:left">
    <p style="font-size:16px; font-family:tahoma; line-height: 2em; text-indent: 20px;">🔵 As a first step, install the required Python libraries with the <code>pip install</code> command.


```python
# ! pip install -q split-folders
```

## <a id='step12'></a>
## <span style="background-color:orange ;background-size: cover;font-family:tahoma;font-size:70%; font-weight: 900; text-align:left;border-radius:25px 25px; padding:10px; border:solid 2px #09375b"><span style="color:navy">1.2 | Import required Libraries

<div style="background-color:#fbf8cc; padding: 10px 10px 10px 10px; border-radius: 10px; box-shadow: 2px 2px 4px 0 rgba(0, 0, 0, 0.1);border:0px solid #0A2342; text-align:left">
    <p style="font-size:16px; font-family:tahoma; line-height: 2em; text-indent: 20px;">🔵 Then import the necessary libraries with the <code>import</code> statement.


```python
import os                                                       # To work with operating system commands
import gc                                                       # Garbage collector, to free up memory
import shutil                                                   # To copy and move files
import numpy as np                                              # To work with arrays
import cv2                                                      # Powerful library to work with images
import random                                                   # To generate random numbers and random choices
import matplotlib.pyplot as plt                                 # For visualization
import seaborn as sns                                           # For visualization
import splitfolders                                             # To split images into [train, validation, test]
from PIL import Image                                           # To read images
from tqdm.notebook import tqdm                                  # Beautiful progress bar
from termcolor import colored                                   # For colorful output
from warnings import filterwarnings                             # To suppress Python warnings

import torch                                                   # PyTorch framework
import torchvision.transforms as transforms                    # To apply transforms before creating a dataset
from torchvision.datasets import ImageFolder                   # To create a dataset from images on a local drive
from torch.utils.data import DataLoader                        # To create DataLoaders
from torchvision.models import googlenet, GoogLeNet_Weights    # Pre-trained model with its weights
import torch.nn as nn                                          # Neural network modules
from datetime import datetime                                  # To measure time and duration
from sklearn.metrics import confusion_matrix, classification_report     # To compute the confusion matrix and classification report
```

## <a id='step13'></a>
## <span style="background-color:orange ;background-size: cover;font-family:tahoma;font-size:70%; font-weight: 900; text-align:left;border-radius:25px 25px; padding:10px; border:solid 2px #09375b"><span style="color:navy">1.3 | Configurations

<div style="background-color:#fbf8cc; padding: 10px 10px 10px 10px; border-radius: 10px; box-shadow: 2px 2px 4px 0 rgba(0, 0, 0, 0.1);border:0px solid #0A2342; text-align:left">
    <p style="font-size:16px; font-family:tahoma; line-height: 2em; text-indent: 20px;">🔵 Apply configuration settings for the libraries above for better results.


```python
# Add a style to seaborn plots for better visualization
sns.set_style('darkgrid')

# To suppress Python warnings
filterwarnings('ignore')
```


```python
# Initialization values 

img_size = (128, 128)

batch_size = 64

num_epochs = 30
```
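
<div style="background-color:#fbf8cc; padding: 10px 10px 10px 10px; border-radius: 10px; box-shadow: 2px 2px 4px 0 rgba(0, 0, 0, 0.1);border:0px solid #0A2342; text-align:left">
    <p style="font-size:16px; font-family:tahoma; line-height: 2em; text-indent: 20px;">🔵 The augmentation pipeline below makes random choices, so fixing the random seeds makes runs reproducible. A minimal, optional sketch (the seed value 42 is an arbitrary choice, not part of the original run):


```python
# Optional: fix random seeds for reproducibility (assumed addition, not in the original run)
seed = 42
random.seed(seed)                 # Python's random module (used by the augmentation choices below)
np.random.seed(seed)              # NumPy (used by the noise function)
torch.manual_seed(seed)           # PyTorch CPU RNG
torch.cuda.manual_seed_all(seed)  # PyTorch GPU RNGs (no-op when CUDA is unavailable)
```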


```python
# Show all colors used in this notebook
colors_dark = ['#1d3461', '#eef1fb', '#ade8f4', 'red', 'black', 'orange', 'navy', '#fbf8cc']

sns.palplot(colors_dark)
```


    
![png](Blood_Cell_Cancer_files/Blood_Cell_Cancer_15_0.png)
    

## <a id='step14'></a>
## <span style="background-color:orange ;background-size: cover;font-family:tahoma;font-size:70%; font-weight: 900; text-align:left;border-radius:25px 25px; padding:10px; border:solid 2px #09375b"><span style="color:navy">1.4 | Device


```python
device = torch.device('cuda:0' if torch.cuda.is_available() else 'cpu')

if device.type == 'cuda' :
    print(colored(' GPU is available ', 'green', 'on_white', attrs=['bold']))
else :
    print(colored(' You are using CPU ', 'red', 'on_white', attrs=['bold']))
```

     GPU is available 
    
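
<div style="background-color:#fbf8cc; padding: 10px 10px 10px 10px; border-radius: 10px; box-shadow: 2px 2px 4px 0 rgba(0, 0, 0, 0.1);border:0px solid #0A2342; text-align:left">
    <p style="font-size:16px; font-family:tahoma; line-height: 2em; text-indent: 20px;">🔵 If a GPU was found, we can also print its name; a small optional check (not part of the original run):


```python
# Optional: show which GPU was detected (assumes the CUDA branch above was taken)
if device.type == 'cuda':
    print(torch.cuda.get_device_name(device))
```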

# <a id='data'></a> 
# <span style="background-color:#1d3461;background-size: cover;font-family:tahoma;font-size:180%;text-align:center;border-radius:15px 15px; padding:10px; border:solid 2px #09375b"><span style="color:red"><b> 2 | </b></span><span style="color:#ade8f4"><b> DATA

###### 🏠 [Table of Contents](#tbl_content)

## <a id='step21'></a>
## <span style="background-color:orange ;background-size: cover;font-family:tahoma;font-size:70%; font-weight:900; text-align:left;border-radius:25px 25px; padding:10px; border:solid 2px #09375b"><span style="color:navy">2.1 | Read Data

<div style="background-color:#fbf8cc; padding: 10px 10px 10px 10px; border-radius: 10px; box-shadow: 2px 2px 4px 0 rgba(0, 0, 0, 0.1);border:0px solid #0A2342; text-align:left">
    <p style="font-size:16px; font-family:tahoma; line-height: 2em; text-indent: 20px;">🔵 Set the path of the dataset on <b>Kaggle</b> or your <b>local</b> drive.


```python
# Path of the main dataset
base_dir = 'C:\\envs\\DataSets\\Blood cell Cancer [ALL]'

# Path of the working directory
working_dir = 'C:\\envs\\Working\\Blood_Cell_Cancer'
```

## <a id='step22'></a>
## <span style="background-color:orange ;background-size: cover;font-family:tahoma;font-size:70%; font-weight: 900; text-align:left;border-radius:25px 25px; padding:10px; border:solid 2px #09375b"><span style="color:navy">2.2 | Copy images to working dir

<div style='border: 3px solid none; background-color:#dce9f5; padding:10px; color:black'>
<b>The target layout is:</b><br>

    working/
    ├── images/
    │          ├── Benign
    │          │          ├── image-1.jpg
    │          │          ├── image-2.jpg
    │          │          ├── ...
    │          │
    │          ├── Early_Pre_B
    │          │          ├── image-1.jpg
    │          │          ├── image-2.jpg
    │          │          ├── ...
    │          │
    │          ├── Pre_B
    │          │          ├── image-1.jpg
    │          │          ├── image-2.jpg
    │          │          ├── ...
    │          │
    │          ├── Pro_B
    │                     ├── image-1.jpg
    │                     ├── image-2.jpg
    │                     ├── ...   

<div style="background-color:#fbf8cc; padding: 10px 10px 10px 10px; border-radius: 10px; box-shadow: 2px 2px 4px 0 rgba(0, 0, 0, 0.1);border:0px solid #0A2342; text-align:left">
    <p style="font-size:16px; font-family:tahoma; line-height: 2em; text-indent: 20px;">🔵 Based on the diagram above, we take the following steps:
    </p>
    <ul style="font-size:15px; font-family:tahoma; line-height: 2em; text-indent: 20px;"> 1. Create a <b>folder</b> in the working directory. </ul>
    <ul style="font-size:15px; font-family:tahoma; line-height: 2em; text-indent: 20px;"> 2. Create a folder for <b>each class</b>. </ul>
    <ul style="font-size:15px; font-family:tahoma; line-height: 2em; text-indent: 20px;"> 3. <b>Copy</b> the images from the dataset into these folders.


```python
# Create the 'Images' folder in the working directory (step 1)
Images = os.path.join(working_dir, 'Images')
if not os.path.exists(Images) :
    os.mkdir(Images)
```


```python
# For each class, create a folder inside the Images folder (step 2)
Benign = os.path.join(Images, 'Benign')
Early_Pre_B = os.path.join(Images, 'Early_Pre_B')
Pre_B = os.path.join(Images, 'Pre_B')
Pro_B = os.path.join(Images, 'Pro_B')

os.mkdir(Benign)
os.mkdir(Early_Pre_B)
os.mkdir(Pre_B)
os.mkdir(Pro_B)
```


```python
# Copy images from the dataset to working-dir/Images (step 3)
# Note: the match statement requires Python 3.10+

for folder in os.listdir(base_dir) :
    folder_path = os.path.join(base_dir, folder)
    for img in tqdm(os.listdir(folder_path)) :
        src = os.path.join(folder_path, img)

        match folder :
            case 'Benign' :
                shutil.copy(src, os.path.join(Benign, img))

            case '[Malignant] early Pre-B' :
                shutil.copy(src, os.path.join(Early_Pre_B, img))

            case '[Malignant] Pre-B' :
                shutil.copy(src, os.path.join(Pre_B, img))

            case '[Malignant] Pro-B' :
                shutil.copy(src, os.path.join(Pro_B, img))

print(colored('All images copied to working directory', 'green'))
```


      0%|          | 0/512 [00:00<?, ?it/s]


      0%|          | 0/979 [00:00<?, ?it/s]


      0%|          | 0/955 [00:00<?, ?it/s]


      0%|          | 0/796 [00:00<?, ?it/s]


    All images copied to working directory
    


```python
# Read and show the classes

classes = os.listdir(Images)
num_classes = len(classes)
print(classes)
print(f'Number of classes : {num_classes}')
```

    ['Benign', 'Early_Pre_B', 'Pre_B', 'Pro_B']
    Number of classes : 4
    
## <a id='step23'></a>
## <span style="background-color:orange ;background-size: cover;font-family:tahoma;font-size:70%; font-weight: 900; text-align:left;border-radius:25px 25px; padding:10px; border:solid 2px #09375b"><span style="color:navy">2.3 | Count Plot

<div style="background-color:#fbf8cc; padding: 10px 10px 10px 10px; border-radius: 10px; box-shadow: 2px 2px 4px 0 rgba(0, 0, 0, 0.1);border:0px solid #0A2342; text-align:left">
    <p style="font-size:16px; font-family:tahoma; line-height: 2em; text-indent: 20px;">🔵 Show the number of samples in each class with a count plot.


```python
# A variable to store values
counts = []

# Loop over class names 
for class_name in classes :
    class_path = os.path.join(Images, class_name)
    counts.append(len(os.listdir(class_path)))

# Plot the result
plt.figure(figsize=(13, 4), dpi=400)
ax = sns.barplot(x=counts, y=classes, palette='Set1', hue=classes)
for i in range(len(classes)) :
    ax.bar_label(ax.containers[i])
plt.title('Number of images in each class', fontsize=20, fontweight='bold', c='navy')
ax.set_xlim(0, 1200)
ax.set_xlabel('Counts', fontweight='bold')
ax.set_ylabel('Classes', fontweight='bold')
plt.show()
```


    
![png](Blood_Cell_Cancer_files/Blood_Cell_Cancer_31_0.png)
    

## <a id='step24'></a>
## <span style="background-color:orange ;background-size: cover;font-family:tahoma;font-size:70%; font-weight: 900; text-align:left;border-radius:25px 25px; padding:10px; border:solid 2px #09375b"><span style="color:navy">2.4 | Plot Images

<div style="background-color:#fbf8cc; padding: 10px 10px 10px 10px; border-radius: 10px; box-shadow: 2px 2px 4px 0 rgba(0, 0, 0, 0.1);border:0px solid #0A2342; text-align:left">
    <p style="font-size:16px; font-family:tahoma; line-height: 2em; text-indent: 20px;">🔵 Now plot some images from each class.


```python
# Iterate over the classes
for class_name in classes :
    # Create a plot with 1 row and 6 columns
    fig, ax = plt.subplots(1, 6, figsize=(15, 2))
    # Path of the current class, joining the Images directory and the class name
    class_path = os.path.join(Images, class_name)
    # files is a list of all image names in the class folder
    files = os.listdir(class_path)
    # Choose 6 random images from the class to show in the plot
    random_images = random.choices(files, k=6)
    # Iterate over the 6 random images
    for i in range(6) :
        # Print the class name as a suptitle
        plt.suptitle(class_name, fontsize=20, fontweight='bold')
        # img is the path of the image, joining class_path and the image file name
        img = os.path.join(class_path, random_images[i])
        # Load the image with PIL's Image.open
        img = Image.open(img)
        # Plot the image
        ax[i].imshow(img)
        # Turn the axis off
        ax[i].axis('off')
    # Bring the subplots closer together
    plt.tight_layout()
```


    
![png](Blood_Cell_Cancer_files/Blood_Cell_Cancer_34_0.png)
    



    
![png](Blood_Cell_Cancer_files/Blood_Cell_Cancer_34_1.png)
    



    
![png](Blood_Cell_Cancer_files/Blood_Cell_Cancer_34_2.png)
    



    
![png](Blood_Cell_Cancer_files/Blood_Cell_Cancer_34_3.png)
    

## <a id='step25'></a>
## <span style="background-color:orange ;background-size: cover;font-family:tahoma;font-size:70%; font-weight: 900; text-align:left;border-radius:25px 25px; padding:10px; border:solid 2px #09375b"><span style="color:navy">2.5 | Split images into Train-Valid-Test folders

<div style="background-color:#fbf8cc; padding: 10px 10px 10px 10px; border-radius: 10px; box-shadow: 2px 2px 4px 0 rgba(0, 0, 0, 0.1);border:0px solid #0A2342; text-align:left">
    <p style="font-size:16px; font-family:tahoma; line-height: 2em; text-indent: 20px;">🔵 In this step, split the images into 3 parts, <b>Train, Validation, and Test</b>, with a ratio of <b>70%, 15%, 15%</b> of all images.


```python
# Create folders for train, validation, and test
train_valid = os.path.join(working_dir, 'train_valid')

splitfolders.ratio(
    input=Images, output=train_valid, seed=42, ratio=(0.7, 0.15, 0.15)
)

print(colored(' All images split to TRAIN / VALIDATION / TEST folders. ', 'white', 'on_green', attrs=['bold']))
```

    Copying files: 3242 files [00:28, 114.30 files/s]

     All images split to TRAIN / VALIDATION / TEST folders. 
    

<div style="background-color:#fbf8cc; padding: 10px 10px 10px 10px; border-radius: 10px; box-shadow: 2px 2px 4px 0 rgba(0, 0, 0, 0.1);border:0px solid #0A2342; text-align:left">
    <p style="font-size:16px; font-family:tahoma; line-height: 2em; text-indent: 20px;">🔵 Count the images in each folder.


```python
# List of folders
folders = os.listdir(train_valid)

print(colored('Number of samples in each folder : ', 'green', attrs=['bold']))
for folder in folders :
    # A variable to store the count of images in each part
    counts = 0
    folder_path = os.path.join(train_valid, folder)
    for class_name in os.listdir(folder_path) :
        class_path = os.path.join(folder_path, class_name)
        counts += len(os.listdir(class_path))
    print(colored(f'{folder} : {counts}', 'blue', attrs=['bold']))
```

    Number of samples in each folder : 
    test : 490
    train : 2268
    val : 484
    
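
<div style="background-color:#fbf8cc; padding: 10px 10px 10px 10px; border-radius: 10px; box-shadow: 2px 2px 4px 0 rgba(0, 0, 0, 0.1);border:0px solid #0A2342; text-align:left">
    <p style="font-size:16px; font-family:tahoma; line-height: 2em; text-indent: 20px;">🔵 The same loop extends to a per-class breakdown, which helps verify that the 70/15/15 split preserved the class proportions. A small optional sketch reusing the variables above (not part of the original run):


```python
# Optional: per-class counts for each split, reusing train_valid and folders from above
for folder in folders :
    folder_path = os.path.join(train_valid, folder)
    for class_name in os.listdir(folder_path) :
        class_path = os.path.join(folder_path, class_name)
        print(f'{folder}/{class_name} : {len(os.listdir(class_path))}')
```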

# <a id='aug'></a> 
# <span style="background-color:#1d3461;background-size: cover;font-family:tahoma;font-size:180%;text-align:center;border-radius:15px 15px; padding:10px; border:solid 2px #09375b"><span style="color:red"><b> 3 | </b></span><span style="color:#ade8f4"><b> DATA AUGMENTATIONS

###### 🏠 [Table of Contents](#tbl_content)

<div style="background-color:#fbf8cc; padding: 10px 10px 10px 10px; border-radius: 10px; box-shadow: 2px 2px 4px 0 rgba(0, 0, 0, 0.1);border:0px solid #0A2342; text-align:left">
    <p style="font-size:16px; font-family:tahoma; line-height: 2em; text-indent: 20px;">🔵 Data augmentation is the process of artificially generating new data from existing data, primarily to train new machine learning (ML) models. It can address a variety of challenges when training a CNN model, such as limited or imbalanced data, overfitting, and variation and complexity: applying different transformations increases the size of the dataset and helps balance the classes.
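
<div style="background-color:#fbf8cc; padding: 10px 10px 10px 10px; border-radius: 10px; box-shadow: 2px 2px 4px 0 rgba(0, 0, 0, 0.1);border:0px solid #0A2342; text-align:left">
    <p style="font-size:16px; font-family:tahoma; line-height: 2em; text-indent: 20px;">🔵 As an aside, the same kinds of augmentations can also be applied on the fly with torchvision transforms, instead of writing augmented copies to disk as this notebook does. A minimal sketch under that assumption (the parameter values are illustrative, not taken from this notebook):


```python
# On-the-fly alternative (sketch): random flips and blur inside the transform pipeline
train_transform = transforms.Compose([
    transforms.Resize(img_size),
    transforms.RandomHorizontalFlip(p=0.5),                    # random mirror
    transforms.RandomVerticalFlip(p=0.5),                      # random upside-down flip
    transforms.GaussianBlur(kernel_size=5, sigma=(0.1, 2.0)),  # random Gaussian blur
    transforms.ToTensor(),
])
```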

<div style="background-color:#fbf8cc; padding: 10px 10px 10px 10px; border-radius: 10px; box-shadow: 2px 2px 4px 0 rgba(0, 0, 0, 0.1);border:0px solid #0A2342; text-align:left">
    <p style="font-size:16px; font-family:tahoma; line-height: 2em; text-indent: 20px;">🔵 Here we choose a sample image and plot it with each augmentation function to illustrate the changes.


```python
sample_image = os.path.join(Benign, 'Sap_013 (1).jpg')
```

## <a id='step31'></a>
## <span style="background-color:orange ;background-size: cover;font-family:tahoma;font-size:70%; font-weight: 900; text-align:left;border-radius:25px 25px; padding:10px; border:solid 2px #09375b"><span style="color:navy">3.1 | Blur

<div style="background-color:#fbf8cc; padding: 10px 10px 10px 10px; border-radius: 10px; box-shadow: 2px 2px 4px 0 rgba(0, 0, 0, 0.1);border:0px solid #0A2342; text-align:left">
    <p style="font-size:16px; font-family:tahoma; line-height: 2em; text-indent: 20px;">🔵 Blurring an image makes it less sharp and reduces its level of detail. The most common use of blurring is to remove noise from an image; another is to keep the most detailed part of the image and smooth out the less detailed areas. Image blurring is also called image smoothing.

<div style="background-color:#fbf8cc; padding: 10px 10px 10px 0px; border-radius: 10px; box-shadow: 2px 2px 4px 0 rgba(0, 0, 0, 0.1);border:0px solid #0A2342; text-align:left">
    <p style="font-size:16px; font-family:tahoma; line-height: 2em; text-indent: 20px;">🔵 <b>We use 3 kinds of blurring:</b></p>
    <ul style="font-size:16px; font-family:tahoma; line-height: 2em; text-indent: 20px;"> 1. OpenCV blur (smoothing) </ul>
    <ul style="font-size:16px; font-family:tahoma; line-height: 2em; text-indent: 20px;"> 2. Gaussian blur </ul>
    <ul style="font-size:16px; font-family:tahoma; line-height: 2em; text-indent: 20px;"> 3. Median blur </ul>


```python
def Blure_Filter(img, filter_type="blur", kernel=13):
    '''
    ### Filtering ###
    img: image
    filter_type: one of {'blur', 'gaussian', 'median'}
    kernel: kernel size (should be odd)
    '''
    if filter_type == "blur":
        return cv2.blur(img, (kernel, kernel))
    
    elif filter_type == "gaussian":
        return cv2.GaussianBlur(img, (kernel, kernel), 0)
    
    elif filter_type == "median":
        return cv2.medianBlur(img, kernel)
```
<div style="background-color:#fbf8cc; padding: 10px 10px 10px 0px; border-radius: 10px; box-shadow: 2px 2px 4px 0 rgba(0, 0, 0, 0.1);border:0px solid #0A2342; text-align:left">
497
    <p style="font-size:16px; font-family:tahoma; line-height: 2em; text-indent: 20px;">🔵 Represent <b>blur function</b> on sample image.
498
499
500
```python
501
plt.figure(figsize=(10, 2.25), dpi=400)
502
plt.suptitle('Blured samples', fontweight='bold', fontsize=15)
503
# Original image
504
plt.subplot(1, 4, 1)
505
img = cv2.imread(sample_image)
506
img = cv2.cvtColor(img, cv2.COLOR_BGR2RGB)
507
plt.imshow(img)
508
plt.axis('off')
509
plt.title('Original', fontweight='bold')
510
 # Blurs
511
 # List of filters
512
filters = ['blur', 'gaussian', 'median']
513
for filter in filters :
514
    indx = filters.index(filter)
515
    plt.subplot(1, 4, indx+2)
516
    filtered_img = Blure_Filter(img, filter_type=filter, kernel=13)
517
    plt.imshow(filtered_img)
518
    plt.axis('off')
519
    plt.title(filter, fontweight='bold')
520
```
521
522
523
    
524
![png](Blood_Cell_Cancer_files/Blood_Cell_Cancer_49_0.png)
525
    
526
527
528
## <a id='step32'></a>
## <span style="background-color:orange ;background-size: cover;font-family:tahoma;font-size:70%; font-weight: 900; text-align:left;border-radius:25px 25px; padding:10px; border:solid 2px #09375b"><span style="color:navy">3.2 | Noise

<div style="background-color:#fbf8cc; padding: 10px 10px 10px 10px; border-radius: 10px; box-shadow: 2px 2px 4px 0 rgba(0, 0, 0, 0.1);border:0px solid #0A2342; text-align:left">
    <p style="font-size:16px; font-family:tahoma; line-height: 2em; text-indent: 20px;">🔵 Noise is deliberately altering pixels so they differ from what they should have represented. Old-fashioned films are famous for having speckles, black and white pixels present where they should not be. This is noise!  
    Noise is one kind of imperfection that is particularly frustrating for machines compared to human understanding. While humans can easily ignore noise (or fit it within appropriate context), algorithms struggle. This is the root of so-called adversarial attacks, where small, human-imperceptible pixel changes can dramatically alter a neural network's ability to make an accurate prediction.

<div style="background-color:#fbf8cc; padding: 10px 10px 10px 0px; border-radius: 10px; box-shadow: 2px 2px 4px 0 rgba(0, 0, 0, 0.1);border:0px solid #0A2342; text-align:left">
    <p style="font-size:16px; font-family:tahoma; line-height: 2em; text-indent: 20px;">🔵 <b>We use 2 kinds of noise:</b></p>
    <ul style="font-size:16px; font-family:tahoma; line-height: 2em; text-indent: 20px;"> 1. Gaussian noise </ul>
    <ul style="font-size:16px; font-family:tahoma; line-height: 2em; text-indent: 20px;"> 2. Salt &amp; pepper noise </ul>

```python
def Add_Noise(img, noise_type="gauss"):
    '''
    ### Adding Noise ###
    img: image
    noise_type: {gauss: gaussian, sp: salt & pepper}
    '''
    if noise_type == "gauss": 
        mean = 0
        st = 0.5
        # Additive Gaussian noise (cast to uint8 before adding to the image)
        gauss = np.random.normal(mean, st, img.shape)
        gauss = gauss.astype('uint8')
        image = cv2.add(img, gauss)
        return image
    
    elif noise_type == "sp": 
        prob = 0.01
        black = np.array([0, 0, 0], dtype='uint8')
        white = np.array([255, 255, 255], dtype='uint8')

        # Set a random ~0.5% of pixels to black and ~0.5% to white
        probs = np.random.random(img.shape[:2])
        img[probs < (prob / 2)] = black
        img[probs > 1 - (prob / 2)] = white
        return img
```
<div style="background-color:#fbf8cc; padding: 10px 10px 10px 0px; border-radius: 10px; box-shadow: 2px 2px 4px 0 rgba(0, 0, 0, 0.1);border:0px solid #0A2342; text-align:left">
568
    <p style="font-size:16px; font-family:tahoma; line-height: 2em; text-indent: 20px;">🔵 Represent <b>Noise adding function</b> on sample image.
569
570
571
```python
572
plt.figure(figsize=(10, 2.75), dpi=400)
573
plt.suptitle('Noised samples', fontweight='bold', fontsize=15)
574
plt.subplot(1, 3, 1)
575
img = cv2.imread(sample_image)
576
img = cv2.cvtColor(img, cv2.COLOR_BGR2RGB)
577
plt.imshow(img)
578
plt.axis('off')
579
plt.title('Original', fontweight='bold')
580
581
noises = ['gauss', 'sp']
582
for noise in noises :
583
    indx = noises.index(noise)
584
    plt.subplot(1, 3, indx+2)
585
    noised_img = Add_Noise(img, noise_type=noise)
586
    plt.imshow(noised_img)
587
    plt.axis('off')
588
    plt.title(noise, fontweight='bold')
589
```
590
591
592
    
593
![png](Blood_Cell_Cancer_files/Blood_Cell_Cancer_55_0.png)
594
    
595
596
597
## <a id='step33'></a>
598
## <span style="background-color:orange ;background-size: cover;font-family:tahoma;font-size:70%; font-weight: 900; text-align:left;border-radius:25px 25px; padding:10px; border:solid 2px #09375b"><span style="color:navy">3.3 | Flip
599
600
<div style="background-color:#fbf8cc; padding: 10px 10px 10px 10px; border-radius: 10px; box-shadow: 2px 2px 4px 0 rgba(0, 0, 0, 0.1);border:0px solid #0A2342; text-align:left">
601
    <p style="font-size:16px; font-family:tahoma; line-height: 2em; text-indent: 20px;">🔵 Flipping an image (and its annotations) is a deceivingly simple technique that can improve model performance in substantial ways.
602
603
Our models are learning what collection of pixels and the relationship between those collections of pixels denote an object is in-frame. But machine learning models (like convolutional neural networks) have a tendency to be quite brittle: they might memorize a specific ordering of pixels describes an object, but if that same object is mirrored across the image, our models may struggle to recognize it.
604
605
Consider the orientation of your face when you are taking a selfie versus using the backwards lens on your camera: one interpretation may be mirrored while the other is not, yet they are still both your face. This mirroring of orientation is what we call flipping an image.
606
607
By creating several versions of our images in various orientations, we give our deep learning model more information to learn from without having to go through the time consuming process of collecting and labeling more training data.
608
609
<div style="background-color:#fbf8cc; padding: 10px 10px 10px 10px; border-radius: 10px; box-shadow: 2px 2px 4px 0 rgba(0, 0, 0, 0.1);border:0px solid #0A2342; text-align:left">
610
    <p style="font-size:16px; font-family:tahoma; line-height: 2em; text-indent: 20px;">🔵 <b>We use 3 kind of Fliping : </b></p>
611
    <ul style="font-size:16px; font-family:tahoma; line-height: 2em; text-indent: 20px;"> 1. X axis </ul>
612
    <ul style="font-size:16px; font-family:tahoma; line-height: 2em; text-indent: 20px;"> 2. Y axis </ul>
613
    <ul style="font-size:16px; font-family:tahoma; line-height: 2em; text-indent: 20px;"> 3. X & Y  </ul>
614
615
616

```python
def Flip(img, flip_code) :
    # cv2.flip: flip_code=0 flips around the x-axis (vertical flip),
    # flip_code=1 around the y-axis (horizontal flip), flip_code=-1 around both
    flipped_img = cv2.flip(img, flip_code)
    return flipped_img
```
<div style="background-color:#fbf8cc; padding: 10px 10px 10px 0px; border-radius: 10px; box-shadow: 2px 2px 4px 0 rgba(0, 0, 0, 0.1);border:0px solid #0A2342; text-align:left">
623
    <p style="font-size:16px; font-family:tahoma; line-height: 2em; text-indent: 20px;">🔵 Represent <b>Flip function</b> on sample image.
624
625
626
```python
627
plt.figure(figsize=(10, 2.75), dpi=400)
628
plt.suptitle('Flip a sample', fontweight='bold', fontsize=15)
629
630
plt.subplot(1, 4, 1)
631
img = cv2.imread(sample_image)
632
img = cv2.cvtColor(img, cv2.COLOR_BGR2RGB)
633
plt.imshow(img)
634
plt.axis('off')
635
plt.title('Original', fontweight='bold')
636
637
plt.subplot(1, 4, 2)
638
fliped = Flip(img, flip_code=0)
639
plt.imshow(fliped)
640
plt.axis('off')
641
plt.title('Horizontal Flip', fontweight='bold')
642
643
plt.subplot(1, 4, 3)
644
fliped = Flip(img, flip_code=1)
645
plt.imshow(fliped)
646
plt.axis('off')
647
plt.title('Vertical Flip', fontweight='bold')
648
649
plt.subplot(1, 4, 4)
650
fliped = Flip(img, flip_code=-1)
651
plt.imshow(fliped)
652
plt.axis('off')
653
plt.title('X&Y Flip', fontweight='bold')
654
plt.show()
655
```
656
657
658
    
659
![png](Blood_Cell_Cancer_files/Blood_Cell_Cancer_61_0.png)
660
    
661
662
663
## <a id='step34'></a>
## <span style="background-color:orange ;background-size: cover;font-family:tahoma;font-size:70%; font-weight: 900; text-align:left;border-radius:25px 25px; padding:10px; border:solid 2px #09375b"><span style="color:navy">3.4 | Apply Augmentations

<div style="background-color:#fbf8cc; padding: 10px 10px 10px 0px; border-radius: 10px; box-shadow: 2px 2px 4px 0 rgba(0, 0, 0, 0.1);border:0px solid #0A2342; text-align:left">
    <p style="font-size:16px; font-family:tahoma; line-height: 2em; text-indent: 20px;">🔵 OK! It's time to apply the functions above to the <b>train images</b>. We do this by defining a function that randomly chooses among the 3 kinds of augmentations and applies them to an image. It returns a <b>dictionary</b> whose <b>keys</b> are name suffixes for the new images and whose <b>values</b> are the augmented images.


```python
def Apply_Augmentations(img) :
    ''' Apply a random choice of augmentation functions to an image '''

    returned_augs = dict()

    AUGS = ['Blure', 'Noise', 'Flip']

    # How many augmentations are chosen?
    random_num = random.randint(1, 3)
    random_choice = random.choices(AUGS, k=random_num)
    # Remove duplicates, since random.choices samples with replacement
    random_choice = list(set(random_choice))

    for choice in random_choice :
        if choice == 'Blure' :
            filters = ['blur', 'gaussian', 'median']
            kernels = [5, 7, 9, 11]
            random_filter = random.choice(filters)
            random_kernel = random.choice(kernels)
            blured_img = Blure_Filter(img, filter_type=random_filter, kernel=random_kernel)
            new_name = '_blured'
            returned_augs[new_name] = blured_img

        elif choice == 'Noise' :
            noises = ['gauss', 'sp']
            random_noise = random.choice(noises)
            noised_img = Add_Noise(img, noise_type=random_noise)
            new_name = '_noised'
            returned_augs[new_name] = noised_img

        elif choice == 'Flip' :
            flip_codes = [-1, 0, 1]
            random_code = random.choice(flip_codes)
            flipped_img = Flip(img, flip_code=random_code)
            new_name = '_fliped'
            returned_augs[new_name] = flipped_img
            
    return returned_augs
```
<div style="background-color:#fbf8cc; padding: 10px 10px 10px 0px; border-radius: 10px; box-shadow: 2px 2px 4px 0 rgba(0, 0, 0, 0.1);border:0px solid #0A2342; text-align:left">
714
    <p style="font-size:16px; font-family:tahoma; line-height: 2em; text-indent: 20px;">🔵 Count images in train folder beforeand after of augmentation to find out how many images added to train folder.
715
716
717
```python
718
train_dir = os.path.join(train_valid, 'train')
719
num_samples_befor_aug = 0
720
721
for folder in os.listdir(train_dir) :
722
    folder_path = os.path.join(train_dir, folder)
723
    num_samples_befor_aug += len(os.listdir(folder_path))
724
725
print(colored(f' Number of samples in TRAIN folder befor Augmentation : {num_samples_befor_aug} ', 'black', 'on_white', attrs=['bold']))
726
```
727
728
     Number of samples in TRAIN folder befor Augmentation : 2268 
729
    
730
731
732


```python
for folder in os.listdir(train_dir) :
    folder_path = os.path.join(train_dir, folder)
    for img_name in tqdm(os.listdir(folder_path)) :
        img_path = os.path.join(folder_path, img_name)
        img = cv2.imread(img_path)
        returned = Apply_Augmentations(img)

        for exported_name, exported_image in returned.items() :
            # e.g. 1_left.jpg ---TO---> 1_left_blured.jpg
            new_name = img_name.split('.')[0] + exported_name + '.' + img_name.split('.')[-1]
            new_path = os.path.join(folder_path, new_name)
        
            # Save the new image
            cv2.imwrite(new_path, exported_image)


print(colored(' Augmentation Completed. ', 'white', 'on_green', attrs=['bold']))
```


      0%|          | 0/358 [00:00<?, ?it/s]


      0%|          | 0/685 [00:00<?, ?it/s]


      0%|          | 0/668 [00:00<?, ?it/s]


      0%|          | 0/557 [00:00<?, ?it/s]


     Augmentation Completed. 
    

```python
num_samples_after_aug = 0

for folder in os.listdir(train_dir) :
    folder_path = os.path.join(train_dir, folder)
    num_samples_after_aug += len(os.listdir(folder_path))

print(colored(f' Number of samples in TRAIN folder after Augmentation : {num_samples_after_aug} ', 'black', 'on_white', attrs=['bold']))
```

     Number of samples in TRAIN folder after Augmentation : 5917 
    


```python
print(colored(f' {num_samples_after_aug-num_samples_befor_aug} images added to train directory. ', 'white', 'on_blue', attrs=['bold']))
```

     3649 images added to train directory. 
    

# <a id='dataset'></a> 
# <span style="background-color:#1d3461;background-size: cover;font-family:tahoma;font-size:180%;text-align:center;border-radius:15px 15px; padding:10px; border:solid 2px #09375b"><span style="color:red"><b> 4 | </b></span><span style="color:#ade8f4"><b> DataSets and DataLoaders

###### 🏠 [Table of Contents](#tbl_content)

<div style="background-color:#fbf8cc; padding: 10px 10px 10px 0px; border-radius: 10px; box-shadow: 2px 2px 4px 0 rgba(0, 0, 0, 0.1);border:0px solid #0A2342; text-align:left">
    <p style="font-size:16px; font-family:tahoma; line-height: 2em; text-indent: 20px;">🔵 Now it's time to create a dataset of images with some transforms, and after that a DataLoader for each dataset.

## <a id='step41'></a>
## <span style="background-color:orange ;background-size: cover;font-family:tahoma;font-size:70%; font-weight: 900; text-align:left;border-radius:25px 25px; padding:10px; border:solid 2px #09375b"><span style="color:navy">4.1 | Create Datasets and DataLoaders

<div style="background-color:#fbf8cc; padding: 10px 10px 10px 10px; border-radius: 10px; box-shadow: 2px 2px 4px 0 rgba(0, 0, 0, 0.1);border:0px solid #0A2342; text-align:left">
    <p style="font-size:16px; font-family:tahoma; line-height: 2em; text-indent: 20px;">🔵 Torchvision supports common computer vision transformations in the torchvision.transforms and torchvision.transforms.v2 modules. Transforms can be used to transform or augment data for training or inference of different tasks (image classification, detection, segmentation, video classification).


```python
transform = transforms.Compose(
    [
        transforms.Resize(img_size),
        transforms.ToTensor()
    ]
)
```
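
<div style="background-color:#fbf8cc; padding: 10px 10px 10px 10px; border-radius: 10px; box-shadow: 2px 2px 4px 0 rgba(0, 0, 0, 0.1);border:0px solid #0A2342; text-align:left">
    <p style="font-size:16px; font-family:tahoma; line-height: 2em; text-indent: 20px;">🔵 Note that pre-trained models such as GoogLeNet are usually fed inputs normalized with the ImageNet channel statistics. A hedged variant of the pipeline above (this notebook itself proceeds without normalization):


```python
# Variant (sketch): add ImageNet normalization for the pre-trained backbone
normalized_transform = transforms.Compose(
    [
        transforms.Resize(img_size),
        transforms.ToTensor(),
        transforms.Normalize(mean=[0.485, 0.456, 0.406],   # ImageNet channel means
                             std=[0.229, 0.224, 0.225])    # ImageNet channel stds
    ]
)
```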


```python
############################# TRAIN #############################
# Dataset
train_ds = ImageFolder(root=os.path.join(train_valid, 'train'), transform=transform)

# DataLoader
train_loader = DataLoader(train_ds, batch_size=batch_size, shuffle=True)

print(colored('TRAIN Folder :\n', 'green', attrs=['bold']))
print(train_ds)

############################# VALIDATION #############################
# Dataset
valid_ds = ImageFolder(root=os.path.join(train_valid, 'val'), transform=transform)

# DataLoader
valid_loader = DataLoader(valid_ds, batch_size=batch_size, shuffle=True)

print(colored('VALID Folder :\n', 'green', attrs=['bold']))
print(valid_ds)

############################# TEST #############################
# Dataset
test_ds = ImageFolder(root=os.path.join(train_valid, 'test'), transform=transform)

# DataLoader
test_loader = DataLoader(test_ds, batch_size=batch_size, shuffle=True)

print(colored('TEST Folder :\n', 'green', attrs=['bold']))
print(test_ds)
```

    TRAIN Folder :
    
    Dataset ImageFolder
        Number of datapoints: 5917
        Root location: C:\envs\Working\Blood_Cell_Cancer\train_valid\train
        StandardTransform
    Transform: Compose(
                   Resize(size=(128, 128), interpolation=bilinear, max_size=None, antialias=True)
                   ToTensor()
               )
    VALID Folder :
    
    Dataset ImageFolder
        Number of datapoints: 484
        Root location: C:\envs\Working\Blood_Cell_Cancer\train_valid\val
        StandardTransform
    Transform: Compose(
                   Resize(size=(128, 128), interpolation=bilinear, max_size=None, antialias=True)
                   ToTensor()
               )
    TEST Folder :
    
    Dataset ImageFolder
        Number of datapoints: 490
        Root location: C:\envs\Working\Blood_Cell_Cancer\train_valid\test
        StandardTransform
    Transform: Compose(
                   Resize(size=(128, 128), interpolation=bilinear, max_size=None, antialias=True)
                   ToTensor()
               )
    
## <a id='step42'></a>
## <span style="background-color:orange ;background-size: cover;font-family:tahoma;font-size:70%; font-weight: 900; text-align:left;border-radius:25px 25px; padding:10px; border:solid 2px #09375b"><span style="color:navy">4.2 | Data Shapes

<div style="background-color:#fbf8cc; padding: 10px 10px 10px 0px; border-radius: 10px; box-shadow: 2px 2px 4px 0 rgba(0, 0, 0, 0.1);border:0px solid #0A2342; text-align:left">
    <p style="font-size:16px; font-family:tahoma; line-height: 2em; text-indent: 20px;">🔵 Read one batch from each loader (train_loader, valid_loader, test_loader) to show the shape of the batch and its data type.
```python
# Print the shape of one batch for each set
for key, value in {'Train': train_loader, "Validation": valid_loader, 'Test': test_loader}.items():
    for X, y in value:
        print(colored(f'{key}:', 'white','on_green', attrs=['bold']))
        print(f"Shape of images [Batch_size, Channels, Height, Width]: {X.shape}")
        print(f"Shape of y: {y.shape} {y.dtype}\n")
        print('-'*45)
        break
```

    Train:
    Shape of images [Batch_size, Channels, Height, Width]: torch.Size([64, 3, 128, 128])
    Shape of y: torch.Size([64]) torch.int64
    
    ---------------------------------------------
    Validation:
    Shape of images [Batch_size, Channels, Height, Width]: torch.Size([64, 3, 128, 128])
    Shape of y: torch.Size([64]) torch.int64
    
    ---------------------------------------------
    Test:
    Shape of images [Batch_size, Channels, Height, Width]: torch.Size([64, 3, 128, 128])
    Shape of y: torch.Size([64]) torch.int64
    
    ---------------------------------------------
    

# <a id='free'></a> 
# <span style="background-color:#1d3461;background-size: cover;font-family:tahoma;font-size:180%;text-align:center;border-radius:15px 15px; padding:10px; border:solid 2px #09375b"><span style="color:red"><b> 5 | </b></span><span style="color:#ade8f4"><b> Free up some space in RAM and GPU

###### 🏠 [Table of Contents](#tbl_content)

<div style="background-color:#fbf8cc; padding: 10px 10px 10px 0px; border-radius: 10px; box-shadow: 2px 2px 4px 0 rgba(0, 0, 0, 0.1);border:0px solid #0A2342; text-align:left">
    <p style="font-size:16px; font-family:tahoma; line-height: 2em; text-indent: 20px;">🔵 Because we defined lots of variables and functions, RAM may be filled with data we no longer need, and GPU memory may be filled too. In this part, by <b>deleting</b> unnecessary variables and calling <code>gc.collect</code> for RAM and <code>torch.cuda</code> for the GPU cache, we can free up some space for better performance.

## <a id='step51'></a>
## <span style="background-color:orange ;background-size: cover;font-family:tahoma;font-size:70%; font-weight: 900; text-align:left;border-radius:25px 25px; padding:10px; border:solid 2px #09375b"><span style="color:navy">5.1 | RAM


```python
del [ax, base_dir, Benign, class_name, class_path, counts, colors_dark, exported_image, exported_name, Early_Pre_B, fig, files, filter, filtered_img, filters, fliped, folder, folder_path]
del [folders, i, Images, img, indx, key, noise, noised_img, noises, num_classes, num_samples_after_aug, num_samples_befor_aug, Pre_B, Pro_B, random_images]
del [sample_image, train_dir, value, working_dir, X, y, returned, src]
del [img_name, img_path, img_size, new_name, new_path]

gc.collect()
```




    72827



## <a id='step52'></a>
## <span style="background-color:orange ;background-size: cover;font-family:tahoma;font-size:70%; font-weight: 900; text-align:left;border-radius:25px 25px; padding:10px; border:solid 2px #09375b"><span style="color:navy">5.2 | GPU


```python
torch.cuda.empty_cache()
```
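
<div style="background-color:#fbf8cc; padding: 10px 10px 10px 0px; border-radius: 10px; box-shadow: 2px 2px 4px 0 rgba(0, 0, 0, 0.1);border:0px solid #0A2342; text-align:left">
    <p style="font-size:16px; font-family:tahoma; line-height: 2em; text-indent: 20px;">🔵 To confirm that memory was actually released, we can query the CUDA allocator. A small optional check (not part of the original run):


```python
# Optional: report current GPU memory usage after emptying the cache
if torch.cuda.is_available():
    print(f'Allocated: {torch.cuda.memory_allocated() / 1024**2:.1f} MiB')
    print(f'Reserved : {torch.cuda.memory_reserved() / 1024**2:.1f} MiB')
```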

# <a id='model'></a> 
# <span style="background-color:#1d3461;background-size: cover;font-family:tahoma;font-size:180%;text-align:center;border-radius:15px 15px; padding:10px; border:solid 2px #09375b"><span style="color:red"><b> 6 | </b></span><span style="color:#ade8f4"><b> Model

###### 🏠 [Table of Contents](#tbl_content)

<div style="background-color:#fbf8cc; padding: 10px 10px 10px 10px; border-radius: 10px; box-shadow: 2px 2px 4px 0 rgba(0, 0, 0, 0.1);border:0px solid #0A2342; text-align:left">
    <p style="font-size:16px; font-family:tahoma; line-height: 2em; text-indent: 20px;">🔵 Instead of defining a new model from scratch, I prefer to use a <b>pre-trained</b> model, <b>GoogLeNet</b>, with its trained weights (<b>GoogLeNet_Weights</b>).   
    GoogLeNet (or Inception V1) was proposed by researchers at Google (in collaboration with various universities) in 2014 in the paper titled “Going Deeper with Convolutions”. This architecture won the ILSVRC 2014 image classification challenge, with a significant decrease in error rate compared to previous winners AlexNet (ILSVRC 2012) and ZF-Net (ILSVRC 2013), and a significantly lower error rate than VGG (the 2014 runner-up). The architecture uses techniques such as 1×1 convolutions in the middle of the network and global average pooling.

<img src='https://i.postimg.cc/x1tJbp5V/Xqv0n.jpg'>
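
<div style="background-color:#fbf8cc; padding: 10px 10px 10px 10px; border-radius: 10px; box-shadow: 2px 2px 4px 0 rgba(0, 0, 0, 0.1);border:0px solid #0A2342; text-align:left">
    <p style="font-size:16px; font-family:tahoma; line-height: 2em; text-indent: 20px;">🔵 As a minimal standalone illustration of the two techniques named above, 1×1 convolutions and global average pooling (the tensor shapes are illustrative, not taken from this notebook):


```python
# Sketch: a 1x1 convolution reduces channels; global average pooling collapses the spatial dims
x = torch.randn(1, 192, 28, 28)                  # e.g. a feature map with 192 channels
reduce_1x1 = nn.Conv2d(192, 64, kernel_size=1)   # 1x1 conv: 192 -> 64 channels, same H x W
gap = nn.AdaptiveAvgPool2d(1)                    # global average pooling: one value per channel

y = reduce_1x1(x)        # shape: [1, 64, 28, 28]
z = gap(y).flatten(1)    # shape: [1, 64]
print(y.shape, z.shape)
```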

## <a id='step61'></a>
## <span style="background-color:orange ;background-size: cover;font-family:tahoma;font-size:70%; font-weight: 900; text-align:left;border-radius:25px 25px; padding:10px; border:solid 2px #09375b"><span style="color:navy">6.1 | PreTrained Model


```python
# Load GoogLeNet with its default pre-trained ImageNet weights
model = googlenet(weights=GoogLeNet_Weights.DEFAULT)
model
```



    GoogLeNet(
      (conv1): BasicConv2d(
        (conv): Conv2d(3, 64, kernel_size=(7, 7), stride=(2, 2), padding=(3, 3), bias=False)
        (bn): BatchNorm2d(64, eps=0.001, momentum=0.1, affine=True, track_running_stats=True)
      )
      (maxpool1): MaxPool2d(kernel_size=3, stride=2, padding=0, dilation=1, ceil_mode=True)
      (conv2): BasicConv2d(
        (conv): Conv2d(64, 64, kernel_size=(1, 1), stride=(1, 1), bias=False)
        (bn): BatchNorm2d(64, eps=0.001, momentum=0.1, affine=True, track_running_stats=True)
      )
      (conv3): BasicConv2d(
        (conv): Conv2d(64, 192, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)
        (bn): BatchNorm2d(192, eps=0.001, momentum=0.1, affine=True, track_running_stats=True)
      )
      (maxpool2): MaxPool2d(kernel_size=3, stride=2, padding=0, dilation=1, ceil_mode=True)
      (inception3a): Inception(
        (branch1): BasicConv2d(
          (conv): Conv2d(192, 64, kernel_size=(1, 1), stride=(1, 1), bias=False)
          (bn): BatchNorm2d(64, eps=0.001, momentum=0.1, affine=True, track_running_stats=True)
        )
        (branch2): Sequential(
          (0): BasicConv2d(
            (conv): Conv2d(192, 96, kernel_size=(1, 1), stride=(1, 1), bias=False)
            (bn): BatchNorm2d(96, eps=0.001, momentum=0.1, affine=True, track_running_stats=True)
          )
          (1): BasicConv2d(
            (conv): Conv2d(96, 128, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)
            (bn): BatchNorm2d(128, eps=0.001, momentum=0.1, affine=True, track_running_stats=True)
          )
        )
        (branch3): Sequential(
          (0): BasicConv2d(
            (conv): Conv2d(192, 16, kernel_size=(1, 1), stride=(1, 1), bias=False)
            (bn): BatchNorm2d(16, eps=0.001, momentum=0.1, affine=True, track_running_stats=True)
          )
          (1): BasicConv2d(
            (conv): Conv2d(16, 32, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)
            (bn): BatchNorm2d(32, eps=0.001, momentum=0.1, affine=True, track_running_stats=True)
          )
        )
        (branch4): Sequential(
          (0): MaxPool2d(kernel_size=3, stride=1, padding=1, dilation=1, ceil_mode=True)
          (1): BasicConv2d(
            (conv): Conv2d(192, 32, kernel_size=(1, 1), stride=(1, 1), bias=False)
            (bn): BatchNorm2d(32, eps=0.001, momentum=0.1, affine=True, track_running_stats=True)
          )
        )
      )
      (inception3b): Inception(
        (branch1): BasicConv2d(
          (conv): Conv2d(256, 128, kernel_size=(1, 1), stride=(1, 1), bias=False)
          (bn): BatchNorm2d(128, eps=0.001, momentum=0.1, affine=True, track_running_stats=True)
        )
        (branch2): Sequential(
          (0): BasicConv2d(
            (conv): Conv2d(256, 128, kernel_size=(1, 1), stride=(1, 1), bias=False)
            (bn): BatchNorm2d(128, eps=0.001, momentum=0.1, affine=True, track_running_stats=True)
          )
          (1): BasicConv2d(
            (conv): Conv2d(128, 192, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)
            (bn): BatchNorm2d(192, eps=0.001, momentum=0.1, affine=True, track_running_stats=True)
          )
        )
        (branch3): Sequential(
          (0): BasicConv2d(
            (conv): Conv2d(256, 32, kernel_size=(1, 1), stride=(1, 1), bias=False)
            (bn): BatchNorm2d(32, eps=0.001, momentum=0.1, affine=True, track_running_stats=True)
          )
          (1): BasicConv2d(
            (conv): Conv2d(32, 96, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)
            (bn): BatchNorm2d(96, eps=0.001, momentum=0.1, affine=True, track_running_stats=True)
          )
        )
        (branch4): Sequential(
          (0): MaxPool2d(kernel_size=3, stride=1, padding=1, dilation=1, ceil_mode=True)
          (1): BasicConv2d(
            (conv): Conv2d(256, 64, kernel_size=(1, 1), stride=(1, 1), bias=False)
            (bn): BatchNorm2d(64, eps=0.001, momentum=0.1, affine=True, track_running_stats=True)
          )
        )
      )
      (maxpool3): MaxPool2d(kernel_size=3, stride=2, padding=0, dilation=1, ceil_mode=True)
      (inception4a): Inception(
        (branch1): BasicConv2d(
          (conv): Conv2d(480, 192, kernel_size=(1, 1), stride=(1, 1), bias=False)
          (bn): BatchNorm2d(192, eps=0.001, momentum=0.1, affine=True, track_running_stats=True)
        )
        (branch2): Sequential(
          (0): BasicConv2d(
            (conv): Conv2d(480, 96, kernel_size=(1, 1), stride=(1, 1), bias=False)
            (bn): BatchNorm2d(96, eps=0.001, momentum=0.1, affine=True, track_running_stats=True)
          )
          (1): BasicConv2d(
            (conv): Conv2d(96, 208, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)
            (bn): BatchNorm2d(208, eps=0.001, momentum=0.1, affine=True, track_running_stats=True)
          )
        )
        (branch3): Sequential(
          (0): BasicConv2d(
            (conv): Conv2d(480, 16, kernel_size=(1, 1), stride=(1, 1), bias=False)
            (bn): BatchNorm2d(16, eps=0.001, momentum=0.1, affine=True, track_running_stats=True)
          )
          (1): BasicConv2d(
            (conv): Conv2d(16, 48, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)
            (bn): BatchNorm2d(48, eps=0.001, momentum=0.1, affine=True, track_running_stats=True)
          )
        )
        (branch4): Sequential(
          (0): MaxPool2d(kernel_size=3, stride=1, padding=1, dilation=1, ceil_mode=True)
          (1): BasicConv2d(
            (conv): Conv2d(480, 64, kernel_size=(1, 1), stride=(1, 1), bias=False)
            (bn): BatchNorm2d(64, eps=0.001, momentum=0.1, affine=True, track_running_stats=True)
          )
        )
      )
      (inception4b): Inception(
        (branch1): BasicConv2d(
          (conv): Conv2d(512, 160, kernel_size=(1, 1), stride=(1, 1), bias=False)
          (bn): BatchNorm2d(160, eps=0.001, momentum=0.1, affine=True, track_running_stats=True)
        )
        (branch2): Sequential(
          (0): BasicConv2d(
            (conv): Conv2d(512, 112, kernel_size=(1, 1), stride=(1, 1), bias=False)
            (bn): BatchNorm2d(112, eps=0.001, momentum=0.1, affine=True, track_running_stats=True)
          )
          (1): BasicConv2d(
            (conv): Conv2d(112, 224, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)
            (bn): BatchNorm2d(224, eps=0.001, momentum=0.1, affine=True, track_running_stats=True)
          )
        )
        (branch3): Sequential(
          (0): BasicConv2d(
            (conv): Conv2d(512, 24, kernel_size=(1, 1), stride=(1, 1), bias=False)
            (bn): BatchNorm2d(24, eps=0.001, momentum=0.1, affine=True, track_running_stats=True)
          )
          (1): BasicConv2d(
            (conv): Conv2d(24, 64, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)
            (bn): BatchNorm2d(64, eps=0.001, momentum=0.1, affine=True, track_running_stats=True)
          )
        )
        (branch4): Sequential(
          (0): MaxPool2d(kernel_size=3, stride=1, padding=1, dilation=1, ceil_mode=True)
          (1): BasicConv2d(
            (conv): Conv2d(512, 64, kernel_size=(1, 1), stride=(1, 1), bias=False)
            (bn): BatchNorm2d(64, eps=0.001, momentum=0.1, affine=True, track_running_stats=True)
          )
        )
      )
      (inception4c): Inception(
        (branch1): BasicConv2d(
          (conv): Conv2d(512, 128, kernel_size=(1, 1), stride=(1, 1), bias=False)
          (bn): BatchNorm2d(128, eps=0.001, momentum=0.1, affine=True, track_running_stats=True)
        )
        (branch2): Sequential(
          (0): BasicConv2d(
            (conv): Conv2d(512, 128, kernel_size=(1, 1), stride=(1, 1), bias=False)
            (bn): BatchNorm2d(128, eps=0.001, momentum=0.1, affine=True, track_running_stats=True)
          )
          (1): BasicConv2d(
            (conv): Conv2d(128, 256, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)
            (bn): BatchNorm2d(256, eps=0.001, momentum=0.1, affine=True, track_running_stats=True)
          )
        )
        (branch3): Sequential(
          (0): BasicConv2d(
            (conv): Conv2d(512, 24, kernel_size=(1, 1), stride=(1, 1), bias=False)
            (bn): BatchNorm2d(24, eps=0.001, momentum=0.1, affine=True, track_running_stats=True)
          )
          (1): BasicConv2d(
            (conv): Conv2d(24, 64, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)
            (bn): BatchNorm2d(64, eps=0.001, momentum=0.1, affine=True, track_running_stats=True)
          )
        )
        (branch4): Sequential(
          (0): MaxPool2d(kernel_size=3, stride=1, padding=1, dilation=1, ceil_mode=True)
          (1): BasicConv2d(
            (conv): Conv2d(512, 64, kernel_size=(1, 1), stride=(1, 1), bias=False)
            (bn): BatchNorm2d(64, eps=0.001, momentum=0.1, affine=True, track_running_stats=True)
          )
        )
      )
      (inception4d): Inception(
        (branch1): BasicConv2d(
          (conv): Conv2d(512, 112, kernel_size=(1, 1), stride=(1, 1), bias=False)
          (bn): BatchNorm2d(112, eps=0.001, momentum=0.1, affine=True, track_running_stats=True)
        )
        (branch2): Sequential(
          (0): BasicConv2d(
            (conv): Conv2d(512, 144, kernel_size=(1, 1), stride=(1, 1), bias=False)
            (bn): BatchNorm2d(144, eps=0.001, momentum=0.1, affine=True, track_running_stats=True)
          )
1167
          (1): BasicConv2d(
1168
            (conv): Conv2d(144, 288, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)
1169
            (bn): BatchNorm2d(288, eps=0.001, momentum=0.1, affine=True, track_running_stats=True)
1170
          )
1171
        )
1172
        (branch3): Sequential(
1173
          (0): BasicConv2d(
1174
            (conv): Conv2d(512, 32, kernel_size=(1, 1), stride=(1, 1), bias=False)
1175
            (bn): BatchNorm2d(32, eps=0.001, momentum=0.1, affine=True, track_running_stats=True)
1176
          )
1177
          (1): BasicConv2d(
1178
            (conv): Conv2d(32, 64, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)
1179
            (bn): BatchNorm2d(64, eps=0.001, momentum=0.1, affine=True, track_running_stats=True)
1180
          )
1181
        )
1182
        (branch4): Sequential(
1183
          (0): MaxPool2d(kernel_size=3, stride=1, padding=1, dilation=1, ceil_mode=True)
1184
          (1): BasicConv2d(
1185
            (conv): Conv2d(512, 64, kernel_size=(1, 1), stride=(1, 1), bias=False)
1186
            (bn): BatchNorm2d(64, eps=0.001, momentum=0.1, affine=True, track_running_stats=True)
1187
          )
1188
        )
1189
      )
1190
      (inception4e): Inception(
1191
        (branch1): BasicConv2d(
1192
          (conv): Conv2d(528, 256, kernel_size=(1, 1), stride=(1, 1), bias=False)
1193
          (bn): BatchNorm2d(256, eps=0.001, momentum=0.1, affine=True, track_running_stats=True)
1194
        )
1195
        (branch2): Sequential(
1196
          (0): BasicConv2d(
1197
            (conv): Conv2d(528, 160, kernel_size=(1, 1), stride=(1, 1), bias=False)
1198
            (bn): BatchNorm2d(160, eps=0.001, momentum=0.1, affine=True, track_running_stats=True)
1199
          )
1200
          (1): BasicConv2d(
1201
            (conv): Conv2d(160, 320, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)
1202
            (bn): BatchNorm2d(320, eps=0.001, momentum=0.1, affine=True, track_running_stats=True)
1203
          )
1204
        )
1205
        (branch3): Sequential(
1206
          (0): BasicConv2d(
1207
            (conv): Conv2d(528, 32, kernel_size=(1, 1), stride=(1, 1), bias=False)
1208
            (bn): BatchNorm2d(32, eps=0.001, momentum=0.1, affine=True, track_running_stats=True)
1209
          )
1210
          (1): BasicConv2d(
1211
            (conv): Conv2d(32, 128, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)
1212
            (bn): BatchNorm2d(128, eps=0.001, momentum=0.1, affine=True, track_running_stats=True)
1213
          )
1214
        )
1215
        (branch4): Sequential(
1216
          (0): MaxPool2d(kernel_size=3, stride=1, padding=1, dilation=1, ceil_mode=True)
1217
          (1): BasicConv2d(
1218
            (conv): Conv2d(528, 128, kernel_size=(1, 1), stride=(1, 1), bias=False)
1219
            (bn): BatchNorm2d(128, eps=0.001, momentum=0.1, affine=True, track_running_stats=True)
1220
          )
1221
        )
1222
      )
1223
      (maxpool4): MaxPool2d(kernel_size=2, stride=2, padding=0, dilation=1, ceil_mode=True)
1224
      (inception5a): Inception(
1225
        (branch1): BasicConv2d(
1226
          (conv): Conv2d(832, 256, kernel_size=(1, 1), stride=(1, 1), bias=False)
1227
          (bn): BatchNorm2d(256, eps=0.001, momentum=0.1, affine=True, track_running_stats=True)
1228
        )
1229
        (branch2): Sequential(
1230
          (0): BasicConv2d(
1231
            (conv): Conv2d(832, 160, kernel_size=(1, 1), stride=(1, 1), bias=False)
1232
            (bn): BatchNorm2d(160, eps=0.001, momentum=0.1, affine=True, track_running_stats=True)
1233
          )
1234
          (1): BasicConv2d(
1235
            (conv): Conv2d(160, 320, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)
1236
            (bn): BatchNorm2d(320, eps=0.001, momentum=0.1, affine=True, track_running_stats=True)
1237
          )
1238
        )
1239
        (branch3): Sequential(
1240
          (0): BasicConv2d(
1241
            (conv): Conv2d(832, 32, kernel_size=(1, 1), stride=(1, 1), bias=False)
1242
            (bn): BatchNorm2d(32, eps=0.001, momentum=0.1, affine=True, track_running_stats=True)
1243
          )
1244
          (1): BasicConv2d(
1245
            (conv): Conv2d(32, 128, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)
1246
            (bn): BatchNorm2d(128, eps=0.001, momentum=0.1, affine=True, track_running_stats=True)
1247
          )
1248
        )
1249
        (branch4): Sequential(
1250
          (0): MaxPool2d(kernel_size=3, stride=1, padding=1, dilation=1, ceil_mode=True)
1251
          (1): BasicConv2d(
1252
            (conv): Conv2d(832, 128, kernel_size=(1, 1), stride=(1, 1), bias=False)
1253
            (bn): BatchNorm2d(128, eps=0.001, momentum=0.1, affine=True, track_running_stats=True)
1254
          )
1255
        )
1256
      )
1257
      (inception5b): Inception(
1258
        (branch1): BasicConv2d(
1259
          (conv): Conv2d(832, 384, kernel_size=(1, 1), stride=(1, 1), bias=False)
1260
          (bn): BatchNorm2d(384, eps=0.001, momentum=0.1, affine=True, track_running_stats=True)
1261
        )
1262
        (branch2): Sequential(
1263
          (0): BasicConv2d(
1264
            (conv): Conv2d(832, 192, kernel_size=(1, 1), stride=(1, 1), bias=False)
1265
            (bn): BatchNorm2d(192, eps=0.001, momentum=0.1, affine=True, track_running_stats=True)
1266
          )
1267
          (1): BasicConv2d(
1268
            (conv): Conv2d(192, 384, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)
1269
            (bn): BatchNorm2d(384, eps=0.001, momentum=0.1, affine=True, track_running_stats=True)
1270
          )
1271
        )
1272
        (branch3): Sequential(
1273
          (0): BasicConv2d(
1274
            (conv): Conv2d(832, 48, kernel_size=(1, 1), stride=(1, 1), bias=False)
1275
            (bn): BatchNorm2d(48, eps=0.001, momentum=0.1, affine=True, track_running_stats=True)
1276
          )
1277
          (1): BasicConv2d(
1278
            (conv): Conv2d(48, 128, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)
1279
            (bn): BatchNorm2d(128, eps=0.001, momentum=0.1, affine=True, track_running_stats=True)
1280
          )
1281
        )
1282
        (branch4): Sequential(
1283
          (0): MaxPool2d(kernel_size=3, stride=1, padding=1, dilation=1, ceil_mode=True)
1284
          (1): BasicConv2d(
1285
            (conv): Conv2d(832, 128, kernel_size=(1, 1), stride=(1, 1), bias=False)
1286
            (bn): BatchNorm2d(128, eps=0.001, momentum=0.1, affine=True, track_running_stats=True)
1287
          )
1288
        )
1289
      )
1290
      (aux1): None
1291
      (aux2): None
1292
      (avgpool): AdaptiveAvgPool2d(output_size=(1, 1))
1293
      (dropout): Dropout(p=0.2, inplace=False)
1294
      (fc): Linear(in_features=1024, out_features=1000, bias=True)
1295
    )
1296
1297
1298
1299
## <a id='step62'></a>
## <span style="background-color:orange ;background-size: cover;font-family:tahoma;font-size:70%; font-weight: 900; text-align:left;border-radius:25px 25px; padding:10px; border:solid 2px #09375b"><span style="color:navy">6.2 | Change Last Layer (fc)

<div style="background-color:#fbf8cc; padding: 10px 10px 10px 0px; border-radius: 10px; box-shadow: 2px 2px 4px 0 rgba(0, 0, 0, 0.1);border:0px solid #0A2342; text-align:left">
    <p style="font-size:16px; font-family:tahoma; line-height: 2em; text-indent: 20px;">🔵 The GoogLeNet head ends in 1000 output neurons (the ImageNet classes), but our model needs 4 outputs, one per class. So we replace the <b>fc</b> part of GoogLeNet with a small Sequential fully connected network.

```python
model.fc = nn.Sequential(
    nn.Linear(in_features=1024, out_features=512),
    nn.ReLU(),
    nn.Dropout(0.2),
    nn.Linear(in_features=512, out_features=128),
    nn.ReLU(),
    nn.Dropout(0.2),
    nn.Linear(in_features=128, out_features=64),
    nn.ReLU(),
    nn.Dropout(0.2),
    nn.Linear(in_features=64, out_features=4)
)
```

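A quick way to confirm the new head is wired correctly is a dummy forward pass. This is a minimal sketch, not part of the original notebook; the 224x224 RGB input size is an assumption:

```python
# Sanity check (sketch): the modified model should emit 4 logits per image
x = torch.randn(2, 3, 224, 224)  # hypothetical dummy batch of 224x224 RGB images
model.eval()                     # eval mode returns a plain tensor
with torch.no_grad():
    print(model(x).shape)        # expected: torch.Size([2, 4])
```
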
<div style="background-color:#fbf8cc; padding: 10px 10px 10px 0px; border-radius: 10px; box-shadow: 2px 2px 4px 0 rgba(0, 0, 0, 0.1);border:0px solid #0A2342; text-align:left">
    <p style="font-size:16px; font-family:tahoma; line-height: 2em; text-indent: 20px;">🔵 It's time for our first use of the GPU! Move the model to the GPU to accelerate training.

```python
model.to(device)
```

    GoogLeNet(
      (conv1): BasicConv2d(
        (conv): Conv2d(3, 64, kernel_size=(7, 7), stride=(2, 2), padding=(3, 3), bias=False)
        (bn): BatchNorm2d(64, eps=0.001, momentum=0.1, affine=True, track_running_stats=True)
      )
      (maxpool1): MaxPool2d(kernel_size=3, stride=2, padding=0, dilation=1, ceil_mode=True)
      (conv2): BasicConv2d(
        (conv): Conv2d(64, 64, kernel_size=(1, 1), stride=(1, 1), bias=False)
        (bn): BatchNorm2d(64, eps=0.001, momentum=0.1, affine=True, track_running_stats=True)
      )
      (conv3): BasicConv2d(
        (conv): Conv2d(64, 192, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)
        (bn): BatchNorm2d(192, eps=0.001, momentum=0.1, affine=True, track_running_stats=True)
      )
      (maxpool2): MaxPool2d(kernel_size=3, stride=2, padding=0, dilation=1, ceil_mode=True)
      (inception3a): Inception(
        (branch1): BasicConv2d(
          (conv): Conv2d(192, 64, kernel_size=(1, 1), stride=(1, 1), bias=False)
          (bn): BatchNorm2d(64, eps=0.001, momentum=0.1, affine=True, track_running_stats=True)
        )
        (branch2): Sequential(
          (0): BasicConv2d(
            (conv): Conv2d(192, 96, kernel_size=(1, 1), stride=(1, 1), bias=False)
            (bn): BatchNorm2d(96, eps=0.001, momentum=0.1, affine=True, track_running_stats=True)
          )
          (1): BasicConv2d(
            (conv): Conv2d(96, 128, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)
            (bn): BatchNorm2d(128, eps=0.001, momentum=0.1, affine=True, track_running_stats=True)
          )
        )
        (branch3): Sequential(
          (0): BasicConv2d(
            (conv): Conv2d(192, 16, kernel_size=(1, 1), stride=(1, 1), bias=False)
            (bn): BatchNorm2d(16, eps=0.001, momentum=0.1, affine=True, track_running_stats=True)
          )
          (1): BasicConv2d(
            (conv): Conv2d(16, 32, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)
            (bn): BatchNorm2d(32, eps=0.001, momentum=0.1, affine=True, track_running_stats=True)
          )
        )
        (branch4): Sequential(
          (0): MaxPool2d(kernel_size=3, stride=1, padding=1, dilation=1, ceil_mode=True)
          (1): BasicConv2d(
            (conv): Conv2d(192, 32, kernel_size=(1, 1), stride=(1, 1), bias=False)
            (bn): BatchNorm2d(32, eps=0.001, momentum=0.1, affine=True, track_running_stats=True)
          )
        )
      )
      (inception3b): Inception(
        (branch1): BasicConv2d(
          (conv): Conv2d(256, 128, kernel_size=(1, 1), stride=(1, 1), bias=False)
          (bn): BatchNorm2d(128, eps=0.001, momentum=0.1, affine=True, track_running_stats=True)
        )
        (branch2): Sequential(
          (0): BasicConv2d(
            (conv): Conv2d(256, 128, kernel_size=(1, 1), stride=(1, 1), bias=False)
            (bn): BatchNorm2d(128, eps=0.001, momentum=0.1, affine=True, track_running_stats=True)
          )
          (1): BasicConv2d(
            (conv): Conv2d(128, 192, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)
            (bn): BatchNorm2d(192, eps=0.001, momentum=0.1, affine=True, track_running_stats=True)
          )
        )
        (branch3): Sequential(
          (0): BasicConv2d(
            (conv): Conv2d(256, 32, kernel_size=(1, 1), stride=(1, 1), bias=False)
            (bn): BatchNorm2d(32, eps=0.001, momentum=0.1, affine=True, track_running_stats=True)
          )
          (1): BasicConv2d(
            (conv): Conv2d(32, 96, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)
            (bn): BatchNorm2d(96, eps=0.001, momentum=0.1, affine=True, track_running_stats=True)
          )
        )
        (branch4): Sequential(
          (0): MaxPool2d(kernel_size=3, stride=1, padding=1, dilation=1, ceil_mode=True)
          (1): BasicConv2d(
            (conv): Conv2d(256, 64, kernel_size=(1, 1), stride=(1, 1), bias=False)
            (bn): BatchNorm2d(64, eps=0.001, momentum=0.1, affine=True, track_running_stats=True)
          )
        )
      )
      (maxpool3): MaxPool2d(kernel_size=3, stride=2, padding=0, dilation=1, ceil_mode=True)
      (inception4a): Inception(
        (branch1): BasicConv2d(
          (conv): Conv2d(480, 192, kernel_size=(1, 1), stride=(1, 1), bias=False)
          (bn): BatchNorm2d(192, eps=0.001, momentum=0.1, affine=True, track_running_stats=True)
        )
        (branch2): Sequential(
          (0): BasicConv2d(
            (conv): Conv2d(480, 96, kernel_size=(1, 1), stride=(1, 1), bias=False)
            (bn): BatchNorm2d(96, eps=0.001, momentum=0.1, affine=True, track_running_stats=True)
          )
          (1): BasicConv2d(
            (conv): Conv2d(96, 208, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)
            (bn): BatchNorm2d(208, eps=0.001, momentum=0.1, affine=True, track_running_stats=True)
          )
        )
        (branch3): Sequential(
          (0): BasicConv2d(
            (conv): Conv2d(480, 16, kernel_size=(1, 1), stride=(1, 1), bias=False)
            (bn): BatchNorm2d(16, eps=0.001, momentum=0.1, affine=True, track_running_stats=True)
          )
          (1): BasicConv2d(
            (conv): Conv2d(16, 48, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)
            (bn): BatchNorm2d(48, eps=0.001, momentum=0.1, affine=True, track_running_stats=True)
          )
        )
        (branch4): Sequential(
          (0): MaxPool2d(kernel_size=3, stride=1, padding=1, dilation=1, ceil_mode=True)
          (1): BasicConv2d(
            (conv): Conv2d(480, 64, kernel_size=(1, 1), stride=(1, 1), bias=False)
            (bn): BatchNorm2d(64, eps=0.001, momentum=0.1, affine=True, track_running_stats=True)
          )
        )
      )
      (inception4b): Inception(
        (branch1): BasicConv2d(
          (conv): Conv2d(512, 160, kernel_size=(1, 1), stride=(1, 1), bias=False)
          (bn): BatchNorm2d(160, eps=0.001, momentum=0.1, affine=True, track_running_stats=True)
        )
        (branch2): Sequential(
          (0): BasicConv2d(
            (conv): Conv2d(512, 112, kernel_size=(1, 1), stride=(1, 1), bias=False)
            (bn): BatchNorm2d(112, eps=0.001, momentum=0.1, affine=True, track_running_stats=True)
          )
          (1): BasicConv2d(
            (conv): Conv2d(112, 224, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)
            (bn): BatchNorm2d(224, eps=0.001, momentum=0.1, affine=True, track_running_stats=True)
          )
        )
        (branch3): Sequential(
          (0): BasicConv2d(
            (conv): Conv2d(512, 24, kernel_size=(1, 1), stride=(1, 1), bias=False)
            (bn): BatchNorm2d(24, eps=0.001, momentum=0.1, affine=True, track_running_stats=True)
          )
          (1): BasicConv2d(
            (conv): Conv2d(24, 64, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)
            (bn): BatchNorm2d(64, eps=0.001, momentum=0.1, affine=True, track_running_stats=True)
          )
        )
        (branch4): Sequential(
          (0): MaxPool2d(kernel_size=3, stride=1, padding=1, dilation=1, ceil_mode=True)
          (1): BasicConv2d(
            (conv): Conv2d(512, 64, kernel_size=(1, 1), stride=(1, 1), bias=False)
            (bn): BatchNorm2d(64, eps=0.001, momentum=0.1, affine=True, track_running_stats=True)
          )
        )
      )
      (inception4c): Inception(
        (branch1): BasicConv2d(
          (conv): Conv2d(512, 128, kernel_size=(1, 1), stride=(1, 1), bias=False)
          (bn): BatchNorm2d(128, eps=0.001, momentum=0.1, affine=True, track_running_stats=True)
        )
        (branch2): Sequential(
          (0): BasicConv2d(
            (conv): Conv2d(512, 128, kernel_size=(1, 1), stride=(1, 1), bias=False)
            (bn): BatchNorm2d(128, eps=0.001, momentum=0.1, affine=True, track_running_stats=True)
          )
          (1): BasicConv2d(
            (conv): Conv2d(128, 256, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)
            (bn): BatchNorm2d(256, eps=0.001, momentum=0.1, affine=True, track_running_stats=True)
          )
        )
        (branch3): Sequential(
          (0): BasicConv2d(
            (conv): Conv2d(512, 24, kernel_size=(1, 1), stride=(1, 1), bias=False)
            (bn): BatchNorm2d(24, eps=0.001, momentum=0.1, affine=True, track_running_stats=True)
          )
          (1): BasicConv2d(
            (conv): Conv2d(24, 64, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)
            (bn): BatchNorm2d(64, eps=0.001, momentum=0.1, affine=True, track_running_stats=True)
          )
        )
        (branch4): Sequential(
          (0): MaxPool2d(kernel_size=3, stride=1, padding=1, dilation=1, ceil_mode=True)
          (1): BasicConv2d(
            (conv): Conv2d(512, 64, kernel_size=(1, 1), stride=(1, 1), bias=False)
            (bn): BatchNorm2d(64, eps=0.001, momentum=0.1, affine=True, track_running_stats=True)
          )
        )
      )
      (inception4d): Inception(
        (branch1): BasicConv2d(
          (conv): Conv2d(512, 112, kernel_size=(1, 1), stride=(1, 1), bias=False)
          (bn): BatchNorm2d(112, eps=0.001, momentum=0.1, affine=True, track_running_stats=True)
        )
        (branch2): Sequential(
          (0): BasicConv2d(
            (conv): Conv2d(512, 144, kernel_size=(1, 1), stride=(1, 1), bias=False)
            (bn): BatchNorm2d(144, eps=0.001, momentum=0.1, affine=True, track_running_stats=True)
          )
          (1): BasicConv2d(
            (conv): Conv2d(144, 288, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)
            (bn): BatchNorm2d(288, eps=0.001, momentum=0.1, affine=True, track_running_stats=True)
          )
        )
        (branch3): Sequential(
          (0): BasicConv2d(
            (conv): Conv2d(512, 32, kernel_size=(1, 1), stride=(1, 1), bias=False)
            (bn): BatchNorm2d(32, eps=0.001, momentum=0.1, affine=True, track_running_stats=True)
          )
          (1): BasicConv2d(
            (conv): Conv2d(32, 64, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)
            (bn): BatchNorm2d(64, eps=0.001, momentum=0.1, affine=True, track_running_stats=True)
          )
        )
        (branch4): Sequential(
          (0): MaxPool2d(kernel_size=3, stride=1, padding=1, dilation=1, ceil_mode=True)
          (1): BasicConv2d(
            (conv): Conv2d(512, 64, kernel_size=(1, 1), stride=(1, 1), bias=False)
            (bn): BatchNorm2d(64, eps=0.001, momentum=0.1, affine=True, track_running_stats=True)
          )
        )
      )
      (inception4e): Inception(
        (branch1): BasicConv2d(
          (conv): Conv2d(528, 256, kernel_size=(1, 1), stride=(1, 1), bias=False)
          (bn): BatchNorm2d(256, eps=0.001, momentum=0.1, affine=True, track_running_stats=True)
        )
        (branch2): Sequential(
          (0): BasicConv2d(
            (conv): Conv2d(528, 160, kernel_size=(1, 1), stride=(1, 1), bias=False)
            (bn): BatchNorm2d(160, eps=0.001, momentum=0.1, affine=True, track_running_stats=True)
          )
          (1): BasicConv2d(
            (conv): Conv2d(160, 320, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)
            (bn): BatchNorm2d(320, eps=0.001, momentum=0.1, affine=True, track_running_stats=True)
          )
        )
        (branch3): Sequential(
          (0): BasicConv2d(
            (conv): Conv2d(528, 32, kernel_size=(1, 1), stride=(1, 1), bias=False)
            (bn): BatchNorm2d(32, eps=0.001, momentum=0.1, affine=True, track_running_stats=True)
          )
          (1): BasicConv2d(
            (conv): Conv2d(32, 128, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)
            (bn): BatchNorm2d(128, eps=0.001, momentum=0.1, affine=True, track_running_stats=True)
          )
        )
        (branch4): Sequential(
          (0): MaxPool2d(kernel_size=3, stride=1, padding=1, dilation=1, ceil_mode=True)
          (1): BasicConv2d(
            (conv): Conv2d(528, 128, kernel_size=(1, 1), stride=(1, 1), bias=False)
            (bn): BatchNorm2d(128, eps=0.001, momentum=0.1, affine=True, track_running_stats=True)
          )
        )
      )
      (maxpool4): MaxPool2d(kernel_size=2, stride=2, padding=0, dilation=1, ceil_mode=True)
      (inception5a): Inception(
        (branch1): BasicConv2d(
          (conv): Conv2d(832, 256, kernel_size=(1, 1), stride=(1, 1), bias=False)
          (bn): BatchNorm2d(256, eps=0.001, momentum=0.1, affine=True, track_running_stats=True)
        )
        (branch2): Sequential(
          (0): BasicConv2d(
            (conv): Conv2d(832, 160, kernel_size=(1, 1), stride=(1, 1), bias=False)
            (bn): BatchNorm2d(160, eps=0.001, momentum=0.1, affine=True, track_running_stats=True)
          )
          (1): BasicConv2d(
            (conv): Conv2d(160, 320, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)
            (bn): BatchNorm2d(320, eps=0.001, momentum=0.1, affine=True, track_running_stats=True)
          )
        )
        (branch3): Sequential(
          (0): BasicConv2d(
            (conv): Conv2d(832, 32, kernel_size=(1, 1), stride=(1, 1), bias=False)
            (bn): BatchNorm2d(32, eps=0.001, momentum=0.1, affine=True, track_running_stats=True)
          )
          (1): BasicConv2d(
            (conv): Conv2d(32, 128, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)
            (bn): BatchNorm2d(128, eps=0.001, momentum=0.1, affine=True, track_running_stats=True)
          )
        )
        (branch4): Sequential(
          (0): MaxPool2d(kernel_size=3, stride=1, padding=1, dilation=1, ceil_mode=True)
          (1): BasicConv2d(
            (conv): Conv2d(832, 128, kernel_size=(1, 1), stride=(1, 1), bias=False)
            (bn): BatchNorm2d(128, eps=0.001, momentum=0.1, affine=True, track_running_stats=True)
          )
        )
      )
      (inception5b): Inception(
        (branch1): BasicConv2d(
          (conv): Conv2d(832, 384, kernel_size=(1, 1), stride=(1, 1), bias=False)
          (bn): BatchNorm2d(384, eps=0.001, momentum=0.1, affine=True, track_running_stats=True)
        )
        (branch2): Sequential(
          (0): BasicConv2d(
            (conv): Conv2d(832, 192, kernel_size=(1, 1), stride=(1, 1), bias=False)
            (bn): BatchNorm2d(192, eps=0.001, momentum=0.1, affine=True, track_running_stats=True)
          )
          (1): BasicConv2d(
            (conv): Conv2d(192, 384, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)
            (bn): BatchNorm2d(384, eps=0.001, momentum=0.1, affine=True, track_running_stats=True)
          )
        )
        (branch3): Sequential(
          (0): BasicConv2d(
            (conv): Conv2d(832, 48, kernel_size=(1, 1), stride=(1, 1), bias=False)
            (bn): BatchNorm2d(48, eps=0.001, momentum=0.1, affine=True, track_running_stats=True)
          )
          (1): BasicConv2d(
            (conv): Conv2d(48, 128, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)
            (bn): BatchNorm2d(128, eps=0.001, momentum=0.1, affine=True, track_running_stats=True)
          )
        )
        (branch4): Sequential(
          (0): MaxPool2d(kernel_size=3, stride=1, padding=1, dilation=1, ceil_mode=True)
          (1): BasicConv2d(
            (conv): Conv2d(832, 128, kernel_size=(1, 1), stride=(1, 1), bias=False)
            (bn): BatchNorm2d(128, eps=0.001, momentum=0.1, affine=True, track_running_stats=True)
          )
        )
      )
      (aux1): None
      (aux2): None
      (avgpool): AdaptiveAvgPool2d(output_size=(1, 1))
      (dropout): Dropout(p=0.2, inplace=False)
      (fc): Sequential(
        (0): Linear(in_features=1024, out_features=512, bias=True)
        (1): ReLU()
        (2): Dropout(p=0.2, inplace=False)
        (3): Linear(in_features=512, out_features=128, bias=True)
        (4): ReLU()
        (5): Dropout(p=0.2, inplace=False)
        (6): Linear(in_features=128, out_features=64, bias=True)
        (7): ReLU()
        (8): Dropout(p=0.2, inplace=False)
        (9): Linear(in_features=64, out_features=4, bias=True)
      )
    )

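Before training, it can be useful to see how many weights the optimizer will update. This is a small sketch, not part of the original notebook; it uses only the `model` defined above:

```python
# Count the parameters of the modified GoogLeNet; nothing is frozen here,
# so the trainable and total counts should match.
total = sum(p.numel() for p in model.parameters())
trainable = sum(p.numel() for p in model.parameters() if p.requires_grad)
print(f'{trainable:,} trainable / {total:,} total parameters')
```
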
## <a id='step63'></a>
## <span style="background-color:orange ;background-size: cover;font-family:tahoma;font-size:70%; font-weight: 900; text-align:left;border-radius:25px 25px; padding:10px; border:solid 2px #09375b"><span style="color:navy">6.3 | Train the Model

<div style="background-color:#fbf8cc; padding: 10px 10px 10px 0px; border-radius: 10px; box-shadow: 2px 2px 4px 0 rgba(0, 0, 0, 0.1);border:0px solid #0A2342; text-align:left">
    <p style="font-size:16px; font-family:tahoma; line-height: 2em; text-indent: 20px;">🔵 As the first step in this part, define some helper functions to make the output prettier and easier to read.

```python
def DeltaTime(dt):
    '''Format a datetime.timedelta as HH:MM:SS.'''
    h = dt.seconds // 3600
    m = (dt.seconds % 3600) // 60
    s = dt.seconds % 60
    return f'{h:02d}:{m:02d}:{s:02d}'
```

```python
def Beauty_epoch(epoch):
    '''Return the epoch number as a two-digit string, e.g. 01 or 08.'''
    return f'{epoch:02d}'
```

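As a quick, illustrative check of these helpers (the input values here are chosen arbitrarily):

```python
from datetime import timedelta

# DeltaTime pads hours, minutes, and seconds to two digits each
print(DeltaTime(timedelta(seconds=3723)))  # -> 01:02:03

# Beauty_epoch zero-pads the epoch counter
print(Beauty_epoch(7))                     # -> 07
```
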
<div style="background-color:#fbf8cc; padding: 10px 10px 10px 0px; border-radius: 10px; box-shadow: 2px 2px 4px 0 rgba(0, 0, 0, 0.1);border:0px solid #0A2342; text-align:left">
    <p style="font-size:16px; font-family:tahoma; line-height: 2em; text-indent: 20px;">🔵 Let's train the model on the training data and evaluate it on the validation data.

```python
# Create loss function and optimizer
Learning_Rate = 0.001

criterion = nn.CrossEntropyLoss()
optimizer = torch.optim.Adam(model.parameters(), lr=Learning_Rate)

# Arrays to store per-epoch loss and accuracy, for plotting later
train_losses = np.zeros(num_epochs)
train_accs = np.zeros(num_epochs)
valid_losses = np.zeros(num_epochs)
valid_accs = np.zeros(num_epochs)

print(colored('Training Starts ... ', 'blue', 'on_white', attrs=['bold']))
for epoch in range(num_epochs):
    # Set the mode to TRAIN
    model.train()

    # Current time, to measure the duration of the epoch
    t0 = datetime.now()

    # Per-epoch accumulators
    train_loss = []
    valid_loss = []
    n_correct = 0
    n_total = 0

    ###############
    #### Train ####
    ###############

    # Read images and labels from the train loader
    for images, labels in train_loader:
        # Move data to GPU
        images = images.to(device)
        labels = labels.to(device)

        # Zero the optimizer's gradients
        optimizer.zero_grad()

        # Forward pass
        y_pred = model(images)
        loss = criterion(y_pred, labels)

        # Backward pass
        loss.backward()
        optimizer.step()

        # Train loss
        train_loss.append(loss.item())

        # Train accuracy
        _, prediction = torch.max(y_pred, 1)
        n_correct += (prediction == labels).sum().item()
        n_total += labels.shape[0]

    train_losses[epoch] = np.mean(train_loss)
    train_accs[epoch] = n_correct / n_total

    ####################
    #### Validation ####
    ####################

    # Switch to eval mode and disable gradients for validation
    model.eval()
    n_correct = 0
    n_total = 0

    with torch.no_grad():
        # Read images and labels from the validation loader
        for images, labels in valid_loader:
            # Move data to GPU
            images = images.to(device)
            labels = labels.to(device)

            # Forward pass
            y_pred = model(images)
            loss = criterion(y_pred, labels)

            # Validation loss
            valid_loss.append(loss.item())

            # Validation accuracy
            _, prediction = torch.max(y_pred, 1)
            n_correct += (prediction == labels).sum().item()
            n_total += labels.shape[0]

    valid_losses[epoch] = np.mean(valid_loss)
    valid_accs[epoch] = n_correct / n_total

    ############################### Duration ###############################

    dt = datetime.now() - t0

    ############################### PRETTY OUTPUT ###############################
    EPOCH = colored(f' Epoch [{Beauty_epoch(epoch+1)}/{num_epochs}] ', 'black', 'on_white', attrs=['bold'])
    TRAIN_LOSS = colored(f' Train Loss:{train_losses[epoch]:.4f} ', 'white', 'on_green', attrs=['bold'])
    TRAIN_ACC = colored(f' Train Acc:{train_accs[epoch]:.4f} ', 'white', 'on_blue', attrs=['bold'])
    VAL_LOSS = colored(f' Val Loss:{valid_losses[epoch]:.4f} ', 'white', 'on_green', attrs=['bold'])
    VAL_ACC = colored(f' Val Acc:{valid_accs[epoch]:.4f} ', 'white', 'on_blue', attrs=['bold'])
    DURATION = colored(f' Duration : {DeltaTime(dt)} ', 'white', 'on_dark_grey', attrs=['bold'])
    LR = colored(f' lr = {Learning_Rate} ', 'black', 'on_cyan', attrs=['bold'])

    # Print the result of each epoch
    print(f'{EPOCH} -> {TRAIN_LOSS}{TRAIN_ACC} {VAL_LOSS}{VAL_ACC} {DURATION} {LR}')

print(colored('Training Finished ...', 'blue', 'on_white', attrs=['bold']))
```

    Training Starts ... 
     Epoch [01/30]  ->  Train Loss:0.2178  Train Acc:0.9280   Val Loss:0.0504  Val Acc:0.9814   Duration : 00:01:49   lr = 0.001 
     Epoch [02/30]  ->  Train Loss:0.0515  Train Acc:0.9865   Val Loss:0.0952  Val Acc:0.9814   Duration : 00:01:33   lr = 0.001 
     Epoch [03/30]  ->  Train Loss:0.0389  Train Acc:0.9910   Val Loss:0.0138  Val Acc:0.9959   Duration : 00:01:29   lr = 0.001 
     Epoch [04/30]  ->  Train Loss:0.0232  Train Acc:0.9932   Val Loss:0.0269  Val Acc:0.9897   Duration : 00:01:33   lr = 0.001 
     Epoch [05/30]  ->  Train Loss:0.0059  Train Acc:0.9983   Val Loss:0.0217  Val Acc:0.9917   Duration : 00:01:28   lr = 0.001 
     Epoch [06/30]  ->  Train Loss:0.0244  Train Acc:0.9943   Val Loss:0.1601  Val Acc:0.9793   Duration : 00:01:28   lr = 0.001 
     Epoch [07/30]  ->  Train Loss:0.0123  Train Acc:0.9971   Val Loss:0.0147  Val Acc:0.9917   Duration : 00:01:30   lr = 0.001 
     Epoch [08/30]  ->  Train Loss:0.0026  Train Acc:0.9993   Val Loss:0.1443  Val Acc:0.9814   Duration : 00:01:31   lr = 0.001 
     Epoch [09/30]  ->  Train Loss:0.0355  Train Acc:0.9927   Val Loss:0.0460  Val Acc:0.9897   Duration : 00:01:30   lr = 0.001 
     Epoch [10/30]  ->  Train Loss:0.0089  Train Acc:0.9980   Val Loss:0.0095  Val Acc:0.9959   Duration : 00:01:32   lr = 0.001 
     Epoch [11/30]  ->  Train Loss:0.0205  Train Acc:0.9958   Val Loss:0.0523  Val Acc:0.9876   Duration : 00:01:33   lr = 0.001 
     Epoch [12/30]  ->  Train Loss:0.0185  Train Acc:0.9961   Val Loss:0.0065  Val Acc:0.9959   Duration : 00:01:32   lr = 0.001 
     Epoch [13/30]  ->  Train Loss:0.0075  Train Acc:0.9983   Val Loss:0.1034  Val Acc:0.9835   Duration : 00:01:32   lr = 0.001 
     Epoch [14/30]  ->  Train Loss:0.0217  Train Acc:0.9961   Val Loss:0.1034  Val Acc:0.9814   Duration : 00:01:33   lr = 0.001 
     Epoch [15/30]  ->  Train Loss:0.0213  Train Acc:0.9963   Val Loss:0.0368  Val Acc:0.9876   Duration : 00:01:33   lr = 0.001 
     Epoch [16/30]  ->  Train Loss:0.0017  Train Acc:0.9998   Val Loss:0.0424  Val Acc:0.9917   Duration : 00:01:32   lr = 0.001 
     Epoch [17/30]  ->  Train Loss:0.0066  Train Acc:0.9990   Val Loss:0.0770  Val Acc:0.9897   Duration : 00:01:33   lr = 0.001 
     Epoch [18/30]  ->  Train Loss:0.0241  Train Acc:0.9946   Val Loss:0.0921  Val Acc:0.9814   Duration : 00:01:33   lr = 0.001 
     Epoch [19/30]  ->  Train Loss:0.0381  Train Acc:0.9919   Val Loss:0.0295  Val Acc:0.9917   Duration : 00:01:33   lr = 0.001 
     Epoch [20/30]  ->  Train Loss:0.0072  Train Acc:0.9983   Val Loss:0.0251  Val Acc:0.9938   Duration : 00:01:33   lr = 0.001 
     Epoch [21/30]  ->  Train Loss:0.0099  Train Acc:0.9976   Val Loss:0.0414  Val Acc:0.9938   Duration : 00:01:33   lr = 0.001 
     Epoch [22/30]  ->  Train Loss:0.0068  Train Acc:0.9985   Val Loss:0.0537  Val Acc:0.9917   Duration : 00:01:33   lr = 0.001 
     Epoch [23/30]  ->  Train Loss:0.0064  Train Acc:0.9992   Val Loss:0.1647  Val Acc:0.9917   Duration : 00:01:32   lr = 0.001 
     Epoch [24/30]  ->  Train Loss:0.0005  Train Acc:1.0000   Val Loss:0.0403  Val Acc:0.9917   Duration : 00:01:32   lr = 0.001 
     Epoch [25/30]  ->  Train Loss:0.0023  Train Acc:0.9995   Val Loss:0.0523  Val Acc:0.9876   Duration : 00:01:33   lr = 0.001 
     Epoch [26/30]  ->  Train Loss:0.0130  Train Acc:0.9970   Val Loss:0.0338  Val Acc:0.9897   Duration : 00:01:34   lr = 0.001 
     Epoch [27/30]  ->  Train Loss:0.0013  Train Acc:0.9997   Val Loss:0.0267  Val Acc:0.9938   Duration : 00:01:33   lr = 0.001 
     Epoch [28/30]  ->  Train Loss:0.0204  Train Acc:0.9968   Val Loss:0.0510  Val Acc:0.9917   Duration : 00:01:34   lr = 0.001 
     Epoch [29/30]  ->  Train Loss:0.0110  Train Acc:0.9973   Val Loss:0.0458  Val Acc:0.9917   Duration : 00:01:32   lr = 0.001 
     Epoch [30/30]  ->  Train Loss:0.0091  Train Acc:0.9980   Val Loss:0.1245  Val Acc:0.9917   Duration : 00:01:32   lr = 0.001 
    Training Finished ...

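Note how the validation loss fluctuates from epoch to epoch while the train loss keeps shrinking, so the last epoch is not necessarily the best one. A common refinement (not used in this notebook) is to keep a copy of the weights from the best validation epoch; a minimal sketch, assuming the `model`, `valid_accs`, and `epoch` variables of the loop above:

```python
import copy

best_acc = 0.0      # initialise once, before the epoch loop
best_state = None

# ... inside the epoch loop, right after valid_accs[epoch] is computed:
if valid_accs[epoch] > best_acc:
    best_acc = valid_accs[epoch]
    best_state = copy.deepcopy(model.state_dict())

# ... after training, restore the best checkpoint:
# model.load_state_dict(best_state)
```
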
<div style="background-color:#fbf8cc; padding: 10px 10px 10px 0px; border-radius: 10px; box-shadow: 2px 2px 4px 0 rgba(0, 0, 0, 0.1);border:0px solid #0A2342; text-align:left">
    <p style="font-size:16px; font-family:tahoma; line-height: 2em; text-indent: 20px;">🔵 Plot the result of training.

```python
plt.figure(figsize=(12, 3), dpi=400)
plt.subplot(1, 2, 1)
sns.lineplot(train_accs, label='Train Accuracy')
sns.lineplot(valid_accs, label='Valid Accuracy')
plt.title('Accuracy')

plt.subplot(1, 2, 2)
sns.lineplot(train_losses, label='Train Loss')
sns.lineplot(valid_losses, label='Validation Loss')
plt.title('Loss')

plt.show()
```

![png](Blood_Cell_Cancer_files/Blood_Cell_Cancer_102_0.png)

## <a id='step64'></a>
## <span style="background-color:orange ;background-size: cover;font-family:tahoma;font-size:70%; font-weight: 900; text-align:left;border-radius:25px 25px; padding:10px; border:solid 2px #09375b"><span style="color:navy">6.4 | Evaluation

<div style="background-color:#fbf8cc; padding: 10px 10px 10px 0px; border-radius: 10px; box-shadow: 2px 2px 4px 0 rgba(0, 0, 0, 0.1);border:0px solid #0A2342; text-align:left">
    <p style="font-size:16px; font-family:tahoma; line-height: 2em; text-indent: 20px;">🔵 After training finishes, we should test the model on never-before-seen images for a final evaluation.

```python
with torch.no_grad():
    model.eval()
    t0 = datetime.now()
    test_loss = []
    n_correct = 0
    n_total = 0

    for images, labels in test_loader:
        # Move input data to GPU
        images = images.to(device)
        labels = labels.to(device)

        # Forward pass
        y_pred = model(images)
        loss = criterion(y_pred, labels)

        # Test loss
        test_loss.append(loss.item())

        # Test accuracy
        _, prediction = torch.max(y_pred, 1)
        n_correct += (prediction == labels).sum().item()
        n_total += labels.shape[0]

    test_loss = np.mean(test_loss)
    test_acc = n_correct / n_total
    dt = datetime.now() - t0
    print(colored(f'Loss:{test_loss:.4f}\nAccuracy:{test_acc:.4f}\nDuration:{dt}', 'green', attrs=['bold']))
```

    Loss:0.0091
    Accuracy:0.9939
    Duration:0:00:10.539431

## <a id='step65'></a>
## <span style="background-color:orange ;background-size: cover;font-family:tahoma;font-size:70%; font-weight: 900; text-align:left;border-radius:25px 25px; padding:10px; border:solid 2px #09375b"><span style="color:navy">6.5 | Plot The Result

<div style="background-color:#fbf8cc; padding: 10px 10px 10px 0px; border-radius: 10px; box-shadow: 2px 2px 4px 0 rgba(0, 0, 0, 0.1);border:0px solid #0A2342; text-align:left">
    <p style="font-size:16px; font-family:tahoma; line-height: 2em; text-indent: 20px;">🔵 And now, plot some images with their <b>real labels</b> and <b>predicted labels</b>.</p>
    <p style="font-size:16px; font-family:tahoma; line-height: 2em; text-indent: 20px;">🔵 To do this, we create a dictionary called <code>labels_map</code>, with class indexes as keys and class names as values.</p>

```python
# Create labels_map, used to show true and predicted labels in the plot below
classes.sort()

labels_map = {}
for index, label in enumerate(classes):
    labels_map[index] = label

labels_map
```

    {0: 'Benign', 1: 'Early_Pre_B', 2: 'Pre_B', 3: 'Pro_B'}

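The same mapping can also be built in one line; an equivalent alternative to the loop above:

```python
# dict(enumerate(...)) pairs each index with its class name
labels_map = dict(enumerate(classes))
```
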
```python
# Move the model to the CPU
cpu_model = model.cpu()

# Get one batch from test_loader
imgs, labels = next(iter(test_loader))

# Plot one batch of test images with their true and predicted labels
plt.subplots(4, 8, figsize=(16, 12))
plt.suptitle('Blood cell images in 1 batch', fontsize=25, fontweight='bold')
for i in range(32):
    ax = plt.subplot(4, 8, i+1)
    img = torch.permute(imgs[i], (1, 2, 0))
    plt.imshow(img)
    label = labels_map[int(labels[i])]
    img = imgs[i].unsqueeze(0)
    out = cpu_model(img)
    predict = labels_map[int(out.argmax())]
    plt.title(f'True :{label}\nPredict :{predict}')
    plt.axis('off')

plt.show()
```

![png](Blood_Cell_Cancer_files/Blood_Cell_Cancer_109_0.png)

## <a id='step66'></a>
## <span style="background-color:orange ;background-size: cover;font-family:tahoma;font-size:70%; font-weight: 900; text-align:left;border-radius:25px 25px; padding:10px; border:solid 2px #09375b"><span style="color:navy">6.6 | Confusion Matrix

<div style="background-color:#fbf8cc; padding: 10px 10px 10px 0px; border-radius: 10px; box-shadow: 2px 2px 4px 0 rgba(0, 0, 0, 0.1);border:0px solid #0A2342; text-align:left">
    <p style="font-size:16px; font-family:tahoma; line-height: 2em; text-indent: 20px;">🔵 And the final step is plotting the <b>Confusion Matrix</b> with the <code>sklearn</code> library.

```python
# Collect two lists, y_true and y_pred, for use in confusion_matrix
model = model.to(device)
model.eval()

y_true = []
y_pred = []
with torch.no_grad():
    for images, labels in test_loader:
        images = images.to(device)
        labels = labels.numpy()
        outputs = model(images)
        _, pred = torch.max(outputs, 1)
        pred = pred.cpu().numpy()

        y_true = np.append(y_true, labels)
        y_pred = np.append(y_pred, pred)
```

```python
classes = list(labels_map.values())

print(classification_report(y_true, y_pred))

def plot_confusion_matrix(y_test, y_prediction):
    '''Plot the confusion matrix.'''
    cm = confusion_matrix(y_test, y_prediction)
    plt.figure(figsize=(8, 6))
    ax = sns.heatmap(cm, annot=True, fmt='', cmap="Blues")
    ax.set_xlabel('Predicted labels', fontsize=18)
    ax.set_ylabel('True labels', fontsize=18)
    ax.set_title('Confusion Matrix', fontsize=25)
    ax.xaxis.set_ticklabels(classes)
    ax.yaxis.set_ticklabels(classes)
    plt.show()


plot_confusion_matrix(y_true, y_pred)
```

                  precision    recall  f1-score   support

             0.0       0.99      1.00      0.99        78
             1.0       1.00      0.99      0.99       148
             2.0       0.99      1.00      1.00       144
             3.0       0.99      0.99      0.99       120

        accuracy                           0.99       490
       macro avg       0.99      0.99      0.99       490
    weighted avg       0.99      0.99      0.99       490


![png](Blood_Cell_Cancer_files/Blood_Cell_Cancer_113_1.png)

<a id="author"></a>
2055
<div style="border:3px solid navy; border-radius:30px; padding: 15px; background-size: cover; font-size:120%; text-align:left; background-image: url(https://i.postimg.cc/sXwGWcwC/download.jpg); background-size: cover">
2056
2057
<h4 align="left"><span style="font-weight:700; font-size:150%"><font color=#d10202>Author:</font><font color=navy> Nima Pourmoradi</font></span></h4>
2058
<h6 align="left"><font color=#ff6200><a href='https://github.com/NimaPourmoradi'>github: https://github.com/NimaPourmoradi</font></h6>
2059
<h6 align="left"><font color=#ff6200><a href='https://www.kaggle.com/nimapourmoradi'>kaggle : https://www.kaggle.com/nimapourmoradi</a></font></h6>
2060
<h6 align="left"><font color=#ff6200><a href='https://www.linkedin.com/in/nima-pourmoradi-081949288/'>linkedin : www.linkedin.com/in/nima-pourmoradi</a></font></h6>
2061
<h6 align="left"><font color=#ff6200><a href='https://t.me/Nima_Pourmoradi'>Telegram : https://t.me/Nima_Pourmoradi</a></font></h6>
2062
2063
<div style="background-color:#c5d8d1; padding: 25px 0px 10px 0px; border-radius: 10px; box-shadow: 2px 2px 4px 0 rgba(0, 0, 0, 0.1);border:0px solid #0A2342; text-align:center">
2064
    <p style="font-size:18px; font-family:tahoma; line-height: 2em; text-indent: 20px;"><b>✅ If you like my notebook, please upvote it ✅
2065
    </b></p>
2066
</div>
2067
2068
<img src="https://i.postimg.cc/t4b3WtCy/1000-F-291522205-Xkrm-S421-Fj-SGTMR.jpg">
2069
2070
##### [🏠 Tabel of Contents](#tbl_content)