# Blood Cell Cancer with PyTorch

## About Dataset

The definitive diagnosis of Acute Lymphoblastic Leukemia (ALL), a highly prevalent cancer, requires invasive, expensive, and time-consuming diagnostic tests. ALL diagnosis using peripheral blood smear (PBS) images plays a vital role in the initial screening of cancer from non-cancer cases. The examination of these PBS images by laboratory staff is prone to diagnostic error, because the non-specific nature of ALL signs and symptoms often leads to misdiagnosis.

The images of this dataset were prepared in the bone marrow laboratory of Taleqani Hospital (Tehran, Iran). The dataset consists of 3242 PBS images from 89 patients suspected of ALL, whose blood samples were prepared and stained by skilled laboratory staff. It is divided into two classes, benign and malignant. The former comprises hematogones, and the latter is the ALL group with three subtypes of malignant lymphoblasts: Early Pre-B, Pre-B, and Pro-B ALL. All the images were taken with a Zeiss camera mounted on a microscope at 100x magnification and saved as JPG files. A specialist made the definitive determination of the types and subtypes of these cells using flow cytometry.

# 1 | SETUP

###### 🏠 [Table of Contents](#tbl_content)

## 1.1 | Install required libraries

🔵 As a first step, install the required Python libraries with the pip install command.

# ! pip install -q split-folders
## 1.2 | Import required libraries

🔵 Then import the necessary libraries.

import os                                                       # To work with main operating system commands
import gc                                                       # Garbage collector, to free up memory
import shutil                                                   # To copy and move files
import numpy as np                                              # To work with arrays
import cv2                                                      # Powerful library to work with images
import random                                                   # To generate random numbers and random choices
import matplotlib.pyplot as plt                                 # For visualization
import seaborn as sns                                           # For visualization
import splitfolders                                             # To split images into [train, validation, test]
from PIL import Image                                           # To read images
from tqdm.notebook import tqdm                                  # Beautiful progress bar
from termcolor import colored                                   # For colorful output
from warnings import filterwarnings                             # To avoid Python warnings

import torch                                                   # PyTorch framework
import torchvision.transforms as transforms                    # To apply transforms before creating a dataset
from torchvision.datasets import ImageFolder                   # To create a dataset from images on a local drive
from torch.utils.data import DataLoader                        # To create DataLoaders
from torchvision.models import googlenet, GoogLeNet_Weights    # Pre-trained model with its weights
import torch.nn as nn                                          # Neural-network building blocks
from datetime import datetime                                  # To calculate time and duration
from sklearn.metrics import confusion_matrix, classification_report     # To compute the Confusion Matrix and classification report
## 1.3 | Configurations

🔵 Apply some configuration to the libraries above for better results.

# Add a style to seaborn plots for better visualization
sns.set_style('darkgrid')

# To avoid Python warnings
filterwarnings('ignore')
# Initialization values 

img_size = (128, 128)

batch_size = 64

num_epochs = 30
# Show all colors used in this notebook
colors_dark = ['#1d3461', '#eef1fb', '#ade8f4', 'red', 'black', 'orange', 'navy', '#fbf8cc']

sns.palplot(colors_dark)
![png](Blood_Cell_Cancer_files/Blood_Cell_Cancer_15_0.png)

## 1.4 | Device
device = torch.device('cuda:0' if torch.cuda.is_available() else 'cpu')

if device.type == 'cuda' :
    print(colored(' GPU is available ', 'green', 'on_white', attrs=['bold']))
else :
    print(colored(' You are using CPU ', 'red', 'on_white', attrs=['bold']))
 GPU is available 

# 2 | DATA

###### 🏠 [Table of Contents](#tbl_content)

## 2.1 | Read Data

🔵 Set the path of the dataset on Kaggle or your local drive.

# Path of main dataset
base_dir = 'C:\\envs\\DataSets\\Blood cell Cancer [ALL]'

# Path of working directory
working_dir = 'C:\\envs\\Working\\Blood_Cell_Cancer'
## 2.2 | Copy images to working dir
Target is:

    working/
    ├── images/
    │   ├── Benign
    │   │   ├── image-1.jpg
    │   │   ├── image-2.jpg
    │   │   ├── ...
    │   ├── Early_Pre_B
    │   │   ├── image-1.jpg
    │   │   ├── image-2.jpg
    │   │   ├── ...
    │   ├── Pre_B
    │   │   ├── image-1.jpg
    │   │   ├── image-2.jpg
    │   │   ├── ...
    │   ├── Pro_B
    │   │   ├── image-1.jpg
    │   │   ├── image-2.jpg
    │   │   ├── ...

🔵 Based on the above diagram, we should do the following steps:

    1. Create a folder in the working directory.
    2. Create a folder for each class.
    3. Copy images from the dataset to these folders.
# Create 'Images' folder in working directory (step 1)
Images = os.path.join(working_dir, 'Images')
if not os.path.exists(Images) :
    os.mkdir(Images)

# For each class, create a folder in the Images folder (step 2)
Benign = os.path.join(Images, 'Benign')
Early_Pre_B = os.path.join(Images, 'Early_Pre_B')
Pre_B = os.path.join(Images, 'Pre_B')
Pro_B = os.path.join(Images, 'Pro_B')

os.mkdir(Benign)
os.mkdir(Early_Pre_B)
os.mkdir(Pre_B)
os.mkdir(Pro_B)

# Copy images from the dataset to working-dir/Images (step 3)

for folder in os.listdir(base_dir) :
    folder_path = os.path.join(base_dir, folder)
    for img in tqdm(os.listdir(folder_path)) :
        src = os.path.join(folder_path, img)

        match folder :
            case 'Benign' :
                shutil.copy(src, os.path.join(Benign, img))

            case '[Malignant] early Pre-B' :
                shutil.copy(src, os.path.join(Early_Pre_B, img))

            case '[Malignant] Pre-B' :
                shutil.copy(src, os.path.join(Pre_B, img))

            case '[Malignant] Pro-B' :
                shutil.copy(src, os.path.join(Pro_B, img))

print(colored('All images copied to working directory', 'green'))
# Read and show classes

classes = os.listdir(Images)
num_classes = len(classes)
print(classes)
print(f'Number of classes : {num_classes}')
['Benign', 'Early_Pre_B', 'Pre_B', 'Pro_B']
Number of classes : 4

## 2.3 | Count Plot

🔵 Show the number of samples in each class with a count plot.

# A variable to store values
counts = []

# Loop over class names 
for class_name in classes :
    class_path = os.path.join(Images, class_name)
    counts.append(len(os.listdir(class_path)))

# Plot the result
plt.figure(figsize=(13, 4), dpi=400)
ax = sns.barplot(x=counts, y=classes, palette='Set1', hue=classes)
for i in range(len(classes)) :
    ax.bar_label(ax.containers[i])
plt.title('Number of images in each class', fontsize=20, fontweight='bold', c='navy')
ax.set_xlim(0, 1200)
ax.set_xlabel('Counts', fontweight='bold')
ax.set_ylabel('Classes', fontweight='bold')
plt.show()
![png](Blood_Cell_Cancer_files/Blood_Cell_Cancer_31_0.png)

## 2.4 | Plot Images

🔵 Now plot some images from each class.

# A loop to iterate the code below for each class
for class_name in classes :
    # Create a plot with 1 row and 6 columns
    fig, ax = plt.subplots(1, 6, figsize=(15, 2))
    # Define each class_name's path by joining the Images directory and the class_name
    class_path = os.path.join(Images, class_name)
    # files is a list of all image names in each folder (class)
    files = os.listdir(class_path)
    # Choose 6 random images from each class to show in the plot
    random_images = random.choices(files, k=6)
    # A loop to iterate over the 6 random images
    for i in range(6) :
        # Print class_name as the suptitle for each class
        plt.suptitle(class_name, fontsize=20, fontweight='bold')
        # img is the path of the image, made by joining class_path and the image file name
        img = os.path.join(class_path, random_images[i])
        # Load the image with PIL
        img = Image.open(img)
        # Plot image
        ax[i].imshow(img)
        # Turn axis off
        ax[i].axis('off')
    # Bring the subplots closer to each other
    plt.tight_layout()
![png](Blood_Cell_Cancer_files/Blood_Cell_Cancer_34_0.png) ![png](Blood_Cell_Cancer_files/Blood_Cell_Cancer_34_1.png) ![png](Blood_Cell_Cancer_files/Blood_Cell_Cancer_34_2.png) ![png](Blood_Cell_Cancer_files/Blood_Cell_Cancer_34_3.png)

## 2.5 | Split images to Train-Valid-Test folders

🔵 In this step, split the images into 3 parts, Train, Validation, and Test, with a ratio of 70% / 15% / 15% of the whole dataset.

# Create folders for the train, validation and test splits
train_valid = os.path.join(working_dir, 'train_valid')

splitfolders.ratio(
    input=Images, output=train_valid, seed=42, ratio=(0.7, 0.15, 0.15)
)

print(colored(f' All images split into TRAIN / VALIDATION / TEST folders. ', 'white', 'on_green', attrs=['bold']))
Copying files: 3242 files [00:28, 114.30 files/s]
 All images split into TRAIN / VALIDATION / TEST folders. 

🔵 Count the images in each folder.

# list of folders
folders = os.listdir(train_valid)

print(colored('Number of samples in each folder : ', 'green', attrs=['bold']))
for folder in folders :
    # A variable to store count of images in each part
    counts = 0
    folder_path = os.path.join(train_valid, folder)
    for class_name in os.listdir(folder_path) :
        class_path = os.path.join(folder_path, class_name)
        counts += len(os.listdir(class_path))
    print(colored(f'{folder} : {counts}', 'blue',attrs=['bold']))
Number of samples in each folder : 
test : 490
train : 2268
val : 484

# 3 | DATA AUGMENTATIONS

###### 🏠 [Table of Contents](#tbl_content)

🔵 Data augmentation is the process of artificially generating new data from existing data, primarily to train new machine learning (ML) models. Data augmentation can address a variety of challenges when training a CNN model, such as limited or imbalanced data, overfitting, and variation and complexity. This technique can increase the size of the dataset and balance the classes by applying different transformations.

🔵 Here, choose a sample image to plot with each augmentation function to illustrate the changes.

sample_image = os.path.join(Benign, 'Sap_013 (1).jpg')
## 3.1 | Blur

🔵 Blurring an image is a process that makes the image less sharp and reduces its level of detail. It distorts the detail of an image which makes it less clear. The most common use of image blurriness is to remove noise from the image; the other is to get the most detailed part of the image and smooth out the less detailed ones. Image blur is also called image smoothing.

🔵 We use 3 kinds of blurring:

    1. OpenCV blur (smoothing)
    2. Gaussian blur
    3. Median blur
def Blure_Filter(img, filter_type="blur", kernel=13):
    '''
    ### Filtering ###
    img: image
    filter_type: one of {"blur", "gaussian", "median"}
    kernel: kernel size (an odd number)
    '''
    if filter_type == "blur":
        return cv2.blur(img,(kernel,kernel))

    elif filter_type == "gaussian":
        return cv2.GaussianBlur(img, (kernel, kernel), 0)

    elif filter_type == "median":
        return cv2.medianBlur(img, kernel)

🔵 Apply the blur functions to the sample image.

plt.figure(figsize=(10, 2.25), dpi=400)
plt.suptitle('Blured samples', fontweight='bold', fontsize=15)
# Original image
plt.subplot(1, 4, 1)
img = cv2.imread(sample_image)
img = cv2.cvtColor(img, cv2.COLOR_BGR2RGB)
plt.imshow(img)
plt.axis('off')
plt.title('Original', fontweight='bold')
# Blurs
# List of filters
filters = ['blur', 'gaussian', 'median']
for indx, filter in enumerate(filters) :
    plt.subplot(1, 4, indx+2)
    filtered_img = Blure_Filter(img, filter_type=filter, kernel=13)
    plt.imshow(filtered_img)
    plt.axis('off')
    plt.title(filter, fontweight='bold')
![png](Blood_Cell_Cancer_files/Blood_Cell_Cancer_49_0.png)

## 3.2 | Noise

🔵 Noise deliberately alters pixels so that they differ from what they should have represented. Old-fashioned film is famous for speckles: black and white pixels present where they should not be. This is noise! Noise is one kind of imperfection that is particularly frustrating for machines compared with human understanding. While humans can easily ignore noise (or fit it within the appropriate context), algorithms struggle. This is the root of so-called adversarial attacks, where small, human-imperceptible pixel changes can dramatically alter a neural network's ability to make an accurate prediction.

🔵 We use 2 kinds of noise:

    1. Gaussian noise
    2. Salt & pepper noise
def Add_Noise(img, noise_type="gauss"):
    '''
    ### Adding Noise ###
    img: image
    noise_type: {gauss: gaussian, sp: salt & pepper}
    '''
    if noise_type == "gauss": 
        mean = 0
        st = 0.5
        # Note: casting the normal samples to uint8 truncates values in (-1, 1)
        # to 0 and wraps more negative values around, so the added noise looks
        # like sparse speckles rather than smooth gaussian noise
        gauss = np.random.normal(mean, st, img.shape)
        gauss = gauss.astype('uint8')
        image = cv2.add(img, gauss)
        return image

    elif noise_type == "sp": 
        prob = 0.01
        black = np.array([0, 0, 0], dtype='uint8')
        white = np.array([255, 255, 255], dtype='uint8')

        probs = np.random.random(img.shape[:2])
        img[probs < (prob / 2)] = black
        img[probs > 1 - (prob / 2)] = white
        return img

🔵 Apply the noise functions to the sample image.

plt.figure(figsize=(10, 2.75), dpi=400)
plt.suptitle('Noised samples', fontweight='bold', fontsize=15)
plt.subplot(1, 3, 1)
img = cv2.imread(sample_image)
img = cv2.cvtColor(img, cv2.COLOR_BGR2RGB)
plt.imshow(img)
plt.axis('off')
plt.title('Original', fontweight='bold')

noises = ['gauss', 'sp']
for indx, noise in enumerate(noises) :
    plt.subplot(1, 3, indx+2)
    noised_img = Add_Noise(img, noise_type=noise)
    plt.imshow(noised_img)
    plt.axis('off')
    plt.title(noise, fontweight='bold')
![png](Blood_Cell_Cancer_files/Blood_Cell_Cancer_55_0.png)

## 3.3 | Flip

🔵 Flipping an image (and its annotations) is a deceptively simple technique that can improve model performance in substantial ways. Our models learn which collections of pixels, and the relationships between those collections, indicate that an object is in-frame. But machine learning models (like convolutional neural networks) tend to be quite brittle: they might memorize that a specific ordering of pixels describes an object, but if that same object is mirrored across the image, our models may struggle to recognize it. Consider the orientation of your face when you take a selfie versus using the rear camera: one interpretation may be mirrored while the other is not, yet both are still your face. This mirroring of orientation is what we call flipping an image. By creating several versions of our images in various orientations, we give our deep learning model more information to learn from without having to go through the time-consuming process of collecting and labeling more training data.

🔵 We use 3 kinds of flipping:

    1. Around the X axis
    2. Around the Y axis
    3. Around both X & Y
def Flip(img, flip_code) :
    '''cv2.flip codes: 0 -> around the x-axis (vertical), 1 -> around the y-axis (horizontal), -1 -> both'''
    flipped_img = cv2.flip(img, flip_code)
    return flipped_img

🔵 Apply the flip function to the sample image.

plt.figure(figsize=(10, 2.75), dpi=400)
plt.suptitle('Flip a sample', fontweight='bold', fontsize=15)

plt.subplot(1, 4, 1)
img = cv2.imread(sample_image)
img = cv2.cvtColor(img, cv2.COLOR_BGR2RGB)
plt.imshow(img)
plt.axis('off')
plt.title('Original', fontweight='bold')

plt.subplot(1, 4, 2)
fliped = Flip(img, flip_code=0)
plt.imshow(fliped)
plt.axis('off')
plt.title('Vertical Flip', fontweight='bold')

plt.subplot(1, 4, 3)
fliped = Flip(img, flip_code=1)
plt.imshow(fliped)
plt.axis('off')
plt.title('Horizontal Flip', fontweight='bold')

plt.subplot(1, 4, 4)
fliped = Flip(img, flip_code=-1)
plt.imshow(fliped)
plt.axis('off')
plt.title('X&Y Flip', fontweight='bold')
plt.show()
![png](Blood_Cell_Cancer_files/Blood_Cell_Cancer_61_0.png)

## 3.4 | Apply Augmentations

🔵 OK! It's time to apply the above functions to the train images. Do this by defining a function that randomly chooses among the 3 kinds of augmentations and applies them to an image. It returns a dictionary whose keys are name suffixes for the new images and whose values are the augmented images.

def Apply_Augmentations(img) :
    ''' Apply a random choice of augmentation functions to an image '''

    returned_augs = dict()

    AUGS = ['Blure', 'Noise', 'Flip']

    # How many augmentations are chosen?
    random_num = random.randint(1, 3)
    random_choice = random.choices(AUGS, k=random_num)
    # To avoid repetitions :
    random_choice = list(set(random_choice))

    for choice in random_choice :
        if choice == 'Blure' :
            filters = ['blur', 'gaussian', 'median']
            kernels = [5, 7, 9, 11]
            random_filter = random.choice(filters)
            random_kernel = random.choice(kernels)
            blured_img = Blure_Filter(img, filter_type=random_filter, kernel=random_kernel)
            new_name = '_blured'
            returned_augs[new_name] = blured_img

        elif choice == 'Noise' :
            noises = ['gauss', 'sp']
            random_noise = random.choice(noises)
            noised_img = Add_Noise(img, noise_type=random_noise)
            new_name = '_noised'
            returned_augs[new_name] = noised_img

        elif choice == 'Flip' :
            flip_codes = [-1, 0, 1]
            random_code = random.choice(flip_codes)
            flipped_img = Flip(img, flip_code=random_code)
            new_name = '_fliped'
            returned_augs[new_name] = flipped_img

    return returned_augs
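🔵 As a side note, the same kinds of augmentation could also be applied on the fly with torchvision.transforms instead of writing new files to disk. A minimal sketch under that assumption (the probabilities below are illustrative, not taken from this notebook):

# Hypothetical on-the-fly alternative to the offline augmentation above:
# these transforms are re-sampled every time an image is loaded, so no
# augmented copies need to be saved to disk
onthefly_transform = transforms.Compose([
    transforms.Resize(img_size),
    transforms.RandomHorizontalFlip(p=0.5),
    transforms.RandomVerticalFlip(p=0.5),
    transforms.RandomApply([transforms.GaussianBlur(kernel_size=5)], p=0.3),
    transforms.ToTensor(),
])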

🔵 Count the images in the train folder before and after augmentation to find out how many images were added to it.

train_dir = os.path.join(train_valid, 'train')
num_samples_before_aug = 0

for folder in os.listdir(train_dir) :
    folder_path = os.path.join(train_dir, folder)
    num_samples_before_aug += len(os.listdir(folder_path))

print(colored(f' Number of samples in TRAIN folder before Augmentation : {num_samples_before_aug} ', 'black', 'on_white', attrs=['bold']))
 Number of samples in TRAIN folder before Augmentation : 2268 
for folder in os.listdir(train_dir) :
    folder_path = os.path.join(train_dir, folder)
    for img_name in tqdm(os.listdir(folder_path)) :
        img_path = os.path.join(folder_path, img_name)
        img = cv2.imread(img_path)
        returned = Apply_Augmentations(img)

        for exported_name, exported_image in returned.items() :
            # 1_left.jpg ---TO---> 1_left_blured.jpg
            new_name = img_name.split('.')[0] + exported_name + '.' + img_name.split('.')[-1]
            new_path = os.path.join(folder_path, new_name)

            # Save new image
            cv2.imwrite(new_path, exported_image)


print(colored(f' Augmentation Completed. ', 'white', 'on_green', attrs=['bold']))
num_samples_after_aug = 0

for folder in os.listdir(train_dir) :
    folder_path = os.path.join(train_dir, folder)
    num_samples_after_aug += len(os.listdir(folder_path))

print(colored(f' Number of samples in TRAIN folder after Augmentation : {num_samples_after_aug} ', 'black', 'on_white', attrs=['bold']))
 Number of samples in TRAIN folder after Augmentation : 5917 
print(colored(f' {num_samples_after_aug-num_samples_before_aug} images added to train directory. ', 'white', 'on_blue', attrs=['bold']))
 3649 images added to train directory. 

# 4 | DataSets and DataLoaders

###### 🏠 [Table of Contents](#tbl_content)

🔵 Now it's time to create a dataset of images with some transforms, and after that create a DataLoader for each dataset.

## 4.1 | Create Datasets and DataLoaders

🔵 Torchvision supports common computer vision transformations in the torchvision.transforms and torchvision.transforms.v2 modules. Transforms can be used to transform or augment data for training or inference of different tasks (image classification, detection, segmentation, video classification).

transform = transforms.Compose(
    [
        transforms.Resize(img_size),
        transforms.ToTensor()
    ]
)
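🔵 A side note: the pretrained GoogLeNet used later ships with its own preprocessing preset, including the ImageNet normalization that the Compose above skips. A minimal sketch, assuming torchvision >= 0.13 (note that the preset resizes to 224×224 rather than this notebook's 128×128):

# Hypothetical alternative, not used in this notebook: reuse the transforms
# bundled with the pretrained weights (resize + ImageNet mean/std normalization)
weights = GoogLeNet_Weights.DEFAULT
pretrained_transform = weights.transforms()
# train_ds = ImageFolder(root=os.path.join(train_valid, 'train'), transform=pretrained_transform)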
############################# TRAIN #############################
# Dataset
train_ds = ImageFolder(root=os.path.join(train_valid, 'train'), transform=transform)

# DataLoader
train_loader = DataLoader(train_ds, batch_size=batch_size, shuffle=True)

print(colored(f'TRAIN Folder :\n', 'green', attrs=['bold']))
print(train_ds)

############################# VALIDATION #############################
# Dataset
valid_ds = ImageFolder(root=os.path.join(train_valid, 'val'), transform=transform)

# DataLoader
valid_loader = DataLoader(valid_ds, batch_size=batch_size, shuffle=True)

print(colored(f'VALID Folder :\n', 'green', attrs=['bold']))
print(valid_ds)

############################# TEST #############################
# Dataset
test_ds = ImageFolder(root=os.path.join(train_valid, 'test'), transform=transform)

# DataLoader
test_loader = DataLoader(test_ds, batch_size=batch_size, shuffle=True)

print(colored(f'TEST Folder :\n', 'green', attrs=['bold']))
print(test_ds)
TRAIN Folder :

Dataset ImageFolder
    Number of datapoints: 5917
    Root location: C:\envs\Working\Blood_Cell_Cancer\train_valid\train
    StandardTransform
Transform: Compose(
               Resize(size=(128, 128), interpolation=bilinear, max_size=None, antialias=True)
               ToTensor()
           )

VALID Folder :

Dataset ImageFolder
    Number of datapoints: 484
    Root location: C:\envs\Working\Blood_Cell_Cancer\train_valid\val
    StandardTransform
Transform: Compose(
               Resize(size=(128, 128), interpolation=bilinear, max_size=None, antialias=True)
               ToTensor()
           )

TEST Folder :

Dataset ImageFolder
    Number of datapoints: 490
    Root location: C:\envs\Working\Blood_Cell_Cancer\train_valid\test
    StandardTransform
Transform: Compose(
               Resize(size=(128, 128), interpolation=bilinear, max_size=None, antialias=True)
               ToTensor()
           )

## 4.2 | Data Shapes

🔵 Read a batch of data from each loader (train_loader, valid_loader, test_loader) to show the shape of a batch and its data type.

# print shape of dataset for each set
for key, value in {'Train': train_loader, "Validation": valid_loader, 'Test': test_loader}.items():
    for X, y in value:
        print(colored(f'{key}:', 'white','on_green', attrs=['bold']))
        print(f"Shape of images [Batch_size, Channels, Height, Width]: {X.shape}")
        print(f"Shape of y: {y.shape} {y.dtype}\n")
        print('-'*45)
        break
Train:
Shape of images [Batch_size, Channels, Height, Width]: torch.Size([64, 3, 128, 128])
Shape of y: torch.Size([64]) torch.int64
---------------------------------------------
Validation:
Shape of images [Batch_size, Channels, Height, Width]: torch.Size([64, 3, 128, 128])
Shape of y: torch.Size([64]) torch.int64
---------------------------------------------
Test:
Shape of images [Batch_size, Channels, Height, Width]: torch.Size([64, 3, 128, 128])
Shape of y: torch.Size([64]) torch.int64
---------------------------------------------

# 5 | Free Up Some Space in RAM and GPU

###### 🏠 [Table of Contents](#tbl_content)

🔵 Because we defined lots of variables and functions, RAM may be filled with unnecessary data, and the GPU memory may be too. In this part, by deleting unnecessary variables, calling gc.collect for RAM, and emptying the torch.cuda cache for the GPU, we can free up some space for better performance.

## 5.1 | RAM

del ax, base_dir, Benign, class_name, class_path, counts, colors_dark, exported_image, exported_name, Early_Pre_B, fig, files, filter, filtered_img, filters, fliped, folder, folder_path
del folders, i, Images, img, indx, key, noise, noised_img, noises, num_classes, num_samples_after_aug, num_samples_before_aug, Pre_B, Pro_B, random_images
del sample_image, train_dir, value, working_dir, X, y, returned, src
del img_name, img_path, img_size, new_name, new_path

gc.collect()
72827

## 5.2 | GPU
torch.cuda.empty_cache()
# 6 | Model

###### 🏠 [Table of Contents](#tbl_content)

🔵 Instead of defining a new model from scratch, I prefer to use a pre-trained model: GoogLeNet with its trained weights, GoogLeNet_Weights. GoogLeNet (or Inception V1) was proposed by researchers at Google (with the collaboration of various universities) in 2014 in the paper titled "Going Deeper with Convolutions". This architecture won the ILSVRC 2014 image classification challenge. It provided a significant decrease in error rate compared to previous winners AlexNet (winner of ILSVRC 2012) and ZF-Net (winner of ILSVRC 2013), and a significantly lower error rate than VGG (the 2014 runner-up). The architecture uses techniques such as 1×1 convolutions in the middle of the network and global average pooling.

## 6.1 | PreTrained Model

model = googlenet(weights=GoogLeNet_Weights.DEFAULT)
model
GoogLeNet(
  (conv1): BasicConv2d(
    (conv): Conv2d(3, 64, kernel_size=(7, 7), stride=(2, 2), padding=(3, 3), bias=False)
    (bn): BatchNorm2d(64, eps=0.001, momentum=0.1, affine=True, track_running_stats=True)
  )
  (maxpool1): MaxPool2d(kernel_size=3, stride=2, padding=0, dilation=1, ceil_mode=True)
  (conv2): BasicConv2d(...)
  (conv3): BasicConv2d(...)
  (maxpool2): MaxPool2d(kernel_size=3, stride=2, padding=0, dilation=1, ceil_mode=True)
  (inception3a): Inception(...)
  (inception3b): Inception(...)
  (maxpool3): MaxPool2d(kernel_size=3, stride=2, padding=0, dilation=1, ceil_mode=True)
  (inception4a): Inception(...)
  (inception4b): Inception(...)
  (inception4c): Inception(...)
  (inception4d): Inception(...)
  (inception4e): Inception(...)
  (maxpool4): MaxPool2d(kernel_size=2, stride=2, padding=0, dilation=1, ceil_mode=True)
  (inception5a): Inception(...)
  (inception5b): Inception(...)
  (aux1): None
  (aux2): None
  (avgpool): AdaptiveAvgPool2d(output_size=(1, 1))
  (dropout): Dropout(p=0.2, inplace=False)
  (fc): Linear(in_features=1024, out_features=1000, bias=True)
)
(Inception blocks truncated for readability; each contains the four parallel BasicConv2d/MaxPool2d branches shown in the original printout.)

## 6.2 | Change Last Layer (fc)

🔵 The output of GoogLeNet's fc layer has 1000 neurons, but in this case our model should have 4, the number of classes. So replace the fc part of GoogLeNet with a Sequential stack of fully connected layers.

model.fc = nn.Sequential(
    nn.Linear(in_features=1024, out_features=512),
    nn.ReLU(),
    nn.Dropout(0.2),
    nn.Linear(in_features=512, out_features=128),
    nn.ReLU(),
    nn.Dropout(0.2),
    nn.Linear(in_features=128, out_features=64),
    nn.ReLU(),
    nn.Dropout(0.2),
    nn.Linear(in_features=64, out_features=4)
)
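🔵 A possible variant (not done in this notebook): freeze the pretrained backbone so that, at least initially, only the new fc head is trained. A minimal sketch:

# Hypothetical: freeze every pretrained parameter except the new fc head,
# so only the classification layers receive gradient updates
for name, param in model.named_parameters():
    if not name.startswith('fc'):
        param.requires_grad = False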

🔵 It's time for the first use of the GPU! Move the model to the GPU to accelerate processing.

model.to(device)
GoogLeNet(
  ...
  (avgpool): AdaptiveAvgPool2d(output_size=(1, 1))
  (dropout): Dropout(p=0.2, inplace=False)
  (fc): Sequential(
    (0): Linear(in_features=1024, out_features=512, bias=True)
    (1): ReLU()
    (2): Dropout(p=0.2, inplace=False)
    (3): Linear(in_features=512, out_features=128, bias=True)
    (4): ReLU()
    (5): Dropout(p=0.2, inplace=False)
    (6): Linear(in_features=128, out_features=64, bias=True)
    (7): ReLU()
    (8): Dropout(p=0.2, inplace=False)
    (9): Linear(in_features=64, out_features=4, bias=True)
  )
)
(The backbone is identical to the printout in 6.1; only the new fc head differs.)

## 6.3 | Train the Model

🔵 As the first step in this part, define some helper functions to make the output prettier and easier to read.

def DeltaTime(dt) :
    '''Format a datetime.timedelta as HH:MM:SS'''
    h = dt.seconds // 3600
    m = (dt.seconds % 3600) // 60
    s = dt.seconds % 60
    return f'{h:02}:{m:02}:{s:02}'

def Beauty_epoch(epoch) :
    ''' Return epochs in 2 digits - like (01 or 08) '''
    return f'{epoch:02}'
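🔵 A quick sanity check of the two helpers, using a hand-built timedelta:

from datetime import timedelta

print(DeltaTime(timedelta(minutes=5, seconds=7)))    # 00:05:07
print(Beauty_epoch(8))                               # 08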

🔵 Let's train the model on the train data and evaluate it on the validation data.

# Create Loss_function and Optimizer
Learning_Rate = 0.001

criterion = nn.CrossEntropyLoss()
optimizer = torch.optim.Adam(model.parameters(), lr=Learning_Rate)

# Some variables to store loss and accuracy to plot them
train_losses = np.zeros(num_epochs)
train_accs = np.zeros(num_epochs)
valid_losses = np.zeros(num_epochs)
valid_accs = np.zeros(num_epochs)

print(colored('Training Starts ... ', 'blue', 'on_white', attrs=['bold']))
for epoch in range(num_epochs) :
    # Set the mode to TRAIN
    model.train()

    # Current time to calculate duration of epoch
    t0 = datetime.now()

    # Some variables to store data
    train_loss = []
    train_acc = []
    valid_loss = []
    valid_acc = []
    n_correct = 0
    n_total = 0

            ###############
            #### Train ####
            ###############

    # Read Images and Labels from TrainLoader
    for images, labels in train_loader :
        # Move Data to GPU
        images = images.to(device)
        labels = labels.to(device)

        # Reshape labels to [Batch-Size, 1]
        # labels = torch.reshape(labels, (-1, 1))

        # Zero Grad Optimizer
        optimizer.zero_grad()

        # Forward Pass
        y_pred = model(images)
        loss = criterion(y_pred, labels)

        # Backward pass
        loss.backward()
        optimizer.step()

        # Train Loss
        train_loss.append(loss.item())

        # Train Accuracy
        _, prediction = torch.max(y_pred, 1)
        n_correct += (prediction==labels).sum().item()
        n_total += labels.shape[0]

    train_losses[epoch] = np.mean(train_loss)
    train_accs[epoch] = n_correct / n_total

            ####################
            #### Validation ####
            ####################

    # Set the mode to EVAL for validation (disables dropout)
    model.eval()

    n_correct = 0
    n_total = 0

    # Read Images and Labels from ValidLoader, without tracking gradients
    with torch.no_grad() :
        for images, labels in valid_loader :
            # Move Data to GPU
            images = images.to(device)
            labels = labels.to(device)

            # Reshape labels to [Batch-Size, 1]
            # labels = torch.reshape(labels, (-1, 1))

            # Forward pass
            y_pred = model(images)
            loss = criterion(y_pred, labels)

            # Validation Loss
            valid_loss.append(loss.item())

            # Validation Accuracy
            _, prediction = torch.max(y_pred, 1)
            n_correct += (prediction==labels).sum().item()
            n_total += labels.shape[0]

    valid_losses[epoch] = np.mean(valid_loss)
    valid_accs[epoch] = n_correct / n_total



    ############################### Duration ###############################

    dt = datetime.now() - t0

    ############################### BEAUTIFUL OUTPUT ###############################
    EPOCH =  colored(f' Epoch [{Beauty_epoch(epoch+1)}/{num_epochs}] ', 'black', 'on_white', attrs=['bold'])
    TRAIN_LOSS = colored(f' Train Loss:{train_losses[epoch]:.4f} ', 'white', 'on_green', attrs=['bold'])
    TRAIN_ACC = colored(f' Train Acc:{train_accs[epoch]:.4f} ', 'white', 'on_blue', attrs=['bold'])
    VAL_LOSS = colored(f' Val Loss:{valid_losses[epoch]:.4f} ', 'white', 'on_green', attrs=['bold'])
    VAL_ACC = colored(f' Val Acc:{valid_accs[epoch]:.4f} ', 'white', 'on_blue', attrs=['bold'])
    DURATION = colored(f' Duration : {DeltaTime(dt)} ', 'white', 'on_dark_grey', attrs=['bold'])
    LR = colored(f' lr = {Learning_Rate} ', 'black',  'on_cyan', attrs=['bold'])


    # Print the result of each epoch
    print(f'{EPOCH} -> {TRAIN_LOSS}{TRAIN_ACC} {VAL_LOSS}{VAL_ACC} {DURATION} {LR}')


print(colored('Training Finished ...', 'blue', 'on_white', attrs=['bold']))
Training Starts ... 
 Epoch [01/30]  ->  Train Loss:0.2178  Train Acc:0.9280   Val Loss:0.0504  Val Acc:0.9814   Duration : 00:01:49   lr = 0.001 
 Epoch [02/30]  ->  Train Loss:0.0515  Train Acc:0.9865   Val Loss:0.0952  Val Acc:0.9814   Duration : 00:01:33   lr = 0.001 
 Epoch [03/30]  ->  Train Loss:0.0389  Train Acc:0.9910   Val Loss:0.0138  Val Acc:0.9959   Duration : 00:01:29   lr = 0.001 
 Epoch [04/30]  ->  Train Loss:0.0232  Train Acc:0.9932   Val Loss:0.0269  Val Acc:0.9897   Duration : 00:01:33   lr = 0.001 
 Epoch [05/30]  ->  Train Loss:0.0059  Train Acc:0.9983   Val Loss:0.0217  Val Acc:0.9917   Duration : 00:01:28   lr = 0.001 
 Epoch [06/30]  ->  Train Loss:0.0244  Train Acc:0.9943   Val Loss:0.1601  Val Acc:0.9793   Duration : 00:01:28   lr = 0.001 
 Epoch [07/30]  ->  Train Loss:0.0123  Train Acc:0.9971   Val Loss:0.0147  Val Acc:0.9917   Duration : 00:01:30   lr = 0.001 
 Epoch [08/30]  ->  Train Loss:0.0026  Train Acc:0.9993   Val Loss:0.1443  Val Acc:0.9814   Duration : 00:01:31   lr = 0.001 
 Epoch [09/30]  ->  Train Loss:0.0355  Train Acc:0.9927   Val Loss:0.0460  Val Acc:0.9897   Duration : 00:01:30   lr = 0.001 
 Epoch [10/30]  ->  Train Loss:0.0089  Train Acc:0.9980   Val Loss:0.0095  Val Acc:0.9959   Duration : 00:01:32   lr = 0.001 
 Epoch [11/30]  ->  Train Loss:0.0205  Train Acc:0.9958   Val Loss:0.0523  Val Acc:0.9876   Duration : 00:01:33   lr = 0.001 
 Epoch [12/30]  ->  Train Loss:0.0185  Train Acc:0.9961   Val Loss:0.0065  Val Acc:0.9959   Duration : 00:01:32   lr = 0.001 
 Epoch [13/30]  ->  Train Loss:0.0075  Train Acc:0.9983   Val Loss:0.1034  Val Acc:0.9835   Duration : 00:01:32   lr = 0.001 
 Epoch [14/30]  ->  Train Loss:0.0217  Train Acc:0.9961   Val Loss:0.1034  Val Acc:0.9814   Duration : 00:01:33   lr = 0.001 
 Epoch [15/30]  ->  Train Loss:0.0213  Train Acc:0.9963   Val Loss:0.0368  Val Acc:0.9876   Duration : 00:01:33   lr = 0.001 
 Epoch [16/30]  ->  Train Loss:0.0017  Train Acc:0.9998   Val Loss:0.0424  Val Acc:0.9917   Duration : 00:01:32   lr = 0.001 
 Epoch [17/30]  ->  Train Loss:0.0066  Train Acc:0.9990   Val Loss:0.0770  Val Acc:0.9897   Duration : 00:01:33   lr = 0.001 
 Epoch [18/30]  ->  Train Loss:0.0241  Train Acc:0.9946   Val Loss:0.0921  Val Acc:0.9814   Duration : 00:01:33   lr = 0.001 
 Epoch [19/30]  ->  Train Loss:0.0381  Train Acc:0.9919   Val Loss:0.0295  Val Acc:0.9917   Duration : 00:01:33   lr = 0.001 
 Epoch [20/30]  ->  Train Loss:0.0072  Train Acc:0.9983   Val Loss:0.0251  Val Acc:0.9938   Duration : 00:01:33   lr = 0.001 
 Epoch [21/30]  ->  Train Loss:0.0099  Train Acc:0.9976   Val Loss:0.0414  Val Acc:0.9938   Duration : 00:01:33   lr = 0.001 
 Epoch [22/30]  ->  Train Loss:0.0068  Train Acc:0.9985   Val Loss:0.0537  Val Acc:0.9917   Duration : 00:01:33   lr = 0.001 
 Epoch [23/30]  ->  Train Loss:0.0064  Train Acc:0.9992   Val Loss:0.1647  Val Acc:0.9917   Duration : 00:01:32   lr = 0.001 
 Epoch [24/30]  ->  Train Loss:0.0005  Train Acc:1.0000   Val Loss:0.0403  Val Acc:0.9917   Duration : 00:01:32   lr = 0.001 
 Epoch [25/30]  ->  Train Loss:0.0023  Train Acc:0.9995   Val Loss:0.0523  Val Acc:0.9876   Duration : 00:01:33   lr = 0.001 
 Epoch [26/30]  ->  Train Loss:0.0130  Train Acc:0.9970   Val Loss:0.0338  Val Acc:0.9897   Duration : 00:01:34   lr = 0.001 
 Epoch [27/30]  ->  Train Loss:0.0013  Train Acc:0.9997   Val Loss:0.0267  Val Acc:0.9938   Duration : 00:01:33   lr = 0.001 
 Epoch [28/30]  ->  Train Loss:0.0204  Train Acc:0.9968   Val Loss:0.0510  Val Acc:0.9917   Duration : 00:01:34   lr = 0.001 
 Epoch [29/30]  ->  Train Loss:0.0110  Train Acc:0.9973   Val Loss:0.0458  Val Acc:0.9917   Duration : 00:01:32   lr = 0.001 
 Epoch [30/30]  ->  Train Loss:0.0091  Train Acc:0.9980   Val Loss:0.1245  Val Acc:0.9917   Duration : 00:01:32   lr = 0.001 
Training Finished ...

🔵 Plot the result of training.

plt.figure(figsize=(12, 3), dpi=400)
plt.subplot(1, 2, 1)
sns.lineplot(train_accs, label='Train Accuracy')
sns.lineplot(valid_accs, label='Valid Accuracy')
plt.title('Accuracy')

plt.subplot(1, 2, 2)
sns.lineplot(train_losses, label='Train Loss')
sns.lineplot(valid_losses, label='Validation Loss')
plt.title('Loss')

plt.show()
![png](Blood_Cell_Cancer_files/Blood_Cell_Cancer_102_0.png)

## 6.4 | Evaluation

🔵 After training finishes, we should test the model on images it has never seen, for a final evaluation.

with torch.no_grad() :
    model.eval()
    t0 = datetime.now()
    test_loss = []
    n_correct = 0
    n_total = 0

    for images, labels in test_loader :
        # Move input data to GPU
        images = images.to(device)
        labels = labels.to(device)

        # Forward pass
        y_pred = model(images)
        loss = criterion(y_pred, labels)

        # Test Loss
        test_loss.append(loss.item())

        # Test Accuracy
        _, prediction = torch.max(y_pred, 1)
        n_correct += (prediction==labels).sum().item()
        n_total += labels.shape[0]

    test_loss = np.mean(test_loss)
    test_acc = n_correct / n_total
    dt = datetime.now() - t0
    print(colored(f'Loss:{test_loss:.4f}\nAccuracy:{test_acc:.4f}\nDuration:{dt}', 'green', attrs=['bold']))
Loss:0.0091
Accuracy:0.9939
Duration:0:00:10.539431

## 6.5 | Plot The Result

🔵 And now, plot some images with their real labels and predicted labels.

🔵 To do this, we should create a dictionary called labels_map, with class indexes as keys and class names as values.

# Create a labels_map to show True and Predicted labels in the plot below
classes.sort()
classes
labels_map = {}

for index, label in enumerate(classes) :
    labels_map[index] = label

labels_map
{0: 'Benign', 1: 'Early_Pre_B', 2: 'Pre_B', 3: 'Pro_B'}
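🔵 As a small aside, the same mapping can be built in one line with `labels_map = dict(enumerate(classes))`.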
# Move model to CPU
cpu_model = model.cpu()

# Get 1 batch of test_loader
for imgs, labels in test_loader :
    break

# Plot 1 batch of test_loader images with True and Predicted label
plt.subplots(4, 8, figsize=(16, 12))
plt.suptitle('Blood cell images in 1 Batch', fontsize=25, fontweight='bold')
for i in range(32) :
    ax = plt.subplot(4, 8, i+1)
    img = torch.permute(imgs[i], (1, 2, 0))
    plt.imshow(img)
    label = labels_map[int(labels[i])]
    # Add a batch dimension and predict with the CPU model
    img = imgs[i].unsqueeze(0)
    out = cpu_model(img)
    predict = labels_map[int(out.argmax())]
    plt.title(f'True :{label}\nPredict :{predict}')
    plt.axis('off')

plt.show()
![png](Blood_Cell_Cancer_files/Blood_Cell_Cancer_109_0.png)

## 6.6 | Confusion Matrix

🔵 And the final step is plotting the confusion matrix with the sklearn library.

# Build two lists, y_true and y_pred, for use in confusion_matrix
model = model.to(device)

y_true = []
y_pred = []
with torch.no_grad() :
    for images, labels in test_loader:
        images = images.to(device)
        labels = labels.numpy()
        outputs = model(images)
        _, pred = torch.max(outputs, 1)
        pred = pred.cpu().numpy()

        y_true = np.append(y_true, labels)
        y_pred = np.append(y_pred, pred)
classes = list(labels_map.values())

print(classification_report(y_true, y_pred))

def plot_confusion_matrix(y_test, y_prediction):
    '''Plot the Confusion Matrix'''
    cm = confusion_matrix(y_test, y_prediction)
    plt.figure(figsize=(8, 6))
    ax = sns.heatmap(cm, annot=True, fmt='', cmap="Blues")
    ax.set_xlabel('Predicted labels', fontsize=18)
    ax.set_ylabel('True labels', fontsize=18)
    ax.set_title('Confusion Matrix', fontsize=25)
    ax.xaxis.set_ticklabels(classes)
    ax.yaxis.set_ticklabels(classes)
    plt.show()


plot_confusion_matrix(y_true, y_pred)
              precision    recall  f1-score   support

         0.0       0.99      1.00      0.99        78
         1.0       1.00      0.99      0.99       148
         2.0       0.99      1.00      1.00       144
         3.0       0.99      0.99      0.99       120

    accuracy                           0.99       490
   macro avg       0.99      0.99      0.99       490
weighted avg       0.99      0.99      0.99       490

![png](Blood_Cell_Cancer_files/Blood_Cell_Cancer_113_1.png)

Author: Nima Pourmoradi

GitHub: https://github.com/NimaPourmoradi
Kaggle: https://www.kaggle.com/nimapourmoradi
LinkedIn: www.linkedin.com/in/nima-pourmoradi
Telegram: https://t.me/Nima_Pourmoradi

✅ If you like my notebook, please upvote it ✅

###### [🏠 Table of Contents](#tbl_content)