# GI Tract Image Segmentation

## Overview

This project implements a **TransUNet deep learning model** (a U-Net augmented with transformer blocks) for **medical image segmentation**, specifically targeting segmentation of the gastrointestinal (GI) tract from medical scans. The pipeline includes custom data preprocessing, a modular training process, and exportable predictions in **Run-Length Encoding (RLE)** format.

---

## Key Features

- **Data Pipeline**:
  - Custom data generator to handle large datasets with RLE-encoded masks.
  - Dynamic resizing of images and masks to a configurable target size.
  - Flexible test mode for visualizing individual predictions and ground truths.
- **Model Architecture**:
  - Based on **TransUNet**, combining convolutional feature extraction with transformer-based global context.
  - Option to load pre-trained weights for transfer learning or train from scratch.
- **Evaluation**:
  - Metrics include the **Dice coefficient**, **accuracy**, and **visual analysis** of predictions.
  - Visualization overlays of ground truths and predictions on the original images.
- **Export**:
  - Saves predictions in RLE format compatible with Kaggle competitions or downstream pipelines.
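
Since predictions are exported as RLE, here is a minimal sketch of an encoder, assuming the common Kaggle convention of 1-indexed, row-major `start length` pairs (the project's actual export routine lives in the main script; the function name is illustrative):

```python
import numpy as np

def rle_encode(mask: np.ndarray) -> str:
    """Encode a binary mask as 1-indexed, row-major 'start length' pairs."""
    pixels = mask.flatten()
    # Pad with zeros so runs touching the borders are detected.
    padded = np.concatenate([[0], pixels, [0]])
    # Positions where the value changes mark run starts and run ends.
    runs = np.where(padded[1:] != padded[:-1])[0] + 1
    runs[1::2] -= runs[::2]  # convert end positions into run lengths
    return " ".join(map(str, runs))
```

For example, `rle_encode(np.array([[0, 1, 1], [1, 0, 0]]))` yields `"2 3"`: a single run starting at pixel 2 with length 3.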

---

## Requirements

To set up and run the project, ensure the following dependencies are installed:

- TensorFlow 2.8+
- Keras
- NumPy
- Pandas
- OpenCV
- Matplotlib
- Scikit-learn
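
Assuming a standard pip-based setup (package names below are the usual PyPI ones; pin exact versions as needed), the dependencies can be installed with:

```shell
pip install "tensorflow>=2.8" numpy pandas opencv-python matplotlib scikit-learn
```

Note that Keras ships bundled with TensorFlow 2.x, so no separate install is required.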

---

## Usage

### **1. Running the Pipeline**

Run the main script to train, evaluate, or export predictions:

    python GI-Tract-Image-Segmentation.py

---

### **2. Training the Model**

If training from scratch, the script:
- Automatically splits the data into training, validation, and test sets.
- Implements early stopping, learning rate scheduling, and model checkpointing.
- Saves the best model weights to the `output/` directory.
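
The split step can be sketched as follows. This is a minimal NumPy-only illustration; the fractions, seed, and function name are assumptions for the sketch, not the script's actual API:

```python
import numpy as np

def split_ids(ids, val_frac=0.15, test_frac=0.15, seed=42):
    """Shuffle sample ids and split them into train/val/test subsets."""
    rng = np.random.default_rng(seed)
    ids = np.array(ids)
    rng.shuffle(ids)
    n_test = int(len(ids) * test_frac)
    n_val = int(len(ids) * val_frac)
    # The first n_test ids form the test set, the next n_val the validation set,
    # and the remainder the training set.
    return ids[n_test + n_val:], ids[n_test:n_test + n_val], ids[:n_test]
```

Shuffling with a fixed seed keeps the split reproducible across runs, which matters when checkpoints are reloaded later.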

### **3. Evaluating the Model**

During evaluation, the script:
- Computes the **Dice coefficient** and loss for each test sample.
- Visualizes predictions and overlays them on the ground truths.
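
For reference, the Dice coefficient measures the overlap between predicted and ground-truth masks. A minimal NumPy version is shown below; the project computes it inside TensorFlow, so the names and smoothing constant here are illustrative:

```python
import numpy as np

def dice_coefficient(y_true: np.ndarray, y_pred: np.ndarray,
                     smooth: float = 1e-6) -> float:
    """Dice = 2*|A intersect B| / (|A| + |B|).

    `smooth` keeps two empty masks from dividing by zero (and scores them ~1).
    """
    y_true = y_true.astype(np.float32).ravel()
    y_pred = y_pred.astype(np.float32).ravel()
    intersection = (y_true * y_pred).sum()
    return float((2.0 * intersection + smooth) /
                 (y_true.sum() + y_pred.sum() + smooth))
```

A perfect prediction scores 1.0 and a fully disjoint one scores ~0, which makes Dice better suited to imbalanced segmentation masks than plain pixel accuracy.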

---

### **Data Pipeline**

The project includes a highly modular pipeline:
- **Custom Generator**:
  - Decodes RLE masks into binary masks dynamically.
  - Handles resizing, augmentation, and batch generation.
- **Training Pipeline**:
  - Modularized for scalability and customization.
  - Includes checkpoints and CSV logging of training metrics.
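
The generator's RLE decoding step can be illustrated with a minimal sketch, again assuming 1-indexed, row-major `start length` pairs (function and argument names are illustrative):

```python
import numpy as np

def rle_decode(rle: str, shape: tuple) -> np.ndarray:
    """Decode an RLE string into a binary mask of the given (height, width)."""
    mask = np.zeros(shape[0] * shape[1], dtype=np.uint8)
    if rle:  # an empty string denotes an empty mask
        tokens = np.asarray(rle.split(), dtype=int)
        starts, lengths = tokens[0::2] - 1, tokens[1::2]  # 0-indexed starts
        for start, length in zip(starts, lengths):
            mask[start:start + length] = 1
    return mask.reshape(shape)
```

Decoding on the fly inside the generator avoids materializing every full-resolution mask in memory at once.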

---

## Model Architecture

- **Base Model**: TransUNet
  - Combines transformer layers for long-range dependency capture with CNNs for spatial feature extraction.
- **Custom Modifications**:
  - Configurable input size.
  - Optional pre-trained weights for transfer learning.