{
"nbformat": 4,
"nbformat_minor": 0,
"metadata": {
"colab": {
"name": "Unsupervised Anomaly Detection Brain-MRI.ipynb",
"provenance": [],
"collapsed_sections": [
"1-a9c5c4qaiK",
"P2WmjqCqp3S4",
"9Oc6ISRcqJMI",
"qdIb36bv-yji",
"U0D7zuB62DR0"
],
"machine_shape": "hm"
},
"kernelspec": {
"name": "python3",
"display_name": "Python 3"
},
"accelerator": "GPU"
},
"cells": [
{
"cell_type": "markdown",
"metadata": {
"id": "6cjIFrxl4xpb"
},
"source": [
"# Unsupervised Anomaly Detection Brain-MRI\n",
"\n",
"Jupyter notebook for running all the experiments from our [paper](https://arxiv.org/abs/2004.03271). \n",
"\n",
"Hyperparameters may have to be adjusted!"
]
},
{
"cell_type": "markdown",
"source": [
"## Preperation\n",
"\n",
"Installing tensorflow version 1 manually, since it is not supported anymore in colab. \n"
],
"metadata": {
"id": "KrKRutoIVzqO"
}
},
{
"cell_type": "code",
"source": [
"!pip uninstall -y tensorflow \n",
"!pip install tensorflow-gpu==1.15\n",
"!apt install --allow-change-held-packages libcudnn7=7.6.5.32-1+cuda10.2"
],
"metadata": {
"id": "B2tANEUeV1FO"
},
"execution_count": null,
"outputs": []
},
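{
"cell_type": "markdown",
"source": [
"Optionally, confirm that the pinned TensorFlow 1.15 build is active and that the GPU is visible (a minimal check; adjust if your runtime differs).\n"
],
"metadata": {}
},
{
"cell_type": "code",
"source": [
"# Minimal check: TF version and GPU visibility after the downgrade above\n",
"import tensorflow as tf\n",
"\n",
"print(tf.__version__)              # expected: 1.15.x\n",
"print(tf.test.is_gpu_available())  # expected: True on a GPU runtime"
],
"metadata": {},
"execution_count": null,
"outputs": []
},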
{
"cell_type": "markdown",
"metadata": {
"id": "xlBFG8cb9zRr"
},
"source": [
"### Get Code\n",
"Clone Code from github.com"
]
},
{
"cell_type": "code",
"metadata": {
"id": "BmL1urt8-F1a"
},
"source": [
"! git clone https://github.com/StefanDenn3r/Unsupervised_Anomaly_Detection_Brain_MRI"
],
"execution_count": null,
"outputs": []
},
{
"cell_type": "code",
"source": [
"cd Unsupervised_Anomaly_Detection_Brain_MRI/"
],
"metadata": {
"id": "RRskXHhVQd6x"
},
"execution_count": null,
"outputs": []
},
{
"cell_type": "markdown",
"source": [
"Installing all the requirements. \n",
"\n",
"Note: Although Colab might say, that a restart of the runtime is necessary, for us it was not. "
],
"metadata": {
"id": "hH8OPkp2XchA"
}
},
{
"cell_type": "code",
"source": [
"! pip install -r requirements.txt"
],
"metadata": {
"id": "YXyxDwvMWFKR"
},
"execution_count": null,
"outputs": []
},
{
"cell_type": "markdown",
"source": [
"## Download Brainweb Dataset\n",
"\n",
"Downloading the brainweb dataset from [here](https://brainweb.bic.mni.mcgill.ca/). \n",
"The authors request optionally to provide your name, institution and email. "
],
"metadata": {
"id": "1FRAwbqsRX_x"
}
},
{
"cell_type": "code",
"metadata": {
"id": "hBFA_7f5cUe0"
},
"source": [
"from utils.brainweb_download import download_brainweb_dataset\n",
"from pathlib import Path\n",
"\n",
"download_brainweb_dataset(\n",
" base_dir=Path('/content/data/Brainweb'),\n",
" name=\"\",\n",
" institution=\"\",\n",
" email=\"\"\n",
")"
],
"execution_count": null,
"outputs": []
},
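{
"cell_type": "markdown",
"source": [
"Optionally, list the downloaded files to verify the download (a minimal check, assuming the `/content/data/Brainweb` path used above).\n"
],
"metadata": {}
},
{
"cell_type": "code",
"source": [
"# Minimal check: list whatever landed in the BrainWeb directory\n",
"from pathlib import Path\n",
"\n",
"sorted(p.name for p in Path('/content/data/Brainweb').iterdir())"
],
"metadata": {},
"execution_count": null,
"outputs": []
},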
{
"cell_type": "markdown",
"metadata": {
"id": "o8QsqbYA53MI"
},
"source": [
"## Training\n",
"\n",
"### Imports"
]
},
{
"cell_type": "code",
"metadata": {
"id": "m1xgAd-K4Q30"
},
"source": [
"import json\n",
"import os\n",
"import tensorflow as tf\n",
"from datetime import datetime\n",
"from utils.default_config_setup import get_config, get_options, get_datasets\n",
"from trainers.AE import AE\n",
"from trainers.VAE import VAE\n",
"from trainers.CE import CE\n",
"from trainers.ceVAE import ceVAE\n",
"from trainers.VAE_You import VAE_You\n",
"from trainers.GMVAE import GMVAE\n",
"from trainers.GMVAE_spatial import GMVAE_spatial\n",
"from trainers.fAnoGAN import fAnoGAN\n",
"from trainers.ConstrainedAAE import ConstrainedAAE\n",
"from trainers.ConstrainedAE import ConstrainedAE\n",
"from trainers.AnoVAEGAN import AnoVAEGAN\n",
"from models import autoencoder, variational_autoencoder, context_encoder_variational_autoencoder, variational_autoencoder_Zimmerer, context_encoder_variational_autoencoder, context_encoder_variational_autoencoder_Zimmerer, gaussian_mixture_variational_autoencoder_You, gaussian_mixture_variational_autoencoder_spatial, gaussian_mixture_variational_autoencoder, fanogan, fanogan_schlegl, constrained_autoencoder, constrained_adversarial_autoencoder, constrained_adversarial_autoencoder_Chen, anovaegan\n",
"from utils import Evaluation\n",
"from utils.default_config_setup import get_config, get_options, get_datasets, Dataset\n"
],
"execution_count": null,
"outputs": []
},
{
"cell_type": "markdown",
"metadata": {
"id": "xogLARJl_B0K"
},
"source": [
"Set paths to datasets and where to save checkpoints and evaluations.\n",
"\n",
"**Note:** You may have to adjust `dataset_root` and `save_dir`"
]
},
{
"cell_type": "code",
"metadata": {
"id": "GgkGf2LO35hI"
},
"source": [
"def get_CONFIG(timestamp=None):\n",
" current_time = datetime.now().strftime('%Y%m%d_%H%M%S')\n",
" if timestamp:\n",
" current_time=timestamp\n",
" dataset_root = \"/content/data\"\n",
" save_dir = \"/content/saved\"\n",
" CONFIG = {\n",
" \"BRAINWEBDIR\": os.path.join(dataset_root, 'Brainweb'),\n",
"# \"MSSEG2008DIR\": os.path.join(dataset_root, 'MSSEG2008'),\n",
"# \"MSISBI2015DIR\": os.path.join(dataset_root, 'ISBIMSlesionChallenge'),\n",
"# \"MSLUBDIR\": os.path.join(dataset_root, 'MSlub'),\n",
" \"CHECKPOINTDIR\": os.path.join(save_dir, 'checkpoints', current_time),\n",
" \"SAMPLEDIR\": os.path.join(save_dir, 'sample_dir', current_time),\n",
" }\n",
" return CONFIG"
],
"execution_count": null,
"outputs": []
},
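{
"cell_type": "markdown",
"source": [
"Optionally, preview the paths that `get_CONFIG()` resolves to (a minimal sketch; note that every call uses a fresh timestamp, so each training cell below writes to its own checkpoint and sample directories).\n"
],
"metadata": {}
},
{
"cell_type": "code",
"source": [
"# Minimal sketch: print the resolved dataset, checkpoint and sample paths\n",
"for key, path in get_CONFIG().items():\n",
"    print(f\"{key}: {path}\")"
],
"metadata": {},
"execution_count": null,
"outputs": []
},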
{
"cell_type": "markdown",
"metadata": {
"id": "L41jcwBrqkev"
},
"source": [
"### Manual Training\n"
]
},
{
"cell_type": "markdown",
"metadata": {
"id": "1-a9c5c4qaiK"
},
"source": [
"#### Baseline"
]
},
{
"cell_type": "code",
"metadata": {
"id": "IBobdPWvXBsl"
},
"source": [
"tf.reset_default_graph()\n",
"dataset = Dataset.BRAINWEB\n",
"options = get_options(batchsize=128, learningrate=0.0001, numEpochs=20, zDim=128, outputWidth=128, outputHeight=128, config=get_CONFIG())\n",
"options['data']['dir'] = options[\"globals\"][dataset.value]\n",
"datasetHC, datasetPC = get_datasets(options, dataset)\n",
"config = get_config(trainer=AE, options=options, optimizer='ADAM', intermediateResolutions=[8, 8], dropout_rate=0.1, dataset=datasetHC)\n",
"\n",
"# Create an instance of the model and train it\n",
"model = AE(tf.Session(), config, network=autoencoder.autoencoder)\n",
"\n",
"# Train it\n",
"model.train(datasetHC)\n",
"\n",
"# Evaluate\n",
"Evaluation.evaluate(datasetPC, model, options, description=f\"{type(datasetHC).__name__}-{options['threshold']}\", epoch=str(options['train']['numEpochs']))\n"
],
"execution_count": null,
"outputs": []
},
{
"cell_type": "markdown",
"metadata": {
"id": "E-pIg1uHQufK"
},
"source": [
"**VAE**"
]
},
{
"cell_type": "code",
"metadata": {
"id": "Ia25A9wli8d6"
},
"source": [
"tf.reset_default_graph()\n",
"dataset = Dataset.BRAINWEB\n",
"options = get_options(batchsize=128, learningrate=0.0001, numEpochs=20, zDim=128, outputWidth=128, outputHeight=128, config=get_CONFIG())\n",
"options['data']['dir'] = options[\"globals\"][dataset.value]\n",
"datasetHC, datasetPC = get_datasets(options, dataset)\n",
"config = get_config(trainer=VAE, options=options, optimizer='ADAM', intermediateResolutions=[8, 8], dropout_rate=0.1, dataset=datasetHC)\n",
"\n",
"# Create an instance of the model and train it\n",
"model = VAE(tf.Session(), config, network=variational_autoencoder.variational_autoencoder)\n",
"\n",
"# Train it\n",
"model.train(datasetHC)\n",
"\n",
"# Evaluate\n",
"Evaluation.evaluate(datasetPC, model, options, description=f\"{type(datasetHC).__name__}-{options['threshold']}\", epoch=str(options['train']['numEpochs']))\n"
],
"execution_count": null,
"outputs": []
},
{
"cell_type": "markdown",
"metadata": {
"id": "P2WmjqCqp3S4"
},
"source": [
"#### ceVAE - Variations\n",
"\n",
"Paper: [Context-encoding Variational Autoencoder for Unsupervised Anomaly Detection](https://arxiv.org/abs/1812.05941)\n",
"\n"
]
},
{
"cell_type": "markdown",
"metadata": {
"id": "AsxKL2xIXhrL"
},
"source": [
"**CE**"
]
},
{
"cell_type": "code",
"metadata": {
"id": "Ed4PbNOc2P50"
},
"source": [
"tf.reset_default_graph()\n",
"dataset = Dataset.Brainweb\n",
"options = get_options(batchsize=128, learningrate=0.0001, numEpochs=20, zDim=128, outputWidth=128, outputHeight=128, config=get_CONFIG())\n",
"options['data']['dir'] = options[\"globals\"][dataset.value]\n",
"datasetHC, datasetPC = get_datasets(options, dataset)\n",
"config = get_config(trainer=CE, options=options, optimizer='ADAM', intermediateResolutions=[8, 8], dropout_rate=0.1, dataset=datasetHC)\n",
"\n",
"# Create an instance of the model and train it\n",
"model = CE(tf.Session(), config, network=autoencoder.autoencoder)\n",
"\n",
"# Train it\n",
"model.train(datasetHC)\n",
"\n",
"# Evaluate\n",
"Evaluation.evaluate(datasetPC, model, options, description=f\"{type(datasetHC).__name__}-{options['threshold']}\", epoch=str(options['train']['numEpochs']))\n"
],
"execution_count": null,
"outputs": []
},
{
"cell_type": "markdown",
"metadata": {
"id": "XlPoFhpyLgqs"
},
"source": [
"**ceVAE**"
]
},
{
"cell_type": "code",
"metadata": {
"id": "7ujv7TbWVuA5"
},
"source": [
"tf.reset_default_graph()\n",
"dataset = Dataset.Brainweb\n",
"options = get_options(batchsize=128, learningrate=0.0001, numEpochs=20, zDim=128, outputWidth=128, outputHeight=128, config=get_CONFIG())\n",
"options['data']['dir'] = options[\"globals\"][dataset.value]\n",
"datasetHC, datasetPC = get_datasets(options, dataset)\n",
"config = get_config(trainer=ceVAE, options=options, optimizer='ADAM', intermediateResolutions=[8, 8], dropout_rate=0.1, dataset=datasetHC)\n",
"\n",
"config.use_gradient_based_restoration = 0.002\n",
"\n",
"# Create an instance of the model and train it\n",
"model = ceVAE(tf.Session(), config, network=context_encoder_variational_autoencoder.context_encoder_variational_autoencoder)\n",
"\n",
"# Train it\n",
"model.train(datasetHC)\n",
"\n",
"# Evaluate\n",
"Evaluation.evaluate(datasetPC, model, options, description=f\"{type(datasetHC).__name__}-{options['threshold']}\", epoch=str(options['train']['numEpochs']))\n"
],
"execution_count": null,
"outputs": []
},
{
"cell_type": "markdown",
"metadata": {
"id": "ruwGvgCnuPQ8"
},
"source": [
"**VAE-Zimmerer**"
]
},
{
"cell_type": "code",
"metadata": {
"id": "rtNu_izGM8FO"
},
"source": [
"tf.reset_default_graph()\n",
"dataset = Dataset.BRAINWEB\n",
"options = get_options(batchsize=64, learningrate=0.0001, numEpochs=20, zDim=128, outputWidth=128, outputHeight=128, config=get_CONFIG())\n",
"options['data']['dir'] = options[\"globals\"][dataset.value]\n",
"datasetHC, datasetPC = get_datasets(options, dataset)\n",
"config = get_config(trainer=VAE, options=options, optimizer='ADAM', intermediateResolutions=[8, 8], dropout_rate=0.1, dataset=datasetHC)\n",
"\n",
"# Create an instance of the model and train it\n",
"model = VAE(tf.Session(), config, network=variational_autoencoder_Zimmerer.variational_autoencoder_Zimmerer)\n",
"\n",
"# Train it\n",
"model.train(datasetHC)\n",
"\n",
"# Evaluate\n",
"Evaluation.evaluate(datasetPC, model, options, description=f\"{type(datasetHC).__name__}-{options['threshold']}\", epoch=str(options['train']['numEpochs']))\n"
],
"execution_count": null,
"outputs": []
},
{
"cell_type": "markdown",
"metadata": {
"id": "9lhvpN6mu7QT"
},
"source": [
"**ceVAE-Zimmerer**"
]
},
{
"cell_type": "code",
"metadata": {
"id": "szrN4m0eu6xM"
},
"source": [
"tf.reset_default_graph()\n",
"dataset = Dataset.Brainweb\n",
"options = get_options(batchsize=64, learningrate=0.0001, numEpochs=1, zDim=128, outputWidth=128, outputHeight=128, config=get_CONFIG())\n",
"options['data']['dir'] = options[\"globals\"][dataset.value]\n",
"datasetHC, datasetPC = get_datasets(options, dataset)\n",
"config = get_config(trainer=ceVAE, options=options, optimizer='ADAM', intermediateResolutions=[8, 8], dropout_rate=0.1, dataset=datasetHC)\n",
"\n",
"# Create an instance of the model and train it\n",
"model = ceVAE(tf.Session(), config, network=context_encoder_variational_autoencoder_Zimmerer.context_encoder_variational_autoencoder_Zimmerer)\n",
"\n",
"# Train it\n",
"model.train(datasetHC)\n",
"\n",
"# Evaluate\n",
"Evaluation.evaluate(datasetPC, model, options, description=f\"{type(datasetHC).__name__}-{options['threshold']}\", epoch=str(options['train']['numEpochs']))\n"
],
"execution_count": null,
"outputs": []
},
{
"cell_type": "markdown",
"metadata": {
"id": "9Oc6ISRcqJMI"
},
"source": [
"#### GMVAE-(Restoration)-Variations\n",
"\n",
"Paper: [Unsupervised Lesion Detection via Image Restoration with a Normative Prior](https://openreview.net/forum?id=S1xg4W-leV)\n"
]
},
{
"cell_type": "markdown",
"metadata": {
"id": "KZmw9-2HO61B"
},
"source": [
"**VAE-You**"
]
},
{
"cell_type": "code",
"metadata": {
"id": "VAF3eVrCPAdG"
},
"source": [
"tf.reset_default_graph()\n",
"dataset = Dataset.BRAINWEB\n",
"options = get_options(batchsize=128, learningrate=0.0001, numEpochs=20, zDim=128, outputWidth=128, outputHeight=128, config=get_CONFIG())\n",
"options['data']['dir'] = options[\"globals\"][dataset.value]\n",
"datasetHC, datasetPC = get_datasets(options, dataset)\n",
"config = get_config(trainer=VAE_You, options=options, optimizer='ADAM', intermediateResolutions=[8, 8], dropout_rate=0.1, dataset=datasetHC)\n",
"\n",
"config.restore_lr = 1e-3\n",
"config.restore_steps = 10\n",
"config.tv_lambda = 0.0\n",
"\n",
"# Create an instance of the model and train it\n",
"model = VAE_You(tf.Session(), config, network=variational_autoencoder.variational_autoencoder)\n",
"\n",
"# Train it\n",
"model.train(datasetHC)\n",
"\n",
"# Evaluate\n",
"Evaluation.evaluate(datasetPC, model, options, description=f\"{type(datasetHC).__name__}-{options['threshold']}\", epoch=str(options['train']['numEpochs']))\n"
],
"execution_count": null,
"outputs": []
},
{
"cell_type": "markdown",
"metadata": {
"id": "nhllMocWpww9"
},
"source": [
"**GMVAE-You**"
]
},
{
"cell_type": "code",
"metadata": {
"id": "TmRGUPN86DTX"
},
"source": [
"tf.reset_default_graph()\n",
"dataset = Dataset.BRAINWEB\n",
"options = get_options(batchsize=128, learningrate=0.0001, numEpochs=20, zDim=128, outputWidth=128, outputHeight=128, config=get_CONFIG())\n",
"options['data']['dir'] = options[\"globals\"][dataset.value]\n",
"datasetHC, datasetPC = get_datasets(options, dataset)\n",
"config = get_config(trainer=GMVAE_spatial, options=options, optimizer='ADAM', intermediateResolutions=[8, 8], dropout_rate=0.1, dataset=datasetHC)\n",
"\n",
"config.dim_c = 9\n",
"config.dim_z = 1\n",
"config.dim_w = 1\n",
"config.c_lambda = 1\n",
"config.restore_lr = 1e-3\n",
"config.restore_steps = 10\n",
"config.tv_lambda = 1\n",
"\n",
"# Create an instance of the model and train it\n",
"model = GMVAE_spatial(tf.Session(), config, network=gaussian_mixture_variational_autoencoder_You.gaussian_mixture_variational_autoencoder_You)\n",
"\n",
"# Train it\n",
"model.train(datasetHC)\n",
"\n",
"# Evaluate\n",
"Evaluation.evaluate(datasetPC, model, options, description=f\"{type(datasetHC).__name__}-{options['threshold']}\", epoch=str(options['train']['numEpochs']))\n"
],
"execution_count": null,
"outputs": []
},
{
"cell_type": "markdown",
"metadata": {
"id": "azCAeseN3t6i"
},
"source": [
"**GMVAE**"
]
},
{
"cell_type": "code",
"metadata": {
"id": "o5HNY5v-qCxO"
},
"source": [
"tf.reset_default_graph()\n",
"dataset = Dataset.BRAINWEB\n",
"options = get_options(batchsize=128, learningrate=0.0001, numEpochs=20, zDim=128, outputWidth=128, outputHeight=128, config=get_CONFIG())\n",
"options['data']['dir'] = options[\"globals\"][dataset.value]\n",
"datasetHC, datasetPC = get_datasets(options, dataset)\n",
"config = get_config(trainer=GMVAE, options=options, optimizer='ADAM', intermediateResolutions=[8, 8], dropout_rate=0.1, dataset=datasetHC)\n",
"\n",
"config.dim_c = 9\n",
"config.dim_z = 128\n",
"config.dim_w = 1\n",
"config.c_lambda = 1\n",
"config.restore_lr = 1e-3\n",
"config.restore_steps = 10\n",
"config.tv_lambda = 0.0\n",
"\n",
"# Create an instance of the model and train it\n",
"model = GMVAE(tf.Session(), config, network=gaussian_mixture_variational_autoencoder.gaussian_mixture_variational_autoencoder)\n",
"\n",
"# Train it\n",
"model.train(datasetHC)\n",
"\n",
"# Evaluate\n",
"Evaluation.evaluate(datasetPC, model, options, description=f\"{type(datasetHC).__name__}-{options['threshold']}\", epoch=str(options['train']['numEpochs']))"
],
"execution_count": null,
"outputs": []
},
{
"cell_type": "markdown",
"metadata": {
"id": "8_GIxM1KGN2-"
},
"source": [
"**GMVAE-spatial**"
]
},
{
"cell_type": "code",
"metadata": {
"id": "m9G0foovGNWI"
},
"source": [
"tf.reset_default_graph()\n",
"dataset = Dataset.BRAINWEB\n",
"options = get_options(batchsize=128, learningrate=0.0001, numEpochs=20, zDim=128, outputWidth=128, outputHeight=128, config=get_CONFIG())\n",
"options['data']['dir'] = options[\"globals\"][dataset.value]\n",
"datasetHC, datasetPC = get_datasets(options, dataset)\n",
"config = get_config(trainer=GMVAE_spatial, options=options, optimizer='ADAM', intermediateResolutions=[8, 8], dropout_rate=0.1, dataset=datasetHC)\n",
"\n",
"config.dim_c = 9\n",
"config.dim_z = 1\n",
"config.dim_w = 1\n",
"config.c_lambda = 1\n",
"config.restore_lr = 1e-3\n",
"config.restore_steps = 10\n",
"config.tv_lambda = 0.0\n",
"\n",
"# Create an instance of the model and train it\n",
"model = GMVAE_spatial(tf.Session(), config, network=gaussian_mixture_variational_autoencoder_spatial.gaussian_mixture_variational_autoencoder_spatial)\n",
"\n",
"# Train it\n",
"model.train(datasetHC)\n",
"\n",
"# Evaluate\n",
"Evaluation.evaluate(datasetPC, model, options, description=f\"{type(datasetHC).__name__}-{options['threshold']}\", epoch=str(options['train']['numEpochs']))"
],
"execution_count": null,
"outputs": []
},
{
"cell_type": "markdown",
"metadata": {
"id": "qdIb36bv-yji"
},
"source": [
"#### f-AnoGAN\n",
"\n",
"Paper: [f-AnoGAN: Fast unsupervised anomaly detection with generative adversarial networks.](https://www.ncbi.nlm.nih.gov/pubmed/30831356)"
]
},
{
"cell_type": "markdown",
"metadata": {
"id": "B3h_nH3F-0PP"
},
"source": [
"**Unified f-AnoGan**"
]
},
{
"cell_type": "code",
"metadata": {
"id": "aj70VsVlAjIj"
},
"source": [
"tf.reset_default_graph()\n",
"dataset = Dataset.BRAINWEB\n",
"options = get_options(batchsize=128, learningrate=0.0001, numEpochs=20, zDim=128, outputWidth=128, outputHeight=128, config=get_CONFIG())\n",
"options['data']['dir'] = options[\"globals\"][dataset.value]\n",
"datasetHC, datasetPC = get_datasets(options, dataset)\n",
"config = get_config(trainer=fAnoGAN, options=options, optimizer='ADAM', intermediateResolutions=[8, 8], dropout_rate=0.1, dataset=datasetHC)\n",
"\n",
"config.kappa = 1.0\n",
"config.scale = 10.0\n",
"\n",
"# Create an instance of the model and train it\n",
"model = fAnoGAN(tf.Session(), config, network=fanogan.fanogan)\n",
"\n",
"# Train it\n",
"model.train(datasetHC)\n",
"\n",
"# Evaluate\n",
"Evaluation.evaluate(datasetPC, model, options, description=f\"{type(datasetHC).__name__}-{options['threshold']}\", epoch=str(options['train']['numEpochs']))"
],
"execution_count": null,
"outputs": []
},
{
"cell_type": "markdown",
"metadata": {
"id": "JP-2nm-jYBuQ"
},
"source": [
"**f-AnoGAN - Schlegl**"
]
},
{
"cell_type": "code",
"metadata": {
"id": "hclZeagWA6Rf"
},
"source": [
"tf.reset_default_graph()\n",
"dataset = Dataset.BRAINWEB\n",
"options = get_options(batchsize=8, learningrate=0.0001, numEpochs=2, zDim=128, outputWidth=128, outputHeight=128, config=get_CONFIG())\n",
"options['data']['dir'] = options[\"globals\"][dataset.value]\n",
"datasetHC, datasetPC = get_datasets(options, dataset)\n",
"config = get_config(trainer=fAnoGAN, options=options, optimizer='ADAM', intermediateResolutions=[16, 16], dropout_rate=0.1, dataset=datasetHC)\n",
"\n",
"config.kappa = 1.0\n",
"config.scale = 10.0\n",
"\n",
"# Create an instance of the model and train it\n",
"model = fAnoGAN(tf.Session(), config, network=fanogan_schlegl.fanogan_schlegl)\n",
"\n",
"# Train it\n",
"model.train(datasetHC)\n",
"\n",
"# Evaluate\n",
"Evaluation.evaluate(datasetPC, model, options, description=f\"{type(datasetHC).__name__}-{options['threshold']}\", epoch=str(options['train']['numEpochs']))\n"
],
"execution_count": null,
"outputs": []
},
{
"cell_type": "markdown",
"metadata": {
"id": "U0D7zuB62DR0"
},
"source": [
"#### Constrained Adversarial AE\n",
"\n",
"Paper: [Unsupervised Detection of Lesions in Brain MRI using constrained adversarial auto-encoders](https://arxiv.org/abs/1806.04972)"
]
},
{
"cell_type": "markdown",
"metadata": {
"id": "yh6Tvwe02OWA"
},
"source": [
"**constrained AAE**"
]
},
{
"cell_type": "code",
"metadata": {
"id": "YPBnr3r32LGf"
},
"source": [
"tf.reset_default_graph()\n",
"dataset = Dataset.BRAINWEB\n",
"options = get_options(batchsize=128, learningrate=0.0001, numEpochs=20, zDim=128, outputWidth=128, outputHeight=128, config=get_CONFIG())\n",
"options['data']['dir'] = options[\"globals\"][dataset.value]\n",
"datasetHC, datasetPC = get_datasets(options, dataset)\n",
"config = get_config(trainer=ConstrainedAAE, options=options, optimizer='ADAM', intermediateResolutions=[8, 8], dropout_rate=0.1, dataset=datasetHC)\n",
"\n",
"config.scale = 10.0\n",
"config.rho = 1.0\n",
"\n",
"# Create an instance of the model and train it\n",
"model = ConstrainedAAE(tf.Session(), config, network=constrained_adversarial_autoencoder.constrained_adversarial_autoencoder)\n",
"\n",
"# Train it\n",
"model.train(datasetHC)\n",
"\n",
"# Evaluate\n",
"Evaluation.evaluate(datasetPC, model, options, description=f\"{type(datasetHC).__name__}-{options['threshold']}\", epoch=str(options['train']['numEpochs']))\n"
],
"execution_count": null,
"outputs": []
},
{
"cell_type": "markdown",
"metadata": {
"id": "5mR-WjhNv0Mo"
},
"source": [
"**constrained AAE Chen**"
]
},
{
"cell_type": "code",
"metadata": {
"id": "m_HhaHSr4Of9"
},
"source": [
"tf.reset_default_graph()\n",
"dataset = Dataset.BRAINWEB\n",
"options = get_options(batchsize=8, learningrate=0.0001, numEpochs=2, zDim=128, outputWidth=128, outputHeight=128, config=get_CONFIG())\n",
"options['data']['dir'] = options[\"globals\"][dataset.value]\n",
"datasetHC, datasetPC = get_datasets(options, dataset)\n",
"config = get_config(trainer=ConstrainedAAE, options=options, optimizer='ADAM', intermediateResolutions=[8, 8], dropout_rate=0.1, dataset=datasetHC)\n",
"\n",
"config.kappa = 1.0\n",
"config.scale = 10.0\n",
"config.rho = 1.0\n",
"\n",
"# Create an instance of the model and train it\n",
"model = ConstrainedAAE(tf.Session(), config, network=constrained_adversarial_autoencoder_Chen.constrained_adversarial_autoencoder_Chen)\n",
"\n",
"# Train it\n",
"model.train(datasetHC)\n",
"\n",
"# Evaluate\n",
"Evaluation.evaluate(datasetPC, model, options, description=f\"{type(datasetHC).__name__}-{options['threshold']}\", epoch=str(options['train']['numEpochs']))"
],
"execution_count": null,
"outputs": []
},
{
"cell_type": "markdown",
"metadata": {
"id": "_aMjd0DR236b"
},
"source": [
"#### AnoVAEGAN\n",
"\n",
"Paper: [Deep autoencoding models for unsupervised anomaly segmentation in brain MR images](https://arxiv.org/abs/1804.04488)"
]
},
{
"cell_type": "code",
"metadata": {
"id": "KdHKBg3B2-6W"
},
"source": [
"tf.reset_default_graph()\n",
"dataset = Dataset.BRAINWEB\n",
"options = get_options(batchsize=128, learningrate=0.0001, numEpochs=20, zDim=128, outputWidth=128, outputHeight=128, config=get_CONFIG())\n",
"options['data']['dir'] = options[\"globals\"][dataset.value]\n",
"datasetHC, datasetPC = get_datasets(options, dataset)\n",
"config = get_config(trainer=AnoVAEGAN, options=options, optimizer='ADAM', intermediateResolutions=[8, 8], dropout_rate=0.1, dataset=datasetHC)\n",
"\n",
"# Create an instance of the model and train it\n",
"model = AnoVAEGAN(tf.Session(), config, network=anovaegan.anovaegan)\n",
"\n",
"# Train it\n",
"model.train(datasetHC)\n",
"\n",
"# Evaluate\n",
"Evaluation.evaluate(datasetPC, model, options, description=f\"{type(datasetHC).__name__}-{options['threshold']}\", epoch=str(options['train']['numEpochs']))\n"
],
"execution_count": null,
"outputs": []
},
{
"cell_type": "code",
"metadata": {
"id": "1XhjDLN73NC5"
},
"source": [],
"execution_count": null,
"outputs": []
}
]
}