{
  "nbformat": 4,
  "nbformat_minor": 0,
  "metadata": {
    "colab": {
      "name": "BayesClassifierECG-v1.ipynb",
      "version": "0.3.2",
      "provenance": [],
      "collapsed_sections": [
        "x2Q8Cy8HZ8dB"
      ]
    },
    "kernelspec": {
      "name": "python3",
      "display_name": "Python 3"
    },
    "accelerator": "GPU"
  },
  "cells": [
    {
      "metadata": {
        "id": "DfByttJIWbgQ",
        "colab_type": "text"
      },
      "cell_type": "markdown",
      "source": [
        "# Bayesian Classification for ECG Time-Series\n",
        "\n",
        "> Copyright 2019 Dave Fernandes. All Rights Reserved.\n",
        "> \n",
        "> Licensed under the Apache License, Version 2.0 (the \"License\");\n",
        "> you may not use this file except in compliance with the License.\n",
        "> You may obtain a copy of the License at\n",
        ">\n",
        "> http://www.apache.org/licenses/LICENSE-2.0\n",
        ">  \n",
        "> Unless required by applicable law or agreed to in writing, software\n",
        "> distributed under the License is distributed on an \"AS IS\" BASIS,\n",
        "> WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n",
        "> See the License for the specific language governing permissions and\n",
        "> limitations under the License."
      ]
    },
    {
      "metadata": {
        "id": "MTd4aHhYWhdN",
        "colab_type": "text"
      },
      "cell_type": "markdown",
      "source": [
        "### Overview\n",
        "This notebook classifies time-series for segmented heartbeats from ECG lead II recordings. Either of two models (CNN or RNN) can be selected from training and evaluation.\n",
        "- Data for this analysis should be prepared using the `PreprocessECG.ipynb` notebook from this project.\n",
        "- Original data is from: https://www.kaggle.com/shayanfazeli/heartbeat"
      ]
    },
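    {
      "metadata": {},
      "cell_type": "markdown",
      "source": [
        "The next cell mounts Google Drive and loads the train/test pickles. Each file is expected to hold a dict with an `'x'` entry (a 2-D array of fixed-length beat segments) and a `'y'` entry (integer class labels), as produced by `PreprocessECG.ipynb`."
      ]
    },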
    {
      "metadata": {
        "id": "hjmdX-HbWdeI",
        "colab_type": "code",
        "colab": {}
      },
      "cell_type": "code",
      "source": [
        "import numpy as np\n",
        "import tensorflow as tf\n",
        "import tensorflow.keras.layers as keras\n",
        "import tensorflow_probability as tfp\n",
        "from tensorflow_probability import distributions as tfd\n",
        "import matplotlib.pyplot as plt\n",
        "import pickle\n",
        "\n",
        "from google.colab import drive\n",
        "drive.mount('/content/drive')\n",
        "\n",
        "TRAIN_SET = '/content/drive/My Drive/Colab Notebooks/Data/train_set.pickle'\n",
        "TEST_SET = '/content/drive/My Drive/Colab Notebooks/Data/test_set.pickle'\n",
        "\n",
        "BATCH_SIZE = 125\n",
        "\n",
        "with open(TEST_SET, 'rb') as file:\n",
        "    test_set = pickle.load(file)\n",
        "    x_test = test_set['x'].astype('float32')\n",
        "    y_test = test_set['y'].astype('int32')\n",
        "\n",
        "with open(TRAIN_SET, 'rb') as file:\n",
        "    train_set = pickle.load(file)\n",
        "    x_train = train_set['x'].astype('float32')\n",
        "    y_train = train_set['y'].astype('int32')\n",
        "\n",
        "TRAIN_COUNT = len(y_train)\n",
        "TEST_COUNT = len(y_test)\n",
        "BATCHES_PER_EPOCH = TRAIN_COUNT // BATCH_SIZE\n",
        "TEST_BATCHES_PER_EPOCH = TEST_COUNT // BATCH_SIZE\n",
        "INPUT_SIZE = np.shape(x_train)[1]\n",
        "\n",
        "print('Train count:', TRAIN_COUNT, 'Test count:', TEST_COUNT)"
      ],
      "execution_count": 0,
      "outputs": []
    },
    {
      "metadata": {
        "id": "x2Q8Cy8HZ8dB",
        "colab_type": "text"
      },
      "cell_type": "markdown",
      "source": [
        "### Input Datasets"
      ]
    },
    {
      "metadata": {
        "id": "uiTIHzr5Wsmn",
        "colab_type": "code",
        "colab": {}
      },
      "cell_type": "code",
      "source": [
        "def combined_dataset(features, labels):\n",
        "    assert features.shape[0] == labels.shape[0]\n",
        "    dataset = tf.data.Dataset.from_tensor_slices((np.expand_dims(features, axis=-1), labels))\n",
        "    return dataset\n",
        "\n",
        "# For training\n",
        "def train_input_fn():\n",
        "    dataset = combined_dataset(x_train, y_train)\n",
        "    return dataset.shuffle(TRAIN_COUNT, reshuffle_each_iteration=True).repeat().batch(BATCH_SIZE, drop_remainder=True).prefetch(1)\n",
        "\n",
        "# For evaluation and metrics\n",
        "def eval_input_fn():\n",
        "    dataset = combined_dataset(x_test, y_test)\n",
        "    return dataset.repeat().batch(BATCH_SIZE).prefetch(1)\n",
        "\n",
        "training_batches = train_input_fn()\n",
        "training_iterator = tf.compat.v1.data.make_one_shot_iterator(training_batches)\n",
        "heldout_iterator = tf.compat.v1.data.make_one_shot_iterator(eval_input_fn())\n",
        "\n",
        "# Combine these into a feedable iterator that can switch between training\n",
        "# and validation inputs.\n",
        "handle = tf.compat.v1.placeholder(tf.string, shape=[])\n",
        "feedable_iterator = tf.compat.v1.data.Iterator.from_string_handle(handle, training_batches.output_types, training_batches.output_shapes)\n",
        "series, labels = feedable_iterator.get_next()"
      ],
      "execution_count": 0,
      "outputs": []
    },
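    {
      "metadata": {},
      "cell_type": "markdown",
      "source": [
        "The feedable-iterator pattern above lets one graph consume either the training or held-out dataset, selected at `sess.run` time by feeding a string handle. An optional smoke test (a hypothetical check, not part of the original pipeline) to confirm batch shapes might look like this:"
      ]
    },
    {
      "metadata": {},
      "cell_type": "code",
      "source": [
        "# Optional sanity check: pull one batch from the (infinitely repeating)\n",
        "# training dataset and inspect its shape.\n",
        "with tf.compat.v1.Session() as check_sess:\n",
        "    train_handle_check = check_sess.run(training_iterator.string_handle())\n",
        "    s_batch, l_batch = check_sess.run([series, labels], feed_dict={handle: train_handle_check})\n",
        "    print('series:', s_batch.shape, 'labels:', l_batch.shape)  # expect (125, INPUT_SIZE, 1) and (125,)"
      ],
      "execution_count": 0,
      "outputs": []
    },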
    {
      "metadata": {
        "id": "PnGsC48GaGTk",
        "colab_type": "text"
      },
      "cell_type": "markdown",
      "source": [
        "### Define the model\n",
        "#### Bayesian CNN Model\n",
        "* The convolutional model is taken from: https://arxiv.org/pdf/1805.00794.pdf\n",
        "\n",
        "Model consists of:\n",
        "* An initial 1-D convolutional layer\n",
        "* 5 repeated residual blocks (`same` padding)\n",
        "* A fully-connected layer\n",
        "* A linear layer with softmax output\n",
        "* Flipout layers are used instead of standard layers"
      ]
    },
    {
      "metadata": {
        "id": "Q8XdxTVYaO1q",
        "colab_type": "code",
        "colab": {}
      },
      "cell_type": "code",
      "source": [
        "KL_ANNEALING = 30\n",
        "\n",
        "MODEL_PATH = '/content/drive/My Drive/Colab Notebooks/Models/BayesianCNN/BNN.tfmodel'\n",
        "\n",
        "def kernel_prior(dtype, shape, name, trainable, add_variable_fn):\n",
        "    return tfp.layers.default_multivariate_normal_fn(dtype, shape, name, trainable, add_variable_fn)\n",
        "#     return tfd.Horseshoe(scale=5.0)\n",
        "\n",
        "#     mix = 0.75\n",
        "#     mixture = tfd.Mixture(name=name,\n",
        "#         cat=tfd.Deterministic([mix, 1. - mix]),\n",
        "#         components=[tfd.Normal(loc=0., scale=1.), tfd.Normal(loc=0., scale=7.)])\n",
        "#     return mixture\n",
        "\n",
        "def conv_unit(unit, input_layer):\n",
        "    s = '_' + str(unit)\n",
        "    layer = tfp.layers.Convolution1DFlipout(name='Conv1' + s, filters=32, kernel_size=5, strides=1, padding='same', activation='relu', kernel_prior_fn=kernel_prior)(input_layer)\n",
        "    layer = tfp.layers.Convolution1DFlipout(name='Conv2' + s, filters=32, kernel_size=5, strides=1, padding='same', activation=None, kernel_prior_fn=kernel_prior)(layer )\n",
        "    layer = keras.Add(name='ResidualSum' + s)([layer, input_layer])\n",
        "    layer = keras.Activation(\"relu\", name='Act' + s)(layer)\n",
        "    layer = keras.MaxPooling1D(name='MaxPool' + s, pool_size=5, strides=2)(layer)\n",
        "    return layer\n",
        "\n",
        "def model_fn(input_shape):\n",
        "    time_series = tf.keras.layers.Input(shape=input_shape, dtype='float32')\n",
        "    \n",
        "    current_layer = tfp.layers.Convolution1DFlipout(filters=32, kernel_size=5, strides=1, kernel_prior_fn=kernel_prior)(time_series)\n",
        "\n",
        "    for i in range(5):\n",
        "        current_layer = conv_unit(i + 1, current_layer)\n",
        "\n",
        "    current_layer = keras.Flatten()(current_layer)\n",
        "    current_layer = tfp.layers.DenseFlipout(32, name='FC1', activation='relu', kernel_prior_fn=kernel_prior)(current_layer)\n",
        "    logits = tfp.layers.DenseFlipout(5, name='Output', kernel_prior_fn=kernel_prior)(current_layer)\n",
        "    \n",
        "    model = tf.keras.Model(inputs=time_series, outputs=logits, name='bayes_cnn_model')\n",
        "    return model\n",
        "  \n",
        "# Compute the negative Evidence Lower Bound (ELBO) loss\n",
        "t = tf.compat.v1.Variable(0.0)\n",
        "\n",
        "def loss_fn(labels, logits):\n",
        "    labels_distribution = tfd.Categorical(logits=logits)\n",
        "\n",
        "    # Perform KL annealing. The optimal number of annealing steps\n",
        "    # depends on the dataset and architecture.\n",
        "    kl_regularizer = t / (KL_ANNEALING * BATCHES_PER_EPOCH)\n",
        "\n",
        "    # Compute the -ELBO as the loss. The kl term is annealed from 0 to 1 over\n",
        "    # the epochs specified by the kl_annealing flag.\n",
        "    log_likelihood = labels_distribution.log_prob(labels)\n",
        "    neg_log_likelihood = -tf.reduce_mean(input_tensor=log_likelihood)\n",
        "    kl = sum(model.losses) / len(x_train) * tf.minimum(1.0, kl_regularizer)\n",
        "    return neg_log_likelihood + kl, kl, kl_regularizer, labels_distribution\n",
        "\n",
        "model = model_fn([INPUT_SIZE, 1])\n",
        "model.summary()"
      ],
      "execution_count": 0,
      "outputs": []
    },
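    {
      "metadata": {},
      "cell_type": "markdown",
      "source": [
        "In symbols, `loss_fn` minimizes the annealed negative ELBO\n",
        "\n",
        "$$\\mathcal{L} = -\\frac{1}{B}\\sum_{i=1}^{B} \\log p(y_i \\mid x_i, w) + \\min\\left(1, \\frac{t}{T_{KL}}\\right)\\frac{\\mathrm{KL}(q(w)\\,\\|\\,p(w))}{N}$$\n",
        "\n",
        "where $B$ is the batch size, $N$ is the training-set size, $t$ is the step counter, and $T_{KL}$ = `KL_ANNEALING * BATCHES_PER_EPOCH`. The KL term is collected from the Flipout layers via `model.losses`."
      ]
    },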
    {
      "metadata": {
        "id": "1h96cjFraUo_",
        "colab_type": "text"
      },
      "cell_type": "markdown",
      "source": [
        "### Train model"
      ]
    },
    {
      "metadata": {
        "id": "9uHtgxoLynkU",
        "colab_type": "code",
        "colab": {}
      },
      "cell_type": "code",
      "source": [
        "INITIAL_LEARNING_RATE = 0.0001\n",
        "EPOCHS = 100\n",
        "\n",
        "assert (EPOCHS > 0)\n",
        "\n",
        "logits = model(series)\n",
        "loss, kl, kl_reg, labels_distribution = loss_fn(labels, logits)\n",
        "\n",
        "# Build metrics for evaluation. Predictions are formed from a single forward\n",
        "# pass of the probabilistic layers. They are cheap but noisy\n",
        "# predictions.\n",
        "predictions = tf.argmax(input=logits, axis=1)\n",
        "with tf.compat.v1.name_scope(\"train\"):\n",
        "    train_accuracy, train_accuracy_update_op = tf.compat.v1.metrics.accuracy(labels=labels, predictions=predictions)\n",
        "    opt = tf.compat.v1.train.AdamOptimizer(INITIAL_LEARNING_RATE)\n",
        "    train_op = opt.minimize(loss)\n",
        "    update_step_op = tf.compat.v1.assign(t, t + 1)\n",
        "\n",
        "with tf.compat.v1.name_scope(\"valid\"):\n",
        "    valid_accuracy, valid_accuracy_update_op = tf.compat.v1.metrics.accuracy(labels=labels, predictions=predictions)\n",
        "\n",
        "init_op = tf.group(tf.compat.v1.global_variables_initializer(), tf.compat.v1.local_variables_initializer())\n",
        "\n",
        "stream_vars_valid = [\n",
        "    v for v in tf.compat.v1.local_variables() if \"valid/\" in v.name\n",
        "]\n",
        "reset_valid_op = tf.compat.v1.variables_initializer(stream_vars_valid)\n",
        "\n",
        "with tf.compat.v1.Session() as sess:\n",
        "    sess.run(init_op)\n",
        "\n",
        "    # Run the training loop\n",
        "    train_handle = sess.run(training_iterator.string_handle())\n",
        "    heldout_handle = sess.run(heldout_iterator.string_handle())\n",
        "    training_steps = EPOCHS * BATCHES_PER_EPOCH\n",
        "    \n",
        "    for step in range(training_steps):\n",
        "        _ = sess.run([train_op, train_accuracy_update_op, update_step_op], feed_dict={handle: train_handle})\n",
        "\n",
        "        # Manually print the frequency\n",
        "        if step % (BATCHES_PER_EPOCH // 5) == 0:\n",
        "            loss_value, accuracy_value, kl_value, kl_reg_value = sess.run([loss, train_accuracy, kl, kl_reg], feed_dict={handle: train_handle})\n",
        "            print(\"   Loss: {:.3f} Accuracy: {:.3f} KL: {:.3f} KL-reg: {:.3f}\".format(loss_value, accuracy_value, kl_value, kl_reg_value))\n",
        "\n",
        "        if (step + 1) % BATCHES_PER_EPOCH == 0:\n",
        "            # Calculate validation accuracy\n",
        "            for _ in range(TEST_BATCHES_PER_EPOCH):\n",
        "                sess.run(valid_accuracy_update_op, feed_dict={handle: heldout_handle})\n",
        "            \n",
        "            valid_value = sess.run(valid_accuracy, feed_dict={handle: heldout_handle})\n",
        "            print(\"Epoch: {:>3d} Validation Accuracy: {:.3f}\".format((step + 1) // BATCHES_PER_EPOCH, valid_value))\n",
        "\n",
        "            sess.run(reset_valid_op)\n",
        "            \n",
        "    model.save_weights(MODEL_PATH)"
      ],
      "execution_count": 0,
      "outputs": []
    },
    {
      "metadata": {
        "id": "Ml9cWK8j0vEU",
        "colab_type": "text"
      },
      "cell_type": "markdown",
      "source": [
        "### Evaluate model"
      ]
    },
    {
      "metadata": {
        "id": "p40qKgiIIVNd",
        "colab_type": "code",
        "colab": {}
      },
      "cell_type": "code",
      "source": [
        "NUM_MONTE_CARLO = 1000\n",
        "\n",
        "model.load_weights(MODEL_PATH)\n",
        "\n",
        "mc_counts = np.zeros((TEST_COUNT, 5))\n",
        "x = np.expand_dims(x_test, -1)\n",
        "sample_index = np.arange(TEST_COUNT)\n",
        "\n",
        "for i in range(NUM_MONTE_CARLO):\n",
        "    y_pred = np.argmax(model.predict(x), axis=1)\n",
        "    mc_counts[sample_index, y_pred] += 1\n",
        "    \n",
        "y_pred = np.argmax(mc_counts, axis=1)\n",
        "y_prob = mc_counts[sample_index, y_pred] / NUM_MONTE_CARLO\n",
        "\n",
        "y_prob_correct = y_prob[y_pred == y_test]\n",
        "y_prob_mis = y_prob[y_pred != y_test]"
      ],
      "execution_count": 0,
      "outputs": []
    },
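    {
      "metadata": {},
      "cell_type": "markdown",
      "source": [
        "Each `predict` call draws a fresh weight sample $w_m \\sim q(w)$, so the vote fraction above estimates the predictive probability of the winning class:\n",
        "\n",
        "$$\\hat{p}(y = k \\mid x) \\approx \\frac{1}{M}\\sum_{m=1}^{M} \\mathbf{1}\\left[\\arg\\max_j f_{w_m}(x)_j = k\\right]$$\n",
        "\n",
        "with $M$ = `NUM_MONTE_CARLO`. (This is a hard-vote estimate; averaging the softmax outputs would be an alternative.)"
      ]
    },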
    {
      "metadata": {
        "id": "dWh_nMZN9J0N",
        "colab_type": "text"
      },
      "cell_type": "markdown",
      "source": [
        "### Check probability estimates"
      ]
    },
    {
      "metadata": {
        "id": "K0lRvUS9gQIk",
        "colab_type": "code",
        "colab": {}
      },
      "cell_type": "code",
      "source": [
        "from astropy.stats import binom_conf_interval\n",
        "\n",
        "_, _, _ = plt.hist(y_prob, 10, (0, 1))\n",
        "plt.xlabel('Belief')\n",
        "plt.ylabel('Count')\n",
        "plt.title('All Predictions')\n",
        "plt.show();\n",
        "\n",
        "n_all, bins = np.histogram(y_prob, 10, (0, 1))\n",
        "n_correct, bins = np.histogram(y_prob_correct, 10, (0, 1))\n",
        "\n",
        "f_correct = n_correct / np.clip(n_all, 1, None)\n",
        "f_bins = 0.5 * (bins[:-1] + bins[1:])\n",
        "\n",
        "n_correct = n_correct[n_all > 0]\n",
        "n_total = n_all[n_all > 0]\n",
        "f_correct = n_correct / n_total\n",
        "f_bins = f_bins[n_all > 0]\n",
        "\n",
        "lower_bound, upper_bound = binom_conf_interval(n_correct, n_total)\n",
        "error_bars = np.array([f_correct - lower_bound, upper_bound - f_correct])\n",
        "\n",
        "plt.plot([0., 1.], [0., 1.])\n",
        "plt.errorbar(f_bins, f_correct, yerr=error_bars, fmt='o')\n",
        "plt.xlabel('Monte Carlo Probability')\n",
        "plt.ylabel('Frequency')\n",
        "plt.title('Correct Predictions')\n",
        "plt.show();"
      ],
      "execution_count": 0,
      "outputs": []
    },
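    {
      "metadata": {},
      "cell_type": "markdown",
      "source": [
        "For a well-calibrated classifier the points should track the diagonal: among predictions made with belief $p$, a fraction of about $p$ should be correct. `binom_conf_interval` provides binomial confidence intervals on each bin's accuracy (astropy defaults to a Wilson score interval at the 1-sigma level)."
      ]
    },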
    {
      "metadata": {
        "id": "ggvEGluM9TEw",
        "colab_type": "text"
      },
      "cell_type": "markdown",
      "source": [
        "### Compute metrics"
      ]
    },
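    {
      "metadata": {},
      "cell_type": "markdown",
      "source": [
        "The five target classes correspond to the beat categories in the Kaggle source data (N, S, V, F, Q in the MIT-BIH-derived set), so both the report and the confusion matrix below are 5-way."
      ]
    },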
    {
      "metadata": {
        "id": "TeV6NtwnP0Lm",
        "colab_type": "code",
        "colab": {}
      },
      "cell_type": "code",
      "source": [
        "import sklearn.metrics as skm\n",
        "import seaborn\n",
        "\n",
        "# Classification report\n",
        "report = skm.classification_report(y_test, y_pred)\n",
        "print(report)\n",
        "\n",
        "# Confusion matrix\n",
        "cm = skm.confusion_matrix(y_test, y_pred)\n",
        "seaborn.heatmap(cm, annot=True,annot_kws={\"size\": 16})"
      ],
      "execution_count": 0,
      "outputs": []
    }
  ]
}