--- a
+++ b/demo/notebooks/demo_hypersearchDL.ipynb
@@ -0,0 +1,179 @@
+{
+ "cells": [
+ {
+ "cell_type": "markdown",
+ "metadata": {},
+ "source": [
+ "## Hyperparameter Search for Deep Learning model"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {},
+ "source": [
+ "##### This notebook illustrates how our deep learning model is trained. Optimal network parameters are determined via a hyperparameter search."
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {},
+ "source": [
+ "Import required libraries:"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "metadata": {
+ "deletable": false,
+ "editable": false},
+ "outputs": [],
+ "source": [
+ "import optunity\n",
+ "import lifelines\n",
+ "from lifelines.utils import concordance_index\n",
+ "import os, sys, pickle\n",
+ "import numpy as np"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {},
+ "source": [
+ "We import ```DL_single_run``` (which performs one run of the DL model) from the ```trainDL.py``` file."
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "metadata": {
+ "deletable": false,
+ "editable": false},
+ "outputs": [],
+ "source": [
+ "sys.path.insert(0, '../code')\n",
+ "from trainDL import *"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {},
+ "source": [
+ "Import the simulated data. This serialized ```.pkl``` file consists of the following variables:\n",
+ "- ```x_full``` : simulated input mesh motion data (i.e. motion descriptors: an 11,514-element vector)\n",
+ "- ```y_full``` : simulated censoring status and survival times\n"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "metadata": {
+ "deletable": false,
+ "editable": false},
+ "outputs": [],
+ "source": [
+ "with open('../data/inputdata_DL.pkl', 'rb') as f: c3 = pickle.load(f)\n",
+ "x_full = c3[0]\n",
+ "y_full = c3[1]\n",
+ "del c3"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {},
+ "source": [
+ "The ```hypersearch_DL()``` function, defined in the code block below, searches for optimal hyperparameters for our deep learning model. The function takes as arguments the input and outcome data, the hyperparameter search method, the search parameters (number of evaluations, number of cross-validation folds), and the search boundaries for each hyperparameter.\n",
+ "\n",
+ "The function evaluates candidate hyperparameter sets by using each to fit our deep learning model (see the ```modelrun()``` function below). For a stable estimate of performance, each hyperparameter set is evaluated over cross-validation folds, with the average performance reported. Here, performance is assessed by _Harrell's_ Concordance Index (see the ```concordance_index``` call below). Cross-validation is carried out using the ```optunity.cross_validated``` function decorator, applied to the ```modelrun()``` function. The hyperparameter set yielding the best cross-validated performance is reported and returned (see the last 3 lines of the function definition below)."
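+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {},
+ "source": [
+ "As a brief aside (added for clarity; this toy cell is not part of the original pipeline), the snippet below illustrates how ```concordance_index``` behaves: it is the fraction of comparable patient pairs whose predicted ranking agrees with the observed ordering of survival times. As in ```modelrun()``` below, predicted risks are negated, since ```concordance_index``` expects higher scores to correspond to longer survival; perfectly concordant predictions score 1.0, and random predictions score around 0.5."
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "metadata": {},
+ "outputs": [],
+ "source": [
+ "# Toy illustration of Harrell's C-index (not part of the original pipeline).\n",
+ "# Uses np and concordance_index, both imported above.\n",
+ "toy_times = np.array([5., 10., 15., 20.])   # survival times\n",
+ "toy_events = np.array([1, 1, 0, 1])         # 1 = event observed, 0 = censored\n",
+ "toy_risk = np.array([4., 3., 2., 1.])       # higher risk => shorter survival\n",
+ "print(concordance_index(toy_times, -toy_risk, toy_events))   # fully concordant => 1.0"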
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "metadata": {
+ "deletable": false,
+ "editable": false},
+ "outputs": [],
+ "source": [
+ "def hypersearch_DL(x_data, y_data, method, nfolds, nevals, lrexp_range, l1rexp_range, dro_range,\n",
+ "                   units1_range, units2_range, alpha_range, batch_size, num_epochs):\n",
+ "\n",
+ "    # Inner function: fit the model on one training fold and score it on the\n",
+ "    # corresponding test fold; optunity supplies the folds via the decorator.\n",
+ "    @optunity.cross_validated(x=x_data, y=y_data, num_folds=nfolds)\n",
+ "    def modelrun(x_train, y_train, x_test, y_test, lrexp, l1rexp, dro, units1, units2, alpha):\n",
+ "        cv_log = DL_single_run(xtr=x_train, ytr=y_train, units1=units1, units2=units2, dro=dro, lr=10**lrexp,\n",
+ "                               l1r=10**l1rexp, alpha=alpha, batchsize=batch_size, numepochs=num_epochs)\n",
+ "        cv_preds = cv_log.model.predict(x_test, batch_size=1)[1]\n",
+ "        cv_C = concordance_index(y_test[:,1], -cv_preds, y_test[:,0])\n",
+ "        return cv_C\n",
+ "\n",
+ "    optimal_pars, searchlog, _ = optunity.maximize(modelrun, num_evals=nevals, solver_name=method, lrexp=lrexp_range,\n",
+ "                                                   l1rexp=l1rexp_range, dro=dro_range, units1=units1_range,\n",
+ "                                                   units2=units2_range, alpha=alpha_range)\n",
+ "\n",
+ "    print('Optimal hyperparameters : ' + str(optimal_pars))\n",
+ "    print('Cross-validated C after tuning: %1.3f' % searchlog.optimum)\n",
+ "    return optimal_pars, searchlog"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {},
+ "source": [
+ "Below, we provide a detailed description of each argument of the ```hypersearch_DL``` function:\n",
+ "- **x_data** : input data (mesh motion descriptors)\n",
+ "- **y_data** : outcome data (censoring status and survival times)\n",
+ "- **method** : hyperparameter search technique. Below are examples of possible options (for a full list of search techniques, see [Optunity solvers](https://optunity.readthedocs.io/en/latest/user/solvers.html)):\n",
+ "    - '_grid search_' : sample the hyperparameter space using a grid search\n",
+ "    - '_random_' : sample the hyperparameter space using a random search\n",
+ "    - '_sobol_' : sample the hyperparameter space using a [Sobol sequence](https://optunity.readthedocs.io/en/latest/user/solvers/sobol.html#sobol)\n",
+ "    - '_particle swarm_' : search the hyperparameter space using [Particle Swarm Optimization](https://optunity.readthedocs.io/en/latest/user/solvers/particle_swarm.html#pso2010)\n",
+ "- **nfolds** : number of cross-validation folds\n",
+ "- **nevals** : number of hyperparameter search evaluations\n",
+ "- **lrexp_range** : search range for the learning rate, expressed as a base-10 exponent (the learning rate used is 10<sup>lrexp</sup>)\n",
+ "- **l1rexp_range** : search range for the _L_<sub>1</sub> regularization penalty, expressed as a base-10 exponent (the penalty used is 10<sup>l1rexp</sup>)\n",
+ "- **dro_range** : search range for the dropout rate\n",
+ "- **units1_range** : search range for the number of units in the autoencoder's hidden layers, specifically the layer immediately after the encoder's random-noise layer and the layer immediately preceding the decoder's output layer\n",
+ "- **units2_range** : search range for the number of units in the latent code layer (the 'central' layer of the autoencoder)\n",
+ "- **alpha_range** : search range for the parameter α, which controls the tradeoff between the two losses used in our network (i.e. the reconstruction and prediction losses)\n",
+ "- **batch_size** : batch size\n",
+ "- **num_epochs** : number of epochs\n"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "metadata": {
+ "deletable": false,
+ "editable": false},
+ "outputs": [],
+ "source": [
+ "opars, clog = hypersearch_DL(x_data=x_full, y_data=y_full,\n",
+ "                             method='particle swarm', nfolds=6, nevals=50,\n",
+ "                             lrexp_range=[-6.,-4.5], l1rexp_range=[-7,-4], dro_range=[.1,.9],\n",
+ "                             units1_range=[75,250], units2_range=[5,20], alpha_range=[0.3, 0.7],\n",
+ "                             batch_size=16, num_epochs=100)"
+ ]
+ },
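+ {
+ "cell_type": "markdown",
+ "metadata": {},
+ "source": [
+ "The call above evaluates 50 candidate hyperparameter sets via particle swarm optimization, each scored by its average concordance index over 6 cross-validation folds.\n",
+ "\n",
+ "As a hedged illustration of how the result might be used (this final cell is not part of the original pipeline), the tuned values in ```opars``` can be passed back to ```DL_single_run``` to refit on the full dataset. ```optunity``` returns every tuned value as a float, so the unit counts are cast to integers, and the exponent-scale values are converted via 10<sup>x</sup>, mirroring ```modelrun()``` above."
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "metadata": {},
+ "outputs": [],
+ "source": [
+ "# Illustrative sketch only (not part of the original pipeline):\n",
+ "# refit on the full dataset using the tuned hyperparameters found above.\n",
+ "final_log = DL_single_run(xtr=x_full, ytr=y_full,\n",
+ "                          units1=int(opars['units1']), units2=int(opars['units2']),\n",
+ "                          dro=opars['dro'], lr=10**opars['lrexp'],\n",
+ "                          l1r=10**opars['l1rexp'], alpha=opars['alpha'],\n",
+ "                          batchsize=16, numepochs=100)"
+ ]
+ }
+ ],
+ "metadata": {
+ "kernelspec": {
+ "display_name": "Python 3",
+ "language": "python",
+ "name": "python3"
+ },
+ "language_info": {
+ "codemirror_mode": {
+ "name": "ipython",
+ "version": 3
+ },
+ "file_extension": ".py",
+ "mimetype": "text/x-python",
+ "name": "python",
+ "nbconvert_exporter": "python",
+ "pygments_lexer": "ipython3",
+ "version": "3.6.6"
+ }
+ },
+ "nbformat": 4,
+ "nbformat_minor": 2
+}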