1. Directly run MOSA with the default configurations as described above.

## Instructions for Integrating Disentanglement Learning into MOSA
To incorporate disentanglement learning, two additional terms are included in the loss function, following the Disentangled Inferred Prior Variational Autoencoder (DIP-VAE) approach, as described by [Kumar et al. (2018)](https://arxiv.org/abs/1711.00848):

$$\lambda_{od}\sum_{i\neq j}\big[\operatorname{Cov}(z)\big]_{ij}^{2}\;+\;\lambda_{d}\sum_{i}\Big(\big[\operatorname{Cov}(z)\big]_{ii}-1\Big)^{2}$$

where $\operatorname{Cov}(z)$ is the covariance of the encoder means $\mu_\phi(x)$ for type `"i"` (DIP-VAE-I), and the covariance of the full approximate posterior $q_\phi(z)$ (means plus expected encoder variances) for type `"ii"` (DIP-VAE-II). The off-diagonal entries are pushed towards 0 and the diagonal entries towards 1.

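A minimal PyTorch sketch of these two penalty terms, for illustration only (the function name and the assumption that the encoder returns per-dimension means and log-variances are ours, not MOSA's actual implementation):

```python
import torch

def dip_vae_regularizer(mu, logvar, lambda_d, lambda_od, dip_vae_type="ii"):
    """Compute the DIP-VAE penalty from a batch of encoder outputs."""
    # Covariance of the encoder means over the batch: Cov[mu_phi(x)]
    centered = mu - mu.mean(dim=0, keepdim=True)
    cov = centered.T @ centered / mu.size(0)
    if dip_vae_type == "ii":
        # DIP-VAE-II additionally includes the expected encoder variances
        cov = cov + torch.diag(logvar.exp().mean(dim=0))
    diag = torch.diagonal(cov)
    off_diag = cov - torch.diag(diag)
    # Penalize off-diagonal entries towards 0 and diagonal entries towards 1
    return lambda_od * (off_diag ** 2).sum() + lambda_d * ((diag - 1.0) ** 2).sum()
```

This penalty is added to the standard VAE loss (reconstruction plus KL divergence).
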
To use this, update the `hyperparameters.json` file by specifying `dip_vae_type` as either `"i"` or `"ii"` (type ii is recommended), and define the parameters `lambda_d` and `lambda_od` as float values, which control the diagonal and off-diagonal regularization, respectively.

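For example, the relevant fragment of `hyperparameters.json` might look as follows (the numeric values are illustrative placeholders, not tuned recommendations; keep any other keys already in the file unchanged):

```json
{
    "dip_vae_type": "ii",
    "lambda_d": 1.0,
    "lambda_od": 1.0
}
```
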
## Pre-trained models
The pre-trained models can be downloaded from the Hugging Face model hub: [MOSA](https://huggingface.co/QuantitativeBiology/MOSA_pretrained)
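
One way to fetch them programmatically is with the `huggingface_hub` client; a minimal sketch, assuming the package is installed (`pip install huggingface_hub`):

```python
from huggingface_hub import snapshot_download

# Download the whole MOSA_pretrained repository into the local
# Hugging Face cache and return the path of the downloaded snapshot.
local_path = snapshot_download(repo_id="QuantitativeBiology/MOSA_pretrained")
print(local_path)
```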