BERT SMILES Autocompletion + API is a project that fine-tunes and deploys a BERT model to predict the next element or character in a SMILES (Simplified Molecular Input Line Entry System) string. The API lets users autocomplete SMILES strings with high accuracy, making it easier to specify molecules without drawing software.
Web App Demo of SMILES Autocompletion API
The BERT model was fine-tuned on a dataset of valid SMILES strings, augmented with additional SMILES strings generated using the RDKit library. The dataset was preprocessed into masked language model (MLM) training examples, in which a portion of each SMILES string was replaced with a special [MASK] token. The MLM objective is to predict the masked tokens from the context provided by the surrounding unmasked tokens.
During the fine-tuning process, the model learned the syntactic and semantic patterns within the SMILES strings, enabling it to generate chemically valid suggestions for the masked positions.
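The MLM preprocessing described above can be sketched as follows. This is a simplified, character-level illustration (the actual project may tokenize SMILES differently, e.g. treating multi-character atoms like `Cl` as single tokens), and `make_mlm_example` is a hypothetical helper name, not part of the repository:

```python
import random

MASK = "[MASK]"

def make_mlm_example(smiles, mask_prob=0.15, seed=0):
    """Mask a random fraction of characters in a SMILES string.

    Returns the masked token list plus a parallel label list holding the
    original character at each masked position (None elsewhere), which is
    what the MLM loss is computed against.
    """
    rng = random.Random(seed)
    tokens = list(smiles)
    labels = [None] * len(tokens)
    for i, ch in enumerate(tokens):
        if rng.random() < mask_prob:
            labels[i] = ch      # remember the true character
            tokens[i] = MASK    # hide it from the model
    return tokens, labels

# Aspirin, with ~30% of positions masked
tokens, labels = make_mlm_example("CC(=O)Oc1ccccc1C(=O)O", mask_prob=0.3)
```

During training, the model sees `tokens` and is scored on recovering the characters stored in `labels`.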
Algorithm for SMILES Autocompletion.
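The search can be sketched as a breadth-limited expansion: repeatedly extend each candidate string with the model's most likely next characters, up to `max_search_length` characters deep. In this sketch, `predict_next` is a hypothetical stand-in for the fine-tuned BERT model's masked-token predictions; the names and control flow here are illustrative, not the repository's actual implementation:

```python
def autocomplete(smiles, predict_next, max_search_length=10, n_max=5):
    """Extend `smiles` up to `max_search_length` characters.

    `predict_next(partial)` should return candidate next characters for a
    partial SMILES string, most likely first (e.g. from BERT's MLM head
    over an appended [MASK] position).
    """
    suggestions = []
    frontier = [smiles]
    for _ in range(max_search_length):
        next_frontier = []
        for candidate in frontier:
            for ch in predict_next(candidate):
                extended = candidate + ch
                suggestions.append(extended)
                next_frontier.append(extended)
        frontier = next_frontier[:n_max]  # keep only the best candidates
        if len(suggestions) >= n_max:
            break
    return suggestions[:n_max]

# Toy predictor that always proposes closing the ring or extending the
# chain; the real project queries the fine-tuned model instead.
demo = autocomplete("c1ccccc", lambda s: ["1", "C"], max_search_length=2)
```

A practical version would also filter candidates for chemical validity (e.g. with RDKit) before returning them.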
To set up and run the BERT SMILES Autocompletion API, follow these steps:
$ git clone https://github.com/alpayariyak/BERT-SMILES-Autocompletion-API.git
$ cd BERT-SMILES-Autocompletion-API
$ pip install -r requirements.txt
$ python autocompletionAPI.py
/autocomplete: autocompletes a given SMILES string using the fine-tuned BERT model, the database search, or both.
smiles: The SMILES string to autocomplete. (required)
n_max_suggestions: The maximum number of suggestions to return (default: 5).
use_model: Set to true to use the BERT model for autocompletion (default: true).
use_database: Set to true to use the database search for autocompletion (default: true).
max_search_length: The maximum depth to search when using the BERT model (default: 10).
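A request to the endpoint can be built from these parameters as follows. The host and port are assumptions (adjust them to wherever `autocompletionAPI.py` is actually serving):

```python
from urllib.parse import urlencode

# Assumed local address for the running API server.
BASE_URL = "http://localhost:5000/autocomplete"

params = {
    "smiles": "CC(=O)Oc1ccccc",   # partial SMILES to complete (required)
    "n_max_suggestions": 3,        # cap the number of suggestions
    "use_model": "true",           # query the fine-tuned BERT model
    "use_database": "false",       # skip the database search
}
url = f"{BASE_URL}?{urlencode(params)}"
```

The resulting URL can then be fetched with any HTTP client (e.g. `curl "$url"`).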