cathy-stones/alphafold
This package provides an implementation of the inference pipeline of AlphaFold v2.
cathy-stones/cnn-design-for-ad
This repository contains code for a medical paper and a machine learning paper on deep learning for dementia.
cathy-stones/fewshot-gan-unet3d
Few-shot 3D Medical Image Segmentation using Generative Adversarial Learning
cathy-stones/moleanalysis
We trained our model on images of cancerous mole cells using Huawei ML Kit Custom Model Generation; our Mole Analysis application can assist with diagnosis and follow-up.
cathy-stones/medicine-llm
Domain Adaptation of Large Language Models
cathy-stones/med-palm
A responsible path to generative AI in healthcare: Unleash the power of Med-PaLM 2 to revolutionize medical knowledge, answer complex questions, and enhance healthcare experiences with accuracy, safety, and equitable practices.
cathy-stones/clinicalbert
This model card describes ClinicalBERT, a language model pretrained on a large multicenter corpus of 1.2 billion words covering diverse diseases, then fine-tuned on a large-scale corpus of EHRs drawn from over 3 million patient records.
cathy-stones/medical-tokenizer
clinitokenizer is a sentence tokenizer that splits unstructured clinical text (such as Electronic Medical Records) into individual sentences.
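To illustrate why clinical text needs a dedicated sentence tokenizer, here is a hypothetical minimal sketch (not clinitokenizer's actual API): naive period-based splitting breaks on clinical abbreviations, so we protect a few before splitting. The abbreviation list and function name are illustrative assumptions.

```python
import re

# Hypothetical sketch -- NOT the clinitokenizer API. Clinical notes are
# full of abbreviations ("Dr.", "pt.", "b.i.d.") that break naive
# period-based splitting, so we mask them before splitting.
ABBREVS = ["Dr.", "Mr.", "Ms.", "Pt.", "pt.", "q.d.", "b.i.d."]

def split_sentences(text: str) -> list[str]:
    protected = text
    for i, abbr in enumerate(ABBREVS):
        protected = protected.replace(abbr, f"<ABBR{i}>")
    # Split after sentence-final ., !, or ? followed by whitespace.
    parts = re.split(r"(?<=[.!?])\s+", protected)
    restored = []
    for part in parts:
        for i, abbr in enumerate(ABBREVS):
            part = part.replace(f"<ABBR{i}>", abbr)
        restored.append(part.strip())
    return [p for p in restored if p]

note = "Pt. seen by Dr. Smith. Start metformin 500 mg b.i.d. with meals. Follow up in 2 weeks."
print(split_sentences(note))
# -> ['Pt. seen by Dr. Smith.', 'Start metformin 500 mg b.i.d. with meals.', 'Follow up in 2 weeks.']
```

Real clinical tokenizers handle far more cases (abbreviations at true sentence ends, numbered lists, headers); this only shows the core difficulty.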
cathy-stones/deepvariant-r1-6-1
DeepVariant is a deep learning-based variant caller that takes aligned reads (in BAM or CRAM format), produces pileup image tensors from them, classifies each tensor using a convolutional neural network, and finally reports the results in a standard VCF or gVCF file.
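The pileup-then-classify idea above can be caricatured in a few lines. This is a toy sketch under made-up data, not DeepVariant's implementation: it builds per-position base counts (a crude stand-in for pileup image tensors) and flags positions where non-reference bases dominate (a crude stand-in for the CNN classifier).

```python
# Toy pileup sketch (NOT DeepVariant): count bases per reference
# position from aligned reads, then flag candidate variants where
# the non-reference fraction reaches a threshold.
from collections import Counter

REF = "ACGTACGT"

# (start_position, read_sequence) pairs, already aligned, no indels.
reads = [
    (0, "ACGTACGT"),
    (0, "ACGAACGT"),   # mismatch at position 3: A instead of T
    (2, "GAACGT"),     # mismatch at position 3: A instead of T
    (4, "ACGT"),
]

def pileup_counts(ref, reads):
    """One Counter of observed bases per reference position."""
    counts = [Counter() for _ in ref]
    for start, seq in reads:
        for offset, base in enumerate(seq):
            counts[start + offset][base] += 1
    return counts

def candidate_variants(ref, counts, min_alt_frac=0.5):
    """(position, ref_base, alt_base) where alt fraction >= threshold."""
    candidates = []
    for pos, ctr in enumerate(counts):
        depth = sum(ctr.values())
        if depth == 0:
            continue
        alt = depth - ctr[ref[pos]]
        if alt / depth >= min_alt_frac:
            candidates.append((pos, ref[pos], ctr.most_common(1)[0][0]))
    return candidates

print(candidate_variants(REF, pileup_counts(REF, reads)))
# -> [(3, 'T', 'A')]
```

DeepVariant replaces the threshold rule with a CNN over multi-channel pileup images (encoding base, quality, strand, etc.) and emits calls in VCF/gVCF rather than tuples.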
cathy-stones/medical-chatbot
Please note that the chatbot is designed for research purposes only and is not intended for use in real medical settings.
cathy-stones/codoc
This repository includes the source code for the paper "Enhancing the reliability and accuracy of AI-enabled diagnosis via complementarity-driven deferral to clinicians (CoDoC)" by Dvijotham et al. (2023), published in the journal Nature Medicine.
cathy-stones/segment-anything
Segment Anything Model for Medical Image Analysis: an Experimental Study
cathy-stones/llava-med
Large Language and Vision Assistant for BioMedicine
cathy-stones/biomistral-7b
A Collection of Open-Source Pretrained Large Language Models for Medical Domains
cathy-stones/meditron
Meditron is a suite of open-source medical Large Language Models (LLMs).
cathy-stones/medalpaca
MedAlpaca expands upon both Stanford Alpaca and AlpacaLoRA to offer an advanced suite of large language models specifically fine-tuned for medical question-answering and dialogue applications. Our primary objective is to deliver an array of open-source language models, paving the way for seamless development of medical chatbot solutions.
cathy-stones/asclepius
Publicly Shareable Clinical Large Language Model Built on Synthetic Clinical Notes
cathy-stones/pmc-llama
Towards Building Open-source Language Models for Medicine
cathy-stones/chatdoctor
Autonomous ChatDoctor with Disease Database Demo.
cathy-stones/biomedlm
BioMedLM 2.7B is a new language model trained exclusively on biomedical abstracts and papers from The Pile.
cathy-stones/endoscopic-segmentation
A pre-trained model for the endoscopic tool segmentation task, built on a flexible UNet structure with an EfficientNet-B2 [1] backbone as the encoder and a UNet architecture [2] as the decoder.
cathy-stones/wholebody-ct-segmentation
Models for 3D segmentation of 104 whole-body anatomical structures.
cathy-stones/pathology-tumor-detection
A pre-trained model for automated detection of metastases in whole-slide histopathology images.
cathy-stones/pathology-nuclei-classification
A pre-trained model for classifying cell nuclei.
cathy-stones/livianet
A 3D fully convolutional neural network for semantic image segmentation.
cathy-stones/medical-ner
This model is a fine-tuned version of DeBERTa on the PubMed dataset.
cathy-stones/biomedclip
BiomedCLIP is a biomedical vision-language foundation model that is pretrained on PMC-15M, a dataset of 15 million figure-caption pairs extracted from biomedical research articles in PubMed Central, using contrastive learning.
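The contrastive objective behind models like BiomedCLIP can be sketched with a toy symmetric loss: each figure embedding should be most similar to its own caption embedding and dissimilar to every other caption in the batch. This is an illustrative NumPy sketch with random stand-in embeddings, not BiomedCLIP's training code; the temperature value is an assumption.

```python
# Toy CLIP-style symmetric contrastive loss (illustrative only).
import numpy as np

def l2_normalize(x):
    return x / np.linalg.norm(x, axis=-1, keepdims=True)

def clip_loss(img_emb, txt_emb, temperature=0.07):
    """Mean cross-entropy over image->text and text->image directions,
    with the matching pairs on the diagonal of the similarity matrix."""
    img = l2_normalize(np.asarray(img_emb, dtype=float))
    txt = l2_normalize(np.asarray(txt_emb, dtype=float))
    logits = img @ txt.T / temperature   # (N, N) cosine similarities
    n = len(logits)

    def xent(lg):
        # softmax cross-entropy with targets = diagonal indices
        lg = lg - lg.max(axis=1, keepdims=True)
        logp = lg - np.log(np.exp(lg).sum(axis=1, keepdims=True))
        return -logp[np.arange(n), np.arange(n)].mean()

    return 0.5 * (xent(logits) + xent(logits.T))

rng = np.random.default_rng(0)
pairs = rng.normal(size=(4, 8))
# Perfectly aligned pairs give a near-zero loss; unrelated
# embeddings give a loss near log(batch_size).
print(clip_loss(pairs, pairs))
```

In real training the image and text encoders are optimized jointly so that this loss pulls paired figure-caption embeddings together across the 15M-pair corpus.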