## Meta Llama 3 Fine-tuning, RAG

[Meta Llama 3 Fine-tuning, RAG, and Prompt Engineering for Drug Discovery](https://www.chemicalqdevice.com/meta-llama-3-fine-tuning-rag) Event Seminar and PDF, 04/25/24.

Achieving the best outputs from the most relevant, domain-specific data typically requires techniques beyond cutting-edge pre-trained models such as Llama 3. Two of these techniques, fine-tuning and retrieval-augmented generation (RAG), can be used with an LLM together or separately, depending on the objectives.
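
To make the RAG side of that distinction concrete, the following is a minimal, generic sketch of the retrieve-then-prompt pattern. The corpus, the word-overlap scorer, and the prompt template are all illustrative placeholders, not the seminar's actual pipeline or data:

```python
# Minimal RAG sketch: rank documents against the question, then prepend the
# top matches to the prompt that would be sent to an LLM such as Llama 3.
# Everything here (corpus text, scorer, template) is a toy illustration.

def retrieve(question: str, corpus: list[str], k: int = 2) -> list[str]:
    """Rank documents by simple word overlap with the question (toy scorer)."""
    q_words = set(question.lower().split())
    ranked = sorted(corpus, key=lambda doc: -len(q_words & set(doc.lower().split())))
    return ranked[:k]

def build_prompt(question: str, corpus: list[str]) -> str:
    """Assemble the augmented prompt: retrieved context followed by the question."""
    context = "\n".join(retrieve(question, corpus))
    return f"Answer using this context:\n{context}\n\nQuestion: {question}"

corpus = [
    "The heart disease dataset directory contains data files and documentation.",
    "Llama 3 is a family of pre-trained and instruction-tuned language models.",
    "RAG augments a prompt with documents retrieved from a knowledge base.",
]
prompt = build_prompt("What does the heart disease dataset directory contain?", corpus)
```

In a production system the word-overlap scorer would be replaced by embedding similarity over a vector index, but the prompt-assembly step is the same.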

Here, three sets of experiments are detailed, with access to fine-tuned Hugging Face models, Colab links, and GitHub repositories containing new developments and the authors' original notebooks. A main finding was that, when fine-tuning on the 'Mol-Instructions' dataset, increasing the number of training steps decreased model loss and significantly improved output accuracy. (14) A separate LLM+RAG model based on the UC Irvine 'Heart Disease' dataset provided a more accurate and concise output about a specific directory's contents than the meta.AI chatbot's response. (16)
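
The steps-versus-loss trend from that finding can be illustrated with a toy gradient-descent loop. This is a generic one-parameter example of loss decreasing as optimization steps increase, not the actual Llama 3 / Mol-Instructions fine-tuning run:

```python
# Toy illustration of the steps-vs-loss relationship: gradient descent on a
# one-parameter least-squares objective, loss(w) = (w - 3)^2. Running more
# steps drives the loss lower, mirroring the trend reported when fine-tuning
# for more steps. (Generic sketch, not the seminar's training code.)

def train(steps: int, lr: float = 0.1) -> float:
    """Run `steps` gradient-descent updates and return the final loss."""
    w = 0.0
    for _ in range(steps):
        grad = 2 * (w - 3.0)   # derivative of (w - 3)^2
        w -= lr * grad
    return (w - 3.0) ** 2      # final loss

losses = {s: train(s) for s in (10, 100, 1000)}
```

Real fine-tuning loss curves flatten out rather than reaching zero, and past some point more steps risk overfitting, so step count is still a hyperparameter to validate.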

“RAG systems have a unique appeal over traditional search engines in that they can incorporate prior knowledge to fill in the gaps and extrapolate the retrieved information.”

- Kevin Wu, et al., Stanford University, April 16, 2024. [How faithful are RAG models? Quantifying the tug-of-war between RAG and LLMs' internal prior](https://arxiv.org/abs/2404.10198)