| 3 | Performed very well compared to an Anthropic LLM and to A100 40GB workflows. Specifically, LangChain's Code Assistant with gpt-4o on an NVIDIA L4 was used for higher output accuracy, but had some errors. A LangChain Agent Supervisor model with gpt-4o was the most concise and less expensive than gpt-3.5-turbo. The RAPTOR LangChain model was 10x faster than the approaches above, with gpt-4o giving more relevant answers than gpt-3.5-turbo. (Slides 06-13) [Discussion](https://youtu.be/XuRHku8LQ4Q), [Presentation](https://drive.google.com/file/d/1oEMEGP3tLHHOA8yxtkdDP6H3RPaWsDB2/view?usp=sharing) |