### [Download](https://drive.google.com/file/d/1t6irPsTSC9Yk9V9lW44IZ-ZGvJodty-h/view?usp=sharing), [Results](https://drive.google.com/file/d/1zvZUio10eiFxOAsC85SzlDXSQl9z66aK/view?usp=sharing), [Discussion](https://youtu.be/6nZzji0dlws).