AMD GPU run large language model locally – LLaMA and LoRA: Ubuntu step by step tutorial

Thank you for watching! Please consider subscribing. Thank you!
This video is a step-by-step guide to running LLaMA and other models on an AMD GPU.
0:06 Intro
1:34 Ensure that ROCm is installed. If not, check the tutorial on
7:53 Install the bitsandbytes library
12:34 Download the LLaMA model
13:49 Start the webui and test
17:50 Download the LoRA model
18:58 Start the webui and load the LoRA
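Before installing anything else, it helps to confirm that ROCm can actually see the GPU. A minimal check, assuming a standard Ubuntu ROCm install (the `gfx1030` architecture string is just an example for an RDNA2 card):

```shell
# List the GPU's compute architecture as seen by the ROCm runtime.
rocminfo | grep -i "gfx"        # e.g. gfx1030 for an RX 6800/6900

# Show GPU utilization, temperature and VRAM (useful later while the model loads).
rocm-smi

# PyTorch must be a ROCm build for the webui to use the GPU; note that the
# ROCm wheel still exposes the device through the torch.cuda API.
python3 -c "import torch; print(torch.version.hip, torch.cuda.is_available())"
```

If `torch.version.hip` prints `None`, the installed PyTorch is a CPU or CUDA build and needs to be replaced with a ROCm wheel.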
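For the bitsandbytes step: the upstream library targeted CUDA at the time, so AMD users typically built a ROCm port from source. The sketch below uses a placeholder repository URL — substitute the fork referenced in the video:

```shell
# Hypothetical sketch: build a ROCm port of bitsandbytes from source.
# The repository URL is a placeholder; use the fork shown in the video.
git clone https://github.com/<rocm-fork>/bitsandbytes-rocm
cd bitsandbytes-rocm

# Build against HIP instead of CUDA (the exact make target varies by fork).
make hip

# Install into the same Python environment the webui will run in.
pip install .

# Sanity check: the import should succeed without falling back to a CPU stub.
python3 -c "import bitsandbytes"
```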
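The download-and-launch steps can be sketched as follows, assuming the webui is oobabooga's text-generation-webui; the model and LoRA names below (`llama-7b-hf`, `alpaca-lora-7b`) are examples from that era, not necessarily the ones used in the video:

```shell
cd text-generation-webui

# Download a base model and a LoRA from the Hugging Face Hub
# (example repo names -- substitute the ones used in the video).
python download-model.py decapoda-research/llama-7b-hf
python download-model.py tloen/alpaca-lora-7b

# Launch the webui with the base model, loading weights in 8-bit via
# bitsandbytes to reduce VRAM use.
python server.py --model llama-7b-hf --load-in-8bit

# Or launch with the LoRA applied on top of the base model.
python server.py --model llama-7b-hf --lora alpaca-lora-7b --load-in-8bit
```

The LoRA can also be loaded after launch from the webui's interface instead of on the command line.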

πŸ‘‰ The Discord server invite is . There is a LLaMA bot that is free to use; see this for a demo:

If you would like to support me, here is my Ko-fi link: and Patreon page:
Thank you!


© 2024 EVERYTHING CHATGPT