Hi Eric, it's Brian Mahabir. It was a pleasure to meet you at Dev Conf!
I have a bare-bones, fully working demo that uses NVIDIA GPU support. I'd like to have an nv branch on the ramalama repo to keep a record of the development.
Although you can build llama.cpp with GPU support, it doesn't work seamlessly. For example, it takes time to offload the model to VRAM, so when using the GPU you have to wait about 15-30 seconds before the chatbox appears. Ollama handles this much better, albeit at the cost of some performance. Since Ollama is built on top of llama.cpp, the plan is to look at how it handles GPU support and integrate those changes for a better experience.
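To make the GPU-offload step concrete, here is a minimal sketch using the llama-cpp-python bindings (an assumption on my part; the demo may drive llama.cpp directly). The model path is hypothetical, and the package is assumed to be built with CUDA support:

```python
# Minimal sketch, assuming llama-cpp-python installed with CUDA support,
# e.g. via `pip install llama-cpp-python` with the CUDA build flags.
from llama_cpp import Llama

# n_gpu_layers=-1 asks llama.cpp to offload all model layers to VRAM.
# This offload is the step that causes the 15-30 second startup delay
# before the chat becomes responsive.
llm = Llama(
    model_path="./models/model.gguf",  # hypothetical local GGUF model
    n_gpu_layers=-1,
)

# Once the layers are resident in VRAM, generation itself is fast.
out = llm("Q: What does RamaLama do? A:", max_tokens=64)
print(out["choices"][0]["text"])
```

The delay is a one-time cost per process, which is presumably why Ollama, which keeps a model server resident, feels more responsive.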