Commit eadc3b8

backend: bump llama.cpp for VRAM leak fix when switching models
Signed-off-by: Jared Van Bortel <[email protected]>
Parent: 6db5307

File tree

1 file changed: +1 −1 lines

gpt4all-backend/llama.cpp-mainline
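The single changed file is the llama.cpp-mainline submodule pointer, which is why the diff is only one line added and one removed. Below is a minimal sketch of how such a submodule bump is typically made, assuming the standard git workflow; the placeholder commit hash is hypothetical, since the page does not show the upstream llama.cpp commit being pinned.

```sh
# Hypothetical submodule bump workflow (placeholder hash, not the actual upstream commit)
cd gpt4all-backend/llama.cpp-mainline
git fetch origin
git checkout <upstream-commit-with-vram-leak-fix>   # placeholder
cd ../..

# Stage the new submodule pointer and commit with a sign-off,
# matching the Signed-off-by trailer shown on this commit
git add gpt4all-backend/llama.cpp-mainline
git commit -s -m "backend: bump llama.cpp for VRAM leak fix when switching models"
```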
