Minor design issue for SvelteKit-based WebUI (#14839)

Name and Version
./llama-server --version
ggml_cuda_init: GGML_CUDA_FORCE_MMQ: no
ggml_cuda_init: GGML_CUDA_FORCE_CUBLAS: no
ggml_cuda_init: found 1 CUDA devices:
Device 0: NVIDIA GeForce RTX 5090, compute capability 12.0, VMM: yes
version: 6514 (c0b4509)
built with cc (Debian 12.2.0-14+deb12u1) 12.2.0 for x86_64-linux-gnu
Current behavior:
Bug.mp4
Target behavior (proposal / test branch: master...ServeurpersoCom:llama.cpp:webui-mobile-fix)
Fix.mp4
Operating systems
Linux
Which llama.cpp modules do you know to be affected?
llama-server
Command line
Problem description & steps to reproduce
Tested on Kiwi (Chromium-based) and the official Chrome mobile browser on a Samsung S25 Ultra.
First Bad Commit
Relevant log output