v3.0.0-beta.31
Pre-release
3.0.0-beta.31 (2024-06-17)
Bug Fixes
- remove CUDA binary compression for Windows (#243) (0b85800)
- improve `inspect gpu` command output (#243) (0b85800)
Shipped with llama.cpp release b3166
To use the latest llama.cpp release available, run `npx --no node-llama-cpp download --release latest`. (learn more)
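For reference, the GPU status reported by the `inspect gpu` command can also be checked programmatically. Below is a minimal sketch, assuming this beta exposes the `getLlama()` entry point and the `llama.gpu` / `llama.getVramState()` helpers described in the v3 documentation; treat these names as assumptions rather than a confirmed API for this specific pre-release.

```ts
import {getLlama} from "node-llama-cpp";

// Resolve the best available compute backend (e.g. Metal, CUDA, Vulkan, or CPU-only).
const llama = await getLlama();

// Assumed API: `llama.gpu` is the detected GPU type, or `false` when running on CPU only.
console.log("GPU type:", llama.gpu);

// Assumed API: a VRAM usage snapshot, with values reported in bytes.
const vram = await llama.getVramState();
console.log(`VRAM used: ${vram.used} / ${vram.total} bytes`);
```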