v2.3.2 (sync with upstream llama.cpp) #179
Walkthrough

The changes update the handling of key-value cache operations by switching from direct context-based API calls to memory-based API calls, following upstream llama.cpp. The project also updates the llama.cpp submodule, bumps the package version to 2.3.2, and updates the CDN URLs for the WebAssembly binaries to match the new version.
Sequence Diagram

```mermaid
sequenceDiagram
    participant App
    participant Context
    participant Memory
    App->>Context: llama_get_memory(ctx)
    App->>Memory: llama_memory_* (remove, add, clear)
    Memory-->>App: Operation result
```
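The flow above can be sketched in C++ against the upstream llama.cpp API. This is a minimal, hedged illustration of the call-shape change (context-based `llama_kv_self_*` calls replaced by a memory handle obtained via `llama_get_memory`), not code taken from this PR; exact signatures may differ across llama.cpp versions.

```cpp
// Sketch of migrating KV-cache operations to the memory-based API.
// Assumes the upstream llama.cpp header; names reflect the API at the
// time of this sync and are an assumption, not code from this repo.
#include "llama.h"

void remove_sequence(llama_context * ctx, llama_seq_id seq_id) {
    // Old style (context-based), now deprecated upstream:
    //   llama_kv_self_seq_rm(ctx, seq_id, -1, -1);
    // New style: fetch the memory handle from the context first.
    llama_memory_t mem = llama_get_memory(ctx);

    // Remove all cells for this sequence (p0 = p1 = -1 means the full range).
    llama_memory_seq_rm(mem, seq_id, -1, -1);
}

void reset_cache(llama_context * ctx) {
    llama_memory_t mem = llama_get_memory(ctx);
    // Clear the cache; passing true also erases the underlying data buffers.
    llama_memory_clear(mem, /*data=*/true);
}
```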