
Conversation

@rhatdan (Member) commented Sep 9, 2025

llama.cpp defaults to a ctx-size of 4098 and we were hard coding 2048, which means we were not using the default setting. This PR changes the default to ctx-size=0; the flag is not passed on the command line unless the value is > 0, so the llama-server default will be used.

We also hard-coded cache_reuse=256 with no way for the user to override it; this PR adds support for setting cache_reuse in ramalama.conf and on the command line.
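In code, the resulting argument construction looks roughly like the following. This is a minimal sketch of the intended behavior, not the PR's literal diff; the build_server_args helper and its parameters are illustrative, assuming an exec_args list built in ramalama/model.py:

    def build_server_args(context: int, cache_reuse: int, temp: str) -> list[str]:
        """Assemble llama-server arguments, treating 0 as 'use the server default'."""
        exec_args = ["llama-server", "--temp", temp]

        # Only emit --ctx-size for a positive value; with the new default of 0,
        # llama-server's own built-in default context size applies.
        if context > 0:
            exec_args += ["--ctx-size", f"{context}"]

        # cache_reuse is now configurable via the CLI or ramalama.conf
        # instead of being hardcoded to 256.
        exec_args += ["--cache-reuse", f"{cache_reuse}"]
        return exec_args

    # With the defaults, --ctx-size is omitted entirely:
    print(build_server_args(context=0, cache_reuse=256, temp="0.8"))
    # ['llama-server', '--temp', '0.8', '--cache-reuse', '256']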

Summary by Sourcery

Enable user-configurable cache reuse and defer to llama-server's default context size instead of hard-coding ctx_size

New Features:

  • Add --cache-reuse option to the CLI and support cache_reuse configuration in ramalama.conf

Enhancements:

  • Set default ctx_size to 0 to defer to llama-server’s built-in default and only emit --ctx-size when explicitly >0

Documentation:

  • Add documentation for the cache_reuse option and update ctx-size defaults across man pages and example configs

Tests:

  • Update system tests to check for cache-reuse flag and ensure --ctx-size isn’t shown by default

sourcery-ai bot (Contributor) commented Sep 9, 2025

Reviewer's Guide

This PR replaces the hard-coded context-size default with a zero placeholder so that llama-server's built-in default applies, and introduces a new cache_reuse configuration option and CLI argument, updating the runtime invocation, configuration schema, documentation, and tests accordingly.

Sequence diagram for llama_serve command argument construction

sequenceDiagram
    participant "User/CLI"
    participant "ramalama.model.llama_serve()"
    participant "llama-server"
    "User/CLI"->>"ramalama.model.llama_serve()": Provide --ctx-size and --cache-reuse args
    "ramalama.model.llama_serve()"->>"llama-server": Build exec_args
    alt context > 0
        "ramalama.model.llama_serve()"->>"llama-server": Pass --ctx-size <value>
    else context == 0
        "ramalama.model.llama_serve()"->>"llama-server": Omit --ctx-size (use llama-server default)
    end
    "ramalama.model.llama_serve()"->>"llama-server": Pass --cache-reuse <value>
    "llama-server"-->>"User/CLI": Use provided or default values

Entity relationship diagram for updated ramalama.conf configuration options

erDiagram
    CONFIG {
        int ctx_size
        int cache_reuse
    }
    CONFIG ||--|| "ramalama.conf" : defines
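For reference, here is how the two options might look in ramalama.conf. This is a minimal sketch assuming the TOML layout of the example config in docs/ramalama.conf, with the new defaults from this PR:

    [ramalama]

    # size of the prompt context; 0 defers to llama-server's built-in default
    ctx_size = 0

    # min chunk size to attempt reusing from the cache via KV shifting;
    # previously hardcoded to 256, now user-configurable
    cache_reuse = 256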

Class diagram for updated configuration options in BaseConfig

classDiagram
    class BaseConfig {
        api: str = "none"
        carimage: str = "registry.access.redhat.com/ubi10-micro:latest"
        container: bool
        ctx_size: int = 0
        cache_reuse: int = 256
        default_image: str
        dryrun: bool
        engine: SUPPORTED_ENGINES | None
        ...
    }

Class diagram for updated CLI argument parsing in runtime_options

classDiagram
    class runtime_options {
        +parser.add_argument("--cache-reuse", dest="cache_reuse", type=int, default=CONFIG.cache_reuse)
        +parser.add_argument("--ctx-size", "-c", dest="context", type=int, default=CONFIG.ctx_size)
        +parser.add_argument("--max-model-len", dest="context", type=int, default=CONFIG.ctx_size)
    }

File-Level Changes

Change: Introduce configurable cache_reuse parameter
Details:
  • Add --cache-reuse argument in CLI
  • Expose cache_reuse in BaseConfig
  • Pass cache_reuse value to llama-server invocation
  • Add documentation entries and test assertions for cache_reuse
Files: ramalama/cli.py, ramalama/config.py, ramalama/model.py, docs/ramalama-serve.1.md, docs/ramalama-run.1.md, docs/ramalama-perplexity.1.md, docs/ramalama.conf, docs/ramalama.conf.5.md, test/system/030-run.bats

Change: Switch to using llama-server's default context size
Details:
  • Set BaseConfig.ctx_size default to 0
  • Use CONFIG.ctx_size in CLI defaults
  • Omit --ctx-size flag when context is 0
  • Update documentation defaults to reflect new context size behavior
Files: ramalama/config.py, ramalama/cli.py, ramalama/model.py, docs/ramalama-serve.1.md, docs/ramalama-run.1.md, docs/ramalama-perplexity.1.md, docs/ramalama.conf, docs/ramalama.conf.5.md


@gemini-code-assist bot (Contributor) left a comment

Summary of Changes

Hello @rhatdan, I'm Gemini Code Assist! I'm currently reviewing this pull request and will post my feedback shortly. In the meantime, here's a summary to help you and other reviewers quickly get up to speed!

This pull request improves the flexibility and default behavior of ramalama when interacting with llama.cpp. It adjusts the handling of the ctx-size parameter to align with llama.cpp's default behavior and introduces user-configurable options for the cache_reuse parameter, enhancing control over model serving.

Highlights

  • Context Size Defaulting: The ctx-size parameter for llama.cpp is no longer hardcoded to 2048 but defaults to 0 in ramalama, allowing llama.cpp to use its internal default (4098). The --ctx-size argument is now only passed to llama-server when its value is explicitly set greater than 0.
  • Cache Reuse Configurability: The cache_reuse parameter, previously hardcoded to 256, can now be configured by users via ramalama.conf or command-line arguments, providing more flexibility.

@sourcery-ai bot (Contributor) left a comment

Hey there - I've reviewed your changes and they look great!


dest="cache_reuse",
type=int,
default=CONFIG.cache_reuse,
help="min chunk size to attempt reusing from the cache via KV shifting",
Copy link
Contributor

Choose a reason for hiding this comment

The reason will be displayed to describe this comment to others. Learn more.

suggestion: Consider clarifying the unit for --cache-reuse in the help text.

Specifying the unit in the help text will make it clearer for users and prevent misunderstandings.

Suggested change
help="min chunk size to attempt reusing from the cache via KV shifting",
help="min chunk size (in bytes) to attempt reusing from the cache via KV shifting",

Comment on ramalama/model.py, lines 660 to 665:

            "--temp",
            f"{args.temp}",
            "--cache-reuse",
            "256",

sourcery-ai bot (Contributor):

suggestion: Switching --cache-reuse from a hardcoded value to a parameter increases flexibility but may require validation.

Please add validation for the --cache-reuse parameter to prevent invalid values and potential performance issues.
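One way to implement such validation, as a minimal sketch assuming plain argparse (the non_negative_int helper is illustrative and not part of this PR, and ramalama's completer= argument is omitted):

    import argparse

    def non_negative_int(value: str) -> int:
        """argparse 'type' callable that rejects negative values."""
        ivalue = int(value)  # raises ValueError on non-integers; argparse reports it
        if ivalue < 0:
            raise argparse.ArgumentTypeError(f"{value} is not a non-negative integer")
        return ivalue

    parser = argparse.ArgumentParser()
    parser.add_argument(
        "--cache-reuse",
        dest="cache_reuse",
        type=non_negative_int,  # replaces the plain int type
        default=256,
        help="min chunk size to attempt reusing from the cache via KV shifting",
    )

    print(parser.parse_args(["--cache-reuse", "512"]).cache_reuse)  # 512
    # parser.parse_args(["--cache-reuse", "-1"]) would exit with an error message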

@gemini-code-assist bot (Contributor) left a comment

Code Review

This pull request correctly changes the default ctx-size to 0 to use the llama.cpp default and introduces configurability for cache_reuse. The implementation looks solid, with corresponding updates to documentation and tests. I've noted a minor potential inconsistency in the documentation regarding the default context size.


#### **--ctx-size**, **-c**
-size of the prompt context. This option is also available as **--max-model-len**. Applies to llama.cpp and vllm regardless of alias (default: 2048, 0 = loaded from model)
+size of the prompt context. This option is also available as **--max-model-len**. Applies to llama.cpp and vllm regardless of alias (default: 4096, 0 = loaded from model)

gemini-code-assist bot (Contributor), severity medium; the same comment was left on each of the three updated man pages:

The PR description mentions that llama.cpp defaults to a context size of 4098, but the documentation here states the default is 4096. To ensure accuracy, could you please verify the current default ctx-size in llama.cpp and update the documentation accordingly? This will help avoid confusion for users relying on the default behavior.

@giuseppe (Member) left a comment

LGTM

llama.cpp is defaulting to ctx-size of 4096 and we were hard coding
2048, which means we were not using the default setting. This PR
changes to use ctx-size=0 which will not be specified in the command
unless the value is > 0, so llama-server default will be used.

Also we hard coded cache_reuse=256 with no way for user to override,
this PR adds support for cache_reuse being set in ramalama.conf and on
the command line.

Signed-off-by: Daniel J Walsh <[email protected]>
@rhatdan rhatdan merged commit 6462424 into containers:main Sep 10, 2025
26 of 46 checks passed