
Conversation

@bmahabirbu (Collaborator) commented Sep 17, 2025

…can no longer assume chunks

can only be of type content. Adjusted the code so it doesn't break.

Summary by Sourcery

Update RAG framework script to handle new chunk types introduced by the latest llama.cpp version and chat templates

Bug Fixes:

  • Remove assumption that chunks are only of type 'content' to prevent breakage with new llama.cpp and chat templates

Enhancements:

  • Adjust chunk processing logic to support various chunk types in the RAG pipeline

sourcery-ai bot (Contributor) commented Sep 17, 2025

Reviewer's Guide

This PR modifies the rag_framework script to support the new llama.cpp version and updated chat templates by removing the hardcoded assumption that all chunks are of type “content” and introducing type checks and graceful handling for other chunk types.

File-Level Changes

Change: Relax chunk-type assumption (see the sketch below)
  • Added conditional check for chunk.type before processing
  • Revised loop logic to skip non-content chunk types
  • Added debug logging for skipped chunk types
  Files: container-images/scripts/rag_framework

Change: Update chat-template integration
  • Adjusted command-line flags to match new llama.cpp template options
  • Mapped new template placeholders to internal variables
  Files: container-images/scripts/rag_framework
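
For illustration only, a minimal sketch of the chunk-type filtering the guide describes. It assumes chunks expose a type attribute; the iter_content_chunks name and the "content" default are assumptions made for this sketch, not taken from the actual rag_framework source.

import logging

logger = logging.getLogger(__name__)

def iter_content_chunks(chunks):
    """Yield only content-bearing chunks; log and skip everything else."""
    for chunk in chunks:
        # Older code assumed every chunk was of type "content"; newer
        # llama.cpp builds and chat templates can emit other types.
        chunk_type = getattr(chunk, "type", "content")
        if chunk_type != "content":
            logger.debug("Skipping chunk of type %r", chunk_type)
            continue
        yield chunk

Downstream code would then iterate over iter_content_chunks(stream) instead of the raw stream.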


gemini-code-assist bot (Contributor) left a comment


Summary of Changes

Hello @bmahabirbu, I'm Gemini Code Assist! I'm currently reviewing this pull request and will post my feedback shortly. In the meantime, here's a summary to help you and other reviewers quickly get up to speed!

This pull request resolves a critical compatibility issue within the rag_framework script, which was encountering breakage due to recent changes in how llama.cpp handles chat templates and response chunks. The modifications ensure that the framework can gracefully process AI responses, even when the structure of chunk.choices varies, thereby enhancing the stability and robustness of the RAG system.

Highlights

  • Compatibility Fix: Updated the rag_framework script to ensure compatibility with newer versions of llama.cpp and its chat templates.
  • Robust Chunk Processing: Implemented a null check for chunk.choices when processing AI responses, preventing potential errors if the choices array is empty or undefined.
  • Code Refinement: Introduced a local variable content to store chunk.choices[0].delta.content for improved readability and maintainability within the response collection loop.
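
A rough sketch of the guarded streaming loop these highlights describe, assuming an OpenAI-style streaming response object; the collect_response name is illustrative rather than the script's actual function:

def collect_response(stream):
    """Accumulate streamed completion text, tolerating content-free chunks."""
    pieces = []
    for chunk in stream:
        # Newer llama.cpp builds can emit chunks with an empty choices list,
        # so guard before indexing to avoid an IndexError.
        if not chunk.choices:
            continue
        content = chunk.choices[0].delta.content
        # Role-only or tool-call deltas carry no text content; skip those too.
        if content is None:
            continue
        pieces.append(content)
    return "".join(pieces)

Binding chunk.choices[0].delta.content to a local content variable both avoids the repeated attribute access and gives a single place to test for None.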

sourcery-ai bot (Contributor) left a comment


Hey there - I've reviewed your changes and they look great!



gemini-code-assist bot (Contributor) left a comment


Code Review

This pull request addresses a potential IndexError when processing streaming responses from the language model. The change correctly adds a check to ensure the choices list in a response chunk is not empty before accessing its elements. This makes the code more robust, especially with newer llama.cpp versions that might send chunks without content. The introduction of a content variable also improves readability and avoids redundant attribute access. The fix is correct and well-implemented.

@bmahabirbu (Collaborator, Author) commented Sep 17, 2025

Easy way to test:

bin/ramalama --debug run llama3.2 --rag test:latest

Grab the podman command from the debug output and paste it without the -c "" argument, like this:

podman run --rm --label ai.ramalama.model=ollama://library/llama3.2:latest --label ai.ramalama.engine=podman --label ai.ramalama.runtime=llama.cpp --label ai.ramalama.port=8080 --label ai.ramalama.command=run --device /dev/dri --device /dev/kfd -e HIP_VISIBLE_DEVICES=0 --network bridge -p 8080:8080 --security-opt=label=disable --cap-drop=all --security-opt=no-new-privileges --pull newer --mount=type=image,source=test:latest,destination=/rag,rw=true -t -i --label ai.ramalama --name ramalama_hMc6ZNNmkr --env=HOME=/tmp --init --mount=type=bind,src=/home/brian/.local/share/ramalama/store/ollama/library/llama3.2/blobs/sha256-dde5aa3fc5ffc17176b5e8bdc82f587b24b2678c6c66101bf7da77af9f7ccdff,destination=/mnt/models/llama3.2,ro --mount=type=bind,src=/home/brian/.local/share/ramalama/store/ollama/library/llama3.2/blobs/sha256-34bb5ab01051a11372a91f95f3fbbc51173eed8e7f13ec395b9ae9b8bd0e242b,destination=/mnt/models/config.json,ro --mount=type=bind,src=/home/brian/.local/share/ramalama/store/ollama/library/llama3.2/blobs/sha256-966de95ca8a62200913e3f8bfbf84c8494536f1b94b49166851e76644e966396,destination=/mnt/models/chat_template,ro quay.io/ramalama/rocm-rag:latest bash

then, inside the container, run the llama.cpp server in the background:

nohup llama-server \
  --port 8080 \
  --model /mnt/models/llama3.2 \
  --no-warmup \
  --jinja \
  --chat-template-file /mnt/models/chat_template \
  --log-colors on \
  --alias llama3.2 \
  --temp 0.8 \
  --cache-reuse 256 \
  -v \
  -ngl 999 \
  --threads 6 \
  --host 0.0.0.0 \
  > /tmp/llama-server.log 2>&1 &

check the logs (/tmp/llama-server.log) to confirm the server started correctly

cd to /usr/bin
vi rag_framework
clear the file (I personally like 500dd, when you're not in insert mode)
then go into insert mode by pressing i

copy and paste the changed file I have here
then do :wq to save and quit

finally, run rag_framework run /rag/vector.db

@rhatdan (Member) commented Sep 17, 2025

I think you need to rebase for the test to pass.

@rhatdan (Member) commented Sep 17, 2025

LGTM

@mikebonnet (Collaborator) commented

I ran into the same bug, thanks for the PR!

@bmahabirbu (Collaborator, Author) commented

hmm, a lot of the tests are failing because of "Reading package lists... E: The repository 'http://archive.ubuntu.com/ubuntu oracular Release' does not have a Release file. Error: Process completed with exit code 100." but I believe this should still be good to go

@bmahabirbu merged commit f4e299a into containers:main Sep 17, 2025
22 of 56 checks passed
ieaves pushed a commit to ramalama-labs/ramalama that referenced this pull request Sep 18, 2025
…can no longer assume chunks (containers#1937)

can only be of type content. Adjusted the code so it doesn't break.

Signed-off-by: Brian <[email protected]>