
Conversation

@mikebonnet mikebonnet (Collaborator) commented Sep 4, 2025

The -llama-server variants are only needed for ramalama and cuda images.
The -whisper-server variant is only needed for the ramalama image.
The project is no longer providing vllm images.
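For context, each of these variants was driven by a pair of Tekton PipelineRun definitions under .tekton/ (a pull-request file and a push file). The sketch below shows the general shape of such a file, assuming the usual pipelines-as-code conventions; the name, paths, registry, and pipeline reference are illustrative, not copied from the deleted configs:

```yaml
# Illustrative sketch of a variant's push PipelineRun; everything named here
# is hypothetical, not the actual contents of any deleted file.
apiVersion: tekton.dev/v1
kind: PipelineRun
metadata:
  name: asahi-llama-server-on-push
  annotations:
    # Trigger on pushes to main; the sibling *-pull-request.yaml would use
    # on-event: "[pull_request]" instead.
    pipelinesascode.tekton.dev/on-event: "[push]"
    pipelinesascode.tekton.dev/on-target-branch: "[main]"
spec:
  params:
    - name: output-image
      value: quay.io/example/asahi-llama-server:latest  # placeholder registry
    - name: dockerfile
      value: container-images/asahi/Containerfile       # hypothetical path
  pipelineRef:
    name: docker-build  # assumed name of a shared build pipeline
```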

Summary by Sourcery

Stop building unnecessary Docker images by removing Tekton pipeline configurations for unused llama-server and whisper-server variants and discontinuing vllm image support.

Enhancements:

  • Remove Tekton pipeline configs for unneeded -llama-server variants
  • Remove Tekton pipeline configs for unneeded -whisper-server variants
  • Remove Tekton pipeline configs for all vllm-related images

CI:

  • Delete CI pipeline definitions for obsolete image variants

Signed-off-by: Mike Bonnet <[email protected]>
sourcery-ai bot (Contributor) commented Sep 4, 2025

Reviewer's Guide

This PR trims the CI pipeline by removing obsolete Tekton PipelineRun configurations for the unused llama-server and whisper-server variants across multiple platforms, and drops all vllm image pipelines.

File-Level Changes

Change: Clean up obsolete Tekton pipelines for the llama-server and whisper-server variants

Details:
  • Delete pull-request and push PipelineRuns for the asahi llama-server and whisper-server variants
  • Delete pull-request and push PipelineRuns for the cann llama-server and whisper-server variants
  • Delete pull-request and push PipelineRuns for the cuda whisper-server variant
  • Delete pull-request and push PipelineRuns for the intel-gpu llama-server and whisper-server variants
  • Delete pull-request and push PipelineRuns for the musa llama-server and whisper-server variants
  • Delete pull-request and push PipelineRuns for the rocm llama-server and whisper-server variants

Files:
  .tekton/asahi-llama-server/asahi-llama-server-pull-request.yaml
  .tekton/asahi-llama-server/asahi-llama-server-push.yaml
  .tekton/asahi-whisper-server/asahi-whisper-server-pull-request.yaml
  .tekton/asahi-whisper-server/asahi-whisper-server-push.yaml
  .tekton/cann-llama-server/cann-llama-server-pull-request.yaml
  .tekton/cann-llama-server/cann-llama-server-push.yaml
  .tekton/cann-whisper-server/cann-whisper-server-pull-request.yaml
  .tekton/cann-whisper-server/cann-whisper-server-push.yaml
  .tekton/cuda-whisper-server/cuda-whisper-server-pull-request.yaml
  .tekton/cuda-whisper-server/cuda-whisper-server-push.yaml
  .tekton/intel-gpu-llama-server/intel-gpu-llama-server-pull-request.yaml
  .tekton/intel-gpu-llama-server/intel-gpu-llama-server-push.yaml
  .tekton/intel-gpu-whisper-server/intel-gpu-whisper-server-pull-request.yaml
  .tekton/intel-gpu-whisper-server/intel-gpu-whisper-server-push.yaml
  .tekton/musa-llama-server/musa-llama-server-pull-request.yaml
  .tekton/musa-llama-server/musa-llama-server-push.yaml
  .tekton/musa-whisper-server/musa-whisper-server-pull-request.yaml
  .tekton/musa-whisper-server/musa-whisper-server-push.yaml
  .tekton/rocm-llama-server/rocm-llama-server-pull-request.yaml
  .tekton/rocm-llama-server/rocm-llama-server-push.yaml
  .tekton/rocm-whisper-server/rocm-whisper-server-pull-request.yaml
  .tekton/rocm-whisper-server/rocm-whisper-server-push.yaml
Change: Remove all vllm image pipelines

Details:
  • Delete pull-request and push PipelineRuns for the ramalama vllm base image
  • Delete pull-request and push PipelineRuns for the ramalama vllm llama-server variant
  • Delete pull-request and push PipelineRuns for the ramalama vllm rag variant
  • Delete pull-request and push PipelineRuns for the ramalama vllm whisper-server variant

Files:
  .tekton/ramalama-vllm/ramalama-vllm-pull-request.yaml
  .tekton/ramalama-vllm/ramalama-vllm-push.yaml
  .tekton/ramalama-vllm-llama-server/ramalama-vllm-llama-server-pull-request.yaml
  .tekton/ramalama-vllm-llama-server/ramalama-vllm-llama-server-push.yaml
  .tekton/ramalama-vllm-rag/ramalama-vllm-rag-pull-request.yaml
  .tekton/ramalama-vllm-rag/ramalama-vllm-rag-push.yaml
  .tekton/ramalama-vllm-whisper-server/ramalama-vllm-whisper-server-pull-request.yaml
  .tekton/ramalama-vllm-whisper-server/ramalama-vllm-whisper-server-push.yaml
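Each image above appears with two files because pipelines-as-code dispatches on event type: the pull-request PipelineRun validates the build on PRs, while the push PipelineRun builds the image after merge. A minimal sketch of the annotations that distinguish the pair, assuming standard pipelines-as-code keys (the names are illustrative):

```yaml
# pull-request variant: validation build on PRs (hypothetical name)
apiVersion: tekton.dev/v1
kind: PipelineRun
metadata:
  name: rocm-llama-server-pull-request
  annotations:
    pipelinesascode.tekton.dev/on-event: "[pull_request]"
    pipelinesascode.tekton.dev/on-target-branch: "[main]"
---
# push variant: post-merge build of the same image (hypothetical name)
apiVersion: tekton.dev/v1
kind: PipelineRun
metadata:
  name: rocm-llama-server-push
  annotations:
    pipelinesascode.tekton.dev/on-event: "[push]"
    pipelinesascode.tekton.dev/on-target-branch: "[main]"
```

Deleting both files removes the variant's triggers entirely, so nothing else in the repository needs to change.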

sourcery-ai bot left a comment

Hey there - I've reviewed your changes and they look great!



gemini-code-assist bot left a comment

Summary of Changes

Hello @mikebonnet, I'm Gemini Code Assist! I'm currently reviewing this pull request and will post my feedback shortly. In the meantime, here's a summary to help you and other reviewers quickly get up to speed!

This pull request streamlines the image build process within the Konflux system by removing build configurations for images that are no longer needed or only apply in limited cases. The removal reduces build overhead and ensures that only relevant image variants are generated.

Highlights

  • Optimized Llama Server Image Builds: Removed Tekton PipelineRun configurations for llama-server variants of asahi, cann, intel-gpu, musa, and rocm images. These variants are now only built for ramalama and cuda images, as per project requirements.
  • Optimized Whisper Server Image Builds: Removed Tekton PipelineRun configurations for whisper-server variants of asahi, cann, cuda, intel-gpu, musa, and rocm images. These variants are now only built for the ramalama image, as per project requirements.
  • Discontinued vLLM Image Builds: All vllm-related image builds, including ramalama-vllm, ramalama-vllm-llama-server, ramalama-vllm-rag, and ramalama-vllm-whisper-server, have been completely removed, as the project no longer provides these images.
  • Streamlined CI/CD Pipelines: The removal of these unnecessary build configurations significantly streamlines the CI/CD pipelines, reducing build times and optimizing resource utilization by focusing only on essential image variants.
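For the variants that remain (for example cuda-llama-server), pipelines-as-code can additionally scope triggers with a CEL expression so a push only rebuilds the images it touches. A hedged sketch — the annotation key and the pathChanged() function are standard pipelines-as-code, but the watched path is a guess, not taken from this repository:

```yaml
apiVersion: tekton.dev/v1
kind: PipelineRun
metadata:
  name: cuda-llama-server-push  # hypothetical
  annotations:
    # Rebuild only when files under the (assumed) cuda image directory change.
    pipelinesascode.tekton.dev/on-cel-expression: >-
      event == "push" && target_branch == "main" &&
      "container-images/cuda/***".pathChanged()
```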

gemini-code-assist bot left a comment

Code Review

This pull request is a cleanup effort to remove Tekton pipeline configurations for container images that are no longer being built. The changes involve deleting numerous YAML files corresponding to unused llama-server, whisper-server, and vllm image variants. The deletions are consistent with the goals outlined in the pull request description. This will help reduce CI/CD complexity and resource consumption. Given that the changes are exclusively file deletions, there are no specific code suggestions.

@rhatdan rhatdan (Member) commented Sep 4, 2025

LGTM

@rhatdan rhatdan merged commit 9790cfd into main Sep 4, 2025
42 of 46 checks passed