konflux: stop building unnecessary images #1897
Conversation
The `-llama-server` variants are only needed for the `ramalama` and `cuda` images. The `-whisper-server` variant is only needed for the `ramalama` image. The project is no longer providing `vllm` images.

Signed-off-by: Mike Bonnet <[email protected]>
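For background, each variant build in Konflux is driven by its own Tekton PipelineRun file, and those per-variant files are what this PR deletes for the dropped builds. The sketch below shows the general shape of such a file; the file name, pipeline reference, parameter names, image reference, and paths are illustrative assumptions, not the repository's actual values.

```yaml
# Illustrative only: a minimal Tekton PipelineRun of the kind removed by this PR.
# Names and paths below are placeholders, not the repo's real configuration.
apiVersion: tekton.dev/v1
kind: PipelineRun
metadata:
  name: asahi-llama-server-pull-request   # hypothetical name for a removed variant build
spec:
  pipelineRef:
    name: docker-build                    # assumed shared container-build pipeline
  params:
    - name: output-image
      value: quay.io/example/asahi-llama-server:latest   # placeholder registry and tag
    - name: dockerfile
      value: container-images/asahi/Containerfile        # placeholder path
```

Deleting a file like this removes the corresponding variant from the build matrix without touching the pipelines for the variants that are kept.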
Reviewer's Guide (Sourcery)
This PR trims the CI pipeline by removing obsolete Tekton tasks for unused `llama-server` and `whisper-server` variants across multiple platforms, and drops all `vllm` image pipelines.
Summary of Changes
Hello @mikebonnet, I'm Gemini Code Assist! I'm currently reviewing this pull request and will post my feedback shortly. In the meantime, here's a summary to help you and other reviewers quickly get up to speed!
This pull request focuses on optimizing the image build process within the Konflux system by eliminating build configurations for images that are no longer necessary or are built under specific, limited conditions. This strategic removal aims to enhance efficiency, reduce computational overhead, and ensure that only relevant image variants are generated.
Highlights
- Optimized Llama Server Image Builds: Removed Tekton PipelineRun configurations for `llama-server` variants of the `asahi`, `cann`, `intel-gpu`, `musa`, and `rocm` images. These variants are now only built for the `ramalama` and `cuda` images, as per project requirements.
- Optimized Whisper Server Image Builds: Removed Tekton PipelineRun configurations for `whisper-server` variants of the `asahi`, `cann`, `cuda`, `intel-gpu`, `musa`, and `rocm` images. These variants are now only built for the `ramalama` image, as per project requirements.
- Discontinued vLLM Image Builds: All `vllm`-related image builds, including `ramalama-vllm`, `ramalama-vllm-llama-server`, `ramalama-vllm-rag`, and `ramalama-vllm-whisper-server`, have been completely removed, as the project no longer provides these images.
- Streamlined CI/CD Pipelines: Removing these unnecessary build configurations significantly streamlines the CI/CD pipelines, reducing build times and resource utilization by focusing only on essential image variants.
Code Review
This pull request is a cleanup effort to remove Tekton pipeline configurations for container images that are no longer being built. The changes involve deleting numerous YAML files corresponding to unused `llama-server`, `whisper-server`, and `vllm` image variants. The deletions are consistent with the goals outlined in the pull request description and will help reduce CI/CD complexity and resource consumption. Given that the changes are exclusively file deletions, there are no specific code suggestions.
LGTM
Summary by Sourcery
Stop building unnecessary Docker images by removing Tekton pipeline configurations for unused llama-server and whisper-server variants and discontinuing vllm image support.