Conversation

zhengkezhou1
Contributor

Pull Request Description

Replace the functionality currently provided by mock inference with the llm-d vLLM simulator. Going forward, mock inference should only need to simulate behavior specific to the aibrix system.

Related Issues

Resolves: #1569


Contribution Guidelines

We appreciate your contribution to aibrix! To ensure a smooth review process and maintain high code quality, please adhere to the following guidelines:

Pull Request Title Format

Your PR title should start with one of these prefixes to indicate the nature of the change:

  • [Bug]: Corrections to existing functionality
  • [CI]: Changes to build process or CI pipeline
  • [Docs]: Updates or additions to documentation
  • [API]: Modifications to aibrix's API or interface
  • [CLI]: Changes or additions to the Command Line Interface
  • [Misc]: For changes not covered above (use sparingly)

Note: For changes spanning multiple categories, use multiple prefixes in order of importance.

Submission Checklist

  • PR title includes appropriate prefix(es)
  • Changes are clearly explained in the PR description
  • New and existing tests pass successfully
  • Code adheres to project style and best practices
  • Documentation updated to reflect changes (if applicable)
  • Thorough testing completed, no regressions introduced

By submitting this PR, you confirm that you've read these guidelines and your changes align with the project's contribution standards.

gemini-code-assist bot (Contributor)

Summary of Changes

Hello @zhengkezhou1, I'm Gemini Code Assist[1]! I'm currently reviewing this pull request and will post my feedback shortly. In the meantime, here's a summary to help you and other reviewers quickly get up to speed!

This pull request integrates the llm-d vLLM simulator into the project's end-to-end testing framework. This change aims to provide a more specialized and robust simulation environment for LLM inference, allowing the existing mock inference to be reserved for aibrix-specific functionalities. The update involves new Kubernetes configurations for deploying the simulator and corresponding adjustments to the e2e tests and their execution script.

Highlights

  • LLM Simulator Integration: Introduced llm-d as a vLLM simulator for end-to-end tests, replacing the existing generic mock inference to allow it to focus on aibrix-specific functionalities.
  • Kubernetes Manifests: Added new Kubernetes YAML files (components.yaml, deployment.yaml, kustomization.yaml) to define the vllm-llama3-8b-instruct service, deployment, and associated RBAC for the new simulator.
  • E2E Test Updates: Modified existing Go e2e tests to utilize the newly introduced vllm-llama3-8b-instruct simulator model for all inference requests.
  • Test Script Automation: Updated the run-e2e-tests.sh script to automatically deploy and clean up the llm-d simulator resources during test execution.
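
The PR's actual manifests are in the diff; as a rough sketch of what a kustomize-based simulator deployment of this shape typically looks like (the image reference, args, ports, and labels below are illustrative assumptions, not the PR's real values):

```yaml
# kustomization.yaml (sketch) — aggregates the simulator resources:
#   resources:
#     - components.yaml
#     - deployment.yaml
---
# deployment.yaml (sketch) — llm-d inference simulator posing as a Llama-3 model.
# The image, args, and port are assumptions for illustration only.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: vllm-llama3-8b-instruct
  labels:
    app: vllm-llama3-8b-instruct
spec:
  replicas: 1
  selector:
    matchLabels:
      app: vllm-llama3-8b-instruct
  template:
    metadata:
      labels:
        app: vllm-llama3-8b-instruct
    spec:
      containers:
        - name: simulator
          image: ghcr.io/llm-d/llm-d-inference-sim:latest  # assumed image reference
          args:
            - --model=vllm-llama3-8b-instruct  # model name echoed in API responses
            - --port=8000
          ports:
            - containerPort: 8000
```

With manifests of this shape, run-e2e-tests.sh can deploy the simulator with `kubectl apply -k <dir>` before the Go tests run and tear it down with `kubectl delete -k <dir>` afterwards.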

Using Gemini Code Assist

The full guide for Gemini Code Assist can be found on our documentation page; here are some quick tips.

Invoking Gemini

You can request assistance from Gemini at any point by creating a comment using either /gemini <command> or @gemini-code-assist <command>.

Customization

To customize the Gemini Code Assist for GitHub experience, repository maintainers can create a configuration file and/or provide a custom code-review style guide (such as PEP-8 for Python) by adding files to a .gemini/ folder at the base of the repository. Detailed instructions can be found here.

Limitations & Feedback

Gemini Code Assist may make mistakes. Please leave feedback on any instances where its feedback is incorrect or counterproductive. You can react with 👍 and 👎 on @gemini-code-assist comments. If you're interested in giving feedback about your experience with Gemini Code Assist for GitHub and other Google products, sign up here.

You can also get AI-powered code generation, chat, and code reviews directly in the IDE at no cost with the Gemini Code Assist IDE Extension.

Footnotes

  1. Review the Privacy Notices, Generative AI Prohibited Use Policy, Terms of Service, and learn how to configure Gemini Code Assist in GitHub here. Gemini can make mistakes, so double-check it and use code with caution.


@gemini-code-assist bot left a comment

Code Review

This pull request introduces a vLLM simulator for end-to-end tests, replacing the previous mock inference setup. The changes include new Kubernetes manifests for the simulator deployment, service, and RBAC, as well as updates to the E2E test script to manage these new resources. The Go tests are also updated to use the new simulator model. My review focuses on Kubernetes best practices in the new manifests. I've suggested improvements for resource management in the Deployment and for service port configuration to enhance the stability and robustness of the test environment. I also pointed out a minor formatting issue in the kustomization file.
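
Concretely, the review's two manifest suggestions could look something like the following (the values and names are illustrative assumptions, not the reviewer's exact proposal):

```yaml
# Service with a named port: named ports make protocol detection and
# diagnostics clearer, and stay valid if the numeric port later changes.
apiVersion: v1
kind: Service
metadata:
  name: vllm-llama3-8b-instruct
spec:
  selector:
    app: vllm-llama3-8b-instruct
  ports:
    - name: http
      port: 8000
      targetPort: http  # refer to the container port by name
---
# Fragment of the Deployment's container spec: resource requests/limits
# keep the simulator from starving or destabilizing the e2e cluster.
# (Placeholder values; a simulator needs far less than a real vLLM pod.)
ports:
  - name: http
    containerPort: 8000
resources:
  requests:
    cpu: 100m
    memory: 128Mi
  limits:
    cpu: 500m
    memory: 256Mi
```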

Development

Successfully merging this pull request may close these issues.

Use llm-d vLLM simulator for E2E tests