Conversation

bobqianic
Collaborator

It's normal for this PR's CI to fail, since there is an issue with talk-llama's CMake. We should merge #1669 first; after that, the CI of this PR will pass.

Member

@ggerganov ggerganov left a comment


Merge if CI is green

@bobqianic bobqianic merged commit db8ccdb into ggml-org:master Dec 21, 2023
@bobqianic bobqianic deleted the cuda-ci branch December 22, 2023 20:58
bygreencn added a commit to bygreencn/whisper.cpp that referenced this pull request Dec 25, 2023
* ggerganov/master:
  whisper : Replace WHISPER_PRINT_DEBUG with WHISPER_LOG_DEBUG (ggml-org#1681)
  sync : ggml (ggml_scale, ggml_row_size, etc.) (ggml-org#1677)
  docker : Dockerize whisper.cpp (ggml-org#1674)
  CI : Add coverage for talk-llama when WHISPER_CUBLAS=1 (ggml-org#1672)
  examples : Revert CMakeLists.txt for talk-llama (ggml-org#1669)
  cmake : set default CUDA architectures (ggml-org#1667)
viktor-silakov pushed a commit to viktor-silakov/whisper_node_mic.cpp that referenced this pull request May 11, 2024
2 participants