@jp1924 jp1924 (Contributor) commented Mar 31, 2025

Summary

fix #638

Testing Done

  • Hardware Type:
  • run make test to ensure correctness
  • run make checkstyle to ensure code style
  • run make test-convergence to ensure convergence
convergence-test log
HF_DATASETS_OFFLINE=1 python -m pytest --disable-warnings test/convergence/fp32/test_mini_models.py
============================= test session starts ==============================
platform linux -- Python 3.10.12, pytest-8.3.5, pluggy-1.5.0
rootdir: /root/workspace/jp-liger
configfile: pyproject.toml

----------------------------- live log collection ------------------------------
INFO     datasets:config.py:54 PyTorch version 2.5.1+cu121 available.
collected 13 items

test/convergence/fp32/test_mini_models.py::test_mini_model[mini_llama3-32-0.0001-dtype0-1e-08-2e-05-0.0001-1e-05-0.005-1e-05] 
-------------------------------- live log call ---------------------------------
WARNING  liger_kernel.transformers.monkey_patch:monkey_patch.py:209 Support for transformers versions < 4.46.1 will soon be discontinued due to issues with incorrect gradient accumulation. 
 Please consider upgrading to avoid potential issues. See details: https://github.com/huggingface/transformers/pull/34191
PASSED                                                                   [  7%]
test/convergence/fp32/test_mini_models.py::test_mini_model[mini_llava-32-0.0001-dtype1-1e-08-1e-05-0.005-1e-05-0.005-1e-05] 
-------------------------------- live log call ---------------------------------
WARNING  liger_kernel.transformers.monkey_patch:monkey_patch.py:209 Support for transformers versions < 4.46.1 will soon be discontinued due to issues with incorrect gradient accumulation. 
 Please consider upgrading to avoid potential issues. See details: https://github.com/huggingface/transformers/pull/34191
WARNING  liger_kernel.transformers.monkey_patch:monkey_patch.py:267 Support for transformers versions < 4.49.0 will soon be discontinued due to issues with incorrect legacy processing. 
 Please consider upgrading to avoid potential issues. See details: https://github.com/huggingface/transformers/pull/35526
PASSED                                                                   [ 15%]
test/convergence/fp32/test_mini_models.py::test_mini_model[mini_mllama-32-0.0001-dtype2-1e-08-1e-05-0.005-1e-05-0.005-1e-05] SKIPPED [ 23%]
test/convergence/fp32/test_mini_models.py::test_mini_model[mini_qwen2-32-0.0001-dtype3-1e-08-1e-05-0.005-1e-05-0.005-1e-05] 
-------------------------------- live log call ---------------------------------
WARNING  liger_kernel.transformers.monkey_patch:monkey_patch.py:855 Support for transformers versions < 4.46.1 will soon be discontinued due to issues with incorrect gradient accumulation. 
 Please consider upgrading to avoid potential issues. See details: https://github.com/huggingface/transformers/pull/34191
PASSED                                                                   [ 30%]
test/convergence/fp32/test_mini_models.py::test_mini_model[mini_qwen2_vl-32-0.0001-dtype4-1e-05-0.1-0.005-1e-05-0.005-1e-05] SKIPPED [ 38%]
test/convergence/fp32/test_mini_models.py::test_mini_model[mini_qwen2_5_vl-32-0.0001-dtype5-1e-05-0.1-0.005-1e-05-0.005-1e-05] SKIPPED [ 46%]
test/convergence/fp32/test_mini_models.py::test_mini_model[mini_olmo2-32-0.0001-dtype6-1e-08-1e-05-0.005-1e-05-0.005-1e-05] SKIPPED [ 53%]
test/convergence/fp32/test_mini_models.py::test_mini_model[mini_phi3-32-0.0001-dtype7-1e-08-1e-05-0.005-1e-05-0.005-1e-05] 
-------------------------------- live log call ---------------------------------
WARNING  liger_kernel.transformers.monkey_patch:monkey_patch.py:1067 Support for transformers versions < 4.46.1 will soon be discontinued due to issues with incorrect gradient accumulation. 
 Please consider upgrading to avoid potential issues. See details: https://github.com/huggingface/transformers/pull/34191
PASSED                                                                   [ 61%]
test/convergence/fp32/test_mini_models.py::test_mini_model[mini_mistral-32-0.0001-dtype8-1e-08-1e-05-0.005-1e-05-0.005-1e-05] PASSED [ 69%]
test/convergence/fp32/test_mini_models.py::test_mini_model[mini_gemma1-32-0.0001-dtype9-1e-08-0.0001-0.005-1e-05-0.005-1e-05] 
-------------------------------- live log call ---------------------------------
WARNING  liger_kernel.transformers.monkey_patch:monkey_patch.py:598 Support for transformers versions < 4.46.1 will soon be discontinued due to issues with incorrect gradient accumulation. 
 Please consider upgrading to avoid potential issues. See details: https://github.com/huggingface/transformers/pull/34191
PASSED                                                                   [ 76%]
test/convergence/fp32/test_mini_models.py::test_mini_model[mini_gemma1.1-32-0.0001-dtype10-1e-08-0.0001-0.005-1e-05-0.005-1e-05] 
-------------------------------- live log call ---------------------------------
WARNING  liger_kernel.transformers.monkey_patch:monkey_patch.py:598 Support for transformers versions < 4.46.1 will soon be discontinued due to issues with incorrect gradient accumulation. 
 Please consider upgrading to avoid potential issues. See details: https://github.com/huggingface/transformers/pull/34191
PASSED                                                                   [ 84%]
test/convergence/fp32/test_mini_models.py::test_mini_model[mini_gemma2-32-0.0001-dtype11-1e-08-0.0001-0.005-1e-05-0.005-1e-05] 
-------------------------------- live log call ---------------------------------
WARNING  liger_kernel.transformers.monkey_patch:monkey_patch.py:672 Support for transformers versions < 4.46.1 will soon be discontinued due to issues with incorrect gradient accumulation. 
 Please consider upgrading to avoid potential issues. See details: https://github.com/huggingface/transformers/pull/34191
PASSED                                                                   [ 92%]
test/convergence/fp32/test_mini_models.py::test_mini_model[mini_granite3-32-0.0001-dtype12-1e-08-0.0001-0.005-1e-05-0.005-1e-05] SKIPPED [100%]

============== 8 passed, 5 skipped, 1 warning in 69.42s (0:01:09) ==============
HF_DATASETS_OFFLINE=1 python -m pytest --disable-warnings test/convergence/fp32/test_mini_models_multimodal.py
============================= test session starts ==============================
platform linux -- Python 3.10.12, pytest-8.3.5, pluggy-1.5.0
rootdir: /root/workspace/jp-liger
configfile: pyproject.toml

----------------------------- live log collection ------------------------------
INFO     datasets:config.py:54 PyTorch version 2.5.1+cu121 available.
collected 6 items

test/convergence/fp32/test_mini_models_multimodal.py::test_mini_model_multimodal[mini_qwen2_vl-32-0.0001-dtype0-1e-08-1e-05-0.005-1e-05-0.005-1e-05] SKIPPED [ 16%]
test/convergence/fp32/test_mini_models_multimodal.py::test_mini_model_multimodal[mini_llava-32-0.0001-dtype1-1e-08-1e-05-0.005-1e-05-0.005-1e-05] 
-------------------------------- live log call ---------------------------------
WARNING  liger_kernel.transformers.monkey_patch:monkey_patch.py:209 Support for transformers versions < 4.46.1 will soon be discontinued due to issues with incorrect gradient accumulation. 
 Please consider upgrading to avoid potential issues. See details: https://github.com/huggingface/transformers/pull/34191
PASSED                                                                   [ 33%]
test/convergence/fp32/test_mini_models_multimodal.py::test_mini_model_multimodal[mini_qwen2_5_vl-32-0.0001-dtype2-1e-08-1e-05-0.005-1e-05-0.005-1e-05] SKIPPED [ 50%]
test/convergence/fp32/test_mini_models_multimodal.py::test_mini_model_multimodal[mini_mllama-32-0.0001-dtype3-1e-08-1e-05-0.005-1e-05-0.005-1e-05] SKIPPED [ 66%]
test/convergence/fp32/test_mini_models_multimodal.py::test_mini_model_multimodal[mini_paligemma-32-0.0001-dtype4-1e-08-1e-05-0.005-1e-05-0.005-1e-05] SKIPPED [ 83%]
test/convergence/fp32/test_mini_models_multimodal.py::test_mini_model_multimodal[mini_paligemma2-32-0.0001-dtype5-1e-08-1e-05-0.005-1e-05-0.005-1e-05] SKIPPED [100%]

================== 1 passed, 5 skipped, 2 warnings in 30.71s ===================
HF_DATASETS_OFFLINE=1 python -m pytest --disable-warnings test/convergence/fp32/test_mini_models_with_logits.py
============================= test session starts ==============================
platform linux -- Python 3.10.12, pytest-8.3.5, pluggy-1.5.0
rootdir: /root/workspace/jp-liger
configfile: pyproject.toml

----------------------------- live log collection ------------------------------
INFO     datasets:config.py:54 PyTorch version 2.5.1+cu121 available.
collected 13 items

test/convergence/fp32/test_mini_models_with_logits.py::test_mini_model[mini_llama3-32-0.0001-dtype0-1e-08-2e-05-0.0001-1e-05-0.005-1e-05] PASSED [  7%]
test/convergence/fp32/test_mini_models_with_logits.py::test_mini_model[mini_llava-32-0.0001-dtype1-1e-08-1e-05-0.005-1e-05-0.005-1e-05] 
-------------------------------- live log call ---------------------------------
WARNING  liger_kernel.transformers.monkey_patch:monkey_patch.py:209 Support for transformers versions < 4.46.1 will soon be discontinued due to issues with incorrect gradient accumulation. 
 Please consider upgrading to avoid potential issues. See details: https://github.com/huggingface/transformers/pull/34191
PASSED                                                                   [ 15%]
test/convergence/fp32/test_mini_models_with_logits.py::test_mini_model[mini_mllama-32-0.0001-dtype2-1e-08-1e-05-0.005-1e-05-0.005-1e-05] SKIPPED [ 23%]
test/convergence/fp32/test_mini_models_with_logits.py::test_mini_model[mini_qwen2-32-0.0001-dtype3-1e-08-1e-05-0.005-1e-05-0.005-1e-05] PASSED [ 30%]
test/convergence/fp32/test_mini_models_with_logits.py::test_mini_model[mini_qwen2_vl-32-0.0001-dtype4-1e-08-1e-05-0.005-1e-05-0.005-1e-05] SKIPPED [ 38%]
test/convergence/fp32/test_mini_models_with_logits.py::test_mini_model[mini_qwen2_5_vl-32-0.0001-dtype5-1e-08-1e-05-0.005-1e-05-0.005-1e-05] SKIPPED [ 46%]
test/convergence/fp32/test_mini_models_with_logits.py::test_mini_model[mini_olmo2-32-0.0001-dtype6-1e-08-1e-05-0.005-1e-05-0.005-1e-05] SKIPPED [ 53%]
test/convergence/fp32/test_mini_models_with_logits.py::test_mini_model[mini_phi3-32-0.0001-dtype7-1e-08-1e-05-0.005-1e-05-0.005-1e-05] PASSED [ 61%]
test/convergence/fp32/test_mini_models_with_logits.py::test_mini_model[mini_mistral-32-0.0001-dtype8-1e-08-1e-05-0.005-1e-05-0.005-1e-05] PASSED [ 69%]
test/convergence/fp32/test_mini_models_with_logits.py::test_mini_model[mini_gemma1-32-0.0001-dtype9-1e-08-0.0001-0.005-1e-05-0.005-1e-05] PASSED [ 76%]
test/convergence/fp32/test_mini_models_with_logits.py::test_mini_model[mini_gemma1.1-32-0.0001-dtype10-1e-08-0.0001-0.005-1e-05-0.005-1e-05] PASSED [ 84%]
test/convergence/fp32/test_mini_models_with_logits.py::test_mini_model[mini_gemma2-32-0.0001-dtype11-1e-08-0.0001-0.005-1e-05-0.005-1e-05] PASSED [ 92%]
test/convergence/fp32/test_mini_models_with_logits.py::test_mini_model[mini_granite3-32-0.0001-dtype12-1e-08-0.0001-0.005-1e-05-0.005-1e-05] SKIPPED [100%]

============== 8 passed, 5 skipped, 1 warning in 68.41s (0:01:08) ==============
HF_DATASETS_OFFLINE=1 python -m pytest --disable-warnings test/convergence/bf16/test_mini_models.py
============================= test session starts ==============================
platform linux -- Python 3.10.12, pytest-8.3.5, pluggy-1.5.0
rootdir: /root/workspace/jp-liger
configfile: pyproject.toml

----------------------------- live log collection ------------------------------
INFO     datasets:config.py:54 PyTorch version 2.5.1+cu121 available.
collected 12 items

test/convergence/bf16/test_mini_models.py::test_mini_model[mini_llama3-32-0.0001-dtype0-0.001-0.01-0.1-0.01-0.01-0.01] 
-------------------------------- live log call ---------------------------------
WARNING  liger_kernel.transformers.monkey_patch:monkey_patch.py:209 Support for transformers versions < 4.46.1 will soon be discontinued due to issues with incorrect gradient accumulation. 
 Please consider upgrading to avoid potential issues. See details: https://github.com/huggingface/transformers/pull/34191
PASSED                                                                   [  8%]
test/convergence/bf16/test_mini_models.py::test_mini_model[mini_llava-32-0.0001-dtype1-0.001-0.01-0.1-0.01-0.01-0.01] 
-------------------------------- live log call ---------------------------------
WARNING  liger_kernel.transformers.monkey_patch:monkey_patch.py:209 Support for transformers versions < 4.46.1 will soon be discontinued due to issues with incorrect gradient accumulation. 
 Please consider upgrading to avoid potential issues. See details: https://github.com/huggingface/transformers/pull/34191
WARNING  liger_kernel.transformers.monkey_patch:monkey_patch.py:267 Support for transformers versions < 4.49.0 will soon be discontinued due to issues with incorrect legacy processing. 
 Please consider upgrading to avoid potential issues. See details: https://github.com/huggingface/transformers/pull/35526
PASSED                                                                   [ 16%]
test/convergence/bf16/test_mini_models.py::test_mini_model[mini_granite3-32-0.0001-dtype2-0.001-0.01-0.1-0.01-0.01-0.01] SKIPPED [ 25%]
test/convergence/bf16/test_mini_models.py::test_mini_model[mini_mllama-32-0.0001-dtype3-0.001-0.01-0.1-0.01-0.01-0.01] SKIPPED [ 33%]
test/convergence/bf16/test_mini_models.py::test_mini_model[mini_qwen2-32-0.0001-dtype4-0.001-0.01-0.1-0.01-0.01-0.01] 
-------------------------------- live log call ---------------------------------
WARNING  liger_kernel.transformers.monkey_patch:monkey_patch.py:855 Support for transformers versions < 4.46.1 will soon be discontinued due to issues with incorrect gradient accumulation. 
 Please consider upgrading to avoid potential issues. See details: https://github.com/huggingface/transformers/pull/34191
PASSED                                                                   [ 41%]
test/convergence/bf16/test_mini_models.py::test_mini_model[mini_qwen2_vl-32-0.0001-dtype5-0.001-0.05-0.1-0.01-0.01-0.01] SKIPPED [ 50%]
test/convergence/bf16/test_mini_models.py::test_mini_model[mini_qwen2_5_vl-32-0.0001-dtype6-0.001-0.05-0.1-0.01-0.01-0.01] SKIPPED [ 58%]
test/convergence/bf16/test_mini_models.py::test_mini_model[mini_phi3-32-0.0001-dtype7-0.001-0.01-0.1-0.01-0.01-0.01] 
-------------------------------- live log call ---------------------------------
WARNING  liger_kernel.transformers.monkey_patch:monkey_patch.py:1067 Support for transformers versions < 4.46.1 will soon be discontinued due to issues with incorrect gradient accumulation. 
 Please consider upgrading to avoid potential issues. See details: https://github.com/huggingface/transformers/pull/34191
PASSED                                                                   [ 66%]
test/convergence/bf16/test_mini_models.py::test_mini_model[mini_mistral-32-0.0001-dtype8-0.001-0.01-0.1-0.01-0.01-0.01] PASSED [ 75%]
test/convergence/bf16/test_mini_models.py::test_mini_model[mini_olmo2-32-0.0001-dtype9-0.001-0.01-0.1-0.01-0.01-0.01] SKIPPED [ 83%]
test/convergence/bf16/test_mini_models.py::test_mini_model[mini_gemma1-32-0.0001-dtype10-0.001-0.01-0.1-0.01-0.01-0.01] 
-------------------------------- live log call ---------------------------------
WARNING  liger_kernel.transformers.monkey_patch:monkey_patch.py:598 Support for transformers versions < 4.46.1 will soon be discontinued due to issues with incorrect gradient accumulation. 
 Please consider upgrading to avoid potential issues. See details: https://github.com/huggingface/transformers/pull/34191
PASSED                                                                   [ 91%]
test/convergence/bf16/test_mini_models.py::test_mini_model[mini_gemma1.1-32-0.0001-dtype11-0.001-0.01-0.1-0.01-0.01-0.01] 
-------------------------------- live log call ---------------------------------
WARNING  liger_kernel.transformers.monkey_patch:monkey_patch.py:598 Support for transformers versions < 4.46.1 will soon be discontinued due to issues with incorrect gradient accumulation. 
 Please consider upgrading to avoid potential issues. See details: https://github.com/huggingface/transformers/pull/34191
PASSED                                                                   [100%]

=================== 7 passed, 5 skipped, 1 warning in 46.95s ===================
HF_DATASETS_OFFLINE=1 python -m pytest --disable-warnings test/convergence/bf16/test_mini_models_multimodal.py
============================= test session starts ==============================
platform linux -- Python 3.10.12, pytest-8.3.5, pluggy-1.5.0
rootdir: /root/workspace/jp-liger
configfile: pyproject.toml

----------------------------- live log collection ------------------------------
INFO     datasets:config.py:54 PyTorch version 2.5.1+cu121 available.
collected 6 items

test/convergence/bf16/test_mini_models_multimodal.py::test_mini_model_multimodal[mini_qwen2_vl-32-0.0001-dtype0-0.001-0.01-0.1-0.01-0.01-0.01] SKIPPED [ 16%]
test/convergence/bf16/test_mini_models_multimodal.py::test_mini_model_multimodal[mini_llava-32-0.0001-dtype1-0.001-0.01-0.1-0.01-0.01-0.01] 
-------------------------------- live log call ---------------------------------
WARNING  liger_kernel.transformers.monkey_patch:monkey_patch.py:209 Support for transformers versions < 4.46.1 will soon be discontinued due to issues with incorrect gradient accumulation. 
 Please consider upgrading to avoid potential issues. See details: https://github.com/huggingface/transformers/pull/34191
PASSED                                                                   [ 33%]
test/convergence/bf16/test_mini_models_multimodal.py::test_mini_model_multimodal[mini_qwen2_5_vl-32-0.0001-dtype2-0.001-0.01-0.1-0.01-0.01-0.01] SKIPPED [ 50%]
test/convergence/bf16/test_mini_models_multimodal.py::test_mini_model_multimodal[mini_mllama-32-0.0001-dtype3-0.001-0.01-0.1-0.01-0.01-0.01] SKIPPED [ 66%]
test/convergence/bf16/test_mini_models_multimodal.py::test_mini_model_multimodal[mini_paligemma-32-0.0001-dtype4-0.001-0.01-0.1-0.01-0.01-0.01] SKIPPED [ 83%]
test/convergence/bf16/test_mini_models_multimodal.py::test_mini_model_multimodal[mini_paligemma2-32-0.0001-dtype5-0.001-0.01-0.1-0.01-0.01-0.01] SKIPPED [100%]

================== 1 passed, 5 skipped, 2 warnings in 19.27s ===================
HF_DATASETS_OFFLINE=1 python -m pytest --disable-warnings test/convergence/bf16/test_mini_models_with_logits.py
============================= test session starts ==============================
platform linux -- Python 3.10.12, pytest-8.3.5, pluggy-1.5.0
rootdir: /root/workspace/jp-liger
configfile: pyproject.toml

----------------------------- live log collection ------------------------------
INFO     datasets:config.py:54 PyTorch version 2.5.1+cu121 available.
collected 12 items

test/convergence/bf16/test_mini_models_with_logits.py::test_mini_model[mini_llama3-32-0.0001-dtype0-0.001-0.01-0.1-0.01-0.01-0.01] PASSED [  8%]
test/convergence/bf16/test_mini_models_with_logits.py::test_mini_model[mini_llava-32-0.0001-dtype1-0.001-0.01-0.1-0.01-0.01-0.01] 
-------------------------------- live log call ---------------------------------
WARNING  liger_kernel.transformers.monkey_patch:monkey_patch.py:209 Support for transformers versions < 4.46.1 will soon be discontinued due to issues with incorrect gradient accumulation. 
 Please consider upgrading to avoid potential issues. See details: https://github.com/huggingface/transformers/pull/34191
PASSED                                                                   [ 16%]
test/convergence/bf16/test_mini_models_with_logits.py::test_mini_model[mini_granite3-32-0.0001-dtype2-0.001-0.01-0.1-0.01-0.01-0.01] SKIPPED [ 25%]
test/convergence/bf16/test_mini_models_with_logits.py::test_mini_model[mini_mllama-32-0.0001-dtype3-0.001-0.01-0.1-0.01-0.01-0.01] SKIPPED [ 33%]
test/convergence/bf16/test_mini_models_with_logits.py::test_mini_model[mini_qwen2-32-0.0001-dtype4-0.001-0.01-0.1-0.01-0.01-0.01] PASSED [ 41%]
test/convergence/bf16/test_mini_models_with_logits.py::test_mini_model[mini_qwen2_vl-32-0.0001-dtype5-0.001-0.01-0.1-0.01-0.01-0.01] SKIPPED [ 50%]
test/convergence/bf16/test_mini_models_with_logits.py::test_mini_model[mini_qwen2_5_vl-32-0.0001-dtype6-0.001-0.01-0.1-0.01-0.01-0.01] SKIPPED [ 58%]
test/convergence/bf16/test_mini_models_with_logits.py::test_mini_model[mini_phi3-32-0.0001-dtype7-0.001-0.01-0.1-0.01-0.01-0.01] PASSED [ 66%]
test/convergence/bf16/test_mini_models_with_logits.py::test_mini_model[mini_mistral-32-0.0001-dtype8-0.001-0.01-0.1-0.01-0.01-0.01] PASSED [ 75%]
test/convergence/bf16/test_mini_models_with_logits.py::test_mini_model[mini_gemma1-32-0.0001-dtype9-0.001-0.01-0.1-0.01-0.01-0.01] PASSED [ 83%]
test/convergence/bf16/test_mini_models_with_logits.py::test_mini_model[mini_gemma1.1-32-0.0001-dtype10-0.001-0.01-0.1-0.01-0.01-0.01] PASSED [ 91%]
test/convergence/bf16/test_mini_models_with_logits.py::test_mini_model[mini_olmo2-32-0.0001-dtype11-0.001-0.01-0.1-0.01-0.01-0.01] SKIPPED [100%]

=================== 7 passed, 5 skipped, 1 warning in 50.33s ===================

env

transformers             4.44.2
torch                    2.5.1+cu121
torchaudio               2.5.1+cu121
torchvision              0.20.1+cu121

@jp1924 jp1924 (Contributor, Author) commented Mar 31, 2025

╭─ Modal Deprecation Warning (2025-02-06) ─────────────────────────────────────╮
│ Using Python module paths will require using the -m flag in a future version │
│ of Modal.                                                                    │
│ Use `modal run -m dev.modal.tests_bwd` instead.                              │
╰──────────────────────────────────────────────────────────────────────────────╯
╭─ Modal Deprecation Warning (2025-01-08) ─────────────────────────────────────╮
│ modal.Mount usage will soon be deprecated.                                   │
│                                                                              │
│ Use image.add_local_dir instead, which is functionally and performance-wise  │
│ equivalent.                                                                  │
│                                                                              │
│ See https://modal.com/docs/guide/modal-1-0-migration for more details.       │
│                                                                              │
│ Source:                                                                      │
│ /home/runner/work/Liger-Kernel/Liger-Kernel/dev/modal/tests_bwd.py:14        │
│   repo = modal.Mount.from_local_dir(ROOT_PATH, remote_path=REMOTE_ROOT_PATH) │
╰──────────────────────────────────────────────────────────────────────────────╯
╭─ Error ──────────────────────────────────────────────────────────────────────╮
│ Token missing. Could not authenticate client. If you have token credentials, │
│ see modal.com/docs/reference/modal.config for setup help. If you are a new   │
│ user, register an account at modal.com, then run modal token new.          │
╰──────────────────────────────────────────────────────────────────────────────╯

These Modal deprecation warnings and the token error appear in the CI logs, but all tests pass successfully.
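
For reference, the modal.Mount deprecation above points to a small migration. The sketch below is illustrative only and is not part of this PR: it assumes dev/modal/tests_bwd.py builds a modal.Image and that ROOT_PATH and REMOTE_ROOT_PATH are defined as in the warning's source line; the concrete values shown are placeholders.

# Illustrative sketch of the migration suggested by the Modal warning; not part of this PR.
# ROOT_PATH / REMOTE_ROOT_PATH are assumed to be defined as in dev/modal/tests_bwd.py;
# the values below are hypothetical.
import modal

ROOT_PATH = "/path/to/Liger-Kernel"      # hypothetical local checkout path
REMOTE_ROOT_PATH = "/root/liger-kernel"  # hypothetical remote path

# Before (deprecated): attach the repo to the function as a modal.Mount
# repo = modal.Mount.from_local_dir(ROOT_PATH, remote_path=REMOTE_ROOT_PATH)

# After: add the directory to the image itself, per the Modal 1.0 migration guide
image = modal.Image.debian_slim().add_local_dir(ROOT_PATH, remote_path=REMOTE_ROOT_PATH)

Per the warning text, image.add_local_dir is functionally and performance-wise equivalent to the deprecated modal.Mount usage.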

@Tcc0403 Tcc0403 (Collaborator) left a comment

Thx

@Tcc0403 Tcc0403 merged commit f248529 into linkedin:main Mar 31, 2025
4 of 8 checks passed

Successfully merging this pull request may close this issue: Llava test-bwd failure (#638).