Commit 6d89c82

Fix PSIRT Vulnerability - Dependency Confusion in oneccl_bind_pt package (#13305)
* Fix PSIRT Vulnerability - Dependency Confusion in oneccl_bind_pt package
* update

Co-authored-by: YongZhuIntel <[email protected]>
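The change throughout this commit swaps `--extra-index-url` for `--index-url` on the `oneccl_bind_pt` installs. The distinction matters for dependency confusion: with `--extra-index-url`, pip searches the extra index *and* PyPI and picks the best candidate across all of them, so an attacker who registers the internal package name on PyPI with an inflated version number can hijack the install; `--index-url` replaces the default index entirely, so only the trusted index is consulted. The sketch below is an illustration of that resolution behavior, not pip's actual resolver code:

```python
# Illustrative sketch only (not pip's real resolver): models why
# --extra-index-url is exposed to dependency confusion while
# --index-url is not.

def pick_candidate(candidates):
    """Pip-style choice: the highest version wins, regardless of
    which index supplied it."""
    return max(candidates, key=lambda c: c["version"])

# With --extra-index-url, BOTH the extra index and PyPI are searched.
# An attacker who uploads "oneccl_bind_pt" to PyPI with a higher
# version number wins the comparison.
with_extra = [
    {"version": (2, 2, 0), "index": "pytorch-extension.intel.com"},
    {"version": (999, 0, 0), "index": "pypi.org (attacker upload)"},
]
assert pick_candidate(with_extra)["index"] == "pypi.org (attacker upload)"

# With --index-url, the trusted index REPLACES the default index,
# so only its candidates are ever considered.
index_only = [
    {"version": (2, 2, 0), "index": "pytorch-extension.intel.com"},
]
assert pick_candidate(index_only)["index"] == "pytorch-extension.intel.com"
```

Note the trade-off: with `--index-url`, packages that exist only on PyPI (here, `oneccl_bind_pt`'s own dependencies, if any) must be installed separately, which is why only the `oneccl_bind_pt` lines are changed and the `ipex-llm[xpu]` installs keep `--extra-index-url`.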
1 parent 25e1709 commit 6d89c82

File tree

16 files changed: +16 −16 lines


docker/llm/serving/cpu/docker/Dockerfile

Lines changed: 1 addition & 1 deletion

````diff
@@ -75,7 +75,7 @@ RUN apt-get update && apt-get install -y --no-install-recommends \
 pip install Jinja2==3.1.3 && \
 pip install torch==2.2.0 torchvision==0.17.0 torchaudio==2.2.0 --index-url https://download.pytorch.org/whl/cpu && \
 pip install intel-extension-for-pytorch==2.2.0 && \
-pip install oneccl_bind_pt==2.2.0 --extra-index-url https://pytorch-extension.intel.com/release-whl/stable/cpu/cn/ && \
+pip install oneccl_bind_pt==2.2.0 --index-url https://pytorch-extension.intel.com/release-whl/stable/cpu/cn/ && \
 pip install transformers==4.36.2 && \
 # Install vllm dependencies
 pip install --upgrade fastapi && \
````

docs/mddocs/Quickstart/deepspeed_autotp_fastapi_quickstart.md

Lines changed: 1 addition & 1 deletion

````diff
@@ -20,7 +20,7 @@ conda create -n llm python=3.11
 conda activate llm
 # below command will install intel_extension_for_pytorch==2.1.10+xpu as default
 pip install --pre --upgrade ipex-llm[xpu] --extra-index-url https://pytorch-extension.intel.com/release-whl/stable/xpu/us/
-pip install oneccl_bind_pt==2.1.100 --extra-index-url https://pytorch-extension.intel.com/release-whl/stable/xpu/us/
+pip install oneccl_bind_pt==2.1.100 --index-url https://pytorch-extension.intel.com/release-whl/stable/xpu/us/
 # configures OneAPI environment variables
 source /opt/intel/oneapi/setvars.sh
 pip install git+https://github.com/microsoft/DeepSpeed.git@ed8aed5
````

python/llm/example/CPU/QLoRA-FineTuning/alpaca-qlora/README.md

Lines changed: 1 addition & 1 deletion

````diff
@@ -53,7 +53,7 @@ python ./alpaca_qlora_finetuning_cpu.py \
 ```bash
 # need to run the alpaca stand-alone version first
 # for using mpirun
-pip install oneccl_bind_pt --extra-index-url https://developer.intel.com/ipex-whl-stable
+pip install oneccl_bind_pt --index-url https://developer.intel.com/ipex-whl-stable
 ```
 
 2. modify conf in `finetune_one_node_two_sockets.sh` and run
````

python/llm/example/CPU/Speculative-Decoding/Self-Speculation/baichuan2/README.md

Lines changed: 1 addition & 1 deletion

````diff
@@ -69,7 +69,7 @@ To accelerate speculative decoding on CPU, optionally, you can install our valid
 ```bash
 python -m pip install torch==2.2.0 torchvision==0.17.0 torchaudio==2.2.0 --index-url https://download.pytorch.org/whl/cpu
 python -m pip install intel-extension-for-pytorch==2.2.0
-python -m pip install oneccl_bind_pt==2.2.0 --extra-index-url https://pytorch-extension.intel.com/release-whl/stable/cpu/us/
+python -m pip install oneccl_bind_pt==2.2.0 --index-url https://pytorch-extension.intel.com/release-whl/stable/cpu/us/
 # if there is any installation problem for oneccl_binding, you can also find suitable index url at "https://pytorch-extension.intel.com/release-whl/stable/cpu/cn/" or "https://developer.intel.com/ipex-whl-stable-cpu" according to your environment.
 
 # Install other dependencies
````

python/llm/example/CPU/Speculative-Decoding/Self-Speculation/llama2/README.md

Lines changed: 1 addition & 1 deletion

````diff
@@ -104,7 +104,7 @@ To accelerate speculative decoding on CPU, you can install our validated version
 # Install IPEX 2.2.0+cpu
 python -m pip install torch==2.2.0 torchvision==0.17.0 torchaudio==2.2.0 --index-url https://download.pytorch.org/whl/cpu
 python -m pip install intel-extension-for-pytorch==2.2.0
-python -m pip install oneccl_bind_pt==2.2.0 --extra-index-url https://pytorch-extension.intel.com/release-whl/stable/cpu/us/
+python -m pip install oneccl_bind_pt==2.2.0 --index-url https://pytorch-extension.intel.com/release-whl/stable/cpu/us/
 # if there is any installation problem for oneccl_binding, you can also find suitable index url at "https://pytorch-extension.intel.com/release-whl/stable/cpu/cn/" or "https://developer.intel.com/ipex-whl-stable-cpu" according to your environment.
 
 # Update transformers
````

python/llm/example/CPU/Speculative-Decoding/Self-Speculation/llama3/README.md

Lines changed: 1 addition & 1 deletion

````diff
@@ -81,7 +81,7 @@ To accelerate speculative decoding on CPU, you can install our validated version
 # Install IPEX 2.2.0+cpu
 python -m pip install torch==2.2.0 torchvision==0.17.0 torchaudio==2.2.0 --index-url https://download.pytorch.org/whl/cpu
 python -m pip install intel-extension-for-pytorch==2.2.0
-python -m pip install oneccl_bind_pt==2.2.0 --extra-index-url https://pytorch-extension.intel.com/release-whl/stable/cpu/us/
+python -m pip install oneccl_bind_pt==2.2.0 --index-url https://pytorch-extension.intel.com/release-whl/stable/cpu/us/
 # if there is any installation problem for oneccl_binding, you can also find suitable index url at "https://pytorch-extension.intel.com/release-whl/stable/cpu/cn/" or "https://developer.intel.com/ipex-whl-stable-cpu" according to your environment.
 
 # Update transformers
````

python/llm/example/CPU/Speculative-Decoding/Self-Speculation/mistral/README.md

Lines changed: 1 addition & 1 deletion

````diff
@@ -90,7 +90,7 @@ To accelerate speculative decoding on CPU, you can install our validated version
 # Install IPEX 2.2.0+cpu
 python -m pip install torch==2.2.0 torchvision==0.17.0 torchaudio==2.2.0 --index-url https://download.pytorch.org/whl/cpu
 python -m pip install intel-extension-for-pytorch==2.2.0
-python -m pip install oneccl_bind_pt==2.2.0 --extra-index-url https://pytorch-extension.intel.com/release-whl/stable/cpu/us/
+python -m pip install oneccl_bind_pt==2.2.0 --index-url https://pytorch-extension.intel.com/release-whl/stable/cpu/us/
 # if there is any installation problem for oneccl_binding, you can also find suitable index url at "https://pytorch-extension.intel.com/release-whl/stable/cpu/cn/" or "https://developer.intel.com/ipex-whl-stable-cpu" according to your environment.
 
 # Update transformers
````

python/llm/example/GPU/Deepspeed-AutoTP-FastAPI/README.md

Lines changed: 1 addition & 1 deletion

````diff
@@ -15,7 +15,7 @@ conda create -n llm python=3.11
 conda activate llm
 # below command will install intel_extension_for_pytorch==2.1.10+xpu as default
 pip install --pre --upgrade ipex-llm[xpu] --extra-index-url https://pytorch-extension.intel.com/release-whl/stable/xpu/us/
-pip install oneccl_bind_pt==2.1.100 --extra-index-url https://pytorch-extension.intel.com/release-whl/stable/xpu/us/
+pip install oneccl_bind_pt==2.1.100 --index-url https://pytorch-extension.intel.com/release-whl/stable/xpu/us/
 # configures OneAPI environment variables
 source /opt/intel/oneapi/setvars.sh
 pip install git+https://github.com/microsoft/DeepSpeed.git@ed8aed5
````

python/llm/example/GPU/LLM-Finetuning/HF-PEFT/README.md

Lines changed: 1 addition & 1 deletion

````diff
@@ -17,7 +17,7 @@ pip install --pre --upgrade ipex-llm[xpu] --extra-index-url https://pytorch-exte
 pip install transformers==4.45.0 "trl<0.12.0" datasets
 pip install bitsandbytes==0.45.1 scipy
 pip install fire peft==0.10.0
-pip install oneccl_bind_pt==2.1.100 --extra-index-url https://pytorch-extension.intel.com/release-whl/stable/xpu/us/ # necessary to run distributed finetuning
+pip install oneccl_bind_pt==2.1.100 --index-url https://pytorch-extension.intel.com/release-whl/stable/xpu/us/ # necessary to run distributed finetuning
 ```
 
 ### 2. Configures OneAPI environment variables
````

python/llm/example/GPU/LLM-Finetuning/LoRA/README.md

Lines changed: 1 addition & 1 deletion

````diff
@@ -15,7 +15,7 @@ pip install --pre --upgrade ipex-llm[xpu] --extra-index-url https://pytorch-exte
 pip install transformers==4.45.0 "trl<0.12.0" datasets
 pip install fire peft==0.10.0
 pip install bitsandbytes==0.45.1 scipy
-pip install oneccl_bind_pt==2.1.100 --extra-index-url https://pytorch-extension.intel.com/release-whl/stable/xpu/us/ # necessary to run distributed finetuning
+pip install oneccl_bind_pt==2.1.100 --index-url https://pytorch-extension.intel.com/release-whl/stable/xpu/us/ # necessary to run distributed finetuning
 ```
 
 ### 2. Configures OneAPI environment variables
````
