
Virtualenv with Python 3.11 not able to detect OpenVINOExecutionProvider, even though the DLL is present in the expected location #697

@prasad-pr-20

Description


Describe the issue

Trying to run an optimum[onnxruntime]-based Whisper model on ONNX Runtime using the OpenVINO execution provider.
The model is loaded with optimum.onnxruntime.ORTModelForSpeechSeq2Seq.
Every attempt fails with the following error:
ValueError: Asked to use OpenVINOExecutionProvider as an ONNX Runtime execution provider, but the available execution providers are ['AzureExecutionProvider', 'CPUExecutionProvider'].
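For reference, the provider list shown in that error comes from ONNX Runtime itself and can be queried directly with onnxruntime.get_available_providers() (the real API here); the wrapper function below is my own and is guarded so the snippet also runs in environments where onnxruntime is absent:

```python
import importlib.util

def available_providers():
    """Return ONNX Runtime's provider list, or None if onnxruntime is not installed.

    (Hypothetical helper; onnxruntime.get_available_providers() is the
    actual API being exercised.)"""
    if importlib.util.find_spec("onnxruntime") is None:
        return None
    import onnxruntime
    return onnxruntime.get_available_providers()

# On the affected setup this is expected to show
# ['AzureExecutionProvider', 'CPUExecutionProvider'] -- no OpenVINO EP.
print(available_providers())
```

If the list printed here already lacks OpenVINOExecutionProvider, the problem is in the onnxruntime installation itself, not in optimum.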

Library versions (tried on multiple versions):

  • OS: Windows 11
  • Python: 3.11
  • onnx: 1.17.1/1.18.0
  • onnxruntime: 1.21.0/1.22.0
  • onnxruntime-openvino: 1.21.0/1.22.0
  • openvino: 2024.3.0/2025.1.0

Verification Steps:

  • Verified that onnxruntime_providers_openvino.dll is present in the expected location, i.e. venv\Lib\site-packages\onnxruntime\capi\onnxruntime_providers_openvino.dll.
  • Verified that the library versions match each other as shown in openvino-ep-requirements.
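The first verification step can also be scripted; a minimal sketch (the function name is illustrative) that checks the current environment's site-packages for the provider DLL:

```python
import sysconfig
from pathlib import Path

def provider_dll_present(site_packages: Path) -> bool:
    """Check for the OpenVINO EP shim DLL (hypothetical helper)."""
    # On Windows, onnxruntime-openvino ships its execution-provider shim
    # under onnxruntime/capi as onnxruntime_providers_openvino.dll.
    dll = site_packages / "onnxruntime" / "capi" / "onnxruntime_providers_openvino.dll"
    return dll.is_file()

# Check the active environment's site-packages directory.
print(provider_dll_present(Path(sysconfig.get_paths()["purelib"])))
```

Note that the DLL being on disk is necessary but not sufficient: the OpenVINO runtime DLLs must also be loadable when onnxruntime enumerates providers.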

Help Needed

  • Need help identifying the issue; a ticket will then be raised accordingly to get the changes added to the documentation.

To reproduce

Steps to Reproduce:

  1. Create a python virtual env: python -m venv virt_env.

  2. Activate the virtual env: virt_env\Scripts\activate.

  3. Install necessary libraries: pip install --no-cache-dir openvino==2024.3.0 onnx==1.17.1 onnxruntime==1.21.0 onnxruntime-openvino==1.21.0 "optimum[onnxruntime]" "huggingface_hub[cli]".

  4. Download model: huggingface-cli download Intel/whisper-tiny-onnx-int4-inc --local-dir whisper-tiny-onnx

  5. Copy the code below to a temp testing file, e.g. test.py:
    # Make the OpenVINO runtime DLLs visible before any session is created.
    import onnxruntime.tools.add_openvino_win_libs as utils
    utils.add_openvino_libs_to_path()

    from transformers import AutoProcessor
    from optimum.onnxruntime import ORTModelForSpeechSeq2Seq
    from datasets import load_dataset

    model_id = "whisper-tiny-onnx"
    processor = AutoProcessor.from_pretrained(model_id)
    ds = load_dataset("hf-internal-testing/librispeech_asr_dummy", "clean", split="validation")
    inputs = processor.feature_extractor(ds[9]["audio"]["array"], return_tensors="pt").to("cpu")
    ort_model = ORTModelForSpeechSeq2Seq.from_pretrained(model_id, provider="OpenVINOExecutionProvider")

  6. Execute the python file: python test.py.

  7. Expect to see the aforementioned error.
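For context, the ValueError in step 7 is optimum rejecting a requested provider that onnxruntime does not report as available; a paraphrased sketch of that guard (not optimum's actual code):

```python
def validate_provider(requested: str, available: list[str]) -> None:
    """Paraphrase of the check optimum applies before creating a session
    (hypothetical function; optimum's real implementation differs)."""
    if requested not in available:
        raise ValueError(
            f"Asked to use {requested} as an ONNX Runtime execution provider, "
            f"but the available execution providers are {available}."
        )

# The provider list reported on the affected setup:
available = ["AzureExecutionProvider", "CPUExecutionProvider"]
validate_provider("CPUExecutionProvider", available)  # passes silently
```

Passing "OpenVINOExecutionProvider" with that list raises the exact error shown in the description.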

Urgency

Help is requested at the earliest: the model does not run with the above configuration, which is blocking deliverables.

Platform

Windows

OS Version

11

ONNX Runtime Installation

Released Package

ONNX Runtime Version or Commit ID

1.21.0

ONNX Runtime API

Python

Architecture

X64

Execution Provider

OpenVINO

Execution Provider Library Version

OpenVINO: 2024.3.0
