Describe the bug
ONNX Runtime C++ runs fine with GPU on the local machine but fails inside an nvidia-docker image.
ONNX Runtime fails to initialize the GPU with OrtSessionOptionsAppendExecutionProvider_CUDA(session_options, gpu_id); using CUDA 10.1 and cuDNN 7.6.5.
The error looks like:
bool onnxruntime::CudaCall(ERRTYPE, const char*, const char*, ERRTYPE, const char*) [with ERRTYPE = cublasStatus_t; bool THRW = true] CUBLAS failure 1: CUBLAS_STATUS_NOT_INITIALIZED ; GPU=0 ; hostname=00410badc493 ; expr=cublasCreate(&cublas_handle_);
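For context, a minimal sketch of the call path described above, assuming the 1.0.x C++ wrapper headers and a placeholder model path ("model.onnx" is hypothetical, not from the report). The expr=cublasCreate(&cublas_handle_) in the log points at CUDA provider initialization, so the failure appears while the session is being constructed.
```cpp
// Minimal sketch, not the full application. Assumes the 1.0.x headers
// onnxruntime_cxx_api.h and cuda_provider_factory.h; "model.onnx" is a placeholder.
#include <onnxruntime_cxx_api.h>
#include <cuda_provider_factory.h>  // declares OrtSessionOptionsAppendExecutionProvider_CUDA

int main() {
  Ort::Env env(ORT_LOGGING_LEVEL_WARNING, "cuda_repro");
  Ort::SessionOptions session_options;

  // Request the CUDA execution provider on GPU 0, as in the report.
  // A non-null return value means the provider could not be appended
  // (error handling elided in this sketch).
  OrtStatus* cuda_status = OrtSessionOptionsAppendExecutionProvider_CUDA(session_options, 0);
  (void)cuda_status;

  // The CUDA provider creates its cuBLAS handle as the session is built,
  // which appears to be where the CUBLAS_STATUS_NOT_INITIALIZED failure
  // above originates (the log shows expr=cublasCreate(&cublas_handle_)).
  Ort::Session session(env, "model.onnx", session_options);
  return 0;
}
```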
System information
- OS Platform and Distribution: Linux Ubuntu 16.04
- ONNX Runtime installed from (source or binary): source
- ONNX Runtime version: v1.0.1
- Python version: N/A (C++ API)
- Visual Studio version (if applicable): N/A
- GCC/Compiler version (if compiling from source): 5.4.0
- CUDA/cuDNN version: CUDA 10.1, cuDNN 7.6.5
- GPU model and memory:
To Reproduce
Describe steps/code to reproduce the behavior:
- Compile ORT from source with CUDA 10.1, cuDNN 7.6.5, and MKLML_gnu.
- Build a new Docker image from nvidia/cuda:10.1-cudnn7-devel-ubuntu16.04.
- Copy the compiled libraries and the sample code (similar to the sketch under "Describe the bug" above) to the Docker image, and enable CUDA with OrtSessionOptionsAppendExecutionProvider_CUDA(session_options, 0);
- Run the Docker container with GPU access enabled (see the check sketched below).
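Not part of the original report, but as a sanity check for the last step: a short sketch, compiled with nvcc inside the container, that confirms the CUDA runtime can actually see a device. If the container is started without GPU access (for example without nvidia-docker or the NVIDIA container runtime), no device is visible and cuBLAS initialization commonly fails with CUBLAS_STATUS_NOT_INITIALIZED.
```cpp
// Hypothetical sanity check, run inside the container.
// Build with: nvcc gpu_check.cu -o gpu_check
#include <cuda_runtime.h>
#include <cstdio>

int main() {
  int device_count = 0;
  cudaError_t err = cudaGetDeviceCount(&device_count);
  if (err != cudaSuccess) {
    // Typically fails (or reports no devices) when the container has no GPU access.
    std::printf("cudaGetDeviceCount failed: %s\n", cudaGetErrorString(err));
    return 1;
  }
  std::printf("Visible CUDA devices: %d\n", device_count);
  return 0;
}
```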
Expected behavior
The Docker container should run normally, the same as in the local environment.