Cannot use the GPU on Win10 after installing mxnet-cu90

mx.gpu() can be called and returns gpu(0), but gb.try_gpu() does not work: it falls back to cpu(). Looking at the try_gpu code, it fails at the step _ = nd.zeros((1,), ctx=ctx), yet no error is shown in IPython; the interpreter simply exits on its own, as in the screenshot.
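For reference, here is a minimal sketch of what gb.try_gpu() roughly does (based on the nd.zeros line quoted above; anything beyond that line is my paraphrase, not the exact gluonbook code): it builds a gpu(0) context and then attempts a tiny allocation on it, falling back to cpu() if that raises an MXNetError. mx.gpu() itself never touches the driver, which is why it can return gpu(0) while the allocation still fails.

import mxnet as mx
from mxnet import nd

def try_gpu():
    # Return gpu(0) if a tiny allocation on it succeeds, otherwise cpu().
    try:
        ctx = mx.gpu()
        _ = nd.zeros((1,), ctx=ctx)  # the line that fails in this thread
    except mx.base.MXNetError:
        ctx = mx.cpu()
    return ctx

print(try_gpu())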

I'm running into the same problem; did the OP ever find out how to solve it?

import mxnet as mx
ctx=mx.gpu()
ctx
gpu(0)

from mxnet import ndarray as nd
_=nd.zeros((1,),ctx=ctx)

Traceback (most recent call last):
File "", line 1, in
File "D:\Anaconda3\envs\gluon\lib\site-packages\mxnet\ndarray\utils.py", line 67, in zeros
return _zeros_ndarray(shape, ctx, dtype, **kwargs)
File "D:\Anaconda3\envs\gluon\lib\site-packages\mxnet\ndarray\ndarray.py", line 3753, in zeros
return _internal._zeros(shape=shape, ctx=ctx, dtype=dtype, **kwargs)
File "", line 34, in _zeros
File "D:\Anaconda3\envs\gluon\lib\site-packages\mxnet\_ctypes\ndarray.py", line 92, in _imperative_invoke
ctypes.byref(out_stypes)))
File "D:\Anaconda3\envs\gluon\lib\site-packages\mxnet\base.py", line 251, in check_call
raise MXNetError(py_str(_LIB.MXGetLastError()))
mxnet.base.MXNetError: [11:06:24] C:\Jenkins\workspace\mxnet-tag\mxnet\src\engine\threaded_engine.cc:320: Check failed: device_count_ > 0 (-1 vs. 0) GPU usage requires at least 1 GPU

Hello, I'm hitting the same problem. I'm running a UNet under the MXNet framework. Training runs fine, but prediction always fails with the following error:
raise MXNetError(py_str(_LIB.MXGetLastError()))
mxnet.base.MXNetError: [17:48:11] /home/travis/build/dmlc/mxnet-distro/mxnet-build/3rdparty/mshadow/../../src/operator/tensor/../elemwise_op_common.h:135: Check failed: assign(&dattr, vec.at(i)) Incompatible attr in node at 0-th output: expected [1], got [2]
How should I go about fixing this?
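Not sure about your UNet specifically, but an "Incompatible attr in node at 0-th output: expected [1], got [2]" error usually means the array fed at prediction time has a shape (often the channel dimension, given the [1] vs [2]) that disagrees with what the network's weights were built for during training. Below is a self-contained toy example of that kind of mismatch (a plain Conv2D, not a UNet, and the exact error text may differ):

import mxnet as mx
from mxnet import nd
from mxnet.gluon import nn

net = nn.Sequential()
net.add(nn.Conv2D(channels=8, kernel_size=3))
net.initialize()

# the first forward pass fixes the weight shape for a 1-channel input ("training")
net(nd.zeros((1, 1, 64, 64)))

# feeding a 2-channel array afterwards ("prediction") raises an MXNetError,
# because the stored weights expect a 1-channel input
try:
    net(nd.zeros((1, 2, 64, 64)))
except mx.base.MXNetError:
    print('prediction input shape does not match the trained weights')

So it is worth printing the shape of the array you pass to the model at predict time and comparing it with the shape used during training.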

Hi, did you ever solve this? What was the cause?

Hi, has this been resolved? I ran into the same problem:
mxnet.base.MXNetError: [10:55:57] C:\Jenkins\workspace\mxnet-tag\mxnet\src\engine\threaded_engine.cc:328: Check failed: device_count_ > 0 (-1 vs. 0) : GPU usage requires at least 1 GPU

I solved this: the problem is that the GPU driver version is too old and doesn't match the installed CUDA version. Just go to the NVIDIA website and update the driver.
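In case it helps anyone else: after updating the driver, nvidia-smi should report a driver version new enough for CUDA 9.0 (which is what mxnet-cu90 is built against), and the allocation that failed earlier should then go through. A quick check (same call as above, just forcing the async engine to execute):

import mxnet as mx
from mxnet import nd

a = nd.zeros((1,), ctx=mx.gpu())
a.wait_to_read()  # make the async engine actually run the op on the GPU
print(a)          # a 1-element NDArray on gpu(0) once the driver and CUDA match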