Newbie asking the experts for help: the "Incompatible attr in node at 0-th output" error

mxnet.base.MXNetError: [15:44:10] src/operator/tensor/./…/elemwise_op_common.h:123: Check failed: assign(&dattr, (*vec)[i]) Incompatible attr in node at 0-th output: expected [500,200], got [500,140450]
:pray:


expected [500,200], got [500,140450]

Where did it go wrong? Is it the .json file?

The output shapes don't match: it expects 500×200, but your output is 500×140450.

Thanks. Where can I see the output shape? In the .json file?

Are you using symbol? If so, there's infer_shape; see the docs for symbol.infer_shape.
If it's a gluon Block or Sequential, just feed the network a randomly initialized array, e.g.

a = nd.random_normal(shape=(your input shape))
print(net(a).shape)

and you'll see the output shape.
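
Since you're on symbol, here is a minimal sketch of how infer_shape is usually called (a toy one-layer network with made-up shapes, just to show the return value):

import mxnet as mx

data = mx.symbol.Variable('data')
fc = mx.symbol.FullyConnected(data=data, num_hidden=10)
net = mx.symbol.SoftmaxOutput(data=fc, name='softmax')

# infer_shape returns (arg_shapes, out_shapes, aux_shapes);
# the second element holds the output shapes
arg_shapes, out_shapes, aux_shapes = net.infer_shape(data=(50, 3, 20, 20))
print(out_shapes)   # e.g. [(50, 10)]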


Hi, I'm not sure whether I added it correctly (I added the bold part at the end). The code is as follows:
import mxnet as mx
import logging

import numpy as np

#import graphviz
# define the network structure
def get_lenet():

    data = mx.symbol.Variable('data')
    # first conv
    conv1 = mx.symbol.Convolution(data=data, kernel=(5,5), num_filter=20)
    tanh1 = mx.symbol.Activation(data=conv1, act_type="tanh")
    pool1 = mx.symbol.Pooling(data=tanh1, pool_type="max",
                              kernel=(2,2), stride=(2,2))
    # second conv
    conv2 = mx.symbol.Convolution(data=pool1, kernel=(5,5), num_filter=50)
    tanh2 = mx.symbol.Activation(data=conv2, act_type="tanh")
    pool2 = mx.symbol.Pooling(data=tanh2, pool_type="max",
                              kernel=(2,2), stride=(2,2))
    # first fullc
    flatten = mx.symbol.Flatten(data=pool2)
    fc1 = mx.symbol.FullyConnected(data=flatten, num_hidden=500)
    tanh3 = mx.symbol.Activation(data=fc1, act_type="relu")
    # second fullc
    fc2 = mx.symbol.FullyConnected(data=tanh3, num_hidden=34)
    # loss
    lenet = mx.symbol.SoftmaxOutput(data=fc2, name='softmax')

    # network visualization
    #mx.viz.plot_network(lenet).view()
    return lenet

# train and save the model
def main():
    batch_size = 50
    num_epoch = 5
    num_cpus = 1
    logging.basicConfig(level=logging.DEBUG)
    cpus = [mx.cpu(i) for i in range(num_cpus)]
    lenet = get_lenet()
    model = mx.model.FeedForward(ctx=cpus, symbol=lenet, num_epoch=num_epoch,
                                 learning_rate=0.001, momentum=0.9, wd=0.0001,
                                 initializer=mx.init.Uniform(0.07))
    train_dataiter = mx.io.ImageRecordIter(
        path_imgrec="/home/lqf/mxnet/lis/dc_train.rec",
        data_name='data',
        label_name='softmax_label',
        mean_img="/home/lqf/mxnet/lis/mean.bin",
        rand_crop=True,
        rand_mirror=True,
        data_shape=(3,20,20),
        batch_size=batch_size,
        preprocess_threads=1)
    test_dataiter = mx.io.ImageRecordIter(
        path_imgrec="/home/lqf/mxnet/lis/dc_val.rec",
        data_name='data',
        label_name='softmax_label',
        mean_img="/home/lqf/mxnet/lis/mean.bin",
        rand_crop=False,
        rand_mirror=False,
        data_shape=(3,20,20),
        batch_size=batch_size,
        preprocess_threads=1)
    model.fit(X=train_dataiter, eval_data=test_dataiter,
              batch_end_callback=mx.callback.Speedometer(100))
    # save the final trained model
    model.save('/home/lqf/mxnet/lis/lenetweights', num_epoch)
    c = lenet.infer_shape(data=(1,1,500,200))
    print(c)

if __name__ == "__main__":
    main()
The output is:
([(1, 1, 500, 200), (20, 1, 5, 5), (20,), (50, 20, 5, 5), (50,), (500, 286700), (500,), (34, 500), (34,), (1,)], [(1, 34)], [])
I'm still not sure what the output size is :joy:, hoping you can point me in the right direction :pray:


The second element returned by infer_shape is the output shapes, i.e. c[1]. Check what shape your train_dataiter actually produces; the problem is probably there.
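
A quick way to compare the two sides (a sketch using the train_dataiter and lenet names from your listing):

# what the iterator actually feeds in
print(train_dataiter.provide_data)    # e.g. [DataDesc('data', (50, 3, 20, 20))]
print(train_dataiter.provide_label)

# what the symbol expects/produces for that same input shape
arg_shapes, out_shapes, aux_shapes = lenet.infer_shape(data=(50, 3, 20, 20))
print(out_shapes)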

Hi, thanks for clearing that up. One more question: c[1] is [(1, 34)], does that mean there are 34 classes? If I only want two output classes, where should I change it?

Do I just change num_hidden=34 to 2?

Yes, exactly.
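
That is, only the last fully connected layer in your listing changes:

# second fullc -- 2 classes instead of 34
fc2 = mx.symbol.FullyConnected(data=tanh3, num_hidden=2)
lenet = mx.symbol.SoftmaxOutput(data=fc2, name='softmax')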

Thanks. Do you happen to know how my output of 500×140450 came about?
When I change data_shape to data_shape=(3,112,112), I get the following instead:
Incompatible attr in node at 0-th output: expected [500,31250], got [500,140450]

My guess is that the shape was never set when the file was read in the first place.
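
For what it's worth, the numbers can be traced by hand. Assuming your network's settings (5×5 convolutions without padding, 2×2 max pooling with stride 2), the flattened size before fc1 works out as below; 140450 is what you would get if the images actually reach the network at 224×224, which would fit the shape never being set when the data was read in:

def flatten_size(side, num_filter=50):
    # conv 5x5, no padding: side - 4;  pool 2x2, stride 2: (side - 2) // 2 + 1
    side = ((side - 4) - 2) // 2 + 1   # conv1 + pool1
    side = ((side - 4) - 2) // 2 + 1   # conv2 + pool2
    return num_filter * side * side

print(flatten_size(20))    # 200    -> "expected [500,200]"
print(flatten_size(112))   # 31250  -> "expected [500,31250]"
print(flatten_size(224))   # 140450 -> "got [500,140450]"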

Thanks, it's solved now.

@luckyfan Hi, I ran into the same issue as you. Can you tell me how you solved the shape problem?

mxnet.base.MXNetError: Error in operator deeprenewaltrainingnetwork0__mul1: [11:28:48] c:\jenkins\workspace\mxnet-tag\mxnet\src\operator\tensor…/elemwise_op_common.h:135: Check failed: assign(&dattr, vec.at(i)): Incompatible attr in node deeprenewaltrainingnetwork0__mul1 at 1-th input: expected [18], got [1,18]

Hi, I have a similar problem, but I can't find where the error occurs. How can I solve it?

Could you share the solution?