GPU memory usage suddenly spikes during model training, then out of memory


#1

with autograd.record():
    output = net([train_w, train_d, train_r])
    l = loss_function(output, train_t)
l.backward()
trainer.step(train_t.shape[0])
training_loss = l.mean().asscalar()
During the `l.mean()` step, GPU memory usage suddenly spikes and then cudaMalloc fails: out of memory.
Can any expert help me out? The initial loss value is also over ten thousand when training starts.