Traceback (most recent call last):
  File "E:/Program Files/PyCharm 2019.2/PyG/test.py", line 70, in <module>
    loss.backward()  # backpropagation to compute the gradients
  File "F:\Anaconda3\lib\site-packages\torch\_tensor.py", line 307, in backward
    torch.autograd.backward(self, gradient, retain_graph, create_graph, inputs=inputs)
  File "F:\Anaconda3\lib\site-packages\torch\autograd\__init__.py", line 156, in backward
    allow_unreachable=True, accumulate_grad=True)  # allow_unreachable flag
RuntimeError: CUDA error: device-side assert triggered
CUDA kernel errors might be asynchronously reported at some other API call, so the stacktrace below might be incorrect.
For debugging consider passing CUDA_LAUNCH_BLOCKING=1.
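Before changing the model, it helps to make the error point at the real failing operation. A minimal sketch of the two usual options; model, data and F.nll_loss are placeholders for whatever test.py actually defines, not taken from the original script:

import os
# Must be set before CUDA is initialized (before the first .to("cuda") / .cuda() call),
# so kernel errors are raised synchronously at the call that caused them.
os.environ["CUDA_LAUNCH_BLOCKING"] = "1"

# Option 2: run one forward/backward pass on the CPU. An out-of-range class label
# then raises a readable IndexError ("Target ... is out of bounds") instead of the assert.
# model, data = model.cpu(), data.cpu()     # placeholders for the objects in test.py
# loss = F.nll_loss(model(data.x, data.edge_index)[data.train_mask], data.y[data.train_mask])
# loss.backward()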
Cause:

self.conv1 = GCNConv(features, 32)   # first layer: node features -> 32 hidden channels
self.conv2 = GCNConv(32, classes)    # second layer: 32 hidden channels -> classes output logits
Here classes does not match the number of classes in the original data. As soon as a target label is greater than or equal to the number of output channels, the NLL/cross-entropy kernel indexes out of range and the device-side assert fires; because CUDA reports errors asynchronously, it only surfaces later, at loss.backward().
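A quick check makes the mismatch visible before training; data, dataset and classes below refer to the PyG Data object, the dataset and the layer width used in test.py:

print("labels in data      :", int(data.y.max()) + 1)   # largest class index + 1
print("dataset.num_classes :", dataset.num_classes)
print("model output width  :", classes)                  # value passed to the last GCNConv

All three should agree; if classes is smaller than the label count, the loss kernel triggers the assert.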
Solution: take the output dimension of the last layer from the dataset itself.

self.conv2 = GCNConv(32, dataset.num_classes)
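For reference, a minimal two-layer GCN wired to the dataset's own dimensions. This is only a sketch of the standard PyG node-classification setup; Planetoid/Cora, the 32 hidden channels and the variable names are illustrative assumptions, not the original test.py:

import torch
import torch.nn.functional as F
from torch_geometric.datasets import Planetoid
from torch_geometric.nn import GCNConv

dataset = Planetoid(root="data/Cora", name="Cora")   # assumed dataset; any node-classification dataset works
data = dataset[0]

class GCN(torch.nn.Module):
    def __init__(self):
        super().__init__()
        self.conv1 = GCNConv(dataset.num_node_features, 32)  # input width taken from the data
        self.conv2 = GCNConv(32, dataset.num_classes)        # output width = real number of classes

    def forward(self, x, edge_index):
        x = F.relu(self.conv1(x, edge_index))
        x = F.dropout(x, training=self.training)
        x = self.conv2(x, edge_index)
        return F.log_softmax(x, dim=1)

model = GCN()
out = model(data.x, data.edge_index)
loss = F.nll_loss(out[data.train_mask], data.y[data.train_mask])
loss.backward()   # no device-side assert once label range and output width agree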