Deep Learning with PyTorch: Loss Functions and Backpropagation


A loss function serves two purposes:

  1. It measures the gap between the actual output and the target.
  2. It provides a basis for updating the output (via backpropagation).

Note:

With L1 loss, if reduction='none' the loss is returned element-wise, so the output has the same shape as the input (no averaging or summing is applied).
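A minimal sketch of how the reduction argument changes what L1Loss returns (same example values as in the code below; the printed results follow standard PyTorch behavior):

import torch
from torch.nn import L1Loss

inputs = torch.tensor([1, 2, 3], dtype=torch.float32)
targets = torch.tensor([1, 2, 5], dtype=torch.float32)

# reduction='none': element-wise loss, same shape as the input
print(L1Loss(reduction='none')(inputs, targets))  # tensor([0., 0., 2.])
# reduction='mean' (the default): average over all elements
print(L1Loss(reduction='mean')(inputs, targets))  # tensor(0.6667)
# reduction='sum': total over all elements
print(L1Loss(reduction='sum')(inputs, targets))   # tensor(2.)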

import torch
from torch import nn
from torch.nn import L1Loss, MSELoss

inputs = torch.tensor([1, 2, 3], dtype=torch.float32)
targets = torch.tensor([1, 2, 5], dtype=torch.float32)

# Reshape to (N, C, H, W) = (1, 1, 1, 3) to mimic a typical batched tensor
inputs = torch.reshape(inputs, (1, 1, 1, 3))
targets = torch.reshape(targets, (1, 1, 1, 3))

# L1 loss: mean absolute error, (|0| + |0| + |2|) / 3
loss = L1Loss()
result = loss(inputs, targets)

# MSE loss: mean squared error, (0 + 0 + 4) / 3
loss_mse = MSELoss()
result1 = loss_mse(inputs, targets)

print(result)   # tensor(0.6667)
print(result1)  # tensor(1.3333)


# Cross-entropy loss, suited to classification problems.
# Input: logits of shape (N, num_classes); target: class indices of shape (N,)
x = torch.tensor([0.1, 0.2, 0.3])
y = torch.tensor([1])
x = torch.reshape(x, (1, 3))  # a batch of 1 sample with 3 classes
loss_cross = nn.CrossEntropyLoss()
result_cross = loss_cross(x, y)
print(result_cross)
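To sanity-check the cross-entropy result, it can be computed by hand: for logits x and target class y, CrossEntropyLoss is -x[y] + log(sum_j exp(x[j])). A minimal sketch using the same x and y as above:

import torch

x = torch.tensor([[0.1, 0.2, 0.3]])
y = 1
# -x[y] + log(sum_j exp(x[j]))
manual = -x[0, y] + torch.log(torch.exp(x[0]).sum())
print(manual)  # about 1.1019, matching result_cross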

Loss functions and backpropagation in practice:

import torchvision
from torch import nn
from torch.nn import Sequential, Conv2d, MaxPool2d, Flatten, Linear
from torch.utils.data import DataLoader

dataset = torchvision.datasets.CIFAR10(r"C:\Users3\Desktop\python4.7\test03\data", train=False,
                                       transform=torchvision.transforms.ToTensor(), download=True)
dataloader = DataLoader(dataset, batch_size=64)
class LR(nn.Module):

    def __init__(self):
        super(LR, self).__init__()
        # CIFAR10 input: (N, 3, 32, 32)
        self.model1 = Sequential(
            Conv2d(3, 32, 5, padding=2),   # -> (N, 32, 32, 32)
            MaxPool2d(2),                  # -> (N, 32, 16, 16)
            Conv2d(32, 32, 5, padding=2),  # -> (N, 32, 16, 16)
            MaxPool2d(2),                  # -> (N, 32, 8, 8)
            Conv2d(32, 64, 5, padding=2),  # -> (N, 64, 8, 8)
            MaxPool2d(2),                  # -> (N, 64, 4, 4)
            Flatten(),                     # -> (N, 64 * 4 * 4) = (N, 1024)
            Linear(1024, 64),
            Linear(64, 10)                 # 10 CIFAR10 classes
        )

    def forward(self, x):
        x = self.model1(x)
        return x

# Cross-entropy loss for the 10-class classification task
loss = nn.CrossEntropyLoss()

lrp = LR()
for data in dataloader:
    imgs, targets = data
    outputs = lrp(imgs)
    result_loss = loss(outputs, targets)
    # print(outputs)
    # print(targets)
    # Backpropagation: compute gradients of the loss w.r.t. every parameter
    result_loss.backward()
    print("ok")
