A Summary of My PyTorch Notes


Table of Contents
  • A Summary of My PyTorch Notes
  • Preface
    • 1. The Overall Structure
    • 2. A Quick Look at Each Component (Quick Setup)
      • Part 1: dataset
      • Part 2: Converting images to Tensors (PyTorch works on Tensors)
        • (1) Loading one of the official datasets (MNIST here)
        • (2) Building a dataset from your own images
      • Part 3: dataloader
      • Part 4: Module
      • Part 5: the optimizer
      • Putting It All Together

Preface

These are notes summarizing material I have been studying. The main sources are listed up front so you can find the original authors:

Bilibili uploader 牛奶奶:
https://www.bilibili.com/video/BV1UR4y1t7Cm?spm_id_from=333.999.0.0

Zhihu:
https://bbs.cvmart.net/articles/3663

1. The Overall Structure

1. PyTorch implements model training through five key components.

2. The main responsibilities of each component:

  • Data: reading, cleaning, splitting, and preprocessing the data, e.g. how images are read, preprocessed, and augmented.
  • Model: building model modules, organizing complex networks, initializing network parameters, defining network layers.
  • Loss function: creating the loss function, setting its hyperparameters, and choosing a loss suited to the task.
  • Optimizer: updating parameters from their gradients with some optimizer, managing model parameters, managing multiple parameter groups to get different learning rates, and adjusting the learning rate.
  • Training loop: running the four modules above repeatedly; observing training progress, plotting Loss/Accuracy curves, and visualizing with TensorBoard.
2. A Quick Look at Each Component (Quick Setup)

Part 1: dataset

In PyTorch, a dataset is **not the data itself but an instance of a class** (i.e., an instantiated object). Its base class is torch.utils.data.Dataset.

A dataset must override:

__getitem__: the method that fetches one sample
__len__: returns the size of the dataset, i.e., the number of samples

Why use `__`???

dataset[i] calls __getitem__
len(dataset) calls __len__
They behave identically because Python provides magic methods.

Magic methods are all the Python methods whose names begin and end with "__" (double underscores). Each one can be invoked through simpler built-in syntax, used just like a plain function call.
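
As a quick illustration (a minimal sketch, not from the original sources), a custom Dataset only needs those two magic methods:

import torch
from torch.utils.data import Dataset

class SquaresDataset(Dataset):
    """Toy dataset: sample i is the pair (i, i*i)."""
    def __init__(self, n=100):
        self.n = n

    def __getitem__(self, i):
        # invoked by dataset[i]
        return torch.tensor(float(i)), torch.tensor(float(i * i))

    def __len__(self):
        # invoked by len(dataset)
        return self.n

ds = SquaresDataset()
print(len(ds))  # 100
print(ds[3])    # (tensor(3.), tensor(9.))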

Part 2: Converting images to Tensors (PyTorch works on Tensors)
from torchvision import transforms
'''
The methods in the transforms module convert data into the format we need.
'''

An overview of what torchvision contains:

  • torchvision.datasets: functions for loading data and interfaces to common datasets;
  • torchvision.models: common model architectures (including pretrained models), e.g. AlexNet, VGG, ResNet;
  • torchvision.transforms: common image transformations, e.g. cropping and rotation;
  • torchvision.utils: other useful utilities.
    Source: https://blog.csdn.net/wangkaidehao/article/details/104520022/
(1) Loading one of the official datasets (MNIST here)
def hello_world():
    from torchvision.datasets.mnist import MNIST
    from torchvision import transforms
    # transforms.Compose chains multiple transforms operations
    transform = transforms.Compose(
        [
            transforms.ToTensor(),
            transforms.Normalize(mean=(0.5,), std=(0.5,))  # mean and standard deviation
        ]
    )
    train_dataset = MNIST(root="./mnist_data", # path
                          train=True, # training set (True) or test set (False)
                          transform=transform, # the transform defined above
                          target_transform=None, # do not transform the labels
                          download=True) # download the dataset
    index = 0
    print("type(train_dataset[{}]):{}".format(index, type(train_dataset[index])))
    print("type(train_dataset[{}][0]):{}".format(index, type(train_dataset[index][0])))
    print("train_dataset[{}][0].shape:{}".format(index, train_dataset[index][0].shape))
    print("len(train_dataset):{}".format(len(train_dataset)))
    print("type(train_dataset[{}][1]):{}".format(index, type(train_dataset[index][1])))
    return train_dataset
(2) Building a dataset from your own images

Use the ImageFolder class to build a dataset from your own images.

def create_mydataset():
    import os
    from torchvision.datasets import ImageFolder
    from torchvision import transforms
    transform = transforms.Compose(
        [
            # random resized crop
            transforms.RandomResizedCrop(size=(224, 224)),
            # random horizontal flip
            transforms.RandomHorizontalFlip(),
            transforms.ToTensor(),
            transforms.Normalize((0.5, 0.5, 0.5), (0.5, 0.5, 0.5))
        ]
    )

    # path to the training set
    train_dataset = ImageFolder(root=os.path.join(r"D:\yolo_B\images", "train"),
                                transform=transform, target_transform=None)

    index = 0
    print("type(train_dataset[{}]):{}".format(index, type(train_dataset[index])))
    print("type(train_dataset[{}][0]):{}".format(index, type(train_dataset[index][0])))
    print("train_dataset[{}][0].shape:{}".format(index, train_dataset[index][0].shape))
    print("len(train_dataset):{}".format(len(train_dataset)))
    print("type(train_dataset[{}][1]):{}".format(index, type(train_dataset[index][1])))
    # ImageFolder also has attributes: train_dataset.classes (the class names)
    # and train_dataset.class_to_idx (the name-to-index mapping)
    print("train_dataset.classes:{}".format(train_dataset.classes))            # e.g. ['blue', 'red']
    print("train_dataset.class_to_idx:{}".format(train_dataset.class_to_idx))  # e.g. {'blue': 0, 'red': 1}

Part 3: dataloader

DataLoader is also a class: it wraps a dataset into an iterable object. In other words, it splits the dataset into individual batches, and everything that follows operates on this iterable.

def learn_dataloader():
    train_dataset = hello_world()
    from torch.utils.data import DataLoader
    train_loader = DataLoader(dataset=train_dataset,
                              batch_size=10000,
                              shuffle=False)
    # if shuffle=True, the sample order is reshuffled between epochs

    from collections.abc import Iterable
    print("isinstance(train_dataset,Iterable):{}".
          format(isinstance(train_dataset, Iterable)))
    print("isinstance(train_loader,Iterable):{}".
          format(isinstance(train_loader, Iterable)))

    print("type(train_loader):{}".format(type(train_loader)))
    # len(loader) is the number of batches; len(dataset) is the total number of samples
    print("len(train_loader):{}".format(len(train_loader)))
    # each batch holds the images in one element and the labels in the other
    for batch in train_loader:
        print("type(batch):{}".format(type(batch)))
        print("len(batch):{}".format(len(batch)))
        print("type(batch[0]):{}".format(type(batch[0])))
        print("type(batch[1]):{}".format(type(batch[1])))
        print("batch[0].shape:{}".format(batch[0].shape))
        print("batch[1].shape:{}".format(batch[1].shape))
        break
    return train_loader  # returned so the training example further down can reuse it

A small Python training trick:
enumerate (see "Python enumerate() 函数" on runoob.com)

def fun_way():
    train_dataset = hello_world()
    from torch.utils.data import DataLoader
    train_loader = DataLoader(dataset=train_dataset,
                              batch_size=10000,
                              shuffle=False)

    for batch,(x,y) in enumerate(train_loader):
        print("batch:{},type(x):{},type(y):{}".format(batch, type(x), type(y)))
        print("batch:{},x.shape:{},y.shape:{}".format(batch, x.shape, y.shape))
        break

Use tqdm to add a progress bar:

def fun_way2():
    train_dataset = hello_world()
    from tqdm import tqdm
    from torch.utils.data import DataLoader
    train_loader = DataLoader(dataset=train_dataset,
                              batch_size=10000,
                              shuffle=False)

    with tqdm(train_loader, desc="TRAINING") as train_bar:
        for (x, y) in train_bar:
            pass
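
Aside (not from the original sources): tqdm can also show live values on the bar via set_postfix, which is handy for displaying the loss while training. A minimal sketch:

from tqdm import tqdm

with tqdm(range(100), desc="TRAINING") as train_bar:
    for step in train_bar:
        # in a real training loop this would be loss.item()
        train_bar.set_postfix(loss=1.0 / (step + 1))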

collate_fn can also be used to change the labels' type to torch.Tensor:

def eg_1():
    import torch
    train_dataset = hello_world()

    def collate_fn(batch):
        print("type(batch):{}".format(type(batch)))
        print("len(batch):{}".format(len(batch)))
        print("type(batch[0]):{}".format(type(batch[0])))
        x = [i[0] for i in batch]
        y = [i[1] for i in batch]
        # stack the (1, 28, 28) images into (B, 28, 28), then restore
        # the channel dimension to get (B, 1, 28, 28)
        x = torch.cat(x)[:, None, ...]
        y = torch.Tensor(y)
        return {"x": x, "y": y}

    from torch.utils.data import DataLoader
    train_loader = DataLoader(dataset=train_dataset,
                              batch_size=10000,
                              shuffle=False,
                              collate_fn=collate_fn)

    for batch in train_loader:
        print("type(batch): {}".format(type(batch)))             # <class 'dict'>
        print("type(batch['x']): {}".format(type(batch["x"])))   # <class 'torch.Tensor'>
        print("type(batch['y']): {}".format(type(batch["y"])))   # <class 'torch.Tensor'>
        print("batch['x'].shape: {}".format(batch["x"].shape))   # torch.Size([10000, 1, 28, 28])
        print("batch['y'].shape: {}".format(batch["y"].shape))   # torch.Size([10000])
        break
Part 4: Module

Every model must inherit from the base class (nn.Module). To inspect a model:

  • print(model) directly
  • or use model.named_parameters()
super(Model, self).__init__() calls the parent class's initializer; just include it whenever you build a model.
model.__call__(x) and model(x) are the same thing,
and model.__call__(x) internally calls model.forward(x).
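
A quick sketch of both inspection methods on a tiny throwaway model (the layer here is purely illustrative):

from torch import nn

model = nn.Sequential(nn.Linear(4, 2))
print(model)  # prints the module tree

for name, param in model.named_parameters():
    # prints e.g. "0.weight torch.Size([2, 4])" and "0.bias torch.Size([2])"
    print(name, param.shape)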

Calling the model and flattening:

def eg_3():
    from torch import nn
    train_dataset = hello_world()

    class SimpleMode(nn.Module):
        def __init__(self):
            super(SimpleMode, self).__init__()
            self.conv1 = nn.Conv2d(in_channels=1, out_channels=3, kernel_size=(1, 1))
            self.conv2 = nn.Conv2d(in_channels=3, out_channels=5, kernel_size=(1, 1))
            self.relu = nn.ReLU(inplace=True)
            # flatten starting from dim 1
            self.flatten = nn.Flatten(start_dim=1, end_dim=-1)  # (B, C, H, W)
            self.linear = nn.Linear(in_features=5*28*28, out_features=10, bias=False)

        def forward(self, x):
            x = self.conv1(x)
            x = self.relu(x)
            x = self.conv2(x)
            x = self.relu(x)
            print("[before flatten x shape]:{}".format(x.shape))
            x = self.flatten(x)
            print("[after flatten x shape]:{}".format(x.shape))
            x = self.linear(x)
            x = self.relu(x)
            return x

    model = SimpleMode()
    x = train_dataset[0][0]
    x = x[None, ...]    # add a batch_size dimension
    model(x)

Methods related to the model

Loading model weights
Method 1:
torch.load("./vgg16.pth", map_location="cpu")
Method 2:
from torch.utils import model_zoo
state_dict = model_zoo.load_url("URL of the weights")
Saving model weights
torch.save(model.state_dict(), "path")  # saves the model's parameters
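
A minimal save/load round trip with a state_dict (the file name here is just an example):

import torch
from torch import nn

model = nn.Linear(4, 2)
torch.save(model.state_dict(), "./model.pth")    # save only the parameters

model2 = nn.Linear(4, 2)                         # rebuild the same architecture first
model2.load_state_dict(torch.load("./model.pth", map_location="cpu"))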
Part 5: the optimizer

The optimizer is what updates the model's parameters.

Creating an optimizer:

    from torch import optim
    # instantiate the stochastic-gradient-descent SGD class directly
    # lr: learning rate; momentum: momentum
    optimizer = optim.SGD(params=model.parameters(), lr=0.0001, momentum=0.9)
    print("optim.state_dict():{}".format(optimizer.state_dict()))

Optimizing only some of the parameters:

def eg_4():
    from torch import nn
    train_dataset = hello_world()

    class SimpleMode(nn.Module):
        def __init__(self):
            super(SimpleMode, self).__init__()
            self.conv1 = nn.Conv2d(in_channels=1, out_channels=3, kernel_size=(1, 1))
            self.conv2 = nn.Conv2d(in_channels=3, out_channels=5, kernel_size=(1, 1))
            self.relu = nn.ReLU(inplace=True)
            # flatten starting from dim 1
            self.flatten = nn.Flatten(start_dim=1, end_dim=-1)  # (B, C, H, W)
            self.linear = nn.Linear(in_features=5*28*28, out_features=10, bias=False)

        def forward(self, x):
            x = self.conv1(x)
            x = self.relu(x)
            x = self.conv2(x)
            x = self.relu(x)
            print("[before flatten x shape]:{}".format(x.shape))
            x = self.flatten(x)
            print("[after flatten x shape]:{}".format(x.shape))
            x = self.linear(x)
            x = self.relu(x)
            return x

    model = SimpleMode()
    x = train_dataset[0][0]
    x = x[None, ...]    # add a batch_size dimension
    model(x)
    # make the network learn only some of its parameters
    from torch import optim
    # here only the bias parameters are the ones we train
    params = [param for name, param in model.named_parameters() if ".bias" in name]
    optimizer = optim.SGD(params=params, lr=0.0001, momentum=0.9)
    print("optim.state_dict():{}".format(optimizer.state_dict()))

The highly formulaic training-loop steps:

def eg_5():
    from torch import nn
    train_dataset = hello_world()

    class SimpleMode(nn.Module):
        def __init__(self):
            super(SimpleMode, self).__init__()
            self.conv1 = nn.Conv2d(in_channels=1, out_channels=3, kernel_size=(1, 1))
            self.conv2 = nn.Conv2d(in_channels=3, out_channels=5, kernel_size=(1, 1))
            self.relu = nn.ReLU(inplace=True)
            # flatten starting from dim 1
            self.flatten = nn.Flatten(start_dim=1, end_dim=-1)  # (B, C, H, W)
            self.linear = nn.Linear(in_features=5 * 28 * 28, out_features=10, bias=False)

        def forward(self, x):
            x = self.conv1(x)
            x = self.relu(x)
            x = self.conv2(x)
            x = self.relu(x)
            print("[before flatten x shape]:{}".format(x.shape))
            x = self.flatten(x)
            print("[after flatten x shape]:{}".format(x.shape))
            x = self.linear(x)
            x = self.relu(x)
            return x

    model = SimpleMode()
    x = train_dataset[0][0]
    x = x[None, ...]  # add a batch_size dimension
    model(x)
    from torch import optim
    from tqdm import tqdm
    optimizer = optim.SGD(params=model.parameters(), lr=0.001, momentum=0.9)
    loss_fn = nn.CrossEntropyLoss() # typically used for multi-class classification
    train_loader = learn_dataloader()

    for epoch in range(2):
        with tqdm(train_loader, desc="EPOCH:{}".format(epoch)) as train_bar:
            for (x, y) in train_bar:
                optimizer.zero_grad()
                loss = loss_fn(model(x), y)
                loss.backward()
                optimizer.step()
            print("epoch:{},loss:{:.6f}".format(epoch, loss))
Putting It All Together

The full training script:

import os
from datetime import datetime
import torch
from torch import nn
from torch import optim
from torch.utils.data import Dataset, DataLoader
from torchvision import models, transforms
from torchvision.datasets.mnist import MNIST
from tqdm import tqdm

transform = transforms.Compose(
  [
    transforms.ToTensor(),
    transforms.Normalize(mean=(0.5,), std=(0.5,))
  ]
)
# dataset
train_dataset = MNIST(root="./mnist_data",
                      train=True,
                      transform=transform,
                      target_transform=None,
                      download=False)
# dataloader
train_loader = DataLoader(dataset=train_dataset,
                          batch_size=100,
                          shuffle=True)

class SimpleModel(nn.Module):
  def __init__(self):
      super(SimpleModel, self).__init__()
      self.conv1 = nn.Conv2d(in_channels=1, out_channels=3, kernel_size=(1, 1))
      self.conv2 = nn.Conv2d(in_channels=3, out_channels=5, kernel_size=(1, 1))
      self.relu = nn.ReLU(inplace=True)
      self.flatten = nn.Flatten(start_dim=1, end_dim=-1)
      self.linear = nn.Linear(in_features=5*28*28, out_features=10, bias=False)

  def forward(self, x):
      x = self.conv1(x)
      x = self.relu(x)
      x = self.conv2(x)
      x = self.relu(x)
      x = self.flatten(x)
      x = self.linear(x)
      x = self.relu(x)
      return x
# model
model = SimpleModel()
model.load_state_dict(torch.load("./model_2021_11_19.pth"))  # resume from previously saved weights
# optimizer
optimizer = optim.SGD(params=model.parameters(), lr=0.001, momentum=0.9)
loss_fn = nn.CrossEntropyLoss()
# train
for epoch in range(2):
  with tqdm(train_loader, desc="EPOCH: {}".format(epoch)) as train_bar:
    for (x, y) in train_bar:
      optimizer.zero_grad()
      loss = loss_fn(model(x), y)
      loss.backward()
      optimizer.step()
  print("epoch: {},  loss: {:.6f}".format(epoch, loss))

time = str(datetime.now()).split(" ")[0].replace("-", "_")
torch.save(model.state_dict(), "model_{}.pth".format(time))

print("~~~~~~撒花~~~~~~")
