PyTorch Quick Start Tutorial


Table of Contents
  • A minimal PyTorch tutorial
    • General steps for deep learning with PyTorch
      • Building the model
      • Preparing and preprocessing the data
        • Dataset
        • DataLoader
        • transforms
      • Training the model
        • Loss function (loss)
        • Optimizer (optimizer)
      • Saving and restoring the model
        • Method 1: save and restore only the model parameters
        • Method 2: save and restore both the parameters and the network structure
        • Extracting and loading model parameters: the model's state_dict attribute
      • A simple but complete example
    • Miscellaneous
      • Two great learning tools for Python/PyTorch
      • Using the GPU
      • Visualization
        • SummaryWriter
        • tqdm
      • Building sequential networks faster: Sequential

Bilibili video tutorial: PyTorch Deep Learning Quick Start Tutorial

A minimal PyTorch tutorial

General steps for deep learning with PyTorch

Building the model

To build your own model, it must inherit from nn.Module and implement two methods: __init__(self) and forward().


The model is usually defined in a separate file, e.g. Models.py:

import torch
from torch import nn
import torch.nn.functional as F    # functional versions of layers and activations
class Mnist_CNN(nn.Module):
    # define the network structure
    def __init__(self):
        super().__init__()
        self.conv1 = nn.Conv2d(in_channels=1, out_channels=32, kernel_size=5, stride=1, padding=2)
        self.pool1 = nn.MaxPool2d(kernel_size=2, stride=2, padding=0)
        self.conv2 = nn.Conv2d(in_channels=32, out_channels=64, kernel_size=5, stride=1, padding=2)
        self.pool2 = nn.MaxPool2d(kernel_size=2, stride=2, padding=0)
        self.fc1 = nn.Linear(7*7*64, 512)
        self.fc2 = nn.Linear(512, 10)
    # forward pass
    def forward(self, inputs):
        # -1 lets PyTorch infer the batch dimension from the remaining three
        tensor = inputs.view(-1, 1, 28, 28)
        tensor = F.relu(self.conv1(tensor))
        tensor = self.pool1(tensor)
        tensor = F.relu(self.conv2(tensor))
        tensor = self.pool2(tensor)
        tensor = tensor.view(-1, 7*7*64)
        tensor = F.relu(self.fc1(tensor))
        tensor = self.fc2(tensor)
        return tensor

Preparing and preprocessing the data

Training a model usually begins with solving the data input and preprocessing problems.


PyTorch provides two useful tools for this: the torch.utils.data.Dataset and torch.utils.data.DataLoader classes.


The workflow is to first wrap the raw data in a torch.utils.data.Dataset, and then pass that Dataset as an argument to torch.utils.data.DataLoader. The resulting data loader returns one batch of data at a time for the model to train on.


Dataset

Dataset reference 1, Dataset reference 2

A Dataset is an abstraction over a dataset: it must inherit from Dataset and implement two methods, __getitem__(self, index) and __len__(self), in addition to __init__(self).


__init__ is usually responsible for loading all the raw data and doing any initialization.


__getitem__ retrieves a single sample by index and preprocesses it.


How the raw data is loaded and how it is preprocessed are entirely up to you, including how the sample returned by dataset[index] is organized.


The general structure is as follows:

from torch.utils.data import Dataset

class MyDataSet(Dataset):
    def __init__(self):
        # load or index the raw data here
        self.sample_list = ...

    def __getitem__(self, index):
        # fetch and preprocess one sample; f1 and f2 stand in for your own logic
        x = f1(index)
        y = f2(index)
        return x, y

    def __len__(self):
        return len(self.sample_list)

Tensors can also be wrapped into a Dataset directly with TensorDataset:

import torch
from torch.utils.data import Dataset, DataLoader, TensorDataset

src = torch.sin(torch.arange(1, 1000, 0.1))
trg = torch.cos(torch.arange(1, 1000, 0.1))

data = TensorDataset(src, trg)
data_loader = DataLoader(data, batch_size=5, shuffle=False)
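
Iterating over the resulting loader yields one batch per step; a minimal sketch continuing the snippet above:

# each batch is a pair of tensors of shape (5,), matching batch_size=5
for x_batch, y_batch in data_loader:
    print(x_batch.shape, y_batch.shape)    # torch.Size([5]) torch.Size([5])
    break
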
DataLoader

Reference: pytorch DataLoader data iterator

from torch.utils.data import DataLoader

dataset = MyDataSet()    # the Dataset subclass defined above
dataloader = DataLoader(dataset, batch_size=10, shuffle=True)
transforms

transforms are usually passed as an argument to a Dataset (e.g. the transform parameter of the torchvision datasets) to preprocess the data, but they can also be applied on their own.


import cv2
import torchvision
from torchvision import transforms

test_data = torchvision.datasets.CIFAR10("./dataset", train=False, transform=transforms.ToTensor())
# transforms can also be applied to data directly:
train_transformer = transforms.Compose([
    transforms.ToPILImage(),
    transforms.Resize(256),
    transforms.RandomResizedCrop(224, scale=(0.5, 1.0)),
    transforms.RandomHorizontalFlip(),
    transforms.ToTensor(),    # the most commonly used transform
    transforms.Normalize(mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225])
])
# img is a numpy.ndarray
img = cv2.imread(img_path)  # read the image
img1 = train_transformer(img)
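
One caveat: cv2.imread returns a BGR numpy.ndarray, while the Normalize constants above are the usual RGB ImageNet statistics, so you may want to convert first with cv2.cvtColor(img, cv2.COLOR_BGR2RGB).
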
Training the model

Two questions: what to train (which parameters)? And how to train (the optimizer)?

Loss function (loss)

First define the loss function, then feed it data.


import torch
from torch import nn

inputs = torch.tensor([1, 2, 3], dtype=torch.float32)
targets = torch.tensor([1, 2, 5], dtype=torch.float32)
inputs = torch.reshape(inputs, (1, 1, 1, 3))
targets = torch.reshape(targets, (1, 1, 1, 3))
loss_mse = nn.MSELoss()    # define the loss function
result_mse = loss_mse(inputs, targets)    # feed it data
# result is 1.333, i.e. ((5 - 3) ** 2) / 3

# how to use a loss in a model
loss = nn.CrossEntropyLoss()    # define the loss function
model = MyModel()
for data in dataloader:
    imgs, targets = data
    outputs = model(imgs)
    result_loss = loss(outputs, targets)    # feed it data
Optimizer (optimizer)

First zero the gradients in the optimizer with optimizer.zero_grad(), then backpropagate with result_loss.backward(), and finally update the parameters with optimizer.step().


model = MyModel()
loss = nn.CrossEntropyLoss()
optimizer = torch.optim.SGD(model.parameters(), lr=0.01)
for epoch in range(20):
    running_loss = 0.0
    for data in dataloader:
        imgs, targets = data
        outputs = model(imgs)
        result_loss = loss(outputs, targets)
        optimizer.zero_grad()    # always zero the gradients first!
        result_loss.backward()
        optimizer.step()
        running_loss = running_loss + result_loss.item()    # .item() avoids accumulating autograd history
    print(running_loss)
Saving and restoring the model

Reference

Method 1: save and restore only the model parameters
# save
torch.save(the_model.state_dict(), PATH)
# restore
the_model = TheModelClass(*args, **kwargs)
the_model.load_state_dict(torch.load(PATH))
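
If the restored model is used for inference, remember to call the_model.eval() first so that layers such as dropout and batch normalization switch to evaluation mode.
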
Method 2: save and restore both the parameters and the network structure
# save
torch.save(the_model, PATH)
# restore
the_model = torch.load(PATH)

Data saved this way is bound to the specific classes and the exact directory structure used at save time.


It may therefore break when loaded after the project has gone through heavy refactoring.


Extracting and loading model parameters: the model's state_dict attribute

Reference: [PyTorch tip 1] state_dict in PyTorch explained

# copy every parameter tensor out of the model
global_parameters = {}
for key, var in net.state_dict().items():
    global_parameters[key] = var.clone()
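
The extracted parameters can be loaded back in the same way; a minimal sketch continuing the snippet above:

# load the cloned parameters into a model with the same architecture
net.load_state_dict(global_parameters)
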
A simple but complete example
import torch, torchvision
from torch import nn
from torch.utils.tensorboard import SummaryWriter
from torch.utils.data import Dataset, DataLoader
from torchvision import transforms

import torch.nn.functional as F

from tqdm import tqdm
# 1. use CPU or GPU
dev = torch.device("cuda") if torch.cuda.is_available() else torch.device("cpu")
print('dev is: ', dev)
# 2. visualization
writer = SummaryWriter(log_dir='./logs', comment='fashion_mnist')
# 3.1 data preparation: Dataset
train_data = torchvision.datasets.FashionMNIST(root='./data', train=True, download=True, transform=transforms.ToTensor())
test_data = torchvision.datasets.FashionMNIST(root='./data', train=False, download=True, transform=transforms.ToTensor())
# 3.2 data preparation: DataLoader
train_dataloader = DataLoader(dataset=train_data, batch_size=64, shuffle=True)
test_dataloader = DataLoader(dataset=test_data, batch_size=64, shuffle=False)
# 3.3 a custom Dataset can also be defined
class MyDataset(Dataset):
	"""docstring for MyDataset"""
	def __init__(self, arg):
		super(MyDataset, self).__init__()
		self.arg = arg
		
	def __getitem__(self, index):
		pass

	def __len__(self):
		pass

# 4. build the model
class MyModel(nn.Module):
	"""docstring for MyModel"""
	def __init__(self):
		super(MyModel, self).__init__()
		# input is (?, 1, 28, 28); after two conv+pool blocks it becomes (?, 64, 7, 7)
		self.conv1 = nn.Conv2d(in_channels=1, out_channels=32, kernel_size=5, stride=1, padding=2)
		self.pool1 = nn.MaxPool2d(kernel_size=2, stride=2, padding=0)
		self.conv2 = nn.Conv2d(in_channels=32, out_channels=64, kernel_size=5, stride=1, padding=2)
		self.pool2 = nn.MaxPool2d(kernel_size=2, stride=2, padding=0)
		self.fc1 = nn.Linear(in_features=7*7*64, out_features=512, bias=True)
		self.fc2 = nn.Linear(in_features=512, out_features=10, bias=True)

	def forward(self, inputs):
		# -1 lets PyTorch infer the batch dimension from the remaining three
		tensor = inputs.view(-1, 1, 28, 28)
		tensor = F.relu(self.conv1(tensor))
		tensor = self.pool1(tensor)
		tensor = F.relu(self.conv2(tensor))
		tensor = self.pool2(tensor)
		tensor = tensor.view(-1, 7 * 7 * 64)
		tensor = F.relu(self.fc1(tensor))
		tensor = self.fc2(tensor)
		return tensor


net = MyModel()
net = net.to(dev)
# 5. define the loss function
loss_fun = F.cross_entropy
# 6. define the optimizer
optimizer = torch.optim.SGD(net.parameters(), lr=0.001)

epoch = 2
for i in tqdm(range(epoch)):
	total_loss = 0
	for img, label in train_dataloader:    # train on the training set, not the test set
		writer.add_images('imgs', img_tensor=img, dataformats='NCHW')    # log a sample of input images
		img, label = img.to(dev), label.to(dev)
		pred = net(img)    # pred is (64, 10)
		# the order of pred and label cannot be swapped
		loss_res = loss_fun(input=pred, target=label)    # label is (64,)
		optimizer.zero_grad()
		loss_res.backward()
		optimizer.step()
		total_loss = total_loss + loss_res.item()    # .item() avoids accumulating autograd history

	print('total loss is: ', total_loss)

writer.close()
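
The loop above only trains the model. A minimal evaluation sketch, assuming the net and test_dataloader defined above:

# measure accuracy on the test set with gradients disabled
net.eval()
correct = 0
with torch.no_grad():
	for img, label in test_dataloader:
		img, label = img.to(dev), label.to(dev)
		pred = net(img)
		correct += (pred.argmax(dim=1) == label).sum().item()
print('test accuracy: ', correct / len(test_data))
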

Miscellaneous

Two great learning tools for Python/PyTorch

Use these in the Python console.

dir(): shows what tools are available at each level of the hierarchy.

dir(torch)
['AVG', ..., 'cuda', 'Code', ..., 'DictType']
dir(torch.cuda)
['BFloat16Storage', ..., 'is_available', 'is_initialized', ...,'warnings']
dir(torch.cuda.is_available())
['__abs__', '__add__', '__and__', '__bool__', '__ceil__', ...]

help(): shows the detailed documentation of an object, i.e. the leaf level of the hierarchy. For example, help(torch.cuda.is_available()) prints:

Help on bool object:
class bool(int)
 |  bool(x) -> bool
 |  
 |  Returns True when the argument x is true, False otherwise.
 |  The builtins True and False are the only two instances of the class bool.
 |  The class bool is a subclass of the class int, and cannot be subclassed.
 |  
 |  Method resolution order:
 |      bool
 |      int
 |      object
 |  
 |  Methods defined here:
 |  
 |  __and__(self, value, /)
 |      Return self&value.
 |  
 |  __or__(self, value, /)
 |      Return self|value.
 ...
Using the GPU
dev = torch.device("cuda") if torch.cuda.is_available() else torch.device("cpu")
# run the model on dev
model = model.to(dev)

for data, label in testDataLoader:
    data, label = data.to(dev), label.to(dev)
    preds = model(data)
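
Note that for an nn.Module, .to(dev) moves the parameters in place (reassigning model is just a convention), whereas for a tensor, .to(dev) returns a new tensor, which is why data and label must be reassigned.
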
Visualization

SummaryWriter
import numpy as np
from PIL import Image
from torch.utils.tensorboard import SummaryWriter

writer = SummaryWriter("logs")
image_path = "data/train/digit0.jpg"
img_PIL = Image.open(image_path)
img_array = np.array(img_PIL)
# add one image
writer.add_image("train", img_array, 1, dataformats='HWC')
# add scalars: y = 2x
for i in range(100):
    writer.add_scalar("y=2x", 2*i, i)

writer.close()
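
To view the logs, run tensorboard --logdir=logs and open the printed URL in a browser.
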
tqdm

Wraps any iterable and displays a progress bar.

from tqdm import tqdm
for i in tqdm(range(10000)):
    ...
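
It works just as well around a DataLoader; a small sketch (desc labels the progress bar, and train_dataloader is assumed to exist):

for imgs, targets in tqdm(train_dataloader, desc='epoch 1'):
    ...    # one training step per batch
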
Building sequential networks faster: Sequential
from torch import nn
from torch.nn import Conv2d, MaxPool2d, Flatten, Linear, Sequential

class Tudui(nn.Module):
    def __init__(self):
        super(Tudui, self).__init__()
        # use Sequential to build a sequential network quickly
        self.model1 = Sequential(
            Conv2d(3, 32, 5, padding=2),
            MaxPool2d(2),
            Conv2d(32, 32, 5, padding=2),
            MaxPool2d(2),
            Conv2d(32, 64, 5, padding=2),
            MaxPool2d(2),
            Flatten(),
            Linear(1024, 64),
            Linear(64, 10)
        )

    def forward(self, x):
        return self.model1(x)
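
A quick sanity check: for a 32x32 RGB input (e.g. CIFAR10), the three MaxPool2d(2) layers shrink 32 -> 16 -> 8 -> 4, so Flatten yields 64*4*4 = 1024 features, matching Linear(1024, 64). A sketch:

import torch
tudui = Tudui()
x = torch.ones((64, 3, 32, 32))    # a dummy batch of CIFAR10-sized images
print(tudui(x).shape)    # torch.Size([64, 10])
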

To be continued.
