[PyTorch] Implementing the Inception v2 module (GoogLeNet) by hand from the network structure diagram


The structure of the Inception v2 module is shown in the following diagram.

The PyTorch code is as follows:

# Inception v2 module (GoogLeNet), written by hand from the structure diagram.
import torch
from torch import nn

# Conv -> BatchNorm -> ReLU: the basic building block used by every branch.
cnnWithReluAndBn=lambda indim,outdim,ksize,padding:\
nn.Sequential(nn.Conv2d(indim,outdim,kernel_size=ksize,padding=padding),nn.BatchNorm2d(outdim),nn.ReLU(True))
# Boilerplate like __init__/forward can be generated with PyCharm's Code > Generate feature.
class myCustomerInCeptionV1(nn.Module):
    def __init__(self):
        super().__init__()
        # Let the input feature-map size be s. All four branches keep the spatial
        # size at s, so their outputs can be concatenated along the channel dim.
        # 1x1 conv: s - 1 + 1 = s
        self.branch1=cnnWithReluAndBn(192,96,1,0)
        # 1x1 conv: s - 1 + 1 = s; 3x3 conv with padding 1: s + 2 - 3 + 1 = s
        self.branch2 =nn.Sequential(cnnWithReluAndBn(192, 48, 1, 0),cnnWithReluAndBn(48, 64, 3, 1))
        # two stacked 3x3 convs with padding 1 also keep the size at s
        self.branch3=nn.Sequential(cnnWithReluAndBn(192, 64, 1, 0),
                                   cnnWithReluAndBn(64, 96, 3, 1),cnnWithReluAndBn(96, 96, 3, 1))
        # count_include_pad=False excludes the zero padding from the average.
        # With stride 1 the pooled output size, computed like a convolution, is
        # s - 3 + 2 + 1 = s (by default a pooling layer's stride equals its kernel size).
        self.branch4=nn.Sequential(nn.AvgPool2d(3,stride=1,padding=1,count_include_pad=False),
                                   cnnWithReluAndBn(192, 64, 1, 0))
    def forward(self,x):
        x1 = self.branch1(x)
        x2 = self.branch2(x)
        x3 = self.branch3(x)
        x4 = self.branch4(x)
        return torch.cat([x1,x2,x3,x4],dim=1)
myNet=myCustomerInCeptionV1()
print(myNet)
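As a quick sanity check on the size calculations in the comments, we can feed a dummy batch through the module and confirm that the spatial size is preserved and the channel counts of the four branches add up. This is a minimal sketch; the helper `block` and the class name `InceptionV2Block` are local names chosen here (the module is re-declared compactly so the snippet runs on its own), and the 32×32 input size is an arbitrary choice:

```python
import torch
from torch import nn

# Conv -> BN -> ReLU helper, mirroring the cnnWithReluAndBn lambda above.
def block(i, o, k, p):
    return nn.Sequential(nn.Conv2d(i, o, k, padding=p),
                         nn.BatchNorm2d(o), nn.ReLU(True))

class InceptionV2Block(nn.Module):
    def __init__(self):
        super().__init__()
        self.branch1 = block(192, 96, 1, 0)
        self.branch2 = nn.Sequential(block(192, 48, 1, 0), block(48, 64, 3, 1))
        self.branch3 = nn.Sequential(block(192, 64, 1, 0),
                                     block(64, 96, 3, 1), block(96, 96, 3, 1))
        self.branch4 = nn.Sequential(nn.AvgPool2d(3, stride=1, padding=1),
                                     block(192, 64, 1, 0))

    def forward(self, x):
        branches = (self.branch1, self.branch2, self.branch3, self.branch4)
        return torch.cat([b(x) for b in branches], dim=1)

net = InceptionV2Block().eval()          # eval() so BatchNorm uses running stats
with torch.no_grad():
    out = net(torch.randn(1, 192, 32, 32))  # dummy batch: N=1, C=192, 32x32
print(out.shape)  # torch.Size([1, 320, 32, 32]); 96 + 64 + 96 + 64 = 320 channels
```

Since every branch preserves the spatial size, any input height and width would give the same result; only the channel dimension grows, from 192 to 320.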

The printed network structure is:

myCustomerInCeptionV1(
  (branch1): Sequential(
    (0): Conv2d(192, 96, kernel_size=(1, 1), stride=(1, 1))
    (1): BatchNorm2d(96, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
    (2): ReLU(inplace=True)
  )
  (branch2): Sequential(
    (0): Sequential(
      (0): Conv2d(192, 48, kernel_size=(1, 1), stride=(1, 1))
      (1): BatchNorm2d(48, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
      (2): ReLU(inplace=True)
    )
    (1): Sequential(
      (0): Conv2d(48, 64, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
      (1): BatchNorm2d(64, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
      (2): ReLU(inplace=True)
    )
  )
  (branch3): Sequential(
    (0): Sequential(
      (0): Conv2d(192, 64, kernel_size=(1, 1), stride=(1, 1))
      (1): BatchNorm2d(64, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
      (2): ReLU(inplace=True)
    )
    (1): Sequential(
      (0): Conv2d(64, 96, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
      (1): BatchNorm2d(96, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
      (2): ReLU(inplace=True)
    )
    (2): Sequential(
      (0): Conv2d(96, 96, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
      (1): BatchNorm2d(96, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
      (2): ReLU(inplace=True)
    )
  )
  (branch4): Sequential(
    (0): AvgPool2d(kernel_size=3, stride=1, padding=1)
    (1): Sequential(
      (0): Conv2d(192, 64, kernel_size=(1, 1), stride=(1, 1))
      (1): BatchNorm2d(64, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
      (2): ReLU(inplace=True)
    )
  )
)

By default this example takes a 192-channel input and produces a 320-channel output (96 + 64 + 96 + 64). The output feature map of this Inception module has the same spatial size as the input, and the module uses fewer parameters than a comparable VGGNet block.
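The parameter saving can be made concrete. The two stacked 3×3 convolutions in branch3 cover the same 5×5 receptive field as a single 5×5 convolution (the factorization idea behind Inception v2) while using fewer weights. A small sketch, reusing the 64 → 96 channel counts from branch3 above; the 5×5 layer is a hypothetical baseline for comparison, not part of the module:

```python
from torch import nn

def n_params(m):
    # total number of learnable parameters (weights + biases)
    return sum(p.numel() for p in m.parameters())

# Two stacked 3x3 convs (as in branch3) vs. one 5x5 conv with the same
# 64 -> 96 channel mapping and the same effective 5x5 receptive field.
two_3x3 = nn.Sequential(nn.Conv2d(64, 96, 3, padding=1),
                        nn.Conv2d(96, 96, 3, padding=1))
one_5x5 = nn.Conv2d(64, 96, 5, padding=2)

print(n_params(two_3x3), n_params(one_5x5))  # 138432 vs 153696
```

The stacked 3×3 pair also inserts an extra non-linearity between the two convolutions, so the saving in parameters comes with more representational power rather than less.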


Feel free to share; when reposting, please credit the source: 内存溢出

Original article: http://outofmemory.cn/langs/571655.html
