Reference: https://blog.csdn.net/u012428169/article/details/114702453
Let's look at parameter counting through a small network model:

```python
import torch
import torch.nn as nn

def get_n_params(model):
    # Sum the element counts of all parameter tensors in the model.
    total = 0
    for p in model.parameters():
        n = 1
        for s in p.size():
            n *= s
        total += n
    return total

class test_module(nn.Module):
    def __init__(self):
        super(test_module, self).__init__()
        hidden1 = nn.Sequential(
            nn.Conv2d(in_channels=3, out_channels=20,
                      kernel_size=5, padding=2, stride=2),
            nn.BatchNorm2d(num_features=20),
            nn.ReLU(),
            nn.MaxPool2d(kernel_size=2, stride=2, padding=1),
            nn.Dropout(0.2)
        )
        # hidden2 = nn.Sequential(
        #     nn.Conv2d(in_channels=20, out_channels=24,
        #               kernel_size=5, padding=2),
        #     nn.BatchNorm2d(num_features=24),
        #     nn.ReLU(),
        #     nn.MaxPool2d(kernel_size=2, stride=1, padding=1),
        #     nn.Dropout(0.2)
        # )
        self.features = nn.Sequential(
            hidden1,
            # hidden2,
        )

    def forward(self, x):
        return self.features(x)

model_conv = test_module()
print(model_conv)
print(get_n_params(model_conv))
```

Consider first the case where only the hidden layer hidden1 is present.
The nn.Conv2d layer alone uses 1520 parameters: 5*5*3*20 + 20 = 1520, where 5*5 is the kernel_size (or filter size), 3 and 20 are the input and output channel counts, and the extra 20 comes from one bias per output channel.
nn.BatchNorm2d uses 2*20 parameters: a learnable scale gamma and shift beta, one of each per channel.
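As a quick sanity check (a minimal sketch reusing the same layer shapes as above), we can ask PyTorch directly for each layer's parameter count:

```python
import torch.nn as nn

# First conv layer from the model above: 3 -> 20 channels, 5x5 kernel.
conv = nn.Conv2d(in_channels=3, out_channels=20,
                 kernel_size=5, padding=2, stride=2)
conv_params = sum(p.numel() for p in conv.parameters())
print(conv_params)  # 5*5*3*20 + 20 = 1520

# Batch norm over 20 channels: one gamma and one beta per channel.
bn = nn.BatchNorm2d(num_features=20)
bn_params = sum(p.numel() for p in bn.parameters())
print(bn_params)  # 2*20 = 40
```

Note that padding and stride do not affect the parameter count; only the kernel size and the channel counts do. Batch norm's running mean and variance are buffers, not learnable parameters, so they are not counted by `parameters()`.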
With hidden2 uncommented, the total parameter count is 13632. The first layer accounts for 1560 (1520 for the conv plus 40 for the batch norm), so the second layer accounts for 12072 = 5*5*20*24 + 24 + 2*24.
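That total can be verified by stacking the two blocks (a sketch mirroring hidden1 and hidden2 above; ReLU, MaxPool2d, and Dropout contribute no parameters):

```python
import torch.nn as nn

hidden1 = nn.Sequential(
    nn.Conv2d(3, 20, kernel_size=5, padding=2, stride=2),  # 1520
    nn.BatchNorm2d(20),                                    # 40
    nn.ReLU(),
    nn.MaxPool2d(kernel_size=2, stride=2, padding=1),
    nn.Dropout(0.2),
)
hidden2 = nn.Sequential(
    nn.Conv2d(20, 24, kernel_size=5, padding=2),           # 12024
    nn.BatchNorm2d(24),                                    # 48
    nn.ReLU(),
    nn.MaxPool2d(kernel_size=2, stride=1, padding=1),
    nn.Dropout(0.2),
)
model = nn.Sequential(hidden1, hidden2)
total = sum(p.numel() for p in model.parameters())
print(total)  # 1560 + 12072 = 13632
```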
In general, the parameter count of nn.Conv2d (for a square kernel) is kernel_size^2 * in_channels * out_channels + out_channels, where the final out_channels term accounts for one bias per output channel.
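The formula can be wrapped in a small helper and compared against PyTorch for a few arbitrary shapes (the helper name `conv2d_n_params` is my own, not part of any library):

```python
import torch.nn as nn

def conv2d_n_params(in_channels, out_channels, kernel_size):
    # Square kernel, with bias: k^2 * C_in * C_out weights + C_out biases.
    return kernel_size ** 2 * in_channels * out_channels + out_channels

for c_in, c_out, k in [(3, 20, 5), (20, 24, 5), (16, 32, 3)]:
    layer = nn.Conv2d(c_in, c_out, kernel_size=k)
    actual = sum(p.numel() for p in layer.parameters())
    assert actual == conv2d_n_params(c_in, c_out, k)
    print(c_in, c_out, k, actual)
```

If the layer is built with `bias=False`, the `+ out_channels` term disappears.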