Adding dropout layers between pretrained dense layers in Keras



I found the answer myself using the Keras functional API:

from keras.applications import VGG16
from keras.layers import Dropout
from keras.models import Model

model = VGG16(weights='imagenet')

# Store the fully connected layers
fc1 = model.layers[-3]
fc2 = model.layers[-2]
predictions = model.layers[-1]

# Create the dropout layers
dropout1 = Dropout(0.85)
dropout2 = Dropout(0.85)

# Reconnect the layers
x = dropout1(fc1.output)
x = fc2(x)
x = dropout2(x)
predictors = predictions(x)

# Create a new model
model2 = Model(input=model.input, output=predictors)
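A quick sanity check is to list the last few layers of the new graph; the exact ordering can vary, but the two Dropout layers should now sit between fc1, fc2 and predictions:

# The rewired head should end in fc1 -> dropout_1 -> fc2 -> dropout_2 -> predictions.
print([layer.name for layer in model2.layers[-5:]])
# expected something like: ['fc1', 'dropout_1', 'fc2', 'dropout_2', 'predictions']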

model2 now has the dropout layers I want:

____________________________________________________________________________________________________
Layer (type)                     Output Shape          Param #     Connected to
====================================================================================================
input_1 (InputLayer)             (None, 3, 224, 224)   0
____________________________________________________________________________________________________
block1_conv1 (Convolution2D)     (None, 64, 224, 224)  1792        input_1[0][0]
____________________________________________________________________________________________________
block1_conv2 (Convolution2D)     (None, 64, 224, 224)  36928       block1_conv1[0][0]
____________________________________________________________________________________________________
block1_pool (MaxPooling2D)       (None, 64, 112, 112)  0           block1_conv2[0][0]
____________________________________________________________________________________________________
block2_conv1 (Convolution2D)     (None, 128, 112, 112) 73856       block1_pool[0][0]
____________________________________________________________________________________________________
block2_conv2 (Convolution2D)     (None, 128, 112, 112) 147584      block2_conv1[0][0]
____________________________________________________________________________________________________
block2_pool (MaxPooling2D)       (None, 128, 56, 56)   0           block2_conv2[0][0]
____________________________________________________________________________________________________
block3_conv1 (Convolution2D)     (None, 256, 56, 56)   295168      block2_pool[0][0]
____________________________________________________________________________________________________
block3_conv2 (Convolution2D)     (None, 256, 56, 56)   590080      block3_conv1[0][0]
____________________________________________________________________________________________________
block3_conv3 (Convolution2D)     (None, 256, 56, 56)   590080      block3_conv2[0][0]
____________________________________________________________________________________________________
block3_pool (MaxPooling2D)       (None, 256, 28, 28)   0           block3_conv3[0][0]
____________________________________________________________________________________________________
block4_conv1 (Convolution2D)     (None, 512, 28, 28)   1180160     block3_pool[0][0]
____________________________________________________________________________________________________
block4_conv2 (Convolution2D)     (None, 512, 28, 28)   2359808     block4_conv1[0][0]
____________________________________________________________________________________________________
block4_conv3 (Convolution2D)     (None, 512, 28, 28)   2359808     block4_conv2[0][0]
____________________________________________________________________________________________________
block4_pool (MaxPooling2D)       (None, 512, 14, 14)   0           block4_conv3[0][0]
____________________________________________________________________________________________________
block5_conv1 (Convolution2D)     (None, 512, 14, 14)   2359808     block4_pool[0][0]
____________________________________________________________________________________________________
block5_conv2 (Convolution2D)     (None, 512, 14, 14)   2359808     block5_conv1[0][0]
____________________________________________________________________________________________________
block5_conv3 (Convolution2D)     (None, 512, 14, 14)   2359808     block5_conv2[0][0]
____________________________________________________________________________________________________
block5_pool (MaxPooling2D)       (None, 512, 7, 7)     0           block5_conv3[0][0]
____________________________________________________________________________________________________
flatten (Flatten)                (None, 25088)         0           block5_pool[0][0]
____________________________________________________________________________________________________
fc1 (Dense)                      (None, 4096)          102764544   flatten[0][0]
____________________________________________________________________________________________________
dropout_1 (Dropout)              (None, 4096)          0           fc1[0][0]
____________________________________________________________________________________________________
fc2 (Dense)                      (None, 4096)          16781312    dropout_1[0][0]
____________________________________________________________________________________________________
dropout_2 (Dropout)              (None, 4096)          0           fc2[1][0]
____________________________________________________________________________________________________
predictions (Dense)              (None, 1000)          4097000     dropout_2[0][0]
====================================================================================================
Total params: 138,357,544
Trainable params: 138,357,544
Non-trainable params: 0
____________________________________________________________________________________________________
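For anyone on a newer Keras, the same rewiring can be expressed with the tf.keras functional API. The sketch below is an adaptation rather than the code from the answer above: it assumes a recent TensorFlow install (channels-last data format, so the input shape is (224, 224, 3)), looks the dense layers up by their stock names fc1 / fc2 / predictions, and uses the inputs= / outputs= keywords instead of the old input= / output=.

# Minimal sketch, assuming tensorflow.keras (TF 2.x) rather than the
# standalone Keras 1.x used in the answer above.
from tensorflow.keras.applications import VGG16
from tensorflow.keras.layers import Dropout
from tensorflow.keras.models import Model

model = VGG16(weights='imagenet')

# The stock VGG16 top names its dense layers 'fc1', 'fc2' and 'predictions'.
fc1 = model.get_layer('fc1')
fc2 = model.get_layer('fc2')
predictions = model.get_layer('predictions')

# Rebuild the head with dropout in between, reusing the pretrained weights.
x = Dropout(0.85)(fc1.output)
x = fc2(x)
x = Dropout(0.85)(x)
outputs = predictions(x)

model2 = Model(inputs=model.input, outputs=outputs)
model2.summary()

As in the original code, the Dropout layers only act during training; at inference time they pass the activations through unchanged.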

