I think the suggested trick is actually incorrect. What happens with tf.nn.conv3d() is that the input gets convolved over the depth dimension (which here is the actual batch dimension), and the resulting feature maps are then summed. With padding='SAME', the number of resulting outputs happens to coincide with the batch size, so one gets fooled!
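A minimal sketch of that failure mode (all shapes and values below are made-up toy numbers, not from the original answer): fold the batch into the conv3d depth axis and observe that the output depth only coincidentally equals MB, while each depth slice mixes several mini-batch elements.

import numpy as np
import tensorflow as tf

MB, H, W, C = 4, 5, 5, 1
x = np.random.rand(1, MB, H, W, C).astype(np.float32)  # batch folded into the depth axis
f = np.random.rand(MB, 3, 3, C, 1).astype(np.float32)  # filter_depth == MB

y = tf.nn.conv3d(tf.constant(x), tf.constant(f),
                 strides=[1, 1, 1, 1, 1], padding='SAME')
# y has shape (1, MB, H, W, 1): the depth size equals MB only because of
# padding='SAME'; each depth slice is a sum over a window of batch
# elements, not a per-example convolution.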
EDIT: I think one possible way to convolve each mini-batch element with its own filter is to "hack" a depthwise convolution. Assuming the batch size MB is known:
import tensorflow as tf

inp = tf.placeholder(tf.float32, [MB, H, W, channels_img])

# F has shape (MB, fh, fw, channels, out_channels)
# REM: with the notation in the question, we need: channels_img == channels
F = tf.transpose(F, [1, 2, 0, 3, 4])     # (fh, fw, MB, channels, out_channels)
F = tf.reshape(F, [fh, fw, channels * MB, out_channels])

inp_r = tf.transpose(inp, [1, 2, 0, 3])  # shape (H, W, MB, channels_img)
inp_r = tf.reshape(inp_r, [1, H, W, MB * channels_img])

out = tf.nn.depthwise_conv2d(
    inp_r,
    filter=F,
    strides=[1, 1, 1, 1],
    padding='VALID')  # no requirement that padding be 'VALID'; use whatever you want

# Now out has shape (1, H, W, MB*channels*out_channels)

out = tf.reshape(out, [H, W, MB, channels, out_channels])  # careful about the order of depthwise conv out_channels!
out = tf.transpose(out, [2, 0, 1, 3, 4])
out = tf.reduce_sum(out, axis=3)

# out now has shape (MB, H, W, out_channels)
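As an end-to-end sanity check, here is a hedged sketch (toy sizes of my choosing; padding='SAME' in both branches so the spatial dimensions survive the reshape; TF 1.x graph mode assumed, as in the snippet above) comparing the depthwise trick against a plain per-example tf.nn.conv2d loop:

import numpy as np
import tensorflow as tf

MB, H, W, channels, out_channels, fh, fw = 2, 5, 5, 3, 4, 3, 3
x0 = np.random.rand(MB, H, W, channels).astype(np.float32)
F0 = np.random.rand(MB, fh, fw, channels, out_channels).astype(np.float32)

# depthwise trick, as above
F = tf.reshape(tf.transpose(tf.constant(F0), [1, 2, 0, 3, 4]),
               [fh, fw, channels * MB, out_channels])
x = tf.reshape(tf.transpose(tf.constant(x0), [1, 2, 0, 3]),
               [1, H, W, MB * channels])
out = tf.nn.depthwise_conv2d(x, F, strides=[1, 1, 1, 1], padding='SAME')
out = tf.transpose(tf.reshape(out, [H, W, MB, channels, out_channels]),
                   [2, 0, 1, 3, 4])
out = tf.reduce_sum(out, axis=3)

# reference: one ordinary conv2d per mini-batch element
ref = tf.concat([tf.nn.conv2d(tf.constant(x0[b:b + 1]), tf.constant(F0[b]),
                              strides=[1, 1, 1, 1], padding='SAME')
                 for b in range(MB)], axis=0)

with tf.Session() as sess:
    o, r = sess.run([out, ref])
print(np.allclose(o, r, atol=1e-5))  # expected: True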
If MB is unknown, it should be possible to determine it dynamically using tf.shape() (I think).
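For instance, a minimal sketch (hypothetical; it just reuses the names from the snippet above) of how a dynamic batch size could feed the reshapes, relying on the fact that tf.reshape accepts shape lists mixing Python ints with scalar tensors:

inp = tf.placeholder(tf.float32, [None, H, W, channels_img])
MB = tf.shape(inp)[0]  # int32 scalar tensor, known only at run time

inp_r = tf.transpose(inp, [1, 2, 0, 3])
inp_r = tf.reshape(inp_r, [1, H, W, MB * channels_img])
F = tf.reshape(tf.transpose(F, [1, 2, 0, 3, 4]),
               [fh, fw, channels * MB, out_channels])

Whether tf.nn.depthwise_conv2d then accepts a filter whose channel dimension is only known at run time may depend on the TensorFlow version; this part is untested, as the "(I think)" above suggests.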