How do I use batch normalization in TensorFlow?


Update, July 2016: The easiest way to use batch normalization in TensorFlow is through the higher-level interfaces provided in contrib/layers, tflearn, or slim.
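As a minimal sketch of the contrib/layers route (the input shape, decay value, and placeholder names here are illustrative assumptions, not recommendations from the answer):

import tensorflow as tf

# Assumed NHWC conv activation: batch x height x width x channels.
x = tf.placeholder(tf.float32, shape=[None, 28, 28, 64])
is_training = tf.placeholder(tf.bool)

# One call creates the beta/gamma variables and the moving averages.
y = tf.contrib.layers.batch_norm(x, decay=0.99, center=True, scale=True,
                                 is_training=is_training)

# By default the moving-average updates are placed in
# tf.GraphKeys.UPDATE_OPS, so they must be run with the train step.
update_ops = tf.get_collection(tf.GraphKeys.UPDATE_OPS)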

Previous answer, if you want to roll your own: the documentation string for this has improved since the release. See the docs comment in the master branch rather than the one you found. It clarifies, in particular, that the mean and variance it takes are the output of

tf.nn.moments
You can see a very simple example of its use in the batch_norm test code. For a more real-world use case, I've included below the helper class and usage notes that I wrote for my own use (no warranty provided!):

"""A helper class for managing batch normalization state.This class is designed to simplify adding batch normalization    (http://arxiv.org/pdf/1502.03167v3.pdf) to your model by         managing the state variables associated with it.important use note:  The function get_assigner() returns         an op that must be executed to save the updated state.A suggested way to do this is to make execution of themodel optimizer force it, e.g., by:  update_assignments = tf.group(bn1.get_assigner(),  bn2.get_assigner())     with tf.control_dependencies([optimizer]):   optimizer = tf.group(update_assignments)"""import tensorflow as tfclass ConvolutionalBatchNormalizer(object):  """Helper class that groups the normalization logic and variables.  Use:          ewma = tf.train.ExponentialMovingAverage(decay=0.99)  bn = ConvolutionalBatchNormalizer(depth, 0.001, ewma, True)      update_assignments = bn.get_assigner()     x = bn.normalize(y, train=training?)       (the output x will be batch-normalized).          """  def __init__(self, depth, epsilon, ewma_trainer, scale_after_norm):    self.mean = tf.Variable(tf.constant(0.0, shape=[depth]),      trainable=False)    self.variance = tf.Variable(tf.constant(1.0, shape=[depth]),          trainable=False)    self.beta = tf.Variable(tf.constant(0.0, shape=[depth]))    self.gamma = tf.Variable(tf.constant(1.0, shape=[depth]))    self.ewma_trainer = ewma_trainer    self.epsilon = epsilon    self.scale_after_norm = scale_after_norm  def get_assigner(self):    """Returns an EWMA apply op that must be invoked after optimization."""    return self.ewma_trainer.apply([self.mean, self.variance])  def normalize(self, x, train=True):    """Returns a batch-normalized version of x."""    if train:      mean, variance = tf.nn.moments(x, [0, 1, 2])      assign_mean = self.mean.assign(mean)      assign_variance = self.variance.assign(variance)      with tf.control_dependencies([assign_mean, assign_variance]):        return tf.nn.batch_norm_with_global_normalization( x, mean, variance, self.beta, self.gamma, self.epsilon, self.scale_after_norm)    else:      mean = self.ewma_trainer.average(self.mean)      variance = self.ewma_trainer.average(self.variance)      local_beta = tf.identity(self.beta)      local_gamma = tf.identity(self.gamma)      return tf.nn.batch_norm_with_global_normalization(          x, mean, variance, local_beta, local_gamma,          self.epsilon, self.scale_after_norm)

Note that I called it a

ConvolutionalBatchNormalizer

because it pins the use of

tf.nn.moments

to sum across axes 0, 1, and 2, whereas for non-convolutional use you might only want axis 0.
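For the non-convolutional case, the moments call would reduce over the batch axis only; a minimal sketch (the layer width of 256 is an assumption):

import tensorflow as tf

# A fully-connected activation: batch x features.
h = tf.placeholder(tf.float32, shape=[None, 256])

# Only axis 0 (the batch) is reduced, giving per-feature statistics.
mean, variance = tf.nn.moments(h, [0])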

Feedback appreciated if you use it.


