Converting a Custom Loss Function from a NumPy Function to a TensorFlow Function


  1. np.where => tf.where: a direct replacement
  2. np.sum => K.sum
  3. 0.0 => tf.zeros_like(u)
  4. x_in => K.cast(x, "float32"): cast the input data to the required dtype first
  5. x_out => x.eval(session=tf.Session()): evaluate a tensor back into a NumPy value
  6. s => s*tf.ones_like(u): in TF a scalar constant must be expanded into a tensor of the same shape
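
A minimal sketch of rules 1 and 3 to 6 in isolation, assuming TensorFlow 1.x with the session-based API (the toy array and scalar below are illustrative only, not from the original workflow):

import numpy as np
import tensorflow as tf
from tensorflow.keras import backend as K

x_np = np.array([1.0, -2.0, 3.0])            # plain NumPy input
s = 0.5                                      # plain Python scalar

x = K.cast(x_np, "float32")                  # rule 4: cast the input to a float32 tensor
s_t = s * tf.ones_like(x)                    # rule 6: expand the scalar to a same-shaped tensor
out = tf.where(x >= 0, x, tf.zeros_like(x))  # rules 1 and 3: tf.where with tf.zeros_like

sess = tf.Session()                          # rule 5: a session is needed to read values back
print(out.eval(session=sess))                # -> [1. 0. 3.]
print(s_t.eval(session=sess))                # -> [0.5 0.5 0.5]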

The original computation before conversion (NumPy)

import numpy as np

def pinball_loss_trucated_np(y_true, y_pred, quantile, s=0.5):
    # Truncated pinball loss: penalty u when u = y_true - y_pred is positive,
    # -u*quantile for small negative u, capped at s*quantile once u <= -s.
    u = y_true - y_pred
    pl_1 = np.where(u>=0,           u,           0.0)
    pl_2 = np.where(u<=-s,          s*quantile,  0.0)
    pl_3 = np.where((u>-s) & (u<0), -u*quantile, 0.0)
    return np.sum(np.mean(pl_1+pl_2+pl_3, axis=0))

def pinball_loss_huberized_np(y_true, y_pred, quantile, sigma=0.5):
    # Huberized pinball loss: piecewise quadratic smoothing of width sigma around u = 1.
    u = y_true - y_pred
    pl_1 = np.where(u<1-sigma,            1-u-sigma/2,               0.0)
    pl_2 = np.where((u>=1-sigma) & (u<1), (1-u)**2/sigma/2,          0.0)
    pl_3 = np.where((u>=1) & (u<1+sigma), (1-u)**2*quantile/2/sigma, 0.0)
    pl_4 = np.where(u>=1+sigma,           -quantile*(1-u+sigma/2),   0.0)
    return np.sum(np.mean(pl_1+pl_2+pl_3+pl_4, axis=0))
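
A quick sanity check of the NumPy versions (the toy arrays and the quantile level 0.9 are illustrative assumptions, not values from the original workflow):

y_true = np.array([[1.0], [2.0], [3.0]])
y_pred = np.array([[0.8], [2.5], [2.9]])

print(pinball_loss_trucated_np(y_true, y_pred, quantile=0.9))
print(pinball_loss_huberized_np(y_true, y_pred, quantile=0.9))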

After conversion (TensorFlow)

import tensorflow as tf
from tensorflow.keras import backend as K

def pinball_loss_trucated_tf(y_true, y_pred, quantile, s=0.1):
    # Rule 4: cast every input, including the scalars, to float32 tensors first.
    y_true = K.cast(y_true, "float32")
    y_pred = K.cast(y_pred, "float32")
    quantile = K.cast(quantile, "float32")
    s = K.cast(s, "float32")
    u = y_true - y_pred
    # Rules 1 and 3: np.where => tf.where, 0.0 => tf.zeros_like(u).
    # Rule 6: the scalar branch s*quantile is expanded with tf.ones_like(u).
    pl_1 = tf.where(u>=0,           u,                           tf.zeros_like(u))
    pl_2 = tf.where(u<=-s,          s*quantile*tf.ones_like(u),  tf.zeros_like(u))
    pl_3 = tf.where((u>-s) & (u<0), -u*quantile,                 tf.zeros_like(u))
    # Rule 2: np.sum/np.mean => K.sum/K.mean.
    return K.sum(K.mean(pl_1+pl_2+pl_3, axis=0))

sess = tf.Session()  # rule 5: a TF1 session is needed to evaluate the resulting loss tensor
plt = pinball_loss_trucated_tf(y_test, y_pred, quantiles).eval(session=sess)

def pinball_loss_huberized_tf(y_true, y_pred, quantile, sigma=0.5):
    # Same conversion rules as above; quantile and sigma are cast as well (rule 4),
    # matching the truncated version and avoiding dtype mismatches.
    y_true = K.cast(y_true, "float32")
    y_pred = K.cast(y_pred, "float32")
    quantile = K.cast(quantile, "float32")
    sigma = K.cast(sigma, "float32")
    u = y_true - y_pred
    pl_1 = tf.where(u<1-sigma,            1-u-sigma/2,               tf.zeros_like(u))
    pl_2 = tf.where((u>=1-sigma) & (u<1), (1-u)**2/sigma/2,          tf.zeros_like(u))
    pl_3 = tf.where((u>=1) & (u<1+sigma), (1-u)**2*quantile/2/sigma, tf.zeros_like(u))
    pl_4 = tf.where(u>=1+sigma,           -quantile*(1-u+sigma/2),   tf.zeros_like(u))
    return K.sum(K.mean(pl_1+pl_2+pl_3+pl_4, axis=0))
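
The converted huberized loss can be checked the same way, reusing the session from the evaluation above and comparing against the NumPy version (a hedged sketch; y_test, y_pred and quantiles are assumed to be the NumPy arrays from the surrounding workflow):

plh_tf = pinball_loss_huberized_tf(y_test, y_pred, quantiles).eval(session=sess)
plh_np = pinball_loss_huberized_np(y_test, y_pred, quantiles)
print(plh_tf, plh_np)  # the two values should agree up to float32 rounding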

Passing the loss function to the model: model.compile(loss = [lambda y_true, y_pred: self.loss(y_true, y_pred, self.quantiles)], optimizer = "adam")
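
An end-to-end sketch of wiring one of the converted losses into a Keras model (the architecture, the input shape and the quantile level below are illustrative assumptions, not the original model):

from tensorflow import keras

quantile_level = 0.9  # illustrative quantile level

model = keras.Sequential([
    keras.layers.Dense(32, activation="relu", input_shape=(10,)),
    keras.layers.Dense(1),
])

# The lambda binds the extra quantile argument, as in the compile call above.
model.compile(
    loss=lambda y_true, y_pred: pinball_loss_trucated_tf(y_true, y_pred, quantile_level),
    optimizer="adam",
)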
