Replacing TensorFlow placeholders in TF 2.0

Make your code work with TF 2.0

Below is sample code that works with TF 2.0. It relies on the compatibility API, accessible as `tensorflow.compat.v1`, and requires disabling v2 behavior. I don't know whether it behaves as you expect; if not, please give us more details about what you are trying to achieve.

```python
import tensorflow.compat.v1 as tf

tf.disable_v2_behavior()


@tf.function
def construct_graph(graph_dict, inputs, outputs):
    queue = inputs[:]
    make_dict = {}
    for key, val in graph_dict.items():
        if key in inputs:
            make_dict[key] = tf.placeholder(tf.float32, name=key)
        else:
            make_dict[key] = None
    # Breadth-first search of the graph starting from the inputs
    while len(queue) != 0:
        cur = graph_dict[queue[0]]
        for outg in cur["outgoing"]:
            if make_dict[outg[0]] is not None:
                # Discovered node: accumulate with add/multiply
                make_dict[outg[0]] = tf.add(
                    make_dict[outg[0]],
                    tf.multiply(outg[1], make_dict[queue[0]]),
                )
            else:
                # Undiscovered node: the input just comes in multiplied;
                # enqueue its outgoing neighbours
                make_dict[outg[0]] = tf.multiply(make_dict[queue[0]], outg[1])
                for outgo in graph_dict[outg[0]]["outgoing"]:
                    queue.append(outgo[0])
        queue.pop(0)
    # Return one data graph for each output
    return [make_dict[x] for x in outputs]


def main():
    graph_def = {
        "B": {"incoming": [], "outgoing": [("A", 1.0)]},
        "C": {"incoming": [], "outgoing": [("A", 1.0)]},
        "A": {"incoming": [("B", 2.0), ("C", -1.0)], "outgoing": [("D", 3.0)]},
        "D": {"incoming": [("A", 2.0)], "outgoing": []},
    }
    outputs = construct_graph(graph_def, ["B", "C"], ["A"])
    print(outputs)


if __name__ == "__main__":
    main()
```

Running it prints:

```
[<tf.Tensor 'PartitionedCall:0' shape=<unknown> dtype=float32>]
```
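The traversal itself is plain breadth-first value propagation and does not depend on TensorFlow at all. As a sanity check on what the graph computes, here is a hypothetical pure-Python evaluator (`eval_graph` is not part of the original answer; it mirrors the same queue logic with floats instead of tensors):

```python
def eval_graph(graph_dict, input_values, outputs):
    """Propagate input values through the weighted graph, breadth-first.

    graph_dict maps node -> {"incoming": [...], "outgoing": [(target, weight), ...]};
    input_values maps input node names to floats.
    """
    queue = list(input_values)
    values = dict(input_values)
    while queue:
        node = queue.pop(0)
        for target, weight in graph_dict[node]["outgoing"]:
            if target in values:
                # Already-discovered node: accumulate the weighted contribution
                values[target] += weight * values[node]
            else:
                # Newly discovered node: initialise it and enqueue its successors
                values[target] = weight * values[node]
                queue.extend(t for t, _ in graph_dict[target]["outgoing"])
    return [values[x] for x in outputs]


graph_def = {
    "B": {"incoming": [], "outgoing": [("A", 1.0)]},
    "C": {"incoming": [], "outgoing": [("A", 1.0)]},
    "A": {"incoming": [("B", 2.0), ("C", -1.0)], "outgoing": [("D", 3.0)]},
    "D": {"incoming": [("A", 2.0)], "outgoing": []},
}
print(eval_graph(graph_def, {"B": 1.0, "C": 1.0}, ["A"]))  # [2.0]
```

With B = C = 1.0, node A receives 1.0·B + 1.0·C = 2.0, which matches the tensor value the TF graph produces below.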
Migrating your code to TF 2.0

While the snippet above is valid, it is still tied to TF 1.x. To migrate it to TF 2.0, you have to refactor your code a little.

Instead of returning a list of tensors, which were callable through a session in TF 1.x, I advise you to return a list of `keras.Model` instances.

Below is a working example:

```python
import tensorflow as tf


def construct_graph(graph_dict, inputs, outputs):
    queue = inputs[:]
    make_dict = {}
    for key, val in graph_dict.items():
        if key in inputs:
            # Use keras.Input instead of placeholders
            make_dict[key] = tf.keras.Input(name=key, shape=(), dtype=tf.dtypes.float32)
        else:
            make_dict[key] = None
    # Breadth-first search of the graph starting from the inputs
    while len(queue) != 0:
        cur = graph_dict[queue[0]]
        for outg in cur["outgoing"]:
            if make_dict[outg[0]] is not None:
                # Discovered node: accumulate with add/multiply
                make_dict[outg[0]] = tf.keras.layers.add([
                    make_dict[outg[0]],
                    tf.keras.layers.multiply([[outg[1]], make_dict[queue[0]]]),
                ])
            else:
                # Undiscovered node: the input just comes in multiplied;
                # enqueue its outgoing neighbours
                make_dict[outg[0]] = tf.keras.layers.multiply(
                    [make_dict[queue[0]], [outg[1]]]
                )
                for outgo in graph_dict[outg[0]]["outgoing"]:
                    queue.append(outgo[0])
        queue.pop(0)
    # Build one model per requested output
    model_inputs = [make_dict[key] for key in inputs]
    model_outputs = [make_dict[key] for key in outputs]
    return [tf.keras.Model(inputs=model_inputs, outputs=o) for o in model_outputs]


def main():
    graph_def = {
        "B": {"incoming": [], "outgoing": [("A", 1.0)]},
        "C": {"incoming": [], "outgoing": [("A", 1.0)]},
        "A": {"incoming": [("B", 2.0), ("C", -1.0)], "outgoing": [("D", 3.0)]},
        "D": {"incoming": [("A", 2.0)], "outgoing": []},
    }
    outputs = construct_graph(graph_def, ["B", "C"], ["A"])
    print("Builded models:", outputs)
    for o in outputs:
        o.summary(120)
        print("Output:", o((1.0, 1.0)))


if __name__ == "__main__":
    main()
```

What to note here?

  • Changing from `placeholder` to `keras.Input` requires specifying the input's shape.
  • Computations are done with `keras.layers.add` and `keras.layers.multiply`. This may not be strictly necessary, but it keeps everything behind one interface. It does, however, require wrapping constant factors in a list (to handle batching).
  • `keras.Model` instances are built and returned.
  • The model is called with a tuple of values (no longer a feed dict).
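The list-wrapping mentioned above can be illustrated without TensorFlow: a one-element sequence acts like a length-1 axis that broadcasts over every sample in a batch. The helper below (`multiply`, a hypothetical pure-Python stand-in for `keras.layers.multiply`, not part of the original answer) sketches that behavior:

```python
def multiply(batch, factor):
    """Mimic keras.layers.multiply for a batch of scalars and a [constant].

    Wrapping the constant in a list gives it a single element that is
    reused ("broadcast") against every sample in the batch.
    """
    (f,) = factor  # exactly one element, applied to each sample
    return [f * x for x in batch]


print(multiply([1.0, 2.0, 5.0], [3.0]))  # [3.0, 6.0, 15.0]
```

In the real Keras layer the same effect comes from NumPy-style broadcasting of a shape-(1,) operand against a shape-(batch,) tensor.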

Here is the output of the code:

```
Builded models: [<tensorflow.python.keras.engine.training.Model object at 0x7fa0b49f0f50>]
Model: "model"
________________________________________________________________________________________________________________________
Layer (type)                            Output Shape    Param #     Connected to
========================================================================================================================
B (InputLayer)                          [(None,)]       0
________________________________________________________________________________________________________________________
C (InputLayer)                          [(None,)]       0
________________________________________________________________________________________________________________________
tf_op_layer_mul (TensorFlowOpLayer)     [(None,)]       0           B[0][0]
________________________________________________________________________________________________________________________
tf_op_layer_mul_1 (TensorFlowOpLayer)   [(None,)]       0           C[0][0]
________________________________________________________________________________________________________________________
add (Add)                               (None,)         0           tf_op_layer_mul[0][0]
                                                                    tf_op_layer_mul_1[0][0]
========================================================================================================================
Total params: 0
Trainable params: 0
Non-trainable params: 0
________________________________________________________________________________________________________________________
Output: tf.Tensor([2.], shape=(1,), dtype=float32)
```


Source: https://outofmemory.cn/zaji/5653090.html