How to use dicts in TF2, and advanced ways to stack matrices

This damn thing nearly killed me, holy crap: forcing a dict into the graph throws an error! Used outside the graph, there's no problem at all.

For Recommendation in Deep learning QQ Group 102948747
For Visual in deep learning QQ Group 629530787
Wanna have a date with someone? Come on, baby. QQ Group 737813700
I'm here waiting for you 
No private chats/DMs on this site!!
I'm collecting true relationship stories long-term (posted on my official account: 美好时光与你同行), and I take paid consulting (any question welcome) and paid code fixes.
The error is as follows:

        return {p.numpy(): i for i, p in enumerate(node)}
    /data/logs/xulm1/myconda/lib/python3.7/site-packages/tensorflow/python/autograph/operators/py_builtins.py:388 enumerate_  **
        return _py_enumerate(s, start)
    /data/logs/xulm1/myconda/lib/python3.7/site-packages/tensorflow/python/autograph/operators/py_builtins.py:396 _py_enumerate
        return enumerate(s, start)
    /data/logs/xulm1/myconda/lib/python3.7/site-packages/tensorflow/python/framework/ops.py:503 __iter__
        self._disallow_iteration()
    /data/logs/xulm1/myconda/lib/python3.7/site-packages/tensorflow/python/framework/ops.py:496 _disallow_iteration
        self._disallow_when_autograph_enabled("iterating over `tf.Tensor`")
    /data/logs/xulm1/myconda/lib/python3.7/site-packages/tensorflow/python/framework/ops.py:474 _disallow_when_autograph_enabled
        " indicate you are trying to use an unsupported feature.".format(task))

    OperatorNotAllowedInGraphError: iterating over `tf.Tensor` is not allowed: AutoGraph did convert this function. This might indicate you are trying to use an unsupported feature.
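For context, a minimal sketch of the pattern that triggers this (only node and the dict comprehension come from the snippet above; the scaffolding is mine). The dict works eagerly, but inside tf.function the tensor is symbolic, so iterating over it is disallowed:

import tensorflow as tf

node = tf.constant([1, 2, 3, 5, 6, 7, 11])

# Eager mode: elements are concrete EagerTensors, so this dict works.
lookup = {p.numpy(): i for i, p in enumerate(node)}

@tf.function
def build_lookup(node):
    # Graph mode: `node` is symbolic; enumerate() and .numpy() are
    # disallowed, so calling this raises the
    # OperatorNotAllowedInGraphError shown above.
    return {p.numpy(): i for i, p in enumerate(node)}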

Searching around, tf.gather looked promising, but it turns out gather fetches values from indices, not the other way around. Is tf.where really the only option? I didn't want to use that thing, but for now there was no way around it.

>>> node
<tf.Tensor: shape=(7,), dtype=int32, numpy=array([ 1,  2,  3,  5,  6,  7, 11], dtype=int32)>
>>> tf.gather(node,1)
<tf.Tensor: shape=(), dtype=int32, numpy=2>
>>> tf.where(node==3)
<tf.Tensor: shape=(1, 1), dtype=int64, numpy=array([[2]])>
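Since tf.where does give the index back as a tensor, a graph-safe value-to-index lookup can be sketched like this (index_of is a name I made up; it stands in for the dict lookup above):

import tensorflow as tf

node = tf.constant([1, 2, 3, 5, 6, 7, 11])

@tf.function
def index_of(node, value):
    # First position where node == value; works in graph mode,
    # unlike the dict comprehension. Assumes the value is present.
    return tf.where(tf.equal(node, value))[0, 0]

print(index_of(node, 3))   # tf.Tensor(2, shape=(), dtype=int64)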

But how do I turn this thing into a tensor?

>>> inds
[[<tf.Tensor: shape=(1,), dtype=int64, numpy=array([2])>, <tf.Tensor: shape=(1,), dtype=int64, numpy=array([6])>],
 [<tf.Tensor: shape=(1,), dtype=int64, numpy=array([6])>, <tf.Tensor: shape=(1,), dtype=int64, numpy=array([0])>],
 [<tf.Tensor: shape=(1,), dtype=int64, numpy=array([0])>, <tf.Tensor: shape=(1,), dtype=int64, numpy=array([4])>],
 [<tf.Tensor: shape=(1,), dtype=int64, numpy=array([4])>, <tf.Tensor: shape=(1,), dtype=int64, numpy=array([3])>],
 [<tf.Tensor: shape=(1,), dtype=int64, numpy=array([3])>, <tf.Tensor: shape=(1,), dtype=int64, numpy=array([5])>],
 [<tf.Tensor: shape=(1,), dtype=int64, numpy=array([5])>, <tf.Tensor: shape=(1,), dtype=int64, numpy=array([2])>],
 [<tf.Tensor: shape=(1,), dtype=int64, numpy=array([2])>, <tf.Tensor: shape=(1,), dtype=int64, numpy=array([1])>]]
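My guess at the kind of loop behind this, growing an (n, 2) tensor with concat one [from, to] pair at a time (seq is an assumption, though with this seq the seven pairs above are reproduced exactly). Eagerly it runs fine; the growing first dimension is what autograph objects to, as shown next:

import tensorflow as tf

node = tf.constant([1, 2, 3, 5, 6, 7, 11])
seq = tf.constant([3, 11, 1, 6, 5, 7, 3, 2])    # hypothetical input sequence

inds = tf.zeros((1, 2), tf.int64)               # dummy first row
for i in range(7):
    # index of seq[i] and seq[i+1] within node, each a shape-(1,) tensor
    pair = tf.concat([tf.where(node == seq[i])[0],
                      tf.where(node == seq[i + 1])[0]], 0)
    inds = tf.concat([inds, pair[tf.newaxis, :]], 0)
inds = inds[1:]                                 # drop the dummy row; shape (7, 2)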

But after stacking with concat like this, it errored, to my surprise. Running on the data normally was fine; it's under distributed training that it breaks.

ValueError: 'inds' has shape (1, 2) before the loop, but shape (2, 2) after one iteration. Use tf.autograph.experimental.set_loop_options to set shape invariants.

Searching turned up an official reference for this, so I gave it a try. Just when I thought success was at hand... damn, the pitfalls never end. And I couldn't make sense of it: no example anywhere, just the bare function. Is this really all there is to using it?? I filed an issue, with this bug reproduction:

@tf.function(autograph=True)
def f():
  v = tf.constant((0,))
  for i in tf.range(3):
    tf.autograph.experimental.set_loop_options(
        shape_invariants=[(v, tf.TensorShape([None]))]
    )
    v = tf.concat((v, [i]), 0)
  return v

Thanks to the expert @月危月危沙鱼 in the ncnn QQ group, who suggested the stack approach. In an earlier post I had assumed stack wouldn't work (since it can't grow a tensor inside a loop), but it turns out it converts a list straight into a tensor. Impressive.
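A sketch of that suggestion (pairs stands in for the first two entries of the inds list above): tf.stack packs the nested Python list into one tensor, and squeeze drops the length-1 inner axis:

import tensorflow as tf

# Stand-in for the first two entries of the `inds` list above.
pairs = [[tf.constant([2], tf.int64), tf.constant([6], tf.int64)],
         [tf.constant([6], tf.int64), tf.constant([0], tf.int64)]]

inds = tf.squeeze(tf.stack(pairs), axis=-1)   # shape (2, 2), dtype int64
print(inds)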

Then it turned out I had celebrated too early: using stack directly errored all the same.

Of course other experts suggested concat would also do; no need to try it, it's bound to be the same error. I did find a fix for this class of error,

going with the second method there, tf.TensorArray. The official example:

>>> ta = tf.TensorArray(tf.float32, size=0, dynamic_size=True, clear_after_read=False)
>>> ta = ta.write(0, 10)
>>> ta = ta.write(1, 20)
>>> ta = ta.write(2, 30)
>>>
>>> ta.read(0)
<tf.Tensor: shape=(), dtype=float32, numpy=10.0>
>>> ta.read(1)
<tf.Tensor: shape=(), dtype=float32, numpy=20.0>
>>> ta.read(2)
<tf.Tensor: shape=(), dtype=float32, numpy=30.0>
>>> ta.stack()
<tf.Tensor: shape=(3,), dtype=float32, numpy=array([10., 20., 30.], dtype=float32)>

Simplified, it looks like this (those parameters must be specified):

>>> inds=tf.TensorArray(tf.float32,size=2,dynamic_size=True,clear_after_read=True)
>>> for i in range(3):
...     inds=inds.write(i,tf.random.uniform(shape=[2]))
...
>>> inds=inds.stack()
>>> inds
<tf.Tensor: shape=(3, 2), dtype=float32, numpy=
array([[0.41484547, 0.4884013 ],
       [0.5207218 , 0.06094539],
       [0.11978662, 0.49889505]], dtype=float32)>
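And a sketch of the same pattern inside a tf.function, which is the situation that matters here (collect_pairs and the sizes are illustrative): the TensorArray grows inside a graph-mode loop, where concat on a changing shape needed shape invariants:

import tensorflow as tf

@tf.function
def collect_pairs(n):
    # A TensorArray grows safely in a graph-mode loop;
    # stack() turns it into a single (n, 2) tensor at the end.
    ta = tf.TensorArray(tf.int64, size=0, dynamic_size=True)
    for i in tf.range(n):
        ta = ta.write(tf.cast(i, tf.int32), tf.stack([i, i + 1]))
    return ta.stack()

print(collect_pairs(tf.constant(5, tf.int64)))   # shape (5, 2)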

Now to see whether it still errors under distributed training. If it does, I'm going to lose it...

        u_A=tf.scatter_nd(tf.constant(inds),tf.ones(len(inds)),
    /data/logs/xulm1/myconda/lib/python3.7/site-packages/tensorflow/python/framework/constant_op.py:264 constant  **
        allow_broadcast=True)
    /data/logs/xulm1/myconda/lib/python3.7/site-packages/tensorflow/python/framework/constant_op.py:282 _constant_impl
        allow_broadcast=allow_broadcast))
    /data/logs/xulm1/myconda/lib/python3.7/site-packages/tensorflow/python/framework/tensor_util.py:456 make_tensor_proto
        _AssertCompatible(values, dtype)
    /data/logs/xulm1/myconda/lib/python3.7/site-packages/tensorflow/python/framework/tensor_util.py:333 _AssertCompatible
        raise TypeError("Expected any non-tensor type, got a tensor instead.")

    TypeError: Expected any non-tensor type, got a tensor instead.

Traceback (most recent call last):
  File "/data/logs/xulm1/myconda/lib/python3.7/site-packages/tensorflow/python/training/coordinator.py", line 297, in stop_on_exception
    yield
  File "/data/logs/xulm1/myconda/lib/python3.7/site-packages/tensorflow/python/distribute/mirrored_run.py", line 323, in run
    self.main_result = self.main_fn(*self.main_args, **self.main_kwargs)
  File "/tmp/tmp8cw3gl_d.py", line 24, in step_fn
    (predictions, _) = ag__.converted_call(ag__.ld(model), (ag__.ld(inputs),), dict(training=True), fscope_1)
  File "/data/logs/xulm1/myconda/lib/python3.7/site-packages/tensorflow/python/autograph/impl/api.py", line 532, in converted_call
    return _call_unconverted(f, args, kwargs, options)
  File "/data/logs/xulm1/myconda/lib/python3.7/site-packages/tensorflow/python/autograph/impl/api.py", line 339, in _call_unconverted
    return f(*args, **kwargs)
  File "/data/logs/xulm1/myconda/lib/python3.7/site-packages/tensorflow/python/keras/engine/base_layer.py", line 985, in __call__
    outputs = call_fn(inputs, *args, **kwargs)
  File "/data/logs/xulm1/myconda/lib/python3.7/site-packages/tensorflow/python/autograph/impl/api.py", line 258, in wrapper
    raise e.ag_error_metadata.to_exception(e)
TypeError: in user code:

    docpic_gnn_multi_gpu2.py:144 call  *
        adj_in, adj_out, graph_item, last_node_id = self.get_inputs(seqs)
    docpic_gnn_multi_gpu2.py:171 get_inputs  *
        u_A=tf.scatter_nd(tf.constant(inds),tf.ones(len(inds)),
    /data/logs/xulm1/myconda/lib/python3.7/site-packages/tensorflow/python/framework/constant_op.py:264 constant  **
        allow_broadcast=True)
    /data/logs/xulm1/myconda/lib/python3.7/site-packages/tensorflow/python/framework/constant_op.py:282 _constant_impl
        allow_broadcast=allow_broadcast))
    /data/logs/xulm1/myconda/lib/python3.7/site-packages/tensorflow/python/framework/tensor_util.py:456 make_tensor_proto
        _AssertCompatible(values, dtype)
    /data/logs/xulm1/myconda/lib/python3.7/site-packages/tensorflow/python/framework/tensor_util.py:333 _AssertCompatible
        raise TypeError("Expected any non-tensor type, got a tensor instead.")

    TypeError: Expected any non-tensor type, got a tensor instead.

Later I found there was no need to convert this thing to a constant at all: the input sequence length is None (unknown), so it is bound to stay None.
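A sketch of the fix as I understand it (the shapes are illustrative): inds already comes out of TensorArray.stack() as a tensor, so drop the tf.constant wrapper; and since the length is dynamic, use tf.shape rather than len:

import tensorflow as tf

# `inds` stands in for the TensorArray.stack() result.
inds = tf.constant([[0, 1], [1, 2], [2, 0]], dtype=tf.int64)

n = tf.shape(inds)[0]                           # dynamic row count; len() needs a static shape
u_A = tf.scatter_nd(inds, tf.ones([n]), [4, 4]) # no tf.constant() around a tensor
print(u_A)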

[0626] Today I confirmed that doing this conversion in TF is dead slow, damn it. Forget it.

The error there was that I hadn't advanced the TensorArray's index in the loop, hence the failure.
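In other words, a tiny sketch of my reading of that bug (data made up): the write index has to advance with the loop, otherwise every write lands in the same slot:

import tensorflow as tf

ta = tf.TensorArray(tf.float32, size=0, dynamic_size=True)
rows = [[1., 2.], [3., 4.], [5., 6.]]
for i, row in enumerate(rows):
    ta = ta.write(i, row)    # the index advances with each write
print(ta.stack())            # shape (3, 2)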

May we meet again someday, and may you still remember the topics we once discussed.
