Loading the data works fine, no problem there. After reading the TensorFlow documentation and working through some tutorials and examples online, I wrote the following: a very simple network example using CSV data. The data I'm using in this example is the standard MNIST image database, but in CSV format.
datafile = os.path.join('/pathtofile/', 'mnist_train.csv')
descfile = os.path.join('/pathtofile/', 'mnist_train.rst')
mnist = DataLoader(datafile, descfile).load_model()

x_train, x_test, y_train, y_test = train_test_split(mnist.DATA, mnist.TARGET, test_size=0.33, random_state=42)

## Width and length of the arrays
train_width = len(x_train[0]) + 1; train_length = len(x_train)
test_width = len(x_test[0]) + 1; test_length = len(x_test)

data = self.build_rawdata(x_train, y_train, train_length, train_width)
test_data = self.build_rawdata(x_test, y_test, test_length, test_width)

y_train, y_train_onehot = self.onehot_converter(data)
y_test, y_test_onehot = self.onehot_converter(test_data)

## A = features, B = classes
A = data.shape[1] - 1
B = len(y_train_onehot[0])
This all works. The training, test, and one-hot arrays are all the correct sizes and are filled with the correct values.
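(For reference, onehot_converter could be implemented along these lines. This is only a minimal numpy sketch of a hypothetical equivalent, assuming each row of the array stores its integer class label in the last column:)

import numpy as np

def onehot_converter(data):
    # Assumes the last column of each row holds the integer class label.
    labels = data[:, -1].astype(int)
    num_classes = labels.max() + 1
    onehot = np.zeros((len(labels), num_classes))
    onehot[np.arange(len(labels)), labels] = 1
    return labels, onehot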
The actual TensorFlow code is where I've most likely gone wrong (?).
sess = tf.InteractiveSession()

## Weights and bias
x = tf.placeholder("float", shape=[None, A])
y_ = tf.placeholder("float", shape=[None, B])
W = tf.Variable(tf.random_normal([A, B], stddev=0.01))
b = tf.Variable(tf.random_normal([B], stddev=0.01))
sess.run(tf.initialize_all_variables())

y = tf.nn.softmax(tf.matmul(x, W) + b)
cross_entropy = -tf.reduce_sum(y_ * tf.log(y))
train_step = tf.train.GradientDescentOptimizer(0.01).minimize(cross_entropy)

## 100 iterations of learning with the above GradientDescentOptimizer
for i in range(100):
    train_step.run(feed_dict={x: x_train, y_: y_train_onehot})
    correct_prediction = tf.equal(tf.argmax(y, 1), tf.argmax(y_, 1))
    accuracy = tf.reduce_mean(tf.cast(correct_prediction, "float"))
    result = sess.run(accuracy, feed_dict={x: x_test, y_: y_test_onehot})
    print 'Run {},{}'.format(i + 1, result)
Every run of this code produces exactly the same accuracy, and I can't figure out why.
I tensorflow/core/common_runtime/local_device.cc:40] Local device intra op parallelism threads: 12
I tensorflow/core/common_runtime/direct_session.cc:58] Direct session inter op parallelism threads: 12
Run 1,0.0974242389202
Run 2,0.0974242389202
Run 3,0.0974242389202
Run 4,0.0974242389202
Run 5,0.0974242389202
Run 6,0.0974242389202
Run 7,0.0974242389202
Run 8,0.0974242389202
Run 9,0.0974242389202
Run 10,0.0974242389202
....
Run 100,0.0974242389202
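(One way to dig into this is to fetch the loss alongside the train step and print it each iteration. Here is a minimal diagnostic sketch, reusing the session and tensors defined above; if the loss never moves, or turns into NaN, the accuracy will stay frozen as well:)

for i in range(100):
    # Fetch the loss value in the same run call as the training step.
    _, loss = sess.run([train_step, cross_entropy], feed_dict={x: x_train, y_: y_train_onehot})
    print 'Iteration {}, loss {}'.format(i + 1, loss)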
I went back and looked at the tutorials and the examples I learned from. The Iris dataset (loaded in the same way) produced proper output with accurate predictions; this code with the MNIST CSV data, however, does not.
Any insight would be greatly appreciated.
Edit 1:
So I had a few minutes to try some of your suggestions, but no luck. For comparison, I also decided to go back and test with the Iris CSV dataset. After switching to sess.run(train_step, feed_dict={…}), the output is slightly different:
Run 1,0.300000011921
Run 2,0.319999992847
Run 3,0.699999988079
Run 4,0.699999988079
Run 5,0.699999988079
Run 6,0.699999988079
Run 7,0.360000014305
Run 8,0.699999988079
Run 9,0.699999988079
Run 10,0.699999988079
Run 11,0.699999988079
Run 12,0.699999988079
Run 13,0.699999988079
Run 14,0.699999988079
Run 15,0.699999988079
Run 16,0.300000011921
Run 17,0.759999990463
Run 18,0.680000007153
Run 19,0.819999992847
Run 20,0.680000007153
Run 21,0.680000007153
Run 22,0.839999973774
Run 23,0.319999992847
Run 24,0.699999988079
Run 25,0.699999988079
The values generally hover around this range until run 64, where they lock onto:
Run 64,0.379999995232
...
Run 100,0.379999995232

Solution

I think the problem may be that your train_step is not inside sess.run. Try this. Also consider using mini-batches for training:
for i in range(100):
    for start, end in zip(range(0, len(x_train), 20), range(20, len(x_train) + 1, 20)):
        sess.run(train_step, feed_dict={x: x_train[start:end], y_: y_train_onehot[start:end]})
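(A common refinement of this mini-batch loop is to reshuffle the training set each epoch, so the batches differ between passes. Here is a sketch under the same assumptions, additionally assuming x_train and y_train_onehot are numpy arrays:)

import numpy as np

batch_size = 20
for epoch in range(100):
    # Reshuffle so each epoch sees the examples in a different order.
    perm = np.random.permutation(len(x_train))
    x_shuf = x_train[perm]
    y_shuf = y_train_onehot[perm]
    for start in range(0, len(x_shuf), batch_size):
        end = start + batch_size
        sess.run(train_step, feed_dict={x: x_shuf[start:end], y_: y_shuf[start:end]})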