Clearing TensorFlow GPU memory after model execution



A GitHub issue from June 2016 (https://github.com/tensorflow/tensorflow/issues/1727) describes the underlying problem:

Currently the allocator in the GPUDevice belongs to the ProcessState, which is essentially a global singleton. The first session using the GPU initializes it, and it frees itself only when the process shuts down.

Hence the only workaround is to run the computation in a separate process and shut that process down when it finishes.

Example code:

```python
import multiprocessing

import numpy as np
import tensorflow as tf


def run_tensorflow():
    n_input = 10000
    n_classes = 1000

    # Create model
    def multilayer_perceptron(x, weight):
        # Hidden layer with RELU activation
        layer_1 = tf.matmul(x, weight)
        return layer_1

    # Store layer weights
    weights = tf.Variable(tf.random_normal([n_input, n_classes]))
    x = tf.placeholder("float", [None, n_input])
    y = tf.placeholder("float", [None, n_classes])
    pred = multilayer_perceptron(x, weights)

    cost = tf.reduce_mean(
        tf.nn.softmax_cross_entropy_with_logits(logits=pred, labels=y))
    optimizer = tf.train.AdamOptimizer(learning_rate=0.001).minimize(cost)

    init = tf.global_variables_initializer()
    with tf.Session() as sess:
        sess.run(init)
        for i in range(100):
            batch_x = np.random.rand(10, 10000)
            batch_y = np.random.rand(10, 1000)
            sess.run([optimizer, cost], feed_dict={x: batch_x, y: batch_y})

    print("finished doing stuff with tensorflow!")


if __name__ == "__main__":
    # Option 1: execute the work in an extra process
    p = multiprocessing.Process(target=run_tensorflow)
    p.start()
    p.join()
    # wait until the user presses the enter key
    input()

    # Option 2: just execute the function in the current process
    run_tensorflow()
    # wait until the user presses the enter key
    input()
```

So if you call run_tensorflow() inside a spawned process and then shut that process down (option 1), the GPU memory is freed. If you just run run_tensorflow() in the current process (option 2), the memory is not freed after the function call returns.




Original article: http://outofmemory.cn/zaji/5644534.html
