ACL TF 1.x: Enabling Mixed Precision in Training


Objectives
Mixed precision training accelerates deep neural network training by mixing the float16 and float32 data types, reducing memory usage and memory traffic so that larger networks can be trained, while largely preserving the accuracy achievable with pure float32 training. The Ascend AI processor supports several training precision modes, which users can set in the training script. Taking a sess.run-style handwritten digit classification network as an example, this lab shows how to enable mixed precision after migrating a TensorFlow 1.15 training script.
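The memory saving and the precision trade-off can be seen directly with NumPy (a minimal illustration, independent of the NPU stack):

```python
import numpy as np

# A float16 tensor takes half the memory of a float32 tensor of the same shape.
fp32 = np.zeros((1024, 1024), dtype=np.float32)
fp16 = np.zeros((1024, 1024), dtype=np.float16)
print(fp32.nbytes, fp16.nbytes)  # 4194304 2097152

# The trade-off: float16 keeps only about 3 decimal digits of precision.
print(np.float16(1.0001))        # rounds to 1.0
```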

Configuring the Runtime Environment
This environment is allocated a CPU flavor by default, but an NPU is required here, so click [Switch Flavor] in the upper-right corner of the page and switch the resource to the Ascend: 1*Ascend 910 flavor.

(Figure: resource flavor before and after switching.)

Downloading the Training Dataset
Run the following commands to download the dataset:

!wget -N -P /home/ma-user/work/Data https://modelarts-train-ae.obs.cn-north-4.myhuaweicloud.com/train/Data/t10k-images.idx3-ubyte
!wget -N -P /home/ma-user/work/Data https://modelarts-train-ae.obs.cn-north-4.myhuaweicloud.com/train/Data/t10k-labels.idx1-ubyte
!wget -N -P /home/ma-user/work/Data https://modelarts-train-ae.obs.cn-north-4.myhuaweicloud.com/train/Data/train-labels.idx1-ubyte
!wget -N -P /home/ma-user/work/Data https://modelarts-train-ae.obs.cn-north-4.myhuaweicloud.com/train/Data/train-images.idx3-ubyte
out:
--2022-04-20 16:51:25--  https://modelarts-train-ae.obs.cn-north-4.myhuaweicloud.com/train/Data/t10k-images.idx3-ubyte
Resolving proxy-notebook.modelarts-dev-proxy.com (proxy-notebook.modelarts-dev-proxy.com)... 192.168.0.172
Connecting to proxy-notebook.modelarts-dev-proxy.com (proxy-notebook.modelarts-dev-proxy.com)|192.168.0.172|:8083... connected.
Proxy request sent, awaiting response... 304 Not Modified
File ‘/home/ma-user/work/Data/t10k-images.idx3-ubyte’ not modified on server. Omitting download.

--2022-04-20 16:51:26--  https://modelarts-train-ae.obs.cn-north-4.myhuaweicloud.com/train/Data/t10k-labels.idx1-ubyte
Resolving proxy-notebook.modelarts-dev-proxy.com (proxy-notebook.modelarts-dev-proxy.com)... 192.168.0.172
Connecting to proxy-notebook.modelarts-dev-proxy.com (proxy-notebook.modelarts-dev-proxy.com)|192.168.0.172|:8083... connected.
Proxy request sent, awaiting response... 304 Not Modified
File ‘/home/ma-user/work/Data/t10k-labels.idx1-ubyte’ not modified on server. Omitting download.

--2022-04-20 16:51:26--  https://modelarts-train-ae.obs.cn-north-4.myhuaweicloud.com/train/Data/train-labels.idx1-ubyte
Resolving proxy-notebook.modelarts-dev-proxy.com (proxy-notebook.modelarts-dev-proxy.com)... 192.168.0.172
Connecting to proxy-notebook.modelarts-dev-proxy.com (proxy-notebook.modelarts-dev-proxy.com)|192.168.0.172|:8083... connected.
Proxy request sent, awaiting response... 304 Not Modified
File ‘/home/ma-user/work/Data/train-labels.idx1-ubyte’ not modified on server. Omitting download.

--2022-04-20 16:51:27--  https://modelarts-train-ae.obs.cn-north-4.myhuaweicloud.com/train/Data/train-images.idx3-ubyte
Resolving proxy-notebook.modelarts-dev-proxy.com (proxy-notebook.modelarts-dev-proxy.com)... 192.168.0.172
Connecting to proxy-notebook.modelarts-dev-proxy.com (proxy-notebook.modelarts-dev-proxy.com)|192.168.0.172|:8083... connected.
Proxy request sent, awaiting response... 304 Not Modified
File ‘/home/ma-user/work/Data/train-images.idx3-ubyte’ not modified on server. Omitting download.

Importing Libraries
To train a TensorFlow-based script on the Ascend 910 AI processor, the TensorFlow framework adapter plugin (TF Adapter) is required. TF Adapter provides user-facing Python APIs that adapt the TensorFlow framework and connect it to the CANN software stack. Therefore, before training, add the line from npu_bridge.npu_init import * to the training code to import the relevant libraries.

import tensorflow as tf
import numpy as np
import struct
import os
import time 
from npu_bridge.npu_init import *

Processing the MNIST Dataset
This code generally requires no modification.

# Load the image set
def load_image_set(filename):
    print("load image set", filename)
    binfile = open(filename, 'rb')  # open the binary file
    buffers = binfile.read()
    head = struct.unpack_from('>IIII', buffers, 0)  # read the first four integers, returned as a tuple
    offset = struct.calcsize('>IIII')  # locate the start of the pixel data
    image_num = head[1]  # number of images
    width = head[2]
    height = head[3]
    bits = image_num * width * height
    bits_string = '>' + str(bits) + 'B'  # fmt string, e.g. '>47040000B'
    imgs = struct.unpack_from(bits_string, buffers, offset)  # read the pixel data, returned as a tuple
    binfile.close()
    imgs = np.reshape(imgs, [image_num, width * height])  # reshape to a [60000, 784] array
    print("load imgs finished")
    return imgs, head

# Load the label set
def load_label_set(filename):
    print("load label set", filename)
    binfile = open(filename, 'rb')  # open the binary file
    buffers = binfile.read()
    head = struct.unpack_from('>II', buffers, 0)  # read the first two integers of the label file
    label_num = head[1]
    offset = struct.calcsize('>II')  # locate the start of the label data
    num_string = '>' + str(label_num) + 'B'  # fmt string, e.g. '>60000B'
    labels = struct.unpack_from(num_string, buffers, offset)  # read the label data
    binfile.close()
    labels = np.reshape(labels, [label_num])
    print("load label finished")
    return labels, head

# Manual one-hot encoding
def encode_one_hot(labels):
    num = labels.shape[0]
    res = np.zeros((num, 10))
    for i in range(num):
        res[i, labels[i]] = 1  # labels[i] is a digit 0-9; set the corresponding column to 1
    return res

train_image = '/home/ma-user/work/Data/train-images.idx3-ubyte'
train_label = '/home/ma-user/work/Data/train-labels.idx1-ubyte'
test_image = '/home/ma-user/work/Data/t10k-images.idx3-ubyte'
test_label ='/home/ma-user/work/Data/t10k-labels.idx1-ubyte'
imgs, data_head = load_image_set(train_image)

# labels here is 60000 digits and needs to be converted to one-hot encoding
labels, labels_head = load_label_set(train_label)
test_images, test_images_head = load_image_set(test_image)
test_labels, test_labels_head = load_label_set(test_label)
out:
load image set /home/ma-user/work/Data/train-images.idx3-ubyte
load imgs finished
load label set /home/ma-user/work/Data/train-labels.idx1-ubyte
load label finished
load image set /home/ma-user/work/Data/t10k-images.idx3-ubyte
load imgs finished
load label set /home/ma-user/work/Data/t10k-labels.idx1-ubyte
load label finished
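The big-endian header parsing in load_image_set can be sanity-checked on a small synthetic buffer (a hypothetical self-test, not part of the original lab; 2051 is the magic number of MNIST image files):

```python
import struct
import numpy as np

# Build an in-memory IDX-style buffer: 2 images of 2x2 pixels.
magic, num, w, h = 2051, 2, 2, 2
buf = struct.pack('>IIII', magic, num, w, h) + bytes(range(num * w * h))

demo_head = struct.unpack_from('>IIII', buf, 0)  # (2051, 2, 2, 2)
hdr_size = struct.calcsize('>IIII')              # header is 16 bytes
demo_imgs = struct.unpack_from('>' + str(num * w * h) + 'B', buf, hdr_size)
demo_imgs = np.reshape(demo_imgs, [demo_head[1], demo_head[2] * demo_head[3]])
print(demo_imgs.shape)  # (2, 4)
```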

Building the Model / Loss Computation / Gradient Updates
This code generally requires no modification.

# Hyperparameters
learning_rate = 0.01
training_epoches = 10
bacth_size = 100  # mini-batch size
display_step = 2  # print once every 2 epochs

# tf graph input
x = tf.placeholder(tf.float32, [None, 784])  # 28 * 28 = 784
y = tf.placeholder(tf.float32, [None, 10])  # digits 0-9 ==> 10 classes

# Model parameters
W = tf.Variable(tf.zeros([784, 10]))  # tf.truncated_normal() is a common alternative
b = tf.Variable(tf.zeros([10]))

# Build the model
prediction = tf.nn.softmax(tf.matmul(x, W) + b)
loss = tf.reduce_mean(-tf.reduce_sum(y * tf.log(tf.clip_by_value(prediction, 1e-8, 1.0)), reduction_indices=1))
optimizer = tf.train.GradientDescentOptimizer(learning_rate).minimize(loss)
init = tf.global_variables_initializer()
res = encode_one_hot(labels)
print("res", res)
total_batchs = int(data_head[1] / bacth_size)
print("total_batchs:", total_batchs)
out:
WARNING:tensorflow:From /home/ma-user/anaconda3/envs/TensorFlow-1.15.0/lib/python3.7/site-packages/tensorflow_core/python/ops/math_grad.py:1424: where (from tensorflow.python.ops.array_ops) is deprecated and will be removed in a future version.
Instructions for updating:
Use tf.where in 2.0, which has the same broadcast rule as np.where
res [[0. 0. 0. ... 0. 0. 0.]
 [1. 0. 0. ... 0. 0. 0.]
 [0. 0. 0. ... 0. 0. 0.]
 ...
 [0. 0. 0. ... 0. 0. 0.]
 [0. 0. 0. ... 0. 0. 0.]
 [0. 0. 0. ... 0. 1. 0.]]
total_batchs: 600
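As a quick sanity check of encode_one_hot (a small aside, assuming the same 10-class layout), the manual loop is equivalent to indexing an identity matrix with NumPy:

```python
import numpy as np

demo_labels = np.array([3, 0, 9])
manual = np.zeros((demo_labels.shape[0], 10))
for i in range(demo_labels.shape[0]):
    manual[i, demo_labels[i]] = 1        # same loop as encode_one_hot

assert np.array_equal(manual, np.eye(10)[demo_labels])  # vectorized equivalent
print(manual[0])  # [0. 0. 0. 1. 0. 0. 0. 0. 0. 0.]
```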

Creating a Session and Running Training
Before creating the session, we need to add the following configuration: create a config and add a custom_op:

config = tf.ConfigProto()
custom_op = config.graph_options.rewrite_options.custom_optimizers.add()
custom_op.name = "NpuOptimizer"
config.graph_options.rewrite_options.remapping = RewriterConfig.OFF

The config created above is passed to tf.Session as the session config so that training executes on the NPU; the sess.run code needs no modification.

# Training
def train():
    with tf.Session(config=config) as sess:
            sess.run(init)
            for epoch in range(training_epoches):
                start_time = time.time()
                avg_loss = 0.
                total_batchs = int(data_head[1] / bacth_size)  # data_head[1] is the number of images

                for i in range(total_batchs):
                    batch_xs = imgs[i * bacth_size: (i + 1) * bacth_size, 0:784]
                    batch_ys = res[i * bacth_size: (i + 1) * bacth_size, 0:10]

                    _, l = sess.run([optimizer, loss], feed_dict={x: batch_xs, y: batch_ys})

                    # accumulate the average loss
                    avg_loss += l / total_batchs
                end_time = time.time()
                if epoch % display_step == 0:
                    print("Epoch:", '%04d' % (epoch), "loss=", "{:.9f}".format(avg_loss), "time=", "{:.3f}".format(end_time-start_time) )

            print("Optimization Done!")

            correct_prediction = tf.equal(tf.argmax(prediction, 1), tf.argmax(y, 1))
            accuracy = tf.reduce_mean(tf.cast(correct_prediction, tf.float32))

            print("Accuracy:", accuracy.eval({x: test_images, y: encode_one_hot(test_labels)}))
train()
out:
Epoch: 0000 loss= 6.713961569 time= 23.144
Epoch: 0002 loss= 5.096043814 time= 14.943
Epoch: 0004 loss= 5.052241913 time= 14.921
Epoch: 0006 loss= 4.953291978 time= 14.615
Epoch: 0008 loss= 4.819878323 time= 14.316
Optimization Done!
Accuracy: 0.7448001

At this point we have completed a basic script migration. Next, let's look at how to enable mixed precision.

Enabling Mixed Precision
It is simple: just set "precision_mode" to "allow_mix_precision" in the config shown earlier.

config = tf.ConfigProto()
custom_op = config.graph_options.rewrite_options.custom_optimizers.add()
custom_op.name = "NpuOptimizer"
config.graph_options.rewrite_options.remapping = RewriterConfig.OFF
custom_op.parameter_map["precision_mode"].s = tf.compat.as_bytes("allow_mix_precision")  # enable mixed precision

Run the training part again; mixed precision is now enabled.

# Training
train()
out:
Epoch: 0000 loss= 6.713961569 time= 24.848
Epoch: 0002 loss= 5.096043814 time= 14.823
Epoch: 0004 loss= 5.052241913 time= 14.595
Epoch: 0006 loss= 4.953291978 time= 14.335
Epoch: 0008 loss= 4.819878323 time= 14.284
Optimization Done!
Accuracy: 0.74480003

Enabling Loss Scaling
In mixed precision computation, float16's reduced dynamic range can make gradient computation underflow or overflow, causing some parameter updates to fail. To ensure that such models still converge under mixed precision training, Loss Scaling needs to be configured.

Loss Scaling multiplies the loss from the forward pass by a loss scale factor S, which amplifies the gradients during back-propagation and thereby avoids, as far as possible, the underflow that occurs when small gradient values cannot be represented in FP16. After the parameter gradients are aggregated, and before the optimizer updates the parameters, the aggregated gradients are divided by S to restore their original magnitude. Loss Scaling comes in two variants: dynamic and static.
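The underflow problem can be illustrated directly with NumPy's float16 (a minimal sketch of the principle; on the NPU the scaling happens inside the training graph):

```python
import numpy as np

grad = 1e-8                         # a tiny gradient value
print(np.float16(grad))             # 0.0 -- underflows (fp16 smallest subnormal is ~6e-8)

S = 2.0 ** 16                       # loss scale factor S
scaled = np.float16(grad * S)       # now representable in float16
print(scaled > 0)                   # True

restored = np.float32(scaled) / S   # unscale in float32 before the weight update
print(abs(restored - grad) < 1e-9)  # True -- the gradient survives the round trip
```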

Dynamic Loss Scaling monitors the gradients for floating-point exception states during training and automatically adjusts the loss scale factor S to follow gradient changes, removing the need to hand-pick S and tune it during training. Static Loss Scaling, as the name suggests, keeps the loss scale fixed after initialization, so the developer must hand-pick a value suitable for the network. Here we implement a simple dynamic Loss Scaling setup. Compared with the earlier migration, we additionally create an NPULossScaleOptimizer and instantiate an ExponentialUpdateLossScaleManager class to configure dynamic loss scaling.
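The dynamic policy can be sketched in plain Python (an illustrative toy mirroring the incr_every_n_steps / decr_every_n_nan_or_inf / decr_ratio parameters used below; the actual ExponentialUpdateLossScaleManager implementation may differ in details):

```python
class DynamicLossScaler:
    """Toy version of the exponential-update loss scale policy."""
    def __init__(self, init_scale=2.0 ** 32, incr_every_n_steps=1000,
                 decr_every_n_nan_or_inf=2, decr_ratio=0.5):
        self.scale = init_scale
        self.incr_every_n_steps = incr_every_n_steps
        self.decr_every_n_nan_or_inf = decr_every_n_nan_or_inf
        self.decr_ratio = decr_ratio
        self.good_steps = 0
        self.bad_steps = 0

    def update(self, grads_finite):
        if grads_finite:
            self.good_steps += 1
            self.bad_steps = 0
            if self.good_steps >= self.incr_every_n_steps:
                self.scale *= 2.0               # grow after a clean run of steps
                self.good_steps = 0
        else:
            self.bad_steps += 1
            self.good_steps = 0
            if self.bad_steps >= self.decr_every_n_nan_or_inf:
                self.scale *= self.decr_ratio   # shrink after repeated overflow
                self.bad_steps = 0

scaler = DynamicLossScaler(init_scale=8.0, incr_every_n_steps=2)
scaler.update(True); scaler.update(True)      # two clean steps -> scale doubles
print(scaler.scale)                           # 16.0
scaler.update(False); scaler.update(False)    # two overflows -> scale halves
print(scaler.scale)                           # 8.0
```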

config = tf.ConfigProto()
custom_op = config.graph_options.rewrite_options.custom_optimizers.add()
custom_op.name = "NpuOptimizer"
config.graph_options.rewrite_options.remapping = RewriterConfig.OFF
custom_op.parameter_map["precision_mode"].s = tf.compat.as_bytes("allow_mix_precision")  # enable mixed precision

optimizer = tf.train.GradientDescentOptimizer(learning_rate)  # NPULossScaleOptimizer wraps a standard TF optimizer, so re-initialize it here
loss_scale_manager = ExponentialUpdateLossScaleManager(init_loss_scale=2**32, \
                                                       incr_every_n_steps=1000, decr_every_n_nan_or_inf=2, decr_ratio=0.5)  # instantiate a loss scale manager
optimizer = NPULossScaleOptimizer(optimizer, loss_scale_manager)  # create the loss scale optimizer
optimizer = optimizer.minimize(loss)  # what sess.run finally executes is the minimize op of the loss

Run training:

train()
out:
Epoch: 0000 loss= 6.330714426 time= 24.207
Epoch: 0002 loss= 5.222052340 time= 14.885
Epoch: 0004 loss= 5.161887498 time= 14.921
Epoch: 0006 loss= 4.942068146 time= 14.570
Epoch: 0008 loss= 4.824350572 time= 14.450
Optimization Done!
Accuracy: 0.7452
