Introduction to Using TensorFlow Lite

Continuing the earlier posts in this blog series, this one introduces TensorFlow Lite.

TensorFlow Lite: https://tensorflow.google.cn/lite/guide?hl=zh-cn

My environment, briefly:

python 3.6.5

tensorflow-gpu 2.6.2

cuda version: 11.2

cudnn version: cudnn-11.2-linux-x64-v8.1.1.33

This post mainly follows the official documentation, most of which is also available in Chinese, which makes it easier to understand; below I record my own simple hands-on process:

The first item in the development workflow of the official guide covers exactly this, so let's start by converting a TensorFlow model into a TensorFlow Lite model.

When TensorFlow 2.6 was installed earlier, the TensorFlow Lite library came along with it; let's first verify that it is available.
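
The verification output itself was not captured in this page, so here is a minimal sketch of what such a check might look like (my own snippet, not from the original post): if the import succeeds and the converter/interpreter classes print without an AttributeError, the TFLite Python API bundled with TensorFlow is available.

import tensorflow as tf

# tf.lite ships inside the standard TensorFlow package; these prints simply
# confirm the converter and interpreter classes used later are importable.
print(tf.__version__)            # e.g. 2.6.2
print(tf.lite.TFLiteConverter)
print(tf.lite.Interpreter)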

Here I save the Keras built-in model in SavedModel (pb) format; this built-in model was also used in an earlier post. The code is below, and the test image can be obtained from that post:

import tensorflow as tf
import numpy as np
from tensorflow.keras.preprocessing import image
from tensorflow.keras.applications import resnet50

from tensorflow.keras.applications.resnet50 import preprocess_input, decode_predictions
from PIL import Image
import time

physical_devices = tf.config.list_physical_devices('GPU')
tf.config.experimental.set_memory_growth(physical_devices[0], True)

# Load the pretrained model
model1 = resnet50.ResNet50(weights='imagenet')

model1.save("weights.h5")

model = tf.keras.models.load_model("weights.h5")

# file = tf.keras.utils.get_file(
#     "grace_hopper.jpg",
#     "http://www.kaotop.com/file/tupian/20220415/grace_hopper.jpg")
# img = tf.keras.preprocessing.image.load_img(file, target_size=[224, 224])

# Load the image; the layout is NHWC
img = image.load_img('2008_002682.jpg', target_size=(224, 224))
img = image.img_to_array(img)
img = preprocess_input(img)
print("img: ",img.shape)

img = np.expand_dims(img, axis=0)
print(img.shape)

#img = img.transpose(0,3,1,2)  # channel reorder (NHWC -> NCHW), not needed here

t_model = time.perf_counter()
pred_class = model.predict(img)
print((pred_class.shape))
print(f'do inference cost:{time.perf_counter() - t_model:.8f}s')

print('Predicted:', decode_predictions(pred_class, top=5)[0])

tf.saved_model.save(model, "resnet/1/")
loaded = tf.saved_model.load("resnet/1/")
print(list(loaded.signatures.keys()))

infer = loaded.signatures["serving_default"]
print(infer.structured_outputs)

print(model.output_names[0])

t_model = time.perf_counter()
labeling = infer(tf.constant(img))[model.output_names[0]].numpy()
print(f'do inference cost:{time.perf_counter() - t_model:.8f}s')

print('Predicted:', decode_predictions(labeling, top=5)[0])

Part of the run output:

(1, 1000)
do inference cost:2.46907689s
Predicted: [('n02123597', 'Siamese_cat', 0.1655076), ('n02108915', 'French_bulldog', 0.14137916), ('n04409515', 'tennis_ball', 0.08570885), ('n02095314', 'wire-haired_fox_terrier', 0.05204656), ('n02123045', 'tabby', 0.050695747)]
2022-04-13 17:01:26.639696: W tensorflow/python/util/util.cc:348] Sets are not currently considered sequences, but this may change in the future, so consider avoiding using them.
['serving_default']
{'predictions': TensorSpec(shape=(None, 1000), dtype=tf.float32, name='predictions')}
predictions
do inference cost:0.29890721s
Predicted: [('n02123597', 'Siamese_cat', 0.16550793), ('n02108915', 'French_bulldog', 0.14137916), ('n04409515', 'tennis_ball', 0.08570894), ('n02095314', 'wire-haired_fox_terrier', 0.052046414), ('n02123045', 'tabby', 0.050695755)]

Process finished with exit code 0

As you can see, after saving in SavedModel format and loading the model back, the predictions are identical to those before saving.

Next, let's convert the TensorFlow model saved above in SavedModel format into a TensorFlow Lite model, then load this TensorFlow Lite model and predict on the same image. The code is as follows:

import tensorflow as tf
import numpy as np
from tensorflow.keras.preprocessing import image
from tensorflow.keras.applications.resnet50 import preprocess_input, decode_predictions
import time

# Convert the model
converter = tf.lite.TFLiteConverter.from_saved_model("resnet/1/") # path to the SavedModel directory
tflite_model = converter.convert()

# Save the model.
tflite_file_path = 'model.tflite'
with open(tflite_file_path, 'wb') as f:
  f.write(tflite_model)


# Load the TFLite model in TFLite Interpreter
interpreter = tf.lite.Interpreter(tflite_file_path)
interpreter.allocate_tensors()

img = image.load_img('2008_002682.jpg', target_size=(224, 224))
img = image.img_to_array(img)
img = preprocess_input(img)
img = np.expand_dims(img, axis=0)
print(img.shape)

input  = interpreter.get_input_details()[0]
output = interpreter.get_output_details()[0]

interpreter.set_tensor(input['index'], tf.convert_to_tensor(img))

t_model = time.perf_counter()
interpreter.invoke()
print(f'do inference cost:{time.perf_counter() - t_model:.8f}s')

output = interpreter.get_tensor(output['index'])
print(output.shape)

print('Predicted:', decode_predictions(output, top=5)[0])


Part of the run output is below; the result matches the earlier direct TensorFlow prediction, and the inference time improved a bit:

(1, 224, 224, 3)
do inference cost:0.18793295s
(1, 1000)
Predicted: [('n02123597', 'Siamese_cat', 0.16550776), ('n02108915', 'French_bulldog', 0.14137983), ('n04409515', 'tennis_ball', 0.08570886), ('n02095314', 'wire-haired_fox_terrier', 0.05204646), ('n02123045', 'tabby', 0.050695557)]

Process finished with exit code 0
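
As a side note of my own (following the converter documentation rather than the original post), the same converter can also apply post-training dynamic-range quantization, which typically shrinks the .tflite file to roughly a quarter of its float32 size. A minimal sketch, reusing the SavedModel directory from above:

import tensorflow as tf

# Same SavedModel as before, but with default optimizations enabled,
# which quantizes the weights to 8-bit (dynamic-range quantization).
converter = tf.lite.TFLiteConverter.from_saved_model("resnet/1/")
converter.optimizations = [tf.lite.Optimize.DEFAULT]
tflite_quant_model = converter.convert()

with open('model_quant.tflite', 'wb') as f:
    f.write(tflite_quant_model)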

You can also convert a Keras model directly to a TF Lite model and predict on the same image; the code is as follows:

import tensorflow as tf
import numpy as np
from tensorflow.keras.preprocessing import image
from tensorflow.keras.applications import resnet50

from tensorflow.keras.applications.resnet50 import preprocess_input, decode_predictions
from PIL import Image
import time

physical_devices = tf.config.list_physical_devices('GPU')
tf.config.experimental.set_memory_growth(physical_devices[0], True)

# Load the pretrained model
model1 = resnet50.ResNet50(weights='imagenet')

# Convert the model.
converter = tf.lite.TFLiteConverter.from_keras_model(model1)
tflite_model = converter.convert()

# Save the model.
with open('model.tflite', 'wb') as f:
  f.write(tflite_model)

# Load the TFLite model in TFLite Interpreter
interpreter = tf.lite.Interpreter('model.tflite')
interpreter.allocate_tensors()

img = image.load_img('2008_002682.jpg', target_size=(224, 224))
img = image.img_to_array(img)
img = preprocess_input(img)
img = np.expand_dims(img, axis=0)
print(img.shape)

input  = interpreter.get_input_details()[0]
output = interpreter.get_output_details()[0]

interpreter.set_tensor(input['index'], tf.convert_to_tensor(img))

t_model = time.perf_counter()
interpreter.invoke()
print(f'do inference cost:{time.perf_counter() - t_model:.8f}s')

output = interpreter.get_tensor(output['index'])
print(output.shape)

print('Predicted:', decode_predictions(output, top=5)[0])

Part of the run output is below; the result and speed are not much different from the SavedModel-to-TF-Lite conversion above:

(1, 224, 224, 3)
do inference cost:0.18793295s
(1, 1000)
Predicted: [('n02123597', 'Siamese_cat', 0.16550776), ('n02108915', 'French_bulldog', 0.14137983), ('n04409515', 'tennis_ball', 0.08570886), ('n02095314', 'wire-haired_fox_terrier', 0.05204646), ('n02123045', 'tabby', 0.050695557)]

Process finished with exit code 0

The page below has some transfer-learning examples, which you can follow on the official site (a rough sketch is included after the link):

TensorFlow Lite
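
As a rough illustration (my own addition, not from the original post), the TensorFlow Lite Model Maker library is one way to do such transfer learning and export a .tflite model directly. A minimal sketch, assuming Model Maker is installed (pip install tflite-model-maker) and that flower_photos/ is a hypothetical folder whose sub-directories are the class labels:

from tflite_model_maker import image_classifier
from tflite_model_maker.image_classifier import DataLoader

# Load images from a folder organized as one sub-directory per class (hypothetical path).
data = DataLoader.from_folder('flower_photos/')
train_data, test_data = data.split(0.9)

# Retrain the default image-classification backbone on the new data.
model = image_classifier.create(train_data)

# Evaluate, then export the retrained model as a .tflite file.
loss, accuracy = model.evaluate(test_data)
model.export(export_dir='.')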

 

Next I plan to deploy TensorFlow Lite on a Raspberry Pi to run the image classification, object detection, and segmentation examples from the official site (a small sketch of running the .tflite model there follows the link below):

TensorFlow Lite Examples | TensorFlow Chinese official site
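
On the Raspberry Pi the full TensorFlow package is usually not needed; the lightweight tflite_runtime package provides just the interpreter. The snippet below is a minimal sketch of what running the model.tflite produced above might look like there, assuming tflite_runtime is installed and the model plus test image have been copied over; the preprocessing re-implements ResNet50's 'caffe'-style preprocess_input (RGB to BGR plus mean subtraction), and only the raw class indices are printed since decode_predictions needs Keras.

import numpy as np
from PIL import Image
from tflite_runtime.interpreter import Interpreter  # lightweight interpreter, no full TensorFlow

# Load and resize the test image, then apply ResNet50's 'caffe'-style preprocessing:
# RGB -> BGR and subtraction of the ImageNet channel means.
img = Image.open('2008_002682.jpg').resize((224, 224))
x = np.asarray(img, dtype=np.float32)
x = x[..., ::-1] - np.array([103.939, 116.779, 123.68], dtype=np.float32)
x = np.expand_dims(x, axis=0)    # NHWC batch of one

interpreter = Interpreter(model_path='model.tflite')
interpreter.allocate_tensors()
inp = interpreter.get_input_details()[0]
out = interpreter.get_output_details()[0]

interpreter.set_tensor(inp['index'], x)
interpreter.invoke()
scores = interpreter.get_tensor(out['index'])[0]
print('top-5 class indices:', scores.argsort()[-5:][::-1])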

 
