TensorFlow 2.0 Learning: Quick Start


Contents
  • Basic Version
    • Imports
    • Data Preparation
    • Model Preparation
    • Running It
    • Full Code
  • Advanced Version
    • Imports
    • Data Preparation
    • Model Preparation
    • Running It
    • Full Code

TensorFlow 2.0 tutorial
Following along with the official tutorial to try things out.
The first simple example is MNIST handwritten digit classification.

Basic Version

Imports

First, import the TensorFlow package.

# Import TensorFlow

import tensorflow as tf
Data Preparation

Next, load the data and convert the integers to floating point: each pixel in these images is an integer in the range 0-255, and scaling it to 0-1 makes the computation easier.

mnist = tf.keras.datasets.mnist

(x_train, y_train), (x_test, y_test) = mnist.load_data()
x_train, x_test = x_train / 255.0, x_test / 255.0
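A quick sanity check (my addition, not part of the tutorial) to confirm the shapes and the value range after scaling:

print(x_train.shape, x_test.shape)   # (60000, 28, 28) (10000, 28, 28)
print(x_train.min(), x_train.max())  # 0.0 1.0 after dividing by 255.0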
Model Preparation

With the data ready, bring in the model. This one is very simple: just two fully connected layers, with a Flatten at the input and a Dropout in between.

# Model structure
model = tf.keras.models.Sequential([
  tf.keras.layers.Flatten(input_shape=(28, 28)),
  tf.keras.layers.Dense(128, activation='relu'),
  tf.keras.layers.Dropout(0.2),
  tf.keras.layers.Dense(10, activation='softmax')
])
# Define the model's optimizer and loss function; metrics is a parameter where you can add various accuracy metrics, and they are computed for us automatically
model.compile(optimizer='adam',
              loss='sparse_categorical_crossentropy',
              metrics=['accuracy'])
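Not required by the tutorial, but since input_shape is given to the Flatten layer the model is already built, so summary() can be called to double-check the layer stack and parameter counts:

model.summary()
# Flatten -> 784 values, Dense(128) -> 100,480 params, Dropout, Dense(10) -> 1,290 params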
Running It

Then comes training. Here the fit function handles the training loop automatically, and evaluate reports the loss and accuracy on the test set.

model.fit(x_train, y_train, epochs=5)

model.evaluate(x_test,  y_test, verbose=2)
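As an extra step that is not in the original tutorial, the trained model can also be used for prediction directly; because the last layer is a softmax, each output row is a probability distribution over the 10 digits:

probs = model.predict(x_test[:5])          # shape (5, 10), one probability row per image
preds = tf.argmax(probs, axis=1).numpy()   # predicted digit for each image
print(preds, y_test[:5])                   # predictions vs. ground-truth labels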
Full Code
# Import TensorFlow

import tensorflow as tf

mnist = tf.keras.datasets.mnist

(x_train, y_train), (x_test, y_test) = mnist.load_data()
x_train, x_test = x_train / 255.0, x_test / 255.0

model = tf.keras.models.Sequential([
    tf.keras.layers.Flatten(input_shape=(28, 28)),
    tf.keras.layers.Dense(128, activation='relu'),
    tf.keras.layers.Dropout(0.2),
    tf.keras.layers.Dense(10, activation='softmax')
])

model.compile(optimizer='adam',
              loss='sparse_categorical_crossentropy',
              metrics=['accuracy'])

model.fit(x_train, y_train, epochs=5)

model.evaluate(x_test, y_test, verbose=2)

Results

Epoch 1/5
1875/1875 [==============================] - 1s 550us/step - loss: 0.2912 - accuracy: 0.9162
Epoch 2/5
1875/1875 [==============================] - 1s 554us/step - loss: 0.1396 - accuracy: 0.9574
Epoch 3/5
1875/1875 [==============================] - 1s 543us/step - loss: 0.1064 - accuracy: 0.9672
Epoch 4/5
1875/1875 [==============================] - 1s 549us/step - loss: 0.0858 - accuracy: 0.9729
Epoch 5/5
1875/1875 [==============================] - 1s 542us/step - loss: 0.0738 - accuracy: 0.9773
313/313 - 0s - loss: 0.0811 - accuracy: 0.9761
Advanced Version

Imports

Again, import the required libraries; this time there are a few more.

import tensorflow as tf

from tensorflow.keras.layers import Dense, Flatten, Conv2D
from tensorflow.keras import Model
Data Preparation

Prepare the data just like before, except that this model is convolutional, so we need to add a channels dimension; tf.newaxis makes that quite direct. One more step is wrapping the arrays in tf.data so they are convenient to feed to the model. tf.data.Dataset.from_tensor_slices slices the arrays along their first axis, so each element of the dataset is one (image, label) pair; the format and shape of each sample is settled up front, which keeps the data going into the model more regular. A quick shape check on one batch is sketched right after the code below.

.shuffle() shuffles the data; the larger the buffer size, the more thorough the shuffling.
.batch(32) sets how many samples enter the model on each step.
See also: usage of shuffle(), repeat(), and batch() with tf.data.Dataset.from_tensor_slices.

mnist = tf.keras.datasets.mnist

(x_train, y_train), (x_test, y_test) = mnist.load_data()
x_train, x_test = x_train / 255.0, x_test / 255.0

# Add a channels dimension
x_train = x_train[..., tf.newaxis].astype("float32")
x_test = x_test[..., tf.newaxis].astype("float32")

# tf.data
train_ds = tf.data.Dataset.from_tensor_slices(
    (x_train, y_train)).shuffle(10000).batch(32)

test_ds = tf.data.Dataset.from_tensor_slices((x_test, y_test)).batch(32)
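Here is that quick shape check (my addition, not from the tutorial), pulling one batch from train_ds:

for images, labels in train_ds.take(1):
    print(images.shape, labels.shape)      # (32, 28, 28, 1) (32,)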
Model Preparation

Here we define a model class of our own, much like in PyTorch, except that forward() is replaced by call().

class MyModel(Model):
  def __init__(self):
    super(MyModel, self).__init__()
    self.conv1 = Conv2D(32, 3, activation='relu')
    self.flatten = Flatten()
    self.d1 = Dense(128, activation='relu')
    self.d2 = Dense(10)

  def call(self, x):
    x = self.conv1(x)
    x = self.flatten(x)
    x = self.d1(x)
    return self.d2(x)

# Create an instance of the model
model = MyModel()
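Since call() plays the role of PyTorch's forward(), a small sanity check (my addition, not from the tutorial) is to push one dummy image through the untrained model and look at the output shape; the result is a row of 10 raw logits:

dummy = tf.zeros((1, 28, 28, 1))           # one fake 28x28 single-channel image
logits = model(dummy, training=False)      # invokes MyModel.call under the hood
print(logits.shape)                        # (1, 10)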

Prepare the optimizer and loss function; this time they are written out separately.

loss_object = tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True)

optimizer = tf.keras.optimizers.Adam()
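Note that the final Dense(10) layer has no activation, so the model outputs raw logits; that is why from_logits=True is passed to the loss. A tiny illustration (my own example, not from the tutorial) of the loss applying softmax internally:

example_logits = tf.constant([[2.0, 1.0, 0.1]])            # unnormalized scores for 3 classes
example_labels = tf.constant([0])                           # the true class index
print(float(loss_object(example_labels, example_logits)))   # about 0.42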

There are also some loss and accuracy metrics to set up.

train_loss = tf.keras.metrics.Mean(name='train_loss')
train_accuracy = tf.keras.metrics.SparseCategoricalAccuracy(name='train_accuracy')

test_loss = tf.keras.metrics.Mean(name='test_loss')
test_accuracy = tf.keras.metrics.SparseCategoricalAccuracy(name='test_accuracy')
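These metric objects are stateful accumulators: each call adds a batch, result() returns the running value, and reset_states() clears it between epochs. A small illustration (my addition, not from the tutorial) with the Mean metric:

m = tf.keras.metrics.Mean()
m(2.0)                       # running mean: 2.0
m(4.0)                       # running mean: 3.0
print(m.result().numpy())    # 3.0
m.reset_states()             # start fresh for the next epoch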
Running It

Define a training step and a test step, and then the familiar for loop runs everything.
@tf.function compiles the decorated function into a graph, which speeds up training, but breakpoints no longer work inside it; remove the decorator if you need to debug with breakpoints.
tf.GradientTape() provides the automatic differentiation; after each training step, the computed gradients are used to update the model's weights.

See also: TensorFlow computation graphs and automatic differentiation with tf.GradientTape.
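Before the full training step, here is a minimal standalone sketch (my addition, not from the tutorial) of what tf.GradientTape does, differentiating y = x^2 at x = 3:

x = tf.Variable(3.0)
with tf.GradientTape() as tape:
    y = x * x                    # y = x^2, recorded on the tape
dy_dx = tape.gradient(y, x)      # dy/dx = 2x
print(dy_dx.numpy())             # 6.0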

@tf.function
def train_step(images, labels):
  with tf.GradientTape() as tape:
    # training=True is only needed if there are layers with different
    # behavior during training versus inference (e.g. Dropout).
    predictions = model(images, training=True)
    loss = loss_object(labels, predictions)
  gradients = tape.gradient(loss, model.trainable_variables)
  optimizer.apply_gradients(zip(gradients, model.trainable_variables))

  train_loss(loss)
  train_accuracy(labels, predictions)

@tf.function
def test_step(images, labels):
  # training=False is only needed if there are layers with different
  # behavior during training versus inference (e.g. Dropout).
  predictions = model(images, training=False)
  t_loss = loss_object(labels, predictions)

  test_loss(t_loss)
  test_accuracy(labels, predictions)

EPOCHS = 5

for epoch in range(EPOCHS):
  # Reset the metrics at the start of the next epoch
  train_loss.reset_states()
  train_accuracy.reset_states()
  test_loss.reset_states()
  test_accuracy.reset_states()

  for images, labels in train_ds:
    train_step(images, labels)

  for test_images, test_labels in test_ds:
    test_step(test_images, test_labels)

  print(
    f'Epoch {epoch + 1}, '
    f'Loss: {train_loss.result()}, '
    f'Accuracy: {train_accuracy.result() * 100}, '
    f'Test Loss: {test_loss.result()}, '
    f'Test Accuracy: {test_accuracy.result() * 100}'
  )
Full Code
# Import TensorFlow

import tensorflow as tf

from tensorflow.keras.layers import Dense, Flatten, Conv2D
from tensorflow.keras import Model

mnist = tf.keras.datasets.mnist

(x_train, y_train), (x_test, y_test) = mnist.load_data()
x_train, x_test = x_train / 255.0, x_test / 255.0

# Add a channels dimension
x_train = x_train[..., tf.newaxis].astype("float32")
x_test = x_test[..., tf.newaxis].astype("float32")

train_ds = tf.data.Dataset.from_tensor_slices(
    (x_train, y_train)).shuffle(10000).batch(32)

test_ds = tf.data.Dataset.from_tensor_slices((x_test, y_test)).batch(32)


class MyModel(Model):
    def __init__(self):
        super(MyModel, self).__init__()
        self.conv1 = Conv2D(32, 3, activation='relu')
        self.flatten = Flatten()
        self.d1 = Dense(128, activation='relu')
        self.d2 = Dense(10)

    def call(self, x):
        x = self.conv1(x)
        x = self.flatten(x)
        x = self.d1(x)
        return self.d2(x)


# Create an instance of the model
model = MyModel()

loss_object = tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True)

optimizer = tf.keras.optimizers.Adam()

train_loss = tf.keras.metrics.Mean(name='train_loss')
train_accuracy = tf.keras.metrics.SparseCategoricalAccuracy(name='train_accuracy')

test_loss = tf.keras.metrics.Mean(name='test_loss')
test_accuracy = tf.keras.metrics.SparseCategoricalAccuracy(name='test_accuracy')


@tf.function
def train_step(images, labels):
    with tf.GradientTape() as tape:
        # training=True is only needed if there are layers with different
        # behavior during training versus inference (e.g. Dropout).
        predictions = model(images, training=True)
        loss = loss_object(labels, predictions)
    gradients = tape.gradient(loss, model.trainable_variables)
    optimizer.apply_gradients(zip(gradients, model.trainable_variables))

    train_loss(loss)
    train_accuracy(labels, predictions)


@tf.function
def test_step(images, labels):
    # training=False is only needed if there are layers with different
    # behavior during training versus inference (e.g. Dropout).
    predictions = model(images, training=False)
    t_loss = loss_object(labels, predictions)

    test_loss(t_loss)
    test_accuracy(labels, predictions)


EPOCHS = 5

for epoch in range(EPOCHS):
    # Reset the metrics at the start of the next epoch
    train_loss.reset_states()
    train_accuracy.reset_states()
    test_loss.reset_states()
    test_accuracy.reset_states()

    for images, labels in train_ds:
        train_step(images, labels)

    for test_images, test_labels in test_ds:
        test_step(test_images, test_labels)

    print(
        f'Epoch {epoch + 1}, '
        f'Loss: {train_loss.result()}, '
        f'Accuracy: {train_accuracy.result() * 100}, '
        f'Test Loss: {test_loss.result()}, '
        f'Test Accuracy: {test_accuracy.result() * 100}'
    )

Results

Epoch 1, Loss: 0.13607242703437805, Accuracy: 95.9000015258789, Test Loss: 0.06232420727610588, Test Accuracy: 97.91999816894531
Epoch 2, Loss: 0.04245823994278908, Accuracy: 98.67666625976562, Test Loss: 0.05139134079217911, Test Accuracy: 98.22999572753906
Epoch 3, Loss: 0.02266189455986023, Accuracy: 99.2699966430664, Test Loss: 0.052426181733608246, Test Accuracy: 98.37999725341797
Epoch 4, Loss: 0.014395711943507195, Accuracy: 99.5199966430664, Test Loss: 0.07228700816631317, Test Accuracy: 98.0999984741211
Epoch 5, Loss: 0.009736561216413975, Accuracy: 99.67500305175781, Test Loss: 0.06506235152482986, Test Accuracy: 98.33999633789062

The accuracy is quite a bit higher. Overall it is still the same flow of data, model, training. Compared with PyTorch, the biggest impression is that you don't have to worry about most of the plumbing yourself: it's all written for you and you just call it, though you also give up absolute control over the training process. It really is efficient, since there's far less to write by hand.
