Building a Logistic Regression Model to Recognize MNIST Handwritten Digits: a Single Neuron


Experiment Steps

1. Import the libraries
import tensorflow as tf
import numpy as np
import matplotlib.pyplot as plt
%matplotlib inline
print("Tensorflow版本是:",tf.__version__)

2. Load the data
The MNIST dataset is available at http://yann.lecun.com/exdb/mnist/. TensorFlow provides a built-in loader (the 1.x and 2.x APIs differ).
mnist = tf.keras.datasets.mnist
(train_images,train_labels),(test_images,test_labels)=mnist.load_data()
If the MNIST files are not present in the target directory, load_data() downloads them automatically, which takes a little while; if they already exist, they are read directly.
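As a quick sanity check (a sketch, not part of the original notebook), the loaded arrays can be inspected; the standard Keras MNIST split contains 60,000 training and 10,000 test images of 28x28 pixels:

# Sanity check (assumes the standard Keras MNIST split)
print(train_images.shape, train_labels.shape)  # (60000, 28, 28) (60000,)
print(test_images.shape, test_labels.shape)    # (10000, 28, 28) (10000,)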
3. Split the dataset
total_num = len(train_images)
valid_split = 0.2                                 # hold out 20% of the training data for validation
train_num = int(total_num*(1-valid_split))

train_x = train_images[:train_num]
train_y = train_labels[:train_num]

valid_x = train_images[train_num:]
valid_y = train_labels[train_num:]

test_x = test_images
test_y = test_labels

valid_x.shape    # (12000, 28, 28)
4. Reshape the data
# flatten each 28x28 image into a 784-dimensional row vector
train_x = train_x.reshape(-1,784)
valid_x = valid_x.reshape(-1,784)
test_x = test_x.reshape(-1,784)

5. Normalize the features
# scale pixel values from [0, 255] to [0, 1] and cast to float32
train_x = tf.cast(train_x/255.0,tf.float32)
valid_x = tf.cast(valid_x/255.0,tf.float32)
test_x = tf.cast(test_x/255.0,tf.float32)


train_x[1]    # inspect one normalized sample

6. One-hot encode the labels
One-hot encoding is commonly used to represent strings or identifiers that take a finite set of possible values (a small worked example follows after the code below).
train_y = tf.one_hot(train_y,depth=10)
valid_y = tf.one_hot(valid_y,depth=10)
test_y = tf.one_hot(test_y,depth=10)

train_y
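As a small illustration (hypothetical values, not from the original), a scalar label such as 5 becomes a length-10 vector with a 1 in position 5:

# Mini-example of one-hot encoding a single label
print(tf.one_hot(5, depth=10).numpy())
# [0. 0. 0. 0. 0. 1. 0. 0. 0. 0.]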

7. Build the model
def model(x,w,b):
    # softmax regression with a single layer: softmax(xW + b)
    pred = tf.matmul(x,w)+b
    return tf.nn.softmax(pred)
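Because of the softmax, every output row is a probability distribution over the 10 digit classes. A minimal sketch (probe_x, probe_w and probe_b are made-up values, not from the original) to confirm the output shape and that each row sums to 1:

# Probe the model with random inputs of the expected shapes
probe_x = tf.random.uniform([2, 784])
probe_w = tf.random.normal([784, 10])
probe_b = tf.zeros([10])
probe_pred = model(probe_x, probe_w, probe_b)
print(probe_pred.shape)                           # (2, 10)
print(tf.reduce_sum(probe_pred, axis=1).numpy())  # approximately [1. 1.]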
8. Define the model variables
# W: weights (784 inputs x 10 classes), initialized from a normal distribution
W = tf.Variable(tf.random.normal([784,10],mean=0.0,stddev=1.0,dtype=tf.float32))

# B: one bias per class, initialized to zero
B = tf.Variable(tf.zeros([10]),dtype=tf.float32)
9. Define the cross-entropy loss function
The custom loss function calls TensorFlow's built-in categorical cross-entropy directly (a small check against the definition follows after the code below).
def loss(x,y,w,b):
    pred = model(x,w,b)
    # per-sample categorical cross-entropy, averaged over the batch
    loss_ = tf.keras.losses.categorical_crossentropy(y_true=y,y_pred=pred)
    return tf.reduce_mean(loss_)
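A minimal sketch (demo_y and demo_pred are made-up values, not from the original) comparing the Keras helper with the definition -sum(y_true * log(y_pred)) for one sample:

# One sample: true class is index 1, predicted probability 0.7 for that class
demo_y    = tf.constant([[0.0, 1.0, 0.0]])
demo_pred = tf.constant([[0.2, 0.7, 0.1]])
keras_loss  = tf.keras.losses.categorical_crossentropy(demo_y, demo_pred)
manual_loss = -tf.reduce_sum(demo_y * tf.math.log(demo_pred), axis=1)
print(keras_loss.numpy(), manual_loss.numpy())  # both approximately 0.3567 (= -log(0.7))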
10. Set the training hyperparameters
training_epochs=20
batch_size=50
learning_rate=0.001
11. Define the gradient function
def grad(x,y,w,b):
    # record the forward pass, then return d(loss)/dW and d(loss)/dB
    with tf.GradientTape() as tape:
        loss_ = loss(x,y,w,b)
    return tape.gradient(loss_,[w,b])
12. Choose an optimizer
Commonly used optimizers include SGD, Adagrad, Adadelta, RMSprop, and Adam; see the sketch after the code below for swapping one in.
optimizer=tf.keras.optimizers.Adam(learning_rate=learning_rate)
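Any of the optimizers listed above can be dropped in by changing only this line; for example (a sketch, keeping the same learning rate):

# Alternative (sketch): plain stochastic gradient descent instead of Adam
# optimizer = tf.keras.optimizers.SGD(learning_rate=learning_rate)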

13. Define the accuracy metric
def accuracy(x,y,w,b):
    pred = model(x,w,b)
    # a prediction counts as correct when the most probable class
    # matches the position of the 1 in the one-hot label
    correct_prediction = tf.equal(tf.argmax(pred,1),tf.argmax(y,1))
    return tf.reduce_mean(tf.cast(correct_prediction,tf.float32))
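A tiny illustration of this argmax comparison (toy_pred and toy_y are made-up values, not from the original), where the first sample is classified correctly and the second is not:

toy_pred = tf.constant([[0.1, 0.8, 0.1],
                        [0.3, 0.3, 0.4]])
toy_y    = tf.constant([[0.0, 1.0, 0.0],
                        [1.0, 0.0, 0.0]])
correct = tf.equal(tf.argmax(toy_pred, 1), tf.argmax(toy_y, 1))
print(tf.reduce_mean(tf.cast(correct, tf.float32)).numpy())  # 0.5: one of two correct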
14. Train the model
total_step = int(train_num/batch_size)

loss_list_train = []
loss_list_valid = []
acc_list_train = []
acc_list_valid = []

for epoch in range(training_epochs):
    for step in range(total_step):
        # take the next mini-batch of features and labels
        xs = train_x[step*batch_size:(step+1)*batch_size]
        ys = train_y[step*batch_size:(step+1)*batch_size]

        grads = grad(xs,ys,W,B)                      # gradients of the loss w.r.t. W and B
        optimizer.apply_gradients(zip(grads,[W,B]))  # apply one optimizer update
    
    loss_train = loss(train_x,train_y,W,B).numpy()
    loss_valid = loss(valid_x,valid_y,W,B).numpy()
    acc_train = accuracy(train_x,train_y,W,B).numpy()
    acc_valid = accuracy(valid_x,valid_y,W,B).numpy()
    loss_list_train.append(loss_train)
    loss_list_valid.append(loss_valid)
    acc_list_train.append(acc_train)
    acc_list_valid.append(acc_valid)
    print("epoch={:3d},train_loss={:.4f},train_acc={:.4f},val_loss={:.4f},val_lacc={:.4f}".format(epoch+1,loss_train,acc_train,loss_valid,acc_valid))

15. Plot the training history
plt.xlabel("Epochs")
plt.ylabel("Loss")
plt.plot(loss_list_train,'blue',label="Train Loss")
plt.plot(loss_list_valid,'red',label='Valid Loss')
plt.legend(loc=1)

plt.xlabel("Epochs")
plt.ylabel("Accuracy")
plt.plot(acc_list_train,'blue',label="Train Loss")
plt.plot(acc_list_valid,'red',label='Valid Loss')
plt.legend(loc=1)

16. Evaluate the model
acc_test = accuracy(test_x,test_y,W,B).numpy()
print("Test accuracy:",acc_test)

17. Apply the model and visualize the results
  1. Apply the model
    def predict(x,w,b):
        pred=model(x,w,b)
        result=tf.argmax(pred,1).numpy()
        return result
    
    pred_test=predict(test_x,W,B)
    
    pred_test[0]
  2. Define a visualization function
    import matplotlib.pyplot as plt
    import numpy as np
    def plot_images_label_prediction(images,
                                     labels,
                                     preds,
                                     index=0,
                                     num=10
                                    ):
        fig = plt.gcf()
        fig.set_size_inches(10,4)
        if num > 10:
            num = 10
        for i in range(0,num):
            ax = plt.subplot(2,5,i+1)
            
            ax.imshow(np.reshape(images[index],(28,28)),cmap='binary')
            
            title = "label=" + str(labels[index])
            if len(preds)>0:
                title +=",predict=" + str(labels[index])
                
            ax.set_title(title,fontsize=10)
            ax.set_xticks([]);
            ax.set_yticks([])
            index = index + 1
            
            
        plt.show()    
  3. Visualize the predictions
    plot_images_label_prediction(test_images,test_labels,pred_test,10,10)

Source: 内存溢出, http://outofmemory.cn/langs/797468.html