TensorFlow in Practice: the Cats and Dogs Dataset


Contents

Dataset:
Baidu Netdisk link:
Experiment environment:
Dataset processing:
Building the networks:
VGG16 (not the original; the variant from the Peking University course on Bilibili)
InceptionNet_v1_10
Training:
Prediction:


Dataset: Baidu Netdisk link:

Link: https://pan.baidu.com/s/1Vl1Z2RwPKNXCrtpcBWygAw
Extraction code: 1234

After extraction there are two folders, one with Cat images and one with Dog images, 12,500 images per class (it is best to extract to a path without Chinese characters).


Experiment environment:

TensorFlow 2.2

OpenCV

NumPy

tqdm (progress bar)

Python 3.6
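
Optionally, you can confirm the versions installed locally with a quick check (a small sketch, not from the original post):

# Optional: print the versions actually installed in the current environment.
import tensorflow as tf
import cv2
import numpy as np

print("TensorFlow:", tf.__version__)
print("OpenCV    :", cv2.__version__)
print("NumPy     :", np.__version__)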

Dataset processing:

I personally prefer to write the dataset contents into a plain-text file, which also makes it convenient to read back later.


from tqdm import tqdm
import os

#   Paths you need to change for your own machine
datasets_path = r'E:\Datasets\CatsandDogs\PetImages'  # only needs to point to this directory
anno_path = r'E:\Python_Subject\Study_All\DeepLearning_Study\DataSets\Cats_and_Dogs/anno.txt'  # path of the generated text file

cats_datasets_path = datasets_path + '/Cat/'
dogs_datasets_path = datasets_path + '/Dog/'

number_cats = len(os.listdir(cats_datasets_path))
number_dogs = len(os.listdir(dogs_datasets_path))
print("cats total number ={},dogs total number ={}".format(number_cats, number_dogs))

with open(anno_path, 'w') as f:
    cats = os.listdir(cats_datasets_path)  # list of all cat image files
    dogs = os.listdir(dogs_datasets_path)  # list of all dog image files
    with tqdm(total=number_cats + number_dogs) as pbar:  # progress bar
        for i in range(number_cats):
            pbar.update(1)  # advance the progress bar
            f.write(cats_datasets_path + str(cats[i]) + ' ' + '0' + '\n')  # 0 means cat
        for j in range(number_dogs):
            pbar.update(1)
            f.write(dogs_datasets_path + str(dogs[j]) + ' ' + '1' + '\n')  # 1 means dog

The places you need to change are marked in the code: the dataset path only needs to point to the PetImages directory, and anno_path is the path of the text file to generate.

After running, a text file describing the dataset is produced; each line contains the image path followed by the class label (0 for cat, 1 for dog):

E:\Datasets\CatsandDogs\PetImages/Cat/cat.0.jpg 0
E:\Datasets\CatsandDogs\PetImages/Cat/cat.1.jpg 0
E:\Datasets\CatsandDogs\PetImages/Cat/cat.10.jpg 0
E:\Datasets\CatsandDogs\PetImages/Cat/cat.100.jpg 0
E:\Datasets\CatsandDogs\PetImages/Cat/cat.1000.jpg 0
E:\Datasets\CatsandDogs\PetImages/Cat/cat.10000.jpg 0
E:\Datasets\CatsandDogs\PetImages/Cat/cat.10001.jpg 0
E:\Datasets\CatsandDogs\PetImages/Cat/cat.10002.jpg 0
E:\Datasets\CatsandDogs\PetImages/Cat/cat.10003.jpg 0
E:\Datasets\CatsandDogs\PetImages/Cat/cat.10004.jpg 0
E:\Datasets\CatsandDogs\PetImages/Cat/cat.10005.jpg 0
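
As a quick sanity check (a minimal sketch, assuming anno.txt was generated at the path configured above), the file can be read back and the samples counted per class:

# Read the annotation file back and count how many samples each class has.
anno_path = r'E:\Python_Subject\Study_All\DeepLearning_Study\DataSets\Cats_and_Dogs/anno.txt'

with open(anno_path, 'r') as f:
    lines = f.readlines()

labels = [line.strip().split(' ')[-1] for line in lines]  # last field of each line is the label
print("total:", len(lines))
print("cats :", labels.count('0'))
print("dogs :", labels.count('1'))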
Building the networks:

Two networks are provided. The first is VGG16, whose final test accuracy is 51.36%, which is rather poor.

The other is InceptionNetV1, which reaches 85% accuracy.

Both are built by subclassing tf.keras.Model.
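
For reference, the subclassing pattern is: create the layers in __init__ and write the forward pass in call. A minimal sketch with a hypothetical toy model, just to illustrate the pattern:

import tensorflow as tf

class TinyNet(tf.keras.Model):  # hypothetical toy model, only to show the subclassing pattern
    def __init__(self, num_classes=2):
        super(TinyNet, self).__init__()
        self.conv = tf.keras.layers.Conv2D(16, (3, 3), padding='same', activation='relu')
        self.pool = tf.keras.layers.GlobalAveragePooling2D()
        self.fc = tf.keras.layers.Dense(num_classes, activation='softmax')

    def call(self, inputs):
        x = self.conv(inputs)
        x = self.pool(x)
        return self.fc(x)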

VGG16 (not the original; the variant from the Peking University course on Bilibili)
import tensorflow as tf

class VGG16(tf.keras.Model):
    def __init__(self):
        super(VGG16, self).__init__()

        self.conv1 = tf.keras.layers.Conv2D(filters=64,kernel_size=(3,3),padding='same')
        self.bn1 = tf.keras.layers.BatchNormalization()
        self.ac1 = tf.keras.layers.Activation(tf.keras.activations.relu)

        self.conv2 = tf.keras.layers.Conv2D(filters=64,kernel_size=(3,3),padding='same')
        self.bn2 = tf.keras.layers.BatchNormalization()
        self.ac2 = tf.keras.layers.Activation(tf.keras.activations.relu)
        self.pool2 = tf.keras.layers.MaxPool2D(pool_size=(2,2),strides=2,padding='same')
        self.drop2 = tf.keras.layers.Dropout(0.2)

        self.conv3 = tf.keras.layers.Conv2D(filters=128,kernel_size=(3,3),padding='same')
        self.bn3 = tf.keras.layers.BatchNormalization()
        self.ac3 = tf.keras.layers.Activation(tf.keras.activations.relu)

        self.conv4 = tf.keras.layers.Conv2D(filters=128,kernel_size=(3,3),padding='same')
        self.bn4 = tf.keras.layers.BatchNormalization()
        self.ac4 = tf.keras.layers.Activation(tf.keras.activations.relu)
        self.pool4 = tf.keras.layers.MaxPool2D(pool_size=(2, 2), strides=2, padding='same')
        self.drop4 = tf.keras.layers.Dropout(0.2)

        self.conv5 = tf.keras.layers.Conv2D(filters=256, kernel_size=(3, 3), padding='same')
        self.bn5 = tf.keras.layers.BatchNormalization()
        self.ac5 = tf.keras.layers.Activation(tf.keras.activations.relu)

        self.conv6 = tf.keras.layers.Conv2D(filters=256, kernel_size=(3, 3), padding='same')
        self.bn6 = tf.keras.layers.BatchNormalization()
        self.ac6 = tf.keras.layers.Activation(tf.keras.activations.relu)

        self.conv7 = tf.keras.layers.Conv2D(filters=256, kernel_size=(3, 3), padding='same')
        self.bn7 = tf.keras.layers.BatchNormalization()
        self.ac7 = tf.keras.layers.Activation(tf.keras.activations.relu)
        self.pool7 = tf.keras.layers.MaxPool2D(pool_size=(2, 2), strides=2, padding='same')
        self.drop7 = tf.keras.layers.Dropout(0.2)

        self.conv8 = tf.keras.layers.Conv2D(filters=512, kernel_size=(3, 3), padding='same')
        self.bn8 = tf.keras.layers.BatchNormalization()
        self.ac8 = tf.keras.layers.Activation(tf.keras.activations.relu)

        self.conv9 = tf.keras.layers.Conv2D(filters=512, kernel_size=(3, 3), padding='same')
        self.bn9 = tf.keras.layers.BatchNormalization()
        self.ac9 = tf.keras.layers.Activation(tf.keras.activations.relu)

        self.conv10 = tf.keras.layers.Conv2D(filters=512, kernel_size=(3, 3), padding='same')
        self.bn10 = tf.keras.layers.BatchNormalization()
        self.ac10 = tf.keras.layers.Activation(tf.keras.activations.relu)
        self.pool10 = tf.keras.layers.MaxPool2D(pool_size=(2, 2), strides=2, padding='same')
        self.drop10 = tf.keras.layers.Dropout(0.2)

        self.conv11 = tf.keras.layers.Conv2D(filters=512, kernel_size=(3, 3), padding='same')
        self.bn11 = tf.keras.layers.BatchNormalization()
        self.ac11 = tf.keras.layers.Activation(tf.keras.activations.relu)

        self.conv12 = tf.keras.layers.Conv2D(filters=512, kernel_size=(3, 3), padding='same')
        self.bn12 = tf.keras.layers.BatchNormalization()
        self.ac12 = tf.keras.layers.Activation(tf.keras.activations.relu)

        self.conv13 = tf.keras.layers.Conv2D(filters=512, kernel_size=(3, 3), padding='same')
        self.bn13 = tf.keras.layers.BatchNormalization()
        self.ac13 = tf.keras.layers.Activation(tf.keras.activations.relu)
        self.pool13 = tf.keras.layers.MaxPool2D(pool_size=(2, 2), strides=2, padding='same')
        self.drop13 = tf.keras.layers.Dropout(0.2)

        self.flatten = tf.keras.layers.Flatten()
        self.dense1 = tf.keras.layers.Dense(512,activation=tf.keras.activations.relu)
        self.drop14 = tf.keras.layers.Dropout(0.2)
        self.dense2 = tf.keras.layers.Dense(512, activation=tf.keras.activations.relu)
        self.drop15 = tf.keras.layers.Dropout(0.2)
        self.dense3 = tf.keras.layers.Dense(2,activation=tf.keras.activations.softmax)

    def call(self, inputs, training=True):
        x = inputs

        x = self.conv1(x)
        x = self.bn1(x)
        x = self.ac1(x)

        x = self.conv2(x)
        x = self.bn2(x)
        x = self.ac2(x)
        x = self.pool2(x)
        if training:
            x = self.drop2(x,training=training)

        x = self.conv3(x)
        x = self.bn3(x)
        x = self.ac3(x)

        x = self.conv4(x)
        x = self.bn4(x)
        x = self.ac4(x)
        x = self.pool4(x)
        if training:
            x = self.drop4(x,training=training)

        x = self.conv5(x)
        x = self.bn5(x)
        x = self.ac5(x)

        x = self.conv6(x)
        x = self.bn6(x)
        x = self.ac6(x)

        x = self.conv7(x)
        x = self.bn7(x)
        x = self.ac7(x)
        x = self.pool7(x)
        if training:
            x = self.drop7(x,training=training)

        x = self.conv8(x)
        x = self.bn8(x)
        x = self.ac8(x)

        x = self.conv9(x)
        x = self.bn9(x)
        x = self.ac9(x)

        x = self.conv10(x)
        x = self.bn10(x)
        x = self.ac10(x)
        x = self.pool10(x)
        if training:
            x = self.drop10(x,training=training)

        x = self.conv11(x)
        x = self.bn11(x)
        x = self.ac11(x)

        x = self.conv12(x)
        x = self.bn12(x)
        x = self.ac12(x)

        x = self.conv13(x)
        x = self.bn13(x)
        x = self.ac13(x)
        x = self.pool13(x)
        if training:
            x = self.drop13(x,training=training)

        x = self.flatten(x)
        x = self.dense1(x)
        if training:
            x = self.drop14(x, training=training)
        x = self.dense2(x)
        if training:
            x = self.drop15(x, training=training)
        x = self.dense3(x)

        outputs = x
        return outputs
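
To instantiate the model and inspect its layers and parameter counts, a minimal sketch (assuming the 224×224 RGB inputs used later during training):

# Build the subclassed model once so that summary() can report its layers and parameters.
model = VGG16()
model.build(input_shape=(1, 224, 224, 3))  # same input shape as used in the prediction script
model.summary()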
InceptionNet_v1_10
import tensorflow as tf

class InceptionNet_v1_10(tf.keras.Model):
    def __init__(self, num_blocks, num_classes, init_ch=16, **kwargs):
        super(InceptionNet_v1_10, self).__init__(**kwargs)
        self.in_channels = init_ch
        self.out_channels = init_ch
        self.num_blocks = num_blocks
        self.init_ch = init_ch

        self.c1 = ConvBNRelu(init_ch)
        self.blocks = tf.keras.models.Sequential()

        for block_id in range(num_blocks):
            for layer_id in range(2):
                if layer_id == 0:
                    block = Inception_Block(self.out_channels, strides=2)
                else:
                    block = Inception_Block(self.out_channels, strides=1)
                self.blocks.add(block)
            self.out_channels *= 2

        self.pool1 = tf.keras.layers.GlobalAveragePooling2D()
        self.dense1 = tf.keras.layers.Dense(units=num_classes, activation=tf.keras.activations.softmax)

    def call(self, inputs):
        x = inputs
        x = self.c1(x)
        x = self.blocks(x)
        x = self.pool1(x)
        x = self.dense1(x)
        outputs = x

        return outputs


"""
Inception 结构块
"""


class Inception_Block(tf.keras.Model):
    def __init__(self, ch, strides=1):
        super(Inception_Block, self).__init__()
        self.ch = ch
        self.strides = strides

        self.conv1 = ConvBNRelu(ch, kernel_size=(1, 1), strides=strides)

        self.conv2_1 = ConvBNRelu(ch, kernel_size=(1, 1), strides=strides)
        self.conv2_2 = ConvBNRelu(ch, kernel_size=(3, 3), strides=1)

        self.conv3_1 = ConvBNRelu(ch, kernel_size=(1, 1), strides=strides)
        self.conv3_2 = ConvBNRelu(ch, kernel_size=(5, 5), strides=1)

        self.pool4 = tf.keras.layers.MaxPool2D(pool_size=(3, 3), strides=1, padding='same')
        self.conv4 = ConvBNRelu(ch, kernel_size=(1, 1), strides=strides)

    def call(self, x):
        x1 = self.conv1(x)

        x2 = self.conv2_1(x)
        x2 = self.conv2_2(x2)

        x3 = self.conv3_1(x)
        x3 = self.conv3_2(x3)

        x4 = self.pool4(x)
        x4 = self.conv4(x4)

        x = tf.concat([x1, x2, x3, x4], axis=3)
        return x
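
Both classes above call a ConvBNRelu helper (Conv2D followed by BatchNormalization and ReLU) whose definition is not shown in the post. Below is a minimal sketch that matches how it is called here (channel count, kernel_size, strides; 'same' padding assumed); it should live in the same module as the two classes:

class ConvBNRelu(tf.keras.Model):
    """Conv2D -> BatchNormalization -> ReLU, the basic unit used by the Inception blocks."""
    def __init__(self, ch, kernel_size=(3, 3), strides=1, padding='same'):
        super(ConvBNRelu, self).__init__()
        self.model = tf.keras.models.Sequential([
            tf.keras.layers.Conv2D(filters=ch, kernel_size=kernel_size, strides=strides, padding=padding),
            tf.keras.layers.BatchNormalization(),
            tf.keras.layers.Activation(tf.keras.activations.relu)
        ])

    def call(self, x):
        return self.model(x)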
Training:
import numpy as np
import cv2
import tensorflow as tf
from tqdm import tqdm
from net import InceptionNet_v1_10  # the network classes above, assumed to live in net.py

#   Change this to your own annotation file path
anno_txt = r'E:\Python_Subject\Study_All\DeepLearning_Study\DataSets\Cats_and_Dogs\anno.txt'

#   Enable the GPU with memory growth
def use_gpu():
    gpus = tf.config.list_physical_devices(device_type='GPU')
    for gpu in gpus:
        tf.config.experimental.set_memory_growth(device=gpu, enable=True)

use_gpu()  # skip this call if no GPU is available

#   Read all samples from the annotation file
datasets = []
with open(anno_txt, 'r') as f:
    datasets = datasets + f.readlines()

#   Shuffle randomly
np.random.seed(20220407)  # fixed seed so the split is reproducible
np.random.shuffle(datasets)  # shuffle the dataset
np.random.seed()

#   Split into training and test sets, 8:2
train = datasets[0:int(len(datasets) * 0.8)]
test = datasets[int(len(datasets) * 0.8):]

#   Training set: image paths and labels
train_image = []
train_label = []
for item in train:
    info = item.replace('\n', '').split(' ')  # strip the newline and split on the space
    image_path = info[0]  # image path
    image_label = info[1]  # label
    train_image.append(image_path)
    train_label.append(image_label)

#   Test set: image paths and labels
test_image = []
test_label = []
for item in test:
    info = item.replace('\n', '').split(' ')  # strip the newline and split on the space
    image_path = info[0]  # image path
    image_label = info[1]  # label
    test_image.append(image_path)
    test_label.append(image_label)


#   Build the dataset with tf.keras.utils.Sequence
class My_Datasets_Sequence(tf.keras.utils.Sequence):
    def __init__(self, x, y, batch_size):
        self.x = x
        self.y = y
        self.batch_size = batch_size

    def __len__(self):
        return int(np.ceil(len(self.x) / float(self.batch_size)))

    def __getitem__(self, item):
        batch_x = self.x[item * self.batch_size:(item + 1) * self.batch_size]
        batch_y = self.y[item * self.batch_size:(item + 1) * self.batch_size]

        images = []
        labels = []

        for i in range(len(batch_x)):  # len(batch_x), so the last (possibly smaller) batch also works
            image_path = str(batch_x[i])  # image path
            label = int(batch_y[i])  # label
            image = cv2.imread(image_path)  # read the image
            image = tf.image.resize(image, size=(224, 224))  # resize
            image = image / 255.  # normalize to [0, 1]

            images.append(image)
            labels.append(label)
        return np.array(images), np.array(labels)

#    Instantiate the network
model = InceptionNet_v1_10(num_blocks=2, num_classes=2)

#    Number of epochs and batch size (adjust as needed)
epoch = 5
batch_size = 8

#    Instantiate the training and test Sequences
train_datasets = My_Datasets_Sequence(x=train_image, y=train_label, batch_size=batch_size)
test_datasets = My_Datasets_Sequence(x=test_image, y=test_label, batch_size=batch_size)

#    Optimizer
optimizer = tf.keras.optimizers.Adam()
#    Accuracy metric
sparse_accuracy = tf.keras.metrics.SparseCategoricalAccuracy()

for i in range(epoch):
    print("epoch = {}".format(i))
    with tqdm(total=train_datasets.__len__()) as pbar:  # progress bar
        for j in range(0, train_datasets.__len__()):
            pbar.update(1)  # advance the progress bar
            img, label = train_datasets.__getitem__(j)  # fetch a batch of images and labels
            with tf.GradientTape() as tape:  # record gradients
                y_pred = model(img)
                loss = tf.keras.losses.sparse_categorical_crossentropy(y_pred=y_pred, y_true=label)
                loss = tf.reduce_mean(loss)
            grads = tape.gradient(loss, model.trainable_variables)
            optimizer.apply_gradients(grads_and_vars=zip(grads, model.trainable_variables))

#    After training, evaluate on the test set
for t in range(0, test_datasets.__len__()):
    img, label = test_datasets.__getitem__(t)  # evaluate on the test Sequence, not the training one
    y_t = model.predict(img)
    sparse_accuracy.update_state(y_true=label, y_pred=y_t)
print("test accuracy = %f" % sparse_accuracy.result())

#    Save the weights; change the path to your own
model.save_weights(filepath=r'E:\Python_Subject\Study_All\DeepLearning_Study\CatsandDogs/weight.hdf5')
Prediction:
import tensorflow as tf
from net import InceptionNet_v1_10
import cv2

weight_path = r"E:\Python_Subject\Study_All\DeepLearning_Study\CatsandDogs\weight.hdf5"

classes = {'0': 'cat', '1': 'dog'}

model = InceptionNet_v1_10(num_blocks=2, num_classes=2)
model.build(input_shape=(1, 224, 224, 3))  # build the subclassed model before loading weights

model.load_weights(weight_path)

while True:
    img = input("Input a image \n")  # path of the image to classify
    image = cv2.imread(img)  # read the image; cv2.imread returns None instead of raising on failure
    if image is None:
        print("No such image name as {}".format(img))
        continue
    image = tf.image.resize(image, size=(224, 224))
    image = image / 255.
    image = tf.expand_dims(image, axis=0)    # shape (1, 224, 224, 3)
    predict = model.predict(image)[0].tolist()  # class probabilities
    pred_posi = predict.index(max(predict))  # index of the largest probability
    pred_classes = classes[str(pred_posi)]  # map the index to the class name
    print(pred_classes + '\n')

After starting the script, type in the path of an image and it will print whether it predicts a cat or a dog.
