The decomposition diagram below takes the bearing outer-race fault signal as an example:
2. Autoencoder (SAE)

A basic autoencoder is a three-layer neural network made up of an input layer, a hidden layer, and a reconstruction output layer; it is an unsupervised learning model. It consists of two parts:
- Encoder: compresses the input into a latent-space representation, written as the encoding function h = f(x).
- Decoder: reconstructs the input from that latent representation, written as the decoding function r = g(h).
The whole autoencoder is therefore described by g(f(x)) = r, where the output r should be close to the original input x.
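The composition r = g(f(x)) can be made concrete with a numpy-only sketch. The weights here are random and untrained, chosen purely to show the shapes of the two maps; the Keras model later in this post learns them from data:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical untrained weights: 8 input features, 3 latent units
W_enc = rng.standard_normal((3, 8)) * 0.1
W_dec = rng.standard_normal((8, 3)) * 0.1

def f(x):                       # encoder: h = f(x)
    return np.maximum(W_enc @ x, 0.0)   # linear map + ReLU (bias omitted)

def g(h):                       # decoder: r = g(h)
    return W_dec @ h

x = rng.random(8)
h = f(x)                        # latent representation, shape (3,)
r = g(h)                        # reconstruction, shape (8,)
print(h.shape, r.shape)         # (3,) (8,)
print(float(np.mean((x - r) ** 2)))  # the MSE that training would minimize
```

Training simply adjusts W_enc and W_dec to drive that reconstruction error down.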
For a deeper treatment, see the SAE blog post.

3. Bearing fault diagnosis example

Data: the Case Western Reserve University bearing dataset, sampled at 12 kHz, with inner-race, outer-race, and rolling-element faults. 400 groups of data are used here: 100 groups for each of the three fault types and 100 for the normal condition.

Imports:
from sklearn.model_selection import train_test_split
from tensorflow.keras.models import Model
from tensorflow.keras.layers import Dense, Input
from tensorflow.keras.utils import to_categorical  # the old keras.utils.np_utils path is removed in recent Keras
from sklearn.metrics import classification_report, confusion_matrix, recall_score
import matplotlib.pyplot as plt
import pandas as pd
import numpy as np
import os
from sklearn import preprocessing
import pywt
import pywt.data
import glob
Perform a 3-level wavelet packet decomposition; the energies of the eight level-3 sub-bands serve as the experimental feature dataset.
path = r'E:/FUXIAN-ZHOUCHENG/WP-SAE/0HP/'
def read_txt(path):
    # each subfolder is one class; every .txt file inside it is one signal sample
    cate = [path + x for x in os.listdir(path) if os.path.isdir(path + x)]
    imgs = []
    labels = []
    for idx, folder in enumerate(cate):
        for im in glob.glob(folder + '/*.txt'):
            img = np.loadtxt(im)
            imgs.append(img)
            labels.append(idx)
    return np.asarray(imgs, np.float32), np.asarray(labels, np.int32)
data1, label = read_txt(path)
print(label)
aaa = []
aad = []
ada = []
add = []
daa = []
dad = []
dda = []
ddd = []
for i in range(len(data1)):
    print(i)  # progress
    data = data1[i]
    wp = pywt.WaveletPacket(data, wavelet='db1', mode='symmetric', maxlevel=3)
    n = 3
    re = []  # decomposition coefficients of every node at level n
    for j in [node.path for node in wp.get_level(n, 'freq')]:
        re.append(wp[j].data)
    # energy features at level n: squared L2 norm of each node's coefficients
    energy = []
    for k in re:
        energy.append(pow(np.linalg.norm(k, ord=None), 2))
    aaa.append(energy[0])
    aad.append(energy[1])
    ada.append(energy[2])
    add.append(energy[3])
    daa.append(energy[4])
    dad.append(energy[5])
    dda.append(energy[6])
    ddd.append(energy[7])
np.savetxt('E:/FUXIAN-ZHOUCHENG/WP-SAE/aaa.txt',aaa)
np.savetxt('E:/FUXIAN-ZHOUCHENG/WP-SAE/aad.txt',aad)
np.savetxt('E:/FUXIAN-ZHOUCHENG/WP-SAE/ada.txt',ada)
np.savetxt('E:/FUXIAN-ZHOUCHENG/WP-SAE/add.txt',add)
np.savetxt('E:/FUXIAN-ZHOUCHENG/WP-SAE/daa.txt',daa)
np.savetxt('E:/FUXIAN-ZHOUCHENG/WP-SAE/dad.txt',dad)
np.savetxt('E:/FUXIAN-ZHOUCHENG/WP-SAE/dda.txt',dda)
np.savetxt('E:/FUXIAN-ZHOUCHENG/WP-SAE/ddd.txt',ddd)
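As a cross-check on the energies pywt produces: db1 is the Haar wavelet, whose packet transform is simple enough to write out by hand. Here is a numpy-only sketch on a synthetic signal (nodes come out in natural rather than pywt's 'freq' order, which does not affect the set of energies):

```python
import numpy as np

def haar_split(x):
    # Orthonormal Haar analysis filters: low-pass (average) and high-pass (difference)
    a = (x[0::2] + x[1::2]) / np.sqrt(2)
    d = (x[0::2] - x[1::2]) / np.sqrt(2)
    return a, d

def packet_energies(x, levels=3):
    nodes = [x]
    for _ in range(levels):                       # split every node at each level
        nodes = [half for n in nodes for half in haar_split(n)]
    return [float(np.sum(n ** 2)) for n in nodes]

sig = np.random.default_rng(1).standard_normal(1024)
e = packet_energies(sig)
print(len(e))                                     # 8 sub-band energies at level 3
# Orthonormal filters preserve total energy (Parseval), so the 8 energies
# sum back to the raw signal energy:
print(np.isclose(sum(e), float(np.sum(sig ** 2))))  # True
```

This energy-conservation property is what makes the eight energies a faithful, compact summary of where the signal's power sits in frequency.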
The energy distributions of the four classes are shown below:
With the feature set built, the autoencoder is used for fault recognition.
x = pd.read_csv('E:/FUXIAN-ZHOUCHENG/WP-SAE/tezheng.csv',usecols=["1",'2','3','4','5','6','7','8'])
y = pd.read_csv('E:/FUXIAN-ZHOUCHENG/WP-SAE/tezheng.csv',usecols=['label'])
x_train,x_test,y_train,y_test = train_test_split(x,y,test_size=0.2)
print(x_train)
y_train_cate= to_categorical(y_train, num_classes=4)
y_test_cate= to_categorical(y_test, num_classes=4)
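to_categorical simply one-hot encodes the integer class labels; the numpy equivalent makes the layout explicit (labels here are illustrative):

```python
import numpy as np

labels = np.array([0, 2, 1, 3])               # example integer class labels
one_hot = np.eye(4, dtype="float32")[labels]  # row i of eye(4) is the code for class i
print(one_hot)
# [[1. 0. 0. 0.]
#  [0. 0. 1. 0.]
#  [0. 1. 0. 0.]
#  [0. 0. 0. 1.]]
```

Each row has a single 1 in the column of its class, which is the shape categorical_crossentropy expects.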
input_img=Input(shape=(8,))
# Encoder layers
encoded=Dense(20,activation='relu',name='encoded_hidden1')(input_img)
encoder_output=Dense(12,activation='relu',name='encoded_hidden2')(encoded)
LR=Dense(4,activation='softmax',name='LR')(encoder_output)
# Decoder layers
decoded=Dense(12,activation='relu',name='decoded_hidden2')(encoder_output)
decoded=Dense(20,activation='relu',name='decoded_hidden3')(decoded)
decoded=Dense(8,activation='tanh',name='decoded_output')(decoded)
# # Build the autoencoder model
# autoencoder=Model(inputs=input_img,outputs=decoded)
# # Compile: set the autoencoder's optimizer and loss
# autoencoder.compile(optimizer='adam',loss='mse')
# # Train (unsupervised pre-training: the input reconstructs itself)
# history = autoencoder.fit(x_train,x_train,epochs=100,batch_size=16,shuffle=True)
# Reuse the encoder layers to form a new model; its weights are the same ones trained in the autoencoder above.
encoder=Model(inputs=input_img,outputs=LR)
encoder.compile(optimizer='adam',loss='categorical_crossentropy',metrics=['categorical_accuracy'])
history = encoder.fit(x_train,y_train_cate,epochs=50,batch_size=16,shuffle=True,validation_split=0.1)
score=encoder.evaluate(x_test,y_test_cate)
print(score)
print(encoder.summary())
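The sklearn metrics imported at the top are never used in the listing. One way to apply them after training is sketched below, on stand-in label arrays; in the actual script, y_true would be the y_test labels and y_pred would be encoder.predict(x_test).argmax(axis=1):

```python
import numpy as np
from sklearn.metrics import classification_report, confusion_matrix

# Stand-in arrays for illustration; replace with the real test labels/predictions
y_true = np.array([0, 0, 1, 1, 2, 2, 3, 3])
y_pred = np.array([0, 0, 1, 2, 2, 2, 3, 3])

cm = confusion_matrix(y_true, y_pred)   # rows: true class, columns: predicted class
print(cm)
print(classification_report(y_true, y_pred, digits=3))
```

The confusion matrix shows which fault classes get mixed up (here class 1 is once predicted as class 2), and the report adds per-class precision and recall.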
# Plot the accuracy and loss curves for the training and validation sets
acc = history.history['categorical_accuracy']
val_acc = history.history['val_categorical_accuracy']
loss = history.history['loss']
val_loss = history.history['val_loss']
plt.subplot(1, 2, 1)
plt.plot(acc, label='Training Accuracy')
plt.plot(val_acc, label='Validation Accuracy')
plt.title('Training and Validation Accuracy')
plt.legend()
plt.subplot(1, 2, 2)
plt.plot(loss, label='Training Loss')
plt.plot(val_loss, label='Validation Loss')
plt.title('Training and Validation Loss')
plt.legend()
plt.show()
The resulting curves are shown below:
The curves fluctuate somewhat, probably because the dataset is a little small; interested readers can verify this.