Artificial Intelligence - Assignment 2: Reproducing an Example Program

Backpropagation is currently the most commonly used and most effective algorithm for training artificial neural networks (ANNs). Its main idea is:
(1) Feed the training data into the ANN's input layer, propagate it through the hidden layer, and finally produce a result at the output layer; this is the ANN's forward pass.
(2) Since the ANN's output differs from the true value, compute the error between the estimate and the true value, and propagate this error backward from the output layer through the hidden layer, all the way to the input layer.
(3) During this backward pass, adjust the weights according to the error; iterate the whole process until it converges.
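
In symbols (a brief restatement in the notation of the code below, where $\sigma$ is the sigmoid and $\eta$ the step size), step (3) is plain gradient descent on a squared-error loss:

$$E = \tfrac{1}{2}(o_1 - y_1)^2 + \tfrac{1}{2}(o_2 - y_2)^2, \qquad \sigma'(z) = \sigma(z)\,\bigl(1 - \sigma(z)\bigr), \qquad w \leftarrow w - \eta\,\frac{\partial E}{\partial w}.$$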

 

Code implementation

The network in this example is a 2-2-2 fully connected net: inputs x1, x2, hidden units h1, h2, and outputs o1, o2, all with sigmoid activations and no bias terms. Weights w1-w4 connect the inputs to the hidden layer, and w5-w8 connect the hidden layer to the outputs.

import numpy as np
 
 
def sigmoid(z):
    # Logistic activation function
    return 1 / (1 + np.exp(-z))


def forward_propagate(x1, x2, y1, y2, w1, w2, w3, w4, w5, w6, w7, w8):
    # Hidden layer
    in_h1 = w1 * x1 + w3 * x2
    out_h1 = sigmoid(in_h1)
    in_h2 = w2 * x1 + w4 * x2
    out_h2 = sigmoid(in_h2)

    # Output layer
    in_o1 = w5 * out_h1 + w7 * out_h2
    out_o1 = sigmoid(in_o1)
    in_o2 = w6 * out_h1 + w8 * out_h2
    out_o2 = sigmoid(in_o2)

    print("Forward pass: o1, o2")
    print(round(out_o1, 5), round(out_o2, 5))

    # Squared-error loss over the two outputs
    error = (1 / 2) * (out_o1 - y1) ** 2 + (1 / 2) * (out_o2 - y2) ** 2

    print("Loss: squared error")
    print(round(error, 5))

    return out_o1, out_o2, out_h1, out_h2
 
 
def back_propagate(x1, x2, y1, y2, w5, w6, w7, w8, out_o1, out_o2, out_h1, out_h2):
    # Output-layer deltas: dE/d(in_o) = (out_o - y) * sigmoid'(in_o)
    d_o1 = (out_o1 - y1) * out_o1 * (1 - out_o1)
    d_o2 = (out_o2 - y2) * out_o2 * (1 - out_o2)

    # Gradients of the hidden-to-output weights
    d_w5 = d_o1 * out_h1
    d_w7 = d_o1 * out_h2
    d_w6 = d_o2 * out_h1
    d_w8 = d_o2 * out_h2

    # Hidden-layer deltas: by the chain rule, the error reaches h1
    # through the weights w5 and w6, and h2 through w7 and w8
    d_h1 = (d_o1 * w5 + d_o2 * w6) * out_h1 * (1 - out_h1)
    d_h2 = (d_o1 * w7 + d_o2 * w8) * out_h2 * (1 - out_h2)

    # Gradients of the input-to-hidden weights
    d_w1 = d_h1 * x1
    d_w3 = d_h1 * x2
    d_w2 = d_h2 * x1
    d_w4 = d_h2 * x2

    print("Backward pass: error gradient for each weight")
    print(round(d_w1, 5), round(d_w2, 5), round(d_w3, 5), round(d_w4, 5), round(d_w5, 5), round(d_w6, 5),
          round(d_w7, 5), round(d_w8, 5))

    return d_w1, d_w2, d_w3, d_w4, d_w5, d_w6, d_w7, d_w8
 
 
def update_w(w1, w2, w3, w4, w5, w6, w7, w8,
             d_w1, d_w2, d_w3, d_w4, d_w5, d_w6, d_w7, d_w8):
    # Step size (learning rate)
    step = 5
    w1 = w1 - step * d_w1
    w2 = w2 - step * d_w2
    w3 = w3 - step * d_w3
    w4 = w4 - step * d_w4
    w5 = w5 - step * d_w5
    w6 = w6 - step * d_w6
    w7 = w7 - step * d_w7
    w8 = w8 - step * d_w8
    return w1, w2, w3, w4, w5, w6, w7, w8
 
 
if __name__ == "__main__":
    w1, w2, w3, w4, w5, w6, w7, w8 = 0.2, -0.4, 0.5, 0.6, 0.1, -0.5, -0.3, 0.8
    x1, x2 = 0.5, 0.3
    # Note: y2 = -0.07 lies outside the sigmoid's (0, 1) output range,
    # so out_o2 can only approach 0, never the target itself
    y1, y2 = 0.23, -0.07
    print("=====Inputs: x1, x2; target outputs: y1, y2=====")
    print(x1, x2, y1, y2)
    print("=====Weights before training=====")
    print(round(w1, 2), round(w2, 2), round(w3, 2), round(w4, 2), round(w5, 2), round(w6, 2), round(w7, 2),
          round(w8, 2))

    for i in range(1000):
        print("=====Round " + str(i) + "=====")
        out_o1, out_o2, out_h1, out_h2 = forward_propagate(x1, x2, y1, y2, w1, w2, w3, w4, w5, w6, w7, w8)
        d_w1, d_w2, d_w3, d_w4, d_w5, d_w6, d_w7, d_w8 = back_propagate(
            x1, x2, y1, y2, w5, w6, w7, w8, out_o1, out_o2, out_h1, out_h2)
        w1, w2, w3, w4, w5, w6, w7, w8 = update_w(
            w1, w2, w3, w4, w5, w6, w7, w8,
            d_w1, d_w2, d_w3, d_w4, d_w5, d_w6, d_w7, d_w8)

    print("=====Weights after training=====")
    print(round(w1, 2), round(w2, 2), round(w3, 2), round(w4, 2), round(w5, 2), round(w6, 2), round(w7, 2),
          round(w8, 2))
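
A quick way to sanity-check the analytic gradients is a central finite difference. The sketch below is an addition, not part of the original assignment; the helper names loss and numeric_grad_w5 and the eps value are illustrative choices. It recomputes the loss with w5 nudged up and down and returns the difference quotient, which should agree with d_w5 from back_propagate.

def loss(x1, x2, y1, y2, w1, w2, w3, w4, w5, w6, w7, w8):
    # Forward pass without printing; returns only the squared-error loss
    out_h1 = sigmoid(w1 * x1 + w3 * x2)
    out_h2 = sigmoid(w2 * x1 + w4 * x2)
    out_o1 = sigmoid(w5 * out_h1 + w7 * out_h2)
    out_o2 = sigmoid(w6 * out_h1 + w8 * out_h2)
    return (1 / 2) * (out_o1 - y1) ** 2 + (1 / 2) * (out_o2 - y2) ** 2


def numeric_grad_w5(x1, x2, y1, y2, w1, w2, w3, w4, w5, w6, w7, w8, eps=1e-6):
    # Central difference: (E(w5 + eps) - E(w5 - eps)) / (2 * eps)
    up = loss(x1, x2, y1, y2, w1, w2, w3, w4, w5 + eps, w6, w7, w8)
    down = loss(x1, x2, y1, y2, w1, w2, w3, w4, w5 - eps, w6, w7, w8)
    return (up - down) / (2 * eps)

With the initial values above (x1=0.5, x2=0.3, y1=0.23, y2=-0.07 and the starting weights), the numeric estimate should match the analytic d_w5 to several decimal places; checks for the other seven weights are analogous.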

Run results: (screenshot of the console output not reproduced here)

