Neural Networks and Deep Learning, Class 8: Regression

Contents

8.1 Machine Learning Basics

8.2 Simple Linear Regression

8.3 Solving Simple Linear Regression Analytically

1. Python implementation

2. NumPy implementation

3. TensorFlow implementation

4. Visualizing the data and model

8.4 Multiple Linear Regression

8.5 Solving Multiple Linear Regression Analytically


8.1 Machine Learning Basics

Machine learning is the process by which a learning algorithm learns a model from data, so that the model can make predictions on new inputs.

8.2 Simple Linear Regression

Simple (single-feature) linear regression fits the model y = wx + b, estimating the slope w and intercept b from observed (x, y) pairs.

8.3 Solving Simple Linear Regression Analytically

Problem to solve: given the recorded floor areas x and sale prices y below, find the w and b that minimize the squared error.
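All three implementations below compute the same standard closed-form least-squares estimates, written out here to match what the code calculates:

```latex
w = \frac{\sum_{i=1}^{n}(x_i-\bar{x})(y_i-\bar{y})}{\sum_{i=1}^{n}(x_i-\bar{x})^{2}},
\qquad
b = \bar{y} - w\,\bar{x}
```

Here \(\bar{x}\) and \(\bar{y}\) are the sample means; the numerator and denominator of w are the quantities accumulated as sumXY and sumX in the code.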

1. Python implementation
#load dataset
x=[137.97,104.5,100.00,124.32,79.20,99.00,124.00,114.00,
106.69,138.05,53.75,46.91,68.00,63.02,81.26,86.21]
y=[145.00,110.00,93.00,116.00,65.32,104.00,118.00,91.00,
62.00,133.00,51.00,45.00,78.50,69.65,75.69,95.3]

#Calculate w, b
meanX=sum(x)/len(x)     #Calculate the mean of x
meanY=sum(y)/len(y)     #Calculate the mean of y

sumXY=0.0               #Set the initial value of the numerator
sumX=0.0                #Set the initial value of the denominator
#Calculate in cycle
for i in range(len(x)):            
    sumXY+=(x[i]-meanX)*(y[i]-meanY)
    sumX+=(x[i]-meanX)*(x[i]-meanX)
#Calculate w,b
w=sumXY/sumX
b=meanY-w*meanX
#Output
print('w=',w)
print('b=',b)
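As a quick sanity check (not part of the original lesson), the hand-computed estimates can be compared with NumPy's np.polyfit, which fits the same least-squares line:

```python
import numpy as np

x = [137.97, 104.5, 100.00, 124.32, 79.20, 99.00, 124.00, 114.00,
     106.69, 138.05, 53.75, 46.91, 68.00, 63.02, 81.26, 86.21]
y = [145.00, 110.00, 93.00, 116.00, 65.32, 104.00, 118.00, 91.00,
     62.00, 133.00, 51.00, 45.00, 78.50, 69.65, 75.69, 95.3]

# Closed-form estimates, exactly as in the loop above
meanX, meanY = sum(x) / len(x), sum(y) / len(y)
sumXY = sum((xi - meanX) * (yi - meanY) for xi, yi in zip(x, y))
sumXX = sum((xi - meanX) ** 2 for xi in x)
w = sumXY / sumXX
b = meanY - w * meanX

# np.polyfit(x, y, 1) returns [slope, intercept] for a degree-1 fit
w_np, b_np = np.polyfit(x, y, 1)
assert abs(w - w_np) < 1e-6
assert abs(b - b_np) < 1e-6
print('w=', w, 'b=', b)
```

Both routes solve the same least-squares problem, so the results agree to floating-point precision.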
2. NumPy implementation
import numpy as np
#load dataset
x=np.array([137.97,104.5,100.00,124.32,79.20,99.00,124.00,114.00,
106.69,138.05,53.75,46.91,68.00,63.02,81.26,86.21])
y=np.array([145.00,110.00,93.00,116.00,65.32,104.00,118.00,91.00,
62.00,133.00,51.00,45.00,78.50,69.65,75.69,95.3])
#Calculate w,b
meanX=np.mean(x)
meanY=np.mean(y)
sumXY=np.sum((x-meanX)*(y-meanY))
sumX=np.sum((x-meanX)*(x-meanX))

w=sumXY/sumX
b=meanY-w*meanX
print('w=',w)
print('b=',b)

No explicit loop is needed: NumPy's vectorized array operations compute each sum in a single expression.

3. TensorFlow implementation
import tensorflow as tf
#load dataset
x=tf.constant([137.97,104.5,100.00,124.32,79.20,99.00,124.00,114.00,
106.69,138.05,53.75,46.91,68.00,63.02,81.26,86.21])
y=tf.constant([145.00,110.00,93.00,116.00,65.32,104.00,118.00,91.00,
62.00,133.00,51.00,45.00,78.50,69.65,75.69,95.3])
#Calculate w,b
meanX=tf.reduce_mean(x)
meanY=tf.reduce_mean(y)
sumXY=tf.reduce_sum((x-meanX)*(y-meanY))
sumX=tf.reduce_sum((x-meanX)*(x-meanX))

w=sumXY/sumX
b=meanY-w*meanX
print('w=',w)
print('b=',b)
4. Visualizing the data and model
#import library
import numpy as np
import tensorflow as tf
import matplotlib.pyplot as plt 
#load dataset
x=tf.constant([137.97,104.5,100.00,124.32,79.20,99.00,124.00,114.00,
106.69,138.05,53.75,46.91,68.00,63.02,81.26,86.21])
y=tf.constant([145.00,110.00,93.00,116.00,65.32,104.00,118.00,91.00,
62.00,133.00,51.00,45.00,78.50,69.65,75.69,95.3])
#Calculate w,b
meanX=tf.reduce_mean(x)
meanY=tf.reduce_mean(y)
sumXY=tf.reduce_sum((x-meanX)*(y-meanY))
sumX=tf.reduce_sum((x-meanX)*(x-meanX))

w=sumXY/sumX
b=meanY-w*meanX
print('w=',w.numpy())
print('b=',b.numpy())

x_test=np.array([128.15,45.00,141.43,106.27,99.00,53.84,85.36,70.00])
y_pred=(w*x_test+b).numpy()
n=len(x_test)
for i in range(n):
    print(x_test[i],round(y_pred[i],2))

#plot the data and the fitted model
plt.figure()
plt.scatter(x,y,color='red',label='training data')       #the label argument adds a legend entry
plt.scatter(x_test,y_pred,color='blue',label='predictions')
plt.plot(x_test,y_pred,color='green',label='fitted line')
plt.xlabel('Area')
plt.ylabel('Price')
plt.legend()
plt.show()

8.4 Multiple Linear Regression

With the convention x^(0) = 1, the model is y = w_0 x^(0) + w_1 x^(1) + ... + w_m x^(m), or in matrix form Y = XW, where each row of X is one sample with a leading 1. The analytic least-squares solution is the normal equation:

W = (X^T X)^(-1) X^T Y

To use this matrix solution, X^T X must be full rank (invertible). This requires at least as many samples as model parameters (the features plus the constant term) and linearly independent feature columns; otherwise X^T X is singular and the solution is not unique.
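A minimal sketch (with made-up illustrative data, not from the lesson) of what goes wrong when the feature columns are linearly dependent, and the pseudo-inverse fallback NumPy provides:

```python
import numpy as np

# x2 is exactly 2 * x1, so the columns of X are linearly dependent
x1 = np.array([1.0, 2.0, 3.0, 4.0])
x2 = 2.0 * x1
X = np.stack((np.ones(4), x1, x2), axis=1)
Y = np.array([[2.0], [4.0], [6.0], [8.0]])

# X^T X is singular: rank 2 < 3, so np.linalg.inv would raise LinAlgError
XtX = X.T @ X
assert np.linalg.matrix_rank(XtX) < XtX.shape[0]

# np.linalg.pinv returns the minimum-norm least-squares solution instead
W = np.linalg.pinv(X) @ Y
assert np.allclose(X @ W, Y)   # the fit is still exact; W itself is just not unique
```

In practice, np.linalg.pinv or np.linalg.lstsq is the safer choice whenever full rank is not guaranteed.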

8.5 Solving Multiple Linear Regression Analytically

#import library
import numpy as np

#load dataset
x1=np.array([137.97,104.5,100.00,124.32,79.20,99.00,124.00,114.00,
106.69,138.05,53.75,46.91,68.00,63.02,81.26,86.21])
x2=np.array([3,2,2,3,1,2,3,2,2,3,1,1,1,1,2,2])
y=np.array([145.00,110.00,93.00,116.00,65.32,104.00,118.00,91.00,
62.00,133.00,51.00,45.00,78.50,69.65,75.69,95.3])
#data processing 
x0=np.ones(len(x1))
X=np.stack((x0,x1,x2),axis=1)
Y=np.array(y).reshape(-1,1)
#Calculate W
Xt=np.transpose(X)
XtX_1=np.linalg.inv(np.matmul(Xt,X))
XtX_1_Xt=np.matmul(XtX_1,Xt)
W=np.matmul(XtX_1_Xt,Y)

W=W.reshape(-1)

print('Y=',W[1],'x1+',W[2],'x2+',W[0])
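To use the fitted W for prediction, a new sample just needs the same leading 1. The sketch below (the 110 m², 3-room test input is hypothetical) also cross-checks the normal-equation result against np.linalg.lstsq, which solves the same least-squares problem:

```python
import numpy as np

x1 = np.array([137.97, 104.5, 100.00, 124.32, 79.20, 99.00, 124.00, 114.00,
               106.69, 138.05, 53.75, 46.91, 68.00, 63.02, 81.26, 86.21])
x2 = np.array([3, 2, 2, 3, 1, 2, 3, 2, 2, 3, 1, 1, 1, 1, 2, 2])
y = np.array([145.00, 110.00, 93.00, 116.00, 65.32, 104.00, 118.00, 91.00,
              62.00, 133.00, 51.00, 45.00, 78.50, 69.65, 75.69, 95.3])

X = np.stack((np.ones(len(x1)), x1, x2), axis=1)
Y = y.reshape(-1, 1)
W = np.linalg.inv(X.T @ X) @ X.T @ Y          # normal equation, as above

# Cross-check: lstsq solves the same problem without forming the inverse
W_ls = np.linalg.lstsq(X, Y, rcond=None)[0]
assert np.allclose(W, W_ls, atol=1e-4)

# Predict the price of a hypothetical 110 m^2, 3-room house
x_new = np.array([1.0, 110.0, 3.0])
y_new = (x_new @ W).item()
print('predicted price:', round(y_new, 2))
```

For larger or ill-conditioned problems, np.linalg.lstsq is preferred over explicitly inverting X^T X.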

Feel free to share; when reposting, please credit the source: 内存溢出

Original address: http://outofmemory.cn/zaji/5157170.html
