
Stochastic gradient descent uses only a single sample for each iteration, so each update is very cheap and training is fast.

import numpy as np
import matplotlib.pyplot as plt

# Training data: each row of X is [living area, number of bedrooms]; Y is the house price
X = np.array([[2104, 3], [1600, 3], [2400, 3], [1416, 2], [3000, 4]])
Y = np.array([400, 330, 369, 232, 540])

# Randomly initialize the parameters of the hypothesis h(x) = theta0 + theta1*x1 + theta2*x2
theta0 = np.random.random()
theta1 = np.random.random()
theta2 = np.random.random()

epochs = 1000   # iteration count; must be an integer (the scraped value 0.0001 cannot be passed to range), 1000 is assumed here
alpha = 1e-8    # learning rate; the features are not normalized, so a very small step is assumed to avoid divergence

def cost(X, Y, theta0, theta1, theta2):
    # Mean squared error: J = 1/(2m) * sum_i (h(x_i) - y_i)^2
    loss = 0
    m = len(Y)
    for i in range(m):
        loss += (theta0 + theta1 * X[i, 0] + theta2 * X[i, 1] - Y[i]) ** 2
    loss = loss / (2 * m)
    return loss

def grad_des(X, Y, theta0, theta1, theta2, alpha, epochs):
    m = len(Y)
    for z in range(epochs):
        theta0_grad = 0
        theta1_grad = 0
        theta2_grad = 0
        # Accumulate the gradient over all m samples
        for i in range(m):
            error = theta0 + theta1 * X[i, 0] + theta2 * X[i, 1] - Y[i]
            theta0_grad += error
            theta1_grad += error * X[i, 0]
            theta2_grad += error * X[i, 1]
        theta0_grad = theta0_grad / m
        theta1_grad = theta1_grad / m
        theta2_grad = theta2_grad / m
        # Move each parameter a small step in the negative gradient direction
        theta0 -= alpha * theta0_grad
        theta1 -= alpha * theta1_grad
        theta2 -= alpha * theta2_grad
    return theta0, theta1, theta2

theta0, theta1, theta2 = grad_des(X, Y, theta0, theta1, theta2, alpha, epochs)
print(theta0, theta1, theta2)
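
The grad_des loop above sums the gradient over all five samples before each parameter update, which is a batch-style update. The stochastic variant described at the top of the post instead updates the parameters after every single sample. A minimal sketch of that variant, reusing X, Y, alpha, epochs, and cost from the code above (the sgd function and its random sample ordering are illustrative additions, not from the original post):

def sgd(X, Y, theta0, theta1, theta2, alpha, epochs):
    m = len(Y)
    for z in range(epochs):
        # Visit the samples in a random order on each pass
        for i in np.random.permutation(m):
            # Prediction error for this single sample
            error = theta0 + theta1 * X[i, 0] + theta2 * X[i, 1] - Y[i]
            # Update immediately using only this sample's gradient
            theta0 -= alpha * error
            theta1 -= alpha * error * X[i, 0]
            theta2 -= alpha * error * X[i, 1]
    return theta0, theta1, theta2

t0, t1, t2 = sgd(X, Y, np.random.random(), np.random.random(), np.random.random(), alpha, epochs)
print(t0, t1, t2, cost(X, Y, t0, t1, t2))

Each stochastic step is m times cheaper than a batch step but noisier; the very small learning rate chosen above keeps these noisy per-sample updates from diverging on the unnormalized features.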

