How do you do a nonlinear SVM in MATLAB? Is there a built-in function?


MATLAB does ship with its own SVM functions, but they are fairly limited. You can also download one of the third-party SVM toolboxes developed by others, put it into MATLAB's toolbox folder, and add it to the path from the command window:

addpath(genpath('D:\MATLAB6p5\toolbox\svm'))

where the leading part of the path is your own MATLAB installation path.

MATLAB's built-in SVM training function is svmtrain:

svmtrain  Train a support vector machine classifier

SVMSTRUCT = svmtrain(TRAINING, Y) trains a support vector machine (SVM) classifier on data taken from two groups. TRAINING is a numeric matrix of predictor data. Rows of TRAINING correspond to observations; columns correspond to features. Y is a column vector that contains the known class labels for TRAINING. Y is a grouping variable, i.e., it can be a categorical, numeric, or logical vector; a cell vector of strings; or a character matrix with each row representing a class label (see help for groupingvariable). Each element of Y specifies the group the corresponding row of TRAINING belongs to. TRAINING and Y must have the same number of rows. SVMSTRUCT contains information about the trained classifier, including the support vectors, that is used by SVMCLASSIFY for classification. svmtrain treats NaNs, empty strings or 'undefined' values as missing values and ignores the corresponding rows in TRAINING and Y.

SVMSTRUCT = svmtrain(TRAINING, Y, 'PARAM1',val1, 'PARAM2',val2, ...) specifies optional parameter name/value pairs.
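To answer the question directly: you get a nonlinear SVM from the built-in function by selecting a nonlinear kernel through these name/value pairs. A minimal sketch, assuming the data has already been split into Xtrain/ytrain/Xtest (those variable names are made up for the example):

% Train with a Gaussian (RBF) kernel, then classify new points
svmstruct = svmtrain(Xtrain, ytrain, 'Kernel_Function', 'rbf', ...
                     'RBF_Sigma', 1, 'BoxConstraint', 1);
labels = svmclassify(svmstruct, Xtest);

Note that svmtrain/svmclassify have been removed from recent MATLAB releases; the replacement is fitcsvm(Xtrain, ytrain, 'KernelFunction', 'rbf') together with predict.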

The following is a hand-written nonlinear SVM (an SMO-style solver):
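For orientation, the quantities the script maintains come from the SVM dual problem and the standard SMO two-variable update (this summary is added here for readability, not part of the original code):

\[
\max_{a}\ W(a) = \sum_i a_i - \frac{1}{2}\sum_i\sum_j y_i y_j a_i a_j K(x_i, x_j),
\qquad 0 \le a_i \le C,\quad \sum_i a_i y_i = 0.
\]

With prediction errors \(E_i = f(x_i) - y_i\) and \(\eta = K_{11} + K_{22} - 2K_{12}\), one SMO step updates a chosen pair \((a_1, a_2)\):

\[
a_2^{\text{new}} = a_2 + \frac{y_2 (E_1 - E_2)}{\eta}\ \text{clipped to}\ [U, V],
\qquad
a_1^{\text{new}} = a_1 + y_1 y_2\,(a_2 - a_2^{\text{new}}),
\]

which is exactly what the variables E1, E2, KK, U, V, a1new, and a2new below implement.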

1. The main script:

clear                          % clear the workspace
clc                            % clear the command window
X = load('data.txt');
n = size(X,1);                 % total number of samples
y = X(:,4);                    % class labels (0/1 in the data file)
X = X(:,1:3);                  % three features per sample
TOL = 0.0001;                  % stopping tolerance
C = 1;                         % penalty weight on the loss term
b = 0;                         % initial bias b
Wold = 0;                      % W(a) before updating a
Wnew = 0;                      % W(a) after updating a
for i = 1 : 50                 % relabel the first 50 samples (class 0) as -1
    y(i) = -1;
end
a = 0.2 * ones(n,1);           % multipliers a, initialized inside [0, C]
% Precompute the kernel matrix once so it is not recomputed in the loops
K = ones(n,n);
for i = 1 : n
    for j = 1 : n
        K(i,j) = k(X(i,:), X(j,:));
    end
end
% Cached sums: fsum(m) = sigma_i a(i)*y(i)*K(i,m)
% (called "sum" in the original; renamed so it does not shadow the built-in)
fsum = zeros(n,1);
for m = 1 : n
    for i = 1 : n
        fsum(m) = fsum(m) + a(i) * y(i) * K(i,m);
    end
end
while 1                        % main iteration
    % Heuristic choice of the working pair (n1, n2)
    n1 = 1;                    % n1, n2 are the two chosen points
    n2 = 2;
    % n1: first point that violates the KKT conditions
    while n1 <= n
        if y(n1) * (fsum(n1) + b) == 1 && (a(n1) >= C || a(n1) <= 0)
            break
        end
        if y(n1) * (fsum(n1) + b) > 1 && a(n1) ~= 0
            break
        end
        if y(n1) * (fsum(n1) + b) < 1 && a(n1) ~= C
            break
        end
        n1 = n1 + 1;
    end
    % n2: chosen to maximize |E1 - E2|
    E2 = 0;
    maxDiff = 0;                     % largest error gap seen so far
    E1 = fsum(n1) + b - y(n1);       % error at n1
    for i = 1 : n
        tempSum = fsum(i) + b - y(i);
        if abs(E1 - tempSum) > maxDiff
            maxDiff = abs(E1 - tempSum);
            n2 = i;
            E2 = tempSum;
        end
    end
    % Update the chosen pair
    a1old = a(n1);
    a2old = a(n2);
    KK = K(n1,n1) + K(n2,n2) - 2*K(n1,n2);
    a2new = a2old + y(n2) * (E1 - E2) / KK;   % unconstrained new a2
    % a2 must satisfy the box constraint: clip it to [U, V]
    S = y(n1) * y(n2);
    if S == -1
        U = max(0, a2old - a1old);
        V = min(C, C - a1old + a2old);
    else
        U = max(0, a1old + a2old - C);
        V = min(C, a1old + a2old);
    end
    if a2new > V
        a2new = V;
    end
    if a2new < U
        a2new = U;
    end
    a1new = a1old + S * (a2old - a2new);      % new a1 from the equality constraint
    a(n1) = a1new;                            % store the updated pair
    a(n2) = a2new;
    % Recompute the cached sums
    fsum = zeros(n,1);
    for m = 1 : n
        for i = 1 : n
            fsum(m) = fsum(m) + a(i) * y(i) * K(i,m);
        end
    end
    % Recompute the dual objective W(a) after the update
    Wold = Wnew;
    Wnew = 0;
    tempSum = 0;                              % accumulator for the quadratic term
    for i = 1 : n
        for j = 1 : n
            tempSum = tempSum + y(i)*y(j)*a(i)*a(j)*K(i,j);
        end
        Wnew = Wnew + a(i);
    end
    Wnew = Wnew - 0.5 * tempSum;
    % Update b from a support vector (first point with a nonzero multiplier;
    % assumes at least one such point exists)
    support = 1;
    while support <= n && abs(a(support)) < 1e-4
        support = support + 1;
    end
    b = 1 / y(support) - fsum(support);
    % Stopping test: relative change of W(a)
    if abs(Wnew / Wold - 1) <= TOL
        break
    end
end
% Output: original class, discriminant value, and SVM classification
for i = 1 : n
    fprintf('Point %d: original label ', i);
    if i <= 50
        fprintf('-1');
    else
        fprintf(' 1');
    end
    fprintf('  discriminant value %f  classified as ', fsum(i) + b);
    if abs(fsum(i) + b - 1) < 0.5
        fprintf('1\n');
    elseif abs(fsum(i) + b + 1) < 0.5
        fprintf('-1\n');
    else
        fprintf('misclassified\n');
    end
end

2. The kernel function (the script calls it k, so save it as k.m):

function y = k(x1, x2)
% Gaussian (RBF) kernel between two row vectors
y = exp(-0.5 * norm(x1 - x2)^2);
end
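As a side note, the double loop in the script that fills K can be replaced by a vectorized computation; a sketch, assuming the Statistics Toolbox's pdist2 is available:

D = pdist2(X, X);        % n-by-n matrix of pairwise Euclidean distances
K = exp(-0.5 * D.^2);    % same Gaussian kernel as k.m, computed in one shot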

3. The data (data.txt; three feature columns plus a 0/1 class label):

0.8871 -0.3491 8.3376 0
1.2519 1.2083 6.5041 0
-1.1925 1.9338 1.8790 0
-0.1277 2.4371 2.6971 0
1.9697 3.0906 6.0391 0
0.7603 0.8241 1.5323 0
1.6382 3.5516 4.4694 0
1.3438 -0.4539 5.9366 0
-1.3361 -2.0201 1.6393 0
-0.3886 3.3041 8.0450 0
-0.6780 6.0196 -0.4084 0
0.3552 -0.1051 1.2458 0
1.6560 4.0786 0.8521 0
0.8117 3.5451 6.8925 0
1.4773 -1.9340 3.9256 0
-0.0732 -0.9526 0.4609 0
0.1521 4.3711 2.2600 0
1.4820 0.7493 0.3475 0
0.6140 4.5261 8.3776 0
0.5721 3.3460 3.7853 0
0.5269 4.1452 4.3900 0
1.7879 -0.5390 2.5516 0
0.9885 5.7625 0.1832 0
-0.3318 2.4373 -0.6884 0
1.3578 5.4709 3.4302 0
2.7210 -1.1268 4.7719 0
0.5039 -0.1025 2.3650 0
1.1107 1.6885 3.7650 0
0.7862 1.3587 7.3203 0
1.0444 -1.5841 3.6349 0
1.7795 1.7276 4.9847 0
0.6710 1.4724 -0.5504 0
0.2303 0.2720 -1.6028 0
1.7089 -1.7399 4.8882 0
1.0059 0.5557 5.1188 0
2.3050 0.8545 2.8294 0
1.9555 0.9898 0.3501 0
1.7141 1.5413 3.8739 0
2.2749 5.3280 4.9604 0
1.6171 0.5270 3.3826 0
3.6681 -1.8409 4.8934 0
1.1964 1.8781 1.4146 0
0.7788 2.1048 0.0380 0
0.7916 5.0906 3.8513 0
1.0807 1.8849 5.9766 0
0.6340 2.6030 3.6940 0
1.9069 -0.0609 7.4208 0
1.6599 4.9409 8.1108 0
1.3763 0.8899 3.9069 0
0.8485 1.4688 6.7393 0
3.6792 6.1092 4.9051 1
4.3812 7.2148 6.1211 1
4.3971 3.4139 7.7974 1
5.0716 7.7253 10.5373 1
5.3078 8.8138 6.1682 1
4.1448 5.5156 2.8731 1
5.3609 6.0458 4.0815 1
4.7452 6.6352 1.3689 1
6.0274 6.5397 -1.9120 1
5.3174 3.0134 6.7935 1
7.2459 3.6970 3.1246 1
6.1007 8.1087 5.5568 1
5.9924 6.9238 5.7938 1
6.0263 5.3333 7.5185 1
3.6470 8.0915 6.4713 1
3.6543 7.2264 7.5783 1
5.0114 6.5335 3.5229 1
4.4348 7.4379 -0.0292 1
3.6087 3.7351 3.0172 1
3.5374 5.5354 7.6578 1
6.0048 2.0691 10.4513 1
3.1423 4.0003 5.4994 1
3.4012 7.1536 8.3510 1
5.5471 5.1372 -1.5090 1
6.5089 5.4911 8.0468 1
5.4583 6.7674 5.9353 1
4.1727 2.9798 3.6027 1
5.1672 8.4136 4.8621 1
4.8808 3.5514 1.9953 1
5.4938 4.1998 3.2440 1
5.4542 5.8803 4.4269 1
4.8743 3.9641 8.1417 1
5.9762 6.7711 2.3816 1
6.6945 7.2858 1.8942 1
4.7301 5.7652 1.6608 1
4.7084 5.3623 3.2596 1
6.0408 3.3138 7.7876 1
4.6024 8.3517 0.2193 1
4.7054 6.6633 -0.3492 1
4.7139 5.6362 6.2330 1
4.0850 10.7118 3.3541 1
6.1088 6.1635 4.2292 1
4.9836 5.4042 6.7422 1
6.1387 6.1949 2.5614 1
6.0700 7.0373 3.3256 1
5.6881 5.1363 9.9254 1
7.2058 2.3570 4.7361 1
4.2972 7.3245 4.7928 1
4.7794 8.1235 3.1827 1
3.9282 6.4092 -0.6339 1
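To reproduce the run, save the script from part 1 under any file name (say svm_smo_demo.m; the name is arbitrary), the kernel from part 2 as k.m, and the data above as data.txt, all in one folder, then call the script from the command window:

>> svm_smo_demo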

As for why linear SVMs (or logistic regression) are usually preferred over nonlinear SVMs on large datasets, I think there are two main reasons:

1. The feature dimensionality should scale with the size of the dataset: when there are many samples, there should also be many features. And when the feature dimension is high, the data is often already linearly separable (SVM's whole approach to nonlinear classification is to map samples into a higher-dimensional feature space), so you can use LR or a linear-kernel SVM directly.

2. A big-data setting usually means a huge number of samples. Solving the optimization problem requires computing inner products between pairs of samples, so a nonlinear kernel (in practice, mostly the Gaussian kernel) costs far more computation than a linear kernel (see the sketch after this list).

Separately, if you have many samples but few features, you should engineer some additional features by hand.
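To make that cost difference concrete, here is a minimal sketch using the current fitcsvm API (the kernel-scale and box-constraint values are arbitrary placeholders, not tuned settings):

% Same data, two kernels: the linear model works in the original feature
% space, while the RBF model must evaluate the kernel between sample pairs,
% so its training cost grows much faster with the number of samples.
mdlLinear = fitcsvm(X, y, 'KernelFunction', 'linear', 'BoxConstraint', 1);
mdlRBF    = fitcsvm(X, y, 'KernelFunction', 'rbf', ...
                    'KernelScale', 1, 'BoxConstraint', 1);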

