Tuning SVM Parameters with PSO


Elapsed time is 64.799304 seconds.

bestc = 45.3915

bestg = 0.0100

bestCVaccuarcy = 97.7528

Accuracy = 97.7528% (87/89) (classification)

trainacc = [97.7528; 0.0225; 0.9633]  % accuracy (%), mean squared error, squared correlation coefficient

Accuracy = 93.2584% (83/89) (classification)

testacc = [93.2584; 0.0674; 0.9007]  % accuracy (%), mean squared error, squared correlation coefficient

Code:

%% Clear the environment
clc;
clear;

%% Load the wine data and split it into training and test sets
load wine;
train = [wine(1:30,:); wine(60:95,:); wine(131:153,:)];
train_label = [wine_labels(1:30); wine_labels(60:95); wine_labels(131:153)];
test = [wine(31:59,:); wine(96:130,:); wine(154:178,:)];
test_label = [wine_labels(31:59); wine_labels(96:130); wine_labels(154:178)];

% Normalize each feature to [0,1]
[train,pstrain] = mapminmax(train');
pstrain.ymin = 0;
pstrain.ymax = 1;
[train,pstrain] = mapminmax(train,pstrain);
[test,pstest] = mapminmax(test');
pstest.ymin = 0;
pstest.ymax = 1;
[test,pstest] = mapminmax(test,pstest);
train = train';
test = test';

%% Parameter initialization
% The two PSO acceleration coefficients
c1 = 1.6;      % c1 belongs to [0,2]
c2 = 1.5;      % c2 belongs to [0,2]
maxgen = 300;  % maximum number of generations
sizepop = 30;  % swarm size
% Search ranges for the SVM parameters c and g
popcmax = 10^(2);
popcmin = 10^(-1);
popgmax = 10^(3);
popgmin = 10^(-2);
k = 0.6;       % k belongs to [0.1,1.0]
Vcmax = k*popcmax;
Vcmin = -Vcmax;
Vgmax = k*popgmax;
Vgmin = -Vgmax;

% SVM parameter initialization: number of cross-validation folds
v = 3;

%% Generate the initial particles and velocities
for i = 1:sizepop
    % Random initial population
    pop(i,1) = (popcmax-popcmin)*rand + popcmin;
    pop(i,2) = (popgmax-popgmin)*rand + popgmin;
    % Initial velocities
    V(i,1) = Vcmax*rands(1);
    V(i,2) = Vgmax*rands(1);
    % Initial fitness: the cross-validation accuracy is negated so that
    % the search becomes a minimization problem
    cmd = ['-v ',num2str(v),' -c ',num2str(pop(i,1)),' -g ',num2str(pop(i,2))];
    fitness(i) = svmtrain(train_label, train, cmd);
    fitness(i) = -fitness(i);
end

% Find the initial extrema and their positions
[global_fitness,bestindex] = min(fitness);  % global best fitness
local_fitness = fitness;                    % individual best fitness
global_x = pop(bestindex,:);                % global best position
local_x = pop;                              % individual best positions

tic
%% Iterative optimization
for i = 1:maxgen
    for j = 1:sizepop
        % Velocity update
        wV = 0.9;  % inertia weight; wV best belongs to [0.8,1.2]
        V(j,:) = wV*V(j,:) + c1*rand*(local_x(j,:) - pop(j,:)) + c2*rand*(global_x - pop(j,:));
        if V(j,1) > Vcmax
            V(j,1) = Vcmax;
        end
        if V(j,1) < Vcmin
            V(j,1) = Vcmin;
        end
        if V(j,2) > Vgmax
            V(j,2) = Vgmax;
        end
        if V(j,2) < Vgmin
            V(j,2) = Vgmin;
        end

        % Position update
        wP = 0.6;
        pop(j,:) = pop(j,:) + wP*V(j,:);
        if pop(j,1) > popcmax
            pop(j,1) = popcmax;
        end
        if pop(j,1) < popcmin
            pop(j,1) = popcmin;
        end
        if pop(j,2) > popgmax
            pop(j,2) = popgmax;
        end
        if pop(j,2) < popgmin
            pop(j,2) = popgmin;
        end

        % Adaptive particle mutation
        if rand > 0.5
            k = ceil(2*rand);
            if k == 1
                pop(j,k) = (20-1)*rand + 1;
            end
            if k == 2
                pop(j,k) = (popgmax-popgmin)*rand + popgmin;
            end
        end

        % Fitness of the updated particle
        cmd = ['-v ',num2str(v),' -c ',num2str(pop(j,1)),' -g ',num2str(pop(j,2))];
        fitness(j) = svmtrain(train_label, train, cmd);
        fitness(j) = -fitness(j);

        % Update the individual best of particle j
        if fitness(j) < local_fitness(j)
            local_x(j,:) = pop(j,:);
            local_fitness(j) = fitness(j);
        end

        % Update the global best
        if fitness(j) < global_fitness
            global_x = pop(j,:);
            global_fitness = fitness(j);
        end
    end
    fit_gen(i) = global_fitness;
end
toc

%% Result analysis
plot(-fit_gen,'LineWidth',5);
title(['Fitness curve (c1=',num2str(c1),', c2=',num2str(c2),', generations=',num2str(maxgen),')'],'FontSize',13);
xlabel('Generation');
ylabel('Fitness');

bestc = global_x(1)
bestg = global_x(2)
bestCVaccuarcy = -fit_gen(maxgen)

cmd = ['-c ',num2str(bestc),' -g ',num2str(bestg)];
model = svmtrain(train_label, train, cmd);
[trainpre,trainacc] = svmpredict(train_label, train, model);
trainacc
[testpre,testacc] = svmpredict(test_label, test, model);
testacc
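The PSO loop above carries over directly to other languages. Below is a minimal Python sketch of the same update rules (inertia-weighted velocity, velocity and position clamping, individual and global best tracking); to keep it self-contained it minimizes a toy quadratic objective instead of calling libsvm's cross-validation, so the objective and all names are illustrative only:

```python
import random

def pso_minimize(objective, bounds, n_particles=30, n_gen=100,
                 c1=1.6, c2=1.5, w_v=0.9, w_p=0.6, k=0.6, seed=0):
    """Minimize `objective` over the box `bounds` with a basic PSO:
    inertia-weighted velocity update, clamping, and best tracking."""
    rng = random.Random(seed)
    dim = len(bounds)
    vmax = [k * hi for _, hi in bounds]          # velocity limit per dimension
    pop = [[rng.uniform(lo, hi) for lo, hi in bounds] for _ in range(n_particles)]
    vel = [[rng.uniform(-vm, vm) for vm in vmax] for _ in range(n_particles)]
    local_f = [objective(p) for p in pop]        # individual best fitness
    local_x = [p[:] for p in pop]                # individual best positions
    g = min(range(n_particles), key=lambda i: local_f[i])
    global_x, global_f = pop[g][:], local_f[g]   # global best
    for _ in range(n_gen):
        for j in range(n_particles):
            for d in range(dim):
                vel[j][d] = (w_v * vel[j][d]
                             + c1 * rng.random() * (local_x[j][d] - pop[j][d])
                             + c2 * rng.random() * (global_x[d] - pop[j][d]))
                vel[j][d] = max(-vmax[d], min(vmax[d], vel[j][d]))  # clamp velocity
                pop[j][d] += w_p * vel[j][d]
                lo, hi = bounds[d]
                pop[j][d] = max(lo, min(hi, pop[j][d]))             # clamp position
            f = objective(pop[j])
            if f < local_f[j]:                   # individual best update
                local_f[j], local_x[j] = f, pop[j][:]
            if f < global_f:                     # global best update
                global_f, global_x = f, pop[j][:]
    return global_x, global_f

# Toy stand-in for -CV_accuracy(c, g): a quadratic with its minimum at (3, 2)
best, fbest = pso_minimize(lambda p: (p[0] - 3)**2 + (p[1] - 2)**2,
                           bounds=[(0.0, 10.0), (0.0, 10.0)])
```

In the real script the objective is the negated libsvm cross-validation accuracy, which is far noisier than this quadratic; the clamping and per-particle bests matter precisely because that surface has many shallow local optima.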

The k-fold cross-validation routine built into the libsvm toolbox can be used to tune the parameters.

The basic idea of k-fold cross-validation is as follows:

The data is divided into k subsets. Each subset serves once as the test set while the remaining k-1 subsets form the training set, so the procedure is repeated k times, choosing a different subset as the test set each time; the average of the k recognition accuracies is reported as the result.
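For illustration only, the splitting scheme just described can be sketched in Python; `evaluate` is a hypothetical callback (not part of libsvm) standing in for training on one index set and scoring on the other:

```python
def kfold_indices(n_samples, k):
    """Split sample indices into k near-equal folds; each fold serves
    once as the test set, the rest as the training set."""
    idx = list(range(n_samples))
    fold_sizes = [n_samples // k + (1 if i < n_samples % k else 0) for i in range(k)]
    folds, start = [], 0
    for size in fold_sizes:
        folds.append(idx[start:start + size])
        start += size
    return folds

def cross_val_accuracy(evaluate, n_samples, k):
    """Average the k per-fold scores returned by `evaluate(train_idx, test_idx)`."""
    folds = kfold_indices(n_samples, k)
    scores = []
    for i, test_idx in enumerate(folds):
        train_idx = [j for f in folds[:i] + folds[i + 1:] for j in f]
        scores.append(evaluate(train_idx, test_idx))
    return sum(scores) / k
```

Every sample appears in exactly one test fold, which is what makes the averaged accuracy an unbiased estimate of generalization on the training data.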

Cross-validation in the libsvm toolbox is invoked as follows:

accuracy = svmtrain(data_label, data_train, cmd);

% data_label is the vector of training labels;
% data_train is the matrix of training samples;
% cmd holds the training options; setting cmd = '-v 5' performs 5-fold cross-validation (all other options are omitted in that setting, i.e. their defaults are kept). With -v present, svmtrain returns the cross-validation accuracy instead of a model.


Source: http://outofmemory.cn/yw/12352045.html