How to build a basic KNN simulation in MATLAB


Prerequisite: the feature data and the sample labels have already been saved to files.
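If these files do not exist yet, one way to produce them from workspace variables is the following minimal sketch (the variables features and labels are assumptions, not part of the original):

% Minimal sketch: write an (m x n) feature matrix and an (m x 1) label
% vector to plain-text files that load() can read back.
save('sample_feature.txt', 'features', '-ascii');  % features: assumed (m x n) matrix
save('train_label.txt', 'labels', '-ascii');       % labels: assumed (m x 1) vector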

The test code is:


train_data = load('sample_feature.txt');   % training-set features
train_label = load('train_label.txt');     % training-set labels
test_data = load('features.txt');          % test-set features
k = knnclassify(test_data, train_data, train_label, 3, 'cosine', 'random')

train_data holds the training-sample features. They should be as representative of their class as possible; the set does not have to be large, but it should not be too small either.

train_label holds the sample labels, e.g. 0, 1, 2, and so on. The values can be chosen freely as long as they distinguish the classes. The file format can be, for example:


1

1

2

2

3

3

test_data holds the features of the samples to be tested.
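Note that knnclassify shipped with the Bioinformatics Toolbox and has been removed from newer MATLAB releases. A rough equivalent using fitcknn from the Statistics and Machine Learning Toolbox might look like this (a sketch, not the original author's code):

% Sketch: modern replacement for the knnclassify call above
mdl = fitcknn(train_data, train_label, ...
    'NumNeighbors', 3, ...        % k = 3
    'Distance', 'cosine', ...     % cosine distance, as in the original call
    'BreakTies', 'random');       % random tie-breaking, as in the original call
k = predict(mdl, test_data);      % predicted labels for the test set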

function [ccr,pgroupt]=knnt(x,group,K,dist,xt,groupt)

%#
%# AIM: to classify test set objects or unknown objects with the
%#      K Nearest Neighbour method
%#
%# PRINCIPLE: KNN is a supervised, deterministic, non-parametric
%#            classification method. It uses the majority rule to
%#            assign new objects to a class.
%#            It is assumed that the number of objects in each class
%#            is similar.
%#            There are no assumptions about the data distribution and
%#            the variance-covariance matrices of each class.
%#            There is no limitation of the number of variables when
%#            the Euclidean distance is used.
%#            However, when the correlation coefficient is used, the
%#            number of variables must be larger than 1.
%#            Ref: Massart D. L., Vandeginste B. G. M., Deming S. N.,
%#            Michotte Y. and Kaufman L., Chemometrics: a textbook,
%#            Chapter 23, 395-397, Elsevier Science Publishers B. V.,
%#            Amsterdam 1988.
%#
%# INPUT: x: (mxn) data matrix with m objects and n variables,
%#           containing samples of several classes (training set)
%#        group: (mx1) column vector labelling the m objects from the
%#           training set
%#        K: integer, number of nearest neighbours
%#        dist: integer,
%#           = 1, Euclidean distance
%#           = 2, Correlation coefficient, (No. of variables > 1)
%#        xt: (mtxn) data matrix with mt objects and n variables
%#           (test set or unknowns)
%#        groupt: (mtx1) column vector labelling the mt objects from
%#           the test set
%#           --> if the new objects are unknown, input [].
%#
%# OUTPUT: ccr: scalar, correct classification rate
%#         pgroupt: row vector, predicted class label for the test set;
%#            0 means that the object is not classified to any class
%#
%# SUBROUTINES: sortlab.m: sorts the group label vector into classes
%#
%# AUTHOR: Wen Wu
%#         Copyright(c) 1997 for ChemoAc
%#         FABI, Vrije Universiteit Brussel
%#         Laarbeeklaan 103, 1090 Jette
%#
%# VERSION: 1.1 (28/02/1998)
%#
%# TEST: Andrea Candolfi
%#

if nargin==5, groupt=[]; end % for unknown objects
distance=dist; clear dist % change variable name
if size(group,1)>1,
  group=group'; % change column vector into row vector
  groupt=groupt'; % change column vector into row vector
end
[m,n]=size(x); % size of the training set
if distance==2 & n<2, error('Number of variables must be > 1'), end % check the number of variables when using the correlation coefficient
[mt,n]=size(xt); % size of the test set
dis=zeros(mt,m); % initialize the distance matrix with zeros

% Calculation of the distance for each test set object
for i=1:mt
  for j=1:m % between each training set object and each test set object
    if distance==1
      dis(i,j)=(xt(i,:)-x(j,:))*(xt(i,:)-x(j,:))'; % squared Euclidean distance
    else
      r=corrcoef(xt(i,:)',x(j,:)'); % correlation coefficient matrix
      r=r(1,2); % correlation coefficient
      dis(i,j)=1-r*r; % 1 minus the squared correlation coefficient
    end
  end
end

% Finding of the nearest neighbours
lab=zeros(1,mt); % initial values of lab
for i=1:mt % for each test object
  [a,b]=sort(dis(i,:)); % sort distances
  b=b(find(a<=a(K))); % indices of the nearest neighbours
  b=group(b); % class labels of the nearest neighbours
  [ng,lgroup]=sortlab(b); % number of objects from each class among the nearest neighbours
  a=find(ng==max(ng)); % find the class with the maximum number of objects
  if length(a)==1 % only one class
    lab(i)=lgroup(a); % class label
  else
    lab(i)=0; % more than one class
  end
end

% Calculation of the success rate
if ~isempty(groupt)
  dif=groupt-lab; % difference between predicted and known class labels
  ccr=sum(dif==0)/mt; % success rate
end

pgroupt=lab; % the output vector
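A minimal call of knnt might look like this (a sketch; test_label.txt is a hypothetical file of known test labels, and sortlab.m from the same ChemoAc toolbox must be on the path). Note that ccr is only assigned when groupt is non-empty, so supply known labels when requesting both outputs:

x      = load('sample_feature.txt');   % training features (m x n)
group  = load('train_label.txt');      % training labels   (m x 1)
xt     = load('features.txt');         % test features     (mt x n)
groupt = load('test_label.txt');       % known test labels (hypothetical file)
[ccr, pred] = knnt(x, group, 3, 1, xt, groupt);  % 3-NN, Euclidean distance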

The knn algorithm, i.e. k-NearestNeighbor: the trailing "nn" means nearest neighbour, and the leading k means the first k, so the task is to find the k nearest elements.

"Nearest" can be made concrete in many ways; I use the Euclidean distance formula.

In two dimensions, the distance between points x(x1,y1) and y(x2,y2) is sqrt( (x1-x2)^2 + (y1-y2)^2 ).

Generalized to n dimensions, for

x(x1,x2, … ,xn),y(y1,y2, … ,yn)

sqrt [ ∑( x[i] - y[i] )^2 ] (i=1,2, … ,n)
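In MATLAB this n-dimensional distance is a one-liner (a sketch, assuming x and y are numeric vectors of equal length):

d = sqrt(sum((x - y).^2));  % Euclidean distance between vectors x and y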

The knn algorithm computes distances, i.e. arithmetic on numbers, but images come as png, jpg and similar formats; they are not numbers and cannot take part in arithmetic directly, so a conversion is needed first.

Take the digit 8 shown in the original figure. Note first that this is the simplest possible conversion, because I assume there is nothing but the digit against a clean background and that the whole image contains exactly one digit (0-9). Other situations, such as an impure background colour or interfering content, require the conversion function to be redesigned.

The conversion itself is then trivial: white (background) pixels become 0 and pixels belonging to the digit become 1. The converted size must be appropriate: too small hurts recognition accuracy, too large increases the computation. I use the book's 32*32; the converted result is shown in the original figure.

With that, the image has become numbers that can be computed with.
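A sketch of this conversion step in MATLAB (Image Processing Toolbox; the file name and the 0.9 whiteness threshold are illustrative assumptions):

img = imread('number.png');                     % hypothetical input image
if size(img,3) == 3, img = rgb2gray(img); end   % drop colour channels if present
img = imresize(img, [32 32]);                   % scale to the book's 32*32
bw  = im2double(img) < 0.9;                     % near-white background -> 0, digit -> 1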

Next we need a library that stores, for each digit 0-9, many instances like the one above. The image to be recognized is compared against this library to pick the k nearest. Suppose the library holds the ten digits 0-9 with 100 such 0/1 instances per digit; then we have 1000 instances in total.

The last step is the comparison, using the Euclidean distance formula from the beginning. First the 32*32 matrix is flattened into a 1*1024 vector, i.e. a point in 1024-dimensional space; then the distance between the unknown image and each of the 1000 library instances is computed, and the k nearest are selected. Say k = 50: the count of the most frequent digit among those 50, divided by 50, is the probability of that digit. For example, if the digit 8 appears 40 times among the 50, the probability that the unknown digit is 8 is 40/50 = 80%.
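A compact MATLAB sketch of this flatten-and-vote step (assumed variables: lib, a 1000*1024 0/1 matrix; liblabels, a 1000*1 label vector; bw, the 32*32 binary image from above; the implicit expansion in lib - v needs R2016b or later):

v = reshape(bw', 1, []);            % flatten 32*32 into 1*1024, reading row by row
d = sum((lib - v).^2, 2);           % squared Euclidean distance to every instance
[~, idx] = sort(d);                 % rank the library by distance
knnlabels = liblabels(idx(1:50));   % labels of the k = 50 nearest instances
guess = mode(knnlabels);            % majority vote
p = sum(knnlabels == guess) / 50;   % share of the vote, e.g. 40/50 = 80%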

Personal observations:

This approach can only recognize a single digit, and the background must be free of interference. Multi-digit recognition or noisy backgrounds require a case-specific design of the image-to-0/1 conversion.

Recognition depends heavily on the library images; their appearance strongly affects the result (since we compare one-by-one against the library to find the k nearest), so stroke thickness, height, width and so on are all decisive factors. When building the library, cover the possible appearances of each digit comprehensively.

The computation is substantial: the unknown image must be compared against every instance in the library. At 32*32 we are already in 1024 dimensions; with 1000 library instances that means 1000 distance computations between 1024-dimensional vectors. Sharper images and richer libraries only increase the cost.

For other problems whose values support direct distance computation, the Euclidean distance or any other formula that represents distance can be used. Non-numeric problems need a suitable conversion first, and the conversion matters: no information should be lost, it should be precise rather than fuzzy, and the mapping between an image and its converted form should be one-to-one.

Reference: Machine Learning in Action, Peter Harrington (Chinese edition, Posts & Telecom Press).

Python source code

import numpy

import os

from PIL import Image

import heapq

from collections import Counter

def pictureconvert(filename1, filename2, size=(32,32)):
    # filename1: image to recognize; filename2: output txt file holding the 0/1
    # conversion; size: target image size, 32*32 by default
    image_file = Image.open(filename1)
    image_file = image_file.convert('RGB')   # guarantee 3-channel pixel tuples
    image_file = image_file.resize(size)
    width, height = image_file.size
    f2 = open(filename2, 'w')
    for i in range(height):
        for j in range(width):
            pixel = image_file.getpixel((j, i))
            pixel = pixel[0] + pixel[1] + pixel[2]
            # pure white (255+255+255 = 765) is background -> 0,
            # anything darker counts as part of the digit -> 1
            if pixel == 765:
                pixel = 0
            else:
                pixel = 1
            f2.write(str(pixel))
            if j == width - 1:
                f2.write('\n')
    f2.close()

def imgvector(filename):
    # filename: 0/1 txt file of the image, converted here into a 1*1024 vector
    vector = numpy.zeros((1, 1024), int)
    with open(filename) as f:
        for i in range(0, 32):
            linestr = f.readline()
            for j in range(0, 32):
                vector[0, 32*i + j] = int(linestr[j])
    return vector

def compare(filename1, filename2):
    # compare reads the library directly and performs the recognition
    # filename1: library directory; filename2: path of the 0/1 txt file to recognize
    trainingfilelist = os.listdir(filename1)
    m = len(trainingfilelist)
    labelvector = []
    trainingmatrix = numpy.zeros((m, 1024), numpy.int8)
    for i in range(0, m):
        filenamestr = trainingfilelist[i]
        filestr = filenamestr.split('.')[0]
        classnumber = int(filestr.split('_')[0])   # library files are named like 8_42.txt
        labelvector.append(classnumber)
        trainingmatrix[i, :] = imgvector(filename1 + '/' + filenamestr)
    textvector = imgvector(filename2)
    resultdistance = numpy.zeros((1, m))
    result = []
    for i in range(0, m):
        # the dot product of two 0/1 vectors counts the pixels where both are 1,
        # so a larger value means a more similar image
        resultdistance[0, i] = numpy.vdot(textvector[0], trainingmatrix[i])
    # indices of the k = 50 most similar library instances
    resultindices = heapq.nlargest(50, range(0, len(resultdistance[0])), resultdistance[0].take)
    for i in resultindices:
        result.append(labelvector[i])
    number = Counter(result).most_common(1)
    print('The digit is', number[0][0], 'with probability', '%.2f%%' % ((number[0][1]/len(result))*100))

def distinguish(filename1, filename2, filename3, size=(32,32)):
    # filename1: path of the original png/jpg image; filename2: path for its 0/1
    # txt conversion; filename3: path of the library directory
    pictureconvert(filename1, filename2, size)
    compare(filename3, filename2)

url1 = "/Users/wang/Desktop/number.png"

url2 = "/Users/wang/Desktop/number.txt"

traininglibrary = "/Users/wang/Documents/trainingDigits"

distinguish(url1,url2,traininglibrary)

