Data preprocessing is the most basic, and also the most tedious, part of machine learning.
Before we throw ourselves into deriving all the various algorithms, the first thing to do is get data preprocessing sorted out.
In every algorithm implementation and hands-on case study that follows, this step is indispensable.
So don't grumble about the hassle: roll up your sleeves and get started.
If your fundamentals are already solid, treat this as a refresher and run through it once more.
Enough chit-chat. Below we complete data preprocessing in six steps.
Actually, I feel one step is missing here: looking at the data first.
(Figure: preview of the dataset)
The figure shows ten records of country, age, salary, and whether the customer made a purchase.
There are categorical features, numerical features, and a few missing values.
It looks like a classification problem:
predict whether a customer will buy, based on country, age, and salary.
OK, with that overall picture in mind, let the show begin.
Step 1: Import the libraries
```python
import numpy as np
import pandas as pd
```
Step 2: Import the dataset
```python
dataset = pd.read_csv('Data.csv')
X = dataset.iloc[:, :-1].values
Y = dataset.iloc[:, 3].values
print("X")
print(X)
print("Y")
print(Y)
```
The purpose of this step is to split the data into a matrix of independent variables (features) and a vector of the dependent variable (target).
The result is as follows:
```
X
[['France' 44.0 72000.0]
 ['Spain' 27.0 48000.0]
 ['Germany' 30.0 54000.0]
 ['Spain' 38.0 61000.0]
 ['Germany' 40.0 nan]
 ['France' 35.0 58000.0]
 ['Spain' nan 52000.0]
 ['France' 48.0 79000.0]
 ['Germany' 50.0 83000.0]
 ['France' 37.0 67000.0]]
Y
['No' 'Yes' 'No' 'No' 'Yes' 'Yes' 'No' 'Yes' 'No' 'Yes']
```
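For reference, here is a sketch of what the raw Data.csv plausibly looks like, reconstructed from the printed X and Y above. The header names Country, Age, Salary, Purchased are assumptions, and the empty fields mark the missing values.

```
Country,Age,Salary,Purchased
France,44,72000,No
Spain,27,48000,Yes
Germany,30,54000,No
Spain,38,61000,No
Germany,40,,Yes
France,35,58000,Yes
Spain,,52000,No
France,48,79000,Yes
Germany,50,83000,No
France,37,67000,Yes
```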
Step 3: Handle the missing data
```python
from sklearn.preprocessing import Imputer
imputer = Imputer(missing_values="NaN", strategy="mean", axis=0)
imputer = imputer.fit(X[:, 1:3])
X[:, 1:3] = imputer.transform(X[:, 1:3])
```
For the details of the Imputer class, see
http://scikit-learn.org/stable/modules/preprocessing.html#preprocessing
In this example we fill the missing values with the mean of each column.
The output is:
```
X
[['France' 44.0 72000.0]
 ['Spain' 27.0 48000.0]
 ['Germany' 30.0 54000.0]
 ['Spain' 38.0 61000.0]
 ['Germany' 40.0 63777.77777777778]
 ['France' 35.0 58000.0]
 ['Spain' 38.77777777777778 52000.0]
 ['France' 48.0 79000.0]
 ['Germany' 50.0 83000.0]
 ['France' 37.0 67000.0]]
```
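Note that the Imputer class was removed from recent scikit-learn releases. If the import above fails, a roughly equivalent sketch using sklearn.impute.SimpleImputer (assuming scikit-learn 0.20 or later) looks like this; it is a sketch, not part of the original code.

```python
import numpy as np
from sklearn.impute import SimpleImputer

# SimpleImputer replaces the removed Imputer class; it always works
# column-wise, so there is no axis argument.
imputer = SimpleImputer(missing_values=np.nan, strategy="mean")
X[:, 1:3] = imputer.fit_transform(X[:, 1:3])
```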
Step 4: Encode the categorical data as numbers
```python
from sklearn.preprocessing import LabelEncoder, OneHotEncoder
labelencoder_X = LabelEncoder()
X[:, 0] = labelencoder_X.fit_transform(X[:, 0])
onehotencoder = OneHotEncoder(categorical_features=[0])
X = onehotencoder.fit_transform(X).toarray()
labelencoder_Y = LabelEncoder()
Y = labelencoder_Y.fit_transform(Y)
print("X")
print(X)
print("Y")
print(Y)
```
For LabelEncoder usage, see
http://scikit-learn.org/stable/modules/generated/sklearn.preprocessing.LabelEncoder.html
```
X
[[1.00000000e+00 0.00000000e+00 0.00000000e+00 4.40000000e+01 7.20000000e+04]
 [0.00000000e+00 0.00000000e+00 1.00000000e+00 2.70000000e+01 4.80000000e+04]
 [0.00000000e+00 1.00000000e+00 0.00000000e+00 3.00000000e+01 5.40000000e+04]
 [0.00000000e+00 0.00000000e+00 1.00000000e+00 3.80000000e+01 6.10000000e+04]
 [0.00000000e+00 1.00000000e+00 0.00000000e+00 4.00000000e+01 6.37777778e+04]
 [1.00000000e+00 0.00000000e+00 0.00000000e+00 3.50000000e+01 5.80000000e+04]
 [0.00000000e+00 0.00000000e+00 1.00000000e+00 3.87777778e+01 5.20000000e+04]
 [1.00000000e+00 0.00000000e+00 0.00000000e+00 4.80000000e+01 7.90000000e+04]
 [0.00000000e+00 1.00000000e+00 0.00000000e+00 5.00000000e+01 8.30000000e+04]
 [1.00000000e+00 0.00000000e+00 0.00000000e+00 3.70000000e+01 6.70000000e+04]]
Y
[0 1 0 0 1 1 0 1 0 1]
```
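The categorical_features argument of OneHotEncoder was also removed in newer scikit-learn releases. A roughly equivalent sketch using ColumnTransformer (again assuming scikit-learn 0.20 or later) would be:

```python
from sklearn.compose import ColumnTransformer
from sklearn.preprocessing import LabelEncoder, OneHotEncoder

# One-hot encode column 0 (the country) and pass the numeric columns through untouched.
ct = ColumnTransformer(
    transformers=[("country", OneHotEncoder(), [0])],
    remainder="passthrough",
)
X = ct.fit_transform(X)  # may come back sparse in some configurations; call .toarray() if so

# The target is still label-encoded to 0/1.
Y = LabelEncoder().fit_transform(Y)
```

With this route the separate LabelEncoder pass over X[:, 0] is unnecessary, because OneHotEncoder in these versions encodes string categories directly.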
Step 5: Split the dataset into a training set and a test set
```python
from sklearn.cross_validation import train_test_split

X_train, X_test, Y_train, Y_test = train_test_split(X, Y, test_size=0.2, random_state=0)
```
```
X_train
[[0.00000000e+00 1.00000000e+00 0.00000000e+00 4.00000000e+01 6.37777778e+04]
 [1.00000000e+00 0.00000000e+00 0.00000000e+00 3.70000000e+01 6.70000000e+04]
 [0.00000000e+00 0.00000000e+00 1.00000000e+00 2.70000000e+01 4.80000000e+04]
 [0.00000000e+00 0.00000000e+00 1.00000000e+00 3.87777778e+01 5.20000000e+04]
 [1.00000000e+00 0.00000000e+00 0.00000000e+00 4.80000000e+01 7.90000000e+04]
 [0.00000000e+00 0.00000000e+00 1.00000000e+00 3.80000000e+01 6.10000000e+04]
 [1.00000000e+00 0.00000000e+00 0.00000000e+00 4.40000000e+01 7.20000000e+04]
 [1.00000000e+00 0.00000000e+00 0.00000000e+00 3.50000000e+01 5.80000000e+04]]
X_test
[[0.0e+00 1.0e+00 0.0e+00 3.0e+01 5.4e+04]
 [0.0e+00 1.0e+00 0.0e+00 5.0e+01 8.3e+04]]
Y_train
[1 1 1 0 1 0 0 1]
Y_test
[0 0]
```
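A note for readers on current scikit-learn: the sklearn.cross_validation module has since been removed, and train_test_split now lives in sklearn.model_selection. A minimal sketch of the replacement import, with the same 80/20 split and fixed seed, is:

```python
from sklearn.model_selection import train_test_split

# Same call as above: 20% held out for testing, random_state fixed for reproducibility.
X_train, X_test, Y_train, Y_test = train_test_split(X, Y, test_size=0.2, random_state=0)
```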
Step 6: Feature scaling
```python
from sklearn.preprocessing import StandardScaler
sc_X = StandardScaler()
X_train = sc_X.fit_transform(X_train)
X_test = sc_X.transform(X_test)
```
Many machine learning algorithms use the Euclidean distance between two data points in their computations.
When features differ greatly in magnitude, unit, and range, this causes a problem:
features with large values carry far more weight in the distance calculation than features with small values.
Feature scaling fixes this through standardization, also called Z-score normalization: each value is replaced by z = (x - μ) / σ, where μ and σ are that feature's mean and standard deviation.
We import StandardScaler from the sklearn.preprocessing library.
Usage: http://scikit-learn.org/stable/modules/generated/sklearn.preprocessing.StandardScaler.html
```
X_train
[[-1.          2.64575131 -0.77459667  0.26306757  0.12381479]
 [ 1.         -0.37796447 -0.77459667 -0.25350148  0.46175632]
 [-1.         -0.37796447  1.29099445 -1.97539832 -1.53093341]
 [-1.         -0.37796447  1.29099445  0.05261351 -1.11141978]
 [ 1.         -0.37796447 -0.77459667  1.64058505  1.7202972 ]
 [-1.         -0.37796447  1.29099445 -0.0813118  -0.16751412]
 [ 1.         -0.37796447 -0.77459667  0.95182631  0.98614835]
 [ 1.         -0.37796447 -0.77459667 -0.59788085 -0.48214934]]
X_test
[[-1.          2.64575131 -0.77459667 -1.45882927 -0.90166297]
 [-1.          2.64575131 -0.77459667  1.98496442  2.13981082]]
```
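To make the formula concrete, here is a minimal sketch, not part of the original code, that reproduces what StandardScaler does by hand with numpy. X_train_raw and X_test_raw are hypothetical names for the unscaled matrices from Step 5. Notice that the test set reuses the mean and standard deviation learned from the training set, which is exactly why the code above calls transform rather than fit_transform on X_test.

```python
import numpy as np

# Per-feature mean and standard deviation, computed on the training data only.
mean = X_train_raw.mean(axis=0)
std = X_train_raw.std(axis=0)   # StandardScaler uses the population std (ddof=0)

# Z-score normalization: z = (x - mean) / std, applied column by column.
X_train_scaled = (X_train_raw - mean) / std
X_test_scaled = (X_test_raw - mean) / std
```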
Summary: that is the whole of 100 Days of Machine Learning | Day 1: Data Preprocessing.