ML---Data


Study notes for the 100 Days of Machine Learning series: 100-Days-Of-ML-Code (Chinese translation) · 100-Days-Of-ML-Code (English original)

Data Preprocessing
The first step in learning machine learning is cleaning the data (i.e., data preprocessing). Below we clean the data in the table.



Looking at the raw data (printed in Step 2 below), you can see that the table contains two missing values (NaN).
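Before cleaning, it helps to confirm where the NaN values actually are. A minimal check with pandas (a sketch; adjust the file path to your own copy of Data.csv):

import pandas as pd

dataset = pd.read_csv('Data.csv')                 # point this at your Data.csv
print(dataset.isnull().sum())                     # NaN count per column
print(dataset[dataset.isnull().any(axis=1)])      # the rows that contain a NaN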

Step 1: Import the libraries
#Step 1: importing library
import numpy as np
import pandas as pd
Step 2: Import the dataset
#Step 2: Importing dataset
dataset = pd.read_csv('D:/daily/机器学习100天/100-Days-Of-ML-Code-中文版本/100-Days-Of-ML-Code-master/datasets/Data.csv') # set your own file path
X = dataset.iloc[:,:-1].values  # all columns except the last (the features)
Y = dataset.iloc[:,3].values    # the 4th column (the label)
print("Step 2: Importing dataset")
print("X") # print the label "X"
print(X)   # print the contents of X
print("Y")
print(Y)

Output:

Step 2: Importing dataset
X
[['France' 44.0 72000.0]
 ['Spain' 27.0 48000.0]
 ['Germany' 30.0 54000.0]
 ['Spain' 38.0 61000.0]
 ['Germany' 40.0 nan]
 ['France' 35.0 58000.0]
 ['Spain' nan 52000.0]
 ['France' 48.0 79000.0]
 ['Germany' 50.0 83000.0]
 ['France' 37.0 67000.0]]
Y
['No' 'Yes' 'No' 'No' 'Yes' 'Yes' 'No' 'Yes' 'No' 'Yes']
Step 3: Handle the missing values
#Step 3: Handling the missing data
from sklearn.impute import SimpleImputer
imputer = SimpleImputer(missing_values=np.nan, strategy="mean")  # use SimpleImputer to fill missing values with the column mean
X = dataset.iloc[:,:-1].values
imputer = imputer.fit(X[:,1:3])
X[:,1:3] = imputer.transform(X[:,1:3])
print("------------")
print("Step 3: Handing the missing data")
print("X")
print(X)

Output:

Step 3: Handling the missing data
X
[['France' 44.0 72000.0]
 ['Spain' 27.0 48000.0]
 ['Germany' 30.0 54000.0]
 ['Spain' 38.0 61000.0]
 ['Germany' 40.0 63777.77777777778]
 ['France' 35.0 58000.0]
 ['Spain' 38.77777777777778 52000.0]
 ['France' 48.0 79000.0]
 ['Germany' 50.0 83000.0]
 ['France' 37.0 67000.0]]

The filled-in values 38.78 and 63777.78 are the means of the non-missing entries in the Age and Salary columns.
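A quick sanity check of those means (a minimal sketch using the Age and Salary values from the table printed above):

import numpy as np

ages     = np.array([44, 27, 30, 38, 40, 35, np.nan, 48, 50, 37], dtype=float)
salaries = np.array([72000, 48000, 54000, 61000, np.nan, 58000, 52000, 79000, 83000, 67000], dtype=float)
print(np.nanmean(ages))      # 38.777..., the value filled into the missing Age
print(np.nanmean(salaries))  # 63777.777..., the value filled into the missing Salary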

Step 4: Encode the categorical data
#Step 4: Encoding categorical data
from sklearn.preprocessing import LabelEncoder, OneHotEncoder # OneHotEncoder for one-hot encoding
from sklearn.compose import ColumnTransformer
ct = ColumnTransformer([("onehot", OneHotEncoder(), [0])], remainder = 'passthrough')
X = ct.fit_transform(X)
labelencoder_Y = LabelEncoder()
Y =  labelencoder_Y.fit_transform(Y)
print("----------------------------")
print("Step 4: Encoding categorical data")
print("X")
print(X)
print("Y")
print(Y)

Output:

Step 4: Encoding categorical data
X
[[1.0 0.0 0.0 44.0 72000.0]
 [0.0 0.0 1.0 27.0 48000.0]
 [0.0 1.0 0.0 30.0 54000.0]
 [0.0 0.0 1.0 38.0 61000.0]
 [0.0 1.0 0.0 40.0 63777.77777777778]
 [1.0 0.0 0.0 35.0 58000.0]
 [0.0 0.0 1.0 38.77777777777778 52000.0]
 [1.0 0.0 0.0 48.0 79000.0]
 [0.0 1.0 0.0 50.0 83000.0]
 [1.0 0.0 0.0 37.0 67000.0]]
Y
[0 1 0 0 1 1 0 1 0 1]
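OneHotEncoder orders the categories alphabetically, so the first three columns of the encoded X are France, Germany and Spain indicators, followed by the original Age and Salary columns; LabelEncoder likewise maps 'No' to 0 and 'Yes' to 1. A minimal check of the category order:

from sklearn.preprocessing import OneHotEncoder
import numpy as np

enc = OneHotEncoder()
enc.fit(np.array([['France'], ['Spain'], ['Germany']]))
print(enc.categories_)   # [array(['France', 'Germany', 'Spain'], ...)] - alphabetical order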
Step 5: Split the dataset into training and test sets
#Step 5: Splitting the datasets into training sets and Test sets
from sklearn.model_selection import train_test_split
X_train, X_test, Y_train, Y_test = train_test_split( X , Y , test_size = 0.2, random_state = 0)
print("----------------------------")
print("Step 5: Splitting the datasets into training sets and Test sets")
print("X_train")
print(X_train)
print("X_test")
print(X_test)
print("Y_train")
print(Y_train)
print("Y_test")
print(Y_test)

Output:

Step 5: Splitting the datasets into training sets and Test sets
X_train
[[0.0 1.0 0.0 40.0 63777.77777777778]
 [1.0 0.0 0.0 37.0 67000.0]
 [0.0 0.0 1.0 27.0 48000.0]
 [0.0 0.0 1.0 38.77777777777778 52000.0]
 [1.0 0.0 0.0 48.0 79000.0]
 [0.0 0.0 1.0 38.0 61000.0]
 [1.0 0.0 0.0 44.0 72000.0]
 [1.0 0.0 0.0 35.0 58000.0]]
X_test
[[0.0 1.0 0.0 30.0 54000.0]
 [0.0 1.0 0.0 50.0 83000.0]]
Y_train
[1 1 1 0 1 0 0 1]
Y_test
[0 0]
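With test_size = 0.2, 2 of the 10 rows end up in the test set; random_state = 0 fixes the shuffle, so every run produces the same split. A tiny self-contained illustration:

from sklearn.model_selection import train_test_split
import numpy as np

a = np.arange(10)
train, test = train_test_split(a, test_size=0.2, random_state=0)
print(len(train), len(test))   # 8 2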
Step 6: Feature scaling
#Step 6: Feature Scaling
from sklearn.preprocessing import StandardScaler
sc_X = StandardScaler()
X_train = sc_X.fit_transform(X_train)
X_test = sc_X.transform(X_test)
print("----------------------------")
print("Step 6: Feature Scaling")
print("X_train")
print(X_train)
print("X_test")
print(X_test)

Output:

Step 6: Feature Scaling
X_train
[[-1.          2.64575131 -0.77459667  0.26306757  0.12381479]
 [ 1.         -0.37796447 -0.77459667 -0.25350148  0.46175632]
 [-1.         -0.37796447  1.29099445 -1.97539832 -1.53093341]
 [-1.         -0.37796447  1.29099445  0.05261351 -1.11141978]
 [ 1.         -0.37796447 -0.77459667  1.64058505  1.7202972 ]
 [-1.         -0.37796447  1.29099445 -0.0813118  -0.16751412]
 [ 1.         -0.37796447 -0.77459667  0.95182631  0.98614835]
 [ 1.         -0.37796447 -0.77459667 -0.59788085 -0.48214934]]
X_test
[[-1.          2.64575131 -0.77459667 -1.45882927 -0.90166297]
 [-1.          2.64575131 -0.77459667  1.98496442  2.13981082]]

Each column is standardized as (x - mean(x)) / std(x), where the mean and standard deviation are computed on the training set (fit_transform on X_train) and then reused to transform X_test.
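A minimal sketch that reproduces one scaled value by hand, using the Age column of X_train printed above (StandardScaler uses the population standard deviation, ddof=0):

import numpy as np

ages_train = np.array([40.0, 37.0, 27.0, 38.77777777777778, 48.0, 38.0, 44.0, 35.0])
mean, std = ages_train.mean(), ages_train.std()   # ddof=0, as StandardScaler does
print((40.0 - mean) / std)                        # ~0.2631, the first Age entry of the scaled X_train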


I couldn't upload the data file here, so you'll have to set it up by hand (sorry!).
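If you don't have Data.csv, the whole table can be rebuilt from the output printed in Step 2. A minimal sketch (the column names Country, Age, Salary and Purchased are assumed; they are not shown in the output above):

import numpy as np
import pandas as pd

dataset = pd.DataFrame({
    'Country':   ['France', 'Spain', 'Germany', 'Spain', 'Germany',
                  'France', 'Spain', 'France', 'Germany', 'France'],
    'Age':       [44, 27, 30, 38, 40, 35, np.nan, 48, 50, 37],
    'Salary':    [72000, 48000, 54000, 61000, np.nan, 58000, 52000, 79000, 83000, 67000],
    'Purchased': ['No', 'Yes', 'No', 'No', 'Yes', 'Yes', 'No', 'Yes', 'No', 'Yes'],
})
dataset.to_csv('Data.csv', index=False)   # then point read_csv at this file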

Complete code:

#Day 1: Data Preprocessing 2022/4/4

#Step 1: importing library
import numpy as np
import pandas as pd

#Step 2: Importing dataset
dataset = pd.read_csv('D:/daily/机器学习100天/100-Days-Of-ML-Code-中文版本/100-Days-Of-ML-Code-master/datasets/Data.csv')
X = dataset.iloc[:,:-1].values
Y = dataset.iloc[:,3].values
print("Step 2: Importing dataset")
print("X") #打印字符X
print(X)   #打印变量X的值
print("Y")
print(Y)

#Step 3: Handling the missing data
from sklearn.impute import SimpleImputer
imputer = SimpleImputer(missing_values=np.nan, strategy="mean")  # use SimpleImputer to fill missing values with the column mean
X = dataset.iloc[:,:-1].values
imputer = imputer.fit(X[:,1:3])
X[:,1:3] = imputer.transform(X[:,1:3])
print("------------")
print("Step 3: Handing the missing data")
print("X")
print(X)

#Step 4: Encoding categorical data
from sklearn.preprocessing import LabelEncoder, OneHotEncoder # OneHotEncoder for one-hot encoding
from sklearn.compose import ColumnTransformer
ct = ColumnTransformer([("onehot", OneHotEncoder(), [0])], remainder = 'passthrough')
X = ct.fit_transform(X)
labelencoder_Y = LabelEncoder()
Y =  labelencoder_Y.fit_transform(Y)
print("----------------------------")
print("Step 4: Encoding categorical data")
print("X")
print(X)
print("Y")
print(Y)

#Step 5: Splitting the datasets into training sets and Test sets
from sklearn.model_selection import train_test_split
X_train, X_test, Y_train, Y_test = train_test_split( X , Y , test_size = 0.2, random_state = 0)
print("----------------------------")
print("Step 5: Splitting the datasets into training sets and Test sets")
print("X_train")
print(X_train)
print("X_test")
print(X_test)
print("Y_train")
print(Y_train)
print("Y_test")
print(Y_test)

#Step 6: Feature Scaling
from sklearn.preprocessing import StandardScaler
sc_X = StandardScaler()
X_train = sc_X.fit_transform(X_train)
X_test = sc_X.transform(X_test)
print("----------------------------")
print("Step 6: Feature Scaling")
print("X_train")
print(X_train)
print("X_test")
print(X_test)
