Source: 15 Python Snippets to Optimize your Data Science Pipeline
Translated by: RankFan
15 Python Snippets to Optimize Your Data Science Pipeline

Why snippets matter for data science

In my day-to-day work I run into many of the same situations over and over, mostly between loading CSV files and visualizing data. To streamline that process, I got into the habit of storing code snippets that help in these recurring situations, from loading CSV files all the way to visualization.

In this short article I will share 15 Python snippets to simplify your data analysis pipelines.
1. Load multiple files with glob and a list comprehension

import glob
import pandas as pd

csv_files = glob.glob("path/to/folder/with/csvs/*.csv")
dfs = [pd.read_csv(filename) for filename in csv_files]
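If you then want a single DataFrame rather than a list, a minimal follow-up sketch (the concatenation step is an addition, not part of the original snippet, and assumes the CSVs share the same columns):

# Combine all loaded CSVs into one DataFrame
combined = pd.concat(dfs, ignore_index=True)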
2. Get the unique values of a column

import pandas as pd

df = pd.read_csv("path/to/csv/file.csv")
df["Item_Identifier"].unique()

array(['FDA15', 'DRC01', 'FDN15', ..., 'NCF55', 'NCW30', 'NCW05'], dtype=object)
3. Display pandas DataFrames side by side

from IPython.display import display_html
from itertools import chain, cycle

def display_side_by_side(*args, titles=cycle([''])):
    html_str = ''
    for df, title in zip(args, chain(titles, cycle(['</br>']))):
        html_str += '<th style="text-align:center"><td style="vertical-align:top">'
        html_str += "<br>"
        html_str += f'<h2>{title}</h2>'
        html_str += df.to_html().replace('table', 'table style="display:inline"')
        html_str += '</td></th>'
    display_html(html_str, raw=True)

df1 = pd.read_csv("file.csv")
df2 = pd.read_csv("file2.csv")
display_side_by_side(df1.head(), df2.head(), titles=["Sales", "Advertising"])
4. Remove missing values from a pandas DataFrame

df = pd.DataFrame(dict(a=[1, 2, 3, None]))
df
df.dropna(inplace=True)
df
5. Count the missing values in each column

def FindNanCol(df):
    for col in df:
        print(f"Column: {col}")
        num_Nans = df[col].isnull().sum()
        print(f"Number of NaNs: {num_Nans}")

df = pd.DataFrame(dict(a=[1, 2, 3, None], b=[None, None, 5, 6]))
FindNanCol(df)
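For reference, pandas can produce the same per-column counts in a single expression; this one-liner is an addition, not part of the original snippet:

# Per-column count of missing values, equivalent to the loop above
df.isnull().sum()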
6. Transform a column with .apply and a lambda function

df = pd.DataFrame(dict(a=[10, 20, 30, 40, 50]))
square = lambda x: x**2
df["a"] = df["a"].apply(square)
df
7. Convert two DataFrame columns into a dictionary

df = pd.DataFrame(dict(a=["a", "b", "c"], b=[1, 2, 3]))
df_dictionary = dict(zip(df["a"], df["b"]))
df_dictionary
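One common use for such a dictionary, sketched here as an assumption rather than part of the original snippet, is looking up values for keys that appear in another column:

# Map the keys in another DataFrame's column to the looked-up values
other = pd.DataFrame(dict(a=["c", "a", "b"]))
other["b"] = other["a"].map(df_dictionary)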
8. Plot a grid of column distributions

import numpy as np
import matplotlib.pyplot as plt
import seaborn as sns
import pandas as pd

sns.set()

df = pd.DataFrame(dict(a=np.random.randint(0, 100, 100), b=np.arange(0, 100, 1)))

plt.figure(figsize=(15, 7))
plt.subplot(1, 2, 1)
df["b"][df["a"] > 50].hist(color='green', label="bigger than 50")
plt.legend()
plt.subplot(1, 2, 2)
df["b"][df["a"] < 50].hist(color='orange', label="smaller than 50")
plt.legend()
plt.show()
9. Run a t-test on different columns in pandas

from scipy.stats import ttest_rel

data = np.arange(0, 1000, 1)
data_plus_noise = np.arange(0, 1000, 1) + np.random.normal(0, 1, 1000)
df = pd.DataFrame(dict(data=data, data_plus_noise=data_plus_noise))
print(ttest_rel(df["data"], df["data_plus_noise"]))
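If you want to work with the result programmatically rather than just printing it, a minimal sketch (the unpacking and the 0.05 threshold are assumptions, not part of the original snippet):

# ttest_rel returns an object exposing the test statistic and the two-sided p-value
result = ttest_rel(df["data"], df["data_plus_noise"])
if result.pvalue < 0.05:
    print("The two columns differ significantly")
else:
    print("No significant difference detected")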
10. Merge DataFrames

df1 = pd.DataFrame(dict(a=[1, 2, 3], b=[10, 20, 30], col_to_merge=["a", "b", "c"]))
df2 = pd.DataFrame(dict(d=[10, 20, 30], col_to_merge=["a", "b", "c"]))
df_merged = df1.merge(df2, on='col_to_merge')
11. Normalize values with sklearn's MinMaxScaler

from sklearn.preprocessing import MinMaxScaler

scaler = MinMaxScaler()
scores = scaler.fit_transform(df["a"].values.reshape(-1, 1))
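If what you actually need is z-score standardization (zero mean, unit variance) rather than min-max scaling, sklearn's StandardScaler is the usual choice; this variant is an addition, not part of the original snippet:

from sklearn.preprocessing import StandardScaler

# Standardize to zero mean and unit variance instead of scaling to [0, 1]
standardized = StandardScaler().fit_transform(df["a"].values.reshape(-1, 1))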
12. Drop rows with missing values in a specific column

df.dropna(subset=["col_to_remove_NaNs_from"], inplace=True)
13. Conditionally select a subset of a DataFrame

df = pd.DataFrame(dict(result=["pass", "Fail", "pass", "Fail", "distinction", "distinction"]))
pass_index = (df["result"] == "pass") | (df["result"] == "distinction")
df_pass = df[pass_index]
df_pass
14. Pie chart

import matplotlib.pyplot as plt

df = pd.DataFrame(dict(a=[10, 20, 50, 10, 10], b=["A", "B", "C", "D", "E"]))
labels = df["b"]
sizes = df["a"]
plt.pie(sizes, labels=labels, autopct='%1.1f%%', shadow=True, startangle=140)
plt.axis('equal')
plt.show()
15. Convert percentage strings to numeric values

def change_to_numerical(x):
    # Strip the "%" sign and cast the remaining digits to an integer
    return int(x.strip("%"))

df = pd.DataFrame(dict(a=["A", "B", "C"], col_with_percentage=["10%", "20%", "70%"]))
df["col_with_percentage"] = df["col_with_percentage"].apply(change_to_numerical)
df
Conclusion

I think code snippets are extremely valuable: rewriting them from scratch every time is a waste of effort, so having a complete toolkit that streamlines your data analysis pipeline is very helpful.