pandas study notes: 01. Reading and writing data files


1. Reading data
'''
Commonly used functions for reading data
'''
import pandas as pd
'''
	./	refers to the current directory; it can also be omitted, and the file
		is looked up in the current directory anyway
		e.g. ./data/ and data/ both refer to the data folder under the current directory
	../	refers to the parent directory
	/	refers to the root directory
		the root directory is mainly used on Linux systems
	~	refers to the current user's home directory
		e.g. for the Windows user Dongze this is 'C:\Users\Dongze'
'''
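The path shorthands above can be verified from Python itself; a minimal sketch using the standard library (the directory and file names are placeholders):

```python
import os

# "~" expands to the current user's home directory
home = os.path.expanduser("~")  # e.g. C:\Users\Dongze on Windows

# "./data/file.csv" and "data/file.csv" resolve to the same absolute path
a = os.path.abspath("./data/file.csv")
b = os.path.abspath("data/file.csv")
print(a == b)  # True

# "../" points at the parent of the current working directory
parent = os.path.abspath("..")
```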
#read CSV data; returns a DataFrame
data = pd.read_csv("data_dir/xxx.csv")
#a URL can also be used
pd.read_csv("http://localhost/xxx.csv")
#read Excel data; also returns a DataFrame
data = pd.read_excel("data_dir/xxx.xlsx")
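A quick way to see the readers in action is to write a small file first and read it back (`demo.csv` is a placeholder name):

```python
import pandas as pd

df = pd.DataFrame({"name": ["Ann", "Bob"], "score": [90, 85]})

# write without the index column, then read the file back
df.to_csv("demo.csv", index=False)
data = pd.read_csv("demo.csv")
print(data.shape)  # (2, 2)
```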


If the DataFrame is large, pandas truncates the middle rows and columns of the printed output.

We can configure how much of a DataFrame is displayed:

'''
    display options for DataFrame output
'''
#show all rows of a DataFrame
pd.set_option('display.max_rows', None)
#show all columns (None means no limit; a specific number can also be given)
pd.set_option('display.max_columns', None)
#maximum display width of a single column, default 50
pd.set_option('display.max_colwidth', 200)
#do not wrap wide DataFrames across lines (False disables wrapping, True enables it)
pd.set_option('display.expand_frame_repr', False)

With these options set, all of the data is shown in the output.
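Since `set_option` changes the options globally, every later repr is affected. When the wide display is only needed once, `pandas.option_context` applies the same settings temporarily:

```python
import pandas as pd

df = pd.DataFrame({f"col{i}": range(3) for i in range(30)})

# options are restored automatically when the with-block exits
with pd.option_context("display.max_columns", None,
                       "display.expand_frame_repr", False):
    print(df)
```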


2. File reading and writing APIs from the official documentation

Full parameter reference for read_csv in the official documentation:
https://pandas.pydata.org/docs/reference/api/pandas.read_csv.html

pandas.read_csv(
	#file path; the only required parameter, the rest are optional
	filepath_or_buffer, 
	sep=NoDefault.no_default, 
	delimiter=None, 
	header='infer', 
	names=NoDefault.no_default, 
	index_col=None, 
	usecols=None, 
	squeeze=False, 
	prefix=NoDefault.no_default, 
	mangle_dupe_cols=True, 
	dtype=None, 
	engine=None, 
	converters=None, 
	true_values=None, 
	false_values=None, 
	skipinitialspace=False, 
	skiprows=None, 
	skipfooter=0, 
	nrows=None, 
	na_values=None, 
	keep_default_na=True, 
	na_filter=True, 
	verbose=False, 
	skip_blank_lines=True, 
	parse_dates=False, 
	infer_datetime_format=False, 
	keep_date_col=False, 
	date_parser=None, 
	dayfirst=False, 
	cache_dates=True, 
	iterator=False, 
	chunksize=None, 
	compression='infer', 
	thousands=None, 
	decimal='.', 
	lineterminator=None, 
	quotechar='"', 
	quoting=0, 
	doublequote=True, 
	escapechar=None, 
	comment=None, 
	encoding=None, 
	encoding_errors='strict', 
	dialect=None, 
	error_bad_lines=None, 
	warn_bad_lines=None, 
	on_bad_lines=None, 
	delim_whitespace=False, 
	low_memory=True, 
	memory_map=False, 
	float_precision=None, 
	storage_options=None)
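Most of these parameters are rarely needed at the same time. A sketch of the handful that come up most often, using an in-memory buffer instead of a real file:

```python
import io
import pandas as pd

raw = "id;name;score\n1;Ann;90\n2;Bob;85\n3;Cat;70\n"

df = pd.read_csv(
    io.StringIO(raw),
    sep=";",                  # field separator (default is ",")
    usecols=["id", "score"],  # keep only these columns
    dtype={"id": "int64"},    # force a column dtype
    nrows=2,                  # read only the first 2 data rows
)
print(df)  # two rows, columns id and score
```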


#Input/output
#Pickling
#read a pickled object
read_pickle(filepath_or_buffer[, ...])
#Load pickled pandas object (or any object) from file.
#write to a pickle file
DataFrame.to_pickle(path[, compression, ...])
#Pickle (serialize) object to file.
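A round trip through pickle preserves the DataFrame exactly, dtypes included (`demo.pkl` is a placeholder name):

```python
import pandas as pd

df = pd.DataFrame({"a": [1, 2, 3], "b": [0.5, 1.5, 2.5]})
df.to_pickle("demo.pkl")

restored = pd.read_pickle("demo.pkl")
print(restored.equals(df))  # True
```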

#Flat file
read_table(filepath_or_buffer[, sep, ...])
#Read general delimited file into DataFrame.
read_csv(filepath_or_buffer[, sep, ...])
#Read a comma-separated values (csv) file into DataFrame.
DataFrame.to_csv([path_or_buf, sep, na_rep, ...])
#Write object to a comma-separated values (csv) file.
read_fwf(filepath_or_buffer[, colspecs, ...])
#Read a table of fixed-width formatted lines into DataFrame.
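`read_fwf` is the easiest of these to overlook; a sketch with explicit column boundaries, reading from an in-memory buffer:

```python
import io
import pandas as pd

raw = ("name  num\n"
       "Ann    12\n"
       "Bob    34\n")

# colspecs gives (start, end) character positions for each column
df = pd.read_fwf(io.StringIO(raw), colspecs=[(0, 4), (6, 9)])
print(df)
```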

#Clipboard
read_clipboard([sep])
#Read text from clipboard and pass to read_csv.
DataFrame.to_clipboard([excel, sep])
#Copy object to the system clipboard.

#Excel
read_excel(io[, sheet_name, header, names, ...])
#Read an Excel file into a pandas DataFrame.
DataFrame.to_excel(excel_writer[, ...])
#Write object to an Excel sheet.
ExcelFile.parse([sheet_name, header, names, ...])
#Parse specified sheet(s) into a DataFrame.
Styler.to_excel(excel_writer[, sheet_name, ...])
#Write Styler to an Excel sheet.
ExcelWriter(path[, engine, date_format, ...])
#Class for writing DataFrame objects into Excel sheets.

#JSON
read_json([path_or_buf, orient, typ, dtype, ...])
#Convert a JSON string to pandas object.
to_json(path_or_buf, obj[, orient, ...])
#Convert the object to a JSON string.
build_table_schema(data[, index, ...])
#Create a Table schema from data.
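A minimal JSON round trip; note that recent pandas versions expect a literal JSON string to be wrapped in `StringIO` rather than passed directly to `read_json`:

```python
import io
import pandas as pd

df = pd.DataFrame({"a": [1, 2], "b": ["x", "y"]})

# orient="records" produces a list of row objects
text = df.to_json(orient="records")
print(text)  # [{"a":1,"b":"x"},{"a":2,"b":"y"}]

restored = pd.read_json(io.StringIO(text), orient="records")
print(restored.equals(df))  # True
```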

#HTML
read_html(io[, match, flavor, header, ...])
#Read HTML tables into a list of DataFrame objects.
DataFrame.to_html([buf, columns, col_space, ...])
#Render a DataFrame as an HTML table.
Styler.to_html([buf, table_uuid, ...])
#Write Styler to a file, buffer or string in HTML-CSS format.
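When no buffer is given, `DataFrame.to_html` returns the markup as a string, which is handy for embedding tables in reports:

```python
import pandas as pd

df = pd.DataFrame({"a": [1, 2]})

html = df.to_html(index=False)
print("<table" in html)  # True
```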

#XML
read_xml(path_or_buffer[, xpath, ...])
#Read XML document into a DataFrame object.
DataFrame.to_xml([path_or_buffer, index, ...])
#Render a DataFrame to an XML document.

#Latex
Dataframe.to_latex([buf, columns, ...])
#Render object to a LaTeX tabular, longtable, or nested table/tabular.
Styler.to_latex([buf, column_format, ...])
#Write Styler to a file, buffer or string in LaTeX format.

#HDFStore: PyTables (HDF5)
read_hdf(path_or_buf[, key, mode, errors, ...])
#Read from the store, close it if we opened it.
HDFStore.put(key, value[, format, index, ...])
#Store object in HDFStore.
HDFStore.append(key, value[, format, axes, ...])
#Append to Table in file.
HDFStore.get(key)
#Retrieve pandas object stored in file.
HDFStore.select(key[, where, start, stop, ...])
#Retrieve pandas object stored in file, optionally based on where criteria.
HDFStore.info()
#Print detailed information on the store.
HDFStore.keys([include])
#Return a list of keys corresponding to objects stored in HDFStore.
HDFStore.groups()
#Return a list of all the top-level nodes.
HDFStore.walk([where])
#Walk the pytables group hierarchy for pandas objects.

Original source: 内存溢出 (http://outofmemory.cn/zaji/5652451.html). Please credit the source when reposting.
