Saving an RDD to a file


Method 1:
scala> val p= spark.read.format("json").load("file:///usr/local/spark/examples/src/main/resources/people.json")
p: org.apache.spark.sql.DataFrame = [age: bigint, name: string]
 
scala> p.select("name", "age").write.format("csv").save("file:///usr/local/spark/mycode/newpeople.csv")

//Here select("name", "age") picks which columns to save; write.format("csv").save() then writes them out as a CSV file
 

write.format() supports output formats such as json, parquet, jdbc, orc, libsvm, csv, and text. To write a plain text file, use write.format("text") (note that the text format requires the DataFrame to contain a single string-typed column).
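As a rough illustration of how two of these formats differ, here is a plain-Python sketch (using only the standard library, not Spark's writers) rendering the same records as JSON Lines versus CSV:

```python
import csv, io, json

# Two sample records, shaped like the people data used above
records = [{"name": "Andy", "age": 30}, {"name": "Justin", "age": 19}]

# JSON Lines: one JSON object per line (the layout spark.read.format("json") expects)
json_lines = "\n".join(json.dumps(r) for r in records)

# CSV: comma-separated values only; no field names unless a header row is written
buf = io.StringIO()
writer = csv.writer(buf)
for r in records:
    writer.writerow([r["name"], r["age"]])
csv_text = buf.getvalue()

print(json_lines)   # {"name": "Andy", "age": 30} ...
print(csv_text)     # Andy,30 ...
```

JSON keeps the field names in every record, while CSV relies purely on column position, which is why the select("name", "age") step above matters: it fixes the column order of the saved file.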

  • View the output
scala> val t = sc.textFile("file:///usr/local/spark/mycode/newpeople.csv")
t: org.apache.spark.rdd.RDD[String] = file:///usr/local/spark/mycode/newpeople.csv MapPartitionsRDD[1] at textFile at <console>:24
scala> t.foreach(println)
Justin,19
Michael,
Andy,30
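The empty field after Michael comes from the null age in people.json. The following pure-Python sketch (not the Spark CSV writer itself; the three JSON lines are the contents of people.json shipped in the Spark examples directory) shows how each JSON-lines record maps to a CSV row:

```python
import json

# Contents of examples/src/main/resources/people.json (JSON Lines: one object per line)
lines = [
    '{"name":"Michael"}',
    '{"name":"Andy", "age":30}',
    '{"name":"Justin", "age":19}',
]

def to_csv_row(line):
    rec = json.loads(line)
    # A missing/null age becomes an empty CSV field, which is why
    # the saved file contains the row "Michael,"
    age = rec.get("age")
    return f'{rec["name"]},{"" if age is None else age}'

rows = [to_csv_row(l) for l in lines]
print(rows)  # ['Michael,', 'Andy,30', 'Justin,19']
```

Note that the foreach(println) output above appears in a different order than the source file: row order across RDD partitions is not guaranteed.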
Method 2:
scala> val p = spark.read.format("json").load("file:///usr/local/spark/examples/src/main/resources/people.json")
p: org.apache.spark.sql.DataFrame = [age: bigint, name: string]
 
scala> p.rdd.saveAsTextFile("file:///usr/local/spark/mycode/newpeople.txt")
  • View the output
scala> val t = sc.textFile("file:///usr/local/spark/mycode/newpeople.txt")
t: org.apache.spark.rdd.RDD[String] = file:///usr/local/spark/mycode/newpeople.txt MapPartitionsRDD[11] at textFile at <console>:28
 
scala> t.foreach(println)
[null,Michael]
[30,Andy]
[19,Justin]
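The bracketed lines look different from the CSV output because p.rdd yields an RDD[Row], and saveAsTextFile writes each Row's toString, which renders the fields in schema order (age first, then name) with missing values printed as null. A small Python sketch of that rendering (an illustration of the format, not Spark's actual Row class):

```python
# Each tuple mirrors a Row in schema order: (age, name), with None for a null age
records = [(None, "Michael"), (30, "Andy"), (19, "Justin")]

def row_str(age, name):
    # Mimics Row.toString: "[field1,field2]", nulls printed as "null"
    return f'[{"null" if age is None else age},{name}]'

lines = [row_str(a, n) for a, n in records]
print(lines)  # ['[null,Michael]', '[30,Andy]', '[19,Justin]']
```

If you want plain "name,age" text instead of this Row rendering, method 1 with write.format("csv") is the simpler route.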

Source: 内存溢出 (original URL: http://outofmemory.cn/zaji/5682220.html)
