scala> val p = spark.read.format("json").load("file:///usr/local/spark/examples/src/main/resources/people.json")
p: org.apache.spark.sql.DataFrame = [age: bigint, name: string]
scala> p.select("name", "age").write.format("csv").save("file:///usr/local/spark/mycode/newpeople.csv")

Here select("name", "age") chooses which columns to save, and write.format("csv").save() writes them out as CSV. Note that save() actually creates a directory named newpeople.csv containing one or more part files, not a single CSV file.
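If you need a header row, a different delimiter, or overwrite behavior, the CSV data source accepts options before save(). A minimal sketch, assuming the same DataFrame p as above; the output path is made up for illustration:

```scala
// Sketch only: standard options of Spark's built-in CSV data source.
p.select("name", "age")
  .write
  .format("csv")
  .option("header", "true")  // write a header line with the column names
  .option("sep", ";")        // use ';' instead of the default ','
  .mode("overwrite")         // replace the output directory if it already exists
  .save("file:///usr/local/spark/mycode/newpeople_with_header.csv")
```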
write.format() supports json, parquet, jdbc, orc, libsvm, csv, text and other output formats. To write a plain text file, use write.format("text"); note that the text data source only accepts a DataFrame with a single string column.
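A minimal sketch of writing text output, assuming the DataFrame p from above (the output path is illustrative): the columns are first merged into one string column with concat_ws, since the text source rejects multi-column input.

```scala
import org.apache.spark.sql.functions.concat_ws

// Merge name and age into a single "value" column, then save as text.
// concat_ws casts its inputs to string and skips null fields.
p.select(concat_ws(",", p("name"), p("age")).as("value"))
  .write
  .format("text")
  .save("file:///usr/local/spark/mycode/newpeople_text.txt")
```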
- View the saved file
scala> val t = sc.textFile("file:///usr/local/spark/mycode/newpeople.csv")
t: org.apache.spark.rdd.RDD[String] = file:///usr/local/spark/mycode/newpeople.csv MapPartitionsRDD[1] at textFile at <console>:24
scala> t.foreach(println)
Justin,19
Michael,
Andy,30

(Michael's age is null in the source JSON, hence the empty second field.)
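The saved directory can also be loaded straight back as a DataFrame instead of an RDD of strings. A minimal sketch; the column names passed to toDF are assumptions, since the file was written without a header and Spark would otherwise assign the default names _c0 and _c1:

```scala
// Read the CSV directory back as a DataFrame and restore column names.
val reloaded = spark.read
  .format("csv")
  .load("file:///usr/local/spark/mycode/newpeople.csv")
  .toDF("name", "age")

reloaded.show()
```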
The second save method converts the DataFrame to an RDD and calls saveAsTextFile():

scala> val p = spark.read.format("json").load("file:///usr/local/spark/examples/src/main/resources/people.json")
p: org.apache.spark.sql.DataFrame = [age: bigint, name: string]
scala> p.rdd.saveAsTextFile("file:///usr/local/spark/mycode/newpeople.txt")
- View the result
scala> val t = sc.textFile("file:///usr/local/spark/mycode/newpeople.txt")
t: org.apache.spark.rdd.RDD[String] = file:///usr/local/spark/mycode/newpeople.txt MapPartitionsRDD[11] at textFile at <console>:28
scala> t.foreach(println)
[null,Michael]
[30,Andy]
[19,Justin]

Each line here is the Row's toString output, which is why the values appear in brackets.
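If you want cleaner lines than Row.toString, you can map each Row to a formatted string before saving. A minimal sketch, assuming the DataFrame p from above; the output path is illustrative:

```scala
// Format each Row as "name,age" instead of relying on Row.toString.
// getAs returns null for fields that are null in the source JSON,
// so Michael's line comes out as "Michael,null".
p.rdd
  .map(row => s"${row.getAs[String]("name")},${row.getAs[Any]("age")}")
  .saveAsTextFile("file:///usr/local/spark/mycode/newpeople_formatted.txt")
```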