hdfs dfs -mkdir -p /app/data/allprovinces
hdfs dfs -mkdir -p /app/data/events/products
hdfs dfs -put /opt/data/allprovinces.txt /app/data/allprovinces
hdfs dfs -put /opt/data/products.txt /app/data/allprovinces
hdfs dfs -mv /app/data/allprovinces/products.txt /app/data/events/products/
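To confirm the files landed where the external tables below expect them, a quick listing helps:
hdfs dfs -ls /app/data/allprovinces
hdfs dfs -ls /app/data/events/products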
create_namespace 'market'
create 'market:province_market','market','info'
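To verify the namespace and table were created, the standard HBase shell checks are:
list_namespace
describe 'market:province_market'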
drop database if exists market;
create database market;
create external table market.ex_allprovinces(
name string,
abbr string
)
row format delimited fields terminated by '\t'
location '/app/data/allprovinces';
create external table market.ex_products(
name string,
price double,
craw_time string,
market string,
province string,
city string
)
row format delimited fields terminated by '\t'
location '/app/data/events/products';
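A quick sanity check that both external tables pick up the files (standard HiveQL):
select * from market.ex_allprovinces limit 5;
select * from market.ex_products limit 5;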
Data imported successfully; the query results are shown in the screenshot below.
Mapping Hive to HBase
CREATE external TABLE market.ex_province_market(
abbr string,
market_count int,
province_name string
)
STORED BY 'org.apache.hadoop.hive.hbase.HBaseStorageHandler'
WITH SERDEPROPERTIES ("hbase.columns.mapping" = ":key,market:count,info:name")
TBLPROPERTIES ("hbase.table.name" = "market:province_market");
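With this mapping, abbr becomes the HBase RowKey, and rows written through the Hive table land directly in HBase. Once created, the mapped table can be queried like any other, for example:
select * from market.ex_province_market limit 5;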
Task: Count the total number of agricultural product markets in each province and save each province's market count into the HBase province_market table (if a province has no agricultural product markets, the count is 0). The RowKey is the province's English abbreviation (from allprovinces.txt); the market count is stored in the count column of the market column family, and the province name in the name column of the info column family.
Code:
insert into market.ex_province_market
select ap.abbr abbr, nvl(p.cnt,0) market_count, ap.name province_name from market.ex_allprovinces ap left join (
select province, count(distinct market) cnt from market.ex_products where trim(province)!='' group by province
) p on ap.name=p.province;
Hive query result:
HBase query result:
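Because of the left join plus nvl, provinces without any market rows get market_count = 0, which can be spot-checked with:
select * from market.ex_province_market where market_count = 0;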
① Based on the number of agricultural product types, find the top 3 provinces.
Code:
import org.apache.spark.{SparkConf, SparkContext}
import org.apache.spark.rdd.RDD

val conf = new SparkConf().setAppName("exam").setMaster("local[*]")
val sc = new SparkContext(conf)
val rdd: RDD[String] = sc.textFile("hdfs://192.168.10.141:9000/app/data/events/products/products.txt")
rdd.map(x=>{
val info: Array[String] = x.split("\t",-1)
//((province, product name), 1)
((info(4),info(0)),1)
}).filter(_._1._1!="")
.groupByKey()          // deduplicate (province, product) pairs
.map(_._1)             // (province, product)
.groupByKey()          // province -> products
.mapValues(_.size)     // province -> distinct product-type count
.map(x=>(1,x))         // single key so all provinces are ranked together
.groupByKey()
.map(_._2.toList.sortBy(-_._2).take(3))
.foreach(println)
Result screenshot:
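As an aside, the same result (top-3 provinces by distinct product-type count) can be computed with fewer shuffles using distinct and reduceByKey instead of the chained groupByKey calls; this is a minimal alternative sketch, not part of the original answer:
rdd.map(x=>{
val info = x.split("\t",-1)
(info(4),info(0))      // (province, product name)
}).filter(_._1!="")
.distinct()            // one record per (province, product) pair
.map{case (province,_) => (province,1)}
.reduceByKey(_+_)      // province -> distinct product-type count
.sortBy(-_._2)         // rank provinces by count, descending
.take(3)
.foreach(println)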
② Based on the number of agricultural product types, find the top 3 agricultural product markets in each province.
Code:
rdd.map(x=>{
val info: Array[String] = x.split("\t",-1)
//((province, market, product name), 1)
((info(4),info(3),info(0)),1)
}).filter(_._1._1!="")
.groupByKey()          // deduplicate (province, market, product) triples
.map(x=>{
((x._1._1,x._1._2),x._1._3)            // ((province, market), product)
}).groupByKey()
.map(x=>{
(x._1._1,(x._1._2,x._2.size))          // (province, (market, product-type count))
}).groupByKey()
.map(x=>{
(x._1,x._2.toList.sortBy(-_._2).take(3))
})
.foreach(println)
Result screenshot:
Question: In spark-shell, load products.txt into a DataFrame and compute the price fluctuation trend of each agricultural product in Shanxi province, i.e., the daily mean price, printing the result to the console.
Code:
import org.apache.spark.sql.{DataFrame, SparkSession}
import org.apache.spark.sql.functions._

val spark = SparkSession.builder().master("local[*]").appName("examspark")
.getOrCreate()
import spark.implicits._
val df: DataFrame = spark.read.format("text")
.load("hdfs://192.168.10.141:9000/app/data/events/products/products.txt").map(x=>{
val info = x.toString()
.replaceAll("\\[","")  // Row.toString wraps the line in [ ], strip the brackets
.replaceAll("\\]","")
.split("\t",-1)
(info(0),info(1),info(2),info(3),info(4),info(5))
}).toDF("name","price","craw_time","market","province","city")
df.where("trim(province)='山西'")
.groupBy("province","name","craw_time")
.agg(sum("price").as("sumprice")
,count("price").as("cnt")
,max("price").as("maxprice")
,min("price").as("minprice"))
// trimmed mean: with more than two quotes per day, drop the max and min before averaging
.withColumn("pavg",when($"cnt">2,($"sumprice"-$"maxprice"-$"minprice")/($"cnt"-2)).otherwise($"sumprice"/$"cnt"))
.select("province","name","craw_time","pavg")
.show()
Result screenshot:
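The pavg column is a trimmed daily mean: when a product has more than two price records for a day, the highest and lowest prices are dropped before averaging; otherwise a plain mean is used. A minimal equivalent sketch written with expr, assuming the same df as above:
df.where("trim(province)='山西'")
.groupBy("province","name","craw_time")
.agg(expr("case when count(price) > 2 then (sum(price) - max(price) - min(price)) / (count(price) - 2) else avg(price) end").as("pavg"))
.show()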