1. Error message when executing a SQL statement.
hive> insert into table student values(1,'abc');
Query ID = atguigu_20200814150018_318272cf-ede4-420c-9f86-c5357b57aa11
Total jobs = 1
Launching Job 1 out of 1
In order to change the average load for a reducer (in bytes):
  set hive.exec.reducers.bytes.per.reducer=<number>
In order to limit the maximum number of reducers:
  set hive.exec.reducers.max=<number>
In order to set a constant number of reducers:
  set mapreduce.job.reduces=<number>
Job failed with java.lang.ClassNotFoundException: org.apache.spark.AccumulatorParam
FAILED: Execution Error, return code 3 from org.apache.hadoop.hive.ql.exec.spark.SparkTask. Spark job failed during runtime. Please check stacktrace for the root cause.
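As a first diagnostic step, it helps to see which Spark jars Hive is actually picking up. A minimal sketch, where the paths and the /spark-jars HDFS directory are illustrative assumptions rather than values from the original post:

# Spark client jars bundled inside the Hive installation
ls $HIVE_HOME/lib | grep -i spark
# Spark version actually installed locally
ls $SPARK_HOME/jars | grep spark-core
# Jars Hive on Spark pulls from HDFS, if spark.yarn.jars is configured
hdfs dfs -ls /spark-jars | grep spark-core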
Cause: Hive 3.1.2 paired with Spark 3.0.0 is not an officially released combination, so Hive has to be recompiled against Spark 3.0.0 yourself (the legacy org.apache.spark.AccumulatorParam class that Hive's Spark client tries to load was removed in Spark 3.x).
It is recommended to use the Hive + Spark version pairings released officially.
Install a Hive that was compiled together with its matching Spark version; the version pairings currently recommended officially are listed on the "Hive on Spark: Getting Started" page of the Hive wiki.
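One way to confirm which Spark version a given Hive release was built against is to check the spark.version property in the Hive source pom.xml. A minimal sketch, assuming the apache-hive-3.1.2 source tarball has been downloaded and extracted (the directory name is an assumption):

# The spark.version property shows the Spark this Hive release was compiled against
grep -m1 '<spark.version>' apache-hive-3.1.2-src/pom.xml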
If the versions do match and the error still occurs:
If NameNode HA is configured, then hive-site.xml should reference the nameservice in spark.yarn.jars, for example:

<property>
    <name>spark.yarn.jars</name>
    <value>hdfs://mycluster/spark-jars/*</value>
</property>
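For reference, a hedged sketch of putting the Spark jars into that HDFS path in the first place; the local Spark directory name is an assumption from a typical install, not something stated in the original post:

# Upload the Spark jars referenced by spark.yarn.jars to the HA nameservice
hadoop fs -mkdir -p hdfs://mycluster/spark-jars
hadoop fs -put /opt/module/spark-3.0.0-bin-hadoop3.2/jars/* hdfs://mycluster/spark-jars/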
If none of the above is the cause:
Remove Hive and install it again; one possible cause is running mv on the extracted directory before tar had finished unpacking, which leaves an incomplete set of jars. A quick check is sketched below.
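A minimal sketch for verifying the extraction, assuming the original apache-hive-3.1.2-bin.tar.gz tarball is still on disk and /opt/module/hive is the (assumed) install path:

# Count the jars the tarball should deliver vs. what the suspect install actually has
tar -tzf apache-hive-3.1.2-bin.tar.gz | grep -c '/lib/.*\.jar$'
ls /opt/module/hive/lib/*.jar | wc -l
# If the counts differ, wipe the install, re-extract fully, and only then mv the directory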