[Hadoop] Cluster job hangs at Kill Command = /opt/module/hadoop-2.7.2/bin/hadoop job -kill job

When a Hive query hangs and makes no progress, the console output looks like this:

hive (default)> select count(*) cnt from emp;

Query ID = root_20200220175612_bb456a03-2298-4d20-82b9-c0a96ae859a0
Total jobs = 1
Launching Job 1 out of 1
Number of reduce tasks determined at compile time: 1
In order to change the average load for a reducer (in bytes):
  set hive.exec.reducers.bytes.per.reducer=<number>
In order to limit the maximum number of reducers:
  set hive.exec.reducers.max=<number>
In order to set a constant number of reducers:
  set mapreduce.job.reduces=<number>
Starting Job = job_1582192539192_0001, Tracking URL = http://hadoop103:8088/proxy/application_1582192539192_0001/
Kill Command = /opt/module/hadoop-2.7.2/bin/hadoop job  -kill job_1582192539192_0001
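A job that stalls right after printing the Kill Command line usually means YARN never allocates a container for the ApplicationMaster. Before editing any configuration, it is worth confirming whether the cluster actually has free resources. A small sketch of that check, using YARN's standard cluster-metrics REST endpoint (the hadoop103:8088 address is taken from the Tracking URL in the log above; the 1024 MB / 1 vcore thresholds are illustrative assumptions):

```python
import json
from urllib.request import urlopen  # used for the live query shown in the comment below

def cluster_has_capacity(metrics, min_mb=1024, min_vcores=1):
    """True if YARN reports at least one active NodeManager plus enough free
    memory and vcores to start a single container (the stuck ApplicationMaster
    needs one before the job can move past the Kill Command line)."""
    m = metrics["clusterMetrics"]
    return (m["activeNodes"] > 0
            and m["availableMB"] >= min_mb
            and m["availableVirtualCores"] >= min_vcores)

# Live check against the ResourceManager from the log above:
#   with urlopen("http://hadoop103:8088/ws/v1/cluster/metrics") as r:
#       print(cluster_has_capacity(json.load(r)))

# Offline illustration using the JSON shape the endpoint returns:
sample = {"clusterMetrics": {"activeNodes": 0, "availableMB": 0,
                             "availableVirtualCores": 0}}
print(cluster_has_capacity(sample))  # no active nodes / no free memory -> False
```

If this reports no active nodes or no free memory, the hang is a resource problem rather than a Hive problem, which matches the cause described below.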

Cause: when Hive runs a query, it is actually translated into a MapReduce job. The runtime memory may have been set too small when the cluster was built, and adjusting the YARN settings fixes it. Open the yarn-site.xml file under the Hadoop installation directory and make sure it contains the following:


  
<configuration>
  <property>
    <name>yarn.nodemanager.aux-services</name>
    <value>mapreduce_shuffle</value>
  </property>
  <property>
    <name>yarn.nodemanager.aux-services.mapreduce_shuffle.class</name>
    <value>org.apache.hadoop.mapred.ShuffleHandler</value>
  </property>
</configuration>

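The two properties above only enable the MapReduce shuffle service. Since the stated cause is memory set too small, the memory limits themselves live in the same yarn-site.xml. A hedged sketch of what those extra properties might look like; the property names are standard YARN settings, but the 512/4096 MB values are illustrative assumptions to tune for your machines, not values from the original post:

```xml
<!-- Illustrative memory settings for yarn-site.xml; values are assumptions -->
<property>
  <name>yarn.nodemanager.resource.memory-mb</name>
  <value>4096</value>
</property>
<property>
  <name>yarn.scheduler.minimum-allocation-mb</name>
  <value>512</value>
</property>
<property>
  <name>yarn.scheduler.maximum-allocation-mb</name>
  <value>4096</value>
</property>
```

After editing, copy the file to every node and restart YARN (sbin/stop-yarn.sh, then sbin/start-yarn.sh on the ResourceManager host) so the new limits take effect.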
If the same hang occurs when running a plain MapReduce job, the same fix applies.

Source: 内存溢出 (outofmemory.cn), original article: http://outofmemory.cn/zaji/5708950.html. Please credit the source when sharing.
