HIVE: Fixing Hive insert failures where the data does not show up


First, let's look at the error message:

Ended Job = job_1639620592561_0002 with errors
Error during job, obtaining debugging information...
Examining task ID: task_1639620592561_0002_m_000000 (and more) from job job_1639620592561_0002

Task with the most failures(4): 
-----
Task ID:
  task_1639620592561_0002_m_000000

URL:
  http://hadoop101:8088/taskdetails.jsp?jobid=job_1639620592561_0002&tipid=task_1639620592561_0002_m_000000
-----
Diagnostic Messages for this Task:
[2021-12-16 10:56:22.545]Container [pid=2417,containerID=container_1639620592561_0002_01_000005] is running 263662080B beyond the 'VIRTUAL' memory limit. Current usage: 95.8 MB of 1 GB physical memory used; 2.3 GB of 2.1 GB virtual memory used. Killing container.
Dump of the process-tree for container_1639620592561_0002_01_000005 :
        |- PID PPID PGRPID SESSID CMD_NAME USER_MODE_TIME(MILLIS) SYSTEM_TIME(MILLIS) VMEM_USAGE(BYTES) RSSMEM_USAGE(PAGES) FULL_CMD_LINE
        |- 2429 2417 2417 2417 (java) 532 22 2508722176 24228 /opt/module/jdk1.8.0_161/bin/java -Djava.net.preferIPv4Stack=true -Dhadoop.metrics.log.level=WARN -Xmx820m -Djava.io.tmpdir=/opt/module/hadoop-3.1.3/data/nm-local-dir/usercache/atguigu/appcache/application_1639620592561_0002/container_1639620592561_0002_01_000005/tmp -Dlog4j.configuration=container-log4j.properties -Dyarn.app.container.log.dir=/opt/module/hadoop-3.1.3/logs/userlogs/application_1639620592561_0002/container_1639620592561_0002_01_000005 -Dyarn.app.container.log.filesize=0 -Dhadoop.root.logger=INFO,CLA -Dhadoop.root.logfile=syslog org.apache.hadoop.mapred.YarnChild 192.168.17.42 34985 attempt_1639620592561_0002_m_000000_3 5 
        |- 2417 2416 2417 2417 (bash) 0 0 9797632 286 /bin/bash -c /opt/module/jdk1.8.0_161/bin/java -Djava.net.preferIPv4Stack=true -Dhadoop.metrics.log.level=WARN   -Xmx820m -Djava.io.tmpdir=/opt/module/hadoop-3.1.3/data/nm-local-dir/usercache/atguigu/appcache/application_1639620592561_0002/container_1639620592561_0002_01_000005/tmp -Dlog4j.configuration=container-log4j.properties -Dyarn.app.container.log.dir=/opt/module/hadoop-3.1.3/logs/userlogs/application_1639620592561_0002/container_1639620592561_0002_01_000005 -Dyarn.app.container.log.filesize=0 -Dhadoop.root.logger=INFO,CLA -Dhadoop.root.logfile=syslog org.apache.hadoop.mapred.YarnChild 192.168.17.42 34985 attempt_1639620592561_0002_m_000000_3 5 1>/opt/module/hadoop-3.1.3/logs/userlogs/application_1639620592561_0002/container_1639620592561_0002_01_000005/stdout 2>/opt/module/hadoop-3.1.3/logs/userlogs/application_1639620592561_0002/container_1639620592561_0002_01_000005/stderr  

[2021-12-16 10:56:22.694]Container killed on request. Exit code is 143
[2021-12-16 10:56:22.738]Container exited with a non-zero exit code 143. 


FAILED: Execution Error, return code 2 from org.apache.hadoop.hive.ql.exec.mr.MapRedTask

From the error message above we can see the cause: the MapReduce task exceeded its container's virtual-memory limit, so YARN killed the container. Simply put, your virtual machine does not have enough memory for the job.
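A quick sanity check on the numbers (assuming YARN's default settings): the container was allocated 1 GB of physical memory, and the default yarn.nodemanager.vmem-pmem-ratio of 2.1 caps its virtual memory at 1 GB × 2.1 = 2.1 GB. The task actually used about 2.3 GB of virtual memory, which is why the NodeManager killed the container with exit code 143.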

Solutions (pick either option 1 or 2):

1. Set Hive to local mode.

set hive.exec.mode.local.auto=true;
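
Local mode runs small queries in a single local JVM instead of launching YARN containers, which is why the container memory limit no longer bites. Hive only falls back to local mode when the job is small enough; the thresholds below control when that happens. This is a minimal sketch using the usual default values — verify them against your own hive-site.xml:

-- enable automatic local mode
set hive.exec.mode.local.auto=true;
-- total input size must stay below this many bytes (default 128 MB)
set hive.exec.mode.local.auto.inputbytes.max=134217728;
-- and the job may need at most this many input files / map tasks (default 4)
set hive.exec.mode.local.auto.input.files.max=4;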

2. Give the MapReduce tasks more container/heap memory (see the sketch below).
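
If you want to keep running the job on the cluster instead, you can request bigger containers and a matching JVM heap at the session level before the INSERT. A minimal sketch, assuming MapReduce on YARN; the 2048 MB values are illustrative and should be sized to what your machine can actually spare:

-- request 2 GB containers for the map and reduce tasks
set mapreduce.map.memory.mb=2048;
set mapreduce.reduce.memory.mb=2048;
-- keep the JVM heap below the container size (roughly 80%)
set mapreduce.map.java.opts=-Xmx1638m;
set mapreduce.reduce.java.opts=-Xmx1638m;

On the cluster side, raising yarn.nodemanager.vmem-pmem-ratio (or disabling the check with yarn.nodemanager.vmem-check-enabled=false) in yarn-site.xml and restarting YARN also makes this error go away, at the cost of looser memory enforcement.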
