Hadoop already ships with the default configuration files; the four custom configuration files core-site.xml, hdfs-site.xml, yarn-site.xml and mapred-site.xml live under $HADOOP_HOME/etc/hadoop, and you can modify them to fit your project's needs.
On my machine that directory is:
/opt/module/hadoop-3.1.3/etc/hadoop
Configuring the cluster
Core configuration file: core-site.xml
cd $HADOOP_HOME/etc/hadoop
vim core-site.xml
The content to configure is as follows:
<configuration>
    <!-- Address of the NameNode (HDFS entry point) -->
    <property>
        <name>fs.defaultFS</name>
        <value>hdfs://hadoop102:8020</value>
    </property>
    <!-- Base directory for Hadoop data -->
    <property>
        <name>hadoop.tmp.dir</name>
        <value>/opt/module/hadoop-3.1.3/data</value>
    </property>
    <!-- Static user for the HDFS web UI -->
    <property>
        <name>hadoop.http.staticuser.user</name>
        <value>atguigu</value>
    </property>
</configuration>
(The hadoop.http.staticuser.user = atguigu property does not have to be set right away; it can be added later.)
Next, configure hdfs-site.xml in the same directory:
vim hdfs-site.xml
The content to configure is as follows:
<configuration>
    <!-- NameNode web UI address -->
    <property>
        <name>dfs.namenode.http-address</name>
        <value>hadoop102:9870</value>
    </property>
    <!-- Secondary NameNode web UI address -->
    <property>
        <name>dfs.namenode.secondary.http-address</name>
        <value>hadoop104:9868</value>
    </property>
</configuration>
YARN configuration file: yarn-site.xml
vim yarn-site.xml
<configuration>
    <!-- Shuffle service required by MapReduce -->
    <property>
        <name>yarn.nodemanager.aux-services</name>
        <value>mapreduce_shuffle</value>
    </property>
    <!-- Node that runs the ResourceManager -->
    <property>
        <name>yarn.resourcemanager.hostname</name>
        <value>hadoop103</value>
    </property>
    <!-- Environment variables that containers are allowed to inherit -->
    <property>
        <name>yarn.nodemanager.env-whitelist</name>
        <value>JAVA_HOME,HADOOP_COMMON_HOME,HADOOP_HDFS_HOME,HADOOP_CONF_DIR,CLASSPATH_PREPEND_DISTCACHE,HADOOP_YARN_HOME,HADOOP_MAPRED_HOME</value>
    </property>
</configuration>
Note: the yarn.nodemanager.env-whitelist setting above no longer needs to be configured from Hadoop 3.2.x onward; it works around a small bug in earlier Hadoop 3.x releases.
MapReduce configuration file: mapred-site.xml
vim mapred-site.xml
<configuration>
    <!-- Run MapReduce jobs on YARN -->
    <property>
        <name>mapreduce.framework.name</name>
        <value>yarn</value>
    </property>
</configuration>
By default, MapReduce runs in local mode.
So far the configuration has only been done on hadoop102; it needs to be distributed to hadoop103 and hadoop104.
Distribute the configured Hadoop configuration files across the cluster:
xsync /opt/module/hadoop-3.1.3/etc/hadoop/
[root@hadoop102 hadoop]# xsync /opt/module/hadoop-3.1.3/etc/hadoop/
==================== hadoop102 ====================
sending incremental file list
...
==================== hadoop103 ====================
sending incremental file list
hadoop/
hadoop/.hdfs-site.xml.swp
hadoop/core-site.xml
hadoop/hdfs-site.xml
hadoop/mapred-site.xml
hadoop/yarn-site.xml
hadoop/shellprofile.d/
...
==================== hadoop104 ====================
sending incremental file list
hadoop/
hadoop/.hdfs-site.xml.swp
hadoop/capacity-scheduler.xml
hadoop/configuration.xsl
hadoop/container-executor.cfg
hadoop/core-site.xml
...
Go to hadoop103 and hadoop104 and check that the files were distributed:
[atguigu@hadoop103 ~]$ cat /opt/module/hadoop-3.1.3/etc/hadoop/core-site.xml
[atguigu@hadoop104 ~]$ cat /opt/module/hadoop-3.1.3/etc/hadoop/core-site.xml
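For context, xsync is not a Hadoop command; it is the small rsync-based distribution script written earlier in this setup. A minimal sketch of what such a script might look like (my assumptions: the three hostnames hadoop102/103/104, passwordless SSH between them, and rsync installed; your actual script may differ):

#!/bin/bash
# xsync (sketch): push files or directories to every node in the cluster.
if [ $# -lt 1 ]; then
    echo "Usage: xsync <file-or-dir>..."
    exit 1
fi
for host in hadoop102 hadoop103 hadoop104; do
    echo "==================== $host ===================="
    for file in "$@"; do
        if [ -e "$file" ]; then
            pdir=$(cd -P "$(dirname "$file")" && pwd)   # absolute parent directory
            fname=$(basename "$file")
            ssh "$host" "mkdir -p $pdir"                # make sure the target directory exists
            rsync -av "$pdir/$fname" "$host:$pdir"      # copy, preserving attributes
        else
            echo "$file does not exist!"
        fi
    done
done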
At this point the cluster configuration is complete.
Starting the whole cluster
Configure the workers file
vim /opt/module/hadoop-3.1.3/etc/hadoop/workers
Add the following to the file:
hadoop102
hadoop103
hadoop104
Note: the entries in this file must not have trailing spaces, and the file must not contain blank lines.
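One quick way to check for trailing spaces or blank lines (assuming GNU cat, whose -A flag marks the end of every line with $):

cat -A /opt/module/hadoop-3.1.3/etc/hadoop/workers
# every line should be just a hostname ending directly in "$",
# and there should be no line consisting of "$" alone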
After editing, the file has to be distributed as well:
[root@hadoop102 hadoop]# vim workers
[root@hadoop102 hadoop]# xsync workers
==================== hadoop102 ====================
sending incremental file list

sent 55 bytes  received 12 bytes  44.67 bytes/sec
total size is 30  speedup is 0.45
==================== hadoop103 ====================
sending incremental file list
workers

sent 132 bytes  received 41 bytes  115.33 bytes/sec
total size is 30  speedup is 0.17
==================== hadoop104 ====================
sending incremental file list
workers

sent 132 bytes  received 41 bytes  346.00 bytes/sec
total size is 30  speedup is 0.17
Start the cluster
If this is the first time the cluster is started, the NameNode has to be formatted on the hadoop102 node.
Note: formatting the NameNode generates a new cluster id. If the NameNode and DataNodes end up with different cluster ids, the cluster can no longer find its existing data. If the cluster runs into errors and you need to re-format the NameNode, you must first stop the namenode and datanode processes and delete the data and logs directories on every machine before formatting again.
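A sketch of that re-format procedure, assuming the same install path used in this post (run each stop script on the node where the daemons were started):

sbin/stop-yarn.sh        # on hadoop103: stops the ResourceManager and NodeManagers
sbin/stop-dfs.sh         # on hadoop102: stops the NameNode, DataNodes and SecondaryNameNode
rm -rf /opt/module/hadoop-3.1.3/data /opt/module/hadoop-3.1.3/logs    # run on EVERY node
hdfs namenode -format    # back on hadoop102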
Initialize: run the following from the Hadoop root directory.
hdfs namenode -format
[root@hadoop102 hadoop-3.1.3]# hdfs namenode -format
Start HDFS
I accidentally ran the format as the root user, so I now have to fix the ownership of the newly created data and logs directories first:
sudo chown -R yourname:yourname data/ logs/
(the -R makes the change recursive, so the files already created inside the two directories are covered as well)
Also, do not use the root account to start HDFS later on, or you will get errors like the following:
Starting namenodes on [hadoop102]
ERROR: Attempting to operate on hdfs namenode as root
ERROR: but there is no HDFS_NAMENODE_USER defined. Aborting operation.
Starting datanodes
ERROR: Attempting to operate on hdfs datanode as root
ERROR: but there is no HDFS_DATANODE_USER defined. Aborting operation.
Starting secondary namenodes [hadoop104]
ERROR: Attempting to operate on hdfs secondarynamenode as root
ERROR: but there is no HDFS_SECONDARYNAMENODE_USER defined. Aborting operation.
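As an aside, the error is complaining that the *_USER variables are undefined. If you really did want to start the daemons as root (I did not; I switched to a normal user instead), the usual workaround is to define them, for example in $HADOOP_HOME/etc/hadoop/hadoop-env.sh. A sketch only:

export HDFS_NAMENODE_USER=root
export HDFS_DATANODE_USER=root
export HDFS_SECONDARYNAMENODE_USER=root
export YARN_RESOURCEMANAGER_USER=root
export YARN_NODEMANAGER_USER=root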
Run the following from the Hadoop root directory:
sbin/start-dfs.sh
Then run jps to check the cluster status:
[zuck@hadoop102 hadoop-3.1.3]$ jps
3123 NameNode
3496 Jps
3245 DataNode
[zuck@hadoop103 hadoop-3.1.3]$ jps
2326 DataNode
2398 Jps
[zuck@hadoop104 hadoop-3.1.3]$ jps
2353 SecondaryNameNode
2460 Jps
2286 DataNode
Everything looks OK.
Access the web UI
View HDFS's NameNode from the browser:
(a) In the browser, go to http://hadoop102:9870
(b) Check the data stored on HDFS
Start YARN
Start YARN on the node where the ResourceManager is configured (hadoop103):
sbin/start-yarn.sh
[zuck@hadoop103 hadoop-3.1.3]$ sbin/start-yarn.sh
Starting resourcemanager
Starting nodemanagers
[zuck@hadoop103 hadoop-3.1.3]$ jps
2993 Jps
2326 DataNode
2550 ResourceManager
2665 NodeManager
View YARN's ResourceManager from the browser:
(a) In the browser, go to http://hadoop103:8088
(b) Check the jobs running on YARN
Basic cluster tests
Upload files to the cluster
Upload a small file
hadoop fs -mkdir /input
hadoop fs -put $HADOOP_HOME/wcinput/word.txt /input
[zuck@hadoop102 hadoop-3.1.3]$ hadoop fs -mkdir /input
Refreshing the http://hadoop102:9870/ page now shows the newly created /input directory.
[zuck@hadoop102 hadoop-3.1.3]$ hadoop fs -put wcinput/word.txt /input
This uploads wcinput/word.txt into the /input directory on HDFS.
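You can also verify the upload from the command line with the standard HDFS shell instead of the web UI:

hadoop fs -ls /input                 # should now list word.txt
hadoop fs -cat /input/word.txt       # prints the uploaded file back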
Upload a large file
Now let's try a bigger file:
[zuck@hadoop102 hadoop-3.1.3]$ hadoop fs -put /opt/software/jdk-8u212-linux-x64.tar.gz /input
That page is only a web view, not the actual file storage. So where does HDFS really put the files?
On the DataNodes.
Check where the uploaded files are stored
Recall this setting we configured earlier:
<property>
    <name>hadoop.tmp.dir</name>
    <value>/opt/module/hadoop-3.1.3/data</value>
</property>
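If you want to locate the block files under that directory yourself, one way is a simple find (the BP-* block-pool directory name will be different on your machine):

find /opt/module/hadoop-3.1.3/data/dfs/data -name 'blk_*'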
Check the HDFS file storage path:
[atguigu@hadoop102 subdir0]$ pwd
/opt/module/hadoop-3.1.3/data/dfs/data/current/BP-1436128598-192.168.10.102-1610603650062/current/finalized/subdir0/subdir0
Check the contents of the files HDFS stores on disk:
[atguigu@hadoop102 subdir0]$ cat blk_1073741825
hadoop yarn
hadoop mapreduce
atguigu
atguigu
The larger file (the JDK tarball) was split into two blocks, listed in the same subdir0 directory:
-rw-rw-r--. 1 atguigu atguigu   1048583 May 23 16:01 blk_1073741836_1012.meta
-rw-rw-r--. 1 atguigu atguigu  63439959 May 23 16:01 blk_1073741837
-rw-rw-r--. 1 atguigu atguigu    495635 May 23 16:01 blk_1073741837_1013.meta
[atguigu@hadoop102 subdir0]$ cat blk_1073741836>>tmp.tar.gz
[atguigu@hadoop102 subdir0]$ cat blk_1073741837>>tmp.tar.gz
[atguigu@hadoop102 subdir0]$ tar -zxvf tmp.tar.gz
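HDFS splits a file into blocks in order (128 MB by default), so concatenating blk_1073741836 and blk_1073741837 reproduces the original tarball. A quick sanity check, assuming the original download is still in /opt/software:

md5sum tmp.tar.gz /opt/software/jdk-8u212-linux-x64.tar.gz   # the two checksums should match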
Download
[atguigu@hadoop104 software]$ hadoop fs -get /input/jdk-8u212-linux-x64.tar.gz ./
Run the wordcount example program
[atguigu@hadoop102 hadoop-3.1.3]$ hadoop jar share/hadoop/mapreduce/hadoop-mapreduce-examples-3.1.3.jar wordcount /input /output
A record of the problems I ran into with the command above:
[2021-11-11 00:29:28.386] Container exited with a non-zero exit code 1. Error file: prelaunch.err.
Last 4096 bytes of prelaunch.err :
Last 4096 bytes of stderr :
Error: Could not find or load main class org.apache.hadoop.mapreduce.v2.app.MRAppMaster

Please check whether your etc/hadoop/mapred-site.xml contains the below configuration:
<property>
  <name>yarn.app.mapreduce.am.env</name>
  <value>HADOOP_MAPRED_HOME=${full path of your hadoop distribution directory}</value>
</property>
<property>
  <name>mapreduce.map.env</name>
  <value>HADOOP_MAPRED_HOME=${full path of your hadoop distribution directory}</value>
</property>
<property>
  <name>mapreduce.reduce.env</name>
  <value>HADOOP_MAPRED_HOME=${full path of your hadoop distribution directory}</value>
</property>

For more detailed output, check the application tracking page: http://hadoop103:8088/cluster/app/application_1636618147042_0002 Then click on links to logs of each attempt.
. Failing the application.
2021-11-11 00:29:28,875 INFO mapreduce.Job: Counters: 0
Solution:
First go to the Hadoop root directory (on hadoop103) and stop YARN:
sbin/stop-yarn.sh
Add the following to the mapred-site.xml configuration file:
<property>
    <name>yarn.app.mapreduce.am.env</name>
    <value>HADOOP_MAPRED_HOME=${HADOOP_HOME}</value>
</property>
<property>
    <name>mapreduce.map.env</name>
    <value>HADOOP_MAPRED_HOME=${HADOOP_HOME}</value>
</property>
<property>
    <name>mapreduce.reduce.env</name>
    <value>HADOOP_MAPRED_HOME=${HADOOP_HOME}</value>
</property>
Distribute it:
xsync mapred-site.xml
Start YARN again:
sbin/start-yarn.sh
Test again:
[atguigu@hadoop102 hadoop-3.1.3]$ hadoop jar share/hadoop/mapreduce/hadoop-mapreduce-examples-3.1.3.jar wordcount /input /output
Another error!
Error: Could not find or load main class jar share.hadoop.mapreduce.hadoop-mapreduce-examples-3.1.3.jar wordcount
Solution:
I followed the fix described in this post: https://blog.csdn.net/m0_51755061/article/details/114990461
That fixed it, but then I ran into this:
Container killed on request. Exit code is 143
1. At first I thought the memory was set too low, but increasing the virtual machine's memory had no effect.
2. Change the map memory limits. The steps are as follows:
Edit the mapred-site.xml file
and add the following:
<property>
    <name>mapreduce.map.memory.mb</name>
    <value>1500</value>
    <description>Physical memory limit for each Map task</description>
</property>
<property>
    <name>mapreduce.reduce.memory.mb</name>
    <value>3000</value>
    <description>Physical memory limit for each Reduce task</description>
</property>
<property>
    <name>mapreduce.map.java.opts</name>
    <value>-Xmx1200m</value>
</property>
<property>
    <name>mapreduce.reduce.java.opts</name>
    <value>-Xmx2600m</value>
</property>
<property>
    <name>mapreduce.framework.name</name>
    <value>yarn</value>
</property>
Then edit yarn-site.xml and add:
<property>
    <name>yarn.nodemanager.resource.memory-mb</name>
    <value>22528</value>
    <description>Memory available to containers on each node, in MB</description>
</property>
<property>
    <name>yarn.scheduler.minimum-allocation-mb</name>
    <value>1500</value>
    <description>Minimum memory a single task can request; default is 1024 MB</description>
</property>
<property>
    <name>yarn.scheduler.maximum-allocation-mb</name>
    <value>16384</value>
    <description>Maximum memory a single task can request; default is 8192 MB</description>
</property>
Then distribute the files to hadoop103 and hadoop104 and restart the cluster; after that the job runs.
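A sketch of that distribute-and-restart step, using the same xsync script and the node roles from this setup:

xsync mapred-site.xml yarn-site.xml   # run from $HADOOP_HOME/etc/hadoop on hadoop102
sbin/stop-yarn.sh                     # on hadoop103
sbin/stop-dfs.sh                      # on hadoop102
sbin/start-dfs.sh                     # on hadoop102
sbin/start-yarn.sh                    # on hadoop103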
This time I ran: hadoop jar share/hadoop/mapreduce/hadoop-mapreduce-examples-3.1.3.jar wordcount /input /output
and it finally completed without any problems.
I could see the normal progress output, and the result showed up in HDFS. Success!
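To look at the word-count result from the command line (the part file name below is the usual default for a single-reducer job):

hadoop fs -ls /output
hadoop fs -cat /output/part-r-00000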