1. Prepare one virtual machine, master, with the JDK and Hadoop installed.
2. Clone it into two more virtual machines, slave0 and slave1, and give each machine its own IP address:
cd /etc/sysconfig/network-scripts
vi ifcfg-ens33
BOOTPROTO=static
ONBOOT=yes
IPADDR=192.168.47.10    *must differ on each of the three VMs*
GATEWAY=192.168.47.2    *gateway*
NETMASK=255.255.255.0   *subnet mask*
DNS1=192.168.47.2       *same as the gateway*
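After saving, restart the network service so the static address takes effect. A minimal sketch, assuming CentOS 7 (which this ifcfg-ens33 layout suggests):

# systemctl restart network
# ip addr show ens33    *verify the new address is assigned*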
3. Set the hostname (on master, slave0, and slave1)
vi /etc/hostname
master    *use slave0 / slave1 on the other two machines*
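On a systemd-based distribution such as CentOS 7, the same change can be made in one step instead of editing the file; run the matching command on each machine:

# hostnamectl set-hostname master    *set-hostname slave0 / slave1 on the other two VMs*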
4. Add IP-to-hostname mappings (on master, slave0, and slave1)
vi /etc/hosts
Add one line per machine in the form: ip-address hostname
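For example, /etc/hosts might contain the following. Only 192.168.47.10 is given above; the .11 and .12 addresses are assumed here purely for illustration, so substitute whatever you assigned in step 2:

192.168.47.10 master
192.168.47.11 slave0
192.168.47.12 slave1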
**After all of the changes above, it is recommended to run reboot on every virtual machine.**
5. Configure SSH passwordless login (on master, slave0, and slave1)
# ssh-keygen -t rsa
# ll ~/.ssh/
# ssh-copy-id master
# ssh-copy-id slave0
# ssh-copy-id slave1
# ssh master    *test that login now works without a password*
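A quick check from master that key-based login works on the slaves as well; each command should print the remote hostname without asking for a password:

# ssh slave0 hostname
# ssh slave1 hostname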
6. Hadoop configuration
Change into the directory that holds the Hadoop configuration files:
# cd /usr/local/hadoop-3.3.1/etc/hadoop
7. Set the JDK and Hadoop configuration paths
# vi hadoop-env.sh

export JAVA_HOME=/usr/lib/jdk1.8.0_281
export HADOOP_CONF_DIR=/usr/local/hadoop-3.3.1/etc/hadoop/
8. Configure core-site.xml *(core settings)*
<configuration>
    <property>
        <name>fs.defaultFS</name>
        <value>hdfs://master:9000</value>
    </property>
    <property>
        <name>io.file.buffer.size</name>
        <value>4096</value>
    </property>
    <property>
        <name>hadoop.tmp.dir</name>
        <value>/home/bigdata/tmp</value>
    </property>
</configuration>
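Since hadoop.tmp.dir points at /home/bigdata/tmp, it is safest to create that directory on all three nodes before starting HDFS (a sketch; adjust ownership to the user that runs Hadoop):

# mkdir -p /home/bigdata/tmp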
9. Configure hdfs-site.xml *(HDFS settings)*
<configuration>
    <property>
        <name>dfs.replication</name>
        <value>3</value>
    </property>
    <property>
        <name>dfs.block.size</name>
        <value>134217728</value>
    </property>
    <property>
        <name>dfs.namenode.name.dir</name>
        <value>file:///home/hadoopdata/dfs/name</value>
    </property>
    <property>
        <name>dfs.datanode.data.dir</name>
        <value>/home/hadoopdata/dfs/data</value>
    </property>
    <property>
        <name>fs.checkpoint.dir</name>
        <value>/home/hadoopdata/checkpoint/dfs/slave1</value>
    </property>
    <property>
        <name>dfs.http.address</name>
        <value>master:50070</value>
    </property>
    <property>
        <name>dfs.secondary.http.address</name>
        <value>master:50090</value>
    </property>
    <property>
        <name>dfs.webhdfs.enabled</name>
        <value>true</value>
    </property>
    <property>
        <name>dfs.permissions</name>
        <value>true</value>
    </property>
</configuration>
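Likewise, the NameNode and DataNode directories referenced above should exist on every node before the first start; the checkpoint directory is only used by the SecondaryNameNode:

# mkdir -p /home/hadoopdata/dfs/name /home/hadoopdata/dfs/data
# mkdir -p /home/hadoopdata/checkpoint/dfs/slave1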
10. Configure mapred-site.xml *(MapReduce settings)*
<configuration>
    <property>
        <name>mapreduce.framework.name</name>
        <value>yarn</value>
        <final>true</final>
    </property>
    <property>
        <name>mapreduce.jobhistory.address</name>
        <value>master:10020</value>
    </property>
    <property>
        <name>mapreduce.jobhistory.webapp.address</name>
        <value>master:19888</value>
    </property>
</configuration>
11. Configure yarn-site.xml *(YARN settings)*
<configuration>
    <property>
        <name>yarn.resourcemanager.hostname</name>
        <value>master</value>
    </property>
    <property>
        <name>yarn.nodemanager.aux-services</name>
        <value>mapreduce_shuffle</value>
    </property>
    <property>
        <name>yarn.resourcemanager.address</name>
        <value>master:8032</value>
    </property>
    <property>
        <name>yarn.resourcemanager.scheduler.address</name>
        <value>master:8030</value>
    </property>
    <property>
        <name>yarn.resourcemanager.resource-tracker.address</name>
        <value>master:8031</value>
    </property>
    <property>
        <name>yarn.resourcemanager.admin.address</name>
        <value>master:8033</value>
    </property>
    <property>
        <name>yarn.resourcemanager.webapp.address</name>
        <value>master:8088</value>
    </property>
</configuration>
12. Set the worker list
# vi workers    *in Hadoop 3.x this file is named workers; it was called slaves in Hadoop 2.x*
Add the following lines (listing master here means it will also run a DataNode and NodeManager):
master
slave0
slave1
13. Distribute Hadoop to the slaves
First delete any existing Hadoop directory on the two slaves:
slave0: rm -rf /usr/local/hadoop-3.3.1/
slave1: rm -rf /usr/local/hadoop-3.3.1/
Then copy the configured installation over:
master:
# scp -r /usr/local/hadoop-3.3.1/ slave0:/usr/local/
# scp -r /usr/local/hadoop-3.3.1/ slave1:/usr/local/
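A quick sanity check from master that the copy landed where expected:

# ssh slave0 ls /usr/local/hadoop-3.3.1/etc/hadoop
# ssh slave1 ls /usr/local/hadoop-3.3.1/etc/hadoop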
14. Format the NameNode *(do this only once)*
hadoop namenode -format    *in Hadoop 3.x the preferred spelling is: hdfs namenode -format*
15. Start the cluster
start-all.sh
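start-all.sh simply runs start-dfs.sh followed by start-yarn.sh, so those two can be used instead. Note that it does not start the JobHistory server configured in step 10; in Hadoop 3.x that daemon is started separately:

# mapred --daemon start historyserver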
Testing:
1. Check the running Java processes: jps
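Because master is listed in the workers file, it runs the worker daemons too. Roughly, jps on master should show the following processes (plus Jps itself; PIDs omitted), while the slaves show only DataNode and NodeManager:

NameNode
SecondaryNameNode
DataNode
ResourceManager
NodeManager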
2. Check each module's web UI:
192.168.47.10:50070
192.168.47.10:8088
3. Upload and download files
hdfs dfs -ls /
hdfs dfs -put ./*** /
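Downloading is the mirror image with -get; the HDFS path below is purely illustrative:

hdfs dfs -get /somefile ./    *replace /somefile with a file you uploaded*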
4. Run a sample job
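A convenient smoke test is the bundled pi estimator. The jar path below assumes the stock Hadoop 3.3.1 layout; the arguments mean 10 map tasks with 100 samples each:

hadoop jar /usr/local/hadoop-3.3.1/share/hadoop/mapreduce/hadoop-mapreduce-examples-3.3.1.jar pi 10 100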
Appendix: configure the environment variables

vi ~/.bash_profile

JAVA_HOME=/root/myhadoop/jdk1.8.0_111
export JAVA_HOME
PATH=$JAVA_HOME/bin:$PATH
export PATH
HADOOP_HOME=/root/myhadoop/hadoop-3.0.0
export HADOOP_HOME
PATH=$HADOOP_HOME/bin:$HADOOP_HOME/sbin:$PATH
export PATH

*The paths here come from a different install; adjust them to your own JDK and Hadoop locations (e.g. /usr/lib/jdk1.8.0_281 and /usr/local/hadoop-3.3.1 as used above).*
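Reload the profile so the variables take effect in the current shell, then verify:

# source ~/.bash_profile
# hadoop version    *confirms PATH and HADOOP_HOME are picked up*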