Configure /etc/hosts on all three nodes
vi /etc/hosts
192.168.61.128 centos01
192.168.61.129 centos02
192.168.61.130 centos03
Configure the network interface
centos01:
vi /etc/sysconfig/network-scripts/ifcfg-eth0
DEVICE=eth0
HWADDR=00:0C:29:14:FC:37
TYPE=Ethernet
UUID=409f6562-d469-4dd4-b89e-9d30a04c3537
ONBOOT=yes
NM_CONTROLLED=yes
BOOTPROTO=static
BROADCAST=192.168.61.255
IPADDR=192.168.61.128
GATEWAY=192.168.61.2
NETMASK=255.255.255.0
Restart the network service
service network restart
Set the DNS server
vi /etc/resolv.conf
nameserver xxx.xxx.xxx.xxx
Set the hostname
vi /etc/sysconfig/network
NETWORKING=yes
HOSTNAME=centos01
Reboot the server
init 6
centos02
After cloning the master node, modify its network configuration:
vi /etc/sysconfig/network-scripts/ifcfg-eth0
DEVICE=eth0
TYPE=Ethernet
ONBOOT=yes
NM_CONTROLLED=yes
BOOTPROTO=static
BROADCAST=192.168.61.255
IPADDR=192.168.61.129
GATEWAY=192.168.61.2
NETMASK=255.255.255.0
Restart the network service
service network restart
Delete the persistent udev net rules (the clone inherits the master's MAC-to-interface mapping, which no longer matches):
rm -rf /etc/udev/rules.d/70-persistent-net.rules
Set the hostname
vi /etc/sysconfig/network
NETWORKING=yes
HOSTNAME=centos02
Reboot the server
init 6
Perform the same steps on centos03, with:
IP 192.168.61.130
hostname=centos03
Set up passwordless SSH login between the nodes (see the SSH key login documentation) and disable the firewall on every node.
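The referenced SSH document is not reproduced here; a minimal sketch of the passwordless-login and firewall steps on CentOS 6 might look like the following (run on centos01 as root; hostnames are the ones defined in /etc/hosts above):

```shell
# Generate an RSA key pair with an empty passphrase
# (assumption: working as root, key stored in the default location).
mkdir -p ~/.ssh
ssh-keygen -t rsa -P '' -f ~/.ssh/id_rsa

# Push the public key to every node, centos01 itself included
# (prompts once per node for the root password).
for host in centos01 centos02 centos03; do
    ssh-copy-id root@$host
done

# Stop and disable iptables on each node (CentOS 6 service names;
# repeat on centos02 and centos03).
service iptables stop
chkconfig iptables off
```

After this, `ssh centos02` from centos01 should log in without a password prompt.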
Install the JDK and unpack Hadoop (on centos01):
rpm -ivh jdk-7u79-linux-x64.rpm
tar -zxvf hadoop-2.7.0_x64.tar.gz
vi ~/.bash_profile
export JAVA_HOME=/usr/java/jdk1.7.0_79
export HADOOP_HOME=/opt/software/hadoop-2.7.0
export PATH=$PATH:$JAVA_HOME/bin:$HADOOP_HOME/bin
source ~/.bash_profile
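As a quick sanity check that the exports took effect (using the same paths as above):

```shell
# Re-apply the exports from ~/.bash_profile and confirm both bin
# directories landed on PATH (paths as used in this walkthrough;
# adjust if your install locations differ).
export JAVA_HOME=/usr/java/jdk1.7.0_79
export HADOOP_HOME=/opt/software/hadoop-2.7.0
export PATH=$PATH:$JAVA_HOME/bin:$HADOOP_HOME/bin

echo ":$PATH:" | grep -q ":$JAVA_HOME/bin:"   && echo "JAVA_HOME/bin on PATH"
echo ":$PATH:" | grep -q ":$HADOOP_HOME/bin:" && echo "HADOOP_HOME/bin on PATH"
```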
vi hadoop-2.7.0/etc/hadoop/hadoop-env.sh
vi hadoop-2.7.0/etc/hadoop/mapred-env.sh
vi hadoop-2.7.0/etc/hadoop/yarn-env.sh
Add the following line to each of the three files:
export JAVA_HOME=/usr/java/jdk1.7.0_79
centos01
cd /opt/software/hadoop-2.7.0/etc/hadoop/
vi core-site.xml
vi hdfs-site.xml
mv mapred-site.xml.template mapred-site.xml
vi mapred-site.xml
vi yarn-site.xml
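The tutorial does not show the contents of these files; a minimal set of properties consistent with the hostnames above might look like the following (the port, tmp directory, and replication value are assumptions, not taken from the original):

```xml
<!-- core-site.xml: default filesystem points at the NameNode on centos01 -->
<configuration>
  <property>
    <name>fs.defaultFS</name>
    <value>hdfs://centos01:9000</value>
  </property>
  <property>
    <name>hadoop.tmp.dir</name>
    <value>/opt/software/hadoop-2.7.0/tmp</value>
  </property>
</configuration>

<!-- hdfs-site.xml: replicate blocks across two DataNodes -->
<configuration>
  <property>
    <name>dfs.replication</name>
    <value>2</value>
  </property>
</configuration>

<!-- mapred-site.xml: run MapReduce on YARN -->
<configuration>
  <property>
    <name>mapreduce.framework.name</name>
    <value>yarn</value>
  </property>
</configuration>

<!-- yarn-site.xml: ResourceManager on centos01, shuffle service for MapReduce -->
<configuration>
  <property>
    <name>yarn.resourcemanager.hostname</name>
    <value>centos01</value>
  </property>
  <property>
    <name>yarn.nodemanager.aux-services</name>
    <value>mapreduce_shuffle</value>
  </property>
</configuration>
```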
touch masters
touch slaves
The entries must be hostnames that resolve via /etc/hosts:
vi masters
centos01
:wq
vi slaves
centos01
centos02
centos03
:wq
In a fully distributed cluster the master and slave nodes need identical copies of these files, so copy them to the slaves:
- the Hadoop directory
scp -r /opt/software/hadoop-2.7.0 root@centos02:/opt/software/
scp -r /opt/software/hadoop-2.7.0 root@centos03:/opt/software/
- the shell profile with the environment variables
scp .bash_profile root@centos02:~
scp .bash_profile root@centos03:~
- the hosts file
scp /etc/hosts root@centos02:/etc/hosts
scp /etc/hosts root@centos03:/etc/hosts
Format the NameNode and start the cluster (on centos01):
cd /opt/software/hadoop-2.7.0
hdfs namenode -format
(answer yes if asked to confirm re-formatting)
sbin/start-all.sh
Run jps on all three nodes to verify the daemons: centos01 should show NameNode, SecondaryNameNode and ResourceManager (plus DataNode and NodeManager if it is also listed in slaves); the slave nodes should show DataNode and NodeManager.
Then open the NameNode web UI: http://192.168.61.128:50070/