Setting Up a Hadoop Cluster with JDK 1.7 and Hadoop 2.6.4


  1. Upload the hadoop-2.6.4.tar.gz file to the /opt directory with Xftp (part of Xmanager).

Hadoop 2.6.4 download link

  2. Extract the hadoop-2.6.4.tar.gz file:

tar -zxf /opt/hadoop-2.6.4.tar.gz -C /usr/local

After extraction, the /usr/local/hadoop-2.6.4 directory appears.

  4. Configure Hadoop

Enter the configuration directory:
cd /usr/local/hadoop-2.6.4/etc/hadoop/
Edit the following files in turn:
4.1 core-site.xml

<configuration>
  <property>
    <name>fs.defaultFS</name>
    <value>hdfs://master:8020</value>
  </property>
  <property>
    <name>hadoop.tmp.dir</name>
    <value>/var/log/hadoop/tmp</value>
  </property>
</configuration>
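A quick sanity check that needs no running Hadoop daemons is to parse the value back out of the file with grep/sed. A minimal sketch (it writes the snippet above to a temp file so it is self-contained):

```shell
# Write the core-site.xml content from above to a temp file, then parse the
# fs.defaultFS value back out with grep/sed (no Hadoop required).
f=$(mktemp)
cat > "$f" <<'EOF'
<configuration>
  <property>
    <name>fs.defaultFS</name>
    <value>hdfs://master:8020</value>
  </property>
  <property>
    <name>hadoop.tmp.dir</name>
    <value>/var/log/hadoop/tmp</value>
  </property>
</configuration>
EOF
fs=$(grep -A1 '<name>fs.defaultFS</name>' "$f" | sed -n 's:.*<value>\(.*\)</value>.*:\1:p')
echo "$fs"
```

On a configured node you can run the same grep against /usr/local/hadoop-2.6.4/etc/hadoop/core-site.xml, or use `hdfs getconf -confKey fs.defaultFS` once the PATH is set up in step 7.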

4.2 hadoop-env.sh
export JAVA_HOME=/usr/java/jdk1.7.0_80

4.3 hdfs-site.xml

<configuration>
  <property>
    <name>dfs.namenode.name.dir</name>
    <value>file:///data/hadoop/hdfs/name</value>
  </property>
  <property>
    <name>dfs.datanode.data.dir</name>
    <value>file:///data/hadoop/hdfs/data</value>
  </property>
  <property>
    <name>dfs.namenode.secondary.http-address</name>
    <value>master:50090</value>
  </property>
  <property>
    <name>dfs.replication</name>
    <value>3</value>
  </property>
</configuration>
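The dfs.namenode.name.dir and dfs.datanode.data.dir values above point at local directories on each node. Hadoop usually creates them itself, but pre-creating them surfaces permission problems early. A sketch; PREFIX is a hypothetical rehearsal knob not in the original guide (set PREFIX= to an empty value to create the real /data paths as root):

```shell
# Pre-create the local storage dirs referenced by hdfs-site.xml on this node.
# PREFIX is a hypothetical knob for rehearsing in a scratch dir; PREFIX= (empty)
# targets the real /data paths. ${PREFIX-...} keeps an explicitly empty PREFIX.
PREFIX="${PREFIX-./hdfs-scratch}"
for d in data/hadoop/hdfs/name data/hadoop/hdfs/data; do
  mkdir -p "$PREFIX/$d"
done
ls -d "$PREFIX/data/hadoop/hdfs/name" "$PREFIX/data/hadoop/hdfs/data"
```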

4.4 mapred-site.xml (Hadoop 2.6 ships only mapred-site.xml.template; create the file first with cp mapred-site.xml.template mapred-site.xml)

<configuration>
  <property>
    <name>mapreduce.framework.name</name>
    <value>yarn</value>
  </property>
  <property>
    <name>mapreduce.jobhistory.address</name>
    <value>master:10020</value>
  </property>
  <property>
    <name>mapreduce.jobhistory.webapp.address</name>
    <value>master:19888</value>
  </property>
</configuration>

4.5 yarn-site.xml

<configuration>
  <property>
    <name>yarn.resourcemanager.hostname</name>
    <value>master</value>
  </property>
  <property>
    <name>yarn.resourcemanager.address</name>
    <value>${yarn.resourcemanager.hostname}:8032</value>
  </property>
  <property>
    <name>yarn.resourcemanager.scheduler.address</name>
    <value>${yarn.resourcemanager.hostname}:8030</value>
  </property>
  <property>
    <name>yarn.resourcemanager.webapp.address</name>
    <value>${yarn.resourcemanager.hostname}:8088</value>
  </property>
  <property>
    <name>yarn.resourcemanager.webapp.https.address</name>
    <value>${yarn.resourcemanager.hostname}:8090</value>
  </property>
  <property>
    <name>yarn.resourcemanager.resource-tracker.address</name>
    <value>${yarn.resourcemanager.hostname}:8031</value>
  </property>
  <property>
    <name>yarn.resourcemanager.admin.address</name>
    <value>${yarn.resourcemanager.hostname}:8033</value>
  </property>
  <property>
    <name>yarn.nodemanager.local-dirs</name>
    <value>/data/hadoop/yarn/local</value>
  </property>
  <property>
    <name>yarn.log-aggregation-enable</name>
    <value>true</value>
  </property>
  <property>
    <name>yarn.nodemanager.remote-app-log-dir</name>
    <value>/data/tmp/logs</value>
  </property>
  <property>
    <name>yarn.log.server.url</name>
    <value>http://master:19888/jobhistory/logs/</value>
    <description>URL for job history server</description>
  </property>
  <property>
    <name>yarn.nodemanager.vmem-check-enabled</name>
    <value>false</value>
  </property>
  <property>
    <name>yarn.nodemanager.aux-services</name>
    <value>mapreduce_shuffle</value>
  </property>
  <property>
    <name>yarn.nodemanager.aux-services.mapreduce.shuffle.class</name>
    <value>org.apache.hadoop.mapred.ShuffleHandler</value>
  </property>
  <property>
    <name>yarn.nodemanager.resource.memory-mb</name>
    <value>2048</value>
  </property>
  <property>
    <name>yarn.scheduler.minimum-allocation-mb</name>
    <value>512</value>
  </property>
  <property>
    <name>yarn.scheduler.maximum-allocation-mb</name>
    <value>4096</value>
  </property>
  <property>
    <name>mapreduce.map.memory.mb</name>
    <value>2048</value>
  </property>
  <property>
    <name>mapreduce.reduce.memory.mb</name>
    <value>2048</value>
  </property>
  <property>
    <name>yarn.nodemanager.resource.cpu-vcores</name>
    <value>1</value>
  </property>
</configuration>
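The memory figures above must be mutually consistent: YARN rejects container requests outside [yarn.scheduler.minimum-allocation-mb, yarn.scheduler.maximum-allocation-mb], and a map or reduce container must fit within a NodeManager's yarn.nodemanager.resource.memory-mb. A small check with the values copied from the file (note that the mapreduce.* sizes conventionally live in mapred-site.xml):

```shell
# Values copied from the yarn-site.xml above. YARN rejects container requests
# outside [minimum-allocation-mb, maximum-allocation-mb], and each MR container
# must fit within a NodeManager's resource.memory-mb.
MIN_MB=512 MAX_MB=4096 NM_MB=2048 MAP_MB=2048 REDUCE_MB=2048
ok=yes
[ "$MAP_MB" -ge "$MIN_MB" ]    && [ "$MAP_MB" -le "$MAX_MB" ]    || ok=no
[ "$REDUCE_MB" -ge "$MIN_MB" ] && [ "$REDUCE_MB" -le "$MAX_MB" ] || ok=no
[ "$MAP_MB" -le "$NM_MB" ]     && [ "$REDUCE_MB" -le "$NM_MB" ]  || ok=no
echo "memory settings consistent: $ok"
```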

4.6 yarn-env.sh

export JAVA_HOME=/usr/java/jdk1.7.0_80

Copy the Hadoop installation to the slave nodes (do this after finishing all of the edits in 4.1–4.7, or repeat it afterwards, so the slaves receive the final configuration):

scp -r /usr/local/hadoop-2.6.4 slave1:/usr/local
scp -r /usr/local/hadoop-2.6.4 slave2:/usr/local
scp -r /usr/local/hadoop-2.6.4 slave3:/usr/local
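The three copies can be collapsed into one loop. DRY is a hypothetical dry-run switch, not a feature of scp: with the default DRY=1 the loop only prints the commands it would run, and DRY=0 really copies on the live cluster.

```shell
# One loop instead of three hand-typed commands. DRY is a hypothetical dry-run
# switch: DRY=1 (default) just prints the commands, DRY=0 actually runs scp.
DRY="${DRY:-1}"
run() { if [ "$DRY" = "1" ]; then echo "$@"; else "$@"; fi; }
for host in slave1 slave2 slave3; do
  run scp -r /usr/local/hadoop-2.6.4 "$host:/usr/local"
done
```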

4.7 Append to the slaves file (keep or remove the default localhost line depending on whether the master should also run a DataNode):
slave1
slave2
slave3

4.8 Configure the IP mapping
Edit /etc/hosts on every node. Each line is a node's IP address followed by its hostnames:

192.168.128.130 master master.centos.com
192.168.128.131 slave1 slave1.centos.com
192.168.128.132 slave2 slave2.centos.com
192.168.128.133 slave3 slave3.centos.com
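The four entries follow one pattern, so they can be generated rather than typed; the IPs and the .centos.com suffix are the example values used throughout this guide.

```shell
# Generate the four hosts entries from one pattern (IPs and the .centos.com
# suffix are the example values used in this guide).
i=130
for name in master slave1 slave2 slave3; do
  echo "192.168.128.$i $name $name.centos.com"
  i=$((i + 1))
done
```

Append the output to /etc/hosts on each node, e.g. by redirecting the loop with >> /etc/hosts.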

5. Configure passwordless SSH login
(1) Generate a public/private key pair with ssh-keygen.
Run:

ssh-keygen -t rsa

and press Enter three times to accept the defaults (no passphrase).

[root@master ~]# ssh-keygen -t rsa
Generating public/private rsa key pair.
Enter file in which to save the key (/root/.ssh/id_rsa):
Created directory '/root/.ssh'.
Enter passphrase (empty for no passphrase):
Enter same passphrase again:
Your identification has been saved in /root/.ssh/id_rsa.
Your public key has been saved in /root/.ssh/id_rsa.pub.
The key fingerprint is:
a6:13:5a:7b:54:eb:77:58:bd:56:ef:d0:64:90:66:d4 root@master.centos.com
The key's randomart image is:
+--[ RSA 2048]----+
|  ..             |
|   . .         .E|
|    . =          |
|   . . o o       |
|    o S . .     =|
|   o * . o     ++|
| . + . . o    ooo|
|  o .     ..o    |
|                .|
+-----------------+

This produces two files: the private key id_rsa and the public key id_rsa.pub. ssh-keygen generates and manages SSH keys; the -t option selects the type of key to create, RSA here.
(2) Copy the public key to each machine with ssh-copy-id:

ssh-copy-id -i /root/.ssh/id_rsa.pub master
ssh-copy-id -i /root/.ssh/id_rsa.pub slave1
ssh-copy-id -i /root/.ssh/id_rsa.pub slave2
ssh-copy-id -i /root/.ssh/id_rsa.pub slave3

Answer yes to each host-key prompt and enter the root password (123456 in this guide) when asked.

(3) Verify that passwordless login works:

ssh slave1
ssh slave2
ssh slave3

Each command should log in without asking for a password; type exit after each login to return to master.

6. Configure the time synchronization service

(1) Install the NTP service on every node:

yum -y install ntp

(2) Use the master node as the NTP server. Open its configuration with

vim /etc/ntp.conf

comment out every line beginning with server, and add the following (the restrict subnet must match the cluster's actual network, 192.168.128.0/24 in this guide):

restrict 192.168.128.0 mask 255.255.255.0 nomodify notrap
server 127.127.1.0
fudge 127.127.1.0 stratum 10

(3) On slave1, slave2 and slave3, edit /etc/ntp.conf the same way: comment out the lines beginning with server and add:

server master

(4) Permanently disable the firewall on the master and all slave nodes:

service iptables stop
chkconfig iptables off
(5) Start the NTP service.
① On the master node run:

service ntpd start
chkconfig ntpd on

Note that ntpdate refuses to run while a local ntpd is active, so on any node where you intend to run ntpdate, stop ntpd first (check with service ntpd status). If the master's own clock is wrong, synchronize it first against a public server with ntpdate ntp1.aliyun.com.
② On slave1, slave2 and slave3 run:

ntpdate master

to synchronize their clocks with the master.
③ Then on slave1, slave2 and slave3 run:

service ntpd start
chkconfig ntpd on

to start NTP and enable it at boot.

7. Add JAVA_HOME and the Hadoop path to /etc/profile:

export HADOOP_HOME=/usr/local/hadoop-2.6.4
export PATH=$HADOOP_HOME/bin:$PATH:/usr/java/jdk1.7.0_80/bin

Then run

source /etc/profile

to apply the changes.
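To confirm the change took effect, check that the Hadoop bin directory is actually on PATH. This sketch re-applies the two export lines so it is self-contained:

```shell
# Re-apply the two profile lines, then verify the Hadoop bin dir is on PATH.
export HADOOP_HOME=/usr/local/hadoop-2.6.4
export PATH=$HADOOP_HOME/bin:$PATH:/usr/java/jdk1.7.0_80/bin
case ":$PATH:" in
  *":$HADOOP_HOME/bin:"*) onpath=yes ;;
  *)                      onpath=no  ;;
esac
echo "hadoop bin on PATH: $onpath"
```

On a real node, `hadoop version` succeeding from any directory is the same check end to end.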

  8. Format the NameNode
    Enter the directory

cd /usr/local/hadoop-2.6.4/bin

and run the format:

hdfs namenode -format

9. Start the cluster
Enter the directory

cd /usr/local/hadoop-2.6.4/sbin

and run the start scripts:

./start-dfs.sh
./start-yarn.sh
./mr-jobhistory-daemon.sh start historyserver

Check the running processes with jps:
[root@centos67 sbin]# jps
3672 NodeManager
3301 DataNode
3038 NameNode
4000 JobHistoryServer
4058 Jps
3589 ResourceManager
3408 SecondaryNameNode
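With this many daemons it is easy to overlook a missing one. This sketch checks the jps output pasted above for every expected master-node process; on a live node, replace the quoted text with jps_out="$(jps)".

```shell
# The jps output pasted above; on a live master use: jps_out="$(jps)"
jps_out='3672 NodeManager
3301 DataNode
3038 NameNode
4000 JobHistoryServer
3589 ResourceManager
3408 SecondaryNameNode'
missing=""
for p in NameNode DataNode SecondaryNameNode ResourceManager NodeManager JobHistoryServer; do
  echo "$jps_out" | grep -qw "$p" || missing="$missing $p"
done
[ -z "$missing" ] && echo "all expected daemons running" || echo "missing:$missing"
```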

  10. Disable the firewall (on all nodes, if not already done in step 6):
    service iptables stop
    chkconfig iptables off

  11. Check the web UIs in a browser:

http://master:50070
http://master:8088
