I. Architecture changes from Hadoop 1.0 to 2.0
1. Hadoop 2.0 consists of HDFS, MapReduce, and YARN.
2. HDFS gains NameNode federation and HA.
3. MapReduce runs on top of YARN.
4. YARN is the resource management system.
II. HDFS 2.0
1. Solves the single point of failure and limited NameNode memory of HDFS 1.0.
2. Solving the single point of failure
HDFS HA: an active and a standby NameNode.
If the active NameNode fails, service switches to the standby NameNode.
3. Solving the memory limit
HDFS federation:
Horizontal scaling across multiple NameNodes.
Each NameNode manages a portion of the namespace (directory tree).
All NameNodes share the storage of all DataNodes.
4. Only the architecture changed; usage stays the same.
Transparent to HDFS users.
The commands and APIs from HDFS 1.0 still work:
$ hadoop fs -ls /user/hadoop/
$ hadoop fs -mkdir /user/hadoop/data
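Federation is enabled purely through configuration. A minimal hdfs-site.xml sketch is below; the nameservice IDs ns1/ns2 and the NameNode hosts are hypothetical, and the cluster built later in this article does not actually enable federation:

```xml
<configuration>
  <!-- Two independent nameservices, each backed by its own NameNode -->
  <property>
    <name>dfs.nameservices</name>
    <value>ns1,ns2</value>
  </property>
  <property>
    <name>dfs.namenode.rpc-address.ns1</name>
    <value>nn-host1:9000</value>
  </property>
  <property>
    <name>dfs.namenode.rpc-address.ns2</name>
    <value>nn-host2:9000</value>
  </property>
</configuration>
```

Each NameNode then serves only its slice of the namespace, while every DataNode registers with all NameNodes, which is why the DataNode pool is shared.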
III. HDFS 2.0 HA
1. Active and standby NameNodes
2. Solving the single point of failure
The active NameNode serves external requests; the standby NameNode synchronizes the active's metadata so that it is ready to take over.
All DataNodes report block information to both NameNodes at the same time.
3. Two failover options
Manual failover: active/standby switching is triggered by command, which is useful during HDFS upgrades.
Automatic failover based on Zookeeper.
4. Zookeeper-based automatic failover
A ZKFailoverController (ZKFC) monitors its NameNode's health and registers the NameNode with Zookeeper.
When the active NameNode fails, the ZKFCs contend for the NameNode lock in Zookeeper, and the NameNode whose ZKFC acquires the lock becomes active.
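In an HA-enabled cluster (again, not the one built below), the `hdfs haadmin` tool exposes this failover state from the command line. The NameNode IDs nn1/nn2 are hypothetical:

```shell
# Query which NameNode is currently active (IDs nn1/nn2 are illustrative).
for id in nn1 nn2; do
  hdfs haadmin -getServiceState "$id"
done
# Trigger a manual failover from nn1 to nn2.
hdfs haadmin -failover nn1 nn2
```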
IV. Environment setup
192.168.1.2 master
192.168.1.3 slave1
192.168.1.4 slave2
Hadoop version: hadoop-2.2.0.tar.gz
HBase version: hbase-0.98.11-hadoop2-bin.tar.gz
Zookeeper version: zookeeper-3.4.5.tar.gz
JDK version: jdk-7u25-linux-x64.gz
1. Configure the hosts file on each node
[root@master ~]# cat /etc/hosts
192.168.1.2 master
192.168.1.3 slave1
192.168.1.4 slave2
[root@slave1 ~]# cat /etc/hosts
192.168.1.2 master
192.168.1.3 slave1
192.168.1.4 slave2
[root@slave2 ~]# cat /etc/hosts
192.168.1.2 master
192.168.1.3 slave1
192.168.1.4 slave2
2. Configure passwordless SSH trust between the nodes
[root@master ~]# useradd hadoop
[root@slave1 ~]# useradd hadoop
[root@slave2 ~]# useradd hadoop
[root@master ~]# passwd hadoop
[root@slave1 ~]# passwd hadoop
[root@slave2 ~]# passwd hadoop
[root@master ~]# su - hadoop
[hadoop@master ~]$ ssh-keygen -t rsa    # generate the key pair first (this step was missing from the original transcript)
[hadoop@master ~]$ ssh-copy-id -i ~/.ssh/id_rsa.pub slave1
[hadoop@master ~]$ ssh-copy-id -i ~/.ssh/id_rsa.pub slave2
[hadoop@master ~]$ ssh-copy-id -i ~/.ssh/id_rsa.pub master
3. Configure the JDK
[root@master ~]# tar zxvf jdk-7u25-linux-x64.gz
[root@master ~]# mkdir /usr/java
[root@master ~]# mv jdk1.7.0_25 /usr/java
[root@master ~]# cd /usr/java/
[root@master java]# ln -s jdk1.7.0_25 jdk
# Add to /etc/profile:
export JAVA_HOME=/usr/java/jdk
export CLASSPATH=$CLASSPATH:$JAVA_HOME/lib:$JAVA_HOME/jre/lib
export PATH=/usr/java/jdk/bin:$PATH
[root@master ~]# source /etc/profile
[root@master ~]# java -version
java version "1.7.0_25"
Java(TM) SE Runtime Environment (build 1.7.0_25-b15)
Java HotSpot(TM) 64-Bit Server VM (build 23.25-b01, mixed mode)
# Repeat the same steps on slave1 and slave2
4. Install Hadoop
[root@master ~]# tar zxvf hadoop-2.2.0.tar.gz
[root@master ~]# mv hadoop-2.2.0 /home/hadoop/
[root@master ~]# cd /home/hadoop/
[root@master hadoop]# ln -s hadoop-2.2.0 hadoop
[root@master hadoop]# chown -R hadoop.hadoop /home/hadoop/
[root@master ~]# cd /home/hadoop/hadoop/etc/hadoop
# Edit hadoop-env.sh:
export JAVA_HOME=/usr/java/jdk
export HADOOP_HEAPSIZE=200
# Edit mapred-env.sh:
export JAVA_HOME=/usr/java/jdk
export HADOOP_JOB_HISTORYSERVER_HEAPSIZE=1000
# Edit yarn-env.sh:
export JAVA_HOME=/usr/java/jdk
JAVA_HEAP_MAX=-Xmx300m
YARN_HEAPSIZE=100
# Edit core-site.xml:
<configuration>
  <property>
    <name>fs.defaultFS</name>
    <value>hdfs://master:9000</value>
  </property>
  <property>
    <name>hadoop.tmp.dir</name>
    <value>/home/hadoop/tmp</value>
  </property>
  <property>
    <name>hadoop.proxyuser.hadoop.hosts</name>
    <value>*</value>
  </property>
  <property>
    <name>hadoop.proxyuser.hadoop.groups</name>
    <value>*</value>
  </property>
</configuration>
# Edit hdfs-site.xml:
<configuration>
  <property>
    <name>dfs.namenode.secondary.http-address</name>
    <value>master:9001</value>
  </property>
  <property>
    <name>dfs.namenode.name.dir</name>
    <value>/home/hadoop/dfs/name</value>
  </property>
  <property>
    <name>dfs.datanode.data.dir</name>
    <value>/home/hadoop/dfs/data</value>
  </property>
  <property>
    <name>dfs.replication</name>
    <value>2</value>
  </property>
  <property>
    <name>dfs.webhdfs.enabled</name>
    <value>true</value>
  </property>
</configuration>
# Edit mapred-site.xml:
<configuration>
  <property>
    <name>mapreduce.framework.name</name>
    <value>yarn</value>
  </property>
  <property>
    <name>mapreduce.jobhistory.address</name>
    <value>master:10020</value>
  </property>
  <property>
    <name>mapreduce.jobhistory.webapp.address</name>
    <value>master:19888</value>
  </property>
  <property>
    <name>mapreduce.map.memory.mb</name>
    <value>512</value>
  </property>
  <property>
    <name>mapreduce.map.cpu.vcores</name>
    <value>1</value>
  </property>
  <property>
    <name>mapreduce.reduce.memory.mb</name>
    <value>512</value>
  </property>
</configuration>
# Edit yarn-site.xml:
<configuration>
  <property>
    <name>yarn.nodemanager.aux-services</name>
    <value>mapreduce_shuffle</value>
  </property>
  <property>
    <name>yarn.nodemanager.aux-services.mapreduce.shuffle.class</name>
    <value>org.apache.hadoop.mapred.ShuffleHandler</value>
  </property>
  <property>
    <name>yarn.resourcemanager.address</name>
    <value>master:8032</value>
  </property>
  <property>
    <name>yarn.resourcemanager.scheduler.address</name>
    <value>master:8030</value>
  </property>
  <property>
    <name>yarn.resourcemanager.resource-tracker.address</name>
    <value>master:8031</value>
  </property>
  <property>
    <name>yarn.resourcemanager.admin.address</name>
    <value>master:8033</value>
  </property>
  <property>
    <name>yarn.resourcemanager.webapp.address</name>
    <value>master:8088</value>
  </property>
  <property>
    <name>yarn.scheduler.minimum-allocation-mb</name>
    <value>100</value>
  </property>
  <property>
    <name>yarn.scheduler.maximum-allocation-mb</name>
    <value>200</value>
  </property>
  <property>
    <name>yarn.scheduler.minimum-allocation-vcores</name>
    <value>1</value>
  </property>
  <property>
    <name>yarn.scheduler.maximum-allocation-vcores</name>
    <value>2</value>
  </property>
</configuration>
# Edit the slaves file:
slave1
slave2
# Edit /home/hadoop/.bashrc:
export HADOOP_DEV_HOME=/home/hadoop/hadoop
export PATH=$PATH:$HADOOP_DEV_HOME/bin
export PATH=$PATH:$HADOOP_DEV_HOME/sbin
export HADOOP_MAPARED_HOME=${HADOOP_DEV_HOME}
export HADOOP_COMMON_HOME=${HADOOP_DEV_HOME}
export HADOOP_HDFS_HOME=${HADOOP_DEV_HOME}
export YARN_HOME=${HADOOP_DEV_HOME}
export HADOOP_CONF_DIR=${HADOOP_DEV_HOME}/etc/hadoop
export HDFS_CONF_DIR=${HADOOP_DEV_HOME}/etc/hadoop
export YARN_CONF_DIR=${HADOOP_DEV_HOME}/etc/hadoop
# Copy all of the files modified above to slave1 and slave2
5. Start HDFS on the master node
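One step the transcript below omits: before the very first `start-dfs.sh`, the NameNode metadata directory must be formatted. This destroys any existing HDFS metadata, so run it exactly once, as the hadoop user on master:

```shell
# One-time initialization of dfs.namenode.name.dir; never rerun on a live cluster.
hdfs namenode -format
```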
[hadoop@master ~]$ cd /home/hadoop/hadoop/sbin/
[hadoop@master sbin]$ ./start-dfs.sh
15/03/21 00:49:35 WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
Starting namenodes on [master]
master: starting namenode, logging to /home/hadoop/hadoop-2.2.0/logs/hadoop-hadoop-namenode-master.out
slave2: starting datanode, logging to /home/hadoop/hadoop-2.2.0/logs/hadoop-hadoop-datanode-slave2.out
slave1: starting datanode, logging to /home/hadoop/hadoop-2.2.0/logs/hadoop-hadoop-datanode-slave1.out
Starting secondary namenodes [master]
master: starting secondarynamenode, logging to /home/hadoop/hadoop-2.2.0/logs/hadoop-hadoop-secondarynamenode-master.out
# Check the processes
[hadoop@master ~]$ jps
39093 Jps
38917 SecondaryNameNode
38767 NameNode
[root@slave1 ~]# jps
2463 Jps
2379 DataNode
[root@slave2 ~]# jps
2463 Jps
2379 DataNode
# Start the JobHistory server
[hadoop@master sbin]$ mr-jobhistory-daemon.sh start historyserver
starting historyserver, logging to /home/hadoop/hadoop-2.2.0/logs/mapred-hadoop-historyserver-master.out
6. Start YARN
[hadoop@master ~]$ cd /home/hadoop/hadoop/sbin/
[hadoop@master sbin]$ ./start-yarn.sh
starting yarn daemons
starting resourcemanager, logging to /home/hadoop/hadoop-2.2.0/logs/yarn-hadoop-resourcemanager-master.out
slave2: starting nodemanager, logging to /home/hadoop/hadoop-2.2.0/logs/yarn-hadoop-nodemanager-slave2.out
slave1: starting nodemanager, logging to /home/hadoop/hadoop-2.2.0/logs/yarn-hadoop-nodemanager-slave1.out
# Check the processes
[hadoop@master sbin]$ jps
39390 Jps
38917 SecondaryNameNode
39147 ResourceManager
38767 NameNode
[hadoop@slave1 ~]$ jps
2646 Jps
2535 NodeManager
2379 DataNode
[hadoop@slave2 ~]$ jps
8261 Jps
8150 NodeManager
8004 DataNode
7. Check the HDFS file system
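In addition to listing the root directory as shown in the transcript that follows, a quick write/read round-trip confirms HDFS accepts data. The path names here are illustrative, not part of the original setup:

```shell
echo "hello hdfs" > /tmp/hello.txt           # local scratch file
hadoop fs -mkdir -p /user/hadoop             # create a home directory in HDFS
hadoop fs -put /tmp/hello.txt /user/hadoop/  # upload the file
hadoop fs -cat /user/hadoop/hello.txt        # read it back
```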
[hadoop@master sbin]$ hadoop fs -ls /
15/03/21 15:56:05 WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
Found 2 items
drwxr-xr-x   - hadoop supergroup          0 2015-03-20 17:46 /hbase
drwxrwx---   - hadoop supergroup          0 2015-03-20 16:56 /tmp
8. Install Zookeeper
[root@master ~]# tar zxvf zookeeper-3.4.5.tar.gz -C /home/hadoop/
[root@master ~]# cd /home/hadoop/
[root@master hadoop]# ln -s zookeeper-3.4.5 zookeeper
[root@master hadoop]# chown -R hadoop.hadoop /home/hadoop/zookeeper
[root@master hadoop]# cd zookeeper/conf/
[root@master conf]# cp zoo_sample.cfg zoo.cfg
# Edit zoo.cfg:
dataDir=/home/hadoop/zookeeper/data
dataLogDir=/home/hadoop/zookeeper/logs
server.1=192.168.1.2:7000:7001
server.2=192.168.1.3:7000:7001
server.3=192.168.1.4:7000:7001
# Repeat the steps above on slave1 and slave2
# Create the directories referenced in zoo.cfg, then write each node's id
[hadoop@master ~]$ mkdir -p /home/hadoop/zookeeper/data /home/hadoop/zookeeper/logs
[hadoop@master ~]$ cd /home/hadoop/zookeeper/data/
[hadoop@master data]$ echo 1 > myid
[hadoop@slave1 data]$ echo 2 > myid
[hadoop@slave2 data]$ echo 3 > myid
# Start Zookeeper on every node
[hadoop@master ~]$ cd zookeeper/bin/
[hadoop@master bin]$ ./zkServer.sh start
[hadoop@slave1 ~]$ cd zookeeper/bin/
[hadoop@slave1 bin]$ ./zkServer.sh start
[hadoop@slave2 ~]$ cd zookeeper/bin/
[hadoop@slave2 bin]$ ./zkServer.sh start
9. Install HBase
[root@master ~]# tar zxvf hbase-0.98.11-hadoop2-bin.tar.gz -C /home/hadoop/
[root@master ~]# cd /home/hadoop/
[root@master hadoop]# ln -s hbase-0.98.11-hadoop2 hbase
[root@master hadoop]# chown -R hadoop.hadoop /home/hadoop/hbase
[root@master hadoop]# cd /home/hadoop/hbase/conf/
# Edit hbase-env.sh:
export JAVA_HOME=/usr/java/jdk
export HBASE_HEAPSIZE=50
# Edit hbase-site.xml:
<configuration>
  <property>
    <name>hbase.rootdir</name>
    <value>hdfs://master:9000/hbase</value>
  </property>
  <property>
    <name>hbase.cluster.distributed</name>
    <value>true</value>
  </property>
  <property>
    <name>hbase.zookeeper.property.clientPort</name>
    <value>2181</value>
  </property>
  <property>
    <name>hbase.zookeeper.quorum</name>
    <value>master,slave1,slave2</value>
  </property>
</configuration>
# Edit the regionservers file:
slave1
slave2
# Copy the files modified above to slave1 and slave2
10. Start HBase on the master node
[hadoop@master ~]$ cd hbase/bin/
[hadoop@master bin]$ ./start-hbase.sh
master: starting zookeeper, logging to /home/hadoop/hbase/bin/../logs/hbase-hadoop-zookeeper-master.out
slave1: starting zookeeper, logging to /home/hadoop/hbase/bin/../logs/hbase-hadoop-zookeeper-slave1.out
slave2: starting zookeeper, logging to /home/hadoop/hbase/bin/../logs/hbase-hadoop-zookeeper-slave2.out
starting master, logging to /home/hadoop/hbase/bin/../logs/hbase-hadoop-master-master.out
slave1: starting regionserver, logging to /home/hadoop/hbase/bin/../logs/hbase-hadoop-regionserver-slave1.out
slave2: starting regionserver, logging to /home/hadoop/hbase/bin/../logs/hbase-hadoop-regionserver-slave2.out
# Check the processes
[hadoop@master bin]$ jps
39532 QuorumPeerMain
38917 SecondaryNameNode
39147 ResourceManager
39918 HMaster
38767 NameNode
40027 Jps
[hadoop@slave1 data]$ jps
3021 HRegionServer
3133 Jps
2535 NodeManager
2379 DataNode
2942 HQuorumPeer
[hadoop@slave2 ~]$ jps
8430 HRegionServer
8351 HQuorumPeer
8150 NodeManager
8558 Jps
8004 DataNode
# Verify
[hadoop@master bin]$ ./hbase shell
2015-03-21 16:11:44,534 INFO [main] Configuration.deprecation: hadoop.native.lib is deprecated. Instead, use io.native.lib.available
HBase Shell; enter 'help<RETURN>' for list of supported commands.
Type "exit<RETURN>" to leave the HBase Shell
Version 0.98.11-hadoop2, r6e6cf74c1161035545d95921816121eb3a516fe0, Tue Mar 3 00:23:49 PST 2015
hbase(main):001:0> list
TABLE
SLF4J: Class path contains multiple SLF4J bindings.
SLF4J: Found binding in [jar:file:/home/hadoop/hbase-0.98.11-hadoop2/lib/slf4j-log4j12-1.6.4.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: Found binding in [jar:file:/home/hadoop/hadoop-2.2.0/share/hadoop/common/lib/slf4j-log4j12-1.7.5.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation.
2015-03-21 16:11:56,499 WARN [main] util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
0 row(s) in 1.9010 seconds
=> []
11. Check the cluster status
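Command-line checks complement the web UIs listed below. A sketch, run as the hadoop user on master; the table name 'smoke' is made up for this test:

```shell
# HDFS: datanode count, capacity, and remaining space.
hdfs dfsadmin -report
# Zookeeper: each node should report "Mode: leader" or "Mode: follower".
/home/hadoop/zookeeper/bin/zkServer.sh status
# HBase: create, write, scan, and drop a throwaway table.
hbase shell <<'EOF'
create 'smoke', 'cf'
put 'smoke', 'row1', 'cf:a', 'v1'
scan 'smoke'
disable 'smoke'
drop 'smoke'
EOF
```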
HDFS UI: http://192.168.1.2:50070/dfshealth.jsp
YARN UI: http://192.168.1.2:8088/cluster
JobHistory UI: http://192.168.1.2:19888/jobhistory
HBase UI: http://192.168.1.2:60010/master-status
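The same addresses can be probed from a script to confirm each daemon is listening, without opening a browser:

```shell
# Probe each UI; -s silences progress, -f fails on HTTP errors, -o discards the body.
for url in \
  http://192.168.1.2:50070/dfshealth.jsp \
  http://192.168.1.2:8088/cluster \
  http://192.168.1.2:19888/jobhistory \
  http://192.168.1.2:60010/master-status; do
  if curl -sf -o /dev/null "$url"; then echo "OK   $url"; else echo "DOWN $url"; fi
done
```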