Installing Hadoop (Single Node)



Extract the archive:

```
tar -zxvf hadoop-2.6.0-cdh5.14.2.tar.gz -C ../soft
```

# Configure the environment

```
vi /etc/profile
#hadoop
export HADOOP_HOME=/opt/soft/hadoop260
export HADOOP_MAPRED_HOME=$HADOOP_HOME
export HADOOP_COMMON_HOME=$HADOOP_HOME
export HADOOP_HDFS_HOME=$HADOOP_HOME
export YARN_HOME=$HADOOP_HOME
export HADOOP_COMMON_LIB_NATIVE_DIR=$HADOOP_HOME/lib/native
export HADOOP_OPTS="-Djava.library.path=$HADOOP_HOME/lib"
export PATH=$PATH:$HADOOP_HOME/sbin:$HADOOP_HOME/bin

cd /opt/soft/hadoop260/etc/hadoop
[root@gree139 hadoop]# vi ./hadoop-env.sh 
export JAVA_HOME=/opt/soft/jdk180
[root@gree139 hadoop]# vi ./mapred-env.sh
uncomment the line: export JAVA_HOME=/opt/soft/jdk180
[root@gree139 hadoop]# vi ./yarn-env.sh
export JAVA_HOME=/opt/soft/jdk180
```
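The edits to /etc/profile only take effect in new login shells. A quick way to apply and sanity-check them in the current shell (assuming the paths set above):

```
# Reload the profile in the current shell
source /etc/profile

# Verify the variables and that the Hadoop binaries are on PATH
echo $HADOOP_HOME     # should print /opt/soft/hadoop260
hadoop version        # should report Hadoop 2.6.0-cdh5.14.2
```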

##### On Windows, edit C:\Windows\System32\drivers\etc\hosts and add an `ip hostname` entry for the server

```
[root@gree139 hadoop]# vi ./core-site.xml

<configuration>
  <property>
    <name>fs.defaultFS</name>
    <value>hdfs://gree139:9000</value>
  </property>

  <property>
    <name>hadoop.tmp.dir</name>
    <value>/opt/soft/hadoop260/hadooptmp</value>
  </property>

  <property>
    <name>hadoop.proxyuser.root.hosts</name>
    <value>*</value>
  </property>

  <property>
    <name>hadoop.proxyuser.root.groups</name>
    <value>*</value>
  </property>
</configuration>
```

```
[root@gree139 hadoop]# vi ./hdfs-site.xml

<configuration>
  <property>
    <name>dfs.replication</name>
    <value>3</value>
  </property>

  <property>
    <name>dfs.namenode.secondary.http-address</name>
    <value>gree139:50090</value>
  </property>
</configuration>
```

```
[root@gree139 hadoop]# cp mapred-site.xml.template mapred-site.xml
[root@gree139 hadoop]# vi ./mapred-site.xml

<configuration>
  <property>
    <name>mapreduce.framework.name</name>
    <value>yarn</value>
  </property>

  <property>
    <name>mapreduce.jobhistory.address</name>
    <value>gree139:10020</value>
  </property>

  <property>
    <name>mapreduce.jobhistory.webapp.address</name>
    <value>gree139:19888</value>
  </property>
</configuration>
```

```
[root@gree139 hadoop]# vi ./yarn-site.xml

<configuration>
  <property>
    <name>yarn.nodemanager.aux-services</name>
    <value>mapreduce_shuffle</value>
  </property>

  <property>
    <name>yarn.nodemanager.aux-services.mapreduce.shuffle.class</name>
    <value>org.apache.hadoop.mapred.ShuffleHandler</value>
  </property>

  <property>
    <name>yarn.resourcemanager.hostname</name>
    <value>gree139</value>
  </property>

  <property>
    <name>yarn.log-aggregation-enable</name>
    <value>true</value>
  </property>

  <property>
    <name>yarn.log-aggregation.retain-seconds</name>
    <value>604800</value>
  </property>
</configuration>
```

```
[root@gree139 hadoop]# vi ./slaves
(the slaves file lists the DataNode hosts; for a single node this is the local hostname)

Format the NameNode before the first start:
[root@gree139 hadoop]# hadoop namenode -format
```

```
[root@gree139 hadoop]# hadoop-daemon.sh start namenode
[root@gree139 hadoop]# hadoop-daemon.sh stop namenode

[root@gree139 hadoop]# hadoop-daemon.sh stop datanode
[root@gree139 hadoop]# hadoop-daemon.sh start datanode

[root@gree139 hadoop]# hadoop-daemon.sh start secondarynamenode
[root@gree139 hadoop]# hadoop-daemon.sh stop secondarynamenode

Start the YARN daemons (ResourceManager and NodeManager)
[root@gree139 hadoop]# start-yarn.sh
[root@gree139 hadoop]# stop-yarn.sh

Start the HDFS daemons (NameNode, DataNode, SecondaryNameNode)
[root@gree139 hadoop]# start-dfs.sh 
[root@gree139 hadoop]# stop-dfs.sh 

[root@gree139 hadoop]# yarn-daemon.sh start nodemanager
[root@gree139 hadoop]# yarn-daemon.sh stop nodemanager
[root@gree139 hadoop]# yarn-daemon.sh start resourcemanager
[root@gree139 hadoop]# yarn-daemon.sh stop resourcemanager

Start/stop everything at once
[root@gree139 hadoop]# start-all.sh 
[root@gree139 hadoop]# stop-all.sh 
```
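After starting the daemons, a simple sanity check is `jps` (shipped with the JDK), which lists the running Java processes. A sketch of what to expect on this single-node setup after `start-all.sh` plus the history server (PIDs will differ):

```
[root@gree139 hadoop]# jps
# Expected process names:
#   NameNode
#   DataNode
#   SecondaryNameNode
#   ResourceManager
#   NodeManager
```

If one of the daemons is missing, check its log under $HADOOP_HOME/logs.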


http://<hostname or ip>:50070  HDFS web UI

http://gree139:8088/      YARN web UI

http://gree139:19888/     JobHistory web UI

```
Start the JobHistory server
[root@gree139 hadoop]# mr-jobhistory-daemon.sh start historyserver

Check node status from the command line
[root@gree139 hadoop]# yarn node -list -all

Create an input directory in HDFS
[root@gree139 hadoop]# hdfs dfs -mkdir /input

List the files in a directory
[root@gree139 hadoop]# hdfs dfs -ls /

Upload a file to the given HDFS directory
[root@gree139 hadoop]# hdfs dfs -put ./yarn-env.sh /input/

Download a file
[root@gree139 hadoop260]# hdfs dfs -get /input/yarn-env.sh ./yarn-env.sh.bak

Delete a file
[root@gree139 hadoop260]# hdfs dfs -rm /input/yarn-env.sh
```
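As an end-to-end smoke test, you can run the WordCount example bundled with Hadoop against the file uploaded to /input. The jar path below assumes the CDH 5.14.2 tarball layout; adjust the version suffix if yours differs, and note that the /output directory must not already exist:

```
# Run the bundled MapReduce example job
hadoop jar $HADOOP_HOME/share/hadoop/mapreduce/hadoop-mapreduce-examples-2.6.0-cdh5.14.2.jar \
  wordcount /input /output

# Inspect the word counts produced by the single reducer
hdfs dfs -cat /output/part-r-00000 | head
```

A successful run confirms that HDFS, YARN, and the MapReduce framework are all wired together correctly.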

Source: http://outofmemory.cn/zaji/5350540.html