Environment: a Tongfang two-socket Loongson server running the 64-bit Loongnix operating system.
Download the packages needed for the rebuild
- Deploy the Java environment
Loongnix ships with OpenJDK preinstalled:
[root@localhost hadoop-2.6.4-src]# java -version
openjdk version "1.8.0_202"
OpenJDK Runtime Environment (Loongson 8.1.2-loongson3a-Fedora) (build 1.8.0_202-b08)
OpenJDK 64-Bit Server VM (build 25.202-b08, mixed mode)
[root@localhost hadoop-2.6.4-src]#
- Install Maven
yum install maven
[root@localhost hadoop-2.6.4-src]# mvn -version
Apache Maven 3.2.2 (NON-CANONICAL_2015-07-07T11:23:04_root; 2015-07-07T11:23:04+08:00)
Maven home: /usr/share/maven
Java version: 1.8.0_202, vendor: Oracle Corporation
Java home: /usr/lib/jvm/java-1.8.0-openjdk-1.8.0.202-1.b08.8.1.2.fc21.loongson.mips64el/jre
Default locale: zh_CN, platform encoding: UTF-8
OS name: "linux", version: "3.10.0+", arch: "mips64el", family: "unix"
[root@localhost hadoop-2.6.4-src]#
- Install Protocol Buffers 2.5.0
wget https://github.com/protocolbuffers/protobuf/archive/v2.5.0.zip
Running autogen.sh requires access to googletest; autogen.sh is what generates the configure script.
A package with configure already generated is available here:
https://pan.baidu.com/s/1pJlZubT
./autogen.sh
./configure --prefix=$INSTALL_DIR
make
make check
make install
Alternatively, install it from the yum repository:
yum install protobuf-devel
Verify:
Check the installation with protoc --version.
If it prints libprotoc 2.5.0, the installation succeeded.
Add the environment variables:
[root@hadoop001 ~]# vi /etc/profile
export PROTOC_HOME=/home/hadoop/app/protobuf
export PATH=$PROTOC_HOME/bin:$PATH
[root@hadoop001 ~]# source /etc/profile
[root@hadoop001 ~]# protoc --version
libprotoc 2.5.0    (installation succeeded)
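If protoc resolves but its shared library does not (see problem 2 in the troubleshooting section below), a few quick checks help; a minimal sketch:

which protoc                     # confirm the binary on PATH is the one just installed
protoc --version                 # expect: libprotoc 2.5.0
ldconfig -p | grep libprotoc     # confirm libprotoc.so.8 is registered with the dynamic linker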
- Install the other dependencies
yum install -y openssl openssl-devel svn ncurses-devel zlib-devel libtool
yum install -y snappy snappy-devel bzip2 bzip2-devel lzo lzo-devel lzop autoconf automake
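The native profile also needs CMake and a C/C++ toolchain; a hedged install sketch, assuming standard Fedora/Loongnix package names:

yum install -y gcc gcc-c++ make cmake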
- Install FindBugs (optional)
Install it with yum:
yum install findbugs
[root@localhost src]# findbugs -version
3.0.0
[root@localhost src]#
- Download the Hadoop 2.6.4 source package
http://mirrors.hust.edu.cn/apache/hadoop/common/
hadoop-2.6.4-src.tar.gz
- Unpack the source package
tar -xvzf hadoop-2.6.4-src.tar.gz
- Build
mvn clean package -Pdist,native -DskipTests -Dtar
The following variants can be used instead:
mvn package -Pdist -DskipTests -Dtar                     // binary distribution without native code, skipping tests and documentation
mvn package -Pdist,native,docs -DskipTests -Dtar         // binary distribution with native code and documentation
mvn package -Psrc -DskipTests                            // source distribution
mvn package -Pdist,native,docs,src -DskipTests -Dtar     // binary distribution including source, with native code and documentation
mvn clean site; mvn site:stage -DstagingDirectory=/tmp/hadoop-site     // build a local copy of the web site under /tmp/hadoop-site
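Whichever profile is chosen, the full build took roughly 50 minutes on this machine, so it is worth giving Maven more heap and keeping a log. A minimal sketch (the MAVEN_OPTS value is an assumption, tune it to your RAM):

export MAVEN_OPTS="-Xms256m -Xmx2g"
mvn clean package -Pdist,native -DskipTests -Dtar 2>&1 | tee build-$(date +%Y%m%d-%H%M).log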
The built distribution is generated at:
hadoop-2.6.4-src/hadoop-dist/target/hadoop-2.6.4.tar.gz
- Build result
main:
     [exec] $ tar cf hadoop-2.6.4.tar hadoop-2.6.4
     [exec] $ gzip -f hadoop-2.6.4.tar
     [exec]
     [exec] Hadoop dist tar available at: /home/zhubo/disk_dir/hadoop/hadoop_src/hadoop-2.6.4-src/hadoop-dist/target/hadoop-2.6.4.tar.gz
     [exec]
[INFO] Executed tasks
[INFO]
[INFO] --- maven-javadoc-plugin:2.8.1:jar (module-javadocs) @ hadoop-dist ---
[INFO] Building jar: /home/zhubo/disk_dir/hadoop/hadoop_src/hadoop-2.6.4-src/hadoop-dist/target/hadoop-dist-2.6.4-javadoc.jar
[INFO] ------------------------------------------------------------------------
[INFO] Reactor Summary:
[INFO]
[INFO] Apache Hadoop Common ............................... SUCCESS [06:44 min]
[INFO] Apache Hadoop NFS .................................. SUCCESS [ 22.945 s]
[INFO] Apache Hadoop KMS .................................. SUCCESS [03:02 min]
[INFO] Apache Hadoop Common Project ....................... SUCCESS [ 0.156 s]
[INFO] Apache Hadoop HDFS ................................. SUCCESS [09:43 min]
[INFO] Apache Hadoop HttpFS ............................... SUCCESS [01:02 min]
[INFO] Apache Hadoop HDFS BookKeeper Journal .............. SUCCESS [ 51.817 s]
[INFO] Apache Hadoop HDFS-NFS ............................. SUCCESS [ 14.303 s]
[INFO] Apache Hadoop HDFS Project ......................... SUCCESS [ 0.131 s]
[INFO] hadoop-yarn ........................................ SUCCESS [ 0.123 s]
[INFO] hadoop-yarn-api .................................... SUCCESS [02:01 min]
[INFO] hadoop-yarn-common ................................. SUCCESS [02:14 min]
[INFO] hadoop-yarn-server ................................. SUCCESS [ 0.156 s]
[INFO] hadoop-yarn-server-common .......................... SUCCESS [01:29 min]
[INFO] hadoop-yarn-server-nodemanager ..................... SUCCESS [01:54 min]
[INFO] hadoop-yarn-server-web-proxy ....................... SUCCESS [ 8.869 s]
[INFO] hadoop-yarn-server-applicationhistoryservice ....... SUCCESS [ 21.212 s]
[INFO] hadoop-yarn-server-resourcemanager ................. SUCCESS [01:10 min]
[INFO] hadoop-yarn-server-tests ........................... SUCCESS [ 17.023 s]
[INFO] hadoop-yarn-client ................................. SUCCESS [ 22.395 s]
[INFO] hadoop-yarn-applications ........................... SUCCESS [ 0.123 s]
[INFO] hadoop-yarn-applications-distributedshell .......... SUCCESS [ 9.894 s]
[INFO] hadoop-yarn-applications-unmanaged-am-launcher ..... SUCCESS [ 6.379 s]
[INFO] hadoop-yarn-site ................................... SUCCESS [ 0.237 s]
[INFO] hadoop-yarn-registry ............................... SUCCESS [ 17.471 s]
[INFO] hadoop-yarn-project ................................ SUCCESS [ 13.062 s]
[INFO] hadoop-mapreduce-client ............................ SUCCESS [ 0.225 s]
[INFO] hadoop-mapreduce-client-core ....................... SUCCESS [01:11 min]
[INFO] hadoop-mapreduce-client-common ..................... SUCCESS [ 56.526 s]
[INFO] hadoop-mapreduce-client-shuffle .................... SUCCESS [ 12.969 s]
[INFO] hadoop-mapreduce-client-app ........................ SUCCESS [ 33.396 s]
[INFO] hadoop-mapreduce-client-hs ......................... SUCCESS [ 27.378 s]
[INFO] hadoop-mapreduce-client-jobclient .................. SUCCESS [01:39 min]
[INFO] hadoop-mapreduce-client-hs-plugins ................. SUCCESS [ 6.331 s]
[INFO] Apache Hadoop MapReduce Examples ................... SUCCESS [ 20.835 s]
[INFO] hadoop-mapreduce ................................... SUCCESS [ 14.185 s]
[INFO] Apache Hadoop MapReduce Streaming .................. SUCCESS [ 31.983 s]
[INFO] Apache Hadoop Distributed Copy ..................... SUCCESS [ 48.137 s]
[INFO] Apache Hadoop Archives ............................. SUCCESS [ 8.449 s]
[INFO] Apache Hadoop Rumen ................................ SUCCESS [ 19.740 s]
[INFO] Apache Hadoop Gridmix .............................. SUCCESS [ 15.111 s]
[INFO] Apache Hadoop Data Join ............................ SUCCESS [ 9.635 s]
[INFO] Apache Hadoop Ant Tasks ............................ SUCCESS [ 9.216 s]
[INFO] Apache Hadoop Extras ............................... SUCCESS [ 10.886 s]
[INFO] Apache Hadoop Pipes ................................ SUCCESS [ 27.424 s]
[INFO] Apache Hadoop OpenStack support .................... SUCCESS [ 16.461 s]
[INFO] Apache Hadoop Amazon Web Services support .......... SUCCESS [05:13 min]
[INFO] Apache Hadoop Client ............................... SUCCESS [ 24.814 s]
[INFO] Apache Hadoop Mini-Cluster ......................... SUCCESS [ 0.357 s]
[INFO] Apache Hadoop Scheduler Load Simulator ............. SUCCESS [ 14.325 s]
[INFO] Apache Hadoop Tools Dist ........................... SUCCESS [ 24.274 s]
[INFO] Apache Hadoop Tools ................................ SUCCESS [ 0.118 s]
[INFO] Apache Hadoop Distribution ......................... SUCCESS [01:32 min]
[INFO] ------------------------------------------------------------------------
[INFO] BUILD SUCCESS
[INFO] ------------------------------------------------------------------------
[INFO] Total time: 49:48 min
[INFO] Finished at: 2019-10-10T12:09:00+08:00
[INFO] Final Memory: 208M/2982M
[INFO] ------------------------------------------------------------------------

Cluster setup
- Set the hostname; on each of the three hosts run the matching command
echo mini > /etc/hostname
echo mini1 > /etc/hostname
echo mini2 > /etc/hostname
Add name resolution on all three hosts:
vim /etc/hosts
10.40.25.185 mini.com mini
10.40.25.186 mini1.com mini1
10.40.25.187 mini2.com mini2
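To confirm that all three names resolve before continuing, a quick hedged check:

for h in mini mini1 mini2; do
    getent hosts $h    # should print the IP configured in /etc/hosts
    ping -c 1 $h
done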
- Set up passwordless SSH login
Run the following on mini:
ssh-keygen
cd /root/.ssh/
cat id_rsa.pub >> authorized_keys
Verify that login works without a password:
ssh root@mini
Copy the key to the other hosts:
ssh-copy-id -i /root/.ssh/id_rsa.pub root@mini1
ssh-copy-id -i /root/.ssh/id_rsa.pub root@mini2
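To verify passwordless login to every node in one pass, a minimal sketch:

for h in mini mini1 mini2; do
    ssh root@$h hostname    # should print each hostname without asking for a password
done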
Copy the hadoop-2.6.4-src/hadoop-dist/target/hadoop-2.6.4.tar.gz produced by the build above to /home/hadoop/apps:
cd /home/hadoop/apps/
Unpack it:
tar xvzf hadoop-2.6.4.tar.gz
The planned installation directory is /home/hadoop/apps/hadoop-2.6.4:
export HADOOP_HOME=/home/hadoop/apps/hadoop-2.6.4
mkdir -p tmp hdfs hdfs/data hdfs/name    (these directories must be created manually; HDFS stores its node metadata and block data here)
Modify the configuration files:
cd $HADOOP_HOME/etc/hadoop/
Set JAVA_HOME:
vim hadoop-env.sh
export JAVA_HOME=/usr/lib/jvm/java-1.8.0-openjdk-1.8.0.202-1.b08.8.1.2.fc21.loongson.mips64el
Set the address to the master's IP or hostname, and change the tmp path to your own path:
vim core-site.xml
<configuration>
  <property>
    <name>fs.defaultFS</name>
    <value>hdfs://mini:9000</value>
  </property>
  <property>
    <name>hadoop.tmp.dir</name>
    <value>/home/hadoop/apps/hadoop-2.6.4/tmp</value>
  </property>
</configuration>
Set the name and data directories, and point the secondary namenode address at the master's IP or hostname:
vim hdfs-site.xml
<configuration>
  <property>
    <name>dfs.namenode.name.dir</name>
    <value>/home/hadoop/apps/hadoop-2.6.4/data/name</value>
  </property>
  <property>
    <name>dfs.datanode.data.dir</name>
    <value>/home/hadoop/apps/hadoop-2.6.4/data/data</value>
  </property>
  <property>
    <name>dfs.replication</name>
    <value>3</value>
  </property>
  <property>
    <name>dfs.secondary.http.address</name>
    <value>mini:50090</value>
  </property>
</configuration>
cp mapred-site.xml.template mapred-site.xml
vim mapred-site.xml
<configuration>
  <property>
    <name>mapreduce.framework.name</name>
    <value>yarn</value>
  </property>
</configuration>
vim yarn-site.xml
<configuration>
  <property>
    <name>yarn.resourcemanager.hostname</name>
    <value>mini</value>
  </property>
  <property>
    <name>yarn.nodemanager.aux-services</name>
    <value>mapreduce_shuffle</value>
  </property>
</configuration>
vim slaves
mini
mini1
mini2

Start the cluster
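Before starting any daemons, mini1 and mini2 also need the same Hadoop directory and configuration; a hedged sketch, assuming identical paths on all three hosts:

for h in mini1 mini2; do
    ssh root@$h "mkdir -p /home/hadoop/apps"
    scp -r /home/hadoop/apps/hadoop-2.6.4 root@$h:/home/hadoop/apps/
done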
- Add Hadoop to the environment variables
vim setenv.sh
export JAVA_HOME=/usr/lib/jvm/java-1.8.0-openjdk-1.8.0.202-1.b08.8.1.2.fc21.loongson.mips64el
export HADOOP_HOME=/home/hadoop/apps/hadoop-2.6.4
export PATH=$PATH:$JAVA_HOME/bin:$HADOOP_HOME/bin:$HADOOP_HOME/sbin
Apply the changes:
source setenv.sh
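A quick hedged check that the new environment is in effect:

which hadoop       # should point into /home/hadoop/apps/hadoop-2.6.4/bin
hadoop version     # should report Hadoop 2.6.4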
Initialize (format) HDFS:
hadoop namenode -format
Start HDFS:
start-dfs.sh
Start YARN:
start-yarn.sh
Check that the daemons are running:
[root@localhost hadoop-2.6.4]# jps
12997 Jps
11542 NameNode
11834 SecondaryNameNode
12251 ResourceManager
9773 DataNode
12350 NodeManager

Verification
- Upload a file to HDFS
Upload a local text file to the /wordcount/input directory in HDFS:
hadoop fs -mkdir -p /wordcount/input
hadoop fs -put /home/hadoop/test.txt /wordcount/input
hadoop fs -ls /wordcount/input/
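The put command above assumes /home/hadoop/test.txt already exists; if it does not, throwaway input can be created first (the contents below are purely illustrative):

echo "hello hadoop hello loongson" > /home/hadoop/test.txt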
- Run a MapReduce job
From the Hadoop installation directory, run one of the bundled example jobs:
cd hadoop-2.6.4/share/hadoop/mapreduce/
hadoop jar hadoop-mapreduce-examples-2.6.4.jar wordcount /wordcount/input /wordcount/output
hadoop fs -ls /wordcount/output
List the generated files:
[root@mini mapreduce]# hadoop fs -ls /wordcount/output
-rw-r--r--   3 root supergroup          0 2019-10-12 15:46 /wordcount/output/_SUCCESS
-rw-r--r--   3 root supergroup        234 2019-10-12 15:46 /wordcount/output/part-r-00000
[root@mini mapreduce]#
View the job output:
hadoop fs -cat /wordcount/output/part-r-00000
[root@mini mapreduce]# hadoop fs -cat /wordcount/output/part-r-00000
19/10/12 16:01:05 WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
d	1
dddddddddddddddddddddddddddddfffffffffffffffffffffffffffffffffweeeeeeeeeeeeeeeeeeeeeeeeetrrrrrrrrrrrrrrrrrrrrrrrrrrrhhhhhhhhhhhhhhhhhhhhhhhqqqqqqqqqqqqqqqqqlkkkkkkkkkkkkkkkkhggggg	1
dfgd	1
ew	1
g	2
r	3
sdf	1
sdff	1
sfd	1
w	1
we	2
[root@mini mapreduce]#
- Check the HDFS status
hdfs dfsadmin -report
[root@mini mapreduce]# hdfs dfsadmin -report
19/10/12 15:48:57 WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
Configured Capacity: 18327826432 (17.07 GB)
Present Capacity: 9458700288 (8.81 GB)
DFS Remaining: 9458647040 (8.81 GB)
DFS Used: 53248 (52 KB)
DFS Used%: 0.00%
Under replicated blocks: 2
Blocks with corrupt replicas: 0
Missing blocks: 0

-------------------------------------------------
Live datanodes (1):

Name: 10.40.25.185:50010 (mini.com)
Hostname: mini.com
Decommission Status : Normal
Configured Capacity: 18327826432 (17.07 GB)
DFS Used: 53248 (52 KB)
Non DFS Used: 8869126144 (8.26 GB)
DFS Remaining: 9458647040 (8.81 GB)
DFS Used%: 0.00%
DFS Remaining%: 51.61%
Configured Cache Capacity: 0 (0 B)
Cache Used: 0 (0 B)
Cache Remaining: 0 (0 B)
Cache Used%: 100.00%
Cache Remaining%: 0.00%
Xceivers: 1
Last contact: Sat Oct 12 15:49:00 CST 2019
[root@mini mapreduce]#

Build troubleshooting
1. Error: Failed to find a viable JVM installation under JAVA_HOME
Solution:
Go to the src directory of hadoop-common inside hadoop-2.6.4-src:
[root@localhost src]# cd /home/zhubo/disk_dir/hadoop/hadoop_src/hadoop-2.6.4-src/hadoop-common-project/hadoop-common/src
[root@localhost src]# ls
CMakeLists.txt  config.h.cmake  contrib  JNIFlags.cmake  main  site  test
[root@localhost src]# vim JNIFlags.cmake
Add the following branch to the file:
ELSEIF (CMAKE_SYSTEM_PROCESSOR MATCHES "mips64")
    SET(_java_libarch "mips64el")
Notes:
_java_libarch must match the actual JRE library directory in your environment, for example:
[root@localhost disk_dir]# ls /usr/lib/jvm/java-1.8.0-openjdk-1.8.0.202-1.b08.8.1.2.fc21.loongson.mips64el/jre/lib/mips64el/
hsdis-mips64el.so  libawt_headless.so  libdt_socket.so  libinstrument.so  libjaas_unix.so  libjavafx_font.so  libjdwp.so  libjsoundalsa.so  libmlib_image.so  libprism_common.so  libsctp.so  libverify.so
jli  libawt.so  libfontmanager.so  libj2gss.so  libjava_crw_demo.so  libjavafx_iio.so  libjpeg.so  libjsound.so  libnet.so  libprism_es2.so  libsplashscreen.so  libzip.so
jvm.cfg  libawt_xawt.so  libglass.so  libj2pcsc.so  libjavafx_font_freetype.so  libjava.so  libjsdt.so  liblcms.so  libnio.so  libprism_sw.so  libsunec.so  server
libattach.so  libdecora_sse.so  libhprof.so  libj2pkcs11.so  libjavafx_font_pango.so  libjawt.so  libjsig.so  libmanagement.so  libnpt.so  libsaproc.so  libunpack.so
[root@localhost disk_dir]#
CMAKE_SYSTEM_PROCESSOR depends on the build environment; check its value with:
grep -nr CMAKE_SYSTEM_PROCESSOR
Example output:
[root@localhost hadoop-2.6.4-src]# pwd
/home/zhubo/disk_dir/hadoop/hadoop_src/hadoop-2.6.4-src
[root@localhost hadoop-2.6.4-src]#
[root@localhost hadoop-2.6.4-src]# grep -nr CMAKE_SYSTEM_PROCESSOR
hadoop-common-project/hadoop-common/src/JNIFlags.cmake:25: if (CMAKE_COMPILER_IS_GNUCC AND CMAKE_SYSTEM_PROCESSOR MATCHES ".*64")
hadoop-common-project/hadoop-common/src/JNIFlags.cmake:30: if (CMAKE_SYSTEM_PROCESSOR STREQUAL "x86_64" OR CMAKE_SYSTEM_PROCESSOR STREQUAL "amd64")
hadoop-common-project/hadoop-common/src/JNIFlags.cmake:31: # Set CMAKE_SYSTEM_PROCESSOR to ensure that find_package(JNI) will use
hadoop-common-project/hadoop-common/src/JNIFlags.cmake:33: set(CMAKE_SYSTEM_PROCESSOR "i686")
hadoop-common-project/hadoop-common/src/JNIFlags.cmake:38:if (CMAKE_SYSTEM_PROCESSOR MATCHES "^arm" AND CMAKE_SYSTEM_NAME STREQUAL "Linux")
hadoop-common-project/hadoop-common/src/JNIFlags.cmake:66:endif (CMAKE_SYSTEM_PROCESSOR MATCHES "^arm" AND CMAKE_SYSTEM_NAME STREQUAL "Linux")
hadoop-common-project/hadoop-common/src/JNIFlags.cmake:75: IF(CMAKE_SYSTEM_PROCESSOR MATCHES "^i.86$")
hadoop-common-project/hadoop-common/src/JNIFlags.cmake:77: ELSEIF (CMAKE_SYSTEM_PROCESSOR STREQUAL "x86_64" OR CMAKE_SYSTEM_PROCESSOR STREQUAL "amd64")
hadoop-common-project/hadoop-common/src/JNIFlags.cmake:79: ELSEIF (CMAKE_SYSTEM_PROCESSOR MATCHES "^arm")
hadoop-common-project/hadoop-common/src/JNIFlags.cmake:81: ELSEIF (CMAKE_SYSTEM_PROCESSOR MATCHES "^(powerpc|ppc)64le")
hadoop-common-project/hadoop-common/src/JNIFlags.cmake:88: SET(_java_libarch ${CMAKE_SYSTEM_PROCESSOR})
hadoop-common-project/hadoop-common/target/native/CMakeFiles/3.9.0/CMakeSystem.cmake:11:set(CMAKE_SYSTEM_PROCESSOR "mips64")
The patched section of JNIFlags.cmake then looks like this:
IF("${CMAKE_SYSTEM}" MATCHES "Linux") # # Locate JNI_INCLUDE_DIRS and JNI_LIBRARIES. # Since we were invoked from Maven, we know that the JAVA_HOME environment # variable is valid. So we ignore system paths here and just use JAVA_HOME. # FILE(TO_CMAKE_PATH "$ENV{JAVA_HOME}" _JAVA_HOME) IF(CMAKE_SYSTEM_PROCESSOR MATCHES "^i.86$") SET(_java_libarch "i386") ELSEIF (CMAKE_SYSTEM_PROCESSOR STREQUAL "x86_64" OR CMAKE_SYSTEM_PROCESSOR STREQUAL "amd64") SET(_java_libarch "amd64") ELSEIF (CMAKE_SYSTEM_PROCESSOR MATCHES "mips64") SET(_java_libarch "mips64el") ELSEIF (CMAKE_SYSTEM_PROCESSOR MATCHES "^arm") SET(_java_libarch "arm") ELSEIF (CMAKE_SYSTEM_PROCESSOR MATCHES "^(powerpc|ppc)64le") IF(EXISTS "${_JAVA_HOME}/jre/lib/ppc64le") SET(_java_libarch "ppc64le") ELSE() SET(_java_libarch "ppc64") ENDIF() ELSE() SET(_java_libarch ${CMAKE_SYSTEM_PROCESSOR}) ENDIF() SET(_JDK_DIRS "${_JAVA_HOME}/jre/lib/${_java_libarch}/*" "${_JAVA_HOME}/jre/lib/${_java_libarch}" "${_JAVA_HOME}/jre/lib/*" "${_JAVA_HOME}/jre/lib" "${_JAVA_HOME}/lib/*" "${_JAVA_HOME}/lib" "${_JAVA_HOME}/include/*" "${_JAVA_HOME}/include" "${_JAVA_HOME}" ....
2. Problem
[14:21:34@root hadoop_src]#protoc --version
protoc: error while loading shared libraries: libprotoc.so.8: cannot open shared object file: No such file or directory
Solution:
sudo ldconfig
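If protobuf was installed under a non-default prefix (such as the /home/hadoop/app/protobuf used earlier), ldconfig alone may not find it; a hedged sketch that registers the library directory first (the conf file name is arbitrary):

echo "/home/hadoop/app/protobuf/lib" > /etc/ld.so.conf.d/protobuf.conf
ldconfig
protoc --version    # expect: libprotoc 2.5.0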
3. Problem
findbugsXml.xml does not exist
This happens when an earlier build failed partway and the build is then rerun; to fix it, resume the build from the module that failed.
Solution:
mvn package -Pdist,native -DskipTests -Dtar -rf :hadoop-common
4. Warning
OpenJDK 64-Bit Server VM warning: You have loaded library /home/hadoop/apps/hadoop-2.6.4/lib/native/libhadoop.so which might have disabled stack guard. The VM will try to fix the stack guard now.
It's highly recommended that you fix the library with 'execstack -c ', or link it with '-z noexecstack'.
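A hedged way to act on this warning, assuming the execstack utility is installed (for example from the execstack/prelink package):

execstack -c /home/hadoop/apps/hadoop-2.6.4/lib/native/libhadoop.so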
5. Warning
WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
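To see which native libraries this build actually loads, and to make sure the JVM looks in the freshly built native directory, a hedged sketch:

hadoop checknative -a
export HADOOP_OPTS="$HADOOP_OPTS -Djava.library.path=$HADOOP_HOME/lib/native"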
6. Problem
Failed to execute goal org.apache.maven.plugins:maven-javadoc-plugin:2.8.1:j
Solution:
Build with the following command instead:
mvn clean install -DskipTests -Dmaven.javadoc.skip=true