linux – Combining HBase and HDFS causes an exception in makeDirOnFileSystem

Introduction

Attempting to combine HBase and HDFS yields the following:

2014-06-09 00:15:14,777 WARN org.apache.hadoop.hbase.HBaseFileSystem: Create Directory, retries exhausted
2014-06-09 00:15:14,780 FATAL org.apache.hadoop.hbase.master.HMaster: Unhandled exception. Starting shutdown.
java.io.IOException: Exception in makeDirOnFileSystem
        at org.apache.hadoop.hbase.HBaseFileSystem.makeDirOnFileSystem(HBaseFileSystem.java:136)
        at org.apache.hadoop.hbase.master.MasterFileSystem.checkRootDir(MasterFileSystem.java:428)
        at org.apache.hadoop.hbase.master.MasterFileSystem.createInitialFileSystemLayout(MasterFileSystem.java:148)
        at org.apache.hadoop.hbase.master.MasterFileSystem.<init>(MasterFileSystem.java:133)
        at org.apache.hadoop.hbase.master.HMaster.finishInitialization(HMaster.java:572)
        at org.apache.hadoop.hbase.master.HMaster.run(HMaster.java:432)
        at java.lang.Thread.run(Thread.java:744)
Caused by: org.apache.hadoop.security.AccessControlException: Permission denied: user=hbase, access=WRITE, inode="/":vagrant:supergroup:drwxr-xr-x
        at org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.check(FSPermissionChecker.java:224)
        at org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.check(FSPermissionChecker.java:204)
        at org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.checkPermission(FSPermissionChecker.java:149)
        at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.checkPermission(FSNamesystem.java:4891)
        at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.checkPermission(FSNamesystem.java:4873)
        at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.checkAncestorAccess(FSNamesystem.java:4847)
        at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.mkdirsInternal(FSNamesystem.java:3192)
        at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.mkdirsInt(FSNamesystem.java:3156)
        at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.mkdirs(FSNamesystem.java:3137)
        at org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.mkdirs(NameNodeRpcServer.java:669)
        at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.mkdirs(ClientNamenodeProtocolServerSideTranslatorPB.java:419)
        at org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol.callBlockingMethod(ClientNamenodeProtocolProtos.java:44970)
        at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:453)
        at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:1002)
        at org.apache.hadoop.ipc.Server$Handler.run(Server.java:1752)
        at org.apache.hadoop.ipc.Server$Handler.run(Server.java:1748)
        at java.security.AccessController.doPrivileged(Native Method)
        at javax.security.auth.Subject.doAs(Subject.java:422)
        at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1438)
        at org.apache.hadoop.ipc.Server$Handler.run(Server.java:1746)
        at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
        at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62)
        at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
        at java.lang.reflect.Constructor.newInstance(Constructor.java:408)
        at org.apache.hadoop.ipc.RemoteException.instantiateException(RemoteException.java:90)
        at org.apache.hadoop.ipc.RemoteException.unwrapRemoteException(RemoteException.java:57)
        at org.apache.hadoop.hdfs.DFSClient.primitiveMkdir(DFSClient.java:2153)
        at org.apache.hadoop.hdfs.DFSClient.mkdirs(DFSClient.java:2122)
        at org.apache.hadoop.hdfs.DistributedFileSystem.mkdirs(DistributedFileSystem.java:545)
        at org.apache.hadoop.fs.FileSystem.mkdirs(FileSystem.java:1915)
        at org.apache.hadoop.hbase.HBaseFileSystem.makeDirOnFileSystem(HBaseFileSystem.java:129)
        ... 6 more

The configuration and system setup are as follows:

[vagrant@localhost hadoop-hdfs]$ hadoop fs -ls hdfs://localhost/
Found 1 items
-rw-r--r--   3 vagrant supergroup 1010827264 2014-06-08 19:01 hdfs://localhost/ubuntu-14.04-desktop-amd64.iso
[vagrant@localhost hadoop-hdfs]$

/etc/hadoop/conf/core-site.xml

<configuration>
  <property>
    <name>fs.defaultFS</name>
    <value>hdfs://localhost:8020</value>
  </property>
</configuration>

/etc/hbase/conf/hbase-site.xml

<configuration>
  <property>
    <name>hbase.rootdir</name>
    <value>hdfs://localhost:8020/hbase</value>
  </property>
  <property>
    <name>hbase.cluster.distributed</name>
    <value>true</value>
  </property>
</configuration>

/etc/hadoop/conf/hdfs-site.xml

<configuration>
  <property>
    <name>dfs.name.dir</name>
    <value>/var/lib/hadoop-hdfs/cache</value>
  </property>
  <property>
    <name>dfs.data.dir</name>
    <value>/tmp/hellodatanode</value>
  </property>
</configuration>

NameNode directory permissions

[vagrant@localhost hadoop-hdfs]$ ls -ltr /var/lib/hadoop-hdfs/cache
total 8
-rwxrwxrwx. 1 hbase hdfs   15 Jun  8 23:43 in_use.lock
drwxrwxrwx. 2 hbase hdfs 4096 Jun  8 23:43 current
[vagrant@localhost hadoop-hdfs]$

If the fs.defaultFS property in core-site.xml is commented out, HMaster can start.

The NameNode is listening:

[vagrant@localhost hadoop-hdfs]$ netstat -nato | grep 50070
tcp        0      0 0.0.0.0:50070               0.0.0.0:*                   LISTEN      off (0.00/0/0)
tcp        0      0 33.33.33.33:50070           33.33.33.1:57493            ESTABLISHED off (0.00/0/0)

and can be reached by navigating to http://33.33.33.33:50070/dfshealth.jsp.

How can the makeDirOnFileSystem exception be resolved so that HBase can connect to HDFS?

Solution

All you need to know is in this line of the stack trace:

Caused by: org.apache.hadoop.security.AccessControlException: Permission denied:
user=hbase, access=WRITE, inode="/":vagrant:supergroup:drwxr-xr-x

The user hbase does not have permission to write to the HDFS root directory (/): it is owned by vagrant, and its mode allows only the owner to write to it.

Modify the permissions using hadoop fs -chmod.
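
For example, a minimal sketch, assuming the HDFS superuser account is named hdfs (typical for packaged installs; adjust to your setup) and that a wide-open root directory is acceptable on this single-node Vagrant box:

# Run as the HDFS superuser (assumed to be 'hdfs' here).
# 777 lets any user write to /; on a shared cluster prefer the /hbase approach below.
sudo -u hdfs hadoop fs -chmod 777 hdfs://localhost:8020/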

Edit:

You can also create the directory /hbase and make the hbase user its owner. That way you don't have to allow hbase to write to the root directory.
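
A sketch of that approach, again assuming an HDFS superuser account named hdfs and the hbase.rootdir of hdfs://localhost:8020/hbase configured above:

# Create the HBase root directory and make the hbase user its owner.
sudo -u hdfs hadoop fs -mkdir hdfs://localhost:8020/hbase
sudo -u hdfs hadoop fs -chown hbase hdfs://localhost:8020/hbase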
