HBase - Kerberos Authentication Exception

> Authentication failed no matter what we tried, and we hunted for the cause for a long time; the exception turned out not to match the actual operation at all. Treating it as a lost cause, we eventually fixed it by trial and error, so this post records the process for anyone who hits the same thing.

```
>>>KdcAccessibility: remove storm1.starsriver.cn
>>>KDCRep: init() encoding tag is 126 req type is 13
>>>KRBError:
	 cTime is Fri Aug 18 02:49:26 CST 2000 966538166000
	 sTime is Tue Jul 31 11:59:12 CST 2018 1533009552000
	 suSec is 97126
	 error code is 7
	 error Message is Server not found in Kerberos database
	 cname is hbase/lake.dounine.com@dounine.com
	 sname is hbase/120.77.207.19@dounine.com
	 msgType is 30
KrbException: Server not found in Kerberos database (7) - LOOKING_UP_SERVER
	at sun.security.krb5.KrbTgsRep.<init>(KrbTgsRep.java:73)
	at sun.security.krb5.KrbTgsReq.getReply(KrbTgsReq.java:251)
	at sun.security.krb5.KrbTgsReq.sendAndGetCreds(KrbTgsReq.java:262)
	at sun.security.krb5.internal.CredentialsUtil.serviceCreds(CredentialsUtil.java:308)
	at sun.security.krb5.internal.CredentialsUtil.acquireServiceCreds(CredentialsUtil.java:126)
	at sun.security.krb5.Credentials.acquireServiceCreds(Credentials.java:458)
	at sun.security.jgss.krb5.Krb5Context.initSecContext(Krb5Context.java:693)
	at sun.security.jgss.GSSContextImpl.initSecContext(GSSContextImpl.java:248)
	at sun.security.jgss.GSSContextImpl.initSecContext(GSSContextImpl.java:179)
	at com.sun.security.sasl.gsskerb.GssKrb5Client.evaluateChallenge(GssKrb5Client.java:192)
	at org.apache.hadoop.hbase.security.AbstractHBaseSaslRpcClient.getInitialResponse(AbstractHBaseSaslRpcClient.java:131)
	at org.apache.hadoop.hbase.security.NettyHBaseSaslRpcClientHandler$1.run(NettyHBaseSaslRpcClientHandler.java:108)
	at org.apache.hadoop.hbase.security.NettyHBaseSaslRpcClientHandler$1.run(NettyHBaseSaslRpcClientHandler.java:104)
	at java.security.AccessController.doPrivileged(Native Method)
	at javax.security.auth.Subject.doAs(Subject.java:422)
	at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1746)
	at org.apache.hadoop.hbase.security.NettyHBaseSaslRpcClientHandler.handlerAdded(NettyHBaseSaslRpcClientHandler.java:104)
	at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline.callHandlerAdded0(DefaultChannelPipeline.java:606)
	at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline.addFirst(DefaultChannelPipeline.java:187)
	at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline.addFirst(DefaultChannelPipeline.java:380)
	at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline.addFirst(DefaultChannelPipeline.java:359)
	at org.apache.hadoop.hbase.ipc.NettyRpcConnection.saslNegotiate(NettyRpcConnection.java:200)
	at org.apache.hadoop.hbase.ipc.NettyRpcConnection.access$800(NettyRpcConnection.java:71)
	at org.apache.hadoop.hbase.ipc.NettyRpcConnection$3.operationComplete(NettyRpcConnection.java:273)
	at org.apache.hadoop.hbase.ipc.NettyRpcConnection$3.operationComplete(NettyRpcConnection.java:261)
	at org.apache.hbase.thirdparty.io.netty.util.concurrent.DefaultPromise.notifyListener0(DefaultPromise.java:507)
	at org.apache.hbase.thirdparty.io.netty.util.concurrent.DefaultPromise.notifyListeners0(DefaultPromise.java:500)
	at org.apache.hbase.thirdparty.io.netty.util.concurrent.DefaultPromise.notifyListenersNow(DefaultPromise.java:479)
	at org.apache.hbase.thirdparty.io.netty.util.concurrent.DefaultPromise.notifyListeners(DefaultPromise.java:420)
	at org.apache.hbase.thirdparty.io.netty.util.concurrent.DefaultPromise.trySuccess(DefaultPromise.java:104)
	at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPromise.trySuccess(DefaultChannelPromise.java:82)
	at org.apache.hbase.thirdparty.io.netty.channel.nio.AbstractNioChannel$AbstractNioUnsafe.fulfillConnectPromise(AbstractNioChannel.java:306)
	at org.apache.hbase.thirdparty.io.netty.channel.nio.AbstractNioChannel$AbstractNioUnsafe.finishConnect(AbstractNioChannel.java:341)
	at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:633)
	at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:580)
	at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:497)
	at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:459)
	at org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$5.run(SingleThreadEventExecutor.java:858)
	at org.apache.hbase.thirdparty.io.netty.util.concurrent.DefaultThreadFactory$DefaultRunnableDecorator.run(DefaultThreadFactory.java:138)
	at java.lang.Thread.run(Thread.java:748)
Caused by: KrbException: Identifier doesn't match expected value (906)
	at sun.security.krb5.internal.KDCRep.init(KDCRep.java:140)
	at sun.security.krb5.internal.TGSRep.init(TGSRep.java:65)
	at sun.security.krb5.internal.TGSRep.<init>(TGSRep.java:60)
	at sun.security.krb5.KrbTgsRep.<init>(KrbTgsRep.java:55)
	... 39 more
```

Note the `sname` in the error: `hbase/120.77.207.19@dounine.com`. The client resolved the RegionServer to a bare IP address, so it asked the KDC for a service principal keyed by that IP, which does not exist in the Kerberos database. To fix this error, put the hostname mappings of the production servers into the client's `/etc/hosts`:

```
10.10.0.2 h1.demo.com
10.10.0.3 h2.demo.com
10.10.0.4 h3.demo.com
```
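After editing `/etc/hosts`, a quick sanity check (the hostname below is the placeholder from the mapping above) is to confirm that the client now resolves the server by name:

```
# Should print the address from /etc/hosts, e.g. 10.10.0.2  h1.demo.com
getent hosts h1.demo.com
```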

---


Enter the HBase shell console:

```
$HBASE_HOME/bin/hbase shell
```

If Kerberos is enabled, you must first authenticate with the corresponding keytab (using the `kinit` command) before starting the shell.
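A minimal sketch of the keytab login; the keytab path and principal here are placeholders, not values from this cluster:

```
# Obtain a ticket-granting ticket from a keytab (path and principal are examples)
kinit -kt /etc/security/keytabs/hbase.keytab hbase/h1.demo.com@DEMO.COM
# Confirm the ticket cache now holds a TGT
klist
```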

Once authenticated, start the HBase shell; the `whoami` command shows the current user:

```
hbase(main)> whoami
```

Table management

1) List tables

```
hbase(main)> list
```

2) Create a table

```
# Syntax: create <table>, {NAME => <family>, VERSIONS => <VERSIONS>}
# Example: create table t1 with two column families, f1 and f2, each keeping 2 versions
hbase(main)> create 't1', {NAME => 'f1', VERSIONS => 2}, {NAME => 'f2', VERSIONS => 2}
```

3) Drop a table

This takes two steps: first disable, then drop. For example, to drop table t1:

```
hbase(main)> disable 't1'
hbase(main)> drop 't1'
```

4) Describe a table

```
# Syntax: describe <table>
# Example: show the structure of table t1
hbase(main)> describe 't1'
```

5) Alter a table

A table must be disabled before its structure can be altered.

```
# Syntax: alter 't1', {NAME => 'f1'}, {NAME => 'f2', METHOD => 'delete'}
# Example: set the TTL of table test1's column families to 180 days (15552000 seconds)
hbase(main)> disable 'test1'
hbase(main)> alter 'test1', {NAME => 'body', TTL => '15552000'}, {NAME => 'meta', TTL => '15552000'}
hbase(main)> enable 'test1'
```
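The `METHOD => 'delete'` form shown in the syntax line removes a column family; a sketch with placeholder names:

```
hbase(main)> disable 't1'
# Drop column family f2 from table t1
hbase(main)> alter 't1', {NAME => 'f2', METHOD => 'delete'}
hbase(main)> enable 't1'
```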

Permission management

1) Grant permissions

```
# Syntax: grant <user>, <permissions>, <table>, <column family>, <column qualifier> (arguments separated by commas)
# Permissions are encoded by five letters: "RWXCA"
# READ('R'), WRITE('W'), EXEC('X'), CREATE('C'), ADMIN('A')
# Example: grant user 'test' read and write access to table t1
hbase(main)> grant 'test', 'RW', 't1'
```

2) View permissions

```
# Syntax: user_permission <table>
# Example: list the permissions on table t1
hbase(main)> user_permission 't1'
```

3) Revoke permissions

```
# Similar to granting; syntax: revoke <user>, <table>, <column family>, <column qualifier>
# Example: revoke user test's permissions on table t1
hbase(main)> revoke 'test', 't1'
```

Data manipulation

1) Insert data

```
# Syntax: put <table>, <rowkey>, <family:column>, <value>, <timestamp>
# Example: add a row to table t1 with rowkey rowkey001, family f1, column col1, value value01, and the default (system) timestamp
hbase(main)> put 't1', 'rowkey001', 'f1:col1', 'value01'
```

Its usage is fairly simple: one cell per call.

2) Query data

a) Get a row

```
# Syntax: get <table>, <rowkey>, [<family:column>, ...]
# Example: get the value of f1:col1 in row rowkey001 of table t1
hbase(main)> get 't1', 'rowkey001', 'f1:col1'
# Or:
hbase(main)> get 't1', 'rowkey001', {COLUMN => 'f1:col1'}
# Get all column values in row rowkey001 of table t1
hbase(main)> get 't1', 'rowkey001'
```
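Since t1 was created above with `VERSIONS => 2`, multiple versions of a cell can also be fetched; a sketch using the same placeholder names:

```
# Fetch up to 2 versions of f1:col1 in row rowkey001
hbase(main)> get 't1', 'rowkey001', {COLUMN => 'f1:col1', VERSIONS => 2}
```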

b) Scan a table

```
# Syntax: scan <table>, {COLUMNS => [<family:column>, ...], LIMIT => num}
# Advanced options such as STARTROW, TIMERANGE and FILTER can also be added (see the sketch below)
# Example: scan the first 5 rows of table t1
hbase(main)> scan 't1', {LIMIT => 5}
```
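A sketch of a scan using those advanced options; the table name, start row, and filter value are placeholders:

```
# Scan t1 starting from rowkey001, keeping only cells whose value is exactly value01
hbase(main)> scan 't1', {STARTROW => 'rowkey001', FILTER => "ValueFilter(=, 'binary:value01')"}
```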

c) Count the rows in a table

```
# Syntax: count <table>, {INTERVAL => intervalNum, CACHE => cacheNum}
# INTERVAL sets how many rows between progress reports (with the corresponding rowkey), default 1000
# CACHE is the scanner cache size per fetch, default 10; raising it can speed up the count
# Example: count the rows of table t1, reporting every 100 rows, with a cache of 500
hbase(main)> count 't1', {INTERVAL => 100, CACHE => 500}
```

3) Delete data

a) Delete a column value in a row

```
# Syntax: delete <table>, <rowkey>, <family:column>, <timestamp>; the column name must be specified
# Example: delete f1:col1 in row rowkey001 of table t1
hbase(main)> delete 't1', 'rowkey001', 'f1:col1'
```

Note: this deletes all versions of the f1:col1 column in that row.
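The optional `<timestamp>` argument in the syntax line limits the delete to versions written at or before that timestamp; a sketch with a placeholder timestamp:

```
# Delete only the versions of f1:col1 written at or before this timestamp
hbase(main)> delete 't1', 'rowkey001', 'f1:col1', 1533009552000
```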

b) Delete a row

```
# Syntax: deleteall <table>, <rowkey>, <family:column>, <timestamp>; the column name may be omitted to delete the whole row
# Example: delete row rowkey001 of table t1
hbase(main)> deleteall 't1', 'rowkey001'
```

c) Delete all data in a table

```
# Syntax: truncate <table>
# Internally this runs: disable table -> drop table -> create table
# Example: delete all data in table t1
hbase(main)> truncate 't1'
```

Region management

1) Move a region

```
# Syntax: move 'encodeRegionName', 'ServerName'
# encodeRegionName is the encoded suffix at the end of the region name; ServerName is taken from the Region Servers list on the master status page
# Example
hbase(main)> move '4343995a58be8e5bbc739af1e91cd72d', 'db-41.xxx.xxx.org,60020,1390274516739'
```

2) Enable/disable the region balancer

```
# Syntax: balance_switch true|false
hbase(main)> balance_switch true
```

3) Manually split a region

```
# Syntax: split 'regionName', 'splitKey'
```
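A sketch of a manual split; the table name and split key are placeholders:

```
# Split table t1 at row key rowkey500
hbase(main)> split 't1', 'rowkey500'
```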

4) Manually trigger a major compaction

```
# Compact all regions in a table:
hbase(main)> major_compact 't1'
# Compact an entire region:
hbase(main)> major_compact 'r1'
# Compact a single column family within a region:
hbase(main)> major_compact 'r1', 'c1'
# Compact a single column family within a table:
hbase(main)> major_compact 't1', 'c1'
```

Configuration management and node restarts

1) Update the hdfs configuration

hdfs config location: /etc/hadoop/conf

```
# Sync the hdfs config to all slaves
cat /home/hadoop/slaves|xargs -i -t scp /etc/hadoop/conf/hdfs-site.xml hadoop@{}:/etc/hadoop/conf/hdfs-site.xml
# Stop the datanodes:
cat /home/hadoop/slaves|xargs -i -t ssh hadoop@{} "sudo /home/hadoop/cdh4/hadoop-2.0.0-cdh4.2.1/sbin/hadoop-daemon.sh --config /etc/hadoop/conf stop datanode"
# Start the datanodes:
cat /home/hadoop/slaves|xargs -i -t ssh hadoop@{} "sudo /home/hadoop/cdh4/hadoop-2.0.0-cdh4.2.1/sbin/hadoop-daemon.sh --config /etc/hadoop/conf start datanode"
```

2) Update the hbase configuration

hbase config location: /home/hadoop/hbase/conf

```
# Sync the hbase config to all region servers
cat /home/hadoop/hbase/conf/regionservers|xargs -i -t scp /home/hadoop/hbase/conf/hbase-site.xml hadoop@{}:/home/hadoop/hbase/conf/hbase-site.xml
# Graceful restart of a region server
cd ~/hbase
bin/graceful_stop.sh --restart --reload --debug inspurXXX.xxx.xxx.org
```

