To connect DBeaver to Spark SQL, the following preparation is needed:
- 1. Create hive-site.xml in Spark's conf directory (the full file is attached at the end of this article).
Spark's Thrift Server grew out of Hive's Thrift Server, so many of its configuration names contain the hive keyword.
Note the last property in the file: it sets a username and password, which clients will use when connecting to Spark SQL.
- 2. Start the Spark Thrift Server:
./sbin/start-thriftserver.sh
- 3. In DBeaver, connect to Spark SQL using the Hive driver.
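The JDBC URL that DBeaver (or any Hive-driver client) uses is derived from two properties in hive-site.xml. A minimal sketch, using the host and port values from the appendix config of this article (the `jdbc_url` helper is illustrative, not part of any library):

```python
import xml.etree.ElementTree as ET

# Fragment of hive-site.xml holding the two properties a JDBC client needs.
# Host and port are the values from the appendix config below.
HIVE_SITE = """
<configuration>
  <property><name>hive.server2.thrift.bind.host</name><value>10.9.2.105</value></property>
  <property><name>hive.server2.thrift.port</name><value>27001</value></property>
</configuration>
"""

def jdbc_url(hive_site_xml: str, database: str = "default") -> str:
    """Build the jdbc:hive2:// URL that the Hive JDBC driver expects."""
    props = {}
    for prop in ET.fromstring(hive_site_xml).iter("property"):
        props[prop.findtext("name")] = prop.findtext("value")
    host = props["hive.server2.thrift.bind.host"]
    port = props["hive.server2.thrift.port"]
    return f"jdbc:hive2://{host}:{port}/{database}"

print(jdbc_url(HIVE_SITE))  # jdbc:hive2://10.9.2.105:27001/default
```

In DBeaver you would paste this URL into the Hive connection dialog, together with the username and password set by the last property of the file.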
Appendix: configuration file hive-site.xml
```xml
<configuration>
  <property><name>alluxio.zookeeper.address</name><value>10.49.2.146:2181,10.49.2.215:2181,10.49.0.10:2181</value></property>
  <property><name>alluxio.zookeeper.enabled</name><value>true</value></property>
  <property><name>datanucleus.schema.autoCreateTables</name><value>true</value></property>
  <property><name>hbase.zookeeper.quorum</name><value>10.19.2.146:2181,10.41.2.215:2181,10.19.0.10:2181</value></property>
  <property><name>hive.downloaded.resources.dir</name><value>/data/emr/hive/tmp/${hive.session.id}_resources</value></property>
  <property><name>hive.exec.local.scratchdir</name><value>/data/emr/hive/tmp</value></property>
  <property><name>hive.hwi.listen.host</name><value>0.0.0.0</value></property>
  <property><name>hive.hwi.listen.port</name><value>7002</value></property>
  <property><name>hive.llap.daemon.output.service.port</name><value>7009</value></property>
  <property><name>hive.llap.daemon.rpc.port</name><value>7007</value></property>
  <property><name>hive.llap.daemon.web.port</name><value>7008</value></property>
  <property><name>hive.llap.daemon.yarn.shuffle.port</name><value>7006</value></property>
  <property><name>hive.llap.management.rpc.port</name><value>7005</value></property>
  <property><name>hive.metastore.db.encoding</name><value>UTF-8</value></property>
  <property><name>hive.metastore.metrics.enabled</name><value>false</value></property>
  <property><name>hive.metastore.port</name><value>7004</value></property>
  <property><name>hive.metastore.schema.verification</name><value>false</value></property>
  <property><name>hive.metastore.schema.verification.record.version</name><value>false</value></property>
  <property><name>hive.metastore.uris</name><value>thrift://10.49.2.15:17004</value></property>
  <property><name>hive.metastore.warehouse.dir</name><value>/usr/hive/warehouse</value></property>
  <property><name>hive.querylog.location</name><value>/data/emr/hive/tmp</value></property>
  <property><name>hive.server2.logging.operation.log.location</name><value>/data/emr/hive/tmp/operation_logs</value></property>
  <property><name>hive.server2.metrics.enabled</name><value>true</value></property>
  <property><name>hive.server2.support.dynamic.service.discovery</name><value>true</value></property>
  <property><name>hive.server2.thrift.bind.host</name><value>10.9.2.105</value></property>
  <property><name>hive.server2.thrift.http.port</name><value>27050</value></property>
  <property><name>hive.server2.thrift.port</name><value>27001</value></property>
  <property><name>hive.server2.webui.host</name><value>0.0.0.0</value></property>
  <property><name>hive.server2.webui.port</name><value>27003</value></property>
  <property><name>hive.server2.zookeeper.namespace</name><value>hiveserver2</value></property>
  <property><name>hive.stats.autogather</name><value>false</value></property>
  <property><name>hive.tez.container.size</name><value>1024</value></property>
  <property><name>hive.zookeeper.client.port</name><value>2181</value></property>
  <property><name>hive.zookeeper.quorum</name><value>10.49.2.46:2181</value></property>
  <property><name>javax.jdo.option.ConnectionDriverName</name><value>com.mysql.jdbc.Driver</value></property>
  <property><name>javax.jdo.option.ConnectionPassword</name><value>rxxxxxxxxxxx#B</value></property>
  <property><name>javax.jdo.option.ConnectionURL</name><value>jdbc:mysql://10.49.1.57:3306/hivemetastore?useSSL=false&amp;createDatabaseIfNotExist=true&amp;characterEncoding=UTF-8</value></property>
  <property><name>javax.jdo.option.ConnectionUserName</name><value>root</value></property>
  <property><name>spark.driver.extraClassPath</name><value>/usr/local/service/alluxio/client/alluxio-2.5.0-client.jar</value></property>
  <property><name>spark.executor.extraClassPath</name><value>/usr/local/service/alluxio/client/alluxio-2.5.0-client.jar</value></property>
  <property><name>spark.yarn.jars</name><value>hdfs:///spark/jars/*</value></property>
  <property><name>io.compression.codec.lzo.class</name><value>com.hadoop.compression.lzo.LzoCodec</value></property>
  <property><name>io.compression.codecs</name><value>org.apache.hadoop.io.compress.DefaultCodec,org.apache.hadoop.io.compress.GzipCodec,com.hadoop.compression.lzo.LzoCodec,com.hadoop.compression.lzo.LzopCodec,org.apache.hadoop.io.compress.SnappyCodec</value></property>
  <property><name>org.spark.auth.test</name><value>123456</value></property>
</configuration>
```
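Every entry in the file above follows the same `<property><name>…</name><value>…</value></property>` shape, so entries can also be generated programmatically. A small stdlib-only sketch (the `make_property` helper is just an illustration):

```python
import xml.etree.ElementTree as ET

def make_property(name: str, value: str) -> ET.Element:
    """Build one <property> element in the hive-site.xml name/value shape."""
    prop = ET.Element("property")
    ET.SubElement(prop, "name").text = name
    ET.SubElement(prop, "value").text = value
    return prop

# Example: the HiveServer2 thrift port entry from the file above.
elem = make_property("hive.server2.thrift.port", "27001")
print(ET.tostring(elem, encoding="unicode"))
# <property><name>hive.server2.thrift.port</name><value>27001</value></property>
```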