1. Download the client from the cluster
Download the FusionInsight client from FusionInsight Manager: Service Management -> Download Client -> Complete Client, then upload the package to the server and extract it.
Note: generating and downloading the client both take a fairly long time.
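If the client package was downloaded to a workstation first, a minimal sketch for getting it onto the server (the user and host names below are placeholders):
```shell
# Copy the client package to the target server (user/host are placeholders)
scp FusionInsight_Services_Client.tar user@your-server:~/
```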
2. Create the installation directory
$ sudo mkdir /opt/third
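A slightly more defensive variant of the same step, assuming you may want to run the installer as a non-root user later: `-p` tolerates an existing directory, and `chown` hands it to the current user.
```shell
sudo mkdir -p /opt/third            # -p: no error if the directory already exists
sudo chown "$(whoami)" /opt/third   # optional: let the current user write to it
```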
3. Install the client
3.1. Extract the installation package
$ tar -xvf FusionInsight_Services_Client.tar
3.2. Change into the extracted directory and run:
$ ./install.sh /opt/third/client
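Steps 3.1 and 3.2 combined into one runnable sketch; the name of the extracted directory varies between FusionInsight versions, so check it with ls after extracting:
```shell
tar -xvf FusionInsight_Services_Client.tar
cd FusionInsight_Services_Client    # extracted directory name may differ per version
./install.sh /opt/third/client      # install the client into /opt/third/client
```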
4. Set the environment variables
$ source /opt/third/client/bigdata_env
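`source` only affects the current shell session. To load the client environment automatically in every new login shell, one common approach (a sketch, assuming bash) is:
```shell
# Append the env setup to the shell profile, then reload it for this session
echo 'source /opt/third/client/bigdata_env' >> ~/.bashrc
source ~/.bashrc
```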
5. Verify
5.1. Run:
$ spark-submit
Expected output:
```shell
Usage: spark-submit [options] <app jar | python file> [app arguments]
Usage: spark-submit --kill [submission ID] --master [spark://...]
Usage: spark-submit --status [submission ID] --master [spark://...]
Usage: spark-submit run-example [options] example-class [example args]

Options:
  --master MASTER_URL         spark://host:port, mesos://host:port, yarn,
                              k8s://https://host:port, or local (Default: local[*]).
  --deploy-mode DEPLOY_MODE   Whether to launch the driver program locally ("client") or
                              on one of the worker machines inside the cluster ("cluster")
                              (Default: client).
  --class CLASS_NAME          Your application's main class (for Java / Scala apps).
  --name NAME                 A name of your application.
  --jars JARS                 Comma-separated list of jars to include on the driver
                              and executor classpaths.
  --packages                  Comma-separated list of maven coordinates of jars to include
                              on the driver and executor classpaths. Will search the local
                              maven repo, then maven central and any additional remote
                              repositories given by --repositories. The format for the
                              coordinates should be groupId:artifactId:version.
  --exclude-packages          Comma-separated list of groupId:artifactId, to exclude while
                              resolving the dependencies provided in --packages to avoid
                              dependency conflicts.
  --repositories              Comma-separated list of additional remote repositories to
                              search for the maven coordinates given with --packages.
  --py-files PY_FILES         Comma-separated list of .zip, .egg, or .py files to place
                              on the PYTHONPATH for Python apps.
  --files FILES               Comma-separated list of files to be placed in the working
                              directory of each executor. File paths of these files
                              in executors can be accessed via SparkFiles.get(fileName).
  --conf PROP=VALUE           Arbitrary Spark configuration property.
  --properties-file FILE      Path to a file from which to load extra properties. If not
                              specified, this will look for conf/spark-defaults.conf.
```
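Printing the usage text only proves the binary is on the PATH. For an end-to-end check you can submit the SparkPi example that ships with Spark; the jar path and version glob below are assumptions and depend on the client layout (if Kerberos is enabled, run kinit first, see 5.2):
```shell
# Submit the bundled SparkPi example to YARN (jar path/version are assumptions)
spark-submit \
  --class org.apache.spark.examples.SparkPi \
  --master yarn \
  --deploy-mode client \
  /opt/third/client/Spark/spark/examples/jars/spark-examples_*.jar \
  10
```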
5.2. If the cluster has Kerberos authentication enabled, download the matching keytab file from the cluster to the server.
Run:
$ kinit -kt your_keytab your_principal
For example: kinit -kt /opt/admin.keytab admin
$ klist    (check the current ticket)
Verify (any listing output means success):
$ hdfs dfs -ls /user
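Kerberos tickets expire after the ticket lifetime, so long-running hosts usually re-run kinit on a schedule. A minimal sketch using the example keytab from above:
```shell
# Authenticate from the keytab, confirm the ticket, then test HDFS access
kinit -kt /opt/admin.keytab admin
klist                 # should show a valid ticket for admin@<REALM>
hdfs dfs -ls /user    # any listing output means authentication works

# Optional: refresh the ticket every 12 hours via cron
# 0 */12 * * * kinit -kt /opt/admin.keytab admin
```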
6. Common errors and fixes
6.1. Running sudo ./install.sh /opt/third/client fails with:
```shell
[19-07-15 16:06:58]: Pre-install check begin...
[19-07-15 16:06:58]: Checking necessary files and directory.
[19-07-15 16:06:58]: Checking NTP service status.
[19-07-15 16:06:58]: Error: Network time protocol(NTP) not running. Please start NTP first.
```
Fix: start the NTP service.
$ sudo systemctl start ntpd
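Starting the daemon satisfies the installer's check, but NTP should also survive a reboot. A sketch assuming a systemd host running ntpd (newer distributions may use chronyd instead):
```shell
sudo systemctl enable --now ntpd   # start ntpd now and on every boot
ntpq -p                            # confirm the daemon is syncing with its peers
```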