- Modify the YARN configuration so it supports Spark's shuffle service: edit yarn-site.xml on every node
yarn.nodemanager.aux-services = mapreduce_shuffle,spark_shuffle
yarn.nodemanager.aux-services.spark_shuffle.class = org.apache.spark.network.yarn.YarnShuffleService
spark.shuffle.service.port = 7337
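In yarn-site.xml form, the three properties above would look like this (values taken directly from the list above):

```xml
<!-- yarn-site.xml: register Spark's external shuffle service alongside MapReduce's -->
<property>
  <name>yarn.nodemanager.aux-services</name>
  <value>mapreduce_shuffle,spark_shuffle</value>
</property>
<property>
  <name>yarn.nodemanager.aux-services.spark_shuffle.class</name>
  <value>org.apache.spark.network.yarn.YarnShuffleService</value>
</property>
<property>
  <name>spark.shuffle.service.port</name>
  <value>7337</value>
</property>
```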
- Add the jar: copy ${SPARK_HOME}/lib/spark-<Spark version>-yarn-shuffle.jar to ${HADOOP_HOME}/share/hadoop/yarn/lib/ on every NodeManager, then restart all nodes whose configuration was changed
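A rough sketch of distributing the jar. The node names and HADOOP_HOME path are assumptions for illustration; note that Spark 2.x/3.x tarballs ship the jar under $SPARK_HOME/yarn/ rather than lib/ (the lib/ path dates from Spark 1.x). The loop only prints the commands, so you can review them before piping to sh:

```shell
# Hedged sketch: print the copy command for each NodeManager.
# node1..node3 and HADOOP_HOME are placeholders; adjust for your cluster.
SPARK_HOME=/usr/local/apps/spark-3.0.1
HADOOP_HOME=/usr/local/apps/hadoop
# Spark 2.x/3.x distributions place the shuffle jar under yarn/, not lib/
JAR="${SPARK_HOME}/yarn/spark-3.0.1-yarn-shuffle.jar"
for node in node1 node2 node3; do
  echo scp "$JAR" "$node:${HADOOP_HOME}/share/hadoop/yarn/lib/"
done
```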
- Spark configuration
spark.shuffle.service.enabled true    # enable the external shuffle service (can also be set at submit time)
spark.dynamicAllocation.enabled true    # enable dynamic resource allocation (can also be set at submit time)
spark.shuffle.service.port 7337    # shuffle service port; must match the value in yarn-site.xml
spark.dynamicAllocation.initialExecutors 2    # initial number of executors (default 3)
spark.dynamicAllocation.minExecutors 1    # minimum executors per application (default 0)
spark.dynamicAllocation.maxExecutors 30    # maximum concurrent executors per application
spark.dynamicAllocation.schedulerBacklogTimeout 1s    # how long tasks may sit pending before executors are requested
spark.dynamicAllocation.sustainedSchedulerBacklogTimeout 5s    # interval between further requests while the backlog persists
spark.dynamicAllocation.executorIdleTimeout 60    # idle time after which an executor is released

2. Job submission examples
# Submit a job with dynamic allocation
SPARK_HOME=/usr/local/apps/spark-3.0.1
${SPARK_HOME}/bin/spark-submit \
  --master yarn \
  --deploy-mode cluster \
  --name "spark-pi" \
  --driver-memory 2g \
  --executor-cores 2 \
  --executor-memory 1g \
  --conf spark.shuffle.service.enabled=true \
  --conf spark.dynamicAllocation.enabled=true \
  --conf spark.dynamicAllocation.sustainedSchedulerBacklogTimeout=1 \
  --conf spark.dynamicAllocation.executorIdleTimeout=60 \
  --class org.apache.spark.examples.SparkPi \
  ${SPARK_HOME}/examples/jars/spark-examples_2.12-3.0.1.jar 2000
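When tasks stay backlogged past schedulerBacklogTimeout, Spark requests executors in rounds every sustainedSchedulerBacklogTimeout, and the number added per round grows exponentially until maxExecutors is reached. A rough shell sketch of that ramp-up (illustrative arithmetic only, not Spark's actual scheduler code; the starting values match the configuration above):

```shell
# Illustrative ramp-up: start at initialExecutors, add 1, 2, 4, ... per round,
# capping the total at maxExecutors.
max=30     # spark.dynamicAllocation.maxExecutors
total=2    # spark.dynamicAllocation.initialExecutors
add=1
while [ "$total" -lt "$max" ]; do
  total=$((total + add))
  [ "$total" -gt "$max" ] && total=$max
  echo "requested: $total executors"
  add=$((add * 2))
done
```

With these numbers the requested total grows 3, 5, 9, 17, 30, which is why a persistent backlog reaches maxExecutors in only a few rounds.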
/usr/local/apps/spark-3.0.1/sbin/start-thriftserver.sh \
  --master yarn \
  --deploy-mode client \
  --driver-memory 6g \
  --driver-cores 4 \
  --executor-cores 2 \
  --executor-memory 4g \
  --conf spark.shuffle.service.enabled=true \
  --conf spark.dynamicAllocation.enabled=true \
  --conf spark.dynamicAllocation.minExecutors=9 \
  --conf spark.dynamicAllocation.maxExecutors=30 \
  --conf spark.dynamicAllocation.sustainedSchedulerBacklogTimeout=1 \
  --conf spark.dynamicAllocation.executorIdleTimeout=60 \
  --hiveconf hive.server2.thrift.port=10001 \
  --hiveconf hive.server2.thrift.bind.host=server3
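Once the Thrift server is up, clients connect over JDBC; the URL below is derived from the --hiveconf host and port values in the command above (the snippet only prints the connect command, since running beeline requires a live cluster):

```shell
# Build the HiveServer2 JDBC URL from the host/port configured above.
JDBC_URL="jdbc:hive2://server3:10001"
echo "beeline -u ${JDBC_URL}"
# On a live cluster you would run, for example:
#   beeline -u jdbc:hive2://server3:10001 -e "SHOW DATABASES;"
```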