pyspark TypeError: 'JavaPackage' object is not callable


Problem: the pyspark shell fails during initialization with TypeError: 'JavaPackage' object is not callable. Full session:
Python 3.7.10 (default, Jun  4 2021, 14:48:32)
[GCC 7.5.0] :: Anaconda, Inc. on linux
Type "help", "copyright", "credits" or "license" for more information.
Warning: Ignoring non-spark config property: history.server.spnego.keytab.file=/etc/security/keytabs/spnego.service.keytab
Warning: Ignoring non-spark config property: history.server.spnego.kerberos.principal=HTTP/[email protected]
Setting default log level to "WARN".
To adjust logging level use sc.setLogLevel(newLevel). For SparkR, use setLogLevel(newLevel).
2021-12-29 16:24:26 WARN  Client:66 - Neither spark.yarn.jars nor spark.yarn.archive is set, falling back to uploading libraries under SPARK_HOME.
2021-12-29 16:24:33 ERROR SparkContext:91 - Error initializing SparkContext.
org.apache.spark.SparkException: Yarn application has already ended! It might have been killed or unable to launch application master.
        at org.apache.spark.scheduler.cluster.YarnClientSchedulerBackend.waitForApplication(YarnClientSchedulerBackend.scala:89)
        at org.apache.spark.scheduler.cluster.YarnClientSchedulerBackend.start(YarnClientSchedulerBackend.scala:63)
        at org.apache.spark.scheduler.TaskSchedulerImpl.start(TaskSchedulerImpl.scala:164)
        at org.apache.spark.SparkContext.<init>(SparkContext.scala:500)
        at org.apache.spark.api.java.JavaSparkContext.<init>(JavaSparkContext.scala:58)
        at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
        at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62)
        at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
        at java.lang.reflect.Constructor.newInstance(Constructor.java:423)
        at py4j.reflection.MethodInvoker.invoke(MethodInvoker.java:247)
        at py4j.reflection.ReflectionEngine.invoke(ReflectionEngine.java:357)
        at py4j.Gateway.invoke(Gateway.java:238)
        at py4j.commands.ConstructorCommand.invokeConstructor(ConstructorCommand.java:80)
        at py4j.commands.ConstructorCommand.execute(ConstructorCommand.java:69)
        at py4j.GatewayConnection.run(GatewayConnection.java:238)
        at java.lang.Thread.run(Thread.java:748)
2021-12-29 16:24:33 WARN  YarnSchedulerBackend$YarnSchedulerEndpoint:66 - Attempted to request executors before the AM has registered!
2021-12-29 16:24:33 WARN  MetricsSystem:66 - Stopping a MetricsSystem that is not running
Traceback (most recent call last):
  File "/home/kehujingyingbu/spark-2.3.2-bin-without-hadoop/python/pyspark/shell.py", line 44, in 
    SparkContext._jvm.org.apache.hadoop.hive.conf.HiveConf()
TypeError: 'JavaPackage' object is not callable

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "/home/kehujingyingbu/spark-2.3.2-bin-without-hadoop/python/pyspark/shell.py", line 59, in 
    spark = SparkSession.builder.getOrCreate()
  File "/home/kehujingyingbu/spark-2.3.2-bin-without-hadoop/python/pyspark/sql/session.py", line 173, in getOrCreate
    sc = SparkContext.getOrCreate(sparkConf)
  File "/home/kehujingyingbu/spark-2.3.2-bin-without-hadoop/python/pyspark/context.py", line 351, in getOrCreate
    SparkContext(conf=conf or SparkConf())
  File "/home/kehujingyingbu/spark-2.3.2-bin-without-hadoop/python/pyspark/context.py", line 118, in __init__
    conf, jsc, profiler_cls)
  File "/home/kehujingyingbu/spark-2.3.2-bin-without-hadoop/python/pyspark/context.py", line 180, in _do_init
    self._jsc = jsc or self._initialize_context(self._conf._jconf)
  File "/home/kehujingyingbu/spark-2.3.2-bin-without-hadoop/python/pyspark/context.py", line 290, in _initialize_context
    return self._jvm.JavaSparkContext(jconf)
  File "/home/kehujingyingbu/spark-2.3.2-bin-without-hadoop/python/lib/py4j-0.10.7-src.zip/py4j/java_gateway.py", line 1525, in __call__
  File "/home/kehujingyingbu/spark-2.3.2-bin-without-hadoop/python/lib/py4j-0.10.7-src.zip/py4j/protocol.py", line 328, in get_return_value
py4j.protocol.Py4JJavaError: An error occurred while calling None.org.apache.spark.api.java.JavaSparkContext.
: org.apache.spark.SparkException: Yarn application has already ended! It might have been killed or unable to launch application master.
        at org.apache.spark.scheduler.cluster.YarnClientSchedulerBackend.waitForApplication(YarnClientSchedulerBackend.scala:89)
        at org.apache.spark.scheduler.cluster.YarnClientSchedulerBackend.start(YarnClientSchedulerBackend.scala:63)
        at org.apache.spark.scheduler.TaskSchedulerImpl.start(TaskSchedulerImpl.scala:164)
        at org.apache.spark.SparkContext.<init>(SparkContext.scala:500)
        at org.apache.spark.api.java.JavaSparkContext.<init>(JavaSparkContext.scala:58)
        at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
        at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62)
        at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
        at java.lang.reflect.Constructor.newInstance(Constructor.java:423)
        at py4j.reflection.MethodInvoker.invoke(MethodInvoker.java:247)
        at py4j.reflection.ReflectionEngine.invoke(ReflectionEngine.java:357)
        at py4j.Gateway.invoke(Gateway.java:238)
        at py4j.commands.ConstructorCommand.invokeConstructor(ConstructorCommand.java:80)
        at py4j.commands.ConstructorCommand.execute(ConstructorCommand.java:69)
        at py4j.GatewayConnection.run(GatewayConnection.java:238)
        at java.lang.Thread.run(Thread.java:748)

Cause

TypeError: 'JavaPackage' object is not callable

This message means the JVM class behind the call could not be found, i.e. a jar is missing from the classpath.

File "/home/kehujingyingbu/spark-2.3.2-bin-without-hadoop/python/pyspark/shell.py", line 44, in 
vim /home/kehujingyingbu/spark-2.3.2-bin-without-hadoop/python/pyspark/shell.py
set number
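For reference, the relevant block of shell.py in Spark 2.3.x looks roughly like this (a sketch reconstructed from the 2.3 branch; exact lines and details may differ slightly):

SparkContext._ensure_initialized()
try:
    # Probe for Hive: this raises if the Hive classes are not on the JVM classpath
    conf = SparkConf()
    if conf.get('spark.sql.catalogImplementation', 'hive').lower() == 'hive':
        SparkContext._jvm.org.apache.hadoop.hive.conf.HiveConf()   # <- line 44
        spark = SparkSession.builder.enableHiveSupport().getOrCreate()
    else:
        spark = SparkSession.builder.getOrCreate()
except (py4j.protocol.Py4JError, TypeError):
    spark = SparkSession.builder.getOrCreate()                     # <- fallback, line 59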


Line 44 probes for org.apache.hadoop.hive.conf.HiveConf, so the Hive-related jars are clearly missing.
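The mechanism behind the message: py4j resolves a dotted name like sc._jvm.org.apache.hadoop.hive.conf.HiveConf to a JavaClass when the class is on the JVM classpath, and to a generic JavaPackage placeholder when it is not; calling the placeholder raises exactly this TypeError. A minimal probe (assumes a running pyspark session with sc defined):

probe = sc._jvm.org.apache.hadoop.hive.conf.HiveConf
print(type(probe))
# <class 'py4j.java_gateway.JavaClass'>   -> hive jars present, probe() works
# <class 'py4j.java_gateway.JavaPackage'> -> jars missing; probe() raises
#    TypeError: 'JavaPackage' object is not callable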

Solution

Copy the Hive-related jars into /home/kehujingyingbu/spark-2.3.2-bin-without-hadoop/jars/.
My cluster runs HDP, so I simply replaced the contents of /home/kehujingyingbu/spark-2.3.2-bin-without-hadoop/jars/ with the jars shipped under the HDP installation.
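To double-check that the copy worked, list the Hive jars now present in the jars directory; a quick Python sketch (the hive-* filename pattern is an assumption about how your distribution names them):

import glob, os

spark_home = "/home/kehujingyingbu/spark-2.3.2-bin-without-hadoop"
hive_jars = glob.glob(os.path.join(spark_home, "jars", "hive-*.jar"))
print("\n".join(hive_jars) if hive_jars else "no hive jars found - copy them in first")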

Note

When replacing them with the HDP jars, you also need to configure this in spark-env.sh:
export HDP_VERSION=3.1.0.0-78
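HDP_VERSION matters because HDP-built jars and configs reference the ${hdp.version} placeholder; if it is unset, YARN containers can fail to launch, which shows up as the "Yarn application has already ended" error above. On some HDP clusters the same version also has to be passed as a JVM option in spark-defaults.conf; a sketch, assuming the same version string:

spark.driver.extraJavaOptions   -Dhdp.version=3.1.0.0-78
spark.yarn.am.extraJavaOptions  -Dhdp.version=3.1.0.0-78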



With the Hive jars in place and HDP_VERSION set, the pyspark shell starts normally. Problem solved.
