org.apache.spark.sql.kafka010.KafkaMicroBatchReader.createDataReaderFactories()Ljava/util/List;

When testing Structured Streaming's Kafka integration locally on Windows with IDEA, the following exception was thrown:

Exception in thread "stream execution thread for [id = 02953159-7c16-4aca-aa16-e2f40ed96488, runId = 539b97c0-2092-47a0-b5c1-8460383c5128]" java.lang.AbstractMethodError: org.apache.spark.sql.kafka010.KafkaMicroBatchReader.createDataReaderFactories()Ljava/util/List;
	at org.apache.spark.sql.execution.datasources.v2.DataSourceV2ScanExec.readerFactories$lzycompute(DataSourceV2ScanExec.scala:55)
	at org.apache.spark.sql.execution.datasources.v2.DataSourceV2ScanExec.readerFactories(DataSourceV2ScanExec.scala:52)
	at org.apache.spark.sql.execution.datasources.v2.DataSourceV2ScanExec.inputRDD$lzycompute(DataSourceV2ScanExec.scala:76)
	at org.apache.spark.sql.execution.datasources.v2.DataSourceV2ScanExec.inputRDD(DataSourceV2ScanExec.scala:60)
	at org.apache.spark.sql.execution.datasources.v2.DataSourceV2ScanExec.inputRDDs(DataSourceV2ScanExec.scala:79)
	at org.apache.spark.sql.execution.ProjectExec.inputRDDs(basicPhysicalOperators.scala:41)
	at org.apache.spark.sql.execution.WholeStageCodegenExec.doExecute(WholeStageCodegenExec.scala:622)
	at org.apache.spark.sql.execution.SparkPlan$$anonfun$execute.apply(SparkPlan.scala:131)
	at org.apache.spark.sql.execution.SparkPlan$$anonfun$execute.apply(SparkPlan.scala:127)
	at org.apache.spark.sql.execution.SparkPlan$$anonfun$executeQuery.apply(SparkPlan.scala:155)
	at org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:151)
	at org.apache.spark.sql.execution.SparkPlan.executeQuery(SparkPlan.scala:152)
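
For context, the failing job was an ordinary Structured Streaming read from Kafka. Below is a minimal sketch of that kind of job (not the original project's code; the broker address and topic name are hypothetical placeholders):

import org.apache.spark.sql.SparkSession

object KafkaStreamTest {
  def main(args: Array[String]): Unit = {
    val spark = SparkSession.builder()
      .master("local[*]")
      .appName("KafkaStreamTest")
      .getOrCreate()

    // The Kafka source goes through the DataSource V2 scan path; this is
    // where DataSourceV2ScanExec calls createDataReaderFactories() and the
    // AbstractMethodError surfaces once the stream actually starts.
    val kafkaDf = spark.readStream
      .format("kafka")
      .option("kafka.bootstrap.servers", "localhost:9092") // hypothetical broker
      .option("subscribe", "test-topic")                   // hypothetical topic
      .load()

    kafkaDf
      .selectExpr("CAST(key AS STRING)", "CAST(value AS STRING)")
      .writeStream
      .format("console")
      .start()
      .awaitTermination()
  }
}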

Cause: the spark-sql-kafka-0-10_2.11 jar that Maven pulled in most likely does not match the Spark SQL version. An AbstractMethodError means the caller was compiled against an interface method that the class on the runtime classpath never implements; here the Kafka connector was presumably built against a different DataSource V2 API revision in which createDataReaderFactories() no longer exists, so the Spark SQL 2.3.4 core cannot find that method on KafkaMicroBatchReader.
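
A quick sanity check (my own diagnostic sketch, not part of the original post) is to print the Spark version actually loaded at runtime and compare it with the spark-sql-kafka-0-10_2.11 version Maven resolved; running mvn dependency:tree also shows the resolved jar versions:

// Prints the Spark version on the runtime classpath. If this does not match
// the spark-sql-kafka-0-10_2.11 version in the pom, the two jars were
// compiled against different DataSource V2 interfaces.
println(org.apache.spark.SPARK_VERSION)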

Version correspondence:
Because Structured Streaming sits on top of Spark SQL, the Kafka connector's version must match the Spark SQL version exactly, and both artifacts must share the same Scala binary version suffix. The Spark SQL dependency in my project is:


<dependency>
    <groupId>org.apache.spark</groupId>
    <artifactId>spark-sql_2.11</artifactId>
    <version>2.3.4</version>
</dependency>

So the Maven dependency for spark-sql-kafka-0-10_2.11 should be changed to the same version:

   
<dependency>
    <groupId>org.apache.spark</groupId>
    <artifactId>spark-sql-kafka-0-10_2.11</artifactId>
    <version>2.3.4</version>
</dependency>
   

After reloading the Maven dependencies and rerunning the code, data came through normally.
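
To catch this kind of drift earlier next time, a fail-fast guard can run before the stream starts. This is only a sketch; the hard-coded expected version is an assumption that must be kept in step with the pom:

// Hypothetical startup guard: abort with a clear message instead of a
// late AbstractMethodError if the runtime Spark version differs from the
// version this job was built and tested against.
val expectedSparkVersion = "2.3.4" // assumption: mirrors <version> in the pom
require(
  org.apache.spark.SPARK_VERSION == expectedSparkVersion,
  s"Spark version mismatch: runtime ${org.apache.spark.SPARK_VERSION}, expected $expectedSparkVersion"
)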

Original article: http://outofmemory.cn/zaji/5677085.html
