Notes on a UTFDataFormatException



While running a Spark job, the following error appeared:

at org.apache.spark.rdd.RDD.foreachPartition(RDD.scala:933)
at org.apache.spark.sql.Dataset$$anonfun$foreachPartition.apply$mcV$sp(Dataset.scala:2736)
at org.apache.spark.sql.Dataset$$anonfun$foreachPartition.apply(Dataset.scala:2736)
at org.apache.spark.sql.Dataset$$anonfun$foreachPartition.apply(Dataset.scala:2736)
at org.apache.spark.sql.Dataset$$anonfun$withNewRDDExecutionId.apply(Dataset.scala:3350)
at org.apache.spark.sql.execution.SQLExecution$$anonfun$withNewExecutionId.apply(SQLExecution.scala:78)
at org.apache.spark.sql.execution.SQLExecution$.withSQLConfPropagated(SQLExecution.scala:125)
at org.apache.spark.sql.execution.SQLExecution$.withNewExecutionId(SQLExecution.scala:73)
Caused by: java.io.UTFDataFormatException: encoded string too long: 105049 bytes
at java.io.DataOutputStream.writeUTF(DataOutputStream.java:364)
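The root cause is a hard limit in `DataOutputStream.writeUTF`: it writes the encoded string's length as an unsigned 16-bit prefix, so any string whose modified-UTF-8 encoding exceeds 65535 bytes is rejected. A minimal reproduction (the 70000-character string here is just an illustrative value):

```java
import java.io.ByteArrayOutputStream;
import java.io.DataOutputStream;
import java.io.UTFDataFormatException;

public class WriteUtfLimit {
    public static void main(String[] args) throws Exception {
        // Build a string whose encoding (70000 bytes of ASCII) exceeds
        // writeUTF's 65535-byte limit.
        StringBuilder sb = new StringBuilder();
        for (int i = 0; i < 70000; i++) {
            sb.append('a');
        }
        DataOutputStream out = new DataOutputStream(new ByteArrayOutputStream());
        try {
            out.writeUTF(sb.toString());
        } catch (UTFDataFormatException e) {
            // Same failure mode as the Spark job, just with a different length.
            System.out.println(e.getMessage());
        }
    }
}
```

The 105049-byte string in the stack trace came from serializing the captured config object, which blew past the same limit.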
Investigation: the exception surfaced through Typesafe Config, so the first suspicion was an oversized configuration file, but the file itself turned out to be small. It then emerged that a Config object, created by reading a file, had been captured in the task closure and was being serialized along with it, producing the 105049-byte string. Marking that field with the @transient modifier excludes it from serialization and makes the error go away.
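The post's fix is Scala's @transient annotation; the underlying mechanism is the `transient` modifier of Java serialization, which Spark uses for closures. A minimal sketch (the `Task` class and its fields are hypothetical, not from the original job) showing that a transient field is simply skipped and comes back as `null` after deserialization:

```java
import java.io.ByteArrayInputStream;
import java.io.ByteArrayOutputStream;
import java.io.ObjectInputStream;
import java.io.ObjectOutputStream;
import java.io.Serializable;

public class TransientDemo {
    static class Task implements Serializable {
        // transient: never written to the stream, so however large the
        // config grows, it cannot trigger the writeUTF length limit.
        transient String config = "loaded-from-file";
        int id = 7;
    }

    public static void main(String[] args) throws Exception {
        ByteArrayOutputStream bytes = new ByteArrayOutputStream();
        try (ObjectOutputStream oos = new ObjectOutputStream(bytes)) {
            oos.writeObject(new Task());
        }
        try (ObjectInputStream ois = new ObjectInputStream(
                new ByteArrayInputStream(bytes.toByteArray()))) {
            Task copy = (Task) ois.readObject();
            System.out.println(copy.id);     // non-transient field survives
            System.out.println(copy.config); // transient field is null
        }
    }
}
```

On the executor side, a transient field must be re-initialized (e.g. lazily re-read the config file) since it arrives as `null`.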

Feel free to share; when reposting, please credit the source: 内存溢出

Original article: https://outofmemory.cn/zaji/5705609.html
