22/01/06 22:10:05 INFO BlockManager: Using org.apache.spark.storage.RandomBlockReplicationPolicy for block replication policy
22/01/06 22:10:05 INFO BlockManagerMaster: Registering BlockManager BlockManagerId(driver, linux121, 34370, None)
22/01/06 22:10:05 INFO BlockManagerMasterEndpoint: Registering block manager linux121:34370 with 93.3 MB RAM, BlockManagerId(driver, linux121, 34370, None)
22/01/06 22:10:05 INFO BlockManagerMaster: Registered BlockManager BlockManagerId(driver, linux121, 34370, None)
22/01/06 22:10:05 INFO BlockManager: Initialized BlockManager: BlockManagerId(driver, linux121, 34370, None)
22/01/06 22:10:05 INFO JettyUtils: Adding filter org.apache.hadoop.yarn.server.webproxy.amfilter.AmIpFilter to /metrics/json.
22/01/06 22:10:06 INFO EventLoggingListener: Logging events to hdfs://linux121:9000/spark-eventlog/application_1641476901097_0004.lz4
22/01/06 22:10:08 INFO YarnSchedulerBackend$YarnDriverEndpoint: Registered executor NettyRpcEndpointRef(spark-client://Executor) (192.168.80.121:42626) with ID 1
22/01/06 22:10:09 INFO BlockManagerMasterEndpoint: Registering block manager linux121:42393 with 366.3 MB RAM, BlockManagerId(1, linux121, 42393, None)
22/01/06 22:10:09 INFO YarnSchedulerBackend$YarnDriverEndpoint: Registered executor NettyRpcEndpointRef(spark-client://Executor) (192.168.80.123:60596) with ID 2
22/01/06 22:10:09 INFO YarnClientSchedulerBackend: SchedulerBackend is ready for scheduling beginning after reached minRegisteredResourcesRatio: 0.8
22/01/06 22:10:09 INFO YarnSchedulerBackend$YarnDriverEndpoint: Disabling executor 2.
22/01/06 22:10:09 INFO DAGScheduler: Executor lost: 2 (epoch 0)
22/01/06 22:10:09 INFO BlockManagerMasterEndpoint: Trying to remove executor 2 from BlockManagerMaster.
22/01/06 22:10:09 ERROR TransportClient: Failed to send RPC 5631266366836363375 to /192.168.80.122:52864: java.nio.channels.ClosedChannelException
java.nio.channels.ClosedChannelException
    at io.netty.channel.AbstractChannel$AbstractUnsafe.newClosedChannelException(AbstractChannel.java:958)
    at io.netty.channel.AbstractChannel$AbstractUnsafe.write(AbstractChannel.java:866)
    at io.netty.channel.DefaultChannelPipeline$HeadContext.write(DefaultChannelPipeline.java:1379)
    at io.netty.channel.AbstractChannelHandlerContext.invokeWrite0(AbstractChannelHandlerContext.java:716)
    at io.netty.channel.AbstractChannelHandlerContext.invokeWrite(AbstractChannelHandlerContext.java:708)
    at io.netty.channel.AbstractChannelHandlerContext.access$000(AbstractChannelHandlerContext.java:56)
    at io.netty.channel.AbstractChannelHandlerContext$AbstractWriteTask.write(AbstractChannelHandlerContext.java:1102)
    at io.netty.channel.AbstractChannelHandlerContext$WriteAndFlushTask.write(AbstractChannelHandlerContext.java:1149)
    at io.netty.channel.AbstractChannelHandlerContext$AbstractWriteTask.run(AbstractChannelHandlerContext.java:1073)
    at io.netty.util.concurrent.AbstractEventExecutor.safeExecute(AbstractEventExecutor.java:163)
    at io.netty.util.concurrent.SingleThreadEventExecutor.runAllTasks(SingleThreadEventExecutor.java:510)
    at io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:518)
    at io.netty.util.concurrent.SingleThreadEventExecutor.run(SingleThreadEventExecutor.java:1044)
    at io.netty.util.internal.ThreadExecutorMap.run(ThreadExecutorMap.java:74)
    at io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30)
    at java.lang.Thread.run(Thread.java:748)
Cause: the container's virtual memory usage exceeded its configured limit, so YARN killed the container (which is why the executor was lost and the channel closed). Solution: change the following settings in yarn-site.xml so that tasks exceeding their physical or virtual memory allocation are no longer killed outright.
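The post does not show the actual settings, but the standard NodeManager switches for this are the physical/virtual memory checks; a minimal sketch of the yarn-site.xml fragment (stock Hadoop property names, assumed rather than taken from the original post):

```xml
<!-- yarn-site.xml: disable the NodeManager's memory enforcement so containers
     that exceed their allocation are no longer killed -->
<property>
  <!-- don't kill containers that exceed their physical memory allocation -->
  <name>yarn.nodemanager.pmem-check-enabled</name>
  <value>false</value>
</property>
<property>
  <!-- don't kill containers that exceed their virtual memory allocation -->
  <name>yarn.nodemanager.vmem-check-enabled</name>
  <value>false</value>
</property>
```

Restart the NodeManagers after editing the file. A gentler alternative is to keep the checks on and raise `yarn.nodemanager.vmem-pmem-ratio` (default 2.1) so containers get more virtual memory headroom per MB of physical memory.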