Replication error in the HDFS file system

I wrote the following bash script:

#!/bin/bash
cd /export/hadoop-1.0.1/bin
./hadoop namenode -format
./start-all.sh
./hadoop fs -rmr hdfs://192.168.1.8:7000/export/hadoop-1.0.1/bin/output
./hadoop fs -rmr hdfs://192.168.1.8:7000/export/hadoop-1.0.1/bin/input
./hadoop fs -mkdir hdfs://192.168.1.8:7000/export/hadoop-1.0.1/input
./readwritepaths
./hadoop fs -put /export/hadoop-1.0.1/bin/input/paths.txt hdfs://192.168.1.8:7000/export/hadoop-1.0.1/bin/input/paths.txt
./hadoop jar /export/hadoop-1.0.1/bin/ParallelIndexation.jar org.myorg.ParallelIndexation /export/hadoop-1.0.1/bin/input /export/hadoop-1.0.1/bin/output -D mapred.map.tasks=1 1> resultofexecute.txt 2>&1

As a result of executing the command

./hadoop fs -put /export/hadoop-1.0.1/bin/input/paths.txt hdfs://192.168.1.8:7000/export/hadoop-1.0.1/bin/input/paths.txt

I receive the following message:

13/04/28 10:13:15 WARN hdfs.DFSClient: DataStreamer Exception: org.apache.hadoop.ipc.RemoteException: java.io.IOException: File /export/hadoop-1.0.1/bin/input/paths.txt could only be replicated to 0 nodes, instead of 1
    at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getAdditionalBlock(FSNamesystem.java:1556)
    at org.apache.hadoop.hdfs.server.namenode.NameNode.addBlock(NameNode.java:696)
    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
    at java.lang.reflect.Method.invoke(Method.java:601)
    at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:563)
    at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1388)
    at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1384)
    at java.security.AccessController.doPrivileged(Native Method)
    at javax.security.auth.Subject.doAs(Subject.java:415)
    at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1093)
    at org.apache.hadoop.ipc.Server$Handler.run(Server.java:1382)

    at org.apache.hadoop.ipc.Client.call(Client.java:1066)
    at org.apache.hadoop.ipc.RPC$Invoker.invoke(RPC.java:225)
    at $Proxy1.addBlock(Unknown Source)
    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
    at java.lang.reflect.Method.invoke(Method.java:601)
    at org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:82)
    at org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:59)
    at $Proxy1.addBlock(Unknown Source)
    at org.apache.hadoop.hdfs.DFSClient$DFSOutputStream.locateFollowingBlock(DFSClient.java:3507)
    at org.apache.hadoop.hdfs.DFSClient$DFSOutputStream.nextBlockOutputStream(DFSClient.java:3370)
    at org.apache.hadoop.hdfs.DFSClient$DFSOutputStream.access$2700(DFSClient.java:2586)
    at org.apache.hadoop.hdfs.DFSClient$DFSOutputStream$DataStreamer.run(DFSClient.java:2826)
13/04/28 10:13:15 WARN hdfs.DFSClient: Error Recovery for block null bad datanode[0] nodes == null
13/04/28 10:13:15 WARN hdfs.DFSClient: Could not get block locations. Source file "/export/hadoop-1.0.1/bin/input/paths.txt" - Aborting...
put: java.io.IOException: File /export/hadoop-1.0.1/bin/input/paths.txt could only be replicated to 0 nodes, instead of 1
13/04/28 10:13:15 ERROR hdfs.DFSClient: Exception closing file /export/hadoop-1.0.1/bin/input/paths.txt : org.apache.hadoop.ipc.RemoteException: java.io.IOException: File /export/hadoop-1.0.1/bin/input/paths.txt could only be replicated to 0 nodes, instead of 1
    at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getAdditionalBlock(FSNamesystem.java:1556)
    at org.apache.hadoop.hdfs.server.namenode.NameNode.addBlock(NameNode.java:696)
    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
    at java.lang.reflect.Method.invoke(Method.java:601)
    at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:563)
    at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1388)
    at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1384)
    at java.security.AccessController.doPrivileged(Native Method)
    at javax.security.auth.Subject.doAs(Subject.java:415)
    at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1093)
    at org.apache.hadoop.ipc.Server$Handler.run(Server.java:1382)
org.apache.hadoop.ipc.RemoteException: java.io.IOException: File /export/hadoop-1.0.1/bin/input/paths.txt could only be replicated to 0 nodes, instead of 1
    at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getAdditionalBlock(FSNamesystem.java:1556)
    at org.apache.hadoop.hdfs.server.namenode.NameNode.addBlock(NameNode.java:696)
    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
    at java.lang.reflect.Method.invoke(Method.java:601)
    at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:563)
    at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1388)
    at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1384)
    at java.security.AccessController.doPrivileged(Native Method)
    at javax.security.auth.Subject.doAs(Subject.java:415)
    at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1093)
    at org.apache.hadoop.ipc.Server$Handler.run(Server.java:1382)

    at org.apache.hadoop.ipc.Client.call(Client.java:1066)
    at org.apache.hadoop.ipc.RPC$Invoker.invoke(RPC.java:225)
    at $Proxy1.addBlock(Unknown Source)
    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
    at java.lang.reflect.Method.invoke(Method.java:601)
    at org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:82)
    at org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:59)
    at $Proxy1.addBlock(Unknown Source)
    at org.apache.hadoop.hdfs.DFSClient$DFSOutputStream.locateFollowingBlock(DFSClient.java:3507)
    at org.apache.hadoop.hdfs.DFSClient$DFSOutputStream.nextBlockOutputStream(DFSClient.java:3370)
    at org.apache.hadoop.hdfs.DFSClient$DFSOutputStream.access$2700(DFSClient.java:2586)
    at org.apache.hadoop.hdfs.DFSClient$DFSOutputStream$DataStreamer.run(DFSClient.java:2826)

I am also attaching the DataNode log from one of the slave nodes (the log on the second slave node contains a similar error):

2013-04-28 11:10:40,634 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: STARTUP_MSG:
/************************************************************
STARTUP_MSG: Starting DataNode
STARTUP_MSG:   host = myhost2/192.168.1.10
STARTUP_MSG:   args = []
STARTUP_MSG:   version = 1.0.1
STARTUP_MSG:   build = https://svn.apache.org/repos/asf/hadoop/common/branches/branch-1.0 -r 1243785; compiled by 'hortonfo' on Tue Feb 14 08:15:38 UTC 2012
************************************************************/
2013-04-28 11:10:40,948 INFO org.apache.hadoop.metrics2.impl.MetricsConfig: loaded properties from hadoop-metrics2.properties
2013-04-28 11:10:40,982 INFO org.apache.hadoop.metrics2.impl.MetricsSourceAdapter: MBean for source MetricsSystem,sub=Stats registered.
2013-04-28 11:10:40,983 INFO org.apache.hadoop.metrics2.impl.MetricsSystemImpl: Scheduled snapshot period at 10 second(s).
2013-04-28 11:10:40,983 INFO org.apache.hadoop.metrics2.impl.MetricsSystemImpl: DataNode metrics system started
2013-04-28 11:10:41,285 INFO org.apache.hadoop.metrics2.impl.MetricsSourceAdapter: MBean for source ugi registered.
2013-04-28 11:10:41,308 WARN org.apache.hadoop.metrics2.impl.MetricsSystemImpl: Source name ugi already exists!
2013-04-28 11:10:42,811 INFO org.apache.hadoop.ipc.Client: Retrying connect to server: 192.168.1.8/192.168.1.8:7000. Already tried 0 time(s).
2013-04-28 11:10:43,811 INFO org.apache.hadoop.ipc.Client: Retrying connect to server: 192.168.1.8/192.168.1.8:7000. Already tried 1 time(s).
2013-04-28 11:10:44,813 INFO org.apache.hadoop.ipc.Client: Retrying connect to server: 192.168.1.8/192.168.1.8:7000. Already tried 2 time(s).
2013-04-28 11:10:45,814 INFO org.apache.hadoop.ipc.Client: Retrying connect to server: 192.168.1.8/192.168.1.8:7000. Already tried 3 time(s).
2013-04-28 11:10:46,814 INFO org.apache.hadoop.ipc.Client: Retrying connect to server: 192.168.1.8/192.168.1.8:7000. Already tried 4 time(s).
2013-04-28 11:10:47,814 INFO org.apache.hadoop.ipc.Client: Retrying connect to server: 192.168.1.8/192.168.1.8:7000. Already tried 5 time(s).
2013-04-28 11:10:48,815 INFO org.apache.hadoop.ipc.Client: Retrying connect to server: 192.168.1.8/192.168.1.8:7000. Already tried 6 time(s).
2013-04-28 11:10:49,815 INFO org.apache.hadoop.ipc.Client: Retrying connect to server: 192.168.1.8/192.168.1.8:7000. Already tried 7 time(s).
2013-04-28 11:10:50,816 INFO org.apache.hadoop.ipc.Client: Retrying connect to server: 192.168.1.8/192.168.1.8:7000. Already tried 8 time(s).
2013-04-28 11:10:51,818 INFO org.apache.hadoop.ipc.Client: Retrying connect to server: 192.168.1.8/192.168.1.8:7000. Already tried 9 time(s).
2013-04-28 11:10:51,822 INFO org.apache.hadoop.ipc.RPC: Server at 192.168.1.8/192.168.1.8:7000 not available yet, Zzzzz...
2013-04-28 11:10:53,824 INFO org.apache.hadoop.ipc.Client: Retrying connect to server: 192.168.1.8/192.168.1.8:7000. Already tried 0 time(s).
2013-04-28 11:10:54,825 INFO org.apache.hadoop.ipc.Client: Retrying connect to server: 192.168.1.8/192.168.1.8:7000. Already tried 1 time(s).
2013-04-28 11:10:55,826 INFO org.apache.hadoop.ipc.Client: Retrying connect to server: 192.168.1.8/192.168.1.8:7000. Already tried 2 time(s).
2013-04-28 11:10:56,828 INFO org.apache.hadoop.ipc.Client: Retrying connect to server: 192.168.1.8/192.168.1.8:7000. Already tried 3 time(s).
2013-04-28 11:10:57,828 INFO org.apache.hadoop.ipc.Client: Retrying connect to server: 192.168.1.8/192.168.1.8:7000. Already tried 4 time(s).
2013-04-28 11:10:58,829 INFO org.apache.hadoop.ipc.Client: Retrying connect to server: 192.168.1.8/192.168.1.8:7000. Already tried 5 time(s).
2013-04-28 11:10:59,829 INFO org.apache.hadoop.ipc.Client: Retrying connect to server: 192.168.1.8/192.168.1.8:7000. Already tried 6 time(s).
2013-04-28 11:11:00,830 INFO org.apache.hadoop.ipc.Client: Retrying connect to server: 192.168.1.8/192.168.1.8:7000. Already tried 7 time(s).
2013-04-28 11:11:01,831 INFO org.apache.hadoop.ipc.Client: Retrying connect to server: 192.168.1.8/192.168.1.8:7000. Already tried 8 time(s).
2013-04-28 11:11:02,831 INFO org.apache.hadoop.ipc.Client: Retrying connect to server: 192.168.1.8/192.168.1.8:7000. Already tried 9 time(s).
2013-04-28 11:11:02,833 INFO org.apache.hadoop.ipc.RPC: Server at 192.168.1.8/192.168.1.8:7000 not available yet, Zzzzz...
2013-04-28 11:11:04,834 INFO org.apache.hadoop.ipc.Client: Retrying connect to server: 192.168.1.8/192.168.1.8:7000. Already tried 0 time(s).
2013-04-28 11:11:05,834 INFO org.apache.hadoop.ipc.Client: Retrying connect to server: 192.168.1.8/192.168.1.8:7000. Already tried 1 time(s).
2013-04-28 11:11:06,835 INFO org.apache.hadoop.ipc.Client: Retrying connect to server: 192.168.1.8/192.168.1.8:7000. Already tried 2 time(s).
2013-04-28 11:11:07,836 INFO org.apache.hadoop.ipc.Client: Retrying connect to server: 192.168.1.8/192.168.1.8:7000. Already tried 3 time(s).
2013-04-28 11:11:08,837 INFO org.apache.hadoop.ipc.Client: Retrying connect to server: 192.168.1.8/192.168.1.8:7000. Already tried 4 time(s).
2013-04-28 11:11:09,837 INFO org.apache.hadoop.ipc.Client: Retrying connect to server: 192.168.1.8/192.168.1.8:7000. Already tried 5 time(s).
2013-04-28 11:11:40,381 ERROR org.apache.hadoop.hdfs.server.datanode.DataNode: java.io.IOException: Incompatible namespaceIDs in /tmp/hadoop-hadoop/dfs/data: namenode namespaceID = 454531810; datanode namespaceID = 345408440
    at org.apache.hadoop.hdfs.server.datanode.DataStorage.doTransition(DataStorage.java:232)
    at org.apache.hadoop.hdfs.server.datanode.DataStorage.recoverTransitionRead(DataStorage.java:147)
    at org.apache.hadoop.hdfs.server.datanode.DataNode.startDataNode(DataNode.java:385)
    at org.apache.hadoop.hdfs.server.datanode.DataNode.<init>(DataNode.java:299)
    at org.apache.hadoop.hdfs.server.datanode.DataNode.makeInstance(DataNode.java:1582)
    at org.apache.hadoop.hdfs.server.datanode.DataNode.instantiateDataNode(DataNode.java:1521)
    at org.apache.hadoop.hdfs.server.datanode.DataNode.createDataNode(DataNode.java:1539)
    at org.apache.hadoop.hdfs.server.datanode.DataNode.secureMain(DataNode.java:1665)
    at org.apache.hadoop.hdfs.server.datanode.DataNode.main(DataNode.java:1682)
2013-04-28 11:11:40,383 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: SHUTDOWN_MSG:
/************************************************************
SHUTDOWN_MSG: Shutting down DataNode at myhost2/192.168.1.10
************************************************************/

Please help me get rid of the replication error. @ChrisWhite, here is the NameNode log:

2013-04-28 10:10:38,310 INFO org.apache.hadoop.hdfs.server.namenode.NameNode: STARTUP_MSG:
/************************************************************
STARTUP_MSG: Starting NameNode
STARTUP_MSG:   host = one/192.168.1.8
STARTUP_MSG:   args = []
STARTUP_MSG:   version = 1.0.1
STARTUP_MSG:   build = https://svn.apache.org/repos/asf/hadoop/common/branches/branch-1.0 -r 1243785; compiled by 'hortonfo' on Tue Feb 14 08:15:38 UTC 2012
************************************************************/
2013-04-28 10:10:38,579 INFO org.apache.hadoop.metrics2.impl.MetricsConfig: loaded properties from hadoop-metrics2.properties
2013-04-28 10:10:38,594 INFO org.apache.hadoop.metrics2.impl.MetricsSourceAdapter: MBean for source MetricsSystem,sub=Stats registered.
2013-04-28 10:10:38,596 INFO org.apache.hadoop.metrics2.impl.MetricsSystemImpl: Scheduled snapshot period at 10 second(s).
2013-04-28 10:10:38,596 INFO org.apache.hadoop.metrics2.impl.MetricsSystemImpl: NameNode metrics system started
2013-04-28 10:11:08,818 INFO org.apache.hadoop.metrics2.impl.MetricsSourceAdapter: MBean for source ugi registered.
2013-04-28 10:11:08,825 WARN org.apache.hadoop.metrics2.impl.MetricsSystemImpl: Source name ugi already exists!
2013-04-28 10:11:08,831 INFO org.apache.hadoop.metrics2.impl.MetricsSourceAdapter: MBean for source jvm registered.
2013-04-28 10:11:08,832 INFO org.apache.hadoop.metrics2.impl.MetricsSourceAdapter: MBean for source NameNode registered.
2013-04-28 10:11:08,852 INFO org.apache.hadoop.hdfs.util.GSet: VM type = 32-bit
2013-04-28 10:11:08,854 INFO org.apache.hadoop.hdfs.util.GSet: 2% max memory = 19.33375 MB
2013-04-28 10:11:08,854 INFO org.apache.hadoop.hdfs.util.GSet: capacity = 2^22 = 4194304 entries
2013-04-28 10:11:08,855 INFO org.apache.hadoop.hdfs.util.GSet: recommended=4194304, actual=4194304
2013-04-28 10:11:08,977 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: fsOwner=hadoop
2013-04-28 10:11:08,977 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: supergroup=supergroup
2013-04-28 10:11:08,977 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: isPermissionEnabled=true
2013-04-28 10:11:08,983 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: dfs.block.invalidate.limit=100
2013-04-28 10:11:08,983 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: isAccessTokenEnabled=false accessKeyUpdateInterval=0 min(s), accessTokenLifetime=0 min(s)
2013-04-28 10:11:09,088 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: Registered FSNamesystemStateMBean and NameNodeMXBean
2013-04-28 10:11:09,129 INFO org.apache.hadoop.hdfs.server.namenode.NameNode: Caching file names occuring more than 10 times
2013-04-28 10:11:09,143 INFO org.apache.hadoop.hdfs.server.common.Storage: Number of files = 1
2013-04-28 10:11:09,147 INFO org.apache.hadoop.hdfs.server.common.Storage: Number of files under construction = 0
2013-04-28 10:11:09,147 INFO org.apache.hadoop.hdfs.server.common.Storage: Image file of size 112 loaded in 0 seconds.
2013-04-28 10:11:09,147 INFO org.apache.hadoop.hdfs.server.common.Storage: Edits file /tmp/hadoop-hadoop/dfs/name/current/edits of size 4 edits # 0 loaded in 0 seconds.
2013-04-28 10:11:09,149 INFO org.apache.hadoop.hdfs.server.common.Storage: Image file of size 112 saved in 0 seconds.
2013-04-28 10:11:09,157 INFO org.apache.hadoop.hdfs.server.common.Storage: Image file of size 112 saved in 0 seconds.
2013-04-28 10:11:09,160 INFO org.apache.hadoop.hdfs.server.namenode.NameCache: initialized with 0 entries 0 lookups
2013-04-28 10:11:09,160 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: Finished loading FSImage in 192 msecs
2013-04-28 10:11:09,176 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: Total number of blocks = 0
2013-04-28 10:11:09,177 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: Number of invalid blocks = 0
2013-04-28 10:11:09,177 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: Number of under-replicated blocks = 0
2013-04-28 10:11:09,177 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: Number of over-replicated blocks = 0
2013-04-28 10:11:09,177 INFO org.apache.hadoop.hdfs.StateChange: STATE* Safe mode termination scan for invalid, over- and under-replicated blocks completed in 15 msec
2013-04-28 10:11:09,177 INFO org.apache.hadoop.hdfs.StateChange: STATE* Leaving safe mode after 0 secs.
2013-04-28 10:11:09,177 INFO org.apache.hadoop.hdfs.StateChange: STATE* Network topology has 0 racks and 0 datanodes
2013-04-28 10:11:09,177 INFO org.apache.hadoop.hdfs.StateChange: STATE* UnderReplicatedBlocks has 0 blocks
2013-04-28 10:11:09,192 INFO org.apache.hadoop.util.HostsFileReader: Refreshing hosts (include/exclude) list
2013-04-28 10:11:09,204 INFO org.apache.hadoop.metrics2.impl.MetricsSourceAdapter: MBean for source FSNamesystemMetrics registered.
2013-04-28 10:11:09,223 INFO org.apache.hadoop.metrics2.impl.MetricsSourceAdapter: MBean for source RpcDetailedActivityForPort7000 registered.
2013-04-28 10:11:09,223 INFO org.apache.hadoop.metrics2.impl.MetricsSourceAdapter: MBean for source RpcActivityForPort7000 registered.
2013-04-28 10:11:09,225 INFO org.apache.hadoop.hdfs.server.namenode.NameNode: Namenode up at: one/192.168.1.8:7000
2013-04-28 10:11:09,245 INFO org.apache.hadoop.ipc.Server: Starting SocketReader
2013-04-28 10:11:09,247 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: ReplicateQueue QueueProcessingStatistics: First cycle completed 0 blocks in 0 msec
2013-04-28 10:11:09,247 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: ReplicateQueue QueueProcessingStatistics: Queue flush completed 0 blocks in 0 msec processing time, 0 msec clock time, 1 cycles
2013-04-28 10:11:09,248 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: InvalidateQueue QueueProcessingStatistics: First cycle completed 0 blocks in 0 msec
2013-04-28 10:11:09,248 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: InvalidateQueue QueueProcessingStatistics: Queue flush completed 0 blocks in 0 msec processing time, 1 cycles
2013-04-28 10:11:39,379 INFO org.mortbay.log: Logging to org.slf4j.impl.Log4jLoggerAdapter(org.mortbay.log) via org.mortbay.log.Slf4jLog
2013-04-28 10:11:39,559 INFO org.apache.hadoop.http.HttpServer: Added global filter 'safety' (class=org.apache.hadoop.http.HttpServer$QuotingInputFilter)
2013-04-28 10:11:39,574 INFO org.apache.hadoop.http.HttpServer: dfs.webhdfs.enabled = false
2013-04-28 10:11:39,582 INFO org.apache.hadoop.http.HttpServer: Port returned by webServer.getConnectors()[0].getLocalPort() before open() is -1. Opening the listener on 50070
2013-04-28 10:11:39,583 INFO org.apache.hadoop.http.HttpServer: listener.getLocalPort() returned 50070 webServer.getConnectors()[0].getLocalPort() returned 50070
2013-04-28 10:11:39,583 INFO org.apache.hadoop.http.HttpServer: Jetty bound to port 50070
2013-04-28 10:11:39,583 INFO org.mortbay.log: jetty-6.1.26
2013-04-28 10:11:40,093 INFO org.mortbay.log: Started SelectChannelConnector@0.0.0.0:50070
2013-04-28 10:11:40,093 INFO org.apache.hadoop.hdfs.server.namenode.NameNode: Web-server up at: 0.0.0.0:50070
2013-04-28 10:11:40,111 INFO org.apache.hadoop.ipc.Server: IPC Server Responder: starting
2013-04-28 10:11:40,170 INFO org.apache.hadoop.ipc.Server: IPC Server handler 9 on 7000: starting
2013-04-28 10:11:40,171 INFO org.apache.hadoop.ipc.Server: IPC Server handler 8 on 7000: starting
2013-04-28 10:11:40,171 INFO org.apache.hadoop.ipc.Server: IPC Server handler 7 on 7000: starting
2013-04-28 10:11:40,171 INFO org.apache.hadoop.ipc.Server: IPC Server handler 6 on 7000: starting
2013-04-28 10:11:40,171 INFO org.apache.hadoop.ipc.Server: IPC Server handler 5 on 7000: starting
2013-04-28 10:11:40,171 INFO org.apache.hadoop.ipc.Server: IPC Server handler 4 on 7000: starting
2013-04-28 10:11:40,171 INFO org.apache.hadoop.ipc.Server: IPC Server handler 3 on 7000: starting
2013-04-28 10:11:40,172 INFO org.apache.hadoop.ipc.Server: IPC Server handler 2 on 7000: starting
2013-04-28 10:11:40,172 INFO org.apache.hadoop.ipc.Server: IPC Server handler 1 on 7000: starting
2013-04-28 10:11:40,172 INFO org.apache.hadoop.ipc.Server: IPC Server handler 0 on 7000: starting
2013-04-28 10:11:40,172 INFO org.apache.hadoop.ipc.Server: IPC Server listener on 7000: starting
2013-04-28 10:11:41,177 ERROR org.apache.hadoop.security.UserGroupInformation: PriviledgedActionException as:hadoop cause:java.io.IOException: File /tmp/hadoop-hadoop/mapred/system/jobtracker.info could only be replicated to 0 nodes, instead of 1
2013-04-28 10:11:41,180 INFO org.apache.hadoop.ipc.Server: IPC Server handler 1 on 7000, call addBlock(/tmp/hadoop-hadoop/mapred/system/jobtracker.info, DFSClient_1259183364, null) from 192.168.1.8:37770: error: java.io.IOException: File /tmp/hadoop-hadoop/mapred/system/jobtracker.info could only be replicated to 0 nodes, instead of 1
java.io.IOException: File /tmp/hadoop-hadoop/mapred/system/jobtracker.info could only be replicated to 0 nodes, instead of 1
    at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getAdditionalBlock(FSNamesystem.java:1556)
    at org.apache.hadoop.hdfs.server.namenode.NameNode.addBlock(NameNode.java:696)
    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
    at java.lang.reflect.Method.invoke(Method.java:601)
    at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:563)
    at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1388)
    at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1384)
    at java.security.AccessController.doPrivileged(Native Method)
    at javax.security.auth.Subject.doAs(Subject.java:415)
    at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1093)
    at org.apache.hadoop.ipc.Server$Handler.run(Server.java:1382)

You need to configure values for the dfs.name.dir and dfs.data.dir properties in your hdfs-site.xml; otherwise they will most likely default to directories under /tmp, which (as @rVr notes in a comment on his answer) are cleared when the system restarts.

As for suitable values, that depends on your system, but in general you should create one directory for dfs.name.dir (on the name node server) and another for dfs.data.dir (on most production clusters this is a comma-separated list of directories on different disks). A concrete sketch follows below.
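
As a minimal sketch of what that could look like, assuming HADOOP_HOME is /export/hadoop-1.0.1 as in the question, and using placeholder /data paths rather than values taken from the question:

# Write a minimal hdfs-site.xml. Note this overwrites any existing copy,
# so merge by hand if the file already contains other properties.
cat > /export/hadoop-1.0.1/conf/hdfs-site.xml <<'EOF'
<?xml version="1.0"?>
<configuration>
  <!-- NameNode metadata directory (relevant on the name node). -->
  <property>
    <name>dfs.name.dir</name>
    <value>/data/hadoop/name</value>
  </property>
  <!-- DataNode block storage; may be a comma-separated list of
       directories on different disks, e.g. /disk1/data,/disk2/data. -->
  <property>
    <name>dfs.data.dir</name>
    <value>/data/hadoop/data</value>
  </property>
</configuration>
EOF
# Create the placeholder directories (they must be writable by the
# user that runs the Hadoop daemons).
mkdir -p /data/hadoop/name /data/hadoop/data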

After creating and configuring these values, you need to make sure the hdfs-site.xml file is distributed across your cluster. Then you should reformat your namenode, and finally start the HDFS services using the scripts in the bin folder (be sure to run them from the machine the name node runs on). A sketch of the whole sequence follows.
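
Spelled out as shell steps on the name node, the procedure might look like this sketch; the scp target is an assumption based on the myhost2 slave from the logs, and would be repeated for each slave:

cd /export/hadoop-1.0.1
bin/stop-all.sh
# Distribute the updated config to every slave node.
scp conf/hdfs-site.xml hadoop@myhost2:/export/hadoop-1.0.1/conf/
# Reformat the namenode. This erases all HDFS metadata, so only do it
# on a cluster whose contents you can afford to lose.
bin/hadoop namenode -format
# Start the daemons from the name node machine.
bin/start-all.sh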

Usually the cause is that the datanode is not running, or that dfs.data.dir is configured under the tmp directory, which is cleared when the machine restarts. You can run a jps command before the put to make sure the datanode is running, and test whether you can ssh between the namenode and the datanode without a password. A firewall between the nodes can also cause this problem; see the checks below.
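
A few concrete checks along those lines, using the host names that appear in the logs (one is the name node, myhost2 a slave):

# On each node: the master should list NameNode (and JobTracker), the
# slaves should list DataNode (and TaskTracker).
jps
# From the name node: passwordless ssh to a slave should not prompt.
ssh hadoop@myhost2 hostname
# From a slave: check that the NameNode RPC port (7000 in the
# question's URIs) is reachable through any firewall.
telnet 192.168.1.8 7000
# If a DataNode log shows "Incompatible namespaceIDs" (as the log above
# does), stop the daemons and clear that node's stale storage directory
# so it can re-register after the namenode was reformatted. This
# deletes the node's block data.
rm -rf /tmp/hadoop-hadoop/dfs/data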
