Hadoop Pitfall Notes: HDFS Append Problems Caused by the Hadoop Version


Today I was practicing reading, writing, and appending to HDFS files. Reading and writing worked fine, but the append failed.

The error was:

2017-07-14 10:50:00,046 WARN [org.apache.hadoop.hdfs.DFSClient] - DataStreamer Exception
java.io.IOException: Failed to replace a bad datanode on the existing pipeline due to no more good datanodes being available to try. (Nodes: current=[DatanodeInfoWithStorage[192.168.153.138:50010,DS-1757abd0-ffc8-4320-a277-3dfb1022533e,DISK], DatanodeInfoWithStorage[192.168.153.137:50010,DS-ddd9a357-58d6-4c2a-921e-c16087974edb,DISK]], original=[DatanodeInfoWithStorage[192.168.153.138:50010,DS-1757abd0-ffc8-4320-a277-3dfb1022533e,DISK], DatanodeInfoWithStorage[192.168.153.137:50010,DS-ddd9a357-58d6-4c2a-921e-c16087974edb,DISK]]). The current failed datanode replacement policy is DEFAULT, and a client may configure this via 'dfs.client.block.write.replace-datanode-on-failure.policy' in its configuration.
    at org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.findNewDatanode(DFSOutputStream.java:918)
    at org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.addDatanode2ExistingPipeline(DFSOutputStream.java:992)
    at org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.setupPipelineForAppendOrRecovery(DFSOutputStream.java:1160)
    at org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.run(DFSOutputStream.java:455)
java.io.IOException: Failed to replace a bad datanode on the existing pipeline due to no more good datanodes being available to try. (Nodes: current=[DatanodeInfoWithStorage[192.168.153.138:50010,DS-1757abd0-ffc8-4320-a277-3dfb1022533e,DISK], DatanodeInfoWithStorage[192.168.153.137:50010,DS-ddd9a357-58d6-4c2a-921e-c16087974edb,DISK]], original=[DatanodeInfoWithStorage[192.168.153.138:50010,DS-1757abd0-ffc8-4320-a277-3dfb1022533e,DISK], DatanodeInfoWithStorage[192.168.153.137:50010,DS-ddd9a357-58d6-4c2a-921e-c16087974edb,DISK]]). The current failed datanode replacement policy is DEFAULT, and a client may configure this via 'dfs.client.block.write.replace-datanode-on-failure.policy' in its configuration.
    at org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.findNewDatanode(DFSOutputStream.java:918)
    at org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.addDatanode2ExistingPipeline(DFSOutputStream.java:992)
    at org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.setupPipelineForAppendOrRecovery(DFSOutputStream.java:1160)
    at org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.run(DFSOutputStream.java:455)
2017-07-14 10:50:00,052 ERROR [org.apache.hadoop.hdfs.DFSClient] - Failed to close inode 16762
java.io.IOException: Failed to replace a bad datanode on the existing pipeline due to no more good datanodes being available to try. (Nodes: current=[DatanodeInfoWithStorage[192.168.153.138:50010,DS-1757abd0-ffc8-4320-a277-3dfb1022533e,DISK], DatanodeInfoWithStorage[192.168.153.137:50010,DS-ddd9a357-58d6-4c2a-921e-c16087974edb,DISK]], original=[DatanodeInfoWithStorage[192.168.153.138:50010,DS-1757abd0-ffc8-4320-a277-3dfb1022533e,DISK], DatanodeInfoWithStorage[192.168.153.137:50010,DS-ddd9a357-58d6-4c2a-921e-c16087974edb,DISK]]). The current failed datanode replacement policy is DEFAULT, and a client may configure this via 'dfs.client.block.write.replace-datanode-on-failure.policy' in its configuration.
    at org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.findNewDatanode(DFSOutputStream.java:918)
    at org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.addDatanode2ExistingPipeline(DFSOutputStream.java:992)
    at org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.setupPipelineForAppendOrRecovery(DFSOutputStream.java:1160)
    at org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.run(DFSOutputStream.java:455)

After adding that setting, the error changed to the following:

org.apache.hadoop.ipc.RemoteException(org.apache.hadoop.hdfs.protocol.AlreadyBeingCreatedException): Failed to APPEND_FILE /weather/output/abc.txt for DFSClient_NONMAPREDUCE_1964095166_1 on 192.168.153.1 because this file lease is currently owned by DFSClient_NONMAPREDUCE_-590151867_1 on 192.168.153.1

Then I added one more setting, and after that everything worked.
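The exact snippets that were added are not shown above, so what follows is only a minimal sketch of the client-side settings commonly used for these two errors: the dfs.client.block.write.replace-datanode-on-failure.* properties named in the first exception, and releasing the stale lease behind the second one. The class name, cluster URI and file path are illustrative assumptions, not the original fix.

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.hdfs.DistributedFileSystem;

import java.io.IOException;
import java.net.URI;

public class AppendClientConfig {

    public static void main(String[] args) throws IOException {
        Configuration conf = new Configuration();
        // For the pipeline error: keep the replacement feature enabled, but never try to
        // replace a failed datanode. This is only sensible on small test clusters
        // (2-3 datanodes); on production clusters keep the DEFAULT policy.
        conf.setBoolean("dfs.client.block.write.replace-datanode-on-failure.enable", true);
        conf.set("dfs.client.block.write.replace-datanode-on-failure.policy", "NEVER");

        FileSystem fs = FileSystem.get(URI.create("hdfs://mycluster"), conf);

        // For the lease error: the earlier, failed append still holds the file lease.
        // Reusing one FileSystem instance and closing every output stream is usually
        // enough; recoverLease can ask the NameNode to release a stuck lease.
        if (fs instanceof DistributedFileSystem) {
            ((DistributedFileSystem) fs).recoverLease(new Path("/weather/output/abc.txt"));
        }
        fs.close();
    }
}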

Also check your Hadoop configuration: appending only works if the following property is set to true in hdfs-site.xml (on Hadoop 2.x it already defaults to true, so this mainly matters on older 1.x clusters).

<property>
    <name>dfs.support.append</name>
    <value>true</value>
</property>

Here is a piece of test code you can use as a reference:

package com.wyp;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.IOUtils;

import java.io.*;
import java.net.URI;

public class AppendContent {

    public static void main(String[] args) {
        String hdfs_path = "hdfs://mycluster/home/wyp/wyp.txt"; // HDFS file to append to
        Configuration conf = new Configuration();
        conf.setBoolean("dfs.support.append", true);

        String inpath = "/home/wyp/append.txt"; // local file whose content will be appended
        FileSystem fs = null;
        try {
            fs = FileSystem.get(URI.create(hdfs_path), conf);
            // Stream for the local file to be appended; inpath is a local path
            InputStream in = new BufferedInputStream(new FileInputStream(inpath));
            OutputStream out = fs.append(new Path(hdfs_path));
            // Copy the local content onto the end of the HDFS file;
            // the final 'true' argument closes both streams when the copy finishes
            IOUtils.copyBytes(in, out, 4096, true);
        } catch (IOException e) {
            e.printStackTrace();
        }
    }
}
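One detail worth noting in this test code: the last argument of IOUtils.copyBytes is true, so both streams are closed as soon as the copy finishes, which in turn lets the NameNode release this client's lease on the file. If an earlier run dies without closing its output stream, the next append can fail with the AlreadyBeingCreatedException shown above until that lease expires or is recovered.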

I. Common hadoop commands

1. The hadoop fs commands

# List all hadoop fs commands
hadoop fs

# Upload a file (put and copyFromLocal are both upload commands)
hadoop fs -put jdk-7u55-linux-i586.tar.gz hdfs://hucc01:9000/jdk
hadoop fs -copyFromLocal jdk-7u55-linux-i586.tar.gz hdfs://hucc01:9000/jdk

# Download a file (get and copyToLocal are both download commands)
hadoop fs -get hdfs://hucc01:9000/jdk jdk1.7
hadoop fs -copyToLocal hdfs://hucc01:9000/jdk jdk1.7

# Append one or more local files to an HDFS file (appendToFile)
hadoop fs -appendToFile install.log /words

# List HDFS files (ls)
hadoop fs -ls /

# Show help; add a command name, e.g. "hadoop fs -help ls", for a single command (help)
hadoop fs -help

# Print the content of an HDFS file (cat and text)
hadoop fs -cat /words
hadoop fs -text /words

# Delete an HDFS file or directory (rm)
hadoop fs -rm -r /words

# Count the directories, files and bytes under a path (count)
hadoop fs -count /

# Merge the files under an HDFS directory and download the result to a local file (getmerge)
hadoop fs -getmerge / merge

# Move a local file into HDFS, i.e. upload it and then delete the local copy (moveFromLocal)
hadoop fs -moveFromLocal words /

# Show the usage of the file system (df)
hadoop fs -df

II. Common hdfs commands (these are used more often)

The usage is the same as the hadoop fs commands; since Hadoop 2.0, the hdfs command is the recommended entry point.

hdfs dfs
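For reference, here is a minimal sketch of how a few of the shell commands above map onto the Java FileSystem API. The cluster URI and paths are placeholders reused from the examples above, not taken from a real cluster.

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileStatus;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

import java.io.IOException;
import java.net.URI;

public class FsShellEquivalents {

    public static void main(String[] args) throws IOException {
        Configuration conf = new Configuration();
        FileSystem fs = FileSystem.get(URI.create("hdfs://hucc01:9000"), conf);

        // hadoop fs -put / -copyFromLocal
        fs.copyFromLocalFile(new Path("/tmp/words"), new Path("/words"));

        // hadoop fs -get / -copyToLocal
        fs.copyToLocalFile(new Path("/words"), new Path("/tmp/words.copy"));

        // hadoop fs -ls /
        for (FileStatus status : fs.listStatus(new Path("/"))) {
            System.out.println(status.getPath() + "\t" + status.getLen());
        }

        // hadoop fs -rm -r /words
        fs.delete(new Path("/words"), true);

        fs.close();
    }
}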

