- Basic syntax
hadoop fs <command> and hdfs dfs <command>: the two forms are completely equivalent.
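As a minimal check, listing the HDFS root directory with either client returns the same result (any path works here):
[atguigu@hadoop102 hadoop-3.1.3]$ hadoop fs -ls /
[atguigu@hadoop102 hadoop-3.1.3]$ hdfs dfs -ls /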
- Full command list
[atguigu@hadoop102 hadoop-3.1.3]$ bin/hadoop fs
[-appendToFile <localsrc> ... <dst>]
[-cat [-ignoreCrc] <src> ...]
[-chgrp [-R] GROUP PATH...]
[-chmod [-R] <MODE[,MODE]... | OCTALMODE> PATH...]
[-chown [-R] [OWNER][:[GROUP]] PATH...]
[-copyFromLocal [-f] [-p] <localsrc> ... <dst>]
[-copyToLocal [-p] [-ignoreCrc] [-crc] <src> ... <localdst>]
[-count [-q] <path> ...]
[-cp [-f] [-p] <src> ... <dst>]
[-df [-h] [<path> ...]]
[-du [-s] [-h] <path> ...]
[-get [-p] [-ignoreCrc] [-crc] <src> ... <localdst>]
[-getmerge [-nl] <src> <localdst>]
[-help [cmd ...]]
[-ls [-d] [-h] [-R] [<path> ...]]
[-mkdir [-p] <path> ...]
[-moveFromLocal <localsrc> ... <dst>]
[-moveToLocal <src> <localdst>]
[-mv <src> ... <dst>]
[-put [-f] [-p] <localsrc> ... <dst>]
[-rm [-f] [-r|-R] [-skipTrash] <src> ...]
[-rmdir [--ignore-fail-on-non-empty] <dir> ...]
[-setrep [-R] [-w] <rep> <path> ...]
[-stat [format] <path> ...]
[-tail [-f] <file>]
[-test -[defsz] <path>]
[-text [-ignoreCrc] <src> ...]
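The listing above is what the client prints when run with no arguments; for the full usage of any single sub-command, -help is available (rm is just an example here):
[atguigu@hadoop102 hadoop-3.1.3]$ hadoop fs -help rm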
- Upload
(1) -moveFromLocal: cut and paste a file from the local file system into HDFS (the examples below assume the target directory already exists, e.g. created with hadoop fs -mkdir /sanguo)
[atguigu@hadoop102 hadoop-3.1.3]$ vim shuguo.txt
Enter: shuguo
[atguigu@hadoop102 hadoop-3.1.3]$ hadoop fs -moveFromLocal ./shuguo.txt /sanguo
(2) -copyFromLocal: copy a file from the local file system to an HDFS path
[atguigu@hadoop102 hadoop-3.1.3]$ vim weiguo.txt
Enter: weiguo
[atguigu@hadoop102 hadoop-3.1.3]$ hadoop fs -copyFromLocal weiguo.txt /sanguo
(3) -put: identical to copyFromLocal; put is the more common form in production
[atguigu@hadoop102 hadoop-3.1.3]$ vim wuguo.txt
Enter: wuguo
[atguigu@hadoop102 hadoop-3.1.3]$ hadoop fs -put ./wuguo.txt /sanguo
(4) -appendToFile: append a file to the end of a file that already exists
[atguigu@hadoop102 hadoop-3.1.3]$ vim liubei.txt
Enter: liubei
[atguigu@hadoop102 hadoop-3.1.3]$ hadoop fs -appendToFile liubei.txt /sanguo/shuguo.txt
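After the append, shuguo.txt should contain both lines; printing it back is a quick sanity check (-cat is covered below):
[atguigu@hadoop102 hadoop-3.1.3]$ hadoop fs -cat /sanguo/shuguo.txt
shuguo
liubei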
- Download
(1) -copyToLocal: copy from HDFS to the local file system
[atguigu@hadoop102 hadoop-3.1.3]$ hadoop fs -copyToLocal /sanguo/shuguo.txt ./
(2) -get: identical to copyToLocal; get is the more common form in production
[atguigu@hadoop102 hadoop-3.1.3]$ hadoop fs -get /sanguo/shuguo.txt ./shuguo2.txt
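The second argument of -get can rename the file on the way down, so after these two commands the local working directory should hold both copies:
[atguigu@hadoop102 hadoop-3.1.3]$ ls shuguo*.txt
shuguo.txt  shuguo2.txt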
- Direct HDFS operations
(1) -ls: list directory contents
[atguigu@hadoop102 hadoop-3.1.3]$ hadoop fs -ls /sanguo
(2) -cat: display file contents
[atguigu@hadoop102 hadoop-3.1.3]$ hadoop fs -cat /sanguo/shuguo.txt
(3) -chgrp, -chmod, -chown: same usage as in the Linux file system; modify a file's permissions and ownership
[atguigu@hadoop102 hadoop-3.1.3]$ hadoop fs -chmod 666 /sanguo/shuguo.txt
[atguigu@hadoop102 hadoop-3.1.3]$ hadoop fs -chown atguigu:atguigu /sanguo/shuguo.txt
(4) -mkdir: create a directory
[atguigu@hadoop102 hadoop-3.1.3]$ hadoop fs -mkdir /jinguo
(5) -cp: copy from one HDFS path to another HDFS path
[atguigu@hadoop102 hadoop-3.1.3]$ hadoop fs -cp /sanguo/shuguo.txt /jinguo
(6) -mv: move files within HDFS
[atguigu@hadoop102 hadoop-3.1.3]$ hadoop fs -mv /sanguo/wuguo.txt /jinguo
[atguigu@hadoop102 hadoop-3.1.3]$ hadoop fs -mv /sanguo/weiguo.txt /jinguo
(7) -tail: show the last 1 KB of a file
[atguigu@hadoop102 hadoop-3.1.3]$ hadoop fs -tail /jinguo/shuguo.txt
(8) -rm: delete a file or directory
[atguigu@hadoop102 hadoop-3.1.3]$ hadoop fs -rm /sanguo/shuguo.txt
(9) -rm -r: recursively delete a directory and everything inside it
[atguigu@hadoop102 hadoop-3.1.3]$ hadoop fs -rm -r /sanguo
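Both delete forms honor the HDFS trash setting: when fs.trash.interval is enabled, deleted paths are first moved into the user's .Trash directory, and the -skipTrash flag shown in the usage listing above bypasses that. An illustrative sketch only (/some/file.txt is a placeholder, not a file created in this walkthrough):
[atguigu@hadoop102 hadoop-3.1.3]$ hadoop fs -rm -skipTrash /some/file.txt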
(10) -du: show the size of a directory
[atguigu@hadoop102 hadoop-3.1.3]$ hadoop fs -du -s -h /jinguo
27  81  /jinguo
[atguigu@hadoop102 hadoop-3.1.3]$ hadoop fs -du -h /jinguo
14  42  /jinguo/shuguo.txt
7   21  /jinguo/weiguo.txt
6   18  /jinguo/wuguo.txt
Explanation: 27 is the total size of the files; 81 is the disk space consumed by all replicas (27 × 3 replicas); /jinguo is the directory being inspected.
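If the two numeric columns are hard to keep straight, recent Hadoop releases also accept a -v flag on -du that prints a header row (check hadoop fs -help du on your version; the header text below is from memory and may differ):
[atguigu@hadoop102 hadoop-3.1.3]$ hadoop fs -du -h -v /jinguo
SIZE  DISK_SPACE_CONSUMED_WITH_ALL_REPLICAS  FULL_PATH_NAME
...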
(11) -setrep: set the replication factor of a file in HDFS
[atguigu@hadoop102 hadoop-3.1.3]$ hadoop fs -setrep 10 /jinguo/shuguo.txt
The replication factor set here is only recorded in the NameNode's metadata; whether that many replicas actually exist depends on the number of DataNodes. Since there are currently only 3 machines, there can be at most 3 replicas; only when the cluster grows to 10 nodes can the replica count actually reach 10.
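To compare the recorded factor with what actually exists, -stat with the %r format prints the replication stored in the metadata, and hdfs fsck reports blocks that are still under-replicated. A quick verification sketch, not part of the original walkthrough:
[atguigu@hadoop102 hadoop-3.1.3]$ hadoop fs -stat %r /jinguo/shuguo.txt
10
[atguigu@hadoop102 hadoop-3.1.3]$ hdfs fsck /jinguo/shuguo.txt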