1. Background
GlusterFS is an open-source distributed file system with strong scale-out capability. By adding nodes it can grow to several petabytes of storage and serve thousands of clients. GlusterFS aggregates physically distributed storage resources over TCP/IP or InfiniBand RDMA networks and manages the data under a single global namespace. Its stackable, user-space design delivers good performance across a wide range of data workloads.
GlusterFS supports standard clients running standard applications over any standard IP network.
2. Advantages
* Linear scale-out and high performance
* High availability
* Single global namespace
* Elastic hash algorithm and elastic volume management
* Based on standard protocols
* Software-only implementation (no special hardware required)
* Runs entirely in user space
* Modular, stackable architecture
* Data stored in native formats
* No metadata server: file placement is computed with the elastic hash algorithm
3. Environment
server_1  CentOS 7.2.1511 (Core)  192.168.60.201
server_2  CentOS 7.2.1511 (Core)  192.168.60.202
4. Installation
* Install centos-release-gluster on server_1
[root@server_1 ~]# yum install centos-release-gluster -y
* Install glusterfs-server on server_1
[root@server_1 ~]# yum install glusterfs-server -y
* Start the glusterd service on server_1
[root@server_1 ~]# systemctl start glusterd
* Install centos-release-gluster on server_2
[root@server_2 ~]# yum install centos-release-gluster -y
* Install glusterfs-server on server_2
[root@server_2 ~]# yum install glusterfs-server -y
* Start the glusterd service on server_2
[root@server_2 ~]# systemctl start glusterd
5. Build the trusted pool [the trust relationship only needs to be established from one side]
* Establish the trust relationship between server_1 and server_2
[root@server_1 ~]# gluster peer probe 192.168.60.202
peer probe: success.
* Verify that the trusted pool was built
[root@server_1 ~]# gluster peer status
Number of Peers: 1

Hostname: 192.168.60.202
Uuid: 84d98fd8-4500-46d3-9d67-8bafacb5898b
State: Peer in Cluster (Connected)
[root@server_2 ~]# gluster peer status
Number of Peers: 1

Hostname: 192.168.60.201
Uuid: 20722daf-35c4-422c-99ff-6b0a41d07eb4
State: Peer in Cluster (Connected)
6. Create a distributed volume
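A distributed volume places each whole file on exactly one brick, chosen by hashing the file name. The pure-shell sketch below only illustrates the idea of name-based placement: the `cksum`-based hash and modulo selection are stand-ins, not GlusterFS's actual elastic hash (which maps hash ranges to bricks via extended attributes on directories).

```shell
#!/bin/sh
# Illustrate distributed-volume placement: each file name hashes to one brick.
# NOTE: cksum is a stand-in; GlusterFS really uses its own 32-bit DHT hash.
BRICKS="brick1 brick2"
NBRICKS=2

place() {
    name="$1"
    # Hash the file name, then pick a brick by modulo.
    h=$(printf '%s' "$name" | cksum | awk '{print $1}')
    idx=$(( h % NBRICKS + 1 ))
    brick=$(echo $BRICKS | awk -v i="$idx" '{print $i}')
    echo "$name -> $brick"
}

for f in a.log b.log c.log d.log; do
    place "$f"
done
```

Because placement depends only on the name, the same file always lands on the same brick, and no central metadata server is needed to find it.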
* Create the data directories on server_1 and server_2
[root@server_1 ~]# mkdir -p /data/exp1
[root@server_2 ~]# mkdir -p /data/exp2
* Create a distributed volume named test-volume
[root@server_1 ~]# gluster volume create test-volume 192.168.60.201:/data/exp1 192.168.60.202:/data/exp2 force
volume create: test-volume: success: please start the volume to access data
* Check the volume information
[root@server_1 ~]# gluster volume info test-volume
Volume Name: test-volume
Type: Distribute
Volume ID: 457ca1ff-ac55-4d59-b827-fb80fc0f4184
Status: Created
Snapshot Count: 0
Number of Bricks: 2
Transport-type: tcp
Bricks:
Brick1: 192.168.60.201:/data/exp1
Brick2: 192.168.60.202:/data/exp2
Options Reconfigured:
transport.address-family: inet
nfs.disable: on
[root@server_2 ~]# gluster volume info test-volume
(same output as on server_1)
* Start the volume
[root@server_1 ~]# gluster volume start test-volume
volume start: test-volume: success
7. Create a replicated volume [comparable to RAID 1]
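The create/info/start cycle repeats for every volume type in this tutorial. In scripts it can be useful to confirm a volume's Type and Status before starting it. A small awk sketch; the sample `gluster volume info` output is inlined here so the snippet runs standalone, but in practice it would be piped in from the command itself:

```shell
#!/bin/sh
# Extract the Type and Status fields from `gluster volume info` output.
# Sample output inlined (taken from this tutorial); in real use, pipe
# `gluster volume info test-volume` into the same awk programs.
info='Volume Name: test-volume
Type: Distribute
Status: Created
Number of Bricks: 2
Transport-type: tcp'

vol_type=$(printf '%s\n' "$info" | awk -F': ' '/^Type:/ {print $2}')
vol_status=$(printf '%s\n' "$info" | awk -F': ' '/^Status:/ {print $2}')
echo "type=$vol_type status=$vol_status"
# prints: type=Distribute status=Created
```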
* Create the data directories on server_1 and server_2
[root@server_1 ~]# mkdir -p /data/exp3
[root@server_2 ~]# mkdir -p /data/exp4
* Create a replicated volume named repl-volume
[root@server_1 ~]# gluster volume create repl-volume replica 2 transport tcp 192.168.60.201:/data/exp3 192.168.60.202:/data/exp4 force
volume create: repl-volume: success: please start the volume to access data
* Check the volume information
[root@server_1 ~]# gluster volume info repl-volume
Volume Name: repl-volume
Type: Replicate
Volume ID: 1924ed7b-73d4-45a9-af6d-fd19abb384cd
Status: Created
Snapshot Count: 0
Number of Bricks: 1 x 2 = 2
Transport-type: tcp
Bricks:
Brick1: 192.168.60.201:/data/exp3
Brick2: 192.168.60.202:/data/exp4
Options Reconfigured:
transport.address-family: inet
nfs.disable: on
[root@server_2 ~]# gluster volume info repl-volume
(same output as on server_1)
* Start the volume
[root@server_1 ~]# gluster volume start repl-volume
volume start: repl-volume: success
8. Create a striped volume [comparable to RAID 0]
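A striped volume cuts each file into fixed-size chunks and deals them round-robin across bricks, much like RAID 0. The local simulation below uses `split` to show the idea; the 4-byte chunk size and directory names are illustrative only. Note the RAID 0 trade-off it demonstrates: losing one "brick" loses part of every file, so striping offers no redundancy.

```shell
#!/bin/sh
# Simulate RAID0-style striping: cut a file into 4-byte chunks and deal
# them round-robin into two "brick" directories. Sizes are illustrative.
mkdir -p /tmp/stripe-demo/brick1 /tmp/stripe-demo/brick2
printf 'ABCDEFGHIJKLMNOP' > /tmp/stripe-demo/file.dat

cd /tmp/stripe-demo || exit 1
split -b 4 file.dat chunk.          # produces chunk.aa chunk.ab chunk.ac chunk.ad
i=0
for c in chunk.*; do
    if [ $((i % 2)) -eq 0 ]; then mv "$c" brick1/; else mv "$c" brick2/; fi
    i=$((i + 1))
done
echo "brick1 holds: $(ls brick1)"
echo "brick2 holds: $(ls brick2)"
# Reading the file back means reassembling the chunks in order:
cat brick1/chunk.aa brick2/chunk.ab brick1/chunk.ac brick2/chunk.ad
# prints: ABCDEFGHIJKLMNOP
```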
* Create the data directories on server_1 and server_2
[root@server_1 ~]# mkdir -p /data/exp5
[root@server_2 ~]# mkdir -p /data/exp6
* Create a striped volume named raid0-volume (note: the stripe translator has been deprecated in recent GlusterFS releases; it is shown here for completeness)
[root@server_1 ~]# gluster volume create raid0-volume stripe 2 transport tcp 192.168.60.201:/data/exp5 192.168.60.202:/data/exp6 force
volume create: raid0-volume: success: please start the volume to access data
* Check the volume information
[root@server_1 ~]# gluster volume info raid0-volume
Volume Name: raid0-volume
Type: Stripe
Volume ID: 13b36adb-7e8b-46e2-8949-f54eab5356f6
Status: Created
Snapshot Count: 0
Number of Bricks: 1 x 2 = 2
Transport-type: tcp
Bricks:
Brick1: 192.168.60.201:/data/exp5
Brick2: 192.168.60.202:/data/exp6
Options Reconfigured:
transport.address-family: inet
nfs.disable: on
[root@server_2 ~]# gluster volume info raid0-volume
(same output as on server_1)
* Start the volume
[root@server_1 ~]# gluster volume start raid0-volume
volume start: raid0-volume: success
9. Client applications
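The mounts created in this section with `mount.glusterfs` are one-off and will not survive a reboot. For persistent mounts, entries of the following form can go in `/etc/fstab` (server addresses and mount points taken from this tutorial; `_netdev` delays mounting until the network is up):

```
192.168.60.201:/test-volume   /mnt/g1  glusterfs  defaults,_netdev  0 0
192.168.60.202:/repl-volume   /mnt/g2  glusterfs  defaults,_netdev  0 0
192.168.60.201:/raid0-volume  /mnt/g3  glusterfs  defaults,_netdev  0 0
```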
* Install glusterfs-cli (mounting also requires the glusterfs-fuse package, which provides mount.glusterfs)
[root@client ~]# yum install glusterfs-cli -y
* Create the mount directories
[root@client ~]# mkdir /mnt/g1 /mnt/g2 /mnt/g3
* Mount the volumes
[root@client ~]# mount.glusterfs 192.168.60.201:/test-volume /mnt/g1
[root@client ~]# mount.glusterfs 192.168.60.202:/repl-volume /mnt/g2
[root@client ~]# mount.glusterfs 192.168.60.201:/raid0-volume /mnt/g3
10. Expand the volume
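Expanding a volume is an `add-brick` followed by a rebalance, and the rebalance runs asynchronously, so automation may need to poll until it finishes. A sketch of such a loop; `wait_for` takes the status command as arguments so the sketch can run here against a stub, while in practice it would be invoked as `wait_for gluster volume rebalance test-volume status`:

```shell
#!/bin/sh
# Poll a status command until its output reports completion (up to 60 tries).
# The command to run is passed as arguments, so the loop itself is generic.
wait_for() {
    tries=0
    while [ $tries -lt 60 ]; do
        if "$@" | grep -q 'completed'; then
            echo "rebalance completed"
            return 0
        fi
        tries=$((tries + 1))
        sleep 1
    done
    echo "timed out waiting for rebalance" >&2
    return 1
}

# Stub standing in for `gluster volume rebalance test-volume status`,
# so this sketch runs without a cluster:
fake_status() { echo "localhost completed"; }
wait_for fake_status
# prints: rebalance completed
```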
* Create the storage directory
[root@server_1 ~]# mkdir -p /data/exp9
* Expand the volume
[root@server_1 ~]# gluster volume add-brick test-volume 192.168.60.201:/data/exp9 force
volume add-brick: success
* Rebalance
[root@server_1 ~]# gluster volume rebalance test-volume start
volume rebalance: test-volume: success: Rebalance on test-volume has been started successfully. Use rebalance status command to check status of the rebalance process. ID: 008c3f28-d8a1-4f05-b63c-4543c51050ec
11. Summary
Demand drives technology: no technology is superior in itself; its value lies in the business problems it solves.