What to do when Kafka broker nodes change?

When the number of broker nodes changes, the partitions and replicas of existing topics need to be reassigned. The steps below walk through the process.
Note: replace the IP in --bootstrap-server with your own server address.

Specify the topics

First, create a JSON file that names the topics to be reassigned.

[root@localhost kafka]# vim topics-to-move.json
{
 "topics": [
 	{"topic": "test"}
 ],
 "version": 1
}
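If several topics need to move at once, this file can be built programmatically instead of by hand. A minimal Python sketch (the helper name is my own, not part of any Kafka tooling):

```python
import json

def build_topics_file(topics):
    # Payload layout expected by --topics-to-move-json-file
    return json.dumps(
        {"topics": [{"topic": t} for t in topics], "version": 1},
        indent=1,
    )

# Reproduce the single-topic file from the example above
with open("topics-to-move.json", "w") as f:
    f.write(build_topics_file(["test"]))
```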

First, check the current assignment of the test topic; it is currently spread across brokers 0, 1, and 2.

[root@localhost kafka]# ./bin/kafka-topics.sh --bootstrap-server IP:9092 --topic test --describe
Topic: test	TopicId: 8ZUQdSBFSX6bifqtfkqtfw	PartitionCount: 3	ReplicationFactor: 3	Configs: segment.bytes=1073741824
	Topic: test	Partition: 0	Leader: 0	Replicas: 2,1,0	Isr: 2,1,0
	Topic: test	Partition: 1	Leader: 1	Replicas: 0,2,1	Isr: 1,2,0
	Topic: test	Partition: 2	Leader: 2	Replicas: 1,0,2	Isr: 0,2,1
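For scripting, the --describe output can be turned into structured data; a sketch that simply assumes the tab-separated layout shown above (this is text scraping, not an official API):

```python
def parse_describe(output):
    """Parse per-partition lines of `kafka-topics.sh --describe` output.

    Returns a list of dicts with partition, leader, replicas, and ISR.
    Assumes each field is a tab-separated "Key: value" pair.
    """
    partitions = []
    for line in output.splitlines():
        fields = dict(
            f.split(": ", 1) for f in line.strip().split("\t") if ": " in f
        )
        if "Partition" in fields:  # header line only has "PartitionCount"
            partitions.append({
                "partition": int(fields["Partition"]),
                "leader": int(fields["Leader"]),
                "replicas": [int(b) for b in fields["Replicas"].split(",")],
                "isr": [int(b) for b in fields["Isr"].split(",")],
            })
    return partitions

# Excerpt of the output above, as sample input
sample = (
    "Topic: test\tTopicId: 8ZUQdSBFSX6bifqtfkqtfw\tPartitionCount: 3\t"
    "ReplicationFactor: 3\tConfigs: segment.bytes=1073741824\n"
    "\tTopic: test\tPartition: 0\tLeader: 0\tReplicas: 2,1,0\tIsr: 2,1,0\n"
)
print(parse_describe(sample))
```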

Generate a new assignment plan

Run the following kafka-reassign-partitions.sh command to generate a new assignment plan; --broker-list specifies the brokers to reassign to (the node list after adding or removing brokers).

[root@localhost kafka]# ./bin/kafka-reassign-partitions.sh --bootstrap-server IP:9092 --topics-to-move-json-file topics-to-move.json --broker-list "0,1,2,3" --generate
Current partition replica assignment
{"version":1,"partitions":[{"topic":"test","partition":0,"replicas":[2,1,0],"log_dirs":["any","any","any"]},{"topic":"test","partition":1,"replicas":[1,0,2],"log_dirs":["any","any","any"]},{"topic":"test","partition":2,"replicas":[0,2,1],"log_dirs":["any","any","any"]}]}

Proposed partition reassignment configuration
{"version":1,"partitions":[{"topic":"test","partition":0,"replicas":[0,1,2],"log_dirs":["any","any","any"]},{"topic":"test","partition":1,"replicas":[1,2,3],"log_dirs":["any","any","any"]},{"topic":"test","partition":2,"replicas":[2,3,0],"log_dirs":["any","any","any"]}]}

As shown, Current partition replica assignment is the assignment as it stands, and Proposed partition reassignment configuration is the newly generated plan, which now includes broker 3.

Create the replica assignment plan

Save the plan generated in the previous step to a JSON file.

[root@localhost kafka]#  vim increase-replication-factor.json
{"version":1,"partitions":[{"topic":"test","partition":0,"replicas":[0,1,2],"log_dirs":["any","any","any"]},{"topic":"test","partition":1,"replicas":[1,2,3],"log_dirs":["any","any","any"]},{"topic":"test","partition":2,"replicas":[2,3,0],"log_dirs":["any","any","any"]}]}
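Before running --execute, it is worth a quick sanity check that every replica in the plan refers to a broker in the target list. A small pre-flight sketch (my addition, not part of the Kafka tooling; log_dirs omitted for brevity):

```python
import json

# The proposed plan from the step above (log_dirs entries dropped)
plan = json.loads(
    '{"version":1,"partitions":['
    '{"topic":"test","partition":0,"replicas":[0,1,2]},'
    '{"topic":"test","partition":1,"replicas":[1,2,3]},'
    '{"topic":"test","partition":2,"replicas":[2,3,0]}]}'
)

def check_plan(plan, brokers):
    # Every replica id must be one of the brokers we reassigned to
    brokers = set(brokers)
    for p in plan["partitions"]:
        unknown = set(p["replicas"]) - brokers
        if unknown:
            raise ValueError(
                f"partition {p['partition']} references unknown brokers {sorted(unknown)}"
            )
    return True

check_plan(plan, [0, 1, 2, 3])
```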

Execute the replica assignment plan

[root@localhost kafka]# ./bin/kafka-reassign-partitions.sh --bootstrap-server IP:9092 --reassignment-json-file increase-replication-factor.json --execute
Current partition replica assignment

{"version":1,"partitions":[{"topic":"test","partition":0,"replicas":[2,1,0],"log_dirs":["any","any","any"]},{"topic":"test","partition":1,"replicas":[1,0,2],"log_dirs":["any","any","any"]},{"topic":"test","partition":2,"replicas":[0,2,1],"log_dirs":["any","any","any"]}]}

Save this to use as the --reassignment-json-file option during rollback
Successfully started partition reassignments for test-0,test-1,test-2
Verify the replica assignment plan

[root@localhost kafka]# ./bin/kafka-reassign-partitions.sh --bootstrap-server IP:9092 --reassignment-json-file increase-replication-factor.json --verify
Status of partition reassignment:
Reassignment of partition test-0 is complete.
Reassignment of partition test-1 is complete.
Reassignment of partition test-2 is complete.

Clearing broker-level throttles on brokers 0,1,2,3
Clearing topic-level throttles on topic test

Check the test topic's assignment again; the partitions are now spread across all four brokers 0, 1, 2, and 3.

[root@localhost kafka]# ./bin/kafka-topics.sh --bootstrap-server IP:9092 --describe --topic test
Topic: test	TopicId: 8ZUQdSBFSX6bifqtfkqtfw	PartitionCount: 3	ReplicationFactor: 3	Configs: segment.bytes=1073741824
	Topic: test	Partition: 0	Leader: 0	Replicas: 0,1,2	Isr: 2,1,0
	Topic: test	Partition: 1	Leader: 1	Replicas: 1,2,3	Isr: 1,2,3
	Topic: test	Partition: 2	Leader: 2	Replicas: 2,3,0	Isr: 0,2,3

Summary:

  1. Specify the topics to operate on
  2. Generate the replica assignment plan
  3. Execute the replica assignment plan
  4. Verify the replica assignment plan

Feel free to share; when reposting, please credit the source: 内存溢出

Original article: http://outofmemory.cn/langs/719354.html
