Kafka Cluster Management (Java Big Data Development Interview Notes)


drwxrwxrwx 1 dayuan dayuan 512 Jul 24 10:02 zookeeper-2/
drwxrwxrwx 1 dayuan dayuan 512 Jul 24 10:02 zookeeper-3/
itcast@Server-node:/mnt/d/zookeeper-cluster$

clientPort settings

In each ZooKeeper instance's zoo.cfg, configure its own dataDir and set clientPort to 2181, 2182, and 2183 respectively.

# the port at which the clients will connect
clientPort=2181

myid configuration

Create a myid file in each ZooKeeper instance's data directory, containing 0, 1, and 2 respectively. This file records each server's ID.

dayuan@MY-20190430BUDR:/mnt/d/zookeeper-cluster/zookeeper-1$ cat temp/zookeeper/data/myid
0
dayuan@MY-20190430BUDR:/mnt/d/zookeeper-cluster/zookeeper-1$
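
For reference, the three myid files can be created from the shell; a minimal sketch, assuming the same temp/zookeeper/data layout shown above:

mkdir -p /mnt/d/zookeeper-cluster/zookeeper-1/temp/zookeeper/data
mkdir -p /mnt/d/zookeeper-cluster/zookeeper-2/temp/zookeeper/data
mkdir -p /mnt/d/zookeeper-cluster/zookeeper-3/temp/zookeeper/data
echo 0 > /mnt/d/zookeeper-cluster/zookeeper-1/temp/zookeeper/data/myid
echo 1 > /mnt/d/zookeeper-cluster/zookeeper-2/temp/zookeeper/data/myid
echo 2 > /mnt/d/zookeeper-cluster/zookeeper-3/temp/zookeeper/data/myid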

zoo.cfg

In each ZooKeeper instance's zoo.cfg, configure the client port (clientPort) and the list of cluster servers.

dayuan@MY-20190430BUDR:/mnt/d/zookeeper-cluster/zookeeper-1$ cat conf/zoo.cfg

# The number of milliseconds of each tick (the ZK server heartbeat interval)
tickTime=2000
# The number of ticks that the initial synchronization phase can take
initLimit=10
# The number of ticks that can pass between sending a request and getting an acknowledgement
syncLimit=5
# the directory where the snapshot is stored.
# do not use /tmp for storage, /tmp here is just example sakes.
#dataDir=/tmp/zookeeper
dataDir=temp/zookeeper/data
dataLogDir=temp/zookeeper/log
# the port at which the clients will connect
clientPort=2181
# the maximum number of client connections.
# increase this if you need to handle more clients
#maxClientCnxns=60
# Be sure to read the maintenance section of the administrator guide before turning on autopurge.
# http://zookeeper.apache.org/doc/current/zookeeperAdmin.html#sc_maintenance
# The number of snapshots to retain in dataDir
#autopurge.snapRetainCount=3
# Purge task interval in hours
# Set to "0" to disable auto purge feature
#autopurge.purgeInterval=1
server.0=127.0.0.1:2888:3888
server.1=127.0.0.1:2889:3889
server.2=127.0.0.1:2890:3890
dayuan@MY-20190430BUDR:/mnt/d/zookeeper-cluster/zookeeper-1$

Explanation: server.<server ID>=<server IP address>:<peer communication port>:<leader election port>

Starting the cluster

Starting the cluster just means starting each instance separately; once they are up, check the running status of each instance.
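
For example, each instance can be started with zkServer.sh; a minimal sketch, assuming every instance is started from its own directory so that the relative dataDir resolves correctly:

cd /mnt/d/zookeeper-cluster/zookeeper-1 && bin/zkServer.sh start
cd /mnt/d/zookeeper-cluster/zookeeper-2 && bin/zkServer.sh start
cd /mnt/d/zookeeper-cluster/zookeeper-3 && bin/zkServer.sh start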

itcast@Server-node:/mnt/d/zookeeper-cluster/zookeeper-1$ bin/zkServer.sh status
ZooKeeper JMX enabled by default
Using config: /mnt/d/zookeeper-cluster/zookeeper-1/bin/../conf/zoo.cfg
Mode: leader

itcast@Server-node:/mnt/d/zookeeper-cluster/zookeeper-2$ bin/zkServer.sh status
ZooKeeper JMX enabled by default
Using config: /mnt/d/zookeeper-cluster/zookeeper-2/bin/../conf/zoo.cfg
Mode: follower

itcast@Server-node:/mnt/d/zookeeper-cluster/zookeeper-3$ bin/zkServer.sh status
ZooKeeper JMX enabled by default
Using config: /mnt/d/zookeeper-cluster/zookeeper-3/bin/../conf/zoo.cfg
Mode: follower
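
As an optional check (not part of the original steps), the ensemble can be queried with the bundled CLI client; a sketch assuming the ports configured above:

bin/zkCli.sh -server 127.0.0.1:2181,127.0.0.1:2182,127.0.0.1:2183
# once connected, running `ls /` should succeed against any of the three servers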

2. Kafka Cluster Setup

Cluster directory layout

itcast@Server-node:/mnt/d/kafka-cluster$ ll
total 0
drwxrwxrwx 1 dayuan dayuan 512 Aug 28 18:15 ./
drwxrwxrwx 1 dayuan dayuan 512 Aug 19 18:42 ../
drwxrwxrwx 1 dayuan dayuan 512 Aug 28 18:39 kafka-1/
drwxrwxrwx 1 dayuan dayuan 512 Jul 24 14:02 kafka-2/
drwxrwxrwx 1 dayuan dayuan 512 Jul 24 14:02 kafka-3/
drwxrwxrwx 1 dayuan dayuan 512 Aug 28 18:15 kafka-4/
itcast@Server-node:/mnt/d/kafka-cluster$

server.properties

# broker id; must be unique within the cluster
broker.id=1
# host address
host.name=127.0.0.1
# port
port=9092
# directory where the message logs are stored
log.dirs=/tmp/kafka/log/cluster/log3
# ZooKeeper connection string; separate multiple addresses with commas
zookeeper.connect=localhost:2181,localhost:2182,localhost:2183
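
Only one server.properties is listed; the other brokers differ only in their per-instance values. A short sketch of those differences, where the exact ports and log directories are illustrative assumptions rather than values from this setup:

# kafka-1/config/server.properties
broker.id=1
port=9092
log.dirs=/tmp/kafka/log/cluster/log1

# kafka-2/config/server.properties
broker.id=2
port=9093
log.dirs=/tmp/kafka/log/cluster/log2

# ... and likewise for kafka-3 and kafka-4, each with a unique broker.id, port, and log.dirs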

Starting the cluster

Open a shell in each Kafka instance's directory in turn and run the start command to bring it up (see the sketch below).
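
A minimal sketch of the start command, assuming each instance is launched with its own config/server.properties:

cd /mnt/d/kafka-cluster/kafka-1 && bin/kafka-server-start.sh config/server.properties

Repeat the same command in a separate terminal for kafka-2, kafka-3, and kafka-4.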


[2019-07-24 06:18:19,793] INFO [Transaction Marker Channel Manager 2]: Starting (kafka.coordinator.transaction.TransactionMarkerChannelManager)
[2019-07-24 06:18:19,793] INFO [TransactionCoordinator id=2] Startup complete. (kafka.coordinator.transaction.TransactionCoordinator)
[2019-07-24 06:18:19,846] INFO [/config/changes-event-process-thread]: Starting (kafka.common.ZkNodeChangeNotificationListener$ChangeEventProcessThread)
[2019-07-24 06:18:19,869] INFO [SocketServer brokerId=2] Started data-plane processors for 1 acceptors (kafka.network.SocketServer)
[2019-07-24 06:18:19,879] INFO Kafka version: 2.2.1 (org.apache.kafka.common.utils.AppInfoParser)
[2019-07-24 06:18:19,879] INFO Kafka commitId: 55783d3133a5a49a (org.apache.kafka.common.utils.AppInfoParser)
[2019-07-24 06:18:19,883] INFO [KafkaServer id=2] started (kafka.server.KafkaServer)
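
As an optional check that the brokers joined a single cluster (not shown in the original log), a replicated topic can be created and inspected; a sketch assuming Kafka 2.2+, where kafka-topics.sh accepts --bootstrap-server:

bin/kafka-topics.sh --create --bootstrap-server localhost:9092 --replication-factor 3 --partitions 3 --topic cluster-test
bin/kafka-topics.sh --describe --bootstrap-server localhost:9092 --topic cluster-test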

III. Multi-Cluster Synchronization

MirrorMaker exists to handle cross-cluster synchronization in Kafka by creating a mirrored cluster: the tool consumes messages from the source cluster and then republishes that data to the target cluster.

1. Configuration

Creating a mirror

Creating a mirror with MirrorMaker is fairly simple: once the target Kafka cluster is set up, you only need to start the mirror-maker program. One or more consumer config files and one producer config file are required; a whitelist or blacklist is optional. The consumer config points at the source Kafka cluster (its ZooKeeper for the old consumer, or bootstrap.servers for the new one), and the producer config points at the target cluster's brokers (broker.list / bootstrap.servers).

kafka-run-class.sh kafka.tools.MirrorMaker \
  --consumer.config sourceCluster1Consumer.config \
  --consumer.config sourceCluster2Consumer.config \
  --num.streams 2 \
  --producer.config targetClusterProducer.config \
  --whitelist=".*"

Consumer config file:

# format: host1:port1,host2:port2 ...
bootstrap.servers=localhost:9092
# consumer group id
group.id=test-consumer-group
# What to do when there is no initial offset in Kafka or if the current
# offset does not exist any more on the server: latest, earliest, none
#auto.offset.reset=

Producer config file:
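
A MirrorMaker producer config typically just points at the target cluster; a minimal sketch, where the hostname is a placeholder rather than a value from this setup:

# brokers of the target cluster, format: host1:port1,host2:port2 ...
bootstrap.servers=target-kafka-host:9092
# optional: require full acknowledgement from the target cluster
acks=all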
