Before writing about Kafka itself, let's stand up a Kafka cluster with Docker. I'm using Tencent Cloud here, and I have to say: the Tencent Cloud experience beats Alibaba Cloud's. Take notes, Alibaba.
1. Create a dedicated subnet, which makes the containers easier to manage:
docker network create --subnet 172.19.0.0/16 --gateway 172.19.0.1 zookeeper_kafka
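With a /16 subnet there is plenty of room to pin each container to a fixed address. As a quick sanity sketch (plain shell, just printing the IP/port plan that the compose file below actually assigns to the four brokers):

```shell
# Print the static IP / host-port plan for the four brokers:
# 172.19.0.12-15 and ports 9092-9095, as pinned in docker-compose.yml.
for i in 0 1 2 3; do
  echo "kafka$i -> 172.19.0.1$((i + 2)) port 909$((i + 2))"
done
```

ZooKeeper takes 172.19.0.11, so the brokers start at .12.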
2. Create a docker-compose.yml file (vi or a cat heredoc both work):
version: '3.3'
services:
  zookeeper:
    image: wurstmeister/zookeeper
    ports:
      - 2181:2181
    container_name: zookeeper
    networks:
      default:
        ipv4_address: 172.19.0.11
  kafka0:
    image: wurstmeister/kafka
    depends_on:
      - zookeeper
    container_name: kafka0
    ports:
      - 9092:9092
    environment:
      KAFKA_ADVERTISED_LISTENERS: PLAINTEXT://kafka0:9092
      KAFKA_LISTENERS: PLAINTEXT://kafka0:9092
      KAFKA_ZOOKEEPER_CONNECT: zookeeper:2181
      KAFKA_BROKER_ID: 0
    volumes:
      - /root/data/kafka0/data:/data
      - /root/data/kafka0/log:/datalog
    networks:
      default:
        ipv4_address: 172.19.0.12
  kafka1:
    image: wurstmeister/kafka
    depends_on:
      - zookeeper
    container_name: kafka1
    ports:
      - 9093:9093
    environment:
      KAFKA_ADVERTISED_LISTENERS: PLAINTEXT://kafka1:9093
      KAFKA_LISTENERS: PLAINTEXT://kafka1:9093
      KAFKA_ZOOKEEPER_CONNECT: zookeeper:2181
      KAFKA_BROKER_ID: 1
    volumes:
      - /root/data/kafka1/data:/data
      - /root/data/kafka1/log:/datalog
    networks:
      default:
        ipv4_address: 172.19.0.13
  kafka2:
    image: wurstmeister/kafka
    depends_on:
      - zookeeper
    container_name: kafka2
    ports:
      - 9094:9094
    environment:
      KAFKA_ADVERTISED_LISTENERS: PLAINTEXT://kafka2:9094
      KAFKA_LISTENERS: PLAINTEXT://kafka2:9094
      KAFKA_ZOOKEEPER_CONNECT: zookeeper:2181
      KAFKA_BROKER_ID: 2
    volumes:
      - /root/data/kafka2/data:/data
      - /root/data/kafka2/log:/datalog
    networks:
      default:
        ipv4_address: 172.19.0.14
  kafka3:
    image: wurstmeister/kafka
    depends_on:
      - zookeeper
    container_name: kafka3
    ports:
      - 9095:9095
    environment:
      KAFKA_ADVERTISED_LISTENERS: PLAINTEXT://kafka3:9095
      KAFKA_LISTENERS: PLAINTEXT://kafka3:9095
      KAFKA_ZOOKEEPER_CONNECT: zookeeper:2181
      KAFKA_BROKER_ID: 3
    volumes:
      - /root/data/kafka3/data:/data
      - /root/data/kafka3/log:/datalog
    networks:
      default:
        ipv4_address: 172.19.0.15
  kafka-manager:
    image: sheepkiller/kafka-manager:latest
    restart: unless-stopped
    container_name: kafka-manager
    hostname: kafka-manager
    ports:
      - "9000:9000"
    links:            # containers created by this compose file
      - kafka1
      - kafka2
      - kafka3
    external_links:   # containers outside this compose file
      - zookeeper
    environment:
      ZK_HOSTS: zookeeper:2181   # or your host IP:2181; the original had zoo1 here, which does not resolve in this setup
      TZ: CST-8
networks:
  default:
    external:
      name: zookeeper_kafka
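One caveat with this file: each broker advertises PLAINTEXT://kafkaN:909N, a hostname that only resolves inside the zookeeper_kafka network. If you ever need to reach a broker from outside Docker (say, from your laptop), the wurstmeister image supports a split listener setup. A sketch for kafka0 only, where the INSIDE/OUTSIDE names and port 19092 are my own choices and <host-ip> is a placeholder for your server's address:

```yaml
# Hypothetical variant of kafka0's environment for external access.
environment:
  KAFKA_LISTENERS: INSIDE://0.0.0.0:9092,OUTSIDE://0.0.0.0:19092
  KAFKA_ADVERTISED_LISTENERS: INSIDE://kafka0:9092,OUTSIDE://<host-ip>:19092
  KAFKA_LISTENER_SECURITY_PROTOCOL_MAP: INSIDE:PLAINTEXT,OUTSIDE:PLAINTEXT
  KAFKA_INTER_BROKER_LISTENER_NAME: INSIDE
```

For the purely in-network tests in this post, the simpler single-listener config above is enough.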
As long as none of these ports are already taken on your host, the file can be used as-is.
3. Bring everything up with docker-compose; the -d flag detaches, so the containers run in the background:
docker-compose -f docker-compose.yml up -d
If it worked, docker ps will show all six containers (zookeeper, kafka0 through kafka3, and kafka-manager) running.
4. Test a Kafka producer and consumer
Open a shell inside kafka0:
docker exec -it kafka0 /bin/bash
Change to the directory that holds the Kafka CLI scripts:
cd /opt/kafka_2.13-2.7.1/bin/
Create a topic named chat with 5 partitions and a replication factor of 3. Since we're inside the Docker network, the ZooKeeper address is simply the container name:
kafka-topics.sh --create --topic chat --partitions 5 --zookeeper zookeeper:2181 --replication-factor 3
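A note on those numbers: Kafka refuses to create a topic whose replication factor exceeds the number of live brokers, and here each of the 5 partitions gets 3 replicas spread across our 4 brokers. A tiny shell sanity check of that arithmetic (values taken from the command above):

```shell
brokers=4 partitions=5 rf=3   # 4 brokers in the cluster; 5 partitions, replication factor 3
if [ "$rf" -le "$brokers" ]; then
  echo "ok: replication factor $rf fits on $brokers brokers"
else
  echo "error: replication factor $rf exceeds broker count $brokers"
fi
echo "total partition replicas across the cluster: $((partitions * rf))"
```

So the brokers hold 15 partition replicas in total, roughly 3-4 each.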
Start the console producer:
kafka-console-producer.sh --broker-list kafka0:9092 --topic chat
Open another terminal and exec into any other broker (kafka1, kafka2, or kafka3 all work), then change into the same bin directory.
Start the console consumer:
kafka-console-consumer.sh --bootstrap-server kafka2:9094 --topic chat --from-beginning
Type a message in the producer window, and it shows up in the consumer window.
That completes the basic cluster setup.
What's still missing is a proper Kafka Manager client. The usual route installs it on Kubernetes, which means setting up Kubernetes first, so I'll leave that as a follow-up; I'd actually like to try building a Kafka-Manager-style dashboard on top of Kibana instead. Either way, the cluster as built above already works fine on its own.