Integrating Kafka with Spring Boot



First, start Kafka on Windows. To do so, download the Kafka and ZooKeeper binary packages, then start ZooKeeper followed by the Kafka broker.
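
Assuming a Kafka binary distribution unpacked on Windows (the 2.x distributions ship the Windows start scripts and a bundled ZooKeeper config), the two servers can be started in two separate terminals from the Kafka installation directory:

```shell
:: Terminal 1: start ZooKeeper with the bundled config
bin\windows\zookeeper-server-start.bat config\zookeeper.properties

:: Terminal 2: start the Kafka broker (listens on localhost:9092 by default)
bin\windows\kafka-server-start.bat config\server.properties
```

These are the standard scripts shipped in the Kafka distribution; if you installed ZooKeeper separately, start it with its own `zkServer` script instead of the bundled one.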

pom.xml:

<?xml version="1.0" encoding="UTF-8"?>
<project xmlns="http://maven.apache.org/POM/4.0.0"
         xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
         xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 http://maven.apache.org/xsd/maven-4.0.0.xsd">
    <modelVersion>4.0.0</modelVersion>
    <parent>
        <groupId>org.springframework.boot</groupId>
        <artifactId>spring-boot-starter-parent</artifactId>
        <version>2.1.5.RELEASE</version>
        <relativePath/>
    </parent>
    <groupId>com.cxy</groupId>
    <artifactId>skafka</artifactId>
    <version>0.0.1-SNAPSHOT</version>
    <name>skafka</name>
    <description>Demo project for Spring Boot</description>

    <properties>
        <java.version>1.8</java.version>
    </properties>

    <dependencies>
        <dependency>
            <groupId>org.springframework.boot</groupId>
            <artifactId>spring-boot-starter-web</artifactId>
        </dependency>
        <dependency>
            <groupId>org.springframework.kafka</groupId>
            <artifactId>spring-kafka</artifactId>
        </dependency>
        <dependency>
            <groupId>com.alibaba</groupId>
            <artifactId>fastjson</artifactId>
            <version>1.2.56</version>
        </dependency>
        <dependency>
            <groupId>org.projectlombok</groupId>
            <artifactId>lombok</artifactId>
            <optional>true</optional>
        </dependency>
        <dependency>
            <groupId>org.springframework.boot</groupId>
            <artifactId>spring-boot-starter-test</artifactId>
            <scope>test</scope>
        </dependency>
        <dependency>
            <groupId>org.springframework.kafka</groupId>
            <artifactId>spring-kafka-test</artifactId>
            <scope>test</scope>
        </dependency>
    </dependencies>

    <build>
        <plugins>
            <plugin>
                <groupId>org.springframework.boot</groupId>
                <artifactId>spring-boot-maven-plugin</artifactId>
            </plugin>
        </plugins>
    </build>
</project>
Startup class:

package com.cxy.skafka;

import com.cxy.skafka.component.UserLogProducer;
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.boot.SpringApplication;
import org.springframework.boot.autoconfigure.SpringBootApplication;

import javax.annotation.PostConstruct;

@SpringBootApplication
public class SkafkaApplication {

    public static void main(String[] args) {
        SpringApplication.run(SkafkaApplication.class, args);
    }

    @Autowired
    private UserLogProducer userLogProducer;

    // Send ten test messages once the application context is up.
    @PostConstruct
    public void init() {
        for (int i = 0; i < 10; i++) {
            userLogProducer.sendlog(String.valueOf(i));
        }
    }
}

Model class:

package com.cxy.skafka.model;

import lombok.Data;
import lombok.experimental.Accessors;

@Data
@Accessors
public class Userlog {
    private String username;
    private String userid;
    private String state;
}
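
Lombok's @Data generates the getters, setters, equals/hashCode, and toString for Userlog at compile time, so the class body stays empty. For reference, a hand-written sketch of roughly what Lombok produces (the class name UserlogPlain is illustrative, not part of the project):

```java
// Plain-Java equivalent of what Lombok's @Data generates for Userlog.
// Illustrative sketch only; the real project relies on Lombok instead.
public class UserlogPlain {
    private String username;
    private String userid;
    private String state;

    public String getUsername() { return username; }
    public void setUsername(String username) { this.username = username; }
    public String getUserid() { return userid; }
    public void setUserid(String userid) { this.userid = userid; }
    public String getState() { return state; }
    public void setState(String state) { this.state = state; }

    @Override
    public String toString() {
        return "UserlogPlain(username=" + username
                + ", userid=" + userid + ", state=" + state + ")";
    }

    public static void main(String[] args) {
        UserlogPlain u = new UserlogPlain();
        u.setUsername("cxy");
        u.setUserid("0");
        u.setState("1");
        System.out.println(u); // prints UserlogPlain(username=cxy, userid=0, state=1)
    }
}
```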
Producer:

package com.cxy.skafka.component;

import com.alibaba.fastjson.JSON;
import com.cxy.skafka.model.Userlog;
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.kafka.core.KafkaTemplate;
import org.springframework.stereotype.Component;

@Component
public class UserLogProducer {

    @Autowired
    private KafkaTemplate<String, String> kafkaTemplate;

    public void sendlog(String userid) {
        Userlog userlog = new Userlog();
        userlog.setUsername("cxy");
        userlog.setState("1");
        userlog.setUserid(userid);

        System.err.println("Producing: " + userlog);

        // Serialize the bean to JSON and publish it to the "userLog" topic.
        kafkaTemplate.send("userLog", JSON.toJSONString(userlog));
    }
}
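
JSON.toJSONString turns the Userlog bean into a JSON string before it is sent. A stdlib-only sketch of the payload shape (the helper PayloadSketch is hypothetical, and fastjson's actual field ordering may differ):

```java
// Sketch of the JSON payload UserLogProducer publishes to the "userLog" topic.
// Hand-rolled with string concatenation for illustration; the project uses
// fastjson, whose real output may order the fields differently.
public class PayloadSketch {
    static String toJson(String username, String userid, String state) {
        return "{\"username\":\"" + username + "\",\"userid\":\"" + userid
                + "\",\"state\":\"" + state + "\"}";
    }

    public static void main(String[] args) {
        // Payload for the first message sent from init()
        System.out.println(toJson("cxy", "0", "1"));
    }
}
```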

Consumer:

package com.cxy.skafka.component;

import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.springframework.kafka.annotation.KafkaListener;
import org.springframework.stereotype.Component;

import java.util.Optional;

@Component
public class UserLogConsumer {

    @KafkaListener(topics = {"userLog"})
    public void consumer(ConsumerRecord<String, String> consumerRecord) {
        // Guard against null record values (e.g. tombstone messages).
        Optional<String> kafkaMsg = Optional.ofNullable(consumerRecord.value());
        if (kafkaMsg.isPresent()) {
            String msg = kafkaMsg.get();
            System.err.println(msg);
        }
    }
}
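
The Optional.ofNullable guard in the listener skips records whose value is null instead of throwing a NullPointerException. The same pattern in isolation (the class OptionalGuardDemo is an illustrative name, not part of the project):

```java
import java.util.Optional;

// Minimal sketch of the null-guard used in UserLogConsumer:
// Optional.ofNullable(...) lets the listener skip null record values.
public class OptionalGuardDemo {
    static String handle(String recordValue) {
        Optional<String> kafkaMsg = Optional.ofNullable(recordValue);
        if (kafkaMsg.isPresent()) {
            return kafkaMsg.get(); // process the message
        }
        return "<skipped>";        // null value: nothing to do
    }

    public static void main(String[] args) {
        System.out.println(handle("{\"userid\":\"0\"}")); // prints the payload
        System.out.println(handle(null));                 // prints <skipped>
    }
}
```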
 
Configuration file (application.properties):

server.port=8080

#=============== producer =======================
# Kafka broker address
spring.kafka.bootstrap-servers=localhost:9092
# Number of retries when a send fails
spring.kafka.producer.retries=0
# Batch size (bytes) for each batched send
spring.kafka.producer.batch-size=16384
# Total producer buffer memory (bytes)
spring.kafka.producer.buffer-memory=33554432
# Serializers for the message key and value
spring.kafka.producer.key-serializer=org.apache.kafka.common.serialization.StringSerializer
spring.kafka.producer.value-serializer=org.apache.kafka.common.serialization.StringSerializer

#=============== consumer =======================
# Default consumer group id
spring.kafka.consumer.group-id=user-log-group
# earliest: reset to the smallest offset in the partition;
# latest: reset to the newest offset (only consume data produced after startup)
spring.kafka.consumer.auto-offset-reset=earliest
# Whether to auto-commit offsets
spring.kafka.consumer.enable-auto-commit=true
# Auto-commit interval (ms after a message is received)
spring.kafka.consumer.auto-commit-interval=100
# Deserializers for the message key and value
spring.kafka.consumer.key-deserializer=org.apache.kafka.common.serialization.StringDeserializer
spring.kafka.consumer.value-deserializer=org.apache.kafka.common.serialization.StringDeserializer

Original article: http://outofmemory.cn/zaji/5574625.html
