Table of Contents
Environment:
Lab environment:
1. DNS name resolution (3 hosts)
2. Install Filebeat
Deploy from the source package
Enable collection of nginx logs
Configure the nginx module to point at the log locations
Modify the Filebeat configuration file
3. Install Logstash
3.1 Refer to the official documentation
3.2 Deploy from the source package
3.3 Modify the configuration file
A new pipeline file is created directly here
3.4 What Logstash does
4. Deploy the ES cluster
4.1 Deploy and install on each of the three hosts
4.2 Modify the Elasticsearch configuration file
4.3 Start the cluster
4.4 Check the cluster health status
4.5 View cluster node information
5. Deploy the Kafka cluster
5.1 Install
5.2 Install JDK 8
5.3 Configure ZooKeeper
elk2 and elk3 use the same configuration, no changes needed
5.4 Create the data and log directories
5.5 Create the myid file (on elk1)
5.6 Configure Kafka
5.7 Create the corresponding directory
5.8 Configure the other nodes
5.9 Start the ZooKeeper cluster
Run on each of the three nodes in turn:
Check the port
5.10 Start Kafka
Run on each of the three nodes in turn:
Verification: create a topic
Query elk1's topics from elk2 or elk3
6. Kibana
6.1 Deploy
6.2 Configure the main configuration file
6.3 Run
7. Running the environment
Kafka is already running at this point
Environment (lab setup):
elk1: 192.168.31.204
elk2: 192.168.31.205
elk3: 192.168.31.207
Software layout:
elk1: filebeat + logstash + elasticsearch + kafka + kibana
elk2: elasticsearch + kafka
elk3: elasticsearch + kafka
Steps:
1. DNS name resolution (3 hosts)
vim /etc/hosts
127.0.0.1   localhost localhost.localdomain localhost4 localhost4.localdomain4
::1         localhost localhost.localdomain localhost6 localhost6.localdomain6
192.168.31.204 elk1
192.168.31.205 elk2
192.168.31.207 elk3
Ping each host from the others to verify connectivity.
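For example, from elk1 (run the equivalent checks from elk2 and elk3 as well):
ping -c 3 elk2
ping -c 3 elk3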
2. Install Filebeat
Deploy from the source package. Official download page: Download Filebeat • Lightweight Log Analysis | Elastic
# tar xzvf filebeat-7.13.2-linux-x86_64.tar.gz -C /usr/local/
# mv /usr/local/filebeat-7.13.2-linux-x86_64 /usr/local/filebeat
Enable collection of nginx logs:
/usr/local/filebeat/filebeat -c /usr/local/filebeat/filebeat.yml modules enable nginx
Configure the nginx module to point at the log locations (the nginx log locations have been changed here); the module file lives at /usr/local/filebeat/modules.d/nginx.yml:
- module: nginx
  access:
    enabled: true
    var.paths: ["/var/log/access.log"]
  error:
    enabled: true
    var.paths: ["/var/log/error.log"]
  ingress_controller:
    enabled: false
Modify the Filebeat configuration file:
vim /usr/local/filebeat/filebeat.yml
filebeat.inputs:
- type: log
  enabled: true
  paths:
    - /tmp/*.log
filebeat.config.modules:
  path: ${path.config}/modules.d/*.yml
  reload.enabled: false
setup.template.settings:
  index.number_of_shards: 1
output.kafka:
  hosts: ["elk1:9092","elk2:9092","elk3:9092"]
  topic: "nginx"
  partition.round_robin:
    reachable_only: false
  required_acks: 1
  compression: gzip
  max_message_bytes: 1000000
processors:
  - add_host_metadata:
      when.not.contains.tags: forwarded
  - drop_event:
      when:
        regexp:
          message: "^WHY:"
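Before starting Filebeat it is worth validating this file; a minimal sanity check using Filebeat's built-in test subcommand (filebeat test output exists as well, though whether it supports the Kafka output in this version is not guaranteed):
/usr/local/filebeat/filebeat -c /usr/local/filebeat/filebeat.yml test config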
Filebeat pushes the collected logs to the Kafka message queue.
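Once the Kafka cluster from section 5 is up, one way to confirm that messages actually arrive on the nginx topic is Kafka's bundled console consumer (a sketch; --max-messages just limits the output):
/usr/local/kafka/bin/kafka-console-consumer.sh --bootstrap-server elk1:9092 --topic nginx --from-beginning --max-messages 5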
3. Install Logstash
3.1 Refer to the official documentation: Logstash Reference [7.16] | Elastic
3.2 Deploy from the source package
# tar -xf logstash-7.13.2-linux-x86_64.tar.gz -C /usr/local/
# mv /usr/local/logstash-7.13.2/ /usr/local/logstash
3.3 Modify the configuration file
A new pipeline file is created directly here:
vim /usr/local/logstash/config/first-pipeline.conf
input {
  kafka {
    type => "nginx_log"
    codec => json
    topics => ["nginx"]
    decorate_events => true
    bootstrap_servers => "192.168.31.204:9092,192.168.31.205:9092,192.168.31.207:9092"
  }
}
filter {
  grok {
    match => { "message" => "%{COMBINEDAPACHELOG}" }
  }
}
output {
  stdout { }
  if [log][file][path] == "/var/log/access.log" {
    elasticsearch {
      hosts => ["192.168.31.204:9200","192.168.31.205:9200","192.168.31.207:9200"]
      index => "%{[host][hostname]}-nginx-access-%{+YYYY.MM.dd}"
    }
  } else if [log][file][path] == "/var/log/error.log" {
    elasticsearch {
      hosts => ["192.168.31.204:9200","192.168.31.205:9200","192.168.31.207:9200"]
      index => "%{[host][hostname]}-nginx-error-%{+YYYY.MM.dd}"
    }
  }
}
3.4 What Logstash does
It takes the data received from Kafka and pushes it into Elasticsearch.
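Before running the full stack, the pipeline file can be syntax-checked with Logstash's standard test flag:
/usr/local/logstash/bin/logstash -f /usr/local/logstash/config/first-pipeline.conf --config.test_and_exit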
4. Deploy the ES cluster
Official docs: Install Elasticsearch with RPM | Elasticsearch Guide [7.16] | Elastic
Installation method: install the pre-downloaded rpm package with yum.
4.1 Deploy and install on each of the three hosts
yum -y install elasticsearch-7.13.2-x86_64.rpm
4.2 Modify the Elasticsearch configuration file
vim /etc/elasticsearch/elasticsearch.yml
cluster.name: elk
node.name: elk1
node.data: true
network.host: 0.0.0.0
http.port: 9200
discovery.seed_hosts:
  - elk1
  - 192.168.31.205:9300
  - 192.168.31.207
cluster.initial_master_nodes: ["elk1", "elk2", "elk3"]
On elk2, change node.name to elk2; on elk3, change it to elk3.
4.3 Start the cluster
systemctl start elasticsearch
The default ports are:
9200: the listening port for external access, e.g. checking cluster status, sending data in, and querying data
9300: used for communication between nodes in the cluster, e.g. master election and broadcasting of cluster node information
4.4 Check the cluster health status
curl -X GET "localhost:9200/_cat/health?v"
4.5 View cluster node information
curl -X GET "localhost:9200/_cat/nodes?v"
5. Deploy the Kafka cluster
Source package mirror: https://mirrors.tuna.tsinghua.edu.cn/apache/kafka/
5.1 Install
tar xzvf kafka_2.12-2.8.0.tgz -C /usr/local/
mv /usr/local/kafka_2.12-2.8.0/ /usr/local/kafka/
5.2 Install JDK 8
yum install -y java-1.8.0-openjdk
5.3 Configure ZooKeeper
vim /usr/local/kafka/config/zookeeper.properties
dataDir=/opt/data/zookeeper/data
dataLogDir=/opt/data/zookeeper/logs
clientPort=2181
tickTime=2000
initLimit=20
syncLimit=10
# server IP addresses
server.1=192.168.31.204:2888:3888
server.2=192.168.31.205:2888:3888
server.3=192.168.31.207:2888:3888
elk2 and elk3 use exactly the same configuration; nothing needs to change there.
5.4 Create the data and log directories
mkdir -p /opt/data/zookeeper/{data,logs}
5.5 Create the myid file (on elk1)
echo 1 > /opt/data/zookeeper/data/myid
On elk2 the value is 2; on elk3 it is 3.
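Each myid must match the server.N number assigned to that host in zookeeper.properties; a quick check on any node:
cat /opt/data/zookeeper/data/myid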
5.6 Configure Kafka
vim /usr/local/kafka/config/server.properties
broker.id=1
listeners=PLAINTEXT://192.168.31.204:9092
num.network.threads=3
num.io.threads=8
socket.send.buffer.bytes=102400
socket.receive.buffer.bytes=102400
socket.request.max.bytes=104857600
log.dirs=/opt/data/kafka/logs
num.partitions=6
num.recovery.threads.per.data.dir=1
offsets.topic.replication.factor=2
transaction.state.log.replication.factor=1
transaction.state.log.min.isr=1
log.retention.hours=168
log.segment.bytes=1073741824
log.retention.check.interval.ms=300000
zookeeper.connect=192.168.31.204:2181,192.168.31.205:2181,192.168.31.207:2181
zookeeper.connection.timeout.ms=6000
group.initial.rebalance.delay.ms=0
5.7 Create the corresponding directory
mkdir -p /opt/data/kafka/logs
5.8 Configure the other nodes
Simply copy the configured installation directory to the other nodes, then change Kafka's broker.id and listeners there, as sketched below.
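One possible way to do this, assuming SSH access between the nodes (repeat for elk3 with broker.id=3 and its own IP); remember that the /opt/data directories and the myid file described above must also be created on each node:
scp -r /usr/local/kafka elk2:/usr/local/
ssh elk2 "sed -i 's/^broker.id=1$/broker.id=2/' /usr/local/kafka/config/server.properties"
ssh elk2 "sed -i 's/192.168.31.204:9092/192.168.31.205:9092/' /usr/local/kafka/config/server.properties"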
5.9 Start the ZooKeeper cluster
Run on each of the three nodes in turn:
nohup /usr/local/kafka/bin/zookeeper-server-start.sh /usr/local/kafka/config/zookeeper.properties &
Check the port:
netstat -lntp | grep 2181
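Beyond the port check, ZooKeeper's srvr four-letter command (on the default command whitelist in the ZK 3.5 line that Kafka 2.8 bundles) reports each node's role, assuming nc is installed; one node should report Mode: leader and the other two Mode: follower:
echo srvr | nc localhost 2181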
5.10 Start Kafka
Run on each of the three nodes in turn:
nohup /usr/local/kafka/bin/kafka-server-start.sh /usr/local/kafka/config/server.properties &
Verification: create a topic
/usr/local/kafka/bin/kafka-topics.sh --create --zookeeper localhost --replication-factor 1 --partitions 1 --topic testtopic
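To go one step further than creating the topic, a message can be produced on one node and consumed on another with Kafka's bundled console tools (type a line into the producer, then Ctrl-C to exit each tool):
/usr/local/kafka/bin/kafka-console-producer.sh --bootstrap-server elk1:9092 --topic testtopic
/usr/local/kafka/bin/kafka-console-consumer.sh --bootstrap-server elk2:9092 --topic testtopic --from-beginning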
Query elk1's topics from elk2 or elk3:
/usr/local/kafka/bin/kafka-topics.sh --zookeeper 192.168.31.204:2181 --list
6. Kibana
6.1 Deploy
Official download: Download Kibana Free | Get Started Now | Elastic
Deploy from the tar package:
tar xzvf kibana-7.13.2-linux-x86_64.tar.gz -C /usr/local/
mv /usr/local/kibana-7.13.2-linux-x86_64 /usr/local/kibana
6.2 Configure the main configuration file (config/kibana.yml)
server.port: 5601                           # changed
server.host: "0.0.0.0"                      # changed
# address and port for connecting to the ES cluster
elasticsearch.hosts: ["http://elk1:9200"]   # changed; elk1 is the name defined in /etc/hosts
# log file path
# logging.dest: stdout
logging.dest: /var/log/kibana/kibana.log    # changed
# set the UI language to Chinese
i18n.locale: "zh-CN"                        # changed
6.3 Run
nohup /usr/local/kibana/bin/kibana --allow-root &
Then open http://192.168.31.204:5601 in a browser.
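To confirm Kibana is actually up, its status API (a standard Kibana endpoint) can be queried; it returns a JSON document whose overall state should be green once Kibana has connected to ES:
curl -s http://192.168.31.204:5601/api/status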
7. Running the environment
Kafka is already running at this point (started in section 5).
1. Start Elasticsearch (all 3 hosts)
systemctl start elasticsearch
2. Start Filebeat to collect logs
nohup /usr/local/filebeat/filebeat -c /usr/local/filebeat/filebeat.yml -e &
3. Start Logstash
/usr/local/logstash/bin/logstash -f /usr/local/logstash/config/first-pipeline.conf
4. Verify on Kafka that the topic has been created
/usr/local/kafka/bin/kafka-topics.sh --zookeeper 192.168.31.204 --list
5. Check the indices on the ES cluster
curl -X GET "192.168.31.204:9200/_cat/indices"
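If the indices are there, a sample document can be pulled back to confirm the pipeline end to end (assuming the Filebeat host reports hostname elk1, matching the index pattern set in the Logstash config):
curl -X GET "192.168.31.204:9200/elk1-nginx-access-*/_search?size=1&pretty"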
6. Add the index pattern in Kibana and view the data