Building a Log Collection and Analysis System with Filebeat + Redis + Logstash + Elasticsearch + Kibana


Environment notes

Logstash, Elasticsearch, and Kibana run together on one machine, set up with Docker.
Redis runs on its own machine.
Filebeat runs on the same machine as the project whose logs are being collected.

For installing Docker, see the docker installation notes.

Installing Redis
wget http://download.redis.io/releases/redis-6.0.8.tar.gz
tar xzf redis-6.0.8.tar.gz
cd redis-6.0.8
make

# If make fails with "/bin/sh: cc: command not found", install a compiler:
# yum install gcc-c++ -y

# If make fails with the fatal error "jemalloc/jemalloc.h: No such file or directory", build against libc malloc:
# make MALLOC=libc

# If make fails with "error: 'struct redisServer' has no member named 'unixsocket'", the gcc version is too old; install a newer toolchain and re-run make:
# yum -y install centos-release-scl
# yum -y install devtoolset-9-gcc devtoolset-9-gcc-c++ devtoolset-9-binutils
# scl enable devtoolset-9 bash

# Start Redis with its default settings
cd src
./redis-server

# Additional settings in redis.conf
# allow remote access
# bind 0.0.0.0
# run as a background daemon
# daemonize yes
# set the password to 1234567890
# requirepass 1234567890

# Start Redis using the config file
cd src
./redis-server ../redis.conf
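The three redis.conf settings above are easy to get wrong, and a typo in any of them will break the pipeline later. A minimal sanity-check sketch in Python (parse_redis_conf is a hypothetical helper written for this guide, not part of Redis):

```python
# Sanity-check the redis.conf directives this guide relies on.
# parse_redis_conf is a hypothetical helper, not part of Redis itself.

def parse_redis_conf(text):
    """Return a dict of directive -> value, skipping comments and blank lines."""
    settings = {}
    for line in text.splitlines():
        line = line.strip()
        if not line or line.startswith("#"):
            continue
        key, _, value = line.partition(" ")
        settings[key] = value.strip()
    return settings

# Example fragment matching the settings above.
sample = """
bind 0.0.0.0
daemonize yes
requirepass 1234567890
"""

conf = parse_redis_conf(sample)
assert conf["bind"] == "0.0.0.0"            # remote access allowed
assert conf["daemonize"] == "yes"           # runs in the background
assert conf["requirepass"] == "1234567890"  # password Filebeat and Logstash must use
```

If any assertion fails, fix redis.conf before moving on; the same password must appear again in logstash.conf and log_redis.yml below.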
Installing the ELK stack

Setting up Elasticsearch 7.17.1 with Docker
# Pull the image
docker pull elasticsearch:7.17.1

# Raise vm.max_map_count: append the line below to the end of /etc/sysctl.conf
vi /etc/sysctl.conf
vm.max_map_count=262144

# After saving sysctl.conf, reload the kernel settings
/sbin/sysctl -p


# On the host, create the config, data, and plugin directories that will be mounted into the container
cd /home
mkdir -p elasticsearch/config
mkdir -p elasticsearch/data
mkdir -p elasticsearch/plugins

echo "http.host: 0.0.0.0" >> elasticsearch/config/elasticsearch.yml

chmod -R 777 elasticsearch/

# Start Elasticsearch (single node; the small JVM heap here is only suitable for testing)
docker run --name elasticsearch -p 9200:9200 -p 9300:9300  -e "discovery.type=single-node" -e ES_JAVA_OPTS="-Xms64m -Xmx128m" -v /home/elasticsearch/config/elasticsearch.yml:/usr/share/elasticsearch/config/elasticsearch.yml -v /home/elasticsearch/data:/usr/share/elasticsearch/data -v /home/elasticsearch/plugins:/usr/share/elasticsearch/plugins -d elasticsearch:7.17.1
Setting up Kibana 7.17.1 with Docker
docker pull kibana:7.17.1

docker run --name kibana --link elasticsearch:elasticsearch -p 5601:5601 -d kibana:7.17.1
Setting up Logstash 7.17.1 with Docker
docker pull logstash:7.17.1

cd /home
mkdir logstash
cd /home/logstash
mkdir config pipeline
cd /home/logstash/config
touch logstash.yml

vim logstash.yml

# Add the following two settings (10.0.3.102 is the Elasticsearch host)
# http.host: "0.0.0.0"
# xpack.monitoring.elasticsearch.hosts: [ "http://10.0.3.102:9200" ]

# Save and close logstash.yml

cd /home/logstash/pipeline
touch logstash.conf

vim logstash.conf
# Add the input/output configuration below: read log events from Redis and write them to Elasticsearch
# input {
#     redis {
#         host => "10.0.3.101"
#         port => 6379
#         password => "1234567890"
#         data_type => list
#         key => "filebeat"
#     }
# }
# 
# output {
#   elasticsearch {
#     hosts => ["http://10.0.3.102:9200"]
#     index => "applog"
#   }
# }

# Save and close logstash.conf

chmod -R 777 /home/logstash/

docker run -d --name logstash -p 5044:5044 -p 9600:9600 -v /home/logstash/config/logstash.yml:/usr/share/logstash/config/logstash.yml -v /home/logstash/pipeline/:/usr/share/logstash/pipeline/ logstash:7.17.1
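The Redis stage works as a simple queue: Filebeat pushes each log line onto the Redis list named filebeat as a JSON document, and Logstash's redis input (data_type => list) pops entries off that same list and ships them to the applog index. A minimal in-memory sketch of that handoff, using a Python deque in place of a real Redis list and a made-up log event:

```python
import json
from collections import deque

# Stand-in for the Redis list "filebeat" (data_type => list in logstash.conf).
redis_list = deque()

# Producer side: Filebeat serializes each log line as a JSON event and
# pushes it onto the list (RPUSH against the real Redis).
event = {
    "message": "2022-05-16 10:00:00 INFO app started",
    "log": {"file": {"path": "/opt/myproject/logs/catalina.out"}},
}
redis_list.append(json.dumps(event))

# Consumer side: Logstash pops from the other end of the list and parses
# the JSON before sending the document on to the "applog" index.
raw = redis_list.popleft()
doc = json.loads(raw)
print(doc["message"])  # -> 2022-05-16 10:00:00 INFO app started
```

Because the list buffers events, Elasticsearch or Logstash can be restarted without losing log lines: they simply accumulate in Redis until the consumer returns.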
Installing Filebeat (on the same machine as the project whose logs are being collected)
cd /home

wget https://artifacts.elastic.co/downloads/beats/filebeat/filebeat-7.17.1-linux-x86_64.tar.gz

tar -xvf filebeat-7.17.1-linux-x86_64.tar.gz

mv filebeat-7.17.1-linux-x86_64 filebeat

cd filebeat

touch log_redis.yml

vi log_redis.yml

# Replace the contents of log_redis.yml with the following

# .global: &global
#   ignore_older: 30m
#   scan_frequency: 5m
#   harvester_limit: 1
#   close_inactive: 1m
#   clean_inactive: 45m
#   close_removed: true
#   clean_removed: true

# filebeat.inputs:
# - type: log
#   enabled: true
#   paths:
#     - /opt/myproject/logs/catalina.out
#   <<: *global

# output.redis:
#   hosts: ["10.0.3.101"]
#   key: "filebeat"
#   password: "1234567890"
#   db: 0
#   timeout: 5

# Save and close log_redis.yml
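The timing options in the &global block interact: per the Filebeat reference docs, clean_inactive must be greater than ignore_older plus scan_frequency, otherwise registry state can be removed for a file that is still merely being ignored, causing it to be re-sent from the beginning. A quick check that the sample values satisfy this (parse_minutes is a hypothetical helper for the "30m"-style duration strings used above):

```python
# Check the Filebeat constraint: clean_inactive > ignore_older + scan_frequency.
# parse_minutes is a hypothetical helper for the "30m"-style values in log_redis.yml.

def parse_minutes(value):
    """Convert a duration like '30m' or '1h' to whole minutes."""
    units = {"m": 1, "h": 60}
    return int(value[:-1]) * units[value[-1]]

ignore_older   = parse_minutes("30m")
scan_frequency = parse_minutes("5m")
clean_inactive = parse_minutes("45m")

# 45 > 30 + 5, so this configuration is safe.
assert clean_inactive > ignore_older + scan_frequency
```

If you shorten clean_inactive when tuning, re-check this inequality.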

# Run filebeat in the background

nohup ./filebeat -c log_redis.yml &
Verifying log collection

Log in to Kibana:

http://10.0.3.102:5601/

Open Index Management and check whether the applog index has been created.

Create an index pattern for it.

Go to Discover and confirm that log entries are arriving.
