The ELK components (and their web UI) have changed quite a bit recently, so I redid the whole setup against the latest release.
Note: all component versions must match; this guide uses the latest release at the time of writing, 7.15.1-1.
Note: in the conf files, indent with spaces, two per level. Do not use tabs. Really: do not use tabs.
1. Set up the JDK environment (omitted)
################# Part 1: collecting logs directly with logstash #################
2. Install and configure elasticsearch.x86_64
[root@localhost yum.repos.d]# rpm --import https://artifacts.elastic.co/GPG-KEY-elasticsearch
[root@localhost yum.repos.d]# cat elasticsearch.repo
[elasticsearch]
name=Elasticsearch repository for 7.x packages
baseurl=https://artifacts.elastic.co/packages/7.x/yum
gpgcheck=1
gpgkey=https://artifacts.elastic.co/GPG-KEY-elasticsearch
enabled=0
autorefresh=1
type=rpm-md
[root@localhost yum.repos.d]# yum clean all
[root@localhost yum.repos.d]# yum makecache fast
[root@localhost yum.repos.d]# yum -y install --enablerepo=elasticsearch elasticsearch.x86_64
[root@localhost /]# systemctl enable elasticsearch
Created symlink from /etc/systemd/system/multi-user.target.wants/elasticsearch.service to /usr/lib/systemd/system/elasticsearch.service.
[root@localhost /]# systemctl start elasticsearch
[root@localhost /]# systemctl status elasticsearch
● elasticsearch.service - Elasticsearch
   Loaded: loaded (/usr/lib/systemd/system/elasticsearch.service; enabled; vendor preset: disabled)
   Active: active (running) since Wed 2021-11-03 17:04:35 CST; 7s ago
     Docs: https://www.elastic.co
 Main PID: 15439 (java)
   CGroup: /system.slice/elasticsearch.service
           ├─15439 /usr/share/elasticsearch/jdk/bin/java -Xshare:auto -Des.networkaddress.cache.ttl=60 -Des.networkaddress.cache.negative.ttl=10 -XX:+AlwaysPreTouch -Xss1m -Djava....
           └─15638 /usr/share/elasticsearch/modules/x-pack-ml/platform/linux-x86_64/bin/controller

Nov 03 17:04:20 localhost.localdomain systemd[1]: Starting Elasticsearch...
Nov 03 17:04:35 localhost.localdomain systemd[1]: Started Elasticsearch.
[root@localhost /]# netstat -ntlp | grep java
tcp6       0      0 127.0.0.1:9200          :::*                    LISTEN      15439/java
tcp6       0      0 ::1:9200                :::*                    LISTEN      15439/java
tcp6       0      0 127.0.0.1:9300          :::*                    LISTEN      15439/java
tcp6       0      0 ::1:9300                :::*                    LISTEN      15439/java
# 9200 is the HTTP port, used for external communication
# 9300 is the TCP transport port, used for communication between ES cluster nodes
[root@localhost /]# vim /etc/elasticsearch/elasticsearch.yml
[root@localhost /]# cat /etc/elasticsearch/elasticsearch.yml | grep -v '^#'
cluster.name: elk001
node.name: node-1
path.data: /var/lib/elasticsearch
path.logs: /var/log/elasticsearch
network.host: 10.2.33.102
http.port: 9200
discovery.seed_hosts: ["10.2.33.102", "[::1]"]
cluster.initial_master_nodes: ["node-1"]
[root@localhost /]# systemctl restart elasticsearch
[root@localhost ~]# curl http://10.2.33.102:9200
{
  "name" : "node-1",
  "cluster_name" : "elk001",
  "cluster_uuid" : "hTUer8_jQRSYE-cwg55mSw",
  "version" : {
    "number" : "7.15.1",
    "build_flavor" : "default",
    "build_type" : "rpm",
    "build_hash" : "83c34f456ae29d60e94d886e455e6a3409bba9ed",
    "build_date" : "2021-10-07T21:56:19.031608185Z",
    "build_snapshot" : false,
    "lucene_version" : "8.9.0",
    "minimum_wire_compatibility_version" : "6.8.0",
    "minimum_index_compatibility_version" : "6.0.0-beta1"
  },
  "tagline" : "You Know, for Search"
}
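The status output above shows Elasticsearch running on its bundled JDK. Before sending it real traffic, it is also worth pinning the JVM heap; the file name and 1g figure below are illustrative examples, not part of the original setup. The usual guidance is to set min and max to the same value, around half of available RAM:

```conf
# /etc/elasticsearch/jvm.options.d/heap.options  (hypothetical file name)
# Equal min/max heap avoids resize pauses; 1g is only an example figure.
-Xms1g
-Xmx1g
```

Files under /etc/elasticsearch/jvm.options.d/ override the defaults in jvm.options and survive package upgrades; restart elasticsearch after adding one.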
3. Install and configure logstash.x86_64
Logstash directory structure:
Note: multiple pipeline files can be placed under /etc/logstash/conf.d
2. On logstash, configure a second file, nginx.conf, to receive the data shipped over by filebeat
[root@elk001 conf.d]# pwd
/etc/logstash/conf.d
[root@elk001 conf.d]# ll
total 8
-rw-r--r-- 1 root root 174 Nov  8 16:54 nginx.conf
-rw-r--r-- 1 root root 538 Nov  8 16:29 system.conf
[root@elk001 conf.d]# cat nginx.conf
input {
  beats {
    host => "10.2.33.102"
    port => "5044"
  }
}
output {
  elasticsearch {
    hosts => ["10.2.33.102:9200"]
    index => "nginx-web-%{+YYYY.MM.dd}"
  }
}
[root@elk001 conf.d]# systemctl restart logstash
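The nginx.conf above forwards events to elasticsearch unparsed. If you want logstash to split each nginx access-log line into searchable fields first, a filter block can sit between input and output. This is a sketch that assumes nginx is using its default combined log format; HTTPD_COMBINEDLOG is a stock grok pattern shipped with logstash 7.x:

```conf
filter {
  grok {
    # Parse the combined access-log line into named fields
    # (clientip, verb, request, response, bytes, referrer, agent, ...)
    match => { "message" => "%{HTTPD_COMBINEDLOG}" }
  }
  date {
    # Use the request time from the log line as the event @timestamp
    match => [ "timestamp", "dd/MMM/yyyy:HH:mm:ss Z" ]
  }
}
```

If your nginx log_format is customized, the stock pattern will not match and events get a _grokparsefailure tag; test patterns in Kibana's Grok Debugger before deploying.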
3. Log in at http://10.2.33.102:5601 to view the newly added index
#################filebeat-redis-logstash-elasticsearch-kibana #################
Architecture: filebeat → redis → logstash → elasticsearch → kibana
(Note: for this test, everything except filebeat is installed on one machine; in production, distribute the components across hosts as appropriate.)
1. The front-end servers run only filebeat, a lightweight log shipper (no JDK required)
2. The collected logs are sent, unprocessed, straight to the redis message queue
3. The redis queue only holds the log data temporarily, as a buffer against data loss; durable persistence is not required
4. logstash reads from the redis queue, filters the data according to the configured rules, and then stores it in elasticsearch
5. kibana provides the graphical display
The role of redis here:
Centralized storage: all logs are gathered in one place and tagged, which makes them easy to manage. Any source that produces logs (nginx, apache, tomcat, and so on) can be stored; as long as each is tagged, logstash can classify them at input time.
Redundancy: if everything downstream of redis goes down, the data is not lost.
Throughput buffering: it speeds up log intake and keeps logstash from being overwhelmed when a large burst of logs arrives.
Installing redis (omitted); the relevant redis.conf settings:
bind 0.0.0.0
protected-mode no
requirepass redis123456
Some of this configuration was already covered above and is skipped here; edit the configuration files directly, as follows.
Modify the filebeat configuration so that redis becomes the log output target. When adding it, put different log types in different redis databases and give them distinct keys, which makes them easier to consume later.
[root@localhost filebeat]# cat /etc/filebeat/filebeat.yml
# ============================== Filebeat inputs ===============================
filebeat.inputs:
- type: log
  enabled: true
  paths:
    - /var/log/nginx/*.log
  tags: ["nginx-web"]

filebeat.config.modules:
  path: ${path.config}/modules.d/*.yml
  reload.enabled: false

# ======================= Elasticsearch template setting =======================
setup.template.settings:
  index.number_of_shards: 1

setup.kibana:

output.redis:
  hosts: ["10.2.33.102:6379"]
  password: "redis123456"
  key: "nginx-web"
  data_type: "list"
  db: 4

processors:
  - add_host_metadata:
      when.not.contains.tags: forwarded
  - add_cloud_metadata: ~
  - add_docker_metadata: ~
  - add_kubernetes_metadata: ~

# ================================== Logging ===================================
#logging.level: debug

# ============================= X-Pack Monitoring ==============================
#monitoring.enabled: false

# ============================== Instrumentation ===============================
#instrumentation:
#  enabled: false
#  hosts:
#    - http://localhost:8200
#  api_key:

# ================================= Migration ==================================
#migration.6_to_7.enabled: true
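The point above about giving each log type its own key can be handled inside a single redis output: filebeat's redis output supports a keys list that selects a key per event based on a condition, falling back to key when nothing matches. A sketch of the relevant parts (the apache path and tag are hypothetical additions, not part of the original setup):

```yaml
filebeat.inputs:
- type: log
  enabled: true
  paths:
    - /var/log/nginx/*.log
  tags: ["nginx-web"]
- type: log
  enabled: true
  paths:
    - /var/log/httpd/*.log        # hypothetical second log source
  tags: ["apache-web"]

output.redis:
  hosts: ["10.2.33.102:6379"]
  password: "redis123456"
  data_type: "list"
  db: 4
  key: "other-logs"               # fallback key when no condition matches
  keys:
    - key: "nginx-web"
      when.contains:
        tags: "nginx-web"
    - key: "apache-web"
      when.contains:
        tags: "apache-web"
```

On the logstash side, each key then gets its own redis input block (or its own pipeline file under /etc/logstash/conf.d), keeping the log types separated end to end.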
Modify the logstash configuration to read from the redis store that filebeat writes to:
[root@elk001 conf.d]# cat redis.conf
input {
  redis {
    host => "10.2.33.102"
    port => 6379
    password => "redis123456"
    key => "nginx-web"
    data_type => "list"
    db => 4
  }
}
output {
  elasticsearch {
    hosts => ["10.2.33.102:9200"]
    index => "redis-%{+YYYY.MM.dd}"
  }
}
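The %{+YYYY.MM.dd} in the index setting is logstash's date sprintf: it is filled in from each event's @timestamp (rendered in UTC), so events roll into a new index every day. A quick shell approximation of the name logstash would generate for an event arriving right now:

```shell
# Render today's index name the way logstash would for a current event.
# logstash evaluates %{+YYYY.MM.dd} against @timestamp in UTC, hence date -u.
suffix=$(date -u +%Y.%m.%d)
echo "redis-${suffix}"
```

Daily indices make retention simple: old days can be dropped as whole indices instead of deleting individual documents.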
Note: remember to restart the service after changing a configuration file.
Check the data in redis:
[root@elk001 conf.d]# redis-cli
127.0.0.1:6379> auth redis123456
OK
127.0.0.1:6379> select 4
OK
127.0.0.1:6379[4]> keys *
1) "nginx-web"
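Beyond keys *, the list length is the useful health signal: filebeat pushes onto the list and logstash pops from it, so if the consumer is keeping up, the length should hover near 0, while a steadily growing value means logstash is falling behind. Two commands worth running in the same redis-cli session (outputs depend on your live traffic, so none are shown):

```
127.0.0.1:6379[4]> LLEN nginx-web        # queue depth; near 0 means logstash keeps up
127.0.0.1:6379[4]> LRANGE nginx-web 0 0  # peek at the oldest queued event without removing it
```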
Log in to kibana to verify:
#################filebeat-kafka-logstash-elasticsearch-kibana #################
To be continued...