- ELK cluster setup
- Installing via RPM packages (a tar-based install follows later)
- Installing the ELK cluster from tar packages
- Installing Elasticsearch from a tar package
- Installing Kibana
- Installing Logstash
- Spring Boot log output configuration
- Shipping logs to ELK
- filebeat shipping logs to Kafka
- filebeat shipping local logs to Logstash
- Logstash shipping logs to Elasticsearch
- Controlling ELK index lifecycles
Download the RPM package
wget https://mirrors.tuna.tsinghua.edu.cn/elasticstack/7.x/yum/7.6.2/elasticsearch-7.6.2-x86_64.rpm
Configuration files live in /etc/elasticsearch
ll /etc/elasticsearch/
total 40
-rw-rw---- 1 root elasticsearch 199 May 6 14:01 elasticsearch.keystore
-rw-rw---- 1 root elasticsearch 2847 Mar 26 14:41 elasticsearch.yml
-rw-rw---- 1 root elasticsearch 2373 Mar 26 14:41 jvm.options
-rw-rw---- 1 root elasticsearch 17545 Mar 26 14:41 log4j2.properties
-rw-rw---- 1 root elasticsearch 473 Mar 26 14:41 role_mapping.yml
-rw-rw---- 1 root elasticsearch 197 Mar 26 14:41 roles.yml
-rw-rw---- 1 root elasticsearch 0 Mar 26 14:41 users
-rw-rw---- 1 root elasticsearch 0 Mar 26 14:41 users_roles
node-1 configuration
cat elasticsearch.yml|grep -v "^#"
cluster.name: chauncy-elk
node.name: node-1
path.data: /var/lib/elasticsearch
path.logs: /var/log/elasticsearch
bootstrap.memory_lock: true
network.host: 0.0.0.0
http.port: 9200
http.cors.enabled: true
http.cors.allow-origin: "*"
discovery.seed_hosts: ["172.20.5.11", "172.20.5.12", "172.20.5.13"]
cluster.initial_master_nodes: ["node-1", "node-2", "node-3"]
gateway.recover_after_nodes: 2
xpack.security.enabled: true
xpack.security.transport.ssl.enabled: true
node-2 configuration
cat elasticsearch.yml|grep -v "^#"
cluster.name: chauncy-elk
node.name: node-2
path.data: /var/lib/elasticsearch
path.logs: /var/log/elasticsearch
bootstrap.memory_lock: true
network.host: 0.0.0.0
http.port: 9200
http.cors.enabled: true
http.cors.allow-origin: "*"
discovery.seed_hosts: ["172.20.5.11", "172.20.5.12", "172.20.5.13"]
cluster.initial_master_nodes: ["node-1", "node-2", "node-3"]
gateway.recover_after_nodes: 2
xpack.security.enabled: true
xpack.security.transport.ssl.enabled: true
node-3 configuration
cat elasticsearch.yml|grep -v "^#"
cluster.name: chauncy-elk
node.name: node-3
path.data: /var/lib/elasticsearch
path.logs: /var/log/elasticsearch
bootstrap.memory_lock: true
network.host: 0.0.0.0
http.port: 9200
http.cors.enabled: true
http.cors.allow-origin: "*"
discovery.seed_hosts: ["172.20.5.11", "172.20.5.12", "172.20.5.13"]
cluster.initial_master_nodes: ["node-1", "node-2", "node-3"]
gateway.recover_after_nodes: 2
xpack.security.enabled: true
xpack.security.transport.ssl.enabled: true
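Note: with xpack.security.transport.ssl.enabled: true, every node also needs a transport certificate, which the steps above do not cover. A minimal sketch using the bundled elasticsearch-certutil (the file paths and empty passphrases are assumptions; generate once, then copy elastic-certificates.p12 to /etc/elasticsearch on every node):
/usr/share/elasticsearch/bin/elasticsearch-certutil ca --out /etc/elasticsearch/elastic-ca.p12 --pass ""
/usr/share/elasticsearch/bin/elasticsearch-certutil cert --ca /etc/elasticsearch/elastic-ca.p12 --ca-pass "" --out /etc/elasticsearch/elastic-certificates.p12 --pass ""
chown root:elasticsearch /etc/elasticsearch/elastic-certificates.p12 && chmod 660 /etc/elasticsearch/elastic-certificates.p12
Add to elasticsearch.yml on every node:
xpack.security.transport.ssl.verification_mode: certificate
xpack.security.transport.ssl.keystore.path: elastic-certificates.p12
xpack.security.transport.ssl.truststore.path: elastic-certificates.p12
After the cluster is up, set the built-in user passwords from any one node:
/usr/share/elasticsearch/bin/elasticsearch-setup-passwords interactive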
Tune the JVM heap
vim jvm.options
-Xms4g
-Xmx4g
Tune the kernel and resource limits
vim /etc/sysctl.conf
vm.max_map_count=655360
vim /etc/security/limits.conf
* soft memlock unlimited
* hard memlock unlimited
* soft nofile 131072
* hard nofile 131072
vim /etc/security/limits.d/90-nproc.conf
* soft nproc 4096
sysctl -p
Add LimitMEMLOCK=infinity to the systemd unit file
vim /usr/lib/systemd/system/elasticsearch.service
LimitMEMLOCK=infinity
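Editing /usr/lib/systemd/system/elasticsearch.service directly works, but the change is lost on package upgrade; a drop-in override (an alternative sketch, not part of the original steps) survives upgrades:
mkdir -p /etc/systemd/system/elasticsearch.service.d
cat > /etc/systemd/system/elasticsearch.service.d/override.conf <<'EOF'
[Service]
LimitMEMLOCK=infinity
EOF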
Start
systemctl daemon-reload
systemctl restart elasticsearch
Troubleshooting
If a node fails to join the cluster, delete the stale node data on the failing node and restart it:
rm -rf /var/lib/elasticsearch/nodes
Check cluster health
curl -sXGET http://localhost:9200/_cluster/health?pretty=true
If built-in user passwords have been set, authenticate with -u:
curl -u elastic:meifute@elastic -XGET 'http://localhost:9200/_cat/nodes?pretty'
List the cluster nodes
curl -XGET 'http://localhost:9200/_cat/nodes?pretty'
172.20.5.15 8 96 0 0.00 0.01 0.05 dilm - node-3
172.20.5.13 5 95 0 0.00 0.01 0.05 dilm * node-1
172.20.5.14 8 96 0 0.00 0.01 0.05 dilm - node-2
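To print column headers with the listing, add the v parameter; the columns shown above are ip, heap.percent, ram.percent, cpu, load_1m/5m/15m, node.role, master (* marks the elected master), and name:
curl -XGET 'http://localhost:9200/_cat/nodes?v'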
Installing the ELK cluster from tar packages
Installing Elasticsearch from a tar package
Install Elasticsearch
node1
tar xf elasticsearch-7.6.2-linux-x86_64.tar.gz -C /opt/
mkdir /opt/elasticsearch-7.6.2/data
useradd es
Configuration file
vim /opt/elasticsearch-7.6.2/config/elasticsearch.yml
cluster.name: chauncy-elk
node.name: node1
path.data: /opt/elasticsearch-7.6.2/data/
path.logs: /opt/elasticsearch-7.6.2/logs/
bootstrap.memory_lock: true
network.host: 0.0.0.0
http.port: 9200
http.cors.enabled: true
http.cors.allow-origin: "*"
discovery.seed_hosts: ["172.20.5.11", "172.20.5.12", "172.20.5.13"]
cluster.initial_master_nodes: ["node1", "node2", "node3"]
gateway.recover_after_nodes: 2
# Fielddata cache ceiling; there is no default value
# With this set, least-recently-used (LRU) fielddata is evicted to make room for new data
# Caps fielddata memory; once usage reaches 20% of the heap, the oldest cache entries are evicted
indices.fielddata.cache.size: 20%
indices.breaker.total.use_real_memory: false
# The fielddata circuit breaker defaults to 60% of the heap as the upper bound for fielddata size.
indices.breaker.fielddata.limit: 40%
# The request breaker estimates the size of structures needed to complete other parts of a request, e.g. creating an aggregation bucket; the default limit is 40% of the heap.
indices.breaker.request.limit: 40%
# The total breaker combines the request and fielddata breakers to ensure that together they do not use more than 70% of the heap (the default).
indices.breaker.total.limit: 95%
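To see how close each circuit breaker is to its limit at runtime, query the node stats (a read-only check):
curl -sXGET 'http://localhost:9200/_nodes/stats/breaker?pretty'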
Kernel parameter changes
vim /etc/sysctl.conf
vm.max_map_count=655360
fs.file-max=655350
vim jvm.options
-Xms4g
-Xmx4g
vim /etc/security/limits.conf
* soft memlock unlimited
* hard memlock unlimited
* soft nofile 131072
* hard nofile 131072
vim /etc/security/limits.d/90-nproc.conf
* soft nproc 655360
sysctl -p
Grant ownership to the es user
chown -R es:es /opt/elasticsearch-7.6.2
Start in the background
su - es
cd elasticsearch-7.6.2/
./bin/elasticsearch -d -p ./es.pid
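To confirm the node came up and to stop it later (a sketch; the log file is named after cluster.name, and es.pid is written wherever the -p path points):
tail -f /opt/elasticsearch-7.6.2/logs/chauncy-elk.log
kill $(cat es.pid)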
node2
tar xf elasticsearch-7.6.2-linux-x86_64.tar.gz -C /opt/
mkdir /opt/elasticsearch-7.6.2/data
useradd es
Configuration file
vim /opt/elasticsearch-7.6.2/config/elasticsearch.yml
cluster.name: chauncy-elk
node.name: node2
path.data: /opt/elasticsearch-7.6.2/data/
path.logs: /opt/elasticsearch-7.6.2/logs/
bootstrap.memory_lock: true
network.host: 0.0.0.0
http.port: 9200
http.cors.enabled: true
http.cors.allow-origin: "*"
discovery.seed_hosts: ["172.20.5.11", "172.20.5.12", "172.20.5.13"]
cluster.initial_master_nodes: ["node1", "node2", "node3"]
gateway.recover_after_nodes: 2
Kernel parameter changes
vim /etc/sysctl.conf
vm.max_map_count=655360
fs.file-max=655350
vim jvm.options
-Xms4g
-Xmx4g
vim /etc/security/limits.conf
* soft memlock unlimited
* hard memlock unlimited
* soft nofile 131072
* hard nofile 131072
vim /etc/security/limits.d/90-nproc.conf
* soft nproc 655360
sysctl -p
Grant ownership to the es user
chown -R es:es /opt/elasticsearch-7.6.2
Start in the background
su - es
cd elasticsearch-7.6.2/
./bin/elasticsearch -d -p ./es.pid
Check cluster health
curl -sXGET http://localhost:9200/_cluster/health?pretty=true
node3
tar xf elasticsearch-7.6.2-linux-x86_64.tar.gz -C /opt/
mkdir /opt/elasticsearch-7.6.2/data
useradd es
Configuration file
vim /opt/elasticsearch-7.6.2/config/elasticsearch.yml
cluster.name: chauncy-elk
node.name: node3
path.data: /opt/elasticsearch-7.6.2/data/
path.logs: /opt/elasticsearch-7.6.2/logs/
bootstrap.memory_lock: true
network.host: 0.0.0.0
http.port: 9200
http.cors.enabled: true
http.cors.allow-origin: "*"
discovery.seed_hosts: ["172.20.5.11", "172.20.5.12", "172.20.5.13"]
cluster.initial_master_nodes: ["node1", "node2", "node3"]
gateway.recover_after_nodes: 2
Kernel parameter changes
vim /etc/sysctl.conf
vm.max_map_count=655360
fs.file-max=655350
vim jvm.options
-Xms4g
-Xmx4g
vim /etc/security/limits.conf
* soft memlock unlimited
* hard memlock unlimited
* soft nofile 131072
* hard nofile 131072
vim /etc/security/limits.d/90-nproc.conf
* soft nproc 655360
sysctl -p
Grant ownership to the es user
chown -R es:es /opt/elasticsearch-7.6.2
Start in the background
su - es
cd elasticsearch-7.6.2/
./bin/elasticsearch -d -p ./es.pid
Alternatively, start as the opt user:
cd /opt/elasticsearch-7.6.2
su - opt -c "`pwd`/bin/elasticsearch -d -p ./es.pid"
Check cluster health
curl -sXGET http://localhost:9200/_cluster/health?pretty=true
Installing Kibana
Install via yum
yum install https://mirrors.tuna.tsinghua.edu.cn/elasticstack/7.x/yum/7.6.2/kibana-7.6.2-x86_64.rpm
Configuration file (/etc/kibana/kibana.yml)
server.port: 5601
server.host: "0.0.0.0"
elasticsearch.hosts: ["http://172.20.5.11:9200", "http://172.20.5.12:9200", "http://172.20.5.13:9200"]
i18n.locale: "zh-CN"
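For the RPM install, start Kibana with systemd:
systemctl enable --now kibana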
Starting from the tar package (if Kibana was installed via tar)
nohup su - opt -c "`pwd`/bin/kibana" &>/dev/null &
Installing Logstash
Install via yum
yum install https://mirrors.tuna.tsinghua.edu.cn/elasticstack/7.x/yum/7.6.2/logstash-7.6.2.rpm
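Before (re)starting Logstash with a new pipeline, it is worth validating the config first (a sketch using the RPM paths; all-a.conf is the pipeline file shown later in this post):
/usr/share/logstash/bin/logstash -f /etc/logstash/all-a.conf --config.test_and_exit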
Spring Boot log output configuration
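The grok filter later in this post parses lines of the form [datetime] [level] [thread] [class] [host] [ip] [app] [location] [message] ## 'throwable'. A hypothetical logback-spring.xml appender that would emit matching lines (HOSTNAME is predefined by logback; the IP and APP_NAME properties are assumptions you would define yourself):
<!-- hypothetical appender; the field order mirrors the grok pattern in the Logstash filter below -->
<appender name="FILE" class="ch.qos.logback.core.rolling.RollingFileAppender">
    <file>/home/prod/logs/app-collector.log</file>
    <rollingPolicy class="ch.qos.logback.core.rolling.TimeBasedRollingPolicy">
        <fileNamePattern>/home/prod/logs/app-collector.%d{yyyy-MM-dd}.log</fileNamePattern>
    </rollingPolicy>
    <encoder>
        <!-- the literal 'T' keeps the timestamp free of spaces so %{NOTSPACE:currentDateTime} can match it -->
        <pattern>[%d{yyyy-MM-dd'T'HH:mm:ss.SSS}] [%level] [%thread] [%logger{40}] [${HOSTNAME}] [${IP}] [${APP_NAME}] [%class:%line] [%msg] ## '%ex'%n</pattern>
    </encoder>
</appender>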
Shipping logs to ELK
filebeat shipping logs to Kafka
Log flow: filebeat --> kafka --> logstash --> elasticsearch
Create the Kafka topics
kafka-topics.sh --zookeeper 192.168.11.111:2181 --create --topic app-log-collector --partitions 3 --replication-factor 2
kafka-topics.sh --zookeeper 192.168.11.111:2181 --create --topic error-log-collector --partitions 3 --replication-factor 3
kafka-topics.sh --zookeeper 192.168.11.111:2181 --list
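Check partition and replica placement for a topic:
kafka-topics.sh --zookeeper 192.168.11.111:2181 --describe --topic app-log-collector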
filebeat configuration:
filebeat.prospectors:
- input_type: log
paths:
## app-<service-name>.log; the path is hard-coded so log rotation does not cause historical data to be re-read
- /home/prod/logs/app-collector.log
#defines the _type value when writing to ES
document_type: "app-log"
multiline:
#pattern: '^\s*(\d{4}|\d{2})\-(\d{2}|[a-zA-Z]{3})\-(\d{2}|\d{4})' # matches lines starting with a timestamp like 2017-11-15 08:04:23:889
pattern: '^\[' # lines starting with "[" begin a new event
negate: true # treat lines NOT matching the pattern as continuations
match: after # append continuation lines to the end of the previous event
max_lines: 2000 # maximum number of lines to merge into one event
timeout: 2s # flush the event if no new line arrives within this time
fields:
logbiz: collector
logtopic: app-log-collector ## used as the kafka topic, split per service
env: dev
- input_type: log
paths:
- /usr/local/logs/error-collector.log
document_type: "error-log"
multiline:
#pattern: '^\s*(\d{4}|\d{2})\-(\d{2}|[a-zA-Z]{3})\-(\d{2}|\d{4})' # matches lines starting with a timestamp like 2017-11-15 08:04:23:889
pattern: '^\[' # lines starting with "[" begin a new event
negate: true # treat lines NOT matching the pattern as continuations
match: after # append continuation lines to the end of the previous event
max_lines: 2000 # maximum number of lines to merge into one event
timeout: 2s # flush the event if no new line arrives within this time
fields:
logbiz: collector
logtopic: error-log-collector ## used as the kafka topic, split per service
env: dev
output.kafka:
enabled: true
hosts: ["192.168.11.51:9092"]
topic: '%{[fields.logtopic]}'
partition.hash:
reachable_only: true
compression: gzip
max_message_bytes: 1000000
required_acks: 1
logging.to_files: true
Check the configuration
./filebeat test config -c filebeat.yml
Start
/usr/local/filebeat-6.6.0/filebeat &
filebeat shipping local logs to Logstash
# filebeat runs on the same host as the Spring Boot application
cat /etc/filebeat/filebeat.yml
filebeat.inputs:
- type: log
enabled: true
paths:
- /home/prod/logs/m-mall-admin/m-mall-admin.log
scan_frequency: 5s
multiline.pattern: '^\d{4}-\d{2}-\d{2}'
multiline.negate: true
multiline.match: after
tags: ["m-mall-admin-b-01"]
# If the host has several logs to collect, add more "- type: log" entries
filebeat.config.modules:
path: ${path.config}/modules.d/*.yml
reload.enabled: false
setup.template.settings:
index.number_of_shards: 1
setup.kibana:
host: "172.20.5.14:5601"
output.logstash:
hosts: ["172.20.5.16:5043", "172.20.5.15:5043"]
worker: 2
compression_level: 3
processors:
- add_host_metadata: ~
Restart
systemctl restart filebeat
Logstash shipping logs to Elasticsearch
Shipping Kafka logs to Elasticsearch
## The multiline plugin also works for other stack-style output, such as Linux kernel logs.
input {
kafka {
## app-log-<service-name>
topics_pattern => "app-log-.*"
bootstrap_servers => "192.168.11.51:9092"
codec => json
consumer_threads => 1 ## number of parallel consumer threads
decorate_events => true
#auto_offset_reset => "latest"
group_id => "app-log-group"
}
kafka {
## error-log-<service-name>
topics_pattern => "error-log-.*"
bootstrap_servers => "192.168.11.51:9092"
codec => json
consumer_threads => 1
decorate_events => true
#auto_offset_reset => "latest"
group_id => "error-log-group"
}
}
filter {
## timezone conversion: build a local-time date string for index naming
ruby {
code => "event.set('index_time',event.timestamp.time.localtime.strftime('%Y.%m.%d'))"
}
if "app-log" in [fields][logtopic]{
grok {
## pattern matching the Spring Boot log format described above
match => ["message", "\[%{NOTSPACE:currentDateTime}\] \[%{NOTSPACE:level}\] \[%{NOTSPACE:thread-id}\] \[%{NOTSPACE:class}\] \[%{DATA:hostName}\] \[%{DATA:ip}\] \[%{DATA:applicationName}\] \[%{DATA:location}\] \[%{DATA:messageInfo}\] ## (\'\'|%{QUOTEDSTRING:throwable})"]
}
}
if "error-log" in [fields][logtopic]{
grok {
## same pattern as above
match => ["message", "\[%{NOTSPACE:currentDateTime}\] \[%{NOTSPACE:level}\] \[%{NOTSPACE:thread-id}\] \[%{NOTSPACE:class}\] \[%{DATA:hostName}\] \[%{DATA:ip}\] \[%{DATA:applicationName}\] \[%{DATA:location}\] \[%{DATA:messageInfo}\] ## (\'\'|%{QUOTEDSTRING:throwable})"]
}
}
}
## For testing, print to the console:
output {
stdout { codec => rubydebug }
}
## elasticsearch:
output {
if "app-log" in [fields][logtopic]{
## elasticsearch output plugin
elasticsearch {
# ES endpoint
hosts => ["192.168.11.35:9200"]
# username and password
user => "elastic"
password => "123456"
## Index name; a segment starting with + is treated as a date format, e.g.:
## javalog-app-service-2019.01.23
index => "app-log-%{[fields][logbiz]}-%{index_time}"
# Whether to sniff cluster node IPs (usually true); see http://192.168.11.35:9200/_nodes/http?pretty
# Sniffing load-balances log delivery across the ES cluster
sniffing => true
# logstash ships with a default mapping template; overwrite it
template_overwrite => true
}
}
if "error-log" in [fields][logtopic]{
elasticsearch {
hosts => ["192.168.11.35:9200"]
user => "elastic"
password => "123456"
index => "error-log-%{[fields][logbiz]}-%{index_time}"
sniffing => true
template_overwrite => true
}
}
}
cat /etc/logstash/all-a.conf
input {
beats {
port => "5043"
}
}
output{
if "m-mall-admin-a-01" in [tags]{
elasticsearch{
hosts => ["172.20.5.11:9200", "172.20.5.12:9200", "172.20.5.13:9200"]
index => "admin-a-01-%{+YYYY.MM.dd}"
}
}
if "m-mall-admin-a-02" in [tags]{
elasticsearch{
hosts => ["172.20.5.11:9200", "172.20.5.12:9200", "172.20.5.13:9200"]
index => "admin-a-02-%{+YYYY.MM.dd}"
}
}
stdout { codec => plain }
}
Restart
systemctl restart logstash
Controlling ELK index lifecycles
Run these in Kibana (Dev Tools)
Create the index lifecycle policy
PUT /_ilm/policy/prod_policy
{
"policy": {
"phases": {
"delete": {
"min_age": "3d",
"actions": {
"delete": {}
}
}
}
}
}
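Verify the policy was stored:
GET /_ilm/policy/prod_policy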
Apply the policy via index templates
PUT /_template/prod_template?pretty
{
"index_patterns": [
"pay-a*",
"item-a*",
"admin-a*",
"order-a*",
"agent-a*",
"zuul-a*",
"eureka-a*",
"auth-a*",
"notify-a*",
"pm-a*",
"workflow-a*",
"txmanager-a*",
"user-a*",
"report-a*"
],
"settings": {
"number_of_shards": 1,
"number_of_replicas": 0,
"index.lifecycle.name": "prod_policy"
}
}
PUT /_template/dev_template?pretty
{
"index_patterns": [
"admin-dev-01*",
"agent-dev-01*",
"agent-dev-02*",
"auth-dev-01*",
"bgw-dev-01*",
"eureka-dev-01*",
"eureka-dev-02*",
"item-dev-01*",
"logistics-dev-01*",
"notify-dev-01*",
"order-dev-01*",
"others-dev-01*",
"pay-dev-01*",
"pm-dev-01*",
"report-dev-01*",
"task-dev-01*",
"tracker-dev-01*",
"txmanager-dev-02*",
"workflow-dev-01*"
],
"settings": {
"number_of_shards": 1,
"number_of_replicas": 0,
"index.lifecycle.name": "dev_policy"
}
}
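The dev template above references dev_policy, which must exist before indices can pick it up; it is not created in the steps above. A sketch mirroring prod_policy (the 3-day retention is an assumption):
PUT /_ilm/policy/dev_policy
{
  "policy": {
    "phases": {
      "delete": {
        "min_age": "3d",
        "actions": {
          "delete": {}
        }
      }
    }
  }
}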
Check whether indices are managed (pre-existing indices are not picked up by the template)
GET *-a-*/_ilm/explain