ELKF Installation and Deployment

ELKF deployment environment:
  • JDK1.8
  • Elastic Stack 7.7.0
Filebeat installation:

Download the official Filebeat 7.7.0 package for Linux and extract it.
After extraction, edit filebeat.yml. Filebeat does not need much extra configuration; the relevant settings are as follows:

#=========================== Filebeat inputs =============================

filebeat.inputs:

# Each - is an input. Most options can be set at the input level, so
# you can use different inputs for various configurations.
# Below are the input specific configurations.
- type: log
  # Change to true to enable this input configuration.
  enabled: true
  # Paths that should be crawled and fetched. Glob based paths.
  paths:
    - /data/app/P011/log/P011-info.log  ## log files to harvest
    - /data/app/P011-1/log/P011-info.log
    #- c:\programdata\elasticsearch\logs\*

#================================ Outputs =====================================

# Configure what output to use when sending the data collected by the beat.

#-------------------------- Elasticsearch output ------------------------------
#output.elasticsearch:  ## Elasticsearch output disabled
  # Array of hosts to connect to.
  #hosts: ["localhost:9200"]

  # Protocol - either `http` (default) or `https`.
  #protocol: "https"

  # Authentication credentials - either API key or username/password.
  #api_key: "id:api_key"
  #username: "elastic"
  #password: "changeme"

#----------------------------- Logstash output --------------------------------
output.logstash:  ## enable the Logstash output and list the Logstash host addresses
  # The Logstash hosts
  hosts: ["172.16.10.7:5044","172.16.10.3:5044"]
  # Optional SSL. By default is off.
  # List of root certificates for HTTPS server verifications
  #ssl.certificate_authorities: ["/etc/pki/root/ca.pem"]

  # Certificate for SSL client authentication
  #ssl.certificate: "/etc/pki/client/cert.pem"

  # Client Certificate Key
  #ssl.key: "/etc/pki/client/cert.key"
#============================== X-Pack Monitoring ===============================
# filebeat can export internal metrics to a central Elasticsearch monitoring
# cluster.  This requires xpack monitoring to be enabled in Elasticsearch.  The
# reporting is disabled by default.

# Set to true to enable the monitoring reporter.
monitoring.enabled: true  ## enable monitoring so Filebeat metrics show up in Kibana Stack Monitoring

# Sets the UUID of the Elasticsearch cluster under which monitoring data for this
# Filebeat instance will appear in the Stack Monitoring UI. If output.elasticsearch
# is enabled, the UUID is derived from the Elasticsearch cluster referenced by output.elasticsearch.
#monitoring.cluster_uuid:

# Uncomment to send the metrics to Elasticsearch. Most settings from the
# Elasticsearch output are accepted here as well.
# Note that the settings should point to your Elasticsearch *monitoring* cluster.
# Any setting that is not set is automatically inherited from the Elasticsearch
# output configuration, so if you have the Elasticsearch output configured such
# that it is pointing to your Elasticsearch monitoring cluster, you can simply
# uncomment the following line.
monitoring.elasticsearch:  ## send monitoring data to the ES cluster; any single node will do
 hosts: ["http://172.16.10.3:9200"] 

Start it as a background process: nohup ./filebeat -e -c ./filebeat.yml &

Logstash installation:

Download the official Logstash 7.7.0 package for Linux and extract it.
After extraction, edit logstash.yml as follows:

# ------------ Pipeline Settings --------------
#
pipeline.workers: 40  ## number of worker threads; recommended to match the number of CPU cores
# How many events to retrieve from inputs before sending to filters+workers
#
pipeline.batch.size: 1500  ## tune according to the server's capacity
#
# How long to wait in milliseconds while polling for the next event
# before dispatching an undersized batch to filters+outputs
#
pipeline.batch.delay: 10  ## in milliseconds (the default is 50 in Logstash 7.x)
#
pipeline.ordered: auto
# ------------ X-Pack Settings (not applicable for OSS build)--------------
#
# X-Pack Monitoring
# https://www.elastic.co/guide/en/logstash/current/monitoring-logstash.html
xpack.monitoring.enabled: true  ## enable X-Pack monitoring
#xpack.monitoring.elasticsearch.username: logstash_system
#xpack.monitoring.elasticsearch.password: password
xpack.monitoring.elasticsearch.hosts: ["http://172.16.10.3:9200"]  ## sending monitoring data to a single node of the ES cluster is enough

Allocating a reasonable amount of heap memory to Logstash can effectively improve its performance. Edit config/jvm.options:

## JVM configuration

# Xms represents the initial size of total heap space
# Xmx represents the maximum size of total heap space

-Xms16g  ## heap size
-Xmx16g


-XX:NewSize=8G  ## young-generation heap size
-XX:MaxNewSize=8G
################################################################

The Logstash base configuration is now complete. Next, create the input/output pipeline .conf file according to your business requirements.
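
For reference, a minimal inandout.conf could look like the sketch below. The beats port 5044 matches the Filebeat output above and the ES host is one of this deployment's nodes; the index name p011-%{+YYYY.MM.dd} is only an illustrative placeholder:

# ./config/inandout.conf -- minimal sketch
input {
  beats {
    port => 5044                        ## port that Filebeat ships to
  }
}
filter {
  # add grok/mutate filters here to parse the application log format
}
output {
  elasticsearch {
    hosts => ["http://172.16.10.3:9200"]
    index => "p011-%{+YYYY.MM.dd}"      ## hypothetical daily index name, adjust per application
  }
}

The pipeline file can be syntax-checked before starting with ./bin/logstash -f ./config/inandout.conf --config.test_and_exit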

Start it as a background process: nohup ./bin/logstash -f ./config/inandout.conf &

Elasticsearch installation:

Download the official Elasticsearch 7.7.0 package for Linux and extract it.
Elasticsearch must be started as a non-root user.
After extraction, edit elasticsearch.yml as follows:

# ---------------------------------- Cluster -----------------------------------
#
# Use a descriptive name for your cluster:
#
cluster.name: myElasticsearchCluster  ## cluster name
#
# ------------------------------------ Node ------------------------------------
#
# Use a descriptive name for the node:
#
node.name: node-3  ## node name
node.data: true  ## this node can hold data
node.master: true  ## this node is master-eligible
#
# Add custom attributes to the node:
#
#node.attr.rack: r1
#
# ----------------------------------- Paths ------------------------------------
#
# Path to directory where to store the data (separate multiple locations by comma):
#
path.data: /data/tools/elk/es/elasticsearch-7.7.0/data ## directory for ES data
#
# Path to log files:
#
path.logs: /data/tools/elk/es/elasticsearch-7.7.0/logs ## directory for ES logs
#
# ----------------------------------- Memory -----------------------------------
#
# Lock the memory on startup:
#
bootstrap.memory_lock: true ## lock the process memory to prevent swapping; enabling it requires additional OS settings, described below
#
# Make sure that the heap size is set to about half the memory available
# on the system and that the owner of the process is allowed to use this
# limit.
#
# Elasticsearch performs poorly when the system is swapping the memory.
#
# ---------------------------------- Network -----------------------------------
#
# Set the bind address to a specific IP (IPv4 or IPv6):
#
network.host: 172.16.10.3  ## bind to this machine's IP
#
# Set a custom port for HTTP:
#
http.port: 9200  ## default is 9200
#
# For more information, consult the network module documentation.
#
#
# cluster transport port
# --------------------------------- Discovery ----------------------------------
#
# Pass an initial list of hosts to perform discovery when this node is started:
# The default list of hosts is ["127.0.0.1", "[::1]"]
#
# discovery.seed_hosts: ["172.16.10.3","172.16.10.7"]
#
# Bootstrap the cluster using an initial set of master-eligible nodes:
#
cluster.initial_master_nodes: ["node-3", "node-9","node-8"] ## names of the master-eligible nodes in the cluster
discovery.zen.ping.unicast.hosts: ["172.16.10.3","172.16.10.9","172.16.10.8"] ## unicast discovery host addresses

discovery.zen.minimum_master_nodes: 2  ## minimum number of master-eligible nodes; the quorum is (master-eligible nodes / 2) + 1, here (3 / 2) + 1 = 2; follow this strictly to avoid split-brain
# For more information, consult the discovery and cluster formation module documentation.
#
# ---------------------------------- Gateway -----------------------------------
#
# Block initial recovery after a full cluster restart until N nodes are started:
#
gateway.recover_after_nodes: 2 ## cluster recovery settings
gateway.expected_nodes: 10
gateway.recover_after_time: 5m
#
# For more information, consult the gateway module documentation.
#
# ---------------------------------- Various -----------------------------------
#
# Require explicit names when deleting indices:
#
#action.destructive_requires_name: true
#
#-------------------------------------- Custom settings -----------------------------------------
#
xpack.monitoring.collection.enabled: true  ## enable monitoring data collection
xpack.security.enabled: false
xpack.monitoring.enabled: true
#
http.cors.enabled: true  ## allow cross-origin (CORS) requests
http.cors.allow-origin: "*"

Elasticsearch's default JVM heap is only 1 GB, which is not enough for production, so size the heap according to the available hardware.
Edit jvm.options:

# Xms represents the initial size of total heap space
# Xmx represents the maximum size of total heap space

-Xms31g  ## recommended: half of physical memory, but no more than 32 GB
-Xmx31g


-XX:NewSize=6G  ## young-generation heap size
-XX:MaxNewSize=6G

Start it as a daemon: ./elasticsearch -d

At this point the startup will fail:
Elasticsearch cannot be started as root, so a dedicated user must be created.
Create the user: useradd elk
Set its password: passwd elk
Change the owner of the installation directory: chown -R elk:elk elk  (syntax: chown user:group directory)
Switch to that user, enter the bin directory and run ./elasticsearch
If the error appears: max virtual memory areas vm.max_map_count [65530] is too low, increase to at least [262144]
vim /etc/sysctl.conf
add: vm.max_map_count=262144
run sysctl -p
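You can confirm the new limit with sysctl vm.max_map_count (it should now print 262144).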

Because bootstrap.memory_lock was enabled in elasticsearch.yml above, the memory-lock limits must also be raised:
vim /etc/systemd/system.conf
add: DefaultLimitMEMLOCK=infinity
vim /etc/security/limits.conf
add:

  • elk hard memlock unlimited
  • elk soft memlock unlimited

Run systemctl daemon-reload
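
After restarting Elasticsearch under the elk user, you can check that the memory lock actually took effect (the mlockall flag is exposed by the nodes info API):

curl "http://172.16.10.3:9200/_nodes?filter_path=**.mlockall&pretty"   ## every node should report "mlockall" : true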

Check the startup result: curl http://ip:9200/?pretty

{
  "name" : "node-3",
  "cluster_name" : "myelk",
  "cluster_uuid" : "6xYF8fRWR_q1eXbB_FCGWQ",
  "version" : {
    "number" : "7.7.0",
    "build_flavor" : "default",
    "build_type" : "tar",
    "build_hash" : "81a1e9eda8e6183f5237786246f6dced26a10eaf",
    "build_date" : "2020-05-12T02:01:37.602180Z",
    "build_snapshot" : false,
    "lucene_version" : "8.5.1",
    "minimum_wire_compatibility_version" : "6.8.0",
    "minimum_index_compatibility_version" : "6.0.0-beta1"
  },
  "tagline" : "You Know, for Search"
}
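Once the other nodes have been started the same way, you can optionally confirm that all of them joined the cluster:

curl http://172.16.10.3:9200/_cat/nodes?v   ## all three IPs should be listed, with the elected master marked by *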
Kibana installation:

Download the official Kibana 7.7.0 package for Linux and extract it.
Kibana should also be started as a non-root user.
After extraction, edit kibana.yml as follows:

# Kibana is served by a back end server. This setting specifies the port to use.
server.port: 5602  ## port number (Kibana's default is 5601)

# Specifies the address to which the Kibana server will bind. IP addresses and host names are both valid values.
# The default is 'localhost', which usually means remote machines will not be able to connect.
# To allow connections from remote users, set this parameter to a non-loopback address.
server.host: "0.0.0.0"  ## 设置0.0.0.0即可

# The maximum payload size in bytes for incoming server requests.
#server.maxPayloadBytes: 1048576

# The Kibana server's name.  This is used for display purposes.
server.name: "mykibana"  ## 服务名称

# The URLs of the Elasticsearch instances to use for all your queries.
#elasticsearch.hosts: ["http://172.16.10.3:9200","http://172.16.10.4:9200","http://172.16.10.5:9200"]
elasticsearch.hosts: ["http://172.16.10.3:9200"]  ## connecting to a single node of the ES cluster is enough

i18n.locale: "zh-CN" ## use the Chinese-language UI

Start it as a background process: nohup ./kibana &
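
Kibana should then be reachable in a browser at http://<server ip>:5602 (the port configured above); with the monitoring settings enabled earlier, the Stack Monitoring page should show the Filebeat, Logstash and Elasticsearch metrics.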
