Elasticsearch Cluster Transport Layer Security Configuration

Transport layer security builds on the minimal security requirements (username and password) by installing certificates that authenticate the nodes in the cluster, preventing unauthorized nodes from joining your Elasticsearch cluster.
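For reference, these are the transport-security settings involved, expressed in elasticsearch.yml form (a minimal sketch; the docker-compose.yml below passes the same settings as environment variables instead):

xpack.security.enabled: true
xpack.security.transport.ssl.enabled: true
xpack.security.transport.ssl.verification_mode: certificate
xpack.security.transport.ssl.client_authentication: required
# both point at the PKCS#12 file generated in the next step
xpack.security.transport.ssl.keystore.path: elastic-certificates.p12
xpack.security.transport.ssl.truststore.path: elastic-certificates.p12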

Generating the Certificate

The official distribution ships with a tool called elasticsearch-certutil in the bin directory for generating certificates.

Start an Elasticsearch instance and go to its bin directory.
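Since the nodes here run in Docker, one way to get a shell inside the instance is docker exec (a sketch, assuming the container is named es01 as in the compose file below):

docker exec -it es01 /bin/bash
cd /usr/share/elasticsearch/bin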

[root@7bd455c1db3a bin]# elasticsearch-certutil cert
This tool assists you in the generation of X.509 certificates and certificate
signing requests for use with SSL/TLS in the Elastic stack.

The 'cert' mode generates X.509 certificate and private keys.
    * By default, this generates a single certificate and key for use
       on a single instance.
    * The '-multiple' option will prompt you to enter details for multiple
       instances and will generate a certificate and key for each one
    * The '-in' option allows for the certificate generation to be automated by describing
       the details of each instance in a YAML file

    * An instance is any piece of the Elastic Stack that requires an SSL certificate.
      Depending on your configuration, Elasticsearch, Logstash, Kibana, and Beats
      may all require a certificate and private key.
    * The minimum required value for each instance is a name. This can simply be the
      hostname, which will be used as the Common Name of the certificate. A full
      distinguished name may also be used.
    * A filename value may be required for each instance. This is necessary when the
      name would result in an invalid file or directory name. The name provided here
      is used as the directory name (within the zip) and the prefix for the key and
      certificate files. The filename is required if you are prompted and the name
      is not displayed in the prompt.
    * IP addresses and DNS names are optional. Multiple values can be specified as a
      comma separated string. If no IP addresses or DNS names are provided, you may
      disable hostname verification in your SSL configuration.

    * All certificates generated by this tool will be signed by a certificate authority (CA)
      unless the --self-signed command line option is specified.
      The tool can automatically generate a new CA for you, or you can provide your own with
      the --ca or --ca-cert command line options.

By default the 'cert' mode produces a single PKCS#12 output file which holds:
    * The instance certificate
    * The private key for the instance certificate
    * The CA certificate

If you specify any of the following options:
    * -pem (PEM formatted output)
    * -keep-ca-key (retain generated CA key)
    * -multiple (generate multiple certificates)
    * -in (generate certificates from an input file)
then the output will be a zip file containing individual certificate/key files

Note: Generating certificates without providing a CA certificate is deprecated.
      A CA certificate will become mandatory in the next major release.

Please enter the desired output file [elastic-certificates.p12]: 
Enter password for elastic-certificates.p12 : 

Certificates written to /usr/share/elasticsearch/elastic-certificates.p12

This file should be properly secured as it contains the private key for 
your instance.

This file is a self contained file and can be copied and used 'as is'
For each Elastic product that you wish to configure, you should copy
this '.p12' file to the relevant configuration directory
and then follow the SSL configuration instructions in the product guide.

Running the command prompts for two inputs:

  • Please enter the desired output file [elastic-certificates.p12]: the output file name; press Enter to accept the default
  • Enter password for elastic-certificates.p12: a password for the certificate; it can be left empty

Certificates written to /usr/share/elasticsearch/elastic-certificates.p12

This line from the output shows where the certificate file was written.
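If you would rather avoid the interactive prompts (for example when scripting the setup), the certificate can also be generated non-interactively. A sketch using the tool's standard options, creating an explicit CA and using empty passwords to mirror the choices above:

# generate a CA, then a node certificate signed by it
elasticsearch-certutil ca --out elastic-stack-ca.p12 --pass ""
elasticsearch-certutil cert --ca elastic-stack-ca.p12 --ca-pass "" --out elastic-certificates.p12 --pass ""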

Use docker cp to copy the certificate file out of the container:

docker cp es01:/usr/share/elasticsearch/elastic-certificates.p12 .

Adjust the certificate file's permissions:

chmod 777 elastic-certificates.p12
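chmod 777 is the quick option; strictly, the file only needs to be readable by the elasticsearch user inside the container. A tighter alternative, assuming the official image's default UID:GID of 1000:0:

chown 1000:0 elastic-certificates.p12
chmod 640 elastic-certificates.p12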

Modifying the Configuration

Edit the docker-compose.yml file:

version: '2.2'
services: 
  es01:
    image: elasticsearch:7.14.1
    container_name: es01
    environment:
      - node.name=es01
      - discovery.seed_hosts=es02
      - cluster.initial_master_nodes=es01,es02
      - cluster.name=docker-cluster
      # enable memory locking
      - bootstrap.memory_lock=true
      # limit the JVM heap size
      - "ES_JAVA_OPTS=-Xms512m -Xmx512m"
      # enable security features
      - xpack.security.enabled=true
      - xpack.security.transport.ssl.enabled=true
      - xpack.security.transport.ssl.verification_mode=certificate
      - xpack.security.transport.ssl.client_authentication=required
      - xpack.security.transport.ssl.keystore.path=elastic-certificates.p12
      - xpack.security.transport.ssl.truststore.path=elastic-certificates.p12
    volumes:
      - /root/work/elastic-certificates.p12:/usr/share/elasticsearch/config/elastic-certificates.p12
    # memory locking ulimits
    ulimits:
      memlock:
        soft: -1
        hard: -1
  es02:
    image: elasticsearch:7.14.1
    container_name: es02
    environment:
      - node.name=es02
      - discovery.seed_hosts=es01
      - cluster.initial_master_nodes=es01,es02
      - cluster.name=docker-cluster
      - bootstrap.memory_lock=true
      - "ES_JAVA_OPTS=-Xms512m -Xmx512m"
      - xpack.security.enabled=true
      - xpack.security.transport.ssl.enabled=true
      - xpack.security.transport.ssl.verification_mode=certificate 
      - xpack.security.transport.ssl.client_authentication=required
      - xpack.security.transport.ssl.keystore.path=elastic-certificates.p12
      - xpack.security.transport.ssl.truststore.path=elastic-certificates.p12
    volumes:
      - /root/work/elastic-certificates.p12:/usr/share/elasticsearch/config/elastic-certificates.p12
    ulimits:
      memlock:
        soft: -1
        hard: -1
  kibana:
    image: kibana:7.14.1
    container_name: kibana
    environment:
      - SERVER_NAME=kibana.localhost
      - ELASTICSEARCH_HOSTS=http://es01:9200
      - I18N_LOCALE=zh-CN
      - ELASTICSEARCH_USERNAME=elastic
      - ELASTICSEARCH_PASSWORD=123123
      - XPACK_ENCRYPTEDSAVEDOBJECTS_ENCRYPTIONKEY=fhjskloppd678ehkdfdlliverpoolfcr
    ports:
      - 5601:5601
    depends_on:
      - es01
  filebeat:
    image: elastic/filebeat:7.14.1
    container_name: filebeat
    volumes:
      - /root/work/beats/filebeat/filebeat.yml:/usr/share/filebeat/filebeat.yml
      - /root/work/logs/:/root/work/logs/
    depends_on:
      - es01
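With security enabled, Filebeat also needs credentials to ship logs to Elasticsearch. A minimal sketch of the mounted filebeat.yml (the log path and credentials are assumptions that mirror the volume mount and the elastic password used above):

filebeat.inputs:
  - type: log
    enabled: true
    paths:
      - /root/work/logs/*.log

output.elasticsearch:
  hosts: ["http://es01:9200"]
  username: "elastic"
  password: "123123"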

Start the containers with docker-compose up -d.

Then set passwords for the Elasticsearch cluster's built-in users. The elastic password must match the one configured for Kibana above; see the earlier post on setting passwords for the detailed steps.
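For reference, the built-in user passwords can be set with the elasticsearch-setup-passwords tool bundled with 7.x (a sketch; interactive mode prompts for each built-in user, and the elastic password entered here must be 123123 to match the Kibana configuration above):

docker exec -it es01 /usr/share/elasticsearch/bin/elasticsearch-setup-passwords interactive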

After that, you can access Kibana on port 5601.
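To confirm that both nodes joined the cluster over the secured transport, you can also query the _cat/nodes API from inside one of the containers (a sketch, assuming the elastic password set above):

docker exec es01 curl -s -u elastic:123123 http://localhost:9200/_cat/nodes?v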

Original article: https://outofmemory.cn/zaji/5678742.html
