Deploying Multi-Node OpenStack (Train) with kolla-ansible and Integrating Ceph


Readers are encouraged to start with the official documentation.
I recently used kolla-ansible to deploy a multi-node OpenStack test environment for exercising a production scenario: evacuating instances from a failed compute node.
Although the official documentation is very detailed, the deployment still hit a number of issues along the way.

This post records the complete procedure for a quick kolla-ansible-based multi-node OpenStack deployment, so that others can stand up their own environment quickly.

① The kolla-ansible quick-start guide:

https://docs.openstack.org/kolla-ansible/latest/user/quickstart.html

② The Ansible documentation:

http://docs.ansible.com/

③ The kolla-ansible Ceph guide:

https://docs.openstack.org/kolla-ansible/queens/reference/ceph-guide.html

④ The kolla-ansible Swift guide:

https://docs.openstack.org/kolla-ansible/pike/reference/swift-guide.html

⑤ My earlier Docker fundamentals labs:

https://blog.csdn.net/qq_28513801/category_8592442.html

Part 1: Environment Preparation
The base environment is OpenStack Train, with the matching Ceph image tags; the kolla-ansible version used is 9.0.0.
1. Base environment resources

Main components and versions:

Component  | Version
Keystone   | T
Nova       | T
Glance     | T
Neutron    | T
Cinder     | T
Ironic     | T
Ceph       | T
Swift      | T
Haproxy    | \
Keepalived | \
2. Server resource layout. Each control node has two NICs, eth0 and eth1 (eth1 carries no address). Each storage node has four data disks, of which vdd is the Ceph cache acceleration disk.
Role            | Hostname  | eth0           | CPU | Memory | vda | vdb  | vdc  | vdd | vde
Deployment node | deploy    | 172.31.234.212 | 8C  | 16G    | 50G | \    | \    | \   | \
Control node    | control01 | 172.31.234.212 | 8C  | 16G    | 50G | \    | \    | \   | \
Control node    | control02 | 172.31.234.52  | 8C  | 16G    | 50G | \    | \    | \   | \
Control node    | control03 | 172.31.234.142 | 8C  | 16G    | 50G | \    | \    | \   | \
Compute node    | compute01 | 172.31.234.246 | 16C | 32G    | 50G | \    | \    | \   | \
Compute node    | compute02 | 172.31.234.226 | 16C | 32G    | 50G | \    | \    | \   | \
Storage node    | ceph-01   | 172.31.234.27  | 4C  | 8G     | 50G | 100G | 100G | 80G | 100G
Storage node    | ceph-02   | 172.31.234.214 | 4C  | 8G     | 50G | 100G | 100G | 80G | 100G
Storage node    | ceph-03   | 172.31.234.218 | 4C  | 8G     | 50G | 100G | 100G | 80G | 100G
Network node    | network01 | 172.31.234.179 | 8C  | 16G    | 50G | \    | \    | \   | \
Part 2: Software Installation
2.1 About kolla-ansible and this environment
For newer deployments, the mapping between OpenStack releases and kolla-ansible versions is as follows:
Train 9.x.x
Stein 8.x.x
Rocky 7.x.x
Queens 6.x.x
Pike 5.x.x
Ocata 4.x.x
pip3 install kolla-ansible==9.0.0.0rc1   # example of pinning a specific release
To make tab completion work, install the bash-completion plugin:

[root@control01 ~]# yum install -y bash-completion
[root@control01 ~]# source /usr/share/bash-completion/bash_completion

[root@control01 ~]#

Environment information for this installation:
[root@control01 ~]# cat /etc/hosts
127.0.0.1 localhost
::1         localhost localhost.localdomain localhost6 localhost6.localdomain6
172.31.234.212 control01
172.31.234.52 control02
172.31.234.142 control03
172.31.234.179 network01
172.31.234.246 compute01
172.31.234.226 compute02
172.31.234.27 ceph-01
172.31.234.214 ceph-02
172.31.234.218 ceph-03
172.31.241.232 registry

# BEGIN ANSIBLE GENERATED HOSTS
172.31.234.212 control01
172.31.234.52 control02
172.31.234.142 control03
172.31.234.179 network01
172.31.234.246 compute01
172.31.234.226 compute02
172.31.234.27 ceph-01
172.31.234.214 ceph-02
172.31.234.218 ceph-03
# END ANSIBLE GENERATED HOSTS



2.2 Install the required software
[root@control01 ~]#   yum install -y yum-utils device-mapper-persistent-data lvm2
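
docker-ce is not available in the default CentOS repositories, so the Docker CE yum repo has to be added first; a minimal sketch, assuming the upstream repo URL (substitute a local mirror if you have one):

[root@control01 ~]# yum-config-manager --add-repo https://download.docker.com/linux/centos/docker-ce.repo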

[root@control01 ~]# yum install docker-ce -y

[root@control01 ~]# systemctl daemon-reload
[root@control01 ~]# systemctl enable docker
[root@control01 ~]# systemctl start docker

[root@control01 ~]# yum install python-pip ansible -y
[root@control01 ~]# pip install -U pip

# Configure a Docker registry mirror

[root@control01 ~]# mkdir -p /etc/docker
[root@control01 ~]# tee /etc/docker/daemon.json <<-'EOF'
{
  "registry-mirrors": ["https://registry.docker-cn.com"]
}
EOF
[root@control01 ~]# systemctl daemon-reload
[root@control01 ~]# systemctl restart docker
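
A quick check that Docker picked up the mirror configuration:

[root@control01 ~]# docker info | grep -A1 'Registry Mirrors'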



2.3 Node initialization. Run this initialization script on every node to avoid errors during the installation:
#!/bin/sh
# disable SELinux permanently (takes effect after reboot)
sed -i 's/SELINUX=.*/SELINUX=disabled/g' /etc/selinux/config
# reset DNS resolution
echo '' > /etc/resolv.conf
echo nameserver 114.114.114.114 >> /etc/resolv.conf
echo search novalocal >> /etc/resolv.conf
# enable IP forwarding
echo "net.ipv4.ip_forward = 1" >> /etc/sysctl.conf && sysctl -p
yum install vim wget -y
# stop and disable the firewall
systemctl stop firewalld
systemctl disable firewalld
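
A possible way to run this script on every node at once from the deployment node (assuming it is saved locally as init.sh and using the multinode inventory prepared in section 2.6):

[root@control01 ~]# ansible -i multinode all -m script -a "init.sh"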
2.4 Install kolla-ansible (version==9.0.0)
pip install kolla-ansible==9.0.0   # use exactly 9.0.0 here, otherwise you will hit unexpected errors

A PyPI mirror can be used to speed things up:
https://mirrors.tuna.tsinghua.edu.cn/pypi/web/simple/

For example:
pip install kolla-ansible==9.0.0 -i https://mirrors.tuna.tsinghua.edu.cn/pypi/web/simple/


If an error occurs during installation, the offending package can be ignored for now and reinstalled later if needed.

For example, the error:

Cannot uninstall 'PyYAML'. It is a distutils installed project 
and thus we cannot accurately determine which files
 belong to it which would lead to only a partial uninstall


can be worked around by ignoring the installed package:
pip install kolla-ansible --ignore-installed PyYAML -i https://mirrors.tuna.tsinghua.edu.cn/pypi/web/simple/

and the specific module reinstalled later if needed:

pip install XXXX==9.0.0



2.5 Copy the configuration files and set Ansible parallelism

mkdir -p /etc/kolla
chown $USER:$USER /etc/kolla
cp -r /usr/share/kolla-ansible/etc_examples/kolla/* /etc/kolla/    ##Copy globals.yml and passwords.yml
cp /usr/share/kolla-ansible/ansible/inventory/* .    ##Copy all-in-one and multinode inventory files

# Configure Ansible: /etc/ansible/ansible.cfg

[defaults]
host_key_checking=False
pipelining=True
forks=100
2.6 Configure the inventory file. For this deployment scenario the multinode file is used (full contents below):

[root@control01 ~]# cat multinode
# These initial groups are the only groups required to be modified. The
# additional groups are for more control of the environment.
[control]
# These hostname must be resolvable from your deployment host
control01
control02
control03

# The above can also be specified as follows:
#control[01:03]     ansible_user=kolla

# The network nodes are where your l3-agent and loadbalancers will run
# This can be the same as a host in the control group
[network]
network01

[compute]
compute01
compute02

[monitoring]
network01

# When compute nodes and control nodes use different interfaces,
# you need to comment out "api_interface" and other interfaces from the globals.yml
# and specify like below:
#compute01 neutron_external_interface=eth0 api_interface=em1 storage_interface=em1 tunnel_interface=em1

[storage]
ceph-01
ceph-02
ceph-03

[deployment]
localhost       ansible_connection=local

[baremetal:children]
control
network
compute
storage
monitoring

# You can explicitly specify which hosts run each project by updating the
# groups in the sections below. Common services are grouped together.
[chrony-server:children]
haproxy

[chrony:children]
control
network
compute
storage
monitoring

[collectd:children]
compute

[grafana:children]
monitoring

[etcd:children]
control

[influxdb:children]
monitoring

[prometheus:children]
monitoring

[kafka:children]
control

[karbor:children]
control

[kibana:children]
control

[telegraf:children]
compute
control
monitoring
network
storage

[elasticsearch:children]
control

[haproxy:children]
#network
control

[hyperv]
#hyperv_host

[hyperv:vars]
#ansible_user=user
#ansible_password=password
#ansible_port=5986
#ansible_connection=winrm
#ansible_winrm_server_cert_validation=ignore

[mariadb:children]
control

[rabbitmq:children]
control

[outward-rabbitmq:children]
control

[qdrouterd:children]
control

[monasca-agent:children]
compute
control
monitoring
network
storage

[monasca:children]
monitoring

[storm:children]
monitoring

[mongodb:children]
control

[keystone:children]
control

[glance:children]
control

[nova:children]
control

[neutron:children]
network

[openvswitch:children]
network
compute
manila-share

[opendaylight:children]
network

[cinder:children]
control

[cloudkitty:children]
control

[freezer:children]
control

[memcached:children]
control

[horizon:children]
control

[swift:children]
control

[barbican:children]
control

[heat:children]
control

[murano:children]
control

[solum:children]
control

[ironic:children]
control

[ceph:children]
control

[magnum:children]
control

[qinling:children]
control

[sahara:children]
control

[mistral:children]
control

[manila:children]
control

[ceilometer:children]
control

[aodh:children]
control

[cyborg:children]
control
compute

[congress:children]
control

[panko:children]
control

[gnocchi:children]
control

[tacker:children]
control

[trove:children]
control

# Tempest
[tempest:children]
control

[senlin:children]
control

[vmtp:children]
control

[vitrage:children]
control

[watcher:children]
control

[rally:children]
control

[searchlight:children]
control

[octavia:children]
control

[designate:children]
control

[placement:children]
control

[bifrost:children]
deployment

[zookeeper:children]
control

[zun:children]
control

[skydive:children]
monitoring

[redis:children]
control

[blazar:children]
control

# Additional control implemented here. These groups allow you to control which
# services run on which hosts at a per-service level.
#
# Word of caution: Some services are required to run on the same host to
# function appropriately. For example, neutron-metadata-agent must run on the
# same host as the l3-agent and (depending on configuration) the dhcp-agent.

# Glance
[glance-api:children]
glance

# Nova
[nova-api:children]
nova

[nova-conductor:children]
nova

[nova-super-conductor:children]
nova

[nova-novncproxy:children]
nova

[nova-scheduler:children]
nova

[nova-spicehtml5proxy:children]
nova

[nova-compute-ironic:children]
nova

[nova-serialproxy:children]
nova

# Neutron
[neutron-server:children]
control

[neutron-dhcp-agent:children]
neutron

[neutron-l3-agent:children]
neutron

[neutron-metadata-agent:children]
neutron

[neutron-bgp-dragent:children]
neutron

[neutron-infoblox-ipam-agent:children]
neutron

[neutron-metering-agent:children]
neutron

[ironic-neutron-agent:children]
neutron

# Ceph
[ceph-mds:children]
ceph

[ceph-mgr:children]
ceph

[ceph-nfs:children]
ceph

[ceph-mon:children]
ceph

[ceph-rgw:children]
ceph

[ceph-osd:children]
storage

# Cinder
[cinder-api:children]
cinder

[cinder-backup:children]
storage

[cinder-scheduler:children]
cinder

[cinder-volume:children]
storage

# Cloudkitty
[cloudkitty-api:children]
cloudkitty

[cloudkitty-processor:children]
cloudkitty

# Freezer
[freezer-api:children]
freezer

[freezer-scheduler:children]
freezer

# iSCSI
[iscsid:children]
compute
storage
ironic

[tgtd:children]
storage

# Karbor
[karbor-api:children]
karbor

[karbor-protection:children]
karbor

[karbor-operationengine:children]
karbor

# Manila
[manila-api:children]
manila

[manila-scheduler:children]
manila

[manila-share:children]
network

[manila-data:children]
manila

# Swift
[swift-proxy-server:children]
swift

[swift-account-server:children]
storage

[swift-container-server:children]
storage

[swift-object-server:children]
storage

# Barbican
[barbican-api:children]
barbican

[barbican-keystone-listener:children]
barbican

[barbican-worker:children]
barbican

# Heat
[heat-api:children]
heat

[heat-api-cfn:children]
heat

[heat-engine:children]
heat

# Murano
[murano-api:children]
murano

[murano-engine:children]
murano

# Monasca
[monasca-agent-collector:children]
monasca-agent

[monasca-agent-forwarder:children]
monasca-agent

[monasca-agent-statsd:children]
monasca-agent

[monasca-api:children]
monasca

[monasca-grafana:children]
monasca

[monasca-log-api:children]
monasca

[monasca-log-transformer:children]
monasca

[monasca-log-persister:children]
monasca

[monasca-log-metrics:children]
monasca

[monasca-thresh:children]
monasca

[monasca-notification:children]
monasca

[monasca-persister:children]
monasca

# Storm
[storm-worker:children]
storm

[storm-nimbus:children]
storm

# Ironic
[ironic-api:children]
ironic

[ironic-conductor:children]
ironic

[ironic-inspector:children]
ironic

[ironic-pxe:children]
ironic

[ironic-ipxe:children]
ironic

# Magnum
[magnum-api:children]
magnum

[magnum-conductor:children]
magnum

# Qinling
[qinling-api:children]
qinling

[qinling-engine:children]
qinling

# Sahara
[sahara-api:children]
sahara

[sahara-engine:children]
sahara

# Solum
[solum-api:children]
solum

[solum-worker:children]
solum

[solum-deployer:children]
solum

[solum-conductor:children]
solum

[solum-application-deployment:children]
solum

[solum-image-builder:children]
solum

# Mistral
[mistral-api:children]
mistral

[mistral-executor:children]
mistral

[mistral-engine:children]
mistral

[mistral-event-engine:children]
mistral

# Ceilometer
[ceilometer-central:children]
ceilometer

[ceilometer-notification:children]
ceilometer

[ceilometer-compute:children]
compute

[ceilometer-ipmi:children]
compute

# Aodh
[aodh-api:children]
aodh

[aodh-evaluator:children]
aodh

[aodh-listener:children]
aodh

[aodh-notifier:children]
aodh

# Cyborg
[cyborg-api:children]
cyborg

[cyborg-agent:children]
compute

[cyborg-conductor:children]
cyborg

# Congress
[congress-api:children]
congress

[congress-datasource:children]
congress

[congress-policy-engine:children]
congress

# Panko
[panko-api:children]
panko

# Gnocchi
[gnocchi-api:children]
gnocchi

[gnocchi-statsd:children]
gnocchi

[gnocchi-metricd:children]
gnocchi

# Trove
[trove-api:children]
trove

[trove-conductor:children]
trove

[trove-taskmanager:children]
trove

# Multipathd
[multipathd:children]
compute
storage

# Watcher
[watcher-api:children]
watcher

[watcher-engine:children]
watcher

[watcher-applier:children]
watcher

# Senlin
[senlin-api:children]
senlin

[senlin-engine:children]
senlin

# Searchlight
[searchlight-api:children]
searchlight

[searchlight-listener:children]
searchlight

# Octavia
[octavia-api:children]
octavia

[octavia-health-manager:children]
octavia

[octavia-housekeeping:children]
octavia

[octavia-worker:children]
octavia

# Designate
[designate-api:children]
designate

[designate-central:children]
designate

[designate-producer:children]
designate

[designate-mdns:children]
network

[designate-worker:children]
designate

[designate-sink:children]
designate

[designate-backend-bind9:children]
designate

# Placement
[placement-api:children]
placement

# Zun
[zun-api:children]
zun

[zun-wsproxy:children]
zun

[zun-compute:children]
compute

# Skydive
[skydive-analyzer:children]
skydive

[skydive-agent:children]
compute
network

# Tacker
[tacker-server:children]
tacker

[tacker-conductor:children]
tacker

# Vitrage
[vitrage-api:children]
vitrage

[vitrage-notifier:children]
vitrage

[vitrage-graph:children]
vitrage

[vitrage-ml:children]
vitrage

# Blazar
[blazar-api:children]
blazar

[blazar-manager:children]
blazar

# Prometheus
[prometheus-node-exporter:children]
monitoring
control
compute
network
storage

[prometheus-mysqld-exporter:children]
mariadb

[prometheus-haproxy-exporter:children]
haproxy

[prometheus-memcached-exporter:children]
memcached

[prometheus-cadvisor:children]
monitoring
control
compute
network
storage

[prometheus-alertmanager:children]
monitoring

[prometheus-openstack-exporter:children]
monitoring

[prometheus-elasticsearch-exporter:children]
elasticsearch

[prometheus-blackbox-exporter:children]
monitoring

[masakari-api:children]
control

[masakari-engine:children]
control

[masakari-monitors:children]
compute

2.7 Configure globals.yml (the full configuration and enabled components are shown below)
[root@control01 kolla]# cat globals.yml | grep -v '^#'| grep -v '^$'
---
kolla_base_distro: "centos"
kolla_install_type: "source"
openstack_release: "train"
node_custom_config: "/etc/kolla/config"
kolla_internal_vip_address: "172.31.234.208"
network_interface: "eth0"
kolla_external_vip_interface: "{{ network_interface }}"
api_interface: "{{ network_interface }}"
storage_interface: "{{ network_interface }}"
cluster_interface: "{{ network_interface }}"
swift_storage_interface: "{{ storage_interface }}"
swift_replication_interface: "{{ swift_storage_interface }}"
tunnel_interface: "{{ network_interface }}"
dns_interface: "{{ network_interface }}"
neutron_external_interface: "eth1"
neutron_plugin_agent: "openvswitch"
keepalived_virtual_router_id: "66"
enable_opendaylight_qos: "yes"
enable_opendaylight_l3: "yes"
openstack_logging_debug: "True"
nova_console: "novnc"
enable_glance: "yes"
enable_haproxy: "yes"
enable_keepalived: "{{ enable_haproxy | bool }}"
enable_keystone: "yes"
enable_mariadb: "yes"
enable_memcached: "yes"
enable_neutron: "{{ enable_openstack_core | bool }}"
enable_nova: "{{ enable_openstack_core | bool }}"
enable_rabbitmq: "{{ 'yes' if om_rpc_transport == 'rabbit' or om_notify_transport == 'rabbit' else 'no' }}"
enable_blazar: "no"
enable_ceilometer: "yes"
enable_ceph: "yes"
enable_ceph_mds: "yes"
enable_ceph_rgw: "yes"
enable_ceph_dashboard: "{{ enable_ceph | bool }}"
enable_chrony: "yes"
enable_cinder: "yes"
enable_cinder_backup: "yes"
enable_cloudkitty: "no"
enable_freezer: "yes"
enable_gnocchi: "yes"
enable_grafana: "yes"
enable_heat: "{{ enable_openstack_core | bool }}"
enable_horizon: "{{ enable_openstack_core | bool }}"
enable_horizon_blazar: "{{ enable_blazar | bool }}"
enable_horizon_cloudkitty: "{{ enable_cloudkitty | bool }}"
enable_horizon_freezer: "{{ enable_freezer | bool }}"
enable_horizon_ironic: "{{ enable_ironic | bool }}"
enable_horizon_karbor: "{{ enable_karbor | bool }}"
enable_horizon_murano: "{{ enable_murano | bool }}"
enable_horizon_neutron_lbaas: "{{ enable_neutron_lbaas | bool }}"
enable_horizon_sahara: "{{ enable_sahara | bool }}"
enable_horizon_senlin: "{{ enable_senlin | bool }}"
enable_horizon_solum: "{{ enable_solum | bool }}"
enable_horizon_watcher: "{{ enable_watcher | bool }}"
enable_horizon_zun: "{{ enable_zun | bool }}"
enable_ironic: "yes"
enable_ironic_ipxe: "yes"
enable_ironic_neutron_agent: "yes"
enable_kafka: "yes"
enable_karbor: "yes"
enable_kuryr: "yes"
enable_murano: "yes"
enable_neutron_lbaas: "yes"
enable_neutron_qos: "yes"
enable_neutron_sriov: "yes"
enable_nova_ssh: "yes"
enable_openvswitch: "{{ enable_neutron | bool and neutron_plugin_agent != 'linuxbridge' }}"
enable_placement: "yes"
enable_prometheus: "yes"
enable_sahara: "yes"
enable_senlin: "yes"
enable_solum: "yes"
enable_swift: "yes"
enable_tempest: "no"
enable_watcher: "yes"
enable_zun: "yes"
ceph_enable_cache: "yes"
external_ceph_cephx_enabled: "yes"
ceph_cache_mode: "writeback"
ceph_pool_type: "replicated"
enable_ceph_rgw_keystone: "no"
ceph_pool_pg_num: 8
ceph_pool_pgp_num: 8
keystone_token_provider: 'fernet'
keystone_admin_user: "admin"
keystone_admin_project: "admin"
fernet_token_expiry: 86400
glance_backend_ceph: "yes"
glance_backend_file: "yes"
glance_enable_rolling_upgrade: "no"
cinder_backend_ceph: "yes"
cinder_volume_group: "cinder-volumes"
cinder_backup_driver: "ceph"
cinder_backup_share: "ceph"
cinder_backup_mount_options_nfs: "ceph"
nova_backend_ceph: "yes"
nova_compute_virt_type: "qemu"
num_nova_fake_per_node: 5
horizon_backend_database: "{{ enable_murano | bool }}"
ironic_dnsmasq_interface: "{{ network_interface }}"
ironic_dnsmasq_dhcp_range: "192.168.0.10,192.168.0.100"
ironic_dnsmasq_boot_file: "pxelinux.0"
swift_devices_match_mode: "strict"
swift_devices_name: "KOLLA_SWIFT_DATA"
tempest_image_id:
tempest_flavor_ref_id:
tempest_public_network_id:
tempest_floating_network_name:
enable_prometheus_haproxy_exporter: "{{ enable_haproxy | bool }}"
enable_prometheus_mysqld_exporter: "{{ enable_mariadb | bool }}"
enable_prometheus_node_exporter: "{{ enable_prometheus | bool }}"
enable_prometheus_cadvisor: "{{ enable_prometheus | bool }}"
enable_prometheus_memcached: "{{ enable_prometheus | bool }}"
enable_prometheus_alertmanager: "{{ enable_prometheus | bool }}"
enable_prometheus_ceph_mgr_exporter: "{{ enable_prometheus | bool and enable_ceph | bool }}"
enable_prometheus_openstack_exporter: "{{ enable_prometheus | bool }}"
enable_prometheus_elasticsearch_exporter: "{{ enable_prometheus | bool and enable_elasticsearch | bool }}"
[root@control01 kolla]#
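
Note that kolla_internal_vip_address must be an unused address on the network_interface subnet; keepalived brings it up on the active haproxy node. A quick sanity check before deploying (the address should not answer while it is still unassigned):

[root@control01 kolla]# ping -c 2 172.31.234.208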


Part 3: Deployment
3.1 Configure passwordless SSH login and authorize the nodes

[root@control01 ~]# ssh-keygen
[root@control01 ~]# ssh-copy-id  root@control01
[root@control01 ~]# ssh-copy-id  root@control02
[root@control01 ~]# ssh-copy-id  root@control03
....
....
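A loop sketch to push the key to the remaining nodes in one go (hostnames as resolved via /etc/hosts; assumes root password authentication is still enabled):

[root@control01 ~]# for h in control02 control03 network01 compute01 compute02 ceph-01 ceph-02 ceph-03; do ssh-copy-id root@${h}; done
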
Check host connectivity:
[root@control01 ~]# ansible -i multinode all -m ping
/usr/lib/python2.7/site-packages/ansible/parsing/vault/__init__.py:44: CryptographyDeprecationWarning: Python 2 is no longer supported by the Python core team. Support for it is now deprecated in cryptography, and will be removed in the next release.
  from cryptography.exceptions import InvalidSignature
[WARNING]: Invalid characters were found in group names but not replaced, use -vvvv to see details
compute01 | SUCCESS => {
    "ansible_facts": {
        "discovered_interpreter_python": "/usr/bin/python"
    },
    "changed": false,
    "ping": "pong"
}
control01 | SUCCESS => {
    "ansible_facts": {
        "discovered_interpreter_python": "/usr/bin/python"
    },
    "changed": false,
    "ping": "pong"
}
network01 | SUCCESS => {
    "ansible_facts": {
        "discovered_interpreter_python": "/usr/bin/python"
    },
    "changed": false,
    "ping": "pong"
}
compute02 | SUCCESS => {
    "ansible_facts": {
        "discovered_interpreter_python": "/usr/bin/python"
    },
    "changed": false,
    "ping": "pong"
}
ceph-01 | SUCCESS => {
    "ansible_facts": {
        "discovered_interpreter_python": "/usr/bin/python"
    },
    "changed": false,
    "ping": "pong"
}
localhost | SUCCESS => {
    "ansible_facts": {
        "discovered_interpreter_python": "/usr/bin/python"
    },
    "changed": false,
    "ping": "pong"
}
control02 | SUCCESS => {
    "ansible_facts": {
        "discovered_interpreter_python": "/usr/bin/python"
    },
    "changed": false,
    "ping": "pong"
}
ceph-02 | SUCCESS => {
    "ansible_facts": {
        "discovered_interpreter_python": "/usr/bin/python"
    },
    "changed": false,
    "ping": "pong"
}
control03 | SUCCESS => {
    "ansible_facts": {
        "discovered_interpreter_python": "/usr/bin/python"
    },
    "changed": false,
    "ping": "pong"
}
ceph-03 | SUCCESS => {
    "ansible_facts": {
        "discovered_interpreter_python": "/usr/bin/python"
    },
    "changed": false,
    "ping": "pong"
}
[root@control01 ~]#

3.2 Configure the data disks
Run the following commands on every machine that will serve as a storage node, so that kolla can recognize the Ceph and Swift data disks:

# Ceph OSD disk
parted /dev/vdb -s -- mklabel gpt mkpart KOLLA_CEPH_OSD_BOOTSTRAP 1 -1
# Swift data disk
parted /dev/vdc -s -- mklabel gpt mkpart KOLLA_SWIFT_DATA 1 -1
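
In this environment vde also serves as an OSD and vdd as the Ceph cache disk (see the lsblk output below), so those disks need labels too; a sketch, assuming kolla's cache bootstrap label:

# second Ceph OSD disk
parted /dev/vde -s -- mklabel gpt mkpart KOLLA_CEPH_OSD_BOOTSTRAP 1 -1
# Ceph cache disk
parted /dev/vdd -s -- mklabel gpt mkpart KOLLA_CEPH_OSD_CACHE_BOOTSTRAP 1 -1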
This deployment uses three Ceph nodes:
[root@control01 ~]# ssh ceph-01
Last login: Tue May  4 16:28:05 2021 from 172.31.234.212
[root@ceph-01 ~]# lsblk
NAME   MAJ:MIN RM  SIZE RO TYPE MOUNTPOINT
sr0     11:0    1  478K  0 rom
vda    253:0    0   50G  0 disk
├─vda1 253:1    0    1G  0 part /boot
└─vda2 253:2    0   49G  0 part /
vdb    253:16   0  100G  0 disk
├─vdb1 253:17   0  100M  0 part /var/lib/ceph/osd/2a0320fc-1841-45b6-a478-cc48d0a31519
└─vdb2 253:18   0 99.9G  0 part
vdc    253:32   0  100G  0 disk
└─vdc1 253:33   0  100G  0 part /srv/node
vdd    253:48   0   80G  0 disk
└─vdd1 253:49   0   80G  0 part
vde    253:64   0  100G  0 disk
├─vde1 253:65   0  100M  0 part /var/lib/ceph/osd/8e39785b-ca50-4cf2-b707-a370914735a7
└─vde2 253:66   0 99.9G  0 part
[root@ceph-01 ~]#

[root@control01 ~]# ssh ceph-02
Last login: Tue May  4 16:28:06 2021 from 172.31.234.212
[root@ceph-02 ~]# lsblk
NAME   MAJ:MIN RM  SIZE RO TYPE MOUNTPOINT
sr0     11:0    1  478K  0 rom
vda    253:0    0   50G  0 disk
├─vda1 253:1    0    1G  0 part /boot
└─vda2 253:2    0   49G  0 part /
vdb    253:16   0  100G  0 disk
├─vdb1 253:17   0  100M  0 part /var/lib/ceph/osd/dd8c5222-d8d9-4445-8deb-6d9133d85b50
└─vdb2 253:18   0 99.9G  0 part
vdc    253:32   0  100G  0 disk
└─vdc1 253:33   0  100G  0 part /srv/node
vdd    253:48   0   80G  0 disk
└─vdd1 253:49   0   80G  0 part
vde    253:64   0  100G  0 disk
├─vde1 253:65   0  100M  0 part /var/lib/ceph/osd/fa9a8c4d-2082-431a-b0a1-1a48e8568f3b
└─vde2 253:66   0 99.9G  0 part
[root@ceph-02 ~]#

[root@ceph-03 ~]# lsblk
NAME   MAJ:MIN RM  SIZE RO TYPE MOUNTPOINT
sr0     11:0    1  478K  0 rom
vda    253:0    0   50G  0 disk
├─vda1 253:1    0    1G  0 part /boot
└─vda2 253:2    0   49G  0 part /
vdb    253:16   0  100G  0 disk
├─vdb1 253:17   0  100M  0 part /var/lib/ceph/osd/5273a9e5-918e-4a47-bf91-a592b8b7ffe1
└─vdb2 253:18   0 99.9G  0 part
vdc    253:32   0  100G  0 disk
└─vdc1 253:33   0  100G  0 part /srv/node
vdd    253:48   0   80G  0 disk
└─vdd1 253:49   0   80G  0 part
vde    253:64   0  100G  0 disk
├─vde1 253:65   0  100M  0 part /var/lib/ceph/osd/38c0cbf7-679d-4074-8acf-5a5584595490
└─vde2 253:66   0 99.9G  0 part
[root@ceph-03 ~]#


3.3 Start the deployment (review sections 3.4-3.6 first)
# check and install dependencies on all hosts
kolla-ansible -i /etc/kolla/multinode bootstrap-servers -vvv
kolla-ansible -i /etc/kolla/multinode prechecks -vvv
# pull the images (run it again if a pull fails partway)
kolla-ansible -i /etc/kolla/multinode pull
# deploy
kolla-ansible -i /etc/kolla/multinode deploy

# if the deployment fails, tear everything down and retry
kolla-ansible -i /etc/kolla/multinode destroy --yes-i-really-really-mean-it
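
After a successful deploy, every kolla container should be running; a quick per-node check (it should list no kolla containers in the exited state):

docker ps --filter status=exited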

3.4 Handle dependencies
The hosts need the Python packages requests, websocket-client, backports.ssl-match-hostname, ipaddress and docker; for example:
ansible -i multinode all -m shell -a "pip install docker==4.4.4 -i https://mirrors.tuna.tsinghua.edu.cn/pypi/web/simple/"
ansible -i multinode all -m shell -a "pip install websocket-client -i https://mirrors.tuna.tsinghua.edu.cn/pypi/web/simple/"
...
....

3.5 Handle Swift
Adjust the s3token filter in the Swift proxy-server template, replacing www_authenticate_uri with auth_uri:
# vim /usr/share/kolla-ansible/ansible/roles/swift/templates/proxy-server.conf.j2
[filter:s3token]
use = egg:swift#s3token
#www_authenticate_uri = {{ keystone_internal_url }}/v3
auth_uri = {{ keystone_internal_url }}/v3
{% endif %}

# Partition and format the 3 data disks per node and label them KOLLA_SWIFT_DATA
# (device names here follow the kolla example; adjust them to your environment, e.g. vdc above)
index=0
for d in sdc sdd sde; do
    parted /dev/${d} -s -- mklabel gpt mkpart KOLLA_SWIFT_DATA 1 -1
    sudo mkfs.xfs -f -L d${index} /dev/${d}1
    (( index++ ))
done
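
A quick way to confirm what the loop created (a partition named KOLLA_SWIFT_DATA and filesystems labeled d0/d1/d2):

parted /dev/sdc print
blkid /dev/sdc1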

# Generate the rings
Before deploying Swift, the rings must be generated; this is done on the kolla-ansible deployment node.
To prepare for ring generation, run the following commands to initialize environment variables and create the /etc/kolla/config/swift directory. In the swift-ring-builder calls below, "create 10 3 1" means 2^10 partitions, 3 replicas, and a minimum of 1 hour between partition moves:

STORAGE_NODES=(172.31.234.27 172.31.234.214 172.31.234.218)
KOLLA_SWIFT_BASE_IMAGE="registry.cn-shenzhen.aliyuncs.com/kollaimage/centos-binary-swift-base:train"
mkdir -p /etc/kolla/config/swift

# Generate the object ring
docker run \
  --rm \
  -v /etc/kolla/config/swift/:/etc/kolla/config/swift/ \
  $KOLLA_SWIFT_BASE_IMAGE \
  swift-ring-builder \
    /etc/kolla/config/swift/object.builder create 10 3 1

for node in ${STORAGE_NODES[@]}; do
    for i in {0..2}; do
      docker run \
        --rm \
        -v /etc/kolla/config/swift/:/etc/kolla/config/swift/ \
        $KOLLA_SWIFT_BASE_IMAGE \
        swift-ring-builder \
          /etc/kolla/config/swift/object.builder add r1z1-${node}:6000/d${i} 1;
    done
done

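# Generate the account ring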
docker run \
  --rm \
  -v /etc/kolla/config/swift/:/etc/kolla/config/swift/ \
  $KOLLA_SWIFT_BASE_IMAGE \
  swift-ring-builder \
    /etc/kolla/config/swift/account.builder create 10 3 1

for node in ${STORAGE_NODES[@]}; do
    for i in {0..2}; do
      docker run \
        --rm \
        -v /etc/kolla/config/swift/:/etc/kolla/config/swift/ \
        $KOLLA_SWIFT_BASE_IMAGE \
        swift-ring-builder \
          /etc/kolla/config/swift/account.builder add r1z1-${node}:6001/d${i} 1;
    done
done

# Generate the container ring
docker run \
  --rm \
  -v /etc/kolla/config/swift/:/etc/kolla/config/swift/ \
  $KOLLA_SWIFT_BASE_IMAGE \
  swift-ring-builder \
    /etc/kolla/config/swift/container.builder create 10 3 1

for node in ${STORAGE_NODES[@]}; do
    for i in {0..2}; do
      docker run \
        --rm \
        -v /etc/kolla/config/swift/:/etc/kolla/config/swift/ \
        $KOLLA_SWIFT_BASE_IMAGE \
        swift-ring-builder \
          /etc/kolla/config/swift/container.builder add r1z1-${node}:6002/d${i} 1;
    done
done

# Rebalance the ring files:
for ring in object account container; do
  docker run \
    --rm \
    -v /etc/kolla/config/swift/:/etc/kolla/config/swift/ \
    $KOLLA_SWIFT_BASE_IMAGE \
    swift-ring-builder \
      /etc/kolla/config/swift/${ring}.builder rebalance;
done
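
Optionally, a rebalanced ring can be inspected with the same image; invoking swift-ring-builder with just the builder file prints its devices and balance:

docker run --rm -v /etc/kolla/config/swift/:/etc/kolla/config/swift/ $KOLLA_SWIFT_BASE_IMAGE swift-ring-builder /etc/kolla/config/swift/object.builder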

The resulting files:
[root@control01 ~]# tree /etc/kolla/config/swift/
/etc/kolla/config/swift/
├── account.builder
├── account.ring.gz
├── backups
│   ├── 1616692500.account.builder
│   ├── 1616692500.container.builder
│   ├── 1616692500.object.builder
│   ├── 1616692576.object.builder
│   ├── 1616692576.object.ring.gz
│   ├── 1616692577.account.builder
│   ├── 1616692577.account.ring.gz
│   ├── 1616692578.container.builder
│   └── 1616692578.container.ring.gz
├── container.builder
├── container.ring.gz
├── object.builder
└── object.ring.gz

1 directory, 15 files
[root@control01 ~]#

3.6 Handle Ceph
# This environment uses SSD cache tiering (SSD + SATA), as noted in globals.yml,
# so the cache crush rule and cache pool must be created manually, otherwise the deployment fails.
Create the Ceph dashboard admin user (the password is read from the /password file inside the ceph-mgr container):
(ceph-mgr)[root@control01 /]# ceph dashboard ac-user-create admin  -i /password  administrator
{"username": "admin", "lastUpdate": 1617620416, "name": null, "roles": ["administrator"], "password": "b$qqSC2Ach9R2lLwj8kg.Pge17ppOfQHJIwPKL2w5sYwLJXyHuX/Y/y", "email": null}
(ceph-mgr)[root@control01 /]#
docker exec ceph_mon ceph osd crush rule create-simple cache default host
docker exec ceph_mon ceph osd pool create cephfs_data-cache 512 512 replicated cache
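
A quick check that the rule exists before re-running the deploy:

docker exec ceph_mon ceph osd crush rule ls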
3.7 Handle Ironic
mkdir /etc/kolla/config/ironic
curl https://tarballs.opendev.org/openstack/ironic-python-agent/dib/files/ipa-centos8-stable-victoria.kernel -o /etc/kolla/config/ironic/ironic-agent.kernel
curl https://tarballs.opendev.org/openstack/ironic-python-agent/dib/files/ipa-centos8-stable-victoria.initramfs -o /etc/kolla/config/ironic/ironic-agent.initramfs

# Register the images; they are needed later for bare-metal provisioning (requires the admin credentials, see section 4)
openstack image create --disk-format aki --container-format aki --public --file /etc/kolla/config/ironic/ironic-agent.kernel deploy-vmlinuz
openstack image create --disk-format ari --container-format ari --public --file /etc/kolla/config/ironic/ironic-agent.initramfs deploy-initrd
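
Once the admin credentials are available (section 4 below), confirm the two deploy images were registered:

openstack image list | grep deploy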
Part 4: Deployment complete
# Check the Ceph cluster status
[root@control01 ~]# docker exec -it ceph_mon ceph -s
  cluster:
    id:     6901a603-3b98-4c7d-b64a-c48ab5b93fc7
    health: HEALTH_OK

  services:
    mon: 3 daemons, quorum 172.31.234.52,172.31.234.142,172.31.234.212 (age 31h)
    mgr: control01(active, since 4d), standbys: control02, control03
    mds: cephfs:1 {0=control01=up:active} 2 up:standby
    osd: 6 osds: 6 up (since 4w), 6 in (since 4w)
    rgw: 1 daemon active (radosgw.gateway)

  task status:

  data:
    pools:   16 pools, 1328 pgs
    objects: 283 objects, 13 MiB
    usage:   6.1 GiB used, 593 GiB / 599 GiB avail
    pgs:     1328 active+clean

[root@control01 ~]#




# The cinder-volume configuration file generated by kolla-ansible:
[DEFAULT]
debug = True
log_dir = /var/log/kolla/cinder
use_forwarded_for = true
use_stderr = False
my_ip = 172.31.234.214
osapi_volume_workers = 4
volume_name_template = volume-%s
glance_api_servers = http://172.31.234.208:9292
glance_num_retries = 3
glance_api_version = 2
os_region_name = RegionOne
enabled_backends = rbd-1
osapi_volume_listen = 172.31.234.214
osapi_volume_listen_port = 8776
api_paste_config = /etc/cinder/api-paste.ini
auth_strategy = keystone
transport_url = rabbit://openstack:OMXKcQsdkZ0XZfPTDjFKwT8SUmb5qfvnyxIfTDIp@172.31.234.212:5672,openstack:OMXKcQsdkZ0XZfPTDjFKwT8SUmb5qfvnyxIfTDIp@172.31.234.52:5672,openstack:OMXKcQsdkZ0XZfPTDjFKwT8SUmb5qfvnyxIfTDIp@172.31.234.142:5672//

[oslo_messaging_notifications]
transport_url = rabbit://openstack:OMXKcQsdkZ0XZfPTDjFKwT8SUmb5qfvnyxIfTDIp@172.31.234.212:5672,openstack:OMXKcQsdkZ0XZfPTDjFKwT8SUmb5qfvnyxIfTDIp@172.31.234.52:5672,openstack:OMXKcQsdkZ0XZfPTDjFKwT8SUmb5qfvnyxIfTDIp@172.31.234.142:5672//
driver = messagingv2
topics = notifications

[oslo_middleware]
enable_proxy_headers_parsing = True

[nova]
interface = internal
auth_url = http://172.31.234.208:35357
auth_type = password
project_domain_id = default
user_domain_id = default
region_name = RegionOne
project_name = service
username = nova
password = fofzmWYL0RjbpKRr3Rzxx52uJGhISUCDybdUShFK

[database]
connection = mysql+pymysql://cinder:Udkz1sy49ZRptVVqEY82hLmNibfO0SlXpdylVK8c@172.31.234.208:3306/cinder
max_retries = -1

[keystone_authtoken]
www_authenticate_uri = http://172.31.234.208:5000
auth_url = http://172.31.234.208:35357
auth_type = password
project_domain_id = default
user_domain_id = default
project_name = service
username = cinder
password = J308qwds7reI9o2gAIm5eiTzX4q3X8eZ7O96ioar
memcache_security_strategy = ENCRYPT
memcache_secret_key = DMIsuiGY5pfFNIQn4oOtCZyZCHk40nY1EN6sXm6G
memcached_servers = 172.31.234.212:11211,172.31.234.52:11211,172.31.234.142:11211

[oslo_concurrency]
lock_path = /var/lib/cinder/tmp

[rbd-1]
volume_driver = cinder.volume.drivers.rbd.RBDDriver
volume_backend_name = rbd-1
rbd_pool = volumes
rbd_ceph_conf = /etc/ceph/ceph.conf
rbd_flatten_volume_from_snapshot = false
rbd_max_clone_depth = 5
rbd_store_chunk_size = 4
rados_connect_timeout = 5
rbd_user = cinder
rbd_secret_uuid = a6fa3031-59af-4e5c-88dc-a44cd35f2aa9
report_discard_supported = True
image_upload_use_cinder_backend = True

[privsep_entrypoint]
helper_command = sudo cinder-rootwrap /etc/cinder/rootwrap.conf privsep-helper --config-file /etc/cinder/cinder.conf

[coordination]


1. Install the CLI client
pip install python-openstackclient

2. Generate the admin credentials file and source it
kolla-ansible post-deploy
. /etc/kolla/admin-openrc.sh
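
With the credentials loaded, a quick smoke test of the control plane:

openstack service list
openstack compute service list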

