OpenStack Train Installation Notes


Table of Contents

I. Installation Preparation
OpenStack deployment scripts
OpenStack component overview
vi settings and common commands (handy for editing config files later)
Enhanced bash tab completion
VM networking (controller node as example, NAT mode)
VM networking (bridged mode)
Changing the hostname (controller node as example)
SSH into the VMs from the host
Checking which OpenStack versions are available
Common troubleshooting commands
What to do after adding a new node
Hosts-file sync script and passwordless SSH login
II. Pre-installation Environment Configuration
1. Initial configuration (all nodes)
2. Database (controller node only)
3. Message queue (controller node only)
4. Memcached (controller node only)
III. OpenStack Component Installation and Verification
1. Keystone
2. Glance
3. Placement
4. Nova
        4.1 On the controller node
        4.2 On the compute node
        4.3 Verify on the controller
5. Neutron
        5.1 On the controller node
        5.2 On the compute node
        5.3 Verify on the controller node
6. Create an instance
7. Horizon
8. Configure additional compute nodes
9. Instance networking verification
        9.1 Three subnets and 3 compute nodes
        9.2 Bind floating IPs and add static routes
10. Instance migration (check "Block Migration", otherwise it fails and asks for shared storage)
11. Cinder
        11.1 On the controller node
        11.2 On the storage node
        11.3 Verification
        11.4 Using the cinder service
        11.5 Possible problems and solutions
12. Swift (to be covered in the next post together with Ceph storage)
        12.1 On the controller node
        12.2 On the object storage nodes
        12.3 Create the initial rings on the controller node
        12.4 Verify the installation
        12.5 Using Swift
IV. Issues and Solutions
Instances get no IP
Instance creation fails: no valid host found
V. Appendix
1. nova config file reference
2. neutron config file reference
3. httpd config file reference


I. Installation Preparation

Host machine: macOS Big Sur 11.6

VMware version: VMware Fusion 12.2

Image: CentOS-7-x86_64-Minimal-2009.iso

Each VM gets 2 vCPUs, 4 GB RAM, and a 20 GB disk

Networking uses NAT; the host's vmnet8 is configured with gateway 192.168.140.2 and netmask 255.255.255.0

controller gets the static IP 192.168.140.10, compute gets 192.168.140.20 ...

It is a good idea to take a snapshot after each service is configured

Reference video: https://www.bilibili.com/video/BV1fL4y1i7NZ?spm_id_from=333.999.0.0

Official installation guide: https://docs.openstack.org/train/install/


OpenStack deployment scripts

OpenStack component overview

vi settings and common commands (handy for editing config files later)

        Linux vi/vim | Runoob tutorial (菜鸟教程)

# enable line numbers in vi
vi /etc/virc
set number

# delete empty lines and lines containing only whitespace
:g/^\s*$/d

# delete lines starting with #, or whitespace/tab followed by #
:g/^\s*#/d

gg          # go to the top of the file
shift + g   # go to the end of the file
0           # go to the beginning of the line
shift + 4   # go to the end of the line ($)
/ + string  # search
dd          # delete a line
:.,$d       # delete from the current line to the end of the file
u           # undo the last operation
yy          # copy (yank) a line
p           # paste
Enhanced bash tab completion
yum -y install bash-completion
VM networking (controller node as example, NAT mode)

    Configure a static IP in the VM

vi /etc/sysconfig/network-scripts/ifcfg-ens33

# The VM uses NAT; the IP range and gateway here must match the host's vmnet8 settings (controller as example)
BOOTPROTO=static
...
ONBOOT=yes
IPADDR=192.168.140.10
NETMASK=255.255.255.0
GATEWAY=192.168.140.2
DNS1=114.114.114.114
DNS2=8.8.8.8

# restart the network service
systemctl restart network

# ip a shows the network info (ifconfig as an alternative)

Verify:
ping 192.168.140.2 #ping the gateway
ping baidu.com #ping an external site

    If the Mac cannot ping the gateway, open the folder /Library/Preferences/VMware Fusion

Check that the NAT gateway address in the networking file and in vmnet8/nat.conf is 192.168.140.2

After correcting it, restart the NAT service

sudo /Applications/VMware\ Fusion.app/Contents/Library/vmnet-cli --stop
sudo /Applications/VMware\ Fusion.app/Contents/Library/vmnet-cli --start

    Go back into the VM and ping the gateway and baidu.com again; if it still fails, restart the network service or reboot the VM and test again

VM networking (bridged mode)
# the VM uses bridged networking
vi /etc/sysconfig/network-scripts/ifcfg-ens33
BOOTPROTO=static
...
ONBOOT=yes
IPADDR=192.168.1.xxx
NETMASK=255.255.255.0
GATEWAY=192.168.1.1
DNS1=192.168.1.1
DNS2=8.8.8.8

# ip a shows the network info (ifconfig as an alternative)
Verify:
ping 192.168.1.xxx #ping the host
ping baidu.com #ping an external site
Changing the hostname (controller node as example)
hostnamectl set-hostname controller
hostname # show the hostname
hostname -I # show the host IP
vi /etc/hosts # add name resolution entries; mine are below
192.168.140.10 controller
192.168.140.20 compute
192.168.140.21 compute1
192.168.140.30 block
192.168.140.40 object
192.168.140.41 object1

Then ping controller and ping compute to confirm the names resolve
SSH into the VMs from the host
If ssh reports "WARNING: REMOTE HOST IDENTIFICATION HAS CHANGED!"
Fix: ssh-keygen -R <target host>
Checking which OpenStack versions are available
yum list | grep openstack*
Common troubleshooting commands
#nova logs
tail -f /var/log/nova/*.log |grep ERROR
#neutron logs
tail -f /var/log/neutron/*.log |grep ERROR
#cinder logs
tail -f /var/log/cinder/*.log |grep ERROR
#restart the nova services
systemctl restart openstack-nova*
#restart the neutron services
systemctl restart neutron*
#when in doubt, resync the time
systemctl restart chronyd

#when in doubt, reboot
reboot

#reset a user's password
openstack user set --password admin admin
What to do after adding a new node
Set a static IP and verify internet access
Set the hostname
Update the hosts file and sync it to all nodes; edit the controller's copy first
    1. Pull the hosts file from the controller on the new node:
        scp root@192.168.140.10:/etc/hosts /etc/hosts
    2. Or push the hosts file from the controller to all other nodes:
        read the other nodes' IPs from the controller's hosts file, then push the file out
        rsync -av -e ssh /etc/hosts root@<node-ip>:/etc/
Disable the firewall and SELinux
Configure time synchronization
Install the openstack-train packages and dependencies
Hosts-file sync script and passwordless SSH login
[root@controller ~]# cat rsync_hosts.sh
#!/bin/bash
echo "Starting hosts file sync"

for ip in `grep -o "[0-9]\{1,3\}\.[0-9]\{1,3\}\.[0-9]\{1,3\}\.[0-9]\{1,3\}" /etc/hosts`; do
        if [ "$ip" != '127.0.0.1' ] && [ "$ip" != '192.168.140.10' ]; then
                rsync -av -e ssh /etc/hosts root@${ip}:/etc/
                echo ${ip}" synced"
        fi
done


# passwordless login
ssh-keygen -t rsa   # press Enter through all prompts
# if the key files already exist there is no need to generate them again

# syntax: ssh-copy-id user@<server IP or alias>; you will be prompted for the password
ssh-copy-id root@compute
ssh-copy-id root@compute1
ssh-copy-id root@block
ssh-copy-id root@object
ssh-copy-id root@object1

After this, the controller can log in to these nodes without a password
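If you prefer not to run ssh-copy-id by hand for every node, the same hosts-file loop used in rsync_hosts.sh can push the key too. This is only a sketch under the assumptions of this guide (controller IP 192.168.140.10, root password login still enabled on all nodes):

#!/bin/bash
# copy_keys.sh (hypothetical helper): push the controller's SSH key to every node in /etc/hosts
for ip in $(grep -o "[0-9]\{1,3\}\.[0-9]\{1,3\}\.[0-9]\{1,3\}\.[0-9]\{1,3\}" /etc/hosts); do
    if [ "$ip" != '127.0.0.1' ] && [ "$ip" != '192.168.140.10' ]; then
        ssh-copy-id "root@${ip}"   # prompts once for each node's root password
    fi
done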
II. Pre-installation Environment Configuration

1. Initial configuration (all nodes)
# disable SELinux and the firewall
sed -i 's/SELINUX=enforcing/SELINUX=disabled/g' /etc/selinux/config
setenforce 0    # apply immediately; the sed change only takes effect after a reboot
systemctl stop firewalld
systemctl disable firewalld


# configure the time server

# all nodes
yum -y install chrony

# controller
vi /etc/chrony.conf
server ntp3.aliyun.com iburst    # keep only this server line
allow all
local stratum 10

# restart so the configuration takes effect
systemctl restart chronyd

# check that the service is working
chronyc sources -v

# compute
vi /etc/chrony.conf
server controller iburst

# all nodes
yum install centos-release-openstack-train python-openstackclient openstack-selinux -y
2. Database (controller node only)
yum install mariadb mariadb-server python2-PyMySQL

# vi /etc/my.cnf.d/openstack.cnf 
[mysqld]
bind-address = 192.168.140.10

default-storage-engine = innodb
innodb_file_per_table = on
max_connections = 4096
collation-server = utf8_general_ci
character-set-server = utf8

systemctl enable mariadb.service; systemctl start mariadb.service

# initialize the database
mysql_secure_installation
Enter   # current root password (none)
y       # set a root password, e.g. xxx, and confirm it
y       # remove anonymous users
n       # keep remote root login
y       # remove the test database
y       # reload privilege tables
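A quick sanity check (my addition, not in the original notes) that the root password works and that MariaDB listens on the management address configured above:

# log in with the new root password and list the databases
mysql -u root -p -e "SHOW DATABASES;"

# 3306 should be bound to the controller's management IP (bind-address above)
ss -tnlp | grep 3306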
3. Message queue (controller node only)
yum install rabbitmq-server -y
# enable at boot and start
systemctl enable rabbitmq-server.service;
systemctl start rabbitmq-server.service

rabbitmqctl add_user openstack openstack123

rabbitmqctl set_permissions openstack ".*" ".*" ".*"

# list the available plugins
rabbitmq-plugins list

# enable the management web UI
rabbitmq-plugins enable rabbitmq_management rabbitmq_management_agent

# the user needs the administrator tag to log in to the web UI
rabbitmqctl set_user_tags openstack administrator
# access
http://192.168.140.10:15672/
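A quick check (my addition) that the user, permissions, and listeners are in place:

# the openstack user should appear with the administrator tag
rabbitmqctl list_users

# permissions on the default vhost should be ".*" ".*" ".*"
rabbitmqctl list_permissions

# 5672 (AMQP) and 15672 (management UI) should be listening
ss -tnlp | grep -E '5672|15672'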
4. Memcached (controller node only)
yum install memcached python-memcached -y

vi /etc/sysconfig/memcached 
PORT="11211"
USER="memcached"
MAXCONN="1024"
CACHESIZE="1024"
OPTIONS="-l 127.0.0.1,::1,controller"

systemctl enable memcached.service;  systemctl start memcached.service
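A quick check (my addition) that memcached is up and listening on the addresses from the OPTIONS line above:

# 11211 should be listening on localhost and on the controller address
ss -tnlp | grep 11211

# memcached-tool ships with the memcached package and can dump server stats
memcached-tool controller:11211 stats | head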
III. OpenStack Component Installation and Verification

1. Keystone

Create the database and grant privileges
# enter the database
mysql -u root -p
CREATE DATABASE keystone;

GRANT ALL PRIVILEGES ON keystone.* TO 'keystone'@'%' IDENTIFIED BY 'keystone123';
Install the service and edit the config file
yum install openstack-keystone httpd mod_wsgi -y

vi /etc/keystone/keystone.conf
[database]
connection = mysql+pymysql://keystone:keystone123@controller/keystone

[token]
provider = fernet
Sync the database
su -s /bin/sh -c "keystone-manage db_sync" keystone
Enter the database again and check that the keystone tables were created
mysql -pxxx
use keystone
show tables;
Initialize the Fernet keys, bootstrap the identity service, create the symlink, then start and enable the services
keystone-manage fernet_setup --keystone-user keystone --keystone-group keystone
keystone-manage credential_setup --keystone-user keystone --keystone-group keystone


keystone-manage bootstrap --bootstrap-password admin --bootstrap-admin-url http://controller:5000/v3/ --bootstrap-internal-url http://controller:5000/v3/ --bootstrap-public-url http://controller:5000/v3/ --bootstrap-region-id RegionOne

vi /etc/httpd/conf/httpd.conf
ServerName controller

ln -s /usr/share/keystone/wsgi-keystone.conf /etc/httpd/conf.d/

systemctl enable httpd.service; systemctl start httpd.service
Set up client credential scripts
# prompts for the admin password: admin
openstack --os-auth-url http://controller:5000/v3 --os-project-domain-name Default --os-user-domain-name Default --os-project-name admin --os-username admin token issue

# prompts for the myuser password: myuser
openstack --os-auth-url http://controller:5000/v3 --os-project-domain-name Default --os-user-domain-name Default --os-project-name myproject --os-username myuser token issue

vi admin.sh 
#!/bin/bash
export OS_PROJECT_DOMAIN_NAME=Default
export OS_USER_DOMAIN_NAME=Default
export OS_PROJECT_NAME=admin
export OS_USERNAME=admin
export OS_PASSWORD=admin
export OS_AUTH_URL=http://controller:5000/v3
export OS_IDENTITY_API_VERSION=3
export OS_IMAGE_API_VERSION=2

cat myuser.sh 
#!/bin/bash
export OS_PROJECT_DOMAIN_NAME=Default
export OS_USER_DOMAIN_NAME=Default
export OS_PROJECT_NAME=myproject
export OS_USERNAME=myuser
export OS_PASSWORD=myuser
export OS_AUTH_URL=http://controller:5000/v3
export OS_IDENTITY_API_VERSION=3
export OS_IMAGE_API_VERSION=2
Create the service domain, project, user, and role
source admin.sh

openstack domain create --description "An Example Domain" example

openstack project create --domain default --description "Service Project" service

openstack project create --domain default --description "Demo Project" myproject

openstack user create --domain default --password-prompt myuser      # prompts for a password; use myuser throughout

openstack role create myrole

openstack role add --project myproject --user myuser myrole
Verification
# final check
source admin.sh
openstack token issue
2. Glance

Create the database and grant privileges; create the service credentials and endpoints
CREATE DATABASE glance;
GRANT ALL PRIVILEGES ON glance.* TO 'glance'@'%' IDENTIFIED BY 'glance123';


openstack user create --domain default --password-prompt glance    # prompts for a password; use glance throughout

openstack role add --project service --user glance admin

openstack service create --name glance --description "OpenStack Image" image

openstack endpoint create --region RegionOne image public http://controller:9292

openstack endpoint create --region RegionOne image internal http://controller:9292

openstack endpoint create --region RegionOne image admin http://controller:9292
Install the service and edit the config file
yum install openstack-glance -y

# edit the glance config file (OpenStack config files must not contain non-ASCII text, not even in comments)
vi /etc/glance/glance-api.conf
[database]
connection = mysql+pymysql://glance:glance123@controller/glance

[keystone_authtoken]
www_authenticate_uri = http://controller:5000
auth_url = http://controller:5000
memcached_servers = controller:11211
auth_type = password
project_domain_name = Default
user_domain_name = Default
project_name = service
username = glance
password = glance


[paste_deploy]
flavor = keystone

[glance_store]
stores = file,http
default_store = file
filesystem_store_datadir = /var/lib/glance/images/
Sync the database, then start the service and enable it at boot
# sync the database
su -s /bin/sh -c "glance-manage db_sync" glance
# start and enable the service
systemctl enable openstack-glance-api.service; systemctl start openstack-glance-api.service
Upload images and verify
# download a test image
wget http://download.cirros-cloud.net/0.4.0/cirros-0.4.0-x86_64-disk.img

# upload the images
glance image-create --name "cirros4" --file cirros-0.4.0-x86_64-disk.img --disk-format qcow2 --container-format bare --visibility public

glance image-create --name "CentOS-7-x86_64-Minimal-2009" --file CentOS-7-x86_64-Minimal-2009.iso --disk-format iso --container-format bare --visibility public

qemu-img convert -f iso -O raw CentOS-7-x86_64-Minimal-2009.iso CentOS-7-x86_64-Minimal-2009.raw

# list the uploaded images
openstack image list
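As an extra check (my addition), the uploaded image should show as active, and the backing file should exist in the filesystem store configured above:

# the image status should be "active"
openstack image show cirros4 -c status -c size

# the file store keeps one file per image UUID (path from filesystem_store_datadir)
ls -lh /var/lib/glance/images/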
3. Placement

Create the database and grant privileges
CREATE DATABASE placement;
GRANT ALL PRIVILEGES ON placement.* TO 'placement'@'%' IDENTIFIED BY 'placement123';
Create the user, role, service, and endpoints
openstack user create --domain default --password-prompt placement   # prompts for a password; use placement throughout

openstack role add --project service --user placement admin

openstack service create --name placement --description "Placement API" placement

openstack endpoint create --region RegionOne placement public http://controller:8778

openstack endpoint create --region RegionOne placement internal http://controller:8778

openstack endpoint create --region RegionOne placement admin http://controller:8778
Install the service and edit the config file
# install the service
yum install openstack-placement-api -y

# edit the placement config
vi /etc/placement/placement.conf
[placement_database]
connection = mysql+pymysql://placement:placement123@controller/placement

[api]
auth_strategy = keystone

[keystone_authtoken]
auth_url = http://controller:5000/v3
memcached_servers = controller:11211
auth_type = password
project_domain_name = Default
user_domain_name = Default
project_name = service
username = placement
password = placement
Sync the database and fix the httpd config
su -s /bin/sh -c "placement-manage db sync" placement

# work around a known packaging bug: append the following at the end of the file
vi /etc/httpd/conf.d/00-placement-api.conf

<Directory /usr/bin>
   <IfVersion >= 2.4>
      Require all granted
   </IfVersion>
   <IfVersion < 2.4>
      Order allow,deny
      Allow from all
   </IfVersion>
</Directory>
Verification
# restart httpd; if it fails, replace the file with the config in Appendix 3
systemctl restart httpd

# verify
placement-status upgrade check
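The placement API can also be poked directly; as far as I know the root URL returns a version document without authentication, so this (my addition) is a quick liveness check:

# should return a JSON "versions" document from the placement endpoint
curl http://controller:8778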
4. Nova

        4.1 On the controller node

Create the databases and grant privileges
CREATE DATABASE nova_api;
CREATE DATABASE nova;
CREATE DATABASE nova_cell0;

GRANT ALL PRIVILEGES ON nova_api.* TO 'nova'@'%' IDENTIFIED BY 'nova123';

GRANT ALL PRIVILEGES ON nova.* TO 'nova'@'%' IDENTIFIED BY 'nova123';

GRANT ALL PRIVILEGES ON nova_cell0.* TO 'nova'@'%' IDENTIFIED BY 'nova123';
Create the user, role, service, and endpoints
openstack user create --domain default --password-prompt nova   # prompts for a password; use nova

openstack role add --project service --user nova admin

openstack service create --name nova --description "OpenStack Compute" compute

openstack endpoint create --region RegionOne compute public http://controller:8774/v2.1

openstack endpoint create --region RegionOne compute internal http://controller:8774/v2.1

openstack endpoint create --region RegionOne compute admin http://controller:8774/v2.1
Install the services and edit the config file
yum install openstack-nova-api openstack-nova-conductor openstack-nova-novncproxy openstack-nova-scheduler -y

# openstack-nova-conductor   handles database access
# openstack-nova-novncproxy  provides VNC console access to instances
# openstack-nova-scheduler   handles scheduling


vi /etc/nova/nova.conf

[DEFAULT]
enabled_apis = osapi_compute,metadata
transport_url = rabbit://openstack:openstack123@controller:5672/
use_neutron = true
firewall_driver = nova.virt.firewall.NoopFirewallDriver
my_ip = 192.168.140.10


[keystone_authtoken]
www_authenticate_uri = http://controller:5000/
auth_url = http://controller:5000/
memcached_servers = controller:11211
auth_type = password
project_domain_name = Default
user_domain_name = Default
project_name = service
username = nova
password = nova

[placement]
region_name = RegionOne
project_domain_name = Default
project_name = service
auth_type = password
user_domain_name = Default
auth_url = http://controller:5000/v3
username = placement
password = placement

[vnc]
enabled = true
server_listen = $my_ip
server_proxyclient_address = $my_ip

[glance]
api_servers = http://controller:9292

[oslo_concurrency]
lock_path = /var/lib/nova/tmp

[api_database]
connection = mysql+pymysql://nova:nova123@controller/nova_api

[database]
connection = mysql+pymysql://nova:nova123@controller/nova

[api]
auth_strategy = keystone
Sync the databases, then start and enable the services
su -s /bin/sh -c "nova-manage api_db sync" nova
su -s /bin/sh -c "nova-manage cell_v2 map_cell0" nova
su -s /bin/sh -c "nova-manage cell_v2 create_cell --name=cell1 --verbose" nova
su -s /bin/sh -c "nova-manage db sync" nova


# verify that cell0 and cell1 are registered correctly
su -s /bin/sh -c "nova-manage cell_v2 list_cells" nova


# start and enable the services
systemctl enable openstack-nova-api.service openstack-nova-scheduler.service openstack-nova-conductor.service openstack-nova-novncproxy.service; systemctl start openstack-nova-api.service openstack-nova-scheduler.service openstack-nova-conductor.service openstack-nova-novncproxy.service
# check the logs for errors

tail -f /var/log/nova/*.log
        4.2 On the compute node

Install the service and edit the config file

yum install openstack-nova-compute -y

vi /etc/nova/nova.conf
[DEFAULT]
enabled_apis = osapi_compute,metadata
transport_url = rabbit://openstack:openstack123@controller
my_ip = 192.168.140.20
use_neutron = true
firewall_driver = nova.virt.firewall.NoopFirewallDriver
rpc_backend = rabbit

[keystone_authtoken]
www_authenticate_uri = http://controller:5000/
auth_url = http://controller:5000/
memcached_servers = controller:11211
auth_type = password
project_domain_name = Default
user_domain_name = Default
project_name = service
username = nova
password = nova

[placement]
region_name = RegionOne
project_domain_name = Default
project_name = service
auth_type = password
user_domain_name = Default
auth_url = http://controller:5000/v3
username = placement
password = placement

[vnc]
enabled = true
server_listen = 0.0.0.0
server_proxyclient_address = $my_ip
novncproxy_base_url = http://192.168.140.10:6080/vnc_auto.html

[oslo_messaging_rabbit]
rabbit_host = controller
rabbit_userid = openstack
rabbit_password = openstack123


[glance]
api_servers = http://controller:9292

[oslo_concurrency]
lock_path = /var/lib/nova/tmp

[api]
auth_strategy = keystone

CPU virtualization settings

# check whether the CPU supports hardware virtualization
egrep -c '(vmx|svm)' /proc/cpuinfo
# if it returns 0, use qemu instead of kvm
vi /etc/nova/nova.conf
[libvirt]
virt_type = qemu

Start and enable the services

systemctl enable libvirtd.service openstack-nova-compute.service
systemctl start libvirtd.service 
systemctl start openstack-nova-compute.service
        4.3 Verify on the controller
# verification
openstack compute service list --service nova-compute
# host discovery: register newly added compute nodes
su -s /bin/sh -c "nova-manage cell_v2 discover_hosts --verbose" nova

# configure periodic host discovery on the controller
vi /etc/nova/nova.conf
[scheduler]
discover_hosts_in_cells_interval = 300
5. Neutron

        5.1 On the controller node

Create the database and grant privileges
CREATE DATABASE neutron;

GRANT ALL PRIVILEGES ON neutron.* TO 'neutron'@'%' IDENTIFIED BY 'neutron123';
Create the service credentials
openstack user create --domain default --password-prompt neutron   # prompts for a password; use neutron throughout

openstack role add --project service --user neutron admin

openstack service create --name neutron --description "OpenStack Networking" network

openstack endpoint create --region RegionOne network public http://controller:9696

openstack endpoint create --region RegionOne network internal http://controller:9696

openstack endpoint create --region RegionOne network admin http://controller:9696
Install the services and edit the config files (there are quite a few, six in total)
# install the services
yum install openstack-neutron openstack-neutron-ml2 openstack-neutron-linuxbridge ebtables -y


# edit the neutron config
vi /etc/neutron/neutron.conf
[DEFAULT]
core_plugin = ml2
service_plugins = router    ## the official guide leaves this empty; adding router made the log errors go away
transport_url = rabbit://openstack:openstack123@controller
auth_strategy = keystone
notify_nova_on_port_status_changes = true
notify_nova_on_port_data_changes = true


[keystone_authtoken]
www_authenticate_uri = http://controller:5000
auth_url = http://controller:5000
memcached_servers = controller:11211
auth_type = password
project_domain_name = default
user_domain_name = default
project_name = service
username = neutron
password = neutron

[nova]
auth_url = http://controller:5000
auth_type = password
project_domain_name = default
user_domain_name = default
region_name = RegionOne
project_name = service
username = nova
password = nova

[oslo_concurrency]
lock_path = /var/lib/neutron/tmp

[database]
connection = mysql+pymysql://neutron:neutron123@controller/neutron


# vi /etc/neutron/plugins/ml2/ml2_conf.ini
[ml2]
type_drivers = flat,vlan
tenant_network_types =
mechanism_drivers = linuxbridge
extension_drivers = port_security

[ml2_type_flat]
flat_networks = extnet

[securitygroup]
enable_ipset = true

# vi /etc/neutron/plugins/ml2/linuxbridge_agent.ini
[linux_bridge]
physical_interface_mappings = extnet:ens33

[vxlan]
enable_vxlan = false

[securitygroup]
enable_security_group = true
firewall_driver = neutron.agent.linux.iptables_firewall.IptablesFirewallDriver


# kernel settings: add the following lines to /etc/sysctl.conf
vi /etc/sysctl.conf
net.bridge.bridge-nf-call-iptables = 1
net.bridge.bridge-nf-call-ip6tables = 1

# load the bridge netfilter kernel module
modprobe br_netfilter

# apply and verify
sysctl -p
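To make these settings survive a reboot (my addition; module autoloading via /etc/modules-load.d is standard systemd behavior on CentOS 7):

# load br_netfilter automatically at boot
echo br_netfilter > /etc/modules-load.d/br_netfilter.conf

# the sysctl settings in /etc/sysctl.conf are applied at boot; re-run sysctl -p after loading the module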

# vi /etc/neutron/dhcp_agent.ini
[DEFAULT]
interface_driver = linuxbridge
dhcp_driver = neutron.agent.linux.dhcp.Dnsmasq
enable_isolated_metadata = true



# vi /etc/neutron/metadata_agent.ini
[DEFAULT]
nova_metadata_host = controller
metadata_proxy_shared_secret = xier123



# vi /etc/nova/nova.conf
[neutron]
auth_url = http://controller:5000
auth_type = password
project_domain_name = default
user_domain_name = default
region_name = RegionOne
project_name = service
username = neutron
password = neutron
service_metadata_proxy = true
metadata_proxy_shared_secret = xier123
Create the plugin symlink and sync the database
# create the symlink
ln -s /etc/neutron/plugins/ml2/ml2_conf.ini /etc/neutron/plugin.ini


# sync the database
su -s /bin/sh -c "neutron-db-manage --config-file /etc/neutron/neutron.conf --config-file /etc/neutron/plugins/ml2/ml2_conf.ini upgrade head" neutron
Verify the database
mysql -pxxx
use neutron;
show tables;
Start the services and enable them at boot
systemctl restart openstack-nova-api.service

systemctl enable neutron-server.service neutron-linuxbridge-agent.service neutron-dhcp-agent.service neutron-metadata-agent.service
systemctl restart neutron-linuxbridge-agent.service neutron-dhcp-agent.service neutron-metadata-agent.service

# for layer-3 networking, also enable and start the L3 agent:
systemctl enable neutron-l3-agent.service
systemctl start neutron-l3-agent.service
        5.2 On the compute node

Install the service and edit the config files
yum install openstack-neutron-linuxbridge ebtables ipset -y

# config files
vi /etc/neutron/neutron.conf
[DEFAULT]
transport_url = rabbit://openstack:openstack123@controller
auth_strategy = keystone

[keystone_authtoken]
www_authenticate_uri = http://controller:5000
auth_url = http://controller:5000
memcached_servers = controller:11211
auth_type = password
project_domain_name = default
user_domain_name = default
project_name = service
username = neutron
password = neutron

[oslo_concurrency]
lock_path = /var/lib/neutron/tmp


vi /etc/neutron/plugins/ml2/linuxbridge_agent.ini
[linux_bridge]
physical_interface_mappings = extnet:ens33
[vxlan]
enable_vxlan = false
[securitygroup]
enable_security_group = true
firewall_driver = neutron.agent.linux.iptables_firewall.IptablesFirewallDriver



vi /etc/nova/nova.conf
[neutron]
auth_url = http://controller:5000
auth_type = password
project_domain_name = default
user_domain_name = default
region_name = RegionOne
project_name = service
username = neutron
password = neutron
Restart the nova-compute service
systemctl restart openstack-nova-compute.service
Start the neutron agent and enable it at boot
systemctl enable neutron-linuxbridge-agent.service; systemctl start neutron-linuxbridge-agent.service
        5.3 Verify on the controller node
openstack network agent list

+--------------------------------------+--------------------+------------+-------------------+-------+-------+---------------------------+
| ID                                   | Agent Type         | Host       | Availability Zone | Alive | State | Binary                    |
+--------------------------------------+--------------------+------------+-------------------+-------+-------+---------------------------+
| 2ae637f1-b37e-43d8-a687-66036e60e3b2 | DHCP agent         | controller | nova              | :-)   | UP    | neutron-dhcp-agent        |
| 4810f1ba-a23b-4417-affc-866c406f4036 | L3 agent           | controller | nova              | :-)   | UP    | neutron-l3-agent          |
| 67bdd588-332c-4115-9655-91eaaf97e22e | Linux bridge agent | compute    | None              | :-)   | UP    | neutron-linuxbridge-agent |
| 9316367d-92c5-4bf3-a132-e63b217a7e80 | Metadata agent     | controller | None              | :-)   | UP    | neutron-metadata-agent    |
| c9096845-78c8-44b8-8638-9bb4413c5637 | Linux bridge agent | compute1   | None              | :-)   | UP    | neutron-linuxbridge-agent |
| ec02f6be-47c5-4d73-b9ee-4f183321491d | Linux bridge agent | controller | None              | :-)   | UP    | neutron-linuxbridge-agent |
| fa16b8c6-7816-4fe9-8502-f79f0990ae0d | Linux bridge agent | compute2   | None              | :-)   | UP    | neutron-linuxbridge-agent |
+--------------------------------------+--------------------+------------+-------------------+-------+-------+---------------------------+
6. Create an instance

Create a flavor
openstack flavor create --id 0 --vcpus 1 --ram 64 --disk 1 m1.nano
Set up an SSH key pair
ssh-keygen -q -N ""  # press Enter through the prompts

openstack keypair create --public-key ~/.ssh/id_rsa.pub mykey

openstack keypair list
Create a layer-2 (provider) network and subnet
openstack network create  --share --external --provider-physical-network extnet --provider-network-type flat flat-extnet

openstack subnet create --network flat-extnet --allocation-pool start=192.168.140.50,end=192.168.140.100 --dns-nameserver 114.114.114.114 --gateway 192.168.140.2 --subnet-range 192.168.140.0/24 flat-subnet
List images
openstack image list
Create the instance
openstack server create --flavor m1.nano \
  --image cirros4 \
  --nic net-id=faa79183-a070-47d1-908c-fc71633d3e6d \
  --security-group default \
  --key-name mykey vm1
# --image: name of the uploaded image
# --nic net-id: the ID of the network just created
# --security-group: the default security group; modify it later so the instance can be reached from outside
# --key-name: the key pair created above

# one-line form from an actual run (the flavor, net-id, and key name will differ):
openstack server create --flavor tiny --image cirros4 --nic net-id=80309f63-6b64-47bc-96c0-32d2915974ab --security-group default --key-name ssh-key vm1
Get the instance console URL
openstack console url show vm1
Verification
Open the console URL above in a browser, log in, then ping the gateway and an external site.

# list the created instances
openstack server list

Security group settings
By default the security group lets instances reach the outside but blocks inbound traffic.
Edit the default security group in the dashboard so the instance can be pinged and reached over SSH from outside (a CLI alternative is sketched below).

Change the ingress rules of the default group to allow the required protocols.

The cirros image ships with username cirros and password gocubsgo:
ssh cirros@<instance IP>
password: gocubsgo
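The dashboard steps above can also be done from the CLI; a sketch of the equivalent rules, allowing ICMP and SSH into the current project's default group:

# allow ICMP (ping) into instances using the default security group
openstack security group rule create --proto icmp default

# allow inbound SSH
openstack security group rule create --proto tcp --dst-port 22 default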
7. Horizon

Install the service and edit the config file
yum install openstack-dashboard -y

vi /etc/openstack-dashboard/local_settings
The following is the complete config file; it can replace the original file as-is.
import os
from django.utils.translation import ugettext_lazy as _
from openstack_dashboard.settings import HORIZON_CONFIG
DEBUG = False
ALLOWED_HOSTS = ['*']
LOCAL_PATH = '/tmp'
SECRET_KEY='b26f4c752e1ac68e7608'
EMAIL_BACKEND = 'django.core.mail.backends.console.EmailBackend'
OPENSTACK_HOST = "controller"
SESSION_ENGINE = 'django.contrib.sessions.backends.cache'
OPENSTACK_KEYSTONE_URL = "http://%s:5000/v3" % OPENSTACK_HOST
OPENSTACK_KEYSTONE_MULTIDOMAIN_SUPPORT = True
OPENSTACK_API_VERSIONS = {
    "identity": 3,
    "image": 2,
    "volume": 3,
}
OPENSTACK_KEYSTONE_DEFAULT_DOMAIN = "Default"
OPENSTACK_KEYSTONE_DEFAULT_ROLE = "user"
CACHES = {
    'default': {
         'BACKEND': 'django.core.cache.backends.memcached.MemcachedCache',
         'LOCATION': 'controller:11211',
    }
}
OPENSTACK_NEUTRON_NETWORK = {
    'enable_auto_allocated_network': False,
    'enable_distributed_router': False,
    'enable_ha_router': False,
    'enable_ipv6': True,
    'enable_lb': False,
    'enable_quotas': False,
    'enable_rbac_policy': True,
    'enable_router': True,
    'enable_firewall': False,
    'enable_vpn': False,
    'enable_fip_topology_check': False,
    'default_dns_nameservers': [],
    'supported_provider_types': ['*'],
    'segmentation_id_range': {},
    'extra_provider_types': {},
    'supported_vnic_types': ['*'],
    'physical_networks': [],
}
TIME_ZONE = "Asia/Shanghai"
WEBROOT = '/dashboard'
LOGGING = {
    'version': 1,
    'disable_existing_loggers': False,
    'formatters': {
        'console': {
            'format': '%(levelname)s %(name)s %(message)s'
        },
        'operation': {
            'format': '%(message)s'
        },
    },
    'handlers': {
        'null': {
            'level': 'DEBUG',
            'class': 'logging.NullHandler',
        },
        'console': {
            'level': 'DEBUG' if DEBUG else 'INFO',
            'class': 'logging.StreamHandler',
            'formatter': 'console',
        },
        'operation': {
            'level': 'INFO',
            'class': 'logging.StreamHandler',
            'formatter': 'operation',
        },
    },
    'loggers': {
        'horizon': {
            'handlers': ['console'],
            'level': 'DEBUG',
            'propagate': False,
        },
        'horizon.operation_log': {
            'handlers': ['operation'],
            'level': 'INFO',
            'propagate': False,
        },
        'openstack_dashboard': {
            'handlers': ['console'],
            'level': 'DEBUG',
            'propagate': False,
        },
        'novaclient': {
            'handlers': ['console'],
            'level': 'DEBUG',
            'propagate': False,
        },
        'cinderclient': {
            'handlers': ['console'],
            'level': 'DEBUG',
            'propagate': False,
        },
        'keystoneauth': {
            'handlers': ['console'],
            'level': 'DEBUG',
            'propagate': False,
        },
        'keystoneclient': {
            'handlers': ['console'],
            'level': 'DEBUG',
            'propagate': False,
        },
        'glanceclient': {
            'handlers': ['console'],
            'level': 'DEBUG',
            'propagate': False,
        },
        'neutronclient': {
            'handlers': ['console'],
            'level': 'DEBUG',
            'propagate': False,
        },
        'swiftclient': {
            'handlers': ['console'],
            'level': 'DEBUG',
            'propagate': False,
        },
        'oslo_policy': {
            'handlers': ['console'],
            'level': 'DEBUG',
            'propagate': False,
        },
        'openstack_auth': {
            'handlers': ['console'],
            'level': 'DEBUG',
            'propagate': False,
        },
        'django': {
            'handlers': ['console'],
            'level': 'DEBUG',
            'propagate': False,
        },
        'django.db.backends': {
            'handlers': ['null'],
            'propagate': False,
        },
        'requests': {
            'handlers': ['null'],
            'propagate': False,
        },
        'urllib3': {
            'handlers': ['null'],
            'propagate': False,
        },
        'chardet.charsetprober': {
            'handlers': ['null'],
            'propagate': False,
        },
        'iso8601': {
            'handlers': ['null'],
            'propagate': False,
        },
        'scss': {
            'handlers': ['null'],
            'propagate': False,
        },
    },
}
SECURITY_GROUP_RULES = {
    'all_tcp': {
        'name': _('All TCP'),
        'ip_protocol': 'tcp',
        'from_port': '1',
        'to_port': '65535',
    },
    'all_udp': {
        'name': _('All UDP'),
        'ip_protocol': 'udp',
        'from_port': '1',
        'to_port': '65535',
    },
    'all_icmp': {
        'name': _('All ICMP'),
        'ip_protocol': 'icmp',
        'from_port': '-1',
        'to_port': '-1',
    },
    'ssh': {
        'name': 'SSH',
        'ip_protocol': 'tcp',
        'from_port': '22',
        'to_port': '22',
    },
    'smtp': {
        'name': 'SMTP',
        'ip_protocol': 'tcp',
        'from_port': '25',
        'to_port': '25',
    },
    'dns': {
        'name': 'DNS',
        'ip_protocol': 'tcp',
        'from_port': '53',
        'to_port': '53',
    },
    'http': {
        'name': 'HTTP',
        'ip_protocol': 'tcp',
        'from_port': '80',
        'to_port': '80',
    },
    'pop3': {
        'name': 'POP3',
        'ip_protocol': 'tcp',
        'from_port': '110',
        'to_port': '110',
    },
    'imap': {
        'name': 'IMAP',
        'ip_protocol': 'tcp',
        'from_port': '143',
        'to_port': '143',
    },
    'ldap': {
        'name': 'LDAP',
        'ip_protocol': 'tcp',
        'from_port': '389',
        'to_port': '389',
    },
    'https': {
        'name': 'HTTPS',
        'ip_protocol': 'tcp',
        'from_port': '443',
        'to_port': '443',
    },
    'smtps': {
        'name': 'SMTPS',
        'ip_protocol': 'tcp',
        'from_port': '465',
        'to_port': '465',
    },
    'imaps': {
        'name': 'IMAPS',
        'ip_protocol': 'tcp',
        'from_port': '993',
        'to_port': '993',
    },
    'pop3s': {
        'name': 'POP3S',
        'ip_protocol': 'tcp',
        'from_port': '995',
        'to_port': '995',
    },
    'ms_sql': {
        'name': 'MS SQL',
        'ip_protocol': 'tcp',
        'from_port': '1433',
        'to_port': '1433',
    },
    'mysql': {
        'name': 'MYSQL',
        'ip_protocol': 'tcp',
        'from_port': '3306',
        'to_port': '3306',
    },
    'rdp': {
        'name': 'RDP',
        'ip_protocol': 'tcp',
        'from_port': '3389',
        'to_port': '3389',
    },
}
Restart httpd and log in from a browser on the host
systemctl restart httpd.service memcached.service

http://192.168.140.10/dashboard

Domain: default
Username: admin
Password: admin
8. Configure additional compute nodes

9. Instance networking verification

        9.1 Three subnets and 3 compute nodes

        Three networks were created, with two instances on each: vm1 on the provider network (layer 2), and vm2 and vm3 on self-service networks (layer 3). vm2-1 and vm3-2 have floating IPs bound, so they can be reached from outside; the layout is shown in the network topology diagram. Each compute host ended up with two instances (placement decided by the Nova scheduler).

         Verification steps (a command sketch follows this list):

                a) From the controller node, ssh into vm3-2 (172.1.1.163, floating IP 192.168.140.70) and ping an external site (red path).

                b) From vm3-2, ssh into vm1-1 (192.168.140.92) and ping an external site (blue path).

                c) From vm1-1, ssh into vm2-1 (10.10.10.126, floating IP 192.168.140.78) and ping an external site (yellow path).
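A rough sketch of these checks as run from the controller (my addition; the addresses are the floating/provider IPs listed above and the cirros default credentials are assumed):

# a) controller -> vm3-2 via its floating IP, then ping out
ssh cirros@192.168.140.70
ping -c 3 baidu.com

# b) from inside vm3-2, hop to vm1-1 on the provider network, then ping out
ssh cirros@192.168.140.92
ping -c 3 baidu.com

# c) from inside vm1-1, hop to vm2-1 via its floating IP, then ping out
ssh cirros@192.168.140.78
ping -c 3 baidu.com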

        9.2 Bind floating IPs and add static routes

        Create the subnets, give each its own address range, and launch instances on them. Take the following network topology as an example:

1) Once the subnets and instances exist, open router r1 -> Add Interface and attach the gateway of every subnet that should be connected.

2) Allocate floating IPs (each subnet needs its own floating IPs):

        a. Network -> Floating IPs -> Allocate IP to Project

        Then Associate -> choose the IP address to bind to the selected instance or port.

        b. When done, run openstack server list on the controller to see each instance's fixed IP and the floating IP assigned to it.

        For example, instance vm7 (192.168.137.161) is bound to floating IP 192.168.140.90.

3) Test connectivity:

        For example, on the controller run ssh cirros@192.168.140.90 to get into vm7, then run ping baidu.com. Once the steps above are complete, flat-extnet can reach every subnet through router r1.

        Adding static routes (a CLI sketch follows this subsection)

1) Static routes make it possible for instances in selected, otherwise separate subnets to reach each other.

        Go to Network -> Routers -> Static Routes -> Add Route. Enter the destination CIDR and the next-hop address; here vm7 is used as the next hop.

        Destination CIDR: 192.168.137.0/24 (the subnet vm7 is in)

        Next hop: 192.168.137.161 (vm7's IP)

2) After adding the route, test it:

        On the controller, run ssh cirros@<floating IP of instance 222> to get into instance 222, then run ping 192.168.137.161 (vm7): instance 222 and vm7 can now reach each other.

        Pinging 192.168.137.108 (vm6) still fails, which matches the intended topology.
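The same floating-IP and static-route operations can be done from the CLI; a sketch using this example's names (r1, vm7, the 192.168.137.0/24 subnet), not the exact commands used in the post:

# allocate a floating IP on the external network and attach it to vm7
openstack floating ip create flat-extnet
openstack server add floating ip vm7 192.168.140.90

# add a static route on router r1: reach 192.168.137.0/24 via vm7 as the next hop
openstack router set r1 --route destination=192.168.137.0/24,gateway=192.168.137.161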

10. Instance migration (check "Block Migration", otherwise it fails and asks for shared storage)

11. Cinder

        11.1 On the controller node

Create the database and grant privileges
CREATE DATABASE cinder;
## password cinder123
GRANT ALL PRIVILEGES ON cinder.* TO 'cinder'@'%' IDENTIFIED BY 'cinder123';
Create the cinder service credentials and the block storage endpoints
openstack user create --domain default --password-prompt cinder
openstack role add --project service --user cinder admin

openstack service create --name cinderv2 --description "OpenStack Block Storage" volumev2
openstack service create --name cinderv3 --description "OpenStack Block Storage" volumev3


openstack endpoint create --region RegionOne volumev2 public http://controller:8776/v2/%\(project_id\)s
openstack endpoint create --region RegionOne volumev2 internal http://controller:8776/v2/%\(project_id\)s
openstack endpoint create --region RegionOne volumev2 admin http://controller:8776/v2/%\(project_id\)s

openstack endpoint create --region RegionOne volumev3 public http://controller:8776/v3/%\(project_id\)s
openstack endpoint create --region RegionOne volumev3 internal http://controller:8776/v3/%\(project_id\)s
openstack endpoint create --region RegionOne volumev3 admin http://controller:8776/v3/%\(project_id\)s
Install and configure the components
yum install openstack-cinder -y

# the file can be replaced with the following as-is
vi /etc/cinder/cinder.conf
[DEFAULT]
transport_url = rabbit://openstack:openstack123@controller
auth_strategy = keystone
my_ip = 192.168.140.10

[keystone_authtoken]
www_authenticate_uri = http://controller:5000
auth_url = http://controller:5000
memcached_servers = controller:11211
auth_type = password
project_domain_name = default
user_domain_name = default
project_name = service
username = cinder
password = cinder

[oslo_concurrency]
lock_path = /var/lib/cinder/tmp

[database]
connection = mysql+pymysql://cinder:cinder123@controller/cinder

### sync the database
su -s /bin/sh -c "cinder-manage db sync" cinder

vi /etc/nova/nova.conf
[cinder]
os_region_name = RegionOne

## restart nova-api and start/enable the cinder services
systemctl restart openstack-nova-api.service
systemctl enable openstack-cinder-api.service openstack-cinder-scheduler.service
systemctl start openstack-cinder-api.service openstack-cinder-scheduler.service
        11.2 On the storage node
## install the cinder packages
yum install openstack-cinder targetcli python-keystone -y
systemctl enable lvm2-lvmetad.service
systemctl start lvm2-lvmetad.service

pvcreate /dev/sdb
# if there is no sdb, shut down the VM and add another disk to it
vgcreate cinder-volumes /dev/sdb
Check whether the system disk uses LVM:
1) If it does, configure the LVM filter as follows:

On the cinder node:
vi /etc/lvm/lvm.conf

devices {
...
filter = ["a/sda/", "a/sdb/", "r/.*/"]
}

On the compute nodes:
vi /etc/lvm/lvm.conf
devices {
...
filter = ["a/sda/", "r/.*/"]
}

2) If it does not, only the storage node needs the filter:
devices {
...
filter = ["a/sdb/", "r/.*/"]
}

# the file can be replaced with the following as-is
vi /etc/cinder/cinder.conf
[DEFAULT]
transport_url = rabbit://openstack:openstack123@controller
auth_strategy = keystone
my_ip = 192.168.140.30
enabled_backends = lvm
glance_api_servers = http://controller:9292

[keystone_authtoken]
www_authenticate_uri = http://controller:5000
auth_url = http://controller:5000
memcached_servers = controller:11211
auth_type = password
project_domain_name = default
user_domain_name = default
project_name = service
username = cinder
password = cinder

[database]
connection = mysql+pymysql://cinder:cinder123@controller/cinder

[lvm]
volume_driver = cinder.volume.drivers.lvm.LVMVolumeDriver
volume_group = cinder-volumes
target_protocol = iscsi
target_helper = lioadm

[oslo_concurrency]
lock_path = /var/lib/cinder/tmp

############## start and enable the services
systemctl enable openstack-cinder-volume.service target.service
systemctl start openstack-cinder-volume.service target.service
        11.3 Verification
openstack volume service list
+------------------+------------+------+---------+-------+----------------------------+
| Binary           | Host       | Zone | Status  | State | Updated At                 |
+------------------+------------+------+---------+-------+----------------------------+
| cinder-scheduler | controller | nova | enabled | up    | 2021-11-06T13:14:05.000000 |
| cinder-volume    | cinder@lvm | nova | enabled | up    | 2021-11-06T13:14:07.000000 |
+------------------+------------+------+---------+-------+----------------------------+
        11.4 Using the cinder service

                Create a volume and attach it to an instance

# create a 1 GB volume named volume1
openstack volume create --size 1 volume1

# list volumes
openstack volume list

# attach the volume to instance vm1-1
openstack server add volume vm1-1 volume1

# check the volume status again; it is now attached to vm1-1
openstack volume list

# inside vm1-1, use fdisk to verify that the /dev/vdb block device exists:
$ sudo fdisk -l | grep vdb
Disk /dev/vdb: 1 GiB, 1073741824 bytes, 2097152 sectors
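To actually use the new disk inside the instance it still needs a filesystem and a mount point; a minimal sketch (assuming the device really shows up as /dev/vdb):

# inside the instance: format the attached volume and mount it
sudo mkfs.ext4 /dev/vdb
sudo mkdir -p /mnt/volume1
sudo mount /dev/vdb /mnt/volume1
df -h /mnt/volume1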
        11.5 Possible problems and solutions
The cinder-volume service shows as down when checked from the controller:

[root@controller ~]# openstack volume service list
+------------------+------------+------+---------+-------+----------------------------+
| Binary           | Host       | Zone | Status  | State | Updated At                 |
+------------------+------------+------+---------+-------+----------------------------+
| cinder-scheduler | controller | nova | enabled | up    | 2021-11-10T02:54:09.000000 |
| cinder-volume    | cinder@lvm | nova | enabled | down  | 2021-11-10T02:42:37.000000 |
+------------------+------------+------+---------+-------+----------------------------+
And the cinder log on the node running cinder-volume shows:
 ERROR cinder.service [-] Manager for service cinder-volume cinder@lvm is reporting problems, not sending heartbeat. Service will appear "down".

This is usually caused by clock drift between nodes: run chronyc sources -v to check time synchronization (and also confirm that the /etc/hosts entries are correct on every node).
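A quick way (my addition) to compare the clock and chrony state on every node from the controller; the node names are the ones from this guide's /etc/hosts and passwordless SSH is assumed:

for host in compute compute1 block; do
    echo "== $host =="
    ssh root@$host "date; chronyc tracking | grep -E 'Stratum|System time'"
done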
12. Swift (to be covered in the next post together with Ceph storage)

        12.1 On the controller node

        12.2 On the object storage nodes

        12.3 Create the initial rings on the controller node

        12.4 Verify the installation

        12.5 Using Swift

IV. Issues and Solutions

Instances get no IP

        Symptom: after an instance is created, the web UI shows that it obtained an IP, but opening the instance console and checking the network state shows no IP address on the interface.

        Solution: see the post "openstack虚拟机获取不到ip" (东篱昏后, 博客园).

        Simplified fix: update the subnet configuration, then restart the instance.

# add the IP manually inside the instance
sudo ip addr add <the allocated IP> dev eth0
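A couple of checks (my addition) before resorting to the manual fix: the DHCP agent has to be alive and the instance's port must actually carry the expected fixed IP.

# on the controller: the DHCP agent should show :-) / UP
openstack network agent list --agent-type dhcp

# the instance's port should list the fixed IP shown in the dashboard
openstack port list --server vm1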
Instance creation fails: "No valid host was found"

        Instances are being created on two compute nodes, compute1 and compute (2 cores and 4 GB RAM each).

       Upper limit: 48 instances

        compute node:  (RAM) 2.9 GB / 3.7 GB used   (disk) 32 GB / 128 GB used

        compute1 node: (RAM) 1.5 GB / 3.7 GB used   (disk) 16 GB / 16 GB used

        In theory compute1 has enough free memory to create a new instance, yet creation still fails with this error.

        A possible explanation: https://blog.csdn.net/weixin_44768879/article/details/111742594 (although even after raising compute1's memory to 6 GB, new instances still could not be created).
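Some diagnostic commands (my addition) that usually show which resource or filter rejected the request; note that in the figures above compute1's disk is already full (16 GB / 16 GB), which on its own can make the scheduler reject that host even though RAM is free:

# aggregate resource usage as nova sees it
openstack hypervisor stats show
openstack hypervisor list --long

# the scheduler and conductor logs name the filter that eliminated each host
grep -i "filter" /var/log/nova/nova-scheduler.log | tail -n 20
grep -i "no valid host" /var/log/nova/nova-conductor.log | tail -n 5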

V. Appendix

1. nova config file reference
############################# controller node #########################
vi /etc/nova/nova.conf
[DEFAULT]
enabled_apis = osapi_compute,metadata
transport_url = rabbit://openstack:openstack123@controller:5672/
use_neutron = true
firewall_driver = nova.virt.firewall.NoopFirewallDriver
my_ip = 192.168.140.10
rpc_backend = rabbit

[api]
auth_strategy = keystone

[api_database]
connection = mysql+pymysql://nova:nova123@controller/nova_api

[database]
connection = mysql+pymysql://nova:nova123@controller/nova

[glance]
api_servers = http://controller:9292

[keystone_authtoken]
www_authenticate_uri = http://controller:5000/
auth_url = http://controller:5000/
memcached_servers = controller:11211
auth_type = password
project_domain_name = Default
user_domain_name = Default
project_name = service
username = nova
password = nova

[neutron]
auth_url = http://controller:5000
auth_type = password
project_domain_name = default
user_domain_name = default
region_name = RegionOne
project_name = service
username = neutron
password = neutron
service_metadata_proxy = true
metadata_proxy_shared_secret = xier123

[oslo_concurrency]
lock_path = /var/lib/nova/tmp

[oslo_messaging_rabbit]
rabbit_host = controller
rabbit_userid = openstack
rabbit_password = openstack123

[placement]
region_name = RegionOne
project_domain_name = Default
project_name = service
auth_type = password
user_domain_name = Default
auth_url = http://controller:5000/v3
username = placement
password = placement

[scheduler]
discover_hosts_in_cells_interval = 300

[vnc]
enabled = true
server_listen = $my_ip
server_proxyclient_address = $my_ip


############################# compute node #########################
vi /etc/nova/nova.conf
[DEFAULT]
enabled_apis = osapi_compute,metadata
transport_url = rabbit://openstack:openstack123@controller
my_ip = 192.168.140.22
use_neutron = true
firewall_driver = nova.virt.firewall.NoopFirewallDriver
rpc_backend = rabbit

[api]
auth_strategy = keystone

[glance]
api_servers = http://controller:9292

[keystone_authtoken]
www_authenticate_uri = http://controller:5000/
auth_url = http://controller:5000/
memcached_servers = controller:11211
auth_type = password
project_domain_name = Default
user_domain_name = Default
project_name = service
username = nova
password = nova

[libvirt]
virt_type = qemu

[neutron]
auth_url = http://controller:5000
auth_type = password
project_domain_name = default
user_domain_name = default
region_name = RegionOne
project_name = service
username = neutron
password = neutron

[oslo_concurrency]
lock_path = /var/lib/nova/tmp

[oslo_messaging_rabbit]
rabbit_host = controller
rabbit_userid = openstack
rabbit_password = openstack123

[placement]
region_name = RegionOne
project_domain_name = Default
project_name = service
auth_type = password
user_domain_name = Default
auth_url = http://controller:5000/v3
username = placement
password = placement

[vnc]
enabled = true
server_listen = 0.0.0.0
server_proxyclient_address = $my_ip
novncproxy_base_url = http://192.168.140.10:6080/vnc_auto.html
2. neutron config file reference
############################# controller node #########################
vi /etc/neutron/neutron.conf
[DEFAULT]
core_plugin = ml2
service_plugins = router
transport_url = rabbit://openstack:openstack123@controller
auth_strategy = keystone
notify_nova_on_port_status_changes = true
notify_nova_on_port_data_changes = true
allow_overlapping_ips = true

[database]
connection = mysql+pymysql://neutron:neutron123@controller/neutron

[nova]
auth_url = http://controller:5000
auth_type = password
project_domain_name = default
user_domain_name = default
region_name = RegionOne
project_name = service
username = nova
password = nova

[keystone_authtoken]
www_authenticate_uri = http://controller:5000
auth_url = http://controller:5000
memcached_servers = controller:11211
auth_type = password
project_domain_name = default
user_domain_name = default
project_name = service
username = neutron
password = neutron

[oslo_concurrency]
lock_path = /var/lib/neutron/tmp

vi /etc/neutron/plugins/ml2/ml2_conf.ini
[ml2]
type_drivers = flat,vlan,vxlan
tenant_network_types = vxlan
mechanism_drivers = linuxbridge,l2population
extension_drivers = port_security

[ml2_type_flat]
flat_networks = extnet

[securitygroup]
enable_ipset = true

[ml2_type_vxlan]
vni_ranges = 1:1000

vi /etc/neutron/plugins/ml2/linuxbridge_agent.ini
[linux_bridge]
physical_interface_mappings = extnet:ens33

[vxlan]
enable_vxlan = true
local_ip = 192.168.140.10
l2_population = true

[securitygroup]
enable_security_group = true
firewall_driver = neutron.agent.linux.iptables_firewall.IptablesFirewallDriver

vi /etc/neutron/l3_agent.ini
[DEFAULT]
interface_driver = linuxbridge

vi /etc/neutron/dhcp_agent.ini
[DEFAULT]
interface_driver = linuxbridge
dhcp_driver = neutron.agent.linux.dhcp.Dnsmasq
enable_isolated_metadata = true
############################# compute node #########################
vi /etc/neutron/plugins/ml2/linuxbridge_agent.ini
[linux_bridge]
physical_interface_mappings = extnet:ens33

[vxlan]
enable_vxlan = true
local_ip = <this node's IP>
l2_population = true

[securitygroup]
enable_security_group = true
firewall_driver = neutron.agent.linux.iptables_firewall.IptablesFirewallDriver
3. httpd config file reference
vi /etc/httpd/conf.d/00-placement-api.conf
Listen 8778

<VirtualHost *:8778>
  WSGIProcessGroup placement-api
  WSGIApplicationGroup %{GLOBAL}
  WSGIPassAuthorization On
  WSGIDaemonProcess placement-api processes=3 threads=1 user=placement group=placement
  WSGIScriptAlias / /usr/bin/placement-api
  <IfVersion >= 2.4>
    ErrorLogFormat "%M"
  </IfVersion>
  ErrorLog /var/log/placement/placement-api.log
  #SSLEngine On
  #SSLCertificateFile ...
  #SSLCertificateKeyFile ...
</VirtualHost>

Alias /placement-api /usr/bin/placement-api
<Location /placement-api>
  SetHandler wsgi-script
  Options +ExecCGI
  WSGIProcessGroup placement-api
  WSGIApplicationGroup %{GLOBAL}
  WSGIPassAuthorization On
</Location>

<Directory /usr/bin>
  <IfVersion >= 2.4>
    Require all granted
  </IfVersion>
  <IfVersion < 2.4>
    Order allow,deny
    Allow from all
  </IfVersion>
</Directory>